\section{Introduction} Interaction with the environment leads to decoherence \cite{Zurek_RMP03} of quantum states of small systems, such as qubits. While it is an effect which should be avoided in the context of quantum computation (\emph{e.g.} by performing gate operations on timescales much shorter than the decoherence time, and by using error correction \cite{Nielsen_Chuang}), in other contexts the phenomenon is interesting in itself \cite{Zurek_RMP03}. Recently there has been a lot of attention devoted to using measurements of a qubit's coherence dynamics to acquire information on the environmental noise affecting the qubit. While the measurements of a qubit's energy relaxation give information on high frequency ($\omega \! > \! k_{\text{B}}T$) quantum noise \cite{Schoelkopf_spectrometer,Astafiev_PRL04}, the qubit's dephasing is sensitive to low-frequency environmental fluctuations, which often can be assumed to be classical. The measurement of dephasing during the free evolution of the qubit typically provides only information about the total noise power at low frequencies, since most often the average over many repetitions of an experiment leads to an apparent decay of the signal due to inhomogeneous broadening, i.e.~the observed dephasing is dominated by the slowest environmental fluctuations \cite{Makhlin_CP04,Ithier_PRB05}. While the possibility of characterizing the environmental noise by analysis of the qubit's coherence decay was the subject of many investigations focusing on specific kinds of environments \cite{Falci_PRA04,Faoro_PRL04,Benedetti_PRA14,Kotler_Nature11}, recently noise spectroscopy methods based on dynamical decoupling (DD) \cite{Viola_PRA98,Uhrig_PRL07,Khodjasteh_PRA07,Cywinski_PRB08,Biercuk_JPB11,Yang_FP11,Khodjasteh_NC13} of the qubit from its environment have become widely used in experiments \cite{Biercuk_Nature09,Bylander_NP11,Alvarez_PRL11,Yuge_PRL11,Medford_PRL12,Dial_PRL13,Staudacher_Science13,Muhonen_arXiv14,Romach_arXiv14}. 
The application of multiple short pulses rotating the qubit's state removes the influence of the quasi-static fluctuations, and in fact for a large number of appropriately spaced pulses it can filter out the noise at all frequencies with exception of a set of narrow-band ranges \cite{Kotler_Nature11,Alvarez_PRL11,Bylander_NP11}, the contributions from which determine the time-dependence of coherence decay. It is thus possible to turn the qubit into a true \textit{spectrometer of noise} \cite{Biercuk_Nature09,Bylander_NP11,Alvarez_PRL11,Yuge_PRL11,Medford_PRL12,Dial_PRL13,Staudacher_Science13,Muhonen_arXiv14,Romach_arXiv14}. Various schemes of noise spectroscopy with qubits (including the ones not using DD \cite{Yan_PRB12,Fink_PRL13}) have been mainly applied to the case of pure dephasing due to linear coupling to the noise $\xi(t)$, \textit{i.e.}~for the qubit-environment interaction of the $v_{1}\hat{\sigma}_{z}\xi(t)$ form. In such a case only the off-diagonal element of the qubit's density matrix, $\rho_{+-}(t)$, decays. Under the common assumption of Gaussian statistics of $\xi(t)$, the noise is fully characterized by its spectral density $S(\omega)$, which is the Fourier transform of its two-point correlation function $C(t) \! = \! \mean{\xi(t)\xi(0)} - \mean{\xi}^2$, where $\mean{...}$ denotes the averaging over the realizations of the stochastic process. The noise spectroscopy methods which are the most relevant here are based on the fact that $\rho_{+-}(t)$ is given in this case by an expression containing an integral of $S(\omega)$ multiplied by a sequence-specific \emph{filter function} \cite{deSousa_TAP09,Cywinski_PRB08,Biercuk_Nature09,Biercuk_JPB11}. For large number $n$ of pulses in an appropriately chosen DD sequence (i.e.~a Carr-Purcell \cite{Carr_Purcell} sequence), we have $\rho_{+-}(t) \! \propto \! \exp[-\chi^{l}_{2}(t)]$ with $\chi^{l}_{2}(t) \! \sim \! t S(n\pi/t)$. 
Thus, by changing $n$ and $\te$, $S(\omega)$ can be reconstructed \cite{Alvarez_PRL11,Yuge_PRL11,Staudacher_Science13,Muhonen_arXiv14,Romach_arXiv14} from measurements of $\rho_{+-}(t)$. However, one often encounters the case in which the coupling to the noise is quadratic: \beq \hat{H} = \frac{1}{2}[\Omega + \vq\xi^{2}(t')]\hat{\sigma}_{z} \,\, , \label{eq:H} \eeq where $\Omega$ is the controlled qubit splitting, and $\vq$ is the coupling constant. Obtaining such a form of qubit-noise coupling usually requires tuning some parameters of the qubit to specific values, and the point in the parameter space at which Eq.~(\ref{eq:H}) holds exactly (or, as will be discussed later, approximately) is usually referred to as the Optimal Working Point (OWP) of the qubit. At such an OWP the influence of noise is typically suppressed compared to the linear coupling case, and the qubit dephasing time is thus enhanced. If we were able to perform noise spectroscopy at the OWP, we would gain information about noise in a wider range of frequencies, since with a longer dephasing time it is possible to acquire good-quality data on a longer timescale, which should give access to more detailed information about the noise spectrum at lower frequencies. We cannot, however, use the simple formulas connecting $\rho_{+-}(t)$ to $S(\omega)$ given before: even though $\xi(t)$ is assumed to have Gaussian statistics, its square $\xi^{2}(t)$ is \textit{not} Gaussian-distributed. The derivation of useful formulas for coherence decay under DD at the OWP is the goal of this paper. 
Since their application for magnetic-field tuning of transitions used in atomic clocks \cite{Bollinger_PRL85}, the OWPs have been identified for almost all kinds of qubits, including superconducting charge \cite{Vion_Science02,Ithier_PRB05} and flux \cite{Yoshihara_PRL06,Kakuyanagi_PRL07} qubits, semiconductor quantum dot (QD) charge qubits \cite{Petersson_PRL10}, mixed electronic-nuclear spin qubits based on electrons bound to bismuth donors in silicon \cite{Wolfowicz_NN13,Balian_PRB14}, and a recently realized ``resonant exchange qubit'' based on a triple QD \cite{Medford_NN13,Medford_PRL13}. The existence of the OWP with respect to charge noise was also predicted theoretically for singlet-triplet qubits in double QDs \cite{Stopa_NL08,Li_PRB10,Ramon_PRB10}. The most relevant noises are the $1/f^{\beta}$ charge and flux noise ubiquitous in condensed matter \cite{Paladino_RMP14} and random telegraph noise (RTN) coming from a two-level fluctuator strongly coupled to the qubit \cite{Galperin_PRL06,Bergli_PRB06,Bergli_PRB07,Ramon_PRB12}. Furthermore, in the recently investigated case of an OWP of a spin qubit in Bi-doped silicon \cite{Wolfowicz_NN13,Balian_PRB14}, the Ornstein-Uhlenbeck (OU) noise will become relevant for isotopically purified samples, in which the electron coherence will be limited by interaction with other electron spins, not the nuclear spins. In this case the influence of the dipolarly coupled electron spin bath can be mapped on interaction with OU noise \cite{Dobrovitski_PRL09,deLange_Science10}. The performance of DD protocols at an OWP (or close to it) was most extensively discussed for superconducting qubits (and double quantum dot qubits sensitive to charge noise), for which either RTN \cite{Bergli_PRB07,Ramon_PRB12}, or a $1/f$ type noise (due to many RTN sources, often - but not always - well approximated by a Gaussian process) was considered \cite{Falci_PRA04,Faoro_PRL04}. 
However, since these papers focused either on non-Gaussian noise, or on the presence of signatures of non-Gaussianity of $1/f$ noise and issues specific to the bath consisting of two-level fluctuators, there is a need for development of an easy-to-use method of noise spectroscopy for the case of Gaussian noise at an OWP. Here I investigate the possibility of using DD sequences for performing spectroscopy of Gaussian noise $\xi(t')$ at the OWP. In order to deal with non-Gaussian statistics of $\xi^{2}(t')$ I use linked-cluster (cumulant) expansion to average the qubit's phase over realizations of noise, building on the seminal papers \cite{Makhlin_PRL04,Makhlin_CP04,Falci_PRL05} in which free evolution dephasing at an OWP was considered. Potentially useful solutions allowing for noise spectroscopy appear in two cases. The first case is that of noise with non-singular spectrum at low frequencies, i.e.~of noise with finite autocorrelation time. I will argue that at large $n$ the dephasing at relatively short timescales can then be described in a way similar to the linear-coupling case, only with the spectral density of the $\xi^{2}(t')$ process, $S_{2}(\omega)$, appearing in the well-known formulas \cite{deSousa_TAP09,Cywinski_PRB08}. Thus, the experimentally established methods of noise spectroscopy \cite{Bylander_NP11,Medford_PRL12} can be used to reconstruct $S_{2}(\omega)$. This result can be explained by the effective ``Gaussianization'' of the noise experienced by the qubit subjected to many $\pi$ pulses. Formally, the second term in the cumulant expansion of $\rho_{+-}(t)$ becomes a good approximation to an exact result (up to a time comparable to $T_{2}$) for large enough $n$. The second case is that of $1/f^{\beta}$ noise with $\beta \! > \! 1$ at low frequencies. The noise autocorrelation time $t_{c}$ is then ill-defined, or its value is simply irrelevant when the qubit evolution time $\te$ is much shorter than $t_{c}$. Furthermore, for the $\beta \! > \! 
1$ case which we consider, the noise power is concentrated at low frequencies. The solution is obtained by separate averaging over fast ($\omega \! > \! 1/t$) and slow ($\omega_{0} \! < \! \omega \! < \! 1/\te$, where $\omega_{0}$ is the low-frequency cutoff) fluctuations. Such an approximate procedure allows for a resummation of the cumulant expansion and a derivation of closed formulas for $\rho_{+-}(t)$. The coherence at time $t$ under a sequence of many pulses is shown to be determined by $S(n\pi/t)$ (similarly to the linear coupling case), thus allowing for reconstruction of $S(\omega)$. The characteristic feature of this solution is the appearance of a power-law tail of the coherence signal. Interestingly, for noise with no intrinsic low-frequency cutoff $\omega_{0}$, the coherence time $T_{2}$ scales with both $n$ and the total measurement time $T_{M}$, which determines the effective cutoff $\omega_{0} \! \sim \! 1/T_{M}$. It is important to note here that the OWP Hamiltonian given in Eq.~(\ref{eq:H}) often appears as an effective approximate Hamiltonian when a qubit with a large longitudinal splitting $\Omega$ is exposed to transverse noise $v_{t}\hat{\sigma}_{x}\xi(t)$. While the domain of approximate equivalence of the calculation using the transverse noise and the one employing the effective Hamiltonian can be easily established in the case of free evolution, it is less clear on what timescale one can use the latter approach when the qubit is subjected to a DD sequence. This is mostly related to the accumulation of errors caused by the interplay of many pulses (along specific axes) and the random tilts of the qubit quantization axis caused by the transverse noise \cite{Mkhitaryan_PRB14}. This effect is not accounted for in the effective Hamiltonian treatment. 
The goal of this paper is to derive analytical (and potentially useful for noise spectroscopy purposes) formulas for the time dependence of coherence when the qubit-noise coupling is given by Eq.~(\ref{eq:H}). While the example results given in the paper are shown to agree with simulations of decoherence due to transverse noise in the presence of a large enough $\Omega$ splitting, the question of the precise extent of the domain (defined by $n$ and the timescale of interest) of quantitative applicability of the presented theory to specific cases of OWPs occurring due to the presence of transverse noise remains to be further investigated. However, let me note that in experimental implementations of DD-based noise spectroscopy the issue of accumulation of errors of realistic pulses is always present, and in practical cases reliable results are confined to the regime of not-very-large numbers of pulses. When working at the OWP, it is crucial to have a qualitative picture of the expected coherence decay under DD for a relatively small number of pulses, and the calculations based on Eq.~(\ref{eq:H}) given below provide precisely such a picture. The paper is organized in the following way. In Section \ref{sec:OWP} the basic general facts about the Optimal Working Point of the qubit and the types of noise most relevant for qubits are reviewed. Section \ref{sec:linear} contains an outline of the derivation of noise spectroscopy formulas for the case of linear coupling to noise. This is given here in order to make the paper self-contained and to establish the reference point for the OWP noise spectroscopy methods discussed later. In Section \ref{sec:LCE} I present a general form of the solution for coherence dynamics at an OWP (assuming an effective Hamiltonian description) using the cumulant expansion, and the short-time behavior of this solution is discussed in Section \ref{sec:short}. 
Approximate analytical solutions for $\rho_{+-}(t)$ at longer times are then given in Sections \ref{sec:Gaussian} and \ref{sec:1f} for the cases of noise with a finite autocorrelation time and of $1/f^{\beta}$ noise, respectively. The analytical results given in these sections are compared with numerical simulations employing Ornstein-Uhlenbeck noise with autocorrelation time $t_{c}$. By varying the coupling to the noise (which determines the timescale $T_{2}$ at which coherence is non-negligible) we can illustrate both the regime of $T_{2} \! \gg \! t_{c}$ (for weak coupling) and the case of $T_{2} \! \ll \! t_{c}$ (for strong coupling), which effectively corresponds to $1/\omega^{2}$ noise. The relation between the results obtained with the effective Hamiltonian and a model in which the OWP arises due to transverse coupling to noise in the presence of a large longitudinal splitting is discussed in Section \ref{sec:transverse}. \section{Optimal working point and the noise affecting the qubit} \label{sec:OWP} The quadratic qubit-noise coupling given in Eq.~(\ref{eq:H}) can appear in two ways. In the first one (the ``intrinsic'' OWP) we have the Hamiltonian \beq \hat{H} = \frac{1}{2}\left[ \Omega' + \delta \Omega(\xi_{0}+\xi(t)) \right ] \hat{\sigma}_{z} \,\, , \eeq where $\delta \Omega$ is the energy offset which depends on a controlled parameter $\xi_{0}$ and a stochastic variable $\xi(t)$, and we assume that $\partial \delta \Omega(x)/\partial x|_{x=\xi_{0}} \! = \! 0$. In the lowest nonvanishing order with respect to $\xi(t)$ we obtain the Hamiltonian from Eq.~(\ref{eq:H}) with $\Omega\! = \! \Omega' + \delta\Omega(\xi_{0})$ and $v_{2} \! = \! \frac{1}{2}\partial^{2}\delta \Omega/\partial x^{2}|_{x=\xi_{0}}$. 
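As a minimal numerical illustration of this expansion (an editorial sketch, not part of the derivation; the cosine form of $\delta\Omega(x)$ is an arbitrary stand-in for a generic dependence with an extremum), one can check that at the working point the linear term vanishes and the quadratic approximation is accurate to fourth order in the noise amplitude:

```python
import numpy as np

# Hypothetical sweet-spot dependence delta_Omega(x) with an extremum at x = 0
dOmega = np.cos
xi0 = 0.0                         # working point: d(dOmega)/dx at xi0 is -sin(0) = 0
v2 = 0.5 * (-np.cos(xi0))         # v_2 = (1/2) * second derivative at xi0

xi = 0.01                         # small noise excursion around the extremum
exact = dOmega(xi0 + xi)
quadratic = dOmega(xi0) + v2 * xi**2   # lowest nonvanishing order in xi
assert abs(exact - quadratic) < xi**4  # residual error is O(xi^4)
```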
Such an \emph{intrinsic} OWP appears for example in superconducting qubits of the types which are mostly affected by flux noise \cite{Shnirman_PS02,Yoshihara_PRL06,Kakuyanagi_PRL07} - the nonlinear dependence of the Josephson energy on the magnetic flux allows for the presence of an extremum of $\delta \Omega(\xi)$ dependence. The second type of the OWP (the \emph{extrinsic} one) appears when the Hamiltonian is given by \beq \hat{H} = \frac{\Omega}{2}\hat{\sigma}_{z} + v_{t}\frac{\xi(t')}{2}\hat{\sigma}_{x} \,\, . \label{eq:Ht} \eeq This is the case in which the qubit, having energy eigenstates quantized along the $z$ axis, is exposed to transverse noise. The noise is assumed above to be along the $x$ direction only, since this is the often encountered case \cite{Vion_Science02,Ithier_PRB05,Shnirman_PS02,Bergli_PRB06,Bergli_PRB07,Medford_PRL13}. When the characteristic energy scale of the noise, $v_{t}\sigma$, with $\sigma^{2} \! \equiv \! \mean{\xi^{2}}$, is much smaller than $\Omega$, in the lowest orders of expansion in $v_{t}\sigma/\Omega$ the noise is causing the tilting of the qubit's quantization axis by angle $\approx v_{t}\sigma/\Omega$ and a fluctuation of the qubit's precession frequency around this axis of magnitude $\approx v^{2}_{t}\sigma^2/2\Omega$. This geometric interpretation of the qubit's evolution should make it clear that in the process of dephasing of a freely evolving qubit (i.e.~the decoherence of a superposition of eigenstates of $\hat{\sigma}_{z}$ without any additional pulses rotating the qubit's state) the effect of noise on precession frequency is dominant, while the axis tilting contributes only a small (on the order of $v^{2}_{t}\sigma^{2}/\Omega^2$) correction to the coherence signal magnitude. The effective pure dephasing Hamiltonian is again of the form given in Eq.~(\ref{eq:H}), only with $v_{2} \! = \! v^{2}_{t}/2\Omega$. 
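The second-order frequency shift quoted above can be illustrated by direct diagonalization of Eq.~(\ref{eq:Ht}) with the noise frozen at a single value (a minimal numpy sketch; the numerical values are arbitrary illustrative choices satisfying $v_{t}\xi \ll \Omega$):

```python
import numpy as np

Omega, v_t, xi = 1.0, 0.1, 0.5   # illustrative values with v_t * xi << Omega
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

# Eq. (Ht) with the noise frozen at the value xi
H = 0.5 * Omega * sz + 0.5 * v_t * xi * sx
E = np.linalg.eigvalsh(H)
splitting = E[1] - E[0]                      # exact: sqrt(Omega^2 + v_t^2 xi^2)

# effective pure-dephasing splitting from Eq. (H) with v_2 = v_t^2 / (2 Omega)
approx = Omega + (v_t**2 / (2 * Omega)) * xi**2
assert np.isclose(splitting, np.sqrt(Omega**2 + (v_t * xi)**2))
assert abs(splitting - approx) < (v_t * xi)**4 / Omega**3   # error is O((v_t xi / Omega)^4)
```

The second assertion makes the geometric statement quantitative: the effective Hamiltonian reproduces the exact splitting up to corrections of order $(v_{t}\sigma/\Omega)^{4}$.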
It is important to note that this approach should be used with caution in the case of decoherence of the qubit exposed to multiple pulses (rotations about $x$ or $y$ axes). Firstly, in the presence of the uniaxial transverse noise a difference in performance of DD pulses ($\pi$ rotations) along the $x$ and $y$ axes is expected \cite{Bergli_PRB07,Ramon_PRB12}. Secondly, these $\pi$ rotations become imperfect in the presence of transverse noise, and the resulting pulse errors can accumulate with increasing $n$ \cite{Mkhitaryan_PRB14}. Nevertheless, in the following Sections I will develop a theory of DD noise spectroscopy at an OWP using the Hamiltonian from Eq.~(\ref{eq:H}), and the issue of accuracy of these calculations when the original Hamiltonian is in fact given by Eq.~(\ref{eq:Ht}) will be revisited in Section \ref{sec:transverse}. The stochastic process $\xi(t')$ will be assumed to be stationary and Gaussian. It should be noted that the random telegraph noise (RTN), considered in other works in the context of extrinsic OWP \cite{Bergli_PRB06,Bergli_PRB07,Ramon_PRB12}, is thus excluded from the following considerations, since it has non-Gaussian statistics. \section{Noise spectroscopy for linear coupling to Gaussian dephasing noise} \label{sec:linear} Let us recount here the results for dephasing due to $v_{1}\xi(t')\hat{\sigma}_{z}$ coupling to Gaussian noise \cite{deSousa_TAP09,Cywinski_PRB08}. We focus now on evolution of a superposition of the two states of the qubit under the application of $n$ ideal $\pi$ pulses ($\pi$ rotations about $x$ or $y$ axis) applied in the Carr-Purcell (CP) sequence \cite{Carr_Purcell}, \textit{i.e} with pulses at times $\tau_{k} \! = \! (k-\frac{1}{2})\te/n$, with $\te$ being the total sequence time. The CP sequence is chosen because its use leads to a particularly transparent method of noise spectroscopy at large $n$. 
It is also simple to implement, and in the realistic case of imperfect pulses, a simple choice of the initial pulse axis vs the axis used for the subsequent rotations introduced by Meiboom and Gill results in the CPMG protocol which is quite robust against systematic pulse errors \cite{Meiboom_Gill}. We define the decoherence function \beq W_{l}(t) = \left\langle \exp\left( -iv_{1}\int_{0}^{\te}f_{t}(t') \xi(t')\text{d}t' \right ) \right\rangle \,\, , \label{eq:Wl} \eeq where $f_{t}(t')$ is a temporal filter function, which is zero for $t'$ outside of the $[0,\te]$ range, and equal to $\pm \! 1$ within it, changing sign at each pulse time $\tau_{k}$. For an even number of pulses $n$ the decoherence function is related to the off-diagonal element of the density matrix by $W(\te) \!= \! \rho_{+-}(\te)/\rho_{+-}(0)$, while for odd $n$ we should replace $\rho_{+-}(0)$ by $\rho_{-+}(0)$ in this formula. A simple Gaussian average gives a closed result for the decoherence function in this case: $W_{l}(\te) \! = \! \exp[-\chi^{l}_{2}(\te)]$, with $\chi^{l}_{2}(\te)\! \equiv \! \frac{v^{2}_{1}}{2}R^{l}_{2}(\te)$, and \beq R^{l}_{2} = \int_{0}^{\infty} |\tilde{f}_{t}(\omega)|^2 S(\omega) \frac{\text{d}\omega}{\pi} \,\, , \label{eq:R2ldef} \eeq where $\tilde{f}_{t}(\omega)$ is the Fourier transform of $f_{t}(t')$. It is useful to note now that this result can be viewed as a particularly simple case of application of the cumulant expansion technique \cite{Kubo_JPSJ62} to performing the noise average. Due to the Gaussian statistics of $\xi$, the average of $\mean{\exp(-i\Phi(t))}$ (with the phase $\Phi(t)$ defined in Eq.~(\ref{eq:Wl})) is simply given by the exponent of the second cumulant: $W_{l}(\te)\! = \! \exp(-\frac{1}{2}\mean{\Phi(t)^{2}})$. \begin{figure} \includegraphics[width=\linewidth]{Fig_spectroscopy.eps} \caption{(Color online)~(a) Frequency-domain filter function for CP sequence with $n\! = \! 2$, $4$, and $10$ pulses. The plotted function $F(z) \! = \! 
\pi^{2}|\tilde{f}_{t}(z)|^{2}/4\te^{2}$ with $z \! =\! \omega\te/\pi$, has a dominant peak at $z_{1} \! \approx \! n$, and the next peak at $z_{2}\! \approx \! 3n$ is $\approx \! 9$ times smaller. (b) Calculation of $W_{l}(\te)$ for linear coupling $v_{1}\xi(t')\hat{\sigma}_{z}$ to noise with spectral density $v_{1}S(\tilde{\omega})\! = \! A/\tilde{\omega}^{3/2} + B/[\gamma^2 + (\tilde{\omega}-\omega_{p})^2]$ where $\tilde{\omega}\! = \! \omega/v_{1}$, and $A \! = \! \pi^{7/2}/4$, $B\! = \! 10$, $\omega_{p}\! = \! 10$ are dimensionless constants. (c) Demonstration of spectroscopy in this case: the results of exact calculation of $W_{l}(\te)$ from (b) are used to reconstruct $v_{1}S(\omega)$ using Eq.~(\ref{eq:Rl}). } \label{fig:f} \end{figure} The spectroscopy of $S(\omega)$ can be achieved at large $n$ when we realize that $\tilde{f}_{t}(\omega)$ can then be approximated by a series of narrow peaks \cite{Cywinski_PRB08,Bylander_NP11,Alvarez_PRL11} - this simply comes from the fact that the influence of the noise component with frequency matching the frequency of pulse application (or its harmonics) is not suppressed by the DD sequence. For even (e) and odd (o) $n$ we have \begin{align} \tilde{f}^{n=e}_{t}(\omega) & \approx \frac{e^{i\omega t/2}}{\omega} (-1)^{\frac{n}{2}+1} \sum_{k=-\infty}^{\infty}(-1)^{k} \Delta[\omega \te - 2\pi n (k-\frac{1}{2})] \,\, , \label{eq:fe} \\ \tilde{f}^{n=o}_{t}(\omega) & \approx i\frac{e^{i\omega t/2}}{\omega} (-1)^{\frac{n-1}{2}} \sum_{k=-\infty}^{\infty} \Delta[\omega \te - 2\pi n(k-\frac{1}{2})] \,\, , \label{eq:fo} \end{align} where $\Delta(x)$ can be approximated by a square peak of height $2n$ and width $2\pi /\te$ centered at $x$. This is illustrated in Fig.~\ref{fig:f}(a). For $S(\omega)$ which does not have very pronounced maxima (i.e.~it is mostly a non-increasing function of $\omega$) we can safely take only the first peak \cite{Bylander_NP11}, i.e.~assume $|\tilde{f}_{t}(\omega)|^{2} \! \approx \! 
\delta(\omega\te - \pi n)8\pi n^{2}/\omega^2$. Note that the amplitude of peaks in $|\tilde{f}_{t}(\omega)|^{2}$ at larger $\omega$ is smaller than that of the first one by a factor of $(2k-1)^{2}$ for the $k$-th peak (see Fig.~\ref{fig:f}a). The single-peak approximation leads to \beq R^{l}_{2}(\te) \approx \frac{8 t}{\pi^2}S\left ( \frac{\pi n}{\te}\right ) \,\, . \label{eq:Rl} \eeq Thus, by changing $n$ and $\te$, $S(\omega)$ can be reconstructed \cite{Alvarez_PRL11,Yuge_PRL11,Staudacher_Science13,Muhonen_arXiv14,Romach_arXiv14} from measurements of $\rho_{+-}(t)$. The accuracy of this formula is shown in Fig.~\ref{fig:f}(c), where it was used to reconstruct $S(\omega)$ from $W_{l}(\te)$ calculated using the exact equations given before. The peak in $S(\omega)$ can be well described in this case with large enough $n$, but it should be kept in mind that for strongly non-monotonic $S(\omega)$ one should consider using a more complicated spectroscopy scheme in which the contribution of filter function peaks at higher frequencies is taken into account \cite{Alvarez_PRL11}. Alternatively, when reconstruction of $S(\omega)$ from the above formula is unreliable, it is often more robust to extract the characteristic decay timescale $T_{2}$ from the datasets corresponding to various $n$. When the part of the noise spectrum mostly responsible for coherence decay is of $1/\omega^{\beta}$ form (as is often the case), we have $W_{l}\! \sim \! \exp[-(\te/T_{2})^{\beta+1}]$ and the decay timescale $T_{2} \! \sim \! n^{\gamma}$ with $\gamma \! = \! \beta/(\beta+1)$. Fitting the $T_{2}$ vs $n$ dependence to a power law gives the exponent $\gamma$, and it allows for a rather reliable estimation of $\beta$ \cite{Cywinski_PRB08,deLange_Science10,Medford_PRL12,Muhonen_arXiv14,Siyushev_NC14}. 
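The reconstruction procedure based on Eqs.~(\ref{eq:R2ldef}) and (\ref{eq:Rl}) can be sketched numerically: compute $\tilde{f}_{t}(\omega)$ for a CP sequence from its exact piecewise form, evaluate the integral of Eq.~(\ref{eq:R2ldef}) for an assumed smooth spectrum, and invert the single-peak approximation (a minimal Python sketch added for concreteness; the Lorentzian $S(\omega)$ below is an arbitrary illustrative choice, not the spectrum of Fig.~\ref{fig:f}):

```python
import numpy as np

def trapz(y, x):
    """Simple trapezoidal rule (avoids version-dependent numpy helpers)."""
    return float(np.sum(np.diff(x) * 0.5 * (y[:-1] + y[1:])))

def ftilde(w, n, t):
    """Exact f~(w) for an n-pulse CP sequence: f_t(t') is +/-1 on [0, t] and
    flips sign at tau_k = (k - 1/2) t/n; piecewise integration gives this sum."""
    tau = (np.arange(1, n + 1) - 0.5) * t / n
    s = np.zeros_like(w, dtype=complex)
    for k, tk in enumerate(tau, start=1):
        s += (-1.0) ** k * np.exp(1j * w * tk)
    return ((-1.0) ** n * np.exp(1j * w * t) - 1.0 - 2.0 * s) / (1j * w)

def S(w):
    return 1.0 / (1.0 + w**2)   # assumed smooth, non-increasing example spectrum

n, t = 20, 10.0
w = np.linspace(1e-3, 100.0, 500_001)
R2 = trapz(np.abs(ftilde(w, n, t)) ** 2 * S(w), w) / np.pi   # Eq. (R2ldef)
S_rec = np.pi**2 * R2 / (8 * t)                              # invert Eq. (Rl)
assert abs(S_rec - S(n * np.pi / t)) / S(n * np.pi / t) < 0.1
```

For $n \! = \! 1$ the routine reduces to the familiar Hahn-echo filter $|\tilde{f}_{t}(\omega)|^{2} \! = \! 16\sin^{4}(\omega t/4)/\omega^{2}$, which provides a convenient check of the implementation; for this smooth, decreasing spectrum the single-peak inversion recovers $S(n\pi/t)$ to within a few percent, the residual error coming from the higher filter-function peaks.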
\section{Linked cluster expansion for quadratic coupling to Gaussian noise} \label{sec:LCE} The decoherence function $W(\te)$ is now given by \beq W(\te) = \left\langle \exp\left( -i \vq \int_{0}^{\te}f_{t}(t') \xi^{2}(t')\text{d}t' \right ) \right\rangle \,\, . \label{eq:W2} \eeq When dealing with pure dephasing due to classical noise, the most natural theoretical approach, especially suited for transparent investigation of non-Gaussian effects, is the linked cluster expansion (LCE) \cite{Makhlin_PRL04,Makhlin_CP04,Cywinski_PRB08}. The most general form of the linked cluster theorem \cite{Negele} states that $W(\te)$ is given by an exponent of the sum of all the \emph{linked} terms that appear in expansion of RHS of Eq.~(\ref{eq:W2}). By definition these linked terms are the ones that cannot be written as products of other expressions (each including a separate average over a few $\xi$ terms). From this description it should be clear that the linked clusters are basically cumulants of the random variable appearing in the exponent on the RHS of Eq.~(\ref{eq:W2}). Let us take a look at the lowest-order terms in this expansion, making the usual assumption of $\mean{\xi(t')} \! =\! 0$. The first order term is \beq W^{(1)}(t) = -iv_{2}\int f_{t}(t')\mean{\xi^{2}(t')}\text{d}t' = -iv_{2}C(0)\int f_{t}(t')\text{d}t' \,\, , \label{eq:We1} \eeq which is equal to zero because $\int f_{t}(t')\text{d}t' \! = \!0$ for any reasonable DD sequence. However, for the purpose of quickly explaining the structure of terms appearing in the expansion of $W(t)$ let us forget about this fact for a moment (note also that the derivation below applies also to the free evolution case \cite{Makhlin_PRL04}, in which $\int f_{t}(t')\text{d}t' \! = \!t$). In the second order of expansion we encounter a $\mean{\xi^{2}(t_{1})\xi^{2}(t_{2})}$ average. 
We use now the assumption of Gaussian statistics of $\xi(t')$, which tells us how the multi-point correlation functions factorize into the two-point ones. With the notation of $\xi(t_{k})\! \equiv \! \xi_{k}$ we have \beq \mean{\xi_{1}\xi_{1}\xi_{2}\xi_{2}} = \mean{\xi_{1}^{2}}\mean{\xi_{2}^{2}} + 2\mean{\xi_{1}\xi_{2}}^{2} = C^{2}(0) + 2C^{2}(t_{12}) \,\, , \label{eq:xi1xi2} \eeq where $t_{kl} \! \equiv \! t_{k}-t_{l}$, and the factor of $2$ in front of the second term comes from the two possibilities of pairing of $\xi_{1}$ with $\xi_{2}$. We obtain then \begin{align} W^{(2)} & = \frac{1}{2}[W^{(1)}]^2 -v^{2}_{2}\int\int f_{t}(t_{1}) f_{t}(t_{2}) C(t_{12})C(t_{21}) \text{d}t_{1}\text{d}t_{2} \end{align} where we have used the fact that $C(t_{12}) \! =\! C(t_{21})$ to make the expression more symmetric. The first term on the RHS is the \emph{unlinked} one, while the second is the \emph{linked} one: despite the fact that the average in Eq.~(\ref{eq:xi1xi2}) factorized into the product of two averages, the presence of integrals with respect to $t_1$ and $t_2$ variables that are interlocking these averages precludes factorization of this expression. In the third order the $\mean{\xi_{1}^{2}\xi_{2}^{2}\xi_{3}^{2}}$ average factorizes into a $\mean{\xi_{1}^{2}}\mean{\xi_{2}^{2}}\mean{\xi_{3}^{2}}$ term, six terms of $\mean{\xi_{k}^{2}}\mean{\xi_{l}\xi_{m}}^{2}$ type, and eight terms of $\mean{\xi_{k}\xi_{l}}\mean{\xi_{l}\xi_{m}}\mean{\xi_{m}\xi_{k}}$ type. The latter ones lead to linked terms in $W^{(3)}$. The pattern should be clear now: in the $k$-th order of expansion of $W(t)$ we obtain $(2k-1)!!$ terms coming from possible pairings of $\xi_{k}$ operators under average, and $2^{k-1}(k-1)!$ of these terms are linked. 
This gives us \begin{align} W(\te) & = \exp \left [ \sum_{k=1}^\infty\frac{(-iv_{2})^k}{k}R_{k}(\te) \right ] \,\, \label{eq:WR} \\ & = \exp \left [-\sum_{k=1}^{\infty}\chi_{k}(\te) \right ] \,\, , \label{eq:Wchi} \end{align} with the linked cluster contributions \begin{align} \!\!\! R_{k} & = \! 2^{k-1}\int f_{t}(t_{1}) \text{d}t_{1} ... \int f_{t}(t_{k}) \text{d}t_{k} C(t_{12})...C(t_{k1}) \,\, , \label{eq:Rt}\\ & \!\!\!\!\!\!\! = 2^{k-1} \! \int \frac{\text{d}\omega_{1}...\text{d}\omega_{k}}{(2\pi)^k} S(\omega_{1})...S(\omega_{k}) \tilde{f}_{t}(\omega_{12})...\tilde{f}_{t}(\omega_{k1}) \,\, , \label{eq:Rw} \end{align} where $\omega_{kl} \! \equiv \! \omega_{k}-\omega_{l}$. The above equations generalize the results of Ref.~\cite{Makhlin_PRL04} to the case of evolution of the qubit affected by a series of ideal $\pi$ pulses. Note that when the number of pulses $n$ is odd, the $R_{k}(t)$ terms with odd $k$, i.e.~the ones contributing a nontrivial phase $\psi(\te)$ to $W(t) \! = \! |W(t)|e^{i\psi(\te)}$, are identically zero. This follows from $\tilde{f}_{t}(-\omega) \! = \! -\tilde{f}_{t}(\omega)$ for odd $n$. For even $n$ the odd-order linked clusters do not vanish, and the phase contribution $\psi(t)$ is nonzero. In the following discussion we will ignore this phase. The approximations which will be used below either imply that for even $n$ we have $\psi(t)\! \approx \! 0$ (the appearance of a nonzero phase is then one of the possible signatures of the breakdown of certain assumptions allowing for the derivation of a simple solution of the problem), or the discussion of the $\psi(t)$ contribution becomes cumbersome, while the numerical simulations show that the corrections brought by this term are qualitatively unimportant. In experiments one can simply choose to work with odd $n$, or perform the measurements of the qubit state along both $x$ and $y$ axes, reconstructing in this way both the real and imaginary parts of $W(\te)$. 
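The pairing counting quoted above ($(2k-1)!!$ pairings in total, of which $2^{k-1}(k-1)!$ are linked) can be verified by brute force for small $k$: enumerate all perfect matchings of the $2k$ noise ``legs'' (two per time variable), discard those containing a $\mean{\xi_{j}^{2}}$ self-contraction or splitting into disjoint cycles, and compare with the formulas (a short illustrative Python check):

```python
def pairings(legs):
    """All perfect matchings of a list of distinguishable legs."""
    if not legs:
        yield []
        return
    a, rest = legs[0], legs[1:]
    for i, b in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for tail in pairings(remaining):
            yield [(a, b)] + tail

def count_terms(k):
    """Total Wick pairings of xi_1^2 ... xi_k^2 and the linked (single-cycle) ones."""
    legs = [(v, s) for v in range(k) for s in (0, 1)]  # two xi legs per time variable
    total = linked = 0
    for m in pairings(legs):
        total += 1
        if any(a[0] == b[0] for a, b in m):
            continue  # contains a <xi_j^2> self-contraction -> unlinked
        # with no self-contractions the pairing forms disjoint cycles over the k
        # time variables; linked terms C(t_12)...C(t_k1) are the single cycles
        adj = {v: [] for v in range(k)}
        for (va, _), (vb, _) in m:
            adj[va].append(vb)
            adj[vb].append(va)
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        linked += len(seen) == k
    return total, linked

assert count_terms(2) == (3, 2)     # (2k-1)!! = 3,  2^(k-1)(k-1)! = 2
assert count_terms(3) == (15, 8)    # 15 pairings, 8 linked (the "eight terms" above)
assert count_terms(4) == (105, 48)
```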
\section{Short time behavior} \label{sec:short} Under dynamical decoupling the lowest-order nonvanishing term in Eq.~(\ref{eq:WR}), $R_{2}(\te)$, can be rewritten in a form analogous to that of Eq.~(\ref{eq:R2ldef}), \beq R_{2}(t) = \int_{0}^{\infty} |\tilde{f}_{t}(\omega)|^2 S_{2}(\omega) \frac{\text{d}\omega}{\pi} \approx \frac{8 t}{\pi^2}S_{2}\left ( \frac{\pi n}{\te}\right ) \,\, , \label{eq:R2} \eeq only with $S(\omega)$ appearing in Eq.~(\ref{eq:R2ldef}) replaced by \beq S_{2}(\omega) = \int S(\omega_{1})S(\omega_{1}-\omega)\frac{\text{d}\omega_{1}}{\pi} \,\, , \label{eq:S2} \eeq which is the spectral density of the $\xi^{2}(t')$ process, i.e.~it is the Fourier transform of its autocorrelation function given by \beq C_{\xi^2}(t') \equiv \mean{\xi^{2}(t')\xi^{2}(0)} - \mean{\xi^{2}}^2 = 2C^{2}(t') \,\, . \eeq At very short times we have $W(t) \! \approx \! 1-\frac{1}{2}R_{2}(t)$, so that the measurement of the initial decay gives information about $S_{2}(\omega)$ at high frequencies. However, if our goal is to reconstruct the spectrum of the $\xi(t')$ process, the relation between $S_{2}(\omega)$ and $S(\omega)$ has to be discussed. The fact that $S_{2}(\omega)$ is a convolution of $S(\omega)$ with itself means that in general there is no simple relation between the two spectra at the same frequency. However, in a few often-encountered cases we can find such a relation. In this paper we will focus either on the case of noise with $S(\omega) \! \sim \! 1/\omega^{\beta}$ (with $\beta \! > \! 1$ and a low-frequency cutoff $\omega_{0}$) in a large range of frequencies, or on the case of OU noise with correlation time $t_{c}$, for which we have the correlation function $C^{\text{OU}}(\tau) \! = \! \sigma^{2}e^{-\tau/t_{c}}$ and the corresponding spectral density: \beq S^{\text{OU}}(\omega) = \frac{2\sigma^2 t_{c}}{1+\omega^{2}t^{2}_{c}} \,\, , \label{eq:SOU} \eeq where $\sigma^2$ is the total power of the noise. 
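For the OU spectrum of Eq.~(\ref{eq:SOU}) the convolution in Eq.~(\ref{eq:S2}) can be evaluated numerically and compared with the standard fact that a convolution of two Lorentzians is a Lorentzian of summed widths (a minimal numpy check, with $\sigma$ and $t_{c}$ set to unity for illustration):

```python
import numpy as np

t_c, sigma = 1.0, 1.0

def S(w):
    return 2 * sigma**2 * t_c / (1 + w**2 * t_c**2)   # Eq. (SOU)

w1 = np.linspace(-400.0, 400.0, 800_001)   # integration grid; S decays as 1/w^2

def S2(w):
    """S_2(w) of Eq. (S2): the convolution (1/pi) int S(w1) S(w1 - w) dw1."""
    y = S(w1) * S(w1 - w)
    return float(np.sum(np.diff(w1) * 0.5 * (y[:-1] + y[1:]))) / np.pi

# Lorentzian convolved with itself: a Lorentzian of doubled width,
# S_2(w) = 8 sigma^4 t_c / (4 + w^2 t_c^2)
for w in (0.0, 1.0, 5.0):
    assert abs(S2(w) - 8 * sigma**4 * t_c / (4 + w**2 * t_c**2)) < 1e-4
```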
For the case of $1/\omega^{\beta}$ noise, the integral in Eq.~(\ref{eq:S2}) is dominated by contributions of regions with $|\omega_{1}|$ or $|\omega_{1}-\omega|$ close to $\omega_{0}$, and a simple calculation gives $S_{2}(\omega) \! \sim \! 1/\omega^{\beta}$. A similar situation is encountered for OU noise, for which it is easy to calculate $S_{2}(\omega)$ exactly: \beq S^{\text{OU}}_{2}(\omega) = \frac{8\sigma^4 t_{c}}{4+\omega^{2}t^{2}_{c}} \,\, , \label{eq:S2OU} \eeq where we see that the high-frequency $1/\omega^2$ behavior is inherited from $S(\omega)$. At very short $\te$ we can then expect \beq W(\te) \approx 1 - (\te/T_{s})^{\beta+1} \,\, , \eeq where $T_{s}$ is the characteristic timescale of the initial ``decay shoulder'' of $W(\te)$, and $\beta$ characterizes the power-law decay of $S(\omega)$ at large frequencies $\approx n\pi/\te$. The utility of the above result is diminished by the fact that high quality data in the regime of $W(\te) \! \approx \! 1$ are often not available due to finite measurement precision. Furthermore, only the high-end part of the spectrum, which causes the decay of coherence at short times, is probed here. A spectroscopy method using the measured $W(\te)$ for $\te$ comparable to and larger than $T_{2}$ (roughly defined as the half-decay time of $W(\te)$) is clearly desirable. In the previously discussed case of linear coupling to $\xi$, $R^{l}_{2}$ was the only nonvanishing term. However, $R_{k}$ with $k\! > \! 2$ are not zero now, and their contribution to $W(\te)$ has to be taken into account in order to make any statement on the form of coherence decay (and its relation to the noise spectrum) beyond the short-time limit. \section{Noise with finite correlation time - Gaussian approximation} \label{sec:Gaussian} It is quite intuitive that with increasing $n$ the noise affecting the qubit should become better described by the Gaussian approximation, provided that the noise has a finite correlation time $t_{c}$.
In the case of free evolution, the phase $\phi(\te) \! = \! \int_{0}^{\te} \xi^{2}(t')\text{d}t'$ is not Gaussian-distributed except at very long $t \! \gg \! t_{c}$, for which we can argue that $\phi(\te)$ is a sum of a large number of independent contributions (each from a slice of time of width $\approx \! t_c$). However, in such a free-evolution case, the coherence could be practically zero at such long times. On the other hand, in the case of DD the filtered phase, $\phi_{f}(\te) \! = \! \int f_{t}(t') \xi^{2}(t')\text{d}t'$, can be viewed as a sum over $n+1$ contributions, the signs of which are arranged in such a way that the correlated parts of subsequent contributions cancel each other when $t/n \! \ll \! t_{c}$. In other words, the terms effectively contributing to the filtered phase are weakly correlated (especially when $\te \! > \! t_{c}$), allowing for application of the Central Limit Theorem, which leads to a Gaussian distribution of $\phi_{f}$ at large $n$. This is equivalent to saying that keeping only the $R_{2}(\te)$ term becomes a good approximation. The above argument applies to any non-Gaussian noise with finite $t_{c}$: the fact that DD suppresses non-Gaussian features was noticed for the case of a qubit linearly coupled to a source of RTN in \cite{Cywinski_PRB08,Ramon_PRB12}. It is crucial to note, however, that the above heuristic argument fails for coupling to $\xi^{2}(t')$ noise unless $\te \! \gg \! t_{c}$. The reasons for this will become clear in the next Section. Let us now explore the possibility of reaching the Gaussian limit using the OU process as an example of noise with finite $t_{c}$. $R_{2}(t)$ is then given for large $n$ by \beq R^{\text{OU}}_{2}(\te) \approx \frac{64 \sigma^{4}t_{c}\te^{3}}{\pi^{2}(4\te^{2} + n^{2}\pi^{2}t^{2}_{c})} \,\, , \eeq where Eqs.~(\ref{eq:R2}) and (\ref{eq:S2OU}) were used. When $\te/n \! \ll \!
t_{c}$ we then have in Eq.~(\ref{eq:Wchi}) the second-order term \beq \chi_{2}(t) = \frac{v^{2}_{2}}{2}R_{2}(t) = \frac{32}{\pi^4} \frac{(\sigma^{2}v_{2}\te)^2 (\te/t_{c})}{n^2} \,\, , \label{eq:chi2} \eeq while for $\te/n \! \gg \! t_{c}$ we have $\chi_{2} \! \propto \! (v_{2} \te) (v_{2}t_{c})$, which is independent of $n$. We focus now on the former regime, in which the dynamical decoupling actually leads to an enhancement of the coherence time. For noise bounded at low frequencies the main contributions to the integrals in Eq.~(\ref{eq:Rw}) come from peaks of $\tilde{f}(\omega_{kl})$. Within the single-peak approximation used before, this means that all $\omega_{kl} \! \approx \! \pm n\pi/\te$. For the $4$th-order term, using this approximation, we get \beq R^{\text{OU}}_{4}(\te) \approx \frac{1024t}{3\pi^4}\int \left[ S^{\text{OU}} \Big(\omega_{1} \Big)S^{\text{OU}} \Big(\omega_{1} - \frac{n\pi}{\te} \Big) \right ]^{2} \frac{\text{d}\omega_{1}}{\pi} \,\, , \label{eq:R4OU} \eeq from which we get that for $\te/n \! \ll \! t_{c}$ \beq |\chi_{4}(\te)| = \frac{v^{4}_{2}}{4}R_{4}(\te) \approx \frac{4096}{3\pi^8}\frac{(\sigma^{2}v_{2}\te)^{4} (\te/t_{c})}{n^{4}} \,\, . \label{eq:chi4} \eeq The accuracy of the approximation leading to Eq.~(\ref{eq:R4OU}) is shown in Fig.~\ref{fig:R2R4}, where a numerical calculation of $\chi_{4}(\te)$ given by the exact formula (\ref{eq:Rw}) is compared with the approximate calculation. \begin{figure}[t] \includegraphics[width=\linewidth]{Fig_chi4.eps} \caption{(Color online)~Numerical calculation of $|\chi_{4}(t)|$ (circles) for Ornstein-Uhlenbeck noise with $\sigma\! = \! 1$ and $v_{2}t_{c} \! = \! 1$, done for $n\! = \! 11$ pulses of the CP sequence. The dashed line is the ``single-peak'' (SP) approximation for $|\chi_{4}| \! =\! \frac{v^{4}_{2}}{4}R_{4}$ from Eq.~(\ref{eq:R4OU}), while the solid line is the SP approximation for $t/n \! \ll \! t_{c}$ from Eq.~(\ref{eq:chi4}). The latter formula agrees quite well with the numerical calculation for $t/n \!
< t_{c} \! < t$ (i.e.~for $1\! < \! t/t_{c} \! < n$). $\chi^{2}_{2}(t)$ is shown as crosses, and one can see that for $t/t_{c}\! \ll \! 1$ we have $|\chi_{4}(t)| \! \approx \! \chi^{2}_{2}(t)$, in agreement with Eq.~(\ref{eq:chi2n}) which holds in this regime. } \label{fig:R2R4} \end{figure} \begin{figure}[h] \includegraphics[width=\linewidth]{Fig_gaussian.eps} \caption{(Color online) Decoherence function at an OWP for CP sequences with $n\!=\!1$, $11$ and $21$. Symbols are the results of numerical simulation with OU noise (with $\sigma\! = \! 1$) obtained using standard algorithms \cite{Gillespie_PRE96}. The coupling is $\vq \! = \! 1/t_{c}$. The solid lines are the Gaussian approximation $W(\te)\! =\! \exp[-\chi_{2}(t)]$, while the dashed lines are $W(\te)\! =\! \exp[-\chi_{2}(t) - \chi_{4}(t)]$ with $\chi_{2}(t)$ and $\chi_{4}(t)$ given in Eqs.~(\ref{eq:chi2}) and (\ref{eq:chi4}), respectively. The divergence of the latter signifies the failure of the approach in which only a few cumulants are taken into account. Already for $n\! = \! 11$ the characteristic decay timescale $T_{2}$ is well described by the Gaussian approximation, while for $n\! = \! 21$ the two lowest cumulants are enough to describe the coherence decay by an order of magnitude from the initial value. } \label{fig:gaussian} \end{figure} Let us now focus on the regime of $\te \! \leq \! T_{2}$, with $T_{2}$ defined by $\chi_{2}(T_{2}) \! = \! 1$ (i.e.~$T_{2}$ is the time of decay of $W(\te)$ to $1/e$), provided that the Gaussian approximation is correct on this timescale. The condition for the latter is \beq \frac{|\chi_{4}(\te)|}{\chi^{2}_{2}(\te)} \approx \frac{4t_{c}}{3\te} \ll 1 \,\, , \label{eq:g} \eeq where we have used Eqs.~(\ref{eq:chi2}) and (\ref{eq:chi4}). We see now that the condition for the initial decay of $W(\te)$ to be well described by the Gaussian approximation is $\te/n \! \ll \! t_c \! \ll \! \te$. At longer times, $\te \! \gg \!
T_{2}$, $\chi_{2}(\te)$ becomes larger than one, and as the condition of applicability of the Gaussian approximation we can simply use $\chi_{4}(\te)\! \ll \! 1$. This leads to $\sigma^{2}v_{2}\te/n \! \ll \! (t_{c}/\te)^{1/4}$. With $\te$ increasing to values larger than $t_{c}$, maintaining this condition at a given $\te$ requires a slightly superlinear scaling of $n$ with $\te$. This is illustrated in Fig.~\ref{fig:gaussian}, where simulations of dephasing due to the square of OU noise are compared with the Gaussian formula, and with the next-order formula including $\chi_{4}(\te)$. For $n\! =\! 1$ the Gaussian regime does not extend to the timescale on which $W(\te) \! = \! 1/e$, while with $n\! = \! 11$ and $21$ the $T_2$ time is well described by the Gaussian expressions. While the OU noise has been used here, the physics should be qualitatively the same for any other noise with a well-defined $t_{c}$. The discussion in the next Section will make it clear that the fingerprint of the situation considered here is the disappearance of the long-time power-law tail (visible for $n\!=\! 1$ in Fig.~\ref{fig:gaussian}) with increasing $n$, and the enlargement of the timescale on which $W(\te) \! \sim \! \exp(-(t/T_{2})^{\alpha})$. It should also be clear that the most realistic method of noise spectroscopy in this case is the investigation of the $T_{2} \! \sim \! n^{\gamma}$ scaling, with $T_{2}$ times fitted to the data for $W(\te)$ with the long-time tails removed. \section{Resummation of linked clusters for low-frequency noise} \label{sec:1f} The calculations from the previous Section do not work for noise with an ill-defined correlation time, or when $\te \! \ll \! t_{c}$. Formally, the assumption that the peaks of $\tilde{f}(\omega_{kl})$ dominate the integrals in Eq.~(\ref{eq:Rw}) is now incorrect. The regions of very small $\omega_{k}$, in which the spectral densities diverge, also matter.
Before I present an approximation for Eq.~(\ref{eq:Rw}) which works for noise with a strong low-frequency component, let me give a simple qualitative explanation of the relevant physics. The first thing to note is that for noise with a spectrum diverging at low frequencies, one has to think carefully about the meaning of $\Omega$ in Eq.~(\ref{eq:H}). This quantity consists of the ``bare'' qubit splitting renormalized by the average contribution from noise, $v_{2}\mean{\xi^{2}}$. However, we should now consider more carefully the meaning of the $\mean{...}$ average. Practically, $\Omega$ is measured with some accuracy before the coherence measurements are begun. The cycle of the qubit's initialization, evolution, and measurement is then repeated many times. The key observation is that for $1/f^\beta$ noise, there will be slow fluctuations of $\xi^{2}$, essentially static on the timescale of $\te$. During a single evolution, the noise contribution to the qubit's splitting is \beq \xi^{2}(t')\! \approx \! \xi^{2}_{lf} + 2\xi_{lf}\dxi(t') + \dxi^{2}(t') \,\, , \label{eq:xi_separation} \eeq with $\xi_{lf}$ being the quasi-static shift changing between measurements (\textit{i.e.}~coming from the noise spectrum for $\omega_{0} \! < \! \omega \! < \! 1/\te$), and with $\dxi(t')$ being the high-frequency component. The low-frequency cutoff is $\omega_{0} \! \approx \! \text{max}(1/t_{c},1/T_{M})$, with $T_{M}$ being the total data acquisition time, or, in the case of using the gathered data to estimate the slow drift of $\xi^{2}$, the timescale on which this information is fed back into the averaging procedure (note that such a procedure of correcting for a slowly evolving energy offset and suppressing decoherence was simulated in \cite{Falci_PRL05} for the free evolution case). Typically $T_{M}$ is orders of magnitude larger than $t$, and in the case of noise with finite $t_{c}$ determining the low-frequency cutoff, we assume here $t_{c} \! \gg \! \te$.
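The decomposition in Eq.~(\ref{eq:xi_separation}) can be visualized with a toy harmonic synthesis of $1/\omega^{\beta}$ noise (illustrative code with made-up parameters, $\beta \! = \! 2$ and $\omega_{0} \! = \! 10^{-2}/\te$): splitting the harmonics at $\omega \! = \! 1/\te$ shows that for $\beta \! > \! 1$ the quasi-static band carries almost all of the power, and that $\xi_{lf}$ indeed drifts little during a single evolution.

```python
import numpy as np

rng = np.random.default_rng(2)
te = 1.0                                   # duration of a single evolution
w0, wmax, beta, A = 1e-2 / te, 1e2 / te, 2.0, 1.0   # made-up 1/w^2 band

# Harmonic synthesis: xi(t') = sum_k a_k cos(w_k t' + phi_k),
# with a_k^2 / 2 = S(w_k) dw_k / pi reproducing the target spectrum.
w = np.geomspace(w0, wmax, 4000)
a = np.sqrt(2.0 * (A / w**beta) * np.gradient(w) / np.pi)
phi = rng.uniform(0.0, 2.0 * np.pi, w.size)

lf = w < 1.0 / te                          # quasi-static band, w < 1/te
var_lf = 0.5 * (a[lf] ** 2).sum()          # band power = variance of xi_lf
var_hf = 0.5 * (a[~lf] ** 2).sum()         # power of the fast component

tgrid = np.linspace(0.0, te, 200)
xi_lf = (a[lf] * np.cos(np.outer(tgrid, w[lf]) + phi[lf])).sum(axis=1)
print(var_lf / var_hf)                     # >> 1: low frequencies dominate
print(np.ptp(xi_lf) / np.sqrt(var_lf))     # small: xi_lf is nearly frozen over te
```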
In both cases we have $\mean{\xi^{2}_{lf}} \! \gg \! \mean{\dxi^{2}}$ when $\beta \! > \! 1$, and the dominant noisy term is $2\xi_{lf}\dxi(t')$ (note that the influence of the quasi-static shift $\xi^{2}_{lf}$ is simply removed by the DD sequence when the Hamiltonian approach from Eq.~(\ref{eq:H}) is used, as is the case in this Section). This amounts to the observation that in the presence of $1/f^{\beta}$ noise, for $T_{M}$, $t_{c} \! \gg \! \te$ we average over evolutions of qubits operated \textit{in the neighborhood} of the OWP: the quasi-static fluctuations $\xi_{lf}$ are shifting the qubit away from the ``true'' OWP, and they renormalize the fast noise $\dxi(t')$ affecting the qubit, so that the dephasing is caused by the $2\xi_{lf}\dxi(t')$ term. This crucial aspect of dephasing due to quadratic coupling to noise with a significant low-frequency component was discussed in Ref.~\cite{Bergli_PRB06}, where $\xi(t')$ was taken to be a sum of many random telegraph signals, and the slowest fluctuators were shown to determine the effect that the fast fluctuators have on the qubit. Let us look at noise with power concentrated at very low frequencies, i.e.~in the range of $\omega \! < \! 1/\te$. Specifically, let us take $S(\omega) \! = \! A_{\beta}/|\omega|^{\beta}$ with $\beta \! > \! 1$ in this frequency range. At higher $\omega$ the spectrum can be of any shape, as long as the total contribution of the high frequencies to the noise power is much smaller than the low-frequency contribution. As mentioned in Sec.~\ref{sec:short}, $S_{2}(\omega)$ is dominated by the regions where $S(\omega)$ diverges, and taking only these into account we obtain for $\omega\! \gg \! \omega_{0}$ the following expression: \beq S_{2}(\omega) \approx \frac{4}{\pi(\beta-1)} \frac{A_{\beta}}{\omega^{\beta-1}_{0}} S(\omega) = 4\sigma^{2}_{0}S(\omega) \,\, , \label{eq:S2beta} \eeq where $\sigma^{2}_{0}$ is the variance of the low-frequency ($\omega\! < \!
1/\te$) noise, \beq \sigma^{2}_{0} \! = \! \int_{\omega_{0}}^{1/\te} S(\omega)\text{d}\omega/\pi \,\, . \label{eq:s0} \eeq It is also useful to rederive these results starting from the correlation function of the $\xi^{2}(t')$ noise while using the above-discussed separation of $\xi(t')$ into a quasi-static part $\xi_{lf}$ and the ``fast'' noise $\dxi(t')$: \begin{align} C_{\xi^2}(t') & \equiv \mean{\xi^{2}(t')\xi^2(0)} - \mean{\xi^2}^2 = 2\mean{\xi(t')\xi(0)}^2 \nonumber \\ & \approx 2\mean{\dxi(t')\dxi(0)}^2 + 4 \sigma^{2}_{0} \mean{\dxi(t')\dxi(0)} + 2\sigma^{4}_{0} \,\, , \end{align} where we identified $\sigma^{2}_{0}$ with $\mean{\xi_{lf}^2}$. After neglecting the first term (because $\mean{\dxi^2}\! \ll \! \sigma^{2}_{0}$) the Fourier transform of the above expression evaluated at finite $\omega$ agrees with Eq.~(\ref{eq:S2beta}). We can now immediately see how the high- and low-frequency noise components work together by looking at the lowest-order linked term, $R_{2}(\te)$, which is given by Eq.~(\ref{eq:R2}). Through the presence of $S_{2}(\omega)$ in that formula, $R_{2}$ is affected both by high-$\omega$ fluctuations, since the filter $|\tilde{f}_{t}(\omega)|^{2}$ picks out fluctuations with $\omega \! \approx \! n\pi/\te$, and by very low frequencies, since $S_{2}(\omega)$ depends on $\omega_{0}$ through $\sigma^{2}_{0}$. The essence of the calculation below is the separate averaging over the slow and fast fluctuations. Splitting the noise into a quasi-static part and a high-frequency part, $\xi(t') \! = \! \xi_{lf} + \dxi(t')$, we have \beq W(\te) = \left \langle \exp\left( -i\vq \int f_{t}(t') [ \dxi^{2}(t') + 2\xi_{lf}\dxi(t')]\text{d}t' \right ) \right \rangle \,\, . \eeq Now we perform the average over the low frequencies.
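This averaging over $\xi_{lf}$ uses only the Gaussian characteristic function $\langle e^{ia\xi_{lf}}\rangle \! = \! e^{-a^{2}\sigma^{2}_{0}/2}$, here with $a \! = \! -2\vq\int f_{t}\dxi\,\text{d}t'$. A quick Monte Carlo check with made-up numbers (the variable \texttt{X} below stands in for the value of the time integral):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma0, v2, X = 0.8, 0.5, 1.3              # arbitrary illustration values
xi_lf = sigma0 * rng.standard_normal(2_000_000)

# <exp(-2 i v2 xi_lf X)> over Gaussian xi_lf equals exp(-2 v2^2 sigma0^2 X^2)
mc = np.mean(np.exp(-2j * v2 * X * xi_lf))
exact = np.exp(-2.0 * (v2 * sigma0 * X) ** 2)
print(mc.real, exact)                      # agree; the imaginary part averages to ~0
```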
Using the fact that $\xi_{lf}$ is Gaussian distributed with variance $\sigma^{2}_{0}$, we arrive at an expression to be averaged only over the high frequencies (hf) \begin{eqnarray} W(\te) & = & \Big \langle \exp \big [ -i\vq\int f_{t}(t') \dxi^{2}(t')\text{d}t' + \nonumber \\ & & \!\!\!\!\!\!\!\!\!\!\!\! -2\sigma^{2}_{0}v^{2}_{2} \int \text{d} t_{1} \int \text{d} t_{2} f_{t}(t_{1}) f_{t}(t_{2}) \dxi(t_{1})\dxi(t_{2}) \big ] \Big \rangle_{\text{hf}} \,\, . \label{eq:Wfull} \end{eqnarray} In Eq.~(\ref{eq:Wfull}) the second term is expected to dominate when $\sigma^{2}_{0} \! \gg \! \mean{\dxi^{2}}_{\text{hf}}$, i.e.~when $\text{min}(T_{M},t_{c}) \! \gg \! \te$. The calculation of the average involving only this term can be done using a linked cluster expansion similar to the one previously discussed. One only has to be careful with the definition of linked and unlinked clusters when expanding in powers of the second term, $\mathcal{X}(\te)$, in Eq.~(\ref{eq:Wfull}): the correct definition of an unlinked cluster of $k$-th order is that it can be written as a product of terms which contain integrals over sets of time variables $\{t_{k}\}$ coming from \emph{disjoint} sets of $\mathcal{X}$ forming $\mathcal{X}^{k}$. However, the same result can be obtained in a simpler way by coming back to Eq.~(\ref{eq:Rt}), into which we plug $C(t) \! = \! \mean{\dxi(t)\dxi(0)}_{\text{hf}} + \sigma^{2}_{0}$, and keep only the terms with the maximal power of $\sigma_{0}$, \textit{i.e.}~the ones in which every second $C(t_{kl})$ is replaced by $\sigma^{2}_{0}$. Only the terms with even $k$ then survive, and \beq R_{2n}(\te) \approx [R^{l}_{2}(\te)]^{n}(2\sigma_{0})^{2n}\,\, , \label{eq:R2n} \eeq where $R^{l}_{2}(\te)$ is given in Eq.~(\ref{eq:R2ldef}). Equivalently we have \beq \chi_{2n}(\te) \approx -\frac{1}{2}\frac{(-1)^{n}}{n}\left[ 2\chi_{2}(\te) \right ] ^{n} \,\, ,\label{eq:chi2n} \eeq which means, for example, that $|\chi_{4}(t)| \! = \! \chi^{2}_{2}(t)$.
The quality of this approximation is illustrated in Fig.~\ref{fig:R2R4}, where one can see how the exact calculation of $\chi_{4}(\te)$ for OU noise agrees with the above formula when $t \! \ll \! t_{c}$. Using Eq.~(\ref{eq:chi2n}) we can now write out all the terms in the exponent in Eq.~(\ref{eq:WR}). After noticing that the resulting sum is in fact the expansion of the $-\frac{1}{2}\ln(1+x)$ function with $x\! \equiv \! 4\vq^2\sigma^{2}_{0}R^{l}_{2}(\te)$ we arrive at the final expression for the decoherence function: \beq W(\te) \! = \! \frac{1}{\sqrt{1+4\vq^2\sigma^{2}_{0}R^{l}_{2}(\te)}} = \frac{1}{\sqrt{1+2\chi_{2}(t)}} \,\, . \label{eq:Wlf} \eeq This is the main result of this paper for the case of noise with a dominant low-frequency component. For large $n$ we can use Eq.~(\ref{eq:Rl}) to relate $R^{l}_{2}(\te)$ to $S(n\pi/\te)$. For $S(\omega \! \approx \! n\pi/\te) \! \propto \! 1/\omega^{\beta}$ we have \beq W(\te) \approx \frac{1}{\sqrt{2}}\left( \frac{T_{2}}{\te} \right )^{\frac{\beta+1}{2}} \,\, \text{for} \,\, \te \! \gg \! T_{2} \,\, ,\label{eq:powerlaw} \eeq where the characteristic decay timescale fulfills \beq T_{2} \sim n^\gamma / T^{\eta}_{M} \,\,\, \text{where} \,\, \gamma\! = \! \frac{\beta}{\beta+1} \,\, \text{and} \,\, \eta \! = \! \frac{\beta-1}{\beta+1} \,\, . \label{eq:powerlaws} \eeq Note that if at high frequencies we have $S(\omega \! \approx \! n\pi/\te) \! \propto 1/\omega^{\beta'}$ (while the low-frequency spectrum is still described by the exponent $\beta$), then $\beta'$ replaces $\beta$ in the above formulas everywhere except in the numerator of the formula for $\eta$. Clearly the key signature of $1/f^{\beta}$-type noise dominating the decoherence at an OWP is the power-law asymptotic decay of $W(\te)$.
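The resummation leading to Eq.~(\ref{eq:Wlf}) can be checked term by term; the short sketch below (illustrative, with an arbitrary value of $\chi_{2}$) sums the cumulants of Eq.~(\ref{eq:chi2n}) and compares the result with the closed form, which converges for $2\chi_{2} \! < \! 1$:

```python
import numpy as np

chi2 = 0.25                                 # any value with 2*chi2 < 1
ns = np.arange(1, 201)
chi_2n = -0.5 * (-1.0) ** ns / ns * (2.0 * chi2) ** ns    # Eq. (chi2n)
W_series = np.exp(-chi_2n.sum())            # exponent of Eq. (Wchi), resummed
W_closed = 1.0 / np.sqrt(1.0 + 2.0 * chi2)  # Eq. (Wlf)
print(W_series, W_closed)                   # both ~ 0.8165 here
```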
Similar tails were obtained for free evolution \cite{Makhlin_PRL04,Falci_PRL05} and spin echo \cite{Ithier_PRB05} at the OWP for the case of quasi-static noise (i.e.~noise with a high-frequency cutoff $\omega_{\text{h}}$ leading to complete loss of phase coherence on a timescale $\te\! \ll \! 1/\omega_{\text{h}}$). Equation (\ref{eq:Wlf}) generalizes these results to the case of decoherence under the influence of multiple pulses, with the only assumption being that $\sigma_{0}^{2}$ closely approximates the total power of the noise. Another signature of dephasing induced by quadratic coupling to noise with a strong low-frequency spectral component is the dependence of the measured coherence signal on the total measurement time $T_{M}$. For linear coupling to $1/f^{\beta}$-type noise such a dependence appears only in the case of free evolution (without pulses) \cite{Makhlin_CP04,Ithier_PRB05}. At the OWP this effect is also present under dynamical decoupling. \begin{figure}[t] \includegraphics[width=\linewidth]{Fig_xi2_lf.eps} \caption{(Color online)~Decoherence due to OU noise (with $\sigma\! = \! 1$) at an OWP for CP sequence with $n\! = \! 1$, $5$, and $11$. Symbols are the results of numerical simulation. For each $\te$ the averaging time was $T_{M} \! = \! M\te$ with $M\! = \! 10^{6}$, so that the resulting $\sigma^{2}_{0}$ was well approximated by the total power of the OU noise, $\sigma^{2}$. With coupling $v_{2} \! = \! 10^{5}/t_{c}$, the coherence decay in the presented time range is due to the $1/\omega^{2}$ tail of $S(\omega)$. The solid lines are obtained using Eq.~(\ref{eq:Wlf}). For $n\!= \! 5$ the dotted line is the Gaussian approximation, and the dashed line is the $W(\te)\! \sim \! \te^{-3/2}$ asymptotics from Eq.~(\ref{eq:powerlaw}). } \label{fig:Wlf} \end{figure} When $W(\te)$ is measured reliably for large $n$ in a wide range of $\te$, Eq.~(\ref{eq:Wlf}) together with Eq.~(\ref{eq:Rl}) can be used to reconstruct $\vq^{2}\sigma^{2}_{0}S(\omega)$.
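The reconstruction amounts to inverting Eq.~(\ref{eq:Wlf}). Assuming, in analogy to Eq.~(\ref{eq:R2}), the large-$n$ single-peak form $R^{l}_{2}(\te) \approx (8\te/\pi^{2})S(n\pi/\te)$, one gets $\vq^{2}\sigma^{2}_{0}S(n\pi/\te) \! = \! \pi^{2}(W^{-2}-1)/(32\te)$. The sketch below (illustrative code with made-up parameters, using the OU spectrum as a stand-in for measured data generated from the same approximation) demonstrates the procedure:

```python
import numpy as np

# Synthetic "data": W(t) from Eq. (Wlf) with the large-n single-peak form
# R2l(t) ~ (8 t / pi^2) S(n pi / t); OU spectrum stands in for the noise.
sigma, tc, v2, sigma0, n = 1.0, 1.0, 10.0, 1.0, 16        # illustration values
S = lambda w: 2.0 * sigma**2 * tc / (1.0 + (w * tc) ** 2)

t = np.geomspace(0.05, 5.0, 30) * tc
R2l = 8.0 * t / np.pi**2 * S(n * np.pi / t)
W = 1.0 / np.sqrt(1.0 + 4.0 * v2**2 * sigma0**2 * R2l)

# Inversion of Eq. (Wlf): v2^2 sigma0^2 S(n pi / t) = pi^2 (W^-2 - 1) / (32 t)
S_rec = np.pi**2 * (W ** (-2) - 1.0) / (32.0 * t * v2**2 * sigma0**2)
print(np.max(np.abs(S_rec / S(n * np.pi / t) - 1.0)))     # ~ 0 by construction
```

Each pair $(n,\te)$ thus yields one point of the spectrum at $\omega \! = \! n\pi/\te$, up to the overall factor $\vq^{2}\sigma^{2}_{0}$.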
Let me remind the reader that Eq.~(\ref{eq:Wlf}) holds for $S(\omega \! \approx \! n\pi/\te)$ of any form; we only require the low-frequency part of the spectrum to diverge as $1/\omega^{\beta}$ and to dominate the total noise power. Alternatively, when the noise is of $1/\omega^{\beta}$ form also at high frequencies, fitting the $n$ dependence of the power-law tail of $W(\te)$ using Eq.~(\ref{eq:powerlaws}) allows for inferring the value of $\beta$. If the noise is of $1/\omega^{\beta}$ form only at low frequencies, then the dependence of $W(\te)$ on $T_{M}$ can be used to fit the value of $\beta$. In Fig.~\ref{fig:Wlf} Eq.~(\ref{eq:Wlf}) is compared with the results of numerical simulations of dephasing due to noise with $S(\omega) \! \propto \! 1/\omega^{2}$ and a low-frequency cutoff at $\omega_{0} \! \ll \! 1/\te$ (actually an OU noise strongly coupled to the qubit, causing dephasing for $\te \! \ll \! t_{c} \!=\! \omega_{0}^{-1}$). One can also see there how the Gaussian approximation fails when $W(\te)$ decays significantly on a timescale $\te \! \ll \! t_{c}$. The asymptotic decay of coherence, according to Eq.~(\ref{eq:powerlaw}), should be given in this case by $1/t^{3/2}$, and the dashed line for $n\! = \! 5$ shows that it is indeed a good approximation to the results of the numerical simulation. The above derivation is based on the assumption that the contribution of the first term in the exponent in Eq.~(\ref{eq:Wfull}) can be neglected, i.e.~when $1/\omega_{0} \! \gg \! \te$. In fact, this is only a necessary condition. Let us now discuss the approximate sufficient condition. In the second order with respect to this term we obtain $\chi_{2}^{\text{hf}}(\te)$, which for $1/\omega^\beta$ noise is given by a formula analogous to the one for $\chi_{2}(\te)$, only with $\sigma^{2}_{0}$ replaced by \beq \sigma_{\te}^{2} \approx \int_{1/\te}^{\infty} S(\omega)\frac{\text{d}\omega}{\pi} \,\, , \eeq so that we have $\chi^{\text{hf}}_{2}(\te)/ \chi_{2}(\te) \! = \!
\sigma^{2}_{\te}/\sigma^{2}_{0}$. For noise with the low-frequency cutoff determined by the measurement time $T_{M}$ this means that $\chi^{\text{hf}}_{2}(\te)/ \chi_{2}(\te) \! = \! (\te/T_{M})^{\beta -1}$. Using $\chi_{2}(\te) \! =\! (t/T_{2})^{\beta+1}$ we see that $\chi_{2}^{\text{hf}}(\te) \! \ll \! 1$ when $(t/T_{M}) \! \ll \! (T_{2}/T_{M})^{1/2+1/2\beta}$. In the case of OU noise with the low-frequency cutoff given by $1/t_{c}$, which is used in the simulations shown in the Figures, this means that for the high-frequency correction to be negligible we need $t/t_{c} \! \ll \! (T_{2}/t_{c})^{3/4}$. With the $T_{2}$ values in Fig.~\ref{fig:Wlf} this means that $t/t_{c}$ should not be larger than about 10 times $T_{2}/t_{c}$. In fact, at longer times $\te$, when the previously derived $W(\te) \! \approx \! 1/\sqrt{2\chi_{2}(\te)} \! \ll \! 1$, in order to have $\chi^{\text{hf}}_{2}(\te) \! \ll \! 1$ we need \beq \frac{1}{\pi} \frac{\te}{t_{c}} \ll W^{2}(\te) \,\, . \eeq This means that in order for the formulas from this section to work down to $W(\te)\! \approx \! 0.1$ in the case of the noise used in the simulations shown in Fig.~\ref{fig:Wlf}, the condition $\te/t_{c} \! \ll \! \pi\cdot 10^{-2}$ has to be fulfilled. The results of numerical simulations in Fig.~\ref{fig:Wlf} are shown for $\te/t_{c} \! \leq \! 10^{-2}$, and in fact at longer $\te$ the simulated $W(\te)$ falls below the results of Eq.~(\ref{eq:Wlf}) due to the influence of high-frequency noise components. The fact that the simulation datapoints are slightly below the theoretical solid line in Fig.~\ref{fig:Wlf} at $\te/t_{c} \! \approx \! 10^{-2}$ is a precursor of this.
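The scaling $\sigma^{2}_{\te}/\sigma^{2}_{0} \! = \! (\te/T_{M})^{\beta-1}$ underlying these conditions can be checked by direct numerical integration of the two frequency bands (illustration for $\beta \! = \! 2$ with arbitrary cutoffs; the upper cutoff of the high-frequency band is made finite but large, so the $1/\omega^{2}$ integral is essentially exhausted):

```python
import numpy as np

beta, A, te, TM = 2.0, 1.0, 1.0, 1.0e4     # illustration values with TM >> te
S = lambda w: A / w**beta

w_lo = np.geomspace(1.0 / TM, 1.0 / te, 200_001)   # contributes to sigma_0^2
w_hi = np.geomspace(1.0 / te, 1.0e6 / te, 200_001) # contributes to sigma_te^2
sigma0_sq = (S(w_lo) * np.gradient(w_lo)).sum() / np.pi
sigmate_sq = (S(w_hi) * np.gradient(w_hi)).sum() / np.pi

ratio = sigmate_sq / sigma0_sq
print(ratio, (te / TM) ** (beta - 1.0))    # chi2_hf / chi2 ~ (te/TM)^(beta-1)
```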
\section{Simulations for coupling to transverse noise} \label{sec:transverse} Let us now come back to the issue of the relation between the calculations using the effective Hamiltonian from Eq.~(\ref{eq:H}) (i.e.~all the results presented before) and the behavior of coherence at the extrinsic OWP, appearing at large qubit splitting $\Omega$ in the presence of transverse noise. We thus take the Hamiltonian from Eq.~(\ref{eq:Ht}), describing the often-encountered case of uniaxial transverse noise, and we set out to re-analyze the dynamics of coherence decay under dynamical decoupling. \begin{figure}[t] \includegraphics[width=\linewidth]{Fig_SE_pulses.eps} \caption{(Color online)~Decoherence due to transverse OU noise with $\sigma \! = \! 1$ for CP sequence with $n\! = \! 1$ (spin echo). The transverse coupling constant is $v_{t}\! = \! \sqrt{2\Omega v_{2}}$ and $v_{2}\!= \! 10^{5}/t_{c}$. The notation $(i,\pi_{j})$ specifies the initial direction of the qubit ($i\! =\! x$, $y$) and the axis about which the $\pi$ pulse is applied ($j\!= \! x$, $y$). The solid lines are obtained using Eq.~(\ref{eq:Wlf}). } \label{fig:SE_pulses} \includegraphics[width=\linewidth]{Fig_CP4_pulses.eps} \caption{(Color online)~Decoherence due to transverse OU noise with $\sigma \! = \! 1$ for CP sequence with $n\! = \! 4$. Parameters and symbols are the same as in Fig.~\ref{fig:SE_pulses}. } \label{fig:CP4_pulses} \end{figure} We focus on the case in which the qubit is initialized along the $x$ or $y$ axis (i.e.~in a superposition of eigenstates of $\Omega \hat{\sigma}_{z}/2$), and the coherence of the qubit (the off-diagonal element of its density matrix in the same basis) is monitored as a function of time. The dephasing due to transverse noise can be approximately mapped onto pure dephasing described by Eq.~(\ref{eq:H}) when $\Omega$ is much larger than the rms amplitude of the noise felt by the qubit, $v_{t}\sigma$.
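The mapping rests on expanding the instantaneous splitting of the Hamiltonian with frozen $\xi$: $\sqrt{\Omega^{2}+v_{t}^{2}\xi^{2}} \approx \Omega + (v_{t}^{2}/2\Omega)\xi^{2}$, i.e.~a quadratic coupling with $\vq \! = \! v_{t}^{2}/2\Omega$. A minimal numerical check (illustrative parameter values):

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

Omega, vt, xi = 100.0, 1.0, 0.7            # deep in the Omega >> vt*sigma regime
H = 0.5 * Omega * sz + 0.5 * vt * xi * sx  # instantaneous Hamiltonian, frozen xi

E = np.linalg.eigvalsh(H)                  # eigenvalues in ascending order
splitting = E[1] - E[0]                    # exact: sqrt(Omega^2 + vt^2 xi^2)
approx = Omega + (vt**2 / (2.0 * Omega)) * xi**2   # Omega + v2 xi^2, v2 = vt^2/(2 Omega)
print(splitting - approx)                  # residual is O(vt^4 xi^4 / Omega^3)
```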
In this regime, as discussed in Section \ref{sec:OWP}, in the lowest order in the transverse perturbation we can derive Eq.~(\ref{eq:H}) from Eq.~(\ref{eq:Ht}). For any $\Omega$ present in the transverse coupling Hamiltonian, the effective pure-dephasing Hamiltonian has $v_{2} = v^{2}_{t}/2\Omega$. Now we want to give at least a qualitative answer to the question: for what values of $\Omega$, $n$, and $t$ do the previously obtained results also describe dephasing caused by transverse noise? The first thing that should be noted is that now the influence of ideal (error-free) $\pi$ pulses does depend on the axis with respect to which these rotations are performed. As noted in \cite{Bergli_PRB07}, a $\pi$ pulse about the axis along which the noise couples to the qubit (the $x$ axis in Hamiltonian (\ref{eq:Ht})) should be less efficient in mitigating noise-induced dephasing than a pulse about the perpendicular $y$ axis. This is easiest to see in the case of quasi-static noise, for which a single $\pi_{y}$ pulse leads to a perfect echo of the coherence (more precisely, an echo sequence with such a pulse gives $\mean{\hat{\sigma}_{x}(t)} \! = \! -\mean{\hat{\sigma}_{x}(0)}$ and $\mean{\hat{\sigma}_{y}(t)} \! = \! \mean{\hat{\sigma}_{y}(0)}$ at the echo time $t$), while the $\pi_{x}$ pulse gives only a partial recovery of coherence. A strong asymmetry between the performance of $\pi_{x}$ and $\pi_{y}$ pulses was also seen in the case of coupling to non-Gaussian random telegraph noise \cite{Bergli_PRB07,Ramon_PRB12}. The application of $\pi_{z}$ pulses (which in the currently considered case are expected to have some influence, since the noise couples to the qubit through $\hat{\sigma}_{x}$) requires the interpulse time to be $\ll 1/\Omega$, otherwise the dephasing is actually enhanced \cite{Faoro_PRL04}. For this reason only $\pi_{x}$ and $\pi_{y}$ pulses are considered below.
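The perfect $\pi_{y}$ echo for quasi-static noise is easy to confirm directly: since $\hat{\sigma}_{y}H\hat{\sigma}_{y} \! = \! -H$ for any $H$ built from $\hat{\sigma}_{z}$ and $\hat{\sigma}_{x}$ only, the echo sequence $U\pi_{y}U$ reduces to $\pi_{y}$ alone. A minimal numerical check (illustrative parameter values, one frozen noise realization):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def U_free(Omega, vt, xi, tau):
    """exp(-i H tau) for static H = (Omega sz + vt xi sx)/2, using H^2 = E^2 * 1."""
    H = 0.5 * (Omega * sz + vt * xi * sx)
    E = 0.5 * np.sqrt(Omega**2 + (vt * xi) ** 2)
    return np.cos(E * tau) * np.eye(2) - 1j * np.sin(E * tau) * H / E

Omega, vt, xi, t = 5.0, 2.0, 0.9, 3.0      # quasi-static noise: xi frozen during t
pi_y, pi_x = -1j * sy, -1j * sx            # ideal pi pulses about y and x
psi0 = np.array([1.0, 1j]) / np.sqrt(2)    # qubit prepared along +y

out = {}
for name, pulse in [("pi_y", pi_y), ("pi_x", pi_x)]:
    psi = U_free(Omega, vt, xi, t / 2) @ (pulse @ (U_free(Omega, vt, xi, t / 2) @ psi0))
    out[name] = float(np.real(np.conj(psi) @ (sy @ psi)))
print(out)   # pi_y refocuses <sigma_y> to exactly 1; pi_x recovers it only partially
```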
\begin{figure}[t] \includegraphics[width=\linewidth]{Fig_transverse_gaussian.eps} \caption{(Color online)~Decoherence due to transverse OU noise (with $\sigma\! = \! 1$) at an OWP for CPMG $(y,\pi_{y})$ sequence with $n\! = \! 1$, $11$, and $21$. Symbols are the results of numerical simulation for transverse noise with $v_t$ couplings chosen such that for qubit splitting $\Omega\!=\! 10 v_{2}\sigma^{2}$ (upper panel) and $\Omega\!=\! 100 v_{2}\sigma^{2}$ (lower panel) the results are expected to match the effective Hamiltonian results from Fig.~\ref{fig:gaussian}, where $v_{2}\! = \! 1/t_{c}$ was used. Solid lines are the Gaussian approximation $W(\te)\! =\! \exp[-\chi_{2}(t)]$, while the dashed line for $n\!= \! 21$ is $W(\te)\! =\! \exp[-\chi_{2}(t) - \chi_{4}(t)]$. } \label{fig:Wg_transverse} \end{figure} \begin{figure}[t] \includegraphics[width=\linewidth]{Fig_transverse_lf.eps} \caption{(Color online)~Decoherence due to transverse OU noise for CPMG $(y,\pi_{y})$ sequence with $n\! = \! 1$, $5$, and $11$. Symbols are the results of numerical simulation for transverse noise with $v_t$ coupling chosen such that the results are expected to match the effective Hamiltonian calculations from Fig.~\ref{fig:Wlf} (where $\vq \! = \! 10^{5}/t_{c}$ has been used). The circle and cross symbols correspond to two values of $\Omega$. The solid lines are obtained using Eq.~(\ref{eq:Wlf}). } \label{fig:Wlf_transverse} \end{figure} However, in the case of coupling to transverse Gaussian noise which is not exactly quasi-static, these differences are less pronounced. In Fig.~\ref{fig:SE_pulses} the results of simulations of the echo sequence are shown for the case of Ornstein-Uhlenbeck noise, in the coupling regime in which the coherence signal decays for $t \! \ll \! t_{c}$. Two possible initial states $i\! = \! x$, $y$ of the qubit (along the $x$ and $y$ axes) and the two above-discussed $\pi_{j}$ pulses are considered - the notation $(i,\pi_{j})$ is used to label the results.
For $\Omega \! = \! 10\vq$ (and assuming $\sigma\! = \! 1$ for the noise) we see that the coherence is best protected in the $(x,\pi_{y})$ case, while $(x,\pi_{x})$ fares the worst. The differences between the various cases almost disappear for $\Omega \! = \! 100\vq$ (this corresponds to $v_{t} \! = \! \sqrt{200}\vq \! \ll \! \Omega$). It is worth noting that the $(y,\pi_{y})$ results match the results of the effective Hamiltonian calculation (the solid line) very well even for the lower value of $\Omega$. In Fig.~\ref{fig:CP4_pulses} analogous results are shown for the CP sequence with $n\! = \! 4$ pulses. The accumulation of errors (leading to apparent coherence loss) is evident, especially in the $(y,\pi_{x})$ case (which, somewhat surprisingly, agreed very well with the effective Hamiltonian results for $n\! = \! 1$). However, the $(y,\pi_{y})$ results still match the previously described theory, even at quite low values of $\Omega$. In the following we will use the transverse noise simulations for this case to check whether the previously presented results correspond also to the case of an extrinsic OWP. It is interesting to note that the $(y,\pi_{y})$ case corresponds precisely to the Carr-Purcell-Meiboom-Gill sequence \cite{Meiboom_Gill}: the qubit initialized along the $z$ axis is rotated by $\pi/2$ with a pulse about the $x$ axis, and the subsequent $\pi$ pulses are about the $y$ axis. This sequence is known to be robust against systematic pulse errors \cite{Borneman_JMR10}, which is probably related to the above-discussed behavior - slow components of the noise cause quasi-static perturbations of the qubit's quantization axis, which translate into quasi-systematic pulse errors. In Figures \ref{fig:Wg_transverse} and \ref{fig:Wlf_transverse} I show the results of simulations of decoherence due to transverse noise, with parameters chosen to match the parameters of the previously considered effective Hamiltonians.
These Figures should be compared with the previously presented Figs.~\ref{fig:gaussian} and \ref{fig:Wlf}, respectively. Comparison of Fig.~\ref{fig:Wg_transverse} with Fig.~\ref{fig:gaussian} shows that the effective ``Gaussianization'' of noise discussed in Section \ref{sec:Gaussian} holds quite well in the case of an extrinsic OWP, provided that $\Omega$ is large enough: the results for $\Omega \! = \! 10\vq$ visibly differ from those from Fig.~\ref{fig:gaussian}, but for $\Omega \! = \! 100 \vq$ the differences are almost invisible. In fact, the differences are most pronounced at short times, at which the apparent decoherence due to the influence of pulses pushes the simulation results below the theoretical prediction; note that the points for $n\! =\! 21$ and $\Omega\! = \! 100 \vq$ are slightly below the theoretical solid line at short times. On the other hand, when decoherence occurs at times $t \! \ll \! t_c$, the simulations of the influence of transverse noise agree very well with the analytical results from Section \ref{sec:1f}, even at smaller values of $\Omega$, see Fig.~\ref{fig:Wlf_transverse}. The results shown in this Figure for $\Omega\! = \! 10 \vq$ and $100 \vq$ are indistinguishable from those presented earlier in Fig.~\ref{fig:Wlf}. \section{Conclusion} \label{sec:discussion} In this paper I have discussed how one can perform the spectroscopy of the environmental (classical and Gaussian) noise $\xi(t')$ when the stochastic contribution to the qubit's energy splitting is $\propto \xi^{2}(t')$. Furthermore, the numerical simulations have shown that coherence dynamics under dynamical decoupling for a qubit affected by purely transverse noise is described by the same analytical theory quite well (at least for the moderate numbers of pulses used in this paper) when the CPMG pulse sequence is used.
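The spectroscopic interpretation of these results relies on the locality of the dynamical-decoupling filter: for the CPMG sequence with $n$ pulses the filter function is strongly peaked near $\omega \! = \! n\pi/t$, so the measured coherence predominantly probes $S(\omega)$ around that frequency. This can be checked numerically with the standard definition of the filter as the finite-time Fourier transform of the pulse-sequence sign function (a sketch; the parameter values are illustrative):

```python
import numpy as np

def cpmg_filter(n_pulses, total_t, omegas, n_steps=4096):
    """|f(omega)|^2 with f(omega) = int_0^t s(t') exp(i omega t') dt',
    evaluated numerically for the CPMG sign function s(t')."""
    t = (np.arange(n_steps) + 0.5) * total_t / n_steps
    pulses = (2.0 * np.arange(1, n_pulses + 1) - 1.0) * total_t / (2.0 * n_pulses)
    s = (-1.0) ** np.searchsorted(pulses, t)
    dt = total_t / n_steps
    return np.array([np.abs(np.sum(s * np.exp(1j * w * t)) * dt) ** 2
                     for w in omegas])

n, t_total = 8, 1.0
omegas = np.linspace(0.1, 60.0, 2000)
F = cpmg_filter(n, t_total, omegas)
peak = omegas[np.argmax(F)]   # lands close to n*pi/t_total
```

The located maximum sits at $\omega \approx n\pi/t$, while the quasi-static part of the noise is strongly suppressed, $F(\omega \! \to \! 0) \to 0$.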
For noise with no low-frequency divergence of its spectral density $S(\omega)$ (i.e.~for noise with a well-defined correlation time comparable to or shorter than the qubit's evolution timescale), using a large number of pulses $n$ one can use established methods of dynamical decoupling noise spectroscopy to reconstruct the spectral density of the $\xi^{2}(t')$ process. For noise with $S(\omega) \! \sim \! 1/\omega^{\beta}$ at low frequencies (and with $\beta \! > \! 1$ guaranteeing that most of the noise power is concentrated at low frequencies), a closed formula for the qubit's dephasing under dynamical decoupling has been derived. At large $n$, the coherence at time $\te$ is determined by $S(n\pi/\te)$, as in the case of linear coupling to noise, but the functional form of the coherence decay is distinct from that case: the decay at long times is not superexponential, but of power-law character. Furthermore, the coherence signal depends on the low-frequency cutoff of the noise spectrum, which is typically given by the inverse of the total data acquisition time $T_{M}$. Because of this, information about both the low- and high-frequency parts of $S(\omega)$ can be obtained from measurements of the characteristic decay timescale and of the asymptotic behavior of the coherence. The presented results therefore show how one can obtain quantitative information on the noise spectrum by measuring decoherence of the qubit operated at its optimal point. \section*{Acknowledgements} I would like to thank C.~M.~Marcus for inspiring discussions. I also thank V.~Mkhitaryan, V.~V.~Dobrovitski, and G.~Ramon for discussions and comments. This work is supported by funds of the Polish National Science Center (NCN), grant no.~DEC-2012/07/B/ST3/03616.
\section{Introduction} In this paper, we present a richly annotated and genre-diversified language resource, the Prague Dependency Treebank-Consolidated version 1.0 (PDT-C in the sequel). PDT-C \citelanguageresource{lrPDT-C}\footnote{\url{https://ufal.mff.cuni.cz/pdt-c}} is a treebank from the family of PDT-style corpora developed in Prague (for more information, see \newcite{haj17}). The main features of this annotation style are: \begin{itemize} \item it is based on a well-developed dependency syntax theory known as the Functional Generative Description \cite{SgallHP:1986}, \item it comprises interlinked hierarchical layers of annotation, \item it includes a deep syntactic layer with certain semantic features. \end{itemize} From 2001, when the first PDT treebank was published, various branches of PDT-style corpora have been developed, with different volumes of manual annotation on varied types of texts, differing in both the original language and genre specification. The treebanks were built with different intentions, and the manual annotation included in them covered different parts of the PDT annotation scheme (see Sect.~\secref{sec:from}). In the PDT-C project, we integrate four genre-diversified PDT-style corpora of Czech texts (see Sect.~\secref{sec:data}) into one consolidated edition, uniformly and manually annotated at the morphological, surface and deep syntactic layers (see Sect.~\secref{sec:layers}).
In the current PDT-C 1.0 release, manual annotation has been fully performed at the lowest, morphological layer (lemmatization and tagging); also, the basic phenomena of the annotation at the highest, deep syntactic layer (structure, functions, valency) have been annotated manually in all four datasets.\footnote{Consolidation and fully manual annotation of the surface-syntactic layer is planned for the next, 2.0 version of PDT-C.} The difference from the separately published original treebanks can be briefly described as follows: \begin{itemize} \item it is published in one package, to allow easier data handling for all the datasets; \item the data is enhanced with manual linguistic annotation at the morphological layer, and a new version of the morphological dictionary is enclosed (see Sect.~\secref{sec:morph}); \item a common valency lexicon for all four original parts is enclosed; \item a number of errors found during the process of manual morphological annotation have been corrected (see also Sect.~\secref{sec:morph}).
\end{itemize} PDT-C 1.0 is provided as a digital open resource accessible to all users via the LINDAT/CLARIN repository.\footnote{{\url{http://hdl.handle.net/11234/1-3185}}} \begin{table*}[t] \captionsetup{justification=centering} \begin{center} \begin{tabular}{lllll} Dataset/Type of annotation & Written & Translated & Spoken & User-generated \\\hline\hline Audio & non-applicable & non-applicable & provided & non-applicable \\ ASR transcript & non-applicable & non-applicable & provided & non-applicable \\ Transcript & non-applicable & non-applicable & manually & non-applicable \\ \multicolumn{5}{c}{Morphological layer\vbox{\vskip 1.5em}} \\\hline Speech reconstruction & non-applicable & non-applicable & manually & non-applicable \\ \textbf{Lemmatization} & manually & \textbf{manually} & \textbf{manually} & \textbf{manually} \\ \textbf{Tagging} & manually & \textbf{manually} & \textbf{manually} & \textbf{manually} \\ \multicolumn{5}{c}{Surface syntactic layer\vbox{\vskip 1.5em}} \\\hline Dependency structure & manually & automatically & automatically & automatically \\ Surface syntactic functions & manually & automatically & automatically & automatically \\ Clause segmentation & manually & not annotated & not annotated & not annotated \\ \multicolumn{5}{c}{Deep syntactic layer\vbox{\vskip 1.5em}} \\\hline Deep syntactic structure & manually & manually & manually & manually \\ Deep syntactic functions & manually & manually & manually & manually \\ Verbal valency & manually & manually & manually & manually \\ Nominal valency & manually & not annotated & not annotated & not annotated \\ Grammatemes & manually & not annotated & not annotated & not annotated \\ Coreference & manually & manually & manually & not annotated \\ Topic-focus articulation & manually & not annotated & not annotated & not annotated \\ Bridging relations & manually & not annotated & not annotated & not annotated \\ Discourse & manually & not annotated & not annotated & not annotated \\ Genre 
specification & manually & not annotated & not annotated & not annotated \\ Quotation & manually & not annotated & not annotated & not annotated \\ Multiword expressions & manually & not annotated & not annotated & not annotated\\ \end{tabular} \caption{Overview of various types of annotation and their realization in the datasets (new manual annotation made to PDT-C 1.0 is indicated in bold)} \label{tab:annot} \end{center} \end{table*} \section{Related Work} \label{sec:related} There is a wide range of corpora with rich linguistic annotation, e.g., the Penn Treebank \cite{marcus1993building}, its successors PropBank \cite{kingsbury2002treebank} and NomBank \cite{meyers2004annotating} and OntoNotes \cite{Hovy:2006:O9S:1614049.1614064}; for German, there are Tiger \cite{brants2002tiger} and Salsa \cite{burchardt2006salsa}. The Prague Dependency Treebank is an effort inspired by the Penn Treebank \cite{marcus1993building} (but annotated natively in the dependency style) and it is unique in its attempt to systematically include and link different layers of language including the deep syntactic layer. The PDT project has been successfully developed over the years, and the PDT annotation scheme has been used for the in-house development of other related treebanks of Czech texts (see Sect.~\secref{sec:from}) and also for treebanks originating elsewhere: HamleDT \cite{biblio:ZeMaHamleDTTo2012}, the Slovene Dependency Treebank \cite{dvzeroski2006towards}, the Greek Dependency Treebank \cite{prokopidis2005theoretical}, the Croatian Dependency Treebank \cite{berovic2012croatian}, the Latin Dependency Treebank \cite{bamman2006design}, and the Slovak National Corpus \cite{vsimkova2006}. \section{From PDT to PDT-C} \label{sec:from} The first version of the Prague Dependency Treebank (PDT in the sequel) was published in 2001 \citelanguageresource{biblio:HaViPragueDependency2001}.
It only contained annotation at the morphological and surface syntactic layers, and a very small ``preview'' of how the deep syntactic annotation might look. The full manual annotation at all three annotation layers including the deep syntactic one is present in the second version, PDT 2.0, published in 2006 \citelanguageresource{biblio:HaPaPragueDependency2006}. The later versions of PDT did not bring more annotated data, but enriched and corrected the annotation of the PDT 2.0 data. The latest edition of the core PDT corpus is version 3.5 \citelanguageresource{lrPDT35}\footnote{\url{http://ufal.mff.cuni.cz/pdt3.5}} encompassing all previous versions, corrections and additional annotation made under various projects between 2006 and 2018 on the original texts (clause segmentation at the surface syntactic layer, annotation of bridging relations, discourse, genre specifications, etc.; see Sect.~\secref{sec:layers}). A slightly modified (simplified) scenario has been used for other PDT-corpora based on Czech texts: the Prague Czech-English Dependency Treebank (the latest published versions are PCEDT 2.0 \citelanguageresource{PCEDT-LR}\footnote{\url{https://ufal.mff.cuni.cz/pcedt2.0}} and PCEDT 2.0 Coref \citelanguageresource{PCEDTcoref}\footnote{\url{https://ufal.mff.cuni.cz/pcedt2.0-coref}}), the Prague Dependency Treebank of Spoken Czech (the latest published version is 2.0 \citelanguageresource{pdtsc-LR}\footnote{\url{https://ufal.mff.cuni.cz/pdtsc2.0}}), and the unpublished small treebank PDT-Faust.\footnote{\url{https://ufal.mff.cuni.cz/grants/faust}} In contrast to the original project of PDT, in these treebanks, the morphological and surface syntactic annotations were done automatically, and the manual annotation at the deep syntactic layer is simplified: grammatemes, topic-focus articulation, nominal valency, and other special annotations are absent.
In the PDT-C project, we aim to provide all these included treebanks with full manual annotation at the lower layers and to unify and correct the annotation at all layers. Specifically, the data in PDT-C 1.0 is (mainly) enhanced with manual annotation at the morphological layer, consistently across all four original treebanks (see Sect.~\secref{sec:morph}). \begin{table*}[t] \centering \begin{tabular}{l|rrrrr} & Written & Translated & Spoken & Internet & Total \\\hline Morphological layer & 1,957,247 & \textbf{1,162,072} & \textbf{742,257} & \textbf{33,772} & 3,895,348 \\ Surface syntactic layer & 1,503,739 & 1,162,072 & 742,257 & 33,772 & 3,441,840 \\ Deep syntactic layer & 833,195 & 1,162,072 & 742,257 & 33,772 & 2,771,296 \\ \end{tabular} \caption{Volume of the datasets (number of tokens, new manual annotation made to PDT-C 1.0 is indicated in bold)} \label{tab:volume} \end{table*} \section{Data} \label{sec:data} As mentioned above, PDT-C 1.0 consists of four different datasets coming from the PDT-corpora of Czech published earlier: a dataset of written texts, a dataset of translated texts, a dataset of spoken texts, and a dataset of user-generated texts. These datasets are described in the following subsections. The data volume is given in Table~\ref{tab:volume}. Altogether, the consolidated treebank contains 3,895,348 tokens with manual morphological annotation and 2,771,296 tokens with manual deep syntactic annotation (manual annotation of the surface syntactic layer is contained only in the dataset of written texts and it covers 1,503,739 tokens). Table~\ref{tab:annot} presents an overview of the various types of annotation at the three annotation layers (see Sect.~\secref{sec:layers}) in each dataset and information on the manner in which the annotation was carried out. The newly provided manual morphological annotation made to PDT-C 1.0 is indicated in bold.
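The row totals of Table~\ref{tab:volume} can be cross-checked directly from the per-dataset token counts (a trivial sketch; the numbers are copied from the table):

```python
# Cross-check of the per-layer token totals (written, translated,
# spoken, user-generated datasets), as reported in the volume table.
counts = {
    "morphological": [1_957_247, 1_162_072, 742_257, 33_772],
    "surface":       [1_503_739, 1_162_072, 742_257, 33_772],
    "deep":          [833_195,   1_162_072, 742_257, 33_772],
}
totals = {layer: sum(c) for layer, c in counts.items()}
```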
The markup used in PDT-C 1.0 is the language-independent Prague Markup Language (PML), which is an XML subset (using a specific scheme) customized for multi-layered linguistic annotation \cite{biblio:PaStRecentAdvances2008}. \subsection{Written Data} The dataset of written texts is based on the core \textbf{Prague Dependency Treebank} \citelanguageresource{lrPDT35}. The data consist of articles from Czech daily newspapers and magazines. The annotation in the PDT dataset is the richest one and it has been performed completely manually (cf. Table~\ref{tab:annot}). In PDT-C 1.0, the annotation at the morphological layer has been checked and corrected to reflect the updated morphological annotation guidelines and also to be fully consistent with the morphological dictionary (see Sect.~\secref{sec:morph}). \subsection{Translated Data} The dataset of translated texts comes from the \textbf{Prague Czech-English Dependency Treebank} (PCEDT in the sequel \citelanguageresource{PCEDTcoref}). PCEDT is a (partially) manually annotated Czech-English parallel corpus. The English part consists of the Wall Street Journal sections of the Penn Treebank \citelanguageresource{penntb-LR}, with the original annotation preserved, but also converted to the PDT-style morphology and dependency syntax annotation. A manually annotated deep syntactic layer was added. The Czech part of PCEDT used in the PDT-C consolidated edition has been manually (and professionally, with multiple quality control passes) translated from the English original, sentence by sentence \cite{biblio:HaHaAnnouncingPrague2012}. In the PDT-C 1.0 translation dataset coming from PCEDT, there is a simplified manual annotation at the deep syntactic layer. The annotation at the surface syntactic layer is still done only by an automatic parser, while new manual annotation has been added at the morphological layer (cf. Table~\ref{tab:annot}).
\subsection{Spoken Data} The dataset of spoken texts is taken from the \textbf{Prague Dependency Treebank of Spoken Czech} (PDTSC in the sequel \citelanguageresource{pdtsc-LR}). PDTSC contains slightly moderated testimonies of Holocaust survivors from the Shoa Foundation Visual History Archive\footnote{\url{https://ufal.mff.cuni.cz/cvhm/vha-info.html}} and dialogues (two participants chatting over a collection of photographs) recorded for the EC-funded Companions project.\footnote{\url{http://cordis.europa.eu/project/rcn/96289\_en.html}} The spoken data differs from the other included PDT-corpora mainly in the ``spoken'' part of the corpus \cite{biblio:MiMiPDTSC202017}. The process starts at the ``audio'' layer, which contains the audio signal. The next layer contains the transcript as produced by an automatic speech recognition engine (also coming from the Companions project). The word layer contains the manual transcription of the recorded speech, and the morphological layer contains the reconstructed, i.e. grammatically corrected, version of the sentence. From this point on, annotation at the upper layers is standard (see Sect.~\secref{sec:layers}). In the PDT-C 1.0 spoken dataset coming from PDTSC, there is also only a simplified manual annotation at the deep syntactic layer; the annotation at the surface syntactic layer is still done only automatically, and new manual annotation has been added at the morphological layer (cf. Table~\ref{tab:annot}). \subsection{User-generated Data} The dataset of user-generated texts comes from the \textbf{PDT-Faust} corpus. PDT-Faust is a small treebank containing short segments (very often with non-standard as well as expressive, obscene and/or vulgar content) typed in by various users on the \url{reverso.net} webpage for translation. In the PDT-C 1.0 user-generated content dataset, there is also only a simplified manual annotation at the deep syntactic layer. Compared to the other datasets, there is no annotation of coreference.
The annotation at the surface syntactic layer is performed automatically, and manual annotation has been added at the morphological layer (cf. Table~\ref{tab:annot}). \begin{figure}[ht!] \captionsetup{justification=centering} \begin{center} \includegraphics[width=0.40\textwidth]{i-layers-links.png} \caption{ Linking the layers in PDT-C 1.0} \label{fig-layers} \end{center} \end{figure} \begin{figure}[ht!] \captionsetup{justification=centering} \begin{center} \hspace*{-0.07in}\includegraphics[width=0.40\textwidth]{propojeni_rovin_pdtsc.png} \caption{ Annotation layers added for the spoken language dataset of PDT-C 1.0 } \label{fig:addedlayers} \end{center} \end{figure} \section{Multi-layer Architecture} \label{sec:layers} The PDT-annotation scheme, described in detail in \newcite{haj17}, has a multi-layer architecture: \begin{itemize} \item \textbf{morphological layer} (m-layer): all tokens of the sentence get a lemma and a morphological tag, \item \textbf{surface syntactic layer} (a-layer): a dependency tree capturing surface syntactic relations such as subject, object, adverbial, etc., \item \textbf{deep syntactic layer} (t-layer): a representation capturing the deep syntactic relations, ellipses, valency, topic-focus articulation, and coreference; in the process of the further development of the PDT, additional semantic features are being added to the original annotation scheme. \end{itemize} \vspace*{-7mm} In addition to the above-mentioned three (main) annotation layers in the PDT-scenario, there is also the raw text layer (w-layer), where the text is segmented into documents and paragraphs and individual tokens are assigned unique identifiers. As mentioned in Sect.~\secref{sec:data}, there is an additional audio and speech recognition layer (z-layer) in the spoken data. In the spoken data part (as opposed to the written corpora), the w-layer is in fact also an ``annotated'' layer, namely the manually provided transcription of the audio signal. \textbf{Linking the layers}.
In order not to lose any piece of the original information, tokens (nodes) at a lower layer are explicitly referenced from the corresponding closest (immediately higher) layer. These links allow for tracing every unit of annotation all the way down to the original text, or to the transcript and audio (in the spoken data). It should be noted that while the inter-layer links are important for visualizing the trees and for training various NLP tools and applications, they are not part of any annotation layer from the theoretical point of view: the linguistic information should be represented at the higher layer in its own terms, without a loss.\footnote{An exception is the surface syntactic layer, where certain information from the morphology is missing (in the pure definition of its elements); consequently, the m-layer and a-layer are often taken together as a ``morphosyntactic'' annotation layer.} Figure~\ref{fig-layers} shows the relations between the neighboring layers as annotated and represented in the data. The rendered Czech sentence \textit{Byl by šel do lesa.} ('lit.: He-was would went to forest.') contains the past conditional of the verb \textit{jít} ('to go') and a typo. These layers (from the w-layer to the t-layer) are part of all four datasets; the spoken dataset has, in addition, the audio and z-layers as depicted in Fig.~\ref{fig:addedlayers}. In the following subsections, the manual annotation of the most important phenomena is briefly described. \subsection{Spontaneous Speech Reconstruction} Spontaneous speech reconstruction is a special type of manual annotation at the morphological layer that only belongs to the spoken data. The purpose of speech reconstruction is to ``translate'' the input ``ungrammatical'' spontaneous speech to a written text, before it is tagged and parsed.
The transcript is segmented into sentence-like segments and these segments are edited to meet written-text standards, which means cleansing the text of discourse-irrelevant and content-less material (e.g., superfluous connectives and deictic words, false starts, repetitions, etc. are removed) and re-chunking and re-building the original segments into grammatical sentences with acceptable word order and proper morphosyntactic relations between words. For more information about this type of manual annotation see \newcite{biblio:HaCiPDTSLAn2008} and \newcite{biblio:MiMiPDTSC202017}. \subsection{Lemmatization and Tagging} At the morphological layer, a lemma and a tag are assigned to each wordform. The annotation contains no syntactic structure; no attempt is made to put together, e.g., analytical verb forms or other types of multiword expressions. The annotation rules are described in \newcite{novymanual}. There is manual annotation of lemmas and tags in all four datasets of PDT-C 1.0; for the new features, see Sect.~\secref{sec:morph}. \subsection{Surface Syntactic Annotation} The surface syntactic annotation consists of dependency trees with a surface syntactic function (dependency relation) assigned to every edge of the tree. A syntactic function determines the relation between the dependent node and its governing node (which is the node one level ``up'' the tree, in the standard visualization used in PDT-corpora). The annotation guidelines are described in \newcite{analmanual}.\footnote{\url{http://ufal.ms.mff.cuni.cz/pdt2.0/doc/manuals/en/a-layer/html/index.html}} For all the datasets in PDT-C 1.0, the surface syntactic annotation is the result of an automatic dependency parser, except for the core written dataset, which contains manual annotation of surface syntax as it was done for its first released version, PDT 1.0 \citelanguageresource{biblio:HaViPragueDependency2001}.
Moreover, surface syntactic trees in the written data part are enriched with annotation of clause segmentation (cf. Table~\ref{tab:annot}), which was taken from the subsequent releases of PDT \cite{biblio:LoHoAnnotationsentence2012}. \subsection{Deep Syntactic Annotation} One of the important distinctive features of the PDT-style annotation is the fact that in addition to the morphological and surface syntactic layer, it includes complex annotation of deep syntax, with certain semantic features, at the highest layer. Annotation principles used at the deep syntactic layer and the annotation guidelines are described in several annotation manuals \cite{trmanualanot,trmanualref,biblio:MiBeFromPDT2013,biblio:MiAnnotationtectogrammatical2014}.\footnote{\url{http://ufal.mff.cuni.cz/pdt2.0/doc/manuals/en/t-layer/html/index.html}} At the deep syntactic layer, every sentence is represented as a rooted tree with labeled nodes and edges. The tree reflects the underlying dependency structure of the sentence. Unlike at the lower layers, not all the original tokens are represented at this layer as nodes -- the nodes here stand for content words only. Function words (prepositions, auxiliary verbs, etc.) do not have nodes of their own, but their contribution to the meaning of the sentence is not lost: several attributes whose values represent such a contribution (e.g., tense for verbs) are attached to the nodes. Some of the nodes do not correspond to any original token; they are added in cases of surface deletions (ellipses). The types of the (semantic) dependency relations are represented by the \textit{functor} attribute attached to all nodes. \subsection{Valency} The core ingredient of the annotation of the deep syntactic layer is valency, the theoretical description of which, as developed in the framework of Functional Generative Description, is summarized mainly in \newcite{panevova1974verbal}. The valency criterion divides functors into argument and adjunct functors.
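As a toy illustration of how a valency lexicon of the kind described below can be used to enforce consistent annotation, one can check each annotated verb occurrence against its frame; the mini-lexicon here, including the frame for \textit{dát} ('to give'), is invented for illustration and is not taken from the actual lexicon:

```python
# Toy valency-frame consistency check. A frame lists obligatory and
# optional argument functors for one verb sense; an occurrence is valid
# if every obligatory slot is filled and no argument functor outside the
# frame is used (adjunct functors such as LOC are always allowed).
# The mini-lexicon below is invented for illustration.

CORE_ARGUMENTS = {"ACT", "PAT", "ADDR", "ORIG", "EFF"}

VALENCY_LEXICON = {
    ("dát", 1): {"obligatory": {"ACT", "PAT", "ADDR"}, "optional": set()},
}

def check_occurrence(verb_sense, functors):
    """Return a list of problems found for one annotated verb occurrence."""
    frame = VALENCY_LEXICON[verb_sense]
    args = {f for f in functors if f in CORE_ARGUMENTS}   # ignore adjuncts
    allowed = frame["obligatory"] | frame["optional"]
    problems = [f"missing obligatory {f}" for f in sorted(frame["obligatory"] - args)]
    problems += [f"unexpected argument {f}" for f in sorted(args - allowed)]
    return problems
```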
There are five core arguments: Actor ({\tt ACT}), Patient ({\tt PAT}), Addressee ({\tt ADDR}), Origin ({\tt ORIG}) and Effect ({\tt EFF}). In addition, about 50 types of adjuncts (temporal, local, causal, etc.) are used. For a particular verb (or more precisely, verb sense), a subset of the functors is obligatory, while others are either not present at all or are optional. The valency lexicon that all the parts of PDT-C use, \textbf{PDT-Vallex} \cite{biblio:HaPaPDTVALLEXCreating2003,biblio:UrBuildingPDTVALLEX2012}, was built in parallel with the manual annotation. It contains over 11,000 valency frames for more than 7,000 verbs which occurred in the datasets. It has been used for consistent annotation of valency: each occurrence of a verb in all corpora is linked to the appropriate valency frame in the lexicon. \subsection{Coreference} All parts of PDT-C except the user-generated data also capture grammatical and pronominal textual coreference relations (cf. Table~\ref{tab:annot}). Grammatical coreference is based on language-specific grammatical rules, whereas in order to resolve textual coreference, contextual knowledge is needed. Textual coreference annotation is based on the ``chain principle'', the anaphoric entity always referring to the last preceding coreferential antecedent. Coreference can also be cataphoric (pointing to the text that follows) and coreference links can span multiple sentences \cite{biblio:ZiHaDiscourseand2015}. \subsection{Grammatemes} So-called grammatemes (described in detail in \newcite{biblio:RaZaAnnotationGrammatemes2006}, \newcite{biblio:PaSeAnnotationMorphological2010}, \newcite{biblio:SePaGrammaticalnumber2010}) are attached to some nodes; they provide information about the node that cannot be derived from the deep syntactic structure, the functor, and other attributes. Grammatemes are counterparts of those morphological categories which bear relevant deep syntactic or semantic information.
Grammatemes are annotated at the deep syntactic layer only in the core PDT corpus (written texts; cf. Table~\ref{tab:annot}). \subsection{Topic-Focus Articulation} The information structure of the sentence (its topic-focus articulation) is expressed by various means (intonation, sentence structure, word order). It constitutes one of the basic aspects of the deep syntactic structure (for arguments on the semantic relevance of topic-focus articulation, see \newcite{SgallHP:1986}). The semantic basis of the articulation of the sentence into topic and focus is the relation of contextual boundness: a prototypical declarative sentence asserts that its focus holds (or does not hold, as the case may be) about its topic. Within both topic and focus, contextually-bound, contrastive contextually-bound, and non-bound nodes are distinguished \cite{biblio:HaSgTopicfocusarticulation1998}. The nodes at the deep structure layer are ordered according to the degrees of communicative dynamism.\footnote{The nodes at the surface-syntactic layer, as well as at the morphological layer, are of course naturally ordered based on the surface word order.} In PDT-C 1.0, topic-focus articulation is captured at the deep syntactic layer only in the core PDT corpus (written text dataset; cf. Table~\ref{tab:annot}). \subsection{Additional Annotation} At the deep syntactic layer in the core PDT part (written dataset), valency of nouns, textual nominal coreference, bridging and discourse relations, and other semantic properties of the sentence such as genre specification, multiword expressions, and quotation are also annotated. More information on these special annotations can be found in the general overview \cite{biblio:MiBeFromPDT2013}, and also in detailed studies \cite{biblio:LoHoAnnotationsentence2012,biblio:NeMiAnnotatingExtended2011,biblio:NeMiHowDependency2013,biblio:PoJiManualfor2012,biblio:BeStAnnotationMultiword2010,biblio:ZiHaDiscourseand2015}.
\section{New Annotation of Morphology} \label{sec:morph} As already mentioned, the latest published versions of the included datasets have been enhanced in PDT-C 1.0 by new or corrected manual annotation at the morphological layer (new in the translated, spoken, and user-generated data, corrected in the original written dataset). Altogether, there are now 3,895,348 tokens with manual morphological annotation in PDT-C 1.0 (cf. Table~\ref{tab:volume}). \subsection{Annotation Process} The annotation is based on a manual disambiguation of an automatic, dictionary-based morphological analysis of the annotated texts. For such automatic preprocessing, we use the MorphoDiTa tool \cite{strakova14}.\footnote{\url{https://ufal.mff.cuni.cz/morphodita}} A key element of annotation consistency at the morphological layer is the Czech morphological dictionary \textbf{MorfFlex} \citelanguageresource{morfflex}, which is now an integral part of the PDT-C 1.0 release. The MorfFlex dictionary lists more than 100,000,000 lemma-tag-wordform triples. For each wordform, full inflectional information is coded in a positional tag. Wordforms are organized into entries (\textit{paradigm instances}, or \textit{paradigms} in short) according to their formal morphological behavior. The paradigm (set of wordforms) is identified by a unique lemma. The formal specification of the (original) dictionary is in \newcite{biblio:HaDisambiguationRich2004}. Based on the long-time experience with the usage of the dictionary and the current manual annotation of real data, we proposed to capture some phenomena differently in order to achieve better consistency within the dictionary as well as between the dictionary and the annotated corpora. The changes concern several complicated morphological features of Czech (a brief description of the changes is in the following subsections; for detailed information see \newcite{biblio:HlMiModificationsCzech2019} and \newcite{novymanual}).
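As an aside, the positional tags mentioned above can be unpacked mechanically; the sketch below assumes the standard 15-position Prague tagset (POS, sub-POS, gender, number, case, etc.), with `-' marking a position that does not apply, and an illustrative example tag:

```python
# Decoding a 15-position Prague positional tag into named features.
# Position names follow the standard Prague positional tagset; '-' marks
# a position that does not apply. The example tag is illustrative.
POSITIONS = ["POS", "SubPOS", "Gender", "Number", "Case", "PossGender",
             "PossNumber", "Person", "Tense", "Grade", "Negation", "Voice",
             "Reserve1", "Reserve2", "Variant"]

def decode_tag(tag):
    if len(tag) != 15:
        raise ValueError("Prague positional tags have 15 positions")
    return {name: value for name, value in zip(POSITIONS, tag) if value != "-"}

# e.g., an affirmative feminine singular nominative noun:
features = decode_tag("NNFS1-----A----")
```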
In addition, we are enriching the MorfFlex dictionary with new words and wordforms found in the newly annotated texts. Some quantitative characteristics of the amended morphological dictionary and some statistical data about the changes made are tabulated in Tab.~\ref{tab:dict}. The number of paradigms that are now different from the original version (i.e., there is a change in a lemma, in a comment attached to the lemma, in a form, and/or in a tag) is 299,055; this means that 28.58\% of the dictionary has been modified. \begin{table}[h] \captionsetup{justification=centering} \begin{center} \begin{tabular}{l|r} Description & Volume\\\hline Paradigms in original version & 1,035,659\\ Paradigms in new version & 1,046,422 \\ Paradigms removed & 3,381 \\ Paradigms added & 14,144 \\ Paradigms changed & 299,055 \\ \end{tabular} \caption{Statistics on the morphological dictionary and changes made for PDT-C 1.0} \label{tab:dict} \end{center} \end{table} \textbf{Achieving consistency}. The changes in the dictionary have been projected back into the manually annotated data by repeated re-annotation to guarantee full consistency between all the data and the dictionary. While the sheer amount of annotated text did not allow for multiple annotation within the time and funding available (e.g., to determine inter-annotator agreement; in general, IAA on morphological annotation is about 97\%{} \cite{Hajic2004}), the consistency of annotation has been checked by a specific module using the information from the morphological dictionary and the annotated data. Within a detailed analysis of consistency between the data and the dictionary, the following cases of inconsistencies are distinguished (they are ordered; a particular case applies only when none of the preceding cases apply): \begin{itemize} \item \textbf{Full matches}: An analysis of a wordform (attached lemma and tag) in the data fully matches an analysis in the dictionary.
\item \textbf{Unique lemma, comment change}: Compared to the analysis of a wordform in the data, there is a unique analysis in the dictionary that has the same tag and the same lemma (including the lemma number), but differs in a comment (an additional descriptive element) attached to the lemma. \item \textbf{Unique lemma, sense change}: There is a unique analysis in the dictionary that has the same tag and the same lemma, but differs in the lemma index number. \item \textbf{Unique lemma, tag change}: There is a unique analysis in the dictionary that has the same complete lemma (including the lemma index number and comment), but differs in the tag. \item \textbf{Unique rest}: There is a unique analysis in the dictionary, and none of the variants above apply. \item \textbf{Multiple lemma, comment change}: There are multiple analyses in the dictionary that have the same tag and the same lemma with an index number, but not the comment. \item \textbf{Multiple lemma, sense change}: There are multiple analyses in the dictionary that have the same tag and the same lemma, but differ in the lemma number. \item \textbf{Multiple lemma, tag change}: There are multiple analyses in the dictionary that have the same complete lemma, but differ in the tag. \item \textbf{Multiple rest}: There are multiple analyses in the dictionary, and none of the above variants apply. \item \textbf{No analysis}: There is no analysis in the dictionary for the given wordform. \end{itemize} An inconsistency between the data and the dictionary (of any of the types listed above) indicates an annotation problem or an error in the dictionary. All inconsistencies have been corrected (mostly manually, partially also automatically: e.g., the changes in the representation of verbal aspect (see below) have been made automatically, since the information was available elsewhere). 
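The ordered cascade above can be sketched as follows. This is a simplification: the lemma syntax assumed here ('base-N' plus an optional '_'-introduced comment) only approximates the real PDT lemma structure, and the unique/multiple distinction is read here as the number of dictionary analyses matching each condition.

```python
import re

# Sketch of the ordered consistency check between an annotated token and
# the dictionary analyses of its wordform. Lemma parsing and the
# unique/multiple reading are simplifying assumptions, not the actual
# consistency module.
def split_lemma(lemma):
    """Split a full lemma into (base, index number, comment)."""
    base, _, comment = lemma.partition("_")
    m = re.match(r"(.+)-(\d+)$", base)
    if m:
        return m.group(1), int(m.group(2)), comment
    return base, None, comment

def classify(data, dict_analyses):
    """data: one (lemma, tag) pair from the corpus;
    dict_analyses: all (lemma, tag) pairs the dictionary offers."""
    if data in dict_analyses:
        return "Full match"
    lemma, tag = data
    base, num, _ = split_lemma(lemma)
    conditions = [
        ("comment change",
         lambda l, t: t == tag and split_lemma(l)[:2] == (base, num)),
        ("sense change",
         lambda l, t: t == tag and split_lemma(l)[0] == base),
        ("tag change", lambda l, t: l == lemma),
    ]
    for name, pred in conditions:
        hits = [a for a in dict_analyses if pred(*a)]
        if hits:
            kind = "Unique" if len(hits) == 1 else "Multiple"
            return f"{kind} lemma, {name}"
    if not dict_analyses:
        return "No analysis"
    return ("Unique" if len(dict_analyses) == 1 else "Multiple") + " rest"
```

Every token classified as anything other than a full match is then flagged for (mostly manual) correction.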
There are now only full matches (the first type in the above list), except for a small number of wordform occurrences in the data that are not in the dictionary (but have a manual analysis in the data); this applies mostly to foreign wordforms and non-standard, sparse forms of Czech. This now holds for all four parts of PDT-C 1.0, including the almost 2 million tokens in the translated, spoken, and user-generated data that have been newly manually annotated for lemmas and tags, using the annotation and correction procedure described herein. \begin{table}[h] \captionsetup{justification=centering} \begin{center} \begin{tabular}{l|r|r} Type of inconsistency & \% & Forms \\\hline Full matches & 75.27\% & 1,473,162 \\ Unique lemma, comment change & 1.85\% & 36,297 \\ Unique lemma, sense change & 4.44\% & 86,977 \\ Unique lemma, tag change & 7.19\% & 140,682 \\ Unique rest & 6.71\% & 131,347 \\ Multiple lemma, comment change & 0.00\% & 0 \\ Multiple lemma, sense change & 0.03\% & 591 \\ Multiple lemma, tag change & 0.25\% & 4,803 \\ Multiple rest & 3.73\% & 72,924 \\ No analysis & 0.53\% & 10,459 \\ \end{tabular} \caption{Analysis of inconsistencies between the original PDT data and the new version of the MorfFlex dictionary} \label{tab:consistency} \end{center} \end{table} In the written dataset, i.e. in the original data of PDT (where the manual morphological annotation had already been completed for the original PDT 1.0 version \citelanguageresource{biblio:HaViPragueDependency2001}), the annotation at the morphological layer has been checked and corrected to reflect the updated morphological annotation guidelines and also to be fully consistent with the new morphological dictionary. Table~\ref{tab:consistency} quantifies the amount of inconsistency between the originally annotated data and the new version of the dictionary. All mismatches (25\% of tokens) have been resolved. \subsection{New Features in Lemmatization} The \textit{lemma} is a unique identifier of the paradigm. 
Usually it is the base form of the word (e.g. the infinitive for verbs, the nominative singular for nouns), possibly followed by a number distinguishing different lemmas with the same spelling. \textbf{Lemma numbering} has been improved and made more consistent. We no longer strive to make any distinction between the meanings of homonyms. The only differences we want to capture are those of a formal morphological nature. This means that we add numbers only to lemmas that have: \begin{itemize} \item a different POS, e.g. \textit{růst-1} as a noun ('a growth') and \textit{růst-2} as a verb ('to grow'), \item a different gender/declension in the case of nouns, e.g. \textit{kredenc-1} as masculine and \textit{kredenc-2} as feminine, even if they have the same meaning ('a cupboard'), \item a different aspect and/or conjugation in the case of verbs, e.g. \textit{stát-1} with perfective aspect ('to happen') and \textit{stát-2} with imperfective aspect ('to stand'). \end{itemize} Thus, we have, e.g., the lemma \textit{jeřáb-1} for a crane as a bird (animate masculine) and \textit{jeřáb-2} both for a tree and for a crane as a device for lifting heavy objects (inanimate masculine). We do not distinguish the latter two meanings (tree vs. device), because they do not differ from the inflectional point of view (same declension). There might be a difference in derivation; e.g. the word \textit{jeřábník} (a man who works with a crane-device) is derived from \textit{jeřáb} as a device. This has been delegated to derivational data sources, such as \newcite{vidra-etal-2019-derinet}; here, we do not take such derivational, stylistic, and semantic differences into account. \textbf{Orthographic and stylistic variants} of a word (e.g., the archaic variant \textit{these}, the standard variant \textit{teze}, and the non-standard variant \textit{téze} 'thesis') were not treated uniformly in MorfFlex. Some variants had separate paradigms with different lemmas; others were grouped into one paradigm with one common lemma. 
In the former case there was no connection between the variant lemmas. The latter case led to the most serious violations of the principles of the dictionary, because different wordforms with the same tags belonged to the same lemma. We have decided to capture variants in separate paradigms with different lemmas: we select one of the variants as ``basic'' (the standard one, i.e. \textit{teze}), and the other variants (non-standard \textit{téze} and archaic \textit{these}) refer to it in an additional descriptive element attached to the lemma. We have introduced new codes for marking variants of different styles; cf. the lemmas of the variants of the word \textit{teze} in Table~\ref{tab:variants}. \begin{table}[h] \captionsetup{justification=centering} \begin{center} \begin{tabular}{l|l} Wordform & Lemma\\\hline {\it teze} & teze\\ {\it these} & these\_,a\_$^\wedge$($^\wedge$DD**teze)\\ {\it téze} & téze\_,h\_$^\wedge$($^\wedge$GC**teze)\\ \end{tabular} \caption{Capturing stylistic variants: word \textit{teze} 'thesis'} \label{tab:variants} \end{center} \end{table} \vspace*{-7mm} \begin{table}[h] \captionsetup{justification=centering} \begin{center} \begin{tabular}{r|l} Position & Description \\\hline \textbf{1} & \textbf{Part of speech} \\ \textbf{2} & \textbf{Detailed part of speech} \\ 3 & Gender \\ 4 & Number \\ 5 & Case \\ 6 & Possessor's gender \\ 7 & Possessor's number \\ 8 & Person \\ 9 & Tense \\ 10 & Degree of comparison \\ 11 & Negation \\ 12 & Voice \\ \textbf{13} & \textbf{Verbal aspect} \\ \textbf{14} & \textbf{Aggregate} \\ \textbf{15} & \textbf{Variant, style, abbreviation} \end{tabular} \caption{Attributes in positional tags (amended positions are in bold font)} \label{tab:tag} \end{center} \end{table} \begin{table*}[h] \begin{center} \begin{tabular}{l|l|l|l} Wordform & Lemma & Tag & Example \\ \hline {\it Wall} & {\tt Wall} & \texttt{\underline{F\%{}}-------------} & {\it na Wall Street} 'on Wall Street' \\ {\it česko} & {\tt česko} & 
\texttt{\underline{S2}--------A----} & {\it česko-ruská kniha} 'Czech-Russian book' \\ {\it kou} & {\tt ka} & \texttt{\underline{SN}FS7-----A----} & {\it s manželem/kou} 'with husband/wife' \\ {\it připravil} & {\tt připravit} & \texttt{VpYS----R-AA\underline{P}--} & {\it Připravil návrh.} '[He] has prepared a proposal.' \\ {\it proň} & {\tt on} & \texttt{P5ZS4--3-----\underline{p}-} & {\it Žije proň.} '[He] lives for him.' \\ {\it s} & {\tt strana} & \texttt{NNFXX-----A---\underline{a}} & {\it na s. 12} 'on page 12' \\ \end{tabular} \caption{Examples of annotation (amended positions/values are underlined)} \label{tab:examplesnewtags} \end{center} \end{table*} \vspace*{-6mm} \subsection{New Features in Tagging} Czech is a highly inflectional language. There are 15 morphological categories, encoded in a positional tag: a string of 15 characters in which every position encodes one category using a one-character symbol. An overview of the 15 positions is given in Table~\ref{tab:tag}. The categories we have newly defined and/or amended are indicated in bold. Examples of the use of the modified categories are shown in Table~\ref{tab:examplesnewtags}.\footnote{For the original full list of tag categories and values, see e.g. \url{https://ufal.mff.cuni.cz/pdt/Morphology_and_Tagging/Doc/hmptagqr.html}.} \textbf{Foreign words}. Two new POS values have been introduced: one for foreign words and one for segments (see below). Their detailed description can be found in \cite{biblio-7044017271673886902}. A foreign word is a word that is not subject to the Czech inflectional system and has no meaning of its own in Czech. The tag contains special values at the POS and detailed POS positions, namely {\tt F\%}. No other morphological values are involved in the tag (cf. the example of the wordform \textit{Wall} in Table~\ref{tab:examplesnewtags}). \textbf{Segments} are incomplete words. To be interpreted, a segment must be joined with another string or word to create a complete word. 
We have created a new POS with the code {\tt S} for them. According to their position in the complete word, we distinguish prefixal and suffixal segments. The tag of a prefixal segment has the code {\tt 2} at the 2nd position (cf. the wordform \textit{česko} in Table~\ref{tab:examplesnewtags}). Suffixal segments express an affiliation to a specific POS. Thus, all the inflectional categories that describe the whole wordform, except for the first one (which is {\tt S}), are filled in the tag (cf. the wordform \textit{kou} in Table~\ref{tab:examplesnewtags}). \textbf{Aspect}. The verbal aspect was previously not part of the tag; the information was included in the dictionary in the form of an additional field attached to lemmas. We have added the information about the aspect directly to the tag, at its 13th position, which had been kept free as a reserve. The values are: {\tt P} for perfective verbs, {\tt I} for imperfective ones, and {\tt B} for verbs with both aspects. An example is the verbal wordform \textit{připravil} in Table~\ref{tab:examplesnewtags}. \textbf{Aggregates}. A new solution has also been implemented for so-called aggregates. An aggregate is a wordform created by combining two or more forms (components of the aggregate) into one; it cannot simply be assigned any single POS (e.g. the wordform \textit{proň} consists of the pronoun \textit{on} ('he') and the joined preposition \textit{pro} ('for')). The tag describes the main component of the aggregate (i.e., the pronoun \textit{on}), and the joined components are coded at the free 14th position of the tag (for the joined preposition \textit{pro}, the value is {\tt p}; cf. the wordform \textit{proň} in Table~\ref{tab:examplesnewtags}). \textbf{Stylistic variants, abbreviations}. For marking stylistic variants of a wordform (e.g. 
both \textit{orli} and \textit{orlové} ('eagles') are wordforms of the noun \textit{orel} ('eagle') and express the masculine nominative plural), we use the 15th position of the tag, as has been done before. The main difference lies in the fact that we now use this position strictly for variants of a wordform. Numbers 1 to 4 mark standard variants, while numbers 5 to 9 mark substandard ones. We have also added new values -- the letters {\tt a}, {\tt b}, and {\tt c} -- to the 15th position of the tag for marking an abbreviation of a (single) word, which is captured as a special wordform of the paradigm of that word (cf. the abbreviation \textit{s} ('p') which abbreviates the word \textit{strana} ('page') in Table~\ref{tab:examplesnewtags}). \section{Conclusion and Future Work} A large, combined, genre-diversified treebank resource for Czech with enhanced morphological annotation has been presented, which increases the amount of morphologically annotated data for Czech almost twofold, to nearly 4 million tokens. PDT-C 1.0 will be published under an open license in the first months of 2020, together with the morphological and valency lexicons related to the annotation. A large number of changes have been made not only to the data, but also to the morphological dictionary MorfFlex, and consistency is now assured across all four original datasets. In the near future, the combined corpus will be used to build a new model for the Czech morphological disambiguation tool MorphoDiTa, which will in turn be used for automatic POS and morphological annotation of all the Czech corpora available in the Kontext KWIC tool at the LINDAT/CLARIAH-CZ research infrastructure.\footnote{\url{https://clariah.lindat.cz}} It will also be made available again to the Czech National Corpus\footnote{\url{https://www.korpus.cz}} to re-annotate its corpora with more accurate POS and morphology. 
The whole PDT-C 1.0 will also be available for advanced search in the PML-TQ tool.\footnote{\url{https://lindat.mff.cuni.cz/services/pmltq}} On the annotation side, surface syntactic annotation of the three so far manually unannotated datasets of PDT-C 1.0 will be tackled next, again in order to increase the amount of data available to train dependency parsers for Czech. Having both morphology and dependency syntax annotated will then make it possible to increase the amount of Czech data in the Universal Dependencies \cite{NIVRE16.348} collection to almost 5 million tokens; the conversion to the UD style of annotation will be straightforward, as was already the case for PDT \cite{biblio-1745977273001647149}. \section*{Acknowledgements} The research and language resource work reported in this paper has been supported by the LINDAT/CLARIN and LINDAT/CLARIAH-CZ projects funded by the Ministry of Education, Youth and Sports of the Czech Republic (projects LM2015071, LM2018101 and EF16\_{}013/0001781). The original annotation has been supported by multiple projects in the past, funded both nationally by the Ministry of Education, Youth and Sports of the Czech Republic and by the Czech Science Foundation (such as Projects No. VS96151, LN00A063, LC536, LM2010013, GA405/96/0198, and GV405/96/K214), as well as by projects funded by the European Commission in the 6th and 7th Framework Programmes and the H2020 Programme that in part added certain language resources, such as the Companions and Khresmoi Integrated Projects, the Faust STREP, and several others. \section*{Bibliographical References} \label{main:ref} \bibliographystyle{lrec}
\section{Introduction} \label{sec:intro} The measurement of rotation periods in open clusters can be a powerful and convenient tool in understanding stellar angular momentum evolution by offering the opportunity to connect a star's age to its rotation period and color or, alternatively, mass. \citet{1972ApJ...171..565S} first noticed the relationship between period and age, though the color-period relation has long been observed in the Hyades, where there is a notable dependency of stellar rotation period on spectral type for stars cooler than about mid-F \citep{1987ApJ...321..459R,2011MNRAS.413.2218D}. \citet{1988ApJ...333..236K} concluded that by the age of the Hyades, a star's initial angular momentum no longer plays a significant role in determining its rotation period, and another mechanism, possibly stellar winds coupled with magnetic fields, `brakes' the rotation of a star as it ages. \citet{1989ApJ...343L..65K} then acknowledged the use of rotation, a direct observable independent of distance, as a potential age estimator. \citet{2003ApJ...586..464B} developed the first semi-empirical model for deriving ages from the colors and rotation periods of FGK dwarfs, introducing the term gyrochronology. Following the improvements of \citet{2007ApJ...669.1167B}, there was a need for precise rotation period measurements in clusters to test gyrochronology and probe its limitations. The availability of large field-of-view (FOV) CCD cameras on smaller (1\,m class) telescopes enabled the measurement of thousands of rotation periods for stars in young open clusters via the rotational modulation of stellar flux \citep{irwin2007monitor,irwin2009monitor,2009ApJ...695..679M,0004-637X-733-2-115,2009MNRAS.400..451C,0004-637X-695-1-336,2010arXiv1006.0950H,2010A&A...515A.100J}, using co-eval populations to enable the creation of period-age-mass (P-t-M) surfaces based on different gyrochronology models. 
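As a concrete illustration of such a P-t-M surface, a Barnes (2007)-style relation factorizes the period into a color term and an age term. The coefficients below are the published \citet{2007ApJ...669.1167B} fit values and are shown only as a sketch, not as a calibration adopted in this work.

```python
# Sketch of a Barnes (2007)-style gyrochronology relation,
#     P(B-V, t) = a * (B-V - c)**b * t**n,
# with P in days and t in Myr. Coefficients are the Barnes (2007) fit
# values, used here purely for illustration.
A, B, C, N = 0.7725, 0.601, 0.40, 0.5189

def gyro_period(bv, age_myr):
    """Predicted rotation period (days) for color B-V and age (Myr)."""
    return A * (bv - C) ** B * age_myr ** N

def gyro_age(bv, period_d):
    """Invert the relation: gyrochronological age (Myr) from a period."""
    return (period_d / (A * (bv - C) ** B)) ** (1.0 / N)

# A solar-color star (B-V ~ 0.65) at the solar age of ~4570 Myr comes
# out near the solar rotation period of roughly 26 d.
p_sun = gyro_period(0.65, 4570.0)
```

Inverting the relation, as in `gyro_age`, is what turns a measured period and color into a gyrochronological age.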
NASA's \emph{Kepler} mission, dedicated to the detection of transiting exoplanets of nearby bright stars, offered unprecedented precision to study stellar variability, particularly stellar rotation. Key \emph{Kepler} large-scale rotation period studies include those of \citet{2013MNRAS.432.1203M}, \citet{2013A&A...560A...4R}, \citet{2041-8205-775-1-L11}, \citet{2014ApJS..211...24M}, \citet{2013A&A...557L..10N}, \citet{2014A&A...572A..34G}, and \citet{2015A&A...583A..65R}. These studies measured the rotation periods of tens of thousands of \emph{Kepler} field stars, including planetary hosts. Along with the ground-based cluster surveys, they helped bring to light a number of interesting trends, including, for example, the puzzling bimodal period distribution shown by late K and M-dwarfs, as well as the existence of a correlation between stellar temperature, rotation period, and the spread thereof, which may be related to differential rotation, active region evolution, or a combination of the two. The \emph{Kepler} Cluster Study extended the P-t-M surface from $\sim$600\,Myr (Hyades) to 2.5\,Gyr by using \emph{Kepler} data to measure rotation periods of FGK dwarfs in NGC 6811 (1\,Gyr; \citealt{2011ApJ...733L...9M}) and NGC 6819 (2.5\,Gyr; \citealt{2015Natur.517..589M}). These measurements agreed with the predictions of the most recent gyrochronology relations by \citet{0004-637X-722-1-222}. Other approaches using \emph{Kepler} data, however, have focused on asteroseismic age determination \citep{2014A&A...572A..34G,2015MNRAS.450.1787A,2016Natur.529..181V}, and these studies have called into question the behavior of stars at older ages. 
For instance, \citet{2016Natur.529..181V} attempt to explain the anomalously rapid rotation of old field stars by proposing a `weakened magnetic braking' theory, which states that around solar age, stars with masses similar to and above that of the Sun should begin to diverge from the `standard' gyrochronology relations, which their cooler counterparts are still expected to follow. The need for high-precision observations of solar-age clusters to test this new theory alongside the established gyrochronology relations becomes apparent. The best candidate for an in-depth rotation study on a solar-age cluster is M67. M67 is estimated to be $\sim4.3$\,Gyr old and roughly 800 -- 900\,pc from the Sun \citep{1994A&AS..103..375C,2007A&A...470..585B,2015AJ....150...97G}. The cluster's age is not the only similarity to the Sun; calculated metallicity values from the literature show that [Fe/H] ranges from about $-0.1$ to $0.1$, with average values lying around $-0.01$ \citep[][the most recent of which reports $-0.01 \pm 0.05$]{1996AJ....112..628F,2007A&A...470..585B,2007AJ....133..370T,2011AJ....142...59J}. The members of M67 have been catalogued several times based on proper motion \citep{1977A&AS...27...89S,2005ARep...49..693L,2008A&A...484..609Y} and radial velocities \citep{2008A&A...484..609Y,2008A&A...489..677P,2015AJ....150...97G}. \citet[][hereafter G15]{2015AJ....150...97G} report 1278 candidate members, making M67 the richest, relatively nearby solar-age open cluster. There is a general tendency for middle-aged, main-sequence stars to have longer activity cycles and so lower variability amplitudes than their younger counterparts \citep{2008ApJ...687.1264M, 1995ApJ...452..332R, 1998ApJS..118..239R, 1996A&A...305..284H, 1998ASPC..154..153B}, making their signals more difficult to detect. Thus high photometric precision is necessary for a study of rotation in M67. 
K2, the second phase of the \emph{Kepler} mission following the loss of two of the spacecraft's reaction wheels by 2013, presents such an opportunity. However, the mission's reduced pointing accuracy relative to \emph{Kepler} leads to lower photometric precision due to an increase in systematics associated with inter- and intra-pixel sensitivity variations, as well as aperture losses \citep{2014PASP..126..398H}. Early estimates of K2's photometric precision indicated, for stars with a magnitude of $V=12$, a precision of approximately 400 parts per million (ppm) for the long-cadence, or 30-minute, observations, and 80 ppm over the course of 6 hours \citep{2014PASP..126..398H}. Previously, \emph{Kepler} reached a precision level of 10 ppm at a magnitude of $K_{p}=10$ for 6-hour observations \citep{2012PASP..124.1279C, 2014PASP..126..948V}. Members of the community have since developed methods to improve the photometric precision of the K2 light curves, in particular to correct the pointing-related variations. These include the Vanderburg \& Johnson (hereafter VJ) pipeline \citep{2014PASP..126..948V,2016ApJS..222...14V}, the K2 Systematics Correction (K2SC) pipeline \citep{2015MNRAS.447.2880A,2016MNRAS.459.2408A}, EVEREST \citep{2016AJ....152..100L}, K2VARCAT \citep{2014arXiv1411.6830A,2016MNRAS.456.2260A}, and the PSF-fitting used by \citet{2016MNRAS.456.1137L} for K2 observations of M35 and NGC 2158. Several of these pipelines match the photometric precision of \emph{Kepler} for stars with $K_{p}=12.5$, and perform within a factor of two of the original mission for fainter stars. This, combined with the much more diverse sample of stars surveyed by K2 compared to the \emph{Kepler} prime mission, including numerous open clusters, makes K2 data a treasure trove for stellar angular momentum evolution studies. 
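For purely white noise, per-cadence and time-binned precisions are linked by the usual $1/\sqrt{N}$ averaging over the $N = 12$ long cadences in 6 hours. The quick numerical check below is illustrative only; real K2 systematics are correlated with the pointing drift and therefore do not average down this way.

```python
import numpy as np

# Check of the 1/sqrt(N) scaling that links the per-cadence and binned
# precision of purely white noise (N = 12 long cadences in 6 h). The
# numbers are illustrative; correlated K2 systematics break this scaling.
rng = np.random.default_rng(42)
sigma_lc = 400e-6                       # 400 ppm per 30-min cadence
n_bin = 12                              # 12 long cadences = 6 hours

flux = 1.0 + rng.normal(0.0, sigma_lc, size=(100_000, n_bin))
binned = flux.mean(axis=1)              # one 6-hour point per row
measured = binned.std()
expected = sigma_lc / np.sqrt(n_bin)    # ~115 ppm for pure white noise
```

The simulated scatter of the 6-hour bins reproduces the analytic $\sigma/\sqrt{N}$ value to within the sampling error.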
Indeed, the K2 mission has produced spectacular results for nearby, well-studied young open clusters, including the Pleiades \citep{2016AJ....152..113R,2016AJ....152..114R,2016AJ....152..115S} and Hyades \citep{2016ApJ...822...47D}. More critical to this study, however, K2 observed M67 during Campaign 5.\footnote{\url{https://keplerscience.arc.nasa.gov/k2-fields.html}} \citet{2016ApJ...823...16B} and \citet{2016MNRAS.459.1060G,2016MNRAS.463.3513G} already used those data to measure rotation periods for M67 members and investigate the implications of their results in terms of gyrochronology. However, both studies worked only with stars observed using individual postage stamps, avoiding the crowded central regions of the cluster, and while \citet{2016ApJ...823...16B} obtained a good match to pre-existing gyrochronology relations, this was based on a very small sample of 20 stars, with the majority of the reported periods not convincingly confirmed when examined by eye based on the provided light curves. By contrast, \citet{2016MNRAS.463.3513G} showed no dependence of rotation period on mass (color) and significant scatter for all masses. Furthermore, there are discrepancies in the measured periods for several stars in common between the two studies, and they inferred mutually inconsistent gyrochronological ages for M67 ($4.2 \pm 0.2$ and $5.0 \pm 0.2$\,Gyr, respectively). \begin{figure*} \includegraphics[width=1\linewidth]{hot_star_hist.pdf} \caption{Periods acquired from a set of 89 K2 Campaign 5 A and F stars using the autocorrelation function (ACF) (green) and the Lomb-Scargle periodogram (blue). We also include a normalized version of the Lomb-Scargle periodogram (red), which we implement later in this paper, as a demonstration of how we can set detection thresholds and remove potentially anomalous measurements. 
\label{fig:hotstarhist}} \end{figure*} In addition to exploiting only part of the available data, the published K2 M67 rotation studies made specific choices in terms of light curve extraction, detrending, and period-search methods, without investigating possible alternatives in any detail, and both approaches involved a large component of human decision-making, which is difficult to reproduce. Furthermore, it is useful to dwell on just how challenging it is to determine accurate rotation periods in M67 data. Based on existing gyrochronology relations, and on what we know about the Sun itself, these stars are expected to have rotation periods of order 20 -- 30\,d, and amplitudes of a few tenths of a percent at most. Even in the full \emph{Kepler} field star sample of \citet{2014ApJS..211...24M}, based on 4 years of continuous observations, there are relatively few detections in this area of parameter space, due either to sensitivity limits or to the underlying period-amplitude distribution. The limited duration of a K2 campaign ($\sim$75\,d; about 2 -- 3 rotation cycles for a typical M67 star) only makes the detection of accurate rotational modulation harder. In addition, the reduced photometric precision of K2 light curves (relative to the \emph{Kepler} prime mission) is compounded by the significant degree of crowding of cluster stars, which creates additional systematics, and by the large \emph{Kepler} pixel size. Beyond these difficulties, a sample of 89 hot A and F stars from K2 Campaign 5 has highlighted the presence of non-astrophysical signals at periods of 25\,d and longer in the data, as can be seen in Figure~\ref{fig:hotstarhist}. This figure shows the rotation periods of the dataset, acquired using the autocorrelation function (ACF), the standard Lomb-Scargle periodogram, and a normalized version of the Lomb-Scargle periodogram (which we explain and implement in Section~\ref{sec:tests}). 
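To make the comparison concrete, a bare-bones version of an ACF period estimate on an evenly sampled light curve might look as follows. This is a simplified illustration in the spirit of the ACF method of \citet{2014ApJS..211...24M}, not the exact procedure applied to the K2 data.

```python
import numpy as np

# Bare-bones ACF period estimate for an evenly sampled light curve:
# the period is taken as the lag of the highest ACF peak beyond the
# first zero crossing. A simplified sketch, not the K2 procedure.
def acf_period(flux, cadence_d):
    f = flux - flux.mean()
    acf = np.correlate(f, f, mode="full")[f.size - 1:]
    acf /= acf[0]
    lag = 1                      # skip the zero-lag peak...
    while lag < acf.size and acf[lag] > 0:
        lag += 1                 # ...by walking past the first zero crossing
    peak = lag + np.argmax(acf[lag:])
    return peak * cadence_d

# A noiseless 26 d sinusoid sampled like a K2 campaign (75 d, 30 min):
cadence = 0.5 / 24.0
t = np.arange(0.0, 75.0, cadence)
flux = 1.0 + 2e-3 * np.sin(2.0 * np.pi * t / 26.0)
p_est = acf_period(flux, cadence)    # close to 26 d
```

Even in this idealized, noise-free case the finite baseline tapers the ACF and biases the recovered period slightly low, hinting at the systematics discussed above.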
The hot star sample selection is detailed in Section~\ref{subsec:star_samples}. We should not expect periods much greater than $\sim$5\,d for these stars. The significant number of measurements from either the ACF or the standard Lomb-Scargle periodogram above 25\,d shows that there is a substantial risk of making erroneous period measurements on the order of 25 -- 30\,d in M67. All of this should lead to a healthy degree of prior skepticism regarding any period detections in M67, for G and K stars at least, and any such detections need to be backed up by detailed completeness (sensitivity) and reliability tests. Using all the cluster members and the aforementioned set of A and F stars observed during K2 Campaign 5, we carefully compare the effects of different light curve extraction and detrending methods. We perform systematic tests with simulated, sinusoidal signals injected into the Campaign 5 M67 K2 light curves, in order to properly evaluate the biases and best-case sensitivity limits of each detrending pipeline. This paper reports on the first set of injection tests carried out as part of our team's K2 M67 rotation study, which ultimately seeks to produce a list of periods from this critical cluster in which we can quantify our level of confidence. Later, a round of smaller injection tests will use more realistic signals, generated by the star spot models from \citet{2015MNRAS.450.3211A}, to help us reach that final goal. The structure of the rest of this paper is as follows: Section \ref{sec:lcs} discusses the K2 observations, light curve extraction, and light curve detrending. Section \ref{sec:tests} describes the injection tests, including the generation of the injected signals, the period search method, and the definition of a systematic detection criterion. In this section we also discuss how we handle intrinsically variable stars (i.e. stars with an astrophysical signal before injection), and what we consider a `valid' detection. 
The results are presented in Section \ref{sec:results}, where we compute the completeness and reliability of the period search as a function of period and amplitude, before they are discussed in Section \ref{sec:discussion}. We summarize our conclusions in Section \ref{sec:conclusion}. \section{Observations and light curve preparation} \label{sec:lcs} \subsection{K2 Observations} M67 was observed during K2 Campaign 5, which commenced on 2015 April 27 and ended on July 10. The pointing was centered on RA = 130$^\circ$09'27''.53 and DEC = 16$^\circ$49'46.61''.\footnote{See \url{https://keplerscience.arc.nasa.gov/k2-data-release-notes.html\#k2-campaign-5}.} Individual K2 targets for Campaign 5 were proposed for observation by members of the community. Our proposal (PI: R.\ Mathieu) included $12\,935$ candidate members, with $12\,521$ drawn from the EPIC database, 266 from the 2MASS calibration field \citep[see e.g.][]{2009ApJ...698.1872S}, and the last 148 targets filling in those missing from the EPIC database around $K_{p}\sim19$ (Gilliland, private communication). Membership for stars brighter than $K_{p}=15$ is based on radial velocity and proper motion, as explained in G15, taking those with membership probabilities of 20\% or greater to make a more complete catalog. For the remainder, we accepted those stars whose photometric membership made them likely members, unless contradicted by known radial velocities or proper motions. These stars formed our master target list. M67 was located on channels 1 and 2 of CCD module 6, near the edge of the FOV. Figure~\ref{fig:mod6_fov} shows the entire Campaign 5 FOV, with M67 targets represented by the teal circles on CCD module 6 along with a scattering of orange diamonds that represent the sample of hot stars used in this analysis, which will be discussed later. 
The stars located in the less crowded, outer parts of the cluster were observed in the standard manner, by downloading a small window of pixels centered on each star of interest. A total of $\sim25\,000$ individual star postage stamps were collected during Campaign 5; of these, 1877 fell on output channels 6.1 and 6.2. For the crowded inner regions of the cluster, a `superstamp' was used: a large, contiguous region created by juxtaposing many postage stamps. Approximately 2210 of the proposed M67 targets were located within the superstamp (the number of stars in the superstamp for which light curves are actually extracted depends on the method used). Figure~\ref{fig:mod6_fov2} shows the locations of the M67 SAP sample (in blue) and the superstamp stars (in red) where they fall on channels 6.1 and 6.2 using two images of M67 taken from the ESO Online Digitized Sky Survey, centered on the channels' respective pointings. \begin{figure} \plotone{targets_fov.pdf} \caption{Location of the 1877 M67 SAP and 976 superstamp stars (teal circles) used in this study, along with an additional 89 hot stars (orange diamonds), on the \emph{Kepler} Campaign 5 field of view. \label{fig:mod6_fov}} \end{figure} \begin{figure} \centering \includegraphics[width=1\linewidth]{mod6_fov.pdf} \caption{Zoomed-in view of the M67 SAP and superstamp stars using two images of M67 retrieved from the ESO Online Digitized Sky Survey. The SAP stars are surrounded by blue circles, while the superstamp (SS) stars are surrounded by red circles. \label{fig:mod6_fov2}} \end{figure} Most postage stamps were read out in `long cadence' mode, or one observation every 30 min, with a total of 3663 cadences during the campaign. After basic reduction (correction of pixel-level instrumental effects), each series of postage stamps was combined into a Target Pixel File (TPF), and the TPFs were made publicly available at the Mikulski Archive for Space Telescopes (MAST) a few months after the end of the campaign. 
In the rest of this section, we describe the two parallel light curve preparation procedures, or `pipelines', from two different institutions (Oxford and Harvard-CfA in collaboration with NASA Ames), which we tested against each other in this study. The K2 M67 study team includes core members of the teams behind the K2SC and VJ pipelines, which we implement here. While other K2 pipelines have been successful in various contexts, we did not include them in this study for a variety of reasons. The EVEREST pipeline, for example, does not perform optimally with a crowded field \citep{2016AJ....152..100L,2017arXiv170205488L}. Furthermore, the K2 light curves from K2VARCAT \citep{2014arXiv1411.6830A, 2016MNRAS.456.2260A} only encompass Campaigns 0 -- 4. Alternatively, \citet{2018arXiv180206354C} have recently released Campaign 5 M67 light curves using a similar approach to superstamp data as \citet{2015MNRAS.447.2880A} (which we employ here). \citet{2016MNRAS.463.1831N} have also produced Campaign 5 M67 light curves using the PSF-fitting approach of \citet{2016MNRAS.456.1137L} in addition to aperture and optimal-mask photometry. However, the light curves from \citet{2018arXiv180206354C} were unavailable when we started this study, and we did not realize those of \citet{2016MNRAS.463.1831N} were publicly available until after we completed it. A future comparison with both sets of light curves would be interesting, especially since the root mean square error of the latter using aperture photometry at the bright end is similar to that of the pipelines we use (see Section 2.4). Therefore, for each star, we applied our two pipelines to extract the light curves, correct for pointing-related systematics, and correct for residual common-mode systematics. In an ideal world, one would try every possible combination of each of these steps, resulting in 8 versions of each light curve from the three separate datasets, each of which would have been injected with 42 simulated signals. 
This would have led to an unmanageably large set of tests (almost 1\,000\,000 in total), and would have taken far too long to complete, especially when applying PDC-MAP for common-mode correction (see Section~\ref{subsubsec:cfa_commonmode}). While mixing the pipelines may have produced slightly more optimal results, we treat each pipeline as a unit, and produce only two versions of each light curve. \subsection{The `Oxford' pipeline} \label{subsec:oxford} The `Oxford' pipeline comprises the following: \begin{itemize} \item for light curve extraction: the Simple Aperture Photometry (SAP) component of the standard \textit{Kepler} pipeline, or the procedure described in \citet{2015MNRAS.447.2880A} for cases where the SAP light curves are not available; \item for pointing systematics correction: the K2SC pipeline \citep{2016MNRAS.459.2408A}; \item for the correction of residual common-mode systematics: a Principal Component Analysis (PCA) step. \end{itemize} \subsubsection{Light curve extraction} \label{subsubsec:extract1} Stars located outside the superstamp were processed on the ground by the simple aperture photometry (SAP) component of the \emph{Kepler} pipeline. SAP light curves are extracted using a fixed, pixelated aperture, the boundaries of which are defined so as to maximize the signal-to-noise ratio of the resulting light curve \citep{2016PASP..128g5002V}. They are corrected for known, pixel-level instrumental effects only, and contain significant instrumental systematics, primarily due to the pointing variations of the telescope. These light curves are publicly available at MAST. The MAST light curve files also include a second version of each light curve, produced by the Pre-search Data Conditioning (PDC) component of the \emph{Kepler} pipeline. The PDC step was designed to remove common-mode systematics and prepare the light curves for planetary transit searches. 
It was also designed to preserve all astrophysical signals, but it can reduce the amplitude of these intrinsic signals at timescales greater than 15\,d \citep{2015AJ....150..133G}. Additionally, in the case of K2 data, it was never designed to remove the sawtooth systematics associated with the spacecraft roll error \citep{1538-3873-124-919-985,1538-3873-124-919-1000,2014PASP..126..100S}. Therefore, the use of PDC light curves can be problematic for variability studies with K2, so we work with the SAP rather than the PDC versions. As previously mentioned, MAST light curves are not available for the superstamp. We therefore use the method of \citet{2015MNRAS.447.2880A} to extract the superstamp light curves ourselves. We give a brief summary of it here. We processed channels 6.1 and 6.2 in turn. First, a Full Frame Image (FFI) is downloaded and a preliminary astrometric solution obtained using {\tt astrometry.net} \citep{lang2010}. For each epoch, we then stitch together all the superstamp postage stamps from a given channel to create a reconstructed image, which is blank where no data was downloaded, and is initialized with the preliminary astrometric solution from the FFI. We also construct a binary mask that indicates where data was collected. We then use the publicly available {\tt CASUTools} routines {\tt nebuliser}, {\tt imcore}, and {\tt wcsfit} to model and subtract the background, identify stars within each image, and obtain a refined astrometric solution by cross-matching the catalog generated from each image with 2MASS. The variable background subtraction is new (compared to \citealt{2015MNRAS.447.2880A}) and is important because of the high density of unresolved stars in the central regions of the cluster. Next, we construct a master image by stacking the first 20 valid images using the astrometric solution for each frame. 
We repeat the source detection, photometry, and astrometric solution steps on this master image to obtain a master catalog. We then extract light curves for every star on the master catalog by placing a circular aperture at the sky position of each star on each image (as recorded in the master catalog). We compute the median of each star's light curve and a zero-point correction for each frame by taking the weighted average of the median-subtracted fluxes, using the inverse square scatter as weights, on that frame. This removes some systematics (particularly those associated with aperture losses), but significant residual systematics are nonetheless present in the extracted light curves. The light curve extraction is done using several aperture radii (1, 2, 2$\sqrt{2}$, 4, 4$\sqrt{2}$, and 8 pixels) for every star in the superstamp. In general, larger apertures give the best results for bright stars, while fainter stars require smaller apertures to minimize the background noise. However, the best aperture for any given star also depends on the relative positions and brightness of neighbouring stars. In practice, we select the aperture which minimizes the star's calculated scatter (see section \ref{subsec:p2p}). Eventually, however, this aperture needs to be checked for contamination from other stars, especially if it is larger than about 2 pixels. We extracted light curves for 1342 stars located in the superstamp. We then cross-matched our master catalog to that of G15, finding a total of 976 matches. Most of the stars in our catalog without G15 matches are faint stars with poor quality K2 photometry, so we focus the remainder of our analysis of superstamp stars on those with G15 matches, hereafter referred to as the `SS' sample. \subsubsection{Star-by-star systematics correction} The dominant systematics in K2 light curves are due to the spacecraft's pointing variations. We correct these using the K2SC pipeline. 
We briefly summarize K2SC here, but the more interested reader is referred to \citet{2016MNRAS.459.2408A}. K2SC models each light curve as the sum of two unknown functions, one which depends on the star's pixel position and represents the pointing systematics, and one which depends on time and represents the star's intrinsic variations, as well as any residual systematics not captured by the position-dependent component. The model is formulated as an additive Gaussian process, which allows us to require only that each function have a certain degree of smoothness, without having to specify its exact form. It also enables us to separate the time- and position-dependent components. Removing the former from the original light curve is helpful for looking for planetary transits, while subtracting the latter, which is dominated by the $\sim$6-hour drift of the spacecraft, only leaves us with a light curve corrected for position-dependent systematics, with minimal impact on astrophysical variability. K2SC also flags significant outliers in the data, which we subsequently remove since the rotational signals in which we are interested are relatively smooth for this study. \subsubsection{Common-mode systematics correction} \begin{figure*} \plotone{pca_trends.pdf} \caption{Principal components used to correct long-term systematics in the SAP (left) and superstamp (right) datasets as part of the Oxford pipeline. The primary PCs are in blue, while the secondary PCs are in orange. \label{fig:trends}} \end{figure*} Visual inspection of the K2SC-processed light curves reveals significant, residual long-term trends common to many light curves and which appear to be present whatever the extraction and detrending method used. \citet{2016ApJ...823...16B}, who used the MAST PDC versions of the light curves, already noticed them, and used a Principal Component Analysis (PCA) approach to remove them. 
These particular trends are likely due to stellar drift from differential velocity aberration (DVA), which causes photometric aperture losses or variations from sensitivity differences within a pixel. We also use a PCA approach to remove residual systematics. We normalize each light curve by dividing by its median. We then construct a matrix, where each row is the logarithm of the normalized light curve. (We also experimented with the more standard approach of re-scaling each light curve to have zero mean and unit variance, but found that this gave poorer results). We then use singular value decomposition (SVD) to compute the eigenvectors of the matrix, which are the principal components (PCs); the eigenvalues, which tell us what fraction of the total variance of the dataset is explained by each trend; and a matrix of coefficients relating each light curve to each PC. The first few PCs represent the dominant trends in the data, and these can be removed by subtracting each of them times the corresponding coefficient from each light curve. Because M67 falls on two different CCD channels, the two subsets might be expected to show different systematics. However, we found no significant difference between the sets of PCs extracted from the entire set, or from each output channel in turn. We therefore processed all the SAP stars on the two channels, both cluster members and non-members, together. On the other hand, there were much more significant differences between the PCs extracted from the SAP and SS sets, as illustrated in Figure~\ref{fig:trends}, so these were processed separately. The SS PCs appear noisier than the SAP PCs as a result of the relative dominance of the common-mode systematics in each set. The first, most dominant PC of the SAP set accounts for roughly 84\% of the total variance of the light curves, while the first PC of the SS set only accounts for about 43\%. 
The amplitudes of the SS PCs are therefore much smaller relative to the noise than those of the SAP PCs. If PCA is applied blindly, large-amplitude features in individual light curves can dominate the PCs, skewing the correction and introducing these features in the corrected light curves of other stars. This can be diagnosed easily, however, as the PCs are linear combinations of the light curves, and the linear combination becomes dominated by a single star. Two problematic stars were identified in this way in the SAP set, and excluded from the PC estimation: EPIC 211391083 (an eclipsing binary) and EPIC 211327533 (whose light curve contains a large discontinuity). We also excluded the first 85 cadences ($\sim$1.8\,d) from the light curves before evaluating the PCs, as the systematics were particularly pronounced in that early phase of the campaign, and the PCA correction was of significantly lower quality when they were included (perhaps because the assumption of linearity, which is intrinsic in PCA, was violated more radically in that period). Usually, a threshold in the fraction of explained variance (i.e. the eigenvalue) associated with each PC is used to decide how many PCs to include in the correction. We experimented with different values for this threshold, and settled on thresholds of 2.5\% and 7.5\% for the SAP and SS sets respectively, which correspond to 2 PCs in both cases, shown in Figure~\ref{fig:trends}. We tested a range of thresholds for each dataset and found that thresholds that were too high produced only one PC, which was not sufficient to explain all of the dominant common modes seen in the light curves when examined by eye, while thresholds that were too low produced three or more, in which case the extra PCs were superfluous or introduced unwanted features in the data. A range of thresholds can produce the same 2 PCs, but the ones we chose were on the lower end of those ranges for both datasets. 
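The PCA step described above amounts to a singular value decomposition of the matrix of log-normalized light curves. A minimal NumPy sketch follows; this is an illustration rather than the pipeline code, the function name and interface are ours, and the 2.5\% variance threshold and 85 skipped cadences are the SAP values quoted above:

```python
import numpy as np

def pca_common_mode_correct(fluxes, var_threshold=0.025, n_skip=85):
    """Sketch of the common-mode PCA correction: normalize each light
    curve by its median, take the logarithm, compute the principal
    components via SVD, and subtract the dominant trends.

    fluxes: (n_stars, n_cadences) array of systematics-corrected flux.
    """
    norm = fluxes / np.median(fluxes, axis=1, keepdims=True)
    X = np.log(norm[:, n_skip:])

    # Rows of Vt are the principal components (the common trends);
    # s**2 is proportional to the variance explained by each PC.
    U, s, Vt = np.linalg.svd(X - X.mean(axis=1, keepdims=True),
                             full_matrices=False)
    frac_var = s**2 / np.sum(s**2)
    n_pc = max(1, int(np.sum(frac_var >= var_threshold)))

    # Coefficients relating each light curve to each retained PC.
    coeffs = U[:, :n_pc] * s[:n_pc]
    model = coeffs @ Vt[:n_pc]

    corrected = np.log(norm)
    corrected[:, n_skip:] -= model
    return np.exp(corrected), Vt[:n_pc], frac_var[:n_pc]
```

Working in log-flux makes the subtraction of scaled trends equivalent to dividing out multiplicative systematics, matching the normalization choice described in the text.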
\subsection{The `CfA' pipeline} By analogy with the Oxford pipeline, the `CfA' pipeline comprises the following three segments: \begin{itemize} \item Light curve extraction from calibrated K2 target pixel files (as described by \citealt{2014PASP..126..948V} and \citealt{2016ApJS..222...14V}) \item Correction for the 6-hour roll systematics (again as described by \citealt{2014PASP..126..948V} and \citealt{2016ApJS..222...14V}) \item Correction of non-roll systematics using the \emph{Kepler} team's Presearch Data Conditioning (PDC) technique \citep{1538-3873-124-919-1000,2014PASP..126..100S}. \end{itemize} \subsubsection{Light curve extraction} \label{subsubsec:extract2} Light curve extraction for the CfA pipeline proceeds following the prescription of \citet{2016ApJS..222...14V}, with a few modifications for working on cluster light curves and in crowded regions. Unlike the Oxford pipeline, the CfA pipeline extracts light curves from the calibrated K2 target pixel files for both stars in the superstamp and in individual target postage stamps. We began by using the World Coordinate System astrometric solution produced by the K2 pipeline to identify the star of interest in either the target postage stamp or the superstamp sub-aperture\footnote{Unlike the Oxford pipeline, we did not splice together the superstamp sub-apertures for our analysis, and instead worked with the $50\times50$ pixel sub-apertures created by the \emph{Kepler} pipeline. When stars fell near the edge of one of the sub-apertures, we spliced together neighboring sub-apertures to produce the light curves.}. We then laid down 20 different stationary, pixelated photometric apertures over the star in question: ten of these apertures are circular, with radii logarithmically spaced between 1.5 and 13 pixels; the other ten are shaped like the \emph{Kepler} pixel response function (PRF) with different sizes. The PRF-shaped apertures tend to be smaller than the circular apertures. 
After defining the 20 different apertures, we summed the calibrated, background-subtracted pixels within those apertures at each time stamp to create raw light curves. \subsubsection{Star-by-star systematics correction} After producing the raw light curves, we removed the dominant systematic errors on 6-hour timescales introduced by K2's unstable pointing using the method described by \citet{2014PASP..126..948V} and \citet{2016ApJS..222...14V}. This correction works by decorrelating variability due to the roll of the spacecraft (which is correlated with the spacecraft's roll angle) from astrophysical variability (which is not correlated with the roll angle). Over the course of a K2 campaign, the motion of the spacecraft causes stars to trace out a path back and forth on the detector, which slowly wanders as DVA shifts their apparent positions. We break up the campaign into a number of shorter time segments, during which the motion of the spacecraft can be approximated as purely one-dimensional, and perform our systematics corrections in those segments individually. We decorrelate the systematics from the astrophysical variability by iteratively dividing away low-frequency variations, determining the relationship between the measured flux and \emph{Kepler}'s roll angle, and fitting a piecewise linear function to that relationship (with outlier exclusion to prevent transits, flares, or other short-timescale variability from being mistaken for a spacecraft systematic and removed). Once we have a piecewise linear function which models the roll-angle dependence in the raw light curve, we divide it away, remove any leftover low-frequency variations from the resulting light curve, and repeat the process. After a few iterations, the process converges, and we divide out the best-fit roll-angle variability from the raw light curve to yield a systematics-corrected light curve, with stellar variability and transits preserved. 
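A toy version of this iterative decorrelation can be written as follows. This is an illustrative sketch only: the real pipeline operates on short time segments and includes the low-frequency filtering step, which we omit here, and the function name, knot placement, and default parameters are ours:

```python
import numpy as np

def roll_decorrelate(flux, roll_angle, n_knots=15, n_iter=3, clip=3.0):
    """Toy sketch of roll-angle decorrelation: iteratively fit a
    piecewise-linear function of roll angle to the flux, with sigma
    clipping to exclude transits/flares, and divide it away."""
    norm = flux / np.median(flux)
    model = np.ones_like(norm)
    knots = np.linspace(roll_angle.min(), roll_angle.max(), n_knots)
    for _ in range(n_iter):
        resid = norm / model
        # Robust outlier exclusion so short events are not fit away.
        dev = resid - np.median(resid)
        mad = 1.48 * np.median(np.abs(dev))
        good = np.abs(dev) < clip * mad
        # Piecewise-linear fit: median flux in roll-angle bins,
        # linearly interpolated between bin centres.
        idx = np.clip(np.digitize(roll_angle[good], knots) - 1,
                      0, n_knots - 2)
        centres, values = [], []
        for k in range(n_knots - 1):
            sel = idx == k
            if sel.any():
                centres.append(roll_angle[good][sel].mean())
                values.append(np.median(resid[good][sel]))
        model = model * np.interp(roll_angle, centres, values)
    return norm / model
```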
After removing systematics from each of the 20 raw light curves produced from the various photometric apertures, we selected a `best' aperture to use in our analysis by determining which aperture produced the light curve with the best photometric precision. In the crowded M67 region, we only allowed the PRF-shaped apertures and circular apertures smaller than 3 pixels in radius to be chosen, to prevent additional stars from falling within the photometric aperture. \subsubsection{Common-mode systematics correction} \label{subsubsec:cfa_commonmode} The CfA star-by-star systematic correction above performs very well at removing roll-induced errors in each light curve. However, just as with the Oxford light curves, there still exist significant common-mode systematics as a result of focus changes, along with other long-term drifts due to DVA and other stochastic errors (such as attitude tweaks, safe modes, etc.). PDC-MAP, on the other hand, was designed to address the focus-induced systematics that were dominant in the original \emph{Kepler} mission. An ideal K2 pipeline can therefore be created by applying the CfA correction first, followed by the PDC correction. PDC uses a method somewhat similar to PCA, but it utilizes a Bayesian maximum \textit{a posteriori} (MAP) approach, in which a subset of highly correlated and quiet stars is used to generate a co-trending basis vector set, which is in turn used to establish a range of robust, `reasonable' fit parameters. These parameters are then used to generate a Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflict simple least-squares fitting. A numerical and empirical approach is taken, in which the Bayesian prior PDFs are generated from fits to the light-curve distributions themselves. 
PDC has two modes of operation: `Single-Scale' (ssMAP) and `Multi-Scale' (msMAP). The former, ssMAP \citep{1538-3873-124-919-1000}, performs the MAP correction in a single band-pass. This has the advantage of instilling a strong regularization on the basis vector fit coefficients, thereby minimizing the removal of stellar signals. However, this is at the expense of more bias and less systematic error removal. msMAP \citep{2014PASP..126..100S}, on the other hand, utilizes an over-complete, discrete wavelet transform, dividing each light curve into multiple channels, or bands. The light curves in each band are then corrected separately, allowing for a better separation of characteristic signals and improved removal of the systematic errors, but at the expense of more stellar signal removal, especially at longer periods. Both methods have their advantages, but generally speaking, msMAP performs better at preserving and cleaning light curves at transit time scales, and ssMAP performs better at preserving long period signals. It is therefore expected that ssMAP will perform better for the studies carried out in this paper. \subsection{Scatter comparison} \label{subsec:p2p} \begin{figure*} \includegraphics[width=1\linewidth]{scatter.pdf} \caption{Scatter versus $K_{p}$ magnitude for the SAP (left column) and SS (right column) samples. The results of the Oxford and CfA pipelines are shown in black and red, respectively. From top to bottom, we plot the scatter in the raw light curves, after star-by-star systematics correction, and after common-mode systematics correction. \label{fig:p2pscatter}} \end{figure*} We here compute the scatter of the M67 SAP and SS stars as a function of \emph{Kepler} magnitude at each step of the Oxford and CfA pipelines in order to conduct a preliminary assessment of pipeline performance prior to the injection tests. 
We use a variant of the median absolute deviation (MAD) scatter \citep{1983ured.book.....H}, estimated in the following manner: \begin{equation} \label{eq:p2p} \textrm{MAD} = 1.48 \times \textrm{median}(|f_{i} - m_{f}|) \end{equation} \noindent where $f_{i}$ is the flux at each cadence $i$, and $m_{f}$ is the median flux. We use the median as opposed to the mean because it is less influenced by outliers. Technically, the median term itself is the MAD scatter; we include the factor of 1.48 as a scaling factor that, under the assumption of a Gaussian distribution of $f$, makes the scatter equivalent to the standard deviation. The scatter is shown in Figure \ref{fig:p2pscatter}. The Oxford pipeline results are shown in black and the CfA, using single-scale PDC-MAP, in red. From top to bottom, the figure shows the scatter for the raw light curves, after star-by-star systematics correction, and after common-mode systematics removal, with the SAP sample on the left and the SS sample on the right. In the SAP sample, the raw scatter values produced by the two pipelines are fairly similar, though the CfA pipeline performs slightly better. For both pipelines, the star-by-star systematic correction reduces the scatter somewhat over the whole magnitude range and forms tighter relationships, but the final common-mode systematic correction has the most drastic effect, leading to scatter values of a few tens of ppm at the bright end. The final scatter obtained by the CfA pipeline is slightly lower than that obtained by the Oxford pipeline over the entire magnitude range. The final median scatter values at $K_{p}=12$ for the Oxford and CfA SAP samples are 544\,ppm and 321\,ppm, respectively. By contrast, the raw outputs of the two pipelines are very different for the SS sample. The extraction step used by the Oxford pipeline, which involves moving apertures, already lessens the pointing-related systematics in the raw light curves of fainter stars. 
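For concreteness, the scatter estimate of Equation~\ref{eq:p2p} is a one-liner; this is our own minimal implementation, not pipeline code:

```python
import numpy as np

def mad_scatter(flux):
    """Robust scatter: 1.48 times the median absolute deviation from
    the median flux, which matches the standard deviation for
    Gaussian-distributed flux."""
    return 1.48 * np.median(np.abs(flux - np.median(flux)))
```

Unlike the standard deviation, this estimate is essentially unchanged by a small fraction of strongly outlying cadences.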
The effect of the star-by-star systematic correction on the scatter is then relatively minor for both pipelines, but the common-mode systematic correction is particularly effective over the whole magnitude range for the CfA pipeline, and for bright stars in the Oxford pipeline. The final results of the two pipelines are fairly similar, though the CfA pipeline performs slightly better for brighter stars (not to the extent of the SAP sample, however). Among the fainter stars, there are several instances where the CfA pipeline leads to final scatters of order 1\%, whereas the corresponding scatter for the Oxford pipeline is much lower. In these cases, the Oxford pipeline generally corrects the systematics better. The final median SS scatter at $K_{p}=12$ is 203\,ppm for the Oxford pipeline and 275\,ppm for the CfA pipeline. The final scatter distributions are very similar for the SAP and SS samples, as well as for the two pipelines. It is worth noting, however, that the scatter values of the CfA SAP and SS datasets are closer to each other than their counterparts in the Oxford pipeline. This is most likely because the CfA light curves are all extracted using the same technique, while the two Oxford datasets are extracted via different methods, and so have different starting points with respect to noise levels. Regardless, these numbers should give us increased confidence that the light curves we are using, from both pipelines, are relatively robust, and that there is no obvious effect in either sample that is not handled reasonably well by either pipeline. \section{Injection Tests} \label{sec:tests} We seek to quantify the limits of our ability to measure stellar rotation periods in K2 M67 light curves. 
To do this, we need to simulate realistic light curves that share the same time sampling and noise properties as those we wish to analyze, but which also contain signals of known period and amplitude (as well as any other parameter that might affect the detectability of the signal). In the absence of a detailed generative model for the various noise sources and systematics in K2 data, we are not able to simulate realistic light curves from scratch. Instead, we inject known rotation signals into the raw versions of the observed SAP and SS light curves (i.e. just after extraction). This introduces a few problems, the most obvious being that some light curves may already contain strong astrophysical variability, to which we later return. We then separately apply the detrending steps (pointing-related and common-mode systematics correction) of the Oxford and CfA pipelines, before attempting to recover the periodic signals. This enables us to assess the extent to which the noise present in the data, as well as the pre-processing applied to reduce this noise, affects the detectability of the periods. As described in Section~\ref{sec:intro}, this paper deals only with sinusoidal injected signals, but the tests are designed to be comprehensive, in that every signal is injected into every light curve. A smaller set of tests involving more realistic signals and comparing different period search methods will form the subject of a later paper. \subsection{The stellar samples} \label{subsec:star_samples} For the purposes of the injection tests, we define the following three stellar samples: \begin{itemize} \item the SAP sample consists of the 1877 stars observed with individual postage stamps in channels 6.1 and 6.2, 1319 of which match our master catalog and 236 of which are confirmed members from G15; \item the SS sample consists of the 976 stars included in the superstamp and with matches in the master catalog, 359 of which are confirmed members from G15. 
75 of the SS stars overlap with the SAP sample. \item in addition, we define a hot star sample consisting of 89 main sequence stars with spectral type A through F located on CCD modules 6, 11, and 16, with effective temperatures ranging from 6300\,K to 10715\,K (the majority falling between 6300 and 7300\,K), $B-V$ values of $-1.0$ to 0.45, and $\log g$ values greater than 4.0. This information was all acquired from MAST. These stars were chosen because they lack an outer convection zone, or have only a very thin one, and so are not able to support a large-scale magnetic field and do not spin down quickly \citep{1962AnAp...25...18S}. Thus, if they do display periodic variability, it is expected to occur on timescales of $\sim2$\,d or less, whether it is due to pulsations (many of these stars lie in the classical instability strip) or rotation (as these hot stars are rapid rotators) \citep{2013A&A...557L..10N}. Therefore, their light curves are expected to contain only noise and systematics, at least on timescales longer than $\sim$2 -- 7\,d, which makes them ideal targets for injection tests. 22 of the hot stars overlap with the SAP set. \end{itemize} \subsection{The injected signals} \label{subsubsec:injsig} We inject sinusoidal signals by directly multiplying the following into the raw light curves of each test sample: \begin{equation} \label{eq:injsig} f_{\rm inj} = \frac{a_{\rm inj}}{2}\sin\left(\frac{2\pi}{P_{\rm inj}}t+\phi_{\rm inj}\right) + 1 \end{equation} \noindent where $f_{\rm inj}$ is the injected signal, $a_{\rm inj}$ is the injected amplitude, $P_{\rm inj}$ is the injected period, $\phi_{\rm inj}$ is the injected phase, and $t$ is time in days. We vary $P_{\rm inj}$ from 5 to 35\,d in intervals of 5\,d, leading to seven period values. We also vary $a_{\rm inj}$, as follows: 0.05\%, 0.10\%, 0.30\%, 0.50\%, 1.00\%, and 3.00\%. 
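The injection of Equation~\ref{eq:injsig} over this grid of periods and amplitudes can be sketched as follows (the function and variable names are ours, for illustration only):

```python
import numpy as np

def inject_signal(flux, t, period, amplitude, phase):
    """Multiply the sinusoid of Equation (eq:injsig) into a raw light
    curve; amplitude is fractional (e.g. 0.001 for 0.1%), t in days."""
    f_inj = 0.5 * amplitude * np.sin(2.0 * np.pi * t / period + phase) + 1.0
    return flux * f_inj

# The 7 x 6 grid of injected periods and amplitudes described above.
periods = np.arange(5.0, 36.0, 5.0)                        # days
amplitudes = np.array([0.05, 0.10, 0.30, 0.50, 1.00, 3.00]) / 100.0
```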
Each light curve, then, is injected with 42 different combinations of periods and amplitudes, leading to over 120\,000 injections in total for each pipeline to process. The phase, $\phi_{\rm inj}$, is selected at random for each injection, but we keep track of its value. Where we have extracted the SS light curves using multiple aperture sizes from the Oxford pipeline, we inject the signals into the version which minimizes the scatter for the corresponding, non-injected light curve. \subsection{The Detection Algorithm} \label{subsubsec:detalg} Because these injection tests involve only sinusoidal signals, we use a modified version of the widely used Lomb-Scargle (LS) periodogram \citep{1982ApJ...263..835S} to recover them. The LS periodogram is essentially equivalent to least-squares fitting of sinusoidal signals \citep{2006MNRAS.370..954I}, and is thus in principle optimal for recovering stable sinusoidal signals in the presence of white Gaussian noise of known variance. Real stellar rotation signals are not strictly sinusoidal; they contain significant power at harmonics of the true period due to the scattered distribution of the active regions on the stellar surface, evolve over time with the active region evolution, and may contain signals at a range of periods due to differential rotation. For this reason, rotation period searches in \emph{Kepler} and K2 data often use other period search methods which do not depend on the assumption of sinusoidality or strict periodicity \citep{2013MNRAS.432.1203M,2017arXiv170605459A}. These methods have been shown to outperform approaches based on least-squares sine-fitting in real and simulated datasets \citep{2015MNRAS.450.3211A}. However, if we know the signal of interest is sinusoidal, then a sine-fitting approach is likely to do at least as well as these alternative, more flexible methods. 
The model we are considering is of the form: \begin{equation} \label{eq:pmodel} m(t) = m_{dc} + \alpha\sin(\omega t + \phi) \end{equation} \noindent where $m_{\rm dc}$ is the light curve's vertical offset, $\alpha$ is the amplitude, $\phi$ is the phase, and $\omega$ is the angular frequency. For each value of $\omega$ (or period, $P=2\pi/\omega$), this model can be expressed as a linear basis model with three basis functions: a constant, a sine term, and a cosine term, both with period $P$. For each trial period we can solve for the values of $m_{\rm dc}$, $\alpha$, and $\phi$ that minimize the sum of the squared residuals, or $\chi^2$ (in space data, the relative measurement uncertainties can be treated as approximately constant for a given star, so the two are equivalent). We evaluate the relative reduction in $\chi^2$ with respect to a constant model, $S = (\chi^2(0)-\chi^2(P))/\chi^2(0)$, as a function of period $P$, to construct a periodogram. For this study, we search for 490 periods ranging from 2 to 100 days, using an evenly-spaced grid in frequency space. \subsubsection{Periodogram normalization} \label{subsubsec:per_norm} \begin{figure} \plotone{power_med_ssredo.pdf} \caption{Median Lomb-Scargle periodogram for the non-injected SS (blue) and SAP (green) datasets for both pipelines. The non-injected hot star dataset for each pipeline separated from the SAP dataset is shown with red dashed lines. The Oxford pipeline is on the left, and the CfA pipeline is on the right. 
Broad peaks at periods of 30\,d and longer in the hot star sample highlight the presence of long-term, non-astrophysical signals in the Campaign 5 light curves. \label{fig:normal}} \end{figure} If the light curves consisted simply of a stable sinusoidal signal plus white Gaussian noise, locating the peak of the Lomb-Scargle periodogram would give us the best-fit period, and evaluating the significance of the detection would be relatively straightforward \citep[see e.g.][]{1986ApJ...302..757H}. However, this is not the case, even after our light curves are subjected to the systematics-correction steps of our two pipelines. The residual systematics, which remain in the light curves despite our best efforts, often lead to broad peaks in the range 30 -- 60\,d, which can overwhelm the peak due to the injected signal. To address this, we perform a collective normalization of the periodograms before searching for the period. \begin{figure} \plotone{36180_compare.pdf} \caption{Example of the effect of periodogram normalization using EPIC 211355490 injected with a period of 20\,d and amplitude of 0.1\%. The top panel shows the systematics-corrected flux from the Oxford (gray) and CfA (blue) pipelines. The middle panel gives the original periodograms for both pipelines, while the bottom panel shows the normalized periodograms. The periods found in each periodogram are indicated by the vertical lines, with the black lines depicting the Oxford periods and the blue lines identifying the CfA periods. The CfA period in both cases is 20.0\,d, while the Oxford period improves from about 62\,d to 19.2\,d. \label{fig:renorm_example}} \end{figure} To normalize the periodograms, we compute the median periodogram as a function of trial period for stars which we expect to share the same systematic noise properties (i.e. separately for the SAP and SS sets, and for the Oxford and CfA pipelines).
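In code, this collective normalization reduces to dividing each periodogram by the per-period median of the reference set. The following is a schematic sketch with toy data; the array shapes and names are our own, not the pipelines':

```python
import numpy as np

def normalize_periodograms(S, S_ref):
    """Divide periodograms by the median of a reference set.

    S     : (n_stars, n_periods) periodograms to normalize
    S_ref : (n_ref, n_periods) periodograms of NON-injected light curves
            sharing the same systematics (e.g. one pipeline's SS set)
    Returns S' = S / S_hat, where S_hat is the median over the
    reference stars at each trial period.
    """
    S_hat = np.median(S_ref, axis=0)   # median periodogram, per trial period
    return S / S_hat                   # suppresses the shared excess power

# toy example: a broad systematic bump shared by all stars, plus one star
# that also carries a genuine peak at trial-period index 120
rng = np.random.default_rng(0)
n_periods = 490
bump = 1.0 + 5.0 * np.exp(-0.5 * ((np.arange(n_periods) - 300) / 40.0) ** 2)
S_ref = bump * rng.uniform(0.8, 1.2, size=(200, n_periods))
S_star = bump.copy()
S_star[120] += 3.0                     # the genuine signal
S_norm = normalize_periodograms(S_star[None, :], S_ref)[0]
```

In this toy example the raw periodogram peaks at the shared systematic bump, while the normalized version peaks at the genuine signal, mirroring the behaviour shown in Figure~\ref{fig:renorm_example}.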
Each star's periodogram is then divided by the median periodogram, thereby suppressing the excess power seen in many light curves. The normalized periodogram is thus given by $S'(P) = S(P)/\hat{S}(P)$, where $\hat{S}(P)$ is the median periodogram value for period $P$. The median periodograms are evaluated from the non-injected versions of the light curves to avoid the induced power at fixed periods from their injected counterparts, and they can be seen in Figure~\ref{fig:normal}, with the Oxford SS and SAP median periodograms on the left and the CfA on the right. The SAP median periodograms for both datasets include the hot stars, as they were processed in both cases using the same techniques. However, Figure~\ref{fig:normal} also shows the hot star median periodograms separate from the rest of the SAP set for both pipelines, given by red dashed lines. This emphasizes that the broad peaks in both pipelines at long periods are most likely not a result of true astrophysical signals, as we would not expect periods much longer than about 5\,d in the hot star samples. We also note that the median periodogram for the SAP set from the Oxford pipeline is higher overall and peaks at longer periods than the SS sample, hinting that the former likely contains systematics with larger amplitudes on longer timescales. This is consistent with the relative amplitudes of the PCA trends compared to noise for the two samples, which are smaller for the SS (see Figure~\ref{fig:trends}). It also means that long-period signals will be more heavily suppressed by normalization in the SAP than in the SS sample. Figure \ref{fig:renorm_example} illustrates the effect of the periodogram normalization for an individual star from the SAP set. The top panel shows the light curve of EPIC 211355490 injected with a period of 20\,d and amplitude of 0.1\%, fully processed with both the Oxford (in gray) and the CfA (in blue) pipelines.
The middle panel depicts the original periodograms prior to normalization for both pipelines, while the bottom panel gives the normalized versions. The respective periods are indicated by vertical lines. There is a relatively strong, long-period signal in the Oxford pipeline that is not present in the CfA (unsurprising given the differences in Figure~\ref{fig:normal}), and this is what the Lomb-Scargle periodogram finds in the former, settling on a period of 62\,d. On the other hand, the normalized Oxford periodogram has a peak at 19.2\,d, which closely matches the injected period. The normalization of the periodogram thus allowed us to find the `true' period of the Oxford version of this light curve when otherwise it would have been lost to the systematic, long-term trend. The CfA pipeline, however, appears to have done a better job of cleaning up systematics in the light curve, and found peaks of 20.0\,d in both the original and normalized periodograms. \subsubsection{Detection threshold} Once we have identified the peak in each normalized periodogram, we must decide whether it is significant. After testing a number of different possible schemes, we opted for the following detection threshold definition: \begin{equation} \label{eq:detection} S'_{\rm T} = \max(S'_{90} \times C, S_{\rm min}) \end{equation} \noindent where $S'_{\rm T}$ is the threshold value and $S'_{90}$ is the $90^{\rm th}$ percentile of the normalized periodogram for a given light curve. $C$ and $S_{\rm min}$ are tuning parameters which we can vary to alter the relative and absolute components of the threshold, respectively. For example, if $S'_{90} = 4$, and the value for $C$ is set at 3, then the maximum power of the normalized periodogram must be 12 or greater for the associated period measurement to be considered a detection. $S_{\rm min}$ comes into play when $S'_{90} \times C$ is so low that most peaks would qualify as a detection; it ensures a minimum value for the threshold.
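The threshold rule of Eq.~(\ref{eq:detection}) can be expressed compactly as follows. This is a sketch, not the actual implementation; the function name and the toy periodograms are ours, and the default tuning values reflect those adopted later in the text:

```python
import numpy as np

def detect(S_norm, periods, C=4.0, S_min=4.0):
    """Apply the detection threshold S'_T = max(S'_90 * C, S_min).

    S_norm  : normalized periodogram for one light curve
    periods : trial periods corresponding to S_norm
    Returns (period, peak) if the highest peak clears the threshold,
    or None for a `no detection' result.
    """
    S90 = np.percentile(S_norm, 90)   # 90th percentile of the periodogram
    threshold = max(S90 * C, S_min)   # relative and absolute components
    i = int(np.argmax(S_norm))
    if S_norm[i] >= threshold:
        return periods[i], S_norm[i]
    return None

# toy periodograms: one peak well above threshold, one below it
periods = np.linspace(2.0, 100.0, 490)
strong = np.ones(490)
strong[100] = 20.0                    # clears max(4, 4) = 4
weak = np.ones(490)
weak[100] = 3.0                       # falls below the minimum threshold
hit = detect(strong, periods)
miss = detect(weak, periods)
```

The `max` of a relative and an absolute term means the threshold adapts to noisy periodograms (via $S'_{90}$) without ever dropping below a fixed floor.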
Both $C$ and $S_{\rm min}$ can then be varied to test their effect on the completeness and reliability of the period search, which we discuss in Section~\ref{subsubsec:vary_thresh}. If the highest peak in the normalized periodogram passes the threshold defined in Eq.~(\ref{eq:detection}), then we have a detection, and we record the corresponding period, as well as the best-fit amplitude and phase. Otherwise, a result of `no detection' is recorded. \subsection{Evaluating the results} To evaluate the results of the injection tests, we need to quantify how sensitive the period search is, i.e. what fraction of the injected signals are successfully recovered, but also how trustworthy the detections are, i.e. what fraction of the detections are `valid', meaning that the measured period and phase are within some tolerance of the injected values. \subsubsection{Valid detections, completeness and reliability} Here we define several key terms in understanding the statistics from this study: \begin{itemize} \item Validity: We consider a detection \emph{valid} if the recorded period and phase are within 20\% of the injected values. \item Completeness: For a given range of \emph{injected} period and amplitude, we define the \emph{completeness} as the fraction of injected signals which led to a valid detection. We also calculate a \emph{threshold error} statistic, which records the fraction of cases where the period and phase corresponding to the highest peak in the normalized periodogram were within the valid range, but the peak was not high enough to pass the detection threshold. \item Reliability: For a given range of \emph{recorded}, or measured, period and amplitude, we define the \emph{reliability} as the fraction of recorded detections which were valid. 
\end{itemize} Another way to understand the definitions of completeness and reliability given above is to look ahead at Figure~\ref{fig:detvinj1}, which shows (for the hot star sample) the detected versus injected periods for different injected amplitudes. To evaluate completeness, we consider a vertical bin in one of these diagrams around one of the injected periods. The completeness is given by the number of detections (colored points) in that bin which were valid, divided by the total points in the bin. To be valid, a detection must lie within the gray shaded area, which indicates a $\le 20\%$ period error. In addition, a valid detection must also have a $\le 20\%$ phase error (not shown on the figure). We can think of reliability as being similar, but considering a horizontal rather than vertical bin in the same kind of diagram. (This is not quite correct, since reliability is computed for a given detected period and amplitude, whereas the figure shows the injections split by injected amplitude.) \subsubsection{Calculating Uncertainties} To estimate the uncertainties, we assume Poisson counting statistics for both completeness and reliability and define the uncertainty as: \begin{equation} \label{eq:uncertain} \sigma = \pm\frac{1}{\sqrt{N}} \end{equation} \noindent where $N$ is the number of injections per injected period and amplitude bin (i.e. no. of injected amplitudes $\times$ no. of injected periods $\times$ no. of stars in each test sample) in the case of completeness. For reliability, $N$ is the total number of detections in each detected period and measured amplitude range. Therefore, for each test set within each pipeline, the individual completeness values will have the same uncertainty, while the reliability uncertainties will vary from bin to bin. We recognize this is a little simplistic, but a more sophisticated approach would require repeating the injections many times, which would be far too time-consuming.
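The bookkeeping for these statistics can be sketched as follows. This is a simplified illustration under the 20\% validity tolerance; the function names are ours, and treating the phase as a plain fractional difference is our literal reading of the definition:

```python
import numpy as np

def is_valid(P_inj, phi_inj, P_det, phi_det, tol=0.2):
    """A detection is valid if the recorded period and phase are both
    within 20% of the injected values (literal fractional tolerance)."""
    return (abs(P_det - P_inj) <= tol * abs(P_inj) and
            abs(phi_det - phi_inj) <= tol * abs(phi_inj))

def fraction_with_uncertainty(valid_flags):
    """Fraction of True flags in a bin, with the 1/sqrt(N) uncertainty
    used in the text. For completeness, pass one flag per injection in
    the bin; for reliability, one flag per recorded detection."""
    N = len(valid_flags)
    return sum(valid_flags) / N, 1.0 / np.sqrt(N)

# e.g. a completeness bin with 100 injections, 45 of them valid detections
frac, err = fraction_with_uncertainty([True] * 45 + [False] * 55)
```

The same helper serves both statistics; only the population of flags passed in (injections versus recorded detections) differs, which is why the completeness uncertainties are uniform within a test set while the reliability uncertainties vary from bin to bin.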
For our purposes, this is a good first approximation. \subsubsection{Intrinsically variable stars} We injected signals into real K2 light curves in order to faithfully reproduce the noise properties of actual M67 light curves. In doing this, we are implicitly assuming that most of them do not already contain a detectable intrinsic periodic signal. However, of course, some of them do. In such cases, a detection might be caused by the intrinsic rather than the injected signal. This can lead to detections which appear `invalid', but are in fact correct, biasing the completeness and reliability statistics. To avoid this, we run the period search on the non-injected versions of the light curves. If a detection occurs in any of these, the star is labelled as intrinsically variable, and is excluded from the completeness and reliability calculations. This means that only a fraction of the total number of injections we performed is actually used in the final results. This is not entirely satisfactory, since in reality we do not know whether the signals detected in the light curves without injections were indeed due to the intrinsic variability of the star or to systematics. Furthermore, the number of stars excluded as `variables' depends on the detection threshold used. However, we were unable to devise a more satisfactory solution. We note that, for reasonable threshold values, the variable stars represent a relatively small minority of the test light curves. \subsubsection{Varying the threshold} \label{subsubsec:vary_thresh} \begin{figure*} \plotone{complete_compare.pdf} \caption{Comparison of completeness values for an injected period of 25\,d at all injected amplitudes for different values of $C$ (top row) and $S_{\rm min}$ (bottom row). For the top row, $S_{\rm min}$ was fixed at 5. For the bottom row, $C$ was fixed at 4. The legends are also marked with the number of stars in each injected period and amplitude bin for each value of $C$ and $S_{\rm min}$.
It is worth noting that by $S_{\rm min}=12$, all sensitivity at periods greater than 25\,d was gone. We used the Oxford pipeline for these tests. \label{fig:comp_compare}} \end{figure*} \begin{figure*} \plotone{reliability_compare.pdf} \caption{Comparison of reliability values for a measured amplitude between 0.25\% and 0.45\% across all detected periods for different values of $C$ (top row) and $S_{\rm min}$ (bottom row). For the top row, $S_{\rm min}$ was fixed at 5. For the bottom row, we fixed $C$ at 4. It is worth noting that we lost sensitivity for detections greater than 32.5\,d at $S_{\rm min}=8$, and for detections greater than 27.5\,d at $S_{\rm min}=12$. We used the Oxford pipeline for these tests. \label{fig:rel_compare}} \end{figure*} The detection threshold obviously has an effect on the results of this study. Lower thresholds increase completeness, but also lead to a drop in reliability in the same bins. If measured rotation periods are to be compared with theoretical models, it is preferable to prioritize high reliability over slight gains in completeness, so long as the completeness itself is well-measured to account for missed detections. We therefore recorded the normalized periodogram peak $S'$; the corresponding period, phase, and amplitude; and the $90^{\rm th}$ percentile of the normalized periodogram, $S'_{90}$, for each injected signal in each light curve. It was then trivial to vary the values of $C$ and $S_{\rm min}$ in Eq.~(\ref{eq:detection}) and examine the impact that this had on completeness, reliability, and the `variables' excluded from the statistics. We experimented with $C=0$ to $C=6$ while holding $S_{\rm min}$ at 5, and with $S_{\rm min}$ from 0 to 12 while holding $C$ at 4, in order to test the effect that each parameter has on completeness and reliability. We used the Oxford pipeline for these two tests. A sample of the results of these tests can be seen in Figures~\ref{fig:comp_compare} and \ref{fig:rel_compare}.
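Because the peak and percentile values were recorded for every injection, re-running the threshold sweep is just array arithmetic. The following schematic sketch uses random toy values; the function and array names are ours:

```python
import numpy as np

def sweep_threshold(peaks, S90s, C_values, S_min_values):
    """Re-apply S'_T = max(S'_90 * C, S_min) to recorded peak statistics.

    peaks : recorded highest normalized-periodogram peak per injection
    S90s  : recorded 90th percentile per injection
    Returns a (len(C_values), len(S_min_values)) array giving the
    fraction of injections passing the threshold for each (C, S_min).
    """
    peaks = np.asarray(peaks)
    S90s = np.asarray(S90s)
    frac = np.empty((len(C_values), len(S_min_values)))
    for i, C in enumerate(C_values):
        for j, S_min in enumerate(S_min_values):
            thresh = np.maximum(S90s * C, S_min)  # per-injection threshold
            frac[i, j] = np.mean(peaks >= thresh)
    return frac

# toy example: stricter thresholds pass monotonically fewer injections
rng = np.random.default_rng(1)
peaks = rng.uniform(2.0, 30.0, 5000)
S90s = rng.uniform(1.0, 2.0, 5000)
frac = sweep_threshold(peaks, S90s,
                       C_values=[0, 2, 4, 6],
                       S_min_values=[0, 4, 8, 12])
```

Completeness and reliability can then be re-derived for each $(C, S_{\rm min})$ pair from the surviving detections, without repeating any periodogram computations.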
Figure~\ref{fig:comp_compare} shows the effect on completeness of changing both tuning parameters at an injected period of 25\,d across all injected amplitudes. The top row of each figure shows the effect of changing $C$, while the bottom row shows the effect of changing $S_{\rm min}$. In both rows, we print the number of stars in each injected period and injected amplitude bin. We can see that at $C=0$ and $C=1$, the results are dominated by the minimum threshold value, but they have relatively few stars per bin. Completeness then peaks at $C=2$ before starting to decline again. There is also an increase of $\sim$500 stars per bin at $C=2$, followed by another 500 stars at $C=3$. Beyond $C=3$, the number of stars per bin still grows, but less rapidly. For $S_{\rm min}$, the completeness essentially stays the same until about $S_{\rm min}=8$. By $S_{\rm min}=12$, the completeness drops to about half its value at $S_{\rm min}=0$ for amplitudes of 0.5\% and less. Not visible in Figure~\ref{fig:comp_compare} is the fact that, by $S_{\rm min}=12$, all sensitivity at periods greater than 25\,d, at any amplitude, is lost. The number of stars per bin increases with $S_{\rm min}$, but very slightly. Thus, as expected, $C$ has a much greater effect than $S_{\rm min}$ on both the number of stars per bin and, more importantly, the completeness, as long as $S_{\rm min}$ is not too high. Likewise, in Figure~\ref{fig:rel_compare}, we can see an illustration of how $C$ and $S_{\rm min}$ affect reliability across the ranges of measured periods in this study. Here we have fixed the measured amplitude to the range of 0.25\% to 0.45\%, just above the solar range. Again, the top row shows the effect of changing $C$. As expected, reliability generally increases as $C$ increases, noticeably so until about $C=4$. The effect of changing $S_{\rm min}$, however, is largely negligible except at the long period ranges. There, reliability decreases until $S_{\rm min}=8$.
Here the reliability is zero, but based on very few detections; beyond this, the number of detections drops to zero for periods longer than $\sim$30\,d. Figures~\ref{fig:comp_compare} and \ref{fig:rel_compare} demonstrate the clear trade-off between completeness and reliability. Using large values for $C$ and $S_{\rm min}$, i.e. a stringent detection threshold, leads to low completeness and large threshold error, but excellent reliability. A high detection threshold also means that fewer stars are marked as intrinsically variable, so that more injected light curves are used to compute the final statistics. As the threshold is gradually lowered, completeness increases and threshold error decreases, at the cost of reduced reliability. The number of variable stars also grows, and for very low detection thresholds almost all stars are excluded, leaving relatively few in the calculation of the results. Therefore, we settled on $C=4$ and $S_{\rm min}=4$ as the best compromise. When using these values, 238 of the 1877 stars in the Oxford SAP set were marked as intrinsically variable, as were 120 of the 976 stars in the SS set and 23 of the 89 hot stars, leaving 1639, 856, and 66 stars in each set, respectively. Using the same tuning parameters for the CfA pipeline, we are left with 1587 SAP stars, 848 SS stars, and 70 hot stars. \section{Results} \label{sec:results} We restrict ourselves to presenting the results in this section and defer a more detailed discussion to Section~\ref{sec:discussion}. The completeness and reliability results for the CfA pipeline come from the single-scale PDC-MAP, or `PDC-ssMAP'. Across the board, PDC-ssMAP performed better, for our purposes, than PDC-msMAP, as expected (see Section~\ref{subsubsec:cfa_commonmode}). \subsection{Recovered versus injected periods} \begin{figure*} \plotone{ox_hot_detvsinj.pdf} \caption{Detected versus injected period at each of the injected amplitudes for the hot star sample processed with the Oxford pipeline.
Colored points represent cases which passed our detection threshold; those which did not are shown as smaller black points. The gray shaded area shows the region where the injected and detected periods are within 20\% of each other. \label{fig:detvinj1}} \end{figure*} \begin{figure*} \plotone{cfa_hot_detvsinj.pdf} \caption{Same as Figure~\ref{fig:detvinj1}, but for the hot star sample processed with the CfA pipeline. \label{fig:av_detvinj1}} \end{figure*} \begin{figure*} \plotone{ox_sap_detvsinj.pdf} \caption{Same as Figure~\ref{fig:detvinj1}, but for the SAP sample processed with the Oxford pipeline. \label{fig:detvinj2}} \end{figure*} \begin{figure*} \plotone{cfa_sap_detvsinj.pdf} \caption{Same as Figure~\ref{fig:detvinj1}, but for the SAP sample processed with the CfA pipeline. \label{fig:av_detvinj2}} \end{figure*} \begin{figure*} \plotone{ox_ss_detvsinj.pdf} \caption{Same as Figure~\ref{fig:detvinj1}, but for the SS sample processed with the Oxford pipeline. \label{fig:detvinj3}} \end{figure*} \begin{figure*} \plotone{cfa_ss_detvsinj.pdf} \caption{Same as Figure~\ref{fig:detvinj1}, but for the SS sample processed with the CfA pipeline. \label{fig:av_detvinj3}} \end{figure*} Figures~\ref{fig:detvinj1} through \ref{fig:av_detvinj3} show the detected versus injected periods for the hot star, SAP, and SS samples from the Oxford and CfA pipelines. Each of these figures is made up of six panels, corresponding to the injected amplitudes of 3.0, 1.0, and 0.50\% (top row), and 0.3, 0.1, and 0.05\% (bottom row). Within each panel, the colored circles represent detections, while the cases that did not pass the detection threshold are shown as smaller black dots. The gray shaded area in each plot shows the period validity range, though recall that a $\pm 20\%$ phase match is also required. Phase considerations aside, a black point in the shaded area is a missed valid detection, whereas a colored circle outside the shaded area is an invalid detection.
The stars marked as intrinsically variable were excluded from these figures. \subsection{Completeness and Reliability} \label{subsec:comprel} \begin{figure*} \includegraphics[width=1\linewidth]{combo_completeness_redo.pdf} \caption{Completeness tables for the hot star (left), SAP (centre), and SS samples (right), detrended with the Oxford pipeline (top row) and the CfA pipeline (bottom row). The grids have the injected period bin along the x-axis and the injected amplitude along the y-axis. The completeness for the solar case is highlighted in blue for each dataset. \label{fig:completeness}} \end{figure*} The completeness results for the Oxford and CfA pipelines are shown in the top and bottom rows of Figure~\ref{fig:completeness}, respectively. We have identified in blue the completeness for the solar case (an injected amplitude of 0.1\% and an injected period of 25\,d) for each dataset. Tables~\ref{tab:hotcomplete} through \ref{tab:ss_av_complete} in Appendix \ref{sec:result_tables} record the completeness values and uncertainties for each injected period-amplitude bin for all test samples from each pipeline. The reliability results for the Oxford and CfA pipelines are shown in the top and bottom rows of Figure~\ref{fig:reliability}, respectively. The bins with red dots are those with relatively few detections. We consider bins with 100 detections or fewer in the SAP and SS samples, and with 30 or fewer in the hot star sample, as insignificant. All three samples, from both pipelines, have very few detections in the longest period, lowest amplitude bins. The corresponding (generally very high) reliability values therefore have large uncertainties and should not be treated as very meaningful. The actual values with uncertainties are presented in Tables~\ref{tab:hotreliability1} through \ref{tab:ss_av_reliability1} in Appendix \ref{sec:result_tables}.
\begin{figure*} \includegraphics[width=1\linewidth]{combo_reliability.pdf} \caption{Reliability for the hot star (left), SAP (centre), and SS samples (right), detrended with the Oxford pipeline (top row) and the CfA pipeline (bottom row). While the bins shown here roughly match those in Figure~\ref{fig:completeness}, here they represent \emph{detected} rather than injected period and amplitude. Although the same number of injections was carried out for every injected period and amplitude, the measured and injected values do not necessarily match, so the number of detections in each bin varies. The red circles mark where there were fewer than 30 detections in the hot star sample and fewer than 100 in the SAP and SS samples. \label{fig:reliability}} \end{figure*} \begin{figure*} \includegraphics[width=1\linewidth]{combo_ampvamp.pdf} \caption{Injected vs. detected amplitude for the valid detections in the hot star (left), SAP (centre), and SS samples (right), detrended with the Oxford pipeline (top row) and CfA pipeline (bottom row). The associated injected periods are marked in different colors. \label{fig:ox_ampvamp}} \end{figure*} Finally, Figure~\ref{fig:ox_ampvamp} shows the injected versus detected amplitudes for valid detections from the Oxford (top) and CfA (bottom) pipelines. The detected amplitudes can differ significantly from the injected ones for several reasons. First, the light curves into which the signals were injected may already contain some power at the corresponding period. Second, the intrinsic noise levels of the light curve will affect the amplitude of the flux as a whole. Lastly, the systematics-correction steps can alter the injected signal, in some cases leading to measured amplitudes that are smaller than the injected ones (e.g. the Oxford SS).
\section{Discussion} \label{sec:discussion} We now discuss the results from our injection tests, starting with a brief evaluation of the effects of periodogram normalization in light of the completeness and reliability results in Section \ref{subsec:normal}. We then look at the `ideal case' of the hot star results in Section \ref{subsec:hot_discuss} before looking at the SAP and SS samples in Section \ref{subsec:sap_ss_discuss}. Next, we provide an overall comparison of the samples and the pipelines in Section \ref{subsec:compare_stuff}. Finally, we discuss the implications of the injection tests for M67 rotation studies using K2 data. \subsection{Residual long-term trends} \label{subsec:normal} The median periodograms shown in Figure~\ref{fig:normal} in Section~\ref{subsubsec:per_norm} tell us a lot about the residual trends present in the light curves after full processing by both the Oxford and CfA pipelines. For the SAP and SS samples, the median periodogram rises steeply for periods above 20\,d, peaking at $\sim$50 -- 55\,d and $\sim$40 -- 45\,d, respectively, before decreasing again and flattening off for periods $>80$\,d (to which we expect little or no sensitivity in K2 light curves anyhow). While evident in both pipelines, this feature is much more dramatic in the Oxford pipeline. Importantly, the presence of this power in both hot star samples, at periods relevant to a study of M67, shows that it is largely non-astrophysical, despite our systematics-correction steps. Furthermore, the differences between the SAP and SS samples are significant: our \emph{a priori} expectation was that the SS sample might be more problematic than the SAP one due to increased crowding in the densest parts of M67. However, the median periodograms tell a different story, with considerably more residual power in the 30 -- 70\,d period range in the SAP than in the SS light curves for both pipelines.
If the raw periodograms were used for period detection, these residual trends would lead to numerous false detections at mid-to-long periods (see Appendix~\ref{subsec:nonorm}). Our normalization procedure, designed to avoid this, appears successful, since we record consistently high reliability wherever we have a significant number of detections (see Figure~\ref{fig:reliability} and Tables~\ref{tab:hotreliability1} to \ref{tab:ss_av_reliability1} in Appendix~\ref{sec:result_tables}). However, normalization also suppresses the detection of real signals at longer periods, as is apparent in Figure~\ref{fig:completeness}. While we may avoid the trap of lingering systematics by normalizing the periodograms, this comes at the potential cost of missing real astrophysical signals. \subsection{The Hot Star Sample} \label{subsec:hot_discuss} The set of 89 hot stars is an ideal test case, as they should not show significant rotational modulation beyond about 5\,d. Therefore, we can be confident that if we see a long period where we did not inject one, it is most likely the result of the lingering systematics to which we alluded above. For the Oxford pipeline, we were left with 66 stars after removing stars found to have intrinsic variability (i.e. they likely have an astrophysical signal that could interfere with the injection test results) using our detection threshold on the non-injected versions of the light curves. Of the 23 stars marked as variable, 19 had detected periods less than 10\,d and the rest had periods less than $\sim$16\,d, apart from one (see Figure~\ref{fig:hotstarhist}). In the CfA pipeline, we used 70 stars in the analysis. All 19 variables had periods under 12\,d.
These numbers reinforce the fact that our normalization works reasonably well for this particular dataset in both pipelines. The presence of flagged variables with periods longer than 5\,d suggests that the set of hot stars may be contaminated by background stars, or that there are a number of cooler F stars in the sample; nonetheless, the sample is good enough for our purposes. As we can see in Figures~\ref{fig:detvinj1} and \ref{fig:av_detvinj1}, we do a good job of recovering the injected periods from both pipelines in the hot star sample. Some of the invalid detections likely come from low-amplitude variables (possibly pulsators) not removed from the sample. Where these start to become detections at the lowest injected amplitudes, the injected signal may be `boosting' them (i.e. increasing the amplitude of the intrinsic signal) just enough for them to be detected. Even for very regularly sampled data like K2, injecting a signal at a given period affects the periodogram at other frequencies in a complex way. However, a number of the invalid detections could also be the result of the normalization, which promotes shorter periods at the expense of longer ones: when dividing out the median periodograms, the much lower power seen at short periods increases short-period significance relative to longer periods. The hot star completeness values for both pipelines can be seen in the leftmost panels of Figure~\ref{fig:completeness} and in Tables~\ref{tab:hotcomplete} and \ref{tab:hot_cfa_complete} in Appendix~\ref{sec:result_tables}. As expected from simple signal-to-noise arguments, we see completeness decrease with increasing injected period and decreasing injected amplitude for both pipelines. We point out that the best-case scenario for recovering a solar-like signal of 25\,d period and 0.1\% amplitude is $\sim$45\%, from the hot star dataset.
However, the limited size of the hot star sample means that the individual completeness values are somewhat uncertain ($\pm$ $\sim$12\%) for both pipelines. The reliability in the hot star sample from either pipeline is consistently high ($>$90\%). High reliability means that we can typically trust a detection made within a measured amplitude and period range. We can see from Figure~\ref{fig:reliability} that where we have a significant number of detections (more than 30 for the hot star samples), we can be fairly confident in our results, though again, the uncertainties are relatively large for the hot star samples. The high reliability also indicates that normalization did not introduce low frequency signals in light curves that did not have a strong intrinsic signal in the first place, again reinforcing the validity of this step. \subsection{SAP and SS Completeness and Reliability} \label{subsec:sap_ss_discuss} We have shown that the hot star sample is important for validating our injection test procedures and highlighting the presence of non-astrophysical power around 25\,d and longer in the K2 Campaign 5 light curves. We now discuss the results from the SAP and SS samples, which specifically illustrate the complexity of a period search in M67. Figures~\ref{fig:detvinj2} to \ref{fig:av_detvinj3}, which show the detected versus injected periods for the SAP and SS datasets from both pipelines, are a lot messier than their hot star counterparts, most likely due to the larger sample sizes and more diverse stellar populations within. However, like the hot stars, detections become more difficult and less reliable as the injected period increases and the injected amplitude decreases. In addition, lingering power around 10 -- 20\,d seems to exist in the SAP and SS light curves. 
This power either was not originally strong enough to mark the light curves as variables until boosted by a low-amplitude injected signal, or is the result of half-period harmonic measurements from the Lomb-Scargle periodogram. The completeness results for both the SAP and SS samples are in the middle and far right panels, respectively, of Figure~\ref{fig:completeness}. As with the hot stars, we see the same general trend of diminishing completeness with increasing injected period and decreasing amplitude, but exacerbated. For both samples from either pipeline, collapsing the figures onto either axis shows the completeness falling to around or below 50\% for amplitudes $\leq$0.50\% and periods $\geq$20\,d, showing just how hard it is to detect long-period, low-amplitude signals in either case. Of particular importance is the solar case, highlighted in blue in Figure~\ref{fig:completeness}, where the injected period is 25\,d and the amplitude is 0.10\%. Critically, the completeness here is $\sim$15\% or lower for both the SAP and SS samples in either pipeline. Even in the best-case scenario of perfect sinusoidal signals, solar-like variability, which we expect in M67, is clearly difficult to find with our detection criteria. The middle and far right panels of Figure~\ref{fig:reliability} present the reliability for the SAP and SS samples. As with the hot stars, where there is a significant number of detections ($>100$ for the larger datasets), the reliability for the SAP and SS samples rarely drops below 90\% for both pipelines. This is encouraging, as it shows that the procedure we have developed should lead to relatively few false alarms, i.e. where we do get a detection, we can generally trust the period measurement. There are a few striking features in Figure~\ref{fig:reliability}, however. Detecting long periods (particularly at $\sim$35\,d) in the Oxford SAP sample appears to be not only more difficult than in the CfA pipeline, but also less reliable.
The low reliability in this period range could either be a result of periodogram normalization or a failure to correct the residual long-term trends in the data, which the CfA pipeline does better. The Oxford SAP sample also has lower reliability in the shortest period, lowest amplitude bin. This could be an effect of promoting short-period signals when dividing out the median periodogram. We experimented with adding a `floor' to the Oxford median periodograms, by which we set any value below a given threshold to that floor in an attempt to reduce this effect. We tested floor values of 0.005 and 0.01. While these did improve reliability in the short period, low amplitude bin, the floors tended to reduce reliability overall in all three Oxford samples, particularly the hot star and SS sets, and especially around mid-range periods. Thus, we decided to exclude a floor from the study to avoid further artificial effects. One surprising feature in the SS samples is the lack of detections across all periods at amplitudes of 2.0\% and greater for the Oxford pipeline, even though many 3.0\% signals were injected. This can be seen in both Figures~\ref{fig:reliability} and \ref{fig:ox_ampvamp}. If we then compare the number of SS detections from one pipeline to another in the amplitude ranges 0.75\% -- 2.00\% and 0.075\% -- 0.25\% in Tables~\ref{tab:ssreliability1} and \ref{tab:ss_av_reliability1} in Appendix~\ref{sec:result_tables}, there are systematically more detections in the Oxford pipeline for these ranges than in the CfA pipeline. This indicates that the amplitudes of the Oxford SS sample are suppressed, primarily after the PCA. Though the PCA may also suppress amplitudes in the Oxford SAP sample, it is more evident in the SS sample due to the high noise amplitude of the SS PCs compared to the SAP PCs. The amplitude suppression could also indicate that the Oxford pipeline removes some intrinsic variability in addition to the systematics. 
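The median-periodogram normalization and optional floor discussed above can be sketched as follows. This is a minimal NumPy illustration, not the pipelines' actual implementation; the function name and the exact way the floor is applied are our assumptions:

```python
import numpy as np

def normalize_periodogram(power, median_power, floor=None):
    """Divide a star's Lomb-Scargle power by the sample's median
    periodogram to suppress common-mode (systematic) peaks.

    An optional `floor` clips small median values so that dividing by a
    near-zero median does not artificially promote power (e.g. at short
    periods, the effect discussed in the text)."""
    med = np.asarray(median_power, dtype=float).copy()
    if floor is not None:
        med = np.maximum(med, floor)
    return np.asarray(power, dtype=float) / med
```

Clipping the median in this way trades a uniform normalization for protection against inflated normalized power wherever the median periodogram happens to be very small.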
However, it seems that despite the suppression, the signals remain intact enough to be detected at a similar rate, just at lower amplitudes. \subsection{Comparing the Samples and Pipelines} \label{subsec:compare_stuff} In both pipelines, the hot star sample has the highest overall completeness and reliability. This is unsurprising given its smaller sample size and lack of diversity in terms of variability. Due to the general absence of rotational modulation and other competing signals in the original light curves, the hot stars should also better preserve the injected sinusoids. Finally, the average brightness of the hot star dataset is $K_{p}=11.0$, while the combined average of the SAP and SS datasets is $K_{p}=15.3$, and the number of valid detections generally decreases with increasing magnitude. The Oxford SS completeness is slightly lower than the Oxford SAP at short periods and large amplitudes, but it is better for periods $\geq15$\,d and amplitudes below about 0.50\%. In addition, the SS sample is more reliable for both the longest and shortest detected periods. These differences are likely due to the separate light curve extraction methods in the Oxford pipeline. The SAP sample is extracted with pixellated apertures, which may not be optimal for a crowded field, even in the outer regions of the cluster. The SS sample, however, uses deblended, circular apertures, making it better able to deal with crowding. On the other hand, the CfA SAP completeness and reliability are comparable (or slightly higher) across most bins when compared against the CfA SS sample. The slight advantage of the SAP sample is probably due to less crowding, as the two datasets are extracted and processed the same way in the CfA pipeline. In addition, the larger size of the SAP sample means that there is a larger matrix from which to characterize and correct the common-mode trends via PDC-MAP (or PCA, in the case of the Oxford pipeline).
This could partially help explain the generally higher completeness values from the SAP sample in either pipeline, as well as the higher reliability in the case of the CfA pipeline. With respect to completeness, the CfA pipeline seems to perform slightly better than the Oxford pipeline, particularly in the hot star and SAP samples. Thus, Figure~\ref{fig:summary} provides the average completeness (and associated reliability in smaller text) from the CfA SAP and SS samples as a summary of the best we can do with the K2 Campaign 5 M67 data, with the solar case highlighted in red. The differences between the Oxford and CfA pipelines are rooted in the approaches each takes to produce corrected light curves from raw K2 data. The moving apertures and deblending procedures during light curve preparation give the Oxford pipeline an edge in certain cases, particularly with the SS sample, but the larger, more optimized aperture selection from the CfA pipeline is superior elsewhere. While the Oxford pipeline typically better removes systematics from strongly variable stars, most M67 targets will not be sufficiently variable on short timescales, so the advantage is minimal. Finally, the more sophisticated PDC-MAP outperforms the relatively crude PCA of the Oxford pipeline in the common-mode systematic removal. While an ideal pipeline would combine the best elements of the two, the performance of the existing pipelines is comparable, and both are valid options for the analysis of K2 light curves. Most critically, for both pipelines the completeness around the solar case in the SAP and SS samples is at best $\sim$15\%, illustrating that it is very difficult to detect 0.1\% amplitude, 25\,d signals in K2 Campaign 5 M67 data.
\begin{figure*} \includegraphics[width=1\linewidth]{summary_plot.pdf} \caption{Average completeness (top number) and the roughly associated reliability (bottom number with uncertainty) for the CfA SAP and SS samples, with the solar case highlighted in red. The uncertainty for the completeness values is $\pm$2.1\%. Note that the reliability values do not perfectly match the injected amplitudes and periods but are representative of the ranges shown in Figure~\ref{fig:reliability}. \label{fig:summary}} \end{figure*} \subsection{Implications for K2 M67 rotation studies} \label{subsec:imply} \begin{deluxetable*}{lccccccccc} \tablecaption{Comparison with periods from \citet{2016ApJ...823...16B} \label{tab:barnes_compare}} \tablecolumns{8} \tablenum{13} \tablewidth{0pt} \tablehead{ \colhead{EPIC} & \colhead{Barnes Per (d)} & \colhead{Amp (\%)} & \colhead{Comp (\%)} & \colhead{Rel (\%)} & \colhead{Ox Per (d)} & \colhead{CfA Per (d)} } \startdata 211388204 & 31.8 & 0.32 & 21 $\pm$1.8 & 96 $\pm$3.9 & -- & -- \\ 211394185 & 30.4 & 0.78 & 55 $\pm$1.8 & 98 $\pm$2.3 & 15.4 & 15.1 \\ 211395620 & 30.7 & 0.47 & 32 $\pm$1.8 & 95 $\pm$3.1 & -- & -- \\ 211397319 & 25.1 & 0.31 & 29 $\pm$1.8 & 98 $\pm$3.2 & -- & -- \\ 211397512 & 34.5 & 0.73 & 34 $\pm$1.8 & 68 $\pm$4.1 & 16.4 & -- \\ 211398025 & 28.8 & 0.29 & 21 $\pm$1.8 & 96 $\pm$3.9 & -- & -- \\ 211398541 & 30.3 & 0.51 & 32 $\pm$1.8 & 95 $\pm$3.1 & -- & -- \\ 211399458 & 30.2 & 0.66 & 32 $\pm$1.8 & 95 $\pm$3.1 & -- & -- \\ 211399819 & 28.4 & 0.31 & 21 $\pm$1.8 & 96 $\pm$3.9 & -- & -- \\ 211400500 & 26.9 & 0.30 & 29 $\pm$1.8 & 98 $\pm$3.2 & -- & -- \\ 211406596 & 26.9 & 0.36 & 29 $\pm$1.8 & 98 $\pm$3.2 & -- & -- \\ 211410757 & 18.9 & 0.14 & 11 $\pm$1.8 & 99 $\pm$5.8 & -- & -- \\ 211411477 & 31.2 & 0.20 & 21 $\pm$1.8 & 95 $\pm$11.9 & -- & -- \\ 211411621 & 30.5 & 0.30 & 21 $\pm$1.8 & 96 $\pm$3.9 & 15.8 & 16.1 \\ 211413212 & 24.4 & 0.22 & 29 $\pm$1.8 & 97 $\pm$8.0 & -- & -- \\ 211413961 & 31.4 & 0.26 & 21 $\pm$1.8 & 96 $\pm$3.9 & -- & -- 
\\ 211414799 & 18.1 & 0.17 & 11 $\pm$1.8 & 99 $\pm$5.8 & -- & 9.0 \\ 211423010 & 24.9 & 0.29 & 29 $\pm$1.8 & 98 $\pm$3.2 & -- & 22.2 \\ 211428580 & 26.9 & 0.33 & 29 $\pm$1.8 & 98 $\pm$3.2 & -- & -- \\ 211430274 & 31.1 & 0.40 & 32 $\pm$1.8 & 96 $\pm$3.9 & -- & -- \\ \enddata \tablecomments{The columns from left to right are EPIC, the associated period reported from \citet{2016ApJ...823...16B}, our estimate of the peak-to-peak amplitude for the light curve, the estimated completeness and reliability statistics based off the Barnes period and amplitude, and the Oxford and CfA periods where we have detections using $C=4$.} \end{deluxetable*} \begin{deluxetable*}{lccccccccc} \tablecaption{Comparison with detected periods from \citet{2016MNRAS.463.3513G} \label{tab:gonz_compare}} \tablecolumns{8} \tablenum{14} \tablewidth{0pt} \tablehead{ \colhead{EPIC} & \colhead{Gonzalez Per (d)} & \colhead{Amp (\%)} & \colhead{Comp (\%)} & \colhead{Rel (\%)} & \colhead{Ox Per (d)} & \colhead{CfA Per (d)} } \startdata 211387834 & 15.2 & 1.36 & 74 $\pm$1.8 & 98 $\pm$2.0 & 14.9 & 14.9 \\ 211390071 & 25.3 & 1.42 & 63 $\pm$1.8 & 99 $\pm$2.2 & 13.5 & -- \\ 211393422 & 28.6 & 1.53 & 55 $\pm$1.8 & 98 $\pm$2.3 & 14.5 & -- \\ 211397501 & 12.4, 12.4 & 1.19 & 81 $\pm$1.8 & 98 $\pm$1.9 & 12.2 & 12.5 \\ 211397955 & 29.0, 29.2 & 1.36 & 55 $\pm$1.8 & 98 $\pm$2.3 & 14.7 & 14.5 \\ 211400662 & 13.5 & 2.04 & 91 $\pm$1.8 & 99 $\pm$1.8 & 13.7 & 13.1 \\ 211403852 & 27.3 & 1.23 & 63 $\pm$1.8 & 99 $\pm$2.2 & 13.9 & 13.3 \\ 211404310 & 26.3 & 1.17 & 63 $\pm$1.8 & 99 $\pm$2.2 & -- & 13.0 \\ 211405671 & 25.6, 28.2 & 2.06, 2.35 & 77, 76 $\pm$1.8 & 98 $\pm$2.0, 99 $\pm$1.9 & 26.3 & -- \\ 211407277 & 24.7 & 2.50 & 77 $\pm$1.8 & 98 $\pm$2.0 & -- & 25.0 \\ 211408116 & 30.1 & 0.14 & 6 $\pm$1.8 & 95 $\pm$11.9 & 13.5 & -- \\ 211408874 & 23.9, 24.5 & 2.12 & 80 $\pm$1.8 & 98 $\pm$2.0 & -- & 23.8 \\ 211414974 & 26.8, 29.9 & 1.91, 1.92 & 63 $\pm$1.8 & 97 $\pm$2.1, 98 $\pm$2.3 & 27.0 & -- \\ 211424980 & 15.6, 31.2 & 1.26, 
1.49 & 74 $\pm$1.8 & 98 $\pm$2.0, 98 $\pm$2.3 & 15.6 & 15.6 \\ 211427666 & 28.8 & 1.20 & 55 $\pm$1.8 & 98 $\pm$2.3 & 14.1 & -- \\ 211429354 & 25.9 & 0.19 & 8 $\pm$1.8 & 97 $\pm$8.0 & -- & 26.3 \\ 211430648 & 32.6 & 0.67 & 16 $\pm$1.8 & 68 $\pm$4.1 & -- & 17.2 \\ 211433352 & 13.7, 13.8 & 1.21 & 74 $\pm$1.8 & 98 $\pm$2.0 & 13.9 & 13.5 \\ \enddata \tablecomments{The columns from left to right are the EPICs from \citet{2016MNRAS.463.3513G} where either of our two pipelines have detections, the associated periods reported from that paper, our estimate of the peak-to-peak amplitude for each light curve, the estimated completeness and reliability statistics based off the Gonzalez period and amplitude, and the Oxford and CfA periods where we have detections using $C=4$. Where there is more than one reported value, the first number comes from the PDCSAP sample from \citet{2016MNRAS.463.3513G}, while the second number is from the K2SC sample.} \end{deluxetable*} \begin{figure*} \includegraphics[width=1\linewidth]{211394185_compare.pdf} \caption{Light curves and periodograms for EPIC 211394185 from the Oxford (gray) and CfA (blue) pipelines. The top panel displays the light curves, while the middle panel gives the original periodogram power for both pipelines, and the bottom panel shows the normalized periodograms. In both periodogram panels, the black lines show the final Oxford period, the blue lines show the location of the final CfA period, and the red lines show the location of the period from the Barnes paper. \label{fig:211394185}} \end{figure*} \begin{figure*} \includegraphics[width=1\linewidth]{211411621_compare.pdf} \caption{Same as Figure~\ref{fig:211394185} except for EPIC 211411621. \label{fig:211411621}} \end{figure*} The injection test results indicate that, if the Sun is considered typical, measuring rotation periods for solar-like stars in M67 based on their K2 light curves is challenging. 
Specifically, for the best case scenario of solar-like rotational variability --- 0.1\% amplitude, 25\,d non-evolving sinusoidal signal --- our completeness maxes out at 14\% and 15\% for the SAP and SS samples, respectively. Considering that detecting non-sinusoidal, evolving spot-modulation will be more difficult, these results suggest that both current and future rotation period measurements in M67, based solely on the K2 Campaign 5 data, must be carefully examined before they are used to guide models of stellar angular momentum evolution and/or calibrate gyrochronology relations. We consider the 20 stars with measured rotation periods from \citet{2016ApJ...823...16B} and the stars from \citet{2016MNRAS.463.3513G} where we had detections from either the CfA or Oxford pipeline. Recall that all of these light curves come from the SAP sample. Since we do not have amplitude measurements for the versions of the light curves used in either paper, we estimated them by binning the flux of each light curve based on the reported period and taking the median of the difference between the $95^{th}$ and $5^{th}$ percentiles of each bin. We did this for both the Oxford and CfA versions of each light curve and averaged the values. We then estimated the completeness and reliability based on the corresponding SAP results from this study. We averaged the Oxford and CfA completeness for the injected period and amplitude bins closest to the determined values. If the calculated amplitude was halfway between the injected amplitudes, we used the higher amplitude bin. We also used the average Oxford and CfA reliability from the appropriately ranged bins into which the reported periods and estimated amplitudes fell. The results are shown in Tables~\ref{tab:barnes_compare} and \ref{tab:gonz_compare}. The completeness for the reported periods from \citet{2016ApJ...823...16B} is low, averaging at about 27\%, with only one period falling into a range above 50\% (EPIC 211394185). 
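The amplitude estimate described above can be sketched as follows, assuming that ``binning the flux based on the reported period'' means splitting the light curve into period-long time bins (one plausible reading of the procedure; the helper name is hypothetical):

```python
import numpy as np

def estimate_amplitude(time, flux, period):
    """Estimate a peak-to-peak amplitude as described in the text:
    split the light curve into bins one period long, compute the
    95th-minus-5th percentile spread of the flux in each bin, and
    take the median spread over all bins."""
    time = np.asarray(time, dtype=float)
    flux = np.asarray(flux, dtype=float)
    edges = np.arange(time.min(), time.max() + period, period)
    spreads = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (time >= lo) & (time < hi)
        if mask.sum() > 2:  # skip nearly empty bins
            spreads.append(np.percentile(flux[mask], 95)
                           - np.percentile(flux[mask], 5))
    return float(np.median(spreads))
```

For a pure sinusoid of fractional amplitude $A$, this statistic returns roughly $1.98A$ rather than the full $2A$, since the 5th and 95th percentiles sit just inside the extrema.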
The corresponding reliability is high, only dipping below 95\% in one instance (EPIC 211397512). However, these values are a bit misleading, as they only really apply to those cases where we would actually get a detection according to our criteria, i.e. using a threshold factor of $C=4$. If we have a detection in an area with low completeness, then the result is generally trustworthy. Therefore, in Table~\ref{tab:barnes_compare}, we also show the period from the normalized Lomb-Scargle for the corresponding, non-injected light curves from the Oxford and CfA pipelines where we had a detection based on our threshold (i.e. where the star was classified as a `variable'). For this sample of stars, the Oxford pipeline only has 3 detections, while the CfA has 4. Almost all of these detections appear to be rough harmonics of the published result from \citet{2016ApJ...823...16B}, while the CfA period for EPIC 211423010 is close to the Barnes value. Both the Oxford and CfA pipelines have detections for EPICs 211394185 and 211411621 from the Barnes sample. Figures~\ref{fig:211394185} and \ref{fig:211411621} show the Oxford and CfA versions of these two light curves, respectively, along with the original periodograms in the middle panels and the normalized versions in the bottom panels. While obviously variable, a clear period is difficult to find by eye in either case, though it appears that the normalization suppressed any power at a period that would be comparable to the Barnes result. The normalization step is important for avoiding false positives, however, as we have illustrated with the presence of long-term trends in the median periodogram of the hot star samples. We recognize that \citet{2016ApJ...823...16B} used several different period detection methods other than the Lomb-Scargle periodogram. 
However, the concern here is obviously the underlying signal, and the lack of detections from the Oxford and CfA pipelines, as well as the lack of agreement with the Barnes result where we do have one, show just how challenging it is to measure solar-like signals in the K2 Campaign 5 data. Looking at Table~\ref{tab:gonz_compare}, we have a list of 18 EPICs from either the `PDCSAP' or `K2SC' samples given in \citet{2016MNRAS.463.3513G} where we had a detection from at least one of our pipelines. The average completeness is $\sim$60\%, which is higher than the Barnes sample, and the reliability is $\geq$95\% (except in one case, EPIC 211430648), but the stars listed here are only a small fraction of those published by \citet{2016MNRAS.463.3513G}. The PDCSAP sample contains 98 reported periods in total; we detected 16 of these. The K2SC sample contains 40 periods; we detected 9. In addition, notice that none of the stars in Table~\ref{tab:barnes_compare} appear in Table~\ref{tab:gonz_compare}, meaning that of the 9 stars reported by \citet{2016MNRAS.463.3513G} that overlap with \citet{2016ApJ...823...16B}, neither the Oxford nor the CfA pipeline records a detection. However, as with the Barnes detections, the CfA and Oxford results agree, and they are either in agreement with the Gonzalez value, or they are rough harmonics. The harmonics again highlight ambiguity in the light curves, while the general lack of detections from the Gonzalez sample --- especially for stars also reported by Barnes --- reinforces the conclusions we have already made about finding solar signals in M67 with K2 data. We can use the completeness and reliability statistics from this work to guide future re-evaluations of M67 periods from K2 data. Reliability helps establish amplitude and period thresholds for detecting true signals in M67.
We can probably trust detections in a measured period and amplitude bin with reliability of 80\% or higher, suggesting that 1 or fewer out of every 5 measurements is likely to be unreliable. However, we need to take into account completeness, as we simply cannot say anything about the true number of stars with periods and amplitudes corresponding to a given bin without it, since completeness tells us what fraction of potential detections we miss. For example, if we have three detections in a bin where the completeness was 50\%, we know that there were approximately 3 other stars in that range that we missed. It is important to recall that these sinusoidal injection tests represent a `best case' scenario: real rotation signals are non-sinusoidal and evolve over time, and both of these factors are likely to affect their detectability in a negative way. Therefore, although it is of course possible that better light curve extraction, detrending, and period search methods might be devised than those we have used here, it seems rather unlikely that the detectability of true rotation signals will improve much beyond what we have shown. In short, the K2 Campaign 5 data may allow the measurement of some rotation periods for main sequence stars in M67, but only if they are relatively active and display amplitudes somewhat larger than the active Sun. \section{Conclusions and Future Work} \label{sec:conclusion} The open cluster M67, recently observed in Campaign 5 of the NASA K2 mission, offers the unique chance to measure rotation periods for solar-age stars. This means that we have the opportunity to fill in a much-needed gap in the calibration of the age-mass-rotation period relationship which forms the basis for the field of gyrochronology and serves as an age-dating method for main sequence stars. 
However, our physical understanding of the empirical observations that have driven gyrochronology is still evolving, and certain studies suggest inconsistencies between the ages that we would predict via gyrochronology and the results of other methods, such as asteroseismology, for older stars. M67, therefore, provides a testing ground both for gyrochronology and new theories exploring the possible weakening of the magnetic fields that induce angular momentum loss in stars. Because of the immense scientific potential of M67, we want to ensure that we have a full understanding of the limits of the data we are using to measure the rotation periods. Though K2 does have relatively high precision, especially compared to ground-based observations, the data still suffers from stubborn systematic features, and the $\sim$75\,d observation window means that it will be challenging to identify periods of 25\,d or longer, which we expect for a cluster of approximately the age of the Sun. In addition, the crowded field will make the task even more difficult. It is the goal of this study to understand how these challenges manifest themselves in order to ultimately acquire a set of M67 rotation periods in which we have confidence. We devised a series of sinusoidal injection tests with real Campaign 5 data in order to determine the best-case scenario threshold limits for K2 M67 data. We used a subset of the SAP light curves processed via the \emph{Kepler} pipeline that fell on the spacecraft's module 6 CCD (which encompassed M67), M67 members contained in the superstamp, and a small set of hot A and F stars scattered throughout the Campaign 5 field of view as the three test samples for the injection tests. Into each raw light curve of each sample, we injected sinusoids with six different amplitudes ranging from 0.05\% to 3.0\% and seven periods ranging from 5 to 35\,d.
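The injection step recapped above can be illustrated as below. The grid endpoints (0.05\%, 3.0\%) and the period range (5--35\,d) come from the text, but the intermediate amplitude values, the 5\,d period spacing, and the multiplicative form of the injection are assumptions for illustration only:

```python
import numpy as np

# Illustrative injection grid: endpoints from the text, interior
# amplitude values and the 5-day period spacing are assumed.
amplitudes = [0.0005, 0.001, 0.0025, 0.005, 0.01, 0.03]  # fractional (0.05% - 3.0%)
periods = [5, 10, 15, 20, 25, 30, 35]                    # days

def inject_sinusoid(time, flux, amplitude, period, phase=0.0):
    """Inject a non-evolving sinusoidal signal into a raw light curve.

    A multiplicative injection, flux * (1 + A*sin(2*pi*t/P + phi)),
    is assumed here so that the fractional amplitude is preserved."""
    time = np.asarray(time, dtype=float)
    flux = np.asarray(flux, dtype=float)
    return flux * (1.0 + amplitude * np.sin(2.0 * np.pi * time / period + phase))
```

Each raw light curve would then be injected once per (amplitude, period) pair before being passed through the detrending pipelines.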
We processed the injected light curves using two different methods based on existing literature, referred to in this paper as the `Oxford' and `CfA' pipelines. We then defined a detection threshold and ran a Lomb-Scargle periodogram on the injected light curves and their non-injected counterparts. We normalized the periodograms of both the injected and non-injected light curves by dividing out the median periodograms of the latter. Those non-injected light curves whose normalized periodograms met the detection threshold were marked as `variable,' and we removed the corresponding stars from the injected samples. Finally, we analyzed the results of the period search on the remaining sets of injected light curves in terms of completeness and reliability. Completeness describes how sensitive our methods are in recovering the injected signals, while reliability quantifies how trustworthy a detected period is within a measured amplitude range. The results of the injection tests shed light on the nature of the K2 M67 data. The hot star samples from the Oxford and CfA pipelines highlighted the presence of non-astrophysical trends in the data with power at $\sim$25\,d and longer, a problematic feature --- especially given the periods we expect in M67 --- which necessitated the periodogram normalization in the period search. In general, for the hot star, SAP, and SS samples from both pipelines, completeness diminished as the period increased and amplitude decreased. The hot star completeness was higher than that of the SAP and SS samples due to the brightness of the stars in the sample and the lack of rotational modulation beyond about 10\,d for this particular dataset. However, for the SAP and SS samples from both pipelines, the completeness falls around or below about 50\% at injected periods of 20\,d and amplitudes of 0.50\%.
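The detection decision summarized above might look like the following sketch. The exact form of the criterion (how the threshold factor $C$ is applied to the normalized periodogram) is not restated here, so comparing the peak normalized power against $C$ is an assumption:

```python
import numpy as np

def is_detection(normalized_power, C=4.0):
    """Flag a light curve as a detection when its normalized
    Lomb-Scargle power exceeds the threshold factor C.

    Assumption: the criterion is a simple comparison of the peak
    normalized power against C; the paper's precise rule may differ."""
    return bool(np.max(np.asarray(normalized_power, dtype=float)) > C)
```

Applied to the non-injected light curves, the same rule marks intrinsic variables to be excluded from the injected samples; applied to the injected ones, it defines a recovery for the completeness statistics.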
Crucially, at the solar case --- where the injected period is 25\,d and the amplitude 0.10\% --- the maximum completeness is only around 15\% for the SAP and SS datasets. Despite the generally low sensitivity, the reliability is typically very high, consistently at values greater than 90\%, except in the instance of very long periods ($\sim$35\,d) and a couple of cases at detected periods of around 15\,d with moderately low measured amplitudes (0.075 -- 0.25\%). This means that while it is very difficult to detect the kind of periods we expect to see in M67 given its age, if we do get a detection, we can generally trust the result. The CfA completeness is generally greater in both the SAP and SS samples. Except at long periods and the $\sim$15\,d range, the reliability for the SAP samples from both pipelines is comparable, but the Oxford reliability is slightly higher for the SS, despite amplitude suppression from the PCA. There is a slight trade-off between the two pipelines, and while the CfA performs marginally better overall, both are viable options. It is important to consider how this study affects our interpretation of both future and previously published rotation period measurements in M67 from K2 data, namely \citet{2016ApJ...823...16B} and \citet{2016MNRAS.463.3513G}. Overall, we urge caution when using the periods from \citet{2016ApJ...823...16B} and \citet{2016MNRAS.463.3513G} because we cannot reproduce the same results here without lowering the detection threshold and sacrificing reliability. However, we can use the injection tests as a basis for the re-evaluation of M67 periods. With completeness, we can quantify just how difficult it is to find even the best-case, long-period rotation signals in M67 data, or how many detections we may miss, while reliability gives us confidence in our measurement.
To maintain some flexibility while still trusting the result, we decide on a reliability threshold of 80\% for a given measured period and amplitude range. While this study has given us important insights into the K2 M67 data, we have to remember that the sinusoidal injections offer a best-case scenario. The rotational modulation in most stars does not usually take the form of a simple sinusoid. We ultimately want to do another, smaller round of injection tests using more realistic signals, such as those generated from the star spot models of \citet{2015MNRAS.450.3211A}. Such realistic signals also mean that the Lomb-Scargle periodogram may not be the optimal rotation detection method. As a result, we will use the star spot model tests to compare the Lomb-Scargle with the autocorrelation function and a Gaussian process. Following this, we hope to present our own rotation periods for M67 from Campaign 5 data, informed by both the sinusoidal and star spot model injection tests, along with a comparison to previously published results and existing theory. While we have shown that it is difficult to find low-amplitude, 25\,d periods in the K2 M67 data from Campaign 5, we should still be able to infer important information regarding the true rotation periods in M67 in a manner similar to the planet occurrence studies conducted by \citet{0067-0049-201-2-15} and \citet{2013PNAS..11019273P}. In addition, Campaign 16, which has just been released at the time of this writing, will greatly enhance our confidence in any rotation periods measured from Campaign 5 data, as the extra observation time will allow us to improve completeness without sacrificing reliability. We conducted this study specifically in the context of K2 Campaign 5 M67 light curves, but the caution we have urged based on our conclusions can reasonably be extended to rotation studies with other K2 campaigns and the upcoming, first data release from TESS.
We have shown that long-term, systematic trends still exist in K2 data which could skew low-amplitude period detections of about 25\,d and longer, and these trends are likely to be present in other campaigns, especially in the crowded fields of clusters. Finally, while TESS will survey nearly the entire sky, the deepest coverage will be near the ecliptic poles as opposed to the ecliptic itself, where observations will only last about 27\,d \citep{2014SPIE.9143E..20R}. While this will complement K2 data, the short observation time and large pixel sizes (15\,$\mu$m $\times$ 15\,$\mu$m; \citealt{2014SPIE.9143E..20R}) will likely cause the data to suffer from problems similar to those we have seen in this study. \acknowledgments R.E. would like to thank the Rhodes Trust and the U.S. Air Force for helping finance this work. The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Air Force, the Department of Defense or the U.S. Government. Financial support for this work also includes the Science and Technology Facilities Council (STFC) consolidated grants ST/H002456/1 and ST/N000919/1. This work was performed in part under contract with the California Institute of Technology/Jet Propulsion Laboratory funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
\section{INTRODUCTION} Automatic underground parking is one of the essential parts of autonomous driving. During this process, the auto-vehicle is proposed to obtain the environmental information from the underground parking scenario, process the input from sensors, and make planning. To address this complete procedure, various algorithms are needed, such as semantic segmentation, parking slot detection, simultaneous localization and mapping (SLAM), etc. However, applying these algorithms could be challenging in the underground parking scenario. In such a scenario, narrow roads with obstacles, areas with dazzling or dim lighting, and poor-texture walls increase the noise and uncertainty of sensor measurements\cite{underpark}. The structure and content of the environment are highly repetitive compared to the urban street scene with landmarks, which makes it hard for feature matching and place recognition. In addition, vehicles may obtain an unreliable estimation result for location due to the Global Navigation Satellite System (GNSS) signal shielding, dim conditions, and ground reflection\cite{gnssa}\cite{gnssb}. Recently, researchers have found that these challenges can be partly solved by applying deep learning to help vehicles perceive the surrounding scenario\cite{gcn} \cite{nvidia} \cite{lidar_surrounding}. Specifically, semantic information increases the perception ability of the autonomous car, and in return, an accurate pose estimation may also help perception. However, many commonly used datasets provide either successive images with timestamps for the SLAM task or discrete images for visual perception tasks, which makes it difficult for users to develop a functional automatic parking system using a single dataset. Besides, there is a large density of obstacles and ground sign semantic information in underground parking scenarios, including walls, pillars, drivable areas, lanes, parking slots, arrows, and bumps. 
Only a few existing datasets completely labeled these items, which requires plenty of human labor and time. To cover the lack of datasets that support both the SLAM and perception tasks with well-labeled data, in this paper, we present SUPS, a novel dataset supporting multiple tasks through multiple sensors and multiple semantic labels to address the issues in developing an underground automatic parking system. The virtual scene used in SUPS is simulated through LGSVL \cite{lgsvl}, which is an autonomous vehicle simulation platform with highly flexible configurations of vehicles and sensors. The simulation platform has the advantage of environmental variability as well as the virtual sensor diversity and accessibility. The virtual underground parking scenario we built is similar to the real-world underground parking scenario in illumination, texture, content, and complexity of scenarios. Multiple sensors such as fisheye cameras, pinhole cameras, depth cameras, LiDARs, inertial measurement unit(IMU), GNSS are activated among records. Pixel-level semantic segmentation, parking slot detection, depth estimation, visual SLAM, and LiDAR-based SLAM tasks are supported in this dataset. Our main contributions of this work are summarized as follows: \begin{itemize} \item We present SUPS, a simulated dataset for underground automatic parking, which supports multiple tasks with multiple sensors and multiple semantic labels aligned with successive images according to timestamps. It allows benchmarking the robustness and accuracy of SLAM and perception algorithms in the underground parking scenario. \item We evaluate several state-of-the-art semantic segmentation and SLAM algorithms on our dataset to show its practicability and challenging difficulties in the underground parking scenario. \item We open-source the SUPS dataset and the whole simulation underground parking scenario to enable researchers to make self-designed changes for specific tasks. 
\end{itemize} \section{RELATED WORK} \label{II} In this section, we first briefly review the mainstream benchmark datasets for autonomous driving tasks and then discuss the integration of surrounding perception systems, semantic information, and SLAM algorithms in autonomous driving systems. \subsection{Autonomous driving datasets} Existing autonomous driving datasets are diverse in carriers, environments, and sensors. However, most datasets were designed only for either SLAM or perception tasks. For example, EuRoC MAV\cite{Euroc}, TUM VI\cite{TUM}, and nuScenes\cite{nuScenes} do not provide labels for landmark semantic segmentation tasks. Cityscapes\cite{Cityscapes}, nuScenes\cite{nuScenes}, ApolloScape\cite{apolloscape}, and BDD100k\cite{BDD100K} do not support depth estimation or SLAM tasks. Few researchers have worked on the combination of multiple tasks because of the extremely high cost in human labor and resources. Only a few datasets contain underground parking scenarios. VIODE\cite{VIODE} provides underground scenes; however, it was recorded with an unmanned aerial vehicle (UAV), which is not suitable for some autonomous driving algorithms. PS2.0\cite{PS2.0} is a dataset for parking slot detection that includes the underground parking scenario, but SLAM tasks are not supported. Thus, these datasets fall short for underground automatic parking. At the same time, surround-view cameras are absent from some mainstream autonomous driving datasets, such as KITTI\cite{kitti}, Cityscapes\cite{Cityscapes}, and BDD100k\cite{BDD100K}. In summary, existing open-source autonomous driving datasets are not sufficient for SLAM and perception tasks at the same time in the underground parking scenario. Taking advantage of the virtual scene, our dataset serves as a relatively complete solution to this issue by providing multiple sensors for diverse tasks while faithfully reflecting what a real underground parking scenario looks like.
Tab.~\ref{table_ex} highlights the novelty of our dataset against existing datasets. \subsection{Perception system for autonomous driving} There is growing interest in handling complex applications with 360° perception. Graph neural networks (GNNs)~\cite{9046288,9744550} have been applied to improve parking slot detection from the bird's-eye view (BEV)\cite{gcn}. Multi-pinhole cameras have been used in an end-to-end algorithm for motion planning tasks\cite{nvidia}. At the same time, perception algorithms such as semantic segmentation, detection, and scene encoding have attracted considerable attention in SLAM research. Mask-SLAM adds a semantic-segmentation mask to the standard architecture to exclude distracting information\cite{mask-slam}. DynaSLAM removes dynamic objects using semantic segmentation\cite{dynaslam}. VIODE proposed VINS-Mask, which discards feature points on moving objects\cite{VIODE}. However, to the best of our knowledge, few open-source algorithms actually make use of semantic information in feature extraction and frame matching. One obstacle to the development of such algorithms is the lack of a dataset with plentiful well-labeled images for tracking and mapping. In addition, existing datasets mainly focus on labels of the drivable area, lane lines, and dynamic objects (such as other vehicles, cyclists, and pedestrians); these labels are of little help for tracking the location and building a usable map. \section{OVERVIEW OF DATASET} \label{III} In this section, we introduce our dataset in detail in four parts. First, we describe our motivation for building this dataset. Then, we introduce the dataset generation procedure, covering the platform, simulator, carrier setups, and data processing method. Third, we explain the strategy for recording the data sequences and splitting the sets. Finally, we detail the contents of our dataset.
\subsection{Motivation} \subsubsection{Surrounding perception}To complete the parking task, an automatic vehicle must perceive its surrounding environment without blind spots. Our 360° perception system consists of four fisheye cameras surrounding the vehicle, a forward-facing stereo pinhole camera pair, and one LiDAR on the top. Fisheye cameras with a downward pitch angle are used for a larger field of view (FOV) and better perception of ground signs. \subsubsection{Multisensor for multitasks}SLAM tasks and various visual tasks are both challenging and inseparable from each other in autonomous parking. Semantic SLAM and multimodal sensing require many well-labeled frames as well as a complete set of sensors covering vision, depth, ranging, and velocity measurements. The simulated multisensor system in the virtual scene provides GPS, IMU, LiDAR, and depth measurements, together with ground truth semantic segmentation aligned with every successive image according to timestamps. The detailed configuration of the sensors is introduced in subsection \ref{sub_b}. \subsubsection{Changeable simulated scene}Our simulated underground parking scenario was built on the Unity Engine\cite{unity}. Users can easily modify the scene and rebuild it for specific applications. For example, the parking lot structure, traffic signs, illumination, semantic labels, and travel routes can all be changed. An overview of the existing scene is given in subsection \ref{sub_b}. \subsection{Dataset generation} \label{sub_b} The SUPS dataset generation procedure involves three steps: virtual scene construction, simulator setup, and recording and processing. These steps were completed on a computer with Ubuntu 18.04, an Intel(R) Core i7-11800H CPU, and a GeForce RTX 3060 (Laptop) GPU. \begin{figure}[t] \centering \includegraphics[width=8cm]{overview.png} \caption{An overview of the simulated scene.
\textbf{Area A} is the orange part shown on the right side and \textbf{Area B} is the green part shown on the left side.} \label{scene_overview} \end{figure} \subsubsection{Virtual scene construction}We built our 3D simulated scene for the underground parking scenario with the Unity Engine\cite{unity}. The whole scene consists of two areas, \textbf{Area A} and \textbf{Area B}, as shown in Fig. \ref{scene_overview}. \textbf{Area A} occupies $2635 m^{2}$ with a road length of $120 m$. \textbf{Area B} occupies $2015 m^{2}$ with a road length of $195 m$. The drivable road in the whole scene is approximately $350 m$ long, and the parking slots occupy an area of $4650 m^{2}$. There are several different road loops in the scene to provide loop closure constraints for the SLAM task. To enrich the scene, we include items such as walls, pillars, static vehicles, parking slots, lane lines, drivable areas, collision avoidance strips, speed bumps, arrows, and road signs. Additionally, the simulated scene contains challenging situations for perception and SLAM tasks, similar to real-world underground parking scenarios, such as crossings, entrances, one-way streets, dazzling areas, dim areas, and low-texture areas (see Fig. \ref{full_sensor}). Furthermore, to closely reproduce an actual underground parking scenario, we use spotlights on the ceiling. \subsubsection{Simulator setups}On the LGSVL\cite{lgsvl} platform, we create a Lincoln2017MKZ autonomous vehicle model and apply multiple sensors (details of the sensors are shown in Tab. \ref{sensor_detail_table}). An overview of the sensors on the ego car is shown in Fig. \ref{sensor_on_car}. We also provide all intrinsic and extrinsic parameters of the sensors. To obtain the ground truth for the semantic segmentation task, we install six additional semantic segmentation cameras on the ego model that share the same parameters as the corresponding four fisheye cameras and two pinhole cameras. Fig.
\ref{full_sensor} shows an example of observations and measurements from all sensors. \begin{table}[t] \caption{Details of the sensors on the ego vehicle model} \label{sensor_detail_table} \begin{center} \begin{tabular}{|c||c||c|} \hline Sensor name & Position & Sampling Frequency\\ \hline Fisheye camera 1 & front & 20 Hz\\ \hline Fisheye camera 2 & left & 20 Hz\\ \hline Fisheye camera 3 & right & 20 Hz\\ \hline Fisheye camera 4 & rear & 20 Hz\\ \hline Pinhole camera 1 & top & 20 Hz\\ \hline Pinhole camera 2 & top-right & 20 Hz\\ \hline Depth camera & top & 20 Hz\\ \hline GPS Odometry sensor & rear axle center & 30 Hz \\ \hline IMU & rear axle center & 100 Hz \\ \hline LiDAR & top-behind & 7 Hz (rotation)\\ \hline \end{tabular} \end{center} \end{table} \begin{figure} \centering \includegraphics[width=8cm]{SensorOnCar.png} \caption{Positions of the sensors on the ego vehicle. Camera names are abbreviated; for example, \textit{Front \& Front seg} means \textit{Front camera and Front semantic segmentation camera}.} \label{sensor_on_car} \end{figure} \subsubsection{Data processing}Only a few existing datasets provide enough information for parking slot detection. In the real world, parking lines are commonly worn down or hidden by cars and obstacles, which makes accurate manual labeling difficult. In most datasets, the FOV of the BEV cannot cover a complete parking space along the roadside. Moreover, different algorithms describe the parking space in different forms. For instance, GCN-parking-slot describes a parking space by the two corners at the entry side and the correspondence between corners\cite{gcn}. Our dataset provides an adaptable description of the parking slots that includes all the information needed to delineate a parking space (see Fig. \ref{parking_slot}). Corners with vectors, center lines, and object-level masks can be easily extracted from the ordered corner coordinates.
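To make this concrete, the quantities above follow directly from the four ordered corners; a minimal sketch (the corner coordinates and the convention that the first two corners form the entry side are illustrative assumptions, not the dataset's exact label format):

```python
import numpy as np

def slot_geometry(corners):
    """Derive the center line and entry-corner direction vectors from
    four ordered parking-slot corners.

    corners: (4, 2) array ordered p0 -> p1 -> p2 -> p3, where the
    segment (p0, p1) is taken to be the entry side of the slot.
    """
    p = np.asarray(corners, dtype=float)
    entry_mid = (p[0] + p[1]) / 2.0   # midpoint of the entry side
    back_mid = (p[3] + p[2]) / 2.0    # midpoint of the opposite side
    center_line = np.stack([entry_mid, back_mid])
    vec0 = p[3] - p[0]                # direction attached to entry corner p0
    vec1 = p[2] - p[1]                # direction attached to entry corner p1
    return center_line, vec0, vec1

# A 2.5 m x 5 m slot with its entry side on the x-axis
corners = [(0.0, 0.0), (2.5, 0.0), (2.5, 5.0), (0.0, 5.0)]
center_line, v0, v1 = slot_geometry(corners)
```

The object-level mask is simply the polygon spanned by the four ordered corners, so no annotation beyond the ordered points is required.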
For instance, we provide data for parking slot detection using the description form of GCN\cite{gcn} (see Fig. \ref{gcn_example} for details). \begin{figure} \centering \includegraphics[height=2.6cm]{pics/gcn_ori.png} \includegraphics[height=2.6cm]{pics/gcn_label.png} \includegraphics[height=2.6cm]{pics/gcn_res.jpg} \caption{Example of the parking slot detection data form. The left panel is an original image, the middle panel is the ground truth, and the right panel shows the GCN\cite{gcn}-form description.} \label{gcn_example} \end{figure} \begin{figure}[t] \centering \includegraphics[width=8cm]{pics/parking_slot.png} \caption{The description of a single parking slot. The four corners of the parking space are labeled with differently colored squares, which encode the width of the parking lines and the order of the corners. The center line, object-level mask, and corners with vectors can easily be generated from these labels.} \label{parking_slot} \end{figure} Additionally, since BEV images are especially helpful for perception and decision making in automatic parking, we provide, besides the original camera frames, BEV images and segmentation ground truth (see Fig. \ref{origin_seg}) projected by the inverse perspective mapping (IPM) method. \subsection{Acquisition strategy} \subsubsection{SLAM tasks}Three routes at two speed levels are recorded in our dataset. Fig. \ref{route} shows the three driving routes (\textbf{Loop A}, \textbf{Loop B}, and \textbf{Loop C}) in detail. \textbf{Loop A} has a single loop closure with a route approximately $160 m$ long; \textbf{Loop B} has two loops ($60 m$ and $150 m$ long) with a route of approximately $250 m$. \textbf{Loop C} contains several complex loops, and its route is approximately $700 m$ long, as shown in Fig. \ref{route}. The speed limit in most underground parking facilities is $5 km/h$, so the high-speed level of our recorded data is $5 km/h$, and the car drives at $3.5 km/h$ at the low-speed level.
Additionally, considering that some SLAM algorithms (such as VINS-Mono\cite{vins-mono}) need sufficient IMU excitation during initialization, we provide an extra version of each data record with an initialization process, in which the ego car makes several slow turns before driving along the route. \begin{figure}[t] \centering \subfigure[Loop A and Loop B]{ \includegraphics[width=8cm]{LOOP_AB.png} } \subfigure[Loop C]{ \includegraphics[width=8cm]{LOOP_C.png} } \caption{Overview of the three routes and driving sequences. The green point is where the ego car starts, and the red point is where it stops. Each white point at a crossing is labeled with a capital letter from $A$ to $M$ to describe the route the ego car passes through. As (a) shows, in \textbf{Loop A}, the ego car drives through $Start \to C \to D \to H \to G \to End$. In \textbf{Loop B}, the ego car drives through $Start \to B \to A \to E \to F \to J \to I \to K \to L \to End$. As (b) shows, in \textbf{Loop C}, the ego car drives through $Start \to C \to G \to H \to D \to B \to L \to M \to C \to B \to F \to E \to A \to B \to J \to I \to K \to M \to End$.} \label{route} \end{figure} \subsubsection{Perception tasks}We generate the depth images, original frames, and their corresponding semantic segmentation ground truth from a record of driving in Loop C, which covers almost all information in the underground parking scenario. Captured frames are split into three subsets at a ratio of 7:1:2 as the training, validation, and testing sets. The training set serves for training perception models only, the validation set can be used as a supplement to the training set or for model selection, and the testing set is for model evaluation only. \subsection{Dataset content} We record our dataset in rosbag format. The stereo and surround-view cameras are only partially activated together with the other sensors, because recording long periods with many sensors is costly in memory and computation.
However, we still provide bags in which all sensors described in subsection \ref{sub_b} are activated while the vehicle travels in \textbf{Loop A} and \textbf{Loop B}. Records are named as $Route\_SpeedLimit\_InitializeProcess\_Sensor.bag$. \begin{itemize} \item \textit{Route} describes in which loop the bag file was recorded. Candidates are \textit{Loop A}, \textit{Loop B}, and \textit{Loop C}. \item \textit{SpeedLimit} describes the average traveling speed. Candidates are \textit{fast} and \textit{slow}. \item \textit{InitializeProcess} describes whether there is an extra process that helps initialize the IMU. Candidates are \textit{disturbed} (with the initialization process) and \textit{direct} (without it). \item \textit{Sensor} describes the sensors activated during recording. \textit{Stereo} means the forward stereo cameras and the other, non-camera sensors are activated. \textit{Surround} means the surrounding fisheye cameras and the other, non-camera sensors are activated. \textit{Full} means all sensors are activated. \end{itemize} More than 5,000 frames are provided with ground truth for the supported perception tasks, such as semantic segmentation, parking slot detection, and depth estimation. Details of some supported tasks are introduced in Section \ref{IV}. Fig. \ref{origin_seg} shows an example of the classification labels in our dataset. \begin{figure} \centering \subfigure[Front view]{ \includegraphics[width=4cm]{pics/origin_p.png}\vspace{6pt} \includegraphics[width=4cm]{pics/seg_p.png}\vspace{6pt} } \subfigure[Bird-eye view]{ \includegraphics[width=4cm]{pics/origin_BEV.png}\vspace{6pt} \includegraphics[width=4cm]{pics/seg_BEV.png}\vspace{6pt} } \caption{Comparison of images and semantic segmentation ground truth in the front view and BEV. Items in the scene are labeled separately: drivable area, wall, pillar, static vehicle, parking lines, lane lines, collision avoidance strips, speed bumps, and arrows.
The left column shows the original image in the front view and the projected BEV, and the right column shows the ground truth.} \label{origin_seg} \end{figure} \section{Experiments} \label{IV} \subsection{Semantic segmentation} Semantic segmentation is a standard perception task, and numerous methods and networks have provided solutions to it. For autonomous driving, however, the focus of this task is the trade-off between accuracy and efficiency. BiSeNet\cite{bisenet} is a bilateral segmentation network comprising a {\it Spatial Path} and a {\it Context Path}, which achieves efficiency while ensuring accuracy. SFNet\cite{sfnet} proposes a flow alignment module for feature fusion and achieves a high mIoU on the Cityscapes\cite{Cityscapes} test set. We assess the performance of both networks to test how well these algorithms work on our dataset and how practical they are for simultaneous use. The results can be seen in Tab. \ref{model_compare} and Fig. \ref{model_res}.
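The mIoU metric reported in Tab.~\ref{model_compare} is the per-class intersection-over-union averaged over classes; a minimal sketch (the toy label maps below are illustrative, not dataset samples):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean intersection-over-union from per-pixel class labels."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:            # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 3x3 label maps with three classes
gt = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 2]])
pred = np.array([[0, 0, 1], [1, 1, 1], [2, 2, 2]])
miou = mean_iou(pred, gt, num_classes=3)  # (1.0 + 0.75 + 0.75) / 3
```

In practice the per-class intersections and unions are accumulated over the whole test set before the final division, rather than averaged per image.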
\begin{table}[t] \caption{Comparison of the mIoU and FPS between models.} \label{model_compare} \centering \begin{tabular}{c|cc} \hline Model & mIoU & FPS \\ \hline BiSeNet\cite{bisenet} & 0.8531 & 62.59 \\ SFNet\cite{sfnet} & 0.9091 & 6.45 \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[height=2cm]{pics/01_ori.png} \includegraphics[height=2cm]{pics/01_gt.png} \includegraphics[height=2cm]{pics/01_bs.png} \includegraphics[height=2cm]{pics/01_sf.png}\\ \vspace{2pt} \includegraphics[height=2cm]{pics/02_ori.png} \includegraphics[height=2cm]{pics/02_gt.png} \includegraphics[height=2cm]{pics/02_bs.png} \includegraphics[height=2cm]{pics/02_sf.png}\\ \vspace{2pt} \includegraphics[height=2cm]{pics/03_ori.png} \includegraphics[height=2cm]{pics/03_gt.png} \includegraphics[height=2cm]{pics/03_bs.png} \includegraphics[height=2cm]{pics/03_sf.png}\\ \vspace{2pt} \includegraphics[height=2cm]{pics/04_ori.png} \includegraphics[height=2cm]{pics/04_gt.png} \includegraphics[height=2cm]{pics/04_bs.png} \includegraphics[height=2cm]{pics/04_sf.png}\\ \vspace{2pt} \includegraphics[height=2cm]{pics/05_ori.png} \includegraphics[height=2cm]{pics/05_gt.png} \includegraphics[height=2cm]{pics/05_bs.png} \includegraphics[height=2cm]{pics/05_sf.png} \caption{From left to right: the original BEV image, the segmentation ground truth, the inference result from BiSeNet\cite{bisenet}, and the inference result from SFNet\cite{sfnet}.} \label{model_res} \end{figure} \subsection{Visual SLAM} The ORB-SLAM series is a state-of-the-art open-source visual SLAM solution, and ORB-SLAM3\cite{orb3}, the latest work in this series, offers robust performance. VINS-Fusion\cite{vins-fusion} is another classical feature-based visual SLAM method. We benchmark their robustness and accuracy on our dataset on the three routes. The experimental results are shown in Tab. \ref{ape_SLAM} and Fig.
\ref{eva_SLAM}. We evaluate the absolute trajectory error (ATE) using EVO\cite{evo}. \begin{figure} \centering \subfigure[ORB-SLAM3\cite{orb3}]{ \includegraphics[height=3cm]{pics/A_ORB.png} \includegraphics[height=3cm]{pics/B_ORB.png} \includegraphics[height=3cm]{pics/C_ORB.png} } \subfigure[VINS-Fusion\cite{vins-fusion}]{ \includegraphics[height=3cm]{pics/A_VINS.png} \includegraphics[height=3cm]{pics/B_VINS.png} \includegraphics[height=3cm]{pics/C_VINS.png} } \subfigure[LIO-SAM\cite{lio-sam}]{ \includegraphics[height=3cm]{pics/A_LIO.png} \includegraphics[height=3cm]{pics/B_LIO.png} \includegraphics[height=3cm]{pics/C_LIO.png} } \caption{Evaluation of the visual SLAM and LiDAR-based SLAM methods on the three routes. The left column is evaluated on Loop A, the middle column on Loop B, and the right column on Loop C.} \label{eva_SLAM} \end{figure} \begin{table}[t] \caption{Comparison of the ATE of the SLAM algorithms on the three routes.} \label{ape_SLAM} \centering \begin{tabular}{c|c|ccccc} \hline SLAM & Route & Max & Median & Min & RMSE & STD \\ \hline \multirow{3}{*}{ORB3\cite{orb3}} & A & 2.70 & 1.86 & 0.17 & 1.78 & 0.77 \\ & B & 3.12 & 1.88 & 0.23 & 1.96 & 0.78 \\ & C & 10.25 & 3.14 & 1.14 & 4.78 & 2.57 \\ \hline \multirow{3}{*}{VINS-F\cite{vins-fusion}} & A & 4.11 & 1.78 & 0.19 & 1.70 & 0.71 \\ & B & 6.28 & 1.82 & 0.06 & 1.95 & 0.77 \\ & C & 7.30 & 2.71 & 1.75 & 3.03 & 0.99 \\ \hline \multirow{3}{*}{LIO-SAM\cite{lio-sam}} & A & 1.52 & 0.87 & 0.04 & 0.92 & 0.38 \\ & B & 1.40 & 0.71 & 0.03 & 0.80 & 0.32 \\ & C & 1.37 & 1.01 & 0.80 & 1.04 & 0.14 \\ \hline \end{tabular} \end{table} \subsection{LiDAR-based SLAM} While visual SLAM algorithms exploit richer information and need fewer resources, LiDAR-based SLAM is a classical method with reliable accuracy and robustness. LIO-SAM\cite{lio-sam} is a recent tightly coupled LiDAR SLAM method that incorporates IMU and GPS information.
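The ATE statistics in Tab.~\ref{ape_SLAM} follow the usual recipe: align the estimated trajectory to the ground truth, then take statistics of the residual distances. A simplified, translation-only sketch (EVO additionally estimates the aligning rotation via Umeyama's method; the trajectories below are toy data):

```python
import numpy as np

def ate_rmse(est, ref):
    """RMSE of the absolute trajectory error after a translation-only
    alignment of the estimated trajectory to the reference.

    est, ref: (N, 3) arrays of timestamp-associated positions.
    """
    est = np.asarray(est, dtype=float)
    ref = np.asarray(ref, dtype=float)
    est_aligned = est - est.mean(axis=0) + ref.mean(axis=0)
    err = np.linalg.norm(est_aligned - ref, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = ref + np.array([0.5, 0.0, 0.0])  # constant offset, absorbed by alignment
rmse = ate_rmse(est, ref)              # 0.0 for a pure translation error
```

The Max, Median, Min, and STD columns of the table are the corresponding statistics of the same per-pose residuals `err`.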
We evaluate LIO-SAM\cite{lio-sam} and list the results in Fig. \ref{eva_SLAM} and Tab. \ref{ape_SLAM}. \section{CONCLUSIONS} \label{V} We present the SUPS dataset, a novel perception and SLAM benchmark that supports autonomous driving tasks in the underground parking scenario with a complete sensor suite, e.g., surround-view fisheye cameras, forward pinhole cameras, a depth camera, LiDAR, GNSS, and an IMU. We evaluate semantic segmentation, parking slot detection, and SLAM algorithms on the proposed dataset to verify its practicability. In addition to the SUPS dataset, we open-source the virtual scene and vehicle setups in the simulator so that users can make specific adaptations. Since we provide a large FOV that covers a complete parking space along the roadside in the BEV, we propose a dedicated description of the parking slot. One of our future directions is to develop object-level detection methods, instead of low-level corners and center lines, to describe the parking slots. Object-level detection could be more helpful than scattered structures in subsequent motion planning and decision-making procedures. \addtolength{\textheight}{-7cm} \section*{Acknowledgment} This work is supported by the Shanghai Municipal Science and Technology Major Project (No.2018SHZDZX01), ZJ Lab, and the Shanghai Center for Brain Science and Brain-Inspired Technology. \bibliographystyle{IEEEtran}
\section{Introduction} {Magnetars, including soft gamma ray repeaters (SGRs) and anomalous X-ray pulsars (AXPs), are defined as isolated neutron stars (NSs) emitting from radio to X-rays, and they are presumably powered by the dissipation of their very strong magnetic fields ($\sim10^{14}$\,G, \citealt{2002ApJ...574..332T,2008A&ARv..15..225M})}. {Such fields are inferred from their rapid spin-down and extreme flaring activity \citep{1982ApJ...260..371K,1992ApJ...392L...9D,2002ApJ...574..332T}.} {While fields of a similar magnitude have also been suggested for some accreting pulsars \citep{2010A&A...515A..10D,2012MNRAS.425..595R,2016MNRAS.457.1101T}, the emission in this case is clearly powered by accretion rather than field decay, so these objects are generally not considered magnetars.} {On the other hand,} accretion from a fall-back fossil disk, surviving the supernova explosion onto a magnetized NS \citep{1995A&A...299L..41V,2001ApJ...554.1245A}, or an isolated white dwarf (WD) \citep{2014PASJ...66...14C,2020arXiv200407963B} has also been invoked to explain the persistent X-ray emission and spin evolution of AXPs. A serious flaw in {this scenario} is that it cannot explain the giant flares observed from SGRs, which must still be powered by the decay or reconnection of ultra-strong magnetic fields close to the surface of the NS and triggered by crustal shifts. {Accretion is thus currently not considered as a mainstream explanation for the AXP phenomenon.} {Nevertheless, significant observational efforts aimed at the detection of optical or infrared emission from cool fossil disks, potentially powering accretion, have been undertaken and indeed were successful in revealing the presence of a disk \citep{2000Natur.408..689H,2001ApJ...556..399K,2006ApJ...649L..87E,2006Natur.440..772W,2008A&ARv..15..225M}. 
We note, however, that evidence for the presence of a disk does not necessarily imply that accretion powers the observed emission of AXPs.} {To further investigate the accretion scenario, we have taken an alternative and purely phenomenological approach, based on the comparison of the aperiodic variability properties of accreting objects with those of several magnetars, including the prototypical AXP \object{4U~0142$+$61}\xspace, often suggested in the literature as an accreting system, and of two other bright magnetars, 1RXS~J170849.0$-$400910 and 1E~1841$-$045. For completeness, we also include the bright radio pulsar PSR~B1509$-$58 for comparison.} {If accretion is the mechanism at the base of the observed emission, one would expect to observe similar aperiodic variability for accreting sources and magnetars. Accretion is known to be an intrinsically noisy process \citep{1997MNRAS.292..679L} and all accreting systems, from young stellar objects to active galactic nuclei, do exhibit strong red-noise type aperiodic variability \citep{1971ApJ...168L..43R,1974PASJ...26..303O,2009A&A...507.1211R,2015SciA....1E0686S}. There is no reason for magnetars to be an exception. In this work, we show, however, that the observed variability properties of AXPs are drastically different from those of accreting systems, and thus conclude that the observed emission is likely not powered by accretion.} \section{Object selection, observations, and analysis} Being the brightest and arguably the best-studied AXP, \object{4U~0142$+$61}\xspace is also the archetypal source discussed in the context of the accretion scenario, and, in fact, {one of the two magnetars for which substantial} observational arguments exist to support this interpretation {(the other being 1E~161348$-$5055 in RCW~103, \citealt{2018arXiv180305716E})}.
In particular, the detection of mid-infrared emission from a cool disk around the source \citep{2006Natur.440..772W} largely motivated the development of the fall-back accretion scenario. The peculiar two-component broadband X-ray spectrum, similar to that of some accretion-powered pulsars \citep[see i.e., Fig 1 in ][]{2012A&A...540L...1D}, has also been interpreted in favor of accretion for this source \citep[see also][and references therein]{2007ApJ...657..441E,2013ApJ...764...49T,2014ApJS..212....6O,2015MNRAS.454.3366Z,2020arXiv200407963B}. In addition, its high observed flux and comparatively short spin period ($\sim8.7$\,s) make \object{4U~0142$+$61}\xspace an ideal target for our study. {In this study, we also consider two other magnetars, 1RXS~J170849.0$-$400910 and 1E~1841$-$045, both of which are bright and have been observed with the same instrument (i.e., \textit{NuSTAR}), which is also used in this work for \object{4U~0142$+$61}\xspace and the reference accreting sources. Finally, for completeness, we also include the bright radio pulsar PSR B1509$-$58, which is also detected in the hard X-ray band and was observed with \textit{NuSTAR} \citep{2016ApJ...817...93C}.} {On the other hand, we have not included 1E~161348$-$5055 in our sample, since its very long spin period ($\sim6.7$ hours) would require very long observations to probe variability on timescales comparable to and exceeding the spin period.} {Indeed,} the power spectral density (PSD) of magnetized NSs and WDs {used to describe their variability} reveals the presence of strong power-law noise, which is truncated around the spin frequency of the compact object \citep{2009A&A...507.1211R}. Here, the break is associated with the interaction of the accretion flow with the magnetosphere, which suppresses noise at higher frequencies \citep{2009A&A...507.1211R,2019MNRAS.482.3622S}.
If the origin of the observed X-ray emission is the same in accreting pulsars and AXPs, one would expect similar power spectra, that is to say red-type noise with a break around the spin frequency. Considering that the physics of accretion is expected to be the same for similar luminosities, the amplitude of noise relative to the pulsed signal can also be expected to be similar for X-ray pulsars and magnetars. We observe that the lack of secular spin-up trends of magnetar candidates would imply, in the context of the fossil-disk accretion scenario, that they are close to co-rotation with the inner accretion disk regions. This in turn means that the break in the power spectrum can only occur around the spin frequency; additionally, in the accretion scenario, the noise at lower frequencies is not expected to be suppressed and must be observable. {Constraining the noise level on timescales of hours is, however, challenging from an observational perspective as it requires very long observations, hence the omission of 1E~161348$-$5055.} {The selection of the reference accreting sources for comparison can be fairly arbitrary as the observed power spectra are qualitatively similar for all accretors \citep{2009A&A...507.1211R,2015SciA....1E0686S,2019MNRAS.482.3622S}. Nevertheless, for a quantitative comparison, we have chosen objects which are as similar as possible to the considered magnetars in terms of phenomenology and the quality of the existing observations. As a reference accreting pulsar, we have selected 1A~0535$+$262, observed with \textit{NuSTAR} in quiescence \citep{2019MNRAS.487L..30T}. In this observation, the source was found at a luminosity comparable with that of AXPs and it exhibited a two-component energy spectrum similar to that of X~Persei and \object{4U~0142$+$61}\xspace \citep{2019MNRAS.487L..30T}.
This is an important point, as the hypothesis that accretion powers the persistent emission of AXPs is, at least in part, based on the similarity of their spectra to those of accretion-powered pulsars \citep{2013ApJ...764...49T}.} {Accretion onto a white dwarf has also been suggested to power the persistent emission of AXPs \citep{2014PASJ...66...14C,2020arXiv200407963B}. As a reference white dwarf accretor, we have chosen the intermediate polar (IP) GK~Persei, which was observed by \textit{NuSTAR} in outburst and thus has one of the highest luminosities among all IPs \citep{2019MNRAS.482.3622S}. We emphasize, however, that power spectra of all IPs appear to be qualitatively similar \citep{2019MNRAS.482.3622S}. We stress again that high quality data obtained with the same instrument (\textit{NuSTAR}) at comparable flux levels are available for all considered objects.} \begin{table*}[!ht] \begin{center} \begin{tabular}{lllllll} Source & \object{4U~0142$+$61}\xspace & 1A 0535+262 & GK Per & 1RXS~J170849.0 & 1E~1841$-$045 & PSR B1509-58\\ \hline obs.id & 30001023003 & 90401370001 & 30101021002 & 30401023002 & 30001025012 & 40024001002\\ $L_{\rm x}, 10^{34}$\,erg\,s$^{-1}$ & 38.3$^{a}$ & 6.7$^{b}$ & 0.14$^c$ & 4.2$^{d}$ & 0.85$^{d}$ & $\sim40^{e}$\\ exposure, ks & 143 & 55 & 72 & 93 & 100 & 34\\ count-rate, s$^{-1}$ & 3.46 & 2.54 & 1.7 & 1.3 & 1.34 & 1.3\\ $f_{\rm break}$, Hz & 0.115 & $9.662\times10^{-3}$ & $2.849\times10^{-3}$ & 9.08676$\times10^{-2}$& 8.4825$\times10^{-2}$ & 6.59 \\ $\Gamma_1/\Gamma_2$ & 0.7/2.0& 0.66(6)/1.89(6) & 0.72(6)/2.30(7) & 0.7/2.0 & 0.7/2.0 & 0.7/2.0\\ $A_{\rm noise}$ & $\le2\times10^{-3}$ & 0.44$_{-0.07}^{+0.15}$& 0.37$_{-0.01}^{+0.18}$ & 2.7$_{-1.8}^{+1.8}\times10^{-3}$ & $\le3\times10^{-3}$ & $\le4\times10^{-4}$\\ $A_{\rm noise}/A_{\rm pulse}$ & $\le0.35$ & 40$_{-14}^{+22}$ & 220$_{-125}^{+730}$ & $\le0.2$ & $\le0.17$ & $\le6\times10^{-4}$\\ \end{tabular} \end{center} \caption{Summary of the data used and derived PSD parameters for
all sources considered in this work. $^a$-using fluxes reported in \cite{2015ApJ...815...15W} and the distance from \cite{2020arXiv200407963B}, $^b$-\cite{2019MNRAS.487L..30T}, $^c$-\cite{2019MNRAS.482.3622S}, $^{d}$-\cite{2014ApJS..212....6O}, $^e$-\cite{2016ApJ...817...93C}. All uncertainties are reported at the $1\sigma$ confidence level.} \label{tab:psd_fit} \end{table*} \subsection{Observed variability in accretors and magnetars} {As already mentioned, all sources in the sample have been observed by \textit{NuSTAR}. A summary of the observations used in the analysis is presented in Table~\ref{tab:psd_fit}. To investigate the observed variability properties of all objects, we reduced the data and} extracted {source} light curves in the 3--80\,keV energy range, with a time resolution of 0.0625\,s, using the \texttt{HEADAS 6.27.1} software and the current set of calibration files (version 20200526). In each case, the source photons were extracted from a region centered on the source with a radius of 80$^{\prime\prime}$. The source signal dominated the count-rate ($\ge95$\% of all counts in all cases), so the background was not subtracted for the timing analysis. Light curves extracted from the two \textit{NuSTAR} units were corrected to the solar system barycenter and co-added to improve the counting statistics. PSDs were constructed using the \texttt{powspec} task and converted to a format readable by \texttt{XSPEC}, as described in \cite{2012MNRAS.419.2369I}. They were also rebinned by a constant factor to ensure that at least 20 points contribute to each frequency bin, in order to reduce the statistical bias \citep{10.2307/2241759} associated with the Whittle statistics \citep{1953ArM.....2..423W} used to fit the resulting PSDs. To model the PSDs, we chose a broken power law with the break fixed at the spin frequency of a given source (see Table~\ref{tab:psd_fit}) because all considered sources are expected to be close to co-rotation.
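The broken power-law noise continuum used in these fits can be written down compactly; a minimal sketch with a Leahy-normalized white-noise floor of 2 and illustrative (not fitted) parameter values:

```python
import numpy as np

def psd_model(f, a_noise, f_break, gamma1, gamma2, white=2.0):
    """Broken power-law noise continuum with a white-noise floor.

    Below f_break the noise falls as f**-gamma1, above it as
    f**-gamma2, and the two branches join continuously at the break.
    The constant `white` is the Leahy-normalized Poisson level.
    """
    f = np.asarray(f, dtype=float)
    low = a_noise * (f / f_break) ** (-gamma1)
    high = a_noise * (f / f_break) ** (-gamma2)
    return np.where(f < f_break, low, high) + white

# Illustrative values: break at the 0.115 Hz spin frequency of 4U 0142+61
f = np.array([0.01, 0.115, 1.0])
psd = psd_model(f, a_noise=0.4, f_break=0.115, gamma1=0.7, gamma2=2.0)
```

At the break the continuum equals `a_noise` above the white-noise level, which is how the noise amplitudes quoted in Table~\ref{tab:psd_fit} are defined here.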
A zero-width Lorentzian curve with the same frequency and an additional constant were also included in the model to account for pulsations and white noise. The white noise amplitude was fixed to the expected level of two, but it was not subtracted, for a clearer presentation of the power spectra of the AXPs. We also rescaled the frequency axis for plotting so that the break in the PSDs appears at the same location, to ease the comparison between individual objects, as was done by \cite{2009A&A...507.1211R}. The results are shown in Fig.~\ref{fig:psd}. As is clearly seen from the figure, low-frequency noise dominates the power spectra of the two well-established accreting objects, 1A~0535+262 and GK~Per. By contrast, it is completely absent in the PSDs of the {magnetars}. To estimate an upper limit on the noise amplitude, we included a broken power law component, fixing the indices {$\Gamma_{1,2}$ below and above the break frequency $f_{\rm break}$} to values similar to those obtained for the other two reference sources, and we calculated the $1\sigma$ confidence bounds for the amplitudes {$A_{\rm noise/pulse}$ of the noise and pulsed signal} using the \texttt{error} command in \texttt{XSPEC}. The results are presented in Table~\ref{tab:psd_fit}. The noise amplitude in \object{4U~0142$+$61}\xspace is consistent with zero and, if present at all, it has to be at least a factor of 180 lower than in the other two objects. In Table~\ref{tab:psd_fit}, we also report the ratio of the noise amplitude to the pulsed signal amplitude. In this case too, the relative noise power in \object{4U~0142$+$61}\xspace is substantially lower (by a factor of at least 75) than that of the reference accreting sources. {A similar conclusion also holds for the other two considered magnetars.} We note that, to the best of our knowledge, there are also no reports of aperiodic variability for {the persistent emission in} the other magnetars not studied here.
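The red-noise model used above can be sketched numerically as follows. This is a minimal illustration of the broken power law plus the constant white-noise level, with hypothetical function and parameter names (it is not the actual \texttt{XSPEC} model definition); the zero-width Lorentzian contributes power only at the single pulse-frequency bin and is omitted here.

```python
import numpy as np

def broken_power_law(f, a_noise, f_break, gamma1=0.7, gamma2=2.0):
    """Broken power law for the red-noise component of the PSD.

    Below f_break the power falls as f**-gamma1, above it as f**-gamma2;
    a_noise fixes the normalization at the break frequency.
    """
    f = np.asarray(f, dtype=float)
    return np.where(
        f <= f_break,
        a_noise * (f / f_break) ** (-gamma1),
        a_noise * (f / f_break) ** (-gamma2),
    )

def psd_model(f, a_noise, f_break, white_noise=2.0):
    """Red noise plus the expected (Leahy-normalized) white-noise level of 2."""
    return broken_power_law(f, a_noise, f_break) + white_noise
```

Evaluating the model at the break frequency returns $A_{\rm noise}$ plus the white-noise level of 2, and the two asymptotic slopes reproduce the fixed indices $\Gamma_1=0.7$ and $\Gamma_2=2.0$.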
Therefore, we conclude that the sources studied in this work do not constitute a special sample, and the absence of any observed aperiodic variability is a strong general argument against the accretion-powered origin of emission from magnetars. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{pspec_compare_more.pdf} \caption{Power density spectra of the magnetized compact objects discussed in the text and labeled in the legend. The frequency is expressed in units of the respective object's spin frequency, and the power is normalized such that the expected white noise level is 2.} \label{fig:psd} \end{figure} \section{Conclusions} The recent discovery of the two-component X-ray spectra typical of AXPs in several accreting X-ray pulsars at low luminosity \citep[see][]{2019MNRAS.483L.144T, 2019MNRAS.487L..30T} has suggested that {such a spectral shape} might be a common feature of {accretion-powered pulsars at low luminosities. The similarity of the spectra of these objects and AXPs \citep{2012A&A...540L...1D} can thus be viewed as an argument in favor of a common origin of the observed emission in X-ray pulsars and AXPs, that is to say accretion \citep{2012A&A...540L...1D,2013ApJ...764...49T}.} We examined, {therefore,} the hypothesis that the persistent emission of the AXP \object{4U~0142$+$61}\xspace\ {could indeed be} powered by accretion.
We find, however, that, despite the similarity of the observed X-ray spectra, the aperiodic variability that is universally observed in all accreting systems, including low-luminosity X-ray pulsars and accreting white dwarfs \citep{2009A&A...507.1211R,2015SciA....1E0686S}, is completely absent in \object{4U~0142$+$61}\xspace{, 1RXS~J170849.0$-$400910, and 1E~1841$-$045.} More specifically, this result follows from comparing the observed PSDs of {these objects} with those of the accreting pulsar 1A~0535+262 (observed in the low luminosity state) and of the intermediate polar GK~Persei, {which can be considered representative of an accreting neutron star and an accreting white dwarf, respectively}. All of these objects were observed with the \textit{NuSTAR} observatory at a comparable flux and luminosity level, which allowed us to obtain power spectra of a similar quality. We find that despite {similar luminosities, counting statistics, and energy spectra,} the variability properties of accreting objects and magnetars are {drastically} different, and no evidence for the low-frequency red noise that is typical for accreting sources is detected in the magnetars of the sample. We emphasize that the choice of other reference objects would not alter our conclusions, since aperiodic variability is an established feature of accreting systems. {We conclude, therefore, that the observed persistent emission from \object{4U~0142$+$61}\xspace, 1RXS~J170849.0$-$400910, 1E~1841$-$045, and PSR~B1509$-$58 is not due to accretion, as expected.} Considering that \object{4U~0142$+$61}\xspace is a prime candidate for an accretion-powered AXP, and the lack of detected (or reported) variability in any of the magnetar candidates, we conclude that our finding constitutes a strong independent argument against the accretion-powered origin of the persistent X-ray emission in magnetars. This conclusion can be further verified by extending a similar analysis to more sources.
\begin{acknowledgements} VD thanks Joachim Tr\"umper for asking whether accreting pulsars in a low-luminosity state are indeed magnetars, a question which this paper aims to answer. This work was supported by the Russian Science Foundation (grant 19-12-00423). VFS thanks the Deutsche Forschungsgemeinschaft for financial support (grant DFG-GZ WE 1312/53-1). We thank the German Academic Exchange Service (DAAD, project 57405000) and the Academy of Finland (projects 324550, 331951) for travel grants. \end{acknowledgements} \vspace{-0.3cm}
\section{Introduction} EUDET~\cite{eudetweb} is a project supported by the European Union in the Sixth Framework Programme structuring the European Research Area~\cite{CORDIS}. The project is an Integrated Infrastructure Initiative (I3) which aims to create a coordinated European effort towards research and development for ILC detectors. The emphasis of the project is the creation and improvement of infrastructures to enable R\&D on detector technologies with larger prototypes. After establishing several new technologies to match the required ILC detector performances, the construction of and experimentation with larger scale prototypes to demonstrate the feasibility of these detector concepts is the next important step towards the design of an ILC detector. Such larger detectors generally require cooperation between several institutes and EUDET is intended to provide a framework for European and also Global collaboration. The project comprises 31 European partner institutes from 12 different countries working in the field of High Energy Physics. In addition, 23 associated institutes will contribute to and exploit the EUDET research infrastructures. The project started in January 2006 and will run for four years providing additional funding of 7\,MEuros from the European Union. In addition significant resources are committed by the participating institutes. EUDET contributes to the development of larger prototypes of all detector components for which major R\&D efforts are ongoing: vertex and tracking detectors as well as calorimeters. The project is organised in three Joint Research Activities: test beam infrastructure, infrastructure for tracking detectors and infrastructure for calorimeters, which are subdivided into several tasks addressing different detector types and technologies. 
The project is complemented by networking activities, the tasks of which include support for information exchange and common analysis tools, as well as a transnational access scheme through which the use of the DESY test beam and, at a later stage, the exploitation of the infrastructures by European research groups is subsidised. With the increasing size and complexity of detector prototypes, data acquisition issues become more and more important. For some of the EUDET infrastructures the development of a DAQ system is part of the project and first conceptual ideas have been developed. Even though it is certainly too early to design the final DAQ system, it is valuable to exchange ideas, homogenise concepts across sub-detector boundaries and thus prepare the ground for an integrated concept for the ILC detector. In EUDET a coherent DAQ approach is discussed for the large prototypes involved to facilitate combined test beam experiments. Even though it was not part of the original project, discussions have started to evaluate the feasibility in light of the very different demands of the detectors. It should also be noted that efforts on coherent DAQ schemes are very welcome to advance the concept of the Global Detector Network~\cite{GDN}. \section{The Joint Research Activities} \subsection{Test Beam Infrastructure} This activity aims at improving the current test beam installation with a large bore magnet of up to about 1 Tesla and a low mass coil. The magnet, called PCMAG, is provided by KEK, one of the associated institutes. In addition a high resolution beam telescope made of pixel detectors using Monolithic Active Pixel Sensors (MAPS) is under development~\cite{Tobias}. Initially both devices will be constructed and used at the DESY test beam facility, but they are transportable, as all EUDET infrastructures, and could be used later at other laboratories.
\begin{figure}[htbp] \epsfxsize=8cm \centerline{\epsfbox{JRA1sketch.eps}} \caption{Design of the DAQ system for the high resolution pixel telescope.} \label{fig:JRA1sketch} \end{figure} An important part of this project is the development of a DAQ system for the pixel telescope. Fig.~\ref{fig:JRA1sketch} shows a first design. This task includes the design of front-end electronics and data reduction boards. It will be complemented by a special trigger logic unit. Some parts of the design, like the connection to the readout computers, are not yet decided. A first demonstrator set-up of the telescope is scheduled to become operational by mid 2007 and the full telescope by the end of 2008. \subsection{Infrastructure for Tracking Detectors} Both options for the ILC central tracking detector, a high resolution TPC and a large low-mass tracker consisting of silicon strip detectors, are addressed in this activity. The TPC activity centres around the development and construction of a large field cage, to be used inside PCMAG to test various options of micro pattern gas detectors which are under study for the gas amplification at the end-plates. For the silicon tracking option, studies will concentrate on the design of a large and light mechanical structure, the cooling aspects as well as on front-end electronics development. Together with the TPC field cage, a general purpose readout system to be used with different end-plate technologies will be provided. The design of this readout is based on existing technologies, namely the ALTRO chip developed for the ALICE experiment~\cite{altro}, which can provide the required high number of channels at low cost. It also has the potential to be further developed and tailored to the requirements of the ILC TPC using new high integration technologies as they become available. The TPC readout system will be complemented by an adequate DAQ system.
This infrastructure is scheduled to be ready for first test beam measurements at the beginning of 2008. \subsection{Infrastructure for Calorimeters} This part of EUDET comprises the construction of a fully equipped module of the electromagnetic calorimeter, a versatile stack for testing technologies for the hadron calorimeter, as well as calibration and sensor test devices for the forward calorimeter. The development of front-end electronics and of a DAQ system for the calorimeters is also part of the project. \begin{figure}[htbp] \epsfxsize=8cm \centerline{\epsfbox{JRA3sketch.eps}} \caption{Off-detector receiver design for calorimeters.} \label{fig:JRA3sketch} \end{figure} A conceptual design of the DAQ system to be used with the electromagnetic and hadron calorimeters exists. It is flexible and can be adapted to different options of the readout electronics. Commercial products are used to ensure the system is inexpensive, scalable and maintainable. Fig.~\ref{fig:JRA3sketch} shows the concept of the off-detector receiver card. These cards will be mounted directly on PCI buses of the DAQ computers. This concept is expected to provide a high-speed generic DAQ card available in 2009 for test beam experiments. \section{Conclusions} Within the EUDET project, infrastructures for coming ILC detector R\&D with larger prototypes will be developed and constructed in the next years. DAQ systems for the calorimeters, vertex and tracking detectors are part of the project, which will permit detailed test beam experiments in a few years. Efforts have started to investigate whether the concepts for these DAQ systems can be homogenised despite the partially diverging requirements on the R\&D issues to be addressed by the different detectors. Obviously, any modification and enlargement of the planned DAQ systems has to be accommodated within the time frame and the resources of EUDET.
The advantages and possible benefits are, however, numerous, ranging from combined test beam experiments to the valuable experience to be gained for the ILC detector. \section*{Acknowledgments} This work is supported by the Commission of the European Communities under the 6th Framework Programme `Structuring the European Research Area', contract number RII3-026126.
\section{Introduction} \IEEEPARstart{I}{dentification} of anatomical reference points and landmarks is a prerequisite for numerous medical image analysis tasks\cite{rohr2001landmark}. These include image registration \cite{miao2012automatic, murphy2011semi, wang2018fast, han_robust_2014, alam2018medical}, initialization of segmentation methods \cite{oktay2017stratified}, and computation of clinical measurements for patient diagnosis and treatment planning \cite{wang2016benchmark, al2018automatic, torosdagli2018deep, kasel2013standardized, ionasec2008dynamic, zhengautomatic2010}. While manual identification of anatomical landmarks might be trivial, it is often tedious and cumbersome \cite{wang2016benchmark, elattar2016automatic}. Fast and accurate automatic landmark localization methods can replace manual identification and may be especially helpful when precise localization of multiple landmarks is required. Several application-specific automatic landmark localization methods have been proposed previously, such as methods combining segmentation of specific structures containing the landmarks and subsequent local rule-based analysis of those structures \cite{wachter2010patient, zheng2012automatic, elattar2016automatic}. More generic localization methods employ either multi-atlas image registration \cite{alam2018medical, isgum2009multi} or machine learning. In multi-atlas image registration, multiple atlas images with annotated landmarks are registered to the image of interest. Subsequently, a voting scheme determines the location of landmarks. Such approaches are accurate and robust to limited diversity in the anatomy and image acquisition, but they are typically time-consuming \cite{alam2018medical, isgum2009multi}. Machine learning provides a faster and more robust alternative. 
Conventional machine learning approaches for landmark localization in medical images are often classification- \cite{ionasec2008dynamic, ibragimov2015computerized, dabbah_detection_2014, mahapatra_landmark_2012, lu_discriminative_2012, zheng2012automatic, urschler2018integrating, donner_global_2013, oktay2017stratified} or regression-based \cite{al2018automatic, stern_local_2016,han_robust_2014, gao_collaborative_2015, lindner2015fully, urschler2018integrating, donner_global_2013, oktay2017stratified}. Classification-based methods detect the presence of a landmark in image slices, patches, or voxels. Classification methods use a hard threshold: the landmark is either present or absent. Therefore, these methods usually rely on careful consideration of a final threshold value, which may be data and task specific. Regression-based methods circumvent the use of a hard threshold by outputting a continuous value\cite{zhou_discriminative_2014}. Regression-based methods predict the displacement or distance to the landmark from image slices, patches, or voxels. Similar to many other automatic image analysis tasks, automatic landmark localization methods have become primarily deep learning-based \cite{litjens2017survey, zheng_3d_2015, o2018attaining}. Deep learning methods outperform conventional machine learning methods in a wide \REV{range} of applications\cite{litjens2017survey}. The advantage of deep learning is that it does not require handcrafting of features. Several deep learning methods have been proposed for landmark localization that employ classification. Yang et al. \cite{yang_automated_2015} classified image slices with a convolutional neural network (CNN) and predicted a landmark location based on intersecting the classification outputs from all axial, coronal and sagittal image slices. Zheng et al. \cite{zheng_3d_2015} localized a landmark by classifying image voxels with multi-layer perceptrons, while Arik et al. 
\cite{arik2017fully} performed pixel classification with a CNN to localize landmarks. Xu et al. \cite{xu2017supervised} localized landmarks with a CNN that classified pixels based on their relative position (up, down, left or right) to the landmark of interest. Subsequently, landmarks were localized by using the obtained pixel-wise action steps. Deep learning-based methods exploiting regression often predict heatmaps representing e.g. the distance between evaluated voxels and the landmark of interest \cite{wolterink2019coronary, payer2019integrating, o2018attaining, zhang2020headlocnet, torosdagli2018deep, meyer2018pixel}. Landmarks are identified as local or global minima in these heatmaps. Voxel labels in heatmaps can be seen as pseudo-probabilities, indicating how close a voxel is located to a landmark. This makes heatmap regression comparable with voxel classification without using a hard threshold. Wolterink et al. \cite{wolterink2019coronary} employed a CNN containing dilated convolutions to predict heatmaps indicating landmark locations. Similar to Wolterink et al. \cite{wolterink2019coronary}, Payer et al. \cite{payer2019integrating} and O'Neil et al. \cite{o2018attaining} also proposed methods to predict heatmaps for automatic landmark localization. Payer et al.\cite{payer2019integrating} used a CNN that combined local appearance responses of a single landmark with the spatial configuration of that landmark to all other landmarks while O'Neil et al. \cite{o2018attaining} employed a CNN that analyzed low resolution images and subsequently used a second CNN for further refinement. Torosdagli et al. \cite{torosdagli2018deep} used a CNN to predict heatmaps representing the geodesic distance to a segmented organ containing landmarks and subsequently used a long short-term memory classification network to localize landmarks placed closely together. Unlike methods that performed a single task at a time, Meyer et al. 
\cite{meyer2018pixel} used a multi-task network to determine which landmark was closest to an analyzed pixel and subsequently predicted the normalized 2D distance towards that landmark. Heatmap regression often requires combining a large number of predictions, for instance via a majority voting strategy, making it computationally expensive and often time-consuming. Therefore, a different approach was chosen by Zhang et al. \cite{zhang_detecting_2017}, who used a CNN to predict displacement vectors indicating the distance and direction from an analyzed voxel towards the landmark of interest. Subsequently, the CNN-architecture was expanded with additional layers to model correlations between analyzed input patches and output predicted landmark coordinates. Even though good results were obtained, predicting landmark coordinates directly from the image may require large and complex CNN architectures to model the complex non-linear mapping from input image to landmark location. Besides deep learning-based regression methods that directly predict landmark locations, regression has also been used to iteratively determine the landmark locations in an image\cite{aubert_automatic_2016, li2018fast, ghesu2019multi, alansary2019evaluating, al2019partial}. Aubert et al. \cite{aubert_automatic_2016} used a network to regress the displacement from an initial input patch, chosen with a statistical shape model, to the reference landmark. The landmark position was obtained by iteratively moving the input patch, using the predicted displacements, until convergence was reached and the landmark was localized. Li et al. \cite{li2018fast} localized landmarks in an iterative manner and employed a CNN that predicted the distance along each of the three coordinate axes from the center of 2.5D patches towards the landmark of interest while using classification to predict positive or negative movement along each coordinate axis. Ghesu et al. \cite{ghesu2019multi}, Alansary et al.
\cite{alansary2019evaluating}, and Al et al. \cite{al2019partial} localized landmarks exploiting deep reinforcement learning to obtain the optimal search path from an initial starting location towards the landmark of interest. \begin{figure} \centering \includegraphics[width=\columnwidth]{nooth1.pdf} \caption{The fully convolutional neural network analyzes images in a patch-based manner, combining regression and classification. The landmark is localized by jointly predicting the displacement vector pointing from the center of each patch to the landmark with regression (R) and by predicting the presence of the landmark in each patch with classification (C). The final landmark location is obtained by computing a weighted average of the predicted displacement vectors, using the obtained posterior classification probabilities as weights during averaging.} \label{overview} \end{figure} \begin{figure*} \centering \includegraphics[width=\textwidth, trim=0cm 2.5cm 0cm 0cm]{nooth2.pdf} \caption{\REVV{Schematics of the proposed method for landmark localization. The method employs a global-to-local localization approach with ResNet-based CNNs. The first CNN provides global estimates of landmarks for the second specialized CNNs that predict final landmark locations. Since the CNNs are fully convolutional, they can handle input images of any size.}} \label{fig:network} \end{figure*} \REVV{In this study, we propose a global-to-local localization approach, where an initial FCNN predicts the global locations of multiple landmarks simultaneously, and subsequently specialized FCNNs refine the final location of each landmark. Global multi-landmark localization and subsequent local single landmark localization are performed in a similar manner. 
While previous landmark localization methods used one approach, either classification \textit{or} regression, we propose a patch-based fully convolutional neural network (FCNN) that performs both classification \textit{and} regression (shown in Fig.~\ref{overview}). A patch-based approach provides a computationally efficient alternative to voxel-based approaches. However, since patch-based classification is inherently less precise than its voxel-wise counterpart, we mitigate this by jointly regressing the displacement vectors that point to the location of the landmark.} Conversely, using a regression-only localization approach might lead to sub-optimal localization results, because we postulate that displacement vectors predicted in image patches farther from the landmark of interest are less accurate than displacement vectors predicted in image patches closer to it. This can be mitigated by employing the posterior probabilities from the classification task as weights for weighted averaging of the displacement vectors. Combining regression and classification results in a landmark localization method that is both fast and highly accurate. We show that our method is generally applicable to a variety of landmark localization tasks: 8 landmarks in 3D coronary CT angiography (CCTA) scans, 2 landmarks in 3D olfactory MR scans, and 19 landmarks in 2D cephalometric X-rays. Additionally, we show that our method is able to localize single landmarks and multiple landmarks simultaneously. \section{Method} We propose an automatic landmark localization method that employs \REVV{a global-to-local estimation of landmark locations (Fig.~\ref{fig:network}). During global landmark localization, a fully convolutional neural network analyzes full input images in a patch-based manner and predicts the location of multiple landmarks. During subsequent local analysis, the location of each landmark is refined by an FCNN. 
The FCNNs employed for global and local analysis perform simultaneous regression and classification for a given input patch. In regression, the FCNNs predict displacement vectors from the center of any patch to landmarks of interest. The location of each landmark might be obtained by computing the average of the landmark locations to which predicted displacement vectors point, but presumably not all patches are equally important for accurate localization; i.e. image patches closer to the target landmark are likely more relevant for accurate landmark localization than patches farther from the target landmark. Therefore, simultaneously to regression, classification is performed to determine the importance of each patch in the landmark localization. Classification determines the presence of the target landmark in an image patch and the obtained posterior classification probabilities are used for weighted averaging of all predicted displacement vectors.} \REVV{The global FCNN is based on ResNet34~\cite{he2016deep} and it consists of one convolutional layer with 16 ($7\times7\times7$) kernels and a stride of 2, which is followed by 4 ResNet-blocks. One ResNet-block contains 3, 4, or 6 convolutional layer pairs, where each convolutional layer pair consists of two convolutional layers with 32, 64, 128, or 256 ($3\times3\times3$) kernels. In contrast to the original ResNet34\cite{he2016deep}, which contains a strided convolutional layer as the first layer of every ResNet-block, our network contains a pooling layer before the first and second ResNet-block, which is an average pooling layer with a size and stride of $2\times2\times2$ voxels. After the four ResNet-blocks, the network has two output heads: one for regression of displacement vectors, and another for classification of landmark presence. Both output heads are similar in design. Each head has two 256-node dense layers and an output-layer, implemented as 1$\times$1$\times$1 convolutions \cite{long_fully_2015}.
The classification head outputs scalars, one for each landmark, forced between 0 and 1 by a sigmoid function. The regression head outputs displacement vectors for each landmark.} \REVV{The specialized FCNNs for local landmark prediction are of similar but smaller design. Each network consists of a ResNet-block, followed by average pooling, a second ResNet-block, and the parallel regression and classification heads. The first ResNet-block consists of two convolutional layers of 32 ($3\times3\times3$) kernels. Average pooling is done with a size and stride of $2\times2\times2$ voxels. The second ResNet-block consists of two convolutional layers with 64 ($3\times3\times3$) kernels. Similarly to the global FCNN, the two ResNet-blocks are followed by two output heads: one for the classification task, and another for the regression task. All layers use 64 kernels.} Each convolutional layer in the FCNNs applies zero-padding, and after each convolutional layer, batch normalization \cite{ioffe_batch_2015} is applied. To allow application to images of arbitrary size, 3D feature maps are not flattened but dense layers are implemented as convolutions with a size of 1$\times$1$\times$1 voxel\cite{long_fully_2015}. Throughout a network, \REVV{rectified linear units (ReLUs)} are used for activation, except for the regression and classification output layers. For regression, a linear activation function is used, and for classification, a sigmoid activation is used to obtain posterior probabilities between~0~and~1. The loss function that was optimized during training consisted of two parts: the mean absolute error between the regression output and reference displacements, and the binary cross-entropy between the classification output and reference labels. 
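The two ingredients just described, the probability-weighted aggregation of patch-wise predictions and the two-part training objective, can be sketched as follows. Function names are illustrative, and the exact component-wise log-transform applied to the displacement vectors is an assumption here, not the paper's implementation.

```python
import numpy as np

def weighted_landmark_estimate(patch_centers, pred_disp, pred_prob):
    """Each patch votes for (patch center + predicted displacement vector);
    the votes are averaged with the posterior classification
    probabilities as weights."""
    votes = patch_centers + pred_disp
    weights = pred_prob / pred_prob.sum()
    return (weights[:, None] * votes).sum(axis=0)

def signed_log(d):
    # One possible log-transform of signed displacement components
    # (assumed for illustration; the exact transform is not specified here).
    return np.sign(d) * np.log1p(np.abs(d))

def combined_loss(pred_disp, true_disp, pred_prob, true_label):
    """Mean absolute error on log-transformed displacement vectors
    plus binary cross-entropy on the landmark-presence output."""
    mae = np.mean(np.abs(signed_log(pred_disp) - signed_log(true_disp)))
    p = np.clip(pred_prob, 1e-7, 1.0 - 1e-7)
    bce = -np.mean(true_label * np.log(p) + (1.0 - true_label) * np.log(1.0 - p))
    return mae + bce
```

In this sketch, patches that the classifier deems unlikely to contain the landmark contribute little to the final weighted estimate, which is the stated motivation for combining the two tasks.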
To ensure that input patches located far from the landmark have less influence on updates of network parameters compared to those located close to the landmark, the mean absolute error is calculated on log-transformed displacement vectors. As optimization algorithm, Adam with a learning rate of 0.001 \cite{kingma_adam:_2014} was used. Since a network is fully convolutional it can analyze input images of varying size. Depending on its input image the network outputs a varying number of displacement vectors and posterior probabilities during global landmark localization. Due to the network's \REVV{average pooling layers and the first convolutional layer with a stride of two voxels}, its outputs are distributed on a grid, where the grid spacing is defined by \REVV{the sum of the number of pooling layers and strided convolutional layers}. With $n$ representing \REVV{the sum of the number of pooling layers and strided convolutional layers for the global or local localization step}, this leads to a down-sampling rate of $1/2^n$ and therefore, a patch size of $2^n$ voxels. Hence, for a given network, a grid with a grid spacing of $2^n$ voxels is used to sample patches from an input image. \section{Data} \begin{figure}[!] \centering \begin{tabular}{@{}c@{}} \includegraphics[width=0.485\textwidth]{nooth3.pdf}\\ \small (a) CCTA \end{tabular} \vspace{\floatsep} \begin{tabular}{@{}c@{}} \includegraphics[width=0.485\textwidth]{nooth4.pdf}\\ \small (b) Olfactory MR \end{tabular} \vspace{\floatsep} \begin{tabular}{@{}c@{}} \includegraphics[width=0.485\textwidth]{nooth5.pdf}\\ \small (c) Cephalometric X-rays \end{tabular} \caption{Example images with reference annotations. (a) Axial, coronal, and sagittal slices (rows) from cardiac CT angiography (CCTA) scans of three different patients (columns), in which the right ostium is indicated by a red cross. 
(b) Axial, coronal, and sagittal slices (rows) from olfactory MR scans of three different patients (columns), in which the center of the right olfactory bulb is indicated by a red cross. (c) Cephalometric X-ray of three different patients (columns), in which 19 different landmarks are indicated by a red cross.} \label{fig:GT_annotations} \end{figure} We evaluated the method on three different datasets containing 3D CCTA scans, 3D olfactory MR scans, and 2D cephalometric X-rays (Fig.~\ref{fig:GT_annotations}). The choice of these datasets was based on the diversity in image acquisition modality (CT, MR, and X-ray), image dimensionality (2D and 3D), and anatomical coverage (cardiac, brain, and head). \subsection{Coronary CT Angiography} The dataset consisted of 672 CCTA scans, which were acquired in the University Medical Center Utrecht (Utrecht, The Netherlands) as part of regular patient care. The need for informed consent was waived by the Institutional Medical Ethical Review Board. ECG-triggered scans were acquired with a 256-detector row scanner (Philips Brilliance iCT, Philips Medical, Best, The Netherlands). Tube voltage ranged from 80~to~140\,kVp while tube current ranged from 210~to~300\,mAs. Intravenous contrast was administered before acquisition. All acquired scans had a slice thickness of 0.9\,mm with 0.45\,mm spacing. In-plane resolution ranged between 0.29 and 0.49\,mm. In all scans, an expert manually annotated eight clinically relevant cardiac landmarks: the aortic valve commissures between the non-coronary and right (NCRC), the non-coronary and left (NCLC), and the left and right aortic valve leaflets (LRC), the hinge points (most caudal attachments) of the left (LH), non-coronary (NCH), and right (RH) aortic valve leaflets, and the right (RO) and left coronary ostium (LO). 
These landmarks can be used to perform clinical measurements in patients undergoing transcatheter aortic valve implantation (TAVI) \cite{kasel2013standardized, elattar2016automatic, zheng2012automatic, wachter2010patient}. \REVV{An expert observer created manual annotations for the full dataset using the protocol by Kasel et al.~\cite{kasel2013standardized}. To determine inter-observer variability, a second observer annotated 100 randomly selected scans from the test set. This same set was used to determine the intra-observer variability: after one month, the first observer repeated the annotations in the 100 scans. Variability was defined as the Euclidean distance between the landmark annotations.} \subsection{Olfactory MR} The dataset contained 61 olfactory MR scans, which were acquired as part of clinical routine at Hospital Gelderse Vallei (Ede, The Netherlands). The local ethics committee approved the study, and informed consent was obtained from all subjects. Scans were acquired with a 3T Magnetom Verio MRI scanner (Siemens, Erlangen, Germany). To visualize the olfactory bulbs, a coronal T2-weighted fast spin-echo sequence was performed (echo time: 153 ms, repetition time: 4630 ms, field of view: 120$\times$120\,mm). Each scan contained 28 coronal slices reconstructed to an isotropic in-plane pixel size of 0.47\,mm and a slice thickness of 1\,mm with no inter-slice gap. In each scan, an expert manually delineated the right and left olfactory bulb. The center of each manual delineation was taken as the ground truth landmark location.
X-rays were acquired with a Soredex CRANEX\textregistered \ Excel Ceph machine (Tuusula, Finland) and Soredex SorCom software (3.1.5, version 2.0), and were obtained in TIFF format with a resolution of $1935\times2400$ pixels and an isotropic pixel size of 0.1\,mm. In all X-rays, two experienced medical doctors both manually annotated 19 clinically relevant landmarks, which can be used for diagnosis and treatment planning in orthodontic patients \cite{wang2016benchmark, torosdagli2018deep}. Following the challenge protocol, the average of the annotations provided by both experts was used as ground truth landmark location \cite{wang2016benchmark}. The landmarks were: the sella (L1), nasion (L2), orbitale (L3), porion (L4), subspinale (L5), supramentale (L6), pogonion (L7), menton (L8), gnathion (L9), gonion (L10), lower incisal incision (L11), upper incisal incision (L12), upper lip (L13), lower lip (L14), subnasale (L15), soft tissue pogonion (L16), posterior nasal spine (L17), anterior nasal spine (L18), and articulare (L19). The intra-observer and inter-observer variability were determined within the challenge following Lindner et al.~\cite{lindner2015fully}. \section{Evaluation} Evaluation was performed by computing the median Euclidean distance and interquartile range (IQR) between manually defined reference and automatically predicted landmark locations. In addition, following the \textit{ISBI 2015 Grand Challenge in Automatic Detection and Analysis for Diagnosis in Cephalometric X-ray Images} \cite{wang2016benchmark}, success detection rates (SDRs) were calculated. The detection of a landmark is considered successful when the Euclidean distance between the automatically localized landmark and its reference location is smaller than a predefined distance threshold. In our analysis, we used 10 distance thresholds ranging from 0.5\,mm to 5\,mm. The maximal distance threshold was chosen as the distance at which 95\% of the landmarks were successfully detected.
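To make the evaluation metrics concrete, the distance errors, their median and IQR, and the SDRs described above can be computed as in the following sketch. This is illustrative code, not the authors' evaluation software; the arrays of predicted and reference coordinates are hypothetical.

```python
import numpy as np

def distance_errors(predicted, reference):
    """Euclidean distance (mm) between predicted and reference landmarks.

    predicted, reference: (N, D) arrays of landmark coordinates in mm.
    """
    return np.linalg.norm(predicted - reference, axis=1)

def median_iqr(errors):
    """Median and interquartile range (IQR) of the distance errors."""
    q1, median, q3 = np.percentile(errors, [25, 50, 75])
    return median, q3 - q1

def success_detection_rate(errors, threshold_mm):
    """Percentage of landmarks localized within threshold_mm of the reference."""
    return 100.0 * np.mean(errors < threshold_mm)

# Hypothetical example with three 3D landmarks:
pred = np.array([[10.0, 10.0, 10.0], [5.0, 5.0, 5.0], [0.0, 0.0, 0.0]])
ref  = np.array([[10.0, 10.0, 10.5], [5.0, 8.0, 5.0], [0.0, 0.0, 6.0]])
err = distance_errors(pred, ref)  # per-landmark errors: 0.5, 3.0, 6.0 mm
```

Sweeping `success_detection_rate` over the 10 thresholds from 0.5\,mm to 5\,mm then yields the SDR curves reported below.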
\REVV{Intra- and second observer SDRs were determined in a similar way by using the two annotated sets. When two annotations of a landmark were within the set distance threshold, the annotation was considered successful.} \section{Experiments and Results} The method was implemented in Python using \REVV{PyTorch \cite{NEURIPS2019_9015} on an NVIDIA 2080 Ti with 11 GB of memory.} \subsection{Experiments} All datasets were first randomly divided into a training set, a validation set, and a hold-out test set. Training sets and validation sets were used to develop the method, while test sets were used for final evaluation. Note that the test sets were not used during method development in any way. \subsubsection{Coronary CT Angiography} \label{sec:ccta} The available set of 672 scans was randomly divided into 412 training, 60 validation, and 200 test scans. For computational purposes, scans were resampled to an isotropic voxel size of 1.5\,mm$^3$. \REVV{The global FCNN was trained for 300,000 iterations, using mini-batches of 4 randomly sampled sub-images of $72\times72\times72$ voxels. The FCNN was evaluated during training on the entire validation set every 10,000 iterations. The best-performing model was used for subsequent analysis. A local FCNN was trained using similar settings. However, the size of the sub-images was chosen based on the distance errors obtained during localization of landmarks with the multi-landmark network in the validation set and was therefore set to $16\times16\times16$ voxels. Moreover, sub-images were randomly sampled such that they always contained the landmark of interest.} \begin{table*} \setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{1.3} \centering \caption{ \REV{\REVV{Median (IQR) Euclidean distance errors (mm) between computed and reference landmark locations in CCTA scans obtained with networks used for multi-landmark localization.
Different training settings are evaluated: regression of displacement vectors with (R\textsubscript{log}) and without (R) log-transformation, employing the classification layer (C) or not, and performing only global localization (Global) or global-to-local localization (Global-to-local) of landmarks. The proposed method combined global-to-local landmark localization and employed regression of log-transformed displacement vectors and the classification output layer (Proposed ML). Additionally, distance errors obtained with the method adjusted for single landmark localization are listed as well (Proposed SL). The distance errors obtained by the intra-observer (Intra-observer) and the second observer (Second observer), computed as the distance between two annotations made by the same observer, and the distance between annotations made by two different observers, respectively, on a subset of the test set are also listed. Results are listed per landmark (NCRC, NCLC, LRC, LH, NCH, RH, RO, and LO) and for all landmarks together (All), with the smallest distance error shown in \textbf{bold}}.}} \begin{tabular}{l|cccccccc|c} & \multicolumn{3}{c}{Aortic valve commissures} & \multicolumn{3}{c}{Aortic valve hinges} & \multicolumn{2}{c|}{Coronary ostia} & \\ & NCRC & NCLC & LRC & LH & NCH & RH & RO & LO & All \\\hline Intra-observer & 2.68 (2.25) & 1.93 (1.72) & 1.96 (2.07)\ssymbol{2} & 2.04 (1.39)* & 2.54 (2.20) & 2.56 (2.28)\ssymbol{3} & 1.43 (1.05) & 1.88 (1.40)* & 2.06 (1.84)\ssymbol{3}\\ Second observer & 3.00 (1.23)\ssymbol{3} & 3.46 (1.45)\ssymbol{3} & 2.96 (1.13)\ssymbol{3} & 1.73 (1.32)* & 1.96 (1.19)\ssymbol{3} & 1.80 (1.62) & 1.78 (1.54)\ssymbol{2} & 2.31 (1.56)\ssymbol{3} & 2.50 (1.68)\ssymbol{3} \\\hline \textit{Global} \\ R & \REVV{3.20 (2.05)\ssymbol{3}} & \REVV{2.95 (1.95)\ssymbol{2}} & \REVV{3.08 (2.47)\ssymbol{3}} & \REVV{3.14 (2.10)\ssymbol{3}} & \REVV{3.51 (2.04)\ssymbol{3}} & \REVV{3.55 (2.14)\ssymbol{3}} & \REVV{4.36 (2.92)\ssymbol{3}} & \REVV{3.92 
(2.71)\ssymbol{3}} &\REVV{ 3.44 (2.36)\ssymbol{3}} \\ R\textsubscript{log} & \REVV{3.12 (2.06)\ssymbol{3}} & \REVV{3.12 (2.02)\ssymbol{3}} & \REVV{3.04 (2.09)\ssymbol{3}} & \REVV{2.49 (1.84)} & \REVV{3.36 (2.18)\ssymbol{3}} & \REVV{3.18 (1.71)\ssymbol{3}} & \REVV{4.08 (2.87)\ssymbol{3}} & \REVV{4.05 (2.66)\ssymbol{3}} & \REVV{3.23 (2.30)\ssymbol{3}}\\ C & \REVV{5.64 (4.13)\ssymbol{3}} & \REVV{4.77 (2.10)\ssymbol{3}} & \REVV{4.62 (2.37)\ssymbol{3}} & \REVV{4.43 (2.23)\ssymbol{3}} & \REVV{4.69 (3.01)\ssymbol{3}} & \REVV{4.34 (2.73)\ssymbol{3}} & \REVV{5.07 (2.67)\ssymbol{3}} & \REVV{4.64 (2.61)\ssymbol{3}} & \REVV{4.76 (2.72)\ssymbol{3}} \\ R + C & \REVV{2.93 (1.96)} & \REVV{2.60 (1.53)} & \REVV{2.62 (2.54)} & \REVV{2.73 (1.54)\ssymbol{2}} & \REVV{3.31 (1.92)\ssymbol{2}} & \REVV{3.14 (1.66)\ssymbol{2}} & \REVV{4.23 (2.94)\ssymbol{3}} & \REVV{3.72 (2.78)\ssymbol{2}} & \REVV{3.09 (2.11)\ssymbol{3}} \\ R\textsubscript{log} + C & \REVV{ 2.72 (1.71)} & \REVV{2.59 (1.84)} & \REVV{2.60 (1.98)} & \REVV{2.51 (1.74)} & \REVV{3.08 (1.79)} & \REVV{2.77 (1.70)} & \REVV{2.90 (1.93)} & \REVV{3.31 (2.23)} & \REVV{2.81 (1.88)} \\\hline \textit{Global-to-local} \\ Proposed SL & \REVV{1.99 (1.82)} & \REVV{1.94 (1.32)} & \REVV{\textbf{1.69 (1.73)}} & \REVV{\textbf{2.29 (1.46)}} & \REVV{2.68 (1.75)\ssymbol{3}} & \REVV{3.09 (1.82)\ssymbol{3}} & \REVV{1.48 (0.98)} & \REVV{1.55 (1.02)} & \REVV{2.03 (1.79)\ssymbol{3}} \\ Proposed ML & \REVV{\textbf{1.85 (1.96)}} & \REVV{\textbf{1.80 (1.59) }}& \REVV{1.76 (1.67)} & \REVV{2.40 (1.58)} & \REVV{\textbf{2.48 (1.72)}} & \REVV{\textbf{2.23 (1.42)}} & \REVV{\textbf{1.45 (1.20)}} & \REVV{\textbf{1.55 (0.97)}} &\REVV{\textbf{ 1.87 (1.67)}} \\ \end{tabular} \begin{tablenotes} \small \item Significance outcome by the Wilcoxon Signed Rank test compared to R\textsubscript{log} + C for the upper part of the table, and the proposed method for multi-landmark localization (Proposed ML) and the Intra-observer, Second observer, and proposed method for single 
landmark localization (Proposed SL) is indicated with * for p<0.05, $\dagger$ for p<0.01, and $\ddagger$ for p<0.001. \end{tablenotes} \label{tab:ccta_results} \end{table*} Table \ref{tab:ccta_results} lists the obtained median Euclidean distance errors (last row: Proposed ML) and those obtained by the intra-observer and second observer (first two rows) per landmark and for all landmarks together. Median distance errors obtained with the proposed method range from 1.45 to 2.48 mm for different landmarks, which corresponds to an error between 1.03 and 1.65 voxels. \REV{This is close to the \REVV{distance errors obtained by the intra-observer}, which ranged from 1.43 to 2.68 mm, and \REVV{the distance errors obtained by the second observer}, which ranged from 1.73 to 3.46 mm.} Distance errors obtained for automatic localization of the coronary ostia were the smallest, with 1.45 mm for the RO and 1.55 mm for the LO. On average, the processing time per scan was 0.06~$\pm$~0.05 seconds. \REVV{For six out of eight landmarks, distance errors obtained with the proposed method were lower than distance errors obtained by the intra-observer annotation. For three of these six landmarks, the differences were statistically significant. For five out of eight landmarks, distance errors obtained with the proposed method were lower than distance errors obtained by the second observer. For all landmarks but the RH, the differences were statistically significant}. To provide further insight into the performance, Fig. \ref{fig:CCTA_sdr_method} shows the SDRs obtained with the proposed method, while the intra-observer and second observer SDRs are shown as horizontal lines. Overall, the SDRs obtained with the method are \REVV{similar to or better than the intra-observer and second-observer SDRs}. Fig.
\ref{fig:vectorsCCTA} shows vector fields visualizing the predicted displacement vectors for localization of the RO landmark in the axial viewing plane in six CCTA scans from the test set: three scans in which the localization error was below 2.5 mm (top row) and three scans in which the localization error was above 5.0 mm (bottom row). Larger errors were made in scans in which anatomical deviation was present, such as both coronary ostia being located on the left side in close proximity to each other (Fig. \ref{fig:vectorsCCTA} bottom row). This anatomical deviation occurred in only 0.2\% of the scans in the training set but in 2\% of the scans in the test set, which might explain these errors. \begin{figure*} \centering \includegraphics[]{nooth6.pdf} \caption{\REVV{Success detection rates (SDRs) for landmark localization in CCTA scans. The results for the ablation study on the performance of global multi-landmark localization (Global) are shown for: FCNNs trained for one task, i.e. regression of displacement vectors (R), regression of log-transformed displacement vectors (R\textsubscript{log}), and classification of patches (C); FCNNs trained for joint regression and classification (R + C) and joint regression of log-transformed displacements and classification (R\textsubscript{log} + C). Furthermore, the results for our proposed global-to-local FCNNs (Global-to-local) trained for single landmark localization (Proposed SL) and multi-landmark localization (Proposed ML) are also shown. Additionally, intra-observer and second observer SDRs are indicated by horizontal lines.
Results are shown as \% over all landmarks.}} \label{fig:CCTA_sdr_method} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{nooth7.pdf}\\ \vspace{\floatsep} \includegraphics[width=\columnwidth]{nooth8.pdf}\\ \caption{Vector fields (orange) visualizing the predicted displacement vectors in the axial plane in six different CCTA scans from the test set where localization of the right coronary ostium was performed. For visualization purposes, 3D predicted displacement vectors are shown as 2D vector fields and the magnitudes of the vectors are rescaled. The green squares indicate posterior probabilities larger than 0.5, obtained by the classification task of the network. Reference and computed landmark locations are indicated with a blue and pink cross, respectively. The top row depicts scans in which localization errors were below 2.5 mm, while the bottom row depicts scans in which localization errors were above 5.0 mm. Images in the bottom row all contain two coronary ostia, which are both located on the left side in close proximity to each other.} \label{fig:vectorsCCTA} \end{figure} \subsubsection{Olfactory MR} The available set of 61 olfactory MR scans was randomly divided into 36 training, 5 validation, and 20 test scans. Scans were resampled to an isotropic voxel size of 0.47\,mm$^3$. \REVV{Training settings were similar to those used in the CCTA experiment, described in Section~\ref{sec:ccta}.} However, because scans contained only 60 coronal slices after resampling, scans were zero-padded in the z-direction. The size of the olfactory bulbs ranged between 1.4 and 5.2\,mm in-plane, and between 5.0 and 13.0\,mm in the z-direction. The median (IQR) Euclidean distance error between computed landmark locations and reference locations was \REVV{0.87 (1.36)} mm and \REVV{0.90 (0.58)} mm for the right and left bulb, respectively, and \REVV{0.90 (0.85)} mm when taking both landmarks into account.
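The zero-padding mentioned above interacts with the output grid of the FCNN: with $n$ pooling/strided layers, outputs lie on a grid with spacing $2^n$ voxels, so padding each dimension up to a multiple of $2^n$ guarantees the grid covers the full volume. A minimal bookkeeping sketch, assuming $n=3$; the helper names are illustrative, not the authors' implementation.

```python
import numpy as np

def pad_to_grid(volume, n):
    """Zero-pad so every dimension is a multiple of the patch size 2**n."""
    patch = 2 ** n
    pads = [(0, (-dim) % patch) for dim in volume.shape]
    return np.pad(volume, pads, mode="constant")

def output_grid_shape(shape, n):
    """Number of output positions per axis after n 2x down-sampling steps."""
    return tuple(dim // 2 ** n for dim in shape)

vol = np.zeros((60, 256, 256))               # e.g. a 60-slice MR volume
padded = pad_to_grid(vol, n=3)               # z padded from 60 to 64
grid = output_grid_shape(padded.shape, n=3)  # one output per 8-voxel patch
```

Each of the `grid` positions corresponds to one patch for which the network predicts a displacement vector and a posterior probability.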
Fig.~\ref{fig:bulb_sdr} shows the SDRs obtained with the proposed method. When a distance threshold of 4 mm was used, 95.0\% of all landmarks present in the test set were successfully detected. The processing time per scan was on average \REVV{0.07~$\pm$~0.01} seconds. \begin{figure} \centering \includegraphics[width=\columnwidth]{nooth9.pdf} \caption{\REVV{Success detection rates (SDRs) for olfactory bulb localization in MR scans, obtained with the proposed method. Results are shown as \% over all landmarks present in the test set and are given for eight distance thresholds ranging from 0.5\,mm to 4\,mm.}} \label{fig:bulb_sdr} \end{figure} \subsubsection{Cephalometric X-rays} The 150 cephalometric X-rays from the training set of the \textit{ISBI 2015 Grand Challenge in Automatic Detection and Analysis for Diagnosis in Cephalometric X-ray Images} \cite{wang2016benchmark} were used for training (140 X-rays) and validation (10 X-rays). The challenge provides two separate test sets for evaluation: one set containing 150 images (Test1), and one set containing 100 images (Test2). To mitigate varying image contrast, histogram equalization was performed on the X-ray images before analysis. Since X-rays are 2D images, we used a 2D version of the network in Fig.~\ref{fig:network}. Furthermore, because cephalometric X-rays are large ($1935\times2400$ pixels), we \REVV{also added an average pooling layer before the third and fourth ResNet-blocks}. Adding \REVV{average} pooling layers allowed us to enlarge the receptive field while keeping computational complexity low. \REVV{The network for global localization was again trained for 300,000 iterations, using mini-batches containing 4 sub-images of $592\times592$ pixels. For local analysis, mini-batches containing 4 sub-images of $16\times16$ pixels were used during training}. Table \ref{tab:ceph_table} lists the median Euclidean distance errors obtained with the proposed method.
Errors range from \REVV{0.46 to 2.12 mm} for different landmarks in Test1 and from \REVV{0.42 to 4.32 mm} for different landmarks in Test2. For both test sets, the best results were obtained for the localization of L12, which is the upper incisal incision. As reported by Lindner et al. \cite{lindner2015fully}, the mean intra-observer variabilities for the first and second observer were 1.73~$\pm$~1.35\,mm and 0.90~$\pm$~0.89\,mm, respectively, while the mean inter-observer variability was 1.38~$\pm$~1.55\,mm. When computing the mean distance error over all landmarks present in both test sets, we obtained a distance error of \REVV{1.35~$\pm$~1.19\,mm, which is lower than the intra-observer variability of the first observer and the inter-observer variability}. As defined by the challenge protocol \cite{wang2016benchmark}, we evaluated our method by computing the SDRs using four distance thresholds (2\,mm, 2.5\,mm, 3\,mm, and 4\,mm). These results are shown in Fig. \ref{fig:ceph_comparison}. On average, the processing time per scan was 0.05~$\pm$~0.009 seconds. \afterpage{\begin{table*} \renewcommand{\arraystretch}{1.3} \centering \caption{Median (IQR) Euclidean distance errors (mm) between the computed landmark locations and the reference locations, obtained with the proposed method. Results are listed separately for the two different test sets: Test1 and Test2.
Furthermore, results are listed per landmark (L1-L19) and for all landmarks together (All).} \begin{adjustbox}{} \begin{tabular}{lcccccccccc} & L1 & L2 & L3 & L4 & L5 & L6 & L7 & L8 & L9 & L10 \\\hline Test1 & \REVV{0.51 (0.41)} & \REVV{1.00 (1.25)} & \REVV{1.01 (1.09)} & \REVV{1.62 (1.71)} & \REVV{1.64 (1.64)} & \REVV{0.94 (0.98)} & \REVV{0.70 (0.85)} & \REVV{0.63 (0.70)} & \REVV{0.76 (0.85)} & \REVV{2.12 (1.83)} \\ Test2 & \REVV{0.52 (0.34)} & \REVV{0.57 (1.00)} & \REVV{2.31 (1.40)} & \REVV{1.19 (1.49)} & \REVV{1.11 (1.06)} & \REVV{2.62 (1.73)} & \REVV{0.58 (0.72)} & \REVV{0.50 (0.47)} & \REVV{0.52 (0.55)} & \REVV{1.68 (1.61)}\\ & L11 & L12 & L13 & L14 & L15 & L16 & L17 & L18 & L19 & All \\\cline{1-11} Test1 & \REVV{0.80 (1.26)} & \REVV{0.46 (0.89)} & \REVV{1.13 (0.83)} & \REVV{0.84 (0.58)} & \REVV{0.90 (0.88)} & \REVV{1.23 (1.14)} & \REVV{0.64 (0.64)} & \REVV{0.94 (1.21)} & \REVV{1.50 (1.78)} & \REVV{0.95 (1.15)} \\ Test2 & \REVV{0.63 (0.87)} & \REVV{0.42 (0.67)} & \REVV{2.32 (0.87)} & \REVV{1.87 (1.09)} & \REVV{0.94 (0.64)} & \REVV{4.32 (1.47)} & \REVV{0.88 (0.81)} & \REVV{1.13 (1.19)} & \REVV{1.06 (1.35)} & \REVV{1.07 (1.60)} \end{tabular} \end{adjustbox} \label{tab:ceph_table} \end{table*}} \afterpage{\begin{figure*} \centering \begin{tabular}{@{}c@{}} \includegraphics[width=0.32\textwidth]{nooth10.pdf}\\ \small (a) Test1 \end{tabular} \begin{tabular}{@{}c@{}} \includegraphics[width=0.32\textwidth]{nooth11.pdf}\\ \small (b) Test2 \end{tabular} \begin{tabular}{@{}c@{}} \includegraphics[width=0.32\textwidth]{nooth12.pdf}\\ \small (c) Test1 and Test2 \end{tabular} \caption{Success detection rates (SDRs) for landmark localization in cephalometric X-rays. SDRs obtained with the proposed method (Proposed) are shown together with SDRs reported in previous studies. Results are shown as \% over all landmarks and are given for four distance thresholds (2, 2.5, 3, 4\,mm). 
Results are shown separately for the two test sets, (a) Test1 containing 150 images, and (b) Test2 containing 100 images, (c) as well as for both test sets combined.} \label{fig:ceph_comparison} \end{figure*}} \subsection{Ablation Study} To investigate whether the application of classification or the log-transform during training is truly beneficial for accurate landmark localization, we performed an ablation study with CCTA scans only, assuming results generalize to other datasets. For this, four additional networks for global multi-landmark localization were trained. These networks were trained with or without applying the log-transform with regression, and with or without using the classification output layer. For the classification-only network, the final landmark location was obtained as a weighted average in which the centers of the analyzed patches served as predicted landmark locations and the posterior classification probabilities as weights. Table~\ref{tab:ccta_results} shows that the proposed approach utilizing joint classification and regression of log-transformed displacement vectors achieved the best performance (R\textsubscript{log} + C). Regression-only networks obtained smaller distance errors compared to classification-only networks. The addition of classification improved both regression-only networks, one using the log-transform and one without it. The log-transform improved localization performance in the networks performing regression, with and without classification. When the approach for global localization utilizing joint classification and regression of log-transformed displacement vectors was combined with local single landmark localization (Table~\ref{tab:ccta_results}, Proposed ML), smaller distance errors were obtained than with global localization only. Fig.~\ref{fig:CCTA_sdr_method} shows the obtained SDRs.
Better SDRs were obtained by networks performing joint classification and regression compared to regression-only \REV{and classification-only} networks. However, the best results were obtained when joint classification and regression of log-transformed displacement vectors were used with a \REVV{global-to-local approach.} \subsection{Single Landmark Localization} \REVV{The proposed method employing joint classification and regression of log-transformed displacement vectors was evaluated for single landmark localization in CCTA by training one network for each of the eight cardiac landmarks. Table~\ref{tab:ccta_results} lists the obtained Euclidean distance errors (Proposed SL). With the exception of localization of the LRC and LH, the network trained for multi-landmark localization outperformed networks trained for single landmark localization. However, differences in performance were only significant for localization of the NCH and RH. Fig. \ref{fig:CCTA_sdr_method} shows the obtained SDRs. For a distance threshold of 0.5 mm, the SDR obtained with networks trained for single landmark localization was slightly better than the SDR obtained with a network trained for multi-landmark localization. However, this difference was only 0.6\%. For all other distance thresholds, better SDRs were obtained by the network trained for multi-landmark localization compared to networks trained for single landmark localization.} \subsection{Comparison with State-of-the-art} A number of methods have previously been proposed to localize anatomical landmarks in medical images. \subsubsection{Coronary CT Angiography} \label{sec:ccta_comparison} \begin{table*} \setlength{\tabcolsep}{3.5pt} \renewcommand{\arraystretch}{1.3} \centering \caption{\REV{Median (IQR) Euclidean distance errors (mm) between computed and reference landmark locations for eight landmarks in CCTA scans. Landmarks were automatically localized with the methods by Alansary et al. 
\cite{alansary2019evaluating}, Payer et al. \cite{payer2019integrating}, and with the proposed method (Proposed). Methods localize either single landmarks (SL) or multiple landmarks simultaneously (ML). Results are listed per landmark (NCRC, NCLC, LRC, LH, NCH, RH, RO, and LO) and for all landmarks together (All).}} \begin{tabular}{ll|cccccccc|c} Method & & NCRC & NCLC & LRC & LH & NCH & RH & RO & LO & All \\ \hline Alansary et al. \cite{alansary2019evaluating}&SL & 3.35 (2.38)\ssymbol{3} & 3.35 (2.38)\ssymbol{3} & 3.35 (2.62)\ssymbol{3} & 3.35 (1.55)\ssymbol{3} & 3.67 (2.38)\ssymbol{3} & 3.67 (2.03)\ssymbol{3} & 3.35 (2.14)\ssymbol{3} & 2.60 (2.18)\ssymbol{3} & 3.35 (2.38)\ssymbol{3} \\ Payer et al. \cite{payer2019integrating} & SL & 2.30 (2.03)\ssymbol{3} & 2.70 (1.89)\ssymbol{3} & 3.06 (2.85)\ssymbol{3} & 2.45 (1.80)* & 2.72 (1.57) & 2.82 (1.50) & 2.03 (1.51)\ssymbol{3} & 2.69 (2.32)\ssymbol{3} & 2.55 (1.90)\ssymbol{3}\\\smallskip \REVV{Proposed} & \REVV{SL} & \REVV{1.99 (1.82)} & \REVV{1.94 (1.32)} & \REVV{1.69 (1.73)} & \REVV{2.29 (1.46)} & \REVV{2.68 (1.75)} & \REVV{3.09 (1.82)} & \REVV{1.48 (0.98)} & \REVV{1.55 (1.02)} & \REVV{2.03 (1.79)}\\ Payer et al. \cite{payer2019integrating}& ML & 1.79 (1.39) & 1.77 (1.52) & 1.90 (1.80) & 2.12 (1.39) & 2.50 (1.68) & 2.28 (1.36) & 1.30 (1.01) & 1.59 (1.17) & 1.90 (1.54)\\ \REVV{Proposed}& \REVV{ML} & \REVV{1.85 (1.96)} & \REVV{1.80 (1.59)} & \REVV{1.76 (1.67)} & \REVV{2.40 (1.58)} & \REVV{2.48 (1.72)} & \REVV{2.23 (1.42)} & \REVV{1.45 (1.20)} & \REVV{1.55 (0.97)} & \REVV{1.87 (1.67)}\\ \end{tabular} \begin{tablenotes} \small \item Significance outcome by the Wilcoxon Signed Rank test compared to the proposed method is indicated with * for p<0.05, $\dagger$ for p<0.01, and $\ddagger$ for p<0.001. Comparisons are made between methods performing single landmark localization (SL) or methods performing multi-landmark localization (ML).
\end{tablenotes} \label{tab:ccta_PA} \end{table*} Previous methods have been proposed to specifically localize the aortic valve hinges, the aortic valve commissures, and the coronary ostia in cardiac CT scans. Waechter et al. \cite{wachter2010patient} used pattern matching and reported distance errors of 1.0~$\pm$~0.8\,mm and 1.2~$\pm$~0.6\,mm for the right and left ostium, respectively. Wolterink et al. \cite{wolterink2019coronary} used a CNN to localize the ostia and obtained a mean distance error of 1.8~$\pm$~1.0\,mm. However, both methods were tested on small sets containing only 20~\cite{wachter2010patient} or 36~\cite{wolterink2019coronary} scans that might not have contained the anatomical deviation which was present in our test set comprising 200 CCTA scans. Removing eight scans from our test set that show severe anatomical deviation (the right ostium located on the left side, the left ostium located on the right side), or cases where a stent is present in the pulmonary arteries, improves results for localization of the coronary ostia from \REVV{2.03~$\pm$~3.05\,mm to 1.75~$\pm$~1.84\,mm}. For the aortic valve commissures and the aortic valve hinges, the proposed method obtained mean distance errors of \REVV{2.33~$\pm$~1.90\,mm and 2.61~$\pm$~1.44\,mm,} respectively. Zheng et al. \cite{zheng2012automatic} exploited voxel classification with landmark-specific probabilistic boosting trees and reported mean distance errors of 2.17~$\pm$~1.31\,mm, 2.09~$\pm$~1.18\,mm, and 2.07~$\pm$~1.53\,mm for the aortic valve commissures, the aortic valve hinges, and the coronary ostia, respectively. Landmarks were localized in C-arm CT scans with a voxel size ranging between 0.70 and 0.84 mm. Elattar et al. \cite{elattar2016automatic} applied a local rule-based approach and combined results obtained for localization of the aortic valve hinges and the coronary ostia in 40 CCTA scans with a voxel size varying from 0.44 to 0.9\,mm.
The analysis led to a mean distance error of 2.81~$\pm$~2.08\,mm. Al et al. \cite{al2018automatic} also combined results obtained for localization of all eight landmarks in 71 CCTA scans using cross-validation and obtained a mean distance error of 2.04~$\pm$~1.11\,mm. Voxel sizes of the CCTA scans used were not reported. Computing the same measure, i.e. the mean distance error over all eight cardiac landmarks, we obtained \REVV{2.36~$\pm$~1.24\,mm}. The aforementioned methods were developed and tested on different CCTA datasets than those used in our work. Hence, a comparison between the results should only be used as an indication of the performance. To enable a direct comparison of our methods with previous work, we have tested the recently proposed methods by Alansary et al. \cite{alansary2019evaluating}, who employ reinforcement learning and localize a single landmark at a time, and Payer et al. \cite{payer2019integrating}, who employ heatmap regression to localize either a single landmark or multiple landmarks jointly, on our data. For this, we used code made publicly available by the authors\footnote{https://github.com/amiralansary}\footnote{https://www.github.com/christianpayer}. The results are listed in Table~\ref{tab:ccta_PA}. The Wilcoxon Signed Rank test was used to test for significance. Results show that our method performing single landmark localization significantly outperformed the method proposed by Alansary et al. \cite{alansary2019evaluating}. Furthermore, we significantly outperformed the method proposed by \REVV{Payer et al. \cite{payer2019integrating} when trained for single landmark localization. When comparing our method for multi-landmark localization with the multi-landmark localization proposed by Payer et al. \cite{payer2019integrating}, differences in performance were not significant.} On average, the processing time per scan was 0.29~$\pm$~0.56 seconds for the method proposed by Alansary et al.
\cite{alansary2019evaluating}, and 0.42~$\pm$~0.13 seconds and 0.49~$\pm$~0.28 seconds for the method proposed by Payer et al. \cite{payer2019integrating} for single landmark localization and multi-landmark localization, respectively. \REVV{On average, the processing times per scan for our method were 0.04~$\pm$~0.01 and 0.06~$\pm$~0.05 seconds for single landmark localization and multi-landmark localization, respectively.} \subsubsection{Olfactory MR} To the best of our knowledge, no landmark localization methods have been evaluated for localization of the olfactory bulbs in MRI. To compare our proposed method with state-of-the-art landmark localization methods, we have evaluated the publicly available methods by Alansary et al. \cite{alansary2019evaluating} and Payer et al. \cite{payer2019integrating} as for the landmarks in CCTA (see Section~\ref{sec:ccta_comparison}). Results are listed in Table~\ref{tab:bulb_PA}. When localizing one olfactory bulb per scan, our method significantly outperformed the other methods performing single landmark localization. The difference between methods performing localization of both olfactory bulbs simultaneously was not significant. On average, the processing time per scan was 0.78~$\pm$~1.53 seconds for the method proposed by Alansary et al. \cite{alansary2019evaluating}, and 0.38~$\pm$~0.003 seconds and 0.44~$\pm$~0.25 seconds for the method proposed by Payer et al. \cite{payer2019integrating} for single landmark localization and multi-landmark localization, respectively. For the proposed method, the processing times per scan for single landmark localization and multi-landmark localization were 0.07~$\pm$~0.008 and 0.07~$\pm$~0.01 seconds, respectively. \begin{table} \renewcommand{\arraystretch}{1.3} \centering \caption{\REV{Median (IQR) Euclidean distance errors (mm) between computed and reference landmark locations for the right and left olfactory bulb in MRI.
Landmarks were automatically localized with the methods by Alansary et al. \cite{alansary2019evaluating}, Payer et al. \cite{payer2019integrating}, and the proposed method (Proposed). Methods localize either single landmarks (SL) or multiple landmarks simultaneously (ML). Results are listed per landmark (Right Bulb, Left Bulb) and for both bulbs together (Both Bulbs).}} \begin{tabular}{ll|cc|c} Method & &Right Bulb & Left Bulb & Both Bulbs \\\hline Alansary et al.~\cite{alansary2019evaluating}& SL & 1.28 (1.46) & 1.44 (1.42)* & 1.41 (1.42)\ssymbol{2} \\ Payer et al. \cite{payer2019integrating}& SL & 1.67 (1.48)* & 1.38 (2.22) & 1.55 (2.00)\ssymbol{2} \\\smallskip \REVV{Proposed} & \REVV{SL} & \REVV{0.93 (0.74)} & \REVV{0.99 (0.94)} & \REVV{0.95 (0.94)} \\ Payer et al. \cite{payer2019integrating}& ML & 0.91 (1.06) & 0.76 (1.01) & 0.89 (0.97) \\ \REVV{Proposed} & \REVV{ML} & \REVV{0.87 (1.36)} & \REVV{0.90 (0.58)} & \REVV{0.90 (0.85)} \\ \end{tabular} \begin{tablenotes} \small \item Significance outcome by the Wilcoxon Signed Rank test compared to the proposed method is indicated with * for $p<0.05$ and $\dagger$ for $p<0.01$. Comparisons are made between methods performing single landmark localization (SL) or methods performing multi-landmark localization (ML). \end{tablenotes} \label{tab:bulb_PA} \end{table} \subsubsection{Cephalometric X-rays} Previous methods have been proposed to localize landmarks in cephalometric X-rays. Ibragimov et al. \cite{ibragimov2015computerized}, Lindner et al. \cite{lindner2015fully}, and Urschler et al.~\cite{urschler2018integrating} all employed conventional machine learning, while Arik et al. \cite{arik2017fully} and Payer et al. \cite{payer2019integrating} both proposed a CNN to localize landmarks in cephalometric X-rays. Fig.~\ref{fig:ceph_comparison} shows a comparison between the SDRs obtained in previous studies and the SDRs obtained with the proposed method. Payer et al.
\cite{payer2019integrating} reported the percentage of outliers. Hence, for comparison with our results, we reformulated their results into SDRs. For all distance thresholds, our method \REVV{obtained better SDRs compared to those obtained in previous studies}. Reported processing times for the method proposed by Lindner et al. \cite{lindner2015fully}, Urschler et al.\cite{urschler2018integrating}, and Payer et al. \cite{payer2019integrating} were 5, 56, and 2 seconds per scan, respectively. However, for the proposed method, the processing time for localization of all landmarks was on average 0.05~$\pm$~0.009 seconds per scan. \section{Discussion} An automatic method for anatomical landmark localization in medical images has been proposed. The method \REVV{employs global-to-local analysis where initially a fully convolutional neural network predicts the locations of multiple landmarks simultaneously. Subsequently, specialized FCNNs refine the global landmark locations. For global multi-landmark localization, an FCNN analyzes 2D or 3D images of arbitrary size in a patch-based manner. For every patch in an image, regression is used to predict displacement vectors that point from the center of the patch to landmarks of interest. Simultaneously, classification is used to predict the presence of landmarks of interest in each image patch. The global landmark locations are obtained by a weighted average of the displacement vectors predicted by regression, using posterior probabilities predicted by classification as weights. Subsequently, specialized FCNNs refine the global landmark locations by analyzing a local sub-image around each landmark in a similar manner, performing regression and classification simultaneously and combining the results.} The method was evaluated using three different datasets, namely 3D CCTA scans, 3D olfactory MR scans and 2D cephalometric X-rays. 
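As a concrete illustration of the global localization step summarized above, the following minimal NumPy sketch (our own illustration; the array names and shapes are assumptions, not the authors' implementation) combines patch-wise displacement predictions into a single landmark estimate, using the classification posteriors as weights:

```python
import numpy as np

def localize_landmark(patch_centers, displacements, posteriors):
    """Weighted average of per-patch landmark votes.

    Each patch votes with (center + predicted displacement vector);
    classification posteriors down-weight unreliable patches.
    """
    votes = patch_centers + displacements            # per-patch landmark votes
    weights = posteriors / posteriors.sum()          # normalize posteriors
    return (weights[:, None] * votes).sum(axis=0)    # weighted average location

# Toy 2D example: three patches whose displacement vectors all point to (5, 5).
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
displacements = np.array([[5.0, 5.0], [-5.0, 5.0], [5.0, -5.0]])
posteriors = np.array([0.9, 0.8, 0.7])
location = localize_landmark(centers, displacements, posteriors)  # → (5.0, 5.0)
```

In the refinement stage, the same kind of weighted voting would simply be re-applied by a specialized network to a local sub-image around the global estimate.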
Results demonstrate that the method is able to localize landmarks with high accuracy in medical images differing in modality, dimensionality and depicted anatomy. Results obtained for the localization of the aortic valve commissures, the aortic valve hinges, and the coronary ostia in CCTA are comparable with the intra-observer variability and second observer performance. Previous methods analyzed images at higher resolution and reported slightly better results. However, these methods have been developed and tested on different CCTA datasets than used in our work and therefore, a comparison between the results should only be used as an indication of the performance. \REV{Ideally, landmarks would be localized at the native resolution. However, due to hardware limitations, we resampled the scans prior to analysis to reduce the image resolution. Memory limitations prohibited analysis of complete images at the native resolution; hence, analysis at a resolution close to the native resolution would require partitioning of images during inference. Our preliminary experiments showed that increasing the image resolution had a negative impact on the performance when analyzing complete images. However, addressing }the hardware limitations and analyzing scans at a higher resolution will probably lead to lower distance errors in mm. Using the CCTA dataset, we have shown that joint regression and classification improves upon \REV{classification-only}. \REV{For the classification-only networks, landmarks were localized by computing a weighted average of the predicted landmark locations. For this, the centers of analyzed patches were considered the predicted landmark locations, while the classification output, i.e. posterior probabilities, served as weights during averaging. A different approach for the final decision could also be considered.
For example, only the center of the patch with the highest classification probability could have been taken into account, or only patches with a posterior probability higher than a threshold could have been used. However, the ability of the classification-only network to precisely localize a landmark will always be limited by its patch size. A deeper neural network that could perform voxel-based classification could therefore be more precise than a patch-based classification network. Nevertheless, a voxel-based classification network would demand balancing of the data during training due to a high class imbalance between landmark and background voxels. Furthermore, voxel-based analysis is more computationally demanding than the patch-based classification network proposed here. Since landmark localization is typically a prerequisite for subsequent, more complex medical image analysis \cite{rohr2001landmark, miao2012automatic, murphy2011semi, wang2018fast, han_robust_2014, alam2018medical, oktay2017stratified, wang2016benchmark, al2018automatic, torosdagli2018deep, kasel2013standardized, ionasec2008dynamic, zhengautomatic2010}, localization speed may be important.} Additionally, we have shown that joint regression and classification improves upon regression-only landmark localization. For regression-only methods, the final landmark location was predicted by computing the average over all landmark locations obtained with the predicted displacement vectors. Hence, independent of their distance to the landmark, all patches contributed equally. Inspection of the results showed that predictions from patches far from the landmark of interest resulted in larger distance errors than those made from patches close to the landmark of interest. Thus, equally weighting all predictions resulted in larger distance errors than joint classification and regression.
With joint regression and classification, such errors were mitigated by weighting the displacement vectors using the posterior probabilities obtained from classification. Namely, patches farther from the landmark of interest received lower posterior probabilities, thereby reducing their influence on the final landmark prediction. Employing a log-transform for the displacements further improved localization. This is likely caused by the interplay of the log-transform with the mean absolute error loss: during training, prediction errors from patches close to the landmark of interest are more heavily penalized than those from patches far from the landmark of interest. \REVV{Networks trained for multi-landmark localization obtained slightly better results than networks trained for single landmark localization in CCTA. However, no statistically significant difference between the two approaches was found, indicating that our proposed method could be used for single- as well as for multi-landmark localization.} Visual inspection of the results obtained with our proposed method on the CCTA dataset showed that larger Euclidean distance errors were obtained in images in which anatomical abnormalities were present, such as the right ostium being located on the left side of the patient. Training the network with more images depicting such anatomical deviations, or modeling these deviations via data augmentation, such as elastic transformations of images, could increase the variation in the dataset and ultimately improve localization. In contrast to previous work that required segmentation of the aorta \cite{elattar2016automatic, zheng2012automatic, wachter2010patient}, the method proposed here does not require any prior segmentation steps.
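The effect of the log-transform discussed above can be sketched as follows (a minimal sketch of a symmetric log-transform; the exact transform used in the experiments is not restated here, so the function names and the specific formula are our own assumptions). Under a mean-absolute-error loss in the transformed space, a unit displacement error near the landmark moves the encoded value far more than the same error far away, so near patches are penalized more heavily:

```python
import math

def log_encode(d):
    # symmetric log-transform of a displacement component: sign(d) * log(1 + |d|)
    return math.copysign(math.log1p(abs(d)), d)

def log_decode(t):
    # inverse transform: sign(t) * (exp(|t|) - 1)
    return math.copysign(math.expm1(abs(t)), t)

# A unit error in the displacement costs more in log-space near the landmark...
near_penalty = log_encode(2.0) - log_encode(1.0)     # ≈ 0.405
far_penalty = log_encode(101.0) - log_encode(100.0)  # ≈ 0.010
# ...and the transform is exactly invertible.
roundtrip = log_decode(log_encode(-12.5))            # → -12.5
```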
Moreover, during inference, the method analyzes complete images and thus it is capable of localizing target landmarks in large 3D image volumes with high speed. As demonstrated by the results, direct learning from the data without preprocessing steps incorporating knowledge about the anatomy leads to accurate localization of the eight cardiac landmarks. \REVV{Compared with recent methods \cite{alansary2019evaluating, payer2019integrating}, our method outperforms those of Alansary et al. \cite{alansary2019evaluating} and Payer et al. \cite{payer2019integrating} in the localization of single landmarks. Furthermore, our method performs on par with the method proposed by Payer et al. \cite{payer2019integrating} for multi-landmark localization.} \REVV{Contrary to earlier approaches, the method proposed here can be used for both single and multi-landmark localization. Furthermore, our method is able to localize landmarks faster than competing methods. For pre-operative applications or offline tasks, such as the initialization of segmentation methods \cite{oktay2017stratified}, localization speed might be less important, but for real-time applications, such as intra-operative applications, speed may be crucial \cite{alam2018medical}.} For landmark localization in both test sets of the cephalometric X-ray challenge, our proposed method outperformed previous methods, having an error close to the variability between the two observers. \section{Conclusion} We have shown that the proposed method is able to localize landmarks in 2D and 3D medical images of arbitrary size, acquired with three different imaging modalities and depicting different anatomical coverage. \REVV{The method localizes multiple or single landmarks} with high accuracy and speed, making it suitable for application in studies including a large number of images or for real-time localization.
\section*{Acknowledgments} This work is part of the research program Deep Learning for Medical Image Analysis with project number P15-26, financed by the Dutch Technology Foundation with contribution by Philips Healthcare. \IEEEtriggeratref{37}
\section*{Introduction} A fullerene is a spherically shaped molecule consisting of carbon atoms in which every carbon ring forms a pentagon or a hexagon \cite{Fowl95,Schw15}. Every atom of a fullerene has bonds with exactly three neighboring atoms. Fullerenes are the subject of intense research in chemistry, and they have found promising technological applications, especially in nanotechnology and materials science \cite{Ashr16,Cata11}. Molecular graphs of fullerenes are called \emph{fullerene graphs}. A fullerene graph is a \mbox{3-connected} planar graph in which every vertex has degree 3 and every face has size 5 or 6. By Euler's polyhedral formula, the number of pentagonal faces is always 12. It is known that fullerene graphs having $n$ vertices exist for all even $n \ge 24$ and for $n = 20$. The number of all non-isomorphic fullerenes was reported, for example, in \cite{Brin97,Fowl95,Goed15-2}. Fullerenes without adjacent pentagons, \emph{i.\,e.}, fullerenes in which each pentagon is surrounded only by hexagons, satisfy the isolated pentagon rule and are called \emph{IPR fullerenes}. They are considered to be thermodynamically stable fullerene compounds. Mathematical studies of fullerenes include applications of graph theory and topology methods, design of computational and combinatorial algorithms, information theory approaches, etc. (see selected publications \cite{Aliz14,Ando16,Ashr16,Cata11,Dobr20,Egor20,Fowl95,Ghor20,Goed15-2,Sabi18,Sabi15,Schw15}). Topological indices are often applied for quantifying the structural complexity of molecular graphs. One recent concept of complexity reflects the diversity of values of a vertex graph invariant. A number of distance topological indices of a graph are designed as the sum of contributions of its vertices. The \emph{complexity} of a graph is the number of pairwise distinct values of the vertex contributions. Graphs with the maximum possible complexity (equal to the number of vertices) are called \emph{irregular graphs}.
An irregular graph obviously has the identity automorphism group, while a graph with such a group need not be irregular. One of the most famous distance topological indices is the Wiener index of a graph $G$ with vertex set $V(G)$, which is defined as $W(G)= \frac{1}{2}\sum_{v \in V(G)} tr(v)$, where $tr(v)= \sum_{u \in V(G)} d(v,u)$ is the transmission of vertex $v$ and $d(v,u)$ denotes the shortest distance between vertices $v$ and $u$. This index was introduced by Harold Wiener in 1947 \cite{Wien47}. Bibliography on the Wiener index and its applications can be found in \cite{Dobr01,Dobr02,Knor16}. The Wiener complexity of a graph is the number of pairwise distinct vertex transmissions. It was studied for various classes of graphs in \cite{Aliz14,Aliz18,Dobr19-3,Dobr19}. In particular, it was demonstrated that there do not exist transmission irregular fullerene graphs with $n \le 232$ vertices \cite{Dobr19}. One possible reason for this fact is that the interval of transmission values may be too narrow for fullerenes of such sizes. In this paper, we consider a generalization of the vertex transmission and compute the corresponding complexity of fullerene graphs. Our goal is to find irregular fullerene graphs. \section*{Computational results} Values of the vertex transmission of arbitrary connected graphs with $n$ vertices lie between $n-1$ and $n(n-1)/2$. A generalization of the transmission $tr(v)$ of a vertex $v\in V(G)$ is the $(r,s)$-transmission of $v$ defined as $$ tr_{r,s}(v)= \sum_{u \in V(G)} \sum_{i=r}^{s} d(v,u)^i. $$ For $r=s=1$, the transmission $tr_{r,s}(v)$ coincides with $tr(v)$. The interval of possible values of $tr_{r,s}(v)$ is much larger than that for $tr(v)$. The corresponding analogs of the Wiener index of a graph $G$ may be written as $$ W_{r,s}(G)\ = \sum_{\{u,v\} \subseteq V(G)} \sum_{i=r}^{s} d(u,v)^i = \frac{1}{2} \sum_{v \in V(G)} tr_{r,s}(v). $$ Similar modifications of the Wiener index were considered in \cite{Gutm97,Gutm04,Shab13}.
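To make the definitions concrete, here is a short sketch (our own illustration, not tied to the software used for the computations reported below) that computes $tr_{r,s}(v)$ and the Wiener $(r,s)$-complexity of a graph from its breadth-first-search distances:

```python
from collections import deque

def bfs_distances(adj, src):
    # single-source shortest-path distances in an unweighted graph
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

def transmission_rs(adj, v, r, s):
    # tr_{r,s}(v) = sum over u of sum_{i=r}^{s} d(v, u)^i
    dist = bfs_distances(adj, v)
    return sum(d**i for d in dist.values() for i in range(r, s + 1))

def wiener_rs_complexity(adj, r, s):
    # number of pairwise distinct (r,s)-transmission values over all vertices
    return len({transmission_rs(adj, v, r, s) for v in adj})

# Path graph P4 (0-1-2-3): tr(0) = 1+2+3 = 6, tr(1) = 1+1+2 = 4,
# so the Wiener (1,1)-complexity is 2 (values {6, 4, 4, 6}).
P4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```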
It can be noted that $W_{1,1}$ is the Wiener index, $W_{k,k}$ is the $k$-th distance moment for positive integer $k$ \cite{Klei99}, $WW = \frac{1}{2} W_{1,2} $ is the hyper-Wiener index \cite{Rand93,Klei95}, and $TSZ = \frac{1}{6}(2W_{1,2} + W_{2,3})$ is the Tratch--Stankevich--Zefirov index \cite{Trat90}. We have computed the Wiener $(r,s)$-complexity for small values of $r$ and $s$, namely, for $r,s=1,2,3$. One of our goals is to find irregular fullerene graphs. Computational results for fullerene and IPR fullerene graphs are presented in Table~\ref{Table_1} and Table~\ref{Table_2}, respectively. Here $C_{n,r,s}$ denotes the maximal Wiener $(r,s)$-complexity of fullerene graphs with $n$ vertices. The numbers of fullerene graphs with $C_{n,r,s}=n$, i.e. with the maximal possible Wiener $(r,s)$-complexity, are in bold. Tables~\ref{Table_1} and \ref{Table_2} show that irregular fullerene graphs with $n\ge 76$ vertices exist for all considered $(r,s)$-transmissions. {\renewcommand{\baselinestretch}{1} \begin{table}[h!]
\centering \caption{The number of fullerene graphs ($N$) with $n$ vertices having the maximal Wiener $(r,s)$-complexity $C_{n,r,s}$.} \label{Table_1} \begin{tabular}{rr@{\hspace{1mm}}rr@{\hspace{1mm}}rr@{\hspace{1mm}}rr@{\hspace{1mm}}rr@{\hspace{1mm}}rr@{\hspace{1mm}}r} \hline & \multicolumn{2}{c}{$r=s=1$} & \multicolumn{2}{c}{$r=s=2$} & \multicolumn{2}{c}{$r=1, s=2$} & \multicolumn{2}{c}{$r=s=3$} & \multicolumn{2}{c}{$r=2, s=3$} & \multicolumn{2}{c}{$r=1, s=3$} \\ \hline $n$ &$C_{n,r,s}$& $N$ &$C_{n,r,s}$ & $N$ & $C_{n,r,s}$ & $N$ & $C_{n,r,s}$& $N$ & $C_{n,r,s}$ & $N$ & $C_{n,r,s}$ & $N$ \\ \hline 20 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 24 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 & 1 \\ 26 & 2 & 1 & 3 & 1 & 3 & 1 & 3 & 1 & 3 & 1 & 3 & 1 \\ 28 & 5 & 1 & 5 & 1 & 5 & 1 & 5 & 1 & 5 & 1 & 5 & 1 \\ 30 & 7 & 1 & 8 & 1 & 8 & 1 & 8 & 1 & 8 & 1 & 8 & 1 \\ 32 & 9 & 1 & 11 & 1 & 11 & 1 & 11 & 1 & 11 & 1 & 11 & 1 \\ 34 & 10 & 2 & 12 & 1 & 12 & 1 & 12 & 1 & 12 & 1 & 12 & 1 \\ 36 & 14 & 1 & 20 & 1 & 20 & 1 & 20 & 1 & 20 & 1 & 20 & 1 \\ 38 & 18 & 1 & 22 & 1 & 22 & 1 & 22 & 1 & 22 & 1 & 22 & 1 \\ 40 & 19 & 1 & 25 & 1 & 25 & 1 & 25 & 1 & 25 & 1 & 25 & 1 \\ 42 & 22 & 1 & 30 & 1 & 30 & 1 & 30 & 1 & 30 & 1 & 30 & 1 \\ 44 & 25 & 1 & 32 & 1 & 32 & 1 & 32 & 2 & 32 & 2 & 32 & 2 \\ 46 & 25 & 4 & 37 & 1 & 37 & 1 & 37 & 1 & 37 & 1 & 37 & 1 \\ 48 & 30 & 1 & 41 & 1 & 39 & 1 & 41 & 1 & 41 & 1 & 41 & 1 \\ 50 & 35 & 1 & 42 & 2 & 42 & 2 & 42 & 4 & 42 & 2 & 42 & 4 \\ 52 & 36 & 1 & 46 & 2 & 46 & 1 & 47 & 1 & 46 & 2 & 47 & 1 \\ 54 & 37 & 1 & 50 & 2 & 48 & 2 & 51 & 1 & 51 & 1 & 51 & 1 \\ 56 & 40 & 1 & 52 & 1 & 51 & 2 & 52 & 3 & 52 & 3 & 52 & 3 \\ 58 & 43 & 2 & 57 & 1 & 55 & 1 & 57 & 1 & 56 & 2 & 57 & 1 \\ 60 & 44 & 3 & 58 & 1 & 57 & 2 & 58 & 3 & 58 & 2 & 58 & 3 \\ \bf{62} & 46 & 3 & 60 & 2 & 59 & 4 & \bf{62}& 1 &\bf{62} & 1 &\bf{62} & 1 \\ \bf{64} & 49 & 5 &\bf{64}& 1 & 63 & 1 & \bf{64}& 2 &\bf{64} & 2 &\bf{64} & 2 \\ \bf{66} & 50 & 2 & 65 & 5 & 65 & 1 & \bf{66}& 2 &\bf{66} & 2 
&\bf{66} & 2 \\ \bf{68} & 56 & 1 & 67 & 5 & 66 & 7 & \bf{68}& 1 &\bf{68} & 1 &\bf{68} & 1 \\ \bf{70} & 56 & 1 & 69 & 7 & 69 & 1 & \bf{70}& 9 &\bf{70} & 9 &\bf{70} & 9 \\ \bf{72} & 56 & 6 &\bf{72}& 2 & 71 & 4 & \bf{72}& 18 &\bf{72} & 16 &\bf{72} & 18 \\ \bf{74} & 61 & 1 &\bf{74}& 4 & 73 & 3 & \bf{74}& 24 &\bf{74} & 19 &\bf{74} & 26 \\ \bf{76} & 63 & 1 &\bf{76}& 2 &\bf{76} & 1 & \bf{76}& 53 &\bf{76} & 50 &\bf{76} & 55 \\ \bf{78} & 64 & 2 &\bf{78}& 14 &\bf{78} & 1 & \bf{78}& 86 &\bf{78} & 72 &\bf{78} & 92 \\ \bf{80} & 66 & 2 &\bf{80}& 14 &\bf{80} & 2 & \bf{80}& 169 &\bf{80} & 140 &\bf{80} & 174 \\ \bf{82} & 71 & 1 &\bf{82}& 22 &\bf{82} & 3 & \bf{82}& 286 &\bf{82} & 251 &\bf{82} & 299 \\ \bf{84} & 70 & 2 &\bf{84}& 52 &\bf{84} & 11& \bf{84}& 483 &\bf{84} & 416 &\bf{84} & 505 \\ \bf{86} & 73 & 3 &\bf{86}& 69 &\bf{86} & 14& \bf{86}& 818 &\bf{86} & 672 &\bf{86} & 856 \\ \bf{88} & 73 & 7 &\bf{88}& 132 &\bf{88} & 16& \bf{88}& 1305 &\bf{88} & 1058 &\bf{88} & 1345 \\ \bf{90} & 79 & 1 &\bf{90}& 154 &\bf{90} & 36& \bf{90}& 2024 &\bf{90} & 1641 &\bf{90} & 2104 \\ \bf{92} & 80 & 1 &\bf{92}& 247 &\bf{92} & 38& \bf{92}& 3108 &\bf{92} & 2472 &\bf{92} & 3292 \\ \bf{94} & 82 & 1 &\bf{94}& 385 &\bf{94} & 73& \bf{94}& 4836 &\bf{94} & 3782 &\bf{94} & 5052 \\ \bf{96} & 84 & 2 &\bf{96}& 511 &\bf{96} & 86& \bf{96}& 6932 &\bf{96} & 5396 &\bf{96} & 7366 \\ \bf{98} & 86 & 1 &\bf{98}& 697 &\bf{98} &111& \bf{98}& 9800 &\bf{98} & 7623 &\bf{98} & 10493 \\ \bf{100} & 89 & 1 &\bf{100}& 923 &\bf{100} &147&\bf{100}& 13870 &\bf{100}& 10627 &\bf{100} & 14886 \\ \hline \end{tabular} \end{table} } {\renewcommand{\baselinestretch}{1} \begin{table}[h] \centering \caption{The number of IPR fullerene graphs ($N$) with $n$ vertices having the maximal Wiener $(r,s)$-complexity $C_{n,r,s}$.} \label{Table_2} \begin{tabular}{rrrrrrrrrrrrr} \hline & \multicolumn{2}{c}{$r=s=1$} & \multicolumn{2}{c}{$r=s=2$} & \multicolumn{2}{c}{$r=1, s=2$} & \multicolumn{2}{c}{$r=s=3$} & \multicolumn{2}{c}{$r=2, s=3$} & 
\multicolumn{2}{c}{$r=1, s=3$} \\ \hline $n$ &$C_{n,r,s}$& $N$ &$C_{n,r,s}$ & $N$ & $C_{n,r,s}$ & $N$ & $C_{n,r,s}$& $N$ & $C_{n,r,s}$ & $N$& $C_{n,r,s}$ & $N$ \\ \hline 60 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 70 & 5 & 1 & 5 & 1 & 5 & 1 & 5 & 1 & 5 & 1 & 5 & 1 \\ 72 & 4 & 1 & 4 & 1 & 4 & 1 & 4 & 1 & 4 & 1 & 4 & 1 \\ 74 & 6 & 1 & 9 & 1 & 9 & 1 & 9 & 1 & 9 & 1 & 9 & 1 \\ 76 & 13 & 1 & 17 & 1 & 17 & 1 & 17 & 1 & 17 & 1 & 17 & 1 \\ 78 & 14 & 1 & 17 & 2 & 17 & 2 & 18 & 1 & 18 & 1 & 18 & 1 \\ 80 & 17 & 1 & 20 & 1 & 19 & 1 & 21 & 1 & 21 & 1 & 21 & 1 \\ 82 & 19 & 1 & 32 & 1 & 30 & 1 & 33 & 1 & 33 & 1 & 33 & 1 \\ 84 & 25 & 1 & 37 & 1 & 35 & 1 & 37 & 1 & 37 & 1 & 37 & 1 \\ 86 & 39 & 1 & 62 & 1 & 58 & 3 & 65 & 2 & 64 & 2 & 65 & 2 \\ 88 & 36 & 1 & 69 & 1 & 65 & 1 & 70 & 1 & 70 & 1 & 70 & 1 \\ 90 & 39 & 2 & 71 & 1 & 69 & 1 & 73 & 2 & 73 & 1 & 73 & 2 \\ 92 & 41 & 1 & 80 & 1 & 76 & 1 & 84 & 1 & 84 & 1 & 84 & 1 \\ 94 & 48 & 1 & 82 & 1 & 80 & 2 & 84 & 2 & 84 & 2 & 84 & 2 \\ 96 & 49 & 1 & 85 & 1 & 81 & 2 & 87 & 5 & 87 & 3 & 87 & 6 \\ 98 & 55 & 1 & 87 & 1 & 85 & 2 & 91 & 3 & 91 & 2 & 91 & 3 \\ 100 & 55 & 3 & 90 & 2 & 90 & 1 & 95 & 1 & 95 & 1 & 95 & 1 \\ 102 & 59 & 1 & 94 & 4 & 92 & 1 & 98 & 2 & 98 & 1 & 98 & 2 \\ 104 & 65 & 1 & 97 & 2 & 99 & 1 & 101 & 1 & 100 & 9 & 101 & 2 \\ 106 & 69 & 1 & 101 & 2 & 101 & 1 & 104 & 4 & 103 & 9 & 104 & 3 \\ 108 & 70 & 1 & 103 & 2 & 102 & 3 & 107 & 3 & 107 & 2 & 107 & 2 \\ \bf{110} & 72 & 1 & 108 & 1 & 106 & 2 &\bf{110}& 1 & \bf{110} & 1 & \bf{110} & 1 \\ \bf{112} & 74 & 1 & 108 & 3 & 109 & 2 &\bf{112}& 1 & \bf{111} & 8 & \bf{112} & 1 \\ \bf{114} & 76 & 2 & 112 & 4 & 110 & 1 &\bf{114}& 4 & \bf{114} & 2 & \bf{114} & 4 \\ \bf{116} & 80 & 2 & 114 & 2 & 113 & 1 &\bf{116}& 11 & \bf{116} & 6 & \bf{116} & 15 \\ \bf{118} & 81 & 1 & 116 & 2 & 117 & 1 &\bf{118}& 32 & \bf{118} & 26 & \bf{118} & 32 \\ \bf{120} & 87 & 1 & 119 & 1 & 118 & 2 &\bf{120}& 39 & \bf{120} & 34 & \bf{120} & 47 \\ \bf{122} & 89 & 2 & 121 & 4 & 120 & 4 &\bf{122}& 73 & \bf{122} 
& 49 & \bf{122} & 82 \\ \bf{124} & 91 & 2 &\bf{124}& 1 & 122 & 8 &\bf{124}& 146 & \bf{124} & 100 & \bf{124} & 164 \\ \bf{126} & 93 & 1 &\bf{126}& 1 & 125 & 1 &\bf{126}& 262 & \bf{126} & 164 & \bf{126} & 268 \\ \bf{128} & 96 & 1 &\bf{128}& 6 & 127 & 4 &\bf{128}& 409 & \bf{128} & 270 & \bf{128} & 416 \\ \bf{130} & 97 & 3 &\bf{130}& 4 & \bf{130} & 1 &\bf{130}& 739 & \bf{130} & 466 & \bf{130} & 728 \\ \bf{132} & 100 & 2 &\bf{132}& 5 & \bf{132} & 1 &\bf{132}& 1246 & \bf{132} & 749 & \bf{132} & 1235 \\ \bf{134} & 102 & 2 &\bf{134}& 14 & \bf{134} & 3 &\bf{134}& 2000 & \bf{134} & 1314& \bf{134} & 1929 \\ \bf{136} & 105 & 1 &\bf{136}& 18 & \bf{136} & 1 &\bf{136}& 3020 & \bf{136} & 1831& \bf{136} & 2966 \\ \hline \end{tabular} \end{table} } \clearpage \begin{figure}[ht] \begin{minipage}[h]{0.47\linewidth} \center{\includegraphics[width=0.85\linewidth]{Figure_1_f33.pdf}} $n=62 \ (r=1,2,3, s=3)$ \\ \vspace*{5mm} \end{minipage} \hfill \begin{minipage}[h]{0.47\linewidth} \vspace*{-5mm} \center{\includegraphics[width=0.81\linewidth]{Figure_1_f57.pdf}} $n=110 \ (r=1,2,3, s=3)$ \\ \end{minipage} \vfill \begin{minipage}[h]{0.47\linewidth} \center{\includegraphics[width=0.9\linewidth]{Figure_1_f34.pdf}} $n=64 \ (r=s=2)$ \\ \vspace*{5mm} \end{minipage} \hfill \begin{minipage}[h]{0.47\linewidth} \vspace*{-5mm} \center{\includegraphics[width=0.85\linewidth]{Figure_1_f64.pdf}} $n=124 \ (r=s=2)$ \\ \end{minipage} \vfill \begin{minipage}[h]{0.47\linewidth} \center{\includegraphics[width=0.9\linewidth]{Figure_1_f40.pdf}} $n=76 \ (r=1, s=2)$ \\ \vspace*{5mm} \end{minipage} \hfill \begin{minipage}[h]{0.47\linewidth} \vspace*{-5mm} \center{\includegraphics[width=0.85\linewidth]{Figure_1_f67.pdf}} $n=130 \ (r=1, s=2)$ \end{minipage} \caption{Minimal irregular fullerene and IPR fullerene graphs with $n$ vertices.} \label{Figure_1} \end{figure} Despite the fact that the interval of possible values for transmission $tr_{1,2}$ is a bit longer than for $tr_{2,2}$, the number of vertices of irregular 
fullerene graphs for $r=1, s=2$ is less than for $r=s=2$. The same is also valid for the values $r=s=3$ and $r=2, s=3$. Based on the growth of the number of irregular fullerene graphs as the number of vertices increases, we can formulate the following results. \setcounter{section}{1} \begin{proposition} Irregular fullerene graphs with $n$ vertices exist for $n=64$ if $r=s=2$ or $$ n \geq \left\{ \begin{array}{rl} 72,& \mbox{if } r=s=2 \\ 76,& \mbox{if } r=1, s=2 \\ 62,& \mbox{if } r=1,2,3,\ s=3. \end{array} \right. $$ \end{proposition} \begin{proposition} Irregular IPR fullerene graphs with $n$ vertices exist for $$ n \geq \left\{ \begin{array}{rl} 124,& \mbox{if } r=s=2 \\ 130,& \mbox{if } r=1, s=2 \\ 110,& \mbox{if } r=1,2,3, \ s=3. \end{array} \right. $$ \end{proposition} Diagrams of the minimal irregular fullerene graphs are shown in Fig.~\ref{Figure_1}. These graphs have 64 vertices ($r=s=2$), 76 vertices ($r=1,s=2$), and 62 vertices (the same graph for $r=s=3$, $r=2, s=3$, and $r=1, s=3$). The minimal irregular IPR fullerene graphs have 124 vertices ($r=s=2$), 130 vertices ($r=1, s=2$), and 110 vertices (the same graph for $r=s=3$, $r=2, s=3$, and $r=1,s=3$). As for the Wiener complexity, the following question is still open: \begin{problem} Do there exist fullerene graphs with the maximum Wiener complexity? \end{problem} Based on the above considerations and the results of~\cite{Dobr19}, we assume that such irregular fullerene graphs may have 300--400 vertices. \bigskip {\bf Acknowledgment.} The work was supported by the Russian Foundation for Basic Research (project number 19--01--00682), the state contract of the Sobolev Institute of Mathematics (project no. 0314--2019--0016) (AAD), and the Ministry of Science and Education of Russia (agreement no. 075--02--2021--1392) (AV).
\section{Introduction} There has been a considerable development of methods for smooth estimation of density and distribution functions, following the introduction of kernel smoothing by \cite{Ros56} and the further advances made on the kernel method by \cite{Par62}. {We advise the reader to see the paper of~\cite{Har91} for an introduction to several kernel smoothing techniques}. However, these methods have difficulties at and near boundaries when curve estimation is attempted over a region with boundaries. Moreover, it is well known in nonparametric kernel density estimation that the bias of the standard kernel density estimator \begin{eqnarray*} \hat{f}(x)=\frac{1}{nh_n}\displaystyle\sum_{i=1}^nK\left(\frac{x-X_i}{h_n}\right) \end{eqnarray*} is of a larger order near the boundary than in the interior, where $K$ is a kernel (that is, a function satisfying $\int_{\mathbb{R}} K\left(x\right)dx=1$), and $\left(h_n\right)$ is a bandwidth (that is, a sequence of positive real numbers that goes to zero). We suppose for simplicity that there is a single known boundary to the support of the density function $f$, which we might as well take to be at the origin; {then} we are dealing with positive data. For convenience, we consider a symmetric kernel (for instance, a normal kernel). Away from the boundary, that is, at any $x>h_n$, the usual asymptotic mean and variance expressions apply. Let us now suppose that $f$ has two continuous derivatives everywhere, and that, as $n\to\infty$, $h = h_n\to 0$ and $nh\to \infty$. Then, \begin{eqnarray*} \mathbb{E}\left[\hat{f}(x)\right]\simeq f(x)+\frac{1}{2}h^2f''(x)\int x^2K(x)dx, \end{eqnarray*} and \begin{eqnarray*} Var\left[\hat{f}(x)\right]\simeq(nh)^{-1}f(x) \int K^2(x)dx. \end{eqnarray*} Near the boundary, the expressions for the mean and the variance are different.
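The boundary deficit described above is easy to see numerically. The following minimal sketch (our own illustration, with a uniform density on $[0,1]$ and a Gaussian kernel) shows the standard estimator recovering $f(0.5)=1$ in the interior while returning roughly $f(0)/2$ at the boundary, since only half of the kernel mass is supported by data there:

```python
import math
import random

def kde(x, data, h):
    # standard kernel density estimator with a Gaussian kernel
    c = 1.0 / (len(data) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)

random.seed(1)
data = [random.random() for _ in range(20000)]  # f = 1 on [0, 1]
interior = kde(0.5, data, h=0.05)  # close to f(0.5) = 1
boundary = kde(0.0, data, h=0.05)  # close to f(0)/2 = 0.5: boundary bias
```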
Let $x=ph$; we have \begin{eqnarray*} \mathbb{E}\left[\hat{f}(x)\right]\simeq \displaystyle f(x)\int_{-\infty}^pK(x)dx-hf'\left(x\right)\displaystyle\int_{-\infty}^pxK(x)dx+\frac{1}{2}h^2\displaystyle f''(x)\int_{-\infty}^px^2K(x)dx, \end{eqnarray*} and \begin{eqnarray*} Var\left[\hat{f}(x)\right]\simeq(nh)^{-1}\displaystyle f(x)\int_{-\infty}^pK^2(x)dx. \end{eqnarray*} This phenomenon is called boundary bias. Many authors have suggested methods for reducing this phenomenon, such as data reflection (\cite{Sch85}), boundary kernels (\cite{Mul91, Mul93} and \cite{Mul94}), the local linear estimator (\cite{Lej92} and \cite{Jon93}), and the use of beta and gamma kernels (\cite{Che99, Che00}).\\ For a smooth estimate of a density function with finite known support, Vitale's method (\cite{Vit75}) based on the Bernstein polynomials, illustrated below, has also been investigated in the literature (\cite{Gho01}, \cite{Bab02}, \cite{Kak04}, \cite{Rao05}, and more recently \cite{Leb10} and \cite{Kak14}). The idea comes from Weierstrass's approximation theorem: for any continuous function $u$ on the interval $[0,1]$, we have \begin{eqnarray*} \displaystyle\sum_{k=0}^m u\left(\frac{k}{m}\right)b_k(m,x)\to u(x),\quad \mbox{uniformly in}\, x\in[0,1], \end{eqnarray*} where $b_{k}(m,x)={{m}\choose{k}}x^{k}(1-x)^{m-k}$ is the Bernstein polynomial of order $m$.\\ In the context of a distribution function $F$ with support $[0,1]$, \cite{Vit75} proposed the estimator \begin{eqnarray*} \widetilde{F}_n(x)=\displaystyle\sum_{k=0}^{m}F_n\left(\frac{k}{m}\right)b_k(m,x), \end{eqnarray*} where $F_n$ is the empirical distribution based on a random sample $X_{1},X_{2}, \ldots, X_{n}$. Hence, an estimator for the density $f$ is given by \begin{eqnarray} \widetilde{f}_n(x)&=&\frac{d}{dx}\widetilde{F}_n(x)=m\displaystyle\sum_{k=0}^{m-1}\left[F_n\left(\frac{k+1}{m}\right)-F_n\left(\frac{k}{m}\right)\right]b_k(m-1,x).
\label{eq:26} \end{eqnarray} In this paper, we propose a recursive method to estimate the unknown density function $f$. The advantage of recursive estimators is that their update from a sample of size $n$ to one of size $n + 1$ requires considerably fewer computations. This property is particularly important, since the number of points at which the function is estimated is usually very large.\\ $ $\\ Let $X_{1},X_{2}, \ldots, X_{n}$ be a sequence of i.i.d random variables having a common unknown distribution $F$ with associated density $f$ supported on $[0,1]$. {In order to construct a recursive method to estimate the unknown density $f$, we first follow \cite{Leb10} and introduce $T_{n,m}$ as follows}: \begin{eqnarray*} T_{n,m}(x)&=&m\sum^{m-1}_{k=0}\left(\mathbb{I}_{\left\{X_n\leq\frac{k+1}{m}\right\}}-\mathbb{I}_{\left\{X_{n}\leq\frac{k}{m}\right\}}\right)b_{k}(m-1,x)\\ &=&m\sum^{m-1}_{k=0}\mathbb{I}\left\{\frac{k}{m}<X_{n}\leq\frac{k+1}{m}\right\}b_{k}(m-1,x)\\ &=&mb_{k_n}(m-1,x). \end{eqnarray*} Here, we let $k_n=[mX_n]$, where $[x]$ denotes the largest integer smaller than $x$. Then, we use the Robbins--Monro scheme (\cite{Rob51}): we choose $f_{0}(x)\in\mathbb{R}$ and {for all $n\in\mathbb{N}^*$, we set} \begin{eqnarray}\label{eq:rec:density} f_{n}(x)=(1-\gamma_n)f_{n-1}(x)+\gamma_{n}Z_{n}\left(x\right), \end{eqnarray} {where $(\gamma_n)$ is a sequence of real numbers, called a stepsize, and $Z_{n}(x)=2T_{n,m}(x)-T_{n,m/2}(x)$. For simplicity, we suppose that $f_0(x)=0$, and we set $\Pi_n=\prod_{j=1}^n(1-\gamma_j)$}. It then follows from~\eqref{eq:rec:density} that one can estimate $f$ recursively at the point $x$ by \begin{eqnarray} f_n(x)=\Pi_n\sum_{k=1}^n\Pi_k^{-1}\gamma_k Z_k(x). \label{eq:1} \end{eqnarray} Our first aim in this paper is to compute the bias, the variance, the mean squared error ($MSE$) and the mean integrated squared error ($MISE$) of the proposed recursive estimators.
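A minimal sketch of the recursion~\eqref{eq:rec:density} with the stepsize $(\gamma_n)=(n^{-1})$ may help fix ideas (our own illustration; a fixed even order $m$ is used here in place of the growing order $(m_n)$ studied below):

```python
import math
import random

def bernstein(k, m, x):
    # b_k(m, x) = C(m, k) x^k (1 - x)^(m - k)
    return math.comb(m, k) * x**k * (1.0 - x)**(m - k)

def T(x, X, m):
    # T_{n,m}(x) = m * b_{k_n}(m - 1, x) with k_n = [m X_n], clamped to m - 1
    k = min(int(m * X), m - 1)
    return m * bernstein(k, m - 1, x)

def recursive_density(samples, x, m):
    # Robbins-Monro recursion: f_n = (1 - g_n) f_{n-1} + g_n Z_n, g_n = 1/n,
    # with Z_n = 2 T_{n,m} - T_{n,m/2} and f_0 = 0 (m must be even).
    f = 0.0
    for n, Xn in enumerate(samples, start=1):
        gamma = 1.0 / n
        Z = 2.0 * T(x, Xn, m) - T(x, Xn, m // 2)
        f = (1.0 - gamma) * f + gamma * Z
    return f

random.seed(0)
data = [random.random() for _ in range(5000)]  # uniform density: f = 1 on [0, 1]
estimate = recursive_density(data, 0.5, m=10)  # close to f(0.5) = 1
```

Note that, with this stepsize, the recursion simply averages the $Z_k(x)$, in agreement with the closed form~\eqref{eq:1}.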
It turns out that these quantities heavily depend on the choice of the stepsize $(\gamma_n)$. Moreover, we give the optimal order $(m_n)$ which minimizes the $MSE$ and the $MISE$ of the proposed recursive estimators. Further, we show that with the stepsize $(\gamma_n)=(n^{-1})$ and the optimal order $\left(m_n\right)$, the proposed estimator $f_n$ can dominate Vitale's estimator $\widetilde{f}_n$ in terms of $MISE$. Finally, we confirm our theoretical results through a simulation study.\\ The remainder of this paper is organized as follows. In Section~\ref{section:notation:assumption}, we list our notations and assumptions. In Section~\ref{section:main:result}, we state the main theoretical results regarding the bias, variance, $MSE$ and $MISE$. Section~\ref{section:app} is devoted to some numerical studies: first by simulation (Subsection~\ref{subsection:simu}) and second using some real datasets (Subsection~\ref{subsection:realdata}). We conclude the paper in Section~\ref{section:conclusion}. Appendix~\ref{section:proof} gives the proofs of our theoretical results. \section{Assumptions and Notations} \label{section:notation:assumption} \begin{defi}\label{def:1} Let $\gamma\in\mathbb{R}$ and $(v_n)_{n\ge 1}$ be a nonrandom positive sequence. We say that $(v_n)\in \mathcal{GS}(\gamma)$ if \\ \begin{equation} \lim_{n \rightarrow +\infty} n\left[1-\frac{v_{n-1}}{{v_n}}\right]=\gamma. \nonumber \end{equation} \end{defi} This condition was introduced by \cite{Gal73} to define regularly varying sequences (see also \cite{Boj73}). Typical sequences in $\mathcal{GS}(\gamma)$ are, for $b\in \mathbb{R}$, $n^\gamma(\log n)^b$, $n^\gamma(\log \log n)^b$, and so on.\\ To study the estimator (\ref{eq:1}), we make the following assumptions: \begin{description} \item $\left(A1\right)$ $f$ admits a continuous fourth-order derivative $f^{(4)}$ on $[0, 1]$. \item $\left(A2\right)$ $(\gamma_{n})\in \mathcal{GS}\left(-\alpha\right)$, $\alpha\in(\frac{1}{2},1]$.
\item $\left(A3\right)$ $(m_{n})\in \mathcal{GS}(a)$, $a\in(0,1)$. \item$\left(A4\right)$ $\lim_{n\rightarrow\infty}(n\gamma_{n})\in(\min\left(2a,(2\alpha-a)/4\right),\infty)$. \end{description} \begin{itemize} \item Assumption $(A1)$ is standard in the framework of nonparametric estimation of a probability density using Bernstein polynomials (see \cite{Leb10}). \item Assumption $(A2)$ on the stepsize is usual in the framework of recursive density estimation (see \cite{Mok09} and \cite{Sla13,Sla14a,Sla14b}). This assumption ensures that $\sum_{n\geq 1} \gamma_n=\infty$ and $\sum_{n\geq 1} \gamma_n^2<\infty$, which are two classical conditions for obtaining the convergence of Robbins-Monro's algorithm (see~\cite{Duf97}). \item Assumption $(A3)$ on $(m_n)$ was introduced by analogy with the assumption on the bandwidth used for the recursive kernel distribution estimator (see \cite{Sla14a,Sla14b}), to ensure the applicability of the technical lemma given in Appendix A. \item Assumption $(A4)$ on the limit of $\left(n\gamma_n\right)$ as $n$ goes to infinity is usual in the framework of stochastic approximation algorithms. This condition ensures the applicability of the technical lemma given in Appendix A, from which the asymptotic bias and variance are obtained. \end{itemize} \begin{rem} The intuition behind the use of an order $\left(m_n\right)$ belonging to $\mathcal{GS}\left(a\right)$ is the following. For such an order, the ratio $m_{n-1}/m_n$ equals $1-a/n+o\left(1/n\right)$, while the assumption on the stepsize ensures that $\gamma_{n-1}/\gamma_n$ equals $1+\alpha/n+o\left(1/n\right)$. The technical lemma given in Appendix A then ensures that the bias and the variance depend only on $m_n$ and $\gamma_n$, and not on $m_1,\ldots,m_n$ and $\gamma_1,\ldots,\gamma_n$; consequently, the $MISE$ also depends only on $m_n$ and $\gamma_n$, which is helpful for deducing an optimal order and an optimal stepsize.
\end{rem} Throughout this paper we will use the following notations: \begin{eqnarray*} \Delta_1(x)&=&\frac{1}{2}\left[(1-2x)f'(x)+x(1-x)f''(x)\right],\quad \psi(x)=(4\pi x(1-x))^{-1/2}, \quad \xi=\displaystyle\lim_{n\to\infty}(n\gamma_n)^{-1},\\ \Delta_2(x)&=&\frac{1}{6}(1-6x(1-x))f''(x)+\frac{5}{12}x(1-x)(1-2x)f^{(3)}(x)+\frac{1}{8}x^2(1-x)^2f^{(4)}(x),\\ C_1&=&\int_0^1f(x)\psi(x)dx,\quad C_2=\int_0^1\Delta_2^2(x)dx, \quad C_3=\frac{1}{\sqrt{2}}+4\left(1-\sqrt{\frac{2}{3}}\right),\\ C_4&=&\int_0^1\Delta_1^2(x)dx, \quad C_5=\displaystyle\int_0^1\left\{-\frac{\Delta^2_1(x)}{2f(x)}+\Delta_2(x)\right\}^2dx,\\ C_6&=&\displaystyle\int_0^1\left\{-\frac{\Delta^2_1(x)}{2f(x)}+\Delta_2(x)+f(x)\displaystyle\int_0^1\frac{\Delta^2_1(y)}{2f(y)}dy\right\}^2dx. \end{eqnarray*} \section{Main results} \label{section:main:result} In this section we develop the bias, the variance and the $MSE$ of the proposed recursive Bernstein estimator, first in the interior of $[0,1]$ and then at the edges. \subsection{Within the interval $[0,1]$}\label{subsection:with} The following proposition gives the bias, variance, and $MSE$ of the proposed recursive estimator $f_n(x)$ for $x\in(0, 1)$. \begin{prop}\label{prop:1} $ $\\ Let Assumptions $\left(A1\right)-\left(A4\right)$ hold. For $x\in(0,1)$, we have \begin{enumerate} \item If $a\in\left(0,\frac{2}{9}\alpha\right]$, then \begin{equation} Bias\left[f_n(x)\right]=-m^{-2}_{n} \frac{2}{1-2a\xi}\Delta_2(x)+o\left(m^{-2}_{n}\right). \label{eq:2} \end{equation} If $a\in\left(\frac{2}{9}\alpha,1\right)$, then \begin{equation} Bias\left[f_n(x)\right]=o\left(\sqrt{\gamma_{n}m^{1/2}_{n}}\right). \label{eq:3} \end{equation} \item If $a\in\left[\frac{2}{9}\alpha,1\right)$, then \begin{equation} Var[f_{n}(x)]=C_3\gamma_{n}m^{1/2}_{n}\frac{2}{4-(2\alpha-a)\xi}f(x)\psi(x)+o\left(\gamma_{n}m^{1/2}_{n}\right).
\label{eq:4} \end{equation} If $a\in\left(0,\frac{2}{9}\alpha\right]$, then \begin{equation} Var[f_{n}(x)]=o\left(m_n^{-4}\right). \label{eq:5} \end{equation} \item If $\lim_{n\rightarrow\infty}(n\gamma_{n})>\max\left(2a,(2\alpha-a)/4\right)$, then $\lim_{n\rightarrow\infty}(n\gamma_{n})>2a$ and $\lim_{n\rightarrow\infty}(n\gamma_{n})>(2\alpha-a)/4$; in this case, $1-2a\xi>0$ and $4-(2\alpha-a)\xi>0$, so that (\ref{eq:2}) and (\ref{eq:4}) hold simultaneously. \item If $a\in\left(0,\frac{2}{9}\alpha\right)$, then \begin{eqnarray*} MSE\left[f_{n}(x)\right]=\Delta_2^2(x)m^{-4}_{n}\frac{4}{(1-2a\xi)^2}+o\left(m^{-4}_{n}\right). \end{eqnarray*} If $a=\frac{2}{9}\alpha$, then \begin{eqnarray}\label{eq:MSE:2.9} MSE\left[f_{n}(x)\right]&=&\Delta_2^2(x)m^{-4}_{n}\frac{4}{(1-2a\xi)^2}+C_3f(x)\psi(x)\gamma_{n}m^{1/2}_{n}\frac{2}{4-(2\alpha-a)\xi}\nonumber\\ &&+o\left(m^{-4}_{n}+\gamma_{n}m^{1/2}_{n}\right). \end{eqnarray} If $a\in\left(\frac{2}{9}\alpha,1\right)$, then \begin{eqnarray*} MSE\left[f_{n}(x)\right]&=&C_3f(x)\psi(x)\gamma_{n}m^{1/2}_{n}\frac{2}{4-(2\alpha-a)\xi}+o\left(\gamma_{n}m^{1/2}_{n}\right). \end{eqnarray*} \end{enumerate} \end{prop} The optimal order $\left(m_n\right)$ is chosen so as to minimize the $MSE$, that is, \begin{eqnarray*} m_n=\underset{m_n}{\arg\min}\,MSE\left[f_{n}\left(x\right)\right],\quad \mbox{for}\,\, x\in(0,1). \end{eqnarray*} It then follows from~\eqref{eq:MSE:2.9} that $\left(m_n\right)$ must be equal to \begin{eqnarray*} \left(2^{2/9}\left(1-\frac{4}{9}\xi\right)^{-2/9}\left[\frac{32\Delta_2^2(x)}{C_3f(x)\psi(x)}\right]^{2/9}\gamma_n^{-2/9}\right), \end{eqnarray*} and the corresponding $MSE$ is equal to \begin{eqnarray*} MSE\left[f_n(x)\right]=\frac{9(32C_3^8)^{1/9}}{8}\frac{(\Delta_2(x))^{2/9}(f(x)\psi(x))^{8/9}}{2^{2/9}\left(1-\frac{4}{9}\xi\right)^{10/9}}\gamma_n^\frac{8}{9}+o\left(\gamma_n^\frac{8}{9}\right).
\end{eqnarray*} Moreover, since the optimal stepsize is obtained by minimizing the $MSE$ of the proposed estimator $f_n\left(x\right)$, $\left(\gamma_n\right)$ must be chosen in $\mathcal{GS}\left(-1\right)$. Considering the case $(\gamma_n)=\left(\gamma_0n^{-1}\right)$, we obtain \begin{eqnarray*} \left(m_n\right)=\left(2^{2/9}\left(\gamma_0-4/9\right)^{-2/9}\left[\frac{32\Delta_2^2(x)}{C_3f(x)\psi(x)}\right]^{2/9}n^{2/9}\right), \end{eqnarray*} and the corresponding $MSE$ is equal to \begin{eqnarray*} MSE\left[f_n(x)\right]=\frac{9(32C_3^8)^{1/9}(\Delta_2(x))^{2/9}(f(x)\psi(x))^{8/9}}{8}\frac{\gamma_0^2}{2^{8/9}\left(\gamma_0-4/9\right)^{10/9}}n^{-8/9}+o\left(n^{-8/9}\right). \end{eqnarray*} \subsection{The edges of the interval $[0,1]$}\label{subsection:edgs} For the cases $x=0,1$, we need the following additional assumption: \begin{description} \item $\left(A'4\right)$ $\quad\lim_{n\rightarrow\infty}(n\gamma_{n})\in(\min\left(2a,(\alpha-a)/2\right),\infty)$. \end{description} The following proposition gives the bias, variance and $MSE$ of $f_{n}(x)$ for $x=0,1$. \begin{prop}\label{prop:2} $ $\\ Let Assumptions $\left(A1\right)-\left(A3\right)$ and $\left(A^{\prime}4\right)$ hold. For $x=0,1$, we have \begin{enumerate} \item If $a\in\left(0,\frac{\alpha}{5}\right]$, then \begin{equation} Bias\left[f_n(x)\right]=-m^{-2}_{n} \frac{2}{1-2a\xi}\Delta_2(x)+o\left(m^{-2}_{n}\right). \label{eq:2'} \end{equation} If $a\in\left(\frac{\alpha}{5},1\right)$, then \begin{equation} Bias\left[f_n(x)\right]=o\left(\sqrt{\gamma_{n}m_{n}}\right). \label{eq:3'} \end{equation} \item If $a\in\left[\frac{\alpha}{5},1\right)$, then \begin{equation} Var[f_{n}(x)]=\frac{5}{2}\gamma_{n}m_{n}\frac{1}{2-(\alpha-a)\xi}f(x)+o\left(\gamma_{n}m_{n}\right). \label{eq:4'} \end{equation} If $a\in\left(0,\frac{\alpha}{5}\right)$, then \begin{equation} Var[f_{n}(x)]=o\left(m_n^{-4}\right).
\label{eq:5'} \end{equation} \item If $\lim_{n\rightarrow\infty}(n\gamma_{n})>\max\left(2a,(\alpha-a)/2\right)$, then $\lim_{n\rightarrow\infty}(n\gamma_{n})>2a$ and $\lim_{n\rightarrow\infty}(n\gamma_{n})>(\alpha-a)/2$; in this case, $1-2a\xi>0$ and $2-(\alpha-a)\xi>0$, so that (\ref{eq:2'}) and (\ref{eq:4'}) hold simultaneously. \item If $a\in\left(0,\frac{\alpha}{5}\right)$, then \begin{eqnarray*} MSE\left[f_{n}(x)\right]=\Delta_2^2(x)m^{-4}_{n}\frac{4}{(1-2a\xi)^2}+o\left(m^{-4}_{n}\right). \end{eqnarray*} If $a=\frac{\alpha}{5}$, then \begin{eqnarray}\label{eq:MSE:1.5} MSE\left[f_{n}(x)\right]&=&\Delta_2^2(x)m^{-4}_{n}\frac{4}{(1-2a\xi)^2}+\frac{5}{2}f(x)\gamma_{n}m_{n}\frac{1}{2-(\alpha-a)\xi}\nonumber\\ &&+o\left(m^{-4}_{n}+\gamma_{n}m_{n}\right). \end{eqnarray} If $a\in\left(\frac{\alpha}{5},1\right)$, then \begin{eqnarray*} MSE\left[f_{n}(x)\right]&=&\frac{5}{2}f(x)\gamma_{n}m_{n}\frac{1}{2-(\alpha-a)\xi}+o\left(\gamma_{n}m_{n}\right). \end{eqnarray*} \end{enumerate} \end{prop} The optimal order $\left(m_n\right)$ is chosen so as to minimize the $MSE$, that is, \begin{eqnarray*} m_n=\underset{m_n}{\arg\min}\,MSE\left[f_{n}\left(x\right)\right],\quad \mbox{for}\,\, x=0,1. \end{eqnarray*} It then follows from~\eqref{eq:MSE:1.5} that $\left(m_n\right)$ must be equal to \begin{eqnarray*} \left(2^{1/5}\left(1-\frac{2}{5}\xi\right)^{-1/5}\left[\frac{32\Delta_2^2(x)}{5f(x)}\right]^{1/5}\gamma_n^{-1/5}\right), \end{eqnarray*} and the corresponding $MSE$ is equal to \begin{eqnarray*} MSE\left[f_n(x)\right]=\frac{5^{8/5}32^{1/5}}{8}\frac{\left(\Delta_2(x)\right)^{2/5}\left(f(x)\right)^{4/5}}{2^{4/5}\left(1-\frac{2}{5}\xi\right)^{6/5}}\gamma_n^{4/5}+o\left(\gamma_n^{4/5}\right). \end{eqnarray*} Moreover, since the optimal stepsize is obtained by minimizing the $MSE$ of the proposed estimator $f_n\left(x\right)$, $\left(\gamma_n\right)$ must be chosen in $\mathcal{GS}\left(-1\right)$.
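With the choice $(\gamma_n)=(\gamma_0 n^{-1})$, for which $\xi=1/\gamma_0$, the leading $MSE$ constants above are proportional to $\gamma_0^2/\left(\gamma_0-4/9\right)^{10/9}$ for $x\in(0,1)$ and to $\gamma_0^2/\left(\gamma_0-2/5\right)^{6/5}$ for $x=0,1$. A simple numerical check (an illustrative sketch, not part of the formal results) confirms that both factors are minimized at $\gamma_0=1$:

```python
def argmin_on_grid(func, lo, hi, steps=200000):
    # crude grid search for the minimizer of func on [lo, hi]
    best_x, best_v = lo, func(lo)
    for i in range(1, steps + 1):
        x = lo + (hi - lo) * i / steps
        v = func(x)
        if v < best_v:
            best_x, best_v = x, v
    return best_x

# leading MSE factors as functions of gamma_0 (interior and edge cases)
g_interior = lambda g: g**2 / (g - 4.0 / 9.0) ** (10.0 / 9.0)
g_edge = lambda g: g**2 / (g - 2.0 / 5.0) ** (6.0 / 5.0)

m_interior = argmin_on_grid(g_interior, 0.5, 3.0)
m_edge = argmin_on_grid(g_edge, 0.5, 3.0)
print(m_interior, m_edge)  # both close to 1
```

This is in line with the first-order conditions $2/\gamma_0=(10/9)/(\gamma_0-4/9)$ and $2/\gamma_0=(6/5)/(\gamma_0-2/5)$, both solved by $\gamma_0=1$.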
Considering the case $(\gamma_n)=\left(\gamma_0n^{-1}\right)$, we obtain the optimal order \begin{eqnarray*} \left(m_n\right)=\left(2^{1/5}\left(\gamma_0-2/5\right)^{-1/5}\left[\frac{32\Delta_2^2(x)}{5f(x)}\right]^{1/5}n^{1/5}\right), \end{eqnarray*} and the corresponding $MSE$ is equal to \begin{eqnarray*} MSE\left[f_n(x)\right]=\frac{5^{8/5}32^{1/5}\left(\Delta_2(x)\right)^{2/5}\left(f(x)\right)^{4/5}}{8}\frac{\gamma_0^2}{2^{4/5}\left(\gamma_0-2/5\right)^{6/5}}n^{-4/5}+o\left(n^{-4/5}\right). \end{eqnarray*} In the next subsection we give the $MISE$ of the proposed recursive estimator $f_n$ introduced in~\eqref{eq:rec:density} for $x\in\left(0,1\right)$. \subsection{$MISE$ of the recursive estimator $f_n$}\label{section:MISE} The following proposition gives the $MISE$ of the proposed recursive estimator $f_n$. \begin{prop}\label{prop:3} $ $\\ Let Assumptions $\left(A1\right)-\left(A4\right)$ hold. We have \begin{enumerate} \item If $a\in\left(0,\frac{2}{9}\alpha\right)$, then \begin{eqnarray*} MISE\left[f_{n}\right]=C_2m^{-4}_{n}\frac{4}{(1-2a\xi)^2}+o\left(m^{-4}_{n}\right). \end{eqnarray*} \item If $a=\frac{2}{9}\alpha$, then \begin{eqnarray*} MISE\left[f_{n}\right]&=&C_2m^{-4}_{n}\frac{4}{(1-2a\xi)^2}+C_1C_3\gamma_{n}m^{1/2}_{n}\frac{2}{4-(2\alpha-a)\xi}\\ &&+o\left(m^{-4}_{n}+\gamma_{n}m^{1/2}_{n}\right). \end{eqnarray*} \item If $a\in\left(\frac{2}{9}\alpha,1\right)$, then \begin{eqnarray*} MISE\left[f_{n}\right]&=&C_1C_3\gamma_{n}m^{1/2}_{n}\frac{2}{4-(2\alpha-a)\xi}+o\left(\gamma_{n}m^{1/2}_{n}\right). \end{eqnarray*} \end{enumerate} \end{prop} The following corollary, a consequence of the previous proposition, gives the optimal order $\left(m_n\right)$ of the estimator (\ref{eq:1}) and the corresponding $MISE$. \begin{coro}\label{coro:1} $ $\\ Let Assumptions $\left(A1\right)-\left(A4\right)$ hold.
To minimize the $MISE$ of $f_n$, the stepsize $(\gamma_n)$ must be chosen in $\mathcal{GS}(-1)$ and $(m_n)$ must be in $\mathcal{GS}(2/9)$, given by \begin{eqnarray*} \left(2^{2/9}\left(1-\frac{4}{9}\xi\right)^{-2/9}\left[\frac{32C_2}{C_1C_3}\right]^{2/9}\gamma_n^{-2/9}\right), \end{eqnarray*} and the corresponding $MISE$ is equal to \begin{eqnarray*} MISE\left[f_{n}\right]=\frac{9(32C_1^8C_3^8C_2)^{1/9}}{8}\frac{1}{2^{8/9}\left(1-\frac{4}{9}\xi\right)^{10/9}}\gamma_n^\frac{8}{9}+o\left(\gamma_n^\frac{8}{9}\right). \end{eqnarray*} Moreover, in the case when $(\gamma_n)=\left(\gamma_0n^{-1}\right)$, the optimal order $\left(m_n\right)$ must be equal to \begin{eqnarray} \left(2^{2/9}\left(\gamma_0-4/9\right)^{-2/9}\left[\frac{32C_2}{C_1C_3}\right]^{2/9}n^{2/9}\right), \label{eq:30} \end{eqnarray} and the corresponding $MISE$ is given by \begin{eqnarray} MISE\left[f_{n}\right]=\frac{9\left(8C_1^8C_3^8C_2\right)^{1/9}}{8}\frac{\gamma_0^2}{2^{6/9}\left(\gamma_0-4/9\right)^{10/9}}n^{-8/9}+o\left(n^{-8/9}\right). \label{eq:31} \end{eqnarray} \end{coro} \begin{rem} The minimum of $\frac{\gamma_0^2}{\left(\gamma_0-4/9\right)^{10/9}}$ is reached for $\gamma_0=1$. Moreover, to minimize the variance of $f_n$, we should choose $\gamma_0=1-\frac{a}{2}$ with $a=2/9$ for $x\in(0,1)$, and $\gamma_0=1-a$ with $a=1/5$ for $x=0,1$. Therefore, in the application section, we will consider the stepsizes $(\gamma_n)=(n^{-1})$, $(\gamma_n)=\left(\frac{8}{9}n^{-1}\right)$ and $(\gamma_n)=\left(\frac{4}{5}n^{-1}\right)$. \end{rem} Let us now state the following theorem, which gives the weak convergence rate of the estimator $f_n$ defined in (\ref{eq:1}). \begin{theor}[Weak pointwise convergence rate]\label{theo:1} $ $\\ Let Assumptions $\left(A1\right)-\left(A4\right)$ hold.
For $x\in(0,1)$, we have \\ \begin{enumerate} \item If $\gamma^{-1/2}_{n}m^{-9/4}_{n}\rightarrow c$ for some constant $c\geq 0$, then \\ \begin{eqnarray*} \gamma^{-1/2}_{n}m^{-1/4}_{n}(f_{n}(x)-f(x))\stackrel{\mathcal{D}}{\rightarrow}\mathcal{N}\left(-\frac{2c}{1-2a\xi}\Delta_2(x),\frac{2}{4-(2\alpha-a)\xi}C_3f(x)\psi(x)\right). \end{eqnarray*} \item If $\gamma^{-1/2}_{n} m^{-9/4}_{n}\rightarrow \infty $, then \\ \begin{eqnarray*} m_{n}^{-2}\left(f_{n}(x)-f(x)\right)\stackrel{\mathbb{P}}{\rightarrow}-\frac{2}{1-2a\xi}\Delta_2(x), \end{eqnarray*} \end{enumerate} where $\stackrel{\mathcal{D}}{\rightarrow}$ denotes convergence in distribution, $\mathcal{N}$ the Gaussian distribution and $\stackrel{\mathbb{P}}{\rightarrow}$ convergence in probability. \end{theor} \begin{rem} When the order $\left(m_n\right)$ is chosen such that $\lim_{n\to \infty}\gamma^{-1/2}_{n}m^{-9/4}_{n}=0$ and the stepsize such that $\lim_{n\to \infty}n\gamma_n=\gamma_0$, the proposed recursive estimator satisfies the following central limit theorem: \begin{eqnarray*} n^{1/2}m^{-1/4}_{n}(f_{n}(x)-f(x))\stackrel{\mathcal{D}}{\rightarrow}\mathcal{N}\left(0,\frac{\gamma_0^2}{2\gamma_0-8/9}C_3f(x)\psi(x)\right). \end{eqnarray*} \end{rem} \subsection{Results on some classical Bernstein density estimators} In the next paragraphs, we recall some results on Vitale's Bernstein density estimator $\widetilde{f}_n$ given in~\eqref{eq:26} and its bias-corrected variants. \subsubsection{Vitale's Bernstein density estimator $\widetilde{f}_n$} Under classical assumptions on the density, namely that $f$ is continuous and admits two continuous and bounded derivatives, we have, for $x\in[0,1]$, \begin{equation*} Bias\left[\widetilde{f}_n(x)\right]=\frac{\Delta_1(x)}{m}+o\left(m^{-1}\right), \quad\text{uniformly in}\, x\in[0,1].
\end{equation*} \begin{equation*} Var[\widetilde{f}_{n}(x)]=\begin{cases} \frac{m^{1/2}}{n}f(x)\psi(x)+o_x\left(\frac{m^{1/2}}{n}\right)& \quad\text{for }x\in(0,1),\\ \frac{m}{n}f(x)+o_x\left(\frac {m}{n}\right)&\quad\text{for }x=0, 1. \end{cases} \end{equation*} Moreover, we have \begin{eqnarray*} MISE\left[\widetilde{f}_{n}\right]&=&\frac{m^{1/2}C_1}{n}+\frac{C_4}{m^2}+o\left(m^{1/2}{n}^{-1}\right)+o\left(m^{-2}\right). \end{eqnarray*} To minimize the $MISE$ of $\widetilde{f}_n$, the order $(m_n)$ must be equal to \begin{eqnarray} \left(\left[\frac{4C_4}{C_1}\right]^{2/5}n^{2/5}\right), \label{eq:27} \end{eqnarray} and the corresponding $MISE$ is \begin{eqnarray*} MISE\left[\widetilde{f}_{n}\right]=\frac{5\left(C_1^4C_4\right)^{1/5}}{4}n^{-4/5}+o\left(n^{-4/5}\right). \label{eq:28} \end{eqnarray*} \begin{rem} If we let $h=m^{-1}$ be the bandwidth of Vitale's estimator $\widetilde{f}_n$, it is clear that the bias of $\widetilde{f}_n$ is $O\left(h\right)$, which is larger than that of classical kernel density estimators, which is $O(h^2)$ except near the boundaries. To reduce this bias, \cite{Leb10} suggested a new Bernstein estimator using a bias-correction method. This methodology was first used by \cite{Pol95} in the context of spectral density estimation and is linked with the work of \cite{Sch71} and \cite{Sch77} on bias reduction in estimation. \end{rem} \subsubsection{Bias correction for the Bernstein density estimator: $\widetilde{f}_{n,m,m/2}$} The bias-corrected Bernstein density estimator proposed by Leblanc~\cite{Leb10} is defined by \begin{eqnarray} \widetilde{f}_{n,m,m/2}(x)=2\widetilde{f}_{n,m}(x)-\widetilde{f}_{n,m/2}(x), \quad x\in[0,1], \label{eq:32} \end{eqnarray} where $\widetilde{f}_{n,m}$ and $\widetilde{f}_{n,m/2}$ are the Bernstein density estimators introduced by Vitale, defined in (\ref{eq:26}), with orders $m$ and $m/2$ respectively. Let us now recall the characteristics of the estimator $\widetilde{f}_{n,m,m/2}$.
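Both estimators can be implemented in a few lines of code (an illustrative sketch with our own function names; $m$ is taken even so that $\widetilde{f}_{n,m/2}$ is well defined):

```python
import math
import random

def bernstein(k, m, x):
    # b_k(m, x) = C(m, k) x^k (1 - x)^(m - k)
    return math.comb(m, k) * x**k * (1.0 - x)**(m - k)

def ecdf(sample, t):
    # empirical distribution function F_n(t)
    return sum(1 for s in sample if s <= t) / len(sample)

def vitale(sample, m, x):
    # f~_{n,m}(x) = m * sum_k [F_n((k+1)/m) - F_n(k/m)] b_k(m-1, x)
    return m * sum(
        (ecdf(sample, (k + 1) / m) - ecdf(sample, k / m)) * bernstein(k, m - 1, x)
        for k in range(m)
    )

def leblanc(sample, m, x):
    # f~_{n,m,m/2}(x) = 2 f~_{n,m}(x) - f~_{n,m/2}(x)
    return 2.0 * vitale(sample, m, x) - vitale(sample, m // 2, x)

random.seed(1)
sample = [random.random() for _ in range(1000)]  # uniform density f = 1 on [0, 1]
v_mid = vitale(sample, 20, 0.5)
l_mid = leblanc(sample, 20, 0.5)
print(v_mid, l_mid)  # both close to 1
```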
Under Assumption $(A1)$, we have \begin{equation*} Bias[\widetilde{f}_{n,m,m/2}(x)]=-2\frac{\Delta_2(x)}{m^2}+o\left(m^{-2}\right),\quad\text{uniformly in}\, x\in[0,1], \end{equation*} \begin{equation*} Var[\widetilde{f}_{n,m,m/2}(x)]=\begin{cases} C_3\frac{m^{1/2}}{n}f(x)\psi(x)+o_x\left(\frac{m^{1/2}}{n}\right)& \quad\text{for }x\in(0,1),\\ \frac{5}{2}\frac{m}{n}f(x)+o_x\left(\frac {m}{n}\right)&\quad\text{for }x=0, 1, \end{cases} \end{equation*} and then \begin{eqnarray*} MISE\left[\widetilde{f}_{n,m,m/2}\right]=\frac{C_1C_3m^{1/2}}{n}+\frac{4C_2}{m^4}+o\left(m^{1/2}{n}^{-1}\right)+o\left(m^{-4}\right). \end{eqnarray*} To minimize the $MISE$ of $\widetilde{f}_{n,m,m/2}$, the order $(m_n)$ must be equal to \begin{eqnarray} \left(\left[\frac{32C_2}{C_1C_3}\right]^{2/9}n^{2/9}\right), \label{eq:33} \end{eqnarray} and the corresponding $MISE$ is \begin{eqnarray*} MISE\left[\widetilde{f}_{n,m,m/2}\right]=\frac{9\left(32C_1^8C_2C_3^8\right)^{1/9}}{8}n^{-8/9}+o\left(n^{-8/9}\right). \label{eq:34} \end{eqnarray*} \cite{Kak14} generalized the estimator $\widetilde{f}_{n,m,m/2}$ proposed by Leblanc and defined in (\ref{eq:32}) as \begin{eqnarray} \widetilde{f}_{n,m,m/b}(x)=\frac{b}{b-1}\widetilde{f}_{n,m}(x)-\frac{1}{b-1}\widetilde{f}_{n,m/b}(x), \quad x\in[0,1], \label{eq:35} \end{eqnarray} where $b=2,3, \ldots$ and $\widetilde{f}_{n,m}$ and $\widetilde{f}_{n,m/b}$ are Vitale's density estimators defined in (\ref{eq:26}) with orders $m$ and $m/b$ respectively. Under Assumption $\left(A1\right)$, we have \begin{equation*} Bias[\widetilde{f}_{n,m,m/b}(x)]=-\frac{b}{m^2}\Delta_2(x)+o\left(m^{-2}\right), \quad\text{uniformly in}\, x\in[0,1].
\end{equation*} \begin{equation*} Var[\widetilde{f}_{n,m,m/b}(x)]=\begin{cases} \lambda_1(b)\frac{m^{1/2}}{n}f(x)\psi(x)+o_x\left(\frac{m^{1/2}}{n}\right)& \quad\text{for }x\in(0,1),\\ \lambda_2(b)\frac{m}{n}f(x)+o_x\left(\frac {m}{n}\right)&\quad\text{for }x=0, 1, \end{cases} \end{equation*} where \begin{eqnarray*} \lambda_1(b)&=&\frac{1}{(1-b)^2}\left\{b^2+b^{-1/2}-2b\left(\frac{2}{b+1}\right)^{1/2}\right\},\\ \lambda_2(b)&=&\frac{1}{(1-b)^2}\left\{b^2+b^{-1}-2\right\}. \end{eqnarray*} Moreover, we have \begin{eqnarray*} MISE\left[\widetilde{f}_{n,m,m/b}\right]=\lambda_1(b)C_1\frac{m^{1/2}}{n}+\frac{b^2C_2}{m^4}+o\left(m^{1/2}{n}^{-1}\right)+o\left(m^{-4}\right). \end{eqnarray*} To minimize the $MISE$ of $\widetilde{f}_{n,m,m/b}$, the order $(m_n)$ must be equal to \begin{eqnarray} \left(\left[\frac{b^2}{\lambda_1(b)}\frac{8C_2}{C_1}\right]^{2/9}n^{2/9}\right), \label{eq:36} \end{eqnarray} and the corresponding $MISE$ is \begin{eqnarray} MISE\left[\widetilde{f}_{n,m,m/b}\right]=\left(b\lambda^4_1(b)\right)^{2/9}\frac{9\left(8C_1^8C_2\right)^{1/9}}{8}n^{-8/9}+o\left(n^{-8/9}\right). \label{eq:37} \end{eqnarray} \begin{rem} Equation (\ref{eq:37}) indicates that $b=2$ is the best choice in terms of the $MISE$ for the estimator $\widetilde{f}_{n,m,m/b}$, since the factor $\left(b\lambda^4_1(b)\right)^{2/9}$ is increasing in $b=2,3, \ldots$.\\ This bias-correction method reduces the bias of the Bernstein estimator from $O\left(m^{-1}\right)$ to $O\left(m^{-2}\right)$, but it loses non-negativity. \cite{Ter80} originally developed a multiplicative bias correction, formulated as an additive correction to the logarithm of the estimator, which preserves non-negativity. This method was adopted by \cite{Hir10} for the beta kernel estimator introduced by \cite{Che99}.
\end{rem} \subsubsection{Multiplicative bias-corrected Bernstein density estimator $\widetilde{f}_{n,m,b,\varepsilon}$} This estimator was considered by~\cite{Kak14}, who applied a multiplicative bias-correction method to the Bernstein estimator: \begin{eqnarray} \widetilde{f}_{n,m,b,\varepsilon}(x)=\left\{\widetilde{f}_{n,m}(x)\right\}^{b/(b-1)}\left\{\widetilde{f}_{n,m/b}(x)+\varepsilon\right\}^{-1/(b-1)}, \quad x\in[0,1], \label{eq:38} \end{eqnarray} for some $\varepsilon=\varepsilon(m)>0$ converging to zero at a suitable rate. Under Assumption $(A1)$, for $x\in[0,1]$ such that $f(x)>0$, with $m=O\left(n^\eta\right)$ and $\varepsilon\approx m^{-\tau}$, where $\eta\in(0,1)$ and $\tau >2$, we have \begin{eqnarray*} \mathbb{E}[\widetilde{f}_{n,m,b,\varepsilon}(x)]-f(x)&=&-\frac{b}{m^2}\left\{-\frac{\Delta^2_1(x)}{2f(x)}+\Delta_2(x)\right\}\\ &&+O\left(Var[\widetilde{f}_{n,m}(x)]+Var[\widetilde{f}_{n,m/b}(x)]\right)+o\left(m^{-2}\right), \end{eqnarray*} and \begin{equation*} Var[\widetilde{f}_{n,m,b,\varepsilon}(x)]=Var[\widetilde{f}_{n,m,m/b}(x)]+o\left(Var[\widetilde{f}_{n,m}(x)]+Var[\widetilde{f}_{n,m/b}(x)]+m^{-4}\right). \end{equation*} Moreover, we have \begin{eqnarray*} MISE\left[\widetilde{f}_{n,m,b,\varepsilon}\right]=\lambda_1(b)C_1\frac{m^{1/2}}{n}+\frac{b^2C_5}{m^4}+o\left(m^{1/2}{n}^{-1}\right)+o\left(m^{-4}\right). \end{eqnarray*} To minimize the $MISE$ of $\widetilde{f}_{n,m,b,\varepsilon}$, the order $\left(m_n\right)$ must be equal to \begin{eqnarray} \left(\left[\frac{b^2}{\lambda_1(b)}\frac{8C_5}{C_1}\right]^{2/9}n^{2/9}\right), \label{eq:39} \end{eqnarray} and the corresponding $MISE$ is equal to \begin{eqnarray*} MISE\left[\widetilde{f}_{n,m,b,\varepsilon}\right]=\left(b\lambda^4_1(b)\right)^{2/9}\frac{9\left(8C_1^8C_5\right)^{1/9}}{8}n^{-8/9}+o\left(n^{-8/9}\right). \label{eq:40} \end{eqnarray*} \begin{rem} Note that the estimator $\widetilde{f}_{n,m,b,\varepsilon}$ retains non-negativity, but it is not a genuine density.
In fact, $\widetilde{f}_{n,m,b,\varepsilon}$ does not generally integrate to unity. To solve this problem, \cite{Kak14} proposed the normalized bias-corrected Bernstein estimator. \end{rem} \subsubsection{Normalized bias-corrected Bernstein estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$} The normalized bias-corrected Bernstein estimator is given by \begin{eqnarray} \widetilde{f}^N_{n,m,b,\varepsilon}(x)=\frac{\widetilde{f}_{n,m,b,\varepsilon}(x)}{\displaystyle\int_{0}^1\widetilde{f}_{n,m,b,\varepsilon}(y)dy}, \quad x\in[0,1]. \label{eq:41} \end{eqnarray} Under Assumption $(A1)$, for $x\in[0,1]$ such that $f(x)>0$, with $m=O\left(n^\eta\right)$ and $\varepsilon\approx m^{-\tau}$, where $\eta\in(0,1)$ and $\tau >2$, we have \begin{eqnarray*} \mathbb{E}[\widetilde{f}^N_{n,m,b,\varepsilon}(x)]-f(x)&=&-\frac{b}{m^2}\left\{-\frac{\Delta^2_1(x)}{2f(x)}+\Delta_2(x)+f(x)\displaystyle\int_0^1\frac{\Delta^2_1(y)}{2f(y)}dy\right\}\\ &&+O\left(Var[\widetilde{f}_{n,m}(x)]+Var[\widetilde{f}_{n,m/b}(x)]+n^{-1}m^{1/2}\right)+o\left(m^{-2}\right), \end{eqnarray*} and \begin{equation*} Var[\widetilde{f}^N_{n,m,b,\varepsilon}(x)]=Var[\widetilde{f}_{n,m,m/b}(x)]+o\left(Var[\widetilde{f}_{n,m}(x)]+Var[\widetilde{f}_{n,m/b}(x)]+n^{-1}m^{1/2}+m^{-4}\right), \end{equation*} and then we have \begin{eqnarray*} MISE\left[\widetilde{f}^N_{n,m,b,\varepsilon}\right]=\lambda_1(b)C_1\frac{m^{1/2}}{n}+\frac{b^2C_6}{m^4}+o\left(m^{1/2}{n}^{-1}\right)+o\left(m^{-4}\right). \end{eqnarray*} To minimize the $MISE$ of $\widetilde{f}^N_{n,m,b,\varepsilon}$, the order $\left(m_n\right)$ must be equal to \begin{eqnarray} \left(\left[\frac{b^2}{\lambda_1(b)}\frac{8C_6}{C_1}\right]^{2/9}n^{2/9}\right), \label{eq:42} \end{eqnarray} and the corresponding $MISE$ is equal to \begin{eqnarray*} MISE\left[\widetilde{f}^N_{n,m,b,\varepsilon}\right]=\left(b\lambda^4_1(b)\right)^{2/9}\frac{9\left(8C_1^8C_6\right)^{1/9}}{8}n^{-8/9}+o\left(n^{-8/9}\right).
\label{eq:43} \end{eqnarray*} \section{Applications}\label{section:app} When using the Bernstein polynomials, we must work with a density supported on $[0,1]$. For this purpose, we apply a suitable transformation in each of the following cases: \begin{enumerate} \item Suppose that $X$ is concentrated on a finite support $[a,b]$. Then we work with the sample values $Y_1,\ldots, Y_n$, where $Y_i=\frac{X_i-a}{b-a}$. Denoting by $g_n(x)$ the estimated density function of $Y_1, \ldots, Y_n$, we compute the estimated density $f_n$ of $X$ as \begin{eqnarray*} f_n(x)=\frac{1}{b-a}g_n\left(\frac{x-a}{b-a}\right). \end{eqnarray*} \item For density functions supported on $(-\infty,+\infty)$, we can use the transformed sample $Y_i=\frac{1}{2}+\frac{1}{\pi}\arctan(X_i)$, which maps the range to the interval $(0,1)$. Hence \begin{eqnarray*} f_n(x)=\frac{1}{\pi(1+x^2)}g_n\left(\frac{1}{2}+\frac{1}{\pi}\arctan(x)\right). \end{eqnarray*} \item For the support $[0,\infty)$, we can use the transformed sample $Y_i=\frac{X_i}{X_i+1}$, which maps the range to the interval $(0, 1)$. Hence \begin{eqnarray*} f_n(x)=\frac{1}{(1+x)^2}g_n\left(\frac{x}{1+x}\right). \end{eqnarray*} \end{enumerate} \paragraph{Computational cost} As mentioned in the introduction, the advantage of the proposed recursive estimators over their non-recursive versions is that their update, from a sample of size $n$ to one of size $n+1$, requires considerably fewer computations.
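At a single evaluation point, this one-step update can be sketched as follows (an illustrative sketch; the numerical values of the previous estimate, the new observation and the sample size are hypothetical):

```python
import math

def bernstein(k, m, x):
    # b_k(m, x) = C(m, k) x^k (1 - x)^(m - k)
    return math.comb(m, k) * x**k * (1.0 - x)**(m - k)

def one_step_update(f_prev, x, x_new, n_new, m):
    # update f_n -> f_{n+1} at the point x from the single new observation
    # x_new, with stepsize gamma_{n+1} = 1 / (n + 1); the cost is O(1) per
    # evaluation point, whereas recomputing a non-recursive estimator from
    # scratch would cost O(n + 1)
    gamma = 1.0 / n_new

    def T(order):
        k = min(int(order * x_new), order - 1)
        return order * bernstein(k, order - 1, x)

    z = 2.0 * T(m) - T(m // 2)  # Z_{n+1}(x) = 2 T_{n+1,m}(x) - T_{n+1,m/2}(x)
    return (1.0 - gamma) * f_prev + gamma * z

# hypothetical values: previous estimate 1.02 at x = 0.5, new observation 0.37
f_new = one_step_update(f_prev=1.02, x=0.5, x_new=0.37, n_new=501, m=20)
print(f_new)
```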
This property can be generalized: one can check that it follows from~(\ref{eq:rec:density}) that, for all $n_1\in \left\{0,\ldots,n-1\right\}$, \begin{eqnarray*} f_{n}\left(x\right)&=&\prod_{j=n_1+1}^n\left(1-\gamma_{j}\right)f_{n_1}\left(x\right)+\sum_{k=n_1+1}^{n-1}\prod_{j=k+1}^n\left(1-\gamma_{j}\right)\gamma_{k}Z_k\left(x\right)+\gamma_nZ_n\left(x\right),\\ &=&\alpha_1f_{n_1}\left(x\right)+\sum_{k=n_1+1}^{n-1}\beta_{k}Z_k\left(x\right)+\gamma_nZ_n\left(x\right), \end{eqnarray*} where $\alpha_1=\prod_{j=n_1+1}^n\left(1-\gamma_{j}\right)$ and $\beta_k=\gamma_{k}\prod_{j=k+1}^n\left(1-\gamma_{j}\right)$. {It is clear that the proposed estimator can thus be viewed as a linear combination of the earlier estimate $f_{n_1}$ and the contributions of the new observations, which considerably improves the computational cost. Moreover, in order to give some comparative elements with respect to Vitale's estimator defined in~\eqref{eq:26}, including computational costs, we considered $500$ samples of size $500$ generated from the beta distribution $\mathcal{B}\left(3,5\right)$, and then supposed that we receive an additional $500$ samples of size $500$ generated from the same beta distribution $\mathcal{B}\left(3,5\right)$. Running the two methods, the computation time using our proposed recursive estimator defined by the algorithm~\eqref{eq:rec:density}, with stepsize $\left(\gamma_n\right)=\left(n^{-1}\right)$ and the order $\left(m_n\right)$ chosen by minimizing the Least Squares Cross-Validation (LSCV) criterion described below, was roughly $25$ minutes on the author's workstation, while the running time using the non-recursive estimator~\eqref{eq:26} was roughly more than $4$ hours on the author's workstation.
} The aim of this paragraph is to compare the performance of Vitale's estimator $\widetilde{f}_n$ defined in (\ref{eq:26}), Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ given in~(\ref{eq:32}), the generalized estimator $\widetilde{f}_{n,m,m/b}$ defined in~\eqref{eq:35}, the multiplicative bias-corrected Bernstein estimator $\widetilde{f}_{n,m,b,\varepsilon}$ given in~(\ref{eq:38}) and the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ given in (\ref{eq:41}) with that of the proposed estimator (\ref{eq:1}). \begin{itemize} \item [(1)] When applying $f_n$, one needs to choose two quantities: \begin{itemize} \item [$\bullet$] The stepsize $(\gamma_n)$ is chosen to be equal to $\left(\nu n^{-1}\right)$, with $\nu \in\left\{\frac{4}{5},\frac{8}{9},1\right\}$. \item [$\bullet$] The order $(m_n)$ is chosen to be equal to (\ref{eq:30}). \end{itemize} \item [(2)] When applying $\widetilde{f}_n$, one needs to choose the order $\left(m_n\right)$ to be equal to (\ref{eq:27}). \item [(3)] When applying $\widetilde{f}_{n,m,m/2}$, one needs to choose the order $\left(m_n\right)$ to be equal to (\ref{eq:33}). \item[(4)] When applying $\widetilde{f}_{n,m,m/b}$, one needs to choose the order $\left(m_n\right)$ to be equal to (\ref{eq:36}) and $b=3,4$. \item [(5)] When applying $\widetilde{f}_{n,m,b,\varepsilon}$, one needs to choose the order $\left(m_n\right)$ to be equal to (\ref{eq:39}), $b=2,3,4$ and $\varepsilon=0.00001$. \item[(6)] When applying $\widetilde{f}^N_{n,m,b,\varepsilon}$, one needs to choose the order $\left(m_n\right)$ to be equal to (\ref{eq:42}), $b=2,3,4$ and $\varepsilon=0.00001$.
\end{itemize} \subsection{Simulations}\label{subsection:simu} We consider the following ten density functions: \begin{description} \item(a) the beta density $\mathcal{B}(3,5)$, $f(x)=\frac{x^2(1-x)^4}{B(3,5)}$, \item(b) the beta density $\mathcal{B}(1,6)$, $f(x)=\frac{(1-x)^5}{B(1,6)}$, \item(c) the beta density $\mathcal{B}(3,1)$, $f(x)=\frac{x^2}{B(3,1)}$, \item(d) the beta mixture density $1/2\mathcal{B}(3,9)+1/2\mathcal{B}(9,3)$, $f(x)=0.5\frac{x^2(1-x)^8}{B(3,9)}+0.5\frac{x^8(1-x)^2}{B(9,3)}$, \item(e) the beta mixture density $1/2\mathcal{B}(3,1)+1/2\mathcal{B}(10,10)$, $f(x)=0.5\frac{x^2}{B(3,1)}+0.5\frac{x^9(1-x)^9}{B(10,10)}$, \item (f) the beta mixture density $1/2\mathcal{B}(1,6)+1/2\mathcal{B}(3,5)$, $f(x)=0.5\frac{(1-x)^5}{B(1,6)}+0.5\frac{x^2(1-x)^4}{B(3,5)}$, \item (g) the beta mixture density $1/2\mathcal{B}(2,1)+1/2\mathcal{B}(1,4)$, $f(x)=0.5\frac{x}{B(2,1)}+0.5\frac{(1-x)^3}{B(1,4)}$, \item (h) the truncated exponential density $\mathcal{E}_{[0,1]}(1/0.8)$, $f(x)=\frac{\exp(-x/0.8)}{0.8\left\{1-\exp(-1/0.8)\right\}}$, \item(i) the truncated normal density $\mathcal{N}_{[0,1]}(0,1)$, $f(x)=\frac{\exp(-x^2/2)}{\displaystyle\int_0^1\exp(-t^2/2)dt}$, \item(j) the truncated normal mixture density $1/4\mathcal{N}_{[0,1]}(2,1)+3/4\mathcal{N}_{[0,1]}(-3,1)$,\\ $f(x)=0.25\frac{\exp(-(x-2)^2/2)}{\displaystyle\int_0^1\exp(-(t-2)^2/2)dt}+0.75\frac{\exp(-(x+3)^2/2)}{\displaystyle\int_0^1\exp(-(t+3)^2/2)dt}$. \end{description} For each density function and each sample size $n$, we approximate the average integrated squared error ($ISE$) of the estimator using $N=500$ trials of sample size $n$: $\overline{ISE}=\frac{1}{N}\displaystyle\sum_{k=1}^{N}ISE\left[\hat{f}_k\right]$, where $\hat{f}_k$ is the estimator computed from the $k$th sample and $ISE[\hat{f}]=\displaystyle\int_{0}^1\left\{\hat{f}(x)-f(x)\right\}^2dx$.
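This Monte Carlo approximation of $\overline{ISE}$ can be sketched as follows (an illustrative small-scale sketch for density (a), using Vitale's estimator~\eqref{eq:26} with a fixed order $m$ and far fewer trials than the $N=500$ used in the tables):

```python
import math
import random

def bernstein(k, m, x):
    return math.comb(m, k) * x**k * (1.0 - x)**(m - k)

def vitale(sample, m, x):
    # f~_n(x) = m * sum_k [F_n((k+1)/m) - F_n(k/m)] b_k(m-1, x);
    # the increments of F_n are simply the bin proportions of the sample
    n = len(sample)
    counts = [0] * m
    for s in sample:
        counts[min(int(s * m), m - 1)] += 1
    return m * sum((c / n) * bernstein(k, m - 1, x) for k, c in enumerate(counts))

def ise(estimate, f, grid):
    # integrated squared error by the trapezoidal rule
    vals = [(estimate(x) - f(x)) ** 2 for x in grid]
    h = grid[1] - grid[0]
    return h * (sum(vals) - 0.5 * vals[0] - 0.5 * vals[-1])

f_true = lambda x: 105.0 * x**2 * (1.0 - x) ** 4  # beta(3, 5) density
grid = [i / 100 for i in range(101)]

random.seed(2)
N, n, m = 20, 100, 10
total = 0.0
for _ in range(N):
    sample = [random.betavariate(3, 5) for _ in range(n)]
    total += ise(lambda x: vitale(sample, m, x), f_true, grid)
ise_bar = total / N
print(ise_bar)
```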
\begin{table} \centering \begin{tabular}{c|c|c|c|c|c} \hline Density function&n&Vitale's estimator&Recursive 1&Recursive 2&Recursive 3\\ \hline$(a)$&50&\textbf{0.085389}&0.088252&0.089350&0.092738\\ &200&0.028167&\textbf{0.025737}&0.026057&0.027045\\ &500&0.013533&\textbf{0.011398}&0.011540&0.011977\\ \hline$(b)$&50&0.145441&\textbf{0.129385}&0.130996&0.135963\\ &200&0.047977&\textbf{0.037733}&0.038202&0.039651\\ &500&0.023050&\textbf{0.016710}&0.016918&0.017560\\ \hline$(c)$&50&0.074634&\textbf{0.061163}&0.061925&0.064273\\ &200&0.024620&\textbf{0.017837}&0.018059&0.0187441\\ &500&0.011828&\textbf{0.007899}&0.007997&0.0083012\\ \hline$(d)$&50&\textbf{0.108664}&0.119492&0.120980&0.125567\\ &200&0.035845&\textbf{0.034847}&0.035281&0.036619\\ &500&0.017222&\textbf{0.015433}&0.015625&0.016217\\ \hline$(e)$&50&\textbf{0.124816}&0.157438&0.159398&0.165442\\ &200&\textbf{0.041174}&0.045914&0.046485&0.048248\\ &500&\textbf{0.019782}&0.020333&0.020587&0.021367\\ \hline$(f)$&50&\textbf{0.084119}&0.109237&0.110596&0.114790\\ &200&\textbf{0.027749}&0.031857&0.032253&0.033476\\ &500&\textbf{0.013332}&0.014108&0.014284&0.014825\\ \hline$(g)$&50&\textbf{0.063962}&0.065611&0.066428&0.068947\\ &200&0.021099&\textbf{0.019134}&0.019372&0.020107\\ &500&0.010137&\textbf{0.008474}&0.008579&0.008904\\ \hline$(h)$&50&\textbf{0.046147}&0.042890&0.043424&0.04507\\ &200&0.015222&\textbf{0.012508}&0.012664&0.013144\\ &500&0.007313&\textbf{0.005539}&0.005608&0.005821\\ \hline$(i)$&50&\textbf{0.031391}&0.036611&0.037066&0.038472\\ &200&\textbf{0.010355}&0.010676&0.010809&0.011219\\ &500&0.004975&\textbf{0.004728}&0.004787&0.004968\\ \hline$(j)$&50&0.071621&\textbf{0.065671}&0.066489&0.069010\\ &200&0.023626&\textbf{0.019152}&0.019390&0.020125\\ &500&0.011351&\textbf{0.008481}&0.008587&0.008913\\ \end{tabular} \caption{The average integrated squared error ($ISE$) of \texttt{Vitale's estimator} $\widetilde{f}_n$ and the three recursive estimators; \texttt{recursive 1} corresponds to the estimator~$f_n$ with the choice $(\gamma_n)=(n^{-1})$, \texttt{recursive 2} corresponds to the estimator~$f_n$ with the choice $(\gamma_n)=\left(\left[1-\frac{a}{2}\right]n^{-1}\right)$ ($a=2/9$) and \texttt{recursive 3} corresponds to the estimator~$f_n$ with the choice $(\gamma_n)=\left(\left[1-a\right]n^{-1}\right)$ ($a=1/5$).} \label{Tab:1} \end{table} \begin{table} \centering \small \begin{tabular}{p{0.5cm}|c|c|c|c|c|c|c|c|c|c} \multicolumn{3}{c|}{}&\multicolumn{2}{c|}{$\widetilde{f}_{n,m,m/b}$} &\multicolumn{3}{c|}{$\widetilde{f}_{n,m,b,\varepsilon}$, $\varepsilon=0.00001$}&\multicolumn{3}{c}{$\widetilde{f}^N_{n,m,b,\varepsilon}$, $\varepsilon=0.00001$}\\ \hline&n&$\widetilde{f}_{n,m,m/2}$&$b=3$&$b=4$&$b=2$&$b=3$&$b=4$&$b=2$&$b=3$&$b=4$\\ \hline$(a)$&50&{\bf 0.08504}&0.08687&0.08874&0.10597&0.10824&0.11057&0.09852&0.10064&0.10280\\ &200&{\bf 0.02480}&0.02533&0.02587&0.03090&0.03156&0.03224&0.02873&0.02935&0.02998\\ &500&{\bf 0.01098}&0.01122&0.01146&0.01368&0.01398&0.01428&0.01272&0.01299&0.01327\\ \hline$(b)$&50&{\bf 0.12469}&0.12736&0.13010&0.13284&0.13569&0.13861&0.14319&0.14626&0.14940\\ &200&{\bf 0.03636}&0.03714&0.03794&0.03874&0.03957&0.04042&0.04176&0.04265&0.04357\\ &500&{\bf 0.01610}&0.01644&0.01680&0.01715&0.01752&0.01790&0.01849&0.01889&0.01929\\ \hline$(c)$&50&{\bf 0.05894}&0.06020&0.06150&0.08278&0.08455&0.08637&0.08721&0.08908&0.09100\\ &200&{\bf 0.01719}&0.01755&0.01793&0.02414&0.02465&0.02518&0.02543&0.02598&0.02653\\ &500&{\bf 0.00761}&0.00777&0.00794&0.01069&0.01092&0.01115&0.01126&0.01150&0.01175\\ \hline$(d)$&50&{\bf 0.11515}&0.11762&0.12015&0.14391&0.14699&0.15015&0.14656&0.14971&0.15292\\ &200&{\bf 0.03358}&0.03430&0.03504&0.04196&0.04286&0.04379&0.04274&0.04366&0.04459\\ &500&{\bf 0.01487}&0.01519&0.01551&0.01858&0.01898&0.01939&0.01892&0.01933&0.01975\\ \hline$(e)$&50&{\bf 0.15172}&0.15498&0.15830&0.15311&0.15640&0.15976&0.15463&0.15795&0.16134\\ &200&{\bf 0.04424}&0.04519&0.04616&0.04465&0.04561&0.04659&0.04509&0.04606&0.04705\\ 
&500&{\bf 0.01959}&0.02001&0.02044&0.01977&0.02020&0.02063&0.01997&0.02040&0.02083\\ \hline$(f)$&50&0.10527&0.10753&0.10984&{\bf 0.10527}&0.11383&0.11627&0.11420&0.11665&0.11916\\ &200&{\bf 0.03070}&0.03135&0.03203&0.03250&0.03319&0.03391&0.03330&0.03402&0.03475\\ &500&{\bf 0.01359}&0.01388&0.01418&0.01439&0.01470&0.01501&0.01475&0.01506&0.01539\\ \hline$(g)$&50&0.06323&0.06458&0.06597&0.06298&0.06433&0.06571&{\bf 0.06252}&0.06386&0.06524\\ &200&0.01844&0.01883&0.01924&0.01836&0.01876&0.01916&{\bf 0.01823}&0.01862&0.01902\\ &500&0.00816&0.00834&0.00852&0.00813&0.00830&0.00848&{\bf 0.00807}&0.00824&0.00842\\ \hline$(h)$&50&0.04133&0.04222&0.04312&0.03872&0.03955&0.04040&{\bf 0.03566}&0.03643&0.03721\\ &200&0.01205&0.01231&0.01257&0.01129&0.01153&0.01178&{\bf 0.01040}&0.01062&0.01085\\ &500&0.00533&0.00545&0.00557&0.00500&0.00510&0.00521&{\bf 0.00460}&0.00470&0.00480\\ \hline$(i)$&50&{\bf 0.03528}&0.03603&0.03681&0.03594&0.03671&0.03750&0.03589&0.03666&0.03745\\ &200&{\bf 0.01028}&0.01051&0.01073&0.01048&0.01070&0.01093&0.01046&0.01069&0.01092\\ &500&{\bf 0.00455}&0.00465&0.00475&0.00464&0.00474&0.00484&0.00463&0.00473&0.00483\\ \hline$(j)$&50&0.06328&0.06464&0.06603&0.06439&0.06577&0.06718&{\bf 0.06236}&0.06370&0.06507\\ &200&0.01845&0.01885&0.01925&0.01877&0.01918&0.01959&{\bf 0.01818}&0.01857&0.01897\\ &500&0.00817&0.00817&0.00852&0.00831&0.00849&0.00867&{\bf 0.00805}&0.00822&0.00840\\ \end{tabular} \caption{The average integrated squared error ($ISE$) of Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ and the three estimators introduced by Kakizawa: $\widetilde{f}_{n,m,m/b}$ with $b=3,4$, $\widetilde{f}_{n,m,b,\varepsilon}$ and $\widetilde{f}^N_{n,m,b,\varepsilon}$ with $b=2,3,4$ and $\varepsilon=0.00001$.} \label{Tab:2} \end{table} From Tables \ref{Tab:1} and \ref{Tab:2}, we conclude that: \begin{itemize} \item In all the considered cases, the average $ISE$ of our density estimator (\ref{eq:1}) is smaller than that of Vitale's estimator defined in (\ref{eq:26}), 
except in the cases $(e)$ and $(f)$ of the beta mixtures, in the cases $(a)$, $(d)$, $(g)$ and $(h)$ for the small sample size $n=50$, and in the case $(i)$ for the sizes $n=50$ and $n=200$. \item In all the cases, the average $ISE$ of our recursive density estimator (\ref{eq:1}) is slightly larger than that of Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ given in (\ref{eq:32}). \item In all the cases, the average $ISE$ of our density estimator (\ref{eq:1}) is smaller than that of the generalized estimator $\widetilde{f}_{n,m,m/b}$ with $b=4$ (see (\ref{eq:35})). \item In all the cases, the average $ISE$ of our density estimator (\ref{eq:1}) is smaller than that of the multiplicative bias-corrected Bernstein estimator $\widetilde{f}_{n,m,b,\varepsilon}$ given in~(\ref{eq:38}) and of the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ given in~(\ref{eq:41}), except in the cases $(g)$ and $(h)$. \item The average $ISE$ of the generalized estimator $\widetilde{f}_{n,m,m/b}$ defined in~(\ref{eq:35}) increases as $b$ increases; the optimal choice is therefore $b=2$, which corresponds to Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ given in~(\ref{eq:32}). \item The average $ISE$ of the multiplicative bias-corrected Bernstein estimator $\widetilde{f}_{n,m,b,\varepsilon}$ given in~(\ref{eq:38}) and of the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ defined in~(\ref{eq:41}) also increases as $b$ increases. \item The average $ISE$ of the multiplicative bias-corrected Bernstein estimator $\widetilde{f}_{n,m,b,\varepsilon}$ given in~(\ref{eq:38}) and of the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ given in~(\ref{eq:41}) are smaller than that of the estimator $\widetilde{f}_{n,m,m/b}$ defined in~(\ref{eq:35}) in the cases $(g)$ and $(h)$, and larger in the other cases. 
\item The average $ISE$ of the multiplicative bias-corrected Bernstein estimator $\widetilde{f}_{n,m,b,\varepsilon}$ given in~(\ref{eq:38}) is larger than that of the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ given in~(\ref{eq:41}) in the cases $(a)$, $(g)$, $(h)$, $(i)$ and $(j)$, and smaller in the other cases. \item The average $ISE$ decreases as the sample size increases. \end{itemize} \begin{figure}[!ht] \begin{center} \includegraphics[width=0.6\textwidth,angle=270,clip=true,trim=40 0 0 0]{Beta_3_5.eps} \end{center} \caption{Qualitative comparison of the proposed density estimator $f_n$ given in~(\ref{eq:1}) with stepsize $(\gamma_n)=(n^{-1})$ (solid line), Vitale's estimator $\widetilde{f}_n$ defined in~(\ref{eq:26}) (dashed line), Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ defined in~(\ref{eq:32}) (dotted line), the generalized estimator $\widetilde{f}_{n,m,m/4}$ defined in~(\ref{eq:35}) (dot-dashed line), Kakizawa's estimator $\widetilde{f}_{n,m,b,\varepsilon}$ defined in~(\ref{eq:38}) (long-dashed line) and the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ defined in~(\ref{eq:41}) (two-dashed line) with $b=2$ and $\varepsilon=0.00001$, for $500$ samples of size $50$ (left panel) and of size $250$ (right panel) from the beta density $\mathcal{B}\left(3,5\right)$.} \label{Fig:1} \end{figure} From Figure \ref{Fig:1}, we observe that: \begin{itemize} \item The proposed recursive estimator $f_n$ given in~(\ref{eq:1}) with the stepsize $\left(\gamma_n\right)=\left(n^{-1}\right)$ is closer to the true density function than Vitale's estimator $\widetilde{f}_n$ given in (\ref{eq:26}), the generalized estimator $\widetilde{f}_{n,m,m/4}$ given in~(\ref{eq:35}), Kakizawa's estimator $\widetilde{f}_{n,m,b,\varepsilon}$ defined in~(\ref{eq:38}) and the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ defined in~(\ref{eq:41}) with $b=2$ and $\varepsilon=0.00001$. 
\item Within the interval $[0,1]$, our density estimator (\ref{eq:1}) with the stepsize $\left(\gamma_n\right)=\left(n^{-1}\right)$ is closer to the true density function than Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ defined in~(\ref{eq:32}). \item As the sample size increases, we obtain a closer estimate of the true density function. \end{itemize} \subsection{Real datasets}\label{subsection:realdata} In any practical situation, to estimate an unknown density function $f$, it is essential to specify the order $m$ to be used by the estimator. One way is to use the least squares cross-validation ($LSCV$) method to obtain a data-driven choice of $m$. \subsubsection{Order selection method} First, we recall that the $LSCV$ method is based on minimizing the integrated squared error between the estimated density function $\hat{f}$ and the true density function $f$: \begin{eqnarray*} \displaystyle\int_0^1 \left(\hat{f}(x)-f(x)\right)^2dx=\displaystyle\int_0^1\hat{f}^2(x)dx-2\displaystyle\int_0^1\hat{f}(x)f(x)dx+\displaystyle\int_0^1f^2(x)dx. \end{eqnarray*} From this, \cite{Sil86} derived the score function \begin{eqnarray} LSCV_{\hat{f}}(m)=\displaystyle\int_0^1\hat{f}^2(x)dx-\frac{2}{n}\displaystyle\sum_{i=1}^{n}\hat{f}_{-i}(X_i), \label{eq:45} \end{eqnarray} where $\hat{f}_{-i}$ is the density estimate computed without the data point $X_i$. The smoothing parameter is chosen by minimizing $LSCV(m)$, i.e., $m=\arg\min_{m}LSCV(m)$. \\ For our proposed recursive estimator $f_n$ given in (\ref{eq:1}), we choose $(\gamma_n)=(n^{-1})$ and then select the order $m$ by minimizing the following criterion: \begin{eqnarray*} LSCV_{f_n}(m)=\displaystyle\int_0^1f^2_n(x)dx-\frac{2}{n}\displaystyle\sum_{i=1}^{n}f_{n,-i}(X_i). \end{eqnarray*} We define the integer sequences $p_i=[m_iX_i]$ and $q_i=[m_iX_i/2]$; we then have $X_i\in\left(\frac{p_i}{m_i}, \frac{p_{i}+1}{m_i}\right]$ and $X_i\in\left(\frac{2q_i}{m_i}, \frac{2(q_i+1)}{m_i}\right]$. 
Then we obtain \begin{eqnarray*} f_{n,-i}(x)=\frac{1}{n-1}\left[nf_n(x)-\left\{2m_ib_{p_i}(m_i-1,x)-\frac{m_i}{2}b_{q_i}\left(\frac{m_i}{2}-1,x\right)\right\}\right], \end{eqnarray*} and we conclude that \begin{eqnarray*} LSCV_{f_n}(m)=\displaystyle\int_0^1f^2_n(x)dx-\frac{2}{n-1}\left[\displaystyle\sum_{i=1}^{n}f_{n}(X_i)-\frac{1}{n}\displaystyle\sum_{i=1}^{n}\left\{2m_ib_{p_i}(m_i-1,X_i)-\frac{m_i}{2}b_{q_i}\left(\frac{m_i}{2}-1,X_i\right)\right\}\right]. \end{eqnarray*} Note that the $LSCV$ function for Vitale's estimator (\ref{eq:26}) is written as \begin{eqnarray*} LSCV_{\widetilde{f}_n}(m)=\displaystyle\int_0^1\left\{\widetilde{f}_n(x)\right\}^2dx-\frac{2}{n}\displaystyle\sum_{i=1}^{n}\widetilde{f}_{n,-i}(X_i). \end{eqnarray*} We define the integer sequence $k_i=[mX_i]$; we then have $X_i\in\left(\frac{k_i}{m}, \frac{k_i+1}{m}\right]$, and consequently \begin{eqnarray*} \widetilde{f}_{n,-i}(x)&=&\frac{1}{n-1}\left[n\widetilde{f}_n(x)-mb_{k_i}(m-1,x)\right], \end{eqnarray*} so that \begin{eqnarray*} LSCV_{\widetilde{f}_n}(m)=\displaystyle\int_0^1\left\{\widetilde{f}_n(x)\right\}^2dx-\frac{2}{n-1}\left[\displaystyle\sum_{i=1}^n\widetilde{f}_n(X_i)-\frac{m}{n}\displaystyle\sum_{i=1}^nb_{k_i}(m-1,X_i)\right]. \end{eqnarray*} The $LSCV$ function for the estimator $\widetilde{f}_{n,m,m/b}$ defined in (\ref{eq:35}) with $b=2,3,4,\ldots$ is written as \begin{eqnarray*} LSCV_{\widetilde{f}_{n,m,m/b}}(m)=\displaystyle\int_0^1\left\{\widetilde{f}_{n,m,m/b}(x)\right\}^2dx-\frac{2}{n}\displaystyle\sum_{i=1}^{n}\widetilde{f}_{n,m,m/b,-i}(X_i). \end{eqnarray*} We define the integer sequences $k_i=[mX_i]$ and $r_i=[mX_i/b]$; we then have $X_i\in\left(\frac{k_i}{m}, \frac{k_i+1}{m}\right]$ and $X_i\in\left(\frac{br_i}{m}, \frac{b(r_{i}+1)}{m}\right]$. 
Then we obtain \begin{eqnarray*} LSCV_{\widetilde{f}_{n,m,m/b}}(m)&=&\displaystyle\int_0^1\left\{\widetilde{f}_{n,m,m/b}(x)\right\}^2dx-\frac{2}{n-1}\Bigg[\displaystyle\sum_{i=1}^{n}\widetilde{f}_{n,m,m/b}(X_i)\\ &-&\frac{1}{n}\displaystyle\sum_{i=1}^{n}\left\{\frac{b}{b-1}mb_{k_i}(m-1,X_i)-\frac{1}{b-1}\frac{m}{b}b_{r_i}\left(\frac{m}{b}-1,X_i\right)\right\}\Bigg]. \end{eqnarray*} For Kakizawa's estimators $\widetilde{f}_{n,m,b,\varepsilon}$ defined in~(\ref{eq:38}) and $\widetilde{f}^N_{n,m,b,\varepsilon}$ defined in~(\ref{eq:41}), the $LSCV$ function is written as in (\ref{eq:45}).\\ \subsubsection{Old Faithful data}\label{data:1} In this subsection, we consider the well-known Old Faithful data given in Tab.~2.2 of \cite{Sil86}. These data consist of the eruption lengths (in minutes) of $107$ eruptions of the Old Faithful geyser in Yellowstone National Park, U.S.A. The data are such that $\min_i(x_i)=1.67$ and $\max_i(x_i)=4.93$, so it is convenient to assume that the density of eruption times is defined on the interval $[1.5,5]$ and to transform the data into the unit interval.\\ The $LSCV$ procedure resulted in $m=104$ for Vitale's estimator $\widetilde{f}_n$ defined in~(\ref{eq:26}), $(m_n)=(n^{0.987})$ for our proposed estimator $f_n$ defined in~(\ref{eq:1}), $m=66$ for Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ defined in~(\ref{eq:32}), $m=52$ for the estimator $\widetilde{f}_{n,m,m/4}$ defined in~(\ref{eq:35}), $m=66$ for the multiplicative bias-corrected Bernstein estimator $\widetilde{f}_{n,m,2,0.00001}$ defined in~(\ref{eq:38}) and $m=66$ for the normalized estimator $\widetilde{f}^N_{n,m,2,0.00001}$ defined in~(\ref{eq:41}). These estimators are shown in Figure \ref{Fig:3}, along with a histogram of the data and a Gaussian kernel density estimate using the $LSCV$-based bandwidth $h=0.3677$. All the estimators are smooth and seem to capture the pattern highlighted by the histogram. 
We observe that our recursive estimator outperforms the other estimators near $x=1.5$. \begin{figure}[!ht] \begin{center} \includegraphics[width=0.65\textwidth,angle=270,clip=true,trim=40 0 0 0]{plot_silv.eps} \end{center} \caption{Density estimates for the Old Faithful data: recursive estimator $f_n$ defined in~(\ref{eq:1}) (solid line), Vitale's estimator $\widetilde{f}_n$ defined in~(\ref{eq:26}) (dashed line), Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ defined in~(\ref{eq:32}) (dotted line), the generalized estimator $\widetilde{f}_{n,m,m/4}$ defined in~(\ref{eq:35}) (dot-dashed line), Kakizawa's estimator $\widetilde{f}_{n,m,b,\varepsilon}$ defined in~(\ref{eq:38}) (long-dashed line) and the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ defined in~(\ref{eq:41}) (two-dashed line) with $b=2$ and $\varepsilon=0.00001$ (left panel), and a Gaussian kernel density estimate using the $LSCV$-based bandwidth $h=0.3677$ (right panel).} \label{Fig:3} \end{figure} \subsubsection{Tuna data}\label{data:2} Our last example concerns the tuna data given in \cite{Che96}. The data come from an aerial line transect survey of Southern Bluefin Tuna in the Great Australian Bight. An aircraft with two spotters on board flies randomly allocated line transects. The data are the perpendicular sighting distances (in miles) of $64$ detected tuna schools to the transect lines. The survey was conducted in summer, when tuna tend to stay on the surface. 
We analyzed the transformed data divided by $w=18$ (the data are such that $\min_i(x_i)=0.19$ and $\max_i(x_i)=16.26$).\\ The $LSCV$ procedure resulted in $m=14$ for Vitale's estimator $\widetilde{f}_n$ defined in~(\ref{eq:26}), $(m_n)=(n^{0.633})$ for our proposed estimator $f_n$ defined in~(\ref{eq:1}), $m=4$ for Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ defined in~(\ref{eq:32}), $m=4$ for the estimator $\widetilde{f}_{n,m,m/4}$ defined in~(\ref{eq:35}), $m=8$ for the multiplicative bias-corrected Bernstein estimator $\widetilde{f}_{n,m,2,0.00001}$ defined in~(\ref{eq:38}) and $m=4$ for the normalized estimator $\widetilde{f}^N_{n,m,2,0.00001}$ defined in~(\ref{eq:41}). These estimators are shown in Figure \ref{Fig:4}, along with a histogram of the data and a Gaussian kernel density estimate using the $LSCV$-based bandwidth $h=1.291$. All the estimators are smooth and seem to capture the pattern highlighted by the histogram. We observe that our proposed recursive estimator outperforms the other estimators near the boundaries. 
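The data-driven order selection used in both examples can be sketched as a brute-force grid search over the $LSCV$ score. The sketch below recomputes each leave-one-out estimate naively, an $O(n^2)$ shortcut rather than the closed-form identities derived above; the Vitale-type estimator and all function names are illustrative assumptions.

```python
import numpy as np
from math import comb

def bernstein_basis(k, m, x):
    # b_k(m, x) = C(m, k) x^k (1 - x)^(m - k)
    return comb(m, k) * x**k * (1.0 - x)**(m - k)

def vitale_estimator(sample, m, x):
    # Bernstein smoothing of empirical-CDF increments (illustrative stand-in)
    est = np.zeros_like(x, dtype=float)
    for k in range(m):
        incr = np.mean((sample > k / m) & (sample <= (k + 1) / m))
        est += m * incr * bernstein_basis(k, m - 1, x)
    return est

def lscv_score(sample, m, grid=512):
    # LSCV(m) = int_0^1 f_hat^2 dx - (2/n) sum_i f_hat_{-i}(X_i),
    # with the leave-one-out terms recomputed naively (a sketch, not the
    # closed-form updates derived in the text)
    n = len(sample)
    x = np.linspace(0.0, 1.0, grid)
    fhat = vitale_estimator(sample, m, x)
    sq = float(np.sum((fhat[1:] ** 2 + fhat[:-1] ** 2) * np.diff(x)) / 2.0)
    loo = sum(vitale_estimator(np.delete(sample, i), m, np.array([xi]))[0]
              for i, xi in enumerate(sample))
    return sq - 2.0 * loo / n

def select_order(sample, orders):
    # data-driven order: minimize the LSCV score over a candidate grid;
    # the sample is assumed already rescaled into [0, 1] (e.g. (x - 1.5)/3.5
    # for the eruption lengths, or x/18 for the tuna distances)
    return min(orders, key=lambda m: lscv_score(sample, m))
```

In practice one would scan a full range of candidate orders rather than a coarse grid, and use the closed-form $LSCV$ expressions above to bring the cost down from $O(n^2)$ per candidate.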
\begin{figure}[!ht] \begin{center} \includegraphics[width=0.65\textwidth,angle=270,clip=true,trim=40 0 0 0]{tuna_asma.eps} \end{center} \caption{Density estimates for the tuna data: recursive estimator $f_n$ defined in (\ref{eq:1}) (solid line), Vitale's estimator $\widetilde{f}_n$ defined in (\ref{eq:26}) (dashed line), Leblanc's estimator $\widetilde{f}_{n,m,m/2}$ defined in (\ref{eq:32}) (dotted line), the generalized estimator $\widetilde{f}_{n,m,m/4}$ defined in (\ref{eq:35}) (dot-dashed line), Kakizawa's estimator $\widetilde{f}_{n,m,b,\varepsilon}$ defined in (\ref{eq:38}) (long-dashed line) and the normalized estimator $\widetilde{f}^N_{n,m,b,\varepsilon}$ defined in (\ref{eq:41}) (two-dashed line) with $b=2$ and $\varepsilon=0.00001$ (left panel), and a Gaussian kernel density estimate using the $LSCV$-based bandwidth $h=1.291$ (right panel).} \label{Fig:4} \end{figure} \section{Conclusion}\label{section:conclusion} In this paper, we propose a recursive estimator of a density function based on a stochastic algorithm derived from the Robbins--Monro scheme and using Bernstein polynomials. We first study its asymptotic properties. We show that the proposed density estimator has good boundary properties. Moreover, the bias rate of the proposed estimator is of order $m^{-2}$, which improves on Vitale's estimator, whose bias rate is of order $m^{-1}$. In almost all the cases considered, the average $ISE$ of the proposed estimator (\ref{eq:1}) with stepsize $\left(\gamma_n\right)=\left(n^{-1}\right)$ and the corresponding order $\left(m_n\right)$ is smaller than that of Vitale's estimator and than that of the multiplicative bias-corrected estimator defined by Kakizawa. Our proposed recursive density estimator has, however, a slightly larger average $ISE$ than Leblanc's estimator. In addition, a major advantage of our proposal is that updating it when new sample points become available requires less computation than for the non-recursive estimators. 
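This computational advantage comes from the stochastic-approximation form of the update: when a new observation $X_n$ arrives, $f_n$ is obtained from $f_{n-1}$ with a single kernel evaluation per grid point, without revisiting the earlier data. A minimal sketch follows, with the bias-corrected kernel read off the leave-one-out identity of the previous section (an assumption here), $\gamma_n=n^{-1}$, and an illustrative even order $m_n\approx n^{a}$:

```python
import numpy as np
from math import comb, floor

def bernstein_basis(k, m, x):
    # b_k(m, x) = C(m, k) x^k (1 - x)^(m - k)
    return comb(m, k) * x**k * (1.0 - x)**(m - k)

class RecursiveBernstein:
    """Robbins-Monro style update f_n = (1 - g_n) f_{n-1} + g_n K_n on a grid.

    The kernel K_n(x) = 2 m b_p(m-1, x) - (m/2) b_q(m/2 - 1, x), with
    p = [m X_n], q = [m X_n / 2] and m = m_n an even order, is read off the
    leave-one-out identity in the text -- an assumption, not the paper's
    verbatim definition.
    """

    def __init__(self, grid, a=0.8):
        self.x = grid
        self.f = np.zeros_like(grid, dtype=float)
        self.n = 0
        self.a = a  # illustrative exponent for the order m_n ~ n^a

    def update(self, x_new):
        self.n += 1
        g = 1.0 / self.n                             # stepsize gamma_n = 1/n
        m = max(2, 2 * round(self.n ** self.a / 2))  # even order m_n
        p = min(floor(m * x_new), m - 1)
        q = min(floor(m * x_new / 2), m // 2 - 1)
        kern = (2 * m * bernstein_basis(p, m - 1, self.x)
                - (m / 2) * bernstein_basis(q, m // 2 - 1, self.x))
        self.f = (1.0 - g) * self.f + g * kern       # O(grid) work per point
        return self.f
```

Each kernel integrates to one ($2m\cdot\frac{1}{m}-\frac{m}{2}\cdot\frac{2}{m}=1$), so the convex-combination update keeps $\int_0^1 f_n = 1$ at every step, consistent with the unit-integral property noted below.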
Our proposed estimator always integrates to unity, but it is not necessarily nonnegative. However, we found that truncation and renormalisation can resolve this issue. Finally, through a simulation study and two simple real-life examples (the Old Faithful data and the tuna data), we demonstrated that the recursive Bernstein polynomial density estimator can lead to very satisfactory estimates. In conclusion, the proposed recursive estimator $f_n$ yields better results than Vitale's estimator, Leblanc's estimator and the multiplicative bias-corrected estimator defined by Kakizawa, especially near the boundaries.
\section{Introduction} The formation and evolution of galaxies in the Universe is now being observed to redshifts z $\sim$ 8 \citep[e.g.][]{bou09,ouc09} using photometric imaging to wavelengths $\sim$ 1 \um in search of "Lyman Break Galaxies" (LBGs; Steidel et al. 1999). These observations reveal the rest frame ultraviolet luminosity of star-forming galaxies and allow the measurement of changes in the star formation rate density of the Universe as a function of redshift \citep{mad98}. (Because such star-forming galaxies are observationally characterised by indicators of young, massive, short-lived stars, we refer to them in the remainder of this paper as "starburst" galaxies.) Observations of LBGs indicate an increase in the characteristic luminosity of starburst galaxies for 0 $<$ z $\la$ 2, and then a decrease in luminosity for z $\ga$ 2 until the observational limit at z $\sim$ 8 \citep{red09}. Is this evolution of starburst galaxies determined correctly based on LBGs? The most uncertain factor in quantifying the luminosities is determining an accurate correction for obscuration by dust, because this obscuration can be extreme in the rest frame ultraviolet. The importance of dust in the early universe is emphasized by the result that the most luminous galaxies known have most of their luminosity emerging as reradiation by dust of intrinsic luminosity initially generated at much shorter wavelengths and absorbed by the dust. Initially termed "Ultraluminous Infrared Galaxies" (ULIRGS; Soifer et al. 1987) when discovered with the Infrared Astronomical Satellite (IRAS), such galaxies are found in large numbers in surveys at mid-infrared wavelengths at all redshifts to z $\sim$ 3 \citep[e.g.][]{lef05,dey08} and millimeter wavelengths \citep{cha05,pop08,men09}. The evolution of starbursts among ULIRGs has been tracked to z $\sim$ 3 \citep{wee08} with the $Spitzer$ Space Telescope using redshifts measured with The Infrared Spectrograph on $Spitzer$ (IRS; Houck et al. 
2004). Because of the inability to measure redshifts of dusty starbursts with z $>$ 3 using the IRS, these have not yet been found at redshifts as high as LBGs. Combining our understanding of dusty galaxies dominated by infrared luminosity with galaxies discovered by ultraviolet observations is essential to a full understanding of the formation and evolution of galaxies (as well as the formation and evolution of dust). The availability of IRS spectroscopy for the same galaxies which are detected in the rest frame ultraviolet by the Galaxy Evolution Explorer (GALEX, Martin et al. 2005) allows a direct comparison between the infrared luminosity from dust and the ultraviolet luminosity from hot stars. Our initial comparison in this way for 287 starburst galaxies having both $Spitzer$ spectra and GALEX ultraviolet measures found large differences in the star formation rates (SFRs) deduced from infrared compared to ultraviolet measures of SFR \citep{sar09}. These differences are attributed to extreme obscuration of the ultraviolet luminosity. Our previous sample arose primarily from selection in the infrared, chosen as starbursts having IRS spectra available. In the present paper, we use IRS spectra for samples of ultraviolet-selected starburst galaxies to study obscuration effects in sources with similar selection criteria as LBGs, in order to understand how conclusions depend on selection criteria. Our objective is to use observations in the infrared and ultraviolet to make an independent test of dust corrections for starbursts detected as LBGs at high redshift, such as in \citet{red09}, and to determine reliable corrections for dust obscuration which are independent of selection technique. We make measures only of the obscuration in order to avoid uncertain assumptions regarding transformation of luminosities to SFRs. 
In section 2, below, we describe the sample selection which includes 25 Markarian galaxies, 23 ultraviolet luminous galaxies discovered with GALEX, and the 50 starburst galaxies having the largest infrared/ultraviolet ratios. In section 3, we present the infrared spectra of all 98 sources and compare the characteristics of infrared and ultraviolet spectra in section 4. Results are used in section 5 to develop an empirical method for determining obscuration corrections as a function of M(UV), and in section 6 to compare bolometric luminosities $L_{ir}$ with M(UV). Our most important result (section 7) indicates that the bolometric luminosities deduced for starburst galaxies using LBGs at z $>$ 2 have been underestimated, because obscuration corrections have been significantly underestimated for starbursts. This means that SFRs have also been underestimated at high redshifts, and we propose a new observational test to verify these conclusions. \section{Selection of Starburst Galaxies} For the various comparisons which follow in this paper, we define three independent samples of starburst galaxies which have been selected using either ultraviolet or infrared criteria. These are the "Markarian" sample, the "GALEX" sample, and the $Spitzer$ sample. These samples are then compared using various spectroscopic parameters and conclusions are reached regarding how measures of dust obscuration and SFRs depend on sample selection criteria and source luminosities. \subsection {Markarian Starburst Galaxies} The first survey for galaxies selected on the basis of ultraviolet brightness was the Markarian survey \citep{mar67}, which detected "galaxies with an ultraviolet continuum" based on their apparent brightness in objective prism spectra extending to wavelengths $\ga$ 340 nm. 
Although the Markarian survey was not homogeneous for flux limits or selection criteria, it produced large samples of galaxies which subsequently proved important for classification of ultraviolet-luminous galaxies. For example, the prototype "starburst galactic nucleus" was defined by Markarian 538, also NGC 7714 \citep{wee81}. The first set of starburst galaxies arising from this definition was the list of 50 Markarian galaxies in \citet{bal83}, selected using optical spectroscopy for measurement of emission line widths to distinguish starbursts from active galactic nuclei (AGN). Because the selection of these Markarian starburst galaxies was based only on optical criteria, these galaxies provide an important comparison sample to starbursts selected in the infrared (section 2.3). (It was subsequently found that most Markarian galaxies would also be detected by the Infrared Astronomical Satellite (IRAS), but IRAS had not flown when the Markarian lists were compiled.) For this reason, we present in this paper new IRS observations from our $Spitzer$ program 50834 of 25 Markarian galaxies selected from the Balzano (1983) list, and also include archival observations of NGC 7714. \subsection {GALEX Starburst Galaxies} The availability of the GALEX all sky survey enables the identification of nearby galaxies (z $<$ 0.3) which are similar in ultraviolet luminosity to the distant LBGs; these local galaxies have been called ultraviolet luminous galaxies, or UVLGs \citep{hec05}. They represent the extremes in luminosity of ultraviolet-selected local starbursts, and they are sufficiently bright that various other data are available. A number of these UVLGs selected with GALEX have IRS observations available ($Spitzer$ archival program 40444, L. Armus, P.I.). We have made new extractions of the IRS spectra for these sources and refer to them as the GALEX sample. 
\subsection {Highly Obscured Spitzer Starburst Galaxies} Our previous paper \citep{sar09} compared IRS spectra with GALEX data for 287 starburst galaxies with z $<$ 0.5, sources chosen only by having both IRS spectra and GALEX fluxes. In that paper, the SFR measured from the luminosity of the 7.7 \um polycyclic aromatic hydrocarbon (PAH) feature, SFR(PAH), was compared to the SFR measured from the rest frame ultraviolet luminosity at 153 nm, SFR(UV). The relation found was log [SFR(PAH)/SFR(UV)] = 0.53 log [$\nu$L$_{\nu}$(7.7 $\mu$m)] - 21.5 for SFR in \mdot and $\nu$L$_{\nu}$(7.7 $\mu$m) in erg s$^{-1}$. This relation indicates a much higher SFR as measured in the infrared, which indicates very severe extinction in the ultraviolet for these starbursts, implying the escape of $<$ 1\% of the ultraviolet luminosity for the most luminous starbursts. To compare with the Markarian and GALEX ultraviolet-selected starbursts as one extreme of selection, we desire a comparably sized sample representing the other extreme of infrared selection. For this sample, we utilize the infrared-selected starbursts tabulated in \citet{sar09} discovered in $Spitzer$ flux-limited surveys and having the most extreme values of SFR(PAH)/SFR(UV), those with log [SFR(PAH)/SFR(UV)] $>$ 2. This gives a sample of 50 sources, which we call the $Spitzer$ sample. Original IRS observations of this $Spitzer$ sample are in \citet{wee09}, taking sources from a flux limited survey to f$_{\nu}$(24\ums) $>$ 10 mJy with the Multiband Imaging Photometer for $Spitzer$ (MIPS, Rieke et al. 2004), and in archival program 40539 (G. Helou, P.I.), which derives sources from various flux limited surveys with MIPS to 5 mJy. 
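Taken at face value, the fitted relation implies that the escaping fraction of ultraviolet luminosity drops steeply with PAH luminosity. A short numerical illustration of the published fit (function and variable names are ours, and the $10^{45}$ erg s$^{-1}$ test value is an arbitrary choice for a luminous starburst):

```python
import math

def log_sfr_ratio(nu_l_nu_77):
    """log[SFR(PAH)/SFR(UV)] from the fit quoted in the text (sar09),
    for nu*L_nu(7.7 um) in erg/s."""
    return 0.53 * math.log10(nu_l_nu_77) - 21.5

# For a luminous starburst with nu*L_nu(7.7 um) ~ 1e45 erg/s, the
# implied escaping UV fraction is below 1%, as stated in the text.
uv_escape_fraction = 10 ** (-log_sfr_ratio(1e45))
```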
This $Spitzer$ sample also provides a low redshift sample that approaches in infrared luminosity the high luminosity, optically faint, and very dusty starbursts discovered with $Spitzer$ at z $\sim$ 2 (source list and original references in \citet{wee08} plus new sources in \citet{des09}), which have been defined as "Dust Obscured Galaxies", or DOGs \citep{dey08}. In the same fashion, therefore, that our ultraviolet selected samples represent local LBGs, the $Spitzer$ sample represents local DOGs. (Luminosity comparisons between DOGs and the local sample are shown in Figure 16.) \section {Observations and Results} \subsection{$Spitzer$ Infrared Spectra} The infrared spectra used for our analysis were all obtained with the IRS \footnote{The IRS was a collaborative venture between Cornell University and Ball Aerospace Corporation funded by NASA through the Jet Propulsion Laboratory and the Ames Research Center.} using the Short Low module in orders 1 and 2 (SL1 and SL2) and the Long Low module in orders 1 and 2 (LL1 and LL2). Combining these orders gives total low resolution spectral coverage from $\sim$5\,\um to $\sim$35\,\ums. All spectra which are used in this paper are presented below, although spectra of 7 sources were originally given in \citet{wee09}. An IRS observation in one order consists of two independent spectra obtained at two different nod positions on the slit. Each spectrum is derived from several different images taken at the same nod position. All images with the source in one of the two nod positions were coadded to obtain the image of the source spectrum. The background was determined from coadded background images that added both nod positions with the source in the other slit (i.e., both nods on the LL1 slit provide a background image when the source is in the LL2 slit). 
The difference between coadded source images minus coadded background images was used for the spectral extraction, giving two independent extractions of the spectrum for each order. These independent spectra were compared to reject any highly outlying pixels in either spectrum, and a final mean spectrum was produced. The extraction of one dimensional spectra from the two dimensional images was done with the SMART analysis package \citep{hig04,leb09}, beginning with the Basic Calibrated Data products, version 18, of the $Spitzer$ flux calibration pipeline. Sources in the $Spitzer$ sample and GALEX sample are faint and always unresolved, so that use of the SMART Optimal Extraction in \citet{leb09} improves signal to noise (S/N). The brighter Markarian sources, sometimes slightly resolved, were extracted with the "tapered column" extraction which uses an extraction window that is wider in the direction perpendicular to dispersion. That some Markarian starburst galaxies are slightly resolved at the scale of the IRS slits is noticed when the overlapping portions of the SL1 and LL2 spectra show a larger flux in the wider LL2 slit (the SL1 slit is 3.7 \arcsec wide, and the LL2 slit is 10.5 \arcsec wide). When the SL1 and LL2 flux densities are different in the overlapping wavelengths, the spectra are matched, or "stitched", by raising the SL1 by the factor needed to match LL2. These "stitching" factors are given in Table 1. Figure 1 shows spectra of all starbursts in the Markarian sample, Figure 2 all starbursts in the GALEX sample, and Figure 3 all starbursts in the $Spitzer$ sample. Twelve AGN found among the Markarian and GALEX samples (section 4.2) are shown in Figure 4. In all Figures, spectra are truncated at 25\,\um in the rest frame because of the absence of any significant features in the low resolution spectra beyond that wavelength. Illustrated spectra are not smoothed. 
For convenience of illustration, spectra are arranged by spectral slope; i.e.\ by the ratio f$_{\nu}$(24 \ums)/f$_{\nu}$(7.7 \ums). Spectra in the Figures are normalized to 1 mJy at 7.7 \um and offset for illustration. Observed spectroscopic data for all sources are given in Tables 1 and 2 (Markarian sample), 3 and 4 (GALEX sample), and 5 and 6 ($Spitzer$ sample). Average spectra for all starbursts in the 3 samples are shown in Figure 5. These averages were produced by averaging the individual normalized spectra of starburst galaxies in Figures 1, 2, and 3. After averaging, the average spectra were boxcar-smoothed to 0.2 \um for illustration in Figure 5. \subsection{GALEX Ultraviolet and SDSS Optical Data} GALEX \citep{mar05} obtains images in the ultraviolet at wavelengths 134-179 nm (FUV) and 177-283 nm (NUV)\footnote{GALEX data used in this paper were those available in August, 2009, at the MultiMission Archive at Space Telescope Science Institute [http://archive.stsci.edu/]}. As discussed in \citet{sar09}, GALEX ultraviolet-selected starbursts and $Spitzer$ infrared-selected starbursts generally agree well in position, to accuracies much better than the observing apertures, which are of similar sizes. Point source GALEX images have full width at half maximum (FWHM) of 4.2\arcsec~ for FUV and 5.3\arcsec~ for NUV (Morrissey et al. 2005). For the $Spitzer$ IRS, spatial resolution is wavelength dependent, so the FWHM of an unresolved source at the mid-infrared wavelengths used for our spectroscopic analysis is 3\arcsec $\la$ FWHM $\la$ 6\arcsec. The observed flux density at a rest frame FUV wavelength of 153 nm is determined by using the GALEX observed flux densities at FUV (effective wavelength 153 nm) and NUV (effective wavelength 227 nm) to fit a power law continuum between these wavelengths, and then interpolating to the flux density at the observed wavelength (1+z)153 nm using the resulting power law index. 
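The two-band power-law interpolation just described can be sketched as follows (a simplified illustration with our own function names, not GALEX pipeline code):

```python
import math

C_NM_HZ = 2.998e17  # speed of light in nm * Hz

def interp_fuv(f_fuv, f_nuv, z, lam_fuv=153.0, lam_nuv=227.0):
    """Fit f_nu ~ nu^alpha through the two GALEX band measurements
    (FUV at 153 nm, NUV at 227 nm) and evaluate the flux density at
    the observed wavelength (1+z)*153 nm."""
    nu_fuv = C_NM_HZ / lam_fuv
    nu_nuv = C_NM_HZ / lam_nuv
    # Power-law index from the two observed flux densities
    alpha = math.log(f_fuv / f_nuv) / math.log(nu_fuv / nu_nuv)
    # Evaluate at the redshifted rest-frame 153 nm wavelength
    nu_target = C_NM_HZ / ((1.0 + z) * lam_fuv)
    return f_fuv * (nu_target / nu_fuv) ** alpha
```

For z $\la$ 0.48 the target wavelength lies between the two band centers, so this is a true interpolation rather than an extrapolation.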
This f$_{\nu}$[(1+z)153 nm] is the value of f$_{\nu}$(FUV) listed in Tables 1, 3, and 5. When we refer in this paper to ultraviolet luminosities or absolute magnitudes M(UV), we mean M(153 nm). For comparison with the IRS spectra, we also use optical spectra from the Sloan Digital Sky Survey (SDSS, Gunn et al. 1998). We classified all such spectra to determine if the Balmer lines show broad wings characteristic of type 1 AGN, or have unusually strong [OIII]/H$\beta$ ratios, which may indicate type 2 AGN. Sources with H$\beta$ comparable in strength to [OIII] $\lambda$ 4959 without broad H$\beta$ wings are classified as starbursts. These SDSS classifications result in some of the AGN discussed in section 4.2. \section{Infrared Spectroscopic Characteristics of Starbursts} \subsection{Infrared Luminosities and PAH Strengths} Starburst galaxies show very similar spectra in the mid-infrared, characterised by strong PAH emission features \citep[e.g.][]{rig00,gen98,bra06}. The exceptional similarity among such spectra is well illustrated in Figures 1, 2, and 3. Starburst spectra can be separated from spectra containing a contribution to dust emission from an AGN by using the equivalent widths (EW) of the PAH features. The best PAH feature for such quantitative use in starburst spectra observed with the IRS over all redshift ranges 0 $<$ z $<$ 3 is the 6.2 \um feature, because this feature is spectrally isolated and does not require deconvolution from adjacent features. Because of the strength of the PAH features, luminosities of these features provide a measure of starburst luminosity which relates to the star formation rate \citep[e.g.][]{for04,bra06}. The PAH luminosity parameter which we have defined and used previously is $\nu$L$_{\nu}$(7.7$\mu$m), where L$_{\nu}$(7.7$\mu$m) is determined from the flux density f$_{\nu}$(7.7 \ums) at the peak of the 7.7$\mu$m feature \citep{hou07}. 
This parameter was chosen to enable comparisons with starbursts at redshifts z $\sim$ 2, for which the 7.7 \um feature is the strongest spectral feature and whose peak flux density is the simplest parameter to measure in sources of poor S/N \citep{wee08}. Measurement of a peak f$_{\nu}$(7.7 \ums) also avoids the uncertainties of deconvolution and definition of the underlying continuum for the broad PAH features, which can lead to different measures of PAH flux even from the same spectrum. Although such deconvolutions are valuable for sources of sufficiently high S/N \citep{smi07}, they are difficult to implement in sources at high redshift with poor S/N and few visible PAH features. An EW(6.2 \ums) $>$ 0.4 \um has been found empirically to be a good discriminant for "pure" starbursts; sources with a PAH feature of this strength invariably have optical spectra classified as starbursts. We examine this criterion for the present samples in Figure 6, where EW(6.2 \ums) is compared with source luminosity determined from $\nu$L$_{\nu}$(7.7$\mu$m). In this Figure, the starbursts in Figures 1, 2, and 3 are distinguished from the AGN in Figure 4 (classification of the AGN sources is discussed below in section 4.2). The 3 GALEX sources with starburst classification but EW(6.2 \ums) $<$ 0.4 are especially interesting and are discussed in section 4.2 regarding whether they are really starbursts or have an AGN contribution. Our new results provide an independent confirmation of the EW criterion adopted previously to define pure starbursts. It is expected that the $Spitzer$ sample should have EW(6.2 \ums) $>$ 0.4 \um, because this was a criterion for inclusion in the tabulation of \citet{sar09}, from which we drew this sample. It is notable, however, that the Markarian starbursts all satisfy the criterion of EW(6.2 \ums) $>$ 0.4 \um. 
This is independent confirmation of the EW criterion, because the \citet{bal83} sample of Markarian starbursts was defined using only optical criteria. The four Markarian sources with EW(6.2 \ums) $<$ 0.4 \um now appear with SDSS spectra to be AGN, so the original classification was incorrect, overlooking weak, broad wings on the hydrogen lines. We note, however, that two GALEX sources with a starburst classification from both SDSS and IRS spectra have EW(6.2 \um) $<$ 0.4 \um. These are sources 3 and 16 in Figure 2B (the low EW measurement for source 13 is not reliable because the feature is so close to the end of the spectrum). In previous studies, such weak PAH strengths in sources without AGN were observed only in blue compact dwarfs (BCDs; Wu et al. 2006). These two sources have luminosities greatly exceeding those of BCDs, however, so they are potentially an important class of luminous starburst with weak PAH features. As discussed in efforts to understand BCD spectra, such sources could arise if the ionizing radiation is unusually hard, such as in a very young starburst, or if the HII region is density bounded and not surrounded by molecular clouds, or in circumstances of low metallicity. No such sources with low EW(6.2 \ums) are encountered among the Markarian sample, and the presence of several other AGN within the GALEX sample (section 4.2) raises the question of whether these objects also have a strong dust continuum and weak PAH because of the presence of an unrecognized AGN. Because of this possibility, we can at present only call attention to the unusual nature of these sources but cannot confirm their nature. Figure 6 also shows the luminosity range of our samples, a factor of 10$^{4}$, and illustrates that all three samples overlap in luminosity. 
Especially notable is that the ultraviolet-selected starbursts provide a continuous range of luminosities, typically lower luminosities from the Markarian sample and higher from the GALEX sample, but they overlap in both EW and luminosity parameters. That these samples also overlap in $\nu$L$_{\nu}$(7.7$\mu$m) with the $Spitzer$ sample of highly obscured starbursts is evidence that these very different selection techniques have nevertheless returned samples of similarly luminous starbursts and with similar spectroscopic nature regarding strength of the PAH star formation indicators. \subsection{Sources Classified as AGN} When comparing ultraviolet and infrared luminosities in order to study starbursts, it is important that AGN are not included in the samples. An AGN can raise the ultraviolet luminosity and also raise the level of infrared dust continuum. We are not considering details of AGN in the present paper, but after definition of the samples described in section 2, it was found that spectra showed AGN in a few cases. Because we have IRS spectra of those sources, we identify them and show the spectra, but we do not use them subsequently in our analysis of starbursts. In Tables 1 and 3, sources in the Markarian and GALEX samples which are classified by us as AGN are noted. These spectra are shown in Figure 4, and these sources are not included as starbursts in the various Figures. For the five Markarian AGN (Mkn 93, 149, 193, 764, and 863), the classification as AGN arises from SDSS spectra which show weak, broad wings to the Balmer lines. These were overlooked in the initial classification of \citet{bal83} from which we drew the sample. All IRS spectra for these AGN except for Mkn 193 (source 16 in Table 1) show strong PAH features. This means that the infrared spectrum is characterized by a circumnuclear starburst, and the PAH features could be used to measure the luminosity of this starburst. 
The ultraviolet luminosity may be contaminated by the AGN, however, so we do not use these sources to compare ultraviolet and infrared luminosities. Mkn 193 also shows strong [SIV] 10.5 \um emission, which is an indicator of the hard ionizing radiation from an AGN. For the GALEX sources in Table 3, we classify 7 as AGN. This classification includes any source which could be an AGN based on either optical or infrared spectra. Such conservative criteria are designed to assure that we do not include sources with ultraviolet luminosity arising in part from an AGN. In most cases, the AGN classification arises because of weak PAH 6.2 \um together with absorption in the silicate feature at $\sim$ 10 \um. This combination is interpreted as evidence of an AGN with hot dust close to the AGN whose emission raises the level of continuum beneath the PAH and is also absorbed in the silicate feature by intervening cooler dust (Imanishi 2007). Five GALEX AGN are classified in this way with IRS spectra (sources 1, 2, 4, 10, and 23 in Table 3 and Figure 4). There is some ambiguity in this classification, however. Sources 3 and 16 in Table 3 are included in Figure 2 as starbursts despite having weak EW(6.2 \ums), because they show no evidence of silicate absorption. Sources 3 and 16 have high [OIII]/H$\beta$ ratios from SDSS spectra but no evidence that the lines are broad as in type 2 AGN. Such high ratios also arise in BCDs, for example, because of harder ionizing radiation, so we do not exclude these as starbursts. The question of emission line ratios and ionizing radiation is considered more in section 4.3, below. GALEX sources 12 and 15 from Table 3 are classified as AGN both because of SDSS spectra and because of strong [SIV] 10.5 \um in the IRS spectra. These Markarian and GALEX AGN are not included in the remaining discussion, because we do not want to confuse conclusions regarding starbursts by including sources whose spectroscopic parameters may be influenced by an AGN. 
\subsection{Dust and Ionizing Radiation in Starbursts} Mid-infrared spectra of starbursts such as those in Figure 5 contain data for several independent indicators for the luminosity of the starburst. These are the luminosity of the PAH emission features discussed above, the luminosity of the dust continuum underlying the PAH features, and the luminosity of the emission lines. These three indicators measure different characteristics of the starburst. The PAH emission is excited by photons of various energies penetrating the photodissociation region (PDR) at the boundary between the HII region and the surrounding molecular cloud \citep{pee04}. Such photons are not the most energetic photons which have ionized the HII region, because such energetic photons would destroy the PAH molecules. The dust continuum arises from dust intermixed in the HII region and heated primarily by the hot stars, and the emission lines arise within the HII region from ionizing photons arising in the hot, young stars. Differences among these three parameters (PAH, dust, emission lines) give information about differences in the nature of different starbursts. Some notable differences are seen when comparing the three samples of starbursts considered herein. These differences are first illustrated in Figure 5, which shows average spectra for each sample. The well established dominance of PAH features in starburst spectra is again confirmed in the three samples, as shown for individual sources in Figures 1, 2, and 3. There are no measurable differences among relative PAH strengths for the three samples. There are, however, differences in the relative strengths of the emission lines and the PAH features. The relative strengths of the emission lines compared to PAH give a measure of the relative luminosities from the HII region compared to the surrounding PDR where the PAH features arise. 
Relative intensities of the emission lines themselves provide a diagnostic for the hardness of ionizing radiation in the HII region. The most important emission lines observed in the low resolution IRS spectra are labeled in Figure 5. The [NeII] 12.8 \um emission line is blended with an adjacent PAH feature so the actual measurement of its flux is uncertain in these low resolution spectra, but the shape and relative flux of this blended feature does not change among the three samples. The total flux of the blended feature is dominated by [NeII], so we can conclude that the HII luminosity relative to the PDR luminosity is similar among the samples. Compared to the fluxes of the PAH features and the [NeII] line, there is a clear progression in relative strength of the higher ionization lines [SIV] 10.5 \um and [NeIII] 15.6 \um, from strongest in the GALEX sources, intermediate in the Markarian sources, to weakest in the $Spitzer$ sources. The [SIII] 18.7 \um line is also strong in the Markarian sources compared to the $Spitzer$ sources, although S/N is too low for measurement in the GALEX average spectrum. The average spectra indicate, therefore, that the ultraviolet-selected samples contain hotter ionizing stars than does the extreme infrared-selected $Spitzer$ sample. The average spectra also show increased dust continuum for the GALEX sample compared to the other samples, easily seen by the higher continuum for wavelengths $>$ 15 \um but also measurable by an increased continuum at $\sim$ 10 \ums. That dust emission increases while the hardness of the ionization also increases is consistent with having warmer dust within the HII region in the sources with harder ionization. Comparisons are plotted for individual sources in Figures 7 and 8. In Figure 7, the ratio of total flux in [NeIII] to PAH 11.3 \um is compared with EW of PAH 11.3 \um (in \ums). 
This plot includes all three samples plus the nearby starburst galaxies in the \citet{bra06} sample as measured with the high resolution spectroscopy in \citet{ber09}. [NeIII]/PAH is a measure of hardness of the ionizing radiation, and EW(11.3 \ums) shows the strength of the dust continuum relative to the strength of PAH. The median values of log ([NeIII]/PAH), including limits, are -0.29, -1.01, -1.13, and -1.25 for the GALEX, Markarian, $Spitzer$, and nearby starbursts. Both Figure 5 and these results show decreasing EW(11.3 \ums) PAH with increasing [NeIII]/PAH. The average EW(11.3 \ums) for the $Spitzer$ sample is 0.65 \ums, for the Markarian sample is 0.55 \ums, and for the GALEX sample is 0.50 \ums. The values for EW(11.3 \ums) indicate that the Markarian starbursts have 18\% more dust luminosity in the continuum at $\sim$ 11 \um relative to PAH luminosity than has the $Spitzer$ sample, and the GALEX sample has 30\% more dust luminosity than the $Spitzer$ sample. A similar result is found by measuring dust continuum directly. For example, the value of continuum f$_{\nu}$(10 \ums) in the normalized average spectra is 0.16/0.29/0.40 for $Spitzer$/Markarian/GALEX. The ratio of dust continuum at 15 \um to PAH peak at 7.7 \um is 0.48/0.68/0.91 for $Spitzer$/Markarian/GALEX. These results are all consistent with the simple interpretation that the harder ionizing radiation in the ultraviolet-selected starbursts results in stronger dust emission at mid-infrared wavelengths, compared to the extreme $Spitzer$ sample. Figure 8 compares the [NeIII]/PAH ratio with luminosity of the PAH features, measured by $\nu$L$_{\nu}$(7.7 $\mu$m) in erg s$^{-1}$. There is no trend with luminosity for this ratio, but the result shows as in Figure 7 the systematically stronger [NeIII] compared to PAH in the UV-selected GALEX and Markarian samples compared to the $Spitzer$ infrared-selected sample. 
\section{Obscuration in Starbursts} Because the starbursts at highest redshift have been discovered in the rest frame ultraviolet \citep{bou09,red09}, understanding their intrinsic luminosities requires corrections for dust obscuration. The measure of obscuration is the greatest uncertainty in measuring SFRs. \citet{cal07} and \citet{cal08} have summarized the various methods for determining such corrections and their application to measurement of SFRs. When observing the rest frame ultraviolet, the obscuration is estimated by comparing the observed spectral slope (or "reddening") with an assumed intrinsic slope \citep[e.g.][]{bur05}. This is the source of the correction applied to the LBGs discovered at the highest redshifts. This has led to the result that the dust correction remains small for lower luminosity galaxies, which yields the conclusion that low luminosity galaxies progressively dominate the luminosity density for z $>$ 3 \citep{bou09}. Such obscuration corrections apply only for the portion of the ultraviolet luminosity that is able to emerge from the obscuration. Completely hidden sources cannot be measured in this way. This is the primary reason why infrared measures of starbursts are so important, because infrared luminosities are much less subject to extinction. In our discussion below, we present a new, direct comparison between the parameters most directly measuring the starburst luminosity as observed in the infrared (PAH luminosity) and in the ultraviolet (continuum luminosity). These parameters are compared to determine empirically the relation for unobscured starbursts, and then to use this relation to determine obscuration corrections for obscured starbursts. The PAH emission and the ultraviolet monochromatic continuum provide two spectral "features" which are intrinsic to the same starburst. 
The f$_{\nu}$(7.7 $\mu$m) arises from the photodissociation region immediately surrounding the ongoing starburst and directly excited by photons from the starburst. This PAH emission is intrinsic to the photodissociation region and is present regardless of whether any dust obscures the starburst. It is just as intrinsic to the starburst as radiation from the hot stars or emission lines from the HII region. Both f$_{\nu}$(7.7 $\mu$m) and f$_{\nu}$(153 nm) features are affected by the same obscuring dust in the surrounding cold molecular cloud, but the ultraviolet suffers heavy extinction whereas the PAH feature suffers negligible extinction. If we can determine empirically the intrinsic ratio between the PAH emission and ultraviolet continuum in unobscured sources, we can then simply use the observed ratio in obscured sources to determine how much extinction applies to the ultraviolet. This empirical determination is made using Figures 11, 12, and 14, discussed below, which illustrate the most extreme, limiting values of f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm) found in any of the starbursts. In all three figures, this limit is log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)] = 1. We will adopt this as the intrinsic value for sources with negligible obscuration of f$_{\nu}$(153 nm). If these extreme sources also have some ultraviolet obscuration, then the final values of obscuration which we derive are lower limits, and sources are even more obscured in the ultraviolet than our results indicate. \subsection{Comparison of Star Formation Rates} The ultimate objective of studying the luminosities of starbursts is to determine the SFR in starbursts and to follow the resulting cosmic evolution of the SFR density. Various assumptions enter the transformation from the luminosity of a starburst into a quantitative measure of SFR in \mdot. 
For infrared measures based on luminosities of the dust continuum, the SFR is determined by equating the bolometric luminosity of the starburst to the luminosity absorbed and reradiated by dust. For ultraviolet luminosities, the SFR derives from the total luminosity of the hot stars in the starburst measured at a rest frame ultraviolet wavelength. For sources with a mid-infrared spectrum arising only from a starburst with no AGN contamination, the luminosity $\nu$L$_{\nu}$(7.7 $\mu$m) relates empirically to the total infrared luminosity of the starburst, $L_{ir}$, according to log $L_{ir}$ = log [$\nu$L$_{\nu}$(7.7 $\mu$m)] + 0.78 \citep{hou07}. This transformation has no dependence on luminosity, as is seen by combining the IRAS-derived $L_{ir}$ for the local, low luminosity starburst sample of \citet{bra06} with the high luminosity, high redshift submillimeter galaxies in \citet{pop08}. Using the relation from \citet{ken98} to convert $L_{ir}$ into an SFR, this result for $L_{ir}$ yields log [SFR(PAH)] = log [$\nu$L$_{\nu}$(7.7 $\mu$m)] - 42.57$\pm$0.2, for $\nu$L$_{\nu}$(7.7 $\mu$m) in erg s$^{-1}$ and SFR in \mdot. This conversion was confirmed by \citet{sar09}, who found no offset between this measure of SFR and that derived from radio continuum observations of the same starbursts \citep{con92}. SFR(UV) is determined using the relation SFR(UV) = 1.08$\times$10$^{-28}$L$_{\nu}$(153 nm), for L$_{\nu}$(153 nm) in erg s$^{-1}$ Hz$^{-1}$ \citep{sam07}. For our comparisons, SFR(UV) is the observed value without applying any extinction corrections to the ultraviolet. For comparison to SFR(PAH), we convert this to log [SFR(UV)] = log $\nu$L$_{\nu}$(FUV) - 43.26, for $\nu$L$_{\nu}$(FUV) in erg s$^{-1}$ and SFR in \mdot, using f$_{\nu}$(153 nm) from Tables 1, 3, and 5. The ratio SFR(PAH)/SFR(UV) is a measure of the fraction of luminosity absorbed by dust and seen in the infrared compared to the fraction which escapes absorption and can be seen in the ultraviolet. 
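As an illustrative sketch (not part of the published analysis; the function names are ours), the two SFR calibrations and their ratio can be written as:

```python
import math

def sfr_pah(nu_l_nu_77):
    """SFR in solar masses per year from nu*L_nu(7.7 um) in erg/s,
    using log SFR(PAH) = log[nu*L_nu(7.7 um)] - 42.57 (+/- 0.2)."""
    return 10 ** (math.log10(nu_l_nu_77) - 42.57)

def sfr_uv(nu_l_nu_fuv):
    """Observed (extinction-uncorrected) SFR from nu*L_nu(153 nm) in erg/s,
    using log SFR(UV) = log[nu*L_nu(FUV)] - 43.26."""
    return 10 ** (math.log10(nu_l_nu_fuv) - 43.26)

def log_sfr_ratio(nu_l_nu_77, nu_l_nu_fuv):
    """log[SFR(PAH)/SFR(UV)], the obscuration measure used in this section."""
    return math.log10(sfr_pah(nu_l_nu_77) / sfr_uv(nu_l_nu_fuv))
```

For example, a source with $\nu$L$_{\nu}$(7.7 $\mu$m) = 10$^{44}$ erg s$^{-1}$ has SFR(PAH) = 10$^{1.43}$ $\approx$ 27 \mdot.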
Figure 9 shows SFRs determined from 7.7 \um PAH luminosity compared to SFRs determined from observed GALEX UV continuum for the three samples of starbursts. The line is reproduced from the full sample of 287 infrared-selected starbursts in \citet{sar09}. The GALEX and Markarian UV-selected samples show similar values of SFR(PAH)/SFR(UV), with median log SFR(PAH)/SFR(UV) = 0.8. If the discrepancy between SFR(PAH) and SFR(UV) is caused only by the obscuration of the UV, then this result indicates that only $\sim$ 15\% of the intrinsic UV luminosity is observed, even for these ultraviolet-selected samples. Much greater obscuration is found for the $Spitzer$-selected sample, with median log SFR(PAH)/SFR(UV) = 2.4, indicating that only 0.4\% of the intrinsic ultraviolet luminosity emerges. The GALEX sample represents the most extreme UV-selected starbursts, and these are systematically more luminous than the Markarian sources. In Figure 10, we transform this comparison to use M(UV) as a measure of luminosity. M(UV) is an AB magnitude, determined with the fundamental definition m$_{AB}$ = -2.5log f$_{\nu}$ - 48.60, for f$_{\nu}$ in erg cm$^{-2}$ s$^{-1}$ Hz$^{-1}$ \citep{oke74}. M(UV) is determined as the m(UV) which arises using L$_{\nu}$(153 nm) to determine f$_{\nu}$(153 nm) at a distance of 10 pc. The line shown is the linear least squares fit to all of the points. It is notable that this line has the opposite slope of the line in Figure 9. This result indicates that objects selected using ultraviolet observations of luminosity select in favor of sources having less obscuration. As a result, the dependence of obscuration on luminosity has the opposite sign of the dependence found when starbursts are selected in the infrared. Figure 10 shows the similarity of GALEX and Markarian sources in terms of obscuration as measured by SFR(PAH)/SFR(UV), but shows that the GALEX sample is systematically more luminous. 
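The construction of M(UV) just described can be sketched as follows (an illustrative helper, assuming L$_{\nu}$(153 nm) in erg s$^{-1}$ Hz$^{-1}$):

```python
import math

PC_CM = 3.0857e18  # one parsec in centimeters

def m_uv(l_nu_153):
    """Absolute AB magnitude at rest frame 153 nm: place L_nu (erg/s/Hz)
    at 10 pc and apply m_AB = -2.5 log f_nu - 48.60 (Oke 1974)."""
    f_nu = l_nu_153 / (4.0 * math.pi * (10.0 * PC_CM) ** 2)
    return -2.5 * math.log10(f_nu) - 48.60
```

A factor of 100 in L$_{\nu}$ corresponds, as expected, to 5 magnitudes in M(UV).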
\subsection{Comparing Infrared and Ultraviolet Starburst Luminosities to Determine Obscuration} While the ultimate use of infrared or ultraviolet spectra of starbursts is to determine the star formation rates, the dependence of this result on the amount of obscuration means that it is useful to discuss only those parameters which can measure the obscuration. In this way, we hope to determine a quantitative measure of obscuration in starbursts which is independent of the many assumptions required to relate observed luminosity to SFR from either ultraviolet or infrared spectra. Therefore, we introduce a new comparison which is only between the observed parameters used to determine PAH or UV luminosities. This avoids any of the uncertainties in converting to SFRs. These parameters are the observed f$_{\nu}$(7.7 $\mu$m) and f$_{\nu}$(153 nm) (Tables 1, 3, and 5). We assume negligible extinction of the PAH, so that the PAH luminosity represents the intrinsic luminosity of the starburst. This is justified by the extinction curve of \citet{dra89}, which shows small PAH extinction compared to optical or UV extinction. For example, from Draine, A(H$\alpha$)/E(J-K) = 6.5, but A(7.7$\mu$m)/E(J-K) = 0.07, so 5 magnitudes of H$\alpha$ extinction (1\% emerges) corresponds to only 0.05 mag of 7.7 \um extinction (95\% emerges). Figure 11 shows the comparison of infrared PAH flux density to ultraviolet flux density, f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm), as a function of luminosity $\nu$L$_{\nu}$(7.7 $\mu$m). The distribution of points among the three samples illustrates an empirical result that the smallest value of log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)] = 1. We take this as the empirical value which relates PAH luminosity to ultraviolet luminosity for a starburst with negligible obscuration of the UV. This is the most important result of the comparisons shown in Figure 11. 
It is an empirical result which can then be used to compare ultraviolet and infrared spectra for a quantitative determination of obscuration in the ultraviolet spectra. Figure 12 shows the same comparison but using M(UV) as the luminosity measure. This Figure also shows the empirical result that the limiting value of log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)] = 1, so we take this to be the value for an unobscured source. The UV-selected sources in both Markarian and GALEX samples have similar values of log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)], with a median of log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)] = 1.8, compared to the median for the $Spitzer$ sample of 3.4. Figures 11 and 12 show the strong offset in the samples resulting from the selection criteria. The infrared-selected starbursts have median values of f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm) larger by about a factor of 100 compared to the ultraviolet-selected starbursts. This is a direct indication that the infrared-selected sample shows extreme obscuration in the ultraviolet. Demonstrating obscuration in this way avoids the uncertainties and assumptions required when comparing SFRs determined from the different samples. Using the limit for f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm) as representing sources without obscuration, such that log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)] = 1 means no obscuration, Figures 11 and 12 then give the correction factor which can be used to determine the obscuration correction for a source. The fraction of UV luminosity intrinsically emitted compared to the luminosity observed, UV(intrinsic)/UV(observed), is given by log[UV(intrinsic)/UV(observed)] = log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)] - 1. \subsection{Obscuration Corrections to M(UV)} The correction for dust obscuration determined above in section 5.2 is compared with luminosity M(UV) in Figure 13. 
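The correction from section 5.2 referenced here can be sketched as a one-line helper (illustrative only; both flux densities must be in the same units):

```python
import math

def uv_correction(f_77, f_153):
    """UV(intrinsic)/UV(observed) from the observed flux-density ratio,
    using the empirical unobscured limit log[f_nu(7.7um)/f_nu(153nm)] = 1."""
    return 10 ** (math.log10(f_77 / f_153) - 1.0)
```

A source at the unobscured limit (ratio of 10) gives a correction of 1; a ratio of 1000 gives a correction of 100.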
Figure 13 illustrates our most important results regarding the selection effects arising from obscuration by dust for discovering starbursts. The luminosities of infrared-selected and ultraviolet-selected samples overlap, but the most luminous ultraviolet sources (the GALEX sample, diamonds in Figure 13) appear luminous because they have low obscuration. There is a very large range between the minimally obscured ultraviolet sources and the most obscured infrared sources, with the latter having dust correction factors exceeding 100 at M(UV) = -18, compared to a correction of less than a factor of 10 at this M(UV) for the ultraviolet-selected sources. We show in Figure 13 the linear fit for the most obscured $Spitzer$ starbursts (upper line), the sample in Table 5, but we also plot in Figure 13 the full sample of 287 infrared-selected starbursts (center line) having IRS and GALEX measures from \citet{sar09}. The distribution of these points shows that the ultraviolet-selected starbursts from the Markarian and GALEX samples (fit by the lower line) generally lie at the extremes of the full infrared-selected sample. The fits to the sources in Figure 13 give measures of obscuration which apply for starbursts so obscured that most of the ultraviolet luminosity is hidden. These fits are independent of measures derived only from reddening of ultraviolet observations, as in \citet{bou09} and \citet{bur05}. Even in the ultraviolet-selected samples, the obscuration is significant, being a factor of 8 to 10 with little dependence on luminosity. The linear least squares fit to the ultraviolet-selected samples is log[UV(intrinsic)/UV(observed)] = 0.07($\pm$0.04)M(UV)+2.09$\pm$0.69. This obscuration is about twice that used by \citet{red09} to correct for LBGs with z $>$ 3, although the M(UV) in Figure 13 overlap with the M(UV) of these high redshift sources. 
For the full infrared-selected $Spitzer$ sample (central line in Figure 13), the linear least squares fit is log[UV(intrinsic)/UV(observed)] = 0.17($\pm$0.02)M(UV)+4.55($\pm$0.4); the fit to the extreme infrared-selected $Spitzer$ sample in Table 5 (upper line) is log[UV(intrinsic)/UV(observed)] = 0.14($\pm$0.03)M(UV)+4.68$\pm$0.50. We make another estimate of effects of dust obscuration in sections 6 and 7, by determining the relation between M(UV) and total bolometric luminosities $L_{ir}$. This comparison is possible because we have an empirical scaling between $L_{ir}$ and $\nu$L$_{\nu}$(7.7 $\mu$m). \subsection{Ratio H$\alpha$/H$\beta$ as a Measure of Obscuration Compared to Infrared/Ultraviolet} A method often used to determine obscuration corrections for starbursts is the measure of reddening from the H$\alpha$/H$\beta$ ratio \citep{cal07}. This method works only for the emission lines which are able to emerge from the starbursts and so cannot account for totally obscured starbursts. It is useful, therefore, to attempt a comparison between the obscuration which would be deduced from H$\alpha$/H$\beta$ and that deduced from f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm). The primary objective of this comparison is to obtain another measure of the value f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm) for an unobscured starburst. Figure 14 compares H$\alpha$/H$\beta$ determined from SDSS spectra of the starbursts (Tables 1, 3, 5) with the ratio f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm) discussed in section 5.2. The relative H$\alpha$/H$\beta$ line fluxes were measured using $\lambda$f$_{\lambda}$ taking f$_{\lambda}$ at the peak of the line in the SDSS spectra. Values given in Tables 1, 3, and 5 are for all sources with SDSS spectra available. The intrinsic H$\alpha$/H$\beta$ ratio in an unobscured source should be 2.87, which indeed agrees with the lowest value in Figure 14. 
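As an illustrative sketch, the H$\alpha$ escape correction implied by an observed Balmer decrement follows from the \citet{cal07} coefficients k(H$\alpha$) = 2.47 and k(H$\beta$)$-$k(H$\alpha$) = 1.16 used in this section:

```python
import math

def halpha_correction(balmer_obs, k_ha=2.47, dk=1.16, balmer_int=2.87):
    """H-alpha(intrinsic)/H-alpha(observed) from the observed H-alpha/H-beta
    ratio, using k(Ha) = 2.47 and k(Hb) - k(Ha) = 1.16 (Calzetti 2007)."""
    # Color excess from the Balmer decrement:
    # balmer_obs = balmer_int * 10**(0.4 * dk * E(B-V))
    ebv = 2.5 / dk * math.log10(balmer_obs / balmer_int)
    # Extinction at H-alpha, converted to a flux correction factor
    return 10 ** (0.4 * k_ha * ebv)
```

An observed decrement equal to the intrinsic value of 2.87 returns a correction of 1 (no extinction), as expected.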
The comparisons in Figure 14 confirm the determination of log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)] = 1 for an unobscured source, found above in section 5.2. The line is a linear least squares fit to all of the points. The existence of a trend shows that there is indeed some relation between reddening observed for the Balmer lines and the obscuration shown by comparing PAH and UV luminosities. To quantify this relation, we show in Figure 15 a measure of obscuration for H$\alpha$ compared with the obscuration deduced from comparing f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm) discussed in section 5.2. From \citet{cal07}, k(H$\alpha$) = 2.47 and k(H$\beta$)-k(H$\alpha$) = 1.16. Knowing the intrinsic value H$\alpha$/H$\beta$ = 2.87, this differential extinction means that 0.47 log H$\alpha$(intrinsic) = 1.47 log H$\alpha$(observed) - log H$\beta$(observed) - 0.46. From this, we derive a value H$\alpha$(intrinsic)/H$\alpha$(observed). This fraction of escaping H$\alpha$ is compared to the fraction of escaping UV by using the empirical result in section 5.2 that log[UV(intrinsic)/UV(observed)] = log[f$_{\nu}$(7.7 $\mu$m)/f$_{\nu}$(153 nm)]-1. This comparison, shown in Figure 15, gives the expected result for highly obscured regions: the obscuration of the intrinsic UV is much greater than the obscuration which would be deduced from the Balmer lines. This is because the Balmer lines which can be observed emerge from outside of the most obscured regions, whereas our measure of ultraviolet obscuration applies to the total obscuration of the most hidden regions. \subsection{Comparing Starburst and Stellar Luminosities for Starburst Galaxies} Understanding the formation of galaxies requires, in addition to a measure of the SFR, a determination of the mass of stars already present. 
Such masses can be estimated from observations at various rest-frame wavelengths if stellar models and mixtures are assumed, as explained, for example, in \citet{red09}, \citet{lon09} and \citet{bus09}. For the starburst samples in the present paper, we compare the luminosities of starbursts with the luminosities of the underlying stellar component for the galaxy. This comparison is done by showing in Figure 16 the ratio of f$_{\nu}$(7.7 \ums) at the peak of the PAH feature to the rest frame f$_{\nu}$(1.6 \ums), which arises from the stellar continuum. For the Markarian, GALEX, and $Spitzer$ samples, the stellar luminosity is measured from the Two Micron All Sky Survey (2MASS; Skrutskie et al. 2006) $H$ band flux densities; no corrections for redshift are made because there are only small differences between the $H$ flux density and $K$ (2.2 \ums) flux density (Tables 1, 3, and 5.) Figure 16 is also useful for comparison of PAH luminosities among the various samples, including the DOGs at z $\sim$ 2. In Figure 16, we include numerous starburst DOGs from the optically faint, high redshift samples selected at f$_{\nu}$(24 \ums). All have published IRS spectra that allow the measure of f$_{\nu}$(7.7 \ums), and have photometry with the $Spitzer$ Infrared Array Camera (IRAC; Fazio et al. 2004) that allows measurement of rest frame 1.6 \um luminosity. These sources are found in \citet{wee06,yan07,far08} and \citet{des09}. These sources represent the most luminous sources known as measured with $\nu$L$_{\nu}$(7.7 $\mu$m) so they show the most luminous, most obscured, starburst ULIRGs \citep{wee08}. Most of these sources were selected as "bump" sources, for which an excess continuum is observed at 4.6 \um to 5.8 \um from giant and supergiant stars having lower opacity at rest frame $\sim$ 1.6 \um \citep{sim99}. 
The comparison in Figure 16 is with IRAC 5.8 \um fluxes to estimate rest frame f$_{\nu}$(1.6 \ums) because fluxes at 4.6 \um and 5.8 \um are so similar that no redshift corrections are made. Because the high redshift starbursts were selected based both on f$_{\nu}$(24 \ums) and f$_{\nu}$(5.8 \ums), it is expected that selection effects will restrict the range of f$_{\nu}$(7.7 \ums)/f$_{\nu}$(1.6 \ums), the measure of starburst luminosity/stellar luminosity. No such selection effect can apply for the lower redshift samples from the present paper, however, because 2MASS magnitudes played no role in choosing these sources. The similarity in the starburst to stellar ratio is striking among all of the samples. The range of 10$^{4}$ in PAH luminosity compares to a range of only a factor of 4 in the ratio of starburst to stellar luminosity. We do not attempt to calculate stellar masses because of uncertainties in converting a 1.6 \um stellar luminosity into a stellar mass. More importantly, the homogeneity of the f$_{\nu}$(7.7 \ums)/f$_{\nu}$(1.6 \ums) ratio among the samples in Figure 16 raises a fundamental question of what such a stellar mass measure at 1.6 \um means. We wonder if the stellar luminosity at 1.6 \um arises from supergiant stars very recently evolved from the starburst itself and does not reflect the underlying stellar mass of old stars. There is some evidence for this interpretation if we compare the ratios f$_{\nu}$(7.7 \ums)/f$_{\nu}$(1.6 \ums) among the different samples. This evidence is that the median value of the ratio f$_{\nu}$(7.7 \ums)/f$_{\nu}$(1.6 \ums) scales with obscuration determined from f$_{\nu}$(7.7 \ums)/f$_{\nu}$(153 nm), as measured in section 5.2. 
The ultraviolet-selected GALEX and Markarian sample combined have a median f$_{\nu}$(7.7 \ums)/f$_{\nu}$(1.6 \ums) = 8 (counting limits), the low redshift infrared-selected extreme $Spitzer$ sample in Table 5 (local DOGs) has median f$_{\nu}$(7.7 \ums)/f$_{\nu}$(1.6 \ums) = 13, and the high redshift DOGs sample of the most luminous, most dusty starbursts have median f$_{\nu}$(7.7 \ums)/f$_{\nu}$(1.6 \ums) = 20. While extinction at 1.6 \um is much less than in the ultraviolet, some extinction can also be expected at 1.6 \um for the most obscured sources. Because extinction at 7.7 \um is less than at 1.6 \um \citep{dra89}, the ratio f$_{\nu}$(7.7 \ums)/f$_{\nu}$(1.6 \ums) should scale with the amount of obscuration, more obscured sources having a higher ratio, if the 1.6 \um stellar luminosity arises within the starburst and is also obscured by dust. This is the trend which is observed among the samples, from most obscured $Spitzer$ sources to least obscured ultraviolet sources. By contrast, we would not expect such a scaling if the 1.6 \um luminosity arises from an underlying stellar component which is outside of the obscured starbursts. \section{Comparing Bolometric Luminosities $L_{ir}$ with M(UV)} The fundamental question for measurement of SFRs is which parameter best represents the integrated SFR over the life of the starburst \citep[e.g.][]{cal08, ros02}. This is the estimate required to determine the rate at which galaxies form. Different parameters are sensitive to different stars in the starbursts and to starbursts of different ages. For example, the H$\alpha$ emission depends on the luminosity of the most massive, short lived stars with the greatest ionizing radiation shortward of the Lyman limit, but L$_{\nu}$(153 nm) measures luminosity from hot stars longward of the Lyman limit, so can be influenced by lower mass, longer lived stars. 
Comparing these measures would not give the same result for starbursts with different mass distributions for the ionizing stars or different ages for the starburst. The preferred parameter is the total luminosity of all stars in the starburst, which does not depend on what fraction of the stellar luminosity ionizes hydrogen. This parameter is measured by $L_{ir}$, the total bolometric luminosity over all wavelengths, and it is dominated by infrared luminosity because most starburst luminosity is absorbed by obscuring dust. This is the basis of measures of SFR using infrared luminosities \citep{ken98}. The measure of SFR that we have used from $\nu$L$_{\nu}$(7.7 $\mu$m) is related empirically to $L_{ir}$ by using the starburst galaxies in \citet{bra06}, which all have IRAS flux measures allowing the determination of $L_{ir}$. The transformation adopted is log $L_{ir}$ = log[$\nu$L$_{\nu}$(7.7$\mu$m)] + 0.78 \citep{hou07}, and this is confirmed to extend to starbursts at z $\ga$ 2 using submillimeter determinations of $L_{ir}$ \citep{pop08}. (The $L_{ir}$ in Brandl et al. is a total 8-1000 \um infrared luminosity as estimated from IRAS fluxes according to \citet{san96}.) Using this transformation, we determine log $L_{ir}$ compared to M(UV) for all sources in our samples. Results are shown in Figures 17 and 18, comparing the ultraviolet-selected samples to the infrared-selected samples. In Figure 17, the lines which are fit compare the extreme $Spitzer$ sample of Table 5 to the ultraviolet Markarian+GALEX samples, i.e., the extremes of infrared selection and ultraviolet selection for starbursts. This Figure demonstrates, in an alternative fashion to the discussion in section 5.2, that there exists a population of very obscured starbursts which have larger luminosities $L_{ir}$ than found among ultraviolet-selected starbursts and which have faint M(UV) because of this obscuration. 
The offset between the fits in Figure 17 indicates that ultraviolet-selected starbursts of a given M(UV) can underestimate bolometric luminosities by about a factor of 30 over all luminosities when compared with the most obscured starbursts. Another way to state the result is that starbursts of the same $L_{ir}$ can be found at all -16 $>$ M(UV) $>$ -21, and those which appear faintest in M(UV) are so because of obscuration. If there are large populations of starbursts at z $>$ 3 which are as dusty and obscured as these extreme $Spitzer$ sources, the conclusion that only low luminosity galaxies exist at such redshifts is incorrect. In Figure 18, we show the samples compared with the fit found for the full infrared-selected sample of 287 starbursts from \citet{sar09}. In this case, differences between lines for the ultraviolet-selected sample (lower line) and the infrared-selected sample (upper line) are not as large as in Figure 17. The offset gives another estimate of obscuration corrections for ultraviolet-selected galaxies when compared to infrared-selected samples for comparison with the obscuration corrections in section 5.2. For the samples and fits in Figures 17 and 18, we conclude the following. For ultraviolet-selected galaxies, log $L_{ir}$ = -(0.33$\pm$0.04)M(UV)+4.52$\pm$0.69. For the full $Spitzer$ infrared-selected starburst sample, log $L_{ir}$ = -(0.23$\pm$0.02)M(UV)+6.99$\pm$0.41. For the extreme infrared-selected $Spitzer$ sample in Tables 5 and 6, log $L_{ir}$ = -(0.25$\pm$0.03)M(UV)+7.2$\pm$0.48, for $L_{ir}$ in \ldot~and M(UV) the AB magnitude at rest frame 153 nm. All of these interpretations depend on an accurate determination of $L_{ir}$. A crucial requirement is to improve the calibration of $L_{ir}$ to $\nu$L$_{\nu}$(7.7$\mu$m). We can make one test of the adopted calibration using our new IRS spectra of Markarian galaxies, because most have IRAS fluxes. 
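The three $L_{ir}$--M(UV) fits quoted in this section can be collected into one helper (an illustrative sketch; the sample labels are ours, and uncertainties are omitted):

```python
def log_lir(m_uv_mag, sample="uv"):
    """log L_ir (solar units) as a function of M(UV), from the linear fits
    quoted in the text for the three starburst samples."""
    fits = {
        "uv":      (-0.33, 4.52),  # ultraviolet-selected Markarian+GALEX
        "ir_full": (-0.23, 6.99),  # full Spitzer infrared-selected sample
        "ir_ext":  (-0.25, 7.20),  # extreme Spitzer sample, Tables 5 and 6
    }
    slope, intercept = fits[sample]
    return slope * m_uv_mag + intercept
```

At M(UV) = -20, for example, the ultraviolet-selected fit gives log $L_{ir}$ = 11.12, while the full infrared-selected fit gives 11.59.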
We show in Figure 19 the ratio $L_{ir}$/$\nu$L$_{\nu}$(7.7$\mu$m) for both the Brandl starbursts and for our new observations of Markarian galaxies in Table 2. Results in Figure 19 show that, within the uncertainties, the ultraviolet-selected Markarian galaxies have the same transformation from $L_{ir}$ to $\nu$L$_{\nu}$(7.7$\mu$m) as we have explained and adopted in previous analyses for $Spitzer$-selected sources \citep{hou07,wee08}. This is an important confirmation that this transformation and the subsequent use of $L_{ir}$ to determine SFRs is not modified by starbursts with the hotter ionizing continuum seen in the Markarian starbursts. Nevertheless, improvements to this calibration using far infrared and millimeter observations of starbursts with a variety of luminosities and redshifts are essential. \section{Obscuration Corrections and Evolution for Starburst Galaxies at High Redshift} Our motivation for comparing ultraviolet-selected and infrared-selected starbursts is to determine an independent measure of obscuration by dust, because the value of such a dust correction is crucial for measuring the SFR density. Present conclusions (summarized in Bouwens et al. 2009), which track the SFR density to z $\sim$ 7, indicate that obscuration for luminous galaxies, M(UV) $\la$ -20, is substantial, requiring a correction factor to ultraviolet luminosities of 6$\pm$2.5. The overall dust correction factor to total SFR density is taken as only a factor of $\sim$ 2, however, because the correction is smaller for low luminosity galaxies which dominate the observed SFR density. As concluded by Bouwens et al., "Because these same lower luminosity galaxies dominate the luminosity density in the UV continuum, the overall dust extinction correction remains modest at all redshifts and the evolution of this correction with redshift is only modest." 
Having a correct measurement of dust obscuration is crucial to determine the true SFR density as a function of redshift because the form of evolution for SFR density depends on this correction. For example, the observed decrease in the SFR density from z $\sim$ 2 to z $\sim$ 7 in \citet{red09} is comparable to the factors applied for dust correction for the most luminous sources. Many assumptions and explanations enter in the determination of such dust corrections, as thoroughly discussed in \citet{bou09} and \citet{red09}. Fundamentally, the adopted corrections depend on interpreting as reddening by dust the observed differences in the slopes of the ultraviolet continuum for LBGs of different luminosities. The ultimate measure of SFR derives from the corrected values of $L_{ir}$ which are finally adopted after applying the corrections. The consequences of different methods for determining corrections can be compared, therefore, simply by comparing the derived $L_{ir}$ as a function of ultraviolet luminosity, M(UV). These are the comparisons we now make using our methodology from section 6 for determining $L_{ir}$ based on infrared PAH luminosities. Results are shown in Figure 20. We consider that Figure 20 summarizes the most important results of our present paper. The relation between $L_{ir}$ and M(UV) for the various samples which we consider is compared to that adopted in \citet{bou09} for LBGs at high redshift used to determine the evolution of SFR density for z $>$ 2. In Figure 20, the line with asterisks is taken from Figure 10 of Bouwens et al. for z = 2.5. The comparison of this line with the points in Figure 20 indicates that the result adopted for high redshift LBGs really applies only to the lower envelope for all starbursts. Differences between this line and the other lines in Figure 20 show differences that would also arise for resulting measures of SFR density. 
The adopted results for LBGs agree with the full infrared-selected sample at M(UV) $\sim$ -21, but differ by a factor $>$ 10 at M(UV) = -17. Differences are greater at fainter M(UV) for reasons discussed in section 5.2, that the faintest sources in the ultraviolet are those with the greatest obscuration by dust. The crucial question that results from Figure 20 is: which line represents the true population of starburst galaxies at high redshift? Are the dust properties determined for our local samples (z $<$ 0.5) similar to properties in high redshift starbursts of similar M(UV)? Answering this question for z $>$ 2 is not feasible at present, because there are no significant samples of sources discovered at such redshifts using their dust emission. A direct comparison could be made at z $\sim$ 2 because of the large population of DOGs discovered by $Spitzer$ at about this redshift. A strong selection effect leads to a median redshift z = 1.8$\pm$0.2 for the starburst DOGs with f$_{\nu}$(24 \ums) $>$ 0.5 mJy, using the high redshift, dusty starbursts shown in Figure 16 (asterisks). This selection effect arises when the strong PAH 7.7 \um feature falls within the MIPS 24 \um band used for the surveys that find these DOGs. When the 24 \um selection is also coupled to a criterion of having a rest-frame 1.6 \um excess from stellar luminosity ("bump" sources; see section 5.5), there is nearly complete success in finding a dusty starburst with z $\sim$ 2 \citep{wee06,far08,des09}. This selection using "bump" sources means that existing $Spitzer$ surveys covering tens of deg$^{2}$, such as the First Look Survey \citep{fad06}, the Bootes survey \citep{dey08}, and the SWIRE surveys \citep{lon04}, can yield reliable lists of dusty starbursts in a well defined redshift interval near z $\sim$ 2, even if spectroscopic redshifts are not available for all of the sources. 
There is also a straightforward empirical transformation in this redshift interval that f$_{\nu}$(7.7 \ums) = (1.5$\pm$0.3)f$_{\nu}$(24 \ums), for DOGs like those in Figure 16. The space density and luminosity function of such starbursts could be determined to f$_{\nu}$(24 \ums) $>$ 0.3 mJy, for which f$_{\nu}$(7.7 \ums) $\sim$ 0.5 mJy, or log $\nu$L$_{\nu}$(7.7 $\mu$m) = 45.2 at z = 1.8. From Figures 17 and 18, this value of $\nu$L$_{\nu}$(7.7 $\mu$m) corresponds to -16 $>$ M(UV) $>$ -22, depending on whether the source is highly obscured or has lower obscuration such as in the ultraviolet-selected samples. This range of M(UV) is easily accessible in surveys for UVLGs at z $\sim$ 2, so ultraviolet surveys in the same regions covered by the $Spitzer$ surveys would make possible a direct comparison between infrared and ultraviolet selection of the starburst population at z $\sim$ 2. This would provide a definitive answer regarding obscuration corrections and measures of true SFR densities as a function of M(UV). Even though existing wide-area infrared surveys reach only the top end of the bolometric luminosity function at high redshifts, it is encouraging that tracking the most luminous sources gives results for the form of evolution over all redshifts that are consistent with results found from total SFR densities which reach deeper into luminosity functions. We reach this conclusion by comparing the evolution of the SFR density as illustrated in \citet{red09}, their Figure 10, with the luminosity evolution factor of (1+z)$^{2.5}$ found from $Spitzer$ surveys for the most luminous starbursts \citep{wee08}. This scaling for luminosity evolution matches, within the uncertainties, the scaling of total SFR density for 0 $<$ z $<$ 2 summarized in \citet{red09}. \section{Summary and Conclusions} 1. 
We have selected starburst galaxies discovered using both infrared and ultraviolet selection for comparison of spectral parameters measured with the $Spitzer$ Infrared Spectrograph to ultraviolet photometry from GALEX. Our primary objective was to determine how the luminosity and dust obscuration of starbursts depend on selection technique. Sources have z $<$ 0.5 and cover a luminosity range of $\sim$ 10$^{4}$; the highest luminosities are comparable to starburst luminosities found at z $\sim$ 3. 2. We measure and illustrate spectra of 26 Markarian galaxies, 23 ultraviolet luminous galaxies discovered with GALEX, and 50 galaxies with the most extreme infrared/ultraviolet ratios discovered with $Spitzer$. We find (section 4.3) that these samples differ in the nature of the ionizing stars, with hotter stars in the ultraviolet-selected sources as evidenced by stronger [NeIII] emission and stronger mid-infrared dust continuum (section 4, Figures 6 - 8.) 3. It is found that a strong selection effect arises for the ultraviolet-selected samples: the most luminous UV sources appear luminous because they have the least obscuration, not because they have the largest bolometric luminosities (section 5.3). Comparisons between rest-frame infrared fluxes f$_{\nu}$(7.7 \ums) arising from PAH emission and ultraviolet fluxes f$_{\nu}$(153 nm) arising from the stellar continuum allow an empirical method for determining obscuration in starbursts and dependence of this obscuration on infrared or ultraviolet luminosity. Even sources selected in the ultraviolet and which resemble Lyman Break Galaxies require obscuration corrections of about a factor of 10. These results are contrary to those adopted in studies of LBGs, which find that the most luminous galaxies in M(UV) are also the most obscured. 
Obscuration correction for the ultraviolet-selected Markarian+GALEX sample has the form log[UV(intrinsic)/UV(observed)] = 0.07($\pm$0.04)M(UV)+2.09$\pm$0.69; for the full infrared-selected $Spitzer$ sample it is log[UV(intrinsic)/UV(observed)] = 0.17($\pm$0.02)M(UV)+4.55($\pm$0.4); and it is log[UV(intrinsic)/UV(observed)] = 0.14($\pm$0.03)M(UV)+4.68$\pm$0.50 for the extreme infrared-selected $Spitzer$ sample (Figure 13). 4. Analysis of obscuration determined from the H$\alpha$/H$\beta$ ratio in SDSS spectra confirms the trends observed by comparing infrared/ultraviolet ratios, although the obscuration measured by H$\alpha$/H$\beta$ is much less than that measured by infrared/ultraviolet (section 5.4 and Figure 15). 5. Stellar luminosities measured at rest frame 1.6 \um are exceptionally uniform compared to starburst luminosity over a redshift range 0 $<$ z $<$ 2 and a luminosity range of 10$^{4}$, but the ratio from different samples indicates that the 1.6 \um stellar luminosity is also obscured by dust in proportion to the ultraviolet obscuration. This leads us to suggest that stellar luminosities measured at 1.6 \um reveal the recently evolved stars from the starburst rather than an underlying old population (section 5.5 and Figure 16). 6. We examine the relation of total bolometric luminosity $L_{ir}$ to M(UV) for infrared-selected and ultraviolet-selected samples. Spectroscopy of Markarian galaxies confirms the transformation assumed for $\nu$L$_{\nu}$(7.7 $\mu$m) to $L_{ir}$ and implies that the hotter ionizing stars in the uv-selected samples do not affect this calibration. For ultraviolet-selected galaxies, log $L_{ir}$ = -(0.33$\pm$0.04)M(UV)+4.52$\pm$0.69. For the full infrared-selected sample, log $L_{ir}$ = -(0.23$\pm$0.02)M(UV)+6.99$\pm$0.41, and log $L_{ir}$ = -(0.25$\pm$0.03)M(UV)+7.2$\pm$0.48 for the extreme infrared-selected $Spitzer$ sample, all for $L_{ir}$ in \ldot and M(UV) the AB magnitude at rest frame 153 nm (section 6 and Figure 18). 7.
Our most important summary result (section 7 and Figure 20) indicates that the bolometric luminosities deduced for starburst galaxies using LBGs at z $>$ 2 are insufficient, by a factor of at least 10 for M(UV) $\sim$ -17. This also implies that the obscuration corrections by factors of two to three applied for 3 $<$ z $<$ 7 to track the formation and evolution of the earliest galaxies are insufficient, if dusty galaxies comparable to those at z $<$ 3 exist at such epochs. We propose an observational test which can determine more accurate dust corrections and $L_{ir}$ for large samples of infrared-selected starbursts and LBGs at z $\la$ 2. This test is possible because existing $Spitzer$ surveys allow reliable discovery of dusty starbursts with z = 1.8 $\pm$ 0.2 based only on photometric properties. \acknowledgments We thank Don Barry for continued technical assistance and V. Lebouteiller and J. Bernard-Salas for providing the SMART-Optimal Extraction software. This work is based on observations made with the $Spitzer$ Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under NASA contract 1407. Support for this work by the IRS GTO team at Cornell University was provided by NASA through Contract Number 1257184 issued by JPL/Caltech. Support was also provided by the US Civilian Research and Development Foundation under grant ARP1-2849-YE-06. LAS and DWW thank D. Engels and the Hamburger Sternwarte for hospitality during preparation of this paper, and LAS acknowledges the Deutsche Forschungsgemeinschaft for support through grant En 176/36-1.
\section{Introduction} Chemical analysis at trace levels is crucial in various fields such as food safety screening, \cite{wang2021emerging,choi2019emerging} environmental pollutant monitoring, \cite{wong2021nanozymes,zhu2018fluorescent} clinical forensics, \cite{sauvage2006screening,steuer2019metabolomic} and biological contaminant sensing. \cite{rasheed2019environmentally,liu2018hundreds} A preliminary step of analyte extraction and enrichment prior to detection helps improve the limit of detection (LoD) of analytical tools.\cite{aguirre2015dispersive,ramos2012critical} Owing to their large active surface area, microscopic droplets are of great interest for preconcentration of target analytes across the droplet-liquid interface at extremely low concentrations.\cite{henschke1999mass,anthemidis2009development, feng2018droplet, zhang2022biphasic} The enhanced preconcentration depends on the partition of the compound between the droplet liquid and the surrounding sample liquid. Recently, dispersive liquid-liquid microextraction (DLLME) has received great attention because of its rapid and efficient extraction. \cite{rezaee2006determination,galuch2019determination,lemos2022syringe,gallo2021dispersive,grau2022use,carbonell2021natural,shojaei2021application,ji2021hydrophobic,wang2021strategies} DLLME is a spontaneous emulsification technique based on a ternary mixture containing a dispersive solvent, an extractant, and an aqueous sample solution. \cite{vitale2003liquid,rezaee2009dispersive} The formation of extractant microdroplets increases the interfacial mass transfer of the analytes, leading to increased extraction efficiency. The microdroplets are then centrifuged and collected from the bulk for subsequent analysis. The main disadvantages of DLLME are the requirement of two individual steps for extraction and equipment-assisted sample separation, as well as the consumption of relatively large amounts of disperser solvent.
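The partition-limited preconcentration described above follows the textbook single-stage liquid-liquid extraction mass balance. The short sketch below is a generic illustration (the partition coefficient and volumes are made-up values, not taken from any of the works cited here) of why a small extractant volume yields a large enrichment factor:

```python
def single_stage_extraction(K, V_org, V_aq, C0_aq=1.0):
    """Equilibrium mass balance C0_aq*V_aq = C_aq*V_aq + C_org*V_org,
    with partition coefficient K = C_org / C_aq at equilibrium."""
    C_aq = C0_aq * V_aq / (V_aq + K * V_org)
    C_org = K * C_aq
    recovery = C_org * V_org / (C0_aq * V_aq)  # fraction of analyte extracted
    enrichment = C_org / C0_aq                 # preconcentration factor
    return recovery, enrichment

# Hypothetical numbers: 10 uL of extractant in 10 mL of sample, K = 500
recovery, enrichment = single_stage_extraction(K=500, V_org=10e-6, V_aq=10e-3)
print(recovery, enrichment)  # ~1/3 of the analyte, concentrated ~333-fold
```

The enrichment factor is bounded by V$_{aq}$/V$_{org}$, which is why nanoliter- to picoliter-scale droplets in contact with a flowing sample can, in principle, reach very high preconcentration.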
Surface nanodroplets on immersed substrates provide an alternative platform for efficient extraction. The most-used method to induce surface nanodroplets is the solvent exchange. In this process, surface nanodroplet nucleation and subsequent growth occur due to a transient oversaturation of the droplet liquid when a good solvent of the droplet liquid is displaced by a poor solvent. \cite{zhang2015formation} Surface nanodroplets exhibit a large surface-to-volume ratio and excellent long-term stability against evaporation or dissolution. \cite{li2020speeding, an2015wetting} These features enable surface nanodroplets to continuously extract and enrich trace amounts of solutes from the flow of an aqueous solution. The extraction efficiency of surface nanodroplets produced on a planar or a curved surface was investigated by Yu et al. in 2016. \cite{yu2016large} The term 'nanoextraction', describing liquid–liquid extraction based on surface nanodroplets, was proposed by Li et al. in 2019. \cite{li2019functional} Nanoextraction can be integrated with surface reactions for further application in chemical analysis. It has been demonstrated that many chemical reactions in nanodroplets are faster and more effective than their macroscopic counterparts in a bulk medium. \cite{zhang2016compartmentalized,piradashvili2016reactions} Reactive components can be introduced into a single droplet to impart multi\textendash functionalities under well-controlled mixing conditions, without the uncontrolled collision, coalescence, or Ostwald ripening that affect droplet reactions in a bulk system. Nanoextraction coupled with in-situ surface reactions provides an active platform for the synthesis of functionalized nanomaterials and combinative analysis.\cite{li2019functional,dabodiya2022sequential} This review aims to summarize the latest progress and point out the current research trends of nanoextraction.
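The transient oversaturation driving nucleation can be illustrated with a toy calculation. The linear dilution path is generic ternary-mixture arithmetic; the exponential solubility curve below is an assumed model binodal for illustration only, not data from the cited works. As water displaces the oil/ethanol solution, the oil content on the dilution path temporarily exceeds the oil solubility:

```python
import math

def dilution_path(oil0, f_water):
    """Ternary composition as solution A (oil fraction oil0 in ethanol)
    is linearly displaced by water; f_water runs from 0 (pure A) to 1."""
    oil = oil0 * (1.0 - f_water)
    ethanol = (1.0 - oil0) * (1.0 - f_water)
    return oil, ethanol, f_water

def oil_solubility(f_ethanol, s_water=1e-4, k=8.0):
    """Toy binodal: oil solubility rising exponentially with ethanol content."""
    return s_water * math.exp(k * f_ethanol)

# Oversaturation along the path: positive values mark the 'ouzo' pulse
oil0 = 0.02
oversaturation = []
for i in range(101):
    oil, ethanol, _ = dilution_path(oil0, i / 100)
    oversaturation.append(oil - oil_solubility(ethanol))

# Undersaturated in pure solution A and in pure water, oversaturated in between
print(oversaturation[0] < 0, max(oversaturation) > 0, oversaturation[-1] < 0)
```

The positive part of this curve is the analogue of the oversaturation pulse discussed below; its integrated area controls how much droplet liquid is supplied.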
Herein, we begin by briefly introducing the historical development of droplet-based extraction, followed by elaborating the formation principle and control parameters of surface nanodroplets. Then, we discuss the current research progress of nanoextraction, as well as reaction-assisted nanoextraction. Finally, we give our perspective on the future development of nanoextraction in terms of analytical practices. Prior review articles on surface nanodroplets mainly focus on their formation and dissolution dynamics. \cite{lohse2015surface,qian2019surface} To the best of our knowledge, there has been no review article discussing nanoextraction based on surface nanodroplets. We hope this review can support the further development of nanoextraction in trace chemical component analysis. \begin{figure}[htbp] \centering \includegraphics[height=8cm]{History.pdf} \caption{(A) Illustration of the classical single drop microextraction (SDME) process. (B) Schematic illustration of a ternary phase diagram of oil (O), water (W), and solute (S). Oil or water droplets can be formed in the ouzo region or reverse ouzo region, respectively. (C) General schematic of the classical DLLME procedure. Panels (A) and (B) taken from ref.\cite{lohse2020physicochemical} Copyright 2020 Springer Nature. Panel (C) reproduced with permission from ref.\cite{rezaee2010evolution} Copyright 2009 Elsevier B.V. } \label{fgr:History} \end{figure} \section{Development of droplet-based extraction} Single drop microextraction (SDME), as a miniaturized solvent extraction technique, was explored by Liu et al.\cite{liu1996analytical} and Jeannot et al. \cite{jeannot1996solvent} in 1996. In a typical SDME process, an organic microdroplet ($\sim$1\textendash 10 $\mu$L) is suspended at the tip of a syringe needle which is surrounded by an aqueous sample solution (Fig. \ref{fgr:History}A).
After extraction, the drop with enriched analytes is withdrawn and injected into an analytical instrument for detection and quantification. The extraction of target compounds from the aqueous phase into an organic solvent drop is based on passive diffusion. \cite{yangcheng2006directly,sikanen2010implementation} The extraction efficiency is intrinsically determined by the analyte partition coefficient between the droplet liquid and water. Different approaches to perform SDME, such as continuous-flow microextraction, \cite{liu2000continuous,wu2016dynamic} headspace SDME, \cite{vidal2005headspace,snow2010novel} drop-to-drop microextraction, \cite{wijethunga2011chip,shrivas2007rapid} and direct immersion-SDME, \cite{ruiz2014ternary,nunes2020direct} have been extensively explored for analytical applications. Jeannot et al.\cite{jeannot2010single} summarized the historical development and pointed out the advantages and disadvantages of the various modes of the SDME technique. A recent review from 2021 focused on the applications of SDME combined with multiple analytical tools (spectroscopy, chromatography, and mass spectrometry).\cite{kailasa2021applications} The popularity of SDME mainly lies in its cost-effectiveness and low consumption of organic solvent. Its drawbacks are the instability of the hanging droplet and its small surface area, and consequently slow diffusion kinetics and limited sensitivity. DLLME, as a modified solvent microextraction technique, was introduced in 2006 by Rezaee et al. for the extraction of organic analytes from aqueous samples. \cite{rezaee2006determination} In this method, multiple extractant microdroplets are formed and stably dispersed in an aqueous sample containing the target analytes. The formation of small droplets is based on spontaneous emulsification in a ternary system, which is well known as the "Ouzo effect".
\cite{tan2016evaporation} A typical ternary mixture consists of a good solvent (e.g., ethanol), a poor solvent (e.g., water), and a small ratio of extractant (e.g., oil). The three-phase diagram of a representative ternary system is shown in Fig. \ref{fgr:History}B. The nucleation and growth of the droplets occur spontaneously in the ouzo regime, which is bounded by the binodal and spinodal curves. \cite{aubry2009nanoprecipitation} This surfactant-free emulsification is kinetically stable over hours or days.\cite{prevost2021spontaneous} The dramatically increased surface area leads to enhanced diffusion kinetics and high recovery efficiency. The typical procedure of DLLME is demonstrated in Fig. \ref{fgr:History}C: microdroplets of extractant are spontaneously formed when a mixture containing an extracting solvent and a disperser solvent is rapidly injected into an aqueous sample solution. The partition equilibrium at the droplet interface is reached in a few seconds owing to the large surface area. Consequently, the extraction is almost independent of time.\cite{assadi2010determination} The cloudy emulsion is then centrifuged to collect the droplets with the concentrated target compound for analysis. Over the years, the DLLME technique has been developed from its basic approach into many other advanced configurations. Recent advances in DLLME concern the selection of extracting and disperser solvents,\cite{deng2019hexafluoroisopropanol,el2019deep} combination with other extraction techniques,\cite{shamsipur2015extraction,rai2016comparative} association with derivatization reactions,\cite{sajid2018dispersive,pinto2018quantitative} mechanical agitation-assisted emulsification, \cite{mansour2017solidification,de2015new} etc. A review devoted specifically to DLLME was published by Rezaee et al. \cite{rezaee2010evolution} in 2010, comprehensively summarizing the early development of DLLME. Sajid et al.
\cite{sajid2022dispersive} reviewed the latest advancements of DLLME with respect to its evolved device designs, green aspects, and application extensions. Compared to SDME, DLLME greatly improves the enrichment efficiency. Some limitations still exist, such as the consumption of toxic extraction solvents (e.g., carbon tetrachloride, chlorobenzene, and cyclohexane).\cite{rezaee2010evolution} \begin{figure}[htbp] \centering \includegraphics[height=9.5cm]{Formation.pdf} \caption{(A) Schematic showing the typical solvent exchange process. (B) Sketch showing the solution composition during the formation of oil nanodroplets in the solvent exchange. The dilution path represents the solution composition with an oil-to-ethanol ratio the same as in solution A. The overall oil supply (oversaturation level) is reflected by the shaded area. (C) Sketch showing the multicomponent surface nanodroplets comprising three different oils. (D) Plots of oil composition in the binary droplets compared to their initial ratio in solution A. (E) Schematic illustration of the continuous mixing setup for the formation of multicomponent surface nanodroplets. The flow distributor is used to mix different solutions $A_i$ (i = 1, 2, 3) and introduce the mixture into a solvent exchange chamber. Panel (A) taken from ref.\cite{lohse2020physicochemical} Copyright 2020 Springer Nature. Panel (B) reprinted with the permission from ref. \cite{lu2015solvent} Copyright 2015 American Chemical Society. Panels (C) and (D) reproduced with the permission from ref.\cite{li2018formation} Copyright 2018 American Chemical Society. Scheme (E) taken from ref. \cite{you2021tuning} Copyright 2021 John Wiley \& Sons, Inc. } \label{fgr:Formation} \end{figure} \section{Formation of surface nanodroplets} The formation principle of surface nanodroplets is also based on the "ouzo effect".\cite{zhang2015formation} The standard protocol of the solvent exchange to form surface nanodroplets is shown in Fig. \ref{fgr:Formation}A.
Different from the generation of dispersive microdroplets in a DLLME process, surface nanodroplets are induced by an oversaturation pulse at the interacting front of the ouzo solution (solution A) and the poor solvent (solution B) in a narrow chamber.\cite{lohse2020physicochemical} As shown in Fig. \ref{fgr:Formation}B, the oversaturation level is given by the integrated area between the binodal curve and the dilution path through the ouzo region. The nanodroplets pinned on the substrate are 5\textendash 500 nm in height and 0.1\textendash 10 $\mu$m in lateral diameter. \cite{li2019controlled} The surface properties (e.g., hydrophobicity, patterns, etc.) of the substrate affect droplet formation, growth, and distribution.\cite{bao2015highly,lohse2015surface} The wettability of the substrate needs to be compatible with the droplet liquid: oil droplets can be formed on hydrophobic substrates, while water droplets can be formed on hydrophilic substrates. Prepatterned microdomains on the substrate define the spatial arrangement and lateral dimension of the surface droplets.\cite{bao2015highly} The growth of the nucleated droplets follows the constant contact angle mode while the lateral diameter is smaller than that of the domain. After the droplets reach the domain circumference, growth proceeds in the constant contact radius mode until the solvent exchange is completed. Finally, highly ordered arrays of surface nanodroplets with well\textendash defined sizes are generated on the patterned substrate. On unpatterned substrates, surface nanodroplets are sparsely and heterogeneously distributed over the whole area.
\cite{zhang2012transient} A variety of liquids, such as water,\cite{lu2016influence, wei2020integrated} alkanes, \cite{lu2015solvent, lu2016influence} alcohols,\cite{dyett2018coalescence,dyett2018growth} fatty acids,\cite{lu2017universal} and other oils,\cite{tan2016evaporation} can be used to produce surface nanodroplets based on an appropriate ternary system. The formation of nanodroplets is tunable in the solvent exchange. More specifically, the solvent type and solution composition influence the oversaturation in the three-phase diagram and consequently the number density and size of the nanodroplets.\cite{lu2015solvent, lu2016influence} The droplet diameter increases almost linearly with the square root of the shaded area in Fig. \ref{fgr:Formation}B. For a given ternary system, the droplet size can be controlled by the flow rate and the chamber geometry.\cite{zhang2015formation,dyett2018growth} Qian et al. \cite{qian2019surface} systematically summarized the effects of different factors on the formation of nanodroplets. The flexibility in droplet liquid, tunable droplet distribution and size, and long-term stability make surface nanodroplets good candidates for extraction. Multicomponent surface droplets comprising more than one compound can also be produced by the solvent exchange (Fig. \ref{fgr:Formation}C).\cite{li2018formation} In this process, different droplet liquids are mixed at a certain ratio in solution A prior to the solvent exchange. As indicated in Fig. \ref{fgr:Formation}D, the oil 1 content (oil 1/(oil 1 + oil 2)) in the binary droplets varies from 0 to 1 with the initial ratio of the two liquids in the solution.
Nevertheless, the ratio of these two oils in the droplets is not directly given by their composition in the original solution but is governed by the oversaturation ratios of each component in the three\textendash phase diagram.\cite{li2018formation} Following the same principle, ternary droplets with a desired ratio can be generated by controlling the oversaturation of each oil. You et al. \cite{you2021tuning} developed a continuous flow-in setup to further simplify the formation and formulation of multicomponent droplets. As shown in Fig. \ref{fgr:Formation}E, a flow distributor is employed as a passive mixer to introduce more than one stream of solution A during the solvent exchange. The composition of the produced surface nanodroplets can be easily tuned with high reproducibility by adjusting the flow rate ratios of the streams of solution A. These controllable multicomponent surface droplets are of great interest for compartmentalized chemical reactions and microanalytics.\cite{zheng2021microfluidic,lohse2020physicochemical} \begin{figure*}[htbp] \centering \includegraphics[height=14cm]{Extraction-1.pdf} \caption{ (A) Schematic illustration showing the extraction of analytes from the sample flow into the surface nanodroplets. (B) Fluorescent image showing a red dye in water (10 nM) being extracted into the surface droplet branches over time. Scale bar: 200 $\mu$m. (C) Fluorescent images showing selective extraction by biphasic surface droplets from a mixture of R6G and fluorescein under different fluorescence excitations. Scale bar: 20 $\mu$m. (D) Sketch showing the extraction of analytes from a dense suspension into the surface nanodroplets. (E) Image of the oil sand wastewater consisting of bitumen, solid particles, and hydrocarbons. (F) Fluorescent image showing the surface nanodroplets after extracting 10$^{-6}$ M Nile red dye from the oil sand wastewater ($\sim$ 30 wt\% solid content). Scale bar: 100 $\mu$m.
Scheme (A) reproduced with the permission from ref. \cite{yu2016large} Copyright 2016 American Chemical Society. Panel (B) taken from ref. \cite{lu2017universal} Copyright 2022 National Academy of Science. Panel (C) taken from ref.\cite{li2020encapsulated} Copyright 2020 John Wiley \& Sons, Inc. Panels (D)\textendash(F) reproduced with the permission of ref. \cite{you2021surface} Copyright 2021 The Royal Society of Chemistry. } \label{fgr:Extraction-1} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[height=10cm]{Extraction-2.pdf} \caption{ (A) Schematic showing the collection of surface nanodroplets by the capillary force between air and water. (B) Image showing the collected octanol in the capillary tube. (C) UV-Vis absorbance spectra of collected octanol droplets after the extraction from aqueous sample flows with chlorpyrifos concentrations from $2\times10^{-8}$ M to 10$^{-6}$ M. (D) Schematic illustration showing the formation process of the surface nanodroplets in an evaporating liquid film. (E) FTIR spectra of octanol droplets after extraction of 10$^{-3}$ M nonanoic acid, octanol, and nonanoic acid. Panels (A), (B), and (C) taken from ref. \cite{li2022surface} Copyright 2022 American Chemical Society. Panels (D) and (E) reproduced with the permission from ref. \cite{qian2020one} Copyright 2019 John Wiley \& Sons, Inc.} \label{fgr:Extraction-2} \end{figure*} \section{Nanoextraction based on surface nanodroplets} \citet{yu2016large} illustrated the feasibility of surface nanodroplets for nanoextraction. Highly ordered surface droplets were generated on a vertical prepatterned substrate and used to extract the model compound rhodamine 6G (R6G) from the flow (Fig. \ref{fgr:Extraction-1}A). The enrichment efficiency was quantified by the fluorescence intensity ratio between the oil droplets and the surrounding medium.
The results showed that the concentration of R6G in the oil droplets was increased by a factor of 3, indicating significant enrichment of the hydrophobic solute from the highly diluted aqueous solution. \citet{lu2017universal} studied droplet branches confined in a quasi-2D channel for nanoextraction. As illustrated in Fig. \ref{fgr:Extraction-1}B, the fluorescence intensity of the nanodroplet branches gradually increased over time, demonstrating that the red dye in water is continuously extracted and accumulated into the oil nanodroplet branches. Li et al. \cite{li2020encapsulated} produced biphasic surface droplets consisting of water nanodroplets enclosed in oil microdroplets. This biphasic feature of the droplet unit allows selective concentration of both hydrophobic and hydrophilic analytes from a mixture flow. Fig. \ref{fgr:Extraction-1}C demonstrates the selective fluorescence enhancement of the host and encapsulated droplets. An aqueous mixture containing hydrophobic R6G and hydrophilic fluorescein was introduced to the host droplet for extraction. These two analytes can be excited separately in different wavelength ranges: R6G is concentrated into the host droplet while fluorescein is enriched into the encapsulated nanodroplets. The compartmentalized nature of the biphasic droplets may contribute to broader applications in sensing and diagnostic fields. You et al. \cite{you2021surface} developed nanoextraction in a capillary tube. Due to the narrow diameter of the capillary, reliable extraction is achieved with a sample volume as low as 50 $\mu$L. This method enables extraction of trace compounds from slurries with highly concentrated solid particles. As demonstrated in Fig. \ref{fgr:Extraction-1}D, the target compound in a suspension sample is steadily extracted into the surface nanodroplets.
Although some solid particles adsorb and aggregate at the droplet-water interface, their influence on the extraction and in-situ detection is minimal. The LoD of a model compound (Nile red) was found to be 10 nM for aqueous samples and 1 nM for slurry samples. Further, the successful extraction and detection of analytes from oil sand wastewater ($\sim$ 30 wt\% solid content) indicates the potential of this technique for environmental monitoring (Fig. \ref{fgr:Extraction-1}E,F). In addition to in-situ analysis, the surface nanodroplets with extracted analytes in a 3-meter hydrophobic capillary tube can be collected (total volume $\geq$ 2 $\mu$L) for ex-situ detection.\cite{li2022surface} The collection of the surface nanodroplets is driven by the capillary force between water and air (Fig. \ref{fgr:Extraction-2}A,B). The collected octanol was then analyzed by UV-vis spectrometry (Fig. \ref{fgr:Extraction-2}C). The LoD was found to be $\sim$ $2\times10^{-9}$ M for two representative micropollutants, triclosan and chlorpyrifos. A linear range above $2\times10^{-7}$ M was achieved for quantification. This nanoextraction method was also shown to be compatible with GC-MS and fluorescence microscopy. The formation of surface nanodroplets can be based not only on the solvent exchange method but also on an evaporating ternary liquid microfilm. Qian et al.\cite{qian2020one} investigated the nanoextraction efficiency in an evaporating thin liquid film on a spinning substrate. In brief, in a ternary system consisting of a target analyte in water, an extractant oil, and a co-solvent ethanol, nanodroplets of oil formed spontaneously in the film upon rapid evaporation of the ethanol (Fig. \ref{fgr:Extraction-2}D). In this case, an even smaller amount of analyte solution (5 $\mu$L) is required, and the entire process can be completed in 10 s.
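LoD values such as those quoted above are conventionally estimated from the blank noise and the calibration slope via the 3$\sigma$ criterion. The sketch below is a generic illustration of that criterion with hypothetical readings and slope; it is not the actual calibration procedure of the studies cited here:

```python
import statistics

def lod_3sigma(blank_signals, calibration_slope):
    """IUPAC-style limit of detection: 3 x s.d. of the blank / slope."""
    return 3.0 * statistics.stdev(blank_signals) / calibration_slope

# Hypothetical blank fluorescence readings and calibration slope (signal per M)
blanks = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0]
slope = 5.0e9
print(lod_3sigma(blanks, slope))  # on the order of 1e-10 M
```

Reducing the blank noise (stable, pinned droplets) or steepening the calibration slope (stronger preconcentration) both lower the LoD, which is how nanoextraction improves detection without changing the detector itself.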
The LoD of the model compound Nile Blue was found to be 10$^{-12}$ M by in-situ fluorescence detection. Besides, droplets with extracted nonanoic acid enabled in-situ chemical identification and quantification by an FTIR microscope (Fig. \ref{fgr:Extraction-2}E). The successful online and offline detection of surface nanodroplets by various analytical tools indicates that nanoextraction can be a general approach for sample preconcentration. \section{Nanoextraction coupled with in-droplet reaction} \begin{table*}[htbp] \begin{threeparttable} \caption{ Level of enhancement $p/p^*$ of long-chain alkyl acids. Reproduced with the permission of ref.\cite{wei2022interfacial} Copyright 2022 American Chemical Society. } \begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}cccccc} \hline Acid & 0.5 mM & 1 mM & 5 mM & 50 mM & 100 mM \\ \hline Acetic acid & 2.25 & & 1.01 & 1 & \\ N-butyric acid & & 11.4 & 3.4 & 1 & \\ Gallic acid & & 4.2 & 1.1 & 1 & \\ 4-hydroxybenzoic acid & & & 7.3 & 1.5 & 1 \\ Benzoic acid & & & 6.2 & 1.2 & 1 \\ \hline \end{tabular*} \end{threeparttable} \end{table*} \begin{figure*}[htbp] \centering \includegraphics[height=12cm]{Reaction.pdf} \caption{ (A) Schematics showing the fabrication of AgNPs, the enrichment of analytes, and the in-situ SERS detection based on surface droplets. (B) Raman spectra of surface nanodroplets after the extraction from aqueous sample flows with R6G concentrations from 10$^{-9}$ M to 10$^{-6}$ M. (C) Optical images of a droplet containing bromocresol purple before and after interacting with $1.7\times 10^{-4}$ M acetic acid. (D) Optical images of a droplet containing bromocresol green before and after interacting with $1.0\times 10^{-4}$ M acetic acid. Scale bar: 25 $\mu$m. (E) Droplet decoloration time versus the reciprocal of droplet size for different types of acid. The acid concentration is 5 mM. (F) Detection of different Chinese spirits by surface droplets. Panels (A) and (B) taken from ref.
\cite{li2019functional} Copyright 2019 John Wiley \& Sons, Inc. Panels (C)\textendash(F) taken from ref.\cite{wei2020integrated} Copyright 2022 American Chemical Society.} \label{fgr:Reaction} \end{figure*} Nanoextraction can be combined with surface reactions for online chemical analysis. \cite{li2019functional} As shown in Fig. \ref{fgr:Reaction}A, binary droplets comprising a reactive component (Vitamin E) and a nonreactive component (1-octanol, OCT) were produced by the solvent exchange process. AgNO$_3$ aqueous solution was then injected to react with the Vitamin E in the droplets and generate silver nanoparticles (AgNPs). The OCT in the droplets acts as a preconcentration agent to extract the analyte from a continuous flow. In-situ, sensitive surface-enhanced Raman spectroscopy (SERS) detection was achieved with the AgNPs-functionalized droplets (Fig. \ref{fgr:Reaction}B). The LoD reaches 8 $\times$ 10$^{-11}$ M for methylene blue, 3 $\times$ 10$^{-9}$ M for malachite green, and 10$^{-10}$ M for R6G. Moreover, quantitative and reproducible detection of R6G was achieved within a large linear range from 10$^{-9}$ to 10$^{-6}$ M. \citet{wei2020integrated} integrated nanoextraction and colorimetric reactions for acid detection through the droplet decoloration time. The formed aqueous droplets contain two halochromic compounds, bromocresol green and bromocresol purple. The acid dissolved in an oil flow is extracted into the water droplets. The reaction of the extracted acid with the halochromic compound leads to a pH change and subsequent decoloration of the droplets over a certain time. The decoloration can be simply identified with an optical microscope (Fig. \ref{fgr:Reaction}C,D). It is clear in Fig. \ref{fgr:Reaction}E that the time for droplet decoloration depends on the type of acid. This chemical specificity allows the identification of different acid mixtures.
The authors successfully applied this drop-based analysis method to distinguishing counterfeit alcoholic spirits by comparing their decoloration times. For example, the decoloration times of three famous Chinese spirits, Fenjiu, Maotai, and Erguotou, are distinguishable due to their characteristic acid profiles (Fig. \ref{fgr:Reaction}F). In a follow-up work by \citet{wei2022interfacial}, the authors experimentally and theoretically studied the mechanism behind the enhanced extraction by surface nanodroplets. The droplet decoloration process is associated with three steps: 1) interfacial partition, 2) dissociation, and 3) colorimetric reaction. Step 1 is recognized as the rate-limiting step. The rate of change of the acid concentration in the droplets resulting from these three steps can be quantified by: \begin{equation} \frac{dC_w(t)}{dt}=\frac{C}{R}\beta\left(\frac{C_o(t)}{p}-C_w(t)\right)+rC_w(t) \label{e1} \end{equation} The first term on the right represents the interfacial transfer of acid (step 1). The second term represents the reaction of acid in the droplets (steps 2 and 3). From eqn (\ref{e1}), the shift from the actual partition coefficient ($p$) to an apparent partition coefficient ($p^*$) corresponds to more acid molecules being extracted into the droplets. As indicated in Table 1, the partition of acid into the reacting droplets can be shifted to up to 11 times that in the bulk by the interfacial colorimetric reactions. \section{Conclusions and outlook} Advances have been made towards efficient liquid-liquid nanoextraction based on surface nanodroplets. The formation of nanodroplets can be achieved in a fluid chamber or a capillary tube by the solvent exchange, or in an evaporating thin liquid film. Target analytes can be continuously extracted from a sample flow into the pinned surface droplets. The extraction is essentially not limited by the sample volume and is little affected by solid particles.
After extraction, nanodroplets with concentrated analytes can be used directly for online analysis (fluorescence microscopy, FTIR microscopy, and Raman spectroscopy) or collected for offline analysis (GC-MS and UV-vis spectroscopy). The stable nanodroplets, with their large surface-to-volume ratio, are a general platform for interfacial reactions, which enhance the shift of the partition and enable the synthesis of functional materials for SERS detection. Nanoextraction has emerged as a powerful and reliable sample-preconcentration approach thanks to its high extraction efficiency, simplicity of operation, low cost, and environmental benignity. The technique is easily accessible to most researchers and laboratories. Further development may extend nanoextraction to complex matrices, such as biological samples (e.g., cells and tissues). The organic solvents in the ternary system are needed only for the formation of the surface droplets; after droplet formation, the surrounding liquid can be an aqueous solution. Biological samples may thus be brought into contact via their aqueous solution, so biocompatibility issues in the nanoextraction process should be minimal. \begin{acknowledgments} We are grateful to our close collaborators for their valuable findings. We acknowledge funding support from the Natural Sciences and Engineering Research Council of Canada (NSERC)\textendash Discovery project, and Alliance Grant Alberta Innovates\textendash Advanced program. X.H.Z. acknowledges support from the Canada Research Chairs Program. H.Y.W. acknowledges support from the China Scholarship Council (No. 202106450020). \end{acknowledgments} \section*{Nomenclature} \noindent $C_w(t)$: Acid concentration in the water droplet;\\ $C_o(t)$: Acid concentration in the oil flow;\\ $\beta$: Mass transfer coefficient;\\ $p$: Actual partition coefficient;\\ $p^*$: Apparent partition coefficient;\\ $r$: Total reaction rate in the water droplet;\\ $R$: Lateral radius of the surface droplets.
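Using the symbols defined above, eqn~(\ref{e1}) can be integrated numerically as a cross-check of the extraction kinetics. The sketch below is a minimal explicit-Euler integration; all parameter values are hypothetical placeholders (the prefactor $C$ of eqn~(\ref{e1}) is written \texttt{C\_geo}), not values fitted to any experiment.

```python
# Minimal sketch: explicit-Euler integration of eqn (1),
#   dC_w/dt = (C/R) * beta * (C_o/p - C_w) + r * C_w,
# with hypothetical parameter values (not fitted to any experiment).

def integrate_droplet(C_geo, R, beta, p, r, C_o, C_w0=0.0, t_end=100.0, dt=0.01):
    """Return C_w(t_end) for a constant oil-phase concentration C_o.

    C_geo : prefactor C of eqn (1)
    R     : lateral droplet radius
    beta  : mass-transfer coefficient
    p     : actual partition coefficient
    r     : total reaction rate, as written in eqn (1)
    """
    C_w = C_w0
    for _ in range(int(t_end / dt)):
        dCdt = (C_geo / R) * beta * (C_o / p - C_w) + r * C_w
        C_w += dCdt * dt
    return C_w

# With r = 0 the droplet simply equilibrates toward C_o / p (here 0.5):
print(integrate_droplet(C_geo=1.0, R=1.0, beta=0.5, p=2.0, r=0.0, C_o=1.0))
```

With a nonzero reaction term the steady state shifts away from $C_o/p$, mirroring the shift from the actual coefficient $p$ to the apparent coefficient $p^*$ discussed above.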
\section*{AUTHOR DECLARATIONS} \subsection*{Conflict of Interest} The authors have no conflicts to disclose. \section*{Data Availability} Data sharing is not applicable to this article as no new data were created or analyzed in this study. \section*{REFERENCES} \nocite{*}
\section{Introduction} The introduction of CCD detectors to photometric studies of open clusters has led to significant improvements in the accuracy and precision of data for faint cluster stars observed previously via photoelectric or photographic techniques. The tradeoff is a low efficiency and associated steep wavelength dependence for such detectors in the blue-violet region, limiting photometric accuracy in the traditional ultraviolet filters: the Johnson system {\it U}-band and the Str\"{o}mgren system {\it u}-band. Color corrections for observations in the ultraviolet become non-linear and multi-valued in that situation, particularly for hot stars that display a sizable Balmer discontinuity superposed on their continuum \citep[see, for example,][]{mv77}. Many recent open cluster studies have therefore been restricted to observations in the {\it BVRI} bands, or simply the {\it BVI} or {\it VRI} bands. The intrinsic relations for OB stars and GK dwarfs in color-color diagrams restricted to such systems are nearly parallel to the reddening vectors for interstellar extinction \citep{ca93}, posing difficulties for establishing the reddening of cluster stars \citep[see][]{ca09}. AF stars can be studied in such fashion \citep[e.g.,][]{te11}, but that would limit photometric studies in {\it BVRI} to intermediate-age clusters with their main sequences consisting of unevolved 1--2 $M_{\odot}$ stars. Even in such cases, an accurate knowledge of the interstellar reddening of member stars is essential for the reliable establishment of distance and age \citep{tu96}, which means that the photometric study of many clusters may be limited by the nature of the filter bands employed for the observations. 
Some researchers studying intermediate-age and old clusters have therefore adopted an alternate approach to establish cluster intrinsic parameters, by identifying a putative red giant clump in cluster color-magnitude diagrams arising from the He-burning stage of low-mass red giants \citep{ca70} and inferring the reddening, age, and distance of a cluster through optimized fitting of model isochrones to the data \citep*[e.g.,][]{ca95,ca04}. There are two problems associated with such an approach. First, the dependence of the fitting technique on model isochrones may bias the results. Second, the most frequently encountered stars along most Galactic lines of sight tend to be K giants and A dwarfs \citep{mc65,mc70}, which are also the most luminous members of intermediate-age open clusters. As noted by \citet{bm73}, simulations of {\it UBV} photometry for unrelated stars lying in typical Galactic star fields can generate color-magnitude diagrams containing pseudo-main sequences similar to those of true open clusters. The presence of physically-unrelated A dwarfs and K giants along typical Galactic lines of sight may therefore generate features in open cluster color-magnitude diagrams, A dwarfs and G giants or FG dwarfs and K giants, in similar locations to those found in the color-magnitude diagrams of intermediate-age or old open clusters, respectively. The photometric properties of foreground or background stars in some open cluster fields could therefore be mistaken for the characteristics of an old open cluster color-magnitude diagram, thereby biasing studies of clusters tied to the red clump method. An independent means of establishing cluster membership (e.g., star counts, proper motions, or radial velocities) and especially reddening for cluster stars is therefore essential for avoiding incorrect conclusions about cluster parameters. 
The ready availability of {\it JHK}$_s$ photometry \citep{cu03} from the Two Micron All Sky Survey \citep[2MASS,][]{sk06} for Galactic star fields offers one such means, since it addresses the primary parameter vital for open cluster studies: the amount of foreground reddening of cluster stars \citep*[e.g.,][]{ma08}. Presented here are examples of star clusters, Berkeley 44, Turner 1, and Collinder 419, for which the method addresses potential incorrect choices of cluster parameters based solely on optical photometry. \section{{\it JHK}$_{s}$ Intrinsic Relations} The effective wavelengths for the 2MASS {\it JHK}$_s$ system, 1.235 $\mu$m, 1.662 $\mu$m, and 2.159 $\mu$m, respectively, are fairly close to those for the older {\it JK} system of \citet{jo68} and the {\it JHK} system studied by \citet{ko83}, and observations with the {\it JHK} and {\it K}$_s$ filters are standardized in fairly similar fashion. The main source of difficulty is likely to be the presence of atmospheric absorption features within the same photometric bands \citep{my05,my08}, which reduces the precision of repeated observations. {\it K}$_s$-band observations, and possibly {\it H}-band observations, of stars are also susceptible to emission from circumstellar dust. Is it possible that the intrinsic {\it VJK} and {\it VJHK} colors for main-sequence stars derived by \citet{jo66,jo68} and \citet{ko83}, respectively, are also applicable to the 2MASS {\it JHK}$_s$ system? Such a possibility can be tested from available 2MASS observations of standard stars and stars in open clusters of known reddening. \begin{figure}[!t] \begin{center} \includegraphics[width=7.5cm]{turf1.eps} \end{center} \caption{\small{Intrinsic {\it (V--K)}$_0$ (filled squares), {\it (V--H)}$_0$ (open circles), and {\it (V--J)}$_0$ (filled circles) colors for main-sequence stars are plotted as functions of intrinsic {\it (B--V)}$_0$ color from \citet{jo68} and \citet{ko83} in the top portion of the diagram.
The adopted relations from least squares fits are denoted by gray curves. The lower diagram plots observed data for photometric standards and standard stars using the same symbols as above. Solid curves denote best fits to the data, and differ from the predicted relations (gray curves).}} \label{fig1} \end{figure} Intrinsic {\it JHK}$_s$ colors for main-sequence stars were initially derived here as follows. The relationships of \citet{jo66,jo68} and \citet{ko83} for intrinsic {\it V--J}, {\it V--H}, and {\it V--K} colors as functions of {\it (B--V)}$_0$ were tabulated and plotted, as in Fig.~\ref{fig1} (upper), including estimates for {\it V--H} color from the intrinsic colors of \citet{jo66,jo68} using his tabulated values for {\it V--J} and {\it V--K} interpolated according to the effective wavelengths of the filters. Least squares fits to the data then established polynomial relationships between the colors. The derived relationships were then tested using 2MASS data and {\it BV} observations for 19 reasonably bright, nearby and unreddened, photometric and spectroscopic dwarf standards, stars with unsaturated observations in the Ursa Major and Hyades clusters, which are both unreddened, and stars in the young cluster NGC 2244 corrected for differential reddening within the Rosette Nebula \citep[see][]{tu76a}. NGC 2244 stars were used in order to tie down the hot end of the sequence; its member stars possess excellent photometry on the {\it UBV} system \citep{jo62} and lie in a region of well-established reddening law \citep{tu76a}. Reddening corrections for early-type stars were applied using {\it UBV} colors and the relations {\it E(J--H)} = 0.295 {\it E(B--V)}, {\it E(H--K}$_s$) = 0.49 {\it E(J--H)}, and $A_V = 2.427 E(J-H)$ for $R = A_V/E(B-V) = 3.05$, derived from van de Hulst's reddening curve No. 15 \citep[see][]{jo66}. 
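The reddening relations quoted above are straightforward to apply in code. The helper below is a sketch (the function name is ours) that converts an optical color excess {\it E(B--V)} into the corresponding 2MASS color excesses and the visual extinction for the adopted $R = A_V/E(B-V) = 3.05$.

```python
# Convert E(B-V) into 2MASS color excesses and visual extinction,
# using the relations quoted in the text, with R = A_V / E(B-V) = 3.05.

def reddening_from_ebv(EBV, R=3.05):
    EJH = 0.295 * EBV    # E(J-H)  = 0.295 E(B-V)
    EHKs = 0.49 * EJH    # E(H-Ks) = 0.49  E(J-H)
    AV = R * EBV         # A_V = R E(B-V)
    return EJH, EHKs, AV

# Example: a star reddened by E(B-V) = 1.00
EJH, EHKs, AV = reddening_from_ebv(1.00)
print(EJH, EHKs, AV)  # E(J-H) = 0.295, E(H-Ks) ~ 0.145, A_V = 3.05
```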
In addition, the observations for Hyades stars were supplemented by data from \citet{ca82} in order to reduce the photometric scatter typical of 2MASS observations. The results are plotted in the lower portion of Fig.~\ref{fig1}. The observational trends of {\it V--J}, {\it V--H}, and {\it V--K}$_s$ are fairly similar to the predicted trends seen in the top portion of Fig.~\ref{fig1}, but with noticeable offsets, particularly in {\it V--H}. The adopted intrinsic relations were therefore established from polynomial fits to the observational data rather than from the predicted relations, with the results tabulated in Table~\ref{tab1} for zero-age main sequence (ZAMS) stars. The derived polynomial relationships of Fig.~\ref{fig1} were used to derive intrinsic {\it J--H} and {\it H--K} colors for main-sequence stars as a function of intrinsic broad band color index {\it (B--V)}$_0$, and the ZAMS calibration was that of \citet{tu76b,tu79}. \begin{figure}[!t] \begin{center} \includegraphics[width=7cm]{turf2.eps} \end{center} \caption{\small{Unreddened {\it JHK}$_s$ colors (left) for stars in Stock 16 and NGC 2244 (top), NGC 2362 (middle), and NGC 2281 (bottom), and (right) for stars in the Hyades (top), and the Ursa Major cluster (middle) relative to the intrinsic relation for ZAMS stars (gray curves).
The lower right diagram shows the data for stars in all clusters, except for saturated UMa stars, combined using running 20-point means.}} \label{fig2} \end{figure} \setcounter{table}{0} \begin{table*} \caption[]{Empirical {\it JHK}$_s$ calibration for ZAMS stars.} \label{tab1} \centering \small \begin{tabular*}{0.94\textwidth}{@{\extracolsep{-1.4mm}}ccccccccccccccc} \hline \noalign{\smallskip} {\it J--H} &{\it H--K}$_s$ &$M_J$ &{\it J--H} &{\it H--K}$_s$ &$M_J$ &{\it J--H} &{\it H--K}$_s$ &$M_J$ &{\it J--H} &{\it H--K}$_s$ &$M_J$ &{\it J--H} &{\it H--K}$_s$ &$M_J$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} --0.129 &--0.107 &--2.393 &0.024 &0.023 &1.748 &0.148 &0.042 &3.035 &0.253 &0.058 &4.513 &0.391 &0.083 &5.438 \\ --0.125 &--0.104 &--2.021 &0.028 &0.026 &1.779 &0.151 &0.042 &3.088 &0.256 &0.058 &4.546 &0.396 &0.084 &5.455 \\ --0.121 &--0.100 &--1.749 &0.032 &0.027 &1.810 &0.154 &0.043 &3.142 &0.259 &0.059 &4.568 &0.400 &0.085 &5.481 \\ --0.117 &--0.097 &--1.506 &0.036 &0.027 &1.841 &0.157 &0.043 &3.196 &0.262 &0.059 &4.591 &0.405 &0.085 &5.507 \\ --0.113 &--0.094 &--1.303 &0.039 &0.027 &1.873 &0.160 &0.043 &3.239 &0.265 &0.060 &4.623 &0.409 &0.086 &5.533 \\ --0.109 &--0.090 &--1.130 &0.043 &0.028 &1.904 &0.162 &0.044 &3.293 &0.268 &0.061 &4.646 &0.414 &0.087 &5.559 \\ --0.105 &--0.087 &--0.976 &0.047 &0.028 &1.926 &0.165 &0.044 &3.337 &0.272 &0.061 &4.668 &0.419 &0.088 &5.584 \\ --0.101 &--0.083 &--0.832 &0.051 &0.029 &1.957 &0.168 &0.045 &3.390 &0.275 &0.062 &4.690 &0.424 &0.089 &5.609 \\ --0.097 &--0.080 &--0.678 &0.054 &0.029 &1.979 &0.171 &0.045 &3.434 &0.278 &0.062 &4.722 &0.428 &0.090 &5.633 \\ --0.093 &--0.076 &--0.533 &0.058 &0.030 &2.001 &0.173 &0.045 &3.478 &0.282 &0.063 &4.744 &0.433 &0.091 &5.658 \\ --0.088 &--0.072 &--0.388 &0.062 &0.030 &2.023 &0.176 &0.046 &3.521 &0.285 &0.063 &4.776 &0.438 &0.092 &5.681 \\ --0.084 &--0.069 &--0.233 &0.065 &0.031 &2.045 &0.179 &0.046 &3.565 &0.288 &0.064 &4.808 &0.443 &0.093 &5.705 \\ --0.080 &--0.065 
&--0.088 &0.069 &0.031 &2.068 &0.182 &0.047 &3.609 &0.292 &0.065 &4.840 &0.448 &0.094 &5.728 \\ --0.119 &--0.045 &--2.279 &0.021 &0.024 &1.732 &0.188 &0.052 &2.982 &0.365 &0.069 &4.468 &0.536 &0.107 &5.370 \\ --0.116 &--0.042 &--1.912 &0.026 &0.025 &1.761 &0.193 &0.052 &3.035 &0.370 &0.070 &4.500 &0.540 &0.108 &5.386 \\ --0.112 &--0.040 &--1.644 &0.030 &0.026 &1.790 &0.198 &0.053 &3.089 &0.375 &0.070 &4.523 &0.544 &0.110 &5.411 \\ --0.109 &--0.037 &--1.405 &0.034 &0.028 &1.819 &0.202 &0.053 &3.143 &0.379 &0.071 &4.545 &0.549 &0.111 &5.437 \\ --0.106 &--0.035 &--1.206 &0.038 &0.029 &1.848 &0.207 &0.054 &3.186 &0.384 &0.072 &4.578 &0.553 &0.113 &5.462 \\ --0.102 &--0.032 &--1.037 &0.043 &0.030 &1.878 &0.212 &0.054 &3.240 &0.389 &0.072 &4.600 &0.557 &0.115 &5.486 \\ --0.099 &--0.030 &--0.887 &0.047 &0.031 &1.898 &0.217 &0.055 &3.284 &0.394 &0.073 &4.622 &0.562 &0.117 &5.511 \\ --0.095 &--0.027 &--0.748 &0.051 &0.032 &1.927 &0.221 &0.055 &3.338 &0.398 &0.074 &4.644 &0.566 &0.118 &5.535 \\ --0.092 &--0.025 &--0.597 &0.056 &0.033 &1.948 &0.226 &0.055 &3.382 &0.403 &0.074 &4.676 &0.570 &0.120 &5.558 \\ --0.088 &--0.023 &--0.457 &0.060 &0.034 &1.968 &0.231 &0.056 &3.426 &0.408 &0.075 &4.697 &0.574 &0.122 &5.582 \\ --0.085 &--0.020 &--0.316 &0.064 &0.034 &1.988 &0.236 &0.056 &3.470 &0.412 &0.076 &4.729 &0.579 &0.124 &5.605 \\ --0.081 &--0.018 &--0.164 &0.069 &0.035 &2.009 &0.240 &0.057 &3.514 &0.417 &0.077 &4.760 &0.583 &0.126 &5.628 \\ --0.077 &--0.016 &--0.023 &0.073 &0.036 &2.030 &0.245 &0.057 &3.558 &0.422 &0.078 &4.792 &0.587 &0.128 &5.650 \\ --0.074 &--0.014 &0.109 &0.078 &0.037 &2.061 &0.250 &0.058 &3.602 &0.427 &0.078 &4.823 &0.591 &0.130 &5.673 \\ --0.070 &--0.012 &0.231 &0.082 &0.038 &2.072 &0.255 &0.058 &3.646 &0.431 &0.079 &4.854 &0.595 &0.132 &5.694 \\ --0.066 &--0.010 &0.344 &0.087 &0.039 &2.093 &0.260 &0.058 &3.690 &0.436 &0.080 &4.884 &0.599 &0.135 &5.716 \\ --0.063 &--0.008 &0.447 &0.091 &0.039 &2.114 &0.264 &0.059 &3.724 &0.441 &0.081 &4.915 &0.603 &0.137 
&5.737 \\ --0.059 &--0.006 &0.550 &0.096 &0.040 &2.146 &0.269 &0.059 &3.768 &0.445 &0.082 &4.945 &0.608 &0.139 &5.758 \\ --0.055 &--0.004 &0.654 &0.100 &0.041 &2.178 &0.274 &0.060 &3.802 &0.450 &0.083 &4.976 &0.612 &0.141 &5.779 \\ --0.051 &--0.002 &0.757 &0.105 &0.042 &2.200 &0.279 &0.060 &3.846 &0.454 &0.084 &5.006 &0.616 &0.144 &5.799 \\ --0.047 &--0.001 &0.852 &0.109 &0.042 &2.232 &0.283 &0.061 &3.880 &0.459 &0.085 &5.026 &0.620 &0.146 &5.819 \\ --0.044 &0.001 &0.946 &0.114 &0.043 &2.254 &0.288 &0.061 &3.923 &0.464 &0.086 &5.045 &0.624 &0.149 &5.838 \\ --0.040 &0.003 &1.041 &0.118 &0.044 &2.286 &0.293 &0.061 &3.957 &0.468 &0.087 &5.065 &0.628 &0.151 &5.857 \\ --0.036 &0.005 &1.116 &0.123 &0.044 &2.328 &0.298 &0.062 &3.991 &0.473 &0.088 &5.084 &0.631 &0.154 &5.876 \\ --0.032 &0.006 &1.201 &0.127 &0.045 &2.361 &0.303 &0.062 &4.035 &0.477 &0.089 &5.103 &0.635 &0.156 &5.895 \\ --0.028 &0.008 &1.266 &0.132 &0.046 &2.393 &0.308 &0.063 &4.069 &0.482 &0.090 &5.122 &0.639 &0.159 &5.913 \\ --0.024 &0.009 &1.342 &0.137 &0.046 &2.426 &0.312 &0.063 &4.102 &0.487 &0.091 &5.140 &0.643 &0.162 &5.930 \\ --0.020 &0.011 &1.408 &0.141 &0.047 &2.469 &0.317 &0.064 &4.136 &0.491 &0.093 &5.159 &0.647 &0.164 &5.948 \\ --0.016 &0.012 &1.465 &0.146 &0.047 &2.502 &0.322 &0.064 &4.169 &0.496 &0.094 &5.177 &0.651 &0.167 &5.965 \\ --0.012 &0.014 &1.521 &0.151 &0.048 &2.555 &0.327 &0.065 &4.203 &0.500 &0.095 &5.195 &0.655 &0.170 &5.981 \\ --0.008 &0.015 &1.558 &0.155 &0.048 &2.598 &0.332 &0.065 &4.236 &0.505 &0.096 &5.213 &0.658 &0.173 &5.997 \\ --0.004 &0.017 &1.585 &0.160 &0.049 &2.651 &0.336 &0.066 &4.270 &0.509 &0.098 &5.240 &0.662 &0.176 &6.013 \\ 0.000 &0.018 &1.612 &0.165 &0.049 &2.704 &0.341 &0.066 &4.303 &0.514 &0.099 &5.258 &0.666 &0.179 &6.028 \\ 0.005 &0.019 &1.640 &0.169 &0.050 &2.748 &0.346 &0.067 &4.336 &0.518 &0.101 &5.285 & $\cdots$ & $\cdots$ & $\cdots$ \\ 0.009 &0.021 &1.658 &0.174 &0.050 &2.811 &0.351 &0.067 &4.379 &0.523 &0.102 &5.301 & $\cdots$ & $\cdots$ & $\cdots$ \\ 
0.013 &0.022 &1.676 &0.179 &0.051 &2.874 &0.355 &0.068 &4.412 &0.527 &0.103 &5.328 & $\cdots$ & $\cdots$ & $\cdots$ \\ 0.017 &0.023 &1.704 &0.183 &0.051 &2.928 &0.360 &0.069 &4.445 &0.531 &0.105 &5.344 & $\cdots$ & $\cdots$ & $\cdots$ \\ \noalign{\smallskip} \hline \end{tabular*} \end{table*} The intrinsic relationship for 2MASS colors in Table~\ref{tab1} is similar to the intrinsic color-color relation for dwarf stars in the {\it UBV} system, displaying a noticeable ``kink'' near spectral type A0. 2MASS {\it J} magnitudes sample the Brackett continuum in hot stars, while {\it H} and {\it K}$_s$ magnitudes sample the Pfund continuum, so {\it J--H} color should provide a measure of the Pfund discontinuity in hot stars, much like {\it U--B} color provides a measure of the Balmer discontinuity. Similarly, {\it H--K}$_s$ color should provide a measure of stellar temperature, much like {\it B--V} color does in the {\it UBV} system. For cool stars all colors should closely track changes in the slope of the black body continuum in the far infrared. Interstellar reddening affects {\it JHK}$_s$ colors much less than it does {\it UBV} colors, but the effects of circumstellar emission are often more important in the far infrared than in the visible region. A {\it UBV} color-color diagram plots {\it U--B} versus {\it B--V}, for which a plot of {\it J--H} versus {\it H--K}$_s$ would be the closest approximation. It appears, however, that {\it J--H} colors may display a greater precision than {\it H--K}$_s$ colors in 2MASS photometry, so the diagnostic tool used here to establish reddening for cluster stars is a 2MASS color-color diagram in which {\it H--K}$_s$ is plotted versus {\it J--H}.
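The by-eye reddening fit used with such a color-color diagram can also be automated: slide the observed colors back along the reddening vector, whose slope is {\it E(H--K}$_s${\it )/E(J--H)} = 0.49, until they best match the intrinsic relation. Below is a minimal sketch; the intrinsic points are a rounded excerpt from Table~\ref{tab1}, and the ``observed'' stars are synthetic, reddened by a known {\it E(J--H)} = 0.15 so the recovered value can be checked.

```python
import numpy as np

# Sketch of an automated version of the "by eye" reddening fit: shift the
# observed colors back along the reddening vector (slope 0.49) and keep the
# E(J-H) that minimizes the summed squared offset from the intrinsic relation.

# Rounded excerpt of the intrinsic (J-H, H-Ks) relation from Table 1:
jh_int = np.array([0.000, 0.100, 0.200, 0.300, 0.400, 0.500, 0.600])
hk_int = np.array([0.018, 0.041, 0.053, 0.062, 0.085, 0.095, 0.137])

def fit_ejh(obs_jh, obs_hk, trials=np.linspace(0.0, 0.5, 501)):
    chi2 = [np.sum((obs_hk - 0.49 * e
                    - np.interp(obs_jh - e, jh_int, hk_int)) ** 2)
            for e in trials]
    return trials[int(np.argmin(chi2))]

# Synthetic "cluster": intrinsic points reddened by a known E(J-H) = 0.15
obs_jh = jh_int + 0.15
obs_hk = hk_int + 0.49 * 0.15
print(fit_ejh(obs_jh, obs_hk))  # recovers E(J-H) ~ 0.15
```

For real cluster photometry the scatter discussed below dominates the residuals, so the minimum of the misfit curve is shallow and the quoted $\pm 0.01$ to $\pm 0.02$ uncertainty in {\it E(J--H)} is a realistic floor.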
The intrinsic relations were tested further using the three clusters used for the calibration and three additional clusters that lie in fields of uniform interstellar reddening: Stock 16 \citep{tu85a}, NGC 2362 \citep{jo53,jo57,jo61}, and NGC 2281 \citep{pe61}, which are reddened by {\it E(B--V)} = 0.49, 0.11, and 0.11, respectively \citep[see][]{tu96,jo61}. Obvious unreddened stars in the same fields were also included in the samples. Fig.~\ref{fig2} plots 2MASS colors for stars in the core regions of Stock 16, NGC 2362, and NGC 2281 with cited magnitude uncertainties of less than $\pm0.05$, corrected for their known reddening or lack of reddening for suspected foreground stars, along with similar data for Hyades and Ursa Major cluster stars and dereddened data for stars in NGC 2244. Data for the Ursa Major cluster include 11 stars with observations of uncertain quality (bright stars with saturated photometry and blended stars). The bottom right portion of Fig.~\ref{fig2} displays the data for all clusters, except for UMa stars of poor quality, combined using running 20-point means as a function of {\it J--H} color. The last step was made as a means of reducing the photometric scatter in both colors. The intrinsic relation of Table~\ref{tab1} appears to be confirmed by stars in the open clusters of Fig.~\ref{fig2}, although there is clearly additional scatter in the observations of individual stars that cannot be attributed to differential reddening. In three of the four reddened clusters the {\it UBV} colors display {\it no} differential reddening, so it is unlikely to appear in the {\it JHK}$_s$ colors unless there is some unforeseen component, such as circumstellar emission, affecting the observations. A pronounced ``extra'' scatter for B-type stars may have such an origin, given the evidence for circumstellar dust associated with some rapidly-rotating late B-type stars \citep{tu93}, but it is unlikely that similar effects extend to the remaining cluster stars. 
It is more reasonable to attribute the scatter to intrinsic uncertainties in the precision of the {\it JHK}$_s$ magnitude estimates, which appear to be larger than the cited values and also larger than what is attainable from iris photometry of photographic plates \citep{tw89}. The tendency toward larger scatter for the coolest stars of NGC 2362 and NGC 2281 may be a problem with membership selection. An earlier calibration similar to that of Table~\ref{tab1} was derived using stars in Stock 16, NGC 2362, and NGC 2281, with polynomial and linear fits made to combined running 20-point means for the data. Some of the features of the earlier calibration \citep[see][]{te08,te09} can be seen in the lower right portion of Fig.~\ref{fig2}, namely the pronounced ``kink'' near spectral type A0 in the mean data. But the data also match the new curved relationship very well. For Hyades stars the {\it JHK}$_s$ observations represent a combination of both 2MASS data and data from \citet{ca82}, as noted. The colors from both data sets display a similar scatter to that for Stock 16 and NGC 2244 members, but it is much reduced when the data are averaged together. There remains a sizeable residual scatter that is slightly larger than the cited uncertainties in the observations. Perhaps it represents an observational limitation for ground-based observations in the infrared imposed by variable atmospheric water vapour content \citep[e.g.][]{my05,my08}. The scatter is greatly reduced in the 20-point running means, which is consistent with a problem that is tied to the precision of the observations. Otherwise, the intrinsic relation of Table~\ref{tab1} appears to be confirmed. The colors for cluster stars in Fig.~\ref{fig2} agree closely with the derived intrinsic relation, in most cases displaying only random scatter about it.
If there were an extra reddening component of, say, 0.01 to 0.02 in {\it E(J--H)} for the stars, the centroids for the data would display a noticeable offset from the intrinsic relation. Thus, despite large scatter in the colors for stars in many open clusters, it is possible to derive an observational reddening for the stars from a color-color diagram by eye, with uncertainties of no more than $\pm 0.01$ to $\pm 0.02$ in {\it E(J--H)}, provided that the scatter in such cases is random in nature. Likewise, distance moduli can be estimated by eye from observational {\it J} versus {\it J--H} color-magnitude diagrams, with uncertainties typically no larger than $\pm 0.1$ to $\pm 0.2$. Such conclusions are confirmed by tests on the many open clusters studied to date \citep[see, for example,][]{te08,te09}. \section{Other Open Clusters} As noted above, the intrinsic relations of Table~\ref{tab1} have been tested successfully in earlier work \citep{te08,te09}, but three clusters of uncertain properties were selected for further testing here: Berkeley 44, Turner 1, and Collinder 419. All three clusters possess limited published optical observations, and provide good examples of where 2MASS observations may help to clarify previous conclusions about cluster properties. \begin{figure}[!t] \begin{center} \includegraphics[width=6.2cm]{turf3.eps} \end{center} \caption{\small{The $15\arcmin \times 15\arcmin$ field of Berkeley 44 from the Palomar Observatory Sky Survey red image of the field. The image is centered on 2000 co-ordinates: 19:17:16, +19:33:00, the cluster center identified visually.}} \label{fig3} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=6.2cm]{turf4.eps} \end{center} \caption{\small{{\it JHK}$_s$ photometry for stars within $2\arcmin$ of the center of Berkeley 44 with magnitude uncertainties smaller than $\pm0.05$ (points) as well as with larger uncertainties (plus signs).
The intrinsic color-color relation is depicted in gray (top), while the intrinsic relations for {\it E(J--H)} = 0.295 ({\it E(B--V)} = 1.00) and {\it J--M}$_J$ = 12.1 ({\it V}$_0${\it--M}$_V$ = 11.38) are shown as black lines. The thin black curve is an isochrone for $\log t = 9.3$ adapted from \citet{me93}, the thin gray curve an isochrone of identical age from \citet{bo04}.}} \label{fig4} \end{figure} \subsection{Berkeley 44} The sparse northern hemisphere cluster Berkeley 44 (Fig.~\ref{fig3}) has been identified as an old group by \citet*{ca06}, although with some question about its reality because of ambiguities in the star counts and the similarity of its color-magnitude diagram to that of stars in the surrounding reference field. The published co-ordinates for the cluster by \citet{di02} and \citet{ca06} do not match the optical density peak visible on the Palomar Observatory Sky Survey, and an alternate cluster center (Fig.~\ref{fig3}) was adopted here. The group lies more than $3^{\circ}$ from the Galactic plane, so contamination by foreground early-type stars should be relatively low. However, the extreme faintness of cluster stars makes it necessary to consider both high quality and low quality 2MASS data for the field, with uncertainties of $\pm0^{\rm m}.05$ representing the demarcation. The 2MASS {\it JHK}$_s$ data for Berkeley 44 (Fig.~\ref{fig4}) confirm that it is an old open cluster. The cluster color-color diagram is devoid of early-type stars lying within $2\arcmin$ of the adopted cluster center, most cluster members being cooler late-type stars. A few stars of inferred spectral types G and K appear to be essentially unreddened, with most stars of spectral types F or later reddened by similar amounts. The cluster color-magnitude diagram reveals a well-defined clump of red giants at $J \simeq 12.2$ with implied spectral types of early K, so the identification of this cluster as an old cluster is confirmed. 
The optimum fit by eye to the {\it JHK}$_s$ observations yields a reddening of {\it E(J--H)} $=0.295\pm0.02$ ({\it E(B--V)} $ = 1.00 \pm0.07$) and a distance modulus of {\it J--M}$_J = 12.1 \pm0.1$ ({\it V}$_0${\it--M}$_V = 11.38 \pm0.24$), corresponding to a distance of $d = 1.89 \pm0.21$ kpc. The uncertainty in the intrinsic distance modulus includes the uncertainty in interstellar extinction towards the cluster. The ZAMS fit is tied to low quality {\it JHK}$_s$ observations at the faint end, so may be less well-established than the formal uncertainty suggests, although the implied distance agrees closely with the value of 1.8 kpc derived by \citet{ca06}. The implied cluster reddening is significantly smaller than the value of {\it E(B--V)} = 1.40 $\pm0.10$ obtained by \citet{ca06}, which in any case is inconsistent with the 2MASS data unless one adopts an extreme fit to the very reddest stars. Berkeley 44 main-sequence members have intrinsic {\it J--H} colors of $\sim0.15$ or redder, indicating stars of earliest spectral type $\sim$F0, corresponding to turnoffs for intermediate-age clusters of age $\sim10^9$ yrs. A variety of model isochrones near $\log t \simeq 9$ were therefore tested in the cluster color-magnitude diagram (Fig.~\ref{fig4}), with the optimum fit produced by an isochrone of $\log t = 9.3$, as shown, corresponding to a cluster age of $\sim2$ Gyr. The procedure involved use of the {\it M}$_V$ versus {\it B--V} isochrones published by \citet*{me93}, adapted to the {\it M}$_J$ versus {\it J--H} system with our empirical link between the {\it UBV} and 2MASS systems. An isochrone of the same age from \citet*{bo04} is also plotted in Fig.~\ref{fig4}, but deviates systematically in {\it J--H} from the \citet{me93} isochrone, suggesting a possible problem in the conversion from $\log T_{\rm eff}$ to {\it J--H} with the Padova isochrones used by \citet{bo04}. 
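The distances quoted here follow from the dereddened distance modulus via $d = 10^{(\mu_0 + 5)/5}$ pc. A short sketch with first-order propagation of the modulus uncertainty reproduces the values quoted above for Berkeley 44.

```python
from math import log

# Distance (pc) from a dereddened distance modulus mu0 = V0 - M_V,
# with first-order propagation of the modulus uncertainty sigma_mu.

def distance_from_modulus(mu0, sigma_mu=0.0):
    d = 10.0 ** ((mu0 + 5.0) / 5.0)
    sigma_d = d * log(10.0) / 5.0 * sigma_mu
    return d, sigma_d

# Berkeley 44: V0 - M_V = 11.38 +/- 0.24
d, sd = distance_from_modulus(11.38, 0.24)
print(round(d / 1000.0, 2), round(sd / 1000.0, 2))  # 1.89 0.21 (kpc)
```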
A problem arises with stars in the red giant clump, since they are not replicated well by either set of evolutionary models used for the isochrones. Yet their colors are consistent with the adopted cluster reddening. Berkeley 44 is indeed an old open cluster, but inferences about its parameters derived using the larger reddening estimated by \citet{ca06} differ from those inferred from 2MASS observations. \begin{figure}[!t] \begin{center} \includegraphics[width=6.2cm]{turf5.eps} \end{center} \caption{\small{The $15\arcmin \times 15\arcmin$ field of Turner 1 from the Palomar Observatory Sky Survey red image of the field. The image is centered on 2000 co-ordinates: 19:48:26.5, +27:17:59, the cluster center identified from star counts \citep{tu85b}. S Vul is the bright star near the center of the image.}} \label{fig5} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=6.2cm]{turf6.eps} \end{center} \caption{\small{{\it JHK}$_s$ photometry for stars within $4\arcmin$ of the center of Turner 1 with magnitude uncertainties smaller than $\pm0.05$. The intrinsic color-color relation is depicted in gray, while the intrinsic relations for {\it E(J--H)} = 0.15 ({\it E(B--V)} = 0.51) and {\it J--M}$_J$ = 9.7 ({\it V}$_0${\it--M}$_V$ = 9.34) are shown as black lines. The thin black curve is an isochrone for $\log t = 9.8$ adapted from \citet{me93}.}} \label{fig6} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=6.2cm]{turf7.eps} \end{center} \caption{\small{The same as Fig.~\ref{fig6} but for {\it E(J--H)} = 0.29 ({\it E(B--V)} = 1.02) and {\it J--M}$_J$ = 13.2 ({\it V}$_0${\it--M}$_V$ = 12.47), shown as black lines. Small plus signs denote additional stars within $11\arcmin$ of the cluster center inferred to be B-type and A-type stars, with no restriction on photometric uncertainties. 
The star symbol corresponds to the Cepheid S Vul.}} \label{fig7} \end{figure} \subsection{Turner 1} The sparse cluster designated as Turner 1 (Fig.~\ref{fig5}) surrounding the long-period Cepheid S Vul was discovered during the preparation of finder charts for Galactic Cepheids \citep{tu85b}, and was subsequently studied on the basis of photoelectric and photographic {\it UBV} photometry by \citet*{te86}. {\it UBV} photometry for stars in Turner 1 yielded a cluster distance of $643 \pm25$ pc and a reddening of {\it E(B--V)} $= 0.48 \pm0.02$ for what appeared to be an old group of G-type dwarfs with a putative red giant branch, although there was also evidence for more heavily reddened B-type stars in the same field \citep{te86}. The limited {\it UBV} data implied that Turner 1 lay foreground to the Cepheid S Vul, which is projected near its center, although no further tests were made, spectroscopic observations being restricted by the faintness of cluster stars. It was noted, however, that the putative reddened G-dwarf sequence identified from {\it UBV} colors might be contaminated by more heavily reddened B dwarfs. 2MASS {\it JHK}$_s$ data provide new insights into the earlier results. It is possible to detect a sparse group of G dwarfs (with rather large photometric scatter) and a putative late-K giant branch in the data, with similar parameters to the {\it UBV} study (Fig.~\ref{fig6}). But many of the stars in the field, and most of the stars within $11\arcmin$ of the adopted cluster center, appear instead to be reddened B-type and A-type stars (Fig.~\ref{fig7}). A best fit by eye to the {\it JHK}$_s$ observations for the former yields a reddening of {\it E(J--H)} $= 0.15 \pm0.02$ ({\it E(B--V)} $= 0.51 \pm0.07$) and a distance modulus of {\it J--M}$_J = 9.7 \pm0.1$ ({\it V}$_0${\it--M}$_V = 9.34 \pm0.24$), corresponding to a distance of $d = 0.74 \pm0.08$ kpc, values that are reasonably consistent with those obtained for the cluster from {\it UBV} photometry. 
The model isochrone from \citet{me93} that best fits the observations has $\log t = 9.8$, implying a cluster age of $\sim6$ Gyr, although the red giants are displaced redward of that isochrone, much like the case for Berkeley 44 red giant clump stars. Conceivably a refined empirical calibration of 2MASS intrinsic colors that includes a separate relation for red giant stars could eliminate the problem. There appear to be large numbers of M dwarfs lying along the line of sight to the cluster. The implied best fit by eye to the {\it JHK}$_s$ observations for the group of reddened B-type stars yields a reddening of {\it E(J--H)} $= 0.30 \pm0.02$ ({\it E(B--V)} $= 1.02 \pm0.07$) and a distance modulus of {\it J--M}$_J = 13.2 \pm0.2$ ({\it V}$_0${\it--M}$_V = 12.47 \pm0.29$), corresponding to a distance of $d = 3.12 \pm0.42$ kpc. Similar values were used by \citet{tu10} with an earlier version of the 2MASS calibration to establish very reasonable estimates for the reddening and luminosity of the 68$^{\rm d}$ Cepheid S Vulpeculae as a possible cluster member. S Vul is represented by the star symbol in Fig.~\ref{fig7}. In the present instance the results imply intrinsic parameters of $(\langle B \rangle - \langle V \rangle)_0 = 0.87 \pm 0.07$ and $\langle M_V \rangle = -6.86 \pm 0.29$ for the Cepheid, close to what would be predicted empirically \citep[see][]{tu10}. The angular brackets denote intensity means. Spectroscopic observations are essential for confirming the picture implied by the 2MASS observations, since star counts for stars identified photometrically as B dwarfs display only a marginal concentration towards the cluster core. It appears that the main concentration of stars in Turner 1 corresponds to the old, sparse, foreground cluster identified in Fig.~\ref{fig6}, with contamination by a young group of background B-dwarfs located along the same line of sight. 
\begin{figure}[!t] \begin{center} \includegraphics[width=6.2cm]{turf8.eps} \end{center} \caption{\small{The $15\arcmin \times 15\arcmin$ field of Collinder 419 from the Palomar Observatory Sky Survey red image of the field. The image is centered on 2000 co-ordinates: 20:18:08, +40:42:42, the adopted cluster center. HD 193322 is the bright star north of the center of the image.}} \label{fig8} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=6.2cm]{turf9.eps} \end{center} \caption{\small{{\it JHK}$_s$ photometry for stars within $5\arcmin$ of the adopted center for Collinder 419 with uncertainties smaller than $\pm0.05$. The intrinsic color-color relation is depicted in gray, while the intrinsic relations for {\it E(J--H)} = 0.39 ({\it E(B--V)} = 1.32) and {\it J--M}$_J$ = 11.7 ({\it V}$_0${\it--M}$_V$ = 10.75) are shown as black lines.}} \label{fig9} \end{figure} \subsection{Collinder 419} Collinder 419 is a sparse group of stars (Fig.~\ref{fig8}) of small angular diameter \citep[$4\arcmin$.5 according to][]{co31} associated with the massive O9 V((n)) double star HD 193322 \citep{wa71}. Collinder estimated a distance of 1470 pc for the group from its angular diameter, while \citet{gi10} used astrometric and photometric data from the UCAC3 catalogue for bright stars in the field to derive a distance of $d = 741 \pm36$ pc, with a reddening of {\it E(B--V)} $= 0.37 \pm0.05$. Included as a possible member of the group was the M3 III star IRAS 20161+4035. 
\setcounter{table}{1} \begin{table*} \caption[]{Deduced parameters for clusters analyzed by {\it JHK}$_s$ photometry.} \label{tab2} \centering \small \begin{tabular*}{0.75\textwidth}{@{\extracolsep{+0.8mm}}lccccc} \hline \noalign{\smallskip} Cluster &{\it E(J--H)} &{\it E(B--V)} &{\it J--M}$_J$ &{\it V}$_0${\it--M}$_V$ &{\it d} (kpc) \\ \noalign{\smallskip} \hline \noalign{\smallskip} Berkeley 44 &$0.295 \pm0.02$ &$1.00 \pm0.07$ &$12.1 \pm0.1$ &$11.38 \pm0.24$ &$1.89 \pm0.21$ \\ Turner 1a &$0.15 \pm0.02$ &$0.51 \pm0.07$ &$9.7 \pm0.2$ &$9.34 \pm0.29$ &$0.74 \pm0.08$ \\ Turner 1b &$0.29 \pm0.02$ &$1.02 \pm0.07$ &$13.2 \pm0.2$ &$12.47 \pm0.29$ &$3.12 \pm0.42$ \\ Collinder 419 &$0.39 \pm0.02$ &$1.32 \pm0.07$ &$11.7 \pm0.2$ &$10.75 \pm0.28$ &$1.42 \pm0.18$ \\ \noalign{\smallskip} \hline \end{tabular*} \end{table*} The cluster does not stand out well at optical wavelengths, but we were able to detect a slight density enhancement visually and estimate a crude center of symmetry at 2000 co-ordinates: 20:18:08, +40:42:42, close to the coordinates estimated by Collinder. The cluster also appears as a set of reddened B-type stars in available 2MASS {\it JHK}$_s$ photometry for stars lying within $5\arcmin$ of that center of symmetry (Fig.~\ref{fig9}), reasonably consistent with Collinder's inferred dimensions. A best fit by eye to the {\it JHK}$_s$ data yields a reddening of {\it E(J--H)} $= 0.39 \pm0.02$ ({\it E(B--V)} $= 1.32 \pm0.07$), significantly larger than the value estimated by \citet{gi10}, a distance modulus of {\it J--M}$_J = 11.7 \pm0.2$ ({\it V}$_0${\it--M}$_V = 10.75 \pm0.28$), and an inferred distance of $d = 1.42 \pm0.18$ kpc, which is also larger than the \citet{gi10} value, although consistent with a location within the young complex of stars and H II regions \citep[including Berkeley 87,][]{te10} associated with the Cygnus X region at $\sim1.2$ kpc. 
In this case the 2MASS data provide a more solid basis for the reality of the group than the UCAC3 data, although they do not address the possible membership of HD 193322. The cluster does not show up as a significant density enhancement from star counts, and it is conceivable that Collinder 419 represents merely a clump in one of the many OB associations seen along the direction of the Cygnus arm. \section{Discussion} Table~\ref{tab2} summarizes the results of the present study of four open clusters using 2MASS observations. Most of the inferred parameters for the clusters differ from published results tied to optical photometry, although those for Turner 1 were considered as an alternate possibility in the study by \citet{te86}. In each case the optical photometry was limited by the faintness of cluster members arising from large interstellar reddening, which is where {\it JHK}$_s$ photometry has advantages over optical band photometry. The point to emphasize, however, is that {\it JHK}$_s$ observations of cluster stars often provide relatively complete samples that can be used to infer reasonably accurate estimates for the all-important interstellar reddening toward the cluster, in most cases in a more straightforward manner than is typical for optical band {\it BVRI} observations. Once the reddening is known, ZAMS fitting, or isochrone fitting in some cases, can then be used to derive the distance to the cluster. That is true despite the relatively low intrinsic precision of existing 2MASS data for most Galactic fields. \section*{acknowledgments} \small{This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.}
\section{Introduction} Resolvent estimates for Schr\"odinger operators play a decisive role in numerous areas in spectral and scattering theory, as well as partial differential equations. In particular, resolvent estimates which are uniform in the spectral parameter are intimately connected with dispersive and smoothing estimates for the corresponding (time-dependent) Schr\"odinger equation, as observed by Kato \cite{Kato1965}. As a general rule, resolvent estimates that hold up to the spectrum (usually called a limiting absorption principle) are associated with global in time Strichartz and smoothing estimates for the Schr\"odinger flow. Results in this category are usually obtained by considering a decaying electromagnetic potential as a perturbation of the free Laplacian, see for example \cite{GeorgievEtAl2007} for small perturbations and \cite{ErdoganEtAl2008,ErdoganEtAl2009,FanelliVega2009,D'AnconaEtAl2010,Garcia2011,Garcia2015} for large perturbations. On the other hand, resolvent estimates that are uniform only up to a $\mathcal{O}(1)$ distance to the spectrum are associated with local in time estimates. This is usually due to the presence of eigenvalues or resonances that prevent the dispersion of the flow. Potentials in this situation are usually unbounded. Prominent examples here are the harmonic oscillator (quadratic electric potential) and the constant magnetic field (linear vector potential). We mention \cite{Fujiwara1980,Yajima1991,YajimaZhang2004,Doi2005,RobbianoZuily2008,D'AnconaFanelli2009} for estimates involving unbounded potentials. There is a big gap in the regularity and decay conditions for the electromagnetic potential between the two scenarios. In the first, the potentials can usually be quite rough, but have sufficient decay at infinity. In the second case, unbounded potentials are allowed but they are usually assumed to be smooth. Very little is known in the intermediate case. Our resolvent estimate is a step in this direction. 
The paper is organized as follows. In Section \ref{section Assumptions and main result} we state the assumptions on the potentials and the resolvent estimate in the simplest case, with a uniform bound with respect to the spectral parameter. In Section \ref{section Lq'Lq and smoothing estimates} we prove the resolvent estimate for the unperturbed operator. In Section \ref{section Proof main theorem, general case} we use a perturbative argument to prove the estimate in the general case. In the final Section \ref{section complex-valued potentials} we state a more precise version of the resolvent estimate and give an application to eigenvalue bounds for Schr\"odinger operators with complex-valued potentials.\\ \underline{{\bf Notation}} \begin{itemize} \item $\langle x\rangle=(1+|x|^2)^{1/2}$. \item $\langle D\rangle$ is the Fourier multiplier with symbol $\langle\xi\rangle$. \item $X=(x,\xi)\in{\mathbb R}^{2n}$. \item $e_s(X):=\langle X\rangle^s$ and $E_{s}:=e_s^W(x,D)$; see also Appendix \ref{Appendix psdos}. \item $\mathcal{D}({\mathbb R}^n)=C_c^{\infty}({\mathbb R}^n)$, and $\mathcal{D}'({\mathbb R}^n)$ is the space of distributions. \item $\mathcal{S}({\mathbb R}^n)$ is the Schwartz space, and $\mathcal{S}'({\mathbb R}^n)$ is the space of tempered distributions. \item $\mathcal{B}(X,Y)$ is the space of bounded linear operators between Banach spaces $X$ and $Y$. \item $A\lesssim B$ if there exists a constant $C>0$ (depending only on fixed quantities) such that $A\leq CB$. \item $\langle u,v\rangle=\int_{{\mathbb R}^n}u(x)\overline{v}(x){\,\rm d} x$ for $u,v\in \mathcal{S}({\mathbb R}^n)$. \item If $X$ is a Banach space densely and continuously embedded in $L^2({\mathbb R}^n)$, we identify $L^2({\mathbb R}^n)$ with a dense subspace of $X'$. Thus, the duality pairing $\langle\cdot,\cdot\rangle_{X,X'}$ extends the $L^2$-scalar product $\langle\cdot,\cdot\rangle$. This is meant when we write $X\subset L^2({\mathbb R}^n)\subset X'$.
\item $\sigma(P)$ is the spectrum of $P$. \item $\dom(P)$ is the domain of $P$. \end{itemize} \section{Assumptions and main result}\label{section Assumptions and main result} We consider the Schr\"odinger operator \begin{align}\label{P} P=(-\I\nabla+A(x))^2+V(x),\quad \dom(P)=\mathcal{D}({\mathbb R}^n)\subset L^2({\mathbb R}^n),\quad n\geq 2. \end{align} Here, $A:{\mathbb R}^n\to {\mathbb R}^n$ is the vector potential and $V:{\mathbb R}^n\to {\mathbb R}$ is the electric potential. In the following, $\epsilon>0$ is a yet undetermined constant that will later be chosen sufficiently small. \vspace{10pt} \underline{\bf Assumptions on the potentials} Let $A=A_0+ A_1$ and $V=V_0+W+ V_1$ and assume that the following assumptions hold. \begin{enumerate} \item[(A1)] $A_0\in C^{\infty}({\mathbb R}^n,{\mathbb R}^n)$ and for every $\alpha\in{\mathbb N}^n$, $|\alpha|\geq 1$, there exist constants $C_{\alpha},\epsilon_{\alpha}>0$ such that \begin{align} |\partial_x^{\alpha} A_0(x)|\leq C_{\alpha},\quad|\partial^{\alpha}B_0(x)|\leq C_{\alpha}\langle x\rangle^{-1-\epsilon_{\alpha}},\quad x\in{\mathbb R}^n.\label{assumptions on A_0} \end{align} Here, $B_0=(B_{0,j,k})_{j,k=1}^n$ is the magnetic field, i.e. \begin{align*} B_{0,j,k}(x)=\partial_jA_{0,k}(x)-\partial_kA_{0,j}(x). \end{align*} \item[(A2)] $V_0\in C^{\infty}({\mathbb R}^n,{\mathbb R})$ and for every $\alpha\in{\mathbb N}^n$, $|\alpha|\geq 2$, there exist constants $C_{\alpha}>0$ such that \begin{align} |\partial_x^{\alpha} V_0(x)|&\leq C_{\alpha},\quad x\in{\mathbb R}^n.\label{assumptions on V_0} \end{align} \item[(A3)] $W\in L^{\infty}({\mathbb R}^n,{\mathbb R})$. \item[(A4)] $A_1\in L^{\infty}({\mathbb R}^n,{\mathbb R}^n)$ and there exists $\delta>0$ such that \begin{align*} |A_1(x)|\lesssim \epsilon\langle x\rangle^{-1-\delta}\quad\mbox{for almost every }x\in{\mathbb R}^n.
\end{align*} Moreover, assume that one of the following additional assumptions holds: \begin{enumerate} \item[(A4a)] $A_1\in \Lip({\mathbb R}^n,{\mathbb R}^n)$ and \begin{align*} |\nabla A_1(x)|\lesssim \epsilon\langle x\rangle^{-1-\delta}\quad\mbox{for almost every }x\in{\mathbb R}^n. \end{align*} \item[(A4b)] There exists $\delta'\in (0,\delta)$ such that $\langle x\rangle^{1+\delta'} A_1\in \dot{W}^{\frac{1}{2},2n}({\mathbb R}^n;{\mathbb R}^n)$, with $\|\langle x\rangle^{1+\delta'}A_1\|_{\dot{W}^{\frac{1}{2},2n}}\lesssim \epsilon$. \end{enumerate} \item[(A5)] Assume that $V_1\in L^{r}({\mathbb R}^n,{\mathbb R})$, with $\|V_1\|_{L^r}\lesssim \epsilon$, for some $r\in(1,\infty]$ if $n=2$ and $r \in[n/2,\infty]$ if $n\geq 3$. \end{enumerate} \begin{remark} We can relax the assumption \eqref{assumptions on V_0} in the same way as in \cite{KochTataru2005Hermite}. \begin{enumerate} \item[(A2')] $V_0\in C^{2}({\mathbb R}^n,{\mathbb R})$ and for every $\alpha\in{\mathbb N}^n$, $|\alpha|=2$, there exist constants $C_{\alpha}$ such that \begin{align} |\partial_x^{\alpha} V_0(x)|\leq C_{\alpha},\quad x\in{\mathbb R}^n.\label{assumptions on V_0'} \end{align} \end{enumerate} To see this, we decompose $V_0$ into its low-frequency and its high-frequency parts, $V_0=V_0^{\rm low}+V_0^{\rm high}$. Here, $V_0^{\rm low}:=\chi(D)V_0$, where $\chi\in \mathcal{D}({\mathbb R}^n)$ is supported in $B(0,2)$ and $\chi\equiv 1$ in $B(0,1)$. Then, by Bernstein's inequalities, $V_0^{\rm low}$ satisfies (A2). On the other hand, $V_0^{\rm high}\in L^{\infty}({\mathbb R}^n)$, so this term can be absorbed into $W$. It would be natural to also try to relax the smoothness assumption on $A_0$ to $C^1({\mathbb R}^n,{\mathbb R}^n)$. However, if we just split such an $A_0$ into high and low frequency parts, then $A_0^{\rm high}$ will have no decay, and thus it cannot be absorbed into the perturbative part. Moreover, even if it could be absorbed, it would not be small.
\end{remark} \begin{remark} Assumption (A4b) was used in \cite{ErdoganEtAl2009}, where it was also remarked that a condition similar to (A4a), but with $|\nabla A_1(x)|\lesssim\langle x\rangle^{-2-\delta}$, would imply (A4b). Here we state both conditions, because neither is weaker than the other. There is an obvious trade-off between decay and regularity. \end{remark} We will consider $P$ as a small perturbation of the Schr\"odinger operator \begin{align}\label{P_0 tilde} \widetilde{P}_0=(-\I\nabla+A_0(x))^2+\widetilde{V}_0(x),\quad \dom(\widetilde{P}_0)=\mathcal{D}({\mathbb R}^n) \end{align} where $\widetilde{V}_0:=V_0+W$. In the case $W=0$, we also write \begin{align}\label{P_0} P_0=(-\I\nabla+A_0(x))^2+V_0(x),\quad \dom(P_0)=\mathcal{D}({\mathbb R}^n). \end{align} Our resolvent estimate involves the following spaces. Let $X$ be the completion of $\mathcal{D}({\mathbb R}^n)$ with respect to the norm \begin{center} \fbox{$\|u\|_X:=\|u\|_{L^2}+\|\langle x\rangle^{-\frac{1+\mu}{2}}E_{1/2} u\|_{L^2}+\|u\|_{L^{q}} $} \end{center} where $0<\mu\leq \delta$ is fixed. Then its topological dual $X'$ is the space of distributions $f\in \mathcal{D}'({\mathbb R}^n)$ such that the norm \begin{center} \fbox{$\|f\|_{X'}:=\inf_{f=f_1+f_2+f_3}\left(\|f_1\|_{L^2}+\|\langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2} f_2\|_{L^2}+\|f_3\|_{L^{q'}}\right)$} \end{center} is finite. Here, $q=2r'$, i.e. \begin{align*} \begin{cases} q\in [2,\infty) \quad &\mbox{if } n=2,\\ q\in [2,2n/(n-2)]\quad &\mbox{if } n\geq 3. \end{cases} \end{align*} Our main result is the following theorem. \begin{theorem}\label{thm resolvent estimate X'X} Assume that $A$, $V$ satisfy Assumptions (A1)--(A5). Moreover, let $a>0$ be fixed. Then there exists $\epsilon_0>0$ such that for all $\epsilon<\epsilon_0$, we have the estimate \begin{align}\label{eq. resolvent estimate main theorem} \|u\|_{X}\leq C \|(P-z)u\|_{X'} \end{align} for all $z\in{\mathbb C}$ with $|\im z|\geq a$ and for all $u\in \mathcal{D}({\mathbb R}^n)$.
The constants $\epsilon_0,C$ depend on $n$, $q$, $\mu$, $\delta$, $\delta'$, $a$, $\|W\|_{L^{\infty}}$ and on finitely many seminorms $C_{\alpha}$ in \eqref{assumptions on A_0} and \eqref{assumptions on V_0}. \end{theorem} \begin{remark} In Section \ref{section complex-valued potentials} we will state a more precise version of the estimate~\eqref{eq. resolvent estimate main theorem} that takes into account the dependence of $C$ on $|\im z|$ for large values of $|\im z|$. We will also allow $V_1$, $A_1$ to be complex-valued. \end{remark} \begin{remark} Although we always assume that $a>0$ is fixed, it can be seen by inspection of the proof that the constant in \eqref{eq. resolvent estimate main theorem} is $C=\mathcal{O}(a^{-1})$ as $a\to 0$. Moreover, one could replace the condition $|\im z|\geq a>0$ by the similar condition $\mathrm{dist}(z,\sigma(\widetilde{P}_0))\geq a'>0$ or, a fortiori, $\mathrm{dist}(z,\sigma(P_0))\geq a'+\|W\|_{L^{\infty}}$. \end{remark} \begin{remark} It is instructive to consider the example $P=-\Delta$ (i.e.\ $A=V=0$). Assume that $\re z>0$. Then, by a scaling argument, the estimate \eqref{eq. resolvent estimate main theorem} implies that \begin{align}\label{uniform Sobolev Laplacian} \|u\|_{L^q}\leq C \epsilon^{\frac{n}{2}(1/q'-1/q)-1}\|(-\Delta-1\pm \I\epsilon)u\|_{L^{q'}} \end{align} for all $\epsilon>0$. In the case $1/q'-1/q=2/n$, this is a special case of the uniform Sobolev inequality in \cite{KenigRuizSogge1987}. \end{remark} \begin{remark} It is clear that if $A,V\neq 0$, a uniform estimate of the form \eqref{uniform Sobolev Laplacian} for $1/q'-1/q=2/n$ cannot hold in general, due to the possible presence of eigenvalues. \end{remark} \section{$L^{q'}\to L^{q}$ and smoothing estimates}\label{section Lq'Lq and smoothing estimates} In this section, we will establish the $L^{q'}\to L^q$ estimate and the smoothing estimate for $P_0$. These will be the main ingredients in the proof of Theorem \ref{thm resolvent estimate X'X}.
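Before doing so, let us make explicit the scaling argument behind \eqref{uniform Sobolev Laplacian}; this is only a sketch, and the choice $\lambda=a/\epsilon$ below is one convenient normalization. For $P=-\Delta$, the estimate \eqref{eq. resolvent estimate main theorem} yields $\|u\|_{L^q}\leq C\|(-\Delta-z)u\|_{L^{q'}}$ for $|\im z|\geq a$. Given $\epsilon>0$, set $\lambda:=a/\epsilon$ and $u_{\lambda}(x):=u(\lambda^{1/2}x)$, so that
\begin{align*}
(-\Delta-\lambda(1\pm\I\epsilon))u_{\lambda}(x)=\lambda\left[(-\Delta-(1\pm\I\epsilon))u\right](\lambda^{1/2}x).
\end{align*}
Since $|\im(\lambda(1\pm\I\epsilon))|=\lambda\epsilon=a$ and $\|u_{\lambda}\|_{L^p}=\lambda^{-\frac{n}{2p}}\|u\|_{L^p}$, applying the estimate to $u_{\lambda}$ at $z=\lambda(1\pm\I\epsilon)$ gives
\begin{align*}
\|u\|_{L^q}\leq C\lambda^{1-\frac{n}{2}\left(\frac{1}{q'}-\frac{1}{q}\right)}\|(-\Delta-(1\pm\I\epsilon))u\|_{L^{q'}}
=Ca^{1-\frac{n}{2}\left(\frac{1}{q'}-\frac{1}{q}\right)}\epsilon^{\frac{n}{2}\left(\frac{1}{q'}-\frac{1}{q}\right)-1}\|(-\Delta-(1\pm\I\epsilon))u\|_{L^{q'}},
\end{align*}
which is \eqref{uniform Sobolev Laplacian}.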
We follow the general approach of Koch and Tataru in \cite[Section 4]{KochTataru2009}, where a version of the resolvent estimate \eqref{eq. resolvent estimate main theorem} was proved for the Hermite operator. In fact, the bound~\eqref{eq. resolvent estimate main theorem} follows in this special case by combining Proposition~4.2, Proposition~4.6 and Proposition~4.7 in \cite{KochTataru2009}. We start with the $L^{q'}\to L^q$ estimate. The proof of Theorem \ref{thm LpLp'} below follows the same arguments as that of Proposition 4.6 in \cite{KochTataru2009} for the Hermite operator, see also \cite[Section 2]{KochTataru2005Hermite}. Note, however, that the symbol of $P_0$ does not satisfy the bounds (3) in \cite{KochTataru2005Hermite}, which in their case imply the short-time dispersive estimate for the propagator. Here, we use results of Yajima \cite{Yajima1991}, see also \cite{MR2172692}. \begin{theorem}[$L^{q'}\to L^q$ estimate]\label{thm LpLp'} Assume that $A_0$, $V_0$ satisfy Assumptions (A1)--(A2). Let $q\in[2,\infty)$ if $n=2$ and $q\in [2,2n/(n-2)]$ if $n\geq 3$, and let $a>0$ be fixed. Then there exists $C_0>0$ such that for all $z\in{\mathbb C}$ with $|\im z|\geq a$ and for all $u\in \mathcal{D}({\mathbb R}^n)$, we have the estimate \begin{align}\label{eq. Lq'Lq in thm} |\im z|^{1/2}\|u\|_{L^2}+\|u\|_{L^q}\leq C_0 \|(P_0-z)u\|_{L^{q'}}. \end{align} The constant $C_0$ depends on $n$, $q$, $a$, and on finitely many seminorms $C_{\alpha}$ in \eqref{assumptions on A_0} and \eqref{assumptions on V_0}. \end{theorem} \begin{proof} By \cite[Theorem 6]{Yajima1991}, $P_0$ is essentially selfadjoint. By abuse of notation we continue to denote the selfadjoint extension of $P_0$ by the same symbol. It then follows that $\sigma(P_0)\subset{\mathbb R}$ and \begin{align}\label{trivial L2L2 estimate with imaginary part} \|(P_0-z)^{-1}\|_{\mathcal{B}(L^2)}\leq \frac{1}{|\im z|}, \end{align} where $(P_0-z)^{-1}$ is the $L^2$-resolvent.
By \cite[Theorem 4]{Yajima1991}, we have the short-time dispersive estimate \begin{align}\label{dispersive estimate L} \|{\rm e}^{\I tP_0}\|_{\mathcal{B}(L^1,L^{\infty})}\lesssim |t|^{-n/2},\quad t\leq T\ll 1. \end{align} By now standard abstract arguments \cite{KeelTao1998}, the full range of Strichartz estimates for the Schr\"odinger equation holds: If $(p_1,q_1)$, $(p_2,q_2)$ are sharp Schr\"odinger-admissible, i.e. if \begin{align}\label{Strichartz admissible} \frac{2}{p_i}+\frac{n}{q_i}=\frac{n}{2},\quad p_i\in[2,\infty],\quad q_i\in[2,2n/(n-2)],\quad (n,p_i,q_i)\neq (2,2,\infty), \end{align} and if $u$ satisfies the Schr\"odinger equation for $P_0$, \begin{align}\label{Schroedinger equation for L} \I\partial_t u-P_0 u=f,\quad u|_{t=0}=u_0, \end{align} then \begin{align}\label{Strichartz estimates} \|u\|_{L^{p_1}_t([0,T];L^{q_1}_x)}\lesssim \|u_0\|_{L^2_x}+\|f\|_{L^{p_2'}_t([0,T];L^{q_2'}_x)}. \end{align} For the non-endpoint case $q\neq 2n/(n-2)$, this also follows from \cite[Theorem 1]{Yajima1991}. We now fix a pair $(p,q)$ as in \eqref{Strichartz admissible} and with $q$ as in the theorem for the rest of the proof. Let $\Pi_{[k,k+1]}$ be the spectral projection of $P_0$ onto the interval $[k,k+1]$, $k\in{\mathbb Z}$. For any $u\in L^2({\mathbb R}^n)$, the function $v(x,t):={\rm e}^{-\I tk}\Pi_{[k,k+1]}u(x)$ satisfies the estimate \begin{align}\label{eq. energy estimate} \|(\I\partial_t-P_0)v\|_{L^{\infty}_t([0,T];L^2_x)}\leq \|u\|_{L^2_x}. \end{align} Applying the Strichartz estimates \eqref{Strichartz estimates} with $(p_1,q_1)=(p,q)$ and $(p_2,q_2)=(\infty,2)$ to $v(x,t)$, it follows that \begin{align}\label{O(1) bound spectral projections} \|\Pi_{[k,k+1]}u\|_{L^{q}}\lesssim \|u\|_{L^2} \end{align} This argument can be found in \cite{KochTataru2005Hermite}, see Corollary 2.3 there. 
Since $\Pi_{[k,k+1]}^*=\Pi_{[k,k+1]}=\Pi_{[k,k+1]}^2$, the dual as well as the $TT^*$ version of \eqref{O(1) bound spectral projections} yield \begin{align}\label{dual and TT* version of O(1) bound spectral projections} \|\Pi_{[k,k+1]}u\|_{L^2}\lesssim \|u\|_{L^{q'}},\quad\|\Pi_{[k,k+1]}u\|_{L^q}\lesssim \|u\|_{L^{q'}}. \end{align} Using the first inequality in \eqref{dual and TT* version of O(1) bound spectral projections}, orthogonality of the spectral projections and the spectral theorem, we see that for any $u\in L^2({\mathbb R}^n)\cap L^{q'}({\mathbb R}^n)$, we have \begin{equation}\begin{split} \|(P_0-z)^{-1}u\|_{L^2}^2&=\sum_{k\in{\mathbb Z}}\|(P_0-z)^{-1}\Pi_{[k,k+1]}u\|_{L^2}^2\\ &\leq \sum_{k\in{\mathbb Z}}\sup_{\lambda\in[k,k+1]}|\lambda-z|^{-2}\|\Pi_{[k,k+1]}u\|_{L^2}^2 \lesssim |\im z|^{-1}\|u\|_{L^{q'}}^2. \end{split} \end{equation} Here, we estimated the sum by \begin{align*} \sum_{k\in{\mathbb Z}}\sup_{\lambda\in[k,k+1]}|\lambda-z|^{-2}\leq \sum_{|k-\lfloor{\re z}\rfloor|\leq 3}\frac{1}{a|\im z|}+\sum_{|k-\lfloor{\re z}\rfloor|> 3}\frac{1}{(\lambda-\re z)^2+(\im z)^2} \end{align*} where $\lfloor{\re z}\rfloor$ is the integer part of $\re z$. Setting $k':=k-\lfloor{\re z}\rfloor\in{\mathbb Z}$ in the second sum, we have $|k'|>3$, and \begin{align*} |\lambda-\re z|\geq |k'|-|\lambda-k|-|\re z-\lfloor{\re z}\rfloor|\geq |k'|-2>1. \end{align*} Changing variables $k\to k'$, it follows that the second sum is $\mathcal{O}(|\im z|^{-1})$. By a density argument, we have \begin{align}\label{eq. Lq'L2} \|(P_0-z)^{-1}\|_{\mathcal{B}(L^{q'},L^2)}\lesssim |\im z|^{-1/2}. \end{align} This proves the first half of \eqref{eq. Lq'Lq in thm}. We now apply the Strichartz estimates \eqref{Strichartz estimates} to $v(x,t)={\rm e}^{-\I tz}u(x)$, assuming that $\im z<0$. Otherwise, we choose $v(x,t)={\rm e}^{\I tz}u(x)$. Note that $v$ satisfies the Schr\"odinger equation \eqref{Schroedinger equation for L} with \begin{align*} f(x,t)={\rm e}^{-\I tz}(z-P_0)u(x). 
\end{align*} Applying the Strichartz estimates \eqref{Strichartz estimates} with $(p_1,q_1)=(p_2,q_2)=(p,q)$, we obtain \begin{align}\label{almost resolvent estimate with Strichrtz} \|u\|_{L^{q}}\lesssim \|u\|_{L^2}+\|(P_0-z)u\|_{L^{q'}}. \end{align} Combining \eqref{eq. Lq'L2} and \eqref{almost resolvent estimate with Strichrtz}, we arrive at \begin{align*} \|u\|_{L^{q}}\lesssim \|(P_0-z)u\|_{L^{q'}}. \end{align*} This proves the second half of \eqref{eq. Lq'Lq in thm}. \end{proof} We now establish the smoothing estimate for $P_0$. We follow Doi \cite{Doi2005}, who considered the corresponding smoothing estimate for the propagator, in a more general situation (time-dependent potentials and non-trivial metric) than we do here. One way to prove the smoothing estimate would be to appeal to the corresponding smoothing estimate for the propagator ${\rm e}^{\I tP_0}$ \cite[Theorem 2.8]{Doi2005} and use the same strategy as in the proof of Theorem \ref{thm LpLp'} to deduce the resolvent smoothing estimate. For the sake of clarity, we decided to provide a direct proof of the resolvent smoothing estimate for the special case considered here. Let us also mention that Robbiano and Zuily \cite{RobbianoZuily2008}, generalizing a result of Yajima and Zhang \cite{YajimaZhang2004}, proved a smoothing estimate for the propagator similar to that of \cite{Doi2005} under partly more general ($V_0$ can grow superquadratically and $A_0$ superlinearly) and partly more restrictive assumptions (they impose stronger symbol type conditions on $V_0$ and $A_0$). Although the technique of~\cite{RobbianoZuily2008} is simpler than that of~\cite{Doi2005}, it is not directly applicable under our assumptions. For our purpose, we do not need the full strength of the calculus used in \cite{Doi2005}. 
It will be sufficient to use the following metrics, \begin{align}\label{metrics g0 g1} g_0={\,\rm d} x^2+\frac{{\,\rm d} \xi^2}{\langle X\rangle ^2},\quad g_1=\frac{{\,\rm d} x^2}{\langle x\rangle ^2}+\frac{{\,\rm d} \xi^2}{\langle \xi\rangle ^2}, \end{align} where $X=(x,\xi)\in {\mathbb R}^{2n}$. We refer to Appendix \ref{Appendix psdos} for more details about the calculus we use here and for the notation. \begin{lemma}\label{lemma commutator estimate} Assume (A1)--(A2). Then there exists $\lambda\in S_1(1,\langle x\rangle,g_0)$ such that the following hold. \begin{enumerate} \item There exist constants $C,c>0$ such that \begin{align*} -\{|\xi|^2,\lambda(x,\xi)\}\geq c\langle x\rangle^{-1-\mu}e_{1/2}(x,\xi)^2-C,\quad (x,\xi)\in {\mathbb R}^{2n}. \end{align*} \item $[P_0,\lambda^W]-\frac{1}{\I}\{|\xi|^2,\lambda\}^w\in \Op^W(S(1,g_0))$. \item We have the positive commutator estimate \begin{align}\label{eq. positive commutator} \|\langle x\rangle^{-\frac{1+\mu}{2}}E_{1/2} u\|_{L^{2}}^2\lesssim \langle -\I[P_0,\lambda^W]u,u\rangle+\|u\|_{L^2}^2. \end{align} \end{enumerate} \end{lemma} \begin{proof} (1) The claim follows from \cite[Lemma 8.3]{Doi2005}. We give a sketch of the proof for the simpler case considered here. Let $\psi,\chi\in C^{\infty}({\mathbb R})$ be such that $0\leq \psi,\chi\leq 1$, $\supp(\psi)\subset [1/4,\infty)$, $\psi(t)=1$ for $t\in [1/2,\infty)$, $\psi'\geq 0$, $\supp(\chi)\subset (-\infty,1]$ and $\chi(t)=1$ for $t\leq 1/2$. Further, set $\psi_+(t)=\psi(t)$, $\psi_-(t)=\psi(-t)$ and \begin{align*} \psi_0(t)=1-\psi_+(t)-\psi_-(t),\quad \psi_1(t)=\psi_-(t)-\psi_+(t)=-\operatorname{sgn}(t)\psi(|t|). 
\end{align*} We define the function $\lambda:{\mathbb R}^{2n}\to{\mathbb R}$ by \begin{align}\label{lambda} -\lambda=\left( \theta\psi_0(\theta)-(M_0-\langle a\rangle^{-{\mu}})\psi_1(\theta)\right)\chi(r), \end{align} where $M_0>2$ is a constant, $a(x,\xi)=\frac{x\cdot\xi}{\langle\xi\rangle}$, $\theta(x,\xi)=\frac{a(x,\xi)}{\langle x\rangle}$ and $r(x,\xi)=\frac{\langle x\rangle}{\langle\xi\rangle}$. The claim that $\lambda\in S_1(1,\langle x\rangle,g_0)$ follows from the fact that $\langle x\rangle\leq \langle\xi\rangle$ on the support of $\chi(r)$. We write $h_0(\xi)=|\xi|^2$ and denote by $H_{h_0}=2\xi\cdot\nabla_x$ the corresponding Hamiltonian vector field. Observe that on the support of $\psi_0(\theta)$, we have $\theta\leq 1/2$. This implies that \begin{align}\label{Hh0theta} -H_{h_0}\theta=\frac{2}{\langle x\rangle}\left(\frac{|\xi|^2}{\langle \xi\rangle}-\frac{x\cdot\xi}{\langle x\rangle}\theta\right)\geq \frac{\langle \xi\rangle}{\langle x\rangle}-2. \end{align} It can then be shown that \begin{align*} -H_{h_0}\lambda &=\left((H_{h_0}\theta)\psi_0(\theta)+\mu\langle a\rangle^{-\mu-2}|a|(H_{h_0}a)\psi_1(\theta)\right)\chi(r)\\ &+(H_{h_0}\theta)(M_0-\langle a\rangle^{-\mu}-|\theta|)(\psi_+'(\theta)-\psi_-'(\theta))\chi(r)\\ &+\left(\theta\psi_0(\theta)-(M_0-\langle a\rangle^{-\mu})\psi_1(\theta)\right)\chi'(r)(H_{h_0}r)\\ &\geq \left((H_{h_0}\theta)\psi_0(\theta)+\mu\langle a\rangle^{-\mu-2}|a|(H_{h_0}a)\psi_1(\theta)\right)\chi(r)-C_1\\ &\geq c_1\left(\langle\xi\rangle\langle x\rangle^{-1}\psi_0(\theta)+\langle x\rangle^{-\mu-1}\langle\xi\rangle\psi(|\theta|)\right)\chi(r)-C_2\\ &\geq c_2\langle x\rangle^{-1-\mu} e_{1/2}(x,\xi)^2-C_3. \end{align*} In particular, we used that $\psi_+'(t)-\psi_-'(t)\geq 0$ and $\psi_0(t)+\psi(|t|)=1$. 
(2) We have, by slight abuse of notation, \begin{align*} [P_0,\lambda^W]-\frac{1}{\I}\{P_0,\lambda\}^W=\frac{1}{\I}A+\frac{1}{\I}B \end{align*} where \begin{align*} A=\I(P_0\#\lambda-\lambda\#P_0)^W-\{P_0,\lambda\}^W,\quad B=\{P_0,\lambda\}^W-\{|\xi|^2,\lambda\}^W. \end{align*} Lemma \ref{Lemma 3.4 Doi} and Proposition \ref{Proposition membership to symbol classes} imply that $A,B\in \Op^W(S(1,g_0))$. (3) follows from (1)--(2) together with Corollary \ref{corollary boundedness on L2}, Theorem \ref{theorem sharp Garding} and the calculus for adjoints (Proposition \ref{Proposition selfadjointness of Weyl quantization}) and compositions (Theorem \ref{theorem composition}). \end{proof} To state the following theorem it will be convenient to introduce the spaces $Y\supset X$ and $Y'\subset X'$ with norms \begin{align} \|u\|_Y&:=\|u\|_{L^2}+\|\langle x\rangle^{-\frac{1+\mu}{2}}E_{1/2} u\|_{L^2},\label{definition of Y}\\ \|f\|_{Y'}&:=\inf_{f=f_1+f_2}\left(\|f_1\|_{L^2}+\|\langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2} f_2\|_{L^2}\right). \end{align} Note that $X=Y\cap L^q$ and $X'=Y'+L^{q'}$. \begin{theorem}[Smoothing estimate]\label{thm smoothing} Let $\mu>0$, and let $a>0$ be fixed. Then for all $z\in{\mathbb C}$ with $|\im z|\geq a$ and for all $u\in \mathcal{D}({\mathbb R}^n)$, we have the estimate \begin{align}\label{eq. smoothing estimate thm} \|u\|_{Y}\leq C_0\|(P_0-z)u\|_{Y'}. \end{align} The constant $C_0$ depends on $n$, $\mu$, $a$, and on finitely many seminorms $C_{\alpha}$ in \eqref{assumptions on A_0} and \eqref{assumptions on V_0}. \end{theorem} \begin{proof} Inequality \eqref{eq. smoothing estimate thm} follows from the following four inequalities: For all $u\in \mathcal{D}({\mathbb R}^n)$, we have \begin{align} \|u\|_{L^2}&\leq |\im z|^{-1}\|(P_0-z)u\|_{L^2},\label{eq. smoothing 1}\\ \|u\|_{L^2}&\lesssim |\im z|^{-1/2}\|\langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2} (P_0-z)u\|_{L^2},\label{eq. 
smoothing 2}\\ \|\langle x\rangle^{-\frac{1+\mu}{2}}E_{1/2}u\|_{L^2}&\lesssim |\im z|^{-1/2}\|(P_0-z)u\|_{L^2},\label{eq. smoothing 3}\\ \|\langle x\rangle^{-\frac{1+\mu}{2}}E_{1/2}u\|_{L^2}&\lesssim\|\langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2}(P_0-z)u\|_{L^2}.\label{eq. smoothing 4} \end{align} Again, \eqref{eq. smoothing 1} immediately follows from \eqref{trivial L2L2 estimate with imaginary part}. Inequality \eqref{eq. smoothing 2} follows from \eqref{eq. smoothing 3} by a duality argument that we shall postpone to the end of the proof. It remains to prove \eqref{eq. smoothing 3} and \eqref{eq. smoothing 4}. To this end we use \eqref{eq. positive commutator}. Note that we may replace $P_0$ by $P_0-z$ in \eqref{eq. positive commutator} since the commutator with $z\in{\mathbb C}$ is zero. Since $\lambda\in S(1,g_0)$, the $L^2$-boundedness of such symbols (Corollary \ref{corollary boundedness on L2}) and the Cauchy-Schwarz inequality yield the estimate \begin{align*} \|u\|_{Y}^2&\lesssim\langle -\I[P_0-z,\lambda^W]u,u\rangle+\|u\|_{L^2}^2\\ &\leq 2\|\lambda^Wu\|_{L^{2}}\left(\|(P_0-z) u\|_{L^{2}}+|\im z|\|u\|_{L^2}\right)+\|u\|_{L^2}^2\\ &\leq (4C_{\lambda}|\im z|^{-1}+|\im z|^{-2})\|(P_0-z) u\|_{L^{2}}^2\\ &\lesssim |\im z|^{-1}\|(P_0-z) u\|_{L^{2}}^2 \end{align*} where $C_{\lambda}:=\|\lambda^W\|_{\mathcal{B}(L^2)}$. Note that $\lambda^W$ is self-adjoint by Proposition \ref{Proposition selfadjointness of Weyl quantization}. In the last inequality, we also used~\eqref{trivial L2L2 estimate with imaginary part} again. This proves~\eqref{eq. smoothing 3}. Similarly, \eqref{eq. 
positive commutator} and duality of $Y,Y'$ yield \begin{align*} \|u\|_{Y}^2&\lesssim\langle -\I[P_0-z,\lambda^W]u,u\rangle+\|u\|_{L^2}^2\\ &\leq 2\|\lambda^Wu\|_{Y}\|(P_0-z) u\|_{Y'}+2|\im z|\|\lambda^Wu\|_{L^2}\|u\|_{L^2}+\|u\|_{L^2}^2\\ &\leq \epsilon \|\lambda^Wu\|_{Y}^2+\epsilon^{-1}\|(P_0-z) u\|_{Y'}^2+(1+2C_{\lambda}|\im z|)\|u\|_{L^2}^2\\ &\leq \epsilon (C_{\lambda}')^2 \|u\|_{Y}^2+\epsilon^{-1}\|(P_0-z) u\|_{Y'}^2+(1+2C_{\lambda}|\im z|)\|u\|_{L^2}^2 \end{align*} for any $\epsilon>0$. In addition to the $L^2$-boundedness of $\lambda^W$, we used that the commutator $[\langle x\rangle^{-\frac{1+\mu}{2}}E_{1/2}, \lambda^W]$ is $L^2$-bounded (Corollary \ref{corollary boundedness on L2}), being in $\Op^W(S(1,g_1))$ by Corollary \ref{Corollary commutator} and Proposition \ref{Proposition membership to symbol classes}. This implies that $C_{\lambda}':=\|\lambda^W\|_{\mathcal{B}(Y)}<\infty$. Hiding the term with $\epsilon$ in the above inequality on the left, we get \begin{align}\label{eq. double smoothing} \|u\|_{Y}^2\lesssim \|(P_0-z) u\|_{Y'}^2+|\im z|\|u\|_{L^2}^2. \end{align} Combining \eqref{eq. smoothing 2} and \eqref{eq. double smoothing}, we get \eqref{eq. smoothing 4}. We now provide the details of the duality argument leading to \eqref{eq. smoothing 2}. From \eqref{eq. smoothing 1}, \eqref{eq. smoothing 3}, we see that \begin{align*} \|(P_0-z)^{-1}f\|_{Y}\lesssim|\im z|^{-1/2}\|f\|_{L^2},\quad \mbox{for all } f\in (P_0-z)\mathcal{D}({\mathbb R}^n)\subset L^2({\mathbb R}^n). \end{align*} Since $(P_0-z)^{-1}$ is $L^2$-bounded and $\mathcal{D}({\mathbb R}^n)$ is a core\footnote{This is equivalent to the essential selfadjointness of $P_0$ on $\mathcal{D}({\mathbb R}^n)$.} for $P_0$, the set $(P_0-z)\mathcal{D}({\mathbb R}^n)$ is dense in $L^2({\mathbb R}^n)$ \cite[Problem III.5.19]{Kato1966}. Therefore, \begin{align*} (P_0-z)^{-1}\in \mathcal{B}(L^2,Y) \mbox{ with } \|(P_0-z)^{-1}\|_{ \mathcal{B}(L^2,Y)}\lesssim|\im z|^{-1/2}. 
\end{align*} Let $f\in L^2({\mathbb R}^n)$, $g\in \mathcal{D}({\mathbb R}^n)$. Then \begin{align*} |\langle f,g\rangle|\leq \|(P_0-\overline{z})^{-1}f\|_{Y}\|(P_0-z)g\|_{Y'}\lesssim |\im z|^{-1/2}\|f\|_{L^2}\|(P_0-z)g\|_{Y'} \end{align*} Taking the supremum over all $f\in L^2({\mathbb R}^n)$ with $\|f\|_{L^2}=1$, we arrive at \begin{align*} \|g\|_{L^2}\lesssim|\im z|^{-1/2}\|(P_0-z)g\|_{Y'}, \end{align*} proving \eqref{eq. smoothing 2}. \end{proof} \section{Proof of Theorem \ref{thm resolvent estimate X'X}}\label{section Proof main theorem, general case} We first prove Theorem \ref{thm resolvent estimate X'X} for $P_0$ and for $\widetilde{P}_0$. \begin{proof}[Proof of Theorem \ref{thm resolvent estimate X'X} for $P_0$] Since $X=Y\cap L^q$ and $X'=Y'+L^{q'}$, inequality \eqref{eq. resolvent estimate main theorem} is equivalent to the following four inequalities: For all $u\in \mathcal{D}({\mathbb R}^n)$, we have \begin{align} \|u\|_{Y}&\lesssim\|(P_0-z)u\|_{Y'},\label{9 inequalities 1}\\ \|u\|_{Y}&\lesssim\|(P_0-z)u\|_{L^{q'}},\label{9 inequalities 2}\\ \|u\|_{L^q}&\lesssim\|(P_0-z)u\|_{Y'},\label{9 inequalities 3}\\ \|u\|_{L^q}&\lesssim\|(P_0-z)u\|_{L^{q'}}\label{9 inequalities 4}. \end{align} We have already proved \eqref{9 inequalities 1} and \eqref{9 inequalities 4} in Theorem \ref{thm smoothing} and Theorem \ref{thm LpLp'}, respectively. Moreover, \eqref{9 inequalities 3} follows from \eqref{9 inequalities 2} by a duality argument. Since it is a bit more involved than the previous one, we relegate its proof to Appendix \ref{appendix proof duality argument}. It remains to prove~\eqref{9 inequalities 2}. Here we use that $\lambda^W\in\mathcal{B}(L^q)$ since $1<q<\infty$, see Corollary \ref{Corollary Lp boundedness}. Denoting $C_{\lambda,q}:=\|\lambda^W\|_{\mathcal{B}(L^q)}$, it then follows from \eqref{eq. 
positive commutator} that \begin{align*} \|u\|_{Y}^2&\lesssim\langle -\I[P_0-z,\lambda^W]u,u\rangle+\|u\|_{L^2}^2\\ &\leq 2\|\lambda^Wu\|_{L^q}\|(P_0-z)u\|_{L^{q'}}+2|\im z|\|\lambda^Wu\|_{L^2}\|u\|_{L^2}+\|u\|_{L^2}^2\\ &\leq \|\lambda^Wu\|_{L^q}^2+\|(P_0-z)u\|_{L^{q'}}^2+(1+2C_{\lambda}|\im z|)\|u\|_{L^2}^2\\ &\leq C_{\lambda,q}^2\|u\|_{L^q}^2+\|(P_0-z)u\|_{L^{q'}}^2+(1+2C_{\lambda}|\im z|)\|u\|_{L^2}^2\\ &\lesssim\|(P_0-z)u\|_{L^{q'}}^2 \end{align*} where we used \eqref{eq. Lq'Lq in thm} in the last step. \end{proof} \begin{proof}[Proof of Theorem \ref{thm resolvent estimate X'X} for $\widetilde{P}_0$] An inspection of the previous proofs shows that we may add the perturbation $W\in L^{\infty}$ to the operator $P_0$, without any smallness assumption on the norm. This is because our estimates control $L^2$-norms. For the reader's convenience, we provide the argument for the first part of the proof of Theorem \ref{thm LpLp'}, i.e.\ for the spectral projection estimate \eqref{O(1) bound spectral projections}. To this end, we observe that the analogue of the energy estimate \eqref{eq. energy estimate}, \begin{align* \|(\I\partial_t-\widetilde{P}_0){\rm e}^{-\I t k} \widetilde{\Pi}_{[k,k+1]}u\|_{L^{\infty}_t([0,T];L^2_x)}\leq \|u\|_{L^2_x}, \end{align*} holds for the spectral projections $\widetilde{\Pi}_{[k,k+1]}$ of $\widetilde{P}_0$, by selfadjointness. We write \begin{align*} (\I\partial_t-P_0){\rm e}^{-\I t k} \widetilde{\Pi}_{[k,k+1]}u=(\I\partial_t-\widetilde{P}_0){\rm e}^{-\I t k} \widetilde{\Pi}_{[k,k+1]}u +{\rm e}^{-\I t k} W\widetilde{\Pi}_{[k,k+1]}u \end{align*} and apply the Strichartz estimates \eqref{Strichartz estimates} with $(p_1,q_1)=(p,q)$ and $(p_2,q_2)=(\infty,2)$. This yields \begin{align*} \| \widetilde{\Pi}_{[k,k+1]}u\|_{L^q}\lesssim (1+\|W\|_{L^{\infty}})\|u\|_{L^2}. \end{align*} Similarly, one can show that all the previous inequalities for $P_0$ continue to hold for $\widetilde{P}_0$ with the same modification of the constant. 
\end{proof} To treat the general case, we write $P=\widetilde{P}_0+L$, where \begin{align}\label{L} L=-2\I A_1\cdot \nabla-\I(\nabla\cdot A_1)+2 A_0\cdot A_1+A_1^2+V_1. \end{align} \begin{lemma}\label{L bounded} Under the assumptions (A3)--(A5), we have $ L\in\mathcal{B}(X,X')$, with $\|L\|_{\mathcal{B}(X,X')}\leq C_L\epsilon$ for all $\epsilon\in [0,1]$. The constant $C_L$ is independent of $\epsilon$ and depends only on $n$, $\mu$, $\delta$, $\delta'$ and on $\|\langle x\rangle^{-1}A_0\|_{L^{\infty}}$. \end{lemma} For the proof of Lemma \ref{L bounded} the following propositions will be used. \begin{proposition}\label{commutator bound A1 Taylor} Let $s\leq 1$ and $f\in \Lip({\mathbb R}^n)$. Then $[f,\langle D\rangle^{s}]\in\mathcal{B}(L^2)$. \end{proposition} \begin{proof} \cite[Proposition 4.1.A]{MR1121019}. \end{proof} \begin{proposition}\label{commutator bound A1 Schlag et al} Let $s\leq 1/2$ and $f\in \dot{W}^{\frac{1}{2},2n}({\mathbb R}^n)$. Then $[f,\langle D\rangle^{s}]\in\mathcal{B}(L^2)$. \end{proposition} \begin{proof} \cite[Lemma 2.2]{ErdoganEtAl2009}. \end{proof} \begin{proof}[Proof of Lemma \ref{L bounded}] Without loss of generality we may assume that $\epsilon=1$; the dependence of the bound on $\epsilon$ follows by scaling. We start with the estimate \begin{align*} \|L u\|_{X'}&\leq 2\|\langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2} A_1\cdot\nabla u\|_{L^2}+\|(\nabla\cdot A_1)u\|_{L^2} \\ &+2\|A_0\cdot A_1 u\|_{L^2}+\|A_1^2 u\|_{L^2}+\|V_1 u\|_{L^{q'}}. \end{align*} We immediately see from H\"older's inequality that \begin{align*} \|(\nabla\cdot A_1)u\|_{L^2}+2\|A_0\cdot A_1 u\|_{L^2}+\|A_1^2 u\|_{L^2}+\|V_1 u\|_{L^{q'}}\\ \leq (3+2\|\langle x\rangle^{-1}A_0\|_{L^{\infty}})\epsilon\|u\|_{X}. \end{align*} It remains to prove \begin{align}\label{only hard part in perturbation argument} \|\langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2} A_1\cdot\nabla u\|_{L^2}\lesssim\|u\|_{X}. \end{align} We set $\widetilde{A_1}(x):=\langle x\rangle^{1+\mu}A_1(x)$. 
Then, \eqref{only hard part in perturbation argument} would follow from \begin{align*} \langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2}\langle x\rangle^{-(1+\mu)} \widetilde{A_1}\cdot\nabla E_{-1/2}\langle x\rangle^{\frac{1+\mu}{2}}\in\mathcal{B}(L^2). \end{align*} Writing \begin{align*} \langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2}&=\left(\langle x\rangle^{\frac{1+\mu}{2}}E_{-1/2}\langle D\rangle^{1/2}\langle x\rangle^{-\frac{1+\mu}{2}}\right)\langle x\rangle^{\frac{1+\mu}{2}}\langle D\rangle^{-1/2},\\ E_{-1/2}\langle x\rangle^{\frac{1+\mu}{2}}&=\langle D\rangle^{-1/2}\langle x\rangle^{\frac{1+\mu}{2}}\left(\langle x\rangle^{-\frac{1+\mu}{2}}\langle D\rangle^{1/2}E_{-1/2}\langle x\rangle^{\frac{1+\mu}{2}}\right), \end{align*} and using the $L^2$-boundedness of the operators in brackets (a consequence of Corollary \ref{corollary boundedness on L2} and Proposition \ref{Proposition membership to symbol classes}), it remains to prove that \begin{align}\label{only hard part in perturbation argument tilde} B:=\langle x\rangle^{\frac{1+\mu}{2}}\langle D\rangle^{-1/2}\langle x\rangle^{-(1+\mu)} \widetilde{A_1}\cdot\nabla \langle D\rangle^{-1/2}\langle x\rangle^{\frac{1+\mu}{2}}\in\mathcal{B}(L^2). \end{align} After some commutations, we see that \begin{align*} B=B_0+B_1+B_2+B_3, \end{align*} where \begin{align*} B_0&:=\langle D\rangle^{-1/2}\widetilde{A}_1\cdot\nabla\langle D\rangle^{-1/2},\\ B_1&:=\langle x\rangle^{-\frac{1+\mu}{2}}[\langle D\rangle^{-1/2},\langle x\rangle^{\frac{1+\mu}{2}}]\widetilde{A}_1\cdot\nabla\langle D\rangle^{-1/2},\\ B_2&:=\langle x\rangle^{-\frac{1+\mu}{2}}\langle D\rangle^{-1/2}\widetilde{A}_1\cdot[\nabla\langle D\rangle^{-1/2},\langle x\rangle^{\frac{1+\mu}{2}}],\\ B_3&:=\langle x\rangle^{\frac{1+\mu}{2}}[\langle D\rangle^{-1/2},\langle x\rangle^{-(1+\mu)}]\widetilde{A}_1\cdot\nabla\langle D\rangle^{-1/2}\langle x\rangle^{\frac{1+\mu}{2}}. 
\end{align*} Since $\widetilde{A}_1\in L^{\infty}$ (recall that $0<\mu\leq \delta$), it is immediate that $B_2$ is bounded (again a consequence of Corollary \ref{corollary boundedness on L2} and Proposition \ref{Proposition membership to symbol classes}). Using the identity \begin{align}\label{commutator identity} [T^{-1},S]=-T^{-1}[T,S]T^{-1}, \end{align} with $T=\langle D\rangle^{1/2}$ and $S=\langle x\rangle^{\frac{1+\mu}{2}}$ we can write \begin{align*} B_1=-\langle x\rangle^{-\frac{1+\mu}{2}}\langle D\rangle^{-1/2}[\langle D\rangle^{1/2},\langle x\rangle^{\frac{1+\mu}{2}}]B_0. \end{align*} Hence, if $B_0$ is bounded, then so is $B_1$. A similar calculation shows that if $B_0$ is bounded, then $B_3$ is bounded. To prove the boundedness of $B_0$, we commute again, using \eqref{commutator identity}, to see that \begin{align*} B_0=-\langle D\rangle^{-1/2}[\langle D\rangle^{1/2},\widetilde{A}_1]\langle D\rangle^{-1/2}\cdot\nabla\langle D\rangle^{-1/2}+\widetilde{A}_1\langle D\rangle^{-1/2}\cdot\nabla\langle D\rangle^{-1/2}. \end{align*} Clearly, the second term is bounded. The first term is bounded by Proposition~\ref{commutator bound A1 Taylor} (if (A4a) is assumed) or Proposition~\ref{commutator bound A1 Schlag et al} (if (A4b) is assumed). \end{proof} \begin{proof}[Proof of Theorem \ref{thm resolvent estimate X'X} in the general case] By what we have already proved, Theorem~\ref{thm resolvent estimate X'X} holds for $\widetilde{P}_0$. Lemma \ref{L bounded} then yields that for $u\in \mathcal{D}({\mathbb R}^n)$, we have \begin{align*} \|(P-z)u\|_{X'}\geq \|(P_0-z)u\|_{X'}-\|Lu\|_{X'}\geq \left(\frac{1}{C_0}-C_L\epsilon\right)\|u\|_X\geq \frac{1}{2C_0}\|u\|_X, \end{align*} provided $\epsilon\leq (2C_LC_0)^{-1}$; here, $C_0$, $C_L$ are the constants in Theorem \ref{thm resolvent estimate X'X} and Lemma~\ref{L bounded}, respectively. 
\end{proof} \section{Application to complex-valued potentials}\label{section complex-valued potentials} In this Section we use the resolvent estimate to find upper bounds on the location of (complex) eigenvalues for Schr\"odinger operators with complex-valued potentials. For Schr\"odinger operators $-\Delta+V$ with decaying but singular potentials $V$, results of this type have been established e.g.\ in \cite{AAD01} in one dimension and in \cite{Frank11,FrankSimon2015,FanelliKrejcikVega2015} in higher dimensions. Estimates for sums of eigenvalues of the Schr\"odinger operator with constant magnetic field perturbed by complex electric potentials were obtained in \cite{Sambou2014}. The conditions on the potential there are much more restrictive than ours and the results (when applied to a single eigenvalue) are considerably weaker. To our knowledge, our eigenvalue estimates are the first with a complex-valued magnetic potential. For definiteness, we assume that $P_0$ is either the harmonic oscillator or the Schr\"odinger operator with constant magnetic field (called the Landau Hamiltonian in quantum mechanics), but other examples could easily be accommodated. Hence, from now on, either \begin{align}\label{harmonic oscillator} P_0=-\Delta+|x|^2,\quad x\in{\mathbb R}^n \quad(\mbox{Harmonic Oscillator}), \end{align} or, for $n$ even and $B_0>0$, \begin{align}\label{constant magnetic field} P_0= \sum_{j=1}^{n/2}\left[\left(-\I\partial_{x_j}-\frac{B_0}{2} y_j\right)^2+\left(-\I\partial_{y_j}+\frac{B_0}{2}x_j\right)^2\right] \quad\mbox{(Landau Hamiltonian)}, \end{align} where in the case \eqref{constant magnetic field} we denoted the independent variable by $(x,y)=z\in{\mathbb R}^{n}$. In the mathematical literature, \eqref{harmonic oscillator} is also called the Hermite operator and \eqref{constant magnetic field} is known as the twisted Laplacian. 
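As a quick numerical illustration (not used in any of the proofs), the spectrum of the one-dimensional harmonic oscillator $-\frac{d^2}{dx^2}+x^2$, namely $2{\mathbb N}+1=\{1,3,5,\dots\}$, can be recovered by a finite-difference discretization; the truncation interval and grid size below are illustrative choices.

```python
import numpy as np

# Discretize -u'' + x^2 u on [-L, L] with Dirichlet boundary conditions,
# using the standard second-order central difference on a uniform grid.
L, N = 10.0, 1200          # truncation interval and grid size (illustrative)
x = np.linspace(-L, L, N)
h = x[1] - x[0]

# Tridiagonal discretization of -d^2/dx^2 plus the diagonal potential x^2.
H = (np.diag(2.0 / h**2 + x**2)
     + np.diag(-np.ones(N - 1) / h**2, 1)
     + np.diag(-np.ones(N - 1) / h**2, -1))

evals = np.linalg.eigvalsh(H)
print(evals[:4])           # close to the exact levels 1, 3, 5, 7
```

The lowest eigenvalues agree with the exact levels to second order in the grid spacing.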
The spectra of these operators can be computed explicitly to be \begin{align*} \sigma(P_0)=2{\mathbb N}+m(n), \end{align*} where $m(n)=n$ in the case of \eqref{harmonic oscillator} and $m(n)=n/2$ in the case of \eqref{constant magnetic field}. In the notation of \eqref{P_0}, the harmonic oscillator \eqref{harmonic oscillator} corresponds to $P_0$ with $V_0(x)=|x|^2$, $A_0(x)=0$, while the Landau Hamiltonian \eqref{constant magnetic field} corresponds to $P_0$ with $V_0(z)=0$ and \begin{align*} A_0(z)=\frac{B_0}{2}(-y_1,x_1,\ldots,-y_{n/2},x_{n/2}),\quad z=(x,y)\in{\mathbb R}^n. \end{align*} As before, we allow a real-valued bounded potential $W\in L^{\infty}({\mathbb R}^n,{\mathbb R})$ in the definition of the unperturbed operator $\widetilde{P}_0=P_0+W$. We now consider a perturbation of $\widetilde{P}_0$ by a \emph{complex-valued} electromagnetic potential $(A_1,V_1)$, that is we consider the Schr\"odinger operator \begin{align*} P=(-\I\nabla+A)^2+V=\widetilde{P}_0+L,\quad \dom(P)=\mathcal{D}({\mathbb R}^n). \end{align*} Here, $L$ is given by \eqref{L}. We only require a smallness assumption on $A_1$, but not on $V_1$. To be clear, we repeat the assumptions on $A_1,V_1$ at this point. \begin{enumerate} \item[($A4_{{\mathbb C}}$)] $A_1\in \Lip({\mathbb R}^n,{\mathbb C}^n)$ and there exists $\delta>0$ such that \begin{align} |\nabla A_1(x)|\lesssim \epsilon\langle x\rangle^{-1-\delta}\quad\mbox{for almost every }x\in{\mathbb R}^n. \end{align} Moreover, assume that one of the following additional assumptions hold. \item[($A4a_{{\mathbb C}}$)] $A_1\in \Lip({\mathbb R}^n,{\mathbb C}^n)$ and \begin{align*} |\nabla A_1(x)|\lesssim \epsilon\langle x\rangle^{-1-\delta}\quad\mbox{for almost every }x\in{\mathbb R}^n. 
\end{align*} \item[($A4b_{{\mathbb C}}$)] There exists $\delta'\in (0,\delta)$ such that $\langle x\rangle^{1+\delta'} A_1\in \dot{W}^{\frac{1}{2},2n}({\mathbb R}^n;{\mathbb C}^n)$, and we have $\|\langle x\rangle^{1+\delta'}A_1\|_{\dot{W}^{\frac{1}{2},2n}}\lesssim \epsilon$. \item[($A5_{{\mathbb C}}$)] Assume that $V_1\in L^{r}({\mathbb R}^n,{\mathbb C})$ for some $r\in(1,\infty]$ if $n=2$ and $r \in[n/2,\infty]$ if $n\geq 3$. \end{enumerate} Let $Q(P_0)$ be the form domain\footnote{$Q(P_0)=D(\mathfrak{q}_0)$ in the notation of Appendix \ref{appendix m-sectorial}. We equip $Q(P_0)$ with the form norm \eqref{form norm}.} of $P_0$, \begin{align} Q(P_0)=\set{u\in L^2({\mathbb R}^n)}{\nabla_{A_0} u\in L^2({\mathbb R}^n),\, V_0^{1/2}u\in L^2({\mathbb R}^n)}\label{form domain P0}. \end{align} By Lemma \ref{lemma relative form bounded} there exists a unique $m$-sectorial extension of $P$ with the property $\dom(P)\subset Q(P_0)$. By abuse of notation we will still denote this extension by $P$ in the following theorem. \begin{theorem}\label{thm complex potentials} Assume that $P_0$ is either the harmonic oscillator \eqref{harmonic oscillator} or the Landau Hamiltonian \eqref{constant magnetic field} (when $n$ is even). Assume that $A_1,V_1$ satisfy $A4_{{\mathbb C}}$--$A5_{{\mathbb C}}$. For $a>0$ fixed, there exists $\epsilon_0>0$ such that whenever $A_1$ satisfies $A4_{{\mathbb C}}$ with $\epsilon<\epsilon_0$, then every eigenvalue $z$ of $P$ with $|\im z|\geq a$ satisfies the estimate \begin{align}\label{spectral estimate complex potentials} |\im z|^{1-\frac{n}{2r}}\leq \frac{C_0(1+\|W\|_{L^{\infty}})}{1-\epsilon/\epsilon_0}\|V_1\|_{L^r}. \end{align} Here, $C_0$ is the constant in the estimate \eqref{eq. resolvent estimate z dependent}, and $\epsilon_0$ depends on $n,\delta,\delta',\mu,a,B_0,C_0$. 
\end{theorem} \begin{remark} Since the left hand side of \eqref{spectral estimate complex potentials} is $\geq a^{1-\frac{n}{2r}}$ by assumption, Theorem~ \ref{thm complex potentials} implies that for any $a>0$ and $0\leq \epsilon<\epsilon_0$, there exists $v_0=v_0(a,\epsilon)$ such that for $\|V\|_{L^r}\leq v_0$, all eigenvalues $z\in\sigma(P)$ are contained in $\set{z\in{\mathbb C}}{|\im z|\leq a}$. \end{remark} \begin{proof}[Proof of Theorem \ref{thm complex potentials}] Assume that $z\in{\mathbb C}$, with $|\im z|\geq a$, is an eigenvalue of~$P$, i.e.\ there exists $u\in \dom(P)$, with $\|u\|_{L^2}=1$, such that $Pu=zu$. Since $P_0\in \mathcal{B}(Q(P_0),Q(P_0)')$ and $L\in \mathcal{B}(Q(P_0),Q(P_0)')$ by Lemma \ref{lemma relative form bounded} ii), we have \begin{align}\label{eigenvalue equation in Q(P0)'} (P_0-z)u=-Lu\quad \mbox{in }Q(P_0)'. \end{align} Since $Q(P_0)\subset X(z)$ densely and continuously, by Lemma \ref{lemma continuous embedding}, we have $u\in X(z)$, $\|u\|_{X(z)}\neq 0$, and \eqref{eigenvalue equation in Q(P0)'} implies that $(P_0-z)u+Lu=0$ in $X(z)'$. By Lemma \ref{L bounded}, $L\in\mathcal{B}(X(z),X(z)')$, and thus $(P_0-z)u\in X(z)'$. This means that \begin{align}\label{eigenvalue equation in X(z)'} (P_0-z)u=-Lu\quad \mbox{in }X(z)'. \end{align} Then \eqref{eigenvalue equation in X(z)'} and \eqref{eq. resolvent estimate z dependent with inverse on left} yield \begin{align}\label{lower bound for Lu} \frac{1}{C_0(1+\|W\|_{L^{\infty}})}\|u\|_{X(z)}\leq \|(P_0-z)u\|_{X(z)'}=\|Lu\|_{X(z)'}. 
\end{align} We estimate the right hand side from above, as in the proof of the general case of Theorem \ref{thm resolvent estimate X'X} in Section \ref{section Proof main theorem, general case}, by \begin{equation*}\begin{split} \|Lu\|_{X(z)'}&\leq \left(\epsilon C_{n,\mu,\delta}+2\epsilon(1+B_0)|\im z|^{-1}+|\im z|^{n\left(\frac{1}{2}-\frac{1}{q}\right)-1}\|V_1\|_{L^{r}}\right)\|u\|_{X(z)}\\ &\leq \left(\epsilon C_{n,\mu,\delta,a,B_0}+|\im z|^{n\left(\frac{1}{2}-\frac{1}{q}\right)-1}\|V_1\|_{L^{r}}\right)\|u\|_{X(z)} \end{split} \end{equation*} Together with \eqref{lower bound for Lu}, this implies that \begin{align*} \|u\|_{X(z)}\leq C_0(1+\|W\|_{L^{\infty}})\left(\epsilon C_{n,\mu,\delta,a,B_0}+|\im z|^{n\left(\frac{1}{2}-\frac{1}{q}\right)-1}\|V_1\|_{L^{r}}\right)\|u\|_{X(z)}. \end{align*} Dividing both sides by $\|u\|_{X(z)}\neq 0$, it follows that any eigenvalue $z$ of $P$ with $|\im z|\geq a$, must satisfy the inequality \begin{align*} 1\leq C_0(1+\|W\|_{L^{\infty}})\left(\epsilon C_{n,\mu,\delta,a,B_0}+|\im z|^{n\left(\frac{1}{2}-\frac{1}{q}\right)-1}\|V_1\|_{L^{r}}\right). \end{align*} If we set $\epsilon_0=1/(C_0(1+\|W\|_{L^{\infty}}) C_{n,\mu,\delta,a,B_0})$, then this estimate is equivalent to~\eqref{spectral estimate complex potentials}. \end{proof} Instead of the Landau Hamiltonian \eqref{constant magnetic field}, we can also consider the Pauli operator with constant magnetic field. For simplicity, we assume that $n=2$ here, but the general case when $n$ is even can be handled with no additional difficulty. On $\mathcal{D}({\mathbb R}^2,{\mathbb C}^2)$, the Pauli operator is given by \begin{align}\label{Pauli} P= \begin{pmatrix} (-\I\nabla+A(x))^2+B(x)&0\\ 0&(-\I\nabla+A(x))^2-B(x) \end{pmatrix} +W(x) +V_1(x). \end{align} Here, $A=(A^1,A^2)=(A_0^1,A_0^2)+(A_1^1,A_1^2)$ and $B=\partial_{1}A^2-\partial_{2}A^1$. We choose $A_0(z)=\frac{B_0}{2}(-y,x)$ for $z=(x,y)\in{\mathbb R}^2$ and $B_0>0$. 
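For this choice of $A_0$, the unperturbed field is indeed constant: writing $A_0=(A_0^1,A_0^2)=\frac{B_0}{2}(-y,x)$, a direct computation gives

```latex
\begin{align*}
\partial_{1}A_0^{2}-\partial_{2}A_0^{1}
=\partial_{x}\left(\frac{B_0}{2}\,x\right)-\partial_{y}\left(-\frac{B_0}{2}\,y\right)
=\frac{B_0}{2}+\frac{B_0}{2}=B_0,
\end{align*}
```

so that, by linearity, $B=B_0+\partial_{1}A_1^{2}-\partial_{2}A_1^{1}$.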
Although we could easily allow $W$ and $V_1$ to be matrix-valued potentials, we assume that they are scalar multiples of the identity in ${\mathbb C}^2$. \begin{corollary}\label{thm Pauli} Assume that $n=2$ and that $P$ is the Pauli operator \eqref{Pauli}. Assume also that $A_1,V_1$ satisfy $A4_{{\mathbb C}}$--$A5_{{\mathbb C}}$. For $a>0$ fixed, there exists $\epsilon_0>0$ such that whenever $A_1$ satisfies $A4_{{\mathbb C}}$ with $\epsilon<\epsilon_0$, every eigenvalue $z$ of $P$ with $|\im z|\geq a$ satisfies the estimate \begin{align*} |\im z|^{1-\frac{n}{2r}}\leq \frac{C_0(1+B_0+\|W\|_{L^{\infty}})}{1-\epsilon/\epsilon_0}\|V_1\|_{L^r}. \end{align*} Here, $C_0$ is the constant in the estimate \eqref{eq. resolvent estimate z dependent}, and $\epsilon_0$ depends on $n,\delta,\delta',\mu,a,B_0,C_0$. \end{corollary} \begin{proof} In view of the direct sum structure of \eqref{Pauli}, the proof reduces to proving \eqref{spectral estimate complex potentials} for eigenvalues $z$ of the Schr\"odinger operators \begin{align*} P_{\pm}=(-\I\nabla+A(x))^2\pm B(x)+W(x)+V_1(x). \end{align*} The proof is analogous to that of Theorem \ref{thm complex potentials}. \end{proof} \begin{remark} The reason we were able to remove the smallness assumption on $V_1$, but not on $A_1$, is that the smoothing part of the $X(z)$ norm is $z$-independent (see Appendix \ref{Section A more precise version of Theorem}). \end{remark}
\section{Introduction} \label{intro} The quasi-Newton method is among the most effective optimization algorithms when the objective function is non-linear, twice continuously differentiable and involves a large number of variables. In this method, the inverse Hessian is approximated iteratively using gradient information. Compared with the Newton method (NM) or the steepest descent method in terms of computational cost and storage space, the quasi-Newton method has a clear advantage over the other two \cite{Nocedal12}. The rate of convergence of the steepest descent method is linear, while that of the Newton method is quadratic \cite{Nocedal12}. Computing the Hessian matrix is challenging and time-consuming for large-scale problems; the quasi-Newton method is therefore attractive precisely because it only approximates the Hessian. \par Depending on the way the Hessian is approximated, various updates have been introduced, namely the Symmetric Rank 1 (SR1), Davidon–Fletcher–Powell (DFP) and Broyden–Fletcher–Goldfarb–Shanno (BFGS) updates \cite{Nocedal12}. Among them, the BFGS update performs best \cite{Nocedal12}. When the objective function contains a large number of variables, a limited-memory scheme is helpful. Berahas et al.\ employ L-BFGS with displacement aggregation \cite{Albert S. Berahas 2022} and show that this strategy reduces the number of iterations and function evaluations in the L-BFGS algorithm. \par Dai \cite{Y.Dai06} gives a counterexample showing that the standard BFGS update can fail to converge for some non-convex functions. Several attempts have been made to modify the usual quasi-Newton equation and the inverse Hessian approximation, with convergence analyses for non-convex objective functions established under suitable assumptions.
Li and Fukushima slightly modify the standard BFGS update \cite{Dong Hui 2001} and obtain superlinear and global convergence of the modified BFGS (MBFGS) method without any convexity assumption for unconstrained minimization problems. Yunhai Xiao \cite{Yunhai Xiao 2008} shows that the M-LBFGS algorithm performs better than standard L-BFGS for large-scale optimization problems. Here, we propose a modified limited-memory BFGS method with displacement aggregation to solve unconstrained optimization problems with a twice continuously differentiable objective function. The global convergence of the proposed method is established under certain assumptions. Numerical experiments on some non-convex problems show promising results for M-LBFGS with displacement aggregation. \subsection{\textbf{Contribution}} We analyse the displacement aggregation strategy \cite{Albert S. Berahas 2022} combined with the M-LBFGS algorithm for large-scale optimization problems. In particular, we show that M-LBFGS with displacement aggregation outperforms plain M-LBFGS when the objective function is twice continuously differentiable and contains many variables, making it the preferable variant. \section{Background on M-LBFGS} Let $x_{k}$ denote the $k$-th iterate generated by the optimization algorithm. The subsequent iterate is $x_{k+1}=x_{k}+s_{k}$ once the step $s_{k}$ has been computed. To obtain the next step, we minimize the quadratic model $m_{k+1}(d)= f_{k+1}+g_{k+1}^Td +\frac{1}{2} d^T B_{k+1} d$, where $B_{k+1}$ is the Hessian approximation and $\emph W_{k+1}$ is the inverse Hessian approximation. Minimizing $m_{k+1}(d)$ yields the descent direction $d_{k+1}=- \emph W_{k+1} g_{k+1}$ and the step $s_{k+1} =\alpha_{k+1} d_{k+1}$ for some step size $\alpha_{k+1} \geq 0$.
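The step computation just described can be sketched in a few lines of code; the quadratic test function and the fixed inverse-Hessian approximation below are hypothetical, purely to illustrate the mechanics.

```python
import numpy as np

def quasi_newton_step(x, grad, W, alpha=1.0):
    """One generic quasi-Newton step: the quadratic model is minimized
    by d = -W g, and we move to x + alpha * d."""
    d = -W @ grad(x)
    return x + alpha * d

# Hypothetical example: f(x) = 0.5 x^T A x, so grad f(x) = A x.
A = np.diag([2.0, 10.0])
grad = lambda x: A @ x

# With the exact inverse Hessian W = A^{-1} and alpha = 1, a single
# step lands on the minimizer x* = 0 (Newton-like behaviour).
x1 = quasi_newton_step(np.array([1.0, 1.0]), grad, np.linalg.inv(A))
print(x1)
```

In practice $W$ is of course not the exact inverse Hessian but the approximation built from stored displacement pairs.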
We choose the Hessian approximation as a symmetric matrix satisfying the secant equation $B_{k+1}s_{k}=y_{k}$. In BFGS, the inverse Hessian approximation $ \emph W$ solves $\min_{\emph W} \| \emph W- \emph W_{k}\|_{M}$ subject to \begin{center} $(i)$ $ \emph W= \emph W^T $, i.e., $\emph W$ is symmetric, and\\ $(ii)$ $\emph W y_{k}=s_{k}$ with $s_{k}^T y_{k}>0$, \end{center} where $\|\cdot\|_M$ denotes a weighted Frobenius norm. In MBFGS, we instead use the modified quasi-Newton equation \begin{equation} \emph W_{k+1}\bar y_{k}=s_{k} , \end{equation} where $\bar y_{k}=y_{k}+r_{k}\|g_{k}\|s_{k} $ and $ r_{k}= 1+\max\left[0,-\frac {y_{k}^Ts_k}{s_k^T s_k}\right]$ \cite{Dong Hui 2001}. \begin{algorithm} \caption{Backtracking line search in M-BFGS}\label{alg:mbfgs} \begin{enumerate} \item Choose an initial guess $x_0 \in \mathbb{R}^n$, an initial Hessian approximation $B_0 \succ 0$ and constants $m\in(0,1)$, $n\in(0,1)$. Set $k=0$. \item Obtain $d_{k}$ by solving the equation $B_{k} d=-g_{k}$. \item Take the smallest nonnegative integer $j_{k}$ satisfying $ f(x_{k}+m^{j_{k}}d_{k}) \leq f(x_{k})+n m^{j_{k}}g_{k}^Td_{k}$ and set $\lambda_{k}=m^{j_{k}}$. \item Set the next iterate $x_{k+1}=x_{k}+\lambda_{k}d_{k}$. \item Update $B_{k}$ using the M-BFGS Hessian formula\\$B_{k+1}=B_{k}-\frac {B_{k}s_{k}s_{k}^T B_{k}}{s_{k}^T B_{k}s_{k}}+\frac{\bar y_{k} \bar y_{k}^T}{\bar y_{k}^Ts_{k}}$,\\ where $s_{k}=x_{k+1}-x_{k}$, $y_{k}=g_{k+1} -g_{k}$, $\bar y_{k}=y_{k}+r_{k}\|g_{k}\|s_{k} $ and $ r_{k}=1+ \max\left[0,-\frac {y_{k}^Ts_k}{s_k^T s_k}\right]$. \item Set $k=k+1$ and go to Step 2. \end{enumerate} \end{algorithm} \subsection{\textbf{Iterative and compact form of M-BFGS}} There are numerous ways to construct an inverse Hessian approximation \cite{Nocedal12}. We choose the iterative and compact forms because they are the most practical for constructing such approximations.
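A minimal sketch of the two ingredients of Algorithm 1, the modified pair $\bar y_{k}$ and the backtracking rule; the constants, names and the test function are illustrative (not a reference implementation), and the exponent $j$ is tried from $0$ here.

```python
import numpy as np

def modified_y(s, y, g):
    """MBFGS pair: y_bar = y + r ||g|| s with r = 1 + max(0, -(y^T s)/(s^T s))."""
    r = 1.0 + max(0.0, -(y @ s) / (s @ s))
    return y + r * np.linalg.norm(g) * s

def backtracking(f, x, g, d, m=0.5, n=1e-4, max_j=50):
    """Return m**j for the smallest j with
    f(x + m^j d) <= f(x) + n m^j g^T d (Armijo condition)."""
    fx, gd = f(x), g @ d
    for j in range(max_j):
        if f(x + m**j * d) <= fx + n * m**j * gd:
            return m**j
    return m**max_j

# Illustrative use on f(x) = ||x||^2 with a steepest-descent direction.
f = lambda x: float(x @ x)
x = np.array([1.0, 1.0])
g = 2.0 * x
step = backtracking(f, x, g, -g)
```

On this example the full step $j=0$ overshoots the sufficient-decrease condition, so the rule halves once and accepts.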
We now describe a generic notation for constructing an inverse Hessian approximation $\bar{\emph W} \succ 0$, dropping the dependency on the optimization algorithm's iteration count.\par Let $ S=[s_1, \dots ,s_m] \in \mathbb{R}^{n\times m}$ denote the matrix of iterate displacements, $Y=[y_1, \dots ,y_m]\in \mathbb{R}^{n\times m}$ the matrix of gradient displacements, and $ \varrho= [\frac{1}{s_1^T \bar y_1},\dots,\frac{1}{s_m^T \bar y_m}]$. Then the modified BFGS update can be written as \begin{equation} \emph W_{j+1}=\Big(I -\frac {\bar y_j s_j^T}{s_j^T \bar y_j}\Big)^T \emph W_{j} \Big(I -\frac {\bar y_j s_j^T}{s_j^T \bar y_j}\Big) + \frac {s_j s_j^T} {s_j^T \bar y_j} =E_{j}^T \emph W_{j} E_{j}+ \varrho_{j}{s_j s_j^T}, \end{equation} where $\bar Y=[\bar y_1,\dots,\bar y_m]$, $\bar y_{k}=y_{k}+r_{k}\|g_{k}\|s_{k} $ and $ r_{k}=1+ \max\left[0,-\frac {y_{k}^Ts_k}{s_k^T s_k}\right]$ \cite{Dong Hui 2001}. From an initial matrix $ {\emph W} \succ 0$ and the displacement pairs in $(S, \bar Y,\varrho ) $, one computes the inverse Hessian approximation $\bar {\emph W }$ by initializing $\bar{\emph W} \longleftarrow {\emph W}$ and setting, for all $j \in \{1,2,\dots,m\}$, \begin{equation} \label{4} E_{j} \longleftarrow I -\varrho_{j} \bar{y_{j}} s_{j}^{T},\quad F_{j} \longleftarrow \varrho_{j}{s_j s_j^T},\quad \bar{\emph W} \longleftarrow E_{j}^{T}\bar{\emph W} E_{j}+ F_{j}. \end{equation} In each iteration of the optimization algorithm, the matrix $\bar{\emph W}$ is computed from the stored iterate/gradient displacement vectors; we denote this by the output function $\bar{\emph W}=MBFGS ( \emph W,S,\bar Y)$. \par Instead of applying the updates iteratively, one can construct the inverse Hessian approximation from the initial approximation by combining all the low-rank changes directly. The compact form of the inverse Hessian update in Algorithm 2 generates the same output as \eqref{4}.
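The iterative update \eqref{4} transcribes directly into code; the dense-matrix sketch below is purely for illustration (a limited-memory implementation would avoid forming $\emph W$ explicitly).

```python
import numpy as np

def mbfgs_apply_pairs(W, S, Ybar):
    """Apply W <- E_j^T W E_j + rho_j s_j s_j^T for j = 1..m, where
    E_j = I - rho_j * ybar_j s_j^T and rho_j = 1 / (s_j^T ybar_j)."""
    n = W.shape[0]
    for s, ybar in zip(S.T, Ybar.T):
        rho = 1.0 / (s @ ybar)
        E = np.eye(n) - rho * np.outer(ybar, s)
        W = E.T @ W @ E + rho * np.outer(s, s)
    return W

# One stored pair: the secant equation W_bar @ ybar = s then holds
# by construction, which is a convenient sanity check.
s = np.array([1.0, 2.0, 0.5])
ybar = np.array([1.5, 1.0, 1.0])
Wbar = mbfgs_apply_pairs(np.eye(3), s[:, None], ybar[:, None])
print(Wbar @ ybar)   # equals s up to rounding
```

The update also preserves symmetry of $\emph W$, since $E^{T}\emph W E$ and $\varrho\, s s^{T}$ are symmetric whenever $\emph W$ is.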
\begin{algorithm} \caption{Compact form of MBFGS algorithm in matrix notation}\label{alg:compact} \begin{enumerate} \item Choose an initial symmetric positive definite matrix $ \emph W $ and $(S, \bar Y,\varrho ) $ as in the iterative form of MBFGS. \item Set $ (\bar B, \bar C) \in {\mathbb{R}^{m \times m}} {\times} {\mathbb{R}^{m \times m}}$ with ${\bar B_{i , j}} \longleftarrow {s_{i}^T \bar{y_{j}}}$ for all $(i , j ) $ with $1 \leq i \leq j \leq m$, and let $\bar C$ be the diagonal matrix with $ \bar C_{i ,i} \longleftarrow s_{i}^T \bar{y_{i}}$ for all $ i \in \{1, \dots ,m\} $, i.e., \begin{equation} \bar B \longleftarrow \begin{bmatrix} s_{1}^T \bar{y_{1}} & \dots &s_{1}^T \bar y_{m} \\ 0 & \ddots & \vdots\\ 0 & 0 &s_{m}^T \bar y_{m} \end{bmatrix} , \quad \bar C\longleftarrow \begin{bmatrix} {s_{1}^T \bar{y_{1}}} & & \\ & \ddots & \\ & & {s_{m}^T} \bar{y_{m}} \\ \end{bmatrix}. \end{equation} \item Set \begin{equation} \bar{ \emph W} \longleftarrow \emph W + \begin{bmatrix} S & \emph W \bar Y \end{bmatrix} \begin{bmatrix} \bar B^{-T}(\bar C+\bar Y^{T} \emph W \bar Y)\bar B^{-1} & -\bar B^{-T} \\ -\bar B^{-1} & 0 \end{bmatrix} \begin{bmatrix} S^{T} \\ \bar Y^{T} \emph W \end{bmatrix}. \end{equation} \end{enumerate} \end{algorithm} \subsection{\textbf{Global and superlinear convergence of MBFGS algorithm}} In the following section, we show that the Agg-MBFGS method generates the same sequence of matrices as full-memory MBFGS. As a result, an optimization technique using the above updating scheme retains the same convergence and rate-of-convergence properties. We also show that Agg-MBFGS with limited memory achieves global superlinear convergence under certain assumptions. \begin{theorem} Suppose that the level set $\Omega = \{ x : f(x)\leq f(x_0) \} $ is bounded and $ f(x) $ is twice continuously differentiable near $x^*$ contained in $\Omega$.
Let $x_{k} \rightarrow x^* $, where $g(x^*)=0$, the Hessian $H$ is positive definite near $x^*$, and $\alpha_{k} $ is chosen by the backtracking Armijo line search method. Then the sequence $\{x_{k}\}$ generated by the MBFGS algorithm converges to $x^*$ both globally and superlinearly. \end{theorem} \begin{proof} The proof of this theorem can be found in \cite{Dong Hui 2001}. \end{proof} \section{Displacement aggregation} \subsection{\textbf {Parallel iterative displacements in succession}} In this section, we show that if a previously stored iterate displacement vector in MBFGS is a scalar multiple of a newly computed iterate displacement vector, then we can omit the previously stored vector and still obtain the inverse Hessian approximation generated by the full-memory scheme. \begin{theorem} \label{1} Let $S=[s_1,\dots,s_m]\in \mathbb{R}^{n\times m}$, $Y=[y_1,\dots,y_m]$, $\bar Y=[\bar y_1, \dots ,\bar y_m]\in \mathbb{R}^{n\times m}$ with $\bar y_{j}=y_{j}+r_{j}\|g_{j}\|s_{j} $, $ r_{j}=1+ \max[0,-\frac{y_{j}^Ts_j}{s_j^T s_j}]$, and $\varrho=[\frac{1}{s_1^{T}\bar y_1},\dots,\frac{1}{s_m^{T}\bar y_m}]\in \mathbb{R}^{m}_{>0}$, and let $ s_j=\sigma s_{j+1}$ for some nonzero $\sigma \in \mathbb{R}$. Then, with \begin{center} $ \bar S =[s_1,\dots,s_{j-1}, s_{j+1},\dots,s_m]$ \\ $ \hat Y=[\bar y_1,\dots,\bar y_{j-1},\bar y_{j+1},\dots ,\bar y_m] $,\\ \end{center} Algorithm 2 gives \begin{equation} MBFGS( \emph W, S, \bar Y)= MBFGS ( \emph W, \bar S , \hat Y), \end{equation} for any matrix $ \emph W \succ 0$. \end{theorem} \begin{proof} Consider any matrix $ \emph W \succ 0$. For any $j \in \{1,2,\dots,m\}$, let $ \emph W_{1:j}=MBFGS( \emph W,S_{1:j},\bar Y_{1:j})$, where \\ $ S_{1:j}=[s_1,\dots ,s_j]$, $Y_{1:j}=[y_1,\dots, y_j]$ and $\bar Y_{1:j}=[\bar y_1,\dots,\bar y_j] $. From (\ref{4}), we have \begin{equation}\label{35} \begin{split} \emph W_{1:j+1} &=E_{j+1}^{T} \emph W_{1:j}E_{j+1}+F_{j+1} \\ &=E_{j+1}^{T}E_{j}^{T} \emph W_{1:j-1} E_{j} E_{j+1}+E_{j+1}^{T}F_{j}E_{j+1}+F_{j+1}.
\end{split} \end{equation} As $ s_j=\sigma s_{j+1}$, we have \begin{align*} E_{j}E_{j+1} &= (I-\varrho_{j} \bar {y_{j}}s_{j}^{T})(I-\varrho_{j+1}\bar y_{j+1}s_{j+1}^{T})\\ &= (I-(\frac{\sigma \bar {y_{j}}s_{j+1}^{T}}{\sigma s_{j+1}^{T}\bar{y_{j}}}))(I-\varrho_{j+1} \bar y_{j+1}s_{j+1}^{T})\\ &= I -\varrho_{j+1} \bar y_{j+1}s_{j+1}^{T} -\frac{ \bar {y_{j}}s_{j+1}^{T}}{ s_{j+1}^{T}\bar{y_{j}}} + \varrho_{j+1}\frac{\bar {y_{j}}(s_{j+1}^{T} \bar y_{j+1})s_{j+1}^{T}}{s_{j+1}^{T}\bar{y_{j}}}\\ &= I -\varrho_{j+1} \bar y_{j+1}s_{j+1}^{T}= E_{j+1}, \end{align*} where the last two terms cancel since $\varrho_{j+1}s_{j+1}^{T}\bar y_{j+1}=1$. Similarly, \begin{align*} F_{j}E_{j+1} &= \varrho_{j}{s_j s_j^T}(I-\varrho_{j+1}\bar y_{j+1}s_{j+1}^{T})\\ &=\frac{\sigma^{2}}{\sigma s_{j+1}^{T} \bar y_{j}}\,s_{j+1}\left(s_{j+1}^{T}-\varrho_{j+1}(s_{j+1}^{T} \bar y_{j+1})s_{j+1}^{T}\right)\\ &=\frac{\sigma}{ s_{j+1}^{T} \bar y_{j}}\,s_{j+1}\left(s_{j+1}^{T}-s_{j+1}^{T}\right)=0. \end{align*} From equation (\ref{35}), we get \begin{equation} \emph W_{1:j+1}=E_{j+1}^{T} \emph W_{1:j-1}E_{j+1}+F_{j+1}. \end{equation} Hence, the inverse Hessian matrix obtained by skipping the update for $(s_{j},\bar y_{j}) $ and simply using $(s_{j+1},\bar y_{j+1})$ is the same as that obtained by using both $(s_{j},\bar y_{j}) $ and $ (s_{j+1},\bar y_{j+1})$. \end{proof} According to Theorem 2, if two iterate displacement vectors are linearly dependent, the later update replaces the earlier one irrespective of the gradient displacement vectors. We also note that the same inverse Hessian matrix results if the earlier update is skipped; therefore, removing the linearly dependent vector frees up some storage. In the next section, we consider the case in which a prior iterate displacement can be expressed as a linear combination of several subsequent displacements.
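The statement above can be checked numerically. The following self-contained sketch (our own naming, with a plain NumPy implementation of the iterative update (\ref{4})) builds a history in which one stored displacement is a multiple of its successor and compares the two approximations:

```python
import numpy as np

def mbfgs(W, S, Y):
    # iterative MBFGS inverse-Hessian update, as in (4)
    Wb, I = W.copy(), np.eye(W.shape[0])
    for j in range(S.shape[1]):
        s, y = S[:, [j]], Y[:, [j]]
        rho = 1.0 / float(s.T @ y)
        E = I - rho * (y @ s.T)
        Wb = E.T @ Wb @ E + rho * (s @ s.T)
    return Wb

np.random.seed(1)
n = 5
W0 = np.eye(n)
S = np.random.randn(n, 4)
S[:, 1] = 2.0 * S[:, 2]                      # s_j = sigma * s_{j+1} with sigma = 2
Y = 2.0 * S + 0.1 * np.random.randn(n, 4)    # keeps s_j^T y_j > 0

W_full = mbfgs(W0, S, Y)                     # all four pairs
W_red = mbfgs(W0, np.delete(S, 1, axis=1),   # earlier parallel pair dropped
              np.delete(Y, 1, axis=1))
```

Up to rounding error, `W_full` and `W_red` coincide, as the theorem asserts, regardless of the gradient displacement attached to the discarded pair.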
\subsection{\textbf{Aggregate MBFGS}} Let the iterate and gradient displacement information be represented by $(S, Y) =([s_1,\dots ,s_m],[y_1,\dots,y_m])=(S_{1:m},Y_{1:m})$ with $\bar Y_{1:m}=[\bar y_1, \dots ,\bar y_m]$, and suppose we also have a previously stored curvature pair $(s_0,y_0)\in \mathbb{R}^{n} \times \mathbb{R}^{n}$, $\varrho_0=\frac{1}{s_0^{T} \bar y_0}$, such that \begin{equation} s_0 = S_{1:m} \sigma , \end{equation} for some $ \sigma \in \mathbb{R}^{m}$; that is, $s_0$ lies in the span of the newly computed set $S_{1:m}$. We aim to calculate aggregated gradient displacements \begin{equation} \hat{Y}=[\hat y_1,\dots,\hat y_m], \end{equation} in such a way that \begin{equation} \label{28} MBFGS( \emph W, S_{0:m}, \bar Y_{0:m})= MBFGS( \emph W, S_{1:m}, \hat Y_{1:m}). \end{equation} In other words, the aim is to find $\hat Y_{1:m}$ such that the inverse Hessian matrix $MBFGS( \emph W, S_{0:m}, \bar Y_{0:m}) $ coincides with the one generated by skipping $(s_0, \bar y_0)$ and employing only $(S_{1:m},\hat Y_{1:m})$. \begin{algorithm} \caption{Basic View of Displacement Aggregation}\label{alg:cap} \begin{itemize} \item[1] : \textbf{Require}: $ \emph W\succ 0 $ and $(S_{1:m},\bar Y_{1:m},\varrho_{1:m} )=([s_1,\dots ,s_m],[\bar y_1,\dots,\bar y_m],[\varrho_1,\dots,\varrho_{m}]) $, $ (s_0,y_0)$ and $ \varrho_0=\frac{1}{s_0^{T} \bar y_0}$ \item[2] : Set \begin{equation} \bar B_{0:m} = \begin{bmatrix} s_{0}^T \bar{y_{0}} &s_{0}^T \bar Y_{1:m} \\ 0 & \bar B_{1:m} \end{bmatrix} , \quad \bar C_{0:m} = \begin{bmatrix} {s_{0}^T \bar{y_{0}}} & \\ & \bar C_{1:m } \\ \end{bmatrix}.
\end{equation} Find $ \hat{Y}_{1:m}=[\hat y_1,\dots,\hat y_m] $ such that \begin{equation} \hat B_{1:m} \longleftarrow \begin{bmatrix} s_{1}^T \hat{y_{1}} & \dots &s_{1}^T \hat y_{m} \\ & \ddots & \vdots\\ & &s_{m}^T \hat y_{m} \end{bmatrix} , \quad \hat C_{1:m} \longleftarrow \begin{bmatrix} {s_{1}^T \hat{y_{1}}} & & \\ & \ddots & \\ & & {s_{m}^T} \hat{y_{m}} \\ \end{bmatrix} \end{equation} and such that\\ \begin{align*} MBFGS( \emph W, S_{0:m}, \bar Y_{0:m}) =& \emph W + \begin{bmatrix} S_{0:m} & \emph W \bar Y_{0:m} \end{bmatrix} \begin{bmatrix} \bar B^{-T}_{0:m}(\bar C_{0:m}+\bar Y^{T}_{0:m} \emph W \bar Y_{0:m})\bar B^{-1}_{0:m} & -\bar B^{-T}_{0:m} \\ -\bar B^{-1}_{0:m} & 0 \end{bmatrix} \begin{bmatrix} S^{T}_{0:m} \\ \bar Y^{T}_{0:m} \emph W \end{bmatrix} \\ =& \emph W + \begin{bmatrix} S_{1:m} & \emph W \hat Y_{1:m} \end{bmatrix} \begin{bmatrix} \hat B^{-T}_{1:m}(\hat C_{1:m}+\hat Y^{T}_{1:m} \emph W \hat Y_{1:m})\hat B^{-1}_{1:m} & -\hat B^{-T}_{1:m} \\ -\hat B^{-1}_{1:m} & 0 \end{bmatrix} \begin{bmatrix} S^{T}_{1:m} \\ \hat Y^{T}_{1:m} \emph W \end{bmatrix}\\ =& MBFGS (\emph W, S_{1:m}, \hat Y_{1:m}). \end{align*} \item[3] return $\hat Y_{1:m}$ \end{itemize} \end{algorithm} \begin{algorithm} \caption{Detailed View of Displacement Aggregation}\label{alg:cap} \begin{itemize} \item[1] : \textbf{Require}: $ \emph W\succ 0 $ and $(S_{1:m}, \bar Y_{1:m},\varrho_{1:m} )=([s_1,\dots ,s_m], [\bar y_1,\dots,\bar y_m],[\varrho_1,\dots,\varrho_{m}])$, $ (s_0,y_0)$ and $ \varrho_0=\frac{1}{s_0^{T} \bar y_0}$. \item[2] : Set $ \hat Y_{1:m}= \emph W^{-1}S_{1:m} \begin{bmatrix} \emph A & 0 \end{bmatrix} +\bar y_0 \begin{bmatrix} \emph b \\ 0 \end{bmatrix}^{T} + \bar Y_{1:m} $ such that, with $ \chi_0=1+\varrho_0 \|\bar y_0\|^2_{\emph W} $, one finds\\ \begin{equation} \label{10} \hat B_{1:m}=\bar B_{1:m}, \end{equation} \begin{equation} \label{11} \begin{bmatrix} b \\ 0 \end{bmatrix} = -\varrho_0(S_{1:m}^{T} \bar Y_{1:m}- \bar B_{1:m})^T \sigma .
\end{equation} \begin{equation} \label{12} (\hat Y_{1:m}-\bar Y_{1:m})^T \emph W (\hat Y_{1:m}-\bar Y_{1:m})= \frac {\chi_0} {\varrho_0}\begin{bmatrix} b \\ 0 \end{bmatrix} \begin{bmatrix} \emph b \\ 0 \end{bmatrix}^T - \begin{bmatrix} \emph A & 0 \end{bmatrix}^T (S_{1:m}^{T} \bar Y_{1:m}- \bar B_{1:m})-(S_{1:m}^{T} \bar Y_{1:m}- \bar B_{1:m})^T \begin{bmatrix} \emph A & 0 \end{bmatrix} . \end{equation} \item[3] return $\hat Y_{1:m}$ \end{itemize} \end{algorithm} \subsection{\textbf{Existence of matrices $\emph A$ and real $\emph b$ in Agg-MBFGS}} We now generalize the concept of linear dependence by proving the following theorem. Remark: the existence of the matrix $\emph A$ and the computation of $\emph b$ can be shown by an argument similar to the proof of Theorem 3.2 in \cite{Albert S. Berahas 2022}. \begin{theorem} \label{2} Let there be given an arbitrary positive definite matrix $ \emph W \succ 0$ along with $(S_{1:m},Y_{1:m}, \bar Y_{1:m} )=([s_1,\dots ,s_m], [y_1,\dots,y_m], [\bar y_1,\dots,\bar y_m])$, where the columns of $S_{1:m}$ are linearly independent, $\varrho_{1:m}=[\varrho_1,\dots,\varrho_{m}]$, $(s_0,\bar y_0)$ and $ \varrho_0=\frac{1}{s_0^{T} \bar y_0}$ such that $s_0 = S_{1:m}\sigma $ for some $\sigma \in \mathbb{R}^{m}$. Then there exist $A \in \mathbb{R}^{m \times (m-1)}$ and $b \in \mathbb{R}^{m-1}$ such that $\hat Y_{1:m}= \emph W^{-1}S_{1:m} \begin{bmatrix} \emph A & 0 \end{bmatrix} +\bar y_0 \begin{bmatrix} \emph b \\ 0 \end{bmatrix}^{T} + \bar Y_{1:m} $ and equations (\ref{10}), (\ref{11}), (\ref{12}) hold. Consequently, for this $ \hat Y_{1:m}$, one finds $s_i^{T} \hat y_{i}=s_i^{T} \bar y_{i} > 0$ $\forall$ $i \in \{1,2,\dots,m \} $ and\\ $ {MBFGS( \emph W, S_{0:m}, \bar Y_{0:m})= MBFGS ( \emph W, S_{1:m}, \hat Y_{1:m})}$. \end{theorem} \begin{proof} The argument parallels the BFGS case, with $y$ replaced by $\bar y$ throughout.
We have $ (m + 1)(m - 1) $ unknowns in \begin{equation} \label{9} \hat Y_{1:m}= \emph W^{-1}S_{1:m} \begin{bmatrix} \emph A & 0 \end{bmatrix} +\bar y_0 \begin{bmatrix} \emph b \\ 0 \end{bmatrix}^{T} + \bar Y_{1:m}, \end{equation} for $ \hat Y_{1:m}$. In particular, the numbers of unknowns in $\emph A$ and $\emph b$ are $m(m - 1)$ and $m - 1$, respectively, so the total number of unknowns is $m(m-1)+(m-1) = (m+1)(m-1)$. Observe that equation (\ref{9}) imposes $ \hat y_{m}=\bar y_{m}$. Define the $ m \times (m-1)$ submatrix $\bar U$ of $\bar B_{1:m}$ obtained by skipping its last column, and similarly define $\hat U$ as a submatrix of $\hat B_{1:m}$: \begin{equation}\label{31} \bar U = \begin{bmatrix} s_{1}^T \bar y_{1} & \dots &s_{1}^T \bar y_{m-1} \\ & \ddots & \vdots\\ & &s_{m-1}^T \bar y_{m-1}\\ 0 & \dots & 0 \end{bmatrix}. \end{equation} We can further reduce (\ref{10})-(\ref{12}) to the following form: \begin{equation}\label{32} \hat U= \bar U, \end{equation} \begin{equation} \label{70} b=-\varrho_{0}(S_{1:m}^T \bar Y_{1:m-1}-\bar U)^T \sigma, \end{equation} and \begin{equation} \label{80} \begin{split} (\hat Y_{1:m-1}- \bar Y_{1:m-1})^T \emph W (\hat Y_{1:m-1}-\bar Y_{1:m-1})\\ =\frac {\chi_{0}}{\varrho_{0}}bb^T-A^T(S_{1:m}^T \bar Y_{1:m-1}-\bar U)-(S_{1:m}^T \bar Y_{1:m-1}-\bar U)^T\emph A. \end{split} \end{equation} The total number of equations is obtained by counting the nonzero entries in (\ref{32}) and recognizing the symmetry in (\ref{80}). From (\ref{32}) we have $ m(m-1)/2$ equations; from (\ref{70}) and (\ref{80}) we get $m-1$ and $ m(m-1)/2 $ equations, respectively. Hence the total number of equations is $$ m(m-1)/2 + (m-1)+ m(m-1)/2=(m+1)(m-1).$$\\ As a result, (\ref{32})-(\ref{80}) is a square system of quadratic and linear equations that must be solved for the unknowns in the matrices $\emph A$ and $\emph b$. An explicit expression for $b \in \mathbb{R}^{m-1}$ is given by equation (\ref{70}).
For the time being, let us assume that $\emph b$ is equal to the right-hand side of this equation, which leaves us with the task of confirming the existence of a real solution for $\emph A$. Let us write $A=[a_{1},\dots,a_{m-1}] $, where $ a_{i} $ has length $m$ for all $i \in \{1,2,\dots,m-1 \} $. One gets from (\ref{9}) that the $j$th column of (\ref{32}) requires $S_{1:j}^T \bar y_{j}=S_{1:j}^T \hat y_{j}\Longleftrightarrow S_{1:j}^T \bar y_{j}=S_{1:j}^T( \emph W^{-1}S_{1:m}a_j+b_j \bar y_0+\bar y_{j})$. Hence, equation (\ref{32}) reduces to the system of affine equations \begin{equation}\label{38} S_{1:j}^T \emph W^{-1}S_{1:m}a_j=-b_{j}S_{1:j}^T \bar y_{0}, \end{equation} $\forall$ $j \in \{1,2,\dots,m-1 \} $. For each $j \in \{1, \dots,m-1 \}$, let us write \begin{equation} \label{13} a_j=M^{-1} \begin{bmatrix} a_{j,1} \\ a_{j,2} \end{bmatrix}, \end{equation} where $M=S_{1:m}^T \emph W^{-1} S_{1:m} \succ 0$, with $a_{j ,1} $ having length $j$ and $a_{j ,2} $ having length $ m - j$. The positive definiteness of $M$ requires $S_{1:m}$ to have full column rank. Hence we set \begin{equation} \label{14} a_{j,1}=-b_{j}S_{1:j}^T \bar y_0 \in \mathbb{R}^{j}, \end{equation} to satisfy (\ref{38}). From (\ref {14}), equations (\ref{32}) and (\ref{38}) are satisfied for any $a_{j ,2}$. It remains to show the existence of $ a_{j,2} \in \mathbb{R}^ {m-j}$ $\forall$ $j \in \{1,2,\dots,m-1 \} $ in such a way that (\ref{80}) is satisfied, which completes the proof. Substituting (\ref{9}) into (\ref{80}), we can write \begin{equation}\label{39} A^TMA+\Psi^TA+A^T\Psi-\varpi\varpi^T=0 , \end{equation} where \begin{equation} \Psi=S_{1:m}^T \bar y_{0}b^T+S_{1:m}^T \bar Y_{1:m-1}- \bar U \in \mathbb{R}^{m \times (m-1)} , \end{equation} and \begin{equation} \varpi=\frac{b}{\sqrt{\varrho_{0}}}\in \mathbb{R}^{m-1}. \end{equation} We can rewrite equation (\ref{39}) as \begin{equation}\label{60} (MA+\Psi)^T M^{-1}(MA+\Psi)=\varpi\varpi^{T}+\Psi^{T}M^{-1}\Psi .
\end{equation} We can rewrite equation (\ref{60}) in a particular form that will be useful for our proof. Consider the matrix $ MA+\Psi $ in (\ref{60}). By the definitions of $a_{j,1}$, $a_{j,2}$, $M $ and $\Psi$ above, as well as of $\bar U$ from (\ref{31}), the $j$th column of this matrix can be written as\\ $ [MA+\Psi]_{j}= \begin{bmatrix} -b_{j}S_{1:j}^{T}\bar y_{0} \\ a_{j,2} \\ \end{bmatrix} + \begin{bmatrix} b_{j}S_{1:j}^{T} \bar y_{0} \\ b_{j}S_{j+1:m}^{T} \bar y_{0}\\ \end{bmatrix} $+$ \begin{bmatrix} 0_{j} \\ S_{j+1:m}^{T}\bar y_{j}\\ \end{bmatrix} $=$ \begin{bmatrix} 0_{j} \\ a_{j,2}\\ \end{bmatrix} $+$ \begin{bmatrix} 0_{j} \\ S_{j+1:m}^{T}(b_{j}\bar y_{0}+\bar y_{j})\\ \end{bmatrix} $, where $0_{j}$ is a zero vector of length $j$. As $M^{-1} $ is positive definite, there exists a matrix $ L\in \mathbb{R}^{m\times m}$ such that $L^{T}L=M^{-1}.$ Let us define $Z=[z_{1},\dots,z_{m-1}]\in \mathbb{R}^{(m-1)\times (m-1)}$ such that $Z^{T}Z=\varpi\varpi^{T}+\Psi^{T}M^{-1}\Psi $ (whose existence follows since $\varpi\varpi^{T}+\Psi^{T}M^{-1}\Psi \succeq 0$). Defining, for all $j\in \{1,2,\dots,m-1 \}$, \begin{equation} \gamma_{j}(a_{j,2})=L\left( \begin{bmatrix} 0_{j} \\ a_{j,2}\\ \end{bmatrix} + \begin{bmatrix} 0_{j} \\ S_{j+1:m}^{T}(b_{j} \bar y_{0}+ \bar y_{j})\\ \end{bmatrix} \right), \end{equation} it follows that the $(i,j)\in \{1,2,\dots,m-1\}\times \{1,2,\dots,m-1 \}$ element of equation (\ref{60}) is\\ \begin{equation}\label{40} \gamma_{i}(a_{i,2})^{T}\gamma_{j}(a_{j,2})=z_{i}^{T}z_{j}. \end{equation} We prove the existence of the matrix $A$ using an inductive argument in reverse order over $\{1,2,\dots,m-1 \}$. First, we consider the unknown $a_{m-1,2}$, which is one-dimensional.
We find, with $a_{m-1,2}^*=-s_m^T(b_{m-1} \bar y_0+ \bar y_{m-1})\in \mathbb{R}$, that $\gamma_{m-1}(a_{m-1,2}^*)=L \begin{bmatrix} 0_{m-1} \\ s_{m}^{T}(b_{m-1} \bar y_{0}+ \bar y_{m-1})-s_{m}^{T}(b_{m-1} \bar y_{0}+ \bar y_{m-1})\\ \end{bmatrix} =0_m$.\\ Let $a_{m-1,2}=a_{m-1,2}^*+\lambda_{m-1}$, where $ \lambda_{m-1} $ is one-dimensional. The left-hand side of the $(i,j)=(m-1,m-1)$ equation in (\ref{40}) is $\|\gamma_{m-1}(a_{m-1,2})\|_2^2$, a strongly convex quadratic in the unknown $\lambda_{m-1}$. As $\|\gamma_{m-1}(a_{m-1,2}^*)\|_2=0$ and $z_{m-1}^Tz_{m-1} \geq 0$, there exists $\lambda_{m-1}^* \in \mathbb{R}$ such that $a_{m-1,2}=a_{m-1,2}^*+\lambda_{m-1}^* \in \mathbb{R}$ satisfies the $ (i,j)=(m-1,m-1)$ equation in (\ref{40}). \par In order to show the existence of $a_{l,2}\in \mathbb{R}^{m-l}$ satisfying (\ref{40}) $\forall (i,j)$ with $i \in \{j,\dots,m-1\}$ and $j=l $, let us assume that there exist real values $\{a_{l+1,2},\dots,a_{m-1,2} \} $ such that equation (\ref{40}) holds $\forall (i,j)$ with $i \in \{l+1,\dots,m-1\} $ and $j \in \{i,\dots,m-1 \}$, i.e., we solve the following system of equations for $a_{l,2}$: \begin{equation} \label{24} \gamma_{m-1}(a_{m-1,2})^T \gamma_{l}(a_{l,2})=z_{m-1}^T z_l, \end{equation} \begin{center} $\vdots$\\ \end{center} \begin{equation} \label{51} \gamma_{l+1}(a_{l+1,2})^T \gamma_{l}(a_{l,2})=z_{l+1}^T z_l, \end{equation} \begin{equation} \label{15} \gamma_{l}(a_{l,2})^T \gamma_{l}(a_{l,2})=z_{l}^T z_l. \end{equation} Here equations (\ref{24})-(\ref{51}) are affine in $a_{l,2}$, whereas (\ref{15}) is quadratic in $a_{l,2}$. For all $t \in \{l+1,\dots,m-1 \}$, let \begin{equation} \Theta_{l+1,t}=\begin{bmatrix} 0_{t-(l+1)} \\ a_{t,2}+S_{t+1:m}^T(b_t\bar y_{0}+ \bar y_{t})\\ \end{bmatrix} \in \mathbb{R}^{m-(l+1)}.
\end{equation} By the definitions of $\gamma_t$ and $\Theta_{l+1,t}$, we can write \begin{equation} \gamma_t ({a_{t,2}})= L \begin{bmatrix} 0_{l+1} \\ \Theta_{l+1,t}\\ \end{bmatrix} \quad \forall t \in \{l+1,l+2,\dots,m-1 \}. \end{equation} Our goal is to find $a_{l,2}^* \in \mathbb{R}^{m-l}$ that satisfies (\ref{24})-(\ref{51}) and is such that \begin{equation} a_{l,2}^*+S_{l+1:m}^T(b_l \bar y_0+\bar y_l) \in \operatorname{span} \{ \begin{bmatrix} 0 \\ \Theta_{l+1,l+1}\\ \end{bmatrix},\dots,\begin{bmatrix} 0 \\ \Theta_{l+1,m-1}\\ \end{bmatrix} \}, \end{equation} and, in place of (\ref{15}), \begin{equation} \label{23} \gamma_{l}(a_{l,2}^*)^T \gamma_{l}(a_{l,2}^*)\leq z_{l}^T z_l. \end{equation} Once this is done, we can show the existence of a nonzero vector $\bar{a}_{l,2}\in \mathbb{R}^{m-l} $ such that ${a_{l,2}}^*+\lambda_{l}\bar{a}_{l,2}$ satisfies (\ref{24})-(\ref{51}) for arbitrary one-dimensional $\lambda_{l}$. As equation (\ref{23}) holds and the left-hand side of (\ref{15}) is a strongly convex quadratic in the unknown $\lambda_{l}$, we can argue that there exists $\lambda_{l}^* \in \mathbb{R}$ such that $a_{l,2}=a_{l,2}^*+\lambda_{l}^* \bar a_{l,2} \in \mathbb{R}^{m-l}$ satisfies (\ref{24})-(\ref{15}). \par Let $\{\Theta_{l+1,l+1}, \dots, \Theta_{l+1,m-1}\}$ have rank $c$, so that there exist indices $\{t_1,\dots,t_c\}\subseteq \{l+1,\dots,m-1\}$ with \begin{equation} \label{18} \operatorname{span} \{ \begin{bmatrix} 0\\ \Theta_{l+1,t_1}\\ \end{bmatrix} ,\dots,\begin{bmatrix} 0\\ \Theta_{l+1,t_c}\\ \end{bmatrix}\}=\operatorname{span} \{ \begin{bmatrix} 0\\ \Theta_{l+1,l+1}\\ \end{bmatrix} ,\dots,\begin{bmatrix} 0\\ \Theta_{l+1,m-1}\\ \end{bmatrix}\}. \end{equation} Let us first discuss the case $c=0$, in which \begin{equation} \label{50} \Theta_{l+1,t}=0_{m-(l+1)} , \quad \gamma_t(a_{t,2})=0_m \quad \forall t \in \{l+1,l+2,\dots,m-1\} . \end{equation} From (\ref{40}) and (\ref{50}), using the induction hypothesis, we have \begin{equation} z_t=0_{m-1} \quad \forall t \in \{l+1,l+2,\dots,m-1\} .
\end{equation} From (\ref{50}) and the preceding equation, the affine equations (\ref{24})-(\ref{51}) are satisfied by any $a_{l,2}^* \in \mathbb{R}^{m-l}$. We can choose $a_{l,2}^*=-S_{l+1:m}^T(b_l\bar y_0+\bar y_l)$ and find, by the definition of $\gamma_l$, that \begin{equation} \gamma_{l}(\ a_{l,2}^*)=L \left(\begin{bmatrix} 0_l\\ a_{l,2}^*+S_{l+1:m}^T(b_l\bar y_0+\bar y_l)\\ \end{bmatrix}\right)\ =0_m, \end{equation} which shows that this choice satisfies (\ref{23}). Now let us discuss the case $c>0$. For $a_{l,2}^*$ to satisfy the span condition above, it follows with (\ref{18}) that we must have \begin{equation} \label{17} a_{l,2}^*+S_{l+1:m}^T(b_l\bar y_0+\bar y_l)=\begin{bmatrix} 0 & \dots & 0\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}\beta_l, \end{equation} where $\beta_l$ has length $c$. Choose \begin{equation} \label{22} \beta_l=\left(\begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}^{T} L^TL \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}\right)^{-1} \begin{bmatrix*}[c] z_{t_1}^T\\ \vdots\\ z_{t_c}^T \end{bmatrix*}z_l \in \mathbb{R}^c . \end{equation} From (\ref{17}), we have, for any $t\in \{t_1,\dots,t_c\}$, \begin{equation} \label{21} \begin{aligned} \gamma_t(a_{t,2})^T \gamma_l(a_{l,2}^*) &=\begin{bmatrix} 0_{l+1} \\ \Theta_{l+1,t}\\ \end{bmatrix}^T L^TL \begin{bmatrix} 0_l\\ a_{l,2}^*+S_{l+1:m}^T(b_l\bar y_0+\bar y_l) \\ \end{bmatrix}\\ &=\begin{bmatrix} 0_{l+1} \\ \Theta_{l+1,t}\\ \end{bmatrix}^{T} L^T L \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix} \beta_l =z_t^T z_l. \end{aligned} \end{equation} Now we have to prove that $ \gamma_t(a_{t,2})^T \gamma_l(a_{l,2}^*)=z_t^T z_l$ for any $t\in \{l+1,\dots,m-1 \}\setminus\{t_1,\dots,t_c\}$.
Toward this end, first notice that for any such $t$, we have from (\ref{18}) that $\Theta_{l+1,t}=[\Theta_{l+1,t_1},\dots,\Theta_{l+1,t_c}]\gamma _{l,t} $ for some $\gamma _{l,t}\in \mathbb{R}^{c} $. Combining the relationship (\ref{18}) with the inductive hypothesis, for any pair $(i,j)$ with $i \in \{l+1,\dots,m-1 \}$ and $j \in \{i,\dots,m-1\}$, we have \begin{equation} \label{20} \begin{bmatrix} 0_{l+1} \\ \Theta_{l+1,i}\\ \end{bmatrix}^T L^TL \begin{bmatrix} 0_{l+1} \\ \Theta_{l+1,j}\\ \end{bmatrix} =\gamma_{i}(a_{i,2})^T \gamma_{j}(a_{j,2})=z_i^{T}z_j. \end{equation} As $M^{-1}=L^TL$ is a positive definite matrix, we have \begin{equation} \label{19} \begin{aligned} \operatorname{rank}([z_{l+1} \dots z_{m-1}])&=\operatorname{rank}([\gamma_{l+1}(a_{l+1,2}) \dots \gamma_{m-1}(a_{m-1,2})])\\ &=\operatorname{rank}([\gamma_{t_1}(a_{t_1,2}) \dots \gamma_{t_c}(a_{t_c,2})])\\ &=\operatorname{rank}([z_{t_1} \dots z_{t_c}])=c. \end{aligned} \end{equation} Using \eqref{19}, there exists a vector $\bar \gamma _{l,t} \in \mathbb{R}^c $ such that $z_t=[z_{t_1} \dots z_{t_c}]\bar \gamma _{l,t}$ for any $t \in \{l+1,\dots,m-1 \}\setminus\{t_1,\dots,t_c\}$.
Combining the definitions of $\gamma _{l,t}$ and $\bar \gamma _{l,t}$ with (\ref{20}), we have for any such $t$ that \begin{equation*} \begin{split} \gamma _{l,t}^T \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}^T & L^T L \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix} \\[4pt] & = \begin{bmatrix} 0_{l+1} \\ \Theta_{l+1,t}\\ \end{bmatrix}^T L^T L \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}\\[4pt] & = z_t^T[z_{t_1} \dots z_{t_c}]\\[4pt] & =\bar \gamma _{l,t}^T [z_{t_1} \dots z_{t_c}]^T [z_{t_1} \dots z_{t_c}] \\[4pt] & = \bar \gamma _{l,t}^T \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}^T L^T L \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}. \end{split} \end{equation*} Since the Gram matrix appearing on both sides is nonsingular by (\ref{19}), it follows that $\gamma_{l,t}=\bar \gamma_{l,t}$. Using $\gamma_{l,t}=\bar \gamma_{l,t}$, we have \begin{eqnarray*} \gamma_t(a_{t,2})^T \gamma_l(a_{l,2}^*) &=&\begin{bmatrix} 0_{l+1} \\ \Theta_{l+1,t}\\ \end{bmatrix}^T L^TL \begin{bmatrix} 0_l\\ a_{l,2}^*+S_{l+1:m}^T(b_l\bar y_0+\bar y_l) \\ \end{bmatrix}\\ &=& \gamma _{l,t}^T \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}^T L^T L \begin{bmatrix} 0_l\\ a_{l,2}^*+S_{l+1:m}^T(b_l\bar y_0+\bar y_l) \\ \end{bmatrix}\\ &=& \gamma _{l,t}^T [z_{t_1},\dots,z_{t_c}]^T z_l\\ &=& \bar \gamma _{l,t}^T [z_{t_1},\dots,z_{t_c}]^T z_l=z_t^T z_l \end{eqnarray*} for any $t \in \{l+1,\dots,m-1\}\setminus\{t_1,\dots,t_c\} $. Combining this with (\ref{21}), we conclude that $a_{l,2}^*$ from (\ref{17}), with $\beta_l$ from (\ref{22}), satisfies the affine equations (\ref{24})-(\ref{51}).
From (\ref{17}), (\ref{20}) and (\ref{22}), we have \begin{equation*} \begin{split} \gamma_l(a_{l,2}^*)^T \gamma_l(a_{l,2}^*) &=\begin{bmatrix} 0_l\\ a_{l,2}^*+S_{l+1:m}^T(b_l\bar y_0+\bar y_l) \\ \end{bmatrix}^T L^T L \begin{bmatrix} 0_l\\ a_{l,2}^*+S_{l+1:m}^T(b_l\bar y_0+\bar y_l) \\ \end{bmatrix}\\[4pt] &= \beta_l^T \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}^T L^T L \begin{bmatrix} 0_{l+1}& \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix} \beta_l\\[4pt] &= z_l^T \begin{bmatrix} z_{t_1}^T\\ \vdots\\ z_{t_c}^T \end{bmatrix}^{T} \left( \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix}^T L^T L \begin{bmatrix} 0_{l+1}& \dots & 0_{l+1}\\ \Theta_{l+1,t_1}&\dots&\Theta_{l+1,t_c}\\ \end{bmatrix} \right)^{-1} \begin{bmatrix} z_{t_1}^T\\ \vdots\\ z_{t_c}^T \end{bmatrix} z_l \\[4pt] &= z_l^T \begin{bmatrix} z_{t_1}^T\\ \vdots\\ z_{t_c}^T \end{bmatrix}^T \left(\begin{bmatrix} z_{t_1}^T\\ \vdots\\ z_{t_c}^T \end{bmatrix} \begin{bmatrix} z_{t_1} & \dots & z_{t_c} \end{bmatrix} \right)^{-1} \begin{bmatrix} z_{t_1}^T\\ \vdots\\ z_{t_c}^T \end{bmatrix} z_l \leq z_l^T z_l. \end{split} \end{equation*} The last inequality follows since the eigenvalues of $$ \begin{bmatrix} z_{t_1} & \dots & z_{t_c} \end{bmatrix} \left(\begin{bmatrix} z_{t_1}^T\\ \vdots\\ z_{t_c}^T \end{bmatrix}\begin{bmatrix} z_{t_1} & \dots & z_{t_c} \end{bmatrix}\right)^{-1} \begin{bmatrix} z_{t_1}^T\\ \vdots\\ z_{t_c}^T \end{bmatrix} $$ lie in $\{0,1\}$; the inequality is strict if $z_l \notin \operatorname{span} \{z_{t_1},\dots,z_{t_c}\}$. Hence, we have shown that $a_{l,2}^*$ from (\ref{17}) satisfies (\ref{23}). It remains to show the existence of a nonzero $\bar a_{l,2}$ such that $a_{l,2}^*+\lambda_{l} \bar a_{l,2}$ satisfies (\ref{24})-(\ref{51}) for arbitrary $\lambda_{l}$.
From (\ref{24})-(\ref{51}), such an $\bar a_{l,2}\in \mathbb{R}^{m-l} $ must satisfy \begin{equation} \label{26} \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,l+1}&\dots&\Theta_{l+1,m-1}\\ \end{bmatrix} L^T L \begin{bmatrix} 0_l\\ \bar a_{l,2}\\ \end{bmatrix} =0_{m-(l+1)}. \end{equation} Since \begin{equation}\label{25} \begin{bmatrix} 0_{l+1} & \dots & 0_{l+1}\\ \Theta_{l+1,l+1}&\dots&\Theta_{l+1,m-1}\\ \end{bmatrix} L^T L \in \mathbb{R}^{(m-(l+1))\times m}, \end{equation} the null space of this matrix has dimension at least $l+1$; that is, there exist at least $l+1$ linearly independent vectors in $\mathbb{R}^{m}$ lying in its null space. Let $N_{l+1} \in \mathbb{R}^{m \times (l+1)}$ collect $l+1$ linearly independent vectors of $\mathbb{R}^m$ that lie in the null space of the matrix in (\ref{25}). As $N_{l+1}$ has $l+1$ linearly independent columns, there exists a nonzero vector $\zeta_{l+1} \in \mathbb{R}^{l+1} $ such that the first $l$ elements of $N_{l+1} \zeta_{l+1}$ are zero. Setting $$ [\bar a_{l,2}]_t:=[N_{l+1}\zeta_{l+1}]_{l+t} \quad \forall t \in \{1, \dots,m-l\},$$ one finds that $\bar a_{l,2}$ satisfies (\ref{26}). From (\ref{23}), the left-hand side of equation (\ref{15}) is a strongly convex quadratic in the unknown $\lambda_{l}$. As (\ref{23}) holds, we can claim that there exists $\lambda_l^* \in \mathbb{R}$ such that $a_{l,2}=a_{l,2}^*+\lambda_{l}^* \bar a_{l,2} \in \mathbb{R}^{m-l}$ satisfies (\ref{24})-(\ref{15}). \par From the preceding discussion, the existence of $A \in \mathbb{R}^{m \times (m-1)}$ and $b \in \mathbb{R}^{m-1}$ such that, with $\hat Y_{1:m} \in \mathbb{R}^{n \times m }$ defined as in (\ref{9}), equations (\ref{10})-(\ref{12}) hold can easily be shown.
Hence one can easily show that $s_i^{T} \hat y_{i}=s_i^{T} \bar y_{i} > 0$ $\forall$ $i \in \{1,2,\dots,m \}$ holds and that $MBFGS( \emph W, S_{0:m}, \bar Y_{0:m})= MBFGS(\emph W, S_{1:m}, \hat Y_{1:m})$: as $\hat Y_{1:m} \in \mathbb{R}^{n \times m}$ exists, the conditions $s_i^{T} \hat y_{i}=s_i^{T} \bar y_{i} > 0$ $\forall$ $i \in \{1,2,\dots,m \}$ are a subset of the equations in (\ref{10}), and (\ref{10})-(\ref{12}) were derived explicitly to ensure that, with $\hat Y_{1:m}$ satisfying (\ref{9}), equation (\ref{28}) holds. \end{proof} \subsection{\textbf{Implementation of Agg-MBFGS}} The implementation of the Agg-MBFGS technique, which iteratively aggregates displacement information in the MBFGS approximation, is discussed here. In the limited-memory scheme, only the most recent curvature pairs are used to approximate the inverse Hessian matrix. In the aggregation scheme, we instead use curvature pairs taken from a subset of the prior iterations, with modified gradient displacement vectors. First, we store the iterate displacement vectors in such a way that all of them are linearly independent, accumulating them in the set $ S= \{s_{k_0},\dots,s_{k_{m-1}}\}$, where $\{k_i\}_{i=0}^{m-1} \subset \mathbb{N}$ with $k_i <k_{i+1}$ $\forall i\in \{0,\dots,m-2\}$; the elements of $\bar Y=\{\bar y_{k_0},\dots,\bar y_{k_{m-1}}\}$ need not coincide with the originally computed gradient displacements, but may have been produced by our aggregation scheme. Now let $(s_{k_m},y_{k_m})$ be a newly computed curvature pair with $k_m \in \mathbb{N}$ and $k_{m-1} < k_m$. In this section, we show how one can add a newly computed iterate displacement vector and, if needed, apply our aggregation scheme to form new sets $\bar S \subseteq S \cup \{s_{k_m}\}$ and $\hat Y$ in such a way that \begin{enumerate} \item Both $\bar S $ and $\hat Y$ have the same number of vectors, namely either $m$ or $ m-1$.
\item All the elements of the set $\bar S$ are linearly independent. \item The curvature pairs $(S \cup \{s_{k_m}\},\bar Y \cup \{\bar y_{k_m}\})$ generate the same inverse Hessian approximation as the curvature pairs $(\bar S, \hat Y)$. \end{enumerate} Henceforth, we denote the previously stored iterate and gradient displacement vectors by $\{s_0,\dots,s_{m-1}\}$ and $\{\bar y_0,\dots,\bar y_{m-1}\}$, respectively, and the newly computed curvature pair by $(s_m,\bar y_m)$. After computing $(s_m,\bar y_m)$, three possible cases arise, which we discuss below. \begin{enumerate} \item When the iterate displacement vectors $\{s_0,\dots,s_m\}$ are linearly independent, we simply add the new curvature pair $(s_m,\bar y_m)$ and continue the optimization algorithm with $\bar S=\{s_0,\dots,s_{m}\}$ and $\hat Y=\{\bar y_0,\dots,\bar y_m\}$. If $m=n$, then this case is impossible. \item If $s_{m-1}=\sigma s_m $ for some $\sigma \in \mathbb{R}$, then we discard the previously stored pair $(s_{m-1},\bar y_{m-1})$ and replace it with the newly computed pair $(s_m,\bar y_m)$, forming the updated sets $\bar S=\{s_0,\dots,s_{m-2},s_m\} $ and $\hat Y=\{\bar y_0,\dots,\bar y_{m-2},\bar y_m \}$. This choice is justified by Theorem (\ref{1}). \item If $s_j \in \operatorname{span} \{s_{j+1},\dots,s_m \}$ for some $j\in \{0,\dots,m-2\}$, then we use our aggregation scheme to compute $\{\hat y_{j+1},\dots,\hat y_m\}$, discard the pair $(s_j,\bar y_j)$, and form the new sets $\bar S =\{s_0,\dots,s_{j-1}, s_{j+1},\dots,s_m \}$ and $\hat Y=\{\bar y_0,\dots ,\bar y_{j-1}, \hat y_{j+1},\dots ,\hat y_m \}$. This choice is justified by Theorem (\ref{2}). \end{enumerate} From a computational perspective, we first have to identify which of these cases occurs. A practical approach is to maintain a Cholesky factorization of the inner-product matrix of the previously stored iterate displacement vectors.
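Before turning to the Cholesky-based detection, the case analysis itself can be stated naively. The sketch below (our own helper name, using a rank test and least-squares fits rather than the factorization the text maintains) returns which case applies and, when aggregation is needed, the index of the pair to remove:

```python
import numpy as np

def classify_new_displacement(S, s_new, tol=1e-8):
    """Decide which case applies after computing a new displacement s_new.

    S holds the previously stored, linearly independent displacements
    [s_0, ..., s_{m-1}] as columns.  Returns ('case1', None) when all m+1
    vectors are independent, ('case2', m-1) when s_{m-1} is parallel to
    s_new, and ('case3', m-i) when s_{m-i} lies in the span of the later
    displacements.  (This is a naive least-squares sketch; the text instead
    maintains a Cholesky factorization and watches for a downdate breakdown.)
    """
    m = S.shape[1]
    aug = np.column_stack([S, s_new])            # [s_0, ..., s_{m-1}, s_m]
    for i in range(1, m + 1):
        later = aug[:, m - i + 1:]               # {s_{m-i+1}, ..., s_m}
        target = aug[:, m - i]                   # s_{m-i}
        coef, *_ = np.linalg.lstsq(later, target, rcond=None)
        if np.linalg.norm(later @ coef - target) <= tol * np.linalg.norm(target):
            return ("case2", m - 1) if i == 1 else ("case3", m - i)
    return ("case1", None)
```

This dispatch costs $\mathcal O(nm^2)$ per call; the incremental Cholesky scheme described next achieves the same classification more cheaply.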
We then append a freshly computed iterate displacement vector and check whether the procedure breaks down. Before computing the new iterate displacement vector, suppose that we have a lower triangular matrix $L\in\mathbb{R}^{m \times m}$ with positive diagonal elements such that \begin{equation} \label{6} [s_{m-1} \dots s_0]^T [s_{m-1} \dots s_0]= LL^T. \end{equation} This decomposition is possible because the vectors $\{s_0,\dots,s_{m-1}\}$ are linearly independent. We then add the newly computed iterate displacement vector and form the Cholesky factorization of the augmented inner-product matrix: there exist a scalar $\mu \in \mathbb{R}_{>0}$, a vector $\zeta \in \mathbb{R}^m $ and a lower triangular matrix $M\in\mathbb{R}^{m \times m}$ with \begin{equation} \label{7} [s_m \dots s_0]^T [s_m \dots s_0]= \begin{bmatrix} \mu & 0 \\ \zeta & M \\ \end{bmatrix} \begin{bmatrix} \mu & \zeta^T \\ 0 & M^T \\ \end{bmatrix}. \end{equation} One gets $\mu=\|s_m\|$, $\zeta^T =[s_m^Ts_{m-1} \dots s_m^Ts_0]/\mu$ and $MM^T=LL^T -\zeta \zeta^T$ by equating the terms in (\ref{6}) and (\ref{7}); i.e., using a rank-one downdate (see \cite{P.E Gill}), one can easily compute $M$ from $L$. First, consider the case in which this downdate does not break down: if all the computed diagonal elements are positive, we are in Case 1, we store the new iterate displacement vector, and we obtain the updated Cholesky factorization after the subsequent optimization step. If, instead, a computed diagonal element equals zero, then the downdate breaks down, i.e., for the smallest such $i \in \{1,\dots,m \}$ one obtains a lower triangular matrix $\Xi \in \mathbb{R}^{i \times i}$ with positive diagonal elements and a vector $\xi \in \mathbb{R}^{i}$ such that \begin{equation} \label{8} [s_m s_{m-1} \dots s_{m-i}]^T [s_m s_{m-1} \dots s_{m-i}]=\begin{bmatrix} \Xi & 0 \\ \xi^T & 0 \\ \end{bmatrix} \begin{bmatrix} \Xi^T & \xi \\ 0 & 0 \\ \end{bmatrix}.
\end{equation} Letting $\sigma \in \mathbb{R}^{i}$ be the unique vector satisfying $\Xi^T \sigma =\xi$, one finds that the vector $[\sigma^T,-1]^T$ lies in the null space of the matrix in (\ref{8}), from which it follows that $$ [s_m s_{m-1} \dots s_{m-i+1}]\sigma=s_{m-i}.$$ Since the previously stored iterate displacement vectors $\{s_{m-1},\dots,s_{m-i}\}$ are linearly independent, the first element of $\sigma$ must be nonzero. When the breakdown happens for $i=1$, our problem reduces to case 2. When the breakdown happens for $i>1$, our problem reduces to case 3 with the vector $\sigma$, and one applies our aggregation scheme to omit the pair $(s_{m-i},y_{m-i})$.\par When the breakdown happens in the rank-one downdate, we can apply a standard Cholesky factorization during the updating procedure to factorize $$[s_m \dots s_{m-i+1} s_{m-i-1} \dots s_0]^T [s_m \dots s_{m-i+1} s_{m-i-1} \dots s_0].$$ \par Now, we discuss how one may implement our aggregation scheme such that, given $S_{1:m}$ with full column rank, $\hat Y_{1:m}$, $\varrho_{1:m}>0$, $\sigma \in \mathbb{R}^{m}$ satisfying $s_0=S_{1:m}\sigma$, $\bar y_0 $ and $\varrho_{0} >0$, one may compute $A \in \mathbb{R}^{m \times (m-1)}$ and $b \in \mathbb{R}^{m-1}$ in order to obtain $\hat Y_{1:m}$. The existence of the matrix $A$ and the vector $b$ follows from Theorem 3, and $b$ is computed from (\ref{70}). The main task is to compute the matrix $A$. Following the proof of Theorem 3, let $A=[a_1 \dots a_{m-1}]$ where $a_l \in \mathbb{R}^{m}$ for all $l \in \{1,\dots,m-1\}$ and, as in (\ref{13}), let \begin{equation*} a_l=M^{-1} \begin{bmatrix} a_{l,1} \\ a_{l,2} \end{bmatrix} \end{equation*} where $a_{l,1} \in \mathbb{R}^{l}$, $a_{l,2} \in \mathbb{R}^{m-l}$, and $M=S_{1:m}^T W^{-1}S_{1:m} \succ 0$. The most expensive operation is the product of $M^{-1}$ with $ \begin{bmatrix} a_{l,1} \\ a_{l,2} \end{bmatrix}$, so it is preferable to apply $M^{-1}$ via a Cholesky factorization.
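The breakdown test of (\ref{6})--(\ref{8}) can be sketched numerically. The following Python fragment is a minimal illustration (function names are ours; for simplicity it refactorizes the Gram matrix, whereas an efficient implementation would perform the rank-one downdate itself, e.g. via Givens rotations, in $\mathcal O(m^2)$):

```python
import numpy as np

def downdate_test(S_old, s_new, tol=1e-6):
    """Breakdown test of Eqs. (6)-(8): S_old has columns [s_{m-1},...,s_0]
    (assumed linearly independent); s_new = s_m is the fresh displacement.
    Returns (True, M) in case 1, or (False, None) when the rank-one
    downdate breaks down (cases 2 and 3)."""
    mu = np.linalg.norm(s_new)                   # top-left entry of the factor
    zeta = (S_old.T @ s_new) / mu                # coupling column of Eq. (7)
    L = np.linalg.cholesky(S_old.T @ S_old)      # factor of the old Gram matrix
    G = L @ L.T - np.outer(zeta, zeta)           # candidate M M^T
    try:
        M = np.linalg.cholesky(G)
    except np.linalg.LinAlgError:                # non-positive pivot: breakdown
        return False, None
    if np.min(np.diag(M)) <= tol * max(mu, 1.0):  # pivot positive only by rounding
        return False, None
    return True, M

def dependence_coefficients(P, s_dep):
    """Recover sigma with P @ sigma = s_dep, where P = [s_m ... s_{m-i+1}]
    has full column rank; with the Cholesky factor Xi of P^T P this is the
    pair of solves Xi xi = P^T s_dep and Xi^T sigma = xi from the text
    (a dedicated triangular solver would be cheaper than np.linalg.solve)."""
    Xi = np.linalg.cholesky(P.T @ P)
    xi = np.linalg.solve(Xi, P.T @ s_dep)
    return np.linalg.solve(Xi.T, xi)
```

The loose tolerance guards against pivots that are positive only through floating-point rounding; the second routine recovers the coefficients $\sigma$ with $\Xi^T\sigma=\xi$ used in cases 2 and 3.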
Then, in each iteration, we update the iterate displacement set by adding or deleting the corresponding rows and columns and continue the process. Moreover, products with $M^{-1}$ are cheap to compute, since with a Cholesky factorization they reduce to triangular solves. The computation of $A$ is summarized in Algorithm 5. \begin{algorithm} \caption{Computation of $A$ in Displacement Aggregation}\label{alg:cap} \begin{enumerate} \item For each $l \in \{1,\dots,m-1\}$, calculate the $l$-element vector $a_{l,1}$ from (\ref{14}). \item Calculate $a_{m-1,2}$ by solving the quadratic equation (\ref{15}). \item for $ l=m-2,\dots,1 $ do \item Calculate $\beta_l$ from (\ref{22}). \item Calculate $a_{l,2}^*$ from (\ref{17}). \item Calculate $\bar a_{l,2}$ satisfying (\ref{26}). \item Calculate $\lambda_l^* \in \mathbb{R}$ such that $a_{l,2}=a_{l,2}^* + \lambda_{l}^*\bar a_{l,2}$ solves the quadratic equation (\ref{15}). \item end for \item $A=[a_1 \dots a_{m-1}]$ with $\{a_j\}_{j=1}^{m-1}$ defined by (\ref{13}). \end{enumerate} \end{algorithm} \subsection{\textbf{Computational complexity of the proposed algorithm}} Let $X=[x_0,\dots,x_{m-1}], G=[\nabla f_{x_0}, \dots, \nabla f_{x_{m-1}}], S=\{s_0, \dots, s_{m-1}\}, Y=\{y_0, \dots, y_{m-1}\}, \bar Y=\{\bar y_0, \dots,\bar y_{m-1}\}$ be as defined in Algorithm 1. The total computational cost of Modified LBFGS is $\mathcal O(5mn)$ per iteration, where $n$ is the number of variables in the optimization problem and $m$ is the memory size chosen by the user (typically $3\leq m \leq 10$ in practice). The computational cost of the inner products $\{s_m^Ts_{m-1},\dots,s_m^Ts_0 \}$ is $\mathcal O(mn)$. Updating the Cholesky factorization used to apply $M^{-1}$, and computing $\sigma$ in cases 2 and 3, costs $\mathcal O(m^2)$.
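The $\mathcal O(mn)$ per-iteration cost of applying the inverse Hessian approximation comes from the standard L-BFGS two-loop recursion; the following is a minimal Python sketch (variable names and the common scaling choice $H_0=\gamma I$ are illustrative, not prescribed by the algorithm above):

```python
import numpy as np

def two_loop_direction(g, S, Y, gamma):
    """Standard L-BFGS two-loop recursion: returns H_k g, where H_k is the
    inverse Hessian approximation built from the stored pairs (s_i, y_i),
    oldest first, with initial matrix H_0 = gamma * I.  Cost: O(mn)."""
    rho = [1.0 / (y @ s) for s, y in zip(S, Y)]
    q = g.astype(float).copy()
    alpha = [0.0] * len(S)
    for i in reversed(range(len(S))):      # first loop: newest to oldest
        alpha[i] = rho[i] * (S[i] @ q)
        q -= alpha[i] * Y[i]
    r = gamma * q                          # apply the initial matrix H_0
    for i in range(len(S)):                # second loop: oldest to newest
        beta = rho[i] * (Y[i] @ r)
        r += (alpha[i] - beta) * S[i]
    return r
```

The search direction is then `-two_loop_direction(g, S, Y, gamma)`, so the dominant per-iteration work is a handful of length-$n$ vector operations per stored pair.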
\begin{equation} \label{16} \hat Y_{j+1:m}= W_{0:j-1}^{-1}S_{j+1:m} \begin{bmatrix} A & 0 \end{bmatrix} +\bar y_0 \begin{bmatrix} b \\ 0 \\ \end{bmatrix}^{T} + \bar Y_{j+1:m}. \end{equation} A computational cost is incurred in forming $ W_{0:j-1}^{-1}S_{j+1:m}$ for each value of $j$, as required in (\ref{16}). This product can be applied without explicitly forming the MBFGS inverse Hessian approximation: the total computational cost of a matrix--vector product with a compact representation of this approximation is $\mathcal O(j(m-j)n) \leq \mathcal O(m^2n)$. The cost of computing $b$ in equation (\ref{70}) is $\mathcal O(m^2)$. Next we calculate the matrix $A$ as in Algorithm 5. The computational cost of $S_{1:m}^T y_0$ is $\mathcal O(mn)$, and the cost of computing $\{a_{l,1}\}_{l=1}^{m-1}$ is $\mathcal O(m^2)$. The vectors $\{a_{l,2}\}_{l=1}^{m-1}$ are determined in reverse order by solving the systems of linear and quadratic equations that comprise (\ref{80}). The cost of computing $ a_{m-1,2}$ is $\mathcal O(1)$, provided a factorization of $M$ is known and the elements of the right-hand side of (\ref{60}) have already been computed at a cost of $\mathcal O(m^3)$. The QR factorization of the matrix in (\ref{25}), whose dimension for each $l$ is $(m-(l+1))\times m$, is the most expensive operation in each iteration of this scheme. Hence the total computational cost from $l=m-2$ down to $l=1$ is $\mathcal O(m^4)$, and the overall computational cost is $\mathcal O(m^2n)+\mathcal O(m^4)$. \section{Numerical Experiment} In this section we examine the effectiveness of Algorithm 1 (M-LBFGS) and Algorithms 2 and 3 (Aggregated Modified BFGS). A collection of 52 nonlinear unconstrained problems is used in our experiment. \par We have used the CUTEst environment to carry out our numerical experiment, and all the test problems are taken from CUTEst. The code is written using the MATLAB 2020b interface.
All computations are performed on a PC (Intel(R) Core(TM) i5-10210U CPU, 2.11 GHz) running the Ubuntu Linux operating system. The stopping criterion for our proposed algorithms and M-LBFGS is that the iterations continue until $\|g_k\|_\infty \leq 10^{-6} \max (1, \|g_0\|_\infty)$ for some $ k \in \mathbb{N}$, or until the iteration count surpasses the limit $10^{5}$. \par In the majority of cases $ n \gg m$, which means that the computational cost of our aggregation scheme is negligible in comparison with the cost of calculating search directions; the latter is the same for all algorithms, since they apply the standard two-loop recursion for Modified L-BFGS. Here we take the initial Hessian matrix to be the identity matrix. We tested problems with dimensions ranging from 2 to 123,200. \begin{table} \centering \caption{Number of iterations, function evaluations, and aggregations when MLBFGS and AggMBFGS are applied to solve the problems from the CUTEst set with $n \in [2,123200]$.}
\begin{tabular}{l*{5}{l}{l}}\toprule \multirow{2}{*}{Name} & \multirow{2}{*}{Dim} & \multicolumn{3}{l}{ AggMBFGS } & \multicolumn{2}{l}{ MLBFGS }\\ \cmidrule(lr){3-5} \cmidrule(lr){6-7} & (n) & Iters & func & Agg & Iters & Func\\ \midrule ARGLINA &200 &2 &3 &0 &2 &3 \\ \\ ARGLINB &200 &3 &49&0 &3&49\\ \\ ARGLINC &200 &3 &49&0 &3&49\\ \\ ARGTRIGLS&200& 669& 10362 & 0& 669& 10362\\ \\ ARWHEAD &5000&49&212&43&59&235\\ \\ BA-L1LS&57&352&8284&0&352&8284\\ \\ BA-L1SPLS&57&50&1549&0&50&1549\\ \\ BDQRTIC&5000&62&718&0&62&718\\ \\ BOX &10000&207&1399&106&280&1930\\ \\ BOXPOWER&20000&10&71&1&10&71\\ \\ BROWNAL&200&17&114&1&17&115\\ \\ BROYDN3DLS&5000&89&429&0&89&429 \\ \\ BROYDN7D&5000&7830&49931&0&7830&49931\\ \\ BROYDNBDLS&5000&104&756&0&104&756\\ \\ BRYBND&5000&104&756&0&104&756\\ \\ CHAINWOO&4000&353&3630&0&353&3630\\ \\ CHNROSNB&50&335&3065&0&335&3065\\ \\ CHNRSNBM&50&344&3200&0&344&3200\\ \\ CURLY10&10000&2183&28121&0&2183&28182\\ \\ CURLY20&10000&417&6167&0&417&6167\\ \\ CURLY30&10000&326&5383&0&326&5383\\ \\ DIXMAANA&3000&15&40&9&19&52\\ \\ DIXMAANB&3000&32&63&0&32&63\\ \\ DIXMAANC&3000&25&57&0&25&57\\ \\ DIXMAAND&3000&19&50&0&19&50\\ \\ DIXMAANE&3000&268&498&0&268&498\\ \\ DIXMAANF&3000&203&250&0&203&250\\ \\ DIXMAANG&3000&132&224&0&132&224\\ \\ DIXMAANH&3000&288&1066&0&288&1066\\ \\ DIXMAANI&3000&201&230&0&201&230\\\\ \bottomrule \end{tabular} \end{table} \begin{table} \centering \begin{tabular}{l*{5}{l}{l}}\toprule \multirow{2}{*}{Name} & \multirow{2}{*}{Dim} & \multicolumn{3}{l}{ AggMBFGS } & \multicolumn{2}{l}{ MLBFGS }\\ \cmidrule(lr){3-5} \cmidrule(lr){6-7} & (n) & Iters & func & Agg & Iters & Func\\ \midrule DIXMAANJ&3000&128&253&0&128&253\\ \\ DIXMAANK&3000&132&301&0&132&301\\ \\ DMN15333LS&99&12&253&1&13&253\\ \\ DMN37142LS&66&8531&80501&0&8531&80501\\ \\ ERRINROS&50&60&604&0&60&604\\ \\ ERRINRSM&50&74&721&0&74&721\\ \\ FLETBV3M&5000&10&86&3&12&141\\ \\ HILBERTA&2&13&48&10&15&50\\ \\ LIARWHD&5000&59&229&51&15&47\\ \\ NONCVXU2&5000&34&138&6&36&142\\ \\ 
NONDQUAR&5000&33&231&0&33&231\\ \\ POWELLSG&5000&28&269&38&39&155\\ \\ POWER&10000&55&1841&0&55&1841\\ \\ SPARSQVR&10000&78&1018&0&78&1018\\ \\ TESTQUAD&5000&4866&95084&0&4866&95084 \\ \\ TQUARTIC&5000&236&2939&633&342&1434\\ \\ YATP2LS&123200&264&3062&0&264&3062\\ \\ YATP1LS&123200&46&400&25&56&294\\ \bottomrule \end{tabular} \end{table} \section{Conclusion} We have shown that Modified Limited-memory BFGS with displacement aggregation performs well for twice continuously differentiable functions with many variables, so a displacement aggregation strategy is attractive in large-scale optimization. We also observed that Modified L-BFGS with displacement aggregation gives promising results, under certain assumptions, for both convex and non-convex functions of several variables. \clearpage \bibliographystyle{spmpsci}
\section{Introduction} The aim of this paper is to study the existence, global convergence and geometric properties of gradient flows with respect to a specific class of Hessian Riemannian metrics on convex sets. Our work is indeed deeply related to the constrained minimization problem $$ \min \{f(x)\mid x\in \overline{C},\; Ax=b\}, \leqno (P) $$ where $\overline{C}$ is the closure of a nonempty, {\it open} and convex subset $C$ of $\RR^n$, $A$ is an $m\times n$ real matrix with $m\leq n$, $b\in \mathbb{R}^m$ and $f\in C^1(\RR^n)$. A strategy to solve $(P)$ consists in endowing $C$ with a Riemannian structure $(\cdot,\cdot)^H$, restricting it to the relative interior of the feasible set ${{\cal F}}:=C\cap\{x\mid Ax=b\}$, and then considering the trajectories generated by the steepest descent vector field $-\nabla_{_H} f_{|_{{{\cal F}}}}$. This leads to the initial value problem $$(H\mbox{-}SD)\qquad \dot x(t)+\nabla_{_H} f_{|_{{{\cal F}}}}(x(t))=0,\:x(0)\in {{\cal F}},$$ where $(H\mbox{-}SD)$ stands for $H$-steepest descent. We focus on those metrics that are induced by the Hessian $H=\nabla^2 h$ of a {\em Legendre type} convex function $h$ defined on $C$ (cf. Def. \ref{D:legendre}). The use of Riemannian methods in optimization has increased recently: in relation with Karmarkar's algorithm and linear programming see Karmarkar \cite{Kar90}, Bayer-Lagarias \cite{BaL89}; for continuous-time models of proximal type algorithms and related topics see Iusem-Svaiter-Da Cruz \cite{IuS99}, Bolte-Teboulle \cite{BoT02}. For a systematic dynamical system approach to constrained optimization based on double bracket flows, see Brockett \cite{Bro88,Bro91}, the monograph of Helmke-Moore \cite{HeM94} and the references therein. On the other hand, the structure of $(H\mbox{-}SD)$ is also at the heart of some important problems in applied mathematics.
For connections with population dynamics and game theory see Hofbauer-Sigmund \cite{HoS98}, Akin \cite{Aki79}, Attouch-Teboulle \cite{AttTeb}. We will see that $(H\mbox{-}SD)$ can be reformulated as the differential inclusion $\frac{d}{dt}\nabla h(x(t))+\nabla f(x(t))\in {\rm Im}\: A^T,\:x(t)\in{{\cal F}},$ which is formally similar to some evolution problems in infinite dimensional spaces arising in thermodynamical systems, see for instance Kenmochi-Pawlow \cite{KenPaw} and references therein. A classical approach in the asymptotic analysis of dynamical systems consists in exhibiting attractors of the orbits by using Liapounov functionals. Our choice of Hessian Riemannian metrics is based on this idea. In fact, we consider first the important case where $f$ is convex, a condition that permits us to reformulate $(P)$ as a variational inequality problem: $\mbox{find }a\in \overline{{{\cal F}}} \mbox{ such that }(\nabla_{_H} f_{|_{{{\cal F}}}}(x),x-a)^H_x\geq0\:\mbox{ for all }x\mbox{ in }{{\cal F}}.$ In order to identify a suitable Liapounov functional, this variational problem is approached through the following integration problem: {\it find the metrics $(\cdot,\cdot)^H$ for which the vector fields $V^a:{{\cal F}}\to\mathbb{R}^n$, $a\in {{\cal F}}$, defined by $V^a(x)=x-a,$ are $(\cdot,\cdot)^H$-gradient vector fields}. Our first result (cf. Theorem \ref{T:Poincare}) establishes that such metrics are given by the Hessian of strictly convex functions, and in that case the vector fields $V^a$ appear as gradients with respect to the second variable of some distance-like functions that are called $D$-functions. Indeed, if $(\cdot,\cdot)^H$ is induced by the Hessian $H=\nabla^2 h$ of $h: {{\cal F}}\mapsto \RR$, we have for all $a,x$ in ${{\cal F}}$: $\nabla_{_H} D_h(a,.)(x)=x-a, \mbox{ where }D_h(a,x) =h(a)-h(x)-dh(x)(a-x).$ For another characterization of Hessian metrics, see Duistermaat \cite{Dui01}.
Motivated by the previous result and with the aim of solving $(P)$, we are then naturally led to consider Hessian Riemannian metrics that cannot be smoothly extended out of ${{\cal F}}$. Such a requirement is fulfilled by the Hessian of a {\it Legendre (convex) function } $h$, whose definition is recalled in section \ref{S:gradientflow}. We then give a differential inclusion reformulation of $(H\mbox{-}SD)$, which permits us to show that in the case of a linear objective function $f$, the flow of $-\nabla_{_H} f_{|_{{{\cal F}}}}$ stands at the crossroads of many optimization methods. In fact, following \cite{IuS99}, we prove that viscosity methods and Bregman proximal algorithms produce their paths or iterates in the orbit of $(H\mbox{-}SD)$. The $D$-function of $h$ plays an essential role for this. In section \ref{S:Examples} we give a systematic method to construct Legendre functions based on barrier functions for convex inequality problems, which is illustrated with some examples; relations to other works are discussed. Section \ref{GEAS} deals with global existence and convergence properties. After having given a nontrivial well-posedness result (cf. Theorem \ref{T:existence}), we prove in section \ref{S:value} that $f(x(t))\rightarrow \inf_{\overline{{{\cal F}}}}f$ as $t\rightarrow +\infty$ whenever $f$ is convex. A natural problem that arises is the convergence of the trajectory to a critical point. Since one expects the limit to be a (local) solution to $(P)$, which may belong to the boundary of $C$, the notion of critical point must be understood in the sense of the optimality condition for a local minimizer $a$ of $f$ over $\overline{{{\cal F}}}$: $$({\cal O})\qquad \nabla f(a)+N_{\overline{{{\cal F}}}}(a)\ni 0,\:a\in \overline{{{\cal F}}},$$ where $N_{\overline{{{\cal F}}}}(a)$ is the normal cone to $\overline{{{\cal F}}}$ at $a$, and $\nabla f$ is the Euclidean gradient of $f$.
This involves an asymptotic singular behavior that is rather unusual in the classical theory of dynamical systems, where the critical points are typically assumed to lie in the manifold. In section \ref{S:bregman} we assume that the Legendre type function $h$ is a {\em Bregman function with zone $C$} and prove that under a quasi-convexity assumption on $f$, the trajectory converges to some point $a$ satisfying $({\cal O})$. When $f$ is convex, the preceding result amounts to the convergence of $x(t)$ toward a global minimizer of $f$ over $\overline{{{\cal F}}}$. We also give a variational characterization of the limit and establish an abstract result on the rate of convergence under uniqueness of the solution. We consider in section \ref{S:LP} the case of linear programming, for which asymptotic convergence as well as a variational characterization are proved without the Bregman-type condition. Within this framework, we also give some estimates on the convergence rate that are valid for the specific Legendre functions commonly used in practice. In section \ref{S:dual}, we consider the interesting case of positivity and equality constraints, introducing a {\em dual} trajectory $\lambda(t)$ that, under some appropriate conditions, converges to a solution to the dual problem of $(P)$ whenever $f$ is convex, even if primal convergence is not ensured. Finally, inspired by the seminal work \cite{BaL89}, we define in section \ref{S:legendre-transform} a change of coordinates called {\it Legendre transform coordinates}, which permits us to show that the orbits of $(H\mbox{-}SD)$ may be seen as straight lines in a positive cone. This leads to additional geometric interpretations of the flow of $-\nabla_{_H} f_{|_{{{\cal F}}}}$. On the one hand, the orbits are geodesics with respect to an appropriate metric and, on the other hand, they may be seen as $\dot q$-trajectories of some Lagrangian, with consequences in terms of integrable Hamiltonians.
{\bf Notations.} ${\rm Ker}\: A=\{x\in\RR^n\;|\; Ax=0\}.$ The orthogonal complement of ${\cal A}_0$ is denoted by ${\cal A}_0^\perp$, and $\langle \cdot , \cdot \rangle$ is the standard Euclidean scalar product of $\RR ^n$. Let us denote by $\mathbb{S}_{++}^n$ the cone of real symmetric definite positive matrices. Let $\Omega\subset\RR^n$ be an open set. If $f:\Omega\to \RR$ is differentiable then $\nabla f$ stands for the Euclidean gradient of $f$. If $h:\Omega\mapsto\RR$ is twice differentiable then its Euclidean Hessian at $x\in \Omega$ is denoted by $\nabla^2 h(x)$ and is defined as the endomorphism of $\RR^n$ whose matrix in canonical coordinates is given by $[\frac{\partial^2 h(x)}{\partial x_i\partial x_j}]_{i,j \in \{1,..,n\} }$. Thus, $\forall x\in \Omega$, $d^2h(x)=\<\nabla^2h(x)\:\cdot,\cdot\rangle$. \section{Preliminaries} \subsection{The minimization problem and optimality conditions}\label{S:problem} Given a positive integer $m< n$, a full rank matrix $A\in\mathbb{R}^{m\times n}$ and $b\in {\rm Im}\: A$, let us define \begin{equation}\label{E:affine-space} {\cal A}=\{x\in\RR^n\;|\; Ax=b\}. \end{equation} Set ${\cal A}_0={\cal A}-{\cal A}={\rm Ker}\: A$. Of course, ${\cal A}_0^\perp={\rm Im}\: A^T$ where $A^T$ is the transpose of $A$. Let $C$ be a nonempty, open and convex subset of $\mathbb{R}^n$, and $f:\RR^n\to \RR$ a ${\cal C}^1$ function. Consider the constrained minimization problem $$ \inf \{f(x)\;|\;x\in \overline{C},\; Ax=b\}. \leqno (P) $$ The set of optimal solutions of $(P)$ is denoted by $S(P)$. We call $f$ the {\em objective function} of $(P)$. The {\em feasible set} of $(P)$ is given by $ \overline{{{\cal F}}}=\{x\in\mathbb{R}^n\;|\;x\in \overline{C},\; Ax=b\}=\overline{C}\cap {\cal A},$ and ${{\cal F}}$ stands for the {\em relative interior} of $\overline{{{\cal F}}}$, that is \begin{equation}\label{E:feasible} {{\cal F}}={\rm ri}\: \overline{{{\cal F}}}=\{x\in\mathbb{R}^n\;|\;x\in C,\; Ax=b\}=C\cap{\cal A}. 
\end{equation} Throughout this article, we assume that \begin{equation}\label{E:hypof} {{\cal F}}\neq\emptyset . \end{equation} It is well known that a necessary condition for $a$ to be locally minimal for $f$ over $\overline{{{\cal F}}}$ is \( ({\cal O}):\:-\nabla f(a)\in N_{\overline{{{\cal F}}}}(a)\), where $N_{\overline{{{\cal F}}}}(x)=\{\nu \in \RR^n\;|\; \forall y\in\overline{{{\cal F}}},\; \langle y-x,\nu\rangle \leq 0\}$ is the {\em normal cone} to $\overline{{{\cal F}}}$ at $x\in\overline{{{\cal F}}}$ ($N_{\overline{{{\cal F}}}}(x)=\emptyset$ when $x\notin\overline{{{\cal F}}}$); see for instance \cite[Theorem 6.12]{RoW98}. By \cite[Corollary 23.8.1]{Roc70}, \( N_{\overline{{{\cal F}}}}(x)=N_{\overline{C} \cap {\cal A}}(x)=N_{\overline{C}}(x)+N_{{\cal A}}(x)=N_{\overline{C}}(x)+{\cal A}_0^\perp,\) for all \( x\in\overline{{{\cal F}}}\). Therefore, the necessary optimality condition for $a\in \overline{{{\cal F}}}$ is \begin{equation}\label{E:optcond} -\nabla f(a)\in N_{\overline{C}}(a)+ {\cal A}_0^\perp. \end{equation} If $f$ is convex then this condition is also sufficient for $a\in\overline{{{\cal F}}}$ to be in $S(P)$. \subsection{Riemannian gradient flows on the relative interior of the feasible set} Let $M$ be a smooth manifold. The tangent space to $M$ at $x\in M$ is denoted by $T_x M$. If $f:M\mapsto \RR$ is a ${\cal C}^1$ function then $df(x)$ denotes its differential or tangent map $df(x):T_xM\to \mathbb{R} $ at $x\in M$. A ${\cal C}^k$ metric on $M$, $k\geq 0$, is a family of scalar products $(\cdot ,\cdot)_x$ on each $T_x M$, $x\in M$, such that $(\cdot ,\cdot)_x$ depends in a ${\cal C}^k$ way on $x$. The couple $(M, (\cdot ,\cdot)_x)$ is called a ${\cal C}^k$ Riemannian manifold. This structure permits us to identify $T_x M$ with its dual, i.e. the cotangent space $T_x M^*$, and thus to define a notion of gradient vector.
Indeed, given a ${\cal C}^1$ function $f$ on $M$, the gradient of $f$ is denoted by $\nabla_{_{(\cdot,\cdot)}}\:f$ and is uniquely determined by the following conditions: \\ \hspace*{.3cm}(g$_1$) tangency condition: for all $x\in M$, $ \nabla_{_{(\cdot,\cdot)}}\: f (x) \in T_x M^*\simeq T_xM,$ \\ \hspace*{.3cm}(g$_2$) duality condition: for all $x\in M$, $v\in T_x M$, $ df(x)(v)=(\nabla_{_{(\cdot,\cdot)}}\:f(x),v)_x.$ \\ We refer the reader to \cite{DoC,Lan95} for further details. Let us return to the minimization problem $(P)$. Since $C$ is open, we can take $M=C$ with the usual identification $T_xC \simeq \mathbb{R}^n$ for every $x\in C$. Given a continuous mapping $H:C\to\mathbb{S}_{++}^n$, the metric defined by \begin{equation}\label{E:metric} \forall x \in C,\; \forall u,v\in \RR^n,\; (u,v)_x^{H}=\langle H(x)u,v\rangle, \end{equation} endows $C$ with a ${\cal C}^0$ Riemannian structure. The corresponding Riemannian gradient vector field of the objective function $f$ restricted to $C$, which we denote by $\nabla_{_H} f_{|_C}$, is given by \begin{equation}\label{E:riemann} \nabla_{_H} f_{|_C}(x)=H(x)^{-1}\nabla f(x). \end{equation} Next, take $N={{\cal F}}=C\cap {\cal A}$, which is a smooth submanifold of $C$ with $T_x{{\cal F}} \simeq {\cal A}_0$ for each $x\in{{\cal F}}$. Definition (\ref{E:metric}) induces a metric on ${{\cal F}} $ for which the gradient of the restriction $f_{|_{{\cal F}}}$ is denoted by $\nabla_{_H} f_{|_{{\cal F}}}$. Conditions $(g_1)$ and $(g_2)$ imply that for all \( x \in {{\cal F}}\) \begin{equation}\label{E:gradH} \nabla_{_H} f_{|_{{\cal F}}} (x)=P_x H(x)^{-1} \nabla f(x), \end{equation} where, given $x\in C$, $P_x:\RR^n\to {\cal A}_0$ is the $(\cdot,\cdot)_x^H$-orthogonal projection onto the linear subspace ${\cal A}_0$.
Since $A$ has full rank, it is easy to see that \begin{equation}\label{E:projH} P_x=I -H(x)^{-1}A^T(AH(x)^{-1}A^T )^{-1}A, \end{equation} and we conclude that for all \( x \in {{\cal F}}\) \begin{equation}\label{E:gradH-expl} \nabla_{_H} f_{|_{{\cal F}}}(x)=H(x)^{-1}[I-A^T(AH(x)^{-1}A^T )^{-1}AH(x)^{-1}]\nabla f(x). \end{equation} Given $x\in {{\cal F}}$, the vector $-\nabla_{_H} f_{|_{{\cal F}}}(x)$ can be interpreted as the direction in ${\cal A}_0$ in which $f$ decreases most steeply at $x$ with respect to the metric $(\cdot,\cdot)_x^{H}$. The {\em steepest descent method} for the (local) minimization of $f$ on the Riemannian manifold $({{\cal F}}, (\cdot,\cdot)^H_x)$ consists in finding the solution trajectory $x(t)$ of the vector field $-\nabla_{_H} f_{|_{{\cal F}}}$ with initial condition $x^0\in{{\cal F}}$: \begin{equation}\label{E:steepestD} \left\{ \begin{array}{l} \dot{x}+\nabla_{_H} f_{|_{{\cal F}}}(x)=0,\\ x(0)=x^0\in{{\cal F}}. \end{array}\right. \end{equation} \section{Legendre gradient flows in constrained optimization}\label{S:gradientflow} \subsection{Liapounov functionals, variational inequalities and Hessian metrics}\label{S:Poincare} This section is intended to motivate the particular class of Riemannian metrics that is studied in this paper in view of the asymptotic convergence of the solution to (\ref{E:steepestD}). Let us consider the minimization problem $(P)$ and assume that $C$ is endowed with some Riemannian metric $(\cdot,\cdot)^H_x$ as defined in (\ref{E:metric}). Recall that \( V:{{\cal F}}\mapsto \RR \) is a {\em Liapounov functional} for the vector field $-\nabla_H f_{|_{{\cal F}}}$ if $\forall x\in {{\cal F}}$, $(-\nabla_{_H} f_{|_{{\cal F}}}(x), \nabla_{_H} V(x))^H _x\leq 0$. If $x(t)$ is a solution to (\ref{E:steepestD}), this implies that $ t\mapsto V(x(t))$ is nonincreasing.
Although \( f_{|_{{\cal F}}} \) is indeed a Liapounov functional for $-\nabla_H f_{|_{{\cal F}}}$, this does not ensure the convergence of $x(t)$ (see for instance the counterexample of Palis-De Melo \cite{PalDem82} in the Euclidean case). Suppose that the objective function $f$ is convex. For simplicity, we also assume that $A=0$ so that ${{\cal F}}=C$. In the framework of convex minimization, the set of minimizers of \( f \) over $\overline{C}$, denoted by \( \mbox{\rm Argmin}\: _{\overline{C}}\: f \), is characterized in variational terms as follows: \begin{equation} \label{varEuc} \begin{array}{lcl} a\in \mbox{\rm Argmin}\: _{\overline{C}}\: f & \Leftrightarrow & \forall x\in \overline{C},\: \langle \nabla f(x),x-a\rangle \geq 0.\\ \end{array} \end{equation} Setting \(q_{a}(x)=\frac{1}{2}|x-a|^{2} \) for all $a\in\mbox{\rm Argmin}\: _{\overline{C}}\: f$, one observes that $\nabla q_a(x)=x-a$ and thus, by (\ref{varEuc}), $q_{a}$ is a Liapounov functional for $-\nabla f$. This key property allows one to establish the asymptotic convergence as $t\to+\infty$ of the corresponding steepest descent trajectories; see \cite{Bru74} for more details in a very general non-smooth setting. To use the same kind of arguments in a non-Euclidean context, observe that by (\ref{E:riemann}) together with the continuity of $ \nabla f$, the following variational Riemannian characterization holds \begin{equation} \label{varRie} \begin{array}{lcl} a\in \mbox{\rm Argmin}\: _{\overline{C}}\: f & \Leftrightarrow & \forall x\in C,\: ( \nabla_{H} f(x),x-a)^H _x \geq 0.\\ \end{array} \end{equation} We are thus naturally led to the problem of {\it finding the Riemannian metrics on $C$ for which the mappings $C\ni x\mapsto x-y\in\mathbb{R}^n$, $y\in C$, are gradient vector fields}. The next result gives a characterization of such metrics: they are induced by Hessians of strictly convex functions.
\begin{theorem}\label{T:Poincare} Assume that $H\in{\cal C}^1(C;\mathbb{S}_{++}^n)$, or in other words that $(\cdot,\cdot)^H_x$ is a ${\cal C}^1$ metric. The family of vector fields $\{V^y: C\ni x\mapsto x-y\in\mathbb{R}^n\},\:y\in C$ is a family of $(\cdot,\cdot)^H$-gradient vector fields if and only if there exists a strictly convex function \(h\in {\cal C}^3(C) \) such that \(\forall x\in C\), \( H(x)=\nabla ^{2}h(x) \). Besides, defining \( D_{h}:C\times C\mapsto \RR \) by \begin{equation} \label{E:BRE} D_{h}(y,x)=h(y)-h(x)-\langle \nabla h(x),y-x\rangle, \end{equation} we obtain \( \nabla _{H}D_{h}(y,\cdot )(x)=x-y.\) \end{theorem} \begin{proof} The set of metrics complying with the ``gradient'' requirement is denoted by \({\cal M} \), that is, \( (\cdot,\cdot)^H_x\in {\cal M} \Leftrightarrow H\in{\cal C}^1(C;\mathbb{S}_{++}^n)\hbox{ and } \forall y\in C,\: \exists \varphi _{y}\in {\cal C}^{1}(C;\RR),\: \forall x\in C,\: \nabla_{_H} \varphi _{y}(x)=x-y\). Let $(x_1,..,x_n)$ denote the canonical coordinates of $\RR^n$ and write \(\sum _{i,j}H_{ij}(x)dx_{i}dx_{j} \) for \((\cdot,\cdot)^H_x\). By (\ref{E:riemann}), the mappings \( x\mapsto x-y\), \( y\in C \), define a family of \( (\cdot,\cdot)^H_x\) gradients iff \( k_{y}:x\mapsto H(x)(x-y)\), \( y\in C \), is a family of Euclidean gradients. Setting \( \alpha ^{y}(x)=\langle k_{y}(x),\cdot \rangle \), \( x, y\in C \), the problem amounts to finding necessary (and sufficient) conditions under which the \( 1 \)-forms \( \alpha ^{y} \) are all exact. Let $y\in C$. Since \( C \) is convex, the Poincar\'e lemma \cite[Theorem V.4.1]{Lan95} states that \( \alpha ^{y} \) is exact iff it is closed.
In canonical coordinates we have \( \alpha ^{y}(x)=\sum_{i} \left( \sum_{k}H_{ik}(x)(x_{k}-y_{k})\right) dx_{i},\; x\in C,\) and therefore \( \alpha ^{y} \) is exact iff for all \(i,j\in \{1,..,n\}\) we have \( \frac{\partial }{\partial x_{j}}\sum_{k}H_{ik}(x)(x_{k}-y_{k}) =\frac{\partial }{\partial x_{i}}\sum_{k}H_{jk}(x)(x_{k}-y_{k}),\) which is equivalent to \( \sum_{k}\frac{\partial }{\partial x_{j}}H_{ik}(x)(x_{k}-y_{k})+H_{ij}(x)=\sum_{k}\frac{\partial }{\partial x_{i}}H_{jk}(x)(x_{k}-y_{k})+H_{ji}(x).\) Since \( H_{ij}(x)=H_{ji}(x) \), this gives the following condition: \( \sum_{k}\frac{\partial }{\partial x_{j}}H_{ik}(x)(x_{k}-y_{k})=\sum_{k}\frac{\partial }{\partial x_{i}}H_{jk}(x)(x_{k}-y_{k}),\:\forall i,j\in \{1,..,n\}.\) If we set \( V_{x}=(\frac{\partial }{\partial x_{j}}H_{i1}(x),..,\frac{\partial } {\partial x_{j}}H_{in}(x))^{T} \) and \( W_{x}=(\frac{\partial }{\partial x_{i}}H_{j1}(x) ,..,\frac{\partial }{\partial x_{i}}H_{jn}(x))^{T} \), the latter can be rewritten \( \langle V_{x}-W_{x},x-y\rangle =0\), which must hold for all $(x,y)\in C\times C$. Fix $x\in C$. Let \( \epsilon _{x}>0 \) be such that the open ball of center \( x \) with radius \( \epsilon _{x} \) is contained in \( C \). For every \( \nu \) such that \( |\nu|=1 \), take \( y= x+\epsilon_{x}/2 \nu \) to obtain that \( \langle V_{x}-W_{x},\nu \rangle =0\). Consequently, \( V_{x}=W_{x} \) for all \( x\in C \). Therefore, \((\cdot,\cdot)^H_x\in{\cal M}\) iff \begin{equation} \label{partialhessian} \forall x \in C,\:\forall i,j,k \in \{1,..,n\},\:\frac{\partial} {\partial x_i} H_{jk}(x)=\frac{\partial} {\partial x_j} H_{ik}(x). \end{equation} \begin{lemma}\label{L:Poincare} If \(H :C\mapsto \mathbb{S}_{++}^n \) is a differentiable mapping satisfying {\rm (\ref{partialhessian})}, then there exists \( h\in {\cal C}^{3}(C) \) such that \(\forall x \in C \), \( H (x)=\nabla^2 h(x) \). In particular, $h$ is strictly convex. \end{lemma} \begin{proof}[of Lemma \ref{L:Poincare}.] 
For all \( i\in \{1,..,n\} \), set \( \beta ^{i}=\sum _{k}H _{ik}dx_{k} \). By (\ref{partialhessian}), \( \beta ^{i} \) is closed and therefore exact. Let \( \phi _{i} :C\mapsto \RR\) be such that \( d\phi _{i}=\beta ^{i} \) on \( C \), and set \( \omega =\sum _{k}\phi _{k}dx_{k} \). We have that \( \frac{\partial }{\partial x_{j}}\phi _{i}(x)=H_{ij}(x) =H_{ji}(x)=\frac{\partial }{\partial x_{i}}\phi _{j}(x)\), \(\forall x\in C\). This proves that \( \omega \) is closed, and therefore there exists $ h\in {\cal C}^{3}(C,\RR)$ such that \( dh=\omega \). To conclude we just have to notice that \( \frac{\partial }{\partial x_{i}}h(x)=\phi _{i}(x), \) and thus \( \frac{\partial^2 h}{\partial x_{j}\partial x_{i}}(x)=H _{ji}(x) ,\:\forall x\in C\). \end{proof} To finish the proof, remark that taking $\varphi_y=D_h(y,\cdot)$ with $D_h$ being defined by (\ref{E:BRE}), we obtain \(\nabla \varphi_y(x)=\nabla^2 h(x)(x-y)\), and therefore \( \nabla _{H}\varphi_y(x)=x-y \) by virtue of (\ref{E:riemann}). \end{proof} \begin{remark} {\rm (a) In the theory of Bregman proximal methods for convex optimization, the distance-like function $D_h$ defined by {\rm (\ref{E:BRE})} is called the $D$-{\em function} of $h$. Theorem {\rm \ref{T:Poincare}} is a new and surprising motivation for the introduction of $D_h$ in relation with variational inequality problems. (b) For a geometrical approach to Hessian Riemannian structures the reader is referred to the recent work of Duistermaat {\rm \cite{Dui01}}.} \end{remark} Theorem \ref{T:Poincare} suggests endowing $C$ with a Riemannian structure associated with the Hessian $H=\nabla^2h$ of a strictly convex function $h : C\mapsto \RR$. As we will see, under some additional conditions the $D$-function of $h$ is essential for establishing the asymptotic convergence of the trajectory.
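As a quick sanity check of the identity \( \nabla _{H}D_{h}(y,\cdot )(x)=x-y \), the following Python snippet verifies it by central finite differences for one concrete choice, the entropy $h(x)=\sum_i x_i\log x_i$ on $C=\mathbb{R}^3_{++}$ (this specific $h$, the test points and the tolerance are our own illustrative choices), writing the $D$-function as $D_h(y,x)=h(y)-h(x)-\langle\nabla h(x),y-x\rangle$:

```python
import numpy as np

h      = lambda x: np.sum(x * np.log(x))   # entropy on the positive orthant
grad_h = lambda x: np.log(x) + 1.0
hess_h = lambda x: np.diag(1.0 / x)        # H(x) = Hessian of h at x

def D(y, x):
    """D-function of h: D_h(y, x) = h(y) - h(x) - <grad h(x), y - x>."""
    return h(y) - h(x) - grad_h(x) @ (y - x)

x = np.array([0.3, 1.2, 0.7])
y = np.array([0.5, 0.9, 1.1])

# Euclidean gradient of D_h(y, .) at x, by central finite differences.
eps = 1e-6
g = np.array([(D(y, x + eps * e) - D(y, x - eps * e)) / (2 * eps)
              for e in np.eye(3)])

# Riemannian gradient w.r.t. the Hessian metric: H(x)^{-1} times the
# Euclidean gradient; the theorem predicts exactly x - y.
riem_grad = np.linalg.solve(hess_h(x), g)
```

Up to discretization error, `riem_grad` coincides with $x-y$, in agreement with Theorem \ref{T:Poincare}.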
On the other hand, if it is possible to replace $h$ by a sufficiently smooth strictly convex function $h':C'\mapsto \RR$ with $C'\supset\supset C$ and $h'_{|_C}=h$, then the gradient flows for $h$ and $h'$ are the same on $C$, but the steepest descent trajectories associated with the latter may leave the feasible set of $(P)$ and in general they will not converge to a solution of $(P)$. We shall see that to avoid this drawback it is sufficient to require that $|\nabla h(x^j)|\rightarrow +\infty$ for all sequences $(x^j)$ in $C$ converging to a boundary point of $C$. This may be interpreted as a sort of {\em barrier technique}, a classical strategy to enforce feasibility in optimization theory. \subsection{Legendre type functions and the $(H\mbox{-}SD)$ dynamical system}\label{S:legendre} In the sequel, we adopt the standard notation of convex analysis; see \cite{Roc70}. Given a closed convex subset $S$ of $\RR^n$, we say that an extended-real-valued function $g:S\mapsto \RR \cup \{+\infty \}$ belongs to the class $\Gamma_0(S)$ when $g$ is lower semicontinuous, proper ($g\not\equiv+\infty$) and convex. For such a function $g\in\Gamma_0(S)$, its {\em effective domain} is defined by ${\rm dom}\: g=\{x\in S\;|\; g(x)<+\infty\}$. When $g\in\Gamma_0(\RR^n)$ its {\em Legendre-Fenchel conjugate} is given by $g^*(y)=\sup\{\<x,y\rangle-g(x)\;|\;x\in\mathbb{R}^n\}$, and its {\em subdifferential} is the set-valued mapping $\partial g:\mathbb{R}^n\to {\cal P}(\mathbb{R}^n)$ given by $\partial g (x)=\{y\in\mathbb{R}^n\;|\; \forall z\in \mathbb{R}^n, \: g(x)+\<y,z-x\rangle\leq g(z)\}$. We set ${\rm dom}\:\partial g=\{x\in\mathbb{R}^n\;|\;\partial g(x)\neq\emptyset\}$.
\begin{definition}\label{D:legendre}{\rm \cite[Chapter 26]{Roc70}} A function $h\in \Gamma_0(\mathbb{R}^n)$ is called:\\ {\rm (i)} {\rm essentially smooth}, if $h$ is differentiable on ${\rm int}\:{\rm dom}\: h$, with moreover $|\nabla h (x^j)| \rightarrow +\infty$ for every sequence $(x^j)\subset{\rm int}\: {\rm dom}\: h$ converging to a boundary point of ${\rm dom}\: h$ as $j\rightarrow +\infty$;\\ {\rm (ii)} {\rm of Legendre type} if $h$ is essentially smooth and strictly convex on ${\rm int}\:{\rm dom}\: h$. \end{definition} Remark that by \cite[Theorem 26.1]{Roc70}, $h\in \Gamma_0(\mathbb{R}^n)$ is essentially smooth iff $\partial h(x)=\{\nabla h(x)\}$ if $x\in {\rm int}\: {\rm dom}\: h$ and $\partial h(x)=\emptyset$ otherwise; in particular, ${\rm dom}\:\partial h={\rm int}\:{\rm dom}\: h$. Motivated by the results of section \ref{S:Poincare}, we define a Riemannian structure on $C$ by introducing a function $h\in \Gamma_0(\mathbb{R}^n)$ such that: $$ \left\{ \begin{array}{cl} {\rm (i)}& h\hbox{ is of Legendre type with } {\rm int}\:{\rm dom}\: h=C.\\ {\rm (ii)}& h_{|_C}\in {\cal C}^2(C;\mathbb{R}) \hbox{ and } \forall x\in C, \nabla ^2 h(x)\in \mathbb{S}_{++}^n.\\ {\rm (iii)}& \hbox{The mapping } C\ni x \mapsto \nabla ^2 h(x) \hbox{ is locally Lipschitz continuous}. \end{array} \right. \leqno (H_0) $$ Here and subsequently, we take $H=\nabla^2 h$ with $h$ satisfying $(H_0)$. The Hessian mapping $ C\ni x \mapsto H(x)$ endows $C$ with the (locally Lipschitz continuous) Riemannian metric \begin{equation}\label{E:metrich} \forall x \in C,\: \forall u,v\in \RR^n,\: (u,v)_x ^H=\langle H(x)u,v\rangle=\langle \nabla^2 h(x)u,v\rangle, \end{equation} and we say that $(\cdot,\cdot)_x^{H}$ is the {\it Legendre metric} on $C$ induced by the Legendre type function $h$, which also defines a metric on ${{\cal F}}=C\cap {\cal A} $ by restriction. 
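For a concrete instance of $(H_0)$ and of the metric (\ref{E:metrich}), take $h(x)=\sum_i x_i\ln x_i-x_i$ on $C=\RR^n_{++}$, which reappears as (\ref{E:xlogx}) below; then $\nabla^2h(x)={\rm diag}(1/x_1,\ldots,1/x_n)$. The following numerical sketch (an illustration only, not part of the text) evaluates the metric, checks positive definiteness at a point, and observes the blow-up of $|\nabla h|$ near ${\rm bd}\: C$ required by essential smoothness:

```python
import numpy as np

# h(x) = sum(x_i*log(x_i) - x_i) on C = R^n_{++}; its gradient and Hessian.
grad_h = lambda x: np.log(x)
hess_h = lambda x: np.diag(1.0 / x)

def metric(x, u, v):
    """Legendre metric (u, v)_x^H = <grad^2 h(x) u, v> of (E:metrich)."""
    return u @ hess_h(x) @ v

x = np.array([0.5, 2.0, 1.0])
u = np.array([1.0, -1.0, 0.5])

# (H_0)(ii): H(x) is positive definite on C, so (u,u)_x^H > 0 for u != 0.
print(np.all(np.linalg.eigvalsh(hess_h(x)) > 0) and metric(x, u, u) > 0)

# Essential smoothness: |grad h| blows up along a sequence approaching bd C.
norms = [np.linalg.norm(grad_h(np.array([t, 1.0, 1.0]))) for t in (1e-2, 1e-4, 1e-8)]
print(norms[0] < norms[1] < norms[2])
```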
In addition to $f\in{\cal C}^1(\RR^n)$, we suppose that the objective function satisfies \begin{equation}\label{E:locLip} \nabla f \hbox{ is locally Lipschitz continuous on } \RR^n. \end{equation} The corresponding steepest descent method in the Riemannian manifold $({{\cal F}}, (\cdot,\cdot)^H_x)$, which we refer to as $(H\mbox{-}SD)$ for short, is then the following continuous dynamical system $$ \left\{ \begin{array}{l} \dot{x}(t)+\nabla_{_H} f_{|_{{\cal F}}}(x(t))=0,\: t\in (T_m, T_M),\\ x(0)=x^0\in{{\cal F}}, \end{array}\right. \leqno (H\hbox{-}SD) $$ with $H=\nabla^2 h$ and where $-\infty\leq T_m<0<T_M\leq +\infty$ define the interval corresponding to the unique maximal solution of $(H\mbox{-}SD)$. Given an initial condition $x^0\in{{\cal F}}$, we shall say that $(H\hbox{-}SD)$ is {\em well-posed} when its maximal solution satisfies $T_M=+\infty$. In section \ref{S:wellposed} we will give some sufficient conditions ensuring the well-posedness of $(H\mbox{-}SD)$. \subsection{Differential inclusion formulation of $(H\mbox{-}SD)$ and some consequences} It is easily seen that the solution $x(t)$ of $(H\hbox{-}SD)$ satisfies: \begin{equation}\label{E:cont-prox} \left\{ \begin{array}{rcl} \displaystyle{\frac{d}{dt}\nabla h(x(t))+\nabla f(x(t))}&\in& {\cal A}_0^\perp\textrm{ on } (T_m,T_M),\\ \displaystyle{x(t)}&\in& {{\cal F}}\:\: \textrm{ on } (T_m,T_M),\\x(0)&=&x^0\in{{\cal F}}. \end{array} \right. \end{equation} This differential inclusion problem makes sense even when $x\in W^{1,1}_{loc}(T_m,T_M;\RR^n)$, the inclusions being satisfied almost everywhere on $(T_m,T_M)$. Actually, the following result establishes that $(H\mbox{-}SD)$ and (\ref{E:cont-prox}) describe the same trajectory. \begin{proposition}\label{difinc} Let $x\in W^{1,1}_{loc}(T_m,T_M;\RR^n)$. Then, $x$ is a solution of {\rm (\ref{E:cont-prox})} iff $x$ is the solution of $(H\hbox{-}SD)$. In particular, {\rm (\ref{E:cont-prox})} admits a unique solution of class ${\cal C}^1$.
\end{proposition} \begin{proof} Assume that $x$ is a solution of (\ref{E:cont-prox}), and let $I'$ be the subset of $(T_m,T_M)$ on which $t\mapsto (x(t),\nabla h(x(t)))$ is differentiable. We may assume that $x(t)\in {{\cal F}}$ and $\frac{d}{dt}\nabla h(x(t))+\nabla f(x(t)) \in {\cal A}_0 ^{\perp}$, $\forall t\in I'$. Since $x$ is absolutely continuous, $\dot x(t)+H(x(t))^{-1}\nabla f(x(t)) \in H(x(t))^{-1}{\cal A}_0 ^{\perp}$ and $\dot x(t)\in {\cal A}_0$, $\forall t\in I'$. But the orthogonal complement of ${\cal A}_0$ with respect to the inner product $\<H(x)\cdot,\cdot\rangle$ is exactly $H(x)^{-1}{\cal A}_0 ^{\perp}$ when $x\in {{\cal F}}$. It follows that $\dot x+P_{x}H(x)^{-1}\nabla f(x)=0$ on $I'$. This implies that $x$ is the ${\cal C}^1$ solution of $(H\hbox{-}SD)$. \end{proof} Suppose that $f$ is convex. On account of Proposition \ref{difinc}, $(H\hbox{-}SD)$ can be interpreted as a continuous-time model for a well-known class of iterative minimization algorithms. In fact, an implicit discretization of (\ref{E:cont-prox}) yields the following iterative scheme: $ \nabla h(x^{k+1})-\nabla h (x^k)+\mu_k\nabla f (x^{k+1})\in {\rm Im}\: A^T,\; Ax^{k+1}=b,$ where $\mu_k>0$ is a step-size parameter and $x^0\in{{\cal F}}$. This is the optimality condition for \begin{equation}\label{E:BPM} x^{k+1}\in \mbox{\rm Argmin} \left\{ f(x)+1/\mu_k D_h(x,x^k)\;|\; Ax=b\right\}, \end{equation} where $D_h$ is given by \begin{equation}\label{E:Dfunction} D_h (x,y)=h(x)-h(y)-\langle \nabla h(y), x-y\rangle,\:x\in {\rm dom}\: h,\;y\in {\rm dom}\: \partial h=C. \end{equation} The above algorithm is accordingly called the {\em Bregman proximal minimization} method; for insight into its importance in optimization see for instance \cite{CeZ92,ChT93,IuM00,Kiw97b}. Next, assume that $f(x)=\<c,x\rangle$ for some $c\in\RR^n$.
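To fix ideas, one step of (\ref{E:BPM}) can be computed in closed form in a simple instance (an illustration, not needed for the analysis): with $h(x)=\sum_i x_i\ln x_i-x_i$ as in (\ref{E:xlogx}) below, the unit simplex as equality constraint and a linear objective $f(x)=\<c,x\rangle$, the implicit step reduces to the multiplicative update $x^{k+1}_i\propto x^k_ie^{-\mu_kc_i}$, and the inclusion $\nabla h(x^{k+1})-\nabla h(x^k)+\mu_k c\in{\rm Im}\: A^T$ can be verified directly:

```python
import numpy as np

def bregman_step_entropy_simplex(x, c, mu):
    """One implicit step (E:BPM) for f(x) = <c,x> on the unit simplex with
    h(x) = sum(x_i*log(x_i) - x_i): the multiplicative-update closed form."""
    y = x * np.exp(-mu * c)
    return y / y.sum()

x0 = np.array([0.2, 0.3, 0.5])
c = np.array([1.0, -0.5, 2.0])
mu = 0.7
x1 = bregman_step_entropy_simplex(x0, c, mu)

# Feasibility: the iterate stays in the relative interior of the simplex.
print(np.isclose(x1.sum(), 1.0) and np.all(x1 > 0))

# Optimality: grad h(x1) - grad h(x0) + mu*c lies in Im A^T with A = (1,...,1),
# i.e. all components of log(x1) - log(x0) + mu*c are equal.
r = np.log(x1) - np.log(x0) + mu * c
print(np.allclose(r, r[0]))
```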
As already noticed in \cite{BaL89,Fia90,Mac89} for the log-metric and in \cite{IuS99} for a fairly general $h$, in this case the $(H\mbox{-}SD)$ gradient trajectory can be viewed as a {\em central optimal path}. Indeed, integrating (\ref{E:cont-prox}) over \( [0,t] \) we obtain $\nabla h(x(t))-\nabla h(x^0)+tc\in {\cal A}_0^\perp$. Since $x(t)\in {\cal A}$, it follows that \begin{equation}\label{E:viscosity} x(t)\in \mbox{\rm Argmin} \left\{ \< c,x\rangle +1/tD_h(x,x^0)\mid Ax=b\right\}, \end{equation} which corresponds to the so-called {\em viscosity method} relative to $g(x)=D_h(x,x^0)$; see \cite{Att96,ACH97,IuS99} and Corollary \ref{C:selection}. Remark now that for a linear objective function, (\ref{E:BPM}) and (\ref{E:viscosity}) are essentially the same: the sequence generated by the former belongs to the optimal path defined by the latter. Indeed, setting $t_0=0$ and $t_{k+1}=t_k+\mu_k$ for all $k\geq 0$ and integrating (\ref{E:cont-prox}) over $[t_k, t_{k+1}]$, we obtain that $x(t_{k+1})$ satisfies the optimality condition for (\ref{E:BPM}). The following result summarizes the previous discussion. \begin{proposition}\label{P:optBreRie} Assume that $f$ is linear and that the corresponding $(H\mbox{-}SD)$ dynamical system is well-posed. Then, the viscosity optimal path \( \widetilde{x}(\varepsilon) \) relative to $g(x)=D_h(x,x^0)$ and the sequence \( (x^{k})\) generated by {\rm (\ref{E:BPM})} exist and are unique, with in addition $\widetilde{x}(\varepsilon)=x(1/\varepsilon)$, $\forall \varepsilon>0$, and $x^{k}=x(\sum^{k-1}_{l=0}\mu _{l})$, $\forall\ k\geq 1$, where $x(t)$ is the solution of $(H\mbox{-}SD)$. \end{proposition} \begin{remark} {\rm In order to ensure asymptotic convergence for proximal-type algorithms, it is usually required that the step-size parameters satisfy $\sum \mu_k=+\infty$.
By Proposition {\rm \ref{P:optBreRie}}, this is necessary for the convergence of {\rm (\ref{E:BPM})} in the sense that when $(H\mbox{-}SD)$ is well-posed, if $x^k$ converges to some $x^*\in S(P)$ then either $x^0=x^*$ or $\sum\mu_k=+\infty$.} \end{remark} \section{Global existence, asymptotic analysis and examples}\label{GEAS} \subsection{Well-posedness of $(H\mbox{-}SD)$}\label{S:wellposed} In this section we establish the well-posedness of $(H\hbox{-}SD)$ (i.e. $T_M=+\infty$) under three different conditions. In order to avoid any confusion, we say that a set $E\subset \mathbb{R}^n$ is {\em bounded} when it is so for the usual Euclidean norm $|y|=\sqrt{\langle y,y\rangle}$. First, we propose the condition: $$ \hbox{The lower level set $\{ y\in \overline{{{\cal F}}} \;|\;f(y)\leq f(x^0)\}$ is bounded.} \leqno (WP_1) $$ Notice that $(WP_1)$ is weaker than the classical assumption requiring $f$ to have bounded lower level sets in the $H$-metric sense. Next, let $D_h$ be the $D$-function of $h$ that is defined by (\ref{E:Dfunction}) and consider the following condition: $$ \left\{ \begin{array}{l} {\rm (i)}\:\: {\rm dom}\: h=\overline{C} \hbox{ and $\forall a\in \overline{C}$, $\forall\gamma \in \RR$},\: \{y \in {{\cal F}} \:| D_h (a,y)\leq \gamma\}\hbox{ is bounded}.\\ {\rm (ii)}\: \hbox{$S(P)\neq\emptyset$ and $f$ is quasi-convex (i.e. the lower level sets of $f$ are convex)}. \end{array} \right. \leqno (WP_2) $$ When $\overline{{{\cal F}}}$ is unbounded, $(WP_1)$ and $(WP_2)$ involve some a priori properties of $f$. This is actually not necessary for the well-posedness of $(H\mbox{-}SD)$. Consider: $$ \hbox{$\exists$ $K\geq 0$, $L\in \RR$ such that $\forall x\in C$, } ||H(x)^{-1}||\leq K|x|+L. \leqno (WP_3) $$ This property is satisfied by relevant Legendre type functions; take for instance (\ref{E:xlogx}). \begin{theorem}\label{T:existence} Assume that {\rm (\ref{E:locLip})} and $(H_0)$ hold and additionally that either $(WP_1)$, $(WP_2)$ or $(WP_3)$ is satisfied.
If $\inf _{_{{{\cal F}}}} f>-\infty$ then the dynamical system $(H$-$SD)$ is well-posed. Consequently, the mapping $t \mapsto f(x(t))$ is nonincreasing and convergent as $t\rightarrow +\infty$. \end{theorem} \begin{proof} When no confusion may occur, we drop the dependence on the time variable $t$. By definition, $$ T_M=\sup \{T>0\:| \exists !\textrm{ solution $x$ of $(H$-$SD)$ on $[0,T)$ s.t. } x([0,T)) \subset {{\cal F}}\}. $$ We have that $T_M>0$. The definition (\ref{E:projH}) of $P_x$ implies that for all $y \in {\cal A}_0$, $(H(x)^{-1}\nabla f(x)+\dot x, y+\dot x)_x^H=0$ on $[0,T_M)$ and therefore \begin{equation}\label{liap} \langle \nabla f(x)+H(x)\dot x, y+\dot x\rangle=0 \textrm{ on } [0,T_M). \end{equation} Letting $y=0$ in (\ref{liap}) yields \begin{equation}\label{fliap} \frac{d}{dt}f(x)+\langle H(x)\dot x,\dot x\rangle=0. \end{equation} By (\ref{E:hypof})(ii), $f(x(t))$ is convergent as $t\to T_M$. Moreover \begin{equation} \label{velo} \langle H(x(\cdot))\dot x(\cdot),\dot x(\cdot)\rangle \in L^1(0,T_M;\RR). \end{equation} Suppose that $T_M<+\infty$. To obtain a contradiction, we begin by proving that $x$ is bounded. If $(WP_1)$ holds then $x$ is bounded because $f(x(t))$ is non-increasing so that $x(t)\in \{ y\in \overline{{{\cal F}}} \:|f(y)\leq f(x^0)\}$, $\forall t\in [0,T_M)$. Assume now that $f$ and $h$ comply with $(WP_2)$, and let $a\in \overline{{{\cal F}}}$. For each $t\in [0,T_M)$ take $y=x(t)-a$ in (\ref{liap}) to obtain $ \langle \nabla f(x)+\frac{d}{dt}\nabla h(x),x-a+\dot{x}\rangle =0.$ By (\ref{fliap}), this gives $ \langle \frac{d}{dt}\nabla h(x),x-a\rangle+\langle \nabla f(x), x-a\rangle =0$, which we rewrite as \begin{equation} \label{Dliap} \frac{d}{dt} D_h (a,x(t))+\langle \nabla f(x(t)), x(t)-a\rangle =0,\:\forall t\in [0,T_M). \end{equation} Now, let $a\in\overline{{{\cal F}}}$ be a minimizer of $f$ on $\overline{{{\cal F}}}$.
From the quasi-convexity property of $f$, it follows that $\forall t\in [0,T_M)$, $\langle \nabla f(x(t)), x(t)-a\rangle \geq 0$. Therefore, $D_h(a,x(t))$ is non-increasing and $(WP_2)$(ii) implies that $x$ is bounded. Suppose now that $(WP_3)$ holds and fix $t\in [0,T_M)$. We have $|x(t)-x^0| \leq \int_0 ^t |\dot x(s)|ds \leq \int_0 ^t ||\sqrt{H(x(s))^{-1}}|||\sqrt{H(x(s))}\:\dot x(s)|ds \leq (\int_0 ^t ||H(x(s))^{-1}||ds)^{1/2}(\int_0 ^t \langle H(x(s))\dot x(s),\dot x(s)\rangle ds)^{1/2}$. The latter follows from the Cauchy-Schwarz inequality together with the fact that $||\sqrt{H(x)^{-1}}||^2=||H(x)^{-1}||$, the largest eigenvalue of $H(x)^{-1}$. Thus $ |x(t)-x^0| \leq 1/2[\int_0 ^t ||H(x(s))^{-1}||ds+\int_0 ^t \langle H(x(s))\dot x(s), \dot x(s)\rangle ds].$ Combining $(WP_3)$ and (\ref{velo}), Gronwall's lemma yields the boundedness of $x$. Let $\omega(x^0)$ be the set of limit points of $x$, and set $K=x([0,T_M))\cup \omega(x^0)$. Since $x$ is bounded, $\omega(x^0)\neq \emptyset$ and $K$ is compact. If $K\subset C$ then the compactness of $K$ implies that $x$ can be extended beyond $T_M$, which contradicts the maximality of $T_M$. Let us prove $K \subset C$. We argue again by contradiction. Assume that $x(t_j) \rightarrow x^*$, with $t_j<T_M$, $t_j \rightarrow T_M$ as $ j \rightarrow +\infty$ and $x^*\in {\rm bd}\: C= \overline{C}\setminus C$. Since $h$ is of Legendre type, we have $|\nabla h (x(t_j))| \rightarrow +\infty$, and we may assume that $\nabla h (x(t_j))/ |\nabla h (x(t_j))|\to\nu \in \RR^n$ with $|\nu|=1$. \begin{lemma}\label{L:normal} If $(x^j)\subset C$ is such that $x^j\to x^*\in {\rm bd}\: C$ and $\nabla h (x^j)/ |\nabla h (x^j)|\to \nu \in \RR^n$, $h$ being a function of Legendre type with $C={\rm int}\:{\rm dom}\: h$, then $\nu \in N_{\overline{C}}(x^*)$. \end{lemma} \begin{proof}[of Lemma \ref{L:normal}]. By convexity of $h$, $\langle \nabla h (x^j)-\nabla h(y), x^j-y\rangle \geq 0$ for all $y\in C$.
Dividing by $|\nabla h (x^j)|$ and letting $j \rightarrow +\infty$, we get $\<\nu,y-x^*\rangle\leq 0$ for all $y\in C$, which holds also for $y\in\overline{C}$. Hence, $\nu \in N_{\overline{C}}(x^*)$. \end{proof} Therefore, by Lemma \ref{L:normal}, $\nu \in N_{\overline{C}}(x^*)$. Let $\nu _0=\Pi_{{\cal A}_0}\nu$ be the Euclidean orthogonal projection of $\nu$ onto ${\cal A}_0$, and take $y=\nu_0$ in (\ref{liap}). Using (\ref{fliap}), integration gives \begin{equation} \label{blowup} \langle \nabla h (x(t_j)),\nu _0\rangle=\langle \nabla h (x^0)-\int_0 ^{t_j} \nabla f(x(s))ds,\nu _0\rangle. \end{equation} By $(H_0)$ and the boundedness property of $x$, the right-hand side of (\ref{blowup}) is bounded under the assumption $T_M<+\infty$. Hence, to draw a contradiction from (\ref{blowup}) it suffices to prove $\langle \nabla h (x(t_j)), \nu_0\rangle \rightarrow +\infty$. Since $\langle \nabla h (x(t_j))/ |\nabla h (x(t_j))|, \nu _0\rangle \rightarrow |\nu _0|^2$, the proof of the result is complete if we check that $\nu _0 \neq 0$. This is a direct consequence of the following lemma. \begin{lemma}\label{L:lem1} Let $C$ be a nonempty open convex subset of $\RR^n$ and ${\cal A}$ an affine subspace of $\RR^n$ such that $C \cap {\cal A} \neq \emptyset.$ If $x^* \in ({\rm bd}\: C )\cap {\cal A}$ then $ N_{\overline{C}} (x^*)\cap {\cal A}_0^{\perp}=\{0\}$ with ${\cal A}_0={\cal A}-{\cal A}$. \end{lemma} \begin{proof}[of Lemma \ref{L:lem1}]. Let us argue by contradiction and suppose that we can pick some $v\neq 0$ in ${\cal A}_0 ^{\perp}\cap N_{\overline{C}} (x^*)$. For $y_0 \in C \cap {\cal A} $ we have $\langle v, x^*-y_0 \rangle=0$. For $r\geq 0$, $z\in \RR^n$, let $B(z,r)$ denote the ball with center $z$ and radius $r$. There exists $\epsilon >0$ such that $B(y_0,\epsilon) \subset C$. Take $w\in B(0,\epsilon )$ such that $\langle v,w\rangle <0$; then $y_0+w \in C$, yet $\langle v, x^*-(y_0+w) \rangle=\langle v,w\rangle <0$. This contradicts the fact that $v$ is in $N_{\overline{C}} (x^*)$.
\end{proof} This completes the proof of the theorem. \end{proof} \subsection{Value convergence for a convex objective function}\label{S:value} As a first result concerning the asymptotic behavior of $(H\mbox{-}SD)$, we have the following: \begin{proposition}\label{P:convergence} If $(H\mbox{-}SD)$ is well-posed and $f$ is convex then \( \forall a\in{{\cal F}},\;\forall t>0,\; f(x(t))\leq f(a)+\frac{1}{t}D_h(a,x^0)\), where $D_h$ is defined by {\rm (\ref{E:Dfunction})}, hence $ \lim\limits_{t\to +\infty}f(x(t))=\inf_{\overline{{{\cal F}}}}f.$ \end{proposition} \begin{proof} We begin by noticing that $f(x(t))$ converges as $t\to+\infty$ (see Theorem \ref{T:existence}). Fix $a\in {{\cal F}}$. By (\ref{Dliap}), we have that the solution $x(t)$ of $(H$-$SD)$ satisfies $ \frac{d}{dt} D_h(a,x(t))+\langle \nabla f(x(t)),x(t)-a\rangle=0, \: \forall t\geq 0.$ The convexity inequality $f(x)+\<\nabla f(x),a-x\rangle\leq f(a)$ yields $ D_h(a,x(t))+\int_0^t[f(x(s))-f(a)]ds\leq D_h(a,x^0).$ Using that $D_h\geq 0$ and since $f(x(t))$ is non-increasing, we get the estimate. Letting $t\to+\infty$, it follows that $ \lim_{t\to +\infty}f(x(t))\leq f(a). $ Since $a\in{{\cal F}}$ was arbitrarily chosen, the proof is complete. \end{proof} \subsection{Bregman metrics and trajectory convergence}\label{S:bregman} In this section we establish the convergence of $x(t)$ under some additional properties on the $D$-function of $h$. Let us begin with a definition.
\begin{definition}\label{D:bregman} A function $h\in\Gamma_0(\mathbb{R}^n)$ is called {\rm Bregman function with zone $C$} when the following conditions are satisfied:\\ {\rm (i)} ${\rm dom}\: h=\overline{C}$, $h$ is continuous and strictly convex on $\overline{C}$ and $h_{|_C}\in {\cal C}^1(C;\mathbb{R})$.\\ {\rm (ii)} $\forall a\in \overline{C}$, $\forall \gamma\in\mathbb{R}$, $\{y\in C| D_h(a,y)\leq \gamma\}$ is bounded, where $D_h$ is defined by {\rm (\ref{E:Dfunction})}.\\ {\rm (iii)} $\forall y\in\overline{C}$, $\forall y^j\to y$ with $y^j\in C$, $D_h(y,y^j)\to 0$. \end{definition} Observe that this notion slightly weakens the usual definition of Bregman function that was proposed by Censor and Lent in \cite{CeL81}; see also \cite{Bre67}. Actually, a Bregman function in the sense of Definition \ref{D:bregman} belongs to the class of $B$-functions introduced by Kiwiel (see \cite[Definition 2.4]{Kiw97a}). Recall the following important asymptotic separation property: \begin{lemma}\label{L:Bregman}{\rm \cite[Lemma 2.16]{Kiw97a}} If $h$ is a Bregman function with zone $C$ then $\forall y\in\overline{C}$, $\forall (y^j)\subset C$ such that $D_h(y,y^j)\to 0$, we have $y^j\to y$. \end{lemma} \begin{theorem}\label{T:convergence} Suppose that $(H_0)$ holds with $h$ being a Bregman function with zone $C$. If $f$ is quasi-convex satisfying {\rm (\ref{E:locLip})} and $S(P)\neq\emptyset$ then $(H$-$SD)$ is well-posed and its solution $x(t)$ converges as $t\to+\infty$ to some $x^*\in \overline{{{\cal F}}}$ with $ -\nabla f (x^*) \in N_{\overline{C}}(x^*)+ {\cal A}_0^\perp. $ If in addition $f$ is convex then $x(t)$ converges to a solution of $(P)$. \end{theorem} \begin{proof} Notice first that $(WP_2)$ is satisfied. 
By Theorem \ref{T:existence}, $(H$-$SD)$ is well-posed, $x(t)$ is bounded and for each $a\in S(P)$, $D_h(a,x(t))$ is non-increasing and hence convergent. Set $f_\infty=\lim_{t\to+\infty}f(x(t)) $ and define $L=\{y\in \overline{{{\cal F}}}\;|\;f(y)\leq f_\infty\}$. The set $L$ is nonempty and closed. Since $f$ is supposed to be quasi-convex, $L$ is convex, and arguments similar to those in the proof of Theorem \ref{T:existence} under $(WP_2)$ show that $D_h(a,x(t))$ is convergent for all $a\in L$. Let $x^*\in L$ denote a cluster point of $x(t)$ and take $t_j\to+\infty$ such that $x(t_j)\to x^*$. Then, by (iii) in Definition \ref{D:bregman}, $ \lim_{t}D_h(x^*,x(t))=\lim_{j}D_h(x^*,x(t_j))=0.$ Therefore, $x(t)\to x^*$ thanks to Lemma \ref{L:Bregman}. Let us prove that $x^*$ satisfies the optimality condition $-\nabla f(x^*)\in N_{\overline{C}}(x^*)+ {\cal A}_0^\perp$. Fix $z\in {\cal A}_0$, and for each $t\geq 0$ take $y=-\dot{x}(t)+z$ in (\ref{liap}) to obtain $ \langle\frac{d}{dt} \nabla h(x(t))+ \nabla f(x(t)),z\rangle=0.$ This gives \begin{equation}\label{E:inte} \frac{1}{t}\int_0^t\langle\nabla f(x(s)),z\rangle ds=\langle s(t),z\rangle, \end{equation} where $s(t)=[\nabla h(x^0)-\nabla h(x(t))]/t.$ If $x^*\in {{\cal F}}$ then $\nabla h(x(t))\to \nabla h(x^*)$, hence $ \langle \nabla f(x^*),z\rangle=\lim_{t\to+\infty}\frac{1}{t} \int_0^t\langle\nabla f(x(s)),z\rangle ds =\lim_{t\to+\infty}\langle s(t),z\rangle=0.$ Therefore, $\Pi_{{\cal A}_0}\nabla f(x^*)=0$. But $N_{\overline{{{\cal F}}}}(x^*)={\cal A}_0^\perp$ when $x^*\in {{\cal F}}$, which proves our claim in this case. Assume now that $x^*\notin{{\cal F}}$, which implies that $x^*\in ({\rm bd}\: C)\cap {\cal A}$. By (\ref{E:inte}), we have that $\langle s(t),z\rangle$ converges to $\langle \nabla f(x^*),z\rangle$ as $t\to+\infty$ for all $z\in {\cal A}_0$, and therefore $\Pi_{{\cal A}_0} s(t)\to\Pi_{{\cal A}_0} \nabla f(x^*)$ as $t\to+\infty$.
On the other hand, by Lemma \ref{L:normal}, there exists $\nu\in N_{\overline{C}}(x^*)$ with $|\nu|=1$ such that $ \nabla h(x(t_j))/|\nabla h(x(t_j))|\to \nu$ for some $t_j\to+\infty$. Since $N_{\overline{C}}(x^*)$ is positively homogeneous, we deduce that $\exists$ $\bar\nu\in -N_{\overline{C}}(x^*)$ such that $ \Pi_{{\cal A}_0} \nabla f(x^*)= \Pi_{{\cal A}_0} \bar\nu$. Thus, $-\nabla f(x^*)\in -\Pi_{{\cal A}_0}\bar\nu+{\cal A}_0^\perp\subseteq N_{\overline{C}}(x^*)+{\cal A}_0^\perp$, which proves the theorem. \end{proof} Following \cite{IuS99}, we remark that when $f$ is linear, the limit point can be characterized as a sort of ``$D_h$-projection'' of the initial condition onto the optimal set $S(P)$. In fact, we have: \begin{corollary}\label{C:selection} Under the assumptions of Theorem {\em \ref{T:convergence}}, if $f$ is linear then the solution $x(t)$ of $(H$-$SD)$ converges as $t\to+\infty$ to the unique optimal solution $x^*$ of \begin{equation}\label{E:selection} \min_{x\in S(P)} D_h(x,x^0). \end{equation} \end{corollary} \begin{proof} Let $x^*\in S(P)$ be such that $x(t)\to x^*$ as $t\to +\infty$. Let $\bar x\in S(P)$. Since $x(t)\in {{\cal F}}$, the optimality of $\bar x$ yields $f(x(t))\geq f(\bar x)$, and it follows from (\ref{E:viscosity}) that $D_h(x(t),x^0)\leq D_h(\bar x, x^0)$. Letting $t\to+\infty$ in the last inequality, we deduce that $x^*$ solves (\ref{E:selection}). Noticing that $D_h(\cdot,x^0)$ is strictly convex due to Definition \ref{D:bregman}(i), we conclude the result. \end{proof} We finish this section with an abstract result concerning the rate of convergence under uniqueness of the optimal solution. We will apply this result in the next section. Suppose that $f$ is convex and satisfies (\ref{E:hypof}) and {\rm (\ref{E:locLip})}, and that, in addition, $S(P)=\{a\}$.
Given a Bregman function $h$ complying with $(H_0)$, consider the following growth condition: $$ f(x)-f(a)\geq \alpha D_h (a,x)^{\beta}, \: \forall x \in U_a \cap \overline{C}, \leqno (GC) $$ where $U_a$ is a neighborhood of $a$ and with $\alpha>0$, $\beta \geq 1$. The next abstract result gives an estimate of the convergence rate with respect to the $D$-function of $h$. \begin{proposition}\label{P:rate1} Assume that $f$ and $h$ satisfy the above conditions and let $x:[0,+\infty)\to {{\cal F}}$ be the solution of $(H\mbox{-}SD)$. Then we have the following estimates:\\ $\bullet$ If $\beta =1$ then there exists $K>0$ such that $D_h (a,x(t))\leq Ke^{-\alpha t}$, $\forall t>0.$\\ $\bullet$ If $\beta >1$ then there exists $K'>0$ such that $D_h (a,x(t))\leq K'/t^{\frac{1}{\beta-1}}$, $\forall t>0.$ \end{proposition} \begin{proof} The assumptions of Theorem \ref{T:convergence} are satisfied; this yields the well-posedness of $(H$-$SD)$ and the convergence of $x(t)$ to $a$ as $t\to +\infty$. Besides, from (\ref{Dliap}) it follows that for all $t\geq 0$, $\frac{d}{dt} D_h(a,x(t))+ \langle \nabla f (x(t)), x(t)-a \rangle=0.$ By convexity of $f$, we have $ \frac{d}{dt} D_h(a,x(t))+f (x(t))-f(a)\leq 0.$ Since $x(t)\to a$, there exists $t_0$ such that $\forall t\geq t_0$, $x(t)\in U_a \cap {{\cal F}}$. Therefore, combining $(GC)$ with the last inequality, it follows that \begin{equation} \label{difine} \frac{d}{dt} D_h(a,x(t))+\alpha D_h (a,x(t))^{\beta}\leq 0, \: \forall t\geq t_0. \end{equation} In order to integrate this differential inequality, let us first observe that we have the following equivalence: $D_h(a,x(t))>0, \:\forall t\geq0$ iff $x^0\neq a$.
Indeed, if $a \in \overline{{{\cal F}}}\setminus{{\cal F}}$ then the equivalence follows from $x(t)\in{{\cal F}}$ together with Lemma \ref{L:Bregman}; if $a\in {{\cal F}}$ then the optimality condition that is satisfied by $a$ is $\Pi_{{\cal A}_0}\nabla f(a)=0$, and the equivalence is a consequence of the uniqueness of the solution $x(t)$ of $(H$-$SD)$. Hence, we can assume that $x^0\neq a$ and divide (\ref{difine}) by $D_h (a,x(t))^{\beta}$ for all $t\geq t_0$. A simple integration procedure then yields the result. \end{proof} \subsection{Examples: interior point flows in convex programming}\label{S:Examples} This section gives a systematic method to construct explicit Legendre metrics on a quite general class of convex sets. In so doing, we will also show that many systems studied earlier by various authors \cite{BaL89,Kar90,Fay91a,Fia90,Mac89} appear as particular cases of $(H\mbox{-}SD)$ systems. Let $p\geq 1$ be an integer and set $I=\{1,\ldots,p\}$. Let us assume that to each $i\in I$ there corresponds a ${\cal C}^3$ concave function $g_i:\mathbb{R}^n\to \mathbb{R}$ such that \begin{equation}\label{E:slater} \exists x^0\in \mathbb{R}^n, \; \forall i\in I, \; g_i(x^0)> 0. \end{equation} Suppose that the open convex set $C$ is given by \begin{equation}\label{E:convexC} C=\{x\in\RR^n\;|\;g_i(x)>0, i\in I\}.
\end{equation} By (\ref{E:slater}) we have that $C\neq\emptyset$ and $\overline{C}=\{x\in\RR^n\;|\;g_i(x)\geq 0, i\in I\}.$ Let us introduce a class of convex functions of Legendre type $\theta \in \Gamma_0(\mathbb{R})$ satisfying $$ \left\{ \begin{array}{l} \hbox{(i) }(0,\infty)\subset {\rm dom}\: \theta \subset [0,\infty).\\ \hbox{(ii) }\theta\in {\cal C}^3(0,\infty) \hbox{ and } \lim_{s\to 0^{+}}\theta'(s)=-\infty.\\ \hbox{(iii) }\forall s>0,\: \theta''(s)>0.\\ \hbox{(iv) }\hbox{Either }\theta\hbox{ is non-increasing or }\forall i\in I, g_i \hbox{ is an affine function.} \end{array} \right.\leqno{(H_1)}$$ \begin{proposition} Under {\rm (\ref{E:slater})} and $(H_1)$, the function $h\in \Gamma_0(\mathbb{R}^n)$ defined by \begin{equation}\label{E:defofh} h(x)=\sum_{i\in I}\theta(g_i(x)) \end{equation} is essentially smooth with ${\rm int}\:{\rm dom}\: h=C$ and $h\in {\cal C}^3(C)$, where $C$ is given by {\rm (\ref{E:convexC})}. If we assume in addition the following non-degeneracy condition: \begin{equation}\label{E:nondeg} \forall x\in C,\; {\rm span}\{\nabla g_i(x)\:|\:i\in I \}=\RR^n, \end{equation} then $H=\nabla^2 h$ is positive definite on $C$, and consequently $h$ satisfies $(H_0)$. \end{proposition} \begin{proof} Define $h_i\in\Gamma_0(\mathbb{R}^n)$ by $h_i(x)=\theta(g_i(x)).$ We have that $\forall i\in I$, $C\subset {\rm dom}\: h_i$. Hence $ {\rm int}\:{\rm dom}\: h=\bigcap_{i\in I}{\rm int}\:{\rm dom}\: h_i\supseteq C\neq \emptyset,$ and by \cite[Theorem 23.8]{Roc70}, we conclude that $ \partial h(x)=\sum_{i\in I}\partial h_i(x)$ for all $x\in\mathbb{R}^n$. But $ \partial h_i(x)=\theta'(g_i(x))\nabla g_i(x)$ if $g_i(x)>0$ and $\partial h_i(x)=\emptyset$ if $g_i(x)\leq 0$; see \cite[Theorem IX.3.6.1]{HiL96}. Therefore $\partial h(x)=\sum_{i\in I}\theta'(g_i(x))\nabla g_i(x)$ if $x\in C$, and $\partial h(x)=\emptyset$ otherwise.
Since $\partial h$ is a single-valued mapping, it follows from \cite[Theorem 26.1] {Roc70} that $h$ is essentially smooth and ${\rm int}\:{\rm dom}\: h={\rm dom}\:\partial h=C$. Clearly, $h$ is of class ${\cal C}^3$ on $C$. Assume now that (\ref{E:nondeg}) holds. For $x\in C$, we have $\nabla^2 h(x)=\sum_{i\in I}\theta''(g_i(x))\nabla g_i(x)\nabla g_i(x)^T+\sum_{i\in I}\theta'(g_i(x))\nabla^2g_i(x).$ By $(H_1)$(iv), it follows that for any $v\in\mathbb{R}^n$, $ \sum_{i\in I}\theta'(g_i(x))\<\nabla^2g_i(x)v,v\rangle\geq 0$. Let $v\in\mathbb{R}^n$ be such that $\<\nabla ^2h(x)v,v\rangle=0$, which yields $ \sum_{i\in I}\theta''(g_i(x))\<v,\nabla g_i(x)\rangle^2=0.$ According to $(H_1)$(iii), the latter implies that $v\in {\rm span}\{\nabla g_i(x)|i\in I \}^\perp=\{0\}$. Hence $\nabla ^2h(x)\in\mathbb{S}_{++}^n$ and the proof is complete. \end{proof} If $h$ is defined by (\ref{E:defofh}) with $\theta\in\Gamma_0(\mathbb{R})$ satisfying $(H_1)$, we say that $\theta$ is the {\em Legendre kernel} of $h$. Such kernels can be divided into two classes. The first one corresponds to those kernels $\theta$ for which ${\rm dom}\:\theta=(0,\infty)$ so that $\theta(0)=+\infty$; these are associated with {\em interior barrier} methods in optimization, as for instance: the log-barrier $\theta_1(s)=-\ln(s)$, $s>0$, and the inverse barrier $\theta_{2}(s)=1/s$, $s>0$. The kernels $\theta$ belonging to the second class satisfy $\theta(0)<+\infty$, and are connected with the notion of {\em Bregman function} in proximal algorithms theory. Here are some examples: the Boltzmann-Shannon entropy $\theta_3(s)=s\ln(s)-s$, $s\geq 0$ (with $0\ln0=0$); $\theta_4(s)=-\frac{1}{\gamma}s^\gamma$ with $\gamma\in (0,1)$, $s\geq 0$ (Kiwiel \cite{Kiw97a}); $\theta_5(s)= (\gamma s-s^{\gamma})/(1-\gamma)$ with $\gamma \in (0,1)$, $s\geq 0$ (Teboulle \cite{Teb92}); the ``$x \log x$'' entropy $\theta_6 (s)=s\ln s$, $ s\geq 0$.
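The one-dimensional conditions $(H_1)$(ii)-(iii) for the six kernels above are easy to check numerically. The following sketch (an illustration on sampled grids, not a proof) verifies that each $\theta_i'$ is strictly increasing on $(0,+\infty)$, i.e. $\theta_i''>0$, and tends to $-\infty$ at $0^+$:

```python
import numpy as np

gamma = 0.5  # exponent used by the Kiwiel and Teboulle kernels

# First derivatives theta' of the six Legendre kernels listed above.
dthetas = {
    "log-barrier":       lambda s: -1.0 / s,
    "inverse-barrier":   lambda s: -1.0 / s**2,
    "Boltzmann-Shannon": lambda s: np.log(s),
    "Kiwiel":            lambda s: -s**(gamma - 1.0),
    "Teboulle":          lambda s: gamma * (1.0 - s**(gamma - 1.0)) / (1.0 - gamma),
    "x log x":           lambda s: np.log(s) + 1.0,
}

s_grid = np.logspace(-6, 2, 200)
for name, dth in dthetas.items():
    # (H_1)(iii): theta'' > 0 on (0,+inf), i.e. theta' is strictly increasing.
    assert np.all(np.diff(dth(s_grid)) > 0), name
    # (H_1)(ii): theta'(s) -> -infinity as s -> 0+ (checked on a shrinking grid).
    assert dth(1e-16) < dth(1e-4) and dth(1e-16) < -30, name
print("all six kernels pass the sampled checks for (H_1)(ii)-(iii)")
```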
In connection with Theorem \ref{T:convergence} of the previous section, note that the Legendre kernels $\theta_i$, $i=3,...,6$, are all Bregman functions with zone $\mathbb{R}_+$. Moreover, it is easily seen that each corresponding Legendre function $h$ defined by {\rm (\ref{E:defofh})} is indeed a Bregman function with zone $C$. In order to illustrate the type of dynamical systems given by $(H$-$SD)$, consider the case of positivity constraints where $p=n$ and $g_i(x)=x_i$, $i\in I$. Thus $C=\mathbb{R}^n_{++}$ and $\overline{C}=\mathbb{R}^n_+$. Let us assume that $\exists x^0\in \RR^n_{++}$, $ Ax^0=b$. Recall that the corresponding minimization problem is $(P)\:\:\min\{ f(x) \;|\; x\geq 0,\; Ax=b\}$ and take first the kernel $\theta_3$ from above. The associated Legendre function (\ref{E:defofh}) is given by \begin{equation}\label{E:xlogx} h(x)=\sum_{i=1}^n x_i\ln x_i-x_i, \; x\in\mathbb{R}^n_{+}, \end{equation} and the differential equation in $(H$-$SD)$ is given by \begin{equation}\label{E:faybusovich} \dot{x}+[I-XA^T(AXA^T)^{-1}A]X\nabla f(x)=0, \end{equation} where $X={\rm diag}(x_1,...,x_n)$. If $f(x)=\<c,x\rangle$ for some $c\in \mathbb{R}^n$ and in the absence of linear equality constraints, then (\ref{E:faybusovich}) is $ \dot{x}+X c=0$. The change of coordinates $y=\nabla h(x)=(\ln x_1,...,\ln x_n)$ gives $ \dot{y}+c=0$. Hence, $x(t)=(x_1^0e^{-c_1t},...,x_n^0e^{-c_nt})$, $t\in \RR,$ where $x^0=(x_1^0,...,x_n^0)\in\mathbb{R}^n_{++}$. If $c\in\RR^n_+$ then $\inf_{x\in \mathbb{R}^n_{+}}\<c,x\rangle=0$ and $x(t)$ converges to a minimizer of $f=\<c,\cdot\rangle$ on $\mathbb{R}^n_+$; if $c_{i_0}<0$ for some $i_0$, then $\inf_{x\in \mathbb{R}^n_{+}}\<c,x\rangle=-\infty$ and $x_{i_0}(t)\to+\infty$ as $t\rightarrow +\infty$. Next, take $A=(1,\ldots,1)\in\mathbb{R}^{1\times n}$ and $b=1$ so that the feasible set of $(P)$ is given by $\overline{{{\cal F}}}=\Delta_{n-1}=\{x\in\RR^n\mid x\geq 0,\: \sum_{i=1}^nx_i=1\}$, that is, the $(n-1)$-dimensional simplex.
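The closed-form trajectory $x(t)=(x_1^0e^{-c_1t},\ldots,x_n^0e^{-c_nt})$ and its asymptotics are easy to confirm numerically; a minimal sketch (an illustration only):

```python
import numpy as np

c = np.array([1.0, -0.5, 0.0])     # c_2 < 0, so inf <c,x> over R^n_+ is -infinity
x0 = np.array([0.3, 1.2, 2.0])

x_traj = lambda t: x0 * np.exp(-c * t)   # claimed solution of xdot + Xc = 0

# Central finite difference of x(.) agrees with the vector field -X(t)c.
t, eps = 0.8, 1e-6
xdot_fd = (x_traj(t + eps) - x_traj(t - eps)) / (2 * eps)
print(np.allclose(xdot_fd, -x_traj(t) * c, atol=1e-6))

# In the coordinates y = grad h(x) = log x the flow is linear: y(t) = y(0) - tc.
print(np.allclose(np.log(x_traj(t)), np.log(x0) - t * c))

# Since c_2 = -0.5 < 0, the component x_2(t) tends to +infinity, as in the text.
print(x_traj(50.0)[1] > 1e9)
```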
In this case, (\ref{E:faybusovich}) corresponds to $\dot x+[X-xx^T]\nabla f(x)=0$, or componentwise \begin{equation}\label{E:karmarkar} \dot x_i+x_i\left(\frac{\partial f}{\partial x_i}-\sum_{j=1}^nx_j\frac{\partial f}{\partial x_j}\right)=0,\quad i=1,\ldots,n. \end{equation} For suitable choices of $f$, this is a {\em Lotka-Volterra} type equation that naturally arises in population dynamics theory and, in that context, the structure $(\cdot,\cdot)^H$ with $h$ as in (\ref{E:xlogx}) is usually referred to as the {\em Shahshahani} metric; see \cite{Aki79,HoS98} and the references therein. Figure \ref{3dess1} gives a numerical illustration of system (\ref{E:karmarkar}) for $n=3$ and with $f(x)=x_3-x_2$. \begin{figure} \begin{center} \includegraphics[scale=0.4]{shasha.eps} \end{center} \caption{\label{3dess1} A trajectory of (\ref{E:karmarkar}).} \end{figure} Karmarkar studied (\ref{E:karmarkar}) in \cite{Kar90} for a quadratic objective function as a continuous model of the interior point algorithm he introduced in \cite{Kar84}. Equation (\ref{E:faybusovich}) is studied by Faybusovich in \cite{Fay91a,Fay91b,Fay91c} when $(P)$ is a linear program, establishing connections with completely integrable Hamiltonian systems and an exponential convergence rate, and by Herzel et al. in \cite{HRZ91}, who prove quadratic convergence for an explicit discretization. Take now the log-barrier kernel $\theta_1$ and $ h(x)=-\sum_{i=1}^n\ln x_i.$ Since $\nabla ^2 h(x)=X^{-2}$ with $X$ defined as above, the associated differential equation is \begin{equation}\label{E:Atrajectory} \dot{x}+[I-X^2A^T(AX^2A^T)^{-1}A]X^2\nabla f(x)=0. \end{equation} This equation was considered by Bayer and Lagarias in \cite{BaL89} for a linear program.
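The replicator system (\ref{E:karmarkar}) is easy to experiment with numerically. The sketch below (an added illustration using a naive explicit Euler discretization, not the scheme of \cite{HRZ91}) integrates the case $n=3$, $f(x)=x_3-x_2$ of Figure \ref{3dess1} and checks two expected features: the simplex is invariant, and the trajectory approaches the vertex $e_2$ that minimizes $f$ on $\Delta_2$.

```python
import math

# Explicit Euler integration of the replicator system (E:karmarkar) for n = 3
# with the linear objective f(x) = x3 - x2, i.e. c = (0, -1, 1).
# Step size and horizon are illustrative ad hoc choices.
c = [0.0, -1.0, 1.0]

def replicator_step(x, dt):
    avg = sum(ci * xi for ci, xi in zip(c, x))      # <c, x>
    return [xi - dt * xi * (ci - avg) for ci, xi in zip(c, x)]

x = [1.0 / 3] * 3                                   # start at the barycenter
dt, steps = 1e-3, 20000                             # integrate up to t = 20
for _ in range(steps):
    x = replicator_step(x, dt)

# The simplex is invariant and x(t) approaches the vertex e_2 minimizing f:
assert abs(sum(x) - 1.0) < 1e-9
assert x[1] > 0.99

# Cross-check against the closed form x_i(t) = x_i^0 e^{-c_i t} / sum_j x_j^0 e^{-c_j t},
# i.e. the normalized version of the unconstrained solution derived above.
t = dt * steps
w = [math.exp(-ci * t) / 3 for ci in c]
exact = [wi / sum(w) for wi in w]
assert max(abs(a - b) for a, b in zip(x, exact)) < 1e-2
```

Note that the explicit Euler step preserves $\sum_i x_i=1$ exactly (up to roundoff), since the drift of the sum vanishes on the simplex.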
In the particular case $f(x)=\<c,x\rangle$ and without linear equality constraints, (\ref{E:Atrajectory}) amounts to $\dot{x}+X^2c=0$, or $\dot y+c=0$ for $y=\nabla h(x)=-X^{-1}e$ with $e=(1,\cdots,1)\in\RR^n$, which gives $x(t)=\left(1/(1/x_1^0+c_1t),...,1/(1/x_n^0+c_nt)\right)$, $ T_m\leq t\leq T_M, $ with $ T_m=\max\{-1/x_i^0c_i\;|\; c_i>0\}$ and $T_M=\min\{-1/x_i^0c_i\;|\; c_i<0\}$ (see \cite[p.~515]{BaL89}). Denote by $\Pi_{{\cal A}_0}$ the Euclidean orthogonal projection onto ${\cal A}_0$. To study the associated trajectories for a general linear program, the {\em Legendre transform coordinates} $y=\Pi_{{\cal A}_0}\nabla h(x)=[I-A^T(AA^T)^{-1}A]X^{-1}e$ are introduced in \cite{BaL89}; this change of variables still linearizes (\ref{E:Atrajectory}) when $f$ is linear (see section \ref{S:legendre-transform} for an extension of this result), and permits one to establish some remarkable analytic and geometric properties of the trajectories. A similar system was considered in \cite{Fia90,Mac89} as a continuous log-barrier method for nonlinear inequality constraints and with ${\cal A}_0=\RR^n$. New systems may be derived by choosing other kernels. For instance, taking $h(x)=-1/\gamma\sum_{i=1}^n x_i^\gamma$ with $\gamma\in (0,1)$, $A=(1,\ldots,1)\in\mathbb{R}^{1\times n}$ and $b=1$, we obtain \begin{equation}\label{E:kiwiel} \dot x_i+\frac{x_i^{2-\gamma}}{1-\gamma}\left(\frac{\partial f}{\partial x_i}-\sum_{j=1}^n\frac{x_j^{2-\gamma}}{\sum_{k=1}^nx_k^{2-\gamma}}\frac{\partial f}{\partial x_j}\right)=0,\quad i=1,\ldots,n. \end{equation} \subsection{Convergence results for linear programming}\label{S:LP} Let us consider the specific case of a linear program $$ \min_{x\in \RR^n}\{\langle c,x\rangle\mid Bx\geq d,\: Ax= b\}, \leqno{(LP)} $$ where $A$ and $b$ are as in section \ref{S:problem}, $c\in \RR^n$, $B$ is a $p\times n$ full rank real matrix with $p\geq n$ and $d\in\RR^p$.
We assume that the optimal set satisfies \begin{equation}\label{E:LP} S(LP)\hbox{ is nonempty and bounded}, \end{equation} and that there exists a Slater point $x^0\in\RR^n$, i.e. $Bx^0>d$ and $Ax^0=b$. Take the Legendre function \begin{equation}\label{E:defhLP} h(x)=\sum_{i=1}^p\theta(g_i(x)),\quad g_i(x)= \<B_i,x\rangle-d_i, \end{equation} where $B_i\in\mathbb{R}^n$ is the $i$th row of $B$ and the Legendre kernel $\theta$ satisfies $(H_1)$. By (\ref{E:LP}), $(WP_1)$ holds and therefore $(H\mbox{-}SD)$ is well-posed due to Theorem \ref{T:existence}. Moreover, $x(t)$ is bounded and all its cluster points belong to $S(LP)$ by Proposition \ref{P:convergence}. The variational property (\ref{E:viscosity}) ensures the convergence of $x(t)$ and gives a variational characterization of the limit as well. Indeed, we have the following result: \begin{proposition}\label{P:selectionLP} Let $h$ be given by {\rm (\ref{E:defhLP})} with $\theta$ satisfying $(H_1)$. Under {\rm (\ref{E:LP})}, $(H\mbox{-}SD)$ is well-posed and $x(t)$ converges as $t\to+\infty$ to the unique solution $x^*$ of \begin{equation}\label{E:selectionLP} \min_{x\in S(LP)}\sum\limits_{i\notin I_0} D_{\theta}(g_i(x),g_i(x^0)), \end{equation} where $I_0=\{i\in I\mid g_i(x)=0\mbox{ for all } x\in S(LP)\}$. \end{proposition} \begin{proof} Assume that $S(LP)$ is not a singleton; otherwise there is nothing to prove. The relative interior ${\rm ri}\: S(LP)$ is nonempty and moreover ${\rm ri}\: S(LP)=\{x\in\RR^n\mid g_i(x)=0\mbox{ for }i\in I_0,\: g_i(x)> 0\mbox{ for }i\not\in I_0,\: Ax=b\}$. By compactness of $S(LP)$ and strict convexity of $\theta\circ g_i$, there exists a unique solution $x^*$ of (\ref{E:selectionLP}). Moreover, it is easy to see that $x^*\in {\rm ri}\: S(LP)$. Let $\bar x\in S(LP)$ and $t_j\to+\infty$ be such that $x(t_j)\to\bar x$. It suffices to prove that $\bar x= x^*$. When $\theta(0)<+\infty$, the latter follows by the same arguments as in Corollary \ref{C:selection}.
When $\theta(0)=+\infty$, the proof of \cite[Theorem 3.1]{ACH97} can be adapted to our setting (see also \cite[Theorem 2]{IuS99}). Set $x^*(t)=x(t)-\bar x+x^*$. Since $Ax^*(t)=b$ and $D_h(x,x^0)=\sum_{i=1}^p D_{\theta}(g_i(x),g_i(x^0))$, (\ref{E:viscosity}) gives \begin{equation}\label{E:visco} \langle c,x(t)\rangle+\frac{1}{t}\sum_{i=1}^p D_{\theta}(g_i(x(t)),g_i(x^0))\leq\langle c,x^*(t)\rangle+\frac{1}{t}\sum_{i=1}^p D_{\theta}(g_i(x^*(t)),g_i(x^0)). \end{equation} But $\langle c,x(t)\rangle=\langle c,x^*(t)\rangle$ and $\forall i\in I_0$, $g_i(x^*(t))=g_i(x(t))>0$. Since $x^*\in{\rm ri}\: S(LP)$, for all $i\notin I_0$ and $j$ large enough, $g_i(x^*(t_j))> 0$. Thus, the right-hand side of (\ref{E:visco}) is finite at $t_j$, and letting $j\to+\infty$ it follows that $\sum\limits_{i\notin I_0}D_{\theta}(g_i(\bar x),g_i(x^0))\leq\sum\limits_{i\notin I_0} D_{\theta}(g_i(x^*),g_i(x^0)).$ Hence, $\bar x= x^*$. \end{proof} {\bf Rate of convergence.} We turn now to the case where there are no equality constraints, so that the linear program is \begin{equation}\label{E:lp} \min_{x\in\mathbb{R}^n}\{\<c,x\rangle\mid Bx\geq d\}. \end{equation} We assume that (\ref{E:lp}) admits a unique solution $a$ and we study the rate of convergence when $\theta$ is a Bregman function with zone $\RR_{+}$. To apply Proposition \ref{P:rate1}, we need: \begin{lemma}\label{L:norm} Set $C=\{x\in\RR^n|Bx> d\}$. If {\rm (\ref{E:lp})} admits a unique solution $a\in\RR^n$ then $\exists k_0>0$, $\forall y\in \overline{C}$, $\langle c,y-a\rangle \geq k_0{\cal N}(y-a)$, where ${\cal N} (x)=\sum _{i\in I} |\langle B_i , x\rangle|$ is a norm on $\RR^n$. \end{lemma} \begin{proof} Set $ I_0=\{i\in I\mid \<B_i,a\rangle=d_i\}$. The optimality conditions for $a$ imply the existence of a {\em multiplier} vector $\lambda \in \RR^p_+$ such that $\lambda _i [d_i-\< B_i,a\rangle]=0$, $\forall i\in I,$ and $c=\sum_{i\in I} \lambda_i B_i$. Let $y\in \overline{C}$.
Since $\lambda_i=0$ for $i\notin I_0$ and $\<B_i,y-a\rangle\geq 0$ for $i\in I_0$, we deduce that $ \< c,y-a \rangle=N(y-a)$ where $N(x)=\sum_{i\in I_0} \lambda_i |\< B_i,x\rangle|$. By uniqueness of the optimal solution, it is easy to see that ${\rm span}\{B_i\mid i\in I_0 \}=\RR^n$, hence $N$ is a norm on $\RR^n$. Since ${\cal N}(x)=\sum _{i\in I} |\langle B_i , x\rangle|$ is also a norm on $\RR^n$ (recall that $B$ is a full rank matrix), we deduce that $\exists k_0>0$ such that $N(x)\geq k_0{\cal N}(x)$. \end{proof} The following lemma is a sharper version of Proposition \ref{P:rate1} in the linear context. \begin{lemma} Under the assumptions of Proposition {\rm \ref{P:selectionLP}}, assume in addition that $\theta$ is a Bregman function with zone $\RR_{+}$ and that there exist $\alpha>0$, $\beta\geq 1$ and $\varepsilon>0$ such that \begin{equation} \label{comp} \forall s\in (0,\varepsilon), \: \alpha D_{\theta}(0,s)^{\beta}\leq s. \end{equation} Then there exist positive constants $K,L,M$ such that for all $t>0$ the trajectory of $(H$-$SD)$ satisfies $D_h(a,x(t))\leq Ke^{-Lt}$ if $\beta=1$, and $D_h(a,x(t))\leq M/t^{\frac{1}{\beta-1}}$ if $\beta>1$. \end{lemma} \begin{proof} By Lemma \ref{L:norm}, there exists $k_0>0$ such that for all $t>0$, \begin{equation} \label{restim} \langle c,x(t)-a\rangle \geq k_0\sum _{i\in I} |\langle B_i, x(t) \rangle -\<B_i,a\rangle|. \end{equation} Now, if we prove that $\exists \lambda>0$ such that \begin{equation} \label{restim2} |\langle B_i, x(t) \rangle -\<B_i,a\rangle|\geq \lambda D_{\theta}(\<B_i,a\rangle-d_i, \langle B_i, x(t) \rangle -d_i) \end{equation} for all $i\in I$ and for $t$ large enough, then from (\ref{restim}) it follows that $f(\cdot)=\< c,\cdot\rangle$ satisfies the assumptions of Proposition \ref{P:rate1} and the conclusion follows easily.
Since $x(t)\rightarrow a$, to prove (\ref{restim2}) it suffices to show that $\forall r_0 \geq 0$, $\exists \eta, \mu>0$ such that $\forall s$, $|s-r_0|<\eta$, $\mu D_{\theta}(r_0,s)^{\beta} \leq |r_0-s|.$ The case where $r_0=0$ is a direct consequence of (\ref{comp}). Let $r_0>0$. An easy computation yields $\frac{d^2}{ds^2} D_{\theta}(r_0, s)_{|s=r_0}=\theta '' (r_0),$ and by Taylor's expansion formula \begin{equation} \label{Taylor} D_{\theta}(r_0, s)=\frac{\theta '' (r_0)}{2}(s-r_0)^2+o((s-r_0)^2) \end{equation} with $\theta '' (r_0)>0$ due to $(H_1)$(iii). Let $\eta\in(0,1]$ be such that $\forall s$, $|s-r_0|<\eta$, $s>0$, $D_{\theta}(r_0, s) \leq \theta '' (r_0)(s-r_0)^2$ and $D_{\theta}(r_0, s) \leq 1$; since $\beta \geq 1$, $D_{\theta}(r_0, s) ^{\beta}\leq D_{\theta}(r_0, s) \leq \theta '' (r_0)|s-r_0|$. \end{proof} To obtain Euclidean estimates, the functions $s\mapsto D_{\theta}(r_0, s)$, $r_0 \in \RR _+$ have to be locally compared to $s\mapsto |r_0-s|$. By (\ref{Taylor}) and the fact that $\theta '' >0$, for each $r_0>0$ there exist $K,\eta>0$ such that $ |r_0-s|\leq K\sqrt{D_{\theta}(r_0, s)}, \:\forall s, \:|r_0-s|< \eta.$ This shows that, in practice, the Euclidean estimate depends only on a property of the type (\ref{comp}). Examples:\\ $\bullet$ The Boltzmann-Shannon entropy $\theta_3 (s)=s\ln(s)-s$ and $\theta_6 (s)=s\ln s$ satisfy $D_{\theta _i}(0,s)=s$, $s>0$; hence for some $K,L>0$, $|x(t)-a|\leq K e^{-Lt}$, $\forall t\geq0$.\\ $\bullet$ With either $\theta_4 (s)= -s^{\gamma}/\gamma$ or $\theta _5 (s)= (\gamma s-s^{\gamma})/(1-\gamma)$, $\gamma \in (0,1)$, we have $D_{\theta_4}(0,s)=\frac{1-\gamma}{\gamma}s^{\gamma}$ and $D_{\theta_5}(0,s)=s^{\gamma}$, $s>0$; hence $|x(t)-a|\leq K/t^{\frac{\gamma}{2-2\gamma}}$, $\forall t>0.$ \subsection{Dual convergence}\label{S:dual} In this section we focus on the case $C=\RR^n_{++}$, so that the minimization problem is $$ \min \{f(x)\mid x\geq 0,\; Ax=b\}.
\leqno (P) $$ We assume \begin{equation}\label{A1} \hbox{$f$ is convex and }S(P)\neq \emptyset, \end{equation} together with the Slater condition \begin{equation}\label{A2} \exists x^0\in \RR^n,\; x^0>0,\; Ax^0=b. \end{equation} In convex optimization theory, it is usual to associate with $(P)$ the {\em dual} problem given by $$ \min \{p(\lambda)\mid \lambda\geq 0\}, \leqno (D) $$ where $p(\lambda)=\sup\{\<\lambda,x\rangle-f(x)\mid Ax=b\}$. For many applications, dual solutions are as important as primal ones. In the particular case of a linear program where $f(x)=\<c,x\rangle$ for some $c\in\RR^n$, writing $\lambda=c+A^Ty$ with $y\in\RR^m$, the linear dual problem may equivalently be expressed as $ \min \{\<b,y\rangle\mid A^Ty+c\geq 0\}$. Thus, $\lambda$ is interpreted as a vector of {\em slack} variables for the dual inequality constraints. In the general case, $S(D)$ is nonempty and bounded under (\ref{A1}) and (\ref{A2}), and moreover \(S(D)=\{\lambda\in\RR^n\mid \lambda\geq 0,\; \lambda\in\nabla f(x^*)+{\rm Im}\: A^T,\; \<\lambda,x^*\rangle=0\}\), where $x^*$ is any solution of $(P)$; see for instance \cite[Theorems VII.2.3.2 and VII.4.5.1]{HiL96}.\\ Let us introduce a Legendre kernel $\theta$ satisfying $(H_1)$ and define \begin{equation}\label{E:hdual} h(x)=\sum_{i=1}^n \theta(x_i). \end{equation} Suppose that $(H\mbox{-}SD)$ is well-posed. Integrating the differential inclusion (\ref{E:cont-prox}), we obtain \begin{equation}\label{E:opt-dual} \lambda(t)\in c(t) +{\rm Im}\: A^T, \end{equation} where $ c(t)=\frac{1}{t}\int_0^t \nabla f(x(\tau))d\tau$ and $\lambda(t)$ is the {\em dual trajectory} defined by \begin{equation}\label{E:dual-traj} \lambda(t)=\frac{1}{t}[\nabla h(x^0)-\nabla h(x(t))]. \end{equation} Assume that $x(t)$ is bounded. From (\ref{A1}), it follows that $\nabla f$ is constant on $S(P)$, and then it is easy to see that $\nabla f(x(t))\to\nabla f(x^*)$ as $t\to+\infty$ for any $x^*\in S(P)$. Consequently, $c(t)\to\nabla f(x^*)$.
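Before proceeding, here is a minimal numerical sanity check of (\ref{E:dual-traj}) in the simplest setting (an added illustration: entropy kernel (\ref{E:xlogx}), $f(x)=\<c,x\rangle$, and no equality constraints). The explicit solution $x(t)=(x_1^0e^{-c_1t},\ldots,x_n^0e^{-c_nt})$ computed earlier, together with $\nabla h(x)=(\ln x_1,\ldots,\ln x_n)$, makes the dual trajectory constant: $\lambda(t)=c$ for all $t>0$, in agreement with (\ref{E:opt-dual}) when ${\rm Im}\:A^T=\{0\}$.

```python
import math

# Dual trajectory (E:dual-traj) for h(x) = sum_i (x_i log x_i - x_i), grad h = log,
# f(x) = <c, x>, no equality constraints: x(t) = x0 * exp(-c t) componentwise,
# hence lambda(t) = (1/t) * (log x0 - log x(t)) = c for every t > 0.
c  = [0.5, 2.0, 0.0]        # arbitrary sample data
x0 = [0.3, 1.2, 0.7]

def dual_traj(t):
    x_t = [xi * math.exp(-ci * t) for ci, xi in zip(c, x0)]
    return [(math.log(a) - math.log(b)) / t for a, b in zip(x0, x_t)]

for t in (0.5, 1.0, 10.0):
    lam = dual_traj(t)
    assert all(abs(li - ci) < 1e-9 for li, ci in zip(lam, c))
```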
By (\ref{E:dual-traj}) together with \cite[Theorem 26.5]{Roc70}, we have $ x(t)=\nabla h^*(\nabla h(x^0)-t\lambda(t)),$ where the Fenchel conjugate $h^*$ is given by $ h^*(\lambda)=\sum_{i=1}^n \theta^*(\lambda_i). $ Take any solution $\widetilde{x}$ of $A\widetilde{x}=b$. Since $Ax(t)=b$, we have $\widetilde x-\nabla h^*(\nabla h(x^0)-t\lambda(t))\in {\rm Ker}\: A$. On account of (\ref{E:opt-dual}), $\lambda(t)$ is the unique optimal solution of \begin{equation}\label{E:penalty} \lambda(t)\in \mbox{\rm Argmin}\left\{\<\widetilde{x},\lambda\rangle+\frac{1}{t} \sum_{i=1}^n \theta^*(\theta'(x^0_i)-t\lambda_i)\mid \lambda\in c(t)+{\rm Im}\: A^T \right\}. \end{equation} By $(H_1)$(iii), $\theta'$ is increasing on $\RR_{++}$. Set \( \eta=\lim_{s\to +\infty}\theta'(s)\in (-\infty,+\infty]\). Since $\theta^*$ is a Legendre type function, ${\rm int}\:{\rm dom}\: \theta^*={\rm dom}\:\partial\theta^*={\rm Im}\: \partial \theta=(-\infty,\eta)$. From $(\theta^*)'=(\theta')^{-1}$, it follows that $\lim_{u\to-\infty}(\theta^*)'(u)=0$ and $\lim_{u\to\eta^-}(\theta^*)'(u)=+\infty$. Consequently, (\ref{E:penalty}) can be interpreted as a {\em penalty approximation scheme} of the dual problem $(D)$, where the dual positivity constraints are penalized by a separable strictly convex function. Similar schemes have been treated in \cite{ACH97,Com,IuM00}. Consider the additional condition \begin{equation}\label{A3} \hbox{Either } \theta(0)<\infty, \hbox{ or } S(P) \hbox{ is bounded, or $f$ is linear.} \end{equation} As a direct consequence of \cite[Propositions 10 and 11]{IuM00}, we obtain that under {\rm (\ref{A1}), (\ref{A2}), (\ref{A3})} and $(H_1)$, the dual trajectory $\lambda(t)$ is bounded as $t\to+\infty$ and its cluster points belong to $S(D)$. The convergence of $\lambda(t)$ is more difficult to establish.
In fact, under some additional conditions on $\theta^*$ (see \cite[Conditions $(H_0)$-$(H_1)$]{Com} or \cite[Conditions (A7) and (A8)]{IuM00}) it is possible to show that $\lambda(t)$ converges to a particular element of the dual optimal set (the ``$\theta^*$-center'' in the sense of \cite[Definition 5.1]{Com} or the $D_h(\cdot,x^0)$-center as defined in \cite[p.~616]{IuM00}), which is characterized as the unique solution of a {\em nested hierarchy} of optimization problems on the dual optimal set. We will not develop this point here. Let us only mention that for all the examples of section \ref{S:Examples}, $\theta_i^*$ satisfies such additional conditions and consequently: \begin{proposition} Under {\rm (\ref{A1}), (\ref{A2})} and {\rm (\ref{A3})}, for each of the explicit Legendre kernels given in section {\rm \ref{S:Examples}}, $\lambda(t)$ given by {\rm (\ref{E:dual-traj})} converges to a particular dual solution. \end{proposition} \section{Legendre transform coordinates}\label{S:legendre-transform} \subsection{Legendre functions on affine subspaces} The first objective of this section is to slightly generalize the notion of Legendre type function to the case of functions whose domains are contained in an affine subspace of $\mathbb{R}^n$. We begin by noticing that the Legendre type property does not depend on the choice of canonical coordinates. \begin{lemma}\label{L:leg-coord} Let $g\in \Gamma_0(\mathbb{R}^r)$, $r\geq 1$, and let $T:\mathbb{R}^r\to\mathbb{R}^r$ be an affine invertible mapping. Then $g$ is of Legendre type iff $g\circ T$ is of Legendre type. \end{lemma} \begin{proof} The proof is elementary and is left to the reader. \end{proof} From now on, ${\cal A}$ is the affine subspace defined by (\ref{E:affine-space}), whose dimension is $r=n-m$.
\begin{definition}\label{D:glegendre} A function $g\in\Gamma_0({\cal A})$ is said to be of {\em Legendre type} if there exists an affine invertible mapping $T:{\cal A}\to\mathbb{R}^r$ such that $g\circ T^{-1}$ is a Legendre type function in $\Gamma_0(\RR^r)$. \end{definition} By Lemma \ref{L:leg-coord}, the previous definition is consistent. \begin{proposition}\label{P:leg-trans2} Let $h\in\Gamma_0(\mathbb{R}^n)$ be a function of Legendre type with $C={\rm int}\:{\rm dom}\: h$. If ${{\cal F}}=C\cap{\cal A}\neq \emptyset$ then the restriction $h_{|_{\cal A}}$ of $h$ to ${\cal A}$ is of Legendre type and moreover $ {\rm int}_{\cal A} {\rm dom}\: h_{|_{\cal A}}={{\cal F}}$ (${\rm int}_{\cal A} B$ stands for the interior of $B$ in ${\cal A}$ as a topological subspace of $\mathbb{R}^n$). \end{proposition} \begin{proof} From the inclusions ${{\cal F}}\subset {\rm dom}\: h_{|_{\cal A}}\subset\overline{{{\cal F}}}=\overline{C}\cap{\cal A}$ and since ${\rm ri}\:\overline{{{\cal F}}}={{\cal F}}$, we conclude that $ {\rm int}_{\cal A} {\rm dom}\: h_{|_{\cal A}}={{\cal F}}\neq\emptyset$. Let $T:\mathbb{R}^r\to{\cal A}$ be an invertible transformation with $ Tz=Lz+x^0$ for all $z\in\mathbb{R}^r$, where $x^0\in{\cal A}$ and $L:\mathbb{R}^r\to{\cal A}_0$ is a nonsingular linear mapping. Define $k=h_{|_{\cal A}}\circ T$. Clearly, $k\in\Gamma_0(\mathbb{R}^r)$. Let us prove that $k$ is essentially smooth. We have ${\rm dom}\: k=T^{-1}{\rm dom}\: h_{|_{\cal A}}$ and therefore ${\rm int}\:{\rm dom}\: k=T^{-1}{{\cal F}}$. Since $h$ is differentiable on $C$, we conclude that $k$ is differentiable on ${\rm int}\:{\rm dom}\: k$. Now, let $(z^j)\in{\rm int}\:{\rm dom}\: k$ be a sequence that converges to a boundary point $z\in {\rm bd}\:{\rm dom}\: k$. Then, $Tz^j\in{\rm int}_{\cal A}{\rm dom}\: h_{|_{\cal A}}$ and $Tz^j\to Tz\in {\rm bd}_{\cal A}{\rm dom}\: h_{|_{\cal A}}\subset {\rm bd}\:{\rm dom}\: h$. Since $h$ is essentially smooth, $|\nabla h(Tz^j)|\to +\infty$. 
Thus, to prove that $|\nabla k(z^j)|\to +\infty$ it suffices to show that there exists $\lambda>0$ such that $ |\nabla k(z^j)|\geq \lambda |\nabla h(Tz^j)|$ for all $j$ large enough. Note that $ \nabla k (z^j)=\nabla[h_{|_{\cal A}}\circ T](z^j)=L^*\nabla h_{|_{\cal A}}(Tz^j)=L^*\Pi_{{\cal A}_0}\nabla h(Tz^j),$ where $L^*:{\cal A}_0\to\mathbb{R}^r$ is defined by $ \langle z,L^*x\rangle=\langle Lz,x\rangle$, $\forall (z,x)\in \mathbb{R}^r\times {\cal A}_0.$ Of course, $L^*$ is linear with ${\rm Ker}\: L^*=\{0\}$. Therefore $ \frac{\nabla k(z^j)}{|\nabla h(Tz^j)|}=L^*\Pi_{{\cal A}_0}\frac{\nabla h(Tz^j)}{|\nabla h(Tz^j)|}.$ Let $\omega$ denote the nonempty and compact set of cluster points of the normalized sequence $\nabla h(Tz^j)/|\nabla h(Tz^j)|$, $j\in\mathbb{N}$. By Lemma \ref{L:normal}, we have that $ \omega\subset\{\nu \in N_{\overline{C}}(Tz)\: | \: |\nu|=1\},$ and consequently Lemma \ref{L:lem1} yields $ \Pi_{{\cal A}_0}\omega\cap\{0\}=\emptyset.$ By compactness of $\omega$, we obtain $ \liminf_{j\to+\infty}|\Pi_{{\cal A}_0}\nabla h(Tz^j)|/|\nabla h(Tz^j)|>0,$ which proves our claim. Finally, the strict convexity of $k$ on ${\rm dom}\: \partial k={\rm int}\:{\rm dom}\: k=T^{-1}{{\cal F}}$ is a direct consequence of the strict convexity of $h$ in ${{\cal F}}$. \end{proof} \subsection{Legendre transform coordinates} A prominent fact of the theory of Legendre functions is that $h\in\Gamma_0(\mathbb{R}^n)$ is of Legendre type iff its Fenchel conjugate $h^*$ is of Legendre type \cite[Theorem 26.5]{Roc70}, and $\nabla h:{\rm int}\: {\rm dom}\: h \to {\rm int}\:{\rm dom}\: h^*$ is a bijection with $(\nabla h)^{-1}=\nabla h^*$. In the case of Legendre functions on affine subspaces, we have the following generalization: \begin{proposition}\label{P:leg-trans1} If $g\in \Gamma_0({\cal A})$ is of Legendre type in the sense of Definition {\rm \ref{D:glegendre}}, then $\nabla g({\rm int}_{\cal A}{\rm dom}\: g)$ is a nonempty, open and convex subset of ${\cal A}_0$.
In addition, $\nabla g$ is a one-to-one continuous mapping from ${\rm int}_{\cal A}{\rm dom}\: g$ onto its image. \end{proposition} \begin{proof} Let $Tx=Lx+z_0$ with $L:{\cal A}_0\to\mathbb{R}^r$ being a linear invertible mapping and $z_0\in\mathbb{R}^r$. Set $k=g\circ T^{-1}\in\Gamma_0(\mathbb{R}^r)$, which is of Legendre type. We have ${\rm dom}\: k=T{\rm dom}\: g$. Define $L^*:\mathbb{R}^r\to {\cal A}_0$ by $ \langle L^*z,x\rangle=\langle z,Lx\rangle$, $\forall (z,x)\in \mathbb{R}^r\times{\cal A}_0$. We have that $ \nabla g(x)=\nabla[k\circ T](x)=L^*\nabla k(Tx)$ for all $x\in{\rm int}_\A{\rm dom}\: g$. Therefore $\nabla g({\rm int}_\A{\rm dom}\: g) =L^*\nabla k(T{\rm int}_\A{\rm dom}\: g)=L^*\nabla k({\rm int}_{\mathbb{R}^r}{\rm dom}\: k)=L^*{\rm int}_{\mathbb{R}^r}{\rm dom}\: k^*.$ Since ${\rm int}_{\mathbb{R}^r}{\rm dom}\: k^*$ is a nonempty, open and convex subset of $\mathbb{R}^r$ and $L^*$ is an invertible linear mapping, then $L^*{\rm int}_{\mathbb{R}^r}{\rm dom}\: k^*$ is an open and nonempty subset of ${\cal A}_0$. Moreover, by \cite[Theorem 6.6]{Roc70}, we have $L^*{\rm int}_{\mathbb{R}^r}{\rm dom}\: k^*={\rm ri}\: L^*{\rm dom}\: k^*.$ Consequently, $ \nabla g({\rm int}_\A{\rm dom}\: g)={\rm ri}\: L^*{\rm dom}\: k^*={\rm int}_{{\cal A}_0} L^*{\rm dom}\: k^*\neq \emptyset.$ Finally, since $\nabla k:{\rm int}_{\mathbb{R}^r}{\rm dom}\: k\to{\rm int}_{\mathbb{R}^r}{\rm dom}\: k^*$ is one-to-one and continuous, the same result holds for $\nabla g=L^*\circ\nabla k\circ T$ on ${\rm int}_\A{\rm dom}\: g$. \end{proof} \vspace{2ex} In the sequel, we assume that $h$ satisfies the basic condition $(H_0)$ and ${{\cal F}}=C\cap{\cal A}\neq\emptyset$. The {\em Legendre transform coordinates mapping} on ${{\cal F}}$ associated with $h$ is defined by \begin{equation}\label{E:LTC} \begin{array}{cccl} \phi_h:&{{\cal F}}&\to&{{\cal F}}^*=\phi_h({{\cal F}})\\ &x&\mapsto&\phi_h(x)=\nabla (h_{|_{\cal A}})(x)=\Pi_{{\cal A}_0}\nabla h(x).
\end{array} \end{equation} This definition recovers the Legendre transform coordinates introduced by Bayer and Lagarias in \cite{BaL89} for the particular case of the log-barrier on a polyhedral set. \begin{theorem}\label{T:dphi} Under the above definitions and assumptions, ${{\cal F}}^*$ is a convex, (relatively) open and nonempty subset of ${\cal A}_0$, $\phi_h$ is a ${\cal C}^1$ diffeomorphism from ${{\cal F}}$ to ${{\cal F}}^*$, and for all $x\in{{\cal F}}$, $d\phi_h(x)=\Pi_{{\cal A}_0}H(x)$ and $d\phi_h(x)^{-1}=\sqrt{H(x)^{-1}}\Pi_{\sqrt{H(x)}{\cal A}_0}\sqrt{H(x)^{-1}}$, where $H(x)=\nabla^2h(x)$. \end{theorem} \begin{proof} By Propositions \ref{P:leg-trans2} and \ref{P:leg-trans1}, ${{\cal F}}^*$ is a convex, open and nonempty subset of ${\cal A}_0$ and $\phi_h$ is a continuous bijection. By $(H_0)$(ii), $\phi_h$ is of class ${\cal C}^1$ on ${{\cal F}}$ and we have for all $x\in {{\cal F}}$, $d\phi_h(x)=\Pi_{{\cal A}_0}\nabla ^2 h(x)=\Pi_{{\cal A}_0} H(x).$ Let $v\in{\cal A}_0$ be such that $d\phi_h(x)v=0$. It follows that $H(x)v\in{\cal A}_0^\perp$ and in particular $\langle H(x)v,v\rangle=0$. Hence, $v=0$ thanks to $(H_0)$(iii). The implicit function theorem then implies that $\phi_h$ is a ${\cal C}^1$ diffeomorphism. The formula concerning $d\phi_h(x)^{-1}$ is a direct consequence of the next lemma. \begin{lemma} Define the linear operators $L_i:\mathbb{R}^n\to\mathbb{R}^n$ by $L_1=\Pi_{{\cal A}_0} H(x)$ and $L_2=\sqrt{H(x)^{-1}}\Pi_{\sqrt{H(x)}{\cal A}_0}\sqrt{H(x)^{-1}}$. Then $L_2L_1v=v$ for all $v\in{\cal A}_0$. \end{lemma} This follows by the same method as in \cite[p.~545]{BaL89}; we leave the proof to the reader. \end{proof} As in the classical theory of Legendre type functions, the inverse of $\phi_h$ can be expressed in terms of Fenchel conjugates. For that purpose, we notice that inverting $\phi_h$ amounts to solving a minimization problem.
Indeed, given $y\in{\cal A}_0$, the problem of finding $x\in{{\cal F}}$ such that $y=\Pi_{{\cal A}_0}\nabla h(x)$ is equivalent to $ x=\mbox{\rm Argmin}\{h(z)-\<y,z\rangle|z\in{\cal A}\}$, or equivalently \begin{equation}\label{E:inv-min} x=\mbox{\rm Argmin}\{(h+\delta_{{\cal A}})(z)-\<y,z\rangle\}, \end{equation} where $\delta_{{\cal A}}$ is the {\em indicator} of ${\cal A}$, i.e. $\delta_{{\cal A}}(z)= 0$ if $z\in {\cal A}$ and $+\infty$ otherwise. Let us recall the definition of {\em epigraphical sum} of two functions $g_1,g_2\in\Gamma_0(\mathbb{R}^n)$, which is given by $ \left(g_1\square g_2\right)(y)=\inf\{ g_1(u)+g_2(v)|u+v=y\}$, $\forall y\in\mathbb{R}^n.$ We have $g_1\square g_2\in\Gamma_0(\mathbb{R}^n)$ and if $g_1$ and $g_2$ satisfy ${\rm ri}\:{\rm dom}\: g_1\cap {\rm ri}\:{\rm dom}\: g_2\neq \emptyset$ then $(g_1+g_2)^*=g_1^*\square g_2^*$ (see \cite{Roc70}). \begin{proposition}\label{P:F*} We have that $\phi_h^{-1}:{{\cal F}}^*\to{{\cal F}}$ is given by $ \phi_h^{-1}(y)=\nabla[h^*\square(\delta_{{\cal A}_0^\perp}+\<\cdot,\widetilde x\rangle)](y),$ for any $\widetilde x\in{\cal A}$, and moreover \( {{\cal F}}^*=\Pi_{{\cal A}_0}{\rm int}\:{\rm dom}\: h^*\). \end{proposition} \begin{proof} The optimality condition for (\ref{E:inv-min}) yields $y\in\partial(h+\delta_{{\cal A}})(x).$ Thus, $x\in\partial (h+\delta_{\cal A})^*(y)$. From ${{\cal F}}\neq\emptyset$, we conclude that the function $g\in\Gamma_0(\mathbb{R}^n)$ defined by $g=(h+\delta_{\cal A})^*$ satisfies $g =h^*\square\delta_{\cal A}^*=h^*\square (\delta_{{\cal A}_0^\perp}+\<\cdot,\widetilde x\rangle)$ with $\widetilde x\in{\cal A}$. Moreover, by \cite[Corollary 26.3.2]{Roc70}, $g$ is essentially smooth and we deduce that indeed $x=\nabla g(y)$. Since $g$ is essentially smooth, ${\rm dom}\:\partial g={\rm int}\:{\rm dom}\: g$. 
By definition of epigraphical sum, $ g(y)=\inf\{h^*(u)+\delta_{{\cal A}_0^\perp}(v)+\<v,\widetilde x\rangle |u+v=y\},$ and consequently we have that $y\in {\rm dom}\: g$ iff $y\in {\rm dom}\: h^*+ {\cal A}_0^\perp$. Hence, ${\rm int}\:{\rm dom}\: g={\rm int}\:{\rm dom}\: h^* + {\cal A}_0^\perp$ (see for instance \cite[Corollary 6.6.2]{Roc70}). Recalling that ${{\cal F}}^*$ is a relatively open subset of ${\cal A}_0$, we deduce that ${{\cal F}}^*=\Pi_{{\cal A}_0}{\rm dom}\:\partial g=\Pi_{{\cal A}_0}{\rm int}\:{\rm dom}\: h^*$. \end{proof} \subsection{Linear problems in Legendre transform coordinates} \subsubsection{Polyhedral sets in Legendre transform coordinates} A first interest of Legendre transform coordinates is that they transform linear constraints into positive cones. \begin{proposition}\label{P:domh*} Assume that $ C=\{x\in\mathbb{R}^n | Bx>d\}$, where $B$ is a $p\times n$ full rank matrix, with $p\geq n$. Suppose also that $h$ is of the form {\rm (\ref{E:defhLP})} with $\theta$ satisfying $(H_1)$, and let $\eta=\lim_{s\to+\infty}\theta'(s)\in(-\infty,+\infty]$. If $\eta<+\infty$ then \( \overline{{\rm dom}\: h^*}=\{y\in\RR^n\mid \exists\lambda\in\RR^p,\; y+B^T\lambda=0,\;\lambda_i\geq -\eta\}, \) and ${\rm dom}\: h^*=\mathbb{R}^n$ when $\eta=+\infty$. \end{proposition} \begin{proof} By \cite[Theorem 11.5]{RoW98}, $\overline{{\rm dom}\: h^*}=\{y\in\RR^n\mid \<y,d\rangle\leq h^\infty(d) \hbox{ for all }d\in\RR^n\}$, where $h^\infty$ is the {\em recession} function, also known as {\em horizon} function, of $h$. The recession function is defined by $h^\infty(d)=\lim_{t\to+\infty}\frac{1}{t}[h(\bar x+td)-h(\bar x)], \; d\in\mathbb{R}^n$, where $\bar x\in{\rm dom}\: h$; this limit does not depend on $\bar x$ and possibly $h^\infty(d)=+\infty$ (see also \cite{Roc70}). In this case, it is easy to verify that $ h^\infty(d)=\sum_{i=1}^p\theta^\infty(\<B_i,d\rangle).$ Clearly, $\theta^\infty(-1)=+\infty$ and $\theta^\infty(1)=\lim_{s\to+\infty}\theta'(s)=\eta$.
In particular, if $\eta=+\infty$ then ${\rm dom}\: h^*=\mathbb{R}^n$. If $\eta<+\infty$ then $y\in\overline{{\rm dom}\: h^*}$ iff for all $d\in\mathbb{R}^n$ such that $Bd\geq 0$, $\<y,d\rangle\leq h^\infty(d)=\sum_{i=1}^p\eta\<B_i,d\rangle$, that is $\<y-\eta B^Te,d\rangle\leq 0 $ with $e=(1,\cdots,1)$. Thus, by the Farkas lemma, $y\in\overline{{\rm dom}\: h^*}$ iff $\exists \mu\geq 0$, $y-\eta B^Te+B^T\mu=0$. \end{proof} As a direct consequence of Propositions \ref{P:F*} and \ref{P:domh*}: \begin{corollary} Under the assumptions of Proposition {\rm \ref{P:domh*}}, if $\eta=0$ then ${{\cal F}}^*$ is a positive convex cone and if $\eta=+\infty$ then ${{\cal F}}^*={\cal A}_0$. \end{corollary} \subsubsection{$(H\mbox{-}SD)$-trajectories in Legendre transform coordinates} In the sequel, we assume that $ f(x)=\langle c,x\rangle $ for some $c\in\mathbb{R}^n$. As another striking application of Legendre transform coordinates, we prove now that the trajectories of $(H\mbox{-}SD)$ may be seen as straight lines in ${{\cal F}}^*$. Recall that the {\em push forward} vector field of $\nabla_{_H} f_{|_{{\cal F}}}$ by $\phi_h$ is defined for every \(y\in{{\cal F}}^* \) by \([(\phi_h) _*\nabla_{_H} f_{|_{{{\cal F}}}} ]\:(y)=d\phi_h(\phi_h^{-1}(y))\nabla_{_H} f_{|_{{\cal F}}}(\phi_h^{-1}(y))\). \begin{proposition}\label{P:pushforward} For all $y\in{{\cal F}}^*$, $ [(\phi_h) _*\nabla_{_H} f_{|_{{\cal F}}} ]\:(y)=\Pi_{{\cal A}_0}c .$ \end{proposition} \begin{proof} Let $y\in{{\cal F}}^*$. Setting $x=\phi_h^{-1}(y)$, by Theorem \ref{T:dphi} we get $[(\phi_h) _*\nabla_{_H} f_{|_{{{\cal F}}}}] \:(y)=d\phi_h(x)\nabla_{_H} f_{|_{{\cal F}}}(x)= \Pi_{{\cal A}_0}H(x)H(x)^{-1}[I-A^T(AH(x)^{-1}A^T)^{-1}AH(x)^{-1}]c= \Pi_{{\cal A}_0}c-\Pi_{{\cal A}_0}A^Tz,$ where $ z=[(AH(x)^{-1}A^T)^{-1}AH(x)^{-1}]c.$ Since ${\rm Im}\: A^T={\cal A}_0^\perp$, the conclusion follows. 
\end{proof} Next, we give two optimality characterizations of the orbits of $(H\mbox{-}SD)$, thus extending to the general case the results of \cite{BaL89} for the log-metric. \subsubsection{Geodesic curves} First, we claim that the orbits of $(H\mbox{-}SD)$ can be regarded as geodesic curves with respect to some appropriate metric on ${{\cal F}}$. To this end, we endow ${{\cal F}}^*=\phi_h({{\cal F}})$ with the Euclidean metric, which allows us to define on ${{\cal F}}$ the metric \begin{equation}\label{E:H2} (\cdot,\cdot)^{H^2}=\left( \phi_h \right)^* \<\cdot,\cdot\rangle, \end{equation} that is, $\forall (x,u,v) \in {{\cal F}}\times\RR^n\times\RR^n$, $(u,v)^{H^2} _x =\< d\phi_h (x)u,d\phi_h (x)v\rangle=\< \Pi_{{\cal A}_0}H(x)u,\Pi_{{\cal A}_0}H(x)v\rangle.$ For each initial condition $x^0\in {{\cal F}}$, and for every $c\in\RR^n$ we set \begin{equation} \label{geovelo} v=d\phi_h (x^0)^{-1}\Pi_{{\cal A}_0}c=\sqrt{H(x^0)^{-1}}\Pi_{\sqrt{H(x^0)}{\cal A}_0}\sqrt{H(x^0)^{-1}}\Pi_{{\cal A}_0}c. \end{equation} \begin{theorem} Let $(x^0,c)\in {{\cal F}}\times\RR^n$, set $f(x)=\<c,x\rangle, \: \forall x\in C$ and define $v$ as in {\rm (\ref{geovelo})}. If ${{\cal F}}$ is endowed with the metric $(\cdot,\cdot)^{H^2}$ given by {\rm (\ref{E:H2})}, then the solution $x(t)$ of $(H\mbox{-}SD)$ is the unique geodesic passing through $x^0$ with velocity $v$. \end{theorem} \begin{proof} Since $({{\cal F}},(\cdot,\cdot)^{H^2})$ is isometric to the Euclidean Riemannian manifold ${{\cal F}}^*$, the geodesic joining two points of ${{\cal F}}$ exists and is unique. Let us denote by $\gamma:J\subset\RR\to{{\cal F}}$ the geodesic passing through $x^0$ with velocity $v$. By definition of $(\cdot,\cdot)^{H^2}$, $\phi_h(\gamma)$ is a geodesic in ${{\cal F}}^*$. Hence $\phi_h(\gamma(t))=\phi_h(x^0)+td\phi_h(x^0)v,$ where $t\in J$. In view of (\ref{geovelo}), this can be rewritten as $\phi_h(\gamma(t))=\phi_h(x^0)+t\Pi_{{\cal A}_0}c$.
By Proposition \ref{P:pushforward} we know that $\left( \phi_h \right)_* \nabla_{_H} f_{|{{\cal F}}}=\Pi_{{\cal A}_0}c$, and therefore $\phi_h^{-1}(\phi_h(\gamma))=\gamma$ is exactly the solution of $(H\mbox{-}SD)$.\end{proof} \begin{remark} {\rm A Riemannian manifold is called {\em geodesically complete} if the maximal interval of definition of every geodesic is $\mathbb{R}$. When $\Pi_{{\cal A}_0}c\neq 0$ and ${{\cal F}}^*$ is not an affine subspace of $\RR^n$, the Riemannian manifold $({{\cal F}},(\cdot,\cdot)^{H^2})$ is not complete in this sense.} \end{remark} \subsubsection{Lagrange equations} Following the ideas of \cite{BaL89}, we describe the orbits of $(H\mbox{-}SD)$ as orthogonal projections on \( {\cal A} \) of \( \dot{q} \)-trajectories of a specific {\em Lagrangian system}. Recall that given a real-valued mapping \( {\cal L}(q,\dot q) \) called the Lagrangian, where \( q=(q_{1},\dots ,q_{n}) \) and \( \dot q=(\dot q_1,\dots ,\dot q_{n}) \), the associated Lagrange equations of motion are the following \begin{equation}\label{LAG} \frac{d}{dt}\frac{\partial {\cal L}}{\partial \dot q_{i}}=\frac{\partial {\cal L}}{\partial q_{i}}, \quad \frac{d}{dt}q_{i}=\dot q_{i},\quad \forall i=1\dots n. \end{equation} Their solutions are piecewise \( C^{1} \) paths \( \gamma :t\longmapsto (q(t),\dot q(t)) \), defined for \( t\in J\subset \RR \), that satisfy (\ref{LAG}) and appear as extremals of the functional \( \widehat{{\cal L}}(\gamma )=\int _{J}{\cal L}(q(t),\dot q(t))dt\). Notice that in general the solutions are not unique, in the sense that they do not depend only on the initial condition \( \gamma (0) \). Let us introduce the Lagrangian ${\cal L}:\mathbb{R}^n\times C\to \mathbb{R}$ defined by \begin{equation}\label{E:LAG} {\cal L} (q,\dot q)=\<\Pi _{{\cal A}_{0}}c,q\rangle-h(\Pi _{{\cal A}}\dot q), \end{equation} where \( \Pi _{{\cal A}}\) is the orthogonal projection onto \( {\cal A} \), i.e.
\(\Pi _{{\cal A}}x=\widetilde x+\Pi _{{\cal A}_{0}}(x-\widetilde x) \) for any \( \widetilde x\in {\cal A} \). \begin{theorem}\label{T:LAG} For any solution \( \gamma (t)=(q(t),\dot q(t)) \) of the Lagrangian dynamical system {\rm (\ref{LAG})} with Lagrangian given by {\rm (\ref{E:LAG})}, the projection \( x(t)=\Pi _{{\cal A}}\dot q(t) \) is the solution of (H-SD) with initial condition \( x^0=\Pi _{{\cal A}}\dot{q}(0). \) \end{theorem} \begin{proof} It is easy to verify that \( \nabla (h\circ \Pi _{{\cal A}})(x)=\Pi _{{\cal A}_{0}}\nabla h(\Pi _{{\cal A}}x) \) for any \( x\in \RR^{n}. \) Given a solution \( \gamma (t)=(q(t),\dot{q}(t)) \) of (\ref{LAG}) defined on \( J \), we set \( p(t)=(p_{1}(t),\dots ,p_{n}(t))=\left(\frac{\partial {\cal L}}{\partial \dot{q}_{1}}(\gamma (t)), \dots ,\frac{\partial {\cal L}}{\partial \dot{q}_{n}}(\gamma (t))\right)\). We have \( p(t)=\nabla (h\circ \Pi _{{\cal A}})(\dot{q}(t))=\Pi _{{\cal A}_{0}}\nabla h(\Pi _{{\cal A}}\dot{q}(t))=\phi _{h}(\Pi _{{\cal A}}\dot{q}(t)).\) The equations of motion become \( \frac{d}{dt}p(t)=\Pi _{{\cal A}_{0}}c,\) that is, \( \frac{d}{dt}\phi _{h}(\Pi _{{\cal A}}\dot{q}(t))=\Pi _{{\cal A}_{0}}c \). Since \( \phi _{h}:{{\cal F}}\to{{\cal F}}^* \) is a diffeomorphism, the latter means, according to Proposition \ref{P:pushforward}, that \( \Pi _{{\cal A}}\dot{q}(t) \) is a trajectory for the vector field \( \nabla _{_H}f_{|_{{\cal F}}}. \) Notice that, \( C \) being convex, as soon as \( \dot{q}(0)\in C, \) \( \Pi _{{\cal A}}\dot{q}(0)\in C\cap {\cal A}={{\cal F}}, \) and the above forces \( \Pi _{{\cal A}}\dot{q}(t) \) to stay in \( {{\cal F}} \) for any \( t\in J. \) \end{proof} \subsubsection{Completely integrable Hamiltonian systems} In the sequel, all mappings are supposed to be at least of class \( {\cal C}^{2} \). Let us first recall the notion of Hamiltonian system.
Given an integer \(r\geq 1\) and a real-valued mapping \( {\cal H}(q,p) \) on \( \RR^{2r}\) with coordinates \((q,p)=(q_{1},\dots ,q_{r},p_{1},\dots ,p_{r}) \), the {\it Hamiltonian vector field \( X_{{\cal H}} \) associated with \( {\cal H} \)} is defined by \( X_{{\cal H}}=\sum _{i=1}^r \frac{\partial {\cal H}}{\partial p_{i}}\frac{\partial }{\partial q_{i}}-\frac{\partial {\cal H}}{\partial q_{i}}\frac{\partial }{\partial p_{i}}.\) The trajectories of the dynamical system induced by \( X_{{\cal H}} \) are the solutions to \begin{equation} \left\{ \begin{array}{l} \dot{p}_{i}(t)=-\frac{\partial} {\partial q_{i}}{\cal H}(q(t),p(t)),\:i=1,\dots, r,\\ \dot{q}_{i}(t)=\frac{\partial} {\partial p_{i}}{\cal H}(q(t),p(t)),\:i=1,\dots, r. \end{array}\label{Ham}\right. \end{equation} Following a standard procedure, Lagrangian functions \( {\cal L}(q,\dot{q}) \) are associated with Hamiltonian systems by means of the so-called Legendre transform \[\Phi :\left\{ \begin{array}{ccl} \RR^{2r}& \longrightarrow &\RR^{2r}\\ (q,\dot{q}) &\longmapsto & (q,\frac{\partial {\cal L}}{\partial \dot{q}}(q,\dot{q})) \end{array}\right.\] In fact, when \( \Phi \) is a diffeomorphism, the Hamiltonian function \( {\cal H} \) associated with the Lagrangian \( {\cal L} \) is defined on \( \Phi(\RR^{2r}) \) by \( {\cal H}(q,p) = \sum _{i=1}^r p_{i}\dot{q}_{i}-{\cal L}(q,\dot{q}) = \<p,\psi ^{-1}(q,p)\rangle-{\cal L}(q,\psi ^{-1}(q,p)),\) where \( (q,\psi ^{-1}(q,p)):=\Phi ^{-1}(q,p) \). With these definitions, \( \Phi \) sends the trajectories of the corresponding Lagrangian system onto the trajectories of the Hamiltonian system (\ref{Ham}). In general, the Lagrangian (\ref{E:LAG}) does not lead to an invertible \( \Phi \) on \(\RR^{2n}\). However, we are only interested in the projections \( \Pi _{{\cal A}}\dot{q} \) of the trajectories, which, according to Theorem \ref{T:LAG}, take their values in \( {{\cal F}} \).
Moreover, notice that for any differentiable path \( t\mapsto q^{\perp }(t) \) lying in \( {\cal A}_{0}^{\perp } \), \( t\mapsto (q(t),\dot{q}(t)) \) is a solution of (\ref{LAG}) iff \( t\mapsto (q(t)+q^{\perp }(t),\dot{q}(t)+\dot{q}^{\perp }(t)) \) is. This justifies the idea of restricting \( {\cal L} \) to \( {\cal A}_{0}\times \Pi _{{\cal A}_{0}}{{\cal F}} \). Hence, from now on, \( {\cal L} \) denotes the function: \begin{equation}\label{E:Lag2} {\cal L}: \left\{ \begin{array}{ccl} {\cal A}_{0}\times \Pi _{{\cal A}_{0}}{{\cal F}}& \longrightarrow & \RR\\ (q,\dot{q})& \longmapsto & {\cal L}(q,\dot{q}). \end{array}\right. \end{equation} Taking \( (q_{1},\dots, q_{r}) \), with $r=n-m$, a linear system of coordinates induced by a Euclidean orthonormal basis for \( {\cal A}_{0} \), we easily see that this ``new'' Lagrangian has trajectories \( (q(t),\dot{q}(t)) \) lying in \( {\cal A}_{0}\times \Pi _{{\cal A}_{0}}{{\cal F}} \), whose projections \( \Pi _{{\cal A}}\dot{q}(t) \) are exactly the \((H\mbox{-}SD)\) trajectories. Moreover, an easy computation yields \[ \frac{\partial {\cal L}}{\partial \dot{q}}(q,\dot{q})=\Pi _{{\cal A}_{0}}\nabla h(\Pi _{{\cal A}_{0}}\dot{q})=[\phi _{h}\circ \Pi _{{\cal A}}](\dot{q}),\] which is a diffeomorphism by Theorem \ref{T:dphi}. The Legendre transform is then given by \[\Phi : \left\{ \begin{array}{ccl} {\cal A}_{0}\times \Pi _{{\cal A}_{0}}{{\cal F}} & \longrightarrow & {\cal A}_{0}\times {{\cal F}}^{*}\\ (q,\dot{q})& \longmapsto & (q,[\phi _{h}\circ \Pi _{{\cal A}}](\dot{q})), \end{array}\right.\] and therefore, \( {\cal L} \) is converted into the Hamiltonian system associated with \begin{equation} {\cal H}:\left\{ \begin{array}{ccl} {\cal A}_{0}\times {{\cal F}}^{*}& \longrightarrow & \RR\\ (q,p)&\longmapsto & \<p,[\phi _{h}\circ \Pi _{{\cal A}}]^{-1}(p)\rangle-{\cal L}(q,[\phi _{h}\circ \Pi _{{\cal A}}]^{-1}(p)).
\end{array}\right.\label{Ham2} \end{equation} Let us now introduce the concept of completely integrable Hamiltonian system. The Poisson bracket of two real-valued functions \( f_{1},f_{2} \) on \( \RR^{2r}\) is given by \( \{f_{1},f_{2}\}=\sum _{i=1}^r\frac{\partial f_{1}}{\partial p_{i}} \frac{\partial f_{2}}{\partial q_{i}}-\frac{\partial f_{1}} {\partial q_{i}}\frac{\partial f_{2}}{\partial p_{i}}.\) Notice that, from the definitions, we have \( \{f_{1},f_{2}\}=X_{f_{1}}(f_{2}) \) and \( X_{\{f_{1},f_{2}\}}=[X_{f_{1}},X_{f_{2}}] \), where \([\cdot,\cdot]\) is the standard {\em bracket product} of vector fields \cite{Lan95}. Now, the system (\ref{Ham}) is called {\it completely integrable} if there exist $r$ functions \( f_{1},\dots ,f_{r} \) with \( f_{1}={\cal H} \), satisfying \[ \left\{ \begin{array}{l} \{f_{i},f_{j}\}=0, \quad \forall i,j=1,\dots, r.\\ df_{1}(x),\dots,df_{r}(x) \textrm{ are linearly independent at any }x\in \RR^{2r}. \end{array}\right. \] As a motivation for completely integrable systems, we will just point out the following: the functions \( f_{i} \) are called {\it integrals of motion} because \( X_{{\cal H}}(f_{i})=\{{\cal H},f_{i}\}=0 \), which means that any trajectory of \( X_{{\cal H}} \) lies on the level sets of each \( f_{i} \) (the same holds for all \( X_{f_{j}} \)). Also, the trajectory passing through \( (q_{0},p_{0}) \) lies in the set \( \bigcap _{i=1\dots r}f^{-1}_{i}(\{f_{i}(q_{0},p_{0})\}) \). Besides, \( [X_{f_{i}},X_{f_{j}}]=0 \) implies that we can find, at least locally, coordinates \( (x_{1},\dots ,x_{r}) \) on this set such that \( X_{{\cal H} }=\frac{\partial }{\partial x_{1}},X_{f_{2}}=\frac{\partial }{\partial x_{2}}, \dots ,X_{f_{r}}=\frac{\partial } {\partial x_{r}}, \) that is, in these coordinates, the trajectories of \( X_{f_i} \) are straight lines.
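The involution condition can be checked numerically in a toy case. The sketch below is our own illustration, not part of the paper: it approximates the Poisson bracket by central finite differences and verifies that, for the planar harmonic oscillator, the Hamiltonian and the angular momentum $L=q_1p_2-q_2p_1$ are in involution.

```python
import numpy as np

# Our own illustration, not from the paper: Poisson bracket via central
# finite differences, with phase-space point z = (q_1..q_r, p_1..p_r).
def grad(f, z, eps=1e-6):
    g = np.zeros_like(z)
    for i in range(z.size):
        e = np.zeros_like(z)
        e[i] = eps
        g[i] = (f(z + e) - f(z - e)) / (2.0 * eps)
    return g

def poisson(f1, f2, z):
    # {f1,f2} = sum_i df1/dp_i * df2/dq_i - df1/dq_i * df2/dp_i
    r = z.size // 2
    g1, g2 = grad(f1, z), grad(f2, z)
    return g1[r:] @ g2[:r] - g1[:r] @ g2[r:]

# Planar harmonic oscillator (r = 2): H and the angular momentum
# L = q1*p2 - q2*p1 are two independent integrals in involution.
H = lambda z: 0.5 * (z[0]**2 + z[1]**2 + z[2]**2 + z[3]**2)
L = lambda z: z[0] * z[3] - z[1] * z[2]
z0 = np.array([0.3, -1.1, 0.8, 0.4])
bracket = poisson(H, L, z0)
print(abs(bracket))  # zero up to rounding: {H, L} = 0
```

Since both functions are quadratic, the central differences are exact up to rounding, and the printed bracket vanishes to machine precision.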
\begin{theorem} Suppose $\Pi_{{\cal A}_0}c\neq 0.$ The Lagrangian system on \( {\cal A}_{0}\times \Pi _{{\cal A}_{0}}{{\cal F}} \) associated with {\rm (\ref{E:LAG}), (\ref{E:Lag2})} gives rise, by the Legendre transform, to a completely integrable Hamiltonian system on \( {\cal A}_{0}\times {{\cal F}}^{*} \) with Hamiltonian given by {\rm (\ref{Ham2})}. \end{theorem} \begin{proof} It only remains to prove the complete integrability of the system. To this end, we adapt the proof of \cite[Theorem II.12.2]{BaL89} to our abstract framework. Take the integrals of motion to be $f_{1}={\cal H}$, $f_{i}(q,p)=\<c_{i},p\rangle,\:i=2,\dots, r$ where $r=n-m$ and \(\{ \Pi _{{\cal A}_{0}}c,\:c_{2},\dots ,c_{r} \}\) is chosen to be an orthonormal basis of \( {\cal A}_{0} \). For any \( i,j\in\{2,\dots, r\}, \) \( \{f_{i},f_{j}\} \) is zero since \( f_{i} \) and \( f_{j} \) only depend on \( p \). Let \( \phi _{h,l}^{-1}(q,p) \) (resp. \( (\Pi _{{\cal A}_{0}}c)_{l} \)) stand for the \( l \)-th component of \( \phi_h^{-1} (q,p) \) (resp. the \( l \)-th component of \( \Pi _{{\cal A}_{0}}c \)) and take some $k\in\{1,\dots,r\}$. Since \begin{eqnarray*} \frac{\partial {\cal H}}{\partial q_{k}}(q,p)&=&\frac{\partial (\sum _{l=1}^r p_{l}\phi ^{-1}_{h,l})}{\partial q_{k}}(q,p)-\frac{\partial ({\cal L}\circ \Phi ^{-1})}{\partial q_{k}}(q,p)\\ &=&\sum _{l=1}^r p_{l}\frac{\partial \phi _{h,l}^{-1}} {\partial q_{k}}(q,p)-\frac{\partial {\cal L}}{\partial q_{k}}(q,\phi_h ^{-1}(q,p)) -\sum _{l=1}^r\frac{\partial {\cal L}}{\partial \dot{q}_{l}} (q,\phi_h ^{-1}(q,p))\frac{\partial \phi _{h,l}^{-1}}{\partial q_{k}}(q,p)\\ &=&-(\Pi _{{\cal A}_{0}}c)_{k} \end{eqnarray*} we deduce that for all $i\in \{2,\dots,r\}$, \(\{{\cal H},f_{i}\} = \sum _{k=1}^r-\frac{\partial f_{i}}{\partial p_{k}}\frac{ \partial {\cal H}}{\partial q_{k}}= \<\Pi _{{\cal A}_{0}}c,c_{i}\rangle=0\).
The second condition for complete integrability is satisfied too, as the \(r\times 2r \) matrix \[ \left( [\frac{\partial f_{i}}{\partial q_{1}},\dots,\frac{\partial f_{i}}{\partial q_{r}}, \frac{\partial f_{i}}{\partial p_{1}},\dots,\frac{\partial f_{i}}{\partial p_{r}}] \right)_{i=1,\dots,r}=\left( \begin{array}{cc} -\Pi _{{\cal A}_{0}}c^T & \star \\ 0 & \begin{array}{c} c_{2}^T\\ \vdots \\ c_{r}^T \end{array} \end{array}\right) \] has full rank, since \(\{ \Pi _{{\cal A}_{0}}c,\:c_{2},\dots ,c_{r} \}\) is an orthonormal basis of \( {\cal A}_{0} \). \end{proof}
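To close this subsection, here is a small numerical sketch (ours, not part of the paper) of the straight-line behaviour of Proposition \ref{P:pushforward} in Legendre transform coordinates. It uses the entropy barrier $h(x)=\sum_i x_i\log x_i$ on the simplex and integrates $\dot x=-\nabla_{_H} f_{|{\cal F}}(x)$; the paper's sign convention for $(H\mbox{-}SD)$ may differ, which would only reverse the direction of the line in ${\cal F}^*$.

```python
import numpy as np

# Our own sketch: entropy barrier h(x) = sum x_i log x_i on the simplex
# F = {x > 0 : sum x_i = 1}, linear objective f(x) = <c, x>, and explicit
# Euler on xdot = -grad_H f (sign convention assumed, see lead-in).
n = 3
A = np.ones((1, n))
P = np.eye(n) - A.T @ np.linalg.inv(A @ A.T) @ A   # orthogonal projection onto A_0
c = np.array([1.0, -2.0, 0.5])

def grad_H(x):
    Hi = np.diag(x)                                # H(x)^(-1) = diag(x)
    lam = np.linalg.solve(A @ Hi @ A.T, A @ Hi @ c)
    return Hi @ (c - A.T @ lam)                    # Riemannian gradient on F

def phi(x):                                        # Legendre transform coordinates
    return P @ (np.log(x) + 1.0)

x = np.array([0.2, 0.3, 0.5])
y0, dt, T = phi(x), 1e-4, 0.2
for _ in range(int(round(T / dt))):
    x -= dt * grad_H(x)
err = np.linalg.norm(phi(x) - (y0 - T * (P @ c)))  # distance to the straight line
print(err)
```

The Riemannian gradient sums to zero, so explicit Euler keeps the iterates on the simplex exactly, and the residual shrinks further as `dt` is refined.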
\section{Basic cosmological framework}\label{BASIC} The general framework for present cosmological work is set by three observational results. The perfect Planckian shape of the cosmic microwave background (CMB) spectrum as observed with the COBE satellite (Mather et al. 1990) clearly shows that the Universe must have evolved from a hot, dense, and opaque phase. The very good correspondence of the observed abundance of light elements and the results of Big Bang Nucleosynthesis (BBN, e.g. Burles, Nollett \& Turner 2001) shows that the cosmic expansion can be traced back to cosmological redshifts up to $z=10^{10}$. Steigman (2002) pointed out that if these analyses had been performed with Newtonian gravity and not with Einstein gravity, then the observed abundances could not be reconciled with the BBN predictions. One can take this as one of the few indications that Einstein gravity can in fact be applied within a cosmological context; it underlines the importance of the BBN benchmark for any gravitational theory. Finally, the consistency of the ages of the oldest stars in globular clusters (e.g. Chaboyer \& Krauss 2002) and the age of the Universe as obtained from cosmological observations can be regarded as the long-awaited `unification' of the theory of stellar structure and the theory of cosmic spacetime (Peebles \& Ratra 2004). Traditionally, Friedmann-Lema\^{\i}tre (FL) world models as derived from Einstein's field equations for spatially homogeneous and isotropic systems are assumed, characterized by the Hubble constant $H_0$ in units of $h= H_0/(100\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1})$, the normalized density of cosmic matter $\Omega_{\rm m}$ (e.g., baryonic and Cold Dark Matter CDM), the normalized cosmological constant $\Omega_\Lambda$, and its equation of state $w$. Within this general framework, clusters of galaxies are traditionally used as cosmological probes on Gigaparsec scales.
However, a precise test of whether one can apply Einstein gravity on such large scales is still missing. In Sect.\,\ref{GCLST}, a summary of the basic properties of nearby galaxy clusters is given. The hierarchical structure formation paradigm is tested with nearby galaxy clusters in Sect.\,\ref{HIER}. Constraints on the density of dark matter (DM), the normalization of the matter power spectrum, and neutrino masses are presented in Sect.\,\ref{MATTER}. Observational effects of the equation of state of the dark energy (DE), and a first test of a non-gravitational interaction between DE and DM are presented in Sect.\,\ref{DE}. The problem of the cosmological constant and its discussion in terms of the gravitational Holographic Principle as well as the effect of an extra dimension of brane-world gravity are discussed in Sect.\,\ref{CCP}. Sect.\,\ref{FUTURE} draws some conclusions. A general review on clusters is given in Bahcall (1999), whereas Edge (2004) focuses on nearby X-ray cluster surveys, Borgani \& Guzzo (2001) on their spatial distribution, Rosati, Borgani \& Norman (2002) and Voit (2004) on their evolution. \section{Galaxy clusters}\label{GCLST} Galaxy clusters are the largest virialized structures in the Universe. Only 5\% of the bright galaxies ($>L_*$) are found in rich clusters, but more than 50\% in groups and poor clusters. The number of cluster galaxies brighter than $m_3+2^m$, where $m_3$ is the magnitude of the third-brightest cluster galaxy, and located within $1.5\,h^{-1}\,{\rm Mpc}$ radius from the cluster center, ranges for rich clusters from 30 to 300 galaxies, and for groups and poor clusters from 3 to 30. For cosmological tests, rich clusters will turn out to be of more importance, so that the following considerations will mainly focus on the properties of this type. Rich clusters have typical radii of $1-2\,h^{-1}\,{\rm Mpc}$ where the surface galaxy density drops to $\sim 1$\% of the central density.
Baryonic gas, falling into the cluster potential well, is shock-heated up to temperatures of $T_{\rm e}=10^{7-8}\,{\rm K}$. The acceleration of the electrons in the hot plasma (intracluster medium ICM) gives thermal Bremsstrahlung with a maximum emissivity at $k_{\rm B}T_{\rm e}\,=\,2-14\,{\rm keV}$ so that they can be observed in X-rays together with some line emission. Typical X-ray luminosities range between $L_{\rm x}=10^{42-45}\,h^{-2}\,{\rm erg}\,{\rm s}^{-1}$ in the energy interval $0.1-2.4$\,keV. With X-ray satellites like ROSAT, Chandra, or XMM-Newton, these clusters can thus be detected up to cosmologically interesting redshifts. However, only a few clusters are detected at redshifts beyond $z=1$ (Rosati et al. 2002). Galaxy clusters are rare objects with number densities of $10^{-5}\,h^3$ ${\rm Mpc}^{-3}$, strongly decreasing with X-ray luminosity or cluster mass (B\"ohringer et al. 2002). Current structure formation models predict of the order of $10^6$ rich galaxy clusters in the visible Universe, the majority with redshifts below $z=2$. More than 5\,000 nearby galaxy clusters are already identified in the optical as local concentrations of galaxies, and 2\,000 by their (extended) X-ray emission. Surveys planned for the next few years like the Dark Universe Observatory DUO (Griffiths, Petri, Hasinger et al. 2004) will yield about $10^4$ clusters possibly up to $z=2$, that is, already 1\% of the total cluster population. It thus appears not completely illusory to finally get an almost complete census of all rich galaxy clusters in the visible Universe. X-ray clusters derive their importance for cosmology from the tight correlations between observables like X-ray temperature or X-ray luminosity and total gravitating cluster mass, which allow a precise reconstruction of the cosmic mass distribution on large scales. Knowledge of the total gravitating mass of a cluster within a well-defined radius is of crucial importance.
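The order-of-magnitude count quoted above can be reproduced with a quick estimate. All numbers in the sketch below are our own assumptions (a flat model with $\Omega_{\rm m}=0.3$ and a constant comoving density of $10^{-5}\,h^3\,{\rm Mpc}^{-3}$ out to $z\approx 2$), not values taken from the text beyond those it quotes.

```python
import numpy as np

# Assumed flat LambdaCDM background (Om = 0.3) and a constant comoving
# cluster density of 1e-5 h^3 Mpc^-3 out to z = 2 (our own toy numbers).
Om, Ol = 0.3, 0.7
z = np.linspace(0.0, 2.0, 2001)
f = 1.0 / np.sqrt(Om * (1.0 + z)**3 + Ol)                # 1/E(z)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))   # trapezoid rule
r = 2998.0 * integral     # comoving distance in h^-1 Mpc (c/H0 = 2998 h^-1 Mpc)
N = 1e-5 * (4.0 / 3.0) * np.pi * r**3                    # clusters in that sphere
print(f"r(z=2) ~ {r:.0f} h^-1 Mpc, N ~ {N:.1e} clusters")
```

The result is of order $2\times10^6$, consistent with the quoted $10^6$ rich clusters in the visible Universe.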
The masses are summarized in cluster mass functions which depend on structure formation models through certain values of the cosmological parameters. However, cosmic mass functions appear to be independent of cosmology when they are written in terms of natural ``mass'' and ``time'' variables (Lacey \& Cole 1994). Model mass functions can either be predicted from semi-analytic models (e.g., Sheth \& Tormen 2002, Schuecker et al. 2001a, Amossov \& Schuecker 2004) or from N-body simulations, the latter with errors between 10 and 30\% (Jenkins et al. 2001). Cluster masses can be determined in the optical by the velocity dispersion of cluster galaxies or in X-rays from, e.g., the gas temperature and density profiles, assuming virial and hydrostatic equilibrium, respectively (and spherical symmetry). Gravitational lensing uses the distortion of background galaxies and determines the projected cluster mass without any specific assumption (e.g., Kaiser \& Squires 1993). For regular clusters, the masses of galaxy clusters are consistently determined with the three methods and range between $10^{14}-10^{15}\,h^{-1}\,M_\odot$ (e.g., Wu et al. 1998). Several projects are currently under way to compare the mass estimates obtained with the different methods in more detail. The baryonic mass in clusters comes from the ICM and the stars in the cluster galaxies. The ratio between the baryonic and total gravitating mass (baryon fraction) in a cluster is about $0.07h^{-1.5}+0.05$. Systematic X-ray studies of large samples of galaxy clusters have revealed that about half of the clusters have significant substructure in their surface brightness distributions, i.\,e., some deviations from a perfectly regular shape (e.g. Schuecker et al. 2001b). For the detection of substructure, different methods as summarized in Feretti, Giovannini \& Gioia (2002) give substructure occurrence rates ranging from 20 to 80\%.
The large range clearly shows that the definition of a well-defined mass threshold for substructure and the measurement of the masses of the different subclumps is difficult and has not yet been rigorously applied. Further interesting ambiguities arise because clusters appear more regular in X-ray pseudo pressure maps (product of projected gas mass density and gas temperature) whereas contact discontinuities and shock fronts caused by merging events appear more pronounced in pseudo entropy and temperature maps (Briel, Finoguenov \& Henry 2004). Substructuring is taken as a signature of the dynamical youth of a galaxy cluster. The most dramatic distortions occur when two big equal-mass clumps collide (major merger) to form a larger cluster. With the ROSAT satellite, merging events could be studied for the first time in X-rays in more detail (e.g. Briel, Henry \& B\"ohringer 1992). A typical time scale of a merger event is $10^9$\,yr, where the increased gas density and X-ray temperature can boost X-ray luminosities up to factors of five (Randall, Sarazin \& Ricker 2002). The XMM-Newton and especially the Chandra X-ray satellites allow more detailed studies of substructures down to arcsec scales. Substructures in the form of cavities and bubbles (B\"ohringer et al. 1993, Fabian et al. 2000), cold fronts (Vikhlinin, Markevitch \& Murray 2001), weak shocks and sound waves (Fabian et al. 2003), strong shocks (Forman et al. 2003), and turbulence (Schuecker et al. 2004) were discovered, possibly triggered by merging events and/or AGN activity. With the ASTRO-E2 satellite planned to be launched in 2005, the line-of-sight kinematics of the ICM will be studied for the first time to get more information about the dynamical state of the ICM. The majority of the above-mentioned substructures have low amplitudes which do not strongly disturb radially-averaged cluster profiles (after masking) and thus cluster mass estimates.
In fact, the hydrostatic equation relates the observed smooth pressure gradients to the total gravitating cluster mass, which makes the robustness of X-ray cluster mass estimates seen in numerical simulations plausible (Evrard, Metzler \& Navarro 1996, but see Sect.\,\ref{FUTURE}). Present cosmological tests based on galaxy clusters assume that the diversity of regular and substructured clusters contributes only to the intrinsic scatter of the observed X-ray luminosity-mass relation or similar diagnostics, while keeping the shape and normalization of the original relation almost unaltered. \begin{figure}[] \vspace{0.0cm} \center{\hspace{-5.5cm}\psfig{figure=schuecker_fig1a.ps,height=4.5cm}} \vspace{-5.3cm} \center{\hspace{+6.3cm}\psfig{figure=schuecker_fig1b.ps,height=5.0cm}} \vspace{-0.5cm} \caption[]{\small {\bf Left:} Normalized comoving REFLEX cluster number densities as a function of redshift, and comoving radial distance $R$. Vertical error bars represent the formal $1\sigma$ Poisson errors. {\bf Right:} Histogram of the normalized KL coefficients of the REFLEX sample and superposed Gaussian profile. The Kolmogorov-Smirnov probability for Gaussianity is 93\%.} \label{FIG_NZ} \end{figure} The remaining $\sim$50\% of the clusters appear quite regular; a significant fraction of these clusters have very bright X-ray cores, where the dense gas could significantly cool. Such cooling core clusters are expected to have been in a very relaxed dynamical state for several gigayears. Numerical simulations suggest that the baryon fraction in these clusters is close to the universal value and can be used after some corrections as a cosmic `standard candle' (e.g. White et al. 1993). For nearby ($z<0.3$) rich systems, evolutionary effects on core radius and entropy input are found to be negligible (Rosati et al. 2002). Detailed XMM studies at $z\sim 0.3$ can be found in Zhang et al. (2004).
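The hydrostatic mass estimate invoked above can be sketched with toy numbers (ours, not from the text): for an isothermal beta-model gas density $n(r)\propto[1+(r/r_c)^2]^{-3\beta/2}$, hydrostatic equilibrium gives $M(<r)=-(k_{\rm B}T_{\rm e}\,r/G\mu m_{\rm p})\,{\rm d}\ln n/{\rm d}\ln r$, which for $k_{\rm B}T_{\rm e}\approx 7$\,keV at $r=1$\,Mpc lands in the quoted $10^{14}-10^{15}\,M_\odot$ range.

```python
# Toy hydrostatic mass estimate; beta, r_c, kT and mu are invented
# illustrative values, not taken from the text.
G    = 6.674e-11   # m^3 kg^-1 s^-2
m_p  = 1.673e-27   # kg
keV  = 1.602e-16   # J
Mpc  = 3.086e22    # m
Msun = 1.989e30    # kg

kT, mu       = 7.0 * keV, 0.6
beta, r_c, r = 2.0 / 3.0, 0.25 * Mpc, 1.0 * Mpc

x2 = (r / r_c) ** 2
dlnn_dlnr = -3.0 * beta * x2 / (1.0 + x2)      # logarithmic density slope
M = -(kT * r) / (G * mu * m_p) * dlnn_dlnr     # isothermal: dlnT/dlnr = 0
M_in_Msun = M / Msun
print(f"M(<1 Mpc) ~ {M_in_Msun:.1e} Msun")
```

Only the logarithmic slope of the gas density and the temperature enter, which is why smooth pressure profiles make the estimate robust against low-amplitude substructure.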
Therefore, cosmological tests based on massive nearby clusters with gas temperatures $k_{\rm B}T_{\rm e}>3$\,keV are expected to give reliable results. For these systems, the observed X-ray luminosity can be transformed into the theory-related cluster mass with empirical luminosity-mass or similar relations characterized by their shape, intrinsic scatter, and normalization (e.g., Reiprich \& B\"ohringer 2002). It will be shown that with such methods, cosmological tests can at present be performed at the 20-30\% accuracy level. Further improvements on cluster scaling relations are thus necessary to reach (if possible) the few-percent level of `precision cosmology'. Large and systematic observational programmes based on Chandra and XMM-Newton observations are now under way, which are expected to significantly improve the relations within the next few years (e.g., XMM Large Programme, H. B\"oh\-ringer et al., in prep., and a large Chandra project of T.H. Reiprich et al., in prep.). For cosmological tests with distant rich clusters, additional work is necessary. Gravi\-ta\-tio\-nal\-ly-induced evolutionary effects due to structure growth, and non-gravitationally-induced evolutionary effects like ICM heating through galactic winds caused by supernovae (SNe), and heating by AGN cause systematic deviations from simple self-similarity expectations (Kaiser 1986, Ponman, Cannon \& Navarro 1999). For cosmological tests, such evolutionary effects add further degrees of freedom to be determined simultaneously with the cosmological parameters (e.g., Borgani et al. 1999). \section{Hierarchical structure formation paradigm}\label{HIER} Structure formation on the largest scales as probed by galaxy clusters is mainly driven by gravity and should thus be understandable in a simple manner.
However, reconciling the tiny CMB anisotropies at $z\approx 1100$ with the very large inhomogeneities of the local galaxy distribution has shown that the majority of cosmic matter must come in nonbaryonic form (e.g., CDM). A direct consequence of such scenarios is that clusters should grow from Gaussian initial conditions in a quasi-hierarchical manner, i.e., less rich clusters and groups tend to form first and later merge to build more massive clusters. The merging of galaxy clusters as seen in X-rays (Sect.\,\ref{GCLST}) is a direct indication that such processes are still at work in the local universe. A further argument for hierarchical structure growth comes from the spatial distribution of galaxy clusters on $10^2\,h^{-1}\,{\rm Mpc}$ scales. Less than 1/10 of this distance can be covered by cluster peculiar velocities within a Hubble time, keeping in this linear regime the Gaussianity of the cosmic matter field as generated by the chaotic processes in the early Universe almost intact. This Gaussianity formally stems from the random-phase superposition of plane waves and the central limit theorem (superposition approximation). The peaks of this random field will eventually collapse to form virialized clusters. The relation between the spatial fluctuations of the clusters and the underlying matter field is called `biasing'. For Gaussian random fields, the biasing tends to concentrate the clusters in regions with the highest global matter density such that their correlation strengths $r_0$ increase with cluster mass (Kaiser 1984); otherwise they would immediately destroy Gaussianity (e.g., if we were to put a very massive cluster into a void of galaxies). Peculiar velocities of the clusters induced by the resulting inhomogeneities modify the $r_0$-mass relation, but without disturbing the general trend (peak-background split of Efstathiou, Frenk \& White 1988; Mo \& White 1996).
In the linear regime, we thus expect a Gaussian distribution of the amplitudes of cluster number fluctuations which increase with mass in a manner predicted by the specific hierarchical scenario. \begin{figure}[] \vspace{0.0cm} \center{\hspace{-6.5cm}\psfig{figure=schuecker_fig2a.ps,height=6.0cm}} \vspace{-6.3cm} \center{\hspace{+5.3cm}\psfig{figure=schuecker_fig2b.ps,height=6.0cm}} \vspace{-0.5cm} \caption[]{\small {\bf Left:} Comparison of the observed REFLEX power spectrum (points with error bars) with the prediction of a spatially flat $\Lambda$CDM model with a matter density of $\Omega_{\rm m}=0.34$ and $\sigma_8=0.71$. Errors include cosmic variance and are estimated with numerical simulations. {\bf Right:} Comparison of observed power spectral densities and predictions of a low-density CDM semi-analytic model as a function of lower X-ray luminosity, i.e., lower X-ray cluster mass (Schuecker et al. 2001c). The errors include cosmic variance and are obtained from N-body simulations.} \label{FIG_DECO} \end{figure} The REFLEX catalogue (B\"ohringer et al. 2004)\footnote{ http://www.xray.mpe.mpg.de/theorie/REFLEX/} provides the largest homogeneously selected sample of X-ray clusters and is ideally suited for testing specific hierarchical structure formation models. The sample comprises 447 southern clusters with redshifts $z\le0.45$ (median at $z=0.08$) down to X-ray fluxes of $3.0\times 10^{-12}\,{\rm erg}\,{\rm s}^{-1}\,{\rm cm}^{-2}$ in the energy range $(0.1-2.4)$\,keV, selected from the ROSAT All-Sky Survey (B\"ohringer et al. 2001). Several tests show that the sample cannot be seriously affected by unknown selection effects. An illustration is given by the normalized, radially-averaged comoving number densities along the redshift direction (Fig.\,\ref{FIG_NZ} left). The densities fluctuate around a $z$-independent mean as expected when no unknown selection or evolutionary effects are present. Further tests can be found in B\"ohringer et al.
(2001, 2004), Collins et al. (2000), and Schuecker et al. (2001c). Tests of the Gaussianity of the cosmic matter field refer to the superposition approximation mentioned above. They divide the survey volume into a set of large non-overlapping cells, count the clusters in each cell, decompose the fluctuation field of the cluster counts into plane waves via Fourier transformation, and check whether the frequency distribution of the amplitudes of the plane waves (Fourier modes with wavenumber $k$) follows a Gaussian distribution. However, the survey volume provides only a truncated view of the cosmic matter field, which results in an erroneous Fourier transform (the result obtained will be the convolution of the true Fourier transform with the survey window function). The truncation effect comprises both the reduction of fine details in the Fourier transform and the correlation of Fourier modes so that fluctuation power migrates between the modes. This leakage effect increases when the symmetry of the survey volume deviates from a perfect cubic shape. Uncorrelated amplitudes can be obtained when the fluctuations are decomposed into modes which follow to some extent the shape of the survey volume. The Karhunen-Lo{\`e}ve (KL) decomposition determines such eigenmodes under the constraint that the resulting KL fluctuation amplitudes are statistically uncorrelated. This construction is well suited for testing cosmic Gaussianity. The KL eigenmodes are the eigenvectors of the sample correlation matrix, i.e., the matrix giving the expected correlations between the number of clusters obtained in pairs of count cells as obtained with a fiducial (e.g. concordance) cosmological model. KL modes were first applied to CMB data by Bond (1995), to galaxy data by Vogeley \& Szalay (1996), and to cluster data by Schuecker et al. (2002).
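The decorrelation property can be illustrated with a toy computation (our own construction with an invented cell covariance, not the REFLEX analysis): counts drawn from a correlated fiducial covariance, projected onto the eigenmodes of that covariance and normalized by the eigenvalues, have an essentially diagonal sample covariance.

```python
import numpy as np

# Toy KL decomposition; cell geometry, covariance and realizations are
# invented for illustration, not the REFLEX setup.
rng = np.random.default_rng(1)
ncell, nreal = 20, 20000
i, j = np.indices((ncell, ncell))
C = np.exp(-np.abs(i - j) / 3.0)          # fiducial cell-cell covariance
counts = rng.multivariate_normal(np.zeros(ncell), C, size=nreal)

w, V = np.linalg.eigh(C)                  # KL eigenmodes of the fiducial model
kl = (counts @ V) / np.sqrt(w)            # normalized KL coefficients
resid = np.abs(np.cov(kl.T) - np.eye(ncell)).max()
print(resid)  # close to 0: coefficients are uncorrelated with unit variance
```

For a Gaussian field the KL coefficients are then independent standard normals, which is what the histogram in Fig.\,\ref{FIG_NZ} (right) probes.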
The linearity of the KL transform and the direct biasing scheme expected for galaxy clusters suggest that the statistics of the KL coefficients should directly reflect the statistics of the underlying cosmic matter field. Figure\,\ref{FIG_NZ} (right) compares a standard Gaussian with the frequency distribution of the observed KL-transformed and normalized cluster counts obtained with REFLEX. The cell sizes are larger than $100\,h^{-1}\,{\rm Mpc}$ and thus probe Gaussianity in the linear regime. The observed Gaussianity of the REFLEX data suggests Gaussianity of the underlying cosmic matter field on such large scales. This is a remarkable finding, taking into account the difficulties one has in testing Gaussianity even with current CMB data (Komatsu et al. 2003, Cruz et al. 2004). \begin{figure}[h] \vspace{-0.0cm} \center{\hspace{-1.0cm}\psfig{figure=schuecker_fig3.ps,height=12.0cm}} \vspace{-1.5cm} \caption[]{\small Compilation of fluctuation power spectra of various cosmological objects as compiled by Tegmark \& Zaldarriaga (2002) with the added REFLEX power spectrum. The continuous line represents the concordance $\Lambda$CDM structure formation model.} \label{FIG_PKSUM} \end{figure} As mentioned above, hierarchical structure formation predicts that the amplitudes of the fluctuations should increase in a certain manner with mass. On scales small compared to the maximum extent of the survey volume, the fluctuation field roughly follows the superposition approximation. In this scale range, it is very convenient to test the mass-dependent amplitude effect with a simple plane wave decomposition as summarized by the power spectrum $P(k)$\,\footnote{The KL method would need many modes to test small scales, which is presently too computer-intensive.}. Fig.\,\ref{FIG_DECO} (left) shows that the observed REFLEX power spectrum of the complete sample is well fit by a low-density $\Lambda$CDM model.
Comparisons with other hierarchical scenarios are found to be less convincing (Schuecker et al. 2001c). In contrast to the `standard CDM' model with $\Omega_{\rm m}=1$, in low-density (open) CDM models, the epoch of equality of matter and radiation occurs rather late and the growth of structure proceeds over a somewhat smaller range of redshift, until $(1+z)=\Omega_{\rm m}^{-1}$. Consequently, the turnover in $P(k)$ is at larger scales, leaving less power on small scales. The nonzero cosmological constant of a (flat) $\Lambda$CDM scenario stretches out the time scales of the model until $(1+z)=\Omega_{\rm m}^{-1/3}$. The differences in the dynamics of structure growth are thus not very large compared to an open CDM model and become only important at late stages. Note, however, that when all models are normalized to the local Universe, the opposite conclusion is true. The behaviour of the cluster fluctuation amplitude with mass (X-ray luminosity) for a low-density CDM model is shown in Fig.\,\ref{FIG_DECO} (right). The predictions are shown as continuous and dashed lines which nicely follow the observed trends. The model includes an empirical relation to convert cluster mass to X-ray luminosity (Reiprich \& B\"ohringer 2002), a model for quasi-nonlinear and linear structure growth (Peebles 1980), a biasing model (Mo \& White 1996, Matarrese et al. 1997), and a model for the transformation of the power spectrum from real space to redshift space (Kaiser 1987). However, one could still argue that clusters constitute only a small population of all cosmological objects visible over a limited redshift interval, and could therefore not give a representative view of the goodness of hierarchical structure formation models. Fig.\,\ref{FIG_PKSUM} summarizes power spectra obtained with various cosmological tracer objects as compiled by Tegmark \& Zaldarriaga (2002) including the REFLEX power spectrum. 
All spectra are normalized by their respective biasing parameters (if necessary). The combined power spectrum covers a spatial scale range of more than four orders of magnitude and redshifts between $z=1100$ (CMB) and $z=0$. The good fit of the $\Lambda$CDM model shows that this hierarchical structure formation model is really very successful in describing the clustering properties of cosmological objects. The following cosmological tests thus assume the validity of this structure formation model. \section{Ordinary matter}\label{MATTER} The observed cosmic density fluctuations are very well summarized by a low matter density $\Lambda$CDM model (Sect.\,\ref{HIER}). Therefore, many cosmological tests refer to this structure formation scenario. In general, baryonic matter, Cold Dark Matter (CDM), primeval thermal remnants (electromagnetic radiation, neutrinos), and an energy corresponding to the cosmological constant give the total (normalized) density of the present Universe, $\Omega_{\rm tot}=\Omega_{\rm b}+\Omega_{\rm CDM}+\Omega_{\rm r}+\Omega_\Lambda$. The normalized density of ordinary matter comprises the first three components. Recent CMB data suggest $\Omega_{\rm tot}=1.02\pm 0.02$ (Spergel et al. 2003), i.e., an effectively flat universe with a negligible spatial curvature. The same data suggest a baryon density of $\Omega_{\rm b}h^2=0.024\pm 0.001$ and $h=0.72\pm 0.05$. For our purposes, the energy density of thermal remnants, $\Omega_{\rm r}=0.0010\pm0.0005$ (Fukugita \& Peebles 2004), can be neglected, yielding the present cosmic matter density $\Omega_{\rm m}=\Omega_{\rm b}+\Omega_{\rm CDM}$. At the end of this section, an estimate of $\Omega_{\rm r}$ including only the neutrinos is given. 
Within this context of hierarchical structure formation, the occurrence rate of substructure seems to be a useful diagnostic to test different cosmological parameters because a high merger rate implies a high $\Omega_{\rm m}$ (e.g., Richstone, Loeb \& Turner 1992, Lacey \& Cole 1993). However, as mentioned in Sect.\,\ref{GCLST}, the effects of substructure are difficult to measure and to quantify in terms of mass so that presently less stringent constraints are attainable (for a recent discussion see, e.g., Suwa et al. 2003). A simple though $h$-dependent estimate of $\Omega_{\rm m}$ can be obtained from the comoving wavenumber of the turnover of the power spectrum because it corresponds to the horizon scale at the epoch of matter-radiation equality, $k_{\rm eq}=0.195\,\Omega_{\rm m}h^2\,{\rm Mpc}^{-1}$ (e.g. Peebles 1993), below which most structure is smoothed out by free-streaming CDM particles. A small $\Omega_{\rm m}$ or a small Hubble constant thus shifts the maximum of $P(k)$ towards larger scales. The product $\Gamma=\Omega_{\rm m}h$ is referred to as the shape parameter of the power spectrum. For the REFLEX power spectrum, the turnover is at $k_{\rm eq}=0.025\pm0.005\,{\rm Mpc}^{-1}$ (Fig.\,\ref{FIG_DECO}), so that for $h=0.72$ a matter density of $\Omega_{\rm m}=0.25\pm0.05$ is obtained. In this case, the shape parameter is $\Gamma=0.18\pm0.03$ which is typical for $\Lambda$CDM. Cluster abundance measurements are a classical application of galaxy clusters in cosmology to determine the present density of cosmic matter, $\Omega_{\rm m}$, either assuming a negligible effect of $\Omega_\Lambda$ or not. The effective importance of $\Omega_\Lambda$ on geometry and structure growth cannot be neglected for clusters with $z>0.5$. A related quantity is the variance of the matter fluctuations in spherical cells with radius $R$ and Fourier transform $W(kR)$: $\sigma^2(R)=\frac{1}{2\pi^2}\int_0^\infty\,dk\,k^2\,P(k)\,|W(kR)|^2$. 
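Both quantitative statements above are easy to verify numerically. The sketch below (illustrative Python, names are ours) inverts the turnover relation $k_{\rm eq}=0.195\,\Omega_{\rm m}h^2$ and evaluates $\sigma(R)$ with the spherical top-hat window $W(x)=3(\sin x-x\cos x)/x^3$; the power spectrum used is a toy shape with a turnover, not a fitted REFLEX model.

```python
import numpy as np

# (1) Invert the turnover relation k_eq = 0.195 * Omega_m * h^2 [Mpc^-1].
# (2) Evaluate sigma(R) with a spherical top-hat window for a TOY P(k)
#     (hypothetical shape, purely illustrative).

def omega_m_from_turnover(k_eq, h):
    """Invert k_eq = 0.195 * Omega_m * h**2 for Omega_m."""
    return k_eq / (0.195 * h**2)

def top_hat_window(x):
    """Fourier transform of a spherical top hat; W -> 1 for x -> 0."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def toy_power_spectrum(k, k_turn=0.025, amplitude=1.0e4):
    """Rises ~k on large scales and turns over near k_turn (toy shape)."""
    return amplitude * k / (1.0 + (k / k_turn) ** 4) ** 0.75

def sigma(R, n_steps=20000):
    """sigma(R) from the integral quoted in the text (trapezoidal rule)."""
    k = np.linspace(1e-4, 10.0 / R, n_steps)
    integrand = k**2 * toy_power_spectrum(k) * top_hat_window(k * R) ** 2
    return np.sqrt(0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(k))
                   / (2.0 * np.pi**2))
```

With the values quoted in the text, $k_{\rm eq}=0.025$ and $h=0.72$, the inversion reproduces $\Omega_{\rm m}\approx0.25$ and $\Gamma\approx0.18$; the amplitude $\sigma(R)$ decreases with the smoothing scale $R$, as expected for any positive $P(k)$.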
The specific value $\sigma_8$ at $R=8\,h^{-1}\,{\rm Mpc}$ characterizes the normalization of the matter power spectrum $P(k)$. Recent CMB data suggest $\sigma_8=0.9\pm 0.1$ (Spergel et al. 2003). In the following, the abundance of galaxy clusters is used to determine simultaneously the values of $\Omega_{\rm m}$ and $\sigma_8$. Early applications of the method can be found in, e.g., White, Efstathiou \& Frenk (1993), Eke, Cole \& Frenk (1996), and Viana \& Liddle (1996) suggesting a strong degeneracy between $\Omega_{\rm m}$ and $\sigma_8$ of the form $\sigma_8=(0.5-0.6)\Omega_{\rm m}^{-0.6}$. To understand this degeneracy and the high sensitivity of cluster counts to the values of the cosmological parameters, consider the expected number of clusters observed at a certain redshift and flux limit, \begin{equation}\label{ABU1} dN(z,f_{\rm lim})\,=\,dV(z)\,\int_{M_{\rm lim}(z,f_{\rm lim})}^\infty\, \,dM\,\,\frac{dn(M,z,\sigma^2(M))}{dM}\,. \end{equation} For optically selected samples, the flux limit has to be replaced by a richness (or optical luminosity) limit. The cosmology dependence of $dN$ stems from the comoving volume element $dV$, the mass limit $M_{\rm lim}$ at a certain redshift, and the shape of the cosmic mass function $dn/dM$. Three basic cosmological tests are thus applied simultaneously, which explains the high sensitivity of cluster counts to cosmology, although sometimes effects related to structure growth and geometric volume can work against each other (Sect.\,\ref{DE}). The summation in (\ref{ABU1}) is over cluster mass whereas observations yield quantities like X-ray luminosity, gas temperature, richness etc. The conversion of such observables into mass is the most crucial step where most of the systematic errors can occur. For more massive systems, likely contributors to systematic errors are effects related to cluster merging, substructures, and cooling cores. 
Cluster merging increases the gas density and temperature and thus the X-ray luminosity which increases the detection probability in X-rays. The overall statistical effect is difficult to quantify, but systematic errors in the cosmological parameters on the 20\% level can be reached (Randall et al. 2002). For less massive systems, further effects related to additional heat input by AGN, star formation, galactic winds driven by SNe, etc. lead to deviations from self-similar expectations (Sect.\,\ref{GCLST}), and increase the scatter in scaling relations. Such effects are quite difficult to simulate (e.g., Borgani et al. 2004, Ettori et al. 2004). Equation\,(\ref{ABU1}) can directly be applied to flux-selected cluster samples as obtained in X-rays or at millimeter wavelengths. The latter surveys detect clusters via the Sunyaev-Zel'dovich (SZ) effects (e.g., Birkinshaw, Gull \& Hardebeck 1984, Carlstrom, Holder \& Reese 2002). Here, energy of the ICM electrons is locally transferred through inverse Compton (Thomson) scattering to the CMB photons so that the number of photons on the long wavelength side of the Planck spectrum is depleted. After this blue-shift, each cluster is detected at wavelengths beyond 1.4\,mm as decrements against the average CMB background, and at shorter wavelengths as increments. This process thus measures deviations relative to the actual CMB background and is therefore redshift-independent, so that cluster detection does not have to work against the $(1+z)^4$ Tolman surface brightness dimming which is especially important for very distant clusters. Several blind SZ surveys are now in preparation (SZ-Array starting 2004; AMI 2004, APEX-SZ 2005, ACT 2007, SPT 2007 and Planck 2007). The flux limits in X-rays and submm allow, after some standard corrections, a very accurate determination of the volume accessible to a cluster with certain X-ray or submm properties. 
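The sensitivity of Eq.\,(\ref{ABU1}) to the mass limit can be sketched with a toy mass function. The Schechter-like form below is hypothetical (the real analyses use Press-Schechter-type mass functions), and the volume factor $dV(z)$ is omitted; the sketch only illustrates how strongly the counts react to raising the lower integration limit.

```python
import numpy as np

# Toy sketch of the mass integral in Eq. (ABU1): number of clusters
# above a mass limit, with a hypothetical Schechter-like dn/dM and the
# comoving volume factor dV(z) omitted. Illustrative only.

def toy_mass_function(m, m_star=1.0):
    """dn/dM in arbitrary units: steep power law with exponential cutoff."""
    return m ** -2.0 * np.exp(-m / m_star)

def counts_above_limit(m_lim, m_max=50.0, n_steps=20000):
    """Trapezoidal integral of the toy mass function from m_lim upwards."""
    m = np.linspace(m_lim, m_max, n_steps)
    dndm = toy_mass_function(m)
    return 0.5 * np.sum((dndm[1:] + dndm[:-1]) * np.diff(m))
```

Because the mass function is steep, a modest shift of the mass limit changes the integrated counts strongly, which is why the observable-to-mass conversion dominates the systematic error budget.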
The detection of clusters in the optical is more complicated (e.g., red-sequence method in Gladders, Yee \& Howard 2004, matched filter method in Postman et al. 1996, Schuecker \& B\"ohringer 1998, Schuecker, B\"ohringer \& Voges 2004). For the application of Eq.\,(\ref{ABU1}) to optically selected cluster samples, the mass limit $M_{\rm min}(z)$ has to be obtained with numerical simulations in a more model-dependent manner (e.g., Goto et al. 2002, Kim et al. 2002). For cosmological tests, the values of the parameters are changed until observed and predicted numbers of clusters agree. In order to avoid the evaluation of 3rd and 4th-order statistics in the error determination, the parameter matrices should be as diagonal as possible. This can be achieved when the cluster cell counts are transformed into the orthonormal basis generated by the KL eigenvectors of the sample correlation matrix (Sect.\,\ref{HIER}). With the REFLEX sample, the classical $\Omega_{\rm m}$-$\sigma_8$ test was performed with the KL basis (Schuecker et al. 2002, 2003a). The observed Gaussianity of the matter field directly translates into a multivariate Gaussian likelihood function, and includes in a natural manner a weighting of the squared differences between KL-transformed observed and modeled cluster counts with the variances of the transformed counts. Not only the mean counts in the cells but also their variances from cell to cell depend on the cosmological model. The KL method thus simultaneously tests both mean counts and their fluctuations which increases the sensitivity of the method even more. The method was extensively tested with clusters selected from the Hubble Volume Simulation. Note that for the application of the KL method to galaxies of the Sloan Digital Sky Survey (SDSS, Szalay et al. 2003, Pope et al. 2004) only the fluctuations could be used and were in fact enough to provide constraints on the 10-percent level. 
A typical result of a cosmological test of $\Omega_{\rm m}$ and $\sigma_8$ with REFLEX clusters is shown in Fig.\,\ref{FIG_LC}. \begin{figure}[] \vspace{0.0cm} \center{\hspace{-6.5cm}\psfig{figure=schuecker_fig4a.ps,height=4.0cm}} \vspace{-4.3cm} \center{\hspace{+4.0cm}\psfig{figure=schuecker_fig4b.ps,height=4.0cm}} \vspace{-0.3cm} \caption[]{\small {\bf Left:} Likelihood contours ($1-3\sigma$ level for two degrees of freedom) as obtained with the REFLEX sample. {\bf Right:} Same likelihood contours as left for a different empirical mass/X-ray luminosity relation.} \label{FIG_LC} \end{figure} Note the small parameter range covered by the likelihood contours and the residual $\Omega_{\rm m}$-$\sigma_8$ degeneracy: For (flat) $\Lambda$CDM and low $z$, structure growth is negligible, and the $\Omega_{\rm m}$-$\sigma_8$ degeneracy is related to the fact that a small $\sigma_8$ (corresponding to a low-amplitude power spectrum) yields a small comoving cluster number density, whereas a large $\Omega_{\rm m}$ (corresponding to a low mass limit $M_{\rm min}$) yields a large comoving number density. For (flat) $\Lambda$CDM and high $z$, structure growth and comoving volume again do not strongly depend on $\Omega_{\rm m}$, but the number of high-$z$ clusters increases with decreasing $\Omega_{\rm m}$ because for a fixed cluster number density at $z=0$ the normalization $\sigma_8$ has to be increased when $\Omega_{\rm m}$ is decreased as shown above. However, the sensitivity to structure growth becomes apparent once open and flat models are compared (Bahcall \& Fan 1998). For the test, further cosmological parameters like the Hubble constant, the primordial slope of the power spectrum, the baryon density, the biasing model, and the empirical mass/X-ray luminosity relation were fixed at their prior values. The final REFLEX result is obtained by marginalizing over these parameters and yields the $1\sigma$ corridors $0.28\le\Omega_{\rm m}\le0.37$ and $0.56\le\sigma_8\le0.80$. 
As mentioned above, the largest uncertainty in these estimates comes from the empirical mass/X-ray luminosity relation obtained for REFLEX from mainly ROSAT and ASCA pointed observations by Reiprich \& B\"ohringer (2002) - compare Fig.\,\ref{FIG_LC} left and right. Tests are in preparation with a four-times larger X-ray cluster sample of 1\,500 clusters combining a deeper version of REFLEX with an extended version of the cluster catalogue of B\"ohringer et al. (2000) of the northern hemisphere, plus a more precise M/L-relation obtained over a larger mass range with the XMM-Newton satellite. Errors below the 10-percent level are expected. Variants of the cluster abundance method use the X-ray luminosity or the gas temperature function. For the transition from observables to mass, often the relations mass-temperature and luminosity-temperature are used. As an example, Borgani et al. (2001) obtained comparatively strong constraints using a sample of clusters up to $z=1.27$ yielding the $1\sigma$ corridors $0.25\le\Omega_{\rm m}\le 0.38$ and $0.61\le \sigma_8\le 0.72$. White et al. (1993) pointed out that the matter content in rich nearby clusters provides a fair sample of the matter content of the Universe. The ratio of the baryonic to total mass in clusters should thus give a good estimate of $\Omega_{\rm b}/\Omega_{\rm m}$. The combination with determinations of $\Omega_{\rm b}$ from BBN (constrained by the observed abundances of light elements at high $z$) can thus be used to determine $\Omega_{\rm m}$ (David, Jones, \& Forman 1995, White \& Fabian 1995, Evrard 1997). Extending the universality assumption on the gas mass fraction to distant clusters, Ettori \& Fabian (1999) and later Allen et al. (2002) could show that at a certain distance from the center of quite relaxed distant clusters, the observed X-ray gas mass fraction tends to converge to a universal value. 
To illustrate the potential power of the method note that after further corrections, the results obtained by Allen et al. with only seven apparently relaxed clusters up to $z=0.5$ were already sensitive enough to constrain the cosmic matter density, $\Omega_{\rm m}=0.30^{+0.04}_{-0.03}$. Later work includes more clusters up to $z=0.9$ and cluster abundances from the REFLEX-sample (B\"ohringer et al. 2004) and the BCS sample (Ebeling et al. 1998), and yields the $1\sigma$ error corridors $0.25\le \Omega_{\rm m}\le 0.33$ and $0.66\le\sigma_8\le0.74$ (Allen et al. 2003). However, the method shares some similarity with the type-Ia SNe method in the sense that the validity of the gas mass fraction as a cosmic standard candle especially at high $z$ is mainly based on observational arguments, partially supported by numerical simulations. The overlap of the error corridors of the less-degenerated results of Borgani et al. (2001), Schuecker et al. (2003a), and Allen et al. (2003) yields our final result \begin{equation}\label{OMEGA} \Omega_{\rm m}=0.31\pm0.03\,. \end{equation} Other measurements show the $\Omega_{\rm m}$-$\sigma_8$ degeneracy more pronounced over a larger range. When all measurements are evaluated at $\Omega_{\rm m}=0.3$, the values of $\sigma_8$ appear quite consistent at a comparatively low normalization of \begin{equation}\label{SIGMA8} \sigma_8\,=\,0.76\,\pm\,0.10\,, \end{equation} within the total range $0.5<\sigma_8<1.0$ (data compiled in Henry 2004 from Bahcall et al. 2003, Henry 2004, Pierpaoli et al. 2003, Ikebe et al. 2002, Reiprich \& B\"ohringer 2002, Rosati et al. 2002, including Allen et al. 2003 and Schuecker et al. 2003a with small degeneracies)\footnote{Vauclair et al. (2003) could find a consistent solution between local and high redshift X-ray temperature distribution functions and the redshift distributions of distant X-ray cluster surveys using mass-temperature and luminosity-temperature relations. 
Their best model has $\Omega_{\rm m}>0.85$ and $\sigma_8=0.455$, and the shape parameter, $\Gamma=\Omega_{\rm m}\,h\approx 0.1$, which implies $h<0.12$, in conflict with many observations.}. Recent neutrino experiments are based on atmospheric, solar, reactor, and accelerator neutrinos. All experiments suggest that neutrinos change flavour as they travel from the source to the detector. These experiments give strong arguments for neutrino oscillations and thus nonzero neutrino rest masses $m_\nu$ (e.g. Ashie et al. 2004 and references given therein). Further information can be obtained from astronomical data on cosmological scales. The basic idea is to measure the normalization of the matter CDM spectrum with CMB anisotropies on several hundred Mpc scales. This normalization is transformed with structure growth functions to $8\,h^{-1}\,{\rm Mpc}$ at $z=0$ assuming various neutrino contributions. This normalization should match the $\sigma_8$ normalization from cluster counts (e.g., Fukugita, Liu \& Sugiyama 2000). Recent estimates are obtained by combining CMB-WMAP data with the 2dFGRS galaxy power spectrum, X-ray cluster gas mass fractions, and X-ray cluster luminosity functions (Allen, Schmidt \& Bridle 2003). For a flat universe and three degenerate neutrino species, they measured the contribution of neutrinos to the energy density of the Universe, and a species-summed neutrino mass, and their respective $1\sigma$ errors, \begin{equation}\label{NEUTR} \Omega_\nu=0.006\pm0.003\,,\quad\sum_im_i=0.6\pm0.3\,{\rm eV}\,, \end{equation} which formally corresponds to $m_\nu\approx 0.2$\,eV per neutrino. Their combined analysis yields a normalization of $\sigma_8=0.74^{+0.12}_{-0.07}$, which is consistent with the recent measurements with galaxy clusters mentioned above. From CMB, 2dFGRS and Ly-$\alpha$ forest data, Spergel et al. (2003) obtained the $2\sigma$ constraint $m_\nu<0.23$\,eV per neutrino. 
In a similar analysis including also SDSS galaxy clustering, Seljak et al. (2004) found $m_\nu<0.13$\,eV for the lightest neutrino (at $2\sigma$). Estimates from neutrino oscillations suggest $m_\nu\approx 0.05$\,eV for at least one of two neutrino species, consistent with the Fukugita \& Peebles (2004) estimate given above. \section{Dark energy}\label{DE} The present state of the cosmological tests is illustrated in Fig.\,\ref{FIG_ILLUS} (left). The combination of the likelihood contours obtained with three different observational approaches (type-Ia SNe: Riess et al. 2004; CMB: Spergel et al. 2003; galaxy clusters: Schuecker et al. 2003b) shows that the cosmic matter density is close to $\Omega_{\rm m}=0.3$, and that the normalized cosmological constant is around $\Omega_\Lambda=0.7$. This sums up to unit total cosmic energy density and suggests a spatially flat universe. However, the density of cosmic matter grows with redshift like $(1+z)^3$ whereas the density $\rho_\Lambda$ related to the cosmological constant $\Lambda$ is independent of $z$. The ratio $\Omega_\Lambda/\Omega_{\rm m}$ today is close to unity and must thus be a finely-tuned infinitesimal constant $\Omega_\Lambda/(\Omega_{\rm m}(1+z_\infty)^3)$ set in the very early Universe (cosmic coincidence problem). \begin{figure}[h] \vspace{-0.0cm} \center{\hspace{-6.5cm}\psfig{figure=schuecker_fig5a.ps,height=6.0cm}} \vspace{-6.1cm} \center{\hspace{+5.0cm}\psfig{figure=schuecker_fig5b.ps,height=6.5cm}} \vspace{-0.7cm} \caption[]{\small {\bf Left:} Present situation of cosmological tests of the matter density $\Omega_{\rm m}$ and the normalized cosmological constant $\Omega_\Lambda$ from different observational approaches (B\"ohringer, priv. com.). 
{\bf Right:} Null Energy Condition (NEC) and Strong Energy Condition (SEC) for a flat FL spacetime at redshift $z=0$ with negligible contributions from relativistic particles in the parameter space of the normalized cosmic matter density $\Omega_{\rm m}$ and the equation of state parameter of the dark energy $w_{\rm x}$. More details are given in the main text.} \label{FIG_ILLUS} \end{figure} An alternative hypothesis is to consider a time-evolving `dark energy' (DE), where in Einstein's field equations the time-independent energy density $\rho_\Lambda$ of the cosmological constant is replaced by a time-dependent DE density $\rho_{\rm x}(t)$, \begin{equation}\label{EINSTEIN} G_{\mu\nu}\,=\,-\frac{8\pi G}{c^4}\, \left[\,T_{\mu\nu}\,+\, \rho_{\Lambda\rightarrow{\rm x}}(t)\,c^2\,g_{\mu\nu}\right]\,, \end{equation} while assuming that the `true' cosmological constant is either zero or negligible. Here, $G_{\mu\nu}$ is the Einstein tensor, $T_{\mu\nu}$ the energy-momentum tensor of ordinary matter, and $g_{\mu\nu}$ the metric tensor. For a time-evolving field (see, e.g., Ratra \& Peebles 1988, Wetterich 1988, Caldwell et al. 1998, Zlatev, Wang \& Steinhardt 1999, Caldwell 2002, recent review in Peebles \& Ratra 2004) the aim is to understand the coincidence in terms of dynamics. A central r\^{o}le in these studies is assumed by the phenomenological ratio \begin{equation}\label{COIN2} w_{\rm x}=\frac{p_{\rm x}}{\rho_{\rm x} c^2} \end{equation} (equation of state) between the pressure $p_{\rm x}$ of the unknown energy component and its rest energy density $\rho_{\rm x}$. Note that $w_{\rm x}=-1$ for Einstein's cosmological constant. The resulting phase space diagram of DE (Fig.\,\ref{FIG_ILLUS}, right) distinguishes different physical states of the two-component cosmic fluid -- separated by two energy conditions of general relativity (Schuecker et al. 2003b). 
Generally, assumptions on energy conditions form the basis for the well-known singularity theorems (Hawking \& Ellis 1973), censorship theorems (e.g. Friedman et al. 1993) and no-hair theorems (e.g. Mayo \& Bekenstein 1996). Quantized fields violate all local point-wise energy conditions (Epstein et al. 1965). In the present investigation we are, however, concerned with observational studies on macroscopic scales relevant for cosmology where $\rho_{\rm x}$ and $p_{\rm x}$ are expected to behave classically. Cosmic matter in the form of baryons and non-baryons, or relativistic particles like photons and neutrinos satisfy all standard energy conditions. The two energy conditions discussed below are given in a simplified form (see Wald 1984 and Barcel\'{o} \& Visser 2001). The {\it Strong Energy Condition} (SEC): $\rho+3p/c^2\ge0$ {\it and} $\rho+p/c^2\ge0$, derived from the more general condition $R_{\mu\nu}v^\mu v^\nu\ge0$, where $R_{\mu\nu}$ is the Ricci tensor for the geometry and $v^\mu$ a timelike vector. The simplified condition is valid for diagonalizable energy-momentum tensors which describe all observed fields with non-zero rest mass and all zero rest mass fields except some special cases (see Hawking \& Ellis 1973). The SEC ensures that gravity is always attractive. Certain singularity theorems (e.g., Hawking \& Penrose 1970) relevant for proving the existence of an initial singularity in the Universe need an attracting gravitational force and thus assume SEC. Violations of this condition as discussed in Visser (1997) allow phenomena like inflationary processes expected to take place in the very early Universe or a moderate late-time accelerated cosmic expansion as suggested by the combination of recent astronomical observations (Fig.\,\ref{FIG_ILLUS} left). 
Likewise, phenomena related to $\Lambda>0$ and an effective version of $\Lambda$ whose energy and spatial distribution evolve with time ({\it quintessence}: Ratra \& Peebles 1988, Wetterich 1988, Caldwell et al. 1998 etc.) are allowed consequences of the breaking of SEC -- but not a prediction. However, a failure of SEC seems to have no severe consequences because the theoretical description of the relevant physical processes can still be provided in a canonical manner. Phenomenologically, violation of SEC means $w_{\rm x}<-1/3$ for a {\it single} energy component with density $\rho_{\rm x}> 0$. For $w_{\rm x}\ge -1/3$, SEC is not violated and we have a decelerated cosmic expansion. The {\it Null Energy Condition} (NEC): $\rho+p/c^2\ge0$, derived from the more general condition $G_{\mu\nu}k^\mu k^\nu\ge0$, where $G_{\mu\nu}$ is the geometry-dependent Einstein tensor and $k^\mu$ a null vector (energy-momentum tensors as for SEC). Violations of this condition have recently been studied theoretically in the context of macroscopic traversable wormholes (see averaged NEC: Flanagan \& Wald 1996, Barcel\'{o} \& Visser 2001) and the Holographic Principle (Sect.\,\ref{CCP}). The breaking of this criterion in a finite local region would have subtle consequences like the possibility for the creation of ``time machines'' (e.g. Morris, Thorne \& Yurtsever 1988). Violating the energy condition in the cosmological case is not as dangerous (no threat to causality, no need to involve chronology protection, etc.), since one cannot isolate a chunk of the energy to power such exotic objects. Nevertheless, violation of NEC on cosmological scales could excite phenomena like super-acceleration of the cosmic scale factor (Caldwell 2002). Theoretically, violation of NEC would have profound consequences not only for cosmology because all point-wise energy conditions would be broken. It cannot be achieved with a canonical Lagrangian {\it and} Einstein gravity. 
Phenomenologically, violation of NEC means $w_{\rm x}<-1$ for a {\it single} energy component with $\rho_{\rm x}> 0$. The sort of energy related to this state of a Friedmann-Robertson-Walker (FRW) spacetime is dubbed {\it phantom energy} and is described by {\it super-quintessence} models (Caldwell 2002, see also Chiba, Okabe \& Yamaguchi 2000). For $w_{\rm x}\ge-1$ NEC is not violated, and the energy component is described by {\it quintessence} models. \begin{figure}[] \vspace{-0.0cm} \center{\hspace{-1.0cm}\psfig{figure=schuecker_fig6.ps,height=6.0cm}} \vspace{-0.5cm} \caption[]{\small Likelihood contours ($1-3\sigma$) obtained with the Riess et al. (1998) sample of type-Ia SNe. The luminosities are corrected with the $\Delta m_{15}$ method. The equation of state parameter $w_{\rm x}$ is assumed to be redshift-independent.} \label{FIG_SN} \end{figure} Assuming a spatially flat FRW geometry, $\Omega_{\rm m}+\Omega_{\rm x}=1$, and $\Omega_{\rm m}\ge 0$ as indicated by the astronomical observations in Fig.\,\ref{FIG_ILLUS} (left), the formal conditions for this two-component cosmic fluid translate into $w_{\rm x}\ge-\frac{1}{3(1-\Omega_{\rm m})}$ for SEC, and $w_{\rm x}\ge-\frac{1}{1-\Omega_{\rm m}}$ for NEC (curved lines in Fig.\,\ref{FIG_ILLUS} right). These energy conditions, characterizing the possible phases of a mixture of dark energy and cosmic matter, thus rely on the precise knowledge of $\Omega_{\rm m}$ and $w_{\rm x}$. Unfortunately, the effects of $w_{\rm x}$ are not very large. However, a variety of complementary observational approaches and their combination helps to reduce the measurement errors significantly. The most direct (geometric) effect of $w_{\rm x}$ is to change cosmological distances. 
For example, for a spatially flat universe, comoving distances in dimensionless form, $a_0r=H_0\int_0^z\frac{dz'}{H(z')}$, are directly related to $w_{\rm x}$ via \begin{equation}\label{DIST} \left[\frac{H(z)}{H_0}\right]^2 = \Omega_{\rm m}(1+z)^3+(1-\Omega_{\rm m})\exp{\left\{ 3\int_0^z[1+w_{\rm x}(z')]\,\,d\ln(1+z')\, \right\}}\,. \end{equation} A less negative $w_{\rm x}$ increases the Hubble parameter and thus reduces all cosmic distances. In general, $w_{\rm x}$ must evolve in time. To discuss Eq.\,(\ref{DIST}) in terms of the resulting parameter degeneracy, let us assume $w_{\rm x}(z)=w_0+w_1\cdot z$ with the additional constraint that $w_0=-1$ implies $w_1=0$. For this simple parameterization the same expansion rate at $z$ is obtained when $w_0$ and $w_1$ are related by $w_1=-\frac{\ln(1+z)}{z-\ln(1+z)}(1+w_0)$. The parameter degeneracy between $w_0$ and $w_1$ is a generic feature and can be seen in many proposed observational tests. Fortunately, its slope depends on $z$, so that the degeneracy can be broken with independent observations covering a large redshift range. Current observations do not have the sensitivity to measure $w_0$ and $w_1$ separately, so that basically all published measurements of the equation of state of the DE are of $w_0$, assuming $w_1=0$. The danger with this assumption is, however, that if the true $w_1$ deviated strongly from zero, the estimated $w_0$ would be biased correspondingly (Maor et al. 2002). In addition, even when an explicit redshift dependence of $w_{\rm x}$ can be neglected, some parameter degeneracy between $\Omega_{\rm m}$ and $w_{\rm x}$ remains as suggested by Eq.\,(\ref{DIST}) (see Fig.\,\ref{FIG_SN} obtained with the type-Ia SNe). 
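Eq.\,(\ref{DIST}) and the $w_0$-$w_1$ degeneracy relation above can be sketched directly for the special case of a constant $w_{\rm x}$, where the exponential reduces to $(1+z)^{3(1+w_{\rm x})}$. The function names below are ours; the sketch is purely illustrative.

```python
import numpy as np

# Sketch of Eq. (DIST) for a flat universe with CONSTANT w_x, plus the
# w_0-w_1 degeneracy relation quoted in the text. Illustrative only.

def hubble_ratio(z, omega_m, w_x):
    """H(z)/H0 for a flat cosmology with constant equation of state w_x."""
    return np.sqrt(omega_m * (1.0 + z) ** 3
                   + (1.0 - omega_m) * (1.0 + z) ** (3.0 * (1.0 + w_x)))

def comoving_distance(z, omega_m, w_x, n_steps=5000):
    """Dimensionless comoving distance a0*r = Int_0^z dz' H0/H(z')."""
    zs = np.linspace(0.0, z, n_steps)
    integrand = 1.0 / hubble_ratio(zs, omega_m, w_x)
    return 0.5 * np.sum((integrand[1:] + integrand[:-1]) * np.diff(zs))

def degenerate_w1(w0, z):
    """w_1 yielding the same expansion rate at z as given w_0 (degeneracy line)."""
    return -np.log(1.0 + z) / (z - np.log(1.0 + z)) * (1.0 + w0)
```

A quick check reproduces the behaviour stated in the text: a less negative $w_{\rm x}$ increases $H(z)$ and reduces the comoving distance, and the degeneracy line passes through $(w_0,w_1)=(-1,0)$.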
\begin{figure}[] \vspace{0.0cm} \center{\hspace{-5.5cm}\psfig{figure=schuecker_fig7a.ps,height=5.3cm}} \vspace{-5.6cm} \center{\hspace{+6.3cm}\psfig{figure=schuecker_fig7b.ps,height=5.3cm}} \vspace{-0.5cm} \caption[]{\small Evolution of the matter power spectrum for different redshift-independent equations of state $-1\le w_{\rm x}<0$ of the DE. The lower curve is for $w_{\rm x}=-1$ and increases in amplitude with $w_{\rm x}$.} \label{FIG_PKWZ} \end{figure} Structure growth via gravitational instability provides a further probe of $w_{\rm x}$. DE, not in the form of a cosmological constant or vacuum energy density, is inhomogeneously distributed -- a smoothly distributed, time-varying component is unphysical because it would not react to local inhomogeneities of the other cosmic fluid and would thus violate the equivalence principle. An evolving scalar field with $w_{\rm x}<0$ (e.g. quintessence) automatically satisfies these conditions (Caldwell, Dave \& Steinhardt 1998a). The field is so light that it behaves relativistically on small scales and non-relativistically on large scales. The field may develop density perturbations on Gpc scales where sound speeds $c^2_{\rm s}<0$, but does not clump on scales smaller than galaxy clusters. Generally, perturbations come in either linear or nonlinear form depending on whether the density contrast, $\delta=(\rho/\bar{\rho})-1$, is smaller or larger than one. In the linear regime, and when DE is modeled as a dynamical scalar field, the rate of growth of linear density perturbations in the CDM is damped by the Hubble parameter, $\delta''_{\rm cdm}+aH\delta'_{\rm cdm}=4\pi G a^2\delta\rho_{\rm cdm}$ ($a$ denotes the scale factor and a prime the derivative with respect to conformal time). 
This evolution equation can be solved approximately by $ \frac{d\ln\delta_{\rm cdm}}{d\ln a}\approx \left[1+\frac{\rho_{\rm x}(a)}{\rho_{\rm cdm}(a)}\right]^{-0.6} $ (Caldwell, Dave \& Steinhardt 1998b), provided that $\rho_{\rm x}<\rho_{\rm r}$ at radiation-matter equality. It is seen that a larger $\rho_{\rm x}(a)$, and thus a more positive $w_{\rm x}$, delays structure growth. To reach the same fluctuations in the CDM field, structures must have formed at higher $z$ compared to the standard CDM model. For a redshift-independent $w_{\rm x}$, transfer and growth functions can be found in Ma et al. (1999). The effects of a constant $w_{\rm x}$ on $P(k)$ are shown in Fig.\,\ref{FIG_PKWZ}. The sensitivity of CMB anisotropies to $w_{\rm x}$ is limited to the integrated Sachs-Wolfe effect because $\Omega_{\rm x}$ dominates only at late $z$ (Eq.\,\ref{DIST}). Spergel et al. (2003) showed that the WMAP data could be fitted equally well with $\Omega_{\rm m}=0.47$, $h=0.57$, and $w_{\rm x}=-0.5$ once $w_{\rm x}$ is regarded as a free (constant) parameter. In the nonlinear regime, the effects of DE are not very large. For the cosmological constant, Lahav et al. (1991) used the theory of peak statistics in Gaussian random fields and linear gravitational-instability theory in the linear regime, together with the spherical infall model, to evolve the profiles to the present epoch. They found that the local dynamics around a cluster at $z=0$ does not carry much information about $\Lambda$. However, DM haloes have core densities correlating with their formation epoch. Therefore, when $w_{\rm x}$ delays structure growth, DM haloes are formed at higher $z$ with higher core densities and should thus appear, for fixed mass and redshift, more concentrated in $w_{\rm x}> -1$ models compared to $\Lambda$. This is reflected in the virial densities of collapsed objects in units of the critical density shown in Fig.\,\ref{FIG_ADC} (left). 
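The approximate growth rate at the start of this paragraph can be evaluated directly for a constant $w_{\rm x}$, using $\rho_{\rm x}/\rho_{\rm cdm}=(\Omega_{\rm x}/\Omega_{\rm m})\,a^{-3w_{\rm x}}$ in a flat universe. This is a minimal sketch under those assumptions, not the full transfer-function treatment of Ma et al. (1999).

```python
# Sketch of the approximate growth rate quoted in the text,
# d ln(delta_cdm)/d ln(a) ~ [1 + rho_x(a)/rho_cdm(a)]^(-0.6),
# for a CONSTANT w_x in a flat universe (Omega_x = 1 - Omega_m).

def density_ratio(a, omega_m, w_x):
    """rho_x / rho_cdm as a function of the scale factor a."""
    return (1.0 - omega_m) / omega_m * a ** (-3.0 * w_x)

def growth_rate(a, omega_m, w_x):
    """Approximate logarithmic growth rate of CDM perturbations."""
    return (1.0 + density_ratio(a, omega_m, w_x)) ** -0.6
```

Evaluating at an early scale factor shows the effect stated in the text: for $w_{\rm x}=-0.5$ the dark energy is dynamically more important in the past than for $w_{\rm x}=-1$, so the growth rate is suppressed earlier and structures must form at higher $z$.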
The first semi-analytic computations of a spherical collapse in a fluid with DE with $-1\le w_{\rm x}<0$ were performed by Wang \& Steinhardt (1998). Schuecker et al. (2003b) enlarged the range to $-5<w_{\rm x}<0$, whereas Mota \& van de Bruck (2004) discussed the spherical collapse for specific potentials of scalar fields. For recent simulations see Klypin et al. (2003) and Bartelmann et al. (2004). \begin{figure}[] \vspace{0.0cm} \center{\hspace{-5.8cm}\psfig{figure=schuecker_fig8a.ps,height=6.5cm}} \vspace{-5.2cm} \center{\hspace{+5.3cm}\psfig{figure=schuecker_fig8b.ps,height=4.4cm}} \vspace{-0.2cm} \caption[]{\small {\bf Left:} Virial density in units of the critical matter density for a flat universe as a function of $\Omega_{\rm m}$ and $w_{\rm x}$. The $w_{\rm x}$ values range from $-1$ (lower curve) to zero (upper curve). {\bf Right:} Likelihood contours ($1$-$3\sigma$) obtained from nearby cluster counts (REFLEX: Schuecker et al. 2003b), assuming a constant $w_{\rm x}$ and marginalized over $0.5<\sigma_8<1$. } \label{FIG_ADC} \end{figure} These arguments have to be combined with the general discussion of Eq.\,(\ref{ABU1}) to understand the sensitivity of cluster counts to $w_{\rm x}$. Keeping the present-day cluster abundance and the lower mass limit $M_{\rm min}$ in Eq.\,(\ref{ABU1}) fixed, the dominant effect of $w_{\rm x}$ comes from structure growth and volume (Haiman, Mohr \& Holder 2001). For a larger $w_{\rm x}$, the DE field delays structure growth so that the number of distant clusters increases. However, a larger $w_{\rm x}$ also yields a smaller comoving count volume for the clusters, which counteracts the growth effect. The compensation works mainly at small $z$ and leads to a comparatively small sensitivity of cluster counts at $z<0.5$ to $w_{\rm x}$. For $z>0.5$, the effect of delayed structure growth starts to dominate and the number of high-$z$ clusters increases with $w_{\rm x}$. 
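The volume effect can be illustrated with the comoving distance $D_{\rm C}=\int_0^z c\,dz'/H(z')$ for a flat universe with $H^2(z)=H_0^2[\Omega_{\rm m}(1+z)^3+\Omega_{\rm x}(1+z)^{3(1+w_{\rm x})}]$ and constant $w_{\rm x}$. The sketch below uses a simple trapezoidal rule and works in units of the Hubble length $c/H_0$:

```python
import math

def comoving_distance(z, omega_m=0.3, w_x=-1.0, n=2000):
    """Comoving distance in units of c/H0 for a flat universe with
    constant w_x; plain trapezoidal integration of 1/E(z)."""
    def inv_E(zz):
        return 1.0 / math.sqrt(omega_m * (1.0 + zz)**3
                               + (1.0 - omega_m) * (1.0 + zz)**(3.0 * (1.0 + w_x)))
    h = z / n
    s = 0.5 * (inv_E(0.0) + inv_E(z))
    for i in range(1, n):
        s += inv_E(i * h)
    return s * h

# A larger (less negative) w_x means more DE density in the past, a
# larger H(z), and hence a smaller comoving count volume at given z:
d_quint  = comoving_distance(1.0, w_x=-0.5)
d_lambda = comoving_distance(1.0, w_x=-1.0)
```

For $\Omega_{\rm m}=1$ the routine reproduces the analytic Einstein-de Sitter result $D_{\rm C}=2(1-1/\sqrt{1+z})$, a useful sanity check.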
However, the realistic case is when a redshift- and cosmology-dependent lower mass limit is included. In this case, it could be shown that at high $z$, the $w_{\rm x}$-dependence of the redshift distribution is mainly caused by the $w_{\rm x}$-dependence of the lower mass limit, in the sense that a larger $w_{\rm x}$ decreases distances and therefore increases the number of high-$z$ clusters, whereas at small redshifts no strong dependence beyond the standard $\Omega_{\rm m}$-$\sigma_8$ degeneracy remains. The inclusion of a $z$-dependent mass limit only slightly damps the sensitivity to $\Omega_{\rm m}$. This high-$z$ behaviour of the number of clusters is very important for planned future cluster surveys (e.g. DUO, Griffiths et al. 2004), where in the wide (northern) survey about 8\,000 clusters will be detected over 10\,000 square degrees on top of the SDSS cap up to $z=1$, and where in the deep (southern) survey about 1\,800 clusters will be detected over 176 square degrees up to $z=2$ (if they exist at such high redshifts). REFLEX has most clusters below $z=0.3$. For a constant $w_{\rm x}$, the likelihood contours are shown in Fig.\,\ref{FIG_ADC} (right) as a function of $\Omega_{\rm m}$ (Schuecker et al. 2003b). The effects of as yet unknown possible systematic errors are included by using a very large range of $\sigma_8$ priors ($0.5<\sigma_8<1.0$). As expected, the $w_{\rm x}$ dependence is very weak. The preceding examples (Fig.\,\ref{FIG_SN} and Fig.\,\ref{FIG_ADC} right) have shown that presently neither SNe nor galaxy clusters alone give an accurate estimate of the redshift-independent part of $w_{\rm x}$. This is also true for CMB anisotropies. However, the resulting likelihood contours of SNe and galaxy clusters appear almost orthogonal to each other in the high-$w_{\rm x}$ range. Their combination thus gives a quite strong constraint on both $w_{\rm x}$ and $\Omega_{\rm m}$ (Fig.\,\ref{FIG_SC} left). 
This is a typical example of cosmic complementarity, which stems from the fact that SNe probe the homogeneous Universe whereas galaxy clusters also test the inhomogeneous Universe. The final result of the combination of different SNe samples and REFLEX clusters yields the $1\sigma$ constraints $w_{\rm x}=-0.95\pm 0.32$ and $\Omega_{\rm m}=0.29\pm 0.10$ (Schuecker et al. 2003b). Averaging all results obtained with REFLEX and various SN samples yields $w_{\rm x}=-1.00^{+0.18}_{-0.25}$ (Fig.\,\ref{FIG_SC} left). The figure shows that the measurements suggest a cosmic fluid that violates the SEC and fulfills the NEC. In fact, the measurements are quite consistent with the cosmological constant and leave little room for any exotic types of DE. The violation of the SEC gives a further argument that we live in a Universe in a phase of accelerated cosmic expansion. Ettori, Tozzi \& Rosati (2003) used the baryonic gas mass fraction of clusters in the range $0.72\le z\le 1.27$ and obtained $w_{\rm x}\le -0.49$. The combination with SN data yields $w_{\rm x}<-0.89$, though under the restrictive prior $w_{\rm x}\ge -1$. Henry (2004) used the X-ray temperature function and found $w_{\rm x}=-0.42\pm0.21$, assuming $w_{\rm x}\ge -1.0$. In a preliminary analysis, Sereno \& Longo (2004) used angular diameter distance ratios of lensed galaxies in rich clusters, together with shape parameters of surface brightness distributions and gas temperatures from X-ray data, and obtained $w_{\rm x}=-0.83\pm0.14$, assuming $w_{\rm x}\ge -1.0$. Rapetti, Allen \& Weller (2004) combined cluster X-ray gas mass fractions with WMAP data and obtained the constraint $w_{\rm x}=-1.05\pm 0.11$. A formal average of the most accurate and unconstrained $w_{\rm x}$ measurements using galaxy clusters (Schuecker et al. 2003b, Rapetti et al. 2004) gives \begin{equation}\label{WVAL} w_{\rm x}\,=\,-1.00\pm 0.05\,. 
\end{equation} Lima, Cunha \& Alcaniz (2003) give a summary of the results of the $w_{\rm x}$-$\Omega_{\rm m}$ tests obtained with various methods, all assuming a redshift-independent $w_{\rm x}$. A clear trend is seen that $w_{\rm x}>-0.5$ is ruled out by basically all observations. The large degeneracy seen in Fig.\,\ref{FIG_SN} (left) towards $w_{\rm x}<-1$ translates into a less well-defined lower bound. Hannestad \& M\"ortsell (2002) found $w_{\rm x}>-2.7$ from the combination of CMB, SNe and large-scale structure data. Melchiorri et al. (2003) combined seven CMB experiments including WMAP with Hubble parameter measurements from the Hubble Space Telescope and luminosity measurements of type-Ia SNe, and found the 95\% confidence range $-1.45<w_{\rm x}<-0.74$. Including also 2dF data on the large-scale distribution of galaxies, they found $-1.38<w_{\rm x}<-0.82$. More recent measurements support the tendency that $w_{\rm x}$ is close to the value expected for a cosmological constant, as found by the combination of REFLEX and SN data. Spergel et al. (2003) used a variety of different combinations of WMAP and galaxy data and obtained the $1\sigma$ corridor $w_{\rm x}=-0.98\pm0.12$. Riess et al. (2004) combined data from distant type-Ia SNe with CMB and large-scale structure data, and found $w_{\rm x}=-1.02^{+0.13}_{-0.19}$. Their results are also inconsistent with a rapid evolution of the DE. Combining the Ly-$\alpha$ forest and bias analysis of the SDSS with previous constraints from SDSS galaxy clustering, the latest SN and WMAP data, Seljak et al. (2004) obtained $w_{\rm x}=-0.98^{+0.10}_{-0.12}$ at $z=0.3$ (they also obtained $\sigma_8=0.90\pm0.03$). A combination of the $w_{\rm x}$ measurements of REFLEX, Rapetti et al. (2004), Spergel et al. (2003), Riess et al. (2004), and Seljak et al. (2004) yields $w_{\rm x}=-0.998\pm0.038$. 
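A rough sketch of such a combination is an inverse-variance weighted mean of the five measurements just listed. Asymmetric error bars are simply symmetrized here, so this only approximates the quoted result; it is not the exact procedure used for the published value:

```python
def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma error."""
    weights = [1.0 / s**2 for s in sigmas]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, wsum ** -0.5

# REFLEX combined, Rapetti et al., Spergel et al., Riess et al.,
# Seljak et al.; asymmetric errors replaced by their arithmetic mean:
w_vals = [-1.00, -1.05, -0.98, -1.02, -0.98]
w_sigs = [0.215, 0.11, 0.12, 0.16, 0.11]
w_mean, w_err = weighted_mean(w_vals, w_sigs)
```

The result lands close to $w_{\rm x}=-1$ with an error of a few percent, in line with the combination quoted above.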
Independent of this more or less subjective summary, it is still safe to conclude that all recent measurements are consistent with a cosmological constant, and that the most precise estimates suggest that $w_{\rm x}$ is very close to $-1$. This points towards a model where DE behaves very similarly to a cosmological constant, i.e., the time dependence of the DE cannot be very large. In fact, Seljak et al. have also tested $w_{\rm x}$ at $z=1$, and found $w_{\rm x}(z=1)=-1.03^{+0.21}_{-0.28}$ and thus no significant change with $z$. \begin{figure}[h] \vspace{0.0cm} \center{\hspace{-5.8cm}\psfig{figure=schuecker_fig9a.ps,height=5.0cm}} \vspace{-5.85cm} \center{\hspace{+5.9cm}\psfig{figure=schuecker_fig9b.ps,height=5.5cm}} \vspace{-0.2cm} \caption[]{\small {\bf Left:} Combination of $w_{\rm x}$ measurements based on various SN samples and the REFLEX sample, assuming a redshift-independent $w_{\rm x}$. The likelihood contours ($1-3\sigma$) are centred around $w_{\rm x}=-1$, which corresponds to the cosmological constant (vertical line). The two curved lines correspond to the SEC (upper line) and the NEC (lower line). The curved line in the right part of the diagram corresponds to a specific holographic DE model of Li (2004). {\bf Right:} Normalization parameter of the matter power spectrum, $\sigma_8$, compared to the coupling strength $\beta$, where $\beta=0$ means no coupling between DE and DM. The inner region marked by the dashed horizontal lines (GCLST) marks observational constraints from the scatter of all $\sigma_8$ estimates obtained from galaxy clusters during the past 2 years. The broader range marked by the continuous horizontal lines (ALL) is a plausible interval which also takes into account $\sigma_8$ measurements from other observations.} \label{FIG_SC} \end{figure} Cluster abundance measurements have not yet reached the depth to be very sensitive to the normalized cosmological constant $\Omega_\Lambda$ or $\Omega_{\rm x}$. 
The most reliable estimates to date come from the X-ray gas mass fraction. Vikhlinin et al. (2003) used the cluster baryon mass as a proxy for the total mass, thereby avoiding the large uncertainties of the M/T or M/L relations, and obtained with 17 clusters at $z\approx 0.5$ the degeneracy relation $\Omega_{\rm m}+0.23\Omega_\Lambda=0.41\pm0.10$. For $\Omega_{\rm m}=0.3$, this would give $\Omega_\Lambda=0.48\pm0.12$. Allen et al. (2002) obtained with the X-ray gas mass fraction, in combination with the other measurements described above, the constraint $\Omega_\Lambda=0.95^{+0.48}_{-0.72}$. Ettori et al. (2003) obtained $\Omega_\Lambda=0.94\pm0.30$, and Rapetti et al. (2004) $\Omega_\Lambda=0.70\pm0.03$. Combining lensing and X-ray data, Sereno \& Longo (2004) obtained $\Omega_\Lambda=1.1\pm0.2$. The formal average and $1\sigma$ standard deviation of these measurements is \begin{equation}\label{Lambda} \Omega_\Lambda\,=\,0.83\pm0.24\,. \end{equation} The last effect of DE and thus $w_{\rm x}$ discussed here is interesting in its own right, but also offers a possibility for cross-checks of $w_{\rm x}$ measurements. The effect is related to a possible non-gravitational interaction between DE and ordinary matter (e.g. Amendola 2000). We showed above (e.g., Eq.\,\ref{WVAL}) that the most obvious candidate for DE is presently the cosmological constant, with all its catastrophic problems (Sect.\,\ref{CCP}). However, a very small redshift dependence of the DE density cannot be ruled out. Based on this possible residual effect, a further explanation would be a light scalar (quintessential) field $\phi$ whose potential can drive the observed accelerated expansion, similar to the de Sitter phase of inflationary scenarios. In general, $\phi$ interacts non-gravitationally with baryons and DM with a strength similar to gravity. 
However, some (unknown) symmetry could significantly reduce the interaction (Carroll 1998) -- otherwise it would have already been detected -- so that some coupling could remain. The following discussion is restricted to a possible interaction between DE and DM. The general covariance of the energy-momentum tensor requires the sum of DM ($m$) and DE ($\phi$) to be locally conserved, so that we can allow for a coupling of the two fluids, e.g., in the simple linear form, \begin{eqnarray}\label{COUPL} T^\mu_{\nu(\phi);\mu}\,&=&\,C(\beta)T_{(m)}\phi_{;\nu}\nonumber\,,\\ T^\mu_{\nu(m);\mu}\,&=&\,-C(\beta)T_{(m)}\phi_{;\nu}\,, \end{eqnarray} with the dimensionless coupling constant $\beta$ in $C(\beta)=\sqrt{\frac{16\pi G}{3c^4}}\beta$; more complicated choices are, however, possible. Observational constraints on the strength of a nonminimal coupling $\beta$ between $\phi$ and DM are $|\beta|<1$ (Damour et al. 1990). For a given potential $V(\phi)$, the corresponding equation of motion of $\phi$ can be solved. Amendola (2000) discussed exponential potentials which yield a present accelerating phase. A generic result is a saddle-point phase between $z=10^4$ and $z=1$ where the normalized energy density related to the scalar field, $\Omega_{\phi}$, is significantly higher compared to noncoupling models. The saddle-point phase thus leads to a further suppression of structure growth and thus to a smaller $\sigma_8$ (when the models are normalized with the CMB) compared to noninteracting quintessence models (Fig.\,\ref{FIG_SC} right). The present observations appear quite stringent. The X-ray cluster constraint $\sigma_8=0.76\pm0.10$ (Eq.\,\ref{SIGMA8}) obtained in Sect.\,\ref{MATTER} suggests a clear detection of a nonminimal coupling between DE and DM: \begin{equation}\label{BETA} \beta\,=\,0.10\,\pm\,0.01\,. \end{equation} This would provide an argument that DE cannot be the cosmological constant, because $\Lambda$ cannot couple non-gravitationally to any type of matter. 
In this case, the quite narrow experimental corridor found for $w_{\rm x}$ (Eq.\,\ref{WVAL}) would be responsible for the nonminimal coupling. However, a possibly underestimated $\sigma_8$ from galaxy clusters, and thus no nonminimal coupling and a DE in the form of a cosmological constant, seems to provide a more plausible alternative (see Sect.\,\ref{FUTURE}). \section{The Cosmological Constant Problem}\label{CCP} Recent measurements of the equation of state $w_{\rm x}$ of the DE do not leave much room for any exotic type of DE (Eq.\,\ref{WVAL} in Sect.\,\ref{DE}). In this section we take more seriously the most plausible assumption, namely that the observed accelerated cosmic expansion is driven by Einstein's cosmological constant. In this case, however, we are confronted with the long-standing cosmological constant problem (e.g., Weinberg 1989). To some extent, DE models based on scalar fields also suffer from this problem, because they have to find a physical mechanism (symmetry) which makes the value of $\Lambda$ negligible. To illustrate the problem, separate the effectively observed DE density as usual into a gravitational and a non-gravitational part, \begin{equation}\label{CCP1} \rho_{\rm \Lambda}^{\rm eff}= \rho_{\rm \Lambda}^{\rm GRT}+\rho_{\rm \Lambda}^{\rm VAC}= 10^{-26}\,{\rm kg}\,{\rm m}^{-3}\,, \end{equation} for $\Omega_\Lambda=0.7$. The non-gravitational part represents the physical vacuum. A free scalar field offers a convenient way to obtain an estimate of a plausible vacuum energy density. Interpreting this field as a physical operator, and thus constraining it by Heisenberg's uncertainty relations, we quantize the field in the canonical manner. The quantized field behaves like an infinite number of free harmonic oscillators. 
The sum of their zero-particle (vacuum) states, up to the Planck energy, corresponding to a cutoff in physical (not comoving) wavenumber, is \begin{equation}\label{CCP2} \rho_{\rm \Lambda}^{\rm VAC}=\frac{\hbar}{c}\, \int_0^{E_p/\hbar c}\frac{4\pi k^2 dk}{(2\pi)^3}\frac{1}{2} \sqrt{k^2+(mc/\hbar)^2}\approx 10^{+93}\,{\rm kg}\,{\rm m}^{-3}\,, \end{equation} for $m=0$. The cosmological constant problem is the extraordinary fine-tuning which is necessary to reconcile the effectively measured DE density in Eq.\,(\ref{CCP1}) with the physical vacuum (\ref{CCP2}). This simple (though quite naive) estimate immediately shows that something fundamental has gone wrong with the estimation of the physical vacuum in Eq.\,(\ref{CCP2}). An obvious answer is related to the fact that in the estimation of the physical vacuum, gravitational effects are completely ignored. One could think of a quantum gravity with strings. However, present versions of such theories seem to provide only arguments for a vanishing or a negative cosmological constant (Witten 2000, but see below). A hint of how the inclusion of gravity could effectively work in Eq.\,(\ref{CCP2}) comes from black hole thermodynamics (Bekenstein 1973, Hawking 1976). Analyzing quantized particle fields in curved but not quantized spacetimes, it became clear that the information necessary to fully describe the physics inside a certain region, characterized by its entropy, increases with the surface of the region. This is in clear conflict with standard non-gravitational theories, where entropy as an extensive variable always increases with volume. Non-gravitational theories would thus vastly overcount the amount of entropy, and thus the number of modes and degrees of freedom, when quantum effects of gravity become important. Later studies within a string theory context could verify a microscopic origin of the black hole entropy bound (Strominger \& Vafa 1996). 
Bousso (2002) generalizes the prescription of how entropy has to be determined even on cosmological scales, leading to the Covariant Entropy Bound. `t Hooft (1993) and Susskind (1995) elevated the entropy bound to a new fundamental hypothesis of physics, the Holographic Principle. A simple intuitive physical mechanism for this holographic reduction of degrees of freedom is related to the idea that each quantum mode in Eq.\,(\ref{CCP2}) should carry a certain amount of gravitating energy. If the modes were packed densely enough, they would immediately collapse to form a black hole. The reduction of the degrees of freedom comes from the exclusion of these collapsed states. Later studies of Cohen, Kaplan \& Nelson (1999), Thomas (2002), and Horvat (2004) made the exclusion of states inside their Schwarzschild radii more explicit, which further strengthens the entropy bound, so that a new estimate of the physical vacuum is \begin{equation}\label{CCP3} \rho_{\rm \Lambda}^{\rm HOL}= \frac{c^2}{8\pi G}\frac{1}{R_{\rm EH}^2}\approx 3\cdot 10^{-27}\,{\rm kg}\,{\rm m}^{-3}\,, \end{equation} where $R_{\rm EH}$ is the present event horizon of the Universe. This is, however, not a solution of the cosmological constant problem, because gravity and the exclusion of microscopic black hole states were put in by hand and not in a self-consistent manner by a theory of quantum gravity. Nevertheless, the similarity of Eqs.\,(\ref{CCP1}) and (\ref{CCP3}) might be taken as a hint that gravitational holography could be relevant for finding a more complete theory of physics. A method to test the consistency of present observations with gravitational holography is closely related to the fact that gravitational holography, as tested with the Covariant Entropy Bound on cosmological scales, is based on the validity of the Null Energy Condition (NEC). 
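The mismatch between the naive vacuum estimate of Eq.\,(\ref{CCP2}) and the holographic estimate of Eq.\,(\ref{CCP3}) is easy to check numerically. A minimal sketch in SI units, assuming a round value of about 16 billion light years for the present event horizon $R_{\rm EH}$ (an illustrative choice, not a measured input of the text):

```python
import math

# Physical constants (SI)
hbar = 1.0546e-34   # J s
c    = 2.998e8      # m s^-1
G    = 6.674e-11    # m^3 kg^-1 s^-2

# Naive vacuum density, Eq. (CCP2) with m = 0: the angular and mode
# factors reduce the integral to k_max**4 / (16 pi^2), with the cutoff
# k_max = E_p / (hbar c), i.e. the inverse Planck length.
E_p     = math.sqrt(hbar * c**5 / G)        # Planck energy, ~2e9 J
k_max   = E_p / (hbar * c)
rho_vac = (hbar / c) * k_max**4 / (16.0 * math.pi**2)

# Holographic estimate, Eq. (CCP3), with an assumed event horizon of
# roughly 16 Glyr (1 lyr = 9.461e15 m):
R_EH    = 16e9 * 9.461e15
rho_hol = c**2 / (8.0 * math.pi * G * R_EH**2)
```

The naive estimate overshoots the observed $\sim\!10^{-26}\,{\rm kg\,m^{-3}}$ by roughly 120 orders of magnitude, while the holographic value lands within a factor of a few of it, which is the point of the comparison in the text.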
However, in contrast to the NEC as discussed in Sect.\,\ref{DE} for the total cosmic fluid, Kaloper \& Linde (1999) could show that for the Covariant Entropy Bound each individual component of the cosmic substratum must obey \begin{equation}\label{HOLO} -1\,\le\, w_i\,\le\,+1\,. \end{equation} The problematic component is the equation of state of the dark energy. The observed values summarized in Sect.\,\ref{DE} suggest $w_{\rm x}=-1.00\pm 0.05$, which is consistent with the bound (\ref{HOLO}). One can take this as a first consistency test of probably the most important assumption of the Holographic Principle on macroscopic scales. However, a direct measurement of {\it cosmological entropy} on light sheets as defined in Bousso (2002) is still missing. Li (2004) recently combined holographic ideas with DE to `solve' the cosmological constant problem. Applying the stronger entropy bound as suggested by Thomas (2002) and Cohen et al. (1999), and using the cosmic event horizon as a characteristic scale of the Universe, accelerating solutions of the cosmic scale factor at low $z$ could be found, together with relations between the density of cosmic matter and $w_{\rm x}$ as shown in Fig.\,\ref{FIG_SC} (left). This model of holographic DE appears to be quite consistent with present observations and was in fact used in Eq.\,(\ref{CCP3}) to estimate the density of the physical vacuum. `t Hooft (1993) and Susskind (1995) give arguments suggesting that M-theory should satisfy the Holographic Principle. Horava (1999), in his `conservative' approach to M-theory, defined by specific gauge symmetries and invariance under spacetime diffeomorphisms and parity, could show that the entropy bound and thus holography emerges quite naturally. Therefore, any astronomical test supporting gravitational holography more directly, or some of its basic assumptions like the NEC as described above, should give important hints towards the development of a more complete theory of physics. 
There is a class of models based on higher dimensions which follow the Holographic Principle. Brane-worlds emerging from the model of Horava \& Witten (1996a,b) are phenomenological realizations of M-theory ideas. Recent theoretical investigations concentrate on the Randall \& Sundrum (1999a,b) models, where gravity is used in an elegant manner to compactify the extra dimension. Some of these models also follow the Holographic Principle. Here, matter and radiation of the visible Universe are located on a $(1+3)$-dimensional brane. Expressed in a simplified manner, non-gravitational forces, described by open strings, are attached with their endpoints to branes. Gravity, described by closed strings, can also propagate into the $(1+4)$-dimensional bulk and thus `dilutes' differently than in Newton or Einstein gravity. Table-top experiments of classical gravity (and BBN) confine the size of an extra dimension to $<0.16$\,mm (Hoyle et al. 2004). Einstein gravity formulated in a five-dimensional spacetime and combined with a five-dimensional cosmic line element carrying the symmetries of the assumed brane-world can yield FL-like solutions with the well-known phenomenology at low $z$ (Binetruy et al. 2000). \begin{figure}[] \vspace{-0.0cm} \center{\hspace{-1.0cm}\psfig{figure=schuecker_fig10.ps,height=8.0cm}} \vspace{-1.0cm} \caption[]{\small Predicted cluster power spectra based on matter power spectra of Rhodes et al. (2003). The effect of the extra dimension decreases the $P(k)$ amplitudes at large scales. The error bars are typical for a DUO-like X-ray cluster survey. In order to show the differences more clearly, the power spectra for each extra dimension are slightly shifted relative to each other along the comoving $k$ axis.} \label{FIG_RHODES} \end{figure} The analysis of perturbations in brane-world scenarios is not yet fully understood (Maartens 2004). Difficulties arise when perturbations created on the brane propagate into the bulk and react back onto the brane. 
Only on large scales are the computations under control, because here the effects of the backreaction are small and can be neglected. It is thus not yet clear whether the resulting effects on the power spectrum described below are mere reflections of such approximations or generic features of higher dimensions. Brax et al. (2003) and Rhodes et al. (2003) discussed the effects of extra dimensions on CMB anisotropies and large-scale structure formation. Models with extra dimensions can at low energies be described as scalar-tensor theories, where the light scalar fields (moduli fields) couple to ordinary matter in a manner depending on the details of the higher-dimensional theory. An illustration of the expected effects on the cluster power spectrum is given in Fig.\,\ref{FIG_RHODES}. The error bars are computed with cluster samples selected from the Hubble Volume Simulation under the conditions of the DUO wide survey (P. Schuecker, in prep.). It is seen that $P(k)$ gets flatter on scales around $300\,h^{-1}\,{\rm Mpc}$ with increasing size of the extra dimension. A careful statistical analysis shows that more than 30\,000 galaxy clusters are needed to clearly detect the presence of an extra dimension on scales below 0.16\,mm. \section{Summary and conclusions}\label{FUTURE} X-ray galaxy clusters give, in combination with other measurements, the following observational constraints and their $1\sigma$ errors: the matter density $\Omega_{\rm m}=0.31\pm0.03$, the normalized cosmological constant $\Omega_\Lambda=0.83\pm0.24$, the normalization of the matter power spectrum $\sigma_8=0.76\pm0.10$, the neutrino energy density $\Omega_\nu=0.006\pm0.003$, the equation of state of the DE $w_{\rm x}=-1.00\pm0.05$, and the linear interaction $\beta=0.10\pm0.01$ between DE and DM. These estimates suggest a spatially flat universe with $\Omega_{\rm tot}=\Omega_{\rm m}+\Omega_\Lambda=1.14\pm0.24$, as assumed in many cosmological tests based on galaxy clusters. 
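The quoted total follows from adding the two density parameters, with their errors combined in quadrature; this sketch assumes independent uncertainties and uses the $\Omega_\Lambda$ error of Eq.\,(\ref{Lambda}):

```python
import math

# 1-sigma constraints from the summary; sig_l as quoted in Eq. (Lambda)
omega_m, sig_m = 0.31, 0.03
omega_l, sig_l = 0.83, 0.24

omega_tot = omega_m + omega_l          # central value of Omega_tot
sig_tot   = math.hypot(sig_m, sig_l)   # quadrature sum of the errors
```

With these inputs $\Omega_{\rm tot}=1.14$ and $\sigma_{\rm tot}\approx 0.24$, dominated almost entirely by the $\Omega_\Lambda$ uncertainty.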
These estimates do not, however, provide an overall consistent physical interpretation. The problem is related to the low $\sigma_8$, which leads to an overestimate of the neutrino mass compared to laboratory experiments and to an interaction between DE and DM. Such a strong interaction is not consistent with a DE with $w_{\rm x}=-1.00\pm0.05$, because the latter indicates that DE behaves quite similarly to a cosmological constant, which cannot exchange energy beyond gravity. A more convincing explanation is that $\sigma_8=0.76$ should be regarded as a lower limit, so that DE would be the cosmological constant without any nonminimal couplings. Systematic underestimates of $\sigma_8$ by 10-20\% are not unexpected from recent simulations (e.g., Randall et al. 2002, Rasia et al. 2004). Present data do not allow any definite conclusion, especially in the light of the partially obscured effects of non-gravitational processes in galaxy clusters and because of our ignorance of a possible time dependence of $w_{\rm x}$. However, the inclusion of further parameters obviously improves our ability to perform consistency checks. Energy conditions form the basis of many phenomena related to gravity and holography. M-theory should also be holographic, as should brane-world gravity as a phenomenological realization of M-theory ideas. Tests of the resulting cosmologies will in the end confront alternative theories of quantum gravity. Observational tests on cosmological scales, as illustrated by the effects of an extra dimension on the cluster power spectrum, probably need the `ultimate' cluster survey, i.e. 
a census of possibly all $10^6$ rich galaxy clusters which might exist out to redshifts of $z=2$ in the visible Universe.\\ \\ {\it Acknowledgements: I would like to thank Hans B\"ohringer and the REFLEX team for our joint work on galaxy clusters and cosmology.} \subsection*{References} {\small \vspace{4pt}\noindent\hangindent=10mm Allen, S.W., Schmidt, R.W., Fabian, A.C., 2002, MNRAS, 334, L11 \vspace{4pt}\noindent\hangindent=10mm Allen, S.W., Schmidt, R.W., Fabian, A.C., Ebeling, H., 2003, MNRAS, 344, 43 \vspace{4pt}\noindent\hangindent=10mm Allen, S.W., Schmidt, R.W., Bridle, S.L., 2003, MNRAS, 346, 593 \vspace{4pt}\noindent\hangindent=10mm Allen, S.W., Schmidt, R.W., Ebeling, H., Fabian, A.C., van Speybroeck, L., 2004, MNRAS, 353, 457 \vspace{4pt}\noindent\hangindent=10mm Amendola, L., 2000, PhRvD, 62, 043511 \vspace{4pt}\noindent\hangindent=10mm Amossov, G., Schuecker, P., 2004, A\&A, 421, 425 \vspace{4pt}\noindent\hangindent=10mm Ashie, Y., et al., 2004, PhRvL, 93, 101801 \vspace{4pt}\noindent\hangindent=10mm Bahcall, N.A., 1999, in Formation of structure in the universe, eds. A. Dekel \& J.P. Ostriker, Cambridge Univ. Press, Cambridge, p.\,135 \vspace{4pt}\noindent\hangindent=10mm Bahcall, N.A., Fan, X., 1998, ApJ, 504, 1 \vspace{4pt}\noindent\hangindent=10mm Bahcall, N.A., et al. 
2003, ApJ, 585, 182 \vspace{4pt}\noindent\hangindent=10mm Barcel\'{o}, C., Visser, M., 2001, PhLB, 466, 127 \vspace{4pt}\noindent\hangindent=10mm Bartelmann, M., Dolag, K., Perrotta, F., Baccigalupi, C., Moscardini, L., Meneghetti, M., Tormen, G., 2004, astro-ph/0404489 \vspace{4pt}\noindent\hangindent=10mm Bekenstein, J., 1973, PhRvD, 9, 3292 \vspace{4pt}\noindent\hangindent=10mm Binetruy, P., Deffayet, C., Ellwanger, U., Langlois, D., 2000, PhLB, 477, 285 \vspace{4pt}\noindent\hangindent=10mm Birkinshaw, M., Gull, S.F., Hardebeck, H., 1984, Natur, 309, 34 \vspace{4pt}\noindent\hangindent=10mm B\"ohringer, H., Voges, W., Fabian, A.C., Edge, A.C., Neumann, D.M., 1993, MNRAS, 264, L25 \vspace{4pt}\noindent\hangindent=10mm B\"ohringer, H., et al., 2000, ApJS, 129, 35 \vspace{4pt}\noindent\hangindent=10mm B\"ohringer, H., et al. 2001, A\&A, 369, 826 \vspace{4pt}\noindent\hangindent=10mm B\"ohringer, H., et al. 2002, ApJ, 566, 93 \vspace{4pt}\noindent\hangindent=10mm B\"ohringer, H., et al. 2004, A\&A, 425, 367 \vspace{4pt}\noindent\hangindent=10mm Bond, J.R., 1995, PhRvL, 74, 4369 \vspace{4pt}\noindent\hangindent=10mm Borgani, S., Guzzo, L., 2001, Natur, 409, 39 \vspace{4pt}\noindent\hangindent=10mm Borgani, S., Rosati, P., Tozzi, P., Norman, C., 1999, ApJ, 517, 40 \vspace{4pt}\noindent\hangindent=10mm Borgani, S., et al. 
2001, ApJ, 561, 13 \vspace{4pt}\noindent\hangindent=10mm Borgani, S., Murante, G., Springel, V., Diaferio, A., Dolag, K., Moscardini, L., Tormen, G., Tornatore, L., Tozzi, P., 2004, MNRAS, 348, 1078 \vspace{4pt}\noindent\hangindent=10mm Bousso, R., 2002, RvMP, 74, 825 \vspace{4pt}\noindent\hangindent=10mm Brax, Ph., van de Bruck, C., Davis, A.-C., Rhodes, C.S., 2003, PhRvD, 67, 023512 \vspace{4pt}\noindent\hangindent=10mm Briel, U.G., Henry, J.P., B\"ohringer, H., 1992, A\&A, 259, L31 \vspace{4pt}\noindent\hangindent=10mm Briel, U.G., Finoguenov, A., Henry, J.P., 2004, A\&A, 426, 1 \vspace{4pt}\noindent\hangindent=10mm Burles, S., Nollett, K.M., Turner, M.S., 2001, ApJL, 552, L1 \vspace{4pt}\noindent\hangindent=10mm Caldwell, R.R., 2002, PhLB, 545, 17 \vspace{4pt}\noindent\hangindent=10mm Caldwell, R.R., Dave, R., Steinhardt, P.J., 1998a, PhRvL, 80, 1582 \vspace{4pt}\noindent\hangindent=10mm Caldwell, R.R., Dave, R., Steinhardt, P.J., 1998b, Ap\&SS, 261, 303 \vspace{4pt}\noindent\hangindent=10mm Carlstrom, J.E., Holder, G.P., Reese, E.D., 2002, ARAA, 40, 643 \vspace{4pt}\noindent\hangindent=10mm Carroll, S.M., 1998, PhRvL, 81, 3067 \vspace{4pt}\noindent\hangindent=10mm Chaboyer, B., Krauss, M., 2002, ApJL, 567, L45 \vspace{4pt}\noindent\hangindent=10mm Chiba, T., Okabe, T., Yamaguchi, M., 2000, PhRvD, 62, 023511 \vspace{4pt}\noindent\hangindent=10mm Cohen, A.G., Kaplan, D.B., Nelson, A.E., 1999, PhRvL, 82, 4971 \vspace{4pt}\noindent\hangindent=10mm Collins, C.A., et al. 
2000, MNRAS, 319, 939 \vspace{4pt}\noindent\hangindent=10mm Cruz, M., Martinez-Gonzalez, E., Vielva, P., Cayon, L., 2004, MNRAS (in press) \vspace{4pt}\noindent\hangindent=10mm Damour, T., Gibbons, G.W., Gundlach, C., 1990, PhRvL, 64, 123 \vspace{4pt}\noindent\hangindent=10mm David, L.P., Jones, C., Forman, W., 1995, ApJ, 445, 578 \vspace{4pt}\noindent\hangindent=10mm Ebeling, H., Edge, A.C., B\"ohringer, H., Allen, S.W., Crawford, C.S., Fabian, A.C., Voges, W., Huchra, J.P., 1998, MNRAS, 301, 881 \vspace{4pt}\noindent\hangindent=10mm Edge, A.C., 2004, in Clusters of Galaxies, eds. J.S. Mulchaey, A. Dressler, and A. Oemler, Cambridge Univ. Press, Cambridge, p.\,58 \vspace{4pt}\noindent\hangindent=10mm Efstathiou, G., Frenk, C.S., White, S.D.M., Davis, M., 1988, MNRAS, 235, 715 \vspace{4pt}\noindent\hangindent=10mm Eke, V.R., Cole, S., Frenk, C.S., 1996, MNRAS, 282, 263 \vspace{4pt}\noindent\hangindent=10mm Epstein, H., Glaser, V., Jaffe, A., 1965, NCim, 36, 2296 \vspace{4pt}\noindent\hangindent=10mm Ettori, S., Fabian, A.C., 1999, MNRAS, 305, 834 \vspace{4pt}\noindent\hangindent=10mm Ettori, S., Tozzi, P., Rosati, P., 2003, A\&A, 398, 879 \vspace{4pt}\noindent\hangindent=10mm Ettori, S., et al. 2004, MNRAS, 354, 111 \vspace{4pt}\noindent\hangindent=10mm Evrard, A.E., 1997, MNRAS, 292, 289 \vspace{4pt}\noindent\hangindent=10mm Evrard, A.E., Metzler, C.A., Navarro, J.F., 1996, ApJ, 469, 494 \vspace{4pt}\noindent\hangindent=10mm Fabian, A.C., et al. 2000, MNRAS, 318, L65 \vspace{4pt}\noindent\hangindent=10mm Fabian, A.C., Sanders, J.S., Allen, S.W., Crawford, C.S., Iwasawa, K., Johnstone, R.M., Schmidt, R.W., Taylor, G.B., 2003, MNRAS, 344, L43 \vspace{4pt}\noindent\hangindent=10mm Feretti, L., Gioia, I.M., Giovannini, G., Eds., 2002, Merging processes in galaxy clusters, Astrophysics and Space Science Library, Vol. 
272, Kluwer Academic Publisher, Dordrecht \vspace{4pt}\noindent\hangindent=10mm Flanagan, \'{E}.\'{E}., Wald, R.M., 1996, PhRvD, 54, 6233 \vspace{4pt}\noindent\hangindent=10mm Forman, W., et al., 2003, ApJ (submitted), astro-ph/0312576 \vspace{4pt}\noindent\hangindent=10mm Friedman, J.L., Schleich, K., Witt, D.M., 1993, PhRvL, 71, 1486 \vspace{4pt}\noindent\hangindent=10mm Fukuda, Y., et al., 1998, PhRvL, 81, 1562 \vspace{4pt}\noindent\hangindent=10mm Fukugita, M., Liu, G.-C., Sugiyama, N., 2000, PhRvL, 84, 1082 \vspace{4pt}\noindent\hangindent=10mm Fukugita, M., Peebles , J.P.E. 2004, astro-ph/0406095 \vspace{4pt}\noindent\hangindent=10mm Gladders, M.D., Yee, D., Howard, K.C., 2004, ApJS (in press), astro-ph/0411075 \vspace{4pt}\noindent\hangindent=10mm Goto, T., et al. 2002, AJ, 123, 1807 \vspace{4pt}\noindent\hangindent=10mm Griffiths, R.E., Petre, R., Hasinger, G., et al. 2004, in Proc. SPIE conference (submitted) \vspace{4pt}\noindent\hangindent=10mm Haiman, Z., Mohr, J.J., Holder, G.P., 2001, ApJ, 553, 545 \vspace{4pt}\noindent\hangindent=10mm Hawking, S.W., Penrose, R., 1970, in Proc. of the Roycal Society of London. Series A. Mathematical and Physical Sciences. Vol. 314, Issue 1519, p.\,529 \vspace{4pt}\noindent\hangindent=10mm Hawking, S.W., Ellis, G.F.R., 1973, The large scale structure of space-time, Cambridge Monographs on Mathematical Physics, Cambridge Univ. Press, London \vspace{4pt}\noindent\hangindent=10mm Hawking, S.W., 1976, PhRvD, 13, 191 \vspace{4pt}\noindent\hangindent=10mm Henry, J.P. 
2004, ApJ, 609, 603 \vspace{4pt}\noindent\hangindent=10mm Horava, P., 1999, PhRvD, 59, 046004 \vspace{4pt}\noindent\hangindent=10mm Horava, P., Witten, E., 1996a, NuPhB, 460, 506 \vspace{4pt}\noindent\hangindent=10mm Horava, P., Witten, E., 1996b, NuPhB, 475, 94 \vspace{4pt}\noindent\hangindent=10mm Horvat, R., 2004, PhRvD, 70, 087301 \vspace{4pt}\noindent\hangindent=10mm Hoyle, C.D., Kapner, D.J., Heckel, B.R., Adelberger, E.G., Gundlach, J.H., Schmidt, U., Swanson, H.E., 2004, PhRvD, 70, 042004 \vspace{4pt}\noindent\hangindent=10mm Ikebe, Y., Reiprich, T.H., B\"ohringer, H., Tanaka, Y., Kitayama, T., 2002, A\&A, 383, 773 \vspace{4pt}\noindent\hangindent=10mm Jenkins, A., Frenk, C.S., White, S.D.M., et al. 2001, MNRAS, 321, 372 \vspace{4pt}\noindent\hangindent=10mm Kaiser, N., 1984, ApJL, 284, L9 \vspace{4pt}\noindent\hangindent=10mm Kaiser, N., 1986, MNRAS, 222, 323 \vspace{4pt}\noindent\hangindent=10mm Kaiser, N., 1987, MNRAS, 227, 1 \vspace{4pt}\noindent\hangindent=10mm Kaiser, N., Squires, G., 1993, ApJ, 404, 441 \vspace{4pt}\noindent\hangindent=10mm Kaloper, N., Linde, A., 1999, PhRvD, 60, 103509 \vspace{4pt}\noindent\hangindent=10mm Komatsu, E., et al., 2003, ApJS, 148, 119 \vspace{4pt}\noindent\hangindent=10mm Kim, R.S.J., et al., 2002, AJ, 123, 20 \vspace{4pt}\noindent\hangindent=10mm Klypin, A., Maccio, A.V., Mainini, R., Bonometto, S.A., 2003, ApJ, 599, 31 \vspace{4pt}\noindent\hangindent=10mm Lacey, C.G., Cole, S.M., 1993, MNRAS, 262, 627 \vspace{4pt}\noindent\hangindent=10mm Lacey, C.G., Cole, S.M., 1994, MNRAS, 271, 676 \vspace{4pt}\noindent\hangindent=10mm Lahav, O., Rees, M.J., Lilje, P.B., Primack, J.R., 1991, MNRAS, 251, 128 \vspace{4pt}\noindent\hangindent=10mm Li, M., 2004, Phys. Lett. 
B (submitted), astro-ph/0403127 \vspace{4pt}\noindent\hangindent=10mm Lima, J.A.S., Cunha, J.V., Alcaniz, J.S., 2003, PhRvD, 68, 023510 \vspace{4pt}\noindent\hangindent=10mm Ma, C.-P., Caldwell, R.R., Bode, P., Wang, L., 1999, ApJ, 521, L1 \vspace{4pt}\noindent\hangindent=10mm Maartens, R., 2004, LRR, 7, 7 \vspace{4pt}\noindent\hangindent=10mm Maor, I., Brustein, R., McMahon, J., Steinhardt, P.J., 2002, PhRvD, 65, 123003 \vspace{4pt}\noindent\hangindent=10mm Matarrese, S., Coles, P., Lucchin, F., Moscardini, L., 1997, MNRAS, 286, 115 \vspace{4pt}\noindent\hangindent=10mm Mather, J.C., et al. 1990, ApJ, 354, L37 \vspace{4pt}\noindent\hangindent=10mm Mayo, A.E., Bekenstein, J.D., 1996, PhRvD, 54, 5059 \vspace{4pt}\noindent\hangindent=10mm Melchiorri, A., Mersini, L., \"Odman, C.J., Trodden, M., 2003, PhRvD, 68, 043509 \vspace{4pt}\noindent\hangindent=10mm Mo, H.J., White, S.D.M, 1996, MNRAS, 282, 347 \vspace{4pt}\noindent\hangindent=10mm Morris, M.S., Thorne, K.S., Yurtsever, U., 1988, PhRvL, 61, 1446 \vspace{4pt}\noindent\hangindent=10mm Mota, D.F., van de Bruck, C., 2004, A\&A, 421, 71 \vspace{4pt}\noindent\hangindent=10mm Peebles, P.J.E., 1980, The Large-Scale Structure of the Universe, Princeton Univ. Press, Princeton \vspace{4pt}\noindent\hangindent=10mm Peebles, P.J.E., 1993, Principles of Physical Cosmology, Univ. Press, Princeton, Princeton \vspace{4pt}\noindent\hangindent=10mm Peebles, P.J.E., Ratra, B., 2004 RvMP, 75, 559 \vspace{4pt}\noindent\hangindent=10mm Pierpaoli, E., Borgani, S., Scott, D., White, M., 2003, MNRAS, 242, 163 \vspace{4pt}\noindent\hangindent=10mm Ponman, T.J., Cannon, D.B., Navarro, J.F., 1999, Natur, 397, 135 \vspace{4pt}\noindent\hangindent=10mm Pope, A.C., et al. 
2004, ApJ, 607, 655 \vspace{4pt}\noindent\hangindent=10mm Postman, M., Lubin, L.M., Gunn, J.E., Oke, J.B., Hoessel, J.G., Schnieder, D.P., Christensen, J.A., 1996, AJ, 111, 615 \vspace{4pt}\noindent\hangindent=10mm Randall, L., Sundrum, R., 1996a, PhRvL, 83, 3370 \vspace{4pt}\noindent\hangindent=10mm Randall, L., Sundrum, R., 1996b, PhRvL, 83, 4690 \vspace{4pt}\noindent\hangindent=10mm Randall, S.W., Sarazin, C.L., Ricker, P.M., 2002, ApJ, 577, 579 \vspace{4pt}\noindent\hangindent=10mm Rapetti, D., Allen, S,W, Weller, J., 2004, MNRAS (submitted), astro-ph/0409574 \vspace{4pt}\noindent\hangindent=10mm Rasia, E., Mazzotta, P., Borgani, S., Moscardini, L., Dolag, K., Tormen, G., Diaferio, A., Murante, G., 2004, ApJL (submitted), astro-ph/0409650 \vspace{4pt}\noindent\hangindent=10mm Ratra, B., Peebles, P.J.E., 1988, PhRvD, 37, 3406 \vspace{4pt}\noindent\hangindent=10mm Reiprich, T.H.,, B\"ohringer, H., 2002, ApJ, 567, 716 \vspace{4pt}\noindent\hangindent=10mm Rhodes, C.S., van de Bruck, C., Brax, Ph., Davis, A.-C., 2003, PhRvD, 68, 3511 \vspace{4pt}\noindent\hangindent=10mm Richstone, D., Loeb, A., Turner, E., 1992, ApJ, 363, 477 \vspace{4pt}\noindent\hangindent=10mm Riess, A.G., Filippenko, A.V., Challis, P., et al., 1998, AJ, 116, 1009 \vspace{4pt}\noindent\hangindent=10mm Riess, A.G., et al. 2004, ApJ, 607, 665 \vspace{4pt}\noindent\hangindent=10mm Rosati, P., Borgani, S., Norman, C., 2002, ARAA, 40, 539 \vspace{4pt}\noindent\hangindent=10mm Szalay, A.S., et al. 2003, ApJ, 591, 1 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., B\"ohringer, H., 1998, A\&A, 339, 315 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., B\"ohringer, H., Arzner, K., Reiprich, T.H., 2001a, A\&A, 370, 715 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., B\"ohringer, H., Reiprich, T.H., Feretti, L., 2001b, A\&A, 378, 408 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., et al. 
2001c, A\&A, 368, 86 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., Guzzo, L., Collins C.A., B\"ohringer, H., 2002, MNRAS, 335, 807 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., B\"ohringer, H., Collins, C.A., Guzzo, L., 2003a, A\&A, 398, 867 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., Caldwell, R.R., B\"ohringer, H., Collins, C.A., Guzzo, L., Weinberg, N.N., 2003b, A\&A, 402, 53 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., B\"ohringer, H., Voges, W., 2004, A\&A, 420, 425 \vspace{4pt}\noindent\hangindent=10mm Schuecker, P., Finoguenov, A., Miniati, F., B\"ohringer, H., Briel, U.G., 2004, A\&A, 426, 387 \vspace{4pt}\noindent\hangindent=10mm Seljak, U., et al., 2004, PhRvD (submitted) astro-ph 0407372 \vspace{4pt}\noindent\hangindent=10mm Sereno, M., Longo, G., 2004, MNRAS, 354, 1255 \vspace{4pt}\noindent\hangindent=10mm Sheth, R.K., Tormen, G., 2002, MNRAS, 329, 61 \vspace{4pt}\noindent\hangindent=10mm Spergel, D., et al. 2003, ApJS, 148, 175 \vspace{4pt}\noindent\hangindent=10mm Steigman, G., 2002 as cited in Peebles, P.J.E., Ratra, B., 2003, RvMP, 75, 559 \vspace{4pt}\noindent\hangindent=10mm Strominger, A., Vafa, C., 1996, PhL B, 379, 99 \vspace{4pt}\noindent\hangindent=10mm Susskind, L., 1995, JMP, 36, 6377 \vspace{4pt}\noindent\hangindent=10mm Suwa, T., Habe, A., Yoshikawa, K., Okamoto, T., 2003, ApJ, 588, 7 \vspace{4pt}\noindent\hangindent=10mm Tegmark, M., Zaldarriaga, M., 2002, PhRvD, 66, 103508 \vspace{4pt}\noindent\hangindent=10mm Thomas, S., 2002, PhRvL, 89, 081301 \vspace{4pt}\noindent\hangindent=10mm t`Hooft, G., 1993, in Salamfestschrift: a collection of talks, World Scientific Series in 20th Century Physics, Vol.\,4, eds. A. Ali, J. Ellis and S. 
Randjbar-Daemi, World Scientific, 1993, e-print gr-qc/9310026 \vspace{4pt}\noindent\hangindent=10mm Vauclair, S.C., et al., 2003, 412, 37 \vspace{4pt}\noindent\hangindent=10mm Viana, P.T.P., Liddle, A.R., 1996, MNRAS, 281, 323 \vspace{4pt}\noindent\hangindent=10mm Vikhlinin, A., Markevitch, M., Murray, S.S. 2001, ApJ, 551, 160 \vspace{4pt}\noindent\hangindent=10mm Vikhlinin, A., et al., 2003, ApJ, 590, 15 \vspace{4pt}\noindent\hangindent=10mm Visser, M., 1997, PhRvD, 56, 7578 \vspace{4pt}\noindent\hangindent=10mm Vogeley, M.S., Szalay, A.S., 1996, ApJ, 465, 34 \vspace{4pt}\noindent\hangindent=10mm Voit, G.M., 2004 RMP (in press), astro-ph/0410173 \vspace{4pt}\noindent\hangindent=10mm Wald, R.M., 1984, General Relativity, The University of Chicago Press, Chicago and London \vspace{4pt}\noindent\hangindent=10mm Wang, L., Steinhardt, P.J., 1998, ApJ, 508, 483 \vspace{4pt}\noindent\hangindent=10mm Weinberg, S., 1989, RvMP, 61, 1 \vspace{4pt}\noindent\hangindent=10mm Wetterich, C., 1988, NuPhB, 302, 668 \vspace{4pt}\noindent\hangindent=10mm White, S.D.M., Frenk, C.S., 1991, ApJ, 379, 52 \vspace{4pt}\noindent\hangindent=10mm White, S.D.M., Efstathiou, G., Frank, C.S., 1993, MNRAS, 262, 1023 \vspace{4pt}\noindent\hangindent=10mm White, S.D.M., Navarro, J.F., Evrard, A.E., Frank, C.S., 1993, Natur, 366, 429 \vspace{4pt}\noindent\hangindent=10mm White, S.D.M., Fabian, A.C., 1995, MNRAS, 273, 72 \vspace{4pt}\noindent\hangindent=10mm Witten, E., 2000, hep-ph/0002297 \vspace{4pt}\noindent\hangindent=10mm Wu, X.-P., Chiueh, T., Fang, L.-Z., Xue, Y.-J., 1998, MNRAS, 301, 861 \vspace{4pt}\noindent\hangindent=10mm Zhang, Y.-Y., Finoguenov, A., B\"ohringer, H., Ikebe, Y., Matsushita, K., Schuecker, P., 2004, A\&A, 413, 49 \vspace{4pt}\noindent\hangindent=10mm Zlatev, I., Wang, L., Steinhardt, P.J., 1999, PhRvL, 82, 896 } \vfill \end{document}
\section{Introduction} The study of radiation-matter interaction in two-level quantum mechanical systems has led to several fascinating phenomena, such as the Autler-Townes doublet \cite{Aut55}, vacuum Rabi splitting \cite{Aga84,Tho92, Khi06}, and antibunching and squeezing \cite{Gar86,Car87,Vya92}. In particular, the interaction of two-level atoms in a cavity with a coherent source of light and coupled to a squeezed vacuum has been extensively studied \cite{Aga89,Par90,Cir91,Rice96,Ge095,Smy96,Cle20,Str05,Set08}. Currently there is renewed interest in such studies in the context of semiconductor systems such as quantum dots (QDs) and quantum wells (QWs) \cite{Baa04,Qua05,Hic108,Hic208, Set10}, given their potential application in opto-electronic devices \cite{Shi07}. In this regard, intersubband excitonic transitions, which are similar to two-level atomic transitions, have been primarily exploited. However, it is important to note that the quantum nature of the fluorescent light emitted by excitons in QWs embedded inside a microcavity differs somewhat from the predictions of atomic cavity QED. For example, unlike the antibunching observed for atoms embedded in a cavity, a QW exhibits bunching effects in the fluorescent spectrum of the emitted radiation \cite{Vya20,Ere03}. Further, in the strong coupling regime, exciton-photon mode splitting and oscillatory excitonic emission have been demonstrated for a resonant microcavity-QW interaction \cite{Wei92,Pau95,Jac95}. Recently, the effect of a strong non-resonant drive on the intersubband excitonic transition has been investigated, and the observation of Autler-Townes doublets was reported \cite{Dyn05,Car05,Wag2010}. In light of these new results, a pertinent question is thus: how is the quantum nature of the radiation emitted by a QW in a microcavity affected by the presence of a squeezed vacuum environment and a non-resonant drive? We investigate this question in the present paper.
We explore the interaction between an external field and a QW placed in a microcavity coupled to a squeezed vacuum reservoir. Our analysis is restricted to the weak excitation regime, where the density of excitons is small. This allows us to neglect exciton-exciton interactions, thereby simplifying the problem considerably while preserving the physical insight. We further assume the cavity-exciton interaction to be strong, which brings in interesting features. Note that we take the external field to be in resonance with the cavity mode throughout the paper. We analyze the effect of the exciton-photon detuning, the external coherent light, and the squeezed reservoir on the quantum statistical properties and polariton resonances in the strong coupling and low excitation regimes. The effect of the coherent light on the dynamical evolution of the fluorescent intensity is remarkably different from that of the squeezed vacuum reservoir, owing to the distinct nature of the photons generated by the two sources. Both sources lead to the excitation of two or more excitons in the quantum well, creating a probability for the emission of two or more photons simultaneously. As a result, the photons tend to propagate in bunches rather than at random. Moreover, the fluorescent light emitted by the excitons in the quantum well exhibits a nonclassical property, namely quadrature squeezing. \section{Model and Quantum Langevin equations} We consider a semiconductor quantum well (QW) in a cavity driven by external coherent light and coupled to a single-mode squeezed vacuum reservoir. The scheme is outlined in Fig. \ref{fig0}. Throughout this work we restrict ourselves to the linear regime, in which the density of excitons is small enough that exciton-exciton scattering can be ignored.
We assume that the driving laser is at resonance with the cavity mode, while the exciton transition frequency is detuned from the cavity mode by an amount $\Delta=\omega_{0}-\omega_{c}$, with $\omega_{0}$ and $\omega_{c}$ being the exciton and cavity mode frequencies, respectively. The interaction of the cavity mode with the resonant pump field and the exciton is described, in the rotating wave and dipole approximations, by the Hamiltonian \begin{equation} H=\Delta b^{\dagger}b+i\varepsilon (a^{\dagger}-a)+i\text{g}(a^{\dagger}b-ab^{\dagger})+H_{\text{loss}}, \end{equation} where $a$ and $b$ are the annihilation operators for the cavity and exciton modes, respectively, satisfying the commutation relations $[a,a^{\dagger}]=[b,b^{\dagger}]=1$; $\text{g}$ is the photon-exciton coupling constant; $\varepsilon$, assumed to be real and constant, is proportional to the amplitude of the pump field; and $H_{\text{loss}}$ is the Hamiltonian that describes the interaction of the exciton with the vacuum reservoir as well as the interaction of the cavity mode with the squeezed vacuum reservoir. \begin{figure}[t] \includegraphics[width=8cm]{fig1.eps} \caption{Schematic representation of a quantum well (QW) in a driven cavity coupled to a squeezed vacuum reservoir.} \label{fig0} \end{figure} The quantum Langevin equations, taking into account the dissipation processes, can be written as \begin{equation}\label{2} \frac{da}{dt}=-\frac{\kappa}{2} a+\text{g}b+\varepsilon+F(t), \end{equation} \begin{equation}\label{3} \frac{db}{dt}=-\left(\frac{\gamma}{2}+i\Delta\right)b-\text{g}a+G(t), \end{equation} where $\kappa$ and $\gamma$ are the cavity mode decay rate via the port mirror and the spontaneous emission decay rate of the exciton, respectively; $F=\sqrt{\kappa}F_{\text{in}}$ and $G=\sqrt{\gamma}G_{\text{in}}$, with $F_{\text{in}}$ and $G_{\text{in}}$ being the Langevin noise operators for the cavity and exciton modes, respectively.
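Since the Langevin noise operators average to zero, the mean amplitudes $\langle a\rangle$ and $\langle b\rangle$ obey the deterministic part of Eqs. \eqref{2} and \eqref{3}, and their steady state follows from a $2\times2$ linear solve. A minimal numerical sketch (the parameter values are illustrative, not those used in the figures):

```python
import numpy as np

# Deterministic part of the Langevin equations (2)-(3):
#   d<a>/dt = -(kappa/2)<a> + g<b> + eps
#   d<b>/dt = -(gamma/2 + i*Delta)<b> - g<a>
# Steady state: solve M x = -c with x = (<a>, <b>).
kappa, gamma, Delta, g, eps = 1.2, 1.0, 2.0, 50.0, 7.0  # illustrative, g >> kappa, gamma

M = np.array([[-kappa / 2, g],
              [-g, -(gamma / 2 + 1j * Delta)]], dtype=complex)
c = np.array([eps, 0.0], dtype=complex)
a_ss, b_ss = np.linalg.solve(M, -c)

# In the strong-coupling regime the exciton amplitude approaches -eps/g,
# so the coherent part of the steady-state exciton population is eps^2/g^2.
print(abs(b_ss) ** 2, eps ** 2 / g ** 2)
```

The limit $|\langle b\rangle|^2\rightarrow\varepsilon^2/\text{g}^2$ reappears below as the coherent part of the steady-state fluorescent intensity.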
Both noise operators have zero mean, i.e., $\langle F_{\text{in}}\rangle=\langle G_{\text{in}}\rangle=0$. For a cavity mode coupled to a squeezed vacuum reservoir, the noise operator $F_{\text{in}}(t)$ satisfies the following correlations: \begin{equation}\label{4} \left\langle F_{\text{in}}(t)F_{\text{in}}^{\dagger}(t^{\prime })\right\rangle = (N+1) \delta (t-t^{\prime }), \end{equation} \begin{equation}\label{5} \left\langle F_{\text{in}}^{\dagger}(t)F_{\text{in}}(t^{\prime })\right\rangle =N\delta (t-t^{\prime }), \end{equation} \begin{equation} \left\langle F_{\text{in}}(t)F_{\text{in}}(t^{\prime })\right\rangle =\left\langle F_{\text{in}}^{\dagger}(t)F_{\text{in}}^{\dagger}(t^{\prime })\right\rangle= M\delta (t-t^{\prime }), \end{equation} where $N=\sinh^2r$ and $M=\sinh r\cosh r$, with $r$ being the squeeze parameter, characterize the mean photon number and the phase correlations of the squeezed vacuum reservoir, respectively. Further, the exciton noise operator $G_{\text{in}}$ satisfies the following correlations: \begin{equation}\label{7} \left\langle G_{\text{in}}(t)G_{\text{in}}^{\dagger}(t^{\prime })\right\rangle =\delta (t-t^{\prime }), \end{equation} \begin{equation}\label{8} \left\langle G_{\text{in}}^{\dagger}(t)G_{\text{in}}(t^{\prime })\right\rangle =\left\langle G_{\text{in}}(t)G_{\text{in}}(t^{\prime })\right\rangle =\left\langle G_{\text{in}}^{\dagger}(t)G_{\text{in}}^{\dagger}(t^{\prime })\right\rangle=0.
\end{equation} Following the method outlined in \cite{Set10}, we obtain the solution of the quantum Langevin equations \eqref{2} and \eqref{3} in the strong coupling regime ($g\gg \gamma, \kappa$) as \begin{align}\label{a1} a(t)&= \eta_{1}(t)\varepsilon+\eta_{+}(t)a(0)+\eta_{3}(t)b(0)\notag\\ &+\int_{0}^{t}dt^{\prime}\eta_{+}(t-t^{\prime})F(t^{\prime})+\int_{0}^{t}dt^{\prime}\eta_{3}(t-t^{\prime})G(t^{\prime}), \end{align} \begin{align}\label{a2} b(t)&=\eta_{-}(t)b(0) -\eta_{4}(t)\varepsilon-\eta_{3}(t)a(0)\notag\\ &-\int_{0}^{t}dt^{\prime}\eta_{3}(t-t^{\prime})F(t^{\prime})+\int_{0}^{t}dt^{\prime}\eta_{-}(t-t^{\prime})G(t^{\prime}), \end{align} where \begin{align}\label{a3} \eta_{1}(t)=\frac{1}{\mu}\sin \mu t~ e^{-(\Gamma+i\Delta/2)t}, \end{align} \begin{align}\label{a4} \eta_{\pm}(t)=\frac{1}{2\mu}\left[2\mu\cos \mu t\pm i\Delta \sin \mu t\right]e^{-(\Gamma+i\Delta/2)t}, \end{align} \begin{equation}\label{a5} \eta_{3}(t)=\frac{g}{\mu}\sin \mu t~ e^{-(\Gamma+i\Delta/2)t}, \end{equation} \begin{align}\label{a7} \eta_{4}(t)=\frac{1}{g}-\frac{1}{g}\left[\cos (\mu t)+\frac{i\Delta}{2\mu}\sin (\mu t)\right] e^{-(\Gamma+i\Delta/2)t}, \end{align} with $\Gamma=(\kappa+\gamma)/4$ and $\mu=\sqrt{g^2+\Delta^2/4}$. Using these solutions, we study the dynamical behavior of the intensity, the power spectrum, the second-order correlation function, and the quadrature variance of the fluorescent light emitted by the quantum well in the following sections. \section{Intensity of fluorescent light} In this section we analyze the properties of the fluorescent light emitted by the excitons in the quantum well. In particular, we study the effects of the external driving field, the exciton-photon detuning, and the squeezed vacuum reservoir. Note that the intensity of the fluorescent light is proportional to the mean number of excitons.
To this end, the intensity of the fluorescent light can be expressed in terms of the solutions of the Langevin equations as \begin{equation}\label{m1} \langle b^{\dagger}b\rangle=\varepsilon^2|\eta_{4}(t)|^2+|\eta_{-}(t)|^2+\kappa N\int_{0}^{t}|\eta_{3}(t-t')|^2dt'. \end{equation} Here we have assumed that the cavity mode is initially in the vacuum state [$\langle a^{\dagger}(0)a(0)\rangle=0$], while the quantum well initially contains a single exciton [$\langle b^{\dagger}(0)b(0)\rangle=1$]. In \eqref{m1} the first term corresponds to the contribution from the external coherent light, while the last term is due to the squeezed vacuum reservoir. It is also easy to see that the intensity is independent of the parameter $M$, which characterizes the phase correlations of the reservoir, implying that the same result would be obtained if the cavity mode were coupled to a thermal reservoir. Performing the integration using Eqs. \eqref{a4}-\eqref{a7}, we obtain \begin{align}\label{m2} \langle b^{\dagger}b\rangle&=\frac{\varepsilon^2}{g^2}+\frac{g^2\kappa N}{4\Gamma\mu^2}+(\lambda_{1}+\kappa N\lambda_{2})e^{-2\Gamma t}\notag\\ &+\frac{\varepsilon^2}{g^2}\left(\lambda_{1}e^{-2\Gamma t}-2\lambda_{3}e^{-\Gamma t}\right), \end{align} where \begin{equation}\label{m3} \lambda_{1}(t)=\frac{\Delta^2}{4\mu^2}\sin^2\mu t+\cos^2\mu t, \end{equation} \begin{align}\label{m4} \lambda_{2}(t)=-\frac{g^2 }{4\mu^2}\Big(\frac{1}{\Gamma}+\frac{1}{\mu}\sin 2\mu t\Big), \end{align} \begin{equation}\label{m5} \lambda_{3}(t)=\cos\mu t\cos(\Delta t/2)+\frac{\Delta}{2\mu}\sin\mu t\sin(\Delta t/2). \end{equation} We immediately see that the intensity of the fluorescent light reduces in the steady state to \begin{align}\label{m6} \langle b^{\dagger}b\rangle_{ss}=\frac{\varepsilon^2}{g^2}+\frac{g^2\kappa N}{4\Gamma\mu^2}. \end{align} As expected, the steady-state intensity is inversely proportional to the decay rate: the higher the decay rate, the lower the intensity, and vice versa.
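The reservoir term of Eq. \eqref{m1} can be checked against the quoted strong-coupling limit by direct numerical quadrature of $|\eta_{3}(t)|^2$; a sketch (the parameter values are illustrative):

```python
import numpy as np

# Reservoir contribution to Eq. (m1): kappa*N * int_0^infty |eta3(t)|^2 dt,
# with |eta3(t)|^2 = (g/mu)^2 sin^2(mu t) exp(-2 Gamma t).  For Gamma << mu
# this should approach the quoted limit g^2*kappa*N/(4*Gamma*mu^2).
kappa, gamma, Delta, g = 1.2, 1.0, 2.0, 50.0   # illustrative, strong coupling
N = np.sinh(1.0) ** 2                          # squeeze parameter r = 1
Gamma = (kappa + gamma) / 4
mu = np.sqrt(g ** 2 + Delta ** 2 / 4)

t = np.linspace(0.0, 40.0 / (2 * Gamma), 400001)  # integrate far past the decay time
f = kappa * N * (g / mu) ** 2 * np.sin(mu * t) ** 2 * np.exp(-2 * Gamma * t)
numeric = np.sum((f[1:] + f[:-1]) / 2) * (t[1] - t[0])  # trapezoidal rule
analytic = g ** 2 * kappa * N / (4 * Gamma * mu ** 2)
print(numeric, analytic)
```

The exact integral is $\kappa N(g/\mu)^2\left[1/4\Gamma-\Gamma/4(\Gamma^2+\mu^2)\right]$, so the quoted form is its leading term for $\Gamma\ll\mu$.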
\begin{figure}[t] \includegraphics{fig2.eps} \caption{Plots of the fluorescent intensity [Eq. \eqref{ee}] vs scaled time $\gamma t$ for $\kappa/\gamma=1.2$, $g/\gamma=5$, $\Delta/\gamma=2$, and for different values of $\varepsilon/\gamma$.} \label{fig1} \end{figure} \begin{figure}[h] \includegraphics{fig3.eps} \caption{Plots of the fluorescent intensity [Eq. \eqref{nn}] vs scaled time $\gamma t$ for $\kappa/\gamma=1.2$, $g/\gamma=5$, $\Delta/\gamma=2$, no external driving field ($\varepsilon/\gamma=0$), and for different values of $r$.} \label{fig2} \end{figure} In order to clearly see the effect of the external coherent light on the intensity, we set $N=0$ in Eq. \eqref{m2} and obtain \begin{equation}\label{ee} \langle b^{\dagger}b\rangle=\frac{\varepsilon^2}{g^2}(1+\lambda_{1}e^{-2\Gamma t}-2\lambda_{3}e^{-\Gamma t}). \end{equation} Figure \ref{fig1} shows the dependence of the intensity of the fluorescent light on the external coherent field. In general, the intensity increases with the amplitude of the pump field and exhibits nonperiodic damped oscillations. Although the mean number of excitons decreases at the initial moment, cavity photons gradually excite one or more excitons in the quantum well, leading to enhanced emission of fluorescence. However, the excitation of excitons saturates as time progresses, limited by the strength of the applied field. From the steady-state intensity, $\varepsilon^2/g^2$, we easily see that the field strength has to exceed the exciton-photon coupling constant in order to have more than one exciton in the quantum well in the long-time limit. On the other hand, the effect of the squeezed vacuum can be studied by turning off the external driving field. Thus, setting $\varepsilon=0$ in Eq. \eqref{m2}, we get \begin{equation}\label{nn} \langle b^{\dagger}b\rangle=\frac{g^2\kappa N}{4\Gamma\mu^2}+(\lambda_{1}+\kappa N\lambda_{2})e^{-2\Gamma t}. \end{equation} In Fig.
\ref{fig2}, we plot the intensity of the light emitted by the exciton [Eq. \eqref{nn}] as a function of the scaled time $\gamma t$ for a given photon-exciton detuning. This figure illustrates the dependence of the intensity on the squeezed vacuum reservoir impinging via the partially transmitting mirror. Here also the intensity exhibits damped oscillations at frequency $2\mu=2(g^2+\Delta^2/4)^{1/2}$, indicating the exchange of energy between the excitons and the cavity mode. The intensity decreases at the initial moment and then gradually increases towards the steady-state value $g^2\kappa N/4\Gamma \mu^2$. While increasing, the intensity shows oscillatory behavior, which ultimately disappears at longer times. Unsurprisingly, the intensity increases with the number of photons coming in through the mirror. Comparing Figs. \ref{fig1} and \ref{fig2}, we note that the intensity behaves differently in the two cases. This can be explained in terms of the nature of the photons each source produces. In the case of coherent light, the photon distribution is Poissonian and the photons arrive at random. This leads to an uneven excitation of excitons, which results in the nonperiodic oscillatory behavior of the intensity. In the case of the squeezed vacuum source, however, the photons are bunched and hence can excite two or more excitons at the same time. This in turn implies that, depending on the strength of the impinging squeezed vacuum field, there will be one or more excitons in the quantum well. For the sake of completeness, we further consider the effect of the detuning on the intensity of the fluorescent light for a given pump field strength and squeezed photon number. Figure \ref{fig3} shows the intensity as a function of the scaled time $\gamma t$. When the photon is out of resonance with the exciton frequency, fewer excitons are present in the quantum well and hence the fluorescent intensity decreases. This is clearly shown in Fig. \ref{fig3}.
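As a consistency check, Eq. \eqref{nn} can be evaluated directly: with a single initial exciton the intensity starts at exactly one and relaxes to $g^2\kappa N/4\Gamma\mu^2$. A sketch (illustrative parameters close to those used in the figures):

```python
import numpy as np

# Intensity of Eq. (nn) (squeezed reservoir only, eps = 0), built from
# lambda_1 and lambda_2 of Eqs. (m3)-(m4).
kappa, gamma, Delta, g, r = 1.2, 1.0, 2.0, 5.0, 1.8
N = np.sinh(r) ** 2
Gamma = (kappa + gamma) / 4
mu = np.sqrt(g ** 2 + Delta ** 2 / 4)
ss = g ** 2 * kappa * N / (4 * Gamma * mu ** 2)   # steady-state value

def intensity(t):
    lam1 = (Delta ** 2 / (4 * mu ** 2)) * np.sin(mu * t) ** 2 + np.cos(mu * t) ** 2
    lam2 = -(g ** 2 / (4 * mu ** 2)) * (1 / Gamma + np.sin(2 * mu * t) / mu)
    return ss + (lam1 + kappa * N * lam2) * np.exp(-2 * Gamma * t)

print(intensity(0.0))            # single initial exciton (equals 1 analytically)
print(intensity(50.0), ss)       # long-time limit: the steady-state value
```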
\begin{figure}[t] \includegraphics{fig4.eps} \caption{Plots of the fluorescent intensity [Eq. \eqref{m2}] vs scaled time $\gamma t$ for $\kappa/\gamma=1.2$, $g/\gamma=5$, $r=1.8$, pump amplitude $\varepsilon/\gamma=7$, and for different values of the exciton-photon detuning ($\Delta/\gamma$).} \label{fig3} \end{figure} \section{Power spectrum} The power spectrum of the fluorescent light in the steady state is given by \begin{equation}\label{s1} S(\omega)=\frac{1}{\pi}\text{Re}\int_{0}^{\infty}d\tau~e^{i(\omega-\omega_{0}) \tau}\langle b^{\dagger}(t)b(t+\tau)\rangle_{ss}, \end{equation} where $ss$ stands for steady state. The two-time correlation function that appears in the above integrand is found to be \begin{align}\label{s2} \langle b^{\dagger}(t)b(t+\tau)\rangle_{ss}&=\frac{\varepsilon^2}{g^2}+\frac{\kappa N g^2}{4\Gamma\mu^3}e^{-(\Gamma+i\Delta/2)\tau}\notag\\ &\times(\mu\cos \mu\tau+\Gamma\sin\mu\tau). \end{align} Now employing this result in \eqref{s1}, performing the resulting integration, and carrying out the straightforward algebra, we obtain \begin{equation}\label{s3} S(\omega)=\frac{\varepsilon^2}{2\pi g^2}\delta(\omega-\omega_{0})+S_{\text{incoh}}(\omega), \end{equation} where \begin{align}\label{s4} S_{\text{incoh}}(\omega)&=\frac{\kappa Ng^2}{16\pi\mu^3}\Big\{\frac{\Delta+4\mu-2(\omega-\omega_{0})}{\Gamma^2+[\frac{\Delta}{2}+\mu-(\omega-\omega_{0})]^2}\notag\\ & +\frac{-\Delta+4\mu+2(\omega-\omega_{0})}{\Gamma^2+[\frac{\Delta}{2}-\mu-(\omega-\omega_{0})]^2}\Big\}. \end{align} We note that the power spectrum has two components: a coherent and an incoherent part. The coherent component is represented by the delta function, which indeed corresponds to the coherent light. The incoherent component, given by \eqref{s4}, arises as a result of the squeezed photons coming in through the port mirror. From Eq.
\eqref{s4} it is clear that the spectrum of the incoherent light is composed of two Lorentzians having the same width $\Gamma$ but centered at two different frequencies: $\omega-\omega_{0}=\Delta/2+\mu$ and $\omega-\omega_{0}=\Delta/2-\mu$. We then see that the detuning leads to a shift of the resonance frequency components relative to those observed at zero detuning ($\Delta=0$). \begin{figure}[t] \includegraphics{fig5.eps} \caption{Plots of the incoherent component of the power spectrum [Eq. \eqref{s4}] vs scaled frequency $(\omega-\omega_{0})/\gamma$ for $\kappa/\gamma=1.2$, $g/\gamma=6$, squeeze parameter $r=1$, and for different values of the detuning $\Delta/\gamma$.} \label{fig4} \end{figure} \begin{figure}[t] \includegraphics[width=6cm]{fig6.eps} \caption{Density plot of the incoherent component of the power spectrum [Eq. \eqref{s4}] vs scaled frequency $(\omega-\omega_{0})/\gamma$ and $\Delta/\gamma$ for $\kappa/\gamma=1.2$, $g/\gamma=6$, and squeeze parameter $r=1$.} \label{fig5} \end{figure} \begin{table*}\label{tab1} \begin{ruledtabular} \caption{List of eigenvalues and eigenstates for the single and two excitation manifolds.
Here $\chi_{\pm}=\sqrt{4g^2+(\Delta\pm2\mu)^2}$.} \begin{tabular}{lccccccc} & Eigenvalues (shifts) &~~Eigenstates (exciton polaritons)\\ \hline & & \\ Single excitation &$\Delta/2+\mu$&$|1\rangle_{+}=[(\Delta+2\mu)|1,0\rangle+2ig|0,1\rangle]/\chi_{+}$\\ manifold& & \\ &$\Delta/2-\mu$&$|1\rangle_{-}=\left[(\Delta-2\mu)|1,0\rangle+2ig|0,1\rangle\right]/\chi_{-}$\\ & & \\ \hline & & \\ Two excitation &$\Delta+2\mu$&$|2\rangle_{+}=[i(\Delta+2\mu)|2,0\rangle-2\sqrt{2}g|1,1\rangle+i(\Delta-2\mu)|0,2\rangle]/4\mu$\\ manifold & & \\ &$\Delta$&$|2\rangle_{0}=[i\sqrt{2}g|2,0\rangle+\Delta|1,1\rangle+i\sqrt{2}g|0,2\rangle]/2\mu$\\ & & \\ &$\Delta-2\mu$&$|2\rangle_{-}=[i(\Delta-2\mu)|2,0\rangle-2\sqrt{2}g|1,1\rangle+i(\Delta+2\mu)|0,2\rangle]/4\mu$\\ \end{tabular} \end{ruledtabular} \end{table*} \begin{figure*}[t] \includegraphics[width=16.5cm]{fig7.eps} \caption{(a) Dressed-state energy level diagram for the single and two excitation manifolds when the exciton is at resonance with the photon. (b) Dressed-state energy level diagram when the exciton frequency is detuned by $\Delta$ from that of the photon. We have assumed $\Delta$ to be positive for the sake of simplicity. The bare states $|n,m\rangle$ $(n,m=0,1,2)$ represent $n$ excitons and $m$ photons. Even though there are six possible transitions, there are only two distinct transition frequencies, namely $\omega-\omega_{0}=\Delta/2+\mu$ and $\omega-\omega_{0}=\Delta/2-\mu$, where $\mu=\sqrt{g^2+\Delta^2/4}$.} \label{fig6} \end{figure*} In Fig. \ref{fig4} we plot the incoherent component of the power spectrum as a function of the scaled frequency $(\omega-\omega_{0})/\gamma$ for the cavity mode initially in the vacuum state and the quantum well initially containing one exciton. For zero detuning the power spectrum consists of two well resolved peaks centered at $\omega-\omega_{0}=\pm g$. This splitting can be understood from the dressed-state energy level diagram [see Fig. \ref{fig6}(a)].
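The peak positions can be verified numerically: for $\Gamma\ll\mu$ the two Lorentzians of Eq. \eqref{s4} peak close to $\omega-\omega_{0}=\Delta/2\pm\mu$. A sketch, writing $\delta=\omega-\omega_{0}$ (the parameter values are illustrative):

```python
import numpy as np

# Incoherent spectrum: two Lorentzians of width Gamma centred near
# delta = Delta/2 + mu and delta = Delta/2 - mu, with delta = omega - omega_0.
kappa, gamma, Delta, g, r = 1.2, 1.0, 4.0, 6.0, 1.0
N = np.sinh(r) ** 2
Gamma = (kappa + gamma) / 4
mu = np.sqrt(g ** 2 + Delta ** 2 / 4)

def S_incoh(delta):
    pref = kappa * N * g ** 2 / (16 * np.pi * mu ** 3)
    t1 = (Delta + 4 * mu - 2 * delta) / (Gamma ** 2 + (Delta / 2 + mu - delta) ** 2)
    t2 = (-Delta + 4 * mu + 2 * delta) / (Gamma ** 2 + (Delta / 2 - mu - delta) ** 2)
    return pref * (t1 + t2)

delta = np.linspace(-3 * mu, 3 * mu, 200001)
S = S_incoh(delta)
upper = delta > Delta / 2                      # each half-axis holds one peak
peak_hi = delta[upper][np.argmax(S[upper])]
peak_lo = delta[~upper][np.argmax(S[~upper])]
print(peak_hi, peak_lo)                        # near Delta/2 + mu and Delta/2 - mu
```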
Note that in the case of a single excitation there are two degenerate bare states: $|1,0\rangle$ (one exciton, no photon) and $|0,1\rangle$ (one photon, no exciton). However, the strong exciton-photon coupling lifts the degeneracy of these two bare states and results in two dressed states (exciton polaritons), $|+\rangle=(|1,0\rangle+i|0,1\rangle)/\sqrt{2}$ and $|-\rangle=(|1,0\rangle-i|0,1\rangle)/\sqrt{2}$, with eigenvalues $g$ and $-g$, respectively. In general, since the exciton-photon system is coupled to the environment, the exciton polaritons are unstable states. Thus the decay of the exciton and of the cavity photon leads to the decay of the exciton polaritons, which yields two peaks in the emission spectrum. It is worth noting that even though we start off with a single exciton in the quantum well, the cavity photons can excite two or more excitons. This results in additional dressed states in the higher excitation manifolds. For example, as shown in Fig. \ref{fig6}(a), the two-excitation manifold contains three dressed states which are equally spaced in energy, with the same energy separation as in the one-excitation manifold. Out of the six possible transitions, from the two-excitation to the one-excitation manifold and from the one-excitation manifold to the ground state, there are only two distinct frequencies. Therefore, the emission spectrum consists of two peaks. This is different from atom-photon coupling, for which an increase in the excitation number increases the number of peaks in the emission spectrum. On the other hand, for the nonzero-detuning case, the emission spectrum has two peaks whose centers are shifted to the red (for positive detuning). Here the one-excitation bare states ($|1,0\rangle, |0,1\rangle$) are separated by $\Delta$, as are the two-excitation states ($|2,0\rangle, |1,1\rangle$, and $|0,2\rangle$). The eigenvalues and corresponding eigenstates are given in Table I.
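The shifts listed in Table I can be checked by diagonalizing the Hamiltonian of Eq. (1), with $\varepsilon=0$ and the loss term dropped, in the bare bases $\{|1,0\rangle,|0,1\rangle\}$ and $\{|2,0\rangle,|1,1\rangle,|0,2\rangle\}$; the matrix elements below follow from $H=\Delta b^{\dagger}b+i\text{g}(a^{\dagger}b-ab^{\dagger})$ with the usual bosonic ladder factors. A numerical sketch:

```python
import numpy as np

# H = Delta*b^dag b + i*g*(a^dag b - a b^dag), restricted to the
# one-excitation basis {|1,0>, |0,1>} and the two-excitation basis
# {|2,0>, |1,1>, |0,2>} (states labelled |n_exciton, n_photon>).
Delta, g = 4.0, 6.0
mu = np.sqrt(g ** 2 + Delta ** 2 / 4)
c = np.sqrt(2) * g   # sqrt(2) ladder factor in the two-excitation manifold

H1 = np.array([[Delta, -1j * g],
               [1j * g, 0.0]])
H2 = np.array([[2 * Delta, -1j * c, 0.0],
               [1j * c, Delta, -1j * c],
               [0.0, 1j * c, 0.0]])

e1 = np.sort(np.linalg.eigvalsh(H1))
e2 = np.sort(np.linalg.eigvalsh(H2))
print(e1)   # [Delta/2 - mu, Delta/2 + mu]
print(e2)   # [Delta - 2*mu, Delta, Delta + 2*mu]
```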
The exciton-photon coupling leads to the generation of dressed states (exciton polaritons). The decay of these states to the one excitation manifold and then to the ground state gives rise to two emission peaks whose frequencies differ from the zero detuning case, as shown in Fig. \ref{fig6}(b). Further, the density plot of the power spectrum clearly shows that there are indeed two peaks separated by $2\mu=2\sqrt{g^2+\Delta^2/4}$, as illustrated in Fig. \ref{fig5}. \section{Second-order correlation function} In this section we study the second-order correlation function of the light emitted by the quantum well. The second-order correlation function is a measure of the correlation between photons detected at some time $t$ and at a later time $t+\tau$. It is also an indicator of quantum features that have no classical analog. Quantum mechanically the second-order correlation function is defined by \begin{equation}\label{G1} \text{g}^{(2)}\left( \tau \right) =\frac{\left\langle b^{\dagger} (t)b^{\dagger} (t+\tau )b(t+\tau )b(t)\right\rangle_{ss} }{\left\langle b^{\dagger} (t)b(t)\right\rangle_{ss} ^{2}}. \end{equation} The correlation function that appears in \eqref{G1} can be obtained using the solution \eqref{m2} together with the properties of the Langevin noise forces. Since the mean values of the noise forces are zero and the Langevin equations are linear, we can apply the Gaussian properties of the noise forces. To this end, using Eq.
\eqref{m2}, we obtain \begin{figure}[t] \includegraphics{fig8.eps} \caption{Plot of the second-order correlation function versus normalized time $\gamma \tau$ for $g/\gamma=5$, $\kappa/\gamma=1.2$, $\Delta/\gamma=0$, squeezing parameter $r=1$, and for different values of the pump amplitude $\varepsilon/\gamma$.}\label{g22} \end{figure} \begin{align}\label{g1} \text{g}^{(2)}(\tau)=1&+\frac{1}{\langle b^{\dagger}b\rangle_{ss}^2}\Big[ \frac{\kappa^2}{4}\left(4M^2 A_{3}+N^2 A_{2}^2\right)e^{-2\Gamma\tau}\notag\\ &+\frac{\kappa \varepsilon^2}{g^2}\left[M A_{1}+N A_{2}\cos(\Delta \tau/2)\right]e^{-\Gamma\tau}\Big], \end{align} where \begin{align}\label{g2} A_{1}(\tau)&=\frac{2\mu\cos\mu\tau\sin(\Delta \tau/2)-\Delta\cos(\Delta \tau/2)\sin\mu\tau}{4\mu^2}\notag\\ &+\frac{g^2\cos\mu \tau[2\Gamma\cos(\Delta \tau/2)-\Delta\sin(\Delta \tau/2)]}{\mu^2[\Delta^2+4\Gamma^2]}, \end{align} \begin{equation}\label{g3} A_{2}(\tau)=\frac{g^2}{2\Gamma\mu^3}\left(\mu \cos\mu\tau+\Gamma\sin\mu\tau\right), \end{equation} \begin{equation}\label{g4} A_{3}(\tau)=\frac{4\mu^2\cos^2\mu\tau+\Delta^2\sin^2\mu\tau+4\Gamma\mu\sin(2\mu\tau)}{16\mu^2[\Delta^2+4\Gamma^2]}, \end{equation} and $\langle b^{\dagger}b\rangle_{ss}$ is given by \eqref{m2}. The first term (unity) in Eq. \eqref{g1} represents the second-order correlation function of coherent light. This can easily be seen by setting $N=M=0$. The first term in the square bracket is the contribution to the second-order correlation function from the squeezed vacuum reservoir, while the second term describes the interference between the coherent field and the reservoir. Note that as $\tau\rightarrow \infty$, $\text{g}^{(2)}$ becomes unity, as it should, showing no correlation between the photons. The dynamical behavior of the second-order correlation function is illustrated in Fig. \ref{g22}.
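Two limits of Eq. \eqref{g1}, both immediate from the expression itself, serve as consistency checks:

```latex
% (i) Coherent drive only: setting N = M = 0 removes both bracketed
% terms, leaving the coherent-light value
\text{g}^{(2)}(\tau)\big|_{N=M=0} = 1 ;
% (ii) long delays: e^{-\Gamma\tau} and e^{-2\Gamma\tau} vanish, hence
\lim_{\tau\rightarrow\infty}\text{g}^{(2)}(\tau) = 1 ,
% i.e. the photons become uncorrelated, as noted in the text.
```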
We see from this figure that the correlation function shows oscillatory behavior, with oscillation frequency equal to the photon-exciton coupling constant $g$ for the zero detuning case and in the absence of the external driving field. However, the frequency of oscillation is reduced by a factor of 1/2 in the presence of the external coherent field. It is easy to see that $\text{g}^{(2)}(0)$ is always greater than unity, indicating photon bunching. This is contrary to what has been observed in atomic cavity QED, where the photons show antibunching \cite{Set08} due to the finite time delay between the absorption and subsequent emission of a photon by the atom. In the case of semiconductor cavity QED, however, the cavity photons can excite two or more excitons at the same time, depending on the number of photons in the cavity, leading to possible multi-photon emission. This is the reason why the photons emitted by excitons are bunched. Indeed, excitation of two or more excitons in the quantum well is shown in Figs. \ref{fig1}-\ref{fig3}. \section{Quadrature squeezing} Next we study the squeezing properties of the fluorescent light by evaluating the variances of the quadrature operators. The variances of the quadrature operators for the fluorescent light are given by \begin{equation}\label{Q1} \Delta b_{\pm}^2=1+2\langle b^{\dagger}b\rangle\pm\left[\langle b^{2}\rangle+\langle b^{\dagger 2}\rangle\right]\mp(\langle b^{\dagger}\rangle\pm\langle b\rangle)^2, \end{equation} where $b_{+}= (b^{\dagger}+b)$ and $b_{-}=i(b^{\dagger}-b)$. It can easily be seen from this definition that the quadrature operators satisfy the commutation relation $[b_{+}, b_{-}] = 2i$. It is well known that for the fluorescent light to be squeezed the variances of the quadrature operators should satisfy either $\Delta b_{+}^2 < 1$ or $\Delta b_{-}^2 < 1$. Using Eq.
\eqref{a2} and the properties of the noise operators we find the variances to be \begin{align}\label{Q3} \Delta b_{\pm}^2&=1+\frac{\kappa N g^2}{2\Gamma\mu^2}\pm\frac{2\kappa M g^2\Gamma}{\mu^2(\Delta^2+4\Gamma^2)}\notag\\ &+\Big[2\lambda_{1}(t)+2\kappa N\lambda_{2}(t) \pm \kappa M\lambda_{4}\Big] e^{-2\Gamma t}, \end{align} in which $\lambda_{1}$ and $\lambda_{2}$ are the same as defined in Section III and \begin{align*} \lambda_{4}&=\frac{g^2}{\mu^2}\Bigg[\frac{\Delta\sin(\Delta t/2)-2\Gamma\cos(\Delta t/2)}{\mu^2(\Delta^2+4\Gamma^2)}\notag\\ &-\frac{\sin[(\Delta-2\mu)t]}{2(\Delta-2\mu)}-\frac{\sin[(\Delta+2\mu)t]}{2(\Delta+2\mu)}\Bigg]. \end{align*} It is straightforward to see that in the steady state the variances reduce to \begin{align}\label{Q5} \Delta b_{+}^2&=1+\frac{\kappa N g^2}{2\Gamma\mu^2}+\frac{2\kappa M g^2\Gamma}{\mu^2(\Delta^2+4\Gamma^2)}, \end{align} \begin{align}\label{Q6} \Delta b_{-}^2&=1+\frac{\kappa N g^2}{2\Gamma\mu^2}-\frac{2\kappa M g^2\Gamma}{\mu^2(\Delta^2+4\Gamma^2)}. \end{align} \begin{figure}[t] \includegraphics{fig9.eps} \caption{Plots of the steady state quadrature variance [\eqref{Q6}] vs the squeeze parameter $r$ for $g/\gamma=5$, $\kappa/\gamma=1.2$, and for different values of the exciton-photon detuning $\Delta/\gamma$.}\label{qv2} \end{figure} \begin{figure}[t] \includegraphics{fig10.eps} \caption{Plots of the quadrature variance [\eqref{Q3}] vs scaled time $\gamma t$ for $g/\gamma=5$, $\kappa/\gamma=1.2$, squeeze parameter $r=1$, and for different values of the exciton-photon detuning $\Delta/\gamma$.}\label{qv1} \end{figure} From the above expressions we find that, in the steady state, the quadrature variances crucially depend on the detuning, the cavity-exciton coupling strength, and the amount of squeezing provided by the reservoir. Further, it is apparent that if there is any squeezing it can only be present in the $b_{-}$ quadrature.
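Two short checks of the statements above, using only $[b,b^{\dagger}]=1$ together with Eqs. \eqref{Q5} and \eqref{Q6} (we take $M\geq0$, as is assumed here for the squeezed reservoir):

```latex
% Commutator of the quadratures b_+ = b^+ + b and b_- = i(b^+ - b):
[b_{+},b_{-}] = i\,[\,b^{\dagger}+b,\;b^{\dagger}-b\,]
 = 2i\,[b,b^{\dagger}] = 2i .
% Difference of the steady-state variances, Eqs. (Q5) and (Q6):
\Delta b_{+}^{2}-\Delta b_{-}^{2}
 = \frac{4\kappa M g^{2}\Gamma}{\mu^{2}(\Delta^{2}+4\Gamma^{2})}
 \geq 0 ,
% so any squeezing can indeed occur only in the b_- quadrature.
```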
Thus for the rest of this section we will only discuss the properties of the variance in the $b_{-}$ quadrature. As a special case, we consider the cavity mode to be at resonance with the excitonic transition frequency and put $\Delta = 0$ in Eq. (\ref{Q6}). We then find that \begin{align}\label{Q7} \Delta b_{-}^{2} = 1-\frac{\kappa}{\kappa+\gamma}(1-e^{-2r}) < 1. \end{align} Equation (\ref{Q7}) then suggests that higher squeezing in the reservoir leads to better squeezing of the fluorescent light. In Fig. \ref{qv2} we confirm this behavior by plotting the steady state $\Delta b_{-}^2$ as a function of the squeezing parameter $r$. In addition, since $e^{-2r} \rightarrow 0$ quickly with increasing $r$, Eq. (\ref{Q7}) shows that the maximum possible squeezing achievable in our system is $50\%$, attained for $\kappa = \gamma$. This is confirmed in Fig. \ref{qv2}. In the presence of detuning $(\Delta \neq 0)$ the behavior of $\Delta b_{-}^2$ changes dramatically. We find that for a small detuning $\Delta/\gamma = 0.5$ there exists a range of the squeezing parameter $r~(0 < r < 1.3)$ where one can see squeezing of the fluorescent light emitted from the exciton; however, for higher values of $r$ it vanishes. This implies that in the presence of detuning stronger squeezing of the reservoir has a negative effect on the squeezing of the radiation emitted from the excitons. In Fig. \ref{qv1}, we plot the time evolution of the quadrature variance $\Delta b_{-}^2$ [Eq. (\ref{Q3})] as a function of the normalized time $\gamma t$ for $r = 1$ and different values of the detuning. It is seen, in general, that the variance oscillates initially, with the amplitude of oscillation gradually damping out at longer times. Eventually, at large enough times the variance becomes flat and approaches the steady state value.
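The $50\%$ bound on the steady-state squeezing follows directly from Eq. (\ref{Q7}) in the limit of strong reservoir squeezing:

```latex
% e^{-2r} -> 0 as r grows, so Eq. (Q7) gives
\lim_{r\rightarrow\infty}\Delta b_{-}^{2}
 = 1-\frac{\kappa}{\kappa+\gamma}
 = \frac{\gamma}{\kappa+\gamma} ,
% which equals 1/2 for kappa = gamma, i.e. at most a 50% noise
% reduction below the vacuum level of unity, and never reaches zero
% for finite kappa.
```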
Interestingly, our results show that even though there is no squeezing of the fluorescent light at the initial moment, for small or zero detuning transient squeezing gradually develops. Moreover, we also find that for weak squeezing of the reservoir, even in the presence of a small detuning, the initial transient squeezing is sustained and finally leads to steady state squeezing. This can be understood as a consequence of the strong interaction of the quantum well with the squeezed photons entering via the cavity mirror. In the case of large detuning the exciton is unable to absorb photons from the squeezed reservoir and thus no squeezing develops in the fluorescence. \section{Conclusion} In this paper we consider a semiconductor quantum well in a cavity driven by external coherent light and coupled to a single mode squeezed vacuum reservoir. We study the photon statistics and nonclassical properties of the light emitted by the quantum well in the presence of exciton-photon detuning in the strong coupling regime. The effects of the coherent light and the squeezed vacuum reservoir on the intensity of the fluorescence are quite different. The former leads to a transient peak intensity which eventually decreases to a considerably smaller steady state value. In contrast, the latter gives rise to a gradual increase in the intensity and leads to a maximum intensity at steady state. This difference is attributed to the nature of the photons that the two sources produce. As a signature of strong coupling between the excitons in the quantum well and the cavity photons, the emission spectrum consists of two peaks corresponding to the two eigenenergies of the dressed states. Further, we find that the fluorescence exhibits a nonclassical feature, quadrature squeezing, as a result of the strong interaction of the excitons with the squeezed photons entering via the cavity mirror.
In view of recent successful experiments on the Autler-Townes effect in GaAs/AlGaAs \cite{Wag2010} and on gain without inversion in semiconductor nanostructures \cite{Fro2006}, the quantum statistical properties of the fluorescence emitted by the quantum well can be tested experimentally. \begin{acknowledgements} One of the authors (E.A.S.) is supported by a fellowship from the Herman F. Heep and Minnie Belle Heep Texas A$\&$M University Endowed Fund held/administered by the Texas A$\&$M Foundation. \end{acknowledgements}
\section{Introduction} In this paper we consider the tensor product of two graded algebras, with multiplication twisted by a bicharacter. When the two factors are augmented, Bergh and Oppermann~\cite{BO} completely described the Ext algebra as a twisted tensor product of the Ext algebras of the factors. Hochschild cohomology, however, is more complicated, and with the techniques at the time, they were able only to describe a subalgebra of the Hochschild cohomology ring. This subalgebra may be thought of as the ``untwisted part'' of Hochschild cohomology; it was left open how to describe the remaining, truly twisted, part of Hochschild cohomology. This we do here. Moreover, we describe the full Gerstenhaber algebra structure in terms of that of the component algebras. To be precise, let $R$ and $S$ be algebras over a field $k$, graded by abelian groups $A$ and $B$ respectively. A bicharacter $t : A\times B\rightarrow k^{\times}$ determines a twisted tensor product algebra $R\otimes^t S$, as explained in Section~\ref{sec:prelim} below. The following result combines Theorems \ref{Main-theorem}, \ref{cup-prod-thm}, and \ref{thm:G-alg} (all notation is defined in Section~\ref{sec:prelim}). \begin{theorem*} There is an isomorphism of Gerstenhaber algebras \[ \HH^*(R\otimes^t S) \cong \bigoplus_{a\in A, \ b\in B} \HH^*(R, R_{\hat{b}})^a \otimes \HH^*(S, {}_{\hat{a}}S)^b . \] Let $M$ be an $R$-bimodule graded by $A$ and let $N$ be an $S$-bimodule graded by $B$. There is an isomorphism of graded $\HH^*(R\otimes^t S)$-modules \[ \HH^*(R\otimes^t S,M\otimes^tN)\cong \bigoplus_{a\in A, \ b\in B}\HH^*(R,M_{\hat{b}})^a\otimes \HH^*(S,{}_{\hat{a}}N)^b. \] \end{theorem*} This description allows us to simplify drastically the calculation of Hochschild cohomology for several classes of algebras, notably the quantum complete intersections (see Section~\ref{sec:examples}). 
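As a quick sanity check of the theorem (immediate from the statement, though not spelled out in the text): when the bicharacter $t$ is trivial, every automorphism $\hat{a}$ and $\hat{b}$ is the identity, the twists on the coefficients disappear, and the formula collapses, under the finiteness hypotheses imposed later, to the graded K\"unneth isomorphism:

```latex
% t = 1, so R_{\hat{b}} = R and {}_{\hat{a}}S = S for all a, b:
\HH^*(R\otimes S)
 \cong \bigoplus_{a\in A,\ b\in B}\HH^*(R,R)^{a}\otimes\HH^*(S,S)^{b}
 = \HH^*(R)\otimes\HH^*(S).
```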
The fact that this decomposition is compatible with the cup product implies that the Hochschild cohomology of twisted tensor products often has an extremely degenerate product (see Corollary \ref{degenerate-cor}). The fact that it is compatible with the Gerstenhaber bracket means it can be used to simplify the often formidable task of computing brackets. On the two factors appearing in the theorem we define a twisted cup product $\smile_t$ and twisted Gerstenhaber bracket $[\ ,\,]_t$ (coming from a chain level twisted circle product $\circ_t$). These structures seem interesting in their own right, and we only begin to study them here. The rule for combining these twisted structures together, in the two factors appearing in the main theorem, is a generalization of Manin's definition of the tensor product of two Gerstenhaber algebras~\cite{M}. The proof is elementary, and we give an explicit isomorphism at the level of Hochschild cochain complexes, taking care to match up the various twistings. This means in particular that the main theorem could be upgraded into a statement about dg algebras. At the same time, we indicate how the isomorphism can be thought of conceptually in terms of ``orbit Hochschild cohomology'' (see Section~\ref{sec:cup}). \subsection*{Outline} Section \ref{sec:prelim} contains the necessary notation and homological notions. The main isomorphism is constructed in Section~\ref{sec:main}, at the level of graded vector spaces. In Section~\ref{sec:cup} we handle the cup product and module structure, and then in Section~\ref{sec:bracket} we treat Gerstenhaber brackets, completing the proof of the above theorem. Finally, in Sections~\ref{sec:examples} and~\ref{sec:examples2} we give examples. After understanding the statement of the main theorem, we recommend that the reader skip to the example sections to see how it is used. 
\section{Preliminaries}\label{sec:prelim} Throughout this paper $k$ is a field (but a commutative ring would be fine if everything in sight is projective over $k$). All unlabeled tensor products and $\Hom$s are taken over $k$. We denote by $R^{\sf ev} = R^{\sf op}\otimes R$ the enveloping algebra of a $k$-algebra $R$. \subsection*{Twisted tensor products of algebras} First we recall how the tensor product of two graded algebras can be twisted by a bicharacter connecting their grading groups. Let $R$ and $S$ be $k$-algebras that are graded by abelian groups $A$ and $B$: \[ R = \bigoplus_{a\in A} R^a \quad \mbox{ and } \quad S = \bigoplus_{b\in B} S^b . \] Suppose also that $t: A\times B \rightarrow k^{\times}$ is a bicharacter, so that \[ t(a+a',b) = t(a,b)t(a',b), \quad \quad t(a, b+b') = t(a,b)t(a,b'), \] whenever $a,a'\in A$ and $b,b'\in B$. For convenience we also use the notation \[ t(r,s)=t(a,b) \] when $r$ is in $R^a$ and $s$ is in $S^b$, and similar notation for elements in graded modules. With this data the {\em twisted tensor product} $R\otimes^t S$ is by definition the vector space $R\otimes S$, with multiplication given by \[ (r\otimes s )\cdot( r'\otimes s') = t(r',s) \,rr'\otimes ss' \] for homogeneous elements $r,r'\in R$ and $s,s'\in S$. Note that $R\otimes^t S$ is naturally an $A\oplus B$-graded algebra with $(R\otimes^t S)^{a,b} = R^a\otimes S^b$ for all $a\in A$ and $b\in B$. \begin{example}\label{exKoszul} If $A$ and $B$ are both $\mathbb{Z}$ and $t$ is the sign bicharacter $t(a,b)=(-1)^{ab}$, then this construction yields the usual graded tensor product of two graded algebras (i.e.~following the Koszul sign rule). \end{example} \begin{example}\label{exqci} Let $R = k[x]/(x^m)$ and $S=k[y]/(y^n)$ for some positive integers $m,n\geq 2$, both graded by $\mathbb{Z}$, with $x$ and $y$ in degree $1$. Let $q\in k^{\times}$ and $t(a,b) =q^{ab}$ for $a,b\in \mathbb{Z}$. 
There is a presentation \[ R\otimes ^t S \cong k\langle x,y\rangle / ( x^m , \ y^n , \ yx-qxy ) . \] This is a {\em quantum complete intersection} in two indeterminates. The construction can be iterated to obtain a quantum complete intersection in finitely many indeterminates. When $m=n=2$ and $q$ is not a root of unity, these are algebras of infinite global dimension whose Hochschild cohomology is finite dimensional over $k$, as discovered by Buchweitz, Green, Madsen, and Solberg~\cite{BGMS}. \end{example} We will present more examples in the final two sections. \subsection*{Twisted tensor products of bimodules} Continuing in the setting of the last subsection, suppose that $M$ is an $A$-graded $R$-bimodule, and that $N$ is a $B$-graded $S$-bimodule. The twisted tensor product $M\otimes^t N$ is by definition the $A\oplus B$-graded $R\otimes^t S $-bimodule whose underlying graded vector space is $M\otimes N$, with $R\otimes^t S$-action \begin{equation}\label{bimodule-tens} (r\otimes s)\cdot (m\otimes n) = t(m,s)\,rm \otimes sn \quad \mbox{ and } \quad (m\otimes n)\cdot (r\otimes s) = t(r,n)\,mr \otimes ns \end{equation} for homogeneous elements $r\in R$, $s\in S$, $m\in M$ and $n\in N$. One may consider $R\otimes^tS$ as a bimodule over itself in the usual way, and this notation is consistent in that $R\otimes^tS$ is indeed the bimodule twisted tensor product of the bimodules $R$ and $S$. \subsection*{Group actions} Let $\widehat{A}=\Hom(A,k^\times)$ be the group of linear characters of $A$. Since $R$ is graded by $A$, this group acts naturally on $R$ by setting $\rho\cdot r = \rho(a) r$ for $r\in R^a$ and $\rho\in \widehat{A}$. The bicharacter $t$ induces a homomorphism $B\to \widehat{A}$, which will be denoted $b\mapsto \hat{b}$, and through this homomorphism $B$ acts naturally on $R$. Explicitly, the automorphism $\hat{b}$ of $R$ is defined by \begin{equation}\label{action-def} \hat{b}(r) = t(a,b) r \quad \text{for all} \quad r\in R^a. 
\end{equation} Similarly, through the homomorphism $A\to \widehat{B}$, denoted $a\mapsto \hat{a}$, we obtain from $t$ a natural action of $A$ on $S$. These actions allow us to twist the structures of $R$-bimodules and $S$-bimodules as we describe next. \subsection*{Twisting module structures along automorphisms} This is a separate (but, we will see, related) use of the word ``twist''. Suppose that $\rho$ is a graded $k$-algebra automorphism of $R$, and that $M$ is a graded $R$-bimodule. We denote by $M_\rho$ the graded $R$-bimodule obtained from $M$ by twisting the right $R$-module structure along $\rho$. That is to say, the left action of $R$ is unchanged and the right action is the composition \[ M_\rho\otimes R\xrightarrow{1\otimes \rho} M\otimes R \xrightarrow{\ \mu \ } M, \] where $\mu$ is the given right module structure of $M$. In calculations we will need to distinguish the old and new actions of $R$, so we will use $\cdot_\rho$ to denote the twisted action and plain concatenation for the original action: \begin{equation}\label{action-convention} m\cdot_\rho r= m\rho(r). \end{equation} Similarly, one may twist the left module structure along $\rho$ to obtain a bimodule ${}_\rho M$. In particular, any $R$-bimodule $M$ can be twisted by an element of $B$ to produce a new bimodule $M_{\hat{b}}$. And any $S$-bimodule $N$ can be twisted by an element of $A$ to produce a new bimodule ${}_{\hat{a}}N$. It is these twists which appear in the main theorem. \subsection*{The bar construction and Hochschild cohomology} Mostly in order to fix notation, we quickly recap the usual construction of the Hochschild cochain complex, and some of the structure that it enjoys. Denote by $BR$ the unreduced \emph{bar construction}, which is a $\mathbb{Z}$-graded vector space with $B_m R = R^{\otimes m}$. We use the bar notation $r_1\otimes \cdots \otimes r_m=[r_1|\cdots |r_m]$. 
The bar construction comes with a differential $b_R\colon B_nR\to B_{n-1}R$ given by $b_R[r_1|\cdots |r_m]=\sum_{i=1}^{m-1}(-1)^i [r_1|\cdots | r_{i}r_{i+1}|\cdots |r_m]$. The bar construction can be used to build resolutions. In particular, the \emph{bar resolution} of $R$ is by definition the complex $R\otimes BR\otimes R$ with the $R$-bilinear differential given by $\partial( 1\otimes [r_1|\cdots |r_{m}]\otimes 1)=$ \[ r_1\otimes [r_2|\cdots |r_{m}]\otimes 1 + 1\otimes b_R[r_1|\cdots |r_{m}] \otimes 1+ (-1)^{m} 1\otimes [r_1|\cdots |r_{m-1}]\otimes r_{m}. \] Using this differential we get a free resolution $R\otimes BR\otimes R\xrightarrow{\ \simeq\ } R$ of $R$-bimodules. In Section \ref{sec:bracket} we will use the short-hand $B(R)=R\otimes BR\otimes R$ for the bar resolution. If $M$ is an $R$-bimodule, the unreduced Hochschild cochain complex $C^*(R,M)$ is by definition $\Hom(BR,M)$. Its differential is inherited from the bar resolution by way of the isomorphism $\Hom(BR,M)\cong \Hom_{R^{\sf ev}}(R\otimes BR\otimes R,M)$. Explicitly, if $f\in C^m(R,M)$ then $\partial(f) [r_1|\cdots |r_{m+1}]=$ \begin{equation}\label{HHdiff} r_1f[r_2|\cdots |r_{m+1}] + fb_R[r_1|\cdots |r_{m+1}] + (-1)^{m+1}f[r_1|\cdots |r_m]r_{m+1} \end{equation} for $r_1,\ldots, r_{m+1}$ in $R$. The homology of $C^*(R,M)$ is the Hochschild cohomology $\HH^*(R,M)$ of $R$ with coefficients in $M$. From the $A$-grading of $R$ and $M$, each of $BR$ and $C^*(R,M)$ and $\HH^*(R,M)$ inherit an $A$-grading. All of the above applies just as well to $S$ with its $B$-grading, and indeed $R\otimes^tS$ with its $A\oplus B$-grading. \subsection*{The twisted bar resolution} One may tensor together the bar resolutions of $R$ and $S$, with action as in equation~(\ref{bimodule-tens}), to obtain (by the K\"unneth Theorem) a quasi-isomorphism \[ (R\otimes BR\otimes R)\otimes^t(S\otimes BS\otimes S)\xrightarrow{\ \simeq\ } R\otimes^t S. 
\] The left-hand side has its usual tensor product differential here; the twist only affects the bimodule structure. Now, there is a unique $R\otimes^t S$-bimodule isomorphism \[ (R\otimes BR\otimes R)\otimes^t(S\otimes BS\otimes S) \cong (R\otimes ^t S)\otimes BR\otimes BS\otimes (R\otimes^t S) \] such that \[ (1 \otimes [r_1|\cdots |r_m]\otimes 1)\otimes (1\otimes [s_1| \cdots | s_n ]\otimes 1) \ \mapsto\ (1\otimes 1)\otimes [r_1|\cdots |r_m]\otimes [s_1| \cdots | s_n ]\otimes (1\otimes 1) . \] If we write out in full what happens to the differential under this isomorphism we obtain the following proposition. \begin{proposition}\label{diff-prop} $(R\otimes ^t S)\otimes BR\otimes BS\otimes (R\otimes^t S)$ is naturally an $R\otimes^t S $-bimodule resolution of $R\otimes^t S $ when equipped with the following differential: \[ \partial\ \left((1\otimes 1)\otimes [r_1|\cdots |r_m]\otimes [s_1| \cdots | s_n ]\otimes (1\otimes 1) \right) = \] \begin{align*} & ( r_1\otimes 1)\otimes [r_2|\cdots |r_m]\otimes [s_1| \cdots | s_n ]\otimes (1\otimes 1) +\\ &(1\otimes 1)\otimes b_R[r_1|\cdots |r_m]\otimes [s_1| \cdots | s_n ]\otimes (1\otimes 1) +\\ (-1)^m t(r_m, b)^{-1}&( 1\otimes 1)\otimes [r_1|\cdots |r_{m-1}]\otimes [s_1| \cdots | s_n ]\otimes (r_m\otimes 1) + \\ (-1)^mt(a, s_1)^{-1}&( 1\otimes s_1)\otimes [r_1|\cdots |r_m]\otimes [s_2| \cdots | s_n ]\otimes (1\otimes 1) +\\ (-1)^m&(1\otimes 1)\otimes [r_1|\cdots |r_m]\otimes b_S[s_1|\cdots | s_n ]\otimes (1\otimes 1) +\\ (-1)^{m+n} &(1\otimes 1)\otimes [r_1|\cdots |r_{m}]\otimes [s_1| \cdots | s_{n-1} ]\otimes (1\otimes s_n).\\ \end{align*} Here $a$ is the $A$-degree of $[r_1|\cdots |r_m]$ and $b$ is the $B$-degree of $[s_1| \cdots | s_n ]$. 
\end{proposition} Moving on, we will also need to use the diagonal map \begin{equation}\label{eqn:diagonal} \Delta\colon BR\otimes BS \longrightarrow BR\otimes BS\otimes BR\otimes BS \end{equation} from \cite{GNW}, which is by definition given by \begin{align*} \Delta \ [ & r_1 | \cdots | r_{m} ] \otimes [s_1 | \cdots | s_n ] = \\ & \sum_{i,j}^{} (-1)^{(m-i)j} t ( a_i,b_j)^{-1} \, [r_1 | \cdots | r_i] \otimes [ s_1 | \cdots | s_j ] \otimes [r_{i+1} | \cdots | r_{m} ] \otimes [s_{j+1} | \cdots | s_n ] , \end{align*} the sum over $0\leq i \leq m$, $0\leq j\leq n$, where $a_i$ is the $A$-degree of $[r_{i+1}|\cdots |r_m]$, and $b_j$ is the $B$-degree of $[s_1| \cdots | s_j ]$. \subsection*{The complex $C^*_t(R,S,M,N)$ and its structure} The differential described in Proposition \ref{diff-prop} above induces a dual differential on \begin{align*} \Hom(BR\otimes BS,\, &M\otimes N) \\ \cong &\ \Hom_{(R\otimes^t S)^{{\sf ev}}}((R\otimes ^t S)\otimes BR\otimes BS\otimes (R\otimes^t S),M\otimes^tN). \end{align*} We will write $C^*_t(R,S,M,N)$ for the complex $\Hom(BR\otimes BS,M\otimes N)$ equipped with this differential. By construction, the homology of this complex is the $A\oplus B$-graded Hochschild cohomology $\HH^*(R\otimes^t S,M\otimes^tN)$. Taking $M=R$ and $N=S$, the complex $C^*_t(R,S,R,S)$ has a natural product \[ f\smile g = \mu (f\otimes g) \Delta \] where $\mu $ is the product on $R\otimes^tS$ and $\Delta$ is the diagonal map (\ref{eqn:diagonal}). In \cite{GNW} it is explained why this computes the usual cup product on $\HH^*(R\otimes^tS,R\otimes^tS)$. In fact, one can check that this makes $C^*_t(R,S,R,S)$ into a dg algebra quasi-isomorphic to the usual Hochschild cochain algebra of $R\otimes^t S$. Let us introduce some notation which will be useful for the rest of the paper.
With enough finiteness conditions (see the beginning of Section~\ref{sec:main}) any element of $C^*_t(R,S,M,N)$ can be split into a sum of parallel tensor products of maps $BR \to M$ and $BS\to N$. We use the square $\boxtimes$ as our notation for this parallel tensor product, so \begin{equation}\label{box-notation} (f\boxtimes g)([r]\otimes [s])= (-1)^{mn}\ f[r]\otimes g[s] \end{equation} for elements $f\in C^m(R,M)$, $g\in C^n(S,N)$, $[r]= [r_1|\cdots |r_m]\in B_mR$ and $[s]=[s_1|\cdots |s_n]\in B_nS$. With this notation, expanding the cup product explicitly on elements gives the following lemma. \begin{lemma}\label{cup-prod-lemma} Take elements $f\in C^m(R,R)^a$, $f'\in C^{m'}(R,R)^{a'}$, $g\in C^n(S,S)^b$ and $g'\in C^{n'}(S,S)^{b'}$. Then the cup product on $C^*_t(R,S,R,S)$ satisfies \begin{align*} \big((f\boxtimes g) & \smile (f'\boxtimes g')\big) \ \big([r_1|\cdots | r_{m+m'}]\otimes [s_1|\cdots | s_{n+n'}]\big) \\ & = (-1)^i \cdot t^*\cdot f[r_1|\cdots | r_m]f'[r_{m+1}|\cdots |r_{m+m'}] \otimes g[s_1|\cdots | s_n]g'[s_{n+1}|\cdots |s_{n+n'}] \end{align*} \begin{align*} \text{where}\quad\quad & i = mn'+nn'+m'n' +mm'+mn\\ \text{and}\ \ \quad\quad & t^* = t(a',b)t(a',s)t(r',b). \end{align*} Here $r'$ is the $A$-degree of $[r_{m+1}|\cdots |r_{m+m'}]$ and $s$ is the $B$-degree of $[s_1|\cdots | s_n]$. \end{lemma} \section{Main Result}\label{sec:main} \subsection*{}\label{finiteness-cond} First we impose some finiteness conditions: we require each component $R^a$ of $R$ to be finite dimensional, and we require that the set ${\rm supp}(R)\subseteq A$ on which $R$ is nonzero satisfies the condition that $n$-fold addition ${\rm supp}(R)^{\times n}\to A$ has finite fibers for all $n$. This just means that $R^{\otimes n}$ is degree-wise finite dimensional for each $n$. Certainly if $R$ is finite dimensional (in total) then there is nothing to worry about. We assume the same for $S$ and its $B$-grading. 
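A hypothetical example illustrating the finiteness condition (not taken from the text): let $R=k[x]$ with $A=\mathbb{Z}$ and $\deg x=1$, so that ${\rm supp}(R)=\mathbb{N}$. Then:

```latex
% Fiber of n-fold addition over d in Z:
\{(d_1,\dots,d_n)\in\mathbb{N}^{n} : d_1+\cdots+d_n=d\},
% a finite set for every d and n, so each (R^{\otimes n})^{d} is
% finite dimensional. By contrast, R = k[x,x^{-1}] has supp(R) = Z,
% the fibers are infinite, and the condition fails.
```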
Using the notation~(\ref{box-notation}), we define a map of $\mathbb{Z}\oplus A \oplus B$-graded vector spaces \[ \phi \colon \Hom(BR,M)\otimes \Hom(BS,N)\longrightarrow \Hom(BR\otimes BS,M\otimes N) \] \[ f\otimes g \quad \mapsto\quad f\boxtimes g. \] \begin{theorem}\label{Main-theorem}\label{vect-space-theorem} Under the just stated finiteness conditions, $\phi$ induces an isomorphism \[ \HH^*(R\otimes^t S,M\otimes^tN)\cong \bigoplus_{a\in A, \ b\in B} \ \HH^*(R,M_{\hat{b}})^a\otimes \HH^*(S,{}_{\hat{a}}N)^b \] of graded vector spaces. \end{theorem} In the next two sections we refine the theorem by describing cup products and Gerstenhaber brackets on Hochschild cohomology of $\HH^*(R\otimes^t S)$ in terms of this direct sum decomposition when $M=R$ and $N=S$. \begin{remark} Let us note the difference between graded algebras and \emph{gradable} algebras. In some contexts the groups $A$ and $B$ may be considered only as auxiliary data allowing one to encode the twist which constructs $R\otimes^t S$. Then one might be interested in the ordinary $\mathbb{Z}$-graded Hochschild cohomology, computed by first forgetting the $A\oplus B$-grading. This can be a subtle issue with real consequences (Hochschild cohomology need not commute with forgetting a grading), but under our finiteness assumptions one \emph{can} safely do this by using Theorem \ref{Main-theorem} to compute the $A\oplus B$-graded Hochschild cohomology and then forgetting the grading. \end{remark} \begin{proof}[Proof of Theorem \ref{vect-space-theorem}.] Since each of $BR,BS,M$ and $N$ is degree-wise finite dimensional by hypothesis, $\phi$ is an isomorphism of $\mathbb{Z}\oplus A \oplus B$-graded vector spaces. Our task is only to show that \begin{equation}\label{eqn:phi} \phi\colon C^*(R,M_{\hat{b}})^a\otimes C^*(S,{}_{\hat{a}}N)^b\xrightarrow{\ \cong\ } C^*_t(R,S,M,N)^{a,b} \end{equation} is a chain map. This will be a slightly tedious matter of checking that all the twists match up. 
Take $f$ in $C^m(R,M_{\hat{b}})^a$ and $g$ in $C^n(S,{}_{\hat{a}}N)^b$. We first compute $\phi\partial (f\otimes g)=\phi\partial(f)\otimes g+(-1)^m \phi f\otimes \partial(g)$. We can split this computation into two cases corresponding to the two summands here. Specifically, we can evaluate this: (\ref{case1}) on elements of the form $[r_1|\cdots |r_{m+1}]\otimes [s_1|\cdots|s_{n}]$; and (\ref{case2}) on elements of the form $[r_1|\cdots |r_{m}]\otimes [s_1|\cdots|s_{n+1}]$. We first deal with case (\ref{case1}). In this calculation we use the short-hand notation $[r]=[r_1|\cdots |r_{m+1}]$ and $[s]=[s_1|\cdots|s_{n}]$. We use our formula (\ref{HHdiff}) for the Hochschild differential. Also remember our action notation from equation~(\ref{action-convention}). Note that $\partial(g)[s]=0$, so \begin{align}\tag{I}\label{case1} \phi \partial (f\otimes g) ( [r]\otimes [s]) = & -(-1)^m(-1)^{(m+1)n}\ r_1 f[r_2|\cdots |r_{m+1}]\otimes g[s] \\ & -(-1)^m (-1)^{(m+1)n} \ fb_R[r]\otimes g[s] \nonumber \\ & -(-1)^m (-1)^{(m+1)n} (-1)^{m+1} \ f[r_1|\cdots |r_m]\cdot_{\hat{b}} r_{m+1}\otimes g[s] \nonumber \\ = & -(-1)^{mn+m+n}\ r_1 f[r_2|\cdots |r_{m+1}]\otimes g[s] \nonumber \\ & -(-1)^{mn+m+n} \ fb_R[r]\otimes g[s] \nonumber \\ & -(-1)^{mn+m+n}(-1)^{m+1} t(r_{m+1},b) \ f[r_1|\cdots |r_m]r_{m+1}\otimes g[s] . \nonumber \end{align} Similarly in case (\ref{case2}), with $[r]=[r_1|\cdots |r_m]$ and $[s]=[s_1|\cdots|s_{n+1}]$, we have $\partial(f)[r]=0$ so \begin{align}\tag{II}\label{case2} \phi \partial (f\otimes g) ( [r]\otimes [s]) = & -(-1)^m (-1)^n(-1)^{m(n+1)}\ f[r]\otimes s_1\cdot_{\hat{a}} g[s_2|\cdots |s_{n+1}] \\ & -(-1)^m (-1)^n(-1)^{m(n+1)}\ f[r]\otimes gb_S[s] \nonumber \\ & -(-1)^m (-1)^n(-1)^{m(n+1)}(-1)^{n+1} \ f[r]\otimes g[s_1|\cdots|s_n] s_{n+1} \nonumber \\ = & -(-1)^{mn+n}t(a,s_1)\ f[r]\otimes s_1 g[s_2|\cdots |s_{n+1}] \nonumber\\ &-(-1)^{mn+n} \ f[r]\otimes gb_S[s] \nonumber \\ & -(-1)^{mn+n}(-1)^{n+1} \ f[r]\otimes g[s_1|\cdots|s_n] s_{n+1} . 
\nonumber \end{align} Next we calculate $\partial(\phi(f\otimes g))([r]\otimes [s])$ in the same two cases. In case (\ref{case1}) we take $[r]=[r_1|\cdots |r_{m+1}]$ in $(B_{m+1}R)^{a'}$ and $[s]=[s_1|\cdots|s_{n}]$ in $(B_nS)^{b'}$. We use Proposition \ref{diff-prop} to compute the differential on $C^*_t(R,S,M,N)$, and as in our previous calculation the choice of $[r]$ and $[s]$ means that half the terms described in Proposition \ref{diff-prop} vanish. \begin{align}\tag{\ref{case1}} \partial(\phi(f\otimes g))([r]\otimes [s]) = & -(-1)^{m+n}(-1)^{mn}\ (r_1\otimes 1) f[r_2|\cdots|r_{m+1}]\otimes g[s] \\ & -(-1)^{m+n}(-1)^{mn}\ fb_R[r]\otimes g[s] \nonumber\\ & -(-1)^{m+n}(-1)^{mn}(-1)^{m+1} t(r_{m+1},b')^{-1}\ f[r_1|\cdots |r_m]\otimes g[s] (r_{m+1}\otimes 1)\ \nonumber\\ = & -(-1)^{mn+m+n}\ r_1 f[r_2|\cdots |r_{m+1}]\otimes g[s] \nonumber \\ & -(-1)^{mn+m+n} \ fb_R[r]\otimes g[s] \nonumber \\ & -(-1)^{mn+m+n}(-1)^{m+1} t(r_{m+1},b) \ f[r_1|\cdots |r_m]r_{m+1}\otimes g[s] . \nonumber \end{align} The last line comes from the fact that \[ t(r_{m+1},b')^{-1} f[r_1|\cdots |r_m]\otimes g[s] (r_{m+1}\otimes 1)=t(r_{m+1},b')^{-1}t(r_{m+1},b+b' )f[r_1|\cdots |r_m]r_{m+1}\otimes g[s] , \] since $g[s]$ has degree $b+b'$ in $B$, and then $t(r_{m+1},b')^{-1}t(r_{m+1},b+b' )=t(r_{m+1},b )$. The result matches exactly the calculation of $\phi \partial (f\otimes g) ( [r]\otimes [s])$ in case (\ref{case1}). It remains to compute $\partial(\phi(f\otimes g))([r]\otimes [s])$ in case (\ref{case2}), with $[r]=[r_1|\cdots |r_m]$ in $(B_mR)^{a'}$ and $[s]=[s_1|\cdots|s_{n+1}]$ in $(B_{n+1}S)^{b'}$. 
\begin{align}\tag{\ref{case2}} \partial(\phi(f\otimes g))([r]\otimes [s]) = & -(-1)^{m+n}(-1)^{mn}(-1)^m t(a',s_{n+1})^{-1} \ (1\otimes s_1) f[r]\otimes g[s_2|\cdots|s_{n+1}] \\ & -(-1)^{m+n}(-1)^{mn}(-1)^m\ f[r]\otimes gb_S[s] \nonumber\\ & -(-1)^{m+n}(-1)^{mn}(-1)^{m+n+1} \ f[r]\otimes g[s_1|\cdots|s_n] (1\otimes s_{n+1})\ \nonumber\\ = & -(-1)^{mn+n} t(a,s_{n+1})\ f[r]\otimes s_1 g[s_2|\cdots|s_{n+1}] \nonumber \\ & -(-1)^{mn+n} \ f[r]\otimes gb_S[s] \nonumber \\ & -(-1)^{mn+n}(-1)^{n+1} \ f[r]\otimes g[s_1|\cdots|s_n]s_{n+1} . \nonumber \end{align} This matches the computation of $\phi \partial (f\otimes g) ( [r]\otimes [s])$ in case (\ref{case2}), so we are done. \end{proof} \section{The Cup Product}\label{sec:cup} In this section and the next we explain how to handle the algebraic structure in Theorem~\ref{Main-theorem}. But first we make a construction which seems to be interesting in its own right. \subsection*{Orbit Hochschild cohomology} The following construction applies to any algebra with a group action, but we may as well use the same notation as above and consider the (right) action of the group $B$ on $R$. The orbit Hochschild cohomology of $R$ is by definition $\bigoplus_{b\in B} \HH^*(R, R_{\hat{b}})$. We can define a cup product on this graded vector space by the following formula, in terms of the standard Hochschild cochain complex. If $f\in C^m(R, R_{\hat{b}})$ and $f'\in C^{m'}(R, R_{\hat{b'}})$, then we set \begin{equation}\label{eqn:twisted-cup1} (f\smile_t f')\ [r_1|\cdots|r_{m+m'}] = (-1)^{mm'}f[r_1|\cdots|r_m] \,\hat{b}(f'[r_{m+1}|\cdots|r_{m+m'}]), \end{equation} where the automorphism $\hat{b}$ is given by~(\ref{action-def}). Then $f\smile_t f'$ is an element of $C^{m+m'}(R, R_{\widehat{b+b'}})$. One can think of this product as follows: from the $B$-action on $R$ we form an ``orbit endomorphism ring'' $\bigoplus_{b} \HH^*(R, R_{\hat{b}}) = \bigoplus_b \Hom_{D(R^{\sf ev})}(R,R_{\hat{b}})$.
Just as for ordinary Hochschild cohomology the product can be defined using the (derived) tensor product $\otimes_R$ on $D(R^{\sf ev})$, but it must involve a twist. The automorphism $\hat{b}$ can naturally be considered as a map $R_{\hat{b'}}\to {}_{\hat{b}}R_{\hat{b}\hat{b'}}$, and the composition \[ R\cong R\otimes_R R \xrightarrow{f\otimes_R f'} R_{\hat{b}}\otimes_R R_{\hat{b'}} \xrightarrow{1\otimes_R \hat{b}} R_{\hat{b}}\otimes_R {}_{\hat{b}}R_{\hat{b}\hat{b'}}\cong R_{\hat{b}\hat{b'}} \] is the map $f\smile_tf'\colon R\to R_{\hat{b}\hat{b'}}$. It is straightforward to check that the product is associative. Moreover, orbit Hochschild cohomology is always twisted commutative in the following sense. \begin{proposition}\label{R-twisted-comm-prop}\label{degenerate-prod} The product (\ref{eqn:twisted-cup1}) satisfies \begin{align*} f\smile_t f' & = (-1)^{mm'}t(a',b) f'\smile_t f \end{align*} whenever $f\in \HH^m(R,R_{\hat{b}})^a$ and $f'\in \HH^{m'}(R,R_{\hat{b'}})^{a'}$. In particular, \[ f\smile_t f' = 0 \ \text{ unless }\ t(a',b)=t(a,b')^{-1} . \] \end{proposition} Note that this implies commutativity with respect to the braided monoidal structure given by $t$. The second statement above is useful in calculations. It will mean that the Hochschild cohomology of a twisted tensor product often has a very degenerate cup product. \begin{proof}[Sketch of proof] One can do this with a twisted version of the usual Eckmann--Hilton argument~\cite{SA}, working in $D(R^{\sf ev})$. But it will be useful for us later to introduce the twisted circle product, so we follow Gerstenhaber's original argument. Given $f\in C^m(R,R_{\hat{b}})$ and $f'\in C^{m'}(R,R_{\hat{b'}})$ we define an element $f'\circ_tf$ in $ C^{m+m'-1}(R,R_{\widehat{b+b'}})$ according to the rule \begin{align*} (f'\circ_t f)&[r_1|\cdots|r_{m+m'-1}]= \\ & \sum_{i} (-1)^{i(m+1)} f'[ r_1|\cdots | r_{i} | f [ r_{i+1}| \cdots | r_{i+m}]| \hat{b} r_{i+m+1} |\cdots | \hat{b}r_{m+m'-1} ].
\end{align*} With this, a computation reveals that \[ \partial(f'\circ_t f)+\partial(f')\circ_t f+(-1)^{m'}f'\circ_t \partial(f) = f\smile_t (\hat{b}^{-1}f'\hat{b})- (-1)^{mm'}f'\smile_t f. \] So at the level of Hochschild cohomology, \[ f'\smile_t f = (-1)^{mm'}f\smile_t(\hat{b}^{-1}f'\hat{b}). \] Another computation shows that $\hat{b}^{-1}f'\hat{b}=t(a',b)^{-1}f'$. The second statement follows from the first by exchanging $f$ and $f'$ twice. \end{proof} All of this applies just as well to the (left) action of $A$ on $S$. There is a natural product on the orbit Hochschild cohomology $\bigoplus_{a\in A}\HH^*(S,{}_{\hat{a}}S)$ given by $g\smile_t g' = \hat{a}g\smile g'$, that is \begin{equation}\label{eqn:twisted-cup2} (g\smile_t g')[s_1|\cdots|s_{n+n'}] = (-1)^{nn'}\hat{a}(g[s_1|\cdots|s_n])\, \,g'[s_{n+1}|\cdots|s_{n+n'}] \end{equation} for $g\in C^n(S,{}_{\hat{a}}S)^b$ and $g'\in C^{n'}(S,{}_{\hat{a}'}S)^{b'}$. As before this is associative and twisted commutative in the following sense. \begin{proposition}\label{S-twisted-comm-prop} The product (\ref{eqn:twisted-cup2}) satisfies \[ g\smile_t g' = (-1)^{nn'}t(a,b') g'\smile_t g \] whenever $g\in \HH^n(S,{}_{\hat{a}}S)^b$ and $g'\in \HH^{n'}(S,{}_{\hat{a}'}S)^{b'}$. \end{proposition} The proof uses an analogous twisted circle product for $S$. We record it here for use in the next section: \begin{align*} (g'\circ_t g)& [s_1|\cdots | s_{n+n'-1}] = \\ & \sum_i (-1)^{i(n+1)} g'[\hat{a}s_1|\cdots | \hat{a}s_i | g[s_{i+1} | \cdots | s_{i+n} ]| s_{i+n+1} | \cdots | s_{n+n'-1}] . \end{align*} \subsection*{The cup product in the main theorem} Because of the \emph{non-twisted} tensor product in the enveloping algebra $(R\otimes^t S)^{\sf op}\otimes (R\otimes^t S)$ which underlies Hochschild cohomology, there must be some kind of untwisting appearing in the main theorem.
This is manifest in the following cup product on \[ \big(\bigoplus_{b\in B} \HH^*(R, R_{\hat{b}})\big)\otimes \big( \bigoplus_{a\in A}\HH^*(S,{}_{\hat{a}}S) \big), \] which we define by the formula \begin{equation}\label{untwistedcupdef} (f\otimes g) \smile (f'\otimes g') = (-1)^{m'n}t(a',b)^{-1}(f\smile_t f')\otimes (g\smile_t g') \end{equation} where $g\in\HH^n(S,{}_{\hat{a}}S)^b$ and $f'\in\HH^{m'}(R,R_{\hat{b'}})^{a'}$. This is an ``untwisted cup product'' because the twist $t(a',b)^{-1}$ is the inverse of the expected one. With this product, the diagonal subalgebra \[ \bigoplus_{a\in A,\ b\in B} \HH^*(R, R_{\hat{b}})^a\otimes \HH^*(S,{}_{\hat{a}}S)^b \] becomes graded-commutative in the usual sense, because the twists from Propositions~\ref{R-twisted-comm-prop} and~\ref{S-twisted-comm-prop} cancel out. Hence, the following statement makes sense. \begin{theorem}\label{cup-prod-thm} With the just defined product (\ref{untwistedcupdef}) on the left-hand side, when $M=R$ and $N=S$, the isomorphism $\phi$ of Theorem~\ref{vect-space-theorem} is one of algebras: \[ \bigoplus_{a\in A,\ b\in B} \HH^*(R,R_{\hat{b}})^a\otimes \HH^*(S,{}_{\hat{a}}S)^b\xrightarrow{\ \cong \ }\HH^*(R\otimes^t S). \] \end{theorem} In order to establish fully that the second isomorphism in Theorem~\ref{Main-theorem} is one of $\HH^*(R\otimes^t S)$-modules, we should also define an action of $\HH^*(R\otimes^t S)$ on $\bigoplus_{a,b} \HH^*(R,M_{\hat{b}})^a\otimes \HH^*(S,{}_{\hat{a}}N)^b$ at the chain level, and show that $\phi$, defined in~(\ref{eqn:phi}), respects this action. Since this is formally extremely similar to the case $M=R$ and $N=S$ described above, without any further interesting features, we omit the proof. Combining Theorem~\ref{cup-prod-thm} and Proposition~\ref{degenerate-prod} yields the following consequence. 
\begin{corollary}\label{degenerate-cor} If $x\in \HH^*(R\otimes^t S,R\otimes^t S)^{a,b}$ and $x'\in \HH^*(R\otimes^t S,R\otimes^t S)^{a',b'}$, then \[ x\smile x' = 0 \ \text{ unless }\ t(a',b)=t(a,b')^{-1}. \] \end{corollary} So, despite $\HH^*(R\otimes^t S,R\otimes^t S)$ being graded commutative, the fact that it is built out of twisted commutative algebras implies that its product is often very degenerate. \begin{proof}[Proof of Theorem~\ref{cup-prod-thm}] We will check that the product defined in this section agrees with the expression obtained in Lemma~\ref{cup-prod-lemma}. Let $f,f',g$ and $g'$ be as in the statement of Lemma~\ref{cup-prod-lemma}. Firstly \begin{equation}\label{cup-prod-thm-pf-eqn} \phi\big((f\otimes g) \smile (f'\otimes g') \big) = (-1)^{m'n}t(a',b)^{-1}(f\smile_t f')\boxtimes (g\smile_t g'), \end{equation} so we compute \begin{align*} (f\smile_t &\, f') \boxtimes (g\smile_t g') \ [r_1|\cdots | r_{m+m'}]\otimes [s_1|\cdots | s_{n+n'}] =\\ & (-1)^{i} \cdot f[r_1|\cdots|r_m] \,\hat{b}(f'[r_{m+1}|\cdots|r_{m+m'}]) \otimes \hat{a}(g[s_1|\cdots|s_n])\, \,g'[s_{n+1}|\cdots|s_{n+n'}] \end{align*} where $i = mm'+nn'+(m+m')(n+n')$. This is by definition \[ (-1)^{i}\cdot t^*\cdot f[r_1|\cdots | r_m]f'[r_{m+1}|\cdots |r_{m+m'}] \otimes g[s_1|\cdots | s_n]g'[s_{n+1}|\cdots |s_{n+n'}] \] where $t^*= t(a'+r',b)t(a,b+s)$, with $r'$ being the $A$-degree of $[r_{m+1}|\cdots |r_{m+m'}]$ and $s$ being the $B$-degree of $[s_1|\cdots | s_n]$. If we multiply this by the scalar $(-1)^{m'n}t(a',b)^{-1}$ appearing in \eqref{cup-prod-thm-pf-eqn}, then we find that our coefficient matches that of Lemma~\ref{cup-prod-lemma}, so we are done. \end{proof} \section{The Gerstenhaber Bracket}\label{sec:bracket} We start by defining a bracket on $\bigoplus_{b\in B} \HH^*(R, R_{\hat{b}})$. Once again we point out that this applies to any algebra with a group action.
If $f\in C^m (R, R_{\hat{b}})^{a}$ and $f'\in C^{m'}(R, R_{\hat{b'}})^{a'}$, then by definition \[ {[} f , f' {]}_t = t^{-1}(a,b')f\circ_t f' - (-1)^{(m-1)(m'-1)} f'\circ_t f , \] where the twisted circle product $\circ_t$ was defined in Section~\ref{sec:cup}. Similarly, when $g\in C^n(S, {}_{\hat{a}}S)^b$ and $g'\in C^{n'}(S, {}_{\hat{a}'}S)^{b'}$, we define \[ [g,g']_t = g\circ_t g' - (-1)^{(n-1)(n'-1)} t^{-1}(a',b) g'\circ_t g . \] We refrain from discussing the sense in which these brackets make $\bigoplus_{b\in B} \HH^*(R, R_{\hat{b}})$ and $\bigoplus_{a\in A} \HH^*(S, {}_{\hat{a}}S)$ into ``twisted Gerstenhaber algebras''. In the following theorem we combine the two brackets, adapting and generalizing a definition of Manin~\cite{M} to define a bracket in the setting of twisted tensor products. \begin{theorem}\label{thm:G-alg} The isomorphism $\phi$ of Theorem~\ref{vect-space-theorem} is one of Gerstenhaber algebras: \[ \bigoplus_{a\in A, \ b\in B} \HH^*(R, R_{\hat{b}})^a\otimes \HH^*(S, {}_{\hat{a}}S)^b \xrightarrow{\ \cong\ } \HH^*(R\otimes^t S), \] where on the left-hand side, the bracket is defined by ${[}f\otimes g, f'\otimes g' {]} =$ \[ (-1)^{(m-1)n'} {[}f,f'{]}_t \otimes (g\smile_t g') + (-1)^{m(n'-1)} (f\smile_t f')\otimes {[}g,g'{]}_t . \] \end{theorem} Our bracket formula generalizes that given by Le and Zhou~\cite{LZ}, who showed that the Hochschild cohomology ring of a tensor product of algebras is isomorphic, as a Gerstenhaber algebra, to the graded tensor product of their Hochschild cohomology rings; our result is a twisted analogue. Before proving the theorem, we summarize some results from~\cite{GNW,NW} that we will need for computing brackets. Note that the diagonal map $\Delta$ of~(\ref{eqn:diagonal}) is coassociative and counital by its definition. Therefore the Gerstenhaber bracket of $f\boxtimes g$ and $f'\boxtimes g'$, representing elements of $\HH^*(R\otimes^t S)$, may be computed as follows~\cite{NW}.
The circle product $(f\boxtimes g)\circ (f'\boxtimes g')$ can be taken to be the following composition: \begin{equation}\label{eqn:circle} (f\boxtimes g) \Phi (1\otimes_{R\otimes^t S} (f'\boxtimes g')\otimes_{R\otimes^t S} 1) \Delta^{(2)}, \end{equation} where $\Delta^{(2)}=(\Delta\otimes 1)\Delta$, and where $\Phi$ is the homotopy given in~\cite[Lemma 3.5]{GNW} for twisted tensor products, in accordance with the theory of~\cite{NW}. It is given by \[ \Phi = (G_{B(R)}\otimes F^l_{B(S)} + F^r_{B(R)}\otimes G_{B(S)}) \sigma, \] with $G_{B(R)}$, $F^l_{B(S)}$, $F^r_{B(R)}$, $G_{B(S)}$, $\sigma$ defined as below, and $B(R)$, $B(S)$ the bar resolutions for $R$ and $S$. Letting $\mu_R\colon B(R) \rightarrow R$ be the natural quasi-isomorphism, set \begin{equation}\label{FlFr} F^{l}_{B(R)} = \mu_R\otimes 1_{B(R)} \ \ \ \mbox{ and } \ \ \ F^{r}_{B(R)}=1_{B(R)}\otimes \mu_R, \end{equation} as maps from $B(R)\otimes_R B(R)$ to $B(R)$, where $1_{B(R)}$ is the identity map on $B(R)$, and similarly for $B(S)$. The map $G_{B(R)}\colon B(R)\otimes _R B(R) \rightarrow B(R)[1]$ is defined by \begin{equation}\label{equation-G} G_{B(R)} ((r_0\otimes [r_1 | \cdots | r_{p-1}] \otimes r_p) \otimes_R (1 \otimes [r_{p+1} |\cdots | r_n] \otimes r_{n+1})) \hspace{4cm} \end{equation} $$ \hspace{3cm} = (-1)^{p-1} r_0\otimes [r_1 | \cdots | r_{p-1} | r_p | r_{p+1} |\cdots | r_n ] \otimes r_{n+1} $$ for all $r_0,\ldots, r_{n+1}$ in $R$. The chain map \[ \sigma\colon (B(R)\otimes B(S))\otimes _{R\otimes ^t S}(B(R)\otimes B(S)) \rightarrow (B(R)\otimes _R B(R)) \otimes ^t (B(S)\otimes _S B(S)) \] is an isomorphism of $(R\otimes ^t S)^{\sf ev}$-modules in each degree given by \[ \sigma ((x\otimes y)\otimes (x'\otimes y')) = (-1)^{jp} t(x' , y) (x\otimes x') \otimes (y\otimes y') \] on homogeneous elements in $(B_i(R)\otimes B_j(S)) \otimes _{R\otimes ^t S} (B_p(R)\otimes B_q(S))$. \begin{proof}[Proof of Theorem~\ref{thm:G-alg}] Let $f,f',g,g'$ be as before.
We will compute the bracket $[f\boxtimes g , f ' \boxtimes g ']$ on the right side of the isomorphism in the statement of the theorem by applying it to elements of the form $${[} r_1 |\cdots | r_{m''}{]} \otimes {[}s_1 | \cdots | s_{n''}{]} \quad \text{ in } BR\otimes BS$$ where $m'' + n'' = m+m' + n+ n'-1$ and $r_1,\ldots, r_{m''}\in R$, $s_1,\ldots, s_{n''}\in S$. We will first compute the circle product~(\ref{eqn:circle}), canonically identifying $f,g,f',g'$ with the corresponding bimodule homomorphisms: \begin{align*} & ((f \boxtimes g)\circ (f'\boxtimes g')) ((1\otimes {[} r_1| \cdots | r_{m''} {]}\otimes 1) \otimes (1\otimes {[} s_1| \cdots | s_{n''} {]}\otimes 1)) \\ & = (f\boxtimes g) \left(G_{B(R)}\otimes F^l_{B(S)} + F^r_{B(R)}\otimes G_{B(S)} \right) \sigma (1 \otimes (f'\boxtimes g') \otimes 1 ) \Delta^{(2)} \\ & \hspace{4cm} ((1\otimes {[} r_1| \cdots | r_{m''} {]}\otimes 1)\otimes (1\otimes {[} s_1| \cdots | s_{n''}{]}\otimes 1)) \\ & = (f\boxtimes g) \left(G_{B(R)} \otimes F^l_{B(S)} + F^r_{B(R)}\otimes G_{B(S)} \right) \sigma (1 \otimes (f'\boxtimes g') \otimes 1) (\Delta\otimes 1) \\ & \hspace{1.5cm} \Big( \sum_{j=0}^{m''} \sum_{i=0}^{n''} (-1)^{i(m''-j)} t^{-1}({[}r_{j+1}|\cdots | r_{m''}{]} , {[}s_1| \cdots | s_i{]}) \\ & \hspace{2cm} ((1\otimes {[}r_1 |\cdots | r_j{]} \otimes 1)\otimes (1\otimes {[} s_1 | \cdots | s_i {]}\otimes 1) ) \otimes_{R\otimes^tS}\\ &\hspace{2cm} ( (1\otimes {[} r_{j+1} |\cdots | r_{m''}{]}\otimes 1)\otimes (1\otimes {[}s_{i+1}|\cdots | s_{n''}{]}\otimes 1) ) \Big) \\ &=(f\boxtimes g) \left(G_{B(R)} \otimes F^l_{B(S)} + F^r_{B(R)}\otimes G_{B(S)} \right) \sigma (1 \otimes (f'\boxtimes g')\otimes 1)\\ &\quad \Big( \sum_{j=0}^{m''} \sum_{i=0}^{n''} \sum_{p=0}^i \sum_{l=0}^j (-1)^{i(m''-j)}(-1)^{p (j-l)} t^{-1}({[}r_{j+1} |\cdots | r_{m''}{]} , {[}s_1 | \cdots | s_i{]}) t^{-1} ({[}r_{l+1} |\cdots | r_j{]} ,{[} s_1| \cdots | s_p{]}) \\ & \hspace{2cm} ((1\otimes {[} r_1 |\cdots | r_l {]}\otimes 1)\otimes (1\otimes {[} s_1| 
\cdots | s_p{]} \otimes 1) ) \otimes _{R\otimes^tS}\\ &\hspace{2cm} ((1\otimes {[} r_{l+1}|\cdots | r_j{]}\otimes 1) \otimes (1\otimes {[} s_{p+1}|\cdots | s_i{]}\otimes 1) ) \otimes_{R\otimes^t S}\\ &\hspace{2cm} ((1\otimes {[}r_{j+1}|\cdots | r_{m''} {]}\otimes 1) \otimes (1\otimes {[} s_{i+1}| \cdots | s_{n''}{]}\otimes 1)) \Big). \\ \end{align*} We wish to apply $(1\otimes (f'\boxtimes g')\otimes 1)$ to the above sum, so it suffices to consider only terms for which $m'= j-l$, $n'=i-p$. The sign involved is thus $$ (-1)^{(p+l)(m'+n')} = (-1)^{ (m'+n')(j-m' + i-n')}, $$ and the above becomes \begin{align*} & = (f\boxtimes g) \left(G_{B(R)} \otimes F^l_{B(S)} + F^r_{B(R)}\otimes G_{B(S)} \right) \sigma \\ &\quad \Big( \sum_{j=m'}^{m''} \sum_{i=n'}^{n''} (-1)^{i (m''-j)} (-1)^{(i-n')m'} (-1)^{(m'+n')(j-m'+i-n')} \\ &\hspace{1cm} t^{-1}({[}r_{j+1} |\cdots | r_{m''}{]} , {[}s_1 | \cdots | s_i{]})t^{-1} ({[}r_{j-m'+1} |\cdots | r_j{]}, {[} s_1 | \cdots | s_{i-n'}{]}) \\ &\hspace{1cm} ((1\otimes {[}r_1|\cdots | r_{j-m'}{]}\otimes 1)\otimes (1\otimes {[} s_1| \cdots | s_{i-n'}{]}\otimes 1)) \otimes_{R\otimes^tS}\\ &\hspace{1cm} \big(f'({[}r_{j-m'+1}| \cdots | r_j{]}) \otimes g'({[} s_{i-n'+1}| \cdots | s_i{]}) \big) \otimes_{R\otimes^tS}\\ &\hspace{1cm} ((1\otimes {[} r_{j+1}| \cdots | r_{m''}{]}\otimes 1) \otimes (1\otimes {[} s_{i+1}| \cdots | s_{n''}{]}\otimes 1)) \Big). 
\\ \end{align*} Now apply the module action, and apply $\sigma$ (which comes with the sign $(-1)^{(i-n')(m''-j)}$), to obtain \begin{align*} & = (f\boxtimes g) \left(G_{B(R)} \otimes F^l_{B(S)} + F^r_{B(R)}\otimes G_{B(S)} \right) \sigma \\ &\quad \Big( \sum_{j=m'}^{m''} \sum_{i=n'}^{n''} (-1)^{i (m''-j)} (-1)^{(i-n')m'} (-1)^{(m'+n')(j-m'+i-n')} \\ &\hspace{.5cm} t^{-1}({[}r_{j+1}|\cdots | r_{m''}{]} , {[}s_1 | \cdots | s_i{]}) t^{-1}( {[}r_{j-m'+1}|\cdots | r_j{]} , {[} s_1 | \cdots | s_{i-n'}{]}) \\ &\hspace{.5cm} t(f'({[} r_{j-m'+1}| \cdots | r_j{]}) , {[}s_1| \cdots | s_{i-n'}{]}) \\ &\hspace{.5cm} ((1\otimes {[}r_1|\cdots | r_{j-m'}{]}\otimes f'({[} r_{j-m'+1}| \cdots | r_j{]})) \otimes (1\otimes {[} s_1| \cdots | s_{i-n'}{]} \otimes g'({[} s_{i-n'+1}|\cdots | s_i{]})) \otimes_{R\otimes^tS}\\ &\hspace{.5cm} ((1\otimes {[} r_{j+1}| \cdots | r_{m''}{]}\otimes 1) \otimes (1\otimes {[} s_{i+1}| \cdots | s_{n''}{]}\otimes 1)) \Big) \\ & = (f\boxtimes g) \left(G_{B(R)} \otimes F^l_{B(S)} + F^r_{B(R)}\otimes G_{B(S)} \right) \\ &\quad \Big( \sum_{j=m'}^{m''} \sum_{i=n'}^{n''} (-1)^{-n' (m''-j)} (-1)^{(i-n')m'} (-1)^{(m'+n')(j-m'+i-n')} \\ &\hspace{.5cm} t^{-1}({[}r_{j+1}|\cdots | r_{m''}{]} , {[}s_1| \cdots | s_i{]})t^{-1} ({[}r_{j-m'+1}|\cdots | r_j {]}, {[}s_1 | \cdots | s_{i-n'}{]})\\ &\hspace{.5cm} t(f'({[}r_{j-m'+1}| \cdots | r_j{]}) , {[}s_1| \cdots | s_{i-n'}{]}) t({[}r_{j+1}| \cdots | r_{m''}{]}, 1\otimes {[} s_1| \cdots | s_{i-n'}{]} \otimes g'({[} s_{i-n'+1}| \cdots | s_i{]})) \\ &\hspace{.5cm} (1\otimes {[}r_1|\cdots | r_{j-m'}{]}\otimes f'({[} r_{j-m'+1}| \cdots | r_j{]})) \otimes_R (1\otimes {[} r_{j+1}| \cdots | r_{m''}{]}\otimes 1) \otimes \\ &\hspace{.5cm} (1\otimes {[} s_1| \cdots | s_{i-n'}{]}\otimes g'({[} s_{i-n'+1}| \cdots | s_i{]})) \otimes_S (1\otimes {[}s_{i+1}| \cdots | s_{n''}{]}\otimes 1) \Big). 
\\ \end{align*} Denote by $t^{**}$ the twisting coefficient in the above equation, which simplifies to: \[ t^{**} = t(a', [s_1|\cdots | s_{i-n'}]) t( [r_{j+1} | \cdots | r_{m''}] , b') . \] Next we will apply $G_{B(R)} \otimes F^l_{B(S)} + F^r_{B(R)}\otimes G_{B(S)}$, and there are signs associated to each term. In applying $G_{B(R)}\otimes F^l_{B(S)}$, necessarily $i=n'$ for the image to be non-zero, and the sign is $(-1)^{j-m'}$. In applying $F^r_{B(R)}\otimes G_{B(S)}$, necessarily $j=m''$ for the image to be non-zero, and the sign is $(-1)^{i-n'}$ with an additional sign $(-1)^{j-m'+m''-j}=(-1)^{m''-m'}=(-1)^m$ (as for this term, we take $m''=m + m'$) since the degree of the map $G_{B(S)}$ is 1. The above expression thus becomes \begin{align*} &= (f\boxtimes g) \Big( \sum_{j=m'}^{m''} (-1)^{-n' (m''-j)} (-1)^{(m'+n')(j-m')}(-1)^{j-m'}t([r_{j+1}|\cdots | r_{m''}] , b') \\ & \hspace{.5cm} (1\otimes {[} r_1|\cdots | r_{j-m'} | f' ( {[} r_{j-m'+1}|\cdots | r_j{]}) | r_{j+1} |\cdots | r_{m''} {]}\otimes 1) \otimes ( g'({[} s_1|\cdots | s_i{]} )\otimes [s_{i+1} | \cdots | s_{n''}{]}\otimes 1) \\ & \hspace{.5cm}+\sum_{i=n'}^{n''} (-1)^{(i-n')m'} (-1)^{(m'+n')(m+i-n')}(-1)^{i-n'}(-1)^{m} t(a', [s_1 | \cdots | s_{i-n'}]) \\ & \hspace{.5cm} (1\otimes {[} r_1| \cdots | r_{m}] \otimes f'({[} r_{m+1}|\cdots | r_{m''}]))\otimes (1\otimes {[} s_1|\cdots | s_{i-n'} | g'({[} s_{i-n'+1} |\cdots | s_i{]})| s_{i+1}|\cdots | s_{n''}{]}\otimes 1) \Big) \\ & = \sum_{j=m'}^{m''} (-1)^{-n' (m''-j)} (-1)^{(m'+n')(j-m')}(-1)^{j-m'}\\ & \hspace{.5cm} f({[} r_1|\cdots | r_{j-m'} | f' ({[} r_{j-m'+1}| \cdots | r_j{]})| \hat{b}r_{j+1} |\cdots | \hat{b}r_{m''} {]}) \otimes \hat{a}'( g'({[}s_1|\cdots | s_i{]})) g( {[} s_{i+1}| \cdots | s_{n''}{]})\\ & \hspace{.5cm}+\sum_{i=n'}^{n''} (-1)^{(i-n')m'} (-1)^{(m'+n')(m+i-n')}(-1)^{i-n'}(-1)^{m} \\ & \hspace{.5cm}f({[} r_1|\cdots | r_{m} {]} ) \hat{b}( f'({[}r_{m+1}|\cdots | r_{m''}{]})) \otimes g({[}\hat{a}'s_1|\cdots | \hat{a}'s_{i-n'} | g'({[}
s_{i-n'+1} |\cdots | s_i{]})| s_{i+1} |\cdots | s_{n''}{]}). \\ \end{align*} We wish to rewrite the sums. The first sum involves $f\circ_t f'$, in which the term indexed by $j$ has a sign $(-1)^{(m'-1)(j-m')}$. The second sum involves $g\circ_t g'$, in which the term indexed by $i$ has a sign $(-1)^{(n'-1)(i-n')}$. Accommodating these signs and rewriting, the above is equal to $$ (-1)^{n'(m+n-1)} (f\circ_t f')\boxtimes (g ' \smile_t g) + (-1)^{m(n'-1)}(f\smile_t f')\boxtimes (g \circ_t g')$$ applied to the input. By Proposition~\ref{S-twisted-comm-prop}, reversing the order of $g'$, $g$ in the first term, we finally find that $(f\boxtimes g )\circ (f'\boxtimes g')$ is equal to \[ (-1)^{n'(m-1)} t^{-1}(a,b') (f\circ_t f') \boxtimes (g\smile_t g') + (-1)^{m(n'-1)} (f\smile_t f') \boxtimes (g\circ_t g') . \] Similarly, \begin{align*} &(f' \boxtimes\ g')\circ (f\boxtimes g) \\ & \hspace{2cm}= (-1)^{n(m'+n'-1)} (f'\circ_t f)\boxtimes (g \smile_t g') + (-1)^{m'(n-1)}(f'\smile_t f)\boxtimes (g' \circ_t g). \\ \end{align*} By Proposition~\ref{R-twisted-comm-prop}, reversing the order of $f'$, $f$ in the second term, we obtain \[ (-1)^{n(m'+n'-1)} (f'\circ_t f)\boxtimes (g\smile_t g') + (-1)^{m' (m+n-1)}t^{-1}(a',b) (f\smile_t f') \boxtimes (g'\circ_t g) . \] We thus have found that \[ \begin{aligned} & [ f\boxtimes g , f'\boxtimes g'] \\ & = (f\boxtimes g ) \circ (f'\boxtimes g') - (-1)^{(m+n-1)(m'+n'-1)} (f'\boxtimes g')\circ (f\boxtimes g) \\ & = (-1)^{n'(m-1)} t^{-1}(a,b') (f\circ_t f')\boxtimes (g\smile_t g') + (-1)^{m(n'-1)} (f\smile_t f') \boxtimes (g\circ_t g') \\ & \quad - (-1)^{(m'+n'-1)(m-1)} (f'\circ_t f)\boxtimes (g\smile_t g') - (-1)^{(n'-1)(m+n-1)} t^{-1}(a',b) (f\smile_t f') \boxtimes (g'\circ_t g) . \end{aligned} \] We are now ready to compare with the formula given at the start of this section, for $[f\otimes g , f'\otimes g']$ on the left side of the claimed isomorphism in the theorem statement.
This is \[ \begin{aligned} &(-1)^{(m-1)n'} [f,f']_t \otimes (g\smile_t g') + (-1)^{m(n'-1)} (f\smile_t f') \otimes [g,g']_t \\ & = (-1)^{(m-1)n'} \big(t^{-1}(a,b') f\circ_t f' - (-1)^{(m-1)(m'-1)} f'\circ_t f \big) \otimes (g\smile_t g') \\ & \quad +(-1)^{m(n'-1)} (f\smile_t f') \otimes \big( g\circ_t g' - (-1)^{(n-1)(n'-1)} t^{-1}(a',b) g'\circ_t g\big) . \end{aligned} \] Comparison with the previous calculation shows that \[ \phi([f\otimes g,f'\otimes g']) = [ f\boxtimes g , f'\boxtimes g'] = [ \phi(f\otimes g), \phi(f'\otimes g')] .\qedhere \] \end{proof} \begin{remark} Bergh and Oppermann's \cite[Theorem 4.6]{BO} is a special case of our results. Their result is recovered by restricting the isomorphism of Theorem \ref{vect-space-theorem} to the subspace graded by $\ker t(-,B)\times \ker t(A,-)\subseteq A\times B$. This identifies exactly the part of $\bigoplus \ \HH^*(R,R_{\hat{b}})^a\otimes \HH^*(S,{}_{\hat{a}}S)^b$ on which the twists act trivially. The Gerstenhaber bracket on $\HH^*(R\otimes^tS)$ was partially computed by Grimley, Nguyen, and the second author in \cite[Theorem 6.3]{GNW} in terms of Bergh and Oppermann's decomposition. It is shown in loc.~cit.~that on the untwisted part of $\HH^*(R\otimes^tS)$ (i.e.~the restriction to $\ker t(-,B)\times \ker t(A,-)$) the bracket can be computed explicitly using the bracket on the two factors. Theorem \ref{thm:G-alg} above extends this to all of Hochschild cohomology, explaining how to account for the twists. \end{remark} \section{Quantum complete intersections and iterated twisted products}\label{sec:examples} In this section we present a series of examples, namely the quantum complete intersections, as an application of our main theorem. We also explain how to extend the theorem to the case of iterated twisted tensor product algebras. We begin with the case of two indeterminates, continuing from Example \ref{exqci} above. Fix $q\in k^\times$. 
We consider the Hochschild cohomology of the quantum complete intersection $\Lambda_q(m,n)=k[x]/(x^m)\otimes^tk[x]/(x^n)$. The two factors are graded by $\mathbb{Z}$, generated in degree $1$, and $t:\mathbb{Z}\times\mathbb{Z}\to k^\times$ is the bicharacter $t(a,b) = q^{ab}$. The Hochschild cohomology was computed for $m=n=2$ in \cite{BGMS}, and then later for all $m$ and $n$ in \cite[Theorem 3.3]{BE}. In \cite{EF} the cup product was computed when $m=n$ is the order of $q$ in $k^\times$. The Gerstenhaber brackets were computed fully in \cite[Section 5]{GNW} in the case $m=n=2$. All of these results can be recovered using the main theorem of this paper, but for the sake of novelty we deal with a new case here, and use Theorem \ref{thm:G-alg} to calculate the Gerstenhaber brackets on $\HH^*(\Lambda_q(m,n))$ for all $m$ and $n$. There are many cases to consider, depending on the characteristic of $k$ and the order of $q$ in $k^\times$, and for the sake of brevity we will only consider here the case that $q$ has infinite order. Let us emphasize, however, that all of the cases can be dealt with readily (one only needs the patience to write them all out). \begin{theorem}\label{qcibracketthm} If $q$ is not a root of unity then as an algebra \[ \HH^*(\Lambda_q(m,n))\cong k[U]/(U^2) \times_k {\textstyle \bigwedge^*_k}(V,W) , \] i.e.~it is the fiber product of $k[U]/(U^2)$, $U$ in degree $0$, with an exterior algebra $\bigwedge^*_k(V,W)$, $V,W$ in degree $1$. The bracket is given by \[ [V,U] = (m-1) U,\quad [W,U] = (n-1) U, \quad \text{and }\ [V,W]=0. \] \end{theorem} \begin{proof} The algebra structure is known \cite[Theorem 3.3]{BE}, but we give the full calculation to demonstrate Theorem \ref{cup-prod-thm}. Denote $\Lambda(m)=k[x]/(x^m)$ and take $b\in \mathbb{Z}$. We need to compute $\HH^*(\Lambda(m),\Lambda(m)_{\hat{b}})$.
There is a well-known $2$-periodic bimodule resolution of $\Lambda(m)$: \[ \cdots\xrightarrow{x\otimes 1-1\otimes x}\Lambda(m)^{{\sf ev}}[-m]\xrightarrow{\sum_{i=0}^{m-1} x^{m-i-1}\otimes x^{i}}\Lambda(m)^{{\sf ev}}[-1]\xrightarrow{x\otimes 1-1\otimes x}\Lambda(m)^{{\sf ev}}, \] where $[i]$ denotes the shift in grading by $i$. Applying $\Hom_{\Lambda(m)^{\sf ev}}(-,\Lambda(m)_{\hat{b}})$ produces the complex \[ \cdots\xleftarrow{x(1-q^{b})}\Lambda(m)[m]\xleftarrow{x^{m-1} (\sum_{i=0}^{m-1}q^{ib})}\Lambda(m)[1]\xleftarrow{x(1-q^{b})}\Lambda(m). \] From this one can read off the cohomology (assuming that $q^b\neq 1$ when $b\neq 0$): \begin{equation} \HH^i(\Lambda(m),\Lambda(m)_{\hat{b}})= \begin{cases} \Lambda(m) & \text{if }i=0, b=0\\ (x^{m-1}) & \text{if }i=0, b\neq 0\\ (\Lambda(m)/(mx^{m-1}))[\frac{i}{2}m]& \text{if }i>0 \text{ is even}, b=0\\ {\rm Ann}_{\Lambda(m)}(mx^{m-1})[\frac{i-1}{2}m+1]& \text{if }i>0 \text{ is odd}, b=0\\ 0& \text{if }i>0, b\neq0.\\ \end{cases} \end{equation} The computation for $\HH^j(\Lambda(n),{}_{\hat{a}}\Lambda(n))$ is essentially the same. When one comes to combine these vector spaces according to the decomposition of Theorem \ref{vect-space-theorem}, one finds that almost all of the terms have at least one of the two factors equal to zero. The only surviving terms are \begin{align*} \HH^0(\Lambda_q(m,n))^{0,0} \ = \ \ & k(1\otimes 1) & \ \\ \HH^0(\Lambda_q(m,n))^{m-1,n-1} \ = \ \ & k(x^{m-1}\otimes x^{n-1}) \\ \HH^1(\Lambda_q(m,n))^{0,0} \ = \ \ & k(x[1]\otimes1) + k(1 \otimes x[1]) \\ \HH^2(\Lambda_q(m,n))^{0,0} \ = \ \ & k (x[1] \otimes x[1]). \end{align*} This matches the desired result if we set $U=x^{m-1}\otimes x^{n-1}$ and $V=x[1]\otimes1$ and $W= 1 \otimes x[1]$.
Lastly, $[V,W]=0$ for degree reasons, and using Theorem \ref{thm:G-alg}, \begin{align*} [V,U] & = [x[1]\otimes1,x^{m-1}\otimes x^{n-1}] \\ & = [x[1],x^{m-1}]_t\otimes x^{n-1} - (x[1]\smile_t x^{m-1})\otimes[1,x^{n-1}]_t\\ & = (m-1)x^{m-1}\otimes x^{n-1} -0 , \end{align*} where as usual, bracketing a degree~1 element with a degree~0 element amounts to applying the corresponding derivation to the algebra element. Similarly $[W,U]=(n-1)x^{m-1}\otimes x^{n-1}$, and this completes the proof. \end{proof} Now we point out how the main result of this paper can be extended to the case of iterated twisted tensor products. Suppose we have abelian groups $A_1,\ldots,A_n$ and a collection $t$ of bicharacters $t_{ij}:A_i\times A_{j}\to k^\times$ for $i<j$. If $R_1,\ldots,R_n$ are algebras graded by $A_1,\ldots,A_n$ respectively, we can form the twisted tensor product $R_1\otimes^t\cdots \otimes^tR_n$. As a graded vector space this is $R_1\otimes \cdots \otimes R_n$, and the multiplication is determined by $(1\otimes\cdots \otimes r_i\otimes\cdots \otimes 1) \cdot (1\otimes\cdots \otimes r_j\otimes\cdots \otimes 1)= $ \begin{align*} (1\otimes\cdots \otimes r_ir_j\otimes\cdots \otimes 1) \quad & \text{if } i=j , \\ (1\otimes\cdots \otimes r_i\otimes\cdots \otimes r_j\otimes\cdots \otimes 1) \quad & \text{if } i<j , \\ t_{ji}(r_j,r_i)(1\otimes\cdots \otimes r_j\otimes\cdots \otimes r_i\otimes\cdots \otimes 1) \quad& \text{if } i>j. \end{align*} \begin{corollary}\label{coriterated} Let $R_1,\ldots,R_n$ be algebras graded by abelian groups $A_1,\ldots,A_n$ respectively, each satisfying the finiteness conditions from Section \ref{sec:main}, and let $t=\{t_{ij}\}$ be a collection of bicharacters as above. The Hochschild cohomology of $R=R_1\otimes^t\cdots \otimes^tR_n$ can be decomposed \[ \HH^*(R,R)\cong \bigoplus_{a_1,\ldots,a_n} \bigotimes_{i=1}^{n}\HH^*(R_i, {}_{\hat{a}_{1}\cdots\hat{a}_{i-1}}(R_i)_{\hat{a}_{i+1}\cdots\hat{a}_{n}})^{a_i}.
\] The cup product and Gerstenhaber bracket on $\HH^*(R,R)$ can be computed from that of the factors in this decomposition, in a similar way to Theorems \ref{cup-prod-thm} and \ref{thm:G-alg}. \end{corollary} We leave the last statement to be interpreted properly by the interested reader. We also skip the proof, since it is a simple induction applying Theorem \ref{vect-space-theorem} repeatedly (using the fact that $R_1\otimes^t\cdots \otimes^tR_n$ can be viewed as an iterated twisted tensor product with two factors at a time, in a similar way to \cite[Lemma 5.1]{BO}). As an application of Corollary \ref{coriterated} we compute the Hochschild cohomology of quantum complete intersections with more than two indeterminates. This has yet wider applications; for example, many Nichols algebras arising in the theory of pointed Hopf algebras have associated graded algebras that are quantum complete intersections, and this structure can have important homological implications. Let $q$ be a collection of elements $q_{ij}$ in $k^\times$ for $i<j$, and set $t_{ij}:\mathbb{Z}\times \mathbb{Z}\to k^\times$ to be the bicharacter $t_{ij}(a,b)=q_{ij}^{ab}$. Extending our earlier notation from the case of two indeterminates to many, we set \[ \Lambda_q(m_1,\ldots,m_n)=\Lambda(m_1)\otimes^t\cdots \otimes^t \Lambda(m_n). \] \begin{theorem}\label{iteratedqcithm} Assume that the scalars $q_{ij}$ freely generate a free abelian subgroup of $k^\times$. Then as an algebra \[ \HH^*(\Lambda_q(m_1,\ldots,m_n))\cong k[U]/(U^2) \times_k {\textstyle \bigwedge^*_k}(V_1,\ldots,V_n) , \] i.e.~it is the fiber product of $k[U]/(U^2)$, $U$ in degree $0$, with an exterior algebra $\bigwedge^*_k(V_1,\ldots,V_n)$, $V_i$ in degree $1$. The bracket is given by \[ [V_i,U] = (m_i-1) U, \quad \text{and }\ [V_i,V_j]=0. \] \end{theorem} The proof is very similar to that of Theorem \ref{qcibracketthm}, and so we will omit it.
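As a computational aside (illustrative only, and not part of the formal development), the iterated twisted multiplication above is simple enough to machine-check. The following Python sketch models $\Lambda_q(m_1,m_2,m_3)$ on its monomial basis; the truncations $m=(2,3,2)$ and the scalars $q_{01}=2$, $q_{02}=3$, $q_{12}=5$ (indices $0$-based here) are hypothetical choices, taken to be distinct primes so that the $q_{ij}$ freely generate a free abelian subgroup of $\mathbb{Q}^\times$.

```python
from fractions import Fraction
from itertools import product

# Basis monomials of Lambda_q(m_1, m_2, m_3) are exponent tuples e
# with 0 <= e[i] < m[i]; the generator x_i is the i-th unit tuple.
m = (2, 3, 2)
# q[(i, j)] = q_{ij} for i < j (0-indexed); hypothetical values.
q = {(0, 1): Fraction(2), (0, 2): Fraction(3), (1, 2): Fraction(5)}

def mult(u, v):
    """Product of basis monomials u, v as a (coefficient, monomial) pair.

    Moving the factor x_j^{v[j]} of v leftward past the factor x_i^{u[i]}
    of u, for i > j, contributes the twist q_{ji}^{v[j]*u[i]}; the result
    is zero when any exponent reaches the truncation m[i].
    """
    e = tuple(a + b for a, b in zip(u, v))
    if any(e[i] >= m[i] for i in range(len(m))):
        return Fraction(0), None
    coeff = Fraction(1)
    for j in range(len(m)):
        for i in range(j + 1, len(m)):
            coeff *= q[(j, i)] ** (v[j] * u[i])
    return coeff, e

def unit(i):
    """Exponent tuple of the generator x_i."""
    return tuple(1 if k == i else 0 for k in range(len(m)))

basis = list(product(*(range(mi) for mi in m)))
```

Multiplying out all triples of basis monomials in both bracketings confirms associativity, and one checks the commutation rule $x_jx_i = q_{ij}\,x_ix_j$ for $i<j$; for instance `mult(unit(1), unit(0))` returns the coefficient $q_{01}$ on the monomial $x_0x_1$.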
In the notation there, $U$ corresponds to $x^{m_1-1}\otimes\cdots \otimes x^{m_n-1}$ and each $V_i$ corresponds to $1\otimes \cdots \otimes x[1]\otimes\cdots \otimes 1$ (the $x[1]$ in the $i$th position). In general, the description of Hochschild cohomology of $\Lambda_q(m_1,\ldots,m_n)$ depends on what kind of subgroup of $k^\times$ the scalars $q_{ij}$ generate. Various other cases are easy to compute as well, for example when all of the $q_{ij}$ are equal. Bergh and Oppermann computed a \emph{part} of Hochschild cohomology in the case of many indeterminates, with its algebra structure. The authors of \cite{BE,BGMS,EF} do not treat the many indeterminate case; it likely would not have been feasible with the methods available. Thus Theorem \ref{iteratedqcithm} illustrates the usefulness of our main theorem. \section{Skew group algebras}\label{sec:examples2} In this section we give another large class of examples to which our main theorem applies, namely skew group algebras for which the group is abelian. We treat first the case of a symmetric algebra with group action. Assume that $k$ is algebraically closed of characteristic $0$, and let $G$ be a finite abelian group acting on a finite dimensional vector space $V$. This action extends naturally to an action on the symmetric algebra $S(V)$. We can form the twisted group algebra $S(V)\rtimes G$. As a graded vector space this is the tensor product $S(V)\otimes kG$, and the multiplication is given by \[ (f\otimes g)\cdot (f'\otimes g')= f g(f') \otimes gg' . \] Let us explain how to see $S(V)\rtimes G$ as a bicharacter twisted tensor product. Recall that $\widehat{G}=\Hom(G,k^\times)$ denotes the group of characters of $G$. There is a natural bicharacter $t:\widehat{G}\times G\to k^\times$, $t(\phi, g)=\phi(g)$. Given $\phi\in \widehat{G}$ we consider the eigenspace \[ S(V)^{\phi}=\big\{ f : g(f)=\phi(g) f \text{ for all } g\in G\big\} . 
\] Since $k$ is algebraically closed of characteristic $0$ we have a decomposition \[ S(V)=\bigoplus_{\phi \in \widehat{G}}S(V)^\phi. \] In fact, this makes $S(V)$ into a $\widehat{G}$-graded algebra. The group algebra $kG$ is trivially $G$-graded. Putting all this structure together, we find that \[ S(V)\rtimes G=S(V)\otimes^t kG. \] Using this observation we can recover---in the abelian case---a result of Buchweitz, proved independently in work of Farinati~\cite{F} and Ginzburg and Kaledin~\cite{GK}. This can also be seen as resulting from a special case of a spectral sequence of Negron \cite{CN} (which degenerates for us, by the assumption on $k$). \begin{corollary}\label{skewcor} There is an isomorphism of Gerstenhaber algebras \begin{align*} \HH^*(S(V)\rtimes G) & \cong\bigoplus_{g} \HH^*(S(V),S(V)_{\hat{g}})^G\\ & \cong \HH^*(S(V),S(V)\rtimes G)^G . \end{align*} \end{corollary} \begin{proof} Using our main theorem, \[ \HH^*(S(V)\rtimes G)\cong \bigoplus_{\phi,g} \HH^*(S(V),S(V)_{\hat{g}})^{\phi}\otimes \HH^*(kG,{}_{\phi}kG)^{g}. \] Since $kG$ is semisimple, $\HH^*(kG,kG) \cong kG$, and $\HH^*(kG,{}_{\phi}kG)=0$ if $\phi\neq 1$. From here the statement follows. \end{proof} Shepler and the second author investigated the cup product structure of the Hochschild cohomology $\HH^*(S(V)\rtimes G)$ in \cite{SW}. Their description of this structure can be recovered by inspecting our proof above (but note that they work more generally with any finite group). Similarly, Negron and the second author described Gerstenhaber brackets on $\HH^*(S(V)\rtimes G)$ for any finite group $G$, and~\cite[Theorem~5.2.3]{NW2} can be recovered, in case $G$ is abelian, from our Corollary~\ref{skewcor} above. On a related note, we remark that the algebra $\HH^*(S(V),S(V)\rtimes G)$ appearing above is the orbit Hochschild cohomology which we considered in Section \ref{sec:cup}. \begin{remark} More generally, one can consider an action of $G$ on any $k$-algebra $R$.
If $G$ is abelian and $k$ has ``enough'' roots of unity, then the action can be diagonalized, and the skew group algebra $R\rtimes G$ can be realized as a bicharacter twisted tensor product, as it was for the case $R=S(V)$ above. Therefore, we can use our decomposition theorem to compute the Hochschild cohomology of $R\rtimes G$. More generally still, the same remarks apply when $G$ acts on an algebra $R$ and grades an algebra $S$. Then one can form a twisted tensor product algebra $R\rtimes S$ with twist determined by both the $G$-grading and the $G$-action; the product is given by $(r \otimes s)\cdot (r'\otimes s') = r\, g(r')\otimes ss' $ when $s\in S^g$. Under the same hypotheses as above one can replace this with a bicharacter twisted tensor product, and thereby compute its Hochschild cohomology. \end{remark}
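As an illustrative aside, the bicharacter-twisted multiplication rule can be checked mechanically on basis monomials $x^a\otimes y^b$ of $\Lambda_q(m,n)=\Lambda(m)\otimes^t\Lambda(n)$ with $t(a,b)=q^{ab}$. The following sketch (the helper name and representation are ours, not part of the paper) multiplies monomials and recovers the graded commutation relation $yx=q\,xy$:

```python
from fractions import Fraction

def mono_mul(m, n, q, A, B):
    """Product of basis monomials x^a ⊗ y^b in Λ(m) ⊗^t Λ(n),
    with bicharacter t(a, b) = q^(a·b).  Returns (scalar, monomial),
    or (0, None) when a power is truncated by x^m = 0 or y^n = 0."""
    (a, b), (c, d) = A, B
    if a + c >= m or b + d >= n:
        return Fraction(0), None
    # moving the y^b of the first factor past the x^c of the second
    # picks up the scalar t(c, b) = q^(c*b), per the displayed rule
    return q ** (c * b), (a + c, b + d)

q = Fraction(2)
# graded commutation: (1⊗y)(x⊗1) = q · (x⊗1)(1⊗y)
s_yx, mono_yx = mono_mul(3, 3, q, (0, 1), (1, 0))
s_xy, mono_xy = mono_mul(3, 3, q, (1, 0), (0, 1))
assert mono_yx == mono_xy == (1, 1) and s_yx == q * s_xy
```

With $m=n=3$ and $q=2$ this recovers $yx=qxy$, and truncated powers return zero, matching $x^m=0$ and $y^n=0$.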
\section{Introduction} \label{sec:intro} \input{tex/intro} \input{tex/notation} \section{Related Work} \label{sec:related} \input{tex/related} \section{Background and Formulation} \label{sec:background} \input{tex/background} \section{Design and Optimization} \label{sec:impl} \input{tex/impl} \section{Model Validation} \label{sec:pcb-validation} \input{tex/pcb_validation} \section{Experimental Evaluation} \label{sec:exp} \input{tex/experiments} \section{Numerical Evaluation} \label{sec:sensitivity} \input{tex/sensitivity} \section{Conclusion} \label{sec:conclusion} \input{tex/conclusion} \section{Acknowledgements} \label{sec:acks} This work was supported in part by NSF grants ECCS-1547406, CNS-1650685, and CNS-1827923. We thank Jackson Welles for his contributions to the assembly and testing of the PCB cancellers. \section{Appendix A: The PCB BPF Model} \label{append:pcb-model} \input{tex/append_pcb_model} \bibliographystyle{ACM-Reference-Format} \subsection{Optimization of Canceller Configuration} \label{ssec:algo-opt} We present a general optimization framework that jointly optimizes all the FDE taps in the FDE-based cancellers. Although our implemented RFIC and PCB cancellers have only two FDE taps, both their models and the optimization of the canceller configuration can be easily extended to the case with a larger number of FDE taps. The inputs to the FDE-based canceller configuration optimization are: (i) the RFIC or PCB canceller model with a given number of FDE taps, $\NumTap$, (ii) the antenna interface response, $H_{\textrm{SI}}(f_k)$, and (iii) the desired RF SIC bandwidth, $\{f_k\}$. Then, the optimized canceller configuration is obtained by solving the optimization problem $\textsf{(P3)}$ or $\textsf{(P2)}$, respectively, in which $H^{\textrm{I}}(f_k)$ is given by {\eqref{eq:rfic-tf}} and $H^{\textrm{P}}(f_k)$ is given by {\eqref{eq:pcb-tf-calibrated}}.
\begin{align*} \textsf{(P3)}\ \min: &\ \mathop{\textstyle\sum}_{k=1}^{K} \NormTwo{H_{\textrm{res}}^{\textrm{I}}(f_k)}^2 = \mathop{\textstyle\sum}_{k=1}^{K} \NormTwo{ H_{\textrm{SI}}(f_k) - H^{\textrm{I}}(f_k) }^2 \\ \textrm{s.t.:} &\ \ICTapAmp{i} \in [A_{\textrm{min}}^{\textrm{I}}, A_{\textrm{max}}^{\textrm{I}}],\ \ICTapPhase{i} \in [-\pi, \pi], \\ &\ \ICTapCF{i} \in [f_{\textrm{c,min}}, f_{\textrm{c,max}}],\ \ICTapQF{i} \in [Q_{\textrm{min}}, Q_{\textrm{max}}],\ \forall i. \end{align*} \begin{align*} \textsf{(P2)}\ \min: &\ \mathop{\textstyle\sum}_{k=1}^{K} \NormTwo{H_{\textrm{res}}^{\textrm{P}}(f_k)}^2 = \mathop{\textstyle\sum}_{k=1}^{K} \NormTwo{ H_{\textrm{SI}}(f_k) - H^{\textrm{P}}(f_k) }^2 \\ \textrm{s.t.:} &\ \PCBTapAmp{i} \in [A_{\textrm{min}}^{\textrm{P}}, A_{\textrm{max}}^{\textrm{P}}],\ \PCBTapPhase{i} \in [-\pi, \pi], \\ &\ \PCBTapCFCap{i} \in [C_{\textrm{F,min}}, C_{\textrm{F,max}}],\ \PCBTapQFCap{i} \in [C_{\textrm{Q,min}}, C_{\textrm{Q,max}}],\ \forall i. \end{align*} Note that the objectives of $\textsf{(P3)}$ and $\textsf{(P2)}$ are non-convex and non-linear due to the specific forms of the configuration parameters, such as (i) the higher-order terms introduced by $f_k$ in the canceller models (see {\eqref{eq:rfic-tf}} and {\eqref{eq:pcb-tf-calibrated}}), and (ii) the non-linear, non-convex trigonometric terms introduced by the phase controls, $\ICTapPhase{i}$ and $\PCBTapPhase{i}$. Moreover, the non-ideal, time-varying antenna interface, $H_{\textrm{SI}}(f_k)$, makes the optimization problem more challenging. In general, it is difficult to maintain analytical tractability of $\textsf{(P3)}$ or $\textsf{(P2)}$. Therefore, a non-linear optimization solver is used to solve $\textsf{(P3)}$ and $\textsf{(P2)}$, where the obtained canceller configuration may only be locally optimal (thus not globally optimal).
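As a concrete illustration, $\textsf{(P3)}$ can be handed to a generic bounded local solver. The sketch below uses SciPy's \texttt{minimize} on a 2-tap model of the form {\eqref{eq:rfic-tf}}; the synthetic antenna interface response and all helper names are our own assumptions, not measured data:

```python
import numpy as np
from scipy.optimize import minimize

fk = np.linspace(890e6, 910e6, 52)          # 20 MHz of subcarriers around 900 MHz

def h_canc(p, f):
    """2-tap FDE canceller model: p = [A1, phi1, fc1, Q1, A2, phi2, fc2, Q2]."""
    H = np.zeros_like(f, dtype=complex)
    for A, phi, fc, Q in p.reshape(-1, 4):
        H += A * np.exp(-1j * phi) / (1 - 1j * Q * (fc / f - f / fc))
    return H

def residual(p, H_si, f):
    # objective of (P3): total residual SI power over the band
    return np.sum(np.abs(H_si - h_canc(p, f)) ** 2)

# synthetic (hypothetical) antenna interface: ~ -20 dB isolation plus some delay
H_si = 0.1 * np.exp(-1j * 2 * np.pi * fk * 8e-9)

bounds = [(0.001, 0.3), (-np.pi, np.pi), (875e6, 925e6), (1, 50)] * 2
x0 = np.array([0.05, 0.0, 895e6, 5.0, 0.05, 0.5, 905e6, 5.0])
res = minimize(residual, x0, args=(H_si, fk), bounds=bounds)
# res.x is a locally (not necessarily globally) optimal configuration
```

Because the objective is non-convex, the returned configuration depends on the starting point; in practice a handful of restarts is enough to find a sufficiently good local minimum.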
However, in practice, it is unnecessary to obtain the globally optimal solution to $\textsf{(P3)}$ or $\textsf{(P2)}$ as long as the local minimum achieved is sufficiently good, as we will show later in Sections~\ref{sec:eval-rfic} and~\ref{sec:eval-pcb}. In our FD radio design described in Section~\ref{sec:eval-pcb}, the optimized PCB canceller configuration can be obtained in less than $\SI{10}{ms}$ via a non-optimized solver. \subsection{FD Background and Notation} Fig.~\ref{fig:diagram} shows the block diagram of a single-antenna FD radio using a circulator at the antenna interface. Due to the extremely strong SI power level and the limited dynamic range of the analog-to-digital converter (ADC) at the RX, a total amount of $90$--$\SI{110}{dB}$ SIC must be achieved across the antenna, RF, and digital domains. Specifically, (i) SI suppression is first performed at the antenna interface, (ii) an RF SI canceller then taps a reference signal at the output of the TX power amplifier (PA) and performs SIC at the input of the low-noise amplifier (LNA) at the RX, and (iii) a digital SI canceller further suppresses the residual SI. \begin{figure}[!t] \centering \includegraphics[width=0.85\columnwidth]{./figs/diagram/diagram_fd_radio.pdf} \vspace{-0.5\baselineskip} \caption{Block diagram of an FD radio.} \label{fig:diagram} \vspace{-\baselineskip} \end{figure} Consider a wireless bandwidth of $B$ that is divided into $K$ orthogonal frequency channels. The channels are indexed by $k \in \{1,\dots,K\}$, and we denote the center frequency of the $k^{\textrm{th}}$ channel by $f_k$.\footnote{We use discrete frequency values $\{f_k\}$ since in practical systems, the antenna interface response is measured at discrete points (e.g., per OFDM subcarrier).
However, the presented model can also be applied to cases with continuous frequency values.} We denote the antenna interface response by $H_{\textrm{SI}}(f_k)$ with amplitude $|H_{\textrm{SI}}(f_k)|$ and phase $\angle H_{\textrm{SI}}(f_k)$. Note that the actual SI channel includes the TX-RX leakage from the antenna interface as well as the TX and RX transfer functions at the baseband from the perspective of the digital canceller. Since the paper focuses on achieving wideband RF SIC, we use $H_{\textrm{SI}}(f_k)$ to denote the antenna interface response and also refer to it as the \emph{SI channel}. We define the \emph{TX/RX isolation} as the ratio (in $\SI{}{dB}$, usually a negative value) between the residual SI power at the RX input and the TX output power, which includes the amount of TX/RX isolation achieved by both the antenna interface and the RF canceller/circuitry. We then define \emph{RF SIC} as the absolute value of the TX/RX isolation. We also define \emph{overall SIC} as the total amount of SIC achieved in both the RF and digital domains. The antenna interface used in our experiments typically provides a TX/RX isolation of around $-\SI{20}{dB}$. \subsection{Problem Formulation} Ideally, an RF canceller is designed to best emulate the antenna interface, $H_{\textrm{SI}}(f_k)$, across a desired bandwidth, $B=[f_1, f_K]$. We denote by $H(f_k)$ the frequency response of an RF canceller and by $H_{\textrm{res}}(f_k) := H_{\textrm{SI}}(f_k) - H(f_k)$ the \emph{residual SI channel response}. The optimized RF canceller configuration is obtained by solving {\textsf{(P1)}}: \begin{align} \textsf{(P1)}\ & \textrm{min:}\ \mathop{\textstyle\sum}\limits_{k=1}^{K} \NormTwo{ H_{\textrm{res}}(f_k) }^2 = \mathop{\textstyle\sum}\limits_{k=1}^{K} \NormTwo{ H_{\textrm{SI}}(f_k) - H(f_k) }^2 \nonumber \\ \textrm{s.t.:}\ & \textrm{constraints on configuration parameters of}\ H(f_k),\ \forall k.
\nonumber \end{align} The RF canceller configuration obtained by solving $\textsf{(P1)}$ minimizes the residual SI power referred to the TX output. As described in Section~\ref{sec:intro}, one main challenge associated with the design of the RF canceller with response $H(f_k)$ to achieve wideband SIC is due to the highly frequency-selective antenna interface, $H_{\textrm{SI}}(f_k)$. Moreover, an efficient RF canceller configuration scheme needs to be designed so that the canceller can adapt to the time-varying $H_{\textrm{SI}}(f_k)$. \subsection{RF Canceller Designs} \label{ssec:background-previous-work} \subsubsection*{Delay Line-based RF Cancellers} An RF canceller design introduced in~\cite{bharadia2013full} involves using $\NumTap$ delay line taps. Specifically, the $i^{\textrm{th}}$ tap is associated with a time delay of $\tau_i$, which is \emph{pre-selected} and \emph{fixed} depending on the selected circulator and antenna, and an amplitude control of $\Amp{i}$. Since the Fourier transform of a delay of $\tau_i$ is $e^{-j2\pi f\tau_i}$, an $\NumTap$-tap delay line-based RF canceller has a frequency response of $H^{\textrm{DL}}(f_k) = \sum_{i=1}^{\NumTap} \Amp{i} e^{-j2\pi f_k \tau_i}$. The configurations of the amplitude controls, $\{\Amp{i}\}$, are obtained by solving $\textsf{(P1)}$ with $H(f_k) = H^{\textrm{DL}}(f_k)$. In~\cite{bharadia2013full}, an RF canceller with $\NumTap=16$ delay line taps is implemented. In~\cite{korpi2016full}, a similar approach is considered with $\NumTap=3$ and an additional phase control, $\Phase{i}$, on each tap, resulting in an RF canceller model of $H^{\textrm{DL}}(f_k) = \sum_{i=1}^{3} \Amp{i} e^{-j(2\pi f_k \tau_i + \Phase{i})}$. As mentioned in Section~\ref{sec:intro}, although such cancellers can achieve wideband SIC, this approach is more suitable for large-form-factor nodes than for compact/small-form-factor implementations.
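For reference, the delay-line model above is straightforward to evaluate numerically; a minimal sketch (the helper name is ours) that covers both the fixed-delay form of~\cite{bharadia2013full} and the variant with per-tap phase controls of~\cite{korpi2016full}:

```python
import numpy as np

def h_delay_line(f, amps, taus, phases=None):
    """N-tap delay-line canceller: H^DL(f) = sum_i A_i * exp(-j(2*pi*f*tau_i + phi_i)).
    With phases=None the per-tap phase terms are zero (amplitude-only taps)."""
    f = np.asarray(f, dtype=float)
    phases = np.zeros(len(amps)) if phases is None else phases
    return sum(A * np.exp(-1j * (2 * np.pi * f * tau + phi))
               for A, tau, phi in zip(amps, taus, phases))
```

For example, two equal-amplitude taps whose delays differ by half a carrier period cancel each other exactly at the carrier frequency.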
\subsubsection*{Amplitude- and Phase-based RF Cancellers} A compact design that is based on an amplitude- and phase-based RF canceller realized in an RFIC implementation is presented in~\cite{Zhou_NCSIC_JSSC14}. This canceller has a single tap with one amplitude and one phase control, $(\Amp{0}, \Phase{0})$, which can emulate the antenna interface, $H_{\textrm{SI}}(f_k)$, at \emph{only one} given cancellation frequency $f_{1}$ by setting $\Amp{0} = |H_{\textrm{SI}}(f_{1})|$ and $\Phase{0} = \angle H_{\textrm{SI}}(f_{1})$. The same design is also realized using discrete components on a PCB (without using any long delay lines), and is integrated in the ORBIT testbed for open-access FD research~\cite{flexicon_orbit_arxiv}. However, this type of RF canceller has limited RF SIC performance and bandwidth, since it can only emulate the antenna interface at a single frequency. \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[]{ \label{fig:fde-concept-diagram} \includegraphics[height=1.15in]{./figs/diagram/diagram_fde_canc.pdf}} \hspace{-12pt} \hfill \subfloat[]{ \label{fig:fde-concept-bpf} \includegraphics[height=1.15in]{./figs/diagram/rfic_filter_shape.pdf}} \vspace{-0.5\baselineskip} \caption{(a) Block diagram of an FDE-based RF canceller with $\NumTap=2$ FDE taps, and (b) illustration of amplitude and phase responses of an ideal $2^{\textrm{nd}}$-order BPF with amplitude, phase, center frequency, and quality factor (i.e., group delay) controls.} \label{fig:fde-concept} \vspace{-\baselineskip} \end{figure} \subsubsection*{An FDE-based RF Canceller} One compact design to achieve significantly enhanced performance and bandwidth of RF SIC is based on the technique of frequency-domain equalization (FDE) and was implemented in an RFIC~\cite{Zhou_WBSIC_JSSC15}.
Fig.~\ref{fig:fde-concept}\subref{fig:fde-concept-diagram} shows the diagram of an FDE-based canceller, where parallel reconfigurable bandpass filters (BPFs) are used to emulate the antenna interface response across wide bandwidth. We denote the frequency response of a general FDE-based RF canceller consisting of $\NumTap$ FDE taps by \begin{align} \label{eq:fde-canc-tf} H^{\textrm{FDE}}(f_k) & = \mathop{\textstyle\sum}\limits_{i=1}^{\NumTap} \FDETapTF{i}(f_k), \end{align} where $\FDETapTF{i}(f_k)$ is the frequency response of the $i^{\textrm{th}}$ FDE tap containing a reconfigurable BPF with amplitude and phase controls. Theoretically, any $m^{\textrm{th}}$-order RF BPF ($m=1,2,\dots$) can be used. Fig.~\ref{fig:fde-concept}\subref{fig:fde-concept-bpf} illustrates the amplitude and phase of a $2^{\textrm{nd}}$-order BPF with different control parameters. For example, increased BPF quality factors result in ``sharper'' BPF amplitudes and increased group delay. Since it is shown~\cite{ghaffari2011tunable,Zhou_WBSIC_JSSC15} that a $2^{\textrm{nd}}$-order BPF can accurately model the FDE $N$-path filter, the frequency response of an FDE-based RF\underline{I}C canceller with $\NumTap$ FDE taps is given by \begin{align} \label{eq:rfic-tf} H^{\textrm{I}}(f_k) = \mathop{\textstyle\sum}_{i=1}^{\NumTap} \ICTapTF{i}(f_k) = \mathop{\textstyle\sum}_{i=1}^{\NumTap} \frac{\ICTapAmp{i} \cdot e^{-j\ICTapPhase{i}}}{1 - j\ICTapQF{i} \cdot \left( \ICTapCF{i}/f_k-f_k/\ICTapCF{i}\right)}. \end{align} Within the $i^{\textrm{th}}$ FDE tap, $\ICTapTF{i}(f_k)$, $\ICTapAmp{i}$ and $\ICTapPhase{i}$ are the amplitude and phase controls, and $\ICTapCF{i}$ and $\ICTapQF{i}$ are the center frequency and quality factor of the $2^{\textrm{nd}}$-order BPF (see Fig.~\ref{fig:fde-concept}\subref{fig:fde-concept-bpf}). In the RFIC canceller, $\ICTapCF{i}$ and $\ICTapQF{i}$ are adjusted through a reconfigurable baseband capacitor and transconductors, respectively. 
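To connect {\eqref{eq:rfic-tf}} with the quality-factor/group-delay picture of Fig.~\ref{fig:fde-concept}\subref{fig:fde-concept-bpf}, a single tap can be evaluated numerically. In the sketch below (names are ours), the group delay of this $2^{\textrm{nd}}$-order BPF model at its center frequency works out to $Q/(\pi f_c)$, so a larger $Q$ indeed means a larger group delay:

```python
import numpy as np

def fde_tap(f, A, phi, fc, Q):
    """One FDE tap of eq. (rfic-tf): 2nd-order BPF with amplitude and phase controls."""
    return A * np.exp(-1j * phi) / (1 - 1j * Q * (fc / f - f / fc))

f = np.linspace(880e6, 920e6, 2001)
H = fde_tap(f, 1.0, 0.0, 900e6, 10.0)
# group delay = -d(phase)/d(omega), estimated numerically from the unwrapped phase
tau_g = -np.gradient(np.unwrap(np.angle(H)), 2 * np.pi * f)
i0 = np.argmin(np.abs(f - 900e6))
# at f = fc: |H| = A, and tau_g = Q / (pi * fc) for this model
```

At the center frequency the amplitude equals the control $A$ and the phase slope is set by $Q$, which is exactly the "four degrees of freedom" the text describes.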
As Fig.~\ref{fig:fde-concept}\subref{fig:fde-concept-bpf} and {\eqref{eq:rfic-tf}} suggest, one FDE tap features four degrees of freedom (DoF) so that the antenna interface, $H_{\textrm{SI}}(f_k)$, can be emulated \emph{not only in amplitude and phase, but also in the slope of amplitude and the slope of phase (i.e., group delay)}. Therefore, the RF SIC bandwidth can be significantly enhanced through FDE when compared with the amplitude- and phase-based RF cancellers. \subsection{The Ideal Case} To evaluate and validate the concept of FDE, we first simulate the RF SIC achieved by the RFIC canceller with varying number of FDE taps, $\NumTap \in \{1,2,3,4\}$, and desired RF SIC bandwidth of $\{20,40,80\}\thinspace\SI{}{MHz}$. We only report the RF SIC performance with up to $\NumTap=4$ FDE taps since, as we show below, this case can achieve sufficient RF SIC performance up to $\SI{80}{MHz}$ bandwidth.\footnote{We select $\{20,40,80\}\thinspace\SI{}{MHz}$ desired RF SIC bandwidth since: (i) these are the standard WiFi bandwidths, and (ii) the circulator has an operating frequency range of $\SI{100}{MHz}$.} Fig.~\ref{fig:eval-rfic} shows the TX/RX isolation (defined in Section~\ref{sec:background}) without and with the RFIC canceller, where the isolation at the antenna interface is $|H_{\textrm{SI}}(f_k)| \approx -\SI{20}{dB}$. The optimized RFIC canceller configurations are obtained by solving the optimization problem $\textsf{(P3)}$ with the following constraints: $\forall i$, $\ICTapAmp{i} \in [-40, -10]\thinspace\SI{}{dB}$, $\ICTapPhase{i} \in [-\pi, \pi]$, $\ICTapCF{i} \in [875, 925]\thinspace\SI{}{MHz}$, and $\ICTapQF{i} \in [1, 50]$. These constraints are practically selected and can be easily realized in an RFIC implementation. Fig.~\ref{fig:eval-rfic} shows that (i) a larger number of FDE taps can significantly enhance the RF SIC bandwidth, and (ii) under the same desired RF SIC bandwidth, a larger number of FDE taps results in better average RF SIC performance.
With only $2$ FDE taps, the RFIC canceller achieves an average $\SI{58}{dB}$ RF SIC with $\SI{20}{MHz}$ bandwidth, but this value decreases to only $\SI{47}{dB}$ with $\SI{40}{MHz}$ bandwidth. With $4$ FDE taps, the RFIC canceller achieves an average of more than $\SI{56}{dB}$ RF SIC for all considered RF SIC bandwidths up to $\SI{80}{MHz}$. Fig.~\ref{fig:eval-rfic} also shows that in some scenarios, adding more FDE taps does not help increase the RF SIC performance. For example, an RFIC canceller with $3$ FDE taps already achieves more than $\SI{70}{dB}$ RF SIC with $\SI{20}{MHz}$ bandwidth, and adding a fourth FDE tap only introduces minimal performance improvement. Note that we constrain the center frequency of the $2^{\rm nd}$-order BPFs in {\eqref{eq:rfic-tf}} to be within a $\SI{50}{MHz}$ range, and performance improvement is expected by relaxing this constraint. \subsection{Adding Practical Quantization Constraints} In practice, the values of $\{\ICTapAmp{i}, \ICTapPhase{i}, \ICTapCF{i}, \ICTapQF{i}\}$ cannot be arbitrarily selected from a continuous range. Instead, they are often restricted to discrete values given the resolution of the corresponding hardware components. We evaluate the effect of such practical quantization constraints with the following settings. The amplitude $\ICTapAmp{i}$ has a $\SI{0.25}{dB}$ resolution within the range $[-40, -10]\thinspace\SI{}{dB}$. For $\ICTapPhase{i} \in [-\pi, \pi]$, $\ICTapCF{i} \in [875, 925]\thinspace\SI{}{MHz}$, and $\ICTapQF{i} \in [1, 50]$, an $8$-bit resolution constraint is introduced, which is equivalent to $2^8=256$ discrete values spaced equally in the given range. The RFIC canceller configurations with quantization constraints are obtained by rounding the configurations returned by $\textsf{(P3)}$ to their closest quantized values.
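The per-bandwidth averages quoted above can be read off directly from the SI and residual responses; a small helper (our naming, not from the paper) for the average RF SIC in dB:

```python
import numpy as np

def avg_rf_sic_db(H_si, H_res):
    """Average RF SIC over the band: SI power before vs. after
    cancellation (residual), as a positive number of dB."""
    return 10 * np.log10(np.sum(np.abs(H_si) ** 2) /
                         np.sum(np.abs(H_res) ** 2))
```

For instance, a residual response 40 dB below the SI at every subcarrier yields an average RF SIC of 40 dB.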
Fig.~\ref{fig:eval-rfic-quantized} presents the TX/RX isolation without and with the RFIC canceller with $\NumTap \in \{1,2,3,4\}$ and $\{40, 80\}\thinspace\SI{}{MHz}$ desired RF SIC bandwidth, under practical quantization constraints. The results for $20\thinspace\SI{}{MHz}$ desired RF SIC bandwidth are similar and thus omitted here. Comparing Fig.~\ref{fig:eval-rfic-quantized} with Fig.~\ref{fig:eval-rfic} shows that adding practical quantization constraints slightly reduces the amount of RF SIC achieved across the desired bandwidth. In particular, the effect of quantized configurations is most significant for the case with $4$ FDE taps. This is because, as more FDE taps become available, there is more flexibility due to the larger number of variables that can be controlled. As a result, the RF SIC performance becomes more sensitive to the coupling between individual FDE tap responses after quantization. However, as Fig.~\ref{fig:eval-rfic-quantized} suggests, even under practical quantization constraints, an RFIC canceller with $4$ FDE taps can achieve an average $\SI{50}{dB}$ RF SIC across up to $\SI{80}{MHz}$ desired bandwidth. In addition, by relaxing the quantization constraints (e.g., through using components with higher resolutions), performance improvement is expected.
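The rounding step used here, mapping each continuous setting to the nearest representable hardware value, can be sketched as follows (helper names are ours; the grids follow the resolutions stated above):

```python
import numpy as np

def quantize_uniform(value, lo, hi, bits=8):
    """Nearest of 2^bits equally spaced levels in [lo, hi]
    (used for the phase, center frequency, and quality factor)."""
    levels = np.linspace(lo, hi, 2 ** bits)
    return levels[np.argmin(np.abs(levels - value))]

def quantize_amp_db(a_db, step=0.25):
    """Amplitude on a 0.25 dB grid, clipped to [-40, -10] dB."""
    return min(max(round(a_db / step) * step, -40.0), -10.0)
```

For example, an amplitude of $-23.13$ dB rounds to $-23.25$ dB, and a phase of $0$ maps to the nearest of the $256$ levels in $[-\pi,\pi]$.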
\begin{figure}[!t] \centering \subfloat{ \label{fig:eval-rfic-quantized-bw-40mhz} \includegraphics[width=0.4\columnwidth]{./figs/sim/sim_rfic_bw_40mhz_quantized.eps}} \subfloat{ \label{fig:eval-rfic-quantized-bw-80mhz} \includegraphics[width=0.4\columnwidth]{./figs/sim/sim_rfic_bw_80mhz_quantized.eps} } \vspace{-0.5\baselineskip} \caption{TX/RX isolation of the antenna interface and with the RFIC canceller {\eqref{eq:rfic-tf}} with varying number of FDE taps, $\NumTap \in \{1,2,3,4\}$, and desired RF SIC bandwidth of $\{40,80\}\thinspace\SI{}{MHz}$, under practical quantization constraints.} \label{fig:eval-rfic-quantized} \vspace{-\baselineskip} \end{figure} \subsection{Comparison with the Iterative Algorithm} \label{ssec:eval-rfic-compare} We now compare the RFIC canceller performance under the iterative configuration algorithm and with the optimized canceller configuration, with the same practical quantization constraints as described above. As described in Section~\ref{ssec:algo-iter}, for the iterative configuration algorithm, the selection of the cancellation frequency is unclear. Therefore, we manually find the best cancellation frequencies, $\{f_{\textrm{canc},i}\}$, such that the RFIC canceller performance is optimized under the iterative configuration algorithm. Moreover, we consider only $2$ FDE taps since the iterative configuration algorithm with a larger value of $\NumTap$ performs even more poorly when compared with the optimized canceller configuration. Fig.~\ref{fig:eval-rfic-iter}\subref{fig:eval-rfic-iter-detail} shows the process of the iterative configuration algorithm, where $\ICTapTF{1}(f_k)$ and $\ICTapTF{2}(f_k)$ are used to match with $H_{\textrm{SI}}(f_k)$ and $H_{\textrm{res},1}(f_k) = H_{\textrm{SI}}(f_k)-\ICTapTF{1}(f_k)$, respectively, at the manually-optimized cancellation frequencies $\SI{896}{MHz}$ and $\SI{904}{MHz}$ indicated by the asterisks.
As Fig.~\ref{fig:eval-rfic-iter}\subref{fig:eval-rfic-iter-detail} shows, the $\SI{40}{dB}$ RF SIC bandwidth is improved from $\SI{10}{MHz}$ to $\SI{16}{MHz}$ by using a second FDE tap. However, Fig.~\ref{fig:eval-rfic-iter}\subref{fig:eval-rfic-iter-compare} shows that this approach suffers from a large performance degradation when compared with the RFIC canceller performance with the optimized canceller configuration. Therefore, the optimized canceller configuration is more beneficial, and it is also used to evaluate the PCB canceller performance in Section~\ref{sec:eval-pcb}. \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[]{ \label{fig:eval-rfic-iter-detail} \includegraphics[width=0.4\columnwidth]{./figs/sim/sim_rfic_iter_detail.eps} } \subfloat[]{ \label{fig:eval-rfic-iter-compare} \includegraphics[width=0.4\columnwidth]{./figs/sim/sim_rfic_iter_compare.eps} } \vspace{-\baselineskip} \caption{(a) TX/RX isolation of the antenna interface and with the RFIC canceller after each iterative configuration step. (b) Comparison of TX/RX isolation performance under the iterative and optimal configurations.} \label{fig:eval-rfic-iter} \vspace{-\baselineskip} \end{figure} \subsection{Implementation and Testbed} \label{ssec:exp-testbed} \subsubsection*{FDE-based FD Radio and the SDR Testbed} Figs.~\ref{fig:intro}\subref{fig:intro-fd-radio} and~\ref{fig:intro}\subref{fig:intro-fd-net} depict our FDE-based FD radio design (whose diagram is shown in Fig.~\ref{fig:diagram}) and the SDR testbed. A $698$--$\SI{960}{MHz}$ swivel blade antenna and a coaxial circulator with operating frequency range $860$--$\SI{960}{MHz}$ are used as the antenna interface. We use the NI USRP-2942 SDR with the SBX-120 daughterboard operating at $\SI{900}{MHz}$ carrier frequency, which is the same as the operating frequency of the PCB canceller. As mentioned in Section~\ref{ssec:impl-pcb}, our PCB canceller design can be easily extended to other operating frequencies.
The USRP has a measured noise floor of $-\SI{85}{dBm}$ at a fixed receiver gain setting.\footnote{This USRP receiver noise floor is limited by the environmental interference at around $\SI{900}{MHz}$. The USRP has a true noise floor of around $-\SI{95}{dBm}$ at the same receiver gain setting, when not connected to an antenna.} We implemented a full OFDM-based PHY layer using NI LabVIEW on a host PC.\footnote{We consider a general OFDM-based PHY but do not consider the specific frame/packet structure defined by the standards (e.g., LTE or WiFi PHY).} A real-time RF bandwidth of $B=\SI{20}{MHz}$ is used throughout our experiments. The baseband complex (IQ) samples are streamed between the USRP and the host PC through a high-speed PCI-Express interface. The OFDM symbol size is 64 samples (subcarriers) with a cyclic-prefix ratio of 0.25 (16 samples). Throughout the evaluation, $\{f_k\}_{k=1}^{K=52}$ is used to represent the center frequencies of the 52 non-zero subcarriers. The OFDM PHY layer supports various modulation and coding schemes (MCSs) with constellations from BPSK to 64QAM and coding rates of 1/2, 2/3, and 3/4, resulting in a highest (HD) data rate of $\SI{54}{Mbps}$. The digital SIC algorithm with a highest non-linearity order of 7 is also implemented in LabVIEW to further suppress the residual SI signal after RF SIC.\footnote{The digital SIC algorithm is based on Volterra series and a least-square problem, which is similar to that presented in~\cite{bharadia2013full}. We omit the details here due to limited space.} In total, our testbed consists of 3 FDE-based FD radios, whose performance is experimentally evaluated at the node, link, and network levels. Regular USRPs (without the PCB canceller) are also included in scenarios where additional HD users are needed.
\subsubsection*{Optimized PCB Canceller Configuration} The optimized PCB canceller configuration scheme is implemented on the host PC and the canceller is configured by a SUB-20 controller through the USB interface. For computational efficiency, the PCB canceller response {\eqref{eq:pcb-tf-calibrated}} (which is validated in Section~\ref{sec:pcb-validation} and is independent of the environment) is pre-computed and stored. The detailed steps of the canceller configuration are as follows. \begin{enumerate}[leftmargin=*,topsep=3pt] \item[1.] Measure the real-time antenna interface response, $H_{\textrm{SI}}(f_k)$, using a preamble (2 OFDM symbols) by dividing the received preamble by the known transmitted preamble in the frequency domain; \item[2.] Solve for an initial PCB canceller configuration using optimization {\textsf{(P2)}} based on the measured $H_{\textrm{SI}}(f_k)$ and the canceller model {\eqref{eq:pcb-tf-calibrated}} (see Section~\ref{ssec:impl-opt}). The returned configuration parameters are rounded to their closest possible values based on hardware resolutions (see Section~\ref{ssec:impl-pcb}); \item[3.] Perform a finer-grained local search and record the optimal canceller configuration (usually \texttt{\char`\~}10 iterations). \end{enumerate} In our design, the optimized PCB canceller configuration can be obtained in less than $\SI{10}{ms}$ on a regular PC with quad-core Intel i7 CPU via a non-optimized MATLAB solver.\footnote{Assuming that the canceller needs to be configured once per second, this is only a $1\%$ overhead. 
We note that a C-based optimization solver and/or an implementation based on FPGA/look-up table can significantly improve the performance of the canceller configuration and is left for future work.} \subsection{Node-Level: Microbenchmarks} \label{ssec:exp-node} \subsubsection*{Optimized PCB Canceller Response and RF SIC} \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[]{ \label{fig:exp-usrp-algo-iq} \includegraphics[width=0.47\columnwidth]{./figs/exp/exp_usrp_algo_iq.pdf} } \hfill \subfloat[]{ \label{fig:exp-usrp-algo-res} \includegraphics[width=0.47\columnwidth]{./figs/exp/exp_usrp_algo_res.pdf} } \vspace{-0.5\baselineskip} \caption{(a) Real and imaginary parts of the optimized PCB canceller response, $H^{\textrm{P}}(f_k)$, vs. real-time SI channel measurements, $H_{\textrm{SI}}(f_k)$, and (b) modeled and measured RX signal power after RF SIC at $\SI{10}{dBm}$ TX power. An average $\SI{52}{dB}$ RF SIC across $\SI{20}{MHz}$ is achieved in the experiments.} \label{fig:exp-usrp-algo} \vspace{-\baselineskip} \end{figure} We set up an FDE-based FD radio running the optimized PCB canceller configuration scheme and record the canceller configuration, measured $H_{\textrm{SI}}(f_k)$, and measured residual SI power after RF SIC. The recorded canceller configuration is then used to compute the PCB canceller response using {\eqref{eq:pcb-tf-calibrated}}. Fig.~\ref{fig:exp-usrp-algo}\subref{fig:exp-usrp-algo-iq} shows an example of the optimized PCB canceller response, $H^{\textrm{P}}(f_k)$, and the measured antenna interface response, $H_{\textrm{SI}}(f_k)$, in real and imaginary parts (or I and Q). It can be seen that $H^{\textrm{P}}(f_k)$ closely matches with $H_{\textrm{SI}}(f_k)$ with maximal amplitude and phase differences of only $\SI{0.5}{dB}$ and $\SI{2.5}{\degree}$, respectively. This confirms the accuracy of the PCB canceller model and the performance of the optimized canceller configuration. 
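Step 1 of the canceller configuration procedure above, estimating $H_{\textrm{SI}}(f_k)$ by dividing the received preamble by the known transmitted preamble in the frequency domain, can be sketched as follows (the helper name is ours):

```python
import numpy as np

def estimate_si_channel(rx_preamble, tx_preamble):
    """Per-subcarrier SI-channel estimate H_SI(f_k): divide the received
    preamble by the known transmitted preamble in the frequency domain."""
    RX = np.fft.fft(rx_preamble)
    TX = np.fft.fft(tx_preamble)
    nz = np.abs(TX) > 0                 # estimate only on non-zero subcarriers
    H = np.full(TX.shape, np.nan + 0j)
    H[nz] = RX[nz] / TX[nz]
    return H
```

With a 64-sample OFDM symbol this produces one complex estimate per subcarrier, which is exactly the input the optimization in {\textsf{(P2)}} consumes.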
Fig.~\ref{fig:exp-usrp-algo}\subref{fig:exp-usrp-algo-res} shows the modeled (computed by subtracting the modeled canceller response from the measured $H_{\textrm{SI}}(f_k)$) and measured RX signal power after RF SIC at $\SI{10}{dBm}$ TX power. The results show that the FDE-based FD radio achieves an average $\SI{52}{dB}$ RF SIC across $\SI{20}{MHz}$ bandwidth, from which $\SI{20}{dB}$ is obtained from the antenna interface isolation. Similar performance is observed in various experiments throughout the experimental evaluation. \subsubsection*{Overall SIC} \begin{figure}[!t] \centering \includegraphics[width=0.85\columnwidth]{./figs/exp/exp_usrp_sic.pdf} \vspace{-0.5\baselineskip} \caption{Power spectrum of the received signal after SIC in the RF and digital domains with $\SI{10}{dBm}$ average TX power, $\SI{20}{MHz}$ bandwidth, and $-\SI{85}{dBm}$ receiver noise floor.} \label{fig:eval-usrp-spec-20mhz} \vspace{-\baselineskip} \end{figure} We measure the overall SIC achieved by the FDE-based FD radio including the digital SIC in the same setting as described above, and the results are presented in Fig.~\ref{fig:eval-usrp-spec-20mhz}. It can be seen that the FDE-based FD radio achieves an average $\SI{95}{dB}$ overall SIC across $\SI{20}{MHz}$, from which $\SI{52}{dB}$ and $\SI{43}{dB}$ are obtained in the RF and digital domains, respectively. Recall from Section~\ref{ssec:exp-testbed} that the USRP has a noise floor of $\SI{-85}{dBm}$ and that the FDE-based FD radio supports a maximal average TX power of $\SI{10}{dBm}$ (where the peak TX power can go as high as $\SI{20}{dBm}$). We use TX power levels lower than or equal to $\SI{10}{dBm}$ throughout the experiments, where all the SI can be canceled to below the RX noise floor. \subsection{Link-Level: SNR-PRR Relationship} \label{ssec:exp-snr-prr-relationship} We now evaluate the relationship between link SNR and link packet reception ratio (PRR).
We set up a link with two FDE-based FD radios at a fixed distance of 5 meters with equal TX power. In order to evaluate the performance of our FD radios in the presence of the PCB canceller, we set an FD radio to operate in HD mode by turning on only its transmitter or receiver. We conduct the following experiment for each of the $12$ MCSs in both FD and HD modes, with varying TX power levels. In particular, the packets are sent over the link simultaneously in FD mode or in alternating directions in HD mode (i.e., the two radios take turns and transmit to each other). In each experiment, both radios send a sequence of 50 OFDM streams, where each OFDM stream contains 20 OFDM packets and each OFDM packet is 800 bytes long. We consider two metrics. The \emph{HD (resp.\ FD) link SNR} is measured as the ratio between the average RX signal power in both directions and the RX noise floor when both radios operate in HD (resp. FD) mode. The \emph{HD (resp.\ FD) link PRR} is computed as the fraction of packets successfully sent over the HD (resp. FD) link in each experiment. We observe from the experiments that the HD and FD link SNR and PRR values in both link directions are similar. Similar experiments and results were presented in~\cite{zhou2016basic} for HD links. \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[Code rate 1/2]{ \label{fig:exp-prr-vs-snr-1-2} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_prr_vs_snr_1_2.pdf} } \hfill \subfloat[Code rate 3/4]{ \label{fig:exp-prr-vs-snr-3-4} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_prr_vs_snr_3_4.pdf} } \vspace{-0.5\baselineskip} \caption{HD and FD link packet reception ratio (PRR) with varying HD link SNR and modulation and coding schemes (MCSs).} \label{fig:exp-prr-vs-snr} \vspace{-\baselineskip} \end{figure} Fig.~\ref{fig:exp-prr-vs-snr} shows the relationship between link PRR values and HD link SNR values with varying MCSs.
The results show that with sufficient link SNR values (e.g., $\SI{8}{dB}$ for BPSK-1/2 and $\SI{28}{dB}$ for 64QAM-3/4), the FDE-based FD radio achieves a link PRR of $100\%$. With insufficient link SNR values, the average FD link PRR is $6.5\%$ lower than the HD link PRR across varying MCSs. This degradation is caused by the link SNR difference when the radios operate in HD or FD mode, which is described later in Section~\ref{ssec:exp-link}. Since packets are sent simultaneously in both directions on an FD link, this average PRR degradation is equivalent to an average FD link throughput gain of $1.87\times$ under the same MCS. \subsection{Link-Level: SNR Difference and FD Gain} \label{ssec:exp-link} \begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[LOS deployment and an FD radio in a hallway]{ \label{fig:exp-map-los} \includegraphics[width=\columnwidth]{./figs/exp/link/exp_los_setup.pdf} } \\ \vspace{-0.5\baselineskip} \subfloat[NLOS deployment and an FD radio in a lab environment]{ \label{fig:exp-map-nlos} \includegraphics[width=\columnwidth]{./figs/exp/link/exp_nlos_setup.pdf} } \vspace{-0.5\baselineskip} \caption{(a) Line-of-sight (LOS), and (b) non-line-of-sight (NLOS) deployments, and the measured HD link SNR values (dB).} \label{fig:exp-map} \vspace{-\baselineskip} \end{figure} \subsubsection*{Experimental Setup} To thoroughly evaluate the link-level FD throughput gain achieved by our FD radio design, we conduct experiments with two FD radios with $\SI{10}{dBm}$ TX power, one emulating a base station (BS) and one emulating a user. We consider both line-of-sight (LOS) and non-line-of-sight (NLOS) experiments as shown in Fig.~\ref{fig:exp-map}. In the LOS setting, the BS is placed at the end of a hallway and the user is moved away from the BS in steps of 5 meters up to a distance of 40 meters.
In the NLOS setting, the BS is placed in a lab environment with regular furniture and the user is placed at various locations (offices, labs, and corridors). We place the BS and the users at about the same height across all the experiments.\footnote{In this work, we emulate the BS and users without focusing on specific deployment scenarios. The impacts of different antenna heights and user densities, as mentioned in~\cite{lopez2015towards}, will be considered in future work.} The measured HD link SNR values are also included in Fig.~\ref{fig:exp-map}. Following the methodology of~\cite{bharadia2013full}, for each user location, we measure the \emph{link SNR difference}, which is defined as the absolute difference between the average HD and FD link SNR values. Throughout the experiments, link SNR values between $0$ and $\SI{50}{dB}$ are observed. \subsubsection*{Difference in HD and FD Link SNR Values} Fig.~\ref{fig:exp-link-snr-loss} shows the measured link SNR difference as a function of the HD link SNR (i.e., for different user locations) in the LOS and NLOS experiments, respectively, with 64QAM-3/4 MCS. For the LOS experiments, the average link SNR difference is $\SI{0.6}{dB}$ with a standard deviation of $\SI{0.16}{dB}$. For the NLOS experiments, the average link SNR difference is $\SI{0.63}{dB}$ with a standard deviation of $\SI{0.31}{dB}$. The SNR difference has a higher variance in the NLOS experiments, due to the more complex environment (e.g., wooden desks and chairs, metal doors and bookshelves). In both cases, the link SNR difference is minimal and uncorrelated with user locations, showing the robustness of the FDE-based FD radio.
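These measured differences can be translated into an implied residual self-interference level. As a rough illustration (our arithmetic, under the assumption that residual SI acts as additional white noise at the receiver), a residual SI-to-noise ratio of $\gamma_{\textrm{Self}}$ reduces the FD link SNR by $10\log_{10}(1+\gamma_{\textrm{Self}})$ dB relative to HD:

```python
import math

def snr_diff_db(gamma_self):
    # HD-to-FD link SNR reduction (dB) when residual SI with power
    # gamma_self (relative to the noise floor) adds to the noise.
    return 10 * math.log10(1 + gamma_self)

def xinr_from_snr_diff(diff_db):
    # Inverse mapping: residual SI-to-noise ratio implied by a
    # measured HD/FD link SNR difference.
    return 10 ** (diff_db / 10) - 1

gamma_self = xinr_from_snr_diff(0.6)   # from the ~0.6 dB LOS average
xinr_db = 10 * math.log10(gamma_self)  # ~ -8.3 dB, i.e., below the noise floor
```

In other words, a $\SI{0.6}{dB}$ SNR difference suggests residual SI roughly $\SI{8}{dB}$ below the RX noise floor, noticeably better than the conservative $\gamma_{\textrm{Self}}=1$ ($\SI{3}{dB}$ penalty) assumed in the analysis of Section~\ref{ssec:exp-net}.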
\begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[LOS Experiment]{ \label{fig:exp-link-snr-loss-los} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_los_snr_loss.pdf} } \hfill \subfloat[NLOS Experiment]{ \label{fig:exp-link-snr-loss-nlos} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_nlos_snr_loss.pdf} } \vspace{-0.5\baselineskip} \caption{Difference between HD and FD link SNR values in the (a) LOS, and (b) NLOS experiments, with $\SI{10}{dBm}$ TX power and 64QAM-3/4 MCS.} \label{fig:exp-link-snr-loss} \vspace{-\baselineskip} \end{figure} \subsubsection*{Impact of Constellations} Fig.~\ref{fig:exp-constellation} shows the measured link SNR difference and its CDF with varying constellations and 3/4 coding rate. It can be seen that the link SNR difference has a mean of $\SI{0.58}{dB}$ and a standard deviation of $\SI{0.4}{dB}$, both of which are uncorrelated with the constellations. \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[]{ \label{fig:exp-constellation-snr-loss} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/snr_loss_vs_constellation_err.pdf} } \hfill \subfloat[]{ \label{fig:exp-constellation-tput-gain} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/snr_loss_vs_constellation_cdf.pdf} } \vspace{-0.5\baselineskip} \caption{Difference between HD and FD link SNR values with $\SI{10}{dBm}$ TX power under varying constellations: (a) mean and standard deviation, and (b) CDF.} \label{fig:exp-constellation} \vspace{-0.5\baselineskip} \end{figure} \subsubsection*{FD Link Throughput and Gain} For each user location in the LOS and NLOS experiments, the HD (resp.\ FD) link throughput is measured as the highest average data rate across all MCSs achieved by the link when both nodes operate in HD (resp.\ FD) mode. The FD gain is computed as the ratio between FD and HD throughput values.
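The throughput metric just defined amounts to a best-over-MCSs computation. The sketch below uses hypothetical rate/PRR tables purely for illustration (the numbers are not measurements from this paper); for each mode the reported throughput is the best rate$\times$PRR product over all MCSs:

```python
def link_throughput(rate_prr):
    # Highest average data rate over all MCSs; each entry maps an MCS
    # name to (total PHY data rate in Mbps, measured link PRR).
    return max(rate * prr for rate, prr in rate_prr.values())

# Hypothetical example values (FD rates count both link directions).
hd = {"BPSK-1/2": (6, 1.0), "64QAM-3/4": (54, 1.0)}
fd = {"BPSK-1/2": (12, 1.0), "64QAM-3/4": (108, 0.95)}

fd_gain = link_throughput(fd) / link_throughput(hd)  # 1.9x for these numbers
```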
Recall that the maximal HD data rate is $\SI{54}{Mbps}$; hence, an FD link data rate of $\SI{108}{Mbps}$ can be achieved with an FD link PRR of 1. Fig.~\ref{fig:exp-link-tput} shows the average HD and FD link throughput with 16QAM-3/4 and 64QAM-3/4 MCSs, where each point represents the average throughput across 1,000 packets. The results show that with sufficient link SNR (e.g., $\SI{30}{dB}$ for 64QAM-3/4 MCS), the FDE-based FD radios achieve an \emph{exact} link throughput gain of $2\times$. In these scenarios, the HD/FD link always achieves a link PRR of 1, which results in the maximum achievable HD/FD link data rate. With medium link SNR values, where the link PRR is less than 1, the average FD link throughput gains across different MCSs are $1.91\times$ and $1.85\times$ for the LOS and NLOS experiments, respectively. We note that if higher modulation schemes (e.g., 256QAM) are considered and the corresponding link SNR values are high enough for these schemes, the HD/FD throughput can increase (compared to the values in Fig.~\ref{fig:exp-link-tput}). However, considering such schemes is not required in order to evaluate the FDE-based cancellers and the FD gain. \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[LOS Experiment]{ \label{fig:exp-link-tput-los} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_los_tput_3_4.pdf} } \hfill \subfloat[NLOS Experiment]{ \label{fig:exp-link-tput-nlos} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_nlos_tput_3_4.pdf} } \vspace{-0.5\baselineskip} \caption{HD and FD link throughput in the (a) LOS, and (b) NLOS experiments, with $\SI{10}{dBm}$ TX power and 16QAM-3/4 and 64QAM-3/4 MCSs.} \label{fig:exp-link-tput} \vspace{-0.5\baselineskip} \end{figure} \subsection{Network-Level FD Gain} \label{ssec:exp-net} We now experimentally evaluate the network-level throughput gain introduced by FD-capable BS and users.
Users can significantly benefit from the FDE-based FD radio, which is suitable for hand-held devices. We compare experimental results to the analysis (e.g.,~\cite{marasevic2017resource}) and demonstrate practical FD gain in different network settings. Specifically, we consider two types of networks as depicted in Fig.~\ref{fig:exp-net-setup}: (i) \emph{UL-DL networks} with one FD BS and two HD users with inter-user interference (IUI), and (ii) \emph{heterogeneous HD-FD networks} with HD and FD users. Due to the software challenges of implementing a real-time MAC layer using a USRP, we apply a TDMA scheme in which the (HD or FD) users take turns being activated for the same period of time. \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[]{ \label{fig:exp-net-setup-ul-dl} \includegraphics[width=0.32\columnwidth]{./figs/exp/net_ul_dl/exp_ul_dl_setup.pdf} } \hspace{-6pt} \hfill \subfloat[]{ \label{fig:exp-net-setup-two-users} \includegraphics[width=0.32\columnwidth]{./figs/exp/net_two_users/exp_two_users_setup.pdf} } \hspace{-6pt} \hfill \subfloat[]{ \label{fig:exp-net-setup-three-users} \includegraphics[width=0.32\columnwidth]{./figs/exp/net_three_users/exp_three_users_setup.pdf} } \vspace{-0.5\baselineskip} \caption{An example experimental setup for: (a) the UL-DL networks with varying $\gamma_{\textrm{UL}}$ and $\gamma_{\textrm{DL}}$, (b) heterogeneous 3-node network with one FD BS and 2 FD users, and (c) heterogeneous 4-node networks with one FD BS, 2 FD users, and one HD user.} \label{fig:exp-net-setup} \vspace{-0.5\baselineskip} \end{figure} \subsubsection{UL-DL Networks with IUI} \label{sssec:exp-net-ul-dl} We first consider UL-DL networks consisting of one FD BS and two HD users (indexed 1 and 2). Without loss of generality, in this setting, user 1 transmits on the UL to the BS, and the BS transmits to user 2 on the DL (see Fig.~\ref{fig:exp-net-setup}\subref{fig:exp-net-setup-ul-dl}). \noindent\textbf{Analytical FD gain}.
We use Shannon's capacity formula $\DataRate{}(\SNR{}) = B \cdot \log_{2}(1+\SNR{})$ to compute the \emph{analytical throughput} of a link under bandwidth $B$ and link SNR $\SNR{}$. If the BS is only HD-capable, the network throughput in a UL-DL network when the UL and DL share the channel in a TDMA manner with equal fraction of time is given by \begin{align} \label{eq:net-ul-dl-tput-hd} \DataRate{\textrm{UL-DL}}^{\textrm{HD}} = \frac{B}{2}\log_{2}\left(1+\gamma_{\textrm{UL}}\right) + \frac{B}{2}\log_{2}\left(1+\gamma_{\textrm{DL}}\right), \end{align} where $\gamma_{\textrm{UL}}$ and $\gamma_{\textrm{DL}}$ are the UL and DL SNRs, respectively. If the BS is FD-capable, the UL and DL can be simultaneously activated with an analytical network throughput of \begin{align} \label{eq:net-ul-dl-tput-fd} \DataRate{\textrm{UL-DL}}^{\textrm{FD}} = B\log_{2}\left(1+\frac{\gamma_{\textrm{UL}}}{1+\gamma_{\textrm{Self}}}\right) + B\log_{2}\left(1+\frac{\gamma_{\textrm{DL}}}{1+\gamma_{\textrm{IUI}}}\right), \end{align} where: (i) $\left(\frac{\gamma_{\textrm{DL}}}{1+\gamma_{\textrm{IUI}}}\right)$ is the signal-to-interference-plus-noise ratio (SINR) at the DL HD user, and (ii) $\gamma_{\textrm{Self}}$ is the residual self-interference-to-noise ratio (XINR) at the FD BS. We set $\gamma_{\textrm{Self}}=1$ when computing the analytical throughput. Namely, the residual SI power is no higher than the RX noise floor (which can be achieved by the FDE-based FD radio, see Section~\ref{ssec:exp-node}). The \emph{analytical FD gain} is then defined as the ratio $\left(\DataRate{\textrm{UL-DL}}^{\textrm{FD}}/\DataRate{\textrm{UL-DL}}^{\textrm{HD}}\right)$. Note that the FD gain depends on the coupling between $\gamma_{\textrm{UL}}$, $\gamma_{\textrm{DL}}$, and $\gamma_{\textrm{IUI}}$, which depend on the BS/user locations, their TX power, etc. 
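The expressions {\eqref{eq:net-ul-dl-tput-hd}} and {\eqref{eq:net-ul-dl-tput-fd}} can be evaluated directly; below is a short sketch for one example operating point (our illustrative SNR choices, with $B$ in MHz so throughput is in Mbps):

```python
import math

def tput_hd(b, snr_ul, snr_dl):
    # HD BS: UL and DL share the channel via equal-time TDMA.
    return b / 2 * (math.log2(1 + snr_ul) + math.log2(1 + snr_dl))

def tput_fd(b, snr_ul, snr_dl, snr_iui, gamma_self=1.0):
    # FD BS: UL and DL are simultaneously active; the UL is degraded by
    # residual SI (gamma_self) and the DL by inter-user interference.
    return b * (math.log2(1 + snr_ul / (1 + gamma_self))
                + math.log2(1 + snr_dl / (1 + snr_iui)))

# Example: 20 MHz, UL and DL SNRs of 20 dB, IUI 10 dB below the noise floor.
ul = dl = 10 ** (20 / 10)
iui = 10 ** (-10 / 10)
fd_gain = tput_fd(20, ul, dl, iui) / tput_hd(20, ul, dl)  # ~1.83x
```

Sweeping $\gamma_{\textrm{DL}}$ and $\gamma_{\textrm{IUI}}$ in this way generates the analytical gain surface shown in Fig.~\ref{fig:exp-net-ul-dl-gain}.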
\begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[$\gamma_{\textrm{UL}}=\SI{10}{dB}$]{ \label{fig::exp-net-ul-dl-gain-10db} \includegraphics[height=1.35in]{./figs/exp/net_ul_dl/exp_ul_dl_gain_10db.pdf} } \subfloat[$\gamma_{\textrm{UL}}=\SI{15}{dB}$]{ \label{fig::exp-net-ul-dl-gain-15db} \includegraphics[height=1.35in]{./figs/exp/net_ul_dl/exp_ul_dl_gain_15db.pdf} } \subfloat[$\gamma_{\textrm{UL}}=\SI{20}{dB}$]{ \label{fig::exp-net-ul-dl-gain-20db} \includegraphics[height=1.35in]{./figs/exp/net_ul_dl/exp_ul_dl_gain_20db.pdf} } \vspace{-0.5\baselineskip} \caption{Analytical (colored surface) and experimental (filled circles) network throughput gain for UL-DL networks consisting of one FD BS and two HD users with varying UL and DL SNR values, and inter-user interference (IUI) levels: (a) $\gamma_{\textrm{UL}}=\SI{10}{dB}$, (b) $\gamma_{\textrm{UL}}=\SI{15}{dB}$, and (c) $\gamma_{\textrm{UL}}=\SI{20}{dB}$. The baseline is the network throughput when the BS is HD.} \label{fig:exp-net-ul-dl-gain} \vspace{-\baselineskip} \end{figure} \noindent\textbf{Experimental FD gain}. The experimental setup is depicted in Fig.~\ref{fig:exp-net-setup}\subref{fig:exp-net-setup-ul-dl}, where the TX power levels of the BS and user 1 are set to be $\SI{10}{dBm}$ and $\SI{-10}{dBm}$, respectively. We fix the location of the BS and consider different UL SNR values of $\gamma_{\textrm{UL}} = 10/15/20\thinspace\SI{}{dB}$ by placing user 1 at three different locations. For each value of $\gamma_{\textrm{UL}}$, user 2 is placed at 10 different locations, resulting in varying $\gamma_{\textrm{DL}}$ and $\gamma_{\textrm{IUI}}$ values. 
\begin{table}[!t] \caption{Average FD Gain in UL-DL Networks with IUI.} \label{table:exp-net-ul-dl-gain} \vspace{-0.5\baselineskip} \scriptsize \begin{center} \begin{tabular}{|c|c|c|} \hline UL SNR, $\gamma_{\textrm{UL}}$ & Analytical FD Gain & Experimental FD Gain \\ \hline $\SI{10}{dB}$ & $1.30\times$ & $1.25\times$ \\ \hline $\SI{15}{dB}$ & $1.23\times$ & $1.16\times$ \\ \hline $\SI{20}{dB}$ & $1.22\times$ & $1.14\times$ \\ \hline \end{tabular} \end{center} \vspace{-0.5\baselineskip} \end{table} Fig.~\ref{fig:exp-net-ul-dl-gain} shows the analytical (colored surface) and experimental (filled circles) FD gain, where the analytical gain is extracted using {\eqref{eq:net-ul-dl-tput-hd}} and {\eqref{eq:net-ul-dl-tput-fd}}, and the experimental gain is computed using the measured UL and DL throughput. It can be seen that smaller values of $\gamma_{\textrm{UL}}$ and larger ratios between $\gamma_{\textrm{DL}}$ and $\gamma_{\textrm{IUI}}$ lead to higher throughput gains in both analysis and experiments. The average analytical and experimental FD gains are summarized in Table~\ref{table:exp-net-ul-dl-gain}. Due to practical reasons such as the link SNR difference and its impact on link PRR (see Section~\ref{ssec:exp-snr-prr-relationship}), the experimental FD gain is $7\%$ lower than the analytical FD gain. The results confirm the analysis in~\cite{marasevic2017resource} and demonstrate the practical FD gain achieved in wideband UL-DL networks without any changes in the current network stack (i.e., only bringing FD capability to the BS). Moreover, performance improvements are expected through advanced power control and scheduling schemes.
\subsubsection{Heterogeneous 3-Node Networks} \label{sssec:exp-net-two-users} We consider heterogeneous HD-FD networks with 3 nodes: one FD BS and two users that can operate in either HD or FD mode (see an example experimental setup in Figs.~\ref{fig:intro}\subref{fig:intro-fd-net} and~\ref{fig:exp-net-setup}\subref{fig:exp-net-setup-two-users}). All 3 nodes have the same $\SI{0}{dBm}$ TX power so that each user has symmetric UL and DL SNR values of $\SNR{i}$ ($i=1,2$). We place user 1 at 5 different locations and place user 2 at 10 different locations for each location of user 1, resulting in a total of 50 pairs of $(\SNR{1},\SNR{2})$. \noindent\textbf{Analytical FD gain}. We set the users to share the channel in a TDMA manner. The analytical network throughput in a 3-node network when zero, one, and two users are FD-capable is respectively given by \begin{align} & \DataRate{}^{\textrm{HD}} = \frac{B}{2}\log_{2}\left(1+\SNR{1}\right) + \frac{B}{2}\log_{2}\left(1+\SNR{2}\right), \label{eq:net-two-users-tput-hd} \\ & \DataRate{\textrm{User}~i~\textrm{FD}}^{\textrm{HD-FD}} = B\log_{2}\left(1+\frac{\SNR{i}}{1+\gamma_{\textrm{Self}}}\right) + \frac{B}{2}\log_{2}\left(1+\SNR{\overline{i}}\right),\ \label{eq:net-two-users-tput-hd-fd} \\ & \DataRate{}^{\textrm{FD}} = B\log_{2}\left(1+\frac{\SNR{1}}{1+\gamma_{\textrm{Self}}}\right) + B\log_{2}\left(1+\frac{\SNR{2}}{1+\gamma_{\textrm{Self}}}\right), \label{eq:net-two-users-tput-fd} \end{align} where $\gamma_{\textrm{Self}}=1$ is set (similar to Section~\ref{sssec:exp-net-ul-dl}). We consider both FD gains of $\left(\DataRate{\textrm{User}~i~\textrm{FD}}^{\textrm{HD-FD}}/\DataRate{}^{\textrm{HD}}\right)$ (i.e., user $i$ is FD and user $\overline{i} \ne i$ is HD), and $\left(\DataRate{}^{\textrm{FD}}/\DataRate{}^{\textrm{HD}}\right)$ (i.e., both users are FD). \noindent\textbf{Experimental FD gain}.
For each pair of $(\SNR{1},\SNR{2})$, experimental FD gain is measured in three cases: (i) only user 1 is FD, (ii) only user 2 is FD, and (iii) both users are FD. Fig.~\ref{fig:exp-net-two-users} shows the analytical (colored surface) and experimental (filled circles) FD gain for each case. We exclude the results with $\SNR{i}<\SI{3}{dB}$ since the packets cannot be decoded, resulting in a throughput of zero (see Fig.~\ref{fig:exp-prr-vs-snr}). \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat[Only user 1 FD]{ \label{fig:exp-net-two-users-user1-fd} \includegraphics[height=1.35in]{./figs/exp/net_two_users/exp_user1_fd_gain_experimental.pdf} } \subfloat[Only user 2 FD]{ \label{fig:exp-net-two-users-user2-fd} \includegraphics[height=1.35in]{./figs/exp/net_two_users/exp_user2_fd_gain_experimental.pdf} } \subfloat[Both users FD]{ \label{fig:exp-net-two-users-both-fd} \includegraphics[height=1.35in]{./figs/exp/net_two_users/exp_both_fd_gain_experimental.pdf} } \vspace{-0.5\baselineskip} \caption{Analytical (colored surface) and experimental (filled circles) network throughput gain for 3-node networks consisting of one FD BS and two users with varying link SNR values: (a) only user 1 is FD, (b) only user 2 is FD, and (c) both users are FD. The baseline is the network throughput when both users are HD.} \label{fig:exp-net-two-users} \vspace{-\baselineskip} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{./figs/exp/net_two_users/exp_two_users_fairness.pdf} \vspace{-0.5\baselineskip} \caption{Measured Jain's fairness index (JFI) in 3-node networks when both users are HD and FD with varying $(\SNR{1},\SNR{2})$.} \label{fig:exp-net-two-users-fairness} \vspace{-\baselineskip} \end{figure} The results show that with small link SNR values, the experimental FD gain is lower than the analytical value due to the inability to decode the packets. 
On the other hand, with sufficient link SNR values, the experimental FD gain exceeds the analytical FD gain. This is because setting $\gamma_{\textrm{Self}}=1$ in {\eqref{eq:net-two-users-tput-hd-fd}} and {\eqref{eq:net-two-users-tput-fd}} results in a $\SI{3}{dB}$ SNR loss in the analytical FD link SNR, and thereby in a lower throughput. However, in practice, the packets can be decoded with a link PRR of 1 with sufficient link SNRs, resulting in exactly twice the number of packets being successfully sent over an FD link. Moreover, the FD gain is more significant when enabling FD capability for users with higher link SNR values. Another important metric we consider is the fairness between users, which is measured by Jain's fairness index (JFI). In the considered 3-node networks, the JFI ranges between 1/2 (worst case) and 1 (best case). Fig.~\ref{fig:exp-net-two-users-fairness} shows the measured JFI when both users operate in HD or FD mode. The results show that introducing FD capability results in an average degradation in the network JFI of only $5.6\%/4.4\%/7.4\%$ for $\SNR{1} = 15/20/25\thinspace\SI{}{dB}$, while the average network FD gains are $1.32\times$/$1.58\times$/$1.73\times$, respectively. In addition, the JFI increases with higher and more balanced user SNR values, which is as expected. \subsubsection{Heterogeneous 4-Node Networks} \label{sssec:exp-net-three-users} We experimentally study 4-node networks consisting of an FD BS and three users with $\SI{10}{dBm}$ TX power (see an example experimental setup in Fig.~\ref{fig:exp-net-setup}\subref{fig:exp-net-setup-three-users}). The experimental setup is similar to that described in Section~\ref{sssec:exp-net-two-users}. 100 experiments are conducted, in which the 3 users are placed at different locations with different user SNR values. For each experiment, the network throughput is measured in three cases: (i) zero, (ii) one, and (iii) two users are FD-capable.
Fig.~\ref{fig:exp-net-three-users} shows the CDF of the network throughput of the three cases, where the measured link SNR varies between $5$ and $\SI{45}{dB}$. Overall, median network FD gains of $1.25\times$ and $1.52\times$ are achieved in cases with one and two FD users, respectively. The trend shows that in a real-world environment, the total network throughput increases as more users become FD-capable, and the improvement is more significant with higher user SNR values. Note that we only apply a TDMA scheme; a more advanced MAC layer (e.g.,~\cite{kim2013janus}) has the potential to further improve the FD gain in these networks. \begin{figure}[!t] \centering \includegraphics[width=0.9\columnwidth]{./figs/exp/net_three_users/exp_three_users_gain_experimental.pdf} \vspace{-0.5\baselineskip} \caption{Experimental network throughput gain for 4-node networks when zero, one, or two users are FD-capable, with $\SI{10}{dBm}$ TX power and varying user locations.} \label{fig:exp-net-three-users} \vspace{-\baselineskip} \end{figure} \subsection{Implementation and Testbed} \label{ssec:exp-testbed} \noindent\textbf{FDE-based FD Radio and the SDR Testbed}. Figs.~\ref{fig:intro}\subref{fig:intro-fd-radio} and~\ref{fig:intro}\subref{fig:intro-fd-net} depict our FDE-based FD radio design and the SDR testbed. A $698$--$\SI{960}{MHz}$ swivel blade antenna and a coaxial circulator with operating frequency range $860$--$\SI{960}{MHz}$ are used as the antenna interface. We use the NI USRP-2942 SDR with a carrier frequency of $\SI{900}{MHz}$, which is the same as the operating frequency of the antenna interface and the PCB canceller with 2 FDE taps.\footnote{As mentioned in Section~\ref{ssec:impl-pcb}, our PCB canceller design can be easily extended to other operating frequencies.} The USRP has a measured noise floor of $-\SI{85}{dBm}$ at a fixed receiver gain setting.\footnote{This USRP receiver noise floor is limited by the environmental interference at around $\SI{900}{MHz}$ frequency.
The USRP has a true noise floor of around $-\SI{95}{dBm}$ at the same receiver gain setting, when not connected to an antenna.} We implemented a full OFDM-based PHY layer using NI LabVIEW on a host PC. A real-time RF bandwidth of $B=\SI{20}{MHz}$ is used through our experiments. The baseband complex (IQ) samples are streamed between the USRP and the host PC through a high-speed PCI-Express interface. The OFDM symbol size is 64 samples (subcarriers) with a cyclic-prefix ratio of 0.25 (16 samples). Throughout the evaluation, $\{f_k\}_{k=1}^{K=52}$ is used to represent the center frequency of the 52 non-zero subcarriers. The OFDM PHY layer supports various modulation and coding schemes (MCSs) with constellations from BPSK to 64QAM and coding rates of 1/2, 2/3, and 3/4, resulting in a highest (HD) data rate of $\SI{54}{Mbps}$. The digital SIC algorithm with a highest non-linearity order of 7 is also implemented in LabVIEW to further suppress the residual SI signal after RF SIC.\footnote{The digital SIC algorithm is based on Volterra series and a least-square problem, which is similar to that presented in~\cite{bharadia2013full}. We omit the details here due to limited space.} In total, our testbed consists of 3 FDE-based FD radios, whose performance is experimentally evaluated at the node, link, and network levels. Regular USRPs (without the PCB canceller) are also included in scenarios where additional HD users are needed. \noindent\textbf{Optimized PCB Canceller Configuration}. We implement the optimized PCB canceller configuration scheme on the host PC and the canceller is configured by a SUB-20 controller through the USB interface. To achieve computational efficiency, the PCB canceller response {\eqref{eq:pcb-tf-calibrated}} (which is independent of the environment) is pre-computed and stored. The detailed steps of the canceller configuration are as follows. \begin{enumerate}[topsep=0pt,itemsep=-3pt,leftmargin=*] \item[1.] 
Measure the real-time antenna interface response, $H_{\textrm{SI}}(f_k)$, using a preamble (2 OFDM symbols) by dividing the received preamble by the known transmitted preamble in the frequency domain; \item[2.] Solve for an initial PCB canceller configuration using optimization {\textsf{(P2)}} based on the measured $H_{\textrm{SI}}(f_k)$ and the canceller model {\eqref{eq:pcb-tf-calibrated}} (see Section~\ref{ssec:impl-opt}). The returned configuration parameters are rounded to their closest possible values based on hardware resolutions (see Section~\ref{ssec:impl-pcb}); \item[3.] Perform a finer-grained local search and record the optimal canceller configuration (usually \texttt{\char`\~}10 iterations). \end{enumerate} In our design, the optimized PCB canceller configuration can be obtained in less than $\SI{10}{ms}$ on a regular PC with quad-core Intel i7 CPU via a non-optimized MATLAB solver.\footnote{Assuming that the canceller needs to be configured once per second, this is only a $1\%$ overhead. We note that an FPGA-based implementation can significantly improve the performance of the canceller configuration scheme and is left for future work.} \subsection{Node-Level: Microbenchmarks} \label{ssec:exp-node} \begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[]{ \label{fig:exp-usrp-algo-iq} \includegraphics[width=0.47\columnwidth]{./figs/exp/exp_usrp_algo_iq.eps} } \hfill \subfloat[]{ \label{fig:exp-usrp-algo-res} \includegraphics[width=0.47\columnwidth]{./figs/exp/exp_usrp_algo_res.eps} } \vspace{-0.5\baselineskip} \caption{(a) Real and imaginary parts of the optimized PCB canceller response, $H^{\textrm{P}}(f_k)$, vs. real-time SI channel measurements, $H_{\textrm{SI}}(f_k)$, and (b) modeled and measured RX signal power after RF SIC at $\SI{10}{dBm}$ TX power. 
An average $\SI{52}{dB}$ RF SIC across $\SI{20}{MHz}$ is achieved in the experiments.} \label{fig:exp-usrp-algo} \vspace{-\baselineskip} \end{figure} \noindent\textbf{Optimized PCB Canceller Response and RF SIC}. We set up an FDE-based FD radio running the optimized PCB canceller configuration scheme and record the canceller configuration, measured $H_{\textrm{SI}}(f_k)$, and measured residual SI power after RF SIC. The recorded canceller configuration is then used to compute the PCB canceller response using {\eqref{eq:pcb-tf-calibrated}}. Fig.~\ref{fig:exp-usrp-algo}\subref{fig:exp-usrp-algo-iq} shows an example of the optimized PCB canceller response, $H^{\textrm{P}}(f_k)$, and the measured antenna interface response, $H_{\textrm{SI}}(f_k)$, in real and imaginary parts (or I and Q). It can be seen that $H^{\textrm{P}}(f_k)$ closely matches with $H_{\textrm{SI}}(f_k)$ with maximal amplitude and phase differences of only $\SI{0.5}{dB}$ and $\SI{2.5}{\degree}$, respectively. This confirms the accuracy of the PCB canceller model and the performance of the optimized canceller configuration. Fig.~\ref{fig:exp-usrp-algo}\subref{fig:exp-usrp-algo-res} shows the modeled (computed by subtracting the modeled canceller response from the measured $H_{\textrm{SI}}(f_k)$) and measured RX signal power after RF SIC at $\SI{10}{dBm}$ TX power. The results show that the FDE-based FD radio achieves an average $\SI{52}{dB}$ RF SIC across $\SI{20}{MHz}$ bandwidth, from which $\SI{20}{dB}$ is obtained from the antenna interface isolation. Similar performance is observed in various experiments throughout the experimental evaluation. 
\begin{figure}[!t] \centering \includegraphics[width=0.7\columnwidth]{./figs/exp/exp_usrp_sic.eps} \vspace{-0.5\baselineskip} \caption{Power spectrum of the received signal after SIC in the RF and digital domains with $\SI{10}{dBm}$ average TX power, $\SI{20}{MHz}$ bandwidth, and $-\SI{85}{dBm}$ receiver noise floor.} \label{fig:eval-usrp-spec-20mhz} \vspace{-\baselineskip} \end{figure} \noindent\textbf{Overall SIC}. We measure the overall SIC achieved by the FDE-based FD radio including the digital SIC in the same setting as described above, and the results are presented in Fig.~\ref{fig:eval-usrp-spec-20mhz}. It can be seen that the FDE-based FD radio achieves an average $\SI{95}{dB}$ overall SIC across $\SI{20}{MHz}$, from which $\SI{52}{dB}$ and $\SI{43}{dB}$ are obtained in the RF and digital domains, respectively. Recall from Section~\ref{ssec:exp-testbed} that the USRP has noise floor of $\SI{-85}{dBm}$, the FDE-based RF radio supports a maximal average TX power of $\SI{10}{dBm}$ (where the peak TX power can go as high as $\SI{20}{dBm}$). We use TX power levels lower than or equal to $\SI{10}{dBm}$ throughout the experiments, where all the SI can be canceled to below the RX noise floor. \subsection{Link-Level: SNR-PRR Relationship} \label{ssec:exp-snr-prr-relationship} We now evaluate the relationship between link SNR and link packet reception ratio (PRR). We setup up a link with two FDE-based FD radios at a fixed distance of 5 meters with equal TX power. We set an FD radio to operate in HD mode by turning on only its transmitter or receiver, since we would like to evaluate it performance with the existence of the PCB canceller. The following experiments are conducted for all possible MCSs and varying TX power level of the radios. In each experiment, both FD radios send a sequence of 50 OFDM streams in both HD and FD modes, each containing 20 OFDM packets with length 800 Bytes. 
The packets are sent over the link simultaneously in FD mode and in alternating directions in HD mode (i.e., the two radios take turns and transmit to each other). We consider two metrics. The \emph{HD (resp.\ FD) link SNR} is measured as the ratio between the average RX signal power in both directions and the RX noise floor when both radios operate in HD (resp. FD) mode. The \emph{HD (resp.\ FD) link PRR} is computed as the fraction of packets successfully sent over the HD (resp. FD) link in each experiment. We observe from the experiments that the HD and FD link SNR and PRR values in both link directions are similar. Similar experiments and results were presented in~\cite{zhou2016basic} for HD links. \begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[Code rate 1/2]{ \label{fig:exp-prr-vs-snr-1-2} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_prr_vs_snr_1_2.eps} } \hfill \subfloat[Code rate 3/4]{ \label{fig:exp-prr-vs-snr-3-4} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_prr_vs_snr_3_4.eps} } \vspace{-0.5\baselineskip} \caption{HD and FD link packet reception ratio (PRR) with varying HD link SNR and modulation and coding schemes (MCSs).} \label{fig:exp-prr-vs-snr} \vspace{-\baselineskip} \end{figure} Fig.~\ref{fig:exp-prr-vs-snr} shows the relationship between link PRR values and HD link SNR values with varying MCSs. The results show that with sufficient link SNR values (e.g., $\SI{8}{dB}$ for BPSK-1/2 and $\SI{28}{dB}$ for 64QAM-3/4), the FDE-based FD radio achieves a link PRR of $100\%$. With insufficient link SNR values, the average FD link PRR is $6.5\%$ lower than the HD link PRR across varying MCSs. This degradation is caused by the link SNR difference when the radios operate in HD or FD mode, which is described later in Section~\ref{ssec:exp-link}. 
Since packets are sent simultaneously in both directions on an FD link, this average PRR degradation is equivalent to an average FD link throughput gain of $1.87\times$ under the same MCS. \subsection{Link-Level: SNR Difference and FD Gain} \label{ssec:exp-link} \begin{figure}[!t] \centering \subfloat[LOS deployment and an FD radio in a hallway]{ \label{fig:exp-map-los} \includegraphics[width=1\columnwidth]{./figs/exp/link/exp_los_setup.pdf} } \\ \vspace{-0.5\baselineskip} \subfloat[NLOS deployment and an FD radio in a lab environment]{ \label{fig:exp-map-nlos} \includegraphics[width=1\columnwidth]{./figs/exp/link/exp_nlos_setup.pdf} } \vspace{-0.5\baselineskip} \caption{(a) Line-of-sight (LOS), and (b) non-line-of-sight (NLOS) deployments, and the measured HD link SNR values (dB).} \label{fig:exp-map} \vspace{-\baselineskip} \end{figure} \noindent\textbf{Experimental Setup}. To thoroughly evaluate the link-level FD throughput gain achieved by our FD radio design, we conduct experiments with two FD radios with $\SI{10}{dBm}$ TX power, one emulating a base station (BS) and one emulating a user. We consider both line-of-sight (LOS) and non-line-of-sight (NLOS) experiments as shown in Fig.~\ref{fig:exp-map}. In the LOS setting, the BS is placed at the end of a hallway and the user is moved away from the BS in steps of 5 meters up to a distance of 40 meters. In the NLOS setting, the BS is placed in a lab environment with regular furniture and the user is placed at various locations (offices, labs, and corridors). The measured HD link SNR values are also included in Fig.~\ref{fig:exp-map}. Following the methodology of~\cite{bharadia2013full}, for each user location, we measure the \emph{link SNR difference}, which is defined as the absolute difference between the average HD and FD link SNR values. Throughout the experiments, link SNR values between $0$ -- $\SI{50}{dB}$ are observed. \noindent\textbf{Difference in HD and FD Link SNR Values}.
Fig.~\ref{fig:exp-link-snr-loss} shows the measured link SNR difference as a function of the HD link SNR (i.e., for different user locations) in the LOS and NLOS experiments, respectively, with the 64QAM-3/4 MCS. For the LOS experiments, the average link SNR difference is $\SI{0.6}{dB}$ with a standard deviation of $\SI{0.16}{dB}$. For the NLOS experiments, the average link SNR difference is $\SI{0.63}{dB}$ with a standard deviation of $\SI{0.31}{dB}$. The SNR difference has a higher variance in the NLOS experiments due to the more complex propagation environment (e.g., wooden desks and chairs, metal doors, and bookshelves). In both cases, the link SNR difference is minimal and uncorrelated with user locations, showing the robustness of the FDE-based FD radio. \begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[LOS Experiment]{ \label{fig:exp-link-snr-loss-los} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_los_snr_loss.eps} } \hfill \subfloat[NLOS Experiment]{ \label{fig:exp-link-snr-loss-nlos} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_nlos_snr_loss.eps} } \vspace{-0.5\baselineskip} \caption{Difference between HD and FD link SNR values in the (a) LOS, and (b) NLOS experiments, with $\SI{10}{dBm}$ TX power and 64QAM-3/4 MCS.} \label{fig:exp-link-snr-loss} \vspace{-\baselineskip} \end{figure} \noindent\textbf{Impact of Constellations}. Fig.~\ref{fig:exp-constellation} shows the measured link SNR difference and its CDF with varying constellations and a 3/4 coding rate. It can be seen that the link SNR difference has a mean of $\SI{0.58}{dB}$ and a standard deviation of $\SI{0.4}{dB}$, both of which are uncorrelated with the constellations.
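The link SNR difference statistics reported above follow directly from the metric's definition; a short sketch (the per-location RX power values are hypothetical, and the $\SI{-85}{dBm}$ noise floor is the USRP value reported earlier):

```python
import math

NOISE_FLOOR_DBM = -85.0  # USRP RX noise floor

def link_snr_db(avg_rx_power_dbm):
    """Link SNR (dB): average RX signal power over the RX noise floor."""
    return avg_rx_power_dbm - NOISE_FLOOR_DBM

def snr_difference_db(hd_rx_dbm, fd_rx_dbm):
    """Absolute difference between the average HD and FD link SNRs."""
    return abs(link_snr_db(hd_rx_dbm) - link_snr_db(fd_rx_dbm))

def mean_std(xs):
    m = sum(xs) / len(xs)
    return m, math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# Hypothetical per-location average RX power (dBm) in HD and FD modes.
hd = [-55.0, -60.2, -64.9, -70.1]
fd = [-55.6, -60.9, -65.4, -70.8]
print(mean_std([snr_difference_db(h, f) for h, f in zip(hd, fd)]))
```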
\begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[]{ \label{fig:exp-constellation-snr-loss} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/snr_loss_vs_constellation_err.eps} } \hfill \subfloat[]{ \label{fig:exp-constellation-tput-gain} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/snr_loss_vs_constellation_cdf.eps} } \vspace{-0.5\baselineskip} \caption{Difference between HD and FD link SNR values with $\SI{10}{dBm}$ TX power under varying constellations: (a) mean and standard deviation, and (b) CDF.} \label{fig:exp-constellation} \vspace{-\baselineskip} \end{figure} \noindent\textbf{FD Link Throughput and Gain}. For each user location in the LOS and NLOS experiments, the HD (resp.\ FD) link throughput is measured as the highest average data rate across all MCSs achieved by the link when both nodes operate in HD (resp.\ FD) mode. The FD gain is computed as the ratio between FD and HD throughput values. Recall that the maximal HD data rate is $\SI{54}{Mbps}$; hence, an FD link data rate of $\SI{108}{Mbps}$ can be achieved with an FD link PRR of 1. Fig.~\ref{fig:exp-link-tput} shows the average HD and FD link throughput with the 16QAM-3/4 and 64QAM-3/4 MCSs, where each point represents the average throughput across 1,000 packets. The results show that with sufficient link SNR (e.g., $\SI{30}{dB}$ for the 64QAM-3/4 MCS), the FDE-based FD radios achieve an \emph{exact} link throughput gain of $2\times$ with an HD/FD link PRR of 1. Namely, in cases in which the data rate is limited by the MCS, enabling FD communication results in an \emph{exact} throughput gain of $2\times$. With medium link SNR values, where the link PRR is less than 1, the average FD link throughput gains across different MCSs are $1.91\times$ and $1.85\times$ for the LOS and NLOS experiments, respectively.
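The mapping from PRR to FD link throughput gain under a fixed MCS is simple arithmetic; a minimal sketch (a toy model that ignores MAC overhead; the $\SI{54}{Mbps}$ rate is the maximal HD data rate mentioned above):

```python
def link_throughput(phy_rate_mbps, prr, full_duplex):
    """Link throughput under a fixed MCS: PHY rate times PRR per
    direction; an FD link carries two directions simultaneously."""
    directions = 2 if full_duplex else 1
    return directions * phy_rate_mbps * prr

def fd_gain(phy_rate_mbps, prr_hd, prr_fd):
    return (link_throughput(phy_rate_mbps, prr_fd, True)
            / link_throughput(phy_rate_mbps, prr_hd, False))

# Sufficient SNR: PRR of 1 in both modes gives an exact 2x gain.
assert fd_gain(54.0, 1.0, 1.0) == 2.0
# Insufficient SNR: the measured ~6.5% average FD PRR degradation.
print(round(fd_gain(54.0, 1.0, 1.0 - 0.065), 2))  # -> 1.87
```

This reproduces the $1.87\times$ average gain quoted earlier from the $6.5\%$ PRR degradation.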
\begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[LOS Experiment]{ \label{fig:exp-link-tput-los} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_los_tput_3_4.eps} } \hfill \subfloat[NLOS Experiment]{ \label{fig:exp-link-tput-nlos} \includegraphics[width=0.47\columnwidth]{./figs/exp/link/exp_nlos_tput_3_4.eps} } \vspace{-0.5\baselineskip} \caption{HD and FD link throughput in the (a) LOS, and (b) NLOS experiments, with $\SI{10}{dBm}$ TX power and 16QAM-3/4 and 64QAM-3/4 MCSs.} \label{fig:exp-link-tput} \vspace{-\baselineskip} \end{figure} \subsection{Network-Level FD Gain} \label{ssec:exp-net} We now experimentally evaluate the network-level throughput gain introduced by an FD-capable BS and FD-capable users. The users can significantly benefit from the FDE-based FD radio, which is suitable for hand-held devices. We compare experimental results to the analysis (e.g.,~\cite{marasevic2017resource}) and demonstrate practical FD gain in different network settings. Specifically, we consider two types of networks as depicted in Fig.~\ref{fig:exp-net-setup}: (i) \emph{UL-DL networks} with one FD BS and two HD users with inter-user interference (IUI), and (ii) \emph{heterogeneous HD-FD networks} with HD and FD users. Due to hardware limitations, we apply a TDMA setting in which each (HD or FD) user takes turns being activated for the same period of time.
\begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[]{ \label{fig:exp-net-setup-ul-dl} \includegraphics[width=0.32\columnwidth]{./figs/exp/net_ul_dl/exp_ul_dl_setup.pdf} } \hspace{-6pt} \hfill \subfloat[]{ \label{fig:exp-net-setup-two-users} \includegraphics[width=0.32\columnwidth]{./figs/exp/net_two_users/exp_two_users_setup.pdf} } \hspace{-6pt} \hfill \subfloat[]{ \label{fig:exp-net-setup-three-users} \includegraphics[width=0.32\columnwidth]{./figs/exp/net_three_users/exp_three_users_setup.pdf} } \vspace{-0.5\baselineskip} \caption{An example experimental setup for: (a) the UL-DL networks with varying $\gamma_{\textrm{UL}}$ and $\gamma_{\textrm{DL}}$, (b) heterogeneous 3-node network with one FD BS and 2 FD users, and (c) heterogeneous 4-node networks with one FD BS, 2 FD users, and one HD user.} \label{fig:exp-net-setup} \vspace{-\baselineskip} \end{figure} \subsubsection{UL-DL Networks with IUI} \label{sssec:exp-net-ul-dl} We first consider UL-DL networks consisting of one FD BS and two HD users (indexed 1 and 2). Without loss of generality, in this setting, user 1 transmits on the UL to the BS, and the BS transmits to user 2 on the DL (see Fig.~\ref{fig:exp-net-setup}\subref{fig:exp-net-setup-ul-dl}). \noindent\textbf{Analytical FD gain}. We use Shannon's capacity formula $\DataRate{}(\SNR{}) = B \cdot \log_{2}(1+\SNR{})$ to compute the \emph{analytical throughput} of a link under bandwidth $B$ and link SNR $\SNR{}$. If the BS is only HD-capable, the network throughput in a UL-DL network when the UL and DL share the channel in a TDMA manner with equal fraction of time is given by \begin{align} \label{eq:net-ul-dl-tput-hd} \textstyle \DataRate{\textrm{UL-DL}}^{\textrm{HD}} = \frac{B}{2}\log\left(1+\gamma_{\textrm{UL}}\right) + \frac{B}{2}\log\left(1+\gamma_{\textrm{DL}}\right), \end{align} where $\gamma_{\textrm{UL}}$ and $\gamma_{\textrm{DL}}$ are the UL and DL SNRs, respectively. 
If the BS is FD-capable, the UL and DL can be simultaneously activated with an analytical network throughput of \begin{align} \label{eq:net-ul-dl-tput-fd} \textstyle \DataRate{\textrm{UL-DL}}^{\textrm{FD}} = B\log\left(1+\frac{\gamma_{\textrm{UL}}}{1+\gamma_{\textrm{Self}}}\right) + B\log\left(1+\frac{\gamma_{\textrm{DL}}}{1+\gamma_{\textrm{IUI}}}\right), \end{align} where: (i) $\frac{\gamma_{\textrm{DL}}}{1+\gamma_{\textrm{IUI}}}$ is the signal-to-interference-plus-noise ratio (SINR) at the DL HD user, and (ii) $\gamma_{\textrm{Self}}$ is the residual self-interference-to-noise ratio (XINR) at the FD BS. We set $\gamma_{\textrm{Self}}=1$ when computing the analytical throughput. Namely, the residual SI power is no higher than the RX noise floor (which can be achieved by the FDE-based FD radio, see Section~\ref{ssec:exp-node}). The \emph{analytical FD gain} is then defined as the ratio $\left(\DataRate{\textrm{UL-DL}}^{\textrm{FD}}/\DataRate{\textrm{UL-DL}}^{\textrm{HD}}\right)$. Note that the FD gain depends on the coupling between $\gamma_{\textrm{UL}}$, $\gamma_{\textrm{DL}}$, and $\gamma_{\textrm{IUI}}$, which depend on the BS/user locations, their TX power, etc. 
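The analytical throughputs in {\eqref{eq:net-ul-dl-tput-hd}} and {\eqref{eq:net-ul-dl-tput-fd}} are straightforward to evaluate; a sketch in Python (the operating point below is a hypothetical example, not one of the measured locations):

```python
import math

def rate(snr_linear, bandwidth=1.0):
    """Shannon rate with normalized bandwidth."""
    return bandwidth * math.log2(1 + snr_linear)

def uldl_fd_gain(g_ul, g_dl, g_iui, g_self=1.0):
    """Analytical FD gain of a UL-DL network (linear-scale SNRs),
    with the XINR fixed to 1 as in the analysis above."""
    hd = 0.5 * rate(g_ul) + 0.5 * rate(g_dl)
    fd = rate(g_ul / (1 + g_self)) + rate(g_dl / (1 + g_iui))
    return fd / hd

lin = lambda x_db: 10 ** (x_db / 10)
# Hypothetical operating point: 10 dB UL, 20 dB DL, 0 dB IUI.
print(round(uldl_fd_gain(lin(10), lin(20), lin(0)), 2))  # -> 1.63
```

As the analysis predicts, strong IUI can drive the gain below $1\times$ (e.g., with $\gamma_{\textrm{IUI}}=\SI{30}{dB}$ at the same UL/DL SNRs, the ratio drops to about $0.54$), which is why the gain surfaces in Fig.~\ref{fig:exp-net-ul-dl-gain} depend on the coupling of the three SNRs.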
\begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[$\gamma_{\textrm{UL}}=\SI{10}{dB}$]{ \label{fig::exp-net-ul-dl-gain-10db} \includegraphics[height=1.25in]{./figs/exp/net_ul_dl/exp_ul_dl_gain_10db.eps} } \subfloat[$\gamma_{\textrm{UL}}=\SI{15}{dB}$]{ \label{fig::exp-net-ul-dl-gain-15db} \includegraphics[height=1.25in]{./figs/exp/net_ul_dl/exp_ul_dl_gain_15db.eps} } \subfloat[$\gamma_{\textrm{UL}}=\SI{20}{dB}$]{ \label{fig::exp-net-ul-dl-gain-20db} \includegraphics[height=1.25in]{./figs/exp/net_ul_dl/exp_ul_dl_gain_20db.eps} } \vspace{-0.5\baselineskip} \caption{Analytical (colored surface) and experimental (filled circles) network throughput gain for UL-DL networks consisting of one FD BS and two HD users with varying UL and DL SNR values, and inter-user interference (IUI) levels: (a) $\gamma_{\textrm{UL}}=\SI{10}{dB}$, (b) $\gamma_{\textrm{UL}}=\SI{15}{dB}$, and (c) $\gamma_{\textrm{UL}}=\SI{20}{dB}$. The baseline is the network throughput when the BS is HD.} \label{fig:exp-net-ul-dl-gain} \vspace{-\baselineskip} \end{figure} \noindent\textbf{Experimental FD gain}. The experimental setup is depicted in Fig.~\ref{fig:exp-net-setup}\subref{fig:exp-net-setup-ul-dl}, where the TX power levels of the BS and user 1 are set to be $\SI{10}{dBm}$ and $\SI{-10}{dBm}$, respectively. We fix the location of the BS and consider different UL SNR values of $\gamma_{\textrm{UL}} = 10/15/20\thinspace\SI{}{dB}$ by placing user 1 at three different locations. For each value of $\gamma_{\textrm{UL}}$, user 2 is placed at 10 different locations, resulting in varying $\gamma_{\textrm{DL}}$ and $\gamma_{\textrm{IUI}}$ values. 
\begin{table}[!t] \caption{Average FD Gain in UL-DL Networks with IUI.} \label{table:exp-net-ul-dl-gain} \vspace{-1.5\baselineskip} \scriptsize \begin{center} \begin{tabular}{|c|c|c|} \hline UL SNR, $\gamma_{\textrm{UL}}$ & Analytical FD Gain & Experimental FD Gain \\ \hline $\SI{10}{dB}$ & $1.30\times$ & $1.25\times$ \\ \hline $\SI{15}{dB}$ & $1.23\times$ & $1.16\times$ \\ \hline $\SI{20}{dB}$ & $1.22\times$ & $1.14\times$ \\ \hline \end{tabular} \end{center} \vspace{-2\baselineskip} \end{table} Fig.~\ref{fig:exp-net-ul-dl-gain} shows the analytical (colored surface) and experimental (filled circles) FD gain, where the analytical gain is extracted using {\eqref{eq:net-ul-dl-tput-hd}} and {\eqref{eq:net-ul-dl-tput-fd}}, and the experimental gain is computed using the measured UL and DL throughput. It can be seen that smaller values of $\gamma_{\textrm{UL}}$ and lower ratios between $\gamma_{\textrm{DL}}$ and $\gamma_{\textrm{IUI}}$ lead to higher throughput gains in both analysis and experiments. The average analytical and experimental FD gains are summarized in Table~\ref{table:exp-net-ul-dl-gain}. Due to practical reasons such as the link SNR difference and its impact on link PRR (see Section~\ref{ssec:exp-snr-prr-relationship}), the experimental FD gain is $7\%$ lower than the analytical FD gain. The results confirm the analysis in~\cite{marasevic2017resource} and demonstrate the practical FD gain achieved in wideband UL-DL networks without any changes in the current network stack (i.e., only bringing FD capability to the BS). Moreover, performance improvements are expected through advanced power control and scheduling schemes. 
\subsubsection{Heterogeneous 3-Node Networks} \label{sssec:exp-net-two-users} We consider heterogeneous HD-FD networks with 3 nodes: one FD BS and two users that can operate in either HD or FD mode (see an example experimental setup in Figs.~\ref{fig:intro}\subref{fig:intro-fd-net} and~\ref{fig:exp-net-setup}\subref{fig:exp-net-setup-two-users}). All 3 nodes have the same $\SI{0}{dBm}$ TX power so that each user has symmetric UL and DL SNR values of $\SNR{i}$ ($i=1,2$). We place user 1 at 5 different locations and place user 2 at 10 different locations for each location of user 1, resulting in a total of 50 pairs of $(\SNR{1},\SNR{2})$. \noindent\textbf{Analytical FD gain}. We set the users to share the channel in a TDMA manner. The analytical network throughput in a 3-node network when zero, one, and two users are FD-capable is respectively given by \begin{align} & \textstyle \DataRate{}^{\textrm{HD}} = \frac{B}{2}\log\left(1+\SNR{1}\right) + \frac{B}{2}\log\left(1+\SNR{2}\right), \label{eq:net-two-users-tput-hd} \\ & \textstyle \DataRate{\textrm{User}~i~\textrm{FD}}^{\textrm{HD-FD}} = B\log\left(1+\frac{\SNR{i}}{1+\gamma_{\textrm{Self}}}\right) + \frac{B}{2}\log\left(1+\SNR{\bar{i}}\right),\ \label{eq:net-two-users-tput-hd-fd} \\ & \textstyle \DataRate{}^{\textrm{FD}} = B\log\left(1+\frac{\SNR{1}}{1+\gamma_{\textrm{Self}}}\right) + B\log\left(1+\frac{\SNR{2}}{1+\gamma_{\textrm{Self}}}\right), \label{eq:net-two-users-tput-fd} \end{align} where $\gamma_{\textrm{Self}}=1$ is set (similar to Section~\ref{sssec:exp-net-ul-dl}). We consider both FD gains of $\left(\DataRate{\textrm{User}~i~\textrm{FD}}^{\textrm{HD-FD}}/\DataRate{}^{\textrm{HD}}\right)$ (i.e., user $i$ is FD and user $\bar{i} \ne i$ is HD), and $\left(\DataRate{}^{\textrm{FD}}/\DataRate{}^{\textrm{HD}}\right)$ (i.e., both users are FD). \noindent\textbf{Experimental FD gain}.
For each pair of $(\SNR{1},\SNR{2})$, experimental FD gain is measured in three cases: (i) only user 1 is FD, (ii) only user 2 is FD, and (iii) both users are FD. Fig.~\ref{fig:exp-net-two-users} shows the analytical (colored surface) and experimental (filled circles) FD gain for each case. We exclude the results with $\SNR{i}<\SI{3}{dB}$ since the packets cannot be decoded, resulting in a throughput of zero (see Fig.~\ref{fig:exp-prr-vs-snr}). \begin{figure}[!t] \centering \vspace{-0.5\baselineskip} \subfloat[Only user 1 FD]{ \label{fig:exp-net-two-users-user1-fd} \includegraphics[height=1.25in]{./figs/exp/net_two_users/exp_user1_fd_gain_experimental.eps} } \subfloat[Only user 2 FD]{ \label{fig:exp-net-two-users-user2-fd} \includegraphics[height=1.25in]{./figs/exp/net_two_users/exp_user2_fd_gain_experimental.eps} } \subfloat[Both users FD]{ \label{fig:exp-net-two-users-both-fd} \includegraphics[height=1.25in]{./figs/exp/net_two_users/exp_both_fd_gain_experimental.eps} } \vspace{-0.5\baselineskip} \caption{Analytical (colored surface) and experimental (filled circles) network throughput gain for 3-node networks consisting of one FD BS and two users with varying link SNR values: (a) only user 1 is FD, (b) only user 2 is FD, and (c) both users are FD. The baseline is the network throughput when both users are HD.} \label{fig:exp-net-two-users} \vspace{-0.5\baselineskip} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.8\columnwidth]{./figs/exp/net_two_users/exp_two_users_fairness.eps} \vspace{-0.5\baselineskip} \caption{Measured Jain's fairness index (JFI) in 3-node networks when both users are HD and FD with varying $(\SNR{1},\SNR{2})$.} \label{fig:exp-net-two-users-fairness} \vspace{-\baselineskip} \end{figure} The results show that with small link SNR values, the experimental FD gain is lower than the analytical value due to the inability to decode the packets. 
On the other hand, with sufficient link SNR values, the experimental FD gain exceeds the analytical FD gain. This is because setting $\gamma_{\textrm{Self}}=1$ in {\eqref{eq:net-two-users-tput-hd-fd}} and {\eqref{eq:net-two-users-tput-fd}} results in a $\SI{3}{dB}$ SNR loss in the analytical FD link SNR, and thereby in a lower analytical throughput. However, in practice, the packets can be decoded with a link PRR of 1 with sufficient link SNRs, resulting in exactly twice the number of packets being successfully sent over an FD link. Moreover, the FD gain is more significant when enabling FD capability for users with higher link SNR values. Another important metric we consider is the fairness between users, which is measured by Jain's fairness index (JFI). In the considered 3-node networks, the JFI ranges between 1/2 (worst case) and 1 (best case). Fig.~\ref{fig:exp-net-two-users-fairness} shows the measured JFI when both users operate in HD or FD mode. The results show that introducing FD capability results in an average degradation in the network JFI of only $5.6\%/4.4\%/7.4\%$ for $\SNR{1} = 15/20/25\thinspace\SI{}{dB}$, while the average network FD gains are $1.32\times$/$1.58\times$/$1.73\times$, respectively. In addition, the JFI increases with higher and more balanced user SNR values, which is as expected. \subsubsection{Heterogeneous 4-Node Networks} \label{sssec:exp-net-three-users} We experimentally study 4-node networks consisting of an FD BS and three users with $\SI{10}{dBm}$ TX power (see an example experimental setup in Fig.~\ref{fig:exp-net-setup}\subref{fig:exp-net-setup-three-users}). The experimental setup is similar to that described in Section~\ref{sssec:exp-net-two-users}. 100 experiments are conducted, in which the 3 users are placed at different locations with different user SNR values. For each experiment, the network throughput is measured in three cases, in which: (i) zero, (ii) one, and (iii) two users are FD-capable.
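The JFI values reported above follow the standard closed form; a minimal sketch (the example throughputs are hypothetical):

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2). Equals 1
    when all users see equal throughput and 1/n in the worst case
    (1/2 for the two-user networks considered above)."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x ** 2 for x in throughputs))

assert jain_fairness([1.0, 1.0]) == 1.0   # best case
assert jain_fairness([1.0, 0.0]) == 0.5   # worst case for n = 2
print(round(jain_fairness([54.0, 36.0]), 3))  # -> 0.962
```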
Fig.~\ref{fig:exp-net-three-users} shows the CDF of the network throughput of the three cases, where the measured link SNR varies between $5$--$\SI{45}{dB}$. Overall, median network FD gains of $1.25\times$ and $1.52\times$ are achieved in cases with one and two FD users, respectively. The trend shows that in a real-world environment, the total network throughput increases as more users become FD-capable, and the improvement is more significant with higher user SNR values. Note that we only apply a TDMA scheme; a more advanced MAC layer design has the potential to further improve the FD gain in these networks. \begin{figure}[!t] \centering \includegraphics[width=0.8\columnwidth]{./figs/exp/net_three_users/exp_three_users_gain_experimental.eps} \vspace{-0.5\baselineskip} \caption{Experimental network throughput gain for 4-node networks when zero, one, or two users are FD-capable, with $\SI{10}{dBm}$ TX power and varying user locations.} \label{fig:exp-net-three-users} \vspace{-\baselineskip} \end{figure} \subsection{FDE using RF BPFs} \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{./figs/diagram/pcb_diagram.pdf} \vspace{-\baselineskip} \caption{Block diagram of the implemented $\NumTap=2$ FDE taps in the PCB canceller (see Fig.~\ref{fig:fde-concept}(a)), each of which consists of an RLC bandpass filter (BPF), an attenuator for amplitude control, and a phase shifter for phase control.} \label{fig:diagram-pcb} \vspace{-\baselineskip} \end{figure} \subsection{FDE PCB Canceller Implementation} \label{ssec:impl-pcb} Fig.~\ref{fig:intro}\subref{fig:intro-pcb} and Fig.~\ref{fig:fde-concept}\subref{fig:fde-concept-diagram} show the implementation and block diagram of the PCB canceller with 2 FDE taps. In particular, a reference signal is tapped from the TX input using a coupler and is split into two FDE taps through a power divider. Then, the signals after each FDE tap are combined and RF SIC is performed at the RX input.
Each FDE tap consists of a reconfigurable $2^{\textrm{nd}}$-order BPF, as well as an attenuator and phase shifter for amplitude and phase controls. We refer to the BPF here as the \emph{PCB BPF} to distinguish from the one in the RFIC canceller {\eqref{eq:rfic-tf}}. The PCB BPF (with a size of $\SI{1.5}{\cm}\times\SI{4}{\cm}$, see Fig.~\ref{fig:diagram-pcb}) is implemented as an RLC filter with impedance transformation networks and is optimized around $\SI{900}{MHz}$ operating frequency.\footnote{We select $\SI{900}{MHz}$ around the Region 2 $902$--$\SI{928}{MHz}$ ISM band as the operating frequency but the approach can be easily extended to other bands (e.g., $\SI{2.4}{GHz}$) with slight modification of the hardware design and proper choice of the frequency-dependent components.} When compared to the $N$-path filter used in the RFIC canceller~\cite{Zhou_WBSIC_JSSC15} that consumes a certain amount of DC power, this discrete component-based passive RLC BPF has zero DC power consumption and can support higher TX power levels. Moreover, it has a lower noise level than the RFIC implementation. The PCB BPF center frequency in the $i^{\textrm{th}}$ FDE tap can be adjusted through the capacitor, $\PCBTapCFCap{i}$, in the RLC resonance tank. In order to achieve a high and adjustable BPF quality factor, impedance transformation networks including transmission-lines (T-Lines) and digitally tunable capacitors, $\PCBTapQFCap{i}$, are introduced. In our implementation, $\PCBTapCFCap{i}$ consists of two parallel capacitors: a fixed $\SI{8.2}{pF}$ capacitor and a Peregrine Semiconductor PE64909 digitally tunable capacitor ($4$-bit) with a resolution of $\SI{0.12}{pF}$. For $\PCBTapQFCap{i}$, we use the Peregrine Semiconductor PE64102 digitally tunable capacitor ($5$-bit) with a resolution of $\SI{0.39}{pF}$.
In addition, the programmable attenuator has a tuning range of $0$--$\SI{15.5}{dB}$ with a $\SI{0.5}{dB}$ resolution, and the passive phase shifter is controlled by an $8$-bit digital-to-analog converter (DAC) and covers the full $\SI{360}{\degree}$ range. \subsection{FDE PCB Canceller Model} \label{ssec:impl-model} Ideally, the PCB BPF has a $2^{\textrm{nd}}$-order frequency response from the RLC resonance tank. However, in a practical implementation, its response deviates from the ideal response used in the FDE-based RFIC canceller {\eqref{eq:rfic-tf}}. Therefore, a valid model tailored for evaluating the performance and optimized configuration of the PCB canceller is needed. Based on the circuit diagram in Fig.~\ref{fig:diagram-pcb}, we derive a realistic model for the frequency response of the PCB BPF, $\BPFTapTF{i}(f_k)$, given by\footnote{The details can be found in Appendix~\ref{append:pcb-model}.} \begin{align} \label{eq:pcb-bpf-tf} \BPFTapTF{i}(f_k) = & R_s^{-1} \Big[ j\sin(2\beta l) Z_0Y_{\textrm{F},i}(f_k)Y_{\textrm{Q},i}(f_k) \nonumber \\ & + \cos^2(\beta l)Y_{\textrm{F},i}(f_k) + 2\cos(2\beta l)Y_{\textrm{Q},i}(f_k) \nonumber \\ & + j\sin(2\beta l)/Z_0 + 0.5j\sin(2\beta l)Z_0(Y_{\textrm{Q},i}(f_k))^2 \nonumber \\ & - \sin^2(\beta l)Z_0^2Y_{\textrm{F},i}(f_k)(Y_{\textrm{Q},i}(f_k))^2 \Big]^{-1}, \end{align} where $Y_{\textrm{F},i}(f_k)$ and $Y_{\textrm{Q},i}(f_k)$ are the admittances of the RLC resonance tank and the impedance transformation network, respectively, i.e., \begin{equation} \left\{ \begin{aligned} \label{eq:pcb-admittance} Y_{\textrm{F},i}(f_k) & = 1/R_{\textrm{F}} + j2\pi \PCBTapCFCap{i} f_k + 1/(j2\pi L_{\textrm{F}} f_k), \\ Y_{\textrm{Q},i}(f_k) & = 1/R_{\textrm{Q}} + j2\pi \PCBTapQFCap{i} f_k + 1/(j2\pi L_{\textrm{Q}} f_k). \end{aligned} \right.
\end{equation} In particular, to have perfect matching with the source and load impedance of the RLC resonance tank, $R_{\textrm{S}}$ and $R_{\textrm{L}}$ are set to the same value, $R_\textrm{Q} = 50\Omega$ (see Fig.~\ref{fig:diagram-pcb}). $\beta$ and $Z_0$ are the wavenumber and characteristic impedance of the T-Line with length $l$ (see Fig.~\ref{fig:diagram-pcb}). In our implementation, $L_{\textrm{F}} = \SI{1.65}{nH}$, $L_{\textrm{Q}} = \SI{2.85}{nH}$, $\beta l \approx \SI{1.37}{rad}$, and $Z_0 = 50\Omega$. In addition, other components in the PCB canceller (e.g., couplers and the power divider/combiner) can introduce extra attenuation and group delay, due to implementation losses. Based on the S-Parameters of the components and measurements, we observed that the introduced attenuation and group delay, denoted by $\Amp{0}^{\textrm{P}}$ and $\tau_0^{\textrm{P}}$, are constant across frequency in the desired bandwidth. Hence, we empirically set $\Amp{0}^{\textrm{P}} = \SI{-4.1}{dB}$ and $\tau_0^{\textrm{P}} = \SI{4.2}{ns}$. Recall that each FDE tap is also associated with amplitude and phase controls, $\PCBTapAmp{i}$ and $\PCBTapPhase{i}$; the PCB canceller with two FDE taps is then modeled by \begin{align} \label{eq:pcb-tf-calibrated} H^{\textrm{P}}(f_k) & = \Amp{0}^{\textrm{P}} e^{-j2\pi f_k\tau_0^{\textrm{P}}} \left[ \mathop{\textstyle\sum}_{i=1}^{2} \PCBTapAmp{i} e^{-j\PCBTapPhase{i}} \BPFTapTF{i}(f_k) \right], \end{align} where $\BPFTapTF{i}(f_k)$ is the PCB BPF model given by {\eqref{eq:pcb-bpf-tf}}. As a result, the $i^{\textrm{th}}$ FDE tap in the PCB canceller {\eqref{eq:pcb-tf-calibrated}} has configuration parameters $\{\PCBTapAmp{i}, \PCBTapPhase{i}, \PCBTapCFCap{i}, \PCBTapQFCap{i}\}$, featuring 4 DoF.
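For concreteness, the canceller model in {\eqref{eq:pcb-bpf-tf}}--{\eqref{eq:pcb-tf-calibrated}} can be evaluated numerically as sketched below (the tap settings are arbitrary examples, and $R_{\textrm{F}} = 50\,\Omega$ is an assumed value, since only $R_{\textrm{S}}$, $R_{\textrm{L}}$, and $R_{\textrm{Q}}$ are specified above):

```python
import cmath
import math

# Implementation constants from above (R_F = 50 ohm is an assumption).
R_S = R_Q = Z0 = R_F = 50.0
L_F, L_Q = 1.65e-9, 2.85e-9
BETA_L = 1.37                   # electrical length of the T-Line (rad)
A0 = 10 ** (-4.1 / 20)          # constant attenuation, -4.1 dB
TAU0 = 4.2e-9                   # constant group delay, 4.2 ns

def y_tank(f, r, c, ind):
    """Admittance of a parallel RLC tank."""
    w = 2 * math.pi * f
    return 1 / r + 1j * w * c + 1 / (1j * w * ind)

def h_bpf(f, c_f, c_q):
    """PCB BPF frequency response per the model above."""
    yf, yq = y_tank(f, R_F, c_f, L_F), y_tank(f, R_Q, c_q, L_Q)
    s2, c2 = math.sin(2 * BETA_L), math.cos(2 * BETA_L)
    cb, sb = math.cos(BETA_L), math.sin(BETA_L)
    denom = (1j * s2 * Z0 * yf * yq + cb**2 * yf + 2 * c2 * yq
             + 1j * s2 / Z0 + 0.5j * s2 * Z0 * yq**2
             - sb**2 * Z0**2 * yf * yq**2)
    return 1 / (R_S * denom)

def h_canceller(f, taps):
    """Two-tap canceller response; taps = [(amp, phase, c_f, c_q)]."""
    s = sum(a * cmath.exp(-1j * p) * h_bpf(f, cf, cq)
            for a, p, cf, cq in taps)
    return A0 * cmath.exp(-2j * math.pi * f * TAU0) * s

# Example tap configuration (illustrative values only).
taps = [(1.0, 0.0, 9.0e-12, 2.0e-12), (0.5, math.pi / 4, 9.5e-12, 2.5e-12)]
print(abs(h_canceller(900e6, taps)))
```

Such a routine is what the configuration scheme evaluates repeatedly when fitting $H^{\textrm{P}}(f_k)$ to the measured $H_{\textrm{SI}}(f_k)$.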
\subsection{Optimization of Canceller Configuration} \label{ssec:impl-opt} Based on {\textsf{(P1)}}, we now present a general FDE-based canceller configuration scheme that jointly optimizes all the FDE taps in the canceller.\footnote{The RFIC canceller presented in~\cite{Zhou_WBSIC_JSSC15} is configured based on heuristics. In Section~\ref{sec:sensitivity}, we show that the optimized configuration scheme can significantly improve the RFIC canceller performance.} Although our implemented PCB canceller has only 2 FDE taps, both its model and the configuration scheme can be easily extended to the case with a larger number of FDE taps, as described in Section~\ref{sec:sensitivity}. The inputs to the FDE-based canceller configuration scheme are: (i) the PCB canceller model {\eqref{eq:pcb-tf-calibrated}} with a given number of FDE taps, $\NumTap$, (ii) the antenna interface response, $H_{\textrm{SI}}(f_k)$, and (iii) the desired RF SIC bandwidth, $f_k \in [f_1,f_K]$. Then, the optimized canceller configuration is obtained by solving {\textsf{(P2)}}. \begin{align*} \textsf{(P2)}\ & \min: \mathop{\textstyle\sum}_{k=1}^{K} \NormTwo{H_{\textrm{res}}^{\textrm{P}}(f_k)}^2 = \mathop{\textstyle\sum}_{k=1}^{K} \NormTwo{ H_{\textrm{SI}}(f_k) - H^{\textrm{P}}(f_k) }^2 \\ \textrm{s.t.:}\ & \PCBTapAmp{i} \in [A_{\textrm{min}}^{\textrm{P}}, A_{\textrm{max}}^{\textrm{P}}],\ \PCBTapPhase{i} \in [-\pi, \pi], \\ & \PCBTapCFCap{i} \in [C_{\textrm{F,min}}, C_{\textrm{F,max}}],\ \PCBTapQFCap{i} \in [C_{\textrm{Q,min}}, C_{\textrm{Q,max}}],\ \forall i. \end{align*} Note that {\textsf{(P2)}} is challenging to solve due to its non-convexity and non-linearity, as opposed to the linear program used in the delay line-based RF canceller~\cite{bharadia2013full}. This is due to the specific forms of the configuration parameters in {\eqref{eq:pcb-tf-calibrated}}, such as (i) the higher-order terms introduced by $f_k$, and (ii) the trigonometric term introduced by the phase control, $\PCBTapPhase{i}$.
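To illustrate the configuration step in a self-contained way, the sketch below runs a toy random local search over a \emph{simplified} two-tap bandpass model; this is only a stand-in for, and not, the actual PCB canceller model and the solver used in this work, and the synthetic $H_{\textrm{SI}}$ is generated from the same toy model so that a good fit is known to exist:

```python
import cmath
import math
import random

FREQS = [890e6 + k * 1e6 for k in range(21)]  # 20 MHz around 900 MHz

def tap_response(f, amp, phase, f0, q):
    """Simplified 2nd-order bandpass tap with amplitude/phase control."""
    return amp * cmath.exp(-1j * phase) / (1 + 1j * q * (f / f0 - f0 / f))

def canceller(f, taps):
    return sum(tap_response(f, *t) for t in taps)

def cost(taps, h_si):
    """Residual SI power summed over the cancellation bandwidth."""
    return sum(abs(h - canceller(f, taps)) ** 2
               for f, h in zip(FREQS, h_si))

def configure(h_si, iters=2000, seed=1):
    """Toy random local search over the 2 x 4 configuration DoF."""
    rng = random.Random(seed)
    new = lambda: [rng.uniform(0.1, 1.0), rng.uniform(-math.pi, math.pi),
                   rng.uniform(885e6, 915e6), rng.uniform(5.0, 20.0)]
    best = [new(), new()]
    init_cost = best_cost = cost(best, h_si)
    for _ in range(iters):
        cand = [[v * (1 + rng.gauss(0.0, 0.05)) for v in t] for t in best]
        c = cost(cand, h_si)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost, init_cost

# Synthetic SI response drawn from the same toy model.
truth = [[0.8, 0.3, 895e6, 10.0], [0.4, -1.0, 905e6, 12.0]]
h_si = [canceller(f, truth) for f in FREQS]
_, final_cost, init_cost = configure(h_si)
print(final_cost <= init_cost)  # -> True (only improvements accepted)
```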
In addition, the antenna interface response, $H_{\textrm{SI}}(f_k)$, is also frequency-selective and time-varying. In general, it is difficult to maintain analytical tractability of {\textsf{(P2)}} (i.e., to obtain its optimal solution in closed-form). However, in practice, it is unnecessary to obtain the global optimum to {\textsf{(P2)}} as long as the performance achieved by the obtained local optimum is sufficient (e.g., at least $\SI{45}{dB}$ RF SIC is achieved). In this work, a locally optimal solution to {\textsf{(P2)}} is obtained using a MATLAB solver. The detailed implementation and performance of the optimized canceller configuration are described in Section~\ref{ssec:exp-node}. \subsection{Validation of the PCB Canceller Model} \begin{table}[!t] \caption{Four $(C_{\textrm{F}}, C_{\textrm{Q}})$ configurations used in the validations.} \label{table:set-cap} \vspace{-0.5\baselineskip} \scriptsize \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|} \hline & Highest Q-Factor & Lowest Q-Factor \\ \hline Highest Center Freq. & Set 1: $(C_{\textrm{F,min}}, C_{\textrm{Q,min}})$ & Set 3: $(C_{\textrm{F,min}}, C_{\textrm{Q,max}})$ \\ \hline Lowest Center Freq. & Set 2: $(C_{\textrm{F,max}}, C_{\textrm{Q,min}})$ & Set 4: $(C_{\textrm{F,max}}, C_{\textrm{Q,max}})$ \\ \hline \end{tabular} \renewcommand{\arraystretch}{1} \end{center} \vspace{-0.5\baselineskip} \end{table} \subsubsection*{Validation of the PCB BPF} We first experimentally validate the PCB BPF model, $\BPFTapTF{i}(f_k)$, given by {\eqref{eq:pcb-bpf-tf}}. The ground truth is obtained by measuring the frequency response (using S-Parameters measurements) of the PCB BPF using a test structure, which contains only the BPF, to avoid the effects of other components on the PCB.
The measurements are conducted with varying $(C_{\textrm{F}}, C_{\textrm{Q}})$ configurations and the result of each configuration is averaged over $20$ measurement instances.\footnote{We drop the subscript $i$, since both PCB BPFs behave identically.} The BPF center frequency is measured as the frequency with the highest BPF amplitude, and the BPF quality factor is computed as the ratio between the center frequency and the $\SI{3}{dB}$ bandwidth around the center frequency. Using only the RLC resonance tank, the PCB BPF has a \emph{fixed} quality factor of 2.7. By setting $C_{\textrm{Q}} = C_{\textrm{Q,max}}$ and $C_{\textrm{Q}} = C_{\textrm{Q,min}}$ (see Section~\ref{ssec:impl-pcb}), the measured lowest and highest achievable BPF quality factors are 9.2 and 17.8, respectively. This shows an improvement in the PCB BPF quality factor tuning range of 3.4$\times$--6.6$\times$, achieved by introducing the impedance transformation networks. Similarly, by setting $C_{\textrm{F}} = C_{\textrm{F,max}}$ and $C_{\textrm{F}} = C_{\textrm{F,min}}$, the PCB BPF has a center frequency tuning range of $\SI{18}{MHz}$. Fig.~\ref{fig:eval-pcb-bpf} presents the modeled and measured amplitude and phase responses of the PCB BPF with the four $(C_{\textrm{F}}, C_{\textrm{Q}})$ configurations (see Table~\ref{table:set-cap}) which cover the entire tuning range of the BPF center frequency and quality factor. The results show that the PCB BPF model {\eqref{eq:pcb-bpf-tf}} matches the measurements very closely at the highest BPF quality factor value (Sets 1 and 2). In particular, the maximum differences between the measured and modeled amplitude and phase are $\SI{0.5}{dB}$ and $\SI{7}{\degree}$, respectively. At the lowest BPF quality factor value (Sets 3 and 4), the differences are $\SI{1.2}{dB}$ and $\SI{15}{\degree}$, thereby showing the validity of the PCB BPF model.
The same level of accuracy of the PCB BPF model {\eqref{eq:pcb-bpf-tf}} is also observed for other $(C_{\textrm{F}}, C_{\textrm{Q}})$ configurations within their tuning ranges. \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat{ \label{fig:eval-pcb-bpf-amp} \includegraphics[width=0.47\columnwidth]{./figs/model/model_bpf_amp.pdf} } \hfill \subfloat{ \label{fig:eval-pcb-bpf-phase} \includegraphics[width=0.47\columnwidth]{./figs/model/model_bpf_phase.pdf} } \vspace{-0.5\baselineskip} \caption{Modeled and measured amplitude and phase responses of the implemented PCB BPF under different $(C_{\textrm{F}}, C_{\textrm{Q}})$ configurations indicated in Table~\ref{table:set-cap}.} \label{fig:eval-pcb-bpf} \vspace{-\baselineskip} \end{figure} \subsubsection*{Validation of the PCB Canceller} We use the same experiments as in the PCB BPF validation to validate the PCB canceller model with 2 FDE taps, $H^{\textrm{P}}(f_k)$, given by {\eqref{eq:pcb-tf-calibrated}}. We consider two cases for controlled measurements: (i) only one FDE tap is active, and (ii) both FDE taps are active. Note that the programmable attenuators have a maximal attenuation of only $\SI{15.5}{dB}$ (see Section~\ref{ssec:impl-pcb}); at this maximal attenuation, signals can still leak through the FDE tap, resulting in inseparable behaviors of the two FDE taps. To minimize the effect of the second FDE tap, we set the first FDE tap at its highest amplitude (i.e., lowest attenuation value of $\PCBTapAmp{1}$) with varying values of $(\PCBTapCFCap{1}, \PCBTapQFCap{1})$ while setting the second FDE tap at its lowest amplitude (i.e., highest attenuation value of $\PCBTapAmp{2}$). Fig.~\ref{fig:eval-pcb} shows the modeled and measured amplitude and phase responses of the PCB canceller in this case, i.e., only the first FDE tap is active.
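The inseparability issue described above can be illustrated with a toy two-tap model: even at maximal attenuation, a tap still contributes a small residual to the combined response. The per-tap model below (gain, phase shift, and a 2nd-order band-pass filter) mirrors the structure of the FDE taps but is an assumption, not the calibrated model {\eqref{eq:pcb-tf-calibrated}}; all numbers are illustrative.

```python
import numpy as np

def bpf(f, f_c, q):
    # Assumed 2nd-order band-pass stand-in for the per-tap filter.
    return 1.0 / (1.0 + 1j * q * (f / f_c - f_c / f))

def canceller_response(f, taps):
    # Each FDE tap applies an amplitude (linear), a phase shift, and a
    # tunable band-pass filter; the tap outputs are summed.
    return sum(a * np.exp(1j * phi) * bpf(f, f_c, q) for a, phi, f_c, q in taps)

f = np.linspace(880e6, 920e6, 401)
tap1 = (0.8, 0.3, 895e6, 12.0)    # first tap at its highest amplitude
tap2 = (0.05, -1.2, 905e6, 12.0)  # second tap at maximal attenuation ("off")

h_both = canceller_response(f, [tap1, tap2])
h_tap1 = canceller_response(f, [tap1])
# Relative residual contributed by the nominally "off" second tap:
leakage = np.max(np.abs(h_both - h_tap1)) / np.max(np.abs(h_tap1))
```

Here the attenuated tap still perturbs the combined response by a few percent, which is why the validation fixes one tap at its lowest amplitude rather than assuming it is fully off.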
At the highest BPF quality factor value (Sets 1 and 2), the maximum differences between the modeled and measured amplitude and phase are $\SI{0.9}{dB}$ and $\SI{8}{\degree}$, respectively. At the lowest BPF quality factor value (Sets 3 and 4), the errors are $\SI{1.5}{dB}$ and $\SI{12}{\degree}$, which still validates the PCB canceller model. We obtain similar results in the case where only the second FDE tap is active by setting the highest attenuation value for $\PCBTapAmp{1}$ and the lowest attenuation value for $\PCBTapAmp{2}$. The measurements are repeated with different $\{\PCBTapAmp{i}, \PCBTapPhase{i}, \PCBTapCFCap{i}, \PCBTapQFCap{i}\}$ settings for $i=1,2$, and all the results demonstrate the same level of accuracy of the PCB canceller model {\eqref{eq:pcb-tf-calibrated}}. \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat{ \label{fig:eval-pcb-amp} \includegraphics[width=0.47\columnwidth]{./figs/model/model_fde_amp.pdf} } \hfill \subfloat{ \label{fig:eval-pcb-phase} \includegraphics[width=0.47\columnwidth]{./figs/model/model_fde_phase.pdf} } \vspace{-0.5\baselineskip} \caption{Modeled and measured amplitude and phase responses of the PCB canceller, where only the first FDE tap is active, under different $(C_{\textrm{F}}, C_{\textrm{Q}})$ configurations indicated in Table~\ref{table:set-cap}.} \label{fig:eval-pcb} \vspace{-\baselineskip} \end{figure} \subsubsection*{RF Canceller and FD Radio Designs} RF SIC typically involves two stages: (i) isolation at the antenna interface, and (ii) SIC in the RF domain using cancellation circuitry. While a separate TX/RX antenna pair can provide good isolation and can be used to achieve cancellation~\cite{radunovic2010rethinking,choi2010achieving,khojastepour2011case,jain2011practical,aryafar2012midu,aryafar2015fd}, a shared antenna interface such as a circulator is more appropriate for single-antenna implementations~\cite{bharadia2013full,chung2015prototyping} and is compatible with FD MIMO systems.
Existing designs of analog/RF SIC circuitry are mostly based on a time-domain interpolation approach~\cite{bharadia2013full,korpi2016full}. In particular, real delay lines with different lengths and amplitude weighting~\cite{bharadia2013full} and phase controls~\cite{korpi2016full} are used and their configurations are optimized to best emulate the SI channel. This essentially represents an RF implementation of a finite impulse response (FIR) filter. Based on the same RF SIC approach, several FD MIMO radio designs are presented~\cite{aryafar2012midu,bharadia2014full,chen2015flexradio,chung2017compact}. FD relays have also been successfully demonstrated in~\cite{hsu2016full,bharadia2015fastforward,chen2015airexpress,chen2017bipass}. Moreover, SIC can be achieved via digital/analog beamforming in FD massive-antenna systems~\cite{everett2016softnull,aryafar2018pafd}. The techniques utilized in these works are incompatible with IC implementations, which are required for small-form-factor devices. In this paper, we focus on an FDE-based canceller, which builds on our previous work towards the design of such an RFIC canceller~\cite{Zhou_WBSIC_JSSC15}. However, existing IC-based FD radios (e.g.,~\cite{Zhou_WBSIC_JSSC15}) have not been evaluated at the system-level in different network settings. \subsubsection*{FD Gain at the Link- and Network-level} At the higher layers, recent work focuses on characterizing the capacity region and rate gains, as well as developing resource allocation algorithms under both perfect~\cite{ahmed2013rate,Sabharwal_DistributedSideChannel_TWC13} and imperfect SIC~\cite{goyal2015full,marasevic2017resource,diakonikolas2017rate}. Similar problems are considered in FD multi-antenna/MIMO systems~\cite{zheng2015joint,everett2016softnull,qian2017concurrent}. Medium access control (MAC) algorithms are studied in networks with all HD users~\cite{choi2015power,chen2017probabilistic} or with heterogeneous HD and FD users~\cite{chen2018hybrid}. 
Moreover, network-level FD gain is analyzed in~\cite{radunovic2010rethinking,yang2014characterizing,xie2014does,wang2017fundamental} and experimentally evaluated in~\cite{jain2011practical,kim2013janus} where all the users are HD or FD. Finally,~\cite{hsu2017inter} proposes a scheme to suppress IUI using an emulated FD radio. To the best of our knowledge, this is \emph{the first thorough study of wideband RF SIC achieved via a frequency-domain-based approach (which is suitable for compact implementations) that is grounded in real-world implementation and includes extensive system- and network-level experimentation}. \subsection{Setup} We use a real, practical antenna interface response, $H_{\textrm{SI}}(f_k)$, measured in the same setting as described in Section~\ref{ssec:exp-testbed}, and consider $\NumTap \in \{1,2,3,4\}$ and $B \in \{20,40,80\}\thinspace\SI{}{MHz}$. We only report the RF SIC performance with up to 4 FDE taps since, as we will show, this case can achieve sufficient RF SIC up to $\SI{80}{MHz}$ bandwidth.\footnote{We select typical values of $20/40/80\thinspace\SI{}{MHz}$ as the desired RF SIC bandwidth, since the circulator has a frequency range of $\SI{100}{MHz}$.} We use {\eqref{eq:rfic-tf}} to both model and evaluate the RFIC canceller with configuration parameters $\{\ICTapAmp{i}, \ICTapPhase{i}, \ICTapCF{i}, \ICTapQF{i}\}$, since it is shown that a $2^{\textrm{nd}}$-order BPF can accurately model the FDE $N$-path filter~\cite{ghaffari2011tunable,Zhou_WBSIC_JSSC15}. Similar to {\textsf{(P2)}} (see Section~\ref{ssec:impl-opt}), the optimized RFIC canceller configuration can be obtained by solving {\textsf{(P3)}} with $H^{\textrm{I}}(f_k)$ given by {\eqref{eq:rfic-tf}}. 
\begin{align*} \textsf{(P3)}\ & \min: \mathop{\textstyle\sum}_{k=1}^{K} \NormTwo{H_{\textrm{res}}^{\textrm{I}}(f_k)}^2 = \mathop{\textstyle\sum}_{k=1}^{K} \NormTwo{ H_{\textrm{SI}}(f_k) - H^{\textrm{I}}(f_k) }^2 \\ \textrm{s.t.:}\ & \ICTapAmp{i} \in [A_{\textrm{min}}^{\textrm{I}}, A_{\textrm{max}}^{\textrm{I}}],\ \ICTapPhase{i} \in [-\pi, \pi], \\ & \ICTapCF{i} \in [f_{\textrm{c,min}}, f_{\textrm{c,max}}],\ \ICTapQF{i} \in [Q_{\textrm{min}}, Q_{\textrm{max}}],\ \forall i. \end{align*} \noindent Note that in~\cite{Zhou_WBSIC_JSSC15}, there is no optimization of the RFIC canceller configuration, and the canceller is configured based on a heuristic approach. As we will show, the optimized canceller scheme outperforms the heuristic approach by an order of magnitude in terms of the amount of RF SIC achieved. The implemented PCB canceller includes only $\NumTap=2$ FDE taps due to its design (see Section~\ref{sec:impl}). However, it is practically feasible to include more parallel FDE taps. For numerical evaluation purposes, we model the PCB canceller with $\NumTap>2$ FDE taps by extending the validated model {\eqref{eq:pcb-tf-calibrated}} with symmetric FDE taps (i.e., all BPFs in the FDE taps behave identically). Although the canceller configuration scheme has a computational complexity of $4^{\NumTap}$ (i.e., four DoF per FDE tap), we will show that $\NumTap = 4$ taps can achieve a sufficient amount of RF SIC in realistic scenarios. In practice, the canceller configuration parameters cannot be arbitrarily selected from a continuous range as described in {\textsf{(P2)}} and {\textsf{(P3)}}. Instead, they are often restricted to discrete values given the resolution of the corresponding hardware components. To address this problem, we evaluate the canceller models in both the \emph{ideal} case and the case \emph{with practical quantization constraints}.
The canceller configuration with quantization constraints is obtained by rounding the configuration parameters returned by solving {\textsf{(P2)}} or {\textsf{(P3)}} to their closest quantized values. In particular, the RFIC canceller has the following constraints: $\forall i$, $\ICTapAmp{i} \in [-40, -10]\thinspace\SI{}{dB}$, $\ICTapPhase{i} \in [-\pi, \pi]$, $\ICTapCF{i} \in [875, 925]\thinspace\SI{}{MHz}$, and $\ICTapQF{i} \in [1, 50]$. When adding practical quantization constraints, we assume that the amplitude $\ICTapAmp{i}$ has a $\SI{0.25}{dB}$ resolution within its range. For $\ICTapPhase{i}$, $\ICTapCF{i}$, and $\ICTapQF{i}$, an $8$-bit resolution constraint is introduced, which is equivalent to $2^8=256$ discrete values spaced equally in the given range. These constraints are practically selected and can be easily realized in an IC implementation. The PCB canceller model has the following constraints: $\forall i$, $\PCBTapAmp{i} \in [-15.5, 0]\thinspace\SI{}{dB}$, $\PCBTapPhase{i} \in [-\pi, \pi]$, $\PCBTapCFCap{i} \in [0.6,2.4]\thinspace\SI{}{pF}$, and $\PCBTapQFCap{i} \in [2,14]\thinspace\SI{}{pF}$. When adding the quantization constraints, we consider $\SI{0.5}{dB}$, $\SI{0.12}{pF}$, and $\SI{0.39}{pF}$ resolutions for $\PCBTapAmp{i}$, $\PCBTapCFCap{i}$, and $\PCBTapQFCap{i}$, respectively. For $\PCBTapPhase{i}$, an 8-bit resolution is introduced. These numbers are consistent with our implementation and experiments (see Sections~\ref{ssec:impl-pcb} and~\ref{sec:exp}).
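The rounding step described above can be sketched as follows. The grids use the stated resolutions ($\SI{0.25}{dB}$ amplitude steps and $2^8$ equally spaced values for the 8-bit parameters); the helper itself and the sample values are illustrative.

```python
import numpy as np

def quantize(value, grid):
    # Round a continuous configuration parameter to the nearest value
    # allowed by the hardware resolution.
    grid = np.asarray(grid)
    return float(grid[np.argmin(np.abs(grid - value))])

# RFIC amplitude: 0.25 dB steps over [-40, -10] dB.
amp_grid = np.arange(-40.0, -10.0 + 1e-9, 0.25)
# 8-bit parameters: 2^8 = 256 values spaced equally over the range.
phase_grid = np.linspace(-np.pi, np.pi, 256)
fc_grid = np.linspace(875.0, 925.0, 256)  # center frequency, in MHz

amp_q = quantize(-23.37, amp_grid)    # snaps to the nearest 0.25 dB step
fc_q = quantize(900.02, fc_grid)      # at most half a grid step away
```

A full configuration under quantization constraints is then obtained by applying such rounding to every parameter returned by the solver for {\textsf{(P2)}} or {\textsf{(P3)}}.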
\subsection{Performance Evaluation and Comparison between the RFIC and PCB Cancellers} \begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat{ \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_rfic_bw_20mhz_ideal.pdf} } \subfloat{ \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_pcb_bw_20mhz_ideal.pdf} } \vspace{-0.5\baselineskip} \\ \subfloat{ \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_rfic_bw_40mhz_ideal.pdf} } \subfloat{ \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_pcb_bw_40mhz_ideal.pdf} } \vspace{-0.5\baselineskip} \\ \setcounter{subfigure}{0} \subfloat[RFIC Canceller]{ \label{fig:sim-ideal-rfic} \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_rfic_bw_80mhz_ideal.pdf} } \subfloat[PCB Canceller]{ \label{fig:sim-ideal-pcb} \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_pcb_bw_80mhz_ideal.pdf} } \vspace{-0.5\baselineskip} \caption{TX/RX isolation of the antenna interface (black curve) and with the RFIC and PCB cancellers with varying number of FDE taps, $\NumTap \in \{1,2,3,4\}$, and desired RF SIC bandwidth, $B \in \{20,40,80\}\thinspace\SI{}{MHz}$, in the ideal case.} \label{fig:sim-ideal} \vspace{-\baselineskip} \end{figure} Fig.~\ref{fig:sim-ideal} shows the TX/RX isolation achieved by the RFIC and PCB cancellers with optimized canceller configuration, with varying $\NumTap$ and $B$ in the ideal case (i.e., without quantization constraints). It can be seen that: (i) under a given value of $B$, a larger number of FDE taps results in higher average RF SIC, and (ii) for a larger value of $B$, more FDE taps are required to achieve sufficient RF SIC. For example, the ideal RFIC and PCB cancellers with 2 FDE taps can achieve an average $50/46/42\thinspace\SI{}{dB}$ and $50/42/35\thinspace\SI{}{dB}$ RF SIC across $20/40/80\thinspace\SI{}{MHz}$ bandwidth, respectively. 
\begin{figure}[!t] \centering \vspace{-\baselineskip} \subfloat{ \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_rfic_bw_20mhz_quan.pdf} } \subfloat{ \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_pcb_bw_20mhz_quan.pdf} } \vspace{-0.5\baselineskip} \\ \subfloat{ \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_rfic_bw_40mhz_quan.pdf} } \subfloat{ \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_pcb_bw_40mhz_quan.pdf} } \vspace{-0.5\baselineskip} \\ \setcounter{subfigure}{0} \subfloat[RFIC Canceller]{ \label{fig:sim-quan-rfic} \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_rfic_bw_80mhz_quan.pdf} } \subfloat[PCB Canceller]{ \label{fig:sim-quan-pcb} \includegraphics[width=0.48\columnwidth]{./figs/sim/sim_pcb_bw_80mhz_quan.pdf} } \vspace{-0.5\baselineskip} \caption{TX/RX isolation of the antenna interface (black curve) and with the RFIC and PCB cancellers with varying number of FDE taps, $\NumTap \in \{1,2,3,4\}$, and desired RF SIC bandwidth, $B \in \{20,40,80\}\thinspace\SI{}{MHz}$, under practical quantization constraints.} \label{fig:sim-quan} \vspace{-\baselineskip} \end{figure} Fig.~\ref{fig:sim-quan} shows the TX/RX isolation achieved by the RFIC and PCB cancellers with the optimized canceller configuration under practical quantization constraints. Compared to Fig.~\ref{fig:sim-ideal}, the results show a performance degradation due to limited hardware resolutions, which becomes more significant as $\NumTap$ increases. This is because a larger value of $\NumTap$ introduces a larger number of DoF, i.e., more canceller parameters that need to be flexibly controlled. As a result, the RF SIC performance is more sensitive to the coupling between individual FDE tap responses after quantization.
The results show that under practical constraints, the RFIC and PCB cancellers with 4 FDE taps can still achieve an average $54/50/45\thinspace\SI{}{dB}$ and $52/45/39\thinspace\SI{}{dB}$ RF SIC across $20/40/80\thinspace\SI{}{MHz}$ bandwidth, respectively. Fig.~\ref{fig:sim-quan} also shows that the RFIC canceller under the optimized configuration scheme achieves a $\SI{10}{dB}$ higher RF SIC compared with that achieved by the heuristic approach described in~\cite{Zhou_WBSIC_JSSC15} (labeled ``Heur''). It is interesting to observe that the RF SIC profile of the PCB canceller with 2 FDE taps is very similar to our experimental results (see Fig.~\ref{fig:exp-usrp-algo} in Section~\ref{ssec:exp-node}). It is also worth noting that, in practice, adding more FDE taps cannot improve the amount of RF SIC in some scenarios (e.g., with $\SI{20}{MHz}$ bandwidth), which is limited by the quantization constraints. However, performance improvement is expected by relaxing these constraints (e.g., through using components with higher resolutions and/or wider tuning ranges). 
\begin{table}[!t] \caption{Comparison between the PCB and RFIC cancellers.} \label{table:comparison-pcb-rfic} \vspace{-0.5\baselineskip} \scriptsize \begin{center} \begin{tabular}{|c|c|c|} \hline & PCB (this work) & RFIC~\cite{Zhou_WBSIC_JSSC15} \\ \hline Center Frequency & $\SI{900}{MHz}$ & $\SI{1.37}{GHz}$ \\ \hline \# of FDE Taps & 2 & 2 \\ \hline Antenna Interface & \makecell{A single antenna \\ and a circulator} & \makecell{A TX/RX \\ antenna pair} \\ \hline Antenna Isolation & $\SI{20}{dB}$ & $\SI{35}{dB}$ \\ \hline Canceller SIC ($\SI{20}{MHz}$) & $\SI{32}{dB}$ & $\SI{20}{dB}$ \\ \hline Canceller Configuration & Optimization {\textsf{(P2)}} & Heuristic \\ \hline Digital SIC & $\SI{43}{dB}$ & N/A \\ \hline Evaluation & Node/Link/Network & Node \\ \hline \end{tabular} \end{center} \vspace{-\baselineskip} \end{table} Table~\ref{table:comparison-pcb-rfic} shows the comparison between our implemented PCB canceller and the RFIC canceller presented in~\cite{Zhou_WBSIC_JSSC15}. To summarize, we numerically show that the performance of the RFIC and PCB cancellers is similar. The results based on measurements and validated canceller models confirm that the PCB canceller accurately emulates its RFIC counterpart, and that the FDE-based approach is valid and suitable for achieving wideband RF SIC in small-form-factor devices.
\section{Proofs} We give the proofs of the main theorems discussed above and some preliminary ones. Some of them are also discussed in detail in \citep{NU17}. We introduce some notation that appears only in this proof section. \begin{enumerate} \item Let us denote some $\sigma$-fields such that $\mathcal{G}_t:=\sigma(w_s;s\le t,x_0)$, $\mathcal{G}_j^n:=\mathcal{G}_{j\Delta_{n}}$, $\mathcal{A}_j^n:=\sigma(\epsilon_{ih_{n}};i\le jp_{n}-1)$, $\mathcal{H}_j^n:=\mathcal{G}_j^n\vee \mathcal{A}_j^n$. \item We define the following $\Re^r$-valued random variables which appear in the expansion: \begin{align*} \zeta_{j+1,n}&=\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}\int_{j\Delta_{n}+ih_{n}}^{(j+1)\Delta_{n}}\dop w_s,\ \zeta_{j+2,n}'=\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}\int_{(j+1)\Delta_{n}}^{(j+1)\Delta_{n}+ih_{n}}\dop w_s. \end{align*} \item $I_{j,k,n}:=I_{j,k}=[j\Delta_{n}+kh_{n},j\Delta_{n}+(k+1)h_{n}),\ j=0,\ldots, k_{n}-1,\ k=0,\ldots,p_{n}-1$. \item We set the following empirical functionals: \begin{align*} \bar{M}_{n}(f(\cdot,\vartheta))&:=\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}f(\lm{Y}{j},\vartheta),\\ \bar{D}_{n}(f(\cdot,\vartheta))&:=\frac{1}{k_{n}\Delta_{n}}\sum_{j=1}^{k_{n}-2}f(\lm{Y}{j-1},\vartheta) \parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j-1})},\\ \bar{Q}_{n}(B(\cdot,\vartheta))&=\frac{1}{k_{n}\Delta_{n}}\sum_{j=1}^{k_{n}-2}\ip{B(\lm{Y}{j-1},\vartheta)}{\parens{\lm{Y}{j+1}-\lm{Y}{j}}^{\otimes2}}. \end{align*} \item Let us define $D_{j,n}:=\frac{1}{2p_{n}}\sum_{i=0}^{p_{n}-1}\parens{\parens{Y_{j\Delta_{n}+(i+1)h_{n}}-Y_{j\Delta_{n}+ih_{n}}}^{\otimes2}-\Lambda_{\star}}$, and for $1\le l_2\le l_1\le d$, $D_{n}^{(l_1,l_2)}:= \frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}D_{j,n}^{(l_1,l_2)}$ and $D_{n}:=\left[D_{n}^{(1,1)},D_{n}^{(2,1)},\ldots, D_{n}^{(d,d-1)},D_{n}^{(d,d)}\right]=\vech\parens{\hat{\Lambda}_{n}-\Lambda_{\star}}$.
\item We denote \begin{align*} &\left\{B_{\kappa}\left|\kappa=1,\ldots,m_1,\ B_{\kappa}=(B_{\kappa}^{(j_1,j_2)})_{j_1,j_2}\right.\right\},\\ &\left\{f_{\lambda}\left|\lambda=1,\ldots,m_2,\ f_{\lambda}=(f^{(1)}_{\lambda},\ldots,f^{(d)}_{\lambda})\right.\right\}, \end{align*} which are sequences of $\Re^d\otimes \Re^d$-valued functions and $\Re^d$-valued ones such that the components of themselves and their derivatives with respect to $x$ are polynomial growth functions for all $\kappa$ and $\lambda$. \item Let us define \begin{align*} &\left\{B_{\kappa,n}(x)\left|\kappa=1,\ldots,m_1,\ B_{\kappa,n}=(B_{\kappa,n}^{(j_1,j_2)})_{j_1,j_2}\right.\right\}, \end{align*} which is a family of sequences of the functions such that the components of the functions and their derivatives with respect to $x$ are polynomial growth functions and there exist a $\Re$-valued sequence $\tuborg{v_{n}}_{n}$ s.t. $v_{n}\to0$ and $C>0$ such that for all $x\in\Re^d$ and for the sequence $\tuborg{B_{\kappa}}$ discussed above, \begin{align*} \sum_{\kappa=1}^{m_1}\norm{B_{\kappa,n}(x)-B_{\kappa}(x)} \le v_{n}\parens{1+\norm{x}^C}. \end{align*} \item Denote \begin{align*} W^{(\tau)}\parens{\tuborg{B_{\kappa}}_{\kappa},\tuborg{f_{\lambda}}_{\lambda}}:=\crotchet{\begin{matrix} W_1 & O & O\\ O & W_2^{\tau}\parens{\tuborg{B_{\kappa}}_{\kappa}} & O \\ O & O & W_3\parens{\tuborg{f_{\lambda}}_{\lambda}} \end{matrix}}. \end{align*} \end{enumerate} \subsection{Conditional expectation of supremum} The following two propositions are multidimensional extensions of Proposition 5.1 and Proposition A in \citep{Gl00} respectively. \begin{proposition}\label{pro711} Under (A1), for all $k\ge 1$, there exists a constant $C(k)$ such that for all $t\ge 0$, \begin{align*} \CE{\sup_{s\in[t,t+1]}\norm{X_s}^k}{\mathcal{G}_t}\le C(k)(1+\norm{X_t}^k). \end{align*} \end{proposition} \begin{comment} \begin{proof} It is enough to see the case $k\ge 2$ because of H\"{o}lder's inequality. 
For $s\in[t,t+1]$, \begin{align*} X_s=X_t+\int_{t}^{s}b(X_u)\dop u + \int_{t}^{s}a(X_u)\dop w_u. \end{align*} Therefore, for H\"{o}lder's inequality, Burkholder-Davis-Gundy inequality and (A1), the following evaluation holds: \begin{align*} \CE{\sup_{u\in[t,s]}\norm{X_u}^k}{\mathcal{G}_t}&\le C(k)\parens{1+\norm{X_t}^k}+C(k)\int_{t}^{s}\CE{\norm{X_u}^k}{\mathcal{G}_t}\dop u. \end{align*} By putting $\phi(s)=\CE{\sup_{u\in[t,s]}\norm{X_u}^k}{\mathcal{G}_t}$, we obtain $\phi(s)\le C(k)(1+\norm{X_t}^k)e^{C(k)(s-t)}$ because of Gronwall's inequality. \end{proof} \end{comment} \begin{proposition}\label{pro712} Under (A1) and for a function $f$ whose components are in $\mathcal{C}^1(\Re^d)$, assume that there exists $C>0$ such that \begin{align*} \norm{f'(x)}&\le C(1+\norm{x})^C. \end{align*} Then for any $k\in\mathbf{N}$, \begin{align*} \CE{\sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{f(X_s)-f(X_{j\Delta_{n}})}^k}{\mathcal{G}_j^n}\le C(k)\Delta_{n}^{k/2}\parens{1+\norm{X_{j\Delta_{n}}}^{C(k)}}. \end{align*} Especially for $f(x)=x$, \begin{align*} \CE{\sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{X_s-X_{j\Delta_{n}}}^k}{\mathcal{G}_j^n}\le C(k)\Delta_{n}^{k/2}\parens{1+\norm{X_{j\Delta_{n}}}^{k}}. \end{align*} \end{proposition} \begin{comment} \begin{proof} In the first place, we consider the case $f(x)=x$ and define the following random variable such that \begin{align*} \delta_{j,n}:=\sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{X_s-X_{j\Delta_{n}}}. \end{align*} Then by Proposition \ref*{pro711}, we obtain the following evaluation of the conditional expectation such that \begin{align*} \CE{\delta_{j,n}^k}{\mathcal{G}_j^n}&\le C(k)\Delta_{n}^{k/2}\parens{1+\norm{X_{j\Delta_{n}}}^{k}} \end{align*} because of BDG inequality, H\"{o}lder inequality and (A1). Therefore, we could have obtained the case $f(x)=x$. 
Next, we examine $f$ satisfying the conditions and set the functional \begin{align*} \delta_{j,n}(f)=\sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{f(X_s)-f(X_{j\Delta_{n}})}. \end{align*} Then by Taylor's theorem and the condition for $f'$, we obtain \begin{align*} \delta_{j,n}(f)\le \sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}C\parens{1+\norm{X_s}}^C\delta_{j,n}. \end{align*} Because of Proposition \ref*{pro711}, H\"{o}lder inequality and Taylor's theorem, we have the following evaluation \begin{align*} \CE{\delta_{j,n}^k(f)}{\mathcal{G}_j^n}\le C(k)\Delta_{n}^{k/2}\parens{1+\norm{X_{j\Delta_{n}}}^{C(k)}} \end{align*} and we obtain the result. \end{proof} \end{comment} The next proposition summarises some results useful for computation. \begin{proposition}\label{pro713} Under (A1), for all $t_3\ge t_2\ge t_1\ge 0$ where there exists a constant $C$ such that $t_3-t_1\le C$ and $l\ge 2$, we have \begin{align*} \mathrm{(i)}\ &\sup_{s_1,s_2\in[t_1,t_2]}\norm{\CE{b(X_{s_1})-b(X_{s_2})}{\mathcal{G}_{t_1}}}\le C(t_2-t_1) \parens{1+\norm{X_{t_1}}^3},\\ \mathrm{(ii)}\ &\sup_{s_1,s_2\in[t_1,t_2]}\norm{\CE{a(X_{s_1})-a(X_{s_2})}{\mathcal{G}_{t_1}}}\le C(t_2-t_1) \parens{1+\norm{X_{t_1}}^3},\\ \mathrm{(iii)}\ &\norm{\CE{\int_{t_2}^{t_3}\parens{b(X_s)-b(X_{t_2})}\dop s}{\mathcal{G}_{t_1}}}\le C(t_3-t_2)^2\parens{1+\CE{\norm{X_{t_2}}^6}{\mathcal{G}_{t_1}}}^{1/2},\\ \mathrm{(iv)}\ &\CE{\norm{\int_{t_2}^{t_3}\parens{b(X_s)-b(X_{t_2})}\dop s}^l}{\mathcal{G}_{t_1}} \le C(l)(t_3-t_2)^{3l/2}\parens{1+\CE{\norm{X_{t_2}}^{2l}}{\mathcal{G}_{t_1}}},\\ \mathrm{(v)}\ &\CE{\norm{\int_{t_1}^{t_2}\parens{\int_{t_1}^{s}\parens{a(X_{u})-a(X_{t_1})}\dop w_u}\dop s}^l }{\mathcal{G}_{t_1}}\le C(l)\parens{t_2-t_1}^{2l}\parens{1+\norm{X_{t_1}}^{2l}}. \end{align*} \end{proposition} \begin{proof} (i), (ii): Let $L$ be the infinitesimal generator of the diffusion process. 
By the Ito--Taylor expansion, for all $s\in [t_1,t_2]$, \begin{align*} \CE{b(X_s)}{\mathcal{G}_{t_1}}=b(X_{t_1})+\int_{t_1}^{s}\CE{Lb(X_u)}{\mathcal{G}_{t_1}}\dop u, \end{align*} and the second term has the evaluation \begin{align*} \sup_{s\in[t_1,t_2]}\norm{\int_{t_1}^{s}\CE{Lb(X_u)}{\mathcal{G}_{t_1}}\dop u}\le C\left(t_{2}-t_{1}\right)\parens{1+\norm{X_{t_1}}^3}. \end{align*} Therefore, we have (i), and an identical evaluation holds for (ii).\\ (iii): Using (i) and H\"{o}lder's inequality, we have the result.\\ (iv): Because of Proposition \ref*{pro712} and H\"{o}lder's inequality, we obtain the result.\\ (v): By convexity, we have \begin{align*} &\CE{\norm{\int_{t_1}^{t_2}\parens{\int_{t_1}^{s}\parens{a(X_{u})-a(X_{t_1})}\dop w_u}\dop s}^l }{\mathcal{G}_{t_1}}\\ &\le C(l)\sum_{i=1}^{d}\sum_{j=1}^{r}\CE{\abs{\int_{t_1}^{t_2}\parens{\int_{t_1}^{s} \parens{a^{(i,j)}(X_{u})-a^{(i,j)}(X_{t_1})} \dop w_u^{(j)}\dop s}^{l}}}{\mathcal{G}_{t_1}}. \end{align*} H\"{o}lder's inequality, Fubini's theorem, the BDG inequality and Proposition \ref*{pro712} give the result. \end{proof} \subsection{Propositions for ergodicity and evaluations of expectation} The next result is a multivariate version of \citep{K97} or \citep{Gl06} using Proposition \ref{pro711}. \begin{lemma}\label{lem721} Assume (A1)-(A3) hold. Let $f$ be a function in $\mathcal{C}^1(\Re^d\times\Xi)$ and assume that $f$, the components of $\partial_{x} f$ and $\nabla_{\vartheta} f$ are polynomial growth functions uniformly in $\vartheta\in\Xi$. Then the following convergence holds: \begin{align*} \frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}f(X_{j\Delta_{n}},\vartheta)\cp \nu_0\parens{f(\cdot,\vartheta)}\ \text{uniformly in }\vartheta. \end{align*} \end{lemma} \begin{comment} \begin{proof} (A2) verifies the pointwise convergence in probability such that for all $\vartheta\in\Xi$, \begin{align*} \frac{1}{k_{n}\Delta_{n}}\int_{0}^{k_{n}\Delta_{n}}f(X_s,\vartheta)\dop s\cp \nu_0(f(\cdot,\vartheta)).
\end{align*} Let $\mathscr{D}_{n}(\vartheta)$ be a random variable such that \begin{align*} D_{n}(\vartheta):=\frac{1}{k_{n}\Delta_{n}}\sum_{j=0}^{k_{n}-1}\int_{j\Delta_{n}}^{(j+1)\Delta_{n}} \parens{f(X_s,\vartheta)-f(X_{j\Delta_{n}},\vartheta)}\dop s. \end{align*} (A3) and Proposition \ref*{pro712} lead to for all $\vartheta\in\Xi$, \begin{align*} \sup_{j=0,\ldots,k_{n}-1}\E{\sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\abs{f(X_s,\vartheta)-f(X_{j\Delta_{n}},\vartheta)}} \le C\Delta_{n}^{1/2}. \end{align*} It results in for all $\vartheta$, \begin{align*} \E{\abs{D_{n}(\vartheta)}}=\E{\abs{\frac{1}{k_{n}\Delta_{n}}\sum_{j=0}^{k_{n}-1}\int_{j\Delta_{n}}^{(j+1)\Delta_{n}} \parens{f(X_s,\vartheta)-f(X_{j\Delta_{n}},\vartheta)}\dop s}}\le C\Delta_{n}^{1/2}\to 0 \end{align*} and hence for all $\vartheta$, \begin{align*} \frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}f(X_{j\Delta_{n}},\vartheta) =\frac{1}{k_{n}\Delta_{n}}\int_{0}^{k_{n}\Delta_{n}}f(X_s,\vartheta)\dop s-D_{n}(\vartheta)\cp \nu_0(f(\cdot,\vartheta)). \end{align*} With respect to the uniform convergence, the discussion is identical to that of Lemma 8 in \citep{K97}. \end{proof} \end{comment} \subsection{Characteristics of local means} The following propositions, lemmas and corollary are multidimensional extensions of those in \citep{Gl00} and \citep{Fa14}. \begin{comment} \begin{lemma}\label{lem731} $\xi_{j,n}$ and $\xi_{j+1,n}'$ are independent of each other and Gaussian. $\xi_{j,n}$ is $\mathcal{G}_{j+1}^n$-measurable and independent of $\mathcal{G}_{j}^n$, and $\xi_{j+1,n}'$ is $\mathcal{G}_{j+2}^n$-measurable and independent of $\mathcal{G}_{j+1}^n$. Furthermore, the evaluation of following conditional expectations holds: \begin{align*} \CE{\xi_{j,n}}{\mathcal{G}_j^n}=\CE{\xi_{j+1,n}'}{\mathcal{G}_j^n}&=\mathbf{0},\\ \CE{\xi_{j,n}\xi_{j,n}^T}{\mathcal{G}_j^n}=\CE{\xi_{j+1,n}'\parens{\xi_{j+1,n}'}^T}{\mathcal{G}_j^n}&=\frac{1}{3}I_r,\\ \CE{\xi_{j,n}\parens{\xi_{j,n}'}^T}{\mathcal{G}_j^n}&=\frac{1}{6}I_r. 
\end{align*} \end{lemma} \noindent The proof for Lemma \ref{lem731} is almost identical to that for Lemma 2.1 in \citep{Gl00}. \begin{proof} Measurability and independence are obvious. Note that the following integral evaluation holds: \begin{align*} \int_{j\Delta_{n}}^{(j+1)\Delta_{n}}\frac{(s-j\Delta_{n})^2}{\Delta_{n}^3}\mathrm{d}s &=\int_{(j+1)\Delta_{n}}^{(j+2)\Delta_{n}}\frac{((j+2)\Delta_{n}-s)^2}{\Delta_{n}^3}\mathrm{d}s =\frac{1}{3}. \end{align*} Therefore, $\xi_{j,n}$ has the following distribution because of Wiener integral and the independence between the components of the Wiener process $(w_t)_t$: \begin{align*} \xi_{j,n}&\sim N_r\parens{\mathbf{0},\frac{1}{3}I_r},\\ \xi_{j+1,n}'&\sim N_r\parens{\mathbf{0},\frac{1}{3}I_r}. \end{align*} In addition, the following equality holds: \begin{align*} \int_{j\Delta_{n}}^{(j+1)\Delta_{n}}\frac{(s-j\Delta_{n})((j+1)\Delta_{n}-s)}{\Delta_{n}^3}\mathrm{d}s&=\frac{1}{6}. \end{align*} Therefore the independence of the components of the Wiener processes leads to the evaluation. \end{proof} \end{comment} \begin{lemma}\label{lem732} $\zeta_{j+1,n}$ and $\zeta_{j+1,n}'$ are $\mathcal{G}_{j+1}^n$-measurable, independent of $\mathcal{G}_{j}^n$ and Gaussian. These random variables have the following decomposition: \begin{align*} \zeta_{j+1,n}&=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)\int_{I_{j,k}}\dop w_t,\\ \zeta_{j+1,n}'&=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)\int_{I_{j,k}}\dop w_t.
\end{align*} In addition, the evaluation of the following conditional expectations holds: \begin{align*} \CE{\zeta_{j+1,n}}{\mathcal{G}_j^n}=\CE{\zeta_{j+1,n}'}{\mathcal{G}_j^n}&=\mathbf{0},\\ \CE{\zeta_{j+1,n}\parens{\zeta_{j+1,n}}^T}{\mathcal{G}_j^n}&=m_{n}\Delta_{n}I_r,\\ \CE{\zeta_{j+1,n}'\parens{\zeta_{j+1,n}'}^T}{\mathcal{G}_j^n} &=m_{n}'\Delta_{n}I_r,\\ \CE{\zeta_{j+1,n}\parens{\zeta_{j+1,n}'}^T}{\mathcal{G}_j^n}&=\chi_{n}\Delta_{n}I_r, \end{align*} where $m_{n}=\parens{\frac{1}{3}+\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}$, $m_{n}'=\parens{\frac{1}{3}-\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}$ and $\chi_{n}=\frac{1}{6}\parens{1-\frac{1}{p_{n}^2}}$. \end{lemma} \noindent For the proof, see Lemma 8.2 in \citep{Fa14} and extend it to multidimensional discussion. \begin{comment} \begin{proof} Measurability and independence are trivial. The decomposition is also obvious if we consider the overlapping parts of sum of integral. Note the following integral: \begin{align*} \int_{j\Delta_{n}}^{(j+1)\Delta_{n}}\parens{\sum_{k=0}^{p_{n}-1}\frac{k+1}{p_{n}}\mathbf{1}_{I_{j,k}}(s)}^2\dop s &=\Delta_{n}\parens{\frac{1}{3}+\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}},\\ \int_{j\Delta_{n}}^{(j+1)\Delta_{n}}\parens{\sum_{k=0}^{p_{n}-1}\frac{p_{n}-1-k}{p_{n}}\mathbf{1}_{I_{j,k}}(s)}^2\dop s &=\Delta_{n}\parens{\frac{1}{3}-\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}. \end{align*} For the independence of the components of the Wiener process and Wiener integral, we have \begin{align*} \zeta_{j+1,n}&\sim N\parens{\mathbf{0},\Delta_{n}\parens{\frac{1}{3}+\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}I_r},\\ \zeta_{j+1,n}'&\sim N\parens{\mathbf{0},\Delta_{n}\parens{\frac{1}{3}-\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}I_r}, \end{align*} and hence the first, second and third equalities hold. With respect to the fourth equality, the independence among the components and the independent increments of the Wiener processes lead to the evaluation.
\end{proof} \begin{proposition}\label{pro733} Under (A1), \begin{align*} \frac{1}{\Delta_{n}}\int_{j\Delta_{n}}^{(j+1)\Delta_{n}}X_s\dop s-\lm{X}{j}{}= \sqrt{h_{n}}\parens{\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}a(X_{j\Delta_{n}+ih_{n}})\xi_{i,j,n}'}+e_{j,n}^{(1)}, \end{align*} where \begin{align*} ^{\exisits} C>0,\ \norm{\CE{e_{j,n}^{(1)}}{\mathcal{H}_j^n}}&\le Ch_{n}(1+\norm{X_{j\Delta_{n}}})\\ ^{\forall} l\ge 2,\ ^{\exisits} C(l)>0,\ \CE{\norm{e_{j,n}^{(1)}}^l}{\mathcal{H}_j^n}&\le C(l)h_{n}^l\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. \end{align*} \end{proposition} \noindent The proof can be obtained by multidimensional extension of that for Proposition 3.1 in \citep{Fa14}. \begin{proof} It is enough to consider the conditional expectation with respect to $\mathcal{G}_j^n$. Since $\Delta_{n}=p_{n}h_{n}$, as \citep{Fa14}, \begin{align*} R_{j,n}&:=\frac{1}{\Delta_{n}}\int_{j\Delta_{n}}^{(j+1)\Delta_{n}}X_s\dop s-\lm{X}{j}\\ &=\sqrt{h_{n}}\parens{\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}a(X_{j\Delta_{n}+kh_{n}})\frac{1}{h_{n}^{3/2}}\int_{I_{j,k}}\parens{\int_{j\Delta_{n}+kh_{n}}^{s}\dop w_u}\dop s}\\ &\qquad+\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\frac{1}{h_{n}}\int_{I_{j,k}}\parens{\int_{j\Delta_{n}+kh_{n}}^{s}\parens{a(X_{u})-a(X_{j\Delta_{n}+kh_{n}})}\dop w_u}\dop s\\ &\qquad+\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\frac{1}{h_{n}}\int_{I_{j,k}}\parens{\int_{j\Delta_{n}+kh_{n}}^{s}b(X_u)\dop u}\dop s. \end{align*} The fact $tw_t=\int_{0}^{t}w_s\dop s+\int_{0}^{t}s\dop w_s$ easily derived from Ito's formula leads to the next transformation as follows: \begin{align*} \int_{I_{j,k}}\parens{\int_{j\Delta_{n}+kh_{n}}^{s}\dop w_u}\dop s=h_{n}^{3/2}\xi_{k,j,n}'. 
\end{align*} Therefore, we have \begin{align*} R_{j,n}&= \sqrt{h_{n}}\parens{\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}a(X_{j\Delta_{n}+kh_{n}})\xi_{k,j,n}'}\\ &\qquad+\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\frac{1}{h_{n}}\int_{I_{j,k}}\parens{\int_{j\Delta_{n}+kh_{n}}^{s}\parens{a(X_{u})-a(X_{j\Delta_{n}+kh_{n}})}\dop w_u}\dop s\\ &\qquad+\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\frac{1}{h_{n}}\int_{I_{j,k}}\parens{\int_{j\Delta_{n}+kh_{n}}^{s}b(X_u)\dop u} \dop s. \end{align*} Hence it is necessary to evaluate the second term and the third one of this right hand side. Let us denote the second term as $e_{1,j,n}^{(1)}$ and the third one as $e_{2,j,n}^{(1)}$. Fubini's theorem verifies the following evaluation \begin{align*} \CE{e_{1,j,n}^{(1)}}{\mathcal{G}_j^n}=\mathbf{0}. \end{align*} Furthermore, for $l\ge 2$, because of convexity of $\norm{\cdot}^l$, Proposition \ref*{pro712} and Proposition \ref*{pro713}, we have \begin{align*} \CE{\norm{e_{1,j,n}^{(1)}}^l}{\mathcal{G}_j^n} \le C(l)h_{n}^{l}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. \end{align*} With respect to $e_{2,j,n}^{(1)}$, we obtain \begin{align*} \norm{e_{2,j,n}^{(1)}}\le Ch_{n}\parens{1+\sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{X_s}}. \end{align*} Hence for $l\ge1$ and Proposition \ref*{pro711}, we have \begin{align*} \CE{\norm{e_{2,j,n}^{(1)}}^l}{\mathcal{G}_j^n}\le C(l)h_{n}^l\parens{1+\norm{X_{j\Delta_{n}}}^l}. \end{align*} Therefore we obtain the result. \end{proof} \end{comment} \begin{comment} \begin{proposition}\label{pro734} Under (A1), \begin{align*} \frac{1}{\Delta_{n}}\int_{j\Delta_{n}}^{(j+1)\Delta_{n}}X_s\dop s-X_{j\Delta_{n}}=a(X_{j\Delta_{n}})\sqrt{\Delta_{n}}\xi_{j,n}' +e_{j,n}^{(2)}, \end{align*} where \begin{align*} ^{\exisits} C>0,\ \norm{\CE{e_{j,n}^{(2)}}{\mathcal{H}_j^n}}&\le C\Delta_{n}\parens{1+\norm{X_{j\Delta_{n}}}},\\ ^{\forall} l\ge2,\ ^{\exisits} C(l)>0,\ \CE{\norm{e_{j,n}^{(2)}}^l}{\mathcal{H}_j^n}&\le C(l)\Delta_{n}^l\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. 
\end{align*} \end{proposition} \begin{proposition}\label{pro735} Under (A1) and (AH), for all $j\le k_{n}-1$, \begin{align*} \lm{Y}{j}{}-X_{j\Delta_{n}}=a(X_{j\Delta_{n}})\sqrt{\Delta_{n}}\xi_{j,n}'+e_{j,n}'+\Lambda_{\star}^{1/2}\lm{\epsilon}{j}, \end{align*} where \begin{align*} ^{\exisits} C>0,\ \norm{\CE{e_{j,n}'}{\mathcal{G}_j^n}} &\le C\Delta_{n}\parens{1+\norm{X_{j\Delta_{n}}}},\\ ^{\forall} l\ge2,\ ^{\exisits} C(l),\ \CE{\norm{e_{j,n}'}^l}{\mathcal{G}_j^n}&\le C(l)\Delta_{n}^{l}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. \end{align*} Furthermore, for all $l\ge2$, \begin{align*} \CE{\norm{\lm{Y}{j}{}-X_{j\Delta_{n}}}^l}{\mathcal{H}_j^n} \le C(l)\Delta_{n}^{l/2} \parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. \end{align*} \end{proposition} \begin{proof} It is enough to see conditional expectation with respect to $\mathcal{G}_j^n$. Note the following decomposition \begin{align*} \lm{Y}{j}{}-X_{j\Delta_{n}} &=\lm{X}{j}{} -\frac{1}{\Delta_{n}}\int_{j\Delta_{n}}^{(j+1)\Delta_{n}}X_s +\frac{1}{\Delta_{n}}\int_{j\Delta_{n}}^{(j+1)\Delta_{n}}X_s -X_{j\Delta_{n}} +\Lambda_{\star}^{1/2}\lm{\epsilon}{j}{}\\ &=-\sqrt{h_{n}}\parens{\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}a(X_{j\Delta_{n}+ih_{n}})\xi_{i,j,n}'}+e_{j,n}^{(1)} +a(X_{j\Delta_{n}})\sqrt{\Delta_{n}}\xi_{j,n}' +e_{j,n}^{(2)} +\Lambda_{\star}^{1/2}\lm{\epsilon}{j}{}. \end{align*} We define \begin{align*} r_{j,n}=\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}a(X_{j\Delta_{n}+ih_{n}})\xi_{i,j,n}'. \end{align*} Obviously the following evaluation holds: \begin{align*} \CE{r_{j,n}}{\mathcal{G}_j^n}=\CE{\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}a(X_{j\Delta_{n}+ih_{n}})\xi_{i,j,n}'}{\mathcal{G}_j^n} =\mathbf{0}. \end{align*} In addition, we check for any $l\ge 2$ \begin{align*} \CE{\norm{r_{j,n}}^l}{\mathcal{G}_j^n}\le C(l)\parens{1+\norm{X_{j\Delta_{n}}}^l}. 
\end{align*} We have \begin{align*} \CE{\norm{r_{j,n}}^l}{\mathcal{G}_j^n} \le \frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}\CE{ \norm{a(X_{j\Delta_{n}+ih_{n}})}^l\CE{\norm{\xi_{i,j,n}'}^l}{\mathcal{G}_{j\Delta_{n}+ih_{n}}}}{\mathcal{G}_j^n}. \end{align*} Because of Wiener integral and the evaluation \begin{align*} \int_{j\Delta_{n}+ih_{n}}^{j\Delta_{n}+(i+1)h_{n}}\frac{\parens{j\Delta_{n}+(i+1)h_{n}-s}^2}{h_{n}^3}=\frac{1}{3}, \end{align*} $\xi_{i,j,n}$ is distributed as \begin{align*} \xi_{i,j,n}\sim N\parens{0,\frac{1}{3}I_r}. \end{align*} This result and Proposition \ref*{pro711} lead to \begin{align*} \CE{\norm{r_{j,n}}^l}{\mathcal{G}_j^n} &\le C(l)\parens{1+\norm{X_{j\Delta_{n}}}^l}. \end{align*} Therefore we have \begin{align*} \CE{\norm{\sqrt{h_{n}}\parens{\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}a(X_{j\Delta_{n}+ih_{n}})\xi_{i,j,n}'}}^l}{\mathcal{G}_j^n} &=\CE{h_{n}^{l/2}\norm{r_{j,n}}^{l}}{\mathcal{G}_j^n}\\ &\le C(l)h_{n}^{l/2}\parens{1+\norm{X_{j\Delta_{n}}}^l}. \end{align*} Here we define \begin{align*} e_{j,n}':=-\sqrt{h_{n}}\parens{\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}a(X_{j\Delta_{n}+ih_{n}})\xi_{i,j,n}'}+e_{j,n}^{(1)}+e_{j,n}^{(2)} \end{align*} and see $e_{j,n}'$ satisfies the statement. The evaluation above and Proposition \ref*{pro733}, Proposition \ref*{pro734} verify \begin{align*} \norm{\CE{e_{j,n}'}{\mathcal{G}_j^n}} &=\norm{\CE{-\sqrt{h_{n}}\parens{\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}a(X_{j\Delta_{n}+ih_{n}})\xi_{i,j,n}'}}{\mathcal{G}_j^n} +\CE{e_{j,n}^{(1)}}{\mathcal{G}_j^n}+\CE{e_{j,n}^{(2)}}{\mathcal{G}_j^n}}\\ &=\norm{\CE{e_{j,n}^{(1)}}{\mathcal{G}_j^n}+\CE{e_{j,n}^{(2)}}{\mathcal{G}_j^n}}\\ &\le Ch_{n}\parens{1+\norm{X_{j\Delta_{n}}}}+C\Delta_{n}\parens{1+\norm{X_{j\Delta_{n}}}}. \end{align*} Note that (AH) leads to \begin{align*} \frac{h_{n}}{\Delta_{n}^2}=\frac{1}{p_{n}^{2-\tau}}=O(1)\ \Rightarrow\ h_{n}=O(\Delta_{n}^2). 
\end{align*} With this fact, for all $l\ge2$,
\begin{align*} \CE{\norm{e_{j,n}'}^l}{\mathcal{G}_j^n} \le C(l)\Delta_{n}^{l}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. \end{align*}
In addition, since the sequence of $\epsilon_{ih_{n}}$ is i.i.d., $\CE{\norm{\lm{\epsilon}{j}{}}^l}{\mathcal{H}_j^n}=\E{\norm{\lm{\epsilon}{j}{}}^l}$ and
\begin{align*} \CE{\norm{\lm{Y}{j}{}-X_{j\Delta_{n}}}^l}{\mathcal{H}_j^n} &\le C(l)\CE{\norm{a(X_{j\Delta_{n}})\sqrt{\Delta_{n}}\xi_{j,n}'}^l}{\mathcal{H}_j^n} +C(l)\CE{\norm{e_{j,n}'}^l}{\mathcal{H}_j^n}\\ &\qquad+C(l)\CE{\norm{\Lambda_{\star}^{1/2}\lm{\epsilon}{j}{}}^l}{\mathcal{H}_j^n}\\ &\le C(l)\parens{\Delta_{n}^{l/2}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}+\norm{\Lambda_{\star}^{1/2}}^l\E{\norm{\lm{\epsilon}{j}{}}^l}}. \end{align*}
Finally, Lemma \ref*{lem739} leads to
\begin{align*} \CE{\norm{\lm{Y}{j}{}-X_{j\Delta_{n}}}^l}{\mathcal{H}_j^n}\le C(l)\Delta_{n}^{l/2}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. \end{align*}
This completes the proof. \end{proof} \end{comment}
\begin{proposition}\label{cor736} Under (A1) and (AH), assume that the components of the function $f$ on $\Re^d\times\Xi$, $\partial_x f$ and $\partial_x^2 f$ are polynomial growth functions uniformly in $\vartheta\in\Xi$. Then there exists $C>0$ such that for all $j\le k_{n}-1$ and $\vartheta\in\Xi$,
\begin{align*} &\norm{\CE{f(\lm{Y}{j}{},\vartheta)-f(X_{j\Delta_{n}},\vartheta)}{\mathcal{H}_j^n}}\le C\Delta_{n}\parens{1+\norm{X_{j\Delta_{n}}}^{C}}. \end{align*}
Moreover, for all $l\ge 2$,
\begin{align*} &\CE{\norm{f(\lm{Y}{j}{},\vartheta)-f(X_{j\Delta_{n}},\vartheta)}^l}{\mathcal{H}_j^n}\le C(l)\Delta_{n}^{l/2}\parens{1+\norm{X_{j\Delta_{n}}}^{C(l)}}. \end{align*}
\end{proposition}
\noindent The proof is almost identical to that of Corollary 3.3 in \citep{Fa14} except for the dimension, which does not influence the evaluation.
\begin{comment} \begin{proof} It is enough to assume $q=1$.
Taylor's theorem verifies the expansion such that \begin{align*} &f(\lm{Y}{j}{},\vartheta)-f(X_{j\Delta_{n}},\vartheta)\\ &=\partial_x f(X_{j\Delta_{n}},\vartheta)\parens{\lm{Y}{j}{}-X_{j\Delta_{n}}}\\ &\qquad+\parens{\lm{Y}{j}{}-X_{j\Delta_{n}}}^T\parens{\int_{0}^{1}(1-u)\partial_x^2 f\parens{X_{j\Delta_{n}}+u(\lm{Y}{j}{}-X_{j\Delta_{n}}),\vartheta}\dop u}\parens{\lm{Y}{j}{}-X_{j\Delta_{n}}}. \end{align*} Note the following inequality: \begin{align*} \norm{\lm{Y}{j}{}}&\le \norm{\lm{X}{j}{}}+\norm{\Lambda_{\star}^{1/2}\lm{\epsilon}{j}{}}\\ &\le \sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{X_s}+\norm{\Lambda_{\star}^{1/2}}\norm{\lm{\epsilon}{j}{}},\\ \norm{X_{j\Delta_{n}}}&\le \sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{X_s}\\ &\le \sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{X_s}+\norm{\Lambda_{\star}^{1/2}}\norm{\lm{\epsilon}{j}{}}. \end{align*} Then for all $\vartheta\in\Xi$, Proposition \ref*{pro711}, Proposition \ref*{pro735} and Lemma \ref*{lem739} lead to \begin{align*} \abs{\CE{f(\lm{Y}{j}{},\vartheta)-f(X_{j\Delta_{n}},\vartheta)}{\mathcal{H}_j^n}} \le C\Delta_{n}\parens{1+\norm{X_{j\Delta_{n}}}^{C}}. \end{align*} For any $l\ge2$, Taylor's theorem gives the following evaluation: \begin{align*} &\abs{f(\lm{Y}{j}{},\vartheta)-f(X_{j\Delta_{n}},\vartheta)}^{l}\\ &=\abs{\parens{\int_{0}^{1}\partial_xf\parens{X_{j\Delta_{n}}+u\parens{\lm{Y}{j}{}-X_{j\Delta_{n}}},\vartheta}\dop u}\parens{\lm{Y}{j}{}-X_{j\Delta_{n}}}}^l\\ &\le C(l)\parens{1+\sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{X_s}^{C(l)} +\norm{\Lambda_{\star}^{1/2}}^{C(l)}\norm{\lm{\epsilon}{j}{}}^{C(l)}} \norm{\lm{Y}{j}{}-X_{j\Delta_{n}}}^l. \end{align*} It leads to \begin{align*} \CE{\abs{f(\lm{Y}{j}{},\vartheta)-f(X_{j\Delta_{n}},\vartheta)}^l}{\mathcal{H}_j^n}\le C(l)\Delta_{n}^{l/2}\parens{1+\norm{X_{j\Delta_{n}}}^{C(l)}} \end{align*} and here we have the proof. 
\end{proof} \end{comment}
\begin{proposition}\label{pro737} Under (A1) and (AH),
\begin{align*} \lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j})=a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}+e_{j,n} +\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}, \end{align*}
where $e_{j,n}$ is an $\mathcal{H}_{j+2}^n$-measurable random variable such that there exist $C>0$ and $C(l)>0$ for all $l\ge2$ satisfying the inequalities
\begin{align*} &\norm{\CE{e_{j,n}}{\mathcal{H}_j^n}}\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{5}},\\ &\CE{\norm{e_{j,n}}^l}{\mathcal{H}_j^n}\le C(l)\Delta_{n}^l\parens{1+\norm{X_{j\Delta_{n}}}^{3l}},\\ &\norm{\CE{e_{j,n}\parens{\zeta_{j+1,n}}^T}{\mathcal{H}_j^n}}+\norm{\CE{e_{j,n}\parens{\zeta_{j+2,n}'}^T}{\mathcal{H}_j^n}}\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{3}}. \end{align*}
\end{proposition}
\noindent For the proof, see that of Proposition 3.4 in \citep{Fa14} and extend the discussion to the multidimensional setting.
\begin{comment} \begin{proof} First of all, note the decomposition
\begin{align*} \lm{Y}{j+1}-\lm{Y}{j}=\lm{X}{j+1}-\lm{X}{j}+\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}.
\end{align*} We denote $C_j:=\lm{X}{j+1}-\lm{X}{j}$ and have \begin{align*} C_j&=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\parens{X_{(j+1)\Delta_{n}+kh_{n}}-X_{j\Delta_{n}+kh_{n}}}\\ &=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\parens{X_{(j+1)\Delta_{n}+kh_{n}}-X_{(j+1)\Delta_{n}+(k-1)h_{n}}+X_{(j+1)\Delta_{n}+(k-1)h_{n}} -\cdots-X_{j\Delta_{n}+kh_{n}}}\\ &=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\parens{\int_{I_{j,k-1+p_{n}}}\dop X_s+\cdots+\int_{I_{j,k}}\dop X_s}\\ &=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\sum_{l=0}^{p_{n}-1}\int_{I_{j,k+l}}\dop X_s\\ &=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)\int_{I_{j,k}}\dop X_s +\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)\int_{I_{j+1,k}}\dop X_s, \end{align*} and \begin{align*} \int_{I_{j,k}}\dop X_s&=h_{n}b(X_{j\Delta_{n}+kh_{n}})+\int_{I_{j,k}}\parens{b(X_s)-b(X_{j\Delta_{n}+kh_{n}})}\dop s\\ &\qquad+a(X_{j\Delta_{n}+kh_{n}})\int_{I_{j,k}}\dop w_s + \int_{I_{j,k}}\parens{a(X_s)-a(X_{j\Delta_{n}+kh_{n}})}\dop w_s. \end{align*} Using $\Delta_{n}=(k+1)h_{n}+(p_{n}-1-k)h_{n}$ for all $k$, we have the decomposition \begin{align*} &\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j})\\ &=C_j-\Delta_{n}b(\lm{Y}{j})+\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}\\ &=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)\int_{I_{j,k}}\dop X_s +\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)\int_{I_{j+1,k}}\dop X_s\\ &\qquad-\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}\parens{(k+1)h_{n}+(p_{n}-1-k)h_{n}}b(\lm{Y}{j})+\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}\\ &=a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}+e_{j\Delta_{n}}+\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}, \end{align*} where $e_{j,n}=\sum_{l=1}^{4}e_{j,n}^{(l)},\ e_{j,n}^{(l)}=r_{j,n}^{(l)}+s_{j,n}^{(l)}$, \begin{align*} r_{j,n}^{(1)}&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)h_{n}\parens{b(X_{j\Delta_{n}+kh_{n}})-b(\lm{Y}{j})},\\ r_{j,n}^{(2)}&:=\parens{\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)a(X_{j\Delta_{n}+kh_{n}})\int_{I_{j,k}}\dop w_s} 
-a(X_{j\Delta_{n}})\zeta_{j+1,n},\\ r_{j,n}^{(3)}&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)\int_{I_{j,k}}\parens{b(X_s)-b(X_{j\Delta_{n}+kh_{n}})}\dop s,\\ r_{j,n}^{(4)}&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)\int_{I_{j,k}}\parens{a(X_s)-a(X_{j\Delta_{n}+kh_{n}})}\dop w_s,\\ s_{j,n}^{(1)}&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)h_{n}\parens{b(X_{(j+1)\Delta_{n}+kh_{n}})-b(\lm{Y}{j})},\\ s_{j,n}^{(2)}&:=\parens{\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)a(X_{(j+1)\Delta_{n}+kh_{n}})\int_{I_{j+1,k}}\dop w_s} -a(X_{j\Delta_{n}})\zeta_{j+2,n}',\\ s_{j,n}^{(3)}&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)\int_{I_{j+1,k}}\parens{b(X_s)-b(X_{j\Delta_{n}+kh_{n}})}\dop s,\\ s_{j,n}^{(4)}&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)\int_{I_{j+1,k}}\parens{a(X_s)-a(X_{(j+1)\Delta_{n}+kh_{n}})}\dop w_s. \end{align*} \noindent \textbf{(Step 1):} We evaluate $\norm{\CE{e_{j,n}}{\mathcal{H}_j^n}}$. It is trivial that $\CE{r_{j,n}^{(l)}}{\mathcal{H}_j^n}=\CE{s_{j,n}^{(l)}}{\mathcal{H}_j^n}=\mathbf{0}$ for $l=2,4$. Because the components of $b$, $\partial_xb$ and $\partial_x^2b$ are polynomial growth functions uniformly in $\theta\in\Theta$, Corollary \ref*{cor736} and Proposition \ref*{pro713} verify the evaluation \begin{align*} \norm{\CE{r_{j,n}^{(1)}}{\mathcal{H}_j^n}}&\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{5}}. \end{align*} The next evaluation for $r_{j,n}^{(3)}$ holds because of Proposition \ref*{pro713}: \begin{align*} \norm{\CE{r_{j,k}^{(3)}}{\mathcal{H}_j^n}} &\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^3}. \end{align*} Similarly, we can evaluate $s_{j,n}^{(1)}$ such as \begin{align*} \norm{\CE{s_{j,n}^{(1)}}{\mathcal{H}_j^n}}\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{5}} \end{align*} and $s_{j,n}^{(3)}$ such as \begin{align*} \norm{\CE{s_{j,n}^{(3)}}{\mathcal{H}_j^n}}\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^3}. 
\end{align*} In sum up, we have \begin{align*} \norm{\CE{e_{j,n}}{\mathcal{G}_j^n}}\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{5}}. \end{align*}\\ \noindent\textbf{(Step 2):} Now we evaluate $\CE{\norm{e_{j,n}}^l}{\mathcal{H}_j^n}$. Corollary \ref*{cor736} and Proposition \ref*{pro712} lead to the evaluations: \begin{align*} \CE{\norm{r_{j,n}^{(1)}}^l}{\mathcal{H}_j^n} &\le C(l)\Delta_{n}^{l+l/2}\parens{1+\norm{X_{j\Delta_{n}}}^{3l}} ,\\ \CE{\norm{r_{j,n}^{(3)}}^l}{\mathcal{H}_j^n} &\le C(l)\Delta_{n}^lh_{n}^{l/2}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}},\\ \CE{\norm{s_{j,n}^{(1)}}^l}{\mathcal{H}_j^n}&\le C(l)\Delta_{n}^{l+l/2}\parens{1+\norm{X_{j\Delta_{n}}}^{3l}},\\ \CE{\norm{s_{j,n}^{(3)}}^l}{\mathcal{H}_j^n}&\le C(l)\Delta_{n}^{l+l/2} \parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. \end{align*} Because of Lemma \ref*{lem732}, we have the expression \begin{align*} r_{j,n}^{(2)}&=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)\parens{a(X_{j\Delta_{n}+kh_{n}})-a(X_{j\Delta_{n}})}\int_{I_{j,k}}\dop w_s,\\ s_{j,n}^{(2)}&=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)\parens{a(X_{(j+1)\Delta_{n}+kh_{n}})-a(X_{j\Delta_{n}})} \int_{I_{j+1,k}}\dop w_s, \end{align*} and then when we define \begin{align*} f_1(s)&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)\parens{a(X_{j\Delta_{n}+kh_{n}})-a(X_{j\Delta_{n}})}\mathbf{1}_{I_{j,k}}(s),\\ f_2(s)&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)\parens{a(X_{(j+1)\Delta_{n}+kh_{n}})-a(X_{j\Delta_{n}})}\mathbf{1}_{I_{j+1,k}}(s), \end{align*} $r_{j,n}^{(2)}$ and $s_{j,n}^{(2)}$ are the Ito integral of $f_1$ and that of $f_2$ respectively. Hence Proposition \ref*{pro712}, BDG inequality and H\"{o}lder's inequality give \begin{align*} \CE{\norm{r_{j,n}^{(2)}}^l}{\mathcal{H}_j^n}\le C(l)\Delta_{n}^{l}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}, \end{align*} and \begin{align*} \CE{\norm{s_{j,n}^{(2)}}^l}{\mathcal{H}_j^n}&\le C(l)\Delta_{n}^{l}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. 
\end{align*} Identically, when we define \begin{align*} g_1(s)&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(k+1)\parens{a(X_s)-a(X_{j\Delta_{n}+kh_{n}})}\mathbf{1}_{I_{j,k}}(s),\\ g_2(s)&:=\frac{1}{p_{n}}\sum_{k=0}^{p_{n}-1}(p_{n}-1-k)\parens{a(X_s)-a(X_{(j+1)\Delta_{n}+kh_{n}})}\mathbf{1}_{I_{j+1,k}}(s), \end{align*} then $r_{j,n}^{(4)}$ and $s_{j,k}^{(4)}$ are the Ito integrals with respect to $g_1$ and $g_2$ and \begin{align*} \CE{\norm{r_{j,n}^{(4)}}^l}{\mathcal{H}_j^n} \le C(l)\Delta_{n}^{l}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}, \end{align*} and identically \begin{align*} \CE{\norm{s_{j,n}^{(4)}}^l}{\mathcal{H}_j^n} &\le C(l)\Delta_{n}^{l}\parens{1+\norm{X_{j\Delta_{n}}}^{2l}}. \end{align*} Then we obtain \begin{align*} \CE{\norm{e_{j,n}}^l}{\mathcal{H}_j^n}&\le C(l)\Delta_{n}^l\parens{1+\norm{X_{j\Delta_{n}}}^{3l}}. \end{align*}\\ \noindent \textbf{(Step 3):} For $r_{j,n}^{(1)}\parens{\zeta_{j+1,n}}^T$, H\"{o}lder's inequality leads to \begin{align*} \norm{\CE{r_{j,n}^{(1)}\parens{\zeta_{j+1,n}}^T}{\mathcal{H}_j^n}}& \le C\Delta_{n}^{2}\parens{1+\norm{X_{j\Delta_{n}}}^{3}}, \end{align*} and the same evaluation for $r_{j,n}^{(1)}\parens{\zeta_{j+2,n}'}^T$, $r_{j,n}^{(3)}\parens{\zeta_{j+1,n}}^T$, $r_{j,n}^{(3)}\parens{\zeta_{j+2,n}'}^T$, $s_{j,n}^{(1)}\parens{\zeta_{j+1,n}}^T$, $s_{j,n}^{(1)}\parens{\zeta_{j+2,n}'}^T$, $s_{j,n}^{(3)}\parens{\zeta_{j+1,n}}^T$ and $s_{j,n}^{(3)}\parens{\zeta_{j+2,n}'}^T$ hold. Next, tower property of conditional expectation, independence of increments of the Wiener process and {Proposition \ref*{pro713}} give \begin{align*} \norm{\CE{r_{j,n}^{(2)}\parens{\zeta_{j+1,n}}^T}{\mathcal{H}_j^n}} \le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^3}. \end{align*} The same inequality holds for $s_{j,n}^{(2)}\parens{\zeta_{j+2,n}'}^T$. 
It is obvious that
\begin{align*} &\CE{r_{j,n}^{(2)}\parens{\zeta_{j+2,n}'}^T}{\mathcal{G}_j^n}=\CE{r_{j,n}^{(4)}\parens{\zeta_{j+2,n}'}^T}{\mathcal{G}_j^n}=\CE{s_{j,n}^{(2)}\parens{\zeta_{j+1,n}}^T}{\mathcal{G}_j^n}=\CE{s_{j,n}^{(4)}\parens{\zeta_{j+1,n}}^T}{\mathcal{G}_j^n}\\ &=\mathbf{0} \end{align*}
because of the independence of the increments of the Wiener process. Finally, with the same argument as for $\norm{\CE{r_{j,n}^{(2)}\parens{\zeta_{j+1,n}}^T}{\mathcal{H}_j^n}}$, we have
\begin{align*} \norm{\CE{r_{j,n}^{(4)}\parens{\zeta_{j+1,n}}^T}{\mathcal{H}_j^n}} \le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^2} \end{align*}
because $h_{n}^{1/2}=\Delta_{n}/\parens{h_{n}^{1/2}p_{n}}=\Delta_{n}/p_{n}^{1-\tau/2}\le C\Delta_{n}$. The same evaluation holds for $s_{j,n}^{(4)}\parens{\zeta_{j+2,n}'}^T$. Hence we obtain the result.
\end{proof} \end{comment}
\begin{corollary}\label{cor738} Under (A1) and (AH),
\begin{align*} \lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(X_{j\Delta_{n}})=a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}+e_{j,n} +\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}, \end{align*}
where $e_{j,n}$ is an $\mathcal{H}_{j+2}^n$-measurable random variable such that there exist $C>0$ and $C(l)>0$ for all $l\ge2$ satisfying the inequalities
\begin{align*} &\norm{\CE{e_{j,n}}{\mathcal{H}_j^n}}\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{5}},\\ &\CE{\norm{e_{j,n}}^l}{\mathcal{H}_j^n}\le C(l)\Delta_{n}^l\parens{1+\norm{X_{j\Delta_{n}}}^{3l}},\\ &\norm{\CE{e_{j,n}\parens{\zeta_{j+1,n}}^T}{\mathcal{H}_j^n}}+\norm{\CE{e_{j,n}\parens{\zeta_{j+2,n}'}^T}{\mathcal{H}_j^n}}\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{3}}. \end{align*}
\end{corollary}
\begin{proof} It is enough to see that $\Delta_{n}b(\lm{Y}{j})-\Delta_{n}b(X_{j\Delta_{n}})$ satisfies the evaluation for $e_{j,n}$.
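Indeed, by Proposition \ref{pro737}, adding and subtracting $\Delta_{n}b(\lm{Y}{j})$ gives
\begin{align*}
\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(X_{j\Delta_{n}})
&=\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j})}
+\Delta_{n}\parens{b(\lm{Y}{j})-b(X_{j\Delta_{n}})},
\end{align*}
so the error term of the present statement is $e_{j,n}+\Delta_{n}\parens{b(\lm{Y}{j})-b(X_{j\Delta_{n}})}$.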
Proposition \ref*{cor736} and Proposition \ref*{pro737} give
\begin{align*} &\norm{\CE{\Delta_{n}b(\lm{Y}{j})-\Delta_{n}b(X_{j\Delta_{n}})}{\mathcal{H}_j^n}}\le C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{5}},\\ &\CE{\norm{\Delta_{n}b(\lm{Y}{j})-\Delta_{n}b(X_{j\Delta_{n}})}^l}{\mathcal{H}_j^n} \le C(l)\Delta_{n}^l\parens{1+\norm{X_{j\Delta_{n}}}^{3l}}. \end{align*}
With respect to the third evaluation, H\"{o}lder's inequality verifies the result.
\end{proof}
The following lemma summarises some useful evaluations for computation.
\begin{lemma}\label{lem739} Assume $f$ is a function whose components are in $\mathcal{C}^2(\Re^d\times\Xi)$ and the components of $f$ and $\partial_xf$ are polynomial growth functions uniformly in $\vartheta\in\Xi$. In addition, let $g$ denote a function whose components are in $\mathcal{C}^2(\Re^d)$ and such that the components of $g$ and $\partial_xg$ are polynomial growth functions. Under (A1), (A3), (A4) and (AH), the following uniform evaluations hold:
\begin{align*} \mathrm{(i) }&^{\forall} l_1,l_2\in\mathbf{N}_0,\ \sup_{j,n}\E{\sup_{\vartheta\in\Xi}\norm{f(\lm{Y}{j-1},\vartheta)}^{l_1}\parens{1+\norm{X_{j\Delta_{n}}}}^{l_2}}\le C(l_1,l_2),\\ \mathrm{(ii) }&^{\forall} l\in \mathbf{N},\ ^{\forall} j\le k_{n}-2,\ \E{\norm{\zeta_{j+1,n}+\zeta_{j+2,n}'}^{l}}\le C(l)\Delta_{n}^{l/2},\\ \mathrm{(iii) }&^{\forall} l\in\mathbf{N},\ ^{\forall} j\le k_{n}-1,\ \E{\norm{g(X_{(j+1)\Delta_{n}})-g(X_{j\Delta_{n}})}^{l}}\le C(l)\Delta_{n}^{l/2},\\ \mathrm{(iv) }&^{\forall} l\in\mathbf{N},\ ^{\forall} j\le k_{n}-1,\ \E{\norm{g(\lm{Y}{j})-g(X_{j\Delta_{n}})}^{l}}\le C(l)\Delta_{n}^{l/2},\\ \mathrm{(v) }&^{\forall} l\in\mathbf{N},\ ^{\forall} j\le k_{n}-2,\ \E{\norm{g(\lm{Y}{j+1})-g(\lm{Y}{j})}^{l}}\le C(l)\Delta_{n}^{l/2},\\ \mathrm{(vi) }&^{\forall} l\in\mathbf{N},\ ^{\forall} j\le k_{n}-2,\ \E{\norm{e_{j,n}}^l}\le C(l)\Delta_{n}^l,\\ \mathrm{(vii) }&^{\forall} l\in\mathbf{N},\ 
\sup_{j,n}\parens{\frac{\mathbf{E}\left[\norm{\lm{\epsilon}{j}}^l\right]}{\Delta_{n}^{l/2}}}\le C(l). \end{align*}
\end{lemma}
\begin{proof} Simple computations and the results above lead to the proof. \end{proof}
\subsection{Uniform law of large numbers}
The following propositions and theorems are multidimensional versions of those in \citep{Fa14}.
\begin{proposition}\label{pro741} Assume $f$ is a function in $\mathcal{C}^2(\Re^d\times\Xi)$ and that $f$ and the components of $\partial_xf$, $\partial_x^2f$ and $\partial_{\vartheta} f$ are polynomial growth functions uniformly in $\vartheta\in\Xi$. Under (A1)-(A4), (AH),
\begin{align*} \bar{M}_{n}(f(\cdot,\vartheta)) \cp \nu_0(f(\cdot,\vartheta)) \text{ uniformly in }\vartheta. \end{align*}
\end{proposition}
\noindent The proof is almost the same as that of Proposition 4.1 in \citep{Fa14}.
\begin{comment} \begin{proof} Lemma \ref*{lem721} leads to
\begin{align*} \sup_{\vartheta\in\Xi}\abs{\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}f(X_{j\Delta_{n}},\vartheta)-\nu_0(f(\cdot,\vartheta))}\cp 0. \end{align*}
Hence it is enough to see the $L^1$ convergence to $0$ of the following random variable
\begin{align*} \sup_{\vartheta\in\Xi}\abs{\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}f(\lm{Y}{j},\vartheta)-f(X_{j\Delta_{n}},\vartheta)} \end{align*}
since it will lead to
\begin{align*} \sup_{\vartheta\in\Xi}\abs{\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}f(\lm{Y}{j},\vartheta)-\nu_0(f(\cdot,\vartheta))}& \le \sup_{\vartheta\in\Xi}\abs{\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}\parens{f(\lm{Y}{j},\vartheta)-f(X_{j\Delta_{n}},\vartheta)}}\\ &\qquad+\sup_{\vartheta\in\Xi}\abs{\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}f(X_{j\Delta_{n}},\vartheta)-\nu_0(f(\cdot,\vartheta))}\\ &\cp 0.
\end{align*} We can use Taylor's theorem:
\begin{align*} A_j&:=\sup_{\vartheta\in\Xi}\abs{f(\lm{Y}{j},\vartheta)-f(X_{j\Delta_{n}},\vartheta)}\\ &= \sup_{\vartheta\in\Xi}\abs{\parens{\int_{0}^{1}\partial_xf(X_{j\Delta_{n}}+u\parens{\lm{Y}{j}-X_{j\Delta_{n}}},\vartheta)\dop u} \parens{\lm{Y}{j}-X_{j\Delta_{n}}}}\\ &\le C\parens{1+\sup_{s\in[j\Delta_{n},(j+1)\Delta_{n}]}\norm{X_{s}}^C +\norm{\Lambda_{\star}^{1/2}}^C\norm{\lm{\epsilon}{j}}^C} \norm{\lm{Y}{j}-X_{j\Delta_{n}}}. \end{align*}
Therefore, because of (A4), H\"{o}lder's inequality, Proposition \ref*{pro735} and Lemma \ref*{lem739}, the next evaluation holds:
\begin{align*} \CE{A_j}{\mathcal{H}_j^n} \le C\Delta_{n}^{1/2}\parens{1+\norm{X_{j\Delta_{n}}}^{C}}. \end{align*}
(A3) leads to
\begin{align*} \sup_{j}\E{A_j}\le C\Delta_{n}^{1/2}; \end{align*}
therefore, we obtain
\begin{align*} \E{\sup_{\vartheta\in\Xi}\abs{\frac{1}{k_{n}}\sum_{j=0}^{k_{n}-1}\parens{f(\lm{Y}{j},\vartheta)-f(X_{j\Delta_{n}},\vartheta)}}} \to 0. \end{align*}
\end{proof} \end{comment}
\begin{theorem}\label{thm742} Assume $f=\parens{f^1,\ldots,f^d}$ is a function in $\mathcal{C}^2(\Re^d\times\Xi; \Re^d)$ and the components of $f$, $\partial_xf$, $\partial_x^2f$ and $\partial_{\vartheta} f$ are polynomial growth functions uniformly in $\vartheta\in\Xi$. Under (A1)-(A4), (AH),
\begin{align*} \bar{D}_{n}(f(\cdot,\vartheta)) \cp 0\text{ uniformly in }\vartheta. \end{align*}
\end{theorem}
\begin{proof} We define the following random variables:
\begin{align*} V_j^n(\vartheta)&:=f(\lm{Y}{j-1},\vartheta)\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j})},\\ \tilde{D}_{n}(f(\cdot,\vartheta))&:=\frac{1}{k_{n}\Delta_{n}}\sum_{j=1}^{k_{n}-2}V_j^n(\vartheta) \end{align*}
and then
\begin{align*} \bar{D}_{n}(f(\cdot,\vartheta)) &=\tilde{D}_{n}(f(\cdot,\vartheta))+\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2}f(\lm{Y}{j-1},\vartheta) \parens{b(\lm{Y}{j})-b(\lm{Y}{j-1})}.
\end{align*} Hence it is enough to see the uniform convergence in probability of the first and second terms on the right-hand side. \\
In the first place, we consider the first term of the right-hand side above. We can decompose the sum of $V_j^n(\vartheta)$ as follows:
\begin{align*} \sum_{j=1}^{k_{n}-2}V_j^n(\vartheta)=\sum_{1\le 3j\le k_{n}-2}V_{3j}^n(\vartheta)+\sum_{1\le 3j+1\le k_{n}-2}V_{3j+1}^n(\vartheta) +\sum_{1\le 3j+2\le k_{n}-2}V_{3j+2}^n(\vartheta). \end{align*}
To simplify notation, we only consider the first term of the right-hand side, since the other terms admit identical evaluations. Let us define the following random variables:
\begin{align*} v_{3j,n}^{(1)}(\vartheta)&:=f(\lm{Y}{3j-1},\vartheta)a(X_{3j\Delta_{n}})\parens{\zeta_{3j+1,n}+\zeta_{3j+2,n}'},\\ v_{3j,n}^{(2)}(\vartheta)&:=f(\lm{Y}{3j-1},\vartheta)\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{3j+1}-\lm{\epsilon}{3j}},\\ v_{3j,n}^{(3)}(\vartheta)&:=f(\lm{Y}{3j-1},\vartheta)e_{3j,n}, \end{align*}
and recall Proposition \ref*{pro737}, which states
\begin{align*} \lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j})=a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}+e_{j,n} +\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}. \end{align*}
Therefore we have
\begin{align*} V_{3j}^{n}(\vartheta)=v_{3j,n}^{(1)}(\vartheta)+v_{3j,n}^{(2)}(\vartheta)+v_{3j,n}^{(3)}(\vartheta). \end{align*}
First of all, we show the pointwise convergence to 0 for every $\vartheta$; we abbreviate $f(\cdot,\vartheta)$ as $f(\cdot)$.
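The key observation is that $\zeta_{3j+1,n}$, $\zeta_{3j+2,n}'$ and $\lm{\epsilon}{3j+1}-\lm{\epsilon}{3j}$ are centred and independent of $\mathcal{H}_{3j}^n$, so $v_{3j,n}^{(1)}$ and $v_{3j,n}^{(2)}$ are conditionally centred and only the error term contributes to the conditional mean: by Proposition \ref{pro737},
\begin{align*}
\abs{\CE{V_{3j}^{n}}{\mathcal{H}_{3j}^n}}
=\abs{\CE{v_{3j,n}^{(3)}}{\mathcal{H}_{3j}^n}}
\le C\Delta_{n}^{2}\norm{f(\lm{Y}{3j-1})}\parens{1+\norm{X_{3j\Delta_{n}}}^{5}}.
\end{align*}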
Since $V_{3j}^n$ is $\mathcal{H}_{3j+2}^{n}$-measurable and hence $\mathcal{H}_{3j+3}^n$-measurable, the sequence of random variables $\tuborg{V_{3j}^n}_{1\le 3j\le k_{n}-2}$ is $\tuborg{\mathcal{H}_{3j+3}^{n}}_{1\le 3j\le k_{n}-2}$-adapted, and hence it is enough to see
\begin{align*} \frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}\CE{V_{3j}^{n}}{\mathcal{H}_{3(j-1)+3}^n}=\frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}\CE{V_{3j}^{n}}{\mathcal{H}_{3j}^n}&\cp 0,\\ \frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\parens{V_{3j}^{n}}^2}{\mathcal{H}_{3j}^n}&\cp 0 \end{align*}
because of Lemma 9 in \citep{GeJ93}. These convergences are routine to verify by Proposition \ref{pro737}.
\begin{comment} With respect to the conditional first moment of $v_{3j,n}^{(1)}$ and $v_{3j,n}^{(2)}$, we have
\begin{align*} \CE{v_{3j,n}^{(1)}}{\mathcal{H}_{3j}^n} &=\CE{f(\lm{Y}{3j-1})a(X_{3j\Delta_{n}})\parens{\zeta_{3j+1,n}+\zeta_{3j+2,n}'}}{\mathcal{H}_{3j}^n}=0,\\ \CE{v_{3j,n}^{(2)}}{\mathcal{H}_{3j}^n} &=\CE{f(\lm{Y}{3j-1})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{3j+1}-\lm{\epsilon}{3j}}}{\mathcal{H}_{3j}^n}=0. \end{align*}
As for $v_{3j,n}^{(3)}$ we have
\begin{align*} \abs{\CE{v_{3j,n}^{(3)}}{\mathcal{H}_{3j}^n}}\le \norm{f(\lm{Y}{3j-1})}C\Delta_{n}^2\parens{1+\norm{X_{3j\Delta_{n}}}^5} \end{align*}
and Lemma \ref*{lem739} verifies
\begin{align*} \E{\abs{\CE{v_{3j,n}^{(3)}}{\mathcal{H}_{3j}^n}}} \le C\Delta_{n}^2. \end{align*}
As a result, we can see the following $L^1$ convergence and hence (cp.1):
\begin{align*} \E{\abs{\frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}\CE{V_{3j}^n}{\mathcal{H}_{3j}^n}}} \to0.
\end{align*} The conditional square moment of $v_{3j,n}^{(1)}$ can be evaluated as \begin{align*} \CE{\abs{v_{3j,n}^{(1)}}^2}{\mathcal{H}_{3j}^n} \le C\Delta_{n}\norm{f(\lm{Y}{3j-1})}^2\norm{a(X_{3j\Delta_{n}})}^2 \end{align*} because of Lemma \ref*{lem739}; therefore Lemma \ref*{lem739} again gives \begin{align*} \E{\abs{\frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\abs{v_{3j,n}^{(1)}}^2}{\mathcal{H}_{3j}^n}}} \to0. \end{align*} Lemma \ref*{lem739} gives the following evaluation for $v_{3j,n}^{(2)}$: \begin{align*} \CE{\abs{v_{3j,n}^{(2)}}^2}{\mathcal{H}_{3j}^n} \le C\Delta_{n}\norm{f(\lm{Y}{3j-1})}^2 \end{align*} and then by Lemma \ref*{lem739}, \begin{align*} \E{\abs{\frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\abs{v_{3j,n}^{(2)}}^2}{\mathcal{H}_{3j}^n}}} \to0. \end{align*} With respect to $v_{3j,n}^{(3)}$, \begin{align*} \CE{\abs{v_{3j,n}^{(3)}}^2}{\mathcal{H}_{3j}^n} \le \norm{f(\lm{Y}{3j-1})}^2C\Delta_{n}^2\parens{1+\norm{X_{j\Delta_{n}}}^{6}} \end{align*} and hence by Lemma \ref*{lem739}, \begin{align*} \E{\abs{\frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\abs{v_{3j,n}^{(3)}}^2}{\mathcal{H}_{3j}^n}}}\to0. \end{align*} Therefore we obtain \begin{align*} \frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\parens{V_{3j}^{n}}^2}{\mathcal{H}_{3j}^n}&\le \frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}C\CE{\parens{v_{3j,n}^{(1)}}^2+\parens{v_{3j,n}^{(2)}}^2+\parens{v_{3j,n}^{(3)}}^2}{\mathcal{H}_{3j}^n}\\ &\cp 0. \end{align*} Here we have the pointwise convergence in probability of $\tilde{D}_{n}(f(\cdot,\vartheta))$ for all $\vartheta$ because of Lemma 9 in \citep{GeJ93}. \end{comment} Next, we consider the uniform convergence in probability of $\tilde{D}_{n}(f(\cdot,\vartheta))$. Let us define \begin{align*} S_{n}^{(l)}(\vartheta):=\frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}v_{3j,n}^{(l)}(\vartheta),\ l=1,2,3. 
\end{align*} We will show that for each $l$, $S_{n}^{(l)}(\vartheta)$ converges to 0 in probability uniformly in $\vartheta$. Firstly we examine $S_{n}^{(3)}$: Lemma \ref*{lem739} gives \begin{align*} \E{\sup_{\vartheta\in\Xi}\abs{\nabla_{\vartheta} v_{3j,n}^{(3)}(\vartheta)}} \le C\Delta_{n}. \end{align*} Hence we obtain \begin{align*} \sup_{n\in\mathbf{N}}\E{\sup_{\vartheta\in\Xi}\abs{\nabla_{\vartheta} S_{n}^{(3)}(\vartheta)}} \le C <\infty. \end{align*} Therefore it holds \begin{align*} S_{n}^{(3)}(\vartheta)\cp0\text{ uniformly in }\vartheta \end{align*} by the discussion in \citep{K97} or Proposition A1 in \citep{Gl06}. For $S_{n}^{(l)},\ l=1,2$ we show that the following inequalities hold (see \citep{IH81}): there exist $C>0$ and $\kappa>\dim \Xi$ such that \begin{align*} \mathrm{(1) }&^{\forall} \vartheta\in\Xi,\ ^{\forall} n\in\mathbf{N},\ \E{\abs{S_{n}^{(l)}(\vartheta)}^{\kappa}}\le C,\\ \mathrm{(2) }&^{\forall} \vartheta,\vartheta'\in\Xi,\ ^{\forall} n\in\mathbf{N},\ \E{\abs{S_{n}^{(l)}(\vartheta)-S_{n}^{(l)}(\vartheta')}^{\kappa}}\le C\norm{\vartheta-\vartheta'}^{\kappa}. \end{align*} Assume $\kappa=2k,\ k\in\mathbf{N}$ in the following discussion. For $l=1$, Burkholder's inequality and Lemma \ref*{lem739} give that for all $\kappa$, there exists $C=C(\kappa)$ such that \begin{align*} \E{\abs{S_{n}^{(1)}(\vartheta)}^{\kappa}} &\le \frac{C(\kappa)}{\parens{k_{n}\Delta_{n}}^{\kappa}}k_{n}^{\kappa/2-1} \sum_{1\le 3j\le k_{n}-2}\E{\abs{v_{3j,n}^{(1)}(\vartheta)}^{\kappa}}\\ &\le \frac{C(\kappa)}{\parens{k_{n}\Delta_{n}}^{\kappa}}k_{n}^{\kappa/2-1} \sum_{1\le 3j\le k_{n}-2}C(\kappa)\Delta_{n}^{\kappa/2}\\ &\le \frac{C(\kappa)}{\parens{k_{n}\Delta_{n}}^{\kappa/2}}. 
\end{align*} With respect to $\E{\abs{S_{n}^{(1)}(\vartheta)-S_{n}^{(1)}(\vartheta')}^{\kappa}}$, the identical argument gives \begin{align*} \E{\abs{S_{n}^{(1)}(\vartheta)-S_{n}^{(1)}(\vartheta')}^{\kappa}} &\le \frac{C(\kappa)}{\parens{k_{n}\Delta_{n}}^{\kappa}}k_{n}^{\kappa/2-1} \sum_{1\le 3j\le k_{n}-2}\E{\abs{v_{3j,n}^{(1)}(\vartheta)-v_{3j,n}^{(1)}(\vartheta')}^{\kappa}}\\ &\le \frac{C(\kappa)}{\parens{k_{n}\Delta_{n}}^{\kappa}}k_{n}^{\kappa/2-1} \sum_{1\le 3j\le k_{n}-2}C(\kappa)\Delta_{n}^{\kappa/2}\norm{\vartheta-\vartheta'}^{\kappa}\\ &\le \frac{C(\kappa)}{\parens{k_{n}\Delta_{n}}^{\kappa/2}}\norm{\vartheta-\vartheta'}^{\kappa}. \end{align*} This result gives uniform convergence in probability of $S_{n}^{(1)}$. \begin{comment} as same as $S_{n}^{(1)}$, Burkholder's inequality gives \begin{align*} \E{\abs{S_{n}^{(2)}(\vartheta)}^{\kappa}}&\le \frac{C(\kappa)}{\parens{k_{n}\Delta_{n}}^{\kappa}}k_{n}^{\kappa/2-1} \sum_{1\le 3j\le k_{n}-2}\E{\abs{v_{3j,n}^{(2)}(\vartheta)}^{\kappa}} \end{align*} and Lemma \ref*{lem739} verifies \begin{align*} \E{\abs{v_{3j,n}^{(2)}(\vartheta)}^{\kappa}} \le C(\kappa)\Delta_{n}^{\kappa/2}; \end{align*} therefore, \end{comment} The identical evaluation holds for $S_{n}^{(2)}$, and these bounds lead to the uniform convergence of $\tilde{D}_{n}(f(\cdot,\vartheta))$. Finally we check the uniform convergence in probability of the following random variable: \begin{align*} \frac{1}{k_{n}}\sum_{1\le 3j\le k_{n}-2}f(\lm{Y}{3j-1},\vartheta)\parens{b(\lm{Y}{3j})-b(\lm{Y}{3j-1})}. \end{align*} By Lemma \ref*{lem739}, it is easily shown that \begin{align*} \E{\sup_{\vartheta\in\Xi}\abs{\frac{1}{k_{n}}\sum_{1\le 3j\le k_{n}-2} f(\lm{Y}{3j-1},\vartheta)\parens{b(\lm{Y}{3j})-b(\lm{Y}{3j-1})}}}\to0. \end{align*} This completes the proof. 
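We also remark, for the reader's convenience, how the moment bounds (1) and (2) quantify the uniform convergence; the following display is a sketch under the present sampling scheme (in particular $k_{n}\Delta_{n}\to\infty$). Since $\kappa>\dim\Xi$, the Kolmogorov-type criterion of \citep{IH81} combined with the bounds above yields \begin{align*} \E{\sup_{\vartheta\in\Xi}\abs{S_{n}^{(l)}(\vartheta)}^{\kappa}}\le \frac{C(\kappa)}{\parens{k_{n}\Delta_{n}}^{\kappa/2}},\ l=1,2, \end{align*} and hence $\sup_{\vartheta\in\Xi}\abs{S_{n}^{(l)}(\vartheta)}\cp0$ by Markov's inequality. 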
\end{proof} \begin{theorem}\label{thm743} Assume $B=\parens{B^{\left(i,j\right)}}_{i,j}$ is a function in $\mathcal{C}^2(\Re^d\times\Xi;\Re^d\otimes\Re^d)$ and that the components of $B$, $\partial_xB$, $\partial_x^2B$ and $\partial_{\vartheta} B$ are of polynomial growth uniformly in $\vartheta\in\Xi$. Under (A1)-(A4), (AH), if $\tau\in(1,2]$, \begin{align*} \bar{Q}_{n}(B(\cdot,\vartheta)) \cp \frac{2}{3}\nu_0\parens{\ip{B(\cdot,\vartheta)}{A^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}}\text{ uniformly in }\vartheta. \end{align*} \end{theorem} \begin{proof} We define $q_{j,n}(\vartheta):=\sum_{i=1}^{6}q_{j,n}^{(i)}(\vartheta)$ where \begin{align*} q_{j,n}^{(1)}(\vartheta)&:=\ip{B(\lm{Y}{j-1},\vartheta)}{\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^{\otimes 2}},\\ q_{j,n}^{(2)}(\vartheta)&:=\ip{B(\lm{Y}{j-1},\vartheta)}{\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^{\otimes2}},\\ q_{j,n}^{(3)}(\vartheta)&:=\ip{B(\lm{Y}{j-1},\vartheta)}{\parens{\Delta_{n}b(\lm{Y}{j})+e_{j,n}}^{\otimes2}},\\ q_{j,n}^{(4)}(\vartheta)&:=\ip{B(\lm{Y}{j-1},\vartheta)}{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T}\\ &\qquad+\ip{B(\lm{Y}{j-1},\vartheta)}{\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}^Ta(X_{j\Delta_{n}})^T},\\ q_{j,n}^{(5)}(\vartheta)&:=\ip{B(\lm{Y}{j-1},\vartheta)}{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}\parens{\Delta_{n}b(\lm{Y}{j})+e_{j,n}}^T}\\ &\qquad+\ip{B(\lm{Y}{j-1},\vartheta)}{\parens{\Delta_{n}b(\lm{Y}{j})+e_{j,n}} \parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}^Ta(X_{j\Delta_{n}})^T},\\ q_{j,n}^{(6)}(\vartheta)&:=\ip{B(\lm{Y}{j-1},\vartheta)}{\parens{\Delta_{n}b(\lm{Y}{j})+e_{j,n}}\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T}\\ 
&\qquad+\ip{B(\lm{Y}{j-1},\vartheta)}{\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}\parens{\Delta_{n}b(\lm{Y}{j})+e_{j,n}}^T}, \end{align*} and then by Proposition \ref*{pro737} we obtain \begin{align*} \bar{Q}_{n}(B(\cdot,\vartheta))=\frac{1}{k_{n}\Delta_{n}}\sum_{j=1}^{k_{n}-2}q_{j,n}(\vartheta). \end{align*} We examine the following quantities, which divide the summation into three parts: for $l=0,1,2$, \begin{align*} T_{l,n}^{(i)}(\vartheta):=\frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j+l\le k_{n}-2}q_{3j+l,n}^{(i)}(\vartheta)\text{ for }i=1,\ldots,6. \end{align*} First we show the pointwise convergence in probability with respect to $\vartheta$.\\ We examine $T_{0,n}^{(1)}(\vartheta)$ and show its convergence in probability via Lemma 9 in \citep{GeJ93}. Lemma \ref*{lem732} gives \begin{align*} \CE{q_{3j,n}^{(1)}(\vartheta)}{\mathcal{H}_{3j}^{n}} =\parens{\frac{2}{3}+\frac{1}{3p_{n}^2}}\Delta_{n}\ip{B(\lm{Y}{3j-1},\vartheta)}{A(X_{3j\Delta_{n}})}. \end{align*} Note that Lemma \ref*{lem721} gives \begin{align*} &\frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}\parens{\frac{2}{3}+\frac{1}{3p_{n}^2}} \Delta_{n}\ip{B(X_{3j\Delta_{n}},\vartheta)}{A(X_{3j\Delta_{n}})} \cp\frac{1}{3}\times\frac{2}{3}\times\nu_0\parens{\ip{B(\cdot,\vartheta)}{A(\cdot)}}, \end{align*} and we can obtain \begin{align*} \frac{1}{k_{n}}\sum_{1\le 3j\le k_{n}-2}\parens{\frac{2}{3}+\frac{1}{3p_{n}^2}} \ip{\parens{B(X_{3j\Delta_{n}},\vartheta)-B(X_{(3j-1)\Delta_{n}},\vartheta)}}{A(X_{3j\Delta_{n}})}&\cp0\\ \frac{1}{k_{n}}\sum_{1\le 3j\le k_{n}-2}\parens{\frac{2}{3}+\frac{1}{3p_{n}^2}} \ip{\parens{B(\lm{Y}{3j-1},\vartheta)-B(X_{(3j-1)\Delta_{n}},\vartheta)}}{A(X_{3j\Delta_{n}})}&\cp0 \end{align*} because of Lemma \ref*{lem739}; hence we have \begin{align*} \frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}\CE{q_{3j,n}^{(1)}(\vartheta)}{\mathcal{H}_{3j}^{n}}\cp \frac{2}{9}\nu_0\parens{\ip{B(\cdot,\vartheta)}{A(\cdot)}}. 
\end{align*} Next we have \begin{align*} \CE{\abs{q_{3j,n}^{(1)}(\vartheta)}^2}{\mathcal{H}_{3j}^{n}} \le C\Delta_{n}^2\norm{B(\lm{Y}{3j-1},\vartheta)}^2\norm{a(X_{3j\Delta_{n}})}^4 \end{align*} because of Lemma \ref*{lem739}, and then we obtain \begin{align*} \E{\abs{\frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\abs{q_{3j,n}^{(1)}(\vartheta)}^2}{\mathcal{H}_{3j}^{n}}}} \to0 \end{align*} also because of Lemma \ref*{lem739}. Therefore, Lemma 9 in \citep{GeJ93} gives \begin{align*} T_{0,n}^{(1)}(\vartheta)\cp\frac{2}{9}\nu_0\parens{\ip{B(\cdot,\vartheta)}{A(\cdot)}} \end{align*} and the identical convergences hold for $T_{1,n}^{(1)}(\vartheta)$ and $T_{2,n}^{(1)}(\vartheta)$. Hence \begin{align*} T_{0,n}^{(1)}(\vartheta)+T_{1,n}^{(1)}(\vartheta)+T_{2,n}^{(1)}(\vartheta) \cp\frac{2}{3}\nu_0\parens{\ip{B(\cdot,\vartheta)}{A(\cdot)}}. \end{align*} For $T_{0,n}^{(2)}(\vartheta)$, we also show the pointwise convergence in probability via Lemma 9 in \citep{GeJ93}. Firstly, \begin{align*} \CE{q_{3j,n}^{(2)}(\vartheta)}{\mathcal{H}_{3j}^n} =\frac{2}{p_{n}}\ip{B(\lm{Y}{3j-1},\vartheta)}{\Lambda_{\star}} \end{align*} and then Proposition \ref*{pro741} leads to \begin{align*} \frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}\CE{q_{3j,n}^{(2)}(\vartheta)}{\mathcal{H}_{3j}^n} \cp\begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{2}{3}\nu_0\parens{\ip{B(\cdot,\vartheta)}{\Lambda_{\star}}} & \text{ if }\tau=2 \end{cases} \end{align*} because \begin{align*} \frac{1}{p_{n}\Delta_{n}}=\frac{1}{p_{n}^2h_{n}}=\frac{1}{p_{n}^{2-\tau}}\to\begin{cases} 0 & \text{ if }\tau\in(1,2)\\ 1 & \text{ if }\tau=2. 
\end{cases} \end{align*} Because of Lemma \ref*{lem739}, we also easily obtain the conditional second-moment evaluation \begin{align*} \E{\abs{\frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\abs{q_{3j,n}^{(2)}(\vartheta)}^2}{\mathcal{H}_{3j}^n}}} \to0; \end{align*} this $L^1$ convergence implies convergence in probability, and by Lemma 9 in \citep{GeJ93}, \begin{align*} T_{0,n}^{(2)}(\vartheta)\cp\begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{2}{3}\nu_0\parens{\ip{B(\cdot,\vartheta)}{\Lambda_{\star}}} & \text{ if }\tau=2 \end{cases} \end{align*} and then \begin{align*} T_{0,n}^{(2)}(\vartheta)+T_{1,n}^{(2)}(\vartheta)+T_{2,n}^{(2)}(\vartheta)\cp\begin{cases} 0 & \text{ if }\tau\in(1,2)\\ 2\nu_0\parens{\ip{B(\cdot,\vartheta)}{\Lambda_{\star}}} & \text{ if }\tau=2. \end{cases} \end{align*} \begin{comment} \noindent\textbf{(Step 3):} For $T_{0,n}^{(3)}(\vartheta)$, we can obtain the following $L^1$ convergence: \begin{align*} \E{\abs{\frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}q_{3j,n}^{(3)}(\vartheta)}} \to0. \end{align*} Hence it leads to \begin{align*} T_{0,n}^{(3)}(\vartheta)\cp0 \end{align*} and \begin{align*} T_{0,n}^{(3)}(\vartheta)+T_{1,n}^{(3)}(\vartheta)+T_{2,n}^{(3)}(\vartheta)\cp 0. \end{align*} \noindent\textbf{(Step 4): } We show pointwise convergence in probability of $T_{0,n}^{(4)}(\vartheta)$ with Lemma 9 in \citep{GeJ93}. First of all, \begin{align*} \CE{q_{3j,n}^{(4)}(\vartheta)}{\mathcal{H}_{3j}^n} =0 \end{align*} since $w$ and $\epsilon$ are independent. Secondly, for conditional second moment, we have \begin{align*} \CE{\abs{q_{3j,n}^{(4)}(\vartheta)}^2}{\mathcal{H}_{3j}^n}\le C\Delta_{n}^2\norm{A(\lm{Y}{j-1},\vartheta)}^2\norm{a(X_{j\Delta_{n}})}^2, \end{align*} because of Lemma \ref*{lem739}; therefore, \begin{align*} \E{\abs{\frac{1}{k_{n}^2\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\abs{q_{3j,n}^{(4)}(\vartheta)}^2}{\mathcal{H}_{3j}^n}}}\to0 \end{align*} by Lemma \ref*{lem739}. 
Then Lemma 9 in \citep{GeJ93} verifies \begin{align*} T_{0,n}^{(4)}\cp0 \end{align*} and \begin{align*} T_{0,n}^{(4)}+T_{1,n}^{(4)}+T_{2,n}^{(4)}\cp0. \end{align*} \noindent\textbf{(Step 5): } We can evaluate $L^1$ convergence of $T_{0,n}^{(5)}(\vartheta)$ and $T_{0,n}^{(6)}(\vartheta)$ with Lemma \ref*{lem739} such that \begin{align*} \E{\abs{\frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}q_{3j,n}^{(5)}}} &\to0,\\ \E{\abs{\frac{1}{k_{n}\Delta_{n}}\sum_{1\le 3j\le k_{n}-2}q_{3j,n}^{(6)}(\vartheta)}} &\to0. \end{align*} This leads to \begin{align*} T_{0,n}^{(5)}(\vartheta)&\cp0\\ T_{0,n}^{(6)}(\vartheta)&\cp0 \end{align*} and \begin{align*} T_{0,n}^{(5)}(\vartheta)+T_{1,n}^{(5)}(\vartheta)+T_{2,n}^{(5)}(\vartheta)&\cp0\\ T_{0,n}^{(6)}(\vartheta)+T_{1,n}^{(6)}(\vartheta)+T_{2,n}^{(6)}(\vartheta)&\cp0. \end{align*} \end{comment} It is easy to show $T_{l,n}^{(i)}(\vartheta)\cp 0$ for $i=3,\ldots,6$ and $l=0,1,2$. Hence we have the pointwise convergence in probability of $\bar{Q}_{n}(B(\cdot,\vartheta))$ for all $\vartheta$.\\ Finally we show the uniform convergence. It follows from the bound \begin{align*} \sup_{n\in\mathbf{N}}\E{\sup_{\vartheta\in\Xi}\abs{\nabla\bar{Q}_{n}(B(\cdot,\vartheta))}}\le C, \end{align*} whose computation is verified by Lemma \ref*{lem739}. Therefore the uniform convergence in probability is obtained. \end{proof} \subsection{Asymptotic normality} \begin{theorem}\label{thm751} Under (A1)-(A5), (AH) and $k_{n}\Delta_{n}^2\to0$, \begin{align*} \crotchet{\begin{matrix} \sqrt{n}D_{n}\\ \sqrt{k_{n}}\crotchet{\bar{Q}_{n}\parens{B_{\kappa}(\cdot)} -\frac{2}{3}\bar{M}_{n}\parens{\ip{B_{\kappa}(\cdot)}{A_{n}^{\tau}\parens{\cdot,\alpha^{\star},\Lambda_{\star}}}}}_{\kappa}\\ \sqrt{k_{n}\Delta_{n}}\crotchet{\bar{D}_{n}\parens{f_{\lambda}(\cdot)}}^{\lambda} \end{matrix}}\cl N(\mathbf{0},W^{(\tau)}(\tuborg{B_{\kappa}},\tuborg{f_{\lambda}})), \end{align*} where \begin{align*} A_{n}^{\tau}(x,\alpha,\Lambda)&:=A(x,\alpha)+3\Delta_{n}^{\frac{2-\tau}{\tau-1}}\Lambda. 
\end{align*} \end{theorem} \begin{proof}\textbf{(Step 1): } We can decompose \begin{comment} \begin{align*} \hat{\Lambda}_{n}-\Lambda_{\star} &=\frac{1}{2n}\sum_{i=0}^{n-1}\parens{Y_{(i+1)h_{n}}-Y_{ih_{n}}}^{\otimes 2}-\Lambda_{\star}\\ &=\frac{1}{2n}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}+\Lambda_{\star}^{1/2}\epsilon_{(i+1)h_{n}} -X_{ih_{n}}-\Lambda_{\star}^{1/2}\epsilon_{ih_{n}}}^{\otimes 2} -\Lambda_{\star}\\ &=\frac{1}{2n}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^{\otimes 2} +\frac{1}{2n}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{\otimes 2}-2I_d}\Lambda_{\star}^{1/2}\\ &\qquad+\frac{1}{2n}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^T\Lambda_{\star}^{1/2}\\ &\qquad+\frac{1}{2n}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^T. \end{align*} Hence \end{comment} \begin{align*} \sqrt{n}\parens{\hat{\Lambda}_{n}-\Lambda_{\star}}&=\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^{\otimes 2} \\ &\qquad+\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{\otimes 2}-2I_d}\Lambda_{\star}^{1/2}\\ &\qquad+\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^T\Lambda_{\star}^{1/2}\\ &\qquad+\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^T. \end{align*} The first, third and fourth terms on the right-hand side are $o_P(1)$, which can be shown by $L^{1}$-evaluation and Lemma 9 in \citep{GeJ93}. \begin{comment} The first term of right hand side shows the following moment convergence: \begin{align*} \E{\norm{\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^{\otimes 2}}} \to0. 
\end{align*} The conditional moment evaluation for the third term can be given as follows: for all $j_1$ and $j_2$ in $\tuborg{1,\ldots,d}$, \begin{align*} \E{\abs{\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\CE{\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^{j_1}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{j_2}}{\mathcal{H}_{ih_{n}}^n}}}\to 0 \end{align*} and \begin{align*} \E{\abs{\frac{1}{4n}\sum_{i=0}^{n-1}\CE{\abs{\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^{j_1}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{j_2}}^2}{\mathcal{H}_{ih_{n}}^n}}}\to 0. \end{align*} Therefore Lemma 9 in \citep{GeJ93} leads to the third and the fourth term are $o_P(1)$. \end{comment} Then we obtain \begin{align*} \sqrt{n}\parens{\hat{\Lambda}_{n}-\Lambda_{\star}}&=\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{\otimes 2}-2I_d}\Lambda_{\star}^{1/2}+o_P(1) \end{align*} and \begin{align*} \sqrt{n}D_{n}&=\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\vech\parens{\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{\otimes 2}-2I_d}\Lambda_{\star}^{1/2}}+o_P(1). 
\end{align*} We can rewrite the summation as \begin{align*} &\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{\otimes 2}-2I_d}\Lambda_{\star}^{1/2}\\ &=\frac{1}{2\sqrt{n}}\sum_{i=1}^{n-1}\Lambda_{\star}^{1/2}\parens{2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d}\Lambda_{\star}^{1/2}+o_P(1)\\ &=\frac{1}{2\sqrt{n}}\sum_{i=p_{n}}^{n-p_{n}-1}\Lambda_{\star}^{1/2}\parens{2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d}\Lambda_{\star}^{1/2}\\ &\qquad+o_P(1) \end{align*} where the last evaluation holds because of Lemma 9 in \citep{GeJ93}, \begin{comment} Since \begin{align*} \E{\norm{\frac{1}{\sqrt{n}}\sum_{i=1}^{p_{n}}\CE{2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d}{\mathcal{H}_{(i-1)h_{n}}}}} &=0 \end{align*} and \begin{align*} \E{\frac{1}{n}\sum_{i=1}^{p_{n}}\CE{\norm{2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d}^2}{\mathcal{H}_{(i-1)h_{n}}}}\to 0, \end{align*} we have \begin{align*} &\frac{1}{2\sqrt{n}}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{\otimes 2}-2I_d}\Lambda_{\star}^{1/2}\\ &=\frac{1}{2\sqrt{n}}\sum_{i=p_{n}}^{n-p_{n}-1}\Lambda_{\star}^{1/2}\parens{2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d}\Lambda_{\star}^{1/2}+o_P(1) \end{align*} \end{comment} and then \begin{align*} \sqrt{n}D_{n}&=\frac{\sqrt{n}}{k_{n}}\sum_{j=1}^{k_{n}-2}D_{j,n}'+o_P(1), \end{align*} where 
\begin{align*} D_{j,n}'&=\frac{1}{2p_{n}}\sum_{i=0}^{p_{n}-1}\vech\lparens{\Lambda_{\star}^{1/2}\lparens{2\parens{\epsilon_{j\Delta_{n}+ih_{n}}}^{\otimes 2}+\parens{\epsilon_{j\Delta_{n}+ih_{n}}}\parens{\epsilon_{j\Delta_{n}+(i-1)h_{n}}}^T}}\\ &\qquad\qquad\qquad\qquad\qquad\rparens{\rparens{+\parens{\epsilon_{j\Delta_{n}+(i-1)h_{n}}}\parens{\epsilon_{j\Delta_{n}+ih_{n}}}^T-2I_d}\Lambda_{\star}^{1/2}},\\ \parens{D_{j,n}'}^{(l_1,l_2)}&=\frac{1}{2p_{n}}\sum_{i=0}^{p_{n}-1}\lparens{\Lambda_{\star}^{1/2}\lparens{2\parens{\epsilon_{j\Delta_{n}+ih_{n}}}^{\otimes 2}+\parens{\epsilon_{j\Delta_{n}+ih_{n}}}\parens{\epsilon_{j\Delta_{n}+(i-1)h_{n}}}^T}}\\ &\qquad\qquad\qquad\qquad\qquad\rparens{\rparens{+\parens{\epsilon_{j\Delta_{n}+(i-1)h_{n}}}\parens{\epsilon_{j\Delta_{n}+ih_{n}}}^T-2I_d}\Lambda_{\star}^{1/2}}^{(l_1,l_2)}. \end{align*} The conditional first moment of $D_{j,n}'$ is given as \begin{align*} \CE{D_{j,n}'}{\mathcal{H}_{j}^n}=\mathbf{0}. \end{align*} Note that \begin{align*} &\parens{\Lambda_{\star}^{1/2}\parens{2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d}\Lambda_{\star}^{1/2}}^{(l_1,l_2)}\\ &=2\parens{\sum_{k_1=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,k_1)}\parens{\epsilon_{ih_{n}}^{(k_1)}}} \parens{\sum_{k_2=1}^{d}\parens{\epsilon_{ih_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_2)}}\\ &\qquad+\parens{\sum_{k_1=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,k_1)}\parens{\epsilon_{ih_{n}}^{(k_1)}}} \parens{\sum_{k_2=1}^{d}\parens{\epsilon_{(i-1)h_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_2)}}\\ &\qquad+\parens{\sum_{k_1=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,k_1)}\parens{\epsilon_{(i-1)h_{n}}^{(k_1)}}} \parens{\sum_{k_2=1}^{d}\parens{\epsilon_{ih_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_2)}}\\ &\qquad-2\Lambda_{\star}^{(l_1,l_2)} \end{align*} and hence \begin{align*} &\tilde{D}_{ih_{n},n}\parens{(l_1,l_2),(l_3,l_4)}\\ 
&:=\mathbf{E}\lcrotchet{\parens{\Lambda_{\star}^{1/2}\parens{2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d}\Lambda_{\star}^{1/2}}^{(l_1,l_2)}}\\ &\qquad\times\rcrotchet{\left.\parens{\Lambda_{\star}^{1/2}\parens{2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d}\Lambda_{\star}^{1/2}}^{(l_3,l_4)}\right|\mathcal{H}_{(i-1)h_{n}}^n}\\ &=4\sum_{k=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,k)}\parens{\Lambda_{\star}^{1/2}}^{(l_2,k)}\parens{\Lambda_{\star}^{1/2}}^{(l_3,k)}\parens{\Lambda_{\star}^{1/2}}^{(l_4,k)} \parens{\E{\abs{\epsilon_0^{(k)}}^4}-3}\\ &\quad+4\parens{\Lambda_{\star}^{(l_1,l_3)}\Lambda_{\star}^{(l_2,l_4)}+\Lambda_{\star}^{(l_1,l_4)} \Lambda_{\star}^{(l_2,l_3)}}\\ &\quad+\Lambda_{\star}^{(l_1,l_3)}\parens{\sum_{k_2=1}^{d}\parens{\epsilon_{(i-1)h_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_2)}}\parens{\sum_{k_2=1}^{d}\parens{\epsilon_{(i-1)h_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_4)}}\\ &\quad+\Lambda_{\star}^{(l_1,l_4)}\parens{\sum_{k_2=1}^{d}\parens{\epsilon_{(i-1)h_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_2)}} \parens{\sum_{k_1=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_3,k_1)}\parens{\epsilon_{(i-1)h_{n}}^{(k_1)}}}\\ &\quad+\Lambda_{\star}^{(l_2,l_3)}\parens{\sum_{k_1=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,k_1)}\parens{\epsilon_{(i-1)h_{n}}^{(k_1)}}}\parens{\sum_{k_2=1}^{d}\parens{\epsilon_{(i-1)h_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_4)}}\\ &\quad+\Lambda_{\star}^{(l_2,l_4)}\parens{\sum_{k_1=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,k_1)}\parens{\epsilon_{(i-1)h_{n}}^{(k_1)}}}\parens{\sum_{k_1=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_3,k_1)}\parens{\epsilon_{(i-1)h_{n}}^{(k_1)}}} \end{align*} and \begin{align*} 
\CE{\parens{\sum_{k_2=1}^{d}\parens{\epsilon_{(i-1)h_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_2)}}\parens{\sum_{k_2=1}^{d}\parens{\epsilon_{(i-1)h_{n}}^{(k_2)}}\parens{\Lambda_{\star}^{1/2}}^{(k_2,l_4)}}}{\mathcal{H}_{(i-2)h_{n}}^n} = \Lambda_{\star}^{(l_2,l_4)}. \end{align*} These lead to \begin{align*} \frac{n}{k_{n}^2}\sum_{j=1}^{k_{n}-2}\CE{\parens{D_{j,n}'}^{(l_1,l_2)}\parens{D_{j,n}'}^{(l_3,l_4)}}{\mathcal{H}_j^n} &=\frac{n}{4n^2}\sum_{j=1}^{k_{n}-2}\CE{\sum_{i=0}^{p_{n}-1}\tilde{D}_{j\Delta_{n}+ih_{n},n}\parens{(l_1,l_2),(l_3,l_4)}}{\mathcal{H}_j^n}\\ &\cp W_1^{(l_1,l_2),(l_3,l_4)}. \end{align*} Then \begin{align*} \frac{n}{k_{n}^2}\sum_{j=1}^{k_{n}-2}\CE{\parens{D_{j,n}'}^{\otimes2}}{\mathcal{H}_{j}^n}\cp W_1. \end{align*} Finally we check \begin{align*} \E{\abs{\frac{n^2}{k_{n}^4}\sum_{j=1}^{k_{n}-2}\CE{\norm{D_{j,n}'}^4}{\mathcal{H}_{j}^n}}}\to 0. \end{align*} Note that $\tuborg{\epsilon_{ih_{n}}}$ are i.i.d.; when we denote \begin{align*} M_i=2\parens{\epsilon_{ih_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i-1)h_{n}}}^T+\parens{\epsilon_{(i-1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T-2I_d, \end{align*} then \begin{align*} \E{\norm{\sum_{i=1}^{p_{n}}M_i}^4}&=\E{\sum_{i_1}\sum_{i_2}\parens{\tr\parens{M_{i_1}M_{i_2}}}^2+\sum_{i_1}\sum_{i_2}\sum_{i_3}\sum_{i_4}\tr\parens{M_{i_1}M_{i_2}M_{i_3}M_{i_4}}}\\ &\le Cp_{n}^2. \end{align*} These evaluations verify the result.\\ \noindent\textbf{(Step 2): } \begin{comment} Corollary \ref*{cor738} gives \begin{align*} \parens{\lm{Y}{j+1}-\lm{Y}{j}}^{\otimes 2} &=\parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}^{\otimes 2} +\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^{\otimes 2}\\ &\qquad+\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^{\otimes2}\\ &\qquad+\parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^T\\ 
&\qquad+\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}\parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}^T\\ &\qquad+\parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}} \parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T\\ &\qquad+\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}} \parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}^T\\ &\qquad+\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}} \parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T\\ &\qquad+\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}} \parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^T. \end{align*} We define the random variable $U_{n}(\lambda)$ using Corollary \ref*{cor738} such that \begin{align*} U_{n}(\kappa)&:=\sqrt{k_{n}}\parens{\bar{Q}_{n}(B_{\kappa}(\cdot))-\frac{2}{3}\bar{M}_{n}\parens{\ip{A_{\kappa}(\cdot)}{ A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}}}\\ &=\frac{1}{\sqrt{k_{n}}\Delta_{n}}\sum_{j=1}^{k_{n}-2} \ip{B_{\kappa}(\lm{Y}{j-1})}{\parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}^{\otimes 2}}\\ &\qquad+\frac{1}{\sqrt{k_{n}}\Delta_{n}}\sum_{j=1}^{k_{n}-2} \ip{B_{\kappa}(\lm{Y}{j-1})}{\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^{\otimes 2}}\\ &\qquad+\frac{1}{\sqrt{k_{n}}\Delta_{n}}\sum_{j=1}^{k_{n}-2} \ip{B_{\kappa}(\lm{Y}{j-1})}{\parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^{\otimes2}}\\ &\qquad+\frac{2}{\sqrt{k_{n}}\Delta_{n}}\sum_{j=1}^{k_{n}-2} \ip{\bar{B}_{\kappa}(\lm{Y}{j-1})}{ \parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^T}\\ &\qquad+\frac{2}{\sqrt{k_{n}}\Delta_{n}}\sum_{j=1}^{k_{n}-2} \ip{\bar{B}_{\kappa}(\lm{Y}{j-1})}{ \parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}} \parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T}\\ &\qquad+\frac{2}{\sqrt{k_{n}}\Delta_{n}}\sum_{j=1}^{k_{n}-2} 
\ip{\bar{B}_{\kappa}(\lm{Y}{j-1})}{ \parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}} \parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T}\\ &\qquad-\frac{2}{3\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\ip{A_{\kappa}(\lm{Y}{j-1})} {A(\lm{Y}{j-1},\alpha^{\star})+3\Delta_{n}^{\frac{2-\tau}{\tau-1}}\Lambda_{\star}}, \end{align*} where $\bar{B}_{\kappa}:=\frac{1}{2}\parens{B_{\kappa}+B_{\kappa}^T}$. \end{comment} Let us define $u_{j,n}^{(l)}(\kappa),\ l=1,\ldots,7$, such that \begin{align*} u_{j,n}^{(1)}(\kappa) &:=\ip{B_{\kappa}(\lm{Y}{j-1})}{a(X_{j\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}^{\otimes 2} -\frac{2}{3}I_r}a(X_{j\Delta_{n}})^{T}},\\ u_{j,n}^{(2)}(\kappa) &:=\frac{2}{3}\ip{B_{\kappa}(\lm{Y}{j-1})}{A(X_{j\Delta_{n}})-A(\lm{Y}{j-1})},\\ u_{j,n}^{(3)}(\kappa) &:=\ip{B_{\kappa}(\lm{Y}{j-1})}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}^{\otimes 2}-2\Delta_{n}^{\frac{2-\tau}{\tau-1}}I_d}\Lambda_{\star}^{1/2}},\\ u_{j,n}^{(4)}(\kappa) &:=\frac{2}{\Delta_{n}}\ip{\bar{B}_{\kappa}(\lm{Y}{j-1})}{ \parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}} \parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T},\\ u_{j,n}^{(5)}(\kappa) &:=\frac{1}{\Delta_{n}}\ip{B_{\kappa}(\lm{Y}{j-1})}{\parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}^{\otimes 2}},\\ u_{j,n}^{(6)}(\kappa) &:=\frac{2}{\Delta_{n}}\ip{\bar{B}_{\kappa}(\lm{Y}{j-1})}{ \parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^T},\\ u_{j,n}^{(7)}(\kappa) &:=\frac{2}{\Delta_{n}} \ip{\bar{B}_{\kappa}(\lm{Y}{j-1})}{ \parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}} \parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T}, \end{align*} where $\bar{B}_{\kappa}:=\frac{1}{2}\parens{B_{\kappa}+B_{\kappa}^T}$. 
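For the reader's convenience we record the identity behind the centering by $\frac{2}{3}I_r$ in $u_{j,n}^{(1)}(\kappa)$; it is a direct consequence of Lemma \ref*{lem732} (cf.~the computation of $\CE{q_{3j,n}^{(1)}(\vartheta)}{\mathcal{H}_{3j}^{n}}$ in the proof of Theorem \ref{thm743}): \begin{align*} \CE{\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}^{\otimes 2}}{\mathcal{H}_{j}^{n}}=\parens{\frac{2}{3}+\frac{1}{3p_{n}^2}}\Delta_{n}I_r, \end{align*} so that $\CE{u_{j,n}^{(1)}(\kappa)}{\mathcal{H}_{j}^{n}}=\frac{1}{3p_{n}^{2}}\ip{B_{\kappa}(\lm{Y}{j-1})}{A(X_{j\Delta_{n}})}$, which is consistent with the coefficients of $\tilde{s}_{j,n}^{(1)}(\kappa)$ appearing below since $\parens{\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}+\parens{-\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}=\frac{1}{3p_{n}^2}$. 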
Then because of Corollary \ref{cor738}, we obtain for $U_{n}(\kappa):=\sqrt{k_{n}}\parens{\bar{Q}_{n}\parens{B_{\kappa}(\cdot)}-\frac{2}{3}\bar{M}_{n}\parens{\ip{B_{\kappa}(\cdot)}{A_{n}^{\tau}\parens{\cdot,\alpha^{\star},\Lambda_{\star}}}}}$ the decomposition \begin{align*} U_{n}(\kappa)=\sum_{l=1}^{7}U_{n}^{(l)}(\kappa),\ U_{n}^{(l)}(\kappa)=\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}u_{j,n}^{(l)}(\kappa). \end{align*} With respect to $U_{n}^{(1)}(\kappa)$, we have \begin{align*} U_{n}^{(1)}(\kappa)=\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}u_{j,n}^{(1)}(\kappa) =\frac{1}{\sqrt{k_{n}}}\sum_{j=2}^{k_{n}-2}s_{j,n}^{(1)}(\kappa)+ \frac{1}{\sqrt{k_{n}}}\sum_{j=2}^{k_{n}-2}\tilde{s}_{j,n}^{(1)}(\kappa)+o_P(1), \end{align*} where \begin{align*} s_{j,n}^{(1)}(\kappa)&=\ip{B_{\kappa}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}}^{\otimes 2}-m_{n}I_r}a(X_{j\Delta_{n}})^{T}}\\ &\qquad+\ip{B_{\kappa}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}'}^{\otimes 2}-m_{n}'I_r}a(X_{(j-1)\Delta_{n}})^{T}}\\ &\qquad+2\ip{\bar{B}_{\kappa}(\lm{Y}{j-2})}{a(X_{(j-1)\Delta_{n}}) \parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j,n}\parens{\zeta_{j+1,n}'}^T}}a(X_{(j-1)\Delta_{n}})^{T}},\\ \tilde{s}_{j,n}^{(1)}(\kappa)&=\parens{\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}} \ip{B_{\kappa}(\lm{Y}{j-1})}{A(X_{j\Delta_{n}})}\\ &\qquad+\parens{-\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}\ip{B_{\kappa}(\lm{Y}{j-2})}{A(X_{(j-1)\Delta_{n}})}. \end{align*} Actually the second term converges to 0 in $L^1$, and hence \begin{comment} \begin{align*} \E{\abs{\frac{1}{\sqrt{k_{n}}}\sum_{j=2}^{k_{n}-2}\tilde{s}_{j,n}^{(1)}(\kappa)}} \to0. \end{align*} Hence \end{comment} \begin{align*} U_{n}^{(1)}(\kappa)=\frac{1}{\sqrt{k_{n}}}\sum_{j=2}^{k_{n}-2}s_{j,n}^{(1)}(\kappa)+o_P(1) \end{align*} and it is enough to examine the first term of the right-hand side. Firstly, Lemma \ref*{lem732} leads to \begin{align*} \CE{s_{j,n}^{(1)}(\kappa)}{\mathcal{H}_j^n}=0. 
\end{align*} Note the fact that for $\Re^r$-valued random vectors $\mathbf{x}$ and $\mathbf{y}$ such that \begin{align*} \crotchet{\begin{matrix} \mathbf{x}\\ \mathbf{y} \end{matrix}}\sim N\parens{\mathbf{0}, \crotchet{\begin{matrix} \sigma_{11}I_r & \sigma_{12}I_r\\ \sigma_{12}I_r & \sigma_{22}I_r \end{matrix}}}, \end{align*} where $\sigma_{11}>0$, $\sigma_{22}>0$ and $\abs{\sigma_{12}}^2\le \sigma_{11}\sigma_{22}$, it holds for any $\Re^r\otimes\Re^r$-valued matrix $M$, \begin{align*} \E{\mathbf{y}\mathbf{x}^TM\mathbf{x}\mathbf{y}^T}=\sigma_{12}^2\parens{M+M^T}+\sigma_{11}\sigma_{22}\tr\parens{M}I_r \end{align*} and also the fact that for any square matrices $A$ and $B$ whose dimensions coincide, \begin{align*} \tr\parens{AB}+\tr\parens{AB^T}=2\tr\parens{\bar{A}\bar{B}}, \end{align*} where $\bar{A}=\parens{A+A^T}/2$ and $\bar{B}=\parens{B+B^T}/2$. For all $\kappa_1,\ \kappa_2$, \begin{comment} \begin{align*} &\mathbf{E}\left[\ip{B_{\kappa_1}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}}^{\otimes 2}-m_{n}I_r}a(X_{j\Delta_{n}})^{T}}\right.\\ &\qquad\left.\left.\times\ip{B_{\kappa_2}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}}^{\otimes 2}-m_{n}I_r}a(X_{j\Delta_{n}})^{T}}\right|\mathcal{H}_j^n\right]\\ &=m_{n}^2\tr\parens{B_{\kappa_1}(\lm{Y}{j-1})A(X_{j\Delta_{n}})B_{\kappa_2}(\lm{Y}{j-1})A(X_{j\Delta_{n}})}\\ &\qquad+m_{n}^2\tr\parens{B_{\kappa_1}(\lm{Y}{j-1})^TA(X_{j\Delta_{n}})B_{\kappa_2}(\lm{Y}{j-1})A(X_{j\Delta_{n}})} \end{align*} and \begin{align*} &\mathbf{E}\left[\ip{B_{\kappa_1}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}'}^{\otimes 2}-m_{n}'I_r}a(X_{(j-1)\Delta_{n}})^{T}}\right.\\ &\qquad\left.\left.\times\ip{B_{\kappa_2}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}'}^{\otimes 2}-m_{n}'I_r}a(X_{(j-1)\Delta_{n}})^{T}}\right|\mathcal{H}_j^n\right]\\ 
&=\parens{m_{n}'}^2\tr\parens{B_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})B_{\kappa_2}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})}\\ &\qquad+\parens{m_{n}'}^2\tr\parens{B_{\kappa_1}(\lm{Y}{j-2})^TA(X_{(j-1)\Delta_{n}})B_{\kappa_2}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})} \end{align*} and \begin{align*} &\mathbf{E}\left[2\ip{\bar{B}_{\kappa_1}(\lm{Y}{j-2})}{a(X_{(j-1)\Delta_{n}}) \parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j,n}\parens{\zeta_{j+1,n}'}^T}}a(X_{(j-1)\Delta_{n}})^{T}}\right.\\ &\qquad\left.\left.\times2\ip{\bar{B}_{\kappa_2}(\lm{Y}{j-2})}{a(X_{(j-1)\Delta_{n}}) \parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j,n}\parens{\zeta_{j+1,n}'}^T}}a(X_{(j-1)\Delta_{n}})^{T}}\right|\mathcal{H}_j^n\right]\\ &=\frac{4m_{n}'}{\Delta_{n}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^{T}\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}}) \bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}}) \parens{\zeta_{j,n}} \end{align*} and \begin{align*} &\mathbf{E}\left[\ip{B_{\kappa_1}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}}^{\otimes 2}-m_{n}I_r}a(X_{j\Delta_{n}})^{T}}\right.\\ &\qquad\left.\left.\times\ip{B_{\kappa_2}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}'}^{\otimes 2}-m_{n}'I_r}a(X_{(j-1)\Delta_{n}})^{T}}\right|\mathcal{H}_j^n\right]\\ &=\chi_{n}^2 \tr\left\{a(X_{j\Delta_{n}})^{T}B_{\kappa_1}(\lm{Y}{j-1}) a(X_{j\Delta_{n}})a(X_{(j-1)\Delta_{n}})^{T}B_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\right\}\\ &\qquad+\chi_{n}^2 \tr\left\{a(X_{j\Delta_{n}})^{T}B_{\kappa_1}(\lm{Y}{j-1})^T a(X_{j\Delta_{n}})a(X_{(j-1)\Delta_{n}})^{T}B_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\right\} \end{align*} and \begin{align*} &\mathbf{E}\left[\ip{B_{\kappa_1}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}}^{\otimes 2}-m_{n}I_r}a(X_{j\Delta_{n}})^{T}}\right.\\ &\qquad\times\left.\left.2\ip{\bar{B}_{\kappa_2}(\lm{Y}{j-2})}{a(X_{(j-1)\Delta_{n}}) 
\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j,n}\parens{\zeta_{j+1,n}'}^T}}a(X_{(j-1)\Delta_{n}})^{T}}\right|\mathcal{H}_j^n\right]\\ &=0 \end{align*} and \begin{align*} &\mathbf{E}\left[\ip{B_{\kappa_1}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}'}^{\otimes 2}-m_{n}'I_r}a(X_{(j-1)\Delta_{n}})^{T}}\right.\\ &\qquad\times\left.\left.2\ip{\bar{B}_{\kappa_2}(\lm{Y}{j-2})}{a(X_{(j-1)\Delta_{n}}) \parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j,n}\parens{\zeta_{j+1,n}'}^T}}a(X_{(j-1)\Delta_{n}})^{T}}\right|\mathcal{H}_j^n\right]\\ &=0. \end{align*} Hence \end{comment} we obtain \begin{align*} &\CE{\parens{s_{j,n}^{(1)}(\kappa_1)}\parens{s_{j,n}^{(1)}(\kappa_2)}}{\mathcal{H}_j^n}\\ &=m_{n}^2\tr\parens{B_{\kappa_1}(\lm{Y}{j-1})A(X_{j\Delta_{n}})B_{\kappa_2}(\lm{Y}{j-1})A(X_{j\Delta_{n}})}\\ &\qquad+m_{n}^2\tr\parens{B_{\kappa_1}(\lm{Y}{j-1})^TA(X_{j\Delta_{n}})B_{\kappa_2}(\lm{Y}{j-1})A(X_{j\Delta_{n}})}\\ &\qquad+\parens{m_{n}'}^2\tr\parens{B_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})B_{\kappa_2}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})}\\ &\qquad+\parens{m_{n}'}^2\tr\parens{B_{\kappa_1}(\lm{Y}{j-2})^TA(X_{(j-1)\Delta_{n}})B_{\kappa_2}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})}\\ &\qquad+\frac{4m_{n}'}{\Delta_{n}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^{T}\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}}) \bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}}) \parens{\zeta_{j,n}}\\ &\qquad+\chi_{n}^2 \tr\left\{a(X_{j\Delta_{n}})^{T}B_{\kappa_1}(\lm{Y}{j-1}) a(X_{j\Delta_{n}})a(X_{(j-1)\Delta_{n}})^{T}B_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\right\}\\ &\qquad+\chi_{n}^2 \tr\left\{a(X_{j\Delta_{n}})^{T}B_{\kappa_1}(\lm{Y}{j-1})^T a(X_{j\Delta_{n}})a(X_{(j-1)\Delta_{n}})^{T}B_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\right\}\\ &\qquad+\chi_{n}^2 \tr\left\{a(X_{j\Delta_{n}})^{T}B_{\kappa_2}(\lm{Y}{j-1}) a(X_{j\Delta_{n}})a(X_{(j-1)\Delta_{n}})^{T}B_{\kappa_1}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\right\}\\ &\qquad+\chi_{n}^2 
\tr\left\{a(X_{j\Delta_{n}})^{T}B_{\kappa_2}(\lm{Y}{j-1})^T a(X_{j\Delta_{n}})a(X_{(j-1)\Delta_{n}})^{T}B_{\kappa_1}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\right\}.
\end{align*}
We have the following evaluations:
\begin{align*}
&\frac{1}{k_{n}}\sum_{j=2}^{k_{n}}\CE{\frac{4m_{n}'}{\Delta_{n}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^{T}\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}}) \bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}}) \parens{\zeta_{j,n}}}{\mathcal{H}_j^n}\\
&\cp \frac{4}{9}\nu_0\parens{\tr\parens{\bar{B}_{\kappa_1}(\cdot)A(\cdot)\bar{B}_{\kappa_2}(\cdot)A(\cdot)}}
\end{align*}
and
\begin{align*}
&\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}}\CE{\abs{\frac{4m_{n}'}{\Delta_{n}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^{T}\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}}) \bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}}) \parens{\zeta_{j,n}}}^2}{\mathcal{H}_j^n}\\
&=o_P(1).
\end{align*}
Then we have
\begin{align*}
\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{\parens{s_{j,n}^{(1)}(\kappa_1)}\parens{s_{j,n}^{(1)}(\kappa_2)}}{\mathcal{H}_j^n} \cp\nu_0\parens{\tr\parens{\bar{B}_{\kappa_1}(\cdot)A(\cdot)\bar{B}_{\kappa_2}(\cdot)A(\cdot)}}.
\end{align*}
Now let us consider the conditional fourth moment.
\begin{comment}
It can be evaluated such that
\begin{align*}
\CE{\parens{s_{j,n}^{(1)}(\kappa)}^4}{\mathcal{H}_j^n} &\le C\norm{a(X_{j\Delta_{n}})^{T}B_{\kappa}(\lm{Y}{j-1})a(X_{j\Delta_{n}})}^4\\
&\qquad+C\norm{a(X_{(j-1)\Delta_{n}})^{T}B_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})}^4\\
&\qquad +C\norm{a(X_{(j-1)\Delta_{n}})^{T}\bar{B}_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})}^4\frac{1}{\Delta_{n}^2}\norm{\zeta_{j,n}}^4
\end{align*}
and then
\begin{align*}
\E{\abs{\CE{\parens{s_{j,n}^{(1)}(\kappa)}^4}{\mathcal{H}_j^n}}}\le C.
\end{align*}
Therefore,
\end{comment}
It is easy to see
\begin{align*}
\E{\abs{\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}-2}\CE{\parens{s_{j,n}^{(1)}(\kappa)}^4}{\mathcal{H}_j^n}}}\to0.
\end{align*}
\begin{comment}
Next we consider $U_{n}^{(2)}(\kappa)=o_P(1)$. Because of Corollary \ref*{cor736},
\begin{align*}
\E{\abs{\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{u_{j,k}^{(2)}(\kappa)}{\mathcal{H}_j^n}}} \to0
\end{align*}
and
\begin{align*}
\E{\abs{\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2}\CE{\abs{u_{j,k}^{(2)}(\kappa)}^2}{\mathcal{H}_j^n}}}\to0.
\end{align*}
Hence
\end{comment}
For $U_{n}^{(2)}(\kappa)$, Lemma 9 in \citep{GeJ93} and Corollary \ref*{cor736} show that $U_{n}^{(2)}(\kappa)=o_P(1)$.
Next we examine the asymptotic behaviour of $U_{n}^{(3)}(\kappa)$.
As with $u_{j,n}^{(1)}(\kappa)$, $u_{j,n}^{(3)}(\kappa)$ contains the $\mathcal{H}_{j+1}^{n}$-measurable $\lm{\epsilon}{j}$ and the $\mathcal{H}_{j+2}^{n}$-measurable $\lm{\epsilon}{j+1}$. Hence we rewrite the summation as follows:
\begin{align*}
U_{n}^{(3)}(\kappa)=\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}u_{j,n}^{(3)}(\kappa) =\frac{1}{\sqrt{k_{n}}}\sum_{j=2}^{k_{n}-2}s_{j,n}^{(3)}(\kappa)+o_P(1),
\end{align*}
where
\begin{align*}
s_{j,n}^{(3)}(\kappa)&:=\ip{\parens{B_{\kappa}(\lm{Y}{j-2})+B_{\kappa}(\lm{Y}{j-1})}}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}-\Delta_{n}^{\frac{2-\tau}{\tau-1}}I_d}\Lambda_{\star}^{1/2}}\\
&\qquad+\ip{B_{\kappa}(\lm{Y}{j-2})}{\Lambda_{\star}^{1/2} \parens{-\frac{2}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}\parens{\lm{\epsilon}{j}}^T}\Lambda_{\star}^{1/2}}.
\end{align*}
We can obtain
\begin{align*}
\CE{s_{j,n}^{(3)}(\kappa)}{\mathcal{H}_j^n} =0.
\end{align*} For all $\kappa_1$ and $\kappa_2$, \begin{comment} \begin{align*} &s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(3)}(\kappa_2)\\ &=\ip{\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}}\Lambda_{\star}^{1/2}}\\ &\qquad\times\ip{\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}}\Lambda_{\star}^{1/2}}\\ &\quad-\Delta_{n}^{\frac{2-\tau}{\tau-1}}\ip{\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}}\Lambda_{\star}^{1/2}} \ip{\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}}{\Lambda_{\star}}\\ &\quad-\Delta_{n}^{\frac{2-\tau}{\tau-1}}\ip{\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}}{\Lambda_{\star}}\\ &\qquad\times\ip{\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}-\Delta_{n}^{\frac{2-\tau}{\tau-1}}I_d}\Lambda_{\star}^{1/2}}\\ &\quad+\ip{\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}}\Lambda_{\star}^{1/2}}\\ &\quad\qquad\times\ip{B_{\kappa_2}(\lm{Y}{j-2})}{\Lambda_{\star}^{1/2} \parens{-\frac{2}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}\parens{\lm{\epsilon}{j}}^T}\Lambda_{\star}^{1/2}}\\ &\quad-\Delta_{n}^{\frac{2-\tau}{\tau-1}}\ip{\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}}{\Lambda_{\star}} \ip{B_{\kappa_2}(\lm{Y}{j-2})}{\Lambda_{\star}^{1/2} \parens{-\frac{2}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}\parens{\lm{\epsilon}{j}}^T}\Lambda_{\star}^{1/2}}\\ &\quad+\ip{B_{\kappa_1}(\lm{Y}{j-2})}{\Lambda_{\star}^{1/2} \parens{-\frac{2}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}\parens{\lm{\epsilon}{j}}^T}\Lambda_{\star}^{1/2}}\\ 
&\quad\qquad\times\ip{\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}}\Lambda_{\star}^{1/2}}\\ &\quad-\Delta_{n}^{\frac{2-\tau}{\tau-1}}\ip{B_{\kappa_1}(\lm{Y}{j-2})}{\Lambda_{\star}^{1/2} \parens{-\frac{2}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}\parens{\lm{\epsilon}{j}}^T}\Lambda_{\star}^{1/2}} \ip{\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}}{\Lambda_{\star}}\\ &\quad+\frac{4}{\Delta_{n}^2}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}B_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j}} \parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}\parens{B_{\kappa_2}(\lm{Y}{j-2})}^T\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}. \end{align*} Hence we can evaluate \end{comment} \begin{align*} &\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(3)}(\kappa_2)}{\mathcal{H}_j^n}\\ &=\frac{1}{\Delta_{n}^2}\mathbf{E}\left[\parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2} \parens{\lm{\epsilon}{j}}^{\otimes 2}\right.\\ &\hspace{3cm}\left.\left.\times\Lambda_{\star}^{1/2}\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2} \parens{\lm{\epsilon}{j}}\right|\mathcal{H}_j^n\right]\\ &\qquad-\parens{\Delta_{n}^{\frac{2-\tau}{\tau-1}}}^2\ip{\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}}{\Lambda_{\star}} \ip{\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}}{\Lambda_{\star}}\\ &\qquad+\frac{4\Delta_{n}^{\frac{2-\tau}{\tau-1}}}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}B_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\parens{B_{\kappa_2}(\lm{Y}{j-2})}^T\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}. 
\end{align*} Note the fact that for any $\Re^d\times\Re^d$-valued matrix $A$ \begin{align*} \E{\parens{\lm{\epsilon}{j}}^{\otimes2}A\parens{\lm{\epsilon}{j}}^{\otimes2}} &=\frac{1}{p_{n}^2}\parens{2\bar{A}}+\frac{1}{p_{n}^2}\tr\parens{A}I_d +\crotchet{\parens{\frac{\E{\parens{\epsilon_{0}^{(i)}}^4}-3}{p_{n}^3}}A^{(i,i)}}_{i,i}, \end{align*} where $\bar{A}=\parens{A+A^T}/2$. Therefore, \begin{align*} &\mathbf{E}\left[\parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2} \parens{\lm{\epsilon}{j}}^{\otimes 2}\right.\\ &\qquad\qquad\left.\left.\times\Lambda_{\star}^{1/2}\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2} \parens{\lm{\epsilon}{j}}\right|\mathcal{H}_j^n\right]\\ &=\frac{2}{p_{n}^2}\tr\tuborg{\Lambda_{\star}^{1/2}\parens{\bar{B}_{\kappa_1}(\lm{Y}{j-2})+\bar{B}_{\kappa_1}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}\Lambda_{\star}^{1/2}\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}}\\ &\qquad+\frac{1}{p_{n}^2}\tr\tuborg{\Lambda_{\star}^{1/2}\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}} \tr\tuborg{\Lambda_{\star}^{1/2}\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}}\\ &\qquad+\sum_{i=1}^{d}\parens{\frac{\E{\parens{\epsilon_{0}^{(i)}}^4}-3}{p_{n}^3}}\parens{\Lambda_{\star}^{1/2}\parens{B_{\kappa_1}(\lm{Y}{j-2})+B_{\kappa_1}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}}^{(i,i)}\\ &\qquad\qquad\qquad\times\parens{\Lambda_{\star}^{1/2}\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}}^{(i,i)}. 
\end{align*}
Hence
\begin{align*}
&\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(3)}(\kappa_2)}{\mathcal{H}_j^n}\\
&=\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\frac{2}{p_{n}^2\Delta_{n}^2}\tr\tuborg{\parens{\bar{B}_{\kappa_1}(\lm{Y}{j-2})+\bar{B}_{\kappa_1}(\lm{Y}{j-1})}\Lambda_{\star}\parens{B_{\kappa_2}(\lm{Y}{j-2})+B_{\kappa_2}(\lm{Y}{j-1})}\Lambda_{\star}}\\
&\qquad+\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\frac{4\Delta_{n}^{\frac{2-\tau}{\tau-1}}}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}B_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\parens{B_{\kappa_2}(\lm{Y}{j-2})}^T\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}\\
&\qquad+o_P(1)\\
&\cp
\begin{cases}
0 & \text{ if }\tau\in(1,2),\\
12\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)\Lambda_{\star}}} & \text{ if }\tau=2.
\end{cases}
\end{align*}
\begin{comment}
We have
\begin{align*}
&\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{\frac{4\Delta_{n}^{\frac{2-\tau}{\tau-1}}}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}B_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\parens{B_{\kappa_2}(\lm{Y}{j-2})}^T\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}} {\mathcal{H}_j^n}\\
&\cp\begin{cases}
0 & \text{ if }\tau\in(1,2)\\
4\tr\tuborg{B_{\kappa_1}(\cdot)\Lambda_{\star}B_{\kappa_2}(\cdot)\Lambda_{\star}} &\text{ if }\tau=2
\end{cases}
\end{align*}
and
\begin{align*}
\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}-2}\frac{16\parens{\Delta_{n}^{\frac{2-\tau}{\tau-1}}}^2}{\Delta_{n}^2}\CE{\parens{\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}B_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\parens{B_{\kappa_2}(\lm{Y}{j-2})}^T\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}}^2} {\mathcal{H}_j^n}=o_P(1).
\end{align*}
To sum up if $\tau\in(1,2)$ we obtain
\begin{align*}
&\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(3)}(\kappa_2)}{\mathcal{H}_j^n}\cp 0
\end{align*}
and if $\tau=2$
\begin{align*}
&\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(3)}(\kappa_2)}{\mathcal{H}_j^n}\\
&\cp {2}\nu_0\parens{\tr\tuborg{\parens{\bar{B}_{\kappa_1}(\cdot)+\bar{B}_{\kappa_1}(\cdot)}\Lambda_{\star}\parens{B_{\kappa_2}(\cdot) +B_{\kappa_2}(\cdot)}\Lambda_{\star}}}\\
&\qquad+4\nu_0\parens{\tr\tuborg{B_{\kappa_1}(\cdot)\Lambda_{\star}B_{\kappa_2}(\cdot)\Lambda_{\star}}}\\
&=12\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)\Lambda_{\star}}}.
\end{align*}
\end{comment}
Therefore, $U_{n}^{(3)}(\kappa)=o_P(1)$ if $\tau\in(1,2)$.
The conditional fourth moment of $s_{j,n}^{(3)}(\kappa)$ can be evaluated easily as
\begin{comment}
\begin{align*}
\CE{\parens{s_{j,n}^{(3)}(\kappa)}^4}{\mathcal{H}_j^n} \le C\norm{\parens{B_{\kappa}(\lm{Y}{j-2})+B_{\kappa}(\lm{Y}{j-1})}}^4+\frac{C}{\Delta_{n}^2}\norm{B_{\kappa}(\lm{Y}{j-2})}^4\norm{\lm{\epsilon}{j-1}}^4
\end{align*}
and hence
\end{comment}
\begin{align*}
\E{\abs{\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}-2}\CE{\parens{s_{j,n}^{(3)}(\kappa)}^4}{\mathcal{H}_j^n}}} \to0
\end{align*}
using Lemma \ref{lem739}.
Next we examine the asymptotic behaviour of $U_{n}^{(4)}(\kappa)$.
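Before doing so, we record a simple consistency check of the moment identity for $\lm{\epsilon}{j}$ stated above: taking $d=1$ and $A=1$, the identity reduces to
\begin{align*}
\E{\parens{\lm{\epsilon}{j}}^4}=\frac{2}{p_{n}^2}+\frac{1}{p_{n}^2}+\frac{\E{\epsilon_{0}^4}-3}{p_{n}^3}
=\frac{3}{p_{n}^2}+\frac{\E{\epsilon_{0}^4}-3}{p_{n}^3},
\end{align*}
which is exactly the classical fourth-moment formula for the mean of $p_{n}$ i.i.d.\ standardized random variables; this confirms the constants in the identity.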
We again rewrite the summation as follows: \begin{align*} U_{n}^{(4)}(\kappa)&:=\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\frac{2}{\Delta_{n}}\ip{\bar{B}_{\kappa}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'} \parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}}\\ &=\frac{1}{\sqrt{k_{n}}}\sum_{j=2}^{k_{n}-2}s_{j,n}^{(4)}(\kappa)+o_P(1), \end{align*} where \begin{align*} s_{j,n}^{(4)}(\kappa)&:=\frac{2}{\Delta_{n}}\ip{\bar{B}_{\kappa}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j,n}} \parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}}\\ &\qquad-\frac{2}{\Delta_{n}}\ip{\bar{B}_{\kappa}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}} \parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}}\\ &\qquad+\frac{2}{\Delta_{n}}\ip{\bar{B}_{\kappa}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j+1,n}'} \parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}}\\ &\qquad-\frac{2}{\Delta_{n}}\ip{\bar{B}_{\kappa}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j+1,n}'} \parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}}. \end{align*} Hence it is enough to examine $\frac{1}{\sqrt{k_{n}}}\sum_{j=2}^{k_{n}-2}s_{j,n}^{(4)}(\kappa)$. It is obvious that \begin{align*} \CE{s_{j,n}^{(4)}(\kappa)}{\mathcal{H}_j^n}=0. 
\end{align*} For all $\kappa_1$ and $\kappa_2$, \begin{comment} \begin{align*} &\parens{\frac{2}{\Delta_{n}}}^{-2}\CE{s_{j,n}^{(4)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}\\ &=\frac{1}{p_{n}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j,n}}\\ &\qquad+\frac{m_{n}\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-1})A(X_{j\Delta_{n}})}\\ &\qquad+\frac{m_{n}'\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})}\\ &\qquad+m_{n}'\Delta_{n}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa_1}(\lm{Y}{j-2}) A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}\\ &\qquad-\frac{\chi_{n}\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{j\Delta_{n}})^T}\\ &\qquad-\frac{\chi_{n}\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_2}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_1}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{j\Delta_{n}})^T} \end{align*} and then \end{comment} \begin{align*} &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{s_{j,n}^{(4)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}\\ &=\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{1}{p_{n}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j,n}}\\ &\qquad+\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{m_{n}\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-1})A(X_{j\Delta_{n}})}\\ 
&\qquad+\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{m_{n}'\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})}\\ &\qquad+\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}m_{n}'\Delta_{n}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa_1}(\lm{Y}{j-2}) A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}\\ &\qquad-\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{\chi_{n}\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{j\Delta_{n}})^T}\\ &\qquad-\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{\chi_{n}\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_2}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_1}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{j\Delta_{n}})^T}\\ &\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ 4\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_2}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_1}(\cdot)A(\cdot)}} & \text{ if }\tau=2 \end{cases} \end{align*} \begin{comment} We examine the terms in right hand side respectively. 
With respect to the first term, \begin{align*} &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{1}{p_{n}}\CE{\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j,n}}}{\mathcal{H}_{j-1}^n}\\ &\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{4}{3}\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)A(\cdot)}} & \text{ if }\tau=2, \end{cases} \end{align*} and \begin{align*} &\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{4}\frac{1}{p_{n}^2}\CE{\abs{\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j,n}}}^2}{\mathcal{H}_{j-1}^n}\\ &\cp0; \end{align*} therefore \begin{align*} &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{1}{p_{n}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j,n}}\\ &\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{4}{3}\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)A(\cdot)}} & \text{ if }\tau=2 \end{cases} \end{align*} because of Lemma 9 in \citep{GeJ93}. 
The fourth term can be evaluated as follows: \begin{align*} &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{\parens{\frac{2}{\Delta_{n}}}^{2}m_{n}'\Delta_{n}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa_1}(\lm{Y}{j-2}) A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}}{\mathcal{H}_{j-1}^n}\\ &\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{4}{3}\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)A(\cdot)}} & \text{ if }\tau=2, \end{cases} \end{align*} and \begin{align*} &\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}-2}\CE{\parens{\frac{2}{\Delta_{n}}}^{4}\parens{m_{n}'\Delta_{n}}^2\abs{\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa_1}(\lm{Y}{j-2}) A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}}^2}{\mathcal{H}_{j-1}^n}\\ &\cp 0; \end{align*} then as the first term we obtain \begin{align*} &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}m_{n}'\Delta_{n}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa_1}(\lm{Y}{j-2}) A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}\\ &\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{4}{3}\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)A(\cdot)}} & \text{ if }\tau=2. 
\end{cases} \end{align*} As for the other terms, we have \begin{align*} &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{m_{n}\Delta_{n}}{p_{n}}\tr \tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-1})A(X_{j\Delta_{n}})}\\ &\quad\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{4}{3}\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)A(\cdot)}} & \text{ if }\tau=2, \end{cases}\\ &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{m_{n}'\Delta_{n}}{p_{n}}\tr \tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})}\\ &\quad\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{4}{3}\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)A(\cdot)}} & \text{ if }\tau=2, \end{cases}\\ &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{\chi_{n}\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_1}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{j\Delta_{n}})^T}\\ &\quad\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{2}{3}\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)A(\cdot)}} & \text{ if }\tau=2, \end{cases}\\ &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\parens{\frac{2}{\Delta_{n}}}^{2}\frac{\chi_{n}\Delta_{n}}{p_{n}}\tr\tuborg{\bar{B}_{\kappa_2}(\lm{Y}{j-1})\Lambda_{\star}\bar{B}_{\kappa_1}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{j\Delta_{n}})^T}\\ &\quad\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ \frac{2}{3}\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_2}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_1}(\cdot)A(\cdot)}} & \text{ if }\tau=2. \end{cases} \end{align*} Note the following fact that \begin{align*} \tr\tuborg{\bar{B}_{\kappa_2}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_1}(\cdot)A(\cdot)}=\tr\tuborg{\bar{B}_{\kappa_1}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_2}(\cdot)A(\cdot)}. 
\end{align*} In summary we obtain \begin{align*} &\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{s_{j,n}^{(4)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}\\ &\quad\cp \begin{cases} 0 & \text{ if }\tau\in(1,2)\\ 4\nu_0\parens{\tr\tuborg{\bar{B}_{\kappa_2}(\cdot)\Lambda_{\star}\bar{B}_{\kappa_1}(\cdot)A(\cdot)}} & \text{ if }\tau=2. \end{cases} \end{align*} \end{comment} which can be shown by using Lemma 9 in \citep{GeJ93}. Hence $U_{n}^{(4)}(\kappa)=o_P(1)$ if $\tau\in(1,2)$. The conditional fourth moment can be evaluated as \begin{align*} \E{\abs{\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}-2}\CE{\abs{s_{j,n}^{(4)}(\kappa)}^4}{\mathcal{H}_j^n}}}\to 0. \end{align*} For $l=5,6,7$, it is not difficult to see $U_{n}^{(l)}(\kappa)=o_P(1)$ using Lemma 9 in \citep{GeJ93}. \begin{comment} In the next place, we can see the $L^1$ convergence of $U_{n}^{(5)}(\kappa)$ such that \begin{align*} \E{\abs{U_{n}^{(5)}(\kappa)}} &\to0. \end{align*} To show $U_{n}^{(6)}(\kappa)=o_P(1)$ and $U_{n}^{(7)}(\kappa)=o_P(1)$, we use Lemma 9 in \citep{GeJ93}. We have \begin{align*} \E{\abs{\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{u_{j,n}^{(6)}(\kappa)}{\mathcal{H}_j^n}}}\to 0 \end{align*} because of Proposition \ref*{pro737}, and \begin{align*} \E{\abs{\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2}\CE{\abs{u_{j,n}^{(6)}(\kappa)}^2}{\mathcal{H}_j^n}}}\to0. \end{align*} Therefore, $U_{n}^{(6)}(\kappa)=o_P(1)$. We also obtain \begin{align*} \CE{u_{j,n}^{(7)}(\kappa)}{\mathcal{H}_j^n}=0 \end{align*} and \begin{align*} \E{\abs{\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2}\CE{\abs{u_{j,n}^{(7)}(\kappa)}^2}{\mathcal{H}_j^n}}}\to 0. \end{align*} Hence $U_{n}^{(7)}(\kappa)=o_P(1)$. \end{comment} Finally we see the covariance structure among $U_{n}^{(1)}$, $U_{n}^{(3)}$ and $U_{n}^{(4)}$ when $\tau=2$. 
Because of the independence of $\tuborg{w_t}$ and $\tuborg{\epsilon_{ih_{n}}}$, for all $\kappa_1$ and $\kappa_2$,
\begin{align*}
\CE{s_{j,n}^{(1)}(\kappa_1)s_{j,n}^{(3)}(\kappa_2)}{\mathcal{H}_j^n}=\CE{s_{j,n}^{(1)}(\kappa_1)}{\mathcal{H}_j^n}\CE{s_{j,n}^{(3)}(\kappa_2)}{\mathcal{H}_j^n}=0.
\end{align*}
With respect to the covariance between $U_{n}^{(1)}$ and $U_{n}^{(4)}$, the independence of $\tuborg{w_t}$ and $\tuborg{\epsilon_{ih_{n}}}$ leads to
\begin{comment}
\begin{align*}
\CE{s_{j,n}^{(1)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n} =-\frac{4m_{n}'}{\Delta_{n}}\parens{\zeta_{j,n}}^{T}a(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2}) \Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}.
\end{align*}
Hence
\end{comment}
\begin{align*}
&\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{s_{j,n}^{(1)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}\\
&=-\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\frac{4m_{n}'}{\Delta_{n}}\parens{\zeta_{j,n}}^{T}a(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2}) \Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}\\
&\cp 0
\end{align*}
which follows easily from Lemma 9 in \citep{GeJ93}.
\begin{comment}
We have
\begin{align*}
\CE{-\frac{4m_{n}'}{\Delta_{n}}\parens{\zeta_{j,n}}^{T}a(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2}) \Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}}{\mathcal{H}_{j-1}^n} =0
\end{align*}
and
\begin{align*}
&\CE{\abs{-\frac{4m_{n}'}{\Delta_{n}}\parens{\zeta_{j,n}}^{T}a(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2}) \Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j-1}}}^2}{\mathcal{H}_{j-1}^n}\\
&\le C\CE{\norm{a(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})\bar{B}_{\kappa_2}(\lm{Y}{j-2}) \Lambda_{\star}^{1/2}}^2}{\mathcal{H}_{j-1}^n}.
\end{align*} They verify \begin{align*} &\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}-2}\CE{\CE{s_{j,n}^{(1)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}}{\mathcal{H}_{j-1}^n}=0 \end{align*} and \begin{align*} \E{\abs{\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{\abs{\CE{s_{j,n}^{(1)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}}^2}{ \mathcal{H}_{j-1}^n}}} \to 0. \end{align*} Therefore, \begin{align*} \frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{s_{j,n}^{(1)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}=o_P(1) \end{align*} because of Lemma 9 in \citep{GeJ93}. \end{comment} The analogous discussion can be conducted between $U_{n}^{(3)}(\kappa_1)$ and $U_{n}^{(4)}(\kappa_2)$; it holds \begin{comment} Now we examine the covariance structure between $U_{n}^{(3)}(\kappa_1)$ and $U_{n}^{(4)}(\kappa_2)$. Again \begin{align*} \CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n} =-\frac{4}{p_{n}\Delta_{n}^2}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}B_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j,n}}. \end{align*} We also the conditional expectation with respect to $\mathcal{H}_{j-1}^n$. We have \begin{align*} \CE{\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}}{\mathcal{H}_{j-1}^n} =0 \end{align*} and \begin{align*} \CE{\abs{\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}}^2}{\mathcal{H}_{j-1}^n}\le \frac{C}{p_{n}^3\Delta_{n}^3}\norm{\Lambda_{\star}^{1/2}B_{\kappa_1}(\lm{Y}{j-2})\Lambda_{\star}\bar{B}_{\kappa_2}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})}^2. \end{align*} Hence \begin{align*} \frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}}{\mathcal{H}_{j-1}^n}=0 \end{align*} and \begin{align*} \E{\abs{\frac{1}{k_{n}^2}\sum_{j=2}^{k_{n}-2}\CE{\abs{\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}}^2 }{\mathcal{H}_{j-1}^n}}}\to 0. 
\end{align*}
They lead to
\end{comment}
\begin{align*}
\frac{1}{k_{n}}\sum_{j=2}^{k_{n}-2}\CE{s_{j,n}^{(3)}(\kappa_1)s_{j,n}^{(4)}(\kappa_2)}{\mathcal{H}_j^n}=o_P(1)
\end{align*}
by Lemma 9 in \citep{GeJ93}.\\
\noindent
\textbf{(Step 3): } Corollary \ref{cor738} leads to
\begin{comment}
We check the following decomposition
\begin{align*}
f^{\lambda}(\lm{Y}{j-1})\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j-1})} &=f^{\lambda}(\lm{Y}{j-1})\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(X_{j\Delta_{n}})}\\
&\qquad+f^{\lambda}(\lm{Y}{j-1})\Delta_{n}\parens{b(X_{j\Delta_{n}})-b(\lm{Y}{j-1})},
\end{align*}
and because of Corollary \ref*{cor738} we also have
\begin{align*}
f^{\lambda}(\lm{Y}{j-1})\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(X_{j\Delta_{n}})} &=f^{\lambda}(\lm{Y}{j-1})a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}\\
&\qquad+f^{\lambda}(\lm{Y}{j-1})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}+f^{\lambda}(\lm{Y}{j-1})e_{j,n},
\end{align*}
and then
\begin{align*}
f^{\lambda}(\lm{Y}{j-1})\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j-1})} &=f^{\lambda}(\lm{Y}{j-1})a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}\\
&\qquad+f^{\lambda}(\lm{Y}{j-1})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}+f^{\lambda}(\lm{Y}{j-1})e_{j,n}\\
&\qquad+f^{\lambda}(\lm{Y}{j-1})\Delta_{n}\parens{b(X_{j\Delta_{n}})-b(\lm{Y}{j-1})}.
\end{align*}
Here we can rewrite $\sqrt{k_{n}\Delta_{n}} \bar{D}_{n}\parens{f^{\lambda}(\cdot)}$ as
\end{comment}
\begin{align*}
\sqrt{k_{n}\Delta_{n}} \bar{D}_{n}\parens{f^{\lambda}(\cdot)}=\bar{R}_{n}^{(1)}(\lambda)+\bar{R}_{n}^{(2)}(\lambda)+\bar{R}_{n}^{(3)}(\lambda)+\bar{R}_{n}^{(4)}(\lambda),
\end{align*}
where
\begin{align*}
\bar{R}_{n}^{(1)}(\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}f^{\lambda}(\lm{Y}{j-1})a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'},\\
\bar{R}_{n}^{(2)}(\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}f^{\lambda}(\lm{Y}{j-1})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}},\\
\bar{R}_{n}^{(3)}(\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}f^{\lambda}(\lm{Y}{j-1})e_{j,n},\\
\bar{R}_{n}^{(4)}(\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}f^{\lambda}(\lm{Y}{j-1}) \Delta_{n}\parens{b(X_{j\Delta_{n}})-b(\lm{Y}{j-1})}.
\end{align*}
Hence it is enough to examine the asymptotic behaviour of the $\bar{R}_{n}^{(l)}$'s, and we first examine that of $\bar{R}_{n}^{(1)}$.
We define the $\mathcal{H}_{j+1}^n$-measurable random variable
\begin{align*}
r_{j,n}^{(1)}(\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}f^{\lambda}(\lm{Y}{j-1})a(X_{j\Delta_{n}})\zeta_{j+1,n}+\frac{1}{\sqrt{k_{n}\Delta_{n}}}f^{\lambda}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\zeta_{j+1,n}'
\end{align*}
and then
\begin{align*}
\bar{R}_{n}^{(1)}(\lambda)=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}f^{\lambda}(\lm{Y}{j-1})a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}=\sum_{j=2}^{k_{n}-2}r_{j,n}^{(1)}(\lambda)+o_P(1).
\end{align*}
Obviously
\begin{align*}
\CE{r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}=0.
\end{align*} With respect to the second moment, for all $\lambda_1,\lambda_2\in\{1,\ldots,m_2\}$, by Lemma \ref*{lem732}, \begin{align*} &\CE{\parens{r_{j,n}^{(1)}(\lambda_1)}\parens{r_{j,n}^{(1)}(\lambda_2)}}{\mathcal{H}_j^n}\\ &=\frac{1}{k_{n}\Delta_{n}}\Delta_{n}\parens{\frac{1}{3}+\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}f_{\lambda_1}(\lm{Y}{j-1})A(X_{j\Delta_{n}})\parens{f_{\lambda_2}(\lm{Y}{j-1})}^T\\ &\qquad+\frac{1}{k_{n}\Delta_{n}}\Delta_{n}\parens{\frac{1}{3}-\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}f_{\lambda_1}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})\parens{f_{\lambda_2}(\lm{Y}{j-2})}^T\\ &\qquad+\frac{2}{k_{n}\Delta_{n}}\frac{\Delta_{n}}{6}\parens{1-\frac{1}{p_{n}^2}}f_{\lambda_1}(\lm{Y}{j-1})a(X_{j\Delta_{n}})\parens{a(X_{(j-1)\Delta_{n}})}^T\parens{f_{\lambda_2}(\lm{Y}{j-2})}^T. \end{align*} Therefore, \begin{align*} \sum_{j=2}^{k_{n}-2}\CE{\parens{r_{j,n}^{(1)}(\lambda_1)}\parens{r_{j,n}^{(1)}(\lambda_2)}}{\mathcal{H}_j^n} \cp\nu_0\parens{\parens{f_{\lambda_1}}\parens{c}\parens{f_{\lambda_2}}^T(\cdot)} \end{align*} because of Lemma \ref*{lem721} and Lemma \ref*{lem739}. The fourth conditional moment can be evaluated as \begin{comment} \begin{align*} \CE{\parens{r_{j,n}^{(1)}(\lambda)}^4}{\mathcal{H}_j^n} &\le \frac{C}{k_{n}^2}\norm{f^{\lambda}(\lm{Y}{j-1})a(X_{j\Delta_{n}})}^4+\frac{C}{k_{n}^2}\norm{f^{\lambda}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})} \end{align*} and \end{comment} \begin{align*} \E{\abs{\sum_{j=2}^{k_{n}-2}\CE{\parens{r_{j,n}^{(1)}(\lambda)}^4}{\mathcal{H}_j^n}}}\to0 \end{align*} by Lemma \ref*{lem739}. 
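Note that the three weights appearing in the conditional second moment above sum to one for every $p_{n}$,
\begin{align*}
\parens{\frac{1}{3}+\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}+\parens{\frac{1}{3}-\frac{1}{2p_{n}}+\frac{1}{6p_{n}^2}}+2\cdot\frac{1}{6}\parens{1-\frac{1}{p_{n}^2}}
=\frac{2}{3}+\frac{1}{3p_{n}^2}+\frac{1}{3}-\frac{1}{3p_{n}^2}=1,
\end{align*}
which, since the three summands share the same limit, is consistent with the unit coefficient in front of $\nu_0$ above.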
For the remaining parts, Lemma 9 in \citep{GeJ93}, Proposition \ref{pro737} and Lemma \ref{lem739} show $\bar{R}_{n}^{(l)}=o_P(1)$ for $l=2,3,4$ by straightforward computation.\\ \begin{comment} Let us define the following $\mathcal{H}_{j+1}^{n}$-measurable random variable \begin{align*} r_{j,n}^{(2)}(\lambda) &:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\parens{f_{\lambda}(\lm{Y}{j-2})-f_{\lambda}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}\lm{\epsilon}{j}, \end{align*} and then we have \begin{align*} \bar{R}_{n}^{(2)}(\lambda)=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}f^{\lambda}(\lm{Y}{j-1})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}=\sum_{j=2}^{k_{n}-2}r_{j,n}^{(2)}(\lambda)+o_P(1). \end{align*} We prove $\bar{R}^{(2)}=o_P(1)$ with Lemma 9 in \citep{GeJ93}. It is obvious \begin{align*} \CE{r_{j,n}^{(2)}(\lambda)}{\mathcal{H}_j^n}=0. \end{align*} For the second moment, with Lemma \ref*{lem739}, \begin{align*} \CE{\parens{r_{j,n}^{(2)}(\lambda)}^2}{\mathcal{H}_j^n} \le \frac{C}{k_{n}}\norm{f_{\lambda}(\lm{Y}{j-2})-f_{\lambda}(\lm{Y}{j-1})}^2 \end{align*} and therefore, by Lemma \ref*{lem739}, \begin{align*} \E{\abs{\sum_{j=2}^{k_{n}-2}\CE{\parens{r_{j,n}^{(2)}(\lambda)}^2}{\mathcal{H}_j^n}}}\to 0. \end{align*} Hence $\bar{R}_{n}^{(2)}(\lambda)=o_P(1)$. With respect to $\bar{R}_{n}^{(3)}(\lambda)$, we again use Lemma 9 in \citep{GeJ93} to show convergence to zero in probability. To ease notation, we separate the summation into three parts in the same way as in Theorem \ref*{thm742} and Theorem \ref*{thm743} such that \begin{align*} \bar{R}_{n}^{(3)}(\lambda)&=\bar{R}_{0,n}^{(3)}(\lambda)+\bar{R}_{1,n}^{(3)}(\lambda)+\bar{R}_{2,n}^{(3)}(\lambda), \end{align*} where for $l=0,1,2$, \begin{align*} \bar{R}_{l,n}^{(3)}(\lambda)=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{1\le 3j+l\le k_{n}-2}f_{\lambda}(\lm{Y}{3j+l-1})e_{3j+l,n}, \end{align*} and it is enough to examine if $\bar{R}_{0,n}^{(3)}(\lambda)=o_P(1)$.
Let $r_{3j,n}^{(3)}(\lambda)$ be a random variable defined as \begin{align*} r_{3j,n}^{(3)}(\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}f_{\lambda}(\lm{Y}{3j-1})e_{3j,n} \end{align*} and then $r_{3j,n}^{(3)}(\lambda)$ is $\mathcal{H}_{3j+2}^{n}$-measurable and $\mathcal{H}_{3(j+1)}^{n}$-measurable. Furthermore, \begin{align*} \bar{R}_{0,n}^{(3)}(\lambda)=\sum_{1\le 3j\le k_{n}-2}r_{3j,n}^{(3)}(\lambda). \end{align*} Therefore, the conditional expectation with respect to $\mathcal{H}_{3j}^{n}$ can be evaluated as \begin{align*} \CE{r_{3j,n}^{(3)}(\lambda)}{\mathcal{H}_{3j}^n} =\frac{1}{\sqrt{k_{n}\Delta_{n}}}f_{\lambda}(\lm{Y}{3j-1})\CE{e_{3j,n}}{\mathcal{H}_{3j}^n} \end{align*} and hence with Proposition \ref*{pro737} and Lemma \ref*{lem739}, \begin{align*} \E{\abs{\sum_{1\le 3j\le k_{n}-2}\CE{r_{3j,n}^{(3)}(\lambda)}{\mathcal{H}_{3j}^n}}} \to 0. \end{align*} With respect to the second moment, \begin{align*} \CE{\abs{r_{3j,n}^{(3)}(\lambda)}^2}{\mathcal{H}_{3j}^n} \le \frac{1}{k_{n}\Delta_{n}}\norm{f_{\lambda}(\lm{Y}{3j-1})}^2C\Delta_{n}^2\parens{1+\norm{X_{3j\Delta_{n}}}^6} \end{align*} and hence Lemma \ref*{lem739} leads to \begin{align*} \E{\abs{\sum_{1\le 3j\le k_{n}-2}\CE{\abs{r_{3j,n}^{(3)}(\lambda)}^2}{\mathcal{H}_{3j}^n}}}\to0. \end{align*} As a result we obtain $\bar{R}_{n}^{(3)}(\lambda)=o_P(1)$. We can evaluate the $L^1$ norm of $\bar{R}_{n}^{(4)}(\lambda)$ such that \begin{align*} \E{\abs{\bar{R}_{n}^{(4)}(\lambda)}} &\to0 \end{align*} which verifies $\bar{R}_{n}^{(4)}(\lambda)=o_P(1)$.\\ \end{comment} \noindent\textbf{(Step 4): } We check the covariance structures among $\sqrt{n}D_{n}$, $U_{n}^{(1)}$, $U_{n}^{(3)}$, $U_{n}^{(4)}$ and $\bar{R}_{n}^{(1)}$, which have not yet been shown.
It is easy to see \begin{align*} \CE{\parens{D_{j,n}'}^{(l_1,l_2)}s_{j,n}^{(1)}(\kappa)}{\mathcal{H}_j^n}&=0,\qquad \CE{\parens{D_{j,n}'}^{(l_1,l_2)}s_{j,n}^{(4)}(\kappa)}{\mathcal{H}_j^n}=0,\\ \CE{\parens{D_{j,n}'}^{(l_1,l_2)}r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}&=0,\qquad \CE{s_{j,n}^{(3)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}=0. \end{align*} The remaining evaluations are routine by Lemma 9 in \citep{GeJ93}: \begin{align*} \frac{\sqrt{n}}{k_{n}^{3/2}}\sum_{j=1}^{k_{n}-2}\CE{\parens{D_{j,n}'}^{(l_1,l_2)}s_{j,n}^{(3)}(\kappa)}{\mathcal{H}_j^n}&=o_P(1),\\ \frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{s_{j,n}^{(1)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}&=o_P(1),\\ \frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{s_{j,n}^{(4)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}&=o_{P}(1). \end{align*} \begin{comment} We also have \begin{align*} &\frac{\sqrt{n}}{k_{n}^{3/2}}\sum_{j=1}^{k_{n}-2}\CE{\parens{D_{j,n}'}^{l_1,l_2}s_{j,n}^{(3)}(\kappa)}{\mathcal{H}_j^n}\\ &=\frac{1}{\sqrt{p_{n}}k_{n}}\sum_{j=1}^{k_{n}-2}\ip{\Lambda_{\star}^{1/2}\parens{B_{\kappa}(\lm{Y}{j-2})+B_{\kappa}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}}{\frac{1}{\Delta_{n}}\CE{\parens{D_{j,n}'}^{l_1,l_2} \parens{\parens{\lm{\epsilon}{j}}^{\otimes 2}}}{\mathcal{H}_j^n}}\\ &\qquad+\frac{1}{\sqrt{p_{n}}k_{n}}\sum_{j=1}^{k_{n}-2}\ip{\Lambda_{\star}^{1/2}B_{\kappa}(\lm{Y}{j-2})\Lambda_{\star}^{1/2}}{ \parens{-\frac{2}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}}\CE{\parens{D_{j,n}'}^{l_1,l_2} \parens{\lm{\epsilon}{j}}^T}{\mathcal{H}_j^n}}.
\end{align*} Because \begin{align*} &\CE{\parens{D_{j,n}'}^{l_1,l_2} \parens{\parens{\lm{\epsilon}{j}}^{\otimes 2}}}{\mathcal{H}_j^n}\\ &=\frac{1}{2p_{n}^3}\sum_{i_1,i_2}\E{\parens{\Lambda_{\star}^{1/2}\parens{2\parens{\epsilon_{j\Delta_{n}+i_1h_{n}}}^{\otimes 2}}\Lambda_{\star}^{1/2}}^{l_1,l_2}\parens{\epsilon_{j\Delta_{n}+i_2h_{n}}}^{\otimes 2}}\\ &\qquad+\frac{1}{2p_{n}^3}\sum_{i_1}\E{\parens{\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{j\Delta_{n}+i_1h_{n}}}\parens{\epsilon_{j\Delta_{n}+(i_1-1)h_{n}}}^T}\Lambda_{\star}^{1/2}}^{l_1,l_2}\parens{\epsilon_{j\Delta_{n}+i_1h_{n}}}\parens{\epsilon_{j\Delta_{n}+(i_1-1)h_{n}}}^T}\\ &\qquad+\frac{1}{2p_{n}^3}\sum_{i_1}\E{\parens{\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{j\Delta_{n}+(i_1-1)h_{n}}}\parens{\epsilon_{j\Delta_{n}+i_1h_{n}}}^T}\Lambda_{\star}^{1/2}}^{l_1,l_2}\parens{\epsilon_{j\Delta_{n}+i_1h_{n}}}\parens{\epsilon_{j\Delta_{n}+(i_1-1)h_{n}}}^T}\\ &\qquad-\frac{1}{p_{n}}\parens{\Lambda_{\star}}^{l_1,l_2}I_d \end{align*} and \begin{align*} \CE{\parens{D_{j,n}'}^{l_1,l_2}\parens{\lm{\epsilon}{j}}^T}{\mathcal{H}_j^n} =0, \end{align*} we obtain \begin{align*} \E{\abs{\frac{1}{\sqrt{p_{n}}k_{n}}\sum_{j=1}^{k_{n}-2}\ip{\Lambda_{\star}^{1/2}\parens{B_{\kappa}(\lm{Y}{j-2})+B_{\kappa}(\lm{Y}{j-1})}\Lambda_{\star}^{1/2}}{\frac{1}{\Delta_{n}}\CE{\parens{D_{j,n}'}^{l_1,l_2} \parens{\parens{\lm{\epsilon}{j}}^{\otimes 2}}}{\mathcal{H}_j^n}}}}\to 0 \end{align*} and \begin{align*} \frac{1}{\sqrt{p_{n}}k_{n}}\sum_{j=1}^{k_{n}-2}\ip{\Lambda_{\star}^{1/2}B_{\kappa}(\lm{Y}{j-2})\Lambda_{\star}^{1/2}}{ \parens{-\frac{2}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}}\CE{\parens{D_{j,n}'}^{l_1,l_2} \parens{\lm{\epsilon}{j}}^T}{\mathcal{H}_j^n}}=0. \end{align*} Hence $U_{n}^{(3)}$ and $\sqrt{n}D_{n}$ are asymptotically independent. 
Furthermore, using the evaluation \begin{align*} \CE{\parens{D_{j,n}'}^{l_1,l_2}\parens{\lm{\epsilon}{j}}^T}{\mathcal{H}_j^n}=0 \end{align*} we have \begin{align*} \CE{\parens{D_{j,n}'}^{l_1,l_2}s_{j,n}^{(4)}(\kappa)}{\mathcal{H}_j^n}=0. \end{align*} Next In the next place, we evaluate the asymptotics of \begin{align*} \frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{s_{j,n}^{(1)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}. \end{align*} We have \begin{align*} &\CE{s_{j,n}^{(1)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}\\ &=\frac{2\chi_{n}}{\sqrt{k_{n}\Delta_{n}}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}}) a(X_{j\Delta_{n}})^Tf^{\lambda}(\lm{Y}{j-1})^T\\ &\qquad+\frac{2m_{n}'}{\sqrt{k_{n}\Delta_{n}}}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T, \end{align*} and then \begin{align*} &\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{s_{j,n}^{(1)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}\\ &=\frac{2\chi_{n}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}}) a(X_{j\Delta_{n}})^Tf^{\lambda}(\lm{Y}{j-1})^T\\ &\qquad+\frac{2m_{n}'}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T. \end{align*} Note the $L^1$ convergence \begin{align*} &\mathbf{E}\lcrotchet{\labs{\frac{2\chi_{n}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}}) a(X_{j\Delta_{n}})^Tf^{\lambda}(\lm{Y}{j-1})^T}}\\ &\qquad\rcrotchet{\rabs{-\frac{2\chi_{n}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}}) a(X_{(j-1)\Delta_{n}})^Tf^{\lambda}(\lm{Y}{j-2})^T}}\\ &\to 0. 
\end{align*} Hence we obtain \begin{align*} &\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{s_{j,n}^{(1)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}\\ &=\frac{2\parens{\chi_{n}+m_{n}'}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T\\ &\qquad+o_P(1). \end{align*} We are also able to evaluate \begin{align*} &\E{\abs{\frac{2\parens{\chi_{n}+m_{n}'}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\CE{\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T}{\mathcal{H}_{j-1}^n}}}\\ &=0 \end{align*} and \begin{align*} &\E{\abs{\frac{4\parens{\chi_{n}+m_{n}'}^2}{k_{n}^2\Delta_{n}}\sum_{j=1}^{k_{n}-2}\CE{\abs{\parens{\zeta_{j,n}}^Ta(X_{(j-1)\Delta_{n}})^T\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T}^2}{\mathcal{H}_{j-1}^n}}}\\ &\to 0. \end{align*} Hence Lemma 9 in \citep{GeJ93} leads to \begin{align*} \frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{s_{j,n}^{(1)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}\cp 0. \end{align*} Obviously we obtain \begin{align*} \CE{s_{j,n}^{(3)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}=0. \end{align*} Finally, we examine the asymptotics of \begin{align*} \frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}\CE{s_{j,n}^{(4)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}. 
\end{align*} We have \begin{align*} &\CE{s_{j,n}^{(4)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}\\ &=-\frac{2\chi_{n}}{\sqrt{k_{n}\Delta_{n}}}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{j\Delta_{n}})^Tf^{\lambda}(\lm{Y}{j-1})^T\\ &\qquad-\frac{2m_{n}'}{\sqrt{k_{n}\Delta_{n}}}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T \end{align*} and the $L^1$ convergence \begin{align*} &\mathbf{E}\lcrotchet{\labs{\frac{2\chi_{n}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{j\Delta_{n}})^Tf^{\lambda}(\lm{Y}{j-1})^T}}\\ &\qquad\rcrotchet{\rabs{-\frac{2\chi_{n}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})a(X_{(j-1)\Delta_{n}})^Tf^{\lambda}(\lm{Y}{j-2})^T}}\\ &\to 0. \end{align*} Hence \begin{align*} &\frac{1}{\sqrt{k_{n}}}\CE{s_{j,n}^{(4)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}\\ &=-\frac{2\parens{\chi_{n}+m_{n}'}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T\\ &\qquad+o_P(1). \end{align*} Again we can evaluate \begin{align*} \E{\abs{\frac{2\parens{\chi_{n}+m_{n}'}}{k_{n}\sqrt{\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\CE{\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T}{\mathcal{H}_{j-1}^n}}} =0 \end{align*} and \begin{align*} \E{\abs{\frac{4\parens{\chi_{n}+m_{n}'}^2}{k_{n}^2\Delta_{n}}\sum_{j=1}^{k_{n}-2}\CE{\abs{\parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}\bar{B}_{\kappa}(\lm{Y}{j-2})A(X_{(j-1)\Delta_{n}})f^{\lambda}(\lm{Y}{j-2})^T}^2}{\mathcal{H}_{j-1}^n}}}\to 0. 
\end{align*} Therefore, \begin{align*} \frac{1}{\sqrt{k_{n}}}\CE{s_{j,n}^{(4)}(\kappa)r_{j,n}^{(1)}(\lambda)}{\mathcal{H}_j^n}\cp 0. \end{align*} \end{comment} This completes the proof. \end{proof} \begin{corollary}\label{cor752} Under the same assumptions as Theorem \ref*{thm751}, we have \begin{align*} \crotchet{\begin{matrix} \sqrt{n}D_{n}\\ \sqrt{k_{n}}\crotchet{\bar{Q}_{n}\parens{B_{\kappa,n}(\cdot)} -\frac{2}{3}\bar{M}_{n}\parens{\ip{B_{\kappa,n}(\cdot)}{A_{n}^{\tau}\parens{\cdot,\alpha^{\star},\Lambda_{\star}}}}}_{\kappa}\\ \sqrt{k_{n}\Delta_{n}}\crotchet{\bar{D}_{n}\parens{f_{\lambda}(\cdot)}}^{\lambda} \end{matrix}}\cl N(\mathbf{0},W(\tuborg{B_{\kappa}},\tuborg{f_{\lambda}})). \end{align*} \end{corollary} The proof is essentially analogous to that of Corollary 1 in \citep{Fa16}. \begin{comment} \begin{proof} It is enough to check \begin{align*} &\crotchet{\begin{matrix} \sqrt{k_{n}}\crotchet{\bar{Q}_{n}\parens{\parens{B_{\kappa,n}-B_{\kappa}}(\cdot)} -\frac{2}{3}\bar{M}_{n}\parens{\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\cdot)}{A_{n}^{\tau}\parens{\cdot,\alpha^{\star},\Lambda_{\star}}}}}_{\kappa}\\ \sqrt{k_{n}\Delta_{n}}\crotchet{\bar{D}_{n}\parens{\parens{f_{\lambda,n}-f_{\lambda}}(\cdot)}}^{\lambda} \end{matrix}}=o_P(1).
\end{align*} We prepare the following notation such that \begin{align*} u_{j,n}^{(1)}((\kappa,n),\kappa) &:=\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\lm{Y}{j-1})}{a(X_{j\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}^{\otimes 2} -\frac{2}{3}I_r}a(X_{j\Delta_{n}})^{T}},\\ u_{j,n}^{(2)}((\kappa,n),\kappa) &:=\frac{2}{3}\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\lm{Y}{j-1})}{A(X_{j\Delta_{n}})-A(\lm{Y}{j-1})},\\ u_{j,n}^{(3)}((\kappa,n),\kappa) &:=\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\lm{Y}{j-1})}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}^{\otimes 2} -2\Delta_{n}^{\frac{2-\tau}{\tau-1}}I_d}\Lambda_{\star}^{1/2}},\\ u_{j,n}^{(4)}((\kappa,n),\kappa) &:=\frac{2}{\Delta_{n}}\ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-1})}{ \parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}} \parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T},\\ u_{j,n}^{(5)}((\kappa,n),\kappa) &:=\frac{1}{\Delta_{n}}\ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-1})}{\parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}^{\otimes 2}},\\ u_{j,n}^{(6)}((\kappa,n),\kappa) &:=\frac{2}{\Delta_{n}}\ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-1})}{ \parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}}\parens{a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^T},\\ u_{j,n}^{(7)}((\kappa,n),\kappa) &:=\frac{2}{\Delta_{n}} \ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-1})}{ \parens{\Delta_{n}b(X_{j\Delta_{n}})+e_{j,n}} \parens{\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^T},\\ s_{j,n}^{(1)}((\kappa,n),\kappa)&:=\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}}^{\otimes 2}-m_{n}I_r}a(X_{j\Delta_{n}})^{T}}\\ &\qquad+\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j+1,n}'}^{\otimes 
2}-m_{n}'I_r}a(X_{(j-1)\Delta_{n}})^{T}}\\ &\qquad+2\ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-2})}{a(X_{(j-1)\Delta_{n}}) \parens{\frac{1}{\Delta_{n}}\parens{\zeta_{j,n}\parens{\zeta_{j+1,n}'}^T}}a(X_{(j-1)\Delta_{n}})^{T}},\\ s_{j,n}^{(3)}((\kappa,n),\kappa)&:=\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\lm{Y}{j-2})}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}-\Delta_{n}^{\frac{2-\tau}{\tau-1}}I_d}\Lambda_{\star}^{1/2}}\\ &\qquad+\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\lm{Y}{j-1})}{\Lambda_{\star}^{1/2} \parens{\frac{1}{\Delta_{n}}\parens{\lm{\epsilon}{j}}^{\otimes 2}-\Delta_{n}^{\frac{2-\tau}{\tau-1}}I_d}\Lambda_{\star}^{1/2}}\\ &\qquad+\ip{\parens{B_{\kappa,n}-B_{\kappa}}(\lm{Y}{j-2})}{\Lambda_{\star}^{1/2} \parens{-\frac{2}{\Delta_{n}}\parens{\lm{\epsilon}{j-1}}\parens{\lm{\epsilon}{j}}^T}\Lambda_{\star}^{1/2}},\\ s_{j,n}^{(4)}((\kappa,n),\kappa)&:=\frac{2}{\Delta_{n}}\ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j,n}} \parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}}\\ &\qquad-\frac{2}{\Delta_{n}}\ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-1})}{ a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}} \parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}}\\ &\qquad+\frac{2}{\Delta_{n}}\ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j+1,n}'} \parens{\lm{\epsilon}{j}}^T\Lambda_{\star}^{1/2}}\\ &\qquad-\frac{2}{\Delta_{n}}\ip{\parens{\bar{B}_{\kappa,n}-\bar{B}_{\kappa}}(\lm{Y}{j-2})}{ a(X_{(j-1)\Delta_{n}})\parens{\zeta_{j+1,n}'} \parens{\lm{\epsilon}{j-1}}^T\Lambda_{\star}^{1/2}},\\ r_{j,n}^{(1)}((\lambda,n),\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\parens{f_{\lambda,n}-f_{\lambda}}(\lm{Y}{j-1})a(X_{j\Delta_{n}})\zeta_{j+1,n}\\ &\qquad+\frac{1}{\sqrt{k_{n}\Delta_{n}}}\parens{f_{\lambda,n}-f_{\lambda}}(\lm{Y}{j-2})a(X_{(j-1)\Delta_{n}})\zeta_{j+1,n}', \end{align*} and \begin{align*} 
U_{n}^{(l)}((\kappa,n),\kappa)&:=\frac{1}{\sqrt{k_{n}}}\sum_{j=1}^{k_{n}-2}u_{j,n}^{(l)}((\kappa,n),\kappa),\\ \bar{R}_{n}^{(1)}((\lambda,n),\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{f_{\lambda,n}-f_{\lambda}}(\lm{Y}{j-1})a(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'},\\ \bar{R}_{n}^{(2)}((\lambda,n),\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{f_{\lambda,n}-f_{\lambda}}(\lm{Y}{j-1})\Lambda_{\star}^{1/2}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}},\\ \bar{R}_{n}^{(3)}((\lambda,n),\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{f_{\lambda,n}-f_{\lambda}}(\lm{Y}{j-1})e_{j,n},\\ \bar{R}_{n}^{(4)}((\lambda,n),\lambda)&:=\frac{1}{\sqrt{k_{n}\Delta_{n}}}\sum_{j=1}^{k_{n}-2}\parens{f_{\lambda,n}-f_{\lambda}}(\lm{Y}{j-1}) \Delta_{n}\parens{b(X_{j\Delta_{n}})-b(\lm{Y}{j-1})}. \end{align*} We can obtain \begin{align*} U_{n}^{(l_1)}((\kappa,n),\kappa)&=o_P(1)\ l_1=2,5,6,7,\\ \bar{R}_{n}^{(l_2)}((\lambda,n),\lambda)&=o_P(1),\ l_2=2,3,4 \end{align*} with the same way as proof of Theorem \ref*{thm751}. Thus it is sufficient to check the asymptotics of $U_{n}^{(1)}((\kappa,n),\kappa)$, $U_{n}^{(3)}((\kappa,n),\kappa)$, $U_{n}^{(4)}((\kappa,n),\kappa)$, $\bar{R}_{n}^{(1)}((\kappa,n),\kappa)$. Using the evaluation of the conditional first moments for $s_{j,n}^{(l_1)}((\kappa,n),\kappa)$ for $l_1=1,3,4$ and $r_{j,n}^{(1)}((\lambda,n),\lambda)$, it is enough to check the conditional second moments of them. We obtain \begin{align*} \E{\abs{\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2}\CE{\abs{s_{j,n}^{(1)}((\kappa,n),\kappa)}^2}{\mathcal{H}_j^n}}} \to 0. \end{align*} Hence $U_{n}^{(1)}((\kappa,n),\kappa)=o_P(1)$ by Lemma 9 in \citep{GeJ93}. Similarly, \begin{align*} \E{\abs{\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2}\CE{\abs{s_{j,n}^{(3)}((\kappa,n),\kappa)}^2}{\mathcal{H}_j^n}}} &\to 0,\\ \E{\abs{\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2}\CE{\abs{s_{j,n}^{(4)}((\kappa,n),\kappa)}^2}{\mathcal{H}_j^n}}} &\to 0. 
\end{align*} Finally, \begin{align*} \E{\abs{\sum_{j=1}^{k_{n}-2}\CE{\abs{r_{j,n}^{(1)}((\kappa,n),\kappa)}^2}{\mathcal{H}_j^n}}} \to 0. \end{align*} This completes the proof. \end{proof} \end{comment} \subsection{Proofs of results in Section 3.1} \begin{proof}[Proof of Lemma \ref*{lem311}] Theorem \ref{thm751} and the continuous mapping theorem lead to consistency. \begin{comment} We obtain \begin{align*} \hat{\Lambda}_{n}-\Lambda_{\star} &=\frac{1}{2n}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^{\otimes 2} +\frac{1}{2n}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^{\otimes 2}-2I_d}\Lambda_{\star}^{1/2}\\ &\qquad+\frac{1}{2n}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^T\Lambda_{\star}^{1/2}\\ &\qquad+\frac{1}{2n}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^T. \end{align*} Note the following evaluation: the first term on the right-hand side can be evaluated with Lemma \ref*{lem739}, \begin{align*} \E{\norm{\frac{1}{2n}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}^{\otimes 2}}} \to0.
\end{align*} For the second term, we obtain \begin{align*} &\frac{1}{2n}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}}^{\otimes 2}+\parens{\epsilon_{ih_{n}}}^{\otimes 2}-2I_d}\Lambda_{\star}^{1/2}\\ &\qquad+\frac{1}{2n}\sum_{i=0}^{n-1}\Lambda_{\star}^{1/2}\parens{\parens{\epsilon_{(i+1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T+\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i+1)h_{n}}}^T} \Lambda_{\star}^{1/2}\\ &=\Lambda_{\star}^{1/2}\parens{\frac{1}{2n}\sum_{i=0}^{n-1}\parens{\epsilon_{(i+1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T+\frac{1}{2n}\sum_{i=0}^{n-1}\parens{\epsilon_{ih_{n}}}\parens{\epsilon_{(i+1)h_{n}}}^T }\Lambda_{\star}^{1/2}+o_P(1) \end{align*} by the law of large numbers, and \begin{align*} \E{\norm{\frac{1}{2n}\sum_{i=0}^{n-1}\parens{\epsilon_{(i+1)h_{n}}}\parens{\epsilon_{ih_{n}}}^T}^2} \to0 \end{align*} hence the second term is $o_P(1)$. For the third term, \begin{align*} \E{\norm{\frac{1}{2n}\sum_{i=0}^{n-1}\parens{X_{(i+1)h_{n}}-X_{ih_{n}}}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}^T\Lambda_{\star}^{1/2}}} \to0 \end{align*} because of Lemma \ref*{lem739}; and the fourth term can be evaluated in the same way as the third term. Hence we obtain the consistency of $\hat{\Lambda}_{n}$. \end{comment} \end{proof} Let us define the following quasi-likelihood function: \begin{align*} \check{\mathbb{H}}_{1,n}(\alpha):=-\frac{1}{2}\sum_{j=1}^{k_{n}-2} \parens{\ip{\parens{\frac{2}{3}\Delta_{n} A(\lm{Y}{j-1},\alpha)}^{-1}}{\parens{\lm{Y}{j+1}-\lm{Y}{j}}^{\otimes 2}} +\log\det A(\lm{Y}{j-1},\alpha)} \end{align*} and the corresponding ML-type estimator $\check{\alpha}_{n}$ defined by \begin{align*} \check{\mathbb{H}}_{1,n}(\check{\alpha}_{n})=\sup_{\alpha\in\Theta_1}\check{\mathbb{H}}_{1,n}(\alpha). \end{align*} \begin{lemma}\label{lem761} Under (A1)-(A7), (AH) and $\tau\in(1,2)$, $\check{\alpha}_{n}$ is consistent.
\end{lemma} \begin{proof} By Proposition \ref{pro741} and Theorem \ref{thm743}, we obtain \begin{align*} &\frac{1}{k_{n}}\check{\mathbb{H}}_{1,n}\parens{\alpha} \cp -\frac{1}{2}\nu_0\parens{\parens{A(\cdot,\alpha)}^{-1}A(\cdot,\alpha^{\star})+\log \det A(\cdot,\alpha)}\text{ uniformly in }\theta. \end{align*} This uniform convergence and Assumption (A6) lead to the consistency of the estimator by a discussion identical to that of Theorem 1 in \citep{K97}. \end{proof} \begin{proof}[Proof of Theorem \ref*{thm312}] First of all, we show the consistency of $\hat{\alpha}_{n}$. When $\tau=2$, \begin{align*} \frac{1}{k_{n}}\mathbb{H}_{1,n}^{\tau}(\alpha|\Lambda)\cp -\frac{1}{2}\nu_0\parens{\parens{A^{\tau}(\cdot,\alpha,\Lambda)}^{-1}A^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})+\log \det A^{\tau}(\cdot,\alpha,\Lambda)}\text{ uniformly in }\vartheta \end{align*} because of Proposition \ref{pro741} and Theorem \ref{thm743}, and then Lemma \ref{lem311} results in \begin{align*} \frac{1}{k_{n}}\mathbb{H}_{1,n}^{\tau}(\alpha|\hat{\Lambda}_{n})\cp -\frac{1}{2}\nu_0\parens{\parens{A^{\tau}(\cdot,\alpha,\Lambda_{\star})}^{-1}A^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})+\log \det A^{\tau}(\cdot,\alpha,\Lambda_{\star})} \end{align*} uniformly in $\theta$. Therefore we can reduce the discussion to that of Lemma \ref{lem761}. When $\tau\in(1,2)$, in view of Lemma \ref*{lem761} it is sufficient to see \begin{align*} &\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2} \parens{\ip{\parens{\frac{2}{3}\Delta_{n} A_{n}^{\tau}(\lm{Y}{j-1},\alpha,\hat{\Lambda}_{n})}^{-1}}{\parens{\lm{Y}{j+1}-\lm{Y}{j}}^{\otimes 2}} +\log\det \parens{ A_{n}^{\tau}(\lm{Y}{j-1},\alpha,\hat{\Lambda}_{n})}}\\ &\quad\qquad\qquad-\frac{1}{k_{n}}\sum_{j=1}^{k_{n}-2} \parens{\ip{\parens{\frac{2}{3}\Delta_{n} A(\lm{Y}{j-1},\alpha)}^{-1}}{\parens{\lm{Y}{j+1}-\lm{Y}{j}}^{\otimes 2}} +\log\det \parens{ A(\lm{Y}{j-1},\alpha)}}\\ &\cp 0 \text{ uniformly in }\theta. \end{align*}
Note that \begin{align*} \sup_{\theta\in\Theta}\norm{\parens{A_{n}^{\tau}(x,\alpha,\hat{\Lambda}_{n})}^{-1}-\parens{A(x,\alpha)}^{-1}}&\le C\Delta_{n}^{\frac{2-\tau}{\tau-1}}\parens{1+\norm{x}^C},\\ \sup_{\theta\in\Theta}\abs{\log\det\parens{A_{n}^{\tau}(x,\alpha,\hat{\Lambda}_{n})}-\log\det\parens{A(x,\alpha)}}&\le C\Delta_{n}^{\frac{2-\tau}{\tau-1}}\parens{1+\norm{x}^C}. \end{align*} These inequalities and Lemma 9 in \citep{GeJ93} lead to the evaluation above by a discussion analogous to that of Corollary 5.3 in \citep{Fa14}. Hence we have the consistency of $\hat{\alpha}_{n}$. Next we consider the consistency of $\hat{\beta}_{n}$. Because of Proposition \ref{pro741} and Theorem \ref{thm742}, \begin{align*} &\frac{1}{k_{n}\Delta_{n}}\mathbb{H}_{2,n}(\beta|\alpha)-\frac{1}{k_{n}\Delta_{n}}\mathbb{H}_{2,n}(\beta^{\star}|\alpha)\\ &=\frac{1}{k_{n}\Delta_{n}}\sum_{j=1}^{k_{n}-2} \ip{\parens{\Delta_{n}A(\lm{Y}{j-1},\alpha)}^{-1}}{\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j-1},\beta)}^{\otimes 2}}\\ &\qquad-\frac{1}{k_{n}\Delta_{n}}\sum_{j=1}^{k_{n}-2} \ip{\parens{\Delta_{n}A(\lm{Y}{j-1},\alpha)}^{-1}}{\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j-1},\beta^{\star})}^{\otimes 2}}\\ &\cp -\frac{1}{2}\nu_0\parens{\ip{\parens{c(\cdot,\alpha)}^{-1}}{\left(b(\cdot,\beta)-b(\cdot,\beta^{\star})\right)^{\otimes2}}} \text{ uniformly in }\theta \end{align*} and $\hat{\alpha}_{n}\cp \alpha^{\star}$ leads to \begin{align*} &\frac{1}{k_{n}\Delta_{n}}\mathbb{H}_{2,n}(\beta|\hat{\alpha}_{n})-\frac{1}{k_{n}\Delta_{n}}\mathbb{H}_{2,n}(\beta^{\star}|\hat{\alpha}_{n})\cp \mathbb{Y}_{2}\left(\beta\right) \text{ uniformly in }\theta. \end{align*} Hence a discussion identical to that of \citep{K97} verifies the consistency of $\hat{\beta}_{n}$.
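A brief numerical aside (illustrative only, not part of the argument): the rate $\Delta_{n}^{\frac{2-\tau}{\tau-1}}$ appearing in these bounds vanishes as $\Delta_{n}\to0$ precisely because the exponent $\frac{2-\tau}{\tau-1}$ is strictly positive for every $\tau\in(1,2)$, which is why the plug-in of $A_{n}^{\tau}$ for $A$ is asymptotically negligible. A minimal check over a hypothetical grid of $\tau$ values:

```python
def rate_exponent(tau):
    # exponent of Delta_n in the bound
    # ||A_n^tau(x, .)^{-1} - A(x, .)^{-1}|| <= C * Delta_n**((2 - tau)/(tau - 1)) * (1 + ||x||^C)
    return (2.0 - tau) / (tau - 1.0)

# strictly positive on (1, 2), so the correction term vanishes as Delta_n -> 0
for tau in (1.01, 1.25, 1.5, 1.75, 1.99):
    assert rate_exponent(tau) > 0

# the exponent grows as tau -> 1 (faster decay) and shrinks toward 0 as tau -> 2
assert rate_exponent(1.01) > rate_exponent(1.99)
```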
\end{proof} \begin{proof}[Proof of Theorem \ref*{thm313}] Firstly we prepare the notation \begin{align*} \hat{J}_{n}^{(1,2)}(\hat{\vartheta}_{n})&:=-\int_{0}^{1}\frac{1}{\sqrt{nk_{n}}}\partial_{\theta_{\epsilon}}\partial_{\alpha}\mathbb{H}_{1,n}^{\tau}(\hat{\alpha}_{n}|\theta_{\epsilon}^{\star}+u(\hat{\theta}_{\epsilon,n}-\theta_{\epsilon}^{\star}))\dop u,\\ \hat{J}_{n}^{(2,2)}(\hat{\vartheta}_{n})&:=-\int_{0}^{1}\frac{1}{k_{n}}\partial_{\alpha}^2\mathbb{H}_{1,n}^{\tau}(\alpha^{\star}+u(\hat{\alpha}_{n}-\alpha^{\star})|\theta_{\epsilon}^{\star})\dop u,\\ \hat{J}_{n}^{(2,3)}(\hat{\theta}_{n})&:=-\int_{0}^{1}\frac{1}{k_{n}\sqrt{\Delta_{n}}}\partial_{\alpha}\partial_{\beta} \mathbb{H}_{2,n}(\hat{\beta}_{n}|\alpha^{\star}+u(\hat{\alpha}_{n}-\alpha^{\star}))\dop u,\\ \hat{J}_{n}^{(3,3)}(\hat{\theta}_{n})&:=-\int_{0}^{1}\frac{1}{k_{n}\Delta_{n}}\partial_{\beta}^2 \mathbb{H}_{2,n}(\beta^{\star}+u(\hat{\beta}_{n}-\beta^{\star})|\alpha^{\star})\dop u,\\ \hat{J}_{n}(\hat{\vartheta}_{n})&:=\crotchet{\begin{matrix} I & O & O \\ \hat{J}_{n}^{(1,2)}(\hat{\vartheta}_{n}) & \hat{J}_{n}^{(2,2)}(\hat{\vartheta}_{n}) & O\\ O & \hat{J}_{n}^{(2,3)}(\hat{\theta}_{n}) & \hat{J}_{n}^{(3,3)}(\hat{\theta}_{n}) \end{matrix}}. 
\end{align*} Taylor's theorem gives \begin{align*} &\parens{\frac{1}{\sqrt{k_{n}}}\partial_{\alpha}\mathbb{H}_{1,n}^{\tau}(\hat{\alpha}_{n}|\hat{\theta}_{\epsilon,n})-\frac{1}{\sqrt{k_{n}}}\partial_{\alpha}\mathbb{H}_{1,n}^{\tau}(\alpha^{\star}|\theta_{\epsilon}^{\star})}^T\\ &=\parens{\int_{0}^{1}\frac{1}{\sqrt{nk_{n}}}\partial_{\theta_{\epsilon}}\partial_{\alpha}\mathbb{H}_{1,n}^{\tau}(\hat{\alpha}_{n}|\theta_{\epsilon}^{\star}+u(\hat{\theta}_{\epsilon,n}-\theta_{\epsilon}^{\star}))\dop u }\sqrt{n}\parens{\hat{\theta}_{\epsilon,n}-\theta_{\epsilon}^{\star}}\\ &\qquad+\parens{\int_{0}^{1}\frac{1}{k_{n}}\partial_{\alpha}^2\mathbb{H}_{1,n}^{\tau}(\alpha^{\star}+u(\hat{\alpha}_{n}-\alpha^{\star})|\theta_{\epsilon}^{\star})\dop u }\sqrt{k_{n}}\parens{\hat{\alpha}_{n}-\alpha^{\star}} \end{align*} and the definition of $\hat{\alpha}_{n}$ leads to \begin{align*} \parens{-\frac{1}{\sqrt{k_{n}}}\partial_{\alpha}\mathbb{H}_{1,n}^{\tau}(\alpha^{\star}|\theta_{\epsilon}^{\star})}^T &=-\hat{J}_{n}^{(1,2)}(\hat{\vartheta}_{n})\sqrt{n}\parens{\hat{\theta}_{\epsilon,n}-\theta_{\epsilon}^{\star}}-\hat{J}_{n}^{(2,2)}(\hat{\vartheta}_{n}) \sqrt{k_{n}}\parens{\hat{\alpha}_{n}-\alpha^{\star}}. 
\end{align*} Similarly we have \begin{align*} &\frac{1}{\sqrt{k_{n}\Delta_{n}}} \parens{\partial_{\beta}\mathbb{H}_{2,n}(\hat{\beta}_{n}|\hat{\alpha}_{n}) -\partial_{\beta}\mathbb{H}_{2,n}(\beta^{\star}|\alpha^{\star})}^T\\ &=\parens{\int_{0}^{1}\frac{1}{k_{n}\sqrt{\Delta_{n}}}\partial_{\alpha}\partial_{\beta} \mathbb{H}_{2,n}(\hat{\beta}_{n}|\alpha^{\star}+u(\hat{\alpha}_{n}-\alpha^{\star}))\dop u}\sqrt{k_{n}} \parens{\hat{\alpha}_{n}-\alpha^{\star}}\\ &\qquad+\parens{\int_{0}^{1}\frac{1}{k_{n}\Delta_{n}}\partial_{\beta}^2 \mathbb{H}_{2,n}(\beta^{\star}+u(\hat{\beta}_{n}-\beta^{\star})|\alpha^{\star})\dop u}\sqrt{k_{n}\Delta_{n}} \parens{\hat{\beta}_{n}-\beta^{\star}} \end{align*} and hence \begin{align*} \parens{-\frac{1}{\sqrt{k_{n}\Delta_{n}}}\partial_{\beta}\mathbb{H}_{2,n}(\beta^{\star}|\theta_{\epsilon}^{\star},\alpha^{\star})}^T &=-\hat{J}_{n}^{(2,3)}(\hat{\theta}_{n})\sqrt{k_{n}} \parens{\hat{\alpha}_{n}-\alpha^{\star}}\\ &\qquad-\hat{J}_{n}^{(3,3)}(\hat{\theta}_{n})\sqrt{k_{n}\Delta_{n}} \parens{\hat{\beta}_{n}-\beta^{\star}}. \end{align*} Here we obtain \begin{align*} &\crotchet{\begin{matrix} -\sqrt{n}D_{n}\\ -\frac{1}{\sqrt{k_{n}}}\crotchet{\partial_{\alpha^{(i_1)}}\mathbb{H}_{1,n}^{\tau}(\alpha^{\star}|\theta_{\epsilon}^{\star})}_{i_1=1,\ldots,m_1}\\ -\frac{1}{\sqrt{k_{n}\Delta_{n}}}\crotchet{\partial_{\beta^{(i_2)}}\mathbb{H}_{2,n}(\beta^{\star}|\alpha^{\star})}_{i_2=1,\ldots,m_2} \end{matrix}} =-\hat{J}_{n}(\hat{\vartheta}_{n})\crotchet{\begin{matrix} \sqrt{n} \parens{\hat{\theta}_{\epsilon,n}-\theta_{\epsilon}^{\star}}\\ \sqrt{k_{n}} \parens{\hat{\alpha}_{n}-\alpha^{\star}}\\ \sqrt{k_{n}\Delta_{n}} \parens{\hat{\beta}_{n}-\beta^{\star}} \end{matrix}} \end{align*} and we check the asymptotics of the left hand side and the right one. 
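The identity just displayed expresses the joint score vector as the block lower-triangular matrix $\hat{J}_{n}(\hat{\vartheta}_{n})$ applied to the vector of scaled estimation errors; since its top-left block is the identity, the errors can be recovered by blockwise forward substitution. A small numerical sketch of this inversion, with hypothetical dimensions and randomly generated blocks standing in for $\hat{J}_{n}^{(1,2)}$, $\hat{J}_{n}^{(2,2)}$, $\hat{J}_{n}^{(2,3)}$, $\hat{J}_{n}^{(3,3)}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d0, d1, d2 = 2, 3, 2  # hypothetical dimensions of theta_eps, alpha, beta

# blocks of a lower block-triangular J with identity top-left block, as in the proof
J12 = rng.normal(size=(d1, d0))
J22 = rng.normal(size=(d1, d1)) + 4 * np.eye(d1)  # shifted to keep it invertible
J23 = rng.normal(size=(d2, d1))
J33 = rng.normal(size=(d2, d2)) + 4 * np.eye(d2)

J = np.zeros((d0 + d1 + d2, d0 + d1 + d2))
J[:d0, :d0] = np.eye(d0)
J[d0:d0 + d1, :d0] = J12
J[d0:d0 + d1, d0:d0 + d1] = J22
J[d0 + d1:, d0:d0 + d1] = J23
J[d0 + d1:, d0 + d1:] = J33

score = rng.normal(size=d0 + d1 + d2)  # stands in for the joint score vector

# forward substitution, block by block
e0 = score[:d0]
e1 = np.linalg.solve(J22, score[d0:d0 + d1] - J12 @ e0)
e2 = np.linalg.solve(J33, score[d0 + d1:] - J23 @ e1)
err = np.concatenate([e0, e1, e2])

assert np.allclose(J @ err, score)  # J * (errors) recovers the score
```

The triangular structure mirrors the stepwise nature of the estimation: $\hat{\theta}_{\epsilon,n}$ is obtained first, then $\hat{\alpha}_{n}$ given it, then $\hat{\beta}_{n}$ given both.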
\noindent\textbf{(Step 1): } For $i=1,\ldots,m_1$, we can evaluate \begin{align*} &-\frac{1}{\sqrt{k_{n}}}\partial_{\alpha^{(i)}}\mathbb{H}_{1,n}^{\tau}(\alpha^{\star}|\theta_{\epsilon}^{\star})\\ &=-\frac{\sqrt{k_{n}}}{2}\bar{Q}_{n}\parens{\parens{\frac{2}{3}}^{-1}\parens{ A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}^{-1} \partial_{\alpha^{(i)}}A(\cdot,\alpha^{\star})\parens{ A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}^{-1}}\\ &\qquad+\frac{\sqrt{k_{n}}}{2}\bar{M}_{n}\parens{\ip{\parens{\parens{A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}^{-1} \partial_{\alpha^{(i)}}A(\cdot,\alpha^{\star})\parens{ A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}^{-1} }}{A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}}\\ &=-{\sqrt{k_{n}}}\bar{Q}_{n}\parens{\frac{3}{4}\parens{ A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}^{-1} \partial_{\alpha^{(i)}}A(\cdot,\alpha^{\star})\parens{ A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}^{-1}}\\ &\qquad+\frac{2\sqrt{k_{n}}}{3}\bar{M}_{n}\parens{\frac{3}{4}\ip{\parens{\parens{A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}^{-1} \partial_{\alpha^{(i)}}A(\cdot,\alpha^{\star})\parens{ A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}^{-1} }}{A_{n}^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}}. \end{align*} For $i=1,\ldots,m_2$, we have \begin{align*} -\frac{1}{\sqrt{k_{n}\Delta_{n}}}\partial_{\beta^{(i)}}\mathbb{H}_{2,n}(\beta^{\star}|\alpha^{\star}) &=-\sqrt{k_{n}\Delta_{n}}\bar{D}_{n}\parens{\parens{\partial_{\beta^{(i)}}b(\cdot,\beta^{\star})}^T\parens{A(\cdot,\alpha^{\star})}^{-1}}. \end{align*} As shown in the proof of Theorem \ref*{thm312}, if $\tau\in(1,2)$, \begin{align*} \norm{\parens{A_{n}^{\tau}(x,\alpha^{\star},\Lambda_{\star})}^{-1}-\parens{A(x,\alpha^{\star})}^{-1}}\le C\Delta_{n}^{\frac{2-\tau}{\tau-1}}\parens{1+\norm{x}^C} \end{align*} and if $\tau=2$, \begin{align*} A_{n}^{\tau}(x,\alpha^{\star},\Lambda_{\star})=A(x,\alpha^{\star})+3\Lambda_{\star}=A^{\tau}(x,\alpha^{\star},\Lambda_{\star}).
\end{align*} Therefore, by Theorem \ref*{thm751} and Corollary \ref*{cor752}, we obtain \begin{align*} \crotchet{\begin{matrix} -\sqrt{n}D_{n}\\ -\frac{1}{\sqrt{k_{n}}}\crotchet{\partial_{\alpha^{(i_1)}}\mathbb{H}_{1,n}^{\tau}(\alpha^{\star}|\theta_{\epsilon}^{\star})}_{i_1=1,\ldots,m_1}\\ -\frac{1}{\sqrt{k_{n}\Delta_{n}}}\crotchet{\partial_{\beta^{(i_2)}}\mathbb{H}_{2,n}(\beta^{\star}|\alpha^{\star})}_{i_2=1,\ldots,m_2} \end{matrix}}\cl N(\mathbf{0},\mathcal{I}^{\tau}(\vartheta^{\star})). \end{align*} \noindent\textbf{(Step 2): } We can compute \begin{align*} \E{\norm{\int_{0}^{1}\frac{1}{\sqrt{nk_{n}}}\partial_{\theta_{\epsilon}}\partial_{\alpha}\mathbb{H}_{1,n}^{\tau}(\hat{\alpha}_{n}|\theta_{\epsilon}^{\star}+u(\hat{\theta}_{\epsilon,n}-\theta_{\epsilon}^{\star}))\dop u}} &\to 0, \end{align*} and \begin{align*} \frac{1}{k_{n}\sqrt{\Delta_{n}}}\partial_{\alpha}\partial_{\beta} \mathbb{H}_{2,n}(\beta|\alpha)&\cp 0 \text{ uniformly in }\theta. \end{align*} We also have for $i_1,i_2\in\tuborg{1,\ldots,m_1}$ \begin{align*} &-\frac{1}{k_{n}}\partial_{\alpha^{(i_1)}}\partial_{\alpha^{(i_2)}}\mathbb{H}_{1,n}^{\tau}(\alpha|\theta_{\epsilon}^{\star})\\ &\cp \crotchet{\frac{1}{2}\nu_0\parens{\tr\tuborg{\parens{A^{\tau}(\cdot,\alpha,\Lambda_{\star})}^{-1}\partial_{\alpha^{(i_1)}}A(\cdot,\alpha)\parens{A^{\tau}(\cdot,\alpha,\Lambda_{\star})}^{-1}\partial_{\alpha^{(i_2)}}A(\cdot,\alpha)}}}_{i_1,i_2} \end{align*} uniformly in $\alpha$ because of Proposition \ref*{pro741}, Theorem \ref*{thm743} and \begin{align*} \norm{\parens{A_{n}^{\tau}(x,\alpha^{\star},\Lambda_{\star})}^{-1}-\parens{A(x,\alpha^{\star})}^{-1}}\le C\Delta_{n}^{\frac{2-\tau}{\tau-1}}\parens{1+\norm{x}^C} \end{align*} for $\tau\in(1,2)$. 
Similarly, for $j_1,j_2\in\tuborg{1,\ldots,m_2}$ \begin{align*} &-\frac{1}{k_{n}\Delta_{n}}\partial_{\beta^{(j_1)}}\partial_{\beta^{(j_2)}} \mathbb{H}_{2,n}(\beta|\alpha^{\star})\\ &\cp \lcrotchet{\nu_0\parens{\ip{\parens{A(\cdot,\alpha^{\star})}^{-1}} {\parens{\partial_{\beta^{(j_1)}}\partial_{\beta^{(j_2)}}b(\cdot,\beta)}\parens{b(\cdot,\beta)-b(\cdot,\beta^{\star})}^T }}}\\ &\hspace{2cm}\rcrotchet{+\nu_0\parens{\ip{\parens{A(\cdot,\alpha^{\star})}^{-1}}{\parens{\partial_{\beta^{(j_1)}}b}\parens{\partial_{\beta^{(j_2)}}b}^T (\cdot,\beta)}}}_{j_{1},j_{2}} \end{align*} uniformly in $\beta$. Hence these uniform convergences and consistency of $\hat{\alpha}_{n}$ and $\hat{\beta}_{n}$ lead to \begin{align*} -\int_{0}^{1}\frac{1}{k_{n}}\partial_{\alpha}^2\mathbb{H}_{1,n}^{\tau}(\alpha^{\star}+u(\hat{\alpha}_{n}-\alpha^{\star})|\theta_{\epsilon}^{\star})\dop u&\cp \mathcal{J}^{(2,2),\tau}(\vartheta^{\star}),\\ -\int_{0}^{1}\frac{1}{k_{n}\Delta_{n}}\partial_{\beta}^2 \mathbb{H}_{2,n}(\beta^{\star}+u(\hat{\beta}_{n}-\beta^{\star})|\alpha^{\star})\dop u&\cp \mathcal{J}^{(3,3)}(\theta^{\star}), \end{align*} and $\hat{J}_{n}(\hat{\vartheta}_{n})\cp \mathcal{J}^{\tau}(\vartheta^{\star})$. \end{proof} \subsection{Proofs of results in Section 3.2} First of all, we define \begin{align*} c_S(x):=\sum_{\ell_1=1}^{d}\sum_{\ell_2=1}^{d}A^{(\ell_1,\ell_2)}(x). \end{align*} \begin{comment} \begin{proposition}\label{pro771} Under (A1)-(A4) and $nh_{n}^2\to0$, \begin{align*} &\sqrt{n}\parens{\frac{1}{nh_{n}}\sum_{i=0}^{n-1}\parens{S_{(i+1)h_{n}}-S_{ih_{n}}}^2 -\frac{1}{nh_{n}}\sum_{0\le 2i\le n-2}\parens{S_{(2i+2)h_{n}}-S_{2ih_{n}}}^2}\\ &\qquad\cl N\parens{0,2\nu_0\parens{c_S^2(\cdot)}}. 
\end{align*} \end{proposition} \begin{proof} We have \begin{align*} &\sqrt{n}\parens{\frac{1}{nh_{n}}\sum_{i=0}^{n-1}\parens{S_{(i+1)h_{n}}-S_{ih_{n}}}^2 -\frac{1}{nh_{n}}\sum_{0\le 2i\le n-2}\parens{S_{(2i+2)h_{n}}-S_{2ih_{n}}}^2}\\ &=\sum_{0\le 2i\le n-2}\mathfrak{A}_{2i}^n+o_P(1), \end{align*} where \begin{align*} \mathfrak{A}_{2i}^n=\frac{\parens{-2}}{\sqrt{n}h_{n}}\parens{S_{(2i+2)h_{n}}-S_{(2i+1)h_{n}}}\parens{S_{(2i+1)h_{n}}-S_{2ih_{n}}}. \end{align*} It is quite routine to show \begin{align*} \sum_{0\le 2i\le n-2}\E{\abs{\CE{\mathfrak{A}_{2i}^n}{\mathcal{G}_{2ih_{n}}^n}}} &\to 0,\\ \sum_{0\le 2i\le n-2}\CE{\parens{\mathfrak{A}_{2i}^n}^2}{\mathcal{G}_{2ih_{n}}^n}&\cp 2\nu_0\parens{c_S^2(\cdot)},\\ \E{\abs{\sum_{0\le 2i\le n-2}\CE{\parens{\mathfrak{A}_{2i}^n}^4}{\mathcal{G}_{2ih_{n}}^n}}}&\to 0. \end{align*} Hence martingale CLT verifies the result. \end{proof} \end{comment} { \begin{proposition}\label{pro771} Assume $\Lambda_{\star}=\left(h_{n}/\sqrt{n}\right)\mathfrak{M}$ for some matrix $\mathfrak{M}\ge 0$. Under (A1)-(A4) and $nh_{n}^2\to0$, \begin{align*} &\sqrt{n}\parens{\frac{1}{nh_{n}}\sum_{i=0}^{n-1}\parens{\mathscr{S}_{(i+1)h_{n}}-\mathscr{S}_{ih_{n}}}^2 -\frac{1}{nh_{n}}\sum_{0\le 2i\le n-2}\parens{\mathscr{S}_{(2i+2)h_{n}}-\mathscr{S}_{2ih_{n}}}^2}\\ &\qquad\cl N\parens{\delta,2\nu_0\parens{c_S^2(\cdot)}} \end{align*} where $\delta:=\sum_{\ell_{1}}\sum_{\ell_{2}}\mathfrak{M}^{(\ell_{1},\ell_{2})}$.
\end{proposition} } \begin{proof} We have \begin{align*} &\sqrt{n}\parens{\frac{1}{nh_{n}}\sum_{i=0}^{n-1}\parens{\mathscr{S}_{(i+1)h_{n}}-\mathscr{S}_{ih_{n}}}^2 -\frac{1}{nh_{n}}\sum_{0\le 2i\le n-2}\parens{\mathscr{S}_{(2i+2)h_{n}}-\mathscr{S}_{2ih_{n}}}^2}\\ &=\sum_{0\le 2i\le n-2}\left(\mathfrak{A}_{2i}^n-\frac{2\delta}{n}\right)+\delta+o_P(1), \end{align*} where \begin{align*} \mathfrak{A}_{2i}^n=\frac{\parens{-2}}{\sqrt{n}h_{n}}\parens{\mathscr{S}_{(2i+2)h_{n}}-\mathscr{S}_{(2i+1)h_{n}}}\parens{\mathscr{S}_{(2i+1)h_{n}}-\mathscr{S}_{2ih_{n}}}. \end{align*} Since $\left\{X_{t}\right\}_{t}$ is independent of the i.i.d. sequence $\left\{\epsilon_{ih_{n}}\right\}_{i}$, we have \begin{align*} &\CE{\mathfrak{A}_{2i}^n}{\mathcal{H}_{2ih_{n}}^n}\\ &=\CE{\frac{\parens{-2}}{\sqrt{n}h_{n}}\parens{S_{(2i+2)h_{n}}-S_{(2i+1)h_{n}}}\parens{S_{(2i+1)h_{n}}-S_{2ih_{n}}}}{\mathcal{H}_{2ih_{n}}^n}\\ &\quad+\E{\frac{\parens{-2}}{\sqrt{n}h_{n}}\sum_{\ell_{1}}\left(\Lambda_{\star}^{1/2}\left(\epsilon_{\left(2i+2\right)h_{n}}-\epsilon_{\left(2i+1\right)h_{n}}\right)\right)^{\left(\ell_{1}\right)} \sum_{\ell_{2}}\left(\Lambda_{\star}^{1/2}\left(\epsilon_{\left(2i+1\right)h_{n}}-\epsilon_{2ih_{n}}\right)\right)^{\left(\ell_{2}\right)}}. \end{align*} The first term admits the evaluation \begin{align*} \left|\CE{\frac{\parens{-2}}{\sqrt{n}h_{n}}\parens{S_{(2i+2)h_{n}}-S_{(2i+1)h_{n}}}\parens{S_{(2i+1)h_{n}}-S_{2ih_{n}}}}{\mathcal{H}_{2ih_{n}}^n}\right|\le \frac{Ch_{n}}{\sqrt{n}}\left(1+\left\|X_{2ih_{n}}\right\|\right)^{C}, \end{align*} and the second one satisfies \begin{align*} &\E{\frac{\parens{-2}}{\sqrt{n}h_{n}}\sum_{\ell_{1}}\left(\Lambda_{\star}^{1/2}\left(\epsilon_{\left(2i+2\right)h_{n}}-\epsilon_{\left(2i+1\right)h_{n}}\right)\right)^{\left(\ell_{1}\right)} \sum_{\ell_{2}}\left(\Lambda_{\star}^{1/2}\left(\epsilon_{\left(2i+1\right)h_{n}}-\epsilon_{2ih_{n}}\right)\right)^{\left(\ell_{2}\right)}}\\
&=\frac{2}{\sqrt{n}h_{n}}\sum_{\ell_{1}}\sum_{\ell_{2}}\Lambda_{\star}^{\left(\ell_{1},\ell_{2}\right)} =\frac{2\delta}{n}. \end{align*} These verify \begin{align*} \sum_{0\le 2i\le n-2}\E{\abs{\CE{\mathfrak{A}_{2i}^n-\frac{2\delta}{n}}{\mathcal{H}_{2ih_{n}}^n}}} &\to 0. \end{align*} We also have the evaluation \begin{align*} \sum_{0\le 2i\le n-2}\CE{\left(\mathfrak{A}_{2i}^n-\frac{2\delta}{n}\right)^{2}}{\mathcal{H}_{2ih_{n}}^n} &=\sum_{0\le 2i\le n-2}\CE{\left(\mathfrak{A}_{2i}^n\right)^{2}}{\mathcal{H}_{2ih_{n}}^n}+o_{P}\left(1\right), \end{align*} and the order of $\Lambda_{\star}=\frac{h_{n}}{\sqrt{n}}\mathfrak{M}$ leads to \begin{align*} &\sum_{0\le 2i\le n-2}\CE{\left(\mathfrak{A}_{2i}^n\right)^{2}}{\mathcal{H}_{2ih_{n}}^n}\\ &=\sum_{0\le 2i\le n-2}\CE{\frac{4}{nh_{n}^{2}}\parens{S_{(2i+2)h_{n}}-S_{(2i+1)h_{n}}}^{2}\parens{S_{(2i+1)h_{n}}-S_{2ih_{n}}}^{2}}{\mathcal{H}_{2ih_{n}}^n} +o_{P}\left(1\right)\\ &\cp 2\nu_0\left(c_{S}^{2}\left(\cdot\right)\right). \end{align*} An identical argument verifies the Lyapunov-type condition \begin{align*} \mathbf{E}\left[\left|\sum_{0\le 2i\le n-2}\CE{\left(\mathfrak{A}_{2i}^n\right)^{4}}{\mathcal{H}_{2ih_{n}}^n}\right|\right]\to 0. \end{align*} Hence the martingale CLT verifies the result. \end{proof} \begin{lemma}\label{lem772} Under (A1)-(A4) and (AH), \begin{align*} &\frac{1}{k_{n}\Delta_{n}^2}\sum_{j=1}^{k_{n}-2}\parens{\lm{\mathscr{S}}{j+1}-\lm{\mathscr{S}}{j}}^4\\ &\cp \begin{cases} \frac{4}{3}\nu_0\parens{c_S^2(\cdot)} &\text{ if }\tau\in(1,2)\\ \frac{4}{3}\nu_0\parens{c_S^2(\cdot)}+\frac{2}{3}\nu_0\parens{c_S(\cdot)}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{(i_1,i_2)}}\\ \qquad+\parens{2d^2+10d}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{(i_1,i_2)}}^2&\text{ if }\tau=2. \end{cases} \end{align*} \end{lemma} \begin{proof} The proof proceeds in the same way as that of Theorem \ref*{thm743}.
Note the evaluation \begin{align*} &\frac{1}{k_{n}\Delta_{n}^2}\sum_{j=1}^{k_{n}-2}\parens{\lm{\mathscr{S}}{j+1}-\lm{\mathscr{S}}{j}}^4\\ &=\frac{1}{k_{n}\Delta_{n}^2}\sum_{j=1}^{k_{n}-2}\parens{\sum_{i=1}^{d}\parens{a^{(i,\cdot)}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}+\parens{\Lambda_{\star}^{1/2}}^{(i,\cdot)}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}}^4+o_{P}(1), \end{align*} using Proposition \ref{pro737} and Lemma \ref{lem739}. We have \begin{align*} \mathfrak{C}_{j,n}&:=\CE{\parens{\sum_{i=1}^{d}\parens{a^{(i,\cdot)}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}+\parens{\Lambda_{\star}^{1/2}}^{(i,\cdot)}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}}^4}{\mathcal{H}_{j}^{n}}\\ &=3\parens{m_{n}+m_{n}'}^2\Delta_{n}^2c_S^2(X_{j\Delta_{n}}) +\parens{m_{n}+m_{n}'}\Delta_{n}c_S(X_{j\Delta_{n}})\frac{1}{p_{n}}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d} \Lambda_{\star}^{(i_1,i_2)}}\\ &\qquad+\parens{\frac{2d^2+10d}{p_{n}^2}+\frac{2}{p_{n}^3}\parens{\sum_{i=1}^{d}\E{\parens{\epsilon_{0}^{(i)}}^4}-3d}}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d} \Lambda_{\star}^{(i_1,i_2)}}^2 \end{align*} and hence \begin{align*} \frac{1}{k_{n}\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\mathfrak{C}_{3j,n} &\cp \begin{cases} \frac{4}{9}\nu_0\parens{c_S^2(\cdot)} &\text{ if }\tau\in(1,2)\\ \frac{4}{9}\nu_0\parens{c_S^2(\cdot)}+\frac{2}{9}\nu_0\parens{c_S(\cdot)}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{(i_1,i_2)}}\\ \qquad+\frac{2d^2+10d}{3}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{(i_1,i_2)}}^2&\text{ if }\tau=2. 
\end{cases} \end{align*} \begin{comment} \begin{align*} &\parens{\sum_{i=1}^{d}\parens{a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}+\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}}^4\\ &=\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^4\\ &\qquad+4\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^3\parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}\\ &\qquad+6\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^2\parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^2\\ &\qquad+4\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}\parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^3\\ &\qquad+\parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^4, \end{align*} and \begin{align*} \CE{\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^4}{\mathcal{H}_j^n} &=3\parens{m_{n}+m_{n}'}^2\Delta_{n}^2c_S^2(X_{j\Delta_{n}}). \end{align*} It leads to \begin{align*} \frac{1}{k_{n}\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{3j\Delta_{n}})\parens{\zeta_{3j+1,n}+\zeta_{3j+2,n}'}}^4}{\mathcal{H}_{3j}^n}\cp \frac{4}{9}\nu_0\parens{c_S^2(\cdot)}. 
\end{align*} It is obvious that \begin{align*} \CE{\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^3\parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}}{\mathcal{H}_j^n}=0 \end{align*} and \begin{align*} \CE{\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}\parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^3} {\mathcal{H}_j^n}=0. \end{align*} We can evaluate \begin{align*} &\CE{\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{j\Delta_{n}})\parens{\zeta_{j+1,n}+\zeta_{j+2,n}'}}^2 \parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^2}{\mathcal{H}_j^n}\\ &=\parens{m_{n}+m_{n}'}\Delta_{n}c_S(X_{j\Delta_{n}})\frac{1}{p_{n}}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{i_1,i_2}} \end{align*} and hence \begin{align*} &\frac{1}{k_{n}\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\parens{\sum_{i=1}^{d}a^{i,\cdot}(X_{3j\Delta_{n}})\parens{\zeta_{3j+1,n}+\zeta_{3j+2,n}'}}^2 \parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{3j+1}-\lm{\epsilon}{3j}}}^2}{\mathcal{H}_{3j}^n}\\ &\cp \begin{cases} 0 & \text{ if }\tau \in(1,2)\\ \frac{2}{9}\nu_0\parens{c_S(\cdot)}\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{i_1,i_2} & \text{ if }\tau=2. 
\end{cases} \end{align*} We also have \begin{align*} \tr\tuborg{\CE{\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}^{\otimes2} \parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}^{\otimes 2}}{\mathcal{H}_j^n}} =\frac{2d^2+10d}{p_{n}^2}+\frac{2}{p_{n}^3}\parens{\sum_{i=1}^{d}\E{\parens{\epsilon_{0}^i}^4}-3d} \end{align*} and \begin{align*} &\CE{\parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}}^4}{\mathcal{H}_j^n}\\ &=\frac{2d^2+10d}{p_{n}^2}+\frac{2}{p_{n}^3}\parens{\sum_{i=1}^{d}\E{\parens{\epsilon_{0}^i}^4}-3d}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{i_1,i_2}}^2. \end{align*} It leads to \begin{align*} &\frac{1}{k_{n}\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2}\CE{\parens{\sum_{i=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{i,\cdot}\parens{\lm{\epsilon}{3j+1}-\lm{\epsilon}{3j}}}^4}{\mathcal{H}_{3j}^n}\\ &\cp\begin{cases} 0 &\text{ if }\tau\in(1,2)\\ \parens{2d^2+10d}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{i_1,i_2}}^2&\text{ if }\tau=2. \end{cases} \end{align*} Since the following evaluation \begin{align*} \CE{\norm{e_{j,n}}^{l}}{\mathcal{H}_j^n}&\le C\Delta_{n}^l\parens{1+\norm{X_{j\Delta_{n}}}^C}\\ \CE{\norm{\zeta_{j+1,n}+\zeta_{j+2,n}}^{l}}{\mathcal{H}_j^n}&\le C\Delta_{n}^{l/2}\\ \CE{\norm{\lm{\epsilon}{j+1}-\lm{\epsilon}{j}}^{l}}{\mathcal{H}_j^n}&\le C\Delta_{n}^{l/2} \end{align*} for all $l\in\mathbf{N}$, we have \begin{align*} &\frac{1}{k_{n}\Delta_{n}^2}\sum_{1\le 3j\le k_{n}-2} \CE{\parens{\lm{\mathscr{S}}{3j+1}-\lm{\mathscr{S}}{3j}}^4}{\mathcal{H}_{3j}^n}\\ &\cp\begin{cases} \frac{4}{9}\nu_0\parens{c_S^2(\cdot)} &\text{ if }\tau\in(1,2)\\ \frac{4}{9}\nu_0\parens{c_S^2(\cdot)}+\frac{2}{9}\nu_0\parens{c_S(\cdot)}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{i_1,i_2}}\\ \qquad+\frac{2d^2+10d}{3}\parens{\sum_{i_1=1}^{d}\sum_{i_2=1}^{d}\Lambda_{\star}^{i_1,i_2}}^2&\text{ if }\tau=2. 
\end{cases} \end{align*} \end{comment} Finally, we obtain \begin{align*} \E{\abs{\frac{1}{k_{n}^2\Delta_{n}^4}\sum_{1\le 3j\le k_{n}-2}\CE{\parens{\lm{\mathscr{S}}{3j+1}-\lm{\mathscr{S}}{3j}}^8}{\mathcal{H}_{3j}^n}}} \le \frac{C}{k_{n}} \to 0, \end{align*} and the proof follows from Lemma 9 in \citep{GeJ93}. \end{proof} \begin{proof}[Proof of Theorem \ref*{thm321}] Under $H_0$, the result of Lemma \ref*{lem772} is equivalent to \begin{align*} \frac{3}{4k_{n}\Delta_{n}^2}\sum_{j=1}^{k_{n}-2}\parens{\lm{\mathscr{S}}{j+1}-\lm{\mathscr{S}}{j}}^4 \cp \nu_0\parens{c_S^2(\cdot)}\quad\text{for all }\tau\in(1,2]. \end{align*} Therefore, Proposition \ref*{pro771}, Lemma \ref*{lem772} and Slutsky's theorem complete the proof. \end{proof} \begin{proof}[Proof of Theorem \ref*{thm322}] Assumption (T1) ensures $\sum_{l_1}\sum_{l_2}\Lambda_{\star}^{(l_1,l_2)}>0$. We first show \begin{align*} \frac{1}{2n}\sum_{i=0}^{n-1}\parens{\mathscr{S}_{(i+1)h_{n}}- \mathscr{S}_{ih_{n}}}^2 \cp\sum_{l_1}\sum_{l_2}\Lambda_{\star}^{(l_1,l_2)} \end{align*} under $H_1$ and (T1). We can decompose \begin{align*} &\frac{1}{2n}\sum_{i=0}^{n-1}\parens{\mathscr{S}_{(i+1)h_{n}}-\mathscr{S}_{ih_{n}}}^2 -\sum_{l_1}\sum_{l_2}\Lambda_{\star}^{(l_1,l_2)}\\ &=\frac{1}{2n}\sum_{i=0}^{n-1}\parens{\sum_{l=1}^{d}\parens{X_{(i+1)h_{n}}^{(l)}-X_{ih_{n}}^{(l)}}}^2\\ &\qquad+\frac{1}{2n}\sum_{i=0}^{n-1}\parens{\sum_{l=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l,\cdot)}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}}^2-\sum_{l_1}\sum_{l_2}\Lambda_{\star}^{(l_1,l_2)}\\ &\qquad+\frac{1}{n}\sum_{i=0}^{n-1}\parens{\sum_{l=1}^{d}\parens{X_{(i+1)h_{n}}^{(l)}-X_{ih_{n}}^{(l)}}} \parens{\sum_{l=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l,\cdot)}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}}. \end{align*} The first term and the fourth one on the right-hand side are $o_P(1)$, which can be shown by an $L^1$-evaluation.
\begin{comment} since \begin{align*} \E{\abs{\frac{1}{2n}\sum_{i=0}^{n-1}\parens{\sum_{l=1}^{d}\parens{X_{(i+1)h_{n}}^{(l)}-X_{ih_{n}}^{(l)}}}^2}}\to 0. \end{align*} The fourth term is also $o_P(1)$ since \begin{align*} \E{\abs{\frac{1}{n}\sum_{i=0}^{n-1}\parens{\sum_{l=1}^{d}\parens{X_{(i+1)h_{n}}^{(l)}-X_{ih_{n}}^{(l)}}} \parens{\sum_{l=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l,\cdot)}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}}}}\to 0. \end{align*} \end{comment} We can also evaluate the second and third terms on the right-hand side as \begin{align*} &\frac{1}{2n}\sum_{i=0}^{n-1}\parens{\sum_{l=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l,\cdot)}\parens{\epsilon_{(i+1)h_{n}}-\epsilon_{ih_{n}}}}^2-\sum_{l_1}\sum_{l_2}\Lambda_{\star}^{(l_1,l_2)}\\ &=-\frac{1}{n}\sum_{i=0}^{n-1}\sum_{l_1=1}^{d}\sum_{l_2=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,\cdot)}\parens{\epsilon_{(i+1)h_{n}}} \parens{\epsilon_{ih_{n}}}^T\parens{\Lambda_{\star}^{1/2}}^{(\cdot,l_2)} +o_P(1) \end{align*} by the law of large numbers. The first term can be evaluated as \begin{align*} \E{\abs{\frac{1}{n}\sum_{i=0}^{n-1}\sum_{l_1=1}^{d}\sum_{l_2=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,\cdot)}\parens{\epsilon_{(i+1)h_{n}}} \parens{\epsilon_{ih_{n}}}^T\parens{\Lambda_{\star}^{1/2}}^{(\cdot,l_2)}}^2}\to 0. \end{align*} By an identical computation, we obtain \begin{align*} \frac{1}{n}\sum_{0\le 2i\le n-2}\parens{\mathscr{S}_{(2i+2)h_{n}}-\mathscr{S}_{2ih_{n}}}^2&\cp \sum_{l_1,l_2}\Lambda_{\star}^{(l_1,l_2)}. \end{align*} Moreover, there exists a constant $C_1$ such that \begin{align*} \frac{3}{4k_{n}\Delta_{n}^2}\sum_{j=1}^{k_{n}-2}\parens{\lm{\mathscr{S}}{j+1}-\lm{\mathscr{S}}{j}}^4 &\cp C_1. \end{align*} These convergences in probability, together with some computations, verify the result. \end{proof} \begin{proof}[Proof of Theorem \ref{thm323}] Proposition \ref{pro771} and a discussion similar to that in Lemma \ref{lem772} verify the result.
\end{proof} \section{Introduction} We consider a $d$-dimensional ergodic diffusion process defined by the following stochastic differential equation \begin{align} \dop X_t = b(X_t, \beta)\dop t + a(X_t, \alpha)\dop w_t,\ X_0 = x_0, \end{align} where $\left\{w_t\right\}_{t\ge 0}$ is an $r$-dimensional standard Wiener process, $x_0$ is a $\Re^d$-valued random variable independent of $\left\{w_t\right\}_{t\ge0}$, $\alpha\in\Theta_1\subset \Re ^{m_{1}}$ and $\beta\in\Theta_2\subset \Re ^{m_{2}}$, with $\Theta_1$ and $\Theta_2$ compact and convex. Moreover, $b:\Re^d\times\Theta_2\to \Re ^d$ and $a:\Re^d\times \Theta_1\to \Re^d\otimes\Re^r$ are known functions. We denote $\theta:=(\alpha,\beta)\in\Theta_1\times \Theta_2 =:\Theta$ and write $\theta^{\star}=(\alpha^{\star},\beta^{\star})$ for the true value of $\theta$, which belongs to $\mathrm{Int}(\Theta)$. We deal with the problem of parametric inference for $\theta$ from the observations $\left\{Y_{ih_{n}}\right\}_{i=0,\ldots,n}$ defined by the following model \begin{align} Y_{ih_{n}}=X_{ih_{n}}+\Lambda^{1/2} \epsilon_{ih_{n}},\ i=0,\ldots,n, \end{align} where $h_{n}>0$ is the discretisation step, $\Lambda\in \Re^d\otimes\Re^{d}$ is a positive semi-definite matrix and $\left\{\epsilon_{ih_{n}}\right\}_{i=0,\ldots,n}$ is an i.i.d. sequence of $\Re^d$-valued random variables such that $\E{\epsilon_{ih_{n}}}=\mathbf{0}$ and $\mathrm{Var}\parens{\epsilon_{ih_{n}}}=I_d$, with each component independent of the other components, of $\tuborg{w_t}$ and of $x_0$. Hence the term $\Lambda^{1/2}\epsilon_{ih_{n}}$ represents the exogenous noise. Let $\Theta_{\epsilon}\subset\Re^{d(d+1)/2}$ be the convex and compact parameter space such that $\theta_{\epsilon}:=\mathrm{vech}(\Lambda)\in\Theta_{\epsilon}$, and let $\Lambda_{\star}$ be the true value of $\Lambda$ such that $\theta_{\epsilon}^{\star}:=\mathrm{vech}(\Lambda_{\star})\in\mathrm{Int}(\Theta_{\epsilon})$, where $\vech$ is the half-vectorisation operator.
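As a purely illustrative sketch (not part of the formal development; the drift $b(x)=-(x-1)$, diffusion coefficient $a(x)=1$ and noise level below are arbitrary choices for a 1-dimensional example), the observation scheme $Y_{ih_{n}}=X_{ih_{n}}+\Lambda^{1/2}\epsilon_{ih_{n}}$ can be simulated with an Euler--Maruyama discretisation, and the quadratic-variation-type estimator $\hat{\Lambda}_{n}=\frac{1}{2n}\sum_{i=0}^{n-1}(Y_{(i+1)h_{n}}-Y_{ih_{n}})^{\otimes2}$ used below then recovers the noise variance up to an $O(h_{n})$ bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-d instance of the model
#   dX_t = b(X_t) dt + a(X_t) dw_t,   Y_{ih} = X_{ih} + Lambda^{1/2} eps_{ih},
# with the arbitrary choices b(x) = -(x - 1), a(x) = 1 (Ornstein-Uhlenbeck)
# and Lambda_star = 0.09.
n = 10_000
h = n ** (-0.7)            # discretisation step: h_n -> 0 while n h_n -> infinity
lam_star = 0.09            # true noise variance Lambda_star (assumed value)

x = np.empty(n + 1)
x[0] = 1.0
dw = rng.normal(scale=np.sqrt(h), size=n)
for i in range(n):         # Euler-Maruyama scheme for the latent diffusion
    x[i + 1] = x[i] - (x[i] - 1.0) * h + dw[i]

eps = rng.normal(size=n + 1)           # i.i.d. noise, mean 0, variance 1
y = x + np.sqrt(lam_star) * eps        # observed sequence Y_{ih_n}

# The increments of Y are dominated by the noise: E[(Y_{(i+1)h}-Y_{ih})^2]
# is approximately 2*Lambda_star + a^2 h, so the estimator below recovers
# Lambda_star up to an O(h) bias.
lam_hat = np.sum(np.diff(y) ** 2) / (2 * n)
```

The same computation in matrix form ($\frac{1}{2n}\sum(\Delta Y)^{\otimes2}$) applies for $d>1$; here the scalar version suffices to see that the observed increments carry the noise variance at the leading order.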
We denote $\vartheta := (\theta, \theta_{\epsilon})$ and $\Xi:=\Theta\times\Theta_{\epsilon}$. With respect to the sampling scheme, we assume that $h_{n}\to0$ and $nh_{n}\to\infty$ as $n\to\infty$. Our main concern in this setting is the adaptive maximum likelihood (ML)-type estimation scheme of the form $ \hat{\Lambda}_{n}=\frac{1}{2n}\sum_{i=0}^{n-1}\parens{Y_{(i+1)h_{n}}-Y_{ih_{n}}}^{\otimes2}$, \begin{align} \mathbb{H}_{1,n}^{\tau}(\hat{\alpha}_{n}|\hat{\Lambda}_{n})&=\sup_{\alpha\in\Theta_1}\mathbb{H}_{1,n}^{\tau}(\alpha|\hat{\Lambda}_{n}),\\ \mathbb{H}_{2,n}(\hat{\beta}_{n}|\hat{\alpha}_{n})&=\sup_{\beta\in\Theta_2}\mathbb{H}_{2,n}(\beta|\hat{\alpha}_{n}), \end{align} where $A^{\otimes 2} = AA^T$ for any matrix $A$, $A^T$ denotes the transpose of $A$, and $\mathbb{H}_{1,n}^{\tau}$ and $\mathbb{H}_{2,n}$ are quasi-likelihood functions defined in Section 3. The structure of the model above is quite analogous to that of discrete-time state space models (e.g., see \citep{PPC09}), in that the endogenous perturbation of the system of interest and the exogenous noise attributed to observation are expressed separately. As the assumption $h_{n}\to0$ indicates, the model we consider is intended for the situation of high-frequency observation, and this setting enhances the flexibility of modelling, since it admits models with non-linearity and with state-dependent innovations. In addition, adaptive estimation, which also becomes possible in the high-frequency setting, eases the computational burden in comparison to simultaneous estimation. Fortunately, the number of situations in which these requirements are satisfied has been growing gradually, and will continue to grow because of the increasing amount of real-time data and the progress of observation technology. The idea of modelling with diffusion processes subject to observational noise is not new.
For instance, in the context of high-frequency financial data analysis, researchers have addressed the existence of ``microstructure noise'', whose variance is large relative to the time increment, questioning the premise that what we observe are purely diffusions. Modelling with ``diffusion + noise'' has been actively researched over the past decade: some studies have examined the asymptotics of this model in the framework of a fixed time interval with $nh_{n}=1$ (e.g., \citep{GlJ01a}, \citep{GlJ01b}, \citep{JLMPV09}, \citep{PV09} and \citep{O17}); and \citep{Fa14} and \citep{Fa16} study the parametric inference for this model under ergodicity and the asymptotic framework ${nh_{n}}\to \infty$. For parametric estimation of discretely observed diffusion processes without measurement errors, see \citep{Fl89}, \citep{Y92}, \citep{Y11}, \citep{BS95}, \citep{K97} and references therein. Our research focuses on statistical inference for an ergodic diffusion plus noise. We give an adaptive estimation methodology, which relaxes the computational burden in comparison to the simultaneous estimation of \citep{Fa14} and \citep{Fa16}, and which has been studied for ergodic diffusions (see \citep{Y92}, \citep{Y11}, \citep{K95}, \citep{UY12}, \citep{UY14}). In previous studies, the simultaneous asymptotic normality of $\hat{\Lambda}_{n}$, $\hat{\alpha}_{n}$ and $\hat{\beta}_{n}$ has not been shown; our method allows us to obtain their asymptotic normality and asymptotic independence with different convergence rates.
Our methods also broaden the applicability of modelling with stochastic differential equations, since they are more robust to the existence of noise than the existing results for discretely observed ergodic diffusion processes that do not account for observation noise. As a real data analysis, we analyse the 2-dimensional wind data \citep{NWTC} and model the dynamics with a 2-dimensional Ornstein-Uhlenbeck process. We compare the fit of our diffusion-plus-noise modelling with that of diffusion modelling estimated by the local Gaussian approximation method (LGA method), which has been investigated for decades (for instance, see \citep{Y92}, \citep{K95} and \citep{K97}). \begin{comment} The results of fitting are as follows: diffusion-plus-noise fitting gives \begin{align} \dop \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}} &=\parens{\crotchet{\begin{matrix} -3.77 & -0.32\\ -0.40 & -5.01 \end{matrix}} \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}} + \crotchet{\begin{matrix} 3.60\\ -2.54 \end{matrix}}}\dop t+\crotchet{\begin{matrix} 13.41 & -0.29\\ -0.29 & 12.62 \end{matrix}}\dop w_t, \end{align} and the estimation of the noise variance \begin{align} \hat{\Lambda}_{n}=\crotchet{\begin{matrix} 6.67\times 10^{-3} & 3.75\times 10^{-5}\\ 3.75\times 10^{-5} & 6.79\times 10^{-3}\\ \end{matrix}}; \end{align} and the diffusion fitting with LGA method which is asymptotic efficient if $\Lambda=O$ gives \begin{align} \dop \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}} &=\parens{\crotchet{\begin{matrix} -67.53 & -9.29\\ -10.37 & -104.45 \end{matrix}} \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}} + \crotchet{\begin{matrix} 63.27\\ -50.24 \end{matrix}}}\dop t+\crotchet{\begin{matrix} 43.82 & 0.13\\ 0.13 & 44.22 \end{matrix}}\dop w_t.
\end{align} \end{comment} The results (see Section 5) show a considerable difference between these estimates; however, we cannot evaluate which fit is the more trustworthy from these results alone. This stems from the fact that we cannot distinguish a diffusion from a diffusion-plus-noise: if $\Lambda_{\star}=O$, then the observation is not contaminated by noise and the LGA estimate should be adopted for its asymptotic efficiency; but if $\Lambda_{\star}\neq O$, what we observe is no longer a diffusion process and the LGA method loses its theoretical validity. Therefore, it is necessary to construct a statistical hypothesis test of $H_0: \Lambda = O$ against $H_1: \Lambda \neq O$. In addition to the estimation methodology, we also study this hypothesis testing problem and propose a test with the consistency property. In Section 2, we gather the assumptions and notation used throughout the paper. Section 3 gives the main results of this paper. Section 4 examines the results of Section 3 by simulation. In Section 5 we analyse the real wind-velocity data named MetData with our estimators and with LGA, as discussed above, and test whether noise exists. \section{Local means, notations and assumptions} \subsection{Local means} We partition the observations into $k_{n}$ blocks, each containing $p_{n}$ observations, and examine the properties of the following local means: \begin{align} \lm{Z}{j}{}=\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}Z_{j\Delta_{n}+ih_{n}},\ j=0,\ldots,k_{n}-1, \end{align} where $\tuborg{Z_{ih_{n}}}_{i=1,\ldots,n}$ is an arbitrary sequence of random variables on the mesh $\{ih_{n}\}_i$, such as $\tuborg{Y_{ih_{n}}}_{i=1,\ldots,n}$, $\tuborg{X_{ih_{n}}}_{i=1,\ldots,n}$ and $\tuborg{\epsilon_{ih_{n}}}_{i=1,\ldots,n}$, and $\Delta_{n}=p_{n}h_{n}$. Note that $k_{n} p_{n} =n$ and $k_{n} \Delta_{n} =n h_{n}$.
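The local-mean construction above amounts to block-wise averaging; as a minimal sketch (hypothetical array names, not the authors' code), with $n=k_{n}p_{n}$ observations it can be computed by a reshape, and averaging i.i.d. noise within a block shrinks its variance by a factor of $p_{n}$, which is the reason the local means suppress the measurement error:

```python
import numpy as np

def local_means(z: np.ndarray, p_n: int) -> np.ndarray:
    """Block means: bar{Z}_j = (1/p_n) * sum_{i=0}^{p_n-1} z[j*p_n + i].

    Assumes len(z) is a multiple of p_n, so that k_n = len(z) // p_n.
    """
    k_n = len(z) // p_n
    return z[: k_n * p_n].reshape(k_n, p_n).mean(axis=1)

# Averaging i.i.d. noise over a block of size p_n reduces its variance
# from 1 to roughly 1/p_n -- the mechanism behind the local means.
rng = np.random.default_rng(1)
noise = rng.normal(size=10_000)        # stands in for the eps_{ih_n} sequence
bars = local_means(noise, p_n=25)      # k_n = 400 block means
```

With $p_{n}=25$ the empirical variance of `bars` is close to $1/25$ of that of `noise`, matching the law-of-large-numbers heuristic described next.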
In the same way as \citep{Fa14} and \citep{Fa16}, our estimation method is based on these local means with respect to the observations $\tuborg{Y_{ih_{n}}}_{i=1,\ldots,n}$. The idea is straightforward: taking means of the data $\tuborg{Y_{ih_{n}}}$ in each partition should reduce the influence of the noise term $\tuborg{\epsilon_{ih_{n}}}$ because of the law of large numbers, and then we obtain information on the latent process $\tuborg{X_{ih_{n}}}$. We show how local means work to extract the information of the latent process. \begin{comment} The plot on next page (Figure \ref{figExample}) draws 3 processes: the first one (blue line) is the simulation of a 1-dimensional Ornstein-Uhlenbeck process such that \begin{align} \dop X_t = -\parens{X_t-1}\dop t + \dop w_t, X_0=1 \end{align} where $r=1$, $n=10^4$ and $h_{n}=n^{-0.7}$; secondly, we contaminate the observation with normally-distributed noise $\tuborg{\epsilon_{ih_{n}}}$ and $\Lambda_{\star}= 0.1$ and plot it with grey line; finally we make the sequence of local means $\tuborg{\lm{Y}{j}}_{j=0,\ldots,k_{n}-1}$ where $p_{n}=25$ and $k_{n}=400$ plot it with red line. With these plots, it seems that the local means recover rough states of the latent processes, and actually it is possible to compose the quantity which converges to each state $\tuborg{X_{j\Delta_{n}}}_{j=0,\ldots,k_{n}-1}$ on the mesh $\tuborg{j\Delta_{n}}_{j=0,\ldots,k_{n}-1}$ for Proposition \ref{cor736} with the assumptions below. \end{comment} The first plot on the next page (Figure \ref{figPLP}) is the simulation of a 1-dimensional Ornstein-Uhlenbeck process such that \begin{align} \dop X_t = -\parens{X_t-1}\dop t + \dop w_t,\ X_0=1, \end{align} where $r=1$, $n=10^4$ and $h_{n}=n^{-0.7}$. Secondly, we contaminate the observation with normally-distributed noise $\tuborg{\epsilon_{ih_{n}}}$ and $\Lambda_{\star}= 0.1$ and plot the observation on the next page (Figure \ref{figPOB}).
Finally, we construct the sequence of local means $\tuborg{\lm{Y}{j}}$ with $p_{n}=25$ and $k_{n}=400$ and plot it at the bottom of the next page (Figure \ref{figPLM}). These plots suggest that the local means recover the rough states of the latent process; indeed, by Proposition \ref{cor736}, under the assumptions below it is possible to construct a quantity which converges to each state $\tuborg{X_{j\Delta_{n}}}$ on the mesh $\tuborg{j\Delta_{n}}$. \begin{figure}[p] \centering \includegraphics[bb= 0 0 720 480,width=10cm]{plot_PLP2_960_640.png} \caption{Plot of the latent process}\label{figPLP} \includegraphics[bb=0 0 720 480,width=10cm]{plot_POB2_960_640.png} \caption{Plot of the contaminated observation}\label{figPOB} \includegraphics[bb=0 0 720 480,width=10cm]{plot_PLM2_960_640.png} \caption{Plot of the local means}\label{figPLM} \end{figure} \subsection{Notations and assumptions} We set the following notations. \begin{enumerate} \item For a matrix $A$, $A^T$ denotes the transpose of $A$ and $A^{\otimes 2}:=AA^T$. For same size matrices $A$ and $B$, $\ip{A}{B}:=\tr\parens{AB^T}$. \item For any vector $v$, $v^{(i)}$ denotes the $i$-th component of $v$. Similarly, $M^{(i,j)}$, $M^{(i,\cdot)}$ and $M^{(\cdot,j)}$ denote the $(i,j)$-th component, the $i$-th row vector and the $j$-th column vector of a matrix $M$ respectively. \item For any vector $v$, $\norm{v}:=\sqrt{\sum_i\parens{v^{(i)}}^2}$, and for any matrix $M$, $\norm{M}:=\sqrt{\sum_{i,j}\parens{M^{(i,j)}}^2}$. \item $A(x,\alpha):=\parens{a(x,\alpha)}^{\otimes 2}.$ \item $C$ is a positive generic constant independent of all other variables. If it depends on other fixed variables, e.g. an integer $k$, we write $C(k)$. \item $a(x):=a(x,\alpha^{\star})$ and $b(x):=b(x,\beta^{\star})$. \item Let us define $\vartheta:=\parens{\theta,\theta_{\epsilon}}\in \Xi$.
\item A $\Re$-valued function $f$ on $\Re^d$ is a \textit{polynomial growth function} if for all $x\in \Re^d$, \begin{align*} \abs{f(x)}\le C\parens{1+\norm{x}}^C. \end{align*} $g:\Re^d\times \Theta\to \Re$ is a \textit{polynomial growth function uniformly in $\theta\in\Theta$} if for all $x\in \Re^d$, \begin{align*} \sup_{\theta\in\Theta}\abs{g(x,\theta)}\le C\parens{1+\norm{x}}^C. \end{align*} Similarly we say $h:\Re^d\times \Xi\to \Re$ is a \textit{polynomial growth function uniformly in $\vartheta\in\Xi$} if for all $x\in \Re^d$, \begin{align*} \sup_{\vartheta\in\Xi}\abs{h(x,\vartheta)}\le C\parens{1+\norm{x}}^C. \end{align*} \begin{comment} \item For any $\Re$-valued sequence $u_{n}$, $R:\Theta\times\Re\times \Re^d\to \Re$ denotes a function with a constant $C$ such that \begin{align*} \abs{R(\theta,u_{n},x)}\le Cu_{n}\parens{1+\norm{x}}^C \end{align*} for all $x\in\Re^d$ and $\theta\in\Theta$. \end{comment} \item Let us denote for any $\mu$-integrable function $f$ on $\Re^d$, $ \mu(f(\cdot)) := \int f(x)\mu(\dop x).$ \item We set \begin{align*} \mathbb{Y}_1^{\tau}(\alpha)&:=-\frac{1}{2}\nu_0\parens{\mathrm{tr}\parens{\parens{A^{\tau}(\cdot,\alpha,\Lambda_{\star})}^{-1}A^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})-I_d}+\log\frac{\det A^{\tau}(\cdot,\alpha,\Lambda_{\star})}{\det A^{\tau}(\cdot,\alpha^{\star},\Lambda_{\star})}},\\ \mathbb{Y}_2(\beta)&:=-\frac{1}{2}\nu_0\parens{\ip{\parens{A(\cdot,\alpha^{\star})}^{-1}}{\parens{b(\cdot,\beta)-b(\cdot,\beta^{\star})}^{\otimes2}}}, \end{align*} where $ A^{\tau}(\cdot,\alpha,\Lambda):=A(\cdot,\alpha)+3\Lambda\mathbf{1}_{\left\{2\right\}}\left(\tau\right)$ and $\nu_0$ is the invariant measure of $X$. 
\item Let \begin{align*} &\left\{B_{\kappa}(x)\left|\kappa=1,\ldots,m_1,\ B_{\kappa}=(B_{\kappa}^{(j_1,j_2)})_{j_1,j_2}\right.\right\},\\ &\left\{f_{\lambda}(x)\left|\lambda=1,\ldots,m_2,\ f_{\lambda}=(f^{(1)}_{\lambda},\ldots,f^{(d)}_{\lambda})\right.\right\} \end{align*} be sequences of $\Re^d\otimes \Re^d$-valued functions and $\Re^d$-valued functions respectively, whose components and their derivatives with respect to $x$ are polynomial growth functions for all $\kappa$ and $\lambda$. Then we define the matrix \begin{align*} W_1^{(l_1,l_2),(l_3,l_4)}&:=\sum_{k=1}^{d}\parens{\Lambda_{\star}^{1/2}}^{(l_1,k)}\parens{\Lambda_{\star}^{1/2}}^{(l_2,k)}\parens{\Lambda_{\star}^{1/2}}^{(l_3,k)}\parens{\Lambda_{\star}^{1/2}}^{(l_4,k)} \parens{\E{\abs{\epsilon_0^k}^4}-3}\notag\\ &\qquad+\frac{3}{2}\parens{\Lambda_{\star}^{(l_1,l_3)}\Lambda_{\star}^{(l_2,l_4)}+\Lambda_{\star}^{(l_1,l_4)}\Lambda_{\star}^{(l_2,l_3)}}, \end{align*} and the matrix-valued functionals, for $\bar{B}_{\kappa}:=\frac{1}{2}\parens{B_{\kappa}+B_{\kappa}^T}$, \begin{align*} \parens{W_2^{(\tau)}(\tuborg{B_{\kappa}})}^{(\kappa_1,\kappa_2)}&:=\begin{cases} \nu_0\parens{\tr\tuborg{\parens{\bar{B}_{\kappa_1}A\bar{B}_{\kappa_2}A}(\cdot)}} \\ \qquad\text{ if }\tau\in(1,2),\\ \nu_0\parens{\tr\tuborg{\parens{\bar{B}_{\kappa_1}A\bar{B}_{\kappa_2}A+4\bar{B}_{\kappa_1}A\bar{B}_{\kappa_2}\Lambda_{\star}+12\bar{B}_{\kappa_1}\Lambda_{\star}\bar{B}_{\kappa_2}\Lambda_{\star}}(\cdot)}}\\ \qquad\text{ if }\tau=2, \end{cases}\\ \parens{W_3(\tuborg{f_{\lambda}})}^{(\lambda_1,\lambda_2)}&:=\nu_0\parens{\parens{f_{\lambda_1}A\parens{f_{\lambda_2}}^T}(\cdot)}. \end{align*} \item $\cp$ and $\cl$ indicate convergence in probability and convergence in law respectively.
\item For $f(x)$, $g(x,\theta)$ and $h(x,\vartheta)$, $f'(x):=\frac{\dop }{\dop x}f(x)$, $f''(x):=\frac{\dop^2}{\dop x^2}f(x)$, $\partial_{x}g(x,\theta):=\frac{\partial}{\partial x}g(x,\theta)$, $\partial_{\theta}g(x,\theta):=\frac{\partial}{\partial \theta}g(x,\theta)$, $\partial_{x}h(x,\vartheta):=\frac{\partial}{\partial x}h(x,\vartheta)$ and $\partial_{\vartheta}h(x,\vartheta):=\frac{\partial}{\partial \vartheta}h(x,\vartheta)$. \end{enumerate} We make the following assumptions. \begin{enumerate} \item[(A1)] $b$ and $a$ are four times continuously differentiable, and their components, as well as the derivatives, are polynomial growth functions uniformly in $\theta\in\Theta$. Furthermore, there exists $C>0$ such that for all $x\in\Re^d$, \begin{align*} &\norm{b(x)}+\norm{b'(x)}+\norm{b''(x)}\le C(1+\norm{x}), \\ &\norm{a(x)}+\norm{a'(x)}+\norm{a''(x)}\le C(1+\norm{x}). \end{align*} \item[(A2)] $X$ is ergodic and the invariant measure $\nu_0$ has a $k$-th moment for all $k>0$. \item[(A3)] For all $k>0$, $\sup_{t\ge0}\E{\norm{X_t}^k}<\infty$. \item[(A4)] For any $k>0$, $\epsilon_{ih_{n}}$ has a $k$-th moment, and each component of $\epsilon_{ih_{n}}$ is independent of the other components for all $i$, of $\tuborg{w_t}$ and of $x_0$. In addition, the marginal distribution of each component is symmetric. \item[(A5)] $\inf_{x,\alpha} \det c(x,\alpha)>0$. \item[(A6)] There exist $\chi>0$ and $\tilde{\chi}>0$ such that for all $\tau$, $\alpha$ and $\beta$, $\mathbb{Y}_1^{\tau}(\alpha)\le -\chi\norm{\alpha-\alpha^{\star}}^2$ and $\mathbb{Y}_2(\beta)\le -\tilde{\chi}\norm{\beta-\beta^{\star}}^2$. \item[(A7)] The components of $b$, $a$, $\partial_xb$, $\partial_{\beta}b$, $\partial_xa$, $\partial_{\alpha}a$, $\partial_x^2b$, $\partial_{\beta}^2b$, $\partial_x\partial_{\beta}b$, $\partial_x^2a$, $\partial_{\alpha}^2a$ and $\partial_x\partial_{\alpha}a$ are polynomial growth functions uniformly in $\theta\in\Theta$.
\item[(AH)] $h_{n}=p_{n}^{-\tau},\ \tau\in(1,2]$ and $h_{n}\to0$, $p_{n}\to\infty$, $k_{n}\to\infty$, $\Delta_{n}=p_{n}h_{n}\to0$, $nh_{n}\to\infty$ as $n\to\infty$. \item[(T1)] If the index set $\mathcal{K}:=\tuborg{i\in\tuborg{1,\ldots,d}:\Lambda_{\star}^{(i,i)}>0}$ is not empty, then the submatrix of $\Lambda_{\star}$ such that $\Lambda_{\star,\mathrm{sub}}:=\crotchet{\Lambda_{\star}^{(i_1,i_2)}}_{i_1,i_2\in \mathcal{K}}$ is positive definite. \end{enumerate} { \begin{remark} Some of the assumptions are standard in statistical inference for ergodic diffusions: (A1) and (A7), which indicate local Lipschitz continuity, lead to the existence and uniqueness of the solution of the stochastic differential equation. (A2), (A3) and (A5) appear in the literature such as \citep{K97}, and some sufficient conditions are shown in \citep{UY12}. (A6) is the identifiability condition adopted in \citep{UY12} and \citep{Y11}, and simultaneously supports {non-degeneracy} of the information matrices. (A4) is a stronger assumption with respect to integrability of $\left\{\epsilon_{ih_{n}}\right\}_{i=0,\ldots,n}$ compared to \citep{Fa14} and \citep{Fa16}. We consider multi-dimensional parameters in the variance of the noise, the diffusion coefficient and the drift one, and this necessitates stronger integrability of $\epsilon_{ih_{n}}$ when we prove Theorem \ref{thm742}, which shows that some empirical functionals uniformly converge to 0 in probability. \end{remark} } \section{Main results} \subsection{Adaptive ML-type estimation} Firstly, we construct an estimator for $\Lambda$ such that $\hat{\Lambda}_{n}:=\frac{1}{2n}\sum_{i=0}^{n-1}\parens{Y_{(i+1)h_{n}}-Y_{ih_{n}}}^{\otimes2}$, {which takes a form similar to the quadratic variation of $\left\{Y_{ih_{n}}\right\}_{i=0,\ldots,n}$.} \begin{lemma}\label{lem311} Under (A1)-(A4), $h_{n}\to0$ and $nh_{n}\to\infty$ as $n\to\infty$, $\hat{\Lambda}_{n}$ is consistent.
\end{lemma} { \begin{remark} The result of Lemma \ref{lem311} can be understood intuitively: note the order evaluations $\sum_{i=0}^{n-1}\left(X_{\left(i+1\right)h_{n}}-X_{ih_{n}}\right)^{\otimes2}=O_{P}\left(nh_{n}\right)$ and $\sum_{i=0}^{n-1}\left(\epsilon_{\left(i+1\right)h_{n}}-\epsilon_{ih_{n}}\right)^{\otimes2}=O_{P}\left(n\right)$, while the cross term is merely a residual because of the independence of $\left\{X_{t}\right\}_{t\ge0}$ and $\left\{\epsilon_{ih_{n}}\right\}_{i=0,\ldots,n}$. \end{remark} } We propose the following {Gaussian-type} quasi-likelihood functions: {\small\begin{align} &\mathbb{H}_{1,n}^{\tau}(\alpha|\Lambda):=-\frac{1}{2}\sum_{j=1}^{k_{n}-2} \parens{\ip{\parens{\frac{2}{3}\Delta_{n} A_{n}^{\tau}(\lm{Y}{j-1},\alpha,\Lambda)}^{-1}}{\parens{\lm{Y}{j+1}-\lm{Y}{j}}^{\otimes 2}} +\log\det \parens{ A_{n}^{\tau}(\lm{Y}{j-1},\alpha,\Lambda)}}, \\ &\mathbb{H}_{2,n}(\beta|\alpha) :=-\frac{1}{2}\sum_{j=1}^{k_{n}-2} \ip{\parens{\Delta_{n}A(\lm{Y}{j-1},\alpha)}^{-1}}{\parens{\lm{Y}{j+1}-\lm{Y}{j}-\Delta_{n}b(\lm{Y}{j-1},\beta)}^{\otimes 2}}, \end{align}} where $A_{n}^{\tau}(x,\alpha,\Lambda):=A(x,\alpha)+3\Delta_{n}^{\frac{2-\tau}{\tau-1}}\Lambda$. {They are quite similar to the quasi-likelihood functions proposed by \citep{UY12} for discretely observed ergodic diffusion processes with $nh_{n}^{2}\to0$, except for the scaling $2/3$ seen in $\mathbb{H}_{1,n}^{\tau}$. This is because \begin{align} \mathbf{E}\left[\left(\lm{Y}{j+1}-\lm{Y}{j}\right)^{\otimes2}|X_{j\Delta_{n}},\left\{\epsilon_{ih_{n}}\right\}_{i=0,\ldots,jp_{n}-1}\right]\approx\frac{2}{3}\Delta_{n}A_{n}^{\tau}\left(X_{j\Delta_{n}},\alpha^{\star},\Lambda_{\star}\right) \end{align} in a certain sense (see Proposition \ref{pro737} or Theorem \ref{thm743}), which contrasts with $\mathbf{E}\left[\left(X_{\left(i+1\right)h_{n}}-X_{ih_{n}}\right)^{\otimes2}|X_{ih_{n}}\right]\approx h_{n}A\left(X_{ih_{n}},\alpha^{\star}\right)$ for the latent process}.
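As a numerical illustration of the quantities above, the following sketch simulates a one-dimensional Ornstein--Uhlenbeck process, contaminates it with i.i.d.\ noise, and computes the noise-variance estimator $\hat{\Lambda}_{n}$ of Lemma \ref{lem311} together with the local means $\lm{Y}{j}$. All model and tuning choices (drift $-x$, unit diffusion, $n$, $h_{n}$, the seed) are illustrative assumptions of this sketch, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes: h_n = p_n^{-tau} with tau = 2, so Delta_n = p_n * h_n ~ 1 / p_n.
n = 100_000
h_n = 1e-3                                # observation step; n * h_n = 100
tau = 2.0
p_n = int(round(h_n ** (-1.0 / tau)))     # ~32 observations per block
k_n = n // p_n                            # number of blocks
Delta_n = p_n * h_n

# Latent 1-d Ornstein-Uhlenbeck path, dX_t = -X_t dt + dw_t (Euler-Maruyama),
# observed with i.i.d. N(0, Lam) noise: Y_{ih} = X_{ih} + eps_{ih}.
X = np.empty(n)
X[0] = 1.0
dW = rng.normal(0.0, np.sqrt(h_n), size=n - 1)
for i in range(n - 1):
    X[i + 1] = X[i] - X[i] * h_n + dW[i]
Lam = 1e-2
Y = X + rng.normal(0.0, np.sqrt(Lam), size=n)

# Noise-variance estimator of Lemma 3.1: half the summed squared increments,
# since E[(eps_{(i+1)h} - eps_{ih})^2] = 2 * Lam dominates the X increments.
Lam_hat = np.sum(np.diff(Y) ** 2) / (2 * n)

# Local means: bar{Y}_j = average of the j-th block of p_n observations.
Y_bar = Y[: k_n * p_n].reshape(k_n, p_n).mean(axis=1)

print(Lam_hat)      # close to Lam = 0.01 (upward bias of order h_n / 2 from X)
print(Y_bar.shape)  # (k_n,)
```

The reshape-and-average step is exactly the block structure of the local means; the residual bias of $\hat{\Lambda}_{n}$ comes from the quadratic variation of $X$ and vanishes as $h_{n}\to0$.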
We define the {adaptive ML-type} estimators $\hat{\alpha}_{n}$ and $\hat{\beta}_{n}$ {corresponding to $\mathbb{H}_{1,n}^{\tau}$ and $\mathbb{H}_{2,n}$}, where \begin{align} \mathbb{H}_{1,n}^{\tau}(\hat{\alpha}_{n}|\hat{\Lambda}_{n})&=\sup_{\alpha\in\Theta_1}\mathbb{H}_{1,n}^{\tau}(\alpha|\hat{\Lambda}_{n}),\\ \mathbb{H}_{2,n}(\hat{\beta}_{n}|\hat{\alpha}_{n})&=\sup_{\beta\in\Theta_2}\mathbb{H}_{2,n}(\beta|\hat{\alpha}_{n}). \end{align} The consistency of these estimators is given by the next theorem. \begin{theorem}\label{thm312} Under (A1)-(A7) and (AH), $\hat{\alpha}_{n}$ and $\hat{\beta}_{n}$ are consistent. \end{theorem} { \begin{remark} These adaptive ML-type estimators have the advantage that the computational burden of optimisation is reduced compared to that of the simultaneous ML-type estimators maximising the simultaneous quasi-likelihood $\mathbb{H}_{n}\left(\alpha,\beta|\Lambda\right)$ such that \begin{align} \mathbb{H}_{n}\left(\hat{\alpha}_{n},\hat{\beta}_{n}|\hat{\Lambda}_{n}\right) =\sup_{\alpha\in\Theta_1,\beta\in\Theta_2}\mathbb{H}_{n}\left(\alpha,\beta|\hat{\Lambda}_{n}\right) \end{align} studied in \citep{Fa14} and \citep{Fa16}.
\end{remark} } {To state the asymptotic normality of the estimators, let us introduce the limiting information matrices} \begin{align} \mathcal{I}^{\tau}(\vartheta^{\star})&:=\mathrm{diag}\left\{W_{1}, \mathcal{I}^{(2,2),\tau},\mathcal{I}^{(3,3)}\right\}(\vartheta^{\star}),\\ \mathcal{J}^{\tau}(\vartheta^{\star})&:=\mathrm{diag}\left\{I_{d(d+1)/2},\mathcal{J}^{(2,2),\tau},\mathcal{J}^{(3,3)}\right\}(\vartheta^{\star}), \end{align} where for $i_1,i_2\in\tuborg{1,\ldots,m_1}$, \begin{align} \mathcal{I}^{(2,2),\tau}(\vartheta^{\star})&:= W_2^{(\tau)}\parens{\tuborg{\frac{3}{4}\parens{ A^{\tau}}^{-1}\parens{\partial_{\alpha^{(k_1)}}A}\parens{ A^{\tau}}^{-1}(\cdot,\vartheta^{\star})}_{k_1}},\\ \mathcal{J}^{(2,2),\tau}(\vartheta^{\star})&:= \crotchet{\frac{1}{2}\nu_0\parens{\tr\tuborg{\parens{A^{\tau}}^{-1}\parens{\partial_{\alpha^{(i_1)}}A}\parens{A^{\tau}}^{-1} \parens{\partial_{\alpha^{(i_2)}}A}}(\cdot,\vartheta^{\star})}}_{i_1,i_2}, \end{align} {which are the information matrices for the diffusion parameter $\alpha$, and for $j_1,j_2\in\tuborg{1,\ldots,m_2}$,} \begin{align} \mathcal{I}^{(3,3)}(\theta^{\star})&=\mathcal{J}^{(3,3)}(\theta^{\star}):= \crotchet{\nu_0\parens{\ip{\parens{A}^{-1}}{\parens{\partial_{\beta^{(j_1)}}b}\parens{\partial_{\beta^{(j_2)}}b}^T}(\cdot,\theta^{\star})}}_{j_1,j_2} \end{align} {which is the corresponding information matrix for the drift parameter $\beta$. To ease the notation, let us also }denote $\hat{\theta}_{\epsilon,n}:=\vech\hat{\Lambda}_{n}$ and $\theta_{\epsilon}^{\star}:=\vech \Lambda_{\star}$.
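In the scalar case $b(x,\beta)=\beta x$, $a(x,\alpha)=\alpha$ with $\tau=2$, the maximisers of $\mathbb{H}_{1,n}^{\tau}$ and $\mathbb{H}_{2,n}$ are available in closed form, which gives a compact sketch of the three-step adaptive scheme. The closed-form steps are special to this scalar Ornstein--Uhlenbeck model, and all numerical choices below are illustrative assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative setting with tau = 2: h_n = p_n^{-2}, Delta_n = p_n * h_n.
n, h_n, tau = 100_000, 1e-3, 2.0
p_n = int(round(h_n ** (-1.0 / tau)))
k_n = n // p_n
Delta_n = p_n * h_n

# Scalar OU: dX_t = beta* X_t dt + alpha* dw_t, noisy observation Y = X + eps.
alpha_true, beta_true, Lam = 1.0, -1.0, 1e-2
X = np.empty(n)
X[0] = 1.0
dW = rng.normal(0.0, np.sqrt(h_n), size=n - 1)
for i in range(n - 1):
    X[i + 1] = X[i] + beta_true * X[i] * h_n + alpha_true * dW[i]
Y = X + rng.normal(0.0, np.sqrt(Lam), size=n)

# Step 1: noise variance from the full-frequency quadratic variation.
Lam_hat = np.sum(np.diff(Y) ** 2) / (2 * n)

# Step 2: diffusion parameter. Maximising H_1 over A^tau = alpha^2 + 3 * Lam
# gives A_hat = mean squared local-mean increment / ((2/3) * Delta_n);
# the noise correction 3 * Lam_hat is subtracted afterwards.
Y_bar = Y[: k_n * p_n].reshape(k_n, p_n).mean(axis=1)
incr = Y_bar[2:] - Y_bar[1:-1]          # bar{Y}_{j+1} - bar{Y}_j
prev = Y_bar[:-2]                       # bar{Y}_{j-1}
A_hat = np.mean(incr ** 2) / ((2.0 / 3.0) * Delta_n)
alpha_hat = np.sqrt(max(A_hat - 3.0 * Lam_hat, 0.0))

# Step 3: drift parameter. For b(x, beta) = beta x, maximising H_2 is a
# least-squares regression of the increments on bar{Y}_{j-1}
# (the weight A^{-1} is a constant here and cancels).
beta_hat = np.sum(incr * prev) / (Delta_n * np.sum(prev ** 2))

print(Lam_hat, alpha_hat, beta_hat)
```

The three estimators use three different amounts of data (all $n$ increments, $k_{n}$ local means, and the time horizon $nh_{n}$), mirroring the three convergence rates in the theorem below.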
\begin{theorem}\label{thm313} Under (A1)-(A7), (AH) and $k_{n}\Delta_{n}^2\to0$, the following convergence in distribution holds: { \begin{align*} &\left[ \sqrt{n}\left(\hat{\theta}_{\epsilon,n}-\theta_{\epsilon}^{\star}\right),\ \sqrt{k_{n}}\left(\hat{\alpha}_{n}-\alpha^{\star}\right),\ \sqrt{nh_{n}}\left(\hat{\beta}_{n}-\beta^{\star}\right) \right]\\[5pt] &\hspace{3cm}\cl N\parens{\mathbf{0},\parens{\mathcal{J}^{\tau}(\vartheta^{\star})}^{-1}\mathcal{I}^{\tau}(\vartheta^{\star})\parens{\mathcal{J}^{\tau}(\vartheta^{\star})}^{-1}}. \end{align*}} \end{theorem} { This theorem shows the difference in the convergence rates of $\hat{\theta}_{\epsilon,n}$, $\hat{\alpha}_{n}$ and $\hat{\beta}_{n}$, which is essential in constructing the adaptive estimation approach. The difference among these convergence rates can be understood intuitively: the estimator for $\Lambda$ has $\sqrt{n}$-consistency as in the ordinary i.i.d.\ case, because it is estimated with the quadratic variation of the observations, which masks the influence of the latent process $\left\{X_{t}\right\}_{t\ge0}$ as noted in the remark after Lemma \ref{lem311}; $\hat{\alpha}_{n}$, which is estimated with the quasi-likelihood composed of the $k_{n}$ local means $\left\{\lm{Y}{j}\right\}_{j=0,\ldots,k_{n}-1}$, has $\sqrt{k_{n}}$-consistency, corresponding to the $\sqrt{n}$-consistency in the inference for discretely observed diffusion processes; and $\hat{\beta}_{n}$ has $\sqrt{nh_{n}}$-consistency, which is standard in statistics for diffusion processes. Note that our estimator $\hat{\beta}_{n}$ is asymptotically efficient for all $\tau\in\left(1,2\right]$: the limiting variance of $\sqrt{nh_{n}}\left(\hat{\beta}_{n}-\beta^{\star}\right)$ is the inverse of the Fisher information.
This is because we construct adaptive quasi-likelihood functions for the diffusion parameter $\alpha$ and the drift parameter $\beta$ separately; in contrast, the simultaneous quasi-likelihood function proposed in \citep{Fa14} and \citep{Fa16} cannot achieve asymptotic efficiency when $\tau=2$. } { \begin{remark} (AH) inherits the assumption in both \citep{Fa14} and \citep{Fa16}. The tuning parameter $\tau$ controls the size and the number of the partitions of the observations, denoted by $p_{n}$ and $k_{n}$, and we have shown that our estimators have asymptotic normality for all $\tau\in\left(1,2\right]$. It can be tuned in each application, and the following discussion may give some guidance on choosing $\tau$. Generally speaking, a larger $\tau$ is advantageous for the asymptotics of our estimators, since it yields a smaller $k_{n}\Delta_{n}^{2}$, whose convergence to 0 is one of the conditions for asymptotic normality of the estimators. Let us consider the case where $nh_{n}^{\gamma}\to 0$ for some $\gamma\in\left(1,3/2\right]$; then $k_{n}\Delta_{n}^{2}=nh_{n}^{2-1/\tau}\to0$ if $\gamma>2-1/\tau$, a condition which can be eased by taking larger $\tau$. On the other hand, there does not exist any $C>0$ such that $\mathcal{I}^{(2,2),\tau}=C\mathcal{J}^{(2,2),\tau}$ if $\tau=2$, which makes it difficult to construct test statistics such as likelihood-ratio-type ones (see \citep{NU18}). Hence in practice, $\tau$ sufficiently close to $2$ can be optimal, but it would be hard to discuss goodness-of-fit of models when $\tau=2$ exactly. \end{remark} } \subsection{Test for noise detection} We formulate the statistical hypothesis testing problem {\begin{align*} H_0: \Lambda_{\star}=O,\ H_1: \Lambda_{\star}\neq O. \end{align*}} We define $S_{t}:= \sum_{l=1}^{d}X_t^{(l)}$ and $\mathscr{S}_{ih_{n}} := \sum_{l=1}^{d}Y_{ih_{n}}^{(l)}$; note that $\crotchet{X_t^{(1)},\ldots,X_t^{(d)},S_{t}}$ is also an ergodic diffusion.
Furthermore, {\small \begin{align} Z_{n}:=\sqrt{\frac{2p_{n}}{3\sum_{j=1}^{k_{n}-2}\parens{\lm{\mathscr{S}}{j+1}-\lm{\mathscr{S}}{j}}^4}}\parens{\sum_{i=0}^{n-1}\parens{\mathscr{S}_{(i+1)h_{n}}-\mathscr{S}_{ih_{n}}}^2 -\sum_{0\le 2i\le n-2}\parens{\mathscr{S}_{(2i+2)h_{n}}-\mathscr{S}_{2ih_{n}}}^2}, \end{align}} where $\lm{\mathscr{S}}{j}:=\frac{1}{p_{n}}\sum_{i=0}^{p_{n}-1}\mathscr{S}_{j\Delta_{n}+ih_{n}}$, and we consider the hypothesis test with rejection region $Z_{n}\ge z_{\alpha}$, where $z_{\alpha}$ is the {$(1-\alpha)$-quantile} of $N(0,1)$. \begin{theorem}\label{thm321} Under $H_0$, (A1)-(A5), (AH) and $nh_{n}^2\to0$, \begin{align*} Z_{n}\cl N(0,1). \end{align*} \end{theorem} \begin{theorem}\label{thm322} Under $H_1$, (A1)-(A5), (AH), (T1) and $nh_{n}^2\to0$, the test is consistent, i.e., for all $\alpha\in(0,1)$, \begin{align*} P(Z_{n}\ge z_{\alpha})\to 1. \end{align*} \end{theorem} { \begin{remark} The consistency shown above utilises the well-known fact in financial econometrics that the realised quadratic variation of a process diverges as the observation frequency increases when the observations are contaminated by exogenous noise. The first quadratic variation in the bracket of $Z_{n}$ is computed from the entire observation, while the second one halves the number of samples by doubling the sampling interval from $h_{n}$ to $2h_{n}$. As a result, the first quadratic variation divided by $2n$ converges in probability to $\sum_{\ell_{1}}\sum_{\ell_{2}}\Lambda_{\star}^{\left(\ell_{1},\ell_{2}\right)}$, while the second one divided by $2n$ converges in probability to $(1/2)\sum_{\ell_{1}}\sum_{\ell_{2}}\Lambda_{\star}^{\left(\ell_{1},\ell_{2}\right)}$. This difference is sufficiently large that $Z_{n}$ diverges in the sense that $P\left(Z_{n}\ge z_{\alpha}\right)\to1$ for any $\alpha\in(0,1)$.
\end{remark} } { Next, we consider approximating the power in the finite-sample scheme via the following sequence of alternatives \begin{align*} \left(\Lambda_{\star}\right)_{n}=\frac{h_{n}}{\sqrt{n}}\mathfrak{M}, \end{align*} where $\mathfrak{M}\ge0$ and $\delta:=\sum_{\ell_{1}}\sum_{\ell_{2}}\mathfrak{M}^{\left(\ell_{1},\ell_{2}\right)}>0$. Then we obtain the following approximation of the power. \begin{theorem}\label{thm323} Under the sequence of the alternatives $\left\{\left(\Lambda_{\star}\right)_{n}\right\}$, (A1)-(A5), (AH) and $nh_{n}^{2}\to0$, the limiting power of the test is $\Phi\left(\delta-z_{\alpha}\right)$. \end{theorem} \begin{remark} (i) The order of the alternatives above might seem peculiar, but it can be understood as follows: the convergence in distribution in Theorem 4 can result from that of the quantity \begin{align*} \sqrt{n}\left(\frac{1}{nh_{n}}\sum_{i=0}^{n-1}\left(S_{\left(i+1\right)h_{n}}-S_{ih_{n}}\right)^{2}-\frac{1}{nh_{n}}\sum_{0\le 2i\le n-2}\left(S_{\left(2i+2\right)h_{n}}-S_{2ih_{n}}\right)^{2}\right), \end{align*} which converges to a normal distribution with mean 0, that is to say, is $O_{P}\left(1\right)$. If we replace $S_{ih_{n}}$ with $\mathscr{S}_{ih_{n}}$ while $\Lambda_{\star}$ is fixed with $\sum_{\ell_{1}}\sum_{\ell_{2}}\Lambda_{\star}^{\left(\ell_{1},\ell_{2}\right)}>0$, then this quantity has the order $O_{P}\left(\sqrt{n}/h_{n}\right)$, as discussed in Lemma \ref{lem311} or Theorem \ref{thm322}. Here it is possible to understand the role of $h_{n}/\sqrt{n}$ in the sequence of the alternatives: it lets the quantity remain $O_{P}\left(1\right)$, and in fact $Z_{n}$ converges to a normal distribution.
(ii) We should note that the dependency of $\Lambda_{\star}$ on $n$ is simply aimed at approximating the power as claimed above; some works in the literature also let $\Lambda_{\star}$ depend on $n$ even in the estimation framework, while we have set it as a constant matrix except in Theorem \ref{thm323}. \end{remark} } \bgroup \def\arraystretch{1.5} \setlength\tabcolsep{0.3cm} \section{Example and simulation results} \subsection{Case of small noise} First of all, we consider the following 2-dimensional Ornstein-Uhlenbeck process \begin{align} \dop \crotchet{\begin{matrix} X_t^{\left(1\right)}\\ X_t^{\left(2\right)} \end{matrix}}=\parens{\crotchet{\begin{matrix} \beta_1 & \beta_3\\ \beta_2 & \beta_4 \end{matrix}}\crotchet{\begin{matrix} X_t^{\left(1\right)}\\ X_t^{\left(2\right)} \end{matrix}}+\crotchet{\begin{matrix} \beta_5\\ \beta_6 \end{matrix}}}\dop t + \crotchet{\begin{matrix} \alpha_1 & \alpha_2\\ \alpha_2 & \alpha_3 \end{matrix}}\dop w_t,\ \crotchet{\begin{matrix} X_0^{\left(1\right)}\\ X_0^{\left(2\right)} \end{matrix}}=\crotchet{\begin{matrix} 1\\ 1 \end{matrix} }, \end{align} where the true values of the diffusion parameters are $\parens{\alpha_1^{\star}, \alpha_2^{\star},\alpha_3^{\star}}=\parens{1, 0.1,1}$ and those of the drift parameters are $\parens{\beta_1^{\star}, \beta_2^{\star},\beta_3^{\star},\beta_4^{\star},\beta_5^{\star},\beta_6^{\star}}=\parens{-1,-0.1,-0.1,-1,1,1}$; the noise is multivariate normal, and we examine several levels of $\Lambda$ such that $\Lambda_{\star,-\infty}=O$ and $\Lambda_{\star,-i}=10^{-i}I_2$ for $i\in\{4,5,6,7,8\}$. We check the performance of our estimators and the test constructed in Section 3, and compare our estimator (local mean method, LMM) with the estimator based on local Gaussian approximation (LGA). We show the setting and results of the simulation in the following tables{; note that we simulate with $\tau=1.8$, $1.9$ and $2.0$, and report empirical means {and root-mean-squared errors} over 1000 iterations, the latter in brackets $(\cdot)$}.
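For a single component ($d=1$, so $\mathscr{S}_{ih_{n}}=Y_{ih_{n}}$), the noise-detection statistic $Z_{n}$ can be sketched as below: it compares the full-frequency quadratic variation with the half-frequency one. The simulated setting (a scalar Ornstein--Uhlenbeck path with illustrative sizes and seed) is an assumption of this sketch, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sizes with tau = 2: h_n = p_n^{-2}.
n, h_n = 100_000, 1e-3
p_n = int(round(h_n ** -0.5))
k_n = n // p_n

# Scalar OU path dX_t = -X_t dt + dw_t (Euler-Maruyama).
X = np.empty(n)
X[0] = 1.0
dW = rng.normal(0.0, np.sqrt(h_n), size=n - 1)
for i in range(n - 1):
    X[i + 1] = X[i] - X[i] * h_n + dW[i]

def z_stat(S):
    """Z_n for a scalar series S observed on the mesh {i * h_n}."""
    full_qv = np.sum(np.diff(S) ** 2)          # increments at step h_n
    half_qv = np.sum(np.diff(S[::2]) ** 2)     # increments at step 2 * h_n
    S_bar = S[: k_n * p_n].reshape(k_n, p_n).mean(axis=1)
    denom = 3.0 * np.sum(np.diff(S_bar) ** 4)  # sum of (bar{S}_{j+1}-bar{S}_j)^4
    return np.sqrt(2.0 * p_n / denom) * (full_qv - half_qv)

Z_clean = z_stat(X)                                  # H_0 holds
Z_noisy = z_stat(X + rng.normal(0.0, 0.1, size=n))   # Lam = 0.01

print(Z_clean)   # roughly N(0, 1) under H_0
print(Z_noisy)   # far beyond z_alpha: the test detects the noise
```

Without noise, the two quadratic variations both estimate the integrated diffusion coefficient and their difference is of order one after normalisation; with noise, the full-frequency sum roughly doubles the coarse one, so $Z_{n}$ explodes.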
With respect to the estimator for the noise variance, let us check the case of $\Lambda_{\star,-4}$. The empirical mean and {root-mean-squared error} of $\hat{\Lambda}_{n}^{\left(1,1\right)}$ with $\Lambda_{\star}^{\left(1,1\right)}=10^{-4}$ are $1.32\times10^{-4}$ and $3.21\times 10^{-5}$; those of $\hat{\Lambda}_{n}^{\left(1,2\right)}$ with $\Lambda_{\star}^{\left(1,2\right)}=0$ are $6.29\times 10^{-6}$ and $6.31\times 10^{-6}$; and those of $\hat{\Lambda}_{n}^{\left(2,2\right)}$ with $\Lambda_{\star}^{\left(2,2\right)}=10^{-4}$ are $1.33\times10^{-4}$ and $3.25\times 10^{-5}$. \begin{table}[h] \centering \caption{Setting in Section 4} \begin{tabular}{c|ccc} quantity & \multicolumn{3}{c}{approximation} \\\hline $n$ & \multicolumn{3}{c}{$10^6$}\\ $h_{n}$ & \multicolumn{3}{c}{$6.309573\times 10^{-5}$}\\ ${nh_{n}}$ & \multicolumn{3}{c}{$63.09573$}\\ $nh_{n}^{2}$ & \multicolumn{3}{c}{$0.003981072$}\\ $\tau$ & $1.8$ & $1.9$ & $2.0$\\ $p_{n}$ & $215$ & $162$ & $125$\\ $k_{n}$ & $4651$ & $6172$ & $8000$\\ $\Delta_{n}$ & $0.01356558$ & $0.01022151$ & $0.007886967$\\ $k_{n}\Delta_{n}^{2}$ & $0.8559005$ & $0.6448459$ & $0.497634$\\ iteration & \multicolumn{3}{c}{1000} \end{tabular} \end{table} {Firstly, we compare the simulation results for the test statistics for $\tau=1.8$, $1.9$ and $2.0$: see Tables 2--4. We observe little difference among the results for different values of the tuning parameter $\tau$; hence we can conclude that $\tau$ hardly matters, at least for the hypothesis testing proposed {in} Section 3.2, as the results are proved for all $\tau\in\left(1,2\right]$.} {Secondly}, we examine the performance of the diffusion estimators {(see Tables 5--7)}. It can be seen that neither our method nor LGA dominates the {other} in terms of {root-mean-squared error} where {$\Lambda_{\star}=\Lambda_{\star,-\infty}$, $\Lambda_{\star,-8}$ or $\Lambda_{\star,-7}$}.
Note that the powers of the test are not large in these settings. This reflects that, relying on the result of the noise detection test, it makes little difference whether we choose our estimators, which are consistent even if there is no noise, or the LGA estimators, which are asymptotically efficient if the observations are not contaminated by noise. In contrast to these sizes of noise variance, the results of the simulation with the settings $\Lambda_{\star,-6}$, $\Lambda_{\star,-5}$ and $\Lambda_{\star,-4}$ show that our estimators dominate the estimators with LGA in terms of {root-mean-squared errors}, while the noise detection test simultaneously exhibits high power. {We should also refer to the differences among LMM with different values of $\tau$: a larger $\tau$ clearly lessens the {root-mean-squared errors}, since the influence of the observational noise is not so large under these settings and our estimator $\hat{\alpha}_{n}$ has $\sqrt{k_{n}}$-consistency.} We also see the same behaviour in the estimation of the drift parameters {(see Tables 8--13)}. In this case, our estimators are dominant in all the settings of noise variance, but the performance of the LGA estimators is close to ours where $\Lambda_{\star,-\infty}$, $\Lambda_{\star,-8}$ and $\Lambda_{\star,-7}$. With larger noise variance, the estimators with the local means method perform far better than the others. {Contrary to the estimation of the diffusion parameters, the difference in {root-mean-squared errors} among LMM with various values of $\tau$ is not obvious, because our estimator $\hat{\beta}_{n}$ is $\sqrt{nh_{n}}$-consistent, i.e., the convergence rate does not depend on $\tau$.} \begin{remark} \normalfont With these results, we can see that the test works well as a criterion for selecting between the estimation methods with local means and LGA: when $H_0:\Lambda_{\star}=O$ is accepted, we are essentially free to adopt either estimator; if $H_0$ is rejected, we are strongly motivated to select our estimator.
\end{remark} \subsection{Case of large noise} Secondly, we consider the problem with the same setting as the previous one except for the variance of the noise. We set the variance as $\Lambda_{\star}=I_2$, which is much larger than those in the previous subsection. In the simulation, the empirical power of the test for noise detection is 1. We compare the estimation with our method (local mean method, LMM) and that with local Gaussian approximation (LGA) again. Obviously, all the estimators with LMM {for all $\tau=1.8$, $1.9$ and $2.0$ }dominate {those with LGA, which diverge clearly (see Table 14)}. Moreover, the {root-mean-squared errors} of our estimators are close to those under the small-noise settings in the subsection above. This shows that our estimator is robust even if the variance of the noise is so large that we can hardly discern the underlying diffusion process from the observations. {Remarkably, the {root-mean-squared errors} for $\hat{\alpha}_{n}$ decrease as $\tau$ declines, contrary to what we observed in Section 4.1. We can consider several causes for this tendency: the difference between the asymptotic variance for $\tau\in\left(1,2\right)$ and that for $\tau=2$, which depends on $\Lambda_{\star}$; and the variation in $p_{n}$, the number of samples in each local mean, whose value determines the degree to which the influence of the noise is diminished. In any case, the approach to tuning $\tau$ should be problem-specific, as mentioned in Section 3.1.
} \begin{table}[!h] \caption{test statistics performance with small noise (section 4.1, $\tau=1.8$)} \centering \begin{tabular}{c|ccc} & ratio of $Z_{n}>z_{0.05}$ & ratio of $Z_{n}>z_{0.01}$ & ratio of $Z_{n}>z_{0.001}$ \\\hline $\Lambda_{\star}=O$ & 0.050 & 0.004 & 0.001\\ $\Lambda_{\star}=10^{-8}I_2$ & 0.062 & 0.010 & 0.001\\ $\Lambda_{\star}=10^{-7}I_2$ & 0.256 & 0.088 & 0.017\\ $\Lambda_{\star}=10^{-6}I_2$ & 1.000 & 1.000 & 1.000\\ $\Lambda_{\star}=10^{-5}I_2$ & 1.000 & 1.000 & 1.000\\ $\Lambda_{\star}=10^{-4}I_2$ & 1.000 & 1.000 & 1.000\\ \end{tabular} \end{table} \begin{table}[!h] \caption{test statistics performance with small noise (section 4.1, $\tau=1.9$)} \centering \begin{tabular}{c|ccc} & ratio of $Z_{n}>z_{0.05}$ & ratio of $Z_{n}>z_{0.01}$ & ratio of $Z_{n}>z_{0.001}$ \\\hline $\Lambda_{\star}=O$ & 0.051 & 0.007 & 0.002\\ $\Lambda_{\star}=10^{-8}I_2$ & 0.063 & 0.011 & 0.002\\ $\Lambda_{\star}=10^{-7}I_2$ & 0.263 & 0.087 & 0.017\\ $\Lambda_{\star}=10^{-6}I_2$ & 1.000 & 1.000 & 1.000\\ $\Lambda_{\star}=10^{-5}I_2$ & 1.000 & 1.000 & 1.000\\ $\Lambda_{\star}=10^{-4}I_2$ & 1.000 & 1.000 & 1.000\\ \end{tabular} \end{table} \begin{table}[!h] \caption{test statistics performance with small noise (section 4.1, $\tau=2.0$)} \centering \begin{tabular}{c|ccc} & ratio of $Z_{n}>z_{0.05}$ & ratio of $Z_{n}>z_{0.01}$ & ratio of $Z_{n}>z_{0.001}$ \\\hline $\Lambda_{\star}=O$ & 0.050 & 0.008 & 0.002\\ $\Lambda_{\star}=10^{-8}I_2$ & 0.065 & 0.010 & 0.002\\ $\Lambda_{\star}=10^{-7}I_2$ & 0.257 & 0.088 & 0.016\\ $\Lambda_{\star}=10^{-6}I_2$ & 1.000 & 1.000 & 1.000\\ $\Lambda_{\star}=10^{-5}I_2$ & 1.000 & 1.000 & 1.000\\ $\Lambda_{\star}=10^{-4}I_2$ & 1.000 & 1.000 & 1.000\\ \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\alpha_{1}^{\star}=1$ with small noise (section 4.1).} {Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & 
\multicolumn{3}{c|}{$\hat{\alpha}_{1,\mathrm{LMM}}$ ($1$)} & \multirow{2}{*}{$\hat{\alpha}_{1,\mathrm{LGA}}$ ($1$)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline \multirow{2}{*}{$O $} & $0.996318$ & $0.997492$ & $0.998099$ & $1.003940$ \\ & ( $0.0120$ ) & ( $0.0101$ ) & ( $0.0090$ ) & ( $0.0067$ )\\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $0.996318$ & $0.997492$ & $0.998099$ & $1.004100$ \\ & ( $0.0120$ ) & ( $0.0101$ ) & ( $0.0090$ ) & ( $0.0068$ ) \\ \hline \multirow{2}{*}{$10^{-7}I_2$} & $0.996318$ & $0.997492$ & $0.998099$ & $1.005534$\\ & ( $0.0120$ ) & ( $0.0101$ ) & ( $0.0090$ ) & ( $0.0077$ )\\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $0.996318$ & $0.997492$ & $0.998100$ & $1.019757$ \\ & ( $0.0120$ ) & ( $0.0101$ ) & ( $0.0090$ ) & ( $0.0205$ )\\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $0.996319$ & $0.997492$ & $0.998101$ & $1.152084$ \\ & ( $0.0120$ ) & ( $0.0101$ ) & ( $0.0090$ ) & ( $0.1522$ )\\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $0.996322$ & $0.997493$ & $0.998108$ & $2.045903$ \\ & ( $0.0120$ ) & ( $0.0101$ ) & ( $0.0090$ ) & ( $1.0459$ ) \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\alpha_{2}^{\star}=0.1$ with small noise (section 4.1).} {Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & \multicolumn{3}{c|}{$\hat{\alpha}_{2,\mathrm{LMM}}$ ($0.1$)} & \multirow{2}{*}{$\hat{\alpha}_{2,\mathrm{LGA}}$ ($0.1$)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline \multirow{2}{*}{$O $} & $0.094735$ & $0.095539$ & $0.096314$ & $0.098900$\\ & ( $0.0087$ ) & ( $0.0073$ ) & ( $0.0064$ ) & ( $0.0067$ ) \\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $0.094735$ & $0.095539$ & $0.096314$ & $0.098885$ \\ & ( $0.0087$ ) & ( $0.0073$ ) & ( $0.0064$ ) & ( $0.0067$ ) \\ \hline \multirow{2}{*}{$10^{-7}I_2$} & 
$0.094735$ & $0.095539$ & $0.096314$ & $0.098745$ \\ & ( $0.0087$ ) & ( $0.0073$ ) & ( $0.0064$ ) & ( $0.0067$ ) \\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $0.094735$ & $0.095539$ & $0.096314$ & $0.097379$ \\ & ( $0.0087$ ) & ( $0.0073$ ) & ( $0.0064$ ) & ( $0.0070$ )\\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $0.094736$ & $0.095539$ & $0.096313$ & $0.086260$ \\ & ( $0.0087$ ) & ( $0.0073$ ) & ( $0.0064$ ) & ( $0.0149$ )\\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $0.094736$ & $0.095540$ & $0.096311$ & $0.048684$ \\ & ( $0.0087$ ) & ( $0.0073$ ) & ( $0.0064$ ) & ( $0.0514$ ) \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\alpha_{3}^{\star}=1$ with small noise (section 4.1).} {Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & \multicolumn{3}{c|}{$\hat{\alpha}_{3,\mathrm{LMM}}$ ($1$)} & \multirow{2}{*}{$\hat{\alpha}_{3,\mathrm{LGA}}$ ($1$)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline \multirow{2}{*}{$O $} & $0.997063$ & $0.997764$ & $0.998626$ & $1.010689$ \\ & ( $0.0118$ ) & ( $0.0103$ ) & ( $0.0089$ ) & ( $0.0156$ )\\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $0.997063$ & $0.997764$ & $0.998626$ & $1.010847$ \\ & ( $0.0118$ ) & ( $0.0103$ ) & ( $0.0089$ ) & ( $0.0157$ ) \\ \hline \multirow{2}{*}{$10^{-7}I_2$} & $0.997063$ & $0.997764$ & $0.998626$ & $1.012273$ \\ & ( $0.0118$ ) & ( $0.0103$ ) & ( $0.0089$ ) & ( $0.0167$ ) \\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $0.997063$ & $0.997765$ & $0.998626$ & $1.026402$ \\ & ( $0.0118$ ) & ( $0.0103$ ) & ( $0.0089$ ) & ( $0.0287$ ) \\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $0.997064$ & $0.997766$ & $0.998627$ & $1.157960$ \\ & ( $0.0118$ ) & ( $0.0103$ ) & ( $0.0089$ ) & ( $0.1583$ ) \\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $0.997068$ & $0.997770$ & $0.998632$ & $2.049110$ \\ & ( $0.0118$ ) & ( $0.0103$ ) & ( $0.0089$ ) & ( 
$1.0491$ ) \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\beta_{1}^{\star}=-1$ with small noise (section 4.1).} {Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & \multicolumn{3}{c|}{$\hat{\beta}_{1,\mathrm{LMM}}$ ($-1$)} & \multirow{2}{*}{$\hat{\beta}_{1,\mathrm{LGA}}$ ($-1$)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline \multirow{2}{*}{$O $} & $-1.069994$ & $-1.073383$ & $-1.075441$ & $-1.097705$ \\ & ( $0.1998$ ) & ( $0.2056$ ) & ( $0.1992$ ) & ( $0.2099$ ) \\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $-1.069994$ & $-1.073383$ & $-1.075442$ & $-1.098047$ \\ & ( $0.1998$ ) & ( $0.2056$ ) & ( $0.1992$ ) & ( $0.2101$ ) \\ \hline \multirow{2}{*}{$10^{-7}I_2$} & $-1.069994$ & $-1.073382$ & $-1.075443$ & $-1.101166$ \\ & ( $0.1998$ ) & ( $0.2056$ ) & ( $0.1992$ ) & ( $0.2121$ ) \\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $-1.069994$ & $-1.073384$ & $-1.075443$ & $-1.132415$ \\ & ( $0.1998$ ) & ( $0.2056$ ) & ( $0.1992$ ) & ( $0.2330$ ) \\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $-1.069994$ & $-1.073386$ & $-1.075444$ & $-1.446044$ \\ & ( $0.1998$ ) & ( $0.2056$ ) & ( $0.1992$ ) & ( $0.5088$ ) \\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $-1.069996$ & $-1.073397$ & $-1.075444$ & $-4.587123$ \\ & ( $0.1998$ ) & ( $0.2056$ ) & ( $0.1992$ ) & ( $3.6698$ ) \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\beta_{2}^{\star}=-0.1$ with small noise (section 4.1).} {Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & \multicolumn{3}{c|}{$\hat{\beta}_{2,\mathrm{LMM}}$ ($-0.1$)} & \multirow{2}{*}{$\hat{\beta}_{2,\mathrm{LGA}}$ ($-0.1$)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline 
\multirow{2}{*}{$O $} & $-0.093152$ & $-0.097752$ & $-0.100402$ & $-0.109554$ \\ & ( $0.1947$ ) & ( $0.1964$ ) & ( $0.1954$ ) & ( $0.1995$ ) \\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $-0.093152$ & $-0.097752$ & $-0.100402$ & $-0.109502$ \\ & ( $0.1947$ ) & ( $0.1964$ ) & ( $0.1954$ ) & ( $0.1995$ )\\ \hline \multirow{2}{*}{$10^{-7}I_2$} & $-0.093152$ & $-0.097752$ & $-0.100403$ & $-0.109275$ \\ & ( $0.1947$ ) & ( $0.1964$ ) & ( $0.1954$ ) & ( $0.1997$ )\\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $-0.093153$ & $-0.097751$ & $-0.100404$ & $-0.106142$ \\ & ( $0.1947$ ) & ( $0.1964$ ) & ( $0.1954$ ) & ( $0.2024$ )\\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $-0.093152$ & $-0.097750$ & $-0.100404$ & $-0.074222$ \\ & ( $0.1947$ ) & ( $0.1964$ ) & ( $0.1954$ ) & ( $0.2333$ )\\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $-0.093151$ & $-0.097747$ & $-0.100403$ & $0.237936$ \\ & ( $0.1948$ ) & ( $0.1964$ ) & ( $0.1954$ ) & ( $0.6836$ ) \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\beta_{3}^{\star}=-0.1$ with small noise (section 4.1).} {Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & \multicolumn{3}{c|}{$\hat{\beta}_{3,\mathrm{LMM}}$ (-0.1)} & \multirow{2}{*}{$\hat{\beta}_{3,\mathrm{LGA}}$ (-0.1)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline \multirow{2}{*}{$O $} & $-0.095493$ & $-0.095852$ & $-0.097983$ & $-0.108282$ \\ & ( $0.1913$ ) & ( $0.1931$ ) & ( $0.1931$ ) & ( $0.1948$ ) \\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $-0.095493$ & $-0.095852$ & $-0.097983$ & $-0.108265$ \\ & ( $0.1913$ ) & ( $0.1931$ ) & ( $0.1931$ ) & ( $0.1949$ ) \\ \hline \multirow{2}{*}{$10^{-7}I_2$} & $-0.095492$ & $-0.095852$ & $-0.097982$ & $-0.107915$ \\ & ( $0.1913$ ) & ( $0.1931$ ) & ( $0.1931$ ) & ( $0.1952$ ) \\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $-0.095493$ & $-0.095853$ & $-0.097982$ & 
$-0.104802$ \\ & ( $0.1913$ ) & ( $0.1931$ ) & ( $0.1931$ ) & ( $0.1978$ ) \\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $-0.095491$ & $-0.095851$ & $-0.097979$ & $-0.073318$ \\ & ( $0.1913$ ) & ( $0.1931$ ) & ( $0.1931$ ) & ( $0.2288$ ) \\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $-0.095487$ & $-0.095846$ & $-0.097979$ & $0.238196$ \\ & ( $0.1913$ ) & ( $0.1931$ ) & ( $0.1931$ ) & ( $0.6808$ ) \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\beta_{4}^{\star}=-1$ with small noise (section 4.1).} {Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & \multicolumn{3}{c|}{$\hat{\beta}_{4,\mathrm{LMM}}$ ($-1$)} & \multirow{2}{*}{$\hat{\beta}_{4,\mathrm{LGA}}$ ($-1$)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline \multirow{2}{*}{$O $} & $-1.055341$ & $-1.064300$ & $-1.070131$ & $-1.092219$ \\ & ( $0.1951$ ) & ( $0.2009$ ) & ( $0.2020$ ) & ( $0.2156$ ) \\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $-1.055341$ & $-1.064301$ & $-1.070132$ & $-1.092547$ \\ & ( $0.1951$ ) & ( $0.2009$ ) & ( $0.2020$ ) & ( $0.2158$ ) \\ \hline \multirow{2}{*}{$10^{-7}I_2$} & $-1.055341$ & $-1.064301$ & $-1.070132$ & $-1.095701$ \\ & ( $0.1951$ ) & ( $0.2009$ ) & ( $0.2020$ ) & ( $0.2177$ ) \\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $-1.055342$ & $-1.064302$ & $-1.070132$ & $-1.126843$ \\ & ( $0.1951$ ) & ( $0.2009$ ) & ( $0.2020$ ) & ( $0.2376$ ) \\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $-1.055342$ & $-1.064303$ & $-1.070134$ & $-1.438228$ \\ & ( $0.1951$ ) & ( $0.2009$ ) & ( $0.2020$ ) & ( $0.5073$ ) \\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $-1.055344$ & $-1.064302$ & $-1.070141$ & $-4.559194$ \\ & ( $0.1951$ ) & ( $0.2009$ ) & ( $0.2020$ ) & ( $3.6493$ ) \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\beta_{5}^{\star}=1$ with small noise (section 4.1).} {Topside values 
in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & \multicolumn{3}{c|}{$\hat{\beta}_{5,\mathrm{LMM}}$ ($1$)} & \multirow{2}{*}{$\hat{\beta}_{5,\mathrm{LGA}}$ ($1$)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline \multirow{2}{*}{$O $} & $1.057539$ & $1.060114$ & $1.062063$ & $1.093013$\\ & ( $0.2713$ ) & ( $0.2802$ ) & ( $0.2751$ ) & ( $0.2863$ ) \\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $1.057539$ & $1.060114$ & $1.062065$ & $1.093291$ \\ & ( $0.2713$ ) & ( $0.2802$ ) & ( $0.2751$ ) & ( $0.2865$ ) \\ \hline \multirow{2}{*}{$10^{-7}I_2$} & $1.057539$ & $1.060114$ & $1.062065$ & $1.095840$ \\ & ( $0.2713$ ) & ( $0.2802$ ) & ( $0.2751$ ) & ( $0.2880$ ) \\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $1.057539$ & $1.060115$ & $1.062066$ & $1.121342$ \\ & ( $0.2713$ ) & ( $0.2802$ ) & ( $0.2751$ ) & ( $0.3029$ ) \\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $1.057537$ & $1.060116$ & $1.062063$ & $1.376932$ \\ & ( $0.2713$ ) & ( $0.2802$ ) & ( $0.2751$ ) & ( $0.5083$ ) \\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $1.057537$ & $1.060123$ & $1.062064$ & $3.936379$ \\ & ( $0.2713$ ) & ( $0.2802$ ) & ( $0.2751$ ) & ( $3.1035$ ) \\ \end{tabular} \end{table} \begin{table}[h] \caption{comparison of estimator for $\beta_{6}^{\star}=1$ with small noise (section 4.1).} {Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c} \multirow{2}{*}{$\Lambda_{\star}$} & \multicolumn{3}{c|}{$\hat{\beta}_{6,\mathrm{LMM}}$ ($1$)} & \multirow{2}{*}{$\hat{\beta}_{6,\mathrm{LGA}}$ ($1$)} \\ & \multicolumn{1}{c}{$\tau=1.8$} & \multicolumn{1}{c}{$\tau=1.9$} & \multicolumn{1}{c|}{$\tau=2.0$} & \\ \hline \multirow{2}{*}{$O $} & $1.046920$ & $1.055245$ & $1.063341$ & $1.076187$ \\ & ( $0.2749$ ) & ( $0.2784$ ) & ( $0.2816$ ) & ( $0.2889$ ) \\ \hline \multirow{2}{*}{$10^{-8}I_2$} & $1.046920$ & $1.055245$
& $1.063341$ & $1.076418$ \\ & ( $0.2749$ ) & ( $0.2784$ ) & ( $0.2816$ ) & ( $0.2891$ ) \\ \hline \multirow{2}{*}{$10^{-7}I_2$} & $1.046920$ & $1.055246$ & $1.063342$ & $1.079131$ \\ & ( $0.2749$ ) & ( $0.2784$ ) & ( $0.2816$ ) & ( $0.2904$ ) \\ \hline \multirow{2}{*}{$10^{-6}I_2$} & $1.046920$ & $1.055246$ & $1.063341$ & $1.104471$ \\ & ( $0.2749$ ) & ( $0.2784$ ) & ( $0.2816$ ) & ( $0.3043$ ) \\ \hline \multirow{2}{*}{$10^{-5}I_2$} & $1.046920$ & $1.055248$ & $1.063345$ & $1.358819$ \\ & ( $0.2749$ ) & ( $0.2784$ ) & ( $0.2816$ ) & ( $0.5021$ ) \\ \hline \multirow{2}{*}{$10^{-4}I_2$} & $1.046918$ & $1.055244$ & $1.063347$ & $3.911360$ \\ & ( $0.2749$ ) & ( $0.2784$ ) & ( $0.2816$ ) & ( $3.0893$ ) \end{tabular} \end{table} \begin{table}[!h] \caption{Estimators with large noise (section 4.2). Topside values in cells denote empirical means; downside ones denote RMSE.} \centering \begin{tabular}{c|c|c|c|c|c} & & \multicolumn{3}{c|}{LMM} & \multirow{2}{*}{LGA}\\ & true value & $\tau=1.8$ & $\tau=1.9$ & $\tau=2.0$ & \\\hline \multirow{2}{*}{$\mathrm{\hat{\Lambda}^{\left(1,1\right)}}$} & \multirow{2}{*}{$\mathrm{1}$} & \multicolumn{4}{c}{ $1.000106$ } \\ & & \multicolumn{4}{c}{( $0.001678$ )} \\\hline \multirow{2}{*}{$\mathrm{\hat{\Lambda}^{\left(1,2\right)}}$} & \multirow{2}{*}{$\mathrm{0}$} & \multicolumn{4}{c}{ $1.796561\times10^{-5}$ } \\ & & \multicolumn{4}{c}{( $0.001226$ )} \\\hline \multirow{2}{*}{$\mathrm{\hat{\Lambda}^{\left(2,2\right)}}$} & \multirow{2}{*}{$\mathrm{1}$} & \multicolumn{4}{c}{ $1.000030$ } \\ & & \multicolumn{4}{c}{( $0.001826$ )} \\\hline \multirow{2}{*}{$\hat{\alpha}_1$} & \multirow{2}{*}{$1$} & $0.996805$ & $1.000907$ & $1.017547$ & $178.068993$ \\ & & ( $0.0217$ ) & ( $0.0267$ ) & ( $0.0396$ ) & ( $177.0733$ ) \\\hline \multirow{2}{*}{$\hat{\alpha}_2$} & \multirow{2}{*}{$0.1$} & $0.098530$ & $0.098489$ & $0.097489$ & $0.313443$ \\ & & ( $0.0155$ ) & ( $0.0196$ ) & ( $0.0250$ ) & ( $9.9737$ ) \\\hline \multirow{2}{*}{$\hat{\alpha}_3$} & 
\multirow{2}{*}{$1$} & $0.996391$ & $1.000373$ & $1.018511$ & $177.962836$ \\ & & ( $0.0211$ ) & ( $0.0271$ ) & ( $0.0383$ ) & ( $176.9738$ ) \\\hline \multirow{2}{*}{$\hat{\beta}_1$} & \multirow{2}{*}{$-1$} & $-1.050453$ & $-1.05002$ & $-1.048604$ & $3.51\times10^7$ \\ & & ( $0.1919$ ) & ( $0.1927$ ) & ( $0.1908$ ) & ( $1.11\times 10^9$ )\\\hline \multirow{2}{*}{$\hat{\beta}_2$} & \multirow{2}{*}{$-0.1$} & $-0.103636$ & $-0.105248$ & $-0.106449$ & $1.37\times10^8$ \\ & & ( $0.1931$ ) & ( $0.1955$ ) & ( $0.1987$ ) & ( $4.34\times 10^9$ ) \\\hline \multirow{2}{*}{$\hat{\beta}_3$} & \multirow{2}{*}{$-0.1$} & $-0.084856$ & $-0.086533$ & $-0.087520$ & $1.27\times10^8$ \\ & & ( $0.1908$ ) & ( $0.1913$ ) & ( $0.1920$ ) & ( $4.03\times 10^9$ ) \\\hline \multirow{2}{*}{$\hat{\beta}_4$} & \multirow{2}{*}{$-1$} & $-1.046981$ & $-1.04722$ & $-1.045288$ & $-4.57\times10^7$ \\ & & ( $0.1891$ ) & ( $0.1916$ ) & ( $0.1923$ ) & ( $1.44\times 10^9$ )\\\hline \multirow{2}{*}{$\hat{\beta}_5$} & \multirow{2}{*}{$1$} & $1.032952$ & $1.034185$ & $1.033792$ & $3.89\times10^6$ \\ & & ( $0.2719$ ) & ( $0.2725$ ) & ( $0.2715$ ) & ( $1.23\times 10^8$ ) \\\hline \multirow{2}{*}{$\hat{\beta}_6$} & \multirow{2}{*}{$1$} & $1.041460$ & $1.043000$ & $1.042522$ & $1.57\times10^7$ \\ & & ( $0.2716$ ) & ( $0.2744$ ) & ( $0.2753$ ) & ( $4.96\times 10^8$ ) \\ \end{tabular} \end{table} \clearpage \section{Real data analysis: Met Data of NWTC} We analyse the wind dataset called Met Data, provided by the National Wind Technology Center in the United States. Met Data records several wind-related quantities, such as velocity, speed and temperature, at the towers named M2, M4 and M5, which have recording facilities at several altitudes.
Statistical modelling of wind data with stochastic differential equations has attracted interest: \citep{BB16} fits a Cox-Ingersoll-Ross model to wind speed data and reports that the CIR model outperforms other methods, such as static models, in terms of prediction precision; \citep{VASKV16} models both power generation by windmills and power demand with Ornstein-Uhlenbeck processes after some preprocessing and examines their performance for practical purposes. We focus on the 2-dimensional data with 0.05-second resolution representing wind velocity, labelled Sonic x and Sonic y (119M), at the M5 tower, from 00:00:00 on 1st July, 2017 to 20:00:00 on 5th July, 2017. For details, see \citep{NWTC}. We fit the 2-dimensional Ornstein-Uhlenbeck process such that \begin{align} \dop \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}}=\parens{\crotchet{\begin{matrix} \beta_1 & \beta_3\\ \beta_2 & \beta_4 \end{matrix}}\crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}}+\crotchet{\begin{matrix} \beta_5\\ \beta_6 \end{matrix}}}\dop t + \crotchet{\begin{matrix} \alpha_1 & \alpha_2\\ \alpha_2 & \alpha_3 \end{matrix}}\dop w_t,\ \crotchet{\begin{matrix} X_0\\ Y_0 \end{matrix}}=\crotchet{\begin{matrix} x_0\\ y_0 \end{matrix}}, \end{align} where $\left(x_0, y_0\right)$ is the initial value. We summarise the relevant quantities as follows. \begin{table}[h] \begin{center} \caption{Relevant Quantities in Section 5} \begin{tabular}{c|c} quantity & approximation \\\hline $n$ & 8352000\\ $h_{n}$ & $6.944444\times 10^{-6}$\\ ${nh_{n}}$ & 58\\ $nh_{n}^{2}$ & $4.027778\times 10^{-4}$\\ $\tau$ & $1.9$\\ $p_{n}$ & $518$\\ $k_{n}$ & $16123$\\ $\Delta_{n}$ & $0.003597222$\\ $k_{n}\Delta_{n}^{2}$ & $0.2086317$ \end{tabular} \end{center} \end{table} We take 2 hours as the time unit and fix $\tau=1.9$. Our test for noise detection results in $Z=514.0674$ and $p<10^{-16}$; therefore, for any significance level $\alpha\ge 10^{-16}$, the alternative hypothesis $\Lambda\neq O$ is adopted.
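The mechanics behind noise detection in this high-frequency regime can be illustrated with a short simulation: when the observation noise variance dominates the diffusion contribution $\alpha^2 h$ in each increment, half the mean squared increment essentially reads off the noise variance. The sketch below uses a one-dimensional Ornstein-Uhlenbeck path with illustrative parameters; the quadratic-variation-type statistic is a standard choice in this setting and is our assumption for illustration, not necessarily the paper's exact estimator.

```python
import math, random

random.seed(1)

# Simulate a 1-D Ornstein-Uhlenbeck path observed with additive i.i.d. noise:
#   dX_t = beta * X_t dt + alpha dW_t,   Y_i = X_{t_i} + eps_i,  eps_i ~ N(0, Lam).
# Parameter values are illustrative only.
n, h = 100_000, 1e-4
beta, alpha, Lam = -1.0, 1.0, 1e-2

x, Y = 0.0, []
for _ in range(n + 1):
    Y.append(x + random.gauss(0.0, math.sqrt(Lam)))
    # Euler-Maruyama step for the latent diffusion
    x += beta * x * h + alpha * math.sqrt(h) * random.gauss(0.0, 1.0)

# Each increment satisfies E[(Y_{i+1} - Y_i)^2] = 2*Lam + alpha^2 h + o(h),
# so with 2*Lam >> alpha^2 h, half the mean squared increment recovers Lam.
Lam_hat = sum((Y[i + 1] - Y[i]) ** 2 for i in range(n)) / (2 * n)
print(Lam_hat)  # close to Lam = 0.01
```

With $2\Lambda = 2\times 10^{-2}$ against $\alpha^2 h = 10^{-4}$, the noise term dominates by two orders of magnitude, which is why such a simple statistic suffices before the quasi-likelihood steps.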
Our estimator gives the fit \begin{align} \dop \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}} &=\parens{\crotchet{\begin{matrix} -3.04 & -0.18\\ -0.25 & -4.19 \end{matrix}} \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}} + \crotchet{\begin{matrix} 2.95\\ -2.24 \end{matrix}}}\dop t+\crotchet{\begin{matrix} 12.18 & -0.25\\ -0.25 & 11.48 \end{matrix}}\dop w_t, \end{align} with the noise variance estimate \begin{align} \hat{\Lambda}_{n}=\crotchet{\begin{matrix} 6.67\times 10^{-3} & 3.75\times 10^{-5}\\ 3.75\times 10^{-5} & 6.79\times 10^{-3}\\ \end{matrix}}; \end{align} the diffusion fit with the LGA method, which is asymptotically efficient if $\Lambda=O$, gives \begin{align} \dop \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}} &=\parens{\crotchet{\begin{matrix} -67.53 & -9.29\\ -10.37 & -104.45 \end{matrix}} \crotchet{\begin{matrix} X_t\\ Y_t \end{matrix}} + \crotchet{\begin{matrix} 63.27\\ -50.24 \end{matrix}}}\dop t+\crotchet{\begin{matrix} 43.82 & 0.13\\ 0.13 & 44.22 \end{matrix}}\dop w_t. \end{align} These two estimators clearly give different values for the same data. If $\Lambda=O$ held, the two fits should be reasonably similar to each other. Since we have already obtained the result $\Lambda\neq O$, there is no reason to adopt the latter estimate. \section{Conclusion} Our contribution is composed of three parts: proofs of the asymptotic properties of adaptive estimation for diffusion-plus-noise models and of the noise detection test, a simulation study of the asymptotic results developed above, and a real data analysis showing that there exist situations where the proposed method should be adopted. The adaptive ML-type estimators introduced in Section 3.1 are simple: once the much simpler estimator for the noise variance has been computed, it only remains to optimise quasi-likelihood functions quite similar to the Gaussian likelihood.
The test for noise detection is nonparametric; therefore, there is no need to specify any model structure or quantities other than $\tau$ and the time unit. The simulation study confirms that our methodology works well regardless of the size of the noise variance: the proposed estimators perform better than, or at least as well as, the LGA method. The real data analysis shows that our methodology is certainly helpful for analysing some high-frequency data. As mentioned in the introduction, the high-frequency observation setting can alleviate some of the complexity and difficulty of state-space modelling. It results in a simple and unified methodology for both linear and nonlinear models, since we can write down the quasi-likelihood functions whether the model is linear or not. The innovation in a state-space model can depend on the latent process itself; therefore, we can allow the processes to have fat tails, which have been regarded as a stylised fact in financial econometrics in recent decades. The amount of real-time data available today will continue to grow at such a brisk pace that diffusion-plus-noise modelling with these desirable properties will become useful in an ever wider range of situations. \begin{figure}[h] \centering \includegraphics[bb=0 0 720 480,width=12cm]{plot_MetX_960_640.png} \caption{Plot of the x-axis component, Met Data} \centering \includegraphics[bb=0 0 720 480,width=12cm]{plot_MetY_960_640.png} \caption{Plot of the y-axis component, Met Data} \end{figure} \clearpage \section*{Acknowledgement} This work was partially supported by the Overseas Study Program of MMDS, JST CREST, JSPS KAKENHI Grant Number JP17H01100 and the Cooperative Research Program of the Institute of Statistical Mathematics.
\section{Introduction} Let $d\in\mathbb{N}$ and $(\mathbb{S}^{d-1},\sigma_{d-1})$ denote the $(d-1)$-dimensional unit sphere equipped with the standard surface measure $\sigma_{d-1}$. We omit the subscript on $\sigma_{d-1}$ when clear from the context and denote the total surface measure of this unit sphere by \begin{equation}\label{Intro_area_sphere} \omega_{d-1}:=\sigma\big(\S^{d-1}\big) = \frac{2\,\pi^{d/2}}{\Gamma(d/2)}. \end{equation} Given $r >0$ and $x_0 \in \mathbb{R}^d$, we denote by $B(x_0,r)$ the open ball of radius $r$ centered at $x_0$. If $x_0 = 0$ we simply write $B(r )$. If $f \in L^1(\S^{d-1})$, we define the Fourier transform of the measure $f\sigma$ by \begin{equation*}\label{RN_derivative_conv} \widehat{f\sigma}(\xi):=\int_{\mathbb{S}^{d-1}} e^{- i \zeta\cdot \xi}\, f(\zeta)\,\text{\rm d}\sigma(\zeta)\ ; \ \ (\xi \in \mathbb{R}^d). \end{equation*} Our primary goal in this note is to find the sharp forms and characterize the extremizers of some adjoint Fourier restriction inequalities \begin{equation}\label{Intro_extension} \big\|\widehat{f \sigma}\big\|_{L^{p}(\mathbb{R}^{d})} \lesssim \|f\|_{L^{q}(\S^{d-1})}. \end{equation} The full range $(d,p,q)$ for which \eqref{Intro_extension} holds is not yet fully understood, and this is the theme of the restriction conjecture in harmonic analysis (see \cite{T} for a survey on this theory). For our purposes the classical restriction theory is enough, as we consider only cases where the inequality \eqref{Intro_extension} is already established. \medskip Building up on the work of Christ and Shao \cite{CS, CS2}, Foschi \cite{F} recently obtained the sharp form of \eqref{Intro_extension} in the Stein-Tomas endpoint case $(d,p,q) = (3,4,2)$, showing that the constant functions are global extremizers. Here we extend this paradigm to other suitable triples $(d,p,q)$. 
In fact, defining \begin{equation*} C(d,p,q) = \sup_{\stackrel{f \in L^{q}(\S^{d-1})}{ f \neq 0} }\frac{\big\|\widehat{f \sigma}\big\|_{L^{p}(\mathbb{R}^{d})}}{\|f\|_{L^{q}(\S^{d-1})}}\,, \end{equation*} our first result is the following: \begin{theorem}\label{Thm1} Let $(d,p,q) = (d,2k, q)$ with $d,k \in \mathbb{N}$ and $q\in \mathbb{R}^+ \cup \{\infty\}$ satisfying: \begin{itemize} \item[(a)] $k = 2$, $q \geq 2$ and $3 \leq d\leq 7$; \item[(b)] $k = 2$, $q \geq 4$ and $d \geq 8$; \item[(c)] $k \geq 3$, $q \geq 2k$ and $d \geq 2$. \end{itemize} Then \begin{equation}\label{Intro_sharp_constant} C(d,p,q) = \omega_{d-1}^{-1/q} \ \|\widehat{\sigma}_{d-1}\|_{L^{p}(\mathbb{R}^{d})}. \end{equation} Moreover, the complex-valued extremizers of \eqref{Intro_extension} are given by \begin{equation}\label{CV_ext} f(\zeta) = c \,e^{i\xi\cdot \zeta}, \end{equation} where $c \in \mathbb{C}\setminus \{0\}$ and $\xi \in \mathbb{R}^d$. \end{theorem} By Plancherel's theorem we have \begin{equation}\label{Intro_Plancherel} \| \widehat{\sigma}\|_{L^{2k}(\mathbb{R}^{d})} = (2\pi)^{d/2k}\,\|\sigma * \sigma * \ldots* \sigma\|_{L^{2}(\mathbb{R}^{d})}^{1/k}, \end{equation} where the convolution on the right-hand side is $k$-fold. We remark that, in principle, the $k$-fold convolution of the surface measure $$\sigma_{d-1}^{(k)} = \sigma_{d-1} * \ldots* \sigma_{d-1}$$ is a finite measure on $\mathbb{R}^d$ supported on the ball $\overline{B(k)}$. For $k \geq 2$, the measure $\sigma_{d-1}^{(k)}$ and the Lebesgue measure are mutually absolutely continuous on $\overline{B(k)}$ (see \cite[Eq. (2.7)]{Ch}), and we make the identification of $\sigma_{d-1}^{(k)}$ with its Radon-Nikodym derivative, which is a radial function. 
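To make the measure $\sigma_{d-1}^{(k)}$ concrete in the simplest case $d=3$, $k=2$: if $\zeta_1,\zeta_2$ are independent uniform points on $\S^{2}$, then $\zeta_1\cdot\zeta_2$ is uniform on $[-1,1]$, so $R^2 = |\zeta_1+\zeta_2|^2 = 2 + 2\,\zeta_1\cdot\zeta_2$ is uniform on $[0,4]$, giving $\mathbb{E}[R^2] = 2$ and $\mathbb{E}[R^4] = 16/3$. A quick Monte Carlo sketch (an illustration only, not part of the proofs):

```python
import math, random

random.seed(7)

def unit_vec3():
    # Uniform point on S^2: normalise a standard Gaussian vector
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    r = math.sqrt(sum(c * c for c in v))
    return [c / r for c in v]

N = 200_000
r2 = []
for _ in range(N):
    a, b = unit_vec3(), unit_vec3()
    r2.append(sum((x + y) ** 2 for x, y in zip(a, b)))

# R^2 = |zeta_1 + zeta_2|^2 should be uniform on [0, 4]
m2 = sum(r2) / N                  # E[R^2] = 2
m4 = sum(v * v for v in r2) / N   # E[R^4] = 16/3 ~ 5.333
print(m2, m4)
```

The uniformity of $R^2$ is exactly the statement that the Radon-Nikodym derivative of $\sigma_2 \ast \sigma_2$ is radial and proportional to $1/|\xi|$ on $\overline{B(2)}$, matching the explicit formula for the two-fold convolution recorded below.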
When $k=2$, the value of $\sigma_{d-1} * \sigma_{d-1}$ was explicitly computed in \cite[Proposition A.5]{R} as \begin{equation}\label{Intro_conv_2} \sigma_{d-1}\ast\sigma_{d-1}(\xi)=2^{-d+3}\, \omega_{d-2}\, \frac{1}{|\xi|}(4-|\xi|^2)_+^{\frac{d-3}{2}}, \end{equation} for $\xi \in \mathbb{R}^d$, where $x_+ = \max\{0,x\}$. We provide an alternative proof of \eqref{Intro_conv_2} in Lemma \ref{biconvsigma} below. \medskip Using \eqref{Intro_area_sphere}, \eqref{Intro_Plancherel}, \eqref{Intro_conv_2} and the identity \begin{equation}\label{A2} \int_0^1 t^{w-1} \,(1-t)^{z-1}\,\text{\rm d}t = \frac{\Gamma(w) \Gamma(z)}{\Gamma(w+z)}, \end{equation} valid for $w,z \in \mathbb{C}$ with $\Re(w), \Re(z) >0$, we may simplify \eqref{Intro_sharp_constant} in the case $k=2$ to \begin{equation*} C(d,4,q) = \omega_{d-1}^{-1/q + 1/4}\,\omega_{d-2}^{1/2}\,\,(2\pi)^{d/4}\,\,2^{(d-3)/4} \left[\frac{\Gamma(d-2)\,\Gamma\left(\tfrac{d-2}{2}\right)}{\Gamma\left(\tfrac{3(d-2)}{2}\right)}\right]^{1/4}. \end{equation*} When $k\geq 3$, we may express the value of the sharp constant in \eqref{Intro_sharp_constant} in terms of integrals involving the Bessel function of the first kind $J_v$ defined by $$J_v(z) = \sum_{n=0}^{\infty} \frac{(-1)^n \big(\tfrac12 z\big)^{2n + v}}{n!\, \Gamma(v+n+1)},$$ for $v > -1$ and $\Re(z) >0$. In fact, the Fourier transform of $\sigma_{d-1}$ is given by \begin{equation}\label{Intro_FT_sigma} \widehat{\sigma}_{d-1}(x)=(2\pi)^{d/2}|x|^{(2-d)/2} J_{(d-2)/2}(|x|). \end{equation} If we invert the $k$-th power of this Fourier transform, we find an expression for the $k$-fold convolution of the surface measure \begin{align}\label{Intro_conv_k} \begin{split} \sigma_{d-1}^{(k)}(\xi) & =(2\pi)^{-d} \, (2\pi)^{dk/2} \int_{\mathbb{R}^d} e^{i x \cdot \xi}\,|x|^{(2-d)k/2} J_{(d-2)/2}(|x|)^k\,\text{\rm d}x \end{split} \end{align} for $|\xi| \leq k$, provided this integral converges absolutely. \smallskip Our strategy to prove Theorem \ref{Thm1} has several distinct components. 
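As a consistency check on the simplified closed form for $C(d,4,q)$ above: at $(d,p,q)=(3,4,2)$ it evaluates to $2\pi$, Foschi's sharp Stein-Tomas endpoint constant. The same value is recovered by computing $C(3,4,2) = \omega_2^{-1/2}\,\|\widehat{\sigma}_2\|_{L^4(\mathbb{R}^3)}$ directly, since \eqref{Intro_FT_sigma} with $J_{1/2}(z) = \sqrt{2/(\pi z)}\,\sin z$ gives $\widehat{\sigma}_2(x) = 4\pi \sin|x|/|x|$. The numerical sketch below checks both routes (the quadrature tolerance is ours, not from the paper):

```python
import math

def omega(m):
    # Total surface measure of S^m, i.e. omega_m = 2 pi^{d/2} / Gamma(d/2) with d = m + 1
    d = m + 1
    return 2 * math.pi ** (d / 2) / math.gamma(d / 2)

def C_closed(d, q):
    # The displayed closed form for the sharp constant C(d, 4, q)
    g = math.gamma(d - 2) * math.gamma((d - 2) / 2) / math.gamma(3 * (d - 2) / 2)
    return (omega(d - 1) ** (-1 / q + 1 / 4) * math.sqrt(omega(d - 2))
            * (2 * math.pi) ** (d / 4) * 2 ** ((d - 3) / 4) * g ** 0.25)

# Route 1: closed form at (d, p, q) = (3, 4, 2)
c1 = C_closed(3, 2)

# Route 2: ||sigma_hat||_{L^4(R^3)}^4 = 4*pi*(4*pi)^4 * I with I = int_0^inf sin^4(r)/r^2 dr,
# approximated by the trapezoid rule on [0, R]; the neglected tail is O(1/R).
R, n = 1000.0, 1_000_000
h = R / n
f = lambda r: math.sin(r) ** 4 / (r * r)
I = h * (0.5 * f(R) + sum(f(i * h) for i in range(1, n)))  # integrand vanishes at r = 0
norm4 = (4 * math.pi * (4 * math.pi) ** 4 * I) ** 0.25
c2 = norm4 / math.sqrt(omega(2))

print(c1 / (2 * math.pi), c2 / (2 * math.pi))  # both close to 1
```

The agreement of the two routes reflects the identity $\int_0^\infty \sin^4 r / r^2 \,\text{\rm d}r = \pi/4$, so that $\|\widehat{\sigma}_2\|_{L^4(\mathbb{R}^3)}^4 = 256\pi^6$.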
Firstly, as far as the sharp inequality is concerned, we follow the outline of Christ-Shao \cite{CS, CS2} and Foschi \cite{F} (which corresponds to the case $d=3$) to prove part (a), using a spectral decomposition in spherical harmonics and the Funk-Hecke formula. We are able to extend their method up to dimension $d=7$. In order to prove parts (b) and (c) we take a different path, using a sharp multilinear weighted inequality related to the $k$-fold convolution of the surface measure (Theorem \ref{Thm2}) together with a symmetrization process over the group of rotations $SO(d)$. Secondly, as far as the characterization of the complex-valued extremizers is concerned, our main tool is a complete characterization of the solutions of the Cauchy-Pexider functional equation for sumsets of the sphere given by Theorem \ref{Sec3_Char_ext}. This builds upon previous work by Christ-Shao \cite{CS2} and Charalambides \cite{Ch}. \medskip Our second result is the following multilinear weighted adjoint restriction inequality. \begin{theorem}\label{Thm2} Let $d,k \geq 2$ and $f_j \in L^1(\S^{d-1})$, $f_j \neq 0$, $j =1,2,\ldots, k$. Then we have \begin{equation}\label{Intro_weighted} \left\|\prod_{j=1}^k \widehat{f_j\sigma}\right\|_{L^{2}(\mathbb{R}^d)}^2 \leq (2\pi)^{d} \int_{(\S^{d-1})^k} \sigma^{(k)}(\zeta_1 + \zeta_2 + \ldots+ \zeta_k) \,\left(\prod_{j=1}^k |f_j(\zeta_j)|^2\right)\, \text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2)\, \ldots\, \text{\rm d}\sigma(\zeta_k). \end{equation} If $(d,k) \neq (2,2)$ and the right-hand side of \eqref{Intro_weighted} is finite, we have equality if and only if \begin{equation}\label{Intro_extremizers} f_j(\zeta) = c_j\,e^{\nu \cdot \zeta} \end{equation} for $j = 1,2, \ldots, k$, where $c_1, c_2, \ldots,c_k \in \mathbb{C} \setminus\{0\}$ and $\nu \in \mathbb{C}^d$. \end{theorem} \medskip Using \eqref{Intro_conv_2}, we may specialize the inequality \eqref{Intro_weighted} to the case $k=2$ to obtain the following corollary. 
\medskip \begin{corollary}\label{Cor3} Let $d \geq 2$ and $f_1,f_2 \in L^1(\S^{d-1})$ with $f_1,f_2 \neq 0$. Then we have \begin{equation}\label{Intro_weighted_case2} \left\| \widehat{f_1\sigma} \, \widehat{f_2\sigma}\right\|_{L^{2}(\mathbb{R}^d)}^2 \leq (2\pi)^{d}\,2^{(-d+2)/2} \,\omega_{d-2} \int_{(\S^{d-1})^2} \!\left[\frac{(1 - \zeta_1\cdot \zeta_2)^{d-3}}{(1 + \zeta_1 \cdot \zeta_2)}\right]^{1/2}\!|f_1(\zeta_1)|^2\,|f_2(\zeta_2)|^2\, \text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2). \end{equation} If $ d \geq 3$ and the right-hand side of \eqref{Intro_weighted_case2} is finite, we have equality if and only if \begin{equation*} f_1(\zeta) = c_1\,e^{\nu \cdot \zeta}\ \ \ \textrm{ and } \ \ \ f_2(\zeta) = c_2\,e^{\nu \cdot \zeta}\,, \end{equation*} where $c_1, c_2 \in \mathbb{C}\setminus\{0\}$ and $\nu \in \mathbb{C}^d$. \end{corollary} In the case $d=2$, a version of the inequality \eqref{Intro_weighted_case2} can be found in the work of Foschi and Klainerman \cite[Example 17.5]{FK}, where it is described as ``an interesting formula''. In the case $d=3$, the weighted inequality \eqref{Intro_weighted_case2} already appears in the work of Foschi \cite[Lemma 4.1]{F}. A novel feature here is the complete characterization of the extremizers \eqref{Intro_extremizers}. The next result is a key tool to characterize the extremizers in Theorems \ref{Thm1} and \ref{Thm2}. \begin{theorem} \label{Sec3_Char_ext} Let $d \geq 2$, $k \geq 2$ and $(d,k) \neq (2,2)$. Let $f_j: \S^{d-1} \to \mathbb{C}$ $(1 \leq j \leq k)$ and $h:\overline{B(k)} \to \mathbb{C}$ be measurable functions such that \begin{equation} \label{Sec3_Functional_Equation} f_1(\zeta_1)f_2(\zeta_2) \ldots f_k(\zeta_k) = h(\zeta_1 + \zeta_2 + \ldots+ \zeta_k) \end{equation} for $\sigma^k-$a.e. $(\zeta_1, \zeta_2, \ldots, \zeta_k) \in (\S^{d-1})^k$.
Then one of the following holds: \begin{itemize} \item[(i)] There exist $c_1, c_2, \ldots,c_k \in \mathbb{C} \setminus\{0\}$ and $\nu \in \mathbb{C}^d$ such that \begin{equation*} f_j(\zeta) = c_j\,e^{\nu \cdot \zeta} \end{equation*} for $\sigma-$a.e. $\zeta \in \S^{d-1}$, $j=1,2,\ldots, k$. \smallskip \item[(ii)] $f_j(\zeta) = 0$ for $\sigma-$a.e. $\zeta \in \S^{d-1}$, for some $j$ with $1 \leq j \leq k$. \end{itemize} \end{theorem} In the spirit of Theorem \ref{Thm2}, similar multilinear weighted adjoint restriction inequalities were obtained for the paraboloid \cite{C} and cone \cite{BR}, in connection with sharp Strichartz and Sobolev-Strichartz estimates for the Schr\"{o}dinger and wave equations, respectively (in the context of the wave equation, see also \cite{KM1, KM2, KM3} for related inequalities with different `null' weights). In retrospect, the works of Kunze \cite{K}, Foschi \cite{F2} and Hundertmark-Zharnitsky \cite{HZ} were the pioneering ones on the existence and classification of extremizers for adjoint restriction inequalities (over the paraboloid and cone) in low dimensions. This line of research flourished and these papers were followed by a pool of very interesting works in the interface of extremal analysis and differential equations, see for instance \cite{BBCH, B, FVV, FVV2, HS, OS, OR, Q, Ra, Sh, Sh2}, in addition to the ones previously cited in this introduction. \section{Proof of Theorem \ref{Thm2}} \subsection{Preliminaries} Let $k \geq 2$ and $f_j \in L^1(\S^{d-1})$, $j =1,2,\ldots, k$. The convolution $(f_1\sigma * f_2\sigma * \ldots* f_k\sigma)$ is a finite measure defined on the Borel subsets $E\subset \mathbb{R}^d$ by \begin{equation*} (f_1\sigma * f_2\sigma * \ldots* f_k\sigma)(E) = \int_{(\S^{d-1})^k} \chi_E(\zeta_1 + \zeta_2 + \ldots+ \zeta_k)\, \left(\prod_{j=1}^k\,f_j(\zeta_j)\right)\text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2) \, \ldots \, \text{\rm d}\sigma(\zeta_k).
\end{equation*} It is then clear that this measure is supported on $\overline{B(k)}$. Since $f_j \in L^1(\S^{d-1})$, the measure $(f_1\sigma * f_2\sigma * \ldots* f_k\sigma)$ is absolutely continuous with respect to $\sigma^{(k)}$, and therefore it is absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$. We identify $(f_1\sigma * f_2\sigma * \ldots* f_k\sigma)$ with its Radon-Nikodym derivative with respect to the Lebesgue measure, writing it in the following way (see for instance \cite{F} or \cite[Remark 3.1]{FK}) \begin{equation}\label{Sec2_delta_form} (f_1\sigma * f_2\sigma * \ldots* f_k\sigma)(\xi) = \int_{(\S^{d-1})^k} \delta_d(\xi - \zeta_1 - \zeta_2 - \ldots - \zeta_k)\, \left(\prod_{j=1}^k\,f_j(\zeta_j)\right)\text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2)\, \ldots \, \text{\rm d}\sigma(\zeta_k), \end{equation} where $\delta_d$ denotes the $d$-dimensional Dirac delta distribution. The alternative expression \eqref{Sec2_delta_form} is particularly useful in some computations, as exemplified by the next result. \begin{lemma}\label{biconvsigma} Let $d\geq 2$. The surface measure $\sigma_{d-1}$ on $\S^{d-1}$ satisfies $$\sigma_{d-1}\ast\sigma_{d-1}(\xi)=2^{-d+3}\,\omega_{d-2}\,\frac{1}{|\xi|}(4-|\xi|^2)_+^{\frac{d-3}{2}}$$ for $\xi \in \mathbb{R}^d$, where $x_+ = \max\{0,x\}$. \end{lemma} \begin{proof} Let $\sigma:=\sigma_{d-1}$. Following \cite{F}, the surface measure on the sphere may be written as \begin{equation}\label{Sec2_sigma_delta} \text{\rm d}\sigma(\xi) = \delta(1 - |\xi|)\,\text{\rm d}\xi = 2\,\delta(1 - |\xi|^2)\,\text{\rm d}\xi, \end{equation} where $\delta$ denotes the one-dimensional Dirac delta and $\text{\rm d}\xi$ denotes the Lebesgue measure on $\mathbb{R}^d$. 
Using \eqref{Sec2_delta_form} and \eqref{Sec2_sigma_delta} we have \begin{align*} \sigma\ast\sigma(\xi)&=\int_{\mathbb{R}^d}\delta(1-|\xi-\nu|)\,\delta(1-|\nu|)\,\text{\rm d}\nu = \int_{\S^{d-1}}\delta(1-|\xi-\nu|)\, \text{\rm d}\sigma(\nu)\\ &= 2\int_{\S^{d-1}}\delta\big(1-|\xi-\nu|^2\big)\,\text{\rm d}\sigma(\nu) =2\int_{\S^{d-1}}\delta\big(2\xi\cdot\nu-|\xi|^2\big)\,\text{\rm d}\sigma(\nu)\\ &=\frac{2}{|\xi|}\int_{\S^{d-1}}\delta\left(2\frac{\xi}{|\xi|}\cdot\nu-|\xi|\right)\,\text{\rm d}\sigma(\nu). \end{align*} Passing to polar coordinates \cite[Lemma A.5.2]{DX} in the sphere $\S^{d-1}$, we find \begin{align*} \sigma\ast\sigma(\xi)&=\omega_{d-2}\,\frac{2}{|\xi|}\int_{-1}^1\delta\big(2 u-|\xi|\big)\,(1-u^2)^{\frac{d-3}{2}}\,\text{\rm d}u\\ &=\omega_{d-2}\,\frac{1}{|\xi|}\int_{-1}^1\delta\left(u-\frac{|\xi|}{2}\right)(1-u^2)^{\frac{d-3}{2}}\,\text{\rm d}u\\ &=\omega_{d-2}\,\frac{1}{|\xi|}\,\left(1-\frac{|\xi|^2}{4}\right)^{\frac{d-3}{2}}\,, \end{align*} if $|\xi|/2\in (-1,1)$. The result follows from this. \end{proof} \subsection{The sharp inequality} Consider the multilinear form \begin{equation*} T(f_1, f_2, \ldots,f_k)(\xi):= (f_1\sigma * f_2\sigma * \ldots* f_k\sigma)(\xi). \end{equation*} Applying the Cauchy-Schwarz inequality in \eqref{Sec2_delta_form} with respect to the measure $\delta_d(\xi - \zeta_1 - \zeta_2 - \ldots - \zeta_k)\,\text{\rm d}\sigma(\zeta_1)\, \ldots \, \text{\rm d}\sigma(\zeta_k)$ we find \begin{align}\label{Sec2_CS} \begin{split} |T(f_1, f_2, \ldots,f_k)(\xi)|^2 & \leq T({\bf 1}, {\bf 1}, \ldots,{\bf 1})(\xi)\,.\,T\big(|f_1|^2, |f_2|^2, \ldots\,,|f_k|^2\big)(\xi)\\ & = \sigma_{d-1}^{(k)}(\xi)\,\,T\big(|f_1|^2, |f_2|^2, \ldots \, ,|f_k|^2\big)(\xi), \end{split} \end{align} where ${\bf 1}$ denotes the constant function equal to $1$. 
Using Plancherel's theorem, \eqref{Sec2_delta_form} and \eqref{Sec2_CS} we arrive at \begin{align}\label{Sec2_weighted_ineq} \begin{split} &\left\|\prod_{j=1}^k \widehat{f_j\sigma}\right\|_{L^{2}(\mathbb{R}^d)}^2 = (2\pi)^d \left\|T(f_1, f_2, \ldots,f_k)\right\|_{L^{2}(\mathbb{R}^d)}^2 \\ & \ \ \ \ \ \leq (2\pi)^d \int_{\mathbb{R}^d} \sigma_{d-1}^{(k)}(\xi)\,\,T\big(|f_1|^2, |f_2|^2, \ldots \, ,|f_k|^2\big)(\xi)\,\text{\rm d}\xi\\ & \ \ \ \ \ = (2\pi)^{d} \int_{\mathbb{R}^d}\int_{(\S^{d-1})^k} \sigma^{(k)}(\xi) \, \delta_d(\xi - \zeta_1 - \ldots - \zeta_k)\,\left(\prod_{j=1}^k |f_j(\zeta_j)|^2\right)\, \text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2)\, \ldots\, \text{\rm d}\sigma(\zeta_k) \,\text{\rm d}\xi\\ & \ \ \ \ \ = (2\pi)^{d} \int_{(\S^{d-1})^k} \sigma^{(k)}(\zeta_1 + \zeta_2 + \ldots+ \zeta_k) \,\left(\prod_{j=1}^k |f_j(\zeta_j)|^2\right)\, \text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2)\, \ldots\, \text{\rm d}\sigma(\zeta_k), \end{split} \end{align} which is our desired inequality. \medskip If $(d,k) \neq (2,2)$, both sides of \eqref{Sec2_weighted_ineq} are finite for $f_1 = f_2 = \ldots= f_k = {\bf 1}$, in which case we have equality. In fact, in this case, both sides are equal to $\|\widehat{\sigma}\|_{L^{2k}(\mathbb{R}^d)}^{2k} = (2\pi)^d \,\sigma_{d-1}^{(2k)}(0)$, which is finite by \eqref{Intro_conv_k} since $J_{v}(x) = O( |x|^{-1/2})$ as $x \to \infty$. This shows that our inequality is sharp. \subsection{The cases of equality} \label{CE} Assume that the right-hand side of \eqref{Sec2_weighted_ineq} is finite and that we have equality in \eqref{Sec2_weighted_ineq}. Then the right-hand side of \eqref{Sec2_CS} is finite and we have equality in \eqref{Sec2_CS} for all $\xi$ in a subset $A_1 \subset \overline{B(k)}$ of full Lebesgue measure (note that both sides of \eqref{Sec2_CS} are zero for $\xi \notin \overline{B(k)}$). 
\smallskip Let $\xi \in A_1$ and consider the singular measure on $(\S^{d-1})^k$ given by \begin{equation}\label{Def_Psi} \d \Psi_{\xi} = \delta_d(\xi - \zeta_1 - \zeta_2 - \ldots - \zeta_k)\,\text{\rm d}\sigma(\zeta_1)\, \ldots \, \text{\rm d}\sigma(\zeta_k). \end{equation} The condition of equality in the Cauchy-Schwarz inequality tells us that there exists a function $h$ on $A_1$ such that \begin{equation}\label{Sec2_Cond_eq_Sigma} f_1(\zeta_1)f_2(\zeta_2) \ldots f_k(\zeta_k) = h(\xi) \end{equation} for $\Psi_{\xi}$-a.e. $(\zeta_1, \zeta_2, \ldots, \zeta_k) \in (\S^{d-1})^k$. If this is the case, then \begin{equation*} h(\xi) = \frac{\int_{(\S^{d-1})^k} f_1(\zeta_1)f_2(\zeta_2) \ldots f_k(\zeta_k)\,\d\Psi_{\xi}(\zeta_1,\zeta_2, \ldots, \zeta_k)}{\int_{(\S^{d-1})^k} \d\Psi_{\xi}(\zeta_1,\zeta_2, \ldots, \zeta_k)} = \frac{(f_1\sigma * f_2\sigma * \ldots* f_k\sigma)(\xi)}{\sigma^{(k)}(\xi)}\,, \end{equation*} and we see that $h$ is a Lebesgue measurable function. Note that $\sigma^{(k)}(\xi) >0$ for all $|\xi| < k$ (this follows from the explicit evaluation in Lemma \ref{biconvsigma} and induction on $k$) and we might have $\sigma^{(k)}(\xi) = +\infty$ only on a set of Lebesgue measure zero. Consider the set \begin{equation*} E = \left\{(\zeta_1,\zeta_2, \ldots, \zeta_k) \in (\S^{d-1})^k; \ \ f_1(\zeta_1)f_2(\zeta_2) \ldots f_k(\zeta_k) \neq h(\zeta_1 + \zeta_2 + \ldots+ \zeta_k)\right\} \end{equation*} and let $\sigma^{k}$ denote the product measure on $(\S^{d-1})^k$. We claim that $\sigma^k (E) = 0$. 
In fact, \begin{align*}\label{Sec2_Cal_E_Etilde} \begin{split} \sigma^k (E) &= \int_{(\S^{d-1})^k} \chi_{E}(\zeta_1, \ldots, \zeta_k) \,\text{\rm d}\sigma(\zeta_1)\, \ldots\, \text{\rm d}\sigma(\zeta_k)\\ & = \int_{(\S^{d-1})^k} \int_{\mathbb{R}^d} \chi_{E}(\zeta_1, \ldots, \zeta_k)\,\delta_d(\xi - \zeta_1 - \zeta_2 - \ldots - \zeta_k)\,\text{\rm d}\xi\,\text{\rm d}\sigma(\zeta_1)\, \ldots\, \text{\rm d}\sigma(\zeta_k)\\ & = \int_{\mathbb{R}^d} \int_{(\S^{d-1})^k} \chi_{E}(\zeta_1, \ldots, \zeta_k)\, \d\Psi_{\xi} (\zeta_1, \ldots, \zeta_k)\, \text{\rm d}\xi\\ & = 0, \end{split} \end{align*} where the last step uses that $\Psi_{\xi}(E) = 0$ for every $\xi \in A_1$ (by \eqref{Sec2_Cond_eq_Sigma}, since $\zeta_1 + \zeta_2 + \ldots + \zeta_k = \xi$ on the support of $\Psi_{\xi}$) and that the complement of $A_1$ has Lebesgue measure zero. Therefore, for this measurable $h: \overline{B(k)} \to \mathbb{C}$, we must have \begin{equation} \label{Sec2_Functional_Equation} f_1(\zeta_1)f_2(\zeta_2) \ldots f_k(\zeta_k) = h(\zeta_1 + \zeta_2 + \ldots+ \zeta_k) \end{equation} for $\sigma^k-$a.e. $(\zeta_1, \zeta_2, \ldots, \zeta_k) \in (\S^{d-1})^k$. \medskip On the other hand, if \eqref{Sec2_Functional_Equation} holds, we can reverse all the steps above to conclude that we have equality a.e. in \eqref{Sec2_CS} and thus equality in \eqref{Sec2_weighted_ineq} (possibly with both sides being infinity). \subsection{Characterization of the extremizers} The characterization of the functions $f_j:\S^{d-1} \to \mathbb{C}$ \,($1 \leq j \leq k$) that satisfy the functional equation \eqref{Sec2_Functional_Equation} is given by Theorem \ref{Sec3_Char_ext}, whose proof we postpone to the next section. Assuming this result, let us conclude the proof of Theorem \ref{Thm2}. \medskip If the right-hand side of \eqref{Intro_weighted} is finite and we have equality, \eqref{Sec2_Functional_Equation} and Theorem \ref{Sec3_Char_ext} show that \eqref{Intro_extremizers} must hold (recall that we are assuming $f_j \neq 0$).
Conversely, if \eqref{Intro_extremizers} holds we have that the right-hand side of \eqref{Intro_weighted} is finite (since all $f_j$'s are uniformly bounded) and, as observed in the previous subsection, we find that equality occurs in \eqref{Intro_weighted}. \section{Proof of Theorem \ref{Sec3_Char_ext} - revisiting the work of Charalambides} \subsection{Preliminaries} In the work \cite{Ch}, M. Charalambides developed a thorough study of the solutions of the Cauchy-Pexider functional equation \eqref{Sec3_Functional_Equation} (for sumsets of general submanifolds $M \subset \mathbb{R}^d$), building on the work initiated by Christ and Shao \cite{CS2} for the sphere $\S^2 \subset \mathbb{R}^3$. In particular, Charalambides \cite{Ch} establishes the following result: \begin{lemma}\label{Lem6_Ch_result} {\rm (} \!\!{\rm cf.} \cite[Theorems 1.2, 1.6 and 1.8]{Ch}\!\!{\rm)} Theorem \ref{Sec3_Char_ext} is true in the cases $k=2$; $d \geq 3$ and $k=3$; $d \geq 2$, under the additional assumption that $\sigma\big(f_j^{-1}(\{0\})\big) = 0$, for all $j =1,2, \ldots, k$. \end{lemma} \noindent Our work in this section is to {\it remove the assumption} $\sigma\big(f_j^{-1}(\{0\})\big) = 0$, $j =1,2, \ldots, k$, in the case of the sphere $\S^{d-1}$, and to extend this result to higher $k$. We start with a lemma that essentially follows from the work of Charalambides. We only indicate the main modifications needed with respect to the corresponding proof in \cite{Ch}. \begin{lemma}\label{Lem7} {\rm (} \!\!{\rm cf.} \cite[Proof of Theorem 1.2]{Ch} \!\!{\rm)} For $d \geq 3$, let $f_1,f_2: \S^{d-1} \to \mathbb{C}$ and $h:\overline{B(2)} \to \mathbb{C}$ be measurable functions satisfying \begin{equation}\label{Sug_Diogo_Sec3} f_1(\zeta_1)f_2(\zeta_2) = h(\zeta_1 + \zeta_2 ) \end{equation} for $\sigma^2-$a.e. $(\zeta_1, \zeta_2) \in (\S^{d-1})^2$.
Then for each $\xi \in B(2) \setminus\{0\}$ there exists a ball $B_{\xi} = B(\xi,r_{\xi}) \subset B(2) \setminus\{0\}$ and a measurable function $H_{\xi}: B_{\xi} + B_{\xi} \to \mathbb{C}$ such that \begin{equation}\label{hH} h(x) \,h(y) = H_{\xi}(x+y) \end{equation} for Lebesgue a.e. $(x,y) \in B_{\xi} \times B_{\xi}$. \end{lemma} \begin{proof} We briefly recall the notation used in \cite{Ch}. Let $M = \S^{d-1}$ and write points in $(\mathbb{R}^d)^4 \times (\mathbb{R}^d)^4$ as $(x,y)$, where $x = (x_1,x_2,x_3,x_4)$ and $y = (y_1, y_2, y_3, y_4)$, so that $x_j, y_j \in \mathbb{R}^d$ for $1 \leq j \leq 4$. Let $\Pi$ be the hyperplane in $(\mathbb{R}^d)^4 \times (\mathbb{R}^d)^4$ defined by $x_1 + y_2 = x_3 + y_4$ and $y_1 + x_2 = y_3 + x_4$ and let $\mathcal{P}_M = (M^4 \times M^4) \cap \Pi$. Let $\mathcal{S}_M$ be the set of smooth points of $\mathcal{P}_M$, i.e. the points where $M^4 \times M^4$ intersects $\Pi$ transversally, and let $\Lambda \subset (\mathbb{R}^d)^4$ be the $3d$-dimensional hyperplane given by the points $(w_1, w_2, w_3,w_4) \in (\mathbb{R}^d)^4$ such that $w_1 + w_2 = w_3 + w_4$. The linear addition map $(\mathbb{R}^d)^4 \times (\mathbb{R}^d)^4 \to (\mathbb{R}^d)^4$ given by $(x,y) \mapsto x+y$ restricts to a smooth map $\pi_M : \mathcal{S}_M \to \Lambda$ and we call $\mathcal{R}_M$ the set of regular points of $\pi_M$, i.e. the points of $\mathcal{S}_M$ where $\pi_M$ is a submersion. Finally, let $$R_M = \{ x + y; \ x,y \in M; \ x \neq \pm y\} = B(2) \setminus \{0\}.$$ \smallskip The crux of the matter here, given $\xi = z_1 \in R_M = B(2) \setminus \{0\}$, is to choose a point $z = (z_1,z_2,z_3,z_4)$, with $z_2 = z_1$, such that $z = \pi_M (x,y)$ for some $(x,y) \in \mathcal{R}_M$ (see \cite[Proof of Theorem 1.2, p. 239, line 6]{Ch}). If this choice can be made, the lemma follows from the argument in \cite[Proof of Theorem 1.2, p. 239, lines 6 - 23]{Ch}.
\smallskip Since $z_1 \in R_M = B(2) \setminus \{0\}$ we start by choosing freely $x_1, y_1 \in \S^{d-1}$ such that $$z_1 = x_1 + y_1.$$ Note that this implies that $x_1 \neq \pm y_1$. Now choose $x_2$ and $y_2$ with $x_2,y_2 \neq \pm x_1, \pm y_1$ and such that $$z_2 = x_2 + y_2 = z_1.$$ Note again that $x_2 \neq \pm y_2$. By \cite[Lemma 2.3]{Ch}, it already follows that the point $(x,y)$ that we are constructing belongs to $\mathcal{S}_M$, and \cite[Eq. (2.4)]{Ch} is partially fulfilled. Note also that $${\rm span}\{x_1, y_1\} \cap {\rm span}\{x_2, y_2\} = {\rm span}\{z_1\}.$$ Now we are relatively free to choose $x_3, x_4, y_3,y_4$. In fact these must satisfy $$x_1 + y_2 = x_3 + y_4$$ and $$y_1 + x_2 = y_3 + x_4,$$ which are the equations defining $\Pi$, and we must complete the conditions \cite[Eq. (2.4) and (2.5)]{Ch} in order to guarantee that the point $(x,y)$ belongs to $\mathcal{R}_M$. Note that $x_1 + y_2$ and $y_1 + x_2$ both belong to $R_M = B(2) \setminus \{0\}$. We can choose, for instance, $x_3$ close (but not equal) to $x_1$ and $y_4$ close (but not equal) to $y_2$, and similarly, $y_3$ close (but not equal) to $x_2$ and $x_4$ close (but not equal) to $y_1$. Therefore we can ensure that $x_j \neq \pm y_j$ for all $j =1,2,3,4$ (this establishes \cite[Eq. (2.4)]{Ch}) and ${\rm span}\{x_3, y_3\} \cap {\rm span}\{x_4, y_4\}$ is close to ${\rm span}\{x_1, x_2\} \cap {\rm span}\{y_1, y_2\}$, which is a line different from ${\rm span}\{z_1\}$, thus leading to $${\rm span}\{x_1, y_1\} \cap {\rm span}\{x_2, y_2\} \cap {\rm span}\{x_3, y_3\} \cap {\rm span}\{x_4, y_4\} = \{0\}.$$ This completes \cite[Eq. (2.5)]{Ch}, which shows that $(x,y) \in \mathcal{R}_M$, and that $$\pi_M(x,y) = (z_1, z_1, z_3, z_4),$$ where $z_3 = x_3 + y_3$ and $z_4 = x_4 + y_4$. \end{proof} \subsection{Proof of Theorem \ref{Sec3_Char_ext}} Throughout this proof we denote by $\lambda^d$ the $d$-dimensional Lebesgue measure. \subsubsection{The case $k=2$ and $d \geq 3$} .
\medskip \noindent {\it Step 1. Local argument.} Fix $\xi \in B(2) \setminus\{0\}$ and let $B_{\xi}$ and $H_{\xi}$ be as in Lemma \ref{Lem7}. We claim that $h(x) = c_{\xi}\, e^{\nu_{\xi} \cdot x}$ a.e. in $B_{\xi}$, for some $c_{\xi} \in \mathbb{C}$ and $\nu_{\xi} \in \mathbb{C}^d$. \medskip If $\lambda^d\big(h^{-1}(\{0\}) \cap B_{\xi}\big)=0$, we may use \cite[Lemma 2.1]{Ch} to reach the desired conclusion. \medskip If $\lambda^d\big(h^{-1}(\{0\}) \cap B_{\xi}\big)>0$, we will be done if we prove that we can choose $c_{\xi}=0$. Suppose this is not the case. Let $A_1:=h^{-1}(\{0\}) \cap B_{\xi}$ and define $A_2:=B_{\xi} \setminus A_1$. Assuming $\lambda^d(A_1)>0$ and $\lambda^d(A_2)>0$, we aim at a contradiction. \medskip For a.e. $x\in A_1$, identity \eqref{hH} holds for a.e. $y\in B_{\xi}$ (this is a consequence of Fubini's theorem). Similarly, for a.e. $x\in A_2$, identity \eqref{hH} holds for a.e. $y\in A_2$. Let $\widetilde{A}_1, \widetilde{A}_2$ denote the full measure subsets of $A_1, A_2$, respectively, for which these conclusions hold. Then, given $\epsilon>0$, there exist $x_1\in\widetilde{A}_1$ and $x_2\in\widetilde{A}_2$ such that $|x_1-x_2|<\epsilon$ (the existence of such $x_1, x_2$ is guaranteed by the hypotheses $\lambda^d(A_1), \lambda^d(A_2)>0$). Now, by the definition of $\widetilde{A}_1$, we conclude that $H_{\xi}\equiv 0$ a.e. on $x_1+B_{\xi}$. By the definition of $\widetilde{A}_2$, we conclude that $H_{\xi}\neq 0$ a.e. on $x_2+A_2$. However, for sufficiently small $\epsilon>0$, $$\lambda^d\big((x_1+B_{\xi})\cap(x_2+A_2)\big)>0,$$ and we reach a contradiction. The conclusion is that, if $\lambda^d(A_1)>0$, then $\lambda^d(A_2)=0$ and thus $h\equiv 0$ a.e. on $B_{\xi}$. \medskip \noindent {\it Step 2. Local-to-global argument.} Take $\xi_0 \in B(2)\setminus\{0\}$. From the previous step we know that there exist $c_{0} \in \mathbb{C}$ and $\nu_{0} \in \mathbb{C}^d$ such that $h(x) = c_{0}\, e^{\nu_{0} \cdot x}$ a.e. in $B_{\xi_0}$.
Consider the set \begin{equation*} \Omega:=\{z\in B(2)\setminus\{0\}: h(x)=c_0 \,e^{\nu_0\cdot x}\textrm{ a.e. in a neighborhood of }z\}. \end{equation*} By construction, $\Omega$ is an open subset of $B(2)\setminus\{0\}$. We claim that $\Omega$ is also closed in $B(2)\setminus\{0\}$. To see this, suppose not, and take a point $\xi\in\overline{\Omega}\setminus\Omega$ (the closure is taken in $B(2)\setminus\{0\}$). Since $\xi\in B(2)\setminus\{0\}$, there exists an open ball $B_{\xi} = B(\xi,r_\xi)$ on which $h(x)=c_\xi \,e^{\nu_\xi\cdot x}$ almost everywhere. Since $\xi\in\overline{\Omega}$, the intersection $\Omega\cap B_{\xi}$ is nonempty. Take $z\in\Omega\cap B_{\xi}$. Then, since $z\in\Omega$, the identity $h(x)=c_0 \,e^{\nu_0\cdot x}$ holds almost everywhere in a sufficiently small ball $B_z\subset B_{\xi}$. Now, if $c_0=0$, then $c_\xi=0$ and $\xi\in\Omega$, a contradiction. If $c_0\neq 0$, it follows that $c_\xi=c_0$ and $\nu_\xi=\nu_0$ (this can be seen by differentiating the identity $c_\xi c_0^{-1}=e^{(\nu_0-\nu_\xi)\cdot x}$ with respect to the variable $x$). The conclusion is, again, that $\xi\in\Omega$, a contradiction. We then conclude that $\Omega$ is closed in $B(2)\setminus\{0\}$ and, since this is a connected set, it follows that $\Omega=B(2)\setminus\{0\}$. \medskip An application of the Lebesgue differentiation theorem yields $h(x)=c_0 \,e^{\nu_0\cdot x}$ a.e. in $B(2)\setminus\{0\}$. \medskip \noindent {\it Step 3. Conclusion in the case $k=2$ and $d \geq 3$.} We now obtain the conclusion for $f_1$ and $f_2$. Let us split the analysis into two cases: \medskip If $c_0\neq 0$, we claim that $\sigma\big(f_1^{-1}(\{0\})\big)=\sigma\big(f_2^{-1}(\{0\})\big)=0$ and the conclusion follows from Lemma \ref{Lem6_Ch_result}. In fact, if this were not the case, assume without loss of generality that $A_1 = f_1^{-1}(\{0\})$ is such that $\sigma(A_1) >0$.
Let $Q$ be the set of pairs $(\zeta_1,\zeta_2) \in (\S^{d-1})^2$ for which \eqref{Sug_Diogo_Sec3} does not hold. By assumption $\sigma^2(Q) = 0$. Let $E = \{\zeta_1 + \zeta_2;\ \zeta_1 \in A_1,\, \zeta_2 \in \S^{d-1},\, (\zeta_1, \zeta_2) \notin Q\}$. Observe that \begin{align*} \sigma * \sigma(E) & = \sigma^2\big\{ (w_1,w_2) \in (\S^{d-1})^2; \ w_1 + w_2 \in E\big\}\\ & \geq \sigma^2\big\{ (\zeta_1,\zeta_2) \in (\S^{d-1})^2; \ \zeta_1 \in A_1,\, \zeta_2 \in \S^{d-1},\, (\zeta_1, \zeta_2) \notin Q\big\}\\ & = \sigma^2\big\{ (\zeta_1,\zeta_2) \in (\S^{d-1})^2; \ \zeta_1 \in A_1,\, \zeta_2 \in \S^{d-1}\big\}\\ & = \sigma(A_1)\, \sigma(\S^{d-1})\\ & >0. \end{align*} As noted in the introduction, the measures $\sigma\ast\sigma$ and $\lambda^d$ are mutually absolutely continuous on $B(2)$, and so we find that $h\equiv 0$ on a subset of $B(2)$ of positive $\lambda^d-$measure (namely $E$), a contradiction. \medskip If $c_0=0$, let $E_j = f_j^{-1}(\mathbb{C} \setminus \{0\})$ for $j=1,2$. We claim that we cannot have $\sigma(E_j)>0$ for $j=1,2$. In fact, if this were the case, arguing as above, the sumset $E_1 + E_2$ would have positive $\lambda^d-$measure and $h$ would be non-zero on a subset of $B(2)$ of positive $\lambda^d-$measure, a contradiction. Therefore, we must have $\sigma(E_j)=0$ for at least one $j$. This possibility falls under item (ii) of Theorem \ref{Sec3_Char_ext}. \subsubsection{The case $k\geq3$ and $d \geq 3$} . \medskip \noindent {\it Step 4. Induction argument.} To extend the previous result to $k \geq 3$ in dimension $d\geq 3$, we proceed by induction on the degree of multilinearity $k$. We start by showing how the trilinear case $k=3$ can be deduced from the case $k=2$. Suppose that \begin{equation}\label{trilinear} f_1(\zeta_1)\,f_2(\zeta_2)\,f_3(\zeta_3)=h(\zeta_1+\zeta_2+\zeta_3) \end{equation} holds $\sigma^3-$a.e. on $(\S^{d-1})^3$. Then for $\sigma-$a.e. $\zeta_1\in \S^{d-1}$, identity \eqref{trilinear} holds for $\sigma^2-$a.e.
$(\zeta_2,\zeta_3)\in (\S^{d-1})^2$. We split the analysis into two cases: \medskip If $f_j\equiv 0$ $\sigma-$a.e. for some $j$ with $1\leq j \leq 3$, we are done. \medskip Otherwise, let $E_1 = f_1^{-1}(\mathbb{C} \setminus \{0\})$. Then $\sigma(E_1) >0$. Choose $z \in E_1$ for which identity \eqref{trilinear} holds with $\zeta_1= z$ for $\sigma^2-$a.e. $(\zeta_2,\zeta_3)\in (\S^{d-1})^2$. Then, defining $\widetilde{h}_{z}(\zeta):={f_1(z)^{-1}}\,h(\zeta+z)$, we have that \begin{equation*} f_2(\zeta_2)\,f_3(\zeta_3)=\widetilde{h}_{z}(\zeta_2+\zeta_3) \end{equation*} for $\sigma^2-$a.e. $(\zeta_2,\zeta_3)\in (\S^{d-1})^2$. By the case $k=2$, there exist $c_2,c_3\in \mathbb{C}\setminus\{0\}$ and $\nu\in\mathbb{C}^d$ such that \begin{equation*} f_2(\zeta_2)=c_2 \,e^{\nu\cdot \zeta_2}\textrm{ and } f_3(\zeta_3)=c_3 \,e^{\nu\cdot \zeta_3}. \end{equation*} Repeating this argument for $f_2$ instead of $f_1$, we conclude that $f_1(\zeta_1)=c_1 \,e^{\nu\cdot \zeta_1}$ for some $c_1 \in \mathbb{C}\setminus\{0\}$ and the same $\nu \in\mathbb{C}^d$. The general $k-$linear case follows similarly by induction. \subsubsection{The case $k = 3$ and $d = 2$} . \medskip \noindent {\it Step 5. Revisiting the argument of Charalambides in \cite[Section 5]{Ch}.} We now deal with the case of three functions $f_1, f_2, f_3$ on the circle $\S^1$. \medskip If $f_j\equiv 0$ $\sigma-$a.e. for some $j$ with $1\leq j \leq 3$, we are done. \medskip So assume that for every $1 \leq j \leq 3$ we have $\sigma\big(f_j^{-1}( \mathbb{C}\setminus \{0\})\big) > 0$. We shall prove that in this case we must have $\sigma\big(f_j^{-1}(\{0\})\big) = 0$ for $1 \leq j \leq 3$, and we will be done by Lemma \ref{Lem6_Ch_result}. This follows from the arguments in \cite[Section 5]{Ch} modulo some adjustments.
First, let us define $\gamma: I = (0,2\pi) \to \mathbb{R}^2$ by \begin{equation*} \gamma(x) = (\cos x, \sin x), \end{equation*} which is a unit speed parametrization of the circle $\S^1$ (here we are excluding a point, but this is harmless) by the open interval $I$. Writing $F_{j}(x) = |f_j(\gamma(x))|$, we have the functional equation \begin{equation}\label{Sec4_Fun_Eq_mod} F_1(x_1)\, F_2(x_2)\, F_3(x_3) = H\left(\sum_{j=1}^3 \gamma(x_j)\right) \end{equation} for $\lambda^3-$a.e. $(x_1, x_2, x_3) \in I^3$, with $H(z) = |h(z)|$. \medskip We first show that all the $F_j$'s are bounded $\lambda-$a.e. in $I$, and therefore $H$ is also bounded $\lambda^2-$a.e. in $B(3)$. In fact, by hypothesis, the set $\{(x_2,x_3) \in I^2; F_2(x_2)F_3(x_3) >0\}$ has positive $\lambda^2$-measure, and we can choose $N$ large enough such that $$K = \{(x_2,x_3) \in I^2; N^{-1} < F_2(x_2)F_3(x_3) < N\}$$ verifies $\lambda^2(K) >0$. We may therefore choose a point $(u_2, u_3)$ that belongs to $K$, to the Lebesgue set of the characteristic function $\chi_{K}$, and such that ${\rm span}\,\{\gamma'(u_2), \gamma'(u_3)\} = \mathbb{R}^2$. Let $z = \gamma(u_2) + \gamma(u_3)$, and choose neighborhoods $U$ of $(u_2, u_3)$ and $V = B(z,r)$ such that the map $\beta: I^2 \to \mathbb{R}^2$ given by $\beta(x_2, x_3) = \gamma(x_2) + \gamma(x_3)$ is a diffeomorphism from $U$ onto $V$. Let $K_1 = K \cap U$ and note that $\lambda^2 (K_1) >0$ (since $(u_2,u_3)$ is a Lebesgue point of $K$). Now let $I_1 \subset I$ be such that $\lambda(I \setminus I_1) =0$ and for each $x_1 \in I_1$ the functional equation \eqref{Sec4_Fun_Eq_mod} is satisfied at the point $(x_1, x_2, x_3)$ for $\lambda^2-$a.e. $(x_2, x_3) \in I^2$.
\medskip By Lusin's theorem applied to $H|_{\gamma(I) + V}$, we may find a compact subset $T$ of the open set $\gamma(I) + V$ and a constant $C < \infty$ such that \begin{equation}\label{Lusin1} \lambda^2((\gamma(I) + V)\setminus T) < \lambda^2(\beta(K_1)) \neq 0 \end{equation} and for all $w \in T$ we have $H(w) \leq C$. Let $x_1 \in I_1$. Since $\lambda^2(\gamma(x_1) + \beta(K_1)) = \lambda^2(\beta(K_1))$, we find by \eqref{Lusin1} that $\lambda^2\big( T \cap (\gamma(x_1) + \beta(K_1))\big) >0$. We conclude that there exists $(x_2,x_3) \in K_1$ such that $H(\gamma(x_1) + \beta(x_2, x_3)) \leq C$ and \eqref{Sec4_Fun_Eq_mod} holds at $(x_1,x_2,x_3)$, leading to \begin{equation*} F_1(x_1) \leq N\,C. \end{equation*} This proves that $F_1$ is bounded $\lambda-$a.e. in $I$. We may apply the same argument to $F_2$ and $F_3$. \medskip Having constructed the diffeomorphism $\beta: U \to V$ above, and since the point $(u_2, u_3)$ belongs to $K$ (and is a Lebesgue point of $K$) we can pick a small ball $U' \subset U$ around $(u_2, u_3)$ to see that \begin{equation*} \int_{U} F_2(x_2)\, F_3(x_3) \,\text{\rm d}x_2\, \text{\rm d}x_3 \geq \int_{U'} F_2(x_2)\, F_3(x_3) \,\text{\rm d}x_2\, \text{\rm d}x_3 >0. \end{equation*} Following the outline of \cite[Section 5]{Ch} we show that the function $F_1$ is equal $\lambda-$a.e. to a differentiable function, and by analogy so are $F_2$ and $F_3$, and thus $H$. From now on we make these identifications. \medskip For every $x_1 \in I$ such that $F_1(x_1) \neq 0$ we argue as in \cite[Section 5, Eq. (5.5)]{Ch} (here we might have to make a new choice of the neighborhood $U$ in order to have $F_2(x_2)F_3(x_3) \neq 0$ for $(x_2,x_3) \in U$) to conclude that there is a neighborhood $B \subset I$ of $x_1$ such that $F_1(x) = c_B \,e^{\nu_{B}\cdot \gamma(x)}$ for all $x \in B$, where $c_B \in \mathbb{C}\setminus \{0\}$ and $\nu_B \in \mathbb{C}^2$.
We now argue as in Step 2 to conclude that for every connected component $W$ of the set $\{x \in I;\ F_1(x) \neq 0\}$ we must have $F_1(x) = c_W \,e^{\nu_{W}\cdot \gamma(x)}$ for $x \in W$. Since $F_1$ is continuous, this plainly implies that either $F_1 \equiv 0$ (which is not the case by hypothesis) or $F_1(x) \neq 0$ for all $x \in I$, which is the desired conclusion. The same holds for $F_2$ and $F_3$, and we conclude the proof of our original claim, i.e. that $\sigma\big(f_j^{-1}(\{0\})\big) = 0$ for $1 \leq j \leq 3$. \subsubsection{The case $k\geq 4$ and $d = 2$} . \medskip \noindent {\it Step 6. Induction argument for $d = 2$.} In order to prove Theorem \ref{Sec3_Char_ext} in dimension $d=2$ for $k \geq 4$ we proceed by induction as in Step 4. \medskip This concludes the proof of Theorem \ref{Sec3_Char_ext}. \section{Proof of Theorem \ref{Thm1} - cases (b) and (c): symmetrization over $SO(d)$} \subsection{Reduction to the case $L^{2k}$ to $L^{2k}$} \label{Holder reduction} If we prove Theorem \ref{Thm1} in the case $(d,2k,2k)$, for $d,k \geq 2$ with $(d,k) \neq (2,2)$, the corresponding cases $(d,2k,q)$ for $q >2k$ follow directly. In fact, by H\"{o}lder's inequality we have \begin{equation}\label{Sec5_eq1.0} \big\|\widehat{f \sigma}\big\|_{L^{2k}(\mathbb{R}^{d})} \leq C(d,2k,2k)\, \|f\|_{L^{2k}(\S^{d-1})} \leq C(d,2k,2k)\, \omega_{d-1}^{\frac{1}{2k} - \frac{1}{q}} \|f\|_{L^{q}(\S^{d-1})} = C(d,2k,q)\,\|f\|_{L^{q}(\S^{d-1})}. \end{equation} In order to have equality in \eqref{Sec5_eq1.0}, we must have equality in the leftmost inequality, which happens only for the functions given by \eqref{CV_ext}. Since the functions in \eqref{CV_ext} have constant absolute value, we also have equality in H\"{o}lder's inequality, and thus in \eqref{Sec5_eq1.0}.
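Before proceeding, let us record a numerical consistency check of the constant in the lowest case treated here, $(d,k) = (3,2)$ (an illustrative aside, not part of the proof). With the Fourier normalization implicit in the Plancherel factor $(2\pi)^d$ used above, one has $\widehat{\sigma}(\xi) = 4\pi \sin|\xi|/|\xi|$ for $\S^2 \subset \mathbb{R}^3$, so that $\|\widehat{\sigma}\|_{L^4(\mathbb{R}^3)}^4 = (4\pi)^5 \int_0^\infty \sin^4 r\, r^{-2}\,{\rm d}r = (4\pi)^5\,\pi/4 = 2^8\pi^6$, while Lemma \ref{biconvsigma} gives $(2\pi)^3\,\sigma^{(4)}(0) = (2\pi)^3 \int_{B(2)} (2\pi/|\xi|)^2\,{\rm d}\xi = (2\pi)^3\cdot 32\pi^3 = 2^8\pi^6$ as well, in accordance with the identity $\|\widehat{\sigma}\|_{L^{2k}(\mathbb{R}^d)}^{2k} = (2\pi)^d\,\sigma^{(2k)}(0)$ recorded earlier. The following Python sketch evaluates the one-dimensional integral numerically (midpoint rule plus a tail estimate) and compares the two expressions:

```python
import math

def sin4_over_r2_integral(R=400.0, n=400_000):
    # Midpoint rule on [0, R] for sin(r)^4 / r^2 (the integrand is O(r^2) near 0),
    # plus the leading tail term: sin^4 averages to 3/8, so the tail is ~ 3/(8R).
    h = R / n
    total = 0.0
    for k in range(n):
        r = (k + 0.5) * h
        total += math.sin(r) ** 4 / (r * r)
    return total * h + 3.0 / (8.0 * R)

I = sin4_over_r2_integral()                   # close to pi/4
lhs = (4 * math.pi) ** 5 * I                  # numerical ||sigma-hat||_{L^4}^4
rhs = (2 * math.pi) ** 3 * 32 * math.pi ** 3  # (2 pi)^3 sigma^{(4)}(0) = 2^8 pi^6
```

The two values agree to within the accuracy of the quadrature.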
\subsection{$L^{2k}$ to $L^{2k}$ inequality} Let $d,k \geq 2$ with $(d,k) \neq (2,2)$, and write \begin{equation*} K(\zeta_1, \zeta_2, \ldots,\zeta_k) = \sigma^{(k)}(\zeta_1 + \zeta_2 + \ldots + \zeta_k), \end{equation*} where $\zeta_j \in \S^{d-1}$ for $1 \leq j \leq k$. From Theorem \ref{Thm2} we have \begin{align}\label{kfirststep} \big\|\widehat{f\sigma}\big\|_{L^{2k}(\mathbb{R}^d)}^{2k} \leq (2\pi )^d\,\int_{(\S^{d-1})^k} |f(\zeta_1)|^2\ldots |f(\zeta_k)|^2\,K(\zeta_1, \ldots ,\zeta_k)\,\text{\rm d}\sigma(\zeta_1)\ldots \text{\rm d}\sigma(\zeta_k), \end{align} with nontrivial equality if and only if \begin{equation}\label{Sec5_equality} f(\zeta) = c\,e^{\nu \cdot \zeta}, \end{equation} with $c \in \mathbb{C} \setminus \{0\}$ and $\nu \in \mathbb{C}^d$. Let $SO(d)$ denote the special orthogonal group, i.e. the orthogonal $d \times d$ real matrices with determinant $1$. Note that the surface measure $\text{\rm d}\sigma$ is invariant under the action of $SO(d)$ and that our kernel $K$ verifies \begin{equation*} K(R\zeta_1, R\zeta_2, \ldots,R\zeta_k) = K(\zeta_1, \zeta_2, \ldots,\zeta_k), \end{equation*} for every $\zeta_1, \zeta_2, \ldots, \zeta_k \in \S^{d-1}$ and $R \in SO(d)$. Equipping the compact group $SO(d)$ with its normalized Haar measure $\text{\rm d}\mu$, we can rewrite the integral on the right-hand side of \eqref{kfirststep} as \begin{align}\label{Sec5_H_1} \begin{split} & \int_{SO(d)}\left(\int_{(\S^{d-1})^k} |f(R \zeta_1)|^2\ldots |f(R \zeta_k)|^2 \, K(\zeta_1,\ldots,\zeta_k)\,\text{\rm d}\sigma(\zeta_1)\ldots \text{\rm d}\sigma(\zeta_k)\right) \text{\rm d}\mu(R)\\ & \ \ \ \ \ \ \ = \int_{(\S^{d-1})^k}\left(\int_{SO(d)} |f(R \zeta_1)|^2\ldots |f(R \zeta_k)|^2 \,\text{\rm d}\mu(R)\right)K(\zeta_1,\ldots,\zeta_k) \,\text{\rm d}\sigma(\zeta_1)\ldots \text{\rm d}\sigma(\zeta_k). 
\end{split} \end{align} The inner integral can be estimated with H\"{o}lder's inequality: \begin{equation}\label{Sec5_Holder} \int_{SO(d)} |f(R \zeta_1)|^2\ldots |f(R \zeta_k)|^2 \,\text{\rm d}\mu(R) \leq \prod_{j=1}^k \left(\int_{SO(d)} |f(R \zeta_j)|^{2k}\,\text{\rm d}\mu(R)\right)^{1/k}. \end{equation} Note that \begin{align}\label{Sec5_H_2} \begin{split} \int_{SO(d)} |f(R \zeta)|^{2k}\,\text{\rm d}\mu(R) & = \frac{1}{\omega_{d-1}} \int_{\S^{d-1}} \int_{SO(d)} |f(R \zeta)|^{2k}\,\text{\rm d}\mu(R) \,\text{\rm d}\sigma(\zeta) \\ & = \frac{1}{\omega_{d-1}} \int_{SO(d)}\int_{\S^{d-1}} |f(R \zeta)|^{2k}\text{\rm d}\sigma(\zeta)\, \text{\rm d}\mu(R)\\ & = \frac{1}{\omega_{d-1}} \,\|f\|_{L^{2k}(\S^{d-1})}^{2k}, \end{split} \end{align} for any $\zeta \in \S^{d-1}$. From \eqref{kfirststep}, \eqref{Sec5_H_1}, \eqref{Sec5_Holder} and \eqref{Sec5_H_2} we arrive at \begin{align}\label{Sec5_rest_ineq} \begin{split} \big\|\widehat{f\sigma}\big\|_{L^{2k}(\mathbb{R}^d)}^{2k} & \leq (2\pi )^d\, \omega_{d-1}^{-1}\, \|f\|_{L^{2k}(\S^{d-1})}^{2k} \int_{(\S^{d-1})^k} K(\zeta_1,\ldots,\zeta_k) \,\text{\rm d}\sigma(\zeta_1)\ldots \text{\rm d}\sigma(\zeta_k)\\ & = (2\pi )^d\, \omega_{d-1}^{-1} \, \sigma^{(2k)}(0) \,\|f\|_{L^{2k}(\S^{d-1})}^{2k}\\ & = \omega_{d-1}^{-1} \, \|\widehat{\sigma}\|_{L^{2k}(\mathbb{R}^d)}^{2k} \,\|f\|_{L^{2k}(\S^{d-1})}^{2k}, \end{split} \end{align} which is our desired sharp inequality. \subsection{Cases of equality} In order to have nontrivial equality in \eqref{Sec5_rest_ineq}, on top of condition \eqref{Sec5_equality}, we must have equality in \eqref{Sec5_Holder} for $\sigma^k-$a.e. $(\zeta_1, \ldots, \zeta_k) \in (\S^{d-1})^k$. This implies that for $\sigma^k-$a.e. $(\zeta_1, \ldots, \zeta_k) \in (\S^{d-1})^k$ we must have \begin{equation}\label{Sec5_Holder_cond_eq} a_1\, |f(R\zeta_1)| = a_2\, |f(R\zeta_2)| = \ldots = a_k\, |f(R\zeta_k)|, \end{equation} for $\mu-$a.e. $R \in SO(d)$, where $a_j >0$ for $1 \leq j \leq k$. 
If we integrate \eqref{Sec5_Holder_cond_eq} over the group of rotations $SO(d)$ we see that the $a_j$'s must be equal. Using \eqref{Sec5_equality} we claim that $\nu \in \mathbb{C}^d$ must be purely imaginary. If not, we could take a point $(\zeta_1, \ldots, \zeta_k) \in (\S^{d-1})^k$ for which \eqref{Sec5_Holder_cond_eq} holds for $\mu-$a.e. $R \in SO(d)$, with $\zeta_1$ close to $\Re(\nu)/|\Re(\nu)|$ and $\zeta_2$ close to being perpendicular to $\Re(\nu)/|\Re(\nu)|$, and reach a contradiction by choosing $R$ close to the identity. This completes the proof of Theorem \ref{Thm1} in the cases (b) and (c) (and in fact, part of the case (a)). \section{Proof of Theorem \ref{Thm1} - case (a): the outline of Christ-Shao and Foschi} The goal of this section is to obtain the sharp inequality \begin{equation}\label{Sec5_eq1} \big\|\widehat{f \sigma}\big\|_{L^{4}(\mathbb{R}^{d})} \leq C(d,4,2)\, \|f\|_{L^{2}(\S^{d-1})}\,, \end{equation} for $3 \leq d \leq 7$, and to characterize its extremizers. A simple application of H\"{o}lder's inequality then gives the corresponding sharp inequalities for the cases $q >2$, as detailed in Section \ref{Holder reduction}. \medskip In the case $d=3$, Foschi \cite{F} recently obtained the sharp inequality \eqref{Sec5_eq1} by combining previous techniques developed by Christ and Shao \cite{CS, CS2} with an insightful geometric identity intrinsic to this restriction problem. Foschi characterizes the real-valued extremizers of \eqref{Sec5_eq1} and completes the characterization of the complex-valued extremizers by invoking a result of Christ and Shao \cite[Theorem 1.2]{CS2}. Here we extend this method up to dimension $d=7$ to prove the sharp inequality \eqref{Sec5_eq1}, and characterize the complex-valued extremizers via a different path, using our Theorem \ref{Sec3_Char_ext} instead (note that, in principle, the result of \cite[Theorem 1.2]{CS2} is not available for dimensions $d >3$).
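The geometric identity in question, \eqref{Sec5_magical_identity} below, states that $|\zeta_1 + \zeta_2|\,|\zeta_3 + \zeta_4| + |\zeta_1 + \zeta_3|\,|\zeta_2 + \zeta_4| + |\zeta_1 + \zeta_4|\,|\zeta_2 + \zeta_3| = 4$ whenever the four unit vectors sum to zero; indeed, each product then equals $2 + 2\,\zeta_1\cdot\zeta_j$ for the corresponding index $j$, and the three terms add up to $6 + 2\,\zeta_1\cdot(\zeta_2+\zeta_3+\zeta_4) = 6 - 2|\zeta_1|^2 = 4$. As a quick illustration (an aside, not part of the proof), the following Python sketch samples random quadruples in the support of $\Sigma$ for $d=3$ and verifies the identity to machine precision:

```python
import math
import random

random.seed(1)

def rand_unit():
    # Uniform direction on S^2 via a normalized Gaussian vector.
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        if n > 1e-9:
            return [c / n for c in v]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def zero_sum_quadruple():
    # zeta_1, zeta_2 uniform on S^2; then zeta_3, zeta_4 are unit vectors with
    # zeta_3 + zeta_4 = -(zeta_1 + zeta_2), so the four vectors sum to zero.
    while True:
        z1, z2 = rand_unit(), rand_unit()
        w = [-(p + q) for p, q in zip(z1, z2)]
        r = norm(w)
        if 1e-3 < r < 2 - 1e-3:
            break
    while True:  # unit vector orthogonal to w
        v = rand_unit()
        u = [p - dot(v, w) * q / (r * r) for p, q in zip(v, w)]
        nu = norm(u)
        if nu > 1e-6:
            u = [c / nu for c in u]
            break
    s = math.sqrt(1.0 - r * r / 4.0)
    z3 = [p / 2.0 + s * q for p, q in zip(w, u)]
    z4 = [p / 2.0 - s * q for p, q in zip(w, u)]
    return z1, z2, z3, z4

def identity_lhs(z1, z2, z3, z4):
    su = lambda a, b: [p + q for p, q in zip(a, b)]
    return (norm(su(z1, z2)) * norm(su(z3, z4))
            + norm(su(z1, z3)) * norm(su(z2, z4))
            + norm(su(z1, z4)) * norm(su(z2, z3)))

max_err = max(abs(identity_lhs(*zero_sum_quadruple()) - 4.0) for _ in range(1000))
```

Over a thousand random quadruples the left-hand side deviates from $4$ only by floating-point rounding.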
\medskip We keep the notation as close as possible to \cite{F} to facilitate some of the references. Lemmas \ref{Sec5_Lem8.1} - \ref{Sec5_Lem10} below are derived from the works of Christ and Shao \cite{CS, CS2} and Foschi \cite{F}. The novelty here is a careful discussion of the cases of equality. \subsection{Reduction to nonnegative functions} Recall that by Plancherel's theorem we have \begin{align*} \big\|\widehat{f \sigma}\big\|_{L^4(\mathbb{R}^d)}^2 = (2\pi)^{d/2}\, \big\|f\sigma * f\sigma\big\|_{L^2(\mathbb{R}^d)}. \end{align*} Our first lemma reduces matters to working with nonnegative functions. \begin{lemma}\label{Sec5_Lem8.1} Let $f \in L^2(\S^{d-1})$. We have \begin{equation}\label{Sec5_Lem8.1_Eq_1} \big\|f\sigma * f\sigma\big\|_{L^2(\mathbb{R}^d)} \leq \big\||f|\sigma * |f|\sigma\big\|_{L^2(\mathbb{R}^d)}, \end{equation} with equality if and only if there is a measurable function $h:\overline{B(2)} \to \mathbb{C}$ such that \begin{equation}\label{Sec5_Lem8.1_Eq_2} f(\zeta_1)\, f(\zeta_2) = h(\zeta_1 + \zeta_2) \,\big|f(\zeta_1)\, f(\zeta_2)\big| \end{equation} for $\sigma^2-$a.e. $(\zeta_1, \zeta_2) \in (\S^{d-1})^2$. \end{lemma} \begin{proof} Recall from \eqref{Sec2_delta_form} that \begin{align*} f\sigma * f\sigma(\xi) = \int_{(\S^{d-1})^2} f(\zeta_1)\, f(\zeta_2)\, \delta_d(\xi - \zeta_1 - \zeta_2)\, \text{\rm d}\sigma(\zeta_1)\, \text{\rm d}\sigma(\zeta_2), \end{align*} which implies that \begin{equation}\label{Sec5_red_pos_eq1} |f\sigma * f\sigma(\xi) |\leq |f|\sigma * |f|\sigma(\xi) \end{equation} for all $\xi \in \mathbb{R}^d$. This plainly gives \eqref{Sec5_Lem8.1_Eq_1}. \medskip Assume we have equality in \eqref{Sec5_Lem8.1_Eq_1}. Then we must have equality in \eqref{Sec5_red_pos_eq1} for a.e. $\xi \in \mathbb{R}^d$. For each such $\xi \in \mathbb{R}^d$ there exists $h(\xi) \in \mathbb{C}$ such that \begin{equation}\label{Sec5_red_pos_eq2} f(\zeta_1)\, f(\zeta_2) = h(\xi) |f(\zeta_1)\, f(\zeta_2)| \end{equation} for $\Psi_{\xi}-$a.e. 
$(\zeta_1, \zeta_2) \in (\S^{d-1})^2$, where the singular measure $\Psi_{\xi}$ on $(\S^{d-1})^2$ is given by $$\d\Psi_{\xi}(\zeta_1, \zeta_2) = \delta_d(\xi - \zeta_1 - \zeta_2)\, \text{\rm d}\sigma(\zeta_1)\, \text{\rm d}\sigma(\zeta_2).$$ By integrating with respect to $\Psi_{\xi}$ we find that \begin{equation}\label{Sec5_red_pos_eq3} f\sigma * f\sigma(\xi) = h(\xi) \big(|f|\sigma * |f|\sigma(\xi)\big), \end{equation} and we see that $h$ is actually a measurable function. Arguing as in Section \ref{CE} we arrive at \eqref{Sec5_Lem8.1_Eq_2}. \medskip Conversely, if we have \eqref{Sec5_Lem8.1_Eq_2}, we may argue again as in Section \ref{CE} to conclude that for a.e. $\xi \in \mathbb{R}^d$ we have \eqref{Sec5_red_pos_eq2} for $\Psi_{\xi}-$a.e. $(\zeta_1, \zeta_2) \in (\S^{d-1})^2$. Then equality in \eqref{Sec5_red_pos_eq1} holds for a.e. $\xi \in \mathbb{R}^d$ and we have equality in \eqref{Sec5_Lem8.1_Eq_1}. \end{proof} By working with $|f|$ instead of $f$, {\it we may assume that we are dealing with nonnegative functions}. \subsection{Reduction to even functions} Given a function $f: \S^{d-1} \to \mathbb{R}^+$ we define its {\it antipodal} $f_{\star}$ by \begin{equation*} f_{\star}(\zeta) = f (-\zeta). \end{equation*} Using \eqref{Sec2_delta_form} we observe that \begin{align}\label{Sec5_Planc_star} \begin{split} \big\|f\sigma * f\sigma\big\|_{L^2(\mathbb{R}^d)}^2 & = \big\|f\sigma * f_{\star}\sigma\big\|_{L^2(\mathbb{R}^d)}^2\\ & = \int_{(\S^{d-1})^4} \!f(\zeta_1)\, f(-\zeta_2)\, f(\zeta_3)\, f(-\zeta_4)\,\delta_d (\zeta_1 + \zeta_2 + \zeta_3 + \zeta_4)\,\text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2)\,\text{\rm d}\sigma(\zeta_3)\,\text{\rm d}\sigma(\zeta_4)\\ & = Q(f, f_{\star}, f, f_{\star}). 
\end{split} \end{align} Here $Q$ is the quadrilinear form defined by \begin{equation*} Q(f_1, f_2, f_3, f_4) := \int_{(\S^{d-1})^4} f_1(\zeta_1)\, f_2(\zeta_2)\, f_3(\zeta_3)\, f_4(\zeta_4)\,\d\Sigma(\zeta), \end{equation*} where $\Sigma$ is the singular measure on $(\S^{d-1})^4$ given by \begin{equation*} \d\Sigma(\zeta) := \delta_d (\zeta_1 + \zeta_2 + \zeta_3 + \zeta_4)\,\text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2)\,\text{\rm d}\sigma(\zeta_3)\,\text{\rm d}\sigma(\zeta_4), \end{equation*} for $\zeta = (\zeta_1, \zeta_2, \zeta_3, \zeta_4) \in (\S^{d-1})^4$. Note that $\Sigma$ is supported on the set $\{\zeta \in (\S^{d-1})^4;\ \zeta_1 + \zeta_2 + \zeta_3 + \zeta_4 = 0\}$. \medskip For $f: \S^{d-1} \to \mathbb{R}^+$, we define the {\it antipodal symmetrization} $f_{\sharp}$ by \begin{equation*} f_{\sharp}(\zeta) = \sqrt{ \frac{f(\zeta)^2 + f(-\zeta)^2}{2}}. \end{equation*} Note that $\|f_{\sharp}\|_{L^2(\S^{d-1})} = \|f\|_{L^2(\S^{d-1})}$. \begin{lemma}[cf. \cite{CS, F}] \label{Sec5_Lem8} Let $d \geq 3$. If $f: \S^{d-1} \to \mathbb{R}^+$ belongs to $L^2(\S^{d-1})$ then \begin{equation}\label{Sec5_ineq_Q} Q(f, f_{\star}, f, f_{\star}) \leq Q(f_{\sharp}, f_{\sharp}, f_{\sharp}, f_{\sharp}), \end{equation} with equality if and only if $f= f_{\star} = f_{\sharp}$ {\rm (}$\sigma-$a.e. in $\S^{d-1}${\rm)}. \end{lemma} \begin{proof} We follow \cite[Proposition 3.2]{F}. Observe first that \begin{equation}\label{Sec5_f_sharp_ineq} f\sigma * f_{\star}\sigma(\xi) \leq f_{\sharp}\sigma * f_{\sharp}\sigma(\xi) \end{equation} for all $\xi \in \mathbb{R}^d$. In fact, we have \begin{align}\label{Sec5_CS0} \begin{split} 2\, f\sigma * f_{\star}\sigma(\xi) &= f\sigma * f_{\star}\sigma(\xi) + f_{\star}\sigma* f\sigma(\xi)\\ & = \int_{(\S^{d-1})^2}\Big[ f(\zeta_1) f(-\zeta_2) + f(-\zeta_1)f(\zeta_2) \Big]\, \delta_d( \xi - \zeta_1 - \zeta_2)\,\text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2).
\end{split} \end{align} By Cauchy-Schwarz inequality we have \begin{equation}\label{Sec5_CS} \Big[ f(\zeta_1) f(-\zeta_2) + f(-\zeta_1)f(\zeta_2) \Big] \leq \sqrt{f(\zeta_1)^2 + f(-\zeta_1)^2} \ \sqrt{f(\zeta_2)^2 + f(-\zeta_2)^2} = 2 \,f_{\sharp}(\zeta_1)\,f_{\sharp}(\zeta_2). \end{equation} Plugging \eqref{Sec5_CS} into \eqref{Sec5_CS0} we obtain \eqref{Sec5_f_sharp_ineq}. Now observe that \eqref{Sec5_Planc_star} and \eqref{Sec5_f_sharp_ineq} plainly imply \eqref{Sec5_ineq_Q}. \medskip In order to have equality in \eqref{Sec5_ineq_Q}, we must have equality in \eqref{Sec5_f_sharp_ineq} for a.e. $\xi \in \mathbb{R}^d$. For each such $\xi \in \mathbb{R}^d$, the condition of equality in the Cauchy-Schwarz inequality \eqref{Sec5_CS} gives us that \begin{equation}\label{Sec5_cond_1} f(\zeta_1)\, f(\zeta_2) = f(-\zeta_1)\, f(-\zeta_2) \end{equation} for $\Psi_{\xi}-$a.e. $(\zeta_1, \zeta_2) \in (\S^{d-1})^2$. Arguing as in Section \ref{CE}, this implies that \eqref{Sec5_cond_1} must hold for $\sigma^2-$a.e. $(\zeta_1, \zeta_2) \in (\S^{d-1})^2$. Let $\zeta_1 \in \S^{d-1}$ be such that \eqref{Sec5_cond_1} holds for $\sigma-$a.e. $\zeta_2 \in \S^{d-1}$. Then we can integrate over $\S^{d-1}$ with respect to the variable $\zeta_2$ to obtain (provided $f$ is nonzero, otherwise the result is trivial) \begin{equation*} f(\zeta_1) = f(-\zeta_1). \end{equation*} This shows that $f= f_{\star} = f_{\sharp}$ {\rm (}$\sigma-$a.e. in $\S^{d-1}${\rm)}. \end{proof} From now on {\it we may assume additionally that $f = f_{\sharp}$}. \subsection{The key geometric identity} The heart of Foschi's proof lies in the following simple geometric identity. \begin{lemma}{\rm (cf. \cite[Lemma 4.2]{F})} Let $(\zeta_1, \zeta_2, \zeta_3, \zeta_4) \in (\S^{d-1})^4$ be such that \begin{equation*} \zeta_1 + \zeta_2 + \zeta_3 + \zeta_4 = 0 \end{equation*} {\rm (}i.e. $(\zeta_1, \zeta_2, \zeta_3, \zeta_4) $ lies in the support of the measure $\Sigma${\rm )}. 
Then \begin{equation}\label{Sec5_magical_identity} |\zeta_1 + \zeta_2|\,|\zeta_3 + \zeta_4| + |\zeta_1 + \zeta_3|\,|\zeta_2 + \zeta_4| + |\zeta_1 + \zeta_4|\,|\zeta_2 + \zeta_3| = 4. \end{equation} \end{lemma} The kernel in our Corollary \ref{Cor3} is too singular to allow us to draw any sharp global conclusions about the adjoint restriction inequality \eqref{Sec5_eq1}. To overcome this barrier we use the identity \eqref{Sec5_magical_identity} and the symmetries of $Q$ in order to write \begin{equation}\label{Sec5_Id_Q} Q(f,f,f,f) = \frac{3}{4} \int_{(\S^{d-1})^4} f(\zeta_1)\, f(\zeta_2)\, |\zeta_1 + \zeta_2|\, f(\zeta_3)\, f(\zeta_4)\,|\zeta_3 + \zeta_4|\,\d\Sigma(\zeta). \end{equation} This allows us to prove the following lemma. \begin{lemma}\label{Sec5_Lem10} Let $d \geq 3$. If $f:\S^{d-1} \to \mathbb{R}^{+}$ is an even function in $L^2(\S^{d-1})$ then \begin{equation*} Q(f,f,f,f) \leq 2^{-d+3}\,\omega_{d-2}\, \frac{3}{4} \,\int_{(\S^{d-1})^2} f(\zeta_1)^2\, f(\zeta_2)^2\, |\zeta_1 + \zeta_2|\,\big(4-|\zeta_1 + \zeta_2|^2 \big)^{\frac{d-3}{2}}\,\text{\rm d}\sigma(\zeta_1) \, \text{\rm d}\sigma (\zeta_2), \end{equation*} with equality if and only if $f$ is a constant function. \end{lemma} \begin{proof} We use the Cauchy-Schwarz inequality in \eqref{Sec5_Id_Q} (with respect to the measure $\Sigma$), together with Lemma \ref{biconvsigma} to get \begin{align*} \begin{split} Q(f,f,f,f) &\leq \frac{3}{4} \left(\int_{(\S^{d-1})^4} f(\zeta_1)^2\, f(\zeta_2)^2\, |\zeta_1 + \zeta_2|^2\,\d\Sigma(\zeta)\right) ^{1/2} \left(\int_{(\S^{d-1})^4} f(\zeta_3)^2\, f(\zeta_4)^2\, |\zeta_3 + \zeta_4|^2\,\d\Sigma(\zeta)\right) ^{1/2}\\ & = \frac{3}{4} \int_{(\S^{d-1})^4} f(\zeta_1)^2\, f(\zeta_2)^2\, |\zeta_1 + \zeta_2|^2\,\d\Sigma(\zeta)\\ & = 2^{-d+3}\,\omega_{d-2}\, \frac{3}{4} \,\int_{(\S^{d-1})^2} f(\zeta_1)^2\, f(\zeta_2)^2\, |\zeta_1 + \zeta_2|\,\big(4-|\zeta_1 + \zeta_2|^2 \big)^{\frac{d-3}{2}}\,\text{\rm d}\sigma(\zeta_1) \, \text{\rm d}\sigma (\zeta_2). 
\end{split} \end{align*} In order to have equality we must have \begin{equation*} f(\zeta_1)\, f(\zeta_2)\, |\zeta_1 + \zeta_2| = c \, f(\zeta_3)\, f(\zeta_4)\,|\zeta_3 + \zeta_4| \end{equation*} for some $c \in \mathbb{R}$ and $\Sigma-$a.e. $(\zeta_1, \zeta_2, \zeta_3, \zeta_4) \in (\S^{d-1})^4$. Integrating both sides with respect to $\d\Sigma(\zeta_1, \zeta_2, \zeta_3, \zeta_4)$ gives us that $c = 1$ and thus \begin{equation}\label{Sec5_Lem10_Eq_cond} f(\zeta_1)\, f(\zeta_2)\, |\zeta_1 + \zeta_2| = f(\zeta_3)\, f(\zeta_4)\,|\zeta_3 + \zeta_4| \end{equation} for $\Sigma-$a.e. $(\zeta_1, \zeta_2, \zeta_3, \zeta_4) \in (\S^{d-1})^4$. Let \begin{equation}\label{Sec5_Def_E} E = \big\{(\zeta_1, \zeta_2, \zeta_3, \zeta_4) \in (\S^{d-1})^4;\ \eqref{Sec5_Lem10_Eq_cond}\ {\rm does \ not\ hold}\}. \end{equation} We find that \begin{align*} 0 &= \int_{(\S^{d-1})^4} \chi_E(\zeta_1, \zeta_2, \zeta_3, \zeta_4)\, \d\Sigma(\zeta_1, \zeta_2, \zeta_3, \zeta_4) \\ & = \int_{(\S^{d-1})^2} \left( \int_{(\S^{d-1})^2} \chi_E(\zeta_1, \zeta_2, -\zeta_3, -\zeta_4) \,\delta_d(\zeta_1 + \zeta_2 - \zeta_3 - \zeta_4)\,\text{\rm d}\sigma(\zeta_3)\, \text{\rm d}\sigma(\zeta_4)\right) \text{\rm d}\sigma(\zeta_1)\,\text{\rm d}\sigma(\zeta_2). \end{align*} Thus, for $\sigma^2-$a.e. $(\zeta_1, \zeta_2) \in (\S^{d-1})^2$, we have that (recall that $f$ is even) \begin{equation}\label{Sec5_Lem10_Eq_cond_2} f(\zeta_1)\, f(\zeta_2)\, |\zeta_1 + \zeta_2| = f(\zeta_3)\, f(\zeta_4)\,|\zeta_3 + \zeta_4| \end{equation} for $\Psi_{\zeta_1 + \zeta_2}-$a.e. $(\zeta_3, \zeta_4) \in (\S^{d-1})^2$. For such a $(\zeta_1, \zeta_2) \in (\S^{d-1})^2$, with $\zeta_1 + \zeta_2 \in B(2)\setminus \{0\}$ we have \begin{equation}\label{Sec5_Lem10_Eq_cond_3} f(\zeta_1)\, f(\zeta_2) = f(\zeta_3)\, f(\zeta_4) \end{equation} for $\Psi_{\zeta_1 + \zeta_2}-$a.e. 
$(\zeta_3, \zeta_4) \in (\S^{d-1})^2$, and if we average the right-hand side of \eqref{Sec5_Lem10_Eq_cond_3} with respect to $\d\Psi_{\zeta_1 + \zeta_2}(\zeta_3,\zeta_4)$ we arrive at \begin{align*} f(\zeta_1)\, f(\zeta_2) &= \frac{ \int_{(\S^{d-1})^2} f(\zeta_3)\, f(\zeta_4) \,\d\Psi_{\zeta_1 + \zeta_2}(\zeta_3,\zeta_4)}{ \int_{(\S^{d-1})^2}\d\Psi_{\zeta_1 + \zeta_2}(\zeta_3,\zeta_4)}= \frac{f\sigma*f\sigma(\zeta_1 + \zeta_2)}{\sigma^{(2)}(\zeta_1 + \zeta_2)}=:h(\zeta_1 + \zeta_2). \end{align*} \medskip We now use Theorem \ref{Sec3_Char_ext} to conclude that $f(\zeta) = c\, e^{\nu\cdot \zeta}$ for some $c \in \mathbb{C}$ and $\nu \in \mathbb{C}^d$. If $c = 0$ we are done. If $c \neq 0$, since $f$ is real-valued we must have $\Im(\nu) =0$, and since $f$ is even we must have $\Re(\nu) = 0$. Then, since $f$ is nonnegative, we must have $c>0$ and $f(\zeta) = c$. \medskip Conversely, it is clear that any (nonnegative) constant function verifies the desired equality. This concludes the proof. \end{proof} \subsection{Proof of Theorem \ref{Thm1} - case (a)} We consider the quadratic form \begin{equation*} H_d(g) := \int_{(\S^{d-1})^2} \overline{g(\zeta_1)}\, g(\zeta_2)\, |\zeta_1 - \zeta_2|\,\big(4-|\zeta_1 -\zeta_2|^2 \big)^{\frac{d-3}{2}}\,\text{\rm d}\sigma(\zeta_1) \, \text{\rm d}\sigma (\zeta_2). \end{equation*} This is a real-valued and continuous functional on $L^1(\S^{d-1})$. In fact, it is not hard to see that \begin{equation}\label{Sec5_Cont_H_d} |H_d(g_1) - H_d(g_2)| \leq 2^{d-2}\, \Big(\|g_1\|_{L^1(\S^{d-1})} + \|g_2\|_{L^1(\S^{d-1})}\Big)\, \|g_1 - g_2\|_{L^1(\S^{d-1})}. \end{equation} The next lemma is the last piece of information needed for our sharp inequality. \begin{lemma}\label{Sec5_Lem11} Let $3 \leq d \leq 7$. Let $g \in L^1(\S^{d-1})$ be an even function and write \begin{equation*} \mu = \frac{1}{\omega_{d-1}} \int_{\S^{d-1}} g(\zeta)\,\text{\rm d}\sigma(\zeta) \end{equation*} for the mean value of $g$ over the sphere $\S^{d-1}$. 
Then \begin{equation*} H_d(g) \leq H_d(\mu {\bf 1}) = |\mu|^2 H_d({\bf 1}), \end{equation*} with equality if and only if $g$ is a constant function. \end{lemma} Assume for a moment that we have proved this lemma and let us conclude the proof of Theorem \ref{Thm1}. \begin{proof}[Proof of Theorem \ref{Thm1} - case {\rm (a)}] Putting together our chain of inequalities (Lemmas \ref{Sec5_Lem8.1}, \ref{Sec5_Lem8}, \ref{Sec5_Lem10} and \ref{Sec5_Lem11}) we get \begin{align}\label{Sec5_chain} \begin{split} \big\|\widehat{f \sigma}\big\|_{L^4(\mathbb{R}^d)}^4& \leq (2\pi)^d\, Q(|f|, |f|_{\star}, |f|, |f|_{\star}) \\ & \leq (2\pi)^d\, Q(|f|_{\sharp}, |f|_{\sharp}, |f|_{\sharp}, |f|_{\sharp})\\ & \leq (2\pi)^d\, 2^{-d+3}\,\omega_{d-2}\, \frac{3}{4} \,H_d(|f|_{\sharp}^2)\\ & \leq \frac34\,(2\pi)^d\, 2^{-d+3}\,\frac{\omega_{d-2}}{\omega_{d-1}^2}\, H_d({\bf 1}) \,\|f\|_{L^2(\S^{d-1})}^4. \end{split} \end{align} This inequality is sharp since $f = {\bf 1}$ verifies the equalities in all the steps. \medskip If $f \in L^2(\S^{d-1})$ is a complex-valued extremizer of \eqref{Sec5_chain}, by Lemma \ref{Sec5_Lem10} (or Lemma \ref{Sec5_Lem11}) we must have $|f|_{\sharp} = \gamma\,{\bf 1}$, where $\gamma >0$ is a constant. By Lemma \ref{Sec5_Lem8} we must have $|f| = \gamma\,{\bf 1}$. By Lemma \ref{Sec5_Lem8.1} there is a measurable function $h:\overline{B(2)} \to \mathbb{C}$ such that \begin{equation*} f(\zeta_1)\, f(\zeta_2) = \gamma^2\, h(\zeta_1 + \zeta_2) \end{equation*} for $\sigma^2-$a.e. $(\zeta_1, \zeta_2) \in (\S^{d-1})^2$. By Theorem \ref{Sec3_Char_ext} there exist $c \in \mathbb{C}\setminus \{0\}$ and $\nu \in \mathbb{C}^d$ such that $$f(\zeta) = c\,e^{\nu \cdot \zeta}$$ for $\sigma-$a.e. $\zeta \in \S^{d-1}$. Since $|f|$ is constant, we must have $\Re(\nu) = 0$ and $|c| = \gamma$. \medskip Conversely, it is clear that the functions given by \eqref{CV_ext} verify the chain of equalities in \eqref{Sec5_chain}. This concludes the proof. 
\end{proof} \subsection{Spectral decomposition - Proof of Lemma \ref{Sec5_Lem11}} The case $d=3$ was proved by Foschi \cite[Theorem 5.1]{F}. Here we extend his method to dimensions $d = 4,5,6,7$. \subsubsection{Funk-Hecke formula and Gegenbauer polynomials} We start by proving Lemma \ref{Sec5_Lem11} for even functions $g$ in the subspace $L^2(\S^{d-1}) \subset L^1(\S^{d-1})$ (the general statement for even functions in $L^1(\S^{d-1})$ will follow by a density argument). In this case we may decompose $g$ as a sum \begin{equation}\label{Sec5_Dec_Spherical_Harmonics} g = \sum_{k \geq 0} Y_k, \end{equation} where $Y_k$ is a spherical harmonic of degree $k$ (see \cite[Chapter IV]{SW}). Since $g$ is even, we must have $Y_{2\ell+1} = 0$ in \eqref{Sec5_Dec_Spherical_Harmonics} for all $\ell\geq 0$. Note also that $Y_0 = \mu \,{\bf 1}$, where $\mu$ is the mean value of $g$ in $\S^{d-1}$. Let $$g_N = \sum_{k = 0}^N Y_k.$$ Since $g_N \to g$ in $L^2(\S^{d-1})$ as $N \to \infty$, we have that $g_N \to g$ in $L^1(\S^{d-1})$ as $N \to \infty$ and thus, by \eqref{Sec5_Cont_H_d}, $H_d(g_N) \to H_d(g)$ as $N \to \infty$. Therefore \begin{align}\label{Sec6_H_d_decomposition} \begin{split} H_d(g) & = \lim_{N \to \infty} \sum_{j,k=0}^N \int_{\S^{d-1}} \int_{\S^{d-1}} \overline{Y_j(\zeta_1)}\, Y_k(\zeta_2)\, |\zeta_1 - \zeta_2|\,\big(4-|\zeta_1 -\zeta_2|^2 \big)^{\frac{d-3}{2}}\,\text{\rm d}\sigma(\zeta_1) \, \text{\rm d}\sigma (\zeta_2)\\ & = \lim_{N \to \infty} \sum_{j,k=0}^N \int_{\S^{d-1}} \overline{Y_j(\zeta_1)}\,\left( \int_{\S^{d-1}} Y_k(\zeta_2)\, \phi_d(\zeta_1 \cdot \zeta_2)\,\text{\rm d}\sigma(\zeta_2) \right) \text{\rm d}\sigma (\zeta_1), \end{split} \end{align} where \begin{equation}\label{Sec6_def_phi_d} \phi_d(t) = 2^{\frac{d-2}{2}}\, (1 - t)^{\frac12}\, (1+t)^{\frac{d-3}{2}}. 
\end{equation} The inner integral above may be evaluated via the {\it Funk-Hecke formula} \cite[Theorem 1.2.9]{DX} \begin{equation}\label{Sec6_FH} \int_{\S^{d-1}} Y_k(\zeta_2)\, \phi_d(\zeta_1 \cdot \zeta_2)\,\text{\rm d}\sigma(\zeta_2) = \Lambda_k(\phi_d) \, Y_k(\zeta_1), \end{equation} with the constant $\Lambda_k(\phi_d)$ given by \begin{equation}\label{Sec6_Def_Lambda_k} \Lambda_k(\phi_d) = \omega_{d-2}\int_{-1}^{1} \frac{C^{\frac{d-2}{2}}_k(t)}{C^{\frac{d-2}{2}}_k(1)}\, \phi_d(t)\, (1-t^2)^{\frac{d-3}{2}}\, \text{\rm d}t, \end{equation} where $t \mapsto C^{\alpha}_k(t)$, for $\alpha >0$, are the {\it Gegenbauer polynomials} ({\it or ultraspherical polynomials}) defined in terms of the generating function \begin{equation}\label{Sec6_Def_Gegenbauer} (1 - 2rt + r^2)^{-\alpha} = \sum_{k=0}^{\infty} C^{\alpha}_k(t)\, r^k. \end{equation} For bounded $t$, the left-hand side of \eqref{Sec6_Def_Gegenbauer} is an analytic function of $r$ (for small $r$) and the right-hand side of \eqref{Sec6_Def_Gegenbauer} is the corresponding power series expansion. Note that $C^{\alpha}_k(t)$ has degree $k$. The Gegenbauer polynomials $C^{\alpha}_k(t)$ are orthogonal in the interval $[-1,1]$ with respect to the measure $(1- t^2)^{\alpha - \frac12}\,\text{\rm d}t$. If we plug \eqref{Sec6_FH} back into \eqref{Sec6_H_d_decomposition} and use the orthogonality properties of the spherical harmonics we arrive at \begin{equation}\label{Sec6_decom_H_g_Sph_Harm_Final} H_d(g) = \sum_{k=0}^{\infty} \Lambda_k(\phi_d) \, \|Y_k\|^2_{L^2(\S^{d-1})}. \end{equation} Our goal here is to prove the following result. \begin{lemma}\label{Lem13} \begin{itemize} \item[(i)] For $d=3$ we have $\Lambda_0(\phi_3) >0$ and $\Lambda_k(\phi_3) <0$ for all $k \geq1$. \item[(ii)] For $d=4$ we have $\Lambda_0(\phi_4) >0$, $\Lambda_{2k +1}(\phi_4) =0$ for all $k \geq0$, and $\Lambda_{2k}(\phi_4) <0$ for all $k \geq 1$. 
\item[(iii)] For $d = 5,6,7$ we have $\Lambda_0(\phi_d), \Lambda_1(\phi_d)>0$ and $\Lambda_k(\phi_d) <0$ for all $k \geq 2$. \end{itemize} \end{lemma} \noindent{\sc Remark:} For $d \geq 8$ we start to observe that $\Lambda_2(\phi_d)>0$. This is the basic reason why the method presented here only works (as it is) for dimensions up to $7$. \medskip Assuming Lemma \ref{Lem13} let us conclude the proof of Lemma \ref{Sec5_Lem11}. \begin{proof}[Conclusion of the proof of Lemma \ref{Sec5_Lem11}] Using Lemma \ref{Lem13} in \eqref{Sec6_decom_H_g_Sph_Harm_Final}, and the fact that $g \in L^2(\S^{d-1})$ is an even function, we find \begin{align*} H_d(g) = \sum_{k=0}^{\infty} \Lambda_k(\phi_d) \, \|Y_k\|^2_{L^2(\S^{d-1})}& \leq \Lambda_0(\phi_d) \, \|Y_0\|^2_{L^2(\S^{d-1})} + \Lambda_1(\phi_d) \, \|Y_1\|^2_{L^2(\S^{d-1})} \\ & = \Lambda_0(\phi_d) \, \|Y_0\|^2_{L^2(\S^{d-1})}\\ & = H_d(\mu {\bf 1}) \\ & = |\mu|^2 H_d({\bf 1}). \end{align*} Equality occurs if and only if $Y_k \equiv0$ for all $k \geq 2$, which means that $g = Y_0$ is a constant function. \medskip Now let $h \in L^1(\S^{d-1})$ be an even function. For each $N >0$ consider the truncation \begin{equation*} h_N(\zeta) = \left\{ \begin{array}{cc} h(\zeta) \ & {\rm if}\ |h(\zeta)| \leq N \vspace{0.15cm}\\ N \frac{h(\zeta)}{|h(\zeta)|} \ & {\rm if}\ |h(\zeta)| > N. \end{array} \right. \end{equation*} Note that each $h_N$ is an even function in $L^2(\S^{d-1})$ and $h_N \to h$ in $ L^1(\S^{d-1})$ as $N \to \infty$. If $\mu _N = \frac{1}{\omega_{d-1}} \int_{\S^{d-1}} h_N(\zeta)\,\text{\rm d}\sigma(\zeta)$ and $\mu = \frac{1}{\omega_{d-1}} \int_{\S^{d-1}} h(\zeta)\,\text{\rm d}\sigma(\zeta)$, then $\mu_N \to \mu$ and $H_d(h_N) \to H_d(h)$ as $N \to \infty$. Consider an orthonormal basis of spherical harmonics $\{Z_{kj}\}$, $k \geq 0$, $j = 1,2, \ldots,D(d,k)$, where each harmonic $Z_{kj}$ has degree $k$ and $D(d,k)$ is the dimension \footnote{Explicitly, $D(d,k) = \binom{d + k -1}{d-1} - \binom{d + k -3}{d-1}$. 
For details see \cite[Chapter IV]{SW}.} of the vector space of spherical harmonics of degree $k$ on $\S^{d-1}$, and write \begin{equation*} h_N = \sum_{k,j} \langle h_N, Z_{kj}\rangle \, Z_{kj}. \end{equation*} By the $L^2-$argument we have \begin{equation}\label{Sec6_case_L_1} H_d(h_N) = \sum_{k=0}^{\infty} \Lambda_k(\phi_d) \left( \sum_{j=1}^{D(d,k)} |\langle h_N, Z_{kj}\rangle|^2 \right) \leq |\mu_N|^2 H_d({\bf 1}). \end{equation} Passing the limit as $N\to \infty$ we get \begin{equation}\label{Sec6_case_L_1_eq_2} H_d(h) \leq |\mu|^2 H_d({\bf 1}), \end{equation} which is our desired inequality. We already know that $\langle h, Z_{kj}\rangle = 0$ if $k$ is odd. If $\langle h, Z_{kj}\rangle \neq 0$ for some even $k \geq 2$ and some $j$, since $\langle h_N, Z_{kj}\rangle \to \langle h, Z_{kj}\rangle$ as $N \to \infty$, by Lemma \ref{Lem13} we would have a strict inequality in \eqref{Sec6_case_L_1} that would propagate to the limit \eqref{Sec6_case_L_1_eq_2}. Therefore, in order to have the equality in \eqref{Sec6_case_L_1_eq_2} we must have $\langle h, Z_{kj}\rangle = 0$ for all $k \geq 1$ and all $j$. This implies that $h$ must be constant. \end{proof} \subsubsection{Computing the $\Lambda_k(\phi_d)$} We are left with the final task to prove Lemma \ref{Lem13}. In this section we accomplish this by explicitly computing the values of $\Lambda_k(\phi_d)$ via a recursive argument. To simplify the notation let us consider the {\it Legendre polynomials} defined by \begin{equation*} P_k(t) := C_k^{1/2}(t). \end{equation*} The next proposition lays the ground for our recursions. \begin{proposition}\label{Prop14}. \begin{itemize} \item[(i)] For $\alpha >0$ and $k \geq 0$ we have \begin{equation}\label{Sec6_Prop14_eq1} (2k + 2\alpha)\, t\, C^{\alpha}_k(t) = (k+1) \,C^{\alpha}_{k+1}(t) + (k + 2\alpha -1)\,C^{\alpha}_{k-1}(t). \end{equation} Note: We set $C_{-1}^{\alpha}(t) = 0$. \smallskip \item[(ii)] Let $k \geq 0$. 
The Legendre polynomials $P_k$ verify \begin{align} \gamma_k:=&\int_{-1}^1 (2-2t)^{1/2}\,P_k'(t)\,\text{\rm d}t=2\,(-1)^{k+1}+\frac{{2}}{2k+1}\,;\label{Sec6_Prop14_eq3}\\ \delta_k:=&\int_{-1}^1 (2-2t)^{3/2}\,P_k''(t)\,\text{\rm d}t= 8\,(-1)^{k}{k+1 \choose 2}+3\gamma_k. \label{Sec6_Prop14_eq4} \end{align} \end{itemize} \end{proposition} \begin{proof} {\it Part} (i). Differentiating \eqref{Sec6_Def_Gegenbauer} with respect to the variable $r$ yields \begin{equation}\label{Sec6_eq_derivated} 2\alpha\,(t-r)\, \sum_{k=0}^{\infty} C^{\alpha}_k(t)\, r^k = (1 - 2rt + r^2)\,\sum_{k=0}^{\infty} k\,C^{\alpha}_k(t)\, r^{k-1}. \end{equation} Comparing the coefficients of $r^k$ on both sides of \eqref{Sec6_eq_derivated} we obtain \eqref{Sec6_Prop14_eq1}. \medskip \noindent{\it Part} (ii). To establish \eqref{Sec6_Prop14_eq3} we use integration by parts: \begin{equation}\label{ibp} \int_{-1}^1 (2-2t)^{1/2}\,P_k'(t)\,\text{\rm d}t=-2P_k(-1)+\int_{-1}^1\frac{P_k(t)}{(2-2t)^{1/2}}\,\text{\rm d}t. \end{equation} By evaluating \begin{equation}\label{Sec6_Def_Legendre} (1 - 2rt + r^2)^{-1/2} = \sum_{k=0}^{\infty} P_k(t)\, r^k \end{equation} at $t=-1$ we find that $P_k(-1)=(-1)^k$. The value of the integral on the right-hand side of \eqref{ibp} was evaluated in the proof of \cite[Lemma 5.4]{F} and equals $\frac{2}{2k+1}$. \medskip As for identity \eqref{Sec6_Prop14_eq4}, integrate by parts once again: \begin{equation*} \int_{-1}^1 (2-2t)^{3/2}\,P_k''(t)\,\text{\rm d}t=-8P_k'(-1)+3\int_{-1}^1 (2-2t)^{1/2} \,P_k'(t)\,\text{\rm d}t. \end{equation*} Differentiating both sides of \eqref{Sec6_Def_Legendre} with respect to $t$ and then setting $t=-1$ yields \begin{equation*} \frac{r}{(1+r)^3}=\sum_{k\geq 0} P_k'(-1)\,r^k. \end{equation*} Hence $P_k'(-1)=(-1)^{k+1}{k+1 \choose 2}$. The result follows from this and \eqref{Sec6_Prop14_eq3}. \end{proof} We are now able to proceed to the proof of Lemma \ref{Lem13}. \begin{proof}[Proof of Lemma \ref{Lem13}]. 
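Before the dimension-by-dimension analysis, the sign pattern asserted in Lemma \ref{Lem13} can be sanity-checked numerically straight from the definition \eqref{Sec6_Def_Lambda_k}. The following is a minimal sketch (assuming Python with NumPy; the helper names are ours, and the check is of course no substitute for the proof). After the substitution $t = 1 - 2s^2$ the integrand becomes a polynomial in $s$ for every $3 \leq d \leq 7$, so Gauss-Legendre quadrature evaluates $\Lambda_k(\phi_d)/\omega_{d-2}$ exactly up to rounding; the positive factor $\omega_{d-2}$ is irrelevant for the signs.

```python
# Sign check for Lemma 13 (a numerical sketch, not part of the proof).
# After t = 1 - 2 s^2 the integrand defining Lambda_k(phi_d) is a polynomial
# in s, so Gauss-Legendre quadrature on [0, 1] is exact up to rounding.
# The positive constant omega_{d-2} is dropped: only signs matter here.
import numpy as np

def gegenbauer_at(k, alpha, x):
    """Evaluate the Gegenbauer polynomial C_k^alpha at the points x."""
    c_prev, c = np.ones_like(x), 2.0 * alpha * x
    if k == 0:
        return c_prev
    for n in range(2, k + 1):  # standard three-term recurrence
        c_prev, c = c, (2.0 * (n + alpha - 1.0) * x * c
                        - (n + 2.0 * alpha - 2.0) * c_prev) / n
    return c

def lam(d, k, nodes=200):
    """Lambda_k(phi_d) / omega_{d-2}, computed via t = 1 - 2 s^2."""
    alpha = (d - 2) / 2.0
    s, w = np.polynomial.legendre.leggauss(nodes)
    s, w = 0.5 * (s + 1.0), 0.5 * w              # map nodes to [0, 1]
    t = 1.0 - 2.0 * s**2
    ck = gegenbauer_at(k, alpha, t) / gegenbauer_at(k, alpha, np.array([1.0]))[0]
    phi = 2.0**alpha * (1.0 - t)**0.5 * (1.0 + t)**((d - 3) / 2.0)
    integrand = ck * phi * (1.0 - t**2)**((d - 3) / 2.0)
    return float(np.sum(w * integrand * 4.0 * s))  # dt = -4 s ds

if __name__ == "__main__":
    for d in (3, 4, 5, 6, 7):
        print(d, [round(lam(d, k), 6) for k in range(6)])
```

Running the sketch reproduces the sign patterns of parts (i)-(iii); evaluating it at $d \geq 8$ also exhibits the positive $\Lambda_2(\phi_d)$ mentioned in the Remark below Lemma \ref{Lem13}.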
\medskip \noindent {\it Step 1: Case} $d=3$. This was done in \cite[Lemma 5.4]{F}. \medskip \noindent {\it Step 2: Case} $d=4$. \noindent Throughout this proof let us rename \begin{equation*} Q_k(t) := C_k^1(t). \end{equation*} From \eqref{Sec6_Def_Gegenbauer} we find that $Q_k(1) = k+1$ for all $k \geq 0$, and from \eqref{Sec6_Prop14_eq1} we get \begin{equation*} 2t\, Q_k(t)=Q_{k+1}(t)+Q_{k-1}(t). \end{equation*} Using this recursively we get, for $k \geq 2$, \begin{align*} 4\,t^2 Q_k(t)& =2t\,Q_{k+1}(t)+2t\,Q_{k-1}(t)\\ & =Q_{k+2}(t)+2\,Q_k(t)+Q_{k-2}(t), \end{align*} and thus \begin{equation}\label{recQk} (4t^2-2)\,Q_k(t)=Q_{k+2}(t)+Q_{k-2}(t). \end{equation} We are now ready to compute the coefficients $\Lambda_k(\phi_4)$: \begin{align*} \Lambda_k(\phi_4)& =\omega_2\int_{-1}^1\frac{Q_k(t)}{Q_k(1)}\,\phi_4(t)\,(1-t^2)^{1/2}\,\text{\rm d}t=\frac{\omega_2}{2(k+1)}\int_{-1}^1Q_k(t)\,\big(2-(4t^2-2)\big)\,\text{\rm d}t. \end{align*} Setting $\tau_k:=\int_{-1}^1Q_k(t)\,\text{\rm d}t$ and recalling \eqref{recQk}, we have \begin{equation}\label{lambdabeta} \frac{2(k+1)}{\omega_2}\Lambda_k(\phi_4)=2\tau_k-\tau_{k-2}-\tau_{k+2}\,; \ \ \ \ (k\geq 2). \end{equation} The sequence of moments $\{\tau_k\}_{k \geq 0}$ can be computed explicitly. In fact, we claim that $\tau_{2j}=\frac{2}{2j+1}$ and $\tau_{2j+1}=0$. To verify this, recall by \eqref{Sec6_Def_Gegenbauer} that \begin{equation*} \sum_{k\geq 0} Q_k(t)\,r^k=\left(\sum_{k\geq 0} P_k(t)\,r^k\right)^2, \end{equation*} and so \begin{equation*} Q_k(t)=\sum_{\ell=0}^k P_{\ell}(t)\,P_{k-{\ell}}(t). \end{equation*} It follows that \begin{equation*} \tau_k=\int_{-1}^1 Q_k(t)\,\text{\rm d}t=\sum_{\ell=0}^k \int_{-1}^1 P_{\ell}(t)\,P_{k-\ell}(t) \,\text{\rm d}t. \end{equation*} By the orthogonality properties of Legendre polynomials, we find that $\tau_{2j+1}=0$. 
If $k=2j$ is even, then\footnote{This is also quoted in \cite[Lemma 5.4]{F}.} \begin{equation*} \tau_{2j}=\int_{-1}^1 P_j(t)^2\,\text{\rm d}t=\frac{2}{2j+1}, \end{equation*} as claimed. Plugging this into \eqref{lambdabeta}, we immediately check that $\Lambda_{2j+1}(\phi_4)=0$ for every $j\geq 0$, and that \begin{equation*} \Lambda_{2j}(\phi_4)=\frac{\omega_2}{2(2j+1)}\left(\frac{4}{2j+1}-\frac{2}{2j-1}-\frac{2}{2j+3}\right)<0, \end{equation*} for every $j\geq 1$. If $j =0$ then \begin{equation*} \Lambda_0(\phi_4)=\omega_2\int_{-1}^1\phi_4(t)\,(1-t^2)^{1/2}\,\text{\rm d}t=2\,\omega_2\int_{-1}^1 (1-t^2)\,\text{\rm d}t=\frac{8\,\omega_2}{3}=\frac{32\pi}{3}>0. \end{equation*} \medskip \noindent {\it Step 3: Case} $d=5$. In order to simplify the notation, we start again by relabeling \begin{equation*} R_k(t) := C^{3/2}_k(t). \end{equation*} The definition \eqref{Sec6_Def_Gegenbauer} gives us \begin{equation}\label{defRk} (1 - 2rt + r^2)^{-3/2} = \sum_{k=0}^{\infty} R_k(t)\, r^k. \end{equation} Differentiating \eqref{Sec6_Def_Legendre} with respect to the variable $t$ yields \begin{equation*} r(1-2rt+r^2)^{-3/2}=\sum_{k\geq 0}P_k'(t)\,r^k. \end{equation*} Comparing the two last displays, one concludes that \begin{equation*} R_k(t)=P_{k+1}'(t) \end{equation*} for every $k \geq 0$. From \eqref{defRk} we have $R_k(1)={{k+2}\choose{2}}$. It follows from \eqref{Sec6_def_phi_d} and \eqref{Sec6_Def_Lambda_k} that \begin{align*} \frac{1}{2\omega_3}{{k+2}\choose{2}}\Lambda_k(\phi_5)&=\int_{-1}^1(2-2t)^{1/2}\,R_k(t)\,(1+t)\,(1-t^2)\,\text{\rm d}t\\ &=\int_{-1}^1(2-2t)^{1/2}\,P_{k+1}'(t)\,(1+t-t^2-t^3)\,\text{\rm d}t\\ &= \gamma_{k+1}^{(0)} + \gamma_{k+1}^{(1)} - \gamma_{k+1}^{(2)} - \gamma_{k+1}^{(3)}, \end{align*} where \begin{equation*} \gamma_{k}^{(j)} = \int_{-1}^1(2-2t)^{1/2}\,P_{k}'(t)\, t^j\,\text{\rm d}t. \end{equation*} Note that $\gamma_{0}^{(j)} = 0$ for any $j \geq 0$. From \eqref{Sec6_Prop14_eq3} we know the exact value of $\gamma_{k}^{(0)}$. 
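The closed forms \eqref{Sec6_Prop14_eq3} and \eqref{Sec6_Prop14_eq4} for $\gamma_k$ and $\delta_k$ can be double-checked independently of the integration-by-parts argument. A minimal sketch (assuming Python with NumPy; the helper names are ours): under $t = 1 - 2s^2$ one has $(2-2t)^{1/2} = 2s$ and $(2-2t)^{3/2} = 8s^3$, so both integrands become polynomials in $s$ that NumPy's polynomial class integrates exactly.

```python
# Exact evaluation of gamma_k and delta_k from Proposition 14 (a sketch).
# The substitution t = 1 - 2 s^2 (dt = -4 s ds) turns both integrands into
# polynomials in s, which numpy's Polynomial class antidifferentiates exactly.
from math import comb
from numpy.polynomial import Polynomial as Poly
from numpy.polynomial import legendre

s = Poly([0.0, 1.0])
sub = 1.0 - 2.0 * s**2                     # the substitution t(s)

def legendre_poly(k):
    """P_k as a Polynomial in the monomial basis."""
    return Poly(legendre.leg2poly([0.0] * k + [1.0]))

def gamma(k):
    """int_{-1}^{1} (2-2t)^{1/2} P_k'(t) dt, via composition with t(s)."""
    integrand = 8.0 * s**2 * legendre_poly(k).deriv()(sub)
    anti = integrand.integ()
    return anti(1.0) - anti(0.0)

def delta(k):
    """int_{-1}^{1} (2-2t)^{3/2} P_k''(t) dt, via composition with t(s)."""
    integrand = 32.0 * s**4 * legendre_poly(k).deriv(2)(sub)
    anti = integrand.integ()
    return anti(1.0) - anti(0.0)

def gamma_exact(k):
    """Closed form of Proposition 14, part (ii)."""
    return 2.0 * (-1.0) ** (k + 1) + 2.0 / (2 * k + 1)

def delta_exact(k):
    """Closed form of Proposition 14, part (ii)."""
    return 8.0 * (-1.0) ** k * comb(k + 1, 2) + 3.0 * gamma_exact(k)
```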
From \eqref{Sec6_Prop14_eq1} we have (recall that we have set $R_{-1}(t) =0$) \begin{equation*} tR_k(t)=\frac{k+1}{2k+3}R_{k+1}(t)+\frac{k+2}{2k+3} R_{k-1}(t), \end{equation*} which plainly gives \begin{align*} \gamma_{k+1}^{(j+1)} & = \frac{k+1}{2k+3} \gamma_{k+2}^{(j)} + \frac{k+2}{2k+3} \gamma_{k}^{(j)} \end{align*} for all $k \geq 0$ and $j \geq 0$. The previous recursion tells us that we can explicitly compute the values of all $\gamma_{k}^{(j)}$ by just knowing the values of $\gamma_{0}^{(j)}$ for all $j \geq 0$ and the values of $\gamma_{k}^{(0)}$ for all $k \geq 0$. This computation leads us to \begin{align*} \frac{1}{2\omega_3}{{k+2}\choose{2}}\Lambda_k(\phi_5) & = \gamma_{k+1}^{(0)} + \gamma_{k+1}^{(1)} - \gamma_{k+1}^{(2)} - \gamma_{k+1}^{(3)}\\ & = \frac{768 ( k+1) ( k+2) (3 - 3 k - k^2)}{( 2 k-3) ( 2 k-1) ( 2 k+1) ( 2 k+3) ( 2 k+5) ( 2 k+7) (2 k+9)}, \end{align*} and it follows that $\Lambda_0(\phi_5) , \Lambda_1(\phi_5) >0$ and $\Lambda_k(\phi_5) <0$ if $k\geq 2$, as claimed. \medskip \noindent {\it Step 4: Case} $d=6$. We set \begin{equation*} S_k(t) := C_k^2(t). \end{equation*} The definition \eqref{Sec6_Def_Gegenbauer} gives us \begin{equation}\label{Sec6_def_S_k} (1 - 2rt + r^2)^{-2} = \sum_{k=0}^{\infty} S_k(t)\, r^k\,, \end{equation} and it follows that $S_k(1) = {{k+3}\choose{3}}$. By differentiating \eqref{Sec6_Def_Gegenbauer}, in the case $\alpha = 1$, with respect to the variable $t$ and comparing coefficients with \eqref{Sec6_def_S_k} we find \begin{equation}\label{Sec6_relation_S_Q} S_k(t) = \tfrac12 \,Q_{k+1}'(t) \end{equation} for all $k \geq 0$. 
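The inter-family derivative relations used in Steps 3 and 4, $R_k = P_{k+1}'$ and $S_k = \tfrac12\, Q_{k+1}'$, admit a quick symbolic spot-check. A minimal sketch (assuming Python with NumPy; the helper name is ours), with the Gegenbauer polynomials built from the three-term recurrence equivalent to \eqref{Sec6_Prop14_eq1}:

```python
# Symbolic spot-check (a sketch) of R_k = P_{k+1}' and S_k = (1/2) Q_{k+1}'.
import numpy as np
from numpy.polynomial import Polynomial as Poly

def gegenbauer(k, a):
    """C_k^a as a Polynomial, from k C_k = 2(k+a-1) t C_{k-1} - (k+2a-2) C_{k-2}."""
    t = Poly([0.0, 1.0])
    polys = [Poly([1.0]), 2.0 * a * t]
    for n in range(2, k + 1):
        polys.append((2.0 * (n + a - 1.0) * t * polys[-1]
                      - (n + 2.0 * a - 2.0) * polys[-2]) / n)
    return polys[k]

# Example: R_3 = C_3^{3/2} should coincide with P_4' = (C_4^{1/2})'.
r3 = gegenbauer(3, 1.5)
p4_prime = gegenbauer(4, 0.5).deriv()
assert np.allclose(r3.coef, p4_prime.coef)
```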
From \eqref{Sec6_def_phi_d} and \eqref{Sec6_Def_Lambda_k} it follows that \begin{align*} \frac{1}{4\,\omega_4}{{k+3}\choose{3}}\Lambda_k(\phi_6) & = \int_{-1}^1 S_k(t)\,(1 + t - 2 t^2 - 2 t^3 + t^4 + t^5)\,\text{\rm d}t\\ & = \epsilon_k^{(0)} + \epsilon_k^{(1)} - 2\epsilon_k^{(2)} - 2\epsilon_k^{(3)} + \epsilon_k^{(4)} + \epsilon_k^{(5)}, \end{align*} where \begin{equation*} \epsilon_k^{(j)} = \int_{-1}^1 S_k(t)\,t^j\,\text{\rm d}t. \end{equation*} Since Gegenbauer polynomials of odd degree are odd, it follows that $\epsilon_{2\ell+1}^{(0)}=0$ for every $\ell\geq 0$. On the other hand, using \eqref{Sec6_relation_S_Q}, we have \begin{equation*} \epsilon_{2\ell}^{(0)} = \int_{-1}^1 S_{2\ell}(t)\,\text{\rm d}t = \frac12 \big(Q_{2\ell+1}(1) - Q_{2\ell+1}(-1) \big) = 2(\ell + 1) \end{equation*} for every $\ell \geq 0$. From \eqref{Sec6_Prop14_eq1} we have (recall that we have set $S_{-1}(t) =0$) \begin{equation*} tS_k(t)=\frac{k+1}{2k+4}S_{k+1}(t)+\frac{k+3}{2k+4} S_{k-1}(t), \end{equation*} which plainly gives \begin{align*} \epsilon_{k}^{(j+1)} & = \frac{k+1}{2k+4} \epsilon_{k+1}^{(j)} + \frac{k+3}{2k+4} \epsilon_{k-1}^{(j)}, \end{align*} for all $k \geq 0$ and $j \geq 0$. Since we know the values of $\epsilon_k^{(0)}$ for all $k\geq 0$ and $\epsilon_{-1}^{(j)} = 0$ for all $j \geq 0$, the recursion above completely determines the values of all $\epsilon_k^{(j)}$. A computation leads us to \begin{equation*} \frac{1}{4\,\omega_4}{{k+3}\choose{3}}\Lambda_k(\phi_6) = \left\{ \begin{array}{cc} -\displaystyle\frac{8 ( k+2)}{( k-1) (k+1) ( k+3) (k+5)}\,, & {\rm for} \ k \ {\rm even};\\ \\ -\displaystyle\frac{8 ( k+1) ( k+3)}{k( k-2) ( k+2) ( k+4) (k+6)}\,,& {\rm for} \ k \ {\rm odd}, \end{array} \right. \smallskip \end{equation*} and it follows that $\Lambda_0(\phi_6) , \Lambda_1(\phi_6) >0$ and $\Lambda_k(\phi_6) <0$ if $k\geq 2$, as claimed. \medskip \noindent {\it Step 5: Case} $d=7$. The argument here is analogous to the case $d=5$. 
We start by relabeling \begin{equation*} T_k(t):= C_k^{5/2}(t). \end{equation*} By the definition \eqref{Sec6_Def_Gegenbauer} we have \begin{equation}\label{Sec6_def_T_k} (1 - 2rt + r^2)^{-5/2} = \sum_{k=0}^{\infty} T_k(t)\, r^k\,, \end{equation} and thus $T_k(1) = {{k+4}\choose{4}}$. If we differentiate \eqref{Sec6_Def_Legendre} twice with respect to the variable $t$ and compare with \eqref{Sec6_def_T_k} we find that \begin{equation}\label{Sec6_rel_T_P''} 3T_k(t) = P''_{k+2}(t) \end{equation} for all $k \geq 0$. From \eqref{Sec6_def_phi_d}, \eqref{Sec6_Def_Lambda_k} and \eqref{Sec6_rel_T_P''} we have \begin{align*} \frac{3}{2\,\omega_5}{{k+4}\choose{4}}\Lambda_k(\phi_7) & = \int_{-1}^1 (2-2t)^{3/2} \,P_{k+2}''(t)\,(1+3t+2t^2-2t^3-3t^4-t^5)\,\text{\rm d}t\\ & = \delta_{k+2}^{(0)} + 3\delta_{k+2}^{(1)} + 2\delta_{k+2}^{(2)} - 2\delta_{k+2}^{(3)} - 3\delta_{k+2}^{(4)} - \delta_{k+2}^{(5)}, \end{align*} where \begin{equation*} \delta_{k}^{(j)} := \int_{-1}^1 (2-2t)^{3/2} P_{k}''(t)\,t^j\,\text{\rm d}t. \end{equation*} The values of $\delta_{k}^{(0)}$, for $k \geq 0$, are given by Proposition \ref{Prop14}. From \eqref{Sec6_Prop14_eq1} we have (recall that we have set $T_{-1}(t) =0$) \begin{equation*} t\,T_k(t)=\frac{k+1}{2k+5}T_{k+1}(t)+\frac{k+4}{2k+5} T_{k-1}(t), \end{equation*} which gives us \begin{align*} \delta_{k+2}^{(j+1)} & = \frac{k+1}{2k+5} \delta_{k+3}^{(j)} + \frac{k+4}{2k+5} \delta_{k+1}^{(j)}, \end{align*} for all $k \geq 0$ and $j \geq 0$. Since we also know that $\delta_{0}^{(j)} = \delta_{1}^{(j)}= 0$, for $j \geq 0$, we can explicitly find the values of all the $\delta_{k}^{(j)}$ from the recursion above. 
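The recursive computation can be carried out in exact rational arithmetic. A minimal sketch (assuming Python; the function names are ours): starting from $\delta_k^{(0)}$ as given by Proposition \ref{Prop14} and from $\delta_0^{(j)} = \delta_1^{(j)} = 0$, the recursion determines the combination $\delta_{k+2}^{(0)} + 3\delta_{k+2}^{(1)} + 2\delta_{k+2}^{(2)} - 2\delta_{k+2}^{(3)} - 3\delta_{k+2}^{(4)} - \delta_{k+2}^{(5)}$, which can then be compared with the closed form for $\frac{3}{2\omega_5}\binom{k+4}{4}\Lambda_k(\phi_7)$ displayed next.

```python
# Exact computation of the delta-recursion for d = 7 (a sketch).
from fractions import Fraction as Fr
from math import comb

def delta_table(kmax, jmax):
    """delta_k^{(j)} for 0 <= j <= jmax and 0 <= k <= kmax + jmax + 2."""
    m = kmax + jmax + 2
    tab = [[Fr(0)] * (m + 1) for _ in range(jmax + 1)]
    for k in range(m + 1):                 # j = 0 row: Proposition 14
        gamma_k = 2 * (-1) ** (k + 1) + Fr(2, 2 * k + 1)
        tab[0][k] = 8 * (-1) ** k * comb(k + 1, 2) + 3 * gamma_k
    for j in range(jmax):                  # delta_0^{(j)} = delta_1^{(j)} = 0
        for k in range(m - j - 2):
            tab[j + 1][k + 2] = (Fr(k + 1, 2 * k + 5) * tab[j][k + 3]
                                 + Fr(k + 4, 2 * k + 5) * tab[j][k + 1])
    return tab

def lhs(k):
    """(3 / (2 omega_5)) binom(k+4, 4) Lambda_k(phi_7), via the recursion."""
    tab = delta_table(k + 7, 5)
    weights = (1, 3, 2, -2, -3, -1)
    return sum(c * tab[j][k + 2] for j, c in enumerate(weights))

def rhs(k):
    """The closed form displayed next."""
    num = (245760 * (k + 1) * (k + 2) * (k + 3) * (k + 4)
           * (15 - 5 * k - k * k) * (-3 + 5 * k + k * k))
    den = 1
    for odd in range(-5, 16, 2):
        den *= 2 * k + odd
    return Fr(num, den)

assert lhs(0) == rhs(0) > 0   # both equal 131072/15015
```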
A computation leads to \begin{align*} & \frac{3}{2\omega_5}{{k+4}\choose{4}}\Lambda_k(\phi_7) \\ & =\frac{245760 ( k+1) ( k+2) ( k+3) ( k+4) (15 - 5 k - k^2) (-3 + 5 k + k^2)}{( 2 k-5) ( 2 k-3) ( 2 k-1) ( 2 k+1) ( 2 k+3) ( 2 k+5) ( 2 k+7) ( 2 k+9) ( 2 k+11) ( 2 k+13) ( 2 k+15)}, \end{align*} and once again one can conclude that $\Lambda_0(\phi_7), \Lambda_1(\phi_7)>0$ and $\Lambda_k(\phi_7)<0$ if $k\geq 2$. \medskip This completes the proof of Lemma \ref{Lem13}. \end{proof} \section*{Acknowledgements} \noindent The software {\it Mathematica} was used in some of the computations of the proof of Lemma \ref{Lem13}. We would like to thank Marcos Charalambides for clarifying the nature of some results in \cite{Ch}, and Stefan Steinerberger and Keith Rogers for useful comments on the exposition. We are also thankful to William Beckner, Michael Christ, Damiano Foschi, and Christoph Thiele for helpful discussions during the preparation of this work. E. C. acknowledges support from CNPq-Brazil grants $302809/2011-2$ and $477218/2013-0$, and FAPERJ-Brazil grant $E-26/103.010/2012$. Finally, we would like to thank IMPA - Rio de Janeiro and HCM - Bonn for supporting research visits during the preparation of this work.
\section*{Acknowledgments}} \newcommand{\color{red}}{\color{red}} \usepackage[ colorlinks=true, linkcolor=blue, citecolor=blue, filecolor=blue, urlcolor=blue]{hyperref} \usepackage{sidecap} \sidecaptionvpos{figure}{t} \begin{document} \title{Nonlinear dispersive waves in repulsive lattices} \author{A.~Mehrem} \affiliation{Instituto de Investigaci\'{o}n para la Gesti\'{o}n, Integrada de las Zonas Costeras, Universitat Politecnica de Valencia, Paranimf 1, 46730 Grao de Gandia, Spain} \author{N.~Jim\'{e}nez} \affiliation{LUNAM Universit\'e, Universit\'e du Maine, CNRS, LAUM UMR 6613, Av. O. Messiaen, 72085 Le Mans, France} \author{L.~J.~Salmer\'{o}n-Contreras} \affiliation{Instituto Universitario de Matem\'{a}tica Pura y Aplicada, Universitat Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain} \author{X.~Garc\'{i}a-Andr\'{e}s} \affiliation{Departamento de Ingenieria Mecanica y Materiales, Universitat Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain} \author{L.~M.~Garc\'{i}a-Raffi} \affiliation{Instituto Universitario de Matem\'{a}tica Pura y Aplicada, Universitat Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain}% \author{R.~Pic\'{o}} \affiliation{Instituto de Investigaci\'{o}n para la Gesti\'{o}n, Integrada de las Zonas Costeras, Universitat Politecnica de Valencia, Paranimf 1, 46730 Grao de Gandia, Spain} \author{V.~J.~S\'{a}nchez-Morcillo} \affiliation{Instituto de Investigaci\'{o}n para la Gesti\'{o}n, Integrada de las Zonas Costeras, Universitat Politecnica de Valencia, Paranimf 1, 46730 Grao de Gandia, Spain} \date{\today} \begin{abstract} The propagation of nonlinear waves in a lattice of repelling particles is studied theoretically and experimentally. A simple experimental setup is proposed, consisting of an array of coupled magnetic dipoles. 
By harmonically driving the lattice at one boundary, we excite propagating waves and demonstrate different regimes of mode conversion into higher harmonics, strongly influenced by dispersion and discreteness. The phenomenon of acoustic dilatation of the chain is also predicted and discussed. The results are compared with the theoretical predictions of the $\alpha$-FPU equation, which describes a chain of masses connected by nonlinear quadratic springs, and with numerical simulations. The results can be extrapolated to other systems described by this equation. \end{abstract} \maketitle \section{Introduction} Repulsive interactions among particles are known to form ordered states of matter. One example is the Coulomb interaction, which lies at the basis of solid state physics \cite{Kittel}. In a crystal, atoms and ions are organized in ordered lattices by means of repulsive forces acting among them. Such non-contact forces also provide the coupling between neighbouring atoms, which allows the propagation of perturbations in the form of phonons, or elementary excitations of the lattice. This picture is not restricted to the atomic scale. At a higher scale, the interaction of charged particles other than atoms and ions has been shown to form crystal lattices. A remarkable case is that of ionic crystals in a trap \cite{raizen1992ionic}. Such crystals, which are considered a particular form of condensed matter, are formed by charged particles, e.g. atomic ions, confined by external electromagnetic potentials (Paul or other traps) and interacting by means of the Coulomb repulsion. Crystallization requires low temperatures, which are achieved by laser cooling techniques. Different crystallization patterns have been observed by tuning the shape and strength of the traps. Crystals of trapped ions have been the subject of great attention as a possible configuration with which to perform quantum computation \cite{porras2008mesoscopic}. 
Crystallization of a gas of confined electrons, known as a Wigner crystal, has also been predicted and observed \cite{Matveev10,Deshpande08}. Waves in such crystals show strong dispersion at wavelengths comparable with the lattice periodicity. The linear (infinitesimal-amplitude) dispersion relation, and some nonlinear characteristics of wave propagation, have been experimentally determined in a system of electrically charged, micrometer-sized dust particles immersed in the sheath of a parallel-plate rf discharge in helium \cite{homann1997determination}, where the waves are excited by transferring momentum from a laser to the first particle in the chain. In other types of plasma crystals, linear wave mixing and harmonic generation of compressional waves have been demonstrated theoretically \cite{avinash2003nonlinear} and experimentally \cite{nosenko2004nonlinear}. Nonlinear standing waves have also been discussed in a two-dimensional system of charged particles \cite{denardo1988theory}, where the generation of second and third harmonics was predicted in the long-wavelength (non- or weakly dispersive) limit. Some experiments with analogue models of repulsive lattices have been done using magnets as interacting particles, with the aim of demonstrating the generation and propagation of localized perturbations (discrete breathers and solitons). For example, in the seminal work of Russell \cite{russell1997moving}, a chain of magnetic pendulums (very similar to the setup presented in this paper) was used to simulate at the macroscopic level some natural layered silicate crystals, such as muscovite mica. More recently, in \cite{moleron2014solitary}, the authors proposed another configuration of a chain of repelling magnets for the study of solitary waves, similar to the highly discrete kinks studied theoretically in Coulomb chains including realistic interatomic and substrate potentials \cite{archilla2015ultradiscrete}.
We finally note that repulsive potentials are not restricted to those of electric or magnetic nature. A celebrated case is the granular chain of spherical particles interacting via Hertz potentials. Many theoretical and experimental studies have been carried out in this system on the propagation of solitary waves. Recently, several nonlinear effects related to the propagation of intense harmonic waves in such granular lattices have been described in \cite{sanchez2013second}, with special attention to the dispersive regime. In this work, we investigate experimentally and numerically the propagation of nonlinear and dispersive waves in harmonically driven repulsive lattices with on-site potentials. In particular, we study the harmonic generation of monochromatic waves travelling in an array of coupled magnetic dipoles, comparing the observations with the predictions of the $\alpha$-FPU equation and with numerical results including an on-site potential. Two main results are reported: first, the experimental observation of second-harmonic generation in highly dispersive nonlinear lattices and, second, the saturation in the generation of the evanescent zero-frequency mode in lattices with an on-site potential. The paper is organized as follows. In Sec.~\ref{sec:theory}, the theoretical model, the equation of motion of a lattice of particles interacting by inverse power-law forces, is presented. The weakly nonlinear limit is considered, where the model approaches the celebrated $\alpha$-FPU equation. The linear dispersion relation and the analytical solutions for propagating and evanescent nonlinear periodic waves are given. In Sec.~\ref{sec:magnets}, the theory is particularized for the case of an array of coupled magnetic pendula, and the experimental setup is presented, based on a lattice of magnetic pendula rotating on a magnetic bearing system that guarantees low friction.
In Sec.~\ref{sec:results} we discuss the experimental results, concerning the generation of harmonics and of a static displacement (dilatation) mode. Finally, the conclusions of the study are given in Sec.~\ref{sec:conclusions}. \section{Theoretical model}\label{sec:theory} \subsection{Equation of motion} \begin{figure}[b] \begin{center} \includegraphics[width=0.90\columnwidth]{figure1.eps} \caption{Scheme of the lattice of nonlinearly coupled oscillators.} \label{schem2} \end{center} \end{figure} We consider an infinite chain of identical particles of mass $M$ aligned along the $x$-axis, interacting with their nearest neighbours via repulsive potentials, $V_\mathrm{int}$. In the absence of perturbations, every mass has a fixed equilibrium position, with the interparticle distance given by $a$, as shown in Fig.~\ref{schem2}. Since the forces are repulsive, for a finite chain such an equilibrium is only possible if there is an external potential $V_\mathrm{ext}$ that keeps the particles confined. This confinement can be provided by a periodic on-site potential, or by a force keeping the boundary particles at fixed positions. The equation of motion can be written as \begin{equation} M \ddot{u}_n=V'_\mathrm{int}\left(u_{n+1}-u_{n}\right) -V'_\mathrm{int}\left(u_{n}-u_{n-1}\right) +V'_\mathrm{ext}, \label{eq:eqmotion} \end{equation} \noindent where $u_n$ stands for the displacement of the $n$-th particle measured with respect to its equilibrium position, $M$ is the mass of the particle, $V$ are the potentials and $V'$ their derivatives with respect to the spatial coordinate, i.e., the forces. For small displacements, the interaction force $V'_\mathrm{int}$ can be considered linear in the distance between the particles, $r$, i.e., $V'(r)=\kappa r$, where $\kappa$ is a constant; Eq.~(\ref{eq:eqmotion}) then represents a system of coupled harmonic oscillators.
For larger displacements, the linear approximation of the interaction force cannot be assumed in most real systems, and nonlinearity must be considered. Chains of nonlinearly coupled oscillators have been extensively studied for different types of anharmonic interaction potentials. Some relevant cases are the $\alpha$-FPU lattice, where $V'(r)=\kappa_1 r+\kappa_2 r^2$ (quadratic interaction), the $\beta$-FPU lattice, where $V'(r)=\kappa_1 r+\kappa_3 r^3$ (cubic interaction), the Toda lattice, with $V'(r)=\exp({-r})-1$, or the granular lattice, with $V'(r)=\kappa r^{3/2}$. Here we consider the case of forces that decrease with an inverse power of the distance, $V'(r)=\beta r^{-\alpha}$, typical of interatomic interactions, e.g. the repulsive Coulomb interaction. For such a force, the equation of motion reads \begin{equation} \label{eq:eqmotion2} M \ddot{u}_n= \frac{\beta}{(a-u_{n+1}+u_{n})^\alpha}-\frac{\beta}{(a-u_{n}+u_{n-1})^\alpha} +V'_\mathrm{ext}. \end{equation} The exponent $\alpha$ can take different values depending on the particular system: $\alpha=2$ for electrically charged particles, e.g. in ion Coulomb crystals \cite{raizen1992ionic} or dusty plasma crystals \cite{nosenko2004nonlinear}, $\alpha=4$ for distant magnetic dipoles \cite{russell1997moving}, or any other, non-integer, power \cite{moleron2014solitary}. In general, Eq.~(\ref{eq:eqmotion2}) does not possess analytical solutions. Approximate analytical solutions can be obtained in the small-amplitude limit, i.e., assuming that the particle displacement $|u_n|$ is small compared to the lattice constant $a$.
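As an illustration, the right-hand side of Eq.~(\ref{eq:eqmotion2}) is straightforward to evaluate numerically. The following minimal sketch (the parameter values are placeholders of the right order of magnitude, not fitted to the experiment) checks two basic properties of the model: the equilibrium configuration is force-free, and a particle displaced from equilibrium feels a restoring force.

```python
import numpy as np

def accel(u, a=0.02, alpha=4.0, beta=2.6e-8, M=2e-3):
    """Acceleration of each particle from Eq. (2): nearest-neighbour
    inverse-power-law repulsion, with both boundary particles held fixed.
    Parameter values are rough placeholders, not fitted to the experiment."""
    acc = np.zeros_like(u)
    # interior particles only; u[0] and u[-1] are clamped
    acc[1:-1] = (beta / (a - u[2:] + u[1:-1]) ** alpha
                 - beta / (a - u[1:-1] + u[:-2]) ** alpha) / M
    return acc

# Equilibrium (all displacements zero) must be force-free,
# and a single displaced particle must feel a restoring force.
u = np.zeros(11)
u[5] = 1e-3                      # displace one particle by 1 mm
a5 = accel(u)[5]
```

A time integrator (e.g. velocity-Verlet) acting on `accel` reproduces the chain dynamics discussed in the following sections.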
Under this assumption, the forces can be expanded in a Taylor series, and Eq.~(\ref{eq:eqmotion2}) can be reduced, neglecting cubic and higher-order terms, to an equation in the normalized form \begin{align}\label{eq:FPU} \ddot{u}_n=&\frac{1}{4}\left(u_{n-1}-2u_{n}+u_{n+1}\right)- \nonumber \\ &\frac{\varepsilon}{8}\left(u_{n-1}-2u_{n}+u_{n+1}\right)\left(u_{n-1}-u_{n+1}\right) - \Omega_0^2 u_n, \end{align} \noindent where the normalization $u_n \to u_n/a$ has been introduced, dots now indicate derivatives with respect to the dimensionless time $\tau=\omega_m t$, where $\omega_m=\sqrt{4 \alpha \beta/M a^{\alpha+1}}$ is the maximum frequency of propagating waves (upper cutoff frequency of the dispersion relation), $\varepsilon=(1+\alpha)u_0$ is the nonlinearity coefficient, with $u_0$ the normalized driving amplitude, and $\Omega_0=\omega_0/\omega_m$ is the characteristic frequency of the on-site potential. The on-site restoring force $V'_\mathrm{ext}$ is in general nonlinear. However, for small displacements, as considered here, it may be represented by a linear restoring term, $V'_\mathrm{ext}\simeq -M \omega_0^2 a\, u_n$, where $\omega_0$ is the frequency of oscillation of the particle in the external potential. The particular form of this term for the proposed experimental setup will be discussed later. If the on-site potential term is neglected (no external forces acting on the chain), Eq.~(\ref{eq:FPU}) reduces to the celebrated $\alpha$-FPU equation. It has been considered as an approximate description of many different physical systems, and has played a central role in the study of solitons and chaos \cite{GavallottiFPU}. \subsection{Dispersion relation} Some important features of the propagation of waves in a lattice can be understood by analyzing its dispersion relation.
For infinitesimal-amplitude waves, it can be obtained analytically by neglecting the nonlinear terms in the equation of motion and looking for a harmonic discrete solution of the form $u_n = \exp {i(\Omega \tau-k n)}$, where $\Omega=\omega/\omega_m$ is the normalized wave frequency and $k$ is the wavenumber. By substituting this solution into the linearized Eq.~(\ref{eq:FPU}), we obtain the well-known dispersion relation for a monoatomic lattice, which in normalized form reads \begin{equation}\label{disp2} \Omega= \sqrt{\sin^2\left(\frac{k}{2}\right)+\Omega_0^2}. \end{equation} \noindent On one hand, there is an upper cutoff frequency at which the transition from propagating to evanescent solutions, i.e., $\mathrm{|Im(k)|>0}$, takes place; in this normalization it is given by $\Omega=\sqrt{1+\Omega_0^2}$. On the other hand, the effect of the on-site potential is to open a low-frequency bandgap in the dispersion relation, i.e., $\Omega_0$ represents the lower cutoff frequency. In the absence of an external confining potential, $\Omega_0 \rightarrow 0$, the dispersion relation reduces to $\Omega= \left| \sin\left({k}/{2}\right) \right|$, and the upper cutoff normalized frequency is $\Omega=1$. Although the dispersion relation has been derived assuming infinitesimal-amplitude (linear) waves, it also describes the propagation of other modes, such as the higher harmonics of a fundamental wave (FW), when these are generated by weakly nonlinear processes, as described in the following sections. \subsection{Analytical solutions} One known effect of the quadratic nonlinearity is the generation of second and higher harmonics of an input signal.
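The band structure of Eq.~(\ref{disp2}) is easy to explore numerically. A minimal sketch follows; the value $\Omega_0 \approx 0.095$ is an assumption taken from the ratio of the measured lower and upper cutoff frequencies ($1.68$ Hz and $17.7$ Hz) reported in the experimental section.

```python
import numpy as np

def omega(k, omega0=0.0):
    """Normalized dispersion relation, Eq. (4): Omega(k) for the
    monoatomic lattice with on-site characteristic frequency Omega_0."""
    return np.sqrt(np.sin(k / 2.0) ** 2 + omega0 ** 2)

# Sample the first Brillouin zone, 0 <= k <= pi.
# Omega_0 ~ 1.68/17.7 ~ 0.095 is inferred from the measured cutoffs.
k = np.linspace(0.0, np.pi, 201)
band = omega(k, omega0=0.095)
# band[0]  -> lower cutoff Omega_0 (bandgap below it)
# band[-1] -> upper cutoff sqrt(1 + Omega_0^2)
```

The monotonic growth of `band` between the two cutoffs is the origin of the strong dispersion exploited in the harmonic-generation experiments below.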
This is the basic effect, for example, of nonlinear acoustic waves propagating in homogeneous, non-dispersive media \cite{naugolnykh1998nonlinear,hamilton1998nonlinear}, where the amplitude of the harmonics depends on the nonlinearity of the medium, on the excitation signal and on the propagation distance (in the chain, the excitation amplitude $u_0$, the frequency $\Omega$ and the position $n$). In general, the generation of harmonics depends strongly on the dispersion of the system, as occurs in the discrete lattice described by Eq.~(\ref{eq:FPU}). To study the process of harmonic generation, an analytical solution can be obtained by perturbative techniques, such as the method of successive approximations. We follow this approach, assuming that the nonlinear parameter $\varepsilon$ is small (which implies displacements much smaller than the interparticle separation), and expressing the displacement as a power series in $\varepsilon$, in the form $u_n=u_n^{(0)}+\varepsilon u_n^{(1)}+\varepsilon^2 u_n^{(2)}+\ldots$. After substituting the expansion into Eq.~(\ref{eq:FPU}) and collecting terms at each order in $\varepsilon$, we obtain a hierarchy of linear equations that can be recursively solved. This has been done in Ref.~\cite{sanchez2013second} to study nonlinear waves in a granular chain, formed by spherical particles in contact interacting via Hertz potentials, and the result is readily extendible to a chain of particles interacting by an inverse power law of arbitrary exponent, which results in a particular value of the nonlinearity coefficient. The equation of motion is always given by Eq.~(\ref{eq:FPU}), the value of $\varepsilon$ depending on the exponent $\alpha$. In the case of the granular chain, it was shown that $\varepsilon=u_0/2$. In this work, a chain with quasi-dipolar interaction, $\alpha=4$, gives $\varepsilon=5u_0$, i.e. the nonlinear effects are one order of magnitude stronger.
\begin{figure*} \centering \includegraphics[width=0.99\textwidth]{figure2.pdf} \caption{Photograph of the experimental setup. The chain of magnets is driven mechanically by a dynamic sub-woofer speaker. On the right, a detail of the construction of the pendula, with the magnetic quasi-levitation system that minimizes the losses at the bearing.} \label{setup} \end{figure*} Up to second order of accuracy in $\varepsilon$, the displacement field can be expressed as (the details of the derivation can be found in Ref.~\cite{sanchez2013second}) \begin{align} \label{eq:Analytic} u_n=&\varepsilon \Omega^2 n + \nonumber\\ &\frac{1}{2}\left[1+\frac{1}{4} i \varepsilon^2 C_\Omega \sin\left(\frac{\Delta k}{2}n\right) e^{i\frac{\Delta k}{2}n}\right] e^{i\theta_n}+\nonumber\\ &\frac{\varepsilon}{4} \cot\left(\frac{k}{2}\right) \sin\left(\frac{\Delta k}{2}n\right) e^{i\frac{\Delta k}{2}n} e^{2i\theta_n} + \mathrm{c.c.}, \end{align} \noindent where $\theta_n=\Omega \tau-kn$, $n$ is the oscillator number corresponding to the discrete propagation coordinate, and \begin{equation}\label{eq:Di1} C_\Omega=1-\frac{\sin[k(2\Omega)/2]}{ \sin[\Delta k(\Omega)/2]}, \end{equation} \noindent where $\Delta k=2 k(\Omega)-k(2 \Omega)$ is the wavenumber mismatch between the forced, $2k(\Omega)$, and free, $k(2\Omega)$, contributions to the second harmonic. The solution given by Eq.~(\ref{eq:Analytic}) describes wave propagation in the system when the frequency of the second harmonic belongs to the dispersion relation, which is the case for driving frequencies $\Omega< 1/2$.
For higher driving frequencies, the second-harmonic frequency lies outside the propagation band (becoming an evanescent mode) and the solution takes the form \begin{align} \label{eq:Analytic2} u_n = &\varepsilon \Omega^2 n + \nonumber \\ &\frac{1}{2}\left[1+\frac{1}{8} \varepsilon^2 C_\Omega \left(1-e^{-k''n}e^{i k'n}\right)\right] e^{i\theta_n}+ \nonumber \\ &\frac{\varepsilon}{8} \cot \left(\frac{k}{2}\right) \left(1-e^{-k''n}e^{i k'n}\right) e^{2i\theta_n} + \mathrm{c.c.}, \end{align} \noindent where $k''=2 \cosh^{-1}(2\Omega)$, $k'=2 k(\Omega)-1$, and the mismatch now takes the complex form $\Delta k=k'+i k''$. The previous analytical solutions, Eqs.~(\ref{eq:Analytic})-(\ref{eq:Analytic2}), predict a number of distinctive features in the nonlinear dynamics of the system, depending on the frequency regime. When the second harmonic belongs to the propagation band, Eq.~(\ref{eq:Analytic}), dispersion causes a beating in the amplitudes of the different harmonics, since the two components of the second harmonic, having different wavenumbers, propagate asynchronously. The fundamental wave and its second harmonic oscillate out of phase in space: the displacement of the fundamental is maximum where the second harmonic vanishes, which occurs at positions satisfying the condition $n=2 \pi/\Delta k$. This process repeats periodically in space, as energy is transferred back and forth between the two waves as they propagate. Half the spatial beating period corresponds to the coherence length $l_c$: \begin{equation}\label{coherence} l_c =\frac{\pi}{\Delta k}, \end{equation} \noindent which physically corresponds to the position where the free and forced waves are exactly in phase, i.e., the location of the maximum of the first spatial beat. When the second-harmonic frequency lies beyond the cutoff frequency, the free wave is evanescent. There still exists, however, a forced wave, driven by the first harmonic at any point in the chain.
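The coherence length and the second-harmonic envelope of Eq.~(\ref{eq:Analytic}) can be evaluated directly. A sketch follows, neglecting the small on-site bandgap so that $k(\Omega)=2\arcsin\Omega$, and taking $\varepsilon=0.55$ as in the experiments reported below; the probed frequencies are illustrative choices.

```python
import numpy as np

def k_of_omega(Om):
    """Invert the dispersion relation Omega = sin(k/2) (on-site term
    neglected); valid inside the propagation band, Omega <= 1."""
    return 2.0 * np.arcsin(Om)

Om = 0.48                      # fundamental in the strongly dispersive regime
dk = 2.0 * k_of_omega(Om) - k_of_omega(2.0 * Om)   # wavenumber mismatch
l_c = np.pi / abs(dk)          # coherence length, Eq. (8), in lattice units

# Second-harmonic envelope along the chain, from Eq. (5):
eps = 0.55
n = np.arange(31)
A2 = (eps / 4.0) * abs(np.sin(dk * n / 2.0) / np.tan(k_of_omega(Om) / 2.0))

# Above half the cutoff the free second harmonic is evanescent, Eq. (7):
kpp = 2.0 * np.arccosh(2.0 * 0.55)   # k'' at Omega = 0.55, positive decay rate
```

The envelope `A2` vanishes at the driven boundary and peaks near `l_c`, reproducing the spatial beating discussed above.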
Due to this continuous forcing, the amplitudes of the fundamental and of its second harmonic do not oscillate; the amplitude of the second harmonic reaches a constant value after a short transient of growth. This implies propagation of the second harmonic even in the forbidden region. We note that similar results on the behaviour of harmonics have been obtained for nonlinear acoustic waves propagating in a 1D periodic medium, or superlattice \cite{jimenez2016nonlinear}. Finally, we note that the theory predicts the existence of a zero-frequency mode, $u_n=\varepsilon \Omega^2 n$, which represents a static deformation of the lattice, i.e., a constant dilatation. This effect will be studied in detail in Sec.~\ref{sec:results}. \section{The lattice of magnetic dipoles}\label{sec:magnets} \subsection{Forces acting on a magnet} Consider two magnetic dipoles, with magnetic moments $\vec{m}_1$ and $\vec{m}_2$. The force between them is given by the exact relation \cite{Griffiths07} \begin{equation} \label{eq:force1} \vec{F}_{1,2}= -\frac{\mu_0}{4\pi} \vec{\nabla} \left[ \frac{\vec{m}_1 \cdot \vec{m}_2}{r^3} - 3\frac{\left(\vec{m}_1 \cdot \vec{r} \right)\left( \vec{m}_2 \cdot \vec{r}\right)} {r^5}\right], \end{equation} \noindent where $\vec{r}$ is the vector joining the centres of the dipoles. This relation implies that, in general, the force depends on the angle between the dipoles. In the particular case when the dipole moments are equal in magnitude, parallel to each other, and perpendicular to $\vec{r}$ (dipoles in the same plane), the force takes the simpler form \begin{equation} \label{eq:force2} \vec{F}_{1,2}=\frac{3\mu_0}{4\pi}\frac{m^2}{r^4} \hat{x}, \end{equation} where $m=|\vec{m}_1|=|\vec{m}_2|$, $\mu_0$ is the permeability of the medium and $\hat{x}$ is a unit vector along the axis that connects the centres of the magnets.
Eq.~(\ref{eq:force2}) gives the force, at the equilibrium position ($r=a$), on a magnetic dipole of the chain ($n=1$) produced by its neighbour ($n=2$); an opposite force is produced on the oscillator $n=2$. In the case of the perturbed chain of magnets with nearest-neighbour interactions, the distance between centres is a dynamic variable. Assuming small displacements of the magnets, i.e., small angles between the dipole moments, we can use Eq.~(\ref{eq:force2}) with $r = a-u_n+u_{n+1}$ to describe the interaction between two neighbouring oscillators, \begin{equation} \label{eq:force3} \vec{F}_{n,n+1}=\frac{3\mu_0 m^2}{4\pi} \frac{1}{\left(a-u_n+u_{n+1}\right)^4}\,\hat{x}. \end{equation} \noindent Comparing with the equation of motion of the chain, Eq.~(\ref{eq:eqmotion2}), we identify the parameters \begin{equation} \beta=(3/4\pi)\mu_0 m^2,\quad \alpha=4. \end{equation} This small-angle expression for the forces, Eq.~(\ref{eq:force3}), is a crude approximation, and exact expressions can be found in Ref.~\cite{russell1997moving}. However, since our aim is to obtain simple analytical expressions based on the FPU equation, Eq.~(\ref{eq:FPU}), we will keep this degree of accuracy. The validity of this approximation for describing our setup will be tested in the next sections by comparison with the experimental results and numerical simulations. The above expressions for the forces between magnetic dipoles are valid for current loops or magnets of negligible dimensions. Expressions for finite-size magnets can be found in the literature \cite{Camacho13} and are in general lengthy and cumbersome. Gilbert's model of the magnetic field of the magnets, used here, results in approximate but simple expressions for the forces \cite{Griffiths07}.
For cylindrical magnets of length $h$, with their magnetic moments parallel and their axes perpendicular to the line joining the centres, the force between adjacent magnets can be expressed as \begin{equation} \label{eq:force4} \vec{F}_{1,2}=\frac{\mu_0 m^2}{2\pi h^2} \left( \frac{1}{r^2}- \frac{r}{\left(r^2+h^2\right)^{3/2}} \right) \hat{x}, \end{equation} \noindent where the magnetic moment is $m={\cal{M}} h \pi R^2$, with $\cal{M}$ the magnetization and $R$ the radius of the cylindrical magnet. In the limit $h\ll r$, Eq.~(\ref{eq:force4}) reduces to Eq.~(\ref{eq:force2}), i.e., magnets with dimensions small compared to their separation interact via dipolar forces, $\alpha=4$. In the opposite limit, $h\gg r$ (parallel magnets close to each other), the interaction law approaches a Coulomb-type force, i.e., $\alpha=2$. In general, the interaction law of the magnets can be approximated by an inverse power law with an exponent ranging between the monopole and dipole cases. \subsection{Experimental setup} A chain of coupled magnets was built in order to test the theoretical predictions. The experimental setup is shown in Fig.~\ref{setup}. The chain was composed of 53 identical cylindrical neodymium magnets (Webcraft GmbH, DE, magnet type N45), with mass $M=2$ g, arranged in a one-dimensional periodic lattice. The radius and height of the magnets were $R=2.5$ mm and $h=14$ mm, respectively, and their magnetization was ${\cal{M}} = 1.07\times10^6$ A/m. The magnets were oriented with the closest poles being those of the same polarity, so that the forces produced were repulsive. To achieve the necessary stability of the chain, the magnets were attached to a rigid bar which allows them to oscillate around a T-shaped support, each magnet thus being a pendulum (see Fig.~\ref{setup}). The length of the vertical bars was $L=100$ mm, and the distance between supports (and therefore the distance between magnets at equilibrium) was $a=20$ mm.
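The interpolation of Eq.~(\ref{eq:force4}) between the dipole ($\alpha=4$) and monopole ($\alpha=2$) limits can be checked by computing the local logarithmic slope of the force, $\alpha_\mathrm{eff}=-\mathrm{d}\ln F/\mathrm{d}\ln r$. A sketch follows; the magnetic moment $m\approx0.29$~A\,m$^2$ is derived from the quoted magnetization and dimensions, and the finite-difference step is an arbitrary choice.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability [T m / A]

def force_finite(r, h, m=0.294):
    """Force between parallel cylindrical magnets, Eq. (12).
    m ~ 0.29 A m^2 follows from M = 1.07e6 A/m, h = 14 mm, R = 2.5 mm."""
    return MU0 * m**2 / (2.0 * np.pi * h**2) * (
        1.0 / r**2 - r / (r**2 + h**2) ** 1.5)

def alpha_eff(r, h, dr=1e-6):
    """Local exponent of the interaction law, alpha = -d(ln F)/d(ln r)."""
    f1, f2 = force_finite(r - dr, h), force_finite(r + dr, h)
    return -r * (np.log(f2) - np.log(f1)) / (2.0 * dr)

a = 0.02                        # lattice constant [m]
al = alpha_eff(a, h=0.014)      # magnets of the experiment: h = 14 mm
```

For the geometry of the experiment the slope lies strictly between the two limits, consistent with the quasi-dipolar exponent fitted from the measured cutoff frequency.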
The bearing of the T-shaped support was specially designed to minimize the effects of friction and to give stability to the system. This was achieved by using additional ring-shaped magnets, which keep the oscillators quasi-levitating in air with just one contact point, as shown in the inset of Fig.~\ref{setup}. The effect of the pendula is to introduce an additional external force into the dynamics of the chain, corresponding to the term $V'_\mathrm{ext}$ in Eq.~(\ref{eq:eqmotion2}). If $\theta_n$ is the angle formed by a magnet with respect to its vertical equilibrium position, the restoring force due to gravity is $F_z=-M g \sin\theta_n$. For small angles $\theta_n$, and using the notation of Eq.~(\ref{eq:eqmotion2}), the force per unit mass can be approximated as $V'_\mathrm{ext}\simeq -\Omega_0^2 u_n$, with $\Omega_0=\sqrt{g/L}/\omega_m$. All magnets oscillate freely except the outermost ones. The last magnet is fixed, and the first one is attached to the excitation system. The driving system consists of an electrodynamic sub-woofer (Fostex-L363) connected to an audio amplifier (Europower EPS2500) and excited by an arbitrary function generator (Tektronix AFG-2021). The first magnet is attached to the loudspeaker's diaphragm, and is thus forced with a sinusoidal motion of adjustable frequency and amplitude. The motion of the chain is recorded using a GoPro-Hero3 camera. The camera is placed at a proper distance from the chain in order to track the motion of a certain number of magnets; in this work, the first $18$ magnets were recorded simultaneously. Each pendulum was then optically tracked using image post-processing techniques. Image calibration was employed to correct the lens aberration, using the image processing toolbox in Matlab\textsuperscript{\textregistered}, allowing the measurement of the displacement waveforms $u_n$.
We considered the travelling-wave regime, ignoring the reflected wave by time-windowing the recorded video. The measurement in a finite time window guarantees the absence of reflections from the $n=N$ boundary. Due to the short duration of the impulse response of the system, after a few cycles the system reaches the stationary regime. The transient measurement is therefore equivalent to the response of an infinite chain, and finite-size effects do not influence the experiments. The duration of each record was about $3.5$ s; the camera resolution was set to $960$p with a frame rate of 100 frames per second, leading to a sampling frequency of 100 Hz. Using the measured waveforms, the amplitude of each harmonic was estimated using the Fourier transform. \section{Experimental results}\label{sec:results} \subsection{Dispersion relation} \begin{SCfigure*}[1][t] \centering \includegraphics[width=13cm]{figure3.eps} \caption{Dispersion relation of a monoatomic chain obtained analytically using Eq.~(\ref{disp2}) (continuous line), from the experimental measurements (squares), and numerically including damping (circles). Horizontal bars indicate the experimental error of the normalized wavenumber. (a) Real part of the wavenumber, (b) imaginary part of the wavenumber.} \label{Disp_Mono} \end{SCfigure*} To obtain the dispersion relation experimentally, the first magnet was excited with a short, low-amplitude impulse, in order to ensure that the excited waves are described by linear theory. The generated travelling pulse was recorded at two consecutive magnets, $n$ and $n+1$.
The real part of the wavenumber was calculated by estimating the phase difference between them, and the imaginary part by estimating the attenuation, as \begin{align} \mathrm{Re}(k) &= \frac{\omega}{\mathrm{Re}(c_p)} = \frac{\arg [U_{n+1}(\omega)/U_n(\omega)]}{a}, \\ \mathrm{Im}(k) &= \frac{\omega}{\mathrm{Im}(c_p)} = \frac{\log\left|U_{n+1}(\omega)/U_n(\omega)\right|}{a}, \end{align} \noindent where $U_n(\omega)$ is the Fourier transform of the measured displacement of the $n$-th magnet and $c_p$ is the phase velocity. A set of 10 measurements at the oscillator $n=3$ was used to compute the mean value of the phase speed. Figures~\ref{Disp_Mono}(a-b) show the real and imaginary parts of the wavenumber, respectively, where the experimental results and the dispersion relation of Eq.~(\ref{disp2}) were evaluated at frequencies with a step of $\Delta f= 0.66$ Hz. The small magnitude of the experimental errors in the propagating band indicates good repeatability of the measurements. The experimental lower cutoff frequency was $f_0=1.68$ Hz, which agrees with the theoretical value $f_0=(1/2\pi) \sqrt{g/L} = 1.48$ Hz ($f_0=1.56$ Hz if we consider the rigid-body pendulum, taking into account the moment of inertia of the steel rod). The measured upper cutoff frequency was $f_m=17.7$ Hz. This value was used to fit Eq.~(\ref{eq:force4}) to an inverse power law: using the theoretical prediction $f_m=(1/2\pi)\sqrt{4 \alpha \beta/M a^{\alpha+1}} = 17.6$ Hz, we obtained an inverse power law with exponent $\alpha=3.6$ (quasi-dipolar interaction), which is in agreement with the ratio between the height of the magnets and their separation, as given by Eq.~(\ref{eq:force4}). Both the upper and lower limits of the dispersion relation obtained experimentally can change slightly with the amplitude of the input excitation $u_0$, which is in fact a signature of nonlinear dispersion caused by the finite amplitude of the wave.
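The two-site estimator above can be validated on synthetic data. The sketch below is not the processing code actually used; sign conventions are chosen so that a wave travelling towards increasing $n$ yields a positive phase delay and a positive attenuation per site, and the recovered quantities are $ka$ per lattice site rather than $k$ itself.

```python
import numpy as np

# Synthetic travelling wave u_n(t) = exp(-kim*n) * cos(2*pi*f0*t - kre*n),
# sampled at 100 Hz as in the experiment; the phase and amplitude ratios
# of the FFTs at adjacent sites return Re(k a) and Im(k a).
fs, T = 100.0, 20.0
t = np.arange(0.0, T, 1.0 / fs)
f0, kre, kim = 5.0, 0.9, 0.05           # per-site phase delay and attenuation
u3 = np.cos(2 * np.pi * f0 * t)                     # site n
u4 = np.exp(-kim) * np.cos(2 * np.pi * f0 * t - kre)  # site n + 1

U3, U4 = np.fft.rfft(u3), np.fft.rfft(u4)
bin0 = int(round(f0 * T))               # FFT bin of the drive frequency
ratio = U4[bin0] / U3[bin0]
k_re_est = -np.angle(ratio)             # phase lag  -> Re(k a)
k_im_est = -np.log(np.abs(ratio))       # amplitude decay -> Im(k a)
```

With an integer number of cycles in the window there is no spectral leakage, and the estimator recovers the imposed values essentially exactly.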
Note that for higher amplitudes the pulsed excitation used in these experiments leads to the generation of KdV-like compression solitons \cite{moleron2014solitary}. However, as long as the condition $u_0\ll a$ is fulfilled, the chain propagates linear modes and the dispersion relation can be obtained. One remarkable result is the low damping of the system, evidenced by the smallness of the imaginary part of the wavenumber in the propagating band. The complex dispersion relation obtained by numerical integration of Eq.~(\ref{eq:eqmotion2}), adding a damping term $\gamma \partial u_n / \partial t$ to the equation of motion, is shown in Figs.~\ref{Disp_Mono}(a-b). The damping coefficient $\gamma$ was fitted to the experiments and corresponds to 0.52 dB/m (note that the chain is 1 m long). The damping term produces a force that opposes the pendulum movement. It is worth noting here that, in the propagating band, the total drag force is roughly twice the viscous drag force estimated for a cylinder of the size of a single magnet oscillating in air \cite{brouwers1985}: the magnetic bearing system itself produces only small damping. The effect of the small losses is to smooth the limits of the band gap, as is also observed in other highly dispersive systems, e.g. in acoustics \cite{jimenez2017}, and to produce a small attenuation in the propagating band. The damping term is used in the numerical simulations in the following sections. \begin{SCfigure*}[1][tp] \centering \includegraphics[width=13cm]{figure4.eps} \caption{Three different regimes of harmonic generation, measured at different frequencies. (a) Weakly dispersive regime ($f=5$ Hz, $\Omega=0.27$), obtained using the analytical solution (continuous lines), numerical solution of the equations of motion (crosses) and experimental results (squares). (b) Corresponding experimental spectrum as a function of the oscillator number. (c) Strongly dispersive regime ($f=8.8$ Hz, $\Omega=0.48$), and (d) its corresponding spectrum.
(e) Evanescent regime for the second harmonic ($f=10.1$ Hz, $\Omega=0.55$), (f) corresponding spectrum.} \label{fig-harmonics} \end{SCfigure*} \subsection{Harmonic generation} By driving the first magnet with a sinusoidal motion, $u_1=u_0 \sin \omega t$, harmonic waves are excited and propagate along the chain. The driving amplitude was $u_0 = 2.4$ mm. According to the dispersion relation shown in Fig.~\ref{Disp_Mono}, the driving frequency $\Omega$ can be chosen among three different regimes regarding the propagation of the second harmonic: (a) weakly dispersive, (b) strongly dispersive, and (c) evanescent. The first case (a) is obtained when the frequency of the fundamental wave lies in the lower part of the pass band and the generated second harmonic is also in the pass band, in the region of weak dispersion. In this regime the equations of motion of the lattice can be approximated by a continuum whose dynamics follows the Boussinesq equation \cite{sanchez2013second}, and the wave propagates roughly without dispersion. In this low-frequency regime, the lower harmonics propagate with nearly the same phase velocity. The amplitude of the second harmonic increases roughly linearly with distance, while the first-harmonic amplitude decreases due to the energy transfer from the fundamental component to the higher harmonics. This case is shown in Fig.~\ref{fig-harmonics}~(a), where a fundamental wave with frequency $f=5$ Hz ($\Omega=0.27$) generates a second harmonic whose frequency $2f=10$ Hz lies in the weakly dispersive region of the propagative band ($\Omega=0.54$). We note that in this regime a third harmonic is also generated, as shown in Fig.~\ref{fig-harmonics}~(b), although it is not predicted by the perturbative analytical solution, due to its second-order accuracy. The second case (b), corresponding to a strongly dispersive second harmonic, is shown in Fig.~\ref{fig-harmonics}~(c).
Here, the driving frequency approaches half of the cutoff frequency of the pass band. The second harmonic lies in the highly dispersive part of the band, but still in a propagative region (slightly below the cutoff frequency $f_m$). As in the previous case, the amplitude of the second harmonic increases with distance, but now, at a particular distance given by the coherence length $l_c$, it decreases. Both the fundamental wave and its second harmonic present spatial oscillations, i.e., spatial beatings. Figures~\ref{fig-harmonics}~(c-d) illustrate this case for a fundamental wave with frequency $f=8.8$ Hz, i.e. a second harmonic with frequency $\Omega=0.96$. The experimental value of the coherence length was $l_c \approx 4.5a$, in agreement with the theoretical value given by Eq.~(\ref{coherence}). Finally, the case (c) corresponds to the second harmonic lying within the band gap, as shown in Figs.~\ref{fig-harmonics}~(e-f) for an excitation frequency of $f=10.1$ Hz ($\Omega=0.55$). In this case, the second harmonic is evanescent and its amplitude does not change with distance. One would expect the absence of the second-harmonic field (since it is an evanescent mode), but a finite amplitude is observed, in agreement with theory and numerical simulations. The second-harmonic component is generated locally, as it is ``pumped" by the fundamental wave. Its amplitude remains constant all along the chain, depending only on the driving amplitude and on the properties of the medium (the nonlinearity and the magnitude of the dispersion). The experimental results shown in Fig.~\ref{fig-harmonics} are in good agreement with the analytical predictions of the asymptotic theory (solid lines), and also with the numerical simulation of Eq.~(\ref{eq:FPU}). However, small discrepancies can be observed between the theory and the experiments, as well as between the theory and the simulations.
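The weakly dispersive regime can be reproduced with a minimal time-domain sketch of the normalized $\alpha$-FPU chain, Eq.~(\ref{eq:FPU}) without on-site term, driven sinusoidally at one end; the harmonic amplitudes at a given site are extracted with a synchronous (lock-in) average. The chain length, time step, ramp time and probed site below are arbitrary choices of this sketch, not the parameters of the published simulations.

```python
import numpy as np

# Case (a): eps = 0.55, Omega = 0.27 (weakly dispersive second harmonic).
N, eps, Om, dt = 200, 0.55, 0.27, 0.02
T = 2.0 * np.pi / Om
t_start, t_end = 150.0, 150.0 + 6.0 * T      # average over six full periods
u, v = np.zeros(N), np.zeros(N)
rec = []

def accel(u):
    """Right-hand side of Eq. (3) for the interior particles."""
    acc = np.zeros_like(u)
    d2 = u[:-2] - 2.0 * u[1:-1] + u[2:]
    acc[1:-1] = 0.25 * d2 - (eps / 8.0) * d2 * (u[:-2] - u[2:])
    return acc

for s in range(int(t_end / dt) + 1):
    tau = s * dt
    u[0] = min(tau / 50.0, 1.0) * np.sin(Om * tau)   # smoothly ramped drive
    v += dt * accel(u)                               # symplectic Euler step
    u[1:-1] += dt * v[1:-1]                          # (last particle fixed)
    if t_start < tau <= t_end:
        rec.append((tau, u[6]))                      # probe site n = 6

tau_r, u6 = np.array(rec).T
A1 = 2.0 * abs(np.mean(u6 * np.exp(-1j * Om * tau_r)))        # fundamental
A2 = 2.0 * abs(np.mean(u6 * np.exp(-1j * 2.0 * Om * tau_r)))  # second harmonic
```

The chain is long enough that no reflection from the far boundary reaches the probed site within the averaging window, mimicking the time-windowed measurements; the extracted second-harmonic amplitude is of the order predicted by Eq.~(\ref{eq:Analytic}).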
The value of the nonlinear coefficient used in the experiments was $\varepsilon=(1+\alpha)u_0 = 0.55$. Thus, the small disagreements between the theory, the experiments, and the simulations are mainly explained by the fact that the nonlinear parameter $\varepsilon$ is not small. For small excitation amplitudes, i.e., small $\varepsilon$, the theory and the numerical solutions converge to similar results. However, due to the limited precision of the motion-tracking acquisition system, it was difficult to accurately measure small-amplitude perturbations. \begin{SCfigure*}[1][t] \centering \includegraphics[width=14cm]{figure5.eps} \caption{(a) Amplitude of the static displacement mode as a function of space obtained by numerical integration of the equation of motion (continuous lines) and measured experimentally (markers) at different frequencies. (b-e) Corresponding experimental (coloured lines) and simulated (grey) waveforms acquired at oscillator $n=14$.} \label{expansion} \end{SCfigure*} \subsection{Chain dilatation} Besides the harmonic generation, the FPU equation also predicts the presence of a static (zero-frequency) mode. It physically represents an incremental shift of the average position of each oscillator, which in turn results in a constant dilatation or expansion of the chain. This effect is accounted for by the first term in Eq.~(\ref{eq:Analytic}). Since the average displacement grows linearly with distance, it can be interpreted as a constant strain produced by the acoustic mode along the lattice. The phenomenon was originally reported for acoustic waves propagating in a solid described by a nonlinear wave equation \cite{cantrell1991acoustic}, which is actually the continuous (long-wavelength) analogue of Eq.~(\ref{eq:FPU}). The effect was described there as an acoustic-radiation-induced strain.
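The origin of this static term can be checked with a one-bond average: a purely harmonic bond oscillation passed through a quadratic force law leaves a nonzero time-averaged force. A minimal numerical check in Python (assuming, as a sketch, the dimensionless bond force $f(d)=d+\alpha d^2$ with illustrative numbers, not the actual magnet force law):

```python
import numpy as np

alpha = 0.5                          # assumed quadratic-force coefficient
s = 0.2                              # strain oscillation amplitude
t = np.linspace(0.0, 2 * np.pi, 20001)
d = s * np.sin(t)                    # harmonic bond elongation over one period
f_mean = np.mean(d + alpha * d**2)   # time-averaged bond force

# The linear part averages to zero, but the quadratic part leaves a
# residual alpha*s**2/2: each bond pushes its neighbours apart on
# average, which is the static (zero-frequency) mode that dilates
# the chain.
```

The averaged force $\alpha s^2/2$ is quadratic in the wave amplitude, consistent with a radiation-force interpretation of the dilatation.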
The physical origin of the expansion of the discrete chain (and also of the continuous solid) is the anharmonicity of the interaction potential; it is therefore a general nonlinear effect. Note that radiation forces also appear in other nonlinear systems, such as acoustic waves in fluids or soft solids, and even in light (radiation pressure), the generation of radiation forces being a general mechanism of any wave motion \cite{sarvazyan2010}. We remark that the phenomenon of acoustic expansion is analogous to the thermal expansion of solids, which also has its physical origin in the lattice anharmonicity. The link between these two effects and their relation to the acoustic nonlinear parameter has been pointed out in Ref.~\cite{Cantrell82}. We show in Fig.~\ref{expansion} the generation of the zero mode in the particular case of the chain of coupled oscillators and for different excitation frequencies. The experimental results agree with simulations of the full equations of motion including the restoring force. We can see that, for all frequencies, the linear increase of the displacement predicted by the analytical solution is not observed. Instead, we observe two regimes, with a transition between them at a particular distance. First, in the region near the boundary (extending up to $n\approx 8$ in our experiment), the displacement grows roughly linearly with distance, as predicted by the theory without restoring force. However, beyond a given distance the growth of the static displacement mode saturates, and the chain attains an unstrained state, with the oscillators moving around positions shifted with respect to their initial values. This behaviour is not predicted by the theory.
The saturation effect can be understood if we recall that the theory was developed assuming that there was no prescribed equilibrium position for any oscillator in the chain, that the chain was semi-infinite, and that the only force acting on the masses was the nearest-neighbour interaction. However, in the experimental setup an additional restoring force is present, due to gravity. For small perturbations this is equivalent to an on-site potential. Since the magnets are pendula, the maximum shift of a magnet with respect to its equilibrium position is also bounded. Note that in Fig.~\ref{expansion} the oscillators are displaced by less than a lattice step. Note also that for a finite value of the on-site potential $\Omega_0$ the zero mode is always evanescent. Then, as described previously for the second harmonic in the evanescent case, only the forced contribution to the zero mode is present, leading to a constant value of the zero mode. Finally, Fig.~\ref{expansion2} shows the dependence of the zero mode on frequency, measured at $n=3$, $n=5$ and $n=10$. It can be observed that the experimental results agree with the simulations of the full equations of motion, while the simulations of the FPU equation roughly agree with the theory (in this case the excitation amplitude was $u_0=4.8$ mm, leading to a value of the nonlinear parameter of $\varepsilon=0.96$). For frequencies below $\Omega\approx0.8$, the amplitude of the zero mode roughly follows a quadratic dependence on frequency. In addition, the period-averaged displacement of an oscillator corresponds to the position where there exists a balance between the gravity restoring force and the equivalent acoustic-radiation force produced by the nonlinear compressional wave, i.e., $F_\mathrm{ARF} = \Omega_0^2 \left\langle u_n\right\rangle$, where $\left\langle u_n\right\rangle$ is the amplitude of the zero mode.
Thus, the induced acoustic-radiation force in the experimental chain also follows a quadratic dependence on frequency for low-frequency waves. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{figure6.eps} \caption{Dependence of the amplitude of the static mode on the excitation frequency obtained using the analytical solution (dashed lines), numerical integration of the FPU equation (crosses), numerical integration with the pendulum restoring force (grey thick line) and experiments (squares), measured at oscillator (a) $n=3$, (b) $n=5$ and (c) $n=10$.} \label{expansion2} \end{figure*} \section{Conclusions}\label{sec:conclusions} The propagation of nonlinear monochromatic waves in a lattice of particles coupled by repulsive forces following an inverse power-law with distance has been studied theoretically, numerically and experimentally. In the limit of small amplitudes, the system is described by an FPU equation with quadratic nonlinearity, whose analytical solutions were generalized to the case of arbitrary inverse power-law interactions. In particular, an experiment has been developed consisting of a lattice of coupled magnetic dipoles sinusoidally driven at one boundary, while a magnetic bearing system for the rotation of each pendulum provides low mechanical damping. In spite of the simplifying assumptions made in the theoretical analysis, the observations agree quite well with the model concerning the generation of the second harmonic; e.g., the characteristic spatial beatings of the second harmonic due to the dispersion of the lattice are observed. One particular feature of the studied lattice is the existence of a restoring force due to the action of gravity on the pendula. This is roughly equivalent to the introduction of an on-site potential, leading to the generation of a low-frequency band gap.
In this work, it has been observed for the first time that the generated zero mode is evanescent due to the presence of the on-site potential; therefore, only the forced component of the zero mode propagates through the chain, and a saturation of the amplitude of the zero mode is observed. There exist discrepancies between the analytical FPU theory and the experimental measurements of the static dilatation mode. They are caused mainly by the fact that the developed theory is based on an FPU equation that lacks the on-site potential producing the low-frequency band gap. Therefore, while the FPU theory predicts a linear monotonic growth of the zero mode, the presence of the low-frequency band gap makes the zero-frequency mode evanescent and, as a consequence, a saturation of the dilatation of the chain is observed in the experiments and in the numerical simulations. The particular dynamics of the generated zero mode have been discussed in analogy with the radiation force produced by a nonlinear monochromatic travelling wave. This result has an interest beyond the particular system studied here, since there exist a number of systems, e.g., condensed matter or granular crystals, that present similar dispersion relations, with a low-frequency band gap. Additionally, the present low-friction experimental setup can be used to explore other effects of nonlinear discrete systems that have been predicted in the literature, e.g., nonlinear localized modes. Under the assumption of small amplitude, these results indicate that the lattice of magnetic dipoles is well described by an $\alpha$-FPU equation, which opens the possibility of extending the results to other systems described by the same generic equation. The proposed system can also be viewed as a mechanical analogue, at a macroscopic scale, of a microscopic crystal of interacting charged particles (atoms or ions).
Despite the limited applicability of this simple one-dimensional lattice to describing real crystals, the approach nevertheless possesses some advantages, such as the possibility of varying parameters that are normally fixed, e.g., the strength of the interaction and the on-site potential, or of exploring strongly nonlinear regimes that are hardly achievable at atomic scales. \acknowledgments This research was funded by the Spanish Ministerio de Economia e Innovacion (MINECO), grant FIS2015-65998-C2-2-P. AM gratefully acknowledges the support of Generalitat Valenciana (Santiago Grisolia program). LJSC gratefully acknowledges the support of PAID-01-14 at Universitat Polit\`ecnica de Val\`encia.
\section{Introduction} Distributed algorithms for learning, inference, modeling, and optimization by networked agents are prevalent in many domains and applicable to a wide range of problems \cite{Sayed14PROC, Sayed14NOW, Dimakis10PROC, Saber07Pro}. Among the various classes of algorithms, techniques that are based on first-order gradient-descent iterations are particularly useful for distributed processing due to their low complexity, low power demands, and robustness against imperfections or unmodeled effects. Three of the most studied classes are consensus algorithms \cite{Olfati04TAC, Saber07Pro, Kar09TSP, Nedic09TAC, Kar11TSP}, diffusion algorithms \cite{Lopes08TSP, Cattivelli10TSP, Chen12TSP, Zhao12TSP2, Zhao12TSP, Sayed13Chapter, Sayed13SPM, Sayed14PROC}, and incremental algorithms \cite{Bertsekas97JOP, Nedic01JOP, Rabbat05JSAC, Lopes07TSP, Helou09SIAMOPT, Johansson09SIAMOPT}. The incremental techniques rely on the determination of a Hamiltonian cycle over the topology, which is generally an NP-hard problem and is therefore a hindrance to real-time adaptation, and even more so when the topology is dynamic and changes with time. For this reason, we will consider mainly learning algorithms of the consensus and diffusion types. In this work we focus on the case in which \emph{constant} step-sizes are employed in order to enable \emph{continuous} adaptation and learning in response to streaming data. When diminishing step-sizes are used, the algorithms would cease to adapt after the step-sizes have approached zero, which is problematic for applications that require the network to remain continually vigilant and to track possible drifts in the data and clusters. Therefore, adaptation with constant step-sizes is necessary in these scenarios. 
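The tracking advantage of constant step-sizes can be illustrated with a toy scalar LMS filter chasing a drifting optimum. The sketch below is illustrative only: the random-walk drift model, the step-size values, and the noise levels are our own assumptions, not taken from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

n_iter = 2000
# Drifting optimum w_o(i): a random walk, so the target never settles.
w_o = np.cumsum(0.01 * rng.standard_normal(n_iter))

def lms(step):
    """Scalar LMS on streaming data d_i = w_o[i]*u_i + v_i; returns the
    tracking MSE averaged over the second half of the run."""
    w = 0.0
    err = np.empty(n_iter)
    for i in range(n_iter):
        u = rng.standard_normal()                    # regressor
        d = w_o[i] * u + 0.05 * rng.standard_normal()  # noisy measurement
        w += step(i) * u * (d - u * w)               # stochastic-gradient update
        err[i] = (w - w_o[i]) ** 2
    return float(np.mean(err[n_iter // 2:]))

mse_const = lms(lambda i: 0.1)             # constant step-size: keeps tracking
mse_decay = lms(lambda i: 1.0 / (i + 1))   # diminishing: adaptation dies out
```

With the diminishing step-size the update effectively freezes after the early iterations, so the error is dominated by the subsequent drift of `w_o`, whereas the constant step-size filter keeps tracking it.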
It turns out that when constant step-sizes are used, the dynamics of the distributed (consensus or diffusion) strategies are modified in a non-trivial manner: the stochastic gradient noise that is present in their update steps does not die out anymore and it seeps into the operation of the algorithms. In other words, while this noise component would be annihilated by decaying step-sizes, it will remain persistently active during constant step-size adaptation. As such, it becomes important to evaluate how well constant step-size implementations can alleviate the influence of gradient noise. It was shown in \cite{Tu12TSP, Sayed14PROC, Sayed14NOW} that consensus strategies can become problematic when constant step-sizes are employed. This is because of an asymmetry in their update relations, which can cause the state of the network to grow unbounded when these networks are used for adaptation. In comparison, diffusion networks do not suffer from this asymmetry problem and have been shown to be mean stable regardless of the topology of the network. This is a reassuring property, especially in the context of applications where the topology can undergo changes over time. These observations motivate us to focus our analysis on diffusion strategies, although the conclusions and arguments can be extended with proper adjustments to consensus strategies. Now, most existing works on distributed learning algorithms focus on the case in which all agents in the network are interested in estimating a common parameter vector, which generally corresponds to the minimizer of some aggregate cost function (see, e.g., \cite{Sayed14PROC, Sayed14NOW, Dimakis10PROC, Saber07Pro} and the references therein). In this article, we are instead interested in scenarios where different clusters of agents within the network are interested in estimating different parameter vectors. 
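A minimal sketch of why this multi-task setting matters (the toy network of six scalar LMS agents, the adapt-then-combine update, and the combination matrices below are illustrative assumptions of our own): when agents with different optima average their iterates, every estimate is biased toward the network-wide mean, whereas within-cluster combination is unbiased.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two clusters of three agents each, with different cluster optima.
w_star = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
N, mu, n_iter = 6, 0.05, 4000

def atc_diffusion(A):
    """Adapt-then-combine diffusion LMS with combination matrix A."""
    w = np.zeros(N)
    for _ in range(n_iter):
        u = rng.standard_normal(N)                      # regressors
        d = w_star * u + 0.1 * rng.standard_normal(N)   # noisy measurements
        psi = w + mu * u * (d - u * w)                  # adapt: local LMS step
        w = A @ psi                                     # combine neighbour iterates
    return w

A_cluster = np.kron(np.eye(2), np.full((3, 3), 1 / 3))  # within clusters only
A_global = np.full((N, N), 1 / N)                       # averaging across clusters

bias_cluster = np.abs(atc_diffusion(A_cluster) - w_star).max()
bias_global = np.abs(atc_diffusion(A_global) - w_star).max()
```

Combining only within clusters leaves a small residual error set by the gradient noise, while global averaging drags every agent toward the mean of the two optima, producing an order-one bias; this is the interference that the clustering mechanism studied below is designed to avoid.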
There have been several useful works in this domain in the literature under various assumptions, including in the earlier version of this work in \cite{Zhao12CIP}. This early investigation dealt only with the case of two separate clusters in the network with each cluster interested in one parameter vector. One useful application of this formulation in the context of biological networks was considered in \cite{Tu14TSP}, where each agent was assumed to collect data arising from one of two models (e.g., the location of two separate food sources). The agents did not know which model generated their observations and, yet, they needed to reach agreement about which model to follow (i.e., which food source to move towards). Another important extension dealing with multiple (more than two) models appears in \cite{ChenJ14TSP, ChenJ14TSP2} where multi-task problems are introduced. In this formulation, different clusters of the agents are again interested in estimating different parameter vectors (called ``tasks'') and the tasks of adjacent clusters are further assumed to be related to each other so that cooperation among clusters can still be beneficial. This formulation is useful in many scenarios, as already illustrated in \cite{ChenJ14TSP}, including in multiple target tracking \cite{Liu07MSP, Zhang11TAC} and classification problems involving multiple models \cite{Duda01, Francis74AS, Li96TAC, Cherkassky05TNN, Theodoridis09, Jacob09NIPS}. Other useful variations of multi-task problems appear in \cite{Bertrand10TSP}, which assumes fully-connected networks, and in \cite{Bogdanovic14ICASSP} where the agents have two types of parameters to estimate (a local parameter and a global parameter). These various works focus on mean-square-error (MSE) design, where the parameters of interest are estimated by seeking the minimizer of an MSE cost. 
Moreover, with the exception of \cite{Zhao12CIP, ChenJ14TSP2}, it is generally assumed in these works that the agents know beforehand which clusters they belong to or which parameters they are interested in estimating. In this article, we extend the approach of \cite{Zhao12CIP} and study multi-tasking adaptive networks under three conditions that are fundamentally different from previous studies. First, we go beyond mean-square-error estimation and allow for more general convex risk functions at the agents. This level of generality allows the framework to handle broader situations both in adaptation and learning, such as logistic regression for pattern classification purposes. Second, we do not assume any relation among the different objectives pursued by the clusters. In other words, we study the important problem where different components of the network are truly interested in different objectives and would like to avoid interference among clusters. And third, the agents do not know beforehand which clusters they belong to and which other agents are interested in the same objective. For example, in an application involving a sensor network tracking multiple moving objects from various directions, it is reasonable to assume that the trajectories of these objects are independent of each other. In this case, only information shared within clusters is beneficial for learning; the information from agents in other clusters would amount to interference. This means that agents would need to cooperate with neighbors that belong to the same cluster and would need to cut their links to neighbors with different objectives. This task would be simple to achieve if agents were aware of their cluster information. However, we will not be making that assumption. The cluster information will need to be learned as well. This point highlights one major feature of our formulation: we do not assume that agents have full knowledge about their clusters. 
This assumption is quite common in the context of unsupervised machine learning \cite{Duda01, Theodoridis09}, where the collected measurement data are not labeled and there are multiple candidate models. If two neighboring agents are interested in the same model and they are aware of this fact, then they should exchange data and cooperate. However, the agents may not know this fact, so they cannot be certain about whether or not they should cooperate. Accordingly, in this work, we will devise an adaptive clustering and learning strategy that allows agents to learn which neighbors they should cooperate with. In doing so, the resulting algorithm enables the agents in a network to be correctly clustered and to attain improved learning performance through enhanced intra-cluster cooperation. \emph{Notation}: We use lowercase letters to denote vectors, uppercase letters for matrices, plain letters for deterministic variables, and boldface letters for random variables. We also use $(\cdot)^{\mathsf{T}}$ to denote transposition, $(\cdot)^{-1}$ for matrix inversion, ${\mathrm{Tr}}(\cdot)$ for the trace of a matrix, and $\| \cdot \|$ for the 2-norm of a matrix or the Euclidean norm of a vector. In addition, we use $A \otimes B$ for matrices $A$ and $B$ to denote their Kronecker product, $A \ge B$ to denote that $A-B$ is positive semi-definite, and $A \succeq B$ to denote that all entries of $A-B$ are nonnegative. \section{Problem Formulation} \label{sec:problem} We consider a network consisting of $N$ agents inter-connected via some topology. An individual cost function, $J_k(w):\mathbb{R}^{M\times 1} \mapsto \mathbb{R}$, of a vector parameter $w$, is associated with every agent $k$. Each cost $J_k(w)$ is assumed to be strictly-convex and is minimized at a unique point $w_k^o$. According to the minimizers $\{ w_k^o \}$, agents in the network are categorized into $Q \ge 2$ mutually-exclusive clusters, denoted by $\mathcal{C}_q$, $q = 1, 2, \dots, Q$.
\begin{definition}[Cluster] \label{def:cluster} Each cluster $q$, denoted by $\mathcal{C}_q$, consists of the collection of agents whose individual costs share the common minimizer $w_q^\star$, i.e., $w_k^o = w_q^\star$ for all $k \in \mathcal{C}_q$. \hfill \IEEEQED \end{definition} Since agents from different clusters do not share common minimizers, the network then aims to solve the \emph{clustered} multi-task problem: \begin{equation} \label{eqn:Jclusterdef} \minimize_{\{ w_q \}_{q = 1}^Q } \quad J(w_1, \dots, w_Q) \triangleq \sum_{q=1}^{Q} \sum_{k \in \mathcal{C}_q} J_k(w_q) \end{equation} \noindent If the cluster information $\{\mathcal{C}_q\}$ is available to the agents, then problem \eqref{eqn:Jclusterdef} can be decomposed into $Q$ separate optimization problems over the sub-networks associated with the clusters: \begin{equation} \label{eqn:Jclusterqdef} \minimize_w \quad J_q^\textrm{c}(w) \triangleq \sum_{k \in \mathcal{C}_q} J_k(w) \end{equation} for $q = 1, 2, \dots, Q$. Assuming the cluster topologies are connected, the corresponding minimizers $\{w_q^\star \}$ can be sought by employing diffusion strategies over each cluster. In this case, collaborative learning will only occur \emph{within} each cluster without any interaction across clusters. This means that for every agent $k$ that belongs to a particular cluster $\mathcal{C}_q$, i.e., $k \in \mathcal{C}_q$, its neighbors, which belong to the set denoted by $\mathcal{N}_k$, will need to be segmented into two sets: one set is denoted by $\mathcal{N}_k^+$ and consists of neighbors that belong to the same cluster $\mathcal{C}_q$, and the other set is denoted by $\mathcal{N}_k^-$ and consists of neighbors that belong to other clusters. 
It is clear that \begin{equation} \label{eqn:Neighborhood+and-def} \mathcal{N}_k^+ \triangleq \mathcal{N}_k \cap \mathcal{C}_q, \qquad \qquad \mathcal{N}_k^- \triangleq \mathcal{N}_k \backslash \mathcal{N}_k^+ \end{equation} We illustrate a two-cluster network with a total of $N = 20$ agents in Fig. \ref{fig:illustration_init_tot}. The agents in the clusters are denoted by blue and red circles, and are inter-connected by the underlying topology, so that agents may have in-cluster neighbors as well as neighbors from other clusters. For example, agent $k$ from blue cluster $\mathcal{C}_1$ has the in-cluster sub-neighborhood $\mathcal{N}_k^+ = \{k, 3, 4\}$, which is a subset of its neighborhood $\mathcal{N}_k = \{ k, 1, 2, 3, 4, 5 \}$. If the cluster information is available to all agents, then the network can be split into two sub-networks, one for each cluster, as illustrated in Figs. \ref{fig:illustration_res_blue} and \ref{fig:illustration_res_red}. However, in this work we consider the more challenging scenario in which the cluster information $\{\mathcal{C}_q\}$ is only \emph{partially} available to the agents beforehand, or even completely unavailable. When the cluster information is completely absent, each agent $k$ must first identify neighbors belonging to $\mathcal{N}_k^+$. When the cluster information is partially known, meaning that some agents from the same cluster already know each other, then these agents can cooperate to identify the other members in their cluster. In order to study these two scenarios in a uniform manner, we introduce the concept of a group. \begin{definition}[Group] \label{def:group} A group $m$, denoted by $\mathcal{G}_m$, is a collection of connected agents from the same cluster who know that they belong to this same cluster. \hfill \IEEEQED \end{definition} \begin{figure}[h] \centerline{ \subfloat[The underlying topology.]
{\includegraphics[width=2.5in]{init_tot} \label{fig:illustration_init_tot}} \hfil \subfloat[The clustered topology for $\mathcal{C}_1$.] {\includegraphics[width=2.5in]{res_blue} \label{fig:illustration_res_blue}}} \centerline{ \subfloat[The clustered topology for $\mathcal{C}_2$.] {\includegraphics[width=2.5in]{res_red} \label{fig:illustration_res_red}} \hfil \subfloat[Five groups from cluster $\mathcal{C}_1$.] {\includegraphics[width=2.5in]{init_blue} \label{fig:illustration_init_blue}} } \caption{A network with $N=20$ nodes and $Q = 2$ clusters. Cluster $\mathcal{C}_1$ consists of 10 agents in blue. Cluster $\mathcal{C}_2$ consists of another 10 agents in red. Agent $k$ belongs to Cluster $\mathcal{C}_1$, and its neighborhood is denoted by $\mathcal{N}_k = \{ k, 1, 2, 3, 4, 5 \}$ with $\mathcal{N}_k^+ = \{k, 3, 4\}$. With perfect cluster information, the underlying topology splits into two sub-networks, one for each cluster. With partial cluster information, cluster $\mathcal{C}_1$ breaks down into five groups: two singleton groups $\mathcal{G}_1$ and $\mathcal{G}_5$, and three non-trivial groups $\mathcal{G}_2$, $\mathcal{G}_3$, and $\mathcal{G}_4$. Through adaptive learning and clustering, the five groups in (d) will end up merging into a single group corresponding to the entire cluster in (b).} \label{fig:network} \vspace{-1\baselineskip} \end{figure} Figure \ref{fig:illustration_init_blue} illustrates the concept of groups when cluster information is only partially available to the agents in the network from Fig. \ref{fig:illustration_init_tot}. If an agent has no information about its neighbors, then it falls into a singleton group, such as groups $\mathcal{G}_1$ and $\mathcal{G}_5$ in Fig. \ref{fig:illustration_init_blue}. If some neighboring agents know the cluster information of each other, then they form a non-trivial group, such as groups $\mathcal{G}_2$, $\mathcal{G}_3$, and $\mathcal{G}_4$.
If every agent in a cluster knows the cluster information of all its neighbors, then all cluster members form one group and this group coincides with the cluster itself, as shown in Fig. \ref{fig:illustration_res_blue}. Since cooperation among neighbors belonging to different clusters can lead to biased results \cite{Chen13JSTSP, Sayed14NOW, ChenJ14TSP}, agents should only cooperate within clusters. However, when agents have access to partial cluster information, then they only know their group neighbors but not \emph{all} cluster neighbors. Therefore, at this stage, agents can only cooperate within groups, leaving behind some potential opportunity for cooperation with neighbors from the same cluster. The purpose of this work is to devise a procedure to enable agents to identify all of their cluster neighbors, such that small groups from the same cluster can merge automatically into larger groups. At the same time, the procedure needs to be able to turn off links between different clusters in order to avoid interference. By using such a procedure, agents in multi-task networks with \emph{partial} cluster information will be able to cluster themselves in an \emph{adaptive} manner, and then solve problem \eqref{eqn:Jclusterdef} by solving \eqref{eqn:Jclusterqdef} collaboratively \emph{within} each cluster. We shall examine closely the probability of successful clustering and evaluate the steady-state mean-square-error performance for the overall learning process. In particular, we will show that the probability of correct clustering approaches one for sufficiently small step-sizes. We will also show that, with the enhanced cooperation that results from adaptive clustering, the mean-square-error performance for the network will be improved relative to the network without adaptive clustering. \section{Models and Assumptions} We summarize the main conditions on the network topology in the following statement. 
\begin{assumption}[Topology, clusters, and groups] \hfill \label{ass:topology} \begin{enumerate} \item The network consists of $Q$ clusters, $\{\mathcal{C}_q; q = 1, 2, \dots, Q\}$. The size of cluster $\mathcal{C}_q$ is denoted by $N_q^c$ such that $| \mathcal{C}_q | = N_q^c$ and $\sum_{q=1}^{Q} N_q^c = N$. \item The underlying topology for each cluster $\mathcal{C}_q$ is connected. Clusters are also inter-connected by some links so that agents from different clusters may still be neighbors of each other. \item There is a total of $G$ groups, $\{\mathcal{G}_m; m = 1, 2, \dots, G\}$, in the network. The size of group $\mathcal{G}_m$ is denoted by $N_m^g$ such that $|\mathcal{G}_m| = N_m^g$ and $\sum_{m=1}^G N_m^g = N$. \hfill \IEEEQED \end{enumerate} \end{assumption} \noindent It is obvious that $Q \le G \le N$ because each cluster has at least one group and each group has at least one agent. \begin{definition}[Indexing rule] \label{def:index} Without loss of generality, we index groups according to their cluster indexes such that groups from the same cluster will have consecutive indexes. Likewise, we index agents according to their group indexes such that agents from the same group will have consecutive indexes. \hfill \IEEEQED \end{definition} According to this indexing rule, if group $\mathcal{G}_m$ belongs to cluster $\mathcal{C}_q$, then the next group $\mathcal{G}_{m+1}$ will belong either to cluster $\mathcal{C}_q$ or the next cluster, $\mathcal{C}_{q+1}$; if agent $k$ belongs to group $\mathcal{G}_m$, then the next agent $k+1$ will belong either to group $\mathcal{G}_m$ or the next group, $\mathcal{G}_{m+1}$. Based on the problem formulation in Section \ref{sec:problem}, although agents in the same cluster are connected, they are generally not aware of each other's cluster information, and therefore some agents in the same cluster may not cooperate in the initial stage of adaptation. 
On the other hand, agents in the same group are aware of each other's cluster information, so these agents can cooperate. As the learning process proceeds, agents from different groups in the same cluster will recognize each other through information sharing. Once cluster information is inferred, small groups will merge into larger groups, and agents will start cooperating with more neighbors. Through this adaptive clustering procedure, cooperative learning will grow until all agents within the same cluster become cooperative and the network performance is enhanced. To proceed with the modeling assumptions, we introduce the following network Hessian matrix function: \begin{equation} \label{eqn:bigHwdef} \nabla^2 J( {\scriptstyle{\mathcal{W}}} ) \triangleq {\mathrm{diag}}\{ \nabla^2 J_1(w_1), \dots, \nabla^2 J_N(w_N) \} \end{equation} where the vector ${\scriptstyle{\mathcal{W}}}$ collects the parameters from across the network: \begin{equation} \label{eqn:wnetdef} {\scriptstyle{\mathcal{W}}} \triangleq {\mathrm{col}}\{ w_1, \dots, w_N \} \in \mathbb{R}^{NM\times 1} \end{equation} We also collect the individual minimizers into a vector: \begin{equation} \label{eqn:wonetworkdef} {\scriptstyle{\mathcal{W}}}^o \triangleq {\mathrm{col}}\{w_1^o, \dots, w_N^o\} = {\mathrm{col}}\{ \ds{1}_{N_q^c} \otimes w_q^\star ; q = 1, \dots, Q \} \end{equation} where the second equality is due to the indexing rule in Definition \ref{def:index}, and $\ds{1}_n$ denotes an $n \times 1$ vector with all its entries equal to one. We next list two standard assumptions for stochastic distributed learning over adaptive networks to guide the subsequent analysis in this work. One assumption relates to the analytical properties of the cost functions, and is meant to ensure well-defined minima and well-posed problems. The second assumption relates to stochastic properties of the gradient noise processes that result from approximating the true gradient vectors. 
This assumption is meant to ensure that the gradient approximations are unbiased and with moments satisfying some regularity conditions. Explanations and motivation for these assumptions in the context of inference problems can be found in \cite{Polyak87, Sayed14PROC, Sayed14NOW}. \begin{assumption}[Cost functions] \hfill \label{ass:costfunctions} \begin{enumerate} \item Each individual cost $J_k(w)$ is assumed to be strictly-convex, twice-differentiable, and with bounded Hessian matrix function satisfying: \begin{equation} \lambda_{k,L} I_M \le \nabla^2 J_k(w) \le \lambda_{k,U} I_M \end{equation} where $0 \le \lambda_{k,L} \le \lambda_{k,U} < \infty$. \item In each group $\mathcal{G}_m$, at least one individual cost, say, $J_{k^o}(w)$, is strongly-convex, meaning that the lower bound, $\lambda_{k^o, L}$, on the Hessian of this cost is positive. \item The network Hessian function $\nabla^2 J({\scriptstyle{\mathcal{W}}})$ in \eqref{eqn:bigHwdef} satisfies the Lipschitz condition: \begin{equation} \label{eqn:lipschitzHessian} \| \nabla^2 J({\scriptstyle{\mathcal{W}}}_1) - \nabla^2 J({\scriptstyle{\mathcal{W}}}_2) \| \le \kappa_H \| {\scriptstyle{\mathcal{W}}}_1 - {\scriptstyle{\mathcal{W}}}_2 \| \end{equation} for any ${\scriptstyle{\mathcal{W}}}_1, {\scriptstyle{\mathcal{W}}}_2 \in \mathbb{R}^{NM\times 1}$ and some $\kappa_H \ge 0$. \hfill \IEEEQED \end{enumerate} \end{assumption} \noindent The second set of assumptions relate to conditions on the gradient noise processes. For this purpose, we introduce the filtration $\{\mathbb{F}_i;i \ge 0\}$ to represent the information flow that is available up to the $i$-th iteration of the learning process. 
The true network gradient function and its stochastic approximation are respectively denoted by \begin{align} \label{eqn:biggdef} \nabla J({\scriptstyle{\mathcal{W}}}) & \triangleq {\mathrm{col}}\{ \nabla J_1(w_1), \dots, \nabla J_N(w_N) \} \\ \label{eqn:biggapproxdef} \widehat{\nabla J}({\scriptstyle{\mathcal{W}}}) & \triangleq {\mathrm{col}}\{ \widehat{\nabla J_1}(w_1), \dots, \widehat{\nabla J_N}(w_N) \} \end{align} The gradient noise at iteration $i$ and agent $k$ is denoted by: \begin{equation} \label{eqn:additivegradienterror} \bm{s}_{k,i}(\bm{w}_{k,i-1}) \triangleq \widehat{\nabla J_k}(\bm{w}_{k,i-1}) - \nabla J_k(\bm{w}_{k,i-1}) \end{equation} where $\bm{w}_{k,i-1}$ denotes the estimate for $w_k^o$ that is available to agent $k$ at iteration $i-1$. The network gradient noise is denoted by ${\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1})$ and is the random process that is obtained by aggregating all noise processes from across the network into a vector: \begin{equation} \label{eqn:bigsidef} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \triangleq {\mathrm{col}}\{ \bm{s}_{1,i}(\bm{w}_{1,i-1}), \dots, \bm{s}_{N,i}(\bm{w}_{N,i-1}) \} \end{equation} Using \eqref{eqn:additivegradienterror}, we can write \begin{equation} \label{eqn:additivegradienterrornetwork} \widehat{\nabla J}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) = \nabla J({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) + {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \end{equation} We denote the conditional covariance of ${\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1})$ by \begin{equation} \label{eqn:bigRsidef} \mathcal{R}_{s,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \triangleq \mathbb{E} [ {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) 
{\scriptstyle{\boldsymbol{\mathcal{S}}}}_i^\mathsf{T}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) | \mathbb{F}_{i-1}] \end{equation} where ${\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}$ is in $\mathbb{F}_{i-1}$. \begin{assumption}[Gradient noise] \label{ass:gradienterrors} It is assumed that the gradient noise process satisfies the following properties for any ${\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}$ in $\mathbb{F}_{i-1}$: \begin{enumerate} \item Martingale difference \cite{Kushner03, Sayed14NOW}: \begin{equation} \label{eqn:martingaledifference} \mathbb{E} [ {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) | \mathbb{F}_{i-1} ] = 0 \end{equation} \item Bounded fourth-order moment \cite{Chen13TIT, Zhao13TSPasync1, Sayed14NOW}: \begin{equation} \label{eqn:bounded4thorder} \mathbb{E} [ \| {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \|^4 | \mathbb{F}_{i-1} ] \le \alpha^2 \| {\scriptstyle{\mathcal{W}}}^o - {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} \|^4 + \sigma_s^4 \end{equation} for some $\alpha, \sigma_s \ge 0$, and where ${\scriptstyle{\mathcal{W}}}^o$ is from \eqref{eqn:wonetworkdef}. \item Lipschitz conditional covariance function \cite{Chen13TIT, Zhao13TSPasync1, Sayed14NOW}: \begin{equation} \label{eqn:lipschitzcovariance} \| \mathcal{R}_{s,i}({\scriptstyle{\mathcal{W}}}^o) - \mathcal{R}_{s,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \| \le \kappa_s \| {\scriptstyle{\mathcal{W}}}^o - {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} \|^{\gamma_s} \end{equation} for some $\kappa_s \ge 0$ and $0 < \gamma_s \le 4$. \item Convergent conditional covariance matrix \cite{Kushner03, Chen13TIT, Zhao13TSPasync1, Sayed14NOW}: \begin{equation} \label{eqn:convergentcovariance} \mathcal{R}_s \triangleq \lim_{i \rightarrow \infty} \mathcal{R}_{s,i}({\scriptstyle{\mathcal{W}}}^o) > 0 \end{equation} where $\mathcal{R}_s$ is symmetric and positive definite. 
\hfill \IEEEQED \end{enumerate} \end{assumption} It is easy to verify from \eqref{eqn:bounded4thorder}, by Jensen's inequality, that the second-order moment of the gradient noise process also satisfies: \begin{equation} \label{eqn:boundedvariance} \mathbb{E} [ \| {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \|^2 | \mathbb{F}_{i-1} ] \le \alpha \| {\scriptstyle{\mathcal{W}}}^o - {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} \|^2 + \sigma_s^2 \end{equation} \section{Proposed Algorithm and Main Results} \label{sec:algorithm} In order to minimize all cluster cost functions $\{ J_q^\textrm{c}(w); q = 1, 2, \dots, Q \}$ defined by \eqref{eqn:Jclusterqdef}, agents need to cooperate only within their clusters. Although cluster information is in general not available beforehand, groups within each cluster are available according to Assumption \ref{ass:topology}. Therefore, based on this prior information, agents can instead focus on solving the following problem based on partitioning by groups rather than by clusters: \begin{equation} \label{eqn:Jgroupdef} \minimize_{\{ w_m \}_{m = 1}^G } \quad J'(w_1, \dots, w_G) \triangleq \sum_{m = 1}^G \sum_{k \in \mathcal{G}_m} J_k(w_m) \end{equation} with one parameter vector $w_m$ for each group $\mathcal{G}_m$. In one extreme case, when prior clustering information is totally absent, groups will collapse into singletons and problem \eqref{eqn:Jgroupdef} will reduce to the individual non-cooperative case with each agent running its own stochastic-gradient algorithm to minimize its cost function. In the other extreme case, when cluster information is completely available, groups will be equivalent to clusters and problem \eqref{eqn:Jgroupdef} will reduce to the formulation in \eqref{eqn:Jclusterdef}. Therefore, problem \eqref{eqn:Jgroupdef} is general and includes many scenarios of interest as special cases.
We shall argue in the sequel that, during the process of solving \eqref{eqn:Jgroupdef}, agents will be able to gradually learn their neighbors' clustering information. This information will be exploited through a \emph{separate} learning procedure run by each group to dynamically involve more neighbors (from outside the group) in local cooperation. In this way, we will be able to establish analytically that, with high probability, agents will be able to successfully solve problem \eqref{eqn:Jclusterdef} (and not just \eqref{eqn:Jgroupdef}) even \emph{without} having the complete clustering information in advance. We motivate the algorithm by examining problem \eqref{eqn:Jgroupdef}. Since the groups $\{\mathcal{G}_m\}$ are already formed and they are disjoint, problem \eqref{eqn:Jgroupdef} can be decomposed into $G$ separate optimization problems, one for each group: \begin{equation} \label{eqn:Jgroupmdef} \minimize_w \quad J_m^g(w) \triangleq \sum_{k \in \mathcal{G}_m} J_k(w) \end{equation} with $m = 1, 2, \dots, G$.
For any agent $k$ belonging to group $\mathcal{G}_m$ in cluster $\mathcal{C}_q$, i.e., $k \in \mathcal{G}_m \subseteq \mathcal{C}_q$, it is easy to verify that \begin{equation} \{k\} \subseteq \mathcal{N}_k \cap \mathcal{G}_m \subseteq \mathcal{N}_k \cap \mathcal{C}_q = \mathcal{N}_k^+ \end{equation} Then, agents in group $\mathcal{G}_m$ can seek the solution of $J_m^g(w)$ in \eqref{eqn:Jgroupmdef} by using the adapt-then-combine (ATC) diffusion learning strategy over $\mathcal{G}_m$, namely, \begin{subequations} \begin{align} \label{eqn:distributedadaptgroup} \bm{\psi}_{k,i} & = \bm{w}_{k,i-1} - \mu_k \widehat{\nabla J_k} (\bm{w}_{k,i-1}) \\ \label{eqn:distributedcombinegroup} \bm{w}_{k,i} & = \sum_{\ell \in \mathcal{N}_k \cap \mathcal{G}_m} a_{\ell k} \bm{\psi}_{\ell,i} \end{align} \end{subequations} for all $k\in\mathcal{G}_m$, where $\mu_k > 0$ denotes the step-size parameter, and $\{ a_{\ell k} \}$ are convex combination coefficients that satisfy \begin{equation} \label{eqn:alkcondition} \left\{ \begin{aligned} a_{\ell k} > 0 & \;\; \mbox{if} \;\; \ell \in \mathcal{N}_k \cap \mathcal{G}_m \\ a_{\ell k} = 0 & \;\; \mbox{otherwise} \end{aligned} \right., \;\; \mbox{and} \;\; \sum_{\ell = 1}^{N} a_{\ell k} = 1 \end{equation} Moreover, $\bm{w}_{k,i}$ denotes the random estimate computed by agent $k$ at iteration $i$, and $\bm{\psi}_{k,i}$ is the intermediate iterate. We collect the coefficients $\{a_{\ell k}\}$ into a matrix $A \triangleq [a_{\ell k}]_{\ell, k = 1}^N$. 
Obviously, $A$ is a left-stochastic matrix, namely, \begin{equation} A^\mathsf{T} \ds{1}_N = \ds{1}_N \end{equation} We collect the iterates generated from \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} by group $\mathcal{G}_m$ into a vector: \begin{equation} \label{eqn:wqigroupdef} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i} \triangleq {\mathrm{col}}\{ \bm{w}_{k,i}; k \in \mathcal{G}_m \} \in \mathbb{R}^{N_m^g M \times 1} \end{equation} where $N_m^g$ is the size of $\mathcal{G}_m$. According to the indexing rule from Definition \ref{def:index} for agents and groups, the estimate for the entire network from \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} can be obtained by stacking the group estimates $\{{\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i}\}$: \begin{equation} \label{eqn:winetworkdef} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_i \triangleq {\mathrm{col}}\{ \bm{w}_{1,i}, \dots, \bm{w}_{N,i} \} = {\mathrm{col}}\{ {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{1,i}, \dots, {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{G,i} \} \end{equation} The procedure used by the agents to enlarge their groups will be based on the following results to be established in later sections. 
We will show in Theorem \ref{theorem:normaldistribution} that after sufficient iterations, i.e., as $i \rightarrow \infty$, and for small enough step-sizes, i.e., $\mu_k \ll 1$ for all $k$, the network estimate ${\scriptstyle{\boldsymbol{\mathcal{W}}}}_i$ defined by \eqref{eqn:winetworkdef} exhibits a distribution that is \emph{nearly} Gaussian: \begin{equation} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_i \sim \mathbb{N}({\scriptstyle{\mathcal{W}}}^o, \; \mu_{\max} \Pi) \end{equation} where $\mathbb{N}(\phi, \Psi)$ denotes a Gaussian distribution with mean $\phi$ and covariance $\Psi$, ${\scriptstyle{\mathcal{W}}}^o$ is from \eqref{eqn:wonetworkdef}, \begin{equation} \label{eqn:mumaxdef} \mu_{\max} \triangleq \max_{k = 1, \dots, N} \mu_k \end{equation} and $\Pi \in \mathbb{R}^{ NM \times NM }$ is a symmetric, positive semi-definite matrix, independent of $\mu_{\max}$, and defined later by \eqref{eqn:Pinetworkdef}. In addition, we will show that for any pair of agents from two different groups, for example, $k \in \mathcal{G}_m$ and $\ell \in \mathcal{G}_n$, where the two groups $\mathcal{G}_m$ and $\mathcal{G}_n$ may or may not originate from the same cluster, the difference between their estimates will also be distributed approximately according to a Gaussian distribution: \begin{equation} \label{eqn:wdiffGaussian} \bm{w}_{\ell,i} - \bm{w}_{k,i} \sim \mathbb{N}( w_\ell^o - w_k^o, \; \mu_{\max} \Delta_{\ell,k} ) \end{equation} where \begin{equation} \Delta_{\ell,k} \triangleq \Pi_{\ell,\ell} + \Pi_{k,k} - \Pi_{k, \ell} - \Pi_{\ell, k} \end{equation} is a symmetric, positive semi-definite matrix, and $\Pi_{k,\ell}$ denotes the $(k,\ell)$-th block of $\Pi$ with block size $M \times M$. These results are useful for inferring the cluster information for agents $k$ and $\ell$. 
Indeed, since the covariance matrix in \eqref{eqn:wdiffGaussian} is on the order of $\mu_{\max}$, the probability density function (pdf) of $\bm{w}_{\ell,i} - \bm{w}_{k,i}$ will concentrate around its mean, namely, $w_\ell^o - w_k^o$, when $\mu_{\max}$ is sufficiently small. Therefore, if these agents belong to the same cluster such that $w_\ell^o = w_k^o$, then we will be able to conclude from \eqref{eqn:wdiffGaussian} that with high probability, $\| \bm{w}_{\ell,i} - \bm{w}_{k,i} \|^2 = O(\mu_{\max})$. On the other hand, if the agents belong to different clusters such that $w_\ell^o \neq w_k^o$, then it will hold with high probability that $\| \bm{w}_{\ell,i} - \bm{w}_{k,i} \|^2 = O(\mu_{\max}^0)$. This observation suggests that a hypothesis test can be formulated for agents $\ell$ and $k$ to determine whether or not they are members of the same cluster: \begin{equation} \label{eqn:decisionrule} \| \bm{w}_{\ell,i} - \bm{w}_{k,i} \|^2 \overset{\mathbb{H}_0}{\underset{\mathbb{H}_1}{\lessgtr}} \theta_{k,\ell} \end{equation} where $\mathbb{H}_0$ denotes the hypothesis $w_\ell^o = w_k^o$, $\mathbb{H}_1$ denotes the hypothesis $w_\ell^o \neq w_k^o$, and $\theta_{k,\ell} > 0$ is a predefined threshold. Both agents $\ell$ and $k$ will test \eqref{eqn:decisionrule} to reach a symmetric pattern of cooperation. Since $\bm{w}_{k,i}$ and $\bm{w}_{\ell,i}$ are accessible through local interactions within neighborhoods, the hypothesis test \eqref{eqn:decisionrule} can be carried out in a distributed manner. 
We will further show that the probabilities for both types of errors incurred by \eqref{eqn:decisionrule}, i.e., the false alarm (Type-I) and the missed detection (Type-II) errors, decay at exponential rates, namely, \begin{align*} \mbox{Type-I:} & \; {\mathbb{P}}[ \| \bm{w}_{\ell,i} - \bm{w}_{k,i} \|^2 > \theta_{k,\ell} | w_\ell^o = w_k^o ] \le O(e^{-c_1/\mu_{\max}}) \\ \mbox{Type-II:} & \; {\mathbb{P}}[ \| \bm{w}_{\ell,i} - \bm{w}_{k,i} \|^2 < \theta_{k,\ell} | w_\ell^o \neq w_k^o ] \le O(e^{-c_2/\mu_{\max}}) \end{align*} for some constants $c_1 > 0$ and $c_2 > 0$. Therefore, for long enough iterations and small enough step-sizes, agents are able to successfully infer the cluster information with very high probability. The clustering information acquired at each iteration $i$ is used by the agents to dynamically adjust their \emph{inferred} cluster neighborhoods. The inferred neighborhood $\bm{\mathcal{N}}_{k,i}^+$ for agent $k\in\mathcal{G}_m$ at iteration $i$ consists of the neighbors that are accepted under hypothesis $\mathbb{H}_0$ and the other neighbors that are already in the same group: \begin{equation} \label{eqn:neighborhoodki+def} \bm{\mathcal{N}}_{k,i}^+ \triangleq \{\ell \in \mathcal{N}_k; \| \bm{w}_{\ell,i} - \bm{w}_{k,i} \|^2 < \theta_{k,\ell} \;\; \mbox{or} \;\; \ell \in \mathcal{G}_m \} \end{equation} Using these dynamically-evolving cluster neighborhoods, we introduce a \emph{separate} ATC diffusion learning strategy: \begin{subequations} \begin{align} \label{eqn:distributedadaptdynamic} \bm{\psi}_{k,i}' & = \bm{w}_{k,i-1}' - \mu_k \widehat{\nabla J_k} (\bm{w}_{k,i-1}') \\ \label{eqn:distributedcombinedynamic} \bm{w}_{k,i}' & = \sum_{\ell \in \bm{\mathcal{N}}_{k,i-1}^+ } \bm{a}_{\ell k}'(i-1) \bm{\psi}_{\ell,i}' \end{align} \end{subequations} where the combination coefficients $\{ \bm{a}_{ \ell k}'(i-1)\}$ become random because $\bm{\mathcal{N}}_{k,i-1}^+$ is random and may vary over iterations.
The iteration index $i-1$ is used for these coefficients to enforce causality. Since $\mathcal{N}_k \cap \mathcal{G}_m$ denotes the neighbors of agent $k$ that are already in the same group $\mathcal{G}_m$ as $k$, it is obvious that $\mathcal{N}_k \cap \mathcal{G}_m \subseteq \bm{\mathcal{N}}_{k,i-1}^+$ for any $i\ge0$. This means that recursion \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic} generally involves a larger range of interactions among agents than the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup}. We summarize the algorithm in the following listing. \vspace{0.5\baselineskip} \begin{algorithmic} \hrule \vskip 0.1\baselineskip \hrule \vskip 0.2\baselineskip \STATE \textbf{Distributed clustering and learning over networks} \vskip 0.2\baselineskip \hrule \vskip 0.2\baselineskip \STATE Initialization: $\bm{w}_{k,-1} = \bm{w}_{k,-1}' = 0$ and $\bm{\mathcal{N}}_{k,-1}^+ = \mathcal{N}_k \cap \mathcal{G}_m$ for all $k \in \mathcal{G}_m$ and $m = 1, 2, \dots, G$. \FOR{$i\ge0$} \STATE (1) Each agent $k$ updates $\bm{w}_{k,i}$ according to the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} over $\mathcal{N}_k \cap \mathcal{G}_m$. \STATE (2) Each agent $k$ updates $\bm{w}_{k,i}'$ according to the second recursion \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic} over $\bm{\mathcal{N}}_{k,i-1}^+$. \STATE (3) Each agent $k$ updates $\bm{\mathcal{N}}_{k,i}^+$ by using \eqref{eqn:neighborhoodki+def} with $\{\bm{w}_{\ell,i}; \ell \in \mathcal{N}_k\}$ from step (1). \ENDFOR \vskip 0.2\baselineskip \hrule \vskip 0.1\baselineskip \hrule \end{algorithmic} \vspace{0.5\baselineskip} \section{Mean-Square-Error Analysis} \label{sec:performancegroup} In the previous section, we mentioned that Theorem \ref{theorem:normaldistribution} in Section \ref{subsection:normalpdf} is the key result for the design of the clustering criterion. 
To arrive at this theorem, we shall derive two useful intermediate results, Lemmas \ref{lemma:approxerrorrecursion} and \ref{lemma:lowrankapprox}, in this section. These two results are related to the MSE analysis of the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup}, which is used in step (1) of the proposed algorithm. We shall therefore examine the stability and the MSE performance of recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} in the sequel. It is clear that the evolution of this recursion is not influenced by the other two steps. Thus, we can study recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} independently. \subsection{Network Error Recursion} Using model \eqref{eqn:additivegradienterrornetwork}, recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} leads to \begin{equation} \label{eqn:uniformiteration} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_i = \mathcal{A}^\mathsf{T} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} - \mathcal{A}^\mathsf{T} \mathcal{M} \, \nabla J({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) - \mathcal{A}^\mathsf{T} \mathcal{M} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \end{equation} where ${\scriptstyle{\boldsymbol{\mathcal{W}}}}_i$ is from \eqref{eqn:winetworkdef}, $ \nabla J(\cdot)$ is from \eqref{eqn:biggdef}, ${\scriptstyle{\boldsymbol{\mathcal{S}}}}_i(\cdot)$ is from \eqref{eqn:bigsidef}, and \begin{align} \label{eqn:bigMdef} \mathcal{M} & \triangleq {\mathrm{diag}}\{ \mu_1, \dots, \mu_N \} \otimes I_M \\ \label{eqn:bigAdef} \mathcal{A} & \triangleq A \otimes I_M \end{align} We introduce the network error vector: \begin{equation} \label{eqn:networkerrorvectordef} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \triangleq {\scriptstyle{\mathcal{W}}}^o - {\scriptstyle{\boldsymbol{\mathcal{W}}}}_i = {\mathrm{col}}\{ \widetilde{\bm{w}}_{1,i}, \dots,
\widetilde{\bm{w}}_{N,i} \} \end{equation} where ${\scriptstyle{\mathcal{W}}}^o$ is from \eqref{eqn:wonetworkdef}, and the individual error vectors: \begin{equation} \widetilde{\bm{w}}_{k,i} \triangleq w_k^o - \bm{w}_{k,i} \end{equation} Using the mean-value theorem \cite{Polyak87, Sayed14NOW}, we can write \begin{equation} \label{eqn:gwlinearize} \nabla J({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) = \nabla J({\scriptstyle{\mathcal{W}}}^o) - \left[ \int_{0}^{1} \nabla^2 J({\scriptstyle{\mathcal{W}}}^o - t \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1}) dt \right] \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1} \end{equation} where $\nabla^2 J(\cdot)$ is from \eqref{eqn:bigHwdef}. Since ${\scriptstyle{\mathcal{W}}}^o$ consists of individual minimizers throughout the network, it follows that $ \nabla J({\scriptstyle{\mathcal{W}}}^o) = 0$. Let \begin{equation} \label{eqn:bigbarHdef} \bm{\mathcal{H}}_{i-1} \triangleq \int_{0}^{1} \nabla^2 J({\scriptstyle{\mathcal{W}}}^o - t \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1}) dt = {\mathrm{diag}}\{ \bm{H}_{k,i-1} \}_{k=1}^{N} \end{equation} where \begin{equation} \label{eqn:Hkodef} \bm{H}_{k,i-1} \triangleq \int_{0}^{1} \nabla^2 J_k(w_k^o - t \widetilde{\bm{w}}_{k,i-1}) dt \end{equation} Then, expression \eqref{eqn:gwlinearize} can be rewritten as \begin{equation} \label{eqn:gwlinearizenew} \nabla J({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) = - \bm{\mathcal{H}}_{i-1} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1} \end{equation} where it is worth noting that the random matrix $\bm{\mathcal{H}}_{i-1}$ is dependent on $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1}$. 
Substituting \eqref{eqn:gwlinearizenew} into \eqref{eqn:uniformiteration} yields: \begin{equation} \label{eqn:uniformiteration1} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_i = \mathcal{A}^\mathsf{T} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} + \mathcal{A}^\mathsf{T} \mathcal{M} \bm{\mathcal{H}}_{i-1} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1} - \mathcal{A}^\mathsf{T} \mathcal{M} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \end{equation} By the indexing rule from Definition \ref{def:index} and condition \eqref{eqn:alkcondition}, the combination matrix $A$ possesses a block diagonal structure: \begin{equation} \label{eqn:Ablockstructure} A = {\mathrm{diag}}\{ A_m; m = 1, \dots, G \} \end{equation} where each $A_m$ collects the combination coefficients within group $\mathcal{G}_m$: \begin{equation} A_m \triangleq [a_{\ell k}; \ell, k \in \mathcal{G}_m] \end{equation} From the same condition \eqref{eqn:alkcondition}, we have that each $A_m$ is itself an $N_m^g \times N_m^g$ left-stochastic matrix: \begin{equation} \label{eqn:Am1eq1} A_m^\mathsf{T} \ds{1}_{N_m^g} = \ds{1}_{N_m^g} \end{equation} If group $\mathcal{G}_m$ is a subset of cluster $\mathcal{C}_q$, then the agents in $\mathcal{G}_m$ share the same minimizer at $w_q^\star$. 
Thus, for any $\mathcal{G}_m \subseteq \mathcal{C}_q$, let \begin{equation} \label{eqn:wmgroupdef} {\scriptstyle{\mathcal{W}}}_m^o \triangleq {\mathrm{col}}\{w_k^o; k \in \mathcal{G}_m \} = \ds{1}_{N_m^g} \otimes w_q^\star \end{equation} It follows from \eqref{eqn:Am1eq1} and \eqref{eqn:wmgroupdef} that \begin{equation} \label{eqn:AmImwm} (A_m^\mathsf{T} \otimes I_M) {\scriptstyle{\mathcal{W}}}_m^o = (A_m^\mathsf{T} \otimes I_M) (\ds{1}_{N_m^g} \otimes w_q^\star) = {\scriptstyle{\mathcal{W}}}_m^o \end{equation} Again, from the indexing rule in Definition \ref{def:index}, we have from \eqref{eqn:wonetworkdef} and \eqref{eqn:wmgroupdef} that \begin{equation} \label{eqn:wogroupclusters} {\scriptstyle{\mathcal{W}}}^o = {\mathrm{col}}\{{\scriptstyle{\mathcal{W}}}_m^o; m = 1, \dots, G \} \end{equation} Then, it follows from \eqref{eqn:Ablockstructure} and \eqref{eqn:wogroupclusters} that \begin{equation} \label{eqn:Awoequalwo} \mathcal{A}^\mathsf{T} {\scriptstyle{\mathcal{W}}}^o = \begin{bmatrix} A_1^\mathsf{T} \otimes I_M & & \\ & \ddots & \\ & & A_G^\mathsf{T} \otimes I_M \\ \end{bmatrix} \begin{bmatrix} {\scriptstyle{\mathcal{W}}}_1^o \\ \vdots \\ {\scriptstyle{\mathcal{W}}}_G^o \\ \end{bmatrix} = {\scriptstyle{\mathcal{W}}}^o \end{equation} Accordingly, subtracting ${\scriptstyle{\mathcal{W}}}^o$ from both sides of \eqref{eqn:uniformiteration1} and using \eqref{eqn:Awoequalwo} yields the network error recursion: \begin{equation} \label{eqn:uniformiteration2} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i = \mathcal{A}^\mathsf{T} ( I_{NM} - \mathcal{M} \bm{\mathcal{H}}_{i-1} ) \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1} + \mathcal{A}^\mathsf{T} \mathcal{M} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \end{equation} We denote the coefficient matrix appearing in \eqref{eqn:uniformiteration2} by \begin{equation} \label{eqn:bigBdef} \bm{\mathcal{B}}_{i-1} \triangleq \mathcal{A}^\mathsf{T} ( I_{NM} - 
\mathcal{M} \bm{\mathcal{H}}_{i-1} ) \end{equation} Then, the network error recursion \eqref{eqn:uniformiteration2} can be rewritten as \begin{equation} \label{eqn:networkerrorrecursiondef} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i = \bm{\mathcal{B}}_{i-1} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1} + \mathcal{A}^\mathsf{T} \mathcal{M} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \end{equation} We further introduce the group quantities: \begin{align} \label{eqn:bigAqdef} \mathcal{A}_m & \triangleq A_m \otimes I_M \\ \label{eqn:wqidef} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i} & \triangleq {\mathrm{col}}\{ \bm{w}_{k,i}; k \in \mathcal{G}_m \} \in \mathbb{R}^{N_m^g M \times 1} \\ \label{eqn:bigMqdef} \mathcal{M}_m & \triangleq {\mathrm{diag}}\{\mu_k; k \in \mathcal{G}_m\} \otimes I_M \\ \label{eqn:bigHqdef} \bm{\mathcal{H}}_{m,i-1} & \triangleq {\mathrm{diag}}\{ \bm{H}_{k,i-1}; k \in \mathcal{G}_m \} \\ \label{eqn:bigsqdef} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_{m,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i-1}) & \triangleq {\mathrm{col}}\{ \bm{s}_{k,i}(\bm{w}_{k,i-1}); k \in \mathcal{G}_m \} \end{align} It follows from the indexing rule in Definition \ref{def:index} that \begin{align} \label{eqn:bigAandAq} \mathcal{A} & = {\mathrm{diag}} \{ \mathcal{A}_1, \dots, \mathcal{A}_G \} \\ \label{eqn:wiandwqi} {\scriptstyle{\boldsymbol{\mathcal{W}}}}_i & = {\mathrm{col}}\{ {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{1,i}, \dots, {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{G,i} \} \\ \label{eqn:bigMandMq} \mathcal{M} & = {\mathrm{diag}}\{ \mathcal{M}_1, \dots, \mathcal{M}_G \} \\ \label{eqn:bigHandHq} \bm{\mathcal{H}}_{i-1} & = {\mathrm{diag}}\{ \bm{\mathcal{H}}_{1,i-1}, \dots, \bm{\mathcal{H}}_{G,i-1} \} \\ \label{eqn:bigsandsq} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) & = {\mathrm{col}}\{ 
{\scriptstyle{\boldsymbol{\mathcal{S}}}}_{1,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{1,i-1}), \dots, {\scriptstyle{\boldsymbol{\mathcal{S}}}}_{G,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{G,i-1}) \} \end{align} Using \eqref{eqn:bigAandAq}--\eqref{eqn:bigHandHq}, the matrix $\bm{\mathcal{B}}_{i-1}$ in \eqref{eqn:bigBdef} can be expressed by \begin{equation} \label{eqn:bigBandBq} \bm{\mathcal{B}}_{i-1} = {\mathrm{diag}}\{ \bm{\mathcal{B}}_{1,i-1}, \dots, \bm{\mathcal{B}}_{G,i-1} \} \end{equation} where \begin{equation} \label{eqn:bigBqdef} \bm{\mathcal{B}}_{m,i-1} \triangleq \mathcal{A}_m^\mathsf{T} (I_{N_m^g M} - \mathcal{M}_m \bm{\mathcal{H}}_{m,i-1}) \end{equation} Due to the block structures in \eqref{eqn:bigAandAq}--\eqref{eqn:bigBandBq}, groups are isolated from each other. Therefore, using these group quantities, the network error recursion \eqref{eqn:networkerrorrecursiondef} is automatically decoupled into a total of $G$ group error recursions, where the $m$-th recursion is given by \begin{equation} \label{eqn:networkerrorrecursiongroup} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i} = \bm{\mathcal{B}}_{m,i-1} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i-1} + \mathcal{A}_m^\mathsf{T} \mathcal{M}_m {\scriptstyle{\boldsymbol{\mathcal{S}}}}_{m,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i-1}) \end{equation} \subsection{Mean-Square and Mean-Fourth-Order Error Stability} \label{subsection:stability} The stability of the network error recursion \eqref{eqn:networkerrorrecursiondef} is now reduced to studying the stability of the group recursions \eqref{eqn:networkerrorrecursiongroup}. Recall that, by Definition \ref{def:group}, the agents in each group are connected. Moreover, condition \eqref{eqn:alkcondition} implies that agents in each group have non-trivial self-loops, meaning that $a_{kk} > 0$ for all $k \in \mathcal{G}_m$. 
It follows that each $A_m$ is a primitive matrix \cite{BermanPF, Sayed14PROC} (which is satisfied as long as there exists at least one $a_{kk} > 0$ in each group). Under these conditions, we are now able to ascertain the stability of the second and fourth-order error moments of the network error recursion \eqref{eqn:networkerrorrecursiondef} by appealing to results from \cite{Sayed14NOW}. \begin{theorem}[Stability of error moments] \label{theorem:stability} For sufficiently small step-sizes, the network error recursion \eqref{eqn:networkerrorrecursiondef} is mean-square and mean-fourth-order stable in the sense that \begin{align} \label{eqn:varianceasymptoticbound} \limsup_{i \rightarrow \infty} \; \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \|^2 & = O(\mu_{\max}) \\ \label{eqn:4thorderasymptoticbound} \limsup_{i \rightarrow \infty} \; \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \|^4 & = O(\mu_{\max}^2) \end{align} \end{theorem} \begin{proof} It is obvious that the network error recursion \eqref{eqn:networkerrorrecursiondef} is mean-square and mean-fourth-order stable if, and only if, each group error recursion \eqref{eqn:networkerrorrecursiongroup} is stable in a similar sense. From Assumption \ref{ass:costfunctions}, we know that there exists at least one strongly-convex cost in each group. Since the combination matrix $A_m$ for each group is primitive and left-stochastic, we can now call upon Theorems 9.1 and 9.2 from \cite[p. 508, p. 
522]{Sayed14NOW} to conclude that every group error recursion is mean-square and mean-fourth-order stable, namely, \begin{align} \label{eqn:varianceasymptoticboundgroup} \limsup_{i \rightarrow \infty} \; \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i} \|^2 & = O(\mu_{\max}) \\ \label{eqn:4thorderasymptoticboundgroup} \limsup_{i \rightarrow \infty} \; \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i} \|^4 & = O(\mu_{\max}^2) \end{align} from which \eqref{eqn:varianceasymptoticbound} and \eqref{eqn:4thorderasymptoticbound} follow. \end{proof} \subsection{Long-Term Model} \label{subsection:steadystatecov} Once network stability is established, we can proceed to assess the performance of the adaptive clustering and learning procedure. To do so, it becomes more convenient to first introduce a long-term model for the error dynamics \eqref{eqn:networkerrorrecursiondef}. Note that recursion \eqref{eqn:networkerrorrecursiondef} represents a non-linear, time-variant, and stochastic system that is driven by a state-dependent random noise process. 
Analysis of recursion \eqref{eqn:networkerrorrecursiondef} is facilitated by noting (see Lemma \ref{lemma:approxerrorrecursion} below) that when the step-size parameter $\mu_{\max}$ is small enough, the mean-square behavior of \eqref{eqn:networkerrorrecursiondef} in steady-state, when $i \gg 1$, can be well approximated by the behavior of the following long-term model: \begin{equation} \label{eqn:longtermerrorrecursion} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} = \mathcal{B} \; \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1}^\textrm{long} + \mathcal{A}^\mathsf{T} \mathcal{M} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) \end{equation} where we replaced the random matrix $\bm{\mathcal{B}}_{i-1}$ in \eqref{eqn:networkerrorrecursiondef} by the constant matrix \begin{equation} \label{eqn:bigBfixeddef} \mathcal{B} \triangleq \mathcal{A}^\mathsf{T} ( I_{NM} - \mathcal{M} \mathcal{H} ) \end{equation} In \eqref{eqn:bigBfixeddef}, the matrix $\mathcal{H}$ is defined by \begin{equation} \label{eqn:bigHdef} \mathcal{H} \triangleq {\mathrm{diag}}\{ H_1, \dots, H_N \} \end{equation} where \begin{equation} \label{eqn:Hkdef} H_k \triangleq \nabla^2 J_k(w_k^o) \end{equation} Note that the long-term model \eqref{eqn:longtermerrorrecursion} is now a \emph{linear time-invariant} system, albeit one that continues to be driven by the \emph{same} random noise process as in \eqref{eqn:networkerrorrecursiondef}. 
Similarly to the original error recursion \eqref{eqn:networkerrorrecursiondef}, the long-term recursion \eqref{eqn:longtermerrorrecursion} can also be decoupled into $G$ recursions, one for each group: \begin{equation} \label{eqn:longtermerrorrecursiongroup} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} = \mathcal{B}_m \, \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i-1}^\textrm{long} + \mathcal{A}_m^\mathsf{T} \mathcal{M}_m {\scriptstyle{\boldsymbol{\mathcal{S}}}}_{m,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i-1}) \end{equation} where \begin{align} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} & \triangleq {\mathrm{col}}\{ \widetilde{\bm{w}}_{k,i}^\textrm{long}; k \in \mathcal{G}_m \} \in \mathbb{R}^{N_m^g M \times 1} \\ \mathcal{B}_m & \triangleq \mathcal{A}_m^\mathsf{T} ( I_{N_m^g M} - \mathcal{M}_m \mathcal{H}_m ) \\ \label{eqn:bigHmdef} \mathcal{H}_m & \triangleq {\mathrm{diag}}\{ H_k; k \in \mathcal{G}_m \} \\ {\scriptstyle{\mathcal{W}}}_m^o & \triangleq {\mathrm{col}}\{ w_k^o; k \in \mathcal{G}_m \} \end{align} \begin{lemma}[Accuracy of long-term model] \label{lemma:approxerrorrecursion} For sufficiently small step-sizes, the evolution of the long-term model \eqref{eqn:longtermerrorrecursion} is close to the original error recursion \eqref{eqn:networkerrorrecursiondef} in MSE sense: \begin{equation} \label{eqn:longtermmodelgap} \limsup_{i\rightarrow \infty} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} \|^2 = O(\mu_{\max}^2) \end{equation} \end{lemma} \begin{proof} We call upon Theorem 10.2 from \cite[p. 
557]{Sayed14NOW} to conclude that the difference between each group error recursion \eqref{eqn:networkerrorrecursiongroup} and its long-term model \eqref{eqn:longtermerrorrecursiongroup} satisfies: \begin{equation} \label{eqn:longtermmodelgroupgap} \limsup_{i\rightarrow \infty} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i} - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} \|^2 = O(\mu_{\max}^2) \end{equation} for all $m$. It is then immediate to conclude that \eqref{eqn:longtermmodelgap} holds. \end{proof} \subsection{Low-Dimensional Model} \label{subsection:blockstructure} Lemma \ref{lemma:approxerrorrecursion} indicates that we can assess the MSE dynamics of the original network recursion \eqref{eqn:networkerrorrecursiondef} to first order in $\mu_{\max}$ by working with the long-term model \eqref{eqn:longtermerrorrecursion}. It turns out that the state variable of the long-term model can be split into two parts: one consisting of the \emph{centroids} of each group and the other consisting of in-group discrepancies. The details of this splitting are not important for our current discussion, but interested readers can refer to Sec. V of \cite{Chen13TIT} and Eq. (10.37) of \cite[p. 558]{Sayed14NOW} for a detailed explanation. Here we only use this fact to motivate the introduction of the low-dimensional model. Moreover, it also turns out that the first part, i.e., the part corresponding to the centroids, is the dominant component in the evolution of the error dynamics, and that the evolution of the two parts (centroids and in-group discrepancies) is weakly coupled. By retaining the first part, we can therefore arrive at a low-dimensional model that allows us to assess performance in closed form to first order in $\mu_{\max}$. To arrive at the low-dimensional model, we need to exploit the eigen-structure of the combination matrix $A$ or, equivalently, that of each $A_m$. 
Recall from the discussion prior to the statement of Theorem \ref{theorem:stability} that each $A_m$ is a primitive and left-stochastic matrix. By the Perron-Frobenius theorem \cite{BermanPF, Pillai05SPM, Sayed14NOW}, it follows that each $A_m$ has a simple eigenvalue at one with all other eigenvalues lying strictly inside the unit circle. Moreover, if we let $p_m^g \in \mathbb{R}^{N_m^g \times 1}$ denote the right eigenvector of $A_m$ that is associated with the eigenvalue at one, and normalize its entries to add up to one, then the same theorem ensures that all entries of $p_m^g$ will be positive: \begin{equation} \label{eqn:pqdef} p_m^g \triangleq {\mathrm{col}}\{ p_{m,k}^g \}_{k=1}^{N_m^g} \succ 0, \; A_m p_m^g = p_m^g, \; \ds{1}_{N_m^g}^\mathsf{T} p_m^g = 1 \end{equation} where $p_{m,k}^g$ denotes the $k$-th entry of $p_m^g$. This means that we can express each $A_m$ in the form (see \eqref{eqn:Aeigendecompdef} further ahead): \begin{equation} \label{eqn:Amrankoneexp} A_m = p_m^g \ds{1}_{N_m^g}^\mathsf{T} + V_{m,R} J_{m,\epsilon} V_{m,L}^\mathsf{T} \end{equation} for some eigenvector matrices $V_{m,R}$ and $V_{m,L}$, and where $J_{m,\epsilon}$ denotes the collection of the Jordan blocks with eigenvalues inside the unit circle and with their unit entries on the first lower sub-diagonal replaced by some arbitrarily small constant $0<\epsilon\ll1$. The first rank-one component on the RHS of \eqref{eqn:Amrankoneexp} represents the contribution by the largest eigenvalue of $A_m$, and this component will be used further ahead to describe the centroid of group $\mathcal{G}_m$. The network Perron eigenvector is obtained by stacking the group Perron eigenvectors $\{ p_m^g \}$: \begin{equation} \label{eqn:pdef} p \triangleq {\mathrm{col}}\{ p_1^g, \dots, p_G^g \} \triangleq {\mathrm{col}}\{ p_1, \dots, p_N \} \end{equation} where $p_k$ denotes the $k$-th entry of $p \in \mathbb{R}^{N \times 1}$. 
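The properties in \eqref{eqn:pqdef} are easy to verify numerically. The Python sketch below (assuming \texttt{numpy} is available; the combination matrix is a made-up example, not one prescribed by the algorithm) extracts the Perron eigenvector of a primitive left-stochastic matrix:

```python
import numpy as np

# A hypothetical primitive, left-stochastic combination matrix
# (all entries positive, columns summing to one).
A = np.array([[0.6, 0.2, 0.1],
              [0.3, 0.5, 0.4],
              [0.1, 0.3, 0.5]])

# Right eigenvector of A associated with the eigenvalue at one.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmin(np.abs(eigvals - 1.0)))
p = np.real(eigvecs[:, k])
p = p / p.sum()  # normalize the entries to add up to one
```

With this normalization, $p$ satisfies all three conditions in \eqref{eqn:pqdef}, and the remaining eigenvalues lie strictly inside the unit circle.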
According to the indexing rule from Definition \ref{def:index}, it is obvious that $p_m^g = {\mathrm{col}}\{ p_k; k \in \mathcal{G}_m \}$. Now, for each group $\mathcal{G}_m$, we introduce the low-dimensional (centroid) error recursion defined by (compare with \eqref{eqn:longtermerrorrecursiongroup}): \begin{equation} \label{eqn:lowdimensionerrorrecursiongroup} \widetilde{\bm{w}}_{m,i}^\textrm{low} = D_m \widetilde{\bm{w}}_{m,i-1}^\textrm{low} + (p_m^g \otimes I_M)^\mathsf{T} \mathcal{M}_m {\scriptstyle{\boldsymbol{\mathcal{S}}}}_{m,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i-1}) \end{equation} where $\widetilde{\bm{w}}_{m,i}^\textrm{low}$ is $M\times1$, and $D_m$ is $M \times M$ and defined by \begin{equation} \label{eqn:Dqdef} D_m \triangleq I_M - \mu_{\max} \bar{H}_m \end{equation} where \begin{align} \label{eqn:barHmdef} \bar{H}_m & \triangleq \mu_{\max}^{-1} (p_m^g \otimes I_M)^\mathsf{T} \mathcal{M}_m \mathcal{H}_m (\ds{1}_{N_m^g} \otimes I_M) \nonumber \\ & = \sum_{k\in\mathcal{G}_m} \frac{ p_k \mu_k}{\mu_{\max}} H_k = O(\mu_{\max}^0) \end{align} The matrix $\bar{H}_m$ is positive definite since there is at least one Hessian matrix in $\{ H_k; k \in \mathcal{G}_m \}$ that is positive definite according to Assumption \ref{ass:costfunctions}. 
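As a numerical illustration of \eqref{eqn:barHmdef} and of the positive-definiteness argument above, the Python sketch below forms $\bar{H}_m$ from hypothetical Perron weights, step-sizes, and Hessians, where only one of the $H_k$ is positive definite and the remaining ones are merely positive semi-definite:

```python
import numpy as np

M = 2  # parameter dimension (hypothetical)
# Per-agent Hessians: only the first is positive definite; the others
# are merely positive semi-definite. All values are made up.
H = [np.array([[2.0, 0.3], [0.3, 1.0]]),
     np.array([[1.0, 0.0], [0.0, 0.0]]),
     np.zeros((M, M))]
p = np.array([0.5, 0.3, 0.2])         # Perron weights: positive, sum to one
mu = np.array([0.010, 0.008, 0.010])  # per-agent step-sizes
mu_max = float(mu.max())

# barH_m = sum_k (p_k * mu_k / mu_max) * H_k
H_bar = sum((p[k] * mu[k] / mu_max) * H[k] for k in range(3))
```

Even though two of the summands are singular, the positive-definite term keeps the minimum eigenvalue of $\bar{H}_m$ strictly positive.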
We collect the low-dimensional recursions \eqref{eqn:lowdimensionerrorrecursiongroup} for all the groups into one recursion for the entire network by stacking them on top of each other: \begin{equation} \label{eqn:lowdimensionerrorrecursion} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} = \mathcal{D} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1}^\textrm{low} + \mathcal{P}^\mathsf{T} \mathcal{M} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i( {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} ) \end{equation} where \begin{align} \label{eqn:werrorlowdimdef} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} & \triangleq {\mathrm{col}}\{ \widetilde{\bm{w}}_{1,i}^\textrm{low}, \dots, \widetilde{\bm{w}}_{G,i}^\textrm{low} \} \in \mathbb{R}^{GM \times 1} \\ \label{eqn:bigDdef} \mathcal{D} & \triangleq {\mathrm{diag}}\{ D_1, \dots, D_G \} \in \mathbb{R}^{GM \times GM} \\ \label{eqn:bigPdef} \mathcal{P} & \triangleq {\mathrm{diag}}\{ p_1^g, \dots, p_G^g \} \otimes I_M \in \mathbb{R}^{NM \times GM} \end{align} Recursion \eqref{eqn:lowdimensionerrorrecursion} describes the joint dynamics of all the centroids (one for each group). Note that the dimension of $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ in \eqref{eqn:lowdimensionerrorrecursion} is $GM$, which is lower than the dimension, $NM$, of $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long}$ in \eqref{eqn:longtermerrorrecursion} or $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i$ in \eqref{eqn:networkerrorrecursiondef}, because $G \le N$ by Assumption \ref{ass:topology}. 
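The dimension reduction effected by $\mathcal{P}$ can be made concrete with a small Python sketch (the two-group configuration and all numerical values below are hypothetical). Because $\ds{1}_{N_m^g}^\mathsf{T} p_m^g = 1$, multiplying a lifted vector that is constant within each group by $\mathcal{P}^\mathsf{T}$ recovers exactly the $G$ group components:

```python
import numpy as np

M = 2  # parameter dimension (hypothetical)
# Perron vectors of two hypothetical groups (entries positive, sum to one).
p_groups = [np.array([0.5, 0.5]),
            np.array([0.2, 0.3, 0.5])]
G = len(p_groups)
N = sum(len(pg) for pg in p_groups)

# P = diag{p_1, ..., p_G} (x) I_M, an (N*M) x (G*M) matrix.
bd = np.zeros((N, G))
row = 0
for m, pg in enumerate(p_groups):
    bd[row:row + len(pg), m] = pg
    row += len(pg)
P = np.kron(bd, np.eye(M))

# A lifted vector that is constant within each group: col{ 1_{N_m} (x) x_m }.
x = [np.array([1.0, -2.0]), np.array([0.5, 3.0])]
z = np.concatenate([np.kron(np.ones(len(p_groups[m])), x[m]) for m in range(G)])

centroids = P.T @ z  # recovers col{x_1, x_2} because 1^T p_m = 1
```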
In order to measure the difference between the dynamics of the long-term model \eqref{eqn:longtermerrorrecursion} and the low-dimensional model \eqref{eqn:lowdimensionerrorrecursion}, we expand $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ in the following manner (compare with \eqref{eqn:werrorlowdimdef}): \begin{align} \label{eqn:zidef} \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} & \triangleq {\mathrm{col}}\{ \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{1,i}^\textrm{low}, \dots, \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{G,i}^\textrm{low} \} \in \mathbb{R}^{NM \times 1} \\ \label{eqn:zqidef} \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{low} & \triangleq \ds{1}_{N_m^g} \otimes \widetilde{\bm{w}}_{m,i}^\textrm{low} \in \mathbb{R}^{N_m^g M \times 1} \end{align} because $\sum_{m=1}^G N_m^g = N$ according to Assumption \ref{ass:topology}. \begin{lemma}[Accuracy of low-dimensional model] \label{lemma:lowrankapprox} For sufficiently small step-sizes, the low-dimensional model \eqref{eqn:lowdimensionerrorrecursion} is close to the network long-term model \eqref{eqn:longtermerrorrecursion} in the following sense: \begin{equation} \label{eqn:lowdimensionmodelgap} \limsup_{i\rightarrow \infty} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \|^2 = O(\mu_{\max}^2) \end{equation} where $\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ is given by \eqref{eqn:zidef} and is related to $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ via \eqref{eqn:zqidef}. \end{lemma} \begin{IEEEproof} See Appendix \ref{app:lowrankapprox}. 
\end{IEEEproof} \begin{lemma}[Low-dimensional error covariance] \label{lemma:lowrankerrorcov} For sufficiently small step-sizes, the covariance matrix for $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ satisfies \begin{equation} \label{eqn:ThetaiandTheta} \limsup_{i\rightarrow \infty} \| \mathbb{E} [\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} ] - \Theta \| = O(\mu_{\max}^{1+\gamma_s/2}) \end{equation} where $\Theta \in \mathbb{R}^{GM \times GM}$ is symmetric, positive-definite, and uniquely solves the discrete Lyapunov equation: \begin{equation} \label{eqn:ThetaDTLEdef} \Theta = \mathcal{D} \Theta \mathcal{D} + \mathcal{P}^\mathsf{T} \mathcal{M} \mathcal{R}_s \mathcal{M} \mathcal{P} \end{equation} \end{lemma} \begin{proof} See Appendix \ref{app:lowrankerrorcov}. \end{proof} \subsection{Steady-State MSE Performance} From Theorem \ref{theorem:stability}, we know that the limit superior of the MSE is bounded within $O(\mu_{\max})$. In order to define meaningful steady-state performance metrics, we consider the case in which the step-sizes approach zero asymptotically. Results obtained in this case are representative of operation in the slow adaptation regime (see Sec. 11.2 of \cite[pp. 581--583]{Sayed14NOW}). 
\begin{lemma}[Steady-state normalized MSD] \label{lemma:msdblock} The normalized total MSD of $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i$ in \eqref{eqn:networkerrorrecursiondef} is given by \begin{equation} \label{eqn:steadystateMSDdefblock} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \|^2 = \sum_{m = 1}^G \frac{N_m^g}{2 \mu_{\max}} {\mathrm{Tr}} \left[ \left( \sum_{k \in \mathcal{G}_m} p_k \mu_k H_k \right)^{-1} \left( \sum_{k \in \mathcal{G}_m} p_k^2 \mu_k^2 R_k \right) \right] \end{equation} where $H_k$ is from \eqref{eqn:Hkdef} and $R_k$ is the $k$-th block on the diagonal of $\mathcal{R}_s$ from \eqref{eqn:convergentcovariance} with block size $M \times M$. \end{lemma} \begin{proof} The normalized total MSD is the sum of the normalized MSD for each group. From Lemma 11.3 of \cite[p. 594]{Sayed14NOW}, the normalized MSD for each group $\mathcal{G}_m$ is given by \begin{equation} \label{eqn:steadystateMSDdefgroup} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i} \|^2 = \frac{N_m^g}{2 \mu_{\max}} {\mathrm{Tr}} \left[ \left( \sum_{k \in \mathcal{G}_m} p_k \mu_k H_k \right)^{-1} \left( \sum_{k \in \mathcal{G}_m} p_k^2 \mu_k^2 R_k \right) \right] \end{equation} Note that we calculate the \emph{normalized total} MSD rather than the \emph{average} MSD in \eqref{eqn:steadystateMSDdefblock} and \eqref{eqn:steadystateMSDdefgroup}. \end{proof} In order to examine the statistical properties of the error vector $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i$, we need to strengthen the result in Lemma \ref{lemma:msdblock} by evaluating the full normalized error covariance matrix of $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i$ in steady-state. 
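The trace expression \eqref{eqn:steadystateMSDdefblock} can be checked numerically against the continuous Lyapunov equation that governs the centroid covariance: if $\bar{H} \Phi + \Phi \bar{H} = \bar{R}$ with $\bar{H}$ symmetric positive-definite, then ${\mathrm{Tr}}(\Phi) = \frac{1}{2} {\mathrm{Tr}}(\bar{H}^{-1} \bar{R})$ by the cyclic property of the trace. The Python sketch below (assuming \texttt{scipy} is available; all Hessians, noise covariances, weights, and step-sizes are made up) confirms that the per-group formula agrees with this identity:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(1)
M, N_g = 3, 3  # parameter dimension and group size (hypothetical)

def rand_spd(scale=1.0):
    B = rng.standard_normal((M, M))
    return scale * (B @ B.T) + 0.1 * np.eye(M)

# Made-up per-agent quantities for one group.
p = np.array([0.40, 0.35, 0.25])        # Perron weights
mu = np.array([0.010, 0.008, 0.009])    # step-sizes
mu_max = float(mu.max())
H = [rand_spd() for _ in range(3)]      # Hessians H_k
R = [rand_spd(0.5) for _ in range(3)]   # noise covariances R_k

# Per-group trace formula for the normalized MSD.
Hsum = sum(p[k] * mu[k] * H[k] for k in range(3))
Rsum = sum(p[k] ** 2 * mu[k] ** 2 * R[k] for k in range(3))
msd = (N_g / (2.0 * mu_max)) * np.trace(np.linalg.solve(Hsum, Rsum))

# Cross-check via the Lyapunov solution: Hbar*Phi + Phi*Hbar = Rbar.
Hbar, Rbar = Hsum / mu_max, Rsum / mu_max ** 2
Phi = solve_continuous_lyapunov(Hbar, Rbar)
msd_check = N_g * np.trace(Phi)
```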
From Lemmas \ref{lemma:approxerrorrecursion} and \ref{lemma:lowrankapprox}, it is clear that the mean-square dynamics of the original error recursion \eqref{eqn:networkerrorrecursiondef} can be well approximated by the low-dimensional model \eqref{eqn:lowdimensionerrorrecursion}. It was shown in Eq. (10.78) of \cite[p. 563]{Sayed14NOW} that the variances of the centroids $\{\widetilde{\bm{w}}_{m,i}^\textrm{low}\}$ are in the order of $\mu_{\max}$ in steady-state, which implies that \begin{equation} \label{eqn:wilowdimnormalizedvardef} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow\infty} \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \|^2 = O(\mu_{\max}^0) \end{equation} Since the induced-2 norm of the covariance matrix of any random vector is always bounded by its variance, i.e., $\| \mathbb{E} \bm{x} \bm{x}^\mathsf{T} \| \le \mathbb{E} \| \bm{x} \|^2$ by Jensen's inequality, it follows from \eqref{eqn:wilowdimnormalizedvardef} that the normalized covariance matrix of $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ is finite in steady-state. Moreover, since Lemma \ref{lemma:lowrankerrorcov} applies to any positive value of $\mu_{\max}$ as long as it is small enough to ensure stability, we can take the limit of $\mu_{\max}$ in \eqref{eqn:ThetaiandTheta} by letting it approach zero asymptotically. 
That is, \begin{equation} \label{eqn:PiinftyequalPhi} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} \| \mu_{\max}^{-1} \mathbb{E} [\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} ] - \Phi \| = 0 \end{equation} where \begin{equation} \label{eqn:PhiandTheta} \Phi \triangleq \lim_{\mu_{\max} \rightarrow 0} ( \mu_{\max}^{-1} \Theta ) \end{equation} and $\Theta$ is the solution to \eqref{eqn:ThetaDTLEdef}. Due to \eqref{eqn:wilowdimnormalizedvardef} and \eqref{eqn:PiinftyequalPhi}, $\Phi$ is in the order of $\mu_{\max}^0$, i.e., $\| \Phi \| = O(\mu_{\max}^0)$. In fact, by introducing $\Phi_i \triangleq \mu_{\max}^{-1} \mathbb{E} [\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} ]$ and using the triangle inequality, we have \begin{align} \label{eqn:Phiorder1} \| \Phi \| & = \| \Phi - \Phi_i + \Phi_i \| \le \| \Phi - \Phi_i \| + \| \Phi_i \| \\ \label{eqn:Phiorder2} \| \Phi_i \| & = \| \Phi_i - \Phi + \Phi \| \le \| \Phi_i - \Phi \| + \| \Phi \| \end{align} Taking $i \rightarrow \infty$ and $\mu_{\max} \rightarrow 0$ for both \eqref{eqn:Phiorder1} and \eqref{eqn:Phiorder2} yields: \begin{align} \label{eqn:Phiorder1_2} \| \Phi \| & \le \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} \| \Phi_i \| \\ \label{eqn:Phiorder2_2} \| \Phi \| & \ge \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} \| \Phi_i \| \end{align} by using \eqref{eqn:PiinftyequalPhi}. 
From \eqref{eqn:Phiorder1_2} and \eqref{eqn:Phiorder2_2}, we get \begin{equation} \label{eqn:Phibound} \| \Phi \| = \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} \| \Phi_i \| \end{equation} Since $\Phi_i \in \mathbb{R}^{GM \times GM}$ is positive semi-definite, it holds that \begin{equation} \label{eqn:Phiibounds} (GM)^{-1} {\mathrm{Tr}}(\Phi_i) \le \| \Phi_i \| \le {\mathrm{Tr}}(\Phi_i) \end{equation} where we used the facts that, for any positive semi-definite matrix $X \succeq 0$: (i) all the eigenvalues of $X$ are nonnegative; (ii) $\| X \|$ is equal to the largest eigenvalue of $X$; and (iii) ${\mathrm{Tr}}(X)$ is equal to the sum of all the eigenvalues of $X$. Moreover, \begin{equation} \label{eqn:Phiibounds2} {\mathrm{Tr}}(\Phi_i) = {\mathrm{Tr}}( \mu_{\max}^{-1} \mathbb{E} [\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} ] ) \!= \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \|^2 \!\! \end{equation} Using \eqref{eqn:wilowdimnormalizedvardef}, it follows from \eqref{eqn:Phiibounds} and \eqref{eqn:Phiibounds2} that \begin{equation} \label{eqn:Phiibounds3} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} \| \Phi_i \| = O(\mu_{\max}^0) \end{equation} Substituting \eqref{eqn:Phiibounds3} into \eqref{eqn:Phibound} yields the desired result, namely, $\| \Phi \| = O(\mu_{\max}^0)$. Then, according to \eqref{eqn:PhiandTheta}, $\Phi$ is the unique solution to equation \eqref{eqn:ThetaDTLEdef} when $\mu_{\max} \rightarrow 0$ asymptotically. 
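The limiting relation between $\Theta$ and $\Phi$ can be observed numerically. In the Python sketch below (assuming \texttt{scipy} is available; the positive-definite matrices standing in for $\bar{\mathcal{H}}$ and $\bar{\mathcal{R}}$ introduced next are arbitrary), the solution of the discrete Lyapunov equation \eqref{eqn:ThetaDTLEdef} with $\mathcal{D} = I - \mu_{\max} \bar{\mathcal{H}}$, scaled by $\mu_{\max}^{-1}$, approaches the solution of the continuous Lyapunov equation $\bar{\mathcal{H}} \Phi + \Phi \bar{\mathcal{H}} = \bar{\mathcal{R}}$ as $\mu_{\max}$ decreases:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 4  # dimension of the toy matrices (hypothetical)

def rand_spd():
    B = rng.standard_normal((n, n))
    return B @ B.T + 0.5 * np.eye(n)

# Arbitrary positive-definite stand-ins for Hbar and Rbar.
Hbar, Rbar = rand_spd(), rand_spd()

# Continuous Lyapunov equation: Hbar*Phi + Phi*Hbar = Rbar.
Phi = solve_continuous_lyapunov(Hbar, Rbar)

# Discrete Lyapunov equation Theta = D*Theta*D + mu^2*Rbar with D = I - mu*Hbar;
# the scaled solution Theta/mu should approach Phi as mu shrinks.
gaps = []
for mu in (1e-3, 1e-4):
    D = np.eye(n) - mu * Hbar
    Theta = solve_discrete_lyapunov(D, mu ** 2 * Rbar)
    gaps.append(np.linalg.norm(Theta / mu - Phi) / np.linalg.norm(Phi))
```

The gap shrinks roughly linearly in $\mu_{\max}$, consistent with the expansion $\bar{\mathcal{H}} (\mu_{\max}^{-1}\Theta) + (\mu_{\max}^{-1}\Theta) \bar{\mathcal{H}} = \bar{\mathcal{R}} + \mu_{\max} \bar{\mathcal{H}} (\mu_{\max}^{-1}\Theta) \bar{\mathcal{H}}$.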
Introduce two $GM \times GM$ matrices: \begin{align} \label{eqn:barbigHdef} \bar{\mathcal{H}} & \triangleq {\mathrm{diag}}\{ \bar{H}_1, \dots, \bar{H}_G \} = O(\mu_{\max}^0) \\ \label{eqn:Rsdef} \bar{\mathcal{R}} & \triangleq \mu_{\max}^{-2} \mathcal{P}^\mathsf{T} \mathcal{M} \mathcal{R}_s \mathcal{M} \mathcal{P} = O(\mu_{\max}^0) \end{align} where $\bar{H}_m$ is from \eqref{eqn:barHmdef} and $\mathcal{R}_s$ is from \eqref{eqn:convergentcovariance}. It is easy to verify that $\bar{\mathcal{H}}$ and $\bar{\mathcal{R}}$ are symmetric and positive-definite according to Assumptions \ref{ass:costfunctions} and \ref{ass:gradienterrors}. From \eqref{eqn:bigDdef}, \eqref{eqn:barbigHdef}, and \eqref{eqn:Dqdef}, we get \begin{equation} \label{eqn:bigDandbigbarH} \mathcal{D} = I_{GM} - \mu_{\max} \bar{\mathcal{H}} \end{equation} Using \eqref{eqn:PhiandTheta}--\eqref{eqn:bigDandbigbarH}, equation \eqref{eqn:ThetaDTLEdef} reduces to \begin{equation} \label{eqn:ThetaDTLEdef2} \bar{\mathcal{H}} \Phi + \Phi \bar{\mathcal{H}} = \bar{\mathcal{R}} + \mu_{\max} \bar{\mathcal{H}} \Phi \bar{\mathcal{H}} \end{equation} Since $\bar{\mathcal{H}}$ and $\bar{\mathcal{R}}$ are constant matrices, and $\Phi$ is finite, the last term on the RHS of \eqref{eqn:ThetaDTLEdef2} disappears as $\mu_{\max} \rightarrow 0$ asymptotically. 
Therefore, we conclude that $\Phi$ is the unique solution to the continuous Lyapunov equation: \begin{equation} \label{eqn:continuoustimeLyapunovEqn} \bar{\mathcal{H}} \Phi + \Phi \bar{\mathcal{H}} = \bar{\mathcal{R}} \end{equation} Let us define the \emph{normalized} network error covariance matrix for $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i$ from \eqref{eqn:networkerrorrecursiondef} by \begin{equation} \label{eqn:bigPidef} \Pi_i \triangleq \mu_{\max}^{-1} \mathbb{E} ( \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\mathsf{T} ) \end{equation} \begin{theorem}[Block structure] \label{theorem:blockstructure} In steady-state, and as the step-sizes approach zero asymptotically, the normalized network error covariance matrix $\Pi_i$ in \eqref{eqn:bigPidef} satisfies \begin{equation} \label{eqn:Piinftyoblockstructure} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \| \Pi_i - \Pi \| = 0 \end{equation} where \begin{equation} \label{eqn:Pinetworkdef} \Pi \triangleq \begin{bmatrix} ( \ds{1}_{N_1^g} \ds{1}_{N_1^g}^\mathsf{T} ) \otimes \Phi_{1,1} \!&\! \dots \!&\! ( \ds{1}_{N_1^g} \ds{1}_{N_G^g}^\mathsf{T} ) \otimes \Phi_{1,G} \\ \vdots \!&\! \ddots \!&\! \vdots \\ ( \ds{1}_{N_G^g} \ds{1}_{N_1^g}^\mathsf{T} ) \otimes \Phi_{G,1} \!&\! \dots \!&\! ( \ds{1}_{N_G^g} \ds{1}_{N_G^g}^\mathsf{T} ) \otimes \Phi_{G,G} \\ \end{bmatrix} \end{equation} and $\Phi_{m, r}$ denotes the $(m, r)$-th block of $\Phi$ from \eqref{eqn:continuoustimeLyapunovEqn} with block size $M \times M$. \end{theorem} \begin{proof} See Appendix \ref{app:block}. \end{proof} \section{Error Probability Analysis for Clustering} Using the results from the previous section, we now move on to assess the error probabilities for the hypothesis testing problem \eqref{eqn:decisionrule}. 
To do so, we need to determine the probability distribution of the decision statistic that is generated by recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup}. \subsection{Asymptotic Joint Distribution of Estimation Errors} \label{subsection:normalpdf} Using \eqref{eqn:bigDandbigbarH}, we rewrite the low-dimensional model \eqref{eqn:lowdimensionerrorrecursion} as \begin{equation} \label{eqn:lowdimensionerrorrecursionnew} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} = \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1}^\textrm{low} - \mu_{\max} \bar{\mathcal{H}} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1}^\textrm{low} + \mu_{\max} \bar{\bm{s}}_i \end{equation} where $\bar{\mathcal{H}}$ is from \eqref{eqn:barbigHdef} and \begin{equation} \label{eqn:barsidef} \bar{\bm{s}}_i \triangleq \mu_{\max}^{-1} \mathcal{P}^\mathsf{T} \mathcal{M} {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i( {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} ) \in \mathbb{R}^{GM \times 1} \end{equation} \begin{lemma}[Rate of weak convergence] \label{lemma:normallowdimensional} The normalized sequence, $\{\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} /\sqrt{\mu_{\max}}; i \ge 0\}$, from \eqref{eqn:lowdimensionerrorrecursionnew} converges in distribution as $i \rightarrow \infty$ and $\mu_{\max} \rightarrow 0$ to the Gaussian random variable: \begin{equation} \label{eqn:xidef} \bm{\xi} \triangleq {\mathrm{col}}\{ \bm{\xi}_1, \dots, \bm{\xi}_G \} \sim \mathbb{N}(0, \Phi) \end{equation} where $\bm{\xi}_m \in \mathbb{R}^{M \times 1}$ for all $m$, and $\Phi \in \mathbb{R}^{GM \times GM}$ is the unique solution to the Lyapunov equation \eqref{eqn:continuoustimeLyapunovEqn}. \end{lemma} \begin{proof} See Appendix \ref{app:normal}. 
\end{proof} In the sequel we establish the main result that the distribution of the normalized error sequence from \eqref{eqn:networkerrorrecursiondef}, $\{\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i/\sqrt{\mu_{\max}}; i \ge 0\}$, asymptotically approaches a Gaussian distribution. According to Definition 4 from \cite[p. 253]{Shiryaev80}, a random sequence $\{\bm{\zeta}_i; i\ge 0\}$ converges in distribution to some random variable $\bm{\zeta}$ if, and only if, \begin{equation} \lim_{i \rightarrow \infty} \mathbb{E} \left| f(\bm{\zeta}_i) - f(\bm{\zeta}) \right| = 0 \end{equation} for \emph{any} bounded continuous function $f(\cdot)$. We use this fact together with the following lemma to establish Theorem \ref{theorem:normaldistribution} further ahead. \begin{lemma}[Weak convergence] \label{lemma:convergence} Let $\{\bm{\zeta}_i; i\ge 0 \}$ and $\{\bm{\eta}_i; i \ge 0\}$ be two random sequences that are dependent on the parameter $\mu_{\max}$. If $\{\bm{\zeta}_i; i\ge 0\}$ approaches $\{\bm{\eta}_i; i\ge 0\}$ in mean-square sense: \begin{equation} \label{eqn:convergencemoments} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \mathbb{E} \| \bm{\zeta}_i - \bm{\eta}_i \|^2 = 0 \end{equation} and the variances of $\{\bm{\zeta}_i\}$ converge in the following sense: \begin{equation} \label{eqn:convergencesigma} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \mathbb{E} \| \bm{\zeta}_i \|^2 = \sigma^2 \end{equation} then it holds for any bounded continuous function $f(\cdot)$ that \begin{equation} \label{eqn:convergenceweakly} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \mathbb{E} | f(\bm{\zeta}_i) - f(\bm{\eta}_i) | = 0 \end{equation} \end{lemma} \begin{IEEEproof} See Appendix \ref{app:convergence}. 
\end{IEEEproof} \begin{theorem}[Asymptotic normality] \label{theorem:normaldistribution} As $i \rightarrow \infty$ and $\mu_{\max} \rightarrow 0$, the normalized error sequence from \eqref{eqn:networkerrorrecursiondef}, $\{\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i/\sqrt{\mu_{\max}}; i \ge 0\}$, converges in distribution \emph{close} to the Gaussian random variable: \begin{equation} \label{eqn:zetadef} \bm{\zeta} \triangleq {\mathrm{col}}\{ \ds{1}_{N_1^g} \otimes \bm{\xi}_1, \dots, \ds{1}_{N_G^g} \otimes \bm{\xi}_G \} \sim \mathbb{N}(0, \Pi) \end{equation} in the following sense: \begin{equation} \label{eqn:weakconvergence} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \mathbb{E} \left| f\left(\frac{\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i}{\sqrt{\mu_{\max}}}\right) - f(\bm{\zeta}) \right| = 0 \end{equation} for any bounded continuous function $f(\cdot):\mathbb{R}^{NM \times 1} \mapsto \mathbb{R}$, where $\{ \bm{\xi}_m\}$ are from \eqref{eqn:xidef}, and $\Pi$ is from \eqref{eqn:Pinetworkdef}. 
\end{theorem} \begin{proof} Using the triangle inequality, we have \begin{align} \label{eqn:weakconvergence2} \mathbb{E} \left| f\left( \frac{ \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i }{ \sqrt{\mu_{\max}} } \right) - f(\bm{\zeta}) \right| & \le \mathbb{E} \left| f\left( \frac{ \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i }{ \sqrt{\mu_{\max}} } \right) - f\left( \frac{ \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} }{ \sqrt{\mu_{\max}} } \right) \right| + \mathbb{E} \left| f\left( \frac{ \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} }{ \sqrt{\mu_{\max}} } \right) - f\left( \frac{ \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} }{ \sqrt{\mu_{\max}} } \right) \right| \nonumber \\ & \qquad + \mathbb{E} \left| f\left( \frac{ \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} }{ \sqrt{\mu_{\max}} } \right) - f( \bm{\zeta} ) \right| \end{align} where $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long}$ is from the long-term model \eqref{eqn:longtermerrorrecursion}, and $\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ is from \eqref{eqn:zidef} and is related to the low-dimensional model \eqref{eqn:lowdimensionerrorrecursion}. By Lemma \ref{lemma:msdblock}, the variances of the sequence $\{ \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i / \sqrt{\mu_{\max}}; i \ge 0 \}$ converge to its normalized MSD in \eqref{eqn:steadystateMSDdefblock} in a sense similar to \eqref{eqn:convergencesigma}. Using Lemma \ref{lemma:approxerrorrecursion}, it is clear that $\{ \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i / \sqrt{\mu_{\max}}; i \ge 0 \}$ approaches $\{ \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} / \sqrt{\mu_{\max}}; i \ge 0 \}$ in a sense similar to \eqref{eqn:convergencemoments}. 
Therefore, by calling upon Lemma \ref{lemma:convergence}, we conclude that the limit superior of the first term on the RHS of \eqref{eqn:weakconvergence2} vanishes. Likewise, using Lemmas \ref{lemma:approxerrorrecursion} and \ref{lemma:msdblock}, it can be verified that the variances of the sequence $\{ \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} / \sqrt{\mu_{\max}}; i \ge 0 \}$ also converge to the same normalized MSD in \eqref{eqn:steadystateMSDdefblock}. Therefore, from Lemmas \ref{lemma:lowrankapprox} and \ref{lemma:convergence}, the limit superior of the second term on the RHS of \eqref{eqn:weakconvergence2} vanishes. The limit superior of the third term vanishes since $\{ \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} / \sqrt{\mu_{\max}}; i \ge 0 \}$ converges in distribution to $\bm{\zeta}$, which follows from Lemma \ref{lemma:normallowdimensional}. Therefore, the limit superior of the RHS of \eqref{eqn:weakconvergence2} vanishes when $i \rightarrow \infty$ and $ \mu_{\max} \rightarrow 0$. \end{proof} Theorem \ref{theorem:normaldistribution} allows us to approximate the distribution of $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i/\sqrt{\mu_{\max}}$ by the Gaussian distribution $\mathbb{N}(0, \Pi)$ for large enough $i$ and small enough $\mu_{\max}$. \subsection{Statistical Decision on Clustering} In Theorem \ref{theorem:normaldistribution}, we established that for large enough $i$ and for sufficiently small $\mu_{\max}$, the joint distribution of the individual estimators $\{ \bm{w}_{k,i}; k = 1, 2, \dots, N \}$ can be well approximated by a Gaussian distribution \eqref{eqn:zetadef}. 
Therefore, the marginal distribution for any pair of estimators, say, $\bm{w}_{k,i}$ and $\bm{w}_{\ell,i}$, can be well approximated by the Gaussian distribution: \begin{equation} \label{eqn:steadystatedistributiontwoagents} \begin{bmatrix} \bm{w}_{k,i} \\ \bm{w}_{\ell,i} \\ \end{bmatrix} \sim \mathbb{N}\left( \begin{bmatrix} w_k^o \\ w_\ell^o \\ \end{bmatrix}, \mu_{\max} \begin{bmatrix} \Pi_{k,k} & \Pi_{k,\ell} \\ \Pi_{\ell,k} & \Pi_{\ell,\ell} \\ \end{bmatrix} \right) \end{equation} where $w_k^o$ and $w_\ell^o$ are their individual minimizers, and $\Pi_{k,\ell}$ denotes the $(k,\ell)$-th block of $\Pi$ with block size $M \times M$. Without loss of generality, let us consider the scenario where agent $k$ is from group $\mathcal{G}_m$ in cluster $\mathcal{C}_q$ and agent $\ell$ is from group $\mathcal{G}_n$ in cluster $\mathcal{C}_r$, i.e., $k \in \mathcal{G}_m \subseteq \mathcal{C}_q$ and $\ell \in \mathcal{G}_n \subseteq \mathcal{C}_r$. Then, we have from Definition \ref{def:cluster} that \begin{equation} \label{eqn:meankellandqr} w_k^o = w_q^\star, \qquad w_\ell^o = w_r^\star \end{equation} From Theorem \ref{theorem:blockstructure}, the covariance matrix $\Pi$ possesses the block structure shown in \eqref{eqn:Pinetworkdef}. 
Using \eqref{eqn:Pinetworkdef}, and noticing that $k \in \mathcal{G}_m$ and $\ell \in \mathcal{G}_n$, it is obvious that \begin{equation} \label{eqn:varkellandqr} \Pi_{k,k} = \Phi_{m,m}, \; \Pi_{k,\ell} = \Phi_{m,n}, \; \Pi_{\ell,k} = \Phi_{n,m}, \; \Pi_{\ell,\ell} = \Phi_{n,n} \end{equation} Then, it follows from \eqref{eqn:steadystatedistributiontwoagents}--\eqref{eqn:varkellandqr} that \begin{equation} \label{eqn:steadystatedistributiontwoagentsGroup} \begin{bmatrix} \bm{w}_{k,i} \\ \bm{w}_{\ell,i} \\ \end{bmatrix} \sim \mathbb{N}\left( \begin{bmatrix} w_q^\star \\ w_r^\star \\ \end{bmatrix}, \mu_{\max} \begin{bmatrix} \Phi_{m,m} & \Phi_{m,n} \\ \Phi_{n,m} & \Phi_{n,n} \\ \end{bmatrix} \right) \end{equation} which means that the mean and covariance of the joint distribution for any pair of agents $k$ and $\ell$ depend only on their groups. In other words, for any two agents $k_1$ and $k_2$ from the same group $\mathcal{G}_m$, the joint distribution of $\{k_1, \ell\}$ and the joint distribution of $\{k_2, \ell\}$ will be well approximated by the same Gaussian distribution in \eqref{eqn:steadystatedistributiontwoagentsGroup}. Therefore, if both agents $k_1$ and $k_2$ need to decide whether agent $\ell$ is in the same cluster as they are, then they will have the same error probabilities in the hypothesis test \eqref{eqn:decisionrule}. Based on \eqref{eqn:steadystatedistributiontwoagentsGroup}, the hypothesis test problem for clustering now becomes that of determining whether or not the two (near) Gaussian random vectors $\bm{w}_{k,i}$ and $\bm{w}_{\ell,i}$ have the same mean. Suppose the samples from the two variables are paired. The difference \begin{equation} \label{eqn:dkelldef} \bm{d}_{k,\ell} \triangleq \bm{w}_{k,i} - \bm{w}_{\ell,i} \end{equation} serves as a sufficient statistic \cite{Anderson58}. 
Since $\bm{w}_{k,i}$ and $\bm{w}_{\ell,i}$ are jointly Gaussian in \eqref{eqn:steadystatedistributiontwoagentsGroup}, their difference $\bm{d}_{k,\ell}$ is also Gaussian: \begin{equation} \label{eqn:dkldistribution} \bm{d}_{k,\ell} \sim \mathbb{N}( d_{q,r}^\star, \mu_{\max} \Delta_{m,n}) \end{equation} where \begin{align} \label{eqn:meandkldef} d_{q,r}^\star & \triangleq w_q^\star - w_r^\star \\ \label{eqn:vardkldef} \Delta_{m,n} & \triangleq \Phi_{m,m} + \Phi_{n,n} - \Phi_{m,n} - \Phi_{n,m} \ge 0 \end{align} If the agents $k$ and $\ell$ are from the same cluster such that $q = r$, then hypothesis $\mathbb{H}_0$ in \eqref{eqn:decisionrule} is true and $d_{q,r}^\star = 0$; otherwise, hypothesis $\mathbb{H}_1$ in \eqref{eqn:decisionrule} is true and $d_{q,r}^\star \ne 0$. The hypothesis test for clustering then reduces to testing whether or not the difference $\bm{d}_{k,\ell}$ in \eqref{eqn:dkelldef} is zero mean \emph{without} knowing its covariance matrix $\mu_{\max} \Delta_{m,n}$. If $N_\textrm{sam}$ independent samples of $\bm{d}_{k,\ell}$ are available for testing, where $N_\textrm{sam} > M$, and $\Delta_{m,n}$ is non-singular, then according to the Neyman-Pearson criterion \cite{Poor98}, the likelihood ratio test is given by \cite[p. 164]{Anderson58} \begin{equation} \label{eqn:Tsquaretest} \bm{T}_{k,\ell}^2 \triangleq N_\textrm{sam} \bar{\bm{x}}^\mathsf{T} \bm{S}^{-1} \bar{\bm{x}} \overset{\mathbb{H}_0}{\underset{\mathbb{H}_1}{\lessgtr}} \theta_{k,\ell} \end{equation} where $\bm{T}_{k,\ell}^2$ is called Hotelling's T-square statistic, $\bar{\bm{x}}$ is the sample mean of $\bm{d}_{k,\ell}$, $\bm{S}$ is the unbiased sample covariance matrix, and $\theta_{k,\ell}$ is the predefined threshold from \eqref{eqn:decisionrule}. 
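A minimal Python sketch of the test \eqref{eqn:Tsquaretest} follows (assuming \texttt{scipy} is available; the dimension, sample size, level $\alpha$, and covariance are made-up values). The threshold $\theta_{k,\ell}$ is set from the central F-distribution so that, under $\mathbb{H}_0$, the false-alarm rate is approximately the prescribed level:

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(3)
M, N_sam, alpha = 2, 20, 0.05  # dimension, sample size, level (all hypothetical)

# Under H0, ((N_sam - M) / ((N_sam - 1) * M)) * T^2 ~ F(M, N_sam - M),
# so the threshold for T^2 at level alpha is:
f_crit = f_dist.ppf(1.0 - alpha, M, N_sam - M)
theta = (N_sam - 1) * M / (N_sam - M) * f_crit

def hotelling_T2(samples):
    """T^2 = N_sam * xbar^T S^{-1} xbar from i.i.d. samples of d_{k,l}."""
    xbar = samples.mean(axis=0)
    S = np.cov(samples, rowvar=False)  # unbiased sample covariance
    return float(len(samples) * xbar @ np.linalg.solve(S, xbar))

# Monte Carlo estimate of the false-alarm rate under H0 (zero mean).
L = np.linalg.cholesky(np.array([[1.0, 0.3], [0.3, 0.5]]))  # arbitrary covariance
trials, rejections = 4000, 0
for _ in range(trials):
    d = rng.standard_normal((N_sam, M)) @ L.T
    rejections += hotelling_T2(d) > theta
false_alarm = rejections / trials
```

Under $\mathbb{H}_0$, the empirical rejection rate concentrates around the prescribed level $\alpha$.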
The scaled T-square statistic $\frac{ N_\textrm{sam} - M }{ (N_\textrm{sam} - 1) M } \cdot \bm{T}_{k,\ell}^2$ has a non-central F-distribution with $M$ and $N_\textrm{sam} - M$ degrees of freedom and non-centrality parameter $N_\textrm{sam} \mu_{\max}^{-1} (d_{q,r}^\star)^\mathsf{T} \Delta_{m,n}^{-1} d_{q,r}^\star$ \cite[p. 480]{Johnson95v2}. When $d_{q,r}^\star = 0$, it reduces to a central F-distribution \cite[p. 322]{Johnson95v2}. However, because stochastic iterative algorithms employ very small step-sizes, sampling their steady-state estimators over time does not produce independent samples. In many scenarios only one sample is available for testing, in which case the sample mean reduces to the sample itself and the sample covariance matrix is not even available. In order to carry out the hypothesis test, we replace the sample covariance matrix by the identity matrix. Then, Hotelling's T-square test \eqref{eqn:Tsquaretest} becomes \begin{equation} \label{eqn:deltakelldef} \bm{\delta}_{k,\ell}^2 \triangleq \| \bm{d}_{k,\ell} \|^2 \overset{\mathbb{H}_0}{\underset{\mathbb{H}_1}{\lessgtr}} \theta_{k,\ell} \end{equation} where we reused $\bm{d}_{k,\ell}$ to denote the only available sample for testing. The decision statistic $\bm{\delta}_{k,\ell}^2$ is a quadratic form of the (near) Gaussian random vector $\bm{d}_{k,\ell}$.
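In code, the resulting single-sample decision rule \eqref{eqn:deltakelldef} is straightforward; the sketch below uses hypothetical steady-state iterates and an arbitrary threshold, purely for illustration:

```python
def cluster_decision(w_k, w_l, theta):
    """Single-sample test with the identity-covariance surrogate:
    accept H0 (same cluster) iff ||w_k - w_l||^2 < theta."""
    delta2 = sum((a - b) ** 2 for a, b in zip(w_k, w_l))
    return delta2, delta2 < theta

# Hypothetical iterates: the first two agents hover near the same
# minimizer, the third near a different one.
d2_same, same = cluster_decision([1.01, 2.02], [0.98, 1.97], theta=0.25)
d2_diff, diff = cluster_decision([1.01, 2.02], [3.00, 0.50], theta=0.25)
```

The whole clustering mechanism reduces to such pairwise comparisons between neighboring agents.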
Using \eqref{eqn:dkldistribution}, the mean of $\bm{\delta}_{k,\ell}^2$ is given by \begin{equation} \label{eqn:chikl2testmean} \mathbb{E} \bm{\delta}_{k,\ell}^2 = \mathbb{E} \| \bm{d}_{k,\ell} \|^2 = \mathbb{E} {\mathrm{Tr}}( \bm{d}_{k,\ell} \bm{d}_{k,\ell}^\mathsf{T} ) = {\mathrm{Tr}}( \mathbb{E} \bm{d}_{k,\ell} \bm{d}_{k,\ell}^\mathsf{T} ) = \| d_{q,r}^\star \|^2 + \mu_{\max} {\mathrm{Tr}}(\Delta_{m,n}) \end{equation} and the variance of $\bm{\delta}_{k,\ell}^2$ is given by (see Appendix \ref{app:moments}) \begin{equation} \label{eqn:chikl2testvar} \mathrm{Var}(\bm{\delta}_{k,\ell}^2) = \mathbb{E} \| \bm{d}_{k,\ell} \|^4 - ( \mathbb{E} \| \bm{d}_{k,\ell} \|^2 )^2 = 4 \mu_{\max} \| d_{q,r}^\star \|_{\Delta_{m,n}}^2 + 2 \mu_{\max}^2 {\mathrm{Tr}}(\Delta_{m,n}^2) \end{equation} It is seen that the mean of $\bm{\delta}_{k,\ell}^2$ is dominated by $\| d_{q,r}^\star \|^2$ for sufficiently small step sizes. Since the variance of $\bm{\delta}_{k,\ell}^2$ is on the order of $\mu_{\max}$, according to Chebyshev's inequality \cite[p. 47]{Shiryaev80}, we have \begin{equation} {\mathbb{P}}[ | \bm{\delta}_{k,\ell}^2 - \mathbb{E} \bm{\delta}_{k,\ell}^2 | \ge c ] \le \frac{\mathrm{Var}(\bm{\delta}_{k,\ell}^2)}{c^2} = O(\mu_{\max}) \end{equation} for any constant $c > 0$. Therefore, for sufficiently small step sizes, the probability mass of $\bm{\delta}_{k,\ell}^2$ will concentrate tightly around $\mathbb{E} \bm{\delta}_{k,\ell}^2$. When hypothesis $\mathbb{H}_0$ is true, we have $d_{q,r}^\star = 0$ and $\mathbb{E} \bm{\delta}_{k,\ell}^2 = \mu_{\max} {\mathrm{Tr}}(\Delta_{m,n}) = O(\mu_{\max}) \approx 0$; when hypothesis $\mathbb{H}_1$ is true, we have $d_{q,r}^\star \neq 0$ and $\mathbb{E} \bm{\delta}_{k,\ell}^2 = \| d_{q,r}^\star \|^2 + O(\mu_{\max}) \approx \| d_{q,r}^\star \|^2$.
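This concentration behavior is easy to verify by simulation. The following sketch draws samples of $\bm{d}_{k,\ell}$ for a hypothetical $\Delta_{m,n}$ and $d_{q,r}^\star$ (all numerical values below are illustrative assumptions) and compares the empirical mean of $\bm{\delta}_{k,\ell}^2$ with \eqref{eqn:chikl2testmean} for two step-sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4
delta_cov = np.diag([1.0, 2.0, 0.5, 1.5])     # hypothetical Delta_{m,n}
d_star = np.array([1.0, 0.0, -1.0, 0.5])      # hypothetical d_{q,r}^*

def mean_var_delta2(mu, n_trials=200_000):
    """Empirical mean/variance of delta^2 = ||d||^2 with
    d ~ N(d_star, mu * delta_cov)."""
    L = np.linalg.cholesky(delta_cov)
    d = d_star + np.sqrt(mu) * rng.normal(size=(n_trials, M)) @ L.T
    delta2 = np.sum(d * d, axis=1)
    return float(delta2.mean()), float(delta2.var())

m_small, v_small = mean_var_delta2(0.01)
m_large, v_large = mean_var_delta2(0.1)
theo_small = float(d_star @ d_star) + 0.01 * float(np.trace(delta_cov))
```

Shrinking the step-size leaves the mean near $\|d_{q,r}^\star\|^2$ while the variance drops roughly linearly in $\mu_{\max}$, in agreement with the expressions above.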
That is, the probability mass of $\bm{\delta}_{k,\ell}^2$ under $\mathbb{H}_0$ concentrates near $0$ while the probability mass of $\bm{\delta}_{k,\ell}^2$ under $\mathbb{H}_1$ concentrates near $\| d_{q,r}^\star \|^2 = \| w_q^\star - w_r^\star \|^2 > 0$ (which is a constant that is independent of $\mu_{\max}$). Obviously, the threshold $\theta_{k,\ell}$ should be chosen between 0 and $\| d_{q,r}^\star \|^2$. By doing so, the Type-I error will correspond to the right tail probability of $\bm{\delta}_{k,\ell}^2$ when $d_{q,r}^\star = 0$ (see \eqref{eqn:typeIerrordef} further ahead) and the Type-II error will correspond to the left tail probability of $\bm{\delta}_{k,\ell}^2$ when $d_{q,r}^\star \neq 0$ (see \eqref{eqn:typeIIerrordef} further ahead). In order to examine the statistical properties of $\bm{\delta}_{k,\ell}^2$ and to perform the analysis for error probabilities, let us introduce the eigen-decomposition of $\Delta_{m,n}$ in \eqref{eqn:vardkldef} and denote it by \begin{equation} \label{eqn:Deltaeigen} \Delta_{m,n} = U_\Delta \Lambda_\Delta U_\Delta^\mathsf{T} \end{equation} where $U_\Delta$ is orthogonal and $\Lambda_\Delta$ is diagonal and nonnegative. Let further \begin{equation} \label{eqn:xdef} \bm{x} \triangleq \Lambda_\Delta^{- 1/2} U_\Delta^\mathsf{T} \bm{d}_{k,\ell}, \quad \bar{x} \triangleq \Lambda_\Delta^{- 1/2} U_\Delta^\mathsf{T} d_{q,r}^\star \end{equation} Since $\bm{d}_{k,\ell} \sim \mathbb{N}( d_{q,r}^\star, \mu_{\max} \Delta_{m,n})$, it follows from \eqref{eqn:Deltaeigen} and \eqref{eqn:xdef} that $\bm{x} \sim \mathbb{N}(\bar{x}, \mu_{\max} I_M)$. Substituting \eqref{eqn:Deltaeigen} and \eqref{eqn:xdef} into \eqref{eqn:deltakelldef} yields \begin{equation} \label{eqn:delta2andx} \bm{\delta}_{k,\ell}^2 = \bm{x}^\mathsf{T} \Lambda_\Delta \bm{x} = \sum_{h=1}^{M} \lambda_h \bm{x}_h^2 \end{equation} where $\bm{x}_h$ denotes the $h$-th element of $\bm{x}$, and $\lambda_h$ denotes the $h$-th element on the diagonal of $\Lambda_\Delta$.
From \eqref{eqn:delta2andx}, it is obvious that $\bm{\delta}_{k,\ell}^2$ is a weighted sum of independent squared Gaussian random variables. When hypothesis $\mathbb{H}_0$ is true, we have $d_{q,r}^\star = 0$ and $\bar{x} = 0$ by \eqref{eqn:xdef}. In this case, $\bm{\delta}_{k,\ell}^2$ reduces to a weighted sum of independent Gamma random variables (because squared zero-mean Gaussian random variables follow Gamma distributions \cite[p. 337]{Johnson95v1}), whose pdf is available in closed-form (but is very complicated) \cite{Moschopoulos85, Kara06TCOM}. When hypothesis $\mathbb{H}_1$ is true and $\| d_{q,r}^\star \|^2 > 0$, the pdf of $\bm{\delta}_{k,\ell}^2$ is generally not available in closed-form. Several procedures have been proposed in \cite{Imhof61biometrika, Sheil77JRSS, Liu09CSDA, Duchesne10CSDA, Ha13REVSTAT} for numerical evaluation of its tail probability. Instead of relying on the precise pdf of $\bm{\delta}_{k,\ell}^2$, we shall provide some useful constructions in the sequel for the error probabilities in the hypothesis test problem \eqref{eqn:deltakelldef}. 
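The change of variables \eqref{eqn:xdef}--\eqref{eqn:delta2andx} is easy to sanity-check numerically. The sketch below, using a randomly generated (hypothetical) $\Delta_{m,n}$, verifies that the whitening leaves $\bm{\delta}_{k,\ell}^2 = \sum_h \lambda_h \bm{x}_h^2$ intact, and that under $\mathbb{H}_0$ the empirical mean of this weighted sum of squared Gaussians matches $\mu_{\max}\sum_h \lambda_h$:

```python
import numpy as np

rng = np.random.default_rng(2)
M, mu = 4, 0.05
A = rng.normal(size=(M, M))
delta_cov = A @ A.T + 0.1 * np.eye(M)          # hypothetical nonsingular Delta_{m,n}

lam, U = np.linalg.eigh(delta_cov)             # Delta = U diag(lam) U^T
d = rng.normal(size=M)                          # one realization of d_{k,l}
x = np.diag(lam ** -0.5) @ U.T @ d             # whitened coordinates

lhs = float(d @ d)                              # delta^2 = ||d||^2
rhs = float(np.sum(lam * x ** 2))              # sum_h lam_h x_h^2

# Under H0, x ~ N(0, mu I): each lam_h x_h^2 is mu*lam_h times a chi^2_1
# (i.e., a Gamma) variable, so E delta^2 = mu * sum_h lam_h.
x0 = np.sqrt(mu) * rng.normal(size=(200_000, M))
emp_mean = float(np.sum(lam * x0 ** 2, axis=1).mean())
theo_mean = mu * float(lam.sum())
```

The identity `lhs == rhs` holds by construction of the whitening, and the Monte Carlo mean matches $\mu_{\max}{\mathrm{Tr}}(\Delta_{m,n})$.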
\subsection{Error Probabilities} For any $k \in \mathcal{G}_m \subseteq \mathcal{C}_q$ and $\ell \in \mathcal{G}_n \subseteq \mathcal{C}_r$, the Type-I error, namely, the false alarm for incorrect rejection of a true $\mathbb{H}_0$, is given by \begin{equation} \label{eqn:typeIerrordef} \mbox{Type-I error}: \qquad {\mathbb{P}}[\bm{\delta}_{k,\ell}^2 > \theta_{k,\ell} | d_{q,r}^\star = 0] \end{equation} and the Type-II error, namely, the missed detection for incorrect rejection of a true $\mathbb{H}_1$, is given by \begin{equation} \label{eqn:typeIIerrordef} \mbox{Type-II error}: \qquad {\mathbb{P}}[\bm{\delta}_{k,\ell}^2 < \theta_{k,\ell} | d_{q,r}^\star \neq 0] \end{equation} It is seen that the Type-I error corresponds to the right tail probability of $\bm{\delta}_{k,\ell}^2$ with $d_{q,r}^\star = 0$ and the Type-II error corresponds to the left tail probability of $\bm{\delta}_{k,\ell}^2$ with $d_{q,r}^\star \neq 0$. This is a fundamental difference between the two types of errors and, therefore, different techniques are needed to approximate them. Specifically, for the Type-II error, the pdf of $\bm{\delta}_{k,\ell}^2$ is close to a bell shape and can be well approximated by a Gaussian pdf. Then, the Type-II error probability can be bounded by using the Chernoff bound \cite{Cover06}. However, this technique does not apply to the Type-I error because when $d_{q,r}^\star = 0$, the pdf of $\bm{\delta}_{k,\ell}^2$ concentrates on the positive side of the origin and is skewed with a long right tail. Consequently, we need to take a different approach to bound the Type-I error probability. \subsubsection{Type-I Error} We first note that \begin{equation} \label{eqn:deltaandx} \bm{\delta}_{k,\ell}^2 = \bm{x}^\mathsf{T} \Lambda_\Delta \bm{x} \le \|\Delta_{m,n}\| \cdot \| \bm{x} \|^2 \end{equation} where $\Lambda_\Delta$ is from \eqref{eqn:Deltaeigen}.
This means that if $\bm{\delta}_{k,\ell}^2 > \theta_{k,\ell}$, then $\|\Delta_{m,n}\| \cdot \| \bm{x} \|^2 > \theta_{k,\ell}$ must be true, which further implies that the event $\{ \bm{\delta}_{k,\ell}^2 > \theta_{k,\ell} \}$ is a subset of the event $\{ \|\Delta_{m,n}\| \cdot \| \bm{x} \|^2 > \theta_{k,\ell} \}$. Therefore, \begin{equation} \label{eqn:boundType1error} {\mathbb{P}}[\bm{\delta}_{k,\ell}^2 > \theta_{k,\ell} | d_{q,r}^\star = 0] \le {\mathbb{P}} [ \| \bm{x} \|^2 > \theta_{k,\ell}' | \bar{x} = 0 ] \end{equation} where $\bar{x}$ is from \eqref{eqn:xdef}, and \begin{equation} \theta_{k,\ell}' \triangleq \frac{\theta_{k,\ell}}{ \| \Delta_{m,n} \|} \end{equation} Since $\bar{x} = 0$, $\mu_{\max}^{-1} \| \bm{x} \|^2$ follows a central chi-square distribution with $M$ degrees of freedom \cite[p. 415]{Johnson95v1}. Therefore, using the Chernoff bound for the central chi-square distribution \cite[Lemma 1, p. 2500]{Li07JMLR}, we get from \eqref{eqn:boundType1error} that \begin{equation} {\mathbb{P}}[\bm{\delta}_{k,\ell}^2 > \theta_{k,\ell} | d_{q,r}^\star = 0] \le 1 - {\mathbb{P}} [ \| \bm{x} \|^2 \le \theta_{k,\ell}' | \bar{x} = 0 ] \le \left(\frac{ \theta_{k,\ell}' e }{\mu_{\max} M }\right)^{ M/2 } \exp\left(- \frac{\theta_{k,\ell}'}{ 2\mu_{\max}} \right) \end{equation} for $\mu_{\max} < \theta_{k,\ell}'/M$, where $e$ is Euler's number. Therefore, when $\mu_{\max}$ is small enough, the Type-I error probability decays exponentially at a rate of $O(e^{-c_1/\mu_{\max}})$ for some constant $c_1 > 0$. \subsubsection{Type-II Error} We consider the characteristic function of $\bm{\delta}_{k,\ell}^2$. Since $\{\bm{x}_h \}$ are mutually-independent, the characteristic function of $\bm{\delta}_{k,\ell}^2$ is given by \begin{equation} \label{eqn:characteristiczeta2} c_{\bm{\delta}_{k,\ell}^2}(t) \triangleq \mathbb{E} \left[e^{j t \bm{\delta}_{k,\ell}^2}\right] \!=\! \mathbb{E} \left[e^{j t \sum_{h=1}^{M} \lambda_h \bm{x}_h^2 }\right] \!=\! 
\prod_{h=1}^{M} \mathbb{E} \left[ e^{j t \lambda_h \bm{x}_h^2 } \right] \end{equation} where we used \eqref{eqn:delta2andx}. Since $d_{q,r}^\star \neq 0$ in this case, $\bm{x}$ from \eqref{eqn:xdef} has nonzero mean $\bar{x} \neq 0$. Therefore, each $\mu_{\max}^{-1} \bm{x}_h^2$ is a non-central chi-square random variable with one degree of freedom and non-centrality $\mu_{\max}^{-1} \bar{x}_h^2$ \cite[p. 433]{Johnson95v2}. The characteristic function of $\bm{x}_h^2$ is then given by \cite[p. 437]{Johnson95v2}: \begin{equation} \label{eqn:characteristicncx2} \mathbb{E} \left[ e^{jt \bm{x}_h^2 } \right] = \frac{1}{\sqrt{1 - 2 j t \mu_{\max}}} e^{j \bar{x}_h^2 t / ( 1 - 2 j t \mu_{\max})} \end{equation} Substituting \eqref{eqn:characteristicncx2} into \eqref{eqn:characteristiczeta2} yields: \begin{equation} \label{eqn:characteristiczeta3} c_{\bm{\delta}_{k,\ell}^2}(t) = \prod_{h=1}^{M} \frac{1}{\sqrt{1 - 2 j t \mu_{\max} \lambda_h}} \cdot e^{j \bar{x}_h^2 t \lambda_h / (1 - 2 j t \mu_{\max} \lambda_h)} \end{equation} When $\mu_{\max}$ is sufficiently small, we have \begin{equation} \label{eqn:approxfactors} \frac{1}{\sqrt{1 - 2 j t \mu_{\max} \lambda_h}} \approx 1, \quad \frac{1}{1 - 2 j t \mu_{\max} \lambda_h} \approx 1 + 2 j t \mu_{\max} \lambda_h \end{equation} Using \eqref{eqn:approxfactors}, we can approximate $c_{\bm{\delta}_{k,\ell}^2}(t)$ in \eqref{eqn:characteristiczeta3} by \begin{align} \label{eqn:characteristiczeta4} c_{\bm{\delta}_{k,\ell}^2}(t) & \approx \prod_{h=1}^{M} e^{j \bar{x}_h^2 t \lambda_h (1 + 2 j t \mu_{\max} \lambda_h)} \nonumber \\ & = e^{j t (\sum_{h=1}^{M} \lambda_h \bar{x}_h^2) - 2 t^2 \mu_{\max} (\sum_{h=1}^{M} \lambda_h^2 \bar{x}_h^2)} \nonumber \\ & = e^{j t \| d_{q,r}^\star \|^2 - 2 t^2 \mu_{\max} \| d_{q,r}^\star \|_{\Delta_{m,n}}^2} \end{align} where we used the fact that \begin{equation} \label{eqn:dmnoandbarx} \sum_{h=1}^{M} \lambda_h \bar{x}_h^2 = \| d_{q,r}^\star \|^2, \quad \sum_{h=1}^{M} \lambda_h^2 \bar{x}_h^2 = \| d_{q,r}^\star \|_{\Delta_{m,n}}^2 \end{equation} Note that the RHS of \eqref{eqn:characteristiczeta4} coincides with the characteristic function of a Gaussian distribution with mean $\| d_{q,r}^\star \|^2$ and variance $4 \mu_{\max} \| d_{q,r}^\star \|_{\Delta_{m,n}}^2$ \cite[p. 89]{Johnson95v1}. Since the distribution of a random variable is uniquely determined by its characteristic function, result \eqref{eqn:characteristiczeta4} implies that $\bm{\delta}_{k,\ell}^2 \sim \mathbb{N}(\| d_{q,r}^\star \|^2, 4 \mu_{\max} \| d_{q,r}^\star \|_{\Delta_{m,n}}^2)$ approximately for sufficiently small $\mu_{\max}$. Thus, \begin{equation} {\mathbb{P}}[\bm{\delta}_{k,\ell}^2 < \theta_{k,\ell} | d_{q,r}^\star \neq 0] \approx Q\left( \frac{ \| d_{q,r}^\star \|^2 - \theta_{k,\ell}}{2 \mu_{\max}^{1/2} \| d_{q,r}^\star \|_{\Delta_{m,n}}} \right) \le \frac{1}{2} e^{ - ( \| d_{q,r}^\star \|^2 - \theta_{k,\ell} )^2 / 8 \mu_{\max} \| d_{q,r}^\star \|_{\Delta_{m,n}}^2 } \end{equation} where $Q(\cdot)$ denotes the $Q$-function, which is the tail probability of the standard Gaussian distribution, and the last step uses the Chernoff bound \cite[p. 380]{Cover06}. Therefore, when $\mu_{\max}$ is small enough, the Type-II error decays exponentially at a rate of $O(e^{-c_2/\mu_{\max}})$ for some constant $c_2 > 0$. \subsubsection{A Special Case} For the purpose of illustration only, we consider a special case where $\Delta_{m,n} = \sigma_{m,n}^2 I_M$. In this case, $\bm{\delta}_{k,\ell}^2$ has a closed-form pdf. When $\mathbb{H}_1$ is true and $\| d_{q,r}^\star \|^2 > 0$, the quadratic form $\bm{\delta}_{k,\ell}^2 / (\mu_{\max} \sigma_{m,n}^2)$ reduces to a non-central chi-square random variable with $M$ degrees of freedom and non-centrality parameter $\| d_{q,r}^\star \|^2 / \mu_{\max} \sigma_{m,n}^2$ \cite[p. 433]{Johnson95v2}. Let us denote the non-central chi-square distribution with $d$ degrees of freedom and non-centrality parameter $\lambda$ by $\chi_d^2(\lambda)$.
The pdf of $\chi_d^2(\lambda)$ is then given by \cite[p. 433]{Johnson95v2}: \begin{equation} \label{eqn:fchi2def} f_{\chi^2}(x; d, \lambda) = \frac{1}{2} \left( \frac{x}{\lambda} \right)^{(d-2)/4} e^{-(x+\lambda)/2} I_{(d-2)/2} (\sqrt{\lambda x}) \!\! \end{equation} for $x \ge 0$, where $I_h(x)$ denotes the $h$-th order modified Bessel function of the first kind. Then, \begin{equation} \frac{\bm{\delta}_{k,\ell}^2}{\mu_{\max} \sigma_{m,n}^2} \sim \chi_M^2\left( \frac{\| d_{q,r}^\star \|^2}{\mu_{\max} \sigma_{m,n}^2} \right) \end{equation} and the pdf of $\bm{\delta}_{k,\ell}^2$ is given by \begin{equation} \label{eqn:fzdef} f(z) = \frac{1}{\mu_{\max} \sigma_{m,n}^2} \cdot f_{\chi^2}\left(\frac{z}{\mu_{\max} \sigma_{m,n}^2}; M, \frac{\| d_{q,r}^\star \|^2}{\mu_{\max} \sigma_{m,n}^2} \right) \end{equation} where $f_{\chi^2}(\cdot)$ is from \eqref{eqn:fchi2def}. When $\mathbb{H}_0$ is true and $\| d_{q,r}^\star \|^2 = 0$, the pdf $f(z)$ in \eqref{eqn:fzdef} reduces to that of a scaled central chi-square distribution \cite[p. 415]{Johnson95v1}: \begin{equation} \label{eqn:fzcentraldef} f(z) = \frac{1}{\mu_{\max} \sigma_{m,n}^2} \cdot f_{\chi^2}\left(\frac{z}{\mu_{\max} \sigma_{m,n}^2}; M, 0 \right) \end{equation} We plot the pdf $f(z)$ from \eqref{eqn:fzdef} and \eqref{eqn:fzcentraldef} in Fig. \ref{fig:pdf_chi2}. It can be observed that when $M$, $\| d_{q,r}^\star \|^2$, and $\sigma_{m,n}^2$ are fixed, in both the $\mathbb{H}_0$ (blue curves) and $\mathbb{H}_1$ (red curves) cases, the probability mass of $\bm{\delta}_{k,\ell}^2$ concentrates more around its mean as $\mu_{\max}$ decreases. When $q \neq r$ (i.e., $\mathbb{H}_1$ is true), the mean of $\bm{\delta}_{k,\ell}^2$ is close to $\| d_{q,r}^\star \|^2 = 1$ for sufficiently small $\mu_{\max}$; when $q = r$ (i.e., $\mathbb{H}_0$ is true), the mean is close to zero.
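For completeness, \eqref{eqn:fzdef} can be evaluated numerically without special-function libraries by using the Poisson-mixture form of the non-central chi-square pdf, which is mathematically equivalent to the Bessel-function expression above; the parameter values below are the hypothetical ones used for illustration:

```python
import math

def chi2_logpdf(x, d):
    """Log-pdf of the central chi-square distribution with d dof."""
    return (d / 2 - 1) * math.log(x) - x / 2 - (d / 2) * math.log(2) \
        - math.lgamma(d / 2)

def ncx2_pdf(x, d, lam, terms=400):
    """Non-central chi-square pdf as a Poisson mixture of central
    chi-square pdfs (log-space weights avoid overflow)."""
    if x <= 0:
        return 0.0
    total, log_w = 0.0, -lam / 2                 # log Poisson weight at j = 0
    for j in range(terms):
        if j > 0:
            log_w += math.log(lam / 2) - math.log(j)
        total += math.exp(log_w + chi2_logpdf(x, d + 2 * j))
    return total

M, mu, sigma2, d_norm2 = 10, 0.03, 1.0, 1.0      # hypothetical special case
scale = mu * sigma2
f_z = lambda z: ncx2_pdf(z / scale, M, d_norm2 / scale) / scale

# Crude Riemann sum: the pdf should integrate to (nearly) one.
mass = sum(f_z(0.002 * i) * 0.002 for i in range(1, 1800))
```

The density concentrates around $\|d_{q,r}^\star\|^2 + \mu_{\max}\sigma_{m,n}^2 M$, consistent with the curves in the figure.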
The right tail probabilities of the blue curves (under $\mathbb{H}_0$) and the left tail probabilities of the red curves (under $\mathbb{H}_1$) all decay exponentially. In addition, it is seen that the pdf of $\bm{\delta}_{k,\ell}^2$ under $\mathbb{H}_1$ (the red curves with $\| d_{q,r}^\star \|^2 > 0$) is nearly symmetric and bell-shaped, which agrees with the Gaussian approximation we made when evaluating the Type-II error (missed detection) for the general case. On the other hand, the pdf of $\bm{\delta}_{k,\ell}^2$ under $\mathbb{H}_0$ (the blue curves with $\| d_{q,r}^\star \|^2 = 0$) concentrates close to zero and has large skewness with a long right tail, which distinguishes it from a Gaussian distribution; this supports our previous statement that it is not appropriate to assess the Type-I error (false alarm) by approximating the pdf of $\bm{\delta}_{k,\ell}^2$ under $\mathbb{H}_0$ with a Gaussian distribution. \begin{figure} \centering \includegraphics[width=3.5in]{pdf_hypothese} \caption{The pdf of $\bm{\delta}_{k,\ell}^2$ defined in \eqref{eqn:fzdef} and \eqref{eqn:fzcentraldef} with $M = 10$, $\| d_{q,r}^\star \|^2 = 1$, $\sigma_{m,n}^2 = 1$, $\mu_{\max} = 0.01, 0.03, 0.05$.} \label{fig:pdf_chi2} \vspace{-1\baselineskip} \end{figure} \subsection{Dynamics of Diffusion with Adaptive Clustering} Since both Type-I and Type-II errors decay exponentially with exponent proportional to $1/\mu_{\max}$, it is expected that incorrect clustering decisions will become rare as the iteration proceeds. We can therefore assume that enough iterations have elapsed and the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} is operating in steady-state. Under these conditions, we can examine the dynamics of the second recursion \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic} with adaptive clustering.
From Assumption \ref{ass:topology}, correct clustering decisions split the underlying topology into $Q$ sub-networks, one for each cluster. Within each cluster, correct clustering decisions merge all disjoint groups into a bigger group. Therefore, the resulting topology for the entire network will now consist of $Q$ separate sub-networks and each sub-network will be strongly-connected. In addition, since the step-sizes are sufficiently small, the decision statistics $\|\bm{w}_{\ell,i} - \bm{w}_{k,i}\|^2$ generated by the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} in steady-state will be nearly time-invariant. The clustering decisions will therefore also be nearly time-invariant. Then, with high probability, the cooperative sub-neighborhoods $\{ \bm{\mathcal{N}}_{k,i}^+ \}$ produced by \eqref{eqn:neighborhoodki+def} will become nearly time-invariant after the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} reaches steady-state: \begin{equation} \label{eqn:Neighborhoodconverge} \bm{\mathcal{N}}_{k,i}^+ \rightarrow \mathcal{N}_k^+, \quad \mbox{as} \quad i \rightarrow \infty \end{equation} for all $k$, where $\mathcal{N}_k^+$ is from \eqref{eqn:Neighborhood+and-def}. In order to gain from enhanced cooperation via adaptive clustering, it is critical to choose proper combination policies for recursion \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic}. From the discussion in Chapter 12 of \cite[pp. 624--635]{Sayed14NOW}, we know that doubly-stochastic combination policies are able to exploit the benefit of cooperation as more agents are included in the cooperation. For example, one can choose the Metropolis rule \cite[p.
664]{Sayed14NOW}, i.e., \begin{equation} \label{eqn:metropolisrule} \bm{a}_{\ell k}'(i) = \left\{ \begin{aligned} & \frac{1}{\max\{ |\bm{\mathcal{N}}_{\ell,i}^+|, |\bm{\mathcal{N}}_{k,i}^+| \}}, & \;\; & \ell \in \bm{\mathcal{N}}_{k,i}^+ \backslash \{k\} \\ & 1 - \sum_{n \in \bm{\mathcal{N}}_{k,i}^+\backslash\{k\}} \bm{a}_{n k}'(i), & \;\; & \ell = k \\ & 0, & \;\; & \ell \in \mathcal{N}_k \backslash \bm{\mathcal{N}}_{k,i}^+ \\ \end{aligned} \right. \end{equation} When the combination coefficients $\{ \bm{a}_{\ell k}'(i) \}$ are chosen according to \eqref{eqn:metropolisrule}, their values are determined by the sizes of the cooperative sub-neighborhoods $\{ \bm{\mathcal{N}}_{k,i}^+ \}$. It is then obvious that the coefficients $\{ \bm{a}_{\ell k}'(i) \}$ will tend to constant values: \begin{equation} \label{eqn:aellkconverge} \bm{a}_{\ell k}'(i) \rightarrow a_{\ell k}', \quad \mbox{as} \quad i \rightarrow \infty \end{equation} which will be determined by the size of $\mathcal{N}_k^+$. Therefore, we can rewrite the second recursion \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic} for small enough $\mu_{\max}$ and large enough $i$ as \begin{subequations} \begin{align} \label{eqn:distributedadaptstable} \bm{\psi}_{k,i}' & = \bm{w}_{k,i-1}' - \mu_k \widehat{\nabla J_k} (\bm{w}_{k,i-1}') \\ \label{eqn:distributedcombinestable} \bm{w}_{k,i}' & = \sum_{\ell \in \mathcal{N}_k^+ } a_{\ell k}' \bm{\psi}_{\ell,i}' \end{align} \end{subequations} by using \eqref{eqn:Neighborhoodconverge} and \eqref{eqn:aellkconverge}. We collect the $\{ a_{\ell k}' \}$ into a matrix and denote it by $A'$. The matrix $A'$ is block diagonal and each block on its diagonal corresponds to a cluster. Recursion \eqref{eqn:distributedadaptstable}--\eqref{eqn:distributedcombinestable} only involves in-cluster cooperative learning for common minimizers, where all agents from a cluster form a single big group.
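A minimal sketch of the Metropolis weights in \eqref{eqn:metropolisrule} over static neighborhoods illustrates the doubly-stochastic property mentioned above; the four-agent cluster used for the check is hypothetical:

```python
def metropolis_weights(neighborhoods):
    """Metropolis combination weights over cooperative
    sub-neighborhoods N_k^+ (each list includes the agent itself)."""
    n = len(neighborhoods)
    a = [[0.0] * n for _ in range(n)]
    for k in range(n):
        for l in neighborhoods[k]:
            if l != k:
                a[l][k] = 1.0 / max(len(neighborhoods[l]), len(neighborhoods[k]))
        # Diagonal entry makes each column sum to one (left-stochastic);
        # symmetry of the off-diagonal entries then gives row sums of one too.
        a[k][k] = 1.0 - sum(a[l][k] for l in neighborhoods[k] if l != k)
    return a

# Hypothetical 4-agent cluster: agents 0-1-2 form a path, agent 3 attaches to 2.
nbhd = [[0, 1], [0, 1, 2], [1, 2, 3], [2, 3]]
A = metropolis_weights(nbhd)
col_sums = [sum(A[l][k] for l in range(4)) for k in range(4)]
row_sums = [sum(A[l][k] for k in range(4)) for l in range(4)]
```

For symmetric neighborhoods the resulting matrix is doubly stochastic, which is precisely the property exploited when more agents join the cooperation.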
Therefore, the performance analysis in Section \ref{sec:performancegroup} applies to this case as well. \section{Simulation Results} We first simulate a network consisting of $N = 200$ agents. Each agent observes a data stream $\{\bm{d}_k(i), \bm{u}_{k,i}; i \ge 0 \}$ that satisfies the linear regression model \cite{Sayed08}: \begin{equation} \bm{d}_k(i) = \bm{u}_{k,i} w_k^o + \bm{v}_k(i) \end{equation} where $\bm{d}_k(i) \in \mathbb{R}$ is a scalar response variable and $\bm{u}_{k,i} \in \mathbb{R}^{1\times M}$ is a row vector feature variable with $M = 2$. The feature variable $\bm{u}_{k,i}$ is randomly generated at every iteration by using a Gaussian distribution with zero mean and scaled identity covariance matrix $\sigma_{u,k}^2 I_M$. The model noise $\bm{v}_k(i)\in\mathbb{R}$ is also randomly generated at every iteration by using another independent Gaussian distribution with zero mean and variance $\sigma_{v,k}^2$. The values of $\{\sigma_{u,k}^2\}$ and $\{\sigma_{v,k}^2\}$ are positive and randomly generated. There are $Q = 2$ clusters in the network. The first $N_1 = 100$ agents belong to cluster $\mathcal{C}_1$, i.e., $\mathcal{C}_1 = \{1,2,\dots,100\}$. The second $N_2 = 100$ agents belong to cluster $\mathcal{C}_2$, i.e., $\mathcal{C}_2 = \{101,102,\dots,200\}$. The loading factors for the two clusters, namely, $w_1^\star$ and $w_2^\star$, are randomly generated. The step-size is uniform and is set to $\mu = 0.05$. The underlying topology that connects all agents is shown in Fig. \ref{fig:total_topology}. Agents from cluster $\mathcal{C}_1$ are in red and agents from $\mathcal{C}_2$ are in blue. We simulated the scenario where agents have some partial knowledge about the grouping at the beginning of the learning process. The partial knowledge is non-trivial, meaning that the groups $\{\mathcal{G}_m\}$ used in the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} are not just singletons. 
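The linear data model used in this simulation is easy to reproduce. The following sketch (with hypothetical variances and a hypothetical minimizer, purely for illustration) generates one agent's data stream and confirms that the model vector is recoverable from the data by least-squares:

```python
import numpy as np

rng = np.random.default_rng(6)
M = 2
w_o = np.array([0.5, -1.0])                     # hypothetical model vector w_k^o
sigma_u2, sigma_v2 = 1.2, 0.01                  # hypothetical agent variances

def stream(num):
    """Generate num samples from d = u w^o + v with zero-mean Gaussian
    regressors (variance sigma_u2) and noise (variance sigma_v2)."""
    U = np.sqrt(sigma_u2) * rng.normal(size=(num, M))
    d = U @ w_o + np.sqrt(sigma_v2) * rng.normal(size=num)
    return d, U

d, U = stream(50_000)
w_ls = np.linalg.lstsq(U, d, rcond=None)[0]     # least-squares estimate of w^o
err = float(np.sum((w_ls - w_o) ** 2))
```

In the actual algorithm, of course, each agent refines its estimate recursively with stochastic gradients rather than by batch least-squares.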
The topologies that reflect the $\{\mathcal{G}_m\}$ are plotted in Figs. \ref{fig:initial_topology_group1} and \ref{fig:initial_topology_group2} for the two clusters. The Metropolis rule \eqref{eqn:metropolisrule} is used in both recursions, \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} and \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic}. \begin{figure}[h] \centerline{ \subfloat[The initial topology with all links.] {\includegraphics[width=2in]{underlying_topology} \label{fig:total_topology}} \hfil \subfloat[Initial topology of cluster 1.] {\includegraphics[width=2in]{initial_topology_group1} \label{fig:initial_topology_group1}} \hfil \subfloat[Initial topology of cluster 2.] {\includegraphics[width=2in]{initial_topology_group2} \label{fig:initial_topology_group2}}} \centerline{ \subfloat[The final topology at steady-state.] {\includegraphics[width=2in]{split_topology} \label{fig:split_topology}} \hfil \subfloat[Resulting topology of cluster 1.] {\includegraphics[width=2in]{resulting_topology_group1} \label{fig:resulting_topology_group1}} \hfil \subfloat[Resulting topology of cluster 2.] {\includegraphics[width=2in]{resulting_topology_group2} \label{fig:resulting_topology_group2}}} \caption{The underlying topology of the entire network where agents from different clusters are connected. As the learning process progresses, the disjoint groups in each cluster merge into a bigger group to enable collaborative learning among more agents. In steady-state, only in-cluster links remain active.} \label{fig:cluster_topology} \vspace{-1\baselineskip} \end{figure} As we explained before, in steady-state the clustering decisions become time-invariant and small groups in the same cluster merge into bigger groups. The links between neighbors within the same cluster are active while links to neighbors from different clusters are dropped. We plot the resulting topology in steady-state with active links in Fig. 
\ref{fig:split_topology}. Compared to Fig. \ref{fig:total_topology}, the underlying topology in Fig. \ref{fig:split_topology} is trimmed and split into two disjoint sub-networks. This result implies that the interference between the two clusters is suppressed. The two sub-networks are themselves connected at steady-state and are shown in Figs. \ref{fig:resulting_topology_group1} and \ref{fig:resulting_topology_group2}. Comparing the resulting cluster topologies in Figs. \ref{fig:resulting_topology_group1} and \ref{fig:resulting_topology_group2} with the initial cluster topologies in Figs. \ref{fig:initial_topology_group1} and \ref{fig:initial_topology_group2}, it can be observed that all separate small groups from the same cluster merge into a bigger group and collaborative learning involving more agents emerges. The MSD learning curves are plotted in Fig. \ref{fig:steadystateMSD} where the cluster MSDs are obtained by averaging over 100 trials. The cluster MSDs for the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} are in black and green for clusters 1 and 2, respectively. The cluster MSDs for the second recursion \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic} are in red and blue for clusters 1 and 2, respectively. Clearly, both clusters improve their steady-state MSD performance on average by forming larger clusters for cooperation. \begin{figure}[h] \centering \includegraphics[width=4in]{steady_state_MSD} \caption{The steady-state cluster average MSDs for the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup} and the second recursion \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic}.} \label{fig:steadystateMSD} \vspace{-1\baselineskip} \end{figure} In the second experiment, we simulate a network with $N = 50$ nodes in $Q = 5$ clusters. The sizes of the five clusters are 8, 9, 10, 11, and 12, respectively.
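The trimming effect observed in the figures can be mimicked directly in code: given cluster labels (hypothetical ones are used below), dropping every cross-cluster link splits the graph into disjoint connected components, one per cluster:

```python
def prune_and_components(edges, labels, n):
    """Drop cross-cluster edges, then return the connected components
    of the pruned graph (simple depth-first search)."""
    kept = [(u, v) for (u, v) in edges if labels[u] == labels[v]]
    adj = {k: [] for k in range(n)}
    for u, v in kept:
        adj[u].append(v)
        adj[v].append(u)
    seen, comps = set(), []
    for s in range(n):
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u])
        seen |= comp
        comps.append(sorted(comp))
    return comps

# Hypothetical 6-agent network, two clusters of three, one cross link.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (2, 3)]
labels = [0, 0, 0, 1, 1, 1]
comps = prune_and_components(edges, labels, 6)
```

Each surviving component remains internally connected, matching the behavior reported for the simulated topologies.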
The initial topology is shown in Fig. \ref{fig:50init}. We choose the uniform step-size $\mu = 0.01$. After 1000 iterations, the resulting topology is separated into five clusters and is shown in Fig. \ref{fig:50res}, and the topologies for the five clusters are given in Figs. \ref{fig:50red}--\ref{fig:50magenta}, respectively. The MSD learning curves that are obtained by averaging over 500 trials match the theory well, as shown in Figs. \ref{fig:50MSD1} and \ref{fig:50MSD2}. \section{Conclusions} In this work we proposed a distributed strategy for adaptive learning and clustering over multi-cluster networks. Detailed performance analysis is conducted and the results are supported by simulations. The proposed algorithm can be used in applications to segment heterogeneous networks into sub-networks to enhance in-cluster cooperation and suppress cross-cluster interference. It can also be applied to homogeneous networks to prevent intrusion or jamming by isolating malicious nodes from normal nodes. Furthermore, it can be used to trim and grow adaptive networks according to the objectives of the agents in the network. \begin{figure}[h] \centerline{ \subfloat[The initial topology with five clusters.] {\includegraphics[width=3in]{50_nodes_init} \label{fig:50init}} \hfil \subfloat[The remaining topology with five clusters.] {\includegraphics[width=3in]{50_nodes_res} \label{fig:50res}}} \centerline{ \subfloat[Final topology of $\mathcal{C}_1$.] {\includegraphics[width=1.5in]{50_nodes_red} \label{fig:50red}} \hfil \subfloat[Final topology of $\mathcal{C}_2$.] {\includegraphics[width=1.5in]{50_nodes_blue} \label{fig:50blue}} \hfil \subfloat[Final topology of $\mathcal{C}_3$.] {\includegraphics[width=1.5in]{50_nodes_green} \label{fig:50green}}} \centerline{ \subfloat[Final topology of $\mathcal{C}_4$.] {\includegraphics[width=1.5in]{50_nodes_cyan} \label{fig:50cyan}} \hfil \subfloat[Final topology of $\mathcal{C}_5$.] 
{\includegraphics[width=1.5in]{50_nodes_magenta} \label{fig:50magenta}} } \caption{The initial topology with $N=50$ nodes and $Q=5$ clusters. In steady-state, the five clusters are successfully separated from each other while each cluster remains connected.} \label{fig:secondsim} \vspace{-1\baselineskip} \end{figure} \begin{figure}[h] \centerline{ \subfloat[The MSD learning curves for the first recursion \eqref{eqn:distributedadaptgroup}--\eqref{eqn:distributedcombinegroup}.] {\includegraphics[width=3.3in]{50_nodes_msd1} \label{fig:50MSD1}} \hfil \subfloat[The MSD learning curves for the second recursion \eqref{eqn:distributedadaptdynamic}--\eqref{eqn:distributedcombinedynamic}.] {\includegraphics[width=3.3in]{50_nodes_msd2} \label{fig:50MSD2}}} \caption{The MSD learning curves for the proposed distributed clustering and learning algorithm.} \label{fig:secondsimMSD} \vspace{-1\baselineskip} \end{figure} \appendices \section{Proof of Lemma \ref{lemma:lowrankapprox}} \label{app:lowrankapprox} Since both models, \eqref{eqn:lowdimensionerrorrecursion} and \eqref{eqn:longtermerrorrecursion}, can be decoupled into $G$ separate recursions one for each group, it is sufficient to show that for sufficiently small step-sizes, and for any group $\mathcal{G}_m$, it holds that \begin{equation} \label{eqn:lowrankmodelgapgroup} \limsup_{i\rightarrow \infty} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{low} \|^2 = O(\mu_{\max}^2) \end{equation} where $\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{low}$ is given by \eqref{eqn:zqidef}. We adopt a technique similar to the one used in the proof of Theorem 10.2 \cite[p. 557]{Sayed14NOW} to establish \eqref{eqn:lowrankmodelgapgroup} in the sequel. 
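Before introducing the decomposition, it is worth recalling the spectral structure it relies on: each combination matrix has a single eigenvalue at one, with all remaining eigenvalues strictly inside the unit circle, and for a doubly-stochastic choice (such as the Metropolis rule) the corresponding eigenvector is the all-ones vector. A quick numerical check on a small, hypothetical Metropolis-type matrix:

```python
import numpy as np

# Metropolis combination matrix of a hypothetical three-agent path 0-1-2.
A = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

eigvals, eigvecs = np.linalg.eigh(A)            # A is symmetric here
lam1 = float(eigvals[-1])                        # largest eigenvalue (should be 1)
v1 = eigvecs[:, -1]
v1 = v1 / v1[0]                                  # normalize: the all-ones vector
ones_err = float(np.max(np.abs(v1 - 1.0)))
sub = float(abs(eigvals[-2]))                    # subdominant eigenvalue: |.| < 1
```

The strict spectral gap (`sub < 1`) is what makes the remaining Jordan blocks stable in the decomposition below.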
We introduce the Jordan decomposition of each $A_m$ \cite{Horn85, Sayed14NOW}: \begin{equation} \label{eqn:Aeigendecompdef} A_m = V_m J_m V_m^{-1} \triangleq \begin{bmatrix} p_m^g & V_{m,R} \end{bmatrix} \begin{bmatrix} 1 & \\ & J_{m,\epsilon} \\ \end{bmatrix} \begin{bmatrix} \ds{1}_{N_m^g} & V_{m,L} \end{bmatrix}^\mathsf{T} \end{equation} where $J_{m,\epsilon} \in \mathbb{C}^{(N_m^g-1)\times(N_m^g-1)}$ consists of all stable Jordan blocks with $\epsilon$'s on the first lower off-diagonal, and $V_m$ is a non-singular complex matrix. Let \begin{align} \label{eqn:bigVmdef} \mathcal{V}_m & \triangleq V_m \otimes I_M \\ \label{eqn:bigJmdef} \mathcal{J}_m & \triangleq J_m \otimes I_M \end{align} Multiplying both sides of \eqref{eqn:longtermerrorrecursiongroup} by $\mathcal{V}_m^\mathsf{T}$ yields: \begin{equation} \label{eqn:Umodifiederrorrecursiongroup} \mathcal{V}_m^\mathsf{T} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} = \bar{\mathcal{B}}_m \mathcal{V}_m^\mathsf{T} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i-1}^\textrm{long} + \mathcal{J}_m^\mathsf{T} \mathcal{V}_m^\mathsf{T} \mathcal{M}_m {\scriptstyle{\boldsymbol{\mathcal{S}}}}_{m,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i-1}) \end{equation} where \begin{equation} \label{eqn:barbigBmdef} \bar{\mathcal{B}}_m \triangleq \mathcal{V}_m^\mathsf{T} \mathcal{B}_m (\mathcal{V}_m^\mathsf{T})^{-1} = \mathcal{J}_m^\mathsf{T} - \mathcal{J}_m^\mathsf{T} \mathcal{V}_m^\mathsf{T} \mathcal{M}_m \mathcal{H}_m (\mathcal{V}_m^\mathsf{T})^{-1} \end{equation} By \eqref{eqn:Aeigendecompdef} and \eqref{eqn:bigVmdef}, we have \begin{equation} \label{eqn:wbarandwcheckdef} \mathcal{V}_m^\mathsf{T} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} = \begin{bmatrix} (p_m^g \otimes I_M)^\mathsf{T} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} \\ (V_{m,R} \otimes I_M)^\mathsf{T}
\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} \\ \end{bmatrix} \triangleq \begin{bmatrix} \bar{\bm{w}}_{m,i}^\textrm{long} \\ \check{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} \\ \end{bmatrix} \end{equation} where $\bar{\bm{w}}_{m,i}^\textrm{long}$ is an $M\times 1$ vector, $\check{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long}$ is an $(N_m^g-1)M \times1$ vector. It follows from \eqref{eqn:bigVmdef} and \eqref{eqn:zqidef} that \begin{equation} \label{eqn:Uonewcent} \mathcal{V}_m^\mathsf{T} \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{low} = (V_m^\mathsf{T} \ds{1}_{N_m^g} ) \otimes \widetilde{\bm{w}}_{m,i}^\textrm{low} = \begin{bmatrix} \widetilde{\bm{w}}_{m,i}^\textrm{low} \\ 0 \end{bmatrix} \end{equation} since $\ds{1}_{N_m^g}$ is the first column of $(V_m^\mathsf{T})^{-1}$ in \eqref{eqn:Aeigendecompdef}. Using \eqref{eqn:wbarandwcheckdef} and \eqref{eqn:Uonewcent}, we find that \begin{equation} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{low} \|_{\Sigma_m}^2 = \mathbb{E} \| \bar{\bm{w}}_{m,i}^\textrm{long} - \widetilde{\bm{w}}_{m,i}^\textrm{low} \|^2 + \mathbb{E} \| \check{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} \|^2 \end{equation} where $\Sigma_m \triangleq \mathcal{V}_m \mathcal{V}_m^\mathsf{T}$ is a positive-definite weighting matrix. Since $\| \Sigma_m \|$ is independent of $\mu_{\max}$, result \eqref{eqn:lowrankmodelgapgroup} holds if the following condition holds: \begin{equation} \label{eqn:steadystateerrorapproxlowdimgroup} \limsup_{i\rightarrow\infty} \mathbb{E} \| \bar{\bm{w}}_{m,i}^\textrm{long} - \widetilde{\bm{w}}_{m,i}^\textrm{low} \|^2 + \mathbb{E} \| \check{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} \|^2 = O(\mu_{\max}^2) \end{equation} Using Eq. (10.78) in \cite[p. 
563]{Sayed14NOW}, we know that \begin{equation} \label{eqn:wcheckboundzero} \limsup_{i\rightarrow\infty} \mathbb{E} \| \check{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{long} \|^2 = O(\mu_{\max}^2) \end{equation} From \eqref{eqn:Umodifiederrorrecursiongroup} and \eqref{eqn:wbarandwcheckdef}, the evolution of $\bar{\bm{w}}_{m,i}^\textrm{long}$ is given by (see Eq. (9.61) from \cite[p. 514]{Sayed14NOW} for a similar derivation): \begin{equation} \label{eqn:barwmigrouprecursion} \bar{\bm{w}}_{m,i}^\textrm{long} = D_m \bar{\bm{w}}_{m,i-1}^\textrm{long} - D_{21}^\mathsf{T} \check{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i-1}^\textrm{long} + (p_m^g \otimes I_M)^\mathsf{T} \mathcal{M}_m {\scriptstyle{\boldsymbol{\mathcal{S}}}}_{m,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{m,i-1}) \end{equation} where $D_{21}^\mathsf{T} \triangleq (p_m^g \otimes I_M)^\mathsf{T} \mathcal{M}_m \mathcal{H}_m (V_{m,L} \otimes I_M)$. Using \eqref{eqn:barwmigrouprecursion} and \eqref{eqn:lowdimensionerrorrecursiongroup}, we obtain \begin{equation} \label{eqn:lowrankblock1recursion} \bar{\bm{w}}_{m,i}^\textrm{long} - \widetilde{\bm{w}}_{m,i}^\textrm{low} = D_m (\bar{\bm{w}}_{m,i-1}^\textrm{long} - \widetilde{\bm{w}}_{m,i-1}^\textrm{low} ) - D_{21}^\mathsf{T} \check{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i-1}^\textrm{long} \end{equation} We recognize that recursion \eqref{eqn:lowrankblock1recursion} has a form that is similar to the recursion for $\bar{\bm{b}}_i$ in Eq. (10.64) of \cite[p. 561]{Sayed14NOW} except that here in \eqref{eqn:lowrankblock1recursion} the driving noise term is absent. Therefore, we immediately get from Eq. (10.66) of \cite[p. 
562]{Sayed14NOW} that \begin{equation} \label{eqn:lowrankblock1recursion2} \mathbb{E} \| \bar{\bm{w}}_{m,i}^\textrm{long} - \widetilde{\bm{w}}_{m,i}^\textrm{low} \|^2 \le (1 - \sigma_{11} \mu_{\max}) \mathbb{E} \| \bar{\bm{w}}_{m,i-1}^\textrm{long} - \widetilde{\bm{w}}_{m,i-1}^\textrm{low} \|^2 + \frac{\sigma_{21}^2 \mu_{\max}}{\sigma_{11}} \mathbb{E} \| \check{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i-1}^\textrm{long} \|^2 \end{equation} for some constants $\sigma_{11} > 0$ and $\sigma_{21} > 0$. Substituting \eqref{eqn:wcheckboundzero} into \eqref{eqn:lowrankblock1recursion2} yields \begin{equation} \label{eqn:lowrankblock1recursion3} \mathbb{E} \| \bar{\bm{w}}_{m,i}^\textrm{long} - \widetilde{\bm{w}}_{m,i}^\textrm{low} \|^2 \le (1 - \sigma_{11} \mu_{\max}) \mathbb{E} \| \bar{\bm{w}}_{m,i-1}^\textrm{long} - \widetilde{\bm{w}}_{m,i-1}^\textrm{low} \|^2 + O(\mu_{\max}^3) \end{equation} for large enough $i$. Therefore, it follows from \eqref{eqn:lowrankblock1recursion3} that \begin{equation} \label{eqn:wbarboundzero} \limsup_{i\rightarrow\infty} \mathbb{E} \| \bar{\bm{w}}_{m,i}^\textrm{long} - \widetilde{\bm{w}}_{m,i}^\textrm{low} \|^2 = O(\mu_{\max}^2) \end{equation} Combining \eqref{eqn:wcheckboundzero} and \eqref{eqn:wbarboundzero} proves \eqref{eqn:steadystateerrorapproxlowdimgroup}. 
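The closing step of the proof is a recurring pattern in this analysis: a contraction with factor $1-\sigma_{11}\mu_{\max}$ driven by an $O(\mu_{\max}^3)$ perturbation settles at an $O(\mu_{\max}^2)$ level, as in \eqref{eqn:lowrankblock1recursion3}--\eqref{eqn:wbarboundzero}. A minimal scalar sketch confirms this scaling numerically; the constants $\sigma_{11}=1$ and $c=1$ are assumptions for illustration, not values from the analysis.

```python
# Scalar stand-in for the recursion on E|| w_bar_long - w_tilde_low ||^2:
#   x_i = (1 - sigma*mu) * x_{i-1} + c * mu^3
# whose fixed point is c*mu^3 / (sigma*mu) = (c/sigma) * mu^2, i.e. O(mu^2).
def steady_state(mu, sigma=1.0, c=1.0, x0=1.0, iters=200_000):
    x = x0
    for _ in range(iters):
        x = (1.0 - sigma * mu) * x + c * mu ** 3
    return x

# The limiting value scales as mu^2: x_inf / mu^2 tends to c/sigma = 1
# regardless of the initial condition.
ratios = [steady_state(mu) / mu ** 2 for mu in (0.1, 0.01, 0.001)]
```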
\section{Proof of Lemma \ref{lemma:lowrankerrorcov}} \label{app:lowrankerrorcov} Let us examine the evolution of the covariance matrix of $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$, which is defined by \begin{equation} \Theta_i \triangleq \mathbb{E} [\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} ] \end{equation} Using \eqref{eqn:bigRsidef} and \eqref{eqn:martingaledifference}, we get from \eqref{eqn:lowdimensionerrorrecursion} that \begin{equation} \label{eqn:Phirecursiondef} \Theta_i = \mathcal{D} \Theta_{i-1} \mathcal{D} + \mathcal{P}^\mathsf{T} \mathcal{M} [\mathbb{E} \mathcal{R}_{s,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} )] \mathcal{M} \mathcal{P} \end{equation} We next introduce the fixed-point covariance recursion \begin{equation} \label{eqn:Phiorecursiondef} \Theta_i^{{\textrm{fix}}} = \mathcal{D} \Theta_{i-1}^{{\textrm{fix}}} \mathcal{D} + \mathcal{P}^\mathsf{T} \mathcal{M} \mathcal{R}_{s,i}( {\scriptstyle{\mathcal{W}}}^o ) \mathcal{M} \mathcal{P} \end{equation} Let \begin{equation} \label{eqn:DeltaThetaandRsidef} \Delta \Theta_i \triangleq \Theta_i - \Theta_i^{{\textrm{fix}}}, \;\; \Delta \mathcal{R}_{s,i} \triangleq \mathbb{E} \mathcal{R}_{s,i}( {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) - \mathcal{R}_{s,i}({\scriptstyle{\mathcal{W}}}^o) \end{equation} The difference matrix $\Delta \Theta_i$ evolves by the following recursion: \begin{equation} \label{eqn:deltaPhiirecursion} \Delta \Theta_i = \mathcal{D} \Delta \Theta_{i-1} \mathcal{D} + \mathcal{P}^\mathsf{T} \mathcal{M} \Delta \mathcal{R}_{s,i} \mathcal{M} \mathcal{P} \end{equation} We bound the difference matrix $\Delta \mathcal{R}_{s,i}$ by \begin{align} \label{eqn:bounddeltaR} \| \Delta \mathcal{R}_{s,i} \| & \stackrel{(a)}{\le} \mathbb{E} \| \mathcal{R}_{s,i}({\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1}) - \mathcal{R}_{s,i}({\scriptstyle{\mathcal{W}}}^o) 
\| \nonumber \\ & \stackrel{(b)}{\le} \kappa_s \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1} \|^{\gamma_s} \nonumber \\ & \stackrel{(c)}{\le} \kappa_s \left( \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1} \|^4 \right)^{\gamma_s/4} \end{align} where step (a) is by using Jensen's inequality; step (b) is by using \eqref{eqn:lipschitzcovariance} from Assumption \ref{ass:gradienterrors}; and step (c) is by applying Jensen's inequality again to the concave function $x^{\gamma_s/4}$ for $\gamma_s \le 4$ and $x\ge0$. As $i \rightarrow \infty$, we get from \eqref{eqn:bounddeltaR} that \begin{equation} \label{eqn:bounddeltaR2} \limsup_{i\rightarrow\infty} \| \Delta \mathcal{R}_{s,i} \| = O(\mu_{\max}^{\gamma_s/2}) \end{equation} by using \eqref{eqn:4thorderasymptoticbound}. From Eq. (9.286) in \cite[p. 548]{Sayed14NOW}, we have \begin{equation} \label{eqn:boundIQMmumaxH} \| \mathcal{D} \| = \max_m \| D_m \| \le 1 - \sigma \mu_{\max} \end{equation} for some $\sigma > 0$. Using the triangle inequality and the sub-multiplicativity property of norms, we have from \eqref{eqn:deltaPhiirecursion} that \begin{align} \label{eqn:bounddiffphiiandphio} \!\! \| \Delta \Theta_i \| & \le \| \mathcal{D} \Delta \Theta_{i-1} \mathcal{D} \| + \| \mathcal{P}^\mathsf{T} \mathcal{M} \Delta \mathcal{R}_{s,i} \mathcal{M} \mathcal{P} \| \nonumber \\ \!\! & \le \| \mathcal{D} \|^2 \| \Delta \Theta_{i-1} \| + \mu_{\max}^2 \| \mathcal{P} \|^2 \| \Delta \mathcal{R}_{s,i} \| \nonumber \\ \!\! & \le (1 - \sigma \mu_{\max}) \| \Delta \Theta_{i-1} \| + \mu_{\max}^2 \| \mathcal{P} \|^2 \| \Delta \mathcal{R}_{s,i} \| \!\! \end{align} where in the last step we used \eqref{eqn:boundIQMmumaxH} and the fact that $0< 1-\sigma \mu_{\max} < 1$. 
Then, as $i \rightarrow \infty$, we get from \eqref{eqn:bounddeltaR2} and \eqref{eqn:bounddiffphiiandphio} that \begin{equation} \label{eqn:bounddiffphiiandphio2} \limsup_{i\rightarrow\infty} \| \Delta \Theta_i \| \le \sigma^{-1} \mu_{\max} \| \mathcal{P} \|^2 ( \limsup_{i\rightarrow\infty} \| \Delta \mathcal{R}_{s,i} \| ) = O(\mu_{\max}^{1+ \gamma_s/2}) \end{equation} Now, since $\mathcal{D}$ is stable and in view of \eqref{eqn:convergentcovariance}, the fixed-point recursion \eqref{eqn:Phiorecursiondef} converges as $i \rightarrow \infty$. At steady-state, the limit $\Theta_{\infty}^{{\textrm{fix}}} \triangleq \lim_{i \rightarrow \infty} \Theta_i^{{\textrm{fix}}}$ of \eqref{eqn:Phiorecursiondef} satisfies the discrete Lyapunov equation \eqref{eqn:ThetaDTLEdef} by identifying $\Theta \equiv \Theta_{\infty}^{{\textrm{fix}}}$. \section{Proof of Theorem \ref{theorem:blockstructure}} \label{app:block} From Lemmas \ref{lemma:approxerrorrecursion} and \ref{lemma:lowrankapprox}, \begin{align} \label{eqn:winetclosetozi2} & \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \|^2 \nonumber \\ & \le \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} + \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \|^2 \nonumber \\ & \le \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} 2 \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} \|^2 + \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} 2 \mu_{\max}^{-1} \mathbb{E} \| 
\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{long} - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \|^2 \nonumber \\ & = 0 \end{align} Let \begin{equation} \label{eqn:Piilowdef} \Pi_i^\textrm{low} \triangleq \mu_{\max}^{-1} \mathbb{E} \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \end{equation} Then, by Jensen's inequality, \begin{align} \label{eqn:boundcovgap1} \mu_{\max} \| \Pi_i - \Pi_i^\textrm{low} \| & \le \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\mathsf{T} - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \| \nonumber \\ & = \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\mathsf{T} - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\mathsf{T} + \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\mathsf{T} - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \| \nonumber \\ & \le \mathbb{E} \| (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}) \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\mathsf{T} \| + \mathbb{E} \| \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \| \end{align} The second term on the RHS of \eqref{eqn:boundcovgap1} can be bounded by \begin{align} \label{eqn:boundcovgap2} \mathbb{E} \| 
\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \| & = \mathbb{E} \| (\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i + \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i ) (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \| \nonumber \\ & \le \mathbb{E} \| (\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i) (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \| + \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \| \nonumber \\ & = \mathbb{E} \| \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \|^2 + \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low})^\mathsf{T} \| \end{align} Substituting \eqref{eqn:boundcovgap2} into \eqref{eqn:boundcovgap1} yields: \begin{equation} \label{eqn:boundcovgap3} \mu_{\max} \| \Pi_i - \Pi_i^\textrm{low} \| \le 2 \mathbb{E} \| (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}) \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\mathsf{T} \| + \mathbb{E} \| \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \|^2 \end{equation} The first term on the RHS of \eqref{eqn:boundcovgap3} can be bounded by 
\begin{align} \label{eqn:boundcovgap4} \mathbb{E} \| (\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}) \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\mathsf{T} \| & \le \mathbb{E} ( \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \| \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \| ) \nonumber \\ & \le \sqrt{ \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \|^2 \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \|^2 } \end{align} by using the Cauchy-Schwarz inequality. Substituting \eqref{eqn:boundcovgap4} into \eqref{eqn:boundcovgap3} yields: \begin{equation} \label{eqn:boundcovgap5} \| \Pi_i - \Pi_i^\textrm{low} \| \le 2 \sqrt{ \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i - \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \|^2 } \cdot \sqrt{ \mu_{\max}^{-1} \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \|^2 } + \mu_{\max}^{-1} \mathbb{E} \| \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} - \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i \|^2 \end{equation} Using \eqref{eqn:winetclosetozi2} and Theorem \ref{theorem:stability}, it follows from \eqref{eqn:boundcovgap5} that \begin{equation} \label{eqn:boundcovgap6} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \| \Pi_i - \Pi_i^\textrm{low} \| = 0 \end{equation} Noting that $\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ is obtained by extending $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low}$ via \eqref{eqn:zidef} and \eqref{eqn:zqidef}, we have \begin{equation} \label{eqn:boundcovgap7} \mathbb{E} \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{low} 
(\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{n,i}^\textrm{low})^\mathsf{T} = ( \ds{1}_{N_m^g} \ds{1}_{N_n^g}^\mathsf{T} ) \otimes \mathbb{E} \widetilde{\bm{w}}_{m,i}^\textrm{low} (\widetilde{\bm{w}}_{n,i}^\textrm{low})^\mathsf{T} \end{equation} for any $m$ and $n$. From \eqref{eqn:PiinftyequalPhi}, we know that \begin{equation} \label{eqn:boundcovgap8} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i\rightarrow \infty} \| \mu_{\max}^{-1} \mathbb{E} \widetilde{\bm{w}}_{m,i}^\textrm{low} (\widetilde{\bm{w}}_{n,i}^\textrm{low})^\mathsf{T} - \Phi_{m,n} \| = 0 \end{equation} where $\Phi_{m,n}$ denotes the $(m,n)$-th block of $\Phi$ with block size $M \times M$. It follows from \eqref{eqn:boundcovgap7} and \eqref{eqn:boundcovgap8} that \begin{equation} \label{eqn:boundcovgap9} \lim_{\mu_{\max} \rightarrow 0} \! \limsup_{i\rightarrow \infty} \! \| \mu_{\max}^{-1} \mathbb{E} \bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{m,i}^\textrm{low} (\bar{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{n,i}^\textrm{low})^\mathsf{T} \!-\! ( \ds{1}_{N_m^g} \! \ds{1}_{N_n^g}^\mathsf{T} ) \otimes \Phi_{m,n} \| \!=\! 0 \end{equation} Using \eqref{eqn:zidef}, \eqref{eqn:Pinetworkdef}, and \eqref{eqn:Piilowdef}, we get from \eqref{eqn:boundcovgap9} that \begin{equation} \label{eqn:boundcovgap10} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \| \Pi_i^\textrm{low} - \Pi \| = 0 \end{equation} Combining \eqref{eqn:boundcovgap6} and \eqref{eqn:boundcovgap10}, we arrive at \eqref{eqn:Piinftyoblockstructure}. \section{Proof of Lemma \ref{lemma:normallowdimensional}} \label{app:normal} We establish this result by calling upon Theorem 1.1 from \cite[p. 
319]{Kushner03}, which considers a stochastic recursion of the following form: \begin{equation} \label{eqn:SGDdef} \bm{x}_i = \bm{x}_{i-1} + \mu g(\bm{x}_{i-1}) + \mu \bm{v}_i \end{equation} with step-size $\mu > 0$, update vector $g(\bm{x}_{i-1})$, and noise $\bm{v}_i$, satisfying the conditions: \begin{enumerate} \item The function $g(\cdot)$ is continuously differentiable and can be expanded as \begin{equation} g(x) = g(x^o) + [\nabla g(x^o)]^\mathsf{T} (x - x^o) + o(\| x - x^o \|) \end{equation} around a point $x^o$, where $\nabla g(\cdot)$ denotes the Jacobian of $g(\cdot)$, and $o(\cdot)$ is the ``small-$o$'' notation that represents higher order terms. \item It holds that $x^o$ is the unique point that satisfies: \begin{equation} \label{eqn:gradientbezero} g(x^o) = 0 \end{equation} \item The Jacobian $A \triangleq \nabla g(x^o)$ is a Hurwitz matrix (i.e., the real parts of the eigenvalues of $A$ are negative). \item The noise process $\{\bm{v}_i; i\ge0\}$ is a martingale difference, i.e., \begin{equation} \label{eqn:martingale} \mathbb{E} (\bm{v}_i | \mathbb{F}_{i-1} ) = 0 \end{equation} where $\mathbb{F}_{i-1}$ is the filtration defined by $\{ \bm{x}_i; i\ge 0\}$. \item The noise $\bm{v}_i$ has an asymptotically bounded moment of order higher than 2, namely, \begin{equation} \label{eqn:boundedvar} \lim_{\mu \rightarrow 0} \limsup_{i \rightarrow \infty} \mathbb{E} \| \bm{v}_i \|^{2+p} < \infty \end{equation} for some $p>0$. 
\item The covariance matrices of the noise process $\{ \bm{v}_i; i\ge 0 \}$ converge to a positive semi-definite matrix $\Sigma \ge 0$: \begin{equation} \lim_{\mu \rightarrow 0} \limsup_{i \rightarrow \infty} \| \mathbb{E} \bm{v}_i \bm{v}_i^\mathsf{T} - \Sigma \| = 0 \end{equation} \end{enumerate} Under these conditions, it holds that as $i\rightarrow\infty$ and $\mu \rightarrow 0$ asymptotically, the normalized sequence $\{ (\bm{x}_i - x^o)/\sqrt{\mu} \}$ converges weakly to a Gaussian distribution with zero mean and covariance matrix $C$, where $C$ is the unique solution to the continuous Lyapunov equation $A C + C A^\mathsf{T} + \Sigma = 0$. These conditions are satisfied by our recursion \eqref{eqn:lowdimensionerrorrecursionnew} by identifying $\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} \equiv \bm{x}_i$, $\mu_{\max} \equiv \mu$, $-\bar{\mathcal{H}} \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1}^\textrm{low} \equiv g(\bm{x}_{i-1})$, $\bm{v}_i \equiv \bar{\bm{s}}_i$. First, since $\bar{\mathcal{H}}$ is positive-definite by \eqref{eqn:barbigHdef} and \eqref{eqn:barHmdef}, it is obvious that $x^o = 0$ is the unique point satisfying \eqref{eqn:gradientbezero}. Second, since $g(x) = -\bar{\mathcal{H}} x$ and $x^o = 0$, condition 1) holds automatically with $[\nabla g(x^o)]^\mathsf{T} = -\bar{\mathcal{H}}$. Third, it is easy to recognize that $A \equiv -\bar{\mathcal{H}}$ is Hurwitz since $\bar{\mathcal{H}}$ is positive-definite. Fourth, by \eqref{eqn:martingaledifference} from Assumption \ref{ass:gradienterrors}, condition \eqref{eqn:martingale} holds.
Fifth, by \eqref{eqn:bounded4thorder} from Assumption \ref{ass:gradienterrors}, we have \begin{align} \label{eqn:boundbarsi4thorder} \mathbb{E} \| \bar{\bm{s}}_i \|^4 & \le \| \mathcal{P} \|^4 \mathbb{E} \| {\scriptstyle{\boldsymbol{\mathcal{S}}}}_i( {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} ) \|^4 \nonumber \\ & \le \| \mathcal{P} \|^4 ( \alpha^2 \mathbb{E} \| \widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_{i-1} \|^4 + \sigma_s^4 ) \end{align} Using Theorem \ref{theorem:stability}, we get from \eqref{eqn:boundbarsi4thorder} that \begin{equation} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \mathbb{E} \| \bar{\bm{s}}_i \|^4 \le \| \mathcal{P} \|^4 ( O(\mu_{\max}^2) + \sigma_s^4 ) < \infty \end{equation} which satisfies condition \eqref{eqn:boundedvar}. Sixth, we have from \eqref{eqn:barsidef} and \eqref{eqn:bigRsidef} that \begin{equation} \label{eqn:covbarsi} \mathbb{E} \bar{\bm{s}}_i \bar{\bm{s}}_i^\mathsf{T} = \mu_{\max}^{-2} \mathcal{P}^\mathsf{T} \mathcal{M} \mathbb{E} \mathcal{R}_{s,i}( {\scriptstyle{\boldsymbol{\mathcal{W}}}}_{i-1} ) \mathcal{M} \mathcal{P} \end{equation} Let \begin{equation} \Sigma_i \triangleq \mu_{\max}^{-2} \mathcal{P}^\mathsf{T} \mathcal{M} \mathcal{R}_{s,i}({\scriptstyle{\mathcal{W}}}^o) \mathcal{M} \mathcal{P} \end{equation} Then, using Jensen's inequality and \eqref{eqn:lipschitzcovariance} from Assumption \ref{ass:gradienterrors}, we have from \eqref{eqn:covbarsi} that \begin{equation} \label{eqn:EbarsicovSigmaigap} \| \mathbb{E} \bar{\bm{s}}_i \bar{\bm{s}}_i^\mathsf{T} - \Sigma_i \| \le \| \mathcal{P} \|^2 \| \Delta \mathcal{R}_{s,i} \| \end{equation} where $\Delta \mathcal{R}_{s,i}$ is from \eqref{eqn:DeltaThetaandRsidef}. 
Using \eqref{eqn:bounddeltaR2}, we further get \begin{equation} \label{eqn:EbarsicovSigmaigap2} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \| \mathbb{E} \bar{\bm{s}}_i \bar{\bm{s}}_i^\mathsf{T} - \Sigma_i \| = 0 \end{equation} Using \eqref{eqn:convergentcovariance}, we have \begin{equation} \label{eqn:EbarsicovSigmaigap3} \lim_{i\rightarrow\infty} \Sigma_i = \mu_{\max}^{-2} \mathcal{P}^\mathsf{T} \mathcal{M} \mathcal{R}_s \mathcal{M} \mathcal{P} = \bar{\mathcal{R}} \ge 0 \end{equation} where $\bar{\mathcal{R}}$ is from \eqref{eqn:Rsdef}. It follows from \eqref{eqn:EbarsicovSigmaigap2} and \eqref{eqn:EbarsicovSigmaigap3} that \begin{align} \label{eqn:EbarsicovSigmaigap4} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \| \mathbb{E} \bar{\bm{s}}_i \bar{\bm{s}}_i^\mathsf{T} - \bar{\mathcal{R}} \| = 0 \end{align} Therefore, we conclude that the sequence $\{\widetilde{{\scriptstyle{\boldsymbol{\mathcal{W}}}}}_i^\textrm{low} /\sqrt{\mu_{\max}}; i \ge 0\}$ converges weakly to the Gaussian random variable with zero mean and covariance matrix $\Phi$ that satisfies \eqref{eqn:continuoustimeLyapunovEqn}. \section{Proof of Lemma \ref{lemma:convergence}} \label{app:convergence} We follow an argument similar to the proof of Theorem 2 from \cite[p. 256]{Shiryaev80} (which proves the result that convergence in moments implies convergence in distribution). Let $| f(x) | \le c$, i.e., bounded. Because a continuous function $f(x)$ is also \emph{uniformly} continuous in any \emph{bounded} region \cite[p. 54]{Shiryaev80}, for \emph{any} constant $\epsilon > 0$ \emph{and} for \emph{any} constant $b > 0$, there exists some $\delta_{\epsilon, b} > 0$, which depends on the choices of $\epsilon$ \emph{and} $b$, such that $|f(x) - f(y)| < \epsilon$ for $\| x \| < b$ \emph{and} $\| x - y \| < \delta_{\epsilon, b}$. 
Now, setting $b \triangleq \sqrt{ 2 c \sigma^2 / \epsilon } > 0$, where $\sigma^2$ is from \eqref{eqn:convergencesigma}, and using conditional expectations, we have \begin{align} \label{eqn:boundmsgapzetaandeta} \mathbb{E} | f(\bm{\zeta}_i) - f(\bm{\eta}_i) | & = \mathbb{E} [ | f(\bm{\zeta}_i) - f(\bm{\eta}_i) | \; | \; \| \bm{\zeta}_i - \bm{\eta}_i \| < \delta_{\epsilon, b}, \| \bm{\zeta}_i \| < b ] \cdot {\mathbb{P}}[\| \bm{\zeta}_i - \bm{\eta}_i \| < \delta_{\epsilon, b}, \| \bm{\zeta}_i \| < b] \nonumber \\ & \;\; + \mathbb{E} [ | f(\bm{\zeta}_i) - f(\bm{\eta}_i) | \; | \; \| \bm{\zeta}_i - \bm{\eta}_i \| < \delta_{\epsilon, b}, \| \bm{\zeta}_i \| \ge b ] \cdot {\mathbb{P}}[\| \bm{\zeta}_i - \bm{\eta}_i \| < \delta_{\epsilon, b}, \| \bm{\zeta}_i \| \ge b] \nonumber \\ & \;\; + \mathbb{E} [ | f(\bm{\zeta}_i) - f(\bm{\eta}_i) | \; | \; \| \bm{\zeta}_i - \bm{\eta}_i \| \ge \delta_{\epsilon, b} ] \cdot {\mathbb{P}}[\| \bm{\zeta}_i - \bm{\eta}_i \| \ge \delta_{\epsilon, b} ] \end{align} The first term on the RHS of \eqref{eqn:boundmsgapzetaandeta} is bounded by \begin{equation} \label{eqn:1stterm} \mbox{1st term} \le \mathbb{E} [ \epsilon \; | \; \| \bm{\zeta}_i - \bm{\eta}_i \| < \delta, \| \bm{\zeta}_i \| < b ] \times 1 = \epsilon \end{equation} Using the fact that $|f(x) - f(y)| \le |f(x)| + |f(y)| \le 2c$, and also the fact that the joint probability is bounded by any one of the marginal probabilities, i.e., ${\mathbb{P}}[A \cap B] \le {\mathbb{P}}[A]$ for any two events $A$ and $B$, the second term on the RHS of \eqref{eqn:boundmsgapzetaandeta} is bounded by \begin{equation} \label{eqn:2ndterm} \mbox{2nd term} \le 2c \, {\mathbb{P}}[\| \bm{\zeta}_i \| \ge b] \le \frac{2c \, \mathbb{E} \| \bm{\zeta}_i \|^2}{b^2} = \frac{\epsilon \, \mathbb{E} \| \bm{\zeta}_i \|^2}{\sigma^2} \end{equation} where we used Chebyshev's inequality \cite[p. 47]{Shiryaev80}. 
Likewise, the third term on the RHS of \eqref{eqn:boundmsgapzetaandeta} is bounded by \begin{equation} \label{eqn:3rdterm} \mbox{3rd term} \le 2c \, {\mathbb{P}}[\| \bm{\zeta}_i - \bm{\eta}_i \| \ge \delta ] \le \frac{2c \, \mathbb{E} \| \bm{\zeta}_i - \bm{\eta}_i \|^2}{\delta^2} \end{equation} Now, substituting \eqref{eqn:1stterm}--\eqref{eqn:3rdterm} into \eqref{eqn:boundmsgapzetaandeta}, we have \begin{equation} \mathbb{E} | f(\bm{\zeta}_i) - f(\bm{\eta}_i) | \le \epsilon + \frac{\epsilon \, \mathbb{E} \| \bm{\zeta}_i \|^2}{\sigma^2} + \frac{2c \, \mathbb{E} \| \bm{\zeta}_i - \bm{\eta}_i \|^2}{\delta^2} \end{equation} Using \eqref{eqn:convergencemoments} and \eqref{eqn:convergencesigma}, we end up with \begin{equation} \label{eqn:boundmsgapzetaandeta2} \lim_{\mu_{\max} \rightarrow 0} \limsup_{i \rightarrow \infty} \mathbb{E} | f(\bm{\zeta}_i) - f(\bm{\eta}_i) | \le 2\epsilon \end{equation} Since $\epsilon$ is arbitrary, result \eqref{eqn:convergenceweakly} follows from \eqref{eqn:boundmsgapzetaandeta2}. \section{Proof of \eqref{eqn:chikl2testvar}} \label{app:moments} To simplify the notation, we drop the subscript of $\bm{d}_{k,\ell}$ and denote its mean by $\bar{d} \triangleq \mathbb{E} \bm{d}$ and its covariance by $C \triangleq \mathbb{E} (\bm{d} - \bar{d})(\bm{d} - \bar{d})^\mathsf{T}$. 
Since $\bm{d}$ is Gaussian, it holds that \begin{align} \label{eqn:Ed4} \mathbb{E} \| \bm{d} \|^4 & = \mathbb{E} \| \bm{d} - \bar{d} + \bar{d} \|^4 \nonumber \\ & = \mathbb{E} [ \| \bm{d} - \bar{d} \|^2 + 2 (\bm{d} - \bar{d})^\mathsf{T} \bar{d} + \| \bar{d} \|^2 ]^2 \nonumber \\ & = \mathbb{E} \| \bm{d} - \bar{d} \|^4 + 2 \mathbb{E} \| \bm{d} - \bar{d} \|^2 \| \bar{d} \|^2 + \| \bar{d} \|^4 + 4 \bar{d}^\mathsf{T} \mathbb{E} [(\bm{d} - \bar{d}) (\bm{d} - \bar{d})^\mathsf{T}] \bar{d} \nonumber \\ & = \mathbb{E} \| \bm{d} - \bar{d} \|^4 + 2 {\mathrm{Tr}}(C) \| \bar{d} \|^2 + \| \bar{d} \|^4 + 4 \| \bar{d} \|_C^2 \end{align} where we used the fact that the odd-order moments of $\bm{d} - \bar{d}$ are zero. Likewise, \begin{align} \label{eqn:ed22} ( \mathbb{E} \| \bm{d} \|^2 )^2 & = ( \mathbb{E} \| \bm{d} - \bar{d} + \bar{d} \|^2 )^2 \nonumber \\ & = ( \mathbb{E} \| \bm{d} - \bar{d} \|^2 + \| \bar{d} \|^2 )^2 \nonumber \\ & = [{\mathrm{Tr}}(C)]^2 + 2 {\mathrm{Tr}}(C) \| \bar{d} \|^2 + \| \bar{d} \|^4 \end{align} From \eqref{eqn:Ed4} and \eqref{eqn:ed22}, we have \begin{equation} \label{eqn:eded2} \mathbb{E} \| \bm{d} \|^4 - ( \mathbb{E} \| \bm{d} \|^2 )^2 = \mathbb{E} \| \bm{d} - \bar{d} \|^4 - [{\mathrm{Tr}}(C)]^2 + 4 \| \bar{d} \|_C^2 \end{equation} From Lemma A.2 of \cite[p. 11]{Sayed08}, it can be verified that \begin{equation} \label{eqn:edd4} \mathbb{E} \| \bm{d} - \bar{d} \|^4 = [{\mathrm{Tr}}(C)]^2 + 2 {\mathrm{Tr}}(C^2) \end{equation} Substituting \eqref{eqn:edd4} into \eqref{eqn:eded2} yields: \begin{equation} \mathbb{E} \| \bm{d} \|^4 - ( \mathbb{E} \| \bm{d} \|^2 )^2 = 2 {\mathrm{Tr}}(C^2) + 4 \| \bar{d} \|_C^2 \end{equation} \bibliographystyle{IEEEbib}
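The final identity, $\mathbb{E}\|\bm{d}\|^4 - (\mathbb{E}\|\bm{d}\|^2)^2 = 2\,{\mathrm{Tr}}(C^2) + 4\|\bar{d}\|_C^2$, lends itself to a quick Monte-Carlo sanity check. In the sketch below, the mean vector, covariance factor, sample size, and tolerance are arbitrary illustrative choices, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mean d_bar and covariance C = L L^T (illustrative values only).
d_bar = np.array([1.0, -0.5, 2.0])
L = np.array([[1.0, 0.0, 0.0],
              [0.4, 0.8, 0.0],
              [-0.2, 0.3, 0.6]])
C = L @ L.T

# Sample d ~ N(d_bar, C) and estimate E||d||^4 - (E||d||^2)^2.
n = 1_000_000
d = rng.multivariate_normal(d_bar, C, size=n)
sq = np.einsum('ij,ij->i', d, d)          # ||d||^2 for each sample
empirical = (sq ** 2).mean() - sq.mean() ** 2

# Closed-form value from the identity: 2 Tr(C^2) + 4 d_bar^T C d_bar.
closed_form = 2.0 * np.trace(C @ C) + 4.0 * d_bar @ C @ d_bar
```

With a sample of this size, the empirical estimate agrees with the closed form to within a few percent.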
\section{Introduction} Our Solar System is currently immersed in a warm, partially ionized, diffuse interstellar medium (DISM), called the local interstellar cloud (LIC), consisting of gas and dust. The motion of the Sun relative to the LIC results in a unidirectional wind of the LIC materials toward the Sun and the formation of the heliospheric boundary due to the interaction between the interstellar wind and the solar wind \citep{bertaux-blamont1971,frisch-et-al1999}. While interstellar neutral atoms and large dust particles penetrate the heliospheric boundary and are detected inside the heliosphere, interstellar ions and tiny dust particles are filtered out at the boundary \citep{kimura-mann1998,kimura-mann1999,linde-gombosi2000,czechowski-mann2003}. In the 1970s, possible detections of LIC dust were reported by \citet{bertaux-blamont1976} through an analysis of impact data from capacitor-type detectors onboard the Meteoroid Technology Satellite and by \citet{wolf-et-al1976} from the multi-coincidence microparticle sensing system onboard Pioneer 8 and 9 \citep[see also][]{mcdonnell-berg1975}. In the late part of the 20th century, the stream of dust particles from the LIC was unambiguously recorded by impact ionization dust detectors onboard Ulysses, Galileo, Hiten, Nozomi, and Cassini \citep{gruen-et-al1994,gruen-et-al1997,svedhem-et-al1996,sasaki-et-al2007,altobelli-et-a2003}. For a thorough review of LIC dust measurements performed during the 20th century, see \citet{mann-kimura2000}. In the last decade, our understanding of LIC dust has been greatly improved by data analyses of recent space missions, data mining of previous space missions, elaborate numerical simulations of dust dynamics, and comprehensive studies of gas depletion measurements. An analysis of Helios in-situ dust data has revealed that the time-of-flight mass spectra of interstellar dust are dominated by silicates and iron \citep{altobelli-et-a2006}. 
NASA's Stardust mission was successful in capturing a collection of LIC dust particles and bringing them back to Earth for a thorough analysis in a laboratory \citep{frank-et-al2014}. The Stardust samples of LIC dust are mineral grains, but they show no clear evidence for the presence of organic refractory material in the LIC \citep{bechtel-et-al2014,westphal-et-al2014b}. The absence of organic refractory materials in a near-Earth orbit is not a complete surprise because the organic-rich population of LIC dust may not be able to easily reach the inner Solar System against high radiation pressure repulsion from the Sun \citep{landgraf1999,kimura-et-al2003b,sterken-et-al2012}. The impact ionization time-of-flight mass spectrometer onboard Cassini recorded signals of 36 LIC dust grains in the proximity of Saturn and revealed that LIC dust is mainly composed of Mg-rich silicates; some of them are mixed with Fe-bearing metals and/or oxides, but not with organic compounds \citep{altobelli-et-al2016}. Since organic-rich carbonaceous grains could penetrate to the orbit of Saturn, they should have been detected by Cassini if they existed. Nevertheless, we cannot exclude the possibility that organic-rich carbonaceous grains are simply smaller than the detection threshold of the Cassini dust analyzer (i.e., grain radius of $a \sim 20~\mathrm{nm}$). The most recent model of interstellar dust by \citet{jones-et-al2013} suggests that carbonaceous grains in the DISM are shattered into tiny grains of $a < 20~\mathrm{nm}$ by interstellar shocks. However, the heavy depletion of organic materials in submicrometer-sized grains looks incongruous since there is no trace of such a strong shock in the LIC \citep{kimura2015}. The gas-depletion measurements of the LIC by the Hubble Space Telescope (HST) indicate that LIC dust contains refractory organic material consisting of C, H, O, and N \citep{kimura-et-al2003a,kimura-et-al2003b,kimura2015}. 
There is, therefore, a remarkable discrepancy that a significant organic-rich population of interstellar dust exists in the LIC, but it has never been identified inside the Solar System. Another conundrum emerges from the composition of gas in the LIC determined by in-situ measurements of interstellar pickup ions in the inner Solar System. The chemical composition of the LIC derived from pickup-ion measurements indicates that no N atoms are incorporated into LIC dust \citep{gloeckler-geiss2004}. This contradicts the fact that the HST measurements of gas absorption lines along the line of sight toward nearby stars have revealed the depletion of N atoms in the gas phase of the LIC \citep{wood-et-al2002,kimura-et-al2003a}. Moreover, nearly half of the O atoms are depleted in the gas phase of the LIC, contrary to the pickup-ion measurements indicating that the majority of the O atoms reside in the gas phase \citep{gloeckler-geiss2004,kimura2015}. As a result, the discrepancy between the pickup ion measurements and the gas absorption measurements remains a deep mystery. The purpose of this study is to solve the conundrums of the missing organic materials in interstellar dust streaming into the Solar System from the LIC. The most reasonable hypothesis would be the loss of organic refractory materials from interstellar dust en route to the inner Solar System. Here, we show that all the measurements of the LIC materials are in harmony if the sublimation of organic compounds from LIC dust proceeds from exothermic reactions in the organic substances. \section{Model} We propose that the missing organics problems can be solved if the elements forming organic compounds (i.e., C, H, O, and N) desorb from LIC dust near the Sun. There are several energetic processes that could help C, H, O, and N to desorb from organic refractory material in the interstellar medium \citep{baragiola-et-al2005,collings-mccoustra2012}. 
Among these processes, we consider that exothermic reactions would release sufficient energy to sublimate organic materials when interstellar dust is heated by solar radiation to a temperature that is high enough to trigger the reactions. Exothermic reactions are associated with a spontaneous release of energy by the recombination of reactive atoms and molecules in the organic materials. Laboratory experiments on the recombination of free radicals or the rearrangement of carbon bonds have revealed that such exothermic reactions are accompanied by explosive events \citep{dhendecourt-et-al1982,benit-roessler1993,wakabayashi-et-al2004}. As a typical picture of LIC dust, hereafter, we consider a particle that has a radius of $a=0.1~\mathrm{\mu m}$ and consists of amorphous silicates in the core of the particle and organic material in the mantle \citep{li-greenberg1997}. The mass fraction $x$ of organic material in LIC dust is $x = 0.39$, based on the most plausible assignment of dust-forming elements to the composition of dust in the LIC (see Appendix~\ref{appendix_a}). The mass of the particle, $m_\mathrm{p}$, is given by $m_\mathrm{p} = \left({4 / 3}\right) \pi a^{3} \left[{x / \rho_\mathrm{or} +\left({1-x}\right) / \rho_\mathrm{sil}}\right]^{-1}$ where $\rho_\mathrm{sil}$ is the density of the silicate core and $\rho_\mathrm{or}$ is the density of the organic mantle. By assuming $\rho_\mathrm{sil} = 3.5 \times {10}^{3}~\mathrm{kg~m^{-3}}$ and $\rho_\mathrm{or} = 1.8 \times {10}^{3}~\mathrm{kg~m^{-3}}$, we obtain $m_\mathrm{p} \simeq 1.1\times{10}^{-17}~\mathrm{kg}$ for the core-mantle particles with $a=0.1~\mathrm{\mu m}$ \citep[cf.][]{li-greenberg1997}. 
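As a quick numerical check of the particle mass quoted above, the core-mantle mass formula can be evaluated directly. This is an illustrative sketch; all parameter values are taken from the text, and the script itself is not part of the original model description.

```python
import math

# Evaluate m_p = (4/3) pi a^3 [x/rho_or + (1-x)/rho_sil]^(-1)
# using the parameter values quoted in the text.
a = 0.1e-6        # grain radius [m]
x = 0.39          # mass fraction of the organic mantle
rho_sil = 3.5e3   # density of the silicate core [kg m^-3]
rho_or = 1.8e3    # density of the organic mantle [kg m^-3]

m_p = (4.0 / 3.0) * math.pi * a**3 / (x / rho_or + (1.0 - x) / rho_sil)
print(f"m_p = {m_p:.2e} kg")  # close to the quoted 1.1e-17 kg
```

The bracketed term is the inverse of the effective grain density, so the expression is simply volume times mean density of the two-component grain.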
The energy, $\varepsilon$, required to raise a dust particle to the temperature, $T$, from the triggering temperature of exothermic chain reactions, $T_\mathrm{trig}$, is given by \begin{eqnarray} \varepsilon\left({T}\right) = m_\mathrm{p} \int_{T_\mathrm{trig}}^{T} C_\mathrm{p}\left({T'}\right) \,dT', \label{required-energy} \end{eqnarray} where $C_\mathrm{p}\left({T}\right)$ is the specific heat of the particle at the temperature $T$ \citep{leger-et-al1985,sorrell2001,duley-williams2011}. The specific heat at $T=298~\mathrm{K}$ lies in the range of $C_\mathrm{or} = 0.7$--$1.9~\mathrm{kJ~kg^{-1}~K^{-1}}$ for organic materials and $C_\mathrm{sil} = 0.74$--$0.86~\mathrm{kJ~kg^{-1}~K^{-1}}$ for silicate materials \citep{domalski-hearing1990,campbell-norman1998,winter-saari1969,zeller-pohl1971,krishnaiah-et-al2004}. Accordingly, we estimate the specific heat of a silicate-core, organic-mantle particle to be $C_\mathrm{p} = 1.0~\mathrm{kJ~kg^{-1}~K^{-1}}$ at $T=298~\mathrm{K}$ using $C_\mathrm{p} = x\,C_\mathrm{or} + \left({1-x}\right) C_\mathrm{sil}$ \citep{senshu-et-al2002}. For the sake of simplicity, we may approximate the dependence of specific heat on the temperature by $C_\mathrm{p}\left({T}\right) \propto T$ in the temperature range of interest \citep[cf.][]{sharp-ginther1951,wong-westrum1971,kay-goit1975,richet-et-al1982}. We would like to point out that desorption of organic forming elements only takes place if $\varepsilon$ is less than the total energy $E$ released by exothermic reactions. 
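The required energy of Eq. (1) per unit mass can be sketched in closed form under the linear approximation $C_\mathrm{p}(T) \propto T$ adopted above. Anchoring the normalization at the 298 K value of $C_\mathrm{p}$ is an assumption of this sketch, not a statement from the text.

```python
# Sketch of eps(T)/m_p = integral of C_p(T') dT' from T_trig to T,
# assuming C_p(T) = C_p(298 K) * T / 298 K (linear approximation).
C_298 = 1.0e3  # specific heat at 298 K [J kg^-1 K^-1], from the text


def required_energy(T, T_trig):
    """Energy per unit grain mass [J/kg] to heat from T_trig to T."""
    return C_298 / 298.0 * (T**2 - T_trig**2) / 2.0


# Heating from either triggering temperature up to 600 K needs well
# below 1500 kJ/kg, the minimum energy released by exothermic reactions.
for T_trig in (50.0, 100.0):
    eps = required_energy(600.0, T_trig)
    print(f"T_trig = {T_trig:>5.1f} K: eps/m = {eps / 1e3:.0f} kJ/kg")
```

Because the integrand is linear in $T'$, the integral reduces to a difference of squares, which is why the result is nearly independent of $T_\mathrm{trig}$ once the final temperature is large.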
The typical energy per unit mass released by exothermic reactions was derived from laboratory experiments to be $E/m = 11200$--$18300~\mathrm{kJ~kg^{-1}}$ for irradiated solid methane, $E/m \ga 1500~\mathrm{kJ~kg^{-1}}$ for UV photolyzed ices, $E/m = 3200$--$4000~\mathrm{kJ~kg^{-1}}$ for frozen carbon molecules and noble-gas atoms, and $E/m = 6400$--$8200~\mathrm{kJ~kg^{-1}}$ for amorphous carbon \citep{carpenter1987,shabalin1997,schutte-greenberg1991,wakabayashi-et-al2004,yamaguchi-wakabayashi2004,tanaka-et-al2010}. Hereafter, we consider the total energy per unit mass of organic refractory material released by exothermic reactions in the range of $E/m \ge 1500~\mathrm{kJ~kg^{-1}}$. Although we cannot specify the triggering temperature $T_\mathrm{trig}$ without details on reaction kinetics, it should be an equilibrium temperature at a heliocentric distance that is larger than the Saturnian orbit around the Sun. The equilibrium temperature of a dust particle is calculated by \begin{eqnarray} \Gamma_\mathrm{abs} = \Gamma_\mathrm{rad} + \Gamma_\mathrm{sub}, \label{energy-balance} \end{eqnarray} where $\Gamma_\mathrm{abs}$, $\Gamma_\mathrm{rad}$, and $\Gamma_\mathrm{sub}$ denote the heating rate of a dust particle by solar radiation, the cooling rate of a dust particle by thermal radiation, and the cooling rate of a dust particle by sublimation, respectively \citep[see, e.g.,][]{mukai-mukai1973,lamy1974}. The heating rate of a dust particle by solar radiation is given by \begin{eqnarray} \Gamma_\mathrm{abs} = \pi {\left({\frac{R_\sun}{r}}\right)}^{2} \int_{0}^{\infty} C_\mathrm{abs}\left({m^\ast, \lambda}\right) B_\sun\left({\lambda}\right) \,d\lambda , \end{eqnarray} where $C_\mathrm{abs}\left({m^\ast, \lambda}\right)$ is the cross section of absorption at a wavelength of $\lambda$ and the complex refractive index of $m^\ast$, $B_\sun\left({\lambda}\right)$ is the solar radiance, and $r$ is the heliocentric distance \citep{kimura-mann1998,kimura-et-al2002}. 
Computations of $C_\mathrm{abs}\left({m^\ast, \lambda}\right)$ for a silicate-core, organic-mantle particle were performed in the framework of the Mie theory \citep{aden-kerker1951,bohren-huffman1983}. Complex refractive indices $m^\ast$ of the organic material and of the amorphous silicate necessary for the computations are taken from \citet{li-greenberg1997} and \citet{scott-duley1996}, respectively. The cooling rate of a dust particle by thermal radiation is given by \begin{eqnarray} \Gamma_\mathrm{rad} = 4 \pi \int_{0}^{\infty} C_\mathrm{abs}\left({m^\ast, \lambda}\right) B\left({\lambda, T}\right) \,d\lambda, \label{radiative-cooling} \end{eqnarray} where $B\left({\lambda, T}\right)$ is the Planck function at a temperature of $T$ \citep{kimura-et-al1997,kimura-et-al2002,li-greenberg1998}. The cooling rate of a dust particle by sublimation is given by \begin{eqnarray} \Gamma_\mathrm{sub} = S\,\sqrt{\frac{M_\mathrm{or} u}{2\pi k_\mathrm{B}T}}\,p\left({T}\right) L, \label{sublimation-cooling} \end{eqnarray} where $k_\mathrm{B}$ and $u$ are the Boltzmann constant and the atomic mass unit, $S$ is the surface area of the particle, and $L$ and $M_\mathrm{or}$ are the latent heat of sublimation and the molecular weight of organic materials \citep{kimura-et-al1997,kimura-et-al2002,kobayashi-et-al2009}. The vapor pressure $p\left({T}\right)$ is described by the Clausius-Clapeyron relation as \begin{eqnarray} p\left({T}\right) = \exp\left({-\frac{M_\mathrm{or} u}{k_\mathrm{B}T}\,L+b}\right), \end{eqnarray} where $b = \ln p\left({\infty}\right)$ is a constant. Here we represent the thermodynamic properties of interstellar organic materials by those of hexamethylenetetramine (HMT), which is an organic refractory residue from interstellar ice analogs \citep{briani-et-al2013}. Accordingly, we insert $M_\mathrm{or} = 140$, $L = 5.62 \times {10}^{5}~\mathrm{J\,{kg}^{-1}}$, and $e^{b}=4.24 \times{10}^{12}~\mathrm{Pa}$ into Eq.~(\ref{sublimation-cooling}). 
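The Clausius-Clapeyron vapor pressure and the sublimation cooling rate of Eq. (5) can be evaluated directly from the parameters just given. The following sketch uses standard physical constants and the hexamethylenetetramine parameters quoted in the text; the three evaluation temperatures are illustrative choices, not values from the paper.

```python
import math

# Physical constants (CODATA values) and grain/material parameters from the text.
k_B = 1.380649e-23       # Boltzmann constant [J K^-1]
u = 1.66053906660e-27    # atomic mass unit [kg]
a = 0.1e-6               # grain radius [m]
S = 4.0 * math.pi * a**2 # grain surface area [m^2]
M_or = 140.0             # molecular weight of the organic material
L = 5.62e5               # latent heat of sublimation [J kg^-1]
b = math.log(4.24e12)    # ln p(inf), with p in Pa


def vapor_pressure(T):
    """Clausius-Clapeyron vapor pressure [Pa]."""
    return math.exp(-M_or * u * L / (k_B * T) + b)


def gamma_sub(T):
    """Sublimation cooling rate of Eq. (5) [W]."""
    return S * math.sqrt(M_or * u / (2.0 * math.pi * k_B * T)) \
        * vapor_pressure(T) * L


for T in (200.0, 300.0, 400.0):
    print(f"T = {T:.0f} K: p = {vapor_pressure(T):.2e} Pa, "
          f"Gamma_sub = {gamma_sub(T):.2e} W")
```

The exponential vapor-pressure term dominates the weak $T^{-1/2}$ prefactor, so the sublimation cooling rate rises extremely steeply with temperature, which is why it overtakes radiative cooling above a well-defined threshold.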
\section{Results} \begin{figure} \centering \includegraphics[width=\hsize]{f1} \caption{Final temperature of a grain raised by the release of an energy $\varepsilon\left({T}\right)$ per unit mass of organic matter. Solid curve: the triggering temperature of 100~K. Dashed curve: the triggering temperature of 50~K. The shaded area indicates the range of energy $E$ released from a unit mass of organic matter by exothermic chain reactions. } \label{fig1} \end{figure} Figure~\ref{fig1} shows how the energy required to heat up a unit mass of organic matter to the final temperature, $\varepsilon\left({T}\right)/m$, changes with the temperature $T$ (see Eq.~(\ref{required-energy})). Since we cannot specify at which temperature exothermic reactions are triggered, we plotted the results with the required energy for two triggering temperatures: $T_\mathrm{trig} = 50~\mathrm{K}$ (dashed line) and $100~\mathrm{K}$ (solid line). Also, the range of energies released by exothermic reactions through the recombination of free radicals or the rearrangement of carbon bonds is enclosed by a shaded area. If the temperature of sublimation is higher than approximately $200~\mathrm{K}$, then the energy required to raise a dust particle to the temperature of sublimation is almost independent of the triggering temperature. The total energy released by exothermic reactions is sufficient to heat a dust particle up to the temperature of $T \ga 600~\mathrm{K,}$ regardless of the triggering temperature. \begin{figure} \centering \includegraphics[width=\hsize]{f2} \caption{Cooling rates for a grain with a radius of $a=0.1~\mu$m as a function of the grain temperature $T$. Solid curve: sublimation cooling rate $\Gamma_\mathrm{sub}$. Dashed curve: radiative cooling rate $\Gamma_\mathrm{rad}$. The shaded area indicates the range of the final temperature attained by the released energy of $E/m \ge 1500~\mathrm{kJ~kg^{-1}}$ (see Fig.~\ref{fig1}). 
} \label{fig2} \end{figure} In the range of temperatures where sublimation dominates the cooling process, $\Gamma_\mathrm{sub}$ should exceed the cooling rate by radiation, $\Gamma_\mathrm{rad}$ \citep{leger-et-al1985}. Figure~\ref{fig2} shows the sublimation cooling rate $\Gamma_\mathrm{sub}$ and the radiative cooling rate $\Gamma_\mathrm{rad}$ for grains with $a=0.1~\mathrm{\mu m}$ as a function of temperature $T$ (see Eqs.~(\ref{radiative-cooling})--(\ref{sublimation-cooling})). Our results show that sublimation dominates over radiation for cooling the grains when the temperature of the grains exceeds $T=270~\mathrm{K}$. \begin{figure} \centering \includegraphics[width=\hsize]{f3} \caption{Equilibrium temperature of a grain with a radius of $0.1~\mathrm{\mu m}$ as a function of heliocentric distance (solid line). The dashed line indicates the blackbody temperature. The shaded area indicates the filtration region where a portion of neutral atoms is converted to ions by the charge exchange with ions. } \label{fig3} \end{figure} In Fig.~\ref{fig3}, we plotted the equilibrium temperature of a dust particle that has $a = 0.1~\mathrm{\mu m}$ as a function of heliocentric distance, computed by Eq.~(\ref{energy-balance}) in the framework of the Mie theory (solid line). For comparison, the blackbody temperature of an airless body illuminated by the Sun is plotted as a dashed line. The filtration region, where a portion of neutral atoms is converted to ions by a charge exchange with ions, is enclosed by a shaded area. The temperature of the particle beyond Saturn is kept below 200~K, while the blackbody temperature is approximately 90~K in the Saturnian orbit. The particle attains a temperature of 106~K at 50~AU from the Sun, which is the outer edge of the Kuiper belt, and 47~K at 500~AU, which corresponds to the orbit of inner Oort cloud objects (sednoids). 
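The dashed blackbody curve in Fig. 3 follows from a simple radiative-equilibrium balance, which can be reproduced as a sketch. The solar luminosity and Stefan-Boltzmann constant below are standard values assumed here, not numbers taken from the text.

```python
import math

# Blackbody equilibrium temperature of an airless body:
# T_bb = [L_sun / (16 pi sigma r^2)]^(1/4).
L_sun = 3.828e26        # solar luminosity [W] (IAU nominal value)
sigma = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]
AU = 1.495978707e11     # astronomical unit [m]


def T_blackbody(r_au):
    """Blackbody equilibrium temperature [K] at heliocentric distance r_au [AU]."""
    r = r_au * AU
    return (L_sun / (16.0 * math.pi * sigma * r**2)) ** 0.25


print(f"T_bb(9.5 AU) = {T_blackbody(9.5):.0f} K")  # ~90 K at Saturn's orbit
```

Since $T_\mathrm{bb} \propto r^{-1/2}$, the curve drops by a factor of about 3 from Saturn to the heliopause; the core-mantle grain stays warmer than this blackbody limit because its submicron size makes it a poor emitter at long wavelengths.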
The equilibrium temperature of the particle at the heliospheric boundary is 77~K at 120~AU and 83~K at 95~AU, which are approximately the heliocentric distances to the heliopause and the termination shock, respectively, in the upwind direction. \section{Discussion} \subsection{Sublimation of organic materials} We have investigated the possibility that organic forming elements desorb completely by sublimation as LIC dust approaches the Sun. Our results suggest the sublimation of organic refractory material by exothermic chemical reactions with a final temperature of $T \ga 600~\mathrm{K}$. A comparison of Fig.~\ref{fig1} with Fig.~\ref{fig2} indicates that a dust particle is heated up to the temperature of $T > 270~\mathrm{K}$ if exothermic reactions release an energy of $E/m > 300~\mathrm{kJ~kg^{-1}}$, for either $T_\mathrm{trig} = 50~\mathrm{K}$ or $T_\mathrm{trig} = 100~\mathrm{K}$. Since the rearrangement of carbon bonds in amorphous carbon is accompanied by graphitization, this might not be relevant for the desorption of organic forming elements. A concentration of frozen free radicals of $1$--$10$\% is expected among organic forming atoms and molecules, because of ultraviolet photolysis in the interstellar medium \citep{greenberg1976,sorrell2001}. Even if the concentration of free radicals is too small to completely sublimate the organic refractory component, pressure built up by sublimation may induce the desorption of molecules by an explosion \citep{schutte-greenberg1991}. It is, therefore, not extraordinary that organic refractory material stores the energy of $E/m \ga 270~\mathrm{kJ~kg^{-1}}$, releases it instantaneously by exothermic reactions and shattering, and ends up with its complete desorption. 
\subsection{Gas-to-dust mass ratio} It is worth noting that a signature of organic sublimation in the Solar System may be found as an increase in the gas-to-dust mass ratio, which is $R_\mathrm{g/d} \simeq 121$ in the LIC \citep{kimura2015}. In fact, \citet{krueger-et-al2015} have derived the gas-to-dust mass ratio of $R_\mathrm{g/d} = 193^{+85}_{-57}$ from the entire data set of Ulysses LIC dust impacts measured within 5~AU from the Sun. This high mass ratio of LIC gas-to-dust is associated with a low mass density of LIC dust measured in situ by the impact ionization dust detector onboard Ulysses. According to our model, Ulysses should have detected LIC dust particles that have experienced the sublimation of organic materials at heliocentric distances beyond 10~AU from the Sun. Therefore, one has to take the mass loss of organic materials from LIC dust into account when estimating the gas-to-dust mass ratio in the LIC from impacts of interstellar dust measured in situ inside the Solar System. We expect that the sublimation of organic materials elevates the gas-to-dust mass ratio from $R_\mathrm{g/d} = 121 \pm 22$ to $199 \pm 16$ in the inner Solar System (see Appendix~\ref{appendix_b}). \begin{figure} \centering \includegraphics[width=\hsize]{f4} \caption{Gas-to-dust mass ratio of the local interstellar cloud (LIC). The filled diamond and square indicate the ratios determined by the depletion of elements in the LIC, before and after the sublimation of organic refractory material (see Appendix~\ref{appendix_b}). The filled circle is the gas-to-dust mass ratio of the LIC derived from the Ulysses in-situ measurements of LIC dust impacts by \citet{krueger-et-al2015}. The error bars represent the standard deviations. } \label{fig4} \end{figure} The gas-to-dust mass ratio of $R_\mathrm{g/d} \approx 200$ is entirely consistent with the value derived from the Ulysses in-situ measurements of LIC dust impacts. 
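The shift of the central value from 121 to about 199 follows from simple mass bookkeeping, sketched below under the assumption that the entire organic mass fraction $x$ of the dust is transferred to the gas phase upon sublimation. The values of $R_\mathrm{g/d}$ and $x$ are taken from the text.

```python
# Per unit dust mass in the LIC: sublimation moves the organic fraction x
# into the gas phase, so gas gains x while dust retains (1 - x).
R_lic = 121.0  # gas-to-dust mass ratio in the LIC (central value)
x = 0.39       # organic mass fraction of LIC dust

R_inner = (R_lic + x) / (1.0 - x)
print(f"R_g/d after sublimation = {R_inner:.0f}")  # ~199
```

The dominant effect is the shrinking denominator: because the sublimated organics are a negligible addition to the already gas-rich numerator, the ratio rises essentially by the factor $1/(1-x)$.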
Although the Ulysses data have large uncertainties, the difference between the Ulysses data and the depletion data after the subtraction of the organic component is not statistically significant at the 5\% level. Consequently, our results explain why organic forming elements have never been identified in the Stardust samples and the Cassini data of LIC dust, and they also solve the puzzle of why the mass density of LIC dust is so low in the Ulysses data. \subsection{Interstellar pickup ions} \begin{figure} \centering \includegraphics[width=\hsize]{f5} \caption{Elemental abundances of nitrogen and oxygen atoms per million hydrogen atoms. The filled triangle, diamond, and square indicate the abundances in the organic material, the gas phase, and the sum of the organic and gas phases, respectively (see Appendix~\ref{appendix_b}). The filled circle is the elemental abundance determined by the measurements of interstellar pickup ions \citep{gloeckler-geiss2004}. The error bars represent the standard deviations. } \label{fig5} \end{figure} So-called interstellar pickup ions detected inside the heliosphere have been used as a powerful tool to study the elemental abundances of gas in the LIC \citep{gloeckler-geiss2004}. The pickup ion measurements suggest that all nitrogen atoms and most oxygen atoms of the LIC are in the gas phase, contrary to the HST measurements of gas absorption lines. The sublimation of organic materials could influence the elemental abundances of C, N, O, and H for interstellar pickup ions, since the C, N, O, and H atoms that desorbed from dust particles are also picked up by the solar wind, in the same way as interstellar neutral atoms. The abundances of elements in the dust phase of the LIC allow us to place significant constraints on the chemical composition of the organic material, although the exact nature of the organic material is unknown (see Appendix~\ref{appendix_a}). 
In Fig.~\ref{fig5}, we compare the elemental abundances of nitrogen and oxygen atoms in the organic material (filled triangle), the gas phase (filled diamond), the sum of organic and gas phases (filled square), and interstellar pickup ions (filled circle). It turns out that although interstellar pickup ions show an excess of N and O in comparison to LIC gas, each pickup-ion abundance is remarkably closer to the sum of organic and gas phase abundances, compared with the gas-phase abundance alone. We find that the sum of organic and gas phase abundances is not statistically different at the 5\% level of significance from both the N and O abundances of interstellar pickup ions. Although the large uncertainties in the measured N/H ratio of interstellar pickup ions conceal the deviation from LIC gas, the O/H ratio of interstellar pickup ions differs from that of LIC gas at the 95\% confidence level. Consequently, the sublimation of organic forming elements in the Solar System would be in good harmony with the elemental abundances of interstellar pickup ions measured inside the Solar System. \subsection{Triggering temperature} The process behind sublimation is that the heat of the Sun triggers exothermic chain reactions by the recombination of free radicals or the rearrangement of carbon bonds. Because the triggering temperature depends on the organic composition, which is unknown for LIC dust, it is difficult, if not impossible, to determine the triggering temperature precisely. \citet{schutte-greenberg1991} estimated the triggering temperature to lie in the range of $T_\mathrm{trig} = 24.5$--28~K for free radicals in their UV-photolyzed ice mixtures, similar to $T_\mathrm{trig} = 27~\mathrm{K}$, which was measured by \citet{dhendecourt-et-al1982} with different UV-photolyzed ice mixtures. 
Since organic forming elements are already strongly depleted from LIC dust at the orbit of Saturn, according to the Cassini in-situ measurements, exothermic reactions should have been triggered beyond 10~AU from the Sun. However, the N and O abundances of interstellar pickup ions may not be well accounted for by the sum of organic materials and gas, as shown in Fig.~\ref{fig5}, unless exothermic reactions are triggered before crossing the so-called filtration region. The filtration region extends to a circum-heliospheric interstellar medium of 100--200~AU beyond the heliopause and converts a portion of neutral atoms to ions by a charge exchange with ions \citep{gloeckler-et-al1997,izmodenov-et-al2004}. This implies that, at the very least, the triggering temperature should be lower than $T = 77~\mathrm{K}$, which is the equilibrium temperature at the heliopause (see Fig.~\ref{fig3}). It is, however, reasonable to assume that the triggering temperature lies below the equilibrium temperature just beyond the filtration region, which is close to the inner edge of the Oort cloud. Therefore, we suggest $T_\mathrm{trig} \la 50~\mathrm{K}$, because it does not contradict the triggering temperatures for the recombination of free radicals and the rearrangement of carbon bonds. Since the temperature of interstellar dust is kept as low as 18~K in the LIC, we propose that the triggering temperature lies in the range of $T_\mathrm{trig} = 20$--50~K. \subsection{Annealing of amorphous silicates} \begin{figure} \centering \includegraphics[width=\hsize]{f6} \caption{Timescales $\tau_\mathrm{c}$ for annealing of amorphous silicate (solid curve) and those $\tau_\mathrm{sub}$ for the sublimation of organic refractory material (dashed curve). } \label{fig6} \end{figure} While exothermic reactions elevate a dust particle to the sublimation temperature, it is of great importance to find out whether the annealing of amorphous silicates takes place. 
Experimental and theoretical studies on the crystallization of amorphous silicate grains covered by CH$_4$-doped amorphous carbon have shown that exothermic reactions in the mantle result in the crystallization of the silicate core at room temperature \citep{kaito-et-al2007,tanaka-et-al2010}. Although there is no evidence for the annealing of amorphous silicates of LIC dust en route to the Solar System, we examine whether the amorphous silicate core of a submicron grain crystallizes by exothermic reactions in its organic mantle. We have found that the energy released by exothermic reactions is sufficient to heat up a dust particle to the temperature of $T \ga 270~\mathrm{K}$ where sublimation dominates the cooling (see Figs.~\ref{fig1} and \ref{fig2}). If the annealing of amorphous silicates proceeds with crystallization at $T = 1000~\mathrm{K}$, it takes $\tau_\mathrm{c} = 4.4\times{10}^{3}$--$9.1\times{10}^{4}~\mathrm{s}$, depending on the silicate mineralogy \citep{fabian-et-al2000,kimura-et-al2002,tanaka-et-al2010}. If we estimate the timescale for sublimation by $\tau_\mathrm{sub} = mxL/\Gamma_\mathrm{sub}$, it turns out that organic materials completely sublimate within $\tau_\mathrm{sub} = 6.3\times{10}^{-11}~\mathrm{s}$ at $T = 1000~\mathrm{K}$. The duration of organic sublimation is drastically shorter than the timescale for silicate crystallization, even if we consider higher temperatures (see Fig.~\ref{fig6}). Therefore, we may assert that the annealing of amorphous silicate cores does not take place during the sublimation of organic materials. \subsection{Large grains of micrometer sizes} We have restricted our discussion on the sublimation of organic materials to submicron grains, as this is the size regime of LIC dust that is compositionally analyzed by Cassini's Cosmic Dust Analyzer (CDA), but micrometer-sized grains are certainly present in the LIC as detected by Ulysses within 5~AU from the Sun \citep{gruen-et-al1994,krueger-et-al2015}. 
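The quoted sublimation timescale can be reproduced by combining $\tau_\mathrm{sub} = m_\mathrm{p} x L / \Gamma_\mathrm{sub}$ with Eq. (5) and the HMT parameters from the text. This is an illustrative sketch using standard physical constants; the particle mass is the value quoted earlier in the text.

```python
import math

# Sublimation timescale tau_sub = m_p * x * L / Gamma_sub at T = 1000 K,
# with Gamma_sub from Eq. (5) and the HMT parameters of the text.
k_B = 1.380649e-23     # Boltzmann constant [J K^-1]
u = 1.66053906660e-27  # atomic mass unit [kg]
a = 0.1e-6             # grain radius [m]
m_p = 1.1e-17          # particle mass [kg], as quoted in the text
x = 0.39               # organic mass fraction of the grain
M_or = 140.0           # molecular weight of the organic material
L = 5.62e5             # latent heat of sublimation [J kg^-1]
b = math.log(4.24e12)  # ln p(inf), with p in Pa

T = 1000.0
p = math.exp(-M_or * u * L / (k_B * T) + b)
gamma_sub = 4.0 * math.pi * a**2 \
    * math.sqrt(M_or * u / (2.0 * math.pi * k_B * T)) * p * L
tau_sub = m_p * x * L / gamma_sub
print(f"tau_sub = {tau_sub:.1e} s")  # ~6.3e-11 s, far below tau_c > 4.4e3 s
```

The roughly fourteen orders of magnitude between $\tau_\mathrm{sub}$ and the crystallization timescale $\tau_\mathrm{c}$ make the conclusion robust against the considerable uncertainties in the individual parameters.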
If LIC dust particles of micrometer sizes are agglomerates of submicron core-mantle grains, as suggested by \citet{kimura-et-al2003b}, then their temperatures are most likely close to those of submicron core-mantle grains. Therefore, we expect that the sublimation of organic materials from the constituent grains of the agglomerates takes place in a similar way as we have studied in this work. Once the organic mantles of submicron grains in agglomerates sublimate, the so-called packing effect may produce compact agglomerates of silicate core grains \citep{mukai-fechtig1983}. Therefore, there is a possibility that such compact agglomerates of silicate grains are the micrometer-sized interstellar grains detected by Ulysses within 5~AU from the Sun. Laboratory analyses of Stardust samples have revealed that micrometer-sized interstellar grains are indeed characterized by low-density silicate materials, which closely resemble agglomerates of silicate grains \citep{butterworth-et-al2014,westphal-et-al2014a,westphal-et-al2014b}. The extraordinarily low capture velocities of the Stardust interstellar grains and high ratios of solar radiation pressure to solar gravity on these grains found by \citet{postberg-et-al2014} and \citet{sterken-et-al2014} also point to low-density agglomerates of silicate grains with metallic inclusions \citep{kimura2017}. Alternatively, if the micrometer-sized interstellar grains are single micrometer-sized particles with an organic mantle and a silicate core, then their temperatures are lower than those of submicron core-mantle grains. This indicates that the organic component of micrometer-sized interstellar grains sublimates at smaller heliocentric distances than the distances estimated from Fig.~\ref{fig3}. Nevertheless, we are confident that organic materials sublimate beyond 10~AU since even micrometer-sized grains are still much warmer than a blackbody. 
Consequently, we conclude that micrometer-sized grains that are registered on the dust detector onboard Ulysses do not retain organic compounds either. \subsection{Physical and chemical structures of interstellar dust} Although we assume that interstellar dust consists of an amorphous silicate in the core of the particle and an organic material in the mantle, the physical structure of interstellar dust and the chemical structure of carbonaceous matter are open to debate. \citet{mathis-whiffen1989} proposed interstellar dust to be conglomerates of silicate grains, graphite grains, and amorphous carbon grains because graphite grains and amorphous carbon grains are condensed in carbon-rich envelopes around evolved stars. While exothermic reactions may take place in amorphous carbon grains by rearrangements of carbon atoms, the outcome would be the phase-transition of amorphous carbon to graphite \citep{wakabayashi-et-al2004}. If conglomerates of silicate grains, graphite grains, and amorphous carbon grains represent realistic structures of interstellar dust, the presence of carbon atoms from graphite in such a high abundance should have been identified in the data of Helios, Cassini, and Stardust. Since this is not the case, we could rule out that conglomerates of silicate grains, graphite grains, and amorphous carbon grains are the major population of dust in the LIC. We would like to point out that the depletions of C, N, and O in the gas phase of the LIC are not in harmony with pure carbon, such as graphite and amorphous carbon, but with organic refractory materials \citep{kimura-et-al2003b,kimura2015}. The reason that we adopt the core-mantle structure is that it is the outcome of ice accretion in molecular clouds and subsequent photo-processing in the DISM, while the growth of core-mantle grains into agglomerates leaves room for discussion \citep{li-greenberg1997,kimura-et-al2003b,kimura2017}. 
Unfortunately, we cannot ensure the uniqueness of the structure, but the desorption of organic-forming elements from the grain surface is a novel idea for solving a number of conundrums and thus the model of core-mantle grains provides a feasible solution to the observations of missing organic materials in interstellar dust penetrating into the Solar System. Because the available body of facts does not conflict with the picture that LIC dust is composed of organic refractory material in its mantle and amorphous silicate in its core, the sublimation of organic matter sheds light on the substantial depletion of organic forming elements in LIC dust observed inside the heliosphere. \subsection{Delivery of interstellar organic matter to the early Earth} The delivery of extraterrestrial organic compounds to the primitive Earth might have played a crucial role in the origin of life, on the assumption that they are the seeds for prebiotic chemistry \citep{anders1989,chyba-et-al1990}. Comets and asteroids, indeed, transport extraterrestrial organic matter to the Earth by means of interplanetary dust particles and chondritic meteorites, but the transportation of extraterrestrial seeds to the Earth from the interstellar medium is an unresolved issue. In contrast to extraterrestrial abiotic organic seeds, panspermia is the hypothesis that attributes the origin of life on Earth to the delivery of biotic material, namely, microorganisms via interplanetary and/or interstellar space \citep{arrhenius1908}. We have shown that organic compounds of interstellar dust cannot remain intact, but they suffer from sublimation prior to their entry into the heliosphere because solar heating facilitates exothermic chain reactions of radicals. 
Contrary to the interstellar organic matter, comets retain refractory organic matter without the threat of sublimation in a near-Earth orbit since the region of comet formation in the solar nebula was warm enough to exhaust radicals and hence prevent their accumulation \citep{barnun-kleinfeld1989}. Therefore, unlike comets and asteroids, the delivery of extraterrestrial organic matter to the Earth via the DISM is very difficult, if not impossible, irrespective of its biotic or abiotic nature. If a future exploration mission finds evidence for the sublimation of interstellar organic matter, as indicated by our results, the hypothesis of panspermia from the DISM must be discarded. \begin{acknowledgements} Special thanks are due to Harald Kr\"{u}ger, who provided stimulating discussions and Ulysses results on LIC dust impacts prior to publication of the results. We are also grateful to an anonymous reviewer for her/his fruitful comments that helped us to improve the manuscript. H. Kimura thanks Martin Hilchenbach and Thomas Albin for their hospitality during his stay at Max Planck Institute for Solar System Research (MPS), where much of the writing was performed, as well as Hitoshi Miura and Andrew Westphal for their useful correspondence. The author also thanks MPS's research fellowship and JSPS's Grants-in-Aid for Scientific Research (\#26400230, \#15K05273, \#19H05085). \end{acknowledgements}
\title{\bf The Whole is Greater than the Sum of the Parts: \\ Optimizing the Joint Science Return from LSST, Euclid and WFIRST} \maketitle \section{Contributors} \begin{quote} {B.~Jain,\footnote{[email protected]} D.~Spergel,\footnote{[email protected]} R.~Bean, A.~Connolly, I.~Dell'antonio, J.~Frieman, E.~Gawiser, N.~Gehrels, L.~Gladney, K.~Heitmann, G.~Helou, C.~Hirata, S.~Ho, \v{Z}.~Ivezi\'{c}, M.~Jarvis, S.~Kahn, J.~Kalirai, A. Kim, R. Lupton, R.~Mandelbaum, P.~Marshall, J.~A.~Newman, S.~Perlmutter, M.~Postman, J.~Rhodes, M.~A.~Strauss, J.~A.~Tyson, L.~Walkowicz, W.~M.~Wood-Vasey} \end{quote} \tableofcontents \section{Where will we be in 2024?} Astronomy in 2024 should be very exciting! LSST and Euclid, which should each be in the midst of their deep surveys of the sky, will be joined by WFIRST. With higher resolution and sensitivities than previous astronomical survey instruments, they will reveal new insights into areas ranging from exoplanets to the nature of dark energy. At the same time, JWST will be staring deeper into the early universe than ever before. Advanced LIGO should be detecting frequent collisions between neutron stars. ALMA will be operating at all of its planned frequencies, and the new generation of very large ground-based telescopes should be revolutionizing optical astronomy. In parallel, advances in computational capabilities should enable observers to better exploit these complex data sets and theorists to make detailed time-dependent three-dimensional models that can capture much of the physics needed to explain the new observations. The focus of this report is an exploration of some of the opportunities enabled by the combination of LSST, Euclid and WFIRST, the optical surveys that will be an essential part of the next decade's astronomy. The sum of these surveys has the potential to be significantly greater than the contributions of the individual parts. 
As is detailed in this report, the combination of these surveys should give us multi-wavelength high-resolution images of galaxies and broadband data covering much of the stellar energy spectrum. These stellar and galactic data have the potential of yielding new insights into topics ranging from the formation history of the Milky Way to the mass of the neutrino. However, enabling the astronomy community to fully exploit this multi-instrument data set is a challenging technical task: for much of the science, we will need to combine the photometry across multiple wavelengths with varying spectral and spatial resolution. Coordination will be needed between the LSST, Euclid, and WFIRST projects in order to understand the trades between overlapping areal coverage, filter design, depth and cadence of the observations, and performance of the image analysis algorithms. We will need to provide these data to the community in a highly usable format. If we do not prepare the missions for this task in advance, we will limit their scientific return and increase the cost of the eventual effort of fully exploiting these data sets. The goal of this report is to identify some of the science enabled by the combined surveys and the key technical challenges in achieving the synergies. \newpage \section{Background on missions} This section covers the science goals and technical overview of the three missions. \subsection{WFIRST} WFIRST is the top-ranked large space mission of the 2010 New Worlds New Horizon (NWNH) Decadal Survey. With the addition of a coronagraph, WFIRST would also satisfy the top ranked medium space priority of NWNH. The mission is designed to settle essential questions in dark energy, exoplanets, and infrared astrophysics. The mission will feature strategic key science programs plus a vigorous program of guest observations. 
A Science Definition Team and a Study Office at the Goddard Space Flight Center and Jet Propulsion Laboratory are studying the mission, with reports\footnote{See SDT web site: {\it http://WFIRST.gsfc.nasa.gov/add/} } in 2013, 2014, and 2015. WFIRST is now baselined with an existing 2.4 m telescope NASA acquired from the National Reconnaissance Office (NRO), a telescope that became available after the completion of NWNH. This configuration is referred to as WFIRST-AFTA (Astrophysics Focused Telescope Asset). The mission consists of the telescope, the spacecraft, a Wide-Field Instrument (WFI), an IFU spectrometer, and a coronagraph instrument, which includes an IFS spectrometer. The WFI operates in multiple bands covering the 0.7 to 2.0 micron range, with an extension to 2.4 microns under study. It has a filter wheel for multiband imaging and a grism for wide-field spectroscopy (R=550-800). The pixel scale is 0.11 arcsec, which fully samples the PSF at the H band. The IFU has a 3 arcsec FoV and R=75 resolution. The coronagraph operates in the visible 400 - 1000 nm band. It has a 2.5 arcsec FoV, $10^{-9}$ effective contrast, and a 100-200 mas inner working angle. The IFS has R=70 resolution. WFIRST will measure the expansion history of the Universe and the growth rate of its large-scale structure as functions of cosmic time, to test the theory of general relativity on cosmological scales and to probe the nature of dark energy. It will employ five different techniques: supernovae, weak lensing, baryon acoustic oscillations (BAO), redshift space distortions (RSD), and clusters. With its 2.4-meter primary mirror, the mission will measure more than double the surface density of galaxies detected by the Euclid mission and push these measurements further into the NIR. 
This higher density will enable more detailed maps of the dark matter distribution, measurements not only of two-point statistics but also higher order statistics, and multiple tests of the systematic dependence of cosmological parameters on galaxy properties. The NRC noted in its recent review\footnote{{\it http://www.nap.edu/catalog.php?record\_id=18712}}: ``For each of the cosmological probes described in NWNH, WFIRST/AFTA exceeds the goals set out in NWNH." For exoplanets, WFIRST will complete the census of exoplanets begun by Kepler. It will make microlensing observations of the galactic bulge using the WFI to discover several thousand planets at the orbit of Earth and beyond and will be sensitive to planets as small as Mars. WFIRST's microlensing survey will have the unique capability to detect free-floating planets, thus enabling astronomers to determine the efficiency of planet formation. The coronagraph will directly image and characterize tens of Uranus- to Jupiter-mass planets around nearby stars and study debris disks. WFIRST will also be a worthy successor to the Hubble Space Telescope. With 200 times the field-of-view of HST, and the same size primary mirror, it will conduct a rich program of general astrophysics and reveal new insights and discoveries on scales ranging from the nearest stars to the most distant galaxies. Table~\ref{table:1} gives the WFIRST capabilities for surveys and SN monitoring observations. \begin{table}[h!] 
\centering \begin{tabular}[b]{| l | l |} \hline \textbf{Attributes} & \textbf{WFIRST Capability} \\ \hline Imaging survey & J $\sim$ 27 AB over 2400 sq deg \\ \hline & J $\sim$ 29 AB over 3 sq deg deep fields \\ \hline Multi-filter photometry & Filters: z, Y, J, H, F184 (Ks), W (wide) \\ \hline Slitless wide-field spectroscopy & 0.28 sq deg, R$\sim$600 \\ \hline Slit multi-field spectroscopy & IFU, R$\sim$70 \\ \hline Number of SN Ia & $2\times10^{3}$ to z$\sim$1.7 \\ \hline Number of galaxies with spectra & $2\times10^{7}$ \\ \hline Number of galaxies with shapes & $4\times10^{8}$ \\ \hline Number of galaxies detected & few $\times10^{9}$ \\ \hline Number of massive clusters & $4\times10^{4}$ \\ \hline \end{tabular} \caption{\textbf{WFIRST capabilities in a nominal $\sim2.5$ year dark energy survey}} \label{table:1} \end{table} {\it High Latitude Survey (HLS):} The nominal HLS will run for 2 years, with 1.3 years for imaging and 0.6 years for grism spectroscopy. The coverage will be 2200 deg$^{2}$ of high Galactic latitude sky within the LSST footprint. The imaging will have 2 passes over the survey footprint in each of the 4 imaging filters (J, H, F184 [for shapes] and Y [for photo-z's]). Data from LSST will be required to provide optical photometry for photo-z determination. Each pass will include four 184 sec exposures in each filter (with five exposures in J band), with each exposure offset diagonally by slightly more than a detector chip gap. This pattern is repeated across the sky. The spectroscopic survey will have four passes total over the survey footprint, with two ``leading" passes and two ``trailing" passes to enable the single grism to rotate relative to the sky. Each pass includes two 362 sec exposures with offsets to cover chip gaps. The two ``leading" passes (and two ``trailing") are rotated from each other by $\sim5^{\circ}$. 
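As a rough consistency check, the per-sky-position exposure budget implied by the HLS strategy above can be sketched as follows. This assumes one plausible reading of the text (each of the two imaging passes contributes four 184 s exposures per filter, five in J; each of the four grism passes contributes two 362 s exposures); these are illustrative numbers, not the project's official time budget.

```python
# Rough per-sky-position exposure budget for the WFIRST HLS,
# under one plausible reading of the strategy described above.
# Assumptions (not official): two passes per filter, four 184 s
# exposures per pass per filter (five in J); four grism passes
# of two 362 s exposures each.

IMAGING_EXP = 184.0   # seconds per imaging exposure
GRISM_EXP = 362.0     # seconds per spectroscopic exposure

exposures_per_pass = {"Y": 4, "J": 5, "H": 4, "F184": 4}
passes_imaging = 2
passes_grism = 4
grism_exposures_per_pass = 2

imaging_time = sum(n * passes_imaging * IMAGING_EXP
                   for n in exposures_per_pass.values())
grism_time = passes_grism * grism_exposures_per_pass * GRISM_EXP

print(f"imaging: {imaging_time:.0f} s (~{imaging_time / 3600:.1f} h)")
print(f"grism:   {grism_time:.0f} s (~{grism_time / 3600:.1f} h)")
```

Under these assumptions each sky position accumulates roughly 1.7 h of imaging and 0.8 h of grism spectroscopy, consistent with a multi-year survey of 2200 deg$^{2}$.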
It is anticipated that the sky coverage of the HLS will be significantly expanded in the extended phase of the mission after the baseline first 6 years. WFIRST has no consumables that would prohibit a mission of 10 years or longer, and is being designed with serviceability in mind to enable the possibility of an even longer mission. {\it Supernova Survey:} A six-month SN Ia survey will employ a three-tier strategy so as to track supernovae over a wide range of redshifts: \begin{itemize} \item Tier 1 for z$<$0.4: 27.44 deg$^{2}$ Y=27.1, J=27.5 \item Tier 2 for z$<$0.8: 8.96 deg$^{2}$ J=27.6, H=28.1 \item Tier 3 for z$<$1.7: 5.04 deg$^{2}$ J=29.3, H=29.4 \end{itemize} Tier 3 is contained in Tier 2 and Tier 2 is contained in Tier 1. Each of these fields will be visited every 5 days over the 6 months of the SN survey. The imager is used for SN discovery and the IFU spectrometer is used to determine SN type, measure redshifts, and obtain lightcurves. Each set of observations will take a total of 30 hours of combined imaging and spectroscopy. The fields are located in low dust regions $\le20^{\circ}$ off an ecliptic pole. A final revisit for each target for spectroscopy will occur after the SN fades, to enable galaxy subtraction. {\it Guest Observer Program:} A significant amount of observing time will be awarded to the community through a peer-selected GO program. An example observing program prepared by the SDT has 25\% of the baseline 6 years of the mission, or 1.5 years, for guest observations. Significantly more time would be awarded in an extended phase after the 6th year. The GO program is expected to cover broad areas of science from the solar system to Galactic studies to galaxies to cosmology. The April 2013 SDT study contains a rich set of $\sim$50 potential GO science programs that are uniquely enabled by WFIRST. 
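The supernova survey cadence quoted above implies a simple time budget. A minimal sketch, assuming exactly 6 months of 5-day revisits at the stated 30 h per visit set (illustrative arithmetic only, not the mission's official schedule):

```python
# Back-of-envelope time budget for the SN survey described above.
# Assumptions: ~6-month survey, revisits every 5 days, ~30 h of
# combined imaging + spectroscopy per visit set.

SURVEY_DAYS = 182.5           # ~6 months
CADENCE_DAYS = 5.0
HOURS_PER_VISIT_SET = 30.0

n_epochs = int(SURVEY_DAYS // CADENCE_DAYS)
total_hours = n_epochs * HOURS_PER_VISIT_SET
duty_cycle = total_hours / (SURVEY_DAYS * 24.0)

print(f"epochs: {n_epochs}")
print(f"total time: {total_hours:.0f} h "
      f"(~{100 * duty_cycle:.0f}% of wall-clock time)")
```

Roughly 36 epochs and $\sim$1080 h in total, i.e. about a quarter of the wall-clock time during the 6-month campaign.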
Astronomical community members provided a wide-ranging set of one-page descriptions of different GO programs that highlight the tremendous potential of WFIRST to advance many of the key science questions formulated by the Decadal survey. An important contribution that would likely come from key projects in the GO program would be to have deep IR observations of selected few-square-degree fields. These ultra-deep drilling fields could reach limits of J=30 AB. \subsection{LSST} The Large Synoptic Survey Telescope (LSST) is a large-aperture, wide-field, ground-based facility designed to perform many repeated imaging observations of the entire southern hemisphere in six optical bands (u, g, r, i, z, y). The majority of the southern sky will be visited roughly 800 times over the ten-year duration of the mission. The resulting database will enable a wide array of diverse scientific investigations ranging from studies of moving objects in the solar system to the structure and evolution of the Universe as a whole (Ivezic et al 2014). The Observatory will be sited atop Cerro Pachon in Northern Chile, near the Gemini South and SOAR telescopes. The telescope incorporates a 3-mirror anastigmatic optical design. Incident light is collected by the primary, which is an annulus with an outer diameter of 8.4 m, then reflected to a 3.4-m convex secondary, onto a 5-m concave tertiary, and finally into three refractive lenses in the camera. The total field of view is 9.6 deg$^2$ and the effective collecting aperture is 6.6 m in diameter. The design maintains a 0.2 arc-second system point spread function (PSF) across the entire spectral range of 320 nm to 1050 nm. The etendue (the product of collecting area and field of view) of the system is several times higher than that of any previous facility. The telescope mount assembly is a compact, stiff structure with a high fundamental frequency that enables fast slew and settle. 
The camera contains a 3.2 Gigapixel focal plane array, composed of roughly 200 4K $\times$ 4K CCD sensors with 10 $\mu$m pixels. The sensors are deep depleted, back-illuminated devices with a highly segmented architecture that enables the entire array to be read out in 2 s or less. Four major science themes have motivated the definition of science requirements for LSST: \begin{itemize} \item Taking a census of moving objects in the solar system. \item Mapping the structure and evolution of the Milky Way. \item Exploring the transient optical sky. \item Determining the nature of dark energy and dark matter. \end{itemize} These four themes stress the system in different ways. For the science of dark energy and dark matter, LSST data will enable a variety of complementary analyses, including measurements of cosmic shear power spectra, baryon acoustic oscillations, precision photometry of Type Ia supernovae, measurements of time-delays between the multiple images in strong lensing systems, and the statistics of clusters of galaxies. Collectively, these will result in substantial improvements in our constraints on the dark energy equation of state and the growth of structure in the Universe, among other parameters. The main fast-wide-deep survey will require 90\% of the observing time and is designed to optimize the homogeneity of depth and number of visits. Each visit will comprise two back-to-back 15-second exposures and, as often as possible, each field will be observed twice per night, with visits separated by 15-60 minutes. Additional survey areas, including a region within 10 degrees of the northern ecliptic, the South Celestial Pole, and the Galactic Center, will be surveyed with either a subset of the LSST filter complement or with fewer observations. The remaining 10\% of the observing time will be used on mini-surveys that improve the scientific reach of the LSST. 
Examples of this include the ``Deep Drilling Fields'' where each field receives approximately 40 hour-long sequences of 200 exposures. When all 40 sequences and the main survey visits are coadded, this would extend the depth of these fields to $r\sim28$. The LSST has identified four distant extragalactic survey fields to observe as Deep Drilling Fields: ELAIS S1, XMM-LSS, Extended Chandra Deep Field-South, and COSMOS. Table~\ref{table:2} gives the LSST baseline design and survey parameters. \begin{table}[h] \centering \begin{tabular}{|l|l|} \hline \textbf{Attributes} & \textbf{LSST Capability} \\ \hline Final f-ratio, aperture & f/1.234, 8.4 m \\ Field of view, \'etendue & 9.6 deg$^2$, 319 m$^2$deg$^2$ \\ Exposure time & 15 seconds (two exposures per visit)\\ Main survey area & 18,000 deg$^2$\\ Pixel count & 3.2 Gigapix \\ Plate scale & 50.9 $\mu$m/arcsec (0.2'' pix) \\ Wavelength coverage & 320 -- 1050 nm, $ugrizy$ \\ Single visit depths, design & 23.7, 24.9, 24.4, 24.0, 23.5, 22.6 \\ Mean number of visits & 56, 80, 184, 184, 160, 160 \\ Final (coadded) depths & 25.9, 27.3, 27.2, 26.8, 26.3, 25.4 \\ \hline \end{tabular} \caption{\textbf{LSST Capabilities for the fast-wide-deep main survey}} \label{table:2} \end{table} The rapid cadence of the LSST will produce an enormous volume of data, roughly 15 Terabytes per night, leading to a total dataset over the ten years of operations of over a hundred Petabytes. Processing such a large dataset and archiving it in a useful form for access by the community has been a major design consideration for the project. The data management system is configured in three layers: an infrastructure layer consisting of the computing, storage, and networking hardware and system software; a middleware layer, which handles distributed processing, data access, user interface, and system operations services; and an applications layer, which includes the data pipelines and products and the science data archives. 
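As a consistency check on Table~\ref{table:2}, the tabulated coadded depths follow from the single-visit depths under idealized $\sqrt{N}$ noise averaging, which deepens the point-source limit by $1.25\log_{10}N$ magnitudes. This is a standard approximation; real coadds deviate at the few-hundredths-of-a-magnitude level.

```python
import math

# Sanity check of the coadded depths in Table 2: stacking N visits
# improves the point-source depth by 2.5*log10(sqrt(N)) magnitudes
# (idealized sqrt(N) noise averaging).

single_visit = {"u": 23.7, "g": 24.9, "r": 24.4,
                "i": 24.0, "z": 23.5, "y": 22.6}
n_visits = {"u": 56, "g": 80, "r": 184,
            "i": 184, "z": 160, "y": 160}

coadded = {band: single_visit[band] + 1.25 * math.log10(n_visits[band])
           for band in single_visit}

for band, depth in coadded.items():
    print(f"{band}: {depth:.1f}")  # matches the tabulated final depths
```

The result reproduces the tabulated final depths (25.9, 27.3, 27.2, 26.8, 26.3, 25.4) to within rounding.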
There will be both a mountain summit and a base computing facility (located in La Serena, the city nearest the telescope site), as well as a central archive facility in the United States. The data will be transported over existing high-speed optical fiber links from South America to the U.S. The observing strategy for the LSST will be optimized to maximize observing efficiency by minimizing slew and other down time and by making appropriate choices of filter bands given the real-time weather conditions. The final cadence selection will be undertaken in consultation with the community. A prototype simulator has been developed to help evaluate this process, which will be transformed into a sophisticated observation scheduler. A prototype fast Monte Carlo optical ray trace code has also been developed to simulate real LSST images. This will be further developed to aid in testing science analysis codes. The LSST cadence will take into account the various science goals and can also be refined to improve the synergy with other datasets. LSST anticipates first light in 2020 and the start of the 10-year survey in 2022. It is fully funded for construction as of summer 2014. \subsection{Euclid} Euclid is a medium-class mission within the European Space Agency's (ESA) `Cosmic Vision' program\footnote{ {\it http://sci.esa.int/cosmic-vision/}}. Euclid was formally selected in 2011 and is currently on schedule for launch in 2020. After traveling to the L2 Lagrange point and a brief shakeout and calibration period, Euclid will undertake an approximately 6-year survey aimed at ``mapping the geometry of the dark Universe." Euclid will pursue four primary science objectives (Laureijs et al 2011): \begin{itemize} \item Reach a dark energy FoM $> 400$ using only weak lensing and galaxy clustering; this roughly corresponds to 1$\sigma$ errors on $w_p$ and $w_a$ of 0.02 and 0.1, respectively. 
\item Measure $\gamma$, the exponent of the growth factor, with a 1$\sigma$ precision of $< 0.02$, sufficient to distinguish General Relativity and a wide range of modified-gravity theories. \item Test the Cold Dark Matter paradigm for hierarchical structure formation, and measure the sum of the neutrino masses with a 1$\sigma$ precision better than 0.03 eV. \item Constrain $n_s$, the spectral index of the primordial power spectrum, to percent accuracy when combined with Planck, and to probe inflation models by measuring the non-Gaussianity of initial conditions parameterized by $f_{NL}$ to a 1$\sigma$ precision of $\sim$2. \end{itemize} Euclid is optimized for two primary cosmological probes: weak gravitational lensing and galaxy clustering (including BAO and RSD). Therefore, Euclid will measure both the expansion history of the Universe and the growth of structure. Euclid comprises a 1.2m Korsch 3 mirror anastigmatic telescope and two primary science instruments. The visible instrument (VIS) consists of 36 4k $\times$ 4k CCDs and will be used to take images in a single, wide (riz) filter for high precision weak lensing galaxy shape measurements. The light entering the telescope will be split via a dichroic to allow for simultaneous observations with the Near Infrared Spectrometer and Photometer (NISP), which will take 3 band imaging (Y, J, H) and grism spectroscopy in the 1-2 $\mu$m wavelength range. The NISP imaging is aimed at producing high quality photometric redshifts when combined with ground-based optical data from a combination of telescopes in the southern and northern hemispheres, and the spectroscopy is aimed at producing accurate maps of galaxy clustering over 2/3 of the age of the Universe. NISP will contain 16 H2RG 2k $\times$ 2k NIR detectors procured and characterized by NASA. 
Together, VIS and NISP will survey the darkest (least obscured by dust) 15,000 square degrees of the extragalactic sky, providing weak lensing shapes for over 1.5 billion galaxies and emission line spectra for several tens of millions of galaxies, while taking full advantage of the low systematics afforded by a space mission. While the Euclid hardware and survey design are specifically optimized for dark energy studies, the images and catalogs will enable a wide range of ancillary science in cosmology, galaxy evolution, and other areas of astronomy and astrophysics. These data will be made publicly available both in Europe via ESA and The Euclid Consortium and in the US via the Euclid NASA Science Center at IPAC (ENSCI) after a brief proprietary period. Small areas of the Euclid survey (suitable for testing algorithms and pipelines but not large enough for cosmology) will be released at 14, 38, 62, and 74 months after survey operations start (these are referred to as ``quick releases"). The full survey data will be released in three stages: circa 2022, 2500 square degrees; circa 2025, 7500 square degrees; circa 2028, 15,000 square degrees. \newpage \section{Enhanced science} The main dark energy probes established over the last decade are described next in the context of the three missions. They are: Type Ia supernovae, galaxy clustering (baryon acoustic oscillations and redshift space distortions), weak lensing, strong lensing, and galaxy clusters. The biggest advantage from combining information from the three missions is in the mitigation of systematic errors, especially via redshift information. For the observational and theoretical issues underlying the discussion here we refer the reader to recent review articles in the literature (Weinberg et al 2013; Joyce et al 2014). LSST, WFIRST and Euclid each have their own systematic errors, most of which are significantly reduced via the combination of the survey datasets. 
The systematic errors affecting each of the surveys individually arise from their incomplete wavelength coverage (creating photo-z systematics), their differences in imaging resolution and blending (creating shear systematics), or from different biases in galaxy sample selections. In most respects the surveys are complementary in the sense that one survey reduces or nearly eliminates a systematic in the other. To achieve this synergy requires some level of data sharing; we discuss below the cases where catalog level sharing of data is sufficient and others where it is essential to jointly process the pixel data from the surveys. This section describes each dark energy probe, with a focus on mitigation of systematics, and then turns to photometric redshifts (photo-z's). The last three subsections are devoted to stellar science, galaxy and quasar science, and the variable universe. \subsection{Weak lensing} A cosmological weak lensing analysis in any of these surveys will use tomography, which involves dividing the galaxy sample into redshift slices and measuring both the auto-correlations of galaxies within slices and cross-correlations of galaxies in different slices. Tomography gives additional information about structure growth beyond a strictly 2D shear analysis. However, there are a number of requirements on the data for a tomographic weak lensing analysis to be successful. First, we require a good understanding of both additive and multiplicative bias in the shear estimates, including how they scale with redshift. Second, we require a strong understanding of photometric redshift bias and scatter. This is true in general, but it becomes even more important when trying to model theoretical uncertainties such as the intrinsic alignments of galaxy shapes with large-scale structure. 
There are a number of ways in which the weak lensing science from each of LSST, WFIRST and Euclid would benefit by sharing data with the other surveys and thus improving our understanding of shear estimates and/or the photometric redshift estimates. This data sharing may include catalog-level data or possibly image data. Although the latter would pose additional complications in terms of the data reprocessing required to appropriately incorporate both ground and space images, the gains may be worth the effort where confusion or depth or wavelength limitations are severe. The main benefit to LSST from either WFIRST or Euclid will be in terms of the calibration of shear estimates. The space-based imaging will have much better resolution due to the much smaller PSF size. The shapes of galaxies measured from space images will suffer less from model bias and noise bias (at a fixed depth) than the shapes of the same galaxies measured from ground images. LSST does go deeper over its wide survey than the space missions, so it will do better on the outer parts of galaxy images. For subsets of galaxies measured at comparable depth, a comparison of the shear signal estimated by both LSST and either Euclid or WFIRST can help quantify possible systematic errors in the LSST estimates. While each survey will have its own independent scheme to correct for calibration biases, a cross-comparison of this sort can be used to validate those schemes. Any additional correction that is derived could then be extended to the rest of the LSST area where there is no such overlap. Note that such a comparison would use a matched sample of galaxies in LSST and one of the space-based surveys, but would not involve comparing per-galaxy shape estimates. With different effective resolution, wavelength, and possibly shear estimation methods, there is no reason to expect agreement in per galaxy shears (even allowing for variation due to noise). 
Instead, such a comparison would utilize the reconstructed lensing shear fields from the different surveys. WFIRST may be more effective than Euclid for this purpose because its narrower passbands mean that it does not suffer from the wide-band chromatic PSF issue mentioned below. Another benefit to LSST relates to blended galaxies. Higher resolution space-based images make it easier to reliably identify blended galaxies than ground-based images do. The performance of the deblending algorithm in the LSST image processing pipeline can be significantly improved by providing higher-resolution space-based data, because roughly half of all observed galaxies (though a smaller fraction of the LSST gold sample) will be significantly blended with another galaxy. With space-based catalogs, one can at least identify which objects are really multiple galaxies or galaxy-star blends and do forced fitting using the space-estimated positions. Euclid will be more effective than WFIRST for this purpose because its footprint has a greater overlap with that of LSST. One benefit to Euclid from combination with LSST relates to chromatic effects. The diffraction-limited PSF is wavelength dependent and therefore differs for stars and galaxies due to their different SEDs. Euclid's very wide band imaging means that with its optical imaging alone, it will not be able to correctly estimate the appropriate PSF for each galaxy. Multi-band photometry is required to correct for this effect. Without correctly accounting for the color-dependent PSF, this systematic would dominate Euclid's WL error budget, so LSST photometry will be very useful in reducing this systematic error. Note that LSST cannot help with chromatic effects involving color gradients within galaxies, for which higher resolution imaging data will be necessary to estimate corrections; but LSST can provide information about chromatic effects involving the average galaxy colors. 
Finally, there are some shear systematics tests that can be done in a joint analysis but not in any one survey individually. For example, one route that has been proposed within surveys to mitigate additive shear systematics is to cross-correlate shear estimates in different bands or in different exposures rather than auto-correlate shear estimates. However, some sources of additive systematics may persist across bands or exposures, which would make them quite difficult to identify within the data from the survey alone. Indeed, in essentially every survey that has been used for weak lensing analysis to date, the preliminary results indicated some unforeseen systematics. Despite care taken in the design and planned operations of new surveys, we cannot reliably exclude the possibility of unforeseen systematics. In the case of these three surveys, the systematic errors from each survey are expected to be very different. Thus, cross-correlating the shear estimates from one survey with those of another should remove nearly all shear systematic error contributions, leaving only the true weak lensing signature. If this cross-correlation analysis reveals differences from the analysis within individual surveys, it could signal a problem that needs to be investigated and mitigated. Furthermore, the surveys have different redshift distributions for the lensed galaxies, so a joint cosmological analysis using data from two or more of them will be complementary. The joint analysis would also be better able to constrain nuisance parameters like intrinsic alignments and photometric redshift errors. As described below, the principal way in which Euclid and WFIRST would benefit from sharing information with LSST is in the photometric redshift catalog. Without this additional data, Euclid and WFIRST will not be able to complete their goals with weak lensing tomography. LSST will also gain from the IR band photometry of WFIRST and Euclid in terms of photometric redshift determination. 
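The cancellation of additive systematics in a cross-survey correlation can be illustrated with a toy calculation (purely schematic, not any survey's pipeline): if each survey's spurious additive shear is an independent pattern, it biases that survey's auto-correlation but averages out of the cross-correlation.

```python
import numpy as np

# Toy illustration of cross-survey shear correlation: two surveys
# measure the same cosmic shear field, each contaminated by its own
# independent additive systematic. The systematic power biases each
# auto-correlation but averages out of the cross-correlation.

rng = np.random.default_rng(42)
n_pix = 200_000

true_shear = 0.02 * rng.standard_normal(n_pix)  # common cosmic signal
sys1 = 0.01 * rng.standard_normal(n_pix)        # survey 1 additive systematic
sys2 = 0.01 * rng.standard_normal(n_pix)        # survey 2 additive systematic

e1 = true_shear + sys1
e2 = true_shear + sys2

signal = np.mean(true_shear**2)
auto1 = np.mean(e1 * e1)   # biased high by the systematic power
cross = np.mean(e1 * e2)   # nearly unbiased: <sys1*sys2> averages to ~0

print(f"true signal power: {signal:.6f}")
print(f"auto-correlation:  {auto1:.6f}")
print(f"cross-correlation: {cross:.6f}")
```

In this sketch the auto-correlation overshoots the true signal by the full systematic variance, while the cross-correlation recovers it to within statistical noise, mirroring the argument in the text.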
The easiest form of data sharing between surveys is if only catalog information is shared. For some of the gains mentioned above, this level of sharing is completely sufficient: cross-correlations and shear calibration, for example. The deblending improvements would require an iterative reprocessing of the data, using a space catalog as a prior and then reprocessing the LSST data with that extra information. A similar iterative reprocessing would be necessary for Euclid to properly account for its color-dependent PSF using the LSST SED estimates as a prior. For the photometric redshift improvement, one could just use the multiple catalogs, but to do it properly, it is important to have access to the pixel data from both experiments at once to properly estimate robust colors for each galaxy. It will be a significant technical challenge to develop a framework that will work with the different kinds of image data in a coherent analysis. \subsection{Large-scale structure} \begin{figure}[!htb] \begin{center} \includegraphics[height=3.0in]{wfirst_euclid_lsst_v2.png} \caption{\label{fig:comb} \scriptsize{ The chart shows how the complementarity of LSST, Euclid and WFIRST contributes to significant improvement in constraints on cosmological parameters. As described in the text, the improved constraints on $\sigma_8$ come from the mitigation of intrinsic alignment and other systematics in weak lensing; the improved constraints on the sum of neutrino masses $\sum m_\nu$ (in eV) come from the combination of the weak lensing, CMB convergence maps, and galaxy clustering, in particular by reducing the multiplicative bias in shear measurement. Note that the space-based surveys are assumed to have used ground-based photometry to obtain photo-z's. }} \end{center} \end{figure} Measurements of large-scale structure probe dark energy via baryonic acoustic oscillation features in the galaxy power spectrum. 
In addition, the full shape and amplitude of the power spectrum measure the clustering of matter, if one can also measure the biasing of galaxies relative to the matter. This provides new avenues to measure the properties of dark energy and probe gravity. The upcoming generation of spectroscopic surveys will make detailed maps of the large-scale structure of the universe. DESI, PFS, Euclid and WFIRST will focus on measuring galaxy clustering by obtaining spectra of tens of millions of galaxies over redshift ranges from $z\sim 0.4$--$3.5$. The surveys complement each other using a variety of spectral lines (H$\alpha$, OII, CII) and galaxy types (ELGs and LRGs) as tracers and through having distinct, but overlapping, redshift ranges. DESI and Euclid will be wider surveys, while PFS and WFIRST will go deeper on smaller areas of the sky. The combination of spectroscopic and lensing data enables a test of general relativity on cosmological scales. The spectroscopic data provide information about galaxy positions and motions, which are determined by inertial masses moving in the local gravitational potential. The imaging data, by contrast, show the effect of space-time curvature on the trajectories of photons. If general relativity is the correct description of gravity, then these will agree. Spectroscopic and photometric galaxy cluster surveys from WFIRST, LSST, Euclid, DESI, and others, and their cross-correlation with contemporary CMB polarization (CMB lensing) and temperature (CMB lensing and kinetic Sunyaev-Zel'dovich) data, will provide coincident, complementary dynamical and weak lensing tests of gravity in halos two to three orders of magnitude more massive than the galaxies within them. Thus we expect powerful tests of gravity from a joint analysis of the three surveys.
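The growth-versus-lensing consistency check described above is often phrased in terms of the $E_G$ statistic. A minimal sketch, assuming the standard GR$+\Lambda$CDM expectation $E_G(z) \approx \Omega_{m,0}/f(z)$ with growth rate $f(z) \approx \Omega_m(z)^{0.55}$ (the fiducial $\Omega_{m,0}=0.3$ is chosen only for illustration):

```python
def omega_m(z, om0=0.3):
    """Matter density parameter at redshift z in flat LambdaCDM."""
    ez2 = om0 * (1.0 + z)**3 + (1.0 - om0)  # E(z)^2 = H(z)^2 / H0^2
    return om0 * (1.0 + z)**3 / ez2

def e_g(z, om0=0.3, gamma=0.55):
    """GR + LambdaCDM expectation for the E_G statistic:
    E_G = Omega_{m,0} / f(z), with growth rate f ~ Omega_m(z)^gamma."""
    f = omega_m(z, om0)**gamma
    return om0 / f

for z in (0.3, 0.6, 1.0):
    print(f"z = {z:.1f}:  E_G ~ {e_g(z):.3f}")
```

A measured $E_G$ that deviates from this curve, with lensing from the imaging surveys in the numerator and redshift-space distortions from the spectroscopic surveys in the denominator, would signal a breakdown of general relativity on cosmological scales.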
We study the complementarity of WFIRST, LSST and Euclid by forecasting constraints on matter clustering (we pick $\sigma_8$ for convenience, which is sensitive to dark energy and is partially degenerate with other parameters such as $n_s$ and $\Omega_m$), and on the sum of neutrino masses in the Universe. While we concentrate on these particular parameters, there are many other possibilities considered by the community. Figure~\ref{fig:comb} shows constraints on $\sigma_8$ and the sum of neutrino masses $\Sigma m_\nu$ from the three surveys individually and in combination. We include a CMB prior from Planck, and marginalize over cosmological (and galaxy bias) parameters when appropriate. For the space surveys we assume that photo-z's will be obtained with the aid of ground-based data. WFIRST's densely sampled spectroscopic galaxies will provide a detailed characterization of filamentary structure that is complementary to the lensing measurements provided by imaging surveys. This allows us to understand one of the most elusive systematics in weak lensing surveys -- the intrinsic alignment (IA) of galaxies. One can test and validate IA models, and marginalize over the free parameters of specific models, once a 3D map of the universe is available via galaxy spectra. In LSST, the lensing sources span a large redshift range between 0 and 3 and a large luminosity range. With WFIRST and Euclid, we will have spectroscopic sources spanning a large part of this redshift range, which enables a very good reconstruction of the filament map of the surveyed volume. WFIRST offers a much higher density of sources, while LSST and Euclid will cover a much larger area of sky. The joint analysis leads to significant improvement in the constraint on $\sigma_8$, as shown on the left in Figure~\ref{fig:comb}. We also consider the constraints on massive neutrinos, which suppress the growth of structure below the free-streaming scale.
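Forecasts of this kind rest on the fact that Fisher information from independent data sets simply adds. The sketch below uses invented two-parameter Fisher matrices (placeholders for illustration, not the actual forecasts behind Figure~\ref{fig:comb}) to show how marginalized errors tighten in a joint analysis:

```python
import numpy as np

# Invented 2-parameter Fisher matrices, e.g. for (sigma_8, Sum m_nu).
# These numbers are placeholders for illustration only.
F_lsst   = np.array([[400.0,  60.0], [ 60.0,  25.0]])
F_wfirst = np.array([[250.0, -40.0], [-40.0,  30.0]])

def marginalized_errors(F):
    """1-sigma marginalized errors: sqrt of the diagonal of F^{-1}."""
    return np.sqrt(np.diag(np.linalg.inv(F)))

# For independent surveys, the joint Fisher matrix is the sum.
F_joint = F_lsst + F_wfirst

print("LSST alone  :", marginalized_errors(F_lsst))
print("WFIRST alone:", marginalized_errors(F_wfirst))
print("joint       :", marginalized_errors(F_joint))
```

Because the two invented matrices carry differently oriented parameter degeneracies, the joint errors improve by more than a naive scaling of either survey alone; this is the essence of the complementarity argument.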
The combination of WFIRST and Euclid's spectroscopic sample, along with the CMB lensing convergence, can be used to calibrate the shear multiplicative bias in LSST sources. We follow the methodology of Das, Evrard \& Spergel (2013), first calculating the improvement in the constraint on the shear multiplicative bias, and then propagating this to the constraint on the sum of neutrino masses, which relies on the large sky coverage of LSST. We assume that a Stage III level CMB convergence field will be available. We plot the improvement in $\Sigma m_\nu$ on the right in Fig.~\ref{fig:comb}. \subsection{Galaxy clusters} LSST, Euclid and WFIRST will use galaxy clusters (primarily via gravitational lensing mass measurements) as part of their cosmological measurements. However, both for cosmology and the study of galaxy evolution in clusters, a joint analysis of the data can greatly improve the reach of each mission. Fundamentally, this is because each of the missions observes different aspects of the emission from clusters. LSST provides optical colors, useful for cluster identification via photometric redshifts, as well as (for group-scale structures) time delays for lensed objects. Euclid provides shallow NIR photometry and somewhat deeper NIR spectroscopy, as well as higher resolution optical imaging. WFIRST will provide much higher resolution images and deep NIR photometry and spectroscopy over a smaller area than Euclid. Because of the complementarity of these measurements, combining the data has implications for cluster finding, redshift determination, mass measurement and calibration, and the study of galaxy evolution in clusters. {\it Galaxy cluster finding:} Over the past decade, techniques have been developed that select galaxy groups and clusters very effectively by isolating overdensities simultaneously in projected position and color space (for example, redMaPPer). These techniques have been optimized for optical wavelengths.
LSST will be extremely efficient at finding groups and clusters of galaxies and estimating their redshifts (although eROSITA will also be an efficient cluster finder, the redshift information available from the LSST imaging will allow selection of samples at fixed redshift for study). Although redMaPPer is effective to $z\sim 1$, extending its redshift range will require infrared imaging, which would be provided by Euclid or WFIRST. Given that the number density of detected (as opposed to resolved) sources in LSST and WFIRST is going to be similar, and given the importance of consistent photometry measurements across bands, simultaneous measurements of the pixel data would yield more accurate selection of clusters. {\it Redshift Determination:} As discussed below, the photo-z's of galaxies will be greatly improved by combining the optical and infrared imaging of the three missions. Thus, joint analysis of the LSST+WFIRST/Euclid data gives more accurate cluster sample selection, and allows accurate determination of the background galaxy redshift distribution, which is the largest single source of systematic uncertainty in determination of the mass function from weak lensing mass determinations. {\it Strong and Weak Lensing:} Measuring the weak lensing shear requires high resolution imaging. WFIRST will be able to resolve many more galaxies than LSST, and as discussed above will aid the shear calibration of LSST. In addition, the conversion of shear to mass depends on redshift information. In this way, the combination of LSST, Euclid, and WFIRST data will provide much more accurate mass measurements than any of them will alone. Cluster arc tomography is greatly improved by the joint pixel-level analysis of LSST, Euclid and WFIRST data. For instance, WFIRST will have the angular resolution to discover $\sim 2000$ strong lensing clusters in its survey footprint. However, most of the arcs will be too faint (and too low surface brightness) to have spectroscopic redshifts.
By contrast, LSST will reach surface brightness limits ($>28.7$ mag per square arcsecond) sufficient to detect the arcs and obtain photometric redshift estimates, but will not have the resolution to separate the arcs cleanly from the cluster galaxies in many cases. Thus, the measurement of the tomographic signal from multiple arc systems, which is a powerful test of cosmology, will be much stronger from the joint analysis than from Euclid, WFIRST or LSST alone. Once again, the added value of the pixel data is great, because cluster cores are very dense environments, and obtaining valid photometric redshifts will require use of the space-based imaging to disentangle the light contribution from the arcs and cluster galaxies. {\it Systematics:} One of the sources of systematic uncertainty in weak lensing shear measurements from galaxy clusters comes from the blending of sources both with other sources and with cluster galaxies. The source-source blending is in many ways similar to that encountered in large-scale weak lensing measurements, but in clusters there is an additional source of bias from blending with cluster galaxies. This is both radially dependent and (because cluster concentration evolves with redshift) redshift dependent, and directly affects the normalization of the cluster mass function. Using the WFIRST data to directly measure the deblending bias in the clusters in common between the two surveys would make it possible to correct for the bias. \subsection{Supernovae} Each individual Type Ia supernova (SN~Ia) is potentially a powerful probe of relative distance in the Universe. Thousands of them provide detailed information about the expansion history of the Universe, in particular allowing us to probe the accelerated expansion and the nature of dark energy. Hundreds of thousands of them will allow us to probe the nature of dark energy in different regions and environments.
A set spanning the past 10 billion years of cosmic history will allow for a rich and powerful exploration of the nature of the expansion and more recent acceleration of the Universe, and thus reveal key insights into its constituents. Ground-based SN~Ia searches have provided almost one thousand well-observed SNe~Ia in 20 years of work. Increasingly powerful ground facilities will rapidly increase this number to thousands per year (e.g., Pan-STARRS, PTF and DES) and then hundreds of thousands per year in the era of LSST. For a smaller subset (hundreds) of these supernovae, even calibrated spectrophotometric time-series have been obtained (by the Nearby Supernova Factory). However, the rest-frame emission of SNe~Ia is line-blanketed at UV wavelengths. Thus, by $z\sim 1$ their light has already redshifted out of the optical atmospheric window easily accessible from the ground. In addition, because the relative distances across redshifts are estimated based on relative flux differences across wavelength, accurate cross-wavelength calibration is critical to use supernovae to determine the properties of dark energy. The stability of WFIRST in space yields significant benefits in both wavelength coverage and flux calibration for thousands of SNe~Ia per year from $0.1<z<1.7$. The accessibility of the ground allows for the large aperture and high etendue of the LSST system, which can observe tens of thousands of SNe~Ia per year from $0.03<z<1$. Measuring distances from $z\sim0.03$ to $z\sim1.7$ will connect the era when dark energy first made its presence known, through the transition to an accelerating expansion, and up to the present day. These are the key redshifts over which we will discover hints to the underlying nature of dark energy, and the combination of the high-redshift reach and calibration of WFIRST and the huge numbers and volume coverage of LSST will yield a strong and powerful pairing for SN cosmology.
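The observable underlying this program is the SN~Ia distance modulus, $\mu(z) = 5\log_{10}[d_L(z)/10\,\mathrm{pc}]$. A minimal sketch for a flat $\Lambda$CDM universe (fiducial $H_0 = 70$ km/s/Mpc and $\Omega_m = 0.3$ chosen only for illustration) over the combined $0.03 < z < 1.7$ range:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def luminosity_distance(z, h0=70.0, om0=0.3, n=2048):
    """Luminosity distance in Mpc for flat LambdaCDM, via trapezoidal
    integration of the inverse dimensionless Hubble parameter."""
    zz = np.linspace(0.0, z, n)
    inv_e = 1.0 / np.sqrt(om0 * (1.0 + zz)**3 + (1.0 - om0))
    d_c = (C_KM_S / h0) * np.sum(0.5 * (inv_e[1:] + inv_e[:-1]) * np.diff(zz))
    return (1.0 + z) * d_c  # comoving -> luminosity distance

def distance_modulus(z, **kw):
    """mu = 5 log10(d_L / 10 pc), with d_L in Mpc (1 Mpc = 1e5 * 10 pc)."""
    return 5.0 * np.log10(luminosity_distance(z, **kw) * 1.0e5)

for z in (0.03, 0.5, 1.0, 1.7):
    print(f"z = {z:4.2f}:  mu = {distance_modulus(z):5.2f} mag")
```

Distinguishing dark energy models amounts to resolving small differences in this curve across the full redshift range, which is why the cross-wavelength calibration discussed above is so critical.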
The key question for this work will be the control of systematic uncertainties as we explore a large span of time and changing populations of host-galaxy environments in which the supernovae are born. The combination of WFIRST, LSST, and other ground-based follow-up instruments will make possible an unprecedentedly rich characterization of each supernova, so that low- and high-redshift supernovae can be matched at a more detailed level than with the broad ``Type~Ia'' definition. For example, a time series of detailed spectrophotometry could be obtained for the same rest-wavelengths for every supernova. Modern SN distance analyses are already joint analyses. The current Hubble diagrams are constructed from supernovae from different surveys (e.g., Betoule et al. 2014 combined the Supernova Legacy Survey (SNLS), the SDSS Supernova Survey, and the current heterogeneous sample of nearby $z<0.05$ SNe~Ia). But a true joint analysis should begin at the pixel level and take advantage of all the internal details about calibration and photometry within each survey. Specifically, when combining SNe~Ia from different data sources, a joint analysis is beneficial because of: (1) consistent pixel-level analysis, in particular photometric extraction; and (2) deep cross-checks of photometric calibration not possible with just extracted SN light curves. Supernova surveys have distinct science requirements both on supernova discovery and early classification, and on the quality of distance modulus and redshift determinations of discovered supernovae. Each science requirement propagates into hardware and survey requirements. Technical requirements for supernova discovery (e.g., large field of view, detection of new point sources) can have a more cost-effective ground-based solution, whereas requirements for photometry and spectroscopy (e.g., high signal-to-noise SN spectral features, infrared wavelength coverage) are only fulfilled with a space observatory.
A case in point is the baseline WFIRST plan, where an imaging survey is conducted to identify supernovae for targeted spectroscopic observations. The large field of view of LSST and the availability of SN photons detectable by LSST out to $z=1.2$ should make the LSST the better instrument for SN discovery, especially at low redshifts where the angular density of potential host galaxies is low. Assuming the triggering logistics work, this might free up WFIRST to do what it is better at: obtaining supernova spectrophotometric time series. In addition, there is a benefit of observing the same SN with multiple overlapping surveys. Such observations provide: \begin{itemize} \item Direct cross-instrumental calibration on the source of interest. (It is important to note, however, that this calibration is not the same as the cross-wavelength calibration that is required when comparing supernovae at very different wavelengths.) \item Expanded coverage of the wavelength and temporal range of the supernova. \item A quantitative assessment of systematic errors. \end{itemize} Finally, supernova cosmology analyses require a suite of additional external resources beyond the direct observations of high-redshift supernovae. Thus, planning for a joint LSST-WFIRST SN cosmology analysis will yield significant gains in combining resources to obtain external data and theory needs. Some key examples include: \begin{itemize} \item A low-z SN set, including possible spectrophotometric time-series follow-up. \item Fundamental flux calibrators (which {\em can} provide the cross-wavelength calibration that is required when comparing supernovae at very different wavelengths). \item High-resolution spectroscopy of host galaxies. \item Improved SN~Ia theory and empirical modeling. \end{itemize} \subsection{Strong lensing} The LSST, Euclid and WFIRST datasets will be particularly complementary in the field of strong gravitational lensing.
Galaxy-scale strong lenses can be used as probes of dark energy (via time delay distances or multiple source plane cosmography). The time delays between multiple images in a strong gravitational lens system depend, as a result of the lens geometry, on the underlying cosmology -- primarily the Hubble constant but also, in large samples where internal degeneracy breaking is possible, the dark energy parameters (Refsdal 1964; Tewes et al.\ 2013; Suyu et al.\ 2014). Lens systems with sources at multiple redshifts may also provide interesting and competitive constraints on cosmological parameters, via the ratio of distance ratios in each system. Strong lenses also provide unique information about dark matter, via the central density profiles of halos on a range of scales, and the mass function of sub-galactic mass structures that cause measurable perturbations of well-resolved arcs and Einstein rings. Both this science case and strong lensing cosmology will benefit greatly from the large samples of lenses detectable in these wide field imaging surveys; we sketch these cases out below in Section 4, and identify how high fidelity space-based imaging and spectroscopy will enable new measurements. \subsection{Photometric redshifts} Photometry provides an efficient way to estimate the physical properties of galaxies (i.e.\ their redshifts, spectral types, and luminosities) from a small set of observable parameters (e.g.\ magnitudes, colors, sizes, and clustering). LSST will depend on these photometric redshifts for all of its major probes of dark energy, as it is infeasible to obtain spectroscopic redshifts for the bulk of objects in the LSST sample with any reasonable amount of telescope time.
For the WFIRST and Euclid missions, three of the key applications will rely on photometric redshifts: measures of the mass power spectrum from analyses of the weak lensing of faint galaxies, breaking of the degeneracies between possible redshift solutions for galaxies exhibiting only a single emission line in grism spectroscopy, and the identification of high redshift Type Ia supernovae based on the photometric redshifts of their host galaxies. In addition, photometric redshifts will play a key role in much of the LSST, WFIRST, and Euclid extragalactic science, where we wish to constrain how the demographics of galaxies change over time. The high efficiency of photometric redshifts comes at a cost. The primary features that enable photometric redshift estimation are the transitions of the Balmer (3650 \AA) and Lyman (912 \AA) breaks through a series of photometric filters. Uncertainties in isolating the position of the breaks due to the low spectral resolution of the photometry give rise to a scatter between photometric redshift estimates and the true redshifts of galaxies. Confusion between breaks (due to sparse spectral coverage and incomplete knowledge of the underlying spectral energy distributions) leads to catastrophic photo-z errors, as seen in Figure~\ref{fig:wfirst_photoz}. Existing studies have demonstrated that while photometric redshift accuracies of $\sigma_z\sim 0.007(1+z)$ are possible for bright objects with many filters, uncertainties of $\sigma_z\sim 0.05(1+z)$ are more typical when restricted to 5-6 bands of deep imaging. Catastrophic photometric redshift errors (where the difference between the photometric and spectroscopic redshifts, $\Delta z$, exceeds the $3\sigma$ statistical error) typically occur in well over 1\% of cases. As the photo-z scatter and catastrophic failure rate increase, information is degraded and dark energy constraints will weaken.
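The scatter and catastrophic-outlier statistics quoted above are commonly computed with a robust NMAD estimator and a fixed cut on $|\Delta z|/(1+z)$. A sketch on a toy catalog built to mimic the numbers in the text ($0.05(1+z)$ core scatter, $\sim$2\% catastrophic failures; the catalog is simulated, not survey data):

```python
import numpy as np

def photoz_metrics(z_spec, z_phot, outlier_cut=0.15):
    """Robust photo-z metrics on dz = (z_phot - z_spec) / (1 + z_spec):
    the NMAD scatter and the catastrophic-outlier fraction."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    outlier_frac = np.mean(np.abs(dz) > outlier_cut)
    return sigma_nmad, outlier_frac

# Toy catalog: Gaussian core scatter of 0.05(1+z), plus 2% of objects
# scattered to a random redshift (mimicking break confusion).
rng = np.random.default_rng(1)
n = 50_000
z_spec = rng.uniform(0.1, 3.0, n)
z_phot = z_spec + 0.05 * (1.0 + z_spec) * rng.normal(size=n)
bad = rng.random(n) < 0.02
z_phot[bad] = rng.uniform(0.0, 3.0, bad.sum())

sigma_z, f_out = photoz_metrics(z_spec, z_phot)
print(f"sigma_z/(1+z) ~ {sigma_z:.3f}, outlier fraction ~ {100*f_out:.1f}%")
```

Here the outlier cut of 0.15 is three times the assumed $0.05(1+z)$ core scatter, matching the $3\sigma$ definition of a catastrophic error used in the text; the NMAD estimator recovers the core scatter even in the presence of the outliers.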
Furthermore, if there are non-negligible ($\gtrsim 0.2\%$) systematic offsets in photo-z's, or if the $\Delta z$ distribution is mischaracterized, dark energy inference will be biased at levels comparable to or greater than the expected random errors. As a result, careful calibration and validation of the photometric redshifts will be necessary for the LSST, WFIRST, and Euclid surveys. \subsubsection{The impact of Euclid and WFIRST near-infrared data on LSST photometric redshifts} The LSST filter system covers the $u, g, r, i, z$ and $y$ passbands, providing substantial leverage for redshift estimation from $z=0$ to $z>6$. For the LSST ``gold'' sample of galaxies, $i<25.3$, Figure~\ref{fig:wfirst_photoz} shows how the HLS for WFIRST, with four bands comparable in depth to the 10-year LSST survey (i.e.\ 5$\sigma$ extended source depths of Y$_{AB}$=25.6, J$_{AB}$=25.7, H$_{AB}$=25.7, and F184$_{AB}$=25.2), will significantly improve on the photometric redshift performance from LSST alone. For example, at redshifts $z>1.5$, where the Balmer break transitions out of the LSST $y$ band and into the WFIRST and Euclid infrared bands, the inclusion of the WFIRST data results in a reduction in $\sigma_z$ by a factor of more than two ($1.5<z<3$), and a reduction in the fraction of catastrophic outliers to $<$2\% across the full redshift range. Euclid's three-band NIR photometry, while shallower, will have a much greater overlap with LSST and will also provide a quantitative improvement in LSST photo-z's. Combining the LSST, WFIRST and Euclid photometric data effectively will depend, however, on the details of the respective filter systems, their signal-to-noise, our ability to extract unbiased photometric measurements from extended sources (e.g.\ the deblending of sources using the higher spatial resolution of the WFIRST data), and the accuracy of the photometric calibration of the data both across the sky and between the near-infrared and optical passbands.
\begin{figure}[h] \begin{center} \includegraphics[scale=.45]{LSST_WFIRST_szpz.png} \end{center} \caption{\label{fig:wfirst_photoz}\footnotesize A comparison of the relative photometric redshift performance of the LSST optical filters (left panel) with a combination of LSST and WFIRST filters (right panel). The simulated data assume a 10-year LSST survey and a ``gold sample'' with $i<25.3$. The addition of high signal-to-noise infrared data from WFIRST reduces the scatter in the photometric redshifts by roughly a factor of two (at redshifts $z>1.5$) and the number of catastrophic outliers by a factor of three. These simulations do not account for deblending errors or photometric calibration uncertainties, and assume complete knowledge of the underlying spectral energy distributions of galaxies as an ensemble. } \end{figure} \subsubsection{Mitigating systematics with WFIRST and Euclid spectroscopy} \begin{figure}[h] \begin{center} \includegraphics[scale=.45]{ifu_photoz} \end{center} \caption{\label{fig:ifu}\footnotesize Predictions of the fraction of LSST weak lensing sample objects that would yield a secure (multiple-confirmed-feature) spectroscopic redshift, based either on 1440-second exposure time with {\it WFIRST} (colored regions) or 10 nights' open-shutter-time spectroscopy with the Subaru/PFS spectrograph (black curve). {\it WFIRST} IFU spectroscopy would provide training redshifts for objects at higher $z$ than are easily accessible from the ground, particularly if read noise per pixel is small (the colored regions indicate a range of feasible scenarios). Longer exposure times (e.g., in supernova fields or by optimized dithering strategies) could enhance the success rate further.} \end{figure} The optimization of photometric redshift algorithms and the calibration of photometric redshift uncertainties both require spectroscopic samples of galaxies.
If simple algorithms are used, more than 100 spectroscopic survey regions (of $\sim$0.25 deg$^2$) with at least 300-400 spectroscopic redshifts per region may be required to optimize a photometric redshift algorithm (whether by refining templates and photometric zero points or as input for machine learning algorithms) to ensure that their accuracy is not limited by sample variance in the spectroscopic training set (Cunha et al. 2012); with techniques that take this variance into account, 15-30 fields may be sufficient (Newman et al. 2014). An ideal training set would span the full range of properties (including redshifts) of the galaxies to which photometric redshifts will be applied. To the degree to which we do not meet this goal, we can expect that photometric redshift errors will be degraded, weakening constraints on dark energy as well as other extragalactic science. Current spectroscopic samples do not fare well: surveys to $R=24.1$ or $i=22.5$ (more than two magnitudes shallower than the LSST ``gold sample'') with 8-10m telescopes have obtained $>$99\% secure redshifts for only 21-60\% of all targeted galaxies, and $>$95\% secure redshifts for 42-75\% of the galaxies (incorrect redshift rates above 1\% would lead to photo-z systematics that exceed LSST requirements). WFIRST spectroscopy can address these limitations in a number of ways. Depending on the final configuration and dithering strategy for {\it WFIRST}, an object in the LSST weak lensing sample will fall on the IFU field of view at least 10\% (and up to $\sim 100\%$) of the time. Most objects in this sample would yield a successful redshift with a $\sim1440$ sec exposure (see Figure~\ref{fig:ifu}). For an IFU with a $3'' \times 3''$ field of view, at least 10,000 spectra down to the LSST weak lensing depth, corresponding to roughly 20,000 down to the {\it WFIRST} limits, would be measured concurrently with the WFIRST HLS (with minimal impact from sample/cosmic variance).
This spectroscopy would have very different incompleteness from ground-based samples, allowing a broader range of galaxies to have well-trained photometric redshifts, with accuracy limited by the imaging depth rather than our knowledge of galaxy SEDs. If our spectroscopic samples do not have a success rate approaching 100\%, accurate characterization of photo-z errors (required for dark energy inference) is likely to instead be based on the cross-correlation between spectroscopic and photometric samples (cf. Newman 2008). To meet LSST calibration requirements, such an analysis will require secure ($<1\%$ incorrect) spectroscopic redshifts for $\sim 100,000$ galaxies spanning the full redshift range of the photometric samples of interest and an area of at least a few hundred square degrees. Euclid and WFIRST grism spectroscopy will provide spectra for many millions of galaxies over wide areas (e.g., $2\times10^{7}$ galaxies in the WFIRST HLS). However, this spectroscopy will provide the highly secure, multiple-feature redshifts required for training and calibrating photo-z's in only very limited redshift ranges. As a result, Euclid and WFIRST grism spectroscopy may contribute to photometric redshift calibration in combination with other datasets spanning the remaining redshift range (e.g., from DESI), but cannot solve this problem on their own. \subsection{Stellar science} New insights on stellar populations in the Milky Way and nearby galaxies depend critically on gains in photometric sensitivity and resolution. Thus far, the state-of-the-art imaging surveys of our Galaxy have involved relatively modest tools with respect to today's capabilities. For example, the SDSS survey is both shallow and of low resolution, and IR surveys such as 2MASS and WISE are based on even smaller telescopes operating at even lower resolution. The combination of LSST, Euclid, WFIRST and Gaia will transform our view of the Milky Way and nearby galaxies.
LSST will map 18,000 square degrees of the night sky every few days, across 6 visible light filters. The imaging depth of LSST will be over 100 times (5 magnitudes) deeper than SDSS, and at higher resolution. WFIRST will overlap some of the same footprint, but will also extend the spectral range of Milky Way surveys to near-infrared wavelengths at even higher spatial resolution (0.1 arcsecond pixels). Euclid provides much higher spatial resolution than LSST in the visible (aiding the deblending of sources and crowded-field photometry), and Gaia addresses the single biggest contribution to the error budget of most stellar population studies aimed at characterizing physical properties: knowledge of the distances of sources. Much of the stellar population science case for LSST (e.g., from the Science Book) overlaps that from WFIRST (e.g., the community science white papers in the 2013 WFIRST SDT report). These include studies of the initial mass function, star formation histories for Milky Way components, near-field cosmology through deep imaging and proper motions of dwarf satellites, discovery and characterization of halo substructure in nearby galaxies, and more. Given the wavelength complementarity alone, most stellar population investigations will be aided by the panchromatic baseline of LSST (and Euclid) + WFIRST (e.g., interpreting SEDs into fundamental stellar properties, increasing the baseline for star formation history fits from multi-band color-magnitude diagrams, confirming memberships of very blue or red stars in either data set, etc.). However, the overall quality of all three data sets for general investigations can be greatly enhanced (and in a uniform way) by jointly re-processing the pixel data sets for LSST, Euclid, and WFIRST.
To highlight the technical requirements for this joint processing, we point to three LSST and WFIRST ``killer apps'' in stellar population science: \begin{itemize} \item Establishing and characterizing the complete spectrum of Milky Way satellites \item Testing halo formation models by measuring the shapes, substructure content, masses, and ages of a large set of stellar halos within 50 Mpc \item Measuring the star formation history, Galactic mass budget, and spatially and chemically-dependent IMF \end{itemize} In each of these cases, LSST and Euclid have the advantage with respect to field of view (and cadence). WFIRST has the advantage of higher photometric sensitivity to red giant branch and low mass star tracers, and much higher resolution for star-galaxy separation in the IR. Joint processing of the three data sets offers the following rewards: \begin{enumerate} \item By using the WFIRST astrometry to deblend sources and feed new positions of stars to LSST (and, to a lesser extent, Euclid), the LSST and Euclid photometry can be made to go deeper. In the actual processing, the pixels from LSST, Euclid and WFIRST would be analyzed simultaneously to detect and perform photometry on all detections in any of the more than a dozen LSST+Euclid+WFIRST bandpasses. \item WFIRST photometry alone will be critically insensitive to hot stars (e.g., white dwarfs in the Milky Way, horizontal branch stars in the Local Group). In most halo and sparse populations, these objects will be easily detected by LSST, and to a shallower depth, Euclid. As in (1), the LSST photometry can be used to obtain new measurements in the WFIRST data set and to get an infrared color for these sources. \item As a result of the above, many more sources in the overlapping footprint of the three telescopes will have high-precision panchromatic photometry.
This increased wavelength baseline will yield a much more accurate characterization of the underlying stellar populations (e.g., the sensitivity to age, reddening, and distance variations increases on this baseline). Note, this is different from matching LSST, Euclid, and WFIRST catalogs, since the goal here is to detect sources that otherwise would not have been measured in one of the three data sets. The biggest gain in our interpretation of the stellar populations to derive physical properties will come from accurate distances to the stellar populations from Gaia. The different photometric sensitivities are not a concern here, since there are bright tracers in most populations that Gaia can target. \item The WFIRST morphological classification of faint sources provides a crucial assessment of LSST's star-galaxy separation. \end{enumerate} Many of these joint analyses can be done at the catalog level, while others likely require working with the pixels from each of the surveys. \subsection{Galaxy and quasar science} The WFIRST photometry will go to a similar depth (on an AB scale) as the full coadded LSST images. Euclid NIR data, while shallower, will overlap a much larger area of the LSST survey. Thus the combination of LSST, Euclid and WFIRST will give high S/N 9-band photometry stretching from 3000 \AA\ to 2 microns (a factor of six in wavelength, twice that of either WFIRST/Euclid or LSST alone) for the Gold sample ($i<25.3$) of hundreds of millions of galaxies. This broad wavelength coverage probes the SEDs of galaxies beyond the 4000 \AA\ break to redshift 3 and beyond, allowing detailed determination of star formation rates and stellar masses during the cosmic epoch when galaxies assembled most of their stars. (Unobscured) AGN will be identified both from their SEDs and from their variability in the repeat LSST imaging.
The Hubble Space Telescope has been able to obtain comparably deep photometry over a similar wavelength range over tiny areas of sky; for example, the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) covers only $0.2$ deg$^2$, while WFIRST will cover an area ten thousand times larger. With this much larger sky coverage, the combination of LSST and WFIRST can explore the evolution of galaxy properties over this broad range of cosmic history. The weak lensing signature from individual galaxies is too small to be measured, but stacking analyses can be done in fine bins of stellar mass and redshift, exploring the relationship between dark matter halo mass, stellar populations, galaxy morphology, AGN activity and environments during the peak of galaxy assembly. The exquisite resolution of the WFIRST images will also allow the role of mergers in galaxy evolution to be quantified, with galaxy pairs separated by as little as 3 kpc discernible over essentially the entire redshift range. The multi-band photometry will allow both quasars and galaxies at much higher redshifts to be identified as well. Current studies with HST have allowed identification of a handful of galaxy candidates to redshift 8. WFIRST + LSST will expand these studies over 10,000 times the area, looking for objects that are dropouts in the u through y filters. The wide area is particularly important for discovering the rarest, most luminous objects at these redshifts, which can be followed up in detail with JWST and the next generation of $\sim$30-meter telescopes. The study of such objects probes the exit from the Dark Ages, as the intergalactic medium became reionized. Understanding the formation of the first galaxies and the reionization process in detail requires putting these objects in their large-scale structure context, which is only possible with samples selected over large solid angles.
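The dropout selection mentioned above can be illustrated with a simple color cut. The following is a minimal sketch, not the actual LSST/WFIRST selection criteria: the band names, magnitudes and colour thresholds are illustrative placeholders, and real selections also use signal-to-noise and morphology cuts.

```python
# Minimal sketch of a Lyman-break "dropout" colour selection.
# All thresholds and example magnitudes are illustrative, not the
# criteria any survey will actually adopt.

def is_dropout(mags, drop_band, detect_band, red_band,
               break_colour=1.5, flat_colour=0.5):
    """Flag a source whose flux breaks sharply blueward of `detect_band`.

    mags        : dict of band -> AB magnitude (None = undetected)
    drop_band   : band expected to show the Lyman break (e.g. 'z')
    detect_band : band just redward of the break (e.g. 'y')
    red_band    : an even redder band (e.g. 'J') used to require a
                  flat continuum, rejecting cool Galactic dwarfs.
    """
    m_drop, m_det, m_red = mags[drop_band], mags[detect_band], mags[red_band]
    if m_det is None or m_red is None:
        return False          # must be detected redward of the break
    if m_drop is None:
        return True           # non-detection blueward of the break: candidate
    return (m_drop - m_det > break_colour) and (m_det - m_red < flat_colour)

# A high-z candidate: invisible in z, flat y-J colour.
candidate = {'z': None, 'y': 26.0, 'J': 25.9}
# A cool dwarf: detected in z but monotonically red through the IR.
dwarf = {'z': 24.0, 'y': 22.0, 'J': 20.5}
print(is_dropout(candidate, 'z', 'y', 'J'))  # True
print(is_dropout(dwarf, 'z', 'y', 'J'))      # False
```

The second test illustrates why the infrared bands matter: without the redward colour the dwarf would pass the simple break cut.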
In particular, the expanding bubbles by which reionization is believed to progress are expected to subtend tens of arcminutes on the sky, requiring wide-field samples to explore them. \subsection{The variable universe} Time domain astronomy encompasses an extremely varied suite of astrophysical topics, from the repeated pulsations of regular variable stars that serve as tracers of Galactic structure and cosmological distances (e.g. RR Lyrae and Cepheids), to transient events that by their nature occur either unpredictably or only once (e.g. stellar flares, supernovae, tidal disruption events). As suggested by these examples, transients and variables occur both locally and at the most distant reaches of our universe, and so comprise a population with both a very high dynamic range in luminosity and a wide span of colors. Time domain astronomy is a rapidly growing field: the rich public dataset provided by LSST will enable a large part of the astronomy community to analyze observations of transient and/or variable objects. The potential synergies between LSST, Euclid and WFIRST are as compelling and varied as time domain astronomy itself; here we highlight some illustrative examples. The combined WFIRST, Euclid and LSST datasets would greatly facilitate the characterization and classification of transient and variable objects. One of the challenges imminent in the coming LSST era is the identification of samples of variable and transient events to be targeted for additional observations. On the transient side, the anticipated high volume of the LSST alert stream requires rapid triage of large numbers of events to identify the most important transients: those events which, if not pursued with followup observations immediately, will be opportunities lost forever.
Furthermore, the high volume of events also necessitates the prioritization of the most compelling sources to follow up, which in turn requires rapid characterization of an event (where characterization here is distinguished from actual classification into a known category of object). The WFIRST and Euclid datasets will assist with these challenges in a number of ways. First, the deep infrared observations will greatly assist in the identification of the ``foreground fog'' of very red Galactic sources: low mass stars and brown dwarfs. While LSST's depth will identify many of these objects in quiescence, some percentage will escape even LSST on account of their very red colors and low luminosities. Therefore, flares on these objects may still create what appear to be cosmological transient events in spite of their local origin. The deep, panchromatic characterization of the static sky will enable the association of transient events with co-located sources, whether those sources are quiescent detections of the same source, or possible host galaxies of the event whose properties may give clues for a deeper understanding of those transients. As pointed out in the previous subsection on stellar science, the strength of the combined datasets goes far beyond matching the WFIRST, Euclid and LSST catalogues. Rather, it increases the depth of the photometry by making it possible to detect sources that would otherwise not have been detected. The ability to identify (or rule out) infrared counterparts may also be helpful in confirming some of the most elusive transients. For example, potential electromagnetic counterparts to gravitational wave sources are expected by current models to emit their peak emission in near-IR wavelengths.
The association of any gravitational wave event detection with a known galaxy greatly reduces the search volume for EM counterparts to gravitational wave events, and the elimination of faint red background galaxies as putative associated transient kilonova emission will help confirm potential multi-messenger events. \newpage \section{Coordinated observations} A significant fraction of WFIRST time will be devoted to a Guest Observer program. In this section of the report, we consider a number of potential GO programs that will likely follow up on or extend LSST observations. While the GO programs will likely be selected by a future Time Allocation Committee (TAC), we will benefit by preparing for likely observing scenarios. Here, we consider three examples of joint LSST/WFIRST studies: WFIRST follow-up of strong lenses and lensed AGNs found in LSST, joint observations of deep drilling fields, and joint WFIRST/LSST searches for the afterglows of gravitational wave bursts. \subsection{Measuring dark energy with strong lens time delays} As described in the previous section, time delays between multiply imaged sources are a powerful cosmological probe. LSST will enable time delay distance measurements towards hundreds of lensed AGN and supernovae (Oguri \& Marshall 2010, LSST Science Book). The detection of candidate systems, which have image separations of just 1-3 arcseconds, and confirmation of them by lens modeling, depends critically on the quality and depth of the imaging data, suggesting that there is much to be gained by searching a combined LSST/Euclid/WFIRST imaging dataset. Measuring cosmological parameters with this sample will require high signal-to-noise ratio and high resolution imaging and spectroscopy of any Einstein Rings discovered in order to constrain the lens mass distributions.
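The cosmological leverage of lens time delays comes from the time-delay distance $D_{\Delta t} = (1+z_l)\,D_l D_s/D_{ls}$, which scales as $1/H_0$. The following sketch, assuming a flat $\Lambda$CDM universe (matter plus cosmological constant only) and purely illustrative parameter values, makes that scaling explicit:

```python
# Sketch of time-delay cosmography: the measured delay between lensed
# images scales with D_dt = (1+z_l) * D_l * D_s / D_ls, which is
# inversely proportional to H0. Flat LambdaCDM assumed; all parameter
# values below are illustrative, not survey measurements.
import math

C_KM_S = 299792.458  # speed of light [km/s]

def comoving_distance(z, h0=70.0, om=0.3, steps=10000):
    """Line-of-sight comoving distance [Mpc], trapezoidal integration."""
    f = lambda zz: 1.0 / math.sqrt(om * (1 + zz) ** 3 + (1 - om))
    dz = z / steps
    s = 0.5 * (f(0.0) + f(z)) + sum(f(i * dz) for i in range(1, steps))
    return (C_KM_S / h0) * s * dz

def time_delay_distance(z_l, z_s, h0=70.0, om=0.3):
    """D_dt = (1+z_l) * D_l * D_s / D_ls (angular diameter distances)."""
    dc_l, dc_s = comoving_distance(z_l, h0, om), comoving_distance(z_s, h0, om)
    d_l = dc_l / (1 + z_l)
    d_s = dc_s / (1 + z_s)
    d_ls = (dc_s - dc_l) / (1 + z_s)   # valid only in flat space
    return (1 + z_l) * d_l * d_s / d_ls

# D_dt scales as 1/H0, so a measured delay pins down H0:
d70 = time_delay_distance(0.5, 2.0, h0=70.0)
d77 = time_delay_distance(0.5, 2.0, h0=77.0)
print(round(d70 / d77, 3))  # 1.1, i.e. 77/70
```

A 10\% change in $H_0$ changes $D_{\Delta t}$, and hence the predicted delay, by 10\%, which is why per-lens delay measurements at the few-percent level are cosmologically interesting.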
By integrating longer on the most interesting systems, WFIRST can achieve background source densities of over 200 sources/arcmin$^2$, which will enable detailed weak lensing mass maps of these systems. Joint analysis of all LSST and wide field infrared survey data will improve the completeness and purity of the lens searches in the following ways: 1) a combined catalog of LSST and space-detected objects will produce a more accurate initial list of target lenses, and 2) joint analysis of all the available cutout images will enable better lens modeling, leading to a much cleaner sample of candidates to be confirmed in the followup observations. Being able to model large proportions of the lens sample precisely with high resolution space-based imaging will enable the surveys themselves to constrain the structure and evolution of massive galaxies across a homogeneously-selected sample, providing important prior information needed by other cosmology probes. The joint optical and infrared photometry provided by LSST, Euclid and WFIRST will constrain the mass environments of the strong lenses (including the line-of-sight environment) by providing accurate photometric redshifts and stellar masses for all the relevant galaxies in the lightcone (e.g., Greene et al. 2013; Collett et al. 2013). \subsection{Probing dark matter with grism spectroscopy of lensed AGN} Measuring the mass function of dark matter subhalos down to small masses (e.g., $10^6$ solar masses) is a unique test of the CDM paradigm, and of the nature of dark matter in general. The CDM mass function is expected to be a power law rising as $M^{-1.9}$ down to Earth-like masses, while a generic feature of self-interacting dark matter and warm dark matter models is to introduce a lower mass cutoff. Current limits set that cutoff at somewhere below $10^9 M_\odot$.
We know from observations of the environs of the Milky Way that there is a shortage of luminous satellites at those masses, but this may be entirely due to the physics of star formation at the formation epochs of these satellites. Dark Milky Way satellites at lower masses may be detectable through their dynamical effects on tidal streams, but statistical studies of the one galaxy we live in will always carry high uncertainty. A powerful way to detect subhalos independent of their stellar content and in external galaxies is via the study of large samples of strong gravitational lenses, in particular the so-called flux ratio anomalies (see Treu 2010, and references therein). Small masses located in projection near the four images of quadruply-lensed quasars cause a strong distortion of the magnification and therefore alter the flux ratios. However, anomalous flux ratios can also be caused by stellar microlensing if the source is small enough. One solution is to observe the narrow line region emission lines, where the lensed quasar emission is sufficiently extended to be unaffected by microlensing. Large samples of thousands of lenses could achieve sensitivity down to the $10^6$--$10^7$ solar masses required to probe the nature of dark matter. Unfortunately, at the moment only three dozen quads are known, and only a handful of those are bright enough to be observable from the ground (e.g. Chiba et al. 2005). Discovering thousands of quads will require surveys covering thousands of square degrees at sub-arcsecond resolution (Oguri \& Marshall 2010). Between them, Euclid, WFIRST and LSST should discover over 10,000 lensed AGN, one sixth of which will be quad systems. Euclid and WFIRST have the potential to observe of order a thousand of these quads spectroscopically themselves, using narrow near-infrared emission lines to avoid the microlensing and enabling the wholesale application of the lens flux ratio experiment (Nierenberg et al. 2014).
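The sensitivity gain from pushing the cutoff mass down can be seen directly from the power-law mass function quoted above. The sketch below integrates $dN/dM \propto M^{-1.9}$ in closed form; the normalization is an arbitrary illustrative value, not a calibrated one.

```python
# Sketch of how a low-mass cutoff changes expected subhalo counts under
# the CDM mass function dN/dM ~ M^(-1.9). The normalization `norm` is
# arbitrary and illustrative, not calibrated against simulations.

def subhalo_count(m_min, m_max, slope=-1.9, norm=1.0):
    """Closed-form integral of norm * M**slope dM from m_min to m_max."""
    p = slope + 1.0
    return norm / p * (m_max ** p - m_min ** p)

cdm = subhalo_count(1e6, 1e9)   # CDM: subhalos counted down to 10^6 Msun
wdm = subhalo_count(1e8, 1e9)   # warm DM-like cutoff at 10^8 Msun
# With a slope this steep, the counts are dominated by the lowest
# masses, so probing below the putative cutoff is highly diagnostic:
print(cdm > 50 * wdm)  # True
```

This is why reaching $10^6$--$10^7\,M_\odot$ sensitivity, rather than the current $\sim 10^9\,M_\odot$ limit, is such a decisive test: the predicted CDM counts rise by orders of magnitude below the cutoff scales of the alternative models.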
\subsection{Deep drilling fields} The LSST collaboration plans several ``deep drilling'' fields. While this aspect of the LSST survey is still under design, the broad plan is to reach $AB \approx 28$ in $ugriz$ bands (and shallower in $y$), which is comparable to the medium depth planned for the WFIRST SN survey (J=27.6 and H=28.1 over 8.96 deg$^2$). The WFIRST deep SN fields will be deeper with depths of J=29.3 and H=29.4 over 5.04 deg$^2$. LSST has currently selected four fields, each with areas of 9.6 deg$^2$: ELAIS S1, XMM-LSS, COSMOS, and Extended Chandra Deep Field South. With three days of observation per band, WFIRST can cover each of these deep fields to AB = 28. The goals of these fields (Gawiser et al., white paper; Ferguson et al., white paper) are to (1) Test and improve photometric redshifts critical for LSST Main Survey science. (2) Determine the flux distribution of galaxy populations dimmer than the Main Survey limit that contribute to clustering signals in the Main Survey due to lensing magnification. (3) Measure clustering for samples of galaxies and Active Galactic Nuclei (AGN) too faint to be detected in the Main Survey. (4) Characterize ultra-faint supernova host galaxies. (5) Characterize variability-selected AGN host galaxies. (6) Identify nearby isolated low-redshift dwarf galaxies via surface-brightness fluctuations. (7) Characterize low-surface-brightness extended features around both nearby and distant galaxies. (8) Provide deep ``training sets'' for characterizing completeness and bias of various types of galaxy measurements in the wide survey (e.g. photometry, morphology, stellar populations, photometric redshifts). All of these goals will be clearly enhanced by complementary data in the infrared. As already outlined in the report, infrared data will significantly improve the accuracy of photo-z's.
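The depth trades quoted above (e.g. three days per band to reach AB = 28) follow from the standard background-limited scaling: signal-to-noise grows as $\sqrt{t}$, so the limiting magnitude deepens by $1.25\log_{10}(t_2/t_1)$. A minimal sketch, with illustrative reference depths rather than actual survey values:

```python
# Sketch of background-limited depth scaling: S/N grows as sqrt(t), so
# the limiting magnitude deepens by 1.25*log10(t2/t1). The reference
# depth below is illustrative, not an official survey figure.
import math

def deepen(m_lim, t_ratio):
    """Limiting AB magnitude after multiplying the exposure time by t_ratio."""
    return m_lim + 1.25 * math.log10(t_ratio)

# Quadrupling the exposure time buys ~0.75 mag:
print(round(deepen(28.0, 4.0), 2))  # 28.75
```

The same relation underlies the later discussion of reallocating LSST deep-drilling time: saving the time spent in bands covered by WFIRST buys roughly 1.25 mag of extra depth per decade of added exposure time in the remaining bands.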
WFIRST's high spatial resolution will be particularly powerful in these deeper fields where confusion is likely to be significant. The deblending, color correction and morphology arguments in Section 3 apply even more strongly at the greater depth reached in these fields. Indeed, WFIRST offers great synergy by providing a high-resolution detection image that will allow LSST to go past the ground-based confusion limit. This in turn will motivate LSST to go deeper in $gri$ than currently planned for the Deep Drilling Fields; by matching the LSST Main Survey filter time distribution, LSST will reach deeper than AB$\sim$29 in these three filters rather than the current baseline of AB$\sim$28.5, providing a better match to the deepest WFIRST imaging. It is worth noting that the VISTA VIDEO deep NIR survey regions are located at the centers of the already-approved LSST DDFs, and these will generate initial spectroscopy and multi-wavelength coverage even before WFIRST launches, making them optimal regions for going deeper in the NIR. However, if WFIRST is able to cover all (or most) of the 9.6 deg$^2$ LSST field-of-view, LSST can respond by skipping the significant fraction of time spent in $y$ band in the current LSST ``near-uniform DDF'' approach. Using WFIRST imaging in lieu of LSST imaging in $y$ band (and possibly $z$ band) also offers a big improvement in both resolution and depth, as LSST only reaches AB=28.0 and 27.0 in $z$ and $y$ in the ``near-uniform DDF'' approach, spending 60\% of DDF observing time in those filters; significantly deeper $ugri$ images could be obtained if this time is saved. Full areal coverage by WFIRST of as many LSST DDFs as possible, in $zyJH$, will motivate LSST to devote its observing time in those fields to the deepest possible $ugri$ imaging, enabling a wide range of science with these ``ultra-deep drilling'' fields.
\subsection{Coordinated supernovae observations} If the WFIRST SN medium and deep surveys target the same fields as one of the LSST deep drilling fields, then coordinated observations would yield 8 band supernova light curves. The WFIRST IFU measurements of these supernovae would yield a powerful training set that could greatly enhance the value of the large LSST SN sample. These coordinated observations would not require any increase in observing time by either project but would enhance the return of both sets of observations. \subsection{Variable universe: follow-up on novel targets} LSST's cadence and depth will allow it to discover new classes of variable objects and ``golden'' transients, examples of known classes of objects that have nearly ideal properties for particular aspects of astronomical study. WFIRST's resolution, spectral coverage, and wavelength coverage will make it a premier facility for following up on some of these objects in the near infrared. Since studies of individual targets do not take advantage of WFIRST's large field of view, large ground-based telescopes with high quality adaptive optics will also be important tools for follow-up observation. There are a host of potential targets for follow-ups. Examples include: \begin{itemize} \item ``Macronovae'' produced by the coalescence of binary neutron stars (Li \& Paczynski 1998, Kulkarni 2005, Bildsten et al. 2007). These explosive events likely peak in the near-infrared and would be potential targets for both WFIRST imaging and IFU follow-up. \item LSST and WFIRST will be our most powerful combination of telescopes for detecting afterglows of gravitational-wave mergers (Kasliwal and Nissanke 2014; Gehrels et al. 2014). Given LIGO's large error box for each event, there will be many variable objects found at the LSST/WFIRST depths; a program of joint monitoring (perhaps combined with the deep field program described above) will enable a better characterization of the rich zoo of transients.
\item Gamma Ray Bursts (Rossi et al. 2007). \end{itemize} \newpage \section{Data management and analysis challenges} The sections above make clear that in essentially every area of astronomy and cosmology, joint analysis of the LSST, WFIRST and Euclid data will provide significant benefits. These will be made possible by linking the data from the different surveys and providing a common access point for interrogating the data in a user-friendly way with the appropriate tools. This would enable scientists to explore connections between the data sets and stimulate novel community research leveraging the combined data. The publication of detailed, accurate documentation along with the cross-referencing and associated data releases would be the most effective way to encourage and support the community in pursuing such research. This section describes the issues involved in carrying out such a program for the photometry and simulations required to analyze and interpret the data. \subsection{Multi-resolution photometry} There are quite a few challenges to producing accurate, consistent photometry across different filters with widely different resolutions, as would be the situation for combining LSST data with either WFIRST or Euclid. We can assume that we will have a catalogue of the positions of objects detected in at least one band (including single-band detections in u or H), and that we know the PSF in each band. There are issues with respect to the chromatic effects within a single band as well as spatial variation of the PSF across the field of view, which are important, but we can neglect them for this discussion. We also assume that blending issues have been taken into account, so we have a set of pixels in each band that correspond to a single object. For point sources, photometry is relatively straightforward. 
Because all point sources have an identical profile (again, ignoring for now the spatial variation of the PSF) we may use any aperture to measure the relative flux of all point sources in an image (a PSF aperture is optimal). We may then estimate the image's zero-point which gives us total fluxes for all sources. We may repeat this in all the bands, resulting in a catalogue of fluxes and colors. Measuring the fluxes of galaxies is harder. It is not possible to simply measure the total flux in an aperture large enough to include the entire galaxy as the measurements are too noisy, and this is not even a well-defined concept. On the other hand, a small aperture measures a different fraction of the flux of different galaxies, so we cannot simply apply an aperture correction as in the stellar case. Even if a galaxy has the same profile in each band these fractions are different if the PSF is band-dependent; if the profiles vary the difficulties are multiplied. If we can model galaxies adequately, this alleviates the difficulties of flux measurement. The PSF-convolved model flux is a reasonable definition of the galaxy's total flux in each band. However, using model-based photometry is not a panacea, as the model must be correct. For example, choosing a model in one band and applying it in another will not yield the correct color for a galaxy with a red bulge and a blue disk, a scale length that is a function of band, spiral arms, HII regions, or any other kind of substructure with a different color from the overall galaxy light. If the seeing is (or can be made) the same in all bands we are allowed to choose a single model. The resulting fluxes are not the galaxy's total fluxes, as different components are not necessarily weighted correctly, but these ``consistent fluxes'' do represent the fluxes of a certain sample of the galaxy's stellar population.
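The point-source procedure described above (fixed aperture plus a single zero-point) works precisely because every star shares the PSF profile, so the aperture misses the same flux fraction for each. A minimal one-dimensional sketch with toy Gaussian "stars" standing in for real 2-D data:

```python
# Sketch of point-source aperture photometry: since all point sources
# share the PSF, a fixed aperture captures the same flux fraction for
# each, and relative fluxes come out correctly. The 1-D "image" below
# is a toy stand-in for real 2-D data, with zero background assumed.
import math

def aperture_flux(pixels, centre, radius):
    """Sum pixel values within `radius` of `centre`."""
    return sum(v for i, v in enumerate(pixels) if abs(i - centre) <= radius)

def star(total, centre, sigma=2.0, n=64):
    """A normalized 1-D Gaussian PSF scaled to a given total flux."""
    return [total * math.exp(-0.5 * ((i - centre) / sigma) ** 2)
            / (sigma * math.sqrt(2 * math.pi)) for i in range(n)]

# Two well-separated stars with the same PSF, different total fluxes.
img = [a + b for a, b in zip(star(100.0, 20), star(250.0, 45))]
f1 = aperture_flux(img, 20, 6)
f2 = aperture_flux(img, 45, 6)
# The aperture misses the same fraction of each star, so the ratio of
# aperture fluxes recovers the ratio of total fluxes:
print(round(f2 / f1, 2))  # ~2.5
```

The galaxy case breaks exactly this property: galaxies have different, extended profiles, so a fixed aperture captures a different fraction of each object's light, which is why the text turns to model-based photometry.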
However, degrading all bands to a lowest common denominator would destroy much of the extra information present in the higher-resolution space data. Determining the optimal way to estimate consistent fluxes while keeping as much of this high-resolution information as possible is still an open research question. Given the complexity of the problem of measuring galaxy photometry across a decade in wavelength with varying PSFs, there is no single optimal approach. Different science questions will require different ways of combining the multi-color data. \begin{itemize} \item {\it Photometric Redshifts:} We may use an object's total or consistent fluxes to find a location in multi-color space and hence its redshift probability function, $p(z)$. If $p(z)$ is unimodal and sharp, the two estimators predict nearly the same photo-z. However, in general this will not be so and $p(z)$ will be different for the two choices of flux. \item{\it Choice of Galaxy Model:} In the rest-frame optical, a simple Sersic or constrained bulge-disk model is probably sufficient to return reliable fluxes (such models down-weight the contribution from localized star-forming regions, changing $p(z)$ at least in principle). However, modeling galaxies across the broad redshift range covered by WFIRST, Euclid, and LSST will likely require more complex models (including, for example, the UV-bright knots that contribute the bulk of the flux in the far-UV). Constraining these models is going to be hard; one approach would be to demand that all components have a consistent $p(z)$. \item{\it Photometry of Blended Objects:} If our models indeed accurately describe our galaxies then the deblending problem is no different from the galaxy photometry problem; instead of fitting multiple components to a single galaxy we fit multiple components to multiple galaxies, relaxing but not abandoning the constraints on components' $p(z)$.
In practice it is not clear how well this will work, and there are intermediate schemes (similar to the deblender used in SDSS) that fit simplified models to the ensemble of objects to assign the flux to the child objects and then proceed with photometry. This is an unsolved problem, but given the upcoming data, we will need to make significant progress in the coming years. \end{itemize} \subsection{Joint simulations} WFIRST, Euclid and LSST will generate a remarkable set of observations in the optical and near-infrared wavebands. Their cosmological interpretation, especially in combination, will be very powerful but also very challenging. Many systematic effects will have to be disentangled from fundamental physics to fully exploit the power of the measurements. Simulations of all aspects of the experiment -- at the cosmological scale, of the instruments, and of the data processing and analysis software -- are critical elements of the systematics mitigation program. Joint analyses of multiple surveys require these simulations to have consistent interfaces, to enable the same realizations of cosmological volumes or multi-wavelength simulated galaxies to be passed through instrument-specific simulators, and ultimately to ensure that the correct correlations and joint statistical properties are represented when mock analyses are performed on the simulated output catalogs. We focus here on the role of coordinated simulations in two areas -- {\em cosmological simulations} and {\em instrument/pipeline simulations}.
\subsubsection{Cosmological simulations} Cosmological simulations play key roles in the interpretation of large scale structure surveys: (i) they provide controlled testbeds for analysis pipelines and tools for survey design and optimization via sophisticated synthetic skies populated with galaxies targeted by the specific survey, (ii) they provide predictions for different fundamental physics effects, including dynamical dark energy, modified gravity, neutrinos, and non-trivial dark matter models, (iii) they provide modeling approaches for astrophysical systematics, including gas dynamics, star formation, and feedback effects, and (iv) they provide important information about the error estimates via covariance studies. We will briefly elaborate on the synergies between the simulation programs required for Euclid, WFIRST and LSST in these four areas in the following. {\em Synthetic sky maps} --- In order to build synthetic sky maps for Euclid, WFIRST and LSST, large-volume $N$-body simulations with very high force and mass resolution are essential. Next, the simulations have to be populated with galaxies using sub-halo abundance matching or semi-analytic modeling techniques, tuned to match the observed population of galaxies. While the wavelength ranges of Euclid, WFIRST and LSST will be different, a common code infrastructure will be beneficial in some areas (e.g., in exploring model parameter space) and critical in others (e.g., simulating realistic merged multi-wavelength catalogs). {\em Precision predictions for fundamental cosmological statistics} --- The precision requirements for cosmological observables and the fundamental physics to be explored are very similar for Euclid, WFIRST and LSST and a joint program in this area would be very fruitful. All three surveys promise large scale structure measurements at the sub-percent level, a tremendous challenge for cosmological simulations.
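The sub-halo abundance matching step mentioned above can be sketched very simply: rank the simulated (sub)halos by mass and the target galaxy population by luminosity, then pair them off monotonically. The values below are toy inputs, and real implementations add scatter and use proxies such as peak circular velocity.

```python
# Minimal sketch of monotonic sub-halo abundance matching (SHAM):
# the most massive halo gets the most luminous galaxy, and so on down
# the ranked lists. Toy values; real SHAM adds scatter and uses
# velocity proxies rather than raw present-day mass.

def abundance_match(halo_masses, galaxy_lums):
    """Return {halo_index: luminosity} by rank ordering."""
    halo_order = sorted(range(len(halo_masses)),
                        key=lambda i: halo_masses[i], reverse=True)
    lum_order = sorted(galaxy_lums, reverse=True)
    return {i: lum for i, lum in zip(halo_order, lum_order)}

halos = [3e12, 8e11, 5e13, 1e12]   # halo masses [Msun]
lums = [1.0, 0.2, 4.0, 0.5]        # galaxy luminosities [L*]
matched = abundance_match(halos, lums)
print(matched[2])  # most massive halo (5e13 Msun) gets L = 4.0
```

Because the assignment depends only on the ranked lists, the same simulated halo catalog can be matched to the (different) galaxy populations targeted by each survey, which is one reason a common simulation infrastructure pays off.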
Besides the precision challenge, different fundamental physics effects have to be explored systematically for all the surveys. {\em Astrophysical systematics} --- Some of the systematics that have to be accounted for to interpret the observations from Euclid, WFIRST and LSST are common, such as the effects of baryonic physics on the weak lensing shear power spectrum. Since the accuracy requirements are similar, a joint program would be beneficial to all the surveys. However, there are significant differences because of the differing ground and space-based nature of the programs, different instrumentation and associated wavebands, and different galaxy populations and densities. Nevertheless, the underlying sky catalogs can be generated from the same set of simulations. {\em Covariance matrices} --- Since the Euclid, WFIRST and LSST surveys will have overlapping volumes, combining cosmological constraints from their data sets will require understanding the full covariance matrix of all observables extracted from the data sets, using simulations of the entire relevant volume. While the accuracy requirements for these simulations are much less stringent than for other applications, the large number of simulations required will represent a computational challenge. \subsubsection{Instrument and pipeline simulations} Instrument and pipeline simulations are also a critical element of any precision large scale structure program: they inform the relationship between the astronomical ``scene'' in the Universe and the catalogs, parameters, and other information that is recorded in the processed data. They also provide an opportunity to exercise the data reduction and analysis tools prior to first light. The instrument and pipeline tools must be considered together, since in a precision experiment the pipeline is an integral part of the measurement process.
Unlike cosmological simulations, these tools are highly instrument-specific: for example, the atmosphere is only included for ground-based observations, the patterns of ghosts and diffraction features are completely different for the LSST, Euclid and WFIRST configurations, and internal optical effects, cross-talk, and readout artifacts are fundamentally different in silicon CCDs versus NIR arrays. Despite the differences in instrument simulators, it is critical that they be able to take compatible input data. Precision measurements of galaxy clustering and weak lensing depend on not just counting objects and measuring shapes, but understanding selection biases, blending effects, and error distributions of measured parameters. The combination of WFIRST, Euclid and LSST data will be used to measure photometric redshifts, and for both object-by-object and statistical comparisons of number density and shear. Simulation of these applications requires that the same objects be fed through both pipelines, and hence that the input scene (simulated stars and galaxies) be consistent and have the proper correlations of object properties across wavebands. While proprietary pipelines do not need to be shared outside of the individual consortia, the inputs of these pipelines must remain compatible. \newpage \section{Conclusion} The scientific opportunity offered by the combination of data from LSST, WFIRST and Euclid goes well beyond the science enabled by any one of the data sets alone. The range in wavelength, angular resolution and redshift coverage that these missions jointly span is remarkable. With major investments in LSST and WFIRST, and partnership with ESA in Euclid, the US has an outstanding scientific opportunity to carry out a combined analysis of these data sets. It is imperative for us to seize it and, together with our European colleagues, prepare for the defining cosmological pursuit of the 21st century.
The main argument for conducting a single, high-quality reference co-analysis exercise and carefully documenting the results is the complexity and subtlety of systematics that define this co-analysis. Falling back on many small efforts by different teams in selected fields and for narrow goals will be inefficient, leading to significant duplication of effort. For much of the science, we will need to combine the photometry across multiple wavelengths with varying spectral and spatial resolution -- a technical challenge. As described in Section 3, the joint analysis can be carried out in ways that have different computational demands. The most technically demanding joint analysis is to work with pixel level data of the entire area of overlap between the surveys. Many of the goals of a joint analysis require such a pixel-level analysis. If pixel-level joint analysis is not feasible, catalog-level analysis can still be beneficial, say to obtain calibrations of the lensing shear or the redshift distribution of galaxies. Hybrid efforts are also potentially useful, for example using catalog level information from space for deblending LSST galaxies, or using only a mutually agreed subset of the data for calibration purposes. However the full benefits of jointly analyzing any two of the surveys can be reaped only through pixel-level analysis. The resources required to achieve this additional science are outside of what is currently budgeted for LSST by NSF and DOE, and for WFIRST (or Euclid) by NASA. Funding for this science would most naturally emerge from coordination among all agencies involved, and would be closely orchestrated scientifically and programmatically to optimize science returns. A possible model would be to identify members of the science teams of each project who would work together on the joint analysis. 
The analysis team would ideally be coupled with an experienced science center acting as a focal point for the implementation, and simultaneously preparing the public release and documentation for broadest access by the community. \newpage
\section{Introduction} Although the vector Coulomb potential does not admit relativistic bound-state solutions, its screened version ($\sim e^{-|x|/\lambda }$) is a genuine binding potential and its solutions have been found for fermions.\cite{ada} The problem has also been analyzed for scalar\cite{DIRACscalarscreened} and pseudoscalar\cite{asc} couplings. The Klein-Gordon equation with vector,\cite{KGvectorscreened} scalar\cite{KGscalarscreened} and arbitrarily mixed vector-scalar\cite{KGmixed} couplings has not been exempted. As has been emphasized in Refs. [2] and [4], the solution of relativistic equations with this sort of potential may be relevant in the study of pionic atoms, doped Mott insulators, doped semiconductors, interaction between ions, quantum dots surrounded by a dielectric or a conducting medium, protein structures, etc. In the present paper it is shown that the problem of a fermion under the influence of a mixed vector-scalar screened Coulomb potential, except for possible isolated energies, can be mapped into a Sturm-Liouville problem for the upper component of the Dirac spinor with an effective asymmetric Morse-like potential, or an effective screened Coulomb potential in particular circumstances. In all of those circumstances, the quantization conditions are obtained. Beyond its potential physical applications, this sort of mixing proves to be a powerful tool for obtaining deeper insight into the nature of the Dirac equation and its solutions. \section{The Dirac equation with mixed vector-scalar potentials in 1+1 dimensions} In the presence of time-independent vector and scalar potentials the 1+1 dimensional time-independent Dirac equation for a fermion of rest mass $m$ reads \begin{equation} \mathcal{H}\Psi =E\Psi ,\quad \mathcal{H}=c\sigma_{1} p+\sigma_{3} \left( mc^{2}+V_{s}\right) +V_{v}, \label{1a} \end{equation} \noindent where $E$ is the energy of the fermion, $c$ is the velocity of light and $p$ is the momentum operator. 
$\sigma _{1}$ and $\sigma _{3}$ are $2\times 2$ Pauli matrices and the vector and scalar potentials are given by $V_{v}$ and $V_{s}$, respectively. Introducing the unitary operator $U(\delta )=\exp \left[ -i\left( \delta -\pi /2\right) \sigma _{1}/2\right] $, \noindent with $-\pi /2\leq \delta \leq \pi /2$, the transform of the Hamiltonian (\ref{1a}), $H=U\mathcal{H}U^{-1}$, takes the form \begin{equation} H=\sigma _{1}cp+\sigma _{2}\cos \delta \,\left( mc^{2}+V_{s}\right) -\sigma _{3}\sin \delta \,\left( mc^{2}+V_{s}\right) +V_{v}. \label{3} \end{equation} \noindent In terms of the upper ($\phi $) and lower ($\chi $) components of the transform of the spinor $\Psi $ under the action of the operator $U$, $\psi =U\Psi $, the Dirac equation, choosing $V_{v}=V_{s}\sin \delta $, i.e., $|V_{s}|\geq |V_{v}|$, becomes \noindent \begin{eqnarray} \hbar c\phi ^{\prime }-\cos \delta \,\left( mc^{2}+V_{s}\right) \phi &=&i \left[ E+\sin \delta \,mc^{2}\right] \chi \nonumber \\ \hbar c\chi ^{\prime }+\cos \delta \,\left( mc^{2}+V_{s}\right) \chi &=&i \left[ E-\sin \delta \,\left( mc^{2}+2V_{s}\right) \right] \phi , \label{6b} \end{eqnarray} \noindent where the prime denotes differentiation with respect to $x$. Note that charge conjugation is implemented by the simultaneous changes $E\rightarrow -E$ and $\delta \rightarrow -\delta $ while changing the sign of one of the components of the spinor $\psi $. Taking advantage of this symmetry we can restrict our attention to nonnegative values of $\delta $. Using the expression for $\chi $ obtained from the first line of (\ref{6b}), and inserting it into the second line, one arrives at the following second-order differential equation for $\phi $: \begin{equation} -\frac{\hbar ^{2}}{2}\phi ^{\prime \prime }+\left( \frac{\cos ^{2}\delta }{2c^{2}}\,V_{s}^{2}+\frac{mc^{2}+E\sin \delta }{c^{2}}\,V_{s}+\frac{\hbar \cos \delta }{2c}\,V_{s}^{\prime }-\frac{E^{2}-m^{2}c^{4}}{2c^{2}}\right) \phi =0. 
\label{8} \end{equation} \noindent Therefore, the solution of the relativistic problem is mapped into a Sturm-Liouville problem for the upper component of the Dirac spinor. In this way one can solve the Dirac problem by recurring to the solution of a Schr\"{o}dinger-like problem. The solutions for $E=-\sin \delta \,mc^{2}$, excluded from the Sturm-Liouville problem, can be obtained directly from the Dirac equation (\ref{6b}). \section{The mixed vector-scalar screened Coulomb potential} Now let us focus our attention on a scalar potential of the form \begin{equation} V_{s}=-\frac{\hbar cg}{2\lambda }\exp \left( -\frac{|x|}{\lambda }\right) , \label{12} \end{equation} \noindent where the coupling constant $g$ is a dimensionless real parameter and $\lambda $, related to the range of the interaction, is a positive parameter. The solution for $E=-\sin \delta \,mc^{2}$ is not continuous at $x=0$ and should be discarded. For $E\neq -\sin \delta \,mc^{2}$ the Sturm-Liouville problem becomes \begin{equation} -\frac{\hbar ^{2}}{2m}\,\phi _{\varepsilon }^{\prime \prime }+V_{\mathtt{eff}}^{\left( \varepsilon \right) }\,\phi _{\varepsilon }=E_{\mathtt{eff}}\,\phi _{\varepsilon }, \label{14a} \end{equation} where $E_{\mathtt{eff}}=\left( E^{2}-m^{2}c^{4}\right) /\left( 2mc^{2}\right) $ and \begin{equation} V_{\mathtt{eff}}^{\left( \varepsilon \right) }=V_{1}^{\left( \varepsilon \right) }\exp \left( -\frac{|x|}{\lambda }\right) +V_{2}\exp \left( -2\frac{|x|}{\lambda }\right) \label{13} \end{equation} \noindent with \begin{equation} V_{1}^{\left( \varepsilon \right) }=-\frac{mc^{2}\lambda _{c}\,g}{2\lambda }\left( 1+\frac{E}{mc^{2}}\sin \delta -\frac{\lambda _{c}}{2\lambda }\,\varepsilon \cos \delta \right) ,\;\;\;V_{2}=\frac{mc^{2}\lambda _{c}^{2}\,g^{2}}{8\lambda ^{2}}\cos ^{2}\delta , \label{130} \end{equation} \noindent where $\varepsilon $ stands for the sign function ($\varepsilon =x/|x|$ for $x\neq 0$) and $\lambda _{c}=\hbar /(mc)$ is the Compton wavelength of the fermion. 
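Before specializing, it may help to see the effective potential concretely. The following sketch (ours, not from the paper) evaluates $V_{\mathtt{eff}}^{\left( \varepsilon \right)}$ in natural units $\hbar =c=m=1$, so that $\lambda _{c}=1$; the parameter values are illustrative. It checks that $V_{2}$ vanishes at $\delta =\pm \pi /2$, i.e., for $V_{v}=\pm V_{s}$, and that otherwise the potential is asymmetric about $x=0$:

```python
import math

# Sketch (ours, not from the paper): the effective potential of
# Eqs. (13)-(130) in natural units hbar = c = m = 1, so lambda_c = 1.
# All parameter values are illustrative.

def V1(eps, E, delta, g, lam, lam_c=1.0):
    """Coefficient of exp(-|x|/lambda); eps = sign(x)."""
    return -(lam_c * g / (2 * lam)) * (
        1 + E * math.sin(delta) - (lam_c / (2 * lam)) * eps * math.cos(delta)
    )

def V2(delta, g, lam, lam_c=1.0):
    """Coefficient of exp(-2|x|/lambda)."""
    return (lam_c ** 2 * g ** 2 / (8 * lam ** 2)) * math.cos(delta) ** 2

def V_eff(x, E, delta, g, lam):
    eps = 1.0 if x >= 0 else -1.0
    u = math.exp(-abs(x) / lam)
    return V1(eps, E, delta, g, lam) * u + V2(delta, g, lam) * u ** 2

# At delta = pi/2 (V_v = V_s) the quadratic term disappears, leaving a
# pure screened-Coulomb effective potential:
assert abs(V2(math.pi / 2, 2.0, 1.0)) < 1e-15

# Away from delta = +/- pi/2 the potential is asymmetric about x = 0
# (the Morse-like case):
assert abs(V_eff(1.0, 0.3, 0.4, 2.0, 1.0) - V_eff(-1.0, 0.3, 0.4, 2.0, 1.0)) > 1e-3
```
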
\subsection{\noindent The effective screened Coulomb potential ($V_{v}=\pm V_{s}$)} For this class of effective potential, the discrete spectrum arises when $V_{1}^{\left( \varepsilon \right) }<0$ and $V_{2}=0$, corresponding to $V_{v}=\pm V_{s}$. Bound-state solutions are feasible only if $g>0$. Defining the dimensionless quantities \begin{equation} y=y_{0}\exp \left( -\frac{|x|}{2\lambda }\right) ,\quad y_{0}=2\sqrt{\frac{\lambda g}{\lambda _{c}}\left( 1\pm \frac{E}{mc^{2}}\right) },\quad \mu =\frac{2\lambda }{\lambda _{c}}\sqrt{1-\left( \frac{E}{mc^{2}}\right) ^{2}} \nonumber \end{equation} \noindent and using (\ref{14a})-(\ref{130}), one obtains the Bessel differential equation $y^{2}\phi ^{\prime \prime }+y\phi ^{\prime }+\left( y^{2}-\mu ^{2}\right) \phi =0$, where the prime now denotes differentiation with respect to $y$. The solution finite at $y=0$ ($|x|=\infty $) is the Bessel function of the first kind and order $\mu $:\cite{abr} $\phi (y)=N_{\mu }\,J_{\mu }(y)$, where $N_{\mu }$ is a normalization constant. In fact, normalizability of $\phi $ demands that the integral $\int_{0}^{y_{0}}y^{-1}|J_{\mu }(y)|^{2}dy$ be convergent. Since $J_{\mu }(y)$ behaves as $y^{\mu }$ near the lower limit, one sees that $\mu \geq 1/2$, so that square-integrable Dirac eigenfunctions are allowed only if $\lambda \geq \lambda _{c}/4$. The boundary conditions at $x=0$ ($y=y_{0}$) imply that $dJ_{\mu }(y)/dy|_{y=y_{0}}=0$ for even states, and $J_{\mu }(y_{0})=0$ for odd states. Since the Dirac eigenenergies depend on $\mu $ and $y_{0}$, these last equations are quantization conditions. 
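The quantization conditions above can be explored numerically. The sketch below (ours) implements $J_{\mu }(y)$ by its power series, which is adequate for moderate arguments, and checks it against the closed form $J_{1/2}(y)=\sqrt{2/(\pi y)}\sin y$; the derivative needed for even states follows from the standard identity $J_{\mu }^{\prime }=(J_{\mu -1}-J_{\mu +1})/2$:

```python
import math

def bessel_j(mu, y, terms=60):
    """Bessel function of the first kind J_mu(y) via its power series.
    Adequate for moderate y; a sketch, not a production implementation."""
    s = 0.0
    for k in range(terms):
        s += (-1) ** k / (math.factorial(k) * math.gamma(mu + k + 1)) \
             * (y / 2) ** (2 * k + mu)
    return s

# Sanity check against the closed form J_{1/2}(y) = sqrt(2/(pi*y)) sin(y):
y = 2.0
closed = math.sqrt(2 / (math.pi * y)) * math.sin(y)
assert abs(bessel_j(0.5, y) - closed) < 1e-10

# Odd-state quantization demands J_mu(y0) = 0.  For mu = 1/2 the zeros
# sit at y0 = n*pi, so the series should vanish there:
assert abs(bessel_j(0.5, math.pi)) < 1e-10

# Even states instead require dJ_mu/dy = 0 at y0; the derivative follows
# from the identity J_mu'(y) = (J_{mu-1}(y) - J_{mu+1}(y)) / 2.
def bessel_j_prime(mu, y):
    return 0.5 * (bessel_j(mu - 1, y) - bessel_j(mu + 1, y))
```

Solving either condition for the energy (through $\mu $ and $y_{0}$) then reduces to one-dimensional root finding.
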
\subsection{\noindent The effective Morse-like potential ($V_{v}\neq \pm V_{s}$)} Let us define $z=z_{0}\exp \left( -\frac{|x|}{\lambda }\right) $, $z_{0}=g\cos \delta $, and \begin{equation} \rho _{\varepsilon }=\frac{\lambda }{\lambda _{c}\cos \delta }\left( 1+\frac{E}{mc^{2}}\,\sin \delta -\frac{\lambda _{c}}{2\lambda }\varepsilon \cos \delta \right) ,\qquad \nu =\frac{\lambda }{\lambda _{c}}\sqrt{1-\left( \frac{E}{mc^{2}}\right) ^{2}}, \label{24a} \end{equation} \noindent so that $z\phi _{\varepsilon }^{\prime \prime }+\phi _{\varepsilon }^{\prime }+\left( -\frac{z}{4}-\frac{\nu ^{2}}{z}+\rho _{\varepsilon }\right) \phi _{\varepsilon }=0$, where the prime now denotes differentiation with respect to $z$. Following the steps of Refs. [4] and [5], we make the transformation $\phi _{\varepsilon }=z^{-1/2}\Phi _{\varepsilon }$ to obtain the Whittaker equation:\cite{abr} \begin{equation} \Phi _{\varepsilon }^{\prime \prime }+\left( -\frac{1}{4}+\frac{\rho _{\varepsilon }}{z}+\frac{1/4-\nu ^{2}}{z^{2}}\right) \Phi _{\varepsilon }=0, \label{16a} \end{equation} whose solution vanishing at infinity is written as $\Phi _{\varepsilon }=N_{\varepsilon }\,z^{\nu +1/2}e^{-z/2}M(a_{\varepsilon },b,z)$, where $N_{\varepsilon }$ is a normalization constant and $M$ is a regular solution of the confluent hypergeometric equation (Kummer's equation):\cite{abr} \begin{equation} \xi M^{\prime \prime }+(b-\xi )M^{\prime }-a_{\varepsilon }M=0\textrm{, with }a_{\varepsilon }=\nu +\frac{1}{2}-\rho _{\varepsilon }\textrm{ and }b=2\nu +1. \label{19} \end{equation} \noindent Now we are ready to write the physically acceptable solutions on both sides of the $x$-axis by recurring to the symmetry $\phi _{\varepsilon }(-x)\sim \phi _{-\varepsilon }(x)$. 
They are \begin{eqnarray} \phi &=&z^{\nu }e^{-z/2}\left[ \theta (-x)C^{\left( -\right) }M(a_{-},b,z)+\theta (+x)C^{\left( +\right) }M(a_{+},b,z)\right] \nonumber \\ \chi &=&z^{\nu }e^{-z/2}\left[ \theta (-x)D^{\left( -\right) }M(a_{+},b,z)+\theta (+x)D^{\left( +\right) }M(a_{-},b,z)\right] , \label{10} \end{eqnarray} \noindent where $C^{\left( \pm \right) }$ and $D^{\left( \pm \right) }$ are normalization constants and $\theta (x)$ is the Heaviside function. The continuity of the wave functions at $x=0$, together with the substitution of (\ref{10}) into the Dirac equation (\ref{6b}) and a pair of recurrence formulas for solutions of Kummer's equation,\cite{abr} leads to the quantization condition \begin{equation} \frac{M(a_{+}+1,b,z_{0})}{M(a_{+},b,z_{0})}=\sqrt{\frac{a_{+}-2\nu }{a_{+}}}. \label{eq23} \end{equation} \section{Concluding remarks} The quantization conditions for a general mixing of vector and scalar screened Coulomb potentials in a two-dimensional world have been put forward in a unified way. Of course, one still has to analyze the nature of the spectra as a function of the potential parameters. This task, including the complete set of eigenvectors, will be reported elsewhere. \section*{Acknowledgements} This work was supported by CAPES, CNPq and FAPESP.
\section{Introduction} For an axisymmetric and stationary Einstein-Maxwell black hole with angular momentum $J$ and charge $Q$, Marcus Ansorg and J\"{o}rg Hennig\cite{ansorg}\cite{ansorg2} proved the universal relation \be A^{+}A^{-}=(8\pi J)^{2}+(4\pi Q^{2})^{2}, \ee where $A^{+}$ and $A^{-}$ denote the areas of the event and Cauchy horizons, respectively. Cveti\v{c} \emph{et al}. \cite{cvetic} generalized the calculation to higher-dimensional black holes with multiple horizons. By explicit computation, they showed that the product of all horizon entropies for rotating multicharge black holes in four and higher dimensions is independent of the mass, in either asymptotically flat or asymptotically anti-de Sitter spacetimes. This subject has been further explored in recent years\cite{chen}-\cite{visser}. Recently, the entropy sum over all black hole horizons has been investigated\cite{meng-sum, yu}. In many known solutions, the sum is also mass independent. So far, most studies of this issue rely on specific forms of the metric as well as explicit expressions for the black hole mass, angular momentum, etc. In this paper, we aim to find general criteria for the mass independence of the entropy product/sum. By employing the first law of black hole thermodynamics and the Vandermonde determinant, we find a very simple criterion for spherically symmetric black holes. We show that if the radial metric function $f(r)$ is a Laurent polynomial, then whether the entropy product/sum is mass independent is determined only by the lowest and highest powers of the polynomial. Our arguments are also helpful for rotating black holes. We find that the entropy product for a Myers-Perry black hole is mass independent in all dimensions, and the entropy sum is mass independent for all $d>4$, where $d$ is the spacetime dimension. Our calculation requires the expressions for entropy and temperature, but the explicit forms of the mass, angular momentum and charge are not needed. 
\section{Entropy product, entropy sum and the first law} In this section, we apply the first law to all horizons of the black hole and derive some formulas related to the entropy product and entropy sum. A stationary black hole with mass $M$, charge $Q$ and angular momentum $J$ may have multiple horizons. Each horizon possesses a different temperature $T_{i}$, angular velocity $\Omega_i$, electrostatic potential $\Phi_i$ and entropy $S_{i}$. The first law of black hole thermodynamics for each horizon reads \be dM=T_idS_i+\Omega_i dJ+\Phi_i dQ \,, \ee which obviously yields \be \frac{\partial S_{i}}{\partial M}=\frac{1}{T_{i}}\,. \ee In this paper, we require $T_i\neq 0$ for all horizons. Denote the entropy product by $\widetilde{S}$, i.e., $\widetilde{S}=S_{1}S_{2}...S_{n}=\prod\limits_{i=1}^{n} S_{i}$. Taking the partial derivative of the entropy product with respect to the mass, we have \be \frac{\partial\widetilde{S}}{\partial M}=\widetilde{S}\left(\frac{1}{T_{1}S_{1}}+\frac{1}{T_2S_2}+... \right)=\widetilde{S} \sum\limits_{i=1}^{n}\frac{1}{T_{i}S_{i}}\,. \ee Denote by $\hat{S}$ the entropy sum, i.e., \be \hat{S}=S_{1}+S_{2}+...+S_{n}\,. \ee The first law yields \be \frac{\pa \hat{S}}{\pa M}=\sum\limits_{i=1}^{n}\frac{1}{T_{i}}\,. \ee Then it is straightforward to obtain the following theorem. \begin{The} For a black hole with multiple horizons, the entropy product is independent of the mass if and only if \be \sum\limits_{i=1}^{n}\frac{1}{T_iS_{i}}=0\,, \ee and the entropy sum is independent of the mass if and only if \be \sum\limits_{i=1}^{n}\frac{1}{T_{i}}=0\,. \ee \label{theo-first} \end{The} We shall see that this theorem is very useful in spherically symmetric spacetimes because it does not require the specific form of the metric. 
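As a quick illustration (ours, not part of the paper), the criterion can be tested on the four-dimensional Reissner-Nordstr\"om solution $f(r)=1-2M/r+Q^{2}/r^{2}$ in geometric units $G=c=\hbar=1$, where $T_{i}=f'(r_{i})/4\pi$ and $S_{i}=\pi r_{i}^{2}$; the parameter values are arbitrary with $M>|Q|$:

```python
import math

# Illustration (ours, not from the paper): the criterion above tested on
# the 4D Reissner-Nordstrom black hole, f(r) = 1 - 2M/r + Q^2/r^2, in
# geometric units G = c = hbar = 1.  Parameters are arbitrary, M > |Q|.

def rn_horizons(M, Q):
    d = math.sqrt(M * M - Q * Q)
    return [M + d, M - d]               # event and Cauchy horizon radii

def temperature(M, Q, r):
    fprime = 2 * M / r ** 2 - 2 * Q ** 2 / r ** 3    # f'(r)
    return fprime / (4 * math.pi)

def entropy(r):
    return math.pi * r ** 2             # one quarter of the horizon area

M, Q = 1.0, 0.6
rs = rn_horizons(M, Q)
inv_TS = sum(1 / (temperature(M, Q, r) * entropy(r)) for r in rs)
inv_T = sum(1 / temperature(M, Q, r) for r in rs)

# The entropy-product criterion holds: the sum of 1/(T_i S_i) vanishes...
assert abs(inv_TS) < 1e-9
# ...while the entropy-sum criterion fails (the sum of 1/T_i is nonzero),
# so the entropy sum of this solution must depend on the mass:
assert abs(inv_T) > 1.0

# Consistency check: the product S_+ S_- = (pi Q^2)^2 for any mass.
for M2 in (1.0, 2.0, 3.0):
    r_plus, r_minus = rn_horizons(M2, Q)
    assert abs(entropy(r_plus) * entropy(r_minus) - (math.pi * Q ** 2) ** 2) < 1e-9
```
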
\section{Criterion for spherical black holes in arbitrary dimensions} The following discussion concerns spherical black holes described by a metric of the form \be ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\Omega^{2} \label{fmetric}\,. \ee The roots of $f(r)$, denoted by $r_{1},r_{2},...,r_{n}$, are the radii of the horizons. Here the roots include complex ones, corresponding to the so-called ``virtual horizons.'' In general relativity, the temperature is proportional to $f'(r_i)$, \be T_{i}=\frac{1}{4\pi}f'(r_i)\,, \ee and the entropy is proportional to the area \be S_{i}=\frac{1}{4}A_{i}=\frac{1}{4}\Omega_{d-2}r_{i}^{d-2}\,, \ee where $\Omega_{d-2}$ is the area of a unit $(d-2)$-sphere \be \Omega_{d-2}=\frac{2\pi^{\frac{d-1}{2}}}{\Gamma(\frac{d-1}{2})}\,. \ee To proceed, we first introduce a useful lemma. \begin{Lem} \label{lemma2} Let $\{r_i\}$ be $n$ distinct numbers. Then \be \sum\limits_{i=1}^{n} \frac{r_{i}^{k}}{\prod\limits_{j\neq i}^{n}(r_{i}-r_{j})}=0\,, \ee where $0\leq k \leq n-2$. \end{Lem} For example, when $n=3$, the lemma gives \be \frac{1}{(r_1-r_2)(r_1-r_3)}+\frac{1}{(r_2-r_1)(r_2-r_3)}+\frac{1}{(r_3-r_1)(r_3-r_2)}=0\,, \ee and \be \frac{r_1}{(r_1-r_2)(r_1-r_3)}+\frac{r_2}{(r_2-r_1)(r_2-r_3)}+\frac{r_3}{(r_3-r_1)(r_3-r_2)}=0\,. \ee The proof of the lemma is a simple application of the Vandermonde determinant (see the Appendix for details). Now let us assume that $f(r)$ is a Laurent polynomial, \be f(r)=1+a_{1}r+...+a_{n}r^{n}+b_{1}r^{-1}+...+b_{m}r^{-m}\,, \ee where $n,m\geq1$. Then we have \be r^{m}f(r)=r^{m}+a_{1}r^{m+1}+...+a_{n}r^{m+n}+b_{1}r^{m-1}+...+b_{m}=\prod\limits_{i=1}^{n+m}(r-r_{i})\,. \ee Taking the derivative of both sides with respect to $r$ at each horizon, we have \be\label{deri} r_{i}^{m} f'(r_i)=\prod\limits_{j\neq i}^{n+m}(r_{i}-r_{j})\,. 
\ee This is equivalent to \be 16\pi S_{i}T_{i}r_{i}^{m-(d-2)}=\Omega_{d-2}\prod\limits_{j\neq i}^{n+m}(r_{i}-r_{j})\,, \ee or \be \frac{1}{S_{i}T_{i}}\eqn \frac{16\pi r_{i}^{m-(d-2)}}{\Omega_{d-2}\prod\limits_{j\neq i}^{n+m}(r_{i}-r_{j})}\,,\\ \sum_i\frac{1}{S_{i}T_{i}}\eqn \sum_i\frac{16\pi}{\Omega_{d-2}} \frac{ r_{i}^{m-(d-2)}}{\prod\limits_{j\neq i}^{n+m}(r_{i}-r_{j})}\,. \label{siti} \ee According to Lemma \ref{lemma2}, as long as $m-(d-2)\geq0$ and $m-(d-2)\leq m+n-2$, the right-hand side of \eq{siti} vanishes, and consequently the entropy product is independent of the mass by virtue of Theorem \ref{theo-first}. Similarly, \eq{deri} also gives \be \frac{1}{T_{i}}=\frac{4\pi r_{i}^{m}}{\prod\limits_{j\neq i}^{n+m}(r_{i}-r_{j})}\,, \ee and by Lemma \ref{lemma2} the sum $\sum_i 1/T_{i}$ vanishes for $n\geq 2$. Therefore, by applying Theorem \ref{theo-first} and Lemma \ref{lemma2}, we arrive at the following theorem: \begin{The}\label{theorem-sp} For a spherical black hole in a $d$-dimensional spacetime described by the metric \meq{fmetric}, suppose $f(r)$ is a Laurent polynomial: \be f(r)=1+a_{1}r+...+a_{n}r^{n}+b_{1}r^{-1}+...+b_{m}r^{-m}\,, \ee where $n$ and $m$ are positive integers. If the entropy and temperature of each horizon located at $r=r_i$ are of the form \be S_i\eqn \frac{1}{4}{A_i}=\frac{1}{4}\Omega_{d-2}r_{i}^{d-2}\,,\\ T_i\eqn \frac{1}{4\pi}f'(r_i)\neq 0 \,; \ee then the necessary and sufficient condition for the entropy product to be mass independent is that \be m\geq d-2 \quad\textrm{and}\quad n\geq 4-d. \label{c1} \ee The necessary and sufficient condition for the entropy sum to be mass independent is \be n\geq 2. \label{c2} \ee \end{The} This theorem provides a very simple criterion for judging whether the entropy product/sum is mass independent. For example, $f(r)$ for a $d$-dimensional ($d\geq 4$) spherically symmetric Reissner-Nordstr\"om black hole takes the form \be f(r)=1-\frac{2M}{r^{d-3}}+\frac{Q^{2}}{r^{2(d-3)}}\,. \ee One sees immediately that \be m=2(d-3), \ \ \ n=0\,. 
\ee Therefore, $m\geq d-2$ and $n\geq 4-d$ are satisfied for $d\geq 4$. According to the theorem, the entropy product is independent of $M$ for all $d\geq 4$. Obviously, the condition $n\geq 2$ fails, and consequently the entropy sum must depend on $M$. Now we consider charged black holes with a cosmological constant. The solution is of the form \be f(r)=1-\frac{2M}{r^{d-3}}-\frac{2\Lambda}{(d-1)(d-2)}r^{2}+\frac{Q^{2}}{r^{2(d-3)}}\,. \ee When $Q=0$, it reduces to the Schwarzschild-de Sitter black hole, in which case $m=d-3$, $n=2$. So $m\geq d-2$ fails, and the entropy product is mass dependent. This result was found by Visser \cite{visser}. For $Q\neq 0$, we have \be m=2(d-3), \ \ \ n=2\,. \ee We see that both \eqs{c1} and \meq{c2} are satisfied. Thus, the entropy product and entropy sum are both mass independent. This result agrees with Refs.\cite{cvetic, yu}. \section{Myers-Perry black holes} The entropy product/sum issue is not limited to spherical black holes. For Kerr-anti-de Sitter black holes in four and higher dimensions\cite{gibbons}\cite{gibbons2}, Cveti\v{c} \emph{et al}.\cite{cvetic} showed that the entropy product of such black holes is independent of the mass, and Du and Tian\cite{yu} showed that the entropy sum is also mass independent. Obviously, our Theorem \ref{theorem-sp} is not applicable to rotating black holes. However, Theorem \ref{theo-first} and Lemma \ref{lemma2} are still helpful for the mass dependence problem. In the following, we apply the technique developed above to the Myers-Perry solution \cite{myers}\cite{myers2}, which generalizes the Kerr solution to higher dimensions. It is necessary to discuss the Myers-Perry black holes in even and odd dimensions separately. 
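Lemma \ref{lemma2}, which drives all of these computations, is easy to spot-check numerically. The following sketch (ours; the sample values are arbitrary) verifies the identity for $n=5$ and also confirms that the cancellation stops at $k=n-2$:

```python
import random

# Numerical spot-check (ours) of Lemma 2: for n distinct numbers r_1..r_n,
#   sum_i  r_i^k / prod_{j != i} (r_i - r_j)  =  0   for 0 <= k <= n-2.

def vandermonde_sum(rs, k):
    total = 0.0
    for i, ri in enumerate(rs):
        denom = 1.0
        for j, rj in enumerate(rs):
            if j != i:
                denom *= ri - rj
        total += ri ** k / denom
    return total

random.seed(1)
rs = [random.uniform(-3, 3) for _ in range(5)]      # n = 5 distinct values
for k in range(len(rs) - 1):                         # k = 0 .. n-2
    assert abs(vandermonde_sum(rs, k)) < 1e-9
# For k = n-1 the sum equals 1 (a Lagrange-interpolation identity), so
# the cancellation genuinely stops at k = n-2:
assert abs(vandermonde_sum(rs, len(rs) - 1) - 1.0) < 1e-9
```

The check works equally well for complex $r_i$, which is why the ``virtual horizons'' can be kept in the sums.
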
\subsection{Even dimensions} Suppose $d=2n+2$ with $n\geq1$, in which case the metric for the Myers-Perry black hole is\cite{myers} \be ds^{2}=-dt^{2}+\frac{\mu r}{\Pi F}(dt+\sum\limits_{i=1}^{n}a_{i}\mu_{i}^{2}d\phi_{i})^{2}+\frac{\Pi F}{\Pi-\mu r}dr^{2} +\sum\limits_{i=1}^{n}(r^{2}+a_{i}^{2})(d\mu_{i}^{2}+\mu_{i}^{2}d\phi_{i}^{2})+r^{2}d\alpha^{2}\,, \ee where \be\label{F} F(r)=1-\sum\limits_{i=1}^{n}\frac{a_{i}^{2}\mu_{i}^{2}}{r^{2}+a_{i}^{2}}\,, \ee \be\label{Pi} \Pi(r)=\prod\limits_{i=1}^{n}(r^{2}+a_{i}^{2})\,, \ee and $\alpha$ is an extra unpaired spatial coordinate. There are $2n$ horizons located at the roots of the equation \be\label{root} \Pi-\mu r=0\,. \ee Denote the $i$th root by $r_i$. The corresponding horizon entropy is then given by \cite{chen} \be S_i=\frac{\Omega_{d-2}\Pi(r_{i})}{4}\,, \label{si} \ee and the surface gravity is given by\cite{myers} \be \kappa_i=\left.\frac{\pa_{r}\Pi-\mu}{2\mu r}\right|_{r=r_{i}} \,.\label{kappai} \ee To check the mass dependence of the entropy product, we introduce the function \be\label{starteven} f(r)\equiv \left[\Pi(r)-\mu r\right]\frac{\Pi(r)}{2\mu r}\,. \ee Then it is not difficult to get \be\label{fprime} f'(r_i)=\left.\frac{\pa_{r}\Pi(r)-\mu}{2\mu r}\Pi(r)\right|_{r=r_{i}}=32\pi \Omega_{d-2}T_{i}S_{i}\,. \ee On the other hand, \eq{Pi} indicates that there are $2n$ roots of \eq{root}. Hence, \eq{starteven} can be written in the form \be f(r)=\prod\limits_{j=1}^{2n}(r-r_{j})\cdot\frac{\Pi(r)}{2\mu r}\,, \ee and then \be f'(r_i)=\prod\limits_{j\neq i}(r_{i}-r_{j})\cdot\frac{\Pi(r_{i})}{2\mu r_{i}}=\oh\prod\limits_{j\neq i}(r_{i}-r_{j})\,, \ee where \eq{root} has been used in the last step. Together with \eq{fprime}, we have \be \sum_i \frac{1}{T_iS_i}\sim \sum_{i=1}^{2n}\prod_{j\neq i}\frac{1}{(r_{i}-r_{j})}\,, \ee which vanishes according to Lemma \ref{lemma2}. Therefore, the entropy product is independent of the mass of the black hole. 
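As a numerical illustration (ours, not part of the paper), the vanishing of $\sum_i 1/(T_iS_i)$ can be verified directly from \eq{si} and \eq{kappai} for $d=6$ ($n=2$), with the complex ``virtual'' horizons included on the same footing; the spin and mass parameters and the simple Durand-Kerner root finder are our own illustrative choices:

```python
import math

# Sketch (ours, not part of the paper): check sum_i 1/(T_i S_i) = 0 for a
# d = 6 (n = 2) Myers-Perry black hole directly from the entropy and
# surface-gravity formulas quoted above, including complex horizons.
a1, a2, mu = 0.4, 0.7, 3.0   # illustrative spin and mass parameters
Omega = 1.0                  # Omega_{d-2} only rescales the sum; set to 1

def Pi(r):
    return (r * r + a1 * a1) * (r * r + a2 * a2)

def dPi(r):
    return 2 * r * ((r * r + a1 * a1) + (r * r + a2 * a2))

def horizon_poly(r):
    """Pi(r) - mu*r: a monic quartic whose four roots are the horizons."""
    return Pi(r) - mu * r

def _prod(factors):
    out = 1 + 0j
    for f in factors:
        out *= f
    return out

def durand_kerner(f, degree, iters=500):
    """All complex roots of a monic polynomial by simultaneous iteration."""
    roots = [(0.4 + 0.9j) ** (k + 1) for k in range(degree)]
    for _ in range(iters):
        roots = [
            r - f(r) / _prod(r - s for j, s in enumerate(roots) if j != i)
            for i, r in enumerate(roots)
        ]
    return roots

roots = durand_kerner(horizon_poly, 4)
assert max(abs(horizon_poly(r)) for r in roots) < 1e-8   # genuine roots

def T(r):   # temperature kappa_i / (2 pi), kappa_i from the formula above
    return (dPi(r) - mu) / (2 * mu * r) / (2 * math.pi)

def S(r):   # horizon entropy
    return Omega * Pi(r) / 4

total = sum(1 / (T(r) * S(r)) for r in roots)
assert abs(total) < 1e-6   # the entropy-product criterion of Theorem 1
```
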
To discuss the entropy sum, we should start with the function \be \frac{\Pi-\mu r}{2\mu r}=\frac{\prod\limits_{i=1}^{2n}(r-r_{i})}{2\mu r}\,. \ee After taking derivatives with respect to $r$, we obtain \be \sum\limits_{i=1}^{2n}\frac{1}{8\pi T_{i}}=\sum\limits_{i=1}^{2n}\frac{2\mu r_{i}}{\prod\limits_{j\neq i}(r_{i}-r_{j})}\,, \ee which equals zero as long as $n\geq 2$, according to Lemma \ref{lemma2}. So the entropy sum is independent of the mass for $n\geq 2$. The only exception is $n=1$ ($d=4$), i.e., the Kerr solution. In this case, it is straightforward to show that the entropy sum is $4M$. \subsection{Odd dimensions} Suppose $d=2n+1$, $d\geq 5$. The metric reads\cite{myers} \be ds^{2}=-dt^{2}+\frac{\mu r^{2}}{\Pi F}\left(dt+\sum\limits_{i=1}^{n}a_{i}\mu_{i}^{2}d\phi_{i}\right)^{2}+\frac{\Pi F}{\Pi-\mu r^{2}}dr^{2} +\sum\limits_{i=1}^{n}(r^{2}+a_{i}^{2})(d\mu_{i}^{2}+\mu_{i}^{2}d\phi_{i}^{2})\,, \ee where $F$ and $\Pi$ are given by (\ref{F}) and (\ref{Pi}). Again, there are $2n$ horizons located at the roots of \be\label{rootodd} \Pi-\mu r^{2}=0\,. \ee The surface gravity is\cite{myers} \be \kappa_i=\left.\frac{\pa_{r}\Pi-2\mu r}{2\mu r^{2}}\right|_{r=r_{i}}\,, \ee and the entropy is\cite{chen} \be S_i=\frac{\Omega_{d-2}\Pi(r_{i})}{4r_{i}}\,. \ee To discuss the entropy product, we start with the function \be\label{startodd} f(r)=[\Pi(r)-\mu r^{2}]\frac{\Pi(r)}{2\mu r^{3}}\,. \ee Then \be f'(r_i)=32\pi \Omega_{d-2}T_{i}S_{i}=\prod\limits_{j\neq i}(r_{i}-r_{j})\cdot\frac{\Pi(r_{i})}{2\mu r_{i}^{3}}\,. \ee Applying \eq{rootodd}, we can simplify the right-hand side to obtain \be T_{i}S_{i}=\frac{\prod\limits_{j\neq i}(r_{i}-r_{j})}{2r_{i}}\,, \ee and then \be \frac{1}{32\pi \Omega_{d-2}}\sum\limits_{i=1}^{2n}\frac{1}{T_{i}S_{i}}=\sum_i\frac{2r_{i}}{\prod\limits_{j\neq i}(r_{i}-r_{j})}\,. \ee Since $d\geq 5$ implies $n\geq 2$, the sum vanishes by Lemma \ref{lemma2}, and the entropy product is independent of the mass by Theorem \ref{theo-first}. 
To discuss the entropy sum, we start with the function \be \frac{\Pi-\mu r^{2}}{2\mu r^{2}}=\frac{\prod\limits_{i=1}^{2n}(r-r_{i})}{2\mu r^{2}}\,. \ee By similar arguments, one can find \be \sum\limits_{i=1}^{2n}\frac{1}{8\pi T_{i}}=\sum_i^{2n}\left(\frac{2\mu r_{i}^{2}}{\prod\limits_{j\neq i}(r_{i}-r_{j})}\right)\,, \ee which equals zero as long as $n\geq 2$, according to Lemma \ref{lemma2}. So the entropy sum is independent of the mass for all odd dimensions ($d\geq 5$). \section{Conclusions} We have developed some criteria for the mass independence of the black hole entropy product and entropy sum. By applying the first law of black hole thermodynamics, the explicit form of the mass is no longer needed. Thus, our method allows the mass to take different forms as long as the first law is satisfied. This treatment is particularly useful for spacetimes that are not asymptotically flat, such as asymptotically (anti-)de Sitter spacetimes, where the mass cannot be uniquely specified. For Reissner-Nordstr\"om black holes with or without a cosmological constant, the criterion becomes very simple and straightforward and does not require detailed knowledge of the metric. The technique has also proven useful in non-spherically symmetric cases. For Myers-Perry black holes, we have shown that the entropy sum is mass independent in higher dimensions ($d>4$). It depends on the mass only in four dimensions, i.e., for the Kerr solution. With our method, it is also easy to demonstrate that the entropy product is mass independent in all dimensions. \section*{Acknowledgements} This research was supported by NSFC Grants No. 11235003, No. 11375026 and No. NCET-12-0054.
\section{Introduction} \label{sec:intro} Throughout the paper, an embedded graph will mean one embedded in $R^3$. A graph is {\em intrinsically knotted\/} if every embedding contains a non-trivially knotted cycle. Conway and Gordon \cite{CG} showed that $K_7$, the complete graph with seven vertices, is an intrinsically knotted graph. Foisy \cite{F} showed that $K_{3,3,1,1}$ is also intrinsically knotted. A graph $H$ is a {\em minor\/} of another graph $G$ if it can be obtained by contracting edges in a subgraph of $G$. If a graph $G$ is intrinsically knotted and has no proper minor that is intrinsically knotted, we say $G$ is {\em minor minimal intrinsically knotted\/}. Robertson and Seymour \cite{RS} proved that for any property of graphs, there is a finite set of graphs minor minimal with respect to that property. In particular, there are only finitely many minor minimal intrinsically knotted graphs, but finding the complete set is still an open problem. A $\nabla Y$ {\em move\/} is an exchange operation on a graph that removes all edges of a triangle $abc$ and then adds a new vertex $v$ and three new edges $va, vb$ and $vc$. The reverse operation is called a $Y \nabla$ {\em move\/} as follows: \begin{figure}[h] \includegraphics[scale=1]{fig0.eps} \end{figure} Since the $\nabla Y$ move preserves intrinsic knottedness \cite{MRS}, we will concentrate on triangle-free graphs. We say two graphs $G$ and $G^{\prime}$ are {\em cousins} of each other if $G^{\prime}$ is obtained from $G$ by a finite sequence of $\nabla Y$ and $Y \nabla$ moves. The set of all cousins of $G$ is called the $G$ {\em family}. Johnson, Kidwell and Michael \cite{JKM} and, independently, the second author \cite{M} showed that intrinsically knotted graphs have at least 21 edges. 
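The $\nabla Y$ move is easy to phrase on a graph stored as a set of edges. The sketch below (ours, purely illustrative) applies one such move to a triangle of $K_7$ and confirms that the move preserves the number of edges while adding one vertex:

```python
from itertools import combinations

# Sketch (ours): a nabla-Y exchange on a graph stored as a set of
# frozenset edges.  Applied to K_7, it illustrates that the move
# preserves the edge count (3 edges removed, 3 added).

def triangle_y_move(edges, a, b, c, new_vertex):
    """Remove the triangle abc and add a new vertex joined to a, b, c."""
    tri = {frozenset(p) for p in ((a, b), (b, c), (a, c))}
    assert tri <= edges, "a, b, c must span a triangle"
    star = {frozenset((new_vertex, u)) for u in (a, b, c)}
    return (edges - tri) | star

k7 = {frozenset(e) for e in combinations(range(7), 2)}
assert len(k7) == 21                          # K_7 has 21 edges

g = triangle_y_move(k7, 0, 1, 2, new_vertex=7)
assert len(g) == 21                           # edge count is preserved
assert len({v for e in g for v in e}) == 8    # one extra vertex
assert frozenset((0, 1)) not in g             # the triangle is gone
```
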
Hanaki, Nikkuni, Taniyama and Yamazaki \cite{HNTY} constructed the $K_7$ family, which consists of 20 graphs derived from $H_{12}$ and $C_{14}$ by $Y \nabla$ moves as in Figure \ref{fig11}, and they showed that the six graphs $N_9$, $N_{10}$, $N_{11}$, $N'_{10}$, $N'_{11}$ and $N'_{12}$ are not intrinsically knotted. Goldberg, Mattman and Naimi \cite{GMN} also proved this independently. \begin{figure}[h] \includegraphics[scale=1]{fig11.eps} \caption{The $K_7$ family} \label{fig11} \end{figure} Recently, two groups \cite{BM, LKLO}, working independently, showed that $K_7$ and the 13 graphs obtained from $K_7$ by $\nabla Y$ moves are the only intrinsically knotted graphs with 21 edges. This gives the complete set of 14 minor minimal intrinsically knotted graphs with 21 edges, which we call {\em the KS graphs} as they were first described by Kohara and Suzuki~\cite{KS}. In this paper, we concentrate on intrinsically knotted graphs with 22 edges. The $K_{3,3,1,1}$ family consists of 58 graphs, of which 26 were previously known to be minor minimal intrinsically knotted. Goldberg et al.~\cite{GMN} showed that the remaining 32 graphs are also minor minimal intrinsically knotted. The graph $E_9+e$ of Figure~\ref{fig12} is obtained from $N_9$ by adding a new edge $e$ and has a family of 110 graphs. All of these graphs are intrinsically knotted and exactly 33 are minor minimal intrinsically knotted \cite{GMN}. Combining the $K_{3,3,1,1}$ and $E_9+e$ families thus yields 168 graphs already known to be intrinsically knotted with 22 edges. \begin{figure}[h] \includegraphics[scale=1]{fig12.eps} \caption{The graph $E_9+e$} \label{fig12} \end{figure} A {\em bipartite\/} graph is a graph whose vertices can be divided into two disjoint sets $A$ and $B$ such that every edge connects a vertex in $A$ to one in $B$. Equivalently, a bipartite graph is a graph that does not contain any odd-length cycles. 
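The odd-cycle characterization yields an immediate algorithmic bipartiteness test. As an illustration (ours, not part of the paper), the following sketch builds the Heawood graph from its standard LCF description $[5,-5]^7$ and confirms by breadth-first 2-coloring that it is bipartite, cubic, and has 21 edges:

```python
from collections import deque

# Sketch (ours): the Heawood graph C_14 from its LCF notation [5,-5]^7,
# plus a breadth-first 2-coloring test for bipartiteness.

def heawood():
    n = 14
    edges = {frozenset((i, (i + 1) % n)) for i in range(n)}   # the 14-cycle
    for i in range(n):
        step = 5 if i % 2 == 0 else -5                        # LCF chords
        edges.add(frozenset((i, (i + step) % n)))
    return edges

def is_bipartite(edges):
    adj = {}
    for e in edges:
        u, v = tuple(e)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:   # odd cycle found
                    return False
    return True

E = heawood()
assert len(E) == 21                    # 14 cycle edges + 7 chords
degs = {}
for e in E:
    for v in e:
        degs[v] = degs.get(v, 0) + 1
assert set(degs.values()) == {3}       # cubic, so minimum degree three
assert is_bipartite(E)                 # no odd cycle
```
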
Among the 14 intrinsically knotted graphs with 21 edges, only $C_{14}$, {\em the Heawood graph}, is bipartite. A bipartite graph formed by adding an edge to the Heawood graph will be bipartite intrinsically knotted. We will show that this is the only way to form such a graph that has a KS graph minor. Among the remaining 168 known examples of intrinsically knotted graphs with 22 edges in the $K_{3,3,1,1}$ and $E_9+e$ families, cousins 89 and 110 of the $E_9+e$ family are the only bipartite graphs. \begin{figure}[h] \includegraphics[scale=1]{fig13.eps} \caption{Two cousins 89 and 110 of the $E_9+e$ family} \label{fig13} \end{figure} However, Cousin 89 has the Heawood graph as a subgraph. Our goal in this paper is to show that Cousin 110 completes the list of minor minimal examples. We say that a graph $G$ is {\em minor minimal bipartite intrinsically knotted} if $G$ is an intrinsically knotted bipartite graph, but no proper minor of $G$ has this property. Since contracting edges can lead to a bipartite minor of a graph that was not bipartite to begin with, it's easy to construct examples of graphs that are not themselves bipartite intrinsically knotted even though they have a minor that is minor minimal bipartite intrinsically knotted. Nonetheless, Robertson and Seymour's~\cite{RS} Graph Minor Theorem guarantees that there are only finitely many minor minimal bipartite intrinsically knotted graphs and that every bipartite intrinsically knotted graph must have one as a minor. Our main theorem shows that there are exactly two such graphs with 22 or fewer edges. \begin{theorem}\label{thm:main} There are exactly two graphs of size at most 22 that are minor minimal bipartite intrinsically knotted: the Heawood graph and Cousin 110 of the $E_9+e$ family. 
\end{theorem} As we show below, the argument quickly reduces to graphs of minimum degree $\delta(G)$ at least three, for which we have: \begin{theorem}\label{thm:main3} There are exactly two bipartite intrinsically knotted graphs with 22 edges and minimum degree at least three, the two cousins 89 and 110 of the $E_9+e$ family. \end{theorem} We remark that Cousin 110 was earlier identified as bipartite intrinsically knotted in \cite[Theorem 3]{HAMM} as part of a classification of such graphs on ten or fewer vertices. It follows from that classification that Cousin 110 is the only minor minimal bipartite intrinsically knotted graph of order ten or less. It would be interesting to know if there are further examples of order between 11 and 14, which is the order of the Heawood graph. Such examples would have at least 23 edges. \begin{proof}[Proof of Theorem~\ref{thm:main}] Suppose $G$ is bipartite intrinsically knotted, with $\|G \| \leq 22$. If $\delta(G) \leq 1$, we may delete a vertex (and its edge, if it has one) to obtain a proper minor that also has this property, so $G$ is not minor minimal. If $\delta(G) = 2$, then contracting an edge adjacent to a degree two vertex gives a minor $H$ that remains intrinsically knotted and is of size at most 21. Thus $H$ is one of the KS graphs. In other words $G$ is obtained by a vertex split of the KS graph $H$. Now, a graph obtained in this way from a KS graph will be intrinsically knotted and have 22 edges. However, it's straightforward to verify that it cannot be bipartite. So, we can assume $\delta(G) \geq 3$. If $\|G\| = 21$, $G$ must be a KS graph and $C_{14}$, the Heawood graph, is the only bipartite graph in this set. As graphs of 20 edges are not intrinsically knotted~\cite{JKM,M}, $C_{14}$ is minor minimal for intrinsic knotting and, so, also for bipartite intrinsically knotted. By Theorem~\ref{thm:main3}, if $\|G\| = 22$, $G$ must be one of the two cousins in the $E_9+e$ family. 
Goldberg et al.~\cite{GMN} showed that all graphs in this family are intrinsically knotted. However, Cousin 89 is formed by adding an edge to the Heawood graph and is not minor minimal. On the other hand, it's easy to verify that Cousin 110 is minor minimal bipartite intrinsically knotted and, therefore, the only such graph on 22 edges. \end{proof} The remainder of this paper is a proof of Theorem~\ref{thm:main3}. In the next section we introduce some terminology and outline the strategy of our proof. \section{Terminology and strategy} \label{sec:term} Henceforth, let $G=(A,B,E)$ denote a bipartite graph with 22 edges whose vertex set is partitioned into the parts $A$ and $B$, with $E$ denoting the edge set of the graph. Note that $G$ is triangle-free. For any two distinct vertices $a$ and $b$, let $\widehat{G}_{a,b}$ denote the graph obtained from $G$ by deleting $a$ and $b$, and then contracting edges adjacent to vertices of degree 1 or 2, one by one repeatedly, until no vertices of degree 1 or 2 remain. Removing vertices means deleting interiors of all edges adjacent to these vertices and any remaining isolated vertices. Let $\widehat{E}_{a,b}$ denote the set of edges of $\widehat{G}_{a,b}$. The distance, denoted by ${\rm dist}(a,b)$, between $a$ and $b$ is the number of edges in a shortest path connecting them. If $a$ has distance 1 from $b$, then we say that $a$ and $b$ are {\em adjacent\/}. The degree of $a$ is denoted by $\deg(a)$. Note that $\sum_{a \in A} \deg(a) = \sum_{b \in B} \deg(b) = 22$, since every edge joins a vertex of $A$ to a vertex of $B$. To count the number of edges of $\widehat{G}_{a,b}$, we introduce some notation. \begin{itemize} \item $E(a)$ is the set of edges that are adjacent to a vertex $a$.
\item $V(a)=\{c \in A \cup B\ |\ {\rm dist}(a,c)=1\}$ \item $V_n(a)=\{c \in A \cup B\ |\ {\rm dist}(a,c)=1,\ \deg(c)=n\}$ \item $V_n(a,b)=V_n(a) \cap V_n(b)$ \item $V_Y(a,b)=\{c \in A \cup B\ |\ \exists \ d \in V_3(a,b) \ \mbox{such that} \ c \in V_3(d) \setminus \{a,b\}\}$ \end{itemize} Obviously, in $G \setminus \{a,b\}$ for any distinct vertices $a$ and $b$, each vertex of $V_3(a,b)$ has degree 1. Also each vertex of $V_3(a)$, $V_3(b)$ (but not of $V_3(a,b)$) and $V_4(a,b)$ has degree 2. Therefore to derive $\widehat{G}_{a,b}$ all edges adjacent to $a$, $b$ and $V_3(a,b)$ are deleted from $G$, followed by contracting one of the remaining two edges adjacent to each vertex of $V_3(a)$, $V_3(b)$, $V_4(a,b)$ and $V_Y(a,b)$ as in Figure \ref{fig21}(a). Thus we have the following equation counting the number of edges of $\widehat{G}_{a,b}$, which is called the {\em count equation\/}: $$|\widehat{E}_{a,b}| = 22 - |E(a)\cup E(b)| - (|V_3(a)|+|V_3(b)|-|V_3(a,b)|+|V_4(a,b)|+|V_Y(a,b)|).$$ \begin{figure}[h] \includegraphics[scale=1]{fig21.eps} \caption{Deriving $\widehat{G}_{a,b}$} \label{fig21} \end{figure} For short, write $NE(a,b) = |E(a)\cup E(b)|$ and $NV_3(a,b) = |V_3(a)|+|V_3(b)|-|V_3(a,b)|$. If $a$ and $b$ are adjacent (i.e.\ ${\rm dist}(a,b)=1$), then $V_3(a,b)$, $V_4(a,b)$ and $V_Y(a,b)$ are all empty sets because $G$ is triangle-free. Note that the derivation of $\widehat{G}_{a,b}$ must be handled slightly differently when there is a vertex $c$ in $A \cup B$ such that more than one vertex of $V(c)$ is contained in $V_3(a,b)$ as in Figure \ref{fig21}(b). In this case we usually delete or contract more edges even though $c$ is not in $V_Y(a,b)$. The following proposition, which was mentioned in \cite{LKLO}, gives two important conditions that ensure a graph fails to be intrinsically knotted. Note that $K_{3,3}$ is a triangle-free graph and every vertex has degree 3.
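As a quick sanity check, the right-hand side of the count equation can be evaluated mechanically from the definitions above. The sketch below (our own verification, not part of the proof) does so for two nonadjacent vertices in the same part, the generic situation of Figure \ref{fig21}(a). As a test case we replace the constant 22 by the total number of edges and apply it to the 21-edge Heawood graph $C_{14}$, realized as the point-line incidence graph of the Fano plane; the result is 9.

```python
# Heawood graph C14: point-line incidence graph of the Fano plane.
# All fourteen vertices have degree 3, and there are 21 edges.
FANO_LINES = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6),
              (2, 5, 7), (3, 4, 7), (3, 5, 6)]
adj = {('pt', p): set() for p in range(1, 8)}
adj.update({('ln', ln): set() for ln in FANO_LINES})
for ln in FANO_LINES:
    for p in ln:
        adj[('pt', p)].add(('ln', ln))
        adj[('ln', ln)].add(('pt', p))

def count_equation(adj, a, b):
    """||G|| - NE(a,b) - (NV_3(a,b) + |V_4(a,b)| + |V_Y(a,b)|)."""
    deg = {v: len(adj[v]) for v in adj}
    m = sum(deg.values()) // 2                       # total number of edges
    ne = len({frozenset((a, x)) for x in adj[a]}
             | {frozenset((b, x)) for x in adj[b]})  # NE(a,b)
    v3a = {c for c in adj[a] if deg[c] == 3}         # V_3(a)
    v3b = {c for c in adj[b] if deg[c] == 3}         # V_3(b)
    v3ab = v3a & v3b                                 # V_3(a,b)
    v4ab = {c for c in adj[a] & adj[b] if deg[c] == 4}
    vy = {c for d in v3ab for c in adj[d]
          if deg[c] == 3 and c not in (a, b)}        # V_Y(a,b)
    return m - ne - (len(v3a) + len(v3b) - len(v3ab) + len(v4ab) + len(vy))

# Two nonadjacent vertices in the same part: the points 1 and 2.
nine = count_equation(adj, ('pt', 1), ('pt', 2))
```

An edge count of 9 by itself is inconclusive, which is consistent with $C_{14}$ being intrinsically knotted.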
\begin{proposition} \label{prop:planar} If $\widehat{G}_{a,b}$ satisfies one of the following two conditions, then $G$ is not intrinsically knotted. \begin{itemize} \item[(1)] $|\widehat{E}_{a,b}| \leq 8$, or \item[(2)] $|\widehat{E}_{a,b}|=9$ and $\widehat{G}_{a,b}$ is not isomorphic to $K_{3,3}$. \end{itemize} \end{proposition} \begin{proof} If $|\widehat{E}_{a,b}| \leq 8$, then $\widehat{G}_{a,b}$ is a planar graph; indeed, any non-planar graph contains a $K_5$ or $K_{3,3}$ minor, and these have 10 and 9 edges, respectively. Also if $|\widehat{E}_{a,b}|=9$, then $\widehat{G}_{a,b}$ is either a planar graph or isomorphic to $K_{3,3}$. It is known that if $\widehat{G}_{a,b}$ is planar, then $G$ is not intrinsically knotted \cite{BBFFHL, OT}. \end{proof} In proving Theorem~\ref{thm:main3} it is sufficient to consider connected graphs having no vertex of degree 1 or 2. Our process is to construct all possible such bipartite graphs $G$ with 22 edges, delete two vertices $a$ and $b$ of $G$, and then count the number of edges of $\widehat{G}_{a,b}$. If $\widehat{G}_{a,b}$ has 9 edges or less and is not isomorphic to $K_{3,3}$, then we conclude that $G$ is not intrinsically knotted by Proposition \ref{prop:planar}. \begin{proposition} \label{prop:deg6} There is no bipartite intrinsically knotted graph with 22 edges and minimum degree at least three that has a vertex of degree 6 or more. \end{proposition} \begin{proof} Suppose that $G$ is an intrinsically knotted graph with 22 edges that has a vertex $a$ in $A$ of degree 6 or more. Since $\sum_{b \in B} \deg(b) = 22$ and each vertex of $B$ has degree at least 3, $B$ consists of at most seven vertices, so the degree of $a$ cannot exceed 7. If $\deg(a) = 7$, then $B$ consists of seven vertices, and so one vertex $b$ has degree 4 and the others have degree 3. Then $NE(a,b) = 10$ and $|V_3(a)| = 6$. By the count equation, $|\widehat{E}_{a,b}| \leq 6$ in $\widehat{G}_{a,b}$. Now assume that $\deg(a) = 6$. Since $\sum_{c \in A} \deg(c) = 22$, there is a vertex $c$ of degree at least 4 in $A$.
We may assume that $|V_3(a)| + |V_4(a,c)| \leq 3$, otherwise $|\widehat{E}_{a,c}| \leq 8$ because $NE(a,c) \geq 10$. Because $|V_3(a)| \leq 3$, $B$ consists of exactly six vertices and at most three vertices in $B$ have degree 3. Furthermore $c$ is adjacent to at most three vertices of degree 3 or 4 in $B$. It follows that $c$ has degree 4, and is adjacent to a degree 5 vertex $b$ and three other degree 3 vertices in $B$ as shown in Figure \ref{fig22}. If there is another vertex of degree more than 3 in $A$, then, like $c$, it is adjacent to three degree 3 vertices in $B$. Therefore $A$ can have at most three vertices of degree more than 3, including $a$ and $c$. This implies that $|V_3(b)| \geq 2$, and so $|\widehat{E}_{a,b}| \leq 7$. \end{proof} \begin{figure}[h] \includegraphics[scale=1]{fig22.eps} \caption{The case of $\deg(a) = 6$} \label{fig22} \end{figure} Therefore each vertex of $A$ and $B$ has degree 3, 4 or 5 only. Let $A_n$ denote the set of vertices in $A$ of degree $n = 3,4,5$ and $[A] = [|A_5|, |A_4|,|A_3|]$ and similarly for $B$. Then $[A] = [3,1,1]$, $[2,3,0]$, $[2,0,4]$, $[1,2,3]$, $[0,4,2]$ or $[0,1,6]$. Without loss of generality, we may assume that $|A_5| \geq |B_5|$, and if $|A_5| = |B_5|$ then $|A_4| \geq |B_4|$. This paper relies on the technical machinery developed in \cite{LKLO}. We divide the proof of Theorem \ref{thm:main3} according to the size of $A_5$, always under the assumption that our graphs are of minimum degree at least three. In Section 3, we show that the only bipartite intrinsically knotted graph with two or more degree 5 vertices in $A$ is Cousin 110 of the $E_9+e$ family. In Section 4, we show that there is no bipartite intrinsically knotted graph with exactly one degree 5 vertex in $A$. In Section 5, we show that the only bipartite intrinsically knotted graph with all vertices of degree at most 4 is Cousin 89 of the $E_9+e$ family.
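The six possibilities for $[A]$ can be confirmed by brute force: with every degree equal to 3, 4 or 5, the triples $[|A_5|, |A_4|, |A_3|]$ are exactly the nonnegative solutions of $5|A_5| + 4|A_4| + 3|A_3| = 22$. A short enumeration (ours, as a check):

```python
# Degree sequences [|A_5|, |A_4|, |A_3|] of one part of the bipartition:
# every vertex has degree 3, 4 or 5 (degree >= 6 was excluded above), and
# the degrees on each side sum to the number of edges, 22.
seqs = [[a5, a4, a3]
        for a5 in range(5)          # 5 * 5 = 25 > 22, so |A_5| <= 4
        for a4 in range(6)          # 4 * 6 = 24 > 22, so |A_4| <= 5
        for a3 in range(8)          # 3 * 8 = 24 > 22, so |A_3| <= 7
        if 5 * a5 + 4 * a4 + 3 * a3 == 22]
```

Listing the solutions in decreasing lexicographic order reproduces the six cases above.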
\section{Case of $|A_5| \geq 2$} In this case, $A$ has at least two degree 5 vertices, say $a$ and $b$, and so $[A]$ is one of $[3,1,1]$, $[2,3,0]$ or $[2,0,4]$. First we consider the case that $[B]$ is $[3,1,1]$, in other words, $([A], [B]) = ([3,1,1], [3,1,1])$. The edges adjacent to each vertex in $A_5$ and $B_5$ are constructed in a unique way because both $A$ and $B$ have exactly five vertices. This determines 21 edges of the graph; at this stage, both $A$ and $B$ have three vertices of degree 5 and two of degree 3. Now we add a final, dashed edge to connect two degree 3 vertices, one in $A$ and another in $B$. We get the graph shown in Figure \ref{fig31} (a), which is Cousin 110 of the $E_9+e$ family and is intrinsically knotted. In case $[B]$ is $[2,3,0]$, i.e., $([A], [B])$ is either $([3,1,1], [2,3,0])$ or $([2,3,0], [2,3,0])$, we similarly construct the edges adjacent to each vertex in $A_5$ and $B_5$ in a unique way. We add the remaining dashed edges, which are also determined uniquely. This gives, for the two cases, the graphs shown in Figure \ref{fig31} (b) and (c), respectively. In both cases, $\widehat{G}_{a,b}$ is planar for the vertices $a$ and $b$ shown in the figures. \begin{figure}[h] \includegraphics[scale=1]{fig31.eps} \caption{The cases that $A$ has at least two degree 5 vertices} \label{fig31} \end{figure} Now consider the case where $[B]$ is $[2,0,4]$, i.e., $([A], [B])$ is one of $([3,1,1], [2,0,4])$, $([2,3,0], [2,0,4])$ or $([2,0,4], [2,0,4])$. We may assume that $NV_3(a,b) \leq 3$, otherwise $|\widehat{E}_{a,b}| \leq 8$ because $NE(a,b) \geq 10$. Thus $a$ and $b$ are adjacent to the same five vertices in $B$ as shown in Figure \ref{fig31} (d). Let $d$ be the remaining degree 3 vertex in $B$, with $d$ adjacent to three vertices other than $a$ and $b$. If $[A]$ is $[3,1,1]$, the remaining vertex in $A_5$ must be adjacent to the same vertices as $a$. This is impossible because this vertex is also adjacent to $d$.
If $[A]$ is $[2,3,0]$, the remaining dashed edges can be added in a unique way. Let $c$ be a vertex in $A_4$. Since $NE(a,c) = 9$ and $NV_3(a,c) = 4$, we have $|\widehat{E}_{a,c}| \leq 9$. Since $\widehat{G}_{a,c}$ has the degree 4 vertex $b$, it is not isomorphic to $K_{3,3}$. If $[A]$ is $[2,0,4]$, let $a'$ be a vertex in $B_5$; then $NE(a,a') = 9$ and $NV_3(a,a') \geq 6$, so $|\widehat{E}_{a,a'}| \leq 7$. Consider the case where $[B]$ is $[1,2,3]$, i.e., $([A], [B])$ is one of $([3,1,1], [1,2,3])$, $([2,3,0], [1,2,3])$ or $([2,0,4], [1,2,3])$. Similarly we may assume that $NV_3(a,b) + |V_4(a,b)| \leq 3$, otherwise $|\widehat{E}_{a,b}| \leq 8$. Thus $a$ and $b$ are adjacent to the same four vertices of degree 5 or 3, but different degree 4 vertices in $B$ as shown in Figure \ref{fig31} (e). If $[A]$ is $[3,1,1]$, the remaining vertex in $A_5$ must be adjacent to another degree 4 vertex which is not adjacent to $a$ and $b$, but this is impossible. For the remaining two cases, we follow the same argument as the last two cases when $[B]$ was $[2,0,4]$. It remains to consider $[B] = [0,4,2]$ or $[0,1,6]$. For any two vertices $a$ and $b$ in $A_5$, $NE(a,b) = 10$ and $NV_3(a,b) + |V_4(a,b)| \geq 4$, and so $|\widehat{E}_{a,b}| \leq 8$. \section{Case of $|A_5| = 1$} In this case, there is only one choice for $[A]$, namely $[1,2,3]$. Let $a$, $b_1$, $b_2$, $c_1$, $c_2$ and $c_3$ be the degree 5 vertex, the two degree 4 vertices and the three degree 3 vertices in $A$. There are three choices for $[B]$: $[1,2,3]$, $[0,4,2]$ or $[0,1,6]$. We divide into three subsections, one for each case. \subsection{$[B] = [1,2,3]$} \hspace{1cm} Let $a'$, $b'_1$, $b'_2$, $c'_1$, $c'_2$ and $c'_3$ be the degree 5 vertex, the two degree 4 vertices and the three degree 3 vertices in $B$. If $NV_3(a,a') \geq 5$, then $|\widehat{E}_{a,a'}| \leq 8$. Therefore $NV_3(a,a') = 4$ and we get the subgraph shown in Figure \ref{fig41} (a). Assume that $c_3$ and $c'_3$ are the remaining unused vertices.
\begin{figure}[h] \includegraphics[scale=1]{fig41.eps} \caption{$[A] = [B] = [1,2,3]$} \label{fig41} \end{figure} If $V_4(b_1)$ is empty, i.e., $|V_3(b_1)| = 3$, then $|\widehat{E}_{b_1,a'}| \leq 9$ and $\widehat{G}_{b_1,a'}$ has the degree 4 vertex $a$. Thus $|V_4(b_1)| \geq 1$, and similarly $|V_4(b_2)|$, $|V_4(b'_1)|$ and $|V_4(b'_2)| \geq 1$. Without loss of generality, we say that ${\rm dist}(b_1,b'_1) =1$ and ${\rm dist}(b_2,b'_2) =1$. If $V_4(c'_3)$ is empty, i.e., $|V_3(c'_3)| = 3$, then $|\widehat{E}_{a,c'_3}| \leq 9$ and $\widehat{G}_{a,c'_3}$ has the degree 4 vertex $b_1$. Thus we may assume that ${\rm dist}(b_2,c'_3) =1$. If ${\rm dist}(b_2,b'_1) =1$, then $|\widehat{E}_{a,b_2}| \leq 8$. Thus we also assume that ${\rm dist}(b_2,c'_1) =1$. If ${\rm dist}(c_i,c'_1) =1$ for some $i=1,2,3$, then $|\widehat{E}_{a,b_2}| \leq 8$ because $NV_3(a,b_2) + |V_4(a,b_2)| = 4$ and $V_Y(a,b_2) = \{ c_i \}$. Therefore ${\rm dist}(b_1,c'_1) =1$. Now consider the graph $\widehat{G}_{a,a'}$. Since $|\widehat{E}_{a,a'}| \leq 9$, we only need to consider the case that $\widehat{G}_{a,a'}$ is isomorphic to $K_{3,3}$. Since the three vertices $b_1$, $b'_2$ and $c'_3$ (big black dots in the figure) are adjacent to $b_2$ in $\widehat{G}_{a,a'}$, they're also adjacent to $b'_1$ and $c_3$ (big white dots) as shown in Figure \ref{fig41} (b). Restore the graph $G$ by adding back the two vertices $a$ and $a'$ and their associated nine edges as shown in Figure \ref{fig41} (c). The reader can easily check that the graph $\widehat{G}_{a,b_1}$ is planar. \subsection{$[B] = [0,4,2]$} \hspace{1cm} First, we give a very useful lemma. Let $\widetilde{K}_{3,3}$ be the bipartite graph shown in Figure \ref{fig42}. The six degree 3 vertices in $\widetilde{K}_{3,3}$ are divided into three big black vertices and three big white vertices. If we ignore the three degree 2 vertices, then we get $K_{3,3}$. The vertex $d_4$ is called the {\em $s$-vertex\/}. 
\begin{figure}[h] \includegraphics[scale=1]{fig42.eps} \caption{$\widetilde{K}_{3,3}$} \label{fig42} \end{figure} \begin{lemma}\label{lem:H} Let $H$ be a bipartite graph such that one partition of its vertices contains four degree 3 vertices, and the other partition contains two degree 3 vertices and three degree 2 vertices. If $H$ is not planar then $H$ is isomorphic to $\widetilde{K}_{3,3}$. \end{lemma} \begin{proof} Let $d_1$, $d_2$, $d_3$ and $d_4$ be the four degree 3 vertices in one partition of $H$, and $d'_1$ and $d'_2$ be the two degree 3 vertices in the other partition, which contains degree 2 vertices. Let $\widehat{H}$ be the graph obtained from $H$ by contracting three edges, one from each pair adjacent to the three degree 2 vertices. Since $\widehat{H}$ consists of nine edges but is not planar, it must be $K_{3,3}$. Therefore $d'_1$, $d'_2$ and one of the $d_i$'s, say $d_4$, are in the same partition of $K_{3,3}$ since ${\rm dist}(d'_1,d'_2) \geq 2$. Since $H$ is originally a bipartite graph, $d_4$ is connected to each $d_i$, $i=1,2,3$, by a path of exactly two edges through a degree 2 vertex of $H$. This gives the graph $\widetilde{K}_{3,3}$ shown in Figure \ref{fig42}. \end{proof} Let $b'_1$, $b'_2$, $b'_3$, $b'_4$, $c'_1$ and $c'_2$ be the four degree 4 vertices and the two degree 3 vertices in $B$. First consider the case that $V_3(a)$ has only one vertex, say $c'_1$. Note that ${\rm dist}(b_i,c'_1) = 1$ for each $i=1,2$, otherwise $|\widehat{E}_{a,b_i}| \leq 8$. Now we divide into two cases depending on whether $b_1$ is adjacent to three vertices among the $b'_j$'s (say $b'_1$, $b'_2$ and $b'_3$) or two (say $b'_1$ and $b'_2$) along with $c'_2$. In Figure \ref{fig43}, the ten non-dashed edges in figures (a) and (b) indicate the first case while the ten non-dashed edges in figures (c)--(e) indicate the second. Let $H$ be the bipartite graph obtained from $G$ by deleting these ten non-dashed edges.
Then $H$ has four degree 3 vertices from $A$, and two degree 3 vertices and three degree 2 vertices from $B$. We only need to handle the case that $H$ is not planar because if $H$ is planar, then $\widehat{G}_{a,b_1}$ is also planar. By Lemma \ref{lem:H}, $H$ is isomorphic to $\widetilde{K}_{3,3}$. In each case, the $s$-vertex is either at $b_2$ as in figures (a) and (c) or at one of the $c_i$'s, say $c_1$, as in figures (b), (d) and (e). The big white vertex on the left identifies the $s$-vertex. Indeed these five figures (a)--(e) represent all the possibilities up to symmetry. \begin{figure}[h] \includegraphics[scale=1]{fig43.eps} \caption{$[A] = [1,2,3]$ and $[B] = [0,4,2]$ with $|V_3(a)|=1$} \label{fig43} \end{figure} For the two graphs in figures (a) and (c), each $\widehat{G}_{b'_1,b'_2}$ has ten edges, but also contains two bi-gons and so is planar. For the other three graphs in the figure, (b), (d) and (e), each $\widehat{G}_{a,b_2}$ has nine edges, but also contains a bi-gon on the vertices $c_2$ and $c_3$. So, these are also planar. Now consider the case where $V_3(a)$ has two vertices. Thus $a$ is adjacent to three $b'_j$ vertices, say $b'_1$, $b'_2$ and $b'_3$, as well as $c'_1$ and $c'_2$. If $|V_3(b'_4)|=3$, then $NV_3(a,b'_4) = 5$, so $|\widehat{E}_{a,b'_4}| \leq 8$. We may assume that $b'_4$ is adjacent to $b_1$, $b_2$, $c_1$ and $c_2$ as shown in Figure \ref{fig44} (a). Furthermore, $|V_3(b'_j)| \leq 2$ for each $j=1,2,3$, otherwise $|\widehat{E}_{a,b'_j}| \leq 9$ and $\widehat{G}_{a,b'_j}$ has the degree 4 vertex $b'_4$. Thus each $b'_j$ is adjacent to either $b_1$ or $b_2$ or both. Without loss of generality, we say that $b_1$ is adjacent to both $b'_1$ and $b'_2$. Also $V_3(b_1)$ must have one vertex, say $c'_1$, and $c'_1$ is adjacent to $b_2$, otherwise $NV_3(a,b_1) + |V_4(a,b_1)| + |V_Y(a,b_1)| \geq 5$, so $|\widehat{E}_{a,b_1}| \leq 8$. The four dashed edges in the figure indicate these new edges.
\begin{figure}[h] \includegraphics[scale=1]{fig44.eps} \caption{$[A] = [1,2,3]$ and $[B] = [0,4,2]$ with $|V_3(a)|=2$} \label{fig44} \end{figure} Now consider the graph $\widehat{G}_{a,b'_4}$. Since $|\widehat{E}_{a,b'_4}| \leq 9$, we can assume that $\widehat{G}_{a,b'_4}$ is isomorphic to $K_{3,3}$. Since the three vertices $b_2$, $b'_1$ and $b'_2$ (big black dots in the figure) are adjacent to $b_1$ in $\widehat{G}_{a,b'_4}$, they're also adjacent to $c_3$ and $b'_3$ (big white dots) as shown in Figure \ref{fig44} (b). Restore the graph $G$ by adding back the two vertices $a$ and $b'_4$ and their associated nine edges as shown in Figure \ref{fig44} (c). The reader can easily check that the graph $\widehat{G}_{a,b_2}$ is planar. \subsection{$[B] = [0,1,6]$} \hspace{1cm} Let $b'$ be the degree 4 vertex in $B$. For each $i=1,2$, $NV_3(a,b_i) + |V_4(a,b_i)| \leq 4$, otherwise $|\widehat{E}_{a,b_i}| \leq 8$. Therefore $V_3(a)$, $V_3(b_1)$ and $V_3(b_2)$ are the same set of four degree 3 vertices in $B$. Since $|V_3(b')|=3$, we have $|\widehat{E}_{a,b'}| = 7$. \section{Case of $|A_5| = 0$} In this case, $([A], [B])$ is one of $([0,4,2], [0,4,2])$, $([0,4,2], [0,1,6])$ or $([0,1,6], [0,1,6])$. We divide into three subsections, one for each case. \subsection{$([A], [B]) = ([0,4,2], [0,4,2])$} \hspace{1cm} We first give a very useful lemma. Let $\widetilde{P}_{10}$ be the bipartite graph shown in Figure \ref{fig51}. By deleting the vertex $c'$ and its two edges, we get $\widetilde{K}_{3,3}$. The vertices $d_4$ and $c'$ are called the {\em $s$-} and {\em $t$-vertex\/}, respectively. \begin{figure}[h] \includegraphics[scale=1]{fig51.eps} \caption{$\widetilde{P}_{10}$} \label{fig51} \end{figure} \begin{lemma}\label{lem:P} Let $H$ be a bipartite graph such that one partition of its vertices contains two degree 4 vertices and two degree 3 vertices, and the other partition contains two degree 3 vertices and four degree 2 vertices.
If $H$ is not planar then $H$ is isomorphic to $\widetilde{P}_{10}$. \end{lemma} \begin{proof} Let $d_1$, $d_2$, $d_3$ and $d_4$ be the two degree 4 vertices and the two degree 3 vertices in one partition of $H$, and $d'_1$ and $d'_2$ be the two degree 3 vertices in the other partition, which contains degree 2 vertices. Let $\widehat{H}$ be the graph obtained from $H$ by contracting four edges, one from each pair adjacent to the degree 2 vertices. Since $\widehat{H}$ consists of six vertices and ten edges but is not planar, it must be $K_{3,3}+e$, the graph obtained from $K_{3,3}$ by connecting two vertices in the same partition by an edge $e$. Then $e$ must connect the two degree 4 vertices $d_1$ and $d_2$. Furthermore $d_1$ and $d_2$ and one of $d_3$ or $d_4$, say $d_3$, are in the same partition of $K_{3,3}+e$ containing the edge $e$. Since $H$ was originally a bipartite graph, $d_1$ and $d_2$ are connected by a path of two edges through a degree 2 vertex, call it $c'$, of $H$, and $d_4$ is connected to each $d_i$, $i=1,2,3$, by a path of two edges through one of the remaining degree 2 vertices of $H$. So, we get the graph $\widetilde{P}_{10}$ as shown in Figure \ref{fig51}. \end{proof} First consider the case that some degree 4 vertex, say $b_1$, in $A$ is adjacent to all four degree 4 vertices in $B$. Let $b_2$ denote another degree 4 vertex in $A$, and $H$ the graph obtained from $G$ by deleting the two vertices $b_1$ and $b_2$ and the adjacent eight edges. If the graph $\widehat{G}_{b_1,b_2}$ is not planar, then $H$ satisfies all assumptions of Lemma \ref{lem:P}. Thus $H$ is isomorphic to $\widetilde{P}_{10}$. Now restore the vertex $b_2$ and the associated four dashed edges as shown in Figure \ref{fig52} (a). Note that these four edges can be restored in a unique way because of the assumptions for the vertex $b_1$. Let $b_3$ denote a degree 4 vertex in $A$, other than $b_1$ and $b_2$.
The reader can easily check that the graph $\widehat{G}_{b_1,b_3}$ is planar as shown in Figure \ref{fig52} (b). \begin{figure}[h] \includegraphics[scale=1]{fig52.eps} \caption{$[A]=[B] = [0,4,2]$ with $|V_3(b_1)|=0$ or 1} \label{fig52} \end{figure} Now assume that each degree 4 vertex, say $b_i$ for $i=1,2,3,4$, in $A$ is adjacent to at least one of the two degree 3 vertices, say $c'_1$ and $c'_2$, in $B$. By counting the degrees of each vertex, we may assume that $b_1$ is adjacent to $c'_1$, but not $c'_2$. Also assume that $b_2$ is adjacent to $c'_2$, but not $c'_1$ since at most three vertices among the $b_i$'s can be adjacent to $c'_1$. If $V_4(b_1) = V_4(b_2)$, then $|\widehat{E}_{b_1,b_2}| \leq 9$. Since $\widehat{G}_{b_1,b_2}$ has the remaining degree 4 vertex in $B$ outside of $V_4(b_1)$, it is not isomorphic to $K_{3,3}$. Therefore $V_4(b_1) \cap V_4(b_2)$ has two vertices, say $b'_1$ and $b'_2$, in $B$. As drawn in Figure \ref{fig52} (c), the eight non-dashed edges adjacent to the vertices $b_1$ and $b_2$ are determined. Let $H$ denote the graph obtained from $G$ by deleting these two vertices and the associated eight edges. If the graph $\widehat{G}_{b_1,b_2}$ is not planar, then $H$ satisfies all assumptions of Lemma \ref{lem:P}. Thus $H$ is isomorphic to $\widetilde{P}_{10}$. There are several choices for the $s$-vertex and the $t$-vertex among $A$ and $B$, respectively. For example, we can choose $c_2$ for the $s$-vertex and $b'_1$ for the $t$-vertex as in figure (c). Whatever choice is made, the graph $\widehat{G}_{b_1,b_3}$ is always planar. \subsection{$([A], [B]) = ([0,4,2], [0,1,6])$} \hspace{1cm} First assume that the degree 4 vertex, say $b'$, in $B$ is adjacent to all four degree 4 vertices, say $b_1$, $b_2$, $b_3$ and $b_4$, in $A$. Let $c'_1$ through $c'_6$ denote the six degree 3 vertices in $B$. If $NV_3(b_j,b_k) \geq 5$ for some distinct $j$ and $k$, then $|\widehat{E}_{b_j,b_k}| \leq 8$. Thus we may assume that $NV_3(b_j,b_k) \leq 4$ for all pairs $j$ and $k$, so $V_3(b_j)$ and $V_3(b_k)$ have at least two common vertices among the $c'_i$'s. Assume that $b_1$ is adjacent to $c'_1$, $c'_2$ and $c'_3$. Note that each $c'_i$ is adjacent to at least one of the $b_j$'s because $c'_i$ has degree 3. We also assume that $c'_4$ is adjacent to $b_2$. Since $NV_3(b_1,b_2) \leq 4$, assume that $b_2$ is also adjacent to $c'_1$ and $c'_2$. Again, assume that $c'_5$ is adjacent to $b_3$. Since $NV_3(b_j,b_3) \leq 4$ for $j=1,2$, $b_3$ must be adjacent to $c'_1$ and $c'_2$. Now $c'_6$ is adjacent to $b_4$, and similarly $b_4$ must be adjacent to $c'_1$ and $c'_2$. This is impossible because $c'_1$ and $c'_2$ have degree 3. See Figure \ref{fig53} (a). \begin{figure}[h] \includegraphics[scale=1]{fig53.eps} \caption{$[A]= [0,4,2]$ and $[B] = [0,1,6]$} \label{fig53} \end{figure} Now consider the case that some degree 4 vertex in $A$, say $b_1$, is not adjacent to $b'$. Assume that $b_1$ is adjacent to $c'_1$, $c'_2$, $c'_3$ and $c'_4$. If $|V_3(b')| = 2$, then $|\widehat{E}_{b_1,b'}| \leq 8$. Thus we may assume that $b'$ is adjacent to $b_2$, $b_3$, $b_4$ and a degree 3 vertex, say $c_1$. We also assume that $c'_5$ is adjacent to $b_2$ because $c'_5$ has degree 3. If $b_2$ is adjacent to $c'_6$ or $V_Y(b_1,b_2)$ is not empty, then $|\widehat{E}_{b_1,b_2}| \leq 8$. Thus we may assume that $b_2$ is adjacent to $c'_1$ and $c'_2$, and $c'_1$ and $c'_2$ are adjacent to $b_3$ and $b_4$, respectively. These are the non-dashed edges in Figure \ref{fig53} (b). Now consider the graph $\widehat{G}_{b_1,b'}$. Since $|\widehat{E}_{b_1,b'}| \leq 9$, we can assume that $\widehat{G}_{b_1,b'}$ is isomorphic to $K_{3,3}$. Since the three vertices $b_3$, $b_4$ and $c'_5$ (big black dots in the figure) are adjacent to $b_2$ in $\widehat{G}_{b_1,b'}$, they're also adjacent to $c_2$ and $c'_6$ (big white dots) as shown in Figure \ref{fig53} (c).
Restore the graph $G$ by adding back the two vertices $b_1$ and $b'$ and their associated nine edges. Then $NV_3(b_2,b_3) + |V_4(b_2,b_3)| = 6$, so $|\widehat{E}_{b_2,b_3}| \leq 8$. \subsection{$([A], [B]) = ([0,1,6], [0,1,6])$} \hspace{1cm} If the degree 4 vertex, say $b$, in $A$ is not adjacent to the degree 4 vertex, say $b'$, in $B$, then $|\widehat{E}_{b,b'}| \leq 6$ because $NV_3(b,b') = 8$. Assume that $b$ is adjacent to $b'$, and let $e$ denote the edge connecting these two vertices. \begin{lemma}\label{4cycle} In this case, if $G$ is intrinsically knotted, then every 4-cycle contains the edge $e$. \end{lemma} \begin{proof} Suppose that there is a 4-cycle $H$ which does not contain $e$. Then, we may assume that $H$ does not contain $b$. Let $c$ be any vertex in $A$ such that either $b$ or $c$ is adjacent to some vertex of $H$ in $B$, other than $b'$. Since $|V_3(b,c)| = |V_Y(b,c)|$ and $|V_3(c)| + |V_4(b,c)| = 3$, $NV_3(b,c) + |V_4(b,c)| + |V_Y(b,c)| = (|V_3(b)| + |V_3(c)| - |V_3(b,c)|) + |V_4(b,c)| + |V_3(b,c)| = 6$. Therefore $|\widehat{E}_{b,c}| \leq 9$. Furthermore $\widehat{G}_{b,c}$ contains $H$ which is no longer a 4-cycle because at least one vertex of $H$ in $B$ has degree 2. So $\widehat{G}_{b,c}$ is not isomorphic to $K_{3,3}$. \end{proof} Now consider the subgraph $G \setminus \{e\}$ which consists of fourteen degree 3 vertices and has no 4-cycle by Lemma \ref{4cycle}. We name these vertices $c_i$'s and $c'_j$'s as in Figure \ref{fig54} (a). Assume that $c_1$ is adjacent to $c'_1$, $c'_2$ and $c'_3$, and $c'_1$ is adjacent to $c_2$ and $c_3$. Since there is no 4-cycle, we can also assume that $c_2$ is adjacent to $c'_4$ and $c'_5$, $c_3$ with $c'_6$ and $c'_7$, $c'_2$ with $c_4$ and $c_5$, and $c'_3$ with $c_6$ and $c_7$ as illustrated by the non-dashed edges in the figure. Without loss of generality, we may assume that $c_4$ is adjacent to $c'_4$ and $c'_6$, and then $c_5$ must be adjacent to $c'_5$ and $c'_7$. 
Similarly we may assume that $c_6$ is adjacent to $c'_4$, and then $c_6$ must be adjacent to $c'_7$, and $c_7$ to $c'_5$ and $c'_6$. Finally we get the Heawood graph $C_{14}$ as drawn in Figure \ref{fig54} (b). Note that $C_{14}$ is symmetric and any pair of vertices $c_i$ and $c'_j$ has distance either 1 or 3. Thus $G$ can be obtained by connecting two such vertices of distance 3 by the edge $e$. This graph is isomorphic to Cousin 89 of the $E_9+e$ family as drawn in Figure \ref{fig13}. \begin{figure}[h] \includegraphics[scale=1]{fig54.eps} \caption{The subgraph $G \setminus \{e\}$} \label{fig54} \end{figure}
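The adjacency worked out above can be checked mechanically. The following sketch (our own verification, with \texttt{('c', i)} and \texttt{('cp', j)} standing for $c_i$ and $c'_j$) confirms that the resulting graph is 3-regular and bipartite on 14 vertices with 21 edges, contains no 4-cycle, and that every pair $c_i$, $c'_j$ is within distance 3, as claimed for the Heawood graph.

```python
from itertools import combinations

# G \ {e}: the 21 edges derived above, written as pairs (i, j) meaning
# c_i is adjacent to c'_j.
edges = [(1, 1), (1, 2), (1, 3), (2, 1), (3, 1),
         (2, 4), (2, 5), (3, 6), (3, 7),
         (4, 2), (5, 2), (6, 3), (7, 3),
         (4, 4), (4, 6), (5, 5), (5, 7),
         (6, 4), (6, 7), (7, 5), (7, 6)]
nbrs = {('c', i): set() for i in range(1, 8)}
nbrs.update({('cp', j): set() for j in range(1, 8)})
for i, j in edges:
    nbrs[('c', i)].add(('cp', j))
    nbrs[('cp', j)].add(('c', i))

def has_4cycle(nbrs):
    # A 4-cycle in a bipartite graph means two vertices on the same side
    # sharing two common neighbours.
    return any(len(nbrs[u] & nbrs[v]) >= 2 for u, v in combinations(nbrs, 2))

def dist3_or_less(u, v):
    # Grow the ball of radius 3 around u and test membership of v.
    ball = {u}
    for _ in range(3):
        ball |= {w for x in ball for w in nbrs[x]}
    return v in ball
```

Since the graph is bipartite, a pair $c_i$, $c'_j$ at odd distance at most 3 is at distance exactly 1 or 3, matching the statement above.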
\section{Introduction} \label{sec:introduction} \vspace*{-0.3em} As in any subfield of machine learning, the existence of standardized benchmarks has played a crucial role in the progress we have observed over the past few years in meta-learning research. They make the evaluation of existing methods easier and fairer, which in turn serves as a reference point for the development of new meta-learning algorithms; this creates a virtuous circle, rooted in these well-defined suites of tasks. Unlike existing datasets in supervised learning though, such as MNIST \citep{lecun1998gradient} or ImageNet \citep{russakovsky2015imagenet}, the benchmarks in meta-learning consist of datasets of datasets. This adds a layer of complexity to the data pipeline, to the extent that a majority of meta-learning projects implement their own specific data-loading component adapted to their method. The lack of a standard at the input level creates variance in the mechanisms surrounding each meta-learning algorithm, which makes a fair comparison more challenging. Although the implementation might be different from one project to another, the process by which these datasets of datasets are created is generally the same across tasks. In this paper we introduce Torchmeta, a meta-learning library built on top of the PyTorch deep learning framework \citep{paszke2017automatic}, providing data-loaders for most of the standard datasets for few-shot classification and regression. Torchmeta uses the same interface for all the available benchmarks, making the transition between different datasets as seamless as possible. Inspired by previous efforts to design a unified interface between tasks, such as OpenAI Gym \citep{openaigym} in reinforcement learning, the goal of Torchmeta is to create a framework around which researchers can build their own meta-learning algorithms, rather than adapting the data pipeline to their methods.
This new abstraction promotes code reuse by decoupling meta-datasets from the algorithm itself. In addition to these data-loaders, Torchmeta also includes extensions of PyTorch to simplify the creation of models compatible with classic meta-learning algorithms that sometimes require higher-order differentiation \citep{maml17,finn2018learning,rusu2018meta,DBLP:journals/corr/abs-1801-08930}. This paper gives an overview of the features currently available in Torchmeta, and is organized as follows: Section~\ref{sec:data-loaders-few-shot-learning} gives a general presentation of the data-loaders available in the library; in Section~\ref{sec:meta-learning-modules}, we focus on an extension of PyTorch's modules called ``meta-modules'' designed specifically for meta-learning, and we conclude with a discussion in Section~\ref{sec:discussion}. \vspace*{-1em} \section{Data-loaders for few-shot learning} \label{sec:data-loaders-few-shot-learning} \vspace*{-0.5em} The library provides a collection of datasets corresponding to classic few-shot classification and regression problems from the meta-learning literature. The interface was created to support modularity between datasets, for both classification and regression, to simplify the process of evaluation on a full suite of benchmarks; we will detail this interface in the following sections. Moreover, the data-loaders from Torchmeta are fully compatible with standard data components of PyTorch, such as \texttt{Dataset} and \texttt{DataLoader}. Before going into the details of the library, we first briefly recall the problem setting. To compensate for the lack of data inherent in few-shot learning, meta-learning algorithms acquire some prior knowledge from a collection of datasets $\mathcal{D}_{\mathrm{meta}} = \{\mathcal{D}_{1}, \ldots, \mathcal{D}_{n}\}$, called the \emph{meta-training set}. 
In the context of few-shot learning, each element $\mathcal{D}_{i}$ contains only a few input/output pairs $(x, y)$, where $y$ depends on the nature of the problem. For instance, these datasets can contain examples of different tasks performed in the past. Torchmeta offers a solution to automate the creation of each dataset $\mathcal{D}_{i}$, with a minimal amount of problem-specific components. \vspace*{-0.5em} \subsection{Few-shot regression} \label{sec:few-shot-regression} \vspace*{-0.3em} Most few-shot regression problems in the literature are simple regression problems, where inputs are mapped to outputs through different functions and each function corresponds to a task. These functions are parametrized to allow variability between tasks, while preserving a constant ``theme'' across tasks. For example, these functions can be sine waves of the form $f_{i}(x) = a_{i}\sin(x + b_{i})$, with $a$ and $b$ varying in some range \citep{maml17}. In Torchmeta, the meta-training set inherits from an object called \texttt{MetaDataset}, and each dataset $\mathcal{D}_{i}$ ($i=1,\ldots, n$, with $n$ defined by the user) corresponds to a specific choice of parameters for the function, with all the parameters sampled once at the creation of the meta-training set. Once the parameters of the function are known, we can create the dataset by sampling inputs in a given range, and feeding them to the function. The library currently contains 3 toy problems: sine waves \citep{maml17}, harmonic functions \citep[i.e. the sum of two sine waves, ][]{lacoste2018uncertainty}, and sinusoid and lines \citep{finn2018probabilistic}. 
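The two-step process above (task parameters fixed once at creation time, inputs sampled per task) can be sketched in a few lines of plain Python. This is a hypothetical toy illustration, not Torchmeta's actual implementation; the function, parameter ranges, and the `make_sine_meta_dataset` helper name are chosen for this example only.

```python
import math
import random

def make_sine_meta_dataset(num_tasks, num_samples_per_task, seed=0):
    """Hypothetical sketch of a sine-wave meta-dataset (not Torchmeta code).

    Each task i is a function f_i(x) = a_i * sin(x + b_i); its dataset is
    built by sampling inputs in a fixed range and evaluating f_i.
    """
    rng = random.Random(seed)
    # Step 1: fix (a_i, b_i) for every task when the meta-dataset is created.
    params = [(rng.uniform(0.1, 5.0), rng.uniform(0.0, math.pi))
              for _ in range(num_tasks)]

    def get_task(i):
        a, b = params[i]  # same task index -> same function parameters
        # Step 2: sample inputs and feed them through the task's function.
        xs = [rng.uniform(-5.0, 5.0) for _ in range(num_samples_per_task)]
        return [(x, a * math.sin(x + b)) for x in xs]

    return get_task

tasks = make_sine_meta_dataset(num_tasks=100, num_samples_per_task=10)
dataset_0 = tasks(0)  # a small regression dataset D_0 for task 0
```

Note that re-requesting the same task index reuses the stored $(a_i, b_i)$, so the underlying function stays fixed even though new inputs are drawn.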
Below is an example of how to instantiate the meta-training set for the sine waves problem:\\[0.5em] \begin{adjustbox}{center} \begin{tikzpicture} \node[font=\footnotesize\ttfamily, text width=\linewidth, inner sep=5pt, fill=CadetBlue!5] at (0, 0) {torchmeta.toy.Sinusoid(num\_samples\_per\_task=\textcolor{OliveGreen}{10}, num\_tasks=\textcolor{OliveGreen}{1\_000\_000}, noise\_std=\textcolor{OliveGreen}{None})}; \end{tikzpicture} \end{adjustbox} \vspace*{-0.5em} \subsection{Few-shot classification} \label{sec:few-shot-classification} \vspace*{-0.3em} For few-shot classification problems, the creation of the datasets $\mathcal{D}_{i}$ usually follows two steps: first $N$ classes are sampled from a large collection of candidates (corresponding to $N$ in ``$N$-way classification''), and then $k$ examples are chosen per class (corresponding to $k$ in ``$k$-shot learning''). This two-step process is automated as part of an object called \texttt{CombinationMetaDataset}, inherited from \texttt{MetaDataset}, provided that the user specifies the large collection of class candidates, which is problem-specific. Moreover, to encourage reproducibility in meta-learning, every task is associated with a unique identifier (the $N$-tuple of class identifiers). Once the task has been chosen, the object returns a dataset $\mathcal{D}_{i}$ with all the examples from the corresponding set of classes. In Section~\ref{sec:train-test-split}, we will describe how $\mathcal{D}_{i}$ can then be further split into training and test datasets, as is common in meta-learning. The library currently contains 5 few-shot classification problems: Omniglot \citep{lake2015human,DBLP:journals/corr/abs-1902-03477}, Mini-ImageNet \citep{DBLP:journals/corr/VinyalsBLKW16,ravi2016optimization}, Tiered-ImageNet \citep{ren2018meta}, CIFAR-FS \citep{bertinetto2018meta}, and Fewshot-CIFAR100 \citep{oreshkin2018tadam}. 
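The two-step creation of an $N$-way $k$-shot task can be sketched as follows. This is a hypothetical illustration of the mechanism, not Torchmeta's internal code; the `sample_few_shot_task` helper and its argument names are invented for this example.

```python
import random

def sample_few_shot_task(class_to_examples, num_ways, num_shots, seed=None):
    """Hypothetical sketch of N-way k-shot task creation (not Torchmeta code)."""
    rng = random.Random(seed)
    # Step 1: sample N classes ("N-way"); sorting the drawn classes makes
    # the resulting tuple a unique, reproducible task identifier.
    classes = tuple(sorted(rng.sample(sorted(class_to_examples), num_ways)))
    # Step 2: sample k examples per selected class ("k-shot").
    task = {c: rng.sample(class_to_examples[c], num_shots) for c in classes}
    return classes, task

# A toy pool of 64 candidate classes with 20 examples each.
pool = {f"class_{i}": list(range(20)) for i in range(64)}
task_id, task = sample_few_shot_task(pool, num_ways=5, num_shots=1, seed=0)
```

The sorted tuple of class names plays the role of the unique task identifier mentioned above, which helps reproducibility: the same identifier always denotes the same set of classes.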
Below is an example of how to instantiate the meta-training set for 5-way Mini-ImageNet:\\[0.5em] \begin{adjustbox}{center} \begin{tikzpicture} \node[font=\footnotesize\ttfamily, text width=\linewidth, inner sep=5pt, fill=CadetBlue!5, align=left] at (0, 0) {torchmeta.datasets.MiniImagenet(\textcolor{Brown}{"data"}, num\_classes\_per\_task=\textcolor{OliveGreen}{5}, meta\_train=\textcolor{OliveGreen}{True},\\\hspace*{16em}download=\textcolor{OliveGreen}{True})}; \end{tikzpicture} \end{adjustbox} Torchmeta also includes helpful functions to augment the pool of class candidates with variants, such as rotated images \citep{santoro2016meta}. \vspace*{-0.2em} \subsection{Training \& test datasets split} \label{sec:train-test-split} \vspace*{-0.2em} In meta-learning, it is common to separate each dataset $\mathcal{D}_{i}$ into two parts: a training set (or support set) to adapt the model to the task at hand, and a test set (or query set) for evaluation and meta-optimization. It is important, though, to ensure that these two parts do not overlap: while the task remains the same, no example can be in both the training and test sets. To enforce this, Torchmeta introduces a wrapper over the datasets called a \texttt{Splitter} that is responsible for creating the training and test datasets, as well as optionally shuffling the data. 
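The splitting logic can be illustrated with a minimal sketch (hypothetical code, not the actual `ClassSplitter` implementation; the `class_splitter` name and its signature are invented here): within each class, disjoint slices guarantee that no example appears in both parts.

```python
import random

def class_splitter(task, num_train_per_class, num_test_per_class,
                   shuffle=True, seed=None):
    """Hypothetical sketch of a per-class support/query split (not Torchmeta code)."""
    rng = random.Random(seed)
    train, test = {}, {}
    for label, examples in task.items():
        examples = list(examples)
        if shuffle:
            rng.shuffle(examples)
        # Disjoint slices: no example can end up in both parts.
        train[label] = examples[:num_train_per_class]
        test[label] = examples[num_train_per_class:
                               num_train_per_class + num_test_per_class]
    return train, test

task = {"cat": list(range(20)), "dog": list(range(20))}
support, query = class_splitter(task, num_train_per_class=1,
                                num_test_per_class=15, seed=0)
```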
Here is an example of how to instantiate the meta-training set of a 5-way 1-shot classification problem based on Mini-Imagenet:\\[0.5em] \begin{adjustbox}{center} \begin{tikzpicture} \node[font=\footnotesize\ttfamily, text width=\linewidth, inner sep=5pt, fill=CadetBlue!5, align=left] at (0, 0) {dataset = torchmeta.datasets.MiniImagenet(\textcolor{Brown}{"data"}, num\_classes\_per\_task=\textcolor{OliveGreen}{5},\\\hspace*{20.8em}meta\_train=\textcolor{OliveGreen}{True}, download=\textcolor{OliveGreen}{True})\\ dataset = torchmeta.transforms.ClassSplitter(dataset, num\_train\_per\_class=\textcolor{OliveGreen}{1},\\\hspace*{22.3em}num\_test\_per\_class=\textcolor{OliveGreen}{15}, shuffle=\textcolor{OliveGreen}{True})}; \end{tikzpicture} \end{adjustbox} In addition to the meta-training set, most benchmarks also provide a meta-test set for the overall evaluation of the meta-learning algorithm (and possibly a meta-validation set as well). These different meta-datasets can be selected when the \texttt{MetaDataset} object is created, with \texttt{meta\_test=True} (or \texttt{meta\_val=True}) instead of \texttt{meta\_train=True}. \vspace*{-0.2em} \subsection{Meta Data-loaders} \label{sec:meta-dataloaders} \vspace*{-0.2em} The objects presented in Sections~\ref{sec:few-shot-regression} \& \ref{sec:few-shot-classification} can be iterated over to generate datasets from the meta-training set; these datasets are PyTorch \texttt{Dataset} objects, and as such can be included as part of any standard data pipeline (combined with \texttt{DataLoader}). Nonetheless, most meta-learning algorithms operate better on batches of tasks. Similar to how examples are batched together with \texttt{DataLoader} in PyTorch, Torchmeta exposes a \texttt{MetaDataLoader} that can produce batches of tasks when iterated over. In particular, such a meta data-loader is able to output a large tensor containing all the examples from the different tasks in the batch. 
For example:\\[0.5em] \begin{adjustbox}{center} \begin{tikzpicture} \node[font=\footnotesize\ttfamily, text width=\linewidth, inner sep=5pt, fill=CadetBlue!5, align=left] at (0, 0) {\textcolor{Gray}{\# Helper function, equivalent to Section~\ref{sec:train-test-split}}\\dataset = torchmeta.datasets.helpers.miniimagenet(\textcolor{Brown}{"data"}, shots=\textcolor{OliveGreen}{1}, ways=\textcolor{OliveGreen}{5},\\\hspace*{24.8em}meta\_train=\textcolor{OliveGreen}{True}, download=\textcolor{OliveGreen}{True})\\ dataloader = torchmeta.utils.data.BatchMetaDataLoader(dataset, batch\_size=\textcolor{OliveGreen}{16})\\[1em]\textcolor{OliveGreen}{for} batch \textcolor{OliveGreen}{in} dataloader:\\\hspace*{1.9em}train\_inputs, train\_labels = batch[\textcolor{Brown}{"train"}] \textcolor{Gray}{\# Size (16, 5, 3, 84, 84) \& (16, 5)}}; \end{tikzpicture} \end{adjustbox} \vspace*{-0.5em} \section{Meta-learning modules} \label{sec:meta-learning-modules} \vspace*{-0.3em} Models in PyTorch are created from basic components called \emph{modules}. Each basic module, equivalent to a layer in a neural network, contains both the computational graph of that layer and its parameters. The modules treat their parameters as an integral part of their computational graph; in standard supervised learning, this is sufficient to train a model with backpropagation. However, some meta-learning algorithms need to backpropagate through an update of the parameters \citep[like a gradient update,][]{maml17} for the meta-optimization (or the ``outer-loop''), hence involving higher-order differentiation. Although higher-order differentiation is available in PyTorch as part of its automatic differentiation module \citep{paszke2017automatic}, replacing one parameter of a basic module with a full computational graph (i.e. the update of the parameter), without altering the way gradients flow, is not obvious. 
Backpropagation through an update of the parameters is a key ingredient of gradient-based meta-learning methods \citep{maml17,finn2018learning,DBLP:journals/corr/abs-1801-08930,DBLP:journals/corr/abs-1904-03758}, and various hybrid methods \citep{rusu2018meta,DBLP:journals/corr/abs-1810-03642}. It is therefore critical to adapt the existing modules in PyTorch so they can handle arbitrary computational graphs as a substitute for these parameters. The approach taken by Torchmeta is to extend these modules, and leave an option to provide new parameters as an additional input. These new objects are called \texttt{MetaModule}, and their default behaviour (i.e. without any extra parameter specified) is equivalent to their PyTorch counterpart. Otherwise, if extra parameters (such as the result of one step of gradient descent) are specified, then the \texttt{MetaModule} treats them as part of the computational graph, and backpropagation works as expected. Figure~\ref{fig:metalinear} shows how the extension of the \texttt{Linear} module called \texttt{MetaLinear} works, with and without additional parameters, and the impact on the gradients. The figure on the left shows the instantiation of the meta-module as a container for the parameters $W$ \& $b$, and the computational graph with placeholders for the weight and bias parameters. The figure in the middle shows the default behaviour of the \texttt{MetaLinear} meta-module, where the placeholders are substituted with $W$ \& $b$: this is equivalent to PyTorch's \texttt{Linear} module. Finally, the figure on the right shows how these placeholders can be filled with a complete computational graph, like one step of gradient descent \citep{maml17}. In this latter case, the gradient of $\mathcal{L}_{\mathrm{outer}}$ with respect to $W$, necessary in the outer-loop update, can correctly flow all the way to the parameter $W$. 
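The gradient flow in the right-hand panel of Figure~\ref{fig:metalinear} can be illustrated with a one-parameter toy model in plain Python (a hypothetical example, independent of PyTorch and of Torchmeta's code): the outer gradient must include the factor $\mathrm{d}w'/\mathrm{d}w$ coming from the inner update, which is exactly the term that is lost if the updated parameter is treated as a constant.

```python
def outer_loss(w, x_tr, y_tr, x_te, y_te, alpha):
    """Toy one-parameter model y = w * x: evaluate the outer loss
    after one inner gradient step on w (hypothetical example)."""
    grad_inner = 2.0 * x_tr * (w * x_tr - y_tr)   # dL_inner/dw
    w_prime = w - alpha * grad_inner              # inner update w' = w - a*grad
    return (w_prime * x_te - y_te) ** 2           # L_outer evaluated at w'

def outer_grad(w, x_tr, y_tr, x_te, y_te, alpha):
    """dL_outer/dw by the chain rule *through* the update w'(w).

    The factor (1 - 2*alpha*x_tr**2) is dw'/dw, i.e. the second-order
    term that the MetaLinear placeholders expose to autodiff."""
    grad_inner = 2.0 * x_tr * (w * x_tr - y_tr)
    w_prime = w - alpha * grad_inner
    dwp_dw = 1.0 - 2.0 * alpha * x_tr ** 2
    return 2.0 * x_te * (w_prime * x_te - y_te) * dwp_dw

# Check the analytic gradient against a central finite difference.
args = (0.5, 1.3, 2.0, -0.7, 1.1, 0.01)  # w, x_tr, y_tr, x_te, y_te, alpha
eps = 1e-6
numeric = (outer_loss(args[0] + eps, *args[1:])
           - outer_loss(args[0] - eps, *args[1:])) / (2 * eps)
assert abs(outer_grad(*args) - numeric) < 1e-6
```

Dropping the `dwp_dw` factor corresponds to a first-order approximation of the meta-gradient; the finite-difference check confirms that the full chain rule through the update is the correct derivative.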
\begin{figure}[t] \centering \begin{tikzpicture} \node[font=\scriptsize, align=center] (code) at (0, 0) {\texttt{model = MetaLinear(in\_features,}\\\texttt{out\_features, bias=\textcolor{OliveGreen}{True})}}; \node[above=3.1em of code] (figure) {\begin{tikzpicture} \begin{scope}[local bounding box=metalinear] \node[draw=black, circle, inner sep=0, thick] (times) at (0, 0) {$\times$}; \node[draw=black, circle, inner sep=0, thick, above of=times] (plus) {$+$}; \node[inner sep=0, minimum width=1pt, fill=red, fill opacity=0, minimum height=6em, anchor=center] at ($(times)!0.5!(plus)$) {}; \node[draw=black, rectangle, thick, densely dashed, minimum height=2em, minimum width=2.5em, right=0.5 of times] (W) {}; \node[draw=black, rectangle, thick, densely dashed, minimum height=2em, minimum width=2.5em, left=0.5 of plus] (b) {}; \draw[-{latex}, thick] (times) -- (plus); \draw[-{latex}, thick] (W) -- (times); \draw[-{latex}, thick] (b) -- (plus); \node[font=\tiny, inner sep=0, outer sep=0, anchor=south west] at (current bounding box.{south west}) { \begin{tikzpicture}{every node/.style={inner sep=0, outer sep=0}} \node[anchor=south west] at (0, 0) {\textbf{MetaLinear}}; \end{tikzpicture} }; \node[font=\scriptsize, inner sep=0, outer sep=0, anchor=north east] at (current bounding box.{north east}) { \begin{tikzpicture} \node[draw=Orange, fill=Orange!20, rectangle, thick, anchor=north east, minimum size=2em] (W_2) at (0, 0) {$W$}; \node[draw=Green, fill=Green!20, rectangle, thick, left=5pt of W_2, minimum size=2em] (b_2) {$b$}; \end{tikzpicture} }; \end{scope} \draw[thick] ($(metalinear.{north west})+(-4pt,4pt)$) rectangle ($(metalinear.{south east})+(4pt,-4pt)$); \node[inner sep=0, outer sep=0, minimum size=5pt, circle, fill=white, thick, draw=black, anchor=center, yshift=-1pt] (output-interface) at ($(metalinear.north)+(0,5pt)$) {}; \node[inner sep=0, outer sep=0, minimum size=5pt, circle, fill=white, thick, draw=black, anchor=center, yshift=1pt] (input-interface) at 
($(metalinear.south)+(0,-5pt)$) {}; \draw[-{latex}, thick] (input-interface) -- (times); \draw[-{latex}, thick] (plus) -- (output-interface); \end{tikzpicture}}; \end{tikzpicture}\hfill \begin{tikzpicture} \node[font=\scriptsize, align=center] (code) at (0, 0) {\texttt{$\tilde{y}_{\mathrm{train}}$ = model($x_{\mathrm{train}}$, params=\textcolor{OliveGreen}{None})}\\\texttt{\phantom{p}}}; \node[above=0.9em of code] (figure) {\begin{tikzpicture} \begin{scope}[local bounding box=metalinear] \node[draw=black, circle, inner sep=0, thick] (times) at (0, 0) {$\times$}; \node[draw=black, circle, inner sep=0, thick, above of=times] (plus) {$+$}; \node[inner sep=0, minimum width=1pt, fill=red, fill opacity=0, minimum height=6em, anchor=center] at ($(times)!0.5!(plus)$) {}; \node[draw=Orange, fill=Orange!20, rectangle, thick, minimum height=2em, minimum width=2.5em, right=0.5 of times] (W) {$W$}; \node[draw=Green, fill=Green!20, rectangle, thick, minimum height=2em, minimum width=2.5em, left=0.5 of plus] (b) {$b$}; \draw[-{latex}, thick] (times) -- (plus); \draw[-{latex}, thick] (W) -- (times); \draw[-{latex}, thick] (b) -- (plus); \node[font=\tiny, inner sep=0, outer sep=0, anchor=south west] at (current bounding box.{south west}) { \begin{tikzpicture}{every node/.style={inner sep=0, outer sep=0}} \node[anchor=south west] (params) at (0, 0) {\texttt{params:} N/A}; \node[anchor=south west, yshift=3pt] at (params.{north west}) {\textbf{MetaLinear}}; \end{tikzpicture} }; \end{scope} \draw[thick] ($(metalinear.{north west})+(-4pt,4pt)$) rectangle ($(metalinear.{south east})+(4pt,-4pt)$); \node[inner sep=0, outer sep=0, minimum size=5pt, circle, fill=white, thick, draw=black, anchor=center, yshift=-1pt] (output-interface) at ($(metalinear.north)+(0,5pt)$) {}; \node[inner sep=0, outer sep=0, minimum size=5pt, circle, fill=white, thick, draw=black, anchor=center, yshift=1pt] (input-interface) at ($(metalinear.south)+(0,-5pt)$) {}; \draw[-{latex}, thick] (input-interface) -- 
(times); \draw[-{latex}, thick] (plus) -- (output-interface); \node[above of=output-interface, draw=NavyBlue, fill=NavyBlue!20, thick, minimum height=2em, minimum width=2.5em] (loss) {$\mathcal{L}_{\mathrm{inner}}$}; \draw[-{latex}, thick] (output-interface) -- node[midway, font=\small, anchor=west] (y) {$\tilde{y}_{\mathrm{train}}$} (loss); \node[below=1.3em of input-interface, font=\small, inner sep=2pt, outer sep=0] (x) {$x_{\mathrm{train}}$}; \draw[-{latex}, thick] (x) -- (input-interface); \draw[-{latex}, thick, draw=Orange, rounded corners=3pt, densely dashed] ($(loss.south)+(8pt,-1.4em)$) |- ($(W.west)+(0,6pt)$); \draw[-{latex}, thick, draw=Green, rounded corners=3pt, densely dashed] ($(loss.south)+(-8pt,-0.4em)$) |- ($(b.east)+(0,6pt)$); \end{tikzpicture}}; \end{tikzpicture}\hfill \begin{tikzpicture} \node[font=\scriptsize, align=center] (code) at (0, 0) {$W' = W - \alpha \nabla_{W}\mathcal{L}\ $ \& $\ b' = b - \alpha \nabla_{b}\mathcal{L}$\\\texttt{$\tilde{y}_{\mathrm{test}}$ = model($x_{\mathrm{test}}$, params=$\{W', b'\}$)}}; \node[above=0em of code] (figure) {\begin{tikzpicture} \begin{scope}[local bounding box=metalinear] \node[draw=black, circle, inner sep=0, thick] (times) at (0, 0) {$\times$}; \node[draw=black, circle, inner sep=0, thick, above of=times] (plus) {$+$}; \node[inner sep=0, minimum width=1pt, fill=red, fill opacity=0, minimum height=4em, anchor=center] at (0, 0) {}; \node[draw=Orange, rectangle, thick, pattern=north west lines, pattern color=Orange!30, minimum height=2em, minimum width=2.5em, above right=-1.1em and 0.5 of times] (W) { \begin{tikzpicture}[scale=0.5, every node/.style={transform shape, line width=0.5pt, solid, inner sep=0, minimum size=0}] \node[draw=black, ellipse, fill=white, minimum height=1.2em] (W_times) at (0, 0) {$\times \alpha$}; \node[above=1.8em of W_times, draw=NavyBlue, fill=NavyBlue!20, minimum width=2.5em, minimum height=3em] (W_dLdW) {$\displaystyle\frac{\partial \mathcal{L}}{\partial W}$}; \node[left=1.5em 
of W_times, draw=black, fill=white, circle] (W_sub) {$-$}; \node[above=7em of W_sub, draw=Orange, fill=Orange!20, minimum size=2em] (W_W) {$W$}; \coordinate (W_b_dLdW) at ($(W_dLdW.north)+(0.8em,0)$); \node[draw=Green, fill=Green!20, minimum size=2em, anchor=center] (W_b) at (W_b_dLdW |- W_W.center) {$b$}; \draw[-{latex}, solid] (W_dLdW) -- (W_times); \draw[-{latex}, solid] (W_times) -- (W_sub); \draw[-{latex}, solid] (W_W) -- (W_sub); \draw[-{latex}, solid] (W_W) -| ($(W_dLdW.north)+(-0.8em,0)$); \draw[-{latex}, solid] (W_b) -- ($(W_dLdW.north)+(0.8em,0)$); \draw[-{latex}, densely dashed, draw=Orange, thick, rounded corners=3pt] ($(W_sub.north)+(-1.2em,0)$) -| node[pos=1] (W_head) {} ($(W_sub.north)+(-0.8em,6.5em)$); \draw[-{latex}, densely dashed, draw=Orange, thick, rounded corners=3pt] ($(W_sub.north)+(0.8em,0)$) -| ($(W_head)+(2em,0)$); \end{tikzpicture} }; \node[draw=Green, rectangle, thick, pattern=north west lines, pattern color=Green!30, minimum height=2em, minimum width=2.5em, below left=-1.3em and 0.5 of plus] (b) { \begin{tikzpicture}[scale=0.5, every node/.style={transform shape, line width=0.5pt, solid, inner sep=0, minimum size=0}] \node[draw=black, ellipse, fill=white, minimum height=1.2em] (b_times) at (0, 0) {$\times \alpha$}; \node[below=1.8em of b_times, draw=NavyBlue, fill=NavyBlue!20, minimum width=2.5em, minimum height=3em] (b_dLdb) {$\displaystyle\frac{\partial \mathcal{L}}{\partial b}$}; \node[right=1.5em of b_times, draw=black, fill=white, circle] (b_sub) {$-$}; \node[below=7em of b_sub, draw=Green, fill=Green!20, minimum size=2em] (b_b) {$b$}; \coordinate (b_W_dLdb) at ($(b_dLdb.north)+(-0.8em,0)$); \node[draw=Orange, fill=Orange!20, minimum size=2em, anchor=center] (b_W) at (b_W_dLdb |- b_b.center) {$W$}; \draw[-{latex}, solid] (b_dLdb) -- (b_times); \draw[-{latex}, solid] (b_times) -- (b_sub); \draw[-{latex}, solid] (b_b) -- (b_sub); \draw[-{latex}, solid] (b_b) -| ($(b_dLdb.south)+(0.8em,0)$); \draw[-{latex}, solid] (b_W) -- 
($(b_dLdb.south)+(-0.8em,0)$); \draw[-{latex}, densely dashed, draw=Orange, thick, rounded corners=3pt] ($(b_sub.north)+(1.2em,0.5em)$) -| ($(b_W.{north west})+(0,0.5em)$); \end{tikzpicture} }; \draw[-{latex}, thick] (times) -- (plus); \draw[-{latex}, thick] (W.west |- times) -- (times); \draw[-{latex}, thick] (b.east |- plus) -- (plus); \draw[-, thin] (W.west |- times) -- ++(0.75em, 0); \draw[-, thin] (b.east |- plus) -- ++(-0.75em, 0); \node[font=\tiny, inner sep=0, outer sep=0, anchor=south east] at (current bounding box.{south east}) { \begin{tikzpicture}{every node/.style={inner sep=0, outer sep=0}} \node[anchor=south east] (params) at (0, 0) {\texttt{params:} \tikz[baseline=0.7ex, scale=0.6]{\node[draw=Orange, pattern=north west lines, pattern color=Orange!30, rectangle, transform shape, minimum height=2em, minimum width=2.5em, line width=0.5pt] {$W'$};} \tikz[baseline=0.7ex, scale=0.6]{\node[draw=Green, pattern=north west lines, pattern color=Green!30, rectangle, transform shape, minimum height=2em, minimum width=2.5em, line width=0.5pt] {$b'$};}}; \node[anchor=south east, yshift=3pt] at (params.{north east}) {\textbf{MetaLinear}}; \end{tikzpicture} }; \end{scope} \draw[thick] ($(metalinear.{north west})+(-4pt,4pt)$) rectangle ($(metalinear.{south east})+(4pt,-4pt)$); \node[inner sep=0, outer sep=0, minimum size=5pt, circle, fill=white, thick, draw=black, anchor=center, yshift=-1pt] (output-interface) at ($(metalinear.north)+(0,5pt)$) {}; \node[inner sep=0, outer sep=0, minimum size=5pt, circle, fill=white, thick, draw=black, anchor=center, yshift=1pt] (input-interface) at ($(metalinear.south)+(0,-5pt)$) {}; \draw[-{latex}, thick] (input-interface) -- (times); \draw[-{latex}, thick] (plus) -- (output-interface); \node[above of=output-interface, draw=NavyBlue, fill=NavyBlue!20, thick, minimum height=2em, minimum width=2.5em] (loss) {$\mathcal{L}_{\mathrm{outer}}$}; \draw[-{latex}, thick] (output-interface) -- node[midway, font=\small, anchor=west] (y) 
{$\tilde{y}_{\mathrm{test}}$} (loss); \node[below=1.3em of input-interface, font=\small, inner sep=2pt, outer sep=0] (x) {$x_{\mathrm{test}}$}; \draw[-{latex}, thick] (x) -- (input-interface); \draw[-{latex}, thick, draw=Orange, rounded corners=3pt, densely dashed] ($(loss.south)+(8pt,-1.4em)$) |- ($(W.west)+(0,-1.7em)$); \draw[-{latex}, thick, draw=Orange, rounded corners=3pt, densely dashed] ($(loss.south)+(-8pt,-0.4em)$) |- ($(b.{north east})+(0,-3pt)$); \end{tikzpicture}}; \end{tikzpicture} \caption{Illustration of the functionality of the \texttt{MetaLinear} meta-module, the extension of the \texttt{Linear} module. Left: Instantiation of a \texttt{MetaLinear} meta-module. Middle: Default behaviour, equivalent to \texttt{Linear}. Right: Behaviour with extra parameters \citep[a one-step gradient update,][]{maml17}. Gradients are represented as dashed arrows, in orange for $\partial/\partial W$ and green for $\partial/\partial b$.} \label{fig:metalinear} \end{figure} \vspace*{-0.3em} \section{Discussion} \label{sec:discussion} \vspace*{-0.3em} Reproducibility of data pipelines is challenging. It is even more challenging because some early works, even though they were evaluated on benchmarks that have now become classic, did not disclose the set of classes available for meta-training and meta-test (and possibly meta-validation) in few-shot classification. For example, while the Mini-ImageNet dataset was introduced in \citep{DBLP:journals/corr/VinyalsBLKW16}, the split used in \citep{ravi2016optimization} is now widely accepted in the community as the official split. The advantage of a library like Torchmeta is to standardize these benchmarks to avoid any confusion. The other objective of Torchmeta is to make meta-learning accessible to a larger community. 
We hope that, similar to how OpenAI Gym \citep{openaigym} helped the progress of reinforcement learning by giving access to multiple environments under a unified interface, Torchmeta can have an equal impact on meta-learning research. The full compatibility of the library with both PyTorch and Torchvision, PyTorch's computer vision library, simplifies its integration into existing projects. Even though Torchmeta already features a number of datasets for both few-shot regression and classification, and covers most of the standard benchmarks in the meta-learning literature, one notable missing dataset in the current version of the library is Meta-Dataset \citep{triantafillou2019meta}. Meta-Dataset is a unique and more complex few-shot classification problem, with a varying number of classes per task. While it could fit the proposed abstraction, Meta-Dataset requires a long initial processing phase, which would make the automatic download and processing feature of the library impractical. Therefore its integration is left as future work. In the meantime, we believe that Torchmeta can provide a structure for the creation of better benchmarks in the future, and is a crucial step forward for reproducible research in meta-learning. \section{Acknowledgements} \label{sec:acknowledgements} We would like to thank the students at Mila who tested the library, and provided valuable feedback during development. Tristan Deleu is supported by the Antidote scholarship from Druide Informatique. \bibliographystyle{apalike}
\section{Introduction} The High Energy Stereoscopic System (H.E.S.S.) consists of four imaging atmospheric Cherenkov telescopes situated in the Khomas Highland of Namibia \citep{hess_define}. The H.E.S.S. Collaboration has been surveying the Galactic plane for new very high energy (VHE, $>$100\,GeV) gamma-ray sources. Recently, \cite{hess_disc} reported the discovery of an unresolved VHE gamma-ray source close to the Galactic plane, \object{HESS J1943+213}. A search for multi-wavelength counterparts revealed the presence of a hard X-ray source, observed by {\it INTEGRAL}, \object{IGR J19443+2117}, in the vicinity of the H.E.S.S. source. \cite{Landi} analyzed the data of the X-ray telescope onboard the {\it Swift} satellite to search for soft X-ray counterparts of three {\it INTEGRAL} sources, among them IGR J19443+2117. They concluded that they found firm soft X-ray localization of the source (\object{SWIFT J1943.5+2120}). Later, \cite{Tomsick} conducted {\it Chandra} X-ray observations of several {\it INTEGRAL} sources including IGR J19443+2117 to localize and measure their soft X-ray spectra. They concluded that IGR J19443+2117 is associated with the {\it Chandra} source \object{CXOU J194356.2+211823} and that the probability of spurious association is only $0.39$\,\%. According to the precise {\it Chandra} measurement, the X-ray source is located $23\arcsec$ away from HESS\,J1943+213 but still within the H.E.S.S. error circle, leading \cite{hess_disc} to identify the X-ray source with the H.E.S.S. source. All the proposed counterparts of HESS\,J1943+213 discussed by \cite{hess_disc} are located within the 68\,\% best-fit source position confidence level contour of HESS\,J1943+213 \citep[see Fig.~6 in][]{hess_disc}. In a search for a radio counterpart, a possible source was found in the U.S. National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey (NVSS). 
The source, \object{NVSS J194356+211826} \citep{nvss}, is located $24\farcs7$ away from the H.E.S.S. position (still within the H.E.S.S. error circle) and $3\farcs5$ away from the {\it Chandra} position. The {\it Chandra} source is outside the NVSS error circle. \cite{hess_disc} discuss possible origins for HESS\,J1943+213, proposing that it can be either a gamma-ray binary, a pulsar wind nebula (PWN), or an active galactic nucleus (AGN). They conclude that observational facts favour the AGN interpretation, and suggest that HESS\,J1943+213 is an extreme, high-frequency peaked BL Lacertae object. To better understand the nature of the VHE source, we conducted exploratory Very Long Baseline Interferometry (VLBI) continuum observations of \object{NVSS J194356+211826} (hereafter J1943+2118) with the European VLBI Network (EVN) at 1.6~GHz. These observations allow us to spatially resolve the radio source and to obtain its positional information with a precision of a few milli-arcseconds (mas). The high angular resolution study was complemented with the analysis of 1.4-GHz VLA radio continuum and neutral hydrogen emission and absorption data in an extended field around the VHE source. We describe the observations and data reduction in Sect.~\ref{obs}, and discuss our findings in Sect.~\ref{res} of this paper. Our conclusions are summarised in Sect.~\ref{concl}. \section{Observations and data reduction} \label{obs} The exploratory EVN observation of J1943+2118 took place on 2011 May 18. At a recording rate of up to 1024~Mbit~s$^{-1}$, seven antennas participated in this e-VLBI experiment: Effelsberg (Germany), Jodrell Bank Lovell Telescope (UK), Medicina (Italy), Onsala (Sweden), Toru\'n (Poland), Hartebeesthoek (South Africa), and the phased array of the Westerbork Synthesis Radio Telescope (WSRT, The Netherlands). 
In an e-VLBI experiment \citep{Szomoru08}, the signals received at the remote radio telescopes are streamed over optical fibre networks directly to the central data processor for real-time correlation. The correlation took place at the EVN data processor in the Joint Institute for VLBI in Europe (JIVE), Dwingeloo, The Netherlands, with 2~s integration time. The observations lasted for 2 hours. Eight intermediate frequency channels (IFs) were used in both right and left circular polarisations. The total bandwidth was 128 MHz per polarisation. The target source was observed in phase-reference mode to obtain precise relative positional information. It was crucial for strengthening the identification of the source, since there was a significant difference between the NVSS peak position and the {\it Chandra} position. The phase-reference calibrator source \object{J1946+2300} is separated from the NVSS radio source by $1\fdg77$ in the sky. Its coordinates in the current 2nd realisation of the International Celestial Reference Frame (ICRF2) are right ascension $\alpha=19^\mathrm{h}46^\mathrm{m} 6\fs25140484$ and declination $\delta=+23\degr 0\arcmin 4\farcs4144890$ \citep{Fey09}. The target--reference cycles of $\sim$5\,min allowed us to spend $\sim$3.5\,min on the target source in each cycle, thus providing almost 1.3\,h total integration time on J1943+2118. The source position turned out to be offset from the phase centre position (taken from the NVSS catalogue) by $\sim$$4\arcsec$, but still within the undistorted field of view of the EVN. The NRAO Astronomical Image Processing System (AIPS) was used for the data calibration in the standard way. We refer to \cite{frey2008} for the details of the data reduction and imaging. The calibrated data were exported to the Caltech Difmap package for imaging. Phase self-calibration was only performed at the most sensitive antennas (Effelsberg, Jodrell Bank, WSRT). No amplitude self-calibration was applied. 
Finally, the longest baselines (from European telescopes to Hartebeesthoek) were excluded from the imaging because the signal was barely above the noise level due to the resolved nature of the source. The resulting image of J1943+2118 is displayed in Fig.~\ref{ourEVN}. We also analysed archival VLA continuum data (project AH196, observing date 1985 September 30). These observations covered the region around our target source at 1.4~GHz, in the C configuration of the array, thus improving the angular resolution by a factor of three compared to the NVSS survey, which was performed with the most compact D configuration of the VLA. The archival experiment AH196 included five different pointings covering a region of about $2\fdg4$$\times$$1\degr$ around our source of interest. The calibration was performed in AIPS, using 3C~48 as the primary flux density calibrator. We used Difmap to obtain the image shown in Fig.~\ref{archVLA}. We also investigated the neutral hydrogen (HI) radio emission in a large field around the VHE source. To carry out this study, we made use of the data from the VLA Galactic Plane Survey \citep[VGPS,][]{VLAGPS}. \section{Results and discussion} \label{res} \begin{figure} \centering \includegraphics*[width=8cm]{HESSJ1943_ourEVN.eps} \caption{1.6-GHz EVN image of J1943+2118. The lowest contours are drawn at $\pm0.6$\,mJy/beam. The positive contour levels increase by a factor of 2. The peak brightness is $25.3$\,mJy/beam. The Gaussian restoring beam shown in the lower-left corner is $43.9$\,mas $\times$ $28.5$\,mas (full width at half maximum, FWHM) with major axis position angle $40\fdg3$.} \label{ourEVN} \end{figure} \begin{figure} \centering \includegraphics*[width=8cm]{hess_vla_pos_ver2.eps} \caption{1.4-GHz VLA C-configuration image of J1943+2118. The peak brightness is $60$\,mJy/beam. The lowest contours are drawn at $\pm 1.2$\,mJy/beam and the positive contour levels increase by a factor of 2. 
The Gaussian restoring beam shown in the lower-right corner is $17\farcs8 \times 15\farcs1$ with major axis position angle $60\fdg1$. Label number 1 indicates the TeV source position and its 90\,\% error measured by H.E.S.S. The NVSS source position and the position of the X-ray counterpart are labelled as number 2 and number 3, respectively. The sizes of the symbols represent the errors of the corresponding position measurements. The {\it Chandra} position coincides with the position of the radio source derived from our phase-referencing EVN observation. (The EVN position is much more accurate than the X-ray one, therefore it cannot be displayed separately in this figure.)} \label{archVLA} \end{figure} \begin{figure} \centering \includegraphics*[width=8cm]{large_cont.eps} \caption{Radio continuum emission at 1.4~GHz as taken from the VGPS \citep{VLAGPS}. The white cross indicates the location of J1943+2118. Note that this map is presented in galactic coordinates.} \label{vgps_cont} \end{figure} \subsection{Radio continuum: no blazar-like compact emission} The phase-referenced exploratory EVN observation provided accurate equatorial coordinates for J1943+2118: $\alpha=19^\mathrm{h} 43^\mathrm{m} 56\fs2372 \pm 0\fs0001$ and $\delta=21\degr 18\arcmin 23\farcs402 \pm 0\farcs002$. This position agrees well, within uncertainties, with the coordinates of CXOU\,J194356.2+211823, the X-ray source proposed to be the counterpart of the H.E.S.S. point source \citep{hess_disc}. Therefore we can confirm that J1943+2118 is indeed the radio counterpart of CXOU\,J194356.2+211823 and consequently very likely to be associated with the VHE emission detected by H.E.S.S. It has to be noted that the EVN detection is offset by $3\farcs75$ from the position given in the NVSS catalogue. We used the Difmap package to fit a circular Gaussian brightness distribution model component to the VLBI visibility data. 
The feature in our EVN image can be well described with a component of 31\,mJy flux density and 15.8\,mas angular size (full width at half maximum, FWHM). These values imply a brightness temperature of $T_\mathrm{B}=7.9 \times 10^7$\,K. Since J1943+2118 is very close to the Galactic plane (at about $-1\fdg3$ Galactic latitude), angular broadening caused by the intervening ionised interstellar matter can distort the image of a distant compact extragalactic radio source. According to the model of \cite{ne2001}, the maximal amount of angular broadening of a point source in this direction at 1.6\,GHz (the frequency of our EVN observation) is expected to be 3.34\,mas. However, even if we take this effect into account, the ``de-broadened'' source size remains quite large. The brightness temperature calculated using this ``de-broadened'' source size is still substantially lower than the intrinsic equipartition value estimated for relativistic compact jets \citep[$\sim 5\times10^{10}$\,K, ][]{equi}. Thus relativistic beaming most probably does not play a role in the appearance of this source. The flux density recovered in our high-resolution EVN observation is only one-third of the value reported by NVSS at 1.4~GHz, (102.6$\pm$3.6)\,mJy. To investigate the discrepancy between the flux density values, we analysed the WSRT phased array data taken during our EVN experiment. The offset of the source position from the phase centre is a significant fraction of the synthesised beam of the phased array of the WSRT ($7\farcs8$ at 1.6\,GHz), therefore the derived source characteristics should be used with caution. However, the obtained flux density value, $\sim$$95$\,mJy, still agrees well with the value reported in the NVSS for J1943+2118. Thus, we can conclude that the high-resolution EVN observation resolved out a significant portion of the large-scale structure of J1943+2118. This was confirmed with the archival VLA C-array data (see Fig. \ref{archVLA}). 
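As a consistency check (not part of the original analysis), the quoted brightness temperature can be estimated from the fitted component parameters using the Rayleigh--Jeans relation for a circular Gaussian component. The exact prefactor depends on the convention adopted (e.g.\ peak versus total flux density, inclusion of a redshift factor), so only the order of magnitude is meaningful here; a minimal sketch:

```python
import math

def gaussian_brightness_temperature(flux_jy, fwhm_mas, freq_ghz):
    """Rayleigh-Jeans brightness temperature of a circular Gaussian
    component: T_B = c^2 S / (2 k_B nu^2 Omega), where the solid angle
    of a Gaussian of FWHM theta is Omega = pi theta^2 / (4 ln 2)."""
    c = 2.99792458e8                           # m/s
    k_B = 1.380649e-23                         # J/K
    mas = math.pi / (180.0 * 3600.0 * 1000.0)  # rad per milliarcsecond
    S = flux_jy * 1e-26                        # W m^-2 Hz^-1
    nu = freq_ghz * 1e9                        # Hz
    theta = fwhm_mas * mas                     # rad
    omega = math.pi * theta**2 / (4.0 * math.log(2.0))
    return c**2 * S / (2.0 * k_B * nu**2 * omega)

# Fitted EVN component: 31 mJy, 15.8 mas FWHM, observed at 1.6 GHz.
# This convention yields ~6e7 K, of the same order as the quoted
# 7.9e7 K, and far below the ~5e10 K equipartition limit.
tb = gaussian_brightness_temperature(0.031, 15.8, 1.6)
```

The comparison with the $\sim 5\times10^{10}$\,K equipartition value is what rules out strong relativistic beaming here.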
According to these data, the flux density of the source is (91$\pm$5)\,mJy, which agrees with the flux density values derived from the two other (the NVSS and the WSRT-only) lower-resolution data sets. Additionally, the VLA C-array data revealed an elongated structure of J1943+2118 in the SE-NW direction. This shape can naturally explain why the NVSS position is $\sim 4\arcsec$ off. In Fig. \ref{archVLA}, label 2 indicates the NVSS position, while label 3 indicates the coinciding {\it Chandra} X-ray and EVN radio positions. The shift between points 2 and 3 is exactly in the direction of the source extension. Finally, to gain insight into the field where HESS\,J1943+213 is located, and to search for possibly associated extended emission, we inspected the VGPS image of the radio continuum emission at 1.4~GHz in a $5\degr$$\times$$5\degr$ area around the source position (Fig. \ref{vgps_cont}). Essentially, it shows an almost empty field around J1943+2118 (whose location is indicated by a white cross in Fig. \ref{vgps_cont}), with no trace of diffuse emission at the sensitivity of this survey. The reprocessed VLA C-array archival data confirm this picture, down to an rms noise level of 0.15\,mJy/beam. \subsection{HI data: the distance and the associated features} \begin{figure} \centering \includegraphics*[width=8cm]{HIspectrum.eps} \caption{Top panel: HI spectrum towards J1943+2118. Bottom panel: HI absorption spectrum obtained from the subtraction of the above profile from the average of four offset profiles.} \label{HIspectrum} \end{figure} \begin{figure} \centering \includegraphics*[width=8cm]{largeHI.eps} \caption{HI emission distribution from the VGPS, within the velocity range $+50$\,{km\,s$^{-1}$} to $+57$\,{km\,s$^{-1}$}. The white cross shows the location of J1943+2118 and the white dashed circle marks the location of the discovered HI shell. 
Note that this map is presented in galactic coordinates.} \label{largeHI} \end{figure} \begin{figure} \centering \includegraphics*[width=8cm]{detailedHI.eps} \caption{A detailed view of the HI shell shown in Fig.~\ref{largeHI}. The cross shows the location of J1943+2118 and the circle is the $2\farcm8$ confidence size of HESS J1943+213.} \label{detailedHI} \end{figure} With an angular resolution of $1\arcmin$ and a sensitivity of 11\,mJy/beam (2\,K for a $1\arcmin$ beam), the VGPS \citep{VLAGPS} is an adequate database to search for possible footprints of the event that resulted in the very high-energy emission detected by H.E.S.S. We have carried out an HI absorption study to constrain the distance to J1943+2118. Fig.~\ref{HIspectrum} displays the HI profile traced towards the point source (top panel) and the absorption spectrum obtained from the subtraction of this profile from the average of four surrounding offset points (lower panel). From these spectra we can conclude that there is an absorption feature at $\varv$$\approx$$-16$\,{km\,s$^{-1}$}, followed by an emission feature peaking near $\varv$$\approx$$-50$\,{km\,s$^{-1}$}. By applying a Galactic circular rotation model \citep{galrot}, we set tentative limits for the distance to J1943+2118, concluding that it is beyond 10\,kpc (corresponding to the radial velocity $-16$\,{km\,s$^{-1}$}) but closer than 13\,kpc (corresponding to the radial velocity $-50$\,{km\,s$^{-1}$}). In what follows we adopt as a compromise an intermediate distance of 11.5\,kpc for J1943+2118. We note that the emission feature seen at $-50$\,{km\,s$^{-1}$} lies most probably in the warping region of the outer galaxy \citep{voskes_thesis} further than J1943+2118 and does not seem to be related to it. 
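The kinematic limits quoted above can be sketched with a flat Galactic rotation curve. The values $R_0=8.5$\,kpc and $\Theta_0=220$\,km\,s$^{-1}$ below are assumed standard constants, not necessarily those of the model actually cited; at $l\approx57\fdg6$ negative LSR velocities occur only beyond the solar circle, so the far-side kinematic solution applies:

```python
import math

R0 = 8.5     # kpc, assumed Galactocentric radius of the Sun
V0 = 220.0   # km/s, assumed flat rotation speed

def far_kinematic_distance(v_lsr, l_deg):
    """Far-side kinematic distance for a flat rotation curve:
    v_lsr = V0 (R0/R - 1) sin l  gives the Galactocentric radius R,
    then d = R0 cos l + sqrt(R^2 - R0^2 sin^2 l)."""
    l = math.radians(l_deg)
    R = R0 / (1.0 + v_lsr / (V0 * math.sin(l)))
    return R0 * math.cos(l) + math.sqrt(R**2 - (R0 * math.sin(l))**2)

l = 57.6  # Galactic longitude of J1943+2118
# Absorption at -16 km/s -> lower distance limit (~10 kpc);
# emission at -50 km/s -> upper distance limit (~13 kpc).
d_near_limit = far_kinematic_distance(-16.0, l)
d_far_limit = far_kinematic_distance(-50.0, l)
```

With these assumed constants the limits come out close to the 10--13\,kpc bracket adopted in the text, motivating the compromise distance of 11.5\,kpc.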
By inspecting the HI emission in the large $5\degr$$\times$$5\degr$ field, across the whole observed velocity range between $-114$\,{km\,s$^{-1}$} and +166\,{km\,s$^{-1}$}, we find that in all channel maps between radial velocity $\varv$$\approx$$+50$\,{km\,s$^{-1}$} and $\varv$$\approx$$+57$\,{km\,s$^{-1}$}, a striking, almost complete shell-like feature surrounds the location of J1943+2118 (Fig.~\ref{largeHI}). This HI shell, displayed in detail in Fig.~\ref{detailedHI}, appears at a ``forbidden'' velocity for the expected rotational properties of the Galactic gaseous disk, since in this direction of the Galaxy (Galactic longitude $l$$\sim$$57\degr$) the maximum positive velocity (corresponding to the tangent point) is predicted to be about $+35$\,{km\,s$^{-1}$}. Such anomalous velocities are, however, not unprecedented. Recently \cite{Kang2010} discussed the presence of many faint, wing-like HI features at velocities beyond the boundaries allowed by Galactic rotation (what they call Forbidden-Velocity Wings, FVWs), and concluded that these features are the results of violent events, probably old Galactic supernova remnants (SNRs) that have been missed because they are invisible in other spectral regimes. 
\begin{table} \caption[]{Observed and derived parameters of the HI shell.} \label{ring_data} $$ \begin{array}{p{0.4\linewidth}l} \hline \noalign{\smallskip} Geometrical centre: & \\ \hspace{2mm}Equatorial coordinates & \alpha=19^\mathrm{h} 43^\mathrm{m} 23^\mathrm{s}, \delta=+21\degr 10\arcmin 45\arcsec \\ \hspace{2mm}Galactic coordinates & {\it l}=57\fdg6, {\it b}=-1\fdg25 \\ Angular diameter & 1^\circ \\ Linear diameter & 200 \mathrm{\,pc} \\ Adopted systemic velocity & -16\mathrm{\,km\,s}^{-1} \\ Adopted central velocity & +54\mathrm{\,km\,s}^{-1} \\ Expansion velocity & 70\mathrm{\,km\,s}^{-1} \\ HI mass & 6.4 \times 10^{4}\mathrm{\,M}_\odot \\ Ambient density & 0.6 \mathrm{\,cm}^{-3} \\ Kinetic energy & 1.6 \times10^{51} \mathrm{\,erg} \\ Initial energy & 1.6 \times 10^{52} \mathrm{\,erg} \\ Dynamic age & 4 \times 10^{5} \mathrm{\,years} \\ \noalign{\smallskip} \hline \end{array} $$ \end{table} With this idea in mind, we investigated the characteristic physical parameters of the newly detected HI shell around J1943+2118. In Table~\ref{ring_data} we summarise the observed and derived parameters of the HI shell (for the adopted distance of 11.5\,kpc). The central velocity was derived on the basis of the velocity of the central channel of the nine where the HI shell is clearly detected between $+50$\,{km\,s$^{-1}$} and $+57$\,{km\,s$^{-1}$}. The expansion velocity is estimated by adding the $\sim$$54$\,{km\,s$^{-1}$} at which the shell is seen to the absolute value of the assumed systemic velocity of approximately $-16$\,{km\,s$^{-1}$}, as suggested by the absorption study. We also searched for the expanding ``caps'' of the shell at the appropriate velocities, i.e.\ ($-16 - 70$)\,{km\,s$^{-1}$}$=-86$\,{km\,s$^{-1}$}. Even though it is quite faint, we found traces of the HI shell at a very similar velocity, $-81$\,{km\,s$^{-1}$}. The HI mass was calculated by integrating all contributions within the shell across all the channels where the shell is present. 
The ambient density was derived by assuming that all the swept-up mass that forms the shell was initially uniformly distributed within a sphere of 200\,pc diameter. The initial energy was calculated from the relation \citep{SNRdyn}: $E_{\rm SN} = 6.8 \times 10^{43}~ n_0^{1.16}~ R_\mathrm{s}^{3.16}~ \varv_\mathrm{exp}^{1.35}~ \xi^{0.161}$\,erg, where $\xi$, the metallicity index, is adopted as $0.2$, corresponding to $d$$\approx$$10$\,kpc \citep{metal}. Finally, the dynamic age is calculated from the relation $t \sim 0.3 R_\mathrm{s}/\varv_\mathrm{exp}$. All the observed and calculated parameters for this shell are completely analogous to those obtained by \cite{2006Koo}, who conducted an HI study towards one of the FVWs, the structure \object{FVW 190.2+1.1}. They concluded that such parameters are only consistent with the HI shell being the last remnant of a supernova (SN) explosion that occurred in the outermost fringes of the Galaxy some $3 \times 10^5$ years ago, an SNR that is not seen in any other wavebands, and that represents the oldest type of SNRs, essentially invisible except via its HI line emission. In our case, the observed HI shell might be the last vestige of an old, very energetic SN that exploded on the other side of the Galaxy, about 400\,000 years ago, and is currently expanding in a tenuous interstellar gas. These facts explain the absence of radio continuum emission associated with this remnant. In this scenario, the sources detected in TeV, radio, and X-rays would be the emission from the PWN. For comparison, if the paradigmatic PWNe \object{Crab} and \object{3C 58} were located at a distance of 11.5\,kpc, the approximate distance of J1943+2118, their angular sizes at radio wavelengths would be of the order of $1\farcm2 \times 0\farcm8$ for Crab and $\sim$$0\farcm8$ for 3C~58, in excellent concordance with the size of the elliptical structure of J1943+2118 as seen from the VLA observation. 
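The derived quantities in Table~\ref{ring_data} follow directly from the quoted relations; a minimal sketch with the adopted parameters reproduces them to within the rounding of the inputs. The Crab extent and distance used for the angular scaling below are approximate assumptions ($\sim$$7\arcmin \times 5\arcmin$ at $\sim$2\,kpc), not values taken from this paper:

```python
import math

PC_KM = 3.0857e13   # kilometres per parsec

# Shell parameters adopted in the text (distance 11.5 kpc assumed)
n0 = 0.6        # cm^-3, ambient density
R_s = 100.0     # pc, shell radius (200 pc diameter)
v_exp = 70.0    # km/s, expansion velocity
xi = 0.2        # metallicity index at d ~ 10 kpc

# Initial SN energy from the cited relation; the text quotes
# 1.6e52 erg, and the ~20% difference reflects rounding of the
# adopted inputs.
E_SN = 6.8e43 * n0**1.16 * R_s**3.16 * v_exp**1.35 * xi**0.161  # erg

# Dynamic age t ~ 0.3 R_s / v_exp, converted to years (~4e5 yr)
t_dyn = 0.3 * R_s * PC_KM / v_exp / 3.156e7

# Angular sizes scale inversely with distance: a Crab-like PWN moved
# from ~2 kpc to 11.5 kpc shrinks to ~1.2' x 0.9' (assumed extent).
crab_arcmin_at_2kpc = (7.0, 5.0)
scaled = tuple(a * 2.0 / 11.5 for a in crab_arcmin_at_2kpc)
```

Within these assumptions the scaled Crab size indeed matches the arcminute-scale elongated VLA structure.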
The size of PWNe in X-rays is generally smaller than in radio due to the shorter synchrotron lifetimes of the higher-energy electrons \citep{crab}, thus explaining why {\it Chandra} detected it as a point-like source. Moreover, the radio spectral index $-0.32$ reported by \cite{vollmer_cat} for the NVSS point source coincident with J1943+2118 is compatible with the standard spectral indices of PWNe \citep[between $0$ and $-0.3$, ][]{pwn_spectrum}. However, one problem arises in this scenario. According to \cite{gammaXratio}, the gamma-ray to X-ray flux ratio is proportional to the age of the PWN. The relatively low value of the flux ratio ($0.04$) of J1943+2118 implies that the nebula is only $\sim$$10^3$\,years old, more than a hundred times younger than the age implied by the dynamics of the HI shell. This contradiction can be resolved if the HI shell was produced by the stellar wind of the massive supernova progenitor. As the SNR expanded in a previously evacuated cavity, the ambient density available to create the SNR shell was very low and it is natural that the SNR remained undetected. Such HI bubbles are common features around massive early-type stars, which eventually end their lives exploding as SNe \citep[e.g.][]{Dubner_windblown, Cappa_windblown, Vasquez_windblown, hess_windblown}. In this scenario, the estimated dynamic age has to be attributed to the wind-blown shell. A similar scenario was suggested by \cite{beforeSN} to explain the detection of a high-velocity HI shell associated with the SNR \object{W44}. \section{Conclusions} \label{concl} We reported on the study of the radio counterpart of the newly discovered TeV source, HESS J1943+213. We analysed new and archival radio continuum data and archival HI spectral-line observations of the proposed radio counterpart, J1943+2118 \citep{hess_disc}. Our high-resolution exploratory EVN observation provided the most accurate position of the source to date, thus strengthening the identification. 
From the EVN data, we could also calculate the brightness temperature of the source. The resulting low value indicates that relativistic beaming most probably does not play a role in the appearance of this source. According to the unified model of radio-loud AGNs \citep{uniAGN}, however, BL Lacertae objects are viewed at angles very close to the jet direction, therefore their radio emission is relativistically beamed, with apparent brightness temperatures well in excess of the equipartition value. Therefore, the measured low brightness temperature of J1943+2118 is not compatible with a BL Lacertae object nature, thus weakening the suggestion of \cite{hess_disc}. The re-analysed archival VLA C-array data, which recover emission at spatial scales that have been filtered out in the EVN observation, revealed an elongated source structure. Further radio observations at intermediate resolution are needed to better understand the radio structure of J1943+2118. On the other hand, the HI data revealed a shell-like feature around J1943+2118. Even though we do not yet have unequivocal evidence confirming that the two objects are associated, all the observed and derived parameters of this large-scale feature and the properties of the radio continuum source point to the discovery of a distant SNR whose PWN could be responsible for the TeV emission detected by H.E.S.S. If the HI shell is explained as a result of an SN explosion, then its dynamics indicate that the SN explosion took place $\sim$\,$400\,000$ years ago. The X-ray to gamma-ray flux ratio, on the other hand, suggests a much younger ($\sim$\,$10^3$\,yrs) PWN. If the latter age is correct, then the observed HI shell was not produced by the SN explosion but by the action of the stellar wind of the massive SN progenitor(s). Later the SN explosion took place within the rarefied wind cavity and the nebula created by its pulsar gave rise to the observed gamma-ray and X-ray emission. 
Ultimate evidence for the PWN scenario would be the detection of pulsed radio emission from J1943+2118. It is worth mentioning that the SNR \object{W44}, a high-energy gamma-ray emitter with an associated high-velocity HI shell, also harbours a pulsar (\object{PSR B1853+01}) which is powering a PWN \citep{w44}. The importance of the reported discovery resides not only in the fact that it naturally explains the nature of the TeV emission, but also, as remarked by \cite{2006Koo}, in that it helps to solve the long-standing problem of the ``missing'' Galactic SNRs, where the detected SNRs barely make up 1\% of the expected number. \begin{acknowledgements} We are grateful to the chair of the EVN Program Committee, Tiziana Venturi, for granting us short exploratory e-VLBI observing time in May 2011. The EVN is a joint facility of European, Chinese, South African, and other radio astronomy institutes funded by their national research councils. The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work was supported by the European Community's Seventh Framework Programme, Advanced Radio Astronomy in Europe, grant agreement no.\ 227290, and Novel EXplorations Pushing Robust e-VLBI Services (NEXPReS), grant agreement no.\ RI-261525. This research was supported by the Hungarian Scientific Research Fund (OTKA, grant no.\ K72515). GD and EG are members of CIC-CONICET (Argentina) and their work is partially supported by CONICET, ANPCYT and UBACYT grants from Argentina. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} We consider an infinite number of point particles of unit mass on $\mathbb{R}$ with the formal Hamiltonian \[ H(q,p)=\sum_{k\in\mathbb{Z}}\frac{p_{k}^{2}}{2}+\frac{1}{2}\sum_{k,j}a(k-j)q_{k}q_{j},\quad p_{k},q_{k}\in\mathbb{R} \] where $q_{k}=x_{k}-k\delta$ denotes the displacement of the particle with index $k$ from the point $k\delta$ for some $\delta>0$, and $p_{k}$ is the momentum of particle $k$. We will assume that the function $a(n)$ satisfies the following three natural conditions: \begin{enumerate} \item symmetry: $a(n)=a(-n)$, \item finite range interaction: $a(n)$ has finite support, i.e.\ there is a number $r\geqslant1$ such that $a(n)=0$ if $|n|>r$, \item for every $\lambda\in\mathbb{R}$ there is the bound \[ \omega^{2}(\lambda)=\sum_{n\in\mathbb{Z}}a(n)e^{in\lambda}\geqslant0. \] Below we will see that this condition guarantees that the energy $H(q,p)$ is non-negative for all $p,q\in l_{2}(\mathbb{Z})$. \end{enumerate} The harmonic chain with nearest-neighbor interaction by definition has the following Hamiltonian: \[ H_{C}(q,p)=\sum_{k\in\mathbb{Z}}\frac{p_{k}^{2}}{2}+\frac{\omega_{1}^{2}}{2}\sum_{k}(q_{k+1}-q_{k})^{2}+\frac{\omega_{0}^{2}}{2}\sum_{k}q_{k}^{2} \] for some non-negative constants $\omega_{0},\omega_{1}\geqslant0$. It is easy to see that \[ \omega^{2}(\lambda)=\omega_{0}^{2}+2\omega_{1}^{2}(1-\cos\lambda) \] and conditions 1--3 obviously hold. We will always assume that conditions 1--3 are fulfilled. Suppose that the particle with index $0$ is driven by white noise. Then the equations of motion are: \begin{equation} \ddot{q}_{k}=-\sum_{j}a(k-j)q_{j}+\sigma\delta_{k,0}\dot{w}_{t},\quad k\in\mathbb{Z},\label{mainEquation} \end{equation} where \[ \delta_{k,0}=\begin{cases} 1, & k=0,\\ 0, & k\ne0, \end{cases} \] $\sigma>0$ and $w_{t}$ is a standard Brownian motion. In this article we study the solution $q(t),p(t)$ of the latter equations with initial data in $l_{2}(\mathbb{Z})$. 
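Condition 3 can be checked directly for the nearest-neighbor chain. The following numerical sketch (illustrative only; the values of $\omega_0$ and $\omega_1$ are arbitrary) evaluates $\omega^{2}(\lambda)$ from the coefficients $a(n)$ and confirms both the closed form above and its non-negativity:

```python
import math

def omega_sq(lam, a):
    """omega^2(lambda) = sum_n a(n) e^{i n lambda} for a symmetric,
    finitely supported a, given as a dict {n: a(n)} with n >= 0."""
    s = a[0]
    for n, an in a.items():
        if n > 0:
            s += 2.0 * an * math.cos(n * lam)  # uses a(n) = a(-n)
    return s

# Nearest-neighbor chain: a(0) = w0^2 + 2 w1^2, a(+-1) = -w1^2
w0, w1 = 0.5, 1.0
a = {0: w0**2 + 2 * w1**2, 1: -w1**2}

# Compare with the closed form w0^2 + 2 w1^2 (1 - cos(lambda)) and
# check non-negativity on a grid of lambda values.
for k in range(512):
    lam = 2.0 * math.pi * k / 512
    closed_form = w0**2 + 2.0 * w1**2 * (1.0 - math.cos(lam))
    assert abs(omega_sq(lam, a) - closed_form) < 1e-12
    assert omega_sq(lam, a) >= 0.0
```

With pinning ($\omega_0>0$) the dispersion is bounded away from zero, which is exactly assumption A1 introduced later.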
We are mainly interested in the behavior of the energies as $t\rightarrow\infty$ at the microscopic (local) and macroscopic (global) scales. By energies we mean three quantities: the kinetic, the potential and the full energy. The first and simplest objects to study are the global energies. By global we mean the energy of the whole system. We will prove that the expectation values of the energies grow linearly with time $t$ up to terms $\bar{\bar{o}}(t)$. Moreover, the mean values of the kinetic and potential energies grow with the same speed $\sigma^{2}/4$. This shows that our system, in some sense, is transient and goes to infinity. Despite this fact, we see the equipartition property in this purely non-equilibrium case. Recall that according to the equipartition theorem (\hspace{-0.1pt}\cite{Huang}, p.\thinspace 136--138), for systems with a quadratic Hamiltonian at equilibrium the expected values of the kinetic and potential energies coincide. Next, we investigate the local energies. It turns out that on the microscopic level the equipartition property holds too. We prove that under some additional assumptions the mean kinetic and potential energies of the $n$-th particle are asymptotically equal to $d_{n}\ln t$ as $t\rightarrow\infty$ for some constants $d_{n}$ bounded from below by a positive constant $d$. In general $d_{n}$ depends on the index $n$, but if $\omega(\lambda)$ has no critical points on the interval $[0,\pi]$ except $0$ and $\pi$, then $d_{n}$ does not depend on $n$. Thus in the latter case we have equipartition of the energy among the particles in an asymptotic sense. A related effect was mentioned and proved in the book \cite{Kozlov} for some class of finite linear Hamiltonian systems with random initial conditions. The order of growth $\ln t$ is not the same as in the finite case. 
Indeed, in the finite case, for a linear Hamiltonian system in which only a few degrees of freedom are influenced by white noise, the local energies grow like $t$ (\hspace{-0.1pt}\cite{LDisser}, p.\thinspace 68--73). Therein the equipartition property of the local energies was also proved. The slower order of growth $\ln t$ in the infinite case is natural, since a part of the energy goes to infinity. The infinite harmonic chain is a quite standard object in mathematical physics. It is used to study various physical phenomena: convergence to equilibrium, heat transport, hydrodynamics, etc. Most of the related works deal with systems that are not far away from equilibrium. In the present article we have considered, in some sense, the opposite situation --- our system goes to infinity, i.e.\ away from equilibrium. The effect of the growth of the total energy in the case when only a few degrees of freedom are influenced by a white noise was proved in \cite{LMM} for finite linear Hamiltonian systems. The seminal physical papers \cite{GKT,KK} (and references therein) are closely related to ours. In these articles the authors considered a system with a discrete Laplacian on the right-hand side of (\ref{mainEquation}). In \cite{KK}, instead of white noise, there is a periodic force $\sin\omega t$ acting on the fixed particle with index 0, and an additional pinning term. The authors studied the behavior of the energies and of the solution as time $t$ goes to infinity. \section{Results} Denote $p_{k}(t)=\dot{q}_{k}(t)$. We say that sequences $q_{k}(t),\ p_{k}(t),\ k\in\mathbb{Z}$ of stochastic processes solve the equation (\ref{mainEquation}) if they satisfy the following system of stochastic differential equations (in the It\^o sense) \begin{align*} dq_{k}= & \ p_{k}dt,\\ dp_{k}= & \ -\sum_{j}a(k-j)q_{j}dt+\sigma\delta_{k,0}dw_{t},\quad k\in\mathbb{Z}. \end{align*} Denote the phase space of our system by $L=\{\psi=(q,p):q\in l_{2}(\mathbb{Z}),\ p\in l_{2}(\mathbb{Z})\}$. It is evident that $L$ is a Hilbert space. 
\begin{lemma} \label{EUlemma} For all $\psi\in L$ there is a unique solution $\psi(t)$ of (\ref{mainEquation}) with initial condition $\psi$ such that $\bold{P}(\psi(t)\in L)=1$ for all $t\geqslant0$. \end{lemma} By unique we mean that if $\psi'(t)$ is another solution of (\ref{mainEquation}) such that $\psi'(0)=\psi$ and $\psi'(t)\in L$ almost surely for all $t$, then $\psi(t)$ and $\psi'(t)$ are stochastically equivalent, i.e.\ $\bold{P}(\psi(t)=\psi'(t))=1$ for all $t\geqslant0$. Define the kinetic and potential energies respectively: \[ T(t)=\frac{1}{2}\sum_{k}p_{k}^{2}(t),\quad U(t)=\frac{1}{2}\sum_{k,j}a(k-j)q_{k}(t)q_{j}(t), \] where $\psi(t)=(q(t),p(t))^{T}$ is a solution of (\ref{mainEquation}). Then evidently $H(t)=H(q(t),$ $p(t))=T(t)+U(t)$. \begin{theorem}[Global energy behavior] \label{globalEnergyBehTh} For every initial condition $\psi(0)\in L$ the following equalities hold \begin{align} \bold{E}H(t)= & \ \frac{\sigma^{2}}{2}t+H(0),\label{EHform}\\ \bold{E}T(t)= & \ \frac{\sigma^{2}}{4}t+\bar{\bar{o}}(t),\label{ETform}\\ \bold{E}U(t)= & \ \frac{\sigma^{2}}{4}t+\bar{\bar{o}}(t),\label{EUform} \end{align} as $t\rightarrow\infty$. Moreover, the full energy has the following representation \begin{equation} H(t)=\frac{\sigma^{2}}{2}t+H(0)+\xi_{0}(t)+\xi_{1}(t),\label{energyDecomp} \end{equation} where the Gaussian random process $\xi_{0}(t)$ depends on the initial conditions and can be expressed as follows \[ \xi_{0}(t)=\int_{0}^{t}f(s)\ dw_{s}, \] and the random process $\xi_{1}(t)$ has a representation via a multiple It\^o integral \[ \xi_{1}(t)=\int_{0}^{t}\left(\int_{0}^{s_{1}}h(s_{1}-s_{2})dw_{s_{2}}\right)dw_{s_{1}}, \] where \[ h(t)=\frac{\sigma^{2}}{2\pi}\int_{0}^{2\pi}\cos(t\omega(\lambda))d\lambda, \] \[ f(t)=\frac{\sigma}{2\pi}\int_{0}^{2\pi}\left[-Q_{0}(\lambda)\omega(\lambda)\sin(t\omega(\lambda))+P_{0}(\lambda)\cos(t\omega(\lambda))\right]d\lambda, \] \[ Q_{0}(\lambda)=\sum_{n}q_{n}(0)e^{in\lambda},\quad P_{0}(\lambda)=\sum_{n}p_{n}(0)e^{in\lambda}. 
\] \end{theorem} If the initial conditions are zero, then as a consequence of the representation (\ref{energyDecomp}) we can write the variance of the full energy: \begin{equation} \bold{D}H(t)=\int_{0}^{t}(t-s)h^{2}(s)ds.\label{HvarFor} \end{equation} Later on we will prove that the following formula holds: \begin{equation} \bold{D}H(t)=\bar{\bar{o}}(t^{2}),\ \mbox{as}\ t\rightarrow\infty.\label{HvarAsymp} \end{equation} We see that the mean energies of the whole system grow linearly with time. Next we will see that locally the energies grow like the logarithm of time $t$. Additionally, we should note that the mean kinetic and potential energies grow at the same rate $\sigma^{2}/4$. It turns out that locally the picture is the same, i.e.\ the leading asymptotic terms of the corresponding mean energies coincide. To formulate the corresponding result we need more assumptions. We will suppose that \begin{itemize} \item[{\bf A1)}] $\omega(\lambda)$ is strictly greater than zero: \[ \omega(\lambda)>0 \] for all $\lambda\in\mathbb{R}$. \item[{\bf A2)}] each critical point of $\omega(\lambda)$ is non-degenerate, i.e.\ if $\omega'(\lambda)=0$ for some $\lambda$ then $\omega''(\lambda)\ne0$. \end{itemize} Since $a(n)$ is symmetric and has finite support, we can write \[ \omega^{2}(\lambda)=a(0)+2\sum_{n=1}^{r}a(n)\cos(n\lambda). \] Thus the points $0$ and $\pi$ are critical, and $\omega(\lambda)$ has a finite number of critical points on the interval $(0,\pi)$. Let us denote them by $\lambda_{1},\ldots,\lambda_{m}$ and put by definition $\lambda_{0}=0,\ \lambda_{m+1}=\pi$. \begin{itemize} \item[{\bf A3)}] The third assumption is $\omega(\lambda_{j})\ne\omega(\lambda_{i})$ for all $i\ne j$. \end{itemize} Let us introduce the local energies \[ T_{n}(t)=\frac{p_{n}^{2}(t)}{2},\quad U_{n}(t)=\frac{1}{2}\sum_{j}a(n-j)q_{n}(t)q_{j}(t),\quad H_{n}=T_{n}+U_{n}, \] the kinetic, potential and full energy respectively. 
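The coefficient $\sigma^{2}/2$ in (\ref{EHform}) can be illustrated numerically. Writing the system as $\dot{\psi}=A\psi+\sigma g_{0}\dot{w}_{t}$ with $g_{0}=(0,e_{0})^{T}$ (as in the proofs below), the It\^o isometry gives $\bold{E}H(t)=H(0)+\sigma^{2}\int_{0}^{t}H(e^{sA}g_{0})\,ds$, and the free flow $e^{sA}$ conserves the energy with $H(g_{0})=1/2$. The sketch below is not part of the proof; it checks this conservation on a finite pinned chain with arbitrary illustrative parameters:

```python
import numpy as np

# Finite periodic chain of N particles, nearest-neighbor coupling
# with pinning (w0 > 0 keeps V positive definite, cf. assumption A1).
N, w0, w1 = 64, 0.5, 1.0
V = np.zeros((N, N))
for i in range(N):
    V[i, i] = w0**2 + 2 * w1**2
    V[i, (i + 1) % N] = V[i, (i - 1) % N] = -w1**2

mu, Q = np.linalg.eigh(V)   # V = Q diag(mu) Q^T with mu > 0
sqmu = np.sqrt(mu)
e0 = np.zeros(N)
e0[0] = 1.0

def energy_of_free_flow(s):
    """H(e^{sA} g_0) for g_0 = (0, e_0)^T: the q-part of e^{sA} g_0 is
    S(s) e_0 and the p-part is C(s) e_0, computed spectrally."""
    q = Q @ ((np.sin(s * sqmu) / sqmu) * (Q.T @ e0))
    p = Q @ (np.cos(s * sqmu) * (Q.T @ e0))
    return 0.5 * p @ p + 0.5 * q @ (V @ q)

# The free flow conserves energy, so H(e^{sA} g_0) = H(g_0) = 1/2 for
# every s; hence E H(t) = H(0) + sigma^2 t / 2, as in the theorem.
energies = [energy_of_free_flow(s) for s in np.linspace(0.0, 20.0, 41)]
```

The same conservation argument explains why the growth rate is independent of which particle the noise acts on.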
\begin{theorem}[Local energy behavior] \label{locEnergyTheorem} Suppose assumptions A1, A2, A3 are fulfilled. Then for all $n\in\mathbb{Z}$ the following equalities hold: \begin{align} \bold{E}T_{n}(t)= & \ d_{n}\ln(t)+O(1),\label{kineticEnergyAsym}\\ \bold{E}U_{n}(t)= & \ d_{n}\ln(t)+O(1),\label{potentialEnergyAsym} \end{align} where we denote \begin{equation} d_{n}=\frac{\sigma^{2}}{8\pi}\sum_{k=0}^{m+1}\frac{\cos^{2}n\lambda_{k}}{|\omega''(\lambda_{k})|}\chi_{k},\quad\chi_{k}=\begin{cases} 4, & k=1,\ldots,m\\ 1, & k\in\{0,m+1\} \end{cases}.\label{dneqdef} \end{equation} \end{theorem} We emphasize that $\inf_{n}d_{n}>0$. Indeed, since $\lambda_{0}=0,\ \lambda_{m+1}=\pi$, we obtain the bound: \[ d_{n}\geqslant\frac{\sigma^{2}}{8\pi}\left(\frac{1}{|\omega''(0)|}+\frac{1}{|\omega''(\pi)|}\right)>0. \] \section{Proofs} \subsection{Existence and uniqueness lemma \ref{EUlemma}} First we prove the existence and uniqueness lemma \ref{EUlemma}. Let us rewrite the system (\ref{mainEquation}) in matrix form: \begin{equation} \dot{\psi}=A\psi+\sigma g_{0}\dot{w}_{t},\label{mainEqMatrixForm} \end{equation} where the matrix $A$ is given by the formula (in the $q,p$ decomposition of the phase space): \begin{equation} A=\left(\begin{array}{cc} 0 & I\\ -V & 0 \end{array}\right),\label{defA} \end{equation} $I$ is the unit matrix, $V_{i,j}=a(i-j)$, and the vector $g_{0}=(0,e_{0})^{T}$, where $e_{0}$ is a vector of the standard basis (all its entries are zero except the zeroth one, which equals one). We agree that vectors are column vectors. Since $a(n)$ has a compact support, $V$ acts on $l_{2}(\mathbb{Z})$ and hence $A$ defines a bounded linear operator on $L$. Uniqueness easily follows from the linearity of the system (\ref{mainEqMatrixForm}). Indeed, let $\psi(t)$ and $\psi'(t)$ be two solutions of (\ref{mainEqMatrixForm}) with the same initial condition. 
Then $\delta(t)=\psi(t)-\psi'(t)$ is a solution of the homogeneous equation: \[ \dot{\delta}=A\delta \] with $\delta(0)=0$, and moreover $\delta(t)\in L$ almost surely for all $t\geqslant0$. Thus the same arguments as in the classical theory of ODEs in Banach spaces (see \cite{DalKrein}) show that $\delta(t)=0$ almost surely for all $t\geqslant0$. So uniqueness has been proved. A solution of (\ref{mainEqMatrixForm}) can be found via the classical formula for the solution of an inhomogeneous ODE \cite{DalKrein}: \begin{equation} \psi(t)=e^{tA}\psi(0)+\sigma\int_{0}^{t}e^{(t-s)A}g_{0}\ dw_{s}=\psi_{0}(t)+\psi_{1}(t),\label{psiSol} \end{equation} where $\psi_{0}(t)=e^{tA}\psi(0)$ and $\psi_{1}(t)=\sigma\int_{0}^{t}e^{(t-s)A}g_{0}\ dw_{s}$. As mentioned above, $A$ is a bounded operator on $L$, and so $e^{tA}$ is a well-defined bounded operator on $L$. Therefore $\psi_{0}(t)\in L$ for all $t\geqslant0$. To prove the same statement with probability one for $\psi_{1}(t)$ we need the following lemma. \begin{lemma} \label{exptaformula} For all $t\geqslant0$ the following formula holds: \[ e^{tA}=\left(\begin{array}{cc} C(t) & S(t)\\ -VS(t) & C(t) \end{array}\right), \] where \[ C(t)=\sum_{n=0}^{\infty}(-1)^{n}\frac{t^{2n}}{(2n)!}V^{n},\quad S(t)=\sum_{n=0}^{\infty}(-1)^{n}\frac{t^{2n+1}}{(2n+1)!}V^{n}. \] \end{lemma} \begin{proof} Straightforward calculation (note that $e^{tA}\approx I+tA$ for small $t$ fixes the sign of the lower-left block); see also \cite{DalKrein}. Note that the formal series expansion of $\cos(tV^{1/2})$ equals $C(t)$; the same is true for the pair $V^{-1/2}\sin(tV^{1/2})$ and $S(t)$. \end{proof} Denote $\psi_{1}(t)=(q^{(1)}(t),p^{(1)}(t))^{T}$. From lemma \ref{exptaformula} we have \begin{equation} q_{k}^{(1)}(t)=\sigma\int_{0}^{t}S_{k,0}(t-s)dw_{s},\quad p_{k}^{(1)}(t)=\sigma\int_{0}^{t}C_{k,0}(t-s)dw_{s}.\label{qpSolInhomSOl} \end{equation} We want to prove that $q^{(1)}(t)\in l_{2}(\mathbb{Z})$ and $p^{(1)}(t)\in l_{2}(\mathbb{Z})$ almost surely. The proof will be based on the following lemma. 
\begin{lemma} For all $t\geqslant0$ the following inequalities hold: \begin{equation} |C_{i,j}(t)|\leqslant\frac{v^{\rho}t^{2\rho}}{(2\rho)!}e^{\sqrt{v}t},\quad|S_{i,j}(t)|\leqslant\frac{v^{\rho}t^{2\rho+1}}{(2\rho+1)!}e^{\sqrt{v}t}, \label{CSineq} \end{equation} where $v=||V||_{l_{2}(\mathbb{Z})}$, $\rho=\lceil |i-j| / r\rceil$ and the number $r$ is the radius of interaction, defined in assumption 2 on the function $a$. By $\lceil x\rceil$ we denote the ceiling function of $x$, i.e.\ the least integer greater than or equal to $x$. \end{lemma} \begin{proof} Since the matrix $V$ is translation invariant, it suffices to prove the assertion for $i=k\geqslant0$ and $j=0$. For all $n\geqslant1$ we have: \[ (V^{n})_{k,0}=\sum_{i_{1},\ldots,i_{n-1}\in\mathbb{Z}}V_{i_0,i_{1}}V_{i_{1},i_{2}}\ldots V_{i_{n-1},i_{n}},\ i_0 = k,\ i_n = 0. \] We see that if $(V^{n})_{k,0}\ne0$ then $|i_{j}-i_{j-1}|\leqslant r$ for all $j$, and thus we get: \[ |k|=|i_0-i_{1}+i_{1}-i_{2}+\ldots +i_{n-1}-i_{n}|\leqslant rn. \] Hence $n\geqslant k/r$, and therefore: \[ C_{k,0}(t)=\sum_{n\geqslant k/r}(-1)^{n}\frac{t^{2n}}{(2n)!}(V^{n})_{k,0}. \] Now we derive a bound for $C_{k,0}(t)$: \[ |C_{k,0}(t)|\leqslant\sum_{n=\rho}^{\infty}\frac{t^{2n}}{(2n)!}v^{n}=\sum_{n=\rho}^{\infty}\frac{(\sqrt{v}t)^{2n}}{(2n)!}\leqslant\sum_{n=2\rho}^{\infty}\frac{(\sqrt{v}t)^{n}}{n!}\leqslant\frac{v^{\rho}t^{2\rho}}{(2\rho)!}e^{\sqrt{v}t}. \] It is easy to see that $S(t)=\int_{0}^{t}C(s)\ ds$, and so the second inequality in (\ref{CSineq}) follows. \end{proof} We now have everything needed to prove the existence and uniqueness lemma \ref{EUlemma}. Indeed, from (\ref{qpSolInhomSOl}) we obtain: \[ \bold{E}(q_{k}^{(1)}(t))^{2}=\sigma^{2}\int_{0}^{t}S_{k,0}^{2}(t-s)\ ds=\sigma^{2}\int_{0}^{t}S_{k,0}^{2}(s)\ ds. 
\] Using inequalities (\ref{CSineq}) we have the bound: \[ \bold{E}(q_{k}^{(1)}(t))^{2}\leqslant\sigma^{2}\int_{0}^{t}\left(\frac{v^{\rho}s^{2\rho+1}}{(2\rho+1)!}e^{\sqrt{v}s}\right)^{2}ds\leqslant\sigma^{2}\frac{v^{2\rho}t^{4\rho+3}}{(4\rho+3)\,((2\rho+1)!)^{2}}e^{2\sqrt{v}t}, \] where $\rho=\lceil |k| / r \rceil$. Hence we conclude: \[ \sum_{k\in\mathbb{Z}}\bold{E}(q_{k}^{(1)}(t))^{2}<\infty \] and due to the monotone convergence theorem we get: \[ \sum_{k\in\mathbb{Z}}(q_{k}^{(1)}(t))^{2}<\infty \] with probability one. Thus $q^{(1)}(t)\in l_{2}(\mathbb{Z})$ almost surely for all $t\geqslant0$. Similar arguments show that $p^{(1)}(t)\in l_{2}(\mathbb{Z})$ almost surely for all $t\geqslant0$. Thus lemma \ref{EUlemma} is proved. \subsection{Solution via the Fourier transform} Consider the Fourier transform of a solution: \[ Q_{t}(\lambda)=\sum_{n}q_{n}(t)e^{in\lambda}. \] A simple calculation gives: \[ \frac{d^{2}}{dt^{2}}Q_{t}=-\omega^{2}(\lambda)Q_{t}+\sigma\dot{w}_{t}. \] Thus $Q_{t}(\lambda)$ for every $\lambda$ satisfies the equation of a harmonic oscillator with frequency $\omega(\lambda)$ driven by white noise. Its solution is unique and can easily be found using standard tools (see \cite{Gitterman,Antonov}): \[ Q_{t}(\lambda)=Q_{t}^{(0)}(\lambda)+Q_{t}^{(1)}(\lambda), \] where we denote: \begin{align*} Q_{t}^{(0)}&=Q_{0}(\lambda)\cos(t\omega(\lambda))+P_{0}(\lambda)\frac{\sin(t\omega(\lambda))}{\omega(\lambda)},\\ Q_{t}^{(1)}&=\frac{\sigma}{\omega(\lambda)}\int_{0}^{t}\sin((t-s)\omega(\lambda))dw_{s}, \end{align*} and $P_{0}(\lambda)=\dot{Q}_{0}(\lambda)$. Note that $Q_{t}^{(0)}$ is a solution of the homogeneous equation (with $\sigma=0$) with initial data $Q_{0}^{(0)}=Q_{0},\ \dot{Q}_{0}^{(0)}=P_{0}$ and $Q_{t}^{(1)}$ is a solution of the inhomogeneous equation with zero initial conditions. Using the inverse transformation we obtain: \[ q_{n}(t)=\frac{1}{2\pi}\int_{0}^{2\pi}e^{-in\lambda}Q_{t}(\lambda)\ d\lambda. 
\] In formula (\ref{psiSol}) we denote $\psi_{k}(t)=(q^{(k)}(t),p^{(k)}(t))^{T}, \ k=0,1$. It now follows that: \begin{equation} q_{n}^{(k)}(t)=\frac{1}{2\pi}\int_{0}^{2\pi}e^{-in\lambda}Q_{t}^{(k)}(\lambda)\ d\lambda,\quad k=0,1.\label{qknt} \end{equation} Thus we have almost proved the following lemma. \begin{lemma} \label{allSolFormulas} The following formulas hold: \begin{align} q_{n}^{(0)}(t)= & \ \frac{1}{2\pi}\int_{0}^{2\pi}e^{-in\lambda}\left(Q_{0}(\lambda)\cos(t\omega(\lambda))+P_{0}(\lambda)\frac{\sin(t\omega(\lambda))}{\omega(\lambda)}\right)d\lambda,\label{qz}\\ p_{n}^{(0)}(t)= & \ \frac{1}{2\pi}\int_{0}^{2\pi}e^{-in\lambda}\left(-Q_{0}(\lambda)\omega(\lambda)\sin(t\omega(\lambda))+P_{0}(\lambda)\cos(t\omega(\lambda))\right)d\lambda,\label{pz}\\ q_{n}^{(1)}(t)= & \ \int_{0}^{t}x_{n}(t-s)dw_{s},\quad x_{n}(t)=\frac{\sigma}{2\pi}\int_{0}^{2\pi}e^{-in\lambda}\frac{\sin(t\omega(\lambda))}{\omega(\lambda)}d\lambda,\label{qo}\\ p_{n}^{(1)}(t)= & \ \int_{0}^{t}y_{n}(t-s)dw_{s},\quad y_{n}(t)=\frac{\sigma}{2\pi}\int_{0}^{2\pi}e^{-in\lambda}\cos(t\omega(\lambda))d\lambda.\label{po} \end{align} \end{lemma} \begin{proof} Formula (\ref{qz}) was derived above at (\ref{qknt}). Formula (\ref{pz}) is obtained from (\ref{qz}) by differentiation. Formula (\ref{qo}) follows from (\ref{qknt}) after switching the order of integration. We can change the order of integration between the It\^o integral and the Lebesgue integral because the integrand is a deterministic smooth function and due to the ``integration by parts'' formula \[ \int_{0}^{t}f(s)dw_{s}=f(t)w_{t}-\int_{0}^{t}f'(s)w_{s}\ ds, \] which is true for any smooth function $f$. The last formula (\ref{po}) is deduced from (\ref{qo}) and the equality $dq_{n}^{(1)}=p_{n}^{(1)}\ dt$. \end{proof} Let us prove here that $U(q)=\sum_{k,j}a(k-j)q_{k}q_{j}\geqslant0$ for all $q\in l_{2}(\mathbb{Z})$. Using the operator $V$ defined above in (\ref{defA}) we can write: \[ U(q)=(q,Vq). 
\] Denote by $\widehat{f}(\lambda)=\sum_{k}f(k)e^{ik\lambda}$ the Fourier transform of the sequence $f(k)$. Thus, due to Parseval's theorem we obtain \[ U(q)=\frac{1}{2\pi}\int_{0}^{2\pi}\widehat{q}(\lambda)\overline{\widehat{(Vq)}}(\lambda)d\lambda=\frac{1}{2\pi}\int_{0}^{2\pi}|\widehat{q}(\lambda)|^{2}\omega^{2}(\lambda)d\lambda\geqslant0. \] In the last equality we have used the obvious relation $\widehat{(Vq)}(\lambda)=\omega^{2}(\lambda)\widehat{q}(\lambda)$. \subsection{Global energy behavior} To prove formula (\ref{EHform}), we need to find an expression for the differential $dH$. Using the It\^o formula we have: \[ dp_{k}^{2}=2p_{k}dp_{k}+(dp_{k})^{2}=2p_{k}\Bigl(-\sum_{j}a(k-j)q_{j}dt+\sigma\delta_{k,0}dw_{t} \Bigr)+\sigma^{2}\delta_{k,0}dt. \] Denote $X_{t}=H(t)=H(\psi(t))$. Hence for the energy we obtain: \begin{align*} dH=dX_{t}&=\frac{1}{2}\sum_{k}dp_{k}^{2}+\frac{1}{2}\sum_{k,j}a(k-j)d(q_{k}q_{j}) \\ &=-\sum_{k,j}a(k-j)p_{k}q_{j}dt+\frac{1}{2}\sum_{k,j}a(k-j)(p_{k}q_{j}+p_{j}q_{k})dt\\ &\quad {} +\sigma p_{0}dw_{t}+\frac{\sigma^{2}}{2}dt \\ &=\frac{\sigma^{2}}{2}dt+\sigma p_{0}dw_{t}. \end{align*} This is equivalent to the equality: \begin{align*} H(t)&=H(0)+\frac{\sigma^{2}}{2}t+\sigma\int_{0}^{t}p_{0}(s)dw_{s}\\ &=H(\psi(0))+\frac{\sigma^{2}}{2}t+\sigma\int_{0}^{t}\bigl(p_{0}^{(0)}(s)+p_{0}^{(1)}(s)\bigr)dw_{s}. \end{align*} In the last equality we have used (\ref{psiSol}). Substituting (\ref{pz}) and (\ref{po}) into the last expression we get (\ref{energyDecomp}). Formulas (\ref{EHform}) and (\ref{HvarFor}) for the expected value and variance of the energy $H(t)$ immediately follow from (\ref{energyDecomp}). Now we prove equality (\ref{HvarAsymp}). From (\ref{HvarFor}) we have: \[ \bold{D}H(t)=t^{2}\int_{0}^{1}(1-s)h^{2}(ts)\ ds. \] Lemma \ref{202001122342} gives us: \[ \lim_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}h^{2}(s)ds=0. \] Therefore due to Bochner's theorem (see \cite{Kawata}, p. 182, th. 
5.5.1) the following limit holds: \[ \int_{0}^{1}(1-s)h^{2}(ts)\ ds\rightarrow0,\ \mbox{as}\ t\rightarrow\infty. \] So (\ref{HvarAsymp}) has been proved. Note that our derivation of (\ref{energyDecomp}) has a gap at the point where we sum up an infinite number of It\^o differentials. It is not hard to justify this procedure by a limiting argument, or one can use a more direct approach based on lemma \ref{allSolFormulas}. Now let us prove (\ref{ETform}). From representation (\ref{psiSol}) and lemma \ref{allSolFormulas} we have: \[ \bold{E}T(t)=\frac{1}{2}\sum_{n}(p_{n}^{(0)}(t))^{2}+\frac{1}{2}\sum_{n}\bold{E}(p_{n}^{(1)}(t))^{2}=O(1)+\frac{1}{2}\sum_{n}\int_{0}^{t}y_{n}^{2}(s)ds. \] The last equality is due to the It\^o isometry. Next we study the sum. Note that $y_{n}(t)$ is a Fourier coefficient of the function $\cos(t\omega(\lambda))$. Hence, using Parseval's theorem, we obtain: \[ \sum_{n}\int_{0}^{t}y_{n}^{2}(s)ds=\int_{0}^{t}\sum_{n}y_{n}^{2}(s)ds=\sigma^{2}\int_{0}^{t}\frac{1}{2\pi}\int_{0}^{2\pi}\cos^{2}(s\omega(\lambda))d\lambda ds= \] \[ =\frac{\sigma^{2}t}{2}+\frac{\sigma^{2}}{2}\int_{0}^{t}\frac{1}{2\pi}\int_{0}^{2\pi}\cos(2s\omega(\lambda))d\lambda ds. \] In the last equality we have used the elementary identity $\cos^{2}x= [1+\cos(2x)] / 2$. Lemma \ref{202001122342} gives us: \[ \frac{\sigma^{2}}{2}\int_{0}^{t}\frac{1}{2\pi}\int_{0}^{2\pi}\cos(2s\omega(\lambda))d\lambda ds=O(t^{1-\varepsilon}) \] for some $\varepsilon>0$. Thus we have proved the formula for the mean kinetic energy (\ref{ETform}). Equality (\ref{EUform}) immediately follows from (\ref{EHform}) and (\ref{ETform}) because of the relation $H=T+U$. This completes the proof of Theorem \ref{globalEnergyBehTh}. \subsection{Local energy asymptotics} Now we prove Theorem \ref{locEnergyTheorem}. Let us begin with the kinetic energy: \[ T_{n}(t)=\frac{p_{n}^{2}(t)}{2}. 
\] Remark that according to formulas (\ref{qz}), (\ref{pz}) and the Riemann\thinspace--\thinspace Lebesgue lemma we have the limit: \begin{equation} \lim_{t\rightarrow\infty}\psi_{0}(t)=0.\label{psizerolim} \end{equation} Hence we obtain: \begin{align*} \bold{E}T_{n}(t)&=\frac{1}{2}\bold{E}\left((p_{n}^{(0)}(t))^{2}+(p_{n}^{(1)}(t))^{2}+2p_{n}^{(0)}(t)p_{n}^{(1)}(t)\right)\\ &=\frac{1}{2}\left((p_{n}^{(0)}(t))^{2}+\bold{E}(p_{n}^{(1)}(t))^{2}\right) \\ &=\frac{1}{2}\bold{E}(p_{n}^{(1)}(t))^{2}+\bar{\bar{o}}(1),\quad \mbox{as}\ t\rightarrow\infty. \end{align*} The application of (\ref{po}) and the It\^o isometry yields the equality: \begin{equation} \bold{E}(p_{n}^{(1)}(t))^{2}=\int_{0}^{t}y_{n}^{2}(t-s)ds=\int_{0}^{t}y_{n}^{2}(s)ds.\label{202001091457} \end{equation} Since $\omega(\lambda)$ is an even function, for $y_{n}(t)$ we get: \[ y_{n}(t)=\frac{\sigma}{2\pi}\int_{0}^{2\pi}e^{-in\lambda}\cos(t\omega(\lambda))d\lambda=\frac{\sigma}{2\pi}\int_{0}^{2\pi}\cos(n\lambda)\cos(t\omega(\lambda))d\lambda. \] Lemma \ref{eftlemma} gives us: \begin{align*} y_{n}(t)&=\sigma\frac{1}{\sqrt{t}}\sum_{k=0}^{m+1}\theta_{k}\sqrt{\frac{1}{2\pi|\omega''(\lambda_{k})|}}\cos(n\lambda_{k}) \cos\Bigl(t\omega(\lambda_{k})+\frac{\pi}{4}s(\lambda_{k})\Bigr)+O\Bigl(\frac{1}{t}\Bigr) \\ &=\frac{1}{\sqrt{t}}\sum_{k=0}^{m+1}b_{k}u_{k}(t)+O\Bigl(\frac{1}{t}\Bigr), \end{align*} where we denote: \[ b_{k}=\sigma\theta_{k}\sqrt{\frac{1}{2\pi|\omega''(\lambda_{k})|}}\cos(n\lambda_{k}),\quad u_{k}(t)=\cos\Bigl(t\omega(\lambda_{k})+\frac{\pi}{4}s(\lambda_{k})\Bigr). \] For the square we get: \[ y_{n}^{2}(t)=\frac{1}{t}\sum_{k=0}^{m+1}b_{k}^{2}u_{k}^{2}(t)+\frac{1}{t}\sum_{k\ne j}b_{k}b_{j}u_{k}(t)u_{j}(t)+O\left(\frac{1}{t\sqrt{t}}\right). 
\] Substitute the last expression for $y_{n}^{2}(t)$ into (\ref{202001091457}): \begin{align} \bold{E}(p_{n}^{(1)}(t))^{2}&=\int_{0}^{1}y_{n}^{2}(s)ds+\int_{1}^{t}y_{n}^{2}(s)ds=O(1)+\int_{1}^{t}y_{n}^{2}(s)ds \nonumber \\ &=O(1)+\sum_{k=0}^{m+1}b_{k}^{2}\int_{1}^{t}\frac{u_{k}^{2}(s)}{s}ds+\sum_{k\ne j}b_{k}b_{j}\int_{1}^{t}\frac{u_{k}(s)u_{j}(s)}{s}ds.\label{202001091523} \end{align} Using the formula $\cos^{2}x=[1+\cos(2x)]/2$ we obtain: \begin{align*} \int_{1}^{t}\frac{u_{k}^{2}(s)}{s}ds&=\int_{1}^{t}\frac{\cos^{2}(s\omega(\lambda_{k})+\frac{\pi}{4}s(\lambda_{k}))}{s}ds\\ &=\frac{\ln t}{2}+\frac{1}{2}\int_{1}^{t}\frac{\cos(2s\omega(\lambda_{k})+\frac{\pi}{2}s(\lambda_{k}))}{s}ds =\frac{1}{2}\ln t+O(1). \end{align*} The last equality follows from the fact that the integrals \begin{equation} \int_{1}^{+\infty}\frac{\cos x}{x}dx,\quad\int_{1}^{+\infty}\frac{\sin x}{x}dx\label{cisiconv} \end{equation} converge (see \cite{GR}, p.\thinspace 656, 3.721) and that $\omega(\lambda_{k})>0$ for all $k=0,\ldots,m+1$ due to assumption A1). To estimate the remainder term in (\ref{202001091523}) we use the formula $\cos(a)\cos(b)=\frac{1}{2}(\cos(a+b)+\cos(a-b))$: \begin{align*} \int_{1}^{t}\frac{u_{k}(s)u_{j}(s)}{s}ds&=\frac{1}{2}\int_{1}^{t}\frac{\cos(s(\omega(\lambda_{k})+\omega(\lambda_{j}))+\frac{\pi}{4}(s(\lambda_{k})+s(\lambda_{j})))}{s}ds \\ &\quad {} +\frac{1}{2}\int_{1}^{t}\frac{\cos(s(\omega(\lambda_{k}) \! - \! \omega(\lambda_{j}))+\frac{\pi}{4}(s(\lambda_{k}) \! - \! s(\lambda_{j})))}{s}ds=O(1). \end{align*} The last equality is derived from (\ref{cisiconv}) using assumption A3). Substituting this into (\ref{202001091523}) we obtain: \[ \bold{E}(p_{n}^{(1)}(t))^{2}=O(1)+\frac{\ln(t)}{2}\sum_{k=0}^{m+1}b_{k}^{2}. \] Thus equality (\ref{kineticEnergyAsym}) has been proved. Now we prove equality (\ref{potentialEnergyAsym}) of theorem \ref{locEnergyTheorem}. The idea of the proof is the same as for the kinetic energy. 
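Before turning to the potential energy, the key logarithmic growth $\int_{1}^{t}u_{k}^{2}(s)s^{-1}ds=\frac{1}{2}\ln t+O(1)$ derived above can be checked numerically. A minimal quadrature sketch (the frequency and phase below are arbitrary example values, not tied to a particular chain):

```python
import numpy as np

# u(s) = cos(omega*s + phi); any omega > 0 gives the same boundedness.
omega, phi = 1.3, np.pi / 4.0

def log_remainder(t, n=200_000):
    # trapezoidal quadrature of int_1^t cos^2(omega*s + phi)/s ds minus (ln t)/2
    s = np.linspace(1.0, t, n)
    f = np.cos(omega * s + phi) ** 2 / s
    integral = np.sum(np.diff(s) * (f[1:] + f[:-1]) / 2.0)
    return integral - 0.5 * np.log(t)

remainders = [log_remainder(t) for t in (10.0, 100.0, 1000.0, 10000.0)]
print(remainders)   # stays O(1) while (ln t)/2 grows without bound
```

The remainder is exactly the oscillatory integral $\frac{1}{2}\int_{1}^{t}\cos(2\omega s+2\phi)s^{-1}ds$, which converges as $t\to\infty$.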
From (\ref{psizerolim}) we obtain: \begin{equation} \bold{E}U_{n}(t)=\frac{1}{2}\sum_{j}a(n-j)\bold{E}(q_{n}^{(1)}(t)q_{j}^{(1)}(t))+\bar{\bar{o}}(1),\ \mbox{as}\ t\rightarrow\infty.\label{eunteq} \end{equation} Equality (\ref{qo}) and the It\^o isometry give us: \begin{align} \bold{E}(q_{n}^{(1)}(t)q_{j}^{(1)}(t))&=\int_{0}^{t}x_{n}(t-s)x_{j}(t-s)ds\nonumber \\ &=\int_{0}^{t}x_{n}(s)x_{j}(s)ds=\int_{1}^{t}x_{n}(s)x_{j}(s)ds+O(1).\label{eqqeq} \end{align} Since $\frac{\sin(t\omega(\lambda))}{\omega(\lambda)}$ is an even function and due to lemma \ref{eftlemma}, we have for all $n\in\mathbb{Z}$: \begin{align*} x_{n}(t)&=\frac{\sigma}{2\pi}\int_{0}^{2\pi}\cos(n\lambda)\frac{\sin(t\omega(\lambda))}{\omega(\lambda)}d\lambda \\ &=\sigma\frac{1}{\sqrt{t}}\sum_{k=0}^{m+1}\theta_{k}\sqrt{\frac{1}{2\pi|\omega''(\lambda_{k})|}}\frac{\cos(n\lambda_{k})}{\omega(\lambda_{k})}\sin(t\omega(\lambda_{k})+\frac{\pi}{4}s(\lambda_{k}))+O\left(\frac{1}{t}\right) \\ &=\frac{1}{\sqrt{t}}\sum_{k=0}^{m+1}e_{k}^{(n)}v_{k}(t)+O\left(\frac{1}{t}\right), \end{align*} where we denote: \[ e_{k}^{(n)}=\sigma\theta_{k}\sqrt{\frac{1}{2\pi|\omega''(\lambda_{k})|}}\frac{\cos(n\lambda_{k})}{\omega(\lambda_{k})},\quad v_{k}(t)=\sin(t\omega(\lambda_{k})+\frac{\pi}{4}s(\lambda_{k})). \] Substitute the last expression into (\ref{eqqeq}): \begin{align*} \bold{E}(q_{n}^{(1)}(t)q_{j}^{(1)}(t))&=\sum_{k=0}^{m+1}e_{k}^{(n)}e_{k}^{(j)}\int_{1}^{t}\frac{v_{k}^{2}(s)}{s}ds\\ &\quad {} +\sum_{k_{1}\ne k_{2}}e_{k_{1}}^{(n)}e_{k_{2}}^{(j)}\int_{1}^{t}\frac{v_{k_{1}}(s)v_{k_{2}}(s)}{s}ds+O(1). \end{align*} The same arguments as in the case of the kinetic energy give us the equalities: \[ \int_{1}^{t}\frac{v_{k}^{2}(s)}{s}ds=\frac{1}{2}\ln t+O(1),\quad\int_{1}^{t}\frac{v_{k_{1}}(s)v_{k_{2}}(s)}{s}ds=O(1) \] if $k_1 \ne k_2$. Therefore we have \[ \bold{E}(q_{n}^{(1)}(t)q_{j}^{(1)}(t))=\frac{1}{2}\ln t\sum_{k=0}^{m+1}e_{k}^{(n)}e_{k}^{(j)}+O(1). \] Put this expression into formula (\ref{eunteq}). 
Then we obtain: \[ \bold{E}U_{n}(t)=D_{n}\ln t+O(1),\quad D_{n}=\frac{1}{4}\sum_{j}a(n-j)\sum_{k=0}^{m+1}e_{k}^{(n)}e_{k}^{(j)}. \] Now we prove that $D_{n}=d_{n}$, where $d_{n}$ is defined in (\ref{dneqdef}). First we change the order of summation: \[ D_{n}=\frac{1}{4}\sum_{k=0}^{m+1}e_{k}^{(n)}\sum_{j}a(n-j)e_{k}^{(j)}. \] For the internal sum we get: \[ \sum_{j}a(n-j)e_{k}^{(j)}=\sigma\theta_{k}\sqrt{\frac{1}{2\pi|\omega''(\lambda_{k})|}}\frac{1}{\omega(\lambda_{k})}\sum_{j}a(n-j)\cos(j\lambda_{k}). \] Simple algebra shows: \[ \sum_{j}a(n-j)\cos(j\lambda_{k})=\sum_{j}a(n-j)\frac{e^{ij\lambda_{k}}+e^{-ij\lambda_{k}}}{2}=\cos(n\lambda_{k})\omega^{2}(\lambda_{k}). \] Whence we have: \[ \sum_{j}a(n-j)e_{k}^{(j)}=\omega^{2}(\lambda_{k})e_{k}^{(n)}. \] Thus we obtain: \[ D_{n}=\frac{1}{4}\sum_{k=0}^{m+1}\omega^{2}(\lambda_{k})(e_{k}^{(n)})^{2}=d_{n}. \] This completes the proof of Theorem \ref{locEnergyTheorem}. \begin{lemma} \label{202001122342} There are positive constants $b,\varepsilon$ such that for all sufficiently large $t$ the following inequality holds: \begin{equation} \left|\int_{0}^{2\pi}e^{it\omega(\lambda)}d\lambda\right|\leqslant bt^{-\varepsilon}.\label{202001122115} \end{equation} \end{lemma} \begin{proof} Recall that \[ \omega^{2}(\lambda)=a(0)+2\sum_{n=1}^{r}a(n)\cos(n\lambda). \] If $\omega(\lambda)$ is strictly greater than zero (i.e.\ assumption A1 holds) then the lemma immediately follows from the stationary phase method. Indeed, in that case $\omega(\lambda)$ is an analytic function, and by the stationary phase method the asymptotics of the integral in (\ref{202001122115}) are determined by the stationary points of $\omega(\lambda)$ (see \cite{Erdelyi,Fedoruk}). Since $\omega(\lambda)$ is analytic, it has a finite number of critical points on $[0,2\pi]$, each of finite multiplicity. 
Hence the inequality (\ref{202001122115}) follows from the corresponding asymptotic formulas of the stationary phase method.\par Now suppose that $\omega(\lambda)$ has zeros on $[0,2\pi]$. Denote $f(\lambda)=\omega^{2}(\lambda)$. Since $f$ is analytic, $\omega(\lambda)$ has a finite number of zeros on $[0,2\pi]$. Consider some zero $z\in[0,2\pi]$ of $\omega(\lambda)$ and study the integral (\ref{202001122115}) over a small neighborhood of $z$. From the analyticity of $f$ it follows that there is a number $n\geqslant1$ such that: \[ f(z)=0,\ f'(z)=0,\ldots,\ f^{(n-1)}(z)=0,\ f^{(n)}(z)\ne0. \] Since $f$ is non-negative, $n=2m$ is an even number. It is a well-known fact (see \cite{Erdelyi,Fedoruk}) that in this case there is a $C^{\infty}$-smooth one-to-one function $\varphi(y)$ mapping some neighborhood of zero, say $[-\delta,\delta]$, onto a small neighborhood of $z$, which we denote by $[z-\delta',z+\delta']$, such that \[ f(\varphi(y))=y^{n},\quad \varphi(0)=z. \] Therefore for the integral we have \begin{align*} \int_{z-\delta'}^{z+\delta'}e^{it\omega(\lambda)}d\lambda&=\int_{-\delta}^{\delta}\exp(it|y|^{n/2})\varphi'(y)dy\\ &=\int_{0}^{\delta}\exp(ity^{m})(\varphi'(y)+\varphi'(-y))dy=O\bigl(t^{-1/m}\bigr). \end{align*} Thus we have proved that for each zero $z$ of $f$ there is a neighborhood of $z$ such that the integral over this neighborhood satisfies inequality (\ref{202001122115}). The same statement is evidently true for the critical points of $f$. Hence, splitting the integral (\ref{202001122115}) into the integrals over such neighborhoods and the remaining part, which contains no zeros or critical points, we complete the proof of (\ref{202001122115}). \end{proof} \begin{lemma} \label{eftlemma} Consider the integral \[ E_{f}(t)=\int_{0}^{2\pi}g(\lambda)e^{it\omega(\lambda)}d\lambda,\ t\geqslant0 \] for some $2\pi$-periodic real-valued $C^{\infty}(\mathbb{R})$-smooth even function $g$. 
Then under the assumptions A1) and A2) the following formula holds: \begin{equation} E_{f}(t)=\frac{1}{\sqrt{t}}\sum_{k=0}^{m+1}\theta_{k}\sqrt{\frac{2\pi}{|\omega''(\lambda_{k})|}}g(\lambda_{k})e^{it\omega(\lambda_{k})+\frac{i\pi}{4}s(\lambda_{k})}+O\left(\frac{1}{t}\right),\label{efasym} \end{equation} where \[ \theta_{k}=\begin{cases} 2, & k=1,\ldots,m,\\ 1, & k\in\{0,m+1\}, \end{cases} \quad s(\lambda)=\mathrm{sgn}(\omega''(\lambda)), \] and $\lambda_{0},\ldots,\lambda_{m+1}$ are the critical points of the function $\omega(\lambda)$ introduced in assumption A2. \end{lemma} \begin{proof} We will use the stationary phase method (see \cite{Erdelyi,Fedoruk}). Note that $\omega(\lambda)=\omega(2\pi-\lambda)$ for all $\lambda$. Hence the only critical points of $\omega(\lambda)$ on the interval $[0,2\pi)$ are $\lambda_{0},\ldots,\lambda_{m+1}$ and $\mu_{1},\ldots,\mu_{m}$, where $\mu_{j}=2\pi-\lambda_{j},\ j=1,\ldots,m$. Recall that $\lambda_0=0$, so we shift the interval of integration away from this boundary stationary point. Since the functions $g$ and $\omega$ are $2\pi$-periodic, we can write \[ E_{f}(t)=\int_{-\delta}^{2\pi-\delta}g(\lambda)e^{it\omega(\lambda)}d\lambda, \] where we choose a small number $\delta$ in such a way that all the critical points $\lambda_{0},\ldots,\lambda_{m+1}$ and $\mu_{1},\ldots,\mu_{m}$ lie strictly inside the interval $(-\delta,2\pi-\delta)$. By the stationary phase method we have the asymptotic formula: \begin{align*} E_{f}(t)&\sim\frac{1}{\sqrt{t}}\sum_{k=0}^{m+1}\sqrt{\frac{2\pi}{|\omega''(\lambda_{k})|}}g(\lambda_{k}) \exp\Bigl\{it\omega(\lambda_{k})+\frac{i\pi}{4}s(\lambda_{k})\Bigr\} \\ &\quad {} +\frac{1}{\sqrt{t}}\sum_{k=1}^{m}\sqrt{\frac{2\pi}{|\omega''(\mu_{k})|}}g(\mu_{k}) \exp\Bigl\{it\omega(\mu_{k})+\frac{i\pi}{4}s(\mu_{k})\Bigr\}. \end{align*} Since $\omega(\mu_{k})=\omega(\lambda_{k}),\ \omega''(\mu_{k})=\omega''(\lambda_{k}),\ g(\mu_{k})=g(\lambda_{k})$, we obtain the leading term in (\ref{efasym}). 
The term $O(t^{-1})$ comes from the contribution of the boundary points. \end{proof}
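The decay $\bigl|\int_{0}^{2\pi}e^{it\omega(\lambda)}d\lambda\bigr|=O(t^{-1/2})$ given by the stationary phase asymptotics for non-degenerate critical points can be illustrated numerically. The sketch below uses the hypothetical dispersion law $\omega^{2}(\lambda)=3-2\cos\lambda$ (an illustrative assumption) and plain trapezoidal quadrature on a fine grid:

```python
import numpy as np

# Hypothetical dispersion law with only non-degenerate critical points (at
# lam = 0 and lam = pi): omega^2(lam) = 3 - 2*cos(lam).
lam = np.linspace(0.0, 2.0 * np.pi, 400_001)
om = np.sqrt(3.0 - 2.0 * np.cos(lam))

def E(t):
    # trapezoidal quadrature of int_0^{2pi} exp(i*t*omega(lam)) d(lam)
    f = np.exp(1j * t * om)
    return np.sum(np.diff(lam) * (f[1:] + f[:-1]) / 2.0)

ts = np.array([50.0, 200.0, 800.0, 3200.0])
scaled = np.array([np.sqrt(t) * abs(E(t)) for t in ts])
print(scaled)   # sqrt(t)*|E(t)| stays bounded, i.e. |E(t)| = O(t^{-1/2})
```

The rescaled values oscillate between the difference and the sum of the two stationary-point amplitudes $\sqrt{2\pi/|\omega''(0)|}$ and $\sqrt{2\pi/|\omega''(\pi)|}$, as the asymptotic formula predicts.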
\section{Introduction} Measurements of the $B$-mode (curl component) polarization of the CMB provide a unique opportunity to detect the imprint of the primordial gravitational waves predicted by the inflationary paradigm. The amplitude of these tensor perturbations measures the energy scale of inflation and its potential. Such measurements therefore can be used to place powerful constraints on a broad class of inflationary models. Moreover, the confirmation of inflation and the determination of the inflationary potential would provide a direct observational link with the physics of the early universe. The recent detection of a non-zero $B$-mode power spectrum at multipoles around $\ell \sim 100$ by BICEP2 \citep{2014arXiv1403.3985B} has profound implications for current and future experiments aimed at measuring CMB polarization. If the signal detected by BICEP2 is confirmed as primordial, the implied relatively large value of the tensor-to-scalar ratio ($r \sim 0.2$) forces a re-evaluation of the requirements for both the instrument (sensitivity, systematics) and the data analysis (in particular foreground removal). Given that foreground emission and instrumental systematics can generate $B$-modes of significant power over broad multipole ranges, it is possible that the origin of at least part of the signal measured at degree scales by BICEP2 is not cosmological. The most convincing confirmation would be a measurement of the reionization bump at multipoles $\ell \sim 2$--10, and in general of the $B$-mode signal over a wider range of angular scales. In fact, measuring the power spectrum in two different multipole regimes would probe its shape, which is expected to be very different from that due to foregrounds and/or instrumental systematics. Going to larger scales, and thus to the reionization bump, is probably the easier option, because the $B$-mode signal due to lensing dominates over the primordial one at $\ell > 100$. 
Probing the $B$-mode power spectrum over a wide multipole range would also improve considerably the measurement of the tensor-to-scalar ratio, $r$, and of the optical depth to reionization, $\tau$. Measuring the $B$-mode power spectrum at the largest scales requires a large sky coverage, ideally a full-sky satellite experiment. Foreground contamination must be dealt with while at the same time retaining as much sky area as possible, thus foreground avoidance (analyzing a small sky area where foreground emission is particularly low, as done by BICEP2) is not a viable option. The strategy in this case is to map the total polarization signal at several frequencies and to exploit the different dependencies on frequency of the emission components to separate them. This data analysis step is called component separation. Several component separation approaches have been developed for CMB $B$-mode detection at large and intermediate scales \citep{2009A&A...503..691B,2009MNRAS.397.1355E,2010MNRAS.406.1644R,2010MNRAS.408.2319S,2011ApJ...737...78K,2011MNRAS.418.1498A,2013MNRAS.435...18B}. In most cases, the process can be thought of as forming a suitable combination of the data at different frequencies. Such a combination is designed to minimize the foreground contribution while also reducing the instrumental noise. However, the final noise level will always be higher than what could be achieved in the absence of foregrounds, for example by averaging the data at different frequencies with inverse noise variance weights \citep{2011MNRAS.414..615B}. Thus, the optimization of an experiment targeting $B$-mode measurements on large scales needs to consider, together with the signal-to-noise ratio, also the issues related to foreground contamination and component separation \citep{2011MNRAS.414..615B,2011JCAP...08..001F,2012MNRAS.424.1914A,2012PhRvD..85h3006E}. 
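As a toy numerical illustration of the foreground-free benchmark mentioned above, the inverse-noise-variance average of several channel maps attains a combined noise RMS of $(\sum_i \sigma_i^{-2})^{-1/2}$, which component separation can only degrade. All numbers below (noise levels, signal amplitude, pixel count) are hypothetical, and we work in units where the CMB is achromatic:

```python
import numpy as np

# Toy sky: a common CMB signal observed in five channels with different noise
# levels. All numbers are hypothetical, chosen purely for illustration.
rng = np.random.default_rng(0)
n_pix = 10_000
sigma = np.array([30.0, 20.0, 10.0, 4.0, 3.0])   # per-channel noise RMS
cmb = rng.normal(0.0, 5.0, n_pix)                # common signal in all channels
maps = cmb[None, :] + rng.normal(0.0, 1.0, (sigma.size, n_pix)) * sigma[:, None]

# Inverse-noise-variance weights, normalised to preserve the CMB amplitude.
w = sigma**-2.0
w /= w.sum()
combined = w @ maps

# Residual noise RMS of the weighted average: (sum_i sigma_i^-2)^(-1/2),
# the foreground-free floor no component separation method can beat.
expected_rms = np.sum(sigma**-2.0) ** -0.5
print(expected_rms, np.std(combined - cmb))
```

The measured residual RMS matches the analytic floor and is lower than the best single channel.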
In this work we use the forecasting tool developed in \cite{2011MNRAS.414..615B} to assess the component separation requirements to measure the large-scale $B$-mode signal for values of $r$ between 0.1 and 0.2, consistent with the detection by BICEP2. This tool estimates the uncertainties (noise, foreground residuals and cosmic variance) on the $B$-mode power spectrum taking into account the instrumental specifications (sensitivity, number and frequency of the channels, and sky coverage) and a component separation step (the CMB is obtained with a suitable linear combination of the frequency maps). We perform a Fisher matrix analysis to propagate the predicted uncertainties from the power spectrum to the cosmological parameters, in particular $r$ and $\tau$. We apply our method to the specifications of the {\it Planck} satellite \citep{2013arXiv1303.5062P} and the Cosmic Origins Explorer \citep[COrE,][]{2011arXiv1102.2181T}. We also consider a balloon-borne experiment targeting the large-scale $B$-mode signal, for which we take the specifications of the Large-Scale Polarization Explorer \citep[LSPE,][]{2012arXiv1208.0281T}, as an example case. \section{Statement of the problem} \label{sec:due} \subsection{Data model} In addition to the CMB, the microwave sky contains several foreground components, both diffuse and compact. For our analysis, which is focused on large and intermediate scales, we consider only diffuse foregrounds. The main diffuse polarized foregrounds are Galactic synchrotron and thermal dust (the free-free emission is unpolarized and the anomalous dust emission is also expected to be essentially unpolarized). The synchrotron component dominates at the lowest frequencies. 
Its spectral behavior in antenna temperature can be modeled as a power law: \begin{equation}\label{scaling_synchro} T_{\rm A,synch}(\nu)\propto \nu^{-\beta_{\rm s}} \label{syn}\ , \end{equation} where the synchrotron spectral index $\beta_{\rm s}$ can vary in the sky in the range $2.5<\beta_{\rm s}<3.5$. The spectral behaviour of thermal dust emission, which dominates at high frequencies, follows approximately a grey-body law: \begin{equation}\label{scaling_dust} T_{\rm A,dust} (\nu)\propto \frac{\nu^{\beta_{\rm d}+1}}{\exp (h\nu/kT_{\rm d})-1}. \label{dust} \end{equation} Both $\beta_{\rm d}$ and $T_{\rm d}$ are spatially varying around $\beta_{\rm d} \sim 1.7$ and $T_{\rm d} \sim 18\,$K. The polarized CMB signal has a blackbody spectrum: \begin{equation}\label{scaling_CMB} T_{\rm A,CMB}(\nu)\propto\frac{(h\nu/kT_{\rm CMB})^2\exp (h\nu/kT_{\rm CMB})} {(\exp (h\nu/kT_{\rm CMB})-1)^2} \ \end{equation} with $T_{\rm CMB}\simeq 2.73\,$K. For component separation purposes, it is convenient to model the data as a linear mixture of the components. For each direction in the sky we write \begin{equation}\label{datamodel} \bmath{x}=\bmath{\sf{H}}\bmath{s}+\bmath{n}. \end{equation} The vectors $\bmath{x}$ and $\bmath{n}$ have dimension equal to the number of detectors, $N_{\rm d}$, and contain the data and instrumental noise, respectively; $\bmath{s}$ is a vector containing the sources (CMB and foregrounds) and has dimension equal to the number of components, $N_{\rm c}$; $\bmath{\sf{H}}$ is the $N_{\rm d} \times N_{\rm c}$ mixing matrix, containing the frequency scaling of the components. The spatial variability of the synchrotron and dust spectral indices implies that the mixing matrix $\bmath{\sf{H}}$ is in general different for different sky pixels. In order to be able to write the data model as in Eq.~(\ref{datamodel}) we have made some simplifying assumptions. The most important of them is that the instrumental resolution does not depend on frequency. 
This is in general not true, and requires a pre-processing step in which the resolution is equalised by suitably smoothing the data. In our case, because we focus on large scales, such a loss of resolution is not particularly problematic. If the linear mixture data model holds, the components can be reconstructed as \begin{equation}\label{recon} \bmath{\hat s}=\bmath{\sf{W}}\bmath{x}, \end{equation} where $\bmath{\hat s}$ is an estimate of the components $\bmath{s}$ and $\bmath{\sf{W}}$ is a $N_{\rm c} \times N_{\rm d}$ matrix called the reconstruction matrix. We choose to rely on a linear estimator because it will allow us to easily include the component separation process in the forecasting of $B$-mode power spectrum constraints, as we will see in the next section. In addition this approach is well suited for use with Monte Carlo simulations, needed to accurately control error sources. We adopt the so-called Generalized Least Square solution (GLS): \begin{equation} \label{gls} \bmath{\sf{W}}=[\bmath{\sf{\hat H}}^T \bmath{\sf{N}}^{-1}\bmath{\sf{\hat H}}]^{-1}\bmath{\sf{\hat H}}^T\bmath{\sf{N}}^{-1}. \end{equation} This requires the noise covariance $\bmath{\sf{N}}$ of the channel maps and an estimate $\bmath{\sf{\hat H}}$ of the mixing matrix $\bmath{\sf{H}}$ that can be obtained by exploiting any of the dedicated component separation methods discussed in the literature \citep[see, e.g.,][]{2006MNRAS.373..271B,2006ApJ...641..665E,2009MNRAS.392..216S,2010MNRAS.406.1644R,2011MNRAS.418.1498A}. We stress that this choice is not completely general, as other reconstruction matrices could be used. However, the main dependence is not on the form of the reconstruction matrix, but on the mismatch between the true and the estimated mixing matrices. The fact that our reconstruction matrix explicitly contains the estimated mixing matrix also allows us to easily include in our forecasts the effect of errors in the mixing matrix estimation. 
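A minimal numerical sketch of the GLS reconstruction above, with a hypothetical three-channel, two-component (CMB plus a synchrotron-like power law) mixing matrix; the frequencies, spectral index and noise levels are illustrative assumptions, not instrument values:

```python
import numpy as np

# Hypothetical setup in units where the CMB column of the mixing matrix is
# constant: three channels, two components (CMB + synchrotron-like power law).
nu = np.array([44.0, 70.0, 100.0])                 # GHz, example channels
H = np.column_stack([np.ones_like(nu),             # CMB scaling
                     (nu / nu[0]) ** -3.0])        # power law, beta_s = 3

sigma = np.array([2.0, 1.5, 1.0])                  # channel noise RMS
Ninv = np.diag(sigma**-2.0)                        # inverse noise covariance

# GLS reconstruction matrix: W = (H^T N^-1 H)^-1 H^T N^-1
W = np.linalg.inv(H.T @ Ninv @ H) @ H.T @ Ninv

# With a perfect mixing-matrix estimate, W is a left inverse of H, so the
# estimator is unbiased and the foreground is exactly nulled in the CMB row.
print(W @ H)

rng = np.random.default_rng(1)
s = np.array([1.0, 0.5])                           # true component amplitudes
x = H @ s + rng.normal(0.0, sigma)                 # one simulated pixel
print(W @ x)                                       # recovers s up to noise
```

When the estimated mixing matrix differs from the true one, replacing `H` by a perturbed copy inside `W` directly exposes the foreground residual discussed in the text.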
\begin{table*} \centering \begin{minipage}{140mm} \caption{Instrumental characteristics considered in the present study for the {\it Planck}, COrE and LSPE experiments. The RMS per pixel represents the polarization sensitivity and is quoted for Healpix resolution $N_{\rm side}=512$ pixels (pixel size $\sim 7\,$arcmin). The integration time is 4 years for COrE, 51 months for {\it Planck} LFI, 29 months for {\it Planck} HFI and two weeks for LSPE.} \label{tab:instr} \begin{tabular}{llllllllllllllllll} \hline {{\it Planck} LFI \footnote{\cite{2013arXiv1303.5063P}} and HFI \footnote{\cite{2013arXiv1303.5067P}} specifications}\\ \hline $\nu\,$(GHz)&44&70&100&143&&&217\\ FWHM (arcmin)&24&14&9.5&7.1&&&5.0\\ RMS $\Delta T$($\mu$K RJ)& 34 &38 & 11 &4 &&&2.9\\ \hline COrE specifications \footnote{\cite{2011arXiv1102.2181T}} \\ \hline $\nu\,$(GHz)&45&75&105&135&165&195&225\\ FWHM (arcmin)&23&14&10&7.8&6.4&5.4&4.7\\ RMS $\Delta T$($\mu$K RJ)&1.2&0.6&0.5&0.4&0.3&0.3&0.2\\ \hline LSPE specifications \footnote{\cite{2012arXiv1208.0281T}} \\ \hline $\nu\,$(GHz)&43&90&95&145&&&225\\ FWHM (arcmin)&60&30&110&89&&&74\\ RMS $\Delta T$($\mu$K RJ)&23&87&1.6&1.7&&&1.8\\ \hline \end{tabular} \end{minipage} \end{table*} \section{Forecast method} \subsection{Errors on the $B$-mode power spectrum}\label{sec:errbar} In this section we briefly summarise the forecasting method presented in \cite{2011MNRAS.414..615B}. We refer the reader to this paper for further details and a complete derivation. In addition to considering the sky coverage of each instrument, we adopt for the analysis a Galactic mask to exclude those regions which are most contaminated by foreground emission. After masking we are left with a sky fraction $f_{\rm sky}$. We assume the analysis will recover the CMB component from the multi-frequency data through the GLS linear mixture estimator [eqns.~(\ref{recon}) and (\ref{gls})].
We use uniform weights across the considered sky area, which allows us to work directly at the power spectrum level. This is not a realistic assumption, because the spectral properties of the foreground components are known to vary with position on the sky. However, we can still simulate the correct level of foreground contamination provided that the constant weights that we use are a good representation of the true ones over most of the sky. The other crucial parameter is the difference between the true mixing matrix $\bmath{\sf H}$ and the estimated mixing matrix $\bmath{\hat {\sf H}}$. This difference needs to represent the estimation error that we would have in an analysis of real data, i.e. when the spectral properties of the foregrounds are spatially varying. It can also be increased to incorporate other effects, such as an incorrect modelling of the foreground properties (e.g. steepening of the synchrotron spectral index) or the presence of additional polarised foregrounds. In our case we parametrise the mixing matrix error in terms of uncertainties on the synchrotron and dust spectral indices, $\Delta \beta_{\rm s}$ and $\Delta \beta_{\rm d}$. As detailed in Sect.~\ref{sec:deltabeta}, we consider two quite different error regimes, which reflect our uncertainty on the current models and on the progress we expect to make in the future. We estimate the power spectrum in multipole bands $\hat \ell$, according to some binning scheme. In the following we adopt the notation $\bmath{C}^{\rm XX}_{\hat \ell}$ where XX can be either the $EE$ or $BB$ CMB polarization spectrum. 
We model the error $\Delta \bmath{C}^{\rm XX}_{\hat \ell}$ on the power spectrum $\bmath{C}^{\rm XX}_{\hat \ell}$ as the sum of three contributions: noise, residual foreground contamination, and cosmic variance: \begin{equation} \label{total} \Delta \bmath{C}^{\rm XX}_{\hat \ell}=\Delta \bmath{C}^{\rm XX}_{\hat \ell,{\rm noise}}+\Delta \bmath{C}^{\rm XX}_{\hat \ell,{\rm foreg}} + \Delta \bmath{C}^{\rm XX}_{\hat \ell,{\rm CV}}. \end{equation} By adding the three error components in this way we implicitly assume that the errors are Gaussian and uncorrelated. This might not be true, especially at the lowest multipoles, where the error distribution for a given spectral bin can be non-Gaussian and highly asymmetric. Moreover, we do not consider any correlation between the errors on different bins or between the $EE$ and $BB$ bandpowers. Finally, we note that we are not considering here any contribution due to instrumental systematics. Although systematics can be very important for $B$-mode detection, they are typically instrument-specific, and cannot be predicted easily without modelling the instrument in detail. For all the reasons listed above, our forecasted errors should be considered as an approximate, and possibly optimistic, assessment of the true uncertainties. Concerning the noise error, we do not know the actual noise realization, but we can estimate its statistical properties through a Monte Carlo analysis. The noise component of the error on the CMB power spectrum, $\Delta \bmath{C}^{\rm XX}_{\hat \ell,{\rm noise}}$, is due to the sampling variance of the noise bias, \begin{equation} \label{deltanoise} \Delta \bmath{C}^{\rm XX}_{\hat \ell,{\rm noise}}=\sqrt{\frac{2/(2\hat \ell+1)}{f_{\rm sky} \, \bmath{\rm nbin}(\hat \ell)}}\,\bmath{N}_{\rm CMB}, \end{equation} where $\bmath{N}_{\rm CMB}$ is the noise bias on the CMB power spectrum and $\bmath{{\rm nbin}}(\hat \ell)$ contains the number of multipoles within each of the spectral bins $\hat \ell$.
If the noise is white and Gaussian, for each frequency $\nu$ we have \begin{equation}\label{clnoisenu} \bmath{N}_{\nu}=\frac{4 \pi f_{\rm sky}}{N_{\rm pix}} \sigma^2_{\nu}\bmath{B}_{\nu}^2 \end{equation} where $N_{\rm pix}$ is the number of pixels in the considered sky fraction $f_{\rm sky}$, $\sigma^2_{\nu}$ is the noise variance per pixel at frequency $\nu$, and $\bmath{B}_{\nu}^2(\ell)$ is the beam function applied to each channel map to obtain a common resolution. Given the linearity of the CMB recovery process [eq.~(\ref{recon})] and the assumption of a spatially-invariant reconstruction matrix, the noise bias is obtained by combining the channel noise spectra $\bmath{N}_{\nu}$ with the matrix $\bmath{\sf{W}}^2$: \begin{equation}\label{clnoise} \bmath{N}_{\rm CMB}=\sum_\nu w^2_{\nu,{\rm CMB}}\, \bmath{N}_{\nu}, \end{equation} where $w^2_{\nu,{\rm CMB}}$ are the elements of the matrix $\bmath{\sf{W}}^2$ pertaining to the CMB component. From eqns. (\ref{deltanoise}) and (\ref{clnoise}) we see that the error due to noise depends on the reconstruction matrix $\bmath{\sf{W}}$ and, therefore, on the estimated mixing matrix $\bmath{\sf{\hat H}}$. It is possible that the cleanest channels in terms of instrumental noise are not the best in terms of foreground contamination. Thus, the optimal reconstruction matrix does not necessarily minimise noise. In particular, depending on the relative sensitivity of the frequency channels, it is possible that a very accurate mixing matrix corresponds to a noise level that is higher than that achieved with a less accurate mixing matrix.
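The propagation of the channel noise to the recovered CMB spectrum can be sketched as follows, combining the white-noise spectrum above with the CMB row of $\bmath{\sf{W}}$. Array shapes and function names are illustrative assumptions for the sketch.

```python
import numpy as np

def channel_noise(sigma_pix, f_sky, n_pix, beam_ell):
    """Per-channel white-noise spectrum: N_nu(l) = (4 pi f_sky / N_pix) sigma^2 B^2(l)."""
    return 4.0 * np.pi * f_sky / n_pix * sigma_pix ** 2 * np.asarray(beam_ell) ** 2

def cmb_noise_bias(w_cmb, N_nu):
    """Noise bias on the recovered CMB: N_CMB(l) = sum_nu w_nu^2 N_nu(l).
    w_cmb: CMB row of the reconstruction matrix W (one weight per channel);
    N_nu: per-channel noise spectra, shape (n_channels, n_ell)."""
    w2 = np.asarray(w_cmb)[:, None] ** 2
    return (w2 * np.asarray(N_nu)).sum(axis=0)

def noise_error(ell, nbin, f_sky, N_cmb):
    """Sampling variance of the noise bias, eq. (deltanoise)."""
    return np.sqrt(2.0 / (2.0 * np.asarray(ell) + 1.0)
                   / (f_sky * np.asarray(nbin))) * N_cmb
```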
The map of residuals, $\bmath{s}-\bmath{\hat s}$, for a linear mixture source reconstruction can be estimated as: \begin{equation} \label{deltacomp} \bmath{s}-\bmath{\hat s}=(\bmath{\sf{W}}\bmath{\sf{H}}-\bmath{\sf{I}})\,\bmath{\tilde s}, \end{equation} where $\bmath{\sf{I}}$ is the identity matrix and $\bmath{\tilde s}$ is a set of simulated components \citep{2010MNRAS.408.2319S,2011MNRAS.414..615B,2012PhRvD..85h3006E}. The error due to the imperfect foreground subtraction, $\Delta C^{\rm XX}_{\hat \ell,{\rm foreg}}$, is the binned power spectrum of the residuals computed outside the adopted Galactic mask. This error essentially depends on the mismatch between the true mixing matrix $\bmath{\sf{H}}$ and the estimated mixing matrix $\bmath{\sf{\hat H}}$, which is used to compute the reconstruction matrix $\bmath{\sf{W}}$. It is clear that $\Delta \bmath{C}^{\rm XX}_{\hat \ell,{\rm foreg}}$ is model-dependent, since it relies on simulations of the data $\bmath{\tilde s}$, which are hampered by our poor knowledge of polarized foregrounds. The situation will substantially improve in the very near future as new polarization data, above all those from the {\it Planck} mission, will become available. Finally, the cosmic variance term is given by \begin{equation} \label{deltacv} \Delta \bmath{C}^{\rm XX}_{\hat \ell,{\rm CV}} =\sqrt{\frac{2/(2\hat \ell+1)}{f_{\rm sky} \, \bmath{\rm nbin}(\hat \ell)}}\,\bmath{C}^{\rm XX}_{\hat \ell}, \end{equation} and represents the error due to the fact we only measure one particular CMB realization. It depends on the area of the sky considered. We note that this formula is a good approximation only if the fraction of the sky is large. Our forecasted errors for the LSPE experiment, which we take to be representative of a large-scale balloon-borne experiment (see Sect.~\ref{sec:inst_spec}), are therefore likely to be underestimates. 
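The remaining two error terms can be sketched in the same spirit: eq.~(\ref{deltacomp}) at the map level, and the cosmic-variance formula of eq.~(\ref{deltacv}). In this sketch the simulated component maps are assumed to be stored as rows of an array.

```python
import numpy as np

def foreground_residuals(W, H_true, s_sim):
    """Map-level residuals of eq. (deltacomp): s - s_hat = (W H - I) s_tilde.
    s_sim: simulated component maps, one row per component."""
    n_c = W.shape[0]
    return (W @ H_true - np.eye(n_c)) @ s_sim

def cosmic_variance_error(ell, nbin, f_sky, C_ell):
    """Cosmic-variance error on a binned spectrum, eq. (deltacv)."""
    return np.sqrt(2.0 / (2.0 * np.asarray(ell) + 1.0)
                   / (f_sky * np.asarray(nbin))) * np.asarray(C_ell)
```

If the reconstruction matrix satisfies $\bmath{\sf{W}}\bmath{\sf{H}}=\bmath{\sf{I}}$ (perfectly estimated mixing matrix), the residual map vanishes identically; any error in $\bmath{\sf{\hat H}}$ leaves a non-zero foreground residual whose binned power spectrum gives $\Delta \bmath{C}^{\rm XX}_{\hat \ell,{\rm foreg}}$.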
\subsection{Errors on the cosmological parameters} \label{sec:fisher} We propagate the total error on the power spectrum $\Delta \bmath{C}^{\rm XX}_{\hat \ell}$ to the cosmological parameters with a Fisher matrix approach. Given a set of parameters $p=\{p_1, p_2, \ldots, p_N\}$ which depend on a set of observables $d=\{d_1, d_2, \ldots, d_m\}$, the Fisher information matrix is an $N \times N$ symmetric matrix whose elements are given by \begin{equation} F_{ij}=\sum _{l=1,m}\frac{1}{\sigma^2_l}\frac{\partial d_l}{\partial p_i}\frac{\partial d_l}{\partial p_j}, \end{equation} where $\sigma^2_l$ is the variance of the error on the datapoint $d_l$, and $\partial d_l/\partial p_i$ is the partial derivative of $d_l$ with respect to $p_i$. The inverse of the Fisher matrix, $F^{-1}$, is the covariance matrix for the parameters $p_i$; in particular, its diagonal contains the variances of the parameters, $F^{-1}_{ii}=\sigma^2_{ii}$. In our case, the parameters $p_i$ are the cosmological parameters, and the variances $\sigma^2_l$ are the forecasted errors on the CMB $EE$ and $BB$ power spectra for a set of relevant bins $\hat \ell=1,m$. The partial derivatives $\partial d_l/\partial p_j$ are evaluated numerically by computing power spectra from theory for a fiducial cosmological model. We then vary by a small amount one parameter at a time and evaluate the corresponding change in the data $d_l$. \section{Details of the simulation}\label{sec:quattro} \begin{figure*} \begin{center} \includegraphics[width=4.5cm,angle=90,keepaspectratio]{Fig_1a.ps} \includegraphics[width=4.5cm,angle=90,keepaspectratio]{Fig_1b.ps} \caption{Sky masks used for {\it Planck} and COrE (left) and LSPE (right). The {\it Planck} and COrE mask is the P06 mask prepared by WMAP; it covers around 70\,\% of the sky.
The mask for LSPE takes into account the sky coverage of the balloon and features a smaller Galactic mask, resulting in a final sky coverage of 25\,\%.} \label{fig:masks} \end{center} \end{figure*} \subsection{Sky model} We consider three polarised components: the CMB and diffuse polarised Galactic synchrotron and dust emission. The polarized CMB simulation is based on a standard $\Lambda$CDM model with best-fit cosmological parameters from {\it Planck} \citep[including WMAP polarization,][]{2013arXiv1303.5076P}. We have added tensor modes with tensor-to-scalar ratios $r=0.2$ and $r=0.1$, and gravitational lensing. The power spectra have been computed with CAMB \footnote{\tt http://camb.info/}. The $Q$ and $U$ polarization templates of synchrotron and dust were generated at 100 GHz using the Planck Sky Model \citep{2013A&A...553A..96D}. These templates were then extrapolated to lower and higher frequencies using the spectra of eqs.~(\ref{scaling_synchro}) and (\ref{scaling_dust}) with $\beta_{\rm s}=3$ for synchrotron and $\beta_{\rm d}=1.7, \, T_{\rm d}=18\,$K for dust. These equations and parameters define the true mixing matrix of the model, which appears in eq.~(\ref{deltacomp}) for the computation of the foreground residuals. As mentioned earlier, the spectral properties have been taken to be spatially constant in order to be able to perform the forecast at the power spectrum level. \begin{figure*} \begin{center} \includegraphics[width=8.8cm,keepaspectratio]{Fig_2a.ps} \includegraphics[width=8.8cm,keepaspectratio]{Fig_2b.ps} \caption{Forecasted errors on the $B$-mode power spectrum for conservative (left) and improved (right) errors on the mixing matrix. Triangles: $\Delta \bmath{C}^{\rm BB}_{\hat \ell,{\rm noise}}$; squares: $\Delta \bmath{C}^{\rm BB}_{\hat \ell,{\rm foreg}}$; crosses: $\Delta \bmath{C}^{\rm BB}_{\hat \ell,{\rm CV}}$ for $r=0.2$ for the mask used for LSPE (green) and the one used for {\it Planck} and COrE (grey).
Solid lines: total error ($\Delta \bmath{C}^{\rm BB}_{\hat \ell}=\Delta \bmath{C}^{\rm BB}_{\hat \ell,{\rm noise}}+\Delta \bmath{C}^{\rm BB}_{\hat \ell,{\rm foreg}} + \Delta \bmath{C}^{\rm BB}_{\hat \ell,{\rm CV}}$ for $r=0.2$). Blue: {\it Planck}; green: LSPE; red: COrE. Solid grey lines: theoretical $BB$ power spectra (primordial + lensing) for $r=0.1$ and $r=0.2$. The black points show the total error (statistical plus cosmic variance) for BICEP2; the filled points are those we used in our analysis. The grey vertical line indicates the maximum $\ell$ that we included in the Fisher matrix analysis for $r$ and $\tau$. This maximum $\ell$ was chosen in order to avoid the dominant lensing $B$-mode signal at higher multipoles.} \label{fig:deltacl} \end{center} \end{figure*} \subsection{Masks} For the full-sky {\it Planck} and COrE experiments we adopted the WMAP P06 mask \citep{2007ApJS..170..335P}, covering roughly 70\,\% of the sky. LSPE is a stratospheric balloon experiment and its sky coverage is limited. In this case we apply a smaller foreground mask to limit cosmic variance, which is the dominant source of error at large scales. The final coverage is 25\,\% of the sky. The two masks are shown in Fig.~\ref{fig:masks}. \subsection{Foreground residuals}\label{sec:deltabeta} As mentioned earlier, the foreground residuals depend on the mismatch between the true and the estimated mixing matrix, which is parametrised by the error on the synchrotron and dust spectral indices, $\Delta \beta_{\rm s}$ and $\Delta \beta_{\rm d}$. For the mixing matrix error we considered two regimes, which we label as ``conservative'' ($\Delta \beta_{\rm s}=0.1$, $\Delta \beta_{\rm d}=0.05$) and ``improved'' ($\Delta \beta_{\rm s}=0.01$, $\Delta \beta_{\rm d}=0.005$). The former are a conservative assessment of the state-of-the-art, based on a realistic simulation of {\it Planck} polarization data, including a spatially-varying mixing matrix and realistic noise (Ricciardi et al. 2010).
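As an illustration, estimated mixing matrices for either regime can be obtained by sampling the spectral indices around their true values, which is how the errors are propagated in the Monte Carlo described below. This is a sketch: the {\tt mixing\_matrix} builder passed in is an assumption, standing for any routine that evaluates $\bmath{\sf{H}}$ from the scaling laws.

```python
import numpy as np

def draw_mixing_matrices(mixing_matrix, freqs, n_draws=10,
                         beta_s=3.0, dbeta_s=0.1,
                         beta_d=1.7, dbeta_d=0.05, seed=0):
    """Draw estimated mixing matrices H_hat with spectral indices sampled
    from Gaussians centred on the true values (defaults: the
    "conservative" regime)."""
    rng = np.random.default_rng(seed)
    return [mixing_matrix(freqs,
                          rng.normal(beta_s, dbeta_s),
                          rng.normal(beta_d, dbeta_d))
            for _ in range(n_draws)]
```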
The next-generation experiments focused on the detailed characterisation of the $B$-mode power spectrum will undoubtedly improve the accuracy of the determination of foreground spectral properties. In order to forecast the performance of these experiments we therefore also consider an ``improved'' regime. The adopted error values should be considered as indicative rather than representative. The error on the dust spectral index is smaller than that on the synchrotron spectral index because of the frequency coverage of the experiments, which is broader towards high frequency, thus providing in principle better control over dust contamination. To propagate the errors $\Delta \beta_{\rm s}$ and $\Delta \beta_{\rm d}$ to the CMB power spectra, we generated a set of ten estimated mixing matrices $\bmath{\sf {\hat H}}$, drawing the actual spectral indices from Gaussian distributions with mean equal to the true indices and standard deviation equal to the spectral index errors. For each $\bmath{\sf {\hat H}}$ we computed the reconstruction matrix $\bmath{\sf W}$, the residual map through eq.~(\ref{deltacomp}) and its power spectrum outside the adopted mask. For the CMB reconstruction we considered the frequency channels in the range $40<\nu<250$\,GHz. As discussed in Bonaldi \& Ricciardi (2011), using a wider frequency range could potentially result in increased foreground residuals due to the inclusion of frequency channels which are more affected by foregrounds. On the other hand, including further channels generally lowers the noise: the best trade-off for a particular experiment will depend on the instrument sensitivity, on the intensity of the foregrounds, and on the errors on the mixing matrix. We stress that, for the purposes of estimating the mixing matrix, it is always beneficial to use as wide a frequency range as possible.
In this respect, the lowest and highest frequencies are particularly useful, as they map respectively the polarized synchrotron and dust emission with high signal-to-noise and low contamination from the other components. Indeed, the accuracy of the mixing matrix estimation is critically important, particularly for the high sensitivity instruments. \subsection{Instrumental specifications}\label{sec:inst_spec} In this work we consider the {\it Planck} and COrE instruments, as representative of a current and a future CMB satellite, respectively. We also consider LSPE, as representative of a balloon-borne experiment targeting the large-scale polarization of the CMB. For the purpose of our forecasts, the instruments are modelled as a set of frequency channels, each one having a given resolution and sensitivity (which we take to be uniform across the sky). As previously discussed, we assume that the CMB is separated from the foreground contamination by way of a linear combination of the frequency maps between 40 and 250\,GHz. Therefore, we do not explicitly consider frequency channels beyond this range. The instrumental specifications adopted are reported in Table~\ref{tab:instr}, together with the references we used to derive them. We have also included the BICEP2 $EE$ and $BB$ power spectra constraints in our forecasts, in particular to assess the potential improvement achievable by combining them with another measurement at larger angular scales. We downloaded the BICEP2 data and considered the statistical error bars for the $EE$ and $BB$ power spectra. We added to them the uncertainties due to cosmic variance for a sky coverage of 380 square degrees. For $BB$ we used only the first three band powers (covering $\ell=45$--$110$). In fact, these are the most important bands for constraining the tensor-to-scalar ratio $r$, because at higher multipoles the contribution due to lensing $B$-modes is dominant.
\section{Results}\label{sec:results} In Figure \ref{fig:deltacl} we show a comparison of the theoretical power spectra for $r=0.1$ and $0.2$ with the forecasted error bars for each of the considered experiments and both conservative (left) and improved (right) mixing matrix errors. The large-scale $B$-mode signal for these values of the tensor-to-scalar ratio is accessible to both {\it Planck} (blue lines and symbols) and LSPE (green lines and symbols). For {\it Planck} the limiting factor is foreground residuals at very low multipoles and noise at higher multipoles. LSPE is limited by foreground residuals and cosmic variance at very low multipoles and noise at higher multipoles. In both cases, improving the mixing matrix accuracy from the conservative to the improved case lowers the total error only over a limited multipole range. Since the total errors for {\it Planck} and LSPE are quite similar for scales larger than those probed by BICEP2, the combination of either of these probes with the BICEP2 points results in a similar constraint on the cosmological parameters. Therefore, in the Fisher matrix analysis that follows, we only show the results for {\it Planck}. The performance of the COrE experiment (red lines and symbols) is limited by the accuracy of the mixing matrix estimation over a large multipole range. Thus the results for COrE illustrate the impact of the improvement in the mixing matrix accuracy when going from the conservative errors to the improved ones. This experiment is able to measure accurately both the reionization bump and the main peak. In the left panel of Fig.~\ref{fig:fisher} we show the results of the Fisher matrix analysis when the only free parameter is $r$. This corresponds to assuming that all the other parameters are known. For this analysis we used only the $BB$ spectrum up to $\ell=100$. In order to exploit measurements at higher multipoles, one would also need to consider the lensing amplitude parameter, $A_{\rm L}$.
If $r=0.2$, our Fisher matrix analysis of the BICEP2 results predicts $\Delta r=0.04$ at 68\% confidence level (CL). The somewhat lower error with respect to the published result ($r=0.2^{+0.07}_{-0.05}$) is partly intrinsic to the Fisher matrix approach (Cram\'er--Rao inequality) and partly due to our simplified treatment of the BICEP2 errors, such as having neglected the correlation of errors between different bins. In the following we will use the predicted BICEP2 errors in place of the actual errors in order to compare the different probes using the same analysis. In the simple one-parameter model, the inclusion of the {\it Planck} dataset at low multipoles improves the constraint on $r$ by 25--45\% depending on the mixing matrix accuracy. The results for LSPE are very similar. Besides this improvement in accuracy, these experiments would be able to provide an independent confirmation of the primordial origin of the BICEP2 signal were it to be cosmological. A next generation experiment like COrE would then improve the measurement of $r$ substantially (by almost one order of magnitude), especially if the mixing matrix is accurately modelled. \begin{figure*} \begin{center} \includegraphics[width=8.5cm,keepaspectratio]{Fig_3a.ps} \includegraphics[width=8.5cm,keepaspectratio]{Fig_3b.ps} \caption{One-dimensional likelihood for $r$ (left) and two-dimensional likelihood for $r$ and $\tau$ (right) when $r=0.2$ and all the other parameters are known. The lines and contours represent the 68\% CL (1\,$\sigma$). Solid lines represent the experiments used alone, and dot-dashed lines in combination with the BICEP2 constraint. Black: BICEP2; blue: {\it Planck} with conservative mixing matrix errors; red: {\it Planck} with improved mixing matrix errors; green: COrE with conservative mixing matrix errors; magenta: COrE with improved mixing matrix errors.
The results for LSPE are similar to those obtained for {\it Planck}.} \label{fig:fisher} \end{center} \end{figure*} In the right panel of Fig.~\ref{fig:fisher} we consider the simultaneous estimation of $r$ and $\tau$ using both the $EE$ and $BB$ power spectra. BICEP2 alone (black line) measures $r$ much better than $\tau$, because it does not measure the reionization signal at large angular scales. {\it Planck} alone (blue and red solid lines for the conservative and improved mixing matrix errors) has a better handle on the combination of these parameters, but the results on $r$ are somewhat worse than the BICEP2 ones. The combination of the two probes (blue and red dot-dashed lines) gives the best results and measures both $r$ and $\tau$ accurately. With respect to BICEP2 alone, the error on $\tau$ is reduced by $\sim$95--100\%. In the conservative error regime, COrE reduces the error bar on $r$ by 80\% and on $\tau$ by 95\% with respect to BICEP2 (green line). If the error bars on the mixing matrix are those of the improved regime, the error bars on both parameters are reduced by almost two orders of magnitude (magenta line). This demonstrates the importance of a detailed understanding of the foreground spectra for the next generation experiments. \section{Conclusions} We have applied the method developed by Bonaldi \& Ricciardi (2011) to forecast error bars on the CMB polarization $EE$ and $BB$ power spectra for current and future experiments, under the hypothesis that $r=0.1$--$0.2$ as suggested by the BICEP2 experiment. We showed that such a signal is within {\it Planck}'s reach even for conservative assumptions on the accuracy of foreground removal (but without considering instrumental systematics). The detection of the large-scale counterpart of the BICEP2 signal would be the most convincing confirmation of this result. We used a Fisher matrix formalism to predict the errors on the cosmological parameters starting from the error bars on the $BB$ power spectrum.
The combination of BICEP2 with either {\it Planck} or with a balloon-borne experiment targeting the large-scale polarization of the CMB (here represented by LSPE) improves the accuracy on $r$ by 25--45\% and measures $\tau$ with an error of 0.002--0.001. The constraint on the tensor-to-scalar ratio can be improved substantially with a next-generation $B$-mode satellite such as COrE. This experiment can reduce the error bar on $r$ by another order of magnitude, provided that we have accurate knowledge of the frequency spectra of the foreground components. On the other hand, if only limited progress on this aspect is made from the present state-of-the-art, the improvement in sensitivity with respect to {\it Planck} cannot be fully exploited. This confirms that, even for a relatively large $B$-mode signal such as the one implied by the BICEP2 results, foreground removal is crucial for a precise measurement of the tensor-to-scalar ratio. \section{Acknowledgements} We thank the referee, F. Stivoli, for useful suggestions. AB and MLB acknowledge support from the European Research Council under the EC FP7 grant number 280127. MLB also acknowledges support from an STFC Advanced/Halliday fellowship. SR acknowledges support by ASI through ASI/INAF Agreement 2014-024-R.0 for the Planck LFI Activity of Phase E2 and by MIUR through PRIN 2009 grant no. 2009XZ54H2. \bibliographystyle{mn2e}
\section{Introduction} \label{sec:introduction} A string $P$ is said to have a {\em jumbled} occurrence in string $T$ if $P$ can be rearranged so that it appears in $T$. In other words, if $T$ contains a substring of length $|P|$ where each letter of the alphabet occurs the same number of times as in $P$. In indexing for {\em Jumbled pattern matching} we wish to preprocess a given text $T$ so that given a query $P$ we can determine quickly whether $T$ has a jumbled occurrence of $P$. \paragraph{\bf Binary jumbled pattern matching on strings.} Apart from a recent paper on constant alphabets~\cite{ESA13}, all the results on the problem are restricted to binary alphabets (where a query pattern $(i,j)$ asks for a substring of $T$ that is of length $i$ and has $j$ 1s). The important property of a binary alphabet is that $(i,j)$ appears in $T$ iff $j$ is between the minimum and maximum number of 1s over all substrings of length $i$. As observed in~\cite{CFL09}, this means that we can store only the minimum and maximum values of every $i$ and can then answer a query in $O(1)$ time. While this requires only $O(n)$ space, computing it naively takes $O(n^2)$ time. Beating $O(n^2)$ has become a recent challenge of the pattern matching community. The first improvement was to $O(n^2 / \log n)$. It was independently obtained by Burcsi et al.~\cite{BCFL12a}, and by Moosa and Rahman~\cite{MR10}, who reduced the problem to min-plus products of vectors. Moosa and Rahman~\cite{MR12} then further improved it to $O(n^2 / \log^2 n)$ in the RAM model by cleverly using the four-Russians technique instead of min-plus products. This remained the state of the art and $o(n^2 / \log^2 n)$ time was only known when the string compresses well under run-length encoding~\cite{BFKL12,GG12} or when we are willing to settle for approximate indexes~\cite{CLWY12}. 
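The min/max characterisation above immediately yields an index with $O(n^2)$ preprocessing and $O(1)$ queries. The following sketch implements this naive preprocessing (which the results surveyed above accelerate); all identifiers are our own for the illustration.

```python
def jumbled_index(T):
    """For a binary string T, compute for every substring length i the
    minimum (lo[i]) and maximum (hi[i]) number of 1s over all substrings
    of length i, via prefix sums: O(n^2) time, O(n) space."""
    n = len(T)
    pref = [0]
    for c in T:
        pref.append(pref[-1] + (c == '1'))
    lo, hi = [0] * (n + 1), [0] * (n + 1)
    for i in range(1, n + 1):
        ones = [pref[j + i] - pref[j] for j in range(n - i + 1)]
        lo[i], hi[i] = min(ones), max(ones)
    return lo, hi

def occurs(lo, hi, i, j):
    """Query (i, j): does T have a substring of length i with exactly j 1s?"""
    return 1 <= i <= len(lo) - 1 and lo[i] <= j <= hi[i]
```

Queries are then a single range check, since every $j$ between the minimum and maximum is realised by some substring of length $i$.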
\paragraph{\bf Binary jumbled pattern matching on Trees.} On a tree $T$ whose nodes are labeled 0 or 1, a query $(i,j)$ asks for a connected subgraph of $T$ that is of size $i$ and has exactly $j$ nodes labeled by 1. Like in strings, if $(i,j_1)$ and $(i,j_2)$ both appear in $T$, then for every $j_1\le j \le j_2$, $(i,j)$ appears in $T$. This means that, again, we only need to store for every $i$ the minimum $j_1$ and maximum $j_2$ values such that $(i,j_1)$ and $(i,j_2)$ appear in $T$. In~\cite{OurESA13} we showed that finding these values can be done in $O(n^2 / \log^2 n)$ time, just like in strings. In fact, the solution for trees was obtained by reducing it to multiple applications of the solution for strings~\cite{MR12} based on the four-Russians technique. \paragraph{\bf Our results.} Given a string (resp. tree) $T$, we refer to jumbled pattern matching as the problem of computing for every $i$ the maximum and minimum number of 1s in a substring (resp. connected subgraph) of $T$ of size $i$. We obtain the following: \begin{theorem}\label{theorem1} Any $O(n^3/\ell(n))$ algorithm for computing the min-plus product of $n\times n$ matrices implies an $O(n^2/\ell(\sqrt{n}))$ algorithm for jumbled pattern matching on strings and an $O(nr+ n^2/\ell(\sqrt{r}))$-time algorithm for jumbled pattern matching on trees for any choice of $r$. \end{theorem} \noindent Our work was motivated by the recent breakthrough algorithm of Williams~\cite{Ryan} for computing the min-plus product of two $n\times n$ matrices in $n^3/2^{\Omega(\log n/\log \log n)^{1/2}}$ time\footnote{Using the Williams algorithm on the word RAM takes $n^3/2^{\Omega(\log n/\log \log n)^{1/2}}$ time since we know that all elements of our matrices are bounded by $\sqrt{n}$. The algorithm is randomized and can be made deterministic in $O(n^3/2^{\log^{\delta} n})$ time for some $\delta > 0$.}. 
This means that currently both $\ell(n)$ and $\ell(\sqrt{n})$ are $2^{\Omega(\log n/\log \log n)^{1/2}}$ (the difference between $\ell(n)$ and $\ell(\sqrt{n})$ is only in the constant behind the $\Omega$). Choosing $r= \sqrt{n}$, we get: \begin{corollary}\label{cor} Jumbled pattern matching on both strings and trees can be solved in $n^2/2^{\Omega(\log n/\log \log n)^{1/2}}$ time. \end{corollary} \noindent Finally, we note that the above bound also applies to the more general problem of computing the maximum sub-sums of a string or a tree. Namely, given a string (resp. tree) whose characters (resp. nodes) have arbitrary weights we can compute (in the time bound of Corollary~\ref{cor}) for every $i=1,\ldots,n$ the maximum sum of weights of all substrings (resp. connected subgraphs) of size $i$. \section{Binary Jumbled Pattern Matching on Strings} We begin by proving the first part of Theorem~\ref{theorem1} regarding jumbled pattern matching on strings. As discussed in the previous section, this boils down to the following problem: Given a binary text $T$ of length $n$, compute the minimum and maximum number of 1s in a substring of length $s$ in $T$, for all $s=1,\ldots,n$. Below, we focus on computing the minimum number of 1s in each substring length, as computing the maximum number of 1s can be done in an analogous manner. We show how to do this in total $O(n^2/\ell(\sqrt{n}))$ time, where $\ell(n)$ is the assumed speedup factor for the naive cubic-time min-plus multiplication algorithm of matrices $A$ and $B$, defined as: \[(A \star B)[i,j] = \min_{k} (A[i,k] + B[k,j]).\] That is, matrix multiplication where $\min$ plays the role of addition, and $+$ plays the role of multiplication. The complexity of such multiplication is equivalent to that of All-Pairs Shortest Paths. We start by first partitioning the string $T$ into consecutive substrings (blocks) $T_0,\ldots, T_{\sqrt{n}-1}$ each of length $\sqrt{n}$. 
We then compute for every $T_i$ the minimum number of 1s in a substring of length $s$ that is completely inside~$T_i$. This can be done naively for all $s\in\{1,\ldots, \sqrt{n}\}$ in $O(n)$ time, and over all $T_i$'s in $O(n^{1.5})$ time. We next want to compute the minimum number of 1s in substrings that span more than one block. For every $\ell \in \{1,\ldots, 2\sqrt{n}\}$ (here $\ell$ denotes a length, not to be confused with the speedup factor $\ell(\cdot)$), let $C_\ell$ be the $\sqrt{n}\times \sqrt{n}$ matrix where $C_\ell[i,j]$ is the minimum number of 1s in substrings that include: (1) a suffix $q$ of $T_i$, (2) the complete blocks $T_{i+1},\ldots,T_{j-1}$, (3) a prefix $p$ of $T_j$, and (4) $\ell=|p|+|q|$. It is not hard to see that once we have all $C_1,\ldots,C_{2\sqrt{n}}$, along with all information we computed within the blocks, solving our problem is trivial in $O(n^{1.5})$ time. We distinguish between two cases: the case where $\ell \leq \sqrt{n}$ (in which $p$ and $q$ are allowed to be empty), and the case where $\ell > \sqrt{n}$ (in which both $p$ and $q$ must be non-empty). Assume that $\ell \le \sqrt{n}$. Let $A$ be the $\sqrt{n}\times (\ell+1)$ matrix such that $A[i,k]$ is the number of 1s in the last $k$ bits of $T_i$. Similarly, we define the $(\ell+1) \times \sqrt{n}$ matrix $B$ such that $B[k,j]$ is the number of 1s in the first $\ell-k$ bits of $T_j$. Their min-plus product $C$ is defined as $C[i,j] = \min_{k} (A[i,k] + B[k,j])$. We set $C_\ell[i,j] = C[i,j] + x_{i,j}$, where $x_{i,j}$ is the number of 1s in the substring $T_{i+1} \cdots T_{j-1}$. Note that computing $x_{i,j}$ is done once and is then used for every $\ell$. To compute $C_\ell$ for $\ell > \sqrt{n}$, we use the same procedure and only slightly change $A$ and $B$. Now $A$ is a $\sqrt{n}\times (2\sqrt{n}-\ell+1)$ matrix, and $A[i,k]$ is the number of 1s in the last $k+\ell-\sqrt{n}$ bits of $T_i$. The matrix $B$ is a~$(2\sqrt{n}-\ell+1)\times \sqrt{n}$ matrix such that $B[k,j]$ equals the number of 1s in the first $\sqrt{n}-k$ bits of $T_j$.
The matrix $C$ is again defined as the min-plus product $A \star B$, and $C_\ell[i,j]$ is computed as in the previous case. Note that computing $A$ and $B$ for each $\ell$ can be trivially done in $O(n)$ time. Furthermore, it is not difficult to see that the value $C_\ell[i,j]$ computed for each $i$ and $j$ is indeed the minimum number of 1s in substrings of $T$ as required above. The matrix $C_\ell$ can be computed easily in $O(n)$ time once $C$ has been computed via the min-plus computation. Since both dimensions of $A$ and $B$ are $O(\sqrt{n})$, the assumed min-plus algorithm computes each such product in $O(n^{3/2}/\ell(\sqrt{n}))$ time. Thus, in total we compute $C_1,\ldots,C_{2\sqrt{n}}$ in $O(n^2/\ell(\sqrt{n}))$ time. This proves the first part of Theorem~\ref{theorem1}. \subsection{Relation to Previous Work} The above proof was first suggested by us in 2008~\cite{Private2008}. A similar construction was independently obtained by Bremner et al.~\cite{BCDEHILT06} (arXiv 2012, Section 4.4) who showed that MPV$(n)= n^{1.5} + \sqrt{n}\cdot $MPM$(\sqrt{n})$. Here, MPM$( \sqrt{n})$ denotes the time it takes to compute the min-plus product of two $ \sqrt{n}\times \sqrt{n}$ {\em matrices} and MPV$(n)$ denotes the time it takes to compute the min-plus product of two $n$-length {\em vectors} $x,y$ defined as: \vspace{-0.07in} \[(x \odot y)[i] = \min_{k=1}^i (x[k]+y[i-k]).\] Moosa and Rahman~\cite{MR10} showed that jumbled pattern matching on a string of length $n$ can be done in time $T(n) = 2T(n/2) +$MPV$(n)$. By Bremner et al. this means that $T(n) = 2T(n/2) +n^{1.5} + \sqrt{n}\cdot $MPM$(\sqrt{n})$. By Williams~\cite{Ryan} we have MPM$(\sqrt{n})= O(n^{3/2}/\ell(\sqrt{n}))$ and so $T(n)=O(n^2/\ell(\sqrt{n}))$. \section{Binary Jumbled Pattern Matching on Trees} We now prove the second part of Theorem~\ref{theorem1}.
Given a tree $T$ with~$n$ nodes, each labeled with either 0 or 1, we wish to compute, for every $i=1,\ldots, n$, the minimum number of nodes labeled 1 in a connected subgraph of $T$ that is of size $i$ (the maximum is found similarly). In~\cite{OurESA13}, we presented a tree-to-strings reduction for this problem that was based on the four-Russians speedup of~\cite{MR12}. Here, we generalize this reduction to a black-box reduction, which is applied regardless of the particular speedup technique used in the string case. We outline this generalization below. The first observation in~\cite{OurESA13} was that we can assume w.l.o.g.\ that $T$ is a binary tree. The second was a simple $O(n^2)$ algorithm: In a bottom-up manner, for each node $v$ of $T$, compute an array $A_v$ of size $|T_v|+1$ ($T_v$ includes $v$ and all its descendants in $T$). The entry $A_v[i]$ will store the minimum number of 1-nodes in a connected subgraph of size $i$ that includes $v$ and another $i-1$ nodes in $T_v$. If $v$ has a single child $u$, then we set $A_v[i]=lab(v)+A_u[i-1]$, where $lab(v)$ is the label of $v$. If $v$ has two children $u$ and $w$, we set $A_v[i]= lab(v)+\min_{0 \leq j \leq i-1} \{A_u[j]+A_w[i-j-1]\}$. The time required to compute all arrays is asymptotically bounded by $\sum_v \alpha(v)\beta(v)= O(n^2)$, where $\alpha(v)$ (resp. $\beta(v)$) is the size of $v$'s left (resp. right) child's subtree. The total space used can be made $O(n)$ by only keeping $A_v$'s which are necessary for future computations. It can be made $O(n)$ {\em bits} by representing $A_v$ as a binary string $B_v$ where $B_v[0]=0$, and $B_v[i]= A_v[i]-A_v[i-1]$ for all $i=1,\ldots,n-1$. Since $A_v[i]= \sum_{j=0}^i B_v[j]$, each entry of $A_v$ can be retrieved from $B_v$ in $O(1)$ time using {\em rank} queries~\cite{Jacobson}.
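The bottom-up recurrence above can be rendered as the following Python sketch (ours, for illustration; it folds in children one at a time, which reproduces both the one-child and two-child cases of the recurrence):

```python
def min_ones(children, label, v):
    """A[k] = minimum number of 1-labeled nodes over all connected
    subgraphs of size k + 1 that contain v and lie inside v's subtree."""
    A = [label[v]]                        # the size-1 subgraph: v alone
    for u in children.get(v, ()):
        Au = min_ones(children, label, u)
        ext = [0] + Au                    # ext[j]: take j nodes from u's subtree
        A = [min(A[k - j] + ext[j]
                 for j in range(len(ext)) if 0 <= k - j < len(A))
             for k in range(len(A) + len(Au))]
    return A
```

For example, a root labeled 1 with two leaf children labeled 0 and 1 yields the array [1, 1, 2] for subgraph sizes 1, 2, 3 containing the root.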
\paragraph{\bf The black-box reduction.} Now that we have an $O(n^2/\ell(\sqrt{n}))$-time algorithm for strings, we would like to also obtain an $O(n^2/\ell(\sqrt{n}))$-time algorithm for trees. Using the above algorithm, this can be achieved if computing $B_v$ can be done in $O(\alpha(v)\beta(v)/\ell(\sqrt{n}))$ time, since in total we would then get $O(1/\ell(\sqrt{n}) \cdot \sum_v \alpha(v)\beta(v)) = O(n^2/\ell(\sqrt{n}))$ time. For a node $v$ with children $u$ and $w$, we can compute $B_v$ using jumbled pattern matching on the binary string $S = X \cdot lab(v) \cdot Y$, where $X$ is obtained from $B_u$ by reversing it and removing its last bit, and $Y$ is obtained from $B_w$ by removing its first bit. The catch is that we are only interested in substrings that include the position of $lab(v)$ in $S$. If $x=|X|$ and $y=|Y|$, then this can naively be done in $O(xy)$ time. Alternatively, it can also be done in $O(|S|^2/\ell(\sqrt{|S|}))= O((x+y)^2/\ell(\sqrt{x+y}))$ time using the algorithm of the previous section\footnote{Note that the algorithm from the previous section can easily be adapted (in the same time complexity) to only consider substrings that include the position of $lab(v)$.}. However, we desire $O(xy/\ell(\sqrt{n}))$ time. To achieve this, assume w.l.o.g.\ that $x \leq y$. We partition $Y$ into consecutive substrings $Y_1,\ldots,Y_{y/x}$, each of length $x$ (except perhaps the last one). We compute $B_v$ by solving jumbled pattern matching on all the strings $X \cdot lab(v) \cdot Y_i$. Using the algorithm of the previous section, this takes $(y/x)\cdot O(x^2/\ell(\sqrt{x}))= O(xy/\ell(\sqrt{x}))$ time in total. This would be fine if $\ell(\sqrt{x})$ were roughly equal to $\ell(n)$. Note that for large enough $x$, say $x \ge \sqrt{n}$, with the current Williams bound both $\ell(\sqrt{x})$ and $\ell(n)$ are indeed $2^{\Omega(\log n/\log \log n)^{1/2}}$. The challenge is therefore to deal with small $x$ (say $x < \sqrt{n}$).
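The naive $O(xy)$ version of this combine step — jumbled pattern matching on $S = X \cdot lab(v) \cdot Y$ restricted to substrings that cover the middle position — can be sketched as follows (our illustration, with 0/1 lists standing in for the binary strings):

```python
def combine_min_ones(X, lab, Y):
    """best[s] = min number of 1s over substrings of X + [lab] + Y
    of length s that contain the middle (lab) position."""
    sufX = [0]
    for k in range(1, len(X) + 1):            # ones in the last k bits of X
        sufX.append(sufX[-1] + X[len(X) - k])
    preY = [0]
    for k in range(1, len(Y) + 1):            # ones in the first k bits of Y
        preY.append(preY[-1] + Y[k - 1])
    best = [None] * (len(X) + len(Y) + 2)
    for a in range(len(X) + 1):               # a bits taken from X's suffix
        for b in range(len(Y) + 1):           # b bits taken from Y's prefix
            s = a + b + 1
            v = sufX[a] + lab + preY[b]
            if best[s] is None or v < best[s]:
                best[s] = v
    return best
```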
The challenge of small $x$ was also an obstacle in the reduction of~\cite{OurESA13}. We use the same solution of~\cite{OurESA13} (with only a small change in parameters). Namely, a \emph{micro-macro decomposition}~\cite{MicroMacro}. A micro-macro decomposition is a partition of $T$ into $O(n/r)$ disjoint connected subgraphs of size at most $r$ called {\em micro trees}. Each micro tree $C$ has at most two nodes (called \emph{boundary} nodes) that are adjacent to nodes in other micro trees. The {\em macro tree} is a tree of size $O(n/r)$. Each node of the macro tree corresponds to a micro tree $C$, and the edges of the macro tree correspond to edges between boundary nodes. We first compute the minimum number of 1s in all patterns that are completely inside a micro tree. Using the simple algorithm above, each micro tree is handled in $O(r^2)$ time, so overall this takes $O((n/r)\cdot r^2)=O(nr)$ time. Notice that in particular this computes $B_v$ for every boundary node $v$ with respect to its micro tree $C$. Denote this array by $B_v(C)$. To deal with patterns that span multiple micro trees, it was shown in~\cite{OurESA13} that the simple algorithm can be applied bottom-up on the macro tree (instead of on $T$). For each node $C$ in the macro tree, and each boundary node $v$ of $C$, the array $B_v$ is computed by combining the array $B_v(C)$ with the arrays $B_u$ of every descendant boundary node $u$ adjacent to $v$. Recall that combining the arrays means solving jumbled pattern matching on a binary string $S = X\cdot lab(v) \cdot Y$ with $x=|X|$ and $y=|Y|$. As before, if $x\ge r$ this takes $O(xy/\ell(\sqrt{x}))=O(xy/\ell(\sqrt{r}))$ time, so all such computations together take $O(n^2/\ell(\sqrt{r}))$ time. If~$x < r$, we simply pad $X$ artificially until it is of length $r$. The computation will then take $O(ry/\ell(\sqrt{r}))=O(rn/\ell(\sqrt{r}))$ time, but there are only $O(n/r)$ boundary nodes, so overall this takes $O(n^2/\ell(\sqrt{r}))$ time.
Accounting also for the $O(nr)$ time required for computing all necessary information inside the micro trees, we obtain the time complexity promised in Theorem~\ref{theorem1}. \section{Conclusions} We have shown that any $O(n^3/\ell(n))$ algorithm for computing the min-plus product of $n\times n$ matrices implies an $O(n^2/\ell(\sqrt{n}))$-time algorithm for jumbled pattern matching on strings, and an $O(nr+ n^2/\ell(\sqrt{r}))$-time algorithm for jumbled pattern matching on trees for any choice of $r$. With the current Williams bound on $\ell(n)$, and by choosing $r=\sqrt{n}$, we get that jumbled pattern matching on either strings or trees can be done in $O(n^2/\ell(\sqrt{n}))$ time. This is because currently both $\ell(\sqrt{n})$ and $\ell(n)$ are $2^{\Omega(\log n/\log \log n)^{1/2}}$. In the future, if, say, an $O(n^{3-\varepsilon})$ algorithm is found (for some constant $\varepsilon > 0$) for All-Pairs Shortest Paths (i.e., for min-plus products), then we would get an $O(n^{2-\varepsilon/2})$ algorithm for jumbled pattern matching on strings but only an $O(n^{2-\varepsilon'})$ algorithm for trees, where $\varepsilon' = \frac{\varepsilon/2}{1+\varepsilon/2}$. This is obtained using the $O(nr+ n^2/\ell(\sqrt{r}))$ bound of Theorem~\ref{theorem1} with $r=n^\frac{1}{1+\varepsilon/2}$. However, notice that the $O(nr)$ factor originated from running the simple $O(r^2)$ algorithm on each one of the $O(n/r)$ micro trees. But we now have a better-than-$O(r^2)$ algorithm, namely an $O(rr'+ r^2/\ell(\sqrt{r'}))$ algorithm for any choice of $r'$. Doing this recursively improves the $O(nr)$ factor and makes $\varepsilon'$ closer to $\varepsilon/2$. We leave this as an exercise for the optimistic future in which APSP can be done in $O(n^{3-\varepsilon})$ time. \bibliographystyle{plain}
\section{Introduction}\label{intro} In his commentary on the classic paper by Chandrasekhar \& M\"{u}nch (1952) on brightness fluctuations in the Milky Way, Scalo (1999) presented an insightful discussion of a dichotomy in our perception of the structure of the diffuse interstellar medium (ISM) of our Galaxy. On the one hand, we might view the ISM in terms of a collection of isolated, dense clouds enveloped in a more tenuous medium, a concept that has directed our thinking on the establishment of discrete ``phases'' of the ISM with well established spatial domains and vastly different properties that can be justified on some fundamental physical grounds (Field et al. 1969; McKee \& Ostriker 1977; Burkert \& Lin 2000; V\'azquez-Semadeni et al. 2000; Brandenburg et al. 2007). On the other hand, much of the ISM can be viewed as a continuous fluid medium containing a texture of seemingly random fluctuations in density, velocity and temperature. Adherents to this second picture [e.g., Ballesteros-Paredes et al. (1999)] view clouds as an illusion created by the most extreme fluctuations in a turbulent medium having an extraordinarily high Reynolds number. While this is undoubtedly true, we must also acknowledge the presence of nearly static, sharp boundaries between different media, as revealed by dark clouds with well defined edges that have been sculpted by ionization, dissociation, and evaporation/condensation fronts. Both pictures have their utility in exploring important issues on the multitude of processes that can occur within the ISM. As we switch our perspective from morphological to dynamical properties of the ISM, we find that over macroscopic scales the motions of gases in our Galaxy can be governed by the injection and dissipation of mechanical energy from a wide range of energy sources that include supernova explosions (McKee \& Ostriker 1977; McCray \& Snow 1979; Mac Low et al. 1989; Kim et al.
2001; de Avillez \& Breitschwerdt 2005a), disturbances from newly formed H~II regions (Lasker 1967; Tenorio-Tagle 1979; Rodr\'iguez-Gaspar \& Tenorio-Tagle 1998; Peters et al. 2008), stellar mass loss (Abbott 1982; McKee et al. 1984; Owocki 1999), infalling gas clouds from the Galactic halo (Wakker \& van Woerden 1997; Santill\'an et al. 1999, 2007), bipolar jets from star forming regions (Bally 2007), shocks in spiral arm density waves (Roberts et al. 1975), and the magnetorotational instability driven by differential galactic rotation (Piontek \& Ostriker 2004). These processes play a strong role in creating recognizable, discrete structures and flows of material in the ISM, but ultimately some of the energy from the resulting compressions and vorticity will also be transformed into random turbulent motions. Transient structures of small sizes can arise from the cascade of larger turbulent cells into small ones or be created in the interface regions between colliding gas flows (Audit \& Hennebelle 2005). Turbulence can also be fed by instabilities in phase transition layers (Inoue et al. 2006) or the weak driving forces that arise from the thermal instability of the ISM (Kritsuk \& Norman 2002b; Koyama \& Inutsuka 2006), the latter of which can be sustained by abrupt changes in the heating rate from UV radiation (Kritsuk \& Norman 2002a). Over the past several decades, much progress has been made in the study of magnetohydrodynamical (MHD) turbulence in the ISM. Our understanding of this phenomenon has been facilitated by the rapid emergence of powerful 3-dimensional computer simulations, and its existence is supported by observations of column density distributions, velocity statistics, cloud morphologies, deviations in magnetic fields, and various kinds of disturbances in the propagation of radio waves in ionized media [for a review, see Elmegreen \& Scalo (2004)].
This phenomenon influences the heating, chemical mixing, radio wave propagation, and cosmic ray scattering in the ISM (Scalo \& Elmegreen 2004). Within denser environments, turbulent processes are expected to have a strong influence on the fragmentation of density concentrations just before and during the earliest stages of gravitational collapse that leads to star formation (Mac Low \& Klessen 2004; McKee \& Ostriker 2007). Both the coherent dynamical phenomena and the smaller scale turbulent motions have an influence on pressures in the ISM. These pressures appear in many forms: thermal, magnetic, dynamical, and the indirect effects of cosmic rays; their collective magnitude amounts to about $p/k=2.5\times 10^4\,{\rm cm}^{-3}\,$K\footnote{Throughout this paper, we quantify pressures in terms of $p/k$ in the units ${\rm cm}^{-3}\,$K instead of simply $p$ in the units of ${\rm dyne~cm}^{-2}$ or ${\rm erg~cm}^{-3}$. Our representation facilitates comparisons with actual densities and temperatures in the ISM.}, a value established by the hydrostatic equilibrium of gaseous material in the gravitational potential of the Galactic plane (Boulares \& Cox 1990). Except for very hot media ($T > 10^5\,$K) that have been created by shock heating from supernova blast waves (Cox \& Smith 1974; de Avillez \& Breitschwerdt 2005b) or that reside within wind-blown bubbles around stars (Castor et al. 1975; Weaver et al. 1977), the thermal pressures of the ISM represent a small fraction (about one-tenth for $T\sim 100\,$K) of the total pressure (de Avillez \& Breitschwerdt 2005a). While thermal pressures and their variability may seem unimportant in the dynamical development of the general ISM, they nevertheless can be influenced by stochastic dynamical effects and thus can provide us with useful information.
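As a back-of-the-envelope illustration of the ``about one-tenth'' statement (the density and temperature below are representative values that we assume for this sketch, not measurements from this survey):

```python
# Thermal pressure of a representative cold-neutral-medium parcel.
n = 30.0             # assumed total particle density, cm^-3
T = 80.0             # assumed kinetic temperature, K
p_over_k = n * T     # thermal pressure, cm^-3 K
p_total = 2.5e4      # total pressure (thermal + magnetic + dynamical + CR)

print(p_over_k, p_over_k / p_total)   # 2400.0, roughly one-tenth of the total
```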
In this paper, we make use of the fact that the two upper fine-structure levels in the electronic ground state of the neutral carbon atom, with excitation energies $E/k=23.6$ and $62.4\,$K, are easily excited and de-excited by collisions with neutral and charged particles at typical densities and temperatures within the diffuse, cold gas in the Galactic plane. The balance between the effects of these collisions and spontaneous radiative decays (at wavelengths 609 and $371\,\mu$m) establishes fine-structure level population ratios that can serve as an indicator of the local density and temperature of the C~I-bearing material. We sense these population ratios by observing the UV multiplets of C~I that appear as foreground absorption features in the spectra of hot stars recorded at high spectral resolution. The first widespread study of C~I fine-structure excitations was carried out by Jenkins \& Shaya (1979), who analyzed observations that came from the UV spectrograph on the {\it Copernicus\/} satellite. Those observations and a more comprehensive survey by Jenkins et al. (1983) were primitive by today's standards for observing C~I features set by spectrographs on the {\it Hubble Space Telescope\/} ({\it HST\/}) (Smith et al. 1991; Jenkins et al. 1998; Jenkins 2002), but they nevertheless established an early framework for determining the average thermal pressures in the ISM and their variations from one location to the next. In addition to these general surveys, special studies sensed extreme positive deviations in pressure from the C~I features in spectra of stars within and behind the Vela supernova remnant (Jenkins et al. 1981, 1984, 1998; Jenkins \& Wallerstein 1995; Wallerstein et al. 
1995; Nichols \& Slavin 2004), indicating that the blast wave overtook and compressed\footnote{We note that Nichols \& Slavin (2004) proposed some possible alternative explanations for producing an excess of excited C~I.} small clouds in the medium that surrounded the explosion site (Chevalier 1977). A new advance in the study of C~I excitation in the general ISM arose from the study by Jenkins \& Tripp (2001) (hereafter JT01), who used the highest resolution configurations of the echelle spectrograph in the {\it Space Telescope Imaging Spectrograph\/} (STIS) (Kimble et al. 1998; Woodgate et al. 1998) to observe C~I in the spectra of 21 stars. An important breakthrough in this work was to make use of the ability of STIS to record many different C~I multiplets simultaneously, which allowed JT01 to benefit from a special analysis technique that they developed to unravel the blended absorption profiles arising from the three fine-structure levels. The current study of thermal pressures expands on the work of JT01, once again using the analysis method employed earlier, but with some technical improvements outlined in Appendix~\ref{improvements}. In \S\ref{observations} we describe our new coverage of sightlines that significantly expands on the limited selection of stars that were studied by JT01, but we caution that one must be aware of a few, mostly unavoidable, selection biases in the sampling. In \S\ref{results} we review the basic principles of the analysis, but leave it to the reader to consult JT01 for a more detailed description of the mathematical method. This section also introduces our fundamental approach to interpreting the population ratios in terms of the local density and temperature of the C~I-bearing gas, a method originally developed by Jenkins \& Shaya (1979). 
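To see why the fine-structure populations act as a barometer, consider a minimal two-level sketch (ours; the actual analysis of JT01 treats all three levels, several collision partners, and optical pumping, and the rate constants below are assumed, illustrative values only):

```python
import math

# Two-level sketch of C I fine-structure excitation (illustrative only).
# Level 0: 3P0 (g = 1); level 1: 3P1 (g = 3), E/k = 23.6 K (609 um line).
A10 = 7.9e-8    # s^-1, spontaneous 3P1 -> 3P0 decay rate; assumed value
q10 = 1.0e-10   # cm^3 s^-1, collisional de-excitation coefficient; illustrative

def ratio_n1_n0(n_coll, T):
    """Steady-state n(3P1)/n(3P0): collisional excitation balanced
    against collisional de-excitation plus spontaneous decay."""
    g1_over_g0, E_over_k = 3.0, 23.6
    q01 = q10 * g1_over_g0 * math.exp(-E_over_k / T)   # detailed balance
    return n_coll * q01 / (n_coll * q10 + A10)
```

The ratio rises monotonically with the collider density and saturates at the LTE value $(g_1/g_0)e^{-E/kT}$, so a measured ratio, combined with a temperature estimate, constrains the local density and hence $p/k = nT$.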
Table~\ref{sightlines_table} in this section lists the 89 target stars whose spectra were analyzed in this study, along with some relevant information about the foreground regions probed by the sightlines. For each sightline, we show in Table~\ref{obs_quantities_table} some composite interstellar conditions that we derived, and in \S\ref{overview} we offer some reflections on their significance. In Table~\ref{MRT} we provide more detailed, machine-readable information for each velocity bin in the entire survey. Section~\ref{properties} describes a number of additional factors that require some consideration when the C~I results are interpreted, including ways to estimate the total amount of gas that accompanies the C~I (\S\ref{ionization_corrections}), the kinetic temperature of the gas (\S\ref{T}), the mix of particles that can excite the upper levels (\S\ref{particle_mix}), and the local intensity of starlight (\S\ref{starlight}). The starlight intensity must be known in order to correct for optical pumping of the fine-structure levels (as described in \S\ref{derivations}), and, as we demonstrate in \S\ref{basic_dist}, such intensities seem to be correlated with pressure. Section~\ref{admixtures} discusses how we interpret our finding that virtually all of the measurements do not agree with the expected fine-structure populations for collisional excitation for any uniform values of local density or temperature. Here, we introduce the idea that small admixtures of gas at extraordinarily high pressures accompany virtually all of the gas at normal pressures, and in \S\ref{misleading_conclusion} we examine (and ultimately reject) some alternative explanations of the observed deviations.
Some discussions on the implications and possible origins of the high pressure component appear in \S\S\ref{amount_hipress}$-$\ref{origins}, and we show in \S\ref{behavior_velocity} and \S\ref{origins} that the gas fractions at high pressures are accentuated in material that is moving rapidly or is exposed to a high intensity of starlight. In \S\ref{interpretation} we derive the distribution function for the mass-weighted thermal pressures in the dominant low pressure regime for two samples: (1) all of the gas and (2) gas that is well removed from intense sources of UV radiation. For the convenience of those who wish to compare our results with computer simulations of turbulence based on volume-weighted distributions, we convert our mass-weighted sampling to a volume-weighted one in \S\ref{vol_weighted}, but with the precarious assumption that the gas responds to pressure perturbations with a single polytropic index $\gamma$ (which is left as a free parameter). In \S\ref{pileup}, we address the possibility that the width of the pressure distribution understates the true dispersion of pressures in the ISM, due to the fact that we view in each velocity bin the superposition of absorptions by gases with different pressures and thus only sense an average pressure in each case. In \S\ref{comparisons_turbulence} we derive a range of possible characteristic turbulent Mach numbers for the C~I-bearing gas. We explore in \S\ref{time_constants} some time constants for various physical processes that are relevant to our work. For instance, the excitation temperatures of the two lowest rotational levels of H$_2$ play a role in many of our pressure determinations. In \S\ref{crossing_times} we find that only on scales of order 100$-$1000$\,$AU are lag times of any importance in weakening the coupling of H$_2$ rotation temperatures to the local kinetic temperature.
The same applies to the equilibrium between heating and cooling of the gas: compressions and decompressions of the gas should follow the equilibrium relationship except on the smallest scales where the behavior becomes more adiabatic in nature. The time scales for the equilibria of fine-structure populations and the balance between C~I and C~II are quite short and hence apply on all of the relevant size scales. In a brief departure from the discussion of turbulence as a source of pressure deviations, we consider in \S\ref{coherent_disturbances} the possibility that the upper end of our pressure distribution function is consistent with random interceptions of supernova remnants in different stages of development. Finally, we summarize our conclusions in \S\ref{summary}. \section{Observations}\label{observations} Our earlier survey (JT01) covered only 21~stars that were observed with the guaranteed observing time granted to the STIS instrument definition team. In order to maximize the observing efficiency, the target stars in this study were located within two Galactic longitude intervals, ones where the {\it HST\/} continuous viewing zones\footnote{The continuous viewing zones (CVZs) are two declination bands centered on $\delta=\pm 61\fdg5$ where observations can be performed with high efficiency because the targets are not occulted by the Earth as the satellite progresses along its orbit.} intersected the Galactic plane ($99\arcdeg < \ell < 138\arcdeg$, $254\arcdeg < \ell < 313\arcdeg$). Many additional observations, most of which were performed after the study by JT01, were more broadly distributed in the sky and were taken before the 5-year hiatus of observing brought about by STIS instrument failure in August 2004. We downloaded from MAST (Multimission Archive at the Space Telescope Science Institute) virtually all of the observations performed at wavelengths that covered two or more C~I transitions in the E140H and E230H modes. 
Once again, we made use of the broad wavelength coverage of STIS to examine as many multiplets as possible. These spectra had a resolving power in radial velocity equal to $2.6\,{\rm km~s}^{-1}$ (or $1.5\,{\rm km~s}^{-1}$ for the stars observed by JT01 because a narrower entrance slit was used) (Proffitt et al. 2010). Prominent among these observations were those performed by a SNAP (snapshot) program conducted by J.~T.~Lauroesch (program nr.~8241) in 1999 and 2000. In the current new study, we have also reanalyzed the data presented by JT01 because we have now adopted some new, more refined analysis procedures (see Appendix~\ref{improvements}). All of the data were processed in the manner described by JT01, except that for observations outside their survey we did not need to implement an intensity rebalancing between MAMA half pixels (see their \S4.2 for details), since these half pixel intensities were binned together beforehand. A small fraction of the observations had to be rejected because either (1) there was an insufficient amount of C~I present to perform a meaningful analysis with the signal-to-noise ratio at hand, or (2) the projected rotational velocity of the star was so low that stellar features interfered with the interstellar ones or made the continua too difficult to model. These unsuitable sight lines are identified by their target star names in Table~\ref{rejected}. We also rejected the central stars of planetary nebulae, since the interstellar components could be contaminated by contributions from gas in the nebular shell. \placetable{rejected} \begin{center} \begin{table}[h!] 
\caption{Rejected Sightlines\label{rejected}} \begin{tabular}{ c c } \tableline Insufficient & Stellar Line\\ C~I & Confusion\\ \tableline BD$+25\arcdeg\,$2534&CPD$-64\arcdeg\,$481\\ BD$-03\arcdeg\,$2179&HD$\,$1909\\ HD$\,$1999&HD$\,$3175\\ HD$\,$6456&HD$\,$30122\\ HD$\,$6457&HD$\,$37367\\ HD$\,$23873&HD$\,$43819\\ HD$\,$32039&HD$\,$44743\\ HD$\,$64109&HD$\,$52329\\ HD$\,$79931&HD$\,$62714\\ HD$\,$86360&HD$\,$93237\\ HD$\,$92536&HD$\,$94144\\ HD$\,$164340&HD$\,$106943\\ HD$\,$192273&HD$\,$108610\\ HD$\,$195455&HD$\,$175756\\ HD$\,$196867\\ HD$\,$201908\\ HD$\,$233622\\ \tableline \end{tabular} \end{table} \end{center} Initially, we had considered using observations recorded at lower resolution with the E140M mode of STIS to broaden the selection of targets, but a comparison of the results for a few stars that were also observed with the E140H mode indicated that unreliable results emerged from the lower resolution data as a result of improper treatments of unresolved, saturated profiles. (The correction scheme developed by Jenkins (1996) could not be used because various lines overlap each other, and the optical depths must go through a complicated transformation to obtain unique answers for the column densities of the three fine-structure levels, as described in \S5.2.1 of JT01.) Table~\ref{sightlines_table} presents the information on the 89 sightlines included in the present study. They span path lengths that range from about 0.2$\,$kpc to 6$\,$kpc and have a median length of 1.9$\,$kpc. We processed all of the data that we felt were acceptable, according to the principles outlined in the above two paragraphs. We made 2416 separate measurements, but since we oversampled the wavelength resolution of the spectrograph by a factor of 5.3, our determinations actually represent only about 460 independent samples in radial velocity. We refrained from applying any special selection criteria to make our sampling of regions more evenhanded.
For this reason, one must be aware of certain selection biases in the composite information presented in \S\ref{results} below. The following are some noteworthy considerations about our sample: \begin{enumerate} \item All sightlines terminate at the location of a bright, early-type star. Thus, it is inevitable that we will be probing an environment near such a star, or in fact a location near a grouping of many such stars, since they tend to be strongly clustered in space. In a number of cases, strong elevations in thermal pressures seen in certain radial velocity channels probably arise from either the effects of stellar winds or rapidly expanding H~II regions. As we will show in \S\ref{basic_dist}, a large portion of the C~I that we observe resides in regions with much higher than normal densities of starlight radiation. This is probably a consequence of the fact that the observations are biased toward a sampling of the progenitorial cloud complexes out of which the stars had formed. \item Regions where the density is low enough (or the local temperature or radiation density high enough) to shift the ionization equilibrium of carbon atoms more strongly than usual to its ionized form are missed in our sample. As we show in Figure~\ref{phase_diag}, practically all of what is classically known as the warm neutral medium (WNM, with $T\sim 9000\,$K) is invisible to us; our survey is restricted to the phase called the cold neutral medium (CNM, with $T\sim 80\,$K), with possibly some very limited sensitivity to gas in the thermally unstable intermediate temperatures. In addition, there are situations where C~I is detected at certain velocities, but the quality of the data is insufficient to measure reliably the thermal pressures, as we discuss in some detail in \S\ref{uncertainties}. Hence, such regions are excluded. \item Regions of moderate size that are very dense will have enough extinction in the UV to make stars behind them too faint to observe. 
This effect will result in our missing clouds that happen to be strongly compressed by turbulence or gravity. It is clear from the results shown in Column~5 of Table~\ref{sightlines_table} that a cutoff of our sample corresponds to a $B-V$ color excess of about 0.5, which in turn translates approximately to $N({\rm H})=3\times 10^{21}{\rm cm}^{-2}$ if we use the standard relation between E($B-V$) and $N({\rm H})$ in the ISM (Bohlin et al. 1978; Rachford et al. 2009). Also, portions of some of our strongest C~I absorption profiles were rejected from consideration because we sensed that they had velocity substructures that were saturated and not resolved by the instrument. \item Stars in certain programs were selected by observers because they had interesting properties. Of special relevance to our thermal pressure outcomes would be the observations that were designed to probe regions that were known to be either disturbed (e.g., showing high velocity gas) or at higher than normal densities (e.g., showing unusually strong molecular absorptions).
Often, one can sense the characters of such selections by reading the abstracts of the programs that made the observations.\footnote{The archive root names listed in Column~9 of Table~\ref{sightlines_table} can be used as a guide on the MAST {\it HST\/} search web page to find the Proposal ID and its abstract.} \end{enumerate} \placetable{sightlines_table} \clearpage \begin{deluxetable}{ c c c c c c c c c } \tabletypesize{\small} \rotate \tablecolumns{9} \tablewidth{0pt} \tablecaption{Properties of the Sightlines\label{sightlines_table}} \tablehead{ \colhead{Target} & \multicolumn{2}{c}{Galactic Coordinates (deg.)} & \colhead{Spectral}&\colhead{}&\colhead{Distance}\tablenotemark{a}& \colhead{H$_2~T_{01}$\tablenotemark{b}}&\colhead{}&\colhead{Archive Exposure}\\ \cline{2-3} \colhead{Star} & \colhead{$\ell$} & \colhead{$b$}&\colhead{Type}& \colhead{E($B-V$)\tablenotemark{a}}&\colhead{(kpc)}&\colhead{(K)}& \colhead{Ref.\tablenotemark{c}}&\colhead{Root Name(s)}\\ \colhead{(1)}& \colhead{(2)}& \colhead{(3)}& \colhead{(4)}& \colhead{(5)}& \colhead{(6)}& \colhead{(7)}& \colhead{(8)}& \colhead{(9)} } \startdata CPD$-$59\arcdeg$\,$2603&287.590&$-$0.687&O5$\,$V((f))&0.36&3.5&77&6& O40P01D6Q\\ &&&&&&&&O4QX03010$-$30\\ HD$\,$108&117.928&1.250&O6pe&0.42&3.8&\nodata&&O5LH01010$-$80\\ HD$\,$1383&119.019&$-$0.893&B1$\,$II&0.37&2.9&\nodata&&O5C07C010\\ HD$\,$3827&120.788&$-$23.226&B0.7$\,$Vn&0.05&1.8&\nodata&&O54309010$-$30\\ &&&&&&&&O54359010$-$30\\ HD$\,$15137&137.462&$-$7.577&O9.5$\,$II$-$IIIn&0.24&3.5&104&7&O5LH02010$-$80\\ HD$\,$23478&160.765&$-$17.418&B3$\,$IV&0.20&0.47&55&7&O6LJ01020\\ HD$\,$24190&160.389&$-$15.184&B2$\,$Vn&0.23&0.82&66&7&O6LJ02020\\ HD$\,$24534 (X Per)&163.083&$-$17.137&O9.5$\,$III&0.31&2.1&57&6&O66P02010\\ &&&&&&&&O64813010$-$20\\ &&&&&&&&O66P01010$-$20\\ HD$\,$27778 (62 Tau)&172.764&$-$17.393&B3$\,$V&0.34&0.23&55&2&O59S01010$-$20\\ HD$\,$32040&196.071&$-$22.605&B9$\,$Vn&0.00&0.16&\nodata&&O56L04010$-$30\\ &&&&&&&&O8MM02010$-$30\\
HD$\,$36408&188.498&$-$8.885&B7$\,$IV&0.11&0.19&\nodata&&O8MM04020$-$30\\ HD$\,$37021 ($\theta^1$~Ori~B) &209.007&$-$19.384&B3$\,$V&0.42&0.56&\nodata&&O59S02010\\ HD$\,$37061 ($\nu$~Ori)&208.926&$-$19.274&B0.5$\,$V&0.44&0.64&\nodata&&O59S03010\\ HD$\,$37903&206.853&$-$16.538&B1.5$\,$V&0.29&0.83&68&8&O59S04010\\ HD$\,$40893&180.086&4.336&B0$\,$IV&0.31&3.1&78&8&O8NA02010$-$20\\ HD$\,$43818 (11~Gem)&188.489&3.874&B0$\,$II&0.45&1.9&\nodata&&O5C07I010\\ HD$\,$44173&199.002&$-$1.316&B5$\,$III&0.05&0.52&\nodata&&O5C020010\\ HD$\,$52266&219.133&$-$0.680&O9.5$\,$IVn&0.22&1.8&\nodata&&O5C027010\\ HD$\,$69106&254.519&$-$1.331&B0.5$\,$IVnn&0.14&1.5&80&10&O5LH03010$-$50\\ HD$\,$71634&273.326&$-$11.524&B7$\,$IV&0.09&0.32&\nodata&&O5C090010\\ HD$\,$72754 (FY Vel)&266.828&$-$5.815&B2$\,$I:pe&0.31&3.9&\nodata&&O5C03E010\\ HD$\,$75309&265.857&$-$1.900&B1$\,$IIp&0.18&2.9&65&3&O5C05B010\\ HD$\,$79186 (GX Vel)&267.366&2.252&B5$\,$Ia&0.23&1.9&\nodata&&O5C092010\\ HD$\,$88115&285.317&$-$5.530&B1.5$\,$IIn&0.12&3.7&145&3&O54305010$-$60\\ HD$\,$91824&285.698&0.067&O7$\,$V&0.22&3.0&61&6&O5C095010\\ HD$\,$91983&285.877&0.053&B1$\,$III&0.14&3.0&61&7&O5C08N010\\ HD$\,$93205&287.568&$-$0.706&O3$\,$Vf+&0.34&3.3&105&6&O4QX01010$-$40\\ HD$\,$93222&287.738&$-$1.016&O7$\,$IIIf&0.32&3.6&77&6&O4QX02010$-$40\\ HD$\,$93843&288.243&$-$0.902&O5$\,$IIIf&0.24&3.5&107&10&O5LH04010$-$40\\ HD$\,$94454&295.693&$-$14.725&B8$\,$III&0.19&0.30&74&7&O6LJ0H010\\ HD$\,$94493&289.016&$-$1.177&B1$\,$Ib&0.15&3.4&\nodata&&O54306010$-$20\\ HD$\,$99857&294.779&$-$4.940&B0.5$\,$Ib&0.27&3.5&83&10&O54301010$-$60\\ &&&&&&&&O54301020\\ &&&&&&&&O54301030\\ &&&&&&&&O54301040\\ &&&&&&&&O54301050\\ &&&&&&&&O54301060\\ HD$\,$99872&296.692&$-$10.617&B3$\,$V&0.29&0.24&66&7&O6LJ0I020\\ HD$\,$102065&300.027&$-$17.996&B2$\,$V&0.28&0.18&59&6&O4O001010$-$30\\ HD$\,$103779&296.848&$-$1.023&B0.5$\,$Iab&0.17&4.3&86&6&O54302010$-$20\\ HD$\,$104705 (DF Cru)&297.456&$-$0.336&B0$\,$Ib&0.17&5.0&92&6&O57R01010,~30\\ HD$\,$106343 (DL 
Cru)&298.933&$-$1.825&B1.5$\,$Ia&0.23&3.3&\nodata&&O54310010$-$20\\ HD$\,$108002&300.158&$-$2.482&B2$\,$Ia/Iab&0.18&4.2&77&7&O6LJ08020\\ HD$\,$108639&300.218&1.950&B0.2$\,$III&0.26&2.4&88&7&O6LJ0A020\\ HD$\,$109399&301.716&$-$9.883&B0.7$\,$II&0.19&2.9&\nodata&&O54303010$-$20\\ HD$\,$111934 (BU Cru)&303.204&2.514&B1.5$\,$Ib&0.32&2.3&\nodata&&O5C03N010\\ HD$\,$112999&304.176&2.176&B6$\,$Vn&0.17&0.45&96&7&O6LJ0C010$-$20\\ HD$\,$114886&305.522&$-$0.826&O9$\,$IIIn&0.32&1.8&92&7&O6LJ0D020\\ HD$\,$115071&305.766&0.153&B0.5$\,$Vn&0.40&2.7&71&7&O6LJ0E010$-$20\\ HD$\,$115455&306.063&0.216&O7.5$\,$III&0.40&2.6&81&7&O6LJ0F010$-$20\\ HD$\,$116781&307.053&$-$0.065&B0$\,$IIIne&0.31&2.2&\nodata&&O5LH05010$-$40\\ HD$\,$116852&304.884&$-$16.131&O9$\,$III&0.14&4.5&70&6&O5C01C010\\ &&&&&&&&O63571010\\ &&&&&&&&O8NA03010$-$20\\ HD$\,$120086&329.611&57.505&B2$\,$V&0.04&0.99&\nodata&&O5LH06010$-$50\\ HD$\,$121968&333.976&55.840&B1$\,$V&0.11&3.1&38&6&O57R02010$-$20\\ HD$\,$122879&312.264&1.791&B0$\,$Ia&0.29&3.3&90&7&O5C037010\\ &&&&&&&&O5LH07010$-$40\\ &&&&&&&&O6LZ57010\\ HD$\,$124314&312.667&$-$0.425&O6$\,$Vnf&0.43&1.4&74&7&O54307010$-$20\\ HD$\,$140037&340.151&18.042&B5$\,$III&0.08&0.77&\nodata&&O6LJ04010\\ HD$\,$142315&348.981&23.300&B9$\,$V&0.10&0.15&\nodata&&O5C03Y010\\ HD$\,$142763&31.616&46.960&B8$\,$III&0.01&0.28&\nodata&&O5C040010\\ HD$\,$144965&339.044&8.418&B2$\,$Vne&0.27&0.51&70&7&O6LJ05010\\ HD$\,$147683&344.858&10.089&B4$\,$V+B4$\,$V&0.28&0.37&58&7&O6LJ06020\\ HD$\,$147888 ($\rho$~Oph~D)&353.648&17.710&B3$\,$V&0.42&0.12&44&7&O59S05010\\ HD$\,$148594&350.930&13.940&B8$\,$Vnn&0.18&0.19&\nodata&&O5C04A010\\ HD$\,$148937&336.368&$-$0.218&O6.5$\,$I&0.55&2.2&\nodata&&O6F301010$-$20\\ HD$\,$152590&344.842&1.830&O7$\,$V&0.37&3.6&64&7&O5C08P010\\ &&&&&&&&O8NA04010$-$20\\ HD$\,$156110&70.996&35.713&B3$\,$Vn&0.03&0.62&\nodata&&O5C01K010\\ HD$\,$157857&12.972&13.311&O6.5$\,$IIIf&0.37&3.1&86&7&O5C04D010\\ HD$\,$165246&6.400&$-$1.562&O8$\,$Vn&0.33&1.9&\nodata&&O8NA05010$-$20\\ 
HD$\,$175360&12.531&$-$11.289&B6$\,$III&0.12&0.24&\nodata&&O5C047010\\ HD$\,$177989&17.814&$-$11.881&B0$\,$III&0.11&6.0&52&6&O57R04010$-$20\\ &&&&&&&&O57R03010$-$20\\ HD$\,$185418&53.604&$-$2.171&B0.5$\,$V&0.38&1.2&105&6&O5C01Q010\\ HD$\,$192639&74.903&1.479&O7$\,$Ibf&0.56&2.1&98&2&O5C08T010\\ HD$\,$195965&85.707&4.995&B0$\,$V&0.19&1.1&91&7&O6BG01010$-$20\\ HD$\,$198478 (55 Cyg)&85.755&1.490&B3$\,$Ia&0.43&1.3&\nodata&&O5C06J010\\ HD$\,$198781&99.946&12.614&B0.5$\,$V&0.26&0.69&65&7&O5C049010\\ HD$\,$201345&78.438&$-$9.544&O9$\,$V&0.14&2.2&147&6&O5C050010\\ &&&&&&&&O6359P010\\ HD$\,$202347&88.225&$-$2.077&B1.5$\,$V&0.11&0.95&116&3&O5G301010,~40$-$50\\ HD$\,$203374&100.514&8.622&B2$\,$Vn&0.43&0.34&87&10&O5LH08010$-$60\\ HD$\,$203532&309.461&$-$31.739&B3$\,$IV&0.24&0.22&49&6&O5C01S010\\ HD$\,$206267&99.292&3.738&O6.5$\,$V&0.45&0.86&65&2&O5LH09010$-$40\\ HD$\,$206773&99.802&3.620&B0$\,$V:nnep&0.39&0.82&94&4&O5C04T010\\ HD$\,$207198&103.138&6.995&O9.5$\,$Ib$-$II&0.47&1.3&66&2&O59S06010$-$20\\ HD$\,$208440&104.031&6.439&B1$\,$V&0.27&1.1&75&4&O5C06M010\\ HD$\,$208947&106.550&8.996&B2$\,$V&0.16&0.56&\nodata&&O5LH0A010$-$40\\ HD$\,$209339&104.579&5.869&B0$\,$IV&0.24&1.2&90&4&O5LH0B010$-$40\\ &&&&&&&&O6LZ92010\\ HD$\,$210809&99.849&$-$3.130&O9$\,$Iab&0.28&4.3&87&7&O5C01V010\\ HD$\,$210839 ($\lambda$~Cep)&103.829&2.611&O6$\,$Infp&0.49&1.1&72&2&O54304010$-$20\\ HD$\,$212791&101.644&$-$4.303&B3ne&0.18&0.62&\nodata&&O5C04Q010\\ HD$\,$218915&108.064&$-$6.893&O9.5$\,$Iabe&0.21&5.0&86&6&O57R05010,~30\\ HD$\,$219188&83.031&$-$50.172&B0.5$\,$IIIn&0.09&2.1&103&1&O6E701010\\ &&&&&&&&O8DP01010\\ &&&&&&&&O8SW01010\\ HD$\,$220057&112.131&0.210&B3$\,$IV&0.17&0.77&65&7&O5C01X010\\ HD$\,$224151&115.438&$-$4.644&B0.5$\,$II$-$III&0.34&1.3&252&10& O54308010$-$20\\ HDE$\,$232522&130.701&$-$6.715&B1$\,$II&0.14&6.1&\nodata&&O5C08J010\\ HDE$\,$303308&287.595&$-$0.613&O3$\,$Vf&0.33&3.8&86&6&O4QX04010$-$40\\ \enddata \tablenotetext{a}{$B-V$ color excesses and distances to the stars were either taken 
from listings of the same stars in Bowen et al. (2008) or Jenkins (2009), or else they were computed by using the same procedures that they invoked.} \tablenotetext{b}{The molecular hydrogen rotational temperature from $J=0$ to 1 that was adopted as an indicator for the kinetic temperature of the intervening gas.} \tablenotetext{c}{ Reference for the source of the $T_{01}$ value given in the previous column: (1) Savage et al. (1977); (2) Rachford et al. (2002); (3) Andr\'e et al. (2003); (4) Pan et al. (2005); (5) Lee et al. (2007); (6) Burgh et al. (2007); (7) Sheffer et al. (2008); (8) Rachford et al. (2009); (9) Burgh et al. (2010); (10) ``J.~M.~Shull (2009) in preparation'' listed in Burgh et al. (2010); (11) Jensen et al. (2010).} \end{deluxetable} \clearpage \placefigure{phase_diag} \begin{figure*} \epsscale{2.2} \plotone{fig1.eps} \caption{A plot of the thermal pressure, $\log(p/k)$, vs. the density of hydrogen nuclei, $\log n(H)$, showing the locations where heating equals cooling in the ISM near our part of the Galaxy (thick curve), according to the thermal equilibrium calculations by Wolfire et al. (2003) (their ``standard model''; see their Fig.~8). Portions of this curve where the slopes are positive are thermally stable and form the distinct phases called the warm neutral medium (WNM) and cold neutral medium (CNM), as indicated. The portion of the curve that has a negative slope has a balance between heating and cooling, but is thermally unstable (Field 1965). In the absence of rapidly changing pressures and densities due to turbulence, the lowest allowable pressure for the CNM is at the horizontal dash-dot line labeled ``CNM $\log (p/k)_{\rm min}$.'' Different temperatures in this diagram are revealed by the straight, dashed lines, constructed using the assumption that He/H=0.09 and, for $T<1000\,$K, $f({\rm H}_2)=0.6$ (see \S\protect\ref{particle_mix}).
The thin, gently curved lines show constant values for the expected values of ${\rm C~I_{total}/(C~II+C~I_{total}})$, as indicated, according to our equation for ionization equilibrium (see Eq.~\protect\ref{C_ionization} in \S\protect\ref{starlight} and the accompanying text), under the assumption that the starlight intensity is equal to the average level $I_0$ given by Mathis et al. (1983). These curves demonstrate that the WNM is virtually invisible in our survey of C~I. \label{phase_diag}} \end{figure*} \clearpage \section{C~I Results}\label{results} Descriptions of our analysis and the mathematical details of the interpretation of the C~I absorption multiplets covered by {\it HST\/} were presented by JT01. Except for some enhancements in technique and the use of more up to date atomic data discussed in Appendix~\ref{improvements}, we have implemented once again the methods of JT01. Briefly, after we normalized the intensity profiles to an assumed continuum (that usually varies smoothly with wavelength), we converted them to apparent optical depths (Savage \& Sembach 1991). For the average spread in radial velocity of the C~I in a typical sightline, the individual lines in any given multiplet overlap each other. This introduces confusion in the interpretation of the optical depths. However, we can unravel this confusion by observing different multiplets, because the locations of different transitions with respect to each other change, thus allowing one to resolve ambiguous mixtures of opacities. 
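As a concrete illustration of the apparent optical depth method cited above, the conversion from a normalized intensity profile to an apparent column density per unit velocity can be sketched in a few lines. This is a generic Savage \& Sembach (1991) sketch, not our actual reduction code; the oscillator strength and wavelength are inputs supplied by the caller.

```python
import numpy as np

def apparent_column_density(flux, continuum, f_osc, wavelength_A):
    """Convert an absorption profile to an apparent column density per
    unit velocity, N_a(v), following Savage & Sembach (1991).

    Returns N_a(v) in atoms cm^-2 (km s^-1)^-1, for wavelength in
    Angstroms and velocity channels in km s^-1.
    """
    tau_a = np.log(continuum / flux)  # apparent optical depth per channel
    # Standard conversion: N_a(v) = 3.768e14 * tau_a(v) / (f * lambda[A])
    return 3.768e14 * tau_a / (f_osc * wavelength_A)
```

Integrating $N_a(v)$ over velocity then gives the total apparent column density for a line, subject to the saturation caveats discussed in the text.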
JT01 devised a way to construct a system of linear equations that could be solved to reveal the apparent column densities $N({\rm C~I})$, $N({\rm C~I^*})$, and $N({\rm C~I^{**}})$ as a function of velocity.\footnote{The notation adopted here is consistent with that of JT01: $N({\rm C~I})$ refers to the column density of atomic carbon in its $^3P_0$ ground fine-structure state, while $N({\rm C~I^*})$ and $N({\rm C~I^{**}})$ refer to the column densities of the excited $^3P_1$ and $^3P_2$ levels, respectively. The quantity $N({\rm C~I_{ total}})$ equals the sum of the column densities in all three levels. Strictly speaking, we measure {\it apparent\/} column densities [$N_a$ in the notation of Savage \& Sembach (1991)], which differ from true column densities because the recorded intensities are smoothed by the instrumental line spread function. In the interest of simplicity, we will refer to such apparent column densities as simply $N$ and treat them as if they were true column densities. Possible errors in this assumption and our avoidance of cases where they are large are discussed in \S\protect\ref{distortions}.} Once this has been done, we can evaluate the quantities $f1\equiv N({\rm C~I^*})/N({\rm C~I_{ total}})$ and $f2\equiv N({\rm C~I^{**}})/N({\rm C~I_{ total}})$, which are useful representations of the excitation conditions when we want to understand not only the physical conditions in any given absorbing region, but also possible combinations of contributions from differing regions that overlap each other at a particular velocity. As explained originally by Jenkins \& Shaya (1979) and once again by JT01, the balance of collisional excitations (and de-excitations) against the spontaneous radiative decay of the excited levels establishes an equilibrium value for the level populations that depends on the local density and temperature (and to a much lesser extent, the composition of the gas).
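The flavor of this linear inversion can be conveyed with a toy example. The blending coefficients below are hypothetical, and the real JT01 system couples measurements across velocity through the known wavelength offsets of the transitions; here we simply show how overdetermined blended opacities resolve into the three level populations.

```python
import numpy as np

# Hypothetical blending coefficients A[m, j]: each row is one opacity
# measurement within the C I multiplets, expressed as a linear mix of
# the three fine-structure levels (columns: C I, C I*, C I**), with
# weights proportional to f_j * lambda_j of the overlapping transitions.
A = np.array([[1.0, 0.3, 0.0],
              [0.2, 1.0, 0.1],
              [0.0, 0.4, 1.0],
              [0.5, 0.5, 0.5]])
N_true = np.array([8.0, 2.0, 0.5])   # level column densities (arbitrary units)
tau = A @ N_true                     # noiseless synthetic blended opacities

# Solve the overdetermined linear system in the least-squares sense
N_fit, *_ = np.linalg.lstsq(A, tau, rcond=None)

f1 = N_fit[1] / N_fit.sum()   # N(C I*)  / N(C I_total)
f2 = N_fit[2] / N_fit.sum()   # N(C I**) / N(C I_total)
```

Because different multiplets shift the relative positions of the transitions, rows like these become linearly independent, which is what breaks the degeneracy among the three levels.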
As the densities increase at any given temperature, the locations of points on a diagram of $f1$ vs. $f2$ trace an upward arching curve (see Fig.~\ref{all_v_f1f2}) that stretches from the origin for low densities to a point at very high densities that approaches a Boltzmann distribution for the levels at the temperature in question. When the absorptions from two or more regions are superposed at a single velocity, the outcome for $f1$ and $f2$ is at the ``center of mass'' for the values that apply to the individual contributors, with respective weights equal to their values of $N({\rm C~I_{total}})$. This outcome is not without some ambiguity, since various combinations of conditions in any ensemble of different clouds can produce the same result. We will address this issue later in \S\ref{admixtures} when we make a simplifying assumption about such mixtures, and in \S\ref{pileup} we will discuss the consequences of possible averaging effects that are difficult to recognize. Some additional complexity emerges when one considers the effects of optical pumping by starlight photons, which we will cover in \S\ref{derivations}. Figure~\ref{all_v_f1f2} shows the outcome for all of our measurements of ($f1$, $f2$) at each velocity interval that showed acceptable results, and for every star in the survey. The area of each dot in this diagram is proportional to $N({\rm C~I_{total}})$. It is clear that practically all of the points fall above the equilibrium calculations for $f1$ and $f2$, even for a 300$\,$K temperature that is well above the nominal values for the CNM. From this we conclude that either there is always a mixture of two or more regions with vastly differing conditions for every velocity channel or that for some reason(s) the curves are incorrect or inappropriate (we will touch upon this issue later in \S\ref{misleading_conclusion}). A generalized picture of how we interpret some plausible admixtures will be presented in \S\ref{admixtures}. 
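In practice the ``center of mass'' just described is nothing more than an $N({\rm C~I_{total}})$-weighted mean of the individual ($f1$, $f2$) values, e.g.:

```python
import numpy as np

def composite_f1f2(f1_vals, f2_vals, n_tot):
    """'Center of mass' of (f1, f2) points weighted by N(C I_total)."""
    w = np.asarray(n_tot, dtype=float)
    return (np.average(f1_vals, weights=w),
            np.average(f2_vals, weights=w))

# Illustrative two-component blend (hypothetical numbers): a dominant
# low-pressure region plus a 10% admixture of high-pressure gas.
f1_c, f2_c = composite_f1f2([0.10, 0.38], [0.02, 0.49], [9.0, 1.0])
```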
\placefigure{all_v_f1f2} \clearpage \begin{figure*}[h!] \plotone{fig2.eps} \caption{Measurements of $f1$ and $f2$ for all velocity bins, each having a width of $0.5\,{\rm km~s}^{-1}$, for the 89 stars in the survey that had uncertainties $\sigma(f1)$ and $\sigma(f2)$ less than 0.03. The area of each dot is proportional to the respective value of $N({\rm C~I_{total}})$, with a normalization in size as shown in the box at the upper left portion of the plot. The white $\times$ located at $f1=0.209$, $f2=0.068$ represents the ``center of mass'' of all of the dots. The curves indicate the expected level populations for three different temperatures, assuming the gas mixture as specified in \S\protect\ref{particle_mix}, with different values of $\log (p/k)$ indicated with dots (adjacent dots represent differences of 0.1~dex). The large open circles on the curves indicate integer values of $\log (p/k)$ with accompanying numbers to indicate their values. Populations that are proportional to the degeneracies of the levels are indicated by the + sign labeled ``$(p,\,T)\rightarrow\infty$.'' \label{all_v_f1f2}} \end{figure*} \clearpage \section{Properties of the C~I-Bearing Gas}\label{properties} \subsection{Corrections for the Ionization of Carbon}\label{ionization_corrections} It is important to realize that C~I in the gas that we observe is a minor constituent, since the ionization potential of neutral carbon (11.26$\,$eV) is below that of neutral hydrogen (13.6$\,$eV). Most of the carbon atoms in H~I regions are singly ionized. The relative proportion of carbon in the neutral form can vary by enormous factors according to the local density of the gas and the strength of the local radiation field that is responsible for ionizing the atoms. 
As we attempt to derive an evenhanded picture of the pressure distribution for the general neutral gas, rather than just a value that is weighted in proportion to the amount of C~I that is present, we must devise a means for assessing how much C~II accompanies the C~I. In essence, we use C~II as an indicator for the total amount of the neutral material. Direct measures of $N({\rm C~II})$ are very difficult to carry out. The only available transition in the wavelength bands covered by our survey is the one at 1334.53$\,$\AA. This line is strongly saturated, and the only way to measure $N({\rm C~II})$ with this feature is by sensing the strength of its damping wings (Sofia et al. 2011). However, this measure applies to C~II at all velocities, rather than at the velocities where we are able to make use of information from C~I. While there exists a very weak intersystem transition at 2325.40$\,$\AA, this feature is outside the wavelength coverage of most of our observations and also requires a very high signal-to-noise ratio for a reliable detection (Sofia et al. 2004). To overcome our inability to measure directly $N({\rm C~II})$ as a function of velocity, we instead used O~I as a proxy for C~II. The very weak O~I intersystem line at 1356$\,$\AA\ is ideal for tracing all but the smallest column densities of material per unit velocity. For velocity intervals over which the C~I absorptions could be measured reliably, we found that only on very rare occasions was the O~I line too weak to measure. For such instances we had to use the weakest line of S~II at 1250$\,$\AA\ as a substitute for O~I.\footnote{ Over very restricted velocity intervals there was an intermediate range of column densities per unit velocity where the O~I line was too weak to observe (apparent optical depth $\tau_{\rm a}<0.05$) and the S~II line was badly saturated ($\tau_{\rm a}>2.5$).
In this range, we adopted a geometric mean as an approximate compromise between the respective upper and lower limits.} For deriving $N$(C~II), we assumed that C, O and S were depleted below their respective protosolar abundances (Lodders 2003) by amounts equal to $-0.162$~dex, $-0.123$~dex and $-0.275$~dex, respectively, which corresponds to a moderate depletion strength ($F_*=0.5$) in the generalized representation of Jenkins (2009). The assumed abundance of O relative to C could be in error by about 0.05$\,$dex if the actual depletion strength is either $F_*=0.0$ or 1.0 instead of 0.5. Also, a few new determinations of $N({\rm C~II})$ by Sofia et al. (2011) based on fitting the damping wings of the strong line at 1334.53$\,$\AA\ instead of using the weak intersystem line suggest that the abundances of C in the ISM may be about 0.3$\,$dex lower than stated above, but this change would create a uniform offset that would apply to all of our cases. One can imagine that some of the C~II-bearing gas may be situated in fully ionized regions intersected by our sight lines, especially if most of the ionizing radiation in the ISM is below the ionization potential of singly ionized carbon (24.38$\,$eV). At velocities where we rely on O~I as a proxy, we are confident that our estimate for the amount of C~II applies only to neutral gas, since H~II regions are devoid of O~I because the ionizations of O and H are strongly coupled by a charge exchange reaction with a large rate constant (Field \& Steigman 1971; Chambaud et al. 1980; Stancil et al. 1999). The same is not true for S~II; its behavior should be similar to that of C~II (singly ionized S and C have ionization potentials within 1$\,$eV of each other). However, we find that the continuity over velocity between the O~I and S~II profiles is usually good, which argues against the existence of much contamination from H~II regions.
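The O~I-to-C~II scaling just described amounts to multiplying the measured $N$(O~I) by the depletion-corrected protosolar C/O ratio. A minimal sketch follows; the logarithmic abundances are entered as placeholder values and should be replaced by the actual Lodders (2003) protosolar figures, while the depletion offsets are those quoted in the text for $F_*=0.5$.

```python
# Placeholder logarithmic abundances, 12 + log(X/H); substitute the
# Lodders (2003) protosolar values in a real calculation.
LOG_C = 8.39   # assumed value
LOG_O = 8.69   # assumed value
DEPL_C = -0.162   # dex, Jenkins (2009) at F* = 0.5
DEPL_O = -0.123   # dex, Jenkins (2009) at F* = 0.5

def n_cii_from_oi(n_oi):
    """Estimate N(C II) in a velocity channel from a measured N(O I),
    assuming C II and O I trace the same neutral gas."""
    ratio = 10.0**((LOG_C + DEPL_C) - (LOG_O + DEPL_O))
    return n_oi * ratio
```

A uniform error in either adopted abundance shifts every channel by the same factor, consistent with the remark above that the Sofia et al. (2011) revision would produce only a global offset.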
\subsection{Kinetic Temperature of the C~I-bearing Gas}\label{T} The relationship between the thermal pressure and the fine-structure excitations has a weak dependence on temperature. Thus, even though this effect is small, we must still try to reduce the ambiguities in the measurement of $p$ when either $n$(H) or $T$ is unknown. Fortunately, we can make use of the fact that molecular hydrogen usually accompanies C~I and thus we can utilize measurements of the $J=0$ to 1 rotation temperature of H$_2$ to indicate the most probable kinetic temperature of the material for which we are making pressure measurements. Columns (7) and (8) of Table~\ref{sightlines_table} show the outcomes of such rotation temperature measurements, $T_{01}$, along with the sources in the literature where such measurements were listed. In cases where $T_{01}$ was not measured, we had no alternative but to adopt an arbitrary number $T=80\,$K, which is close to the median value of all of the measurements. However, variations in $T_{01}$ from one sightline to the next have {\it rms\/} deviations of only 30$\,$K, and deviations of this magnitude alter the outcome for $\log (p/k)$ by only about 0.06~dex. We acknowledge the presence of an unavoidable limitation that $T_{01}$ shows simply an average over all velocities, and it is weighted in proportion to the local density of hydrogen molecules, $n({\rm H}_2)$, instead of $n({\rm C~I_{total}})$. In some circumstances variations in temperature across different regions along our sightlines might compromise the accuracy of our results, but this is probably not a very important effect. \subsection{The Particle Mix}\label{particle_mix} The composition of the gas has a small, but nonnegligible effect on the expected outcomes for $f1$ and $f2$. 
For instance, for $ 2.8 \lesssim \log (p/k)\lesssim 3.8$ the inferred pressure for a given ($f1$, $f2$) for pure atomic hydrogen is about 0.1~dex lower than for the equivalent number density of pure H$_2$ with $T_{01}=80\,$K. Thus, in order to minimize the error in the interpretation of the measurements, it is good to adopt an estimate for the most probable mix of gas constituents. Any deviations in the true conditions from whatever we adopt for the molecular fraction, $f({\rm H}_2)=2n({\rm H}_2)/[2n({\rm H}_2)+n({\rm H~I})]$, will result in an error for $\log (p/k)$ of less than 0.1~dex. We estimate that approximately half of the material in our lines of sight arises from the WNM, which is free of H$_2$, while the remaining half (CNM) that we can see with C~I has an appreciable molecular content. If we assume that the CNM has $f({\rm H}_2)=0.60$, then an overall value $f({\rm H}_2)=0.42$ would apply to the entire sightline. The latter value is very close to the median outcome for $f({\rm H}_2)$ found by Rachford et al. (2009) in their {\it FUSE\/} survey of sightlines similar to the ones in the present study. We therefore adopt the assumption that $f({\rm H}_2)=0.60$ in our C~I-bearing gas, and the mix of ortho- and para-H$_2$ is governed by the determination of $T_{01}$ (which is set to 80$\,$K if unknown). Another constituent is helium, whose ratio to hydrogen in atoms and molecules is assumed to be the protosolar value 0.094 given by Lodders (2003). In their normal concentrations in the CNM, electrons and protons have a negligible influence on $f1$ and $f2$. \subsection{Local Starlight Intensity}\label{starlight} In \S\ref{ionization_corrections} we explained how we derive the amount of C~II that accompanies the C~I. This determination at any particular velocity in a given sightline has two applications. First, as we indicated earlier, it represents our best estimate for the total amount of neutral material associated with the C~I.
Second, the ratio of C~II to C~I allows us to estimate the local radiation density, which in turn will be useful for applying a correction to the equilibrium $f1$ and $f2$ values that allows for effects of optical pumping of the fine-structure levels (see \S\ref{initial_estimates} and \S\ref{pumping} for details). As we will show later in \S\ref{basic_dist} and \S\ref{origins}, the radiation density outcomes are by themselves of special interest in our overall outlook on the distribution of thermal pressures. The ionization balance of carbon atoms in the neutral ISM is given by the relation \begin{eqnarray}\label{C_ionization} n({\rm C~I_{total}})(I/I_0)\Gamma_0({\rm C~I})&= n({\rm C~II})[\alpha_e({\rm C~II},T)n(e)&\nonumber\\ &+\alpha_g({\rm C~II},n(e),I,T)n({\rm H})]~,& \end{eqnarray} where the photoionization rate $\Gamma_0({\rm C~I})=2.0\times 10^{-10}{\rm s}^{-1}$ (Weingartner \& Draine 2001a) if the radiation field density $I$ is equal to a value $I_0$ specified by Mathis et al. (1983) for the average intensity of ultraviolet starlight in our part of the Galaxy, $\alpha_e({\rm C~II}, T)$ is the radiative plus dielectronic recombination coefficient of C~II with free electrons as a function of temperature $T$ (Shull \& Van Steenberg 1982), and $\alpha_g({\rm C~II},n(e),I,T)$ is the C~II recombination rate due to collisions with dust grains (and subsequent transfer of an electron) normalized to the local hydrogen density (Weingartner \& Draine 2001a).\footnote{In the notation of Weingartner \& Draine (2001a), this electron transfer rate from grains is expressed as $\alpha_g({\rm C}^+,\psi,T)$, where $\psi=GT^{\onehalf} n(e)^{-1}$ and $G=1.13$ for the interstellar radiation field of Mathis et al. 
(1983).} To solve for $n(e)$, we assume that free electrons are created by both the photoionization of some heavy elements, amounting to $2\times 10^{-4}n(H)$, supplemented by electrons liberated from the cosmic-ray ionization of hydrogen at a rate $\zeta_{\rm CR}=2\times 10^{-16}{\rm s}^{-1}$ (Indriolo et al. 2007; Neufeld et al. 2010).\footnote{For the column densities of hydrogen considered here, the average ionization from x-rays is almost negligible by comparison (Wolfire et al. 1995).} In order to estimate the density of electrons created by these cosmic-ray ionizations, one must calculate the balance between the creation of free protons with a density $n(p)$ against their recombination with free electrons and also electrons on dust grains using an equation analogous to Eq.~\ref{C_ionization}, \begin{eqnarray}\label{H_ionization} \zeta_{\rm CR}n({\rm H})=n(p)[\alpha_e({\rm H~II},T)n(e)&&~~~~~~~~~\nonumber\\ +\alpha_g({\rm H~II},n(e),I,T)n({\rm H})]~.~~~~~~~~~~ \end{eqnarray} As with the case for C~II, we obtain a formula for $\alpha_g({\rm H~II},n(e),I,T)$ from Weingartner \& Draine (2001a). There may be some shortcomings in our simple formulation in Eq.~\ref{C_ionization}. Welty et al. (1999) found inconsistencies in the determinations of electron densities using the ratios of neutral and ionized forms of different elements, and these problems are not resolved when the grain recombination processes are included (Weingartner \& Draine 2001a). Either the rates incorporated into Eq.~\ref{C_ionization} are inaccurate or other kinds of reactions may be important, such as charge exchange with protons, or the formation and destruction of CO (van Dishoeck \& Black 1988) or other C-bearing diatomic molecules (Prasad \& Huntress 1980; van Dishoeck \& Black 1986, 1989). 
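For a single velocity channel, solving Eq.~\ref{C_ionization} for the starlight scaling factor $I/I_0$ is a one-line rearrangement. In this sketch the recombination terms are left as caller-supplied inputs rather than the full temperature- and $\psi$-dependent rates of Shull \& Van Steenberg (1982) and Weingartner \& Draine (2001a), and the sample numbers in the test are purely illustrative.

```python
# Photoionization rate of C I at I = I0, from Weingartner & Draine
# (2001a), as quoted in the text.
GAMMA0_CI = 2.0e-10   # s^-1

def starlight_ratio(n_cii_over_ci, alpha_e, n_e, alpha_g, n_h):
    """I/I0 from the carbon ionization balance of Eq. (1):

        n(CI_tot) * (I/I0) * Gamma0 = n(CII) * [alpha_e*n(e) + alpha_g*n(H)]

    alpha_e: electron recombination coefficient of C II (cm^3 s^-1)
    alpha_g: grain-assisted recombination rate, normalized to n(H)
    """
    return n_cii_over_ci * (alpha_e * n_e + alpha_g * n_h) / GAMMA0_CI
```

The same rearrangement applied to the hydrogen balance of Eq.~\ref{H_ionization} yields $n(p)$, which feeds back into $n(e)$; in practice the two equations are iterated to convergence.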
If we use our determination of $N$(C~II) discussed in \S\ref{ionization_corrections}, compare it with $N({\rm C~I_{total}})$, and apply the equilibrium condition expressed by Eq.~\ref{C_ionization} to determine the local starlight density, we should obtain a reasonably accurate result for $I/I_0$ provided that there is not a large amount of C~II arising from the WNM at exactly the same velocity. If such an additional contribution is present, we will overestimate the radiation density. Likewise, this overestimate of $N$(C~II) accompanying the C~I will give a disproportionately large figure for the total amount of neutral material for the particular measurement at hand. \section{Admixtures of Different Kinds of Gas}\label{admixtures} It is clear that the 2-dimensional distribution of the points shown in Fig.~\ref{all_v_f1f2} represents a complex mixture of high and low pressure gas. Some insight on the possible nature of this mixture may be gained by stripping away the information about the 2-dimensional scatter of the points and focusing on just a composite value of ($f1$, $f2$) for all of the measurements, that is, a single ``center of mass'' of all of the points in the diagram. The location of this value is shown by a white $\times$ in the figure. The question we now ask is whether or not some simple, generalized distribution function for pressures can reproduce the observed composite ($f1$, $f2$) pair. A very elementary but plausible pressure relationship to propose is a lognormal distribution, which is appropriate for situations where random pressure fluctuations arise from turbulence (V\'azquez Semadeni 1994; Nordlund \& Padoan 1999; Kritsuk et al. 2007) -- see \S\ref{basic_dist}. Panels ($a$) through ($c$) in Figure~\ref{f1f2_lognormal} show reconstructions of the combinations of $f1$ and $f2$ for such a distribution for three different values for the width of the distribution in $\log (p/k)$ and a single value for the location of the peak. 
One might initially suppose that the curvature of the high pressure tail in this distribution along an arc that traces the single-region ($f1$, $f2$) combinations could pull the distribution's composite ($f1$, $f2$) to a location above the curve at low pressures. However, Figure~\ref{f1f2_lognormal} illustrates that this curvature appears to be insufficient to make our model lognormal composite values rise as high (in $f2$) as the measured ones without passing beyond the composite measurement of $f1$. Instead, it appears that a more complex picture is called for, one that requires the use of a bimodal distribution of pressures. The simplest such model is to propose the existence of a separate, small contribution from material at pressures $\log (p/k) > 5.5$ and a temperature $T>80\,$K. Panel ($d$) of the figure shows that such a contribution can solve our problem with the elevated composite ($f1$, $f2$). This high pressure contribution seems to be present in a majority of the sightlines, since most of the individual points that are shown in Fig.~\ref{all_v_f1f2} are pulled above the curves. \clearpage \placefigure{f1f2_lognormal} \begin{figure*} \plotone{fig3.eps} \caption{A schematic demonstration of the behavior of the composite values of $f1$ and $f2$ when the pressure distribution follows a simple lognormal behavior. The distribution is approximated by a discrete collection of H~I packets spaced 0.1~dex apart in pressure, illustrated by black dots strung along the equilibrium curve, with the area of each dot indicating the amount of H~I. After factoring in the ionization equilibrium equation for carbon atoms, the amounts of C~I are strongly biased in favor of higher densities. The amounts of C~I are indicated by the areas enclosed by open circles.
The ``center of mass'' for the C~I packets appears at the location of the square with an arrow pointing toward it, while the observed composite ($f1$, $f2$) shown in Fig.~\protect\ref{all_v_f1f2} is indicated with an $\times$. Panels ({\it a\/}) through ({\it c\/}) show lognormal distributions with 3 successively increasing values of $\sigma$, while panel ({\it d\/}) shows the outcome when a small, additional contribution of very high density gas is present (very small circles at the top of the equilibrium curve). \label{f1f2_lognormal}} \end{figure*} \clearpage \begin{figure*} \plotone{fig4.eps} \caption{A demonstration of how an observed ($f1$, $f2$) at a given velocity is decomposed into a superposition of low and high pressure regions. As in Fig.~\protect\ref{all_v_f1f2}, the equilibrium track is marked with a scale in $\log (p/k)$, with open circles and accompanying numbers showing integer values of this quantity. An assigned location of (0.38, 0.49) in this diagram applies to the high pressure component, and a line that projects from this point through the observed ($f1$, $f2$) intersects the equilibrium curve (for a given temperature) at a point $\times$ that corresponds to the pressure of the low pressure component (in this case $\log (p/k)_{\rm low} = 3.5$). The relative distances along the projection line indicate fractions of C~I in the two components: in this depiction, the length of the segment above ``Obs.'' indicates that the fraction of gas at normal (low) pressures is $g_{\rm low}=0.90$, and the remaining fraction in the high pressure component is indicated by the length of the lower line segment, yielding $g_{\rm high}=0.10$. \label{fig3}} \end{figure*} \clearpage The exact properties of the distribution of high pressure material are not known, but for ($f1$, $f2$) outcomes not far above the lower portions of the equilibrium curves, such details do not matter much for the low pressure gas.
We need only to know the general vicinity in the upper portion of the diagram where such material resides. Henceforth, we adopt (0.38, 0.49) for a fiducial $f1$ and $f2$ for the high pressure component (which corresponds approximately to $T=300\,$K, $n({\rm H~I})=4000\,{\rm cm}^{-3}$), and consider that from one observation to the next, the relative proportion of this gas is some small fraction of the total that can vary from one case to the next. The consequences of the high-pressure reference point differing from reality will be discussed briefly in \S\ref{overview}. \placefigure{fig3} Figure~\ref{fig3} is a schematic illustration of how we geometrically decompose an observed combination of $f1$ and $f2$ into a superposition of the two proposed types of gas, one at a very high pressure (but with poorly known physical conditions) and the other at a normal, low pressure. The basic strategy is to find where a projection from the assumed high density locus (0.38, 0.49) through the observed combination of $f1$ and $f2$ extends to a specific point on the equilibrium curve drawn for an appropriate temperature, as defined by $T_{01}$, if available. We regard the pressure that corresponds to this intersection point to represent the proper result for the low density gas. The ratio of C~I in the high pressure gas to the overall total is given by the quantity $1-g_{\rm low}$ (or simply $g_{\rm high}$) shown in the diagram. Two different considerations governed our choice of the fiducial high pressure ($f1$, $f2$) to be situated near the top of the points shown in Fig.~\ref{all_v_f1f2}. One is based on the plausibility that for most of the ordinary lines of sight the amount of this gas (in terms of the total gas, not just C~I) is probably a small fraction of all of the gas. 
At the highest pressures, C~I is more conspicuous (because of the shift in the ionization balance toward the neutral form of C), which in turn leads to a smaller quantity for the inferred amount of singly-ionized carbon. The other consideration is a practical one: the numerical results for the decompositions into low and high pressure gas are more stable when the high pressure point is well removed from the measured values of $f1$ and $f2$ -- this becomes important for outcomes at moderately high pressures above the main distribution of low pressures. In the next several sections, we will concentrate on the measurements of the low pressure component, using the method just described. Later, in \S\ref{high_pressure_comp}, we will turn our attention to the small amount of gas at high pressures and discuss its possible significance in our understanding of processes in the diffuse ISM. \section{Derivations of Thermal Pressures}\label{derivations} \subsection{Initial Estimates of Conditions}\label{initial_estimates} For each measurement of ($f1$, $f2$), we apply the construction demonstrated in Fig.~\ref{fig3} to determine the quantities $\log (p/k)_{\rm low} $ and $g_{\rm low}$. This determination is based on the initial assumption that the radiation field intensity $I(\lambda)$ in the gas in the low pressure regime is equal to the average Galactic value $I_0(\lambda)$ (see \S\ref{starlight}). Any deviation of the true intensity from this relationship will result in an error in $\log (p/k)$ because an incorrect calculation of the optical pumping rate was applied. (Different pumping rates have virtually no effect on the value of $g_{\rm low}$ since the equilibrium values of $f1$ and $f2$ simply shift along the curve that represents different pressures.) In order to obtain a more accurate result, we must evaluate how much the true intensity $I(\lambda)$ differs from $I_0(\lambda)$, so that our pumping correction will be more accurate. 
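The lever-rule construction of Fig.~\ref{fig3} can be sketched in a few lines of code. In the sketch below, only the fiducial high-pressure locus (0.38, 0.49) comes from the text; the tabulated equilibrium track, the helper `decompose`, and the synthetic observation are invented, schematic stand-ins for the true curves and data.

```python
import numpy as np

H = np.array([0.38, 0.49])        # fiducial high-pressure (f1, f2) locus

def decompose(obs, curve_logp, curve_f1f2):
    """Lever-rule decomposition of an observed (f1, f2) pair.

    A line from the high-pressure fiducial H through obs is extended
    until it meets the tabulated low-pressure equilibrium track; the
    intersection gives log(p/k)_low, and the ratio of segment lengths
    gives g_low, the fraction of C I in the low pressure component.
    """
    obs = np.asarray(obs, float)
    d = obs - H                              # direction H -> obs
    rel = curve_f1f2 - H                     # directions H -> curve points
    # perpendicular offset of each curve point from the projection line
    cross = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
    # only accept intersections beyond obs (t > 1 along the line)
    t = rel @ d / (d @ d)
    cross[t < 1.0] = np.inf
    i = int(np.argmin(cross))
    g_low = np.linalg.norm(d) / np.linalg.norm(rel[i])
    return curve_logp[i], g_low

# Schematic equilibrium track rising from low (f1, f2) toward H
curve_logp = np.arange(3.0, 5.5, 0.01)
s = (curve_logp - 3.0) / 2.5
curve = np.column_stack([0.05 + 0.33 * s, 0.02 + 0.47 * s**2])

# Synthesize an observation: 90% of the C I at log(p/k) = 3.5, 10% at H
x_low = curve[np.argmin(np.abs(curve_logp - 3.5))]
obs = 0.9 * x_low + 0.1 * H
logp_rec, g_rec = decompose(obs, curve_logp, curve)
```

Applied to a synthetic mixture matching the example of Fig.~\ref{fig3} (90\% of the C~I placed at $\log(p/k)_{\rm low}=3.5$), the routine recovers the input values of $\log(p/k)_{\rm low}$ and $g_{\rm low}$.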
We make the simplifying assumption that the relative distribution of intensity over $\lambda$ does not change appreciably from one location to the next, but that the overall level of radiation is a free parameter that can vary. We then estimate an initial approximation for the value of this parameter, which we call $I/I_0$. To derive this estimate we make use of the ionization equilibrium equation, Eq.~\ref{C_ionization}, to solve for $I/I_0$ for gas at any particular velocity increment, after replacing $n$(C~II) by $N$(C~II) (as determined according to the method described in \S\ref{ionization_corrections}), $n({\rm C~I_{total}})$ by $g_{\rm low}N({\rm C~I_{total}})$ and $n$(H) by our initial approximate value of $p/(kT_{01})$. In making the substitution for $n({\rm C~I_{total}})$, we assume that it is safe to declare that virtually all of the C~II is associated with the low pressure component of C~I, but this condition could be violated if the radiation density experienced by the high pressure gas is many orders of magnitude higher than that of the gas at ordinary pressures. \subsection{Convergence to Final Values}\label{convergence} After evaluating the new radiation intensity level $I/I_0$, we are in a position to repeat the calculation of $\log (p/k)_{\rm low}$ using a better representation for the shifts in the expected values of $f1$ and $f2$ caused by optical pumping.\footnote{The shift in the outcome for $\log(p/k)$ depends not only on the strength of the pumping field intensity, but also on $p/k$ itself. Figure~6 of Jenkins \& Shaya (1979) shows the ($f1,f2$) equilibrium tracks for $I/I_0=1$ and 10, but in terms of our revision of the pumping rates derived in \S\ref{pumping}, these tracks are equivalent to present-day values for $I/I_0$ equal to about 1.5 and 15. 
Representative values for $\log(p/k)$ and $\log(I/I_0)$ are listed for each sight line in Columns (6) and (7) of Table~\ref{obs_quantities_table}.} However, the new value for the pressure will have an impact on the density $n$(H) used in the equation for the ionization equilibrium; hence, the calculation of this balance must be repeated in order to derive a modified number for the radiation enhancement factor $I/I_0$, one that is better suited for a more accurate derivation of the pressure. We cycle through the alternation between pressure and ionization calculations many times until the densities and intensities converge to stable solutions. \section{Overview of Sightlines}\label{overview} Table~\ref{obs_quantities_table} presents a number of properties of the C~I and (inferred) C~II data that we obtained for the sightlines that were suitable for study. The numbers in this table give general indications integrated over velocity; they were not used in the analysis of the pressure distribution, which relied on the more detailed results that we obtained for the explicit velocity channels. We show our estimates of the total column densities of C~II in Column~(3) of the table. They compare favorably with the few direct determinations reported in the literature (see note $f$); rms deviations between our values and others amount to 0.22$\,$dex. When we consider that the direct measurements of $N$(C~II) have quoted errors of order 0.1$\,$dex, the magnitudes of the disagreements indicate that our values are probably uncertain by about 0.20$\,$dex. The largest deviations seem to occur for cases where the other determinations are higher than ours, which may indicate that we are not registering some C~II in ionized gas because we are mostly using O~I as an indicator (see \S\ref{ionization_corrections}).
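The alternation between the pressure and ionization calculations described in \S\ref{convergence} is, in effect, a fixed-point iteration. A minimal sketch follows, with invented toy update functions standing in for the pumping-corrected ($f1$, $f2$) tracks and for Eq.~\ref{C_ionization}; the toys are chosen only so that the loop has a well-defined fixed point, and none of their coefficients come from the survey.

```python
import math

def converge(pressure_from_I, intensity_from_logp, tol=1e-8, itmax=200):
    """Alternate pressure and ionization-balance updates until the
    radiation level I/I0 (and hence log p/k) stops changing.

    pressure_from_I(I_ratio) -> log10(p/k), folding in the optical
    pumping correction for an assumed radiation level;
    intensity_from_logp(logp) -> I/I0 from the ionization equilibrium.
    Both are placeholders for the full calculations described in the text.
    """
    I_ratio = 1.0                     # start at the mean Galactic field
    for _ in range(itmax):
        logp = pressure_from_I(I_ratio)
        I_new = intensity_from_logp(logp)
        if abs(I_new - I_ratio) < tol:
            return logp, I_new
        I_ratio = I_new
    raise RuntimeError("pressure/ionization iteration did not converge")

# Toy updates: stronger pumping mimics a slightly lower inferred pressure,
# and a higher pressure (density) implies a higher inferred I/I0.
p_of_I = lambda I: 3.6 - 0.1 * math.log10(I)
I_of_p = lambda lp: 2.0 * 10.0 ** (0.2 * (lp - 3.6))

logp_fix, I_fix = converge(p_of_I, I_of_p)
```

Because the composite map contracts strongly here, only a handful of cycles are needed; in practice the number of cycles depends on how sensitively each update responds to the other.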
The relative coverages of velocities where $f1$ and $f2$ could be measured, weighted by their respective values of $N$(C~II), are given for each sightline in Column~(4) of the table. These quantities vary by large factors from one case to the next. For the entire survey that spanned a total length of 180$\,$kpc, the total $N({\rm C~II})=3.8\times 10^{19}{\rm cm}^{-2}$, while that within our sampled velocity intervals represents $N({\rm C~II})=2.3\times 10^{19}{\rm cm}^{-2}$ (61\%). It is difficult to gauge the real fraction of the gas for which our measurements apply (i.e., CNM vs. CNM + WNM) because some of the WNM material can overlap in velocity with the CNM that we sampled. Based on approximate interpolations of the velocity profiles of gas that is relatively free of C~I, we estimate the fraction to be in the general vicinity of 15\%, which means that on average our determinations could be systematically low by about $-0.07\,$dex. Uncertainties in $g_{\rm low}$ and $\log(p/k)_{\rm low}$ are probably dominated by deviations of the real high-pressure conditions from those that apply to our adopted location for $f1,f2=(0.38,0.49)$, as we outlined in \S\ref{admixtures}. One can estimate the magnitudes of such deviations by examining plausible alternative geometrical constructions of the type depicted in Fig.~\ref{fig3}. For example, if conditions in the high pressure gas are closer to $T=100\,$K, $\log(p/k)=5.3$ and $g_{\rm high}\approx 0.1$ (i.e., twice the general average), an apparent value of $\log (p/k)_{\rm low}= 3.5$ may be $0.1\,$dex higher than the true value. The magnitude of this effect scales in proportion to $g_{\rm high}$, and it is diminished for higher values of $\log (p/k)_{\rm low}$. In the light of our remarks about various forms of sampling bias in \S\ref{observations} (item 1), it should come as no surprise that values of $\log (I/I_0)$ shown in Column (7) of the table are all greater than zero.
This is a consequence of the CNM being preferentially located in the vicinity of hot stars, rather than in random locations in the Galactic disk. This preference seems to overcome the effects of attenuation of starlight by dust. However, to some limited extent our intensity outcomes could be elevated in a systematic fashion by the presence of unrelated WNM gas that is at the same velocity as the C~I. This extra gas would mislead us into thinking the carbon atoms in the regions of interest are more ionized than in reality. \placetable{obs_quantities_table} \clearpage \begin{deluxetable}{ l c c c c c c } \tabletypesize{\footnotesize} \tablecolumns{7} \tablewidth{0pt} \tablecaption{Observed and Calculated Quantities over all Velocities in the Sightlines\label{obs_quantities_table}} \tablehead{ \colhead{Target} &\colhead{$\log N({\rm C~I_{total}})$\tablenotemark{a}} & \colhead{Calc. $\log$} & \colhead{Percent C~II} & \multicolumn{2}{c}{Weighted Averages\tablenotemark{d}} & \colhead{Median}\\ \cline{5-6} \colhead{Star} & \colhead{$({\rm cm}^{-2})$} & \colhead{$N({\rm C~II})$\tablenotemark{b}$({\rm cm}^{-2})$} & \colhead{Observed\tablenotemark{c}} & \colhead{$g_{\rm low}$} & \colhead {$\log (p/k)_{\rm low}$} & \colhead{$\log (I/I_0)$\tablenotemark{e}}\\ \colhead{(1)}& \colhead{(2)}& \colhead{(3)}& \colhead{(4)}& \colhead{(5)}& \colhead{(6)}& \colhead{(7)} } \startdata CPD-59D2603\dotfill& $\gtrapprox$14.64&17.87&25.4&0.92&3.63&0.31\\ HD$\,$108\dotfill & $\gtrapprox$14.98&17.81&64.4&0.95&3.82&0.30\\ HD$\,$1383\dotfill & $\gtrapprox$14.80&17.84&44.5&0.96&3.49&0.27\\ HD$\,$3827\dotfill & 13.57$\pm$0.07&17.22& 7.3&0.92&3.51&0.16\\ HD$\,$15137\dotfill &14.62$\pm$0.01&17.69&74.3&0.98&3.61&0.32\\ HD$\,$23478\dotfill & $>$14.75&17.35&46.4&0.93&3.62&0.46\\ HD$\,$24190\dotfill & 14.45$\pm$0.01&17.49&84.5&0.97&3.64&0.62\\ HD$\,$24534 (X Per) \dotfill & $>$14.81&17.49\tablenotemark{f}& 5.0&0.95&4.17&1.16\\ HD$\,$27778 (62 Tau)\dotfill & $>$14.98&17.48\tablenotemark{f}& 
53.4&0.90&3.54&0.49\\ HD$\,$32040\dotfill & 13.26$\pm$0.03&16.58&18.1&0.97&3.07&0.11\\ HD$\,$36408\dotfill & $\gtrapprox$14.17&17.33&54.5&0.93&3.82&0.81\\ HD$\,$37021 ($\theta^1$ Ori B)\dotfill & 13.64$\pm$0.05&17.78\tablenotemark{f}& 67.5&0.39&3.83&1.83\\ HD$\,$37061 ($\nu$~Ori)\dotfill & 13.97$\pm$0.03&17.90\tablenotemark{f}& 86.3&0.59&4.28&1.80\\ HD$\,$37903\dotfill & 14.24$\pm$0.05&17.75&48.8&0.79&4.61&1.37\\ HD$\,$40893\dotfill & 14.67$\pm$0.01&17.79&91.7&0.98&3.40&0.37\\ HD$\,$43818 (11 Gem)\dotfill & 14.76$\pm$0.01&17.89&91.9&0.98&3.49&0.35\\ HD$\,$44173\dotfill & 13.66$\pm$0.05&16.98& 4.1&0.83&3.45&0.45\\ HD$\,$52266\dotfill & 14.34$\pm$0.01&17.62&63.0&0.98&3.39&0.48\\ HD$\,$69106\dotfill & 14.30$\pm$0.01&17.37&81.1&0.96&3.55&0.45\\ HD$\,$71634\dotfill & 14.13$\pm$0.02&17.18&68.3&0.89&4.08&0.88\\ HD$\,$72754 (FY Vel)\dotfill & $\gtrapprox$14.40&17.54&37.0&0.89&3.86&0.83\\ HD$\,$75309\dotfill & 14.39$\pm$0.02&17.51&74.3&0.97&3.41&0.46\\ HD$\,$79186 (GX Vel)\dotfill & $\gtrapprox$14.57&17.76&69.8&0.96&3.33&0.40\\ HD$\,$88115\dotfill & 14.03$\pm$0.05&17.56&29.4&0.98&3.55&0.51\\ HD$\,$91824\dotfill & $\gtrapprox$14.45&17.49&63.6&0.93&3.62&0.78\\ HD$\,$91983\dotfill & 14.54$\pm$0.01&17.55&76.3&0.96&3.53&0.51\\ HD$\,$93205\dotfill & 14.56$\pm$0.02&17.83&51.5& \nodata\tablenotemark{g}&\nodata\tablenotemark{g}&0.58\\ HD$\,$93222\dotfill & 14.36$\pm$0.01&17.82&72.6&0.95&4.41&0.82\\ HD$\,$93843\dotfill & 14.15$\pm$0.05&17.67&12.8&0.88&4.13&0.66\\ HD$\,$94454\dotfill & $>$14.29&17.50&30.5&0.93&3.60&0.68\\ HD$\,$94493\dotfill & 14.26$\pm$0.02&17.51&59.1&0.97&3.53&0.49\\ HD$\,$99857\dotfill & 14.59$\pm$0.01&17.71&69.9&0.94&3.65&0.55\\ HD$\,$99872\dotfill & $\gtrapprox$14.41&17.45&79.4&0.94&3.62&0.86\\ HD$\,$102065\dotfill & $\gtrapprox$14.22&17.40&53.9&0.96&3.62&0.61\\ HD$\,$103779\dotfill & 14.23$\pm$0.03&17.60&44.8&1.00&3.29&0.33\\ HD$\,$104705 (DF Cru)\dotfill & 14.25$\pm$0.01&17.58&66.7&0.98&3.48&0.53\\ HD$\,$106343(DL Cru)\dotfill & 
14.34$\pm$0.01&17.62&58.1&0.96&3.51&0.49\\ HD$\,$108002\dotfill & $\gtrapprox$14.45&17.58&51.3&0.96&3.41&0.34\\ HD$\,$108639\dotfill & 14.31$\pm$0.01&17.75&76.3&0.96&3.49&0.55\\ HD$\,$109399\dotfill & 14.27$\pm$0.02&17.60&37.9&0.97&3.69&0.59\\ HD$\,$111934 (BU Cru)\dotfill & $\gtrapprox$14.48&17.70&43.5&0.99&3.71&0.63\\ HD$\,$112999\dotfill & 14.23$\pm$0.02&17.42&82.7&0.97&3.61&0.53\\ HD$\,$114886\dotfill & 14.73$\pm$0.01&17.75&66.5&0.95&3.62&0.37\\ HD$\,$115071\dotfill & 14.69$\pm$0.01&17.87&74.3&0.94&3.64&0.74\\ HD$\,$115455\dotfill & 14.63$\pm$0.02&17.85&61.3&0.97&3.52&0.58\\ HD$\,$116781\dotfill & 14.28$\pm$0.02&17.75&32.2&0.98&3.42&0.47\\ HD$\,$116852\dotfill & 14.15$\pm$0.02&17.45&66.0&0.95&3.72&0.80\\ HD$\,$120086\dotfill & 13.20$\pm$0.08&16.98& 3.6&0.99&3.72&0.50\\ HD$\,$121968\dotfill & 13.38$\pm$0.06&16.87&47.0&0.95&3.81&1.27\\ HD$\,$122879\dotfill & 14.42$\pm$0.01&17.75&76.3&0.99&3.59&0.49\\ HD$\,$124314\dotfill & $\gtrapprox$14.66&17.85&66.0&0.97&3.54&0.58\\ HD$\,$140037\dotfill & $\gtrapprox$14.03&17.24& 4.8&$\approx 1.0$&5.19&0.95\\ HD$\,$142315\dotfill & 13.78$\pm$0.06&17.33& 7.3&0.93&3.38&0.50\\ HD$\,$142763\dotfill & 13.38$\pm$0.06&17.03&24.9&0.93&3.47&0.70\\ HD$\,$144965\dotfill & $>$14.28&17.52& 9.4&0.82&3.80&0.76\\ HD$\,$147683\dotfill & $\gtrapprox$14.80&17.64&39.7&0.89&3.89&0.57\\ HD$\,$147888 ($\rho$~Oph)& $>$14.51&17.80\tablenotemark{f}& 26.9&0.68&3.98&1.20\\ HD$\,$148594\dotfill & $\gtrapprox$14.12&17.54&39.3&0.79&4.51&1.53\\ HD$\,$148937\dotfill & $\gtrapprox$14.87&17.96&90.6&0.95&3.84&0.67\\ HD$\,$152590\dotfill & 14.61$\pm$0.01&17.76\tablenotemark{f}& 77.0&0.87&3.71&0.77\\ HD$\,$156110\dotfill & 13.88$\pm$0.04&17.08&45.6&0.99&3.83&0.57\\ HD$\,$157857\dotfill & 14.61$\pm$0.01&17.77&87.2&0.95&3.50&0.46\\ HD$\,$165246\dotfill & $\gtrapprox$14.33&17.73&59.4&0.94&3.58&0.79\\ HD$\,$175360\dotfill & 13.98$\pm$0.01&17.29&76.6&1.00&3.18&0.39\\ HD$\,$177989\dotfill & $\gtrapprox$14.66&17.44&72.5&0.96&3.59&0.35\\ HD$\,$185418\dotfill & 
$>$14.72&17.71&66.8&0.97&3.41&0.23\\ HD$\,$192639\dotfill & $\gtrapprox$14.74&17.85&80.0&0.97&3.68&0.52\\ HD$\,$195965\dotfill & $\gtrapprox$14.48&17.42&60.2&0.96&3.56&0.32\\ HD$\,$198478 (55 Cyg)\dotfill & $\gtrapprox$14.84&17.80&84.5&0.92&3.68&0.43\\ HD$\,$198781\dotfill & $\gtrapprox$14.56&17.57&32.0&0.92&3.49&0.37\\ HD$\,$201345\dotfill & 13.97$\pm$0.02&17.42&63.4&0.97&3.50&0.32\\ HD$\,$202347\dotfill & 14.61$\pm$0.01&17.33&79.5&0.95&3.76&0.20\\ HD$\,$203374\dotfill & $\gtrapprox$14.98&17.68&81.1&0.94&3.63&0.31\\ HD$\,$203532\dotfill & $>$14.65&17.40& 4.2&0.85&4.40&1.26\\ HD$\,$206267\dotfill & $\gtrapprox$15.29&17.85&72.4&0.93&3.64&0.30\\ HD$\,$206773\dotfill & $\gtrapprox$14.70&17.55&71.8&0.96&3.55&0.19\\ HD$\,$207198\dotfill & $\gtrapprox$15.24&17.81\tablenotemark{f}& 87.6&0.94&3.63&0.20\\ HD$\,$208440\dotfill & $\gtrapprox$14.84&17.68&78.8&0.94&3.66&0.35\\ HD$\,$208947\dotfill & $\gtrapprox$14.63&17.36&53.2&0.97&3.60&0.33\\ HD$\,$209339\dotfill & 14.76$\pm$0.01&17.61&87.1&0.96&3.69&0.34\\ HD$\,$210809\dotfill & $\gtrapprox$14.70&17.69&36.8&0.94&3.66&0.29\\ HD$\,$210839 ($\lambda$ Cep)\dotfill & $\gtrapprox$14.95&17.79&63.8&0.86&4.16&0.47\\ HD$\,$212791\dotfill & 14.28$\pm$0.03&17.44&29.9&0.97&3.54&0.38\\ HD$\,$218915\dotfill & 14.56$\pm$0.01&17.59&68.9&0.97&3.58&0.33\\ HD$\,$219188\dotfill & 13.92$\pm$0.04&17.08&77.6&0.99&2.97&0.01\\ HD$\,$220057\dotfill & $>$14.71&17.46&25.6&0.94&3.51&0.35\\ HD$\,$224151\dotfill & $\gtrapprox$14.62&17.80&56.1&0.97&3.80&0.16\\ HDE$\,$232522\dotfill & $\gtrapprox$14.60&17.66&58.5&0.97&3.53&0.35\\ HDE$\,$303308\dotfill & 14.69$\pm$0.00&17.83&72.9& \nodata\tablenotemark{g}&\nodata\tablenotemark{g}&0.59\\ \enddata \tablenotetext{a}{Integrated over all velocities where C~I absorption seems to be visible (not just over the restricted regions where the lines are strong enough to yield good measurements of $f1$ and $f2$). 
Sometimes there was evidence of unresolved saturation at certain velocities, as indicated by a test that is discussed in \S\ref{distortions}. When this occurred over very limited portions of the profile, we indicate a mild inequality with ``$\gtrapprox$.'' When a substantial portion of the profile exhibited such behavior, we indicate a more severe inequality by ``$>$.'' When errors in the column densities are given, they indicate only the quantifiable errors arising from noise or uncertainties in the continuum levels. These errors indicate the relative quality of the measurements, but they are not fully realistic because they do not take into account uncertainties in our adopted $f$-values or occasional flaws in the MAMA detector used by STIS.} \tablenotetext{b}{The computed amount of C~II at all velocities based on the absorption profiles of O~I or S~II; see \S\protect\ref{ionization_corrections} for details. These amounts compare favorably with the observed amounts for a few stars in note $f$.} \tablenotetext{c}{The relative amount of C~II, as represented by its proxy O~I (and sometimes S~II), within the velocity interval where determinations of $f1$ and $f2$ were good enough to be considered for pressure measurements, compared to the amount seen at all velocities, as shown in the previous column.} \tablenotetext{d}{Calculated according to the following: $\sum [g_{\rm low}N({\rm C~I_{total}})] /\sum N({\rm C~I_{total}})$ and $\log \sum [(p/k)_{\rm low}N({\rm C~II})] /\sum N({\rm C~II})$.} \tablenotetext{e}{Our estimate for the local density of UV radiation from starlight that is more energetic than the ionization potential of neutral carbon, compared to an adopted standard $I_0$ based on a level specified by Mathis et al. (1983) for the average intensity of ultraviolet starlight in the Galactic plane at a Galactocentric distance of 10~kpc.
This estimate is based on our evaluation of the ionization equilibrium of C, as expressed in Eq.~\protect\ref{C_ionization}.} \tablenotetext{f}{Compare with actual measurements of $\log N$(C~II) using the intersystem C~II] line at 2325$\,$\AA: From Sofia et al. (1998) HD$\,$24534: $17.51~(+0.11,\,-0.16)$. From Sofia et al. (2004) HD$\,$27778: $<17.34$; HD$\,$37021: $17.82~(+0.12,\,-0.18)$; HD$\,$37061: $18.13~(+0.04,\,-0.06)$; HD$\,$147888: $18.00~(+0.07,\,-0.09)$; HD$\,$152590: $18.21~(+0.08,\,-0.10)$; HD$\,$207198: $17.98~(+0.11,\,-0.14)$. However, recent measurements of the damping wings for the allowed transition at 1334.53$\,$\AA\ reported by Sofia et al. (2011) indicate that these column densities may be too large by a factor of about 2.} \tablenotetext{g}{Gas within a component at large negative velocities has conditions very near the high pressure reference mark. Hence the projection onto the low pressure arc is meaningless.} \end{deluxetable} Of particular interest are the characteristic sizes of the regions containing the C~I that we are able to study. Within any velocity bin, we can determine a value for the local density of gas particles, composed of atomic hydrogen, helium atoms, and hydrogen molecules. Once again, if we assume that $f({\rm H}_2)=0.6$ (see \S\ref{particle_mix}) and ${\rm He/H}=0.09$, it follows that the local density of hydrogen nuclei is given by $n({\rm H})=p/(0.79kT)$. The longitudinal thickness occupied by the gas is then equal to the column density of these nuclei, $N({\rm H})$, divided by $n$(H). We obtain $N$(H) by multiplying the amount of carbon, measured by the methods outlined in \S\ref{ionization_corrections}, by the general expectation for $({\rm H/C})=5040$ in the ISM. A sum over velocity of all of the length segments gives the overall thickness of the C~I-bearing region(s) in any particular line of sight. \placefigure{filling_factor} \begin{figure}[b!]
\epsscale{1.0} \plotone{fig5.eps} \caption{Histograms that show ({\it left\/}) the total thicknesses of the regions and ({\it right\/}) their relative occupation fractions in the sightlines that we are able to measure in the survey.\label{filling_factor}} \end{figure} Figure~\ref{filling_factor} shows the distribution of region thicknesses for all of the lines of sight in our survey; these regions generally have dimensions of less than 20$\,$pc. The occupation fractions are quite small, generally less than about 2\%. For the benefit of future investigations that may require more detailed information about individual sight lines, Table~\ref{MRT} provides a machine-readable summary that lists for each velocity channel the measured values of $f1$, $f2$, and $N({\rm C~I}_{\rm total})$, along with the calculated values of $N({\rm C~II})$, $g_{\rm low}$, $\log (p/k)_{\rm low}$, and $\log (I/I_0)$. \section{Behavior with Velocity}\label{behavior_velocity} The radial velocities that we measure in the C~I profiles arise from various kinematical phenomena, such as differential velocities caused by rotation or density waves in the Galaxy, coherent motions caused by discrete dynamical events such as supernova explosions, mass loss from stars, the collision of infalling halo gas with material in the Galactic plane, and random motions arising from turbulence. With the exception of differential Galactic rotation, all of these effects can transform some of their energy into an increase of the thermal pressures. In their limited sample of only 21 stars, JT01 found elevated pressures in gases whose velocities deviated from the range that was expected for differential Galactic rotation. We now revisit this issue for our present, much larger sample of sightlines to further substantiate the evidence for a coupling between the thermal pressures and unusual dynamical properties of the gas.
\placefigure{outlier_v_f1f2} \begin{figure*} \epsscale{2.2} \plotone{fig6.eps} \caption{Presentations similar to Fig.~\protect\ref{all_v_f1f2}, except that the measurements include velocities only below the minimum value permitted by differential Galactic rotation, but with an extra margin of $5\,{\rm km~s}^{-1}$, i.e., $v<\min(v_{\rm gr})-5\,{\rm km~s}^{-1}$ ({\it left-hand panel\/}) or more than $5\,{\rm km~s}^{-1}$ above the maximum permitted velocities, i.e., $v>\max(v_{\rm gr})+5\,{\rm km~s}^{-1}$ ({\it right-hand panel\/}). The dot diameters in these diagrams are twice as large as those in Fig.~\protect\ref{all_v_f1f2} for a given column density of ${\rm C~I_{total}}$.\label{outlier_v_f1f2}} \end{figure*} Figure~\ref{outlier_v_f1f2} shows the measurements of $f1$ and $f2$ at velocities that are either above or below the respective line-of-sight velocity ranges permitted by differential Galactic rotation, assuming that the rotation curve is flat at $254\,{\rm km~s}^{-1}$ and the distance to the Galactic center is 8.4$\,$kpc (Reid et al. 2009). An extra margin of $5\,{\rm km~s}^{-1}$ is added to the exclusion zone for permitted velocities, so that we are more certain of showing material that is genuinely disturbed in some manner. When we compare the results of Fig.~\ref{outlier_v_f1f2} to those shown in Fig.~\ref{all_v_f1f2}, it is clear that gases at high velocity do not have nearly as strong a central concentration near $f1\approx 0.2$ and $f2\approx 0.07$. The ``center of mass'' ($f1,\,f2$) locations for all of the points shown in Fig.~\ref{outlier_v_f1f2} are (0.265, 0.163) for negative velocities and (0.228, 0.078) for positive velocities. By comparison, for all measurements shown in Fig.~\ref{all_v_f1f2}, we found the balance point to be at (0.209, 0.068).
These differences are principally caused by a greater prominence of a more highly dispersed population of points in $f1$ and $f2$, and they should come as no surprise since they demonstrate the expected coupling of the dynamics of the gas to the observed enhancements in the thermal pressures. \placetable{MRT} \clearpage \begin{deluxetable}{ l r c c c c c c c c c } \tabletypesize{\footnotesize} \tablecolumns{11} \tablewidth{0pt} \tablecaption{Observed and Calculated Quantities in Specific Velocity Channels\label{MRT}} \tablehead{ \colhead{Target} & \colhead{Velocity} & \colhead{$f1$} & \colhead{$f1$} & \colhead{$f2$} & \colhead{$f2$} & \colhead{$N({\rm C~I}_{\rm total})$} & \colhead{$N({\rm C~II})$} & \colhead{$g_{\rm low}$} & \colhead{$\log (p/k)_{\rm low}$} & \colhead{$\log (I/I_0)$}\\ \colhead{Star} & \colhead{(${\rm km~s}^{-1}$)} & \colhead{} & \colhead{Error} & \colhead{} & \colhead{Error} & \colhead{(cm$^{-2}$)} & \colhead{(cm$^{-2}$)} & \colhead{} & \colhead{} & \colhead{} } \startdata CPD-59D2603& 4.0& 0.213& 0.016& 0.068& 0.026& 2.13e+013& 1.34e+016& 0.930& 3.63& 0.37\\ CPD-59D2603 & 4.5& 0.201& 0.013& 0.065& 0.020& 2.81e+013& 1.44e+016& 0.933& 3.60& 0.28\\ CPD-59D2603& 5.0& 0.200& 0.011& 0.065& 0.017& 3.33e+013& 1.56e+016& 0.934& 3.60& 0.25\\ CPD-59D2603& 5.5& 0.207& 0.010& 0.065& 0.015& 3.83e+013& 1.76e+016& 0.936& 3.62& 0.26\\ CPD-59D2603& 6.0& 0.218& 0.009& 0.064& 0.013& 4.46e+013& 2.02e+016& 0.945& 3.67& 0.29\\ CPD-59D2603& 6.5& 0.224& 0.008& 0.065& 0.011& 5.30e+013& 2.46e+016& 0.944& 3.68& 0.31\\ CPD-59D2603& 7.0& 0.227& 0.008& 0.066& 0.009& 6.26e+013& 3.03e+016& 0.943& 3.69& 0.33\\ CPD-59D2603& 7.5& 0.230& 0.008& 0.070& 0.008& 7.17e+013& 3.80e+016& 0.938& 3.70& 0.37\\ CPD-59D2603& 8.0& 0.233& 0.008& 0.074& 0.007& 7.88e+013& 4.63e+016& 0.931& 3.70& 0.41\\ CPD-59D2603& 8.5& 0.222& 0.008& 0.079& 0.007& 8.10e+013& 5.13e+016& 0.910& 3.65& 0.40\\ CPD-59D2603& 9.0& 0.200& 0.007& 0.081& 0.007& 7.65e+013& 5.18e+016& 0.894& 3.56& 0.35\\ CPD-59D2603& 9.5& 
0.181& 0.007& 0.081& 0.009& 6.53e+013& 4.85e+016& 0.885& 3.47& 0.32\\ HD102065\dotfill & 8.5& 0.188& 0.019& 0.071& 0.016& 2.42e+012& 5.98e+014& 0.908& 3.51& 0.09\\ HD102065\dotfill & 9.0& 0.241& 0.012& 0.072& 0.010& 3.87e+012& 6.54e+014& 0.934& 3.71& 0.14\\ HD102065\dotfill & 9.5& 0.275& 0.008& 0.067& 0.006& 5.93e+012& 7.52e+014& 0.966& 3.83& 0.14\\ HD102065\dotfill & 11.0& 0.223& 0.004& 0.050& 0.003& 1.64e+013& 1.56e+016& 0.973& 3.62& 0.61\\ HD102065\dotfill & 11.5& 0.203& 0.005& 0.048& 0.002& 1.92e+013& 1.99e+016& 0.967& 3.55& 0.59\\ HD102065\dotfill & 12.0& 0.187& 0.005& 0.046& 0.002& 2.14e+013& 2.40e+016& 0.964& 3.49& 0.57\\ HD102065\dotfill & 12.5& 0.176& 0.005& 0.043& 0.002& 2.38e+013& 2.83e+016& 0.965& 3.44& 0.54\\ HD102065\dotfill & 13.0& 0.177& 0.004& 0.043& 0.002& 2.70e+013& 3.13e+016& 0.965& 3.45& 0.55\\ HD102065\dotfill & 13.5& 0.187& 0.004& 0.046& 0.002& 3.06e+013& 3.67e+016& 0.967& 3.49& 0.59\\ HD102065\dotfill & 14.0& 0.204& 0.004& 0.051& 0.002& 3.32e+013& 4.35e+016& 0.964& 3.54& 0.66\\ HD102065\dotfill & 16.0& 0.318& 0.005& 0.099& 0.003& 1.47e+013& 2.83e+016& 0.925& 3.85& 1.06\\ HD102065\dotfill & 16.5& 0.315& 0.005& 0.101& 0.004& 1.00e+013& 1.87e+016& 0.918& 3.84& 1.03\\ HD102065\dotfill & 17.0& 0.298& 0.007& 0.091& 0.006& 6.78e+012& 1.05e+016& 0.931& 3.81& 0.95\\ HD102065\dotfill & 17.5& 0.281& 0.010& 0.074& 0.008& 4.54e+012& 3.97e+015& 0.954& 3.80& 0.74\\ HD102065\dotfill & 18.0& 0.270& 0.014& 0.059& 0.012& 3.07e+012& 2.74e+015& 0.983& 3.78& 0.72\\ HD102065\dotfill & 18.5& 0.273& 0.021& 0.053& 0.018& 2.11e+012& 2.55e+015& 0.998& 3.77& 0.80\\ HD102065\dotfill & 19.0& 0.283& 0.030& 0.055& 0.026& 1.45e+012& 2.15e+015& 1.000& 3.79& 0.89\\ \enddata \tablenotetext{~}{(This table is available in its entirety in a machine-readable form in the online journal. A portion is shown here for guidance regarding its form and content.)} \end{deluxetable} \clearpage Models of pressurized clouds behind weak shocks in the ISM computed by Bergin et al. 
(2004) reveal that the outcomes for ($f1,\,f2$) are centered on values of approximately (0.36,$\,$0.18), (0.40,$\,$0.25) and (0.37,$\,$0.35) for the post-shock condensations behind shocks with velocities of 10, 20 and $50\,{\rm km~s}^{-1}$, respectively (see their Fig.~8; in these cases the resultant ram pressures were $1.4\times 10^4$, $5.8\times 10^4$ and $3.6\times 10^5\,{\rm cm}^{-3}\,$K for a preshock density of $1\,{\rm cm}^{-3}$). These results for $f1$ and $f2$ are well removed from the densest clustering of measurements shown in Fig.~\ref{all_v_f1f2}, but they do seem consistent with the more sparse population of points having $f2>0.15$, which is more strongly emphasized in the unusual velocity ranges represented by the two panels of Fig.~\ref{outlier_v_f1f2}. Figure~\ref{outlier_v_f1f2} shows clearly that a moderate number of the measurements in the positive-velocity regime exhibit higher pressures than usual, but not to the great extremes revealed by the negative velocity gas. We offer a simple interpretation for why this happens. We propose that a significant fraction of the high pressure material arises from stellar mass-loss outflows that eventually collide with the ambient medium, creating dense, expanding shells that are at high pressures (Castor et al. 1975; Weaver et al. 1977). Another possibility is that small clouds surrounding the stars are pressurized and accelerated by either the momentum transfer arising from photoevaporation (Oort \& Spitzer 1955; Kahn 1969; Bertoldi 1989) or the momentary surge in pressure of a newly developed H~II region. These phenomena associated with our target stars should be visible to us if they contain C~I. The shells should be intercepted by our sight lines regardless of whether they (or possibly small clouds inside them) are large and at moderately high pressures or very small and at much higher pressures.
The foreground portions of such shells are responsible for the negative velocity gas that we can view in the star's spectrum. For positive velocity gas the situation is different. Here, we rely entirely on the random chance of seeing either one of the large-scale events (of non stellar origin) mentioned at the beginning of this section, or else the rear portion of some region or shell that is created by some foreground star or stellar association that is unrelated to the star that we are viewing. This being the case, there may be a vanishingly small chance that we will intercept a highly pressurized shell with a small diameter, but the chances increase for larger shells that have lower pressures at their boundaries. This observational bias against small, high pressure events at positive velocities could explain why we see no points at $\log (p/k) > 5$ in the right-hand panel of the figure. In the next section (and in \S\ref{origins}), we will reinforce the picture that high pressures arise from the increased dynamical activity near bright stars. We will show evidence that there is a strong correlation between pressures and the local intensities of ionizing radiation. \section{Interpretations of the Results}\label{interpretation} \subsection{Basic Distribution Functions}\label{basic_dist} After evaluating the conditions within each velocity interval for all of the lines of sight, we are in a position to look at the composite outcome of all of the results of the dominant low-pressure component. All of the presentations in this section will show distributions expressed in terms of the amount of hydrogen in a given condition. In order to do so, we must convert our original measurement weights based on $N({\rm C~I_{total}})$ into ones that account for the equivalent column densities of C~II that we derived from our determinations of O~I (and on rare occasions S~II) at identical velocities, as discussed in \S\ref{ionization_corrections}. 
Once again, we convert from $N$(C~II) to $N$(H) by multiplying the amount of carbon by $({\rm H/C})=5040$ in the ISM. As we indicated earlier (\S\ref{starlight}), WNM material at the same velocity as the CNM will tend to inflate somewhat the derived value of $N$(H) associated with the C~I that is used for determining the pressure. \placefigure{fig4} \begin{figure*} \plotone{fig7.eps} \caption{The distribution of thermal pressures, normalized to the estimated amount of hydrogen present, for three different temperature conditions, as indicated by the $J=0$ to 1 rotation temperatures of H$_2$: (1) all of the gas (tallest profile), (2) gas for which $T_{01} > 85\,$K (middle profile) and (3) gas for which $T_{01} < 75\,$K (shortest profile). The median temperature for all cases is $77\,$K, i.e., a value that is between the two limits. Sight lines where $T_{01}$ measurements do not exist are included in condition (1) but excluded from conditions (2) and (3). A best-fit lognormal distribution for condition (1) is shown by the solid curve, and it is represented by Eq.~\protect\ref{log_normal_fit}. The inset shows a scatter plot of the C~I-weighted average $\log (p/k)$ given in Column (6) of Table~\protect\ref{obs_quantities_table} vs. $T_{01}$ (if known), as listed in Column (7) of Table~\protect\ref{sightlines_table}.\label{fig4}} \end{figure*} The histogram distributions shown in Fig.~\ref{fig4} reveal that the pressure distribution function is not strongly influenced by the temperature of the gas, as deduced from the measurements of $T_{01}$ of H$_2$. Nevertheless, the evidence that we have suggests an inverse correlation of pressures with temperature, although the scatter in this relationship is large. For the points shown in the inset of the figure, the Spearman rank order correlation coefficient is $-0.29$, which differs from zero correlation at the 97.5\% confidence level for the population of 58 pairs of measurements.
The dispersion of the results is so large that it is difficult to assign a value for the apparent polytropic index of the gas, but the sign of the trend is consistent with a slope of less than one for the CNM thermal equilibrium track near the minimum pressure shown in Fig.~\ref{phase_diag}. The central portion of the distribution of thermal pressures (for all $T_{01}$) follows closely a lognormal distribution given by \begin{eqnarray}\label{log_normal_fit} dN({\rm H})/d\log(p/k)=&\nonumber\\ 2.30\times 10^{23}&\exp\left[-{(\log(p/k)-3.58)^2\over 2(0.175)^2}\right]\,{\rm cm}^{-2}~. \end{eqnarray} This lognormal relationship is shown by the smooth curve in Fig.~\ref{fig4} (and will be shown again later in a log-log representation by the smooth gray curve in Fig.~\ref{hist_pok}). Outside the range $3.2<\log(p/k)<4.0$ it understates the observed amount of material in the wings of the profile (this is not evident in Fig.~\ref{fig4}, but is clearly shown later in Fig.~\ref{hist_pok}). \placefigure{log_pok_bin} \begin{figure}[t!] \epsscale{1.0} \plotone{fig8.eps} \caption{A gray-scale representation of the logarithm of the amount of H~I gas that we found as a function of $\log(I/I_0)$, i.e., the logarithm of the enhancement of the starlight density above average, against the thermal pressure, expressed in terms of $\log (p/k)$. The dashed line shows the cutoff equal to $\sqrt{10}$ times the average field that we established for defining the low intensity distribution shown in Fig.~\protect\ref{hist_pok}.\label{log_pok_bin}} \end{figure} Figure~\ref{log_pok_bin} shows that the outcomes for the starlight densities and the thermal pressures are not independent of each other. In regions that are close to stars that emit UV radiation ($I/I_0\approx 10$), we find that with a few exceptions the average pressures increase to values in the general vicinity of $\log (p/k) \sim 4$. 
This enhancement supports the viewpoint that turbulent energies are greater in the general vicinity of young stars, a phenomenon that may be related to changes in the morphology of H~I near stellar associations that were found by Robitaille et al. (2010). In order to obtain a representation for the pressures in the general ISM somewhat removed from the bright stars, we will limit further study of the distribution to only those cases where $I/I_0<10^{0.5}$, a limit that is depicted by the dashed line in Fig.~\ref{log_pok_bin}. We consider that any gas elements that are above that line represent localized regions that are exceptionally close to sources of mechanical energy and are thus not representative of the general, diffuse ISM. \begin{figure*}[t!] \epsscale{2.2} \plotone{fig9.eps} \caption{Log-log presentations of the distribution of thermal pressures for two cases: all of the gas sampled by C~I is shown by the gray histogram, while a subset of the material for which $I/I_0<10^{0.5}$ is shown by the black histogram. This intensity cutoff limits the sample to all of the outcomes that appear below the dashed line in Fig.~\protect\ref{log_pok_bin}. The thin curves show how well the analytical expressions given in Eqs.~\protect\ref{log_normal_fit} (gray) and \protect\ref{polynomial_fit} (black) fit the results. The vertical dot-dash line labeled CNM $\log(p/k)_{\rm min}$ corresponds to the minimum pressure that is allowed for a static CNM, as shown by a similar line with the same designation in Fig.~\protect\ref{phase_diag}.\label{hist_pok}} \end{figure*} Figure~\ref{hist_pok} shows in a log-log format the distribution of thermal pressures for $I/I_0<10^{0.5}$ (black histogram) compared with the distribution for all intensities (gray histogram). 
In terms of $\log(p/k)$ [and using a linear representation of $dN({\rm H})/d\log(p/k)$], the distribution for the low-intensity results has a mean of 3.47, a standard deviation of 0.253, a skewness of $-1.8$, and a kurtosis\footnote{Our definition of kurtosis includes a subtraction of 3 from the fourth moment divided by $\sigma^4$, thus making the kurtosis of a Gaussian distribution equal to zero. Sometimes in the literature, e.g. Federrath et al. (2010), this ``$-3$'' term is omitted.} of 6.2. The influence of the excess of low pressure outcomes, as evidenced by the negative skewness, causes the standard deviation listed here to be larger than the value 0.175 given in Eq.~\ref{log_normal_fit}, which would apply to just the central portion of the profile. \placefigure{hist_pok} For the convenience of those who wish to reproduce a reasonably good representation of the low-intensity data in analytical form, we supply an empirical polynomial fit, \begin{eqnarray}\label{polynomial_fit} dN({\rm H})/d\log(p/k)=1.16\times 10^{23}\exp(-0.0192z\nonumber\\ -0.00387z^2+2.39\times 10^{-5}z^3\nonumber\\ +6.24\times 10^{-7}z^4-6.77\times 10^{-9}z^5)\,{\rm cm}^{-2}~, \end{eqnarray} where the dimensionless quantity $z=(p/k)^{\onehalf}-60$ (for $p/k$ expressed in terms of ${\rm cm}^{-3}$K). This empirical fit is shown by the thin, black curve in Fig.~\ref{hist_pok}. If this distribution function is converted into a linear representation, i.e., $N({\rm H})$ as a function of $p/k$, we find that for $p/k < 5500\,{\rm cm}^{-3}\,$K it does not differ appreciably from a Gaussian function with mean value of $p/k=3700\,{\rm cm}^{ -3}\,$K and a standard deviation of $1200\,{\rm cm}^{-3}\,$K. The distribution is somewhat higher than this Gaussian function above $5500\,{\rm cm}^{-3}\,$K. Our mean value quoted above is 0.22$\,$dex higher than the value $2240\,{\rm cm}^{-3}$K that we listed earlier (JT01). There are three independent reasons that can account for nearly all of this difference. 
First, we used revised rates for the collisional excitation and radiative decay of the upper fine-structure levels of C~I, as discussed in \S\ref{atomic_phys_param}. This accounts for an elevation of typical determinations of pressures of about 0.05$\,$dex. Second, our new estimate for the strength of the optical pumping of the levels has been reduced (\S\ref{pumping}), with the result that a typical pressure measurement should be raised by another 0.05$\,$dex. Third, our earlier specification for the mean value of $p/k$ was for a temperature (40$\,$K) that gave the lowest inferred pressure for a given level of C~I excitation, while the present result uses either actual temperatures measured from the H$_2$ rotational excitations or a median value of 80$\,$K if an explicit measurement for a sight line is not available. Under most circumstances, the inferred pressure at 80$\,$K is about 0.1$\,$dex higher than that for 40$\,$K. Taken together, these three effects can account for an elevation of our new pressures over the old ones by 0.2$\,$dex. As a final point, we add a cautionary note that the errors in our determinations for $\log (p/k)$ become much larger than usual when their values fall below 3.0. The reason for this is that the changes in $f1$, the major discriminant for pressures, become very small at low pressures, as shown by the shrinkage in the spacing between the 0.1~dex markers in Figs.~\ref{all_v_f1f2} and \ref{fig3}. \subsection{Volume-Weighted Distributions}\label{vol_weighted} Up to now, the distribution functions that we have shown (Figs.~\ref{fig4}$-$\ref{hist_pok}) have been weighted in proportion to our calculated hydrogen column densities, which is equivalent to a sampling by mass. In many cases, investigators showing results of computer simulations of ISM turbulence express their outcomes according to the counts of volume cells having different pressures. 
In order to make a conversion from a mass-weighted distribution to a volume-weighted one, we must make a simplifying assumption that we are viewing an ensemble of gases that has internal random pressure fluctuations that change with time, but that is otherwise approximately uniform in nature and that can be characterized as having an equation of state with a single value for the polytropic index $\gamma$ (equal to the slope of $\log p$ vs. $\log n$ or the ratio of specific heats $c_p/c_v$). In this situation, the changes in pressure cause the volumes of mass parcels to change in proportion to $p^{-1/\gamma}$, and this factor must be applied to the mass-weighted distribution function to obtain the volume-weighted one. The smooth curves in Fig.~\ref{volume_dist} show how the mass-weighted distribution in $\log (p/k)$ for the low starlight intensities would appear after being converted to volume weighted ones for three different assumed values of $\gamma$. These three examples illustrate the behavior of the gas under the conditions (1) $\gamma=0.7$, which is a good approximation of the slope of thermal equilibrium curve for the CNM shown in Fig.~\ref{phase_diag}, (2) the relationship for $\gamma=1.0$ that corresponds to an isothermal gas, and (3) a condition $\gamma=5/3$ that indicates that the gas is undergoing adiabatic compressions and expansions (and assuming that the gas has a purely atomic composition). The divergent behavior of the curves at the far left portion of this diagram probably arises from either deviations caused by small number statistics for the samples at the extremely low pressures or the fact that the errors in $\log (p/k)$ become larger than usual at the low pressure extreme. It is important to emphasize that in a turbulent cascade the notion that the gas has a single polytropic index on all length scales is an oversimplification; we will explore this issue in more detail in \S\ref{crossing_times}. 
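In practice this conversion is a pointwise reweighting by $p^{-1/\gamma}$ followed by a renormalization. A minimal numerical sketch follows (illustrative only; the function name, grid, and test distribution are our own, not the data behind Fig.~\ref{volume_dist}):

```python
import numpy as np

def volume_weighted(log_pk, dN_mass, gamma):
    """Reweight a mass-weighted distribution in log(p/k) by p**(-1/gamma),
    the factor by which a parcel's volume responds to pressure for a
    polytropic gas, then normalize the integral over log(p/k) to unity."""
    p = 10.0 ** np.asarray(log_pk)
    dN_vol = np.asarray(dN_mass) * p ** (-1.0 / gamma)
    # trapezoidal normalization so the curve integrates to 1
    norm = np.sum(0.5 * (dN_vol[1:] + dN_vol[:-1]) * np.diff(log_pk))
    return dN_vol / norm

# Example: reweighting the lognormal of Eq. (log_normal_fit) moves its
# peak below log(p/k) = 3.58, by sigma^2 ln(10) / gamma for a Gaussian.
log_pk = np.linspace(2.8, 4.4, 401)
dN = np.exp(-(log_pk - 3.58) ** 2 / (2 * 0.175 ** 2))
for gamma in (0.7, 1.0, 5.0 / 3.0):
    w = volume_weighted(log_pk, dN, gamma)
    print(gamma, log_pk[np.argmax(w)])  # peak sits below 3.58
```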
\subsection{Pileup in Velocity Bins}\label{pileup} As we discussed in \S\ref{results}, any outcomes for $f1$ and $f2$ at a particular velocity may represent a composite result for two or more regions that are seen in projection along the line of sight. In \S\ref{admixtures} we explained how we separated contributions from small amounts of gas at extraordinarily large pressures, well away from the dominant regime of low pressures. However, we have yet to address the possibility that two or more regions at somewhat different pressures along the lower, nearly straight portion of the $f1-f2$ equilibrium curve can create an apparent outcome that represents a proper C~I-weighted mean, but without revealing the true dispersion of pressures from the contributors. If such superpositions are taking place frequently, they will tend to decrease the width of our observed overall distribution shown in Figs.~\ref{fig4}$-$\ref{hist_pok}. One way to gain insight into this possibility is to examine how deviations from the mean $\log (p/k)$ scale with the corresponding amounts of C~I. If we imagine a simple picture where all of the C~I exists within independent parcels, each with some single, representative value $N_0({\rm C~I_{total}})$, we would expect to find that the dispersion of any collection of measurements having some multiple $n$ times $N_0({\rm C~I_{total}})$ would show us a standard deviation equal to $\sigma_{\rm true}/\sqrt{n}$, where $\sigma_{\rm true}$ is the real dispersion in $\log(p/k)$ for the individual packets that are seen in projection.
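The $\sigma_{\rm true}/\sqrt{n}$ dilution is easy to verify with a toy Monte Carlo. This is an illustration of the statistics only; the mean, dispersion, and sample sizes below are arbitrary, and the averaging here is over $\log(p/k)$ directly rather than in the $f1-f2$ plane:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_true = 0.5   # assumed intrinsic dispersion of log(p/k) per packet
mu = 3.5           # arbitrary mean log(p/k)

# Each simulated sightline superposes n equal-N0 packets, so its
# C I-weighted mean reduces to an arithmetic mean; the dispersion of
# those means shrinks as 1/sqrt(n).
for n in (1, 4, 16):
    means = rng.normal(mu, sigma_true, size=(200_000, n)).mean(axis=1)
    print(n, round(means.std(), 3))  # ~ sigma_true / sqrt(n)
```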
\placefigure{volume_dist} \begin{figure*}[t] \epsscale{2.0} \plotone{fig10.eps} \caption{({\it three smooth curves:\/}) The results of a conversion of the mass-weighted distribution curve for starlight intensity levels $I/I_0<10^{0.5}$ (the black, smooth curve shown in Fig.~\protect\ref{hist_pok}) to volume-weighted ones for three assumed values for $\gamma$, corresponding to cases where the gas behaves as if it were in thermal equilibrium ($\gamma=0.70$), isothermal ($\gamma=1.0$) and adiabatic ($\gamma=1.67$). ({\it Histogram-style trace:\/}) The volume-weighted distribution for all intensity levels for $\log (p/k)>3.5$, assuming $\gamma=1.67$. This distribution is relevant to a discussion that is presented in \S\protect\ref{coherent_disturbances} about the possible creation of higher than normal pressures by expanding supernova remnants. In all four cases, the curves are normalized such that their integrals over all $\log (p/k)$ equal 1.0.\label{volume_dist}} \end{figure*} \placefigure{sigma_p} \begin{figure*}[t] \plotone{fig11.eps} \caption{{\it Top panel:\/} Individual measurements of $\log(p/k)$ in the low pressure regime as a function of the inverse square-root of the column density of C~I$_{\rm total}$ for all velocity channels of width $0.5\,{\rm km~s}^{-1}$ that had $I/I_0<10^{0.5}$. The mean values of $\log(p/k)$ are listed near the top of the panel for successive intervals centered on integral values of $[N({\rm C~I_{total}})/5\times 10^{13}\,{\rm cm}^{-2}]^{-0.5}$. {\it Lower panel:\/} The standard deviations in $\log(p/k)$ for measurements within the intervals, showing an almost linear progression up to a value $\sigma[\log(p/k)]\approx 0.5$, at which point the column density of independent packets of gas has the characteristic value $N_0({\rm C~I_{total}})=2\times 10^{12}{\rm cm}^{-2}$.
The ``$n=$'' designations show the number of points that were used to evaluate $\sigma[\log (p/k)]$ in each bin.\label{sigma_p}} \end{figure*} The upper panel of Figure~\ref{sigma_p} shows the apparent outcomes for values of $\log(p/k)$ as a function of $N({\rm C~I_{total}})^{-0.5}$; it is clear that as the column densities decrease (i.e., moving toward the right of the plot), the vertical dispersions increase. For data segregated within successive bins having a width of $\sqrt{2}\times 10^{-7}{\rm cm}$, the lower panel indicates that the {\it rms\/} dispersion indeed seems to scale in direct proportion to $N({\rm C~I_{total}})^{-0.5}$, but only up to about $N({\rm C~I_{total}})^{-0.5}=5\sqrt{2}\times 10^{-7}{\rm cm}$ (indicating that $N_0({\rm C~I_{total}})\approx 2\times 10^{12}{\rm cm}^{-2}$). Thus, on the one hand, one could imagine that $\sigma_{\rm true}$ could be as large as around 0.5, instead of our overall measured value of 0.253. On the other hand, the proposed model for superpositions may be only a product of our imagination: perhaps coherent regions with larger values of $N({\rm C~I_{total}})$ have a real tendency to be less easily perturbed by various external forces that cause pressure deviations away from some mean value. In essence, the trend shown in Fig.~\ref{sigma_p} may reflect a real physical effect rather than a trend caused by random superpositions of unrelated, small gas clouds. In principle, a trivial explanation for the effect shown in Fig.~\ref{sigma_p} might be that as $N({\rm C~I_{total}})$ decreases the measurement errors in $\log(p/k)$ increase. However, as we explain later in \S\ref{err_f1f2}, the $1\sigma$ errors in $f1$ and $f2$ should be equal to 0.03 or less for the measurements to be accepted. At normal pressures this amount of error is equivalent to a change in $\log (p/k)$ equal to only 0.1~dex, far smaller than the observed dispersion that is shown for low column density cases in Fig.~\ref{sigma_p}.
One reason that we are able to maintain small errors for lower column densities is that our system of weighting the measurements causes a shift of emphasis from weak atomic transitions to stronger ones as $N({\rm C~I}_{\rm total})$ decreases. \section{Discussion}\label{discussion} \subsection{Distribution Width and Shape}\label{shape} \subsubsection{Overall Shape}\label{overall_shape} The highly symmetrical appearance of our distribution for all of the material that we sampled in the regime of ordinary pressures (which we identified as the ``low pressure component'' in \S\ref{admixtures}) is an illusion that arises from the projection of the irregularly shaped distribution depicted in Fig.~\ref{log_pok_bin} onto the $x$-axis that represents $\log(p/k)$. The distribution function reverts to one with a strong negative skewness when we limit our consideration to conditions where $I/I_0<10^{0.5}$ (below the dashed line in Fig.~\ref{log_pok_bin}). This behavior is inconsistent with turbulence in an isothermal gas, which should show a pure lognormal density (and pressure) volume-weighted distribution function (V\'azquez-Semadeni 1994; Nordlund \& Padoan 1999; Kritsuk et al. 2007). \subsubsection{Deviations to Low Pressures}\label{deviations_low} A substantial fraction of the material (29\%) -- that which is depicted to the left of the vertical dash-dot line in Fig.~\ref{hist_pok} -- is detected at pressures below those permissible for a static CNM, $(p/k)_{\rm min}=1960\,{\rm cm}^{-3}\,$K, as defined by the ``standard model'' for the thermal equilibrium curve presented by Wolfire et al.
(2003) that we show in Fig.~\ref{phase_diag}.\footnote{With a parametric formulation discussed in the next paragraph using our value for $I/I_0=1.0$ and $\zeta_{\rm CR}=2\times 10^{-16}{\rm s}^{-1}$, $(p/k)_{\rm min}$ decreases very slightly to $1860\,{\rm cm}^{-3}$K.} From this we conclude that either their curve does not apply to the media we are viewing or else that rarefactions caused by turbulence create momentary excursions below the curve. The recovery toward normal pressures for regions that reach anomalously low pressures in some cases might be inhibited by temporary, locally high values of magnetic pressure (Mac Low et al. 2005). A valid question to pose is whether or not we could understand the existence of low pressures by large changes in some of the parameters that influence the value of $(p/k)_{\rm min}$ within some localized regions. Wolfire et al. (2003) expressed a simple equation [their Eq.~(33)] that gives some guidance on this possibility. We restate their equation in terms of our variables by substituting $0.674(I/I_0)$ for their normalized ISM intensity $G_0^\prime$ at a Galactocentric distance of 8.5$\,$kpc. The reason for this substitution is that we have adopted a more recent measure of a standard intensity $I_0$ (Mathis et al. 1983) that is lower than the one they chose to use [taken from Draine (1978)]. Also, we set their parameter for the total ionization rate (multiplied by $10^{16}{\rm s}^{-1}$) $\zeta_t^\prime=2.0$, since we have adopted a cosmic ray ionization rate $\zeta_{\rm CR}=2\times 10^{-16}{\rm s}^{-1}$ (see \S\ref{starlight}) and assumed that the x-ray and EUV ionization rates are very small in comparison.
Our restatement of their equation takes the form, \begin{equation}\label{pmin_eq} (p/k)_{\rm min}=5730(Z_d^\prime I/I_0){Z_g^\prime\over 1+2.08(Z_d^\prime I/I_0)^{0.365}}~, \end{equation} where $Z_d^\prime$ is equal to the normalized ratio of interstellar dust grains to polycyclic aromatic hydrocarbons (PAH), and $Z_g^\prime$ is the normalized gas phase abundance of heavy elements that are responsible for radiative cooling of the gas (chiefly C and O). The two quantities $Z_d^\prime$ and $Z_g^\prime$ are generally assumed to be equal to 1.0 for conditions in our part of the Galaxy. If $\log(I/I_0)=-0.35$, we find from Eq.~\ref{pmin_eq} that $(p/k)_{\rm min}=1000\,{\rm cm}^{-3}$K. However, the distribution of outcomes shown in Fig.~\ref{log_pok_bin} indicates that most of our pressure measurements apply to regions with $\log(I/I_0)>-0.35$. Another way to reduce $(p/k)_{\rm min}$ to $1000\,{\rm cm}^{-3}$K would be to have $I/I_0=1.0$ but with either a ratio of the dust grain to the PAH concentration $Z_d^\prime$ as low as 0.45 times the normally assumed value or a reduction of $Z_g^\prime$ to 0.54 times the normal amount. Even with the possible deviations discussed here that would make $(p/k)_{\rm min}$ reach as low as $1000\,{\rm cm}^{-3}$K, we still have measurable amounts of gas below the pressure threshold for a stable CNM. \subsubsection{Comparisons with Expectations of the Effects of Turbulence}\label{comparisons_turbulence} The magnitude and skewness of the fluctuations in thermal pressure give an indication of the strength and character of the turbulence in the ISM (Padoan et al. 1997b). For instance, the one-dimensional simulations of Passot \& V\'azquez-Semadeni (1998) illustrated how $\gamma$ changes the sign of the skewness of the distribution: $\gamma<1$ makes the distribution shallower on the high pressure side of the peak and steeper on the low pressure side, while the opposite is true for $\gamma>1$.
The influence of the polytropic index on the shape of the distribution can also be seen in the results of three-dimensional simulations performed by Li et al. (2003) and Audit \& Hennebelle (2010). Studies of turbulence in an isothermal medium by Federrath et al. (2008) indicated that the shape of the distribution may also be governed by the character of the driving force: solenoidal (divergence-free) driving forces result in a symmetrical distribution (close to lognormal), while compressive (curl-free) driving can create a negative skewness. In short, the appearance of our pressure distribution seems to favor either $\gamma > 1$ (i.e., somewhere between isothermal and adiabatic behavior), turbulence that is compressive in nature, or some combination of the two. One important application of our determination of the dispersion of thermal pressures is to estimate the strength of the turbulence using a quantitative comparison based on computer MHD simulations. Padoan et al. (1997a, b) introduced a scaling relation that expresses the rms dispersion $\sigma_s$ of a log-normal distribution of the quantity $s=\ln p$ as a simple function of the Mach number $M$, \begin{equation}\label{sigma_s} \sigma_s=[\ln(1+b^2M^2)]^{0.5}~. \end{equation} Investigators who have adopted this formalism generally find that their simulations seem to support the validity of the scaling with $M$ shown in Eq.~\ref{sigma_s}, but values for the constant $b$ appear to vary from one study to the next. Federrath et al. (2008, 2010) and Brunt (2010) have summarized the outcomes for $b$ for many different cases reported in the literature: extremes in $b$ have ranged from 0.3 to 1.0, depending on the conditions in the computations. Simulations carried out by Federrath et al. (2008, 2010) indicate that whether or not the forcing of the turbulence is solenoidal or compressive can have a strong influence on $b$.
Lemaster \& Stone (2008) have shown that magnetic fields have only a small effect on the relationship between $\sigma_s$ and $M$. We can derive a characteristic turbulent Mach number for the C~I-bearing gas by taking our dispersion for $\ln p$, adopting a value for $b$, and solving for $M$ in Eq.~\ref{sigma_s}. Here, it is appropriate to use a volume weighted distribution of pressures, since that is the conventional way of describing the outcomes of the simulations. A best-fit of a log-normal distribution to the portion $\log (p/k)>3$ of the isothermal curve for low $I/I_0$ shown in Fig.~\ref{volume_dist} yields $\sigma_s=0.46$. From the analysis of the possible effects of averaging in velocity bins that we presented in \S\ref{pileup}, we acknowledge that the true dispersion of $\log (p/k)$ could be as large as 0.5, leading to $\sigma_s=0.5\ln 10=1.2$. For our choice of $b$, we adopt the finding by Brunt (2010) that $b= 0.48^{+0.15}_{-0.11}$, which was based on the observed density and velocity variances in cold gas with large turbulent Mach numbers in the Taurus molecular cloud. This value is near the middle of the range of those derived from computer simulations of MHD turbulence mentioned in the above paragraph. With this value for $b$, we solve for $M$ in Eq.~\ref{sigma_s} and derive $M=1.0^{+0.3}_{-0.2}$ for our lower value of $\sigma_s$ and $M=3.7^{+1.1}_{-0.9}$ for the larger one. For our representative values $f({\rm H_2})=0.60$ (\S\ref{particle_mix}) and $T= 80\,$K (\S\ref{T}), the isothermal sound speed $c_s=0.50\,{\rm km~s}^{-1}$. For the smallest value of $M$ minus its error, we expect the velocity dispersion $\sigma_v=0.8c_s=0.40\,{\rm km~s}^{-1}$, and for the largest $M$ plus its error we expect that $\sigma_v=4.8c_s=2.4\,{\rm km~s}^{-1}$. 
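Inverting Eq.~\ref{sigma_s} for $M$ reproduces the central values quoted above. A short check (the function name is ours; only the central $b=0.48$ is used, without the error propagation that yields the quoted uncertainty ranges):

```python
import math

def mach_from_sigma_s(sigma_s, b=0.48):
    """Solve sigma_s = sqrt(ln(1 + b**2 * M**2)) of Eq. (sigma_s) for M."""
    return math.sqrt(math.exp(sigma_s ** 2) - 1.0) / b

c_s = 0.50  # isothermal sound speed in km/s for f(H2) = 0.60 and T = 80 K
for s in (0.46, 1.2):
    M = mach_from_sigma_s(s)
    print(f"sigma_s = {s}: M = {M:.1f}")  # -> 1.0 and 3.7
```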
Over a wide dynamic range in linear separations, the velocity differences for packets of material in the ISM have been observed to scale in proportion to these separations to a fixed power (Larson 1979, 1981; Heithausen 1996; Brunt \& Heyer 2002a; Brunt \& Kerton 2002). We can factor our values of $\sigma_v$ into this relationship to estimate the largest characteristic scales for the turbulent motions, which in turn indicate the largest cloud sizes (or energy injection scales). For the power-law relationship, we adopt the recent finding of Heyer \& Brunt (2004), \begin{equation}\label{sigma_v} \sigma_v=(0.96\pm0.17)r_{\rm pc}^{0.59\pm 0.07}, \end{equation} where $r_{\rm pc}$ is the linear separation in pc. Solving for $r_{\rm pc}$ using our velocity dispersions in this equation yields $r_{\rm pc}=0.23^{+0.04}_{-0.02}$ for $\sigma_v=0.40\,{\rm km~s}^{-1}$ and $r_{\rm pc}=4.7^{+3.7}_{-1.6}$ for $\sigma_v=2.4\,{\rm km~s}^{-1}$. In Fig.~\ref{filling_factor} we showed the distribution of thicknesses of our C~I-bearing clouds for the different lines of sight in our survey. The median of all the values for the entire collection is 5.5$\,$pc. On the one hand, this median value is close to the upper end of our range of $r_{\rm pc}$, which may indicate that our larger value of $\sigma_s$, i.e., the largest possible dispersion found in \S\ref{pileup}, represents the correct value for the deviations of thermal pressures. On the other hand, the smaller dimensions that apply to the direct measurement $\sigma_s=0.46$ may simply indicate that we are usually viewing a superposition of many independent, smaller clouds along each line of sight. We caution that the trend expressed in Eq.~\ref{sigma_v} is evaluated from $^{12}{\rm CO}$ $J=1-0$ emission-line data for molecular clouds, which may differ from the relationship for the more diffuse regions that we have sampled.
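Solving Eq.~\ref{sigma_v} for $r_{\rm pc}$ with the central coefficients confirms the quoted scales. This sketch (function name ours) uses only the central values $0.96$ and $0.59$; the quoted error ranges come from propagating the stated uncertainties, which we omit here:

```python
def r_pc_from_sigma_v(sigma_v, a=0.96, alpha=0.59):
    """Invert sigma_v = a * r_pc**alpha (Eq. sigma_v) for r_pc."""
    return (sigma_v / a) ** (1.0 / alpha)

for sv in (0.40, 2.4):  # km/s, the two dispersions derived in the text
    print(sv, round(r_pc_from_sigma_v(sv), 2))  # -> 0.23 and 4.73 pc
```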
\subsection{Time Constants}\label{time_constants} Since fluid elements in a turbulent medium have physical properties that change with time, it is important to establish the time intervals that are required for various measurable quantities to converge nearly to their equilibrium values. There are three different contexts where we compare two (or more) states of any particular constituent: (1) the ratio of C~I fine-structure populations, (2) the balance between neutral and ionized forms of the carbon atoms and (3) the $J=0$ to 1 rotational temperature $T_{01}$ of H$_2$. A fourth time-variable quantity of interest is the kinetic temperature of the gas, which not only influences the other three quantities that we measure but also the manner in which the gas responds to disturbances. We will compute the characteristic $e$-folding times for these processes in the following subsections, and later we will compare them to the eddy turnover times for different size scales. In a general context, we can imagine atoms or molecules in two possible states with equilibrium number densities $n_{\rm 0,eq}$ in some lower level and $n_{\rm 1,eq}$ in an upper one. In equilibrium, \begin{equation}\label{equilib} R_{01}n_{\rm 0,eq}=R_{10}n_{\rm 1,eq}~, \end{equation} where $R_{01}$ and $R_{10}$ are the upward and downward conversion rates, respectively. We can propose a solution for the time behavior of the lower level, $n_0$, to take the form \begin{equation}\label{proposed_behavior} n_0(t)=n_{\rm 0,eq}+(n_{\rm 0,i}-n_{\rm 0,eq})e^{-\gamma t} \end{equation} as the concentration of $n_0$ adjusts itself from some initial density $n_{\rm 0,i}$ to its equilibrium value $n_{\rm 0,eq}$. This form must agree with the condition \begin{eqnarray}\label{dndt} {dn_0(t)\over dt}&=&n_1(t)R_{10}-n_0(t)R_{01}\nonumber\\ &=&n_{\rm tot}R_{10}-n_0(t)(R_{10}+R_{01})~, \end{eqnarray} where $n_{\rm tot}=n_0(t)+n_1(t)$ is the (constant) sum of the number densities of the two levels. 
If we insert the proposed time behavior (Eq.~\ref{proposed_behavior}) into the $n_0(t)$ term of Eq.~\ref{dndt} and compare it to an explicit differentiation of Eq.~\ref{proposed_behavior} with time, we can equate the $e^{-\gamma t}$ terms to reveal that \begin{equation}\label{gamma} \gamma=R_{10}+R_{01}~. \end{equation} (The sum of the remaining terms without $e^{-\gamma t}$ equals zero.) In essence, any departure from the equilibrium level distribution, either positive or negative in sign, will decay to the equilibrium condition in an exponential fashion with an $e$-folding time constant given by Eq.~\ref{gamma}. In the following three subsections, we apply this rule to population ratios discussed in items (1) to (3) at the beginning of this section. \subsubsection{C~I Fine Structure Levels}\label{fsl_timedep} As a simplification, we consider only the first two levels and ignore the existence of the third (highest) one. Here, $R_{01}$ equals the sum of the upward rate constants for various collision partners times their respective densities. $R_{10}$ equals the sum of the downward rate constants times the densities plus also the spontaneous decay probability $A_{10}$. If $f1$ is small, $A_{10}=7.93\times 10^{-8}{\rm s}^{-1}$ (Galav\'is et al. 1997) dominates over the collisional excitation (and de-excitation) terms. The inverse of $A_{10}$ equals 146~days. If $f1$ is not small, the collisional terms make $R_{10}+R_{01}$ even larger and thus reduce the time constant to less than 146~days. Clearly, even for the more complex situation for the interactions with the highest of the three fine-structure levels, the time constants are extraordinarily short ($A_{21}^{-1}=44\,{\rm days}$). 
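Both the 146-day figure and the exponential approach with rate $\gamma=R_{10}+R_{01}$ (Eq.~\ref{gamma}) can be checked numerically. In the sketch below, the collision rates in the second part are invented purely for the demonstration; only $A_{10}$ comes from the text:

```python
import math

A10 = 7.93e-8                        # s^-1 (Galavis et al. 1997)
print(1.0 / A10 / 86400.0)           # -> about 146 days

# Integrate dn0/dt = n_tot*R10 - n0*(R10 + R01) over one predicted
# e-folding time 1/(R10 + R01), using illustrative (made-up) rates.
R01, R10 = 2.0e-8, 9.0e-8            # s^-1
n_tot, n0 = 1.0, 1.0                 # start entirely in the lower level
n0_eq = n_tot * R10 / (R01 + R10)
dt, t = 1.0e4, 0.0
while t < 1.0 / (R01 + R10):
    n0 += dt * (n_tot * R10 - n0 * (R10 + R01))
    t += dt
# the departure from equilibrium should have shrunk by a factor of e
print((n0 - n0_eq) / (1.0 - n0_eq))  # -> approx exp(-1) = 0.368
```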
\subsubsection{The Photoionization and Recombination of Carbon Atoms}\label{ioniz_timedep} Since the equilibrium concentrations of neutral carbon are much smaller than the ionized forms in all cases that we consider, as is evident from Fig.~\ref{phase_diag}, it is clear that the ionization rate $R_{01}=(I/I_0)\Gamma_0$ dominates over the various recombination terms shown in Eq.~\ref{C_ionization} that make up $R_{10}$. Figure~\ref{log_pok_bin} shows us that $I/I_0=1$ is about the lowest value of the radiation density that we encounter. Hence, the longest time constant that we expect to apply is simply $\Gamma_0^{-1}=5\times 10^9{\rm s}=160\,{\rm yr}$, and this time shortens in proportion to the increase in $I$ above the reference value $I_0$. \placefigure{r_t_01} \begin{figure*} \plotone{fig12.eps} \caption{Time constants (in seconds) for (1) the relaxation of the $T_{01}$ rotation temperature of H$_2$ to the local kinetic temperature (solid contours), as a result of ortho-para conversions of the lowest two levels due to collisions with protons and (2) cooling times $t_{\rm cool}$ as given in Eq.~\protect\ref{t_cool} (dashed lines). The thermal equilibrium curve for the CNM that appears in Fig.~\protect\ref{phase_diag} is shown by the thick curve. This diagram was constructed assuming that the gas has $f({\rm H}_2)=0.6$ and He/H=0.09 (see \S\protect\ref{particle_mix}).\label{r_t_01}} \end{figure*} \subsubsection{The $J=0$ to 1 Rotation Temperature $T_{01}$ of H$_2$}\label{T_01_timedep} Cecchi-Pestellini et al. (2005) have performed detailed calculations of the time-dependent H$_2$ level populations in turbulent media that have short-lived pockets of hot gas that can leave an imprint on the rotation temperatures. Here, we focus on a much simpler discussion for $T_{01}$ of the two lowest rotational levels, since they are relevant to our determinations of kinetic temperatures. 
We evaluate the characteristic time for changes in the population ratio of $J=0$ to that of $J=1$ when there is a sudden change in the kinetic temperature, but we neglect the effects of repopulating the lower levels through cascades from higher levels of excitation. The rate coefficient for ortho-para transitions caused by neutral hydrogen impacts onto H$_2$ is extremely low at the temperatures that we consider [for $T<300\,$K, $k_{01}<10^{-16}{\rm cm}^3{\rm s}^{-1}$ (Sun \& Dalgarno 1994)]. For protons, however, the rate constants are much larger: $k_{10}=2.0\times 10^{-10}{\rm cm}^3{\rm s}^{-1}$ (Gerlich 1990), and $k_{01}=9\exp(-171/T)k_{10}$. The solid contours in Fig.~\ref{r_t_01} show how the time constants vary over the $\log (p/k)$ vs. $\log n({\rm H})$ diagram when we combine the rate constants with determinations of $n(p)$ using Eq.~\ref{H_ionization}. \subsubsection{The Kinetic Temperature}\label{kinetic_T_timedep} Unlike the cases that we considered in \S\S\ref{fsl_timedep}-\ref{T_01_timedep}, for the kinetic temperature we must work with a continuous variable instead of a population ratio of two states of some constituent. Thus, a somewhat different tactic is needed to assess the characteristic relaxation time. Wolfire et al. (2003) have evaluated the isobaric cooling time for the ISM and find that \begin{equation}\label{t_cool} t_{\rm cool}=7.40\times 10^{11}\left({T\over 80\,{\rm K}}\right)^{1.2}\left({p/k\over 3000\,{\rm cm}^{-3}{\rm K}}\right)^{-0.8}{\rm s}~. \end{equation} They state that this formula is valid to within a factor of 1.35 over the temperature range $55<T<8500\,$K. The coefficient in Eq.~\ref{t_cool} applies to conditions very close to our median temperature and pressure in the survey ($T_{01}=80\,$K and $p/k=3000\,{\rm cm}^{-3}\,$K), and it is not much different from the relaxation time for $T_{01}$ ($3.95\times 10^{11}{\rm s}=12,500\,{\rm yr}$) at the same temperature and pressure.
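The comparison above reduces to direct evaluation of Eq.~\ref{t_cool} and a unit conversion; a minimal check (assuming $3.156\times 10^7$~s per year) reproduces the quoted numbers:

```python
import math

# Isobaric cooling time of Wolfire et al. (2003), Eq. t_cool, and the
# conversions quoted in the text (assuming 3.156e7 s per year).
SEC_PER_YR = 3.156e7

def t_cool(T, p_over_k):
    """Cooling time in seconds; valid to a factor 1.35 for 55 < T < 8500 K."""
    return 7.40e11 * (T / 80.0)**1.2 * (p_over_k / 3000.0)**(-0.8)

print(t_cool(80.0, 3000.0))               # 7.40e11 s at the median conditions
print(3.95e11 / SEC_PER_YR)               # T_01 relaxation: ~12,500 yr

# At T = 80 K the ortho-para rate constants are comparable in size:
print(9.0 * math.exp(-171.0 / 80.0))      # k_01 / k_10 ~ 1.1
```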
We depict values of $t_{\rm cool}$ by the nearly straight, dashed contours in Fig.~\ref{r_t_01}. \subsection{Turbulent Eddy Crossing Times}\label{crossing_times} As the length scales become smaller, the dwell times for any particular set of physical conditions become shorter, and these shortened durations may prevent some of the quantities discussed above from reaching their equilibrium values. To estimate the time scales $\Delta t=r/\Delta v$ for changes to occur in a turbulent eddy with a characteristic radius $r$, we can once again make use of the power-law relationship between velocity shears $\Delta v$ and length scales $r_{\rm pc}$ (as we did in \S\ref{comparisons_turbulence}), but this time we adopt findings from observations at smaller scales. A slight reduction of the slope seems to occur for these shorter length scales: Falgarone et al. (1992) conclude that $\Delta v\approx r_{\rm pc}^{0.4}$ for $10^{-2}<r_{\rm pc}<1$, and their result agrees with that of Brunt \& Heyer (2002b) and Heyer \& Brunt (2004) at a common scale $r_{\rm pc}=1$. This velocity trend for the shorter lengths is consistent with the theoretical study of turbulence by Boldyrev et al. (2002), and we will adopt it for our investigation of time scales. We recognize, however, that there are isolated observations, such as those carried out by Sakamoto (2002), Sakamoto \& Sunaka (2003) and Heithausen (2004, 2006), that show some specific regions where the velocity differences measured over $r_{\rm pc}\sim 10^{-3}-10^{-1}$ are almost one order of magnitude above this velocity-size relationship; see Fig.~10 of Falgarone et al. (2009). Also, observations of CO emission by Hily-Blant et al. (2008) demonstrate that isolated concentrations of turbulent energy over small scales create occasional, large velocity deviations that go well beyond the tails of a Gaussian distribution. Finally, Shetty et al.
(2010) indicate that projection effects in the position-position-velocity (PPV) data overestimate the power-law slope (by one to a few tenths) and underestimate the velocity amplitudes (by about a factor of two) in a 3D physical position-position-position (PPP) representation of a turbulent medium. With these points in mind, we make an extrapolation of the trend $\Delta v=r_{\rm pc}^{0.4}\,{\rm km~s}^{-1}$ toward very small scales to yield $\Delta t=r_{\rm pc}^{0.6}\,$Myr, but acknowledge that in some circumstances this form for $\Delta t$ may significantly overestimate the time span for rapid changes in conditions. For a length scale $r_{\rm pc}=0.00046$ (or 95$\,$AU), we expect that $\Delta t= 0.01\,$Myr (i.e., $10^{11.5}{\rm s}$). Along the CNM equilibrium curve shown in Fig.~\ref{r_t_01}, this time equals $t_{\rm cool}$ at $\log(p/k)=3.85$. The crossing time is about equal to the $e$-folding time for $T_{01}$ at a slightly lower pressure, $\log(p/k)=3.6$. Thus, in short, for scale sizes smaller than about 100$\,$AU (but perhaps a few thousand AU for some of the more active regions) we can expect that turbulent fluctuations at the pressures that we are considering will depart from the CNM thermal equilibrium curve ($\gamma\approx 0.7$) and exhibit an effective $\gamma$ that is somewhere between 0.7 and the adiabatic value of 5/3 (for a pure atomic gas). Over smaller scales (or lower pressures) $T_{01}$ may depart from the local kinetic temperature. Over all of the practical size scales, the equilibrium results for the C~I fine-structure excitations and C ionization should apply, but with the provision that their outcomes depend on the instantaneous temperature. \subsection{Possible Effects from Coherent, Large Scale Disturbances}\label{coherent_disturbances} Up to now, we have considered the effects of compressions and rarefactions caused by random turbulent motions. 
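As a numerical check of the crossing-time estimates above, the short sketch below (assuming 1~pc $=3.086\times 10^{13}$~km and $2.06265\times 10^5$~AU) reproduces the quoted figures for a 95~AU eddy:

```python
# Eddy crossing time Delta t = r / Delta v with Delta v = r_pc^0.4 km/s,
# which reduces to Delta t ~ r_pc^0.6 Myr (since 1 pc / (1 km/s) is
# close to 1 Myr).
PC_KM = 3.086e13        # km per parsec
SEC_PER_MYR = 3.156e13  # seconds per Myr
AU_PER_PC = 206265.0

def crossing_time_s(r_pc):
    dv_km_s = r_pc**0.4                  # velocity shear in km/s
    return r_pc * PC_KM / dv_km_s        # seconds; = r_pc^0.6 * 0.978 Myr

r_pc = 0.00046
print(r_pc * AU_PER_PC)                  # ~95 AU
print(crossing_time_s(r_pc) / SEC_PER_MYR)  # ~0.01 Myr, i.e. ~10^11.5 s
```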
However, the injection of mechanical energy over macroscopic scales in the Galactic disk can also create deviations in pressure. Supernova blast waves constitute a principal source of this energy in the ISM. We know that there are strongly elevated pressures inside clouds that have recently been overtaken by a supernova blast wave, as shown by the enhanced C~I fine-structure excitations that appear in the spectra of stars within and behind the Vela supernova remnant (Jenkins et al. 1981, 1984, 1998; Jenkins \& Wallerstein 1995; Wallerstein et al. 1995; Nichols \& Slavin 2004). We have good reason to expect that pressure increases with somewhat smaller amplitudes should persist even within remnants that are no longer identifiable because they are so old or disrupted. In a more general context, at random locations in the disk of the Galaxy the outcomes for the thermal pressure enhancements arising from the effects of supernovae are expected to be appreciably different for the various broad temperature regimes in the ISM, as shown by several different computer simulations (de Avillez \& Breitschwerdt 2005a; Mac Low et al. 2005). The simulations have many free parameters that influence the properties of the average pressures and the shapes of the distribution functions. For this reason, we will restrict our attention to a very simple test of the plausibility that, beyond the limited range of fluctuations caused by turbulence, there is a broader, underlying spread of pressures caused by coherent, large scale mechanical disturbances arising from supernova explosions. We can adopt a tactic similar to one developed by Jenkins et al. (1983) in their comparison of C~I pressures to a prediction based on the theory of the three-phase ISM advanced by McKee \& Ostriker (1977).
Small, neutral clouds that are overtaken by an expanding supernova blast wave should rapidly (and adiabatically) adjust their internal thermal pressures to equal that of the hot medium well inside the remnant's boundary. We can now make a simple prediction of what would happen if these clouds actually defined the trend of the pressure distribution well above the mean pressure and then compare this outcome with our observations. If the radius $r$ of any remnant in the adiabatic phase grows in proportion to $t^\eta$ and its volume-weighted average internal pressure $p$ is proportional to $r^\alpha$, we find that \begin{equation} dp/dt=(dp/dr)(dr/dt)\propto r^{\alpha-1+(\eta - 1)/\eta} . \end{equation} A time-averaged occupation volume $V(p)$ is then given by \begin{equation} V(p)\propto r^3/(dp/dt)=r^{3-\alpha+1/\eta} , \end{equation} which gives an overall volume filling factor per unit $\log p$ that varies as $V(p)p\propto p^{-14/9}$ for $\alpha=-3$ and $\eta=3/5$ (McKee \& Ostriker 1977), as long as the remnants do not overlap each other, which should be true at pressures well above the median pressure. The histogram-style trace in Figure~\ref{volume_dist} shows our thermal pressure distribution on the assumption that the overtaken clouds contract adiabatically, i.e., with $\gamma=5/3$. Unlike the smooth curves shown in this figure, this distribution represents our entire dataset, i.e., not just the instances where $I/I_0<10^{0.5}$. Our reason for this choice is that we wish to avoid a bias against regions of higher than normal starlight intensity, because the locations of supernova remnants are correlated with those of associations of early-type stars. As Fig.~\ref{volume_dist} shows, the pressure distribution has a slope that is roughly consistent with $dV/d\ln p\propto p^{-14/9}$.
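The exponent $-14/9$ follows from exact arithmetic on the exponents in the two relations above; a minimal check using exact rationals:

```python
from fractions import Fraction

# With r ~ t^eta and p ~ r^alpha:
#   dp/dt ~ r^(alpha - 1 + (eta-1)/eta),   V(p) ~ r^3/(dp/dt),
# and since r ~ p^(1/alpha), the filling factor per unit log p obeys
#   V(p)*p ~ p^((3 + 1/eta)/alpha).
alpha = Fraction(-3)
eta = Fraction(3, 5)

exp_dpdt = alpha - 1 + (eta - 1) / eta   # exponent of r in dp/dt
exp_V = 3 - exp_dpdt                     # exponent of r in V(p)
assert exp_V == 3 - alpha + 1 / eta      # matches the equation in the text

exp_Vp_in_p = (exp_V + alpha) / alpha    # exponent of p in V(p)*p
print(exp_Vp_in_p)                       # -14/9
```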
\subsection{High Pressure Component}\label{high_pressure_comp} In \S\ref{admixtures} we proposed that some small fraction of all of the gas that we observed has an extraordinarily high pressure ($p/k\gtrsim 3\times 10^5\,{\rm cm}^{-3}\,$K, $T > 80\,$K), in order to nudge the $f2$ outcomes to locations above the normal equilibrium tracks shown in Fig.~\ref{all_v_f1f2}. We now explore various explanations for the excesses in $f2$, starting with ones that do not imply the presence of small amounts of gas at high pressures. Later, on the premise that the existence of the high pressure material is indeed real, we review some suggestions made by other investigators on its possible origin. \subsubsection{A Misleading Conclusion?}\label{misleading_conclusion} Before we fully accept our interpretation that the anomalously high $f2$ measurements imply the existence of small amounts of high pressure material, well separated from the main pressure distribution function presented in \S\ref{basic_dist}, we should briefly investigate possible errors in the interpretation of the outcomes in $f1$ and $f2$. One possibility is that the excitation cross sections or the decay rates for the excited levels have systematic errors that underestimate the populations in the $^3P_2$ state (C~I$^{**}$) relative to those in the $^3P_1$ level (C~I$^*$). We feel that this is unlikely, since earlier calculations of these quantities that appeared in the literature (i.e., the ones adopted by JT01\footnote{Appendix~\ref{atomic_phys_param} discusses our current updates for the atomic parameters.}) did not yield outcomes that predicted greater values of $f2$ for their respective $f1$ counterparts. However, on more fundamental grounds we do not feel qualified to comment on the accuracy of the atomic physics calculations, so we will not pursue this issue further. Another possibility for misleading results could be errors in our determinations of $f1$ and $f2$. 
Such errors could arise either from errors in the adopted $f$-values for the C~I transitions or from our under-appreciation of the effects of misleading apparent optical depths caused by unresolved, saturated substructures in the absorption line profiles. For the former, we feel that our investigation discussed in Appendix~\ref{fval_validation} provides some assurance that we are not experiencing systematic errors in the relative strengths of weak multiplets versus the strong ones. However, our derived $f$-values rely on the correctness of the published relative line strengths within multiplets. These relative strengths have a direct influence on our derived values of $N({\rm C~I}^*)$ and $N({\rm C~I}^{**})$, relative to each other and to $N({\rm C~I})$. As for the possibility that we are being misled by incorrect optical depth measurements, we feel that the precautions that we discuss in \S\ref{distortions} for screening out such cases provide adequate safeguards. Moreover, it is reassuring to see that for individual determinations of the apparent fraction of C~I in the high pressure component, $g_{\rm high}$, in each velocity bin (i.e., not the overall averages shown in Table~\ref{obs_quantities_table}), there is no trend with $N({\rm C~I_{total}})$, an effect that we would have expected to see if the phenomenon were driven by the strengths of the absorption lines. Still another effect to examine is the possibility that there is an anomalous means for exciting the fine-structure levels. Positively charged collision partners will give proportionally stronger excitations of the second excited level of C~I, as indicated by the differences in cross sections for protons compared to neutrals -- see Fig.~1 of Silva \& Viegas (2002).
If ambipolar diffusion (ion-neutral slip) created by MHD shocks and Alfv\'en waves creates enough suprathermal protons (and heavy element ions) to further excite the C~I, these ions might help to explain the larger outcomes for $f2$. While this is a qualitatively attractive explanation, in a quantitative sense it seems to fail: the required fractional concentration of the positively charged collision partners, greater than about 30\%, seems to be unreasonably large (e.g., the conditions $n({\rm H~I})=2\,{\rm cm}^{-3}$, $T({\rm H~I})=600\,$K, $n(p)=0.6\,{\rm cm}^{-3}$, $T(p)=20,000\,$K should give $f1=0.23$ and $f2=0.066$, which is not far from our measured average shown by the white $\times$ in Fig.~\ref{all_v_f1f2}; smaller ion fractions, however, fail to do so). \subsubsection{The Amount of the High Pressure Component}\label{amount_hipress} We now move on to the premise that we advocated earlier in \S\ref{admixtures} that the anomalously large values of $f2$ arise from a small admixture of high pressure gas in virtually all of the cases that we examined. In terms of $N({\rm C~I})$, the overall fraction $g_{\rm high}$ is usually about 5\%. However, it is important to note that, except in the presence of exceptionally strong ionization field strengths, this outcome must arise from much smaller proportions of H~I, because the neutral fraction of carbon increases with pressure, making small amounts of high pressure gas far more conspicuous. For instance, we can expect a factor 100 enhancement in the fractional amount of C~I, $n({\rm C~I_{total}})/[n({\rm C~II})+ n({\rm C~I_{total}})]$, when the gas is at $\log (p/k)=6$, $T=300\,$K over that which would apply to material with more conventional physical conditions $\log(p/k)=3.6$, $T=80\,$K. On average, this makes the fractional mass contribution of H~I in the high pressure component only about $g_{\rm high}/100$, a 0.05\% mass fraction.
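The mass-budget arithmetic above can be summarized in a one-line sketch; the factor-100 enhancement is the value adopted in the text, not an independently derived quantity:

```python
# g_high is measured in terms of N(C I), but the ~100-fold pressure
# enhancement of the neutral carbon fraction (taken from the text) means
# that the corresponding H I fraction is ~100 times smaller.
g_high = 0.05           # typical high pressure fraction of N(C I)
ci_enhancement = 100.0  # boost of n(C I)/n(C total) at log(p/k)=6, T=300 K

hi_mass_fraction = g_high / ci_enhancement
print(hi_mass_fraction)  # 0.0005, i.e., the 0.05% H I mass fraction quoted
```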
We add a caution, however, that this fraction could be larger if the actual pressure of the high-pressure component is lower than the value stated above. \subsubsection{Radiation from the Excited Levels of Carbon Atoms}\label{fsl_radiation} Radiative decay of the upper fine-structure level of C~II is an important cooling mechanism for the ISM. The rate of this cooling per unit mass can be monitored either by directly observing the emission at 1900$\,$GHz ($157.7\,\mu$m) (Stutzki et al. 1988; Langer et al. 2010; Pineda et al. 2010; Velusamy et al. 2010) or by viewing the C~II$^*$ absorption features at 1037.018 and 1335.708\,\AA, as has been done for both the ISM of our Galaxy (Lehner et al. 2004) and the distant, damped Ly$\alpha$ systems in quasar spectra (Wolfe et al. 2003a, b). It is worthwhile to ask whether the regions that we view with enhanced pressures could make an important contribution to either the absorption or emission measurements. It is difficult to formulate a precise answer, since we do not fully understand the nature of these regions. On the one hand, the factor 100 diminution in the H~I content mentioned in the previous section is approximately offset by a factor 100 enhancement in the collision rate for exciting the upper level of C~II. In this circumstance, as long as we are still below the critical density for establishing the C~II$^*$ population, our typical value of $g_{\rm high}$ of 5\% would be approximately the correct answer for the enhancement of the emission intensity (in the optically thin limit) or for the increase in $N$(C~II$^*$) over that from the gas at normal pressures. On the other hand, if the high pressure regions are located at sites where the photoionizing radiation level is much higher than elsewhere, then the H~I concentration is not strongly diminished but the population of the upper C~II levels is still very high.
Here, the column densities of C~II$^*$ could be considerably higher than 5\% of the total and the regions that hold this material could emit a substantial amount of radiation. It is much easier for us to make a quantitative assessment of the enhancement of radiation from the excited levels of C~I because the populations of the two upper levels are exactly what we observe. The value of $f1$ within the high pressure gas should be about twice that of the normal gas; hence we can expect that the radiation at 492$\,$GHz ($609\,\mu$m) seen toward most of the translucent clouds (Heithausen et al. 2001; Bensch et al. 2003) should only be increased by about 10\%. We estimate that the value of $f2$ in the high pressure gas is about 15 times as large as that in the low pressure gas, so about 44\% of the radiation at 809$\,$GHz ($371\,\mu$m) could arise from the high pressure regions. \subsubsection{Possible Origins of the High Pressure Gas}\label{origins} As we pointed out in \S\ref{admixtures}, in order to obtain the composite $f1$ and $f2$ values that we found for the entire survey, the high pressure component had to be a distinct population whose distribution in pressure was well removed from the low pressure material. We demonstrated in Fig.~\ref{f1f2_lognormal} that it could not be a diminishing tail resembling a power law that extends away from the main, low pressure distribution. In the context of turbulence theory, this poses a challenge in the interpretation of the high pressure gas, unless one could propose an explanation for the absence of intermediate mass fractions at pressures between the two extremes. In order to justify the presence of certain molecules in the ISM that require endothermic reactions for their production, such as CH$^+$, Joulain et al. (1998) proposed the existence of hot gas concentrations within highly confined dissipation regions created by turbulence. 
From a computer simulation, Pety \& Falgarone (2000) found that extraordinary physical conditions could arise in regions that were selected to have special dynamical conditions, such as larger than normal amounts of vorticity or negative divergence. Further studies from theoretical or observational perspectives have been presented by Godard et al. (2009) and Hily-Blant et al. (2008). These investigators concluded that the volume filling factors for these regions are small (a few percent), but not as small as the mass fractions that we reported in \S\ref{amount_hipress}. While the intermittent emergence of extreme conditions within highly confined dissipation regions in a turbulent regime is an attractive prospect for explaining our high pressure gas, it must nevertheless satisfy our requirement for a distinct separation from the pressure fluctuations arising from regular turbulent disturbances instead of a continuous, low level extension thereof. In \S\ref{behavior_velocity} we showed evidence that the greatest extremes in pressure occurred for gases at unusually large negative velocities, and this interpretation fits in well with the concept that the target stars (and their neighboring stars) play a role. Indeed, an inspection of Table~\ref{obs_quantities_table} shows that in some directions, several adjacent sight lines all show elevated pressures compared to the typical pressures derived from the full sample. Two prominent examples are stars near or within the Carina and Orion Nebulae. These are dynamically disturbed regions, and they are also regions of significantly enhanced starlight density. This supports the notion that the stars somehow raise the pressures in their surroundings and that high values of $I/I_{0}$ indicate both recent, enhanced star formation and a more highly pressurized ISM. Figure~\ref{fhigh_vs_logi} indicates that the quantities $g_{\rm high}$ and $I/I_0$ also seem to be connected to each other. 
Generally, we can see that cases where $g_{\rm high}>0.2$ appear to require that $\log (I/I_0) > 0.5$ and that very few outcomes with $g_{\rm high} < 0.2$ had $\log (I/I_0) > 1.5$. It is unclear whether the dominant cause for pressurization is from interactions with mass-loss ejecta, small clouds experiencing a photoevaporation ``rocket effect'' (Bertoldi 1989; Bertoldi \& McKee 1990; Bertoldi \& Jenkins 1992), or the sudden creation of an H~II region, all of which can compress the ambient material and accelerate it toward us. An additional possibility is that H~I gas near the stars is heated more strongly by the photoelectric effect from grains (Weingartner \& Draine 2001b), which could cause a short-term spike in pressure. While these effects (or combinations thereof) may be the dominant cause for the isolated cases that show large values for $g_{\rm high}$, we still find significant amounts of high pressure material at large positive velocities and even small admixtures of high pressure material at all velocities. These outcomes indicate that other mechanisms unrelated to the target stars may play a role as well. \placefigure{fhigh_vs_logi} \begin{figure*} \epsscale{1.7} \plotone{fig13.eps} \caption{The relationship between the fractional amount of high pressure gas, $g_{\rm high}$, and the starlight intensity relative to the Galactic average, $I/I_0$.\label{fhigh_vs_logi}} \end{figure*} Field et al. (2009) have proposed that the recoil of H atoms following the photodissociation of H$_2$ at the edge of a molecular cloud can create an external pressure that helps to confine the cloud. They estimated that at locations where the ambient UV field intensity reaches $I/I_0 \approx 60$ the recoil pressure can be approximately $1.3\times 10^5{\rm cm}^{-3}\,$K. (Our $I/I_0$ is approximately the same as their $\chi$ intensity parameter.)
If this mechanism could increase the thermal pressure in the outer portions of translucent clouds that still have reasonable concentrations of H$_2$ and explain the existence of the high pressure component that we observe, we would indeed expect to see a positive relationship between $g_{\rm high}$ and $I/I_0$. Even though the points in Fig.~\ref{fhigh_vs_logi} at first glance seem to favor the recoil hypothesis as a possible explanation for the high pressure component, our enthusiasm for accepting this proposition must be tempered by two considerations: (1) our fiducial pressure $\log (p/k) \gtrsim 5.5$ (a lower limit which is to some degree arbitrary and might be relaxed to a slightly lower level) requires a value for $I/I_0$ considerably greater than 60, and (2) the correlation seen in Fig.~\ref{fhigh_vs_logi} may be a byproduct of some other physical effect that responds to higher than normal intensities and generates the high pressures. For instance, we know from Fig.~\ref{log_pok_bin} that values of $\log (p/k)$ in the low pressure regime are likewise correlated with the intensity of starlight, and the greater influence of intermittent dissipation effects in the more strongly driven turbulence may account for the more conspicuous presence of high pressure gas. \section{Summary}\label{summary} We have presented a comprehensive analysis of {\it HST\/} UV spectra stored in the MAST archive for 89 stars that were observed with the E140H mode of STIS, with the goal of measuring the populations of the three fine-structure levels of the ground electronic state of neutral carbon atoms in the ISM. The ultimate purpose of these measurements was to synthesize a distribution function for the thermal pressures in gases that mostly represent the cold neutral medium (CNM) in our part of the Galactic disk. This work builds upon a similar study of 21 stars in restricted portions of the sky carried out by JT01 in a special observing program dedicated to this purpose. 
We have repeated the basic analysis protocol developed by JT01 for unraveling the velocity profiles for carbon atoms in the separate fine-structure levels from the blended features in many different multiplets, but with a few improvements in methodology that are outlined in various subsections within Appendix~\ref{improvements}. We interpret most of the variations in the outcomes for thermal pressures to arise from fluctuations caused by interstellar turbulence. Some additional pressure excursions that are large in magnitude but apply to small mass fractions probably arise from the random passages of infrequent but strong shocks created by either stellar mass loss or supernova explosions. The following conclusions have emerged from our study of C~I fine-structure excitations: \begin{enumerate} \item The relative populations of the two excited fine-structure levels are influenced in different ways by the local physical conditions, since the levels have significantly different collisional rate constants and energies. This feature allows us to sense in any one velocity channel the presence of admixtures of gas that have markedly different conditions. While there is a multitude of possibilities for explaining any particular combination of level populations, we find that when the data are viewed as a whole, the most straightforward interpretation is that practically all of the gas in the normal range of pressures ($10^3 \lesssim p/k \lesssim 10^4{\rm cm}^{-3}\,$K) is accompanied by very small amounts (of order 0.05\%) of gas at anomalously large pressures and temperatures ($p/k > 10^{5.5}\,{\rm cm}^{-3}\,$K, $T>80\,$K). In a small fraction of cases, the proportion of the gas at high pressures is markedly higher than this level, both because the amount of C~I is greater and because the local radiation density is high (which makes more of the carbon atoms singly ionized).
\item For a substantial number of our lines of sight, we can make use of molecular hydrogen rotation temperatures $T_{01}$ between $J=0$ and 1 to define the local kinetic temperature. Such temperatures are useful in defining one of the free parameters in solutions for the level populations. As an added benefit, we can explore whether or not, in a general statistical sense, the pressure outcomes are related in some way with such temperatures. We find only a weak anticorrelation, which indicates that pressure fluctuations do not appear to be the dominant cause for temperature changes from one place to the next. \item Excluding the small amounts of high-pressure gas mentioned earlier, the pressures of most of the CNM material show some correlation with the local radiation densities, as sensed by the observed ratio of O~I (or sometimes S~II) to C~I followed by an application of the equation of ionization equilibrium with plausible values for O/C and S/C to derive $N$(C~II). We interpret this trend as arising from the fact that the stars that create this radiation are sources of enough mechanical energy to make the pressures higher than normal. \item The main part of the mass-weighted distribution of pressures in our complete sample approximately follows a lognormal distribution with a mean value for $\log (p/k)$ equal to 3.58 and a standard deviation of 0.175~dex. However, for $\log (p/k) < 3.2$ or $>4.0$ the amount of material is greater than a continuation of the lognormal distribution. \item In order to sense the distribution of pressures in regions well removed from the sources of mechanical disturbance (i.e., the stars that emit large amounts of radiation), we have isolated for study only those cases where the radiation density is less than $10^{0.5}$ times the overall average level. Under these circumstances, the tail on the high pressure side of the distribution becomes suppressed, and the remaining distribution develops a negative skewness. 
We supply a polynomial expression (Eq.~\ref{polynomial_fit}) that fits this distribution, which is shown in Fig.~\ref{hist_pok}. About 23\% of the material in this distribution is below the minimum pressure for the thermal equilibrium curve of a static CNM in our part of the Galaxy, suggesting that short-term fluctuations in pressure can occur without the gas being transformed to a stable warm neutral medium (WNM). \item The thicknesses of the regions that we were able to probe, as measured by the hydrogen column densities divided by their space densities, are generally less than 20$\,$pc. The filling fractions for the sightlines are generally less than 1\%. The remaining 99\% of a typical sightline is filled with much hotter gas having densities that are far too low to create enough C~I for us to measure. \item We recognize that even with the over-determination of conditions provided by the two fine-structure levels, we can still underestimate the dispersion of pressures, because we are viewing at each velocity an average pressure for the superposition of regions that could have vastly different pressures. We have studied how the dispersions scale in proportion to $N({\rm C~I_{total}})^{-0.5}$ and find that the ISM could conceivably be composed of independent packets of gas with a true {\it rms\/} dispersion in $\log (p/k)$ that could be as large as 0.5$\,$dex, which is considerably wider than the distribution that we constructed directly from the data. The characteristic column density of each packet would be about $N({\rm C~I}_{\rm total})=2\times 10^{12}{\rm cm}^{-2}$. However, an alternate interpretation, and one that is quite plausible, is that small volumes of gas have intrinsic pressure variances that are larger than those for coherent, larger volumes that might be more resistant to perturbations from turbulent forces. This phenomenon could conceivably produce the same linear scaling of pressure dispersions against $N({\rm C~I_{total}})^{-0.5}$ that we observed.
\item On the basis of our findings reported in items 4 and 7 above, we derive characteristic turbulent Mach numbers for the C~I-bearing gas to range between 0.8 and 4.8. Since the speed of sound is about $0.5\,{\rm km~s}^{-1}$ if $T=80\,$K, we expect the 3-dimensional velocity dispersion $\sigma_v$ to range between 0.40 and $2.4\,{\rm km~s}^{-1}$. If we equate these numbers to observations of velocity structure functions in the ISM, we find that the characteristic size $r$ of the clouds or the outer driving scale of the turbulence is probably in the range of approximately $0.2 < r < 4.7\,$pc. \item Gas with radial velocities well outside the range of motions expected for differential galactic rotation is more likely than usual to exhibit exceptionally large pressures. This link of pressures with kinematics helps to support the interpretation that shocks and turbulence play an important role in creating the positive excursions in pressure. Packets of C~I moving at negative velocities show larger pressure excursions than those at positive velocities. We explain this difference in terms of an observational bias that favors our viewing the near sides of pressurized shells that are expanding away from our target stars. \item There is a broad range of time scales that are needed to reach equilibrium values for various quantities and physical processes that are relevant to our study. From the shortest to the longest they are as follows: (1) C~I fine-structure level populations (of order 100~days), (2) the balance between C~I and C~II established by the competition between photoionizations and various means of recombination (160~yr, or shorter if the radiation density is larger than average), (3) the coupling of the $J=0$ to 1 rotation temperature of H$_2$ to the local kinetic temperature ($10^4\,$yr for typical conditions: $\log (p/k)=3.5$ and $T=80\,$K), and (4) the cooling time for the ISM ($3\times 10^4\,$yr for the same conditions).
We compute the eddy turnover times for turbulent eddies having a radius $r$ using a relation $\Delta t=r/\Delta v$ with an extrapolation to small scales $\Delta v=r_{\rm pc}^{0.4}{\rm km~s}^{-1}$, and we find that the only items of consequence for $r$ smaller than about $100-1000\,$AU are (3) and (4). Over these extremely small scales, delays in the adjustments of H$_2$ rotation temperatures will give misleading readings for the local kinetic temperatures, but the differences in the two temperatures should be minor, especially since we can use $T_{01}$ only to indicate an average temperature over many small volumes. Likewise, any lag in the thermal response of the gas will make its polytropic index $\gamma$ closer to the adiabatic value, rather than matching the slope of the thermal equilibrium curve for the CNM ($\gamma\approx 0.7$). The fact that this may be happening is supported by the negative skewness of our distribution in $\log (p/k)$, which indicates that the turbulent fluctuations are consistent with $\gamma > 1$. \item For $\log p/k$ above 4.0, we find a slope in the relationship between the logarithms of the volume fractions of the gas and $\log (p/k)$ to be consistent with a power-law slope of $-14/9$ that is expected for random penetrations of expanding supernova remnants in various stages of development. \end{enumerate} \acknowledgments This research was supported by program number HST-AR-09534.01-A which was provided by NASA through a grant from the Space Telescope Science Institute (STScI), which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. All of the C~I absorption line data that were analyzed for this paper were taken with the NASA/ESA {\it Hubble Space Telescope\/} and were downloaded from the Multimission Archive at STScI (MAST). {\it Facilities:\/} \facility{HST(STIS)}
\section{Introduction} \label{sec:intro} Finding planetary systems similar to our own is one of the main goals of exoplanet searches. It is of particular interest if such systems show planetary transits, since multiple transiting planetary systems provide crucial information for the understanding of planet formation and evolution\micitap{ford2006a}. In particular, mutual dynamical interactions between planets require additional effort to understand their origin and to justify their long-term stability. Unfortunately, such systems are difficult to find because of the low geometrical probability for transiting planets. The satellite {\it Kepler}\micitap{borucki2010a} has observed the planetary system orbiting the star KIC~11442793 almost continuously for more than 4 years. The {\it Kepler}~team has published the parameters of 3 transiting candidates around this star\micitap{batalha2013} with the identification numbers KOI 351.01, .02, and .03. A careful analysis of the light curve with the transit detection algorithm DST\micitap{cabrera2012} reveals the presence of 4 additional transiting planets, making this system the most populated among the transiting ones. These 4 planets are reported here for the first time (see the results of\micitaalt{ofir2013a,huang2013,tenenbaum2013}\footnote{While this paper was under review, Schmitt et al. submitted to AJ a paper with an independent characterization of this system.}). Considering the magnitude of the star (magnitude 13.7 in SDSS $r$) and the characteristics of the transiting candidates, we were not able to independently confirm the planets by measuring their masses with radial velocity.
However, we have performed the following steps to validate the planetary nature of the candidates: 1) medium-resolution spectra of the star were taken with the Coud{\'e}-Echelle spectrograph at the Tautenburg observatory, characterizing the host star as a solar-like dwarf; 2) the analysis of the {\it Kepler}~photometry, including the study of the motion of the PSF centroid\micitap{batalha2010b}, does not reveal any hint of the presence of a contaminating eclipsing binary; 3) the analysis of the timing of the eclipses reveals that the planetary candidates are dynamically interacting with each other; and finally 4) a stability analysis of the system with the orbital dynamics integrator {\em Mercury}\micitap{chambers1999} reveals that, for the system to be stable, all the planetary candidates must have planetary masses. Therefore, we validate in this paper the planetary nature of the 7 candidates. \section{Stellar characterization} \label{sec:star} In order to characterize the host star, five spectra were taken on June 6 and 7, 2013, with the Coud{\'e}-Echelle spectrograph attached to the 2-m telescope at the Th{\"u}ringer Landessternwarte Tautenburg. The wavelength coverage was 472-736 nm and a 2 arcsec slit provided a spectral resolving power of $32\,000$. The exposure time for each spectrum was 40 minutes. The spectra were reduced using standard ESO-MIDAS packages. The reduction steps included filtering of cosmic rays, background and straylight subtraction, flat fielding using a halogen lamp, optimum extraction of diffraction orders, and wavelength calibration using a ThAr lamp. Due to the low signal-to-noise ratio (SNR) of a single spectrum it was difficult to define the local continuum. Because no radial velocity shifts between the single spectra could be found, we repeated the reduction using the co-added raw spectra. The continuum of the resulting mean spectrum was then well enough defined for a proper normalization.
The SNR of the mean spectrum, measured from some almost line-free parts of the continuum, was about 19. We used the spectral synthesis method, which compares the observed spectrum with synthetic spectra computed on a grid of atmospheric parameters. The synthetic spectra were computed with the SynthV program\micitap{tsymbal1996}, based on a library of atmosphere models calculated with the LLmodels code\micitap{shulyak2004}. The error estimation was done from $\chi^2$ statistics taking all interdependencies between the different parameters into account\micitap{lehman2011}. The step widths of the grid were 100~K in T$_{\mathrm{eff}}$, 0.1 dex in $\log g$, 0.1 dex in [M/H], 0.5 km\,s$^{-1}$ in microturbulent velocity, and 1 km\,s$^{-1}$ in $v \sin i$, where [M/H] means scaled solar abundances. For the determination of $v \sin i$ we used the metal-line-rich wavelength region 491-567 nm. For all other parameters, the wavelength range utilized was \mbox{472-567\,nm}, which also includes H$_{\beta}$. Table~\ref{table:star} lists the results obtained from the full grid in all parameters. The large uncertainties mainly originate from the large ambiguities between the different parameters and from the low SNR of the observed spectrum. We use a compilation of empirical values of stellar parameters from\micitap{gray2005}. Comparing our results from the full grid search with the literature data, we see that we can exclude luminosity class III stars because of the values of $\log g$ and v$\sin i$. The $T_{\mathrm{eff}}$ derived from spectral analysis lies, within the uncertainties, between $5\,600$ and $6\,250$~K, which is consistent with dwarfs of spectral type G6 to F6. Based on the measured $\log g$, we cannot determine if the star is slightly evolved.
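The grid search described above amounts to a $\chi^2$ minimization over a precomputed library of synthetic spectra. A minimal sketch of the idea follows; the flux values and the three-point T$_{\mathrm{eff}}$ grid are invented placeholders, not the actual Tautenburg spectrum or the SynthV/LLmodels library:

```python
# Hypothetical observed fluxes and a common per-pixel noise level
# (the real co-added spectrum has an SNR of about 19).
observed = [0.98, 0.72, 0.95, 0.60, 0.99]
sigma = 0.05

# Toy "synthetic spectra" on a one-parameter grid in Teff; the real grid also
# spans log g, [M/H], microturbulence, and v sin i with the step widths quoted above.
grid = {
    5800: [0.97, 0.70, 0.96, 0.63, 0.98],
    5900: [0.98, 0.71, 0.95, 0.61, 0.99],
    6000: [0.99, 0.74, 0.94, 0.58, 0.99],
}

def chi2(model):
    """Chi-square of the observed spectrum against one synthetic spectrum."""
    return sum(((o - m) / sigma) ** 2 for o, m in zip(observed, model))

# Best-fitting grid point = minimum chi-square.
best_teff = min(grid, key=lambda teff: chi2(grid[teff]))
```

In the real analysis the $\chi^2$ surface over all five parameters, not a single best point, is what yields the quoted (correlated) uncertainties.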
Assuming that the star is a typical main sequence star of early G-type, we can adopt a $\log g$ of 4.4, which lies within the measurement error, obtaining a better constraint on $T_{\mathrm{eff}}$ and a slightly higher value for the metallicity (last column of Table~\ref{table:star}). Under this assumption, we obtain $T_{\mathrm{eff}}$ between $5\,910$ and $6\,340$ K, corresponding to spectral types between G1 and F6. The corresponding ranges in mass and radius are relatively small, between 1.1 and 1.3 M$_{\mathrm{sun}}$ and 1.1 and 1.3 R$_{\mathrm{sun}}$. \subsection{Reddening and distance} \label{subsec:extinction} We determined the interstellar extinction $A_\mathrm{v}$ and distance $d$ to KIC\,11442793 by applying the method described in\micitat{gandolfi2008}. This technique is based on the simultaneous fit of the observed stellar colours with theoretical magnitudes obtained from the \emph{NextGen} model spectrum\micitap{hauschildt1999a} with the same photospheric parameters as the target star. For KIC\,11442793 we used SDSS, 2MASS, and WISE photometry (see Table~\ref{table:magnitudes} and Fig.~\ref{figure:sed}). We excluded the $W_3$ and $W_4$ WISE magnitudes, as the former has a SNR of 3.5 and the latter is only an upper limit. Assuming a normal extinction ($R_\mathrm{v}=3.1$) and a black body emission at the stellar effective temperature and radius, we found that the star reddening amounts to $A_\mathrm{v}=0.15\pm0.10$\,mag and that the distance to KIC\,11442793 is $d=780\pm100$\,pc. \begin{figure} \centering \includegraphics[% width=0.9\linewidth,% height=0.5\textheight,% keepaspectratio]{SED_KOI351.eps} \caption{Dereddened spectral energy distribution of KIC\,11442793. The optical SDSS-$g$,-$r$,-$i$,-$z$ photometry is from the {\it Kepler}~Input Catalogue. Infrared $J$,$H$,$Ks$ and $W1$, $W2$, $W3$, $W4$ data are taken from the 2MASS\micitap{cutri2003} and WISE\micitap{wright2010wise} database, respectively.
The \emph{NextGen} model spectrum by\micitat{hauschildt1999a} with the same photospheric parameters as KIC\,11442793 and scaled to the stellar radius and distance is overplotted with a light-blue line.} \label{figure:sed} \end{figure} \begin{table} \caption[]{Derived atmospheric parameters for the star.} \label{table:star} \centering \begin{tabular}{lcc} & full grid & $\log g$ fixed \\ \hline\hline T$_{\mathrm{eff}}$ (K) & $5\,930 \pm 320$ & $6\,080^{+260}_{-170}$ \\ $\log g$ (cgs) & $4.0 \pm 0.5$ & $4.4$ (fixed) \\ v$_{\mathrm{mic}}$ (km\,s$^{-1}$) & $1.2 \pm 0.6$ & $1.2 \pm 0.6$ \\ $[$M/H$]$ (dex) & $-0.17 \pm 0.21$ & $-0.12 \pm 0.18$ \\ v $\sin i$ (km\,s$^{-1}$) & $4.6 \pm 2.1$ & $4.6 \pm 2.1$ \\ \hline \end{tabular} \end{table} \begin{table} \caption{{\it Kepler}, GSC2.3, USNO-A2, and 2MASS identifiers of the target star. Equatorial coordinates and optical SDSS-$g$,-$r$,-$i$,-$z$ photometry are from the {\it Kepler}~Input Catalogue. Infrared $J$,$H$,$Ks$ and $W1$,$W2$,$W3$,$W4$ data are taken from the 2MASS\micitap{cutri2003} and WISE\micitap{wright2010wise} database, respectively.} \label{table:magnitudes} \begin{center} \begin{tabular}{lll} \multicolumn{1}{l}{\emph{Main identifiers}} \\ \hline \hline \noalign{\smallskip} {\it Kepler}~IDs & KIC~11442793 - KOI~351 - Kepler-90 \\ GSC2.3~ID & N2EM001018 \\ USNO-A2~ID & 1350-10067455 \\ 2MASS~ID & 18574403+4918185 \\ \noalign{\smallskip} \hline \noalign{\medskip} \noalign{\smallskip} \multicolumn{2}{l}{\emph{Equatorial coordinates}} \\ \hline \hline \noalign{\smallskip} RA \,(J2000) $18^h\,57^m\,44^s.038$ & Dec (J2000) $+49^\mathrm{o} 18' 18''.58$ \\ \noalign{\smallskip} \hline \noalign{\medskip} \noalign{\smallskip} \multicolumn{3}{l}{\emph{Magnitudes}} \\ \hline \hline \noalign{\smallskip} \centering Filter \,\,($\lambda_{\mathrm eff}$)& Mag & Uncertainty \\ \noalign{\smallskip} \hline \noalign{\smallskip} $g$ \,\,~\,(~0.48\,$\mu m$) & 14.139 & 0.030 \\ $r$ \,\,~\,(~0.63\,$\mu m$) & 13.741 & 0.030 \\ $i$ 
\,\,~\,(~0.77\,$\mu m$) & 13.660 & 0.030 \\ $z$ \,\,~\,(~0.91\,$\mu m$) & 13.634 & 0.030 \\ $J$ \,\,~\,(~1.24\,$\mu m$) & 12.790 & 0.029 \\ $H$ \,\,\,(~1.66\,$\mu m$) & 12.531 & 0.033 \\ $Ks$ \,(~2.16\,$\mu m$) & 12.482 & 0.024 \\ $W_1$ \,(~3.35\,$\mu m$) & 12.429 & 0.024 \\ $W_2$ \,(~4.60\,$\mu m$) & 12.462 & 0.024 \\ $W_3$ \,(11.56\,$\mu m$) & 12.750 & 0.308 \\ $W_4$ \,(22.09\,$\mu m$) &$~~9.702^{a}$ & ~~~-\\ \noalign{\smallskip} \hline \end{tabular} \end{center} {$^{a}${Upper limit}} \end{table} \section{Light curve analysis} \label{sec:lightcurve} {\it Kepler}~observations of KIC~11442793 extend for $1\,340$ days with a duty cycle of 82\%. The light curve, shown in Figure~\ref{figure:rawlc}, reveals that the host star is not particularly active. It barely shows hints of some variations compatible with the evolution of stellar spots on its surface, with an amplitude of 0.1\%. We have applied a detrending algorithm to treat the stellar activity, originally optimized for the CoRoT mission\micitap{baglin2006} but adapted to the treatment of {\it Kepler}~data\micitap{cabrera2012}. Then we have applied the transit detection algorithm DST\micitap{cabrera2012} to search for the periodic signature of transiting planets. We confirm the detection of the candidates KOI~351.01, .02, and .03, previously announced\micitap{batalha2013}, and we assign them the identifications KIC~11442793~h, g, and d. We present the discovery of four additional candidates, b, c, e, and f, reported for the first time here. The ephemerides of these objects are given in Table~\ref{table:planets}. The orbital ephemerides have been calculated as follows: the transit detection algorithm DST provides preliminary values of the period, epoch, depth and duration of the transiting candidates. With this information, we first fit separately the transits of every candidate.
Then we make a weighted linear fit to the epochs of the individual transits; the slope of the fit is the period and the intercept is the epoch. The residuals between the linear fit and the actual positions of the transits (observed minus calculated, O-C) are usually referred to as transit timing variations (TTVs), which are discussed later in Section~\ref{sec:ttv}. \begin{figure} \centering \includegraphics[% width=0.9\linewidth,% height=0.5\textheight,% keepaspectratio]{rawlc2.eps} \caption{Public raw light curve of KIC~11442793. The seven sets of periodic transits are indicated with symbols of different colors: planet h with red plus-signs, planet g with red crosses, planet f with green diamonds, planet e with magenta squares, planet d with blue triangles, planet c with turquoise filled squares, and planet b with orange filled triangles. In the enlarged region the stellar variability has been subtracted to show a subset of the shallower transits.} \label{figure:rawlc} \end{figure} \section{Planetary parameter modelling} \label{sec:szilard} Several planets in this system show significant transit timing variations (TTVs), described in Section~\ref{sec:ttv}, which need to be removed before proceeding with the modeling of the planetary parameters. We use an iterative method to correct for this effect, similar to the one described by\micitat{alapini2009}, but accounting for the TTVs. We take a geometrical model of the transit based on the preliminary values of the planetary parameters obtained by the detection method. We use a genetic algorithm\micitap{geem2001} to fit the value of the epoch, fixing the other transit parameters. For every trial value of the epoch, we correct for stellar activity in a region covering ten times the transit duration with a second-order Legendre polynomial (first order for planets b and c). The polynomial is interpolated in the expected region of the transit, to preserve the transit shape.
We then fold the light curve with the obtained values of the individual epochs. This method does not converge in the case of planets b and c because of the low SNR of their transit signal. Therefore, for planets b and c we fix the period and do not fit for the epochs, but we do apply the stellar activity correction for each individual transit described above. A detailed description of the modeling of the planetary parameters applied here can be found in\micitat{csizmadia2011}. We used the publicly available short cadence {\it Kepler}~light curves. For candidates b, c, d, e and f we binned the light curves (we formed 2000 binned points in the $\pm 2D$ vicinity of the mid-transit, $D$ being the transit duration), while for candidates g and h we used the original short cadence photometric points. We used the\micitat{mandel2002} transit model. This model gives the light loss of the star due to the transit of an object as a function of their size ratio ($k$), of their mutual sky-projected distance (denoted by $\delta$), and of the limb darkening coefficients of the transited star ($ld_1 = u_1 + u_2,$ and $ld_2 = u_1 - u_2$). Following\micitat{csizmadia2013} we determined the limb darkening coefficients from the light curve instead of using theoretical predictions. This fit was first applied to planet h, which has the largest transit depth, i.e., the highest signal-to-noise ratio. Having obtained these values, we set the limb darkening coefficients of the remaining planets to the values obtained from the fit of planet h's transit light curve, but we allowed them to vary within the uncertainties of the determined values. Since we do not have any radial velocity measurements, nor occultations, nor phase-curves of any of these seven planets, we had no a priori information about eccentricities and arguments of periastron. Therefore we could not calculate the sky-projected distance of the stellar and planetary centers in the usual way (e.g.\micitaalt{gimenez2006}).
Instead, we fitted the duration of the transit, the epoch ($t_0$), the period ($P$), the impact parameter ($b$), and the planet-to-star radius ratio ($k=R_p/R_s$). Then the sky-projected mutual distance of the star and the planet was calculated with the formula \begin{equation} \delta \approxeq \sqrt{ b^2 + \left[(1+k)^2 - b^2\right]\left(\frac{2(t-t_0)}{D}\right)^2} \end{equation} where $t$ is the time and $D$ is the transit duration (so that $\delta=b$ at mid-transit and $\delta=1+k$ at the first and last contacts). We checked the validity of this latter formula via numerical experiments and we found that in our cases it yields a very good agreement with the theoretical value in the vicinities of transits. No mutual transit event was modeled. For the optimization, a genetic algorithm process described in\micitat{csizmadia2011} was used, and the results were refined by a Simulated Annealing algorithm which was also used for the error estimation. The reported uncertainties in Table~\ref{table:planets} are 1$\sigma$ uncertainties. We report the modeled values of $k$ and $b$ in Table~\ref{table:planets} with their respective uncertainties for each of the seven candidates in the system. Once $k$, $b$, $D$, and $P$ are known from the modeling procedure, the value of the scaled semi-major axis ($a/R_s$) for circular orbits can then be calculated as \begin{equation} \frac{a}{R_s} = \frac{1}{\pi}\frac{P}{D}\sqrt{(1+k)^2-b^2} \end{equation} We then calculated the scaled semi-major axes for every planet in the system assuming circular orbits (see Table~\ref{table:planets}). Re-writing Kepler's third law, we obtained for the stellar density parameter (neglecting the mass of the planet): \begin{equation} \frac{M^{1/3}}{R_s} = \left( \frac{3\pi}{G P^2} \right)^{1/3} \frac{a}{R_s} \end{equation} or, equivalently, \begin{equation} \frac{M^{1/3}}{R_s} = \left( \frac{3P}{\pi^2 G D^3} \right)^{1/3} \left[ (1+k)^2-b^2 \right]^{1/2} \left( \frac{1-e^2}{1+e^2-2 e \sin \omega} \right)^{3/2} \end{equation} We also report the density parameter derived from every candidate in Table~\ref{table:planets}.
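As a plausibility check, the two circular-orbit relations above can be evaluated numerically for planet h. The sketch below is illustrative only: the physical constants are our assumptions, the density parameter is interpreted as the mean stellar density from Kepler's third law normalized to the Sun, and the tabulated values come from a full fit, so simple plug-in values agree only within the quoted uncertainties:

```python
import math

G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
DAY = 86400.0                       # seconds per day
M_SUN, R_SUN = 1.989e30, 6.957e8    # solar mass (kg) and radius (m)

def a_over_rs(P_days, D_hours, k, b):
    """a/Rs = (1/pi)(P/D) sqrt((1+k)^2 - b^2) for a circular orbit."""
    return (P_days * 24.0 / D_hours) / math.pi * math.sqrt((1.0 + k) ** 2 - b ** 2)

def density_param(P_days, a_rs):
    """Dimensionless (M/Msun)^(1/3)/(Rs/Rsun): the mean stellar density
    rho = (3 pi / (G P^2)) (a/Rs)^3, normalized to the solar mean density."""
    P = P_days * DAY
    rho = 3.0 * math.pi / (G * P * P) * a_rs ** 3
    rho_sun = M_SUN / (4.0 / 3.0 * math.pi * R_SUN ** 3)
    return (rho / rho_sun) ** (1.0 / 3.0)

# Planet h: P = 331.60059 d, D = 14.737 h, k = 0.0866, b = 0.36
# (fitted values in the table: a/Rs = 180.7 +/- 4.7, M^(1/3)/Rs = 0.90 +/- 0.13)
ars = a_over_rs(331.60059, 14.737, 0.0866, 0.36)  # ~176, within 1 sigma of the fit
dp = density_param(331.60059, ars)                # ~0.87, within 1 sigma of the fit
```

The same exercise for the other six candidates is what underlies the consistency argument of Section~\ref{subsec:geometry}.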
Figure~\ref{figure:fit} shows in graphical form the modelling of the photometric light curves and the model residuals for each planet. \begin{table}\footnotesize \caption[]{Planetary parameters. Values calculated for $R_s = 1.2\pm0.1 R_\odot$; $R_\odot = 696\,342$km and $R_{E} = 6\,378$ km.} \label{table:planets} \centering \renewcommand{\arraystretch}{0.5} \begin{tabular}{*{2}{lc}} \hline\hline KIC~11442793~h (KOI~351.01) & & KIC~11442793~d (KOI~351.03) & \\ period (days) & 331.600\,59 $\pm$ 0.000\,37 & period (days) & 59.736\,67 $\pm$ 0.000\,38 \\ epoch (HJD - 2454833) & 140.496\,31 $\pm$ 0.000\,82 & epoch (HJD - 2454833) & 158.965\,6 $\pm$ 0.004\,2 \\ duration (h) & 14.737 $\pm$ 0.046 & duration (h) & 8.40 $\pm$ 0.19 \\ $a/R_s$ & 180.7 $\pm$ 4.7 & $a/R_s$ & 56.1 $\pm$ 4.8 \\ $a$ (AU) & 1.01 $\pm$ 0.11 & $a$ (AU) & 0.32 $\pm$ 0.05 \\ $R_p/R_s$ & 0.0866 $\pm$ 0.0007 & $R_p/R_s$ & 0.0219 $\pm$ 0.0005 \\ $R_p$ ($R_E$) & 11.3 $\pm$ 1.0 & $R_p$ ($R_E$) & 2.87 $\pm$ 0.30 \\ $b$ & 0.36 $\pm$ 0.07 & $b$ & 0.28 $\pm$ 0.25 \\ $i$ (deg) & 89.6 $\pm$ 1.3 & $i$ (deg) & 89.71 $\pm$ 0.29 \\ $M^{1/3}/R_s$ & 0.90 $\pm$ 0.13 & $M^{1/3}/R_s$ & 0.88 $\pm$ 0.15 \\ $ld_1$ & 0.348 $\pm$ 0.056 & $ld_1$ & 0.371 $\pm$ 0.087 \\ $ld_2$ & 1.03 $\pm$ 0.19 & $ld_2$ & 1.04 $\pm$ 0.23 \\ \hline KIC~11442793~g (KOI~351.02) & & KIC~11442793~c & \\ period (days) & 210.606\,97 $\pm$ 0.000\,43 & period (days) & 8.719\,375 $\pm$ 0.000\,027 \\ epoch (HJD - 2454833) & 147.036\,4 $\pm$ 0.001\,4 & epoch (HJD - 2454833) & 139.568\,7 $\pm$ 0.002\,3 \\ duration (h) & 12.593 $\pm$ 0.045 & duration (h) & 4.41 $\pm$ 0.18 \\ $a/R_s$ & 127.3 $\pm$ 4.1 & $a/R_s$ & 16.0 $\pm$ 0.8 \\ $a$ (AU) & 0.71 $\pm$ 0.08 & $a$ (AU) & 0.089 $\pm$ 0.012 \\ $R_p/R_s$ & 0.0615 $\pm$ 0.0011 & $R_p/R_s$ & 0.0091 $\pm$ 0.0003 \\ $R_p$ ($R_E$) & 8.1 $\pm$ 0.8 & $R_p$ ($R_E$) & 1.19 $\pm$ 0.14 \\ $b$ & 0.45 $\pm$ 0.10 & $b$ & 0.09 $\pm$ 0.20 \\ $i$ (deg) & 89.80 $\pm$ 0.06 & $i$ (deg) & 89.68 $\pm$ 0.74 \\ $M^{1/3}/R_s$ & 0.84 $\pm$ 
0.14 & $M^{1/3}/R_s$ & 0.90 $\pm$ 0.16 \\ $ld_1$ & 0.34 $\pm$ 0.10 & $ld_1$ & 0.40 $\pm$ 0.20 \\ $ld_2$ & 0.98 $\pm$ 0.10 & $ld_2$ & 1.21 $\pm$ 0.26 \\ \hline KIC~11442793~f & & KIC~11442793~b & \\ period (days) & 124.914\,4 $\pm$ 0.001\,9 & period (days) & 7.008\,151 $\pm$ 0.000\,019 \\ epoch (HJD - 2454833) & 254.704 $\pm$ 0.014 & epoch (HJD - 2454833) & 137.690\,6 $\pm$ 0.001\,7 \\ duration (h) & 10.94 $\pm$ 0.25 & duration (h) & 3.99 $\pm$ 0.15 \\ $a/R_s$ & 86.4 $\pm$ 9.7 & $a/R_s$ & 13.2 $\pm$ 1.8 \\ $a$ (AU) & 0.48 $\pm$ 0.09 & $a$ (AU) & 0.074 $\pm$ 0.016 \\ $R_p/R_s$ & 0.0220 $\pm$ 0.0022 & $R_p/R_s$ & 0.0100 $\pm$ 0.0005 \\ $R_p$ ($R_E$) & 2.88 $\pm$ 0.52 & $R_p$ ($R_E$) & 1.31 $\pm$ 0.17 \\ $b$ & 0.35 $\pm$ 0.40 & $b$ & 0.13 $\pm$ 0.32 \\ $i$ (deg) & 89.77 $\pm$ 0.31 & $i$ (deg) & 89.4 $\pm$ 1.5 \\ $M^{1/3}/R_s$ & 0.84 $\pm$ 0.20 & $M^{1/3}/R_s$ & 0.85 $\pm$ 0.21 \\ $ld_1$ & 0.360 $\pm$ 0.068 & $ld_1$ & 0.378 $\pm$ 0.060 \\ $ld_2$ & 1.01 $\pm$ 0.18 & $ld_2$ & 1.11 $\pm$ 0.20 \\ \hline KIC~11442793~e & & & \\ period (days) & 91.939\,13 $\pm$ 0.000\,73 & & \\ epoch (HJD - 2454833) & 134.312\,7 $\pm$ 0.006\,3 & & \\ duration (h) & 9.71 $\pm$ 0.19 & & \\ $a/R_s$ & 74.7 $\pm$ 4.3 & & \\ $a$ (AU) & 0.42 $\pm$ 0.06 & & \\ $R_p/R_s$ & 0.0203 $\pm$ 0.0005 & & \\ $R_p$ ($R_E$) & 2.66 $\pm$ 0.29 & & \\ $b$ & 0.27 $\pm$ 0.22 & & \\ $i$ (deg) & 89.79 $\pm$ 0.19 & & \\ $M^{1/3}/R_s$ & 0.87 $\pm$ 0.15 & & \\ $ld_1$ & 0.360 $\pm$ 0.049 & & \\ $ld_2$ & 1.05 $\pm$ 0.17 & & \\ \hline \end{tabular} \end{table} \begin{figure} \centering \begin{minipage}[t]{0.48\textwidth} \begin{center} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793h_fit} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793f_fit} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793d_fit} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,%
keepaspectratio]{kic11442793b_fit} \end{center} \end{minipage} \begin{minipage}[t]{0.48\textwidth} \begin{center} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793g_fit} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793e_fit} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793c_fit} \end{center} \end{minipage} \caption{Filtered light curve of KIC 11442793 folded at the period of the different planets. For planets b and c the light curve has been binned, to help the eye. The orange solid line shows the light curve fit (Table~\ref{table:planets}). The lower panels show the residuals of the light curve fit. } \label{figure:fit} \end{figure} \subsection{Analysis of the geometry of the transits} \label{subsec:geometry} One argument supporting the hypothesis that all these planet candidates orbit the same star comes from the modeling of the planetary parameters. The inclinations and stellar densities ($M^{1/3}/R_s$) shown in Table~\ref{table:planets} were calculated independently for each planet. They are all compatible with each other, and the density is compatible with the value obtained independently for the stellar parameters in Section~\ref{sec:star}. We can also provide another geometrical argument supporting the former hypothesis using the measured durations and periods of the transiting planets. These are obtained from a pure geometrical fit to the transits, independently of the planetary modelling techniques. This argument has previously been used in the literature to support the hypothesis that multiple candidate systems actually orbit the same star\micitap{chaplin2013}. Figure~\ref{figure:transitdurations} shows how the transit durations are distributed as a function of planetary period.
If all planets orbit the same star in circular, coplanar orbits, the transit duration $D$ should relate to the orbital period $P$ through Kepler's third law: \begin{equation} D = \frac{\alpha}{\pi} P^{1/3} \sqrt{ 1 - \left( \frac{\cos i}{\alpha} \right)^2 P^{4/3} }, \end{equation} where $\alpha = \left( 3 \pi/(G \rho_s) \right)^{1/3}$, and $\rho_s$ is the density of the star. If $D$ and $P$ are in days, the best fit to the data gives a value of $\alpha = 0.23$ and $i = 90^\circ$, compatible with the values obtained from the stellar and planetary modelling. Note that the fit is not a full physical solution, because the planetary orbits need not be exactly coplanar. However, they are compatible with all planets orbiting the same star in nearly edge-on aligned orbits, which supports our hypothesis that all planets orbit the same star. \begin{figure} \centering \includegraphics[% width=0.9\linewidth,% height=0.5\textheight,% keepaspectratio]{kic11442793_geometry} \caption{Transit duration of each planet as a function of their orbital period. The observed values are compatible with the seven planets orbiting a star whose density is that given by the stellar and planetary parameter modelling on edge-on aligned orbits. The range allowed by the modelling of the stellar parameters is indicated with the continuous blue lines. } \label{figure:transitdurations} \end{figure} \section{Transit timing variations} \label{sec:ttv} The analysis of the transit timing variations (TTVs) has proved to be a versatile tool to confirm the planetary nature of transiting candidates\micitap{ford2011}. Typically, TTVs have amplitudes of several minutes (with some exceptional cases like KOI~142,\micitaalt{nesvorny2013a}, with an amplitude of 12h) and periods typically one order of magnitude larger than the orbital period of the planet involved\micitap{mazeh2013}. Figure~\ref{figure:transitsplanetg} shows the individual transits and Fig.~\ref{figure:ttv} the O-C diagram for candidate g.
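The O-C values are defined against the weighted linear ephemeris fitted to the measured mid-transit times (Section~\ref{sec:lightcurve}). A minimal sketch follows; the transit times below are invented placeholders, not the measured times of any candidate:

```python
# Weighted linear fit t_n ~ t0 + n * P: the slope is the period, the intercept
# the epoch; the residuals t_n - (t0 + n * P) are the O-C values.
def linear_ephemeris(epochs, times, sigmas):
    w = [1.0 / s ** 2 for s in sigmas]
    S = sum(w)
    Sx = sum(wi * n for wi, n in zip(w, epochs))
    Sy = sum(wi * t for wi, t in zip(w, times))
    Sxx = sum(wi * n * n for wi, n in zip(w, epochs))
    Sxy = sum(wi * n * t for wi, n, t in zip(w, epochs, times))
    P = (S * Sxy - Sx * Sy) / (S * Sxx - Sx * Sx)  # slope: period
    t0 = (Sy - P * Sx) / S                         # intercept: epoch
    return t0, P

# Invented mid-transit times (days) with a small timing perturbation at epoch 2.
epochs = [0, 1, 2, 3, 4]
times = [100.000, 110.001, 120.030, 130.001, 140.000]
sigmas = [0.005] * 5
t0, P = linear_ephemeris(epochs, times, sigmas)
oc = [t - (t0 + n * P) for n, t in zip(epochs, times)]  # perturbed epoch stands out
```

A single perturbed transit inflates the O-C value at that epoch while leaving the fitted period essentially unchanged, which is the behavior plotted in Fig.~\ref{figure:ttv}.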
The transit corresponding to epoch 7 (epoch 1 being the value provided in Table~\ref{table:planets}) has a displacement of 25.7 hours with respect to its expected position. This abrupt change is due to a change in the osculating orbital elements produced by the gravitational interaction with other objects in the system, possibly candidate h (see Section~\ref{sec:dynamics}). Most surveys of TTVs aim at discovering periodic modulations of the timing perturbations (see a derivation of the relevant expression in\micitaalt{lithwick2012a} and the series of papers \micitaalt{ford2011,ford2012a,steffen2012a,fabrycky2012a,ford2012b,steffen2012b,steffen2013,mazeh2013}). However, non-periodic, sudden changes of the orbital elements, corresponding to irregular behavior such as the one displayed by planet g, have been theoretically described (for example, though in a different context,\micitaalt{holman2005}), but we believe that we report an observational example for the first time. In addition to the change in the osculating elements, it is interesting to discuss separately the other transit events recorded for candidate g. The depth and the duration of transit events 1, 2, and 3 change significantly. One can speculate that the perturbations seen around these transits are morphologically equivalent to those produced by a moon around the planet\micitap{sartoretti1999,kipping2013a}. This hypothesis is further discussed in Section~\ref{sec:moon}. We do not have enough evidence to prove that these perturbations are produced by a moon, and until we have constraints on the planetary masses we cannot assess the stability of moons around candidate g. We note, just for completeness, that a moon could not be responsible in any case for the abrupt change in the osculating orbital elements displayed in transit event 7. The amplitudes of the perturbations produced by moons are typically only a few seconds\micitap{cabrera2008,kipping2009a,kipping2009b}.
The available data set for KIC~11442793 does not allow us to make an unambiguous determination of the planetary masses from the analysis of the TTVs. Candidates b and c are too small and too close to the detection limit to measure any reliable TTV amplitude (see Figure~\ref{figure:ttv}), which is not unusual in the case of low-mass planets in compact systems (see the case of CoRoT-7b\micitaalt{leger2009}). The TTVs of candidates d and e are compatible with zero within the limits of our current modelling (see Figure~\ref{figure:ttv}). Only 5 of the 9 expected transits of candidate f are fully observed, due to an unfortunate coincidence of observing interruptions with the expected transit positions. However, there is a significant signal in the available O-C diagram, which means that candidate f is interacting dynamically with other objects in the system. Candidate g shows 6 transits in the available data set (expected 7) and candidate h shows 3 transits (expected 5), less than expected due to the interruptions of the photometric record (duty cycle is 82\%). However, candidates g and h both show significant TTVs and also transit duration variations; consequently, we deduce that they are interacting dynamically. \begin{figure} \centering \includegraphics[% width=0.9\linewidth,% height=0.5\textheight,% keepaspectratio]{transitsplanetg.eps} \caption{Individual observed transit events of planet g and the expected position of those transits assuming a constant period, marked with a line. Note the irregularities in the transit depth and duration at epochs 1 and 2 and the displacement from the expected position of epoch 7.
The additional transit-like event marked with an arrow close to epoch 3 is discussed in the text.} \label{figure:transitsplanetg} \end{figure} \begin{figure} \centering \begin{minipage}[t]{0.48\textwidth} \begin{center} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793h_ttv} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793f_ttv} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793d_ttv} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793b_ttv} \end{center} \end{minipage} \begin{minipage}[t]{0.48\textwidth} \begin{center} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793g_ttv} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793e_ttv} \includegraphics[% width=0.8\linewidth,% height=0.4\textheight,% keepaspectratio]{kic11442793c_ttv} \end{center} \end{minipage} \caption{Transit timing variations of the different planets. Observed mid-times of planetary transits (O) minus calculated linear ephemeris (C) are plotted with 1$\sigma$ uncertainties.} \label{figure:ttv} \end{figure} \section{Dynamical study} \label{sec:dynamics} \subsection{Analysis with a numerical integrator} We performed a stability analysis of the system with the orbital dynamics integrator {\em Mercury}\micitap{chambers1999}. The system is only stable if candidates g and h have masses of at most a few Jupiter masses (typically, less than 5). Therefore, we conclude that g and h are planets: they interact gravitationally, and their long-term dynamical stability is only guaranteed if these bodies have planetary masses.
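The long-term integrations behind this conclusion were done with {\em Mercury}. As a minimal illustration of the kind of bookkeeping such stability runs rely on (a sketch only, not our actual setup: a single planet on a circular 1 AU orbit around a solar-mass star), the following integrates a two-body orbit with a leapfrog scheme and checks that the energy drift stays bounded.

```python
import numpy as np

# Toy two-body problem: star of 1 solar mass, planet on a circular 1 AU orbit.
# Units: AU, years, solar masses  =>  G*M = 4*pi^2.
GM = 4.0 * np.pi**2
dt = 1.0e-3                                  # time step in years
r = np.array([1.0, 0.0])                     # position in AU
v = np.array([0.0, 2.0 * np.pi])             # AU/yr, circular speed at 1 AU

def acc(r):
    return -GM * r / np.linalg.norm(r) ** 3

def energy(r, v):
    return 0.5 * v @ v - GM / np.linalg.norm(r)

E0 = energy(r, v)
a_now = acc(r)
for _ in range(int(10.0 / dt)):              # integrate for 10 orbits
    v = v + 0.5 * dt * a_now                 # kick
    r = r + dt * v                           # drift
    a_now = acc(r)
    v = v + 0.5 * dt * a_now                 # kick
drift = abs(energy(r, v) / E0 - 1.0)
print(drift)  # small: the symplectic scheme keeps the energy error bounded
```

Symplectic schemes of this family (of which {\em Mercury}'s mixed-variable integrator is a far more sophisticated example) are preferred for such runs precisely because the energy error oscillates instead of growing secularly.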
The {\em Mercury} numerical analysis of the planetary system reveals that objects d, e, and f are in stable orbits only if those orbits are nearly circular (eccentricities typically below 3\% for mass values of 10 Earth masses, representative of 2.5 Earth radii super-Earths) and the objects have planetary masses (less than the mass of Jupiter). Therefore, we conclude that these three must also be planets. In fact, the requirement of circular orbits implies that, for the system to be stable, the mean motion resonances have to play a role in guaranteeing the survival of the system. We did not see any sign that candidates b and c interact dynamically with the other planets in the system because of the low SNR of the transit light curves. The {\em Mercury} numerical analysis reveals that their orbits are in principle only stable if the objects have planetary masses. \subsection{A first dynamical study} We estimated the masses of the seven planets considering their sizes and assuming representative mean densities for each planetary class (gas giant, ice giant, large and small super-Earth) as follows: $m_h = 0.8 M_{\textrm{Jupiter}}$, $m_g = 0.7 M_{\textrm{Neptune}}$, $m_f \sim m_e \sim m_d \sim 10 M_{\textrm{Earth}}$, and $m_c \sim m_b \sim 3 M_{\textrm{Earth}}$. Given the periods and the estimated semi-major axes, we can compute the separations of the planets in terms of mutual Hill radii. Using the formula of\micitap{chambers1996}, with masses in units of the stellar mass: \begin{equation} H= \left( \frac{m_1+m_2}{3} \right)^{\frac{1}{3}} \frac{a_1+a_2}{2} \end{equation} we get the following separations of neighboring planets in Hill radii: \begin{equation} g-h: 5, g-f: 11, f-e: 5, e-d: 10, d-c: 47, \;\mathrm{and}\; c-b: 10. \end{equation} This indicates the stability of the different subsystems, provided they move on almost circular orbits. In particular, the inner planets b, c, d, e, and f are relatively safe in their orbits, as their mutual separations in Hill radii show.
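As a numerical cross-check of the quoted g-h separation (a sketch, not our actual pipeline): masses are expressed in units of the stellar mass, taken here as solar, and the semi-major axes 0.71 and 1.01 AU for planets g and h are the values quoted in the text. The same sketch also verifies that the Laplace-type combination $1/P_d - 3/P_e + 2/P_f$ discussed below vanishes identically for an ideal 2:3:4 period chain.

```python
import numpy as np

# Masses in units of the stellar mass (taken as solar); densities as in the text
M_JUP, M_NEP = 9.55e-4, 5.15e-5              # Jupiter and Neptune, solar units
m_g, m_h = 0.7 * M_NEP, 0.8 * M_JUP
a_g, a_h = 0.71, 1.01                        # AU, semi-major axes from the text

# Mutual Hill radius (chambers1996)
H = ((m_g + m_h) / 3.0) ** (1.0 / 3.0) * (a_g + a_h) / 2.0
sep = (a_h - a_g) / H
print(round(sep))  # 5, matching the quoted g-h separation

# The Laplace combination vanishes identically for an ideal 2:3:4 chain
P_d = 100.0                                  # arbitrary reference period
P_e, P_f = 1.5 * P_d, 2.0 * P_d
assert abs(1.0 / P_d - 3.0 / P_e + 2.0 / P_f) < 1e-15
```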
It is interesting to note that the innermost two planets are in a 4:5 mean motion resonance (MMR); the two massive outer ones are not far from a 5:8 MMR. It is also worth mentioning that the three planets d, e, and f are close to the interesting Laplace resonance, which is known to occur for the motion of the three Galilean moons of Jupiter (Io, Europa and Ganymede) but also for three moons of Uranus (Miranda, Ariel and Umbriel, e.g.\micitaalt{ferrazmello1979}): \begin{equation} \frac{1}{P_d} - \frac{3}{P_e} + \frac{2}{P_f} \sim 0 \end{equation} Because the inner system consisting of super-Earth planets is quite stable, we concentrate on the dynamics of planets h and g. The stability of this extrasolar planetary system seems to depend on the stability of the orbits of these two outer gas giants, which may even have eccentric orbits given their relative distance to the star. We therefore searched for the borders of stable motion of the two outer gas giants using the results of long-term integrations up to $10^7$ years.\footnote{As integration method we used a high-precision Lie-series integrator with automatic step size (e.g.\micitaalt{hanslmeier1984}).} It turned out that inside the domain of motion for $e_h < 0.095$ and $e_g < 0.025$ the orbits of the two outer planets are regular, with only slight periodic changes in the eccentricities (see Fig.~\ref{figure:chaos}, lower right graph). The closeness to the 5:8 MMR does not destroy their stability; an additional resonance appears for the motions of the perihelia of g and h. This secular resonance is depicted in Fig.~\ref{figure:omega}, where the 1:1 resonance of the motion of $\omega_g$ and $\omega_h$ with a period of about $1.7 \cdot 10^4$ years is visible. \begin{figure} \centering \includegraphics[width=6cm,angle=270]{perihel-300} \includegraphics[width=6cm,angle=270]{perihel-10000} \caption{Perihelion motion of the two outer planets h and g for initial conditions in the stable domain (see text).
Out of the whole integration time of $10^7$ years we show the first $3 \cdot 10^5$ years (upper graph) and the last $3 \cdot 10^5$ years (lower graph). The strong coupling in a 1:1 secular resonance of the perihelion motion, with a period of around $1.7 \cdot 10^4$ years, is clearly visible.} \label{figure:omega} \end{figure} \begin{figure} \centering \includegraphics[width=4cm,angle=270]{e16-0-1} \includegraphics[width=4cm,angle=270]{e16-0-2} \includegraphics[width=4cm,angle=270]{e16-1} \includegraphics[width=4cm,angle=270]{e16-2} \caption{Orbits close to the stability region: time evolution of the eccentricities of the orbits of planets h (green) and g (red). A seemingly stable orbit (upper left) turns out to be unstable after about $7.69 \cdot 10^5$ years (upper right). Another nearby orbit is unstable after $5 \cdot 10^6$ years (lower left); a further nearby, but stable, orbit is shown (lower right). For details, see text.} \label{figure:chaos} \end{figure} Close to the edge of the stable region there is an intermediate region where stable and unstable orbits lie very close to each other (see Fig.~\ref{figure:chaos}). In this domain we find so-called sticky orbits, a well-known phenomenon of dynamical systems (e.g.\micitaalt{dvorak1998}): an orbit is `stuck' to an invariant torus in phase space and then escapes through a hole in the last KAM torus.\footnote{KAM stands for Kolmogorov -- Arnold -- Moser.} We show in the respective figure three such examples, where a small shift in the eccentricity of planet h ($\Delta e = 0.005$) causes a completely different dynamical behavior of the orbit. We also need to explain the large TTV of planet g: the answer is visible in Fig.~\ref{figure:ttvdvorak}, which shows the relatively large variations of the semi-major axis of this planet even on a time scale of years.
These variations can lead to a change in the period of up to a day from one transit to the next, comparable to the changes observed in the {\it Kepler}~data. \begin{figure} \centering \includegraphics[width=4cm,angle=270]{ttv1.eps} \includegraphics[width=4cm,angle=270]{ttv2.eps} \caption{Variation of the semi-major axis of planet g caused by the presence of planet h during 10 years (left graph). Variation of the semi-major axis of the outer gas giant caused by the inner gas giant g (red) and vice versa on planet g (green) during $10^3$ years (right graph). Note that the lower curve is normalized with respect to the semi-major axis $a=0.71$ AU of planet g.} \label{figure:ttvdvorak} \end{figure} But the system is even more complex: because planet g is in a 5:3 MMR with planet f, and the latter is in the previously mentioned Laplace resonance (with planets e and d), the stability limit for the eccentricities of all planets is very small. Integrating the `complete' system,\footnote{One can ignore the two innermost super-Earth planets, so we integrated the star plus the five outer planets.} we find it is stable only for eccentricities well below the limit quoted above for planets h and g: the absolute limit for a stable system is $e < 0.001$ for all five outer planets! We conclude from this preliminary dynamical study of the seven-planet system that, with the parameters determined here, it is quite close to instability. Consequently, parameters such as the masses and semi-major axes will need some revision after a deeper dynamical study, which is beyond the scope of this paper. Even in our Solar System, where the orbital parameters are well determined, the issue of long-term stability is debated (e.g.\micitaalt{laskar1994,laskar2008}) and the influence of the many different resonances is complex. We are currently working on that dynamical study (Dvorak et al., in preparation).
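The order of magnitude of these period changes is easy to check with Kepler's third law (a sketch assuming a solar-mass star; only $a = 0.71$ AU is taken from the text):

```python
# Planet g: a = 0.71 AU (from the text); a solar-mass star is assumed here.
a = 0.71
P = 365.25 * a ** 1.5             # days, from Kepler's third law
dP = 1.0                          # day-level period change between transits
# P^2 ~ a^3  =>  dP/P = (3/2) * da/a
frac_da = (2.0 / 3.0) * (dP / P)
print(P, frac_da)  # roughly 218 days and 0.003
```

A fractional change of only a few times $10^{-3}$ in the semi-major axis is thus enough to shift the period by about a day, consistent with the variations seen in the integrations.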
\section{KIC~11442793 in the context of other multiplanet systems} \label{sec:discussion} Models of planet formation include theories of planet-planet scattering followed by tidal circularization\micitap{rasio1996,lin1997,chatterjee2008,beauge2012a}. Another possible mechanism builds planets at relatively large distances from the star, after which they migrate inwards through the disk\micitap{goldreich1980,lin1996,ward1997,murray1998}. The first mechanism is unlikely to form compact multiple systems such as the ones observed by {\it Kepler}\micitap{batalha2013}, characterized by being compact and by having orbits with low relative inclinations\micitap{fang2012,tremaine2012}. Different mechanisms have been proposed to explain the origin of the latter systems. One promising possibility is in-situ formation (see for example\micitaalt{chiang2013,chatterjee2013}), which also accounts for the observed feature that many of those systems have planets orbiting close to, but not exactly at, mean motion resonances\micitap{lithwick2012b,petrovich2013}. We show in Fig.~\ref{figure:systemcomparison} a schematic view of the periods and relative sizes of 9 multiple transiting planetary systems discovered by {\it Kepler}~with 5 or more transiting planets, together with the planetary system reported in this paper. There are also multiple systems discovered by radial velocity hosting 6 or more planets, like GJ~667C\micitap{angladaescude2013b}, HD~40307\micitap{tuomi2013}, or HD~10180\micitap{lovis2011}. However, their orbital properties, and even their existence, are not as secure as those of transiting candidates: consider, for example, the case of the system GJ~581\micitap{hatzes2013b}, or the discussion in the literature on whether HD~10180 is orbited by six\micitap{feroz2011}, seven\micitap{lovis2011}, or even nine\micitap{tuomi2012} planets. Therefore, we limit ourselves in Fig.~\ref{figure:systemcomparison} to the discussion of multiple transiting systems.
Among the systems shown, KIC~11442793 presented here is the only one showing a clear hierarchy, like our Solar System. Additionally, only KIC~11442793 and KOI~435 include a giant planet larger than 10 Earth radii. Such systems are typically more difficult to form because giant planets tend to excite the eccentricities of less massive planets during the migration process, compromising the long-term stability of the system (see, for example,\micitaalt{raymond2008}). Note that there are two additional known systems hosting both super-Earths and gas giants, but these two systems orbit M dwarfs, and only the second example is a compact system. GJ~676A\micitap{angladaescude2012a} hosts up to 4 planets, including one super-Earth in a 3.6-day orbit and one 5-Jupiter-mass planet in a 1050-day orbit. GJ~876\micitap{rivera2010} is also an M dwarf, hosting one super-Earth of 6 Earth masses with a 1.9-day orbital period, a 0.7-Jupiter-mass planet at 20 days, a 2.3-Jupiter-mass planet at 61 days, and a 14-Earth-mass planet in a 124-day orbit. KIC~11442793, in contrast, is a late F/early G solar-like star hosting a more complex system in which dynamical interactions play an important role in the long-term stability. \subsection{About the possible existence of moons in the planetary system} \label{sec:moon} We have discussed in previous sections the possibility that KIC~11442793g hosts a moon. Figure~\ref{figure:transitsplanetg} shows that the transits at epochs 1, 2, and 3 display features morphologically equivalent to those of an exomoon orbiting the planet\micitap{sartoretti1999,szabo2006,kipping2011b}.
However, considering the distance between the transit at epoch 3 and the moon-like event marked with an arrow in Figure~\ref{figure:transitsplanetg}, the estimated projected distance between the planet and the exomoon candidate would place the latter close to the Hill radius of the planet, too far away to guarantee the long-term stability of the satellite, which is usually limited to a distance of one third\micitap{barnes2002} to one half\micitap{domingos2006} of the planetary Hill sphere. With the current data set, we cannot exclude that the event marked with an arrow in Figure~\ref{figure:transitsplanetg} is caused by instrumental residuals. However, the distorted shapes of transits 1 and 2 cannot be explained simply by the impact of stellar activity, and their origin remains unclear. Space surveys have regularly been used to rule out the presence of moons around extrasolar planets\micitap{pont2007,deeg2010}. So far, the most extensive search for exomoons\micitap{kipping2012c} has taken advantage of the simultaneous transit timing and transit duration changes produced by hypothetical satellites\micitap{kipping2009a,kipping2009b}. However, until now only negative results have been reported\micitap{kipping2013a,kipping2013b}. A possible reason for this lack of success is that searches have been limited to isolated, typically non-giant planets. If such systems are formed by planet-planet scattering, however, they are unlikely to retain their moons during the formation process\micitap{gong2013}. In turn, migration tends to remove moons from planetary systems\micitap{namouni2010}. Therefore, compact systems formed in situ could be more prone to host exomoons on long timescales. \begin{figure} \centering \includegraphics[% width=0.9\linewidth,% height=0.5\textheight,% keepaspectratio]{systemcomparison.eps} \caption{Comparison of different multiple systems.
Kepler-11\micitap{lissauer2011a}, KOI-435\micitap{ofir2013a}, Kepler-20\micitap{gautier2012,fressin2012a}, Kepler-32\micitap{fabrycky2012a}, Kepler-33\micitap{lissauer2012a}, Kepler-55\micitap{steffen2013}, Kepler-62\micitap{borucki2013}, KOI-500\micitap{xie2013,wu2013}, KOI-1589\micitap{xie2013,wu2013}. Color codes separate Earth and super-Earth planets (up to 4 Earth radii, shown in green), Neptune-sized planets (between 4 and 8 Earth radii, shown in blue), and gas giants (larger than 8 Earth radii, shown in red).} \label{figure:systemcomparison} \end{figure} \section{Summary} \label{sec:summary} We report the discovery of a planetary system with seven transiting planets with orbital periods in the range from 7 to 330 days (0.074 to 1.01 AU). The system is hierarchical: the two innermost planets have sizes close to that of the Earth, and their period ratio is within 0.5\% of the 4:5 mean motion resonance. The three following planets are super-Earths with sizes between 2 and 3 Earth radii whose periods are close to a 2:3:4 chain. From the observational data set we cannot determine their masses or the values of their mean longitudes, but the ratios of their mean motions are close to a Laplace resonance. The outermost planets are two gas giants at distances of 0.7 and 1.0 AU. There are other systems of super-Earths, discovered either by radial velocity or by transit, which show some similarities, for example GJ~876\micitap{rivera2010} or KOI~152\micitap{wang2012b}, but these systems contain only super-Earths, while KIC~11442793 is a hierarchical system. Uniquely among the multiple systems found by {\it Kepler}~or radial velocity, KIC~11442793 contains a gas giant planet similar to Jupiter orbiting at 1 AU.
Systems with super-Earths close to a Laplace resonance are also believed to be frequent\micitap{chiang2013}, but this particular system poses new challenges due to the presence of the gas giants g and h, which seem to have the most intense gravitational interaction measured among extrasolar planets so far (25.7 h of change in the ephemeris). If {\it Kepler}~cannot continue the follow-up of this system\micitap{cowen2013}, the follow-up of its Earth-sized and super-Earth planets will be challenging in the near future, as they are beyond the reach of CHEOPS\micitap{broeg2013} or TESS\micitap{ricker2010}. Only PLATO\micitap{rauer2011b} will be able to study their evolution in detail. However, the gas giants g and h produce 0.5\% and 0.8\% deep transits, which should be observable from the ground, making this system an attractive target for future follow-up studies. \acknowledgments We are grateful to {\'E}.~B{\'a}lint, Ph.~von~Paris and M.~Godolt for useful discussions concerning this paper. This paper includes data collected by the {\it Kepler}~mission. Funding for the {\it Kepler}~mission is provided by the NASA Science Mission directorate. Some of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. \bibliographystyle{apj}
\section{INTRODUCTION} One of the main differences between a classical and a quantum theory is that, in a quantum theory, classically forbidden phenomena can occur, such as tunneling and quantum entanglement, the latter first studied in 1935 by Einstein, Podolsky and Rosen \tcolor{\cite{EPR}}. Suppose we have two entangled quantum systems $A$ and $B$. If we measure a property of one system (for instance, spin), then due to entanglement we also automatically determine the same property of the other system, no matter the distance between the two. This phenomenon suggested that quantum mechanics might be an incomplete theory, lacking ``hidden variables''. In 1964, John Stewart Bell introduced the now so-called Bell inequality \tcolor{\cite{Bell}}. This inequality is a bound on correlations of experimental results; violation of the inequality supports the view that quantum mechanics is a complete theory without hidden variables (at least without those fulfilling the assumptions of Bell's theorem). The original Bell inequality is not optimal for experimental tests \tcolor{\cite{bell_ineq_at_LEP}}; because of this, other Bell-type inequalities have been searched for. Since the Bell-type inequalities were introduced, several experiments have confirmed their violation \tcolor{\cite{BIviol_1,BIviol_2,BIviol_3,BIviol_4,BIviol_5}}, supporting quantum theory as a true description of physical reality. Quantum entanglement has been broadly studied as a tool to develop new technologies in quantum computing. It has also been studied as a fundamental natural phenomenon, for example in the context of the foundations of quantum mechanics \tcolor{\cite{foundations_qm}} and Quantum Field Theories \tcolor{\cite{qft_ent_1,qft_ent_2,qft_ent_3}}, and even in studies of neutrino oscillations \tcolor{\cite{NeutrinoEnt,Neutrino1,Neutrino2,Neutrino3,Neutrino4,Neutrino5}}. Here we are interested in studying entanglement in scattering processes in QED.
Recent work on this and related tests of Bell-type inequalities at colliders can be found in \tcolor{\cite{wittness,EntangEntropyDecay,max_ent_DIS,BI_collider_1,moller_scatt,scatt_qed_ent,two_qbit_entang,BI_collider_2,BI_collider_3}}. In order to determine whether a theory belongs to a local hidden variable class, we can test the Clauser-Horne inequality (CHI) derived in \tcolor{\cite{CH1974,CH1978}}. The CHI is a Bell-type inequality that is easier to test experimentally than the original Bell inequality. In their work \tcolor{\cite{CH1974}}, Clauser and Horne also proposed an experimental setup to correctly test their inequality. Let \begin{align*} S &= \frac{p_{12}(a_1,a_2)}{p_{12}(\infty,\infty)} - \frac{p_{12}(a_1,a_2')}{p_{12}(\infty,\infty)} + \frac{p_{12}(a_1',a_2)}{p_{12}(\infty,\infty)}\\ &+ \frac{p_{12}(a_1',a_2')}{p_{12}(\infty,\infty)} - \frac{p_{12}(a_1',\infty)}{p_{12}(\infty,\infty)} - \frac{p_{12}(\infty,a_2)}{p_{12}(\infty,\infty)} \hphantom{.}\spa. \numberthis \end{align*} If $S$ takes a value outside the range $[-1,0]$, the inequality is said to be violated. In this case, $p_{12}(a_1,a_2)/p_{12}(\infty,\infty)$ represents the joint probability of measuring the spin of the outgoing particles along the directions $a_1$ and $a_2$, and $p_{12}(\infty,a_2)/p_{12}(\infty,\infty)$ is the probability of measuring the spin of only one particle in the direction $a_2$. Here, $p_{12}(\infty,\infty)$ denotes the normalization factor. In previous work \tcolor{\cite{yongram}}, spin correlations were calculated for particles oriented as in the experimental setup proposed by Clauser and Horne. The probability amplitudes derived depend explicitly on the speed $\beta$ and the polarization angles $\chi_1$ and $\chi_2$ of the particles emerging from electron-positron scattering. Although the authors of \tcolor{\cite{yongram}} claim to have found violation of the CHI at all energies, we found this is not true.
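As a numerical illustration of how the combination $S$ can leave the interval $[-1,0]$, the following sketch evaluates Eq.~(1) using the $\beta \to 1$ joint probability $P(\chi_1,\chi_2) = \frac{1}{4}\big(1-\cos(\chi_1-\chi_2)\big)$ and single-detector probabilities $1/2$ obtained later in this work; the CHSH-like detector angles are chosen here purely for illustration.

```python
import numpy as np

# Joint probability in the high-energy limit (beta -> 1), and the
# corresponding single-detector probabilities of 1/2.
def p_joint(x1, x2):
    return 0.25 * (1.0 - np.cos(x1 - x2))

def S(a, ap, b, bp):
    # Clauser-Horne combination of Eq. (1), with p12(inf, inf) divided out
    return (p_joint(a, b) - p_joint(a, bp)
            + p_joint(ap, b) + p_joint(ap, bp) - 0.5 - 0.5)

# CHSH-like angles chosen for illustration
s = S(0.0, np.pi / 2.0, np.pi / 4.0, 3.0 * np.pi / 4.0)
print(s)  # about -1.207, below the local-hidden-variable bound S >= -1
```

For this probability the extrema are $-\tfrac{1}{2} \mp \tfrac{\sqrt{2}}{2}$, so the minimum and maximum indeed sum to $-1$.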
We also corrected some of the spinor expressions they used; to verify this, it is enough to take the spinors and plug them into the Dirac equation. In Appendices A and B we show the correct expressions for all spinors. The work is organized as follows: (1) first we perform the calculations for initially polarized particles and test the CHI at all energies. (2) Then we repeat the same procedure with initially unpolarized particles, averaging over initial spins. Here we also test the CHI at all energies; in this case we do not find violation of the CHI. (3) Finally, we discuss possible reasons why CHI violation does not occur for initially unpolarized scattering. \section{INITIALLY POLARIZED PARTICLES} As already mentioned, the scattering amplitude is calculated at tree level in QED only. The Feynman diagrams are shown in Figure \tcolor{\ref{feyndiag}} \begin{figure}[H] \includegraphics[scale=0.24]{feyn_diag_f3.png} \centering \caption{Feynman diagrams for electron-positron scattering.} \label{feyndiag} \end{figure} The probability amplitude for the scattering process shown in Figure \tcolor{\ref{feyndiag}} is known to be \begin{align*} \label{general_amplitude} \mathcal{M} &= \frac{e^2}{s} \enskip \sub{k_1} \scaleobj{1.2}{\gamma}^\mu \sv{k_2} \svb{p_2} \scaleobj{1.2}{\gamma}_\mu \su{p_1} \\ &- \frac{e^2}{t} \enskip \sub{k_1} \scaleobj{1.2}{\gamma}^\nu \su{p_1} \svb{p_2} \scaleobj{1.2}{\gamma}_\nu \sv{k_2} \hphantom{.}\spa. \numberthis \end{align*} As done in \tcolor{\cite{yongram}}, we consider the particles' momenta and polarizations oriented as in the experimental setup proposed by Clauser and Horne in \tcolor{\cite{CH1974}}. The initial electron and positron move along the $y$ axis and collide in their common CM reference frame. Each particle has spin up in the $z$ direction and momenta $\mathbf{p}_1=\gamma m \beta(0,1,0)=-\mathbf{p}_2$, respectively, where $\gamma=1 / \sqrt{1-\beta^2}$.
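The claim that a spinor is correct can be checked mechanically by inserting it into the momentum-space Dirac equation. A minimal numerical check in the Dirac representation follows (the mass, speed, and spin orientation are illustrative values matching the setup above, not the expressions of Appendix A):

```python
import numpy as np

# Pauli matrices and Dirac-representation gamma matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g0 = np.block([[I2, Z2], [Z2, -I2]])
gvec = [np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]

m, beta = 1.0, 0.6                         # illustrative values
gam = 1.0 / np.sqrt(1.0 - beta**2)
E = gam * m
p = np.array([0.0, gam * m * beta, 0.0])   # momentum along y, as in the setup
sp = p[0] * s1 + p[1] * s2 + p[2] * s3     # sigma . p

xi = np.array([1.0, 0.0], dtype=complex)   # spin up along z
u = np.sqrt(E + m) * np.concatenate([xi, (sp @ xi) / (E + m)])
v = np.sqrt(E + m) * np.concatenate([(sp @ xi) / (E + m), xi])

pslash = E * g0 - sum(pi * gi for pi, gi in zip(p, gvec))

# Dirac equations and covariant normalization
assert np.allclose(pslash @ u, m * u)      # (pslash - m) u = 0
assert np.allclose(pslash @ v, -m * v)     # (pslash + m) v = 0
assert np.isclose((u.conj() @ g0 @ u).real, 2.0 * m)    # ubar u = +2m
assert np.isclose((v.conj() @ g0 @ v).real, -2.0 * m)   # vbar v = -2m
```

Any candidate spinor that fails these identities cannot be a solution for the stated momentum and mass, which is exactly the test we applied to the expressions of \tcolor{\cite{yongram}}.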
For the emerging electron and positron, we considered the momenta $\mathbf{k}_1=\gamma m \beta(0,0,1)=-\mathbf{k}_2$ respectively. Using the correct spinor expressions shown in appendix A for the initial and final particles, we find the invariant amplitude to be \begin{align*} \label{M_amplitude_section_2} \mathcal{M} &= \enskip \frac{e^2(\beta\rho - 2)}{4\beta\rho} \enskip \Bigg[ \hphantom{.} A(\beta) \cos{\left(\frac{\chi_1 +\chi_2}{2}\right)} \\ &+ B(\beta) \sin{\left(\frac{\chi_1 -\chi_2}{2}\right)} - i C(\beta)\sin{\left(\frac{\chi_1+\chi_2}{2}\right)} \hphantom{.} \Bigg] \hphantom{.}\spa, \numberthis \end{align*} where $\chi_1$ and $\chi_2$ are spin directions over the $xy$-plane specified in Figure \tcolor{\ref{scatter_diagram_initially_polarized}}, and \begin{align*} A(\beta) &= (\beta^2 - 1)(\rho^2 + 1)^2 \hphantom{.}\spa, \\ B(\beta) &= 8\rho^2 \hphantom{.}\spa, \\ C(\beta) &= (\beta^2 - 1)(\rho^4 - 1) \hphantom{.}\spa, \\ \rho(\beta) &= \frac{\gamma \beta}{\gamma +1}=\frac{\beta}{1+\sqrt{1-\beta^2}} \hphantom{.}\spa. \numberthis \end{align*} \begin{figure}[H] \includegraphics[scale=0.35]{diag_polarized.png} \centering \caption{Configuration space scattering diagram. Initial electron and positron move along the $y$ axis, and the emergent electron and positron along the $z$ axis. The angle $\chi_1$ is measured relative to the $x$ axis and denotes the electron spin orientation over the $xy$ plane. The angle $\chi_2$ is also measured relative to the $x$ axis and denotes the positron's opposite spin orientation over the $xy$ plane. 
Remember that for antiparticles the spinor direction is opposite to the physical particle's spin direction.} \label{scatter_diagram_initially_polarized} \end{figure} Let $F(\beta,\chi_1,\chi_2)$ be the modulus squared of $\mathcal{M}$ in equation (\ref{M_amplitude_section_2}); then the conditional joint probability of measuring the polarization of the emergent particles in the directions $\chi_1$ and $\chi_2$ is \begin{equation} P(\beta,\chi_1,\chi_2) = \frac{F(\beta,\chi_1,\chi_2)}{N(\beta)} \hphantom{.}\spa. \end{equation} The normalization factor $N(\beta)$ is obtained by summing the non-normalized probability over the pairs of angles $(\chi_1, \chi_2)$, $(\chi_1+\pi, \chi_2)$, $(\chi_1, \chi_2+\pi)$ and $(\chi_1+\pi, \chi_2+\pi)$; that is, \begin{align*} N(\beta) &= F(\beta,\chi_1,\chi_2) + F(\beta,\chi_1 +\pi ,\chi_2) \\ &+ F(\beta,\chi_1,\chi_2 +\pi ) + F(\beta,\chi_1 +\pi ,\chi_2 +\pi) \\ &= \frac{2e^4}{\beta^4} \hphantom{.} \big( 2-5\beta^2 +8\beta^4 -\beta^6 \big) \hphantom{.}\spa. \numberthis \end{align*} The probability of measuring only the spin of the electron in the $\chi_1$ direction is \begin{align} P(\chi_1,-) &= P(\chi_1,\chi_2)+P(\chi_1,\chi_2+\pi) \hphantom{.}\spa \notag, \\ &=\frac{1}{2} - \frac{2\beta^2 (\beta^2 - 1 )\sin(\chi_1)}{\beta^6 -8 \beta^4 + 5\beta^2 - 2} \hphantom{.}\spa , \end{align} similarly, the probability of measuring only the spin of the positron in the $\chi_2$ direction is \begin{align} P(-,\chi_2) &= P(\chi_1,\chi_2)+P(\chi_1 +\pi,\chi_2) \hphantom{.}\spa\notag, \\ &= \frac{1}{2} + \frac{2\beta^2 (\beta^2 - 1)\sin(\chi_2)}{\beta^6 -8 \beta^4 + 5\beta^2 - 2} \hphantom{.}\spa. \end{align} If we take the high-energy limit $\beta \longrightarrow 1$, the probability becomes \begin{align} P(\chi_1,\chi_2) = \frac{1}{4} \Big( 1 - \cos (\chi_1-\chi_2) \Big) \hphantom{.}\spa.
\end{align} Using these results we can construct the quantity $S$ following Clauser and Horne in \tcolor{\cite{CH1974}} \begin{align*} \label{S_section_2_fully_polarized} S(\beta) &= P(\beta,\chi_1,\chi_2) - P(\beta,\chi_1,\chi'_2) + P(\beta,\chi'_1,\chi_2) \\ &+ P(\beta,\chi'_1,\chi'_2) - P(\chi'_1 , - ) - P( - , \chi_2 ) \hphantom{.}\spa. \numberthis \end{align*} In order to test the CHI, we calculated the minimum and maximum values of $S$ for each value of $\beta\in [0,1]$; these values were computed numerically with \texttt{Wolfram Mathematica}. The result is shown in Figure \tcolor{\ref{S_sec2}}.\\ \begin{figure}[H] \includegraphics[scale=0.21]{S_min_max_section_2_fully_polarized.png} \centering \caption{Maximum and minimum values for S($\beta$) in equation (\ref{S_section_2_fully_polarized}). Horizontal dashed lines at $S=0$ and $S=-1$ show the bounds for the CHI.} \label{S_sec2} \end{figure} Clearly, violation of the CHI occurs for energies such that $\beta \gtrsim 0.696$. The fact that Min$(S)+$Max$(S)=-1$ guarantees that if the CHI is violated from below for a given $\beta_0$, it is also violated from above: since $\textnormal{Min}(S) = -1 - \textnormal{Max}(S)$, a minimum $\textnormal{Min}(S) < -1$ for some $\beta_0$ implies $\textnormal{Max}(S) > 0$ for the same $\beta_0$. \section{INITIALLY UNPOLARIZED PARTICLES} Again, following the procedure in \tcolor{\cite{yongram}}, for the initial positron and electron we take the momenta in the common CM frame to be $\textbf{p}_1 = \gamma m\beta(0,1,0) = - \textbf{p}_2$, respectively, and for the final particles $\textbf{k}_1 = \gamma m\beta(1,0,0) = - \textbf{k}_2$. \begin{figure}[H] \includegraphics[scale=0.34]{diag_polarized_section_3.png} \centering \caption{Configuration space scattering diagram. Initial electron and positron move along the $y$ axis, and emergent electron and positron along the $x$ axis.
The angle $\chi_1$ is measured relative to the $x$ axis and denotes the electron spin orientation over the $zy$ plane. The angle $\chi_2$ is also measured relative to the $x$ axis and denotes the positron's opposite spin orientation over the $zy$ plane. Remember that for antiparticles the spinor direction is opposite to the physical particle's spin direction.} \label{scatter_diagram_initially_unpolarized} \end{figure} First we take the amplitude in (\ref{general_amplitude}) and average over initial spins \begin{align} |\overline{\mathcal{M}}|^2 = \frac{1}{4}\sum_{s,r}|\mathcal{M}|^2 \hphantom{.}\spa. \end{align} Note that, as previously mentioned, we do not sum over the final spins. The spin-averaged squared amplitude yields {\small \begin{align*} |\overline{\mathcal{M}}|^2 &= \frac{e^4}{4s^2} \textnormal{Tr}\Big[ (\slashed{p_1}+m) \scaleobj{1.2}{\gamma}_\nu (\slashed{p_2}-m)\scaleobj{1.2}{\gamma}_\mu \Big] \sub{k_1} \scaleobj{1.2}{\gamma}^\mu \sv{k_2} \svb{k_2 } \scaleobj{1.2}{\gamma}^\nu \su{k_1} \\ &+ \frac{e^4}{4t^2} \sub{k_1} \scaleobj{1.2}{\gamma}^\alpha ( \slashed{p_1}+m ) \scaleobj{1.2}{\gamma}^\beta \su{k_1} \svb{k_2} \scaleobj{1.2}{\gamma}_\beta ( \slashed{p_2}-m ) \scaleobj{1.2}{\gamma}_\alpha \sv{k_2} \\ &- \frac{e^4}{4st} \textnormal{Tr}\Big[(\slashed{p_1}+m)\scaleobj{1.2}{\gamma}^\sigma\su{k_1}\svb{k_2} \scaleobj{1.2}{\gamma}_\sigma (\slashed{p_2}-m )\scaleobj{1.2}{\gamma}_\omega \Big] \sub{k_1}\scaleobj{1.2}{\gamma}^\omega \sv{k_2} \\ &- \frac{e^4}{4st} \textnormal{Tr}\Big[(\slashed{p_1}+m)\scaleobj{1.2}{\gamma}_\rho (\slashed{p_2}-m )\scaleobj{1.2}{\gamma}_\lambda \sv{k_2} \sub{k_1}\scaleobj{1.2}{\gamma}^\lambda\Big] \svb{k_2}\scaleobj{1.2}{\gamma}^\rho\su{k_1} \enskip. \numberthis \end{align*} } These calculations were performed using the spinor expressions shown in Appendix B with the help of \texttt{Mathematica}. Because spinors appear inside the traces of the interference terms, the gamma-matrix algebra alone cannot be used to evaluate them.
Then we need to adopt a particular representation to perform the calculations explicitly. As the authors in \tcolor{\cite{yongram}}, we adopt the Dirac representation. Let $F(\beta,\chi_1,\chi_2)$ be the averaged modulus squared of the amplitude, \begin{align*} F(\beta,\chi_1,\chi_2) = \frac{e^4}{4\beta^4} &\phantom{.} \Bigg[ \cos(\chi_1+\chi_2) (-\beta^6 + 3\beta^4 - \beta^2) \\ &+ \cos(\chi_1-\chi_2) (\beta^6 - 6\beta^4 + 5\beta^2) \\ & - 2\beta^6 + 13\beta^4 - 6\beta^2 + 4 \Bigg] \hphantom{.}\spa. \numberthis \end{align*} The conditional joint probability of measuring the polarization of the emergent particles in the directions $\chi_1,\chi_2$ is \begin{align} P(\beta,\chi_1,\chi_2) = \frac{F(\beta,\chi_1,\chi_2) }{N(\beta)} \hphantom{.}\spa, \end{align} where, again, the normalization factor $N(\beta)$ is obtained by summing the non-normalized probability over the pair of angles $\chi_1$ and $\chi_2$, \begin{align*} N(\beta) &= F(\beta,\chi_1,\chi_2) + F(\beta,\chi_1 +\pi ,\chi_2) \\ &+ F(\beta,\chi_1,\chi_2 +\pi ) + F(\beta,\chi_1 +\pi ,\chi_2 +\pi) \\ &= \frac{e^4}{\beta^4} \big( - 2\beta^6 + 13\beta^4 - 6\beta^2 + 4 \big) \hphantom{.}\spa. \label{N(b)_initiall_no_polarized}\numberthis \end{align*} The probability of measuring only the polarization of the emerging electron and positron, respectively, is \begin{align} P(\chi_1 , - ) = \frac{1}{2} \hphantom{.}\spa,\hphantom{.}\spa P( - , \chi_2 ) = \frac{1}{2} \hphantom{.}\spa.
\end{align} One way to verify that our calculation is indeed correct is the following: the spin-averaged amplitude for Bhabha scattering at tree level is \setlength{\jot}{6pt} \begin{align*} \label{bhabha_scat_fully_avreaged} \frac{1}{4}\sum_{s,r}\sum_{s',r'}|\mathcal{M}|^2 &= \frac{2e^4}{s^2} \big( t^2 + u^2 + 8m^2s - 8m^4 \big) \\ &+ \frac{2e^4}{t^2} \big( s^2 + u^2 + 8m^2t - 8m^4 \big) \\ &+ \frac{4e^4}{st} \big( u^2 - 8m^2u + 12m^4 \big) \hphantom{.}\spa, \numberthis \end{align*} \setlength{\jot}{13pt} where in this case the Mandelstam variables are \begin{align} s = \frac{4m^2}{1 - \beta^2} \enskip\enskip,\enskip\enskip t = -\frac{2\beta^2 m^2}{1 - \beta^2} \enskip\enskip,\enskip\enskip u = - \frac{2\beta^2 m^2}{1 - \beta^2} \enskip\enskip. \end{align} As mentioned right after equation (21) in \tcolor{\cite{yongram}}, summing $F(\beta,\chi_1,\chi_2)$ over the pairs of angles $\chi_1$ and $\chi_2$ should be equivalent to summing over the polarizations of the emerging particles. If we plug the Mandelstam variables into (\ref{bhabha_scat_fully_avreaged}), we indeed obtain the normalization factor $N(\beta)$ in equation (\ref{N(b)_initiall_no_polarized}) \begin{align} \label{bhabha_scat_fully_avreaged_evaluated} &\frac{1}{4}\sum_{s,r}\sum_{s',r'}|\mathcal{M}|^2 = \frac{e^4}{\beta^4} \big( - 2\beta^6 + 13\beta^4 - 6\beta^2 + 4 \big) \hphantom{.}\spa; \end{align} notice that this cannot be obtained in \tcolor{\cite{yongram}}. To analyze a possible violation of the CHI, we again plot the numerical minimum and maximum values of $S(\beta)$ for each value of $\beta\in [0,1]$. The result is shown in Figure \tcolor{\ref{S_sec3}}. In this case, violation of the CHI does not occur at any energy. \begin{figure}[H] \includegraphics[scale=0.21]{S_min_max_section_3_partially_polarized.png} \centering \caption{Maximum and minimum values for S($\beta$).
Horizontal dashed lines at $S=0$ and $S=-1$ show the bounds for the CHI.} \label{S_sec3} \end{figure} \section{CONCLUSION} We have tested the Clauser-Horne inequality in the QED process $e^+ e^- \rightarrow e^+ e^-$. First, we considered initially polarized electrons and found violation of the Clauser-Horne inequality for incoming electrons with a speed $\beta \gtrsim 0.696$. As remarked in \tcolor{\cite{yongram}}, the spin correlation depends on the speed of the particles. As a second case, we studied initially unpolarized electrons, but this time we found no violation of the Clauser-Horne inequality. It is important to remember that entanglement does not necessarily imply violation of Bell-type inequalities. However, if a state violates such an inequality, it is assured that the state is entangled. Our result does not imply that entanglement is lost because the particles are unpolarized. As argued in \tcolor{\cite{bell_ineq_at_LEP}}, averaging over the initial spin states leads to a loss of information on the final states to be measured; thus, it is not possible to violate the Clauser-Horne inequality in this way. Measuring the polarization of the outgoing particles in the scattering process $e^+ e^- \rightarrow e^+ e^-$, where the polarization of the incoming particles is unknown, is not a proper experimental test of the Clauser-Horne inequality. \section{ACKNOWLEDGMENTS} We would like to thank Alfredo Aranda and Carlos Alvarado for useful discussions and guidance on this work.
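The scan over detector angles described above can also be reproduced outside \texttt{Mathematica}. The following Python sketch (an illustration, not part of the original analysis; it sets $e=1$) evaluates the unpolarized-beam probability $P = F/N$ of Section III on a grid of angles, checks the normalization identity defining $N(\beta)$, and verifies that the Clauser-Horne combination $S$ stays within $[-1,0]$ at every sampled energy, in agreement with Figure \ref{S_sec3}.

```python
import numpy as np

# F(beta, chi1, chi2): averaged |M|^2 for initially unpolarized beams,
# in units with e = 1, as given in the text.
def F(b, c1, c2):
    return (np.cos(c1 + c2) * (-b**6 + 3*b**4 - b**2)
            + np.cos(c1 - c2) * (b**6 - 6*b**4 + 5*b**2)
            - 2*b**6 + 13*b**4 - 6*b**2 + 4) / (4*b**4)

def N(b):  # closed-form normalization factor
    return (-2*b**6 + 13*b**4 - 6*b**2 + 4) / b**4

def P(b, c1, c2):  # conditional joint probability
    return F(b, c1, c2) / N(b)

def S(b, c1, c2, d1, d2):
    # Clauser-Horne combination; the single-particle probabilities
    # equal 1/2 for unpolarized beams.
    return (P(b, c1, c2) - P(b, c1, d2) + P(b, d1, c2)
            + P(b, d1, d2) - 0.5 - 0.5)

ang = np.linspace(0.0, 2*np.pi, 25)
C1, C2, D1, D2 = np.meshgrid(ang, ang, ang, ang, indexing="ij")

for b in np.linspace(0.05, 0.99, 20):
    # normalization: the four angle shifts must sum to N(beta)
    tot = (F(b, 0.3, 0.7) + F(b, 0.3 + np.pi, 0.7)
           + F(b, 0.3, 0.7 + np.pi) + F(b, 0.3 + np.pi, 0.7 + np.pi))
    assert abs(tot - N(b)) < 1e-9
    vals = S(b, C1, C2, D1, D2)
    # no CHI violation for unpolarized beams: -1 <= S <= 0
    assert vals.min() >= -1 - 1e-9 and vals.max() <= 1e-9
print("normalization and CHI bounds verified")
```

A finer angular grid or a numerical optimizer would sharpen the extrema, but even this coarse scan confirms the qualitative conclusion of the figure.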
\section{APPENDIX A: SPINORS FOR SECTION II} In the Dirac representation of the gamma matrices, the spinors for particles and antiparticles, normalized as $\sub{p}^s\su{p}^r = 2m \delta_{sr}$ and $\svb{p}^s\sv{p}^r =- 2m \delta_{sr}$, are given by \begin{equation} \label{dirac_spinors} u^{s}\scaleobj{0.7}{(p)}=\sqrt{E+m} \begin{pmatrix} \varphi^s \\[.27cm] \frac{\Vec{\sigma} \cdot \Vec{p}}{E+m} \hphantom{.} \varphi^s \end{pmatrix} \hphantom{.}\spa, \end{equation} \begin{equation}\label{dirac_spinors2} v^{s}\scaleobj{0.7}{(p)}=-\sqrt{E+m} \begin{pmatrix} \frac{\Vec{\sigma} \cdot \Vec{p}}{E+m} \hphantom{.} \eta^{s} \\[.27cm] \eta^{s} \end{pmatrix} \hphantom{.}\spa, \end{equation} where $\varphi^s$ and $\eta^s$ are Weyl spinors encoding the spin direction. For spin along the $x$, $y$ or $z$ axis they are eigenstates of the Pauli matrices $\sigma^j$; if the spin direction is quantized along an arbitrary axis $\hat{n}$, then they are eigenstates of the linear combination $\vec{\sigma} \cdot \hat{n}$. Working in the center-of-mass reference frame, let us consider an initial electron with spin up along the $z$ axis and four-momentum $p^\mu_1$, and similarly an initial positron also with spin up along the $z$ axis and four-momentum $p^\mu_2$, with their momenta given by \begin{align} p^\mu_1 = ( E,0,\gamma m\beta,0 ) \enskip\esp&,\enskip\esp p^\mu_2 = ( E,0, -\gamma m\beta,0 ) \hphantom{.}\spa. \end{align} For the electron, the physical particle spin is the same as the spinor $\su{p_1}^s$ spin. Pointing in the $+z$ direction, the Weyl spinor is \begin{align} \label{weyl_spinor_phi_1} \varphi^1 = \left(\begin{array}{c} 1\\[0.2cm] 0\\ \end{array}\right) \hphantom{.}\spa. \end{align} For the positron, the physical particle spin is opposite to the spinor $\sv{p_2}^r$ spin.
For physical spin along the $z$ direction, the corresponding Weyl spinor is \begin{align} \label{weyl_spinor_eta_2} \eta^2 = \left(\begin{array}{c} 0\\[0.2cm] 1\\ \end{array}\right) \hphantom{.}\spa, \end{align} and plugging this into (\ref{dirac_spinors}) and (\ref{dirac_spinors2}) we get the initial spinors \begin{align} \su{p_1} = \sqrt{E + m} \left(\begin{array}{c} 1 \\[0.1cm] 0 \\[0.1cm] 0 \\[0.1cm] i\rho \end{array}\right) \hphantom{.}\spa,\hphantom{.}\spa \sv{p_2} = - \sqrt{E + m} \left(\begin{array}{c} i\rho \\[0.1cm] 0 \\[0.1cm] 0 \\[0.1cm] 1 \end{array}\right) \hphantom{.}. \end{align} Now for the final electron and positron with momenta $k^\mu_1$ and $k^\mu_2$ respectively \begin{align} k^\mu_1 = ( E,0,0,\gamma m\beta ) \enskip\esp&,\enskip\esp k^\mu_2 = ( E,0,0,-\gamma m\beta ) \hphantom{.}\spa, \end{align} we look for the spinor spin lying on the $xy$ plane, measured relative to the $x$-axis, for both particles. For this we take the Weyl spinor in (\ref{weyl_spinor_phi_1}) and rotate it using the spin rotation operator \renewcommand\arraystretch{2.2} \begin{align}\label{Rotation} \scaleobj{1.3}{e}^{-i\frac{\theta}{2} \vec{\sigma} \cdot \hat{n}} = \scaleobj{0.8}{ \left(\begin{array}{cc} \cos(\theta/2)-in^{3}\sin(\theta/2) \enskip&\enskip -i(n^{1}-in^{2})\sin(\theta/2) \\ -i(n^{1}+in^{2})\sin(\theta/2) \enskip&\enskip \cos(\theta/2)+in^{3}\sin(\theta/2) \end{array}\right) } \enskip. \end{align} \renewcommand\arraystretch{1} Rotations are performed by an angle $\theta$ around the $\hat{n}$ axis. As illustrated in Figure \tcolor{\ref{scatter_diagram_initially_unpolarized}}, viewing the electron as coming towards us, the spin direction is measured clockwise; viewing the positron as coming towards us, the spin direction is also measured clockwise, so the Weyl spinor is the same for both spinors $\su{k_1}$ and $\sv{k_2}$.
The physical positron spin direction measured in the lab is opposite to the spinor $\sv{k_2}$ direction, so for a spinor spin direction $\chi_2$ the lab measures a spin direction $\chi_2 + \pi$. We first rotate $\varphi^1$ by an angle $\theta = \pi/2$ around the $y$-axis, and then rotate again, this time by an angle $\chi_1$ around the $z$ axis. Repeating this for the positron, but performing the second rotation by an angle $\chi_2$, we obtain the Weyl spinor \begin{align} \xi_k = \frac{1}{\sqrt{2}} \left(\begin{array}{c} \scaleobj{1.3}{e}^{-i\chi_k /2} \\[0.25cm] \scaleobj{1.3}{e}^{i\chi_k /2} \end{array}\right) \hphantom{.}\spa. \end{align} Plugging this into (\ref{dirac_spinors}) and (\ref{dirac_spinors2}) we obtain the spinor expressions for the final particles \begin{align} \su{k_1} = \sqrt{\frac{E+m}{2}} \left(\begin{array}{c} \scaleobj{1.3}{e}^{-i\chi_1 /2} \\[0.15cm] \scaleobj{1.3}{e}^{i\chi_1 /2} \\[0.15cm] \rho \hphantom{.} \scaleobj{1.3}{e}^{-i\chi_1 /2} \\[0.15cm] -\rho \hphantom{.} \scaleobj{1.3}{e}^{i\chi_1 /2} \\[0.15cm] \end{array}\right) \hphantom{.}\spa, \end{align} \begin{align} \sv{k_2} = - \sqrt{\frac{E+m}{2}} \left(\begin{array}{c} - \rho \hphantom{.} \scaleobj{1.3}{e}^{-i\chi_2 /2} \\[0.15cm] \rho \hphantom{.} \scaleobj{1.3}{e}^{i\chi_2 /2} \\[0.15cm] \scaleobj{1.3}{e}^{-i\chi_2 /2} \\[0.15cm] \scaleobj{1.3}{e}^{i\chi_2 /2} \\[0.15cm] \end{array}\right) \hphantom{.}\spa. \end{align} \section{APPENDIX B: SPINORS FOR SECTION III} Consider the final electron and positron as having momenta $k^\mu_1 = ( E,\gamma m\beta,0,0)$ and $k^\mu_2 = ( E,-\gamma m\beta,0,0)$, and suppose their spins lie on the $yz$-plane. To describe the Weyl spinors in (\ref{dirac_spinors}) and (\ref{dirac_spinors2}), it is enough to rotate $\varphi^1$ from (\ref{weyl_spinor_phi_1}) around the $x$-axis.
Denoting as $\chi_1$ and $\chi_2$ the rotation angles that describe the spin direction of the electron and positron, and using the expression (\ref{Rotation}), we obtain the Weyl spinor \begin{align} \xi_k = \left(\begin{array}{c} \cos{(\chi_k /2)} \\[0.2cm] -i\sin{(\chi_k /2)} \end{array}\right) \hphantom{.}\spa. \end{align} Plugging this expression into (\ref{dirac_spinors}) and (\ref{dirac_spinors2}), we obtain the final four-spinors: \begin{align} \su{k_1} = \sqrt{E+m} \left(\begin{array}{c} \cos{(\chi_1 /2)} \\[0.13cm] -i\sin{(\chi_1 /2)} \\[0.13cm] -i \rho \hphantom{.} \sin{(\chi_1 /2)} \\[0.13cm] \rho \hphantom{.} \cos{(\chi_1 /2)} \\[0.13cm] \end{array}\right) \hphantom{.}\spa, \end{align} \begin{align} \sv{k_2} = - \sqrt{E+m} \left(\begin{array}{c} i \rho \hphantom{.} \sin{(\chi_2 /2)} \\[0.13cm] -\rho \hphantom{.} \cos{(\chi_2 /2)} \\[0.13cm] \cos{(\chi_2 /2)} \\[0.13cm] -i \sin{(\chi_2 /2)} \\[0.13cm] \end{array}\right) \hphantom{.}\spa. \end{align} \renewcommand*{\bibfont}{\footnotesize} \renewcommand{\refname}{\centering\textsc{REFERENCES}}
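As a quick cross-check of the appendix formulas, the following NumPy sketch (illustrative only, not part of the paper) constructs the final four-spinors of Appendix B and verifies the stated normalizations $\bar u u = 2m$ and $\bar v v = -2m$. The factor $\rho$ is taken to be $\gamma m\beta/(E+m) = |\textbf{k}|/(E+m)$, an assumption consistent with the lower-component structure of (\ref{dirac_spinors}).

```python
import numpy as np

def spinors(beta, chi1, chi2, m=1.0):
    # CM-frame kinematics in units with c = 1; m is the electron mass.
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    E = gamma * m
    rho = gamma * m * beta / (E + m)   # assumed meaning: |k| / (E + m)
    s1, c1 = np.sin(chi1 / 2), np.cos(chi1 / 2)
    s2, c2 = np.sin(chi2 / 2), np.cos(chi2 / 2)
    # Four-spinors of Appendix B (Dirac representation)
    u = np.sqrt(E + m) * np.array([c1, -1j*s1, -1j*rho*s1, rho*c1])
    v = -np.sqrt(E + m) * np.array([1j*rho*s2, -rho*c2, c2, -1j*s2])
    return u, v

g0 = np.diag([1.0, 1.0, -1.0, -1.0])   # gamma^0 in the Dirac representation

def bar(w):
    return w.conj() @ g0               # Dirac adjoint

m = 1.0
for beta in (0.3, 0.696, 0.9):
    for chi in (0.0, 0.7, 2.1):
        u, v = spinors(beta, chi, chi + 0.4, m)
        assert abs(bar(u) @ u - 2*m) < 1e-12   # ubar u =  2m
        assert abs(bar(v) @ v + 2*m) < 1e-12   # vbar v = -2m
print("spinor normalizations verified")
```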
\section{Introduction}\label{s1} First we consider the evolution equation \begin{equation}\label{11} \begin{cases} U'(t)=AU(t)+k(t)BU(t-\tau)\qtq{in}(0,\infty),\\ U(0)=U_0,\\ BU(t-\tau)=f(t)\qtq{for}t\in(0,\tau), \end{cases} \end{equation} where $A$ generates an exponentially stable semigroup $(S(t))_{t\ge 0}$ in a Hilbert space $H$, $B$ is a continuous linear operator of $H$ into itself, $k:[0,\infty)\to\mathbb R$ is a function belonging to $L^1_{loc}([0,\infty);\mathbb R)$, and $\tau>0$ is a delay parameter. The initial data $U_0$ and $f$ are taken in $H$ and $C([0,\tau];H)$, respectively. By the assumptions on the operator $A$ there exist two numbers $M,\omega>0$ such that \begin{equation}\label{12} \norm{S(t)}\le Me^{-\omega t}\qtq{for all}t\ge 0. \end{equation} Time delay effects often appear in applications to physical models, and it is well-known (\cite{BP, DLP}) that they can induce instability phenomena. We are interested in showing that under some mild assumptions on $\tau, k$, the operator $B$ and the constants $M, \omega$ the system \eqref{11} is still exponentially stable. Stability results for the above abstract model have been recently obtained in \cite{JEE15, NicaisePignotti18} but only for a constant delay feedback coefficient $k$. In these papers some nonlinear extensions are also considered. The arguments of \cite{JEE15, NicaisePignotti18} could be easily extended to $k(\cdot)\in L^\infty$ with sufficiently small $\norm{k}_\infty$. Recently, motivated by some applications, $L^1_{loc}$ damping coefficients $k$ have been considered, for instance of intermittent type (see \cite{Haraux, Pignotti, FP}), and stability estimates have been obtained for some particular models. Here, our aim is to give a well-posedness result and an exponential decay estimate for the general model \eqref{11} with a damping coefficient $k$ belonging only to $L^1_{loc}$. As a non-trivial generalization, next we consider the case of multiple time-varying delays. 
Namely, let $\tau_i:[0, +\infty)\rightarrow (0, +\infty),$ $i=1, \dots, l,$ be the time delay functions, belonging to $W^{1,\infty}(0,+\infty)$. We assume for each $i=1, \ldots, l,$ that \begin{equation}\label{13qqq} 0\le \tau_i(t)\le \overline\tau_i \end{equation} and \begin{equation}\label{14qqq} \tau_i^\prime (t)\le c_i<1 \end{equation} for a.e. $t>0$, with suitable constants $\overline\tau_i$ and $c_i$. It follows from \eqref{14qqq} that \begin{equation*} (t-\tau_i(t))^\prime = 1-\tau_i^\prime (t) >0 \end{equation*} for a.e. $t>0$, and hence \begin{equation*} t-\tau_i(t)\ge -\tau_i(0)\qtq{for all}t\ge 0. \end{equation*} Therefore, setting \begin{equation}\label{15qqq} \tau^*:=\max_{i=1,..., l} \set{\tau_i(0)} \end{equation} we may consider the following abstract model: \begin{equation}\label{16qqq} \begin{cases} U'(t)=AU(t)+\sum_{i=1}^lk_i(t)B_iU(t-\tau_i(t))\qtq{in}(0,\infty),\\ U(0)=U_0,\\ B_iU(t)=f_i(t)\qtq{for}t\in [-\tau^*, 0], \end{cases} \end{equation} where the operator $A$, as before, generates an exponentially stable semigroup $(S(t))_{t\ge 0}$ in a Hilbert space $H$, and for each $i=1,\dots, l$, $B_i$ is a continuous linear operator of $H$ into itself, $k_i:[0,\infty)\to\mathbb R$ belongs to $L^1_{loc}([0,+\infty);\mathbb R )$, and $\tau_i$ is a variable time delay. Under some mild assumptions on the involved functions and parameters we will establish the well-posedness of the problem \eqref{16qqq}, and we will obtain exponential decay estimates for its solutions.
The nonlinear version of the previous model, \begin{equation}\label{51qqq} \begin{cases} U'(t)=AU(t)+\sum_{i=1}^lk_i(t)B_iU(t-\tau_i(t))+F(U(t))\\ \hspace{7,6 cm}\qtq{in}(0,\infty),\\ U(0)=U_0,\\ B_iU(t)=f_i(t)\qtq{for}t\in [-\tau^*, 0], \end{cases} \end{equation} is also analyzed, under a Lipschitz continuity assumption on the nonlinear function $F.$ Exponential decay of the energy is obtained for small initial data under a suitable well-posedness assumption. A quite general class of examples satisfying our abstract setting is exhibited. The paper is organized as follows. In Sections \ref{s2qqq} and \ref{s3qqq} we prove the well-posedness and the exponential decay estimate for the model \eqref{11} with a single constant time delay. Next, in Section \ref{s4qqq} we analyze the more general system \eqref{16qqq} with multiple time-varying delays. The nonlinear abstract model \eqref{51qqq} is studied in Section \ref{s5qqq}, where we give an exponential decay estimate for small initial data. Finally, in Section \ref{s6qqq} we give some applications of our abstract results to concrete models. \section{Single constant delay: well-posedness}\label{s2qqq} The following well-posedness result holds true. \begin{proposition}\label{p21qqq} Given $U_0\in H$ and a continuous function $f:[0,\tau]\to H$, the problem \eqref{11} has a unique (weak) solution given by Duhamel's formula \begin{equation}\label{21qqq} U(t)=S(t)U_0+\int_0^tk(s)S(t-s)BU(s-\tau)\ ds\qtq{for all}t\ge 0. \end{equation} \end{proposition} \begin{proof} We proceed step by step by working on time intervals of length $\tau.$ First we consider $t\in [0,\tau]$. Setting $G_1(t) =k(t)BU(t-\tau)$ for $t\in [0,\tau]$, we observe that $G_1(t)= k(t)f(t)$ for $t\in [0,\tau]$. Then, problem \eqref{11} can be rewritten, in the time interval $[0,\tau]$, as a standard inhomogeneous evolution problem: \begin{equation}\label{22qqq} \begin{cases} U'(t)=AU(t)+G_1(t) \qtq{in}(0,\tau),\\ U(0)=U_0.
\end{cases} \end{equation} Since $k\in L^1_{loc}([0,\infty);\mathbb R)$ and $f\in C([0,\tau];H)$, we have that $G_1\in L^1((0,\tau);H)$. Therefore, applying \cite[Corollary 2.2]{Pazy} there exists a unique solution $U \in C([0,\tau]; H)$ of \eqref{22qqq} satisfying Duhamel's formula \begin{equation*} U(t)= S(t)U_0+\int_0^tS(t-s) G_1 (s) ds,\quad t\in [0,\tau] \end{equation*} and hence \begin{equation*} U(t)= S(t)U_0+\int_0^tk(s) S(t-s) BU (s-\tau) ds,\quad t\in [0,\tau]. \end{equation*} Next we consider the time interval $[\tau, 2\tau]$. Setting $G_2(t)=k(t)BU(t-\tau)$, we observe that $U(t-\tau)$ is known for $t\in [\tau, 2\tau]$ from the first step. Then $G_2$ is a known function and it belongs to $L^1((\tau,2\tau); H)$. So we can rewrite our model \eqref{11} in the time interval $[\tau,2\tau ]$ as the inhomogeneous evolution problem \begin{equation}\label{23qqq} \begin{cases} U'(t)=AU(t)+G_2(t) \qtq{in}(\tau ,2\tau),\\ U(\tau)=U (\tau^-). \end{cases} \end{equation} By the standard theory of abstract Cauchy problems we have a unique continuous solution $U:[\tau, 2\tau]\rightarrow H$ satisfying \begin{equation*} U(t)= S(t-\tau )U(\tau^-) +\int_\tau ^tS(t -s) G_2 (s) ds,\quad t\in [\tau ,2\tau], \end{equation*} and hence \begin{equation*} U(t)= S(t-\tau )U(\tau^-) +\int_\tau ^tk(s) S(t -s) BU (s-\tau) ds,\quad t\in [\tau ,2\tau]. \end{equation*} Putting together the partial solutions obtained in the first and second steps we get a unique continuous solution $U:[0,2\tau]\rightarrow H$ satisfying Duhamel's formula \begin{equation*} U(t)= S(t)U_0+\int_0^tk(s) S(t-s) BU (s-\tau) ds,\quad t\in [0,2\tau]. \end{equation*} Iterating this argument we find a unique solution $U\in C([0,\infty);H)$ satisfying the representation formula \eqref{21qqq}.
\end{proof} \section{Single constant delay: exponential stability}\label{s3qqq} It follows from Duhamel's formula \eqref{21qqq} that \begin{equation}\label{31qqq} \begin{array}{l} \displaystyle{e^{\omega t}\norm{U(t)} \le M\norm{U_0}+Me^{\omega\tau}\int_0^{\tau}\abs{k(s)}e^{\omega (s-\tau)}\norm{f(s)}\ ds}\\ \qquad\displaystyle{ +M\norm{B}e^{\omega\tau}\int_{\tau}^t\abs{k(s)}e^{\omega (s-\tau)}\norm{U(s-\tau)}\ ds} \end{array} \end{equation} for all $t\ge 0$. Setting \begin{align*} &u(t):=e^{\omega t}\norm{U(t)},\\ &\alpha:=M\norm{U_0}+Me^{\omega\tau}\int_0^{\tau}\abs{k(s)}e^{\omega (s-\tau)}\norm{f(s)}\ ds\intertext{and} &\beta(t):=M\norm{B}e^{\omega\tau}\abs{k(t+\tau)} \end{align*} we may rewrite \eqref{31qqq} in the form \begin{equation*} u(t)\le \alpha+\int_0^{t-\tau}\beta(s)u(s)\ ds\qtq{for all}t\ge 0. \end{equation*} Since $\beta\ge 0$ and $u\ge 0$, it follows that \begin{equation*} u(t)\le \alpha+\int_0^{t}\beta(s)u(s)\ ds\qtq{for all}t\ge 0. \end{equation*} Applying Gronwall's inequality we conclude that \begin{equation*} u(t)\le \alpha e^{\int_0^t\beta(s)\ ds}\qtq{for all}t\ge 0, \end{equation*} i.e., \begin{equation}\label{32qqq} \norm{U(t)}\le \alpha e^{\int_0^t\beta(s)\ ds-\omega t}\qtq{for all}t\ge 0. \end{equation} This estimate yields the following result: \begin{theorem}\label{t31qqq} Assume that there exist two constants $\omega'\in(0,\omega)$ and $\gamma\in\mathbb R$ such that \begin{equation}\label{33qqq} M\norm{B}e^{\omega\tau}\int_0^t \abs{k(s+\tau)}\ ds\le\gamma+\omega't\qtq{for all}t\ge 0. \end{equation} Then there exists a constant $M'>0$ such that the solutions of \eqref{11} satisfy the estimate \begin{equation}\label{34qqq} \norm{U(t)}\le M'e^{-(\omega-\omega') t}\qtq{for all}t\ge 0. \end{equation} \end{theorem} \begin{proof} Using our previous notation $\beta(s)=M\norm{B}e^{\omega\tau}\abs{k(s+\tau)}$ we have \begin{equation*} \int_0^t\beta(s)\ ds-\omega t\le\gamma-(\omega-\omega') t\qtq{for all}t\ge 0. 
\end{equation*} Combining this with \eqref{32qqq} the estimate \eqref{34qqq} follows with $M':=\alpha e^{\gamma}$. \end{proof} \begin{remark} The hypothesis \eqref{33qqq} is satisfied, in particular, if the feedback coefficient $k$ belongs to $L^\infty (0,\infty)$ and \begin{equation*} M\norm{B}\cdot\norm{k}_\infty e^{\omega\tau} < \omega. \end{equation*} It is also satisfied if $k\in L^1(0,\infty)$ or, more generally, if $k=k_1+k_2$ with $k_1\in L^1(0,\infty)$ and $k_2\in L^\infty (0,\infty)$ with a sufficiently small norm $\norm{k_2}_\infty$. Thus Theorem \ref{t31qqq} extends the results obtained in \cite{NicaisePignotti18}, in the linear setting, for constant $k$. \end{remark} \section{Time variable delays}\label{s4qqq} Let us now consider the model \eqref{16qqq} with multiple time-varying delays. First, we study its well-posedness. \begin{theorem}\label{t41} Given $U_0\in H$ and continuous functions $f_i:[-\tau^*, 0]\to H$, $i=1,\dots,l,$ the problem \eqref{16qqq} has a unique (weak) solution given by Duhamel's formula \begin{equation}\label{41qqq} U(t)=S(t)U_0+\int_0^tS(t-s)\sum_{i=1}^lk_i(s)B_iU(s-\tau_i(s))\ ds, \end{equation} for all $t\ge 0.$ \end{theorem} \begin{proof} First we consider the case of time delay functions $\tau_i(\cdot)$ bounded from below by some positive constants, i.e., there exists for each $i=1,\dots,l$ a constant $\underline{\tau}_i>0$ such that \begin{equation*} \tau_i(t)\ge\underline{\tau}_i\qtq{for all}t\ge 0. \end{equation*} Then we may argue step by step, as in the proof of Proposition \ref{p21qqq}, by restricting ourselves each time to a time interval of length \begin{equation*} \tau_{min}:=\min \set{\underline {\tau}_i\ :\ i=1, \dots, l}>0. \end{equation*} Indeed, we infer from the assumption \eqref{13qqq} that \begin{equation*} t-\tau_i(t) \le t-\underline\tau_i\le t-\tau_{min} \end{equation*} for all $t\ge 0$ and $i=1,\dots,l$.
Therefore, if $t\in [k \tau_{min}, (k+1) \tau_{min}],$ then $t-\tau_i(t) \le k\tau_{min}$ for all $i=1,\dots, l$. Now, we pass to the general case. For each fixed positive number $\epsilon\le 1$ we set \begin{equation}\label{42qqq} \tau_i^\epsilon (t):=\tau_i(t)+\epsilon,\quad t\ge0, \quad i=1,\dots, l. \end{equation} Moreover, we extend the initial data $f_i$ to continuous functions $\tilde f_i: [-\tau^* -1, 0]\rightarrow H$ with the constant $\tau^*$ defined in \eqref{15qqq}. Since $\tau_i^\epsilon(t)\ge \epsilon >0$ for all $t$ and $i$, the corresponding model \eqref{16qqq} with initial data $U_0,$ $\tilde f_i$ and time delay functions $\tau^{\epsilon}_i(\cdot)$ has a unique solution $U_\epsilon (\cdot)\in C([0, +\infty);H)$ satisfying the representation formula \eqref{41qqq}. Now consider two positive parameters $\epsilon_1,\epsilon_2\le 1$. Since $U_{\epsilon_1}(\cdot)$ and $U_{\epsilon_2}(\cdot)$ satisfy \eqref{41qqq}, we have \begin{multline*} U_{\epsilon_1}(t)- U_{\epsilon_2}(t) =\\ \int_0^t S(t-s) \sum_{i=1}^l k_i(s)\left [ B_iU_{\epsilon_1}(s-\tau_i^{\epsilon_1}(s))- B_iU_{\epsilon_2}(s-\tau_i^{\epsilon_2}(s)) \right ]\, ds \end{multline*} and hence \begin{multline*} U_{\epsilon_1}(t)- U_{\epsilon_2}(t) =\\ \int_0^t S(t-s) \sum_{i=1}^l k_i(s)\left [ B_iU_{\epsilon_1}(s-\tau_i^{\epsilon_1}(s))- B_iU_{\epsilon_2}(s-\tau_i^{\epsilon_1}(s)) \right ]\, ds\\ + \int_0^t S(t-s) \sum_{i=1}^l k_i(s)\left [ B_iU_{\epsilon_2}(s-\tau_i^{\epsilon_1}(s))- B_iU_{\epsilon_2}(s-\tau_i^{\epsilon_2}(s)) \right ]\, ds.
\end{multline*} It follows that \begin{equation}\label{43qqq} e^{\omega t}\Vert U_{\epsilon_1}(t)- U_{\epsilon_2}(t)\Vert \le M(I_1+I_2) \end{equation} with \begin{equation*} I_1:=\sum_{i=1}^l\int_{\tau_i^{\epsilon_1}(0)}^t e^{\omega s} \vert k_i(s)\vert\, \Vert B_i\Vert\,\Vert U_{\epsilon_1}(s-\tau_i^{\epsilon_1}(s))- U_{\epsilon_2}(s-\tau_i^{\epsilon_1}(s)) \Vert\, ds \end{equation*} and \begin{equation*} I_2:=\sum_{i=1}^l \int_0^t e^{\omega s} \vert k_i(s)\vert\, \Vert B_i\Vert\,\Vert U_{\epsilon_2}(s-\tau_i^{\epsilon_1}(s))- U_{\epsilon_2}(s-\tau_i^{\epsilon_2}(s)) \Vert \, ds. \end{equation*} Using the change of variables \begin{equation*} s-\tau_i^{\epsilon_1}(s)=\sigma \end{equation*} in the integrals in the sum $I_1$, and using the notation \begin{equation}\label{44qqq} \varphi_i(s):=s-\tau_i(s)=\epsilon_1+\sigma \end{equation} we obtain the estimate \begin{equation}\label{45qqq} I_1\le \sum_{i=1}^l \frac {e^{\omega (\overline\tau_i+1)}}{1-c_i}\Vert B_i\Vert \int_0^t e^{\omega s} \vert k_i(\varphi_i^{-1}(s+\epsilon_1))\vert\, \Vert U_{\epsilon_1}(s)- U_{\epsilon_2}(s)\Vert \, ds. \end{equation} Furthermore, since $U_{\epsilon_2}(\cdot)\in C([0,+\infty);H)$ is locally uniformly continuous and \begin{equation*} \tau_i^{\epsilon_1}(t)- \tau_i^{\epsilon_2}(t)=\epsilon_1-\epsilon_2 \end{equation*} for all $t$ and $i$, we have for every fixed $T>0$ the estimate \begin{equation}\label{46qqq} I_2 \le C(T; \epsilon_1-\epsilon_2), \end{equation} with a constant $C(T;\epsilon_1-\epsilon_2)$ tending to zero as $\epsilon_1-\epsilon_2\to 0.$ Using \eqref{45qqq} and \eqref{46qqq} in \eqref{43qqq} and applying Gronwall's lemma for each fixed $T>0,$ we conclude that, as $\epsilon\rightarrow 0$, the functions $U_\epsilon(\cdot)$ converge locally uniformly to a function $U\in C([0,+\infty); H)$ which satisfies \eqref{41qqq}. This completes the proof.
\end{proof} Under an appropriate relation between the problem's parameters the system \eqref{16qqq} is exponentially stable: \begin{theorem}\label{t42qqq} Assume that there exist two constants $\omega'\in(0,\omega)$ and $\gamma\in\mathbb R$ such that \begin{equation}\label{47qqq} M \sum_{i=1}^l\frac {e^{\omega\overline \tau_i}}{1-c_i}\Vert B_i\Vert \int_0^t\abs{k_i(\varphi_i^{-1}(s))} \ ds\le \gamma+\omega^\prime t \quad\mbox{for all}\quad t\ge 0. \end{equation} Then there exists a constant $M'>0$ such that the solutions of \eqref{16qqq} satisfy the estimate \begin{equation}\label{48qqq} \norm{U(t)}\le M'e^{-(\omega-\omega') t}\qtq{for all}t\ge 0. \end{equation} \end{theorem} \begin{proof} It follows from Duhamel's formula \eqref{41qqq} that \begin{align*} e^{\omega t}\norm{U(t)} &\le M\norm{U_0}+M\sum_{i=1}^l \int_0^{t}e^{\omega s}\vert k_i(s)\vert \,\Vert B_i U(s-\tau_i(s))\Vert\ ds\\ &\le M\norm{U_0} +M\sum_{i=1}^l e^{\omega\overline\tau_i}\int_{0}^t e^{\omega (s-\tau_i(s))}\vert k_i(s)\vert \,\Vert B_i U(s-\tau_i(s))\Vert\ ds \end{align*} for all $t\ge 0$. Now we make the change of variable $\varphi_i(s):= s-\tau_i(s)=\sigma$ for every $i=1,\dots, l$. Note that the functions $\varphi_i(\cdot) $ are invertible by \eqref{14qqq}. We have the estimates \begin{multline*} \int_0^t e^{\omega (s-\tau_i(s))}\vert k_i(s)\vert \,\Vert B_i U(s-\tau_i(s))\Vert\ ds\\ \le \frac 1 {1-c_i}\int_{-\tau_i(0)}^{t-\tau_i(t)} e^{\omega\sigma}\vert k_i(\varphi_i^{-1}(\sigma))\vert\,\Vert B_i U(\sigma )\Vert\ d\sigma \end{multline*} and hence \begin{multline*} e^{\omega t}\norm{U(t)} \le M\norm{U_0}+M\sum_{i=1}^l \frac {e^{\omega\overline\tau_i}} {1-c_i} \int_{-\tau_i(0)}^{0}e^{\omega s}\vert k_i(\varphi_i^{-1}(s))\vert \,\Vert f_i(s)\Vert\ ds\\ +M\sum_{i=1}^l \frac {e^{\omega\overline\tau_i}}{1-c_i}\int_{0}^t e^{\omega s}\vert k_i(\varphi_i^{-1}(s))\vert \,\Vert B_i \Vert \norm{U(s)}\ ds \end{multline*} for all $t\ge 0$.
Setting $\tilde u(t):=e^{\omega t}\norm{U(t)}$ and \begin{equation}\label{49qqq} \tilde \alpha:=M\Big (\norm{U_0}+ \sum_{i=1}^l \frac {e^{\omega\overline\tau_i}} {1-c_i} \int_{-\tau_i(0)}^{0}e^{\omega s}\vert k_i(\varphi_i^{-1}(s))\vert \,\Vert f_i(s)\Vert\ ds \Big ), \end{equation} \begin{equation}\label{410qqq} \tilde \beta (t)=M \sum_{i=1}^l \frac {e^{\omega\overline\tau_i}}{1-c_i}\vert k_i(\varphi_i^{-1}(t))\vert \Vert B_i\Vert, \end{equation} we may rewrite the above estimate in the form \begin{equation*} \tilde u(t)\le \tilde \alpha+\int_0^{t}\tilde \beta(s)\tilde u(s)\ ds\qtq{for all}t\ge 0. \end{equation*} Applying Gronwall's inequality we conclude that \begin{equation*} \tilde u(t)\le \tilde \alpha e^{\int_0^t\tilde \beta(s)\ ds}\qtq{for all}t\ge 0, \end{equation*} i.e., \begin{equation}\label{411qqq} \norm{U(t)}\le \tilde\alpha e^{\int_0^t\tilde \beta(s)\ ds-\omega t}\qtq{for all}t\ge 0. \end{equation} Now we can conclude as in the proof of Theorem \ref{t31qqq} provided that \eqref{47qqq} is satisfied. \end{proof} \begin{remark} The hypothesis \eqref{47qqq} is satisfied in particular if the feedback coefficients $k_i, i=1,\dots, l,$ belong to $L^\infty (0,+\infty )$ and \begin{equation*} M \sum_{i=1}^l\frac {e^{\omega\overline \tau_i}}{1-c_i}\Vert B_i\Vert\,\Vert k_i\Vert_\infty <\omega. \end{equation*} It is also satisfied if $k_i(\varphi^{-1}_i(\cdot ))\in L^1(0,+\infty),$ $i=1,\ldots, l,$ or, more generally, if $k_i\circ\varphi^{-1}_i =k_i^1+k_i^2$ with $k_i^1\in L^1(0,+\infty)$ and $ k_i^2\in L^\infty (0,+\infty )$ with sufficiently small norms $\Vert k_i^2\Vert_\infty$.
\end{remark} \section{A nonlinear model}\label{s5qqq} We now consider the nonlinear model \eqref{51qqq}, where the operator $A$, as before, generates an exponentially stable semigroup $(S(t))_{t\ge 0}$ in a Hilbert space $H$, and for each $i=1,\dots, l,$ \begin{itemize} \item $B_i$ is a continuous linear operator of $H$ into itself, \item $k_i\in L^1_{loc}([0,+\infty);\mathbb R )$, \item $\tau_i$ is a variable time delay satisfying \eqref{13qqq} and \eqref{14qqq}, \item $f_i\in C([-\tau^*,0]; H)$ with $\tau^*$ defined in \eqref{15qqq}. \end{itemize} Furthermore, the nonlinear function $F$ satisfies a local Lipschitz assumption, as specified below. We will prove an exponential stability result for {\em small} initial data under a suitable well-posedness assumption. Then, we will give a class of examples for which this assumption is satisfied. Concerning $F$ we assume that $F(0)=0$, and that the following local Lipschitz condition is satisfied: for each constant $r>0$ there exists a constant $L(r)>0$ such that \begin{equation}\label{52qqq} \norm{F(U)-F(V)}_H\le L(r)\norm{U-V}_H \end{equation} whenever $\norm{U}_H\le r$ and $\norm{V}_H\le r$. Furthermore, we assume that the system is well posed in the following sense: \begin{wpa} There exist two constants $\omega'\in(0,\omega)$ and $\gamma\in\mathbb R$ such that \eqref{47qqq} is satisfied. Furthermore, there exist two positive constants $\rho$ and $C_{\rho}$ with $L(C_\rho ) <\frac {\omega -\omega'}M$ such that if $U_0\in H$ and $f_i\in C([-\tau^*,0];H)$ for $i=1, \ldots,l$ satisfy the inequality \begin{equation}\label{53qqq} \norm{U_0}^2_H+ \sum_{i=1}^l \int_{-\tau^*}^{0}\abs{k_i(\varphi_i^{-1}(s))} \cdot\norm{f_i(s)}_H^2\ ds < \rho^2, \end{equation} then \eqref{51qqq} has a unique solution $U\in C([0,+\infty );H)$ satisfying $\norm{U(t)}\le C_\rho$ for all $t>0$.
\end{wpa} \begin{theorem}\label{t51qqq} Under the above well-posedness assumption, there exists a constant $\tilde M>0$ such that all these solutions satisfy the estimate \begin{equation*} \norm{U(t)}\le \tilde M e^{-(\omega-\omega'-ML(C_\rho)) t}\qtq{for all}t\ge 0. \end{equation*} \end{theorem} \begin{proof} By Duhamel's formula \eqref{41qqq} we have \begin{multline*} e^{\omega t}\norm{U(t)} \le M\norm{U_0} +M\sum_{i=1}^l e^{\omega\overline\tau_i}\int_{0}^t e^{\omega (s-\tau_i(s))}\abs{k_i(s)} \cdot\norm{B_i U(s-\tau_i(s))}\ ds\\ +M L(C_\rho) \int_0^t e^{\omega s} \norm{U(s)} \, ds \end{multline*} for all $t\ge 0$. Now, as before, we make the change of variable $\varphi_i(s):= s-\tau_i(s)=\sigma$ for every $i=1,\dots, l$ to get the estimate \begin{multline*} e^{\omega t}\norm{U(t)} \le M\norm{U_0}+M\sum_{i=1}^l \frac {e^{\omega\overline\tau_i}} {1-c_i} \int_{-\tau_i(0)}^{0}e^{\omega s}\vert k_i(\varphi_i^{-1}(s))\vert\cdot\Vert f_i(s)\Vert\ ds\\ +M\sum_{i=1}^l \frac {e^{\omega\overline\tau_i}}{1-c_i}\int_{0}^t e^{\omega s}\vert k_i(\varphi_i^{-1}(s))\vert\cdot\Vert B_i \Vert\cdot\norm{U(s)}\ ds\\ +M L(C_\rho) \int_0^t e^{\omega s} \norm{U(s)} \, ds\hspace{3 cm} \end{multline*} for all $t\ge 0$. Setting $\tilde u(t)=e^{\omega t}\Vert U(t)\Vert$ we may rewrite it in the form \begin{equation*} \tilde u(t)\le \tilde \alpha+\int_0^{t}\tilde \beta(s)\tilde u(s)\ ds+M L(C_\rho) \,\int_0^t\tilde u(s)\, ds \qtq{for all}t\ge 0, \end{equation*} where $\tilde \alpha, \tilde \beta$ are defined as in \eqref{49qqq} and \eqref{410qqq}. Applying Gronwall's inequality we conclude that \begin{equation*} \tilde u(t)\le \tilde \alpha e^{\int_0^t \left [\tilde \beta(s)+ ML(C_\rho)\right ]\ ds}\qtq{for all}t\ge 0, \end{equation*} i.e., \begin{equation}\label{54} \norm{U(t)}\le \tilde\alpha e^{\int_0^t\tilde \beta(s)\ ds +M L(C_\rho)t -\omega t}\qtq{for all}t\ge 0. \end{equation} Now we can conclude as in the proof of Theorem \ref{t31qqq} provided that \eqref{47qqq} is satisfied.
\end{proof} Now we consider a class of examples for which the above well-posedness assumption is satisfied. Let $W$ be a real Hilbert space and $A_0: {\mathcal D}(A_0)\rightarrow W$ a positive self-adjoint operator with a compact inverse in $W$. We denote by $\tilde W:= {\mathcal D}(A_0^{1/2})$ the domain of the operator $A_0^{1/2}.$ Furthermore, let $W_0, \ldots, W_l$ be real Hilbert spaces and $C:W_0\rightarrow W,$ $D_i:W\rightarrow W_i,$ bounded linear operators such that \begin{equation}\label{55} \Vert D_i^*u\Vert_{W_i}^2\le d_i\Vert u\Vert_W^2 \end{equation} and \begin{equation}\label{56} a\Vert u\Vert^2_{W_i}\le \Vert C^*u\Vert^2_{W_0} \end{equation} for all $u\in W$ and $i=1,\dots, l$, with suitable positive constants $d_i$ and $a$. Furthermore, let $ \Psi : \tilde W\rightarrow \mathbb R$ be a functional having a G\^{a}teaux derivative $D \Psi (u)$ at every $u\in \tilde W.$ In the same spirit as in \cite{ACS}, we assume the following: \begin{enumerate}[\upshape (i)] \item For every $u\in\tilde W$ there is a constant $c(u)$ such that \begin{equation*} \vert D \Psi (u) (v)\vert\le c(u)\Vert v\Vert_W,\quad \forall\ v\in\tilde W, \end{equation*} where $D\Psi (u)$ is the G\^ateaux derivative of the functional $\Psi$ at $u.$ Thus, $D\Psi (u)$ can be extended to the whole space $W$ and we denote by $\nabla\Psi (u)$ the unique vector representing $D\Psi (u)$ in the Riesz isomorphism, namely \begin{equation*} \langle \nabla\Psi (u), v\rangle_W=D\Psi (u) (v),\quad \forall\ v\in W.
\end{equation*} \item For all $r>0$ there exists a constant $L(r)>0$ such that \begin{equation*} \Vert \nabla\Psi (u)-\nabla\Psi (v)\Vert_W\le L(r)\Vert A_0^{1/2}(u-v)\Vert_W, \end{equation*} for all $u,v\in \tilde W$ satisfying $\Vert A_0^{1/2}u\Vert_W\le r$ and $\Vert A_0^{1/2}v\Vert_W\le r.$ \item $\Psi (0)=0, \nabla\Psi (0)=0,$ and there exists an increasing continuous function $h$ such that \begin{equation*} \Vert \nabla \Psi (u)\Vert_W \le h(\Vert A_0^{1/2}u\Vert_W)\Vert A_0^{1/2}u\Vert_W \end{equation*} for all $u \in \tilde W$. \end{enumerate} In this framework, let us consider the following second-order model: \begin{equation}\label{57} \begin{cases} u_{tt}+ A_0 u + C C^* u_t=\nabla \Psi (u) +\sum_{i=1}^l k_iD_iD_i^*u_t (t-\tau_i(t)), \\ \hspace{7.6cm}\qtq{in}(0,\infty),\\ u(0)=u_0,\ u_t(0)=u_1, \\ D_i^*u_t(t)=g_i(t)\qtq{for}t\in [-\tau^*, 0],\ i=1,\dots, l, \end{cases} \end{equation} with $(u_0,u_1)\in \tilde W\times W,$ $g_i \in C([-\tau^*,0]; W).$ Here, the functions $k_i(\cdot),$ $\tau_i(\cdot),$ and the constant $\tau^*$ are defined as before. In order to deal with the nonlinear model we assume \begin{equation*} \tau_i(t)\ge\underline{\tau}_i,\quad i=1,\dots,l, \end{equation*} and let us set \begin{equation}\label{58} \tau_{min}=\min\ \{\, \underline{\tau}_i\,:\, i=1,\dots,l\,\}. \end{equation} If we denote $v:=u_t$ and $U:= (u,v)^T,$ \eqref{57} can be recast in the more abstract form \eqref{51qqq} where the operator $A$ is defined in $H:= \tilde W\times W$ by \begin{equation*} A:= \begin{pmatrix} 0&1\\ -A_0&-CC^* \end{pmatrix} , \end{equation*} while $F$ and $B_i$ for $i=1,\dots, l$ are defined by \begin{equation*} F(U):=(0,\nabla \Psi (u))^T \qtq{and} B_i U:=(0, D_iD_i^*v)^T. \end{equation*} Under some conditions on the damping operator $CC^*$ (see, e.g., \cite{BLR} or \cite[Chapter 5]{Komornikbook}) we know that $A$ generates an exponentially stable semigroup.
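Before continuing, it may help to see the block structure of $A$ in a finite-dimensional toy setting. The sketch below is an illustration only, not part of the argument: $A_0$ is replaced by a 1D finite-difference Laplacian (positive definite) and $CC^*$ by a constant viscous damping $cI$; it then checks numerically that the spectrum of the block operator lies in the open left half-plane, consistent with the exponential stability of the generated semigroup.

```python
import numpy as np

# Finite-dimensional toy analogue of A = [[0, I], [-A0, -CC*]].
# Illustrative assumptions: A0 = 1D finite-difference Laplacian, CC* = c*I, c > 0.
n = 20
h = 1.0 / (n + 1)
A0 = (np.diag(2.0 * np.ones(n))
      - np.diag(np.ones(n - 1), 1)
      - np.diag(np.ones(n - 1), -1)) / h**2
c = 0.5
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-A0, -c * np.eye(n)]])

# Exponential stability of e^{tA} corresponds here to all eigenvalues
# of A having negative real part.
max_real_part = float(np.linalg.eigvals(A).real.max())
print(max_real_part)
```

With this choice of damping the eigenvalues come in complex pairs with real part $-c/2$, so the check succeeds for any $c>0$.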
Moreover, the above assumptions on $\Psi$ imply that $F(0)=0$ and $F$ satisfies \eqref{52qqq}. We define the energy functional for the model \eqref{57} as \begin{multline*} E(t):= E(t, u(\cdot ))= \frac 12 \Vert u_t\Vert_W^2 +\frac 12 \Vert A_0^{1/2}u\Vert_W^2-\Psi (u)\\ +\frac 12\sum_{i=1}^l \frac 1 {1-c_i}\int_{t-\tau_i(t)}^t \vert k_i(\varphi_i^{-1}(s))\vert\cdot \Vert D_i^*u_t(s)\Vert^2_{W_i} ds. \end{multline*} We are going to show that the problem \eqref{57} satisfies the well-posedness assumption and the exponential decay estimate of Theorem \ref{t51qqq} for {\em small} initial data under a suitable compatibility condition between the functions $k_i$ and the constant $a$ in \eqref{56}. We need a preliminary lemma. \begin{lemma}\label{l52qqq} Assume that $k_i(t)=k^1_i(t) + k^2_i(t)$ with $k^1_i\in L^1([0, +\infty))$ and $k^2_i\in L^\infty (0,+\infty)$ for $i=1,\dots ,l$. Furthermore, assume that \begin{equation}\label{59} \Vert k^2_i\Vert_\infty \le \frac {2a} l\cdot\frac {1-c_i}{2-c_i},\quad i=1,\dots,l. \end{equation} Then, for any solution $u$ of problem \eqref{57}, defined on $[0, T)$ for some $T>0$, and satisfying $E(t)\ge \frac 1 4 \Vert u_t(t)\Vert^2$ for all $t\in [0, T),$ we have \begin{equation}\label{510} E(t)\le\bar{C} E(0)\qtq{for all}t\in [0,T) \end{equation} with \begin{equation}\label{511} \bar{C}= e^{2\sum_{i=1}^l d_i\int_0^{+\infty} ( \frac 1 {1-c_i} \vert k_i^1(\varphi_i^{-1}(s))\vert + \vert k_i^1(s)\vert )\ ds }. \end{equation} \end{lemma} \begin{proof} Differentiating, we have \begin{align*} E^\prime(t) &= -\Vert C^*u_t(t)\Vert_{W_0}^2 +\sum_{i=1}^l k_i(t)\langle D_i^*u_t(t), D_i^* u_t(t-\tau_i(t))\rangle \\ &\qquad +\sum_{i=1}^l \frac {\vert k_i(\varphi_i^{-1}(t))\vert }{2 (1-c_i)}\Vert D_i^*u_t(t)\Vert^2_{W_i}\\ &\qquad -\sum_{i=1}^l \frac {\vert k_i(t)\vert }{2 (1-c_i)}(1-\tau_i^\prime(t))\Vert D_i^*u_t(t-\tau_i(t))\Vert^2_{W_i}.
\end{align*} Recalling \eqref{14qqq} and using the Cauchy--Schwarz inequality, we infer that \begin{align*} E^\prime(t) &\le -\Vert C^*u_t(t)\Vert_{W_0}^2 +\frac 1 2 \sum_{i=1}^l \Big ( \frac 1 {1-c_i} \vert k_i(\varphi_i^{-1}(t))\vert +\vert k_i(t)\vert \Big )\Vert D_i^*u_t(t)\Vert_{W_i}^2\\ &\le -\Vert C^*u_t(t)\Vert_{W_0}^2 + \frac 1 2 \sum_{i=1}^l \Big ( \frac 1 {1-c_i} \vert k_i^2(\varphi_i^{-1}(t))\vert +\vert k_i^2(t)\vert \Big )\Vert D_i^*u_t(t)\Vert_{W_i}^2\\ &\qquad +\frac 1 2 \sum_{i=1}^l \Big ( \frac 1 {1-c_i} \vert k_i^1(\varphi_i^{-1}(t))\vert +\vert k_i^1(t)\vert \Big )\Vert D_i^*u_t(t)\Vert_{W_i}^2, \end{align*} and then, using \eqref{56} and \eqref{59}, that \begin{equation}\label{512} E^\prime (t) \le \frac 1 2 \sum_{i=1}^l \Big ( \frac 1 {1-c_i}\vert k_i^1(\varphi_i^{-1}(t))\vert +\vert k_i^1(t)\vert \Big )\, \Vert D_i^* u_t(t)\Vert_{W_i}^2\,. \end{equation} From \eqref{512}, recalling \eqref{55}, we obtain \begin{equation*} E(t)\le E(0)+\frac 14 \int_0^t K(s) \Vert u_t(s)\Vert^2\, ds \end{equation*} with the notation \begin{equation*} K(t):=2\sum_{i=1}^l d_i \Big (\frac 1 {1-c_i} \vert k_i^1(\varphi_i^{-1}(t))\vert +\vert k_i^1(t)\vert \Big )\,. \end{equation*} Now Gronwall's inequality yields \begin{equation*} E(t)\le E(0) e^{\int_0^t K(s)\ ds }, \end{equation*} proving \eqref{510} with $\bar{C}$ defined by \eqref{511}. \end{proof} \begin{proposition}\label{p53qqq} Assume that $k_i(t)=k^1_i(t) + k^2_i(t), i=1,\dots ,l,$ with $k^1_i\in L^1([0, +\infty))$ and $k^2_i\in L^\infty (0,+\infty).$ Moreover, assume \eqref{59}. Then, the model \eqref{57} satisfies the well-posedness assumption of Theorem \ref{t51qqq}. \end{proposition} \begin{proof} First we restrict ourselves to the time interval $[0, \tau_{min}]$ where $\tau_{min}$ is the constant defined in \eqref{58}.
In such an interval the model can be rewritten in the abstract form \begin{equation}\label{513} \begin{cases} U'(t)=AU(t)+\sum_{i=1}^lk_i(t) G_i(t)+F(U(t))\qtq{in}(0,\infty),\\ U(0)=U_0, \end{cases} \end{equation} where $G_i(t)=(0, g_i(t-\tau_i(t))), $ $i=1,\dots,l.$ Then one can apply the classical theory of nonlinear semigroups to deduce the existence of a unique mild solution defined on a maximal interval $[0, \delta )$ with $\delta\le\tau_{min}.$ We will show that for suitably small initial data the solution is global and satisfies a suitable bound. Our argument is inspired by \cite{ACS}, but here additional difficulties appear due to the fact that, since the coefficients $k_i(\cdot)$ vary in time, the energy is not decreasing. First we observe that if $h\left ( \Vert A_0^{1/2}u_0\Vert_W \right ) <\frac 12,$ then $E(0)>0.$ Indeed, we deduce from the assumption (iii) on $\Psi$ that \begin{multline}\label{514} \vert \Psi(u) \vert\le \int_0^1\vert \langle \nabla \Psi (su), u\rangle\vert\, ds\\ \le \Vert A_0^{1/2} u\Vert_W^2\, \int_0^1 h (s\Vert A_0^{1/2} u\Vert_W ) s\, ds \le \frac 12 h (\Vert A_0^{1/2} u\Vert_W ) \Vert A_0^{1/2} u\Vert^2_W. \end{multline} Then we have the estimate \begin{align*} E(0)&= \frac 12 \Vert u_1\Vert_W^2 +\frac 12 \Vert A_0^{1/2} u_0\Vert_W^2 -\Psi (u_0)\\ &\qquad +\frac 1 2\sum_{i=1}^l \frac 1 {1-c_i} \int_{-\tau_i(0)}^0 \vert k_i(\varphi_i^{-1}(s))\vert\, \Vert D_i^*u_t(s)\Vert^2_{W_i}\ ds\\ &\ge\frac 12 \Vert u_1\Vert_W^2 +\frac 14 \Vert A_0^{1/2} u_0\Vert_W^2\\ &\qquad +\frac 1 2\sum_{i=1}^l \frac 1 {1-c_i} \int_{-\tau_i(0)}^0 \vert k_i(\varphi_i^{-1}(s))\vert\, \Vert D_i^*u_t(s)\Vert^2_{W_i}\ ds>0.
\end{align*} Now we prove that if \begin{equation}\label{515} h\left ( \Vert A_0^{1/2}u_0\Vert_W \right ) <\frac 12 \quad \mbox{and}\quad h\left ( 2 \bar{C}^{1/2}E^{1/2}(0) \right ) <\frac 12, \end{equation} where $\bar{C}$ is the constant defined in \eqref{511}, then \begin{multline}\label{516} E(t)> \frac 14 \Vert u_t(t)\Vert_W^2 +\frac 14 \Vert A_0^{1/2} u(t)\Vert^2_W\\ +\frac 1 4\sum_{i=1}^l \frac 1 {1-c_i} \int_{t-\tau_i(t)}^t \vert k_i(\varphi_i^{-1}(s))\vert\, \Vert D_i^*u_t(s)\Vert^2_{W_i}\ ds \end{multline} for all $t\in [0,\delta )$. Indeed, let $r$ be the supremum of all $s\in [0,\delta )$ such that \eqref{516} holds true for every $t\in [0,s].$ Arguing by contradiction, suppose that $r<\delta.$ Then by continuity we have \begin{multline}\label{517} E(r)= \frac 14 \Vert u_t(r)\Vert_W^2 +\frac 14 \Vert A_0^{1/2} u(r)\Vert^2_W\\ +\frac 1 4\sum_{i=1}^l \frac 1 {1-c_i} \int_{r-\tau_i(r)}^r \vert k_i(\varphi_i^{-1}(s))\vert\, \Vert D_i^*u_t(s)\Vert^2_{W_i}\ ds. \end{multline} Therefore we infer from \eqref{517} and Lemma \ref{l52qqq} that \begin{equation*} h( \Vert A_0^{1/2}u(r)\Vert_W)\le h(2 E^{1/2}(r))\le h( 2 \bar{C}^{1/2} E^{1/2}(0))<\frac 12\,. \end{equation*} Using \eqref{514} in the definition of $E(r),$ this gives \begin{align*} E(r)&= \frac 12 \Vert u_t(r)\Vert_W^2 +\frac 12 \Vert A_0^{1/2} u(r)\Vert^2_W-\Psi (u(r))\\ &\qquad +\frac 1 2\sum_{i=1}^l \frac 1 {1-c_i} \int_{r-\tau_i(r)}^r \vert k_i(\varphi_i^{-1}(s))\vert\cdot\Vert D_i^*u_t(s)\Vert^2_{W_i}ds \\ &>\frac 14 \Vert u_t(r)\Vert_W^2 +\frac 14 \Vert A_0^{1/2} u(r)\Vert^2_W\\ &\qquad + \frac 1 4\sum_{i=1}^l \frac 1 {1-c_i} \int_{r-\tau_i(r)}^r \vert k_i(\varphi_i^{-1}(s))\vert\cdot\Vert D_i^*u_t(s)\Vert^2_{W_i}ds, \end{align*} contradicting the maximality of $r.$ This implies $r=\delta.$ Now, let us define \begin{equation}\label{518} \rho:= \frac 1 {2\bar{C}^{1/2}} h^{-1}(\frac 12).
\end{equation} We show that \eqref{515} is satisfied for all $u_0\in \tilde W,$ $u_1\in W,$ $g_i\in C([-\tau^*,0], W_i),$ $i=1,\dots,l,$ satisfying \begin{equation*} \Vert A_0^{1/2}u_0\Vert_W^2+\Vert u_1\Vert_W^2+ \frac 12 \sum_{i=1}^l\frac 1 {1-c_i}\int_{-\tau_i(0)}^0 \vert k_i(\varphi_i^{-1} (s))\vert\, \Vert g_i(s)\Vert^2_{W_i}\, ds <\rho^2. \end{equation*} Indeed, this assumption implies $\Vert A_0^{1/2}u_0\Vert_W <\rho$ and then, observing that $\bar{C}>1,$ we have $$ h \Big ( \Vert A_0^{1/2}u_0\Vert_W\Big ) < h(\rho )=h \Big ( \frac 1 {2\bar{C}^{1/2}} h^{-1}(\frac 12 )\Big )< \frac 12\,.$$ Moreover, from \eqref{514} we get the estimate \begin{multline*} E(0)\le \frac 34 \Vert A_0^{1/2} u_0\Vert_W^2+\frac 12\Vert u_1\Vert_W^2\\ + \frac 12 \sum_{i=1}^l\frac 1 {1-c_i}\int_{-\tau_i(0)}^0 \vert k_i(\varphi_i^{-1} (s))\vert\, \Vert g_i(s)\Vert^2_{W_i}\, ds<\rho^2, \end{multline*} and thus, from \eqref{518} we infer that \begin{equation*} h\left ( 2 \bar{C}^{1/2}E^{1/2}(0) \right ) < h (2\bar{C}^{1/2}\rho ) <h(h^{-1}(1/2))= \frac 1 2. \end{equation*} We conclude that \eqref{515} holds, and that \begin{multline*} 0< \frac 14 \Vert u_t(t)\Vert^2_W +\frac 14 \Vert A_0^{1/2} u(t)\Vert^2_W\\ +\frac 1 4\sum_{i=1}^l \frac 1 {1-c_i} \int_{t-\tau_i(t)}^t \vert k_i(\varphi_i^{-1}(s))\vert\, \Vert D_i^*u_t(s)\Vert^2_{W_i}ds\\ \le E(t)\le \bar{C} E(0)\le \bar{C}\rho^2, \ \forall \ t\in [0,\delta]. 
\end{multline*} One can extend the solution of problem (\ref{513}) by taking the solution at time $t=\delta$ as new initial datum. Arguing as above, we can extend the solution to the whole interval $[0,\tau_{min}]$, and the solution satisfies $$h\left (\Vert A_0^{1/2}u(\tau_{min})\Vert_W \right )\le h\left (2 E^{1/2}(\tau_{min})\right )\le h\left (2 \bar{C}^{1/2}E^{1/2}(0) \right )<\frac 12,$$ where we have applied estimate \eqref{510} on the whole interval $[0,\tau_{min}].$ Once we have the solution $U(\cdot)$ on the interval $[0, \tau_{min}],$ then on the second interval $[\tau_{min}, 2\tau_{min}]$ one can again rewrite our problem in the abstract form (\ref{513}) with $G_i(t)=(0, D_iD_i^*u_t(t-\tau_i(t)))$ (note that $t-\tau_i(t)\le\tau_{min}$ for $t\in [\tau_{min}, 2\tau_{min}]$). One can repeat the same argument on every interval of length $\tau_{min}$ to get a global solution satisfying \eqref{510}. \end{proof} \section{Examples}\label{s6qqq} In the following examples we consider a non-empty bounded domain $\Omega$ in $\mathbb R^n$ with a boundary $\Gamma $ of class $C^2$. \subsection{The wave equation with localized frictional damping} Let $O\subset \Omega$ be a nonempty open subset satisfying the \emph{geometrical control property} of \cite{BLR}. For instance, denoting by $m$ the standard multiplier $m(x)=x-x_0,$ $x_0\in\mathbb R^n,$ as in \cite{Lions-1988}, $O$ can be the intersection of $\Omega$ with an open neighborhood of the set \begin{equation*} \Gamma_0=\set{x\in\Gamma \:\ m(x)\cdot\nu (x)>0}.
\end{equation*} Denoting by $\chi_D$ the characteristic function of a set $D,$ let us consider the following system: \begin{equation}\label{61qqq} \begin{cases} u_{tt}(x,t)-\Delta u(x,t)+a\chi_O(x) u_t(x,t) \\ \hspace{2cm} {-k(t)\chi_{\tilde O}(x) u_t(x,t-\tau)=0}\qtq{in}\Omega\times (0, \infty),\\ {u(x,t)=0\qtq{on}\Gamma\times (0,\infty )},\\ {u(x,t-\tau)=u_0(x, t) \qtq{in}\Omega\times (0,\tau]},\\ {u_t(x,t-\tau)=u_1(x,t)} {\qtq{in}\Omega\times (0,\tau]}, \end{cases} \end{equation} where $a$ is a positive constant, $\tau >0$ is the time delay, and the damping coefficient $k$ belongs to $L^1_{loc}(0,\infty)$. The set $\tilde O\subset\Omega$ where the delay feedback is localized can be any measurable subset of $\Omega.$ Setting $U= (u, u_t)^T,$ this problem can be rewritten in the form \eqref{11} with $H= H^1_0(\Omega)\times L^2(\Omega),$ \begin{equation*} A= \begin{pmatrix} 0&1\\\Delta&-a\chi_O \end{pmatrix} \end{equation*} and $B: H\to H$ defined by \begin{equation*} B (u,v)^T= (0, \chi_{\tilde O}v)^T. \end{equation*} It is well-known that ${A}$ generates a strongly continuous semigroup which is exponentially stable (see e.g. \cite{zuazua, Komornikbook}). Since $\norm{B}\le 1$, we obtain an exponential stability result under the assumption \begin{equation}\label{62qqq} Me^{\omega\tau}\int_0^t\abs{k(s+\tau )}\ ds \le\gamma+\omega't\qtq{for all}t\ge 0 \end{equation} for some $\omega'<\omega$ and $\gamma\in\mathbb R$, where $M$ and $\omega$ denote the positive constants in the exponential estimate \eqref{12} for the semigroup generated by $A.$ This extends to more general delay feedbacks a previous result of the second author \cite{SCL12}.
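For a concrete kernel, the sufficient condition \eqref{62qqq} can be verified numerically on a time grid. The following sketch is an illustration only: the values of $M$, $\omega$, $\omega'$, $\tau$, $\gamma$ and the kernel $k=k^1+k^2$ (an integrable part plus a small bounded part) are assumptions chosen for the example, not taken from the text.

```python
import numpy as np

# Illustrative check of  M e^{omega*tau} * int_0^t |k(s+tau)| ds <= gamma + omega'*t.
# All numerical values are assumptions for this sketch.
M, omega, omega_p, tau, gamma = 2.0, 1.0, 0.5, 0.1, 0.3

def k(t):
    # k = k^1 + k^2 with k^1 integrable and k^2 bounded
    return 0.1 * np.exp(-t) + 0.05

t = np.linspace(0.0, 50.0, 5001)
integrand = np.abs(k(t + tau))
dt = t[1] - t[0]
# cumulative trapezoidal integral of |k(s+tau)| over [0, t]
cumint = np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) * dt / 2)))
lhs = M * np.exp(omega * tau) * cumint
rhs = gamma + omega_p * t

condition_holds = bool(np.all(lhs <= rhs))
print(condition_holds)
```

Here the integrable part contributes a bounded amount (absorbed by $\gamma$), while the bounded part contributes with asymptotic slope $Me^{\omega\tau}\Vert k^2\Vert_\infty<\omega'$, so the linear bound holds for all $t\ge 0$.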
\subsection{The wave equation with memory }\label{ss62qqq} Given an arbitrary open subset $O$ of $\Omega$, we consider the system \begin{equation}\label{63qqq} \begin{cases} u_{tt}(x,t) -\Delta u (x,t)+ \int_0^\infty \mu (s)\Delta u(x,t-s) ds\\ \hspace{2.5cm} =k(t)\chi_O (x) u_t(x,t-\tau)\qtq{in} \Omega\times (0,\infty),\\ u (x,t) =0\qtq{on}\Gamma \times (0,\infty),\\ u(x,t)=u_0(x, t)\qtq{in}\Omega\times (-\infty, 0] \end{cases} \end{equation} with a constant time delay $\tau >0$, and a locally absolutely continuous memory kernel $\mu :[0,\infty)\rightarrow [0,\infty)$, satisfying the following three conditions: \begin{enumerate}[\upshape (i)] \item $\mu (0)=\mu_0>0;$ \item $\int_0^{\infty} \mu (t) dt=\tilde \mu <1;$ \item $\mu^{\prime} (t)\le -\alpha \mu (t) \qtq{for some}\alpha >0.$ \end{enumerate} As in \cite{Dafermos}, we introduce the notation \begin{equation*} \eta^t(x,s):=u(x,t)-u(x,t-s). \end{equation*} Then we can restate \eqref{63qqq} in the following form: \begin{equation}\label{64qqq} \begin{cases} u_{tt}(x,t)= (1-\tilde \mu)\Delta u (x,t)+ \int_0^\infty \mu (s)\Delta \eta^t(x,s) ds\\ \hspace{2.5cm} +k(t)\chi_O (x) u_t(x,t-\tau)\qtq{in} \Omega\times (0,\infty),\\ \eta_t^t(x,s)=-\eta^t_s(x,s)+u_t(x,t)\qtq{in}\Omega\times (0,\infty)\times (0,\infty ),\\ u (x,t) =0\qtq{on}\Gamma \times (0,\infty),\\ \eta^t (x,s) =0\qtq{on}\Gamma \times (0,\infty) \qtq{for} t\ge 0,\\ u(x,0)=u_0(x)\qtq{and}\quad u_t(x,0)=u_1(x)\qtq{in}\Omega,\\ \eta^0(x,s)=\eta_0(x,s) \qtq{in}\ \Omega\times (0,\infty), \end{cases} \end{equation} where \begin{equation}\label{65qqq} \begin{cases} u_0(x)=u_0(x,0), \quad x\in\Omega,\\ u_1(x)=\frac {\partial u_0}{\partial t}(x,t)\vert_{t=0},\quad x\in\Omega,\\ \eta_0(x,s)=u_0(x,0)-u_0(x,-s),\quad x\in\Omega,\ s\in (0,\infty).
\end{cases} \end{equation} Let us introduce the Hilbert space $L^2_{\mu}((0, \infty);H^1_0(\Omega ))$ of $H^1_0$-valued functions on $(0,\infty),$ endowed with the inner product \begin{equation*} \langle \varphi, \psi\rangle_{L^2_{\mu}((0, \infty);H^1_0(\Omega ))}= \int_{\Omega}\left (\int_0^\infty \mu (s)\nabla \varphi (x,s)\nabla \psi (x,s) ds\right )dx, \end{equation*} and then the Hilbert space \begin{equation*} H:= H^1_0(\Omega)\times L^2(\Omega)\times L^2_{\mu}((0, \infty);H^1_0(\Omega )) \end{equation*} equipped with the inner product \begin{multline*}\label{innerd} \left\langle \begin{pmatrix} u\\ v\\ w \end{pmatrix} , \begin{pmatrix} \tilde u\\ \tilde v\\ \tilde w \end{pmatrix} \right\rangle_H := (1-\tilde\mu )\int_\Omega \nabla u\nabla\tilde u dx + \int_\Omega v\tilde v dx\\ +\int_{\Omega} \int_0^\infty \mu (s)\nabla w\nabla\tilde w ds dx. \end{multline*} Setting $U:= (u,u_t,\eta^t)^T$ we may rewrite the problem \eqref{64qqq}--\eqref{65qqq} in the abstract form \begin{equation*}\label{abstractd} \begin{cases} U_t(t)={ A} {U}(t)+k(t) B U(t-\tau),\\ U(0)=(u_0,u_1, \eta_0)^T, \end{cases} \end{equation*} where the operator $A$ is defined by \begin{equation*}\label{Operator} A \begin{pmatrix} u\\ v\\ w \end{pmatrix} := \begin{pmatrix} v\\ (1-\tilde\mu)\Delta u+\int_0^{\infty}\mu (s)\Delta w(s)ds\\ -w_s+v \end{pmatrix} \end{equation*} with domain \begin{align*}\label{dominioOpd} D(A):= &\left\{(u,v,\eta )\in H^1_0(\Omega)\times H^1_0(\Omega) \times L^2_{\mu}((0,\infty);H^1_0(\Omega))\, :\right.\\ &(1-\tilde\mu)u+\int_0^\infty \mu (s)\eta (s) ds \in H^2(\Omega)\cap H^1_0(\Omega),\\ &\left.\eta_s\in L^2_{\mu}((0,\infty);H^1_0(\Omega))\right\} \end{align*} in the Hilbert space $H$, and the bounded operator $B:H\to H$ is defined by the formula \begin{equation*} B(u,v, \eta^t)^T:= (0, \chi_O v, 0)^T. \end{equation*} It is well-known (see e.g. \cite{GRP}) that the operator $A$ generates an exponentially stable semigroup.
Since $\norm{B}\le 1$, Proposition \ref{p21qqq} and Theorem \ref{t31qqq} guarantee the well-posedness and exponential stability of \eqref{64qqq}--\eqref{65qqq} if the condition \eqref{62qqq} is satisfied for some $\omega'<\omega$ and $\gamma\in\mathbb R$, where $M$ and $\omega$ denote the positive constants in the exponential estimate \eqref{12} for the semigroup generated by $A.$ \subsection{The wave equation with frictional damping and source} Let $O\subset \Omega$ be a nonempty open subset satisfying the \emph{geometrical control property} of \cite{BLR} and let $\tilde O\subset O.$ As an explicit example of (\ref{513}), let us consider the following system: \begin{equation}\label{66qqq} \begin{cases} u_{tt}(x,t)-\Delta u(x,t)+a\chi_O(x) u_t(x,t)\\ \hspace{6cm} -k(t)\chi_{\tilde O}(x) u_t(x,t-\tau) \\ \qquad =\vert u(x,t)\vert^\mu u(x,t)\qtq{in}\Omega\times (0, \infty),\\ u(x,t)=0\qtq{on}\Gamma\times (0,\infty ),\\ {u(x,t-\tau)=u_0(x, t) \qtq{in}\Omega\times (0,\tau]},\\ {u_t(x,t-\tau)=u_1(x,t)} {\qtq{in}\Omega\times (0,\tau]}, \end{cases} \end{equation} where $a$ is a positive constant, $\tau >0$ is the time delay, and the damping coefficient $k$ belongs to $L^1_{loc}(0,\infty)$. Moreover, we assume $k= k^1+k^2$ with $k^1\in L^1([0,+\infty))$ and $k^2\in L^\infty (0,+\infty)$ satisfying $\Vert k^2\Vert_\infty < a.$ Setting $U= (u, u_t)^T,$ this problem can be rewritten in the form \eqref{51qqq} with $H= H^1_0(\Omega)\times L^2(\Omega),$ \begin{equation*} A= \begin{pmatrix} 0&1\\\Delta&-a\chi_O \end{pmatrix} \end{equation*} and $B: H\to H$ defined by \begin{equation*} B (u,v)^T= (0, \chi_{\tilde O}v)^T. \end{equation*} It is well-known that ${A}$ generates a strongly continuous semigroup which is exponentially stable (see e.g. \cite{zuazua, Komornikbook}). Next, consider the functional $$\Psi (u):= \frac 1 {\mu+2} \int_{\Omega} \vert u(x)\vert^{\mu+2} dx,\quad u\in H_0^1(\Omega),$$ which is well-defined, for $0<\mu\le \frac 4 {n-2},$ by Sobolev's embedding theorem.
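Behind this functional is the pointwise nonlinearity $f(u)=\vert u\vert^\mu u$, whose scalar local Lipschitz constant on $[-r,r]$ is $(\mu+1)r^\mu$ (since $f'(u)=(\mu+1)\vert u\vert^\mu$). The following numerical check is an illustration only, with arbitrary values of $\mu$ and $r$ chosen for the sketch; it is not part of the functional-analytic argument.

```python
import numpy as np

# Scalar illustration of the local Lipschitz bound for f(u) = |u|^mu * u:
# |f(u) - f(v)| <= (mu + 1) * r^mu * |u - v|  whenever |u|, |v| <= r.
mu, r = 1.5, 2.0          # illustrative assumptions
L = (mu + 1.0) * r**mu    # Lipschitz constant from the mean value theorem

def f(u):
    return np.abs(u)**mu * u

rng = np.random.default_rng(0)
u = rng.uniform(-r, r, 10000)
v = rng.uniform(-r, r, 10000)
ratios = np.abs(f(u) - f(v)) / np.maximum(np.abs(u - v), 1e-12)
worst = float(ratios.max())
print(worst, L)
```

The worst observed ratio stays below the predicted constant $L(r)=(\mu+1)r^\mu$, in line with the abstract local Lipschitz condition \eqref{52qqq}.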
Note that $\Psi$ is G\^{a}teaux differentiable at every $u\in H_0^1(\Omega)$ and its G\^{a}teaux derivative is given by $$D\Psi (u)(v)= \int_{\Omega} \vert u(x)\vert^\mu u(x) v(x)\, dx,\quad v \in H_0^1(\Omega).$$ As shown in \cite{ACS}, if $0<\mu<\frac 4 {n-2},$ then $\Psi$ satisfies the assumptions (i)--(iii) above, and hence problem \eqref{66qqq} falls within the abstract form (\ref{513}). Since the assumptions of Lemma \ref{l52qqq} are satisfied, Theorem \ref{t51qqq} holds for small initial data if \eqref{62qqq} is satisfied for some $\omega'<\omega$ and $\gamma\in\mathbb R$, where $M$ and $\omega$ denote the positive constants in the exponential estimate \eqref{12} for the semigroup generated by $A.$ \begin{remark} A large variety of other examples could be considered, e.g., the wave equation with standard dissipative boundary conditions and internal delays, plate equations with internal/boundary/viscoelastic dissipative feedbacks and internal delays, and elasticity systems with different kinds of feedbacks. \end{remark}
\section{Introduction} \label{sec:intro} End-to-end (E2E) automatic speech recognition (ASR) has been attracting attention as a method of directly integrating acoustic models (AMs) and language models (LMs) because of the simple training and efficient decoding procedures. In recent years, various models have been studied, including connectionist temporal classification (CTC) \cite{graves06, graves14, miao15, amodei16}, attention-based encoder--decoder models \cite{chorowski15, chan16, lu16, zeyer2018improved, chiu18}, their hybrid models \cite{kim17, watanabe17}, and the RNN-transducer \cite{graves12,graves13rnnt,rao17}. Transformer \cite{vaswani17} has been successfully introduced into E2E ASR by replacing RNNs \cite{dong18, sperber18, salazar19, dong19, zhao19}, and it outperforms bidirectional RNN models in most tasks \cite{karita19}. Transformer has multihead self-attention network (SAN) layers, which can leverage a combination of information from completely different positions of the input. However, similarly to bidirectional RNN models \cite{schuster97}, Transformer has a drawback in that the entire utterance is required to compute self-attention, making it difficult to utilize in online recognition systems. Also, the memory and computational requirements of Transformer grow quadratically with the input sequence length, which makes it difficult to apply to longer speech utterances. A simple solution to these problems is block processing as in \cite{sperber18, dong19, jaitly2015neural}. However, it loses global context information and its performance is degraded in general. We have proposed a block processing method for the encoder--decoder Transformer model by introducing a context-aware inheritance mechanism, where an additional context embedding vector handed over from the previously processed block helps to encode not only local acoustic information but also global linguistic, channel, and speaker attributes \cite{tsunoo19}. 
Although it outperforms naive blockwise encoders, the block processing method can only be applied to the encoder because it is difficult to apply to the decoder without knowing the optimal chunk step, which depends on the token unit granularity and the language. For the attention decoder, various online processes have been proposed. In \cite{chorowski15, chan16online, merboldt19}, the chunk window is shifted from an input position determined by the median or maximum of the attention distribution. Monotonic chunkwise attention (MoChA) uses a trainable monotonic energy function to shift the chunk window \cite{chiu2017monotonic}. MoChA has also been extended to make it stable while training \cite{miao19} and to be able to change the chunk size adaptively depending on the circumstances \cite{fan19}. \cite{moritz19} proposed a unique approach that uses a trigger mechanism to notify the timing of the attention computation. However, to the best of our knowledge, such monotonic chunkwise approaches have not yet been applied to Transformer. In this paper, we extend our previous context block approach towards an entire online E2E ASR system by introducing an online decoding process inspired by MoChA into the Transformer decoder. Our contributions are as follows. 1) Triggers for shifting chunks are estimated from the source--target attention (STA), which uses queries and keys, 2) all the past information is utilized according to the characteristics of the Transformer attentions, which are not always monotonic or locally peaky, and 3) a novel training algorithm for MoChA is proposed, which is extended to train the trigger function while dealing with multiple attention heads and the residual connections of the decoder layers. Evaluations on the Wall Street Journal (WSJ) and AISHELL-1 corpora show that our proposed online Transformer decoder outperforms conventional chunkwise approaches.
\section{Transformer ASR} \label{sec:transformer} The baseline Transformer ASR follows that in \cite{karita19}, which is based on the encoder--decoder architecture. An encoder transforms a $T$-length speech feature sequence ${\mathbf x} = (x_{1},\dots,x_{T})$ to an $L$-length intermediate representation ${\mathbf h} = (h_{1},\dots,h_{L})$, where $L \leq T$ due to downsampling. Given ${\mathbf h}$ and previously emitted character outputs ${\mathbf y}_{i-1} = (y_{1},\dots,y_{i-1})$, a decoder estimates the next character $y_{i}$. The encoder consists of two convolutional layers with stride $2$ for downsampling, a linear projection layer, positional encoding, followed by $N_{e}$ encoder layers and layer normalization. Each encoder layer has a multihead SAN followed by a position-wise feedforward network, both of which have residual connections. Layer normalization is also applied before each module. In the SAN, attention weights are formed from queries ($\mathbf{Q} \in {\mathbb R}^{t_q\times d}$) and keys ($\mathbf{K} \in {\mathbb R}^{t_k\times d}$), and applied to values ($\mathbf{V} \in {\mathbb R}^{t_v\times d}$) as \vspace{-1mm} \begin{align} \mathrm{Attention}({\mathbf Q},{\mathbf K},{\mathbf V}) = \mathrm{softmax}\left(\frac{{\mathbf Q}{\mathbf K}^T}{\sqrt{d}}\right){\mathbf V}, \label{eq:attention} \end{align} where typically $d = d_{\mathrm{model}}/M$ for the number of heads $M$. We utilize multihead attention, denoted as the $\mathrm{MHD}(\cdot)$ function, as follows: \vspace{-1mm} \begin{align} & \mathrm{MHD}({\mathbf Q},{\mathbf K},{\mathbf V}) = \mathrm{Concat}(\mathrm{head}_{1},\dots,\mathrm{head}_{M}){\mathbf W}_O^n, \label{eq:mhead} \\ & \mathrm{head}_{m} = \mathrm{Attention}({\mathbf Q}{\mathbf W}_{Q,m}^n,{\mathbf K}{\mathbf W}_{K,m}^n,{\mathbf V}{\mathbf W}_{V,m}^n).
\label{eq:head} \end{align} In \eqref{eq:mhead} and \eqref{eq:head}, the $n$th layer is computed with the projection matrices ${\mathbf W}_{Q,m}^n \in {\mathbb R}^{d_{\mathrm{model}} \times d}$, ${\mathbf W}_{K,m}^n \in {\mathbb R}^{d_{\mathrm{model}} \times d}$, ${\mathbf W}_{V,m}^n \in {\mathbb R}^{d_{\mathrm{model}} \times d}$, and ${\mathbf W}_{O}^n \in {\mathbb R}^{Md \times d_{\mathrm{model}}}$. For all the SANs in the encoder, ${\mathbf Q}$, ${\mathbf K}$, and ${\mathbf V}$ are the same matrices, which are the inputs of the SAN. The position-wise feedforward network is a stack of linear layers. The decoder predicts the probability of the following character from previous output characters ${\mathbf y}_{i-1}$ and the encoder output ${\mathbf h}$, i.e., $p(y_i|{\mathbf y}_{i-1},{\mathbf h})$. The character history sequence is converted to character embeddings. Then, $N_{d}$ decoder layers are applied, followed by the linear projection and Softmax function. The decoder layer consists of a SAN and a STA, followed by a position-wise feedforward network. The first SAN in each decoder layer applies attention weights to the input character sequence, where the input sequence of the SAN is set as ${\mathbf Q}$, ${\mathbf K}$, and ${\mathbf V}$. Then, the following STA attends to the entire encoder output sequence by setting ${\mathbf K}$ and ${\mathbf V}$ to be the encoder output ${\mathbf h}$. The SAN can leverage a combination of information from completely different positions of the input. This is due to the multiple heads and residual connections of the layers that complement each other, i.e., some attend {\it monotonically and locally} while others attend {\it globally}. Transformer requires the entire speech utterance for both the encoder and decoder; thus, they are processed only after the end of the utterance, which causes a huge delay. To realize an online ASR system, both the encoder and decoder are processed online. 
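As a minimal NumPy sketch of the scaled dot-product attention in \eqref{eq:attention} and the multihead combination in \eqref{eq:mhead}--\eqref{eq:head}, the following uses random matrices in place of the trained projection parameters ${\mathbf W}_{Q,m}$, ${\mathbf W}_{K,m}$, ${\mathbf W}_{V,m}$, ${\mathbf W}_O$ (all shapes and values here are illustrative assumptions):

```python
import numpy as np

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V, as in Eq. (1)
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def multihead(Q, K, V, WQ, WK, WV, WO):
    # Concat(head_1, ..., head_M) W_O, as in Eqs. (2)-(3)
    heads = [attention(Q @ wq, K @ wk, V @ wv)
             for wq, wk, wv in zip(WQ, WK, WV)]
    return np.concatenate(heads, axis=-1) @ WO

rng = np.random.default_rng(0)
d_model, M = 8, 2
d = d_model // M
tq, tk = 3, 5
Q = rng.standard_normal((tq, d_model))
K = rng.standard_normal((tk, d_model))
V = K  # in the encoder SAN, Q, K, V come from the same input
WQ = [rng.standard_normal((d_model, d)) for _ in range(M)]
WK = [rng.standard_normal((d_model, d)) for _ in range(M)]
WV = [rng.standard_normal((d_model, d)) for _ in range(M)]
WO = rng.standard_normal((M * d, d_model))

out = multihead(Q, K, V, WQ, WK, WV, WO)
print(out.shape)
```

The output has the same model dimension as the input, which is what allows the residual connections around each SAN.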
\begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{context} \vspace{-0.5cm} \caption{Context inheritance mechanism of the encoder.} \label{fig:context} \end{figure} \section{Contextual Block Processing of Encoder} \label{sec:encoder} A simple way to process the encoder online is blockwise computation, as in \cite{sperber18, dong19, jaitly2015neural}. However, the global channel, speaker, and linguistic context are also important for local phoneme classification. We have proposed a context inheritance mechanism for block processing by introducing an additional context embedding vector \cite{tsunoo19}. As shown in the tilted arrows in Fig.~\ref{fig:context}, the context embedding vector is computed in each layer of each block and handed over to the upper layer of the following block. Thus, the SAN in each layer is applied to the block input sequence using the context embedding vector. The context embedding vector is introduced into the original formulation in Sec.~\ref{sec:transformer}. Denoting the context embedding vector as $c_{b}^{n}$, the augmented variables satisfy $\Tilde{{\mathbf Q}}_b^{n} = [{\mathbf Z}_b^{n-1} \ c_{b}^{n-1}]$ and $\Tilde{{\mathbf K}}_b^{n}=\Tilde{{\mathbf V}}_b^n=[{\mathbf Z}_b^{n-1} \ c_{b-1}^{n-1}]$, where the context embedding vector of the previous block $(b-1)$ of the previous layer $(n-1)$ is used.
${\mathbf Z}_b^{n}$ is the output of the $n$th encoder layer of block $b$, which is computed simultaneously with the context embedding vector $c_b^{n}$ as \vspace{-1mm} \begin{align} [{\mathbf Z}_b^{n} \ c_b^{n}] &= \max(0, \Tilde{{\mathbf Z}}_{b,\text{int.}}^{n}{\mathbf W}_1^n + v_1^n){\mathbf W}_2^n + v_2^n + \Tilde{{\mathbf Z}}_{b,\text{int.}}^{n} \\ \Tilde{{\mathbf Z}}_{b,\text{int.}}^{n} &= \mathrm{MHD}(\Tilde{{\mathbf Q}}_b^{n},\Tilde{{\mathbf K}}_b^{n},\Tilde{{\mathbf V}}_b^{n}) + \Tilde{{\mathbf V}}_{b}^{n}, \end{align} where ${\mathbf W}_1^n$, ${\mathbf W}_2^n$, $v_1^n$, and $v_2^n$ are trainable matrices and biases. The output of the SAN not only encodes input acoustic features but also delivers the context information to the succeeding layer as shown by the tilted red arrows in Fig.~\ref{fig:context}. \section{Online Process for Decoder} \label{sec:decoder} \subsection{Online Transformer Decoder based on MoChA} \label{ssec:mocha} The decoder of Transformer ASR is incremental at test time, especially for the first SAN of each decoder layer. However, the second STA requires the entire sequence of the encoded features ${\mathbf h}$. Blockwise attention mechanisms cannot be simply applied with a fixed step size, because the step size depends on the output token granularity (grapheme, character, (sub-)word, and so forth) and language. In addition, not all the STAs are monotonic, because the other heads and layers complement each other. Typically, in the lower layer of the Transformer decoder, some heads attend wider areas, and some attend a certain area constantly, as shown in Fig.~\ref{fig:wide}. Therefore, chunk shifting and the chunk size should be adaptive. For RNN models, the median or maximum of the attention distribution is used as a cue for shifting a fixed-length chunk, where the parameters of the original batch models are reused \cite{chorowski15, chan16online, merboldt19}.
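The median-based cue used in those RNN approaches can be sketched in a few lines: given an attention distribution over the $L$ encoder frames, the fixed-length chunk window is re-centered at the attention median. The window size and the toy distribution below are illustrative assumptions.

```python
import numpy as np

# Median-of-attention cue for shifting a fixed-length chunk window,
# as in the RNN-based online attention approaches cited above.
def median_chunk(alpha, w):
    """alpha: attention weights over L frames (sums to 1); w: chunk width."""
    cdf = np.cumsum(alpha)
    med = int(np.searchsorted(cdf, 0.5))  # index of the attention median
    start = max(0, med - w // 2)
    return start, min(len(alpha), start + w)

alpha = np.array([0.05, 0.05, 0.1, 0.4, 0.3, 0.05, 0.05])
start, end = median_chunk(alpha, w=3)
print(start, end)  # window centered at the median frame
```

For the distribution above, the cumulative mass first reaches 0.5 at frame 3, so a width-3 window covers frames 2 to 4.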
MoChA further introduces the probability distribution of chunking to train the monotonic chunking mechanism. In this paper, we propose a novel online decoding method inspired by MoChA. \begin{figure}[t] \centering \includegraphics[width=1\columnwidth]{attentions_wider.eps} \vspace{-0.5cm} \caption{Examples of attentions in a Transformer decoder layer. (a) shows a head with wider attention, and (b) a head attending a certain area of ${\mathbf h}$.} \label{fig:wide} \end{figure} MoChA \cite{chiu2017monotonic} splits the input sequence into small chunks over which soft attention is computed. It learns a monotonic alignment between the encoder features ${\mathbf h}$ and the output sequence ${\mathbf y}$, with $w$-length chunking. ``Soft'' attention is efficiently utilized with backpropagation to train the chunking parameters. At test time, online ``hard'' chunking is used to realize online ASR, which achieves almost the same performance as the soft attention model. \begin{algorithm}[t] \caption{MoChA Inference for $n$-th Transformer Decoder Layer} \label{alg:inference} \begin{algorithmic}[1] \REQUIRE encoder features ${\mathbf h}$, length $L$, chunk size $w$ \STATE \textbf{Initialize:} $y_0=\langle sos\rangle$, $t_{m,0}=1$, $i=1$ \WHILE{$y_{i-1} \neq \langle eos \rangle$} \FOR{\textcolor{red}{$m=1$ \TO $M$}} \FOR{$j=t_{m,i-1}$ \TO $L$} \STATE $p_{m,i,j}=\sigma(\textcolor{red}{\mathrm{Energy}_m}(z_{\mathrm{SAN},i},h_{j}))$ \IF{$p_{m,i,j} \geq 0.5$} \STATE $t_{m,i}=j$ \STATE \textbf{break} \ENDIF \ENDFOR \IF{$p_{m,i,j}<0.5, \forall j \in \{t_{m,i-1},\dots,L\}$} \STATE \textcolor{red}{$t_{m,i}=t_{m,i-1}$} \ENDIF \STATE $r=t_{m,i}-w+1$ \ \ \textcolor{red}{// or $r=1$} \FOR{$k=r$ \TO $t_{m,i}$} \STATE{$u_{m,i,k}=\textcolor{red}{\mathrm{ChunkEnergy}_m}(z_{\mathrm{SAN},i},h_{k})$} \ENDFOR \STATE $\mathrm{head}_{m,i}=\sum_{k=r}^{t_{m,i}}\frac{\exp(u_{m,i,k})}{\sum_{l=r}^{t_{m,i}}\exp(u_{m,i,l})}v_{m,k}$ \ENDFOR \STATE $z_{\mathrm{STA},i}=\textcolor{red}{\mathrm{STA}({\mathbf
y}_{i-1},\mathrm{head}_{1,i},\dots,\mathrm{head}_{M,i})}$, $i=i+1$ \ENDWHILE \end{algorithmic} \end{algorithm} Since Transformer has unique properties, the conventional MoChA cannot simply be applied. One property is that the STA is computed using queries and keys, while MoChA is formulated on the basis of attention using a hidden vector of the RNN and $\tanh$. Another property is that not all the STAs are monotonic, because the other heads and layers complement each other, as the examples in Fig.~\ref{fig:wide} show. We modify the training algorithm of MoChA to deal with these characteristics. \subsection{Inference Algorithm} \label{ssec:inference} The inference process for decoder layer $n$ is shown in Algorithm~\ref{alg:inference}. The differences from the original MoChA are highlighted in red. In our case, MoChA decoding is introduced into the second STA of each decoder layer; the vector $z_{\mathrm{SAN},i}$ in Algorithm~\ref{alg:inference} is the output of the first SAN in the decoder layer. $\mathrm{STA}(\cdot)$ in line 20 concatenates the heads and computes an output of the STA network, $z_{\mathrm{STA},i}$, in each decoder layer, as in (\ref{eq:mhead}). MoChA can be applied independently to each head; thus, we added line 3. In line 18, the attention weight is applied to the selected values $v_{m,k} = h_{k}W_{V,m}$ to compute $\mathrm{head}_m$ in (\ref{eq:head}), and the selected chunk shifts monotonically. $p_{m,i,j}$ in line 5 is regarded as a trigger function at head $m$ to move the computing chunk, and is estimated from an $\mathrm{Energy}$ function. For the $\mathrm{Energy}$ and $\mathrm{ChunkEnergy}$ (in line 16) functions, the original MoChA utilizes $\tanh$ because it is used as a nonlinear function in RNNs. However, in Transformer, attentions are computed using queries and keys as in (\ref{eq:attention}).
Therefore, we modify them for the head $m$ as \vspace{-1mm} \begin{align} \mathrm{Energy}_m(z_{\mathrm{SAN},i},h_{j}) &= g_{m}\frac{q_{i,m}k_{j,m}^T}{\sqrt{d}||q_{i,m}||} + r_{m}, \end{align} \begin{align} \mathrm{ChunkEnergy}_m(z_{\mathrm{SAN},i},h_{j}) &= \frac{q_{i,m}k_{j,m}^T}{\sqrt{d}}, \end{align} where $g_{m}$ and $r_{m}$ are trainable scalar parameters, $q_{i,m}=z_{\mathrm{SAN},i}W_{Q,m}$, and $k_{j,m}=h_{j}W_{K,m}$ as in (\ref{eq:head}). Note that in the exception of lines 11--13, where the trigger never ignites at frame $i$, the original MoChA sets $\mathrm{head}_{m,i}$ to $\mathbf{0}$. However, we compute $\mathrm{head}_{m,i}$ using the previous $t_{m,i-1}$ (line 12) because this exception often occurs in Transformer. In addition, for online processing, all the past frames of the encoded features ${\mathbf h}$ are available without any additional latency, whereas the original MoChA computes attentions only within the fixed-length chunk. Taking into account the property that Transformer attentions tend to be distributed widely and are not always monotonic, we also consider utilizing the past frames. We optionally modify line 14 by setting $r=1$ and test both cases in Sec.~\ref{sec:experiments}. \subsection{Training Algorithm} \label{ssec:training} MoChA strongly relies on, and indeed enforces, the monotonicity of the attentions, while Transformer has a flexible attention mechanism that can integrate information from various positions without monotonicity. Furthermore, the Transformer decoder has both multihead and residual connections. Therefore, typically, not all the attentions become monotonic, as in Fig.~\ref{fig:wide}. \begin{algorithm}[t] \caption{MoChA Training for $n$-th Transformer Decoder Layer} \label{alg:train} \begin{algorithmic}[1] \REQUIRE encoder features ${\mathbf h}$, length $L$, chunk size $w$, Gaussian
noise $\epsilon$ \STATE \textbf{Initialize:} $y_0=\langle sos\rangle$, $\alpha_{0,0}=1$, $\alpha_{0,k}=0 (k\neq 0)$, $i=1$ \WHILE{$y_{i-1} \neq \langle eos \rangle$} \FOR{\textcolor{red}{$m=1$ \TO $M$}} \FOR{$j=1$ \TO $L$} \STATE $p_{m,i,j}=\sigma(\textcolor{red}{\mathrm{Energy}_m}(z_{\mathrm{SAN},i},h_{j}) + \epsilon)$ \STATE \textcolor{red}{$q_{m,i,j}=\prod_{k=j+1}^{L}(1-p_{m,i,k})$} \STATE \textcolor{red}{$\alpha_{m,i,j}=p_{m,i,j}\sum_{k=1}^{j}\left(\alpha_{m,i-1,k}\prod_{l=k}^{j-1}(1-p_{m,i,l})\right)$} \\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $\textcolor{red}{+ q_{m,i,j}\alpha_{m,i-1,j}}$ \ENDFOR \FOR{$j=1$ \TO $L$} \STATE $u_{m,i,j}=\textcolor{red}{\mathrm{ChunkEnergy}_m}(z_{\mathrm{SAN},i},h_{j})$ \STATE $\beta_{m,i,j}=\sum_{k=j}^{j+w-1}\frac{\alpha_{m,i,k}\exp(u_{m,i,j})}{\sum_{l=k-w+1}^{k}\exp(u_{m,i,l})}$ \ENDFOR \STATE $\mathrm{head}_{m,i}=\sum_{j=1}^{L}\beta_{m,i,j}v_{m,j}$ \ENDFOR \STATE $z_{\mathrm{STA},i}=\textcolor{red}{\mathrm{STA}(y_{i-1},\mathrm{head}_{1,i},\dots,\mathrm{head}_{M,i})}$, $i=i+1$ \ENDWHILE \end{algorithmic} \end{algorithm} The original MoChA training computes a variable $\alpha_{i,j}$, which is the cumulative probability of computing the local chunk attention at $t_{i}=j$, defined as \vspace{-1mm} \begin{align} \alpha_{i,j} = p_{i,j}\sum_{k=1}^j\left(\alpha_{i-1,k}\prod_{l=k}^{j-1}(1-p_{i,l})\right). \label{eq:orgalpha} \end{align} When $p_{i,j}\approx0$ for all $j$, which occurs frequently in Transformer because the other heads and layers complement each other at this step, $\alpha_{i,j}$ decays rapidly after step $i$. An example is shown in Fig.~\ref{fig:attention}. The top left shows $p_{m,i,j}$ in Algorithm~\ref{alg:inference}, which has monotonicity. The top right is the original $\alpha_{i,j}$ in (\ref{eq:orgalpha}), in which the value decreases immediately after around frame 50 of the target $y$ and does not recover.
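This mass-loss effect can be reproduced numerically. The following is a minimal Python sketch (0-based frame indices; the probabilities are illustrative and the function names are ours) comparing the recursion (\ref{eq:orgalpha}) with the $q$-augmented recursion of Algorithm~\ref{alg:train}:

```python
import numpy as np

def alpha_original(p, alpha_prev):
    # Original MoChA recursion (eq. orgalpha above), one decoding step:
    # alpha[j] = p[j] * sum_k alpha_prev[k] * prod_{l=k}^{j-1} (1 - p[l])
    L = len(p)
    alpha = np.zeros(L)
    for j in range(L):
        alpha[j] = p[j] * sum(alpha_prev[k] * np.prod(1.0 - p[k:j])
                              for k in range(j + 1))
    return alpha

def alpha_modified(p, alpha_prev):
    # Adds the q-term of Algorithm 2: q[j] = prod_{k>j} (1 - p[k]),
    # so the probability mass survives when the trigger never ignites.
    alpha = alpha_original(p, alpha_prev)
    for j in range(len(p)):
        alpha[j] += np.prod(1.0 - p[j + 1:]) * alpha_prev[j]
    return alpha

L = 8
alpha_prev = np.zeros(L)
alpha_prev[2] = 1.0                  # all probability mass at frame 2
p_silent = np.full(L, 1e-3)          # the trigger (almost) never ignites
print(alpha_original(p_silent, alpha_prev).sum())   # ~0.006: mass vanishes
print(alpha_modified(p_silent, alpha_prev).sum())   # ~1.0: mass preserved
```

When $p_{i,j}\approx 0$ everywhere, the original recursion loses almost all probability mass in a single step, whereas the $q$-term keeps it at the previous position.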
Therefore, we introduce $q_{m,i,j}$, the probability of the trigger not igniting, into the computation of $\alpha_{m,i,j}$. Thus, the new training algorithm for Transformer is shown in Algorithm~\ref{alg:train}, which encourages MoChA to exploit the flexibility of the SAN in Transformer (colored lines are new relative to the original MoChA). An example of our modified $\alpha_{m,i,j}$ is shown in the bottom left of Fig.~\ref{fig:attention}, which maintains the monotonicity. The bottom right shows the expected attention $\beta_{m,i,j}$. \begin{figure}[t] \hspace{-0.7cm} \includegraphics[width=1.1\columnwidth]{attentions_colored_hot.eps} \vspace{-1.0cm} \caption{Example of expected attention in the Transformer decoder. Top left: $p_{i,j}$ in Algorithm~\ref{alg:train}; top right: original $\alpha_{i,j}$ in (\ref{eq:orgalpha}); bottom left: our modified $\alpha_{i,j}$ in Algorithm~\ref{alg:train}; bottom right: expected attention $\beta_{i,j}$. Head index $m$ is omitted for simplicity.} \label{fig:attention} \end{figure} \section{Experiments} \label{sec:experiments} \subsection{Experimental Setup} \label{ssec:setups} We carried out experiments using the WSJ English and AISHELL-1 Mandarin data \cite{aishell17}. The input acoustic features were 80-dimensional filter banks and pitch, extracted with a hop size of 10 ms and a window size of 25 ms, and normalized with the global mean and variance. For the WSJ English setup, the number of output classes was 52, including symbols. We used 4,231 character classes for the AISHELL-1 Mandarin setup. For training, we utilized multitask learning with CTC loss, as in \cite{watanabe17,karita19}, with a weight of 0.1. A linear layer was added onto the encoder to project ${\mathbf h}$ to the character probabilities for the CTC. The Transformer models were trained over 100 epochs for WSJ and 50 epochs for AISHELL-1, with the Adam optimizer and Noam learning rate decay as in \cite{vaswani17}.
The learning rate was set to 5.0 and the minibatch size to 20. SpecAugment \cite{park19} was applied only to WSJ. The parameters of the last 10 epochs were averaged and used for inference. The encoder had $N_{e}=12$ layers with 2048 units and the decoder had $N_{d}=6$ layers with 2048 units, both with a dropout rate of 0.1. We set $d_{model}=256$ and $M=4$ for the multihead attentions. We trained three types of Transformer, namely, the baseline Transformer \cite{karita19}, the Transformer with the contextual block processing encoder (CBP Enc.\ + Batch Dec.) \cite{tsunoo19}, and the proposed entirely online model with the online decoder (CBP Enc.\ + Proposed Dec.). The training was carried out using ESPnet \cite{watanabeespnet} with the PyTorch backend. The median-based chunk shifting \cite{chorowski15} with a window of 16 frames was also applied to the batch decoder, with and without past frames, for a fair comparison (CBP Enc.\ + Median Dec.). For the CBP Enc.\ models, we set the parameters as $L_{\mathrm{block}}=16$ and $L_{\mathrm{hop}}=8$. For the initialization of the context embedding, we utilized the average of the input features to simplify the implementation. The decoder was trained with the proposed MoChA architecture using $w=8$. The STAs were computed within each chunk, or using all the past frames of encoded features, as described in Sec.~\ref{ssec:inference}. The decoding was performed alongside the CTC, whose probabilities were added to those of Transformer with weights of 0.3 for WSJ and 0.7 for AISHELL-1. We performed decoding using a beam search with a beam size of 10. An external word-level LM, which was a single-layer LSTM with 1000 units, was used for rescoring using shallow fusion \cite{kannan18} with a weight of $1.0$ for WSJ. A character-level LM with the same structure was fused with a weight of $0.5$ for AISHELL-1. For comparison, unidirectional and bidirectional LSTM models were also trained as in \cite{watanabe17}.
The models consisted of an encoder with a VGG layer, followed by LSTM layers, and a decoder. The numbers of encoder layers were six and three, with 320 and 1024 units, for WSJ and AISHELL-1, respectively. The decoders were an LSTM layer with 300 units for WSJ and two LSTM layers with 1024 units for AISHELL-1. \begin{table}[t] \caption{Word error rates (WERs) for the WSJ and character error rates (CERs) for the AISHELL-1 evaluation tasks.} \label{tab:result} \vspace{1mm} \centering \scalebox{0.9}{ \begin{tabular}{l|cc} \hline & WSJ (WER) & AISHELL-1 (CER) \\ \hline\hline \multicolumn{2}{l}{Batch processing} \\ \hline biLSTM \cite{watanabe17} & 6.7 & 9.2 \\ uniLSTM & 8.4 & 11.8 \\ Transformer \cite{karita19} & 4.9 & 6.7 \\ CBP Enc. + Batch Dec. \cite{tsunoo19} & 6.0 & 7.6\\ \hline \multicolumn{2}{l}{Online processing} \\ \hline CBP Enc. + Median Dec. \cite{chorowski15} & 9.9 & 25.0\\ \ \ ---{\it with past frames} & 7.9 & 24.2 \\ CBP Enc. + Proposed Dec. & 8.8 & 18.7\\ \ \ ---{\it with past frames} & {\bf 6.6} & {\bf 9.7} \\ \hline \end{tabular} } \end{table} \subsection{Results} \label{ssec:results} The experimental results are summarized in Table~\ref{tab:result}. The chunk hopping using the median of the attention worked well in the English task but poorly in the Chinese task. This was because Chinese requires a wider area of the encoded features to emit each character. On the other hand, our proposed decoder prevented this degradation of performance. In particular, using all the past frames of encoded features, our proposed decoder achieved the highest accuracy among the online processing methods. This indicates that the new decoding algorithm was able to exploit the wider attentions of Transformer. \section{Conclusion} \label{sec:conclusion} We extended our previous Transformer, which adopted a contextual block processing encoder, towards an entirely online E2E ASR system by introducing an online decoding process inspired by MoChA into the Transformer decoder.
The MoChA training and inference algorithms were extended to cope with the unique properties of Transformer, whose attentions are not always monotonic or peaky, and which has multiple heads and residual connections in the decoder layers. Evaluations on WSJ and AISHELL-1 showed that our proposed online Transformer decoder outperformed conventional chunkwise approaches. Thus, we realized entirely online processing of Transformer ASR with reasonable performance. \bibliographystyle{IEEEbib}
\section{Introduction} Let us consider a finite spherical polyhedron, $P$, and a palette of four colours, $\{W,R,G,B\}$. We will call a good colouring of $P$ any map which associates one of these colours to each face of $P$ in such a way that any two adjacent faces carry distinct colours. The four-colour theorem \cite{AC,AH,RSST} states that such a map exists for any $P$. The goal of the present work is to provide a geometric interpretation of this theorem. We obtain here two new results : the number of good colourings of a trivalent, spherical polyhedron is the 2-holonomy of a 2-connection on a fibered category over the dual triangulation, $T=P^\ast$ (Theorem 2) ; the four-colour theorem is equivalent to the existence of a non-vanishing, equivariant global section of this fibered category (Theorem 4). In order to study the colourability of $P$, let us start by making some classical modifications. We first remark that it is sufficient to prove the colourability of trivalent polyhedra. Indeed, by cutting a little disk around each vertex of degree $>3$ in $P$, one obtains a trivalent polyhedron and each good colouring of the latter provides a good colouring of $P$ by shrinking this disk to the initial vertex. Henceforth, we will suppose that $P$ is itself trivalent. Secondly, let us identify our four colours with the pairs of diametrically opposite vertices of a cube : $W=\{w,w'\}$, $R=\{r,r'\}$, $G=\{g,g'\}$ and $B=\{b,b'\}$.
Then each good colouring of the three faces which surround a vertex of $P$ defines an edge-loop in this cube such that the determinant of any triplet of successive vectors be $\pm 1$ : \subsubsection*{} \subsubsection*{} $$ \xymatrix{ & & \ar@{-}[dd] & & & & & & b' \ar@{-}[d] \ar@{~>}[rr]^{e_1} & & w' \ar@{.}[dl] \ar[dd] \\ & B \ar@{~>}[rr]^{\ 1} & & W \ar[ddl] & & & & r \ar@{.>}[ur]^{e_2} \ar@{~}[rr] & \ar@{-}[d] & g' \ar@{-}[dd] & \\ & & \ast \ar@{-}[dll]_{2} \ar@{-}[drr]^{3} & & & & & & g \ar@{~}[r] & \ar@{~>}[r] & r' \ar@{.>}[dl] \\ & & R \ar@{.>}[uul] & & & & & w \ar@{->}[uu]^{e_3} \ar@{.}[ur] \ar@{<~}[rr] & & b & \\ } $$ \subsubsection*{} \subsubsection*{} \noindent A map $(u:T_1\to\{e_1,e_2,e_3\})$ satisfying this property will be called a good numbering. Thus, the number of good numberings of the edges of $T$ is one quarter of the number of good colourings of the faces of $P$, as proved by P.G. Tait \cite{T}. We call this integer, $K_T$, the chromatic index of $T$ and the four-colour theorem states that $K_T\neq 0$ for any finite, spherical triangulation, $T$. Our article is organised as follows. In Section 2, we give a proof of Penrose's formula which expresses $K_T$ as a partition function. In Section 3, we define the graph $\mathscr{P}$ of edge-paths of $T$. In Section 4, we collect useful results about representations of ${\mathfrak{sl}}_2$. In Section 5, we construct the chromatic stack, $\varphi$, which is a fibered category over $T$, endowed with a functorial 1-connection and with a natural 2-connection, and we prove that $K_T$ is the 2-holonomy of this 2-connection on $T$. In Section 6, we define another fibered category, $\Phi$, over $\mathscr{P}$. By integrating the functorial connection of $\Phi$ along a 2-path which sweeps each triangle of $T$ once only, we obtain an equivariant global section of the pull-back of the chromatic stack to a triangulation $\widetilde{T}$ of the disk. 
This section, $\zeta$, is an ${\mathfrak{sl}}_2$-bundle with connection whose holonomy on $\partial\widetilde{T}$ is $K_T$. Our construction is an adaptation of Stokes' theorem to a case of combinatorial differential forms with values in the tensor category $\mathbf{A}=\mathbf{Rep}_f (\sl)$ and we can write it symbolically $K_T=\int_T\varphi=\int_{\partial\widetilde{T}}\zeta$. Since $K_T$ depends linearly on the value of $\zeta$ on each inner edge of $\widetilde{T}$, we obtain in this way our second result : the four-colour theorem is equivalent to the fact that $\zeta$ vanishes nowhere. \section{The chromatic index} The idea of translating the four-colour problem into linear algebra is due to Roger Penrose. Let us fix a finite, spherical triangulation $T=(T_0,T_1,T_2)$. $T_0$ is the set of its vertices, $T_1$ the set of its edges and $T_2$ the set of its triangles. Following \cite{RP}, we define the chromatic index of $T$ as \subsubsection*{} \begin{eqnarray} \boxed{K_T := \sum_u \prod_{[xyz]} i\,\det\,(u_{xy},u_{yz},u_{zx})} \end{eqnarray} \subsubsection*{} \subsubsection*{} \noindent In this sum, $u$ runs over the set of all maps from $T_1$ to $\{e_1,e_2,e_3\}$, the canonical basis of $\mathbb{R}^3$, and $[xyz]$ runs over the set of positively oriented triangles of $T$. The integrality of $K_T$ follows from the fact that, if $u$ is a good numbering of $T_1$, {\it i.e. }\, if no determinant vanishes in this product, the number of triangles where $\det=(+1)$ minus the number of triangles where $\det=(-1)$ is a multiple of $4$, as the following lemma proves. \subsubsection*{} {\bf Lemma} : \emph{If $u$ is a good numbering of $T_1$ and if $n_+$ (resp. $n_-$) denotes the number of triangles $[xyz]$ such that $\det (u_{xy},u_{yz},u_{zx}) = (+1)$ (resp. $(-1)$), then $n_+\equiv n_-$ mod 4.} \subsubsection*{} {\it Proof} : 1) Starting from $(T,u)$, we can build another triangulation, $T'$, equipped with a good edge numbering, $u'$, such that ${n'}_+=0$.
Indeed, if two adjacent, positively oriented triangles of $T$, say $[xyz]$ and $[zyw]$, have $\det=(+1)$ (positive triangles), then we can flip their common edge $[yz]$ to $[xw]$ and obtain a new pair of negative triangles, $[xyw]$ and $[wzx]$, where $\det = (-1)$. During this step, $(n_+-n_-)$ is reduced by $4$. Once all these pairs of neighbouring positive triangles have been eliminated this way, the remaining contributions to $n_+$ are triangles surrounded by three negative neighbours. By adjoining three edges and a trivalent vertex inside each isolated triangle of this kind, we exchange a positive triangle for three negative ones. Again, $(n_+-n_-)$ is reduced by $4$, and $(T',u')$ is reached at the end of this process. \subsubsection*{} 2) Consider all pairs of triangles, $[xyz]$ and $[zyw]$, with ${u'}_{yz}=e_1$ on their common edge, $[yz]$. Since $\det ({u'}_{xy},{u'}_{yz},{u'}_{zx}) = \det ({u'}_{zy},{u'}_{yw},{u'}_{wz}) = (-1)$, the opposite sides of the rectangle $[xywz]$ carry the same vector, say ${u'}_{xz}={u'}_{yw}=e_2$ and ${u'}_{xy}={u'}_{zw}=e_3$. Let us join the midpoints of two opposite edges with a simple curve. By repeating this process inside all such pairs of triangles, we obtain two simple closed curves, $c_2$ and $c_3$. If we orient these curves suitably, their intersection number is equal to $\vert{u'}^{(-1)}(e_1)\vert$, the number of edges of $T'$ marked with $e_1$. But, by Jordan's theorem, the intersection number of two simple closed curves in $S^2$ is even. Therefore, $\vert{u'}^{(-1)}(e_1)\vert$, the number of edges mapped to $e_1$ by $u'$, is even. Similarly, $\vert{u'}^{(-1)}(e_2)\vert$ and $\vert{u'}^{(-1)}(e_3)\vert$ are also even, as well as the total number of edges of $T'$ : \begin{eqnarray*} {t'}_1=\vert{u'}^{(-1)}(e_1)\vert+\vert{u'}^{(-1)}(e_2)\vert+\vert{u'}^{(-1)}(e_3)\vert \in 2\,\mathbb{N} \\ \end{eqnarray*} 3) Since $T'$ is a triangulation of a closed surface, we have $3\,{t'}_2=2\,{t'}_1$.
Since ${t'}_1$ is even, we obtain ${t'}_2 = {n'}_- \in 4\,\mathbb{N}$. Therefore, $n_+$ and $n_-$ are congruent modulo 4 : \begin{eqnarray} \boxed{ (n_+-n_-) \in 4\,\mathbb{Z} } \end{eqnarray} \hfill\qed \subsubsection*{} ${\bf Theorem \ 1}$ : \emph{$K_T$ is the number of good numberings of $T_1$.} \subsubsection*{} {\it Proof} : If $u$ is a bad numbering, then one of the determinants is zero and the corresponding product vanishes. On the other hand, if $u$ is a good numbering, then the corresponding product is equal to $i^{(n_+-n_-)} = 1$, by the preceding lemma. Therefore, the sum of all these products equals the number of good numberings of $T_1$. \hfill\qed \subsubsection*{} \section{The graph of edge-paths} Having fixed our triangulation, $T$, let us define the graph $\mathscr{P}$ whose vertices are the edge-paths of $T$ : \begin{eqnarray*} \mathscr{P}_0 = \bigcup_{\ell\geqslant 0} \big\lbrace \gamma=(x_0,\cdots,x_\ell) \ : \ \{x_i,x_{i+1}\} \in T_1 \ \forall\, i \big\rbrace \\ \end{eqnarray*} and whose edges, called the 2-edges of $T$, are the pairs of paths, with the same source and the same target, which bound a single triangle of $T$ : \begin{eqnarray*} \mathscr{P}_1 = \big\lbrace \{ (x_0,\cdots,x_\ell),(x_0,\cdots,x_i,y,x_{i+1},\cdots,x_\ell) \} \in \mathscr{P}_0\times\mathscr{P}_0 \ : \ \{x_i y x_{i+1}\} \in T_2 \big\rbrace \\ \end{eqnarray*} $$ \xymatrix{ & & & y \ar@{->}[dr] & & & \\ x_0 \ar@{.>}[rr] & & x_i \ar@{->}[rr] \ar@{->}[ur] & & x_{i+1} \ar@{.>}[rr] & & x_\ell \\ & & & & & & \\ } $$ The oriented 2-edges are the corresponding ordered pairs. A 2-path in $T$ is an edge-path in $\mathscr{P}$, {\it i.e. }\, a family $\Gamma=(\gamma_0,\cdots,\gamma_n)$ such that $\{\gamma_i,\gamma_{i+1}\}\in\mathscr{P}_1$ for $i=0,\cdots,n-1$.
They form the set $\mathscr{P}_2$ : \begin{eqnarray*} \mathscr{P}_2 = \bigcup_{n\geqslant 0} \big\lbrace \Gamma=(\gamma_0,\cdots,\gamma_n) \ : \ \{\gamma_i,\gamma_{i+1}\}\in\mathscr{P}_1 \quad \text{for} \quad i=0,\cdots,n-1 \big\rbrace \\ \end{eqnarray*} For each 2-path $\Gamma=(\gamma_0,\cdots,\gamma_n)$, there is a 2-path $\widetilde{\Gamma}$ going backward in time : \begin{eqnarray*} \widetilde{\Gamma}=(\gamma_n,\cdots,\gamma_0) \\ \end{eqnarray*} The 0-source (resp. 0-target) of $\Gamma$ is the common source (resp. target) of the $\gamma_i$'s. The 1-source of $\Gamma$ is $\gamma_0$ and its 1-target is $\gamma_n$. The oriented 2-cells of $T$ are its smallest 2-paths. They have the form $\big( (xz),(xyz) \big)$ or $\big( (xyz),(xz) \big)$, for some triangle $\{xyz\}$. \section{Representations of ${\mathfrak{sl}}_2$} As we have seen above, Penrose's formula involves the determinants of triples of basis vectors of $\mathbb{R}^3$. If we endow $\mathbb{R}^3$ with its canonical Euclidean structure and with the corresponding cross-product, we obtain a Lie algebra isomorphic to ${\mathfrak{so}}_3$. Since we will use complex coefficients and Schur's lemma, valid only for representations over an algebraically closed field, we will work with its complexification, $V={\mathfrak{sl}}_2$. We will write $I=\mathrm{Id}_{V}$, $V^\ell=V^{\otimes\ell}$ and $I^\ell=\mathrm{Id}_{{V}^{\ell}}$, where $V^\ell$ carries the representation \begin{eqnarray*} \rho_\ell &:& V \longrightarrow {\mathrm{End}} ({V}^{\ell}) \\ & & x \longmapsto \rho_\ell (x) = \sum_{k=1}^\ell I^{k-1} \otimes \mathrm{ad}_x \otimes I^{\ell-k} \\ \end{eqnarray*} Let $\mathbf{A}=\mathbf{Rep}_f (\sl)$, the category of finite dimensional representations of ${\mathfrak{sl}}_2$ over complex vector spaces. If $M$ and $M'$ are two $V$-modules, carrying, respectively, the representations $R$ and $R'$, we will often identify $M$ with $M\otimes -$, the endofunctor of $\mathbf{A}$, and write $M'M$ for $M'\otimes M$.
For each $j\in\frac 12 \mathbb{N}$, let $(R_j:V \to {\mathrm{End}} (V_j))$ be a representative of the isomorphism class of representations of spin $j$ and dimension $2j+1$. For example, we can choose $V_0=\mathbb{C}$, $V_{1/2}=\mathbb{C}^2$ and $V_1=V$. By Schur's lemma, the irreducible representations are orthonormal for the bilinear bifunctor $\hom_{\A}$ : \begin{eqnarray} \boxed{\hom_{\A} (R_j,R_k) \simeq \delta_{jk} R_0} \end{eqnarray} \subsubsection*{} \noindent The intertwining number between two representations $R$ and $R'$ is defined as the dimension of the space $\hom_{\A} (R,R')$ : \begin{eqnarray*} c(R,R') = \dim_{\mathbb{C}} \big( \hom_{\A} (R,R') \big) \\ \end{eqnarray*} By the Clebsch-Gordan rule, $V^2\simeq V_0\oplus V_1\oplus V_2$ and $c(V,V^2)=c(V^2,V)=1$. The projectors onto the isotypic components of $V^2$, of spin $0$, $1$ and $2$, respectively map $u\otimes v$ to \begin{eqnarray*} T (u\otimes v) &=& (u \cdot v) \, e_a \otimes e_a \\ A (u\otimes v) &=& \frac 12 (u_a v_b - u_b v_a) \, e_a \otimes e_b \\ S (u\otimes v) &=& \frac 12 (u_a v_b + u_b v_a) \, e_a \otimes e_b - (u_a v_a) \, e_a \otimes e_a \\ \end{eqnarray*} The line $L=\hom_{\A}(V,V^2)$ is spanned by the map $F$ defined by \begin{eqnarray*} F(e_a)=i \, e_{a-1}\wedge e_{a+1} \\ \end{eqnarray*} and the line $\widetilde{L}=\hom_{\A}(V^2,V)$ is spanned by the bracket, denoted by $\widetilde{F}$ : \begin{eqnarray*} \widetilde{F}(e_a\otimes e_b)=[e_a,e_b]=i\,\varepsilon_{abc} \, e_c \\ \end{eqnarray*} All these morphisms of representations satisfy the relations \begin{eqnarray*} \widetilde{F} F &=& 2\, I \\ F \widetilde{F} &=& A \\ T+A+S &=& I^2 \\ (\widetilde{F} \otimes I) (I \otimes F) & = & (I \otimes \widetilde{F}) (F \otimes I) \\ &=& T+2A-2S \\ F &=& (\widetilde{F} \otimes I) (I \otimes F) F \\ \widetilde{F} &=& \widetilde{F} (I \otimes \widetilde{F}) ( F \otimes I) \\ \end{eqnarray*} \subsubsection*{} \section{The chromatic stack, $\varphi$} The notion of combinatorial stack appeared in
\cite{Kap} and we used it in \cite{A} to give a construction of non-abelian $G$-gerbes over a simplicial complex. Dually, we can also use coefficients in a category of representations. Thus, we define the chromatic stack, $\varphi$, as a 2-functor which represents the simplicial homotopy groupoid $\Pi_1(\mathscr{P})$ into the 2-category of $\mathbf{A}$-modules. $\varphi$ is generated by pasting the following data : \begin{eqnarray*} \varphi_x &=& \mathbf{A} \\ \varphi_{xy} &=& (V \otimes - : \varphi_y \to \varphi_x ) \\ \varphi_{(x_0,\cdots,x_\ell)} &=& (V^\ell \otimes - : \varphi_{x_\ell} \to \varphi_{x_0}) \\ \varphi_{\sigma} &=& F \quad {\mathrm{if}} \quad \sigma=\big( (xyz),(xz) \big) \\ &=& \widetilde{F} \quad {\mathrm{if}} \quad \sigma=\big( (xz),(xyz) \big) \\ \varphi_{\gamma\g'} &=& (I^k\otimes\varphi_{\sigma}\otimes I^{\ell-k-1} : \varphi_{\gamma'} \to \varphi_{\gamma}) \\ \varphi_{(\gamma_0,\cdots,\gamma_n)} &=& (\varphi_{\gamma_0\gamma_1}\circ\cdots\circ\varphi_{\gamma_{n-1}\gamma_n}: \varphi_{\gamma_n}\to\varphi_{\gamma_0}) \\ \end{eqnarray*} The 1-connection of $\varphi$ is the family of functors $(\varphi_\gamma)_{\gamma\in \mathscr{P}_0}$, and the 2-connection of $\varphi$ is the family of natural transformations $(\varphi_\Gamma)_{\Gamma\in \mathscr{P}_2}$. In order to compute the chromatic index, we choose a 2-loop, $\Gamma=(\gamma_0,\cdots,\gamma_n)$, based at $(a,b)=\gamma_0=\gamma_n$, and sweeping each triangle of $T$ once only. To each path $\gamma_p=(a,x_{p,1},\cdots,x_{p,\ell_p-1},b)$, of length $\vert\gamma_p\vert=\ell_p$, $\varphi$ associates a copy of $V^{\ell_p}$. For each $p\in\{2,\cdots,n\}$, the path $\gamma_p$ differs from $\gamma_{p-1}$ either by the insertion of a vertex $y\in T_0$ between $x_{p-1,k_p}$ and $x_{p-1,k_p+1}$ or by the deletion of $x_{p-1,k_p}$, where $x_{p-1,k_p-1}$ and $x_{p-1,k_p+1}$ are supposed to be adjacent.
Each such move is represented by a linear map of the form \begin{eqnarray*} \varphi_{\gamma_{p-1} \gamma_{p}} &=& F_{k_p \ell_p} \ =\ (I_{V^{k_p-1}} \otimes F \otimes I_{V^{\ell_p-k_p}} \ : \ V^{\ell_p}\longrightarrow V^{\ell_p+1}) \qquad {\mathrm{if}} \quad \ell_{p-1}=\ell_p+1 \\ &=& \widetilde{F}_{k_p \ell_p} \ =\ (I_{V^{k_p-1}} \otimes \widetilde{F} \otimes I_{V^{\ell_p-k_p-1}} \ : \ V^{\ell_p}\longrightarrow V^{\ell_p-1}) \qquad {\mathrm{if}} \quad \ell_{p-1}=\ell_p-1 \\ \end{eqnarray*} Since Penrose's formula looks like the partition function of a statistical model, it is natural to express $K_T$ as the trace of a product of transfer matrices which represent linear maps between tensor powers of $V$. This approach will give us an efficient way to compute it, because the bad numberings are eliminated progressively during the sweeping process. Geometrically, the construction of the chromatic stack allows us to reinterpret $K_T$ as a $2$-holonomy, which is the categorical analogue of a holonomy in a fiber bundle. \subsubsection*{} {\bf Definition} : \emph{The 2-holonomy of $\varphi$ along a 2-loop $\Gamma=(\gamma_0,\gamma_1,\cdots,\gamma_{n-1},\gamma_0)$ based at $\gamma_0$, is the natural transformation} \begin{eqnarray*} \varphi_\Gamma = \varphi_{\gamma_0\gamma_1}\circ\cdots\circ\varphi_{\gamma_{n-1}\gamma_0} &:& \varphi_{\gamma_0} \longrightarrow \varphi_{\gamma_0} \\ \end{eqnarray*} When $\gamma_0=(a)$, $\varphi_\Gamma$ is an endomorphism of $\varphi_{\gamma_0}=\mathrm{Id}_{\mathbf{A}}$ so that $\varphi_\Gamma$ defines canonically a complex number. Moreover, after the following theorem, which illustrates the pasting lemma \cite{Po} in the 2-category of $\mathbf{A}$-modules, the trace of $\varphi_\Gamma\in{\mathrm{End}}(\varphi_{\gamma_0})$ depends only on $T$ and not on the 2-path $\Gamma$. 
\subsubsection*{} ${\bf Theorem \ 2}$ : \emph{If $\Gamma$ is a 2-loop which sweeps each triangle of $T$ once only, then the trace of the 2-holonomy of $\varphi$ along $\Gamma$, evaluated in the representation associated to the base path of $\Gamma$, is the chromatic index of $T$ :} \begin{eqnarray} \boxed{\mathrm{tr}_{\varphi_{\gamma_0}}(\varphi_\Gamma) = K_T} \end{eqnarray} \subsubsection*{} {\it Proof} : Let $\Gamma=(\gamma_0,\gamma_1,\cdots,\gamma_{n-1},\gamma_0)$ be such a 2-loop. Let $p\in\{0,\cdots,n-1\}$ and suppose that $\gamma_{p+1}$ is obtained from $\gamma_p$ by inserting $y$ between $x_j$ and $x_{j+1}$, with $x_j \neq y \neq x_{j+1} \neq x_j$ : \begin{eqnarray*} \gamma_p &=& (x_0, \cdots, x_\ell ) \\ \gamma_{p+1} &=& (x_0,\cdots, x_j, y, x_{j+1}, \cdots, x_\ell ) \\ \end{eqnarray*} Then the 2-arrow $\varphi_{\gamma_p \gamma_{p+1}}$ is the intertwiner \begin{eqnarray*} \varphi_{\gamma_p \gamma_{p+1}} = I_{\varphi_{x_0 x_1}} \otimes\cdots\otimes I_{\varphi_{ x_{j-1} x_j }} \otimes \widetilde{F} \otimes I_{\varphi_{ x_{j+1} x_{j+2} }} \otimes\cdots\otimes I_{\varphi_{ x_{\ell-1} x_\ell }} = \widetilde{F}_{\ell k} \\ \end{eqnarray*} which is represented by the matrix $M_p$ whose entries are given by \begin{eqnarray*} M_{p,ab} &=& \delta_{a_0 b_0} \cdots \delta_{a_{j-1} b_{j-1}} \, \big( i \, \varepsilon_{a_j b_j b_{j+1}} \big) \, \delta_{a_{j+1} b_{j+2}} \cdots \delta_{a_{\ell-1} b_\ell} \\ \end{eqnarray*} If $\gamma_{q+1}$ is obtained from $\gamma_q$ by deleting a vertex between $y_k$ and $y_{k+1}$, then $\varphi_{\gamma_q \gamma_{q+1}}$ is the intertwiner going backwards \begin{eqnarray*} \varphi_{\gamma_q \gamma_{q+1}} = I_{\varphi_{y_0 y_1}} \otimes\cdots\otimes I_{\varphi_{ y_{k-1} y_k }} \otimes F \otimes I_{\varphi_{ y_{k+1} y_{k+2} }} \otimes\cdots\otimes I_{\varphi_{ y_{\ell-1} y_\ell }} = F_{\ell+1,k} \\ \end{eqnarray*} and is represented by the matrix whose entries are \begin{eqnarray*} M_{q,ab} &=& \delta_{a_0 b_0} \cdots \delta_{a_{k-1} b_{k-1}} \,
\big( (-i) \, \varepsilon_{a_k b_{k+1} b_k} \big) \, \delta_{a_{k+2} b_{k+1}} \cdots \delta_{a_\ell b_{\ell-1}} \\ \end{eqnarray*} Now, let $a^p = \big(a^p_1, \cdots , a^p_{\ell_p} \big)$ be a generic multi-index for the basis vectors of the representation $\varphi_{\gamma_p}$, with $a^p_j \in \{1,2,3\}$ for $j=1,\cdots,\ell_p$. The number $\mathrm{tr}_{\varphi_{\gamma_0}}(\varphi_\Gamma)$ is the trace of the product of these matrices : \begin{eqnarray*} \mathrm{tr}_{\varphi_{\gamma_0}}(\varphi_\Gamma) &=& \sum_a \prod_{p=0}^{n-1} M_{p,{a^p} a^{p+1}} \\ \end{eqnarray*} In this sum, $a$ runs over the set of families $(a^0, \cdots, a^n)$ of multi-indices $a^p = \big( a^p_1, \cdots , a^p_{\ell_p} \big)$ with $a^p_i \in \{ 1,2,3 \}$. To each edge of $T$ are associated as many indices as there are paths $\gamma_p$ which contain it. Let $N_{xy}$ be the number of indices associated to $(xy)$. Among them, $(N_{xy}-2)$ indices are constrained by the $\delta$'s to be equal. Similarly, the two $\varepsilon$'s associated to the two triangles which contain $(xy)$ force the two remaining indices to take the same value. Since the $\delta$'s are sandwiched between these two $\varepsilon$'s, these two indices are in fact equal and there is one and only one free index $a_{xy}$ associated to each edge $(xy)$. The various factors of the product are equal to one except for the $\varepsilon$'s, which can be indexed by the positively oriented triangles of $T$. Therefore, the preceding formula becomes \begin{eqnarray*} \mathrm{tr}_{\varphi_{\gamma_0}}(\varphi_\Gamma) &=& \sum_{a \,:\, T_1 \to \{1,2,3\}} \ \prod_{[xyz]\in T_2} i\, \varepsilon_{a_{xy} a_{yz} a_{zx}} \\ &=& \sum_u \prod_{[xyz]} i\,\det (u_{xy},u_{yz},u_{zx}) \\ \end{eqnarray*} where $u$ describes the set of all maps from $T_1$ to $\{e_1,e_2,e_3\}$ and the triangles $[xyz]$ all have the same orientation. 
\subsubsection*{} \hfill \qed \subsubsection*{} Initially, $K_T$ is defined as a sum of $3^{t_1}$ terms, most of which vanish. By working in the tensor algebra, $T(V)$, the bad numberings are eliminated during the sweeping process and the computation is much quicker if we use formula (4). Moreover, this method provides explicitly all good numberings. \subsubsection*{} {\bf Example} : Let us apply the relation (4) to the computation of the chromatic index of the octahedron. $$ \begin{picture}(150,200)(10,-50) \xy /l3pc/:,{\xypolygon3"A"{~:{(.75,0):}}}, {\xypolygon3"B"{~:{(-3,0):}}}, {"A1"\PATH~={**@{-}}'"B3"'"A2"'"B1"'"A3"'"B2"'"A1"} \endxy \put(-197,-53){$a$} \put(5,-53){$b$} \put(-97,113){$c$} \put(-128,15){$d$} \put(-96,-37){$e$} \put(-66,15){$f$} \end{picture} $$ \subsubsection*{} \subsubsection*{} \noindent We sweep this triangulation with the $2$-path $\Gamma=\big( (ab),(aeb),(adeb),(adefb),(adfb),(adcfb),(acfb),(acb),(ab) \big)$. For simplicity, we will write ${\bf a}_1 \cdots {\bf a}_\ell$ for $e_{a_1}\otimes\cdots\otimes e_{a_\ell}$ with ${\bf a}_i\in\{{\bf 1,2,3}\}$. The successive images of ${\bf 1}$ via the maps $\varphi_{\gamma\gamma'}$ are : \begin{eqnarray*} {\bf 1} & \mapsto & i ({\bf 23 - 32}) \\ & \mapsto & i^2 ({\bf 313 - 133 -122 + 212}) \\ & \mapsto & i^3 ({\bf 3112 - 3121 - 1312 + 1321 - 1231 + 1213 + 2131 - 2113}) \\ & \mapsto & i^4 ({\bf - 331 - 122 - 111 - 111 - 133 - 221}) \\ & \mapsto & i^5 ({\bf - 3121 + 3211 - 1312 + 1132 - 1231 + 1321 - 1231 + 1321 - 1123 + 1213 - 2311 + 2131}) \\ & \mapsto & i^6 ({\bf - 221 - 111 + 212 - 331 - 221 - 331 - 221 + 313 - 111 - 331}) \\ & \mapsto & i^7 ({\bf 23 + 23 - 32 + 23 - 32 + 23 - 32 - 32}) \\ & \mapsto & i^8 ({\bf 1 + 1 + 1 + 1}) \ = \ 4 \cdot {\bf 1} \\ \end{eqnarray*} Consequently, $K_{octa.}=3! \cdot 4=24$ and there exist $4\cdot 24=96$ good colourings of the dual cube. We have performed $64$ operations instead of $3^{12}=531441$. 
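As a concrete cross-check (not part of the original construction), the initial definition of $K_T$ as a sum of $3^{t_1}$ terms can be evaluated by brute force. The Python sketch below uses the octahedron's edge and triangle lists as read off the figure and the sweeping 2-path $\Gamma$, and simply counts the numberings in which every triangle receives three pairwise distinct values; only those assignments contribute a nonvanishing $\varepsilon$-product, and the plain count reproduces the example's value $K_{octa.}=24$.

```python
from itertools import product

# Octahedron combinatorics, read off the figure and the 2-path Gamma:
EDGES = ["ab", "ae", "eb", "ad", "de", "ef", "fb",
         "df", "dc", "cf", "ac", "cb"]                  # T_1 (12 edges)
FACES = [("ae", "eb", "ab"), ("ad", "de", "ae"),
         ("ef", "fb", "eb"), ("de", "ef", "df"),
         ("dc", "cf", "df"), ("ac", "dc", "ad"),
         ("cf", "fb", "cb"), ("ac", "cb", "ab")]        # T_2 (8 triangles)
FACE_IDX = [tuple(EDGES.index(e) for e in f) for f in FACES]

def count_good_numberings():
    """Enumerate all 3^12 = 531441 maps T_1 -> {1,2,3} and keep those
    giving the three edges of every triangle pairwise distinct values."""
    good = 0
    for colours in product((1, 2, 3), repeat=len(EDGES)):
        if all(len({colours[i], colours[j], colours[k]}) == 3
               for i, j, k in FACE_IDX):
            good += 1
    return good

print(count_good_numberings())  # 24, i.e. K_octa = 3! * 4
```

The sweeping process of the example reaches the same value after only 64 tensor operations, which is precisely the advantage of formula (4).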
It would be interesting to evaluate the complexity of this method for generic triangulations. Using the same method, one can compute the chromatic index of the icosahedron and one finds $K_{ico.}=60$, proving this way that there exist 240 good colourings of the faces of the dual dodecahedron. \section{A global section of $\varphi$} $\varphi$ induces over $\mathscr{P}$ another fibered category, $\Phi$, defined as follows. To each path $\alpha=(a_0, \cdots , a_\ell)$, we associate the category $\Phi_\alpha$ whose objects are the sections of $\varphi$ over $\alpha$. These are the families of $V$-modules, $\zeta_{a_i} \in\mathrm{Ob}\,(\varphi_{a_i})$, connected by intertwiners : \subsubsection*{} $$ \zeta = \bigg( \xymatrix{ \zeta_{a_{i-1}} \ar@/_1pc/[rr]_{\zeta_{a_i a_{i-1}}} & & V\zeta_{a_i} \ar@/_1pc/[ll]_{\zeta_{a_{i-1} a_i}} \\ } \bigg)_{1 \leqslant i \leqslant \ell} \in \mathrm{Ob}\, (\Phi_{\alpha}) $$ \subsubsection*{} \noindent If $\zeta,\omega\in\mathrm{Ob}\,(\Phi_\alpha)$, then $\hom_{\Phi_\alpha}(\zeta,\omega)$ is the vector space of families $(u_i:\zeta_{a_i}\to {\omega}_{a_i})_{0\leqslant i\leqslant\ell}$ of linear maps such that the following diagrams commute : \subsubsection*{} \[ \xymatrix{ \zeta_{a_{i-1}} \ar[dd]_{u_{i-1}} \ar[rr]^{\zeta_{a_i a_{i-1}}} & & V \zeta_{a_i}\ar[dd]^{I \otimes u_i} & & & & \zeta_{a_{i-1}} \ar[dd]_{u_{i-1}} \ar@{<-}[rr]^{\zeta_{a_{i-1} a_i}} & & V \zeta_{a_i}\ar[dd]^{I \otimes u_i} \\ & & & & & & & & \\ {\omega}_{a_{i-1}} \ar[rr]_{{\omega}_{a_i a_{i-1}}} & & V {\omega}_{a_i} & & & & {\omega}_{a_{i-1}} \ar@{<-}[rr]_{{\omega}_{a_{i-1} a_i}} & & V {\omega}_{a_i} \\ } \] \begin{eqnarray*} {\omega}_{a_i a_{i-1}} \circ u_{i-1} = (I \otimes u_i) \circ \zeta_{a_i a_{i-1}} & \hspace{25mm} & u_{i-1} \circ \zeta_{a_{i-1} a_i} = {\omega}_{a_{i-1} a_i} \circ (I \otimes u_i) \\ \end{eqnarray*} {\bf Definition} : \emph{Let $\alpha = (a_0, \cdots, a_\ell)$ be a path of length $\ell$ and let $\zeta\in\mathrm{Ob}\, (\Phi_\alpha)$ be a section 
of $\varphi$ over $\alpha$. The direct transport operator of $\zeta$ along $\alpha$ is the morphism} \begin{eqnarray*} T_{\alpha}(\zeta) = (I^{\ell-1}\otimes \zeta_{a_{\ell} a_{\ell-1}}) \circ (I^{\ell-2}\otimes \zeta_{a_{\ell-1} a_{\ell-2}}) \circ\cdots\circ (I\otimes \zeta_{a_2 a_1})\circ \zeta_{a_1 a_0} \ : \ \zeta_{a_0} \longrightarrow V^{\ell} \zeta_{a_\ell} \\ \end{eqnarray*} \emph{and the inverse transport operator of $\zeta$ is the morphism} \begin{eqnarray*} \overline{T}_{\alpha}(\zeta) = \zeta_{a_0 a_1} \circ (I\otimes \zeta_{a_1 a_2}) \circ\cdots\circ (I^{\ell-2}\otimes \zeta_{a_{\ell-2} a_{\ell-1}}) \circ (I^{\ell-1}\otimes \zeta_{a_{\ell-1} a_{\ell}}) \ : \ V^\ell \zeta_{a_\ell} \longrightarrow \zeta_{a_0} \\ \end{eqnarray*} Let us note that $T_{\overline{\alpha}}\neq{\overline{T}}_\alpha$. $\Phi_{\vert\mathscr{P}_1}$ is generated by its restriction to the oriented 2-cells of $T$. If $\sigma=\big( (xz),(xyz) \big)$ and if $\zeta\in\mathrm{Ob}\,(\Phi_{(xyz)})$, then we define ${\xi}=\Phi_{\sigma}(\zeta)\in\mathrm{Ob}\,(\Phi_{(xz)})$ by \begin{eqnarray*} {\xi}_x &=& {\zeta_x} \\ {\xi}_z &=& {\zeta_z} \\ {\xi}_{zx} &=& (\widetilde{F} \otimes I_{\zeta_z}) \circ (I \otimes {\zeta_{zy}}) \circ {\zeta_{yx}} \\ {\xi}_{xz} &=& {\zeta_{xy}} \circ (I \otimes \zeta_{yz}) \circ (F \otimes I_{\zeta_z}) \\ \end{eqnarray*} Similarly, if ${\xi}\in\mathrm{Ob}\,(\Phi_{(xz)})$, we define $\zeta=\Phi_{\overline{\sigma}}({\xi})\in\mathrm{Ob}\,(\Phi_{(xyz)})$ by \begin{eqnarray*} \zeta_x &=& {\xi}_x \\ \zeta_y &=& V {\xi}_z \\ \zeta_z &=& {\xi}_z \\ \zeta_{yx} &=& (F \otimes I_{\xi_z}) \circ \xi_{zx} \\ \zeta_{xy} &=& \xi_{xz} \circ (\widetilde{F} \otimes I_{\xi_z}) \\ \zeta_{yz} &=& I\otimes I_{\xi_z} \ = \ \zeta_{zy} \\ \end{eqnarray*} If $u\in\hom_{\Phi_{(xyz)}} (\zeta,\omega)$, then we have the commutative diagrams \subsubsection*{} $$ \xymatrix{ \zeta_x \ar[dd]_{u_x} \ar[rr]^{\zeta_{yx}} & & {V} \zeta_y \ar[dd]^{I\otimes u_y} \ar[rr]^{I\otimes {\zeta_{zy}}} & & V^2 
\zeta_z \ar[dd]^{I^2\otimes u_z} \ar[rr]^{\widetilde{F} \otimes I_{\zeta_z}} & & \ar[dd]^{I\otimes u_z} V \zeta_z \\ & & & & & & \\ \omega_x \ar[rr]_{\omega_{yx}} & & {V} \omega_y \ar[rr]_{I \otimes \omega_{zy}} & & V^2 \omega_z \ar[rr]_{\widetilde{F} \otimes I_{\omega_z}} & & V \omega_z \\ } $$ \subsubsection*{} \subsubsection*{} $$ \xymatrix{ \zeta_x \ar[dd]_{u_x} \ar@{<-}[rr]^{\zeta_{xy}} & & {V} \zeta_y \ar[dd]^{I\otimes u_y} \ar@{<-}[rr]^{I\otimes {\zeta_{yz}}} & & V^2 \zeta_z \ar[dd]^{I^2\otimes u_z} \ar@{<-}[rr]^{F \otimes I_{\zeta_z}} & & \ar[dd]^{I\otimes u_z} V \zeta_z \\ & & & & & & \\ \omega_x \ar@{<-}[rr]_{\omega_{xy}} & & {V} \omega_y \ar@{<-}[rr]_{I \otimes \omega_{yz}} & & V^2 \omega_z \ar@{<-}[rr]_{F \otimes I_{\omega_z}} & & V \omega_z \\ } $$ \subsubsection*{} \noindent and we can define the action of $\Phi_{\sigma}$ and of $\Phi_{\overline{\sigma}}$ on the arrows by \begin{eqnarray*} \Phi_{\sigma} (u_x,u_y,u_z) &=& (u_x,u_z) \\ \Phi_{\overline{\sigma}} (v_x,v_z) &=& (v_x, I \otimes v_z , v_z) \\ \end{eqnarray*} These functors satisfy the relations : \begin{eqnarray*} \Phi_{\sigma} \Phi_{\overline{\sigma}} (\xi_x,\xi_{zx},\xi_z) &=& (\xi_x,2\xi_{zx},\xi_z) \\ \Phi_{\overline{\sigma}} \Phi_{\sigma} ({\zeta_x},{\zeta_{yx}},{\zeta_y},{\zeta_{zy}},{\zeta_z}) &=& (\zeta_x,\zeta_{yx}\circ(A\otimes \zeta_{zy}),{V} \zeta_z,I \otimes I_{\zeta_z},\zeta_z) \\ \end{eqnarray*} If $(\alpha,\beta)\in\mathscr{P}_1$ is a generic 2-edge, then $\Phi_{\alpha\beta}$ acts locally as above without modifying the other entries. For $p=0,\cdots,n$, let $\Gamma_p=(\gamma_p,\cdots,\gamma_n)$ be the partial 2-path made of the last $(n-p+1)$ entries of $\Gamma$ and let \begin{eqnarray*} \Phi_{\Gamma_p} = \Phi_{\gamma_p\gamma_{p+1}}\circ\cdots\circ\Phi_{\gamma_{n-1}\gamma_n} &:& \Phi_{\gamma_n} \longrightarrow \Phi_{\gamma_p} \\ \end{eqnarray*} be the functor which maps the sections of $\varphi$ over $\gamma_n$ to sections over $\gamma_p$. 
For example, we can choose $\gamma_n=(ab)$. Let us apply $\Phi_{\Gamma_p}$ to the section $\zeta^n \in \mathrm{Ob}\, (\Phi_{(ab)})$ defined by \begin{eqnarray*} \zeta^n_a &=& \zeta^n_b \ = \ V \\ \zeta^n_{ba} &=& F \\ \zeta^n_{ab} &=& \widetilde{F} \\ \end{eqnarray*} \subsubsection*{} $$ \zeta^n = \bigg( \xymatrix{ {V} \ar@/_1pc/[rr]_{F} & & {V}^2 \ar@/_1pc/[ll]_{\widetilde{F}} \\ } \bigg) $$ \subsubsection*{} \subsubsection*{} ${\bf Theorem \ 3}$ : \emph{$\Phi_\Gamma$ multiplies the arrows of $\zeta^n$ by $K_T$ : } \begin{eqnarray*} \boxed{\Phi_{\Gamma}(\zeta^n) = \bigg( \xymatrix{ {V} \ar@/_1pc/[rr]_{K_T \, F} & & V^2 \ar@/_1pc/[ll]_{K_T \, \widetilde{F}} \\ } \bigg) \in \mathrm{Ob}\, (\Phi_{(ab)}) } \\ \end{eqnarray*} \subsubsection*{} {\it Proof} : The inverse transport operator of $\zeta^p:=\Phi_{\Gamma_p}(\zeta^n) \in \mathrm{Ob}\, (\Phi_{\gamma_p})$ is \begin{eqnarray*} {\overline{T}}_{\gamma_p} (\zeta^p) = {\overline{T}}_{\gamma_p} \big( \Phi_{\gamma_p\gamma_{p+1}}\circ\cdots\circ \Phi_{\gamma_{n-1}\gamma_n} (\zeta^n) \big) : V^{\ell_p+1} \longrightarrow V \\ \end{eqnarray*} By a decreasing induction on $p$, we have : \begin{eqnarray*} {\overline{T}}_{\gamma_p} (\zeta^p) = \zeta^n_{ab} \circ (\varphi_{\Gamma_p}\otimes I) = \widetilde{F} \circ (\varphi_{\Gamma_p}\otimes I) \\ \end{eqnarray*} \subsubsection*{} $$ \xymatrix{ V^{\ell_p+1} \ar@/_1pc/[rrrr]_{{\overline{T}}_{\gamma_p} (\zeta^p) } \ar@/^1pc/[rr]^{\varphi_{\Gamma_p}\otimes I} & & {V}^2 \ar@/^1pc/[rr]^{\zeta^n_{ab}} & & V \\ } $$ \subsubsection*{} For $p=0$ : \begin{eqnarray*} \zeta^0_{ab} = {\overline{T}}_{\gamma_0} (\zeta^0) = \widetilde{F} \circ (\varphi_{\Gamma}\otimes I) = K_T\, \widetilde{F} \\ \end{eqnarray*} Similarly, by using the direct transport operator, we obtain \subsubsection*{} $$ \xymatrix{ V \ar@/_1pc/[rrrr]_{T_{\gamma_p} (\zeta^p) }\ar@/^1pc/[rr]^{\zeta^n_{ba}} & & V^2 \ar@/^1pc/[rr]^{\varphi_{{\widetilde{\Gamma}}_p}\otimes I} & & V^{\ell_p+1} \\ } $$ \subsubsection*{} 
\begin{eqnarray*} \zeta^0_{ba}=T_{\gamma_0} (\zeta^0)= ({\varphi}_{\widetilde{\Gamma}}\otimes I) \circ F = K_T\, F \\ \end{eqnarray*} \hfill\qed \subsubsection*{} Once $\Gamma=(\gamma_0,\cdots,\gamma_n)$ has been chosen, the sweeping process constructs a $V$-module, $\zeta_x={V}^{n_x}$, for each $x\in T_0$, and a morphism, $\zeta_{xy}$, for each oriented edge of $T$. Each integer $n_x$ depends only on the partial 2-path $\Gamma_p$ which reaches $x$ first and not on the paths $\gamma_q$ with $q<p$. Similarly for each arrow, $\zeta_{xy}$. Therefore, we obtain a global section, $\zeta$, of $\varphi$ over $T$. More precisely, if we lift $T$ to a triangulation $\widetilde{T}$ of the disk $D^2$ such that ${\widetilde{T}}_{\vert\partial D^2}$ consists of a pair of arcs, both projected onto the base edge $(ab)$, then $\zeta$ is a global section of the pull-back of $\varphi$ to $\widetilde{T}$. If $\zeta_{xy}=0$ for some edge $(xy)$, then the transport operator along a path $\gamma_p$ containing $(xy)$ vanishes, as well as the subsequent transport operators and, at the end, we obtain $K_T=0$. Conversely, if $K_T=0$, then there exists an edge (at least the last one) where $\zeta$ vanishes. Consequently, we have obtained a geometric interpretation of the four-colour theorem in terms of sections of $\varphi$ : \subsubsection*{} ${\bf Theorem \ 4}$ : \emph{4CT $\iff ( \zeta_{xy}\neq 0 \quad \forall\, (xy) )$.} \subsubsection*{} {\bf Example : } Let us construct $\zeta$ on the octahedron. 
Starting from $\zeta_a = \zeta_b = V $ and $\zeta^n_{ab} = \widetilde{F}$, we have : \begin{eqnarray*} \zeta_e &=& V^2 \\ \zeta_{ae} &=& \widetilde{F} \circ (\widetilde{F}\otimes I) \\ \zeta_{eb} &=& I^2 \\ \zeta_d &=& V^3 \\ \zeta_{ad} &=& \widetilde{F} \circ (\widetilde{F}\otimes I) \circ (\widetilde{F}\otimes I^2) \\ \zeta_{de} &=& I^3 \\ \zeta_f &=& V^2 \\ \zeta_{ef} &=& \widetilde{F} \otimes I \\ \zeta_{fb} &=& I^2 \\ \zeta_{df} &=& (I\otimes \widetilde{F}\otimes I) \circ (F\otimes I^2) \\ \zeta_c &=& V^3 \\ \zeta_{dc} &=& (I\otimes \widetilde{F}\otimes I) \circ (F\otimes I^2)\circ (\widetilde{F}\otimes I^3) \\ \zeta_{cf} &=& I^3 \\ \zeta_{ac} &=& \widetilde{F} \circ (\widetilde{F}\otimes I) \circ (\widetilde{F}\otimes I^2)\circ (I^2\otimes \widetilde{F}\otimes I) \\ & & \quad \circ (I\otimes F\otimes I^2)\circ (I\otimes\widetilde{F}\otimes I^3)\circ (F\otimes I^3) \\ \zeta_{cb} &=& F\otimes I \\ \zeta_{ab}^0 &=& \widetilde{F} \circ (\widetilde{F}\otimes I) \circ (\widetilde{F}\otimes I^2) \circ (I^2\otimes \widetilde{F}\otimes I)\circ (I\otimes F\otimes I^2) \\ & & \quad \circ (I\otimes\widetilde{F}\otimes I^3)\circ (F\otimes I^3) \circ (I\otimes F\otimes I)\circ (F\otimes I) \\ &=& \widetilde{F} \circ (\varphi_{\Gamma}\otimes I) \\ &=& K_T\, \widetilde{F} \\ \end{eqnarray*} \section{Conclusion and perspectives} The classical approaches to the four-colour problem study the local form of a planar map to prove its global colourability. This suggests the existence of a cohomological interpretation of this property. In the present work, we have constructed a global section of a fibered category modeled on $\mathbf{Rep}_f (\sl)$ and proved that the validity of the four-colour theorem is equivalent to the fact that this section does not vanish. We hope that the present approach will be a first step toward an algebraic proof and the understanding of the four-colour theorem. \vspace{5mm}
\section{INTRODUCTION} The 22~GHz H$_2$O maser emission line is of great astrophysical interest for its extreme requirements for density ($>$10$^{7}$~cm$^{-3}$), temperature ($>$300~K), and of course radial velocity coherence. It is detected in both Galactic and extragalactic star-forming regions as well as in the central regions of galaxies hosting active galactic nuclei (AGNs). In AGNs, isotropic luminosities commonly reach values of $L_{\rm H_2 O} > 10 L_{\odot}$ and the objects are then classified as ``megamasers'' (see recent reviews by e.g. \citealt{gre04,mor04,hen05b,lo05}). So far, water megamaser emission has been detected in about 10\% of the AGNs surveyed in the local universe \citep{bra04}. The association of water megamasers with AGNs of primarily Seyfert-2 or Low-Ionization Nuclear Emission-line Region (LINER) type \citep{bra97,bra04} and the fact that the emission often arises from the innermost parsec(s) of their parent galaxy have raised great interest in the study of 22~GHz maser emission. It suggests that the so far poorly constrained excitation mechanism is closely related to AGN activity, probably irradiation by X-rays (e.g. \citealt{neu94}). For Seyfert-2 galaxies, in the framework of the so-called unified model \citep{ant93}, a dusty molecular disk or torus is seen edge-on where the conditions and velocity-coherent path lengths are favorable for the formation of megamaser activity. In those cases in which the emission arises from a nuclear disk and can be resolved spatially using Very Long Baseline Interferometry (VLBI), the central black hole (BH) mass can be constrained, as has been successfully shown for NGC\,4258 (e.g. \citealt{gre95,miy95,her99,her05}). Moreover, using H$_2$O masers, distances to galaxies can be obtained without the use of standard candles \citep{miy95,her99,arg04,bru05,arg07,hum08}. Thus, finding new megamaser galaxies is of great interest. 
If the unified scheme for AGNs is to be equally successful for objects of high as well as of low luminosity, there should exist a large number of type-2 QSOs whose optical spectra are dominated by narrow emission lines. Indeed, with the advent of new extended surveys such as the Sloan Digital Sky Survey (SDSS), many type-2 QSOs have recently been identified \citep{zak03}. We conducted a search for water megamasers in 274 of the 291 SDSS type-2 AGNs \citep{zak03} using the Robert C. Byrd Green Bank Telescope (GBT) and the Effelsberg 100-m radio telescope. With a redshift range of 0.3 $<$ z $<$ 0.83, the sample covers significantly higher distances than most previous searches for H$_2$O megamasers ($z \ll 0.1$; \citealt{bra96, tar03, bra04, kon06a, kon06b, bra08, cas08}) and is the first survey for water megamasers in objects with QSO luminosities (except for the study of \cite{bar05} which is part of the larger survey presented here). Such a search provides clues to whether the unified model can indeed be extended to QSOs or whether their powerful engines lead to a different scenario. Do the high QSO luminosities result in H$_2$O ``gigamasers''? Or do they destroy the warm dense molecular gas needed to supply the water molecules? Are the molecular parts of the accretion disks much farther away from the nuclear engine, so that rotation velocities are smaller in spite of a potentially more massive nuclear engine than in Seyfert-2 galaxies? Finding megamasers in type-2 QSOs may provide insights into QSO molecular disks and tori and enables us to independently determine their BH masses. Even more importantly, megamasers in type-2 QSOs may provide the unique possibility to directly measure their distances and thus verify the results from type 1a supernovae measurements on the existence and properties of the elusive dark energy (e.g. \citealt{bar05,bra07,rei08}). 
We summarize the sample properties in \textsection{2}, describe the observations in \textsection{3}, present the results in \textsection{4}, and discuss them in \textsection{5}. After a brief summary (\textsection{6}), we list a sample of 171 additional objects (radio galaxies, QSOs, and galaxies) in the Appendix (\textsection{A}). Throughout the paper, we assume a Hubble constant of $H_0$ = 75\,km\,s$^{-1}$\,Mpc$^{-1}$. For the high-z objects, we additionally assume $\Omega_{\Lambda}$ = 0.73 and $\Omega_{\rm M}$ = 0.27 \citep{wri06}. \section{SAMPLE PROPERTIES} As already mentioned above, our sample consists of 274 type-2 AGNs (0.3 $<$ z $<$ 0.83) selected from the SDSS \citep{zak03}. Out of these, 122 objects have $L_{\rm [OIII]}$ $>$ 3 10$^8$ $L_{\odot}$, and can thus be classified as type-2 QSOs \citep{zak03}. About 10\% of the SDSS type-2 AGNs are radio-loud \citep{zak04}, comparable to the AGN population as a whole. A few type-2 AGNs have soft X-ray counterparts \citep{zak04}. Spectropolarimetry was carried out for 12 type-2 QSOs and revealed polarization in all objects. Five objects show polarized broad lines expected in the framework of the unified model at the sensitivity achieved \citep{zak05}. \citet{zak06} studied the host galaxy properties for nine objects, finding that the majority (6/9) of the type-2 QSO host galaxies are ellipticals. All observations support the interpretation of the type-2 AGNs selected from the SDSS as being powerful obscured AGNs. Table~\ref{table1} summarizes the sample properties. \section{OBSERVATIONS} All sources were measured in the 6$_{\rm 16}$-5$_{\rm 23}$ line of H$_{\rm 2}$O (22.23508 GHz rest frequency). The observations were carried out during several runs at the GBT of the National Radio Astronomy Observatory (NRAO) in January and June 2005 as well as at the Effelsberg 100-m radio telescope of the Max Planck Institut f\"ur Radioastronomie (MPIfR) in November and December 2005. 
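The observed sky frequencies quoted in the following subsections follow directly from the 22.23508 GHz rest frequency and the optical-convention redshifts. The minimal helper below is purely illustrative (it is not part of the paper's reduction pipeline) and reproduces the quoted tuning ranges:

```python
H2O_REST_GHZ = 22.23508  # 6_16 - 5_23 H2O rest frequency

def sky_frequency_ghz(z):
    """Observed frequency of the 22 GHz line at optical redshift z (v = cz)."""
    return H2O_REST_GHZ / (1.0 + z)

# GBT subsample 0.44 < z < 0.85, full sample 0.3 < z < 0.83:
print(round(sky_frequency_ghz(0.44), 2), round(sky_frequency_ghz(0.85), 2))  # -> 15.44 12.02
print(round(sky_frequency_ghz(0.30), 2), round(sky_frequency_ghz(0.83), 2))  # -> 17.1 12.15
```

The first pair matches the GBT's available 12--15.4 GHz coverage; the second shows why the Effelsberg observations span frequencies up to $\sim$17.1 GHz.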
For details of observations, see Table~\ref{table1}.\footnote{Note that our velocity convention is the optical one, i.e. $v$ = c$z$.} \subsection{GBT} A total of 128 SDSS type-2 AGNs were observed with the GBT, limited to those that are within the available frequency coverage of 12--15.4 GHz (0.44 $<$ z $<$ 0.85). The observing mode utilized two feeds separated by 5.5\arcmin\ on the sky, each with dual polarization. The system temperatures were typically 25 K. The source was placed alternately in each beam, with a position-switching interval of 2 minutes and was typically observed for 30 minutes total on-source time (possibly longer for objects with follow-ups). A total of 200 MHz bandwidth was covered with $\sim$0.5\,km\,s$^{-1}$ channels. Antenna pointing checks were made roughly every 2 hours, and typical pointing errors were less than 1/10 of a full width to half power (FWHP) beamwidth of 48\arcsec\ at 15 GHz. GBT flux calibration was done using standard antenna gain vs frequency curves. We estimate the calibration uncertainty to be $\sim$20\%. \subsection{Effelsberg} A total of 150 SDSS type-2 AGNs were observed with the Effelsberg 100-m radio telescope\footnote{Note that a few objects were observed at both GBT and Effelsberg yielding a total number of 274 sources.}. The measurements were carried out with a dual polarization HEMT receiver providing system temperatures of $\sim$36--45 K (for the observed frequencies between $\sim$14.3 and 17.1 GHz) on a main beam brightness temperature scale. The observations were obtained in a position switching mode. Signals from individual on- and off-source positions were integrated for 3 minutes each, with the off-position offsets alternating between +900 and --900 arcsec in Right Ascension. The typical on-source integration time was $\sim$70 minutes (possibly longer for objects with follow-ups) with variations due to weather and elevation. 
An auto-correlator backend was used, split into eight bands of 40\,MHz width and 512 channels, respectively, that were individually shifted in such a way that a total of $\sim$130--240\,MHz was covered. Channel spacings are $\sim$1.5\,km\,s$^{-1}$. The FWHP beamwidth was $\sim$40\arcsec. The pointing accuracy was better than 10\arcsec. Calibration was obtained by repeated measurements at different frequencies toward 3C\,286, 3C\,48, and NGC\,7027, with flux densities taken from \citet{ott94}, interpolated for the different observed frequencies using their Table~5. The calibration should be accurate to $\sim$20\%. \section{RESULTS} All spectra were examined carefully by eye for both broad and narrow lines. In addition, we applied spectral binning using several bin sizes, especially if there was anything looking remotely like a signal. \subsection{The Gigamaser J0804+3607} As already reported in \citet{bar05}, water maser emission was detected from the type-2 QSO SDSS J080430.99+360718.1 (hereafter J0804+3607; $z$ = 0.66). With $L_{\rm H_2O}$ $\simeq$ 21,000 $L_{\odot}$\footnote{Using $H_0$ = 75\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\Lambda}$ = 0.73, and $\Omega_{\rm M}$ = 0.27. Note that the value given by \citet{bar05}, $L_{\rm H_2O}$ = 23,000 $L_{\odot}$, is higher due to a smaller value of $H_0$.}, it is the intrinsically most powerful maser known. \subsection{Non Detections} \label{non} While the detection of a water vapor maser in J0804+3607 shows that H$_2$O masers are indeed detectable at high redshift, and thus, that such a project is in principle feasible, no obvious maser emission was discovered in any of the remaining objects (Table~\ref{table1}). For some objects we see 2--3 sigma blips, which, however, do not meet the 5 $\sigma$ detection criterion. 
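Low-significance blips are in fact expected from noise alone. The sketch below quantifies this under idealized assumptions: independent Gaussian channels and an illustrative $\sim$8000 channels per spectrum (200 MHz at $\sim$0.5 km\,s$^{-1}$), both of which are simplifications since real spectra have correlated channels and baseline ripples. It is only an order-of-magnitude guide to why a 5 $\sigma$ criterion is adopted.

```python
import math

def expected_noise_peaks(n_channels, n_sources, k_sigma):
    """Expected number of channels exceeding +k sigma over a whole survey,
    assuming independent zero-mean Gaussian noise in every channel."""
    p_tail = 0.5 * math.erfc(k_sigma / math.sqrt(2.0))  # one-sided Gaussian tail
    return n_channels * n_sources * p_tail

print(expected_noise_peaks(8000, 274, 3.0))  # ~3000 spurious 3-sigma peaks
print(expected_noise_peaks(8000, 274, 5.0))  # well below 1 at 5 sigma
```

With millions of effective trials, thousands of 3 $\sigma$ excursions arise from noise alone, whereas a genuine 5 $\sigma$ feature would be significant.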
While they are most likely statistically insignificant considering the effectively large number of trials (the number of channels per spectrum times the number of objects observed), follow-up observations are planned for verification. \section{DISCUSSION} Ideally, we would like to estimate the detection probabilities for our sample and compare them with the (non-) detection rate. In the local universe, water megamaser emission has been detected in about 10\% of AGNs \citep{bra04}. Simply extrapolating the percentage of megamasers detected in nearby Seyfert-2 galaxies to the more distant type-2 Seyferts and type-2 QSOs leads to the expectation of finding at least $\sim$27 megamasers among the 274 SDSS type-2 AGNs. However, such a naive extrapolation does not take into account the megamaser luminosity function, its evolution with redshift, the sensitivity of the survey, and intrinsic differences among the sources. We will discuss each of these issues in turn. \subsection{H$_2$O Maser Luminosity Function} \label{luminosity} \citet{hen05a} performed a statistical analysis of 53 H$_2$O maser galaxies beyond the Magellanic Clouds. From the maser luminosity function (LF), i.e. the number density of objects with a given water maser luminosity per logarithmic interval in $L_{\rm H_2O}$, they estimate that the number of detectable maser sources is almost independent of their intrinsic luminosity: the larger volume in which high-luminosity masers can be detected compensates for the smaller source density. This implies that masers out to cosmological distances should be detectable with current telescopes, as long as the LF does not steepen at the very high end and if suitable candidates are available. Thus, \citet{hen05a} conclude that most of the detectable luminous H$_2$O megamasers with $L_{\rm H_2 O} > 100 L_{\odot}$ have not yet been found. 
We performed a similar analysis of the larger sample of masers known to date (78 sources; see Table~\ref{maserlf}) and derived a zeroth-order approximation of the LF of extragalactic water maser sources. We here briefly summarize the procedure adapted from \citet{hen05a}; for details and a discussion of limitations, we refer the reader to \citet{hen05a}. To estimate the water maser LF, the standard $V/V_{\rm max}$ method \citep{sch86} was used. We divided the 78 maser sources known to date (Table~\ref{maserlf}) into luminosity bins $L_b$ of 0.5 dex ($b$ = 1,...,11), covering a total range of $L_{\rm H_2 O}/L_{\odot} = 10^{-1}$ to $3\,10^{4}$. The differential LF value was calculated for each luminosity bin according to \begin{eqnarray*} \Phi (L_b) = \frac{4 \pi}{\Omega} \sum^{n(L_b)}_{i=1} (1/V_{\rm max})_i\hspace*{0.15cm} . \end{eqnarray*} $n(L_b)$ is the number of galaxies with $L_b - 0.25 < \log (L_{\rm H_2O}/L_{\odot}) \le L_b + 0.25$ (centering on $\log (L_{\rm H_2O}/L_{\odot})$ = -0.75, -0.25, +0.25, etc.). Following \citet{hen05a}, we set $\Omega$ = 2 $\pi$, approximating the sky coverage to be the entire northern sky, for the Seyfert sample. For J0804+3607, we assumed $\Omega$ = 0.64 since the SDSS Data Release 1, from which the type-2 AGN sample of \citet{zak03} was taken, covered $\sim$2100\,deg$^2$. $V_{\rm max}$ is the maximum volume over which an individual galaxy can be detected depending on the detection limit of the survey and its maser luminosity (see also Sect.~\ref{sensitivity}). We calculated the maser LF for three different detection limits: (a) 1 Jy km\,s$^{-1}$, (b) 0.2 Jy km\,s$^{-1}$, and (c) 0.06 Jy km\,s$^{-1}$. The first two cases are identical to the procedure in \citet{hen05a}; the latter case was added to include objects such as the gigamaser J0804+3607\footnote{Note that for this distant object, we used the co-moving volume as maximum volume.}. 
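The estimator just described can be sketched in a few lines of Python. The detectability model used here (a Euclidean, low-redshift approximation with $L/L_{\odot} \simeq 0.023\,F_{\rm int}\,(D/{\rm Mpc})^2$, anticipating the sensitivity relation of the next subsection) and the example numbers are illustrative only, not the actual computation behind the published LF:

```python
import math
from collections import defaultdict

def maser_lf(luminosities, flux_limit_jykms, omega=2 * math.pi, bin_dex=0.5):
    """Schmidt V/Vmax LF estimator, Euclidean low-z approximation.
    luminosities: isotropic L_H2O in L_sun; flux_limit in Jy km/s.
    D_max = sqrt(L / (0.023 * F_lim)) in Mpc; bins of bin_dex dex are
    centred on log L = ..., -0.75, -0.25, +0.25, ... as in the text."""
    phi = defaultdict(float)
    for lum in luminosities:
        d_max = math.sqrt(lum / (0.023 * flux_limit_jykms))   # Mpc
        v_max = (4.0 * math.pi / 3.0) * d_max ** 3            # full-sphere Mpc^3
        log_l = math.log10(lum)
        centre = bin_dex * math.floor((log_l + bin_dex / 2.0) / bin_dex) - bin_dex / 2.0
        phi[centre] += (4.0 * math.pi / omega) / v_max        # sky-coverage correction
    return dict(phi)
```

For example, a single source with $L$ = 230 $L_{\odot}$ and a 1 Jy km\,s$^{-1}$ limit is detectable out to 100 Mpc and contributes $1/V_{\rm max}$ to the bin centred on $\log L$ = 2.25.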
From the sample of 78 sources, IC\,342 is excluded in all three cases because its maser luminosity is too low. In case (a), 32 masers fall below the chosen detection limit, and in case (b), 10 galaxies were omitted. In case (c), all 77 sources are included in the LF. The resulting LFs are shown in Fig.~\ref{lf}. The overall slope of the H$_2$O LF does not depend strongly on the chosen detection limit. Applying a linear fit to the three different LFs, we derive $\Phi \propto L_{\rm H_2O}^{-1.4 \pm 0.1}$, comparable to \citet{hen05a}, but steeper than the LF for OH megamasers ($\Phi \propto L_{\rm H_2O}^{-1.2}$) \citep{dar02}. The main conclusions we can draw from this new version of the water maser LF are virtually identical to those by \citet{hen05a}: (i) The number of sources at the upper end of the LF decays rapidly, indicating that gigamasers are intrinsically rare or that the proper sources have not yet been found --- so far most surveys were focused on nearby sources. In case (c), when including J0804+3607, the LF seems to rise again, which is due to the much smaller area of sky covered in the survey presented here (see above). (ii) There are only a few sources in the $L_{\rm H_2O}$ = 0.1 -- 10 $L_{\odot}$ bins. The associated slight minimum in the LF suggests that two different LFs are overlaid: one for masers in star-forming regions with low luminosities ($L_{\rm H_2O}$ $<$ 0.1$-$10 $L_{\odot}$) and one for maser sources in AGNs with $L_{\rm H_2O}$ $>$ 10 $L_{\odot}$. However, note that an extrapolation of the local maser LF to higher redshifts is not straightforward. It would assume no cosmological evolution, but a strong evolution of AGN activity with redshift is known. Another cautionary note we want to add is that our survey is most sensitive to narrow-line masers and that we might be missing broad-line masers. 
Although we binned our data in various ways to emphasize potential broad-line masers and to make them more visible, a given amount of integrated flux density would then be spread over a larger amount of noise and baseline uncertainties would become more severe. Broad lines typically arise in jet masers such as Mrk\,348 \citep{pec03} and NGC\,1052 \citep{cla98}, one exception being TXS\,2226-184 where a broad maser arises from a disk maser \citep{bal05}; for a discussion on jet masers see also \citet{hen05b}. As these broad-line masers are included in the LF of the known maser sources, we in principle introduce a systematic error when extrapolating the derived LF to our survey. However, since broad-line masers seem to be rare, we neglect this problem. \begin{figure} \includegraphics[scale=0.29,angle=-90]{f1.eps} \caption{Luminosity function for water maser galaxies at $D$ $\ge 100$\,kpc (cf. Table~\ref{maserlf}). Plotted are the resulting LFs assuming three different sensitivity limits of the survey: 1 Jy km\,s$^{-1}$ (open diamonds), 0.2 Jy km\,s$^{-1}$ (filled diamonds), and 0.06 Jy km\,s$^{-1}$ (open stars). The numbers on the top indicate the number of galaxies included in each luminosity bin for a sensitivity limit of 0.06 Jy km\,s$^{-1}$ (open stars); corresponding error bars were calculated from Poisson statistics \citep{con89}. The lines indicate the best linear fit for a sensitivity limit of 1 Jy km\,s$^{-1}$ (dashed line; $\Phi \propto L_{\rm H_2O}^{-1.3}$), 0.2 Jy km\,s$^{-1}$ (solid line; $\Phi \propto L_{\rm H_2O}^{-1.4}$), and 0.06 Jy km\,s$^{-1}$, respectively (dotted line; $\Phi \propto L_{\rm H_2O}^{-1.4}$). See text for further details.} \label{lf} \end{figure} \subsection{Sensitivity of the Survey} \label{sensitivity} We can estimate the H$_2$O luminosities we would be able to detect depending on the sensitivity of our survey. 
Our sample lies at a redshift range of 0.3 $<$ $z$ $<$ 0.83, corresponding to luminosity distances of $D_L$ = 1,460 $-$ 4,980 Mpc\footnote{Using $H_0$ = 75\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{\Lambda}$ = 0.73, and $\Omega_{\rm matter}$ = 0.27}. The detectable H$_2$O luminosities depend on the sensitivity of the survey and the distance of the object \citep{hen05a}. In general, \begin{eqnarray*} L = \frac{F_{\nu^{\prime}} (\nu_0)}{1+z} \times 4 \pi D_L^2\hspace*{0.15cm} , \end{eqnarray*} with the specific flux $F_{\nu^{\prime}} (\nu_0)$ in the observed frame. Then \begin{eqnarray*} \frac{L_{\rm H_2O}}{L_{\odot}} = \frac{10^{-23}\,S_{\rm peak}}{\rm Jy} \times \frac{\nu_{\rm rest}}{\rm{c}} \times \frac{\Delta v}{{\rm km\,s^{-1}}} \times \frac{1}{1+z}\\ \times 4 \pi \left(\frac{3.1\times10^{24}\,D_L}{{\rm Mpc}}\right)^2 \times \frac{1}{3.8\times10^{33}}\hspace*{0.15cm} , \end{eqnarray*} where $\nu_{\rm rest}$ = 22.23508 GHz and $c$ is the speed of light in km\,s$^{-1}$. Thus \begin{eqnarray*} \frac{L_{\rm H_2O}}{L_{\odot}} = \left[0.023 \times \frac{S_{\rm peak}}{{\rm Jy}} \times \frac{\Delta v}{{\rm km\,s^{-1}}}\right] \times \frac{1}{1+z} \times \left(\frac{D_L}{{\rm Mpc}}\right)^2 \end{eqnarray*} (see also \citealt{sol05}). Assuming a characteristic linewidth of the dominant spectral feature of 20\,km\,s$^{-1}$, a 5$\sigma$ detection threshold of 5$\times$(7.6/4.5)\,mJy (7.6 mJy being the average rms of our observations in a 1\,km\,s$^{-1}$ channel, reduced by $\sqrt{20}\simeq4.5$ when smoothing to the 20\,km\,s$^{-1}$ linewidth) gives \begin{eqnarray*} \frac{L_{\rm H_2O}}{L_{\odot}} &=& 0.0039 \times \frac{1}{1+z} \times \left(\frac{D_L}{{\rm Mpc}}\right)^2 \hspace*{0.15cm} . \end{eqnarray*} Thus, given the distance of our sample, we can detect H$_2$O luminosities of $L_{\rm H_2O}$ $\simeq$ 6,400 $-$ 52,900 $L_{\odot}$. These H$_2$O luminosities are higher than the average luminosity found for megamasers in Seyfert-2 galaxies and LINERs. Among the 78 known H$_2$O maser galaxies, the typical cumulative H$_2$O luminosity range is 10 to 
2000 $L_{\odot}$ for sources associated with AGNs, while most of the weaker masers appear to be related to star formation. In addition, however, there are two gigamasers known, TXS\,2226--184 \citep{koe95} with $L_{\rm H_2O}$ = 6800 $L_{\odot}$ and J0804+3607 with $L_{\rm H_2O}$ $\simeq$ 21,000 $L_{\odot}$. Thus, the distance of our sample allows us to detect only gigamasers comparable to TXS\,2226--184 and J0804+3607. The low detection rate may simply reflect that megamasers with H$_2$O luminosities above 6,000 $L_{\odot}$ are intrinsically rare, an interpretation that is supported by the water maser LF (Sect.~\ref{luminosity}). However, there are other possible explanations for the low detection rate, which we discuss in the following. \subsection{Velocity Coverage} Our observations cover a frequency width of 130--240 MHz, corresponding to $\sim$1800--4000 km\,s$^{-1}$. This range should be large enough to cover any mismatch in redshift between the maser emission and the optical [OII]\,$\lambda$3727 emission. For J0804+3607, for example, the megamaser line is redshifted with respect to the [OII] line by 360\,km\,s$^{-1}$ \citep{bar05}. However, if the emission covers a range of $>$2000\,km\,s$^{-1}$, we may be unable to detect superpositions of thousands of individual maser components with slightly differing velocities, or rapidly rotating tori in which only the tangential parts show strong (highly red- and blue-shifted) maser emission \citep{hen98}. \subsection{Time Variability} Monitoring of megamaser sources has revealed variability on timescales of weeks, with fluctuations of the order of 10\% (e.g. \citealt{gre97b}), as well as on timescales of years, with maser luminosities varying by factors of 3--10 (e.g. \citealt{fal00a,gal01,tar07}). Such flaring masers can be explained by an increase in the X-ray luminosity of the AGN \citep{neu00}, if the maser emission is powered by the nuclear X-ray radiation. 
We cannot exclude that at least some of the sources for which we did not detect megamaser emission were in a low state of maser activity and might be detected during a later flaring stage. \subsection{Intrinsic Differences} So far, we have not taken into account that, when comparing low-luminosity AGNs such as Seyfert-2 galaxies and LINERs with high-luminosity AGNs such as the type-2 QSOs in our sample, we may be comparing apples and oranges. Intrinsic differences between the samples complicate the estimation of detection probabilities. \subsubsection{The Nature of Megamaser Galaxies} So far, $\sim$1500 galaxies have been searched for H$_2$O maser emission, resulting in the detection of 78 maser galaxies (Table~\ref{maserlf}). For 73 of the 78 known H$_2$O maser galaxies, the activity type has been determined (NED\footnote{Note that NED classifications such as morphological types and activity classes are inhomogeneous.}; see Table~\ref{maserlf}). The vast majority are classified as Seyferts (78\%), of which Sy2s (including Sy1.9) are the dominant type (88\%) and Sy1s are rare (3\%), the rest being classified simply as Sy or Sy1.5. The second largest activity type among the extragalactic water maser sources are LINERs, making up 11\% of the sample. In addition, 7\% are HII regions, 3\% starburst (SB) galaxies, and 1\% Narrow-Line Radio Galaxies (NLRGs)\footnote{Note that we counted the ``more energetic'' activity type, e.g., an object with activity types ``Sy2, SB, HII'' (Table~\ref{maserlf}, column 10) was counted as Sy2, an object with ``L, LIRG, HII'' was counted as LINER, etc.}. Objects with activity types HII or SB generally have lower maser luminosities and fall in the kilomaser range ($L_{\rm H_2O}$ $<$ 10 $L_{\odot}$). When including only maser sources with $L_{\rm H_2O} \ge 10 L_{\odot}$, the activity type has been determined for 57 sources in total. 
Out of these, 86\% are Seyferts (82\% are Sy2s; 4\% are Sy1s, namely NGC\,235A and NGC\,2782), 10\% are LINERs, and only 2\% HII regions (namely NGC\,2989). Using the larger number of extragalactic water maser sources known to date, our statistics thus confirm earlier studies showing that water megamaser sources are associated with AGNs of primarily Seyfert-2 or LINER type \citep{bra97,bra04}. This in turn strengthens the general expectation that megamasers should also be found in type-2 QSOs. Interpreting this finding in the framework of the unified models of AGNs, in which an optically thick obscuring dust torus is envisioned to encircle the accretion disk, with type-1 AGNs seen pole-on and type-2 AGNs seen edge-on \citep{ant93}, suggests that the megamaser activity is related to the large column densities of molecular gas along the line-of-sight through the torus. However, even if such an interpretation holds, the question remains why not all type-2 AGNs are megamasers. What are the necessary ingredients for the occurrence of these powerful masers? \citet{bra97} addressed this question through a statistical comparison of the physical, morphological, and spectroscopic properties of the known megamaser galaxies with those of non-megamaser galaxies\footnote{Here and in the following, we denote as ``non-megamaser galaxies'' those galaxies that have been observed at 22~GHz, but for which no megamaser emission was detected.}. They compared the AGN class, the host galaxy type and inclination, the mid-infrared (MIR) and FIR properties, the radio fluxes and luminosities, the [OIII] fluxes and luminosities, and the X-ray properties of the 16 megamasers known at that time with those of $\sim$340 non-megamaser galaxies. 
Apart from their main conclusion that H$_2$O emission is only detected in Seyfert-2 galaxies and LINERs but not in Seyfert-1 galaxies (a conclusion that still holds for the larger sample of megamasers known today; see above), \citet{bra97} found that H$_2$O emission is preferentially detected in sources that, when compared to the non-megamaser galaxies in their sample, are ``apparently brighter at MIR and FIR and centimeter radio wavelengths''. However, this result may at least in part reflect the fact that the megamaser galaxies are nearer than the non-megamaser galaxies. \citet{bra97} also find that H$_2$O emission is preferentially detected in sources with high X-ray-absorbing columns of gas -- a result that is still under debate: while \citet{zha06} concluded that H$_2$O megamasers have X-ray absorbing column densities similar to those of other Seyfert-2 galaxies, \citet{gre08} find a correlation between maser emission and high X-ray obscuring columns. The requirement for velocity coherence may play an important role in the (non-)occurrence of megamasers. For NGC\,4258, for example, the scattered light requires a thick obscuring disk (in terms of its optical shadowing properties), but the masers reside in a thin disk \citep{wilk95,bar99,hum08}. Sufficient velocity coherence (and gas column density) is perhaps achieved most often in the midplane. Thus, the solid angle into which the water maser emission is beamed is small, much smaller than that of the torus. With such a small angle, the likelihood of observing a maser line from a given viewing angle is small as well. Now, however, more than 10 years after the study of \citet{bra97}, the number of known megamaser galaxies has more than quadrupled, yet there is no comparable study addressing the IR properties of megamasers, their host galaxies, and their radio and optical properties. Such a study might reveal the necessary ingredients for the occurrence of megamasers in AGNs. 
This in turn would greatly facilitate the pre-selection of promising candidates for megamaser emission among type-2 QSOs. However, such a detailed comparison is beyond the scope of this paper. \subsubsection{FIR Luminosities and Dust Temperatures} Here, we derived the FIR luminosities and dust temperatures from IRAS fluxes \citep{ful89} using the procedure of \citet{wou86}. Some caution is required because these IRAS measurements are affected by the ratio of the contribution of nuclear light to host light, which depends on the nuclear FIR luminosity, the nature of the host, and the metric aperture size (and thus distance). Table~\ref{maserlf} gives FIR luminosities and dust temperatures for the sample of known maser sources. The latter are all well above 30 K and thus rather high, as already noted by \citet{hen86,bra97,hen05a}.\footnote{Note that the dust temperatures were calculated from the 60/100$\mu$m flux ratio. Cooler dust might be dominant in these galaxies but does not radiate at these wavelengths.} There is no obvious relation between dust temperature and $L_{\rm H_2O}$ (Fig.~\ref{dust}). In Fig.~\ref{fir}, we show the FIR luminosity versus the water maser luminosity. There seems to be a correlation in the sense that higher FIR luminosities correspond to higher water maser luminosities. However, we do not claim that the maser luminosity versus FIR luminosity plot shows an intimate physical connection between the two properties. Instead, Fig.~\ref{fir} mainly shows the range of FIR and H$_2$O luminosities covered by the known megamaser sources. Unfortunately, for our sample of 274 SDSS type-2 AGNs, IRAS fluxes are only available for 8 objects (see Table~\ref{zakfir}). 
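The derivation of a colour temperature from the 60/100\,$\mu$m flux ratio can be illustrated with a short numerical sketch. The snippet below assumes optically thin modified-blackbody emission, $S_\nu \propto \nu^{\beta} B_\nu(T)$, with an assumed emissivity index $\beta = 1.5$; the exact prescription of \citet{wou86} may differ, so this is a sketch under stated assumptions rather than a reproduction of the procedure used here.

```python
import math

def flux_ratio_60_100(T, beta=1.5):
    # S_60/S_100 for optically thin dust with S_nu ~ nu^(3+beta)/(exp(h nu/k T)-1);
    # h*c/k = 1.4388 cm K, wavelengths in cm.
    x60 = 1.4388 / (60.0e-4 * T)    # h*nu_60/(k*T)
    x100 = 1.4388 / (100.0e-4 * T)  # h*nu_100/(k*T)
    return (100.0 / 60.0) ** (3.0 + beta) * math.expm1(x100) / math.expm1(x60)

def dust_temperature(ratio, beta=1.5, lo=10.0, hi=200.0):
    # The 60/100 micron ratio rises monotonically with T, so invert by bisection.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if flux_ratio_60_100(mid, beta) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Under these assumptions, a flux ratio $S_{60}/S_{100} \simeq 0.9$ corresponds to a temperature of roughly 40\,K, within the range spanned by the sources in Table~\ref{maserlf}.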
Keeping in mind the small-number statistics, it is interesting to note that the average dust temperature of these 8 objects, $T_{\rm dust, ave}$ $\simeq$ 36$\pm$1\,K, is $\sim$12\,K lower than that of the known maser galaxies ($T_{\rm dust, ave}$ $\simeq$ 48$\pm$2\,K). At the same time, the type-2 AGNs of the SDSS sample all have very high FIR luminosities [$\log (L_{\rm FIR}/L_{\odot}) \simeq 13.5 \pm 0.1$] compared to the maser sources [$\log (L_{\rm FIR}/L_{\odot}) \simeq 10.5 \pm 0.1$], which are, of course, much closer. \begin{figure}[h!] \includegraphics[scale=0.29,angle=-90]{f2.eps} \caption{Water maser luminosities versus dust temperatures of H$_2$O detected galaxies (cf. Table~\ref{maserlf}; excluding those objects for which we do not have dust temperatures, leaving us with 73 objects total). } \label{dust} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.29,angle=-90]{f3.eps} \caption{IRAS point source FIR luminosity versus total H$_2$O luminosity of H$_2$O detected galaxies (cf. Table~\ref{maserlf}; excluding those objects for which we do not have FIR luminosities as well as those for which we only have lower limits, leaving us with 71 objects total).} \label{fir} \end{figure} \subsubsection{BH Mass and Accretion Disk} One difference between the high-$z$ type-2 AGNs in our sample and the local Seyfert-2 galaxies and LINERs in which megamasers have been found is the mass of the central engine. For Seyfert galaxies, BH masses range between $\sim$10$^6$ $M_{\odot}$ and a few times 10$^7$ $M_{\odot}$ (e.g. \citealt{gre97b, her99, hen02}), while masses in QSOs can reach 10$^9$ $M_{\odot}$ or more (e.g. \citealt{lab06, ves08}). \citet{tar07} suggest that clouds in a disk with large rotational velocity and small galactocentric radius, as in NGC\,4258, might not be stable in the vicinity of such a large BH mass. \subsubsection{Dust Torus and X-ray Luminosity} \citet{bar05} have argued that extremely powerful masers might be expected from high-luminosity QSOs. 
Every square parsec of area illuminated by the primary AGN X-ray emission produces a luminosity of $\sim$100 $L_{\odot}$ \citep{neu94}. The area of the torus illuminated by the AGN increases with the optical/UV continuum luminosity, as the dust sublimation radius scales as $r_{\rm sub} \propto L_{\rm opt/UV}^{1/2}$. Indeed, a scenario in which the high QSO luminosities result in gigamasers is consistent with the detection of J0804+3607, by far the most powerful maser known today. If $r_{\rm sub}$ increases with optical/UV luminosity, the molecular regions giving rise to the maser emission are also expected to lie further away from the nuclear engine. Such a prediction can be tested observationally: we would expect to observe rotation velocities that are smaller than in Seyfert-2 galaxies, despite a potentially more massive BH. However, these considerations do not take into account that the dust covering factor may decrease with optical luminosity \citep{sim05}. In addition, the water maser luminosity is expected to grow more slowly than the optical luminosity, since $L_{\rm X-ray}$/$L_{\rm opt}$ seems to decline with increasing luminosity \citep{vig03}. These two effects may cause megamasers to be intrinsically rare among type-2 QSOs. \subsubsection{Host Galaxy} One known difference between QSOs and Seyfert galaxies is their host galaxies: while the majority of Seyfert galaxies reside in spiral-like galaxies, QSOs are found predominantly in early-type galaxies \citep[e.g.,][]{dis95,bah97,mcl99,flo04}. Among the 78 known H$_2$O maser galaxies, the host galaxy properties have been determined for 74 objects (NED\footnote{As noted above, the morphological types given by NED are inhomogeneous.}; see Table~\ref{maserlf}). The majority of the known megamasers reside in spiral galaxies ($\sim$ 84\%), of which more than half were classified as barred or at least weakly barred galaxies (53\%). 
7\% of the host galaxies were classified as S0 and only 1\% as elliptical galaxies, the rest as irregular or peculiar galaxies ($\sim$ 8\%). It is remarkable that only one galaxy, namely NGC\,1052, has an elliptical morphology. Also, the search for megamaser emission from early-type galaxies with FRI radio morphology led to no detections \citep{hen98}. Does this imply that spiral galaxies somehow favor the presence of megamaser activity? Elliptical galaxies may simply lack the molecular gas necessary for the occurrence of H$_2$O masers. With respect to the nuclear activity, one important difference between spiral galaxies and early-type galaxies seems to be the fueling mechanism. While there is now convincing evidence that most if not all QSOs are triggered by mergers (e.g., \citealt{hut94,can01,guy06,can07,ben08,urr08}), their less luminous cousins, the Seyfert galaxies, do not show unusually high rates of interaction \citep[e.g.][]{mal98}. For Seyferts, the gas necessary for fueling the AGN may simply be provided by their spiral host galaxies and funneled into the very center through bar instabilities \citep{com06}.\footnote{However, the axis of the spiral disk seems to be completely uncorrelated with that of the accretion disk as traced by the radio jet (e.g. \citealt{sch02}).} Bars may also play an important role in the obscuration of the central AGN \citep{mai99}. Is fueling via bar instabilities a more stable mechanism, ensuring the existence of a central dusty region in which the water molecules can survive? And are these regions destroyed by the more violent process of fueling by mergers? If this were the case, one might expect to find megamasers preferentially in barred galaxies. However, the percentage of barred galaxies among the known megamasers residing in spiral galaxies, 53\% (see above), is not higher than what is typically found for the fraction of barred galaxies in the local universe (e.g. \citealt{bar08}). 
\section{SUMMARY} We report a search for megamasers in 274 SDSS type-2 AGNs (0.3 $<$ $z$ $<$ 0.83), half of which are luminous enough (in [OIII]) to be classified as type-2 QSOs \citep{zak03}. Apart from the detection of the gigamaser J0804+3607 already reported by \citet{bar05}, we do not find any additional line emission. We estimate the detection probabilities by comparing our sample with the known megamasers, taking into account the observed H$_2$O maser luminosity function and the sensitivity of our survey. We discuss intrinsic differences between the known megamasers, mainly low-luminosity AGNs such as Seyfert-2 galaxies and LINERs in the local universe, and our sample consisting of high-luminosity AGNs at higher redshift. At this stage, we cannot distinguish between the different scenarios presented that could lead to the high rate of non-detections. Further and more sensitive observations are required, e.g. using the Square Kilometre Array (SKA). Detecting megamasers in type-2 QSOs remains a challenging and yet, if successful, highly rewarding project, not only for determining BH masses but especially for the possibility of constraining distances and thus the properties of the elusive dark energy. \acknowledgments We thank Neil Nagar for his help with the luminosity function. We thank Phil Perillat and Chris Salter for help with the Arecibo observations and reductions. We thank the anonymous referee for carefully reading the manuscript and for useful suggestions. N.B. is supported through a grant from the National Science Foundation (AST 0507450). The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The 100-m radio telescope at Effelsberg is operated by the Max-Planck-Institut f{\"u}r Radioastronomie (MPIfR) on behalf of the Max-Planck-Gesellschaft (MPG). 
The Arecibo Observatory is part of the National Astronomy and Ionosphere Center, which is operated by Cornell University under a cooperative agreement with the National Science Foundation. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. For the reduction and analysis of the Effelsberg data, we used the GILDAS/CLASS software (http://www.iram.fr/IRAMFR/GILDAS).
\section{Introduction} The electro-optical response of liquid crystals (LCs) mostly depends on the birefringence, the dielectric and elastic properties, and the rotational viscosity~\cite{de,ch,deju,blinov}. Depending on the application, a particular set of physical properties is required. Since no single compound possesses all the desired properties, several liquid crystals are usually mixed in appropriate proportions to tune the physical properties for commercial devices. Furthermore, studies on liquid crystal mixtures with a variety of molecular shapes and structures have always proven rewarding from both the technological and the fundamental perspectives. Several new phases have been discovered that are absent in the parent compounds~\cite{pa,sd1,rp}. Binary mixtures of symmetric and highly asymmetric molecules, such as mixtures of rod-like and bent-shaped molecules, have revealed several new physical properties \cite{har,kundu,satya,satya1,dodge,dodge1}. However, physical studies on binary mixtures of nematic liquid crystals with identical core structures and highly antagonistic dipole orientations are scarce. High dipole moments of the molecules are known to make an important contribution to the intermolecular interactions and give rise to distinct physical properties of liquid crystals~\cite{nvm1,nvm2,nvm3,nvm4,sd}. In this paper, we report experimental studies on binary mixtures of two low-molecular-weight nematic liquid crystals, namely CCH-7 and CCN-47, which have identical (bicyclohexane) cores and antagonistic dipole orientations with respect to their long axes (see Fig.\ref{fig:figure1}). In particular, the permanent dipole moment is oriented parallel to the long axis in CCH-7 and perpendicular to it in CCN-47 \cite{sai,gj}. 
We have prepared several mixtures with varying composition and measured physical properties such as the birefringence and the dielectric and elastic anisotropies as a function of temperature. We show that the orientation of the polar group (-CN) not only contributes to the optical and dielectric anisotropies, but also significantly affects the bend-splay elastic anisotropy. \begin{figure}[htp] \centering \includegraphics[scale=0.8]{fig1.pdf} \caption{ Chemical structures and phase transition temperatures of the liquid crystals. Red arrows indicate the direction of the permanent dipole moment.} \label{fig:figure1} \end{figure} \section{Experimental} \begin{figure*}[htp] \includegraphics[scale=0.7]{fig2.pdf} \caption{ Schematic diagram of the experimental setup. The orientations of the polariser, analyser, photoelastic modulator (PEM) axes and the director are shown by red arrows.} \label{fig:figure2} \end{figure*} The experimental cells are made of two parallel indium-tin-oxide-coated glass plates with circularly patterned electrodes. To obtain planar (homogeneous) alignment of the director (the mean molecular orientation), the plates are coated with polyimide AL-1254, cured at 180$^\circ$C for 1 hour, and rubbed in an antiparallel fashion. For homeotropic alignment, the plates are coated with polyimide JALS-204 and cured at 200$^\circ$C for 1 hour. The typical cell thickness used in the experiments is about 8\,$\mu$m. The phase transition temperatures of the mixtures are determined using a polarising optical microscope (Nikon, LV100 POL) and a temperature controller (Instec, mK1000). The temperature-dependent birefringence of the mixtures is measured by a polarisation modulation technique, using a home-built electro-optic setup involving a He-Ne laser ($\lambda=632.8$\,nm), a photo-elastic modulator (PEM) and a lock-in amplifier~\cite{oak,sat1,sat2}. 
A schematic diagram of the experimental setup is shown in Fig.\ref{fig:figure2}. Planar cells are used for all the experimental measurements on samples with positive dielectric anisotropy. We measured the voltage-dependent dielectric constant of the samples from 0.02 to 18\,V in steps of 0.03\,V, using an LCR meter (Agilent E4980A) at a frequency of 4111 Hz. For samples with positive dielectric anisotropy, the dielectric constant below the Freedericksz threshold voltage \cite{freedericksz} is equal to $\epsilon_{\perp}$. To measure $\epsilon_{||}$, the linear part of the dielectric constant at higher voltages is plotted against $1/V$ and extrapolated to $1/V\rightarrow0$ (i.e., $V\rightarrow\infty$). This experiment is repeated at different temperatures to obtain the temperature-dependent dielectric anisotropy, $\Delta \epsilon$. A computer-controlled LabVIEW program is used to control the experiments. \\ The same procedure is followed for the samples with negative dielectric anisotropy, but in homeotropic cells~\cite{sai}. The splay and bend elastic constants of all the samples are measured following the procedure described in refs.~\cite{gruler,morris,sai}. The birefringence and dielectric data are measured with an accuracy of 2\%. The splay and bend elastic constants of the positive dielectric anisotropy materials are measured with an accuracy of 5\% and 8\%, respectively. The bend and splay elastic constants of the negative dielectric anisotropy materials are measured with an accuracy of 5\% and 10\%, respectively. \section{Results and discussion} The chemical structures of the pristine compounds are shown in Fig.\ref{fig:figure1} and the phase transition temperatures of the compounds are presented in Table-I. Apart from the nematic phase, CCN-47 also exhibits a nematic to smectic-A (N-SmA) phase transition at 30.2$^\circ$C ($T_{NS}$). The molecules of both CCN-47 and CCH-7 have identical bicyclohexane cores. 
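The $1/V$ extrapolation described in the Experimental section can be illustrated with a short numerical sketch. The voltage range matches our measurement window, but the dielectric values below are synthetic illustrative numbers (a linear-in-$1/V$ tail with an assumed $\epsilon_{||}=7.0$), not measured data.

```python
import numpy as np

# High-voltage tail of the dielectric measurement. In the experiment these
# values come from the LCR meter; here we synthesise a linear-in-1/V tail
# with an assumed epsilon_parallel = 7.0 (illustrative numbers only).
volts = np.array([6.0, 8.0, 10.0, 12.0, 15.0, 18.0])
eps = 7.0 - 6.0 / volts

# Fit the linear part of epsilon against 1/V and extrapolate to 1/V -> 0
# (V -> infinity), where the director is fully aligned with the field.
slope, eps_parallel = np.polyfit(1.0 / volts, eps, 1)
```

The intercept of the fit recovers $\epsilon_{||}$; with $\epsilon_{\perp}$ measured below the Freedericksz threshold, $\Delta\epsilon = \epsilon_{||} - \epsilon_{\perp}$ follows.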
The polar group (-CN) in CCN-47 is directed along the transverse direction and the compound exhibits negative dielectric anisotropy, whereas in CCH-7 the -CN group is directed along the longitudinal direction and the compound exhibits positive dielectric anisotropy \cite{ananth, ibra,sai}. \begin{table} \caption{ Phase transition temperatures of binary mixtures of CCN-47 and CCH-7. $T_{NI}$: nematic to isotropic; $T_{NS}$: N-SmA transition temperatures. `-' indicates no N-SmA transition is observed in the experimental temperature range. } \vspace{0.1in} \begin{tabular}{|c|c| c| c| c|} \hline & wt$\%$ CCN-47 & wt$\%$ CCH-7 & $T_{NI}$ ($^\circ$C) & $T_{NS}$ ($^\circ$C) \\ \hline Sample-1 &100 & 0 & 59.6 & 30.2 \\ \hline Sample-2 &75 & 25 & 46.5 & - \\ \hline Sample-3 & 55 & 45 & 47.6 & 39.2 \\ \hline Sample-4 & 23 & 77 & 71.5 & - \\ \hline Sample-5 & 0 & 100 & 85.0 & - \\ \hline \end{tabular} \end{table} \begin{figure}[htp] \centering \includegraphics[scale=0.58]{fig3.pdf} \caption{ Variation of (a) birefringence $(\Delta n)$ and (b) dielectric anisotropy ($\Delta\epsilon$) of the mixtures as a function of shifted temperature.} \label{fig:figure3} \end{figure} We prepared three mixtures with different wt\% of these compounds, as shown in Table-I. Sample-1 and Sample-5 are the pure CCN-47 and CCH-7 compounds, respectively. All the mixtures show the nematic phase. Sample-3 shows a relatively short nematic temperature range ($8.4^{\circ}$C) and a nematic to smectic-A phase transition. The birefringence ($\Delta n$) of the samples as a function of shifted temperature is shown in Fig.\ref{fig:figure3}(a). At a shifted temperature of $T-T_{NI}=-12^{\circ}$C, the birefringence of pristine CCN-47 is $\Delta n\simeq0.027$ and that of pristine CCH-7 is $\Delta n\simeq0.047$, in good agreement with previous reports \cite{sai,ibra,ananth}. At a fixed shifted temperature, $\Delta n$ increases with increasing wt\% of CCH-7 in the mixtures, as expected. 
Since the molecules have identical core structures, the difference in the birefringence (i.e., $\Delta n_{CCH-7}-\Delta n_{CCN-47}\simeq0.02$) arises mostly from the antagonistic orientation of the -CN groups. The variation of the dielectric anisotropy ($\Delta \varepsilon$) of the samples is shown as a function of shifted temperature in Fig.\ref{fig:figure3}(b). At $T-T_{NI}=-12^\circ$C, the dielectric anisotropy of pristine CCH-7 is large and positive ($\Delta\epsilon\simeq 3.8$), whereas that of CCN-47 is large and negative ($\Delta\epsilon\simeq -4.2$). The core structures of the two molecules are identical, hence the large difference in $\Delta \varepsilon$ of the pristine samples is mostly due to the difference in the orientation of the -CN groups. This indicates that the antagonistic orientations of the -CN groups in a mixture tend to cancel out their contributions to the dielectric anisotropy. In fact, $\Delta \varepsilon$ of Sample-3, which is composed of nearly 55wt\% of CCN-47, is very close to zero ($\Delta\epsilon\simeq 0.03$). \begin{figure}[htp] \includegraphics[scale=0.75]{fig4.pdf} \caption{ Variation of (a) splay ($K_{11}$) and (b) bend ($K_{33}$) elastic constants with shifted temperature. (c) Variation of $K_{33}$ and $K_{11}$ with wt\% of CCH-7 at a relative shifted temperature $T-T_{NI}=-8^\circ$C.} \label{fig:figure4} \end{figure} \begin{figure}[htp] \includegraphics[scale=0.7]{fig5.pdf} \caption{ Ratio $K_{33}/K_{11}$ (a) as a function of shifted temperature and (b) as a function of wt\% of CCH-7 at a relative shifted temperature $T-T_{NI}=-8^\circ$C. (c) Variation of calculated $K_{33}/K_{11}$ with length-to-width ratio ($L/D$) using Eq.(1) at a few values of $\overline{P_{4}}/\overline{P_{2}}$.} \label{fig:figure5} \end{figure} The variations of the splay $(K_{11})$ and bend $(K_{33})$ elastic constants of the samples as a function of shifted temperature are shown in Fig.\ref{fig:figure4}(a,b). 
Both $K_{11}$ and $K_{33}$ increase with decreasing temperature, as they are proportional to the square of the orientational order parameter $S$~\cite{de}. In pristine CCN-47 (Sample-1), both $K_{11}$ and $K_{33}$ tend to diverge near room temperature due to the short-range presmectic order \cite{sai, sai1}. Figure~\ref{fig:figure4}(c) shows that in pristine CCN-47 the bend-splay elastic anisotropy ($\Delta K=K_{33}-K_{11}$) is negative, whereas in pristine CCH-7, $\Delta K$ is positive. It changes sign between 25wt\% and 50wt\% of CCH-7. Figure~\ref{fig:figure5}(a) shows the ratio of the two elastic constants ($K_{33}/K_{11}$) of the samples as a function of shifted temperature. It is observed that for Sample-1 and Sample-2, $K_{33}/K_{11}<1$, whereas for Sample-3, Sample-4 and Sample-5, $K_{33}/K_{11}>1$. Figure~\ref{fig:figure5}(b) shows that at a fixed shifted temperature ($T-T_{NI}=-8^\circ$C) the ratio $K_{33}/K_{11}$ increases with increasing wt\% of CCH-7. The variation of $K_{33}/K_{11}$ can be qualitatively explained by a molecular theory proposed by Priest~\cite{prist}, considering the effective length-to-width ratio of the molecules. The ratio $K_{33}/K_{11}$ is related to the molecular properties by \cite{prist}: \begin{equation} \frac{K_{33}}{K_{11}}=\frac{1+\Delta+4\Delta^{'}\overline{P_{4}}/\overline{P_{2}}}{1+\Delta-3\Delta^{'}\overline{P_{4}}/\overline{P_{2}}} \end{equation} where $\Delta=(2R^2-2)/(7R^2+20)$, $\Delta^{'}=9(3R^2-8)/[16(7R^2+20)]$ and $R=(L-D)/D$; $L$ and $D$ are the length and width of the spherocylindrical molecules. The calculated ratio $K_{33}/K_{11}$ for various values of $\overline{P_{4}}/\overline{P_{2}}$ is shown in Fig.\ref{fig:figure5}(c). The ratio $K_{33}/K_{11}$ increases or decreases with $L/D$ depending on the sign of $\overline{P_{4}}/\overline{P_{2}}$; for positive values of $\overline{P_{4}}/\overline{P_{2}}$, it increases with $L/D$. 
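Equation (1) is straightforward to evaluate numerically. The short sketch below is a direct transcription of Eq.~(1), evaluated for $L/D=7$ and $\overline{P_{4}}/\overline{P_{2}}=0.5$, with no further assumptions.

```python
def k33_over_k11(L_over_D, p4_over_p2):
    # Priest's expression, Eq. (1), for spherocylinders of length L and width D.
    R = L_over_D - 1.0  # R = (L - D)/D
    delta = (2.0 * R**2 - 2.0) / (7.0 * R**2 + 20.0)
    delta_prime = 9.0 * (3.0 * R**2 - 8.0) / (16.0 * (7.0 * R**2 + 20.0))
    return ((1.0 + delta + 4.0 * delta_prime * p4_over_p2)
            / (1.0 + delta - 3.0 * delta_prime * p4_over_p2))

# K33/K11 exceeds unity for elongated molecules with positive P4/P2:
ratio = k33_over_k11(7.0, 0.5)   # ~1.76
```

For $\overline{P_{4}}/\overline{P_{2}}=0$ the ratio reduces to unity, and for positive $\overline{P_{4}}/\overline{P_{2}}$ it grows with $L/D$, reproducing the trend shown in Fig.~\ref{fig:figure5}(c).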
X-ray measurements of the nematic phase of CCH-7 revealed an antiparallel local ordering of the molecules due to the antiparallel correlation of the longitudinal polar group (-CN) \cite{gj}. Bradshaw \textit{et al.} showed that this antiparallel local ordering of the dipoles increases the effective length of the CCH-7 molecules \cite{brad}. As a result, the ratio $K_{33}/K_{11}$ is greater than 1 (see Fig.\ref{fig:figure5}(a)). In our previous study, energy-minimised DFT calculations showed that the shape of the CCN-47 molecule is bent \cite{sai}. Due to such a shape, the effective length-to-width ratio could be relatively small. In addition, these molecules do not exhibit a strong antiparallel correlation, owing to the transverse orientation of the polar group (-CN)~\cite{praveen}. This could result in $K_{33}/K_{11}<1$ for pure CCN-47. As we increase the wt\% of CCH-7 beyond about 25wt\%, the antiparallel correlation of the permanent longitudinal dipoles (-CN) of CCH-7 develops, increasing the effective length-to-width ratio. Thus, increasing the wt\% of CCH-7 in the mixture is equivalent to increasing the effective molecular length. The best agreement between the experimental data and the calculation is obtained for a length-to-width ratio of $L/D\simeq 7$ and a ratio $\overline{P_{4}}/\overline{P_{2}}=0.5$. In X-ray diffraction studies of CCH-7, the effective $L/D$ was reported to be $\simeq6.5$~\cite{gj}. This is very close to the value at which our experimental results match the calculations (see Fig.\ref{fig:figure5}(c)). It would be very useful to have experimental measurements of $\overline{P_{4}}/\overline{P_{2}}$ for these samples. It may be mentioned that both the rotational and the translational entropy of molecules with a longitudinal dipole moment are expected to be reduced by the antiparallel correlation. 
This effect should be absent in molecules with a transverse dipole moment due to the lack of antiparallel correlation. Consequently, the orientational order parameter, and eventually the dielectric and elastic properties, could differ in the respective systems. A detailed computer simulation may be useful for getting quantitative information. \\ \section{Conclusion} In conclusion, we have measured $\Delta n$, $\Delta \varepsilon$ and $\Delta K$ of pristine samples as well as some mixtures of CCH-7 and CCN-47 liquid crystals, possessing identical core structures and antagonistic dipole orientation. Both $\Delta n$ and $\Delta \varepsilon$ of the mixtures decrease systematically with increasing wt\% of CCN-47. In particular, at 55wt\%, $\Delta n$ is reduced by 33\% relative to the pristine CCH-7 sample and $\Delta \varepsilon$ becomes almost zero. Since the core structures are identical, the large change in the optical and dielectric anisotropies of the mixtures is mostly due to the antagonistic orientation of the dipolar group (-CN). The bend-splay elastic anisotropy $\Delta K$ changes from negative to positive beyond $\sim$37wt\% of CCH-7. The analysis suggests that the antiparallel correlation of dipoles and the resulting molecular association, which is absent in pristine CCN-47, becomes significant beyond 37wt\% of CCH-7 in the mixture. Thus, the orientation of the strongly polar group (-CN) with respect to the molecular axis influences the elastic properties significantly, in addition to the optical and dielectric properties.\\ {\bf Acknowledgments}: This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No. 2020R1A2C1006464). SD acknowledges support from DST-FIST-II, School of Physics, University of Hyderabad. \\
\section{Introduction} Stress Mirror Polishing (SMP) is a manufacturing method that allows obtaining the required shape of optical surfaces using the deformation and elasticity properties of materials. In this approach, the mirror is deformed to the inverse of the desired shape (here a quasi-parabola rather than a sphere), spherically polished using full-size tools, and takes the correct form once the applied constraints are released. This technique has been developed for forty years on several types of mirrors, such as the Keck segments \cite{Lubliner:80} or the Spectro-Polarimetric High-contrast Exoplanet Research (SPHERE) instrument on the Very Large Telescope (VLT) \cite{refId0}. Contrary to the classical aspherical polishing process, where small-size tools introduce high-order spatial frequency errors on the optical surface, the SMP process uses large tools, which give an extreme surface quality due to the continuous deformation and polishing behaviors. Moreover, due to the large area of the polishing tools, the SMP process \cite{Izazaga_Perez_SMP} allows considerable savings in manufacturing time and is perfectly suited to producing a large number of primary mirror segments in a short time for future space and ground-based observatories such as the Thirty Meter Telescope \cite{TMT2010,TMT2012} and LUVOIR \cite{LUVOIR} projects. SMP techniques also have a significant benefit for the manufacturing of large lightweight aspheric primary mirrors for space telescopes, especially as they can be applied on a thin shell mirror and combined with an active support \cite{Burge} or a rigid assembly using a sandwich structure with two facesheets and a Carbon Fiber Reinforced Composite (CFRC) honeycomb\cite{Catanzaro}.
This could be perfectly suited for the manufacturing of ultra-high surface quality primary mirrors, such as the ones foreseen in future high-contrast imaging missions for exoplanet detection and characterization, or simply to gain in terms of manufacturing time for more classical mirrors. We investigated the possibility of producing such a lightweight aspheric primary mirror for space telescopes using classical elasticity theory combined with new tools provided by Finite Element Analysis (FEA), such as ``shape optimization'', going beyond the classical use described by Izazaga-P\'erez \cite{Izazaga_Perez_FEM}. In the first section, we introduce rotationally symmetric aspherics, known as conicoids, and the method to produce them using Active Optics and SMP, more specifically by giving an example for the $3^{rd}$-order spherical Zernike aberration. We then discuss the definition of shape optimization and the strategy adopted to implement it. Finally, an algorithm, implemented in the Python language, has been developed to optimize the obtained deformation using data provided by Finite Element Model (FEM) outputs. \section{Conicoid Mirror} \subsection{Definition} \label{sec:examples} Primary mirrors are one of the most sensitive parts of space telescopes. Increasing the diameter allows collecting a larger amount of photons and detecting fainter objects, while the surface quality is crucial to obtain the best images of stars, planets or exoplanets \cite{refSPHERE}. For these reasons, a large diameter and high surface quality are requested for most future space and ground-based observatories. Nowadays, most telescope optical designs use parabolic primary mirrors, which are more convenient to focus rays from objects at infinity without introducing any additional complexity. The sphere and the parabola are two particular solutions of what is commonly called \textit{conicoids}.
Let us define Eq.(\ref{eq:conicoids}) of a \textit{conicoid} aspheric optical surface with rotational symmetry by: \begin{equation} z = \frac{r^2}{R_c + \sqrt{R_c^2-(1+C)r^2} } \label{eq:conicoids} \end{equation} where $R_c$ is the radius of curvature of the aspheric with conic constant $C$ and $r$ is the radial coordinate from the center to the semi-diameter $a$. Note that the optical surface is a sphere if $C=0$ and a parabola if $C=-1$. Expanding this expression in a power series, we obtain for the first terms Eq.(\ref{eq:expansion series}): \begin{equation} z = \frac{r^2}{2R_c} + \frac{(1+C)r^4}{8R_c^3 } + \frac{(1+C)^2r^6}{16R_c^5 } + \cdots \label{eq:expansion series} \end{equation} Subtracting the asphere $z_a$ from the sphere $z_s$, with radii of curvature $R_a$ and $R_s$ respectively, allows us to determine the deformation $z_d$ to be applied during the polishing process as: \begin{equation} z_d = z_s - z_a \label{eq:deformation} \end{equation} Substituting Eq.(\ref{eq:expansion series}) into Eq.(\ref{eq:deformation}), with $C=0$ for the sphere, we obtain (Eq.(\ref{eq:deformation series})): \begin{equation} z_d = \frac{r^2}{2}(\frac{1}{R_s}-\frac{1}{R_a}) + \frac{r^4}{8}(\frac{1}{R_s^3}-\frac{1+C}{R_a^3}) + \frac{r^6}{16}(\frac{1}{R_s^5}-\frac{(1+C)^2}{R_a^5}) + \cdots \label{eq:deformation series} \end{equation} $R_s$ is defined as the first parameter of the substrate dimensions. $R_a$ is defined in order to minimize the removal volume defined by Eq.(\ref{eq:volume}) as described by Unti \cite{Unti:66}: \begin{equation} V=2\pi\int_0^r \Delta z r dr \label{eq:volume} \end{equation} where $\Delta z$ is the difference in abscissa between the sphere and the aspheric (Eq.(\ref{eq:Delta z})): \begin{equation} \Delta z = \epsilon + R_s - \sqrt{R_s^2-r^2} - z_a \label{eq:Delta z} \end{equation} where $\epsilon$ is the shifted distance from the origin as shown in Fig. \ref{fig:Best-fit sphere}. \begin{figure}[htpb] \centering \begin{tikzpicture}[scale=1.]
\begin{axis}[axis lines=middle, xlabel=$r$, ylabel=$z$, enlargelimits, ytick={2.171,1.225}, yticklabels={$z_s$,$z_a$}, xtick={\empty}] \pgfplotsset{compat=1.11} \addplot[name path=F,black,domain={0:5}] {x^2*0.1} node[pos=.8, below,at end,yshift=-1cm,xshift=-0.3cm]{$ASPHERIC$}; \addplot[name path=G,black,thick,domain={0:5}] {(4.5-sqrt(4.5^2-x^2))+0.5}node[pos=.1, above,midway,yshift=2cm]{$SPHERE$}; \addplot[pattern=north east lines, pattern color=black!50]fill between[of=F and G, soft clip={domain=0:5}]; \node[anchor=east] at (axis cs:0,0.2) {$\epsilon$}; \filldraw[color=gray!50] (3.3,1.089) -- (3.3,1.94) -- plot [domain=3.3:3.7] (\x,0.175*\x^2) (3.7,2.44) -- (3.7,1.369) -- plot [domain=3.7:3.3] (\x,0.1*\x^2) ; \draw[dashed,below left]node{$0$} {(0,2.171)--(3.5,2.171)node[above, midway]{$r$}} {(0,1.225)--(3.5,1.225)}; \draw[->](0,4)--(3.5,2.171)node[above, midway]{$R_s$}; \end{axis} \end{tikzpicture} \caption{Definitions of the best-fit sphere (BFS) minimizing the volume removal to obtain the targeted aspheric surface.} \label{fig:Best-fit sphere} \end{figure} \begin{equation} \frac{dV}{dR} = 0 \label{eq:volume minimization} \end{equation} Unti \cite{Unti:66} demonstrated that, for a perfect parabolic mirror with a conic constant equal to $-1$, the best-fit sphere radius of curvature minimizing the volume removal (Eq.(\ref{eq:volume minimization})) is: \begin{equation} R_{BFS} = R_a + \frac{r^2}{4R_a} \label{eq:radius of best sphere} \end{equation} The best-fit sphere radius can also be determined by minimizing the Root-Mean Square (RMS) deviation between the targeted optical surface and the best-fit sphere, as done in the optical software ZEMAX \cite{Zemax2001}. The deformation $z_d$ is then determined by substituting the best-fit sphere radius $R_{BFS}$ (Eq.(\ref{eq:radius of best sphere})) for the sphere radius $R_s$ in Eq.(\ref{eq:deformation}).
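These relations are easy to check numerically. The sketch below (function names are our own, not from the paper) evaluates the conicoid sag, Unti's best-fit sphere radius and the resulting deformation; assuming the best-fit sphere radius quoted later for the prototype was obtained from Unti's relation applied to the initial radius of curvature, $R_s = 3773.37$ mm with a semi-diameter of 540 mm recovers $R_{BFS} \simeq 3792.69$ mm:

```python
import numpy as np

def conicoid_sag(r, Rc, C):
    """Sag of a rotationally symmetric conicoid, Eq. (1)."""
    return r**2 / (Rc + np.sqrt(Rc**2 - (1.0 + C) * r**2))

def unti_bfs_radius(Ra, a):
    """Unti's best-fit sphere radius for a parabola (C = -1), Eq. (8)."""
    return Ra + a**2 / (4.0 * Ra)

def deformation(r, Ra, C, a):
    """Deformation z_d = z_s - z_a to apply during stress polishing, Eq. (3)."""
    R_bfs = unti_bfs_radius(Ra, a)
    # Sphere (C = 0) at the best-fit radius minus the targeted asphere.
    return conicoid_sag(r, R_bfs, 0.0) - conicoid_sag(r, Ra, C)
```

The sphere case ($C=0$) reduces, as expected, to the exact spherical sag $R_c - \sqrt{R_c^2 - r^2}$.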
\subsection{Active Optics and $3^{rd}$-order spherical aberration generation} Stress mirror polishing, developed by Lema\^itre \cite{Lemaitre:72}, is derived from Schmidt's work aspherizing his refractive correctors using a full-size spherical tool to avoid any local retouch \cite{Everhart:66}. This technique relies on Active Optics methods, using the polishing pressure as a uniform load distribution. Active optics methods, based on the elasticity theory of materials, are usually used to correct the Wave-Front Error (WFE) of an optical system by applying the adjusted deformation on one or several surfaces. Single or combined deformation modes can be generated for this purpose with one or several actuators. For example SPHERE, the Very Large Telescope (VLT) instrument dedicated to exoplanet detection, is assisted by this method as a means of compensating small deformations due to external constraints on the instrument, such as gravity loads or temperature gradients, on long timescales \cite{procSPHERE}. For years, these methods have also been deployed for many applications such as Variable Curvature Mirrors (VCMs) \cite{Lemaitre2009}, implemented in the delay line system of the Very Large Telescope Interferometer (VLTI) in order to compensate optical path differences by changing the foci of mirrors, also called zoom mirrors \cite{1998A&AS..128..221F}. Usually, deformation modes are generated with a combination of pressure, forces and bending moments. However, pure aspheric shapes are difficult to execute on classical mirrors with a quasi-constant thickness distribution. For this particular reason, Variable Thickness Distribution (VTD) classes have been developed by Lema\^itre \cite{Lemaitre2009} to produce better first-order modes, by adapting the thickness distribution to the required deformation shape, the load distribution and the boundary conditions. In our particular case, VTD is the starting point to generate aspherics using stress polishing.
Third-order spherical aberration (SA3) can be generated by applying a uniform polishing pressure $P_p$ on the optical surface with a pushing central force $F_c$ in reaction, as developed by Lema\^itre \cite{Lemaitre2009_TSA3}, while the mirror's edge is maintained with a device discussed in the following paragraph. In that specific configuration, not only pure $3^{rd}$-order spherical aberration is produced but also curvature and other high-order spherical aberrations, which have to be removed. The dimensionless thickness distribution $T_{SA3}^D$ which produces $3^{rd}$-order spherical aberration is described \cite{Lemaitre2009_TSA3} by the following Eq.(\ref{eq:T40}): \begin{equation} T_{SA3}^D = [\frac{3+\nu}{1-\nu} \rho^\frac{-8}{3+\nu} - \frac{4}{1-\nu} \rho^{-2} + 1]^{1/3} \label{eq:T40} \end{equation} with $\rho=r/a$\hspace{0.1cm} $\in[0;1]$ ; $a$ the mirror's semi-diameter and $\nu$ the Poisson's ratio. \begin{figure}[htbp] \centering \begin{tikzpicture} \centering \begin{axis} [ xlabel={$Normalized \hspace{0.1cm} semi-diameter$}, ylabel={$T_{SA3}^D$}, xmin=0, xmax=1., ymin=0, ymax=18., grid=both, grid style={line width=1.5pt, draw=gray!10} ] \addplot[smooth,domain=0.05:1]{(((3+0.22)/(1-0.22) )*x^(-8/(3+0.22)) - (4/(1-0.22) )*x^(-2) + 1)^(1/3)}; \addplot[dashed,samples=200,domain=0.05:1]{x-x}; \end{axis} \end{tikzpicture} \caption{Dimensionless Variable Thickness Distribution $T_{SA3}^D$ as a function of the normalized semi-diameter. This profile results from Eq.(\ref{eq:T40}).} \label{fig:T40 distribution} \end{figure} It is important to note that, in this solution, the thickness is zero at the mirror's edge and infinite at the center, as shown in Fig. \ref{fig:T40 distribution}. In practice, primary mirrors have to be polished with an acceptable thickness at the edge, given the large mirror diameter, and a finite thickness at the center.
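The profile of Eq.(\ref{eq:T40}) can be evaluated directly; the sketch below (our own illustration, with $\nu = 0.22$ for Zerodur as in the figure) confirms the behaviour noted above, zero thickness at the edge and divergence at the center:

```python
import numpy as np

def t_sa3_dimensionless(rho, nu=0.22):
    """Dimensionless thickness distribution T_SA3^D of Eq. (10)."""
    term = ((3.0 + nu) / (1.0 - nu)) * rho ** (-8.0 / (3.0 + nu)) \
         - (4.0 / (1.0 - nu)) * rho ** (-2.0) + 1.0
    # Cube root; term -> 0 at rho = 1 and diverges as rho -> 0.
    return np.cbrt(term)
```

At $\rho = 1$ the bracket reduces to $(3+\nu-4)/(1-\nu)+1 = 0$, so the edge thickness vanishes exactly, matching the plotted profile.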
To obtain the real variable thickness distribution $T_{SA3}$ (Eq.(\ref{eq:TSA3_real})), it is necessary to multiply the normalized dimensionless thickness distribution $T_{SA3}^D$ by a scaling factor $T_0$ \cite{Lemaitre2009_TSA3} and to take into account the sphere deviation and the edge thickness $e$. The scaling factor depends on intrinsic parameters of the mirror's material, such as the Poisson's ratio $\nu$ and the Young's modulus $E$, but also on the external load $q$ and the sagitta $A_{SA3}$ of the SA3 expected to be generated: \begin{equation} T_{SA3} = T_{SA3}^D*T_0 - (R_s - \sqrt{R_s^2-r^2}) + e \label{eq:TSA3_real} \end{equation} with \hspace{0.5cm} $T_0$ = $[\frac{3 (1-\nu^2) q}{16.A_{SA3}.E}]^{\frac{1}{3}}$ \section{Shape optimization and Pure $3^{rd}$-order spherical aberration} Shape optimization is a very efficient tool as long as you know how to use it properly. Botkin \cite{Botkin} defined the basis of such optimization, going beyond the mass reduction problem, which mainly uses size optimization of a model in which the geometry remains unchanged. Shape optimization has since been developed \cite{haftka} and is currently used in a large number of various applications, such as fluid-structure interaction to find the optimal shape of a plane wing by minimizing drag \cite{mohammadi2010applied}. Very few examples are referenced in the literature, as noticed by Lund \cite{LUND}, especially for fluid-structure coupling, but the same could be said for the field of optics. One should note that it is not uncommon to encounter confusion between shape optimization and topology optimization. From an objective function, shape optimization refers to the relocation of points belonging to a given shape, while topology optimization determines whether a model element has to be removed or not based on the level of its \textit{involvement} in the model rigidity.
The shape optimization is based on the variation of design variables depending on an objective function, as mentioned before, which is usually a scalar subject to design constraints. The objective function can be a combination of design responses to provide a single final response. The main issue of using shape optimization is to generate the correct shape basis vectors. Indeed, each selected model gridpoint, also called a \textit{node}, will move along these basis vector directions. NASTRAN software integrates this functionality, as described by Kodiyalam \cite{KODIYALAM1991821}, but the vectors can also be generated by a self-programmed code. In this case, the model meshing should be adapted to provide both enough space to relocate nodes and a sufficient number \textit{N} of nodes to converge. Because of the large number of design variables, the optimizer is not able to explore all the possible options, but it can use design sensitivity analysis in order to determine the optimal set of design variables to reach the goal.
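Design sensitivity analysis can be illustrated with a simple finite-difference approximation of the rate-of-change coefficients (a sketch of ours, not the NASTRAN implementation; the `model_response` callable standing in for an FEA solution is hypothetical):

```python
import numpy as np

def sensitivity_matrix(model_response, v0, dv=1e-6):
    """Finite-difference estimate of lambda_ij = d r_j / d v_i.

    model_response : callable mapping a design-variable vector to the
        vector of model responses (e.g. one displacement per node);
        stands in for the FEA solver, which is far more expensive.
    v0 : baseline design-variable vector.
    """
    r0 = np.asarray(model_response(v0), dtype=float)
    lam = np.empty((len(v0), len(r0)))
    for i in range(len(v0)):
        v = np.array(v0, dtype=float)
        v[i] += dv  # perturb one design variable at a time
        lam[i] = (np.asarray(model_response(v), dtype=float) - r0) / dv
    return lam
```

Each row gives the influence of one design variable on all model responses, which is the information the optimizer uses to pick its next step.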
Design sensitivity computes all the rates of change \cite{Nastran2017}, also called \textit{partial derivative} coefficients $\lambda_{ij}$, for each design variable $v_i$ to anticipate its influence on the model response $r_j$, given by Eq.(\ref{Partial derivatives}): \begin{equation} \lambda_{ij}= \frac{\partial r_j}{\partial v_i} \label{Partial derivatives} \end{equation} \subsection{Problem Formulation} In our case we want to find the optimal variable thickness distribution with the aim of producing a pure $3^{rd}$-order spherical deformation, defined by Zernike polynomials \cite{Noll:76} as a \textit{"Peak-to-Valley"} (PtV) value: \vspace{-0.25cm} \begin{equation} z_{SA3}^{PtV} = (6\rho^4 - 6\rho^2)\hspace{0.1cm}z_{SA3}^{RMS}\hspace{0.1cm}\delta_{4,0} \label{eq:SA3 theorique} \end{equation} with \hspace{0.2cm} $\delta_{n,m}$ = $\sqrt{n+1}$\hspace{0.2cm} $\times$ \hspace{0.2cm} $\left \{ \begin{array}{ll} \ 1 & \mbox{if} \hspace{0.2cm} m = 0 \\ \ \sqrt{2} & \mbox{if} \hspace{0.2cm} m \neq 0 \end{array} \right.$ \vspace{0.1cm} $z_{SA3}^{RMS}$ is the root-mean-square value of the $3^{rd}$-order spherical aberration, linked to the PtV value through the $\delta_{n,m}$ coefficient. In this context, the goal of the optimization is to reduce the deviation of the deformation from the theoretical SA3 surface, given in Eq.(\ref{eq:SA3 theorique}), by minimizing the threshold $\beta(\Delta)$ with $\Delta = \delta_1,\delta_2,\dots,\delta_N$ and subject to the design constraints $\zeta_i$ (Eq.(\ref{Design constraints definition})): \begin{equation} \zeta_i = \beta(\Delta)-\delta_i \geqslant 0 \hspace{0.5cm} \forall i \in [1,\ldots,N] \label{Design constraints definition} \end{equation} where $\delta_i$ (Eq.(\ref{eq:delta i})) is the deviation between the model deformation response $z_i$ and the theoretical SA3 shape $z_{SA3}^i$ for each gridpoint \textit{i} of the optical surface, displayed in Fig.
\ref{fig:Beta optimization}, defined as: \begin{equation} \delta_i = z_i - z_{SA3}^i \label{eq:delta i} \end{equation} \begin{figure}[htbp] \centering \begin{tikzpicture} \centering \begin{axis}[ xlabel={$Normalized \hspace{0.1cm} semi-diameter$}, ylabel={$displacements \hspace{0.2cm} z$ ($\mu$m PtV)}, xmin=0, xmax=1., ymin=-52, ymax=1., grid=both, grid style={line width=1.5pt, draw=gray!10} ] \filldraw[black] (20,381.41) circle (1pt); \filldraw[black] (20,445.81) circle (1pt); \filldraw[black] (10,428.42) circle (1pt); \filldraw[black] (10,500.87) circle (1pt); \draw[<->] (20,381.41) to node [left] {$\delta_2$} (20,445.81); \draw[<->] (10,428.42) to node [left]{$\delta_1$} (10,500.87); \plot[smooth,domain=0:1]{(6*x^4 - 6*x^2)*(14.4*sqrt(5))}; \addlegendentry{Theoretical function ($z_{SA3}^i$)} \plot[dashed,smooth,domain=0:1]{(6*x^4 - 6*x^2 + (x-1)/4)*(14.4*sqrt(5))}; \addlegendentry{FEA output ($z_i$)} \end{axis} \end{tikzpicture} \caption{Difference between the theoretical function $z_{SA3}^i$ (solid line) and the FEA output $z_i$ (dashed line).} \label{fig:Beta optimization} \end{figure} \begin{figure}[htbp] \centering \begin{tikzpicture} \centering \begin{axis}[ xlabel={$Normalized \hspace{0.1cm} semi-diameter$}, ylabel={$\delta_i$ ($\mu$m)}, xmin=0, xmax=1., ymin=0, ymax=10., grid=both, grid style={line width=1., draw=gray!10} ] \filldraw[black] (20,62.79) circle (1pt) node [anchor=south west]{$\delta_2$}; \filldraw[black] (10,72.45) circle (1pt) node [anchor=north]{$\delta_1$}; \filldraw[black] (30,16.5*3.22) circle (1pt) node [anchor=south west]{$\delta_3$}; \filldraw[black] (40,13.5*3.22) circle (1pt)node [anchor=south west]{$\delta_4$}; \filldraw[black] (50,10.5*3.22) circle (1pt) node [anchor=south west]{$\delta_5$}; \filldraw[black] (60,7.5*3.22) circle (1pt) node [anchor=south west]{$\delta_6$}; \draw[<->] (10,72.45) to node [right] {$\zeta_1$} (10,82); \plot[thick,dash dot,samples=200,domain=0:1.1]{8.2}; \addlegendentry{Objective function} \draw (25,90) node
{$\beta(\Delta)$}; \end{axis} \end{tikzpicture} \caption{Beta optimization principle. $\delta_i$ has to be decreased by reducing the objective function $\beta(\Delta)$.} \label{fig:Beta optimization principle} \end{figure} Fig. \ref{fig:Beta optimization principle} shows the Beta optimization principle and the design constraints $\zeta_i$, which force the $\delta_i$ values to remain lower than the threshold $\beta(\Delta)$ as it is reduced. \newpage \subsection{Shape optimization strategy} \subsubsection{Shape optimization algorithm} The shape optimization method has to start with a model close to the optimal form, such as the thickness distribution described in Eq.(\ref{eq:T40}), to avoid large gridpoint displacements and local minima. Finite Element Analysis (FEA) is run for the first calculation with the initial model parameters to get the input data: optical surface displacements as design responses and the variable thickness distribution as design variables, as laid out in Fig. \ref{fig:Flowchart}. Thereafter, the input data files are used to generate the bulk data entries with the aid of a self-programmed Python algorithm that will be embedded in the optimization input file. This program calculates the maximum difference between the theoretical desired shape and the FEM displacements, $\delta_{max} = \max (\delta_i)$, in order to define the minimum threshold to implement. Since the difference $\delta_{max}$ is small, a multiplying factor $c_3$ has to be applied to ensure an objective function $\beta_{obj}$ large enough to prevent numerically small values, depending on the model unit (Eq.(\ref{eq:Beta objective})): \begin{equation} \beta_{obj} = c_3*\beta(\Delta) \label{eq:Beta objective} \end{equation} As mentioned, the threshold has to be higher than the maximum difference $\delta_{max}$ and has to be used as a design constraint for each optical surface gridpoint.
Eq.(\ref{eq:Design constraints}) describes the design constraint $\zeta_i$ implemented: \begin{equation} \zeta_i = c_1*(c_2*\beta(\Delta) - |z_{SA3}-z_i|) \label{eq:Design constraints} \end{equation} where $c_1$ and $c_2$ are the two coefficients used to manage the shape optimization. $c_2$ is slightly higher than the difference $\delta_{max}$ to avoid a constraint violation before running the optimization algorithm. The design constraint sensitivity is handled by the scaling factor $c_1$, depending on the design model. As regards the design variables, selected on the thickness distribution boundary, they are all model gridpoints in this particular numerical simulation. They are compiled in a DVGRIDs entry defining the direction and magnitude for a given change in a design variable. Once an optimization run is terminated, particularly the first one, it is indispensable to remesh the Finite Element Model (FEM) to get more accurate results, due to the deformation of the elements induced by the gridpoint displacements. With a projection on the Zernike orthonormal basis, the optical surface is decomposed into a linear combination of weighted orthogonal polynomials. Due to ill-posed conditions, such as non-adapted optimization coefficients or an initial design very far from the target, it is rare to converge in one optimization step. In most cases, the optimization process needs a certain number of iterations, but usually fewer than a dozen. The optimization is performed over the entire mirror surface $a$, but it can be narrowed to the area of interest.
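The objective and constraint construction described above can be sketched in a few lines (our own illustration of Eqs. (13), (16) and (17), with default coefficient values chosen for the example only):

```python
import numpy as np

def sa3_target(rho, z_rms):
    """Target PtV SA3 shape, Eq. (13), with delta_{4,0} = sqrt(5)."""
    return (6.0 * rho**4 - 6.0 * rho**2) * z_rms * np.sqrt(5.0)

def beta_and_constraints(z_fea, z_target, c1=1.0, c2=1.1):
    """Threshold beta and design constraints zeta_i, Eqs. (16)-(17).

    c2 scales beta slightly above delta_max so that no constraint is
    violated before the first optimization step.
    """
    delta = np.abs(np.asarray(z_fea) - np.asarray(z_target))
    beta = c2 * delta.max()          # threshold the optimizer drives down
    zeta = c1 * (beta - delta)       # must all stay >= 0
    return beta, zeta
```

Driving `beta` down while keeping every `zeta` non-negative is exactly the minimax formulation of Fig. \ref{fig:Beta optimization principle}.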
\begin{figure}[htbp] \begin{tikzpicture}[node distance=1.5cm] \tikzstyle{startstop} = [rectangle, rounded corners, minimum width=3cm, minimum height=1cm,text centered, text width=3cm, draw=black, fill=red!30] \tikzstyle{io} = [trapezium, trapezium left angle=70, trapezium right angle=110, minimum width=2cm, minimum height=1cm, text centered, text width=3cm, draw=black, fill=blue!10] \tikzstyle{process} = [rectangle, minimum width=3.7cm, minimum height=1cm, text centered, text width=3cm ,draw=black, fill=orange!30] \tikzstyle{decision} = [diamond, draw=black, fill=green!30] \tikzstyle{arrow} = [thick,->,>=stealth] \node (start) [startstop] {Initial design}; \node (parameters) [io, below of=start] {Initial design parameters}; \draw [arrow] (start) -- (parameters); \node (static calculation) [process,below of=parameters] {Static calculation (FEA)}; \draw [arrow] (parameters) -- (static calculation); \node (Optical surface data output) [process, below of=static calculation, xshift=-2cm] {Optical surface \\ data output \\ $X_{loc}^{OS},X_{disp}^{OS},Z_{loc}^{OS},Z_{disp}^{OS}$}; \draw [arrow] (static calculation) -- (Optical surface data output); \node (VTD data output) [process, below of=static calculation, xshift=2cm] {VTD data $X_{loc}^{VTD},Z_{loc}^{VTD}$}; \draw [arrow] (static calculation) -- (VTD data output); \node (Objective function) [process, below of=Optical surface data output] {Objective function \\ Constraints}; \draw [arrow] (Optical surface data output) -- (Objective function); \node (DVGRIDs) [process, below of=VTD data output] {DVGRIDs}; \draw [arrow] (VTD data output) -- (DVGRIDs); \node (Bulk data) [process, below of=static calculation, yshift=-3cm] {Bulk data entries}; \draw [arrow] (Objective function) -- (Bulk data); \draw [arrow] (DVGRIDs) -- (Bulk data); \node (Optimization process) [process, below of=Bulk data] {Optimization process using design sensitivity}; \draw [arrow] (Bulk data) -- (Optimization process); \node (Remeshing) [process, 
below of=Optimization process] {Remeshing}; \draw [arrow] (Optimization process) -- (Remeshing); \node (Residual map) [process, below of=Remeshing] {Residuals map}; \draw [arrow] (Remeshing) -- (Residual map); \node (Convergence) [decision,below of=Residual map, yshift=-1cm] {$\beta$ < 50nm ?}; \draw [arrow] (Residual map) -- (Convergence); \node (fit) [draw,dashed,inner sep=5pt,fit={(Objective function) (DVGRIDs) (Bulk data)}]{}; \draw [arrow] (Convergence.east) -- + (3.5,0) node [near start,above] {No} |- (static calculation.east); \node (Final design) [startstop, below of=Convergence, yshift=-0.9cm] {Final design}; \draw [arrow] (Convergence) -- node [anchor=east]{Yes}(Final design); \node (fit2) [draw,dashed,inner sep=8pt,fit={(Residual map)}]{}; \draw [auto=left,thin,-] (fit.west) -- + (-0.5,0) node [at end,xshift=-1cm,yshift=-2cm,text width=3cm]{PYTHON \\ self-programmed codes} |- (fit2.west); \end{tikzpicture} \centering \caption{Flowchart of the optimization design.} \label{fig:Flowchart} \end{figure} \subsubsection{Initial design} This first study is limited to a two-dimensional axisymmetric problem using Finite Element Analysis (FEA) software. In this approach, \textit{CTRIAX6} elements are required to make the calculation possible. However, the \textit{CTRIAX6} element defines isoparametric, axisymmetric triangular elements with mid-side gridpoints, which do not allow large deformations due to element distortion. Therefore, remeshing the model at each step is critical to pursue the optimization process. The initial design of the \textit{"tulip-form"} mirror based on the $T_{SA3}$ variable thickness distribution is shown in Fig. \ref{fig:Initial design 2D}.
\begin{figure} \centering \begin{tikzpicture}[scale=1.0] \begin{axis}[axis lines=middle, xlabel=$r$, ylabel=$z$, xmin=-2, xmax=570, ytick={\empty}, xtick={\empty}] \addplot[name path=F,black,domain={-10:540},yscale=0.5,yshift=2.2cm] {-(((3+0.22)/(1-0.22) )*(x/570)^(-8/(3+0.22)) - (4/(1-0.22) )*(x/570)^(-2) + 1)^(1/3)+(3773.37-sqrt(3773.37^2-x^2))} node[pos=0.35, below,yshift=-0.3cm]{\textbf{$T_{SA3}$}}; \addplot[name path=G,black,thick,domain={0:540},yscale=0.5,yshift=2.7cm] {(3773.37-sqrt(3773.37^2-x^2))}node[,pos=.1, above,midway,yshift=2cm]{$Optical \hspace{0.1cm}Surface$}; \addplot[pattern=north east lines, pattern color=black!50]fill between[of=F and G, soft clip={domain=0:540}]; \draw[<->](0,580)--(540,580)node[above, midway]{$a$}; \draw[<->](0,520)--(515,520)node[above, midway]{$r_{f}$}; \draw[<->](0,510)--(500,510)node[below, midway]{$r_{ca}$}; \draw[<->](0,400)--(138.5,400)node[above, midway]{$r_{ch}$}; \draw[<->](540,540)--(540,500)node[right, midway]{$e$}; \draw[<->](250,340)--(250,140)node[right, midway]{$c$}; \draw[-](0,140)--(12,140); \draw[dashed,below left]node{$0$} {(540,580)--(540,340)} {(515,520)--(515,340)} {(500,510)--(500,340)} {(0,140)--(250,140)} {(138.5,450)--(138.5,250)}; \end{axis} \end{tikzpicture} \caption{Initial 2D design with the optical surface (bold line) and the Variable Thickness Distribution (solid line). The geometry parameters are described in Table \ref{tab:Model parameters}.} \label{fig:Initial design 2D} \end{figure} \newpage The preliminary parameters introduced for the initial design are listed in Table \ref{tab:Model parameters}. We use a Zerodur vitro-ceramic substrate from the Schott company as primary mirror material due to its low coefficient of thermal expansion, perfectly adapted to space mirror stability. The mechanical properties of Zerodur are the following: Young's modulus = 91000 MPa, shear modulus = 37295.08 MPa, and Poisson's ratio = 0.22.
The outer mechanical diameter for SMP $a$ and the final outer mechanical diameter $r_f$ are 1080 mm and 1030 mm respectively, as shown in Fig. \ref{fig:Initial design 2D}. The clear aperture has been chosen for more convenience and set at 1000 mm for a quasi-parabolic mirror with a conic constant around -0.967. The planned central hole, which will be machined after the polishing process, is 277 mm in diameter. A starting edge thickness of 2.9 mm has been established in order to be able to support the mirror during the deformation. The total mass of the mirror before the central hole drilling is 25.29 kg. \vspace{-0.1cm} \begin{table}[htpb] \centering \caption{\bf Model parameters of the primary mirror} \begin{tabular}{lc} \hline Model design parameters & Values \\ \hline Initial outer mechanical diameter (for SMP) (a) & 1080 mm \\ Final outer mechanical diameter ($r_{f}$) & 1030 mm \\ Clear aperture diameter ($r_{ca}$) & 1000 mm \\ Initial Radius of curvature ($R_{s}$) & 3773.37 mm \\ Radius of the best-fit sphere ($R_{BFS}$) & 3792.69 mm \\ Central hole diameter added after polishing ($r_{ch}$) & 277 mm \\ Conic constant to achieve & -0.966682 \\ Edge thickness (e) & 2.9 mm \\ Central thickness (c) & 68 mm \\ Initial total mass & 25.29 kg \\ Final total mass with the central hole & 10.5 kg \\ \hline \end{tabular} \label{tab:Model parameters} \end{table} \newpage \subsubsection{Finite Element Model (FEM)} A compromise between large gridpoint displacements and good accuracy in the first calculation was reached with around 500 CTRIAX6 elements and 1200 gridpoints in the finite element model. These numbers should be increased when the objective function decreases in order to converge more effectively. A combination of loads is applied in the FEA, including a nominal polishing pressure of $31.5\,\mathrm{g/cm^2}$ on the optical surface, associated with a central pressure of $9.51$ MPa at the \textit{"tulip-form"} base to generate pure $3^{rd}$-order spherical aberration.
The entire model is placed under $9.81\,\mathrm{m.s^{-2}}$ gravity, as shown in Fig. \ref{fig:Meshing}, and stays fixed during the whole optimization process. An articulated and movable border along the radial axis is placed at the top edge of the outer mechanical diameter. The purpose of this boundary condition is to avoid introducing radial moments $M_r$ and radial tensile forces $N_r$, and to allow a higher sagitta of mirror deflection and a pure $3^{rd}$-order spherical aberration. \begin{figure}[htbp] \centering \includegraphics[width=0.8\textwidth]{First_meshing.pdf} \caption{First meshing with initial design parameters and CTRIAX6 elements. Model loads are gravity, polishing and central pressures. An articulated and movable edge is implemented to increase the mirror's deflection.} \label{fig:Meshing} \end{figure} \subsubsection{Validity of the FEM} Finite Element Modeling (FEM), and particularly Finite Element Analysis (FEA), is commonly used to study new opto-mechanical configurations before entering the manufacturing phase of a mirror. In this approach, the FEA outputs are cross-checked with demonstrator experimental results in a second phase. Most of the time a very high correlation is noticed; however, this last phase is mandatory to perfectly adjust the model. This procedure has been successfully used to design, simulate and polish toric mirrors of the SPHERE instrument for the VLT \cite{Hugot:08,Hugot_SMP_TM} and to modify the design of the warping harnesses in order to gain on-sky performance \cite{procSPHERE}. The same approach led to the study and development of active optics dedicated to space applications through the MADRAS deformable mirror \cite{Laslandes2011,Laslandes2013} and is currently applied for the design and manufacturing of off-axis parabolas for the WFIRST mission \cite{Roulet2018}.
This specific background in combining FEA with opto-mechanical fabrication, acquired during previous projects, gives us good confidence in the numerical results presented in the following section with respect to the upcoming prototyping phase. \section{Results and Discussions} The goal of the optimization is to deform the mirror by producing 14.4 $\mu$m RMS of $3^{rd}$-order spherical aberration during the polishing process, in order to obtain the targeted aspheric, which is an ellipsoid with a conic constant of -0.966682 for a full aperture diameter of 1080 mm. The optimization parameters, i.e. the $c_1$, $c_2$ and $c_3$ coefficients, have to be updated at each re-meshing step depending on the design responses. DVGRID data entries are provided as a function of the design model, but the displacement of each gridpoint decreases as the final shape is approached. Since the model is axisymmetric, only axisymmetric modes such as Focus and the spherical aberrations are produced. Fig. \ref{fig:first deformation} shows the deformation resulting from the initial calculation before the first optimization. At first sight, the mirror has the shape of the planned $3^{rd}$-order spherical aberration. \begin{figure} \centering \includegraphics[width=0.65\textwidth]{Deformation2_FIRST_optimization_BEFORE.pdf} \caption{Deformation (in millimeters) with maximum (red) and minimum (blue) deflection values before the first optimization.} \label{fig:first deformation} \end{figure} Subtracting the theoretical SA3 from the FEA deformation reveals a large deviation, notably at the center of the mirror, with a 9.4 $\mu$m PtV surface error (red solid curve in Fig. \ref{fig:first optimisation}). The global deviation from the theoretical shape along the mirror's radius appears to be $5^{th}$-order spherical aberration (SA5), as shown in Fig. \ref{fig:First Zernike spherical polynomials}, which depicts the first three theoretical Zernike spherical polynomials.
The edge of the mirror does not move along the optical axis (\textit{Z-axis}) due to the adopted boundary condition. The pupil takes into account the radial displacement of the mirror in order to obtain the correct Zernike decomposition. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{First_optimization_SAG.pdf} \caption{First optimization, from the first deformation (Fig. \ref{fig:first deformation}) before optimization (bottom red curve) to the optimized deformation (top green curve). These curves represent the difference between the FEA output and the targeted theoretical aberration (in micrometers).} \label{fig:first optimisation} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale=0.9] \centering \begin{axis} [ xlabel={$Normalized \hspace{0.1cm} semi-diameter$}, ylabel={$Normalized \hspace{0.1cm} sagitta$}, xmin=0, xmax=1., ymin=-1, ymax=1., grid=both, grid style={line width=1.5pt, draw=gray!10}, legend entries ={ Focus, SA3, SA5, }, legend pos=south east ] \addplot[smooth,domain=0:1,blue,mark=square]{2*x^2-1}; \addplot[smooth,domain=0:1,orange,mark=star]{6*x^4-6*x^2+1}; \addplot[smooth,domain=0:1,black,mark=triangle]{20*x^6-30*x^4+12*x^2-1}; \addplot[dashed,samples=200,domain=0.05:1]{x-x}; \end{axis} \end{tikzpicture} \caption{First three Zernike spherical polynomials: Focus (blue squares), SA3 (orange stars) and SA5 (black triangles).} \label{fig:First Zernike spherical polynomials} \end{figure} The initial calculation before the optimization process shows a deformation map without Piston (Fig. \ref{figur:1}) of around 1.8 $\mu$m RMS, essentially SA5 with 1.4 $\mu$m RMS, as noted previously. A projection of the residuals on the Zernike modes between SA7 (the $36^{th}$ Zernike polynomial) and the $561^{st}$ Zernike polynomial indicates a high-order content of 84.1 nm RMS (Fig. \ref{figur:2}), with a maximum deflection at the center.
Given the mirror's axisymmetric geometry and the load condition, it is not necessary to use more than 561 Zernike polynomials to define the optical surface. Indeed, the spatial frequency between two peaks in the residual deformation is much lower than that of the radial component of the $561^{st}$ Zernike polynomial. \begin{figure} \centering \subfloat[Residuals without Piston\label{figur:1}]{% \includegraphics[width=0.5\textwidth]{Before_OPT_Residuals_without_Piston_SAG.pdf}} \hfill \subfloat[Residuals without the first 36 Zernike polynomials\label{figur:2}]{% \includegraphics[width=0.5\textwidth]{Before_OPT_R36_SAG_Ech2.pdf}} \\ \subfloat[Residuals without Piston\label{figur:3}]{% \includegraphics[width=0.5\textwidth]{After_OPT1_Residuals_without_Piston_0_540_SAG_Ech2.pdf}} \hfill \subfloat[Residuals without the first 36 Zernike polynomials\label{figur:4}]{% \includegraphics[width=0.5\textwidth]{After_OPT1_R36_0_540_SAG_Ech2.pdf}} \caption{(a) Residuals without Piston and (b) after numerically removing the first 36 Zernike polynomials, before the first optimization calculation on the outer mechanical diameter for SMP. Figures (c) and (d) show the corresponding results after the first optimization.} \label{fig:Residuals before and after first optimization} \end{figure} \vspace{0.2cm} A first optimization has been computed on the entire mechanical radius [0 mm; 540 mm], letting the gridpoints of the variable thickness distribution free to move along the Z-axis. A significant reduction of the difference between the desired theoretical shape and the FEA deformation can be noticed, leading to a residual of 6 $\mu$m PtV (Fig. \ref{fig:first optimisation}), corresponding to a residual sag of 2.1 $\mu$m RMS over the entire pupil. Two extrema are present and contribute to the high-order spherical aberrations, which can be visualised on the residual error maps (Fig. \ref{fig:Residuals before and after first optimization}). The first error map (Fig.
\ref{figur:3}) shows the residuals of the Zernike decomposition when the Piston aberration, which we do not need to consider, is subtracted. This value decreases to 52.9 nm RMS when the first 36 Zernike polynomials are removed by numerical subtraction (Fig. \ref{figur:4}). These high-order polynomials need to be minimized as much as possible during the optimization process because of the complexity of applying a high-order removal procedure afterwards. \vspace{0.2cm} The next optimizations are performed on the reduced pupil [138.5 mm; 540 mm] to take into account the central hole and thus release constraints on the optimization process. The deviation from the theoretical SA3 is free to evolve within the central hole, while the optimization focuses on reducing the difference in the pupil of interest (Fig. \ref{fig:Next optimization steps for the entire mirror's diameter}). As the threshold has to be decreased to reach a very low value of a few nanometers, the model units have to be rescaled in order to better control the optimization process. In this particular case, micrometers were used instead of millimeters to obtain a better sensitivity on the optical surface coordinates.
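The projection of residuals on Zernike modes used above can be illustrated with the axisymmetric radial polynomials of Fig. \ref{fig:First Zernike spherical polynomials}. The sketch below is a toy example, not the actual post-processing of the FEA output: the synthetic residual profile and its coefficients are assumptions, and the coefficients are recovered through the orthogonality of the radial polynomials with weight $\rho$.

```python
# Axisymmetric Zernike radial polynomials on the normalized pupil rho in [0, 1].
def focus(rho): return 2 * rho**2 - 1                                # R_2^0
def sa3(rho):   return 6 * rho**4 - 6 * rho**2 + 1                   # R_4^0
def sa5(rho):   return 20 * rho**6 - 30 * rho**4 + 12 * rho**2 - 1   # R_6^0

def project(residual, mode, n, samples=4001):
    """Coefficient of a radial mode of degree n, using the orthogonality
    relation  int_0^1 R_n R_m rho d(rho) = delta_nm / (2(n+1)),
    evaluated with trapezoidal integration."""
    h = 1.0 / (samples - 1)
    total = 0.0
    for i in range(samples):
        rho = i * h
        w = 0.5 if i in (0, samples - 1) else 1.0
        total += w * residual(rho) * mode(rho) * rho
    return 2 * (n + 1) * total * h

# Synthetic residual: mostly SA5 plus a little Focus (illustrative values).
residual = lambda rho: 1.4 * sa5(rho) + 0.2 * focus(rho)

coeffs = {"focus": project(residual, focus, 2),
          "sa3":   project(residual, sa3, 4),
          "sa5":   project(residual, sa5, 6)}
print(coeffs)
```

The injected Focus and SA5 coefficients are recovered by the weighted inner product, and the SA3 coefficient comes out (numerically) zero, as expected from orthogonality.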
\begin{figure} \centering \includegraphics[width=1.05\textwidth]{Next_optimizations2_LargeVIEW_SAGITTA.pdf} \caption{Next steps of the optimization process, performed on the reduced pupil [138.5; 500 mm]} \label{fig:Next optimization steps for the entire mirror's diameter} \end{figure} \begin{figure} \centering \includegraphics[width=1.025\textwidth]{Next_optimizations2_cutVIEW_SAGITTA.pdf} \caption{Zoomed view of Fig. \ref{fig:Next optimization steps for the entire mirror's diameter} on the area of interest [138.5; 540 mm]} \label{fig:Next optimization steps on the reduced pupil} \end{figure} Focusing the analysis on the reduced pupil reveals a substantial improvement in the fit between the SA3 deformation given by the FEM model and the targeted theoretical SA3. However, the optimization process encounters difficulties at radial distances larger than 400 mm from the center, as illustrated by a second regime (black and blue curves in Fig. \ref{fig:Next optimization steps on the reduced pupil}) that prevents any further optimization in this radius range. The residual error caps at around 20 nm RMS on the drilled optical surface. Indeed, this part of the mirror is very thin, decreasing from 9.9 mm to 2.9 mm, which makes the model responses highly sensitive to the design variables in this area. This leads us to consider the optimization on the more limited pupil [138.5 mm; 500 mm], without the optical surface extension. \vspace{0.2cm} In this instance, a last optimization is performed on the final pupil [138.5 mm; 500 mm] (green curve in Fig. \ref{fig:Next optimization steps on the reduced pupil}), adding a supplementary degree of freedom in the optimization for the gridpoints located in the range [500 mm; 540 mm]. Naturally, this produces a peak in the non-optimized area in order to ensure an optimal result in the optimized one. Finally, the deformation map of the optical surface (Fig.
\ref{figur:5}) shows a residual error, without Piston, of about 16.5 nm RMS in the reduced pupil, including 14 nm RMS of high-order spherical modes beyond the first 36 Zernike polynomials (Fig. \ref{figur:6}). After verification by changing the model units twice and refining the mesh with very small elements (150 $\mu$m between two gridpoints), it was concluded that these residuals are effectively attributable to the FEA deformation and not to the accuracy of the FEA sampling (Fig. \ref{R36vsMMS}). \begin{figure} \centering \subfloat[Residuals without Piston\label{figur:5}]{% \includegraphics[width=0.5\textwidth]{2D_After_OPT17_Residuals_without_Piston_SAG_Ech2.pdf}} \hfill \subfloat[Residuals without the first 36 Zernike polynomials\label{figur:6}]{% \includegraphics[width=0.5\textwidth]{2D_After_OPT17_R36_SAG_Ech2.pdf}} \caption{(a) Residuals without Piston and (b) after numerically removing the first 36 Zernike polynomials, after the last optimization calculation on the clear aperture} \label{fig:Residuals last optimization} \end{figure} Indeed, for a minimum mesh size (MMS, the distance between two gridpoints) between 0.15 mm and 2.5 mm, the residual error without the first 36 Zernike polynomials (R36) ranges from 14.03 nm RMS to 14.14 nm RMS. This numerical difference of 1 Angstrom is comparable to the dimension of a single atom. In our case, an MMS of 1.75 mm has been chosen as a trade-off between the precision of the results and the computational time and load. With this setting, each iteration of the optimization process takes about 30 minutes, with a total of five iterations in our particular case. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{R36vsMMS.pdf} \caption{Evolution of the residuals without the first 36 Zernike polynomials (R36), with the central hole, as a function of the minimum mesh size (MMS)} \label{R36vsMMS} \end{figure} \vspace{0.2cm} The optimized final variable thickness distribution in Fig.
\ref{fig:Optimised VTD} differs from the initial theoretical contour for several reasons. The first reason lies in the VTD definition, which is valid for a zero thickness at the mirror's edge. In our case, the mirror needs a sufficient thickness at the edge to be supported and a finite thickness at the center to respect the manufacturing and load specifications, which leads to shifting and truncating the theoretical curve. Another reason for this VTD deviation is the useful surface used for the optimization: initially the entire optical surface was used for the shape optimization, but it was later reduced to the clear aperture to relax some constraints and accelerate the minimization process. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{Optimised2_VTD.pdf} \caption{Initial (red curve) and optimised (green curve) Variable Thickness Distribution (VTD)} \label{fig:Optimised VTD} \end{figure} \vspace{0.2cm} In addition, the polishing and central pressures have been slightly adjusted to completely cancel Focus and residual SA3. Indeed, the variation of the spherical aberrations with the loads is more sensitive for low than for high Zernike modes, so a minor modification of the loads impacts the very low orders while leaving the spherical residuals from SA5 upward unaffected. This adjustment can also be used during manufacturing by SMP, as an iterative process, to help converge towards the specifications. After the first polishing run, the optical part is usually slightly out of specification on the very low orders. Adjusting the polishing pressure and loads allows compensating for over- or under-deformation of the optical part. A second and a third run are generally sufficient to reach the correct shape, leaving the high spherical modes unaffected.
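The pressure adjustment described above can be viewed as a small linear inverse problem: once the sensitivities of Focus and SA3 to the polishing and central pressures are known (e.g. from two additional FEA runs), the pressure corrections follow from a $2\times2$ solve. The sensitivity matrix and residual values below are purely hypothetical, for illustration only.

```python
# Hypothetical sensitivities (aberration change per unit pressure change):
# rows = (Focus, SA3), columns = (polishing pressure, central pressure).
S = [[0.80, 0.10],
     [0.05, 1.20]]
residual = [0.30, -0.45]   # measured Focus and SA3 residuals to cancel

# Solve S * dp = -residual with Cramer's rule (2x2 system).
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
dp_polish  = (-residual[0] * S[1][1] + residual[1] * S[0][1]) / det
dp_central = (-residual[1] * S[0][0] + residual[0] * S[1][0]) / det
print(f"pressure corrections: {dp_polish:+.3f}, {dp_central:+.3f}")
```

Because the off-diagonal sensitivities are small (loads couple weakly between Focus and SA3), the corrections stay close to a simple per-mode rescaling, consistent with the iterative adjustment described above.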
\vspace{0.2cm} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{MaximumSTRESS_LAST_optimization.pdf} \caption{Maximum stress within the mirror (in $N.\mu m^{-2}$)} \label{fig:Maximum stress} \end{figure} The maximum Von Mises stress within the mirror during the deformation is 8.09 MPa (Fig. \ref{fig:Maximum stress}), which is acceptable for standard use in this particular case (the limit being 14 MPa). The maximum stress is located at the \textit{"tulip-form"} base, owing to the large pressure applied over a relatively small area. \section{Conclusions} With this approach, combining SMP and shape optimization, we demonstrate a new way to produce a conicoid, i.e. a rotationally symmetric aspheric, from a 2D finite element model by imposing a pre-defined load configuration and boundary conditions. The shape optimization method will assist the SMP process in reaching high-quality and very precise surface shapes in a minimum number of iterations. In this study, the DVGRID data entries have been defined in a basic way, but they can be described with more degrees of freedom or even as a combination over an orthogonal basis such as the Zernike polynomials. It is also possible to define other theoretical symmetric or non-symmetric shapes to produce more complex optics with a combination of high-order modes. The next main step consists of polishing the thin mirror using the shape optimization method developed in this paper. The last step of the development will combine this large (>1 m class) and thin (around 20 mm) mirror with a rigid lightweight support in order to obtain very stiff reflective optics for space applications. Indeed, sandwich structures offer a higher stiffness-to-mass ratio than other lightweighted designs \cite{Catanzaro}. In this case, carrying out the SMP process (using shape optimization) and the rigid support manufacturing in parallel will contribute to a considerable reduction in fabrication time.
The two parts can be produced simultaneously through two different manufacturing procedures and assembled in a final stage. \section*{Funding} Thales-SESO; Aix-Marseille Universite (AMU); Centre National de la Recherche Scientifique (CNRS). \section*{Disclosures} The authors declare that there are no conflicts of interest related to this article.
\section{Introduction} Access points mounted on unmanned aerial vehicles (UAVs) are being proposed as a potential solution to data demand and congestion issues that are expected to arise in next-generation wireless networks \cite{Zeng_20162}. The benefit of these UAV networks compared to existing static infrastructure is the flexibility of their deployment, which allows them to configure themselves around user hotspots or gaps in coverage. These UAV networks additionally differ from existing infrastructure in how they connect into the core network; whereas fixed infrastructure can make use of wired backhaul links, often fiber optics, the UAVs must use dedicated wireless links for a backhaul. These links may be established using millimeter-waves \cite{Xiao_2016}, free-space optical channels \cite{Alzenad_2016} or sub-6GHz technologies such as LTE. Emerging research in the wireless community suggests that existing LTE base stations (BSs) that are intended for ground user service are capable of also providing backhaul for UAV networks \cite{Lin_2017}. However, we expect that as UAV networks become more widespread and play a greater role in serving the end user network, operators will opt to deploy dedicated backhaul infrastructure, referred to as ground stations (GSs), to support these networks. UAVs have very different performance requirements and behave very differently from typical user equipment; it follows that GS networks designed to serve UAVs will require different deployment strategies to BS networks designed to serve user equipment. One of the first works to consider low-altitude UAVs operating alongside terrestrial BS networks is \cite{Rohde_2012}. The authors simulate a network outage scenario where UAVs substitute offline BSs, with the UAVs wirelessly backhauling into adjacent, functioning BSs. The authors assume a hexagonal grid of BSs and demonstrate achievable data rates in the downlink as a function of UAV distance to their backhaul.
The works in \cite{Zeng_2016} and \cite{ZhangZeng_2017} approach the problem of UAV position optimisation by considering a single end user, UAV and backhaul in isolation, with the UAV adjusting its position to optimise its end user and backhaul links. Stochastic geometry has also started to see use as a tool for modelling and analysing UAV-BS wireless links. In our previous work in \cite{Galkin_20172} (which extends our work in \cite{Galkin_2017}) we investigated the performance of a UAV network when the UAVs opportunistically backhaul through LTE BSs designed to serve end users on the ground. We considered both the backhaul link as well as the link between the UAV and the end user, and we demonstrated that while the two different links have optimal performance at differing UAV heights it is possible to maximise the performance of both links by allowing UAVs to adjust their individual heights. A similar analysis of the UAV-BS link was carried out in \cite{MahdiAzari_20172} where the authors consider the performance of UAV-BS links in different environments, given down-tilted BS antennas and omnidirectional UAV antennas. The authors conclude that the UAV-BS link is vulnerable to interference due to the high LOS probability, and suggest that both UAVs and BSs reduce their heights above ground to improve performance. While a limited number of results have been published on the interaction between UAV networks and existing terrestrial BS networks, to date there is a lack of work on the topic of analysing the performance of a dedicated GS network that is deployed for providing backhaul to UAV networks. Our contribution in this paper is to use stochastic geometry to analyse the backhaul performance of GSs serving a low-altitude UAV network in an urban environment. We consider the use of both 2GHz LTE and millimeter-wave technology to support the backhaul link, with the GSs using frequency bands that are orthogonal to the underlying BS networks. 
Our analysis takes into account directional antenna alignment in 3D space between a typical UAV in the network and its associated backhaul GS as well as other interfering GSs, and it captures the effect of generalised multipath fading as well as line-of-sight (LOS) blocking due to buildings in the environment. We explore the effect of GS density as well as their height above ground on the probability of a typical UAV being able to establish a backhaul, given different operating heights of the UAV. We initially assume that the GSs are deployed independently of existing terrestrial infrastructure; in the numerical results section we show that when using an LTE link for the backhaul the GS infrastructure can be colocated with existing BS sites to achieve good backhaul performance. The analysis in this work allows us to address basic network design questions that would be relevant to a network operator interested in providing dedicated backhaul infrastructure for a network of UAVs. \begin{figure}[t!] \centering \subfloat{\includegraphics[width=.40\textwidth]{sideview.pdf}}\\ \subfloat{\includegraphics[width=.40\textwidth]{Topview.pdf}} \caption{ Side and top view showing a UAV in an urban environment at a height $\gamma$, positioned above $x_0$ with antenna beamwidth $\omega$. The UAV selects the nearest GS at $x_1$ for its backhaul and aligns its antenna with the GS location; the GS at $x_2$ falls inside the aligned area and potentially produces interference. \vspace{-5mm} } \label{fig:drone_network} \end{figure} \section{System Model} In this section we set up a system model of a network of GSs providing backhaul to UAVs. We consider two technologies for the backhaul channel, LTE in the 2GHz band and millimeter-wave. We model the network of GSs as a Poisson point process (PPP) $\Phi = \{x_1 , x_2 , ...\} \subset \mathbb{R}^2$ of intensity $\lambda$ where elements $x_i\in \mathbb{R}^2$ represent the projections of the GS locations onto the $\mathbb{R}^2$ plane. 
The GSs have a height $\gamma_{G}$ above ground. We consider a single reference UAV, positioned above the origin $x_0 = (0,0)$ at a height $\gamma$. Let $r_i = ||x_i||$ denote the horizontal distance between the GS $i$ and the reference UAV, and let $\phi_i = \tan^{-1}((\Delta \gamma)/r_i)$ denote the vertical angle, where $\Delta \gamma = \gamma - \gamma_{G}$. The UAV is equipped with a directional antenna for communicating with its associated backhaul GS. The antenna has a horizontal and vertical beamwidth $\omega$ and a rectangular radiation pattern; using the approximations (2-26) and (2-49) in \cite{Balanis_2005} and assuming perfect radiation efficiency the antenna gain can be expressed as $\eta(\omega) = 16\pi/(\omega^2)$ inside of the main lobe and $\eta(\omega)=0$ outside. The UAV selects the nearest GS as its serving GS; we denote this GS as $x_1$ and its distance to the UAV as $r_1$. The UAV orients itself to align its backhaul antenna towards $x_1$; the antenna radiation pattern illuminates an area we denote as $\mathcal{W} \subset \mathbb{R}^2$. This area takes the shape of a ring sector of arc angle equal to $\omega$ and major and minor radii $v(\gamma,r_{1})$ and $w(\gamma,r_{1})$, respectively, which are defined as \begin{align} v(\gamma,r_{1}) = \begin{cases} \frac{|\Delta \gamma|}{\tan(|\phi_{1}|-\omega/2)} \hspace{-2mm} &\text{if} \hspace{3mm} \omega/2 < |\phi_{1}| < \pi/2 - \omega/2 \\ \frac{|\Delta \gamma|}{\tan(\pi/2 -\omega)} \hspace{-2mm} &\text{if} \hspace{3mm} |\phi_{1}| > \pi/2 - \omega/2 \\ \infty &\text{otherwise} \end{cases} \end{align} \begin{align} w(\gamma,r_{1}) = \begin{cases} \frac{|\Delta \gamma|}{\tan(|\phi_{1}|+\omega/2)} \hspace{2mm} &\text{if} \hspace{3mm} |\phi_{1}| < \pi/2 - \omega/2 \\ 0 &\text{otherwise} \end{cases} \end{align} \noindent where $|.|$ denotes absolute value. For the case where $\omega\geq \pi/2$ major radius $v(\gamma,r_{1})$ will always be infinite. 
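The case distinctions for $v(\gamma,r_1)$ and $w(\gamma,r_1)$ above translate directly into code. A minimal sketch follows; the UAV height, GS height, serving distance and beamwidth values used at the end are illustrative assumptions.

```python
import math

def sector_radii(gamma, gamma_g, r1, omega):
    """Major and minor radii (v, w) of the ring sector illuminated by the
    UAV antenna main lobe, following the case distinctions above."""
    dg = abs(gamma - gamma_g)
    phi1 = math.atan2(dg, r1)      # |vertical angle| to the serving GS
    half = omega / 2.0
    # Minor radius w
    w = dg / math.tan(phi1 + half) if phi1 < math.pi / 2 - half else 0.0
    # Major radius v
    if half < phi1 < math.pi / 2 - half:
        v = dg / math.tan(phi1 - half)
    elif phi1 > math.pi / 2 - half:
        v = dg / math.tan(math.pi / 2 - omega)
    else:
        v = math.inf               # main lobe extends to the horizon
    return v, w

# Illustrative values: UAV at 100 m, GS mast at 30 m, serving GS 200 m away,
# 30-degree beamwidth.
v, w = sector_radii(100.0, 30.0, 200.0, math.radians(30.0))
print(f"w = {w:.1f} m, v = {v:.1f} m")
```

As expected, the serving distance $r_1$ lies between the two radii ($w < r_1 < v$), and for a distant serving GS (small $|\phi_1|$) the major radius becomes infinite.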
We denote the set of GSs other than $x_1$ that fall inside the area $\mathcal{W}$ as $\Phi_{\mathcal{W}} = \{x_i \in \Phi \setminus \{x_1\} : x_i \in \mathcal{W}\}$. The GSs in the set $\Phi_{\mathcal{W}}$ are capable of causing interference for the reference UAV. Those GSs in the set $\Phi \setminus \Phi_{\mathcal{W}}$ are outside of the main lobe of the UAV antenna, thus they do not interfere with the backhaul link. Note that $\Phi_{\mathcal{W}}$ is a PPP with the same intensity $\lambda$. The wireless channels between the reference UAV and the GSs will be affected by buildings, which form obstacles and break LOS links. We adopt the model in \cite{ITUR_2012}, which defines an urban environment as a collection of buildings arranged in a square grid. There are $\beta$ buildings per square kilometer, the fraction of area occupied by buildings to the total area is $\delta$, and each building has a height which is a Rayleigh-distributed random variable with scale parameter $\kappa$. The probability of the reference UAV having LOS to the GS $i$ is given in \cite{ITUR_2012} as \vspace{-1mm} \begin{align} &\mathbb{P}_{LOS}(r_{i}) = \nonumber \\ &\prod\limits_{n=0}^{\max(0,d_i-1)}\left(1-\exp\left(-\frac{\left(\max( \gamma,\gamma_{G}) - \frac{(n+1/2)|\Delta \gamma|}{d_i}\right)^2}{2\kappa^2}\right)\right) \label{eq:LOS} \vspace{-1mm} \end{align} \noindent where $d_i = \floor*{r_i\sqrt{\beta\delta}}$. 
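The building-blockage product in \Eq{LOS} can be sketched as follows. The environment parameters ($\beta$, $\delta$, $\kappa$) and the heights used below are illustrative assumptions, and for $d_i = 0$ (no building rows crossed) we take the LOS probability to be 1, since the product in \Eq{LOS} is not defined in that case.

```python
import math

def p_los(r, gamma, gamma_g, beta, delta, kappa):
    """ITU-R style LOS probability between a UAV at height gamma and a GS of
    height gamma_g at horizontal distance r (lengths in meters, beta in
    buildings per square meter)."""
    d = math.floor(r * math.sqrt(beta * delta))  # building rows crossed
    if d == 0:
        return 1.0                               # no obstruction possible
    dg = abs(gamma - gamma_g)
    top = max(gamma, gamma_g)
    p = 1.0
    for n in range(d):                           # n = 0 .. d-1
        h = top - (n + 0.5) * dg / d             # ray height at row n
        p *= 1.0 - math.exp(-h**2 / (2.0 * kappa**2))
    return p

# Illustrative urban parameters: 300 buildings/km^2, 50% built-up area,
# Rayleigh height scale 20 m; UAV at 100 m, GS mast at 30 m.
beta, delta, kappa = 300e-6, 0.5, 20.0
for r in (100.0, 500.0, 2000.0):
    print(f"r = {r:6.0f} m  P_LOS = {p_los(r, 100.0, 30.0, beta, delta, kappa):.3f}")
```

The LOS probability decreases with horizontal distance, since a longer link crosses more building rows at lower relative heights.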
We can express the SINR at the reference UAV as \begin{equation} \mathrm{SINR} = \frac{p H_{t_{1}} \eta(\omega)\mu c (r_{1}^2+\Delta \gamma^2)^{-\alpha_{t_{1}}/2}}{I_{L} + I_{N}+\sigma^2} \label{eq:SINR} \end{equation} \noindent where $p$ is the GS transmit power, $H_{t_1}$ is the random multipath fading component, $t_{1} \in \{\text{L},\text{N}\}$ is an indicator variable which denotes whether the UAV has LOS or NLOS to its serving GS, $\mu$ is the serving GS antenna gain defined in the next subsections, $c$ is the near-field pathloss, $\sigma^2$ is the noise power, and $I_{L}$ and $I_{N}$ are the aggregate LOS and NLOS interference, respectively. Expressions for the noise power $\sigma^2$ and the near-field pathloss $c$ are given in \cite{Elshaer_2016}. The backhaul data rate, in Mbits/sec, can be calculated from the SINR using the Shannon capacity bound \begin{equation} \mathcal{R} = b \log_2(1+\mathrm{SINR}) \end{equation} \noindent where $b$ denotes the bandwidth of the backhaul channel. We define an SINR threshold $\theta$ for the UAV backhaul link: if $\mathrm{SINR}<\theta$, the UAV fails to establish a backhaul of the required channel quality and is therefore in an outage state. \subsection{LTE Backhaul} For the LTE backhaul case we assume that the GSs are equipped with tri-sector antennas similar to those already in use in terrestrial BSs, as this allows them to serve UAVs in any horizontal direction. For tractability we model the horizontal antenna gain $\mu_{h}$ of these antennas as having a constant value.
The antennas are tilted up towards the sky, to model the behaviour of the antennas in the vertical plane we adopt the 3GPP model \cite{3GPP_2010}, such that \begin{equation} \mu_{v}(\phi_{i}) = 10^{-\min\left(12\left(\frac{\phi_{i} - \phi_T}{10}\right)^2, 20 \right)/10}, \end{equation} \begin{equation} \mu_{l}(\phi_{i}) = \max\left(\mu_{h}\mu_{v}(\phi_{i}),10^{-2.5}\right), \end{equation} \noindent where $\mu_{v}(\phi_{i})$ is the vertical antenna gain, $\phi_T$ is the vertical uptilt angle of the GS antenna (in degrees) and $\mu_{l}(\phi_{i})$ is the total antenna gain. \subsection{Millimeter Wave Backhaul} For the millimeter-wave backhaul we assume each GS is equipped with an antenna array that uses beamforming to direct a directional beam towards the UAV to which it provides a backhaul link. We adopt a similar approach to modelling the GS antenna array as in \cite{Elshaer_2016} and \cite{Andrews_2017}. The GS antenna is modelled as having a single directional beam with a beamwidth of $\omega_G$ and a gain of $\mu_{m}$ inside the main lobe, and a gain of 0 outside. The reference UAV will always experience an antenna gain of $\mu_{m}$ from its serving GS. The beam patterns of the remaining GSs will appear to be pointed in random directions with respect to the reference UAV; as a result each interfering GS will have non-zero antenna gain to the reference UAV with a certain probability $\zeta$. \section{Analytical Results} In this section we derive an analytical expression for the probability that the reference UAV will receive a signal from the GS network with an SINR above $\theta$, thereby establishing a backhaul. We refer to this as the backhaul probability. To derive an expression for the backhaul probability we need an expression for the conditional backhaul probability given the serving GS of the reference UAV has either LOS or NLOS to the reference UAV, and given it is located at a horizontal distance $r_1$ from the UAV. 
We then decondition this conditional backhaul probability with respect to the LOS probability of the serving GS as well as its horizontal distance. The LOS probability for a given horizontal distance $r_1$ is given in \Eq{LOS}. Given a PPP distribution of GSs the serving GS horizontal distance random variable $R_1$ is known to be Rayleigh-distributed with scale parameter $1/\sqrt{2\pi\lambda}$. \subsection{Aggregate LOS \& NLOS Interference} \textbf{LTE backhaul} The interferers will belong to the set $\Phi_{\mathcal{W}}$. We partition this set into two sets which contain the LOS and NLOS interfering GSs, denoted as $\Phi_{\mathcal{W}L} \subset \Phi_{\mathcal{W}}$ and $\Phi_{\mathcal{W}N} \subset \Phi_{\mathcal{W}}$, respectively. These two sets are inhomogeneous PPPs with intensity functions $\lambda_L(x) = \mathbb{P}_{LOS}(||x||)\lambda$ and $\lambda_N(x) =(1-\mathbb{P}_{LOS}(||x||))\lambda$. Note that we drop the index $i$ as the GS coordinates have the same distribution irrespective of their index values. For an LTE backhaul the aggregate LOS and NLOS interference is expressed as $I_{L} = \sum_{x\in\Phi_{\mathcal{W}L}} p H_{L} \eta(\omega)\mu_l(\phi) c (||x||^2+\Delta \gamma^2)^{-\alpha_{L}/2}$ and $I_{N} = \sum_{x\in\Phi_{\mathcal{W}N}} p H_{N} \eta(\omega)\mu_l(\phi) c (||x||^2+\Delta \gamma^2)^{-\alpha_{N}/2}$, recalling that $\phi = \tan^{-1}(\Delta \gamma/||x||)$. \textbf{millimeter-wave backhaul} As defined in the system model, the millimeter-wave interfering GSs will only create interference at the reference UAV if their directional beams happen to align with the UAV location, which occurs with probability $\zeta$. As a result of this $\Phi_{\mathcal{W}L}$ and $\Phi_{\mathcal{W}N}$ have intensity functions $\lambda_L(x) = \mathbb{P}_{LOS}(||x||)\zeta\lambda$ and $\lambda_N(x) =(1-\mathbb{P}_{LOS}(||x||))\zeta\lambda$. 
The aggregate LOS and NLOS interference is then expressed as $I_{L} = \sum_{x\in\Phi_{\mathcal{W}L}} p H_{L} \eta(\omega)\mu_m c (||x||^2+\Delta \gamma^2)^{-\alpha_{L}/2}$ and $I_{N} = \sum_{x\in\Phi_{\mathcal{W}N}} p H_{N} \eta(\omega)\mu_m c (||x||^2+\Delta \gamma^2)^{-\alpha_{N}/2}$. \subsection{Conditional Backhaul Probability} \noindent The expression for the backhaul probability, given serving GS distance $r_1$ and an LOS channel to the serving GS, was derived by us in \cite{Galkin_2017} as \begin{align} &\mathbb{P}(\mathrm{SINR}\geq \theta |R_1=r_1,t_1 = \text{L}) = \nonumber \\ &\sum\limits_{n=0}^{m_L-1}\frac{s_L^n}{n!} (-1)^n \cdot \sum_{i_L+i_N+i_{\sigma}=n}\frac{n!}{i_L!i_N!i_{\sigma}!} \nonumber \\ &\cdot(-(p\eta(\omega)c)^{-1}\sigma^2)^{i_{\sigma}}\exp(-(p\eta(\omega)c)^{-1}s_L\sigma^2) \nonumber \\ &\cdot\frac{d^{i_L} \mathcal{L}_{I_{L}}((p \eta(\omega)c)^{-1}s_L)}{ds_L^{i_L}} \frac{d^{i_N}\mathcal{L}_{I_{N}}((p \eta(\omega)c)^{-1}s_L)}{ds_L^{i_N}}, \label{eq:condProb3} \end{align} \noindent where $s_{L}= m_{L}\theta \mu^{-1}(r_1^2+\Delta\gamma^2)^{\alpha_{L}/2}$, $m_L$ is the Nakagami-$m$ fading term for a LOS channel, $\mathcal{L}_{I_{L}}$ and $\mathcal{L}_{I_{N}}$ are the Laplace transforms of the aggregate LOS and NLOS interference, respectively, and the second sum is over all the combinations of non-negative integers $i_L,i_N$ and $i_{\sigma}$ that add up to $n$. $\mu$ takes the value of either $\mu_l(\phi_1)$ or $\mu_m$ depending on whether we are considering LTE or millimeter-wave backhaul. The conditional backhaul probability given an NLOS serving GS $\mathbb{P}(\mathrm{SINR}\geq \theta |R_1=r_1,t_1 = \text{N})$ is calculated as in \Eq{condProb3} with $m_N$, $\alpha_N$ and $s_N$ replacing $m_L$, $\alpha_L$ and $s_L$. 
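The serving-GS gain $\mu$ entering \Eq{condProb3} is $\mu_l(\phi_1)$ in the LTE case; the 3GPP vertical pattern from the system model can be sketched as follows (the uptilt angle and horizontal gain values used below are illustrative assumptions).

```python
def mu_l(phi_deg, phi_tilt_deg, mu_h):
    """Total LTE GS antenna gain (3GPP vertical pattern, linear scale).
    phi_deg: vertical angle to the UAV, phi_tilt_deg: electrical uptilt."""
    att_db = min(12.0 * ((phi_deg - phi_tilt_deg) / 10.0) ** 2, 20.0)
    mu_v = 10.0 ** (-att_db / 10.0)     # vertical pattern
    return max(mu_h * mu_v, 10.0 ** -2.5)   # side-lobe floor

# Illustrative values: unit horizontal gain, antenna uptilted by 45 degrees.
for phi in (45.0, 50.0, 90.0):
    print(f"phi = {phi:4.1f} deg  gain = {mu_l(phi, 45.0, 1.0):.4f}")
```

The gain is maximal on boresight ($\phi = \phi_T$), rolls off quadratically in dB, and is clipped at 20 dB of attenuation (with an overall side-lobe floor of $-25$ dB).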
\subsection{Laplace Transform of Aggregate Interference} \textbf{LTE backhaul} The Laplace transform of the aggregate LOS interference $\mathcal{L}_{I_{L}}((p\eta(\omega)c)^{-1}s_L)$ given an LOS serving GS is expressed as \begin{align} &\mathbb{E}\left[\exp\bigg(-(p\eta(\omega)c)^{-1}s_L I_{L}\bigg)\right] \nonumber \\ &=\mathbb{E}_{\Phi_{\mathcal{W}L}}\bigg[\prod_{x\in\Phi_{\mathcal{W}L}}\hspace{-3mm}\mathbb{E}_{H_L} \left[\exp\Big(-H_L g(||x||,s_L,\alpha_L)\Big)\right]\bigg] \nonumber \end{align} \begin{align} &\overset{(a)}{=}\mathbb{E}_{\Phi_{\mathcal{W}L}}\left[\prod_{x\in\Phi_{\mathcal{W}L}}\left(\frac{m_L}{g(||x||,s_L,\alpha_L)+m_L}\right)^{m_L}\right] \nonumber \\ &\overset{(b)}{=} \exp\Bigg(\hspace{-1mm}-\int\limits_{\mathcal{W}}\Bigg(1- \left(\frac{m_L}{g(||x||,s_L,\alpha_L)+m_L}\right)^{m_L}\hspace{-1mm}\Bigg) \lambda_{L}(x) \mathrm{d} x \Bigg) \nonumber \\ &\overset{(c)}{=} \exp\hspace{-1mm}\Bigg(\hspace{-1mm}-\hspace{-1mm}\lambda \omega \hspace{-4mm} \int\limits_{r_1}^{v(\gamma,r_{1})}\hspace{-2mm}\Bigg(\hspace{-1mm}1- \hspace{-1mm}\left(\frac{m_L}{g(r,s_L,\alpha_L)+m_L}\right)^{m_L}\hspace{-1mm}\Bigg) \mathbb{P}_{LOS}(r)r \mathrm{d} r \hspace{-1mm}\Bigg) \nonumber \\ \label{eq:laplace} \end{align} \noindent where \begin{equation} g(||x||,s_L,\alpha_L) = s_L\mu_{l}(\phi)(||x||^2+\Delta\gamma^2)^{-\alpha_L/2} \nonumber, \end{equation} \noindent $(a)$ comes from Nakagami-$m$ fading having a gamma distribution, $(b)$ comes from the probability generating functional of the PPP \cite{Haenggi_2013}, $(c)$ comes from switching to polar coordinates where $r = ||x||$ and $\lambda_{L}(x) = \mathbb{P}_{LOS}(||x||) \lambda$. Note that the Laplace transform for the NLOS interferers $\mathcal{L}_{I_N}((p \eta(\omega) c)^{-1}s_L)$ is solved by simply substituting $\lambda_{L}(x)$ with $\lambda_{N}(x)$, $m_L$ with $m_N$ and $g(r,s_L,\alpha_L)$ with $g(r,s_L,\alpha_N)$ in \Eq{laplace} and solving as shown. 
The above integration is for the case when the serving GS is LOS; if the serving GS is NLOS we substitute $s_L$ with $s_N$ as defined in the previous subsection. \textbf{millimeter-wave backhaul} The Laplace transform of the LOS interferers for a millimeter-wave backhaul is derived as in \Eq{laplace}, with the intensity $\lambda$ being multiplied by $\zeta$ (as explained in the previous subsection), and with $\mu_{l}(\phi)$ being replaced with $\mu_{m}$. Note that, unlike $\mu_{l}(\phi)$, $\mu_{m}$ is a constant value with respect to $r$; as a result of this it is possible to solve the integral in \Eq{laplace} for the case of millimeter-wave backhaul. We begin by recognising that $\mathbb{P}_{LOS}(r)$ is a step function. We use this fact to separate the integral above into a sum of weighted integrals, resulting in the following expression \begin{equation} \omega\zeta\lambda\hspace{-5mm}\sum\limits_{j=\floor*{r_1\sqrt{\beta\delta}}}^{\floor*{v(\gamma,r_{1})\sqrt{\beta\delta}}} \hspace{-5mm}\mathbb{P}_{LOS}(l)\int\limits_{l}^{u}\Bigg(1- \left(\frac{m_L}{g(r,s_L,\alpha_L)+m_L}\right)^{m_L}\Bigg) r \mathrm{d} r \label{eq:laplaceSum} \end{equation} \noindent where $l = \max(r_1,j/\sqrt{\beta\delta})$ and $u = \min(v(\gamma,r_{1}),(j+1)/\sqrt{\beta\delta})$. 
Using derivation (17) in \cite{Galkin_2017}, we have \begin{align} &\int\limits_{l}^{u}\hspace{-1mm}\Bigg(1- \hspace{-1mm}\left(\frac{m_L}{g(r,s_L,\alpha_L)+m_L}\right)^{m_L}\hspace{-1mm}\Bigg) r \mathrm{d} r = \frac{1}{2}\sum\limits_{k=1}^{m_L}\hspace{-1mm}\binom{m_L}{k}(-1)^{k+1}\nonumber \\ &\cdot\bigg((u^2+\Delta\gamma^2)\mbox{$_2$F$_1$}\Big(k,\frac{2}{\alpha_L};1+\frac{2}{\alpha_L}; -\frac{m_L(u^2+\Delta\gamma^2)^{\alpha_L/2}}{\mu_{m} s_L}\Big) \nonumber\\ &-(l^2+\Delta\gamma^2)\mbox{$_2$F$_1$}\Big(k,\frac{2}{\alpha_L};1+\frac{2}{\alpha_L};-\frac{m_L(l^2+\Delta\gamma^2)^{\alpha_L/2}}{\mu_{m} s_L}\Big)\bigg). \label{eq:laplace_final} \end{align} \noindent Inserting this solution into \Eq{laplaceSum}, we obtain a closed-form expression for the Laplace transform of the LOS interference \Eq{laplace}. \subsection{Backhaul Probability and Expected Rate} To obtain the overall backhaul probability for the reference UAV in the network we decondition the conditional backhaul probability as defined in the previous subsection with respect to the indicator variable $t_1$ by multiplying by the LOS probability function \Eq{LOS}. We then decondition with respect to the horizontal distance random variable $R_1$ via integration. \begin{multline} \mathbb{P}(\mathrm{SINR}\geq \theta) = \int\limits_{0}^{\infty}\bigg(\mathbb{P}(\mathrm{SINR}\geq \theta |R_1=r_1,t_1 = \text{L})\mathbb{P}_{LOS}(r_1) \\ +\mathbb{P}(\mathrm{SINR}\geq \theta |R_1=r_1,t_1 = \text{N})(1-\mathbb{P}_{LOS}(r_1))\bigg)f_{R_1}(r_1) \mathrm{d} r_1 . \label{eq:pcov_final} \end{multline} The expected rate for the backhaul can be calculated using the backhaul probability as \begin{equation} \mathbb{E}[\mathcal{R}] = b\int\limits_{0}^{\infty} \mathbb{P}(\mathrm{SINR}\geq 2^{\theta}-1) \mathrm{d} \theta. \end{equation} \section{Numerical Results} In this section we explore the trade-offs that occur between the parameters of the GS network and the resulting backhaul probability of the reference UAV.
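The expected-rate integral above lends itself to direct numerical evaluation once the backhaul probability is available as a function of the SINR threshold. A minimal sketch (composite trapezoidal rule; the function names and truncation point are our own choices, not part of the model):

```python
import math

def expected_rate(backhaul_prob, bandwidth=1.0, theta_max=60.0, steps=100000):
    """Approximate E[R] = b * Integral_0^inf P(SINR >= 2^theta - 1) dtheta
    with a composite trapezoidal rule on [0, theta_max]; for coverage
    functions that decay in the threshold, the truncated tail is
    negligible for theta_max of a few tens."""
    h = theta_max / steps
    total = 0.0
    for k in range(steps + 1):
        weight = 0.5 if k in (0, steps) else 1.0
        total += weight * backhaul_prob(2.0 ** (k * h) - 1.0)
    return bandwidth * h * total
```

For instance, with the toy coverage function $\mathbb{P}(\mathrm{SINR}\geq x)=1/(1+x)$ the integrand becomes $2^{-\theta}$ and the integral evaluates to $1/\ln 2\approx 1.4427$.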
We generate our results using the analytical expressions given in the previous section and validate them via Monte Carlo (MC) trials. In the following figures solid lines denote the results obtained via the mathematical analysis and markers denote results obtained via MC trials. Unless stated otherwise the parameters used for the numerical results are taken from Table \ref{tab:table}. \begin{table}[b!] \vspace{-3mm} \begin{center} \caption{Numerical Result Parameters} \begin{tabular}{ |c|c|c| } \hline Parameter & LTE Backhaul & mmWave Backhaul \\ \hline Carrier Freq & \unit[2]{GHz} & \unit[73]{GHz} \\ Bandwidth & \unit[20]{MHz} & \unit[1000]{MHz} \\ $\omega$ & \unit[30]{deg} & \unit[10]{deg} \\ $\alpha_L$ & 2.1 & 2\\ $\alpha_N$ & 4 & 3.5 \cite{Ghosh_2014}\\ $m_L$ & 1 & 3\\ $m_N$ & 1 & 1\\ $p$ & \unit[40]{W} & \unit[10]{W} \cite{Semiari_2017}\\ $c$ & \unit[-38.4]{dB} & \unit[-69.7]{dB} \\ $\mu_{h}$ & \unit[-5]{dB} & N/A \\ $\mu_{m}$ & N/A & \unit[32]{dB} \\ $\zeta$ & N/A & 0 \\ $\theta$ & \unit[10]{dB} & \unit[10]{dB} \\ $\sigma^2$ & \unit[$8\cdot10^{-13}$]{W} &\unit[$4\cdot10^{-11}$]{W} \\ $\phi_T$ & $\tan^{-1}(\Delta\gamma/\mathbb{E} [R_1] )$ & N/A \\ $\omega_G$ & N/A & \unit[10]{deg} \\ $\gamma_{G}$ & \unit[30]{m} & \unit[30]{m}\\ $\beta$ & \unit[300]{$/\text{km}^2$} & \unit[300]{$/\text{km}^2$}\\ $\delta$ & 0.5 & 0.5\\ $\kappa$ & \unit[20]{m} & \unit[20]{m} \\ \hline \end{tabular} \label{tab:table} \end{center} \end{table} \begin{figure}[b!] \centering \includegraphics[width=.45\textwidth]{BackhaulDensity2-eps-converted-to.pdf} \vspace{-5mm} \caption{ Backhaul probability for an LTE backhaul as a function of the GS density. } \label{fig:BackhaulDensity} \vspace{-3mm} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.45\textwidth]{mmwaveDensity3-eps-converted-to.pdf} \vspace{-5mm} \caption{ Backhaul probability for a millimeter-wave backhaul as a function of the GS density. 
} \label{fig:mmwaveDensity} \vspace{-3mm} \end{figure} In \Fig{BackhaulDensity} and \Fig{mmwaveDensity} we demonstrate how increasing the density of the GSs improves the backhaul probability of the UAV, for the LTE and the millimeter-wave backhaul cases, respectively. For all cases the backhaul probability increases towards an asymptote, with additional GSs giving diminishing returns in performance. Note that we set the millimeter-wave interference probability parameter $\zeta$ to zero, to reflect the fact that millimeter-wave antennas have very narrow beamwidths and therefore have an extremely low probability of alignment occurring by chance. As a result, the millimeter-wave link is noise-limited, with a resulting higher backhaul probability for the UAV compared to the interference-limited LTE signals. We consider the upper limit for the GS density to be \unit[5]{$/\text{km}^2$}, which corresponds to the density of a terrestrial BS network in an urban environment \cite{3GPP_2010}. These results suggest that good UAV backhaul probability can be achieved when the density of GSs is only a fraction of the density of the existing BS network; if the GSs are to be colocated with the BS sites then this result demonstrates that a network operator only has to upgrade a fraction of the existing BS network to be able to provide a backhaul for a network of UAVs. \begin{figure}[t!] \centering \includegraphics[width=.45\textwidth]{Rate3-eps-converted-to.pdf} \vspace{-5mm} \caption{ Expected data rate of a backhaul as a function of GS density } \label{fig:Rate} \vspace{-3mm} \end{figure} In \Fig{Rate} we plot the expected rate of the UAV backhaul as a function of the density of GSs, given an LTE backhaul. We can see that the expected rate initially increases as we increase the density of the GSs. However, due to the effect of interference the expected rate appears to behave differently depending on the UAV height.
For the lowest UAV height the LOS probability on the interfering GSs is low due to building blockage, and therefore the rate monotonically increases with increased GS density. At the mid-range heights (\unit[60 and 80]{m}) the UAVs have a higher LOS probability on the interfering GSs; as a result increasing the density of the GSs improves the signal from the serving GS, but at the same time increases the aggregate interference. For the large heights (\unit[100 and 120]{m}) the UAV has a steep vertical angle to its serving GS, which results in a smaller area $\mathcal{W}$ illuminated by the UAV antenna, limiting interference. The height the UAVs will operate at will be largely determined by the end user link \cite{Galkin_20172}; however, an operator may wish to avoid operating the UAVs within the range of heights which cause deteriorated backhaul performance, if such an option exists. In \Fig{BackhaulBSHeight} we consider the effect of the GS height on the backhaul probability of an LTE backhaul, for different UAV heights. We immediately observe that for larger GS heights the backhaul probability appears to deteriorate for all but the lowest UAV height. This effect is due to the increasing interference that is experienced by a typical UAV as the GS heights increase; the improved wireless channel to the serving GS does not compensate for the improved wireless channels to the interfering GSs. The GS height cutoff point above which the backhaul probability deteriorates appears to be around \unit[30]{m}, which corresponds to the height of LTE BSs in urban environments as proposed by the 3GPP model \cite{3GPP_2010}. These results suggest that when deploying dedicated GSs for backhauling the UAVs using the LTE band the operators should avoid placing the GSs any higher than the standard height used for the current terrestrial BS network, which also suggests that existing BS sites are suitable for hosting the backhaul GSs.
It is also worth noting that the backhaul probability only marginally decreases for GS heights lower than \unit[30]{m}; this suggests that it is possible to provide UAV backhaul using GSs that are positioned at heights very close to ground level. \begin{figure}[t!] \centering \includegraphics[width=.45\textwidth]{BackhaulGSHeight2-eps-converted-to.pdf} \vspace{-5mm} \caption{ Backhaul probability for an LTE backhaul, given a GS density of \unit[1.25]{$/\text{km}^2$} } \label{fig:BackhaulBSHeight} \vspace{-3mm} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=.45\textwidth]{mmwaveGSHeight3-eps-converted-to.pdf} \vspace{-5mm} \caption{ Backhaul probability for a millimeter-wave backhaul, given a GS density of \unit[1.25]{$/\text{km}^2$} } \label{fig:mmwaveGSHeight} \vspace{-5mm} \end{figure} In \Fig{mmwaveGSHeight} we show the effect of the GS height on the backhaul probability when the backhaul uses a millimeter-wave signal. We see that the backhaul probability monotonically increases with increased GS height. This is due to the increase in the LOS probability between the UAV and the serving GS. Recall that for a millimeter-wave signal high-directionality antennas are assumed at both the UAV and the GS; as a result, the UAV is assumed to receive no interference from other GSs, even when those GSs have LOS on the UAV. It follows then that the network operator should consider deploying the GSs as high as possible above the ground to maximise backhaul performance, which makes the existing BS sites sub-optimal for hosting the GS equipment, in contrast to the LTE backhaul case. As in the previous plots, we observe that greater UAV heights correspond to larger backhaul probability, showing the importance of operating UAVs at heights which strike a balance between ensuring a good signal for the end user and allowing the UAVs to meet their backhaul requirements.
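The MC trials used to validate the analysis in this section can be illustrated with a deliberately simplified estimator (all-LOS links, Rayleigh fading, omnidirectional antennas, and our own parameter names: a toy stand-in for the full model rather than the actual simulator):

```python
import math
import random

def mc_backhaul_prob(lam, radius, dgamma, alpha, theta, power, noise,
                     trials=500, seed=1):
    """Toy Monte Carlo estimate of P(SINR >= theta).  GSs form a PPP of
    intensity lam in a disc of the given radius around the UAV's ground
    projection, dgamma is the UAV-GS height difference, the nearest GS
    serves, and every other GS interferes; all links are LOS with
    Rayleigh fading (exponentially distributed power) and
    omnidirectional antennas."""
    rng = random.Random(seed)
    mean = lam * math.pi * radius ** 2
    limit = math.exp(-mean)
    hits = 0
    for _ in range(trials):
        # Knuth's method for sampling the Poisson number of GSs
        n, p = 0, 1.0
        while p > limit:
            n += 1
            p *= rng.random()
        n -= 1
        if n <= 0:
            continue  # no GS in range: backhaul fails
        # uniform points in the disc -> 3-D distances to the UAV
        dists = sorted(math.hypot(radius * math.sqrt(rng.random()), dgamma)
                       for _ in range(n))
        rx = [power * rng.expovariate(1.0) * d ** -alpha for d in dists]
        sinr = rx[0] / (noise + sum(rx[1:]))
        hits += sinr >= theta
    return hits / trials
```

In the full simulator each trial would additionally sample the LOS/NLOS state of every link and apply the antenna gain patterns of Section II; the structure of the estimator is unchanged.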
\section{Conclusion} In this paper we have used stochastic geometry to model a network of GSs that provide wireless backhaul to UAVs in an urban environment. Our model takes into account network parameters such as GS density and antenna characteristics, along with environmental parameters such as building density. We demonstrated that a good backhaul probability for the UAVs can be achieved with a GS density that is lower than the typical BS network density in an urban environment, and that LTE and millimeter-wave backhauls require different GS heights to maximise performance. In subsequent works we will consider more detailed models of the millimeter-wave channel which take into account factors such as shadowing, atmospheric and building signal absorption, as well as the impact of antenna misalignment. \section*{Acknowledgements} This material is based upon works supported by the Science Foundation Ireland under Grants No. 10/IN.1/I3007 and 14/US/I3110. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{./bib/IEEEtran}
\section{Introduction}\label{sec1} \noindent In the last few years, nonlocal operators have gained increasing relevance, as they arise in a number of applications, in such fields as game theory, finance, image processing, and optimization, see \cite{A, BV, LC, RO} and the references therein. \\ The main reason is that nonlocal operators are the infinitesimal generators of L\'{e}vy-type stochastic processes. A L\'{e}vy process is a stochastic process with independent and stationary increments; it represents the random motion of a particle whose successive displacements are independent and statistically identical over different time intervals of the same length. These processes extend the concept of Brownian motion, where the infinitesimal generator is the Laplace operator.\\ \noindent The linear operator $L_K$ is defined for any sufficiently smooth function $u:\mathbb{R}^n \rightarrow \mathbb{R}$ and all $x \in \mathbb{R}^n$ by $$\mathit{L}_K u(x)= P.V. \int_{\mathbb{R}^n} (u(x)-u(y))K(x-y)\,dy,$$ where $$K(y)=a\Bigl(\cfrac{y}{|y|}\Bigr)\cfrac{1}{|y|^{n+2s}}$$ is a singular kernel for a suitable function $a$. \noindent The infinitesimal generator $L_K$ of any L\'{e}vy process is defined in this way, under the hypothesis that the process is symmetric and the measure $a$ is absolutely continuous on $S^{n-1}$. In the particular case $a\equiv 1$ we obtain the fractional Laplacian operator $(-\Delta)^s$. \\ Among all the nonlocal operators we choose the anisotropic type, because we want to consider L\'{e}vy processes that are as general as possible.\\ \noindent In order to explain our choice, we observe that the \emph{nonlocal evolutive equation} \[ u_t(x,t)+\mathit{L_K}u(x,t)=0 \] naturally arises from a probabilistic process in which a particle moves randomly in space, subject to a probability that allows long jumps with a polynomial tail \cite{BV}.
In this case, at each step the particle selects randomly a direction $v \in S^{n-1}$ with the probability density $a$, differently from the case of the fractional heat equation \cite{BV}. Another probabilistic motivation for the operator $L_K$ arises from a \emph{pay-off} approach \cite{BV}-\cite{RO}.\\ \noindent In this paper we study the nonlinear Dirichlet problem \[ \begin{cases} L_K u = f(x,u) & \text{in $\Omega$ } \\ u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$,} \end{cases} \] where $\Omega\subset\mathbb{R}^n$ is a bounded domain with a $C^{1,1}$ boundary, $n>2s$, $s\in(0,1)$, and $f:\Omega\times\mathbb{R}\rightarrow\mathbb{R}$ is a Carath\'{e}odory function.\\ The choice of the functional setting $X(\Omega)$, which will be defined later on, is extremely delicate and crucial for our results. By the results of Ros Oton in \cite{RO}, if $a$ is nonnegative, then the Poincar\'{e} inequality and regularity results still hold; therefore, they can be used to solve linear problems. On the other hand, by results of Servadei and Valdinoci in \cite{SV1}, if $a$ is positive, then $X(\Omega)$ is continuously embedded in $L^q(\Omega)$ for all $q\in[1, 2^*_s]$ and compactly for all $q\in[1, 2^*_s)$, and these tools are necessary to solve nonlinear problems. Here the fractional critical exponent is $2^*_s=\frac{2n}{n-2s}$ for $n>2s$. In analogy with the classical cases, if $n<2s$ then $X(\Omega)$ is embedded in $C^{\alpha}(\overline{\Omega})$ with $\alpha=\frac{2s-n}{2}$ \cite[Theorem 8.2]{DNPV}, while in the limit case $n=2s$ it is embedded in $L^q(\Omega)$ for all $q \geq 1$. Therefore, due to Corollary 4.53 and Theorem 4.54 in \cite{DDE}, we can state that the results of this paper hold true even when $n\leq 2s$, but we only focus on the case $n>2s$, with subcritical or critical nonlinearities, to avoid trivialities (for instance, the $L^\infty$ bounds are obvious for $n<2s$).
Note that $n\leq2s$ requires $n=1$, hence this case falls into the framework of ordinary nonlocal equations. In the limit case $n=1$, $s=\frac{1}{2}$ the critical growth for the nonlinearity is of exponential type, according to the fractional Trudinger-Moser inequality. Such a case is open for general nonlocal operators, though some results are known for the operator $(-\Delta)^{\frac{1}{2}}$, see \cite{IS}.\\ An alternative to preserve regularity results is taking kernels between two positive constants, for instance considering $a \in L^{\infty}(S^{n-1})$, but in this way the operator $L_K$ behaves exactly like the fractional Laplacian and, in particular, $X(\Omega)$ coincides with the Sobolev space $H^s_0(\Omega)$; consequently, there is no real novelty. These reasons explain our assumptions on the kernel $K$.\\ \noindent A typical feature of this operator is the \emph{nonlocality}, in the sense that the value of $L_K u(x)$ at any point $x \in \Omega$ depends not only on the values of $u$ on a neighborhood of $x$, but actually on the whole $\mathbb{R}^n$, since $u(x)$ represents the expected value of a random variable tied to a process randomly jumping arbitrarily far from the point $x$. This operator is said to be \emph{anisotropic}, because the role of the function $a$ in the kernel is to weight differently the different spatial directions.\\ Servadei and Valdinoci have established variational methods for nonlocal operators and they have proved an existence result for equations driven by the integrodifferential operator $L_K$, with a general kernel $K$, satisfying \textquotedblleft structural properties\textquotedblright, as we will see later \eqref{P2}-\eqref{P3}-\eqref{P4}.
They have shown that problem \eqref{P} admits a Mountain Pass type solution, not identically zero, under the assumptions that the nonlinearity $f$ satisfies a subcritical growth, the Ambrosetti-Rabinowitz condition, and $f$ is superlinear at $0$, see \cite{SV1}-\cite{SV2}.\\ Ros Oton and Valdinoci have studied the linear Dirichlet problem, proving existence of solutions, maximum principles and constructing some useful barriers; moreover, they focus on the regularity properties of solutions, under weaker hypotheses on the function $a$ in the kernel $K$, see \cite{RO}-\cite{ROV}.\\ \noindent In \cite{IMS} Iannizzotto, Mosconi, Squassina have studied the problem \eqref{P} with the fractional Laplacian and they have proved that, for the corresponding functional $J$, being a local minimizer for $J$ with respect to a suitable weighted $C^0$-norm is equivalent to being an $H_0^s(\Omega)$-local minimizer. Such a result represents an extension to the fractional setting of the classic result by Brezis and Nirenberg for the Laplacian operator \cite{BN}.\\ We hope to make a contribution to the knowledge of nonlocal anisotropic operators, using the existing tools already in the literature to prove new results, such as $L^{\infty}$-bounds and the principle of equivalence of minimizers. We have extended this minimizer principle to the case of the anisotropic operator $L_K$, considering a suitable functional analytical setting instead of $H_0^s$. This last fact has allowed us to prove a multiplicity result: under suitable assumptions we show that problem \eqref{P} admits at least three nontrivial solutions, one positive, one negative and one of unknown sign, using variational methods and, in particular, Morse theory.\\ The paper has the following structure: in Section 2 we compare different definitions of the operator $L_K$, in Section 3 we recall the variational formulation of our problem, together with some results from critical point theory.
In Section 4 we prove an $L^{\infty}$ bound on the weak solutions and the equivalence of minimizers in the two topologies $C_{\delta}^0(\overline{\Omega})$ and $X(\Omega)$, respectively. Moreover, we deal with an eigenvalue problem driven by the nonlocal anisotropic operator $L_K$ and we discuss some properties of its eigenvalues and eigenfunctions. In Section 5 we prove a multiplicity result and in the Appendix we study a general Hopf's lemma where the nonlinearity is slightly negative. \section{The nonlocal anisotropic operator $L_K$}\label{sec2} \noindent \begin{Def} \label{D1} The linear operator $L_K$ is defined for any $u$ in the Schwartz space $\mathit{S}(\mathbb{R}^n)$ as \begin{equation} \begin{split} \mathit{L}_K u(x) & = P.V. \int_{\mathbb{R}^n} (u(x)-u(y))K(x-y)\,dy \\ & =\lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}^n \setminus B_{\epsilon}(x)} (u(x)-u(y))K(x-y)\,dy, \end{split} \label{E1} \end{equation} where the singular kernel $K: \mathbb{R}^n \setminus \{0\} \rightarrow (0, +\infty)$ is given by $$K(y)=a\Bigl(\cfrac{y}{|y|}\Bigr)\cfrac{1}{|y|^{n+2s}}, \qquad a \in L^1(S^{n-1}),\ \inf_{S^{n-1}} a>0,\ a \text{ even}.$$ Here P.V. is a commonly used abbreviation for \textquotedblleft in the principal value sense" (as defined by the latter equation). \end{Def} \noindent In general, the $u$'s we will be dealing with do not belong to $\mathit{S}(\mathbb{R}^n)$, as the optimal regularity for solutions of nonlocal problems is only $C^s(\mathbb{R}^n)$. We will give a weaker definition of $L_K$ in Subsection 3.1.\\ We notice that the kernel of the operator $L_K$ satisfies some important properties for the following results, namely \begin{align} & m K \in L^1(\mathbb{R}^n), \text{ where } m(y)=\min\{|y|^2,1\}; \label{P2} \\ & \text{there exists } \beta>0 \text{ such that } K(y)\geq \beta |y|^{-(n+2s)} \text{ for any } y \in \mathbb{R}^n \setminus \{0\}; \label{P3} \\ &K(y)=K(-y) \text{ for any } y \in \mathbb{R}^n \setminus \{0\}.
\label{P4} \end{align} \noindent The typical example is $K(y)= |y|^{-(n+2s)}$, which corresponds to $L_K=(-\Delta)^s$, the fractional Laplacian.\\ We remark that we do not assume any regularity on the kernel $K(y)$. As we will see, there is an interesting relation between the regularity properties of solutions and the regularity of kernel $K(y)$.\\ We recall some special properties of the case $a \in L^{\infty}(S^{n-1})$. \begin{Oss} \label{Oss1} Due to the singularity at $0$ of the kernel, the right-hand side of \eqref{E1} is not well defined in general. In the case $s \in (0,\frac{1}{2})$ the integral in \eqref{E1} is not really singular near $x$. Indeed, for any $u \in \mathit{S}(\mathbb{R}^n)$, $a \in L^{\infty}(S^{n-1})$ we have \begin{align*} & \int_{\mathbb{R}^n} \cfrac{|u(x)-u(y)|}{|x-y|^{n+2s}} \; a\Bigl(\cfrac{x-y}{|x-y|}\Bigr)\,dy \\ &\leq C ||a||_{L^\infty} \int_{B_R} \cfrac{|x-y|}{|x-y|^{n+2s}}\,dy + C ||a||_{L^\infty} ||u||_{L^\infty} \int_{\mathbb{R}^n \setminus B_R} \cfrac{1}{|x-y|^{n+2s}}\,dy\\ &=C \left(\int_{B_R} \cfrac{1}{|x-y|^{n+2s-1}}\,dy +\int_{\mathbb{R}^n \setminus B_R} \cfrac{1}{|x-y|^{n+2s}}\,dy \right)< \infty, \end{align*} where $C$ is a positive constant depending only on the dimension and on the $L^{\infty}$ norms of $u$ and $a$, see \cite[Remark 3.1]{DNPV} in the case of the fractional Laplacian. \end{Oss} \noindent The singular integral given in Definition \ref{D1} can be written as a weighted second-order differential quotient as follows (see \cite[Lemma 3.2] {DNPV} for the fractional Laplacian): \begin{Lem} For all $u \in \mathit{S}(\mathbb{R}^n)$ $L_K$ can be defined as \begin{equation} \mathit{L}_K u(x) = \frac{1}{2} \int_{\mathbb{R}^n}(2u(x)-u(x+z)-u(x-z)) K(z)\,dz, \quad x \in \mathbb{R}^n. \label{E2} \end{equation} \end{Lem} \begin{Oss} We notice that the expression in \eqref{E2} doesn't require the P.V. 
formulation since, for instance, taking $u \in L^\infty(\mathbb{R}^n)$ and locally $C^2$, $a \in L^{\infty}(S^{n-1})$, using a Taylor expansion of $u$ in $B_1$, we obtain \begin{align*} & \int_{\mathbb{R}^n}\cfrac{|2u(x)-u(x+z)-u(x-z)|} {|z|^{n+2s}} \; a\Bigl(\cfrac{z}{|z|}\Bigr)\,dz \\ & \leq c ||a||_{L^\infty} ||u||_{L^\infty}\int_{\mathbb{R}^n \setminus B_1} \cfrac{1}{|z|^{n+2s}}\,dz + ||a||_{L^\infty} ||D^2u||_{L^\infty(B_1)} \int_{B_1} \cfrac{1}{|z|^{n+2s-2}}\,dz < \infty. \end{align*} \end{Oss} \noindent We show that the two definitions are equivalent: indeed, we have \begin{align*} \mathit{L}_K u(x) & = \frac{1}{2} \int_{\mathbb{R}^n}(2u(x)-u(x+z)-u(x-z)) K(z)\,dz \\ &=\frac{1}{2} \lim_{\epsilon \rightarrow 0^+} \int_{\mathbb{R}^n \setminus B_{\epsilon}} (2u(x)-u(x+z)-u(x-z)) K(z)\,dz\\ &=\frac{1}{2} \lim_{\epsilon \rightarrow 0^+} \left[\int_{\mathbb{R}^n \setminus B_{\epsilon}}(u(x)-u(x+z)) K(z)\,dz + \int_{\mathbb{R}^n\setminus B_{\epsilon}}(u(x)-u(x-z)) K(z)\,dz\right], \end{align*} we make the change of variables $\tilde{z}=-z$ in the second integral and relabel $\tilde{z}$ as $z$ $$=\lim_{\epsilon \rightarrow 0^+}\int_{\mathbb{R}^n \setminus B_{\epsilon}}(u(x)-u(x+z)) K(z)\,dz,$$ we make another change of variables $z=y-x$ and we obtain the first definition $$=\lim_{\epsilon \rightarrow 0^+}\int_{\mathbb{R}^n \setminus B_{\epsilon}(x)}(u(x)-u(y)) K(x-y)\,dy.$$ It is important to stress that this holds only if the kernel is even, more precisely if the function $a$ is even.\\ \noindent There exists a third definition of $L_K$ that uses the Fourier transform: we can define it as $$\mathit{L_K}u(x)= \mathcal{F}^{-1}(S(\xi)(\mathcal{F}u))$$ where $\mathcal{F}$ is the Fourier transform and $S: \mathbb{R}^n \rightarrow \mathbb{R}$ is a multiplier, $S(\xi)=\int_{\mathbb{R}^n} (1-\cos(\xi \cdot z))\, K(z)\,dz$.
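For the model case $a\equiv 1$ in dimension $n=1$, the multiplier reduces to $S(\xi)=\int_{\mathbb{R}}(1-\cos(\xi z))\,|z|^{-1-2s}\,dz$, which is homogeneous of degree $2s$ in $\xi$. A minimal numerical sanity check of this homogeneity (quadrature choices are ours; the substitution $z=u^2$ tames the integrable singularity at the origin):

```python
import math

def multiplier_1d(xi, s, u_max=14.0, steps=40000):
    """Numerically approximate S(xi) = int_R (1 - cos(xi*z)) |z|^(-1-2s) dz
    for the one-dimensional kernel with a == 1.  Substituting z = u^2 on
    (0, inf) and doubling by symmetry gives
        S(xi) = 4 * int_0^inf (1 - cos(xi*u^2)) u^(-1-4s) du,
    whose integrand stays bounded near 0 for s <= 3/4; the tail beyond
    z = u_max^2 is O(u_max^(-4s)) and is simply truncated."""
    h = u_max / steps
    total = 0.0
    for k in range(1, steps + 1):
        u = k * h
        total += (1.0 - math.cos(xi * u * u)) * u ** (-1.0 - 4.0 * s)
    return 4.0 * h * total
```

With $s=\frac{3}{4}$, the computed ratio $S(2)/S(1)$ matches $2^{2s}=2^{3/2}\approx 2.83$ to within quadrature error.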
We consider \eqref{E2} and we apply the Fourier transform to obtain \begin{align*} \mathcal{F}(\mathit{L_K}u) & =\mathcal{F}\left(\frac{1}{2} \int_{\mathbb{R}^n}(2u(x)-u(x+z)-u(x-z)) K(z)\,dz\right)\\ & =\frac{1}{2} \int_{\mathbb{R}^n} \mathcal{F}\left(2u(x)-u(x+z)-u(x-z)\right) K(z)\,dz \\ & =\frac{1}{2} \int_{\mathbb{R}^n} (2-e^{i \xi \cdot z} -e^{-i \xi \cdot z}) (\mathcal{F}u)(\xi) K(z)\,dz\\ &=\frac{1}{2} (\mathcal{F}u)(\xi) \int_{\mathbb{R}^n} (2-e^{i \xi \cdot z} -e^{-i \xi \cdot z}) K(z)\,dz \\ & = (\mathcal{F}u)(\xi) \int_{\mathbb{R}^n} (1-\cos(\xi \cdot z)) K(z)\,dz. \end{align*} We recall that in the case $a\equiv 1$, namely for the fractional Laplacian (see \cite[Proposition 3.3] {DNPV}), $S(\xi)=|\xi|^{2s}$ up to a positive constant depending only on $n$ and $s$.\\ If $a$ is unbounded from above, $L_K$ is better dealt with by a convenient functional approach. \section{Preliminaries}\label{sec3} \noindent In this preliminary section, we collect some basic results that will be used in the forthcoming sections. In the following, for any Banach space $(X,||.||)$ and any functional $J \in C^1(X)$ we will denote by $K_J$ the set of all critical points of $J$, i.e., those points $u \in X$ such that $J'(u)=0$ in $X^*$ (dual space of $X$), while for all $c \in \mathbb{R}$ we set $$K_{J}^c=\{u \in K_J: J(u)=c\},$$ $$J^c=\{u \in X: J(u) \leq c\} \quad (c \in \mathbb{R}),$$ besides, we set $$\overline{B}_{\rho}(u_0)=\{u \in X: ||u-u_0||\leq \rho\} \quad (u_0 \in X, \rho >0).$$ Moreover, in the proofs of our results, $C$ will denote a positive constant (whose value may change case by case).\\ Most results require the following \emph{Cerami compactness condition} (a weaker version of the Palais-Smale condition):\\ \emph{Any sequence $(u_n)$ in $X$ such that $(J(u_n))$ is bounded in $\mathbb{R}$ and $(1+||u_n||)J'(u_n)\rightarrow 0$ in $X^{*}$ admits a strongly convergent subsequence}.
\subsection{Variational formulation of the problem}\label{subsec31} \noindent Let $\Omega$ be a bounded domain in $\mathbb{R}^n$ with $C^{1,1}$ boundary $\partial \Omega$, $n>2s$ and $s\in (0,1)$. We consider the following Dirichlet problem \begin{equation} \begin{cases} \mathit{L}_K u = f(x,u) & \text{in $\Omega$ } \\ u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$.} \end{cases} \label{P} \end{equation} We remark that the Dirichlet datum is given in $\mathbb{R}^n \setminus \Omega$ and not simply on $\partial \Omega$, consistently with the non-local character of the operator $\mathit{L_K}$.\\ The nonlinearity $f: \Omega \times \mathbb{R} \rightarrow \mathbb{R}$ is a Carath\'{e}odory function which satisfies the growth condition \begin{equation} |f(x,t)|\leq C(1 + |t|^{q-1}) \text{ a.e. in } \Omega, \forall t \in \mathbb{R} \; (C>0, q \in [1, 2_{s}^{*}]) \label{G} \end{equation} (here $2_{s}^{*}:= 2n/(n-2s)$ is the fractional critical exponent). Condition \eqref{G} is referred to as a subcritical or critical growth if $q<2_{s}^{*}$ or $q=2_{s}^{*}$, respectively.\\ \noindent The aim of this paper is to study nonlocal problems driven by $L_K$ and with Dirichlet boundary data via variational methods. For this purpose, we consider a functional analytical setting that is inspired by the fractional Sobolev spaces $H_0^s(\Omega)$ \cite{DNPV} in order to correctly encode the Dirichlet boundary datum in the variational formulation.\\ We introduce the space \cite{SV1} $$X(\Omega)=\{u \in L^2(\mathbb{R}^n): [u]_{K} < \infty, u=0 \text{ a.e.
in } \mathbb{R}^n \setminus \Omega \},$$ with $$[u]_{K}^2 := \int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy.$$ $X(\Omega)$ is a Hilbert space with inner product $$\left\langle u,v \right\rangle_{X(\Omega)} = \int_{\mathbb{R}^{2n}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy,$$ which induces a norm $$||u||_{X(\Omega)}= \left(\int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy \right)^\frac{1}{2}.$$ \noindent (For simplicity we write $||u||$ for $||u||_{X(\Omega)}$; when we consider a norm in a different space, we will specify it.) \noindent By the fractional Sobolev inequality and the continuous embedding of $X(\Omega)$ in $H^s_0(\Omega)$ (see \cite[Subsection 2.2] {SV1}), we have that the embedding $X(\Omega)\hookrightarrow L^q(\Omega)$ is continuous for all $q \in [1,2_s^*]$ and compact if $q \in [1,2_s^*)$ (see \cite[Theorem 6.7, Corollary 7.2]{DNPV}). \noindent We set for all $u \in X(\Omega)$ $$J(u)=\frac{1}{2} \int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy - \int_{\Omega} F(x,u(x))\,dx,$$ where the function $F$ is the primitive of $f$ with respect to the second variable, that is $$F(x,t)=\int_0^t f(x,\tau)\, d\tau, \quad x\in \Omega, t \in \mathbb{R}.$$ Then, $J \in C^1(X(\Omega))$ and all its critical points are weak solutions of \eqref{P}, namely they satisfy \begin{equation} \int_{\mathbb{R}^{2n}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy=\int_{\Omega} f(x,u(x))v(x)\,dx, \quad \forall v \in X(\Omega). \label{Fd} \end{equation} \subsection{Critical groups}\label{subsec32} \noindent We recall the definition and some basic properties of critical groups, referring the reader to the monograph \cite{MMP} for a detailed account on the subject. Let $X$ be a Banach space, $J \in C^1(X)$ be a functional, and let $u \in X$ be an isolated critical point of $J$, i.e., there exists a neighbourhood $U$ of $u$ such that $K_J \cap U = \{u\}$, and $J(u)=c$.
For all $k \in \mathbb{N}_0$, the \emph{k-th critical group of $J$ at $u$} is defined as $$C_k(J,u)=H_k(J^c \cap U, J^c \cap U \setminus \{u\}),$$ where $H_k(\cdot , \cdot)$ is the k-th (singular) homology group of a topological pair with coefficients in $\mathbb{R}$.\\ \noindent The definition above is well posed: since homology groups are invariant under excision, $C_k(J,u)$ does not depend on $U$. Moreover, critical groups are invariant under homotopies preserving the isolatedness of critical points. We recall some special cases in which the computation of critical groups is immediate ($\delta_{k,h}$ is the Kronecker symbol). \begin{Pro}{\rm\cite[Example 6.45]{MMP}} \label{M} Let $X$ be a Banach space, $J \in C^1(X)$ a functional and $u \in K_J$ an isolated critical point of $J$. The following hold: \begin{itemize} \item if $u$ is a local minimizer of $J$, then $C_k(J,u)=\delta_{k,0} \mathbb{R}$ for all $k \in \mathbb{N}_0$, \item if $u$ is a local maximizer of $J$, then $C_k(J,u)= \begin{cases} 0 & \text{if $\mathrm{dim}(X)=\infty$} \\ \delta_{k,m} \mathbb{R} & \text{if $\mathrm{dim}(X)=m$} \end{cases}$ for all $k \in \mathbb{N}_0$. \end{itemize} \end{Pro} \noindent Next we pass to critical points of mountain pass type. \begin{Def}{\rm\cite[Definition 6.98]{MMP}} Let $X$ be a Banach space, $J \in C^1(X)$, and $u \in K_J$. We say that $u$ is of mountain pass type if, for any open neighbourhood $U$ of $u$, the set $\{y \in U: J(y)<J(u)\}$ is nonempty and not path-connected. \end{Def} \noindent The following result is a variant of the mountain pass theorem \cite{PS} and establishes the existence of critical points of mountain pass type.
\begin{Teo}{\rm\cite[Theorem 6.99]{MMP}} \label{MPT} If $X$ is a Banach space, $J \in C^1(X)$ satisfies the (C)-condition, $x_0,x_1 \in X$, $\Gamma:=\{\gamma \in C([0,1],X): \gamma(0)=x_0, \gamma(1)=x_1\}$, $c:=\inf_{\gamma \in \Gamma} \max_{t \in [0,1]} J(\gamma(t))$, and $c>\max\{J(x_0),J(x_1)\}$, then $K_{J}^c \neq \emptyset$ and, moreover, if $K_{J}^c$ is discrete, then we can find $u \in K_{J}^c $ which is of mountain pass type. \end{Teo} \noindent We now describe the critical groups of critical points of mountain pass type. \begin{Pro}{\rm\cite[Proposition 6.100]{MMP}} \label{Gcr} Let $X$ be a reflexive Banach space, $J \in C^1(X)$, and $u \in K_J$ isolated with $c:=J(u)$ isolated in $J(K_J)$. If $u$ is of mountain pass type, then $C_1(J,u)\neq 0$. \end{Pro} \noindent If the set of critical values of $J$ is bounded below and $J$ satisfies the (C)-condition, we define for all $k \in \mathbb{N}_0$ the \emph{k-th critical group at infinity of $J$} as $$C_k(J,\infty)=H_k(X, J^a),$$ where $a < \inf_{u \in K_J} J(u)$.\\ \noindent We recall the \emph{Morse identity}: \begin{Pro}{\rm\cite[Theorem 6.62 (b)]{MMP}} \label{MI} Let $X$ be a Banach space and let $J \in C^1(X)$ be a functional satisfying the (C)-condition such that $K_J$ is a finite set. Then, there exists a formal power series $Q(t)=\sum_{k=0}^{\infty} q_k t^k \; (q_k \in \mathbb{N}_0 \; \forall k \in \mathbb{N}_0)$ such that for all $t \in \mathbb{R}$ $$\sum_{k=0}^{\infty} \sum_{u \in K_J} \mathrm{dim}\, C_k(J,u) t^k = \sum_{k=0}^{\infty} \mathrm{dim}\, C_k(J,\infty) t^k + (1+t) Q(t).$$ \end{Pro} \section{Results}\label{sec4} \noindent This section is organized as follows: in Subsection \ref{subsec41} we prove an a priori bound for the weak solutions of problem \eqref{P}, in both the subcritical and the critical case, and we recall some preliminary results, including the weak and strong maximum principles and a Hopf lemma.
In Subsection \ref{subsec42} we prove the equivalence of local minimizers in the $X(\Omega)$-topology and in the $C_{\delta}^0({\overline{\Omega}})$-topology; in Subsection \ref{subsec43} we consider an eigenvalue problem for the nonlocal anisotropic operator $L_K$. \subsection{$L^{\infty}$ bound on the weak solutions}\label{subsec41} \noindent We prove an $L^{\infty}$ bound on the weak solutions of \eqref{P} (in the subcritical case such a bound is uniform) \cite[Theorem 3.2]{IMS}. \begin{Teo} \label{SL} If $f$ satisfies the growth condition \eqref{G}, then for any weak solution $u \in X(\Omega)$ of \eqref{P} we have $u \in L^{\infty}(\Omega)$. Moreover, if $q<2_{s}^{*}$ in \eqref{G}, then there exists a function $M \in C(\mathbb{R}_+)$, only depending on the constants $C$, $n$, $s$ and $\Omega$, such that $$||u||_{\infty} \leq M(||u||_{2_{s}^{*}}).$$ \end{Teo} \begin{proof} Let $u \in X(\Omega)$ be a weak solution of \eqref{P} and set $\gamma=(2_s^*/2)^{1/2}$ and $t_k=\mathrm{sgn}(t) \min\{|t|,k\}$ for all $t \in \mathbb{R}$ and $k>0$. For all $r \geq 2$ and $k>0$ we define $v=u|u|_k^{r-2}$; then $v \in X(\Omega)$. By \eqref{P3} and the fractional Sobolev inequality we have that $$||u|u|_k^{\frac{r}{2}-1}||_{2_s^*}^2 \leq C ||u|u|_k^{\frac{r}{2}-1}||_{H_0^s}^2 \leq \frac{C}{\beta} ||u|u|_k^{\frac{r}{2}-1}||^2.$$ By \cite[Lemma 3.1]{IMS}, taking $v$ as a test function in \eqref{Fd}, we obtain $$||u|u|_k^{\frac{r}{2}-1}||_{2_s^*}^2 \leq C ||u|u|_k^{\frac{r}{2}-1}||^2 \leq \frac{C r^2}{r-1} \left\langle u,v\right\rangle_{X(\Omega)} \leq C r \int_{\Omega} |f(x,u)| |v|\,dx,$$ for some $C>0$ independent of $r \geq 2$ and $k>0$. Applying \eqref{G} and Fatou's lemma as $k \rightarrow \infty$ yields $$||u||_{\gamma^2 r} \leq C r^{1/r} \left( \int_{\Omega} (|u|^{r-1} + |u|^{r+q-2})\,dx\right) ^{1/r}.$$ The rest of the proof follows as in \cite{IMS}, using a suitable bootstrap argument, which eventually yields $u \in L^{\infty}(\Omega)$.
The main difference is that such a bound is uniform only in the subcritical case, not in the critical one. \end{proof} \noindent Theorem \ref{SL} allows us to set $g(x):=f(x,u(x))$, with $g \in L^{\infty}(\Omega)$, and to rephrase the problem as a linear Dirichlet problem \begin{equation} \begin{cases} \mathit{L}_K u = g(x) & \text{in $\Omega$} \\ u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$.} \end{cases} \label{L} \end{equation} \begin{Pro}{\rm\cite[Proposition 4.1, Weak maximum principle]{RO}} \label{WmP} Let $u$ be any weak solution to \eqref{L}, with $g \geq 0$ in $\Omega$. Then, $u \geq 0$ in $\Omega$. \end{Pro} \noindent We observe that the weak maximum principle also holds when the Dirichlet datum is given by $u=h$, with $h \geq 0$ in $\mathbb{R}^n \setminus \Omega$.\\ For problem \eqref{L}, the interior regularity of solutions depends on the regularity of $g$, but also on the regularity of $K(y)$ in the $y$-variable. Furthermore, if the kernel $K$ is not regular, then the interior regularity of $u$ will in addition depend on the boundary regularity of $u$. \begin{Teo}{\rm\cite[Theorem 6.1, Interior regularity]{RO}} \label{IR} Let $\alpha>0$ be such that $\alpha + 2s$ is not an integer, and $u \in L^{\infty}(\mathbb{R}^n)$ be any weak solution to $L_K u=g$ in $B_1$. Then, $$||u||_{C^{2s+\alpha}(B_{1/2})}\leq C(||g||_{C^{\alpha}(B_1)}+||u||_{C^{\alpha}(\mathbb{R}^n)}).$$ \end{Teo} \noindent It is important to remark that the previous estimate is valid also in the case $\alpha =0$ (in which the $C^{\alpha}$ norm has to be replaced by the $L^{\infty}$ norm). With no further regularity assumption on the kernel $K$, this estimate is sharp, in the sense that the norm $||u||_{C^{\alpha}(\mathbb{R}^n)}$ cannot be replaced by a weaker one.
Under the extra assumption that the kernel $K(y)$ is $C^{\alpha}$ outside the origin, the following estimate holds: $$||u||_{C^{2s+\alpha}(B_{1/2})}\leq C(||g||_{C^{\alpha}(B_1)}+||u||_{L^{\infty}(\mathbb{R}^n)}).$$ \noindent We focus now on the boundary regularity of solutions to \eqref{L}. \begin{Pro}{\rm\cite[Proposition 7.2, Optimal H\"{o}lder regularity]{RO}} \label{Opt} Let $g \in L^{\infty}(\Omega)$, and $u$ be the weak solution of \eqref{L}. Then, $$||u||_{C^s(\overline{\Omega})} \leq C ||g||_{L^{\infty}(\Omega)},$$ for some positive constant $C$. \end{Pro} \noindent Thus, the solutions to \eqref{L} are $C^{3s}$ inside $\Omega$ whenever $g \in C^s$, but only $C^s$ on the boundary, and this is the best regularity that can be obtained. For instance, consider the following torsion problem: \[ \begin{cases} \mathit{L}_K u = 1 & \text{in $B_1$} \\ u = 0 & \text{in $\mathbb{R}^n \setminus B_1$.} \end{cases} \] The solution $u_0:= (1-|x|^2)_{+}^s$ belongs to $C^s(\overline{B_1})$, but $u_0 \notin C^{s+\epsilon}(\overline{B_1})$ for any $\epsilon > 0$; as a consequence, we cannot expect solutions to be better than $C^s(\overline{\Omega})$. \noindent While solutions of fractional equations exhibit good interior regularity properties, they may have a singular behaviour at the boundary.
Therefore, instead of the usual space $C^1(\overline{\Omega})$, they are better framed in the following weighted H\"{o}lder-type spaces $C_{\delta}^{0}(\overline{\Omega})$ and $C_{\delta}^{\alpha}(\overline{\Omega})$, defined below.\\ We set $\delta(x)=\mathrm{dist}(x,\mathbb{R}^n \setminus \Omega)$ for $x \in \overline{\Omega}$ and we define $$C_{\delta}^0(\overline{\Omega})=\{u\in C^0(\overline{\Omega}):u/\delta^s \in C^0(\overline{\Omega})\},$$ $$C_{\delta}^{\alpha}(\overline{\Omega})=\{u\in C^0(\overline{\Omega}):u/\delta^s \in C^{\alpha}(\overline{\Omega})\} \quad (\alpha \in (0,1)),$$ endowed with the norms $$||u||_{0,\delta}= \left\|\cfrac{u}{\delta^s}\right\|_{\infty}, \quad ||u||_{\alpha,\delta}= ||u||_{0,\delta} + \sup_{x \neq y} \frac{|u(x)/\delta^s(x) - u(y)/\delta^s(y)|}{|x-y|^{\alpha}},$$ respectively. For all $0 \leq \alpha < \beta <1$ the embedding $C_{\delta}^{\beta}(\overline{\Omega}) \hookrightarrow C_{\delta}^{\alpha}(\overline{\Omega})$ is continuous and compact. In this setting, the positive cone $C_{\delta}^0(\overline{\Omega})_{+}$ has a nonempty interior given by $$\mathrm{int}(C_{\delta}^0(\overline{\Omega})_{+})=\left\{u \in C_{\delta}^0(\overline{\Omega}): \frac{u(x)}{\delta^s(x)}>0 \text{ for all } x \in \overline{\Omega} \right\}.$$ \noindent The function $\frac{u}{\delta^s}$ on $\partial \Omega$ sometimes plays the role that the normal derivative $\frac{\partial u}{\partial \nu}$ plays in second order equations. Furthermore, we recall that another fractional normal derivative can be considered, namely the one in formula (1.2) of \cite{DROV}. \begin{Lem}{\rm\cite[Lemma 7.3, Hopf's lemma]{RO}} \label{Hopf} Let $u$ be any weak solution to \eqref{L}, with $g \geq 0$.
Then, either $$u \geq c \delta^s \qquad \text{in } \overline{\Omega} \text{ for some } \; c>0 \quad \text{or} \quad u \equiv 0 \text{ in } \overline{\Omega}.$$ \end{Lem} \noindent Furthermore, the quotient $\frac{u}{\delta^s}$ is not only bounded, but also H\"{o}lder continuous up to the boundary. Using the explicit solution $u_0$ and similar barriers, it is possible to show that solutions $u$ satisfy $|u| \leq C \delta^s$ in $\Omega$. \begin{Teo}[\cite{RO}, Theorem 7.4] \label{Rap} Let $s \in (0,1)$, and $u$ be any weak solution to \eqref{L}, with $g \in L^{\infty}(\Omega)$. Then, $$\left\|\cfrac{u}{\delta^s}\right\|_{C^{\alpha}(\overline{\Omega})} \leq C ||g||_{L^{\infty}(\Omega)}, \quad \alpha \in (0,s).$$ \end{Teo} \begin{Oss} The results in \cite{RO} hold even if $a\geq0$ in the kernel $K$. \end{Oss} \noindent We observe that Hopf's lemma entails the strong maximum principle; in the appendix we will see a more general version of Hopf's lemma, in which the nonlinearity is allowed to be slightly negative, at the cost of requiring higher regularity for $f$. Moreover, we refer to \cite[Proposition 2.5]{DI} for the analogous result for the fractional Laplacian. \subsection{Equivalence of minimizers in the two topologies}\label{subsec42} In Theorem \ref{Equiv} we present a useful topological result, relating the minimizers in the $X(\Omega)$-topology and in the $C_{\delta}^0({\overline{\Omega}})$-topology, respectively. This is an anisotropic version of the result of \cite{IMS}, previously proved in \cite[Proposition 2.5]{BCSS}, which in turn is inspired by \cite{BN}. In the proof of Theorem \ref{Equiv} the critical case, i.e. $q=2_s^*$ in \eqref{G}, presents a twofold difficulty: a loss of compactness which prevents minimization of $J$, and the lack of a uniform a priori estimate for the weak solutions of \eqref{P}. \begin{Teo} \label{Equiv} Let \eqref{G} hold, $J$ be defined as above, and $u_0 \in X(\Omega)$.
Then, the following conditions are equivalent:\\ i) there exists $\rho>0$ such that $J(u_0 + v)\geq J(u_0)$ for all $v \in X(\Omega) \cap \emph{C}_{\delta}^{0}(\overline{\Omega})$, $||v||_{0, \delta} \leq \rho$;\\ ii) there exists $\epsilon>0$ such that $J(u_0 + v)\geq J(u_0)$ for all $v \in X(\Omega)$, $||v|| \leq \epsilon$. \end{Teo} \noindent We remark that, contrary to the result of \cite{BN} in the local case $s=1$, there is no relationship between the topologies of $X(\Omega)$ and $C_{\delta}^{0}(\overline{\Omega})$. \begin{proof} We define $J \in C^1(X(\Omega))$ as in Subsection \ref{subsec31}.\\ We argue as in \cite[Theorem 1.1]{IMS}.\\ \textbf{i)} $\Rightarrow$ \textbf{ii)}\\ We first suppose $u_0=0$, so that hypothesis i) can be rewritten as $$\inf_{u \in X(\Omega) \cap \overline{B}_{\rho}^{\delta}} J(u)=0,$$ where $\overline{B}_{\rho}^{\delta}$ denotes the closed ball in $C_{\delta}^0(\overline{\Omega})$ centered at $0$ with radius $\rho$.\\ We argue by contradiction: we assume i) and that there exist sequences $(\epsilon_n)$ in $(0,\infty)$ and $(u_n)$ in $X(\Omega)$ such that $\epsilon_n \rightarrow 0$, $||u_n|| \leq \epsilon_n$, and $J(u_n) < J(0)$ for all $n \in \mathbb{N}$.\\ We consider two cases: \begin{itemize} \item If $q<2_s^*$ in \eqref{G}, then by the compact embedding $X(\Omega) \hookrightarrow L^q(\Omega)$, $J$ is sequentially weakly lower semicontinuous in $X(\Omega)$, hence we may assume $$J(u_n)= \inf_{\overline{B}_{\epsilon_n}^X} J <0,$$ where $\overline{B}_{\epsilon_n}^X$ denotes the closed ball in $X(\Omega)$ centered at $0$ with radius $\epsilon_n$.\\ Therefore there exists a Lagrange multiplier $\mu_n \leq 0$ such that for all $v \in X(\Omega)$ $$\left\langle J'(u_n),v\right\rangle=\mu_n \left\langle u_n,v\right\rangle_{X(\Omega)},$$ which is equivalent to $u_n$ being a weak solution of \[ \begin{cases} \mathit{L}_K u = C_n f(x,u) & \text{in $\Omega$ } \\ u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$,} \end{cases} \] with $C_n=(1-\mu_n)^{-1} \in (0,1]$.
By Theorem \ref{SL}, $||u_n||_{\infty} \leq C$, hence by Proposition \ref{Opt} and Theorem \ref{Rap} we have $u_n \in C_{\delta}^{\alpha}(\overline{\Omega})$ and $||u_n||_{\alpha,\delta} \leq C$. By the compact embedding $C_{\delta}^{\alpha}(\overline{\Omega}) \hookrightarrow C_{\delta}^0(\overline{\Omega})$, passing to a subsequence, $u_n \rightarrow 0$ in $C_{\delta}^0(\overline{\Omega})$; consequently, for $n \in \mathbb{N}$ large enough we have $||u_n||_{0,\delta}\leq \rho$ together with $J(u_n)<0$, a contradiction. \item If $q=2_s^*$ in \eqref{G}, then we use a truncated functional $$J_k(u)=\frac{||u||^2}{2} - \int_{\Omega} F_k(x,u(x))\,dx,$$ with $f_k(x,t)=f(x, \mathrm{sgn}(t) \min\{|t|,k\})$ and $F_k(x,t)=\int_0^t f_k(x,\tau)\, d\tau$, to overcome the lack of compactness and of a uniform $L^{\infty}$-bound. \end{itemize} \textbf{Case $u_0 \neq 0$.}\\ Since $C_c^{\infty}(\Omega)$ is a dense subspace of $X(\Omega)$ (see \cite[Theorem 6]{FSV}, \cite[Theorem 2.6]{MBRS}) and $J'(u_0) \in X(\Omega)^*$, \begin{equation} \left\langle J'(u_0),v\right\rangle =0 \label{PS} \end{equation} holds not only for all $v \in X(\Omega) \cap C_{\delta}^0(\overline{\Omega})$ (in particular, for all $v \in C_c^{\infty}(\Omega)$) but for all $v \in X(\Omega)$, i.e., $u_0$ is a weak solution of \eqref{P}. By the $L^{\infty}$ bound of Theorem \ref{SL}, we have $u_0 \in L^{\infty}(\Omega)$, hence $f(\cdot,u_0(\cdot)) \in L^{\infty}(\Omega)$. Now Proposition \ref{Opt} and Theorem \ref{Rap} imply that $u_0 \in C_{\delta}^0(\overline{\Omega})$. We set for all $v \in X(\Omega)$ $$\tilde{J}(v)=\frac{||v||^2}{2} - \int_{\Omega} \tilde{F}(x,v(x))\,dx,$$ with, for all $(x,t) \in \Omega \times \mathbb{R}$, $$\tilde{F}(x,t)=F(x, u_0(x)+t)-F(x,u_0(x))-f(x,u_0(x))t.$$ Then $\tilde{J} \in C^1(X(\Omega))$ and the mapping $\tilde{f}: \Omega \times \mathbb{R} \rightarrow \mathbb{R}$ defined by $\tilde{f}(x,t)= \partial_t \tilde{F}(x,t)$ satisfies a subcritical growth condition of the type \eqref{G}.
Besides, by \eqref{PS}, we have for all $v \in X(\Omega)$ $$\tilde{J}(v)=\frac{1}{2}(||u_0+v||^2 - ||u_0||^2) - \int_{\Omega} (F(x,u_0+v)-F(x,u_0))\,dx = J(u_0+v) - J(u_0),$$ in particular $\tilde{J}(0)=0$. Hypothesis i) thus rephrases as $$\inf_{v \in X(\Omega) \cap \overline{B}_{\rho}^{\delta}} \tilde{J}(v)=0$$ and the conclusion follows from the previous cases.\\ \textbf{ii)} $\Rightarrow$ \textbf{i)} \\ We argue by contradiction: we assume ii) and suppose that there exists a sequence $(u_n)$ in $X(\Omega)\cap C_{\delta}^{0}(\overline{\Omega})$ such that $u_n \rightarrow u_0$ in $C_{\delta}^{0}(\overline{\Omega})$ and $J(u_n)<J(u_0)$. Then $$\limsup_n ||u_n||^2 \leq ||u_0||^2,$$ in particular $(u_n)$ is bounded in $X(\Omega)$, so (up to a subsequence) $u_n \rightharpoonup u_0$ in $X(\Omega)$, hence, by \cite[Proposition 3.32]{B}, $u_n \rightarrow u_0$ in $X(\Omega)$. For $n\in \mathbb{N}$ large enough we have $||u_n-u_0||\leq \epsilon$, a contradiction. \end{proof} \subsection{An eigenvalue problem}\label{subsec43} We consider the following eigenvalue problem \begin{equation} \begin{cases} \mathit{L_K} u = \lambda u & \text{in $\Omega$ } \\ u = 0 & \text{in $\mathbb{R}^n \setminus \Omega$.} \end{cases} \label{EP} \end{equation} \noindent We recall that $\lambda \in \mathbb{R}$ is an \emph{eigenvalue} of $L_K$ provided there exists a nontrivial solution $u \in X(\Omega)$ of problem \eqref{EP}; in this case, any such solution is called an \emph{eigenfunction} corresponding to the eigenvalue $\lambda$.
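For the reader's convenience we record explicitly the weak formulation of \eqref{EP}, obtained by specializing \eqref{Fd} to the linear reaction $f(x,t)=\lambda t$ (the display below is a direct consequence of the definitions above, not an additional assumption):

```latex
% Weak formulation of the eigenvalue problem \eqref{EP}:
% u \in X(\Omega)\setminus\{0\} is an eigenfunction with eigenvalue \lambda iff
\int_{\mathbb{R}^{2n}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy
  = \lambda \int_{\Omega} u(x)\,v(x)\,dx
  \qquad \forall v \in X(\Omega).
% Taking v = u gives \lambda = ||u||^2 / ||u||_{L^2(\Omega)}^2 > 0,
% so every eigenvalue of L_K is positive.
```

In particular, $L_K$ has no nonpositive eigenvalues, consistently with the positivity of $\lambda_1$ stated below.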
\begin{Pro} The set of the eigenvalues of problem \eqref{EP} consists of a sequence $\{\lambda_k \}_{k \in \mathbb{N}}$ with $$ 0 < \lambda_1 < \lambda_2 \leq \cdots \leq \lambda_k \leq \lambda_{k+1} \leq \cdots \quad \text{and} \quad \lambda_k \rightarrow + \infty \quad \text{as} \quad k \rightarrow + \infty,$$ with associated eigenfunctions $e_1, e_2, \ldots, e_k, e_{k+1}, \ldots$ such that \begin{itemize} \item the eigenvalues can be characterized as follows: \begin{align} \lambda_1 & = \min_{u \in X(\Omega), \; ||u||_{L^2(\Omega)}=1} \int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy, \label{A1} \\ \lambda_{k+1} & = \min_{u \in \mathbb{P}_{k+1},\; ||u||_{L^2(\Omega)}=1} \int_{\mathbb{R}^{2n}} |u(x)-u(y)|^2 K(x-y)\,dxdy \quad \forall k \in \mathbb{N}, \label{Ak} \end{align} where $\mathbb{P}_{k+1}:= \{u \in X(\Omega) \; \mathrm{s.t.} \; \left\langle u,e_j \right\rangle_{X(\Omega)}= 0 \; \forall j=1, \cdots, k\};$ \item there exists a positive function $e_1 \in X(\Omega)$, which is an eigenfunction corresponding to $\lambda_1$, attaining the minimum in \eqref{A1} and normalized so that $||e_1||_{L^2(\Omega)}=1$; moreover, for any $k \in \mathbb{N}$ there exists a nodal function $e_{k+1} \in \mathbb{P}_{k+1}$, which is an eigenfunction corresponding to $\lambda_{k+1}$, attaining the minimum in \eqref{Ak} and normalized so that $||e_{k+1}||_{L^2(\Omega)}=1$; \item $\lambda_1$ is simple, namely the eigenfunctions $u \in X(\Omega)$ corresponding to $\lambda_1$ are $u=\zeta e_1$, with $\zeta \in \mathbb{R}$; \item the sequence $\{e_k\}_{k \in \mathbb{N}}$ of eigenfunctions corresponding to $\lambda_k$ is an orthonormal basis of $L^2(\Omega)$ and an orthogonal basis of $X(\Omega)$; \item each eigenvalue $\lambda_k$ has finite multiplicity; more precisely, if $\lambda_k$ is such that $$\lambda_{k-1} < \lambda_k = \cdots = \lambda_{k+h} < \lambda_{k+h+1}$$ for some $h \in \mathbb{N}_0$, then the set of all the eigenfunctions corresponding to $\lambda_k$ agrees with $$\mathrm{span}\{e_k, \ldots,
e_{k+h}\}.$$ \end{itemize} \end{Pro} \begin{Oss} The proof of this result can be found in \cite{SV2}, with the following differences due to the kind of kernel considered. For $L_K$ with a general kernel $K$ satisfying \eqref{P2}-\eqref{P3}-\eqref{P4}, the first eigenfunction $e_1$ is non-negative and every eigenfunction is bounded, and no better regularity results are available \cite{SV2}. For the particular kernel $K(y)= a(\frac{y}{|y|})\frac{1}{|y|^{n+2s}}$ considered here, instead, the first eigenfunction is positive and all eigenfunctions belong to $C^s(\overline{\Omega})$, as in the case of the fractional Laplacian. More precisely, $e_1 \in \mathrm{int}(C_{\delta}^{0}(\overline{\Omega})_{+})$, by Lemma \ref{Hopf} and Theorem \ref{Rap}. \end{Oss} \section{Application: a multiplicity result}\label{sec5} \noindent In this section we present an existence and multiplicity result for the solutions of problem \eqref{P}, under condition \eqref{G} plus some further conditions; in the proof, Theorem \ref{Equiv} will play an essential role. This application is an extension to the anisotropic case of a result on the fractional Laplacian \cite[Theorem 5.2]{IMS}. By a truncation argument and minimization, we show the existence of two constant-sign solutions; then we apply Morse theory to find a third nontrivial solution. \begin{Teo} \label{MR} Let $f: \Omega \times \mathbb{R} \rightarrow \mathbb{R}$ be a Carath\'{e}odory function satisfying \\ i) $|f(x,t)|\leq a(1+|t|^{q-1})$ a.e. in $\Omega$ and for all $t \in \mathbb{R}$ $(a>0, 1<q<2_s^{*})$;\\ ii) $f(x,t)t \geq 0$ a.e. in $\Omega$ and for all $t \in \mathbb{R}$;\\ iii) $\lim_{t \to 0} \frac{f(x,t)-b |t|^{r-2} t}{t}=0$ uniformly a.e. in $\Omega$ $(b>0, 1<r<2)$;\\ iv) $\limsup_{|t| \to \infty} \frac{2 F(x,t)}{t^2} < \lambda_1$ uniformly a.e.
in $\Omega$.\\ \noindent Then problem \eqref{P} admits at least three non-zero solutions $u^{\pm} \in \pm \ \mathrm{int}(C_{\delta}^0(\overline{\Omega})_+)$, $\tilde{u} \in C_{\delta}^0(\overline{\Omega})\setminus \{0\}$. \end{Teo} \begin{Ese} As a model for $f$ we can take the function \[ f(t):= \begin{cases} b |t|^{r-2} t + a_1 |t|^{q-2} t, & \text{if } |t| \leq 1, \\ \beta_1 t, & \text{if } |t|>1, \end{cases} \] with $1<r<2<q<2_s^*$, $a_1, b >0$, $\beta_1 \in (0,\lambda_1)$ s.t. $a_1 + b = \beta_1$. \end{Ese} \begin{proof}[Proof of Theorem \ref{MR}] We define $J \in C^1(X(\Omega))$ as $$J(u)=\frac{||u||^2}{2} - \int_{\Omega} F(x,u(x))\,dx.$$ Without loss of generality we assume $q>2$; throughout, $\epsilon, \epsilon_1, b_1, a_1, a_2$ denote positive constants.\\ From ii) we immediately have $0 \in K_J$, but by iii) $0$ is not a local minimizer. Indeed, for $0<t<\delta$ with $\delta>0$ small enough, iii) gives $$\frac{f(x,t)-bt^{r-1}}{t} \geq - \epsilon,$$ and integrating yields $F(x,t) \geq b_1 t^r - \epsilon_1 t^2$ $(\epsilon_1 < b_1)$; moreover, by i), $F(x,t) \geq -a_1 t-a_2 t^q$; hence we obtain a.e. in $\Omega$ and for all $t \in \mathbb{R}$ \begin{equation} F(x,t) \geq c_0 |t|^r - c_1 |t|^q \quad (c_0,c_1 >0). \label{VA} \end{equation} We consider a function $u \in X(\Omega)$ with $u(x)>0$ a.e. in $\Omega$; for all $\tau >0$ we have $$J(\tau u)= \frac{\tau^2 ||u||^2}{2} - \int_{\Omega} F(x,\tau u)\,dx \leq \frac{\tau^2 ||u||^2}{2} - c_0 \tau^r ||u||_{L^r(\Omega)}^r + c_1 \tau^q ||u||_{L^q(\Omega)}^q,$$ and the latter is negative for $\tau >0$ close enough to $0$ (recall that $r<2<q$); therefore $0$ is not a local minimizer of $J$.
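As a sanity check, for the model nonlinearity of the example above (dropping the $x$-dependence), the lower bound \eqref{VA} can be verified directly for $|t|\leq 1$; the constants are those of the example, while the choice $c_0=b/r$ is ours.

```latex
% For f(t) = b|t|^{r-2}t + a_1|t|^{q-2}t with |t| \le 1, integrating gives
F(t) = \int_0^t f(\tau)\,d\tau
     = \frac{b}{r}\,|t|^{r} + \frac{a_1}{q}\,|t|^{q},
% and, since a_1 > 0, for every c_1 > 0 we get
F(t) \;\geq\; \frac{b}{r}\,|t|^{r}
     \;\geq\; \frac{b}{r}\,|t|^{r} - c_1\,|t|^{q},
% i.e. \eqref{VA} holds on [-1,1] with c_0 = b/r. For |t| > 1 the quadratic
% growth of the branch \beta_1 t dominates |t|^r, so \eqref{VA} extends to
% all of \mathbb{R} after possibly enlarging c_1.
```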
\noindent We define two truncated energy functionals $$J_{\pm}(u):=\frac{||u||^2}{2} - \int_{\Omega} F_{\pm}(x,u)\,dx \quad \forall u \in X(\Omega),$$ setting for all $(x,t) \in \Omega \times \mathbb{R}$ $$f_{\pm}(x,t)=f(x,\pm t_{\pm}), \; F_{\pm}(x,t)=\int_0^t f_{\pm}(x,\tau)\, d\tau, \; t_{\pm}=\max\{\pm t, 0\} \; \forall t \in \mathbb{R}.$$ In a similar way, by \eqref{VA}, we obtain that $0$ is not a local minimizer for the truncated functionals $J_{\pm}$ either.\\ \noindent We focus on the functional $J_+$; clearly $J_+ \in C^1(X(\Omega))$ and $f_+$ satisfies \eqref{G}. We now prove that $J_+$ is coercive in $X(\Omega)$, i.e., $$\lim_{||u|| \to \infty} J_+(u)=\infty.$$ Indeed, by iv), for all $\epsilon >0$ small enough we have, a.e. in $\Omega$ and for all $t \in \mathbb{R}$, $$F_+(x,t) \leq \frac{\lambda_1 - \epsilon}{2} t^2 + C.$$ By the variational characterization \eqref{A1} of $\lambda_1$, we have for all $u \in X(\Omega)$ $$J_+(u)\geq \frac{||u||^2}{2} - \frac{\lambda_1 - \epsilon}{2} ||u||_{L^2(\Omega)}^2 - C \geq \frac{\epsilon}{2 \lambda_1} ||u||^2 - C,$$ and the latter tends to $\infty$ as $||u||\rightarrow \infty$. Consequently, $J_+$ is coercive in $X(\Omega)$.\\ Moreover, $J_+$ is sequentially weakly lower semicontinuous in $X(\Omega)$. Indeed, let $u_n \rightharpoonup u$ in $X(\Omega)$; passing to a subsequence, we may assume $u_n \rightarrow u$ in $L^q(\Omega)$ and $u_n(x) \rightarrow u(x)$ for a.e. $x \in \Omega$; moreover, there exists $g \in L^q(\Omega)$ such that $|u_n(x)| \leq g(x)$ for a.e. $x \in \Omega$ and all $n \in \mathbb{N}$ \cite[Theorem 4.9]{B}.
Hence, $$\lim_n \int_{\Omega} F_+(x,u_n)\,dx = \int_{\Omega} F_+(x,u)\,dx.$$ Besides, by convexity we have $$\liminf_n \frac{||u_n||^2}{2} \geq \frac{||u||^2}{2},$$ and as a result $$ \liminf_n J_+(u_n) \geq J_+(u).$$ \noindent Thus, there exists $u^+ \in X(\Omega) \setminus \{0\}$ such that $$J_+(u^+)=\inf_{u \in X(\Omega)} J_+(u)$$ (note that $u^+ \neq 0$, since $0$ is not a local minimizer of $J_+$). \noindent By Proposition \ref{WmP} and by ii) we have that $u^+$ is a nonnegative weak solution to \eqref{P}. By Theorem \ref{SL}, we obtain $u^+ \in L^{\infty}(\Omega)$, hence by Proposition \ref{Opt} and Theorem \ref{Rap} we deduce $u^+ \in C_{\delta}^0(\overline{\Omega})$. Furthermore, by Hopf's lemma $\frac{u^+}{\delta^s}>0$ in $\overline{\Omega}$, and by \cite[Lemma 5.1]{ILPS} $u^+ \in \mathrm{int}(C_{\delta}^0(\overline{\Omega})_+)$. \\ Let $\rho >0$ be such that $B_{\rho}^{\delta}(u^+) \subset C_{\delta}^0(\overline{\Omega})_+$; then $u^+ +v \in B_{\rho}^{\delta}(u^+)$ for all $v \in C_{\delta}^0(\overline{\Omega})$ with $||v||_{0,\delta}\leq \rho$. Since $J$ and $J_+$ agree on $C_{\delta}^0(\overline{\Omega})_+ \cap X(\Omega)$, $$J(u^+ +v) \geq J(u^+) \qquad \forall v \in X(\Omega) \cap \overline{B}_{\rho}^{\delta},$$ and by Theorem \ref{Equiv}, $u^+$ is a strictly positive local minimizer of $J$ in $X(\Omega)$. Similarly, looking at $J_-$, we can find another, strictly negative, local minimizer $u^- \in - \mathrm{int}(C_{\delta}^0(\overline{\Omega})_+)$ of $J$. Now, by Theorem \ref{MPT}, besides the two minimizers there exists a third critical point $\tilde{u}$, which is of mountain pass type. We only have to show that $\tilde{u} \neq 0$; to do this, we use a Morse-theoretic argument. First of all, in order to apply Morse theory, we prove that $J$ satisfies the Cerami condition (which in this case is equivalent to the Palais--Smale condition).\\ Let $(u_n)$ be a sequence in $X(\Omega)$ such that $|J(u_n)| \leq C$ for all $n \in \mathbb{N}$ and $(1+||u_n||)J'(u_n)\rightarrow 0$ in $X(\Omega)^*$.
Since $J$ is coercive, the sequence $(u_n)$ is bounded in $X(\Omega)$, hence, passing to a subsequence, we may assume $u_n \rightharpoonup u$ in $X(\Omega)$, $u_n \rightarrow u$ in $L^q(\Omega)$ and $L^1(\Omega)$, and $u_n(x) \rightarrow u(x)$ for a.e. $x \in \Omega$, with some $u \in X(\Omega)$. Moreover, by \cite[Theorem 4.9]{B} there exists $g \in L^q(\Omega)$ such that $|u_n(x)| \leq g(x)$ for all $n \in \mathbb{N}$ and a.e. $x \in \Omega$. Using such relations along with i), we obtain \begin{align*} ||u_n-u||^2 & =\left\langle u_n, u_n-u\right\rangle_{X(\Omega)} - \left\langle u, u_n-u\right\rangle_{X(\Omega)} \\ & =J'(u_n)(u_n-u) + \int_{\Omega} f(x,u_n)(u_n-u)\,dx - \left\langle u, u_n-u\right\rangle _{X(\Omega)}\\ & \leq ||J'(u_n)||_* ||u_n-u||+ \int_{\Omega} a (1+|u_n|^{q-1})|u_n-u|\,dx - \left\langle u, u_n-u\right\rangle_{X(\Omega)} \\ & \leq ||J'(u_n)||_* ||u_n-u|| + a (||u_n-u||_{L^1(\Omega)}+||u_n||_{L^q(\Omega)}^{q-1} ||u_n-u||_{L^q(\Omega)}) - \left\langle u, u_n-u\right\rangle_{X(\Omega)} \end{align*} for all $n \in \mathbb{N}$, and the latter tends to $0$ as $n\rightarrow \infty$. Thus, $u_n\rightarrow u$ in $X(\Omega)$.\\ Without loss of generality, we may assume that $0$ is an isolated critical point (otherwise $J$ has infinitely many critical points and the assertion is trivial), so that we can compute the corresponding critical groups.\\ \textbf{Claim:} $C_k(J,0)=0 \quad \forall k \in \mathbb{N}_0$.\\ By iii), we have $$\lim_{t \to 0} \frac{r F(x,t)-f(x,t)t}{t^2}=0,$$ hence, for all $\epsilon >0$ we can find $C_\epsilon >0$ such that a.e.
in $\Omega$ and for all $t \in \mathbb{R}$ $$\left| F(x,t) - \frac{f(x,t) t}{r}\right| \leq \epsilon t^2 + C_\epsilon |t|^q.$$ By the relations above we obtain $$\int_{\Omega} \left(F(x,u) - \frac{f(x,u) u}{r} \right)\,dx= o(||u||^2) \qquad \text{as } ||u||\rightarrow 0.$$ For all $u \in X(\Omega) \setminus \{0\}$ such that $J(u)>0$ we have $$\frac{1}{r} \frac{d}{d\tau} J(\tau u)|_{\tau =1}= \frac{||u||^2}{r} - \int_{\Omega} \frac{f(x,u) u}{r}\,dx= J(u)+ \left(\frac{1}{r}-\frac{1}{2}\right) ||u||^2 +o(||u||^2) \qquad \text{as } ||u||\rightarrow 0.$$ Therefore we can find some $\rho >0$ such that, for all $u \in B_{\rho}(0)\setminus \{0\}$ with $J(u) > 0$, \begin{equation} \frac{d}{d\tau} J(\tau u)|_{\tau =1}>0. \label{De} \end{equation} Using again \eqref{VA}, $J(\tau u)<0$ for $\tau>0$ small enough, hence there exists $\tau(u) \in (0,1)$ such that $J(\tau u) <0$ for all $0<\tau<\tau(u)$ and $J(\tau(u) u)=0$; by \eqref{De}, such $\tau(u)$ is unique, for all $u \in B_\rho(0)$ with $J(u)>0$. We set $\tau(u)=1$ for all $u \in B_\rho(0)$ with $J(u)\leq0$; hence we have defined a map $\tau: B_\rho (0)\rightarrow (0,1]$ such that for $\tau \in (0,1)$ and for all $u \in B_\rho (0)$ we have \[ \begin{cases} J(\tau u)<0 & \text{if $\tau<\tau(u)$} \\ J(\tau u)=0 & \text{if $\tau=\tau(u)$}\\ J(\tau u)>0 & \text{if $\tau>\tau(u).$}\\ \end{cases} \] \noindent By \eqref{De} and the Implicit Function Theorem, $\tau$ turns out to be continuous. We set for all $(t,u) \in [0,1]\times B_\rho (0)$ $$h(t,u)=(1-t)u+t \tau(u)u;$$ then $h: [0,1] \times B_\rho (0)\rightarrow B_\rho (0)$ is a continuous deformation and the set $B_\rho(0) \cap J^0 = \{\tau u: u \in B_{\rho}(0), \tau \in [0,\tau(u)]\}$ is a deformation retract of $B_\rho(0)$. Similarly we deduce that the set $B_\rho (0) \cap J^0 \setminus \{0\}$ is a deformation retract of $B_\rho(0) \setminus \{0\}$.
Consequently, we have $$C_k(J,0)=H_k(J^0 \cap B_\rho (0), J^0 \cap B_\rho (0) \setminus \{0\})= H_k(B_\rho (0), B_\rho (0) \setminus \{0\})=0 \quad \forall k \in \mathbb{N}_0,$$ the last equality following from the contractibility of $B_\rho(0)\setminus\{0\}$, recalling that $\mathrm{dim}(X(\Omega))=\infty$.\\ Since $C_1(J,\tilde{u})\neq 0$ by Proposition \ref{Gcr}, while $C_k(J,0)=0$ for all $k \in \mathbb{N}_0$, we conclude that $\tilde{u}$ is a non-zero solution. \end{proof} \begin{Oss} We remark that we can also use the Morse identity (Proposition \ref{MI}) to conclude the proof. Indeed, we note that $J({u_\pm})< J(0)=0$; in particular, $0$ and $u_{\pm}$ are isolated critical points, hence we can compute the corresponding critical groups. By Proposition \ref{M}, since $u_{\pm}$ are strict local minimizers of $J$, we have $C_k(J,u_{\pm})=\delta_{k,0} \mathbb{R}$ for all $k \in \mathbb{N}_0$. We have already determined $C_k(J,0)=0$ for all $k \in \mathbb{N}_0$; it remains to compute the critical groups at infinity of $J$. Since $J$ is coercive and sequentially weakly lower semicontinuous, $J$ is bounded below in $X(\Omega)$, hence, by \cite[Proposition 6.64 (a)]{MMP}, $C_k(J,\infty)=\delta_{k,0} \mathbb{R}$ for all $k \in \mathbb{N}_0$. Applying the Morse identity with, for instance, $t=-1$, we obtain a contradiction if $K_J=\{0,u_+,u_-\}$; therefore there exists another critical point $\tilde{u} \in K_J \setminus \{0, u_{\pm}\}$. \\ In this way, however, we lose the information that $\tilde{u}$ is of mountain pass type. \end{Oss} \section{Appendix: General Hopf's lemma}\label{sec6} \noindent As stated before, we show that the weak and strong maximum principles and Hopf's lemma can be generalized to the case in which the sign of $f$ is unknown.
Now we focus on the following problem \begin{equation} \begin{cases} \mathit{L_K} u = f(x,u) & \text{in $\Omega$ } \\ u = h & \text{in $\mathbb{R}^n \setminus \Omega$,} \end{cases} \label{DNO} \end{equation} where $h \in C^s(\mathbb{R}^n \setminus \Omega)$ and $f$ satisfies the same assumptions as before; in addition, we assume \begin{equation} f(x,t) \geq -ct \quad \forall (x,t) \in \overline{\Omega} \times \mathbb{R}_{+} \quad (c>0). \label{SF} \end{equation} \begin{Oss} Since the Dirichlet datum in \eqref{DNO} is not homogeneous, the energy functional associated with problem \eqref{DNO} is \begin{equation} J(u)=\frac{1}{2} \int_{\mathbb{R}^{2n} \setminus \mathcal{O}} |u(x)-u(y)|^2 K(x-y)\,dxdy - \int_{\Omega} F(x,u(x))\,dx, \label{NH} \end{equation} for all $u \in \tilde{X}:=\{u \in L^2(\mathbb{R}^n): [u]_K< \infty\}$ with $u=h$ a.e. in $\mathbb{R}^n \setminus \Omega$, where $\mathcal{O}= (\mathbb{R}^n \setminus \Omega) \times (\mathbb{R}^n \setminus \Omega)$. When $h$ is not zero, the term $ \int_{\mathcal{O}} |h(x)-h(y)|^2 K(x-y)\,dxdy$ could be infinite; this is the reason why one has to take \eqref{NH}, see \cite{RO}. \end{Oss} \noindent We begin with a weak maximum principle for \eqref{DNO}. \begin{Pro}[Weak maximum principle] Let \eqref{SF} hold and let $u$ be a weak solution of \eqref{DNO} with $h \geq 0$ in $\mathbb{R}^n \setminus \Omega$. Then, $u \geq 0$ in $\Omega$. \end{Pro} \begin{proof} Let $u$ be a weak solution of \eqref{DNO}, i.e. \begin{equation} \int_{\mathbb{R}^{2n}\setminus \mathcal{O}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy = \int_{\Omega} f(x,u(x)) v(x)\,dx \label{Sol} \end{equation} for all $v \in X(\Omega)$. We write $u=u^+ - u^-$ in $\Omega$, where $u^+$ and $u^-$ stand for the positive and the negative part of $u$, respectively.
We take $v=u^-$, we assume that $u^-$ is not identically zero, and we argue by contradiction.\\ From the hypotheses we have \begin{equation} \int_{\Omega} f(x,u(x)) v(x)\,dx = \int_{\Omega} f(x,u(x)) u^-(x)\,dx \geq - \int_{\Omega} c u(x) u^-(x)\,dx = \int_{\Omega^{-}} c u(x)^2\,dx >0, \label{Sgn} \end{equation} where $\Omega^{-}:=\{x \in \Omega : u(x)<0\}$.\\ On the other hand, we obtain that \begin{align*} &\int_{\mathbb{R}^{2n}\setminus \mathcal{O}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy \\ &=\int_{\Omega \times \Omega} (u(x)-u(y))(u^-(x)-u^-(y))K(x-y)\,dxdy \;+ \\ &+2\int_{\Omega \times (\mathbb{R}^n \setminus \Omega)} (u(x)-h(y)) u^-(x)K(x-y)\,dxdy. \end{align*} Moreover, $(u^+(x)-u^+(y))(u^-(x)-u^-(y)) \leq 0$, and thus \begin{align*} &\int_{\Omega \times \Omega} (u(x)-u(y))(u^-(x)-u^-(y))K(x-y)\,dxdy \\ & \leq - \int_{\Omega \times \Omega} (u^-(x)-u^-(y))^2 K(x-y)\,dxdy < 0. \end{align*} Since $h \geq 0$, then $$\int_{\Omega \times (\mathbb{R}^n \setminus \Omega)} (u(x)-h(y)) u^-(x)K(x-y)\,dxdy \leq 0.$$ Therefore, we have obtained that $$\int_{\mathbb{R}^{2n}\setminus \mathcal{O}} (u(x)-u(y))(v(x)-v(y))K(x-y)\,dxdy <0,$$ which contradicts \eqref{Sol}-\eqref{Sgn}. \end{proof} \noindent The next step consists in proving a strong maximum principle for \eqref{DNO}. To do so we will need a slightly more restrictive notion of solution, namely a pointwise solution, which is equivalent to that of weak solution under further regularity assumptions on the reaction $f$. Therefore, as we have seen previously, we add extra hypotheses on $f$ to obtain better interior regularity of the solutions; as a consequence, we can prove a strong maximum principle and Hopf's lemma in a more general setting.
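\noindent The sign inequality $(u^+(x)-u^+(y))(u^-(x)-u^-(y)) \leq 0$ used in the proof above is elementary; for the reader's convenience, it can be checked pointwise as follows.

```latex
% Since u^+(x)u^-(x)=0 for every x, the terms u^+(x)u^-(x) and u^+(y)u^-(y)
% vanish, and expanding the product gives
\[
\bigl(u^+(x)-u^+(y)\bigr)\bigl(u^-(x)-u^-(y)\bigr)
   =-\,u^+(x)\,u^-(y)-u^+(y)\,u^-(x)\;\leq\;0,
\]
% because u^+ and u^- are nonnegative everywhere.
```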
\begin{Pro}[Strong maximum principle] \label{SMP} Let \eqref{SF} hold, $f(.,t) \in C^s(\overline{\Omega})$ for all $t\in \mathbb{R}$, $f(x,.)\in C_{loc}^{0,1}(\mathbb{R})$ for all $x \in \overline{\Omega}$, $a \in L^{\infty}(S^{n-1})$, and let $u$ be a weak solution of \eqref{DNO} with $h \geq 0$ in $\mathbb{R}^n \setminus \Omega$. Then either $u(x)=0$ for all $x \in \Omega$ or $u > 0$ in $\Omega$. \end{Pro} \begin{proof} The assumptions $f(.,t) \in C^s(\overline{\Omega})$ for all $t \in \mathbb{R}$ and $f(x,.)\in C_{loc}^{0,1}(\mathbb{R})$ for all $x \in \overline{\Omega}$ imply that $x\mapsto f(x,u(x))$ belongs to $C^s(\overline{\Omega})$.\\ We fix $x \in \Omega$; since $\Omega$ is an open set, there exists a ball $B_R(x)$ such that $u$ satisfies $L_K(u)=f(x,u)$ weakly in $B_R(x)$, hence by Theorem \ref{IR} $u \in C^{3s}\bigl(B_{\frac{R}{2}}\bigr)$ and by Proposition \ref{Opt} $u \in C^s(\mathbb{R}^n)$; then $u$ is a pointwise solution, namely the operator $\mathit{L_K}$ can be evaluated pointwise: \begin{align*} & \int_{\mathbb{R}^n} \frac{|u(x)-u(y)|}{|x-y|^{n+2s}} a\Bigl(\frac{x-y} {|x-y|}\Bigr)\,dy\\ & \leq C ||a||_{L^\infty} \int_{B_{\frac{R}{2}}} \frac{|x-y|^{3s}}{|x-y|^{n+2s}}\,dy + C ||a||_{L^\infty} \int_{\mathbb{R}^n \setminus B_{\frac{R}{2}}} \frac{|x-y|^s}{|x-y|^{n+2s}}\,dy \\ &=C \left(\int_{B_{\frac{R}{2}}} \frac{1}{|x-y|^{n-s}}\,dy +\int_{\mathbb{R}^n \setminus B_{\frac{R}{2}}} \frac{1}{|x-y|^{n+s}}\,dy \right)< \infty. \end{align*} Therefore, under these hypotheses, a weak solution $u$ of problem \eqref{DNO} is in fact a pointwise solution of this problem. \\ By the weak maximum principle, $u \geq 0$ in $\mathbb{R}^n$. We assume that $u$ does not vanish identically.\\ Now, we argue by contradiction. We suppose that there exists a point $x_0 \in \Omega$ such that $u(x_0)=0$; hence $x_0$ is a minimum point of $u$ in $\mathbb{R}^n$, and then $$0=-cu(x_0) \leq L_Ku(x_0)= \int_{\mathbb{R}^n} (u(x_0)-u(y))K(x_0-y)\,\mathrm{d}y < 0,$$ a contradiction.
\end{proof} \noindent Finally, by using the previous results, we can prove a generalised Hopf's Lemma for \eqref{DNO} with possibly negative reaction. \begin{Lem}[Hopf's Lemma] \label{Hopf1} Let \eqref{G} and \eqref{SF} hold, $f(.,t) \in C^s(\overline{\Omega})$ for all $t \in \mathbb{R}$, $f(x,.)\in C_{loc}^{0,1}(\mathbb{R})$ for all $x \in \overline{\Omega}$, $a \in L^{\infty}(S^{n-1})$. If $u$ is a solution of \eqref{DNO} and $h \geq 0$ in $\mathbb{R}^n \setminus \Omega$, then either $u(x)=0$ for all $x \in \Omega$ or $$ \liminf_{\Omega \ni x \rightarrow x_0} \frac{u(x)}{\delta(x)^s} > 0 \quad \forall x_0 \in \partial\Omega.$$ \end{Lem} \begin{proof} The proof is divided into two parts: first we prove the result in a ball $B_R$, $R>0$, and then in a general $\Omega$ satisfying an interior ball condition. (We assume that $B_R$ is centered at the origin without loss of generality.) We argue as in \cite[Lemma 1.2]{GS}.\\ \textbf{Case $\Omega=B_R$}\\ We suppose that $u$ does not vanish identically in $B_R$. By Proposition \ref{SMP} $u>0$ in $B_R$, hence for every compact set $K \subset B_R$ we have $\min_{K} u>0$. We recall (see \cite[Lemma 5.4]{RO}) that $u_R(x)= C (R^2-|x|^2)_{+}^{s}$ is a solution of \[ \begin{cases} \mathit{L}_K u_R =1 & \text{in $B_R$ } \\ u_R=0 & \text{in $\mathbb{R}^n \setminus B_R$,} \end{cases} \] and we define $v_m(x)= \frac{1}{m} u_R(x)$ for $x \in \mathbb{R}^n$ and every $m \in \mathbb{N}$; consequently $L_K v_m=\frac{1}{m}$.\\ \textbf{Claim}: There exists some $\bar{m} \in \mathbb{N}$ such that $u \geq v_{\bar{m}}$ in $\mathbb{R}^n$.\\ We argue by contradiction: we define $w_m=v_m-u$ for all $m \in \mathbb{N}$, and we suppose that, for every $m$, $w_m>0$ somewhere in $\mathbb{R}^n$. Since $v_m=0 \leq u$ in $\mathbb{R}^n \setminus B_R$, there exists $x_m \in B_R$ such that $w_m(x_m)=\max_{B_R} w_m >0$, hence we may write $0<u(x_m)<v_m(x_m)$.
As a consequence of this and of the fact that \begin{equation} v_m \rightarrow 0 \text{ uniformly in } \mathbb{R}^n, \label{Cv} \end{equation} we obtain \begin{equation} \lim_{m \to +\infty} u(x_m)=0. \label{Cu} \end{equation} This, together with the fact that $\min_{K} u>0$ on compact subsets $K\subset B_R$, implies $|x_m|\rightarrow R$ as $m \rightarrow + \infty$. Consequently, as long as $y$ ranges in the ball $\overline{B}_{\frac{R}{2}} \subset B_R$, the difference $x_m-y$ stays bounded away from zero when $m$ is large. Therefore, recalling also Remark \ref{Oss1}, there exists a constant $C>1$, independent of $m$, such that \begin{equation} \frac{1}{C} \leq \int_{B_{\frac{R}{2}}} a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr) \frac{1} {|x_m-y|^{n+2s}}\,dy \leq C. \label{S} \end{equation} By assumption and arguing as in the previous proof, the operator $L_K$ can be evaluated pointwise, hence we obtain \begin{align} \begin{split} &-cu(x_m)\leq L_K u(x_m) = \int_{\mathbb{R}^n} \frac{u(x_m)-u(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy\\ &=\int_{B_{\frac{R}{2}}} \frac{u(x_m)-u(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy + \int_{\mathbb{R}^n\setminus B_{\frac{R}{2}}} \frac{u(x_m)-u(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy\\ &= A_m + B_m.
\label{GS} \end{split} \end{align} We concentrate on the first integral: since $b:=\min_{B_{\frac{R}{2}}} u$ is a positive constant, by the previous estimates and by Fatou's lemma we have $$\limsup_{m} A_m = \limsup_{m} \int_{B_{\frac{R}{2}}} \frac{u(x_m)-u(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy \leq -\frac{b}{C} <0,$$ where we used \eqref{Cu} and \eqref{S}.\\ For the second integral we observe that $u(x_m)-u(y)\leq v_m(x_m) - v_m(y)$; indeed, we recall that $w_m(y) \leq w_m(x_m)$ for all $y \in \mathbb{R}^n$ (since $x_m$ is a maximum point of $w_m$ in $\mathbb{R}^n$). Hence, passing to the limit, by \eqref{Cv} and \eqref{S} we obtain \begin{align*} B_m & \leq \int_{\mathbb{R}^n\setminus B_{\frac{R}{2}}} \frac{v_m(x_m)-v_m(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy\\ &= L_K v_m(x_m) - \int_{B_{\frac{R}{2}}} \frac{v_m(x_m)-v_m(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy \\ &= \frac{1}{m} - \int_{B_{\frac{R}{2}}} \frac{v_m(x_m)-v_m(y)}{|x_m-y|^{n+2s}} \; a\Bigl(\frac{x_m-y}{|x_m-y|}\Bigr)\,dy \rightarrow 0 \quad \text{as } m \rightarrow \infty. \end{align*} Therefore, inserting these estimates in \eqref{GS}, we obtain $0 \leq - \frac{b}{C}$, a contradiction.\\ Then $u \geq v_{\bar{m}}$ for some $\bar{m}$, therefore $$u(x) \geq \frac{1}{\bar{m}} (R^2-|x|^2)^s=\frac{1}{\bar{m}} (R+|x|)^s (R-|x|)^s = \frac{(R+|x|)^s}{\bar{m}}(\mathrm{dist}(x, \mathbb{R}^n \setminus B_R))^s,$$ hence, letting $|x|\to R$, $$ \liminf_{B_R \ni x \rightarrow x_0} \frac{u(x)}{\delta(x)^s} \geq \frac{1}{\bar{m}} 2^s R^s >0.$$ \noindent \textbf{Case of a general domain $\Omega$}\\ We define $\Omega_{\rho}=\{x \in \Omega: \delta_{\Omega}(x) < \rho\}$ with $\rho >0$; for all $x \in \Omega_{\rho}$ there exists $x_0 \in \partial \Omega$ such that $|x-x_0|=\delta_{\Omega}(x)$. Since $\Omega$ satisfies an interior ball condition, there exists $x_1 \in \Omega$ such that $B_{\rho}(x_1) \subseteq \Omega$, tangent to $\partial \Omega$ at $x_0$.
Then we have that $x \in [x_0,x_1]$ and $\delta_{\Omega}(x)=\delta_{B_{\rho} (x_1)}(x)$.\\ Since $u$ is a solution of \eqref{DNO}, by Proposition \ref{SMP} we observe that either $u\equiv 0$ in $\Omega$, or $u >0$ in $\Omega$. If $u>0$ in $\Omega$, in particular $u>0$ in $B_{\rho}(x_1)$ and $u\geq 0$ in $\mathbb{R}^n \setminus B_{\rho}(x_1)$, then $u$ is a solution of \[ \begin{cases} \mathit{L_K} u = f(x,u) & \text{in $B_{\rho}(x_1)$ } \\ u = \tilde{h} & \text{in $\mathbb{R}^n \setminus B_{\rho}(x_1)$,} \end{cases} \] with \[ \tilde{h}(y)= \begin{cases} u(y), & \text{if } y \in \Omega, \\ h(y), & \text{if } y \in \mathbb{R}^n \setminus \Omega. \end{cases} \] Therefore, by the first case there exists $C=C(\rho, \bar{m}, s)>0$ such that $u(y)\geq C \delta_{B_{\rho} (x_1)}^s (y)$ for all $y \in \mathbb{R}^n$; in particular we obtain $u(x)\geq C \delta_{B_{\rho} (x_1)}^s (x)$.\\ Then, since $\delta_{\Omega}(x)=\delta_{B_{\rho} (x_1)}(x)$, we have $$ \liminf_{\Omega \ni x \rightarrow x_0} \frac{u(x)}{\delta_{\Omega}(x)^s} \geq \liminf_{\Omega_{\rho} \ni x \rightarrow x_0} \frac{C \delta_{\Omega}(x)^s }{\delta_{\Omega}(x)^s} = C > 0 \quad \forall x_0 \in \partial\Omega.$$ \end{proof} \begin{Oss} We stress that in Lemma \ref{Hopf} we consider only weak solutions, while in Lemma \ref{Hopf1} we consider pointwise solutions. Moreover, the regularity of $u/\delta^s$ yields in particular the existence of the limit $$\lim_{\Omega \ni x \rightarrow x_0} \frac{u(x)}{\delta(x)^s}$$ for all $x_0 \in \partial\Omega$. \end{Oss} \vskip4pt \noindent {\small {\bf Acknowledgement.} S.F. would like to acknowledge Antonio Iannizzotto for many valuable discussions on the subject.}
\section{Introduction} Let $f = u+iv$ be a continuous complex-valued harmonic mapping in the open unit disk $E = \{z: |z|<1\}$, where both $u$ and $v$ are real-valued harmonic functions in $E$. Such a mapping can be decomposed into two parts and expressed as $f=h+\overline g$. Here $h$ is known as the analytic part and $g$ the co-analytic part of $f$. Lewy's Theorem implies that a harmonic mapping $f=h+\overline g$ defined in $E$ is locally univalent and sense preserving if and only if the Jacobian of the mapping, defined by $J_f=|h'|^2-|g'|^2,$ is positive, or, equivalently, if and only if $h'(z)\not=0$ and the dilatation function $\omega$ of $f$, defined by $\displaystyle\omega(z)=\frac{g'(z)}{h'(z)}$, satisfies $|\omega(z)|<1$ in $E$. We denote by $S_H$ the class of all harmonic, sense-preserving and univalent mappings $f=h+\overline g$ defined in $E$ which are normalized by the conditions $ h(0)=0 $ and $h_{z}(0)=1$. Therefore, a function $f=h+\overline g$ in the class $S_H$ has the representation \begin{equation} f(z) = z+ \sum _{n=2}^{\infty} a_nz^n + \sum _{n=1}^{\infty}\overline{ b_nz^n} , \end{equation} for all $z$ in $E$. The class of functions of the type (1) with $b_1=0$ is denoted by $S_H^0$, which is a subset of $S_H$. Further, let $K_H$ (respectively $K_H^0$) be the subclass of $S_H$ (respectively $S_H^0$) consisting of functions which map the unit disk $E$ onto convex domains. A domain $\Omega$ is said to be convex in the direction $\phi, 0 \leq \phi < \pi,$ if every line parallel to the line joining $0$ and $ e ^{i \phi}$ has a connected intersection with $\Omega$. In particular, a domain convex in the horizontal direction is denoted by CHD.
\begin{definition} The convolution (or Hadamard product) of two harmonic mappings $F(z) = H + \overline G=z + \sum_{n=2}^\infty A_n z^n + \sum_{n=1}^\infty{\overline B}{_n} {\overline z} {^n}$ and $ f(z)= h+ \overline g =z+\sum_{n=2}^\infty a_n z^n + \sum_{n=1}^\infty\overline {b}{_n}\overline{z}{^n}$ in $S_H$ is defined as $$ \begin{array}{clll} (F {\ast} f)(z)&=& (H {\ast} h)(z) +\overline{(G {\ast} g)(z)}\\ &= & z+ \sum_{n=2}^\infty a_n A_n z^n + \sum_{n=1}^\infty\overline {{b}{_n}{ B}{_n}}\overline {z}{^n}. \end{array} $$ \end{definition} Let $f_a=h_a+\overline{g_a}\in K_H$ be the right half-plane mapping given by $\displaystyle h_a+g_a=\frac{z}{1-z}$ with dilatation function $\displaystyle \omega_a(z)=\frac{a-z}{1-az}$ $(|a|<1,\,a\in \mathbb{R})$. Then, by using the shearing technique (see [\ref{cl and sh}]), we get \begin{equation} \displaystyle h_a(z) =\frac{\frac{1}{1+a}z-\frac{1}{2}z^2}{(1-z)^2}\quad {\rm and}\quad\displaystyle g_a(z) =\frac{\frac{a}{1+a}z-\frac{1}{2}z^2}{(1-z)^2}. \end{equation} By setting $a=0$, we get $f_0=h_0+\overline{g_0}\in K_H^0,$ the standard right half-plane mapping, where \begin{equation} \displaystyle h_0(z) =\frac{z-\frac{1}{2}z^2}{(1-z)^2}\quad {\rm and}\quad\displaystyle g_0(z) =\frac{-\frac{1}{2}z^2}{(1-z)^2}. \end{equation} Unlike the case of analytic functions, the convolution of two univalent convex harmonic functions is not necessarily convex harmonic; it may not even be univalent. So it is interesting to explore the convolution properties of mappings in the class $K_H$. In [\ref{do}] and [\ref{do and no}], the authors obtained several results in this direction.
In particular, they proved the following: \begin{thma} (See [\ref{do}]) Let $\displaystyle f_1=h_1+\overline{g_1}\,,f_2=h_2+\overline{g_2}\in S_H^0$ with $\displaystyle h_i+g_i=\frac{z}{1-z}$ for $i=1,2.$ If $f_1\ast f_2$ is locally univalent and sense preserving, then $f_1\ast f_2\,\in S_H^0$ and is CHD.\end{thma} \begin{thmb} (See [\ref{do and no}]) Let $\displaystyle f=h+\overline{g}\in K_H^0$ with $\displaystyle h+g=\frac{z}{1-z}$ and $\omega(z)=e^{i\theta}z^n$ ($n\in\mathbb{N}$ and $\theta\in \mathbb{R}$). If $n=1,2$, then $f_0\ast f\,\in S_H^0$ and is CHD, where $f_0$ is given by (3).\end{thmb} In Theorem B, the authors established that the requirement in Theorem A that the convolution be locally univalent and sense preserving can be dropped. Recently, Li and Ponnusamy [\ref{li and po},\ref{li and po 2}] also obtained some results involving convolutions of right half-plane and slanted right half-plane harmonic mappings. In [\ref{li and po 2}], they proved: \begin{thmc} Let $f_0$ be given by (3). If $\displaystyle f=h+\overline{g}$ is a slanted right half-plane mapping, given by $\displaystyle h+e^{-2i\alpha}g=\frac{z}{1-e^{i\alpha} z}\,\,(0\leq\alpha<2\pi)$ with $\omega(z)=e^{i\theta}z^n$ ($\theta\in \mathbb{R}$), then for $n=1,2$, $f\ast f_0\,\in S_H^0$ and is convex in the direction of $-\alpha$.\end{thmc} In [\ref{do and no}] and [\ref{li and po 2}], the authors showed that Theorem B and Theorem C do not hold for $n\geq3$. The aim of the present paper is to investigate the convolution properties of the harmonic mappings $\displaystyle f_a\,(a\in \mathbb{R}, |a|<1)$ defined by (2) with the right half-plane mappings $f_n=h+\overline{g}$, where $\displaystyle h+g=\frac{z}{1-z}$ and the dilatation is $\omega(z)=e^{i\theta}z^n$ ($\theta \in \mathbb{R}$, $n\in\mathbb{N}$). We establish that $f_a\ast f_n$ are in $S_H$ and are CHD for all $a \in \left[\frac{n-2}{n+2},1\right)$ and for all $n\in \mathbb{N}$.
A condition is also determined under which $f_a\ast f_b$ is CHD and belongs to $S_H$. \section{Main Results} We begin by proving the following lemma. \begin{lemma} Let $f_a=h_a+\overline{g_a}$ be defined by (2) and $f= h+\overline{g}\,\in S_H$ be the right half-plane mapping, where $\displaystyle h+g=\frac{z}{1-z}$ with dilatation $ \displaystyle\omega(z)=\frac{g'(z)}{h'(z)}\,(h'(z)\not=0, z\in E)$. Then $\widetilde{\omega}_1$, the dilatation of $f_a\ast f$, is given by \begin{equation} \displaystyle \hspace{-1cm}\widetilde{\omega}_1=\left[\frac{2\omega(a-z)(1+\omega)+z\omega'(a-1)(1-z)}{2(1-az)(1+\omega)+z\omega'(a-1)(1-z)}\right].\end{equation} \end{lemma} \begin{proof} From $\displaystyle h+g=\frac{z}{1-z}$ and $\displaystyle g'=\omega h',$ we immediately get\\ $\indent\hspace{1.5cm}\displaystyle h'(z)=\frac{1}{(1+\omega(z))(1-z)^2}$\,\, and \,\, $\displaystyle h''(z)=\frac{2(1+\omega(z))-\omega'(z)(1-z)}{(1+\omega(z))^2(1-z)^3}.$ \\ Let $$f_a\ast f= h_a\ast h + \overline{g_a\ast g}=h_1+\overline{g_1},$$ where $$h_1(z)=\frac{1}{2}\left[\frac{z}{1-z}+\frac{(1-a)z}{(1+a)(1-z)^2}\right]\ast h$$ $$\indent\hspace{-1.5cm}=\frac{1}{2}\left[h+\frac{(1-a)}{(1+a)}zh'\right] $$ and $$ g_1(z)= \frac{1}{2}\left[\frac{z}{1-z}-\frac{(1-a)z}{(1+a)(1-z)^2}\right]\ast g$$ $$\indent\hspace{-1.5cm}=\frac{1}{2}\left[g-\frac{(1-a)}{(1+a)}zg'\right].$$ Now the dilatation $\widetilde{\omega}_1$ of $f_a\ast f$ is given by $$ \begin{array}{clll} \vspace{.5cm} \displaystyle \hspace{-5cm}\widetilde{\omega}_1(z)=\frac{g_1'(z)}{h_1'(z)}=\left[\frac{2ag'-(1-a)zg''}{2h'+(1-a)zh''}\right].\\ \vspace{.5cm} \displaystyle\hspace{-3.5cm}=\left[\frac{2a\omega h'-z(1-a)(\omega h''+\omega'h')}{2h'+(1-a)zh''}\right] \end{array} $$ $$\displaystyle \hspace{-2.3cm}=\left[\frac{2\omega(a-z)(1+\omega)+z\omega'(a-1)(1-z)}{2(1-az)(1+\omega)+z\omega'(a-1)(1-z)}\right].$$ \end{proof} We shall also need the following forms of Cohn's rule and the Schur-Cohn algorithm.
\begin{lema} (Cohn's rule [\ref{ra and sc}, p.375]) Given a polynomial $$t(z)= a_0 + a_1z + a_2z^2+...+a_nz^n$$ of degree $n$, let $$ t^*(z)=\displaystyle z^n\overline{t\left(\frac{1}{\overline z}\right)} = \overline {a}_n + \overline {a}_{n-1}z + \overline {a}_{n-2}z^2 +...+ \overline{a}_0z^n.$$ Denote by $r$ and $s$ the number of zeros of $t(z)$ inside and on the unit circle $|z|=1$, respectively. If $|a_0|<|a_n|,$ then $$ t_1(z)= \frac{\overline {a}_n t(z)-a_0t^*(z)}{z}$$ is of degree $n-1$ and has $r_1=r-1$ zeros inside the unit circle and $s_1=s$ zeros on it.\end{lema} \begin{lemab} (Schur-Cohn's algorithm [\ref{ra and sc}, p.383]) Given a polynomial $$ r(z) =a_0+a_1z+\cdots+a_nz^n+a_{n+1}z^{n+1}$$ of degree $n+1$, let \[M_{k}= det\begin{pmatrix} \overline{B}_{k}\,^T& A_{k} \\ \overline{A}_{k}\,^T& B_{k} \end{pmatrix} \quad (k=1,2,\cdots,n+1),\] \\ where $A_{k}$ and $B_{k}$ are the triangular matrices \[A_{k}=\begin{pmatrix} a_{0}&a_{1}&\cdots &a_{k-1} \\ &a_{0}&\cdots & a_{k-2}\\ & & \ddots & \vdots\\ & & &a_{0} \end{pmatrix},\qquad\,\,\, B_{k}=\begin{pmatrix} \overline{a}_{n+1}&\overline{a}_{n}&\cdots &\overline{a}_{(n+1)-k+1} \\ &\overline{a}_{n+1}&\cdots &\overline{a}_{(n+1)-k+2} \\ & & \ddots & \vdots\\ & & &\overline{a}_{n+1} \end{pmatrix}.\]\\ \normalsize{ Then $r(z)$ has all its zeros inside the unit circle $|z|=1$ if and only if the determinants $M_1, M_2,\cdots,M_{n+1}$ are all positive.}\end{lemab} We now proceed to state and prove our main result. \begin{theorem} Let $f_a=h_a+\overline{g_a}$ be given by (2).
If $f_n= h+\overline{g}$ is the right half-plane mapping given by $\displaystyle h+g=\frac{z}{1-z}$ with $\omega(z)=e^{i\theta}z^n$ ($\theta \in \mathbb{R}$, $n\in\mathbb{N}$), then $f_a\ast f_n \, \in S_H$ and is CHD for $a \in [\frac{n-2}{n+2},1)$.\end{theorem} \begin{proof} In view of Theorem A, it suffices to show that the dilatation $\widetilde{\omega}_1$ of $f_a\ast f_n$ satisfies $\displaystyle|\widetilde{\omega}_1(z)|<1$ for all $z\in E$. Setting $\omega(z)=e^{i\theta}z^n$ in (4), we get \begin{equation} \indent\hspace{-.7cm}\displaystyle\widetilde{\omega}_1(z)=\displaystyle-z^ne^{2i\theta}\left[\frac{z^{n+1}-az^n+ \frac{1}{2}(2+an-n)e^{-i\theta}z+\frac{1}{2}(n-2a-an)e^{-i\theta}}{\frac{1}{2}(n-2a-an)e^{i\theta}z^{n+1}+\frac{1}{2}(2+an-n)e^{i\theta}z^n-az+1}\right] \end{equation} $\indent\hspace{1.3cm}=\displaystyle-z^ne^{2i\theta}\frac{p(z)}{p^*(z)},$ \noindent where \begin{equation}\hspace{-1.8cm}p(z)=z^{n+1}-az^n+ \frac{1}{2}(2+an-n)e^{-i\theta}z+\frac{1}{2}(n-2a-an)e^{-i\theta} \end{equation} and \qquad $p^*(z)=z^{n+1}\overline{p\left(\frac{1}{\overline z}\right)}.$\\ \noindent Obviously, if $z_0$ is a zero of $p$ then $\displaystyle\frac{1}{\overline{z_0}}$ is a zero of $p^*$. Hence, if $A_1,A_2,\cdots, A_{n+1}$ are the zeros of $p$ (not necessarily distinct), then we can write $$\displaystyle\widetilde{\omega}_1(z)=-z^ne^{2i\theta}\frac{(z-A_1)}{(1-\overline {A}_1 z)}\frac{(z-A_2)}{(1-\overline {A}_2 z)}\cdots\frac{(z-A_{n+1})}{(1-\overline {A}_{n+1} z)}.$$ Now, for $|A_i|\leq1$, $\displaystyle \frac{(z-A_i)}{(1-\overline {A_i} z)}$ maps $\overline{E}=\{z: |z|\leq1\}$ onto $\overline{E}.$ So, in order to prove our theorem, we will show that $A_1,A_2,\cdots,A_{n+1}$ lie inside or on the unit circle $|z|=1$ for $a\in \left(\frac{n-2}{n+2},1\right)$ (in the case $a=\frac{n-2}{n+2},$ from (5) we see that $\displaystyle|\widetilde{\omega}_1(z)|=\displaystyle|-z^ne^{i\theta}|<1$).
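\noindent The parenthetical claim for the endpoint $a=\frac{n-2}{n+2}$ can be checked directly from (6); the short computation is included here for completeness.

```latex
% For a=(n-2)/(n+2) one has 2+an-n=-2a and n-2a-an=2, so (6) reduces to
\[
p(z)=z^{n+1}-az^{n}-a\,e^{-i\theta}z+e^{-i\theta}
    =e^{-i\theta}\bigl(e^{i\theta}z^{n+1}-a\,e^{i\theta}z^{n}-az+1\bigr)
    =e^{-i\theta}\,p^{*}(z),
\]
% hence \widetilde{\omega}_1(z) = -z^n e^{2i\theta}\,p(z)/p^*(z) = -z^n e^{i\theta},
% whose modulus is |z|^n < 1 for z in E.
```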
To do this we will use Lemma B by considering the following two cases.\\ {\bf Case 1.} \emph{When $n=1.$} In this case $\displaystyle p(z)=z^2+\left[-a+\frac{1}{2}(1+a)e^{-i\theta}\right]z+\frac{1}{2}(1-3a)e^{-i\theta}$ and $a \in(-\frac{1}{3},1)$. Thus, by comparing $p(z)$ and $r(z)$ of Lemma B, we have\\ \[\indent\hspace{-4cm}M_{1}= det\begin{pmatrix} a_2\,& a_0\\ \overline{a_0}\,& \overline{a_2} \end{pmatrix}=det\begin{pmatrix} 1\,& \frac{1}{2}(1-3a)e^{-i\theta}\\ \frac{1}{2}(1-3a)e^{i\theta}\,& 1 \end{pmatrix}\]\\ $\indent\hspace{1.3cm}=\displaystyle \frac{3}{4}(1-a)(1+3a)>0,$ \quad and\\ \[\indent\hspace{-9.5cm} M_{2}= det\begin{pmatrix} a_2&0&a_0&a_1\\ a_1&a_2&0&a_0\\ \overline{a_0}&0&\overline{a_2}&\overline{a_1}\\ \overline{a_1}&\overline{a_0}&0&\overline{a_2} \end{pmatrix}\] \[\indent\hspace{.1cm}=det\begin{pmatrix} 1&0&\frac{1}{2}(1-3a)e^{-i\theta}&-a+\frac{1}{2}(1+a)e^{-i\theta}\\ -a+\frac{1}{2}(1+a)e^{-i\theta}&0&1&\frac{1}{2}(1-3a)e^{i\theta}\\ \frac{1}{2}(1-3a)e^{i\theta}&0&1&-a+\frac{1}{2}(1+a)e^{i\theta}\\ -a+\frac{1}{2}(1+a)e^{i\theta}&\frac{1}{2}(1-3a)e^{i\theta}&0&1 \end{pmatrix}\]\\ $\indent\hspace{1cm}\displaystyle=\frac{1}{2}(1-a)^2(1+3a)^2\cos^2\frac{\theta}{2}>0,$ for $\theta\not=(2m+1)\pi,\,m\in\mathbb{N}$.\\ If $\theta=(2m+1)\pi,\,m\in\mathbb{N},$ then $p(z)=z^2-\frac{1}{2}(1+3a)z-\frac{1}{2}(1-3a).$ Obviously, $z=1$ and $z= -\frac{1}{2}(1-3a)$ are zeros of $p(z)$ and lie on and inside the unit circle $|z|=1,$ respectively, for $-\frac{1}{3}<a<1.$\\ \noindent{\bf Case 2.} \emph{When $n\geq2$}. 
In this case $ p(z)=z^{n+1}-az^n+ \frac{1}{2}\left(2+an-n\right)e^{-i\theta}z+\frac{1}{2}\left(n-2a-an\right)e^{-i\theta}.$\\ Again comparing $p(z)$ and $r(z)$, let \[\indent\hspace{-6cm}M_{k}= det\begin{pmatrix} \overline{B}_{k}\,^T& A_{k} \\ \overline{A}_{k}\,^T& B_{k} \end{pmatrix}\,\,(k=1,2,3,\cdots,n+1),\] where $A_k$ and $B_k$ are as defined in Lemma B with $a_{n+1}=1,$\,$a_{n}=-a$,\,$a_{n-1}=0$,$\cdots,$ $a_{2}=0$,\,$a_{1}=\frac{1}{2}(2+an-n)e^{-i\theta}$\,and\,$a_{0}=\frac{1}{2}(n-2a-an)e^{-i\theta}$. Since $a_{n+1}=1,$ therefore $det(B_{k})=1$ and so, \[\begin{pmatrix} \overline{B}_{k}\,^T& A_{k} \\ \overline{A}_{k}\,^T& B_{k} \end{pmatrix}\begin{pmatrix} I& \Huge{0} \\ -B_{k}^{-1}\overline{A}_{k}\,^T& I \end{pmatrix}=\begin{pmatrix} \overline{B}_{k}\,^T- A_{k}B_{k}^{-1}\overline{A}_{k}\,^T& A_{k} \\ 0 & B_{k} \end{pmatrix}\] which gives \[\indent\hspace{-1cm}M_{k}= det\begin{pmatrix} \overline{B}_{k}\,^T& A_{k} \\ \overline{A}_{k}\,^T& B_{k} \end{pmatrix}=det\begin{bmatrix} \overline{B}_{k}\,^T- A_{k}B_{k}^{-1}\overline{A}_{k}\,^T\end{bmatrix}.\] Now we consider the following two subcases. 
\\ {\bf Subcase 1.}\emph{ When $k=1,2,3\cdots,n.$} We will show that in this case,\\ $\displaystyle M_k=\left(\frac{1}{4}\right)^k n^{k-1}(n+2k)(2-n+2a+an)^k(1-a)^k,$ which is positive for $a \in \left(\frac{n-2}{n+2},1\right).$\\ In this case $A_k$ and $B_k$ are the following $k \times k$ matrices; \scriptsize{ \[\indent\hspace{-2.5cm}A_{k}=\begin{pmatrix} a_{0}&a_{1}&a_2 &\cdots &a_{n-1} \\ 0&a_{0}&a_1&\cdots &a_{n-2} \\ 0& 0&a_0&\cdots &a_{n-3}\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &a_{0} \end{pmatrix}=\begin{pmatrix} a_{0}&a_{1}&0 &\cdots &0 \\ 0&a_{0}&a_{1}&\cdots &0 \\ 0& 0&a_{0}&\cdots &0\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &a_{0} \end{pmatrix}.\quad B_{k}=\begin{pmatrix} \overline{a}_{n+1}&\overline{a}_{n}&\overline{a}_{n-1} &\cdots &\overline {a}_2 \\ 0&\overline{a}_{n+1}&\overline{a}_n&\cdots &\overline{a}_3 \\ 0& 0&\overline{a}_{n+1}&\cdots &\overline{a}_4\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &\overline{a}_{n+1} \end{pmatrix}=\begin{pmatrix} {1}&{-a}&0 &\cdots &0 \\ 0&1&-a&\cdots &0 \\ 0& 0&1&\cdots &0\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &1 \end{pmatrix}.\]}\\ \normalsize We can compute \scriptsize{ \[\indent\hspace{-2.4cm}\overline{B}_{k}^{\,\,T}-A_{k}\,B_{k}^{-1}\,\overline{A}_{k}^{\,\,T}=\begin{pmatrix} 1-\overline{a_{0}}a_{0}-\overline{a_1}(aa_0+a_1)&-\overline{a_0}(aa_0+a_1)-\overline{a_1}a(aa_0+a_1)&a[-\overline{a_0}(aa_0+a_1)-\overline{a_1}a(aa_0+a_1)] &\cdots &-\overline{a_0}a^{k-2}(aa_0+a_1) \\ -a-\overline{a_1}a_0&1-\overline{a_0}a_0-\overline{a_1}(aa_0+a_1)&-\overline{a_0}(aa_0+a_1)-\overline{a_1}a(aa_0+a_1)&\cdots &-\overline{a_0}a^{k-3}(aa_{0}+a_1) \\ 0&-a-\overline{a_1}a_0 &1-\overline{a_0}a_0-\overline{a_1}(aa_0+a_1)&\cdots &-\overline{a_0}a^{k-4}(aa_{0}+a_1)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &1-\overline{a_0}{a_0} \end{pmatrix}.\] \normalsize Now\\ \noindent{\bf (a)} 
$1-\overline{a_0}a_0-\overline{a_1}(aa_0+a_1)=\frac{1}{4}n(2-n+2a+an)(1-a)(2-a).$\\ ${\bf (b)} -\overline{a_0}(aa_0+a_1)-\overline{a_1}a(aa_0+a_1)=-\frac{1}{4}n(2-n+2a+an)(1-a)^3.$\\ \noindent{\bf (c)} $\overline{a_0}a^{k-m}(aa_0+a_1)=\frac{1}{4}a^{k-m}(n-2a-an)(2-n+2a+an)(1-a),\,m=1,2,3,\cdots,k.$\\ \noindent{\bf (d)} $ -a-\overline{a_1}a_0=-\frac{1}{4}n(2-n+2a+an)(1-a).\\$ \noindent{\bf (e)} $ 1-\overline{a_0}{a_0}=\frac{1}{4}(n+2)(2-n+2a+an)(1-a).$\\ Therefore, \scriptsize{ \[\indent\hspace{-3cm}M_k=\left[\left(\frac{1}{4}\right)^k n^{k-1}(2-n+2a+an)^k(1-a)^k\right]det\begin{pmatrix} (2-a)&-(1-a)^2 &-a(1-a)^2 &\cdots &-a^{k-2}(n-2a-an)\\ -1&(2-a)&-(1-a)^2&\cdots &- a^{k-3}(n-2a-an) \\ 0& -1&(2-a)&\cdots &-a^{k-4}(n-2a-an)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &(n+2) \end{pmatrix}\]} \scriptsize{ \[\indent\hspace{-3cm}=\left[\left(\frac{1}{4}\right)^k n^{k-1}(2-n+2a+an)^k(1-a)^k\right]det\begin{pmatrix} (2-a)&-(1-a)^2 &-a(1-a)^2 &\cdots &-a^{k-2}(n-2a-an)\\ 0&\frac{3-2a}{2-a}&-\frac{2(1-a)^2}{2-a}&\cdots &-\frac{2a^{k-3}(n-2a-an)}{2-a} \\ 0& 0&\frac{4-3a}{3-2a}&\cdots &-\frac{3a^{k-4}(n-2a-an)}{3-2a}\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &\frac{n+2k}{k-(k-1)a} \end{pmatrix}\]} \normalsize $\indent\hspace{-1.5cm}=\left(\frac{1}{4}\right)^k n^{k-1}(n+2k)(2-n+2a+an)^k(1-a)^k.$\\ \normalsize \indent\hspace{-.7cm}{\bf Subcase 2.} \emph{When $k=n+1$}. 
In this case,\\ \scriptsize{ \[\indent\hspace{-3cm}A_{n+1}=\begin{pmatrix} a_{0}&a_{1}&a_{2} &\cdots &{a_n} \\ 0&a_{0}&a_1&\cdots &a_{n-1} \\ 0& 0&a_0&\cdots &a_{n-2}\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &a_{0} \end{pmatrix}=\begin{pmatrix} a_{0}&a_{1}&0 &\cdots &-a \\ 0&a_{0}&a_1&\cdots &0 \\ 0& 0&a_0&\cdots &0\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &a_{0} \end{pmatrix}\quad and \quad B_{n+1}=\begin{pmatrix} \overline{a}_{n+1}&\overline{a}_{n}&\overline {a}_{n-1} &\cdots &\overline{ a}_1 \\ 0&\overline{a}_{n+1}&\overline{a}_n&\cdots &\overline{ a}_2 \\ 0& 0&\overline{a}_{n+1}&\cdots &\overline{ a}_3\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &\overline{a}_{n+1} \end{pmatrix}=\begin{pmatrix} 1&-a&0 &\cdots &\overline{a}_1 \\ 0&1&-a&\cdots &0 \\ 0& 0&1&\cdots &0\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &1 \end{pmatrix}.\]} \normalsize We compute that \scriptsize{ \[ \indent\hspace{-4cm}\overline{B}_{n+1}^{\,\,T}-A_{n+1}B_{n+1}^{-1}\overline{A}_{n+1}^{\,\,T}=\left( \begin{array}{llll} 1-a_0\overline{a}_0-\overline{a}_1(aa_0+a_1)+a^n(aa_0+a_1)-a(a_0\overline{a}_1+a)&\quad-\overline{a}_0(aa_0+a_1)-a\overline{a}_1(aa_0+a_1)&\quad\cdots\\ -a-\overline{a}_1a_0+a^{n-1}(aa_0+a_1)&\quad1-\overline{a}_0a_0-\overline{a}_1(aa_0+a_1)&\quad\cdots \\ a^{n-2}(aa_0+a_1)&\quad-a-\overline{a}_1a_0 &\quad\cdots\\ \vdots &\qquad \vdots &\quad \ddots \\ aa_0+a_1&\qquad0&\quad\cdots \end{array} \right.\]} \[\indent\hspace{3.5cm}\left.\begin{array}{clll} a^{n-2}[-\overline{a}_0(aa_0+a_1)-a\overline{a}_1(aa_0+a_1)]+\overline{a}_1(\overline{a}_1a_0+a) &\qquad-\overline{a}_0a^{n-1}(aa_0+a_1)+\overline{a}_0(a_0\overline{a}_1+a) \\ a^{n-3}[-\overline{a}_0(aa_0+a_1)-a\overline{a}_1(aa_0+a_1)] &\qquad -\overline{a}_0a^{n-2}(aa_0+a_1)\\ a^{n-4}[-\overline{a}_0(aa_0+a_1)-a\overline{a}_1(aa_0+a_1)] &\qquad-\overline{a}_0a^{n-3}(aa_0+a_1)\\ \vdots &\qquad \vdots\\ -a-\overline{a}_1a_0 &\qquad 1-a_0\overline{a}_0 \end{array} 
\right)\]\\ \normalsize Let $E_j$ be the $j^{th}$ column of $\overline{B}_{n+1}^{\,\,T}-A_{n+1}B_{n+1}^{-1}\overline{A}_{n+1}^{\,\,T},$ where $j=1,2,\dots,n+1.$ Note that for $E_m (m=2,3,\cdots,n-1),$ the column entries are identical to those of $\overline{B}_{k}^{\,\,T}-A_{k}B_{k}^{-1}\overline{A}_{k}^{\,\,T}$ in {Subcase 1}. However, the entries for $E_1$, $E_n$, and $E_{n+1}$ are different. We split $E_1$, $E_n$ and $E_{n+1}$ in the following way:\\ $E_1=F_1+G_1+H_1$, $E_n=F_n+G_n$, $E_{n+1}=F_{n+1}+G_{n+1},$ where\\ $F_1^{T}=[1-a_0\overline{a}_0-\overline{a}_1(aa_0+a_1),-a-\overline{a}_1a_0,0,\cdots,0]$\\ $G_1^{T}=[a^n(aa_0+a_1),a^{n-1}(aa_0+a_1),a^{n-2}(aa_0+a_1),\cdots,(aa_0+a_1)]$\\ $H_1^{T}=[-a(a_0\overline{a}_1+a),0,0,\cdots,0]$\\ $F_n^{T}=[-a^{n-2}\overline{a}_0(aa_0+a_1)-a^{n-1}\overline{a}_1(aa_0+a_1),-a^{n-3}\overline{a}_0(aa_0+a_1)-a^{n-2}\overline{a}_1(aa_0+a_1),\cdots-(a+a_0\overline{a}_1)]$\\ $G_n^{T}=[-\overline{a}_1(\overline{a}_1a_0+a),0,0,\cdots,0]$\\ $F_{n+1}^{T}=[-\overline{a}_0a^{n-1}(aa_0+a_1),-\overline{a}_0a^{n-2}(aa_0+a_1),\cdots,1-\overline{a}_0a_0]$\\ $G_{n+1}^{T}=[\overline{a}_0(a_0\overline{a}_1+a),0,0,\cdots,0].$\\ \indent\hspace{-1cm} Now $det[\overline{B}_{n+1}^{\,\,T}-A_{n+1}B_{n+1}^{-1}\overline{A}_{n+1}^{\,\,T}]$=$det[E_1E_2\cdots E_nE_{n+1}]$\\ \indent\hspace{1cm}$= det[F_1E_2\cdots F_nF_{n+1}]$ + \,$det[G_1E_2\cdots F_nF_{n+1}]$ + \,$det[H_1E_2\cdots F_nF_{n+1}]$\\ \indent\hspace{1cm}+ \,$det[F_1E_2\cdots F_nG_{n+1}]$ + $det[G_1E_2\cdots F_nG_{n+1}]$ + \,$det[H_1E_2\cdots F_nG_{n+1}]$\\ \indent\hspace{1cm}+ \,$det[F_1E_2\cdots G_nF_{n+1}]$ + \,$det[G_1E_2\cdots G_nF_{n+1}]$ + $det[H_1E_2\cdots G_nF_{n+1}]$\\ \indent\hspace{1cm}+ \,$det[F_1E_2\cdots G_nG_{n+1}]$ + \,$det[G_1E_2\cdots G_nG_{n+1}]$ + \,$det[H_1E_2\cdots G_nG_{n+1}].$\\ We will compute each of these determinants. From Subcase 1, \begin{equation} \indent\hspace{-1cm}det[F_1E_2\cdots F_nF_{n+1}]=\left(\frac{1}{4}\right)^{n+1} n^n(3n+2)(2-n+2a+an)^{n+1}(1-a)^{n+1}. 
\end{equation} Also,\quad $det[F_1E_2\cdots F_nG_{n+1}]=(-1)^n[\overline{a}_0(a_0\overline{a}_1+a)][(-1)^n((a+\overline{a}_1a_0)^n)]$ \begin{equation} \indent\hspace{1cm}=\left(\frac{1}{4}\right)^{n+1} n^n(2-n+2a+an)^{n+1}(1-a)^{n+1}\left[\frac{1}{2}e^{i\theta}n(n-2a-an)\right], \end{equation} and $det[H_1E_2\cdots F_nF_{n+1}]=-a(a_0\overline{a}_1+a) det \left[\overline{B}_{n}^{\,\,T}-A_{n}\,B_{n}^{-1}\,\overline{A}_{n}^{\,\,T}\right]$\\ \begin{equation} \indent\hspace{1cm}=-\left(\frac{1}{4}\right)^{n+1}a n^n(2-n+2a+an)^{n+1}(1-a)^{n+1}(3n). \end{equation} Next\quad $det[G_1E_2\cdots F_nF_{n+1}]$\scriptsize{ \[\indent\hspace{-2cm}=\left(\frac{1}{4}\right)^{n+1}n^{n-1}(2-n+2a+an)^{n+1}(1-a)^{n+1}2e^{-i\theta}det\begin{pmatrix} a^n&-(1-a)^2 & -a(1-a)^2 &\cdots &- a^{n-1}(n-2a-an) \\ a^{n-1}&(2-a)&-(1-a)^2&\cdots &-a^{n-2}(n-2a-an)\\ a^{n-2}& -1&(2-a)&\cdots & a^{n-3}(n-2a-an)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 1&0&0&\cdots &(n+2) \end{pmatrix}\]} \scriptsize{ \[\indent\hspace{-2cm}=\left(\frac{1}{4}\right)^{n+1}n^{n-1}(2-n+2a+an)^{n+1}(1-a)^{n+1}2e^{-i\theta}det\begin{pmatrix} a^n&-(1-a)^2 & -a(1-a)^2 &\cdots &- a^{n-1}(n-2a-an) \\ 0&\frac{1}{a}&0&\cdots &0\\ 0& \frac{1-2a}{a^2}&\frac{1}{a}&\cdots &0\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&\frac{(1-a)^2}{a^n}&\frac{(1-a)^2}{a^{n-1}}&\cdots &\frac{n}{a} \end{pmatrix}\]} \normalsize \begin{equation} \indent\hspace{-3.5cm}=\left(\frac{1}{4}\right)^{n+1}n^n(2-n+2a+an)^{n+1}(1-a)^{n+1}2e^{-i\theta}. 
\end{equation} Also, $det[G_1E_2\cdots F_nG_{n+1}]$ \scriptsize{\[\indent\hspace{-2cm}=(-1)^n\left(\frac{1}{4}\right)^{n+1}n^{n}(2-n+2a+an)^{n+1}(1-a)^{n+1}(n-2a-an)det\begin{pmatrix} a^{n-1}&(2-a) & -(1-a)^2 &\cdots &- a^{n-3}(1-a)^2 \\ a^{n-2}&-1&(2-a)&\cdots &-a^{n-4}(1-a)^2\\ a^{n-3}& 0&-1&\cdots & -a^{n-5}(1-a)^2\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 1&0&0&\cdots &-1 \end{pmatrix}.\]} \indent\hspace{-2.6cm}\normalsize Now \scriptsize{\[\indent\hspace{-10.9cm}det\begin{pmatrix} a^{n-1}&(2-a) & -(1-a)^2 &\cdots &- a^{n-3}(1-a)^2 \\ a^{n-2}&-1&(2-a)&\cdots &-a^{n-4}(1-a)^2\\ a^{n-3}& 0&-1&\cdots & -a^{n-5}(1-a)^2\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 1&0&0&\cdots &-1 \end{pmatrix}=\] \[\indent\hspace{-1.9cm}a^{n-1} det\begin{pmatrix} -1&(2-a) & -(1-a)^2 &\cdots &- a^{n-4}(1-a)^2 \\ 0&-1&(2-a)&\cdots &-a^{n-5}(1-a)^2\\ 0& 0&-1&\cdots & -a^{n-6}(1-a)^2\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &-1 \end{pmatrix}-a^{n-2} det\begin{pmatrix} (2-a) & -(1-a)^2&-a(1-a)^2 &\cdots &- a^{n-3}(1-a)^2 \\ 0&-1&(2-a)&\cdots &-a^{n-5}(1-a)^2\\ 0& 0&-1&\cdots & -a^{n-6}(1-a)^2\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &-1 \end{pmatrix}+\]\[\indent\hspace{-1.6cm}a^{n-3} det\begin{pmatrix} (2-a) & -(1-a)^2&-a(1-a)^2 &\cdots &- a^{n-3}(1-a)^2 \\ -1&(2-a)&-(1-a)^2&\cdots &-a^{n-4}(1-a)^2\\ 0&-1&(2-a)&\cdots & -a^{n-6}(1-a)^2\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &-1 \end{pmatrix}-a^{n-4} det\begin{pmatrix} (2-a) & -(1-a)^2&-a(1-a)^2 &\cdots &- a^{n-3}(1-a)^2 \\ -1&(2-a)&-(1-a)^2&\cdots &-a^{n-4}(1-a)^2\\ 0&-1&(2-a)&\cdots & -a^{n-5}(1-a)^2\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &-1 \end{pmatrix}+\cdots\]} \normalsize{ \indent= $a^{n-1}(-1)^{n-1}+a^{n-2}(-1)^{n-1}(2-a)+a^{n-3}(-1)^{n-1}(3-2a)+a^{n-4}(-1)^{n-1}(4-3a)\cdots+\indent\hspace{.5cm}a(-1)^{n-1}[(n-1)-(n-2)a]+(-1)^{n-1}[n-(n-1)a]$\\ $\indent=(-1)^{n-1}n.$ Therefore, \begin{equation} det[G_1E_2\cdots 
F_nG_{n+1}]=-\left(\frac{1}{4}\right)^{n+1}n^n(2-n+2a+an)^{n+1}(1-a)^{n+1}(n-2a-an)n. \end{equation} In addition\\ $\indent\hspace{-1cm}det[F_1E_2\cdots G_nF_{n+1}]$\scriptsize{\[=det\begin{pmatrix} 1-\overline{a_{0}}a_{0}-\overline{a_1}(aa_0+a_1)&-\overline{a_0}(aa_0+a_1)-\overline{a_1}a(aa_0+a_1) &\cdots&\overline{a_1}(\overline{a_1}a_0+a) &-\overline{a_0}a^{n-1}(aa_0+a_1) \\ -a-\overline{a_1}a_0&1-\overline{a_0}a_0-\overline{a_1}(aa_0+a_1)&\cdots&0 &-\overline{a_0}a^{n-2}(aa_{0}+a_1) \\ 0&-a-\overline{a_1}a_0&\cdots &0 &-\overline{a_0}a^{n-3}(aa_{0}+a_1)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &1-\overline{a_0}{a_0} \end{pmatrix}\]} \normalsize \begin{equation} =\left(\frac{1}{4}\right)^{n+1}n^n(2-n+2a+an)^{n+1}(1-a)^{n+1}\left[\frac{e^{i\theta}}{2}(n+2)(2-n+an)\right]. \end{equation} Also,\,\,$det[G_1E_2\cdots G_nF_{n+1}]$\scriptsize{\[\indent\hspace{1cm}=det\begin{pmatrix} a^n(aa_0+a_1)&-\overline{a_0}(aa_0+a_1)-\overline{a_1}a(aa_0+a_1) &\cdots&\overline{a_1}(\overline{a_1}a_0+a) &-\overline{a_0}a^{n-1}(aa_0+a_1) \\ a^{n-1}(aa_0+a_1)&1-\overline{a_0}a_0-\overline{a_1}(aa_0+a_1)&\cdots&0 &-\overline{a_0}a^{n-2}(aa_{0}+a_1) \\ a^{n-2}(aa_0+a_1)&-a-\overline{a_1}a_0&\cdots &0 &-\overline{a_0}a^{n-3}(aa_{0}+a_1)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ (aa_0+a_1)&0&0&\cdots &1-\overline{a_0}{a_0} \end{pmatrix}\]} \scriptsize{ \[\indent\hspace{-1cm}=\left(\frac{1}{4}\right)^{n+1}n^{n-1}(2-n+2a+an)^{n+1}(1-a)^{n+1}2e^{-i\theta}(-1)^{n+1}\overline{a_1}det\begin{pmatrix} a^{n-1}&(2-a) &-(1-a)^2 &\cdots &- a^{n-2}(n-2a-an) \\ a^{n-2}&-1&(2-a)&\cdots &-a^{n-3}(n-2a-an)\\ a^{n-3}& 0&-1&\cdots& -a^{n-4}(n-2a-an)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 1&0&0&\cdots &n+2 \end{pmatrix}.\]} \indent\hspace{-2.6cm}\normalsize Now \scriptsize{\[\indent\hspace{-10.5cm}det\begin{pmatrix} a^{n-1}&(2-a) & -(1-a)^2 &\cdots &- a^{n-2}(n-2a-an) \\ a^{n-2}&-1&(2-a)&\cdots &-a^{n-3}(n-2a-an)\\ a^{n-3}& 0&-1&\cdots & -a^{n-4}(n-2a-an)\\ \vdots & \vdots &\vdots& 
\ddots & \vdots\\ 1&0&0&\cdots &n+2 \end{pmatrix}=\] \[\indent\hspace{-1.9cm}a^{n-1} det\begin{pmatrix} -1&(2-a) & -(1-a)^2 &\cdots &- a^{n-3}(n-2a-an) \\ 0&-1&(2-a)&\cdots &-a^{n-4}(n-2a-an)\\ 0& 0&-1&\cdots & -a^{n-5}(n-2a-an)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &n+2 \end{pmatrix}-a^{n-2} det\begin{pmatrix} (2-a) & -(1-a)^2&-a(1-a)^2 &\cdots &- a^{n-2}(n-2a-an) \\ 0&-1&(2-a)&\cdots &-a^{n-4}(n-2a-an)\\ 0& 0&-1&\cdots & -a^{n-5}(n-2a-an)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &n+2 \end{pmatrix}+\]\tiny\[\indent\hspace{-1.3cm}a^{n-3} det\begin{pmatrix} (2-a) & -(1-a)^2&-a(1-a)^2 &\cdots &- a^{n-2}(n-2a-an) \\ -1&(2-a)&-(1-a)^2&\cdots &-a^{n-3}(n-2a-an)\\ 0&0&-1&\cdots & -a^{n-5}(n-2a-an)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &n+2 \end{pmatrix}+\cdots(-1)^{n+1}det\begin{pmatrix} (2-a) & -(1-a)^2&-a(1-a)^2 &\cdots &- a^{n-2}(n-2a-an) \\ -1&(2-a)&-(1-a)^2&\cdots &-a^{n-3}(n-2a-an)\\ 0&-1&(2-a)&\cdots & -a^{n-4}(n-2a-an)\\ \vdots & \vdots &\vdots& \ddots & \vdots\\ 0&0&0&\cdots &-(n-2a-an) \end{pmatrix}\]}}} \normalsize =$a^{n-1}(-1)^{n-2}(n+2)+a^{n-2}(-1)^{n-2}(n+2)(2-a)+a^{n-3}(-1)^{n-2}(n+2)(3-2a)+\cdots+(-1)^{n-2}(n-1)[n-(n+2)a]$\\ =$(-1)^{n-2}n(n-1).$ Hence, \begin{equation} det[G_1E_2\cdots G_nF_{n+1}]=-\left(\frac{1}{4}\right)^{n+1}n^n(2-n+2a+an)^{n+1}(1-a)^{n+1}(2+an-n)(n-1). 
\end{equation} \normalsize Finally,\\ $det [F_1E_2\cdots G_nG_{n+1}]=det [H_1E_2\cdots G_nF_{n+1}]=det [H_1E_2\cdots G_nG_{n+1}]=$ \begin{equation} det[G_1E_2\cdots G_nG_{n+1}]=det[H_1E_2\cdots F_nG_{n+1}]=0. \end{equation} Using equations $(7)$-$(14)$, we get\\ $M_{n+1}=det[\overline{B}_{n+1}^{\,\,T}-A_{n+1}\,B_{n+1}^{-1}\,\overline{A}_{n+1}^{\,\,T}]=\displaystyle \left(\frac{1}{4}\right)^{n+1}n^n(1-a)^{n+1}(2-n+2a+an)^{n+1}$ $[(3n+2)+\frac{e^{i\theta}}{2}n(n-2a-an)-3an+2e^{-i\theta}-n(n-2a-an) +\frac{e^{i\theta}}{2}(n+2)(2+an-n)-(n-1)(2+an-n)]$\\ $=(\frac{1}{4})^{n+1}n^n(1-a)^{n+1}(2-n+2a+an)^{n+1}(4+4\cos\theta)$.\\ Therefore $M_{n+1}>0,$ if $\theta\not=(2m+1)\pi,m\in\mathbb{N}.$\\ When $\theta =(2m+1)\pi, m\in\mathbb{N},$ then $p(z)=z^{n+1}-az^n+\frac{1}{2}(n-2-an)z-\frac{1}{2}(n-2a-an).$ Since $z=1$ is a zero of $p(z),$ we can write $$p(z)=(z-1)[z^n+(1-a)z^{n-1}+(1-a)z^{n-2}+\cdots+(1-a)z+\frac{1}{2}(n-2a-an)]$$ $$\indent\hspace{-9.8cm}=(z-1)q(z).$$ It suffices to show that the zeros of $q(z)$ lie inside $|z|=1$. Since $|\frac{1}{2}(n-2a-an)|<1$ whenever $a \in \left(\frac{n-2}{n+2},1\right)$, by applying Lemma A on $q(z)$ (by comparing it with $t(z)$), we get \\ $\indent\hspace{-0.8cm}\displaystyle q_1(z)=\frac{\overline{a}_nq(z)-a_0q^*(z)}{z}$\\ $\indent\hspace{1cm}=(1-a)[1-(\frac{1}{2}(n-2a-an))]\left\{(1+\frac{n}{2})z^{n-1}+z^{n-2}+\cdots+z+1\right\}.$\\ By Lemma A, the number of zeros of $q_1(z)$ inside the unit circle is one less than the number of zeros of $q(z)$ inside the unit circle. Let $p_1(z)=(1+\frac{n}{2})z^{n-1}+z^{n-2}+\cdots+z+1.$ Again $1<|1+\frac{n}{2}|,$ therefore\\ $\indent\hspace{-0.8cm}\displaystyle q_2(z)=\frac{\overline{a}_{n-1}p_1(z)-a_0p_1^*(z)}{z}$\\ $\indent\hspace{1cm}=\frac{n}{2}[(2+\frac{n}{2})z^{n-2}+z^{n-3}+\cdots+z+1].$\\ Again the number of zeros of $q_2(z)$ inside the unit circle is two less than the number of zeros of $q(z)$ inside the unit circle.
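The Lemma A (Schur-Cohn type) step used above is easy to check numerically. The following sketch is our own verification code, not part of the original argument (the helper name \texttt{schur\_step} is ours): it applies the transform $q_1(z)=\frac{\overline{a}_nq(z)-a_0q^*(z)}{z}$ to a coefficient list and compares the result with the closed form derived above, for the sample values $n=4$, $a=0.6$.

```python
import numpy as np

def schur_step(c):
    """One Lemma A transform: q1(z) = (conj(a_n) q(z) - a_0 q*(z)) / z.

    c holds the coefficients of q from the leading power down to the
    constant term; q*(z) = z^n conj(q(1/conj(z))) reverses and conjugates them.
    """
    c = np.asarray(c, dtype=complex)
    star = np.conj(c[::-1])                 # coefficients of q*
    num = np.conj(c[0]) * c - c[-1] * star  # conj(a_n) q - a_0 q*
    assert abs(num[-1]) < 1e-12             # constant term cancels identically,
    return num[:-1]                         # so dividing by z just drops it

# q(z) = z^n + (1-a)(z^{n-1} + ... + z) + (n - 2a - a n)/2, as in the proof
n, a = 4, 0.6
c0 = 0.5 * (n - 2*a - a*n)
q = np.array([1.0] + [1 - a] * (n - 1) + [c0])

q1 = schur_step(q)
# closed form from the text: (1-a)(1-c0) [ (1+n/2), 1, ..., 1 ]
expected = (1 - a) * (1 - c0) * np.array([1 + n/2] + [1.0] * (n - 1))
print(np.allclose(q1, expected))  # True
```

For $n=4$, $a=0.6$ one gets $c_0=0.2$ and $q_1$ proportional to $(3,1,1,1)$, exactly as the displayed formula predicts.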
Continuing in this manner we derive that \\ $\indent\hspace{-1cm}\displaystyle q_k(z)=\left[(k-2)+\frac{n}{2}\right]\left\{(k+\frac{n}{2})z^{n-k}+z^{n-(k+1)}+\cdots+z^{n-(n-1)}+1\right\},\,k=2,3,\cdots,n-1.$\\ In particular for $k=n-1$\\ $\indent\displaystyle q_{n-1}(z)=\left((n-3)+\frac{n}{2}\right)\{((n-1)+\frac{n}{2})z+1\},$\\ which has $(n-1)$ fewer zeros inside the unit circle than $q(z)$. But the zero of $q_{n-1}$ is $\displaystyle-\frac{2}{3n-2}$, which lies inside the unit circle $|z|=1$ for $n\geq 2$. Consequently all zeros of $q(z)$ lie inside $|z|=1$ and the proof of our theorem is now complete.\end{proof} \begin{remark} If we set $a=0$, then $n$ is restricted to $n=1$ or $n=2$ and we get the same result as Theorem B stated in Section 1.\end{remark} Next we give an example showing that for a given value of $n$, if we go beyond the range of the real number $a$ specified in Theorem 2.2, then the convolution no longer remains locally univalent and sense-preserving. If we take $n=1,$ then the range of the real constant $a$ comes out to be $[-\frac{1}{3},1).$ \begin{example} If we take $n=1$, $a=-0.34 <-\displaystyle\frac{1}{3}$ and $\theta=\pi,$ in (5) we get $$\indent\hspace{-1.7cm}\displaystyle\widetilde{\omega}_1(z)=-z\left[\frac{z^2+0.01z-1.01}{1+0.01z-1.01z^2}\right]$$ \indent\hspace{5cm} $=-zR(z).$\\ We prove that there exists some point $z_0$ in $E$ such that $\displaystyle\left|\widetilde{\omega}_1(z_0)\right|>1$. Assume that this is not true. It is easy to see that for each $\alpha$, $|R(e^{i\alpha})|=1$ and $R(z)\overline{(R(\frac{1}{\overline z}))}=1$. So the function $R(z)$ preserves the symmetry about the unit circle and maps the closed disk $|z|\leq1$ onto itself. Therefore, $R(z)$ can be written as a Blaschke product of order two.
However, the product of the moduli of the zeros of $R$ is $1.01>1$, whereas a Blaschke product of order two has all its zeros in the closed unit disk, which is a contradiction.\end{example} The image of $E$ under $f_a\ast f_1$ for $a=-0.34$ is shown (using the applet \emph{Complex Tool} (see [\ref{do and ro}])) in Figure 1. Figure 2 is a zoomed version of Figure 1, showing that the images of the two outermost concentric circles in $E$ intersect, and so $f_a\ast f_1$ is not univalent.\\ \begin{figure} \centering \includegraphics[width=0.60\textwidth]{raj4.eps}\\ \vspace{-0.2cm} \centering Figure 1 \end{figure} \begin{figure} \centering \includegraphics[width=0.60\textwidth]{raj3.eps}\\ \vspace{-0.2cm} \centering Figure 2 \end{figure} In the next result we find the condition under which the convolution $f_a\ast f_b$ belongs to $S_H$ and is CHD. \begin{theorem} If $f_b= h+\overline{g}\,\in K_H$ is given by $\displaystyle h+g=\frac{z}{1-z}$ with dilatation $\displaystyle \omega(z)=\frac{b-z}{1-bz}\,\, (|b|<1, b\in \mathbb{R})$, then $f_a\ast f_b \, \in S_H$ and is CHD for $\displaystyle b\geq-\frac{1+3a}{3+a}$, where $f_a$ is as in Theorem 2.2.\end{theorem} \begin{proof} If $\widetilde{\omega}_1(z)$ is the dilatation of $f_a\ast f_b$, then in view of Theorem A it is sufficient to prove that $|\widetilde{\omega}_1(z)|<1$ for all $z\in E$.
Substituting $\displaystyle \omega(z)=\frac{b-z}{1-bz}$ in (4), we get\\ $\indent\hspace{-1cm}\widetilde{\omega}_1(z)=\displaystyle\left[\frac{2\left(\frac{b-z}{1-bz}\right)(a-z)\left(1+\frac{b-z}{1-bz}\right)+ \frac{b^2z-z}{(1-bz)^2} (a-1)(1-z)}{2(1-az)\left(1+\frac{b-z}{1-bz}\right)+ \frac{b^2z-z}{(1-bz)^2} (a-1)(1-z)}\right]$ $$ \begin{array}{clll} \hspace{-6.5cm}=\displaystyle\frac{z^2+\frac{1}{2}(ab-3a-3b+1)z+ab}{abz^2+\frac{1}{2}(ab-3a-3b+1)z+1}=\frac{m(z)}{m^*(z)} \end{array} $$ where $\displaystyle m(z)=z^2+\frac{1}{2}(ab-3a-3b+1)z+ab$\\ $\indent\hspace{1.4cm}=a_2z^2+a_1z+a_0\quad{\rm and}$\\ $\indent\hspace{.4cm}\displaystyle m^*(z)=z^2\overline{m\left(\frac{1}{\overline z}\right)}.$\\ As before, if $z_0$ is a zero of $m$ then $\displaystyle\frac{1}{\overline{z_0}}$ is a zero of $m^*$, and we can write $$\displaystyle\widetilde{\omega}_1(z)=\frac{(z+A)(z+B)}{(1+\overline {A} z)(1+\overline {B} z)}.$$ It suffices to show that either both zeros $-A$, $-B$ lie inside the unit circle $|z|=1$ or one of them lies inside $|z|=1$ and the other lies on it. As $|a_0|=|ab|<1=|a_2|$, using Lemma A on $m$, we have $\indent\hspace{-0.8cm}\displaystyle m_1(z)=\frac{\overline{a}_2 m(z)-a_0m^*(z)}{z}$\\ $\indent\hspace{.9cm}=(1-ab)[(1+ab)z+\frac{1}{2}(-3a-3b+1+ab)]$.\\ But $z_0=\displaystyle\frac{3}{2}\left(\frac{a+b}{1+ab}\right)-\frac{1}{2}$ is the zero of $m_1$, which lies in or on the unit circle $|z|=1$ if ${ \displaystyle b\geq-\frac{(1+3a)}{(3+a)}.}$ So either both zeros of $m$ lie inside $|z|=1$, or one zero lies inside and the other lies on $|z|=1$. Hence $\displaystyle\left|\widetilde{\omega}_1(z)\right|<1.$ \end{proof} \begin{corollary} The convolution $f_a\ast f_a\in S_H$ and is CHD for $a\in[-3+2\sqrt{2},1)$.\end{corollary} \noindent{\emph{Acknowledgement: The first author is thankful to the Council of Scientific and Industrial Research, New Delhi, for financial support vide (grant no. 09/797/0006/2010 EMR-1).}}
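The zero-location conclusions of Theorem 2.2 and of the result above lend themselves to a quick numerical spot-check. The sketch below is our own verification, not part of the paper; the sample values of $n$, $a$ and $b$ are our own choices inside the stated parameter ranges.

```python
import numpy as np

# Theorem 2.2 (theta an odd multiple of pi): p(z) = (z-1) q(z) with
# q(z) = z^n + (1-a)(z^{n-1}+...+z) + (n-2a-an)/2; for a in ((n-2)/(n+2), 1)
# all zeros of q should lie strictly inside |z| = 1.
for n in (2, 3, 5):
    for a in (0.5, 0.9):
        if not (n - 2) / (n + 2) < a < 1:
            continue  # outside the range covered by the theorem
        q = [1.0] + [1 - a] * (n - 1) + [0.5 * (n - 2*a - a*n)]
        assert max(abs(np.roots(q))) < 1.0

# Theorem 2.4: the dilatation of f_a * f_b is m(z)/m*(z) with
# m(z) = z^2 + (ab - 3a - 3b + 1)/2 z + ab; for b >= -(1+3a)/(3+a)
# its zeros should lie in the closed unit disk.
for a, b in [(0.5, 0.0), (0.5, -0.5), (0.0, 0.0)]:
    assert b >= -(1 + 3*a) / (3 + a)
    m = [1.0, 0.5 * (a*b - 3*a - 3*b + 1), a * b]
    assert max(abs(np.roots(m))) <= 1.0 + 1e-9

print("all zero-location checks passed")
```

For instance, at $n=2$, $a=0.9$ the roots of $q$ are approximately $0.846$ and $-0.946$, and at $a=0.5$, $b=-0.5$ the roots of $m$ are approximately $0.35$ and $-0.72$, all inside the unit disk as the theorems assert.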
\section{Introduction} The precision data collected to date have confirmed the Standard Model to be a good description of physics below the electroweak scale \cite{Schaile}. Despite its great success, there are many reasons to believe that some kind of new physics must exist. On the other hand, the non-abelian structure of the gauge boson self-couplings is still poorly tested and one of the most sensitive probes for new physics is provided by the trilinear gauge boson couplings (TGC) \cite{TGC}. Many studies have been devoted to the $WW\gamma$ and $WWZ$ couplings. At hadron colliders and $e^+e^-$ colliders, the present bounds (Tevatron \cite{Errede}) and prospects (LHC, LEP2 and NLC \cite{TGC,LEP2}) are mostly based on diboson production ($WW$, $W\gamma$ and $WZ$). In $ep$ collisions, HERA could provide further information by analyzing single $W$ production ($ep\to eWX$ \cite{ABZ}) and radiative charged current scattering ($ep\to\nu\gamma X$ \cite{hubert}). There is also some literature on $WW\gamma$ couplings in $W$-pair production at future very high energy photon colliders (bremsstrahlung photons in peripheral heavy ion collisions \cite{HIC} and Compton backscattered laser beams \cite{gg}). Only recently has attention been paid to the $Z\gamma Z$, $Z\gamma\g$ and $ZZZ$ couplings. There is a detailed analysis of $Z\gamma V$ couplings ($V=\gamma,Z$) for hadron colliders in \cite{BB}. CDF \cite{CDF} and D\O\ \cite{D0} have obtained bounds on the $Z\gamma Z$ and $Z\gamma\g$ anomalous couplings, while L3 has studied only the first ones \cite{L3}. Studies on the sensitivities to these vertices in future $e^+e^-$ colliders, LEP2 \cite{LEP2} and NLC \cite{Boudjema}, have been performed in recent years. Some proposals have been made to probe these neutral boson gauge couplings at future photon colliders in $e\gamma\to Ze$ \cite{eg}. In this work we study the prospects for measuring the TGC in the process $ep\to e\gamma X$.
In particular, we will concentrate on the $Z\gamma\g$ couplings, which can be more stringently bounded than the $Z\gamma Z$ ones for this process. In Section 2, we present the TGC. The next section deals with the different contributions to the process $ep\to e\gamma X$ and the cuts and methods we have employed in our analysis. Section 4 contains our results for the Standard Model total cross section and distributions and the estimates of the sensitivity of these quantities to the presence of anomalous couplings. Finally, in the last section we present our conclusions. \section{Phenomenological parametrization of the neutral TGC} A convenient way to study deviations from the standard model predictions consists of considering the most general lagrangian compatible with Lorentz invariance, the electromagnetic U(1) gauge symmetry, and other possible gauge symmetries. For the trilinear $Z\gamma V$ couplings ($V=\gamma,Z)$ the most general vertex function invariant under Lorentz and electromagnetic gauge transformations can be described in terms of four independent dimensionless form factors \cite{hagiwara}, denoted by $h^V_i$, i=1,2,3,4: \begin{eqnarray} \Gamma^{\a\b\mu}_{Z\gamma V} (q_1,q_2,p)=\frac{f(V)}{M^2_Z} \{ h^V_1 (q^\mu_2 g^{\a\b} - q^\a_2 g^{\mu\b}) +\frac{h^V_2}{M^2_Z} p^\a (p\cdot q_2g^{\mu\b}-q^\mu_2 p^\b) \nonumber \\ +h^V_3 \varepsilon^{\mu\a\b\r}q_{2_\r} +\frac{h^V_4}{M^2_Z}p^\a\varepsilon^{\mu\b\r\sigma}p_\r q_{2_\sigma} \}. \hspace{3cm} \label{vertex} \end{eqnarray} Terms proportional to $p^\mu$, $q^\a_1$ and $q^\b_2$ are omitted as long as the scalar components of all three vector bosons can be neglected (whenever they couple to almost massless fermions) or they are zero (on-shell condition for $Z$ or U(1) gauge boson character of the photon). The overall factor, $f(V)$, is $p^2-q^2_1$ for $Z\gamma Z$ or $p^2$ for $Z\gamma\g$ and is a result of Bose symmetry and electromagnetic gauge invariance. 
These latter constraints reduce the familiar seven form factors of the most general $WWV$ vertex to only these four for the $Z\gamma V$ vertex. There still remains a global factor that can be fixed, without loss of generality, to $g_{Z\gamma Z}=g_{Z\gamma\g}=e$. Combinations of $h^V_3 (h^V_1)$ and $h^V_4 (h^V_2)$ correspond to electric (magnetic) dipole and magnetic (electric) quadrupole transition moments in the static limit. All the terms are $C$-odd. The terms proportional to $h^V_1$ and $h^V_2$ are $CP$-odd while the other two are $CP$-even. All the form factors are zero at tree level in the Standard Model. At the one-loop level, only the $CP$-conserving $h^V_3$ and $h^V_4$ are nonzero \cite{barroso} but too small (${\cal O}(\a/\pi)$) to lead to any observable effect at any present or planned experiment. However, larger effects might appear in theories or models beyond the Standard Model, for instance when the gauge bosons are composite objects \cite{composite}. This is a purely phenomenological, model-independent parametrization. Tree-level unitarity restricts the $Z\gamma V$ couplings to the Standard Model values at asymptotically high energies \cite{unitarity}. This implies that the couplings $h^V_i$ have to be described by form factors $h^V_i(q^2_1,q^2_2,p^2)$ which vanish when $q^2_1$, $q^2_2$ or $p^2$ become large. At hadron colliders, large values of $p^2=\hat{s}$ come into play and the energy dependence has to be taken into account, including unknown damping factors \cite{BB}. A scale dependence appears as an additional parameter (the scale of new physics, $\L$). Alternatively, one could introduce a set of operators invariant under SU(2)$\times$U(1) involving the gauge bosons and/or additional would-be-Goldstone bosons and the physical Higgs. Depending on the new physics dynamics, operators with dimension $d$ could be generated at the scale $\L$, with a strength which is generally suppressed by factors like $(M_W/\L)^{d-4}$ or $(\sqrt{s}/\L)^{d-4}$ \cite{NPscale}.
It can be shown that $h^V_1$ and $h^V_3$ receive contributions from operators of dimension $\ge 6$ and $h^V_2$ and $h^V_4$ from operators of dimension $\ge 8$. Unlike hadron colliders, in $ep\to e\gamma X$ at HERA energies, we can ignore the dependence of the form factors on the scale. On the other hand, the anomalous couplings are tested in a different kinematical region, which makes their study in this process complementary to the ones performed at hadron and lepton colliders. \section{The process $ep\to e\gamma X$} The process under study is $ep\to e\gamma X$, which is described in the parton model by the radiative neutral current electron-quark and electron-antiquark scattering, \begin{equation} \label{process} e^- \ \stackrel{(-)}{q} \to e^- \ \stackrel{(-)}{q} \ \gamma . \end{equation} There are eight Feynman diagrams contributing to this process in the Standard Model and three additional ones if one includes anomalous vertices: one extra diagram for the $Z\gamma Z$ vertex and two for the $Z\gamma\g$ vertex (Fig. \ref{feyndiag}). 
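The relative suppression of $h^V_2$, $h^V_4$ (first generated by dimension $\ge 8$ operators) against $h^V_1$, $h^V_3$ (dimension $\ge 6$) is easy to quantify; the following rough numerical illustration is ours, using $M_W$ and the two new-physics scales quoted above.

```python
# Illustrative suppression factors (M_W / Lambda)^(d-4) for anomalous
# couplings generated by dimension-d operators at a new-physics scale Lambda.
M_W = 80.4  # GeV

for Lam in (500.0, 1000.0):  # GeV, the scales quoted in the text
    dim6 = (M_W / Lam) ** 2  # h1, h3: first generated at d = 6
    dim8 = (M_W / Lam) ** 4  # h2, h4: first generated at d = 8
    print(f"Lambda = {Lam:6.0f} GeV: d=6 factor {dim6:.1e}, d=8 factor {dim8:.1e}")
```

At $\L=1$~TeV the dimension-6 factor is of order $10^{-2}$ while the dimension-8 one is of order $10^{-5}$, which is why $h^V_2$ and $h^V_4$ are expected to be the more suppressed couplings.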
\bfi{htb} \begin{center} \bigphotons \bpi{35000}{21000} \put(4000,8000){(a)} \put(200,17000){\vector(1,0){1300}} \put(1500,17000){\vector(1,0){3900}} \put(5400,17000){\line(1,0){2600}} \drawline\photon[\S\REG](2800,17000)[5] \put(200,\pbacky){\vector(1,0){1300}} \put(1500,\pbacky){\vector(1,0){2600}} \put(4100,\pbacky){\vector(1,0){2600}} \put(6700,\pbacky){\line(1,0){1300}} \put(0,13000){$q$} \put(8200,13000){$q$} \put(3300,\pmidy){$\gamma,Z$} \drawline\photon[\SE\FLIPPED](4900,\pbacky)[4] \put(0,18000){$e$} \put(8200,18000){$e$} \put(8200,\pbacky){$\gamma$} \put(13000,8000){(b)} \put(9500,17000){\vector(1,0){1300}} \put(10800,17000){\vector(1,0){2600}} \put(13400,17000){\vector(1,0){2600}} \put(16000,17000){\line(1,0){1300}} \drawline\photon[\S\REG](12100,17000)[5] \put(9500,\pbacky){\vector(1,0){1300}} \put(10800,\pbacky){\vector(1,0){3900}} \put(14700,\pbacky){\line(1,0){2600}} \drawline\photon[\NE\FLIPPED](14200,17000)[4] \put(22000,8000){(c)} \put(18500,17000){\vector(1,0){3250}} \put(21750,17000){\vector(1,0){3250}} \put(25000,17000){\line(1,0){1300}} \drawline\photon[\S\REG](23700,17000)[5] \put(18500,\pbacky){\vector(1,0){1300}} \put(19800,\pbacky){\vector(1,0){2600}} \put(22400,\pbacky){\vector(1,0){2600}} \put(25000,\pbacky){\line(1,0){1300}} \drawline\photon[\SE\FLIPPED](21100,\pbacky)[4] \put(31000,8000){(d)} \put(27500,17000){\vector(1,0){1300}} \put(28800,17000){\vector(1,0){2600}} \put(31400,17000){\vector(1,0){2600}} \put(34000,17000){\line(1,0){1300}} \drawline\photon[\S\REG](32700,17000)[5] \put(27500,\pbacky){\vector(1,0){3250}} \put(30750,\pbacky){\vector(1,0){3250}} \put(33900,\pbacky){\line(1,0){1300}} \drawline\photon[\NE\FLIPPED](30100,17000)[4] \put(17800,0){(e)} \put(17100,5500){$\gamma,Z$} \put(17100,3000){$\gamma,Z$} \put(14000,7000){\vector(1,0){1300}} \put(15300,7000){\vector(1,0){3900}} \put(19200,7000){\line(1,0){2600}} \drawline\photon[\S\REG](16600,7000)[5] \put(16750,\pmidy){\circle*{500}} \put(14000,\pbacky){\vector(1,0){1300}} 
\put(15300,\pbacky){\vector(1,0){3900}} \put(19200,\pbacky){\line(1,0){2600}} \drawline\photon[\E\REG](16750,\pmidy)[5] \put(22300,\pbacky){$\gamma$} \end{picture} \end{center} \caption{\it Feynman diagrams for the process $e^- q \to e^- q \gamma$. \label{feyndiag}} \end{figure} Diagrams with $\gamma$ exchanged in the t-channel are dominant. Nevertheless, we consider the whole set of diagrams in the calculation. On the other side, u-channel fermion exchange poles appear, in the limit of massless quarks and electrons (diagrams (c) and (d)). Since the anomalous diagrams (e) do not present such infrared or collinear singularities, it seems appropriate to avoid almost on-shell photons exchanged and fermion poles by cutting the transverse momenta of the final fermions (electron and jet) to enhance the signal from anomalous vertices. Due to the suppression factor coming from $Z$ propagator, the anomalous diagrams are more sensitive to $Z\gamma\g$ than to $Z\gamma Z$ vertices. In the following we will focus our attention on the former. The basic variables of the parton level process are five. A suitable choice is: $E_\gamma$ (energy of the final photon), $\cos\th_\gamma$, $\cos\th_{q'}$ (cosines of the polar angles of the photon and the scattered quark defined with respect to the proton direction), $\phi$ (the angle between the transverse momenta of the photon and the scattered quark in a plane perpendicular to the beam), and a trivial azimuthal angle that is integrated out (unpolarized beams). All the variables are referred to the laboratory frame. One needs an extra variable, the Bjorken-x, to connect the partonic process with the $ep$ process. The phase space integration over these six variables is carried out by {\tt VEGAS} \cite{VEGAS} and has been cross-checked with the {\tt RAMBO} subroutine \cite{RAMBO}. We adopt two kinds of event cuts to constrain conveniently the phase space: \begin{itemize} \item {\em Acceptance and isolation} cuts. 
The former exclude phase space regions which are not accessible to the detector, because of angular or efficiency limitations:\footnote{The threshold for the transverse momentum of the scattered quark ensures that its kinematics can be described in terms of a jet.} \begin{eqnarray} \label{cut1} 8^o < \theta_e,\ \theta_\gamma,\ \theta_{\rm jet} < 172^o; \nonumber\\ E_e, \ E_\gamma, \ p^{\rm q'}_{\rm T} > 10 \ {\rm GeV}. \end{eqnarray} The latter keep the final photon well separated from both the final electron and the jet: \begin{eqnarray} \label{cut2} \cos \langle \gamma,e \rangle < 0.9; \nonumber\\ R > 1.5, \end{eqnarray} where $R\equiv\sqrt{\Delta\eta^2+\phi^2}$ is the separation between the photon and the jet in the rapidity-azimuthal plane, and $\langle \gamma,e \rangle$ is the angle between the photon and the scattered electron. \item Cuts for {\em intrinsic background suppression}. They consist of strengthening some of the previous cuts or adding new ones to enhance the signal of the anomalous diagrams against the Standard Model background. \end{itemize} We have developed a Monte Carlo program for the simulation of the process $ep\to e\gamma X$ where $X$ is the remnant of the proton plus one jet formed by the scattered quark of the subprocess (\ref{process}). It includes the Standard Model helicity amplitudes computed using the {\tt HELAS} subroutines \cite{HELAS}. We added new code to account for the anomalous diagrams. The squares of these anomalous amplitudes have been cross-checked with their analytical expressions computed using {\tt FORM} \cite{FORM}. For the parton distribution functions, we employ both the set 1 of Duke-Owens' parametrizations \cite{DO} and the modified MRS(A) parametrizations \cite{MRS}, with the scale chosen to be the hadronic momentum transfer.
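As a concrete illustration, the acceptance and isolation cuts above can be written as a simple event filter. The sketch below is ours; the function and variable names are our own, not those of the actual Monte Carlo program.

```python
import math

def passes_acceptance_isolation(theta_e, theta_gamma, theta_jet,  # degrees
                                E_e, E_gamma, pT_jet,             # GeV
                                cos_gamma_e, d_eta, d_phi):
    """Acceptance and isolation cuts for e p -> e gamma X (sketch)."""
    # acceptance: angular coverage of the detector and energy/pT thresholds
    if not all(8.0 < th < 172.0 for th in (theta_e, theta_gamma, theta_jet)):
        return False
    if min(E_e, E_gamma, pT_jet) <= 10.0:
        return False
    # isolation: photon away from the electron and from the jet
    if cos_gamma_e >= 0.9:        # photon-electron opening angle too small
        return False
    R = math.hypot(d_eta, d_phi)  # photon-jet separation in (eta, phi)
    return R > 1.5

# a hard, well-separated event passes; a photon collinear with the electron fails
print(passes_acceptance_isolation(30, 60, 20, 25, 35, 22, 0.2, 1.0, 2.0))   # True
print(passes_acceptance_isolation(30, 60, 20, 25, 35, 22, 0.95, 1.0, 2.0))  # False
```

Such a filter runs once per generated event, before the background-suppression cuts are considered.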
As inputs, we use the beam energies $E_e=30$ GeV and $E_p=820$ GeV, the $Z$ mass $M_Z=91.187$ GeV, the weak angle $\sin^2\theta_W=0.2315$ \cite{PDB} and the fine structure constant $\a=1/128$. A more correct choice would be the running fine structure constant with $Q^2$ as the argument. However, as we are interested in large $Q^2$ events, the value $\a(M^2_Z)$ is accurate enough for our purposes. We consider only the first and second generations of quarks, assumed to be massless. We start by applying the cuts (\ref{cut1}) and (\ref{cut2}) and examining the contribution to a set of observables of the Standard Model and the anomalous diagrams, separately. Next, we select one observable such that a cut on it eliminates mostly Standard Model events. The procedure is repeated with this new cut built in. After several runs, adding new cuts, the ratio of the standard to the anomalous cross sections is reduced and hence the sensitivity to anomalous couplings is improved. \section{Results} \subsection{Observables} The total cross section of $ep\to e\gamma X$ can be written as \begin{equation} \sigma=\sigma_{{\rm SM}} + \sum_{i} \t_i \cdot h^\gamma_i + \sum_{i}\sigma_i\cdot (h^\gamma_i)^2 + \sigma_{12} \cdot h^\gamma_1 h^\gamma_2 + \sigma_{34} \cdot h^\gamma_3 h^\gamma_4. \end{equation} \bfi{htb} \setlength{\unitlength}{1cm} \bpi{8}{7} \epsfxsize=11cm \put(-1,-4){\epsfbox{eng_acciso.ps}} \end{picture} \bpi{8}{7} \epsfxsize=11cm \put(0.,-4){\epsfbox{ptg_acciso.ps}} \end{picture} \bpi{8}{6} \epsfxsize=11cm \put(-1,-5){\epsfbox{angge_acciso.ps}} \end{picture} \bpi{8}{6} \epsfxsize=11cm \put(0.,-5){\epsfbox{anggj_acciso.ps}} \end{picture} \bpi{8}{6} \epsfxsize=11cm \put(-1,-5){\epsfbox{angej_acciso.ps}} \end{picture} \bpi{8}{7} \epsfxsize=11cm \put(0.,-5){\epsfbox{q2e_acciso.ps}} \end{picture} \caption{\it Differential cross sections (pb) for the process $ep\to e\gamma X$ at HERA, with only acceptance and isolation cuts.
The solid line is the Standard Model contribution and the dash (dot-dash) line corresponds to 10000 times the $\sigma_1$ ($\sigma_2$) anomalous contributions.\label{A}} \end{figure} The forthcoming results are obtained using the MRS'95 pa\-ra\-me\-tri\-za\-tion of the parton densities\footnote{The values change $\sim 10$\% when using the (old) Duke-Owens' structure functions.} \cite{MRS}. The linear terms of the $P$-violating couplings $h^\gamma_3$ and $h^\gamma_4$ are negligible, as they mostly arise from the interference of standard model diagrams with photon exchange ($P$-even) and anomalous $P$-odd diagrams ($\t_3\simeq \t_4\simeq 0$). Moreover, anomalous diagrams with different $P$ do not interfere either. On the other hand, the quadratic terms proportional to $(h^\gamma_1)^2$ and $(h^\gamma_3)^2$ have identical expressions, and the same for $h^\gamma_2$ and $h^\gamma_4$ ($\sigma_1=\sigma_3$, $\sigma_2=\sigma_4$). Only the linear terms make their bounds different. The interference terms $\sigma_{12}$ and $\sigma_{34}$ are also identical. \bfi{htb} \setlength{\unitlength}{1cm} \bpi{8}{7} \epsfxsize=11cm \put(-1,-4){\epsfbox{eng_bkgsup.ps}} \end{picture} \bpi{8}{7} \epsfxsize=11cm \put(0.,-4){\epsfbox{ptg_bkgsup.ps}} \end{picture} \bpi{8}{6} \epsfxsize=11cm \put(-1,-5){\epsfbox{angge_bkgsup.ps}} \end{picture} \bpi{8}{6} \epsfxsize=11cm \put(0.,-5){\epsfbox{anggj_bkgsup.ps}} \end{picture} \bpi{8}{6} \epsfxsize=11cm \put(-1,-5){\epsfbox{angej_bkgsup.ps}} \end{picture} \bpi{8}{7} \epsfxsize=11cm \put(0.,-5){\epsfbox{q2e_bkgsup.ps}} \end{picture} \caption{\it Differential cross sections (pb) for the process $ep\to e\gamma X$ at HERA, after intrinsic background suppression.
The solid line is the Standard Model contribution and the dash (dot-dash) line corresponds to 500 times the $\sigma_1$ ($\sigma_2$) anomalous contributions.\label{B}} \end{figure} We have analyzed the distributions of more than twenty observables in the laboratory frame, including the energies, transverse momenta and angular distributions of the jet, the photon and the final electron, as well as their spatial, polar and azimuthal separations. The Bjorken-$x$, the leptonic and hadronic momentum transfers and other fractional energies are also considered. The process of intrinsic background suppression is illustrated by comparing Figures \ref{A} and \ref{B}. For simplicity, only the most interesting variables are shown: the energy $E(\gamma)$ and transverse momentum $p_T(\gamma)$ of the photon; the angles between the photon and the scattered electron $\langle \gamma,e \rangle$, the photon and the jet $\langle \gamma,j \rangle$, and the scattered electron and the jet $\langle e,j \rangle$; and the leptonic momentum transfer $Q^2(e)$. In Fig.~\ref{A}, these variables are plotted with only acceptance and isolation cuts implemented. All of them have a range where any anomalous effect is negligible, whereas the contribution to the total SM cross section is large. The set of cuts listed below was added to eventually reach the distributions of Fig.~\ref{B}: \begin{itemize} \item The main contribution to the Standard Model cross section comes from soft photons with very low transverse momentum. The following cuts suppress 97$\%$ of these events, while hardly affecting the anomalous diagrams, which, conversely, favour high energy photons: \begin{eqnarray} E_\gamma > 30 \ {\rm GeV} \nonumber \\ p^\gamma_T > 20 \ {\rm GeV} \label{cut3} \end{eqnarray} \item Another remarkable feature of the anomalous diagrams is their very different typical momentum transfers. Let us concentrate on the leptonic momentum transfer, $Q^2_e=-(p'_e-p_e)^2$.
The phase space enhances high $Q^2_e$, while the photon propagator of the Standard Model diagrams prefers low values (above the threshold for electron detectability, $Q^2_e>5.8$~GeV$^2$, with our required minimum energy and angle). On the contrary, the anomalous diagrams always have a $Z$ propagator, which introduces a suppression factor of the order of $Q^2_e/M^2_Z$ and makes the $Q^2_e$ dependence irrelevant, being determined only by the phase space. As a consequence, the following cut looks appropriate, \begin{equation} Q^2_e > 1000 \ {\rm GeV}^2 \label{cut4} \end{equation} \end{itemize} It is important to notice at this point why the usual form factors for the anomalous couplings can be neglected at HERA. For our process, these form factors should be proportional to $1/(1+Q^2/\L^2)^n$. With the scale of new physics $\L=500$~GeV to 1~TeV, these factors can be taken to be one. This is not the case at high energy lepton or hadron colliders, where diboson production in the s-channel needs damping factors $1/(1+\hat{s}/\L^2)^n$. The total cross section for the Standard Model with acceptance and isolation cuts is $\sigma_{\rm SM}=21.38$~pb and is reduced to 0.37~pb when all the cuts are applied, while the quadratic contributions only change from $\sigma_1=2\times10^{-3}$~pb, $\sigma_2=1.12\times10^{-3}$~pb to $\sigma_1=1.58\times10^{-3}$~pb, $\sigma_2=1.05\times10^{-3}$~pb. The linear terms are of importance and change from $\t_1=1.18\times10^{-2}$~pb, $\t_2=1.27\times10^{-3}$~pb to $\t_1=7.13\times10^{-3}$~pb, $\t_2=1.26\times10^{-3}$~pb. Finally, the interference term $\sigma_{12}=1.87\times10^{-3}$~pb changes to $\sigma_{12}=1.71\times10^{-3}$~pb. The typical Standard Model events consist of soft and low-$p_T$ photons, mostly backwards, tending to go in the same direction as the scattered electrons (part of them are emitted by the hadronic current in the forward direction), close to the required angular separation ($\sim 30^o$).
The low-$p_T$ jet goes opposite to both the photon and the scattered electron, also in the transverse plane. On the contrary, the anomalous events have harder, higher-$p_T$ photons, concentrated in the forward region, as is the case for the scattered electron and the jet. \subsection{Sensitivity to anomalous couplings} In order to estimate the sensitivity to anomalous couplings, we consider the $\chi^2$ function. One can define the $\chi^2$, which is related to the likelihood function ${\cal L}$, as \begin{equation} \label{chi2} \chi^2\equiv-2\ln{\cal L}= 2 L \displaystyle\left(\sigma^{th}-\sigma^{o}+\sigma^{o} \ln\displaystyle\frac{\sigma^{o}}{\sigma^{th}}\right) \simeq L \displaystyle\frac{(\sigma^{th}-\sigma^{o})^2}{\sigma^{o}}, \end{equation} where $L=N^{th}/\sigma^{th}=N^o/\sigma^o$ is the integrated luminosity and $N^{th}$ ($N^o$) is the number of theoretical (observed) events. The last expression in (\ref{chi2}) is a useful and familiar approximation, only valid when $|\sigma^{th}-\sigma^o|/\sigma^o \ll 1$. This function measures the probability that statistical fluctuations make the observed number of events indistinguishable from the predicted one, that is, the Standard Model prediction. The well-known $\chi^2$--CL curve allows us to determine the corresponding confidence level (CL). We establish bounds on the anomalous couplings by fixing a certain $\chi^2=\d^2$ and allowing the $h^\gamma_i$ values to vary, $N^o=N^o(h^\gamma_i)$. The parameter $\d$ is often referred to as the number of standard deviations or `sigmas'. A $95\%$ CL corresponds to almost two sigmas ($\d=1.96$). When $\sigma \simeq \sigma_{{\rm SM}} + (h^\gamma_i)^2 \sigma_i$ (the case of the $CP$-odd terms) and the anomalous contribution is small enough, the upper limits obey a useful, approximate scaling property with the luminosity, \begin{equation} h^\gamma_i (L')\simeq\sqrt[4]{\frac{L}{L'}} \ h^\gamma_i (L).
\end{equation} A brief comment on the interpretation of the results is in order. Since the cross section grows with $h^\gamma_i$ in the relevant range of values, the $N^o$ upper limits can be regarded as the lowest number of measured events that would rule out the Standard Model or, equivalently, as the largest values of $h^\gamma_i$ that could be bounded if no effect is observed, at the given CL. This procedure approaches the method of upper limits for Poisson processes when the number of events is large ($\mathrel{\rlap{\lower4pt\hbox{\hskip1pt$\sim$}}\raise1pt\hbox{$>$}} 10$). \bfi{htb} \setlength{\unitlength}{1cm} \bpi{8}{8} \epsfxsize=12cm \put(3.35,4.245){+} \put(-2.5,-1.5){\epsfbox{conh1h2.nogrid.ps}} \end{picture} \bpi{8}{8} \epsfxsize=12cm \put(4.1,4.245){+} \put(-1.75,-1.5){\epsfbox{conh3h4.nogrid.ps}} \end{picture} \caption{\it Limit contours for $Z\gamma\g$ couplings at HERA with an integrated luminosity of 10, 100, 250, 1000 pb$^{-1}$ and a 95\% CL.\label{contour}} \end{figure} In Fig.~\ref{contour} the sensitivities for different luminosities are shown. Unfortunately, HERA cannot compete with the Tevatron, whose best bounds, reported by the D\O\ collaboration \cite{D0}, are \begin{eqnarray} |h^\gamma_1|, \ |h^\gamma_3| &<& 1.9 \ (3.1), \nonumber \\ |h^\gamma_2|, \ |h^\gamma_4| &<& 0.5 \ (0.8). \end{eqnarray} For the first value it was assumed that only one anomalous coupling contributes (`axial limits'), while for the second two couplings contribute (`correlated limits'). Our results are summarized in Table \ref{table}.
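As a rough cross-check of this limit-setting procedure, the Poisson $\chi^2$ of Eq.~(\ref{chi2}) can be inverted numerically for the purely quadratic case $\sigma \simeq \sigma_{\rm SM} + (h^\gamma_i)^2\sigma_i$. The Python sketch below is illustrative only: it uses the after-cut cross sections quoted above, neglects the linear and interference terms (so it does not reproduce the tabulated limits exactly), and checks the quartic luminosity scaling.

```python
import math

def chi2(sigma_th, sigma_obs, lum):
    """Poisson-likelihood chi^2; cross sections in pb, luminosity in pb^-1."""
    return 2.0 * lum * (sigma_th - sigma_obs
                        + sigma_obs * math.log(sigma_obs / sigma_th))

def upper_limit(sigma_sm, sigma_quad, lum, delta=1.96):
    """Solve chi2 = delta^2 for h by bisection, assuming
    sigma(h) = sigma_sm + h**2 * sigma_quad (quadratic term only)."""
    lo, hi = 0.0, 1.0e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        sigma_h = sigma_sm + mid ** 2 * sigma_quad
        if chi2(sigma_sm, sigma_h, lum) < delta ** 2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# After-cut cross sections quoted in the text (pb):
sigma_sm, sigma_2 = 0.37, 1.05e-3
h_100 = upper_limit(sigma_sm, sigma_2, lum=100.0)    # 95% CL, 100 pb^-1
h_1000 = upper_limit(sigma_sm, sigma_2, lum=1000.0)  # 95% CL, 1 fb^-1
# h(L') ~ (L/L')**0.25 * h(L) holds to a few per cent in this regime.
```

The quartic scaling follows because $\chi^2 \simeq L\,(\Delta\sigma)^2/\sigma_{\rm SM}$ with $\Delta\sigma = h^2\sigma_i$, so $h \propto L^{-1/4}$ at fixed $\d$.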
\begin{table} \begin{center} \begin{tabular}{|c|r|r|r|r|r|r|r|r|} \hline HERA & \multicolumn{2}{c|}{10 pb$^{-1}$} & \multicolumn{2}{c|}{100 pb$^{-1}$} & \multicolumn{2}{c|}{250 pb$^{-1}$} & \multicolumn{2}{c|}{1 fb$^{-1}$} \\ \hline \hline $h^\gamma_1$ & -19.0 & 14.5 & -11.5 & 7.0 & -9.5 & 5.5 & -8.0 & 3.5 \\ & -26.0 & 19.5 & -16.0 & 9.5 & -14.0 & 7.0 & -11.5 & 4.5 \\ \hline $h^\gamma_2$ & -21.5 & 20.0 & -12.0 & 10.0 & -9.5 & 8.0 & -7.0 & 6.0 \\ & -26.0 & 30.0 & -13.0 & 18.0 & -10.0 & 15.0 & -7.5 & 12.0 \\ \hline $h^\gamma_3$ & -17.0 & 17.0 & -9.0 & 9.0 & -7.5 & 7.5 & -5.5 & 5.5 \\ & -22.5 & 22.5 & -12.0 & 12.0 & -10.0 & 10.0 & -7.0 & 7.0 \\ \hline $h^\gamma_4$ & -20.5 & 20.5 & -11.0 & 11.0 & -8.5 & 8.5 & -6.0 & 6.0 \\ & -27.5 & 27.5 & -14.5 & 14.5 & -12.0 & 12.0 & -8.5 & 8.5 \\ \hline \end{tabular} \end{center} \caption{\it Axial (first row of each pair) and correlated (second row) limits for the $Z\gamma\g$ anomalous couplings at HERA with different integrated luminosities and $95\%$ CL. \label{table}} \end{table} The origin of these poor results is the fact that, unlike diboson production at hadron or $e^+e^-$ colliders, the anomalous diagrams of $ep\to e\gamma X$ contain a $Z$ propagator that suppresses their effect. The process $ep\to eZX$ avoids this problem thanks to the absence of such propagators: the Standard Model cross section is similar to the anomalous one but, as a drawback, both are of the order of femtobarns. \section{Summary and conclusions} The radiative neutral current process $ep\to e\gamma X$ at HERA has been studied. Realistic cuts have been applied in order to observe a clean signal consisting of a detectable and well-separated electron, photon and jet. The possibility of testing the trilinear neutral gauge boson couplings in this process has also been explored. The $Z\gamma Z$ couplings are strongly suppressed by two $Z$ propagators. Only the $Z\gamma \gamma$ couplings have been considered.
A Monte Carlo program has been developed to account for this anomalous vertex, and further cuts have been implemented to improve the sensitivity to this source of new physics. Our estimates are based on total cross sections, since the expected number of events is so small that a distribution analysis is not possible; the distributions merely helped us to find the optimum cuts. Unfortunately, competitive bounds on these anomalous couplings cannot be achieved at HERA, even with the future luminosity upgrades.\footnote{We would like to apologize for the optimistic but incorrect results that were presented at the workshop due to a regrettable and unlucky mistake in our programs.} On the other hand, a different kinematical region is explored, in which the form factors can be neglected. \section*{Acknowledgements} One of us (J.I.) would like to thank the Workshop organizers for financial support and very especially the electroweak working group conveners and the Group from Madrid at ZEUS for hospitality and useful conversations. This work has been partially supported by the CICYT and the European Commission under contract CHRX-CT-92-0004.
\section{Introduction} Type Ia supernovae (SNe~Ia) are believed to be the result of a thermonuclear runaway explosion of a C/O white dwarf (WD) approaching the Chandrasekhar limit (see \citealt{hillebrandt00} for a review). Explosive nucleosynthesis up to \hbox{$^{56}$Ni} releases $\sim 10^{51}{\rm ~erg}$, unbinding the progenitor. The subsequent light curve is powered by injection of energy from the radioactive decay of \hbox{$^{56}$Ni}; the $\gamma$ rays degrade to longer wavelengths as they diffuse out through the expanding ejecta. The well-established relationship between the light-curve width and the luminosity at peak brightness allows SNe~Ia to be ``standardizable candles'' at optical wavelengths \citep{phillips93}, and possibly almost standard candles in the infrared \citep{krisciunas04, wood-vasey08, folatelli10}. Application of the width-luminosity relation along with colour information to SNe~Ia over cosmological scales has led to the discovery of the accelerating expansion of the Universe (\citealt{riess98, perlmutter99}; see also \citealt{astier06, riess07, wood-vasey07, kowalski08, hicken09b,amunullah10}), indicating either the presence of ``dark energy'' having a negative pressure or a failure of general relativity on the largest scales. Recent work has also shown that including spectral flux ratios may aid in reducing Hubble-diagram residuals \citep{bailey09, blondin11}. Despite the successful cosmological application of SNe~Ia, the SN community lacks a clear understanding of the nature of their progenitor systems.
Possible scenarios include a single-degenerate WD paired with a red-giant post-main-sequence star undergoing stable mass transfer until $M_{\rm WD}$ approaches $M_{\rm Ch} \approx 1.4$~M$_{\odot}$ \citep{whelan73,livio00}, a double-degenerate WD merger that reaches or surpasses $M_{\rm Ch}$ \citep{webbink84, iben94}, or the result of a sub-Chandrasekhar explosion of a WD steadily burning helium accreted from a companion \citep{shen09}. Each scenario carries with it tentative observational evidence and hardships imposed by unrealised theoretical predictions. While considerable attention has been focused on the post-maximum decline of the SN~Ia light curve, less has been paid to the rise of the SN~Ia from explosion to maximum brightness. This is due to the dearth of data in the days following the SN explosion and the intrinsic difficulty of finding SNe~Ia shortly after explosion. The Lick Observatory Supernova Search (LOSS) has been successful at finding SNe in the nearby Universe (redshift $z < 0.05$) and the Sloan Digital Sky Survey (SDSS) Supernova Survey \citep{frieman08} has found and monitored 391 SNe at moderate redshift ($0.03 \le z \le 0.35$). Higher redshift searches such as the SuperNova Legacy Survey \citep{astier06} and ESSENCE \citep{wood-vasey07} benefit from cosmological time dilation, allowing the SN rise to be spread out over more days in the observer's frame. A first attempt at quantifying SN~Ia rise times by \cite{pskovskii84} analysed 54 historical light curves gathered from photographic plates and visual $B$ magnitude estimates, finding a typical rise of 19--20~d. Recent attempts have made use of more reliable data taken with charge-coupled devices (CCDs) which are linear in translating photons to counts over a large dynamic range. 
A seminal paper by \citet[][hereafter R99]{riess99b} used observations of nearby SNe~Ia to find a $B$-band rise time of $19.5 \pm 0.2~{\rm d}$ for a normal SN~Ia having a decline of 1.1 mag between maximum light and 15~d after maximum in the $B$ band (i.e., $\hbox{$\Delta m_{15}(B)$} =1.1$ mag; \citealt{phillips93}). Studies using data on higher-redshift SNe~Ia have found a rise time in concordance with that of R99 \citep{aldering00,conley06}, advancing the notion that there is limited, if any, evolution in SN~Ia properties from high to low redshifts. Most recently, \cite[][hereafter C06]{conley06} found a rise of $19.10^{+0.18}_{-0.17}~{\rm (stat)} \pm 0.2~{\rm (sys)}$~d using SNLS data. In an analysis of $B$- and $V$-band photometry of eight nearby SNe~Ia with excellent early-time data, \citet[][hereafter S07]{strovink07} found tentative evidence for two populations of $B$-band rise times. After correction for light-curve decline rate, S07 found that three SNe rise in $18.81 \pm 0.36$~d and five SNe rise in $16.64 \pm 0.21$~d. More recently, \citet[][hereafter H10]{hayden10a} analysed $B$- and $V$-band photometry of 105 SNe~Ia from the SDSS sample and found that SNe~Ia come from a rather broad distribution of $B$-band rise times, with evidence indicating that slower declining events (e.g., more luminous SNe) have some of the fastest rise times. This result has significant implications for light-curve fitting techniques that rely on a single-parameter family of light curves and theoretical modeling of SNe~Ia. {H10 find an average $B$-band rise time of $17.38 \pm 0.17$~d, a departure from the results of R99 and C06 which H10 trace back to differences in fitting methods. In this paper, we analyse available data on nearby SNe~Ia to measure the rise times, relying heavily on the recently released LOSS sample \citep{ganeshalingam10}. Previous analyses such as those of S07 and H10 use combined results from both $B$- and $V$-band photometry to measure the $B$-band rise time.
In this paper, we compare the $B$-band rise time measured in the $B$ and $V$ bands and find that such a combination may not be appropriate. Instead, we present independent analysis of the $B$ and $V$ bands. Specifically, we define the rise time in a photometric band as the elapsed time between explosion and maximum light for that particular band. Nearby SNe offer the advantage of being able to be monitored by small-aperture telescopes and benefit from not requiring significant $K$-corrections, which at early times are ill defined because of a lack of available spectra to model the SN spectral energy distribution (SED). \section{Data \label{s:data}} \subsection{LOSS Light Curves} LOSS is a transient survey utilizing the 0.76-m Katzman Automatic Imaging Telescope (KAIT) at Lick Observatory (\citealt{li00,filippenko01}; see also Filippenko, Li, \& Treffers 2011, in prep.). KAIT is a robotic telescope that monitors a sample of $\sim 15,000$ galaxies in the nearby Universe ($z < 0.05$) with the goal of finding transients within days of explosion. Fields are imaged every 3--10~d and compared automatically to archived template images, and potential new transients are flagged. These are subsequently examined by human image checkers, and the best candidates are reobserved the following night. Candidates that are present on two consecutive nights are reported to the community using the International Astronomical Union Circulars (IAUCs) and the Central Bureau of Electronic Telegrams (CBETs). Time allotted to our group on the Lick Observatory 3-m Shane telescope with the Kast double spectrograph \citep{miller93} is used to spectroscopically identify and study candidates. Between first light in 1997 and 2010 September 30 UT, LOSS found over 865 SNe, 382 of which have been spectroscopically classified as SNe~Ia. The statistical power of the LOSS SNe is well demonstrated by the series of papers deriving the nearby SN rates \citep{leaman11,li11a,li11b}. 
In addition to the SN search, KAIT monitors active SNe of all types in broad-band \hbox{$BV\!RI$} filters. The first data release of \hbox{$BV\!RI$} light curves for 165 SNe~Ia along with details about the reduction procedure have been published by \cite{ganeshalingam10}. In summary, point-spread function (PSF) fitting photometry is performed on images from which the host galaxy has been subtracted using templates obtained $> 1$~yr after explosion. Photometry is transformed to the Landolt system \citep{landolt83,landolt92} using averaged colour terms determined over many photometric nights. Calibrations for each SN field are obtained on photometric nights with an average of 5 calibrations per field. The LOSS light curves represent a homogeneous, well-sampled set of \hbox{$BV\!RI$} light curves. The average cadence is 3--4~d between observations, with a typical light curve having 22 epochs. Of the 165 \hbox{$BV\!RI$} \ light curves in the sample, 70 have data starting at least one week before maximum light. \subsection{Light Curves from Other Nearby Samples} In addition to the LOSS sample, here we include data from the following previously published SN~Ia samples: the Cal\'{a}n-Tololo sample \citep{hamuy96}, the Center for Astrophysics (CfA) Data Releases 1--3 \citep{riess99a,jha06,hicken09a}, and the Carnegie Supernova Project (CSP) dataset \citep{contreras09}. In cases where there are data from multiple samples, we chose the dataset with the best-sampled light curve to avoid introducing systematic calibration error. With the exception of the CSP sample, all light curves are in the Landolt photometric system. The CSP light curves are in the natural system of the Swope telescope at Las Campanas Observatory. Comparing LOSS $B$-band light curves (in the Landolt system) with CSP $B$-band light curves (in the Swope natural system), we find differences of $\sim 0.03$ mag which do not appear to be correlated with SN colour. 
We adopt 0.03 mag as the systematic uncertainty of the CSP Swope system light curves. We also include individual light curves for SN 1990N \citep{lira98}, SN 1992A \citep{altavilla04}, SN 1994D \citep{patat96}, SN 1998aq \citep{riess05}, SN 1999ee \citep{stritzinger02}, SN 2003du \citep{stanishev07}, SN 2007gi \citep{zhang10}, and a preliminary reduction of SN 2009ig which was found by LOSS about 15~d before maximum light. In total, we have $BV$ light curves for 398 SNe. \section{Methods\label{s:methods}} In this section we detail the method used to measure the rise times of our sample. Most previous measurements of the SN~Ia rise time have used a single-stretch fit (see R99, C06). These studies ``stretched'' the template light curve along the time axis to determine the stretch value, $s$, that best fit the data. A light curve narrower than the template would have $s < 1$, a wider light curve $s > 1$, and a light curve that matched the template perfectly $s = 1$. Explicit in this formalism is that a single stretch value applies to both the rising and falling portions of the light curve. Instead, we adopt a two-stretch fitting procedure first introduced by H10 to fit our $B$- and $V$-band data; we fit each band independently.} Here we discuss our implementation of the two-stretch fitting routine with a template created mostly from LOSS data. \subsection{The Two-Stretch Fitting Method\label{s:two_stretch}} In our two-stretch fitting routine, pre-maximum and post-maximum data are decoupled, allowing the two portions of the light curve to take on different stretch values to match a template. We define \hbox{$s_{r}$}\ to be the ``rise stretch'' and \hbox{$s_{f}$}\ to be the ``fall stretch.'' The template has $s_{r} = s_{f} = 1.00$ by construction. The two stretched light-curve portions are joined at peak where the first derivative is 0, ensuring a continuous function with a continuous first derivative at maximum light. 
Mathematically, the time axis is stretched such that \begin{equation}\label{e:tau} \tau = \left\{ \begin{array}{lcr} \frac{t - t_{0} }{s_{r} (1 + z)} & t \leq t_{0} \\ \frac{t - t_{0} }{s_{f} (1 + z)} & t > t_{0}, \end{array} \right. \end{equation} where $\tau$ represents the effective ``stretch-corrected'' rest-frame epoch, $t_0$ is the time of maximum light, and $z$ is the SN redshift. Similar to other implementations of stretch \citep[e.g.,][]{goldhaber01}, we model each light curve as a function of time by \begin{equation} f(t) = f_{0} S(\tau), \end{equation} where $f_{0}$ is the peak flux and $S$ is our normalised template light curve. We perform a $\chi^2$ minimization to the quantity \begin{equation} \chi^{2} =\sum_{i} \frac{ (F (t_{i}) - f(t_{i}))^{2}}{\sigma_{\rm {phot}}^2 + \sigma_{{\rm template}}^2}, \end{equation} where $i$ is an index summed over all observations, $F(t_{i})$ are individual flux measurements, $\sigma_{{\rm phot}}$ is the photometric uncertainty including systematic error, and $\sigma_{{\rm template}}$ is the uncertainty in our template as described in \S \ref{s:template}. Our $\chi^2$ minimization fits for the values of $f_{0}$, $t_{0}$, $s_{r}$, and $s_{f}$. We restrict the fit to data within $\tau < +35$~d relative to maximum light, after which SNe enter the nebular phase where the stretch parametrisation is no longer applicable \citep{goldhaber01}. The rest-frame rise time of a light curve (i.e., the elapsed time between explosion and maximum light in that band) is obtained by multiplying the measured $s_{r}$ by the fiducial rise time of the template. Following S07, we define the $B$-band fall time to be the amount of time required for the $B$-band light curve to decline by 1.1 mag starting at maximum light in $B$. We define the fall time for the $V$ band to be the required time for a $V$-band light curve to fall by 0.66 mag; the fall time of a light curve is $15\,s_{f}$~d.
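The two-stretch parametrisation above can be sketched in a few lines. The following Python fragment is illustrative only, with a toy Gaussian standing in for the actual template $S$; it is not the fitting code used in this work.

```python
import numpy as np

def stretch_time(t, t0, s_r, s_f, z):
    """Effective stretch-corrected rest-frame epoch: epochs before maximum
    are scaled by the rise stretch s_r, epochs after by the fall stretch s_f,
    with the (1 + z) factor removing cosmological time dilation."""
    t = np.asarray(t, dtype=float)
    scale = np.where(t <= t0, s_r, s_f) * (1.0 + z)
    return (t - t0) / scale

def chi2(params, t, flux, sig_phot, sig_templ, template):
    """Chi^2 of the two-stretch model f(t) = f0 * S(tau)."""
    f0, t0, s_r, s_f, z = params
    model = f0 * template(stretch_time(t, t0, s_r, s_f, z))
    return float(np.sum((flux - model) ** 2 / (sig_phot ** 2 + sig_templ ** 2)))

# Toy check: a smooth stand-in template and noiseless synthetic data.
template = lambda tau: np.exp(-0.5 * (tau / 10.0) ** 2)
truth = (2.0, 0.0, 0.8, 1.2, 0.02)  # f0, t0, s_r, s_f, z
t = np.linspace(-15.0, 30.0, 40)
flux = truth[0] * template(stretch_time(t, *truth[1:]))
```

A minimizer (grid search or otherwise) over $(f_0, t_0, s_r, s_f)$ at fixed $z$ would then recover the generating parameters, since the model matches the data exactly at the true values.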
By construction, both templates with $s_{f} = 1$ have a fall time of 15~d. The $B$-band fall stretch \hbox{$s_{f}$}\ is directly related to \hbox{$\Delta m_{15}(B)$}\ by the simple, empirical formula \begin{equation}\label{e:dmb} \Delta m_{15}(B) \approx 1.1 -1.70( s_{f}(B) -1.0) + 2.30 (s_{f}(B) - 1.0)^2, \end{equation} which was found by stretching our $B$-band template and reading off the resulting value of \hbox{$\Delta m_{15}(B)$}. Similarly, for our $V$-band template, we find \begin{equation}\label{e:dmv} \Delta m_{15}(V) \approx 0.66 - 0.83 (s_{f}(V) - 1.0) + 0.94 (s_{f}(V) - 1.0)^{2}. \end{equation} Note that Equations \ref{e:dmb} and \ref{e:dmv} are only valid for the LOSS templates, and that $s_{f}(B)=1$ and $s_{f}(V) = 1$ respectively correspond to \hbox{$\Delta m_{15}(B)$}\ = 1.1 mag and \hbox{$\Delta m_{15}(V)$}\ = 0.66 mag. $K$-corrections are computed with the spectral series of \citet{hsiao07}, which provides the spectral evolution of a SN~Ia with a one-day cadence. Using all available multi-colour photometry for a SN, the spectrum for the phase nearest the photometry epoch is warped to match the colours in the observer's frame using a third-order spline with knots placed at the effective wavelength of each available filter. $K$-corrections are computed from the warped spectrum and are typically $< 0.05$ mag. For a single SN, our fitting procedure gives the date of maximum light, maximum flux, rise stretch, and fall stretch for both the $B$- and $V$-band light curves. For the purpose of comparing our results to those of H10, we compute the $B$-band rise time (i.e., the time between explosion and maximum light in $B$) found using both $B$- and $V$-band photometry. To determine the $B$-band rise time using $V$-band photometry, we measure the $V$-band rise time (the time between explosion and maximum light in $V$) and subtract off the time between maximum light in $V$ and maximum light in $B$.
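The template-specific relations in Equations \ref{e:dmb} and \ref{e:dmv}, together with the $B$-band rise time inferred from $V$-band photometry, translate directly into code. A minimal Python sketch (the function names are ours, not part of any released pipeline):

```python
def dm15_B(s_f):
    """Delta m_15(B) implied by the B-band fall stretch
    (valid only for the LOSS B-band template)."""
    x = s_f - 1.0
    return 1.1 - 1.70 * x + 2.30 * x ** 2

def dm15_V(s_f):
    """Delta m_15(V) implied by the V-band fall stretch
    (valid only for the LOSS V-band template)."""
    x = s_f - 1.0
    return 0.66 - 0.83 * x + 0.94 * x ** 2

def rise_B_from_V(rise_V, t_max_V, t_max_B):
    """B-band rise time inferred from V-band photometry: the V-band rise
    time minus the lag between maximum light in V and in B."""
    return rise_V - (t_max_V - t_max_B)
```

By construction, $s_f = 1$ returns the fiducial declines of 1.1 mag in $B$ and 0.66 mag in $V$, and a wider light curve ($s_f > 1$) yields a slower decline.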
We employ this determination of the $B$-band rise time using $V$-band photometry to compare the rise-time behaviour of the two bandpasses over the same time period. Consequently, the determination of the $B$-band rise time using the $V$ band typically has larger uncertainties than using the $B$-band photometry, since we must also include the error from both the times of $B$ and $V$ maximum. We emphasise that the $V$-band rise time is defined to be the elapsed time between explosion and maximum light in the $V$ band. \subsection{Template\label{s:template}} The template plays an important role in measuring light-curve properties. A template which does not reflect the data will lead to fits with systematically incorrect measurements of light-curve properties \citep[H10;][]{aldering00}. Here we discuss the sample of objects and the method used to construct our light-curve templates for the $B$ and $V$ band. We construct our templates from a sample of well-observed ``normal" SNe~Ia. Most of the objects come from the set of SNe~Ia observed by LOSS \citep{ganeshalingam10}, but we also include several objects published previously. SNe~Ia with excellent light curves but known peculiarities, such as SN 1999ac \citep{phillips06}, SN 2000cx \citep{li01:00cx, candia03}, SN 2002cx \citep{li03,jha06:02cx}, SN 2004dt \citep{leonard05:03du, wang06, altavilla07}, SN 2005hk \citep{phillips07}, and SN 2009dc \citep{yamanaka09, silverman10}, are avoided. The underluminous SN 1991bg-like objects show distinctly different photometric behaviour compared with the rest of the SN~Ia population, so they are excluded from the sample as well. In total, the set includes 60 objects, many of which are also in the rise-time SN~Ia sample discussed in \S \ref{s:sample}. 
For the zeroth-order template light curve, we adopt the fiducial $\Delta = 0$ (\hbox{$\Delta m_{15}(B)$}\ = 1.1 mag) template from the Multi-colour Light-Curve Shape (MLCS2k2) fitter \citep{jha07}.\footnote{Downloaded from www.physics.rutgers.edu/$\sim$saurabh/mlcs2k2/ on October 7, 2010.} Light-curve data are then fit using a two-stretch fitting parametrisation. The parameters being fit via a $\chi^2$ minimization technique are the time of maximum light $t_{0}$, the flux at maximum light $f_{0}$, the rise stretch $s_{r}$, and the fall stretch $s_{f}$ (see \S \ref{s:two_stretch} for details) after correcting the light curves for time dilation using redshifts obtained from the NASA/IPAC Extragalactic Database.\footnote{http://nedwww.ipac.caltech.edu/} The light-curve data are then normalised using the best-fit parameters to have a peak flux of 1.0 at $\tau = 0$ and de-stretched, using $s_{r}$ and $s_{f}$, along the time axis to match the template. After normalizing and de-stretching all of our light-curve data to match the shape of the template, we study the mean residuals between the data and the template. We find that even though the $\Delta = 0$ templates do a reasonable job in fitting the data, there are still small systematic trends in the fit residuals. For each band, we fit a smooth curve to the residuals and use it as a correction to the input template. These ``refined'' templates are then used to fit the light-curve data from our template sample again and the fit residuals are studied. This process is iterated until convergence is reached --- that is, no systematic trend is observed in the fit residuals. Convergence is achieved within 5 iterations. We restrict this part of the template training procedure to data where the MLCS2k2 template is well defined, but before the SN enters the nebular phase. For the $B$ band this is within the range $-10 < \tau < +35$~d with respect to maximum light in $B$.
For the $V$ band this is $-11 < \tau < +35$~d with respect to maximum light in $V$. To estimate the uncertainties of our templates, we bin the fit residuals in 3~d intervals and calculate their root-mean-square (RMS). Because a particular light-curve fit can have systematic residuals relative to the input template (i.e., several data points in one portion of the light curve all show negative residuals, while data in another portion all show positive residuals), the residuals at different epochs are correlated, and the RMS measurements are an overestimate of the true uncertainties. It is difficult to quantify how the residuals are correlated because different light-curve fits have different patterns of residuals. We assume that the residuals are equally affected by the correlated errors in all portions of the template, and apply a constant scaling factor to the RMS measurements so that the overall fit to all of the data has a reduced $\chi^2 \approx 1$. These scaled RMS measurements are adopted as the uncertainties of our templates. Because of the dearth of early-time data, we adopted an expanding fireball model to describe the SN light curve for $\tau < -10$~d relative to maximum light, based on the arguments presented by R99 and employed in most rise-time studies thereafter. After explosion of the progenitor, the SN undergoes free, unimpeded expansion such that its radius, $R$, expands proportionally with time, $\tau$. Approximating the SN as a blackbody, the optical luminosity through a broad-band filter on the Rayleigh-Jeans tail of the SED is given by $L \propto R^{2}T \propto (v \tau)^2 T$. Although recent observations indicate that $T$ may actually change substantially over this period \citep{pastorello07,hayden10a}, if we assume that changes in $v$ and $T$ are modest, then $L \propto \tau^{2}$.
Wrapping our ignorance into a ``nuisance parameter'' $\alpha$, we can write the flux in the rise-time region as $f = \alpha(\tau + t_{r})^2$, where $t_{r}$ is the rise time of the template. To determine the rise time for our template, we restrict our sample to light curves with data starting at $\tau \leq -10$~d relative to $B$-band maximum and at $\tau \leq -11$~d for the $V$ band. The following approach was adopted. \begin{enumerate} \item{Create a random realization of the light curves in our sample using reported photometry uncertainties for each data point and systematic calibration uncertainties for each light curve in our template section (see \S \ref{s:uncertainties} for details on how this is implemented).} \item{Perform the two-stretch fitting routine, restricting the fits to data within the range $-10 < \tau < 35$~d for $B$ or $-11 < \tau < 35$~d for $V$. The fit is restricted to this range to avoid imposing a shape in the region where we will fit for the rise time.} \item{Normalise and two-stretch correct the light curves in the sample.} \item{Fit a parabola to the ensemble of light-curve points with $\tau \leq -10$~d for $B$ or $\tau \leq -11$~d for $V$ with the constraint that the parabola is continuous with the template.} \end{enumerate} The procedure outlined above was performed 1000 times for the $B$- and $V$-band data independently. We find a best-fit rise time of $17.92 \pm 0.19$~d for the $B$ band and $19.12 \pm 0.19$~d for the $V$ band. We also attempted to find the best rise time for our template using the approach outlined by H10, except on a finer grid. We tested templates with different rise times in the range $15 \leq t_{r} \leq 21$~d in 0.1~d increments (as opposed to the 0.5~d increments of H10). For each template, light curves with data starting at least ten days before maximum light were fit using the two-stretch method (see \S \ref{s:two_stretch} for more details). 
Unlike the previous procedure to determine the rise time, all data at $\tau < +35$~d are used in the fit. We calculated the $\chi^2$ statistic for all stretch-corrected data in the region $\tau \leq -10$~d for each template. We fit a fourth-order polynomial to the curve of reduced $\chi^2$ as a function of $t_{r}$. The minimum of the polynomial is taken to be the rise time that best fits our data. Performing a Monte Carlo simulation of this procedure to draw 500 unique realizations of our dataset, we find a best-fit rise time of $17.11 \pm 0.09$~d for our $B$-band template and $18.07 \pm 0.11$~d for our $V$-band template. These rise times disagree at the 4--5$\sigma$ level with what we found above. It is unclear why these two procedures find such different results for the best-fit rise time for our template. However, when comparing a $B$-band template with a rise time of $17.11$~d to the data after they have been two-stretch corrected using stretches found with fits restricted to $-10 < \tau < +35$~d, we see a significant systematic trend for data at $\tau < -13$~d. Similar results are found in a $V$-band comparison with a template rise time of $18.07 \pm 0.11$~d. While the nature of the discrepancy eludes us, we use a $B$-band rise time of $17.92 \pm 0.19$~d and a $V$-band rise time of $19.12 \pm 0.19$~d to avoid introducing any systematic error. As a precaution, we have run our analysis using both sets of template rise times and find that while some of the final numbers change, they change systematically (by $\sim 0.50$--0.75~d) and do not affect any of our qualitative results or final conclusions. In \S \ref{s:companion}, we discuss using different rise times in the context of searching for signatures of interaction with the companion star in the single-degenerate scenario.
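The expanding-fireball behaviour underlying both template rise-time fits can be illustrated compactly: since $f = \alpha(\tau + t_{r})^2$ implies that $\sqrt{f}$ is linear in $\tau$, a linear fit to $\sqrt{f}$ recovers $t_{r}$ from pre-maximum data. The Python sketch below, on noiseless synthetic data, is a simplification of (not a substitute for) the constrained-parabola fit described above.

```python
import numpy as np

def fireball_rise_time(tau, flux):
    """Fit f = alpha * (tau + t_r)**2 to early-time fluxes via the
    linearisation sqrt(f) = sqrt(alpha) * (tau + t_r); valid while f > 0
    and the scatter is small."""
    slope, intercept = np.polyfit(tau, np.sqrt(flux), 1)
    return intercept / slope  # = t_r (and alpha = slope**2)

# Synthetic pre-maximum light curve with the B-band template rise of 17.92 d:
tau = np.linspace(-16.0, -10.0, 20)
flux = 4.0e-3 * (tau + 17.92) ** 2
t_r = fireball_rise_time(tau, flux)
```

With realistic photometric noise, the weighted parabola fit used in the text is preferable; the linearisation above is only a quick way to see how $t_r$ enters the early-time flux.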
\begin{figure} \begin{center} \includegraphics[scale=.4]{lc_two_stretch_corrected} \end{center} {\caption{Stretch-corrected $B$-band light curves for \hbox{61}\ objects using the two-stretch method. All light curves have been shifted such that $\tau=0$~d is at maximum light and they have the same peak flux. Overplotted in red is the LOSS template. We restrict our fits to data within 35~d of maximum light. The inset panel is a detailed view of the early-time rise of the SN. The bottom panel shows the residuals between the stretch-corrected data and the template normalised by the template flux. } \label{f:lc_two_stretch_corrected}} \end{figure} We caution the reader against interpreting the above template rise times as the ``typical'' rise time of a SN~Ia or even the rise time of a SN with $\Delta m_{15}(B) = 1.1$~mag. The rise time found above should be viewed as the rise time required to match the {\it shape} of our template light curve in the region $-10 < \tau < 0$~d. The rising portion ($\tau < 0$~d) of our template light curve based on the MLCS2k2 template does not {\it a priori} reflect a light curve with a decline $\Delta m_{15} (B) = 1.1~{\rm mag}$. Thus far, the goal of constructing our template was to find a light-curve shape that will fit our sample by applying independent stretches to the rising and falling portions of the light curve. In that sense, the actual number associated with the template rise time is meaningless; the significance is in the {\it shape} of the light curve. For example, we could construct an equivalent $B$-band template with a rise time of 35.84~d by stretching the rising portion ($\tau < 0$~d) of our 17.92~d template by a factor of 2. Using a template with a rise time of 35.84~d would decrease the measured rise stretches by a factor of 2. The final rise time for each SN found by multiplying the rise stretch by the fiducial rise time of the template will be the same for both templates.
In \S \ref{s:rise_v_decline}, we will use the shapes of our template light curves to measure the rise and fall of our data sample and we will address the fall stretch-corrected rise time of SNe~Ia in our sample. Figure \ref{f:lc_two_stretch_corrected} compares our template light curve to the stretch-corrected light-curve data. Residuals to the fit scaled by the template flux are plotted in the bottom panel of the figure. The template fits all portions of the light curve, without any systematic trends in the residuals. \subsection{Estimating Uncertainties\label{s:uncertainties}} To estimate the uncertainties in our fitting procedure, we use a Monte Carlo procedure including the effects of systematic calibration error from different photometric surveys, the uncertainty in the rise time of our template, and the statistical photometric error. The prescription for one simulation of our Monte Carlo procedure is as follows. \begin{enumerate} \item For each survey, model calibration uncertainties by choosing a random photometric offset, and change all photometry from that survey by that random amount. The random offset is chosen for each survey assuming a Gaussian distribution with a mean offset of 0.0 mag and $\sigma = 0.03$ mag. SNe from the same survey will have the same photometric offset while SNe from a different survey may have a different offset. \item Model the photometric error by perturbing every photometric point for each SN randomly based on its reported uncertainty, assuming the uncertainty is Gaussian. \item Modify the LOSS template to have a rise time given by a random draw from a Gaussian distribution defined by the mean of 17.92~d and $\sigma = 0.19$~d for $B$ or 19.12~d and $\sigma = 0.19$~d for $V$ (see \S \ref{s:template} for how these values were derived). \item Fit the simulated data with the modified template. \end{enumerate} We perform 1000 Monte Carlo simulations for our $B$- and $V$-band photometry independently.
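A minimal sketch of one such trial follows; the dictionary-based data layout and variable names are our own assumptions, and the final fitting step is omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

def one_trial(surveys, rise_mean=17.92, rise_sigma=0.19):
    """One realization of the Monte Carlo error prescription (a sketch).

    `surveys` maps a survey name to (mags, mag_errs) arrays; this layout
    is an assumption made for illustration, not the paper's code.
    """
    perturbed = {}
    for name, (mags, errs) in surveys.items():
        # Step (i): one calibration offset per survey, N(0, 0.03 mag),
        # shared by every SN observed by that survey.
        offset = rng.normal(0.0, 0.03)
        # Step (ii): per-point photometric scatter from reported errors.
        perturbed[name] = mags + offset + rng.normal(0.0, errs)
    # Step (iii): draw a template rise time (here for the B band).
    template_rise = rng.normal(rise_mean, rise_sigma)
    # Step (iv), fitting the perturbed data with the modified template,
    # is omitted from this sketch.
    return perturbed, template_rise
```

Repeating such trials 1000 times and taking the standard deviation of the fitted parameters yields the quoted uncertainties.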
We take the parameter uncertainties to be the standard deviation in our 1000 trials and keep track of the covariance matrix between fit parameters. \begin{figure} \begin{center} \includegraphics[scale=.45]{z_dis} \end{center} {\caption{The redshift distribution of our sample broken into the three spectroscopic subclassifications: Normal, High Velocity, and SN 1991T/1999aa-like. The median redshift for the entire sample of \hbox{61}\ objects is 0.016.}\label{f:z_dis}} \end{figure} \subsection{The Rise-Time Sample\label{s:sample}} We restrict our final sample to objects that have $\sigma_{t_{r,f}} \leq 1.5$~d in both the $B$ and $V$ bands to ensure that we are using objects where the rise and fall of the light curve are being fit well. By the nature of our Monte Carlo procedure, each light curve does not have a single value of $\chi^{2}$ associated with it. We instead identify SN light curves which have a median reduced $\chi^{2} < 1.5$ over the 1000 trials. Fits are visually inspected to identify the poor ones, which are excluded from our sample. We also place the somewhat strict requirement that the SN have data starting 7 {\it effective} days before maximum (defined by Equation \ref{e:tau}) to anchor the measurement of \hbox{$s_{r}$}. We explore the effects of relaxing or tightening these requirements in \S \ref{s:cuts} and find that changing the requirements does not affect our final results. Objects similar to the subluminous SN 1991bg \citep{filippenko92:91bg, leibundgut93:91bg} could not be fit satisfactorily by our fitting routine. This is not surprising given the significant differences in light-curve shape and spectral evolution between the SN 1991bg-like SNe~Ia and ``Branch-normal SNe~Ia'' \citep{branch92}. In \S \ref{s:91bg} we discuss our attempt to measure the rise times of SN 1991bg-like objects. We also exclude SN 2005hk, which is of the peculiar SN 2002cx subclass \citep{li03, jha06:02cx,phillips07}.
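These cuts can be summarised as a per-object filter; the thresholds are those quoted above, while the function and argument names are ours.

```python
def passes_cuts(sig_tr_B, sig_tr_V, med_chi2_B, med_chi2_V, first_eff_epoch):
    """Sample cuts from the text: rise/fall-time uncertainty <= 1.5 d in
    both bands, median reduced chi^2 < 1.5 over the Monte Carlo trials,
    and data starting at least 7 effective days before maximum light."""
    return (sig_tr_B <= 1.5 and sig_tr_V <= 1.5
            and med_chi2_B < 1.5 and med_chi2_V < 1.5
            and first_eff_epoch <= -7.0)
```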
Of the initial $BV$ light curves for 398 SNe, 95 have data starting at least 7 effective days before maximum light that could be fit with our two-stretch routine. Of the 95, 63 have good fits with rise and fall times that can reasonably be measured to within an uncertainty of 1.5~d. For the purposes of analysing the rise time of different spectroscopic subclasses, we further break our sample into three groups: Normal, High Velocity (HV), and SN 1991T/1999aa-like \citep{filippenko92:91t,li01b,garavini04}. For the classification of HV objects, we adopt the criterion of \cite{wang09} that the average $v_{\rm Si}$ for spectra taken within one week of maximum light is $\ga 1.2 \times 10^{4}$ km s$^{-1}$. Objects which have $v_{{\rm Si}} \le 10^{4}$ km s$^{-1}$ (more typical Si velocities for a normal SN~Ia) are classified as normal SNe~Ia. \begin{figure} \begin{center} \includegraphics[scale=.4]{all_rise_times} \end{center} {\caption{$B$-band light curves for the SNe used in this analysis. The light curves have been shifted such that they share the same peak flux and $t=0$ is the time of maximum light. The light curves are well sampled and start at least one week before maximum light. The SNe are coded by \hbox{$\Delta m_{15}(B)$}\ which measures the post-maximum decline. The collection of light curves strongly suggests that SNe with a slow rise also have a slow decline. }\label{f:all_rise_times}} \end{figure} We also spectroscopically identify SN 1991T/1999aa-like objects in our sample. The combination of SN 1991T-like and SN 1999aa-like objects is based on previous studies which note the similar photometric and spectroscopic properties of the two subtypes \citep{ li01b, strolger02}. In particular, both exhibit broad light curves (\hbox{$\Delta m_{15}(B)$} $\approx 0.8$--0.9 mag) and spectra indicative of higher photospheric temperatures than normal SNe~Ia (see \citealt{filippenko97} for a review of the spectroscopic diversity of SNe~Ia).
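The velocity criterion can be encoded as a simple classifier; the thresholds are those quoted above from \cite{wang09}, while the function name and the handling of the intermediate velocity range are our own.

```python
def velocity_subclass(mean_v_si_kms):
    """Subclassify by mean Si II velocity (km/s) within a week of maximum
    light, following the Wang et al. (2009) criterion quoted in the text."""
    if mean_v_si_kms >= 1.2e4:
        return "High Velocity"
    if mean_v_si_kms <= 1.0e4:
        return "Normal"
    # The text does not specify this intermediate range explicitly.
    return "unassigned"
```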
Our subclass information comes from spectra taken within one week of maximum light observed as part of the Berkeley SuperNova Ia Program (BSNIP; Silverman et al. 2011, in prep.). Using a modified version of the SuperNova IDentification (SNID) code \citep{blondin07}, we are able to classify SN spectra using a cross-correlation algorithm against a spectral database of known subtypes. Classification of SN 1991T/1999aa-like objects requires a spectrum within a week of maximum light to avoid confusion with normal SNe~Ia \citep{li01b}. In instances where a SN in our sample has a classification in \cite{wang09}, we adopt their subclassifications. Otherwise, our subclassifications are based on BSNIP spectra. We are left without subclass identifications for only SNe 1992bo and 1992bc; hence, these two objects are excluded from our subsequent analysis. Our final sample consists of \hbox{61}\ SNe, including 39 normal, 16 HV, and 6 SN 1991T/1999aa-like SNe. Figure \ref{f:z_dis} shows the redshift distribution of our sample for each of the three spectroscopic subclassifications. We plot the $B$-band light curves for the \hbox{61}\ SNe that passed our cuts in Figure \ref{f:all_rise_times}. The SNe are shifted relative to the time of maximum light and normalised to have the same peak flux. Qualitatively, objects with slower post-maximum declines (i.e., smaller \hbox{$\Delta m_{15}(B)$}) have slower rise times. In \S \ref{s:rise_v_decline}, we explore this relationship in more depth. H10 analyse a sample of 41 nearby SNe~Ia drawn from S07, R99, and the CfA3 sample \citep{hicken09a}. Our analysis benefits from the addition of the LOSS SN~Ia sample which has $BV$ data starting at least a week before maximum light for 70 SNe. Data from the LOSS sample make up $\sim$70\% of the objects that pass our cuts.
\begin{figure} \begin{center} \includegraphics[scale=.5]{b_v_rise} \end{center} {\caption{ Difference in the rest-frame $B$-band rise time derived from the $B$ band, $t_{r}(B)(B)$, and the $V$ band, $t_{r}(B)(V)$, as a function of \hbox{$\Delta m_{15}(B)$}. Differences are generally small, although there appears to be a small trend in the rise time as a function of \hbox{$\Delta m_{15}(B)$}. We find evidence for a systematic difference between the two measurements.} \label{f:b_v_rise}} \end{figure} \subsection{$B$- and $V$-Band Results: To Combine or Not to Combine?\label{s:b_v}} In similar analyses of the $B$-band rise-time distribution, S07 and H10 combine stretches in the $B$ and $V$ bands using an error-weighted mean to produce a final single $B$-band rise time and fall time for each SN. H10 found evidence for a weak trend between the difference in stretch values for $B$ and $V$ as a function of \hbox{$\Delta m_{15}(B)$}\ using measurements for 105 SDSS-II SNe~Ia. In this paper, we take a different approach and independently measure the $B$-band and $V$-band rise times from the corresponding photometric data. However, we can use the results of our fitting routine to look for similar trends in the $B$-band rise time. In \S \ref{ss:brise}, we compare the $B$-band rise time derived from the $B$ band to that derived from the $V$ band. The $B$-band rise time found using the $V$-band data is measured by taking the $V$-band rise time and subtracting the time between maximum light in the $V$ and $B$ bands. Comparisons by H10 are done in stretch space, while ours are between the measured rise/fall times. H10 use a single template rise time of 16.5~d for both the $B$ and $V$ bands for fitting the rise stretch of their data light curves. Going from rise stretch to rise time only requires multiplying the measured rise stretch by the rise time of their fiducial template.
Our fitting routine measures the rise and fall values of stretch using a Monte Carlo simulation which randomly chooses a template rise time based on the best-fit template rise time and its Gaussian uncertainty (see \S \ref{s:uncertainties} for details). The measured rise-stretch values are tied to the template rise time used for the fit, making the rise time a more appropriate quantity to compare. \begin{figure} \begin{center} \includegraphics[scale=.5]{b_v_fall} \end{center} {\caption{ Difference in the fall times in the $B$ band, $t_{f}(B)$, and the $V$ band, $t_{f}(V)$. The difference is consistent with 0~d. } \label{f:b_v_fall}} \end{figure} \subsubsection{$B$-band Rise-Time Comparison\label{ss:brise}} In Table \ref{t:diff_rise}, we show the differences in $B$-band rise time derived using the $B$ band, $t_{r}(B)(B)$, and the $V$ band, $t_{r}(B)(V)$, for our sample divided by spectroscopic subclassification. For our entire sample of \hbox{61}\ SNe, we find a mean difference of $-0.91\pm 0.10$~d (standard error of the mean), in the sense that the $B$-band rise time is shorter using $B$-band photometry compared to that derived using $V$-band photometry. Breaking our objects by spectroscopic subclassification, the difference in $B$-band rise time found using $B$- and $V$-band photometry is $-0.79 \pm 0.13$~d for normal SNe, $-1.08 \pm 0.16$~d for HV SNe, and $-1.17 \pm 0.45$~d for SN 1991T/1999aa-like objects. We find evidence for a systematic difference of $\sim 1$~d between the two determinations of the $B$-band rise time. In Figure \ref{f:b_v_rise} we plot the difference in $B$-band rise time as a function of \hbox{$\Delta m_{15}(B)$}. There appears to be a small trend in the $B$-band rise time difference with increasing \hbox{$\Delta m_{15}(B)$}. Fitting a line results in a slope of $-0.54 \pm 0.44$~d mag$^{-1}$. The slope is computed by bootstrap resampling our sample to give 1000 realizations of our dataset.
The mean and standard deviation of the distribution of the fit slopes are adopted as the most probable slope value and $1\sigma$ uncertainty. We do not find evidence of a significant trend. Our comparison shows evidence for a slight systematic offset: the rise time is $\sim 1$~d longer when measured with the $V$ band. At least part of such a shift can be hidden by uncertainties in measuring the time of maximum flux of a light curve whose derivative slowly approaches 0 within a day of maximum light. Combining the two measurements would therefore introduce systematic errors into an analysis. \subsubsection {Fall-Time Comparison} In a comparison of fall-time stretches derived from the $B$ and $V$ bands by H10, the authors found that objects with small \hbox{$\Delta m_{15}(B)$}\ had a larger $B$-band fall stretch than $V$-band fall stretch (i.e., $t_{f}(B) > t_{f}(V)$). We look for a similar trend in our data. Recall that the $B$-band fall time is defined as the amount of time it takes for the $B$-band light curve to fall by 1.1 mag after maximum light in $B$, and the $V$-band fall time is defined as the amount of time it takes for the $V$-band light curve to fall by 0.66 mag after maximum light in $V$. By construction, our templates both have a fall time of 15~d. In Table \ref{t:diff_fall} we show the difference in fall times between the $B$ and $V$ bands (i.e., $t_{f}(B) - t_{f}(V)$). A comparison of the fall times for all \hbox{61}\ objects gives a mean difference of $-0.21 \pm 0.09$~d (standard error in the mean) between the $B$ and $V$ bands. There is no significant mean difference in the fall time measured between the $B$ and $V$ bands across all subclassifications. We plot the difference in fall times as a function of \hbox{$\Delta m_{15}(B)$}\ in Figure \ref{f:b_v_fall}. The mean fall-time difference is consistent with 0~d. Fitting a line to the data, we measure a slope of $-0.47 \pm 0.36$~d mag$^{-1}$, finding no significant evidence for a trend.
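The bootstrap slope estimate used in both comparisons can be sketched as follows; this is illustrative code, not that used for the analysis.

```python
import numpy as np

rng = np.random.default_rng(7)

def bootstrap_slope(x, y, n_boot=1000):
    """Straight-line slope and its 1-sigma uncertainty from bootstrap
    resampling: refit each of n_boot resampled datasets and adopt the
    mean and standard deviation of the fitted slopes."""
    n = len(x)
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)   # resample with replacement
        slopes[i] = np.polyfit(x[idx], y[idx], 1)[0]
    return slopes.mean(), slopes.std()
```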
\begin{figure*} \begin{center} \includegraphics[scale=.7]{m15b_v_tr} \end{center} {\caption{The rest-frame $B$-band rise time plotted as a function of \hbox{$\Delta m_{15}(B)$}\ (left), and the rest-frame $V$-band rise time as a function of \hbox{$\Delta m_{15}(V)$}\ (right). Note that the rise time is not stretch corrected. Using the two-stretch fitting method, we find a correlation between \hbox{$\Delta m_{15}(B)$}\ (calculated from $s_{f}(B)$ and Equation \ref{e:dmb}) and rise time. SNe with smaller \hbox{$\Delta m_{15}(B)$}\ (i.e., a slower post-maximum decline rate) have longer rise times. SNe that have been identified as HV are plotted as red stars. For fixed \hbox{$\Delta m_{15}(B)$}, HV SNe appear to have shorter rise times than their normal counterparts in the $B$ band. Overluminous SN 1991T/1999aa-like objects are plotted as blue squares. These objects have the smallest values of \hbox{$\Delta m_{15}(B)$}\ and the longest rise times. A less prominent, but similar correlation exists in \hbox{$\Delta m_{15}(V)$}\ (calculated from $s_{f}(V)$ and Equation \ref{e:dmv}) and the $V$-band rise time.}\label{f:m15b_v_tr}} \end{figure*} \begin{table} \caption{Mean differences in $B$-band rise time derived using $B$- and $V$-band photometry by spectroscopic subclassification \label{t:diff_rise}} \begin{center} \begin{tabular}{lc} \hline Subclassification & $ t_{r}(B)(B) - t_{r}(B)(V)$ \\ \hline Normal & $-0.79 \pm 0.13$ d \\ High Velocity & $-1.08 \pm 0.16$ d\\ SN 1991T/1999aa-like & $-1.17 \pm 0.45$ d \\ All & $-0.91 \pm 0.10$ d\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Mean differences in $B$- and $V$-band fall times by spectroscopic subclassification \label{t:diff_fall}} \begin{center} \begin{tabular}{lc} \hline Subclassification & $ t_{f}(B) - t_{f}(V)$ \\ \hline Normal & $-0.20 \pm 0.08$ d\\ High Velocity & $-0.19 \pm 0.22$ d\\ SN 1991T/1999aa-like & $-0.32 \pm 0.39$ d\\ All & $-0.21 \pm 0.09$ d\\ \hline \end{tabular} \end{center} 
\end{table} \section{Analysis} In this section we present our analysis of the $B$- and $V$-band rise-time distribution of our nearby sample. Given the results found in \S \ref{s:b_v}, we treat the $B$ and $V$ bands separately. In the following analysis, the rise time for a given band is defined as the elapsed time between explosion and maximum light in that band. Note that the results are not corrected for stretch unless indicated by a ${'}$ symbol, in which case the quantity is corrected for fall stretch. \subsection{Rise Times Correlated with Decline \label{s:rise_v_decline}} In Figure \ref{f:m15b_v_tr}, we plot the rest-frame $B$-band rise time as a function of \hbox{$\Delta m_{15}(B)$}\ and the rest-frame $V$-band rise time as a function of \hbox{$\Delta m_{15}(V)$}\ for our nearby sample of \hbox{61}\ objects. In both bands, there is a strong correlation between the rise time and the post-maximum decline, although the scatter indicates that the situation is more complicated than a simple one-to-one mapping between light-curve decline and the rise time. In general, SNe with slower post-maximum declines (e.g., small \hbox{$\Delta m_{15}(B)$}/ \hbox{$\Delta m_{15}(V)$}) have longer rise times. This correlation is evident in $B$, but shows more scatter in $V$. Using \hbox{61}\ objects, we calculate a Pearson correlation coefficient of $-0.69 \pm 0.03$ for $B$ and $-0.49 \pm 0.04$ for $V$. The probability of a Pearson coefficient $ < -0.49$ for two uncorrelated variables and \hbox{61}\ measurements is $ < 0.01 \%$, indicating a highly significant correlation in both $B$ and $V$. The correlation remains strong even when excluding SN 1991T/1999aa-like objects. Figure \ref{f:m15b_v_tr} indicates that SN~1991T/SN~1999aa-like SNe (i.e., overluminous SNe~Ia) have the longest rise times. In contrast, H10 find that overluminous events have some of the shortest $B$-band rise times in their sample. 
However, H10 have few overluminous events and spectroscopic identifications were not included in their analysis. Instead, identifications of overluminous objects were based on \hbox{$\Delta m_{15}(B)$}. \begin{table} \caption{Rest-frame, fall-stretch corrected $B$- and $V$-band rise times\label{t:sc_rise}} \begin{center} \begin{tabular}{lcc} \hline Subclassification & $t_{r}{'} (B)$ & $t_{r}{'} (V)$ \\ \hline Normal & $18.03 \pm 0.24$~d & $20.23 \pm 0.44$~d \\ HV & $16.63 \pm 0.29$~d & $19.43 \pm 0.33$~d \\ SN 1991T/1999aa-like & $18.05 \pm 0.69$~d & $20.00 \pm 0.68$~d \\ \hline \end{tabular} \end{center} \end{table} In Table \ref{t:sc_rise}, we give the fall-stretch corrected rise times, $t_{r}{'}$, for the various spectroscopic subclassifications in our sample for the $B$ and $V$ bands. When $B$-band light curves are fall-stretch corrected to a light curve with $\Delta m_{15} (B) = 1.1$ mag (i.e., $s_{f} = 1$), the rise times of normal SNe~Ia and HV objects are $18.03 \pm 0.24$~d (uncertainty in the mean) and $16.63 \pm 0.29$~d, respectively, a $\sim 3 \sigma$ difference. H10 found an average $B$-band rise of $17.38 \pm 0.17$~d for the SDSS sample, within $2.2\sigma$ of our rise time for normal SNe~Ia. When we correct our $V$-band rise times to $\Delta m_{15}(V) = 0.66$~mag by dividing by the $V$-band fall stretch for normal objects, we find a fall-stretch corrected, rest-frame $V$-band rise time of $20.23 \pm 0.44$~d (uncertainty in the mean). The fall-stretch corrected rise time of HV objects is $19.43 \pm 0.33$~d, consistent with the $V$-band rise time of normal objects. If we assume that a normal SN~Ia with $\Delta m_{15}(B) = 1.1$~mag corresponds to $\Delta m_{15}(V) = 0.66$~mag, then the fall-stretch corrected $V$-band rise time is $2.20 \pm 0.50$~d longer than the $B$-band rise time.
This is within $1.5\sigma$ of the 1.5~d difference reported by R99 despite our disagreement on the actual measured $B$-band and $V$-band rise times for a $\Delta m_{15}(B) = 1.1$~mag SN (R99 measure $19.5 \pm 0.2$~d and $21 \pm 0.2$~d for their $B$- and $V$-band rise times, respectively). For the HV objects, we measure a fall-stretch corrected rise-time difference of $2.80 \pm 0.44$ d. The larger difference (in absolute terms) in the $B$- and $V$-band rise times for HV objects compared to normal objects indicates that HV objects have a faster rise in the $B$ band than in the $V$ band (compared to normal objects). \subsection{The Rise Time of High Velocity SNe} Plotted as red stars in Figure \ref{f:m15b_v_tr} are objects spectroscopically subclassified as HV, while normal SNe~Ia are plotted as filled circles. In the $B$ band, HV objects appear to lie along a locus of points below that of normal SNe. For a fixed \hbox{$\Delta m_{15}(B)$}\ in the $B$ band, HV objects have a shorter rise than normal SNe. Analysing smaller samples, \cite{pignata08} and \cite{zhang10} previously found that HV SNe~Ia appear to have a faster $B$-band rise for a given \hbox{$\Delta m_{15}(B)$}. This result is not evident in the $V$ band, where HV objects do not differ significantly from their normal counterparts. The $\sim 3 \sigma$ difference in fall-stretch corrected rise time between HV and normal objects in $B$ and the lack of a significant difference in $V$ provides evidence for subtle differences in the photometric evolution of these two subclassifications. Recent work by \cite{wang09} and \cite{foley11} has shown that the two subclassifications have different $B_{\rm{max}} - V_{\rm{max}}$ pseudocolour\footnote{Note that $B_{\rm{max}} - V_{\rm{max}}$ is the $B$-band magnitude at maximum light in the $B$ band minus the $V$-band magnitude at maximum light in the $V$ band. 
The dates of maximum light are typically offset by $\sim$2~d, making this quantity not an actual observed colour of the SN at a discrete time.} distributions, with HV objects typically having a redder pseudocolour. Furthermore, these studies have shown that separating HV and normal objects in cosmological analyses reduces the scatter in a Hubble diagram from 0.18 mag to 0.12 mag. \cite{foley11} provide a model which offers a possible explanation for why HV objects have a redder $B_{\rm{max}} - V_{\rm{max}}$ colour at maximum light. The two dominant sources of opacity in the atmosphere of a SN are electron scattering and line opacity from Fe-group elements. Electron scattering opacity is wavelength independent, while line opacity from Fe-group elements is most significant at wavelengths shorter than 4300 \AA~\citep[][hereafter KP07]{kasen07b}. The transition from electron scattering to Fe-group line opacity occurs near the peak of the $B$ band. SNe with high-velocity ejecta will have broader Fe-group absorption features, which will decrease flux in the $B$ band while having little effect on $V$-band flux. \cite{foley11} analyse models of an off-centre failed deflagration to detonation from KP07 to explore the expected differences in observables of HV and normal SNe. The KP07 models studied a single SN with an off-centre ignition viewed from different angles. When viewed on the side nearest the ignition, the KP07 models produce a SN with HV features. When viewed from the side opposite the ignition, they produce a normal SN. Although the models presented in KP07 were intended to study the observational consequences of off-centre explosions, their model spectra and light curves conveniently produce a distribution of ejecta velocities similar to that observed between normal and HV SNe, depending on viewing angle. The set of models predicts that HV objects should have redder colours at maximum light in comparison to normal objects (see Fig.
8 of \citealt{foley11}). Using the model light curves of KP07, we can explore how the rise time of HV objects will differ from that of normal objects. If all other parameters are equal (i.e., nickel mass, kinetic energy, etc.), the KP07 model predicts that HV SNe should have shorter rise times in $B$ compared to normal SNe (as indicated by the model light curves in their Fig. 11) due to the enhanced line opacity from Fe-group elements. Although not specifically stated by KP07, one would expect the $V$-band rise time of the two subclasses to be similar, since the opacity in this wavelength range is dominated by electron scattering. Consequently, the KP07 models also predict that the enhanced opacity at blue wavelengths from HV components in the SN ejecta reduces the peak absolute $B$-band magnitude and hastens the evolution of the $B$-band light curve, producing a larger \hbox{$\Delta m_{15}(B)$}. This complicates a direct comparison between the rise time of HV and normal SNe and does not necessarily predict the observed result in the left panel of Figure \ref{f:m15b_v_tr}, that HV SNe lie on a locus of points below that of normal SNe in the $\Delta m_{15}(B)$--$t_{r}$ plane. One expectation of the KP07 models is a \hbox{$\Delta m_{15}(B)$}\ distribution for HV objects that is pushed to larger values in comparison to normal objects. We find a median \hbox{$\Delta m_{15}(B)$} = 1.09 mag for the HV objects and a median \hbox{$\Delta m_{15}(B)$} = 1.11 mag for the normal objects, showing no such shift. However, our sample is most likely not representative of a complete sample of SNe~Ia and suffers from observational bias. Without quantifying how HV ejecta change both \hbox{$\Delta m_{15}(B)$}\ and $t_{r}$, we cannot definitively claim that the models of KP07 explain our result.
While it is beyond the scope of this paper to match theoretical models to our observations, the KP07 models offer a possible qualitative explanation for why there are significant differences between HV and normal SNe in the $B$ band, but not in the $V$ band. \begin{figure*} \begin{center} \includegraphics[scale=.7]{rmf_distribution_w_99ee} \end{center} {\caption{{\it Top panels:} The rest-frame corrected rise time minus the fall time (RMF) as a function of \hbox{$\Delta m_{15}(B)$}\ for the $B$ band (left) and of \hbox{$\Delta m_{15}(V)$}\ for the $V$ band (right). These quantities are not stretch corrected. Plotted as a dashed black line is the prediction using a single-stretch prescription where there is a one-to-one mapping of rise time to fall time. The nonlinearity in the prediction is due to the quadratic relationship between $s_{f}$ and \hbox{$\Delta m_{15}(B)$}\ in Equation \ref{e:dmb}. There is significant scatter about the line. We find that slowly declining SNe tend to have a faster rise than predicted by the single-stretch model. {\it Bottom panels:} The RMF distribution in the $B$ and $V$ bands. HV objects have a shorter $B$-band RMF in comparison to objects which are spectroscopically normal. A K-S test indicates a $< 0.01\%$ chance that the two distributions are drawn from the same parent distribution. No significant difference is seen in the $V$-band RMF distribution. SN 1999ee is the only normal SN with $B$-band RMF $< -1~\rm{d}$, although a previous study noted HV features in spectra of this object \protect \citep{mazzali05}. }\label{f:rmf_distribution}} \end{figure*} Recent work by \cite{maeda10} finds evidence that SNe with a high velocity gradient (HVG) in the \ion{Si}{II} line may be the natural result of viewing an asymmetric explosion. HVG SNe are defined by \cite{benetti05} as SNe that exhibit a time derivative in the velocity of the \ion{Si}{II} line $> 70$~km~s$^{-1}$~d$^{-1}$ around maximum light. 
\citeauthor{benetti05} provide evidence that the HV and HVG subclassifications are highly correlated. Looking at the \ion{Fe}{II} $\lambda$7155 and \ion{Ni}{II} $\lambda$7378 nebular emission in late-time spectra that trace the deflagration ashes, \citeauthor{maeda10} find that HVG SNe tend to exhibit redshifted lines while low-velocity gradient (LVG) SNe show a blueshift. The authors attribute this observational distinction to the difference between viewing an asymmetric explosion from the side nearest the site of initial deflagration (LVG) and the opposite side (HVG). The models of asymmetric explosions of SNe~Ia presented by \citet[][hereafter M11]{maeda11} indicate that, when viewing a SN from the far side, one expects HVG SNe to have longer bolometric rise times and smaller \hbox{$\Delta m_{15}(B)$}\ than comparable LVG SNe. This effect is more prominent in SNe with less $^{56}$Ni (for instance, compare models A0.3 and A0.6 in Fig. 11 of M11). The longer rise times in HVG objects are attributed to an increased optical depth for optical photons when viewing the explosion from the side opposite the explosion site (i.e., the site of \hbox{$^{56}$Ni}\ synthesis). This is not necessarily inconsistent with our observational result that the HV SNe have a different rise-time distribution than normal SNe. If we view the SN from the side nearest the initial explosion site, we will see a LVG SN, and if we view the SN from the opposite side, we will see a HVG SN. Assuming that HVG implies HV and that the $B$ band roughly traces the bolometric behaviour, we will measure a longer rise time and a smaller \hbox{$\Delta m_{15}(B)$}\ for a high-velocity object. This will move the HV object up and to the left of a normal SN in the \hbox{$\Delta m_{15}(B)$}--$t_{r}$ plane (Fig.~\ref{f:m15b_v_tr}). Depending on how the viewing angle affects both rise time and \hbox{$\Delta m_{15}(B)$}, the models of M11 could put the locus of HV points below that of normal SNe.
However, the models presented in M11 do not outright predict a difference in rise times of HV objects in the $B$ and $V$ bands. The models of KP07 and M11 offer opposite theoretical predictions for the rise time of HV SNe in comparison to normal SNe. KP07 predict that HV SNe should have larger \hbox{$\Delta m_{15}(B)$}\ and shorter rise times than normal SNe, while M11 predict smaller \hbox{$\Delta m_{15}(B)$}\ and longer rise times. The differences are rooted in the nature of the SN asymmetry (i.e., the distribution of intermediate-mass elements and $^{56}{\rm Ni}$) and the treatment of opacity. KP07 use an expansion opacity formalism which sums over individual lines, while M11 use a frequency-averaged grey opacity. A better test of the HV models may be found by looking at the \hbox{$\Delta m_{15}(B)$}\ distributions of a complete SNe~Ia sample, since both models predict that asymmetries should influence the measured \hbox{$\Delta m_{15}(B)$}. \cite{wang09} found the \hbox{$\Delta m_{15}(B)$}\ distribution of HV and normal SNe to be strikingly similar despite different $B_{\rm max} - V_{\rm max}$ distributions, although it is not clear that their sample is complete. Despite evidence that HV objects have different rise-time properties than normal SNe, a clear physical picture remains elusive. Further efforts in modelling HV objects may shed light on the rise-time distribution of the different spectroscopic subclassifications. \subsection{The Rise Minus Fall Distribution\label{s:rmf}} In the top two panels of Figure \ref{f:rmf_distribution}, we compare the rise time minus the fall time (RMF) as a function of \hbox{$\Delta m_{15}(B)$}\ for the $B$ band (left panel) and \hbox{$\Delta m_{15}(V)$}\ for the $V$ band (right panel). Note that the RMF has not been stretch corrected. As in Figure \ref{f:m15b_v_tr}, blue squares refer to SN~1991T/SN~1999aa-like objects, red stars refer to HV SNe~Ia, and black circles refer to spectroscopically normal SNe~Ia.
Overplotted as a dashed line is the expectation from a one-to-one mapping of rise time to fall time using a single-stretch parametrisation. For the $B$ band we use a fiducial 18.03~d rise time and for the $V$ band we use a rise time of 20.23~d based on the fall-corrected rise times found in \S \ref{s:rise_v_decline}. Clearly, our sample does not strictly follow a single-stretch parametrisation. Similar to the results of H10, a number of slowly declining objects have a faster rise time (i.e., a smaller RMF) than expected from a one-stretch parametrisation in both $B$ and $V$. We reiterate that based on the results of Figure \ref{f:m15b_v_tr}, more luminous SNe have longer rise times than less luminous SNe; however, Figure \ref{f:rmf_distribution} indicates that more luminous SNe have faster rise times than expected based on a single-stretch parametrisation. In the bottom two panels of Figure \ref{f:rmf_distribution}, we plot the rest-frame $B$- and $V$-band distributions of RMF for the various subclass identifications. In $B$, HV objects have a mean RMF of $1.55 \pm 0.27$~d (uncertainty in the mean) in comparison to normal SNe that have an RMF of $2.77 \pm 0.20$~d. The $4 \sigma$ difference in mean RMF offers evidence that the two distributions may be drawn from different populations. The mean RMF for our sample by subclassification can be found in Table \ref{t:rmf}. Despite significant differences in the $B$ band, the mean $V$-band RMFs for the spectroscopic subclassifications are consistent with one another. A Kolmogorov-Smirnov (K-S) test finds that the two groups of SNe have a $\sim 0.01\%$ probability of being drawn from the same $B$-band RMF distribution and a $\sim 44\%$ probability of being drawn from the same $V$-band RMF distribution. The lone SN with RMF $< -1$~d in $B$ is SN 1999ee. \cite{mazzali05} find evidence of high-velocity features in spectra taken before maximum light.
However, we retain the classification of \cite{wang09} and regard SN 1999ee as being normal. Our application of the K-S test indicates that HV and normal objects have a high probability of being drawn from different populations. However, the K-S test does not reveal whether this difference is physical in origin or a reflection of observational bias. For instance, the difference in RMF between the two populations may also be a result of different stretch distributions. Such differences cannot be disentangled without knowledge of the observational bias in each of the photometric surveys used in our sample. Future surveys with proper spectroscopic follow-up observations and understanding of biases will aid in exploring the difference in RMF distribution for HV and normal SNe. Our sample also includes 6 objects which show spectroscopic similarities to the overluminous SN 1991T or SN 1999aa. Based on the $B$- and $V$-band distribution of RMF for SN 1991T/1999aa-like objects, these objects have slightly larger RMFs compared to normal and HV objects; however, the mean of the 1991T/1999aa-like distribution is consistent with the normal objects within $1\sigma$. S07 find evidence for two distinct RMF distributions using eight SNe. While we have presented evidence for distinct HV and normal populations, S07 only have a single HV object in their sample (SN 2002bo). The other seven SNe are classified as normal by the criterion of \cite{wang09}. We do not see evidence for two populations of rise times within our spectroscopically normal SNe. Similar to the results of H10, we find that our sample does not follow the expectation from a one-stretch parametrisation of light-curve shape. This is especially evident in $V$, where the trend appears to go in the opposite direction predicted by a single-stretch fit.
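The two-sample K-S comparison used above can be sketched from scratch. Everything below is illustrative: the RMF samples are synthetic draws centred on the quoted means, not our measurements, and the p-value uses the standard asymptotic approximation.

```python
import math
import random

def ks_2samp(x, y):
    """Two-sample K-S statistic D and an approximate (asymptotic) p-value."""
    x, y = sorted(x), sorted(y)
    n, m = len(x), len(y)
    i = j = 0
    d = 0.0
    # Walk both empirical CDFs and track their maximum separation.
    while i < n and j < m:
        if x[i] <= y[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n - j / m))
    # Asymptotic Kolmogorov distribution for the p-value.
    en = math.sqrt(n * m / (n + m))
    lam = (en + 0.12 + 0.11 / en) * d
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)

random.seed(0)
# Hypothetical B-band RMF samples (days), centred on the quoted means.
rmf_normal = [random.gauss(2.77, 1.2) for _ in range(40)]
rmf_hv = [random.gauss(1.55, 1.1) for _ in range(12)]
d_stat, p_val = ks_2samp(rmf_normal, rmf_hv)
print(f"D = {d_stat:.3f}, p ~ {p_val:.4f}")
```

A small p-value argues against a common parent distribution, as found for the $B$-band samples in the text; in practice a library routine such as `scipy.stats.ks_2samp` would be used, which also treats ties exactly.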
\begin{table} \caption{Mean rise minus fall times for different spectroscopic subclassifications\label{t:rmf}} \begin{center} \begin{tabular}{lcc} \hline Subclassification & RMF ($B$) & RMF($V$) \\ \hline Normal & $ 2.77 \pm 0.20$~d & $ 4.83 \pm 0.33$~d \\ High Velocity &$ 1.55 \pm 0.27$~d & $ 4.45 \pm 0.28$~d \\ SN 1991T/1999aa-like & $ 3.38 \pm 0.73$~d & $ 5.72 \pm 0.73$~d \\ \hline \end{tabular} \end{center} Note -- Quantities have not been corrected for stretch. \end{table} \subsection{Rise-Time Power Law\label{s:power}} Previously, we assumed the rise in flux took the form of a parabola, $n = 2$, based on physical arguments presented in \S 3.1. In this section, we fit for the functional form of the rising portion of the light curve ($\tau \leq -10~\rm{d}$) as a power law of the form $f = A(\tau + t_{r})^n$, where $t_{r}$ is the rise time. Allowing $n$ to vary, we perform a $\chi^{2}$ minimization to find the best-fit power law to our $B$-band photometry. However, unlike the analysis in previous sections, we restrict our two-stretch fitting procedure to $-10 < \tau < +35~\rm{d }$ in order to avoid imposing a shape on the region we plan to fit. We find a best fit of $n = 2.20^{+0.27}_{-0.19}$, consistent ($1\sigma$) with the expanding fireball modeled by a parabola. The uncertainty in the power-law index is found using a Monte Carlo simulation similar to that outlined in \S \ref{s:uncertainties} except modified to only fit within the region $-10 < \tau < 35$~d. Our result is in agreement with that of C06, who find $n = 1.8 \pm 0.2$, and H10, who find $n = 1.80^{+0.23}_{-0.18}$. However, H10 show evidence of significant colour evolution during this period of light-curve evolution, challenging our assumption of modest temperature change. They find a linear $B - V$ colour evolution of 0.5 mag between 15~d and 9~d before maximum light, leading to a temporal dependence closer to $n \approx 4$ rather than the $n = 2$ predicted by an expanding fireball. 
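The power-law fit described in this section amounts to a $\chi^{2}$ minimisation over $(A, t_{r}, n)$. A minimal sketch with synthetic photometry (assuming, purely for illustration, a true $n = 2$ and $t_{r} = 18$~d): for fixed $(t_{r}, n)$ the amplitude $A$ has a closed-form least-squares solution, so a simple grid search over the two nonlinear parameters suffices.

```python
import random

random.seed(1)
# Synthetic early-time fluxes, f = A (tau + t_r)^n plus noise
# (all values illustrative, not our photometry).
t_true, n_true, a_true, sigma = 18.0, 2.0, 1.0, 0.05
taus = [-16.0 + 0.5 * k for k in range(13)]          # tau <= -10 d
flux = [a_true * (t + t_true) ** n_true + random.gauss(0.0, sigma)
        for t in taus]

def chi2(t_r, n):
    # Closed-form best-fit amplitude A for fixed (t_r, n).
    model = [(t + t_r) ** n for t in taus]
    a = sum(f * m for f, m in zip(flux, model)) / sum(m * m for m in model)
    return sum(((f - a * m) / sigma) ** 2 for f, m in zip(flux, model))

best = min(((chi2(t_r, n), t_r, n)
            for t_r in [16.0 + 0.1 * i for i in range(41)]
            for n in [1.5 + 0.05 * j for j in range(21)]),
           key=lambda r: r[0])
print(f"best fit: t_r = {best[1]:.2f} d, n = {best[2]:.2f}")
```

With low noise the grid search recovers the generating parameters; in the actual analysis the uncertainty on $n$ comes from the Monte Carlo procedure described above.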
We check for similar evidence of colour evolution in the expanding fireball phase in our sample. Using the colour curves for SNe in the range of $-15 <\tau < -9$~d relative to maximum light, we create a median $B - V$ colour curve similar to the analysis presented by H10. We divide the data into 1~d interval bins and take the median $B-V$ colour for each bin. Corrections for Milky Way extinction were made using the dust maps provided by \cite{schlegel98}. No corrections were attempted for possible host-galaxy extinction. Our median colour curve shows a small change of 0.05 mag over the 6~d interval. However, inspecting the $B-V$ colour curve for SN 2009ig in this range, we find a drop of 0.5 mag over the same interval, similar to what is reported by H10. This leads us to suspect that our median colour curve does not necessarily reflect the full sample, and at least in individual SNe, there can be significant colour change at $\tau < -9$~d. The discrepancy between the median colour curve and that of SN 2009ig may be a result of using noisy data in the earliest time bins for the median colour curve. \begin{figure*} \begin{center} \includegraphics[scale=.5]{kasen_models} \end{center} {\caption{ Two-stretch corrected $B$-band light curves compared to the expected flux of a collision between the SN ejecta and a 1 M$_{\odot}$ companion assuming the companion is undergoing Roche-lobe overflow. Only data in the range $-10 < \tau < +35 $~d are used to stretch correct the light curves, to avoid imposing a shape onto data at $\tau \leq -10$~d. Plotted as a red line is the light-curve template with no shock interaction assuming various power laws for the initial rise, indicated by $n$, and rise times, $t_{r}$. Plotted as a blue dashed line is the expected ``shocked template," which is the expected shock emission added to the template assuming different separation distances $a_{13}$ and the fraction of SN explosions that show signs of interaction, $f$. 
The expected flux from the companion is estimated using the analytic models of \protect \cite{kasen10}. The bottom plot for each panel shows the resulting normalised residual curve between the data and the ``shocked template." We find a considerable degree of degeneracy between the adopted rise time, the power-law index for the initial rise, and the shock emission. As the four plots indicate, a similar minimum reduced $\chi^{2}$ can be achieved with $n=2$, $t_{r} = 17.92$~d and no shock emission (top left) or by varying the rise time, power-law index, and amount of shock emission (top right, bottom left, and bottom right, respectively). }\label{f:find_prog_emis}} \end{figure*} \subsection{Companion Interaction\label{s:companion}} Recently, \cite{kasen10} proposed using emission produced from the collision of the SN ejecta with its companion star to probe the progenitor system. Similar to the shock break-out in core-collapse SNe that is theoretically well understood \citep[e.g.,][]{klein78,matzner99} and observed \citep[e.g.,][]{soderberg08,modjaz09}, in the single-degenerate progenitor scenario the expanding SN ejecta are expected to collide with the extended envelope of the mass donating companion. In the case of a companion undergoing Roche-lobe overflow, the radius of the companion is on the same order of magnitude as the separation distance. The timing and luminosity emitted from the interaction will depend on the mass of the companion and the separation distance. The shock emission is expected to be brightest in the ultraviolet, but detectable in the $B$ band. \cite{hayden10b} looked for this signal in the rising portion of the $B$-band light curves of 108 SDSS SNe~Ia, finding no strong evidence of a shock signature in the data. Using simulated light curves produced with \verb+SNANA+ \citep{kessler09} and a Gaussian to model the shock interaction, the authors constrain the companion in the single-degenerate scenario to be less than a 6 M$_{\odot}$ \!\! 
main-sequence star, strongly disfavouring red giants (RGs) undergoing Roche-lobe overflow. For this analysis, we focus on our $B$-band data where we have the best chance of detecting the signs of shock emission. \cite{kasen10} predicts that in the initial few days after explosion, the luminosity produced by the interaction with the companion will dominate the luminosity powered by $^{56}$Ni decay. Inspection of individual light-curve fits with the two-stretch fitting routine does not show the tell-tale signs of strong interaction. To take advantage of the power in numbers, we analyse all of the light curves as an ensemble. We then compare data from the earliest light-curve epochs to the models of \cite{kasen10} to place constraints on the mass and distance to the companion. We start by applying our two-stretch fitting routine again, but limiting the fit to data $ -10 < \tau < +35~\rm{d}$ in order to avoid forcing a shape on the earliest data and possibly suppressing the signature of interaction. We construct models of ``shocked" template light curves including contributions from companion interaction using the analytic solutions for the properties of the emission found by \citet{kasen10}. Using equations for $L_{\rm{c,iso}}$ and $T_{\rm{eff}}$ from \cite{kasen10} (Eqs. 22 and 25, respectively) and assuming the emission is that of a blackbody, we calculate the expected flux density at the peak of the $B$ band ($\lambda_{\rm{eff}} = 4450~\rm{\AA}$) at a distance of 10~pc. We normalise the flux density from the shock to a peak SN~Ia magnitude of $M_{B} = -19.3~\rm{mag}$. Based on the opening angle of the shock interaction, \cite{kasen10} predicts that the interaction signature should be visible in $\sim 10$\% of SN~Ia explosions. Ideally, one should account for the viewing angle dependence of the detected shock emission. We make the simplification that we are either looking along the axis or we are not.
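The blackbody normalisation step can be sketched as follows; the luminosity and temperature here are hypothetical stand-ins, not values from the Kasen (2010) expressions.

```python
import math

H = 6.626e-27     # Planck constant (erg s)
C = 2.998e10      # speed of light (cm/s)
K_B = 1.381e-16   # Boltzmann constant (erg/K)
SIGMA = 5.670e-5  # Stefan-Boltzmann constant (erg cm^-2 s^-1 K^-4)
PC = 3.086e18     # parsec (cm)

def flux_density(lum, temp, lam, dist):
    """f_lambda (erg s^-1 cm^-2 cm^-1) of a blackbody of luminosity lum."""
    # Radius implied by lum = 4 pi R^2 sigma T^4.
    radius = math.sqrt(lum / (4.0 * math.pi * SIGMA * temp ** 4))
    # Planck function B_lambda(T).
    b_lam = (2.0 * H * C ** 2 / lam ** 5 /
             (math.exp(H * C / (lam * K_B * temp)) - 1.0))
    return math.pi * b_lam * (radius / dist) ** 2

# Hypothetical shock emission: L ~ 1e41 erg/s at T_eff ~ 25,000 K,
# evaluated at the B-band peak (4450 A) at 10 pc.
f_lam = flux_density(1e41, 2.5e4, 4450e-8, 10.0 * PC)
print(f"f_lambda = {f_lam:.3e} erg s^-1 cm^-2 cm^-1")
```

In the actual analysis this flux density is then normalised against a peak SN~Ia magnitude of $M_{B} = -19.3$ mag, as described above.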
Our light-curve models including the effects of interaction are the un-shocked template plus a fraction, $f$, of the collision flux calculated using the analytic model of \cite{kasen10}. A model with $f=0$ represents a situation in which shock emission is never detectable (and reduces to the unshocked template) and $f=1$ is a scenario in which shock emission is detectable in every SN explosion. For most of the analysis in this paper, we adopted an expanding fireball model for the early rise of the light curve ($\tau \leq -10$~d) which assumed a power-law index of $n=2$. Under this assumption, we found that our early-time data were best fit by a rise time of 17.92 d. However, as discussed in \S \ref{s:power}, the assumption that the initial rise of the light curve has $n = 2$ may be somewhat questionable given rather significant changes in SN colour at early phases. As a test of the degeneracies between the added shock emission and the assumptions that go into constructing an unshocked template, we consider a number of different templates with different rise times and power-law indices. In addition to our nominal template with $t_{r} = 17.92$~d and $n=2$, we also try templates with $t_{r} = 17.11$~d and $n=2$, $t_{r} = 17.92$~d and $n=2.2$, and $t_{r} = 17.11$~d and $n=1.8$. The parameters free to vary in the model for shock interaction are $a_{13}$, the distance to the companion normalised to $10^{13}~\rm{cm}$, $M_c$, the companion mass, and $f$, the fraction of SN explosions that produce detectable shock emission. For this simple analysis the probed mass is fixed at $M_{c} = 1~{\rm M}_{\odot}$, to explore RGs as a possible companion, and we set $f$ equal to either 0 (i.e., no emission is detected) or 0.1, the expected fraction of SNe with detectable shock emission based on the opening angle of the shock. 
For each of our four different unshocked templates, we calculate the minimum $\chi^{2}$ statistic for models by varying $a_{13}$ to find the best-fit shocked template. The fit is restricted to data within the range of $-16 \leq \tau \leq -10$~d. We note that the minimum reduced $\chi^{2}$ in all of our fits for $a_{13}$ exceeds 1 and is usually closer to 3--4. This is not completely unexpected since data at $\tau \leq -10$~d were not included in the two-stretch fitting procedure used to normalise the light curves, and this may induce correlated errors into the data at $\tau \leq -10$~d. We also suspect that the reported errors for the earliest photometry epochs may be underestimated. Furthermore, the scatter in the data points may even be the physical result of a distribution of companion masses, separation distances, and viewing angles contributing to the emitted flux from shock interaction. Unfortunately, we are not in a position to disentangle what is contributing to the scatter. Our final results for each template are shown in Figure \ref{f:find_prog_emis}. Plotted in red are the unshocked templates and in blue are the shocked templates including some contribution of companion interaction. In each case of an unshocked template, we can find an acceptable fit with similar $\chi^2$ by varying the separation distance. There is a significant degree of degeneracy between the parameters, making the task of disentangling the true signature of companion interaction at this level extremely difficult. For instance, for our template with $n=2$ and $t_{r} = 17.92$~d, we do not see signs of any interaction. However, had we used a template with $n=2$ and $t_{r} = 17.11$~d, we would find that if the fraction of SNe that showed signs of shock interaction is $f = 0.1$, then $\chi^{2}$ is minimised with $a_{13} = 0.9$. Similarly, when varying $n$ and $t_{r}$ in the unshocked template, suitable matches to the data can be found by adjusting the amount of shock interaction.
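This degeneracy can be illustrated with a deliberately schematic toy model (neither the template shape nor the shock term is the real Kasen model): data generated from a $t_{r} = 17.92$~d, $n = 2$ template are fit almost as well by a shorter-rise template once a shock term scaled by $a_{13}$ is added.

```python
def template(tau, t_r, n):
    # Normalised power-law rise (zero before "explosion").
    return max(tau + t_r, 0.0) ** n / t_r ** n

def shock(tau, a13, t_r):
    # Schematic early-time shock term that grows with separation a13.
    return 0.02 * a13 * max(tau + t_r, 0.0) ** 0.5

taus = [-16.0 + 0.5 * k for k in range(13)]
# Ensemble "data" drawn exactly from the (17.92 d, n = 2) template.
data = [template(t, 17.92, 2.0) for t in taus]

def chi2(t_r, n, f, a13, sigma=0.005):
    return sum(((d - template(t, t_r, n) - f * shock(t, a13, t_r)) / sigma) ** 2
               for d, t in zip(data, taus))

no_shock = chi2(17.92, 2.0, 0.0, 0.0)    # generating model: perfect fit
mismatch = chi2(17.11, 2.0, 0.0, 0.0)    # shorter rise, no shock
# Let the shock term (f = 0.1) absorb most of the mismatch via a13.
compensated = min(chi2(17.11, 2.0, 0.1, 0.1 * i) for i in range(1, 60))
print(f"chi2: {no_shock:.1f} (truth), {mismatch:.1f} (short rise), "
      f"{compensated:.1f} (short rise + shock)")
```

The shorter-rise template alone fits poorly, but a suitable $a_{13}$ recovers most of the lost $\chi^{2}$, mirroring the degeneracy seen in the real fits.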
The degeneracy is partially broken by increasing the contribution of companion interaction to the ``shocked template" by either increasing $f$ or $a_{13}$. Fixing $f = 0.1$ based on the expected opening angle of the shock, $a_{13} = 2$ increases the reduced $\chi^2$ by more than 1 for all of our unshocked templates. In summary, we find that with our limited sample of early-time data, we cannot completely disentangle the behaviour of the power-law index, the adopted rise time, and the amount of companion interaction. This result makes use of all of the light-curve data at $\tau \leq -10$~d as an ensemble. Better constraints could be placed on individual, excellently sampled light curves in the rise region. In particular, efforts to obtain data at bluer wavelengths, where the shock interaction is stronger and more easily detectable, will greatly help to break degeneracies in the fitting process. Recently, \cite{justham11} and \cite{stefano11} have argued that the transfer of angular momentum from the companion donor star to the C/O white dwarf acts to increase the critical mass required to explode and the time required for the white dwarf to explode. Consequently, the donor star may possibly evolve past the red giant phase, vastly reducing the cross section for interaction with SN ejecta. The subsequent shock emission for a RG would decrease beyond current detection limits, thus saving RGs as a possible donor in the single-degenerate scenario. The increased time to explosion allows the mass ejected from the envelope of the donor star time to diffuse to the density of the ambient interstellar medium, offering an explanation for the tight constraints placed on the presence of $\rm{H}\alpha$ in nebular spectra of SNe~Ia \citep[][and references therein]{leonard07}. 
\begin{figure} \begin{center} \includegraphics[scale=.42]{sn1991bg_fail} \end{center} {\caption{ Comparison of the $B$-band light curve for the SN 1991bg-like SN 1999by (solid circles) and the LOSS template (black dashed line) stretched along the time axis by 0.74 to match the rise portion of the light curve. The light-curve evolution of SN 1999by for $t > 10$~d past maximum makes it impossible to find a reasonable fit using our two-stretch fitting routine. If we restrict our fit to the pre-maximum portion of the light curve, we find a rise time of $13.33 \pm 0.40$~d.}\label{f:sn1991bg_fail}} \end{figure} \subsection{SN 1991bg-Like Objects\label{s:91bg}} The analysis presented thus far has excluded SN 1991bg-like objects due to fits which had unacceptably large $\chi^{2}$ per degree of freedom. To investigate what was causing the poor fits, we focus on the $B$-band light curve of the SN 1991bg-like SN 1999by from the LOSS sample, which has data starting about $-10$~d before maximum light in the $B$ band. As seen in Figure \ref{f:sn1991bg_fail}, the largest difference in light-curve shape between SN 1999by and our template occurs at $t > +10$~d, where the light curve of SN 1999by transitions to a slower linear decline not seen until $t > +30$~d in normal SNe~Ia. Similar differences in light-curve shape for SN 1991bg are found by \cite{filippenko92:91bg} and \cite{leibundgut93:91bg}. The fit parameters of $s_{f}$ and $s_{r}$ are correlated; thus, a bad fit for $s_{f}$ will propagate into an incorrect determination of $s_{r}$. Given the different light-curve shapes for SN 1991bg-like objects and normal objects for $t > 10$ d, we are unable to acceptably fit $s_{f}$ and thus $s_{r}.$ For the purposes of exploring the rise time of SN 1991bg-like objects, we restrict our fit to the pre-maximum portion of the light curve using the date of maximum found with a low-order polynomial fit. 
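The date-of-maximum step just mentioned can be sketched with a quadratic least-squares fit to synthetic near-peak magnitudes (the photometry and peak date are illustrative):

```python
def quad_fit(ts, ys):
    """Least-squares quadratic y = c0 + c1 t + c2 t^2 via normal equations."""
    s = [sum(t ** k for t in ts) for k in range(5)]
    b = [sum(y * t ** k for t, y in zip(ts, ys)) for k in range(3)]
    a = [[s[0], s[1], s[2]],
         [s[1], s[2], s[3]],
         [s[2], s[3], s[4]]]
    # Gauss-Jordan elimination on the 3x3 system a c = b.
    for i in range(3):
        piv = a[i][i]
        a[i] = [v / piv for v in a[i]]
        b[i] /= piv
        for j in range(3):
            if j != i:
                r = a[j][i]
                a[j] = [vj - r * vi for vj, vi in zip(a[j], a[i])]
                b[j] -= r * b[i]
    return b  # [c0, c1, c2]

# Synthetic magnitudes with a minimum (i.e., maximum light) at t = 3.2 d.
ts = [-4.0, -2.0, 0.0, 1.0, 2.0, 4.0, 6.0]
mags = [-19.0 + 0.02 * (t - 3.2) ** 2 for t in ts]
c0, c1, c2 = quad_fit(ts, mags)
t_max = -c1 / (2.0 * c2)
print(f"date of maximum: t = {t_max:.2f} d")
```

The vertex of the fitted parabola, $t = -c_1/(2c_2)$, gives the date of maximum light.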
For SN 1999by, we find a best-fit rise time of $13.33 \pm 0.40$~d (overplotted as dashed lines in Fig. \ref{f:sn1991bg_fail}), indicating that it joins the other objects with large \hbox{$\Delta m_{15}(B)$}\ as the fastest risers in our sample. This matches the qualitative results found by \cite{modjaz01:98de} for the SN 1991bg-like SN 1998de. \subsection{Impacts of Fitting Cuts\label{s:cuts}} To ensure that the results described in the previous sections are not a result of the cuts made in \S \ref{s:sample}, we reanalyse our data, both tightening and relaxing the constraints on the uncertainty in $t_{r}$ and $t_{f}$, $\sigma_{t_{r,f}}$. Most of the reported results are not highly sensitive to fitting cuts. Restricting acceptable fits to reduced $\chi^{2} = 1.5$, a first epoch at $\tau = -10$~d, and $\sigma_{t_{r,f}} = 1$~d decreases the number of available objects to 26 normal, 9 HV, and 3 SN~1991T/SN~1999aa-like SNe~Ia. The probability that the $B$-band RMF populations are drawn from the same parent population increases to $\sim 4\%$. The difference in the $B$-band fall-time corrected rise time, $t_{r}{'}$, between HV and normal SNe remains significant at $-1.44 \pm 0.49$~d, indicating a faster rise for HV objects. Relaxing constraints to a reduced $\chi^{2} = 2$, first epoch at $\tau = -5$~d relative to maximum light, and $\sigma_{t_{r,f}} = 1.5~\rm{d}$ increases the difference in fall-stretch corrected $B$-band rise time between HV and normal objects to $-1.86 \pm 0.49$~d. The $V$-band rise times remain consistent between HV and normal objects, as do the $V$-band RMF distributions. Overall, changing what we define as an ``acceptable" fit does not impact our results. \section{Discussion} We have presented an analysis of the rise-time distribution of nearby SNe~Ia in the $B$ and $V$ bands.
Using a two-stretch fitting technique, we find that the SN rise time is correlated with the decline rate in the sense that SNe with broader light curves post-maximum (i.e., light curves with small \hbox{$\Delta m_{15}(B)$} /\hbox{$\Delta m_{15}(V)$}) have longer rise times. While SN 1991bg-like objects could not be fit well by our two-stretch fitting procedure, we found that restricting our analysis to the pre-maximum data for the SN 1991bg-like SN 1999by gives a rise time of $13.33 \pm 0.40$~d. This follows the expected trend of a fast rise leading to a fast decline. Using a sample of 105 SDSS SNe at intermediate redshifts ($0.037 \leq z \leq 0.230$), H10 find that there is a great diversity in $B$-band rise times for a fixed \hbox{$\Delta m_{15}(B)$}, and that the slowest declining SNe tend to have the fastest rise times. While we do see evidence for scatter in rise times with \hbox{$\Delta m_{15}(B)$}, we find a strong correlation between \hbox{$\Delta m_{15}(B)$}\ and rise time in the sense that slowly declining SNe~Ia have longer rise times. This correlation is strong in the $B$ band, but weaker in the $V$ band. The discrepancy with H10 may be a result of combining $B$- and $V$-band stretches which was avoided in this analysis. We find that a single value of the stretch does not adequately describe the rising and falling portions of our light curves. Similar to the results in H10, more luminous SNe with longer fall times have shorter rise times than one would expect from a single-stretch prescription for light curves. This is especially evident in our analysis of $V$-band light curves. However, we reiterate that while luminous SNe have shorter rise times than expected from a single-stretch prescription, they still have longer rise times than less luminous SNe, contrary to the findings of H10. R99 and C06 found a fiducial $B$-band rise time of $\sim 19.5$~d for a ``typical" SN~Ia. 
In a departure from previous results, H10 report an average rise time of $17.38 \pm 0.18$~d. We find a fall-stretch corrected (i.e., corrected to have a post-maximum fall of \hbox{$\Delta m_{15}(B)$}\ = 1.1 mag) $B$-band rise time of $18.03 \pm 0.24$~d for spectroscopically normal SNe and $16.63 \pm 0.30$~d for HV SNe. Our $B$-band rise time for spectroscopically normal SNe with \hbox{$\Delta m_{15}(B)$}\ = 1.1 mag is in agreement with the average rise time found by H10 at the $2.2 \sigma$ level. When correcting our $V$-band light curves to a post-maximum fall of \hbox{$\Delta m_{15}(V)$}\ = 0.66 mag, we find a fall-stretch corrected $V$-band rise time of $20.23 \pm 0.44$~d. After correcting for post-maximum decline rate, HV SNe~Ia have faster rise times than normal SNe~Ia in the $B$ band, but similar rise times in $V$. We find a $\sim 3\sigma$ difference in the fall-stretch corrected rise time between HV and normal SNe in the $B$ band. The rise minus fall (RMF) distributions (not corrected for stretch) of the two subclassifications show significant differences in the two populations. The peak values of the distributions are offset by $1.22 \pm 0.34$~d in the $B$ band, with HV objects having a faster rise time. A K-S test indicates a $\sim 0.01$\% probability that HV SNe come from the same parent RMF population as normal SNe in the $B$ band. Despite differences in the $B$ band, we see no evidence of a difference in the RMF populations in the $V$ band. Based on the model presented by \cite{foley11} and the models of KP07, we offer a possible qualitative explanation for why HV SNe should have a different $B$-band rise-time distribution than normal SNe, but similar $V$-band distributions. The physical origin of the difference is possibly rooted in the different opacity mechanisms at work in the $B$ and $V$ bands. Line blanketing from Fe-group elements is the dominant source of opacity for wavelengths shorter than $\sim 4300$ \AA, the peak of the $B$ band. 
At longer wavelengths, such as the $V$ band, electron scattering is the dominant source of opacity. Rapidly moving ejecta, as is the case with HV objects, will broaden absorption features, diminishing the $B$-band flux without affecting the $V$-band flux. Models from KP07 show that all other things being equal (e.g., Ni mass, kinetic energy), this leads to faster light curves with larger \hbox{$\Delta m_{15}(B)$}\ for HV objects. However, the enhanced opacity at short wavelengths also affects \hbox{$\Delta m_{15}(B)$}, complicating the application of the models to our result. Further modeling and observations of HV SNe~Ia are required to shed light on the photometric differences of this spectroscopic subclass. We fit the earliest data in our sample ($\tau \leq -10~\rm{d}$) to find that the flux rises as a power law with index $2.20^{+0.27}_{-0.19}$. This is consistent with the unimpeded, free expansion of the expanding fireball toy model that predicts an index of 2. However, a preliminary analysis of SN~2009ig in the range $-15 < \tau < -9$~d shows evidence of significant colour evolution, contrary to the assumption of little to no colour evolution in the expanding fireball model. H10 find similar colour evolution in an analysis of the $B-V$ colour curve of SDSS SNe~Ia and derive an expected power-law index of 4. We compare our early-time $B$-band data as an ensemble to models \citep{kasen10} of shock interaction produced from SN ejecta colliding with the mass-donating companion in the single-degenerate progenitor scenario. When relaxing our assumptions on the functional form of the early-time template light-curve behaviour (i.e., changing the rise time or power-law index of the rise), we find that our data require some amount of shock interaction to remove systematic trends. This indicates a level of degeneracy between the adopted template rise time, the power-law index, and the amount of shock interaction required to match the data.
Future surveys with high-cadence search strategies will provide well-sampled SN light curves starting days after explosion, substantially adding to the sample of rise-time measurements. Complemented with spectroscopic follow-up observations, analysis of the RMF distribution can be further broken into different spectroscopic classes to provide insights into the underlying populations and the physics that differentiates them. \section*{Acknowledgments} We thank the Lick Observatory staff for their assistance with the operation of KAIT. We are grateful to the many students, postdocs, and other collaborators who have contributed to KAIT and LOSS over the past two decades, and to discussions concerning the results and SNe in general --- especially S. Bradley Cenko, Ryan Chornock, Ryan J. Foley, Saurabh W. Jha, Jesse Leaman, Maryam Modjaz, Dovi Poznanski, Frank J. D. Serduke, Jeffrey M. Silverman, Nathan Smith, Thea Steele, and Xiaofeng Wang. In addition, Silverman provided subclass identifications for SNe using unpublished data, while Daniel Kasen and Foley made insightful comments regarding high-velocity SNe. We thank our referee, Alex Conley, for comments which significantly elevated the level of this manuscript. The research of A.V.F.'s supernova group at UC Berkeley has been generously supported by the US National Science Foundation (NSF; most recently through grants AST--0607485 and AST--0908886), the TABASGO Foundation, US Department of Energy SciDAC grant DE-FC02-06ER41453, and US Department of Energy grant DE-FG02-08ER41563. KAIT and its ongoing operation were made possible by donations from Sun Microsystems, Inc., the Hewlett-Packard Company, AutoScope Corporation, Lick Observatory, the NSF, the University of California, the Sylvia \& Jim Katzman Foundation, the Richard and Rhoda Goldman Fund, and the TABASGO Foundation. We give particular thanks to Russell M. Genet, who made KAIT possible with his initial special gift; Joseph S. 
Miller, who allowed KAIT to be placed at Lick Observatory and provided staff support; Jack Borde, who provided invaluable advice regarding the KAIT optics; Richard R. Treffers, KAIT's chief engineer; and the TABASGO Foundation, without which this work would not have been completed. We made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA.
\section{Introduction} As superconducting qubit technology grows beyond one dimensional chains of nearest neighbor coupled qubits \cite{barends2014superconducting}, arbitrarily sized two dimensional arrays are a likely next step towards both surface code error correction and more complex high fidelity quantum circuits \cite{PhysRevA.86.032324}. While prototypical two dimensional arrays have been demonstrated \cite{corcoles2015demonstration, riste2015detecting, PhysRevLett.117.210505}, the challenge of routing control wiring and readout circuitry has thus far prevented the development of high fidelity $3 \times 3$ or larger qubit arrays. For example, frequency tunable Xmon transmon qubits on the interior of a two dimensional array would require capacitive coupling to four nearest neighbor qubits and a readout resonator as well as individual addressability of an XY drive line and an inductively coupled flux line \cite{PhysRevLett.111.080502}. Routing these control wires with a single layer of base wiring and crossovers is not scalable beyond a few-deep array of qubits. Multilayer fabrication with embedded routing layers is a natural solution \cite{PhysRevB.81.134510}, but integrated dielectric layers on a qubit wafer introduce additional decoherence to the qubits \cite{doi:10.1063/1.2898887}. This individual addressability problem can be solved by separating the device into two chips, a dense wiring chip that allows for lossy dielectrics and a pristine qubit chip with only high coherence materials. Combining these two chips to form a hybrid device provides the advantages of both technologies. A hybrid device is composed of a ``base substrate'' bonded to a ``top chip.'' Hybridization allows for improved impedance matching between chips as compared to wirebonds and the close integration of incompatible fabrication processes.
A qubit hybrid would also benefit from the availability of straightforward capacitive, inductive, or galvanic coupling of electrical signals between the base substrate and top chip through the use of parallel plate capacitors and coupled inductors. Hybrid devices have become ubiquitous in the semiconductor industry, finding applications in everything from cell phones to the Large Hadron Collider \cite{BROENNIMANN2006303}. Cryogenic applications are fewer; bolometer arrays for submillimeter astronomy \cite{1278145, HILTON2006513} and single flux quantum devices \cite{0953-2048-25-10-105012, 4982605} have utilized this technology. Low resistance cryogenic bump bonds \cite{2017arXiv170604116R, 2017arXiv170502435M} and superconducting bump bonds that proximitize normal metals have also been fabricated \cite{2017arXiv170802219O}. Here we present a novel bump bond metal stack up consisting of all superconducting materials with the intent of achieving maximal flexibility in designing flux tunable qubit circuits where mA control currents are necessary. In order to maintain compatibility with our existing qubit architecture, bump bond interconnects for a superconducting qubit hybrid must meet these requirements: \begin{enumerate} \item Bumps must be compatible with qubit fabrication (e.g., aluminum on silicon). \item If interconnects will be used in routing control signals (rather than just as ground plane connections and chip spacers), fabrication yield must be high. e.g., with a 99.9\% yield, a device with 700 interconnects on control lines would yield all lines ($0.999^{700} \approx$) 50\% of the time. \item Interconnects must continue to perform electrically and mechanically after cooling from 300\,K to 10\,mK. \item Bonding must be accomplished at atmospheric pressure without elevated process temperatures to avoid altering Josephson junction critical currents through annealing \cite{doi:10.1116/1.3673790}.
\item Interconnects must superconduct to provide a lossless connection between chips and avoid local heating. \item The critical current of the interconnects must exceed 5 mA to enable applications in current-biased flux lines. \end{enumerate} To satisfy condition (i) above and to extend our wire-routing capabilities through known multi-layer techniques, bumps must provide a connection between aluminum wiring on both the base substrate and top chip. This design consideration will allow us to connect our qubit fabrication to a dense, multi-layer, wire routing device based on standardized complementary metal-oxide-semiconductor (CMOS) fabrication techniques. Known bump bonding materials that also superconduct include indium and various soldering alloys. Indium is a natural choice because high purity sources are readily available, it can be deposited in many $\mu$m thick layers by thermal evaporation, it has a relatively high critical temperature of 3.4\,K, and room temperature indium bump bonding is an industrially proven technology \cite{datta2004microelectronic}. However, since aluminum and indium form an intermetallic \cite{wade1973chemistry}, an under bump metalization (UBM) layer is necessary to act as a diffusion barrier. Fortunately, titanium nitride fulfills our UBM requirements as it is a well known diffusion barrier (used in CMOS fabrication) with a T$_{c}$ as high as 5.64\,K and has also been shown to be a viable high-coherence qubit material \cite{doi:10.1063/1.4813269, 0953-2048-27-1-015009}. \section{Device fabrication and layout} Figure \ref{fig:fab_process} shows a minimal, qubit compatible, asymmetric bump bond process used here for DC characterization.
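The yield arithmetic quoted in requirement (ii) above can be checked directly:

```python
# With an assumed 99.9% per-interconnect yield, the probability that a
# device with 700 control-line interconnects yields every line:
n_bumps = 700
per_bump_yield = 0.999
device_yield = per_bump_yield ** n_bumps
print(f"device yield = {device_yield:.2f}")
```

This is the basis of the $\sim$50\% figure given in the requirements list.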
The base substrate has a full aluminum/titanium nitride/indium metal stack and, for simplicity, the top chip has just a single layer of indium wiring (which allowed us to avoid the complication of processing with two die sizes in every fabrication run while still testing all the necessary metal interfaces). In this case, as current flows between the base substrate and top chip, it passes through one aluminum/titanium nitride interface, one titanium nitride/indium interface, and one indium/indium interface. Actual qubit hybrids would be symmetric, with aluminum wiring and titanium nitride UBM on both chips, which adds one aluminum/titanium nitride interface and one titanium nitride/indium interface to the metal stack for each interconnect. For the base substrate, we first blanket deposit 100\,nm of aluminum through e-beam evaporation--the same base wiring material used in qubit fabrication \cite{doi:10.1063/1.4993577}. The base wiring, shown in figure (\ref{fig:fab_process}a), is defined with optical lithography and a BCl$_3$\,+\,Cl$_2$ plasma dry etch (although lift-off defined aluminum base wiring has been used with similar results). Then, titanium nitride pads are defined in lift-off resist and the device is placed into a sputter chamber where an \textit{in situ} ion mill (see \ref{appendix:ion_mill} for ion milling parameters) removes the native oxide from the aluminum (\ref{fig:fab_process}b) before titanium nitride is reactively sputtered in argon and nitrogen partial pressures (\ref{fig:fab_process}c). After titanium nitride lift-off, the indium pillars are defined in lift-off resist; then (\ref{fig:fab_process}d), in a third vacuum chamber, another \textit{in situ} ion mill (\ref{appendix:ion_mill}) is used to remove oxide and contaminants from the titanium nitride surface before depositing indium in a thermal evaporator with the substrate cooled to 0\,$^\circ$C (\ref{fig:fab_process}e).
Also shown in (\ref{fig:fab_process}e) is the single layer of indium lift-off used to define indium wiring on the top chip--this may be done in the same indium deposition as the base substrate's indium layer or in a separate one. For the devices we characterized here, we deposited 5\,$\mu$m of indium on the substrate and 2\,$\mu$m of indium on the top chip. \begin{figure}[h] \begin{center} \includegraphics[width=0.8\textwidth]{figure1.eps} \caption{Hybrid fabrication process;\,(a-d) describe steps specific to the base substrate and (e-g) are common to both the base substrate and top chip. a) On a silicon substrate, a base electrode is defined in 100\,nm of e-beam evaporated aluminum by a BCl$_3$\,+\,Cl$_2$ plasma dry etch. b) The native aluminum oxide is removed by an ion mill at locations defined by lift-off resist. c) In the same vacuum chamber as b), 50-80\,nm of titanium nitride is sputter deposited from a pure titanium source in argon and nitrogen partial pressures. d) After lift-off of the titanium nitride and patterning new resist, oxide and contaminants are removed from the titanium nitride by an ion mill at locations defined by lift-off resist. e) In the same vacuum chamber as d), 2-10\,$\mu$m of indium is deposited by thermal evaporation on both the base substrate and top chip. f) After lift-off of the indium, an atmospheric plasma is used to clean and passivate the surface of both devices a few minutes before bonding. g) The base substrate and top chip are aligned and compressed together at room temperature to complete the hybrid.\label{fig:fab_process}} \end{center} \end{figure} After both the base substrate and top chip have been fabricated, an atmospheric plasma surface treatment (with a mix of hydrogen, helium, and nitrogen) is used to remove surface oxide and passivate the surface of the indium a few minutes before the two chips are bonded together (\ref{fig:fab_process}f).
This surface treatment is critical to making good indium-to-indium contact during bonding without reflowing the indium \cite{6248801}. We then flip over the top chip, align the two devices, and compress the dies together using a SET FC-150 flip-chip bonder (\ref{fig:fab_process}g). Bonding is performed at room temperature with a typical bonding force of 10-20\,N per mm$^2$ of bump area for 15\,$\mu$m diameter bumps (2-5\,grams/bump), which results in a compression of roughly 40-60\% of the total height of the two indium depositions. Inspection with an edge gap tool indicates that the base substrate and top chip are typically parallel to within $\pm$\,0.5\,mrad, and inspection with an infrared microscope indicates that the xy alignment is typically within $\pm$\,2\,$\mu$m. Choosing an appropriate bump geometry is subject to several constraints. First, it is desirable to have a chip-to-chip separation of at least several microns so that the impedance of a 2\,$\mu$m wide, 50\,$\Omega$ coplanar waveguide transmission line is not dramatically changed by the presence of an overhead ground plane. Providing sufficient separation allows designs to be insensitive to the final chip-to-chip separation and allows for a smooth impedance transition as transmission lines travel under the edge of the top chip. In order to achieve a desired separation of 2-10\,$\mu$m post-compression, 2-10\,$\mu$m of indium must be deposited on both the base substrate and top chip. When depositing such thick layers of material, especially a high mobility material like indium, sidewall deposition can result in a considerable constriction of the bump feature size. We chose 15\,$\mu$m diameter bumps as they have a width-to-height aspect ratio of 3:2 at the thickest intended bump height; for more information on thick indium deposition see \ref{appendix:fab}. Second, the titanium nitride UBM footprint must be large enough so that, after compression, indium does not contact aluminum directly.
Given the post-compression alignment accuracy of our flip chip bonder ($\pm$\,2\,$\mu$m) and an expected 50\% compression, we find that 30\,$\mu$m square titanium nitride pads are sufficient for 15\,$\mu$m diameter indium pillars. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\textwidth]{figure2.eps} \caption{Design of the bump bond DC characterization hybrid. a) Photograph of a hybrid device with a 6\,mm\,x\,6\,mm base substrate and a 4\,mm\,x\,4\,mm top chip. b) Infrared micrograph looking through the top chip of the hybrid device. The woven pattern of the test circuit can be seen, and bumps are located on either side of the crossings to connect the base wire from the base substrate to the top chip and back. c) Zoomed-in infrared micrograph of a single indium bar on the top chip with interconnects at either end. d) Cross-sectional diagram of the device along the dotted line in c).\label{fig:device_cross_section}} \end{center} \end{figure} The devices characterized here consist of a 6\,mm\,x\,6\,mm base substrate and a 4\,mm\,x\,4\,mm top chip, shown in Figure \ref{fig:device_cross_section}. In order to electrically characterize a large number of interconnects, we place 1620 circular indium bumps, 15\,$\mu$m in diameter, on the base substrate and 30\,$\mu$m\,x\,150\,$\mu$m indium bars on the top chip to connect pairs of bumps into a series chain of 1620 chip-to-chip interconnects. At each end of the chain, and every 90 interconnects along the chain, we wire bond to pads on the perimeter of the chip. This wiring configuration allows us to make four-wire resistance measurements by applying an excitation current to any 90-interconnect subsection (or group of subsections) while measuring the voltage across that subsection or group with other leads. Each section of 90 interconnects consists of three rows or columns that extend across the entire top chip, spread over an area of roughly 2\,mm$^{2}$.
By weaving these rows and columns together, as shown in figure \ref{fig:device_cross_section}b, we are able to ascertain whether or not electrical failures are spatially correlated. For instance, if one subsection arranged in the rows fails to superconduct or has a suppressed critical current, but none of the columns show the same behavior, it is likely that there are no spatially correlated failures. However, if one section of rows and one section of columns fail, then their intersection indicates a region of interest for failure analysis such as electron energy loss spectroscopy (EELS), focused ion beam (FIB) cross sections, post-shear inspection, or inspection with an optical or infrared microscope. \section{Electrical characterization} We perform low temperature four-wire electrical measurements in an adiabatic demagnetization refrigerator (ADR) down to 50\,mK using a lock-in amplifier, ammeter, source measure unit (SMU), and a matrix switch to rapidly characterize a large number of devices. Twisted pair wiring and shielding are used to reduce parasitic coupling between the current excitation leads and voltage sense leads. Common mode voltage correction is implemented with the matrix switch, which also allows us to quickly switch between measurements. For a detailed look at the measurement system as well as the resistance and critical current measurements discussed below, see \ref{appendix:bounding_resistance}. This setup allows us to make a resistance measurement of the device in its superconducting state. Using common mode compensation and the lock-in amplifier with a several-mA sinusoidal test current, we are typically able to bound the resistance of a series chain of 1620 interconnects to be less than 5\,$\mu\Omega$ below 1.1\,K, which is an average resistance of 3\,n$\Omega$ per interconnect.
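The series-chain bookkeeping above is simple enough to sketch directly. In the following minimal example the constants are the measured values quoted in the text, and the function name is illustrative rather than part of any measurement software:

```python
# Per-interconnect resistance bound from a series-chain measurement.
# Constants are the measured values quoted in the text.

N_INTERCONNECTS = 1620   # interconnects in the full series chain
R_CHAIN_BOUND = 5e-6     # ohms: lock-in bound on the whole chain below 1.1 K

def per_interconnect_bound(r_chain: float, n: int) -> float:
    """Average resistance bound per interconnect for n interconnects in series."""
    return r_chain / n

r_bump = per_interconnect_bound(R_CHAIN_BOUND, N_INTERCONNECTS)
print(f"average bound: {r_bump * 1e9:.1f} nOhm per interconnect")  # ~3.1 nOhm
```

Because the interconnects are in series and indistinguishable in this measurement, only the average (not the distribution) of the per-bump resistance is constrained.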
Figure \ref{fig:results}a shows a typical resistance versus temperature curve for a full 1620 interconnect chain and a 2 interconnect test structure on the same device. At 1.1\,K we observe a clear transition to a superconducting state when the resistance of 1620 interconnects in series falls more than 7 orders of magnitude to a few $\mu\Omega$. The resistance measured below 1.1\,K is roughly the same for both 1620 interconnects and the 2 interconnect test structure, which indicates that this measurement is likely limited by system parasitics or measurement electronics rather than by an actual resistance or the inductance of the device. In figure \ref{fig:results}b we use an SMU to assess the critical current of each of the eighteen 90-interconnect subsections on three hybrid devices. The average critical current across subsections is 26.8\,mA, with a number of subsections above 30\,mA and a single subsection with a suppressed critical current of 10.3\,mA. These data represent 4860 interconnects, 100\% of which superconduct with a critical current above 10\,mA. Furthermore, at least 98\% of the interconnects have a critical current above 24.5\,mA. Since there was only one section of rows (and no columns) with a suppressed critical current, it is likely that a single interconnect could be responsible for the lower critical current. The high yield of this process and lack of spatially correlated failures indicate that parallel interconnects can be used to further increase the critical current and/or to serve as precautionary redundant connections (though we yielded 100\% on these 3 test devices and have had similar yields across several generations of test devices). The average room temperature resistance of these 90-interconnect subsections is 47.7\,$\Omega$ with a standard deviation of 2\,$\Omega$, indicating reasonable bump uniformity.
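The yield requirement from the introduction can be connected to these results with a short sketch. The per-bump yields below are illustrative assumptions (only the 99.9\% figure appears in the text), and independent failures are assumed:

```python
# Chain yield versus per-interconnect yield, assuming independent failures.

def chain_yield(per_bump_yield: float, n_bumps: int) -> float:
    """Probability that all n_bumps interconnects in a chain work."""
    return per_bump_yield ** n_bumps

# Requirement (ii): a 99.9% per-bump yield over 700 control-line
# interconnects yields all lines only about half the time.
print(f"{chain_yield(0.999, 700):.3f}")   # ~0.496

# An assumed 99.99% per-bump yield would raise the chain yield to ~93%.
print(f"{chain_yield(0.9999, 700):.3f}")
```

This is why even a seemingly high per-bump yield is insufficient for control-line routing, and why the observed 4860-for-4860 result matters.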
Typically we find that a room temperature resistance $<$1\,$\Omega$/interconnect (including the aluminum and indium base wiring used to chain them together) indicates that the flip chip bonding was successful. We find that insufficient compression or a bad material interface results in a resistance higher than 1\,$\Omega$ per interconnect. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\textwidth]{figure3.eps} \caption{Electrical device characterization. a) Typical four-wire resistance measurement versus temperature for a chain of 1620 interconnects and a 2 interconnect test structure on the same device from room temperature to 50\,mK. A superconducting transition can be seen at 1.1\,K, where the resistance of both the 1620 and 2 interconnect structures falls to a few $\mu\Omega$. For the 1620 long chain, this measurement demonstrates a superconducting resistance more than 7 orders of magnitude lower than its normal state resistance at 3\,K. b) Histogram of critical currents for each of the eighteen 90-interconnect subsections on three different chips. The average critical current is 26.8\,mA with $>$98\% of the subsections above 24.5\,mA.\label{fig:results}} \end{center} \end{figure} \section{Mechanical characterization} Several mechanical tests were performed on a different generation of hybrids consisting of a 10\,mm\,x\,10\,mm substrate and a 6\,mm\,x\,6\,mm square chip. These devices had about four thousand 20\,$\mu$m diameter circular bump bonds spread fairly evenly over the 36\,mm$^2$ area of the top chip. In order to characterize the mechanical strength of these interconnects, we performed destructive die shear strength tests (in accordance with MIL-STD-883) in which a force is applied to the edge of the top chip, parallel to the face of the chip (i.e., in the plane of the page as the chip is shown in figure \ref{fig:device_cross_section}a), until the top chip separates from the substrate.
Four devices were tested; three separated at 35\,N and one exceeded the limits of the tool at 49.9\,N, all of which are more than sufficient to ensure that devices are robust enough for handling. Finally, thermal cycling was performed on a device that had been previously confirmed to be fully superconducting below 1.1\,K. One hundred thermal cycles from -80\,$^{\circ}$C to 45\,$^{\circ}$C were performed with a 23-minute dwell at both -80\,$^{\circ}$C and 45\,$^{\circ}$C, and a 20\,$^{\circ}$C/min ramp rate for transitions. After 100 thermal cycles (and unknown conditions during round-trip ground shipping to our off-site lab) the sample was cooled back down to 50\,mK. All interconnects on the device remained superconducting, although the critical current was reduced to 1-5\,mA in most subsections, down from 20-25\,mA in the initial characterization of this device. The reason for the reduced critical current is not known, but it is worth noting that, in a more typical use case, the devices measured in figure \ref{fig:results} were cycled from room temperature to 50\,mK and back as many as three times in our ADR (approximately 0.2\,$^{\circ}$C/min average warming/cooling rate) with no measurable impact on the critical current. \section{Conclusion} The flip chip hybrid devices we have developed offer a viable solution to control signal routing in two-dimensional high-coherence circuits. These interconnects, consisting of a titanium nitride diffusion barrier and indium bumps, serve as electrical interconnects between two planar devices with aluminum wiring. This fabrication process opens the door to the possibility of the close integration of two superconducting circuits with each other or, as would be desirable in the case of superconducting qubits, the close integration of one high-coherence qubit device with a dense, multi-layer, signal-routing device.
Furthermore, these interconnects have a typical critical current above 25\,mA, which is an order of magnitude larger than the largest typical DC control currents used to flux-tune superconducting qubits. Limited by the aluminum, these bumps are fully superconducting below 1.1\,K, and below this critical temperature, we are able to bound the resistance of each bump to be $<$\,3\,n$\Omega$. These high yield, mechanically robust, and high critical current electrical interconnects are ready to be implemented into more complex circuits, including two-dimensional arrays of nearest-neighbor coupled flux-tunable superconducting qubits. \subsection*{Acknowledgments} This work was supported by Google. C. Q. and Z.C. acknowledge support from the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144085. Devices were made at the UC Santa Barbara Nanofabrication Facility, a part of the NSF funded National Nanotechnology Infrastructure Network. Materials characterization was performed by the Google Failure Analysis lab. The authors would also like to thank Eric Schulte for sharing his wealth of experience. \section*{References} \bibliographystyle{unsrt}
\section{Introduction} \label{sec:Intro} \subsection{Overview} \label{sec:Overview} Eigenvalues and eigenvectors are structurally fundamental quantities associated with matrices and are widely studied throughout mathematics, statistics, and engineering disciplines. For example, given an observed graph, the eigenvalues and eigenvectors of associated matrix representations (such as the adjacency matrix or Laplacian matrix) encode structural information about the graph (e.g. community structure, connectivity \cite{Chung1997}). In the context of certain random graph models, the eigenvalues and eigenvectors associated with the underlying matrix-valued model parameter, namely the edge probability matrix, exhibit similar information. It is therefore natural to study how ``close'' the eigenvalues and eigenvectors of a graph are to the underlying model quantities. In this paper we consider simple, undirected random graphs on $n$ vertices generated via the \emph{inhomogeneous Erd\H{o}s--R\'{e}nyi model} (IERM) \cite{Hoff-et-al--2002,Bollobas2007}, $\mathbb{G}(n,P)$, where $P:=[P_{i,j}] \in [0,1]^{n \times n}$ denotes the (symmetric) edge probability matrix. This independent edge model generalizes numerous widely-studied random graph models including the classical Erd\H{o}s--R\'{e}nyi model \cite{Erdos-Renyi1959}, the stochastic block model \cite{Holland-et-al--1983}, and the random dot product graph model \cite{Young-Scheinerman--2007}. For $G \sim \mathbb{G}(n,P)$, the (symmetric) adjacency matrix, $A \equiv A_{G}\in\{0,1\}^{n \times n}$, has entries which are independently distributed according to $A_{i,j} \sim \text{Bernoulli}(P_{i,j})$ for all $i \le j$. This yields $P\equiv\mathbb{E}[A]$, where $\mathbb{E}[\cdot]$ denotes probabilistic expectation. We focus our attention on the eigenvalues of $A$ and $P$. Specifically, we consider the eigenvalues in pairs (e.g. 
the largest eigenvalues of $A$ and of $P$ form a pair, as do the second-largest eigenvalues of each matrix, etc.). We obtain bounds on the distance between eigenvalues in certain ``signal pairs'', thereby demonstrating a local sense in which random graphs concentrate. Note that in the random graph literature, the term \emph{concentration} is primarily used to describe global, uniform behavior via the spectral norm quantity $\|A-\mathbb{E}[A]\|_2$. The following description provides an overview of our results for the IERM setting. Given a collection of consecutive, ordered eigenvalues of $P$ which are sufficiently separated from the remainder of the spectrum, and conditional on the corresponding eigenvalues of $A$ not being near the remainder of the spectrum of either $A$ or $P$, Theorems \ref{thrm:IERMconditional} and \ref{thrm:spikeEvalsIERM} yield high-probability bounds on the distances between the eigenvalues in each pair. The individual, pair-specific (i.e. local) bounds we obtain stand in contrast to weaker bounds which hold uniformly for all eigenvalue pairs (for example, bounds implied by Weyl's inequality \cite{Horn-Johnson--2012}). Our results hold even in the presence of eigenvalue multiplicity. We demonstrate that when the matrix $P$ has low rank, our results compare favorably with the recent study of low rank matrices undergoing random perturbation in \cite{O-Vu-Wang--2014} (see our Example \ref{ex:2BSSBM}). We also demonstrate that our results can lead to meaningful estimation in high rank settings (see Example \ref{ex:SpikeModel}). After presenting our main theoretical results, we then apply the theory in this paper to both hypothesis testing and change-point detection for random graphs. Moreover, we generalize our results beyond the IERM setting to obtain high-probability bounds for perturbations of singular values of rectangular matrices in a quite general random matrix noise setting.
Broadly speaking, we adapt the original, deterministic setting in a paper by T. Kato \cite{Kato--1950} to a new setting involving randomness, and this approach is novel in the context of random graphs, random matrix theory, and statistical inference for random graphs. We further detail the key modifications and differences between our work and \cite{Kato--1950} in our subsequent remarks and proofs. The present paper also stands in contrast to a deterministic generalization of the Kato--Temple inequality in \cite{Harrell--1978}. \subsection{Inhomogeneous random graphs} In the inhomogeneous random graph literature, concentration bounds have been known for some time for each eigenvalue of $A$, denoted $\lambda_{i}(A)$, both around its median and around its expectation, $\mathbb{E}[\lambda_{i}(A)]$ \cite{Alon-K-Vu--2002}. Unfortunately, since the latter quantities are inaccessible in practice, such bounds are of limited practical use. Moreover, in general $\mathbb{E}[\lambda_{i}(A)] \neq \lambda_{i}(\mathbb{E}[A])$. By way of contrast, numerous results in the literature bound the spectral norm matrix difference $\|A-\mathbb{E}[A]\|_{2}$, thereby immediately and uniformly bounding each of the eigenvalue differences $|\lambda_{i}(A)-\lambda_{i}(\mathbb{E}[A])|$ via an application of Weyl's inequality. For example, \cite{Oliveira--2010} proved an asymptotically almost sure spectral norm bound of $\|A-\mathbb{E}[A]\|_2 = O(\sqrt{\Delta \log n})$ for $\Delta=\Omega(\log n)$, where $\Delta\equiv\Delta(n)$ denotes the maximum expected degree of a graph. In \cite{Lu-Peng--2013} the above bound is improved to $\|A-\mathbb{E}[A]\|_2 \le (2+o(1))\sqrt{\Delta}$ under the stronger assumption that $\Delta = \omega(\log^4 n)$, with further refinement being subsequently obtained in \cite{LeiRinaldo2015}.
We on the other hand show that under certain conditions, for particular eigenvalue pairs one can obtain tighter and non-uniform high probability bounds of the form $|\lambda_{i}(A)-\lambda_{i}(\mathbb{E}[A])|=O(\log^{\delta}n)$ for small $\delta>0$. Spectral theory for random graphs overlaps with the random matrix theory literature. There, asymptotic analysis includes proving, for example, convergence of the empirical spectral distribution to a limiting measure \cite{Ding-Tiefeng2010}. Related approaches to studying the spectrum of random graphs consider normalized versions of the adjacency matrix \cite{Le-Vershynin2015} and employ standard random matrix theory techniques such as the Stieltjes transform method \cite{Avrachenkov-Cottatellucci-Kadavankandy2015, Zhang-Nad.-Rao--2014}. In contrast, we do not study normalized versions of the adjacency or the edge probability matrix. Indeed, much of the existing literature focuses on properties of eigenvectors corresponding to random graphs \cite{Fortunato2010,LeiRinaldo2015,TangPriebe2016} given, among other reasons, the success of spectral clustering methods for graph inference \cite{Luxburg2007}. We do not consider eigenvectors since our aim is to demonstrate the usefulness of adapting and applying the eigenvalue-centric Kato--Temple framework. The stochastic block model (SBM) offers an example of an inhomogeneous random graph model which is wildly popular in the literature \cite{Lei2016,BickelSarcar2013,Zhao2012,KarrerNewman2011,LeiRinaldo2015} and in which our results apply to the top (signal) eigenvalues of $A$ and $P$. Previously, the authors in \cite{Athreya-et-al--2015} obtained a collective deviation bound on the top eigenvalues of $A$ and $P$ for certain stochastic block model graphs in order to prove the main limit theorem therein. 
Our Theorem \ref{thrm:IERMconditional} improves upon Lemma 2 in \cite{Athreya-et-al--2015} by removing a distinct eigenvalue assumption and by yielding stronger high-probability deviation bounds for pairs of top eigenvalues of $A$ and $P$ which are of the same order. This implies a statistical hypothesis testing regime for random graphs which is discussed further in Section \ref{sec:Application}. \subsection{Organization} \label{sec:Organization} The remainder of this paper is organized as follows. In Section \ref{sec:Setup} we introduce notation and the Kato--Temple eigenvalue perturbation framework. In Section \ref{sec:Results} we present our results for random graphs and more generally for matrix perturbation theory. There we also include illustrative examples together with comparative analysis involving recent results in the literature. In Section \ref{sec:Application} we discuss applications of our results to problems involving graph inference. Sections \ref{sec:Thanks} and \ref{sec:Appendix} contain our acknowledgments and the proofs of our results, respectively. \section{Setup and notation} \label{sec:Setup} Let $\langle \cdot, \cdot \rangle$ denote the standard Euclidean inner (dot) product between two vectors, $\|\cdot\|$ denote the vector norm induced by the dot product, and $\|\cdot\|_{2}$ denote the spectral norm of a matrix. The identity matrix is implicitly understood when we write the difference of a matrix with a scalar. In this paper, $\mathcal{O}(\cdot)$, $\Omega(\cdot)$, and $\Theta(\cdot)$ denote standard big-O, big-Omega, and big-Theta notation, respectively, while $o(\cdot)$ and $\omega(\cdot)$ denote standard little-o and little-omega notation, respectively. As prefaced in Section \ref{sec:Intro}, we consider simple, undirected random graphs on $n$ vertices generated by the inhomogeneous Erd\H{o}s--R\'{e}nyi model, $G \sim \mathbb{G}(n,P)$, via the corresponding (binary, symmetric) adjacency matrix $A \equiv A_G$. 
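As a concrete illustration of this setup, one can sample $A \sim \mathbb{G}(n,P)$ and compare each signal eigenvalue pair against the uniform Weyl bound $\|A-P\|_2$. The following is a minimal sketch assuming NumPy; the two-block stochastic block model $P$ below is chosen purely for demonstration and is not an example from this paper:

```python
import numpy as np

def sample_adjacency(P, rng):
    """Sample A with independent A_ij ~ Bernoulli(P_ij) for i <= j, symmetric."""
    coins = (rng.random(P.shape) < P).astype(float)
    upper = np.triu(coins)                 # keep i <= j entries (incl. diagonal)
    return upper + np.triu(coins, k=1).T   # mirror the strict upper triangle

rng = np.random.default_rng(0)

# A two-block stochastic block model edge probability matrix (illustrative).
n = 200
B = np.array([[0.6, 0.2], [0.2, 0.5]])
z = np.repeat([0, 1], n // 2)              # block memberships
P = B[np.ix_(z, z)]

A = sample_adjacency(P, rng)
eigs_A = np.sort(np.linalg.eigvalsh(A))
eigs_P = np.sort(np.linalg.eigvalsh(P))

# Per-pair deviations for the top d = 2 signal eigenvalues, versus the
# uniform bound |lambda_k(A) - lambda_k(P)| <= ||A - P||_2 from Weyl.
weyl = np.linalg.norm(A - P, 2)            # ord=2 gives the spectral norm
for k in (1, 2):
    print(k, abs(eigs_A[-k] - eigs_P[-k]), "<=", weyl)
```

In this low-rank regime one typically observes the signal-pair deviations to be far smaller than the Weyl bound, which is the gap the local bounds of this paper are designed to capture.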
Given an open interval in the positive half of the real line, $(\alpha, \beta) \subset \R_{>0}$, we denote the $d$ eigenvalues of $P$ that lie in this interval (locally) by \begin{equation} \label{eqn:local} \alpha < \lambda_{1}(P) \le \lambda_{2}(P) \le \dots \le \lambda_{d}(P) < \beta, \end{equation} and similarly for $A$, noting that for $A$ this amounts to a probabilistic statement. By symmetry one can just as well handle the case when the interval lies in the negative half of the real line. We are principally interested in eigenvalues that are large in magnitude, so we do not consider the case when the underlying interval contains the origin. To highlight the Kato--Temple framework for bounding eigenvalues, we now reproduce two lemmas from \cite{Kato--1950} along with the Kato--Temple inequality as stated in \cite{Harrell--1978} (see Theorem \ref{thrm:KT_single_case} below).\footnote{Of primary importance in this paper is the extension of Theorem \ref{thrm:KT_single_case} to multiple eigenvalues as presented in \cite{Kato--1950}. The original statement of the extension to multiple eigenvalues is more involved and therefore omitted for simplicity.} These results all hold in the following common setting. \begin{quote} Let $H$ be a self-adjoint operator on a Hilbert space. Assume a unit vector $w$ is in the domain of $H$ and define $\eta:=\langle Hw,w \rangle$ along with $\epsilon := \|(H-\eta)w\|$, noting that $\eta^2 + \epsilon^2 = \|Hw\|^2$. The quantity $\eta$ may be viewed as an ``approximate eigenvalue'' of $H$ corresponding to the ``approximate eigenvector'' $w$, while $\epsilon$ represents a scalar residual term. \end{quote} \begin{lemma}[\cite{Kato--1950}, Lemma 1] \label{lem: kato1} For every $\alpha$ such that $\alpha < \eta$ (where $\alpha = -\infty$ is permitted), the interval $(\alpha, \eta + \frac{\epsilon^2}{\eta - \alpha}]$ contains a point in the spectrum of $H$. 
\end{lemma} \begin{lemma}[\cite{Kato--1950}, Lemma 2] \label{lem:kato2} For every $\beta$ such that $\beta > \eta$ (where $\beta = \infty$ is permitted), the interval $[\eta - \frac{\epsilon^2}{\beta - \eta}, \beta)$ contains a point in the spectrum of $H$. \end{lemma} \begin{theorem}[Kato--Temple inequality; \cite{Harrell--1978}, Theorem 2] \label{thrm:KT_single_case} Suppose that $\epsilon^2 < (\beta - \eta)(\eta - \alpha)$ where $\alpha < \beta$. Then $\emph{spectrum}(H) \cap (\alpha, \beta) \neq \emptyset$. Moreover, if the only point of the spectrum of $H$ in the interval $(\alpha, \beta)$ is the eigenvalue $\lambda(H)$, then $$ - \frac{\epsilon^2}{\beta - \eta} \le \lambda(H) - \eta \le \frac{\epsilon^2}{\eta - \alpha}.$$ \end{theorem} \begin{remark}[Hermitian dilation] \label{rem:HermDial} Given an $m \times n$ real matrix $M$, it will be useful to consider the corresponding real symmetric $(m+n) \times (m+n)$ Hermitian dilation matrix $\tilde{M}$ given by \[ \tilde{M} := \left[ \begin{array}{cc} 0 & M \\ M^\top & 0 \end{array} \right]. \] It is well-known that the non-zero eigenvalues of $\tilde{M}$ correspond to the signed singular values of $M$ (see Theorem 7.3.3 in \cite{Horn-Johnson--2012}). This correspondence between the singular values of arbitrary matrices and the eigenvalues of Hermitian matrices allows our results to generalize beyond the IERM setting to the more general study of matrix perturbation theory for singular values in a straightforward manner. \end{remark} \section{Results} \label{sec:Results} \subsection{Results for random graphs} \label{sec:ResultsRandomGraphs} In the IERM setting, a graph's adjacency matrix can be written as $A=P+E$ where $E:=A-P$ is a random matrix and $P$ is the (deterministic) expectation of $A$. We begin with a preliminary observation concerning the tail behavior of $A-P$ which will subsequently be invoked for the purpose of obtaining standard union bounds. 
The proof follows from a straightforward application of Hoeffding's inequality. \begin{proposition}[General IERM concentration] \label{prop:A-P} Let $u, v \in\R^{n}$ denote (non-random) unit vectors. Then for any $t>0$, \begin{equation} \label{eqn:RDPG tail bound} \Prob[|\langle(A-P)u,v\rangle| > t] \le 2 \exp(-t^2). \end{equation} \end{proposition} It is indeed possible to invoke more refined concentration inequalities than Proposition \ref{prop:A-P} in the presence of additional structure (e.g. when all entries of $P$ have uniformly very small magnitude). Doing so is particularly useful when it is simultaneously possible to obtain a strong bound on $\|A-P\|_{2}$. This observation will be made clearer in the context of Theorem \ref{thrm:IERMconditional} below. Furthermore, consideration of Proposition \ref{prop:A-P} will facilitate the subsequent presentation of our generalized results which extend beyond the IERM setting. \begin{remark} \label{rem:diagElementsOfP} In this paper the main diagonal elements of $P$ are allowed to be strictly positive, in which case realizations of $A$ need not necessarily be hollow (i.e. observed graphs may have self-loops). To avoid graphs with self-loops, one may either condition on the event that $A$ is hollow or set the main diagonal of $P$ to be zero. In the former case, note that $P\equiv\mathbb{E}[A]$ no longer holds on the main diagonal. In the latter case, a modified version of Proposition \ref{prop:A-P} holds. \end{remark} \begin{comment} \begin{remark} \label{rem:A-Ptilde} For each $i \in [n]$ define the vector $\tilde{w}_i := [\frac{1}{\sqrt{2}}w_i, \frac{1}{\sqrt{2}}w_i]^\top \in \R^{2n}$. Observe that by construction $\langle \tilde{A}\tilde{w}_i, \tilde{w}_j \rangle = \langle A w_i, w_j \rangle$ for all $i,j \in [n]$. Moreover, the same relationship holds for matrices $\tilde{P}$ and $\tilde{A}-\tilde{P}$. 
Hence, Proposition \ref{prop:A-P} immediately extends to the collection $\tilde{A}, \tilde{P}$, and $\{\tilde{w}_i\}_{i=1}^{n}$. \end{remark} \end{comment} We now present our main results for the IERM setting. The proofs, which are located in Section \ref{sec:Appendix}, also formulate a bound for the special case when the upper bound threshold $\beta$ may be chosen to be infinity. This special case is particularly useful in applications. \begin{theorem}[IERM eigenvalue perturbation bounds, conditional version] \label{thrm:IERMconditional} Let the matrices $A \in \{0,1\}^{n \times n}$ and $P \in [0,1]^{n \times n}$ correspond to the IERM setting described in Section $\ref{sec:Setup}$. Suppose the interval $(\alpha,\beta) \subset \R_{>0}$ contains precisely $d$ eigenvalues of $P$, $\lambda_{1}(P)\le\lambda_{2}(P)\le \dots \le \lambda_{d}(P)$ (possibly with multiplicity). Condition on the event that $(\alpha, \beta)$ contains precisely $d$ eigenvalues of $A$, $\{\lambda_{i}(A)\}_{i=1}^{d}$, as well as the set $\{\langle A w_{i}, w_{i} \rangle\}_{i=1}^{d}$ where $\{w_i\}_{i=1}^{d}$ is an orthonormal collection of eigenvectors of $P$ corresponding to the eigenvalues $\{\lambda_{i}(P)\}_{i=1}^{d}$. Fix $k \in [d]$. Define $l:=(d-k+1)$. Then, for $t>0$, \begin{align} \label{eqn:LowerBound} \lambda_{k}(A) \ge \lambda_{k}(P) - t &- \zeta^{-}, \end{align} where $\zeta^{-} := \frac{l\|E\|_{2}^{2}+((\beta-\lambda_{k}(P))+(\lambda_{d}(P)-\lambda_{k}(P)) + 3t)l(l-1)t}{\beta-\lambda_{d}(P)-(l(l-1)+1)t}$ with probability at least $1-\left(l+\binom{l}{2}\right)2\exp(-t^{2})$. Also, for $t>0$, \begin{align} \label{eqn:UpperBound} \lambda_{k}(A) \le \lambda_{k}(P) + t &+ \zeta^{+}, \end{align} where $\zeta^{+}:=\frac{k\|E\|_{2}^{2}+(3\lambda_{k}(P)-\alpha +3t)k(k-1)t}{\lambda_{1}(P)-\alpha - (k(k-1)+1)t}$ with probability at least\\ $1-\left(k+\binom{k}{2}\right)2\exp(-t^{2})$. 
Moreover, the upper and lower bounds hold collectively with probability at least $1-\left(d+\binom{d}{2}\right)2\exp(-t^{2})$. \end{theorem} \begin{remark} Our proof depends upon several new observations with respect to Kato's original argument. In particular, for $w_i$ as defined above, the matrix $[\langle A w_i, w_j \rangle]_{i,j=1}^{d}$ need not be diagonal, so $\{w_i\}_{i=1}^{d}$ need not constitute an orthonormal collection of ``approximate eigenvectors'' of $A$ in the sense of \cite{Kato--1950}. Instead, here the notion of ``approximate'' may be interpreted via Proposition \ref{prop:A-P} as the source of randomness which allows for Kato--Temple methodology to be adapted beyond the original deterministic setting. Of additional note is that the vectors $w_i$ as defined in this paper agree in function and notation with Kato's original paper, the operational distinction being that our setting provides a canonical choice for these vectors. \end{remark} \begin{remark} \label{rem:spec_vs_Euclid bounds} We note that the appearance of $\|E\|_{2}^{2}$ in the formulations of $\zeta^{+}, \zeta^{-}$ can be replaced by taking the appropriate maximum over quantities of the form $\|Ew_{i}\|^{2}$ (see Equation (\ref{eqn:epsilonSquared})). That is to say, in the presence of additional local structure and knowledge, one can refine the above bounds in Theorem \ref{thrm:IERMconditional}. \end{remark} \begin{remark} \label{rem:interval extension} In settings wherein the eigenvalues of interest have disparate orders of magnitude, Kato--Temple methodology is not guaranteed to yield useful bounds. This can be seen in the bounds' dependence on the ratio of eigenvalues of $P$ in Theorem \ref{thrm:IERMconditional}. 
Moreover, within the Kato--Temple framework, poor separation from the remainder of the spectrum also deteriorates the bounds, as is evident in the denominators' dependence on the interval endpoints $\alpha$ and $\beta$ along with the smallest and largest local eigenvalues of $P$. On the other hand, by further localizing, i.e. by restricting to a subset of $d^{\prime} < d$ eigenvalues in a particular interval, applying Theorem \ref{thrm:IERMconditional} to said fewer eigenvalue pairs may yield improved bounds (see Example \ref{ex:2BSSBM} and Remark \ref{rem:spec_vs_Euclid bounds}). \end{remark} Next, we formulate an unconditional version of Theorem \ref{thrm:IERMconditional}. For both simplicity and the purpose of applications, Theorem \ref{thrm:spikeEvalsIERM} is stated in terms of the largest singular values in the IERM setting. \begin{theorem}[IERM singular value perturbation bounds, unconditional version] \label{thrm:spikeEvalsIERM} Let the matrices $A \in \{0,1\}^{n \times n}$ and $P \in [0,1]^{n \times n}$ correspond to the IERM setting described in Section $\ref{sec:Setup}$ with maximum expected degree (via $P$) given by $\Delta\equiv\Delta(n)$. Denote the $d+1$ largest singular values of $A$ by $0 \le \hat{\sigma}_{0} < \hat{\sigma}_{1} \le \dots \le \hat{\sigma}_{d}$, and denote the $d+1$ largest singular values of $P$ by $0 \le \sigma_{0} < \sigma_{1} \le \dots \le \sigma_{d}$. Suppose that $\Delta =\omega(\log^{4}n)$, $\sigma_{1} \ge C\Delta$, and $\sigma_{0} \le c\Delta$ for some absolute constants $C>c>0$. Let $\delta\in(0,1]$. Then for each $k\in[d]$, there exists some positive constant $c_{k,d}$ such that as $n \rightarrow\infty$, with probability $1-o(1)$, \begin{equation} |\hat{\sigma}_{k}-\sigma_{k}|\le c_{k,d}\left(\log^{\delta}n\right). 
\end{equation} \end{theorem} A similar version of Theorem \ref{thrm:spikeEvalsIERM} holds when $\Delta=\Omega(\log n)$ under slightly different assumptions on the entries of $P$ for which one still has $\|A-P\|_{2}=O(\sqrt{\Delta})$ with high probability \cite{LeiRinaldo2015}. On a related yet different note, see \cite{Le-Vershynin2015} for discussion of the sparsity regime $\Delta=O(1)$ in which graphs fail to concentrate in the classical sense. \begin{remark}[Random dot product graph model] \label{rem:RDPG} When the edge probability matrix $P$ can be written as $P=XX^{\top}$ for some matrix $X\in\R^{n \times d}$ with $d \ll n$, then the IERM corresponds to the popular \emph{random dot product graph (RDPG) model} \cite{Young-Scheinerman--2007}. In the random dot product graph model, the largest eigenvalues of $A$ and $P$ are of statistical interest in that they represent spectral ``signal'' in the model. These eigenvalues are separated from the remainder of their respective spectra and lie in an interval of the form $(\alpha, \infty)$ where, for example, $\alpha$ may be taken to be $O(\|A-P\|_2)$. Among its applications, the RDPG model has been used as a platform for modeling graphs with hierarchical and community structure \cite{lyzinski2017community}. In addition, a central limit theorem is known for the behavior of the top eigenvectors of adjacency matrices arising from the RDPG model \cite{Athreya-et-al--2015}. In particular, the main limit theorem in \cite{Athreya-et-al--2015} relies upon a lemma which collectively bounds the differences between top eigenvalues of $A$ and $P$ while requiring a stringent eigengap assumption. Namely, for $\delta_{\text{gap}}:=\min_{i}(\sigma_{i+1}(P)-\sigma_{i}(P))/\Delta > 0$, Lemma 2 in \cite{Athreya-et-al--2015} yields that with probability $1-o(1)$, \begin{equation} \label{eqn:CLT big 0_P bound} \sqrt{\sum_{i=1}^{d}|\lambda_{i}(A) - \lambda_{i}(P)|^2} = O(\delta_{\textnormal{gap}}^{-2} \log{n}).
\end{equation} In contrast, using Theorem \ref{thrm:spikeEvalsIERM} with $\sigma_{0}:=0$, we do not require the gap assumption $\delta_{\text{gap}} > 0$ and still obtain that with probability $1-o(1)$, \begin{equation} \label{eqn:CLT_Kato_improved} \sqrt{\sum_{i=1}^{d}|\lambda_{i}(A) - \lambda_{i}(P)|^2} = O(\log{n}). \end{equation} In practice, models involving repeated or arbitrarily close eigenvalues are prevalent and of interest (e.g. Section \ref{subsec:threeBlockSBM}). As such, the above improvement is nontrivial and of practical significance. \end{remark} \begin{remark}[Latent position random graphs] \label{rem:kernel extension} Theorem \ref{thrm:IERMconditional} further extends to the more general setting of latent position random graphs. There, the matrix $P$ is viewed as an operator $[\kappa(X_i, X_j)]_{i,j=1}^{n}$ where $X_{i}$ and $X_{j}$ are independent, identically distributed latent positions with distribution $F$ and the positive definite kernel, $\kappa$ (viewed as an integral operator), is not necessarily of finite fixed rank as $n$ increases \cite{Hoff-et-al--2002, Tang-et-al--2013}. Note that for the RDPG model, the kernel $\kappa$ is simply the standard Euclidean inner product between (latent position) vectors. \end{remark} \subsection{Results for matrix perturbation theory} \label{sec:ResultsExtension} The behavior of the random matrix $A-P$ (see Proposition \ref{prop:A-P}) represents a specific instance of more general, widely-encountered probabilistic concentration as discussed in \cite{O-Vu-Wang--2014} and formulated in the following definition. \begin{definition}[\cite{O-Vu-Wang--2014}] \label{def:Ccgamma} An $m \times n$ random real (``error'') matrix $E$ is said to be \emph{$(C,c,\gamma)$-concentrated} for a trio of positive constants $C,c,\gamma >0$ if for all unit vectors $u \in \R^{n}, v \in \R^{m}$ and for every $t>0$, then \begin{equation} \label{eqn:concentration} \mathbb{P}[|\langle Eu, v \rangle| > t] \le C \exp(-c t^\gamma). 
\end{equation} \end{definition} In particular, the IERM setting corresponds to $(C,c,\gamma)$-concentration where $m=n$, $C=\gamma=2$, and $c=1$. For the Hermitian dilation discussed in Remark \ref{rem:HermDial}, one has the following important correspondence between $E$ and $\tilde{E}$. \begin{lemma}[\cite{O-Vu-Wang--2014}] \label{lem:CcgammaBlowUp} Let $E\in\R^{m\times n}$ be $(C,c,\gamma)$-concentrated. Define $\tilde{C}:=2C$ and $\tilde{c}:=c/2^{\gamma}$. Then the matrix $\tilde{E}\in\R^{(m+n) \times (m+n)}$ is $(\tilde{C},\tilde{c},\gamma)$-concentrated. \end{lemma} Definition \ref{def:Ccgamma} and Lemma \ref{lem:CcgammaBlowUp} together with Remark \ref{rem:HermDial} allow for Theorem \ref{thrm:IERMconditional} to be generalized in a straightforward manner. We frame the generalization in the context of a signal-plus-noise matrix model with tail probability bounds. In particular, replace $A$ with $\hat{M}:=M+E$, thought of as an observed data matrix. Replace $P$ with $M$, thought of as an underlying signal matrix, so that the matrix $A-P$ becomes $E$, thought of as an additive error matrix. We emphasize that the following generalization is in terms of the singular values of $M$ and $\hat{M}$. This generalization resembles the formulation of a result obtained in \cite{O-Vu-Wang--2014} using different methods; however, unlike our Theorem \ref{thrm:generalSVbound}, the bound in \cite{O-Vu-Wang--2014} depends upon the rank of $M$ and assumes that the rank is known. Given a matrix $M\in\R^{m\times n}$, write its singular value decomposition as $M\equiv U\Sigma V^{\top}$ where $Mv_{i}=\sigma_{i}u_{i}$ holds for the normalized right (resp., left) singular vectors $v_{i}$ (resp., $u_{i}$) and singular values $\sigma_{i}=\Sigma_{i,i}$. For each $i$ such that $\sigma_{i}>0$, define $\tilde{w}_{i}\in\R^{m+n}$ to be the concatenated unit vector $\tilde{w}_{i}:=\frac{1}{\sqrt{2}}(u_{i}^{\top},v_{i}^{\top})^{\top}$.
Note that $\tilde{w}_{i}$ is an eigenvector for $\tilde{M}$ with $\tilde{M}\tilde{w}_{i}=\sigma_{i}\tilde{w}_{i}$. \newpage \begin{theorem}[Singular value perturbation bounds, conditional version] \label{thrm:generalSVbound} For matrices $M,E \in \R^{m \times n}$ and $\hat{M}:=M+E$, suppose that $E$ is $(C,c,\gamma)$-concentrated for positive constants $C, c, \gamma >0$. Suppose the interval $(\alpha,\beta)\subset\R_{>0}$ contains the largest $d$ singular values of $M$, denoted by $0<\sigma_{1}\le\sigma_{2}\le\dots\le\sigma_{d}$. Condition on the event that the interval $(\alpha,\beta)$ contains precisely $d$ singular values of $\hat{M}$, denoted $0<\hat{\sigma}_{1}\le\hat{\sigma}_{2}\le\dots\le\hat{\sigma}_{d}$, as well as $\langle \tilde{\hat{M}}\tilde{w}_{i},\tilde{w}_{i}\rangle$ for $1 \le i \le d$ and unit vector $\tilde{w}_{i}$ as defined above. Fix $k \in [d]$. Define $l:=(d-k+1)$. Then for $t > 0$, \begin{align} \label{eqn:GeneralLowerBound} \hat{\sigma}_{k} \ge \sigma_{k} - t &- \zeta^{-}, \end{align} where $\zeta^{-} := \frac{l\|E\|_{2}^{2}+((\beta-\sigma_{k})+(\sigma_{d}-\sigma_{k}) + 3t)l(l-1)t}{\beta-\sigma_{d}-(l(l-1)+1)t}$ with probability at least\\ $1-\left(l+\binom{l}{2}\right)\tilde{C}\exp(-\tilde{c}t^{\gamma})$. Also, for $t>0$, \begin{align} \label{eqn:GeneralUpperBound} \hat{\sigma}_{k} \le \sigma_{k} + t &+ \zeta^{+}, \end{align} where $\zeta^{+}:=\frac{k\|E\|_{2}^{2}+(3\sigma_{k}-\alpha +3t)k(k-1)t}{\sigma_{1}-\alpha - (k(k-1)+1)t}$ with probability at least\\ $1-\left(k+\binom{k}{2}\right)\tilde{C}\exp(-\tilde{c}t^{\gamma})$. Moreover, the upper and lower bounds hold collectively with probability at least $1-\left(d+\binom{d}{2}\right)\tilde{C}\exp(-\tilde{c}t^{\gamma})$. \end{theorem} As with the results in Section \ref{sec:ResultsRandomGraphs}, Theorem \ref{thrm:generalSVbound} can be formulated unconditionally and for collections of not-necessarily-the-largest singular values.
Both of these aspects are explored in greater detail in Example \ref{ex:SpikeModel}. The following technical lemma will subsequently be employed in the application of unconditional bounds in Section \ref{sec:Application}. \begin{lemma} \label{lem:CcgammaSpectralProb} Let $E\in\R^{m \times n}$ be a $(C,c,\gamma)$-concentrated random matrix. Choose $\epsilon > 0$ such that $2 + \epsilon > 2\left(2\log(9)/c\right)^{1/\gamma}$ and define the quantity $c_{\epsilon,c,\gamma} :=\left(c(1+\epsilon/2)^{\gamma}-2\log(9)\right)>0$. Then, \begin{align} \mathbb{P}\left[\|E\|_{2}>(2+\epsilon)\textnormal{max}\{m,n\}^{1/\gamma}\right] &\le C\exp(-c_{\epsilon,c,\gamma}\textnormal{max}\{m,n\}). \end{align} If in addition $m=n$ and $E$ is assumed to be symmetric, then the quantity $2\log(9)$ above may be replaced by $\log(9)$, an improvement. \end{lemma} \subsection{Two illustrative examples} \label{sec:Discussion} In the remainder of this section we present two examples which highlight the usefulness and flexibility of Kato--Temple methodology. We begin with Example \ref{ex:2BSSBM} which presents a simple stochastic block model setting wherein our results compare favorably with those in the recent work of \cite{O-Vu-Wang--2014}, noting that in general neither \cite{O-Vu-Wang--2014} nor this paper dominates the other. \begin{example}[Balanced two block stochastic block model] \label{ex:2BSSBM} Consider an $n$ vertex realization from a two block (affinity) stochastic block model in which $0<q<p<1$ where $p$ and $q$ denote the within-block and between-block edge probabilities, respectively. Suppose each block contains $n/2$ of the graph's vertices. The signal singular values and maximum expected degree of this rank two model are given by \begin{equation} \sigma_{1}(P)=\frac{n}{2}(p-q), \sigma_{2}(P)=\frac{n}{2}(p+q), \textnormal{ and } \Delta=\sigma_{2}(P). 
\end{equation} For the purposes of large $n$ comparison, view $\|E\|_{2}\approx 2\sqrt{\Delta}$ from \cite{Lu-Peng--2013} and set the lower threshold $\alpha$ to be $\|E\|_{2}$. Define $r_{p,q}$ to be the edge probability-dependent parameter $r_{p,q}:=(p+q)/(p-q)$. Then via Kato--Temple methodology applied jointly to $\sigma_{1}(P)$ and $\sigma_{2}(P)$, with probability approximately 0.99 when $t_{KT} \ge 2.55$, for each singular value, respectively, \begin{align*} -3t_{KT} &\le \hat{\sigma}_{1}(A) - \sigma_{1}(P) \le 4r_{p,q} + t_{KT},\\ -t_{KT} &\le \hat{\sigma}_{2}(A) - \sigma_{2}(P) \le (8+6t_{KT})r_{p,q}+t_{KT}. \end{align*} By the same approach, the bounds obtained in \cite{O-Vu-Wang--2014} are given by \begin{align*} -t_{OVW} &\le \hat{\sigma}_{1}(A) - \sigma_{1}(P) \le 8\sqrt{2}r_{p,q} + \sqrt{2}t_{OVW},\\ -t_{OVW} &\le \hat{\sigma}_{2}(A) - \sigma_{2}(P) \le 8+\sqrt{2}t_{OVW}. \end{align*} Direct application of the results in \cite{O-Vu-Wang--2014} yields probability approximately 0.99 for $t_{OVW} \ge 11.6$, though it appears upon further inspection that this can be improved to, for example, $t_{OVW} \ge 5.6$. The above joint analysis demonstrates that our bounds are favorable for the pair $\{\hat{\sigma}_{1}(A),\sigma_{1}(P)\}$ whereas the opposite is true for the pair $\{\hat{\sigma}_{2}(A),\sigma_{2}(P)\}$. We emphasize that here the upper bounds are of primary importance and interest. Indeed, the $(C,c,\gamma)$ property allows for straightforward lower bounds to be obtained by epsilon net techniques together with the Courant--Fisher--Weyl min-max principle. For example, note that a single application of $(C,c,\gamma)$-concentration yields that $\hat{\sigma}_{2}(A) - \sigma_{2}(P) \ge -t$ with probability at least $1-C\exp(-ct^{\gamma})$. Among the advantages of the Kato--Temple methodology is the ability to, in certain cases, refine one's initial analysis by further localizing the interval $(\alpha,\beta)$.
This is possible in the current example wherein we can ``zoom in'' further on the largest signal singular value. In particular, keeping the same indexing as above and setting $\alpha$ to be $\|E\|_{2}+\sigma_{1}(P)$, then for $n$ large and with probability approximately 0.99, we have \begin{align*} -t_{KT} &\le \hat{\sigma}_{2}(A) - \sigma_{2}(P) \le 2\left(\frac{p}{q}+1\right)+t_{KT}. \end{align*} Throughout this example, we note the bounds' dependence upon the underlying parameters $p$ and $q$. By virtue of the large $n$ comparison here, these parameters do not meaningfully influence the underlying probabilistic statement. \hfill$\blacktriangle$ \end{example} In contrast to the low rank setting of Example \ref{ex:2BSSBM}, Example \ref{ex:SpikeModel} below demonstrates how our results can be applied to the problem of estimating signal in a high rank matrix setting. \begin{example}[Estimating signal in a high rank spike model] \label{ex:SpikeModel} Let $m,n,p \in \N$ and set $q:=m+n+p$. Let $M\in\R^{q \times q}$ be full rank with singular values given by the set \begin{align*} \{ \underbrace{1,\dots,1}_{m \textnormal{ times }}, \underbrace{\kappa+1,\dots,\kappa+1}_{n \textnormal{ times }}, \underbrace{\tau+\kappa+1,\dots,\tau+\kappa+1}_{p \textnormal{ times }} \}, \end{align*} where $\tau,\kappa > 0$. By slight abuse of notation, denote the singular values of $M$ up to multiplicity by $\sigma_{1}:=1$, $\sigma_{2}:=\kappa+1$, and $\sigma_{3}:=\tau+\kappa+1$. Further suppose that $E\in\R^{q \times q}$ has entries which are independent, identically distributed standard normal random variables. It follows by Gaussian concentration that $E$ is $(C,c,\gamma)$-concentrated with parameters $C=2$, $c=\frac{1}{2}$, and $\gamma=2$, and so by an application of Lemma \ref{lem:CcgammaSpectralProb} with $\epsilon=4$, \begin{align*} \mathbb{P}\left[\|E\|_{2}>6\sqrt{q}\right] &\le 2\exp\left(-\frac{1}{10}q\right).
\end{align*} Define $\hat{M}:=M+E$ and organize the singular values of $\hat{M}$ in correspondence with the repeated singular values of $M$, namely write \begin{equation*} \left\{ \{\hat{\sigma}_{1,i_{1}}\}_{i_{1}=1}^{m}, \{\hat{\sigma}_{2,i_{2}}\}_{i_{2}=1}^{n}, \{\hat{\sigma}_{3,i_{3}}\}_{i_{3}=1}^{p} \right\}. \end{equation*} Suppose that $\tau,\kappa > (2\times(6\sqrt{q})+1)$. Then we can use Weyl's inequality as a preliminary device for selecting the threshold values $\alpha$ and $\beta$. In particular, such analysis yields that with high probability, \begin{align*} |\hat{\sigma}_{1,i_{1}}-1|\le\|E\|_{2} &\Longrightarrow & 0 &\le \hat{\sigma}_{1,i_{1}} \le 6\sqrt{q}+1,\\ |\hat{\sigma}_{2,i_{2}}-(\kappa+1)|\le\|E\|_{2} &\Longrightarrow & 6\sqrt{q}+2 &< \hat{\sigma}_{2,i_{2}} < \tau+\kappa-6\sqrt{q},\\ |\hat{\sigma}_{3,i_{3}}-(\tau+\kappa+1)|\le\|E\|_{2} &\Longrightarrow &\tau+\kappa+1-6\sqrt{q} &\le \hat{\sigma}_{3,i_{3}} \le \tau+\kappa+1+6\sqrt{q}. \end{align*} For the choices $\alpha=6\sqrt{q}+2$ and $\beta=\tau+\kappa-6\sqrt{q}$, observe that $\{\hat{\sigma}_{2,i_{2}}\}_{i_{2}=1}^{n} \subset (\alpha,\beta)\subset \R_{>0}$ while simultaneously $\{1,\kappa+1,\tau+\kappa+1\}\bigcap(\alpha,\beta)=\{\kappa+1\}$. In this setting our perturbation theorems apply for $\kappa$ sufficiently large. Namely, choosing $\delta \in (0,1]$ and setting $t=\Theta(\log^{\delta}q)$ yields that for each $k\in[n]$ there exist positive constants $c^{\prime}$ and $c^{\prime\prime}$ such that with high probability, \begin{align*} |\hat{\sigma}_{2,k}-\sigma_{2}| &\le c^{\prime}t+c^{\prime\prime}. \end{align*} To reiterate, this bound improves upon the bound implied by a na\"{i}ve, terminal application of Weyl's inequality. Moreover, Example \ref{ex:SpikeModel} demonstrates how Weyl's inequality may be invoked for the preliminary purpose of establishing threshold values when the paired singular values (eigenvalues) correspond to the same index after ordering. 
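Before closing this example, the threshold selection above can be sanity-checked numerically. The sketch below (illustrative only; the multiplicities $m=n=p=20$ and the values $\tau=\kappa=100$ are hypothetical choices satisfying $\tau,\kappa > 12\sqrt{q}+1$) verifies Weyl's inequality and that the middle group of perturbed singular values lands inside $(\alpha,\beta)$:

```python
import numpy as np

rng = np.random.default_rng(1)
m_, n_, p_ = 20, 20, 20          # hypothetical multiplicities
q_ = m_ + n_ + p_
tau, kappa = 100.0, 100.0        # satisfy tau, kappa > 12*sqrt(q) + 1 for q = 60

# M: diagonal with the prescribed singular values (any orthogonal rotation
# of this M has the same singular values).
sv = np.concatenate([np.full(m_, 1.0),
                     np.full(n_, kappa + 1.0),
                     np.full(p_, tau + kappa + 1.0)])
M = np.diag(sv)
E = rng.standard_normal((q_, q_))  # i.i.d. standard normal noise
M_hat = M + E

E_norm = np.linalg.norm(E, 2)      # spectral norm of the noise
sv_hat = np.sort(np.linalg.svd(M_hat, compute_uv=False))[::-1]
sv_sorted = np.sort(sv)[::-1]

# Weyl's inequality pairs the sorted singular values within ||E||_2.
weyl_gap = float(np.max(np.abs(sv_hat - sv_sorted)))

# Thresholds as in the example; the middle group of n perturbed singular
# values should fall strictly inside (alpha, beta).
alpha = 6 * np.sqrt(q_) + 2
beta = tau + kappa - 6 * np.sqrt(q_)
middle = sv_hat[p_: p_ + n_]
```

Here the spectral norm of $E$ is computed exactly rather than bounded, so the check isolates the Weyl-based interval selection step of the argument.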
\hfill$\blacktriangle$ \end{example} \section{Applications to graph inference} \label{sec:Application} \subsection{Methods of graph inference} The field of statistical inference and modeling for graphs represents a burgeoning area of research with implications for the social and natural sciences among other disciplines \cite{Kolaczyk2009,Goldenberg2010}. Within the current body of research, the pursuit of identifying and studying community structure within real-world networks continues to receive widespread attention \cite{Newman2006,NewmanGirvan2004,BickelChen2009,Fortunato2010,VerzelenCastro2015,CastroVerzelen2014}. Still another area of investigation involves anomaly detection for time series of graphs by considering graph statistics such as the total degree, number of triangles, and various scan statistics \cite{Wang2014, RukhinDiss}. Here we apply our results to two such detection tasks. \subsection{Community detection via hypothesis testing} \label{subsec:threeBlockSBM} In this application we view the problem of community detection through the lens of hypothesis testing as in \cite{CastroVerzelen2014,VerzelenCastro2015}. We consider the simple setting of a balanced three block stochastic block model and the problem of detecting differences in between-block communication. Namely, consider the block edge probability matrix and block assignment vector given by \begin{equation} \label{eqn:Null_B_0} \textnormal{Null model: } B_0 = \left( \begin{array}{ccc} p & q & q \\ q & p & q \\ q & q & p \end{array} \right) \text{ and } \pi_0 = \left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right), \end{equation} where $p = 0.81$ and $q = 0.2025$. In this model, vertices have an equal probability of belonging to each of the three blocks. Vertices within the same block have probability $p$ of being connected by an edge, whereas for vertices in different blocks the probability is $q$. 
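As a quick numerical illustration of this null model (not part of the formal analysis; the choice $n=300$ is hypothetical), one can form the $P$ matrix with equal block sizes and inspect its spectrum directly:

```python
import numpy as np

p, q = 0.81, 0.2025
n = 300                       # hypothetical; n/3 = 100 vertices per block
B0 = np.array([[p, q, q],
               [q, p, q],
               [q, q, p]])

# Conditioning on equal block sizes, the P matrix is B0 "blown up" by
# all-ones blocks: P = B0 kron J_{n/3}.
P = np.kron(B0, np.ones((n // 3, n // 3)))
eigs = np.sort(np.linalg.eigvalsh(P))[::-1]

lam_top = eigs[0]        # (n/3)(p + 2q)
lam_rep = eigs[1:3]      # (n/3)(p - q), with multiplicity two
bulk = eigs[3:]          # remaining eigenvalues vanish (P has rank 3)
```

The repeated signal eigenvalue visible here is precisely the feature that rules out eigengap-based bounds in this setting.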
As an aside, we note that this SBM may be cast in the language of random dot product graphs for which the underlying distribution of latent positions $F$ is a mixture of point masses. Specifically, take $F$ to be the discrete uniform distribution on the vectors $x_1 \approx (0.55, 0.32, 0.64)$, $x_2 \approx (-0.55, 0.32, 0.64)$, and $x_3 \approx (0, -0.64, 0.64)$ in $\mathbb{R}^3$ (see Remarks \ref{rem:RDPG} and \ref{rem:kernel extension}). For a graph on $n$ vertices from this three block model, condition on the graph exhibiting equal block sizes, i.e. $n_{1}=n_{2}=n_{3}=n/3$. For the corresponding $P$ matrix, denoted $P_n(B_0)$, the non-trivial (signal) model eigenvalues themselves exhibit multiplicity (hence Equation (\ref{eqn:CLT big 0_P bound}) via \cite{Athreya-et-al--2015} does not apply) and are \begin{equation} \lambda_{1}(P_n(B_0))=\lambda_{2}(P_n(B_0))=\frac{n}{3}(p-q) \text{ and } \lambda_{3}(P_n(B_0))=\frac{n}{3}(p+2q). \end{equation} In contrast, consider an alternative model in which the first and second blocks exhibit stronger between-block communication. This stronger communication is represented by an additional additive factor $\epsilon\in(0,p-q)$ in the block edge probability matrix $B_{\epsilon}$, where $\epsilon$ is assumed to be bounded away from $p-q$ for convenience. \begin{equation} \label{eqn:Alt_B_0} \textnormal{Alternative model: } B_{\epsilon} = \left( \begin{array}{ccc} p & q+\epsilon & q \\ q+\epsilon & p & q \\ q & q & p \end{array} \right) \text{ and } \pi_1 = \left(\frac{1}{3},\frac{1}{3},\frac{1}{3}\right). \end{equation} Under $B_{\epsilon}$, the signal eigenvalues of $P_{n}(B_{\epsilon})$ (equiv., singular values) can be explicitly computed as functions of $p,q,n,$ and $\epsilon$. 
They are given by \begin{align*} \lambda_{1}(P_{n}(B_{\epsilon})) &=\frac{n}{3}(p-q-\epsilon), \hspace{1em} \lambda_{2}(P_{n}(B_{\epsilon})) =\frac{n}{6}(2p+q+\epsilon-\sqrt{9q^2+2q \epsilon + \epsilon^2}),\\ \lambda_{3}(P_{n}(B_{\epsilon})) &=\frac{n}{6}(2p+q+\epsilon+\sqrt{9q^2+2q \epsilon + \epsilon^2}). \end{align*} Furthermore, the maximum expected degree of the model corresponding to $B_{\epsilon}$ is given by $\Delta_{\epsilon}= \frac{n}{3}(p+2q+\epsilon)$. Given $\epsilon>0$, one may formulate a simple null versus simple alternative hypothesis test written as \begin{equation} \label{eqn:hypothesis_test} \mathbb{H}_{0}: B = B_{0} \text{ against } \mathbb{H}_{A}: B=B_{\epsilon}. \end{equation} In what follows we choose the smallest signal eigenvalue as our test statistic and denote it by $\Lambda_{1}$. We compare our bounds obtained via Kato--Temple methodology with the large-sample approximation bounds implied by \cite{Lu-Peng--2013} for the specified values $n\in\{6000,9000,12000,15000\}$. Similar comparison can be carried out with respect to the results in \cite{O-Vu-Wang--2014}. Our bounds compare favorably with those in \cite{O-Vu-Wang--2014} even for conservative choices of $t$ therein. By Lemma \ref{lem:CcgammaSpectralProb} and Proposition \ref{prop:A-P}, irrespective of $\epsilon>0$ above, we have the concentration inequality $\mathbb{P}\left[\|E\|_{2}>3\sqrt{n}\right] \le 2\exp\left(-\frac{1}{20}n\right)$. This spectral norm bound allows us to invoke an unconditional version of Theorem \ref{thrm:IERMconditional}. Specifically, for moderate choices of $t>0$, the bounds in Theorem \ref{thrm:IERMconditional} hold with probability at least $1-12\exp(-t^{2})-2\exp\left(-\frac{1}{20}n\right)$. When $n\ge 6000$, the choice $t \approx 2.66$ yields probability at least 0.99. 
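The closed-form expressions above for the signal eigenvalues of $P_{n}(B_{\epsilon})$ can be checked numerically; the following sketch (illustrative only; $n=6000$ and $\epsilon=0.1$ are hypothetical choices) compares them against a direct eigendecomposition of $B_{\epsilon}$:

```python
import numpy as np

p, q, eps = 0.81, 0.2025, 0.1   # eps = 0.1 is a hypothetical alternative
n = 6000
B_eps = np.array([[p, q + eps, q],
                  [q + eps, p, q],
                  [q, q, p]])

# With equal block sizes, the nonzero (signal) eigenvalues of P_n(B_eps)
# are n/3 times the eigenvalues of the 3 x 3 matrix B_eps.
signal = np.sort((n / 3) * np.linalg.eigvalsh(B_eps))

root = np.sqrt(9 * q ** 2 + 2 * q * eps + eps ** 2)
lam1 = (n / 3) * (p - q - eps)
lam2 = (n / 6) * (2 * p + q + eps - root)
lam3 = (n / 6) * (2 * p + q + eps + root)
expected = np.sort(np.array([lam1, lam2, lam3]))
max_err = float(np.max(np.abs(signal - expected)))

# Maximum expected degree of the alternative model.
delta_eps = (n / 3) * (p + 2 * q + eps)
```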
Using these concentration inequality results, we determine confidence intervals which hold for $\Lambda_{1}$ with probability at least 0.99 under $\mathbb{H}_{0}$ and $\mathbb{H}_{A}$, respectively. We compute the value $\epsilon_{n}$ such that the confidence intervals under $\mathbb{H}_{0}$ and $\mathbb{H}_{A}$ no longer overlap for $\epsilon\in(\epsilon_{n},0.2]$, emphasizing that smaller values of $\epsilon_{n}$ indicate superior performance. This provides us with a region of the alternative in which our statistical test has power at least 0.99. Our results are summarized in the numerical table below. \begin{table}[H] \caption{Comparison of the detection thresholds $\epsilon_{n}$ obtained via \cite{Lu-Peng--2013} and via this paper.} \begin{tabular}{||c c c||} \hline $n$ & $\epsilon_{n}$ via \cite{Lu-Peng--2013} & $\epsilon_{n}$ via this paper\\ [0.5ex] \hline\hline 6000 & 0.1006 & 0.0407 \\ \hline 9000 & 0.0818 & 0.0256 \\ \hline 12000 & 0.0707 & 0.0187 \\ \hline 15000 & 0.0631 & 0.0147 \\[1ex] \hline \end{tabular} \end{table} It is not difficult to see that the eigenvalue-based test considered here has asymptotic power equal to one as $n\rightarrow\infty$ for any choice of $0<q<p<1$ and $\epsilon\in(0,p-q)$. Moreover, as a consequence of Theorem \ref{thrm:spikeEvalsIERM} and subsequent discussion, we make the following observation. \begin{proposition} Consider testing the hypothesis in Equation (\ref{eqn:hypothesis_test}). Assume that $q \equiv q_{n} = \omega(\frac{\log n}{n})$ with $q_{n} < p_{n}$. Then for $n \epsilon_{n} = \omega(\log n)$ and $\epsilon_{n} < p_{n} - q_{n}$, the above test using $\Lambda_{1}$ has asymptotically full power. \end{proposition} Note that the above analysis investigates testing performance as a function of $\epsilon$ for graphs with fixed block proportions. Next we investigate a setting wherein $\epsilon$ is fixed and the sizes of the graph communities change. \subsection{Change-point detection} We now consider a stylized example of change-point detection via hypothesis testing.
Let $T^{*} \geq 1$ and suppose that $G_1, G_2, \dots, G_T$ for $T < T^{*}$ are Erd\H{o}s--R\'{e}nyi graphs on $n$ vertices, while for $T \ge T^{*}$ the graph $G_{T}$ is sampled according to a two block stochastic block model with block edge probability matrix $B = \Bigl[\begin{smallmatrix} p_{\epsilon} & p \\ p & p \end{smallmatrix} \Bigr]$ for $p_{\epsilon}:=p+\epsilon$ and $\epsilon>0$, with $m$ vertices assigned to the first block and $n - m$ vertices assigned to the second block. We note that $B$ encapsulates a notion of chatter anomaly, i.e., a subset of the vertices in $[n]$ exhibit altered communication behavior in an otherwise stationary setting. For a given value of $T$, we are interested in testing the hypothesis that $T$ is a change-point in the collection $\{G_1, G_2, \dots, G_T\}$. Given two graphs with adjacency matrices, $A^{(T-1)}$ and $A^{(T)}$, this can be formulated as the problem of testing the two-sample hypotheses \begin{align*} \label{eq:two-sample} \mathbb{H}_0 &\colon A^{(T-1)} \sim \mathrm{ER}(n,p), A^{(T)} \sim \mathrm{ER}(n,p) \quad \text{against}\\ \mathbb{H}_A &\colon A^{(T-1)} \sim \mathrm{ER}(n,p), A^{(T)} \sim \mathrm{SBM}(B, m, n-m). \end{align*} We emphasize that in the above formulation, the parameter $p$ in $\mathrm{ER}(n,p)$, the size $m$ of the chatter community, and the associated communication probability $p_{\epsilon}$ are generally assumed to be unknown. Many test statistics are available for this change-point detection problem, including those based on graph invariant statistics (such as number of edges or number of triangles) or those based on locality statistics (such as max degree or scan statistics). For a given graph with adjacency matrix $A$, let $N(i) = \{j \colon A_{i,j} = 1\}$ denote the collection of vertices adjacent to vertex $i$. 
Furthermore, \begin{itemize} \item let $\mathcal{T}_k$ count the number of $k$-cliques in $A$ for $k \geq 2$; \item let $\delta(A) := \max_{i} \sum_j A_{i,j}$ be the max degree statistic of $A$; \item let $\Psi(A) := \max_i \sum_{j,k \in N(i)} A_{j,k}$ be the scan statistic of $A$. \end{itemize} We note that these test statistics are widely used in anomaly detection for time series of graphs; see \cite{priebe_enron,Wang2014,CastroVerzelen2014,ranshous} and the references therein for a survey of results and applications. One can then show \cite{rukhin2,tang_tsp2013} that the test statistics based on $\mathcal{T}_2$ and $\mathcal{T}_3$ are consistent for the above hypothesis test when $m = \Omega(\sqrt{n})$. More precisely, under the null hypothesis, one has \begin{equation*} \frac{\mathcal{T}_2(A^{(T)}) - \mathcal{T}_2(A^{(T-1)})}{n\sqrt{p(1 - p)}} \overset{d}{\longrightarrow} N(0,1); \quad \frac{\mathcal{T}_3(A^{(T)}) - \mathcal{T}_3(A^{(T-1)})}{n^2 p^2 \sqrt{pp_{\epsilon}}} \overset{d}{\longrightarrow} N(0,1), \end{equation*} as $n \rightarrow \infty$, while under the alternative hypothesis, one has \begin{align*} \frac{\mathcal{T}_2(A^{(T)}) - \mathcal{T}_2(A^{(T-1)})}{n\sqrt{p(1 - p)}} &\overset{d}{\longrightarrow} N\Bigl(\frac{m(m-1)\epsilon}{n\sqrt{p(1-p)}}, C_1\Bigr); \\ \frac{\mathcal{T}_3(A^{(T)}) - \mathcal{T}_3(A^{(T-1)})}{n^2 p^2 \sqrt{p p_{\epsilon}}} &\overset{d}{\longrightarrow} N\Bigl(\frac{\mu_{n,m,p,\epsilon}}{n^2p^2 \sqrt{pp_{\epsilon}}}, C_2\Bigr), \end{align*} as $n \rightarrow \infty$ for some positive constants $C_1$ and $C_2$ together with $\mu_{n,m,p,\epsilon}:=m^3p_{\epsilon}^{3}/6 + m^2(n-m)p^2p_{\epsilon} + (m(n-m)^2/2 + (n-m)^3/6)p^3 - n^3p^3/6$. 
Now, if $m = \omega(\sqrt{n})$, then \begin{equation*} \frac{m(m-1)\epsilon}{n\sqrt{p(1-p)}} \rightarrow \infty; \quad \frac{\mu_{n,m,p,\epsilon}}{n^2p^2 \sqrt{pp_{\epsilon}}} \rightarrow \infty, \end{equation*} as $n \rightarrow \infty$, and thus both $\mathcal{T}_2$ and $\mathcal{T}_3$ are consistent for the above hypothesis test when $m = \Omega(\sqrt{n})$. Furthermore, Theorem~2 and Proposition~2 of \cite{CastroVerzelen2014} indicate that $\mathcal{T}_2$ is asymptotically optimal, i.e., if $m = o(\sqrt{n})$ then provided that \begin{equation} \lim_{n \rightarrow \infty} \mathcal{I}(m,n,p,\epsilon) := \lim_{n \rightarrow \infty} \frac{m \bigl(p_{\epsilon} \log \tfrac{p_{\epsilon}}{p} + (1 - p_{\epsilon}) \log \tfrac{1 - p_{\epsilon}}{1 - p}\bigr)}{2 \log{(n/m)}} < 1, \end{equation} no test statistic is consistent for testing the above hypotheses. Similarly, one can also show \cite{rukhin1,tang_tsp2013} that the test statistics based on $\delta(A)$ and $\Psi(A)$ are consistent for the above hypothesis test when $m = \Omega(\sqrt{n \log n})$; in particular the (normalized) limiting distributions of both $\delta(A^{(T)}) - \delta(A^{(T-1)})$ and $\Psi(A^{(T)}) - \Psi(A^{(T-1)})$ are Gumbel. In the context of this paper, one could also use a test statistic based on the largest eigenvalue. Our earlier results indicate that, under the null hypothesis, with high probability the largest eigenvalues of $A^{(T)}$ and $A^{(T-1)}$ satisfy \begin{equation*} |\lambda_{\text{max}}(A^{(T)}) - \lambda_{\text{max}}(P^{(T)})| = O(1) \text{ and }|\lambda_{\text{max}}(A^{(T-1)}) - \lambda_{\text{max}}(P^{(T-1)})| = O(1), \end{equation*} along with $|\lambda_{\text{max}}(A^{(T)}) - \lambda_{\text{max}}(A^{(T-1)})| = O(1)$.
Meanwhile, under the alternative hypothesis, if $m=o(n)$, then with high probability \begin{equation*} \begin{split} \left|\left|\lambda_{\text{max}}(A^{(T)}) - \lambda_{\text{max}}(A^{(T-1)})\right| - \frac{m^{2}p\epsilon}{np-m\epsilon}\right| &= O(1). \end{split} \end{equation*} \begin{comment} Consider the $P$ matrix as an inflated SBM matrix (allowing loops). Under the alternative with SBM$(m,n-m)$, one can inductively show the characteristic polynomial to be $\chi(x) = x^{n}-(np+m\epsilon)x^{n-1}+m(n-m)p\epsilon x^{n-2}$. Solving for the leading eigenvalue allows us to divide through by $x^{n-2}$, thereby obtaining a degree-two polynomial which is solvable by the quadratic formula. \end{comment} Thus the largest eigenvalue test statistic is also consistent when $m = \Omega(\sqrt{n})$. The previous test statistics are all global test statistics in the sense that, if $\mathbb{H}_0$ is rejected, the resulting test procedures do not extract the subset of the vertices which exhibits anomalous behavior between $A^{(T)}$ and $A^{(T-1)}$. One can construct related local test statistics which do extract the subset of anomalous vertices, although the resulting test procedure is computationally prohibitive. For example, assuming that $m$ is known, we could replace $\Psi(A)$ with the (modified) scan statistic $\Upsilon_m(A) = \max_{|S| = m} \mathcal{T}_{2}(A_{\mid S})$ where $A_{\mid S}$ is the subgraph of $A$ induced by the vertices in $S$ and the maximum is taken over all subsets $S \subset [n]$ with $|S| = m$. Thus $\Upsilon_m(A)$ is the maximum number of edges in any subgraph induced by $m$ vertices of $A$. By \cite{CastroVerzelen2014} the test statistic $\Upsilon_m(A^{(T)}) - \Upsilon_m(A^{(T-1)})$ is consistent for the hypothesis test considered in this section whenever \begin{equation*} \lim_{n \rightarrow \infty} \mathcal{I}(m,n,p,\epsilon) > 1.
\end{equation*} Thus, for any fixed $p$ and $\epsilon$, the (modified) scan statistic is consistent when $m = \Omega(\log{n})$ as $n \rightarrow \infty$. Using a similar idea, one can define a local variant of the largest eigenvalue statistic as $\Lambda_m(A) = \max_{|S| = m} \lambda_{\text{max}}(A_{\mid S})$. By Theorem \ref{thrm:IERMconditional} and a union bound over all $\tbinom{n}{m} = O(n^m)$ subsets $S \subseteq [n]$ with $|S| = m$, we have that there exists a constant $C > 0$ such that if $t = C \sqrt{m \log n}$, then with high probability \begin{equation*} |\Lambda_m(A^{(T)}) - \Lambda_m(A^{(T-1)})| = O(\sqrt{m \log n}) \end{equation*} under the null hypothesis, whereas under the alternative hypothesis, with high probability \begin{equation*} \left| |\Lambda_m(A^{(T)}) - \Lambda_m(A^{(T-1)})| - m\epsilon\right| = O(\sqrt{m \log n}). \end{equation*} Thus, for any fixed $p$ and $\epsilon$, the test statistic based on $\Lambda_m$ is also consistent for the above hypothesis test whenever $m = \Omega(\log{n})$ as $n \rightarrow \infty$. In summary, the results in Section \ref{sec:Results} facilitate eigenvalue-based test statistics for the change-point detection problem as presented in this section. Furthermore, the resulting procedure is consistent whenever the size of the chatter community $m$ exceeds the threshold of detectability given in \cite{CastroVerzelen2014}. \section{Acknowledgments} \label{sec:Thanks} The authors thank the anonymous referees for their valuable feedback, which has improved the quality of this paper.
\section{Appendix} \label{sec:Appendix} \begin{comment} \subsection{Proof of Proposition \ref{prop:A-P}} \begin{proof} Firstly, by the triangle inequality and using the entry-wise notation $u\equiv(u_{1},u_{2},\dots,u_{n})^{\top}$ and $A:=[A_{k,l}]_{k,l=1}^{n}$, we expand $\left| \langle(A-P)u,v\rangle \right|$ as \begin{equation*} \label{eqn:triang} \left| \sum_{k<l}2(A_{k,l}-P_{k,l})w_{k}w_{l} + \sum_{k=1}^{n}(A_{k,k}-P_{k,k})u_{k}v_{k} \right|. \end{equation*} Together, these sums represent the sum of $n(n+1)/2$ mean zero bounded independent random variables, hence by Hoeffding's inequality \begin{align*} \Prob[\left|\langle(A-P)u,v\rangle\right| > t] &\le 2 \exp\left(\frac{-2t^2}{\sum_{k<l}(2u_{k}v_{l})^2 + \sum_{k=1}^{n}(u_{k}v_{k})^2}\right) \le 2 \exp(-t^2). \end{align*} The final inequality follows from the observation that \begin{align*} \sum_{k<l}(2 u_{k}v_{l})^2 + \sum_{k=1}^{n}(u_{k}v_{k})^2 &\le 2\left(\sum_{k,l=1}^{n} u_{k}^2 v_{l}^2\right) = 2(\langle u,u \rangle \times \langle v,v \rangle) = 2. \end{align*} This proves the lemma. \end{proof} \end{comment} \subsection{Proof of Theorem \ref{thrm:IERMconditional}} \begin{proof} Let $P,E \in \R^{n \times n}$ be real symmetric matrices such that $E$ satisfies Proposition \ref{prop:A-P}. Denote the $d$ largest eigenvalues of $P$ and $A$ by \begin{align*} 0 &< \lambda_{1}(P) \le \lambda_{2}(P) \le \dots \le \lambda_{d}(P),\\ 0 &< \lambda_{1}(A) \le \lambda_{2}(A) \le \dots \le \lambda_{d}(A). \end{align*} Let $\{w_{i}\}_{i=1}^{d}$ denote a collection of orthonormal eigenvectors of $P$ corresponding to the collection of eigenvalues $\{\lambda_{i}(P)\}_{i=1}^{d}$. Similarly, let $\{u_{i}\}_{i=1}^{d}$ denote a collection of orthonormal eigenvectors of $A$ corresponding to the collection of eigenvalues $\{\lambda_{i}(A)\}_{i=1}^{d}$. 
For each $i \in [d]$ define $\eta_{i}$ to be an ``approximate eigenvalue of $A$ close to $\lambda_{i}(P)$'' in the sense that \begin{align} \eta_{i} &:= \langle Aw_{i},w_{i}\rangle = \lambda_{i}(P) + \langle Ew_{i},w_{i}\rangle, \end{align} and define a corresponding ``residual quantity'' $\epsilon_{i}$ as \begin{align} \epsilon_{i}:=\|(A-\eta_{i})w_{i}\|. \end{align} \subsubsection{Proof of Theorem \ref{thrm:IERMconditional}: upper bound} \label{sec:Upper bound} Now for fixed $k\in[d]$ define the $k$-dimensional linear manifold $\mathcal{M}_{k}$ by \begin{align*} \mathcal{M}_{k}:=\textnormal{span}\{u_{1},\dots,u_{k}\}. \end{align*} We now define a collection of ``aggregate quantities'': \begin{itemize} \item Define $w$ to be an ``aggregate approximate eigenvector of $A$'' in the sense that $ w:=\sum_{i=1}^{k}r_{i}w_{i} $ for a collection of normalized coefficients $\{r_{i}\}_{i=1}^{k}$ such that $ \|w\|^{2} = \sum_{i=1}^{k}r_{i}^{2} = 1,$ and satisfying the under-determined linear system $ \langle w, u_{i} \rangle = 0 \textnormal{ for } i=1,2,\dots,k-1.$ \item Define $\eta$ to be an ``aggregate approximate eigenvalue of $A$'' in the sense that $\eta := \langle Aw,w \rangle.$ \item Define $\epsilon$ to be the ``aggregate residual quantity'' $\epsilon:=\|(A-\eta)w\|.$ \end{itemize} By Lemma 1 in \cite{Kato--1950}, the interval $\left(\alpha, \eta + \frac{\epsilon^2}{\eta - \alpha}\right]$ contains a point in the spectrum of $A$. Note that by construction, $w \in \mathcal{M}_{k-1}^{\perp} = : \mathcal{N}_{k-1}$; moreover, $Aw \in \mathcal{N}_{k-1}$ as a function of $\{r_{i}\}_{i=1}^{k}$. In the Hilbert space $\mathcal{N}_{k-1}$, however, the spectrum of $A$ does not contain $\lambda_{1}(A), \dots, \lambda_{k-1}(A)$ since $u_{1},\dots,u_{k-1} \notin \mathcal{N}_{k-1}$.
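Before proceeding, the Kato--Temple enclosure invoked above can be sanity-checked numerically: for any unit vector $w$ and any $\alpha < \eta := \langle Aw,w\rangle$, the interval $(\alpha, \eta + \epsilon^2/(\eta - \alpha)]$ must capture a point of the spectrum. A minimal sketch (the matrix and trial vector are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                        # random real symmetric matrix
w = rng.standard_normal(n)
w /= np.linalg.norm(w)                   # unit trial vector

eta = w @ A @ w                          # approximate eigenvalue
eps = np.linalg.norm(A @ w - eta * w)    # residual norm ||(A - eta)w||

alpha = eta - 1.0                        # any alpha strictly below eta works
upper = eta + eps**2 / (eta - alpha)
evals = np.linalg.eigvalsh(A)
# the lemma guarantees an eigenvalue in (alpha, upper]
hit = np.any((evals > alpha) & (evals <= upper + 1e-12))
```

Since the enclosure is a theorem, `hit` is true for every admissible choice of $w$ and $\alpha$, not just this seed.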
Thus, by another application of Lemma 1 in \cite{Kato--1950}, the eigenvalue of $A$ in the interval given by $\left(\alpha, \eta + \frac{\epsilon^2}{\eta - \alpha}\right]$ must be $\lambda_{k}(A)$ with associated unit eigenvector $u_{k}$. Hence, \begin{equation} \label{eqn:Kato general upper bound} \lambda_{k}(A) \le \eta + \frac{\epsilon^2}{\eta - \alpha} = \frac{\eta^2+\epsilon^2-\alpha \eta}{\eta - \alpha}. \end{equation} We pause briefly to make several computational observations. First, \begin{align} \eta^2 + \epsilon^2 &= \langle Aw,w \rangle^2 + \|(A-\langle Aw,w\rangle)w\|^2\\ &= \|Aw\|^2 =\sum_{i,j=1}^{k}r_{i}r_{j}\langle Aw_{i},Aw_{j}\rangle. \end{align} Letting $\delta_{i,j}:=\mathbb{I}\{i=j\}$ denote the Kronecker delta function, we have for each $i,j\in[d]$ that \begin{align} \label{eqn:Awi ip Awj} \langle Aw_{i},Aw_{j} \rangle = \langle (A-\eta_{i})w_{i}, (A-\eta_{j})w_{j} \rangle + (\eta_{i} + \eta_{j} ) \langle Aw_{i},w_{j}\rangle - \eta_{i}^2 \delta_{i,j}, \end{align} while \begin{align} \label{eqn:Awi,wj,iNotj} \langle Aw_{i},w_{j} \rangle &= \langle Ew_{i},w_{j} \rangle \textnormal{ for } i \neq j. \end{align} It will also prove useful to recognize the expansion \begin{align} \eta &= \langle Aw,w \rangle = \sum_{i=1}^{k}r_{i}^{2}\eta_{i} + \sum_{1\le i < j \le k}2r_{i}r_{j}\langle Ew_{i},w_{j}\rangle. \end{align} Combining these observations yields \begin{align*} \eta^2 + \epsilon^2 &= \sum_{i,j=1}^{k}r_{i}r_{j} \langle Aw_{i},Aw_{j} \rangle \\ &= \left( \sum_{i=1}^{k}r_i^2 \langle Aw_{i},Aw_{i} \rangle \right) + \left( \sum_{1\le i < j \le k}2r_{i}r_{j}\langle Aw_{i},Aw_{j} \rangle \right) \\ &= \sum_{i,j=1}^{k}r_{i}r_{j} \langle (A-\eta_{i})w_{i}, (A-\eta_{j})w_{j} \rangle \\ &+ \sum_{i=1}^{k}r_{i}^2\eta_{i}^2 + \sum_{1\le i<j \le k}2r_{i}r_{j} (\eta_{i} + \eta_{j} ) \langle Aw_{i},w_{j} \rangle. 
\end{align*} An application of the Cauchy--Schwarz inequality coupled with subsequent computation yields \begin{align*} \sum_{i,j=1}^{k}r_{i}r_{j}\langle(A-\eta_{i})w_{i},(A-\eta_{j})w_{j} \rangle &\le \sum_{i,j=1}^{k}r_{i}r_{j}\left(\|(A-\eta_{i})w_{i}\| \|(A-\eta_{j})w_{j}\|\right)\\ &= \sum_{i,j=1}^{k}(r_{i}\epsilon_{i})(r_{j}\epsilon_{j}) \le \left(\sum_{i=1}^{k}\epsilon_{i}|r_{i}|\right)^2. \end{align*} Hence, \begin{align} \label{eqn:eta2epsilon2} \eta^{2} + \epsilon^{2} &\le \left(\sum_{i=1}^{k}\epsilon_{i}|r_{i}|\right)^2 + \sum_{i=1}^{k}r_{i}^2\eta_{i}^2 + \sum_{1\le i<j \le k}2r_{i}r_{j} (\eta_{i} + \eta_{j} ) \langle Ew_{i},w_{j} \rangle. \end{align} Returning to Eqn. (\ref{eqn:Kato general upper bound}), the numerator then becomes \begin{equation} \left(\sum_{i=1}^{k}\epsilon_{i}|r_{i}|\right)^2 + \sum_{i=1}^{k}r_{i}^{2}\eta_{i}(\eta_{i} - \alpha) + \sum_{1\le i<j \le k}2r_{i}r_{j}(\eta_{i} + \eta_{j} - \alpha)\langle Ew_{i},w_{j} \rangle \end{equation} while the denominator becomes \begin{equation} \left(\sum_{i=1}^{k}r_{i}^{2}(\eta_i - \alpha)\right) + \left( \sum_{1\le i < j \le k}2r_{i}r_{j}\langle Ew_{i},w_{j} \rangle \right). \end{equation} By a simple union bound, observe that for $t>0$, \begin{align} \mathbb{P}\left[\textnormal{max}_{1 \le i \le j \le k}|\langle Ew_{i},w_{j}\rangle|>t\right] &\le \left(k+\binom{k}{2}\right)C\exp(-ct^{\gamma}), \end{align} in which case with high probability, \begin{align} \left(\sum_{i=1}^{k}r_{i}^{2}(\eta_i - \alpha)\right) &\ge \lambda_{1}(P)-\alpha-t,\\ \left( \sum_{1\le i < j \le k}2r_{i}r_{j}\langle Ew_{i},w_{j} \rangle \right) &\ge -k(k-1)t, \end{align} while with high probability, \begin{align} \sum_{1\le i<j \le k}2r_{i}r_{j}(\eta_{i} + \eta_{j} - \alpha)\langle Ew_{i},w_{j} \rangle &\le (2\lambda_{k}(P)-\alpha +2t)k(k-1)t,\\ \left(\sum_{i=1}^{k}r_{i}^{2}\eta_{i}\right)k(k-1)t &\le (\lambda_{k}(P)+t)k(k-1)t. 
\end{align} By adding and subtracting $\left(\sum_{i=1}^{k}r_{i}^{2}\eta_{i}\right)k(k-1)t$ to the numerator of Eqn. (\ref{eqn:Kato general upper bound}) we obtain the following bound in which the first term on the right-hand side is the leading term while the second term on the right hand side corresponds to a residual term. \begin{align*} \lambda_{k}(A) &\le \frac{ \left( \sum_{i=1}^{k}\epsilon_i |r_i| \right)^2 + \left(\sum_{i=1}^{k}r_i^2 \eta_i (\eta_i - \alpha -k(k-1) t) \right) }{\left(\sum_{i=1}^{k}r_i^2(\eta_i - \alpha - k(k-1) t) \right)} \\ &+ \frac{(3\lambda_{k}(P)-\alpha +3t)k(k-1)t}{\lambda_{1}(P)-\alpha - (k(k-1)+1)t}. \end{align*} Now by the same arguments as in \cite{Kato--1950}, Section 3, Eqns. (22--30), the constants $\{r_{i}\}_{i=1}^{k}$ can be removed. To this end, the quantity \begin{align} \frac{ \left( \sum_{i=1}^{k}\epsilon_i |r_i| \right)^2 + \left(\sum_{i=1}^{k}r_i^2 \eta_i (\eta_i - \alpha -k(k-1) t) \right) }{\left(\sum_{i=1}^{k}r_i^2(\eta_i - \alpha - k(k-1) t) \right)} \end{align} is bounded above by the quantity \begin{align} \underset{1\le i \le k}{\textnormal{max}}\eta_{i} + \left(\sum_{i=1}^{k}\frac{\epsilon_{i}^2}{\eta_{i} - \alpha - k(k-1)t}\right). \end{align} Note that $\underset{1\le i \le k}{\textnormal{max}}\eta_{i} \le \lambda_{k}(P)+t,$ with high probability, while a simple computation reveals that for each $i\in[k]$, \begin{align} \label{eqn:epsilonSquared} \epsilon_{i}^{2} &= \|Ew_{i}\|^{2}-|\langle Ew_{i},w_{i}\rangle|^{2} \le \|Ew_{i}\|^{2} \le \|E\|_{2}^{2}. \end{align} Putting all these observations together finally produces an upper bound on $\lambda_{k}(A)$ of the form \begin{align} \lambda_{k}(A) \le \lambda_{k}(P) + t &+ \zeta^{+}, \end{align} where $\zeta^{+}:=\frac{k\|E\|_{2}^{2}+(3\lambda_{k}(P)-\alpha +3t)k(k-1)t}{\lambda_{1}(P)-\alpha - (k(k-1)+1)t}$. \subsubsection{Proof of Theorem \ref{thrm:IERMconditional}: lower bound} Fix $k\in[d]$ and let $l:=d-k+1$. 
Define $\mathcal{M}_{l}$ to be the $l$-dimensional linear manifold given by \begin{align*} \mathcal{M}_{l}:=\textnormal{span}\{u_{k},\dots,u_{d}\}. \end{align*} We now define a collection of ``aggregate quantities'' similar to the formulation in Section \ref{sec:Upper bound}: \begin{itemize} \item Define $w$ to be an ``aggregate approximate eigenvector of $A$'' in the sense that $ w:=\sum_{i=k}^{d}r_{i}w_{i} $ for a collection of normalized coefficients $\{r_{i}\}_{i=k}^{d}$ such that $ \|w\|^{2} = \sum_{i=k}^{d}r_{i}^{2} = 1,$ and satisfying the under-determined linear system $ \langle w, u_{i} \rangle = 0 \textnormal{ for } i=k+1,\dots,d.$ \item Define $\eta$ to be an ``aggregate approximate eigenvalue of $A$'' in the sense that $\eta := \langle Aw,w \rangle$. \item Define $\epsilon$ to be the ``aggregate residual quantity'' $\epsilon:=\|(A-\eta)w\|.$ \end{itemize} By Lemma 2 in \cite{Kato--1950}, the interval $\left[\eta-\frac{\epsilon^2}{\beta-\eta}, \beta \right)$ contains a point in the spectrum of $A$. Note that by construction, $w\in \mathcal{M}_{l-1}^{\perp} = : \mathcal{N}_{l-1}$; moreover, $Aw \in \mathcal{N}_{l-1}$ as a function of $\{r_{i}\}_{i=k}^{d}$. In the Hilbert space $\mathcal{N}_{l-1}$, however, the spectrum of $A$ does not contain $\lambda_{k+1}(A), \dots, \lambda_{d}(A)$ since $u_{k+1},\dots,u_d \notin \mathcal{N}_{l-1}$. Thus, by another application of Lemma 2 in \cite{Kato--1950}, the eigenvalue of $A$ in the interval $\left[\eta-\frac{\epsilon^2}{\beta-\eta}, \beta \right)$ must be $\lambda_{k}(A)$ with associated unit eigenvector $u_{k}$. Consider first the special case when $\beta=\infty$.
By a simple union bound, observe that for $t>0$, \begin{align} \mathbb{P}\left[\textnormal{max}_{k \le i \le j \le d}|\langle Ew_{i},w_{j}\rangle|>t\right] &\le \left(l+\binom{l}{2}\right)C\exp(-ct^{\gamma}), \end{align} hence with high probability \begin{equation} \label{eqn:partial denom bound} \sum_{i=k}^{d} r_{i}^2 \eta_{i} \ge \underset{k \le i \le d}{\text{min}} \eta_i \ge \lambda_{k}(P)-t \end{equation} and \begin{align} \label{eqn:applicKatoLowerBound} \lambda_{k}(A) \ge \eta &= \sum_{i=k}^{d} r_i^2 \eta_i + \sum_{\substack{k\le i<j \le d}}2r_{i}r_{j} \langle Ew_{i},w_{j} \rangle\\ &\ge \lambda_{k}(P) - (l(l-1)+1)t. \end{align} Now suppose that $\beta < \infty.$ Then for the lower bound of the above interval, one has \begin{align*} \lambda_{k}(A) &\ge \eta - \frac{\epsilon^2}{\beta- \eta} = \frac{\beta \eta-\eta^2-\epsilon^2}{\beta-\eta} =\frac{-(\eta^2+\epsilon^2)+\beta \eta}{\beta-\eta}. \end{align*} Reversing the direction of the previous application of the Cauchy--Schwarz inequality in Eqn. (\ref{eqn:eta2epsilon2}) permits the numerator to be bounded below by \begin{align*} -(\sum_{i=k}^{d}\epsilon_{i}|r_{i}|)^2 +\sum_{i=k}^{d}r_{i}^{2}\eta_{i}(\beta-\eta_{i}) + \sum_{k\le i < j \le d}2r_{i}r_{j}(\beta- \eta_{i} -\eta_{j})\langle Ew_{i},w_{j}\rangle, \end{align*} whereas the denominator has the expansion \begin{align*} \sum_{i=k}^{d}r_{i}^{2}(\beta-\eta_{i}) + \sum_{k \le i < j \le d}2r_{i}r_{j}\langle Ew_{i},w_{j}\rangle. \end{align*} For the denominator terms, note that with high probability \begin{align*} \sum_{i=k}^{d}r_{i}^{2}(\beta-\eta_{i}) &\ge \beta - \lambda_{d}(P) - t,\\ \sum_{k \le i < j \le d}2r_{i}r_{j}\langle Ew_{i},w_{j}\rangle &\ge -l(l-1)t, \end{align*} while in the numerator, with high probability, \begin{align*} \sum_{k\le i < j \le d}2r_{i}r_{j}(\beta- \eta_{i} -\eta_{j})\langle Ew_{i},w_{j}\rangle &\ge -(\beta-\lambda_{k}(P)+\lambda_{d}(P)+2t)l(l-1)t. \end{align*} In the numerator of Eqn. 
(\ref{eqn:applicKatoLowerBound}), add and subtract the quantity $\left(\sum_{i=k}^{d}r_{i}^{2}\eta_{i}\right)l(l-1)t$, which is bounded below by $(\lambda_{k}(P)-t)l(l-1)t$. Combining these observations yields \begin{align*} \lambda_{k}(A) &\ge \frac{-(\sum_{i=k}^{d}\epsilon_i |r_i|)^2 +\sum_{i=k}^{d}r_{i}^{2}\eta_{i}(\beta-\eta_{i} - l(l-1)t)}{\sum_{i=k}^{d}r_{i}^2(\beta-\eta_{i} - l(l-1)t)}\\ &+ \frac{-(\beta-\lambda_{k}(P)+\lambda_{d}(P)-\lambda_{k}(P)+3t)l(l-1)t}{\beta - \lambda_{d}(P) - (l(l-1)+1)t}. \end{align*} By employing the same approach used to obtain the upper bound and taking negatives when necessary (thereby reversing the direction in which bounds hold), we obtain the lower bound for $\lambda_{k}(A)$ of the form \begin{align} \lambda_{k}(A) \ge \lambda_{k}(P) - t &- \zeta^{-}, \end{align} where $\zeta^{-} := \frac{l\|E\|_{2}^{2}+((\beta-\lambda_{k}(P))+(\lambda_{d}(P)-\lambda_{k}(P)) + 3t)l(l-1)t}{\beta-\lambda_{d}(P)-(l(l-1)+1)t}$. \end{proof} \subsection{Proof of Theorem \ref{thrm:spikeEvalsIERM}} \begin{proof} The hypotheses imply by \cite{Lu-Peng--2013} that $\|A-P\|_{2}=O(\sqrt{\Delta})$ with probability $1-o(1)$ as $n\rightarrow\infty$. Set $\alpha=(C-c)\Delta/2$ and $\beta=\infty$ as Kato--Temple threshold values. Choose $\delta\in(0,1]$ and set $t=\Theta(\log^{\delta}n)$. Then in Theorem \ref{thrm:IERMconditional}, for sufficiently large $n$, one has $\zeta^{+},\zeta^{-}=O(t)$, where the constant depends upon $k$ and $d$, as well as on other (unspecified) constants. So for $n\ge n_{0}$, $|\hat{\sigma}_{k}-\sigma_{k}| \le c_{k,d}t$ with probability $1-o(1)$ as claimed. \end{proof} \subsection{Proof of Theorem \ref{thrm:generalSVbound}} \begin{proof} \label{pf:generalSVbound} The proof follows essentially \emph{mutatis mutandis} as in Theorem \ref{thrm:IERMconditional} via Remark \ref{rem:HermDial}, Definition \ref{def:Ccgamma}, and Lemma \ref{lem:CcgammaBlowUp}.
In particular, observe that one has $\langle \tilde{\hat{M}}\tilde{w}_{i},\tilde{w}_{j}\rangle =\sigma_{i}\delta_{i,j} +\langle\tilde{E}\tilde{w}_{i},\tilde{w}_{j}\rangle$ for each pair $i,j$, while at the same time $\|\tilde{E}\|_{2}=\|E\|_{2}$. \end{proof} \subsection{Proof of Lemma \ref{lem:CcgammaSpectralProb}} \begin{proof} Let $E\in\R^{m \times n}$ be a $(C,c,\gamma)$-concentrated random matrix. Take $\mathcal{X}$ and $\mathcal{Y}$ to be $\frac{1}{4}$-nets of the spheres $S^{n-1}$ and $S^{m-1}$, respectively, with cardinalities at most $9^{n}$ and $9^{m}$, respectively. Then a standard net argument yields that for $t>0$, \begin{align*} \mathbb{P}\left[\|E\|_{2}>t\right] &\le \mathbb{P}\left[2 \underset{x\in\mathcal{X},y\in\mathcal{Y}}{\textnormal{max}}|\langle Ex,y\rangle| > t\right]\\ &\le 9^{m+n}\mathbb{P}\left[|\langle Ex,y \rangle|>t/2\right]\\ &\le C\exp((m+n)\log(9)-c(t/2)^{\gamma})\\ &\le C\exp(2\log(9)\textnormal{max}\{m,n\}-c(t/2)^{\gamma}). \end{align*} Choose $\epsilon > 0$ such that $2 + \epsilon > 2\left(2\log(9)/c\right)^{1/\gamma}$ and set $t= (2+\epsilon)\textnormal{max}\{m,n\}^{1/\gamma}$. Then for $c_{\epsilon,c,\gamma}:=\left(c(1+\epsilon/2)^{\gamma}-2\log(9)\right)>0$, we have \begin{align*} \mathbb{P}\left[\|E\|_{2}>(2+\epsilon)\textnormal{max}\{m,n\}^{1/\gamma}\right] &\le C\exp(-c_{\epsilon,c,\gamma}\textnormal{max}\{m,n\}). \end{align*} If in addition $m=n$ and $E$ is assumed to be symmetric, then since $\|E\|_{2}\equiv\textnormal{sup}_{\|x\|_{2}=1}|\langle Ex,x\rangle|$, one need only consider the $\frac{1}{4}$-net $\mathcal{X}$ for the purposes of a union bound. \end{proof} \newpage \bibliographystyle{amsplain}
\section{Introduction} Online social media systems are places where people talk about everything, sharing their take or their opinions about noteworthy events. Not surprisingly, sentiment analysis has become an extremely popular tool in several analytic domains, but especially on social media data. The number of possible applications for sentiment analysis in this specific domain is growing fast. Many of them rely on monitoring what people think or say about places, companies, brands, celebrities or politicians~\citep{Hu:2004:MSC:1014052.1014073,conf/epia/OliveiraCA13,DBLP:journals/corr/abs-1010-3003}. Due to the enormous interest and applicability, many methods have been proposed in the last few years (e.g., SentiStrength~\citep{thelwall2013heart}, VADER~\citep{hutto2014vader}, Umigon~\citep{levallois2013umigon}, SO-CAL~\citep{taboada2011lexicon}). What these methods have in common is that they are unsupervised\footnote{They do not require explicitly manually labeled data to be used in different domains.} tools, applied to identify the sentiment (i.e. positive, negative, or neutral) of short pieces of text such as tweets, in which the subject discussed in the text is known \textit{a priori}. The importance of being unsupervised is that, in a real application of sentiment analysis, it can be very hard to obtain previously labeled data to train a classifier. These tools are all currently accepted by the research community, as the state of the art is not yet well established. However, a recent effort~\citep{ribeiro2015benchmark} has shown that the prediction performance of these methods varies considerably from one dataset to another. For instance, in that study, Umigon was ranked in the first position in five datasets containing tweets and was among the worst in a dataset of news comments. Even among similar datasets, existing methods showed \textbf{low stability} in terms of their ranked positions.
This suggests that existing unsupervised approaches should be used very carefully, especially on unknown datasets. More importantly, it suggests that novel sentiment analysis methods should not only be superior to existing ones in terms of predictive performance, but should also be stable; that is, their relative prediction performance should vary minimally when used across many different datasets and contexts. Accordingly, in this article, we propose 10SENT, an unsupervised learning approach for sentence-level sentiment classification that tells whether a given piece of text (e.g., a tweet) is positive, negative, or neutral. In order to obtain better results than existing methods and guarantee stability across datasets, our approach exploits the combination of their classification outputs in a smart way. Our strategy relies on a bootstrapped learning classifier that creates a training set based on a combination of the answers provided by existing unsupervised methods. The intuition is that if the majority of the methods label an instance as positive, it is likely that it is positive, and it can be used to learn a classifier. This self-learning step gives our method a level of adaptability to the current (textual) context, reducing prediction performance instability, a key aspect of an unsupervised approach. We test our proposed approach by combining the top (best) ranked methods according to a recent benchmark study~\citep{ribeiro2015benchmark}. We evaluate 10SENT with thirteen gold standard datasets containing social media data from different sources and contexts. Those datasets consist of different sets of labeled data annotated for positive, negative and neutral texts from social network messages and from comments on news articles, videos, websites, and blogs. Our approach proved to be statistically superior to (or at least tied with) the existing individual methods in most datasets.
As a consequence, our approach obtains the best mean rank position considering all datasets. Thus, our experimental results demonstrate that our combined method not only improves the overall effectiveness significantly in many datasets, but also that its cross-dataset performance variability is minimal (maximum stability). In practical terms, this means that one can use our approach in any situation in which the base methods can be exploited, without any extra cost (since it is unsupervised) and without the need to discover the best method for a given context, and still obtain top-notch effectiveness in most situations. We also show that 10SENT is superior to basic baseline combinations, such as majority voting, with gains of up to 17\% against such baselines. This highlights the importance of our bootstrapped strategy for improving the effectiveness of the sentiment classification task. It is important to stress that the number of methods to be combined is not necessarily restricted to ten. Our self-learning approach is largely independent of the base methods, which means that it is highly extensible and can incorporate any additional method created in the future. To summarize, the main contribution of our work is an easily deployable and stable method that can produce results as good as or better than the best single method for most datasets (the performance of the base methods can vary a lot) in a completely unsupervised manner, being much superior to other unsupervised solutions such as majority voting and, in some cases, close to the best supervised ones. As far as we know, this is the first time non-trivial unsupervised learning is used along with ``state-of-the-practice'' sentiment analysis methods to solve important issues in the field such as stability, generality, and improved effectiveness, all at the same time.
Finally, as a second contribution, we start an investigation into an important question of our research: whether we can ``transfer'' some knowledge to our method from a dataset labeled with emoticons by Twitter users, which is easily available, meaning that no extra labeling effort is necessary. The main idea here is that such a transfer of knowledge could provide additional (unsupervised) information to our method, helping to improve it even further. \section{Related Work} There are currently two distinct categories of sentiment analysis methods used in the social media domain: lexicon-based methods and those based on machine learning techniques. Machine learning methods comprise supervised classifiers trained with labeled datasets in which classes correspond to polarities (e.g. positive, negative or neutral)~\citep{pang2002thumbs}. One major challenge in this scenario is the difficulty in obtaining annotated data to train (supervised) methods, due to issues such as cost and the inherent complexity of the labeling task. Accordingly, in here, we propose an unsupervised solution to deal with this sentiment analysis task. Lexicon-based methods exploit lexical dictionaries, that is, word lists associated with sentiments or other specific features, and are usually not based on supervised learning. There are some challenges with lexicon-based solutions, including the construction of the lexicon itself (which is usually done manually) and difficulties in adapting them to domains different from those for which they were originally designed. Such issues naturally call for a combination of solutions that exploits their strengths while overcoming their limitations. The idea of combining different sentiment analysis strategies, however, has only recently been explored, and most of the existing literature on such combinations involves a learning component. For instance, \cite{prabowo2009sentiment} proposed a new hybrid classification method based on the combination of different strategies.
This work combines rule-based classification and other supervised learning strategies into a new hybrid sentiment classifier. \cite{DangZC10} combined machine learning and semantic orientation, which considers words expressing positive or negative sentiments. \cite{zhang} explored an entity-level sentiment analysis method specific to Twitter data. In that work, the authors combined lexicon- and learning-based methods in order to increase the recall rate of individual methods. Differently from our work, this method was proposed for the entity level, while ours focuses on sentence-level granularity. Similarly, \cite{Mudinas2012} proposed \textit{pSenti}, a method for sentiment analysis developed as a combination of lexicon and learning approaches for a different granularity level, the concept level (semantic analysis of texts by means of web ontologies or semantic networks). \cite{MoraesVPDAG13} investigated approaches to detect the polarity of FourSquare tips using supervised (SVM, Maximum Entropy and Na{\"i}ve Bayes) and unsupervised (SentiWordNet) learning. They also investigated hybrid approaches, developed as a combination of the learning and lexical algorithms. All techniques were tested separately and combined, but the authors did not obtain significant improvements with the hybrid approaches over the best individual techniques for this particular domain. \cite{polly2016offtheshelf} analyzed different datasets and considered supervised machine learning in the context of classifier ensembles. Their methodology also consisted of combining a set of different sentiment analysis methods in an ``off-the-shelf'' strategy to generate the ensemble method. Their results suggest that it is possible to obtain significant improvements with ensemble techniques depending on the domain. Here, we focus on an unsupervised solution enhanced with an automatic bootstrapping step.
In a more recent effort in the ensemble direction, \cite{gonccalves2013comparing} exploited the power of combining some of the state-of-the-art methods, showing that they can outperform individual methods. Their results show the potential of simple solutions such as majority voting, but the authors did not delve deeply into more complex combination strategies. Some approaches use a limited amount of labeled data (also known as weakly supervised classifiers) in order to predict the sentiment in some domains. For example, \cite{siddiqua2016} proposed a weakly supervised classifier for Twitter sentiment analysis. In this work, Naive Bayes (NB) is combined with a rule-based classifier based on several publicly available sentiment lexicons to extract positive and negative sentiment words. After the rule-based classifier is applied, NB is used to classify the remaining tweets as positive or negative. \cite{deriu2017leveraging} also uses a weakly supervised approach for the multi-language sentiment classification task. The developed method evaluates large amounts of weakly supervised data in various languages to train a multi-layer convolutional neural network, but its focus is on multilingual sentiment classification. WikiSent, proposed by \cite{mukherjee2012wikisent}, is also a weakly supervised system for sentiment classification. It uses text summarization focused on the movie review domain in order to obtain knowledge about the various technical aspects of the movie. The summaries of the opinions are then classified using the SentiWordNet lexicon method. To summarize, many authors have proposed supervised ensemble classifiers; differently from those, we propose a novel approach that combines a series of ``state-of-the-practice'' existing methods in a totally unsupervised and much more elaborate manner, exploiting bootstrapping and (unsupervised) transfer learning.
Another major difference of our effort is that we evaluate using multiple labeled datasets, covering multiple domains and social media sources. This is critical for an unsupervised approach given that the performance of the base methods varies significantly. As we shall see, our solution produced the most consistent results across all datasets and contexts. \section{Combining Methods}\label{sec:methods} Sentiment analysis can be applied to different tasks. We restrict our focus to combining efforts related to detecting the polarity (i.e., positivity, negativity, neutrality) of a given short text (i.e., sentence level). In other words, given a set $S$ of opinionated sentences, we want to determine whether each sentence $s \in S$ expresses a positive, negative or neutral opinion. We focus our effort on combining only unsupervised ``off-the-shelf'' methods. Our strategy consists of using the output label predicted by each individual method as input for a bootstrapping technique -- a self-starting process supposed to proceed without external input. Next we present the proposed technique. Our \textbf{bootstrapping technique} is an unsupervised machine learning algorithm that uses the sentiment scores produced by each individual sentiment analysis method to create a training set for a supervised machine learning algorithm. With this algorithm, we are able to produce a final result regarding the sentiment of a sentence. Note that we do not need any manually labeled data in order to produce the model. \sloppy We describe the method in Algorithm~\ref{alg:bootstrapping}. Suppose we have access to a set of sentences $S = \{s_{tr_0}, s_{tr_1}, s_{tr_2},...,s_{tr_n}\}$, which are candidates to be part of our training data.
Our goal is to use the unlabeled data $S$ in order to produce a training set $train$ and then apply it to unseen sentences for which we want predictions (here represented as $test = \{s_{tst_0}, s_{tst_1}, s_{tst_2},...,s_{tst_m}\}$), generating the set of predictions $P$. The training data $train$ is represented by a set of pairs $(c,s)$, where $c$ is the class representing a sentiment (positive, negative or neutral) obtained by using the information of each sentiment analysis method described in Section~\ref{sec:methods}, and $s$ is a sentence represented by a set of features which, in our case, corresponds to the off-the-shelf sentiment methods' outputs. \begin{algorithm} \centering \small \caption{Bootstrapping Algorithm} \label{alg:bootstrapping} \begin{algorithmic}[1] { \Require Minimum Agreement $A$ \Require Minimum Confidence $C$ \Require The set of $n$ sentences $S = \{s_{tr_0}, s_{tr_1}, s_{tr_2},...,s_{tr_n}\}$, candidates to be part of our training data \Require The set of $m$ sentences we want to predict: $test = \{s_{tst_0}, s_{tst_1}, s_{tst_2},...,s_{tst_m}\}$ \State Let $train$ be our training set, represented by pairs $(c,s)$ where $c$ is the target class and $s$ is the sentence \State Let $P$ be our result, represented by a set of triplets $(s, predicted\_class, confidence)$: the instance, the predicted class and its confidence \ForAll{$s \in S$ } \If{$agree(s) \ge A$} \State \textbf{Add} the pair $(agreeClass(s),s)$ to $train$ \State \textbf{Remove} $s$ from $S$ \EndIf \EndFor \State \textbf{Create} a model $M$ using $train$ \State \textbf{Apply} the model $M$ to $S$ to obtain the predictions $P$ \ForAll{$(s, predicted\_class, confidence) \in P$ } \If{$confidence \ge C$} \State \textbf{Add} the pair $(predicted\_class,s)$ to $train$ \EndIf \EndFor \State \textbf{Create} a model $M$ using $train$ \State \textbf{Apply} the model $M$ to $test$ to obtain the predictions $P$ } \end{algorithmic} \end{algorithm} The $test$ set is represented by a
set of sentences $test = \{s_{tst_0}, s_{tst_1}, s_{tst_2},...,s_{tst_m}\}$ and the prediction set $P$ contains triplets $(s, predicted\_class, confidence)$ representing the sentence, the predicted class and the confidence (i.e., a score representing how confident the machine learning method is in its prediction), respectively. For each sentence $s$, the function $agree(s)$ computes the agreement level, in other words, the maximum number of sentiment analysis methods agreeing with each other regarding the sentiment of sentence $s$. If this number reaches the threshold $A$, we add the sentence $s$ to the training set $train$, removing it from $S$. Note that when adding a sentence to $train$ we use the function $agreeClass(s)$ to obtain the class $c$, which will be the sentiment assigned to $s$. Class $c$ is the class assigned to sentence $s$ by the majority of the sentiment analysis methods. After doing this for all the sentences in $S$, only sentences for which we could not infer a label with enough agreement remain in $S$. Then, in order to increase our training data, we use our training set $train$ to train a classification model and apply it to the sentences remaining in $S$, producing the predictions $P$. By doing so, we are able to use $P$ to add more sentences to $train$. In order to avoid noise, we only add sentences for which the learned model produces a confidence higher than a threshold $C$. Finally, we retrain with the new set $train$ and apply the resulting model to $test$ in order to produce, for each sentence $s$, a single class $c$ representing its final sentiment. As mentioned before, our approach consists of combining popular \textbf{``off-the-shelf'' sentiment analysis methods} freely available for use. It is important to highlight that the number of methods to be combined is not necessarily restricted to ten.
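The agreement-based labeling and the confidence-based augmentation described above can be sketched as follows. This is a minimal illustration, not our actual implementation: the classifier factory is left abstract and the function and label names are placeholders.

```python
from collections import Counter

def agree(votes):
    """Size of the largest group of base methods that agree on a label."""
    return Counter(votes).most_common(1)[0][1]

def agree_class(votes):
    """Label chosen by the largest group of agreeing base methods."""
    return Counter(votes).most_common(1)[0][0]

def bootstrap(method_votes, A, C, train_model):
    """Build a training set from base-method votes (sketch of Algorithm 1).

    method_votes: one list of labels per sentence (one label per method).
    train_model:  callable (X, y) -> fitted classifier exposing
                  predict_proba and classes_, e.g. a Random Forest.
    """
    train, undecided = [], []
    for votes in method_votes:
        if agree(votes) >= A:                     # enough methods agree
            train.append((agree_class(votes), votes))
        else:
            undecided.append(votes)
    # Second pass: label the remaining sentences with the learned model,
    # keeping only predictions above the confidence threshold C.
    model = train_model([v for _, v in train], [c for c, _ in train])
    for votes in undecided:
        proba = model.predict_proba([votes])[0]
        best = max(range(len(proba)), key=lambda i: proba[i])
        if proba[best] >= C:
            train.append((model.classes_[best], votes))
    return train
```

The returned pairs are then used to retrain the model once more before predicting on the test sentences.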
In fact, there is no limit on the number of methods we can include as part of our approach -- thus, we focus on the ones evaluated by \cite{ribeiro2015benchmark}, as it provides the most recent and complete sentence-level benchmark of off-the-shelf sentiment analysis methods. A few small adaptations were made to some methods so that they output positive, negative and neutral decisions. For this, we have used the code shared by the authors of~\cite{ribeiro2015benchmark}. More details about these implementations can be found there. The considered methods include: VADER~\citep{hutto2014vader}, AFINN~\citep{nielsen2011new}, OpinionLexicon~\citep{Hu:2004:MSC:1014052.1014073}, Umigon~\citep{levallois2013umigon}, SO-CAL~\citep{taboada2011lexicon}, Pattern.en~\citep{smedt2012pattern}, Sentiment140~\citep{mohammad2013nrc}, EmoLex~\citep{journals/ci/MohammadT13}, Opinion Finder~\citep{wilson2005opinionfinder}, and SentiStrength~\citep{thelwall2013heart}. A brief description of these methods can also be found in~\cite{ribeiro2015benchmark}. We also note that all methods exploit light-weight unsupervised approaches that rely on lexical dictionaries, usually implemented as a hash-like data structure. For this reason, neither our combined method nor the individual ones require any powerful hardware platform. \if 0 \begin{table}[H] \scriptsize \begin{tabular}{|l|l|} \hline \textbf{Method} & \textbf{Description} \\ \hline VADER \cite{hutto2014vader} & \begin{tabular}{p{8cm}} ~This method is based on a lexicon dictionary. Valence Aware Dictionary for sEntiment Reasoning (VADER) is a human-validated sentiment analysis method developed for micro-blogging and social media, requiring no training data. \end{tabular} \\ \hline AFINN \citep{nielsen2011new} & \begin{tabular}{p{8cm}} ~Builds a Twitter based sentiment Lexicon including Internet slangs and obscene words. AFINN uses a dictionary created to provide emotional ratings for English words.
\end{tabular} \\ \hline OpinionLexicon \citep{Hu:2004:MSC:1014052.1014073} & \begin{tabular} { p{8cm}} ~Also known as Sentiment Lexicon, it is a lexical-based method consisting of two lists with 2,006 positive words and 4,783 negative words. It includes slang, misspellings, morphological variants, and social-media markups.\end{tabular} \\ \hline Umigon \citep{levallois2013umigon} & \begin{tabular}{p{8cm}} ~Disambiguate tweets using lexicon with heuristics to detect negations plus elongated words and hashtags evaluation.\end{tabular} \\ \hline SO-CAL \citep{taboada2011lexicon} & \begin{tabular}{p{8cm}} ~Creates a new Lexicon with unigrams (verbs, adverbs, nouns and adjectives) and multi-grams (phrasal verbs and intensifiers) hand ranked with scale +5 (strongly positive) to -5 (strongly negative). Authors also included part of speech processing, negation and intensifiers. \end{tabular} \\ \hline Pattern.en \citep{smedt2012pattern} & \begin{tabular}{p{8cm}} ~Python programming package (toolkit) that deal with NLP, Web Mining and Sentiment Analysis. Sentiment analysis is provided through averaging scores from words in the sentence according to a bundle lexicon of adjectives. \end{tabular} \\ \hline Sentiment140 Lexicon \citep{mohammad2013nrc} & \begin{tabular}{p{8cm}} ~It consists in a dictionary of words associated with positive and negative sentiments. The dictionary of Sentiment140 Lexicon contains 66,000 unigrams (single words), 677,000 bigrams (two-word sequence) and 480,000 of unigram--unigram pairs, unigram--bigram pairs, bigram--unigram pairs, or a bigram--bigram pairs. \end{tabular} \\ \hline EmoLex \citep{journals/ci/MohammadT13} & \begin{tabular}{p{8cm}} ~Also called NRC Emotion Lexicon, it is lexical method with up 10,000 word-sense pairs. Each entry lists the association of a word-sense pair with 8 basic emotions, but it also provides results as positive or negative feelings. We used EmoLex version 0.92 in our work. 
\end{tabular} \\ \hline Opinion Finder \citep{wilson2005opinionfinder} & \begin{tabular}{p{8cm}} ~Performs subjectivity analysis through a framework with a lexical analysis step followed by a machine learning step. \end{tabular} \\ \hline SentiStrength \citep{thelwall2013heart} & \begin{tabular}{p{8cm}} ~SentiStrength was built with the use of supervised and unsupervised classification methods. SentiStrength classifies positive and negative polarity strength (from 2 to 5) separately as the default setup of the method. We have used its option that just produces polarity results, ignoring the scale of the polarity strength. \end{tabular} \\ \hline \end{tabular} \centering \caption{Description of individual methods combined in our approach} \label{table:methods} \normalsize \end{table} \fi \section{Methodology} Next we present the gold standard datasets used to evaluate our approach, the baseline combined method, the evaluation metrics and the experimental setup. In our evaluation, we use 13 \textbf{datasets} of messages labeled as positive, negative and neutral from several domains, including messages from social networks, opinions and comments in news articles and videos. These datasets were kindly shared by the authors of~\citep{ribeiro2015benchmark}. We only consider those with three classes (positive, negative, and neutral). The number of messages varies from a few hundred to a few thousand. The datasets are usually very skewed, with one or two classes outnumbered by the majority one by large margins. The median of the average number of phrases per message is around 2, while the average number of words varies from around 15 to approximately 60. We refer the reader to their work for more details about the datasets.
We emphasize that the diversity and amount of different datasets used in our evaluation allow us to accurately evaluate not only the prediction performance of the proposed method, but also measure the extent to which a method's result varies when it is tested for different social media sources. \sloppy \if 0 \begin{table}[h!] \centering \scriptsize \begin{tabular}{|l|l|l|l|l|l|l|} \hline \textbf{Dataset} & \textbf{Messages} & \textbf{Positives} & \textbf{Negatives} & \textbf{Neutrals} & \textbf{\begin{tabular}[c]{@{}l@{}}Average \#\\ of phrases\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Average \#\\ of words\end{tabular}} \\ \hline Sentistrength\_bbc & 1,000 & 99 & 653 & 248 & 3.98 & 64.39 \\ \hline Sentistrength\_digg & 1,077 & 210 & 572 & 295 & 2.50 & 33.97 \\ \hline Vader\_nyt & 5,190 & 2,204 & 2,742 & 244 & 1.01 & 17.76 \\ \hline Nikolaos\_ted & 839 & 318 & 409 & 112 & 1 & 16.95 \\ \hline Sentistrength\_youtube & 3,407 & 1,665 & 767 & 975 & 1.78 & 17.68 \\ \hline Sentistrength\_myspace & 1,041 & 702 & 132 & 207 & 2.22 & 21.12 \\ \hline Sentistrength\_rw & 1,046 & 484 & 221 & 341 & 4.79 & 66.12 \\ \hline debate & 3,238 & 730 & 1,249 & 1,259 & 1.86 & 14.86 \\ \hline Sentistrength\_twitter & 4,242 & 1,340 & 949 & 1,953 & 1.77 & 15.81 \\ \hline English\_dailabor & 3,771 & 739 & 488 & 2,536 & 1.54 & 14.32 \\ \hline aisopos\_ntua & 500 & 139 & 119 & 222 & 1.90 & 15.44 \\ \hline sanders & 3,737 & 580 & 654 & 2,503 & 1.60 & 15.03 \\ \hline tweet\_semevaltest & 6,087 & 2,223 & 837 & 3,027 & 1.86 & 20.05 \\ \hline \end{tabular} \caption{Labeled datasets details.} \label{tab:labeled_dataset} \end{table} \fi Regarding the \textbf{baseline}, as 10SENT explores the output of 10 other individual methods, Majority Voting is a natural baseline\footnote{Notice that Weighted Majority Voting is not an option as a fair baseline, since to determine the weights we would need some type of supervision, something that our method does not exploit. 
In any case, we compare our solution to a version of the Weighted Majority Voting method in the `Upperbound Comparison' Section.}. Voting is one of the simplest ways to combine several methods. Assuming that each individual method gives us a unique label as output for a sentence, the final result of Majority Voting is the label that the majority of the base classifiers returned as output for that sentence\footnote{In this method, ties are possible. In this case, we assign the \textit{Neutral} class to the sentence.}. The major advantages of this approach are its simplicity and extensibility, i.e., it is very easy to include new (off-the-shelf) methods. Also, no training data is necessary for this method, which fits well with our purpose of an unsupervised solution. On the other hand, majority voting is not as flexible as 10SENT in coping with all the diversity of the methods. This is due to the training phase of 10SENT, which allows it to capture some idiosyncrasies of each one of them. \if 0 More specifically, this combination works as follows: given an unlabeled instance $x$, the candidate label set $L = \{ w_1 , w_2 , w_3 \}$ and a set of voting methods $M = \{ m_1 , m_2 , m_3 , \ldots, m_n \}$, we define the set $V_{M\times L}$ of votes $v_{ij}$ of a sentence $x$ for class $w_j$ given by the method $m_i$ as follows: $v_{ij}$ =\begin{cases} 1,& \text{ if method $m_i$ classified $x$ as class $w_j$};\\ 0,& \text{otherwise}. \end{cases} The final result $R$ is given by the class $j$ with the maximum number of votes: \begin{equation*} R = \arg\max_j\bigg( \sum_{i=1}^{|M|} v_{ij} \bigg) \end{equation*} \fi As \textbf{evaluation metrics} we use the popular Micro and Macro-F1 scores. Micro-F1 captures the overall accuracy across all classes. Macro-F1 calculates the F1 score for each class separately and reports the average of these scores over all classes.
It is important in datasets with high skewness (as is the case here) or in problems in which we are equally interested in the effectiveness for all classes, not only the majority ones. \if 0 The first \textbf{evaluation metric} is the popular (Micro)F1-score. It can be calculated by computing the number of correctly predicted positive and negative instances for an individual class in terms of precision and recall. The precision rate is the proportion of retrieved instances that were correctly classified for this class, while recall is the proportion of real instances of this class that were retrieved from the whole dataset. Then, the F-score is the harmonic mean of precision $p$ and recall $r$, calculated for all classes as: \begin{equation*} \label{eq:f1score} F_1 = 2 \cdot \frac{p \cdot r}{p+r} \end{equation*} We also use Macro-F1, or the macro-average measure, to compute the F1 score among all labels. This measure can be used when we want to know how the system performs overall across different classes. As we have three different classes (positive, negative and neutral), we calculate an average precision and recall per class as in the next equation: \begin{equation*} \label{eq:macrof1} ( p_{macro}, r_{macro} )= \bigg( \frac{1}{q} \sum_{\lambda=1}^q p_\lambda , \frac{1}{q} \sum_{\lambda=1}^q r_\lambda \bigg) \end{equation*} where $\lambda$ is a label and $L = \{{\lambda}_1 , {\lambda}_2 , ... , {\lambda}_q \}$ is the set of all labels. Finally, the Macro measure is simply the harmonic mean of these two averages $p_{macro}$ and $r_{macro}$, as in the F1-score equation. \fi As a third evaluation metric, we use the \textit{Mean Ranking}. As we have a potentially large number of results, considering all base methods and datasets, it is important to have a global measure of performance for all these combinations in a single metric. To do so, we rank the methods for each dataset. The Mean Ranking is the simple sum of the ranks obtained by a method in each dataset divided by the total number of datasets.
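This per-dataset ranking and averaging can be sketched directly; the function and dataset names below are illustrative placeholders, not part of our implementation:

```python
def ranks_from_scores(scores):
    """Rank methods within one dataset by score (rank 1 = best)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {method: i + 1 for i, method in enumerate(ordered)}

def mean_ranking(per_dataset_scores, method):
    """Average rank of `method` over all datasets."""
    ranks = [ranks_from_scores(s)[method] for s in per_dataset_scores.values()]
    return sum(ranks) / len(ranks)
```

A method that ranks first on half of the datasets and second on the other half, for instance, obtains a Mean Ranking of 1.5.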
It is important to notice that the rank was calculated based on Macro-F1 because of the high imbalance among the classes in several datasets. \if 0 , i.e.: \begin{equation*} MeanRank(m) = \frac{ \sum_{i=1}^{|D|} r_{i} } { |D| } \end{equation*} in which $D$ is the set of datasets and $r_i$ is the rank of the method $m$ for dataset $i$. \fi Finally, our \textbf{experiments} were run using a 5-fold cross-validation setup, with the best parameters for the learning methods found using cross-validation within the training set. This procedure was applied to all considered datasets. To compare the average results in the test sets of our experiments, we assess the statistical significance of our results by means of a paired t-test with 95\% confidence. We only consider as statistically significant those results for which the $p$-value is less than 0.05, and any stated claim of superiority is based on these tests. Finally, we adapted the original output values of the base methods to our corresponding polarities. In particular, an output equal to zero was considered as neutral, or ``absence of opinion''. \section{Experimental Results} Here, we discuss some decisions taken during the development of 10SENT and start some investigation of issues related to transfer learning for sentiment analysis, showing the potential of this technique to improve our results. \subsection{Choice of The Classifier} 10SENT is an unsupervised machine learning method, as it does not exploit manually labeled data, only the agreement among the base methods. Given that the bootstrapping process adds a set of instances with high confidence into a training set, it is possible to perform a learning step exploiting such data in the usual training/validation format. Because of this, there is a need to investigate which classifier best fits this application. Thus, we performed a series of tests with our method using different classification algorithms in order to choose the best one for this task.
In all these tests, we used all 10 methods of 10SENT. We tested three different and widely used algorithms in our approach: \textit{Support Vector Machines} (SVM) \citep{ChangSVM}, \textit{Random Forest} (RF) \citep{randomforest} and \textit{$k$-Nearest Neighbors} (KNN) \citep{knn}. We used the implementations of RF and KNN provided in scikit-learn\footnote{Available at \url{http://scikit-learn.org/stable/index.html}} and, for SVM, the LibSVM\footnote{Available at \url{http://www.csie.ntu.edu.tw/~cjlin/libsvm/}} package. Specifically, we use a \textit{radial basis function} (RBF) kernel with a grid search for the best parameters. Overall, Random Forest produced the best results in most datasets, being the final choice for our bootstrapping method. \if 0 \begin{table}[!htpb] \centering \small \begin{tabular}{|l|l|l|l|} \hline \textbf{DATASET} & {KNN} & {SVM} & {Random Forest} \\ \hline \textbf{english\_dailabor} & 55.5($ \pm $ 1.7) & 58.2($ \pm $ 1.6) & 60.9($ \pm $ 1.7) \\ \hline \textbf{aisopos\_ntua} & 51.7($ \pm $ 4.3) & 59.0($ \pm $ 1.0) & 59.7($ \pm $ 2.5) \\ \hline \textbf{sentistrength\_digg} & 46.0($ \pm $ 1.6) & 51.7($ \pm $ 2.8) & 49.4($ \pm $ 1.0) \\ \hline \textbf{debate} & 40.6($ \pm $ 1.5) & 23.7($ \pm $ 19.8) & 42.6($ \pm $ 1.7) \\ \hline \textbf{sentistrength\_rw} & 37.3($ \pm $ 5.0) & 17.3($ \pm $ 14.6) & 34.6($ \pm $ 1.8) \\ \hline \end{tabular} \caption{Results of different classifier algorithms for the learning step of 10SENT.} \label{table:classificationalgorithms} \end{table} \fi \subsection{Choice of Number of Methods} To verify the coherence with the results obtained by Majority Voting, we performed a test varying the number of methods used in the combination. In this test, we want to check how the addition of a method impacts the outcome. We evaluated the results of 10SENT combining from 3 up to 10 methods. In these experiments, we included methods from the best to the worst in each dataset, according to~\cite{ribeiro2015benchmark}.
We noted that adding a new method improves the overall results, but the improvements get smaller with each new inclusion. Thus, after a certain number, the gain is minimal. Therefore, we fixed 10 as a good choice for the number of methods in the 10SENT core. \if 0 \begin{table}[!htpb] \centering \small \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{9}{|c|}{\textbf{\#Methods}} \\ \hline \textbf{DATASET} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} \\ \hline \textbf{english\_dailabor} & 49.90 & 65.68 & 68.26 & 66.38 & 67.09 & 68.68 & 66.70 & 69.68 \\ \hline \textbf{aisopos\_ntua} & 41.90 & 59.21 & 57.78 & 61.56 & 59.99 & 58.13 & 59.78 & 65.21 \\ \hline \textbf{tweet\_semevaltest} & 39.54 & 56.31 & 61.51 & 61.65 & 62.06 & 62.94 & 62.26 & 62.64 \\ \hline \textbf{sentistrength\_twitter} & 36.26 & 49.24 & 56.20 & 57.88 & 57.91 & 57.96 & 57.50 & 58.68 \\ \hline \textbf{sentistrength\_youtube} & 38.14 & 50.01 & 54.98 & 56.56 & 57.13 & 55.83 & 55.45 & 56.93 \\ \hline \textbf{sentistrength\_myspace} & 26.88 & 44.94 & 47.37 & 53.04 & 52.91 & 51.53 & 46.99 & 55.00 \\ \hline \textbf{sanders} & 46.40 & 54.22 & 54.34 & 54.59 & 55.84 & 57.14 & 54.30 & 53.03 \\ \hline \textbf{sentistrength\_digg} & 28.16 & 43.62 & 48.67 & 50.44 & 50.59 & 52.47 & 51.55 & 54.18 \\ \hline \textbf{sentistrength\_rw} & 30.47 & 42.60 & 50.37 & 48.30 & 49.55 & 48.61 & 47.39 & 47.30 \\ \hline \textbf{sentistrength\_bbc} & 21.92 & 35.79 & 45.35 & 47.15 & 48.41 & 49.45 & 47.34 & 45.72 \\ \hline \textbf{debate} & 25.10 & 34.03 & 41.40 & 45.76 & 45.17 & 45.11 & 42.74 & 45.06 \\ \hline \textbf{nikolaos\_ted} & 24.42 & 34.82 & 41.32 & 42.44 & 46.19 & 44.11 & 44.80 & 42.56 \\ \hline \textbf{vader\_nyt} & 9.38 & 19.64 & 30.27 & 34.97 & 37.42 & 36.83 & 36.98 & 37.97 \\ \hline \end{tabular} \caption{Test with 10SENT varying the number of methods used in combination.} \label{tab:10sentNofmethods} \end{table} \fi \subsection{Choice of
Parameters} In our method, we need to define two important parameters: the agreement and the confidence level. Accordingly, we performed a study to better understand how our method performs when varying such parameters. In more detail, the first tested parameter was the minimum number of agreements among the methods that we require in the first round of classification (Agreement Level). Table \ref{table:concordance} shows Macro-F1 results for each number of agreements. As we have a total of ten base methods, this table shows bootstrapping results when we use instances on which at least 4 methods agree with each other, at least 5, and so on. We do not show results with fewer than 3 agreements since there were no instances in that scenario. \begin{table}[!htpb] \centering \small \begin{tabular}{|l|l|l|l|l|l|l|l|l|} \hline \multicolumn{9}{|c|}{\textbf{\#Concordants}} \\ \hline \textbf{DATASET} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} \\ \hline \textbf{english\_dailabor} & 69.58 & 69.18 & 69.23 & 68.50 & 66.91 & 64.81 & 59.58 & 60.75 \\ \hline \textbf{aisopos\_ntua} & 60.93 & 60.54 & 56.87 & 57.74 & 64.14 & 59.65 & 54.58 & 58.93 \\ \hline \textbf{tweet\_semevaltest} & 63.54 & 63.58 & 63.64 & 63.47 & 63.82 & 61.56 & 59.36 & 59.15 \\ \hline \textbf{sentistrength\_twitter} & 56.75 & 58.32 & 59.13 & 58.34 & 57.58 & 55.35 & 54.27 & 57.44 \\ \hline \textbf{sentistrength\_youtube} & 55.63 & 55.22 & 55.67 & 56.39 & 56.65 & 55.44 & 54.69 & 54.50 \\ \hline \textbf{sanders} & 56.23 & 55.72 & 55.94 & 55.64 & 55.27 & 52.73 & 50.77 & 48.14 \\ \hline \textbf{sentistrength\_digg} & 51.13 & 51.29 & 53.86 & 54.58 & 51.51 & 50.98 & 48.06 & 51.83 \\ \hline \textbf{sentistrength\_myspace} & 46.76 & 48.30 & 48.71 & 50.31 & 50.52 & 51.02 & 54.34 & 39.44 \\ \hline \textbf{sentistrength\_rw} & 48.96 & 50.09 & 46.62 & 49.73 & 49.00 & 46.52 & 46.03 & 47.58 \\ \hline \textbf{sentistrength\_bbc} & 49.15 & 50.62 & 46.95 & 47.41 & 46.31 &
45.04 & 45.75 & 46.57 \\ \hline \textbf{debate} & 45.61 & 45.82 & 43.69 & 43.97 & 44.47 & 44.99 & 43.57 & 43.10 \\ \hline \textbf{nikolaos\_ted} & 46.29 & 44.71 & 48.13 & 46.43 & 46.97 & 47.52 & 44.50 & 47.24 \\ \hline \textbf{vader\_nyt} & 36.13 & 36.19 & 36.32 & 37.35 & 38.21 & 36.89 & 32.54 & 34.02 \\ \hline \end{tabular} \caption{Comparative table of results (Macro-F1) for the 10SENT bootstrapping under different agreement levels among the base methods} \label{table:concordance} \end{table} As we can see in this table, the extreme cases of agreement or disagreement produce the worst results. There is only a small number of instances with 100\% agreement, which harms the training of the algorithm. On the other hand, when the agreement is very low, there is a lot of noise in the training data. In sum, the Agreement Level represents a trade-off between the amount of available data for training and the amount of noise. The second parameter is the Confidence Level, defined in Algorithm~\ref{alg:bootstrapping} as the constant $C$. The Confidence Level is the confidence ratio of the Random Forest algorithm in its predictions. We use it to decide which instances to add to the training set during the bootstrapping step. A similar variation of this parameter was tested, as shown in Table \ref{table:confidence}.
\begin{table}[!htpb] \centering \small \begin{tabular}{l|l|l|l|l|l|l|l|l|} \cline{2-9} & \multicolumn{8}{c|}{Confidence Level} \\ \hline \multicolumn{1}{|c|}{DATASET} & \multicolumn{1}{c|}{0.3} & \multicolumn{1}{c|}{0.4} & \multicolumn{1}{c|}{0.5} & \multicolumn{1}{c|}{0.6} & \multicolumn{1}{c|}{0.7} & \multicolumn{1}{c|}{0.8} & \multicolumn{1}{c|}{0.9} & \multicolumn{1}{c|}{1.0} \\ \hline \multicolumn{1}{|l|}{english\_dailabor} & 67.80 & 66.82 & 67.28 & 67.50 & 67.65 & 67.63 & 67.57 & 67.64 \\ \hline \multicolumn{1}{|l|}{aisopos\_ntua} & 64.53 & 64.19 & 63.81 & 64.95 & 66.21 & 60.89 & 57.57 & 57.36 \\ \hline \multicolumn{1}{|l|}{tweet\_semevaltest} & 62.56 & 62.88 & 62.75 & 62.65 & 62.82 & 63.33 & 63.21 & 63.02 \\ \hline \multicolumn{1}{|l|}{sentistrength\_twitter} & 58.11 & 58.08 & 58.47 & 59.24 & 58.14 & 56.75 & 57.71 & 56.12 \\ \hline \multicolumn{1}{|l|}{sentistrength\_youtube} & 56.64 & 56.50 & 55.59 & 55.55 & 56.28 & 56.93 & 56.86 & 56.04 \\ \hline \multicolumn{1}{|l|}{sanders} & 55.22 & 55.88 & 54.83 & 54.62 & 55.65 & 54.70 & 53.81 & 53.78 \\ \hline \multicolumn{1}{|l|}{sentistrength\_myspace} & 51.86 & 51.77 & 52.89 & 52.46 & 55.22 & 52.75 & 54.51 & 53.03 \\ \hline \multicolumn{1}{|l|}{sentistrength\_digg} & 51.75 & 50.98 & 53.15 & 51.93 & 52.59 & 51.11 & 50.86 & 50.85 \\ \hline \multicolumn{1}{|l|}{sentistrength\_rw} & 46.46 & 45.60 & 50.24 & 49.61 & 50.84 & 50.59 & 45.77 & 47.30 \\ \hline \multicolumn{1}{|l|}{sentistrength\_bbc} & 45.67 & 45.75 & 46.16 & 45.17 & 47.19 & 48.00 & 46.01 & 48.24 \\ \hline \multicolumn{1}{|l|}{debate} & 45.77 & 45.95 & 46.10 & 46.53 & 45.48 & 45.26 & 43.94 & 43.93 \\ \hline \multicolumn{1}{|l|}{nikolaos\_ted} & 45.80 & 44.70 & 43.05 & 46.42 & 44.76 & 46.00 & 45.79 & 43.48 \\ \hline \multicolumn{1}{|l|}{vader\_nyt} & 38.53 & 38.33 & 38.49 & 38.65 & 37.69 & 37.27 & 36.94 & 36.10 \\ \hline \end{tabular} \caption{Comparative table of results (F1) for 10SENT by different confidence levels added to training in classification.} 
\label{table:confidence} \end{table} As a final conclusion of these experiments, we arrived at a value of 7 for agreement and 0.7 for confidence as the values that, in most datasets, achieve the ``best'' balance between quantity and quality for the training data. \subsection{Bag of Words vs. Predictions} After defining the parameters for the classification process, additional features can be extracted and combined with the predictions of the base methods to improve results. One example is the text of the messages itself. From the text, we can extract the Bag of Words (BoW) representation of the sentences included in the training. We used the traditional TF-IDF representation, calculated for each sentence in each dataset. This representation was concatenated with the results of each method, as in the traditional 10SENT. Table \ref{tab:bow} shows the results comparing the use of these different sets of features: the predictions output by all base methods and the bag of words. Here, we used all the best parameters discovered in the previous sections, including the Random Forest classifier. Note that the combination of these two sets of features improves the results compared with each one separately. Although BoW alone presented better results in a few datasets, it is not the best in all of them, which suggests that using both sets of features is the best option for 10SENT. In the next experiments, we always use this joint representation (BoW + BaseMethods) when we mention 10SENT.
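A sketch of this joint representation using scikit-learn, which our implementation already relies on; the vectorizer settings and the polarity label values below are illustrative assumptions, not the exact configuration used in our experiments:

```python
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder

def joint_features(sentences, method_preds):
    """Concatenate TF-IDF bag of words with base-method predictions.

    method_preds: one row per sentence, one column per base method,
    each entry a polarity label such as "pos", "neg" or "neu".
    """
    vectorizer = TfidfVectorizer().fit(sentences)
    encoder = OneHotEncoder(handle_unknown="ignore").fit(method_preds)
    X_text = vectorizer.transform(sentences)    # TF-IDF features
    X_preds = encoder.transform(method_preds)   # one-hot method outputs
    return hstack([X_text, X_preds])
```

The resulting sparse matrix can be fed directly to the Random Forest classifier used in the bootstrapping step.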
\begin{table}[!htpb] \centering \small \begin{tabular}{|l|l|l|l|} \hline Dataset & Bag of Words & BaseMethods & BoW + BaseMethods \\ \hline english\_dailabor & 68.4 & 67.1 & 72.4 \\ \hline aisopos\_ntua & 72.3 & 62.0 & 69.9 \\ \hline tweet\_semevaltest & 58.3 & 62.8 & 65.2 \\ \hline sentistrength\_twitter & 58.8 & 59.1 & 61.2 \\ \hline sentistrength\_youtube & 56.6 & 56.1 & 58.7 \\ \hline sanders & 61.5 & 54.1 & 56.4 \\ \hline sentistrength\_myspace & 50.2 & 52.3 & 52.2 \\ \hline sentistrength\_digg & 45.4 & 50.1 & 50.6 \\ \hline nikolaos\_ted & 51.3 & 45.9 & 49.0 \\ \hline debate & 57.1 & 45.9 & 47.1 \\ \hline sentistrength\_rw & 48.3 & 48.5 & 45.5 \\ \hline sentistrength\_bbc & 34.8 & 45.5 & 43.8 \\ \hline vader\_nyt & 28.0 & 38.9 & 39.2 \\ \hline \end{tabular} \caption{Results of 10SENT using different sets of features for Random Forest} \label{tab:bow} \end{table} \subsection{Transfer Learning Analysis} Finally, we evaluate whether it is possible to exploit some ``easily available'' knowledge from an external source. We do so by exploring datasets in which messages are labeled with ``emoticons'' by the systems' users themselves. To use an approach that transfers knowledge from one task to another, it is usually necessary to map characteristics from the source problem onto the target one, identifying similarities and differences. Next we detail how we transfer knowledge from the emoticons existing in the datasets to the task of sentence-level sentiment analysis. Emoticons are representations of a facial expression rendered as a set of characters. They have become very popular nowadays, and Oxford Dictionaries even chose an ``emoji'' as word of the year in 2015 due to its notable and massive use around the world. In our case, they are used to give us an idea of the feelings in the text, like happiness or sadness. Previous works have demonstrated that such messages, though not available in large volumes, are very precise.
In other words, the emoticon labels provided by the final users indeed convey trustworthy information about the polarity of the message. Accordingly, in these experiments, we used the ``rules of thumb'' suggested in \citep{gonccalves2013comparing} to translate emoticons into polarities. \if 0 \begin{table}[!htpb] \centering \scriptsize \begin{tabular}{|l|l|} \hline \multicolumn{1}{|c|}{Label} & \multicolumn{1}{c|}{Emoticon} \\ \hline Positive & \begin{tabular}[c]{@{}l@{}} :) \quad :{]} \quad :\} \quad :o) \quad :o{]} \quad :o\} \quad :-{]} \quad :-) \quad :-\} \quad =) \quad ={]} \quad =\} \quad =\textasciicircum {]} \quad =\textasciicircum ) \quad =\textasciicircum \} \\ :B \quad :-D \quad :-B \quad :\textasciicircum D \quad :\textasciicircum B \quad =B \quad =\textasciicircum B \quad =\textasciicircum D \quad :') \quad :'{]} \quad :'\} \quad =') \quad ='{]} \quad ='\} \\ \textless3 \quad \textasciicircum .\textasciicircum \quad \textasciicircum -\textasciicircum \quad \textasciicircum \_\textasciicircum \quad \textasciicircum \textasciicircum \quad :* \quad =* \quad :-* \quad ;) \quad ;{]} \quad ;\} \quad :-p \quad :-P \quad :-b \quad :\textasciicircum p \\ :\textasciicircum P \quad :\textasciicircum b \quad =P \quad =p \quad \textbackslash{o}\textbackslash \quad \textbackslash{o}/ \quad /o/ \quad :P \quad :p \quad :b \quad =b \quad =\textasciicircum p \quad =\textasciicircum P \quad =\textasciicircum b \end{tabular} \\ \hline Negative & \begin{tabular}[c]{@{}l@{}} D: \quad D= \quad D-: \quad D\textasciicircum : \quad D\textasciicircum = \quad :( \quad :{[} \quad :\{ \quad :o( \quad :o{[} \quad :\textasciicircum ( \quad :\textasciicircum {[} \quad :\textasciicircum \{ \quad =\textasciicircum ( \quad =\textasciicircum \{ \quad \\ \textgreater=( \quad \textgreater={[} \quad \textgreater=\{ \quad \textgreater=( \quad \textgreater:-\{ \quad \textgreater:-{[} \quad \textgreater:-( \quad \textgreater=\textasciicircum {[} \quad \textgreater:-( \quad :-{[} \quad :-( \quad =(
\quad ={[} \\ =\{ \quad =\textasciicircum {[} \quad \textgreater:-=( \quad \textgreater={[} \quad \textgreater=\textasciicircum ( \quad :'( \quad :'{[} \quad :'\{ \quad ='\{ \quad ='( \quad ='{[} \quad =\textbackslash \quad :\textbackslash \quad =/ \\ :/ \quad =\$ \quad o.O \quad O\_o \quad Oo \quad :\$ \quad :-\{ \quad \textgreater:-\{ \quad \textgreater=\textasciicircum \{ \quad :o\ \end{tabular} \\ \hline Neutral & \begin{tabular}[c]{@{}l@{}} :$\mid$ \quad =$\mid$ \quad :-$\mid$ \quad \textgreater.\textless \quad \textgreater\textless \quad \textgreater\_\textless \quad :o \quad :0 \quad =O \quad :@ \quad =@ \quad :\textasciicircum o \quad :\textasciicircum @ \quad -.- \\-.-' \quad -\_- \quad -\_-' \quad :x \quad =X \quad :\# \quad =\# \quad :-x \quad :-@ \quad :-\# \quad :\textasciicircum x \quad :\textasciicircum \# \end{tabular} \\ \hline \end{tabular} \caption{List of Emoticons divided by categories} \label{tab:emoticons} \end{table} \fi As one might expect, the fraction of messages containing emoticons is very low compared to the total number of messages.As we can see in Table \ref{tab:emoticoncoverage} emoticons appeared just in a very small amount of instances (observed in the coverage column). In spite of that, the accuracy of emoticons is often very precise to distinguish polarity of sentiment, reaching more than 90\% in ``nikolaos\_ted'' dataset. This is also in agreement with previous efforts~\citep{gonccalves2013comparing}. Our ultimate goal here is to extract some information about the text of those messages to our classification step. For this, we incorporate into the training data these instances labeled with emoticons extracted from the respective datasets. 
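The emoticon-to-polarity translation just described can be sketched as follows. This is an illustrative sketch only: the emoticon lists below are a small subset in the spirit of the rules of thumb of \citep{gonccalves2013comparing}, and the function name is ours, not from the original implementation.

```python
# Hypothetical sketch: mapping emoticons to polarity labels.
# The sets below are a small illustrative subset, not the full mapping.
POSITIVE = {":)", ":]", ":-)", "=)", ":D", ";)", ":p", "<3"}
NEGATIVE = {":(", ":[", ":-(", "=(", "D:", ":'(", ":/", ":\\"}
NEUTRAL = {":|", "=|", ":-|", ":o", "-.-"}


def emoticon_label(message):
    """Return 'positive', 'negative', 'neutral', or None if no emoticon is found."""
    for tok in message.split():
        if tok in POSITIVE:
            return "positive"
        if tok in NEGATIVE:
            return "negative"
        if tok in NEUTRAL:
            return "neutral"
    return None  # message carries no emoticon signal
```

Messages for which this function returns \texttt{None} are simply left out of the emoticon-labeled set, which is why the coverage reported below is so low.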
\begin{table}[!htpb] \centering \small
\begin{tabular}{l|c|c|} \cline{2-3}
& Accuracy & Coverage \\ \hline
\multicolumn{1}{|l|}{nikolaos\_ted} & 0.919 & 0.014 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_myspace} & 0.800 & 0.091 \\ \hline
\multicolumn{1}{|l|}{aisopos\_ntua} & 0.787 & 0.526 \\ \hline
\multicolumn{1}{|l|}{tweet\_semevaltest} & 0.693 & 0.071 \\ \hline
\multicolumn{1}{|l|}{english\_dailabor} & 0.687 & 0.064 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_youtube} & 0.686 & 0.085 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_twitter} & 0.627 & 0.097 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_rw} & 0.619 & 0.148 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_digg} & 0.600 & 0.028 \\ \hline
\multicolumn{1}{|l|}{sanders} & 0.359 & 0.045 \\ \hline
\multicolumn{1}{|l|}{debate} & 0.339 & 0.015 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_bbc} & 0.173 & 0.006 \\ \hline
\multicolumn{1}{|l|}{vader\_nyt} & - & - \\ \hline
\end{tabular}
\caption{Accuracy and coverage of emoticons in training experiments for all datasets} \label{tab:emoticoncoverage} \end{table}
To assess the effect of transfer learning from emoticons, we ran three different experiments: first, our traditional 10SENT; next, using only the emoticon labels to create the training set, without our majority voting predictions; and finally, combining the two to check the impact of emoticons on our method. The results of this experiment can be seen in Table \ref{tab:transfer}. We can see that improvements of up to 6\% (e.g., in the case of the sentistrength\_myspace dataset) can be obtained in terms of Macro-F1, with no significant losses in most datasets and with no extra (labeling) cost. Thus, this approach represents an interesting opportunity to reduce the user's labeling effort.
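The third configuration above (10SENT + emoticons) amounts to merging two sources of automatic labels before training the final classifier. The sketch below is our illustration of that idea, not the paper's code; in particular, letting emoticon labels win on conflicts is our own assumption, motivated by their high reported precision.

```python
# Hypothetical sketch of the "10SENT + Emoticons" training-set construction.
# Instances labeled by the ensemble of base methods are augmented with
# instances whose label comes from an emoticon.
def build_training_set(ensemble_labeled, emoticon_labeled):
    """Merge the two label sources (message -> label dicts).
    Assumption: emoticon labels override ensemble labels on conflicts,
    since prior work suggests emoticon labels are highly precise."""
    training = dict(ensemble_labeled)
    training.update(emoticon_labeled)  # extend / overwrite with emoticon labels
    return training


# Toy usage example with made-up messages and labels:
votes = {"msg1": "positive", "msg2": "negative"}
emo = {"msg2": "positive", "msg3": "positive"}
merged = build_training_set(votes, emo)
```

The merged set is then fed to the same Random Forest training step used in the other configurations.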
\begin{table}[!htpb] \centering \small
\begin{tabular}{l|c|c|c|} \cline{2-4}
& 10Sent & Emoticons & 10Sent + Emoticons \\ \hline
\multicolumn{1}{|l|}{english\_dailabor} & 70.62 & 25.57 & 72.02 \\ \hline
\multicolumn{1}{|l|}{aisopos\_ntua} & 69.91 & 35.48 & 73.61 \\ \hline
\multicolumn{1}{|l|}{tweet\_semevaltest} & 64.78 & 18.06 & 65.13 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_twitter} & 62.17 & 22.93 & 62.87 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_youtube} & 57.06 & - & 59.36 \\ \hline
\multicolumn{1}{|l|}{sanders} & 56.19 & 11.83 & 56.78 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_digg} & 51.91 & - & 52.22 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_myspace} & 50.22 & - & 53.20 \\ \hline
\multicolumn{1}{|l|}{nikolaos\_ted} & 47.97 & - & 48.97 \\ \hline
\multicolumn{1}{|l|}{debate} & 47.18 & - & 47.37 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_rw} & 47.15 & - & 45.25 \\ \hline
\multicolumn{1}{|l|}{sentistrength\_bbc} & 43.76 & - & 43.18 \\ \hline
\multicolumn{1}{|l|}{vader\_nyt} & 39.81 & - & 39.01 \\ \hline
\end{tabular}
\caption{Macro-F1 results for experiments on 10SENT using Transfer Learning} \label{tab:transfer} \end{table}
\section{Comparative Results}
We now turn our attention to the comparison between 10SENT, the strongest baseline (Majority Voting), and the base methods. We should point out that in these comparisons, 10SENT refers to the results obtained with the best unsupervised configuration found in the previous analyses, i.e., the original 10SENT representation (the methods' decisions) along with the Bag of Words features and the transfer learning. We can observe in Figure \ref{fig:macroF1} that our method achieves a Macro-F1 above the baselines in most datasets. In fact, 10SENT is the best method in 7 out of 13 datasets and is close to the top of the ranking in several others.
This is also reflected in the Mean Rank, shown in Table \ref{table:rank}, confirming that 10SENT is the overall winner across all tested datasets.
\begin{figure}[!htpb] \centering { \includegraphics[width=\textwidth]{MacroF1Variation2.eps} } \caption{Macro-F1 results of 10SENT compared with each individual base method for all datasets} \label{fig:macroF1} \end{figure}
\begin{table}[!htpb] \centering \small
\begin{tabular}{|l|l|l|l|} \hline
\textbf{METHOD} & \textbf{MEAN RANK} & \textbf{POS} & \textbf{DEVIATION} \\ \hline
10SENT & 2.154 & 1 & 1.457 \\ \hline
Majority Voting & 3.154 & 2 & 1.350 \\ \hline
Vader & 3.692 & 3 & 1.814 \\ \hline
SO-CAL & 3.769 & 4 & 1.717 \\ \hline
Umigon & 4.923 & 5 & 2.921 \\ \hline
Afinn & 6.615 & 6 & 1.820 \\ \hline
OpinionLexicon & 6.923 & 7 & 1.900 \\ \hline
pattern.en & 7.000 & 8 & 2.287 \\ \hline
OpinionFinder & 9.308 & 9 & 1.136 \\ \hline
Sentistrength & 9.846 & 10 & 1.747 \\ \hline
Emolex & 9.923 & 11 & 2.055 \\ \hline
Sentiment140 Lexicon & 10.692 & 12 & 2.493 \\ \hline
\end{tabular}
\caption{Mean Rank of methods for all datasets} \label{table:rank} \end{table}
In fact, 10SENT can be considered the most \textit{stable} method, as it produces the best (or close to the best) results in most datasets across different domains and applications. In other words, by using our proposed method, one can almost always guarantee top-notch results, at no extra cost, and without the need to discover the best method for a given context/dataset/domain.
\subsection{UpperBound Comparison}
For analysis purposes, we compare 10SENT with some ``upperbound'' baselines which exploit some type of privileged information, most notably the real labels of the instances in the training set, information not available to us. The idea here is to understand how far our proposed unsupervised approach is from those that exploit such information, as well as to understand the limits of what we can achieve with an unsupervised solution.
The first ``upperbound'' baseline is a fully supervised approach which uses all the labeled information available in the training data. As is normally done in fully supervised approaches, the parameters of the RF algorithm are determined using a validation set. The second baseline is an \textit{Exhaustive Weighted Majority Voting} method that uses the real labels of the messages in the datasets to find the best possible linear combination of weights for the base methods. Differently from the Majority Voting baseline, in which all methods have the same weight, in this approach each individual base method has a different weight, so that the influence of each one on the final classification differs. The weights for each method are found by means of an exhaustive search in (the training portion of) each dataset. That is, for each dataset we found the ``close-to-ideal'' weights that would lead to the best possible result when combining the exploited base methods. Then, for each method, a weight was associated with its output and, finally, the class with the highest aggregate weight was taken as the resulting label of each instance. This search was performed exhaustively, i.e., we evaluated every possible combination, seeking to maximize the Macro-F1 in each dataset. During the experiments, we limited the search to five different weights in the range $[0,1]$, $W = \{0, 0.25, 0.5, 0.75, 1\}$, to estimate ``close-to-best'' results while maintaining feasible computational costs. Table \ref{table:weights} shows the average weights and corresponding standard deviation for each method in some datasets; the results for the remaining ones are similar. We can see that most methods behave differently in different datasets (as implied by the large deviations). In other words, the same method may show a huge variance in effectiveness across datasets, which precludes the use of a single method for all cases.
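The exhaustive weight search just described can be sketched as follows. This is a simplified illustration with names of our own choosing; the scoring function is pluggable (Macro-F1 in the paper, plain accuracy in the toy example below).

```python
from itertools import product

# The coarse weight grid used in the search.
W = [0.0, 0.25, 0.5, 0.75, 1.0]


def weighted_vote(predictions, weights):
    """predictions: list of class labels, one per base method."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)


def exhaustive_search(per_instance_preds, gold, n_methods, metric):
    """Try every weight combination; return the best one and its score."""
    best_w, best_score = None, float("-inf")
    for weights in product(W, repeat=n_methods):
        labels = [weighted_vote(p, weights) for p in per_instance_preds]
        score = metric(labels, gold)
        if score > best_score:
            best_w, best_score = weights, score
    return best_w, best_score


# Toy usage example with two base methods and accuracy as the metric:
def accuracy(pred, gold):
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

best_w, best_score = exhaustive_search(
    [["pos", "neg"], ["neg", "neg"]], ["pos", "neg"],
    n_methods=2, metric=accuracy)
```

Note that the search space grows as $5^n$ for $n$ base methods, which is precisely why a coarse grid is needed to keep the cost feasible.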
Despite this, we can observe that some methods have clearly a higher average than others even with this high deviation. \begin{table}[!htpb] \centering \small \begin{tabular}{l|l|l|l|l|l|} \cline{2-6} \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{Weights} \\ \cline{2-6} & pattern.en & sentiment140 & emolex & opinionfinder & sentistrength \\ \hline \multicolumn{1}{|l|}{Avg. Weight} & 0.28 & 0.37 & 0.26 & 0.40 & 0.85 \\ \hline \multicolumn{1}{|l|}{Std. Deviation} & 0.25 & 0.28 & 0.29 & 0.31 & 0.28 \\ \hline \end{tabular} \if 0 \begin{tabular}{l|l|l|l|l|l|} \cline{2-6} \multicolumn{1}{c|}{} & \multicolumn{5}{c|}{Weights} \\ \cline{2-6} & vader & afinn & OpinionLexicon & Umigon & so-cal \\ \hline \multicolumn{1}{|l|}{Avg. Weight} & 0.44 & 0.25 & 0.27 & 0.61 & 0.66 \\ \hline \multicolumn{1}{|l|}{Std. Deviation} & 0.24 & 0.25 & 0.25 & 0.34 & 0.38 \\ \hline \end{tabular} \fi \caption{Average and deviation for weights found during Exhaustive Weighted Vote step} \label{table:weights} \end{table} \normalsize Finally, the third ``upperbound'' baseline is the best single base method in each dataset. Since the base methods are unsupervised ``off-the-shelf'' ones, we determine the best method for each dataset also using the labels in the training sets. It is also an ``upperbound'' because the best method cannot be determined, in advance, without supervision, i.e., a training set. \subsubsection{Upperbound Results} The results of those upperbounds are shown in Table \ref{table:upperbounds}. For comparative purposes we also included in this table the results of the unsupervised Majority Voting. As before, all results correspond to the average performance in the 5 test sets of the folded cross-validation procedure using 10SENT with its best configuration including Bag of Words and Transfer Learning. Values marked with ``\textbf{*}'' in this table indicate that the difference was not statistically significant when compared to 10SENT in a paired-test with 95\% confidence. 
Results reported with ``\textsuperscript{$\triangle$}'' are those statistically better than those of 10SENT. On the other hand, our method proved to be statistically superior to the ones whose values are marked with ``\textsuperscript{$\nabla$}''.
\begin{table}[!htpb] \centering \small
\begin{tabular}{|l|l|l|l|l|l|} \hline
& \multicolumn{1}{c|}{Fully Supervised} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Exhaustive Weighted \\ Majority Voting\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Best\\ Individual\end{tabular}} & \multicolumn{1}{c|}{Majority Voting} & \multicolumn{1}{c|}{10SENT} \\ \hline
aisopos\_ntua & 76.64\textsuperscript{$\triangle$} & 75.8\textsuperscript{$\triangle$} & 74.58* & 59.45\textsuperscript{$\nabla$} & 73.61 \\ \hline
english\_dailabor & 75.63\textsuperscript{$\triangle$} & 71.9* & 67.47\textsuperscript{$\nabla$} & 68.16\textsuperscript{$\nabla$} & 72.02 \\ \hline
tweet\_semevaltest & 66.77* & 65.5* & 61.27\textsuperscript{$\nabla$} & 62.64\textsuperscript{$\nabla$} & 65.13 \\ \hline
sentistrength\_twitter & 66.14\textsuperscript{$\triangle$} & 62.9* & 59.05\textsuperscript{$\nabla$} & 58.89\textsuperscript{$\nabla$} & 62.87 \\ \hline
sentistrength\_youtube & 61.77\textsuperscript{$\triangle$} & 60.6* & 56.81\textsuperscript{$\nabla$} & 54.60\textsuperscript{$\nabla$} & 59.36 \\ \hline
sanders & 62.76\textsuperscript{$\triangle$} & 58.0\textsuperscript{$\triangle$} & 53.52* & 54.75* & 56.78 \\ \hline
sentistrength\_myspace & 57.47\textsuperscript{$\triangle$} & 57.8\textsuperscript{$\triangle$} & 54.05* & 51.56* & 53.20 \\ \hline
sentistrength\_digg & 59.52\textsuperscript{$\triangle$} & 57.3\textsuperscript{$\triangle$} & 51.98* & 51.50* & 52.22 \\ \hline
nikolaos\_ted & 57.43\textsuperscript{$\triangle$} & 56.1\textsuperscript{$\triangle$} & 50.76\textsuperscript{$\triangle$} & 47.17* & 48.97 \\ \hline
debate & 58.75\textsuperscript{$\triangle$} & 49.1\textsuperscript{$\triangle$} & 46.45* & 43.99\textsuperscript{$\nabla$} & 47.37 \\ \hline
sentistrength\_rw & 53.52\textsuperscript{$\triangle$} & 52.2\textsuperscript{$\triangle$} & 47.97* & 48.34\textsuperscript{$\triangle$} & 45.25 \\ \hline
sentistrength\_bbc & 44.00* & 51.8\textsuperscript{$\triangle$} & 46.17* & 45.19* & 43.18 \\ \hline
vader\_nyt & 46.87\textsuperscript{$\triangle$} & 51.9\textsuperscript{$\triangle$} & 44.56\textsuperscript{$\triangle$} & 37.19\textsuperscript{$\nabla$} & 39.01 \\ \hline
\end{tabular}
\caption{Results in terms of Macro-F1 comparing 10SENT with all other evaluated methods (``*'' indicates differences that are not statistically significant compared to 10SENT; ``\textsuperscript{$\nabla$}'' marks values over which 10SENT wins; ``\textsuperscript{$\triangle$}'' marks values statistically superior to the 10SENT result)} \label{table:upperbounds} \end{table}
As highlighted before, 10SENT ties with or beats the traditional majority voting in most datasets, being statistically superior in seven out of 13 cases, tying in another five, and losing in only one dataset (sentistrength\_rw). Gains reach up to 23.8\% against this baseline. When compared to the best individual method in each dataset, 10SENT wins (4 cases) or ties (7 cases) in 11 out of 13 cases, a strong result. This shows that 10SENT is a good and consistent choice among all available options, independently of which dataset is used. When compared to the supervised Exhaustive Weighted Majority Voting, a first observation is that, as expected, it is always superior to the simple Majority Voting. Although we cannot beat this ``upperbound'' baseline, we tie with it in 4 datasets (sentistrength\_youtube, sentistrength\_twitter, tweet\_semevaltest, english\_dailabor) and obtain close results in others, such as aisopos\_ntua, sanders, and debate. This comes at no cost at all in terms of labeling effort.
Regarding the strongest upperbound baseline -- Fully Supervised -- an interesting observation is that in some datasets its results get very close to those of the Exhaustive Weighted Majority Voting, even losing to it in two (sentistrength\_bbc, vader\_nyt). This is a surprising result, suggesting that the combination of both strategies is an interesting avenue to pursue in the future. When comparing this baseline to 10SENT, as expected, we also cannot beat it, but we tie with it in two datasets and obtain close results in others, mainly in those cases in which our method was a good competitor against the Exhaustive Weighted Majority Voting. We consider these very strong results. For a deeper understanding of the results, Table \ref{tab:analysis} shows the size of the 10SENT set used to train the classifier before and after the bootstrapping step (lines 9-11 of Algorithm 1). As we can see, the majority voting heuristic selects a relatively large amount of training data from the original datasets. This may explain some of the good results obtained in our experiments, since the classifiers have a reasonable amount of data to be trained with.
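The bootstrapping step referred to above can be sketched roughly as follows. This is our illustration, not the paper's code: the \texttt{train}/\texttt{predict\_proba} callables stand in for the Random Forest classifier used in the paper, and the confidence threshold is an assumed value.

```python
# Hypothetical sketch of the bootstrapping (self-training) step:
# a classifier trained on the majority-voted seed set labels the remaining
# instances, and predictions above a confidence threshold are promoted
# into the training set before a final retraining.
def bootstrap(seed_set, unlabeled, train, predict_proba, threshold=0.8):
    model = train(seed_set)                      # fit on majority-voted labels
    augmented = dict(seed_set)                   # message -> label
    for msg in unlabeled:
        label, conf = predict_proba(model, msg)  # most likely class + confidence
        if conf >= threshold:
            augmented[msg] = label               # promote confident prediction
    return train(augmented), augmented           # retrain on the larger set


# Toy usage example with stub train/predict functions:
def toy_train(data):
    return data  # a real implementation would fit a Random Forest here

def toy_proba(model, msg):
    return ("positive", 0.9) if "good" in msg else ("negative", 0.5)

model, augmented = bootstrap({"m1": "positive"}, ["good movie", "meh"],
                             toy_train, toy_proba)
```

As the table referenced above suggests, this step trades a small drop in label accuracy for a larger training set.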
\begin{table}[!htpb] \centering \small \begin{tabular}{l|l|l|l|l|l|l|} \cline{2-7} \multirow{3}{*}{} & \multicolumn{6}{c|}{10SENT} \\ \cline{2-7} & \multicolumn{3}{c|}{Majority Voting} & \multicolumn{3}{c|}{Bootstrapping} \\ \cline{2-7} & \multicolumn{1}{c|}{Set Size} & \multicolumn{1}{c|}{Accuracy} & \multicolumn{1}{c|}{F1} & \multicolumn{1}{c|}{Set Size} & \multicolumn{1}{c|}{Accuracy} & \multicolumn{1}{c|}{F1} \\ \hline \multicolumn{1}{|l|}{english\_dailabor} & 1999 & 0.858 & 70.61 & 2165 & 0.826 & 70.62 \\ \hline \multicolumn{1}{|l|}{aisopos\_ntua} & 215 & 0.762 & 71.67 & 238 & 0.734 & 69.91 \\ \hline \multicolumn{1}{|l|}{tweet\_semevaltest} & 2781 & 0.796 & 65.04 & 3139 & 0.757 & 64.78 \\ \hline \multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}sentistrength\_\\ twitter\end{tabular}} & 2042 & 0.706 & 63.06 & 2238 & 0.665 & 62.17 \\ \hline \multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}sentistrength\_\\ youtube\end{tabular}} & 1688 & 0.660 & 58.37 & 1837 & 0.645 & 57.06 \\ \hline \multicolumn{1}{|l|}{sanders} & 1760 & 0.762 & 55.94 & 1929 & 0.758 & 56.19 \\ \hline \multicolumn{1}{|l|}{sentistrength\_digg} & 474 & 0.630 & 49.32 & 519 & 0.615 & 51.91 \\ \hline \multicolumn{1}{|l|}{\begin{tabular}[c]{@{}l@{}}sentistrength\_\\ myspace\end{tabular}} & 488 & 0.641 & 49.34 & 535 & 0.645 & 50.22 \\ \hline \multicolumn{1}{|l|}{debate} & 1422 & 0.508 & 47.30 & 1620 & 0.530 & 47.97 \\ \hline \multicolumn{1}{|l|}{nikolaos\_ted} & 329 & 0.606 & 47.11 & 370 & 0.556 & 47.18 \\ \hline \multicolumn{1}{|l|}{sentistrength\_rw} & 417 & 0.618 & 43.12 & 471 & 0.601 & 47.15 \\ \hline \multicolumn{1}{|l|}{sentistrength\_bbc} & 376 & 0.687 & 37.91 & 418 & 0.661 & 43.76 \\ \hline \multicolumn{1}{|l|}{vader\_nyt} & 2222 & 0.363 & 40.44 & 2563 & 0.366 & 39.81 \\ \hline \end{tabular} \caption{Set size, ``noise''(indicated by accuracy) and Macro-F1 values to 10SENT training sets without bootstrapping and including bootstrapping step} \label{tab:analysis} \end{table} However, this is only 
part of the story. One question remains to be answered: ``What is the \textbf{quality} of the automatically labeled training set?''. We can answer this question by looking at the ``Accuracy'' columns in the table. This metric gives the proportion of correctly assigned labels in the training sets when compared to the ``real'' labels. For a considerable number of datasets, the accuracy is relatively high, between 0.6 and 0.8. In fact, the cases in which 10SENT gets closer to the fully supervised method correspond to those in which the accuracy in the training set is higher. We can also see that after the bootstrapping the training accuracy generally drops a bit, which is natural since the heuristic is not perfect, but this is compensated by the increase in training size, resulting in a learned model that generalizes better. Finally, we can see that the absolute results of the best overall method in each dataset are still not very high (a maximum of 76\%), which shows the difficulty of the sentiment analysis task and that there is a lot of room for improvement. \section{Conclusions} We presented a novel unsupervised approach for sentence-level sentiment analysis derived from the combination of several existing ``off-the-shelf'' sentiment analysis methods. Our solution was thoroughly tested in a wide and diversified environment, covering a large number of methods and labeled datasets from different domains. The key advantage of 10SENT is that it addresses one major issue in this field -- the variability of the methods across domains and datasets. Our experimental results show that our self-learning approach has the lowest prediction performance variability, due to its ability to adapt to different contexts. This is a crucial issue in an area in which researchers are mostly interested in applying an ``off-the-shelf'' method to different contexts. Our approach is also easily expandable to include any newly developed unsupervised method.
Our experimental results also show that 10SENT achieves good effectiveness compared to our baselines. 10SENT was superior to all existing individual methods and also obtained better results than the traditional majority voting, with gains of up to 17.5\%. In an upperbound comparison, we saw that 10SENT can get close to the best supervised results. Finally, our analysis of transfer learning shows the possibility of adapting the method to include more strategies that can lead to better results. As future work, we intend to explore the weighting further, as well as to choose other setups for different scenarios. Additionally, we will explore other syntactic and semantic aspects of the text of the messages to improve results. We also plan to release our code and datasets to the research community and to deploy our method as part of known sentiment analysis benchmark systems~\citep{araujo2016@icwsm}. \bibliographystyle{apalike}
\section{Introduction} Recent observations of Type Ia supernovae (SNe Ia) \cite{sn} indicate that the expansion of the Universe is accelerating at the present time. These results, when combined with observations of the cosmic microwave background (CMB) \cite{wmap} and large scale structure (LSS) \cite{sdss}, strongly suggest that the Universe is spatially flat and dominated by an exotic component with large negative pressure, referred to as dark energy \cite{de}. The first year result of the Wilkinson Microwave Anisotropy Probe (WMAP) shows that dark energy occupies about $73\%$ of the energy of our Universe, and dark matter about $23\%$. The usual baryon matter, which can be described by our known particle theory, occupies only about $4\%$ of the total energy of the Universe. Although we can affirm that the ultimate fate of the Universe is determined by the features of dark energy, the nature of dark energy as well as its cosmological origin remain enigmatic at present. So far the confirmed information about dark energy is still limited and can be roughly summarized in the following items: it is an exotic energy form with negative pressure sufficiently large that it drives the Universe to undergo a period of accelerating expansion at present; it is spatially homogeneous and non-clustering; and it constituted only a small fraction of the cosmic energy at early times, while it has come to dominate the Universe only very recently. The investigation of the nature of dark energy is an important mission of modern cosmology. Much work has been done on this issue, and there is still a long way to go. Currently, the preferred candidates for dark energy are vacuum energy (or a cosmological constant) and dynamical fields. The simplest form of dark energy is the cosmological constant $\Lambda$. A tiny positive cosmological constant, though it can naturally explain the current acceleration, encounters serious theoretical problems, such as the ``fine-tuning'' problem and the ``coincidence'' problem.
Another possible form of dark energy is provided by scalar fields. Dark energy can be attributed to the dynamics of a scalar field $\phi$, called quintessence \cite{quin}, which realizes the present accelerated expansion of the Universe by evolving slowly down its potential $V(\phi)$. The tracker version of quintessence is, to some extent, able to alleviate the coincidence problem. It should also be pointed out that coupled quintessence models \cite{couple} provide a more natural solution to this problem. However, for quintessence models with flat potentials, the quintessence field has to be nearly massless, and one thus expects radiative corrections to destabilize the ratio between this mass and the other known scales of physics. In addition, for the cosmological constant and many quintessence models, the event horizon would lead to a potential incompatibility with string theory. Other candidates for dark energy include k-essence \cite{kess}, quiessence (or ``X-matter'') \cite{xmatter}, brane worlds \cite{brane}, the tachyon \cite{tachyon}, the generalized Chaplygin gas (GCG) \cite{chaplygin,gcgtest}, holographic dark energy \cite{holography,holoSN}, and so forth. The quiessence or X-matter component is simply characterized by a constant, non-positive equation of state $w_X$ ($w_X<-1/3$ is a necessary condition for the Universe to accelerate). In general, in a Friedmann-Robertson-Walker (FRW) background with the presence of cold dark matter (CDM), an arbitrary but constant $w_X$ for dark energy in the range $(-1,0)$ can be achieved by using a scalar field with a hyperbolic sine potential \cite{xmatter}. It may be noted that in principle the value of $w_X$ may be even less than $-1$. In fact, by fitting the SNe Ia data in the framework of XCDM (X-matter with CDM), hints of $w_X<-1$ have been found.
Indeed, a study of high-$z$ SNe Ia \cite{Knop} finds that the equation of state of dark energy has a $99\%$ probability of being $<-1$ if no priors are placed on $\Omega_m^0$. When these SNe results are combined with the CMB and 2dFGRS data, the $95\%$ confidence limits on an unevolving equation of state are $-1.46<w_X<-0.78$ \cite{Knop,Riess}, which is consistent with estimates made by other groups \cite{wmap,sdss}. The possibility of $w_X<-1$ has provoked many investigations of phantom dark energy \cite{phantom}. The remarkable feature of the phantom model is that the Universe will end its life in a ``Big Rip'' (a future singularity) within a finite time \cite{bigrip}. On the other hand, we focus here on another interesting proposal for dark energy, namely that the dark energy component might be explained by a background fluid with an exotic equation of state, the generalized Chaplygin gas model \cite{chaplygin}. The striking feature of this model is that it allows for a unification of dark energy and dark matter. This can be easily seen from the fact that the GCG behaves as dust-like matter at early times and like a cosmological constant at late times. This dual role is at the heart of the surprising properties of the GCG model. Moreover, the GCG model has been successfully confronted with various phenomenological tests involving SNe Ia data, CMB peak locations, gravitational lensing, and other observational data \cite{gcgtest}. It is remarkable that the GCG equation of state has a well defined connection with string and brane theories \cite{gcgbrane}, and this gas is the only gas known to admit a supersymmetric generalization \cite{susy}. In addition, it should be pointed out that the GCG model can be portrayed as a picture in which cosmological-constant-type dark energy interacts with cold dark matter.
However, since the equation of state of dark energy cannot yet be determined exactly (the observational data only show that $w_X$ lies in the range $(-1.46, -0.78)$), the GCG model should naturally be generalized to accommodate any possible X-type dark energy with constant $w_X$. Therefore, we propose here a new generalized Chaplygin gas (NGCG) scenario as a scheme for the unification of X-type dark energy and dark matter. The characteristic feature of this new model is that the dark sectors are uniformly described by a single exotic background fluid, which behaves as dust-like matter at early times and as X-type dark energy at late times. We will show in this paper that this model is a kind of interacting XCDM model, and we will constrain the parameters of this model using observational data. This paper is organized as follows. In Section 2, we introduce the extended version of the generalized Chaplygin gas, namely the NGCG model, to describe the unification of dark energy and dark matter, and demonstrate that the NGCG actually constitutes a kind of interacting XCDM system. In Section 3, we analyze the NGCG model by means of the statefinder parameters. In Section 4, we constrain the parameters of the NGCG model using the SNe Ia, CMB, and LSS data. We give concluding remarks in the final section. \section{The NGCG scenario and the interacting XCDM parametrization} In this section we introduce the NGCG model. In the framework of FRW cosmology, we consider an exotic background fluid, the NGCG, described by the equation of state \begin{equation} p_{\rm Ch} = - {\tilde{A}(a) \over \rho_{\rm Ch}^\alpha}~, \label{eqstate} \end{equation} where $\alpha$ is a real number and $\tilde{A}(a)$ is a function that depends upon the scale factor of the Universe, $a$.
We might expect that this exotic background fluid smoothly interpolates between a dust dominated phase, $\rho\sim a^{-3}$, and a dark energy dominated phase, $\rho\sim a^{-3(1+w_X)}$, where $w_X$ is a constant which may take any value in the range $(-1.46,-0.78)$. It can then be expected that the energy density of the NGCG is expressed as \begin{equation} \rho_{\rm Ch}= \left[A a^{-3(1+w_X)(1+\alpha)} + {B a^{-3 (1 + \alpha)}}\right]^{1 \over 1 + \alpha}~.\label{chrho} \end{equation} Eq. (\ref{chrho}) follows from substituting the equation of state (\ref{eqstate}) into the energy conservation equation of the NGCG for a homogeneous and isotropic spacetime; this requires the function $\tilde{A}(a)$ to be of the form \begin{equation} \tilde{A}(a)=-w_XAa^{-3(1+w_X)(1+\alpha)}~,\label{A} \end{equation} where $A$ is a positive constant, and the other positive constant $B$, appearing in Eq. (\ref{chrho}), is an integration constant. One can see explicitly that this model recovers the GCG model when the equation-of-state parameter $w_X$ is taken to be $-1$, and that an ordinary XCDM model is reproduced by taking the parameter $\alpha$ to be zero. The parameter $\alpha$ is called the {\it interaction parameter} of the model, as will be shown below. The NGCG scenario involves an interacting XCDM picture. To show this, we first decompose the NGCG fluid into two components, a dark energy component and a dark matter component, \begin{equation} \rho_{\rm Ch}=\rho_X+\rho_{dm}~. \end{equation} Note that the pressure of the NGCG fluid is provided only by the dark energy component, namely $p_{\rm Ch}=p_X$.
Therefore, the energy density of the dark energy component is given by \begin{equation} \rho_X={p_{\rm Ch}\over w_X}={A a^{-3(1+w_X)(1+\alpha)}\over [Aa^{-3(1+w_X)(1+\alpha)}+Ba^{-3(1+\alpha)}]^{\alpha\over 1+\alpha}}~~,\label{rhox}\end{equation} and the energy density of the dark matter component is then \begin{equation} \rho_{dm}={Ba^{-3(1+\alpha)}\over [Aa^{-3(1+w_X)(1+\alpha)}+Ba^{-3(1+\alpha)}]^{\alpha\over 1+\alpha}}~~.\label{rhodm}\end{equation} From these expressions one obtains the scaling behavior of the energy densities \begin{equation} {\rho_{dm}\over\rho_X}={B\over A}a^{3w_X(1+\alpha)}~~.\label{scaling}\end{equation} We see explicitly from this that there must exist an energy flow between dark matter and dark energy provided that $\alpha\neq 0$. When $\alpha>0$, the energy flow is directed from dark matter to dark energy; when $\alpha<0$, the reverse holds. Therefore, it is clear that the parameter $\alpha$ characterizes the interaction between dark energy and dark matter; this is why we call $\alpha$ the interaction parameter. The parameters $A$ and $B$ can be expressed in terms of current cosmological observables. From Eq. (\ref{chrho}), it is easy to get \begin{equation} A+B=\rho_{\rm Ch0}^\eta~, \end{equation} where $\eta=1+\alpha$ is introduced to characterize the interaction, for simplicity; thus we have \begin{equation} A=\rho_{\rm Ch0}^\eta A_s~,~~~~B=\rho_{\rm Ch0}^\eta (1-A_s)~,\label{As} \end{equation} where $A_s$ is a dimensionless parameter. Using Eqs. (\ref{scaling}) and (\ref{As}), one gets \begin{equation} A_s={\rho_{X0}\over \rho_{X0}+\rho_{dm0}}={\Omega_X^0\over 1-\Omega_{b}^0}~, \end{equation} where the second equality holds for the cosmological model including the baryon matter component. We have assumed here that the space of the Universe is flat.
Hence, the NGCG energy density can be expressed as \begin{equation} \rho_{\rm Ch}=\rho_{\rm Ch0} a^{-3}[1-A_s(1-a^{-3w_X\eta})]^{1/\eta}~.\label{ngcg} \end{equation} Making use of Eqs. (\ref{chrho}), (\ref{rhox}), (\ref{rhodm}), and (\ref{ngcg}), the energy densities of dark energy and dark matter can be re-expressed as \begin{equation} \rho_X=\rho_{X0} a^{-3(1+w_X\eta)}\left[1-{\Omega_X^0\over 1-\Omega_{b}^0}(1-a^{-3w_X\eta})\right]^{{1\over\eta}-1}~,\label{de} \end{equation} \begin{equation} \rho_{dm}=\rho_{dm0} a^{-3}\left[1-{\Omega_X^0\over 1-\Omega_{b}^0}(1-a^{-3w_X\eta})\right]^{{1\over\eta}-1}~.\label{dm} \end{equation} The NGCG fluid as a whole satisfies energy conservation, but the dark energy and dark matter components do not conserve energy separately; they interact with each other. We describe this interaction through an energy exchange term $Q$. The equations of motion for dark energy and dark matter can be written as \begin{equation} \dot{\rho}_X+3H(1+w_X)\rho_X=Q~, \end{equation} \begin{equation} \dot{\rho}_{dm}+3H\rho_{dm}=-Q~, \end{equation} where the dot denotes a derivative with respect to time $t$, and $H=\dot{a}/a$ is the Hubble parameter. For convenience we define the effective equations of state for dark energy and dark matter through the parameters \begin{equation} w_X^{(e)}=w_X-{Q\over 3H\rho_X}~, \end{equation} \begin{equation} w_{dm}^{(e)}={Q\over 3H\rho_{dm}}~. \end{equation} According to these definitions, the equations of motion for dark energy and dark matter can be re-expressed in the standard form of energy conservation, \begin{equation} \dot{\rho}_X+3H(1+w_X^{(e)})\rho_X=0~, \end{equation} \begin{equation} \dot{\rho}_{dm}+3H(1+w_{dm}^{(e)})\rho_{dm}=0~. \end{equation} Using the concrete forms of dark energy and dark matter in the NGCG scenario, Eqs.
(\ref{de}) and (\ref{dm}), one can obtain \begin{equation} w_X^{(e)}=w_X+{(\eta-1)w_X(1-\Omega_X^0-\Omega_b^0)a^{3w_X\eta}\over \Omega_X^0+(1-\Omega_X^0-\Omega_b^0)a^{3w_X\eta}}~,\label{wXeff} \end{equation} \begin{equation} w_{dm}^{(e)}=-{(\eta-1)w_X\Omega_X^0\over \Omega_X^0+(1-\Omega_X^0-\Omega_b^0)a^{3w_X\eta}}~. \end{equation} We now turn to the cosmological evolution. For a spatially flat FRW Universe containing a baryon component $\rho_{b}$ and the NGCG fluid $\rho_{\rm Ch}$, the Friedmann equation reads \begin{equation} 3M_P^2H^2=\rho_{\rm Ch}+\rho_{b}~, \end{equation} where $M_P$ is the reduced Planck mass. The Friedmann equation can also be expressed as \begin{equation} H(a)=H_0E(a)~, \end{equation} where \begin{equation} E(a)=\left\{(1-\Omega_b^0)a^{-3}\left[1-{\Omega_X^0\over 1-\Omega_b^0}(1-a^{-3w_X\eta})\right]^{1/\eta}+\Omega_b^0 a^{-3}\right\}^{1/2}~. \end{equation} The fractional energy densities of the various components are then easily obtained: \begin{equation} \Omega_X=\Omega_X^0 E^{-2}a^{-3(1+w_X\eta)}\left[1-{\Omega_X^0\over 1-\Omega_{ b}^0}(1-a^{-3w_X\eta})\right]^{{1\over\eta}-1}~,\label{OmegaX} \end{equation} \begin{equation} \Omega_{dm}=(1-\Omega_X^0-\Omega_{b}^0)E^{-2}a^{-3}\left[1-{\Omega_X^0\over 1-\Omega_{b}^0}(1-a^{-3w_X\eta})\right]^{{1\over\eta}-1}~, \end{equation} \begin{equation} \Omega_{b}=\Omega_{b}^0E^{-2}a^{-3}~. \end{equation} We thus see clearly that the NGCG model is completely equivalent to a coupled dark energy scenario \cite{IXCDM,Cai}, namely an interacting XCDM parametrization. It is remarkable that the interaction between dark energy and dark matter can be interpreted as arising from the time variation of the mass of dark matter particles. The GCG model is a special case of the NGCG model, corresponding to $w_X=-1$; thus the GCG model is actually an interacting $\Lambda$CDM model. Fig.1 and Fig.2 illustrate examples of the density evolution in the NGCG model.
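A minimal numerical sketch (with hypothetical parameter values matching those used in the figures) confirms that $E(1)=1$ and that the fractional densities above always sum to unity in the flat case:

```python
import math

# Illustrative parameters (hypothetical; matching those used in the figures)
Omega_X0, Omega_b0 = 0.70, 0.05
w_X, eta = -1.2, 1.5   # eta = 1 + alpha

def E(a):
    """Dimensionless Hubble rate E(a) = H(a)/H0 for the flat NGCG model."""
    bracket = 1 - Omega_X0/(1 - Omega_b0)*(1 - a**(-3*w_X*eta))
    return math.sqrt((1 - Omega_b0)*a**-3*bracket**(1/eta) + Omega_b0*a**-3)

def omegas(a):
    """Fractional densities (Omega_X, Omega_dm, Omega_b) at scale factor a."""
    bracket = 1 - Omega_X0/(1 - Omega_b0)*(1 - a**(-3*w_X*eta))
    e2 = E(a)**2
    O_X = Omega_X0/e2*a**(-3*(1 + w_X*eta))*bracket**(1/eta - 1)
    O_dm = (1 - Omega_X0 - Omega_b0)/e2*a**-3*bracket**(1/eta - 1)
    O_b = Omega_b0/e2*a**-3
    return O_X, O_dm, O_b

# Sanity checks: E(1) = 1, and flatness (the fractions sum to unity at any a).
assert math.isclose(E(1.0), 1.0)
for a in (0.2, 1.0, 3.0):
    assert math.isclose(sum(omegas(a)), 1.0, rel_tol=1e-12)
```

The sum rule $\Omega_X+\Omega_{dm}+\Omega_b=1$ follows algebraically from the definition of $E(a)$, so it holds at every scale factor, not just today.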
The current density parameters used in the plots are $\Omega_{dm}^0=0.25$, $\Omega_X^0=0.7$, and $\Omega_b^0=0.05$. In Fig.1, we show the cases having the common equation-of-state parameter $w_X=-1.2$, while the interaction parameters $\alpha$ are taken to be $0$, $0.5$, and $0.8$, respectively. Note that the $\alpha=0$ case corresponds to a normal phantom model with constant $w_X$. This example shows the role the interaction parameter $\alpha$ plays in the model: the energy flow is from dark matter to dark energy when $\alpha>0$; a larger $\alpha$ leads to a stronger energy flow; and the baryon density is also visibly affected by the interaction between dark energy and dark matter. In Fig.2, we depict the cases with common interaction parameter $\alpha=0.5$, while the equation-of-state parameters $w_X$ are taken to be $-1$, $-0.8$, and $-1.2$, respectively. Here the $w_X=-1$ case corresponds exactly to the GCG model. The effect of the parameter $w_X$ in the NGCG scenario is also evident in this example. \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{den1.eps} \caption[]{The evolution of the density parameters for various components $\Omega_X$, $\Omega_{dm}$, and $\Omega_b$. Note that $\Omega_{\rm Ch}=\Omega_X+\Omega_{dm}$. The current density parameters used in the plot are $\Omega_{dm}^0=0.25$, $\Omega_X^0=0.7$, and $\Omega_b^0=0.05$. In this case, we fix $w_X$ and vary $\alpha$. } \end{center} \end{figure} \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{den2.eps} \caption[]{The evolution of the density parameters for various components $\Omega_X$, $\Omega_{dm}$, and $\Omega_b$. Note that $\Omega_{\rm Ch}=\Omega_X+\Omega_{dm}$. The current density parameters used in the plot are $\Omega_{dm}^0=0.25$, $\Omega_X^0=0.7$, and $\Omega_b^0=0.05$.
In this case, we fix $\alpha$ and vary $w_X$.} \end{center} \end{figure} Let us now discuss the cosmological consequences implied by the NGCG model and compare the cosmological quantities in the NGCG cosmology with those of some special cases such as $\Lambda$CDM and GCG. First, we consider the Hubble parameter $H$, which measures the expansion rate of the Universe. In Fig.3 we plot the Hubble parameter of the NGCG model in units of $H_{\rm \Lambda CDM}$ as a function of redshift $z$ in the range from 0 to 5. The current density parameters used in the plot of Fig.3 are the same as those used in Figs.1 and 2. The model parameters are divided into two groups, $\alpha=0$ and $\alpha=0.5$, both including $w_X=-1$, $-0.8$, and $-1.2$. It can be seen from Fig.3 that the NGCG model reduces to XCDM when the parameter $\alpha$ is 0; the cases of $w_X>-1$ and $w_X<-1$ make $H$ larger than and less than $H_{\rm \Lambda CDM}$, respectively, during the cosmological evolution. Introducing the interaction parameter $\alpha$ makes $H$ evidently larger than $H_{\rm \Lambda CDM}$ at early times, while it is interesting to see that the value of $H/H_{\rm \Lambda CDM}$ can cross 1 in the case of $w_X<-1$ in the recent epoch. The acceleration of the Universe is measured by the deceleration parameter $q=-\ddot{a}/aH^2$. Omitting the radiation component, the deceleration parameter can be expressed as \begin{equation} q={1\over 2}+{3\over 2}w_X\Omega_X~, \end{equation} where $\Omega_X$ is given by (\ref{OmegaX}). The evolution of the deceleration parameter $q$ is depicted in Fig.4 for selected parameter sets. The current density parameters are taken to be the same as in the figures above. The influence of the interaction parameter $\alpha$ and the dark energy equation of state $w_X$ can be seen clearly in this figure.
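The deceleration parameter can also be evaluated numerically; the following sketch (illustrative parameter values only) locates the acceleration/deceleration transition redshift $z_T$ by bisection:

```python
import math

# Illustrative parameters (hypothetical; matching the figures)
Omega_X0, Omega_b0 = 0.70, 0.05
w_X, eta = -1.2, 1.5   # eta = 1 + alpha

def q(z):
    """Deceleration parameter q = 1/2 + (3/2) w_X Omega_X at redshift z."""
    a = 1.0/(1.0 + z)
    bracket = 1 - Omega_X0/(1 - Omega_b0)*(1 - a**(-3*w_X*eta))
    E2 = (1 - Omega_b0)*a**-3*bracket**(1/eta) + Omega_b0*a**-3
    Omega_X = Omega_X0/E2*a**(-3*(1 + w_X*eta))*bracket**(1/eta - 1)
    return 0.5 + 1.5*w_X*Omega_X

# Locate the acceleration/deceleration transition q(z_T) = 0 by bisection,
# using that q(0) < 0 (acceleration) and q(5) > 0 (deceleration).
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if q(mid) < 0 else (lo, mid)
z_T = 0.5*(lo + hi)
assert q(0.0) < 0 < q(5.0) and abs(q(z_T)) < 1e-9
```

Rerunning the sketch with different $\alpha$ values reproduces the qualitative behavior of $z_T$ discussed next.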
We notice that a positive $\alpha$ makes the redshift of the acceleration/deceleration transition ($q(z_T)=0$) shift to a smaller value, while the values of $z_T$ are nearly degenerate for the same $\alpha$, as shown in this example. \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{h.eps} \caption[]{The evolution of the Hubble parameter in units of $H_{\rm \Lambda CDM}(z)$. The current density parameters are taken to be $\Omega_{dm}^0=0.25$, $\Omega_X^0=0.7$, and $\Omega_b^0=0.05$.} \end{center} \end{figure} \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{q.eps} \caption[]{The evolution of the deceleration parameter $q(z)$. The current density parameters are taken to be $\Omega_{dm}^0=0.25$, $\Omega_X^0=0.7$, and $\Omega_b^0=0.05$.} \end{center} \end{figure} \section{Statefinder diagnostic} Since more and more dark energy models have been constructed to interpret or describe the cosmic acceleration, the problem of discriminating between the various contenders has become very important. In order to be able to differentiate between the competing cosmological scenarios involving dark energy, a sensitive and robust diagnostic for dark energy models is a must. For this purpose a diagnostic proposal that makes use of the parameter pair $\{r,s\}$, the so-called ``statefinder'', was introduced by Sahni et al. \cite{sahni}. The statefinder probes the expansion dynamics of the Universe through higher derivatives of the scale factor $\stackrel{...}{a}$ and is a natural companion to the deceleration parameter $q$ which depends upon $\ddot a$. The statefinder pair $\{r,s\}$ is defined as follows: \begin{equation} r\equiv \frac{\stackrel{...}{a}}{aH^3},~~~~s\equiv\frac{r-1}{3(q-1/2)}~.\label{rs} \end{equation} The statefinder is a ``geometrical'' diagnostic in the sense that it depends upon the scale factor and hence upon the metric describing space-time.
Trajectories in the $s-r$ plane corresponding to different cosmological models exhibit qualitatively different behaviors. The spatially flat $\Lambda$CDM scenario corresponds to a fixed point in the diagram \begin{equation} \{s,r\}\bigg\vert_{\rm \Lambda CDM} = \{ 0,1\} ~.\label{lcdm} \end{equation} Departure of a given dark energy model from this fixed point provides a good way of establishing the ``distance'' of this model from $\Lambda$CDM \cite{sahni,alam}. As demonstrated in Refs. \cite{sahni,alam,quintomsr,gorini,holosr,zimdahl,zx} the statefinder can successfully differentiate between a wide variety of dark energy models including the cosmological constant, quintessence, quintom, the Chaplygin gas, braneworld models, holographic dark energy and interacting dark energy models. We can clearly identify the ``distance'' from a given dark energy model to the $\Lambda$CDM scenario by using the $r(s)$ evolution diagram. The current location of the parameters $s$ and $r$ in these diagrams can be calculated in specific models, and on the other hand it can also be extracted from data coming from SNAP (SuperNovae Acceleration Probe) type experiments \cite{sahni,alam}. Therefore, the statefinder diagnostic combined with future SNAP observations may possibly be used to discriminate between different dark energy models. For example, as shown in Ref. \cite{alam}, by carrying out a maximum likelihood analysis which combines the statefinder diagnostic with realistic expectations from the SNAP experiment, the averaged-over-redshift statefinder pair $\{\bar{s}, \bar{r}\}$ is convincingly demonstrated to be a useful diagnostic tool for successfully differentiating between the cosmological constant and dynamical models of dark energy. In this section we apply the statefinder diagnostic to the NGCG model. In what follows we will calculate the statefinder parameters for the NGCG model and plot the evolution trajectories of the model in the statefinder parameter-plane.
The statefinder parameters can be expressed in terms of the total energy density $\rho$ and the total pressure $p$ in the Universe: \begin{equation} r=1+{9(\rho+p)\over 2\rho}{\dot{p}\over\dot{\rho}}~,~~~~s={(\rho+p)\over p}{\dot{p}\over\dot{\rho}}~. \end{equation} The total energy of the Universe is conserved, so we have $\dot{\rho}=-3H(\rho+p).$ Since the dust-like matter is pressureless, the total pressure of the cosmic fluids is provided only by the dark energy component, $p=p_X=w_X\rho_X$. Then, making use of $\dot{\rho}=-3H(\rho+p)$ and $\dot{\rho}_X=-3H(1+w_X^{(e)})\rho_X$, we obtain the concrete expressions for the statefinder parameters \begin{equation} r=1+{9\over 2}w_X\Omega_X(1+w_X^{(e)})~,~~~~s=1+w_X^{(e)}~. \end{equation} Here $w_X^{(e)}$ and $\Omega_X$ are given by (\ref{wXeff}) and (\ref{OmegaX}), respectively. Although the relationship between the statefinder parameters $r$ and $s$, namely the function $r(s)$, can in principle be derived analytically, we do not give the expression here owing to its complexity. Letting the redshift $z={1/a}-1$ vary over a sufficiently large range covering both the far future and the far past, e.g. from $-1$ to 5, one easily obtains the evolution trajectories of this model in the statefinder $s-r$ plane. Selected curves of $r(s)$ are plotted in Fig.5 and Fig.6. In Fig.5, we fix $\alpha=0.5$ and vary $w_X$ as $-0.8$, $-1$, and $-1.2$, respectively. In Fig.6, we fix $w_X=-1.2$ and vary $\alpha$ as 0, $\pm0.2$, $\pm0.5$, and $\pm0.8$, respectively. Other parameters are taken to be the same as in the previous figures. In these two figures, dots mark today's values of the statefinder parameters $(s_0, r_0)$ and arrows denote the evolution directions of the statefinder trajectories $r(s)$. The $\Lambda$CDM model, also denoted by a dot, is located at $(0, 1)$ in the $s-r$ plane. \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{sr1.eps} \caption[]{The statefinder $r(s)$ evolution diagram.
Dots mark today's values of the statefinder parameters $(s_0, r_0)$ and arrows denote the evolution directions of the statefinder trajectories $r(s)$. The $\Lambda$CDM model is located at the fixed point $(0, 1)$. In this case, we fix $\alpha=0.5$ and vary $w_X$ as $-0.8$, $-1$, and $-1.2$, respectively.} \end{center} \end{figure} \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{sr2.eps} \caption[]{The statefinder $r(s)$ evolution diagram. Dots mark today's values of the statefinder parameters $(s_0, r_0)$ and arrows denote the evolution directions of the statefinder trajectories $r(s)$. The $\Lambda$CDM model is located at the fixed point $(0, 1)$. In this case, we fix $w_X=-1.2$ and vary $\alpha$ as 0, $\pm0.2$, $\pm0.5$, and $\pm0.8$, respectively.} \end{center} \end{figure} The statefinder diagnostic can discriminate between various dark energy models effectively. Different cosmological models involving dark energy exhibit qualitatively different evolution trajectories in the $s-r$ plane. For example, the $\Lambda$CDM scenario corresponds to the fixed point $s=0,~r=1$ as shown in (\ref{lcdm}), and the SCDM (standard cold dark matter) scenario corresponds to the point $s=1,~r=1$. For the ``quiessence" (XCDM) models, the trajectories are vertical segments, i.e. $r$ decreases monotonically from 1 to $1+{9\over 2}w_X(1+w_X)$ while $s$ remains constant at $1+w_X$ \cite{sahni,alam}. The quintessence (inverse power law) tracker models have typical trajectories similar to arcs of an upward parabola lying in the region $s>0,~r<1$ \cite{sahni,alam}. The holographic dark energy scenario ($c=1$ case), as shown in \cite{holosr}, commences its evolution from $s=2/3,~r=1$, moves along an arc segment, and ends at the $\Lambda$CDM fixed point ($s=0,~r=1$) in the future. The coupled quintessence models and quintom models exhibit more complicated trajectories, as shown in Refs. \cite{quintomsr,zx}. Now from Figs.
5 and 6 of this paper, we can see the statefinder trajectories of the NGCG model. In Fig.5 we see the cases with fixed $\alpha$, where the GCG model ($w_X=-1$) exhibits a complete downward parabola, while the general cases ($w_X\neq -1$) correspond to broken parabolas. The statefinder trajectory commences its evolution from $s=1+(1+\alpha)w_X,~r=1$ at $t\rightarrow 0$ and ends at $s=1+w_X,~r=1+{9\over 2}w_X(1+w_X)$ as $t\rightarrow \infty$. Today's statefinder point is located at $s_0=1+w^{(e)}_{X0}$, $r_0=1+{9\over 2}w_X\Omega_X^0(1+w^{(e)}_{X0})$, where $w^{(e)}_{X0}=w_X+\alpha w_X(1-\Omega_X^0-\Omega_b^0)/(1-\Omega_b^0)$. The ``distance'' from the NGCG model to the $\Lambda$CDM scenario can be measured directly in the statefinder plane. Note that for a positive $\alpha$, the cases of $w_X<-1$ never arrive at the $\Lambda$CDM fixed point; the GCG case ($w_X=-1$) ends at the $\Lambda$CDM fixed point; while the cases of $w_X>-1$ pass through this fixed point. Fig.6 displays the cases with a fixed $w_X$; we show here $w_X=-1.2$, a phantom case. Trajectories corresponding to zero, positive, and negative values of $\alpha$ are all displayed in this diagram to give a complete statefinder diagnostic. It is interesting to see that the trajectories can pass through the $\Lambda$CDM fixed point in a phantom case when $\alpha<0$. This is because a negative $\alpha$ makes the dark energy component transfer energy to the dark matter component. We notice that the normal phantom case ($\alpha=0$) evolves along a vertical segment. Compared with the quiessence case \cite{sahni,alam}, the phantom case lies to the left of the $\Lambda$CDM point, in the region $s<0,~r>1$, and evolves upwards, while the quiessence case lies to the right of the $\Lambda$CDM point, in the region $s>0,~r<1$, and evolves downwards.
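The limiting points and today's location quoted above can be verified numerically; a sketch assuming the illustrative parameters of Fig.6 ($w_X=-1.2$, $\alpha=0.5$, i.e. $\eta=1.5$):

```python
import math

# Illustrative parameters (hypothetical; as in Fig.6)
Omega_X0, Omega_b0 = 0.70, 0.05
w_X, eta = -1.2, 1.5
Omega_dm0 = 1 - Omega_X0 - Omega_b0

def statefinder(a):
    """Return (s, r) for the NGCG model at scale factor a."""
    x = a**(3*w_X*eta)
    w_eff = w_X + (eta - 1)*w_X*Omega_dm0*x/(Omega_X0 + Omega_dm0*x)
    bracket = 1 - Omega_X0/(1 - Omega_b0)*(1 - a**(-3*w_X*eta))
    E2 = (1 - Omega_b0)*a**-3*bracket**(1/eta) + Omega_b0*a**-3
    Omega_X = Omega_X0/E2*a**(-3*(1 + w_X*eta))*bracket**(1/eta - 1)
    return 1 + w_eff, 1 + 4.5*w_X*Omega_X*(1 + w_eff)

# Early-time limit: (s, r) -> (1 + eta*w_X, 1);
# late-time limit:  (s, r) -> (1 + w_X, 1 + (9/2) w_X (1 + w_X)).
s_early, r_early = statefinder(1e-3)
s_late,  r_late  = statefinder(1e3)
assert math.isclose(s_early, 1 + eta*w_X, rel_tol=1e-9)
assert math.isclose(r_early, 1.0, rel_tol=1e-9)
assert math.isclose(s_late, 1 + w_X, rel_tol=1e-6)
assert math.isclose(r_late, 1 + 4.5*w_X*(1 + w_X), rel_tol=1e-6)

# Today's point obeys the linear relation r0 = 1 + (9/2) w_X Omega_X^0 s0.
s0, r0 = statefinder(1.0)
assert math.isclose(r0, 1 + 4.5*w_X*Omega_X0*s0, rel_tol=1e-12)
```

The last check shows directly why, at fixed $w_X$, the present statefinder points line up in the $s-r$ plane.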
Interestingly, for a fixed $w_X$, the present statefinder points as well as the $\Lambda$CDM fixed point lie on a straight line. This is because when $w_X$ is fixed, the relationship between $r_0$ and $s_0$ is linear, $r_0=1+{9\over 2}w_X\Omega_X^0 s_0$. \section{Observational constraints from SNe Ia, CMB, and LSS data} In this section we derive the constraints on the NGCG model from currently available observational data. It should be mentioned that the interacting XCDM parametrization scenario has been tested against the recent Type Ia supernova data \cite{IXCDM}. The results show that the SNe Ia data favor a negative coupling and an equation of state $w_X<-1$, namely a negatively coupled phantom dark energy. However, as we know, the supernova data alone are not sufficient to constrain dark energy models tightly (see e.g. the analysis in Ref. \cite{holoSN}). Therefore, to obtain tighter constraints on dark energy models, one needs additional data from other astronomical observations as necessary and useful complements to the SNe data. It has been demonstrated that observational quantities independent of $H_0$ are very suitable for this role \cite{wangyun}. Such quantities and data can be found in the probes of CMB and LSS \cite{holoSN,wangyun,Xia,starob,gyg}. In what follows we perform a combined analysis of SNe Ia, CMB, and LSS data to constrain the NGCG model. We use a $\chi^2$ statistic \begin{equation} \chi^2=\chi_{\rm SN}^2+\chi_{\rm CMB}^2+\chi_{\rm LSS}^2~, \end{equation} where $\chi_{\rm SN}^2$, $\chi_{\rm CMB}^2$ and $\chi_{\rm LSS}^2$ are the contributions from the SNe Ia, CMB, and LSS data, respectively. It is well known that the acceleration of the Universe was discovered through the Type Ia supernova observations, in which the concept of the luminosity distance plays a very important role.
The luminosity distance of a light source is defined in such a way as to generalize to an expanding and curved space the inverse-square law of brightness valid in a static Euclidean space, \begin{equation} d_L=\left({{\cal L}\over 4\pi{\cal F}}\right)^{1/2}=cH_0^{-1}(1+z)\int_0^z{dz'\over E(z')}~, \end{equation} where ${\cal L}$ is the absolute luminosity, a known value for the standard-candle SNe Ia, and $\cal F$ is the measured flux. The Hubble distance is $cH_0^{-1}=2997.9h^{-1}$ Mpc. The Type Ia supernova observations directly measure the apparent magnitude $m$ of a supernova and its redshift $z$. The apparent magnitude $m$ is related to the luminosity distance $d_L$ of the supernova through \begin{equation} m(z)=M+5\log_{10}(d_L(z)/{\rm Mpc})+25~, \end{equation} where $M$ is the absolute magnitude, which is believed to be constant for all Type Ia supernovae. In our analysis, we take the 157 gold data points listed in Riess et al. \cite{Riess}, which include 14 recent high-redshift SNe (gold) data points from the HST/GOODS program. The $\chi^2$ function determined by the SNe Ia observations is \begin{equation} \chi_{\rm SN}^2=\sum_{i=1}^{157}{[\mu_{\rm obs}(z_i)-\mu_{\rm th}(z_i)]^2\over \sigma_i^2}~,\label{chisn} \end{equation} where the extinction-corrected distance modulus $\mu(z)$ is defined as $\mu(z)=m(z)-M$, and $\sigma_i$ is the total uncertainty in the observation. Following Ref. \cite{IXCDM}, we fix $\Omega_b^0=0.05$ in the computation for simplicity. Hence, the computation is carried out in a four-dimensional parameter space, for the four parameters $P=(\eta, w_X, h, \Omega_{dm}^0)$. For the CMB, we use only the measurement of the CMB shift parameter \cite{cmbshift}, \begin{equation} {\cal R}=\sqrt{\Omega_m^0}\int_0^{z_{\rm dec}}{dz\over E(z)}~,\label{R} \end{equation} where $\Omega_m^0=\Omega_{dm}^0+\Omega_b^0$, and $z_{\rm dec}=1089$ \cite{wmap}.
Note that this quantity is independent of the parameter $H_0$ and thus provides a robust constraint on the dark energy model. The results from the CMB data correspond to ${\cal R}_0=1.716\pm 0.062$ (given by WMAP, CBI, ACBAR) \cite{wmap,cbi}. We include the CMB data in our analysis by adding $\chi_{\rm CMB}^2=[({\cal R}-{\cal R}_0)/\sigma_{\cal R}]^2$ (see e.g. Refs. \cite{wangyun,Xia,starob}), where ${\cal R}$ is computed for the NGCG model using equation (\ref{R}). The only large scale structure information we use is the parameter $A$ measured by SDSS \cite{sdssred}, defined by \begin{equation} A=\sqrt{\Omega_m^0}E(z_1)^{-1/3}\left[{1\over z_1}\int_0^{z_1}{dz\over E(z)}\right]^{2/3}~,\label{Apar} \end{equation} where $z_1=0.35$. This quantity is also independent of $H_0$, and thus provides another robust constraint on the model. The SDSS measurement gives \cite{sdssred} $A_0=0.469\pm 0.017$. We also include the LSS constraint in our analysis by adding $\chi_{\rm LSS}^2=[(A-A_0)/\sigma_A]^2$ (see e.g. Refs. \cite{holoSN,gyg}), where $A$ is computed for the NGCG model using equation (\ref{Apar}). Note that we have chosen to use only the most conservative and robust information, ${\cal R}$ and $A$, from the CMB and LSS observations. The measurements we use do not depend on the Hubble constant $H_0$ and are thus useful complements to the SNe data. It is remarkable that the likelihood analysis scheme we employ here is very economical and efficient: it does not make use of all the information available in the CMB and LSS data, yet it provides fairly good constraints on dark energy models \cite{holoSN,gyg}. We now analyze the probability distribution of $\eta$ and $w_X$ in the NGCG model.
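Before carrying out the likelihood analysis, the observables entering the three $\chi^2$ terms can be sketched numerically; the parameter values below are illustrative only, not the best-fit ones:

```python
import math

# Illustrative parameters (hypothetical); h = H0/(100 km/s/Mpc),
# and Omega_m^0 = Omega_dm^0 + Omega_b^0 = 1 - Omega_X^0 in a flat universe.
Omega_X0, Omega_b0 = 0.70, 0.05
w_X, eta, h = -1.2, 1.5, 0.7
Omega_m0 = 1 - Omega_X0
D_H = 2997.9/h                   # Hubble distance c/H0 in Mpc

def E(z):
    """Dimensionless Hubble rate E(z) for the flat NGCG model."""
    a = 1.0/(1.0 + z)
    b = 1 - Omega_X0/(1 - Omega_b0)*(1 - a**(-3*w_X*eta))
    return math.sqrt((1 - Omega_b0)*a**-3*b**(1/eta) + Omega_b0*a**-3)

def integral_inv_E(z_max, n=20000):
    """Trapezoidal integral of dz/E(z) from 0 to z_max."""
    step = z_max/n
    s = 0.5*(1/E(0.0) + 1/E(z_max)) + sum(1/E(i*step) for i in range(1, n))
    return s*step

def mu_th(z):
    """Theoretical distance modulus mu = 5 log10(d_L/Mpc) + 25."""
    d_L = D_H*(1 + z)*integral_inv_E(z, n=2000)   # luminosity distance in Mpc
    return 5*math.log10(d_L) + 25

# H0-independent observables: CMB shift parameter R and the SDSS parameter A.
R = math.sqrt(Omega_m0)*integral_inv_E(1089.0)
z1 = 0.35
A = math.sqrt(Omega_m0)*E(z1)**(-1/3)*(integral_inv_E(z1)/z1)**(2/3)

# Their chi-square contributions against the quoted measurements.
chi2_CMB = ((R - 1.716)/0.062)**2
chi2_LSS = ((A - 0.469)/0.017)**2

# Low-z sanity check: the comoving integral reduces to the linear Hubble law.
assert math.isclose(integral_inv_E(0.01), 0.01, rel_tol=2e-2)
assert 1.0 < R < 3.0 and 0.3 < A < 0.7
```

Note that $R$ and $A$ drop $H_0$ entirely, whereas $\mu_{\rm th}$ retains it through the Hubble distance, which is why $h$ is treated as a nuisance parameter below.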
The likelihood of these two parameters is determined by marginalizing over the ``nuisance'' parameters, \begin{equation} {\cal L}(\eta,w_X)=\int dhd\Omega_{dm}^0~ e^{-\chi^2/2}~, \end{equation} where the integral is over a large enough range of $h$ and $\Omega_{dm}^0$ to include almost all the probability. We compute ${\cal L}(\eta,w_X)$ on a two-dimensional grid spanned by $\eta$ and $w_X$. The $68.3\%$, $95.4\%$, and $99.7\%$ (namely 1, 2, and 3 $\sigma$) confidence contours consist of points where the likelihood equals $e^{-2.31/2}$, $e^{-6.18/2}$, and $e^{-11.83/2}$ of the maximum value of the likelihood, respectively. Fig.7 shows our main results, the contours of $1\sigma$, $2\sigma$, and $3\sigma$ confidence levels in the $w_X-\eta$ plane. The 1 $\sigma$ fit values for the model parameters are $w_X=-0.98^{+0.15}_{-0.20}$ and $\eta=1.06^{+0.20}_{-0.16}$, and the minimum value of $\chi^2$ in the four-dimensional parameter space is $\chi_{\rm min}^2=167.29$. We see clearly that the combined analysis of SNe Ia, CMB, and LSS data provides a fairly tight constraint on the NGCG model. It is remarkable that the best fit occurs in the vicinity of the cosmological constant, with $w_X$ slightly larger than $-1$ and $\eta$ mildly larger than 1 (i.e. $\alpha$ slightly larger than 0). This means that, within the framework of the NGCG model, the most probable form of dark energy according to the joint analysis of SNe+CMB+LSS data is close to a cosmological constant. However, the results still allow for the possibility of ``X-matter'' and of an interaction between dark energy and dark matter. In the 1 $\sigma$ range, $w_X\in (-1.18, -0.83)$ and $\alpha\in (-0.1, 0.26)$. This implies that the probabilities that dark energy behaves in a quintessence-like or a phantom-like manner are roughly equal, and the probabilities that the energy flow streams from dark energy to dark matter or the reverse are also roughly equal.
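The marginalization scheme itself can be illustrated with a toy quadratic $\chi^2$ standing in for the full SNe+CMB+LSS one; all numbers here are hypothetical stand-ins, chosen only to mimic the quoted best-fit location:

```python
import math

# Toy quadratic chi^2 in (eta, w_X, h, Omega_dm0), a hypothetical stand-in
# for the full SNe+CMB+LSS chi^2, used to illustrate the marginalization.
def chi2(eta, w_X, h, Odm):
    return (((eta - 1.06)/0.18)**2 + ((w_X + 0.98)/0.17)**2
            + ((h - 0.65)/0.05)**2 + ((Odm - 0.25)/0.03)**2)

def likelihood(eta, w_X, n=40):
    """L(eta, w_X): sum exp(-chi^2/2) over a grid of the nuisance parameters."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            h = 0.5 + 0.3*(i + 0.5)/n        # h in [0.5, 0.8]
            Odm = 0.15 + 0.2*(j + 0.5)/n     # Omega_dm0 in [0.15, 0.35]
            total += math.exp(-chi2(eta, w_X, h, Odm)/2)
    return total

# The 1, 2 and 3 sigma contours are where L falls to exp(-2.31/2),
# exp(-6.18/2) and exp(-11.83/2) of its maximum (two free parameters).
L_max = likelihood(1.06, -0.98)
assert likelihood(1.06 + 0.18, -0.98)/L_max > math.exp(-6.18/2)
assert likelihood(1.06, -0.98) >= likelihood(1.3, -0.7)
```

Because the toy $\chi^2$ is separable, the nuisance sum factors out exactly and the contour condition reduces to $\Delta\chi^2$ in the two remaining parameters, which is the behavior the grid computation approximates for the real data.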
One-dimensional likelihood distribution functions for $w_X$ and $\eta$ are shown in Fig.8 and Fig.9, respectively. It is very clear that the original Chaplygin gas model, $\alpha=1$ (or $\eta=2$) and $w_X=-1$, is totally ruled out by the observational data at the $99.7\%$ confidence level. In addition, it should be pointed out that when we fix $\eta=2$ and leave $w_X$ free, the NGCG model reduces to the so-called variable Chaplygin gas (VCG) model proposed in Ref. \cite{Guo1} (see also Ref. \cite{Guo2}). The joint analysis of SNe+CMB+LSS also rules out this possibility. We hope that future precise data will provide stronger evidence for judging whether the dark energy is the cosmological constant and whether dark energy and dark matter are unified. \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{contour.eps} \caption[]{Confidence level contours of $68.3\%,~95.4\%$ and $99.7\%$ in the $(\eta,w_X)$ plane. The 1 $\sigma$ fit values for the model parameters are $w_X=-0.98^{+0.15}_{-0.20}$ and $\eta=1.06^{+0.20}_{-0.16}$, and the minimum value of $\chi^2$ in the four-dimensional parameter space is $\chi_{\rm min}^2=167.29$.} \end{center} \end{figure} \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{likelihoodwx.eps} \caption[]{One-dimensional probability distribution for $w_X$.} \end{center} \end{figure} \vskip.8cm \begin{figure} \begin{center} \leavevmode \epsfbox{likelihoodeta.eps} \caption[]{One-dimensional probability distribution for $\eta$.} \end{center} \end{figure} \section{Concluding remarks} The Chaplygin gas model is a proposal to describe dark energy and dark matter as a single unified fluid, the Chaplygin gas, characterized by an exotic equation of state $p=-A/\rho$, where $A$ is a positive constant. Since the original Chaplygin gas has been ruled out by the observations, a generalization of the Chaplygin equation of state, $p=-A/\rho^\alpha$, was considered by introducing a free parameter $\alpha$.
The generalized Chaplygin gas model is also regarded as a unification of dark energy and dark matter. The reason is that the GCG behaves as dust-like matter at early times and as a cosmological constant at late times. That is to say, the GCG model implies that the Universe will be dominated by a cosmological constant and thus enter a de Sitter phase in the future. However, we cannot yet affirm that the dark energy is a tiny positive cosmological constant. Therefore, a scheme for the unification of dark energy and dark matter should accommodate other forms of dark energy, such as quintessence-like and phantom-like dark energy. This is the motivation for us to further generalize the GCG model. We propose in this paper a new model as a scheme for the unification of dark energy and dark matter. This new model is a further extension of the GCG model, and is thus dubbed the new generalized Chaplygin gas model. The generalization is implemented by introducing another free parameter $w_X$ to make the constant $A$ in the GCG equation of state become a scale-factor-dependent function $\tilde{A}(a)$. In order to implement the interpolation between a dust dominated Universe and an X-matter dominated Universe, the unique choice of $\tilde{A}(a)$ is $\tilde{A}(a)=-w_XAa^{-3(1+w_X)(1+\alpha)}$. Through a two-fluid decomposition, we show that the NGCG model is completely equivalent to an interacting XCDM parametrization scenario, in which the interaction between dark energy and dark matter is characterized by the constant $\alpha$. We discuss the cosmological consequences of such a unified dark-sector model. Furthermore, a statefinder diagnostic is performed on this scenario, and the discrimination between this scenario and other dark energy models is shown. Finally, a combined analysis of the SNe Ia, CMB, and LSS data is used to constrain the parameters of the NGCG model.
The fit result shows that the joint analysis can provide a considerably tight constraint on the NGCG model. According to the observational test, the best fit occurs in the vicinity of $\Lambda$CDM. We hope that future precise data will provide stronger evidence for judging whether the dark energy is the cosmological constant and whether dark energy and dark matter can be unified into one component. Also, it would be interesting to investigate the evolution of density perturbations and the structure formation in the NGCG scenario. \begin{acknowledgments} One of us (X.Z.) is grateful to Xiao-Jun Bi, Zhe Chang, Tong Chen, Zong-Kuan Guo, and Hao Wei for helpful discussions. This work was supported by the Natural Science Foundation of China (Grant No. 111105035001). \end{acknowledgments}
\section{Introduction} We are interested in bifurcation parameters $\mu$ of discrete one-dimensional dynamical systems in the sense of nontriviality of the box dimension of the trajectory $S_\mu$, near a given trajectory of the system. More precisely, we are interested in values of the parameter $\mu$ such that $\dim_BS_\mu$ is nonzero. The main results are stated in Theorems~\ref{Mb} and~\ref{-Mb}. A typical example is the system generated by the standard logistic map. M.\ Feigenbaum studied the dynamics of the logistic map for $\lambda\in(0,4]$. Taking $\lambda=\lambda_\infty\approx 3.570$ the corresponding invariant set $A\subset[0,1]$ has both Hausdorff and box dimensions equal to $\approx0.538$ (Grassberger \cite{grass}, Grassberger and Procaccia \cite{grassproc}). Here we compute precise values of the box dimension of trajectories corresponding to the period-doubling bifurcation parameters $3$ and $1+\sqrt6$, and to the period-$3$ bifurcation parameter $1+\sqrt8$, see Corollary~\ref{logistic}. A similar effect of nontriviality of the box dimension of trajectories, as in bifurcation problems for discrete systems, has been noticed for some planar vector fields having spiral trajectories $\Gamma_\mu$, see \cite{zuzu}. There we considered a standard model of Hopf-Takens bifurcation with respect to a bifurcation parameter $\mu$ where $\dim_B\Gamma_\mu>1$, while $\dim_B\Gamma_\mu=1$ otherwise. We noticed that a limit cycle is born at the moment of jump of the box dimension of a spiral trajectory. Analogously, in the case of a one-dimensional discrete system a periodic trajectory is born at the moment of jump of the box dimension of a discrete trajectory (sequence). We expect that fractal analysis of general planar spiral trajectories can be reduced to the study of discrete one-dimensional trajectories via the Poincar\'e map, see also \cite[Remark 11]{zuzu}. A review of results dealing with applications of fractal dimensions to dynamics is given in~\cite{fdd}.
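As a numerical illustration (not taken from the results themselves) of the first period-doubling parameter $\lambda=3$ mentioned above: even iterates of the logistic map approach the fixed point $2/3$ at the rate $n^{-1/2}$, which is the exponent $1/(\alpha-1)$ with $\alpha=3$ appearing in the results below, corresponding to box dimension $1-1/\alpha=2/3$:

```python
import math

# Logistic map at the first period-doubling bifurcation parameter lambda = 3.
# Near the fixed point x* = 2/3 the second iterate behaves like
# u -> u - 18 u^3 (u = x - x*), i.e. alpha = 3 in the theorem below.
lam, x_star = 3.0, 2.0/3.0

def dist_after(n, x=0.5):
    """Distance |x_{2n} - x*| after 2n logistic iterations (even iterates)."""
    for _ in range(2*n):
        x = lam*x*(1.0 - x)
    return abs(x - x_star)

# Estimate the decay exponent beta from x_n ~ n^(-beta): comparing N and 4N
# steps should give beta ~ 1/2, hence dim_B = 1 - 1/alpha = 2/3.
N = 10000
beta_est = math.log(dist_after(N)/dist_after(4*N))/math.log(4.0)
assert abs(beta_est - 0.5) < 0.01
```

The initial point $x=0.5$ and the iteration counts are arbitrary choices for the sketch; any generic seed in $(0,1)$ exhibits the same $n^{-1/2}$ decay along even iterates.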
We recall the notions of box dimension and Minkowski content, see e.g.\ Mattila \cite{mattila}. For any subset $S\subset\mathbb{R}^N$ by $S_\varepsilon$ we denote the $\varepsilon$-neighbourhood of $S$ (also called the Minkowski sausage of radius $\varepsilon$ around $S$, a term coined by B.\ Mandelbrot), and $|S_\varepsilon|$ is its $N$-dimensional Lebesgue measure. For a bounded set $S$ and given $s\ge0$ we define the upper $s$-dimensional Minkowski content of $S$ by $$ {\mathcal M}^{*s}(S)=\limsup_{\varepsilon\to0}\frac{|S_\varepsilon|}{\varepsilon^{N-s}}. $$ Analogously for the lower $s$-dimensional Minkowski content of $S$. The upper box dimension of $S$ is defined by $$ \ov\dim_BS=\inf\{s\ge0:{\mathcal M}^{*s}(S)=0\}, $$ and analogously the lower box dimension $\underline\dim_BS$. If both dimensions coincide, we denote it by $\dim_BS$. We say that a set $S$ is Minkowski nondegenerate if its $d$-dimensional upper and lower Minkowski contents are in $(0,\infty)$ for some $d\ge0$, and Minkowski measurable if ${\mathcal M}^{*d}(S)={\mathcal M}_*^d(S):={\mathcal M}^d(S)\in(0,\infty)$. Nondegeneracy of Minkowski contents for fractal strings has been characterized by Lapidus and van Frankenhuysen \cite{lapiduspom}. Applications of Minkowski content in the study of singular integrals can be seen in \cite{mink} and~\cite{singl}. For any two sequences $(a_n)_{n\ge1}$ and $(b_n)_{n\ge1}$ of positive real numbers we write $a_n\simeq b_n$ as $n\to\infty$ if there exist positive constants $A$ and $B$ such that $A\le a_n/b_n\le B$ for all $n$. Analogously, for two positive functions $f,g\:(0,r)\to\mathbb{R}$ we write $f(x)\simeq g(x)$ as $x\to0$ if $f(x)/g(x)\in[A,B]$ for $x$ sufficiently small. \section{Box dimension of some recurrently defined sequences} The first result deals with sequences~$(x_n)_{n\ge1}$ converging monotonically to zero.
\begin{theorem}\label{m} Let $\alpha>1$ and let $f\:(0,r)\to (0,\infty)$ be a monotonically nondecreasing function such that $f(x)\simeq x^\alpha$ as $x\to0$, and $f(x)<x$ for all $x\in(0,r)$. Consider the sequence $S(x_1):=(x_n)_{n\ge1}$ defined by \bg{\eq} x_{n+1}=x_n-f(x_n),\quad x_1\in(0,r). \end{\eq} Then \bg{\eq}\label{xn} x_n\simeq n^{-1/(\alpha-1)}\quad \mbox{as $n\to\infty$.} \end{\eq} Furthermore, \bg{\eq} \dim_BS(x_1)=1-\frac1\alpha, \end{\eq} and the set $S(x_1)$ is Minkowski nondegenerate. \end{theorem} \begin{proof} (a) Assuming that $Ax^\alpha\le f(x)\le Bx^\alpha$, we have \bg{\eq} 0<x_{n+1}\le x_n-Ax_n^\alpha. \end{\eq} It is easy to see that $x_n$ tends monotonically to $0$. Using induction we first prove that $x_n\le b n^{-\beta}$, where $\beta:=\frac1{\alpha-1}$, for some positive constant $b$. Let us consider the inductive step first, and then the basis. Assume that $x_n\le b n^{-\beta}$ for some $n$, and assume also that $x_n\le x_{max}$, where $x_{max}$ is the point of maximum of $x-x^\alpha$, $x>0$. Note that since $x_n$ decreases to zero, we have $x_n\le x_{max}$ for all $n$ sufficiently large. Exploiting the monotonicity of $x\mapsto x-x^\alpha$ on $(0,x_{max})$ we have $$ x_{n+1}\le x_n-Ax_n^\alpha\le bn^{-\beta}-Ab^\alpha n^{-\alpha \beta}\le b(n+1)^{-\beta}. $$ In order to prove the last inequality, it suffices to show that $$ n^{-\beta}-Ab^{\alpha-1} n^{-\alpha \beta}\le (n+1)^{-\beta}. $$ To this end let us consider the binomial series expansion: \bg{eqnarray} (n+1)^{-\beta}&=&[n(1+\frac 1n)]^{-\beta}=n^{-\beta}+\binom{-\beta}1n^{-\beta-1}+R_n\label{binom}\\ &=&n^{-\beta}-\beta n^{-\alpha\beta}+R_n\ge n^{-\beta}-Ab^{\alpha-1}n^{-\alpha\beta}.\nonumber \end{eqnarray} The last inequality holds provided $b$ is chosen so that $Ab^{\alpha-1}\ge\beta$, and if $R_n\ge0$.
To prove $R_n\ge0$ note that $R_n=a_2+a_4+a_6+\dots$, where each $a_k$ (with even $k$) has the form \bg{eqnarray} a_k&=&\binom{-\beta}kn^{-\beta-k}+\binom{-\beta}{k+1}n^{-\beta-k-1}\nonumber\\ &=&\frac{\beta(\beta+1)\dots(\beta+k-1)}{k!}n^{-\beta-k}-\frac{\beta(\beta+1)\dots(\beta+k)}{(k+1)!}n^{-\beta-k-1}.\nonumber \end{eqnarray} The inequality $a_k\ge0$ is equivalent to $$ n\ge\frac{\beta+k}{k+1}=\frac1{(k+1)(\alpha-1)}+\frac{k}{k+1}. $$ For all even $k$ the right-hand side obviously does not exceed $n_0=n_0(\alpha)=\frac1{3(\alpha-1)}+1$. The condition $x_n\le x_{max}$ for $n\ge n_0$ is ensured if we take $n_0$ sufficiently large. From the condition $Ab^{\alpha-1}\ge\beta$ we see that we must take $b\ge (\beta/A)^{\beta}$. Hence, the basis of induction and the inductive step hold for $n\ge n_0$ with such a $b$. Taking $b$ still larger, we can achieve that $x_n\le bn^{-\beta}$ for all $n\ge1$. (b) To prove that there exists $a>0$ such that $x_n\ge an^{-\beta}$ for all $n\ge1$, we use only $x_{n+1}\ge x_n-Bx_n^\alpha$. Assuming by induction that the desired inequality holds for a fixed $n$ we obtain, analogously to (a), that \bg{\eq}\label{ineq} x_{n+1}\ge x_n-Bx_n^\alpha\ge an^{-\beta}-Ba^\alpha n^{-\alpha \beta}\ge a(n+1)^{-\beta}, \end{\eq} under the assumption that $x_n\le x_{max}$. In order to show the last inequality in (\ref{ineq}) we use the binomial expansion (\ref{binom}) again, and proceed by writing $\beta=\gamma+\delta$ with positive constants $\gamma$ and $\delta$ to be chosen. We need \bg{eqnarray} (n+1)^{-\beta}&=&(n^{-\beta}-\gamma n^{-\alpha\beta})-(\delta n^{-\alpha\beta}-R_n)\nonumber\\ &\le& n^{-\beta}-B a^{\alpha-1}n^{-\alpha\beta}.\nonumber \end{eqnarray} This holds provided $\gamma\ge Ba^{\alpha-1}$, that is, $a\le(\gamma/B)^\beta$, and if $R_n\le \delta n^{-\alpha\beta}$ for some $\delta\in(0,\beta)$. Note that $R_n\le \delta n^{-\alpha\beta}$ is equivalent to $$ n\sum_{k=2}^\infty\binom{\beta}kn^{-k}\le \delta,
$$ that is, with $$ n\left[(1+\frac1n)^\beta-1-\binom\beta 1\frac1n\right]\le\delta. $$ Using Taylor's formula $(1+\frac1n)^\beta=1+\binom\beta1\frac1n+\binom\beta2\ov x^2$, $0<\ov x<\frac1n$, we see that the above inequality is satisfied when $\binom\beta2 n\ov x^2\le\delta$, that is, when $\binom\beta2 n^{-1}\le\delta$. This holds for all $n\ge n_0$ if $n_0$ is large enough. We can choose $n_0$ large enough so that also $x_{n_0}\le x_{max}$. Taking $a$ small enough we can achieve the basis of induction, $x_{n_0}\ge a n_0^{-\beta}$. Taking $a>0$ still smaller, the lower bound will hold for all $n\ge1$. This completes the proof of the lower bound of $x_n$ by induction. (c) Since $f$ is nondecreasing, the sequence $l_n:=x_n-x_{n+1}=f(x_n)$ is nonincreasing. Hence, we can derive the Minkowski nondegeneracy of $S(x_1)$ using Lapidus and Pomerance \cite[Theorem 2.4]{lapiduspom}. Indeed, from (\ref{xn}) we have $$ l_n=f(x_n)\simeq x_n^\alpha\simeq n^{-\alpha/(\alpha-1)}=n^{-1/d}, $$ where $d:=1-\frac1\alpha\in(0,1)$. Using the mentioned result we conclude that $S(x_1)$ is Minkowski nondegenerate and $\dim_BS(x_1)=d=1-\frac1\alpha$. \end{proof} \remark Step (c) in the proof of Theorem~\ref{m} can be carried out by directly estimating the Minkowski contents of $S=S(x_1)$. Using $l_n:=x_n-x_{n+1}=f(x_n)\le B(bn^{-\beta})^\alpha=Bb^\alpha n^{-\alpha\beta}$ we see that $l_n\le2\varepsilon$ if $n\ge(\frac12 Bb^\alpha)^{d}\varepsilon^{-d}$, where $d:=1-\frac1\alpha$. Defining $n_0=n_0(\varepsilon):=\lceil (\frac12 Bb^\alpha)^{d}\varepsilon^{-d} \rceil$ we have \bg{eqnarray}\label{ge} |S_\varepsilon|\ge x_{n_0}+2\varepsilon(n_1-1), \end{eqnarray} where $n_1=n_1(\varepsilon)$ is obtained in a similar way from the condition $l_n=f(x_n)\ge2\varepsilon$. It is satisfied for $n\le n_1:=\lfloor (\frac12Aa^{\alpha})^d\varepsilon^{-d}\rfloor$. Using (\ref{ge}) we conclude that \bg{\eq}\label{gm} {\mathcal M}_*^d(S)\ge\frac ab\left(\frac 2{B}\right)^{1/\alpha}+2\left(\frac12Aa^\alpha\right)^d.
\end{\eq} In an analogous way, estimating $|S_\varepsilon|$ from above, we obtain \bg{\eq}\label{dm} {\mathcal M}^{*d}(S)\le\frac ba\left(\frac 2{A}\right)^{1/\alpha}+2\left(\frac12Bb^\alpha\right)^d. \end{\eq} This proves that $S$ is Minkowski nondegenerate and $\dim_BS=d$. \remark We do not know if the set $S=S(x_1)$ corresponding to $f(x)= A\cdot x^\alpha$ in Theorem \ref{m}, where $A>0$, is Minkowski measurable. Numerical experiments suggest that in this case any corresponding sequence $S=(x_n)_{n\ge1}$, $x_1\in(0,1)$, is Minkowski measurable, and \bg{\eq} {\mathcal M}^d(S)=\left(\frac2{A}\right)^{1/\alpha}\frac\alpha{\alpha-1}. \end{\eq} This value is obtained if we formally let $a=b=(\beta/A)^\beta$ in (\ref{gm}) and (\ref{dm}). \medskip The following result deals with sequences $(x_n)_{n\ge1}$ oscillating around a fixed point $x_0$, so that their two subsequences defined by odd and even indices monotonically converge to~$x_0$. It suffices to consider the case $x_0=0$. \begin{theorem}\label{-m} Let $f:(-r,r)\to\mathbb{R}$ be a function such that $f(x)\simeq |x|^\alpha$ as $x\to0$, where $\alpha>1$. We also assume that the function $f(-x-f(x))-f(x)$ satisfies the following conditions: \bg{eqnarray} &\mbox{it is monotonically nondecreasing for $x>0$ small enough,}&\label{mon}\\ &\mbox{it is monotonically nonincreasing for $x<0$ small enough,}&\nonumber\\ &f(-x-f(x))-f(x)\simeq\pm|x|^{2\alpha-1}\quad\mbox{as $x\to0\pm$.}\label{funkc}& \end{eqnarray} Then there exists $r_1>0$ such that for any sequence $S(x_1):=(x_n)_{n\ge1}$ defined by \bg{\eq} x_{n+1}=-x_n-f(x_n),\quad x_1\in(-r_1,r_1), \end{\eq} we have \bg{\eq}\label{xnn} |x_n|\simeq n^{-1/(2\alpha-2)},\quad \mbox{as $n\to\infty$.} \end{\eq} Furthermore, \bg{\eq} \dim_BS(x_1)=1-\frac1{2\alpha-1}, \end{\eq} and the set $S(x_1)$ is Minkowski nondegenerate. \end{theorem} \begin{proof} Note that if we define $F(x):=-x-f(x)$ then $F^2(x)=F(F(x))=x-g(x)$, where $g(x):=f(-x-f(x))-f(x)$.
We have \bg{\eq}\label{gx} g(x)\simeq \pm|x|^{2\alpha-1}\quad\mbox{as $x\to0\pm$.} \end{\eq} As $2\alpha-1>1$ and $g(0)=0$, from (\ref{gx}) we see that there exists $r_1\in(0,r)$ such that $0<g(x)<x$ for $x\in(0,r_1)$ and $-x<g(x)<0$ for $x\in(-r_1,0)$. Starting with $x_1\in(0,r_1)$, the sequence $y_n:=x_{2n-1}$, $n\ge1$, satisfies $y_{n+1}=y_{n}-g(y_{n})$, and since $(y_n)$ is nonincreasing, it is contained in $(0,r_1)$. By Theorem \ref{m} applied to $g$ and the sequence $(y_n)$ we have that $y_n=x_{2n-1}\simeq n^{-1/(2\alpha-2)}$ as $n\to\infty$. To obtain the same asymptotics for $|x_{2n}|$, it suffices to start with $x_2=F(x_1)<0$, and to consider the sequence $z_n:=x_{2n}$, ${n\ge1}$, contained in $(-r,0)$. Using again Theorem \ref{m} (modified to this situation; note that $-x<g(x)<0$) applied to the sequence $z_n$, we obtain $|z_n|=|x_{2n}|\simeq n^{-1/(2\alpha-2)}$. Hence $|x_{n}|\simeq n^{-1/(2\alpha-2)}$. The same asymptotics is obtained if we start with $x_1\in(-r_1,0)$. Exploiting the finite stability of the upper box dimension, see Falconer \cite[p.\ 44]{falc}, we have that $\ov\dim_{B}S=\max\{\ov\dim_BS_-,\ov\dim_BS_+\}$, where $S_-:=S\cap(-r,0)$ and $S_+:=S\cap(0,r)$ are the negative and positive parts of the sequence $S=S(x_1)$. Since $\ov\dim_BS_-=\ov\dim_BS_+=1-\frac1{2\alpha-1}$, see Theorem \ref{m}, we conclude that $\ov\dim_BS=1-\frac1{2\alpha-1}$. To estimate the lower box dimension, first note that the sets $S_{+}$ and $S_-$ are separated by $x=0$, hence $(S_+)_\varepsilon\cap (S_-)_\varepsilon=(-\varepsilon,\varepsilon)$. Therefore $|S_\varepsilon|=|(S_+)_\varepsilon|+|(S_-)_\varepsilon|-|(S_+)_\varepsilon\cap(S_-)_\varepsilon|=|(S_+)_\varepsilon|+|(S_-)_\varepsilon|-2\varepsilon$, and from this we immediately obtain that ${\mathcal M}_*^d(S)\ge{\mathcal M}_*^d(S_+)+{\mathcal M}_*^d(S_-)>0$, where $d:=1-\frac1{2\alpha-1}$. We have also used the Minkowski nondegeneracy of $S_{\pm}$. Hence, $\underline\dim_BS\ge d$. This finishes the proof of $\dim_BS=d$.
\end{proof} \remark The conditions of Theorem \ref{-m} are satisfied when, for example, $f(x)=|x|^\alpha$, $x\in(-r,r)$. In order to facilitate the study of bifurcation problems below, it will be convenient to formulate the following consequences of Theorems~\ref{m} and~\ref{-m}. \begin{theorem}\label{M} Let $F:(x_0-r,x_0+r)\to\mathbb{R}$ be a function of class $C^3$, such that \bg{eqnarray} F(x_0)&=&x_0,\label{0}\\ F'(x_0)&=&1,\label{od}\\ F''(x_0)&<&0.\label{odd} \end{eqnarray} Then there exists $r_1>0$ such that for any sequence $S(x_1)=(x_n)_{n\ge1}$ defined by $$ x_{n+1}=F(x_n),\quad x_1\in(x_0,x_0+r_1), $$ we have $|x_n-x_0|\simeq n^{-1}$ as $n\to\infty$, \bg{\eq} \dim_BS(x_1)=\frac12, \end{\eq} and $S(x_1)$ is Minkowski nondegenerate. An analogous result holds if $x_1\in(x_0-r_1,x_0)$, assuming that in (\ref{odd}) we have the opposite sign. \end{theorem} \begin{proof} We can assume without loss of generality that $x_0=0$, and let $x>0$. By Taylor's formula, using (\ref{0}) and (\ref{od}), we have that $$ F(x)=x-f(x), $$ where $$ f(x)=-\frac{F''(0)}{2!}x^2-\frac{F'''(\ov x)}{3!}x^3 $$ with $\ov x=\ov x(x)\in(0,r)$. Since $F''(0)<0$, we see that $f(x)\simeq x^2$. The condition $f(x)<x$ is clearly satisfied in $(0,r_1)$ for $r_1$ sufficiently small. The function $f$ is increasing since, applying Taylor's formula to $F'\in C^2$, we get $$ f'(x)=1-F'(x)=-F''(0)\,x-\frac{F'''(\ov x)}{2!}\,x^2>0 $$ for $x\in(0,r_1)$ with $r_1$ small enough. The claim follows from Theorem~\ref{m} with $\alpha=2$. Analogously for $x\in(-r_1,0)$.
\end{proof} \begin{theorem}\label{-M} Let $F:(x_0-r,x_0+r)\to\mathbb{R}$ be a function of class $C^4$, such that \bg{eqnarray} F(x_0)&=&x_0,\label{0-}\\ F'(x_0)&=&-1,\label{-od}\\ F''(x_0)&\ne&0.\label{-odd} \end{eqnarray} Then there exists $r_1>0$ such that for any sequence $S(x_1)=(x_n)_{n\ge1}$ defined by $$ x_{n+1}=F(x_n),\quad x_1\in(x_0-r_1,x_0+r_1),\quad x_1\ne x_0, $$ we have $|x_n-x_0|\simeq n^{-1/2}$ as $n\to\infty$, \bg{\eq} \dim_BS(x_1)=\frac 23, \end{\eq} and $S(x_1)$ is Minkowski nondegenerate. \end{theorem} \begin{proof} We assume without loss of generality that $x_0=0$. It suffices to check that the conditions of Theorem~\ref{-m} are satisfied. Using Taylor's formula, (\ref{0-}), and (\ref{-od}), we get $$ F(x)=-x-f(x), $$ where $$ f(x)=-\frac{F''(0)}{2!}\,x^2-\frac{F'''(\ov x)}{3!}\,x^3, $$ with $\ov x=\ov x(x)\in(-r,r)$. Now we consider the function $g(x)=f(-x-f(x))-f(x)$: $$ g(x)=-F''(0)x\cdot f(x)+\dots=\frac12F''(0)^2x^3+\mbox{higher order terms}. $$ Since $g'(x)=\frac32F''(0)^2\,x^2+\mbox{higher order terms}>0$ for $x\ne0$ such that $|x|$ is small enough, it is clear that $g(x)$ is increasing on $(-r_1,r_1)$, provided $r_1>0$ is small enough. Also, $$ g(x)=\frac{F''(0)^2}{2}x^3+o(x^3)\simeq x^3\quad\mbox{as $x\to0$.} $$ This shows that conditions (\ref{mon}) and (\ref{funkc}) are fulfilled. The claim follows from Theorem~\ref{-m} with $\alpha=2$. \end{proof} \remark It is clear that more general versions of Theorems~\ref{M} and~\ref{-M} can be proved where finitely many consecutive derivatives of $F$ of orders $k=2,3,\dots,2m-1$ are equal to zero, and $F^{(2m)}(x_0)\ne0$. \begin{lemma}\label{hyperbolic} Assume that $S=(x_n)\subset\mathbb{R}$ is a sequence of positive numbers converging exponentially to zero, that is, there exist $\lambda\in(0,1)$ and a constant $C>0$ such that $0<x_n\le C\lambda^n$ for all $n$. Then $\dim_BS=0$.
\end{lemma} \begin{proof} For any fixed $\varepsilon>0$, the inequality $x_n\le C\lambda^n<2\varepsilon$ is satisfied for $n\ge n_0(\varepsilon):= \lceil\frac{\log(2\varepsilon/C)}{\log\lambda}\rceil$. Hence, $$ |S_\varepsilon|\le2\varepsilon+n_0(\varepsilon)\cdot 2\varepsilon, $$ and from this ${\mathcal M}^{*s}(S)=0$ for any $s\in(0,1)$. Therefore, $\ov\dim_B S=0$. \end{proof} From this we immediately obtain the following result. The notion of a hyperbolic fixed point of the system is described e.g.\ in Devaney~\cite{devaney}. \begin{theorem} {\rm (Hyperbolic fixed point)}\label{hyp} Let $F:(x_0-r,x_0+r)\to\mathbb{R}$ be a function of class $C^1$, $F(x_0)=x_0$, and $|F'(x_0)|<1$. Then there exists $r_1\in(0,r)$ such that for any sequence $S(x_1)=(x_n)_{n\ge1}$ defined by $$ x_{n+1}=F(x_n),\quad x_1\in(x_0-r_1,x_0+r_1) $$ we have $$ \dim_BS(x_1)=0. $$ \end{theorem} It is easy to see that under the assumptions on $f$ given in Theorem \ref{m} we have that for each $\lambda\in(0,1)$ the sequence $S:=(x_n)_{n\ge1}$ corresponding to $x_{n+1}=\lambda x_n-f(x_n)$, $x_1\in(0,r_\lambda)$, with $r_\lambda$ sufficiently small, has exponential decay, $0<x_n\le \lambda^n$. Hence $\dim_BS=0$. \smallskip Now we state a simple but useful comparison result, which we shall need in the proof of Theorem~\ref{exp}. \begin{lemma}\label{comp} {\rm (Comparison principle for box dimensions)} Assume that $A=(a_n)_{n\ge1}$ and $B=(b_n)_{n\ge1}$ are two decreasing sequences of positive real numbers converging to zero, such that the sequences of their differences $(a_n-a_{n+1})_{n\ge1}$ and $(b_n-b_{n+1})_{n\ge1}$ are monotonically nonincreasing. If $a_n\le b_n$ then $$ \ov\dim_BA\le\ov\dim_BB,\quad \underline\dim_BA\le\underline\dim_BB.
$$ \end{lemma} \begin{proof} Using the fact that the Borel rarefaction index of $A$ is equal to the upper box dimension, see Tricot \cite[p.\ 34 and Theorem on p.\ 35]{tricot}, we obtain \bg{\eq} \ov\dim_BA=\underset{n\to\infty}{\ov\lim}\,\,\frac1{1+\frac{\log(1/a_n)}{\log n}}. \end{\eq} Since $0<a_n\le b_n$ we conclude that $\ov\dim_B A\le\ov\dim_B B$. Using methods described in Tricot \cite[pp.\ 33--36]{tricot} it can be shown that an analogous result holds for the lower box dimension: \bg{\eq}\label{diml} \underline\dim_BA=\underset{n\to\infty}{\underline\lim}\,\,\frac1{1+\frac{\log(1/a_n)}{\log n}}. \end{\eq} This immediately implies $\underline\dim_B A\le\underline\dim_B B$. \end{proof} Now we consider a sequence converging to $0$ so slowly that its box dimension is the maximal possible. We achieve this by assuming that $f(x)$ converges very fast to $0$ as $x\to0$. An example of such a function is $f(x)=\exp(-1/x)$. \begin{theorem}\label{exp} Let $f\:(0,r)\to (0,\infty)$ be a nondecreasing function such that $f(x)<x$ and for any $\alpha>1$ we have that $f(x)=O(x^\alpha)$ as $x\to0$. Consider the sequence $S:=(x_n)_{n\ge1}$ defined by $x_{n+1}=x_n-f(x_n)$, $x_1\in(0,r)$. Then \bg{\eq} \dim_BS=1. \end{\eq} \end{theorem} \begin{proof} It is clear that $x_n\to0$. For any fixed $\alpha>1$ there exists $B_\alpha>0$ such that we have $f(x)\le B_\alpha x^\alpha$, hence $x_{n+1}\ge x_n-B_\alpha x_n^\alpha$ for all $n\ge1$. As in step (b) of the proof of Theorem~\ref{m} we conclude that there exists $a=a_\alpha>0$ such that $x_n\ge an^{-1/(\alpha-1)}$ for all $n$. Since $x_n\to0$ monotonically, $c_n:=x_n-x_{n+1}=f(x_n)\to0$ monotonically as well. Therefore, using Lemma~\ref{comp} (see also (\ref{diml})), we get $$ \underline\dim_BS\ge\underline\dim_B\{an^{-1/(\alpha-1)}\}=\frac1{1+(\alpha-1)^{-1}}= 1-\frac1\alpha. $$ The claim follows by letting $\alpha\to\infty$.
\end{proof} \section{Box dimension of trajectories at bifurcation points} For the definitions of the saddle-node (or tangent) bifurcation and the period-doubling bifurcation, together with basic results, see Devaney \cite[Section 1.12]{devaney}. Note that the conditions in Theorem~\ref{Mb} are essentially the same as those in \cite[Theorem~12.6]{devaney}. Also the conditions in Theorem~\ref{-Mb} are essentially the same as those in \cite[Theorem~12.7]{devaney}. The novelty in Theorems~\ref{Mb} and~\ref{-Mb} lies in the precise values of the box dimensions of trajectories near the point of bifurcation, the convergence rates of trajectories, and their Minkowski nondegeneracy. \begin{theorem}\label{Mb} {\rm (Saddle-node bifurcation)} Suppose that a function $F:J\times(x_0-r,x_0+r)\to\mathbb{R}$, where $J$ is an open interval in $\mathbb{R}$, is such that $F(\lambda_0,\cdot)$ is of class $C^3$ for some $\lambda_0\in J$, and $F(\cdot,x)$ of class $C^1$ for all~$x$. Assume that \bg{eqnarray} F(\lambda_0,x_0)&=&x_0,\\ \pd F{x}(\lambda_0,x_0)&=&1,\label{odb}\\ \pdd Fx(\lambda_0,x_0)&<&0,\label{oddb}\\ \pd F{\lambda}(\lambda_0,x_0)&\ne&0. \end{eqnarray} Then $\lambda_0$ is the point where a saddle-node bifurcation occurs. Furthermore, there exists $r_1\in(0,r)$ such that for any sequence $S(\lambda_0,x_1)=(x_n)_{n\ge1}$ defined by $$ x_{n+1}=F(\lambda_0,x_n),\quad x_1\in(x_0,x_0+r_1), $$ we have $|x_n-x_0|\simeq n^{-1}$ as $n\to\infty$, \bg{\eq} \dim_BS(\lambda_0,x_1)=\frac12, \end{\eq} and $S(\lambda_0,x_1)$ is Minkowski nondegenerate. An analogous result holds if $x_1\in(x_0-r_1,x_0)$, assuming that in (\ref{oddb}) we have the opposite sign. \end{theorem} \begin{proof} The claim follows from Theorem~\ref{M} and \cite[Theorem~12.6]{devaney}. \end{proof} \begin{theorem}\label{-Mb} {\rm(Period-doubling bifurcation)} Let $F:J\times(x_0-r,x_0+r)\to\mathbb{R}$ be a function of class $C^2$, where $J$ is an open interval in $\mathbb{R}$, and $F(\lambda_0,\cdot)$ is of class $C^4$ for some $\lambda_0\in J$.
Assume that \bg{eqnarray} F(\lambda_0,x_0)&=&x_0,\\ \pd Fx(\lambda_0,x_0)&=&-1,\label{-odb}\\ \pdd Fx(\lambda_0,x_0)&\ne&0,\label{-oddb}\\ \frac{\partial^2(F^2)}{\partial\lambda\,\partial x}(\lambda_0,x_0)\ne0,&&\frac{\partial^3(F^2)}{\partial x^3} (\lambda_0,x_0)\ne0, \end{eqnarray} where we have denoted $F^2=F\circ F$. Then $\lambda_0$ is the point where a period-doubling bifurcation occurs. Furthermore, there exists $r_1\in(0,r)$ such that for any sequence $S(\lambda_0,x_1)=(x_n)_{n\ge1}$ defined by $$ x_{n+1}=F(\lambda_0,x_n),\quad x_1\in(x_0-r_1,x_0+r_1),\quad x_1\ne x_0, $$ we have $|x_n-x_0|\simeq n^{-1/2}$ as $n\to\infty$, \bg{\eq} \dim_BS(\lambda_0,x_1)=\frac 23, \end{\eq} and $S(\lambda_0,x_1)$ is Minkowski nondegenerate. \end{theorem} \begin{proof} The claim follows from Theorem~\ref{-M} and \cite[Theorem~12.7]{devaney}. \end{proof} Now we apply the preceding results to the bifurcation problem generated by the logistic map. By $d(x,A)$, where $x\in\mathbb{R}$ and $A\subset\mathbb{R}$, we denote the Euclidean distance from $x$ to $A$. \begin{cor}\label{logistic} {\rm(Logistic map)} Let $F(\lambda,x)=\lambda x(1-x)$, $x\in(0,1)$, and let $S(\lambda,x_1)=(x_n)_{n\ge1}$ be a sequence defined by initial value $x_1$ and $x_{n+1}=F(\lambda,x_n)$. (a) For $\lambda_0=1$, taking $x_1>0$ sufficiently close to $x_0=0$, we have that $x_n\simeq n^{-1}$ as $n\to\infty$, and $$ \dim_BS(1,x_1)=\frac12. $$ (b) (Onset of period-$2$ cycle) For $\lambda_0=3$ the corresponding fixed point is $x_0=2/3$. For any $x_1$ sufficiently close to $x_0$ we have that $|x_n-x_0|\simeq n^{-1/2}$, and $$ \dim_BS(3,x_1)=\frac23. $$ (c) For any $\lambda\notin\{1,3\}$ and $x_1$ such that the sequence $S(\lambda,x_1)$ is convergent, we have that $\dim_BS(\lambda,x_1)=0$. (d) (Onset of period-$4$ cycle) If $\lambda_0=1+\sqrt6$ then for any $x_1$ sufficiently close to the period-$2$ trajectory $A=\{a_1,a_2\}$ we have that $d(x_n,A)\simeq n^{-1/2}$ as $n\to\infty$, and $$ \dim_BS(1+\sqrt6,x_1)=\frac23.
$$ (e) (Period-$3$ cycle) Let $\lambda_0=1+\sqrt8$ and let $a_1$, $a_2$, $a_3$ be fixed points of $F^3$ such that $0<a_1<a_2<a_3<1$, $F(a_1)=a_2$, $F(a_2)=a_3$, and $F(a_3)=a_1$. Then there exists $\delta>0$ such that for any initial value $$ x_1\in(a_1-\delta,a_1)\cup(a_2-\delta,a_2)\cup(a_3,a_3+\delta) $$ we have $d(x_n,\{a_1,a_2,a_3\})\simeq n^{-1}$ as $n\to\infty$, and $$ \dim_BS(1+\sqrt8,x_1)=\frac12. $$ All trajectories appearing in this corollary are Minkowski nondegenerate. \end{cor} \begin{proof} Claim (a) follows from Theorem~\ref{m}. For (b) and (c) see Theorems~\ref{-Mb} and \ref{hyp}. Claim (d) follows from Theorem~\ref{-M} since $(F^2)''(\lambda_0,x_0)\ne0$. Claim (e) follows using Theorem~\ref{M} applied to $F^3$. Note that these three intervals are disjoint for $\delta>0$ small enough. See Strogatz \cite[pp.\ 362 and 363]{strogatz}. The fact that for $\lambda_0=1+\sqrt8$ we have $(F^3)''(a_i)\ne0$, $i=1,2,3$, can be obtained by direct computation. \end{proof} \medskip \remark It would be interesting to know the precise values of the box dimensions of trajectories corresponding to all period-doubling bifurcation parameters $\lambda_k$ where $2^k$-periodic points occur. On the basis of numerical experiments we expect that all of them will be equal to $2/3$. \medskip {\csc Example.} For $F(\lambda,x):=\lambda e^{x}$, see Devaney \cite[Section 1.12]{devaney}, we can obtain similar results. Indeed, using Theorem~\ref{Mb} we obtain that $$ \dim_BS(e^{-1},x_1)=\frac12 $$ for all $x_1$ in a punctured neighbourhood of $x_0=1$. Using Theorem~\ref{-Mb} we obtain that for any $x_1$ in a punctured neighbourhood of $x_0=-1$ we have $$ \dim_BS(-e,x_1)=\frac23. $$ If $\lambda\notin\{e^{-1},-e\}$, we have $\dim_BS(\lambda,x_1)=0$ provided $S(\lambda,x_1)$ is a convergent sequence, see Theorem~\ref{hyp}.
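The rates in Corollary~\ref{logistic} can be checked numerically. The sketch below (ours, illustrative only; the initial value and the sampled indices are arbitrary) iterates the logistic map at the period-doubling parameter $\lambda_0=3$, where $\sqrt{n}\,|x_n-2/3|$ should stay bounded away from $0$ and $\infty$, and at $\lambda=2.5$, where the fixed point $x_0=1-1/\lambda=0.6$ is hyperbolic and the same scaled quantity collapses to $0$, cf.\ Theorem~\ref{hyp}.

```python
def logistic_orbit(lam, x1, n):
    # Orbit x_1, ..., x_n of the map x -> lam * x * (1 - x)
    xs = [x1]
    for _ in range(n - 1):
        xs.append(lam * xs[-1] * (1.0 - xs[-1]))
    return xs

# lambda_0 = 3: onset of the period-2 cycle, |x_n - 2/3| ~ n^{-1/2}
xs = logistic_orbit(3.0, 0.5, 100000)
x0 = 2.0 / 3.0
scaled = [(n ** 0.5) * abs(xs[n - 1] - x0) for n in (100, 1000, 10000, 100000)]
print(scaled)  # stays in a bounded band away from 0

# lambda = 2.5: hyperbolic fixed point 0.6, exponential convergence
ys = logistic_orbit(2.5, 0.5, 1000)
print((1000 ** 0.5) * abs(ys[-1] - 0.6))  # collapses to (numerical) 0
```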
\section{Introduction} Matrix multiplication refers to computing the product $XY^T$ of two matrices $X\in\mathbb{R}^{m_x\times n}$ and $Y\in\mathbb{R}^{m_y\times n}$, which is a fundamental task in many machine learning applications such as regression \citep{Naseem10,Cohen16}, online learning \citep{Hazan_2007,Chu11}, information retrieval \citep{Eriksson-Bique11} and canonical correlation analysis \citep{Hotelling36,Chen15}. Recently, the scales of data and models in these applications have increased dramatically, resulting in very large data matrices. As a result, directly computing $XY^T$ in main memory requires unacceptable time and space. To reduce both time and space complexities, approximate matrix multiplication (AMM) with limited space, which can efficiently compute a good approximation of the matrix product, has become a substitute and received ever-increasing attention \citep{Qiaomin16,ELB}. Given two large matrices $X\in\mathbb{R}^{m_x\times n}$ and $Y\in\mathbb{R}^{m_y\times n}$, the goal of AMM with limited space is to find two small sketches $B_X\in\mathbb{R}^{m_x\times \ell}$ and $B_Y\in\mathbb{R}^{m_y\times \ell}$ such that $B_XB_Y^T$ approximates $XY^T$ well, where $\ell\ll\min(m_x,m_y,n)$ is the sketch size. Traditionally, randomized techniques such as column selection \citep{Drineas06} and random projection \citep{Sarlos06,Magen2011,Cohen16} have been utilized to develop lightweight algorithms with the $O(n(m_x+m_y)\ell)$ time complexity and $O((m_x+m_y)\ell)$ space complexity for AMM, and yielded theoretical guarantees for the approximation error. Specifically, early studies \citep{Drineas06,Sarlos06} focused on the Frobenius error, and achieved the following bound \begin{equation} \label{F_bound1} \|XY^T-B_XB_Y^T\|_F\leq\epsilon\|X\|_F\|Y\|_F \end{equation} with $\ell=\widetilde{O}(1/\epsilon^2)$\footnote{We use the $\widetilde{O}$ notation to hide constant factors as well as polylogarithmic factors.}.
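As a concrete illustration of the random-projection approach (a toy example of ours, not taken from the cited works), the following sketch forms $B_X=XG$ and $B_Y=YG$ with a scaled Gaussian matrix $G\in\mathbb{R}^{n\times\ell}$ and checks that the relative Frobenius error is of order $1/\sqrt{\ell}$, consistent with the sketch size $\ell=\widetilde{O}(1/\epsilon^2)$ in (\ref{F_bound1}).

```python
import numpy as np

rng = np.random.default_rng(0)
n, mx, my, ell = 2000, 30, 20, 200
X = rng.standard_normal((mx, n))
Y = rng.standard_normal((my, n))

# Scaled Gaussian projection: E[G @ G.T] = I_n, so E[Bx @ By.T] = X @ Y.T
G = rng.standard_normal((n, ell)) / np.sqrt(ell)
Bx, By = X @ G, Y @ G

err = np.linalg.norm(X @ Y.T - Bx @ By.T)            # Frobenius error
rel = err / (np.linalg.norm(X) * np.linalg.norm(Y))  # vs ||X||_F ||Y||_F
print(rel)  # of order 1 / sqrt(ell)
```

The sketches use $O((m_x+m_y)\ell)$ space and can be built in one pass over the columns of $X$ and $Y$.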
Later, two improvements \citep{Magen2011,Cohen16} established the following error bound measured by the spectral norm \begin{equation} \label{S_bound1} \|XY^T-B_XB_Y^T\|\leq\epsilon\|X\|\|Y\| \end{equation} with $\ell=\widetilde{O}\left(\left(\sr(X)+\sr(Y)\right)/\epsilon^2\right)$ where $\sr(X)=\frac{\|X\|_F^2}{\|X\|^2}$ is the stable rank of $X$. However, their sketch size $\ell$ has a quadratic dependence on $1/\epsilon$, which means that a large sketch size is required to ensure a small approximation error. To improve the dependence on $1/\epsilon$, recent studies \citep{Qiaomin16,CoD17} have extended a deterministic matrix sketching technique called frequent directions (FD) \citep{Liberty2013,Ghashami2014,Ghashami2016} to AMM. Specifically, \citet{Qiaomin16} applied FD to $[X;Y]\in\mathbb{R}^{(m_x+m_y)\times n}$ to generate $B_X$ and $B_Y$ such that \begin{equation} \label{S_bound2-1} \|XY^T-B_XB_Y^T\|\leq\left(\|X\|_F^2+\|Y\|_F^2\right)/\ell \end{equation} which also requires the $O(n(m_x+m_y)\ell)$ time complexity and $O((m_x+m_y)\ell)$ space complexity. Furthermore, \citet{CoD17} proposed a new algorithm named co-occuring directions (COD) with the $O((m_x+m_y)\ell)$ space complexity to generate $B_X$ and $B_Y$ such that \begin{equation} \label{S_bound2} \|XY^T-B_XB_Y^T\|\leq2\|X\|_F\|Y\|_F/\ell \end{equation} which is slightly tighter than the error bound in (\ref{S_bound2-1}). Compared with previous randomized methods, COD only requires $\ell=2\sqrt{\sr(X)\sr(Y)}/\epsilon$ to achieve the error bound in (\ref{S_bound1}), which improves the dependence on $1/\epsilon$ to be linear. However, the time complexity of COD is $O\left(n(m_x+m_y+\ell)\ell\right)$, which is still very high for large matrices. In this paper, we exploit the sparsity of the input matrices to reduce the time complexity of COD. In many real applications, sparsity is a common property of large matrices.
For example, in information retrieval, the word-by-document matrix could contain less than $5\%$ non-zero entries \citep{Dhillon01}. In recommender systems, the user-item rating matrix could contain less than $7\%$ non-zero entries \citep{Zhang17}. The computational bottleneck of COD is to compute the QR decomposition $O(n/\ell)$ times. We note that a similar bottleneck also exists in FD, which needs to compute the SVD $O(n/\ell)$ times. To make FD more efficient for sparse matrices, \citet{SFD16} utilized a randomized SVD algorithm named simultaneous iteration (SI) \citep{SIPower15} to reduce the number of exact SVD computations. Inspired by this work, we first develop boosted simultaneous iteration (BSI), which can efficiently compute a good decomposition of the product of two small sparse matrices with a sufficiently large probability. Then, we develop sparse co-occuring directions (SCOD) by employing BSI to reduce the number of QR decompositions required by COD. In this way, the time complexity is reduced to $\widetilde{O}\left((\nnz(X)+\nnz(Y))\ell+n\ell^2\right)$ in expectation. Moreover, we prove that the space complexity of our algorithm is still $O((m_x+m_y)\ell)$, and it enjoys almost the same error bound as that of COD. \section{Preliminaries} In this section, we review necessary preliminaries about co-occuring directions and simultaneous iteration. \subsection{Co-occuring Directions} Co-occuring directions \citep{CoD17} is an extension of frequent directions \citep{Liberty2013} for AMM. For brevity, the most critical procedures of COD are extracted and summarized in Algorithm \ref{DenseCoD}, which we refer to as dense shrinkage (DS), where $\sigma_{x}(A)$ is the $x$-th largest singular value of $A$. Given ${X}\in\mathbb{R}^{m_x\times n}$ and ${Y}\in\mathbb{R}^{m_y\times n}$, COD first initializes ${B_X}={0}_{m_x\times \ell}$ and ${B_Y}={0}_{m_y\times \ell}$.
Then, it processes the $i$-th column of $X$ and $Y$ as follows \begin{equation*} \begin{split} &\text{Insert }{X}_i\text{ into a zero valued column of }{B_X}\\ &\text{Insert }{Y}_i\text{ into a zero valued column of }{B_Y}\\ &\textbf{if } {B_X,B_Y}\text{ have no zero valued column} \textbf{ then}\\ &\quad\quad B_X,B_Y = \dcod(B_X,B_Y)\\ &\textbf{end if} \end{split} \end{equation*} for $i=1,\cdots,n$. It is easy to verify that the space complexity of COD is only $O((m_x+m_y)\ell)$. However, it needs to compute the QR decomposition of ${B}_X,B_Y$ and the SVD of $R_XR_Y^T$ almost $O(n/\ell)$ times, which implies that its time complexity is \[O\left(\frac{n}{\ell}(m_x\ell^2+m_y\ell^2+\ell^3)\right)=O(n(m_x+m_y+\ell)\ell).\] We will reduce the time complexity by utilizing the sparsity of the input matrices. Our key idea is to employ simultaneous iteration to reduce the number of QR decompositions and SVD computations. \begin{algorithm}[t] \caption{Dense Shrinkage (DS)} \label{DenseCoD} \begin{algorithmic}[1] \STATE \textbf{Input:} $B_X\in\mathbb{R}^{m_x\times \ell^\prime},B_Y\in\mathbb{R}^{m_y\times \ell^\prime}$ \STATE $Q_X,R_X= \qr(B_X)$ \STATE $Q_Y,R_Y= \qr(B_Y)$ \STATE $U,\Sigma,V= \svd(R_XR_Y^\top)$ \STATE $\gamma=\sigma_{\ell^\prime/2}(\Sigma)$ \STATE $\widetilde{\Sigma}=\max(\Sigma-\gamma I_{\ell^\prime},0)$ \STATE $B_X= Q_XU\sqrt{\widetilde{\Sigma}}$ \STATE $B_Y= Q_YV\sqrt{\widetilde{\Sigma}}$ \STATE \textbf{return} $B_X,B_Y$ \end{algorithmic} \end{algorithm} \subsection{Simultaneous Iteration} Simultaneous iteration \citep{SIPower15} is a randomized algorithm for approximate SVD.
Specifically, given a matrix $A\in\mathbb{R}^{m_x\times m_y}$ and an error coefficient $\epsilon$, it performs the following procedures \begin{equation} \label{si_eq1} \begin{split} &q=O\left(\log(m_x)/\epsilon\right),G\sim\mathcal{N}(0,1)^{m_y\times \ell}\\ &K=(AA^T)^qAG\\ &\text{Orthonormalize the columns of } K \text{ to obtain } Q \end{split} \end{equation} to generate an orthonormal matrix $Q\in\mathbb{R}^{m_x\times \ell}$, which enjoys the following guarantee (\citeauthor{SIPower15}, \citeyear{SIPower15}, Theorem 10). \begin{thm} \label{thm_SI} With probability $99/100$, applying (\ref{si_eq1}) to any matrix $A\in\mathbb{R}^{m_x\times m_y}$ yields \[\|A-QQ^TA\|\leq(1+\epsilon)\sigma_{\ell+1}(A).\] \end{thm} Note that some earlier studies \citep{Rokhlin09,Halko11,Woodruff2014,Witten14} have also analyzed this algorithm and achieved similar results. We will utilize SI to approximately decompose $A=S_XS_Y^T$, where $S_X\in\mathbb{R}^{m_x\times d},S_Y\in\mathbb{R}^{m_y\times d}$, $d\leq m=\max(m_x,m_y)$, $\nnz(S_X)\leq m\ell$ and $\nnz(S_Y)\leq m\ell$. The detailed procedures are derived by substituting $A=S_XS_Y^T$ into (\ref{si_eq1}), and are summarized in Algorithm \ref{SI}. Specifically, lines 3 to 6 of SI are designed to compute \[K=(S_XS_Y^TS_YS_{X}^T)^qS_XS_Y^TG\] in $O((\nnz(S_X)+\nnz(S_Y))\ell\log(m_x))$ time, which requires $O((m_x+m_y+d)\ell)$ space. Line 7 of SI can be implemented by Gram-Schmidt orthogonalization or Householder reflections, which only requires $O(m_x\ell^2)$ time and $O(m_x\ell)$ space. So, the time complexity of SI is \begin{equation} \label{time_SI} O((\nnz(S_X)+\nnz(S_Y))\ell\log(m_x)+m_x\ell^2) \end{equation} and its space complexity is only $O(m\ell)$, since $d\leq m=\max(m_x,m_y)$. By comparison, decomposing $S_XS_Y^T$ with DS requires $O((m_x+m_y)d^2+d^3)$ time and $O(md)$ space, which is unacceptable for large $d$ even if $S_X$ and $S_Y$ are very sparse.
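The point of the factored updates in lines 3 to 6 is that $A=S_XS_Y^T$ is never formed explicitly. A minimal NumPy sketch of Algorithm~\ref{SI} (ours; the toy sizes, seeds, and the choice $\epsilon=1$, hence a small number of power iterations, are arbitrary):

```python
import numpy as np

def simultaneous_iteration(Sx, Sy, ell, eps=1.0, seed=None):
    # Sketch of Algorithm SI: approximate the top-ell column space of
    # A = Sx @ Sy.T while keeping A in factored form throughout.
    rng = np.random.default_rng(seed)
    q = int(np.ceil(np.log(Sx.shape[0]) / eps))   # q = O(log(m_x)/eps)
    G = rng.standard_normal((Sy.shape[0], ell))
    K = Sx @ (Sy.T @ G)                           # K = A @ G
    for _ in range(q):
        K = Sx @ (Sy.T @ (Sy @ (Sx.T @ K)))       # K = (A @ A.T) @ K
    Q, _ = np.linalg.qr(K)                        # orthonormalize columns
    return Q, Sy @ (Sx.T @ Q)                     # (Q, A^T Q)

# Toy check: if rank(A) <= ell then sigma_{ell+1}(A) = 0, so Q Q^T A
# should recover A up to round-off.
rng = np.random.default_rng(0)
Sx, Sy = rng.standard_normal((50, 8)), rng.standard_normal((40, 8))
A = Sx @ Sy.T                                     # rank 8
Q, _ = simultaneous_iteration(Sx, Sy, ell=8, seed=1)
err = np.linalg.norm(A - Q @ (Q.T @ A))
print(err)  # numerically ~ 0
```

When $A$ has rank at most $\ell$ we have $\sigma_{\ell+1}(A)=0$, so by Theorem~\ref{thm_SI} the projection residual is zero up to round-off; with sparse $S_X,S_Y$ the same code benefits from sparse matrix-vector products.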
\begin{algorithm}[t] \caption{Simultaneous Iteration (SI)} \label{SI} \begin{algorithmic}[1] \STATE \textbf{Input:} $S_X\in\mathbb{R}^{m_x\times d},S_Y\in\mathbb{R}^{m_y\times d},\ell,0<\epsilon<1$ \STATE $q=O\left(\log(m_x)/\epsilon\right)$, $G\sim\mathcal{N}(0,1)^{m_y\times \ell}$ \STATE $K=S_X(S_Y^TG)$ \WHILE{$q>0$} \STATE $K=S_X(S_Y^T(S_Y(S_{X}^TK))),q=q-1$ \ENDWHILE \STATE Orthonormalize the columns of $K$ to obtain $Q$ \STATE \textbf{return} $Q,S_Y(S_X^TQ)$ \end{algorithmic} \end{algorithm} \section{Main Results} In this section, we first introduce a boosted version of simultaneous iteration, which is necessary for controlling the failure probability of our algorithm. Then, we describe our sparse co-occurring directions algorithm for AMM with limited space and its theoretical results. Finally, we provide a detailed space and runtime analysis of our algorithm. \begin{algorithm}[t] \caption{Boosted Simultaneous Iteration (BSI)} \label{BoostedSI} \begin{algorithmic}[1] \STATE \textbf{Input:} $S_X\in\mathbb{R}^{m_x\times d},S_Y\in\mathbb{R}^{m_y\times d},\ell,0<\delta<1$ \STATE \textbf{Initialization:} persistent $j=0$ ($j$ retains its value between invocations) \STATE $j=j+1$, $p=\left\lceil\log(2j^2\sqrt{m_xe}/\delta)\right\rceil$ \STATE $\Delta=\frac{11}{10\ell}\sum_{i=1}^{\cols(S_X)}\|S_{X,i}\|_2\|S_{Y,i}\|_2$ \WHILE{True} \STATE $C_X,C_Y=\si(S_X,S_Y,\ell,1/10)$ \STATE $C=(S_X S_Y^{T}-C_XC^{T}_Y)/\Delta$ ($C$ is not computed) \STATE $\mathbf{x}\sim\mathcal{N}(0,1)^{m_x\times 1}$ \IF{$\|(CC^T)^{p}\mathbf{x}\|_2\leq \|\mathbf{x}\|_2$} \STATE \textbf{return} $C_X,C_Y$ \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} \subsection{Boosted Simultaneous Iteration} From previous discussions, in the simple case $n\leq m=\max(m_x,m_y)$, $\nnz(X)\leq m\ell$ and $\nnz(Y)\leq m\ell$, we can generate $B_X$ and $B_Y$ by performing \[B_X,B_Y=\si(X,Y,\ell,1/10).\] According to Theorem \ref{thm_SI}, with probability $99/100$ \begin{equation} \label{eq_sip} \begin{split}
\|XY^T-B_XB_Y^T\| \leq&\left(1+\frac{1}{10}\right)\sigma_{\ell+1}(XY^T). \end{split} \end{equation} Although $X$ and $Y$ generally have more non-zero entries and columns, we could divide $X$ and $Y$ into several smaller matrices that satisfy the conditions of the above simple case, and repeatedly perform SI. However, in this way, the failure probability will increase linearly, where failure means that there exists a run of SI, after which the error between its input and output is unbounded. To reduce the failure probability, we need to verify whether the error between the input and output is bounded by a small value after each run of SI. \citet{SFD16} has proposed an algorithm to verify the spectral norm of a symmetric matrix. Inspired by their algorithm, we propose boosted simultaneous iteration (BSI) as described in Algorithm \ref{BoostedSI}, where $\delta$ is the failure probability. Let $S_X$ and $S_Y$ be its two input matrices. In line 3 of BSI, we use $j$ to record the number of invocations of BSI and set $p=\left\lceil\log(2j^2\sqrt{m_xe}/\delta)\right\rceil$, where $e$ is Euler's number. In line 4 of BSI, we set $\Delta=\frac{11}{10\ell}\sum_{i=1}^{\cols(S_X)}\|S_{X,i}\|_2\|S_{Y,i}\|_2$, where $\cols(S_X)$ denotes the column number of $S_X$. From lines 5 to 12 of BSI, we first utilize SI to generate $C_X,C_Y$, and then verify whether \begin{equation} \label{verify_cond} \|(CC^T)^{p}\mathbf{x}\|_2\leq \|\mathbf{x}\|_2 \end{equation} holds, where $C=(S_X S_Y^{T}-C_XC^{T}_Y)/\Delta$ and $\mathbf{x}$ is drawn from $\mathcal{N}(0,1)^{m_x\times 1}$. If so, we will return $C_X,C_Y$. Otherwise, we will rerun SI and repeat the verification process until it holds. Note that the condition (\ref{verify_cond}) is used to verify whether $\|C\|>2$, and if $C$ satisfies this condition, with high probability, $\|C\|\leq2$. Specifically, we establish the following guarantee. 
\begin{lem} \label{lem2} Assume that $C_X,C_Y$ are returned by the $j$-th run of BSI. Then, with probability at least $1-\frac{\delta}{2j^2}$, \[\left\|S_X S_Y^{T}-C_XC^{T}_Y\right\|\leq2\Delta\] where $\Delta=\frac{11}{10\ell}\sum_{i=1}^{\cols(S_X)}\|S_{X,i}\|_2\|S_{Y,i}\|_2$. \end{lem} Lemma \ref{lem2} implies that the failure probability of bounding $\left\|S_X S_Y^{T}-C_XC^{T}_Y\right\|$ decreases as the number of invocations of BSI increases, instead of remaining at $1/100$ as for the naive SI, which is essential for our analysis. \subsection{Sparse Co-occurring Directions} To work with limited space and exploit the sparsity, we propose an efficient variant of COD for sparse matrices as follows. Let ${X}\in\mathbb{R}^{m_x\times n}$ and ${Y}\in\mathbb{R}^{m_y\times n}$ be the two input matrices. In the beginning, we initialize ${B_X}={0}_{m_x\times \ell}$ and ${B_Y}={0}_{m_y\times \ell}$, where $\ell\ll\min(m_x,m_y,n)$. Moreover, we initialize two empty buffer matrices as $S_{X}={0}_{m_x\times 0}$ and $S_{Y}={0}_{m_y\times 0}$, which will be used to store the non-zero entries of $X$ and $Y$. To avoid excessive space cost, the buffer matrices are deemed full when $S_X$ or $S_Y$ contains $m\ell$ non-zero entries or $m$ columns. For $i=1,\cdots,n$, after receiving $X_i$ and $Y_i$, we store them in the buffer matrices as \[S_{X}=[S_{X},{X}_i],S_{Y}=[S_{Y},Y_i].\] Each time the buffer matrices become full, we first utilize BSI in Algorithm \ref{BoostedSI} to approximately decompose $S_XS_Y^T$ as \[C_X,C_Y=\bsi(S_{X},S_{Y},\ell,\delta)\] where $C_X\in\mathbb{R}^{m_x\times\ell},C_Y\in\mathbb{R}^{m_y\times\ell}$ and $\delta$ is the failure probability.
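The call to BSI above hinges on checking the condition (\ref{verify_cond}) without ever forming $C$; the following is a minimal NumPy sketch of this factored check (dense arrays for simplicity, and \texttt{bsi\_verify} is our illustrative name, not part of the algorithm's notation):

```python
import numpy as np

def bsi_verify(SX, SY, CX, CY, p, rng=np.random.default_rng(0)):
    """Check ||(C C^T)^p x|| <= ||x|| for C = (SX SY^T - CX CY^T) / Delta,
    using only factored matrix-vector products (C itself is never built)."""
    ell = CX.shape[1]
    # Delta = 11/(10*ell) * sum_i ||SX_i|| * ||SY_i||, as in line 4 of BSI
    Delta = 1.1 / ell * np.sum(np.linalg.norm(SX, axis=0)
                               * np.linalg.norm(SY, axis=0))
    C_mul = lambda v: (SX @ (SY.T @ v) - CX @ (CY.T @ v)) / Delta   # C v
    Ct_mul = lambda v: (SY @ (SX.T @ v) - CY @ (CX.T @ v)) / Delta  # C^T v
    x = rng.standard_normal(SX.shape[0])
    y = x
    for _ in range(p):
        y = C_mul(Ct_mul(y))
    return np.linalg.norm(y) <= np.linalg.norm(x)
```

Each of the $p$ passes costs only a constant number of factored matrix-vector products, which is the source of the $O(m\ell p)$ cost per check.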
Then, to merge $C_X,C_Y$ into $B_X,B_Y$, we utilize DS in Algorithm \ref{DenseCoD} as follows \begin{align*} &D_X=[B_X,C_X],D_Y=[B_Y,C_Y]\\ &B_X,B_Y=\dcod(D_X,D_Y) \end{align*} where $B_X\in\mathbb{R}^{m_x\times\ell},B_Y\in\mathbb{R}^{m_y\times\ell}$ are large enough to store the non-zero valued columns returned by DS. Finally, we reset the buffer matrices as \[S_{X}={0}_{m_x\times 0},S_{Y}={0}_{m_y\times 0}\] and continue to process the remaining columns in the same way. The detailed procedures of our algorithm are summarized in Algorithm \ref{SparseCoD}, and it is named sparse co-occurring directions (SCOD). \begin{algorithm}[t] \caption{Sparse Co-occurring Directions (SCOD)} \label{SparseCoD} \begin{algorithmic}[1] \STATE \textbf{Input:} $X\in\mathbb{R}^{m_x\times n},Y\in\mathbb{R}^{m_y\times n},\ell,0<\delta<1$ \STATE \textbf{Initialization:} $B_X={0}_{m_x\times \ell},B_Y={0}_{m_y\times \ell}$, $S_{X}={0}_{m_x\times 0},S_{Y}={0}_{m_y\times 0}$ \FOR{$i=1,...,n$} \STATE $S_{X}=[S_{X},{X}_i],S_{Y}=[S_{Y},Y_i]$ \IF{$\nnz(S_{X})\geq \ell m$ \textbf{or} $\nnz(S_{Y})\geq \ell m$ \textbf{or} $\cols(S_{X})=m$} \STATE $C_X,C_Y=\bsi(S_{X},S_{Y},\ell,\delta)$ \STATE $D_X=[B_X,C_X],D_Y=[B_Y,C_Y]$ \STATE $B_X,B_Y=\dcod(D_X,D_Y)$ \STATE $S_{X}={0}_{m_x\times 0},S_{Y}={0}_{m_y\times 0}$ \ENDIF \ENDFOR \STATE \textbf{return} $B_X,B_Y$ \end{algorithmic} \end{algorithm} Following \citet{CoD17}, we first bound the approximation error of our SCOD as follows. \begin{thm} \label{thm1} Given $X\in\mathbb{R}^{m_x\times n},Y\in\mathbb{R}^{m_y\times n},\ell\leq\min(m_x,m_y,n)$ and $\delta\in(0,1)$, Algorithm \ref{SparseCoD} outputs $B_X\in\mathbb{R}^{m_x\times \ell},B_Y\in\mathbb{R}^{m_y\times \ell}$ such that \begin{align*} \|XY^T-B_XB_Y^T\|\leq\frac{16\|X\|_F\|Y\|_F}{5\ell} \end{align*} with probability at least $1-\delta$. \end{thm} Compared with the error bound (\ref{S_bound2}) of COD, the error bound of our SCOD only magnifies it by a small constant factor of $1.6$.
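For illustration, the whole streaming procedure can be sketched compactly in NumPy as follows (a simplified sketch, not the exact implementation: it uses plain SI in place of BSI, reduces buffer fullness to a column count, and flushes the leftover buffer at the end of the stream):

```python
import numpy as np

def scod(X, Y, ell, q=4, rng=np.random.default_rng(0)):
    """Streaming sketch of SCOD: buffer incoming columns, compress the full
    buffer via simultaneous iteration, then merge into (B_X, B_Y) by dense
    shrinkage. Assumes m_x, m_y >= 2*ell and dense inputs for simplicity."""
    mx, my, n = X.shape[0], Y.shape[0], X.shape[1]
    m = max(mx, my)
    BX, BY = np.zeros((mx, ell)), np.zeros((my, ell))
    start = 0                                   # first unbuffered column

    def si(SX, SY):                             # factored range finder
        K = SX @ (SY.T @ rng.standard_normal((my, ell)))
        for _ in range(q):
            K = SX @ (SY.T @ (SY @ (SX.T @ K)))
        Q, _ = np.linalg.qr(K)
        return Q, SY @ (SX.T @ Q)

    def shrink(DX, DY):                         # dense shrinkage, 2*ell cols
        QX, RX = np.linalg.qr(DX)
        QY, RY = np.linalg.qr(DY)
        U, s, Vt = np.linalg.svd(RX @ RY.T)
        s = np.sqrt(np.maximum(s - s[ell - 1], 0.0))[:ell]
        return QX @ U[:, :ell] * s, QY @ Vt.T[:, :ell] * s

    for i in range(n):
        if i - start + 1 == m or i == n - 1:    # buffer full / stream ends
            CX, CY = si(X[:, start:i + 1], Y[:, start:i + 1])
            BX, BY = shrink(np.hstack([BX, CX]), np.hstack([BY, CY]))
            start = i + 1
    return BX, BY
```

Because the merged matrices always have $2\ell$ columns, shrinking by the $\ell$-th singular value zeroes out at least half of them, so $\ell$ columns always suffice to store the result of each merge.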
Furthermore, the following theorem shows that the output of SCOD can be used to compute a rank-$k$ approximation for $XY^T$. \begin{thm} \label{thm2} Let $B_X\in\mathbb{R}^{m_x\times \ell},B_Y\in\mathbb{R}^{m_y\times \ell}$ be the output of Algorithm \ref{SparseCoD}. Let $k\leq \ell$ and $\bar{U},\bar{V}$ be the matrices whose columns are the top-$k$ left and right singular vectors of $B_XB_Y^T$. Let $\pi_{\bar{U}}(X)=\bar{U}\bar{U}^TX$ and $\pi_{\bar{V}}(Y)=\bar{V}\bar{V}^TY$. For $\epsilon>0$ and $\ell\geq\frac{64\sqrt{\sr(X)\sr(Y)}}{5\epsilon}\frac{\|X\|\|Y\|}{\sigma_{k+1}(XY^T)}$, we have \begin{align} \label{k-error} \|XY^T-\pi_{\bar{U}}(X)\pi_{\bar{V}}(Y)^T\|\leq\sigma_{k+1}(XY^T)(1+\epsilon) \end{align} with probability at least $1-\delta$. \end{thm} \citet{CoD17} have proved that the output of COD enjoys (\ref{k-error}) with \begin{equation} \label{cod_l} \ell\geq\frac{8\sqrt{\sr(X)\sr(Y)}}{\epsilon}\frac{\|X\|\|Y\|}{\sigma_{k+1}(XY^T)}. \end{equation} By contrast, the lower bound of $\ell$ for our SCOD only magnifies the right-hand side of (\ref{cod_l}) by a small constant factor of $1.6$. \subsection{Space and Runtime Analysis} The total space complexity of our SCOD is only $O(m\ell)$, as explained below. \begin{itemize} \item $B_X,B_Y,S_X,S_Y,C_X,C_Y,D_X,D_Y$ maintained in Algorithm \ref{SparseCoD} only require $O(m\ell)$ space. \item Because of the \emph{if} conditions in SCOD, BSI invoked by Algorithm \ref{SparseCoD} only requires $O(m\ell)$ space. \item Because of $\cols(D_X)=\cols(D_Y)=2\ell$, DS invoked by Algorithm \ref{SparseCoD} only requires $O(m\ell)$ space. \end{itemize} To analyze the expected runtime of SCOD, we first note that it is dominated by the cumulative runtime of BSI and DS that are invoked after each time the \emph{if} statement in Algorithm \ref{SparseCoD} is triggered. Without loss of generality, we assume that the \emph{if} statement is triggered $s$ times in total.
It is not hard to verify \begin{equation} \label{s_value}s\leq\frac{\nnz(X)+\nnz(Y)}{m\ell}+\frac{n}{m}.\end{equation} Because of $\cols(D_X)=\cols(D_Y)=2\ell$, each run of DS requires $O(m\ell^2)$ time. Therefore, the total time spent by invoking DS $s$ times is \begin{equation} \label{time_DS} O\left(sm\ell^2\right)=O\left((\nnz(X)+\nnz(Y))\ell+n\ell^2\right). \end{equation} Then, we further bound the expected cumulative runtime of BSI. It is not hard to verify that the runtime of BSI is dominated by the time spent on lines 6 and 9 in Algorithm \ref{BoostedSI}. According to Theorem \ref{thm_SI}, after each execution of line 6 in Algorithm \ref{BoostedSI}, with probability $99/100$, we have \begin{align*} \|S_X S_Y^{T}-C_XC^{T}_Y\|&\leq(1+\frac{1}{10})\sigma_{\ell+1}\left(S_XS_Y^{T}\right)\\ &\leq\frac{11}{10\ell}\|S_X S_Y^{T}\|_\ast\\ &\leq\frac{11}{10\ell}\sum_{i=1}^{\cols(S_X)}\|S_{X,i}S_{Y,i}^T\|_\ast\\ &\leq\frac{11}{10\ell}\sum_{i=1}^{\cols(S_X)}\|S_{X,i}\|_2\|S_{Y,i}\|_2\\ &=\Delta \end{align*} which implies $\left\|(S_X S_Y^{T}-C_XC^{T}_Y)/\Delta\right\|\leq 1$. Combining with $C=(S_X S_Y^{T}-C_XC^{T}_Y)/\Delta$, we have \begin{align*} \|(CC^T)^{p}\mathbf{x}\|_2&\leq\|(CC^T)\|^{p}\|\mathbf{x}\|_2\leq \|\mathbf{x}\|_2 \end{align*} with probability $99/100$. Therefore, for Algorithm \ref{BoostedSI}, the probability that $C_X$ and $C_Y$ generated by executing its line 6 satisfy the \emph{if} condition in its line 9 is at least $99/100>1/2$. Hence, in each run of BSI, the expected number of executions of lines 6 and 9 is at most $2$. Because of the \emph{if} conditions in SCOD and the time complexity of SI in (\ref{time_SI}), each execution of line 6 in BSI requires \[O((\nnz(S_X)+\nnz(S_Y))\ell\log(m_x)+m_x\ell^2)\] time. Let $S_X^t,S_Y^t$ denote the values of $S_X,S_Y$ in the $t$-th execution of line $6$ in Algorithm \ref{SparseCoD}, where $t=1,\cdots,s$.
Because of $\sum_{t=1}^s\nnz(S_X^t)=\nnz(X)$ and $\sum_{t=1}^s\nnz(S_Y^t)=\nnz(Y)$, in expectation, the total time spent on executing line 6 in BSI is \begin{equation} \label{BSS_SS_time} O((\nnz(X)+\nnz(Y))\ell\log(m_x)+n\ell^2). \end{equation} In the $j$-th run of BSI, each execution of its line 9 needs to compute $\|(CC^T)^{p}\mathbf{x}\|_2$, which requires $O\left(m\ell p\right)$ time, because $C=(S_X S_Y^{T}-C_XC^{T}_Y)/\Delta$ and $S_X,S_Y,C_X,C_Y$ have at most $O(m\ell)$ non-zero entries. Note that $p=\left\lceil\log(2j^2\sqrt{m_xe}/\delta)\right\rceil$. So, in expectation, the total time spent on executing line 9 in BSI is \begin{align*} \sum_{j=1}^{s}O\left(m\ell\log(m_xj/\delta)\right)\leq O\left(sm\ell\log(m_xs/\delta)\right). \end{align*} Finally, combining the above inequality, (\ref{s_value}), (\ref{time_DS}) and (\ref{BSS_SS_time}), the expected runtime of SCOD is \begin{align*} O\left(N\ell\log(m_x)+n\ell^2+(N+n\ell)\log(n/\delta)\right) \end{align*} where $N=\nnz(X)+\nnz(Y)$. Note that there exists numerical software (e.g., Matlab) that provides highly optimized subroutines to efficiently compute the exact product $XY^T$ when $X$ and $Y$ are sparse. However, such software suffers a space complexity of $O(\nnz(X)+\nnz(Y)+\nnz(XY^T))$ to keep $X$, $Y$ and $XY^T$ in the main memory, which is not applicable to large matrices when the memory space is limited. By contrast, the space complexity of our SCOD is only $O(m \ell)$. \section{Theoretical Analysis} In this section, we provide all the proofs of our theoretical results. \subsection{Proof of Theorem \ref{thm1}} Without loss of generality, we assume that the \emph{if} statement in Algorithm \ref{SparseCoD} is triggered $s$ times in total.
To facilitate the presentation, we use $S_X^t$, $S_Y^t$, $C_X^t$, $C_Y^t$, $B_X^t$, $B_Y^t$, $D_X^t$, $D_Y^t$, $Q^t$, $Q_X^t$, $Q_Y^t$, $U^t$, $V^t$, $\Sigma^t$, $\widetilde{\Sigma}^t$, $\gamma_t$ to denote the values of $S_X$, $S_Y$, $C_X$, $C_Y$, $B_X$, $B_Y$, $D_X$, $D_Y$, $Q$, $Q_X$, $Q_Y$, $U$, $V$, $\Sigma$, $\widetilde{\Sigma}$, $\gamma$ after the $t$-th execution of lines 6 to 8 in Algorithm \ref{SparseCoD}, where $t=1,\cdots,s$. Note that $B_X$ and $B_Y$ generated by Algorithm \ref{SparseCoD} are denoted by $B_X^s$ and $B_Y^s$. To bound $\left\|XY^T-B_X^sB_Y^{s,T}\right\|$, we define \[E_1=XY^T-\sum_{t=1}^sC_X^tC_Y^{t,T}, E_2=\sum_{t=1}^sC_X^tC_Y^{t,T}-B_X^sB_Y^{s,T}\] where $E_1+E_2=XY^T-B_X^sB_Y^{s,T}$. By the triangle inequality, we have \begin{equation} \label{eq1} \begin{split} \left\|XY^T-B_X^sB_Y^{s,T}\right\|=&\left\|E_1+E_2\right\|\leq \left\|E_1\right\|+\left\|E_2\right\|. \end{split} \end{equation} Hence, we will analyze $\left\|E_1\right\|$ and $\left\|E_2\right\|$, respectively. Combining Lemma \ref{lem2} and the union bound, with probability $1-\sum_{j=1}^s\frac{\delta}{2j^2}\geq1-\delta$, we have \begin{equation} \label{eq_lemma1_eq} \left\|S_X^tS_Y^{t,T}-C_X^tC^{t,T}_Y\right\|\leq\frac{11}{5\ell}\sum_{i=1}^{\cols(S_X^t)}\|S_{X,i}^t\|_2\|S_{Y,i}^t\|_2 \end{equation} for all $t=1,\cdots,s$.
Therefore, with probability at least $1-\delta$, we have \begin{equation} \label{eq2} \begin{split} \left\|E_1\right\|=&\left\|\sum_{t=1}^sS_X^tS_Y^{t,T}-\sum_{t=1}^sC_X^tC_Y^{t,T}\right\|\\ \leq&\sum_{t=1}^s\left\|S_X^tS_Y^{t,T}-C_X^tC_Y^{t,T}\right\|\\ \leq&\frac{11}{5\ell}\sum_{t=1}^s\sum_{i=1}^{\cols(S_X^t)}\|S_{X,i}^t\|_2\|S_{Y,i}^t\|_2\\ =&\frac{11}{5\ell}\sum_{i=1}^{n}\|X_i\|_2\|Y_i\|_2\\ \leq&\frac{11}{5\ell}\sqrt{\sum_{i=1}^{n}\|X_i\|_2^2}\sqrt{\sum_{i=1}^{n}\|Y_i\|_2^2}\\ \leq&\frac{11}{5\ell}\|X\|_F\|Y\|_F \end{split} \end{equation} where the second inequality is due to (\ref{eq_lemma1_eq}) and the third inequality is due to the Cauchy-Schwarz inequality. Then, for $\left\|E_2\right\|$, we have \begin{equation*} \begin{split} \left\|E_2\right\|=&\left\|\sum_{t=1}^sC_X^tC_Y^{t,T}+\sum_{t=1}^s\left(B_X^{t-1}B_Y^{t-1,T}-B_X^tB_Y^{t,T}\right)\right\|\\ =&\left\|\sum_{t=1}^s\left(D_X^{t}D_Y^{t,T}-B_X^tB_Y^{t,T}\right)\right\|\\ \leq&\sum_{t=1}^s\left\|D_X^{t}D_Y^{t,T}-B_X^tB_Y^{t,T}\right\|. \end{split} \end{equation*} According to Algorithms \ref{SparseCoD} and \ref{DenseCoD}, we have \[\left\|D_X^{t}D_Y^{t,T}-B_X^tB_Y^{t,T}\right\|=\left\|\Sigma^t-\widetilde{\Sigma}^t\right\|\] which further implies that \begin{equation} \label{eq3} \begin{split} \left\|E_2\right\| \leq\sum_{t=1}^s\left\|\Sigma^t-\widetilde{\Sigma}^t\right\|\leq\sum_{t=1}^s\gamma_t. \end{split} \end{equation} Now we need to upper bound $\sum_{t=1}^s\gamma_t$ with properties of $X$ and $Y$. First, we have \begin{equation*} \begin{split} \left\|B_X^sB_Y^{s,T}\right\|_{\ast}=&\sum_{t=1}^s\left(\left\|B_X^tB_Y^{t,T}\right\|_{\ast}-\left\|B_X^{t-1}B_Y^{t-1,T}\right\|_{\ast}\right)\\ =&\sum_{t=1}^s\left(\left\|D_X^tD_Y^{t,T}\right\|_{\ast}-\left\|B_X^{t-1}B_Y^{t-1,T}\right\|_{\ast}\right)\\ &-\sum_{t=1}^s\left(\left\|D_X^tD_Y^{t,T}\right\|_{\ast}-\left\|B_X^{t}B_Y^{t,T}\right\|_{\ast}\right) \end{split} \end{equation*} where $\|A\|_{\ast}$ denotes the nuclear norm of any matrix $A$.
According to Algorithms \ref{SparseCoD} and \ref{DenseCoD}, it is not hard to verify that \[\left\|D_X^tD_Y^{t,T}\right\|_{\ast}=\tr(\Sigma^t) \text{ and } \left\|B_X^{t}B_Y^{t,T}\right\|_{\ast}=\tr(\widetilde{\Sigma}^t).\] Then, we have \begin{equation*} \begin{split} \left\|B_X^sB_Y^{s,T}\right\|_{\ast}\leq&\sum_{t=1}^s\left(\left\|D_X^tD_Y^{t,T}\right\|_{\ast}-\left\|B_X^{t-1}B_Y^{t-1,T}\right\|_{\ast}\right)-\sum_{t=1}^s\left(\tr(\Sigma^t)-\tr(\widetilde{\Sigma}^t)\right)\\ \leq&\sum_{t=1}^s\left(\left\|D_X^tD_Y^{t,T}\right\|_{\ast}-\left\|B_X^{t-1}B_Y^{t-1,T}\right\|_{\ast}\right)-\sum_{t=1}^s\ell\gamma_t\\ \leq&\sum_{t=1}^s\left\|D_X^tD_Y^{t,T}-B_X^{t-1}B_Y^{t-1,T}\right\|_{\ast}-\sum_{t=1}^s\ell\gamma_t\\ =&\sum_{t=1}^s\left\|C_X^tC_Y^{t,T}\right\|_{\ast}-\sum_{t=1}^s\ell\gamma_t. \end{split} \end{equation*} Furthermore, we have \begin{equation} \label{eq5} \begin{split} \sum_{t=1}^s\gamma_t&\leq\frac{1}{\ell}\left(\sum_{t=1}^s\left\|C_X^tC_Y^{t,T}\right\|_{\ast}-\left\|B_X^sB_Y^{s,T}\right\|_{\ast}\right)\\ &\leq\frac{1}{\ell}\sum_{t=1}^s\left\|Q^tQ^{t,T}S_X^tS_Y^{t,T}\right\|_{\ast}\leq\frac{1}{\ell}\sum_{t=1}^s\left\|S_X^tS_Y^{t,T}\right\|_{\ast}\\ &=\frac{1}{\ell}\sum_{t=1}^s\left\|\sum_{i=1}^{\cols(S_X^t)}S_{X,i}^tS_{Y,i}^{t,T}\right\|_{\ast}\\ &\leq\frac{1}{\ell}\sum_{t=1}^s\sum_{i=1}^{\cols(S_X^t)}\left\|S_{X,i}^tS_{Y,i}^{t,T}\right\|_{\ast}\\ &\leq\frac{1}{\ell}\sum_{t=1}^s\sum_{i=1}^{\cols(S_X^t)}\left\|S_{X,i}^t\right\|_2\left\|S_{Y,i}^{t}\right\|_{2}\\ &=\frac{1}{\ell}\sum_{i=1}^{n}\|X_i\|_2\|Y_i\|_2\\ &\leq\frac{1}{\ell}\sqrt{\sum_{i=1}^{n}\|X_i\|_2^2}\sqrt{\sum_{i=1}^{n}\|Y_i\|_2^2}\\ &\leq\frac{1}{\ell}\|X\|_F\|Y\|_F \end{split} \end{equation} where the sixth inequality is due to the Cauchy-Schwarz inequality. Combining (\ref{eq1}), (\ref{eq2}), (\ref{eq3}) and (\ref{eq5}), we complete this proof.
\subsection{Proof of Theorem \ref{thm2}} Note that the proof of Theorem 3 in \citet{CoD17} has already shown \begin{align*}&\|XY^T-\pi_{\bar{U}}(X)\pi_{\bar{V}}(Y)^T\|\leq4\|XY^T-B_XB_Y^T\|+\sigma_{k+1}(XY^T). \end{align*} Therefore, combining with our Theorem \ref{thm1}, with probability at least $1-\delta$, we have \begin{align*} \|XY^T-\pi_{\bar{U}}(X)\pi_{\bar{V}}(Y)^T\|\leq&4\|XY^T-B_XB_Y^T\|+\sigma_{k+1}(XY^T)\\ \leq&\frac{64\|X\|_F\|Y\|_F}{5\ell}+\sigma_{k+1}(XY^T)\\ \leq&\sigma_{k+1}(XY^T)\left(1+\frac{64\sqrt{\sr(X)\sr(Y)}}{5\ell}\frac{\|X\|\|Y\|}{\sigma_{k+1}(XY^T)}\right)\\ \leq&\sigma_{k+1}(XY^T)(1+\epsilon). \end{align*} \subsection{Proof of Lemma \ref{lem2}} In this proof, we analyze the properties of $C_X$ and $C_Y$ that are returned by the $j$-th run of Algorithm \ref{BoostedSI}. First, we introduce the following lemma. \begin{lem} \label{lem1} Let $\mathbf{x}=(x_1,x_2,\cdots,x_{m_x})\sim\mathcal{N}(0,1)^{m_x\times 1}$, $\mathbf{e}_1=(1,0,\cdots,0)\in\mathbb{R}^{m_x}$ and $0<\delta<1$. Then \[\pr\left[|\mathbf{e}_1^T\mathbf{x}|\leq \frac{\delta}{\sqrt{m_xe}}\|\mathbf{x}\|_2\right]\leq\delta.\] \end{lem} Let $U=[\mathbf{u}_1,\cdots,\mathbf{u}_{m_x}]$ denote the left singular matrix of \[C=(S_X S_Y^{T}-C_XC^{T}_Y)/\Delta\] where $UU^T=U^TU=I_{m_x}$. Because of $p=\left\lceil\log(2j^2\sqrt{m_xe}/\delta)\right\rceil$, if $\|C\|>2$, we have \[\|(CC^T)^{p}\mathbf{x}\|_2>|\mathbf{u}_1^T\mathbf{x}|4^{p}>\|\mathbf{x}\|_2\] as long as $|\mathbf{u}_1^T\mathbf{x}|>\frac{\delta}{2j^2\sqrt{m_xe}}\|\mathbf{x}\|_2$. Note that \begin{align*} \pr\left[|\mathbf{u}_1^T\mathbf{x}|\leq\frac{\delta}{2j^2\sqrt{m_xe}}\|\mathbf{x}\|_2\right]=&\pr\left[| \mathbf{u}_1^TUU^T\mathbf{x}|\leq\frac{\delta}{2j^2\sqrt{m_xe}}\|U^T\mathbf{x}\|_2\right]\\ =&\pr\left[|\mathbf{e}_1^TU^T\mathbf{x}|\leq\frac{\delta}{2j^2\sqrt{m_xe}}\|U^T\mathbf{x}\|_2\right]\\ \leq&\delta/(2j^2) \end{align*} where the inequality is due to the fact $U^T\mathbf{x}\sim\mathcal{N}(0,1)^{m_x\times 1}$ and Lemma \ref{lem1}.
Therefore, when $\|C\|>2$, we have \[\pr\left[\|(CC^T)^{p}\mathbf{x}\|_2\leq\|\mathbf{x}\|_2\right]\leq\delta/(2j^2).\] Hence, the probability that the check $\|(CC^T)^{p}\mathbf{x}\|_2\leq \|\mathbf{x}\|_2$ passes while \[\left\|S_X S_Y^{T}-C_XC^{T}_Y\right\|>\frac{11\sum_{i=1}^{\cols(S_X)}\|S_{X,i}\|_2\|S_{Y,i}\|_2}{5\ell}\] is at most $\delta/(2j^2)$, which completes this proof. \subsection{Proof of Lemma \ref{lem1}} Let $c=\frac{\delta}{\sqrt{m_xe}}$ and $\lambda=\frac{1-m_xc^2}{2m_xc^2(1-c^2)}>0$. Then we have \begin{align*} \pr\left[|\mathbf{e}_1^T\mathbf{x}|\leq c\|\mathbf{x}\|_2\right]=&\pr\left[(c^2-1)x_1^2+c^2\sum_{i=2}^{m_x}x_i^2\geq0\right]\\ =&\pr\left[e^{\lambda(c^2-1)x_1^2+\lambda c^2\sum_{i=2}^{m_x}x_i^2}\geq1\right]\\ \leq&\mathbb{E}\left[e^{\lambda(c^2-1)x_1^2+\lambda c^2\sum_{i=2}^{m_x}x_i^2}\right]\\ =&\mathbb{E}\left[e^{\lambda(c^2-1)x_1^2}\right]\Pi_{i=2}^{m_x}\mathbb{E}\left[e^{\lambda c^2x_i^2}\right]\\ \leq&(1-2\lambda(c^2-1))^{-1/2}(1-2\lambda c^2)^{-(m_x-1)/2}\\ =&\sqrt{m_x}c\left(1+\frac{1}{m_x-1}\right)^{\frac{m_x-1}{2}}\left(1-c^2\right)^{\frac{m_x-1}{2}}\\ \leq&\sqrt{m_xe}c\\ =&\delta \end{align*} where the first inequality is due to Markov's inequality, the second inequality is due to $\mathbb{E}\left[e^{sx_i^2}\right]=\frac{1}{\sqrt{1-2s}}$ for $i=1,\cdots,m_x$ and any $s<{1/2}$, and the third inequality is due to $(1+1/x)^x\leq e$ for any $x>0$. \section{Experiments} In this section, we perform numerical experiments to verify the efficiency and effectiveness of our SCOD. \subsection{Datasets} We conduct experiments on two synthetic datasets and two real datasets: NIPS conference papers\footnote{\url{https://archive.ics.uci.edu/ml/datasets/NIPS+Conference+Papers+1987-2015}} \citep{NIPSpaper} and MovieLens 10M\footnote{\url{https://grouplens.org/datasets/movielens/10m/}}. Each dataset consists of two sparse matrices $X\in\mathbb{R}^{m_x\times n}$ and $Y\in\mathbb{R}^{m_y\times n}$.
The synthetic datasets are randomly generated with $\spr$, which is a built-in function of Matlab. Specifically, we first generate a low-rank dataset by setting \begin{align*} &X=\spr(1e3,1e4,0.01,\mathbf{r})\\ &Y=\spr(2e3,1e4,0.01,\mathbf{r}) \end{align*} where $\mathbf{r}=[400,399,\cdots,1]\in\mathbb{R}^{400}$, which ensures that $X\in\mathbb{R}^{1e3\times1e4}$ and $Y\in\mathbb{R}^{2e3\times1e4}$ only contain $1\%$ non-zero entries and their non-zero singular values are equal to $\mathbf{r}$. With the same $\mathbf{r}$, a noisy low-rank dataset is generated by adding a sparse noise to the above low-rank matrices as \begin{align*} &X=\spr(1e3,1e4,0.01,\mathbf{r})+\spr(1e3,1e4,0.01)\\ &Y=\spr(2e3,1e4,0.01,\mathbf{r})+\spr(2e3,1e4,0.01) \end{align*} where $X$ and $Y$ contain less than $2\%$ non-zero entries. Moreover, the NIPS conference papers dataset is originally an $11463\times5811$ word-by-document matrix $M$, which contains the distribution of words in 5811 papers published between the years 1987 and 2015. In our experiment, we let $X^T$ be the first 2905 columns of $M$, and let $Y^T$ be the others. Therefore, the product $XY^T$ reflects the similarities between two sets of papers. Similarly, the MovieLens 10M dataset is originally a $69878 \times 10677$ user-item rating matrix $M$. We also let $X^T$ be the first $5338$ columns of $M$ and $Y^T$ be the others.
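For reference, the column split of the real datasets can be sketched with SciPy as follows (the matrix \texttt{M} below is a random placeholder, since Matlab's four-argument $\spr$, which prescribes the singular values, has no direct SciPy equivalent):

```python
import numpy as np
from scipy import sparse

# Placeholder for the 11463 x 5811 word-by-document matrix M of the NIPS
# dataset; scipy.sparse.random only controls the density, not the spectrum.
M = sparse.random(11463, 5811, density=0.01, format='csc', random_state=0)

half = M.shape[1] // 2                   # 2905 papers in the first half
X = M[:, :half].T.tocsr()                # X^T = first half of the columns
Y = M[:, half:].T.tocsr()                # Y^T = the remaining columns

# X @ Y.T is then a 2905 x 2906 matrix of paper-pair similarities.
print(X.shape, Y.shape)
```

The MovieLens split works the same way, with the $69878\times10677$ rating matrix in place of $M$.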
\begin{figure*}[t] \centering \subfigure[Approximation Error]{\includegraphics[width=0.325\textwidth]{./LR_AMM_error.eps}} \centering \subfigure[Projection Error]{\includegraphics[width=0.325\textwidth]{./LR_LRE.eps}} \centering \subfigure[Runtime]{\includegraphics[width=0.325\textwidth]{./LR_runtime.eps}} \caption{Experimental results with different sketch sizes on the low-rank dataset.} \label{fig1} \centering \subfigure[Approximation Error]{\includegraphics[width=0.325\textwidth]{./Noisy_AMM_error.eps}} \centering \subfigure[Projection Error]{\includegraphics[width=0.325\textwidth]{./Noisy_LRE.eps}} \centering \subfigure[Runtime]{\includegraphics[width=0.325\textwidth]{./Noisy_runtime.eps}} \caption{Experimental results with different sketch sizes on the noisy low-rank dataset.} \label{fig2} \end{figure*} \begin{figure*}[t] \centering \subfigure[Approximation Error]{\includegraphics[width=0.325\textwidth]{./real_AMM_error.eps}} \centering \subfigure[Projection Error]{\includegraphics[width=0.325\textwidth]{./real_LRE.eps}} \centering \subfigure[Runtime]{\includegraphics[width=0.325\textwidth]{./real_runtime.eps}} \caption{Experimental results with different sketch sizes on the NIPS conference papers dataset.} \label{fig3} \centering \subfigure[Approximation Error]{\includegraphics[width=0.325\textwidth]{./movie_AMM_error.eps}} \centering \subfigure[Projection Error]{\includegraphics[width=0.325\textwidth]{./movie_LRE.eps}} \centering \subfigure[Runtime]{\includegraphics[width=0.325\textwidth]{./movie_runtime.eps}} \caption{Experimental results with different sketch sizes on the MovieLens 10M dataset.} \label{fig4} \end{figure*} \subsection{Baselines and Setting} We first show that our SCOD can match the accuracy of COD, and significantly reduce the runtime of COD for sparse matrices.
Moreover, we compare SCOD against other baselines for AMM with limited space, including FD-AMM \citep{Qiaomin16}, column selection (CS) \citep{Drineas06} and random projection (RP) \citep{Sarlos06}. In the previous sections, to control the failure probability of SCOD, we have used BSI in line 6 of Algorithm \ref{SparseCoD}. However, in practice, we find that directly utilizing SI is enough to ensure the accuracy of SCOD. Therefore, in the experiments, we implement SCOD by replacing the original line 6 of Algorithm \ref{SparseCoD} with \[C_X,C_Y=\si(S_X,S_Y,\ell,1/10).\] Note that a similar strategy has also been adopted by \citet{SFD16} in the implementation of sparse frequent directions (SFD). In all experiments, each algorithm receives two matrices $X\in \mathbb{R}^{m_x\times n}$ and $Y\in \mathbb{R}^{m_y\times n}$, and then outputs two matrices $B_X\in \mathbb{R}^{m_x\times \ell}$ and $B_Y\in \mathbb{R}^{m_y\times \ell}$. We adopt the approximation error $\|XY^T-B_XB_Y^T\|$ and the projection error $\|XY^T-\pi_{\bar{U}}(X)\pi_{\bar{V}}(Y)^T\|$ to measure the accuracy of each algorithm, where $\bar{U}\in\mathbb{R}^{m_x\times k},\bar{V}\in\mathbb{R}^{m_y\times k}$ and we set $k=200$. Furthermore, we report the runtime of each algorithm to verify the efficiency of our SCOD. Because of the randomness of SCOD, CS and RP, we report the average results over $50$ runs. \subsection{Results} Figs. \ref{fig1} and \ref{fig2} show the results of different algorithms with different $\ell$ on the synthetic datasets. First, from the comparison of runtime, we find that our SCOD is significantly faster than COD, FD-AMM and RP across different $\ell$. Moreover, with the increase of $\ell$, the runtime of our SCOD increases more slowly than that of COD, FD-AMM and RP, which verifies the time complexity of our SCOD. Although CS is faster than our SCOD, its accuracy is far worse than that of our SCOD.
Second, in terms of approximation error and projection error, our SCOD matches or improves the performance of COD across different $\ell$. The improvement may be due to the fact that our SCOD performs fewer shrinkage steps than COD, and shrinkage is the source of error. We note that a similar result was observed in the comparison between FD and SFD by \citet{SFD16}. Third, our SCOD outperforms other baselines including FD-AMM, CS and RP. Figs. \ref{fig3} and \ref{fig4} show the results of SCOD, COD and FD-AMM with different $\ell$ on the real datasets. The results of CS and RP are omitted, because they are much worse than those of SCOD, COD and FD-AMM. We again find that our SCOD is faster than COD and FD-AMM, and achieves better performance across different $\ell$. \section{Conclusions} In this paper, we propose SCOD to reduce the time complexity of COD for approximate multiplication of sparse matrices with the $O\left((m_x+m_y+\ell)\ell\right)$ space complexity. In expectation, the time complexity of our SCOD is $\widetilde{O}\left((\nnz(X)+\nnz(Y))\ell+n\ell^2\right)$, which is much tighter than the $O\left(n(m_x+m_y+\ell)\ell\right)$ of COD for sparse matrices. Furthermore, the theoretical guarantee of our algorithm is almost the same as that of COD up to a constant factor. Experiments on both synthetic and real datasets demonstrate the advantage of our SCOD for handling sparse matrices. \vskip 0.2in
\section{Introduction} In recent years, the concept of a quantum group has extensively emerged in the physical and mathematical literature \cite{ref1,ref2,ref3}. Quantum groups are nontrivial generalizations of ordinary Lie groups. Such generalizations are made in the framework of Hopf algebras \cite{ref4,ref5,ref6}. A Hopf algebra is an algebra together with operations called the comultiplication, counit and antipode, which reflect the group structure. A quantum group is a non-commutative Hopf algebra consistent with these costructures. Usually, quantum groups are introduced as deformations of commutative Hopf algebras in the sense that they become commutative Hopf algebras as some parameters go to particular values \cite{ref7,ref8}. Probably the most studied case of a quantum group is $ GL_q (2) $ whose element $ T = \left( \begin{array}{cl} A & B \\ C & D \end{array} \right) $ satisfies the following nontrivial commutation relations: \begin{eqnarray} AB &=& q BA, \hspace{2cm} AC = q CA, \nonumber \\ BD &=& q DB, \hspace{2cm} CD = q DC, \label{eq:1}\\ BC &=& CB, \hspace{2cm} AD - DA = (q - q^{-1} ) BC \, . \nonumber \end{eqnarray} On the other hand, quantum spaces or quantum planes may be introduced as representation spaces of quantum groups \cite{ref1,ref3,ref14}. Corresponding to the quantum group $GL_q (2)$, Manin \cite{ref1} has defined a quantum space as one generated by two noncommuting coordinates $x , \; y$ obeying \begin{equation} x \, y \,= \, q \, yx \; \; \; (q \, \neq \, 0 \, , \; 1)\, . \label{eq:2} \end{equation} Then the quantum group $GL_q (2)$ becomes a symmetry group of the quantum plane.
In fact, the points $(x^{\prime} , \; y^{\prime} \, ) $ and $(x^{\prime\prime} , \; y^{\prime \prime} \, ) $, transformed respectively by means of the matrix $T$ and its transpose $T^t$, satisfy the relations $ x^{\prime} y^{\prime} \,= \, q \, y^{\prime} x^{\prime}$ and $ x^{\prime \prime} y^{\prime \prime} \, = \, q \, y^{\prime \prime} x^{\prime \prime} \, $ where \begin{equation} T : \left( \begin{array}{c} x \\ y \end{array} \right)\; \mapsto \; \left( \begin{array}{c} x^{\prime} \\ y^{\prime} \end{array} \right) \; = \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \left( \begin{array}{c} x \\ y \end{array} \right) \label{eq:5} \end{equation} and \begin{equation} T^t : \left( \begin{array}{c} x \\ y \end{array} \right)\; \mapsto \; \left( \begin{array}{c} x^{\prime \prime} \\ y^{\prime \prime} \end{array} \right) \; = \left( \begin{array}{cc} A & C \\ B & D \end{array} \right) \left( \begin{array}{c} x \\ y \end{array} \right) \, . \label{eq:6} \end{equation} What we emphasize here is that the relation in Eq. (\ref{eq:2}) is invariant not only under the transformation $T$ but also under its transpose $T^t$ (in this sense, a one-parameter quantum group can be regarded as a symmetry group of a quantum plane) and that {\em the generators of a quantum group are assumed to be commutative with the coordinates of a quantum plane}. In this work, we are naturally led to a two-parameter deformation of the group $GL(2)$ and its corresponding quantum planes even though we do not put any restriction on the number of parameters at the outset. Thus even though the multi-parameter case has already been studied \cite{ref14,ref17,ref18}, we shall concern ourselves with only the two-parameter case in this work. Two-parameter quantum planes have continued to attract attention recently \cite{ref15,ref16}. Now let us recall two-parameter quantum groups.
In fact, by solving the Yang-Baxter equation, one can get the universal $R$-matrix \begin{equation} R_{p, \, q} \; = \; \left( \begin{array}{cccc} q & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & q - \frac{1}{p} & \frac{q}{p} & 0 \\ 0 & 0 & 0 & q \end{array} \right) \label{eq:7} \end{equation} where $p $ and $q$ are free parameters \cite{ref9,ref10,ref11}. From the $RTT$ relations, one obtains the commutation relations \begin{eqnarray} A \, B \; &=& \; p \, B \, A, \hspace{2 cm} C \, D \; = \; p \, D \, C, \nonumber \\ A \, C \; &=& \; q \, C \, A, \hspace{2 cm} B \, D \; = \; q \, D \, B, \label{eq:8} \\ p \, B \, C \; &=& q \, C \, B, \hspace{2 cm} A \, D \, - \, D \, A \; = \; (p - \frac{1}{q} ) \, B \, C \, . \nonumber \end{eqnarray} We note that $R_{p, q } $ and Eq. (\ref{eq:8}) become the well-known $R_q $ solution and Eq. (\ref{eq:1}), respectively, in the limit $p \, \rightarrow \, q$. However, Eq. (\ref{eq:2}) in the two-parameter case is not invariant under the transformation in Eq. (\ref{eq:6}). It is only invariant under the transformation in Eq. (\ref{eq:5}). Whenever one requires that Eq. (\ref{eq:2}) be invariant under the two transformations with the assumption that the generators of a quantum group and the coordinates of a quantum plane be commutative, one is led to a one-parameter quantum group. Our observation is that even though there are no restrictions on the number of parameters at the outset, one is led naturally to a two-parameter quantum group $GL_{p, q} (2)$ in such a manner that the commutation relations in Eq. (\ref{eq:8}) come directly from the condition that $x \, y \; = \; q \, y \, x $ is preserved not only under the transformation in Eq. (\ref{eq:5}) but also under that in Eq. (\ref{eq:6}) as in the one-parameter case, {\em if one relaxes the commutation relations between the generators of a quantum group and the noncommuting coordinates}.
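One can verify symbolically that the matrix in Eq. (\ref{eq:7}) indeed satisfies the quantum Yang-Baxter equation $R_{12}R_{13}R_{23} = R_{23}R_{13}R_{12}$ on $({\mathbb C}^2)^{\otimes 3}$. A sketch with sympy; the \texttt{embed} helper, which lets the $4\times 4$ matrix act on two chosen tensor slots, is our own construction:

```python
import sympy as sp

p, q = sp.symbols('p q')

# R_{p,q} from Eq. (7); rows and columns are ordered as (11, 12, 21, 22)
R = sp.Matrix([
    [q, 0,       0,   0],
    [0, 1,       0,   0],
    [0, q - 1/p, q/p, 0],
    [0, 0,       0,   q],
])

def embed(R, a, b):
    """Embed the 4x4 matrix R so that it acts on tensor slots a and b of
    (C^2)^{x3} and as the identity on the remaining slot."""
    M = sp.zeros(8, 8)
    spectator = ({0, 1, 2} - {a, b}).pop()
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    for m in range(2):
                        row, col = [0, 0, 0], [0, 0, 0]
                        row[a], row[b], row[spectator] = i, j, m
                        col[a], col[b], col[spectator] = k, l, m
                        M[4*row[0] + 2*row[1] + row[2],
                          4*col[0] + 2*col[1] + col[2]] += R[2*i + j, 2*k + l]
    return M

R12, R13, R23 = embed(R, 0, 1), embed(R, 0, 2), embed(R, 1, 2)
diff = (R12 * R13 * R23 - R23 * R13 * R12).applyfunc(sp.simplify)
assert diff == sp.zeros(8, 8)   # the quantum Yang-Baxter equation holds
```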
Actually, the remarkable fact is that even in the case of one-parameter quantum groups, the generators of a quantum group do not commute with the coordinates of the quantum plane {\em generically}, as can be seen in the next section. In Sec. II, we shall push this observation further in a more general fashion. This formulation leads us to many possible quantum planes associated with a quantum group. We shall discuss some special examples in Sec. III. \section{Two-parameter quantum group as a symmetry group } Let $ \left( \begin{array}{cl} A & B \\ C & D \end{array} \right) $ be an element of a quantum group and let us assume that for some numbers, $q_{ij}$'s, \begin{eqnarray} x \, A \; &=& \; q_{11} \, A \, x, \hspace{2 cm} y \, A \; = \; q_{21} \, A \, y, \nonumber \\ x \, B \; &=& \; q_{12} \, B \, x, \hspace{2 cm} y \, B \; = \; q_{22} \, B \, y, \nonumber \\ x \, C \; &=& \; q_{13} \, C \, x, \hspace{2 cm} y \, C \; = \; q_{23} \, C \, y, \label{eq:12} \\ x \, D \; &=& \; q_{14} \, D \, x, \hspace{2 cm} y \, D \; = \; q_{24} \, D \, y \, . \nonumber \end{eqnarray} Also let us assume that, under the transformations in Eqs. (\ref{eq:5}) and (\ref{eq:6}), the relation $x \, y \; = \; q \, y\, x$ is transformed, respectively, as \begin{equation} x^{\prime} \, y^{\prime} \; = \; \bar{q} \, y^{\prime} \, x^{\prime} \label{eq:10} \end{equation} and \begin{equation} x^{\prime \prime} \, y^{\prime \prime} \; = \; \bar{\bar{q}} \, y^{\prime \prime} \, x^{\prime \prime} \, .
\label{eq:11} \end{equation} Then, we have \vspace{0.2cm} (1) $ \left( \begin{array}{cl} A & B \\ C & D \end{array} \right) \in GL_{p,q^{\prime}}(2)$ for some nonzero $p, q^{\prime} $ with $pq^{\prime} \ne -1 $, \vspace{0.5cm} (2) \hspace{1.5cm} $ \bar{q} = \bar{\bar{q}} $, \hspace{0.5cm} and \vspace{0.2cm} (3) \vspace{-1.17cm} \begin{eqnarray} q_{11} &=& 1, \hspace{2.83cm} q_{21} = q\bar{q}^{-1} q_{14} \; = \;q {q^{\prime}}^{-1} \, k, \nonumber \\ q_{12} &=& \bar{q} \, p^{-1}, \hspace{2.21cm} q_{22} = q\bar{q} \, p^{-1} \,(\bar{q} - ( p - {q^{\prime}}^{-1} ) k ), \label{eq:102} \\ q_{13} &=& \bar{q} \, {q^{\prime}}^{-1}, \hspace{2.21cm} q_{23} = q\bar{q} \, {q^{\prime}}^{-1} \, (\bar{q} - ( p - {q^{\prime}}^{-1} ) k ), \nonumber \\ q_{14} &=& \bar{q} {q^{\prime}}^{-1} \, k, \hspace{2cm} q_{24} = q{\bar{q}}^{2} \, {q^{\prime}}^{-1} p^{-1} (\bar{q} - ( p - {q^{\prime}}^{-1} ) k ) \nonumber \end{eqnarray} where $k$ is a complex number. In this section, we shall prove the above statement. The converse of the above statement is trivial. Also we note that if one requires that $q_{ij} = 1$, then $\bar{q} = p = q^{\prime} = q $ and $ \left( \begin{array}{cl} A & B \\ C & D \end{array} \right) \in GL_{q}(2).$ The proof is as follows: From Eqs.
(\ref{eq:10}) and (\ref{eq:11}), it follows that \begin{eqnarray} & & A \, C \; = \; q_1 \, C \, A, \nonumber \\ & & B \, D \; = \; q_2 \, D \, B, \label{eq:13} \\ & & q \, q_{14} \, A \, D \; - \; \bar{q} \, q_{21} \, D \, A \; = q\bar{q} \, q_{12} \, C \, B \; - \; q_{23} \, B \, C, \nonumber \end{eqnarray} and \begin{eqnarray} & & A \, B \; = \; q_3 \, B \, A, \nonumber \\ & & C \, D \; = \; q_4 \, D \, C, \label{eq:14} \\ & & q \, q_{14} \, A \, D \; - \; \bar{\bar{q}} \, q_{21} \, D \, A \; = q\bar{\bar{q}} \, q_{13} \, B \, C \; - \; q_{22} \, C \, B \, , \nonumber \end{eqnarray} where $ q_1 \: = \: \bar{q} \, {q_{13}}^{-1} \, q_{11} , \; q_2 \: = \: \bar{q} \, {q_{24}}^{-1} \, q_{22} , \; q_3 \: = \: \bar{\bar{q}} \, {q_{12}}^{-1} \, q_{11}, $ and $ q_4 \: = \: \bar{\bar{q}} \, {q_{24}}^{-1} \, q_{23} $. We are now interested in those $ q_{ij}$'s such that $ T \;=\; \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) $ is an element of a quantum group. For the matrix $T$ to be such a matrix, the entries $ A, \, B, \, C,$ and $D$ should be consistent with the costructures of the Hopf algebra. We note that the comultiplication $ \Delta $ and the antipode $ S $, among others, satisfy the following relations: \begin{eqnarray} & & \Delta \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \; = \; \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \; \otimes \; \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \nonumber \\ & & = \; \left( \begin{array}{cc} A \otimes A \, + \, B \otimes C & A \otimes B \, + \, B \otimes D \\ C \otimes A \, + \, D \otimes C & C \otimes B \, + \, D \otimes D \end{array} \right) \label{eq:15} \end{eqnarray} and \begin{equation} S \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \; = \; \left( \begin{array}{cc} A & B \\ C & D \end{array} \right)^{-1}\, . 
\label{eq:16} \end{equation} From the consistency conditions $\Delta \left( A C \right) \; = \; q_1 \,\Delta \left( C A \right) $ and $\Delta \left( B D \right) \; = \; q_2 \,\Delta \left( D B \right) $, we find $q_1 \; = \; q_2 \; \equiv \; q^{\prime} $ and \begin{equation} A \, D \; - \; D \, A \; = \; q^{\prime} \, C \, B \; - \; {q^{\prime} }^{-1} \, B \, C \, . \label{eq:17} \end{equation} Also from the conditions $ \Delta \left( A B \right) \; = \; q_3 \,\Delta \left( B A \right) $ and $\Delta \left( C D \right) \; = \; q_4 \,\Delta \left( D C \right) $, it follows that $q_3 \; = \; q_4 \; \equiv \; p $ and \begin{equation} A \, D \; - \; D \, A \; = \; p \, B \, C \; - \; p^{-1} \, C \, B \, . \label{eq:18} \end{equation} From Eqs. (\ref{eq:17}) and (\ref{eq:18}), it follows that \begin{equation} p \, B \, C \; = \; q^{\prime} \, C \, B \, , \label{eq:20} \end{equation} unless $ pq^{\prime} = -1 $. Thus, we construct a two-parameter deformation of $GL(2)$: \begin{eqnarray} A \, B \; &=& \; p \, B \, A, \hspace{2 cm} C \, D \; = \; p \, D \, C, \nonumber \\ A \, C \; &=& \; q^{\prime} \, C \, A, \hspace{2 cm} B \, D \; = \; q^{\prime} \, D \, B, \label{eq:100} \\ p \, B \, C \; &=& q^{\prime} \, C \, B, \hspace{2 cm} A \, D \, - \, D \, A \; = \; (p - \frac{1}{q^{\prime}} ) \, B \, C \, . \nonumber \end{eqnarray} Hence $ \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \in GL_{p,q^{\prime}}(2) $. Next, Eq. (\ref{eq:16}) implies the existence of the inverse matrix $ T^{-1} $.
From the ansatz \begin{equation} \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \, \left( \begin{array}{cc} D & \beta \, B \\ \gamma \, C & \alpha \, A \end{array} \right) \, {\cal D}^{-1} \; = \; \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right)\, , \nonumber \end{equation} we can find $ \alpha \, = 1 , \: \: \beta \, = - p^{-1} , \:\: \gamma \, = - \, p, $ and $ {\cal D} \, = \, A D \, - \, p B C \, = \, D A \, - \, p^{-1} C B \, , $ which is consistent with Eq. (\ref{eq:18}). The quantum determinant ${\cal D} $ satisfies \begin{eqnarray} A{\cal D}& =& {\cal D}A, \hspace{3cm} B{\cal D} = p^{-1}q^{\prime}{\cal D}B, \nonumber \\ C{\cal D}& =& p{q^{\prime}}^{-1}{\cal D}C, \hspace{2cm} D{\cal D} = {\cal D}D. \label{eq:1001} \end{eqnarray} This gives us \begin{eqnarray} {\cal D}^{-1} A \;& = & \; A {\cal D}^{-1}, \nonumber \\ {\cal D}^{-1} B \;& = & \; q^{\prime} p^{-1} B {\cal D}^{-1}, \nonumber \\ {\cal D}^{-1} C \;& = & \; p {q^{\prime}}^{-1} C {\cal D}^{-1}, \label{eq:19} \\ {\cal D}^{-1} D \;& = & \; D {\cal D}^{-1}, \nonumber \end{eqnarray} which is consistent with the requirement \begin{equation} \left( \begin{array}{cc} D & - \frac{1}{p} B \\ -p \, C & A \end{array} \right) \; {\cal D}^{-1} \; \left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \; = \; \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) . \end{equation} The result in Eq. (\ref{eq:19}) is the same as the one in Ref 12. Furthermore, the third equations in Eqs. (\ref{eq:13}) and (\ref{eq:14}) should also be identical to Eq. (\ref{eq:18}). If $qq_{14} \ne \bar{q}q_{21} $, the third equation in Eq. (\ref{eq:13}) is $qq_{14}AD - \bar{q}q_{21}DA = \xi BC $ where $\xi = q\bar{q}q_{12}{q^{\prime}}^{-1}p - q_{23} $. In the case when $\xi = 0 $, $qq_{14}AD = \bar{q}q_{21}DA $ which is of the form $AD = \epsilon DA $ with $\epsilon \ne 1 $. However, the relation $\Delta (AD) = \epsilon \Delta (DA)$ leads us to $\epsilon = 1 $, which is a contradiction.
When $\xi \ne 0 $, we have two cases: $p \ne {q^{\prime}}^{-1} $ and $ p = {q^{\prime}}^{-1} $. The first case together with Eq. (\ref{eq:18}) gives $(qq_{14}(p -{q^{\prime}}^{-1}) - \xi)AD = (\bar{q}q_{21}(p-{q^{\prime}}^{-1})-\xi)DA $. In every possible case, this equation contradicts either the fact that ${\cal D} = AD - pBC$ is invertible or the fact that $\epsilon = 1 $ from $\Delta (AD) = \epsilon \Delta (DA)$ as above. In the second case, when $ p = {q^{\prime}}^{-1}$, $AD = DA = \delta BC $ for some number $\delta $. However, from the relation $\Delta(AD) = \delta \Delta(BC)$, $\delta = p$, which contradicts the existence of ${\cal D}$. Thus, we conclude that $qq_{14} = \bar{q}q_{21}$. The equation $qq_{14} = \bar{\bar{q}}q_{21} $ follows from the third equation in Eq. (\ref{eq:14}) by a completely analogous method. Hence, we have \begin{equation} \bar{q} = \bar{\bar{q}}. \label{eq:1002} \end{equation} Now let us summarize the relations between the $q_{ij}$'s: \begin{eqnarray} & q^{\prime} \;& = \; \bar{q} \, {q_{13}}^{-1} \, q_{11} \; = \; \bar{q} \, {q_{24}}^{-1} \, q_{22}, \nonumber \\ & p \;& = \;\bar{q} \, {q_{12}}^{-1} \, q_{11} \; = \; \bar{q} \, {q_{24}}^{-1} \, q_{23}, \nonumber \\ & q&q_{14} \; = \; \bar{q}q_{21}, \label{eq:101} \\ & p &\; - \; {q^{\prime}}^{-1} \; =\; \bar{q} \, {q_{14}}^{-1} \, q_{13} \; - \; q^{-1} \, {q^{\prime}}^{-1} \, p \; {q_{14}}^{-1} \, q_{22} . \nonumber \end{eqnarray} The relation between $q, \bar{q}$, and $q^{\prime}$ depends on the choice of the $q_{ij}$'s. There may be (infinitely) many choices for the $q_{ij}$'s consistent with the theory of quantum groups. In effect, there are two unknowns, since there are six independent relations among them, as can be seen in Eq. (\ref{eq:101}). Without loss of generality, we may assume that $q_{14} \; = \; k q_{13}$ for some number $k$. Then we can express all of the $q_{ij}$'s in terms of one unknown $q_{11}$, which may be regarded as a proportionality constant.
Hence, if we put $q_{11} \; = \; 1 $ for simplicity, we have \begin{eqnarray} q_{11} &=& 1, \hspace{2.83cm} q_{21} = q{\bar{q}}^{-1} q_{14} \; = \;q {q^{\prime}}^{-1} \, k, \nonumber \\ q_{12} & =& \bar{q} \, p^{-1}, \hspace{2.2 cm} q_{22} = q\bar{q} \, p^{-1} \,(\bar{q} - ( p - {q^{\prime}}^{-1} ) k ), \label{eq:1003} \\ q_{13} & = &\bar{q} \, {q^{\prime}}^{-1}, \hspace{2.2 cm} q_{23} = q\bar{q} \, {q^{\prime}}^{-1} \, (\bar{q} - ( p - {q^{\prime}}^{-1} ) k ), \nonumber \\ q_{14} &=& \bar{q} {q^{\prime}}^{-1} \, k, \hspace{2 cm} q_{24} = q{\bar{q}}^{2} \, {q^{\prime}}^{-1} p^{-1} (\bar{q} - ( p - {q^{\prime}}^{-1} ) k ) \nonumber \end{eqnarray} where $k$ is the only parameter to be determined. Thus, we have proved the statement. As seen above, the choice $q_{11} = 1$ is arbitrary. In other words, the assumption that the generators of a one-parameter quantum group commute with the coordinates of the quantum plane is very special. They do not commute generically. From Eq. (\ref{eq:100}), it is obvious that $\left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \in GL_{p,q^{\prime}} (2) $ if and only if $\left( \begin{array}{cc} A & C \\ B & D \end{array} \right) \in GL_{q^{\prime},p} (2) $. On the other hand, $GL_{p,q^{\prime}} (2) \; = \; GL_{q^{\prime},p} (2)$ in the sense that $GL_{p,q^{\prime}} (2) $ and $ GL_{q^{\prime},p} (2)$ are the algebras freely generated by $A, \; B, \; C, \; D$, and ${\cal D}^{-1} $ modulo the relations given by Eqs. (\ref{eq:100}) and (\ref{eq:19}) and by the equations $(AD \; - \; pBC) {\cal D}^{-1} \; - \; 1 $ and $ {\cal D}^{-1} (AD \; - \; p BC) \; - \; 1 $. Thus, Manin's viewpoint that quantum groups are symmetry groups of quantum planes is recovered as in the one-parameter case under the commutation relation in Eq. (\ref{eq:12}) with $q_{ij}$'s given by Eq. (\ref{eq:1003}) between quantum group generators and noncommuting coordinates.
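As a consistency check, the expressions in Eq. (\ref{eq:1003}) can be substituted back into the six relations of Eq. (\ref{eq:101}) and simplified symbolically; each relation reduces to zero. A sketch with sympy; the symbol names, and the abbreviation $M = \bar{q} - ( p - {q^{\prime}}^{-1} ) k$, are our own:

```python
import sympy as sp

q, qb, qp, p, k = sp.symbols('q qbar qprime p k')  # q, qbar, q', p, k
M = qb - (p - 1/qp) * k
q11, q12, q13, q14 = sp.Integer(1), qb/p, qb/qp, qb*k/qp
q21, q22 = q*k/qp, q*qb*M/p
q23, q24 = q*qb*M/qp, q*qb**2*M/(qp*p)

constraints = [
    qp - qb*q11/q13,  qp - qb*q22/q24,   # q' = qbar q13^{-1} q11 = qbar q24^{-1} q22
    p  - qb*q11/q12,  p  - qb*q23/q24,   # p  = qbar q12^{-1} q11 = qbar q24^{-1} q23
    q*q14 - qb*q21,                      # q q14 = qbar q21
    (p - 1/qp) - (qb*q13/q14 - p*q22/(q*qp*q14)),
]
assert all(sp.simplify(c) == 0 for c in constraints)
```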
\section{Quantum Planes Associated With A Quantum Group } In this section, we shall discuss several interesting choices of the $q_{ij}$'s. The diversity of the choices of the $q_{ij}$'s implies a diversity of quantum planes for a given quantum group. \vspace{1cm} {\bf Case I: } $\bar{q} = q $ This case corresponds to the standard way of dealing with quantum planes. Then, \begin{eqnarray} q_{11}& =& 1, \hspace{2.83 cm} q_{21} = q_{14} \; = \;q {q^{\prime}}^{-1} \, k, \nonumber \\ q_{12} &=& q \, p^{-1}, \hspace{2.2 cm} q_{22} = q^2 \, p^{-1} \,(q - ( p - {q^{\prime}}^{-1} ) k ), \label{eq:1004} \\ q_{13} &=& q \, {q^{\prime}}^{-1}, \hspace{2.2 cm} q_{23} = q^2 \, {q^{\prime}}^{-1} \, (q - ( p - {q^{\prime}}^{-1} ) k ), \nonumber \\ q_{14}& =& q {q^{\prime}}^{-1} \, k, \hspace{2 cm} q_{24} = q^3 \, {q^{\prime}}^{-1} p^{-1} (q - ( p - {q^{\prime}}^{-1} ) k ) \, . \nonumber \end{eqnarray} Now we introduce the exterior differential $d$ as in Refs 15 and 16, except for the following: \begin{eqnarray} (dx) \, A \; = \; q_{11} \, A \, dx, & \hspace{2 cm} & (dy) \, A \; = \; q_{21} \, A \, dy, \nonumber \\ (dx) \, B \; = \; q_{12} \, B \, dx, & \hspace{2 cm} & (dy) \, B \; = \; q_{22} \, B \, dy, \nonumber \\ (dx) \, C \; = \; q_{13} \, C \, dx, & \hspace{2 cm} & (dy) \, C \; = \; q_{23} \, C \, dy, \label{eq:26} \\ (dx) \, D \; = \; q_{14} \, D \, dx, & \hspace{2 cm} & (dy) \, D \; = \; q_{24} \, D \, dy \nonumber \end{eqnarray} where the $ q_{ij}$'s satisfy Eq. (\ref{eq:1004}). Now if we require that $dx \, dy \; = \; - \frac{1}{p} \, dy \, dx$ be preserved under the transformation $T$, it is easy to see that $k \; = \; \frac{ q^{\prime} ( q p - 1)}{p(q^{\prime} p -1)} $.
Thus, we have, with $q_{11} = 1$, \begin{eqnarray} q_{12}& = q \, p^{-1}, \hspace{2 cm}& q_{21} \; = \; q_{14} \; = \;\frac{ q ( q p - 1)}{p(q^{\prime} p -1)}, \nonumber \\ q_{13} &= q \, {q^{\prime}}^{-1}, \hspace{2 cm}& q_{22} = q^2 \, p^{-2}, \label{eq:1005} \\ q_{14} & = \frac{ q ( q p - 1)}{p(q^{\prime} p -1)}, \hspace{2 cm}& q_{23} = q^2 \, {q^{\prime}}^{-1} \,p^{-1}, \nonumber \\ &\hspace{2 cm} & q_{24} = q^3 \, {q^{\prime}}^{-1} p^{-2} \,. \nonumber \end{eqnarray} Now we may go further. In fact, it is natural to require that the two-parameter case become the one-parameter case in some limit. Therefore, if $q_{ij} \longrightarrow 1 $ as $p \longrightarrow q^{\prime}$, then we must set $q^{\prime} = q$. Hence, Eq. (\ref{eq:100}) is the same as Eq. (\ref{eq:8}), and Eq. (\ref{eq:1005}) becomes \begin{eqnarray} & & q_{11} \; = \; q_{13} \; = \; 1, \nonumber \\ & & q_{12} \; = \; q_{14} \; = \; q_{21} \; = \; q_{23} \; = \; q \, p^{-1}, \label{eq:25} \\ & & q_{22} \; = \; q_{24} \; = \; q^2 \, p^{-2} \, . \nonumber \end{eqnarray} The virtue of this formulation is that the relations for the differentials on a quantum plane are preserved not only under $T$ but also under $T^t$. According to Ref 16, one can define the differential calculus on a quantum plane in the one-parameter case: For an exterior differential $d$ which is linear and satisfies $d^2 = 0$ and the Leibniz rule, one can choose \begin{eqnarray} dx \, dy \; &=& \; - \frac{1}{q} \, dy \, dx, \nonumber \\ x \, dx \; &=& \; q^2 \, (dx) \, x, \nonumber \\ x \, dy \; &=& \; q \, (dy) \, x \; + \; ( q^2 -1 ) \, (dx) \, y, \label{eq:4} \\ y \, dx \; &=& \; q \, (dx) \, y, \nonumber \\ y \, dy \; &=& \; q^2 \, (dy) \, y \, \, .
\nonumber \end{eqnarray} Also, by the same method as in the one-parameter case, we obtain the following relations for the differentials in the two-parameter case: \begin{eqnarray} dx \, dy \; &=& \; - \frac{1}{p} \, dy \, dx, \nonumber \\ x \, dx \; &=& \; p \, q \, (dx) \, x, \nonumber \\ x \, dy \; &=& \; q \, (dy) \, x \; + \; ( p \, q -1 ) \, (dx) \, y, \label{eq:9} \\ y \, dx \; &=& \; p \, (dx) \, y, \nonumber \\ y \, dy \; &=& \; p \, q \, (dy) \, y \, \, . \nonumber \end{eqnarray} We note that Eq. (\ref{eq:4}) is invariant under the transformations $ T $ and $T^t $. Eq. (\ref{eq:9}) is also invariant under the transformation $T $, but it is easy to see that it is not invariant under the transformation $T^t$ if the quantum group generators $ A\, , \; B \, , \; C $, and $D$ commute with the noncommuting coordinates $ x\, , \: y \, . $ However, if we choose the $q_{ij}$'s as in Eq. (\ref{eq:25}), then a lengthy but straightforward calculation shows the nice property that Eq. (\ref{eq:9}) is invariant not only under the transformation $T$ but also under the transformation $T^t$. Moreover, we have $dx^{\prime} dx^{\prime} \; = \; dy^{\prime} dy^{\prime} \; = \; 0$ and $dx^{\prime \prime} dx^{\prime \prime} \; = \; dy^{\prime \prime} dy^{\prime \prime} \; = \; 0 $. \vspace{1cm} {\bf Case II: } $ p = q^{\prime} $ \vspace{0.5cm} Let $\left( \begin{array}{cc} A & B \\ C & D \end{array} \right) \in GL_{q^{\prime}} (2) $. If we put $ k = q_{12}$ ( In effect, this choice of $k$ gives the same equation, Eq. 
(\ref{eq:25}), as in Case I above ), then \begin{eqnarray} q_{11} &= & 1, \hspace{2.83 cm} q_{21} = q\bar{q}{q^{\prime}}^{-1}p^{-1}, \nonumber \\ q_{12} & = & \bar{q} \, p^{-1}, \hspace{2.2cm} q_{22} = q\bar{q}^{2}{q^{\prime}}^{-1}p^{-2}, \label{eq:1010} \\ q_{13} & = & \bar{q} {q^{\prime}}^{-1}, \hspace{2.2 cm} q_{23} = q\bar{q}^{2} \, {q^{\prime}}^{-2}p^{-1}, \nonumber \\ q_{14} &= &\bar{q}^{2} {q^{\prime}}^{-1}p^{-1}, \hspace{1.5 cm} q_{24} = q{\bar{q}}^{3} {q^{\prime}}^{-2}p^{-2} \nonumber \end{eqnarray} In order to see the interesting aspects of quantum planes, it is enough to consider only the one-parameter case. Thus, if we put $p = q^{\prime}$, \begin{eqnarray} q_{11}& = & 1, \hspace{2.83cm} q_{21} = q\bar{q}{q^{\prime}}^{-2}, \nonumber \\ q_{12} & = & \bar{q} \, {q^{\prime}}^{-1}, \hspace{2.2 cm} q_{22} = q\bar{q}^{2}{q^{\prime}}^{-3}, \label{eq:1011} \\ q_{13} &=& \bar{q} {q^{\prime}}^{-1}, \hspace{2.2 cm} q_{23} = q\bar{q}^{2} {q^{\prime}}^{-3}, \nonumber \\ q_{14} &=& \bar{q}^{2} {q^{\prime}}^{-2}, \hspace{2 cm} q_{24} = q{\bar{q}}^{3} {q^{\prime}}^{-4}. \nonumber \end{eqnarray} The quantum plane such that $xy = qyx $ corresponding to these values of the $q_{ij}$'s is transformed into $x^{\prime}y^{\prime} = \bar{q}y^{\prime }x^{\prime} $ and $x^{\prime \prime}y^{\prime \prime} = \bar{q}y^{\prime \prime}x^{\prime \prime} $, respectively, under the action of $\left( \begin{array}{cc} A & B \\ C & D \end{array} \right) $ and its transpose. Now if we take a quantum plane for $GL_{q^{\prime}}$ such that $ q = 1 $ and $\bar{q} = q^{\prime}$, then \begin{equation} q_{1i} = 1, \hspace{2cm} q_{2i} = {q^{\prime}}^{-1}, \end{equation} for $i = 1, \cdots, 4 $. This quantum plane is generated by $ x, y $ such that $xy = yx $ and is transformed as $ x^{\prime}y^{\prime} = q^{\prime}y^{\prime}x^{\prime}$. However, $x^{\prime}, y^{\prime} $ do not obey Eq. (\ref{eq:12}).
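These special choices can be verified by direct substitution in Eq. (\ref{eq:1011}). A small sketch with sympy (the symbol names are ours): setting $q = 1$ and $\bar{q} = q^{\prime}$ gives $q_{1i} = 1$ and $q_{2i} = {q^{\prime}}^{-1}$, while $q = \bar{q} = q^{\prime}$ gives $q_{ij} = 1$, i.e., Manin's original plane:

```python
import sympy as sp

q, qb, qp = sp.symbols('q qbar qprime')   # q, qbar, q'
qij = {   # the q_ij of the one-parameter case p = q' (the second array above)
    (1, 1): sp.Integer(1),   (2, 1): q*qb/qp**2,
    (1, 2): qb/qp,           (2, 2): q*qb**2/qp**3,
    (1, 3): qb/qp,           (2, 3): q*qb**2/qp**3,
    (1, 4): qb**2/qp**2,     (2, 4): q*qb**3/qp**4,
}
# commuting plane (q = 1) transformed with qbar = q':  q_{1i} = 1, q_{2i} = 1/q'
sub1 = {q: 1, qb: qp}
assert all(sp.simplify(v.subs(sub1) - (1 if i == 1 else 1/qp)) == 0
           for (i, _), v in qij.items())
# q = qbar = q': all q_{ij} = 1, recovering the original quantum plane
sub2 = {q: qp, qb: qp}
assert all(sp.simplify(v.subs(sub2) - 1) == 0 for (i, _), v in qij.items())
```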
The case when $q = \bar{q} = 1$ is also interesting, since the quantum plane looks like an ordinary plane in the sense that it is generated by commuting coordinates. If we take a quantum plane for $GL_{q^{\prime}}$ such that $ q = q^{\prime} $ and $\bar{q} = q^{\prime}$, then $q_{ij} = 1 $. This quantum plane is the original one \cite{ref1}. \section{Conclusions} In the one-parameter case, the condition that $x \, y \; = \; q \, y \, x $ is preserved under the transformation $T$ and its transpose $T^t$ gives the commutation relations between the generators of the quantum group $GL_{q} (2) $. Here, one assumes that the generators of a quantum group commute with the noncommuting coordinates of a quantum plane. In this work, we have relaxed this assumption and investigated its consequences. We are naturally led to a two-parameter deformation of the group $GL(2)$ and its corresponding quantum planes even though we do not put any restrictions at the outset on the number of parameters. As a by-product, this formulation supports Manin's viewpoint that quantum groups are symmetry groups of quantum planes, and the diversity of the choices of the $q_{ij}$'s shows that there can be many quantum planes for a given quantum group $GL_{p,q}$. Associated with a given quantum group, there are some special quantum planes, such as the original one in the literature. In particular, a quantum plane which looks like an ordinary plane is of special interest and seems to be worthy of further research. {\bf ACKNOWLEDGMENTS} This work was supported by the Ministry of Education, Project No. BSRI-95-2414, a grant from TGRC-KOSEF 1994, and the Korea Research Center for Theoretical Physics and Chemistry.
\section{Introduction} As the high speed Internet continues to advance and evolve, our physical, social and cyber worlds are interweaving into a gigantic network of connected people, connected devices, and connected things. Soon we will be walking on smart streets, commuting with smart transportation, living in smart homes, working in smart office buildings and smart manufacturing. This technological evolution trend will not only result in unprecedented growth of multi-modal digital data, but also drive network bandwidth and network resources to their limits, e.g., managing the payloads of transporting video data captured from a massive network of surveillance cameras or drones in real time, 24/7. Edge computing has emerged as a popular distributed computation paradigm, which represents a new wave of computation offloading to where data is collected, and a new distributed system architecture for processing, learning and reasoning from dispersed data. Many edge systems today have employed DNN-trained object detection models as a critical component for mission-critical video analytics tasks at edge, such as autonomous driving~\cite{autonomous-driving-object-detection}, object tracking~\cite{object-tracking-real-time}, and intrusion detection and surveillance monitoring for networks of cameras~\cite{video-surveillance-object-detection} in smart buildings, smart streets and smart cities. \noindent {\bf Object detection for video analytics.} A video stream can be decomposed into a sequence of video frames upon capturing. Offline frame processing and online frame processing represent two ends of the spectrum of object detection for video analytics. SSD~\cite{ssd} and YOLOv3~\cite{yolov3} are representative DNN-powered object detection algorithms popularly used for video analytics at edge.
Object detectors pre-trained using DNN algorithms such as SSD~\cite{ssd} and YOLOv3~\cite{yolov3} can detect and localize objects by performing detection processing to identify objects with their bounding boxes and class labels in an input video frame, and then output these detection-processed frames via visual display at edge in the original sequence of the incoming video frames. Offline detection and online detection represent two ends of the spectrum for video frame processing workflows in the context of the incoming video frame rate, the detection processing rate and the output display rate. The {\it offline processing} will perform the object detection on the incoming video frames one at a time regardless of the incoming video rate in the number of frames per second (FPS). It is used as a detection processing reference model for zero frame drop. Once all the incoming video frames are processed for object detection using a pre-trained object detector such as SSD~\cite{ssd} and YOLOv3~\cite{yolov3}, the output of the processed frames will be sorted by the temporal sequence of the incoming video frames prior to visualization and presentation via a video player on an edge device. Figure~\ref{fig:offline-workflow} provides a visual illustration. First, the live video is captured and stored in a persistent storage, such as a solid-state drive. Then, each stored video is uploaded to the offline detection model executor, which performs object detection one frame at a time regardless of the real-time video FPS rate from the video camera or other video capturing equipment. Next, upon completing the processing of all frames of the input video, the processed video frames are sorted in the original temporal sequence of the input video to produce the detection output video, which will feed into the edge device for visualization and presentation via a video player.
The offline processing time can be measured as the product of the per-frame processing time and the total number of frames. Alternatively, one can also output the detection-processed frames as they become available and follow the temporal sequence of input frames. The main advantage of this offline detection is its guarantee of zero frame dropping during detection processing and the end-to-end video analytics workflow. However, we argue that it cannot be adopted in real-time video analytics on edge devices when the real-time detection processing rate is much slower than the incoming video streaming FPS rate. In contrast, an {\it online object detection} edge system is by design a detection streaming system, which will continuously feed the incoming video frames one after another to the online object detection model executor, regardless of whether the detection processing is able to finish the current per-frame processing. This may result in random frame dropping. It will, however, output the detection-processed frames as they become available for visualization and presentation via a video player, instead of waiting for all frames to be processed and sorted. Figure~\ref{fig:online-workflow} sketches an end-to-end online detection workflow. In comparison, online detection has a number of unique characteristics. First, when the incoming video streaming rate is much faster than the detection processing FPS rate, the edge detection may experience malfunction, such as random dropping of a large fraction of frames. Second, even though online detection will output the detection-processed frames as they become available, the slow detection processing rate may lead to a low output FPS rate, rendering the edge video analytics unable to deliver near real-time performance. \begin{figure}[h!]
\centering \subfloat[Offline Processing]{ \centering \includegraphics[width=0.45\textwidth]{offline-workflow} \label{fig:offline-workflow} }\newline \subfloat[Online Processing]{ \centering \includegraphics[width=0.45\textwidth]{online-workflow} \label{fig:online-workflow} } \caption{Offline vs. Online End to End Object Detection Workflows} \label{fig:offline-online-workflow} \end{figure} \noindent {\bf Challenges for online object detection.\/} Given a live video stream rate, say 30 frames per second (FPS), and heterogeneous edge devices with different model execution capacities, it is challenging for an online object detection model executor to deliver near real-time FPS processing performance (e.g., the detection processing rate of 30 FPS) for a number of reasons. {\it First,} edge devices tend to be heterogeneous in computation, communication, storage, and energy consumption. Hence, running DNN object detection models on edge devices may result in a large variation in terms of detection processing rates in FPS. Performance degradation due to detection processing delay includes random frame dropping when the incoming video stream is faster than the detection processing rate. When the gap between the incoming FPS and the detection processing FPS becomes significantly large, the edge system will experience slow output streams with glitches and lagging issues and poor detection quality, such as low mAP (mean average precision) for object detection quality over the total frames of the input video. {\it Second,} DNN-trained object detection models are computationally intensive for many edge devices~\cite{faster-rcnn,ssd,yolov3,fast-yolo}.
Although recent studies have shown that Faster RCNN~\cite{faster-rcnn}, SSD~\cite{ssd} and YOLOv3~\cite{yolov3} can deliver real-time or near real-time detection performance on medium or high-end GPUs, few studies have been dedicated to delivering real-time or near real-time performance for online object detection on edge devices equipped with edge AI hardware, such as the Intel Neural Computing Stick 2 (NCS2)~\cite{intel-ncs2}, Raspberry Pi~\cite{raspberry-pi}, NVIDIA Jetson~\cite{nvidia-jetson} and Google Coral~\cite{google-coral}. These edge AI hardware platforms tend to employ custom software frameworks, such as Intel OpenVINO~\cite{openvino}, NVIDIA EGX~\cite{nvidia-egx} and Google TensorFlow Lite~\cite{tensorflow-lite}, to run compressed DNN object detection models on edge devices for supporting Edge AI applications. Although edge devices of different computation capacities, powered by such edge AI hardware, can run pre-trained object detection models for edge video analytics, we will show in this paper that the detection processing performance in FPS on such edge AI hardware is slower than that of medium or high-end GPUs, a root cause of the performance problems due to the mismatch between the fast incoming video stream rate and the slow detection processing rate. \noindent {\bf Scope and contributions.} We argue that an important quality of service requirement for edge systems is to provide a near real-time performance guarantee for object detection on edge devices of heterogeneous resources. In this paper, we exploit edge detection parallelism for fast object detection in edge systems with heterogeneous edge devices, aiming to significantly enhance the detection processing rate while minimizing random dropping, enabling high detection quality in the presence of limited capacity to run DNN object detection models on heterogeneous edge devices.
First, we analyze the performance bottleneck of running a well-trained DNN model at the edge with Intel NCS2~\cite{intel-ncs2} installed for real-time online object detection. We use offline detection as a reference model for zero frame dropping, and examine the root cause of random dropping when the detection processing rate is significantly slower than the incoming video streaming rate. Second, to address the detection processing throughput problem, we present a performance optimization technique that exploits multi-model multi-device detection parallelism for fast object detection at the edge. Our parallel model detection approach consists of a careful combination of a model parallel scheduler and a model parallel sequence synchronizer. It can effectively speed up the FPS detection processing rate, minimizing its FPS disparity with the incoming video frame rate. Experiments are conducted on two benchmark videos from the MOT-15 dataset~\cite{MOTChallenge2015} with diverse incoming FPS rates, using well-trained DNN object detection models (SSD~\cite{ssd} and YOLOv3~\cite{yolov3}). We show that by exploiting model detection parallelism and leveraging multiple AI hardware devices at the edge, we can effectively speed up the online object detection processing rate and deliver near real-time object detection performance for efficient video analytics at the edge.
\begin{figure*}[h!]
\centering
\subfloat[Frame 64]{
\centering
\includegraphics[width=0.249\textwidth]{video-frames/no-drop/64}
\label{fig:eth-sunnyday-video-frames-no-drop-64}
}
\subfloat[Frame 65]{
\centering
\includegraphics[width=0.249\textwidth]{video-frames/no-drop/65}
\label{fig:eth-sunnyday-video-frames-no-drop-65}
}
\subfloat[Frame 66]{
\centering
\includegraphics[width=0.249\textwidth]{video-frames/no-drop/66}
\label{fig:eth-sunnyday-video-frames-no-drop-66}
}
\subfloat[Frame 67]{
\centering
\includegraphics[width=0.249\textwidth]{video-frames/no-drop/67}
\label{fig:eth-sunnyday-video-frames-no-drop-67}
}
\caption{Frame 64$\sim$67 without dropping frames (ETH-Sunnyday, Single NCS2, YOLOv3, Processing FPS=2.5, mAP=86.9\%)}
\label{fig:eth-sunnyday-video-frames-no-drop}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfloat[Frame 64]{
\centering
\includegraphics[width=0.249\textwidth]{video-frames/drop/64}
\label{fig:eth-sunnyday-video-frames-drop-64}
}
\subfloat[Frame 65]{
\centering
\includegraphics[width=0.249\textwidth]{video-frames/drop/65}
\label{fig:eth-sunnyday-video-frames-drop-65}
}
\subfloat[Frame 66]{
\centering
\includegraphics[width=0.249\textwidth]{video-frames/drop/66}
\label{fig:eth-sunnyday-video-frames-drop-66}
}
\subfloat[Frame 67]{
\centering
\includegraphics[width=0.249\textwidth]{video-frames/drop/67}
\label{fig:eth-sunnyday-video-frames-drop-67}
}
\caption{Frame 64$\sim$67 with dropping frames (ETH-Sunnyday, Single NCS2, YOLOv3, Processing FPS=14.0, mAP=66.1\%)}
\label{fig:eth-sunnyday-video-frames-drop}
\end{figure*}

\section{Online Object Detection}
\subsection{Problem Statement}
We study real-time object detection at the edge in the following setting. Let the incoming video rate of a live video stream be $\lambda$ frames per second (FPS), and let the detection throughput rate of an object detector deployed on an edge device be $\mu$ frames per second.
When $\mu \ge \lambda$, the incoming video stream at the $\lambda$ rate can be processed in real time with no frame dropping. However, in many cases, given the limited computing resources on an edge device, including the AI hardware for running the pre-trained DNN object detection model, the detection processing rate $\mu$ tends to be much smaller than the live video streaming rate $\lambda$. When $\mu \ll \lambda$, e.g., $\lambda=30$ FPS but $\mu=3$ FPS, we observe that even with well-trained object detectors (SSD or YOLOv3) deployed on an edge device with AI hardware such as the Intel NCS2 (Neural Compute Stick 2)~\cite{intel-ncs2}, the actual online object detection at the edge may suffer from a low frame processing rate, denoted by $\sigma$. An obvious performance optimization goal is to speed up $\sigma$ as much as possible by maximizing resource and computation capacities. Consider the two ends of the spectrum in implementing and optimizing the video frame processing rate $\sigma$ for object detection at the edge: (1) if we want to maintain zero frame dropping, then we limit $\sigma$ to the offline detection processing baseline rate $\mu$, i.e., $\sigma = \mu$. When the incoming video stream rate $\lambda$ is much higher than $\mu$, i.e., $\mu \ll \lambda$, the online detection will suffer from the low video processing rate ($\sigma$) and hence low throughput in detection output FPS; and (2) if we want to speed up the actual video processing rate $\sigma$ at the edge, we need to develop resource and computation optimizations that offer a fast detection processing rate, either matching the live video stream input rate $\lambda$, i.e., $\sigma = \lambda$, or increasing $\sigma$ towards $\lambda$.
{\bf Na\"ive Approach.\/} When the actual processing rate of the detection model on an edge device is much lower than the input video streaming rate $\lambda$, e.g., $\mu=2.5$ FPS whereas $\lambda=30$ FPS, if the online detection scheduler simply sends the incoming video frames to the AI hardware for object detection as they arrive at the $\lambda$ rate, then the online detection executor may end up dropping some video frames randomly at approximately $(\lambda - \mu)$ frames per second. This translates to dropping $(\lambda/\mu - 1)$ frames on average for every frame processed by the object detector running on the AI hardware attached to an edge device. This may decrease the mean average precision (mAP) for object detection quality measured over all frames of the input video. Such performance degradation can be further aggravated when the mismatch between the input video stream rate $\lambda$ and the detection processing rate at the edge is much larger, due to the larger number of randomly dropped frames (see experimental results in Section~\ref{section:experimental-analysis}).
\subsection{Illustrative Examples and Visualization}
In this subsection, we use a benchmark video ETH-Sunnyday from the MOT-15 dataset~\cite{MOTChallenge2015}, which has a total of 354 frames, and is taken from a moving camera at 14 FPS with a resolution of 640$\times$480. We install the YOLOv3 object detection model, trained using DarkNet-53~\cite{yolov3}, on an edge server with Intel NCS2~\cite{intel-ncs2}. We first resize the input video frame to the input size of the object detection model, which is 416$\times$416$\times$3 for YOLOv3. Then, the online object detection module running YOLOv3 will process each resized video frame by performing object detection inference, followed by post-processing, such as non-maximum suppression (NMS)~\cite{faster-rcnn,ssd,yolov3}, for computing the bounding box, the class label and the object confidence for each detected object.
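The average dropping rate of the na\"ive approach above can be made concrete with a minimal sketch (illustrative Python, not part of our C++ prototype; the function name is hypothetical):

```python
import math

def naive_drops_per_processed_frame(stream_fps, detect_fps):
    """Average number of frames the naive scheduler drops for every
    frame actually processed, i.e., ceil(lambda / mu - 1)."""
    if detect_fps >= stream_fps:
        return 0  # the detector keeps up with the stream: no dropping
    return math.ceil(stream_fps / detect_fps - 1)

# ETH-Sunnyday example from the text: lambda = 14 FPS, mu = 2.5 FPS
print(naive_drops_per_processed_frame(14, 2.5))  # 5 dropped per processed frame
# A 30 FPS stream with mu = 3 FPS drops 9 frames per processed frame
print(naive_drops_per_processed_frame(30, 3))    # 9
```

The sketch simply evaluates the $(\lambda/\mu - 1)$ formula above, rounded up to whole frames.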
By applying post-training quantization to use the 16-bit half-precision floating point format (FP16), YOLOv3 is converted into the FP16 data type in order to be deployed on Intel NCS2 sticks, with a smaller model size of 119MB instead of over 200MB. Figure~\ref{fig:eth-sunnyday-video-frames-no-drop} shows the object detection results for four consecutive frames when we use the frame processing rate that incurs zero frame dropping, as done in offline detection, i.e., keeping a large buffer that can hold the input video frames arriving at the $\lambda$ rate, while scheduling one frame at a time to ensure no frame dropping, resulting in a very low detection processing rate $\sigma$ with $\sigma \ll \lambda$. Even though the object detector can successfully identify the objects in each frame in this case, the actual video processing rate $\sigma$ is only 2.5 FPS, which is 5.6$\times$ slower than the original ETH-Sunnyday live video stream rate of 14 FPS ($\lambda=14$). We also provide a visual comparison of this online detection processing with no frame dropping (i.e., $\mu=2.5$ FPS) against the original input live video stream rate; the visualization, which shows the sharp contrast, is available on our EVA project site (\url{https://github.com/git-disl/EVA}). Figure~\ref{fig:eth-sunnyday-video-frames-drop} shows the online object detection processing, which aims to approximate the input video stream rate, for the video ETH-Sunnyday using the same four consecutive frames as shown in Figure~\ref{fig:eth-sunnyday-video-frames-no-drop}. Here the live video stream rate of 14 FPS is used to feed video frames to the online object detection executor (recall Figure~\ref{fig:online-workflow}). In this case, the online object detection executor experiences random frame dropping at the rate of 5 frames on average for each detection processed frame, i.e., $\lceil 14/2.5 - 1 \rceil = 5$.
The random frame dropping happens when the object detection scheduler is sending incoming frames to the object detection executor at the input video rate of 14 FPS ($\lambda=14$), while its actual detection processing throughput is only 2.5 FPS (i.e., $\sigma=2.5 \ll \lambda$). By comparing Figure~\ref{fig:eth-sunnyday-video-frames-drop} with random dropping of input frames and Figure~\ref{fig:eth-sunnyday-video-frames-no-drop} with zero frame dropping, it is visually clear that in the four consecutive frames (64 to 67) of Figure~\ref{fig:eth-sunnyday-video-frames-drop}, a few bounding boxes are misaligned, and some buildings are mislabeled as person or bicycle (e.g., those on the left). As a result, the mAP of this online detection executor is reduced from 86.9\% to 66.1\% (see more detailed experimental results in Section~\ref{section:experimental-analysis}). Readers can find a visualization of the comparison on our GitHub project page (\url{https://github.com/git-disl/EVA}), showing the negative effects of random frame dropping on both FPS and mAP of the detection outputs through a visual comparison of the input video stream with the detection output video stream.
\begin{figure*}[h!]
\centering
\includegraphics[width=0.75\linewidth]{model-parallel-workflow}
\caption{An Architectural Overview for Parallel Object Detection in EVA}
\label{fig:model-parallel-workflow}
\end{figure*}

\section{Multi-Model Detection Parallelism}
With heterogeneous edge devices and different AI hardware processing capabilities, edge video analytics needs efficient mechanisms to cope with the mismatch between the input video streaming rate ($\lambda$) and the actual detection processing throughput rate ($\sigma$) on an edge device with a given AI hardware and pre-trained object detection model. In this paper, we address this mismatch problem by exploiting multi-model detection runtime parallelism.
Put differently, we propose to design and implement an edge system for fast object detection by delivering the video processing rate $\sigma$ in near real-time. We aim to optimize the runtime detection processing efficiency by reducing the gap from the detection processing rate $\sigma$ to the given input live video streaming rate $\lambda$, while maintaining high accuracy performance in mean average precision (mAP) for efficient online object detection in edge systems.
\subsection{Solution Approach: An Overview}
Our multi-model parallel detection framework consists of three main functional components, which work in concert to explore the runtime detection parallelism at the edge and to speed up the overall frame processing rate for object detection by reducing the performance gap with respect to the live input video stream rate $\lambda$. Figure~\ref{fig:model-parallel-workflow} shows a sketch of the overall workflow for fast online object detection by exploiting multi-model parallelism. The multi-model detection scheduler and the sequence synchronizer are the two core components, which work together to enable parallel detection processing of multiple input video frames at the same time while maintaining the temporal order of the detection processed frames according to the original input video stream sequence. The multi-model parallel detection scheduler will first determine the number of parallel detection models to use for delivering the near real-time frame processing performance, denoted by $\sigma_{P}$, based on the input video stream rate $\lambda$ and the average single model detection processing rate $\mu$. Then the scheduler will create a parallel execution thread pool, with one thread for each model. Finally, the scheduler will choose an adequate scheduling algorithm based on the configuration of the edge system and its edge devices. There are several ways to exploit multi-model parallelism in an edge system, and one often complements another.
First, we can explore multiple models running on one edge device by installing multiple AI hardware sticks as attachments, such as Intel NCS2 sticks~\cite{intel-ncs2}. Intel NCS2 is a hardware accelerator for deploying deep learning at the edge, which is built on the Intel Movidius VPU (Vision Processing Unit) and a dedicated neural compute engine with 4GB memory. It has a USB plug-and-play feature, enabling its wide deployment on many edge devices for accelerating DNN inference, supported by the Intel OpenVINO software framework~\cite{openvino}. Note that the implementation of multi-model on one edge device will focus primarily on optimizing the model execution parallelism for maximizing the overall performance speed-up. Second and alternatively, we can explore multiple edge devices, with one AI hardware attachment like NCS2 per device to run an object detection model. Additional performance factors need to be considered for this approach. For example, we need to design an algorithm to find sufficient edge devices within geographical proximity or with fast network connections to the leader edge device which initiates the video detection job. Then we need to define an optimal workload balance for achieving overall performance speed-up when multiple nearby edge devices have initiated their own video detection jobs. The third alternative is to consider edge devices with heterogeneous AI hardware attachments, each capable of running detection models of different sizes and complexity or different numbers of detection models. In summary, the first approach serves as a baseline, as it runs multiple detection models in parallel on one edge device attached with a USB 3.0 hub connecting multiple AI hardware sticks. The second alternative uses multiple nearby edge nodes to run multiple detection models instead, with each edge node attached with one AI hardware stick, running one object detection model.
The third alternative addresses heterogeneous edge devices by implementing a hybrid of the first two approaches. In the remainder of this section, we will describe our parallel multi-model detection architecture and algorithms in terms of the first design alternative, focusing on the design decision on the number of parallel detection models (the parameter $n$), the choice of parallel detection scheduling algorithms and the scheduler implementation, and the sequence synchronizer for achieving real time or near real time processing throughput $\sigma_{P}$. We report our experimental evaluation in Section~\ref{section:experimental-analysis}, with different edge node configurations and up to seven Intel NCS2 sticks as AI hardware attachments to an edge device.
\subsection{Determining the Parallel Detection Parameter $n$}
\label{parameter_n}
The first task performed by our parallel object detection scheduler is to determine the number of detection models ($n$) to use in order to achieve the desired online detection performance in terms of both the frame processing rate in FPS and mAP. The goal is to choose the right setting of the parallel detection parameter $n$ such that it is cost efficient. Too small an $n$ may not be sufficient to achieve near real time detection processing throughput in FPS, whereas too large an $n$ may result in resource waste when the parallel detection processing capacity $n\times \mu$ is far beyond the input video stream rate $\lambda$, i.e., $n\times \mu \gg \lambda$. There are several factors impacting the choice of $n$, such as the computation capacity of the AI hardware attached to an edge device, the size and complexity of the pre-trained object detection model, and the CPU and memory of the edge device, assuming that the AI hardware attachments connected to an edge device via a USB hub are homogeneous. Let $\sigma_{P}$ denote the detection processing rate achieved by the parallel execution of the $n$ detection models.
An ideal scenario is when the parallel object detection scales linearly with the number of parallel models (or edge devices). As a result, the overall frame processing rate is $\sigma_{P}=n\mu$, where $\mu$ is the per-model detection processing rate, or the average detection processing rate of the $n$ detectors. $\sigma_{P}=\sum_{i=1}^{n} \mu_i$ when different models and/or heterogeneous edge devices are used, each with its specific detection processing rate $\mu_{i}$ ($1\leq i\leq n$). If we want to match or approach the original input video stream rate $\lambda$, we can set $n = \lceil \lambda / \mu \rceil$, which ensures that $\sigma_{P}=n \mu \geq \lambda$ holds. With this approach, we can determine the right $n$ detection models to process the incoming video stream with zero frame dropping, because the parallel execution of $n$ detection models will be sufficient for online object detection in real time and no frame dropping will occur in this ideal scenario. Given that a frame rate of around 10--30 FPS is comfortable for human perception of street views with walking pedestrians, one may further relax the ideal scenario and set $n$ in the range of [$\lceil 10/ \mu \rceil$, $\lceil \lambda / \mu \rceil $]. From observations made using our first edge system prototype, implemented in C++ on top of OpenVINO~\cite{openvino} and OpenCV~\cite{opencv}, we confirm that when the input video stream rate $\lambda$ is higher than 12 FPS, it is effective to choose $n$ from the range of [$\lceil 10/ \mu \rceil$, $\lceil \lambda / \mu \rceil$] instead of using the conservative setting of $n \geq \lceil \lambda / \mu \rceil$, if we aim at achieving near real-time object detection performance in FPS with high detection accuracy in mAP.
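The rule above for choosing $n$ can be sketched as follows (illustrative Python; our prototype is implemented in C++ on top of OpenVINO and OpenCV, so the function names here are hypothetical):

```python
import math

def parallel_model_range(stream_fps, per_model_fps, comfort_fps=10.0):
    """Relaxed range for the parallel detection parameter n:
    the lower bound targets comfortable human perception (~10 FPS),
    the upper bound matches the input stream rate (zero dropping)."""
    n_low = math.ceil(comfort_fps / per_model_fps)
    n_high = math.ceil(stream_fps / per_model_fps)
    return n_low, n_high

def ideal_parallel_rate(n, per_model_fps):
    """Ideal linear-scaling parallel throughput sigma_P = n * mu."""
    return n * per_model_fps

# ETH-Sunnyday with YOLOv3: lambda = 14, mu = 2.5  ->  n in [4, 6]
lo, hi = parallel_model_range(14, 2.5)
print(lo, hi)                        # 4 6
print(ideal_parallel_rate(hi, 2.5))  # 15.0 >= lambda under ideal scaling
```

The measured rate with $n=6$ is 14.8 FPS rather than the ideal 15.0 FPS, reflecting the near-linear (not perfectly linear) scaling observed in our experiments.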
Recall the benchmark live video ETH-Sunnyday from the MOT-15 dataset~\cite{MOTChallenge2015}, taken from a moving camera at 14 FPS (Figures~\ref{fig:eth-sunnyday-video-frames-no-drop} and~\ref{fig:eth-sunnyday-video-frames-drop}). If the detection processing rate for no frame dropping ($\mu$) is 2.5 FPS when running YOLOv3, and we want to determine the parameter $n$ to closely match the input video streaming rate $\lambda = 14$, we can achieve the parallel detection processing rate $\sigma_{P}$ by choosing $n$ from the range of [$\lceil 10/2.5 \rceil$, $\lceil 14/2.5 \rceil$], which is [$4, 6$]. We get $\sigma_{P} = 10$ FPS when we choose $n=4$ by running four detection models in parallel, achieving a detection processing throughput close to the incoming video stream rate $\lambda$, which is 14 FPS, with mAP of 86.5\%. We call this choice of $n$ the near real time parallel detection approach. Alternatively, if we choose the upper bound of the range, which is $n=6$, then we achieve $\sigma_{P} = 14.8$ FPS with mAP of 86.9\%, indicating that by running 6 detection models in parallel, the detection processing throughput can match or slightly exceed the input video stream rate $\lambda=14$ with high mAP. We refer to this approach as the conservative real time parallel detection approach. Readers may refer to Table~\ref{table:exp-multi-ncs2-stick-eth-sunnyday} in Section~\ref{section:experimental-analysis} for more details.
\subsection{Parallel detection scheduling algorithms}
The second task performed by the parallel detection scheduler is also a part of the initialization process. It needs to choose a scheduling algorithm as a part of the runtime configuration of the scheduler. Concretely, upon receiving the input video stream at the $\lambda$ rate, an edge device relies on its object detection task scheduler to launch $n$ threads to run $n$ object detection models in parallel. The assignment of incoming frames to the $n$ detection models is done by a scheduling algorithm.
There are three categories of parallel execution scheduling algorithms: Round Robin (RR) or weighted round robin, First Come First Serve (FCFS), and the performance aware proportional scheduler. A {\bf round robin scheduler} will assign incoming video frames to the $n$ detection models, one at a time, in a pre-defined round robin order at the input stream rate $\lambda$. For example, consider the input video stream $V$: the RR scheduler will dedicate a separate thread to receive the incoming video frames and place them into a shared parallel execution queue $Q$. At the same time, the RR scheduler will launch $n$ dedicated threads, each running one of the $n$ object detection models in parallel. Each thread will fetch one video frame from the shared queue $Q$ and send it to its corresponding object detection model for parallel detection processing. Upon completing one round of input-frame-to-detection-model assignment for all $n$ threads, the next round starts. If the current detection model is still busy processing a previous frame, then the current frame will be dropped, and the detection results from the latest processed frame will be reused as the detection approximation for this dropped frame. The video frame sequence synchronizer will enforce temporal input stream ordering over both the detection processed frames and the randomly dropped frames prior to displaying the live streaming of the detection processed output results via visualization or a video streaming player. When the input video stream rate $\lambda$ is much faster than the per-model detection processing rate $\mu$, the round robin scheduler may incur a larger amount of random frame dropping. A weighted round robin typically refers to a static resource-adaptive scheduler. It assigns a higher weight to those detection models that run much faster, for example, due to high-capacity AI hardware known at edge device configuration time, such as a high-end GPU card instead of a low-end Intel NCS2 stick.
Compared to the round robin baseline, the static weighted RR scheduler can further improve the parallel detection throughput while maximizing execution parallelism. A {\bf first come first serve (FCFS) scheduler} will assign incoming video frames to the $n$ detection models one at a time in first come first serve order. Concretely, it starts by assigning $n$ input frames to the $n$ available detection models. Then it will assign the $(n+1)^{th}$ input frame to the first detection model that becomes available, instead of following a rigid round robin order for parallel detection processing. The FCFS scheduler is particularly suitable for scenarios in which either the $n$ detection models or the $n$ AI hardware attachments are heterogeneous and have different runtime performance for object detection processing. The {\bf performance aware proportional scheduler} is a dynamic runtime performance aware scheduler. It extends the round robin scheduler by periodically computing and assigning dynamic weights to the $n$ detection models. Concretely, upon assigning the first $n$ input frames to the $n$ detection models, it will start to monitor the execution time of each detection model and periodically compute the weight for each of the $n$ detection models based on historical statistics on the execution performance of the $n$ models with respect to detection processing efficiency. Unlike the weighted round robin, which assigns a constant weight to each detection model at compile time, the performance-aware proportional scheduler dynamically assigns a weight to each detection model, enabling the parallel detection scheduling decision to take into account multiple dynamic factors that may impact runtime performance, such as network bandwidth, edge device execution state, and the runtime state of the AI hardware. Based on the chosen scheduling algorithm, the edge parallel model detection scheduler will schedule the $n$ threads, one for each detection model.
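As a concrete illustration of how the FCFS scheduler and the sequence synchronizer fit together, the following simplified sketch uses Python threads (our prototype is in C++ on OpenVINO; `detect_fn` is a hypothetical stand-in for a per-thread detection model, and frame dropping is omitted for brevity):

```python
import queue
import threading

def fcfs_parallel_detection(frames, n_models, detect_fn):
    """Sketch of FCFS scheduling plus sequence synchronization:
    n worker threads (one per detection model) pull the next frame
    from a shared queue in first-come-first-serve order, and the
    synchronizer restores the original temporal stream order."""
    in_q = queue.Queue()
    for idx, frame in enumerate(frames):
        in_q.put((idx, frame))

    results = []                      # (frame_index, detections) pairs
    lock = threading.Lock()

    def worker():
        while True:
            try:
                idx, frame = in_q.get_nowait()
            except queue.Empty:
                return                # no more frames to schedule
            dets = detect_fn(frame)   # stand-in for one detection model
            with lock:
                results.append((idx, dets))

    threads = [threading.Thread(target=worker) for _ in range(n_models)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Sequence synchronizer: enforce the temporal input-stream ordering.
    return [dets for _, dets in sorted(results, key=lambda r: r[0])]

# Toy usage with a fake detector and 3 parallel "models":
out = fcfs_parallel_detection(list(range(8)), n_models=3,
                              detect_fn=lambda f: {"frame": f})
print([d["frame"] for d in out])      # [0, 1, 2, 3, 4, 5, 6, 7]
```

Whichever thread finishes first grabs the next frame (FCFS), so completion order is nondeterministic, but the final sort by frame index guarantees the output stream preserves the input sequence.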
The processed video frames will be sorted by the sequence synchronizer based on their original temporal streaming sequence for visualization and video player presentation of the detection processed frames, including detected objects in bounding boxes with class labels and confidence statistics. Note that for the first come first serve and proportional scheduling algorithms, the parallel detection model scheduler will assign frames to threads in an opportunistic manner. The sequence synchronizer will be responsible for enforcing the temporal sequencing constraints on the detection processed frames before forwarding them to the visual presentation phase. Although with round robin the temporal sequence constraint is followed by the parallel detection scheduler, the sequence synchronizer serves as an additional checkpoint to ensure and enforce the temporal sequence of the output frames, which is important in the presence of random frame dropping.

\section{Experimental Analysis}
\label{section:experimental-analysis}
We use two benchmark videos from the MOT-15 challenge dataset~\cite{MOTChallenge2015}: ADL-Rundle-6 and ETH-Sunnyday, as shown in Table~\ref{table:two-test-videos}. Both videos are taken in unconstrained real environments. The ADL-Rundle-6 video is shot from a static camera at 30 FPS, has 525 frames in total, and has a resolution of 1920$\times$1080. ETH-Sunnyday is shot from a moving camera at 14 FPS with a resolution of 640$\times$480, and has a total of 354 frames. We use two representative object detection models, SSD300~\cite{ssd} and YOLOv3~\cite{yolov3}, for performing the object detection task on both videos.

\newcommand{\smalltabfigure}[2]{\raisebox{-0.5\height}{\includegraphics[#1]{#2}}}
\begin{table}[h!]
\caption{{Two Test Videos}} \label{table:two-test-videos} \scalebox{0.95}{ \centering \small \begin{tabular}{|c|c|c|} \hline Video Name & ADL-Rundle-6 & ETH-Sunnyday \\ \hline Video FPS & 30 & 14 \\ \hline \#Frames & 525 & 354 \\ \hline Resolution & 1920$\times$1080 & 640$\times$480 \\ \hline Camera & static & moving \\ \hline Example & \smalltabfigure{width=0.18\textwidth}{video-frames/ADL_000228} & \smalltabfigure{width=0.15\textwidth}{video-frames/ETH_000293} \\ \hline \end{tabular} } \end{table} \begin{table}[h!] \caption{{Two Object Detection Models}} \label{table:two-object-detection-models} \scalebox{0.93}{ \centering \small \begin{tabular}{|c|c|c|c|c|} \hline Model & Backbone & Input Size & Model Size & Data Type \\ \hline SSD300 & VGG-16 & 300$\times$300$\times$3 & 51MB & FP16 \\ \hline YOLOv3 & DarkNet-53 & 416$\times$416$\times$3 & 119MB & FP16 \\ \hline \end{tabular} } \end{table} To understand the impact of object detection models pre-trained using different DNN algorithms, we include SSD300 and YOLOv3 in our experiments, as shown in Table~\ref{table:two-object-detection-models}. Both SSD300 and YOLOv3 use their default backbone deep neural networks, i.e., VGG-16 for SSD300 and DarkNet-53 for YOLOv3. For detecting objects, the input video frame will be first resized to the input size of the object detection model, that is 300$\times$300$\times$3 for SSD300 and 416$\times$416$\times$3 for YOLOv3. Then, the deep neural network in SSD300 or YOLOv3 will perform inference on the resized video frame, followed by post-processing, such as non-maximum suppression (NMS), for extracting the bounding box, class label and object confidence for each detected object. SSD300 and YOLOv3 are converted into the FP16 (float16) data type to be deployed on Intel NCS2 sticks. Furthermore, to understand the impact of edge servers with different computational capacities, we include two types of edge servers as specified in Table~\ref{table:edge-server-configuration}. 
For high capacity type of edge servers, we configure the edge server with a fast CPU, Intel i7-10700K, with 24GB main memory and Ubuntu 20.04. For low capacity type of edge servers, we configure the edge server with an AMD A6-9225 CPU, with 12GB memory and Ubuntu 18.04. In both cases, a total of up to seven Intel NCS2 sticks are attached to these two edge servers via a USB 3.0 hub in all experiments reported in this paper. \begin{table}[h!] \centering \caption{Edge Server Configuration} \label{table:edge-server-configuration} \scalebox{1.0}{ \centering \begin{tabular}{|c|c|c|} \hline Edge Server & Fast & Slow \\ \hline CPU & Intel i7-10700K & AMD A6-9225 \\ \hline CPU Frequency & 3.8GHz & 2.6GHz \\ \hline CPU \#Cores & 8 & 2 \\ \hline Main Memory Size & 24GB & 12GB \\ \hline OS & Ubuntu 20.04 & Ubuntu 18.04\\ \hline \end{tabular} } \end{table} \begin{table*}[h!] \centering \caption{{Parallel Detection using Multiple NCS2 Sticks (ETH-Sunnyday)}} \label{table:exp-multi-ncs2-stick-eth-sunnyday} \scalebox{1.0}{ \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Detection Processing} & Zero Frame Dropping & Single AI-Hardware & \multicolumn{6}{c|}{ Parallel Detection Models} \\ \hline Model & \#NCS2 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \multirow{2}{*}{SSD300} & Detection FPS & 2.3 & 2.3 & 4.6 & 6.9 & 9.2 & 11.5 & 13.8 & 16.0 \\ \cline{2-10} & mAP (\%) & 74.5 & 69.0 & \textbf{78.7} & \textbf{78.6} & \textbf{77.5} & \textbf{77.5} & \textbf{77.5} & \textbf{74.5} \\ \hline \multirow{2}{*}{YOLOv3} & Detection FPS & 2.5 & 2.5 & 5.1 & 7.5 & 10.0 & 12.4 & 14.8 & 17.3 \\ \cline{2-10} & mAP (\%) & 86.9 & 66.1 & 83.9 & 86.5 & 86.5 & 86.5 & \textbf{86.9} & \textbf{86.9} \\ \hline \end{tabular} } \end{table*} \begin{table*}[h!] 
\centering \caption{{Parallel Detection using Multiple NCS2 Sticks (ADL-Rundle-6)}} \label{table:exp-multi-ncs2-stick-adl-rundle-6} \scalebox{1.0}{ \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Detection Processing} & Zero Frame Dropping & Single AI Hardware & \multicolumn{6}{c|}{Parallel} \\ \hline Model & \#NCS2 & 1 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \multirow{2}{*}{SSD300} & Detection FPS & 2.3 & 2.3 & 4.6 & 6.9 & 9.1 & 11.5 & 13.7 & 16.0 \\ \cline{2-10} & mAP (\%) & 54.4 & 46.7 & \textbf{56.2} & \textbf{55.8} & \textbf{55.4} & \textbf{55.7} & \textbf{55.7} & \textbf{54.7} \\ \hline \multirow{2}{*}{YOLOv3} & Detection FPS & 2.5 & 2.5 & 5.1 & 7.5 & 10.0 & 12.5 & 14.8 & 17.3 \\ \cline{2-10} & mAP (\%) & 62.5 & 42.7 & 56.7 & 61.2 & \textbf{62.7} & \textbf{62.7} & \textbf{62.7} & \textbf{62.7} \\ \hline \end{tabular} } \end{table*} \subsection{Parallel Detection Effectiveness} \noindent {\bf ETH-Sunnyday.} We first evaluate our multi-model detection parallel approach on ETH-Sunnyday by using up to seven Intel NCS2 sticks, and hence running up to seven detection models in parallel. Table~\ref{table:exp-multi-ncs2-stick-eth-sunnyday} shows the experimental results for two object detection models, SSD300 and YOLOv3. We include the zero frame dropping in the offline processing as the baseline for comparison with two scenarios: (1) the online object detection using only one detection model on one AI hardware (e.g., one NCS2 stick); and (2) the online parallel detection with $n$ models by varying $n$ from 2 to 7. Three interesting observations are highlighted. (1) Our model parallel approach achieved almost linear scalability with the number of NCS2 sticks. 
With one detection model running on each of the 7 NCS2 sticks, parallel detection with 7 models significantly improves the detection processing throughput ($\sigma_{P}$) from 2.3 FPS to 16.0 FPS for the pre-trained SSD300 model and from 2.5 FPS to 17.3 FPS for the pre-trained YOLOv3 detector. This represents a 6.96$\times$ speed-up for SSD300 and a 6.92$\times$ speed-up for YOLOv3, compared to using a single detection model on one NCS2 stick. (2) Based on our proposed method for determining the detection parallel parameter $n$ in Section~\ref{parameter_n}, for ETH-Sunnyday with the input video stream rate of 14 FPS, setting $n$ to be in the range of [$4, 6$] is sufficient for delivering near real time detection processing throughput. This experiment confirms this method. With $n$ NCS2 sticks running $n$ YOLOv3 detection models in parallel, varying $n$ from 4 and 5 to 6, we achieve a detection processing rate of 10.0 FPS, 12.4 FPS or 14.8 FPS respectively, with mAP of 86.5\% or higher in all three cases. For street views of walking pedestrians, capturing at 10 FPS or higher is considered comfortable and near real time for human perception. Hence, choosing $n$ to be 4, 5, or 6 is considered an acceptable configuration for our multi-model detection parallel approach. (3) By choosing $n=7$, the parallel detection processing rate exceeds the input video stream rate of 14 FPS for both SSD300 and YOLOv3, achieving the same mAP as the zero frame dropping baseline: 74.5\% for SSD300 with 7 NCS2 sticks and 86.9\% for YOLOv3. Clearly, each additional AI hardware stick incurs additional cost, including battery consumption if the edge device is not wired, although the added computation time is negligible thanks to the parallel execution of all $n$ models.
\begin{figure}[h!]
\centering \includegraphics[width=0.45\textwidth]{ssd300-yolov3-multi-NCS2} \caption{Detection FPS and mAP for Multiple NCS2 Sticks (ADL-Rundle-6, Video FPS=30)} \label{fig:ssd300-yolov3-multi-NCS2-adl-rundle-6} \end{figure} \noindent {\bf ADL-Rundle-6.} We next evaluate our parallel detection implementation on ADL-Rundle-6, another test video from MOT-15~\cite{MOTChallenge2015}, using the same experimental setup, including varying $n$ from 2 to 7 with up to 7 Intel NCS2 sticks and running the SSD300 and YOLOv3 pre-trained models for object detection. Table~\ref{table:exp-multi-ncs2-stick-adl-rundle-6} shows the results. We highlight the following observations. (1) Similar to ETH-Sunnyday in Table~\ref{table:exp-multi-ncs2-stick-eth-sunnyday}, the multi-model parallel detection approach achieves almost linear scalability over the number of NCS2 sticks on ADL-Rundle-6. (2) The input video stream rate for ADL-Rundle-6 is 30 FPS, double the rate of ETH-Sunnyday. However, compared to the zero frame dropping baseline with mAP of 54.4\% and FPS of 2.3 for the SSD pre-trained model and mAP of 62.5\% and FPS of 2.5 for the YOLOv3 pre-trained model, running parallel detection processing with $n$ set to 4, 5, 6 or 7 offers better mAP and higher FPS: for SSD, when $n$ is set to 5, 6 or 7, we achieve detection processing rates of 11.5, 13.7 and 16.0 FPS respectively, with mAP of 54.7\% or higher for these three settings; for YOLOv3, we achieve detection processing rates of 10.0, 12.5, 14.8 and 17.3 FPS respectively, with mAP of 62.7\% for all four settings. (3) According to our method for determining the detection parallel parameter $n$, the choice of $n$ should be in the range of [$\lceil 10/2.3 \rceil, \lceil 30/2.3 \rceil$] for SSD, which is [$5, 14$], and in the range of [$\lceil 10/2.5 \rceil, \lceil 30/2.5 \rceil$] for YOLOv3, which is [$4, 12$].
Even when we only have up to 7 Intel NCS2 sticks for this experiment, we show that with 4, 5, 6, or 7 detection models running in parallel, we can achieve near real time detection processing throughput higher than 10 FPS while maintaining mAP higher than the zero frame dropping baseline (measured using offline processing). Figure~\ref{fig:ssd300-yolov3-multi-NCS2-adl-rundle-6} shows the trend of FPS (y-axis on the left) and mAP (y-axis on the right) for running the SSD and YOLOv3 pre-trained models, varying $n$ from 1 to 7 (x-axis). First, both SSD and YOLOv3 show linear scalability with respect to the detection processing rate in FPS. Second, for YOLOv3, as $n$ increases from 1 to 4, the multi-model parallel detection approach continues to increase its mAP. As $n$ continues to increase from 4 to 7, the mAP stabilizes at 62.7\%. For SSD, the multi-model parallel detection approach always achieves mAP higher than the zero frame dropping baseline of 54.4\% for $n\geq 2$. For SSD300 on ADL-Rundle-6, our multi-model parallel detection with 3 NCS2 sticks has a detection processing rate of 6.9 FPS. This results in a small amount of random frame dropping, on average $\lceil 30/6.9 - 1 \rceil = 4$ frames for each processed frame, compared to randomly dropping 13 frames ($\lceil 30/2.3 - 1 \rceil = 13$) when using one detection model running on a single NCS2 stick. For YOLOv3, when using 5 NCS2 sticks, the number of randomly dropped video frames decreases significantly, from on average 11 frames per processed frame ($\lceil 30/2.5 - 1 \rceil = 11$) with one NCS2 stick to only 2 frames ($\lceil 30/12.5 - 1 \rceil = 2$) per processed frame. This also explains the improved mAP for both SSD300 and YOLOv3, since the number of dropped video frames is significantly reduced.
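The frame-dropping arithmetic used above can be condensed into a small sketch (Python, with hypothetical helper names; it simply restates the $\lceil \cdot \rceil$ formulas from the text):

```python
from math import ceil

def n_range(single_fps, video_fps, target_fps=10):
    """Smallest and largest useful number of parallel models:
    n*single_fps should reach the near-real-time target without
    exceeding the incoming stream rate."""
    return ceil(target_fps / single_fps), ceil(video_fps / single_fps)

def avg_dropped(video_fps, detection_fps):
    """Average frames randomly dropped per processed frame."""
    return ceil(video_fps / detection_fps - 1)

# ADL-Rundle-6 (30 FPS stream):
# SSD300 at 2.3 FPS per stick  -> n in [5, 14]
# YOLOv3 at 2.5 FPS per stick  -> n in [4, 12]
# One stick drops ~13 frames per processed frame (SSD300);
# three sticks (6.9 FPS) drop only ~4.
```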
We provide a visualization comparison to show the effectiveness of our multi-model parallel detection approach with respect to the input video stream rates, accessible at \url{https://github.com/git-disl/EVA}. \begin{table*}[h!] \centering \caption{{Comparison of Power Efficiency of Different Hardware Devices}} \label{table:comparison-power-effieiency} \scalebox{1.0}{ \begin{tabular}{|c|c|c|c|c|} \hline Power Consumption & Intel NCS2 & Slow CPU (AMD A6-9225) & Fast CPU (Intel i7-10700K) & GPU (GTX TITAN X) \\ \hline TDP (Watts) & 2~\cite{ncs2-power-consumption} & 15~\cite{amd-power-consumption} & 125~\cite{intel-power-consumption} & 250~\cite{nvidia-power-consumption} \\ \hline Detection FPS & 2.5 & 0.4 & 13.5 & 35~\cite{yolov3} \\ \hline Detection FPS / Watt & 1.25 & 0.03 & 0.11 & 0.14 \\ \hline \end{tabular} } \vspace{-2mm} \end{table*} \begin{table*}[h!] \centering \caption{{Experiments with RR and FCFS Scheduler (ETH-Sunnyday, YOLOv3)}} \label{table:exp-rr-fcfs-eth-sunnyday-yolov3} \scalebox{1.0}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Detection FPS & \#NCS2 & 0 (CPU Only) & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \multirow{3}{*}{Round-Robin} & NCS2 Only & - & 2.5 & 5.1 & 7.5 & 10.0 & 12.4 & 14.8 & 17.3 \\ \cline{2-10} & Fast CPU + NCS2 & 13.5 & 5.1 & 7.6 & 10.1 & 12.7 & 15.0 & 17.6 & 20.1 \\ \cline{2-10} & Slow CPU + NCS2 & 0.4 & 0.9 & 1.3 & 1.8 & 2.2 & 2.6 & 3.1 & 3.4 \\ \hline \multirow{3}{*}{FCFS} & NCS2 Only & - & 2.5 & 5.1 & 7.5 & 9.9 & 12.5 & 15.0 & 17.3 \\ \cline{2-10} & Fast CPU + NCS2 & 13.5 & 16.0 & 17.1 & 19.4 & 22.0 & 24.3 & 26.7 & 29.0 \\ \cline{2-10} & Slow CPU + NCS2 & 0.4 & 3.0 & 5.5 & 7.8 & 10.3 & 12.7 & 14.9 & 17.9 \\ \hline \end{tabular} } \vspace{-2mm} \end{table*} \subsection{Energy Efficiency for Detection at Edge} Edge devices often rely on wireless connectivity and a limited power supply. Therefore, energy efficiency is an important consideration for video analytics in edge systems.
Table~\ref{table:comparison-power-effieiency} shows the power consumption of the hardware used in the experiments reported in this paper: the Nvidia GTX TITAN X (a popular GPU), the Intel NCS2, the AMD A6-9225 (the slow edge server), and the Intel i7-10700K (the fast edge server). We run the YOLOv3 object detection model on these four types of edge devices and measure the energy consumption and FPS as reported in Table~\ref{table:comparison-power-effieiency}. We highlight two interesting observations. {\it First}, different AI devices consume different amounts of power. We use the TDP (Thermal Design Power) to indicate the power dissipation of each device. For example, one Intel NCS2 stick consumes only 2 watts, whereas a single GTX TITAN X GPU has the highest power consumption at 250 watts. Running YOLOv3 on the edge server with the slow CPU and no AI hardware consumes 15 watts; running it on the edge server with the fast CPU consumes 125 watts. {\it Second}, the Intel NCS2 achieved the highest energy efficiency in terms of detection FPS per watt. Here we measure detection processing throughput in FPS with zero frame dropping for all four detection execution environments at edge. The edge server equipped with the TITAN X GPU achieves the highest rate of 35 FPS, followed by the fast CPU with 13.5 FPS and the Intel NCS2 with 2.5 FPS, while the slow CPU edge server has the slowest detection performance at 0.4 FPS. Based on both TDP (Watts) and FPS for all four settings, we next measure the energy efficiency in terms of detection FPS per watt, computed by dividing the detection FPS by the corresponding TDP. The edge device with the Intel NCS2 offers the highest detection FPS per watt (1.25), compared to the edge device with the GPU (0.14), the edge device powered by the fast CPU (0.11) and the edge device powered by the slow CPU (0.03).
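The last row of Table~\ref{table:comparison-power-effieiency} is obtained by dividing detection FPS by TDP; a quick check with the table values (a throwaway snippet, not part of the prototype):

```python
devices = {                      # (detection FPS, TDP in watts)
    "Intel NCS2": (2.5, 2),
    "Slow CPU":   (0.4, 15),
    "Fast CPU":   (13.5, 125),
    "GPU":        (35, 250),
}
fps_per_watt = {name: round(fps / tdp, 2)
                for name, (fps, tdp) in devices.items()}
# -> {'Intel NCS2': 1.25, 'Slow CPU': 0.03, 'Fast CPU': 0.11, 'GPU': 0.14}
```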
This further verifies that edge AI hardware such as the Intel NCS2 stick is competitive in edge computing with high energy efficiency and suitable for deploying deep neural network models on wireless edge devices. \begin{table*}[h!] \centering \caption{{Comparison of Bandwidth for Different Interfaces}} \label{table:bandwidth-comparison} \scalebox{1.0}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Port & USB 2.0 & USB 3.0 & Ethernet & 10 Gigabit Ethernet & WiFi 6 & 4G (peak) & 5G (peak) \\ \hline Bandwidth & 480 Mbps~\cite{usb-2-3-ethernet-speed} & 5 Gbps~\cite{usb-2-3-ethernet-speed} & 100 Mbps$\sim$1 Gbps~\cite{usb-2-3-ethernet-speed} & 10 Gbps~\cite{role-10-gigabit-ethernet} & $\sim$10 Gbps~\cite{wifi-6} & 1 Gbps~\cite{cellnetwork-5g-4g} & 20 Gbps~\cite{cellnetwork-5g-4g} \\ \hline \end{tabular} } \end{table*} \begin{table}[h!] \centering \caption{{\small The Impact of Connection Interface (ADL-Rundle-6)}} \label{table:connection-interface} \scalebox{0.962}{ \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{\#NCS2} & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \multirow{2}{*}{SSD300} & USB 2.0 & 2.0 & 3.9 & 5.9 & 7.8 & 9.7 & 11.6 & 13.2 \\ \cline{2-9} & USB 3.0 & 2.3 & 4.6 & 6.9 & 9.1 & 11.5 & 13.7 & 16.0 \\ \hline \multirow{2}{*}{YOLOv3} & USB 2.0 & 1.9 & 3.7 & 5.5 & 7.2 & 8.1 & 8.0 & 8.1 \\ \cline{2-9} & USB 3.0 & 2.5 & 5.1 & 7.5 & 10.0 & 12.4 & 14.8 & 17.3 \\ \hline \end{tabular} } \end{table} \subsection{Impact of Scheduling Algorithms} This set of experiments is designed to measure the impact of using different parallel detection scheduling algorithms. We focus on comparing the baseline round-robin (RR) scheduler with the first-come-first-served (FCFS) scheduler. Table~\ref{table:exp-rr-fcfs-eth-sunnyday-yolov3} reports the experimental results with these two schedulers by running YOLOv3 on the ETH-Sunnyday video. We highlight three interesting observations.
{\it First}, consider the NCS2-only scenario, in which homogeneous AI hardware attachments are used, each running YOLOv3 on the ETH-Sunnyday test video. As expected, the RR and FCFS schedulers show similar performance when performing multi-model parallel detection using only $n$ NCS2 sticks with $n$ varying from 1 to 7: the detection FPS for both schedulers ranges from 2.5 to 17.3 FPS. {\it Second}, when we carry out multi-model parallel detection using heterogeneous edge devices, we expect a different performance impact from the RR scheduler compared to the FCFS scheduler. In the next experiment, we use the Intel i7 (fast CPU) as the edge server with $n$ NCS2 sticks, varying $n$ from 1 to 7, and perform the multi-model parallel detection using both the fast CPU and the $n$ NCS2 sticks. We deploy YOLOv3 on the CPU via the OpenVINO framework. This set of experiments shows that, compared to the RR scheduler, the FCFS scheduling algorithm achieves much higher detection FPS for every $n$ from 1 to 7. This is because FCFS is in general more efficient for heterogeneous execution environments. Concretely, the fast CPU has much higher detection processing throughput (13.5 FPS) than an NCS2 stick (2.5 FPS). FCFS effectively leverages this performance disparity, whereas the RR scheduler fails to take it into account when scheduling parallel detection execution. {\it Third}, with the fast CPU as the edge server and 7 NCS2 sticks, the FCFS scheduler consistently achieves a higher throughput of 29 FPS, compared to 20.1 FPS for the RR scheduler under the same runtime setup. We next conduct the same set of experiments by replacing the fast CPU with the slow CPU.
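The gap between the two schedulers on heterogeneous hardware can be reproduced with a minimal event-driven simulation (an illustrative sketch, not the prototype's scheduler code; the device rates are taken from the text):

```python
import heapq

def simulate(frame_count, frame_times, policy):
    """Wall-clock time to process frame_count frames on devices whose
    per-frame latencies are given in frame_times (seconds)."""
    n = len(frame_times)
    if policy == "rr":
        # Round-robin: frame i always goes to device i mod n.
        busy = [0.0] * n
        for i in range(frame_count):
            busy[i % n] += frame_times[i % n]
        return max(busy)
    # FCFS: the next frame goes to whichever device frees up first.
    free_at = [(0.0, d) for d in range(n)]
    heapq.heapify(free_at)
    finish = 0.0
    for _ in range(frame_count):
        t, d = heapq.heappop(free_at)
        t += frame_times[d]
        finish = max(finish, t)
        heapq.heappush(free_at, (t, d))
    return finish

# Fast CPU (13.5 FPS) plus one NCS2 stick (2.5 FPS):
times = [1 / 13.5, 1 / 2.5]
frames = 1000
rr_fps = frames / simulate(frames, times, "rr")      # ~5 FPS
fcfs_fps = frames / simulate(frames, times, "fcfs")  # ~16 FPS
```

With the fast CPU and one NCS2 stick, the simulated RR throughput is capped near 5 FPS by the slower device, while FCFS approaches the combined 16 FPS, mirroring the 1-stick column of Table~\ref{table:exp-rr-fcfs-eth-sunnyday-yolov3}.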
It is observed that even with the slow CPU, whose detection rate is a poor 0.4 FPS, our multi-model parallel detection approach with the FCFS scheduler can still deliver slightly better performance than using NCS2 sticks only, because the slow CPU with NCS2 solution benefits from one additional model for parallel object detection. However, when using the RR scheduler, the slow CPU with NCS2 suffers notably: the poor performance of the detection model running on the slow CPU has a significantly negative impact on the overall performance of the parallel detection approach due to the large amount of random frame dropping at the slow CPU. In summary, the FCFS scheduling algorithm provides more consistent performance for the multi-model parallel detection approach under both homogeneous and heterogeneous runtime environments. In our first prototype implementation, we set FCFS as the default scheduler. \subsection{Impact of Connection Interface} To install multiple AI hardware sticks on an edge server, one needs a USB hub. In our first prototype, we connect multiple Intel NCS2 sticks to the edge servers via the USB port with a USB hub. Different versions of USB ports have different bandwidths. For instance, the bandwidth of USB 3.0 is 5 Gbps, which is much higher than that of USB 2.0 with about 0.5 Gbps~\cite{usb-2-3-ethernet-speed}. By connecting AI hardware such as an NCS2 stick to an edge server, the computation-heavy DNN model for object detection is hosted on the AI hardware for detection processing. The edge server therefore needs to transfer the input video data to the attached NCS2 stick to perform the object detection processing. Hence, the connection bandwidth between the edge server and its attached AI hardware is critical.
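As a rough sanity check of this bandwidth demand, the raw input payload can be estimated as follows (a back-of-the-envelope sketch assuming 8-bit RGB input tensors and the peak USB 3.0 detection rates; the actual transfer format and per-request protocol overhead depend on the OpenVINO USB plugin):

```python
def payload_mbps(width, height, fps):
    """Raw RGB payload in Mbit/s (3 bytes per pixel assumed)."""
    return width * height * 3 * fps * 8 / 1e6

ssd_mbps = payload_mbps(300, 300, 16.0)    # ~34.6 Mbps at peak FPS
yolo_mbps = payload_mbps(416, 416, 17.3)   # ~71.9 Mbps at peak FPS
```

Even at peak throughput, the raw payload stays well below the nominal 480 Mbps of USB 2.0, which suggests that per-transfer overhead and effective (rather than nominal) bus throughput, not payload size alone, contribute to the USB 2.0 saturation discussed next.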
For example, we can connect the multiple Intel NCS2 sticks to the fast edge server via USB 2.0 or USB 3.0; the two USB ports have different connection bandwidths, which subsequently impacts the detection processing throughput in FPS, as shown in Table~\ref{table:connection-interface}. This set of experiments reports the results of running the pre-trained SSD300 and YOLOv3 object detection models on the ADL-Rundle-6 test video. We highlight two interesting observations. {\it First,} for both SSD300 and YOLOv3, edge detection with USB 3.0 achieves much higher performance in FPS than the same edge server with a USB 2.0 port, regardless of whether we run object detection using only one NCS2 stick or multiple NCS2 sticks. The USB 3.0 port outperforms the USB 2.0 port in both bandwidth and FPS throughput. {\it Second}, the input data size for YOLOv3 is 416$\times$416$\times$3=519,168, which is about 2$\times$ the input size of SSD300 with 300$\times$300$\times$3=270,000. YOLOv3 has higher bandwidth requirements than SSD300, which also explains why YOLOv3 fails to further improve the detection FPS beyond \#NCS2=5 with USB 2.0. This set of experiments motivates us to consider the high-speed connections offered by 5G or 6G networks. Table~\ref{table:bandwidth-comparison} compares the network bandwidths of different types of connections. The reported peak bandwidth of the 5G network could reach 20 Gbps~\cite{cellnetwork-5g-4g}, followed by 10 Gigabit Ethernet~\cite{role-10-gigabit-ethernet} and WiFi 6 with 10 Gbps~\cite{wifi-6}. With guaranteed 10 Gigabit or higher network connectivity, our multi-model parallel detection approach can run effectively on a group of nearby edge nodes, with one edge node running one object detection model.
However, with an Ethernet~\cite{usb-2-3-ethernet-speed} or 4G network connection at 1 Gbps~\cite{cellnetwork-5g-4g}, using multiple AI hardware sticks on a single edge server with a USB 3.0 hub represents a more efficient approach in terms of both detection performance and energy consumption. \noindent {\bf Impact of Programming Languages on Detection Performance at Edge.\/} During our prototype development for multi-model parallel object detection on edge devices, we also observed that implementations of the parallel YOLOv3 object detection in different programming languages, such as Python and C++, achieve different detection throughput in FPS even in the same runtime environment. Table~\ref{table:programming-language} reports the results from this set of experiments. Overall, the Python implementation fails to scale well beyond two NCS2 sticks, only achieving 9.7$\sim$9.8 FPS when $n$ varies from 3 to 7. In comparison, the C++ implementation scales as we increase the number of NCS2 sticks from 2 to 7. The overall speedup of the C++ implementation is 7$\times$ with 7 NCS2 sticks, showing a trend of linear scalability, though the synchronization overhead of the C++ implementation leads to slightly lower performance for 1 and 2 NCS2 sticks than the Python implementation. The reason is that the current OpenVINO framework does not support multi-process execution in Python, and for the Python multi-thread implementation, the Python global interpreter lock ultimately serializes execution, impairing scalability. \begin{table}[h!]
\centering \caption{{\small The Impact of Programming Languages on parallel detection performance in FPS (YOLOv3, ADL-Rundle-6)}} \label{table:programming-language} \scalebox{1.0}{ \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \#NCS2 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline Python & 4.8 & 9.4 & 9.8 & 9.8 & 9.7 & 9.7 & 9.7 \\ \hline C++ & 4.5 & 9.1 & 13.5 & 18.0 & 22.3 & 27.5 & 32.4 \\ \hline \end{tabular} } \end{table} \section{Conclusion} We have shown that even with well-trained DNN object detectors, the online detection quality in edge systems may deteriorate due to the limited capacity for running DNN object detection models and the random frame dropping that occurs when the frame processing rate is significantly slower than the incoming video stream rate. We present a multi-model parallel detection approach to effectively speed up the detection processing throughput at edge. We show that using the FCFS scheduler, our multi-model parallel detection approach can effectively speed up the detection throughput performance by minimizing the disparity in the detection processing rates on heterogeneous detection execution environments at edge. Extensive experiments are conducted using both SSD300 and YOLOv3 pre-trained DNN models on benchmark videos with different video stream rates. We show that our multi-model parallel detection approach can speed up the online object detection processing throughput (FPS) and deliver near real-time object detection performance at edge. Our ongoing research includes implementing other performance-aware model parallel scheduling algorithms and designing a multi-model and multi-device prototype system capable of handling heterogeneous edge devices and heterogeneous detection models in terms of size and complexity. We are also interested in incorporating object tracking and segmentation techniques to take into consideration the temporal or spatial dependencies over individual video frames~\cite{object-tracking-survey,fast-online-object-tracking}.
\ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi This work is partially sponsored by a Cisco grant on edge computing. We thank the Cisco team for the discussions and meetings on edge computing. The authors from Georgia Institute of Technology are also partially supported by the National Science Foundation under Grants NSF 2038029 and NSF 1564097. \bibliographystyle{IEEEtran}
\section{Introduction} In recent years, many nanostructured surfaces have been investigated. The interest in them is due to their electronic and transport properties. They can be used as nanoscale devices like transistors, molecular memory devices, nanowires, etc. They can be produced in complex thermal processes such as chemical vapor deposition. The electronic properties are mediated by different kinds of topological defects, various substituents added to the structure, external influences such as a magnetic field, border effects given by the geometry close to the defects, etc. In the case of the cone, the geometry is given by the pentagons in the tip, whose number ranges from $1$ to $5$ and determines the resulting vertex angle. The properties of nanocones are widely investigated in \cite{lammertcrespi}. In this paper, we will be interested in how the border effects influence the electronic structure close to the tip. We apply the approximation used for the studies of the electronic properties of double wall nanotubes in \cite{pincak1}: we consider the influence of the radius of the nanoparticles on the bond hybridization and also on the $\pi$ orbitals. The method of rehybridization of the $\sigma$ and $\pi$ orbitals was used in order to compute the influence of the curvature of the nanocone shell on the matrix elements. The electronic structure of the cone where this effect is not considered was investigated in \cite{sitenko}. The continuum gauge field-theory model was used, where the disclination is represented by an SO(2) gauge vortex.
The solution of the Dirac-Weyl equation gives the local density of states ($LDoS$) around the Fermi energy.\\ In the continuum gauge field-theory, the following equation is solved to investigate the electronic structure of the conical surface: \begin{equation}\label{first}\hat{H}\psi=E\psi.\end{equation} Because of the atomic structure of the graphene lattice, which is composed of two sublattices $A$ and $B$, the solution has two components, $\psi=(\psi_A, \psi_B)$, which depend on the energy. From the solution, $LDoS$ is calculated as \begin{equation}\label{LDoS}LDoS(E)=|\psi_A(E)|^2+|\psi_B(E)|^2.\end{equation} In \cite{sitenko}, the Hamiltonian in (\ref{first}) has the form \begin{equation}\label{second}\hat{H}=\left(\begin{array}{cc}H_1 & 0\\0 & H_{-1}\end{array}\right)\end{equation} and \begin{equation}\label{third}\hat{H}_s=\hbar v\left\{i\sigma^2\partial_r-\sigma^1r^{-1}\left[(1-\eta)^{-1}\left(is\partial_{\varphi}+\frac{3}{2}\eta\right)+ \frac{1}{2}\sigma^3\right]\right\},\hspace{5mm}s=\pm 1.\end{equation} The Hamiltonian in (\ref{second}) arose from a Hamiltonian $H_0$ by performing some unitary transformations, so we cannot strictly say to which of the definite sublattices $A$ or $B$, or to which of the Fermi points "$+$" or "$-$", the particular components $H_+, H_-$ of the Hamiltonian correspond, since the transformations mix up the different possibilities. The parameters $r$ and $\varphi$ in (\ref{third}) represent the polar coordinates on the curved conical surface with the origin in the tip, and $\hbar$ and $v$ represent the reduced Planck constant and the Fermi velocity, whose product is $\hbar v=\frac{3}{2}t$ \cite{sitenko}, where $t\doteq 3\,{\rm eV}$ is the hopping integral corresponding to the neighbouring sites. The Pauli matrices $\sigma^i, i=1,2,3$ enter the Dirac-Weyl equation in the standard way. The number of pentagonal defects $N_d$ is included in the parameter $\eta=N_d/6$.
In this paper, we will use the upgraded version of the Hamiltonian in (\ref{third}): \begin{equation}\label{fourth}\hat{H}_s=\hbar v\left\{i\sigma^2\partial_r-\sigma^1r^{-1}\left[(1-\eta)^{-1}\left(is\partial_{\varphi}+\frac{3}{2}\eta\right)+ \frac{1}{2}\sigma^3\right]-\frac{A}{(1-\eta)^2r^2}I\right\},\end{equation} i.e., we supply a nonzero dimensionless parameter $A$ which represents the dependence of the $\pi$ orbital on the local curvature of the system. Thus we have (see Appendix) \begin{equation}\label{2a} \langle \pi |\hat{H}|\pi\rangle =\langle p_{z} |\hat{H}|p_{z}\rangle - \frac{\hbar v A}{(1-\eta)^2r^2} \end{equation} A similar effect was taken into account in computing the electronic structure of multiwalled fullerenes and nanotubes in \cite{pincak1} and \cite{pincak2}. This curvature-dependent effect has nothing to do with the overlap of the $\pi$ orbitals.\\ \begin{figure} \begin{center} {\includegraphics[width=60mm]{cone3.eps}}\caption{The structure of the graphitic cone with 3 (pentagonal) defects.}\label{fg7} \end{center} \end{figure} In Fig. \ref{fg7}, a model of the graphitic nanocone with 3 (pentagonal) defects in the tip is displayed. Two significant positions with respect to the tip ($r_0$ and $r_1$) are marked there: $r_1$ denotes the upper limit of $r$ for which the mentioned effect of the pseudopotential is significant. On the other hand, at the distance $r_0$ from the tip, the surface becomes too smooth and the influence of the $\pi$ bonds is negligible in comparison with the influence of the $\sigma$ bonds. So only in the interval $(r_0, r_1)$ is the investigation of the pseudopotential (or any kind of effective charge) meaningful.
Denoting by $d$ the length of the $C-C$ bond, we can estimate $r_0\sim d, r_1\sim(2-3)d$ (here we choose $r_1=2d$).\\ If we write (\ref{first}) as a system of equations (with the Hamiltonian including the nonzero parameter $A$ presented in (\ref{fourth})), then, by performing the substitution method, after final corrections and putting \begin{equation}\frac{A}{(1-\eta)^2}=a_1,\hspace{1cm}\frac{E}{\hbar v}=a_2,\hspace{1cm}\frac{sn-\eta}{1-\eta}=a_3,\end{equation} we obtain \[\left[a_2+\frac{a_1}{r^2}\right]\frac{{\rm d}^2f}{{\rm d}r^2}+\left[\frac{a_2}{r}+\frac{3a_1}{r^3}\right]\frac{{\rm d}f}{{\rm d}r}+\] \begin{equation}\label{solve1} +\left[a_2^3+(3a_1a_2-a_3^2)\frac{a_2}{r^2}+ (3a_1a_2-2a_3-a_3^2)\frac{a_1}{r^4}+\frac{a_1^3}{r^6}\right]f(r)=0.\end{equation} Here, $f(r)$ represents an arbitrary component of the two-component wave function $\psi$ from (\ref{first}).\\ To solve the equation for low values of the parameter $r$, we use the Runge-Kutta numerical method. In this method, we suppose some initial values of the solution and its derivative at the point $r_0$: \begin{equation}f(r_0)=f_0,\hspace{1cm}f'(r_0)=f_0'.\end{equation} Then, we calculate the values of the given function at the points $x_0+n h$, where $h>0$ is a small number.
At each step, we calculate the coefficients \begin{equation}k_1(h)=\frac{1}{2}h^2f(x_0,y_0,y_0'),\end{equation} \begin{equation}k_2(h)=\frac{1}{2}h^2f(x_0+\frac{h}{2},y_0+\frac{h}{2}y_0'+\frac{k_1}{4},y_0'+\frac{k_1}{h}),\end{equation} \begin{equation}k_3(h)=\frac{1}{2}h^2f(x_0+\frac{h}{2},y_0+\frac{h}{2}y_0'+\frac{k_1}{4},y_0'+\frac{k_2}{h}),\end{equation} \begin{equation}k_4(h)=\frac{1}{2}h^2f(x_0+h,y_0+h y_0'+k_3,y_0'+\frac{2k_3}{h}),\end{equation} so that we finally find \begin{equation}y(x_0+h)\doteq y(x_0)+h y'(x_0)+\frac{1}{3}(k_1(h)+k_2(h)+k_3(h)),\end{equation} \begin{equation}y'(x_0+h)\doteq y'(x_0)+\frac{1}{3h}(k_1(h)+2k_2(h)+2k_3(h)+k_4(h)).\end{equation} The detailed description of this method is given in \cite{runge-kutta}. We can also try to find an analytical solution. In this case (low values of $r$), it can be approximated by simplifying (\ref{solve1}) under the assumption of small $r$. Then, we get \begin{equation}\frac{{\rm d}^2f}{{\rm d}r^2}+\left(\frac{3}{r}-\frac{2a_2}{a_1}r\right)\frac{{\rm d}f}{{\rm d}r}+f(r)=0\end{equation} with the solution \begin{equation}f_0(r)=C_1F_-(r)+C_2F_+(r)=C_1\hspace{1mm} {}_1F_1\left(-\frac{a_1}{4a_2},2,\frac{a_2}{a_1}r^2\right)+C_2G_{1,2}^{2,0}\left(-\frac{a_2}{a_1}r^2\Big| \begin{tabular}{cc}\multicolumn{2}{c}{$1+\frac{a_1}{4a_2}$}\\$-1$ & $0$\end{tabular} \right),\end{equation} where ${}_1F_1$ is the confluent hypergeometric function, $G_{1,2}^{2,0}$ is the Meijer $G$-function, and $C_1, C_2$ are the normalization constants whose form will be introduced below. In practical calculations, we will not consider the contribution of the Meijer $G$-function, which is very small.
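The Runge-Kutta step quoted above can be sketched as a generic integrator for $y''=f(x,y,y')$ (an illustrative snippet using the standard textbook Nystr\"{o}m coefficients, applied here to the test equation $y''=-y$ rather than to (\ref{solve1})):

```python
import math

def rkn_step(f, x, y, yp, h):
    """One Runge-Kutta-Nystroem step for y'' = f(x, y, y')."""
    k1 = 0.5 * h * h * f(x, y, yp)
    k2 = 0.5 * h * h * f(x + h / 2, y + h / 2 * yp + k1 / 4, yp + k1 / h)
    k3 = 0.5 * h * h * f(x + h / 2, y + h / 2 * yp + k1 / 4, yp + k2 / h)
    k4 = 0.5 * h * h * f(x + h, y + h * yp + k3, yp + 2 * k3 / h)
    y_new = y + h * yp + (k1 + k2 + k3) / 3
    yp_new = yp + (k1 + 2 * k2 + 2 * k3 + k4) / (3 * h)
    return y_new, yp_new

# Test equation y'' = -y with y(0)=0, y'(0)=1, whose solution is sin(x).
f = lambda x, y, yp: -y
x, y, yp, h = 0.0, 0.0, 1.0, 0.01
for _ in range(100):          # integrate to x = 1
    y, yp = rkn_step(f, x, y, yp, h)
    x += h
# y is now close to sin(1), yp close to cos(1)
```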
Then, $LDoS$ will have the form (keeping in mind the meaning of the particular parameters) \[LDoS(\tilde{E},\eta)=C_1(\tilde{E},\eta)^2{}_1F_1^2\left(-\frac{A}{4(1-\eta)^2\tilde{E}},2, \frac{\tilde{E}}{A}(1-\eta)^2r^2\right)\sim\] \begin{equation}\label{hyperF} \sim C_1(\tilde{E},\eta)^2\left[\frac{1}{64}r^4+\left(\frac{\tilde{E}}{48}-\frac{A}{192(1-\eta)^2}\right)r^6\right],\end{equation} where $\tilde{E}=a_2=\frac{E}{\hbar v}$. The reliability of this method of finding the solution is limited: without satisfying additional requirements on the Taylor expansion of the solution, the deviation from the real solution can be considerable. We will verify the validity of this solution in the plots of $LDoS$.\\ For high values of the parameter $r$, we simplify equation (\ref{solve1}) by dropping the terms that are negligible in this regime. After doing this, equation (\ref{solve1}) for high values of $r$ takes the form \begin{equation}\frac{{\rm d}^2f}{{\rm d}r^2}+\frac{1}{r}\frac{{\rm d}f}{{\rm d}r}+\left[a_2^2+\frac{1}{r^2}(3a_1a_2-a_3^2)\right]f(r)=0,\end{equation} and the solution has the form \begin{equation}\label{yinfty}f_{\infty}(r)=C_1G_-(r)+C_2G_+(r)=C_1 J_{-\frac{a_3}{a_2}}(\sqrt{r})+C_2 J_{\frac{a_3}{a_2}}(\sqrt{r}),\end{equation} where $J$ is the Bessel function of the first kind. As there is no reason to prefer sites of one particular type, we suppose $C_1=C_2$. Then, the normalization constant is calculated from the condition \begin{equation}C_1^2\int\limits_0^{r_{max}}(|G_-(r)|^2+|G_+(r)|^2){\rm d}r=1,\end{equation} where $r_{max}$, the upper limit of the integration, will be chosen as $r_{max}=50$. This normalization constant will be used for the case of the low as well as the high values of $r$.\\ Now, the dependence of $LDoS$ on the energies is found with the help of (\ref{LDoS}) (and by using the normalization constant $C_{\infty}$). For the low values of $r$, it is shown in the plot in Fig.
\ref{fg5}. The cases of different numbers of pentagons with $A$ being zero and nonzero are studied there. The exact value of $A$ is calculated in the Appendix. We see that for nonzero $A$, $LDoS$ decreases and metallization appears for a higher number of pentagons. The plots are valid for the solution calculated by using the Runge-Kutta method as well as for the solution represented by the hypergeometric function (\ref{hyperF}).\\ \begin{figure} {\includegraphics[width=130mm]{CLD4.eps}}\caption{Schematic plot of $LDoS$ for the wave function $f_0$ and low values of $r$ and comparison with the case without border effects for the number of pentagons $N_d=1, 2$ and $3,\hspace{2mm}n=2, s=1, r=2$.}\label{fg5} \end{figure} At long distances from the tip, where we can neglect the term $A/((1-\eta)^{2}r^{2})$, $LDoS$ corresponds to the solution presented in \cite{sitenko} and to the approximation for high values of $r$ presented in \cite{lamcresp}. We can try to find the energy levels using the analogy with the calculations presented in \cite{berry}. There, the solution has the form of the Bessel function $J_l(k_{nl})$ and the equality \begin{equation}J_l(k_{nl})=J_{l+1}(k_{nl})\end{equation} is derived. We can use the solution and the approximation from \cite{sitenko} and follow the same procedure. Then, \begin{equation}J_{\nu}\left(\frac{E}{\hbar v}r\right)=J_{\nu+1}\left(\frac{E}{\hbar v}r\right)\end{equation} holds, where $\nu=\frac{sn-\eta}{1-\eta}$. From \cite{lamcresp} it follows that \begin{equation}J_{\nu}\left(\frac{E}{\hbar v}r\right)\sim\sqrt{\frac{2\hbar v}{\pi Er}}\cos\left[\frac{Er}{\hbar v}-\left(\nu+\frac{1}{2}\right)\frac{\pi}{2}\right],\end{equation} so, after the substitution, we get \begin{equation}\frac{E}{\hbar v}=\frac{(m+0.5)\pi+sn}{r},\hspace{1cm}m\in\mathcal{Z}.\end{equation} \section{Conclusion} We studied the electronic properties of the graphitic nanocone near the tip. However, we found that it is very difficult to find the analytical solution.
So we divided the problem into two cases: low values of $r$ and high values of $r$. For high values of $r$, the solution is in fact the same as in the case considered in \cite{sitenko}, which does not reflect the pseudopotential; for low values of $r$, we used numerical methods to find the solution and compared it with the analytical solution calculated from the simplified version of equation (\ref{solve1}) for the given case. In fact, this simplification was carried out without additional requirements on the form of the solution; nevertheless, the results appeared to be in agreement with the numerical solution. It follows from the plots in Fig. \ref{fg5} that for the case of 1 and 2 pentagons, the electron flux in the nanocone tip is spread over the whole interval of energies, while it is concentrated around the Fermi level in the case of 3 pentagons. For high values of $r$, the energy levels were calculated with the help of the method used in \cite{berry}. For low values of $r$, the calculation of the energy levels is complicated because the approximate analytical solution we found does not have the form of a Bessel function. Furthermore, it follows from the plots of $LDoS$ in Fig. \ref{fg5} that there are no peaks, which significantly restricts the possibility of calculating the energy levels from the numerical or analytical approximations of the solution. We expected the border effects close to the tip to be caused by the curvature of the $\pi$ orbitals of the neighboring atoms. In a closer approximation, hopping terms could be supplied into (\ref{fourth}) caused by the overlap of the neighbouring $\pi$ orbitals and also by the overlap of the $\pi$ orbitals corresponding to the sites located at opposite sides of the tip. Both effects would be stronger when further pentagons are added to the tip, which creates the form of the carbon nanohorn.
This could be considered in further calculations. The localization of the electrons on the nanocone tip in the case of 3 pentagonal defects could be applied in the field of electron microscopy. One of the papers dealing with this subject is \cite{chenchen}. There, the carbon nanocone is recommended as a good material for the probe tip in atomic force microscopy because of its good physical and chemical properties in comparison with other materials such as silicon. A process of tip fabrication via E-beam-induced deposition and chemical vapor deposition is suggested there. Another application, in the field of scanning tunneling microscopy, is given in \cite{carbon}.\\ \section*{Acknowledgments} The work was supported by the Slovak Academy of Sciences in the framework of CEX NANOFLUID, by the Science and Technology Assistance Agency under Contracts No. APVV-0509-07 and APVV-0171-10, by VEGA Grant No. 2/0037/13, and by the Ministry of Education Agency for Structural Funds of the EU in the framework of projects 26220120021, 26220120033 and 26110230061. R. Pincak would like to thank the TH division at CERN for hospitality.
\section*{Results} In this article, we determine the three-dimensional $T$-$p$-$H$ phase diagram of LaCrGe$_3$ by measuring the electrical resistivity of single crystals of LaCrGe$_3$ under pressure and magnetic field. The sample growth and characterization have been reported in Ref.~\cite{Lin2013PRB}. The pressure techniques have been reported in Ref.~\cite{Taufour2016PRL}. The magnetic-field-dependent resistivity was measured in two Quantum Design Physical Property Measurement Systems up to $9$ or $14$~T. The electrical current is in the $ab$-plane, and the field is applied along the $c$-axis, which is the easy axis of magnetization~\cite{Cadogan2013SSP,Lin2013PRB}. Whereas most of the features in Fig.~\ref{fig:diagabcd}d were well understood in Ref.~\cite{Taufour2016PRL}, we also indicate the pressure dependence of $T_x$ (d$\rho$/d$T_\text{max}$), the temperature at which a broad maximum is observed in d$\rho$/d$T$ below $T_\textrm{C}$, shown as orange triangles in Fig.\,\ref{fig:diagabcd}d. At ambient pressure, $T_x\approx71$~K. No corresponding anomaly can be observed in magnetization~\cite{Taufour2016PRL}, internal field~\cite{Taufour2016PRL} or specific heat~\cite{Lin2013PRB}. Under applied pressure, $T_x$ decreases and cannot be distinguished from $T_\textrm{C}$ (d$\rho$/d$T_\text{mid}$) above $1.6$~GPa. As will be shown, application of a magnetic field allows for a much clearer appreciation and understanding of this feature. \begin{figure}[htb!] \begin{center} \includegraphics[width=8.5cm]{1_14GPa} \end{center} \caption{\label{1_14GPa}(color online) Temperature dependence of the resistivity (black line) and its derivative (blue line) of (a) LaCrGe$_3$ at $1.14$\,GPa and (b) UGe$_2$ at $0$\,GPa from Ref.~\cite{Taufour2010PRL}.
The crossover between the two ferromagnetic phases (FM1 and FM2) is inferred from the maximum in d$\rho$/d$T$ ($T_x$) and marked by a red triangle, whereas the paramagnetic-ferromagnetic transition is inferred from the middle point of the sharp increase in d$\rho$/d$T$ ($T_\textrm{C}$) and indicated by a blue circle.} \end{figure} Figure~\ref{1_14GPa}a shows the anomalies at $T_x$ and $T_\textrm{C}$ observed in the electrical resistivity and its temperature derivative at $1.14$~GPa. For comparison, Fig.~\ref{1_14GPa}b shows ambient-pressure data for UGe$_2$~\cite{Taufour2010PRL}, where a similar anomaly at $T_x$ can be observed. In UGe$_2$, this anomaly was studied intensively~\cite{Pfleiderer2002PRL,Hardy2009PRB,PalacioMorales2016PRB}. It corresponds to a crossover between two ferromagnetic phases FM1 and FM2 with different values of the saturated magnetic moment~\cite{Pfleiderer2002PRL,Hardy2009PRB}. Under pressure, there is a critical point at which the crossover becomes a first-order transition, which eventually vanishes where a maximum in superconducting-transition temperature is observed~\cite{Saxena2000Nature}. In the case of LaCrGe$_3$, we cannot locate where the crossover becomes a first-order transition, since the anomaly merges with the Curie temperature anomaly near $1.6$~GPa, very close to the tricritical point (TCP). However, as we will show below, the two transitions can be separated again with applied magnetic field above $2.1$~GPa. This is similar to what is observed in UGe$_2$, where the PM-FM1 and FM1-FM2 transition lines separate more and more as the pressure and the magnetic field are increased. Because of such similarities with UGe$_2$, we label the two phases FM1 and FM2 and assume that the anomaly at $T_x$ corresponds to an FM1-FM2 crossover. A similar crossover was also observed in ZrZn$_2$~\cite{Kimura2004PRL}.
In Refs.~\cite{Sandeman2003PRL,Wysokinski2014PRB}, a Stoner model with two peaks in the density of states near the Fermi level was proposed to account for the two phases FM1 and FM2, reinforcing the idea of the itinerant nature of the magnetism in LaCrGe$_3$. \begin{figure}[htb!] \begin{center} \includegraphics[width=8.5cm]{2.39_GPa1.eps} \end{center} \caption{\label{2.39_GPa}(color online) (a)~Field dependence of the electrical resistivity at $2$\,K, $13.5$\,K, and $30$\,K at $2.39$\,GPa. Continuous and dashed lines represent increasing and decreasing field sweeps, respectively. (b)~Corresponding field derivatives (d$\rho$/d$H$). The curves are shifted by $15$\,$\mu\Omega$\,cm\,T$^{-1}$ for clarity. Vertical arrows represent the minima. The transition width is determined by the full width at half minimum as represented by horizontal arrows. The temperature dependences of the hysteresis widths of $H_{\textrm{min}1}$ and $H_{\textrm{min}2}$ are shown in (c) and (d) (left axes). The hysteresis width gradually decreases with increasing temperature and disappears at \TWCP. The right axes show the temperature dependence of the transition widths. The width is small for the first-order transition and becomes broad in the crossover region. The blue shaded area represents the first-order transition region, whereas the white area represents the crossover region. These allow for the determination of the wing critical point of the FM1 transition at $13.5$~K, $2.39$~GPa and $5.1$~T, and of the one for the FM2 transition at $12$~K, $2.39$~GPa and $7.7$~T.} \end{figure} In zero field, for applied pressures above $2.1$~GPa, both FM1 and FM2 phases are suppressed. Upon applying a magnetic field along the $c$-axis, two sharp drops of the electrical resistivity can be observed (Fig.~\ref{2.39_GPa}a) with two corresponding minima in the field derivatives (Fig.~\ref{2.39_GPa}b).
At $2$~K, clear hysteresis of $\Delta H\sim0.7$~T can be observed for both anomalies, indicating the first-order nature of the transitions. The emergence of field-induced first-order transitions starting from $2.1$~GPa and moving to higher field as the pressure is increased is characteristic of the ferromagnetic quantum phase transition: when the PM-FM transition becomes first order, a magnetic field applied along the magnetization axis can induce the transition, resulting in a wing structure phase diagram such as the one illustrated in Fig.~\ref{fig:diagabcd}a. In the case of LaCrGe$_3$, evidence for a first-order transition was already pointed out because of the very steep pressure dependence of $T_\textrm{C}$ near $2.1$~GPa and the abrupt doubling of the residual ($T=2$~K) electrical resistivity~\cite{Taufour2016PRL}. In UGe$_2$ or ZrZn$_2$, the successive metamagnetic transitions correspond to the PM-FM1 and FM1-FM2 transitions. In LaCrGe$_3$, due to the presence of the AFM$_Q$ phase at zero field, the transitions correspond to AFM$_Q$-FM1 and FM1-FM2. As the temperature is increased, the hysteresis decreases for both transitions, as can be seen in Figs.~\ref{2.39_GPa}c and d, and disappears at a wing critical point (WCP). Also, the transition width is small and weakly temperature dependent below the WCP, and it broadens when entering the crossover regime. Similar behavior has been observed in UGe$_2$~\cite{Kotegawa2011JPSJ}. At $2.39$~GPa, for example, we locate the WCP of the first-order FM1 transition around $13.5$~K and the one of the first-order FM2 transition around $12$~K. At these temperatures and this pressure, the transitions occur at $5.1$ and $7.7$~T, respectively. This allows for the tracking of the wing boundaries in the $T$-$p$-$H$ space up to our field limit of $14$~T. At low field, near the TCP, the wing boundaries are more conveniently determined as the location of the largest peak in d$\rho$/d$T$ (Supplementary Information). \begin{figure}[htb!]
\begin{center} \includegraphics[width=8.5cm]{Wing1_2.eps} \end{center} \caption{\label{Wing1}(color online) Projection of the wings in the (a)~$T$-$H$, (b)~$T$-$p$ and (c)~$H$-$p$ planes. Black solid squares and green solid circles represent the FM1 wing and the FM2 wing, respectively. Red lines (represented in the $T$-$p$-$H$ space in Fig.~\ref{Diag3DPaper2}) are guides to the eye, and open symbols represent the extrapolated QWCPs. (d)~$H$-$p$ phase diagram at $2$\,K. The arrow represents the pressure $p_{c}=2.1$\,GPa.} \end{figure} The projections of the wing lines $T_\textrm{WCP}(p,H)$ in the $T$-$H$, $T$-$p$ and $H$-$p$ planes are shown in Figs.~\ref{Wing1}a, b and c, respectively. The metamagnetic transitions to FM1 and FM2 start from $2.1$~GPa and separate in the high field region as the pressure is further increased. For the FM1 wing, the slope d$T_w$/d$H_w$ is very steep near $H=0$ (Fig.~\ref{Wing1}a) whereas d$H_w$/d$p_w$ is very small (Fig.~\ref{Wing1}c). This is in agreement with a recent theoretical analysis based on the Landau expansion of the free energy, which shows that d$T_w$/d$H_w$ and d$p_w$/d$H_w$ are infinite at the tricritical point~\cite{Taufour2016PRB}. This fact was overlooked in the previous experimental determinations of the wing structure phase diagram in UGe$_2$~\cite{Taufour2010PRL,Kotegawa2011JPSJ} and ZrZn$_2$~\cite{Kabeya2012JPSJ}, but appears very clearly in the case of LaCrGe$_3$. In the low field region, there are no data for the FM2 wing since the transition is not well separated from the FM1 wing, but there is no evidence for an infinite slope near $H=0$. The wing lines can be extrapolated to quantum wing critical points (QWCPs) at $0$~K in high magnetic fields of the order of $\sim30$~T (Fig.~\ref{Wing1}a) and pressures around $\sim3$~GPa (Fig.~\ref{Wing1}b). Figure~\ref{Wing1}d shows the $p$-$H$ phase diagram at low temperature ($T=2$~K).
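The wing geometry discussed here can be made concrete with the standard Landau expansion of the free energy (a textbook sketch in the spirit of Ref.~\cite{Taufour2016PRB}; the coefficients $a$, $b$ and $c$ are phenomenological and are not fitted to our data):
\[
F(m)=\frac{a}{2}m^{2}+\frac{b}{4}m^{4}+\frac{c}{6}m^{6}-Hm,\qquad c>0.
\]
For $b<0$, the transition at $H=0$ is of the first order and the tricritical point is located at $a=b=0$; for $H\neq0$, the surfaces of first-order transitions (the wings) terminate on lines of critical points determined by $\partial F/\partial m=\partial^{2}F/\partial m^{2}=\partial^{3}F/\partial m^{3}=0$.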
The nearly identical $H$-$p$ phase diagrams in Figs.~\ref{Wing1}c and \ref{Wing1}d reveal the nearly vertical nature of the wings. \begin{figure}[htb!] \begin{center} \includegraphics[width=1\linewidth]{Diag3DPaper2.eps} \end{center} \caption{\label{Diag3DPaper2}(color online) $T$-$p$-$H$ phase diagram of LaCrGe$_{3}$ based on resistivity measurements. Red solid lines are second-order phase transitions and blue planes are planes of first-order transitions. Green areas represent the AFM$_Q$ phase.} \end{figure} The resulting three-dimensional $T$-$p$-$H$ phase diagram of LaCrGe$_3$ is shown in Fig.~\ref{Diag3DPaper2}, which summarizes our results (several of the constituent $T$-$H$ phase diagrams, at various pressures, are given in the Supplementary Information). The double wing structure is observed in addition to the AFM$_Q$ phase. This is the first time that such a phase diagram is reported. Other materials suggested either a wing structure without any new magnetic phase~\cite{Taufour2010PRL,Kotegawa2011JPSJ,Kabeya2012JPSJ}, or a new magnetic phase without a wing structure~\cite{Kotegawa2013JPSJ,Lengyel2015PRB}. The present study illustrates a third possibility where all such features are observed. Moreover, the existence of the two metamagnetic transitions (to FM1 and FM2) suggests that this might be a generic feature of itinerant ferromagnetism. Indeed, it is observed in ZrZn$_2$, UGe$_2$, and LaCrGe$_3$, although these are very different materials with different electronic orbitals giving rise to the magnetic states. We note that a wing structure has also been determined in the paramagnetic compounds UCoAl~\cite{Aoki2011JPSJUCoAl,Combier2013JPSJ,Kimura2015PRB} and Sr$_3$Ru$_2$O$_7$~\cite{Wu2011PRB}, implying that a ferromagnetic state probably exists at negative pressures in these materials.
Strikingly, two anomalies could be detected upon crossing the wings in UCoAl (two kinks of a plateau in electrical resistivity~\cite{Aoki2011JPSJUCoAl}, two peaks in the ac susceptibility~\cite{Kimura2015PRB}), as well as in Sr$_3$Ru$_2$O$_7$ (two peaks in the ac susceptibility~\cite{Wu2011PRB}). These double features could also correspond to a double wing structure. To conclude, the $T$-$p$-$H$ phase diagram of LaCrGe$_3$ provides an example of a new possible outcome of ferromagnetic quantum criticality. At zero field, quantum criticality is avoided by the appearance of a new modulated magnetic phase, but the application of a magnetic field shows the existence of a wing structure phase diagram leading towards QWCPs at high field. These experimental findings reveal new insights into the possible phase diagrams of ferromagnetic systems. The emergence of the wings reveals for the first time the theoretically predicted infinite slope~\cite{Taufour2016PRB} of the wing boundary near the tricritical point, a fact that was overlooked in previous experimental determinations of the phase diagrams of other compounds because of the lack of data density in that region. In addition, the double nature of the wings appears to be a generic feature of itinerant ferromagnetism, as it is observed in several a priori unrelated materials. This result deserves further theoretical investigation and unification. \begin{appendix} \noindent{\bf Acknowledgements} We would like to thank S.~K.~Kim, X.~Lin, V.~G.~Kogan, D.~K.~Finnemore, E.~D.~Mun, H.~Kim, Y.~Furukawa, R.~Khasanov for useful discussions. This work was carried out at Iowa State University and the Ames Laboratory, US DOE, under Contract No. DE-AC02-07CH11358. This work was supported by the Materials Sciences Division of the Office of Basic Energy Sciences of the U.S. Department of Energy. \\ \noindent{\bf Supplementary Information} is attached below. \\ \noindent{\bf Author Contributions} V.\,T. and P.\,C. initiated this study. U.\,K. and P.\,C.
prepared the single crystals. U.\,K., V.\,T. and S.\,B. performed the pressure measurements. U.\,K., V.\,T. and S.\,B. analysed and interpreted the pressure data. U.\,K. and V.\,T. wrote the manuscript with the help of all authors. \\ \end{appendix} \renewcommand{\figurename}{Supplementary Figure} \section{Supplementary Information} \subsection{Determination of the location of the tricritical point} \begin{figure}[htb!] \begin{center} \includegraphics[width=8.5cm]{H_sweep.eps} \end{center} \caption{\label{H_sweep}(color online) (a)-(b) Temperature dependence of d$\rho$/d$T$ at various magnetic fields at $1.67$ and $1.83$\,GPa. Arrows in panels (a) and (b) represent $T_\textrm{C}$ and \TWCP, respectively. (c) The variation of d$\rho$/d$T$ at the peak position as a function of external field for $p$\,$<$\,\pTCP, $p$\,$\approx$\,\pTCP~and \pTCP\,$<$\,$p$\,$<$\,\pc.} \end{figure} In Ref.~\cite{Taufour2016PRL}, the position of the tricritical point TCP was estimated near $40$~K and $1.75$\,GPa based on a discontinuity in the resistivity as a function of pressure, $\rho(p)$. Here, we use measurements under magnetic field to locate the TCP. When the paramagnetic-ferromagnetic (PM-FM) transition is of the second order, the magnetic field applied along the magnetization axis ($c$-axis) breaks the time-reversal symmetry, so that no phase transition can occur. Instead, a crossover is observed, resulting in a broadening and disappearance of the anomalies. Supplementary Fig.~\ref{H_sweep}a shows the peak in the temperature derivative of the resistivity, d$\rho$/d$T$, at various magnetic fields at $1.67$~GPa. The peak amplitude decreases, showing that the transition is of the second order. This is in contrast with the behavior at $1.83$~GPa (Supplementary Fig.~\ref{H_sweep}b), where the peak first increases under magnetic field, indicating the first-order nature of the transition.
The evolution of the value of d$\rho$/d$T$ at the peak position as a function of magnetic field is shown in Supplementary Fig.~\ref{H_sweep}c for various pressures. We can distinguish two regimes: for pressures below $\sim1.75$~GPa, the peak size monotonically decreases with applied magnetic field; for pressures above $1.75$~GPa, the peak size first increases with field, reaches a maximum at a field $H_{WCP}$ and then decreases. With this procedure, we find the TCP to be near $1.75$~GPa, at which pressure the transition temperature is $40$~K. For $p>p_{\textrm{TCP}}$, the location of the maximum value of d$\rho$/d$T$ at the peak position serves to locate the wing critical point as a function of temperature, pressure and magnetic field. \subsection{Determination of the three-dimensional $T$-$p$-$H$ phase diagram} In Supplementary Fig.~\ref{compildiag}, we show several $T$-$H$ phase diagrams at various pressures (as illustrated in Supplementary Fig.~\ref{fig:2KDiag}). For each pressure, anomalies in the temperature and field dependence of the electrical resistivity are located and serve to outline the phase boundaries. For completeness, and for future reference, we also indicate the locations of broad maxima or kinks in d$\rho$/d$T$ which do not seem to correspond to phase transitions at this point and are most likely related to crossover anomalies. \begin{figure*}[htb!] \begin{center} \includegraphics[width=17cm]{compildiag.eps} \end{center} \caption{\label{compildiag}(color online) Compilation of $T$-$H$ phase diagrams at various pressures determined by tracking various anomalies in the temperature and field dependence of the electrical resistivity measurements up to $9$ or $14$~T. The hysteresis width for the drop in $\rho(H)$ (minimum in d$\rho$/d$H$) is also indicated. Lines are guides to the eyes.
The pressure positions are shown in Supplementary Fig.~\ref{fig:2KDiag}.} \end{figure*} The $T$-$p$-$H$ phase diagram shown as Fig.~\ref{Diag3DPaper2} in the main text is constructed by combining all the $T$-$H$ phase diagrams at various pressures. \begin{figure}[htb!] \begin{center} \includegraphics[width=8.5cm]{2KDiag.eps} \end{center} \caption{\label{fig:2KDiag}$p$-$H$ phase diagram of LaCrGe$_3$ at $2$~K. The black dashed lines indicate the pressures of the diagrams shown in Supplementary Fig.~\ref{compildiag}. Note: this is an expanded view of the diagram shown in Fig.~\ref{Wing1}d of the main text.} \end{figure} \bibliographystyle{apsrev4-1} \clearpage
\section{Introduction} The theory of word maps on finite non-abelian simple groups -- that is, maps of the form $(x_1,\ldots ,x_k) \mapsto w(x_1,\ldots ,x_k)$ for some word $w$ in the free group $F_k$ of rank $k$ -- has attracted much attention. It was shown in \cite[1.6]{LS} that for a given nontrivial word $w$, every element of every sufficiently large finite simple group $G$ can be expressed as a product of $C(w)$ values of $w$ in $G$, where $C(w)$ depends only on $w$; and this has been improved to $C(w) = 2$ in \cite{LarSh, LarShT, Sh}. Improving $C(w)$ to 1 is not possible in general, as is shown by power words $x_1^n$, which cannot be surjective on any finite group whose order is not coprime to $n$. Certain word maps are surjective on all groups -- namely, those in cosets of the form $x_1^{e_1} \ldots x_k^{e_k}F_k'$ where the $e_i$ are integers with ${\rm gcd}(e_1,\ldots,e_k)=1$ (see \cite[3.1.1]{S}). The word maps for a small number of other words have been shown to be surjective on all finite simple groups. These include the commutator word $[x_1,x_2]$, whose surjectivity was conjectured by Ore in 1951 and proved in 2010 (see \cite{ore} and the references therein). The main result of this paper is the following. \begin{theorem}\label{main1} Let $p,q$ be primes, let $a,b$ be non-negative integers, and let $N=p^aq^b$. The word map $(x,y) \mapsto x^Ny^N$ is surjective on all finite (non-abelian) simple groups. \end{theorem} This result generalizes various theorems. First, it implies the classical Burnside $p^aq^b$-theorem stating that groups of this order are soluble. Indeed, if $G$ is a non-soluble group of order $N = p^aq^b$, then $G$ has a non-abelian composition factor $S$ whose order divides $N$. Thus $S$ is a (non-abelian) finite simple group satisfying the identity $x^N=1$, so the word map $x^Ny^N$ on $S$ has the trivial image $\{ 1 \}$, contradicting Theorem \ref{main1}.
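As a quick illustrative check, not part of any proof, Theorem \ref{main1} and the exponent obstruction can be verified by brute force for the smallest non-abelian simple group $\A_5$ (a self-contained sketch; the helper names are ours):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]]; permutations of {0,...,4} stored as tuples."""
    return tuple(p[q[i]] for i in range(5))

def power(p, n):
    """n-th power of the permutation p."""
    r = tuple(range(5))
    for _ in range(n):
        r = compose(r, p)
    return r

def sign(p):
    """Sign of a permutation via its inversion count."""
    s = 1
    for i in range(5):
        for j in range(i + 1, 5):
            if p[i] > p[j]:
                s = -s
    return s

# A5 = even permutations of 5 points, the smallest non-abelian simple group.
A5 = [p for p in permutations(range(5)) if sign(p) == 1]
assert len(A5) == 60

# N = 6 = 2*3 is divisible by only two primes: x^6 y^6 hits all of A5.
sixth_powers = {power(p, 6) for p in A5}
hit = {compose(a, b) for a in sixth_powers for b in sixth_powers}
assert len(hit) == 60

# N = 30 = 2*3*5 is the exponent of A5: every 30th power is trivial,
# so x^30 y^30 has image {1}, showing three prime powers cannot work.
assert {power(p, 30) for p in A5} == {tuple(range(5))}
```

Here the sixth powers in $\A_5$ are exactly the identity and the $5$-cycles, so the surjectivity of $x^6y^6$ reduces to the fact that every even permutation on $5$ points is a product of two $5$-cycles, the same mechanism exploited via \cite[Corollary 2.1]{B} in the proof of Proposition \ref{alt1}.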
Theorem \ref{main1} also implies the surjectivity of $x^2y^2$ and more generally of the words $x^{p^a}y^{p^a}$ (for a prime $p$), as established in \cite{GM, LOST}. In \cite[Cor.\ 1.5]{GM} it is shown that $x^{6^a}y^{6^a}$ is surjective on all (non-abelian) finite simple groups, again a particular case of Theorem \ref{main1}. This theorem is best possible in the sense that it cannot be extended to the case where $N$ is a product of three or more prime powers, since such a number can be the exponent of a simple group. Indeed, the smallest example is that of $\A_5$, whose exponent is $30 = 2\cdot 3\cdot 5$. If $N_1, N_2$ are positive integers such that $N_1N_2$ is divisible by at most two primes, then $x^{N_1}y^{N_2}$ is surjective on all (non-abelian) finite simple groups, since $(x^{N_2})^{N_1}(y^{N_1})^{N_2} = x^{N_1N_2}y^{N_1N_2}$ is surjective by Theorem \ref{main1}. But some more general questions, including the following, have a negative answer. If $N$ is not divisible by the exponent of a finite simple group $G$, is $x^Ny^N$ surjective on $G$? If $N$ is odd, is $x^Ny^N$ surjective on all finite non-abelian simple groups? If $N = p^aq^b$ for some primes $p,q$, is $x^Ny^N$ surjective on all finite quasisimple groups, or does it hit at least all non-central elements of every quasisimple group? See Remark \ref{general}. However, we prove the following result, which generalizes the celebrated Feit-Thompson theorem: \begin{theorem}\label{main2} Let $N$ be an odd positive integer. The word map $(x,y,z) \mapsto x^Ny^Nz^N$ is surjective on all finite quasisimple groups. In fact, every element of every finite quasisimple group is a product of three $2$-elements. \end{theorem} As mentioned above, this result is best possible in the sense that it does not hold for $x^Ny^N$; it also implies the surjectivity of $x^{N_1}y^{N_2}z^{N_3}$ for odd numbers $N_1, N_2, N_3$.
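The second statement of Theorem \ref{main2} can likewise be checked by hand for $\A_5$ (an illustrative brute-force sketch, not part of the proof; helper names are ours):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]]; permutations of {0,...,4} stored as tuples."""
    return tuple(p[q[i]] for i in range(5))

def sign(p):
    """Sign of a permutation via the parity of its inversion count."""
    inv = sum(p[i] > p[j] for i in range(5) for j in range(i + 1, 5))
    return 1 if inv % 2 == 0 else -1

A5 = [p for p in permutations(range(5)) if sign(p) == 1]
e = tuple(range(5))

# 2-elements of A5: the identity and the 15 involutions
# (element orders in A5 are 1, 2, 3 and 5).
twos = [p for p in A5 if compose(p, p) == e]
assert len(twos) == 16

# Every element of A5 is a product of three 2-elements.
products = {compose(a, compose(b, c)) for a in twos for b in twos for c in twos}
assert len(products) == 60
```

In this small case even two factors suffice: every element of $\A_5$ is real, so Lemma \ref{real1} already expresses it as a product of two $2$-elements; the check with three factors matches the statement of Theorem \ref{main2}.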
A key ingredient of our proof of Theorem \ref{main2} is the construction of certain $2$-elements in simple groups $G$ of Lie type in odd characteristic that are regular if $G$ is classical (see \S7.2) and almost regular if $G$ is exceptional (see \S7.4). This construction may be useful in other situations. There are other results of the same flavor as the second statement of Theorem \ref{main2}, such as \cite[Theorem 3.8]{GT3} where $p$-elements are considered instead of $2$-elements. There is also considerable literature on the case of involutions, see e.g.\ \cite{Mac} and the references therein. These imply results like Theorem \ref{main2} with longer products $x_1^N x_2^N \ldots x_t^N$, where $N$ is not divisible by the exponent of the simple group in question, see for example~\cite[Corollary 3.9]{GT3}. \smallskip Recall that the main results of \cite{LarSh, LarShT} assert that, given two non-trivial words $w_1$ and $w_2$, the product $w_1w_2$ is surjective on all finite non-abelian simple groups of {\it sufficiently large} order (depending on $w_1$ and $w_2$). In particular, once we fix a positive integer $N$, the word $x^Ny^N$ is surjective on all sufficiently large simple groups. Theorem \ref{main1} (and \ref{main2}) shows that, for all $N$ of the prescribed form, the word map $x^Ny^N$ (respectively $x^Ny^Nz^N$) is in fact surjective on {\it all} simple groups (respectively quasisimple groups). As mentioned above, one cannot generalize Theorem \ref{main1} to products of more than two prime powers. However, we prove results of that flavor by imposing asymptotic conditions on the simple groups. To formulate these results, define $\pi(N) = k$ and $\O(N) = \sum^k_{i=1}\alpha_i$ if the integer $N$ has the prime factorization $N = \prod^k_{i=1}p_i^{\alpha_i}$ (with $p_1 < \ldots < p_k$ and $\alpha_i > 0$).
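The invariants $\pi(N)$ and $\O(N)$ are elementary to compute; the following sketch (with a helper named \texttt{pi\_omega} purely for illustration) makes the definition concrete:

```python
def pi_omega(N):
    """Return (pi(N), Omega(N)): the number of distinct prime factors of N
    and the total number of prime factors counted with multiplicity."""
    distinct = total = 0
    d = 2
    while d * d <= N:
        if N % d == 0:
            distinct += 1
            while N % d == 0:
                total += 1
                N //= d
        d += 1
    if N > 1:  # leftover prime factor larger than sqrt of the original N
        distinct += 1
        total += 1
    return distinct, total

assert pi_omega(12) == (2, 3)    # 12 = 2^2 * 3
assert pi_omega(360) == (3, 6)   # 360 = 2^3 * 3^2 * 5
```

For instance, $\pi(360)=3$ and $\O(360)=6$, so Theorem \ref{main3} applies to $N=360$ with $k=3$ and Theorem \ref{main4} with $k=6$.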
\begin{theorem}\label{main3} Given a positive integer $k$, there is some $f(k)$ such that for all positive integers $N$ with $\pi(N) \leq k$, the word map $(x,y) \mapsto x^Ny^N$ is surjective on all finite simple groups $S$, where $S$ is either an alternating group $\A_n$ with $n \geq f(k)$, or a simple Lie-type group of rank $\geq f(k)$ and defined over $\F_q$ with $q \geq f(k)$. \end{theorem} \begin{theorem}\label{main4} Given a positive integer $k$, there is some $g(k)$ such that for all positive integers $N$ with $\O(N) \leq k$, the word map $(x,y) \mapsto x^Ny^N$ is surjective on all finite simple groups $S$, where $S$ is either an alternating group $\A_n$ with $n \geq g(k)$, or a simple Lie-type group of rank $\geq g(k)$. \end{theorem} Neither Theorem \ref{main3} nor \ref{main4} holds for finite simple Lie-type groups of bounded rank, cf.\ Example \ref{bounded}. It remains an open question whether Theorem \ref{main3} holds for finite simple Lie-type groups of unbounded rank over fields of bounded size. \smallskip We use the notation of \cite{KL} for finite groups of Lie type. For $\e = \pm$, the group $\SL^\e_n(q)$ is $\SL_n(q)$ when $\e = +$ and $\SU_n(q)$ when $\e = -$, and similarly for $\GL^\e_n(q)$, $\PSL^\e_n(q)$. Also, $E_6^\e(q)$ is $E_6(q)$ if $\e = +$ and $\tw2 E_6(q)$ if $\e = -$. We use the convention that if $\e = \pm$ then expressions such as $q-\e$ mean $q-\e 1$. \section{Preliminaries} The following plays a key role in our proofs. \begin{thm}{\rm \cite[Theorem 1.1]{GT3}}\label{prime1} Let $\cG$ be a simple simply connected algebraic group in characteristic $p > 0$ and let $F~:~\cG \to \cG$ be a Frobenius endomorphism such that $G := \cG^F$ is quasisimple. There exist (not necessarily distinct) primes $r,s_1,s_2$, all different from $p$, and regular semisimple $x,y \in G$ such that $|x| = r$, $y$ is an $\{s_1,s_2\}$-element, and $x^G\cdot y^G \supseteq G \setminus \bfZ(G)$. In fact $s_1 = s_2$ unless $\cG$ is of type $B_{2n}$ or $C_{2n}$. 
\end{thm} Throughout the paper, by a {\it finite simple group of Lie type in characteristic $p$} we mean a simple non-abelian group $S = G/\bfZ(G)$ for some $G = \cG^F$ as in Theorem \ref{prime1}. In this notation, let $q = p^f$ denote the common absolute value of the eigenvalues of $F$ acting on the character group of an $F$-stable maximal torus (so that $f$ is half-an-integer if $G$ is a Suzuki-Ree group). For each group $G$ and $S = G/\bfZ(G)$, we refer to the set $\{r,s_1,s_2\}$ specified in the proof of \cite[Theorem 1.1]{GT3} as $\cR(G)$ and $\cR(S)$. \begin{cor}\label{prime2} In the notation of Theorem \ref{prime1}, let $S = G/\bfZ(G)$ be simple non-abelian. \begin{enumerate}[\rm(i)] \item Theorem \ref{main1} holds for $S$, unless possibly $N = p^at^b$ with $t \in \{r,s_1,s_2\}$. \item Suppose $N = p^at^b$ for some prime $t$ and $|\cX| < |G|/2$, where $\cX$ is the set of all elements of $G$ of order divisible by $p$ or by $t$. The word map $(x,y) \mapsto x^Ny^N$ is surjective on $G$. \end{enumerate} \end{cor} \pf (i) By \cite[Corollary, p.\ 3661]{EG}, every non-central element of $G$ is a product of two $p$-elements. Hence Theorem \ref{main1} holds for $S$ if $p \nmid N$. On the other hand, if $N = p^at^b$ with $t \notin \{r,s_1,s_2\}$, then the elements $x$ and $y$ in Theorem \ref{prime1} are $N$th powers, so Theorem \ref{main1} again holds for $S$. (ii) Let $g \in G$. By assumption, $|G \setminus \cX| > |G|/2$, so $g(G \setminus \cX) \cap (G \setminus \cX) \neq \emptyset$. Hence $g = xy^{-1}$ for some $x,y \in G \setminus \cX$. Note that every element of $G \setminus \cX$ is an $N$th power, whence the claim follows. \hal Recall that if $a \geq 2$ and $n \geq 3$ are integers and $(a,n) \neq (2,6)$, then $a^n-1$ has a {\it primitive prime divisor}, i.e.\ a prime divisor that does not divide $\prod^{n-1}_{i=1}(a^i-1)$, cf.\ \cite{Zs}. In what follows, we fix one such prime divisor for given $(a,n)$ and denote it by $\p(a,n)$. 
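For concreteness, primitive prime divisors are easy to compute directly; the following brute-force sketch (the helper \texttt{primitive\_prime\_divisors} is ours, for illustration only) recovers the exceptional case $(a,n)=(2,6)$:

```python
def prime_factors(m):
    """Set of prime divisors of m, by trial division."""
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def primitive_prime_divisors(a, n):
    """Primes dividing a^n - 1 but no a^i - 1 with 1 <= i < n."""
    earlier = set()
    for i in range(1, n):
        earlier |= prime_factors(a**i - 1)
    return sorted(prime_factors(a**n - 1) - earlier)

assert primitive_prime_divisors(2, 6) == []   # the exceptional case of \cite{Zs}
assert primitive_prime_divisors(2, 4) == [5]
assert primitive_prime_divisors(2, 11) == [23, 89]
```

In the notation above, any member of the returned list may serve as $\p(a,n)$; for example $\p(2,4)=5$, while for $(a,n)=(2,11)$ one fixes one of $23$ and $89$.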
Next we record the primes $r,s_1,s_2$ mentioned in Theorem \ref{prime1} in Table \ref{primes} (for larger groups $G$). The third column of Table \ref{primes} contains one entry precisely when $s_1 = s_2$. \begin{table}[htb] \[ \begin{array}{|c|c|c|c|} \hline G & r & s_1,s_2 & (n,q) \neq \\ \hline \begin{array}{c}\SL_n(q),\\ n \geq 4 \end{array} & \p(p,nf) & \p(p,(n-1)f) & (6,2),(7,2), (4,4) \\ \hline \begin{array}{c}\SU_n(q),\\ n \geq 5 \hbox{ odd} \end{array} & \p(p,2nf) & \begin{array}{ll} \p(p,(n-1)f), & n \equiv 1 \bmod 4\\ \p(p,(n-1)f/2), & n \equiv 3 \bmod 4 \end{array} & (7,4) \\ \hline \begin{array}{c}\SU_n(q),\\ n \geq 4 \hbox{ even} \end{array} & \p(p,(2n-2)f) & \begin{array}{ll} \p(p,nf), & n \equiv 0 \bmod 4\\ \p(p,nf/2), & n \equiv 2 \bmod 4 \end{array} & (4,2),(6,4) \\ \hline \begin{array}{c}\Sp_{2n}(q),\\ \Spin_{2n+1}(q),\\ n \geq 3 \hbox{ odd} \end{array} & \p(p,2nf) & \p(p,nf) & (3,4) \\ \hline \begin{array}{c}\Sp_{2n}(q),\\ \Spin_{2n+1}(q),\\ n \geq 6 \hbox{ even} \end{array} & \p(p,2nf) & \p(p,nf), \p(p,nf/2) & (6,2), (12,2) \\ \hline \Sp_{24}(2) & 241 & 13, 7 & \\ \Sp_{12}(2) & 13 & 3, 7 & \\ \hline \begin{array}{c}\Spin^+_{2n}(q),\\ n \geq 4 \end{array} & \p(p,(2n-2)f) & \begin{array}{ll}\p(p,nf), & n \hbox{ odd}\\ \p(p,(n-1)f), & n \hbox{ even} \end{array} & (4,2)\\ \hline \begin{array}{c}\Spin^-_{2n}(q),\\ n \geq 4 \end{array} & \p(p,2nf) & \p(p,(2n-2)f) & (4,2) \\ \hline \tw2 B_2(q^2) & \p(2,8f) & \p(2,8f) & q^2 > 8\\ \hline \tw2 G_2(q^2) & \p(3,12f) & \p(3,12f) & q^2 > 27\\ \hline \tw2 F_4(q^2) & \p(2,24f) & \p(2,12f) & q^2 > 8\\ \hline G_2(q) & \p(p,3f) & \p(p,3f) & q \neq 2,4\\ \hline \tw3 D_4(q) & \p(p,12f) & \p(p,12f) & \\ \hline F_4(q) & \p(p,12f) & \p(p,8f) & \\ \hline E_6(q)_{\SC} & \p(p,9f) & \p(p,8f) & \\ \hline \tw2 E_6(q)_{\SC} & \p(p,18f) & \p(p,8f) & \\ \hline E_7(q)_{\SC} & \p(p,18f) & \p(p,7f) & \\ \hline E_8(q) & \p(p,24f) & \p(p,20f) & \\ \hline \end{array} \] \caption{Special primes for simple groups of Lie type} 
\label{primes} \end{table} \begin{lem}\label{basic} Let $G$ be a finite group, fix $g_1,g_2 \in G$, and let $g \in G$. {\rm (i)} Then $g \in g_1^G \cdot g_2^G$ if and only if $$\sum_{\chi \in \Irr(G)}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)} \neq 0.$$ In particular, $g \in g_1^G\cdot g_2^G$ if $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)} \right| < \left|\sum_{\chi \in \Irr(G),~\chi(1)=1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)} \right|.$$ {\rm (ii)} For $D \in \N$, $$\left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{1}{D}\left(|\bfC_G(g_1)|\cdot|\bfC_G(g_2)|\cdot|\bfC_G(g)|\right)^{1/2}.$$ \end{lem} \pf Part (i) is the well-known result of Frobenius. For (ii), note that $|\chi(g)| \leq |\bfC_G(g)|^{1/2}$ for $\chi \in \Irr(G)$ by the second orthogonality relation for complex characters. By the Cauchy-Schwarz inequality, $$\sum_{\chi \in \Irr(G)}|\chi(g_1)\chi(g_2)| \leq \left(\sum_{\chi \in \Irr(G)}|\chi(g_1)|^2 \cdot \sum_{\chi \in \Irr(G)}|\chi(g_2)|^2\right)^{1/2} = (|\bfC_G(g_1)|\cdot|\bfC_G(g_2)|)^{1/2}.$$ \hal \begin{lem}\label{alt-spor} Theorem \ref{main1} holds for all alternating groups $\A_n$, $5 \leq n \leq 18$, and for all $26$ sporadic finite simple groups. \end{lem} \pf For each of these groups $G$ and for every two primes $p,q$ dividing $|G|$, we verify that each $g \in G$ can be written as a product of two $\{p,q\}'$-elements. We do this by applying Lemma \ref{basic} to the character table of the relevant group. Some of these character tables are available in the Character Table Library of {\sf GAP} \cite{GAP}; the remainder were constructed directly using the {\sc Magma} \cite{Magma} implementation of the algorithm of Unger \cite{Unger}. \hal \begin{prop}\label{alt1} Theorem \ref{main1} holds for $S = \A_n$ if $n \geq 19$. \end{prop} \pf Since $n \geq 19$, there are at least $6$ consecutive integers in the interval $[\lfloor 3n/4 \rfloor, n]$.
In particular, we can find an odd integer $m$ such that $\lfloor 3n/4 \rfloor \leq m < m+4 \leq n$. Suppose now that $N = p^aq^b$. Among $m$, $m+2$, and $m+4$, at most one integer is divisible by $p$, and similarly for $q$. Hence there is some $\ell \in \{m,m+2,m+4\}$ that is coprime to $N$. According to \cite[Corollary 2.1]{B}, each $g \in \A_n$ is a product of two $\ell$-cycles. Since every $\ell$-cycle is an $N$th power in $S$, we are done. \hal \begin{prop}\label{alt2} Given a positive integer $k$, there is some $f(k)$ such that for all $n \geq f(k)$ and for all positive integers $N$ with at most $k$ distinct prime factors, the word map $(x,y) \mapsto x^Ny^N$ is surjective on $S = \A_n$. \end{prop} \pf Choosing $f(k)$ large enough, we see by the prime number theorem that, for every $n \geq f(k)$, the interval $[3n/4,n]$ contains at least $k+1$ distinct primes $p_1, \ldots, p_{k+1}$. Given a positive integer $N$ with at most $k$ distinct prime factors, at least one of the $p_i$'s, call it $\ell$, does not divide $N$, whence all $\ell$-cycles are $N$th powers. Hence the claim follows from \cite[Corollary 2.1]{B}. \hal \begin{lem}\label{real1} If $g$ is a real element of a finite group $G$, then $g$ is a product of two $2$-elements of $G$. \end{lem} \pf By assumption, $g^{-1} = xgx^{-1}$ for some $x \in G$. Replace $x$ by $x^{|x|_{2'}}$ to obtain a $2$-element. Now $$xgxg = x^{2} \cdot x^{-1}gx \cdot g = x^2 \cdot g^{-1} \cdot g = x^2,$$ so $xg$ is a $2$-element as well. Since $g = x^{-1} \cdot xg$, the claim follows. \hal The following is an immediate consequence of Lemma \ref{real1}: \begin{cor}\label{real2} If $G$ is a finite real group and $N$ is an odd integer, then the word map $(x,y) \mapsto x^Ny^N$ is surjective on $G$. \end{cor} \begin{cor}\label{main-real} Let $q = p^f$ be an odd prime power.
Theorem \ref{main1} holds for the following simple groups: \begin{enumerate}[\rm(i)] \item $\PSp_{2n}(q)$ and $\Omega_{2n+1}(q)$, where $n \geq 3$ and $q \equiv 1 \bmod 4$; \item $\PO^+_{4n}(q)$, where $n \geq 3$ and $q \equiv 1 \bmod 4$; \item $\PO^-_{4n}(q)$, where $n \geq 2$; \item $\PO^+_8(q)$, $\O_9(q)$, and $\tw3 D_4(q)$. \end{enumerate} If $N$ is an arbitrary odd integer, then the word map $(x,y) \mapsto x^Ny^N$ is surjective on each of these groups. The same conclusion holds for $G = \Spin^-_{4n}(q)$ with $n \geq 2$, for $G = \Omega^+_{4n}(q)$ with $n \geq 2$ and $q \equiv 1 \bmod 4$, and for $G = \Omega^+_8(q)$. \end{cor} \pf By \cite[Theorem 1.2]{TZ3}, all of these groups $G$ are real, whence the statement follows from Corollary \ref{real2} when $N$ is odd. If $G$ is simple and $N$ is even, then the statement follows from Corollary \ref{prime2}(i). \hal Corollary \ref{main-real} implies that Theorem \ref{main1} holds for many simple symplectic or orthogonal groups over $\F_q$ when $q \equiv 1 \bmod 4$. To handle the groups over $\F_q$ with $q \equiv 3 \bmod 4$ or $2|q$, we use the following result: \begin{prop}\label{ratio} Let $S$ be a non-abelian simple group of Lie type in characteristic $p$. Suppose $N = p^at^b$ with $t \in \cR(S)$, where $\cR(S)$ is defined after Theorem \ref{prime1}. The word map $(x,y) \mapsto x^Ny^N$ is surjective on $G$, where $S = G/\bfZ(G)$ and $G$ is one of the following groups: \begin{enumerate}[\rm(i)] \item $\Sp_{2n}(q)$, where $2|q \geq 8$ and $2 \nmid n \geq 3$; \item $\Sp_{2n}(q)$, where $2 \nmid q \geq 11$ and $2 \nmid n \geq 3$; \item $\O_{2n+1}(q)$, where $2 \nmid q \geq 7$, $2 \nmid n \geq 3$, and $(n,q) \neq (3,7)$; \item $\Omega^\pm_{2n}(q)$, where $n \geq 4$, $q \geq 5$, and $n \neq 5,7$ when $q=5$.
\end{enumerate} \end{prop} \pf By Corollary \ref{prime2}(ii), it suffices to show that $|\cX| < |G|/2$ for $\cX = \cX_p \cup \cX_t$, where $\cX_s$ is the set of all elements of $G$ that have order divisible by $s$ for $s \in \{p,t\}$. We use \cite[Theorem 2.3]{GL} which states that $|\cX_p|/|G| < c(q)$, where $$c(q) := \left\{ \begin{array}{ll}2/(q-1) + 1/(q-1)^2, & G = \Sp_{2n}(q), ~2|q\\ 3/(q-1) + 1/(q-1)^2, & G = \Sp_{2n}(q), ~2 \nmid q \\ 2/(q-1) + 2/(q-1)^2, & G = \Omega_{2n+1}(q) \mbox{ or }\Omega^\pm_{2n}(q). \end{array} \right.$$ (Note that this result applies to $G$ since $\bfZ(G)$ is a $p'$-group.) To estimate $|\cX_t|$, observe that every nontrivial $t$-element $x$ of $G$ is regular semisimple, with $\bfC_G(x)$ being a conjugate $T^g$ of a fixed maximal torus $T$ of $G$. Hence if $y \in \cX_t$ has the $t$-part equal to $x$ then $y \in T^g$. It follows that $$|\cX_t|/|G| < |T|/|\bfN_G(T)|.$$ For cases (i)--(iii), $\bfN_G(T)/T$ contains a cyclic group of odd order $n$. Moreover, since the central involution of the Weyl group of $G$ inverts $T$, cf.\ \cite[Proposition 3.1]{TZ3}, $|\bfN_G(T)/T|$ is even. It follows that $2n$ divides $|\bfN_G(T)/T|$. If in addition $G \neq \O_7(7)$, then $$\frac{|\cX|}{|G|} \leq \frac{|\cX_t|}{|G|} + \frac{|\cX_p|}{|G|} < \frac{1}{2n} + c(q) < 0.49.$$ In case (iv), we may by Corollary \ref{main-real} assume that $n \geq 5$. Note that $T$ is constructed using two kinds of cyclic maximal tori. The first is $$T_1 = \SO^+_2(q^m) \cap G \leq \SO^+_{2m}(q)$$ with $m$ odd, where $$\bfN_{\Omega^+_{2m}(q)}(T_1)/T_1 \cong C_m.$$ The second is $$T_2 = \SO^-_2(q^m) \cap G \leq \SO^-_{2m}(q),$$ where $$\bfN_{\Omega^-_{2m}(q)}(T_2)/T_2 \hookleftarrow C_m.$$ Furthermore, $m \in \{n-1,n\}$. Hence, if $q \geq 7$, or $q = 5$ and $n \geq 9$, then $$\frac{|\cX|}{|G|} \leq \frac{|\cX_t|}{|G|} + \frac{|\cX_p|}{|G|} < \frac{1}{n-1}+ \frac{1}{q-1} + \frac{2}{(q-1)^2} \leq 1/2,$$ as desired. 
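The closing estimates depend only on elementary inequalities in $n$ and $q$. As a sanity check (a purely illustrative sketch over representative parameter values within the stated ranges, not part of the proof), they can be verified numerically:

```python
# Numerical sanity check (not part of the proof) of the estimates
# 1/(2n) + c(q) < 0.49 for cases (i)-(iii), and
# 1/(n-1) + 1/(q-1) + 2/(q-1)^2 <= 1/2 for case (iv).

def c_sp_even(q):   # c(q) for G = Sp_{2n}(q), q even
    return 2 / (q - 1) + 1 / (q - 1) ** 2

def c_sp_odd(q):    # c(q) for G = Sp_{2n}(q), q odd
    return 3 / (q - 1) + 1 / (q - 1) ** 2

def c_orth(q):      # c(q) for G = Omega_{2n+1}(q) or Omega^{+-}_{2n}(q)
    return 2 / (q - 1) + 2 / (q - 1) ** 2

for n in range(3, 40, 2):                         # n odd >= 3
    for q in (8, 16, 32, 64):                     # case (i): q even >= 8
        assert 1 / (2 * n) + c_sp_even(q) < 0.49
    for q in (11, 13, 17, 19, 23):                # case (ii): q odd >= 11
        assert 1 / (2 * n) + c_sp_odd(q) < 0.49
    for q in (7, 9, 11, 13):                      # case (iii): q odd >= 7
        if (n, q) != (3, 7):                      # excluded case O_7(7)
            assert 1 / (2 * n) + c_orth(q) < 0.49
for q, n0 in [(7, 5), (9, 5), (11, 5), (5, 9)]:   # case (iv) parameter ranges
    for n in range(n0, 40):
        assert 1 / (n - 1) + 1 / (q - 1) + 2 / (q - 1) ** 2 <= 0.5
```

Note that the bound for $q = 5$, $n = 9$ in case (iv) is attained with equality, which is why the proof requires only $\leq 1/2$ there.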
If $q = 5$ and $n = 6,8$ then we are done by Corollary \ref{main-real}. \hal \begin{lem}\label{orbit} Let $G$ be a finite group and let $g \in G$. If $\cO$ is a $\Gal(\overline\Q/\Q)$-orbit on $$\{ \chi \in \Irr(G) \mid \chi(g) \neq 0\},$$ then $$\sum_{\chi \in \cO}|\chi(g)|^2 \geq |\cO|.$$ In particular, $$|\{ \chi \in \Irr(G) \mid \chi(g) \neq 0\}| \leq |\bfC_G(g)|.$$ \end{lem} \pf Note that $\prod_{\chi \in \cO}\chi(g)$ is a nonzero algebraic integer fixed by $\Gal(\overline\Q/\Q)$, whence it is a nonzero integer. The inequality between the arithmetic and geometric means implies that $$\sum_{\chi \in \cO}|\chi(g)|^2 \geq |\cO| \cdot |\prod_{\chi \in \cO}\chi(g)|^{2/|\cO|} \geq |\cO|.$$ Let $\cO_1, \ldots, \cO_t$ denote all of the distinct $\Gal(\overline\Q/\Q)$-orbits on $\{ \chi \in \Irr(G) \mid \chi(g) \neq 0\}$. The first statement implies that $$|\bfC_G(g)| = \sum_{\chi \in \Irr(G),~\chi(g) \neq 0}|\chi(g)|^2 = \sum^t_{i=1}\sum_{\chi \in \cO_i}|\chi(g)|^2 \geq \sum^t_{i=1}|\cO_i|.$$ \hal \begin{rem}\label{general} {\rm Some natural generalizations of Theorem \ref{main1} are false.} {\rm (i)} It is not true that for every $N = p^aq^b$ the word map $(x,y) \mapsto x^Ny^N$ is always surjective on every quasisimple group $G$, or at least hits all the non-central elements of $G$. {\rm For instance, if $N = 20$, then this map does not hit any element of order $5$ in $G = \SL_2(5)$ (indeed, $x^{20}$ has order $1$ or $3$ in $G$, and if $x \in G$ has order $3$ then $\{1\} \cup x^G \cup x^G \cdot x^G$ does not contain any element of order $5$ of $G$).} \smallskip {\rm (ii)} It is not true that for every odd integer $N$ the word map $(x,y) \mapsto x^Ny^N$ is always surjective on every non-abelian simple group $G$. {\rm For instance, consider a prime power $q > 3$ where $q \equiv 3 \bmod 8$ and set $G := \PSL_2(q)$ and $N := q(q^2-1)/8$. Note that $x^N$ has order $1$ or $2$ for every $x \in G$.
It follows that every element of $G$ that is hit by the word map $(x,y) \mapsto x^Ny^N$ is either an involution or a product of two involutions, so it is real. On the other hand, the nontrivial unipotent elements of $G$ are not real. The same arguments show that the word map $(x,y) \mapsto x^Ny^N$ is not surjective on the Ree group $G = \tw2 G_2(q)$, if $q = 3^{2a+1} > 3$ and $N = |G|_{2'}$. It is an open question whether these two families of simple groups exhaust all the simple groups $G$ on which the word map $(x,y) \mapsto x^Ny^N$ is not surjective for some odd $N$.} \end{rem} \begin{exa}\label{bounded} {\em By \cite[Corollary 4.2]{AST}, there are infinitely many primes $p$ such that $\O(p^2-1) \leq 21$. For every such prime $p$, the exponent of $\PSL_2(p)$ divides $N_p := p(p^2-1)$, so the word map $(x,y) \mapsto x^{N_p}y^{N_p}$ cannot be surjective on $\PSL_2(p)$ (its image consists only of the identity element); on the other hand, $\pi(N_p) \leq \O(N_p) \leq 22$. Thus neither Theorem \ref{main3} nor Theorem \ref{main4} holds for finite simple groups of Lie type and bounded rank.} \end{exa} \section{Centralizers of unbreakable elements} \subsection{Symplectic and orthogonal groups} \begin{defn}\label{unbr-spo} {\em Let $\Cl(V) = \Sp(V)$ or $\O(V)$ be a finite symplectic or orthogonal group. An element $x$ of $\Cl(V)$ is {\it breakable} if there is a proper, nonzero, non-degenerate subspace $U$ of $V$ such that $x = x_1x_2 \in \Cl(U) \times \Cl(U^\perp)$ (with $x_1 \in \Cl(U)$, $x_2 \in \Cl(U^\perp)$), and either (i) $\Cl(U)$ and $\Cl(U^\perp)$ are both perfect, or (ii) $\Cl(U^\perp)$ is perfect and $x_1 = \pm 1_{U}$. \noindent Otherwise, $x$ is {\it unbreakable}.} \end{defn} \begin{lemma}\label{spunb} Let $G = \Sp_{2n}(q) = \Sp(V)$ with $n\ge 2$, and assume that $n\ge 4$ if $q=3$ and that $n\ge 7$ if $q=2$. If $x \in G$ is unbreakable, then $|\bfC_G(x)|\le N$ where $N$ is as in Table $\ref{spbd}$.
\end{lemma} \begin{table}[htb] \[ \begin{array}{l|l|l} \hline n & q & N \\ \hline \hbox{odd} & q>3, q\hbox{ odd} & q^{2n-1}(q^2-1) \\ & q>3, q\hbox{ even} & 2q^{2n}(q+1) \\ & q=3 & 24 \cdot 3^{2n-2} \\ \hbox{even} & q>3, q\hbox{ odd} & 2q^n \\ & q>3, q\hbox{ even} & q^{2n}(q^2-1) \\ & q=3 & 48\cdot 3^{2n+1} \\ \hbox{any} & q=2 & 9\cdot 2^{2n+9} \\ \hline \end{array} \] \caption{Upper bounds for symplectic groups} \label{spbd} \end{table} \pf Assume first that $x$ is unipotent and $q$ is odd. By \cite[3.12]{LSei}, $V\downarrow x$ is an orthogonal sum of non-degenerate subspaces of the form $W(m)$ and $V(2m)$, where $x$ acts on $W(m)$ as $J_m^2$, the sum of two Jordan blocks of size $m$, and on $V(2m)$ as $J_{2m}$. Moreover, if $m$ is even then $W(m) \cong V(m)^2$ as $x$-modules. For $q>3$ the symplectic group $\Sp(V(m))$ is perfect for every $m\ge 2$, so the unbreakability of $x$ implies that $V\downarrow x$ is either $W(n)$ with $n$ odd, or $V(2n)$. The corresponding orders of $\bfC_G(x)$ are given by \cite[7.1]{LSei}, and the largest are those in Table \ref{spbd} for $q>3$ odd. If $q=3$ then $\Sp_2(3)$ is not perfect, so there are more unbreakable possibilities for $x$: \[ \begin{array}{l|l} \hline V\downarrow x & |\bfC_G(x)| \\ \hline V(2n) & 2\cdot 3^n \\ V(2n-2)+V(2) & 4\cdot 3^{n+2} \\ W(n) \,(n\hbox{ odd}) & 24\cdot 3^{2n-2} \\ W(n-1)+V(2)\,(n\hbox{ even}) & 48\cdot 3^{2n+1} \\ \hline \end{array} \] Again, the values of $|\bfC_G(x)|$ are given by \cite[7.1]{LSei}, and the largest are those in Table \ref{spbd}. Next assume $x$ is unipotent and $q$ is even. Again, $V\downarrow x$ is an orthogonal sum of non-degenerate subspaces of the form $W(m)$ and $V(2m)$ (see \cite[Chapter 6]{LSei}). If $q\ge 4$, the unbreakability of $x$ implies that $V\downarrow x$ is either $W(n)$ or $V(2n)$. The corresponding orders of $\bfC_G(x)$ are given by \cite[7.3]{LSei}, and the largest are those in Table \ref{spbd} for $q>2$ even. 
If $q=2$ then neither $\Sp_2(2)$ nor $\Sp_4(2)$ is perfect, so for $n\ge 7$, the possible $V \downarrow x$ for unbreakable $x$ are of the form $X+Y$, where $X = W(n-k)$ or $V(2n-2k)$ and $Y = W(k)$, $V(2k)$ or $V(2)^k$ for some $k\le 2$. By \cite[7.3]{LSei}, the largest centralizer order occurs for $W(n-2)+W(2)$, and is at most $9\cdot 2^{2n+9}$, as in Table \ref{spbd}. Now suppose $x$ is not unipotent and write $x=su$ with semisimple part $s$ and unipotent part $u$. If $s \in \bfZ(G)$ then the argument for the unipotent case above applies, so assume $s \not \in \bfZ(G)$. Then \[ \bfC_G(s) = \Sp_{2r}(q) \times \Sp_{2t}(q) \times \prod \GL_{a_i}^{\e_i}(q^{b_i}), \] where $2r,2t$ are the dimensions of the $1$- and $(-1)$-eigenspaces of $s$ (with $t=0$ for $q$ even), and $r+t+2\sum a_ib_i = n$. If $q>3$ then the unbreakability of $x$ implies that $r=t=0$ and $a_1b_1 = n$; write $a=a_1,b=b_1$. Moreover, in $\bfC_G(s) = \GL_{a}^{\e}(q^{b})$, $u$ must be a single Jordan block $J_{a}$. So from \cite[7.1]{LSei}, $|\bfC_G(x)| = |\bfC_{\bfC_G(s)}(u)| = (q^b-\e)q^{b(a-1)} \le q^n+1$, giving the result in this case. Now consider $q=3$. As $x$ is unbreakable, either $2r$ or $2t$ is equal to $2n-2$, or $a_1b_1 \in \{n-1,n\}$. In the former case, $u = u_1u_2 \in \bfC_G(s) = \Sp_{2n-2}(3) \times H$ with $H = \Sp_2(3)$ or $\GU_1(3)$, and unbreakability forces $V_{2n-2} \downarrow {u_1}$ to be $W(n-1)$ ($n$ even) or $V(2n-2)$. Now \cite[7.1]{LSei} shows that $|\bfC_G(x)|$ is less than the bound in Table \ref{spbd}. In the latter case $u = u_1u_2 \in \bfC_G(s) = \GL_a^\e(q^b) \times H$ with either $ab=n$, $H=1$, or $ab=n-1$, $H \in \{ \Sp_2(3), \GU_1(3)\}$. If $ab=n$, unbreakability forces $u_1$ to be $J_a$ or $(J_{a-1},J_1)$; likewise if $ab=n-1$, then $u_1 = J_a$. In either case $|\bfC_G(x)|$ is less than the bound in Table \ref{spbd}. Finally, suppose $q=2$. Here unbreakability forces either $r \ge n-2$ or $a_1b_1 \ge n-2$.
If $r\ge n-2$ then $u = u_1u_2 \in \bfC_G(s) = \Sp_{2r}(2) \times H$ with $H \le \Sp_{2n-2r}(2)$, and $V_{2r}\downarrow u_1$ is $V(2r)$, $W(r)$, or $V(2n-4)+V(2)$ ($r=n-1$) or $W(n-2)+V(2)$ ($r=n-1$). The largest possible value of $|\bfC_G(x)|$ is less than the value $9\cdot 2^{2n+9}$ in Table \ref{spbd}. If $a_1b_1 = n-k\ge n-2$ then, writing $a=a_1,b=b_1$, we see that $u = u_1u_2 \in \Sp_{2k}(2) \times \GL_a^\e(q^b)$. The largest value of $|\bfC_G(x)|$ occurs when $a=n, b=1, \e=-1$ and $u = u_2 = (J_{n-2},J_1^2)$; here $\bfC_G(x) = \bfC_{\GU_n(2)}(u)$ again has order less than the bound in Table \ref{spbd}. \hal \begin{lemma}\label{orthogunb} Let $G = \O(V) = \O_{2n}^\e(q)$ ($n\ge 4$) or $G = \O(V)= \O_{2n+1}(q)$ ($n\ge 3$, $q$ odd), and assume further that $\dim V \ge 13$ if $q\le3$. If $x \in G$ is unbreakable, then $|\bfC_G(x)|\le M$, where $M$ is as in Table $\ref{orthogbd}$. \end{lemma} \begin{table}[htb] \[ \begin{array}{l|l} \hline q & M \\ \hline q>3 & q^{2n-2}(q+1)^2 \\ q=2 & 3\cdot 2^{2n+6} \\ q=3 & 2^6\cdot 3^{2n+4}\,(\dim V = 2n) \\ & 2^4\cdot 3^{2n+3}\,(\dim V = 2n+1) \\ \hline \end{array} \] \caption{Upper bounds for orthogonal groups} \label{orthogbd} \end{table} \pf First consider the case where $q\geq 4$ is even, so $G = \O_{2n}^\e(q)$. Assume $x$ is unipotent. By \cite[Chapter 6]{LSei}, $V \downarrow x$ is an orthogonal sum of non-degenerate subspaces of the form $V(2k)$ (a single Jordan block $J_{2k} \in \GO_{2k}^\e(q)\setminus \O_{2k}^\e(q)$) and $W(k)$ (two singular Jordan blocks $J_k^2 \in \O_{2k}^+(q)$). Since $x$ is unbreakable, $V\downarrow x$ is $W(n)$ or $V(2n-2k)+V(2k)$ for some $k$. The order of $\bfC_G(x)$ is given by \cite[7.1]{LSei}, and the largest value occurs for $W(n)$. It is $q^{2n-3}|\Sp_2(q)|$ for $n$ even, and $q^{2n-2}|\SO_2^\pm(q)|$ for $n$ odd; the former is less than the bound in Table \ref{orthogbd} for $q>3$. 
If $x = su$ is non-unipotent with semisimple part $s$ and unipotent part $u$, then $\bfC_G(s) = \O_{2k}^\d(q) \times \prod \GL_{a_i}^{\e_i}(q^{b_i})$ with $2k = \dim \bfC_V(s)$ and $k+\sum a_ib_i = n$. As each $\GL_{a_i}^{\e_i}(q^{b_i}) \le \O_{2a_ib_i}(q)$, the unbreakability of $x$ implies that either $k \ge n-1$ or $a_1b_1 \ge n-1$. In the former case $u = u_1u_2 \in \bfC_G(s) = \O_{2n-2}^\d(q) \times \GL_1^\nu(q)$, and as in the previous paragraph $|\bfC_{ \O_{2n-2}^\d(q)}(u_1)|$ is at most $q^{2n-5}|\Sp_2(q)|$, which gives the conclusion. In the latter case $u = u_1u_2 \in \bfC_G(s) = \O_{2k}^\d(q) \times \GL_a^\nu(q^b)$ with $k\le 1$ and $ab = n-k$, and unbreakability forces $u_2 \in \GL_a^\nu(q^b)$ to be either $J_a$, or $(J_{a-1},J_1)$ with $a=n, b=1$. Then $\bfC_G(x) = \bfC_{\bfC_G(s)}(u)$ has smaller order than the bound in Table \ref{orthogbd}. Now consider the case where $q \ge 5 $ is odd. For $x$ unipotent, $V \downarrow x$ is an orthogonal sum of non-degenerate spaces $W(2k)$ (namely, $J_{2k}^2 \in \O_{4k}^+(q)$) and $V(2k+1)$ (namely, $J_{2k+1} \in \O_{2k+1}(q)$). The unbreakability of $x$ implies that $V\downarrow x = W(n)$ or $V(2n+1)$, giving the conclusion by \cite[7.1]{LSei}. For $x=su$ non-unipotent, write \[ \bfC_G(s) = (\O_a(q) \times \O_b(q) \times \prod \GL_{a_i}^{\e_i}(q^{b_i})) \cap G, \] where $a = \dim \bfC_V(s), b=\dim \bfC_V(-s)$ and $a+b+\sum 2a_ib_i = \dim V$. As $\GL_r^\e(q) \le \SO_{2r}(q)$ and $s$ has determinant one, $b$ is even. If $a \ne 0$ then $V_a \downarrow u$ is either $W(2k)$ or $V(2k+1)$ and $x$ is breakable. Hence $a=0$. Moreover, $-1 \in \O_{4k}^+(q)$ (see \cite[2.5.13]{KL}), so if $u_0$ is a unipotent element of type $W(2k)$, then $-u_0 \in \O_{4k}^+(q)$. Hence by unbreakability, if $b\ne 0$ then either $b=\dim V$ and $V_b\downarrow u = W(n)$, or $V_b \downarrow u$ is a sum of an even number of spaces $V(2k_i+1)$. The former case satisfies the conclusion as above, so assume the latter holds. 
If there are more than two of the spaces $V(2k_i+1)$, then there exist $i,j$ such that the discriminant of $V(2k_i+1)+V(2k_j+1)$ is a square; if $u_1$ is the projection of $u$ to this space then $-u_1 = -(J_{2k_i+1},J_{2k_j+1}) \in \O_{2k_i+2k_j+2}(q)$, contradicting unbreakability. Hence either $b=0$ or $V_b \downarrow u$ is a sum of two spaces $V(2k_i+1)$. Likewise, the projection of $u$ to a factor $\GL_{a_i}^{\e_i}(q^{b_i})$ has at most two Jordan blocks; here, the only extra point to note is that if $b_i = 1$ and there are three blocks $J_1,J_k,J_l$ with the projection of $s$ to the $J_1$ block giving an element of $\O_2(q)$, then the projection of $s$ to the other blocks gives elements of $\O_{2k}(q),\O_{2l}(q)$, and $x$ is breakable. It follows from all these observations together with \cite[7.1]{LSei} that the largest value of $|\bfC_G(x)|$ occurs when either $b=\dim V$ and $V\downarrow u = V(n)^2$ ($n$ odd), or $\bfC_G(s) = \GU_n(q)\cap G$ and $u = (J_{n/2}^2) \in \GU_n(q)$ ($n$ even). In either case $|\bfC_G(x)| \le q^{2n-2}(q+1)^2$, as in Table \ref{orthogbd}. Next suppose $q=3$. Following the proof of the $q=3$ case of \cite[5.15]{ore}, for $\dim V = 2n$ the largest possibility for $|\bfC_G(x)|$ is as in Table \ref{orthogbd}, and arises when $x$ is unipotent and $V\downarrow x = W(2)+W(n-2)$; note that the larger bound given in \cite[5.15]{ore} occurs when $x = -u$ with $V\downarrow u = V(1)^4+W(n-2)$, but this element $x$ is breakable according to our definition (which is different from the definition in \cite{ore}). For $\dim V = 2n+1$ the largest value of $|\bfC_G(x)|$ is as in \cite[5.15]{ore}. Finally, if $q=2$ the proof of \cite[5.15]{ore} gives the bound in Table \ref{orthogbd}. \hal \begin{lemma}\label{blocks} {\rm (i)} Let $q=2$ or $3$, and let $G = \Sp(V)$ or $\O(V)$ with the assumptions on $\dim V$ as in Lemmas $\ref{spunb}$ and $\ref{orthogunb}$.
Let $\bar V = V \otimes_{\F_q} \bar \F_q$ and let $\a \in \bar \F_q$ satisfy either $\a^{q-1}=1$ or $\a^{q+1}=1$. If $x \in G$ is unbreakable, then $\dim {\rm Ker}_{\bar V}(x-\a I) \le 4$. {\rm (ii)} Let $q=5$ and let $G = \O(V) = \O^{\pm}_{2n}(5)$ with $n\ge 5$. Let $\bar V = V \otimes_{\F_q} \bar \F_q$ and let $\a \in \bar \F_q$ satisfy $\a^{q-1}=1$ or $\a^{q+1}=1$. If $x \in G$ is unbreakable, then $\dim {\rm Ker}_{\bar V}(x-\a I) \le 2$. \end{lemma} \pf (i) For $\a = \pm 1$, the assertion is that the number of unipotent Jordan blocks of $\pm x$ is at most $4$, and this follows from the proofs of Lemmas \ref{spunb} and \ref{orthogunb}. In the other case, $\a$ has order $q+1$. A Jordan block of $x$ on $\bar V$ with eigenvalue $\a$ and dimension $k$ corresponds to a non-degenerate subspace $W$ of $V$ of dimension $2k$ such that $x^W$ lies in $\Sp(W)$ or $\SO(W)$. Hence the unbreakability of $x$ implies that there can be no more than four such blocks. \vspace{2mm} (ii) If $x = \pm u$ with $u$ unipotent, then the proof of Lemma \ref{orthogunb} (for the case where $q \ge 5$ is odd) shows that $V\downarrow u$ is $W(n)$ or $V(2k_1+1)+V(2k_2+1)$ for some $k_1,k_2$, giving the result in this case. Now suppose $x=su$ with semisimple part $s \ne \pm 1$, and let $\bfC_G(s) = \O_a(5) \times \O_b(5) \times \prod \GL_{a_i}^{\e_i}(5^{b_i})$ as in Lemma \ref{orthogunb}. That proof shows that $a=0$, $b$ is even, $V_b\downarrow u$ is the sum of zero or two odd-dimensional spaces $V(2k_i+1)$, and the projection of $u$ to each factor $\GL_{a_i}^{\e_i}(5^{b_i})$ has at most 2 Jordan blocks. The conclusion of (ii) follows. \hal \subsection{Linear and unitary groups} \begin{defn}\label{br-slu} {\em (i) An element of the general linear group $\GL_n(2)$ is {\it breakable} if it lies in a natural subgroup of the form $\GL_a(2) \times \GL_b(2)$ where $a+b=n$, $1\le a\le b$ and $a,b \ne 2$.
(ii) An element of the unitary group $\GU_n(2)$ is {\it breakable} if it lies in a natural subgroup of the form $\GU_a(2) \times \GU_b(2)$ where $a+b=n$, $1\le a\le b$ and $a,b \ne 2, 3$. (iii) An element of the general linear or unitary group $\GL^\e_n(3)$ is {\it breakable} if it lies in a natural subgroup of the form $\GL^\e_a(3) \times \GL^\e_b(3)$ where $a+b=n$, $1\le a\le b$ and $a,b \ne 2$. (iv) If $q \geq 4$, then an element of $\GL^\e_n(q)$ is {\it breakable} if it lies in a natural subgroup of the form $\GL^\e_a(q) \times \GL^\e_b(q)$ where $a+b=n$ and $1\le a\le b$.} \end{defn} If $q \geq 4$ and $x \in G = \GL^\e_n(q)$ is unbreakable, then \begin{equation}\label{cent-slu} |\bfC_G(x)| \leq \left\{ \begin{array}{ll}q^n-1, & \e = +,\\ q^{n-1}(q+1), & \e = -, \end{array} \right. \end{equation} (cf.\ \cite[Lemma 6.7]{ore} for the case $\e = -$). \begin{lemma}\label{LUL-sl2} If $n\ge 7$ and $x \in G=\GL_n(2)$ is unbreakable, then either {\rm (i)} $|\bfC_G(x)| \le 2^{n+2}$, or {\rm (ii)} $|\bfC_G(x)| = 9 \cdot 2^n$, $2|n$, and $x \in \GL_{n/2}(4)$. \end{lemma} \pf Suppose first that $x$ is unipotent. As it is unbreakable, $x$ has Jordan form $J_n$, or $J_{n-2}+J_2$. The order of $\bfC_G(x)$ is given by \cite[7.1]{LSei}, and the maximum possible order is $2^{n+2}$, which occurs in the last case. Now assume that $x = su$ where $s \ne 1$ is the semisimple part and $u$ the unipotent part of $x$. Then \[ \bfC_G(s) = \prod_i \GL_{a_i}(2^{b_i}), \] where $\sum a_ib_i = n$. Moreover, since $x \in \bfC_G(s)$ is unbreakable, we may assume $a_1b_1 \in \{n,n-2\}$, and write $a=a_1,b=b_1$. If $ab=n$ then $b\geq 2$. A Jordan block $J_c$ of $u$ as an element of $\GL_a(2^b)$ lies in a natural subgroup $\GL_{cb}(2)$, so the unbreakability of $x$ forces the Jordan form of $u$ in $\GL_a(2^b)$ to be $J_a$ or $J_{a-1}+J_1$ (with $b=2$ in the latter case). 
By \cite[7.1]{LSei}, $|\bfC_G(x)| = |\bfC_{\GL_a(2^b)}(u)|$ is $2^{b(a-1)}(2^b-1) < 2^n$ in the former case, and it is $2^{ab}\cdot |\GL_1(2^b)|^2 = 9 \cdot 2^n$ in the latter case; in that case also $2|n$ and $x \in \bfC_G(s) = \GL_{n/2}(4)$. If $ab = n-2$, then $\bfC_G(s) \leq \GL_a(2^b) \times \GL_2(2)$ and the Jordan form of $u$ in the first factor must be $J_a$, whence \[ |\bfC_G(x)| \le 2^{b(a-1)}|\GL_1(2^b)||\GL_2(2)| = (2^{n-2}-2^{n-2-b})\cdot 6 < 2^{n+2}, \] giving the result in this case. \hal \begin{lemma}\label{LUL} If $x \in G=\GU_n(2)$ is unbreakable, then $|\bfC_G(x)| \le 2^{n+4}\cdot 3^2$ if $n \geq 10$ and $|\bfC_G(x)| \leq 2^{48}$ if $n = 9$. \end{lemma} \pf (i) Consider the case $n \geq 10$. Suppose first that $x$ is unipotent. As it is unbreakable, $x$ has Jordan form $J_n$, $J_{n-2}+J_2$ or $J_{n-3}+J_3$. The order of $\bfC_G(x)$ is given by \cite[7.1]{LSei}, and the maximum possible order is $2^{n+4}\cdot 3^2$, which occurs in the last case. Suppose that $x = su$ where $s \ne 1$ is the semisimple part and $u$ the unipotent part of $x$. If $s \in \bfZ(G)$ then the argument of the previous paragraph applies. If $s \not \in \bfZ(G)$, then \[ \bfC_G(s) = \prod \GU_{a_i}(2^{b_i}) \times \prod \GL_{c_i}(2^{2d_i}) \le \prod \GU_{a_ib_i}(2) \times \prod \GU_{2c_id_i}(2), \] where $\sum a_ib_i + 2\sum c_id_i = n$, and all $b_i$ are odd. Moreover, since $x \in \bfC_G(s)$ is unbreakable, either $a_1b_1$ or $2c_1d_1$ lies in the set $\{n,n-2,n-3\}$. Suppose $a_1b_1 \in \{n,n-2,n-3\}$, and write $a=a_1,b=b_1$. If $ab=n$ then $b > 1$ since $s \not \in \bfZ(G)$, so $b\ge 3$ (as $b$ is odd). A Jordan block $J_c$ of $u$ as an element of $\GU_a(2^b)$ lies in a natural subgroup $\GU_{cb}(2)$, so the unbreakability of $x$ forces the Jordan form of $u$ in $\GU_a(2^b)$ to be $J_a$ or $J_{a-1}+J_1$ (with $b=3$ in the latter case).
By \cite[7.1]{LSei}, the largest possible value of $|\bfC_G(x)| = |\bfC_{\GU_a(2^b)}(u)|$ occurs in the latter case, and is $2^{ab}\cdot |\GU_1(2^b)|^2 = 2^n\cdot 9^2$, proving the result in this case. If $ab = n-2$, then $\bfC_G(s) = \GU_a(2^b) \times \GU_2(2)$ and the Jordan form of $u$ in the first factor must be $J_a$, whence \[ |\bfC_G(x)| \le 2^{b(a-1)}|\GU_1(2^b)||\GU_2(2)| = (2^{n-2}+2^{n-2-b})\cdot 18 < 2^n\cdot 3^2, \] giving the result in this case. Similarly, if $ab=n-3$ then \[ \begin{array}{ll} |\bfC_G(x)| & \le |\bfC_{\GU_a(2^b)}(J_a)||\GU_3(2)| = 2^{b(a-1)}(2^b+1)\cdot 2^33^4 \\ & = (2^{n-3}+2^{n-3-b})\cdot 2^33^4 < 2^{n+4}\cdot 3^2. \end{array} \] Now suppose $2c_1d_1 \in \{n,n-2,n-3\}$, and write $c=c_1,d=d_1$. If $d=1$ then the projection of $s$ in $\GL_c(2^{2d})$ is a central element of order 3 which is central in a natural subgroup $\GU_{2c}(2)$, so $\bfC_G(s)$ has a factor $\GU_{2c}(2)$ rather than $\GL_c(2^2)$. Hence $d>1$. As above, the unbreakability of $x$ forces $u$ to have Jordan form $J_c$ as an element of $\GL_c(2^{2d})$. Hence \[ |\bfC_G(x)| \le |\bfC_{\GL_c(2^{2d})}(J_c)|\cdot |\GU_{n-2cd}(2)|, \] which is a maximum when $2cd = n-3$, in which case $|\bfC_G(x)| \le 2^{2d(c-1)}(2^{2d}-1)\cdot |\GU_3(2)|$ which is less than $2^n\cdot 3^4$. This completes the proof. \smallskip (ii) Suppose now that $n = 9$. Assume first that $x = su$ where $s \in \bfZ(G)$ and $u$ is unipotent. As $x$ is unbreakable, $u$ has Jordan form $J_9$, $J_7+J_2$, $J_6+J_3$ or $J_3^3$. The largest centralizer is that of $J_3^3$, which has order $2^{18}|\GU_3(2)|$, less than $2^{48}$. Now suppose $x=su$ with semisimple part $s \not \in \bfZ(G)$. Then $\bfC_G(s)$ is as described above. Assuming that $|\bfC_G(x)|\ge 2^{48}$, the only possibility is that $\bfC_G(s) = \GU_7(2)\times \GU_2(2)$ (note that $\GU_8(2)\times \GU_1(2)$ is not possible as this would imply that $x$ is breakable).
If $|\bfC_G(x)| = |\bfC_{\bfC_G(s)}(u)|\ge 2^{48}$, then $u$ projects to the identity in $\GU_7(2)$; but then $x$ is breakable, a contradiction. \hal \begin{lemma}\label{LUL3} If $n\ge 7$ and $x \in G=\GL^\e_n(3)$ is unbreakable, then $|\bfC_G(x)| \le 3^{n+2}\cdot 2^4$. \end{lemma} \pf For $x$ unipotent the largest centralizer occurs when $x = (J_{n-2},J_2)$ and has order $3^{n+2}\cdot 2^4$ by \cite[7.1]{LSei}. Suppose $x = su$ is non-unipotent. If $s \in \bfZ(G)$ the bound of the previous paragraph applies, so assume $s \not \in \bfZ(G)$. The possibilities for $\bfC_G(s)$ are: \[ \begin{array}{l} \e = +:\; \bfC_G(s) = \prod \GL_{a_i}(3^{b_i}) \\ \e=-:\; \bfC_G(s) = \prod \GU_{a_i}(3^{b_i}) \times \prod \GL_{c_i}(3^{2d_i}) \end{array} \] where $\sum a_ib_i = n$ for $\e=+$, and $\sum a_ib_i + 2\sum c_id_i = n$ and all $b_i$ are odd for $\e=-$. As in the previous proof, the unbreakability assumption implies that $a_1b_1 \in \{n-2,n\}$ for $\e=+$, and either $a_1b_1$ or $2c_1d_1$ is in $\{n-2,n\}$ for $\e=-$. Now we argue as in the previous lemma that none of the possibilities for $u \in \bfC_G(s)$ gives a larger centralizer order than $3^{n+2}\cdot 2^4$. \hal \section{Theorem \ref{main1} for linear and unitary groups} \subsection{General inductive argument} Recall $\cR(S)$ from \S2, and the notion of unbreakability from Definition \ref{br-slu}. \begin{defn}\label{cond-slu} {\em Let $q = p^f$ be a prime power, let $\e = \pm$, and let $N = p^at^b$ be an integer, where $t \nmid (q-\e)$ is a prime.
We say that $G = \GL^\e_n(q)$ satisfies {\rm (i)} the condition $\sP(N)$ if every $g \in G$ can be written as $g=x^Ny^N$ for some $x,y \in G$ with $x^N \in \SL^\e_n(q)$; and {\rm (ii)} the condition $\sP_u(N)$ if every {\it unbreakable} $g \in G$ can be written as $g=x^Ny^N$ for some $x,y \in G$ with $x^N \in \SL^\e_n(q)$.} \end{defn} First we prove an extension of Theorem \ref{prime1} for $\GL^\e_n(q)$: \begin{prop}\label{slu-generic} Let $G = \GL^\e_n(q)$ with $n \geq 4$, $q = p^f$, and let $t \nmid p(q-\e)$ be a prime not contained in $\cR(\SL^\e_n(q))$. Then $\sP(N)$ holds for $G$ and for all $N = p^at^b$. \end{prop} \pf (i) First we consider the generic case: $\cR(\SL^\e_n(q)) = \{r,s_1=s_2\}$ and $r$ and $s=s_1=s_2$ are listed in Table \ref{primes}. In particular, $r = \p(q,n)$ and $s_1 = \p(q,n-1)$ when $\e = +$. When $\e = -$, interchanging $r$ and $s$ if necessary, we may assume that $r$ divides $q^n-\e^n$ but not $\prod^{n-1}_{i=1}(q^i-\e^i)$ (so $r$ is a primitive prime divisor of $(\e q)^n-1$), and similarly, $s$ divides $q^{n-1}-\e^{n-1}$ but not $\prod_{1 \leq i \leq n,~i \neq n-1}(q^i-\e^i)$. Since $N$ is coprime to $q-\e$, every central element of $G$ can be written as an $N$th power. So it suffices to prove $\sP(N)$ for every non-central $g \in G$. Fix a regular semisimple $g_1 \in G$ of order $r$, in particular $\det(g_1) = 1$, and a regular semisimple $h \in \GL^\e_{n-1}(q)$ of order $s$. We can choose $d\in \GL^\e_1(q)$ such that $\det(g_2) = \det(g)$ for $g_2 := \diag(h,d)$. Since both $g_1$ and $g_2$ have order coprime to $N$, it suffices to show that $g \in g_1^G \cdot g_2^G$. To this end we apply Lemma \ref{basic}(i). Consider a character $\chi \in \Irr(G)$ with $\chi(g_1)\chi(g_2) \neq 0$. It follows that $\chi$ is neither of $r$-defect $0$ nor of $s$-defect $0$. On the other hand, the order of the centralizer of every non-central semisimple element of $\GL^\e_n(q)$ is either coprime to $r$ or coprime to $s$.
Hence the Lusztig classification of irreducible characters of $G$ \cite{DL} implies that $\chi$ belongs to the rational series $\cE(G,(z))$ labeled by a central semisimple $z \in G^* \cong G$. It follows that $\chi = \lambda\psi$, where $\lambda(1) = 1$ and $\psi$ is a unipotent character of $G$. Moreover, as shown in the proof of \cite[Theorems 2.1--2.2]{MSW}, $\psi$ is either $1_G$ or $\St$, the Steinberg character of $G$. Since $\det(g_1) = 1$ and $\det(g_2) = \det(g)$ by our choice, $\lambda(g_1) = 1$ and $\lambda(g_2)\overline\lambda(g) = 1$ for all linear $\lambda \in \Irr(G)$. Finally, since $g \notin \bfZ(G)$ and $|\St(g_i)| = 1$, $$\sum_{\chi \in \Irr(G)}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)} = (q-\e)\left(1+\frac{\St(g)}{\St(1)}\right) > 0,$$ so we are done. \smallskip (ii) The same arguments apply to the non-generic cases $$(n,q,\e) = (4,4,+),~(6,4,-),(7,4,-),$$ if we choose $\cR(\SL^\e_n(q))$ to be $\{17,7\}$, $\{41,7\}$, or $\{113,7\}$, respectively. In the remaining cases $$(n,q,\e) = (6,2,+),~(7,2,+),~(4,2,-),$$ the statement follows from \cite[Lemma 2.12]{GT3} if we choose $\cR(\SL^\e_n(q))$ to be $\{31\}$, $\{127\}$, or $\{5\}$, respectively (note that $\GU_4(2) \cong C_3 \times \SU_4(2)$). \hal Our proof of Theorem \ref{main1} for linear and unitary groups relies on the following inductive argument: \begin{prop}\label{ind-slu} Fix a prime power $q = p^f$, an integer $n \geq 4$, and $\e = \pm$. Suppose that there is an integer $n_0 \geq 3$ such that the following statements hold: \begin{enumerate}[\rm(i)] \item Let $1 \leq k \leq n_0$ with $k \neq 2$ if $q = 2,3$, and $k \neq 3$ if $(q,\e) = (2,-)$. Then $\sP_u(N)$ holds for $\GL^\e_k(q)$ for every $N = p^at^b$ with $t$ prime and $t \nmid p(q-\e)$. \item For each $k$ with $n_0 < k \leq n$, $\sP_u(N)$ holds for $\GL^\e_k(q)$ and for every $N = p^at^b$ with $t \in \cR(\SL^\e_k(q))$. 
\end{enumerate} If $N = s^at^b$ for some primes $s,t$, then the word map $(u,v) \mapsto u^Nv^N$ is surjective on $\PSL^\e_n(q)$. \end{prop} \pf By Corollary \ref{prime2}, we need to consider only the case $N = p^at^b$ with $t \in \cR(\SL^\e_n(q))$; in particular, $t \nmid (q-\e)$. It suffices to show $\sP(N)$ holds for $G := \GL^\e_n(q)$ and this choice of $N$. Indeed, in this case every $g \in \SL^\e_n(q)$ can be written as $x^Ny^N$ with $\det(x^N) = \det(y^N) = 1$. Since $\gcd(N,q-\e) = 1$, it follows that $x, y \in \SL^\e_n(q)$. By (ii), $\sP_u(N)$ holds for $G$. Consider a breakable $g \in G$ and write it as $\diag(g_1, \ldots ,g_m)$ lying in the natural subgroup $$\GL^\e_{k_1}(q) \times \ldots \times \GL^\e_{k_m}(q).$$ Here, $1 \leq k_i < n$, and if $k_i \leq n_0$ then $k=k_i$ fulfills the conditions imposed on $k$ in (i). Furthermore, each $g_i$ is unbreakable. Hence, according to (i), $\sP_u(N)$ holds for $\GL^\e_{k_i}(q)$ if $k_i \leq n_0$. If $k_i > n_0$, then by (ii) and Proposition \ref{slu-generic}, $\sP_u(N)$ holds for $\GL^\e_{k_i}(q)$ as well. Thus we can write $g_i = x_i^Ny_i^N$ with $x_i,y_i \in \GL^\e_{k_i}(q)$ and $\det(x_i^N) = 1$. Letting $$x := \diag(x_1, \ldots ,x_m),~~y := \diag(y_1, \ldots, y_m)$$ we deduce that $g = x^Ny^N$ and $\det(x^N) = 1$. Thus $\sP(N)$ holds for $G$, as desired. \hal \subsection{Induction base} \begin{lem}\label{slu-small2} Let $q = p^f\geq 2$, $\e = \pm$, and $N = r^as^b$ for some primes $r,s$. Suppose that $S = \PSL^\e_k(q)$ is simple and $k = 2$ or $3$. Then the map $(u,v) \mapsto u^Nv^N$ is surjective on $S$. \end{lem} \pf By Corollary \ref{prime2}(i), we need to consider only the case $N = p^as^b$. Let $S = \PSL_3(q)$. By \cite[Theorem 7.3]{GM}, $S \setminus \{1\} \subseteq CC$ where $C = x^S$ or $y^S$, $|x| = (q^2+q+1)/d$ and $|y| = (q^2-1)/d$, with $d = \gcd(3,q-1)$. In particular, $|x|$ and $|y|$ are coprime. 
Hence at least one of $x,y$ has order coprime to $N$, so it is an $N$th power in $S$, whence we are done. $\PSU_3(q)$ can be treated similarly using \cite[Theorem 7.1]{GM}. If $S = \PSL_2(q)$ with $q \geq 7$ odd, then by \cite[Theorem 7.1]{GM}, $S \setminus \{1\} \subseteq CC$ with $C = x^S$ or $y^S$, $|x| = (q+1)/2$ and $|y| = (q-1)/2$, so we can argue similarly. Finally, assume that $S = \SL_2(q)$ with $q \geq 4$ even. If $s \nmid (q-1)$, then $S \setminus \{1\} \subseteq CC$ with $C = x^S$ and $|x| = q-1$ by \cite[Theorem 7.1]{GM}, so we are done. Assume $s|(q-1)$. Using the character table, we check that $S \setminus \{1\} \subseteq y^S \cdot (y^2)^S$ if $|y| = q+1$, so we are done again. \hal \begin{lem}\label{slu-small1} Let $q = p^f \geq 4$, $\e = \pm$, and $N = p^at^b$ for a prime $t \nmid p(q-\e)$. Then $\sP_u(N)$ holds for $G = \GL^\e_k(q)$ with $1 \leq k \leq 3$. \end{lem} \pf Clearly the statement holds for $k = 1$. Suppose $k > 1$ and let $g \in G$ be unbreakable. Let $\rho$ be a generator of $\F_q^\times$, and let $\ve \in \C^\times$ have order $q-1 \geq 3$. To establish $\sP_u(N)$ for $g$, we exhibit some $N'$-elements $g_1, g_2$ of $G$ such that $g \in g_1^G \cdot g_2^G$ and at least one of $g_1,g_2$ has determinant $1$. \smallskip (i) Consider the case $G = \GL_2(q)$. Since $g$ is unbreakable, it belongs to class $B_1$ or $A_2$, in the notation of \cite{St}. In the first case, $g$ lies in a torus of order $q^2-1$, and we define $g_1 = \diag(\rho,\rho^{-1})$, and $g_2 = \diag(1,\rho^i)$ if $\det(g) = \rho^i \neq 1$, or $g_2 = g_1$ if $\det(g) = 1$. Using \cite[Table II]{St}, it is easy to check that $$\sum_{\chi \in \Irr(G)}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)} = (q-1)\left(1+\frac{1}{q}\right) > 0.$$ Since $g_1$ and $g_2$ are $N'$-elements, we are done. Suppose now that $g \in A_2$, i.e.\ $g = zu$ with $z \in \bfZ(G)$ and $u$ a regular unipotent element.
Since $z$ is the $N$th power of some central element of $G$, it suffices to show that $u \in g_1^G \cdot g_2^G$ where we again choose $g_2 = g_1$. Using \cite[Table II]{St}, $$\sum_{\chi \in \Irr(G)}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)} = (q-1)\left(1-\frac{1}{2(q+1)}\sum_{0 \leq m \neq n \leq q-2}(\ve^{m-n}+\ve^{n-m})^2\right) = \frac{4(q-1)}{q+1},$$ so we are done again. The same arguments apply in the case $G = \GU_2(q)$, where we choose $g_2 = g_1^2$ if $g = zu$ and $u$ is a regular unipotent element. \smallskip (ii) Consider the case $G = \GL_3(q)$. Since $g$ is unbreakable, $g$ belongs to class $C_1$ (so $g$ lies in a maximal torus of order $q^3-1$) or $A_3$ (i.e.\ $g$ is a scalar multiple of a regular unipotent element), in the notation of \cite{St}. First suppose that $t \neq \p(q,3)$. By Lemma \ref{slu-det} (below) we can find a regular semisimple $g_1 \in \GL_3(q)$ of order $\p(q,3)m$ such that $\det(g_1) = \det(g)$ and all prime divisors of $m$ divide $q-1$. Note that $g_1$ belongs to class $C_1$. Also, define $g_2 = \diag(1,\rho,\rho^{-1}) \in \SL_3(q)$ belonging to class $A_6$. Using \cite[\S3]{St}, it is easy to check that $$\left|\sum_{\chi \in \Irr(G)}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| > (q-1)\left(1-\frac{2}{q(q+1)}-\frac{1}{q^3}\right) > 0.$$ Since $g_1$ and $g_2$ are $N'$-elements, we are done. Suppose now that $t = \p(q,3)$. We choose $h$ to be a regular semisimple element of order $q+1$ in $\SL_2(q)$ and define $g_1 := \diag(h,\det(g))$ so that it belongs to class $B_1$. Using $g_2$ as in (i), we observe that $$\left|\sum_{\chi \in \Irr(G)}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| > (q-1)\left(1-\frac{1}{q^3}-\frac{3(q-2)}{2(q^2+q+1)} \right) > 0,$$ so we are done again. The same arguments apply in the case $G = \GU_3(q)$.
\hal \subsection{Induction step: Generic case} We need the following simple observation: \begin{lem}\label{slu-det} Let $G = \GL^\e_n(q)$ with $n \geq 3$ and let $T$ be a cyclic torus of order $q^n-\e^n$ of $G$. Suppose there is a prime $s$ that divides $q^n-\e^n$ but not $\prod^{n-1}_{i=1}(q^i-\e^i)$. For every $g \in G$, there exists a regular semisimple $h \in T$ of order $sm$ for some $m \in \N$ such that $\det(h) = \det(g)$ and all prime divisors of $m$ divide $q-\e$. \end{lem} \begin{proof} Let $D \cong C_{q-\e}$ denote the image of $G$ under the determinant map $\det$. Note that $\det$ maps $T$ onto $D$. The condition on $s$ implies that every $x \in T$ of order divisible by $s$ is regular semisimple and $s \nmid (q-\e)$. It follows that $\det$ maps $T_1 \geq \bfO_s(T)$ into $1$ and $T_2$ onto $D$, where $T = T_1 \times T_2$, $|T_1|$ is coprime to $q-\e$, and all prime divisors of $|T_2|$ divide $q-\e$. Hence we can choose $x \in \bfO_s(T)$ of order $s$ and $y \in T_2$ such that $\det(y) = \det(g)$ and set $h := xy$. \end{proof} \begin{prop}\label{sl-large} Suppose $G = \GL_n(q)$ with $n \geq 4$, $q = p^f \geq 4$, and $t \in \cR(\SL_n(q))$. Then $\sP_u(N)$ holds for $G$ and for every $N = p^at^b$. \end{prop} \pf Consider an unbreakable $g \in G$ and a regular semisimple $g_1 \in \SL_n(q)$ of order $s \in \cR(G) \setminus \{t\}$. Denote $$\Irr(G/[G,G]) = \{ \lambda_i \mid 0 \leq i \leq q-2\}.$$ \smallskip (i) First we consider the case $n \geq 6$. Choose $$D = \frac{(q^n-1)(q^{n-1}-q^2)}{(q-1)(q^2-1)}.$$ By \cite[Theorem 3.1]{TZ1}, every irreducible character of $\SL_n(q)$ of degree less than $D$ is either the principal character, or an irreducible Weil character, and it is well known that each of these characters extends to $G$. 
It follows that the characters in $\Irr(G)$ of degree less than $D$ are exactly the $q-1$ linear characters $\lambda_i$ and $(q-1)^2$ irreducible Weil characters $\tau_{i,j}$, $0 \leq i,j \leq q-2$, where $$\tau_{i,j} = \lambda_j\tau_{i,0},~\tau_{i,0}(1) = \frac{q^n-1}{q-1} - \delta_{i,0}.$$ Using Lemma \ref{slu-det}, we can choose a regular semisimple $g_2 \in G$ of order $sm$ where all prime divisors of $m$ divide $q-1$ and $\det(g_2) = \det(g)$. In particular, $$|\bfC_G(g_i)| \leq q^n-1,$$ and \begin{equation}\label{sl41} \sum^{q-2}_{i=0}\lambda_i(g_1)\lambda_i(g_2)\overline\lambda_i(g) = q-1. \end{equation} By (\ref{cent-slu}) and Lemma \ref{basic}, \begin{equation}\label{sl42} \left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(q^n-1)^{3/2}}{D} \leq (q-1)\left(1-\frac{1}{q^2+q+1} \right). \end{equation} Fix a primitive $(q-1)$th root of unity $\delta \in \F_q^\times$ and a primitive $(q-1)$th root of unity $\td \in \C^\times$. Relabeling $\tau_{i,j}$ if necessary, $$\tau_{i,0}(x) = \frac{1}{q-1}\sum^{q-2}_{l=0}\td^{il} q^{e(x,\delta^l)}-2\delta_{i,0}$$ for every $x \in G$, where $e(x,\alpha)$ denotes the dimension of the $\alpha$-eigenspace of $x$ on the natural module $\F_q^n$ for $G$. The choice of $g_i$ and the unbreakability of $g$ ensure that $e(y,\delta^l)$ is at most $1$ for $y \in \{g,g_1,g_2\}$ and $0 \leq l \leq q-2$, and in fact it can equal $1$ for at most one value $l_0$. In particular, $$-1 = \frac{1}{q-1}(q-1)-2 \leq \tau_{0,0}(y) \leq \frac{1}{q-1}(q+q-2)-2 = 0.$$ Consider $i > 0$. If such $l_0$ exists, then $$\tau_{i,0}(y) = \frac{1}{q-1}\left(\td^{il_0}(q-1)+\sum^{q-2}_{l=0}\td^{il}\right) = \td^{il_0}.$$ If no such $l_0$ exists, then $$\tau_{i,0}(y) = \frac{1}{q-1}\sum^{q-2}_{l=0}\td^{il} = 0.$$ We have shown that \begin{equation}\label{weil-sl1} |\tau_{i,j}(y)| \leq 1. 
\end{equation} It follows that if $n \geq 5$ then \begin{equation}\label{weil-sl2} \left|\sum_{0 \leq i,j \leq q-2}\frac{\tau_{i,j}(g_1)\tau_{i,j}(g_2)\overline\tau_{i,j}(g)}{\tau_{i,j}(1)}\right| \leq \frac{(q-1)^3}{q^n-q} < \frac{q-1}{q(q^2+q+1)}. \end{equation} Together with (\ref{sl42}), this implies that $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq (q-1)\left(1-\frac{1}{q^2+q+1} + \frac{1}{q(q^2+q+1)}\right) < q-1.$$ Hence $g \in g_1^G \cdot g_2^G$ by (\ref{sl41}) and Lemma \ref{basic}(i). Since both $g_1$ and $g_2$ have order coprime to $N$, we are done. \smallskip (ii) Next we consider the case $n = 5$. Setting $s' := \p(q,3)$ and using Lemma \ref{slu-det}, we can choose a regular semisimple $h \in \GL_3(q)$ of order $s'm$, where all prime divisors of $m$ divide $q-1$ and $\det(h) = \det(g)$. Also, let $h' \in \GL_2(q)$ be conjugate (over $\overline\F_q$) to $\diag(\beta,\beta^{-1})$, where $\beta \in \overline\F_q^\times$ has order $q+1$. Setting $g_2 = \diag(h,h')$, the orders of $g_1$ and $g_2$ are coprime to $N$, $\det(g_2) = \det(g)$, and $e(g_2,\delta^l) = 0$ for $0 \leq l \leq q-2$. In particular, (\ref{weil-sl1}) and (\ref{weil-sl2}) hold. Next, we choose $D = q^4(q^5-1)/(q-1)$, yielding \begin{equation}\label{sl43} \left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(q^5-1)^{3/2}}{D} \leq \frac{q-1}{q^{3/2}} \leq \frac{q-1}{8}. \end{equation} Now, using \cite{Lu}, we check that if $\psi \in \Irr(\SL_5(q))$ has positive $s$-defect and positive $s'$-defect and $\psi(1) < D$, then either $\psi$ is the principal character or a Weil character, or $s = \p(q,4)$ and $\psi$ is the unique character of degree $q^2(q^5-1)/(q-1)$. In either case, $\psi$ extends to $G$. In fact, in the latter case, an extension $\varphi$ of $\psi$ to $G$ is the unipotent character labeled by the partition $(3,2)$ (see \cite[\S13.8]{C}). 
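The estimates (\ref{sl42}) and (\ref{weil-sl2}) are tight for small $q$. As a hypothetical sanity check, outside the formal argument, they can be confirmed by exact integer arithmetic on a small grid of $(q,n)$ (the function names below are illustrative):

```python
# Hypothetical sanity check (not part of the proof) of two estimates for
# GL_n(q), q >= 4:
#   (sl42):     (q^n-1)^{3/2} / D <= (q-1)(1 - 1/(q^2+q+1)),  n >= 6,
#               with D = (q^n-1)(q^{n-1}-q^2) / ((q-1)(q^2-1));
#   (weil-sl2): (q-1)^3 / (q^n-q) < (q-1) / (q(q^2+q+1)),     n >= 5.

def check_sl42(q: int, n: int) -> bool:
    # Squaring both sides gives the equivalent integer inequality
    #   (q^n-1) * ((q^2-1)(q^2+q+1))^2 <= ((q^{n-1}-q^2)(q^2+q))^2,
    # which avoids floating-point error in near-equality cases.
    lhs = (q**n - 1) * ((q**2 - 1) * (q**2 + q + 1))**2
    rhs = ((q**(n - 1) - q**2) * (q**2 + q))**2
    return lhs <= rhs

def check_weil_sl2(q: int, n: int) -> bool:
    # Equivalent integer form: q (q-1)^2 (q^2+q+1) < q^n - q.
    return q * (q - 1)**2 * (q**2 + q + 1) < q**n - q

assert all(check_sl42(q, n)
           for q in (4, 5, 7, 8, 9, 11, 13, 16) for n in range(6, 13))
assert all(check_weil_sl2(q, n)
           for q in (4, 5, 7, 8, 9, 11, 13, 16) for n in range(5, 13))
print("estimates (sl42) and (weil-sl2) hold on the tested grid")
```

The exact comparison matters: at $(q,n) = (4,6)$ the two sides of (\ref{sl42}) differ only in the fifth significant digit.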
On the other hand, $\tau_{0,0}$ is the unipotent character of $G$ labeled by the partition $(4,1)$. It follows by \cite[Lemma 5.1]{GT1} that $$\varphi = (1_G + \tau_{0,0} + \varphi) - (1_G + \tau_{0,0}) = \rho_2 - \rho_1,$$ where $\rho_i$ is the permutation character of the action of $G$ on the set of $i$-dimensional subspaces of the natural module $\F_q^5$ for $i = 1,2$. Therefore, $$\varphi(g_1) = \rho_2(g_1) - \rho_1(g_1) = 0-1 = -1,~~ \varphi(g_2) = \rho_2(g_2) - \rho_1(g_2) = 1-0 = 1.$$ Also, the extensions of $\psi$ to $G$ are $\varphi\lambda_i$, $0 \leq i \leq q-2$, and $|\varphi(g)| \leq (q^5-1)^{1/2}$ by (\ref{cent-slu}). Certainly, $\chi(g_1)\chi(g_2) = 0$ unless $\chi$ has positive $s$-defect and positive $s'$-defect. Hence, combining with (\ref{weil-sl2}), we deduce that $$\left|\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{q-1}{q(q^2+q+1)} + (q-1)\frac{(q^5-1)^{1/2}}{\psi(1)} \leq \frac{q-1}{32}.$$ Together with (\ref{sl43}), this implies that $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq (q-1)\left(\frac{1}{8} + \frac{1}{32}\right) < q-1,$$ so we are done as before. \smallskip (iii) Here we consider the case $n = 4$. Since $g$ is unbreakable, $g$ belongs to class $A_5$, $C_2$, or $E_1$, in the notation of \cite{St}. In the two latter cases, note that the $G$-conjugacy class of such an element $g$ is completely determined by $|g|$ and the eigenvalues of $g$ acting on $\overline\F_q^4$. On the other hand, $G$ contains a natural subgroup $H \cong \GL_2(q^2)$, and $H$ contains an element $h$ with the same spectrum and order as $g$. Hence we may assume $g =h \in H$. As $N = p^at^b$ and $t \nmid (q^2-1)$, we can now apply Lemma \ref{slu-small1} (if $h$ is unbreakable) to get $g = x^Ny^N$ for some $x \in \SL_2(q^2) < \SL_4(q)$ and $y \in H$. Such a decomposition certainly exists if $h$ is breakable in $H$ (i.e.\ $h \in \GL_1(q^2) \times \GL_1(q^2)$). 
It remains therefore to consider the case $g \in A_5$, i.e.\ $g = zu$, where $z \in \bfZ(G)$ and $u$ is a regular unipotent element. By \cite[Corollary 8.3.6]{C}, $|\chi(g)| \leq 1$ for all $\chi \in \Irr(G)$. Choosing $D = (q-1)(q^3-1)$ and $g_2$ of order $sm$ as in (i), by the Cauchy-Schwarz inequality, $$ \left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(q^4-1)}{(q-1)(q^3-1)} < 1.35.$$ Using \cite{St}, we check that all irreducible characters of $G$ of degree less than $D$ are linear or Weil characters. Hence (\ref{weil-sl1}) implies that $$\left|\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(q-1)^{3}}{q^4-q} < 0.11.$$ It follows that $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| < 1.35 + 0.11 = 1.46 < q-1,$$ so we are done. \hal \begin{prop}\label{su-large} Suppose $G = \GU_n(q)$ with $n \geq 4$, $q = p^f \geq 4$, and $t \in \cR(\SU_n(q))$. Then $\sP_u(N)$ holds for $G$ and for every $N = p^at^b$. \end{prop} \pf Consider an unbreakable $g \in G$ and a regular semisimple $g_1 \in \SU_n(q)$ of order $s \in \cR(G) \setminus \{t\}$. Denote $$\Irr(G/[G,G]) = \{ \lambda_i \mid 0 \leq i \leq q\}.$$ \smallskip (i) First we consider the case $n \geq 6$. If $n \geq 7$, then using Lemma \ref{slu-det}, we can choose a regular semisimple $g_2 \in G$ of order $sm$ where all prime divisors of $m$ divide $q+1$ and $\det(g_2) = \det(g)$. If $n = 6$, then we set $s' := \p(q,6) \geq 7$ and use Lemma \ref{slu-det} to get a regular semisimple $h \in \GU_3(q)$ of order $s'm$, where $\det(h) = \det(g)$ and all prime divisors of $m$ divide $q+1$. We also set $h':= (h_{s'})^{-1}$ and $g_2 = \diag(h,h')$. Then $g_2 \in G$ is regular semisimple, and $\det(g_2) = \det(g)$. 
In either case $$|\bfC_G(g_i)| \leq (q^{n-1}+1)(q+1).$$ Choose $$D = \left\{ \begin{array}{ll}(q^n-(-1)^n)(q^{n-1}-q^2)/\left((q+1)(q^2-1)\right), & n \geq 7,\\ (q+1)(q^3+1)(q^5+1)/2, & n = 6. \end{array} \right.$$ If $n \geq 7$, then by \cite[Theorem 4.1]{TZ1}, every irreducible character of $\SU_n(q)$ of degree less than $D$ is either the principal character, or an irreducible Weil character, and each of these characters extends to $G$, cf.\ \cite[Lemma 4.7]{TZ2}. In this case, the characters in $\Irr(G)$ of degree less than $D$ are exactly the $q+1$ linear characters $\lambda_i$, and $(q+1)^2$ irreducible Weil characters $\zeta_{i,j}$, $0 \leq i,j \leq q$, where $$\zeta_{i,j} = \lambda_j\zeta_{i,0},~\zeta_{i,0}(1) = \frac{q^n-(-1)^n}{q+1} + (-1)^n\delta_{i,0}.$$ Suppose $n = 6$ and $\psi \in \Irr(\SU_6(q))$ has positive $s$-defect and positive $s'$-defect. Using \cite{Lu}, we check that either $\psi$ is the principal character or a Weil character, or $\psi(1) \geq D$. Again, if $\chi \in \Irr(G)$, $\chi(1) < D$, and $\chi(g_1)\chi(g_2) \neq 0$, then $\chi$ is either a linear character, or a Weil character. The choice of $g_1$ and $g_2$ ensures that \begin{equation}\label{su41} \sum^{q}_{i=0}\lambda_i(g_1)\lambda_i(g_2)\overline\lambda_i(g) = q+1. \end{equation} By (\ref{cent-slu}) and Lemma \ref{basic}, \begin{equation}\label{su42} \left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| < \frac{((q+1)(q^{n-1}+1))^{3/2}}{D} < \frac{2(q+1)}{3}. \end{equation} Fix a primitive $(q+1)$th root of unity $\xi \in \F_{q^2}^\times$ and a primitive $(q+1)$th root of unity $\tx \in \C^\times$. Relabeling $\zeta_{i,j}$ if necessary, $$\zeta_{i,0}(x) = \frac{(-1)^n}{q+1}\sum^{q}_{l=0}\tx^{il} (-q)^{e(x,\xi^l)}$$ for every $x \in G$, where $e(x,\alpha)$ denotes the dimension of the $\alpha$-eigenspace of $x$ on the natural module $\F_{q^2}^n$ for $G$.
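The estimate (\ref{su42}) involves a case split in the choice of $D$; as a hypothetical sanity check, outside the formal argument, it can be verified by exact rational arithmetic on a small grid (the function names below are illustrative, and exact fractions are used because $D$ is a half-integer when $n = 6$ and $q$ is even):

```python
# Hypothetical sanity check (not part of the proof) of the estimate
#   ((q+1)(q^{n-1}+1))^{3/2} / D < 2(q+1)/3
# for GU_n(q), q >= 4, n >= 6, with D as chosen in the proof.
from fractions import Fraction

def degree_bound(q: int, n: int) -> Fraction:
    # D from the proof; exact rationals since D need not be an integer
    # as written (e.g. n = 6 with q even).
    if n >= 7:
        return Fraction((q**n - (-1)**n) * (q**(n - 1) - q**2),
                        (q + 1) * (q**2 - 1))
    return Fraction((q + 1) * (q**3 + 1) * (q**5 + 1), 2)

def check_su42(q: int, n: int) -> bool:
    # Squaring both sides gives: 9 (q+1) (q^{n-1}+1)^3 < 4 D^2.
    lhs = 9 * (q + 1) * (q**(n - 1) + 1)**3
    return lhs < 4 * degree_bound(q, n)**2

assert all(check_su42(q, n)
           for q in (4, 5, 7, 8, 9, 11, 13) for n in range(6, 13))
print("estimate (su42) holds on the tested grid")
```

The check is tightest at $(q,n) = (4,7)$, where the two squared sides agree to within about $3\%$.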
As before, the choice of $g_i$ and the unbreakability of $g$ ensure that $e(y,\xi^l)$ is at most $1$ for $y \in \{g,g_1,g_2\}$ and $0 \leq l \leq q$, and in fact it can equal $1$ for at most one value $l_0$. If such $l_0$ exists, then $$(-1)^n\zeta_{i,0}(y) = \frac{1}{q+1}\left(\tx^{il_0}(-q-1)+\sum^{q}_{l=0}\tx^{il}\right) = \delta_{i,0}-\tx^{il_0}.$$ If no such $l_0$ exists, then $$\zeta_{i,0}(y) = \frac{(-1)^n}{q+1}\sum^{q}_{l=0}\tx^{il} = (-1)^n\delta_{i,0}.$$ We have shown that \begin{equation}\label{weil-su1} |\zeta_{i,j}(y)| \leq 1. \end{equation} It follows that if $n \geq 5$ then \begin{equation}\label{weil-su2} \left|\sum_{0 \leq i,j \leq q}\frac{\zeta_{i,j}(g_1)\zeta_{i,j}(g_2)\overline\zeta_{i,j}(g)}{\zeta_{i,j}(1)}\right| \leq \frac{(q+1)^3}{q^n-q} \leq \frac{(q+1)^2}{q(q-1)^2}. \end{equation} Together with (\ref{su42}), this implies that $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq (q+1)\left(\frac{2}{3} + \frac{1}{7}\right) < q+1.$$ Hence $g \in g_1^G \cdot g_2^G$ by (\ref{su41}) and Lemma \ref{basic}(i). Since both $g_1$ and $g_2$ have order coprime to $N$, we are done. \smallskip (ii) Next we consider the case $n = 5$. Setting $s' := \p(q,6)$ and using Lemma \ref{slu-det} we can choose a regular semisimple $h \in \GU_3(q)$ of order $s'm$, where all prime divisors of $m$ divide $q+1$ and $\det(h) = \det(g)$. Also, let $h' \in \GU_2(q)$ be conjugate (over $\overline\F_q$) to $\diag(\alpha,\alpha^{-1})$, where $\alpha \in \F_q^\times$ has order $q-1$. Setting $g_2 = \diag(h,h')$, the orders of $g_1$ and $g_2$ are coprime to $N$, $\det(g_2) = \det(g)$, and $e(g_2,\xi^l) = 0$ for $0 \leq l \leq q$. In particular, (\ref{weil-su1}) and (\ref{weil-su2}) hold. Next, we choose $D = q^4(q^5+1)/(q+1)$, yielding \begin{equation}\label{su43} \left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(q+1)(q^4+1)(q^4(q+1))^{1/2}}{D} < \frac{q+1}{5}. 
\end{equation} Now, using \cite{Lu}, we check that if $\psi \in \Irr(\SU_5(q))$ has positive $s$-defect and positive $s'$-defect and $\psi(1) < D$, then either $\psi$ is the principal character or a Weil character, or $s = \p(q,4)$ and $\psi$ is the unique character of degree $q^2(q^5+1)/(q+1)$. In either case, $\psi$ extends to $G$. In fact, in the latter case, an extension $\varphi$ of $\psi$ to $G$ is the unipotent character labeled by the partition $(3,2)$ (see \cite[\S13.8]{C}). Letting $\sigma$ be the unipotent character of $G$ labeled by the partition $(3,1,1)$, of degree $q^3(q^2+1)(q^2-q+1)$, we check that $$\rho = 1_G + \varphi + \sigma$$ is the (rank 3) permutation character of the action of $G$ on the set of isotropic $1$-dimensional subspaces of the natural module $\F_{q^2}^5$, cf.\ \cite[Table 2]{ST}. Note that $\sigma$ has $s$-defect $0$ and $s'$-defect $0$. It follows that $\sigma(g_1) = \sigma(g_2) = 0$, so $$\varphi(g_1) = \rho(g_1) - 1 = 0-1 = -1,~~ \varphi(g_2) = \rho(g_2) - 1 = 2-1 = 1.$$ Also, the extensions of $\psi$ to $G$ are $\varphi\lambda_i$, $0 \leq i \leq q$, and $|\varphi(g)| \leq (q^4(q+1))^{1/2}$ by (\ref{cent-slu}). Certainly, $\chi(g_1)\chi(g_2) = 0$ unless $\chi$ has positive $s$-defect and positive $s'$-defect. Hence, combining with (\ref{weil-su2}), we deduce that $$\left|\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(q+1)^2}{q(q-1)^2} + (q+1)\frac{(q^4(q+1))^{1/2}}{\psi(1)} < \frac{q+1}{7}.$$ Together with (\ref{su43}), this implies that $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq (q+1)\left(\frac{1}{5} + \frac{1}{7}\right) < q+1,$$ so we are done as before. \smallskip (iii) Here we consider the case $n = 4$. Since $g$ is unbreakable, $g$ belongs to class $A_5$, $C_2$, or $E_1$, in the notation of \cite{Noz1}. 
In the two latter cases, note that the $G$-conjugacy class of such an element $g$ is completely determined by $|g|$ and the eigenvalues of $g$ acting on $\overline\F_q^4$. On the other hand, $G$ contains a natural subgroup $H \cong \GL_2(q^2)$, and $H$ contains an element $h$ with the same spectrum and order as $g$. Hence we may assume $g =h \in H$. As $N = p^at^b$ and $t \nmid (q^2-1)$, we can now apply Lemma \ref{slu-small1} (if $h$ is unbreakable) to get $g = x^Ny^N$ for some $x \in \SL_2(q^2) < \SL_4(q)$ and $y \in H$. Such a decomposition certainly exists if $h$ is breakable in $H$ (i.e.\ $h \in \GL_1(q^2) \times \GL_1(q^2)$). It remains therefore to consider the case $g \in A_5$, i.e.\ $g = zu$, where $z \in \bfZ(G)$ and $u$ is a regular unipotent element. By \cite[Corollary 8.3.6]{C}, $|\chi(g)| \leq 1$ for all $\chi \in \Irr(G)$. Choosing $D = (q+1)(q^3+1)$ and $g_2$ of order $sm$ as in (i) (when $n \geq 7$), by the Cauchy-Schwarz inequality, $$ \left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(q^3+1)(q+1)}{(q+1)(q^3+1)} = 1 \leq \frac{q+1}{5}.$$ Using \cite{Noz1}, we check that all irreducible characters of $G$ of degree less than $D$ are linear or Weil characters. Hence (\ref{weil-su1}) implies that $$\left|\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(q+1)^{2}}{q(q-1)^2} < \frac{q+1}{7}.$$ It follows that $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| < (q+1)\left(\frac{1}{5} + \frac{1}{7}\right) < q+1,$$ so we are done. \hal \begin{cor}\label{slu-large} Theorem \ref{main1} holds for $G = \PSL^\e_n(q)$ with $q = p^f \geq 4$, $\e = \pm$, and $n \geq 2$. \end{cor} \pf The case $n = 2,3$ follows from Lemma \ref{slu-small2}. If $n \geq 4$, then we choose $n_0 = 3$ and apply Proposition \ref{ind-slu}. 
Note that condition (i) of that proposition is satisfied by Lemma \ref{slu-small1}, and (ii) holds by Propositions \ref{sl-large} and \ref{su-large}. Hence we are done by Proposition \ref{ind-slu}. \hal \subsection{Induction step: Small fields} \begin{prop}\label{sl2} Suppose $G = \GL_n(2)$ with $n \geq 8$ and $t \in \cR(G)$. Then $\sP_u(N)$ holds for $G$ and for every $N = 2^at^b$. \end{prop} \pf Consider an unbreakable $g \in G$ and choose $$D = (2^n-1)(2^{n-1}-4)/3.$$ By \cite[Theorem 3.1]{TZ1}, $\Irr(G)$ contains exactly two characters of degree less than $D$: namely, $1_G$ and $\tau$. In fact $\tau(1) = 2^n-2$ and $\rho = \tau+1_G$ is the permutation character of the action of $G$ on the set of nonzero vectors of the natural module $V = \F^n_2$. Choose regular semisimple elements $g_1=g_2$ of order $s \in \cR(G) \setminus \{t\}$; in particular, $|\bfC_G(g_i)| \leq 2^n-1$. Note that $\rho(g_i) \in \{0,1\}$, so $|\tau(g_i)| \leq 1$. Also, $|\bfC_G(g)| \leq 9 \cdot 2^{n}$ by Lemma \ref{LUL-sl2}. It follows that $$\frac{|\tau(g_1)\tau(g_2)\tau(g)|}{\tau(1)} \leq \frac{3 \cdot 2^{n/2}}{2^n-2} < 0.189.$$ If $n \geq 9$, then by Lemma \ref{basic}(ii) $$\left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(2^n-1) \cdot 3 \cdot 2^{n/2}}{D} = \frac{9 \cdot 2^{n/2}}{2^{n-1}-4} < 0.809.$$ If $n = 8$ and $|\bfC_G(g)| \leq 2^{n+2}$, then $$\left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(2^n-1) \cdot 2 \cdot 2^{n/2}}{D} = \frac{6 \cdot 2^{n/2}}{2^{n-1}-4} < 0.775.$$ Thus, in each of these cases, $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| < 0.809 + 0.189 = 0.998,$$ whence $g \in g_1^G \cdot g_2^G$ by Lemma \ref{basic}(i). Since both $g_1$ and $g_2$ have order coprime to $N$, we are done in these cases. 
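The three explicit constants $0.189$, $0.809$, and $0.775$ in the argument above are sharp to three decimal places. As a hypothetical numerical sanity check, outside the formal argument (function names below are illustrative), note that both ratios are decreasing in $n$, since each step multiplies the numerator by $\sqrt{2}$ while the denominator roughly doubles, so verifying an initial range of $n$ is representative:

```python
# Hypothetical sanity check (not part of the proof) of the constants in
# the GL_n(2) argument:
#   3 * 2^{n/2} / (2^n - 2)       < 0.189  for n >= 8,
#   9 * 2^{n/2} / (2^{n-1} - 4)   < 0.809  for n >= 9,
#   6 * 2^{n/2} / (2^{n-1} - 4)   < 0.775  for n  = 8.

def tau_ratio(n: int) -> float:
    # Bound on |tau(g_1) tau(g_2) tau(g)| / tau(1).
    return 3 * 2**(n / 2) / (2**n - 2)

def tail_ratio(n: int, c: int) -> float:
    # Bound on the tail sum over characters of degree >= D,
    # with c = 9 (n >= 9) or c = 6 (n = 8).
    return c * 2**(n / 2) / (2**(n - 1) - 4)

assert all(tau_ratio(n) < 0.189 for n in range(8, 60))
assert all(tail_ratio(n, 9) < 0.809 for n in range(9, 60))
assert tail_ratio(8, 6) < 0.775
print("all three bounds hold on the tested range")
```

At the boundary cases $n = 8$ and $n = 9$ the ratios evaluate to roughly $0.18898$, $0.80812$, and $0.77419$, confirming that the stated constants leave almost no slack.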
In the remaining case, by Lemma \ref{LUL-sl2}, $G = \GL_8(2)$ and $g \in H := \GL_4(4) = \bfZ(H) \times S$ with $\bfZ(H) \cong C_3$ and $S \cong \SL_4(4)$. Thus we can write $g = zh$ with $z \in \bfZ(H)$ and $h \in S$. Applying Corollary \ref{slu-large} to $\SL_4(4)$, we deduce that $h = x^Ny^N$ for some $x,y \in S$. Certainly, $z = z_1^N$ for some $z_1 \in \bfZ(H)$. It follows that $g = (z_1x)^Ny^N$, and we are done again. \hal \begin{prop}\label{sl3} Suppose $G = \GL_n(3)$ with $n \geq 8$ and $t \in \cR(\SL_n(3))$. Then $\sP_u(N)$ holds for $G$ and for every $N = 3^at^b$. \end{prop} \pf Consider an unbreakable $g \in G$, so $|\bfC_G(g)| \leq 3^{n+2} \cdot 2^4$ by Lemma \ref{LUL3}. First, we use Lemma \ref{slu-det} to get a regular semisimple element $g_1$ of order $sm$, where $s \in \cR(G) \setminus \{t\}$, $m$ is a $2$-power, and $\det(g_1) = \det(g)$. Next we fix a regular semisimple $h \in \SL_{n-2}(3)$ of order $s' = \p(3,n-2)$ and $h' \in \SL_2(3)$ of order $4$, and set $g_2 := \diag(h,h')$. In particular, $|\bfC_G(g_i)| \leq 3^n-1$. Also, we choose $$D = 3^{3n-9}.$$ By Lemma \ref{basic}(ii), $$\left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(3^n-1) \cdot 4 \cdot 3^{(n+2)/2}}{3^{3n-9}} < \frac{4}{3^{3n/2-10}} \leq \frac{4}{9}.$$ Now we estimate character ratios for $\chi \in \Irr(G)$ with $\chi(1) < D$ and $\chi(g_1)\chi(g_2) \neq 0$. The latter condition implies that $\chi$ has positive $s$-defect and positive $s'$-defect. Applying \cite[Theorem 3.4]{BK}, $\chi$ can be only one of the following: $\bullet$ two linear characters $\lambda_{0,1}$, $\bullet$ two of the four Weil characters $\tau_{i,j}$ with $0 \leq i \leq 1$ (see the proof of Proposition \ref{sl-large} for their definition), and, possibly, $\bullet$ two characters $\varphi_{0,1} = \varphi\lambda_{0,1}$. Here, $\varphi$ is the unipotent character of $G$ labeled by the partition $(n-2,2)$, of degree $(3^n-1)(3^{n-1}-9)/16$. 
The elements $g_{1,2}$ have the property that $e(g_i,\delta) \leq 1$ for all $\delta \in \F_3^\times$, with equality attained at most once. Hence, the estimate (\ref{weil-sl1}) holds. It follows that $$\sum_{0 \leq i,j \leq 1}\frac{|\tau_{i,j}(g_1)\tau_{i,j}(g_2)\overline\tau_{i,j}(g)|}{\tau_{i,j}(1)} \leq \frac{2 \cdot 4 \cdot 3^{n/2+1}}{(3^n-3)/2} \leq \frac{8}{13}.$$ On the other hand, $\tau_{0,0}$ is the unipotent character of $G$ labeled by the partition $(n-1,1)$. It follows by \cite[Lemma 5.1]{GT1} that $$\varphi = (1_G + \tau_{0,0} + \varphi) - (1_G + \tau_{0,0}) = \rho_2 - \rho_1,$$ where $\rho_i$ is the permutation character of the action of $G$ on the set of $i$-dimensional subspaces of the natural module $\F_3^n$ for $i = 1,2$. Observe that $\rho_2(g_1) = 0$ and $\rho_1(g_1) = 0$ or $1$. Therefore, $$|\varphi(g_1)| = |\rho_2(g_1) - \rho_1(g_1)| = |\rho_1(g_1)| \leq 1,~~ \varphi(g_2) = \rho_2(g_2) - \rho_1(g_2) = 1-0 = 1.$$ This implies that $$\left|\sum^1_{i=0}\frac{\varphi_i(g_1)\varphi_i(g_2)\overline\varphi_i(g)}{\varphi_i(1)}\right| \leq \frac{2 \cdot 4 \cdot 3^{n/2+1}}{(3^n-1)(3^{n-1}-9)/16} \leq \frac{128}{(3^{n/2-1}-1)(3^{n-1}-9)} < 0.003.$$ In summary, $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| < \frac{4}{9} + \frac{8}{13} + 0.003 < 1.07.$$ Our choice of $g_1$ and $g_2$ ensures that $$\sum^{1}_{i=0}\lambda_i(g_1)\lambda_i(g_2)\overline\lambda_i(g) = 2.$$ Hence $g \in g_1^G \cdot g_2^G$ by Lemma \ref{basic}(i), so we are done since $|g_1|$ and $|g_2|$ are both coprime to $N$. \hal \begin{prop}\label{su3} Suppose $G = \GU_n(3)$ with $n \geq 7$ and $t \in \cR(\SU_n(3))$. Then $\sP_u(N)$ holds for $G$ and for every $N = 3^at^b$. \end{prop} \pf Consider an unbreakable $g \in G$, so $|\bfC_G(g)| \leq 3^{n+2} \cdot 2^4$ by Lemma \ref{LUL3}. 
First, we use Lemma \ref{slu-det} to get a regular semisimple $g_1$ of order $sm$, where $s \in \cR(G) \setminus \{t\}$, $m$ is a $2$-power, and $\det(g_1) = \det(g)$. Then we choose $$s' := \left\{ \begin{array}{ll}\p(q,2n-4), & n \equiv 1 \bmod 2,\\ \p(q,n-2), & n \equiv 2 \bmod 4,\\ \p(q,(n-2)/2), & n \equiv 0 \bmod 4, \end{array}\right.$$ (with $q = 3$). Note that $s'|(q^{n-2}-(-1)^{n-2})$ but $s' \nmid \prod^{n}_{i=1,i \neq n-2}(q^i-(-1)^i)$. Next, we fix $\alpha \in \ovF_3^\times$ of order $q^{n-2}-(-1)^n$ and choose a regular semisimple $h \in \GU_{n-2}(3)$ that is conjugate over $\ovF_3$ to $$\diag\left(\alpha,\alpha^{-q},\alpha^{q^2}, \ldots ,\alpha^{(-q)^{n-3}}\right).$$ Note that $\det(h) \in \F_9^\times$ has order $4$. Hence there is some $\beta \in \F_9^\times$ of order $q^2-1$ so that $\det(h) = \beta^2$. We fix $h' = \diag(\beta,\beta^{-q}) \in \GU_2(3)$, and set $g_2 := \diag(h,h')$. In particular, $g_2 \in \SU_n(3)$ is $s'$-singular, $g_i$ is an $N'$-element and $|\bfC_G(g_i)| \leq 4(3^{n-1}+1)$ for $i = 1,2$. Recall the Weil characters $\zeta_{i,j}$, $0 \leq i,j \leq q$ defined in the proof of Proposition \ref{su-large}. Fix $\xi \in \F_{q^2}^\times$ of order $q+1$. The elements $g_{1,2}$ have the property that $e(g_i,\xi^l) \leq 1$ for all $0 \leq l \leq q$, with equality attained at most once. Hence, the estimate (\ref{weil-su1}) holds for $y = g_i$. Also, \begin{equation}\label{weil-su3} e(g,\xi^l) \leq n/2 \end{equation} whenever $n \geq 7$. (Indeed, otherwise $U = \Ker(g-\xi^l \cdot 1_V)$ has dimension $\geq (n+1)/2$ in the natural $G$-module $V := \F_{q^2}^n$. It follows that $U$ cannot be totally singular, so $U$ contains at least one anisotropic vector $u$. In this case, $g$ fixes the decomposition $$V = \la u \ra_{\F_{q^2}} \oplus (\la u \ra_{\F_{q^2}})^\perp.$$ In other words, $g \in \GU_1(q) \times \GU_{n-1}(q)$, so $g$ is breakable, a contradiction.) 
As $n \geq 7$, we deduce that $e(g,\xi^l) \leq n-4$, whence \begin{equation}\label{weil-su31} |\zeta_{i,j}(g)| \leq \frac{(q+1)q^{n-4}}{q+1} = q^{n-4}, \end{equation} so \begin{equation}\label{weil-su4} \sum_{0 \leq i,j \leq q}\frac{|\zeta_{i,j}(g_1)\zeta_{i,j}(g_2)\overline\zeta_{i,j}(g)|}{\zeta_{i,j}(1)} \leq \frac{16 \cdot 3^{n-4}}{(3^n-3)/4} < 0.8. \end{equation} Choosing $$D = \left\{ \begin{array}{rl}(3^{n}-1)(3^{n-1}-1)(3^{n-2}-27)/896, & n \geq 8,\\ 3^{16}, & n = 7, \end{array} \right.$$ by Lemma \ref{basic}(ii) \begin{equation}\label{su31} \left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{4(3^{n-1}+1) \cdot 4 \cdot 3^{(n+2)/2}}{D} < 0.76. \end{equation} Now we estimate character ratios for $\chi \in \Irr(G)$ with $\chi(1) < D$ and $\chi(g_1)\chi(g_2) \neq 0$. The latter condition implies that $\chi$ has positive $s$-defect and positive $s'$-defect. Applying \cite[Proposition 6.6]{ore} for $n \geq 8$, $\chi$ can be only one of the following: $\bullet$ $4$ linear characters $\lambda_{i}$, $0 \leq i \leq 3$; $\bullet$ (at most $12$ of the) $16$ Weil characters $\zeta_{i,j}$ with $0 \leq i \leq 3$, and $\bullet$ $4$ characters $\varphi_{i} = \varphi\lambda_{i}$, $0 \leq i \leq 3$, if $s | (q^{n-1}+(-1)^n)$. Here, $\varphi$ is the unipotent character of $G$ labeled by the partition $(n-2,2)$, of degree $$\varphi(1) = (3^n-(-1)^n)(3^{n-1}+9(-1)^n)/32.$$ This conclusion also holds for $n = 7$. (Indeed, for $n = 7$, using \cite{Lu} we can check that if $\sigma \in \Irr(\SU_7(q))$ has positive $s$-defect and positive $s'$-defect and $\sigma(1) < D$, then $\sigma$ is the restriction to $\SU_7(q)$ of one of the above characters of $\GU_7(q)$.) Let $\psi$ denote the unipotent character of $G$ labeled by the partition $(n-2,1,1)$, of degree $$\psi(1) = (3^n+3(-1)^n)(3^n-9(-1)^n)/32.$$ It is well known, see e.g. 
\cite[Table 2]{ST}, that $\rho := 1_G + \varphi + \psi$ is the permutation character of the action of $G$ on the set of isotropic $1$-dimensional subspaces of the natural module $V$. Recall we need to consider $\varphi$ only when $s|(q^{n-1}+(-1)^n)$, so $\psi$ has $s$-defect $0$ and $s'$-defect $0$. In particular, $\psi(g_1) = \psi(g_2) = 0$. Therefore, $$\varphi(g_1) = \rho(g_1) - 1 = 0-1 = -1,~~ \varphi(g_2) = \rho(g_2) - 1 = 2-1 = 1.$$ Since $|\varphi(g)| \leq 4 \cdot 3^{n/2+1}$, $$\left|\sum^q_{i=0}\frac{\varphi_i(g_1)\varphi_i(g_2)\overline\varphi_i(g)}{\varphi_i(1)}\right| \leq \frac{4 \cdot 4 \cdot 3^{n/2+1}}{(3^n-1)(3^{n-1}-9)/32} < 0.05.$$ Together with (\ref{weil-su4}) and (\ref{su31}), this implies that $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| < 0.8 + 0.76 + 0.05 = 1.61.$$ Our choice of $g_1$ and $g_2$ ensures that $$\sum^{q}_{i=0}\lambda_i(g_1)\lambda_i(g_2)\overline\lambda_i(g) = 4.$$ Hence $g \in g_1^G \cdot g_2^G$ by Lemma \ref{basic}(i), so we are done since $|g_1|$ and $|g_2|$ are both coprime to $N$. \hal \begin{prop}\label{su2} Suppose $G = \GU_n(2)$ with $n \geq 9$ and $t \in \cR(\SU_n(2))$. Then $\sP_u(N)$ holds for $G$ and for every $N = 2^at^b$. \end{prop} \pf Consider an unbreakable $g \in G$, so $|\bfC_G(g)| \leq 2^{n+4} \cdot 3^2$ when $n \geq 10$ and $|\bfC_G(g)| \leq 2^{48}$ when $n = 9$ by Lemma \ref{LUL}. \smallskip (i) First, we use Lemma \ref{slu-det} to get a regular semisimple element $g_1$ of order $sm$, where $s \in \cR(G) \setminus \{t\}$, $m$ is a $3$-power, and $\det(g_1) = \det(g)$. If $n \geq 10$, we can find a regular semisimple $g_2 \in \SU_n(2)$ of order $s$. In particular, $g_i$ is an $N'$-element and $|\bfC_G(g_i)| \leq 3(2^{n-1}+1)$ for $i = 1,2$. Fix $\xi \in \F_{q^2}^\times$ of order $q+1$.
As in the proof of Proposition \ref{su3}, we note that the elements $g_{1,2}$ have the property that $e(g_i,\xi^l) \leq 1$ for all $0 \leq l \leq q$, with equality attained at most once. Hence, the estimate (\ref{weil-su1}) holds for $y = g_i$. If $n = 9$, we choose $s':= 43$ and fix a regular semisimple $h \in \SU_7(2)$ of order $43$. Also, we fix $h' \in \SU_2(2)$ of order $3$ and set $g_2:= \diag(h,h')$. In particular, $|\bfC_G(g_2)| = 9(2^7+1)$, and $e(g_2,\xi^l)$ equals $0$ for $l = 0$ and $1$ for $l = 1,2$. Direct computation shows that $|\zeta_{i,j}(g_2)| = 1$ for all $i,j$. Thus, for $n \geq 9$ and $y \in \{g_1,g_2\}$, \begin{equation}\label{weil-su5} |\zeta_{i,j}(y)| \leq 1. \end{equation} \smallskip (ii) Choosing $$D = \left\{ \begin{array}{rl}(2^{n}+1)(2^{n-1}-1)(2^{n-2}-27)/81, & n \geq 10,\\ 2^{22} \cdot 7 \cdot (2^9+1), & n = 9, \end{array} \right.$$ by Lemma \ref{basic}(ii), for $n \geq 10$, \begin{equation}\label{su21} \left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{3(2^{n-1}+1) \cdot 3 \cdot 2^{n/2+2}}{D} < 0.37. \end{equation} Now we estimate character ratios for $\chi \in \Irr(G)$ with $\chi(1) < D$ and $\chi(g_1)\chi(g_2) \neq 0$. The latter condition implies that $\chi$ has positive $s$-defect and positive $s'$-defect. Applying \cite[Proposition 6.6]{ore} for $n \geq 10$, $\chi$ can only be one of the following: $\bullet$ $3$ linear characters $\lambda_{i}$, $0 \leq i \leq 2$; $\bullet$ at most $6$ of the $9$ Weil characters $\zeta_{i,j}$ with $0 \leq i,j \leq 2$, and $\bullet$ (some of the) $27$ characters $D^\circ_\alpha\lambda_{i}$, $0 \leq i \leq 2$, $\alpha \in \Irr(S)$ with $S := \GU_2(2)$ (see \cite[Proposition 6.3]{ore} for the definition of $D^\circ_\alpha$). This conclusion also holds for $n = 9$.
(Indeed, for $n = 9$, using \cite{Lu} we can check that if $\sigma \in \Irr(\SU_9(2))$ has positive $s$-defect and positive $s'$-defect and $\sigma(1) < D$, then $\sigma$ is the restriction to $\SU_9(2)$ of one of the above characters of $\GU_9(2)$.) Next, the inequality (\ref{weil-su3}) implies that $e(g,\xi^l) \leq n-5$ as $n \geq 9$, so $$|\zeta_{i,j}(g)| \leq \frac{(q+1)q^{n-5}}{q+1} = q^{n-5}.$$ It now follows from (\ref{weil-su5}) that \begin{equation}\label{weil-su6} \sum_{0 \leq i,j \leq q}\frac{|\zeta_{i,j}(g_1)\zeta_{i,j}(g_2)\overline\zeta_{i,j}(g)|}{\zeta_{i,j}(1)} \leq \frac{6 \cdot 2^{n-5}}{(2^n-2)/3} < 0.57. \end{equation} \smallskip (iii) Now we assume that $n \geq 10$. We already observed that $e(g_i,\xi^l) \leq 1$ for $0 \leq l \leq 2$, with equality attained at most once. Thus $g_i$ satisfies the conclusion (i) of \cite[Lemma 6.7]{ore}. Hence it also satisfies the conclusion (ii) of \cite[Proposition 6.9]{ore}. Thus $$|D^\circ_\alpha(g_i)| \leq \left\{ \begin{array}{ll} 2, & \alpha(1) = 1, \alpha \neq 1_S,\\ 3, & \alpha = 1_S,\\ 4, & \alpha(1) = 2. \end{array} \right.$$ Since $|\chi(g)| \leq 3 \cdot 2^{n/2+2}$ for every $\chi \in \Irr(G)$, $$\left|\sum_{\chi = D^\circ_\alpha \lambda_i}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq 3^2 \cdot 2^{n/2+2} \cdot \left( \frac{5 \cdot 2^2 + 3^2}{(2^n-1)(2^{n-1}-4)/9} + \frac{3 \cdot 4^2}{(2^n-2)(2^n-4)/9} \right) < 1.06.$$ Together with (\ref{weil-su6}) and (\ref{su21}), this implies that $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| < 0.57 + 0.37 + 1.06 = 2.$$ Our choice of $g_1$ and $g_2$ ensures that $$\sum^{q}_{i=0}\lambda_i(g_1)\lambda_i(g_2)\overline\lambda_i(g) = 3.$$ Hence $g \in g_1^G \cdot g_2^G$ by Lemma \ref{basic}(i), so we are done since $|g_1|$ and $|g_2|$ are both coprime to $N$. \smallskip (iv) Finally, we handle the case $n = 9$. Now $\chi = D^\circ_\alpha \lambda_i$ can have positive $s$-defect and positive $s'$-defect only when $t=19$, $s=17$, $\alpha = 1_S$.
In this case, $\varphi := D^\circ_{1_S}$ is the unipotent character of $G$ labeled by the partition $(n-2,2)$, of degree $$\varphi(1) = (2^9+1)(2^{8}-4)/9= 14364.$$ Let $\psi$ denote the unipotent character of $G$ labeled by the partition $(n-2,1,1)$, of degree $$\psi(1) = (2^9-2)(2^9+4)/9 = 29240.$$ Again, $\rho := 1_G + \varphi + \psi$ is the permutation character of the action of $G$ on the set of isotropic $1$-dimensional subspaces of the natural module $V$, see e.g. \cite[Table 2]{ST}. Recall we need to consider $\varphi$ only when $s=17$ (and $s' = 43$), so $\psi$ has $s$-defect $0$ and $s'$-defect $0$. In particular, $\psi(g_1) = \psi(g_2) = 0$. Therefore, $$\varphi(g_i) = \rho(g_i) - 1 = 0-1 = -1.$$ Recall that $e(g,\xi^l) \leq 4$ for all unbreakable $g \in \GU_9(q)$ and $0 \leq l \leq 2$. Arguing as in the proof of \cite[Proposition 6.9]{ore}, we obtain $$|\varphi(g)| = |D^\circ_{1_S}(g)| \leq 2^8+1 = 257.$$ It follows that $$\left|\sum_{\chi = \varphi\lambda_i,~0 \leq i \leq 2}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{3 \cdot 257}{14364} < 0.06.$$ By Lemma \ref{basic}(ii), $$\left|\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| \leq \frac{(765 \cdot 1161)^{1/2} \cdot 2^{24}}{2^{22} \cdot 7 \cdot 513} < 1.05.$$ In summary, $$\left|\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{\chi(g_1)\chi(g_2)\oc(g)}{\chi(1)}\right| < 0.57+ 0.06 + 1.05 = 1.68,$$ so we are done again. \hal \begin{lem}\label{base-slu} Let $q = p= 2,3$ and $\e =\pm$. {\rm (i)} Suppose that $3 \leq k \leq 4$ and $k \neq 3$ if $(q,\e) = (2,-)$. Then $\sP(N)$ holds for $\GL^\e_k(q)$ and for every $N = p^at^b$ with $t \neq p$ a prime and $t \nmid (q-\e)$. Also, Theorem \ref{main1} holds for $\PSL^\e_k(q)$. {\rm (ii)} Let $\GL^\e_k(q)$ be one of the following groups: \begin{enumerate}[\rm(a)] \item $\GL_n(2)$ or $\GL_n(3)$, with $5 \leq n \leq 7$; \item $\GU_n(2)$ with $5 \leq n \leq 8$; \item $\GU_n(3)$ with $n = 5,6$. 
\end{enumerate} Then $\sP_u(N)$ holds for $\GL^\e_k(q)$ and for every $N = p^at^b$ with $t \in \cR(\SL^\e_k(q))$. \end{lem} \pf Direct calculations similar to those of Lemma \ref{alt-spor}. \hal \begin{cor}\label{slu-small} Theorem \ref{main1} holds for $G = \PSL^\e_n(q)$ with $q = p^f = 2,3$, $\e = \pm$, $n \geq 3$, and $(n,q,\e) \neq (3,2,-)$. \end{cor} \pf The cases $n = 3,4$ follow from Lemma \ref{base-slu}(i). Suppose now that $n \geq 5$. Then we choose $n_0 = 4$ and apply Proposition \ref{ind-slu}. Note that condition (i) of that proposition is verified by Lemma \ref{base-slu}(i), and (ii) holds by Propositions \ref{sl2}, \ref{sl3}, \ref{su3}, \ref{su2}, and Lemma \ref{base-slu}(ii). Hence we are done by Proposition \ref{ind-slu}. \hal \section{Theorem \ref{main1} for symplectic and orthogonal groups} \subsection{General inductive argument} Recall $\cR(G)$ from \S2, and the notion of unbreakability for symplectic and orthogonal groups from Definition \ref{unbr-spo}. \begin{defn}\label{cond-spo} {\em Given a prime power $q = p^f$, a finite symplectic or orthogonal group $G = \Cl(V) = \Cl_n(q)$, and an integer $N = p^at^b$ with $t > 2$ a prime, we say that $G$ satisfies {\rm (i)} the condition $\sP(N)$ if every $g \in G$ can be written as $g=x^Ny^N$ for some $x,y \in G$; and {\rm (ii)} the condition $\sP_u(N)$ if every {\it unbreakable} $g \in G$ can be written as $g=x^Ny^N$ for some $x,y \in G$.} \end{defn} Our proof of Theorem \ref{main1} for symplectic and orthogonal groups relies on the following inductive argument: \begin{prop}\label{ind-clas} Given a prime power $q = p^f$, an integer $n \geq 4$, let $V = \F_q^n$ be a finite symplectic or quadratic space, and let $G := \Cl(V) = \Cl_n(q)$ be perfect, with $\Cl = \Sp$ or $\Omega$.
Suppose that there is an integer $n_0 \geq 4$ with the following properties: \begin{enumerate}[\rm(i)] \item If $1 \leq k \leq n_0$ and $\Cl_k(q)$ is perfect, then $\sP_u(N)$ holds for $\Cl_k(q)$ and for every $N = p^at^b$ with $t \neq 2, p$ any prime; and \item For each $k$ with $n_0 < k \leq n$, $\sP_u(N)$ holds for $\Cl_k(q)$ and for every $N = p^at^b$ with $t \in \cR(\Cl_k(q))$. \end{enumerate} If $N = s^at^b$ for some primes $s,t$, then the word map $(u,v) \mapsto u^Nv^N$ is surjective on $G/\bfZ(G)$. \end{prop} \pf By Corollary \ref{prime2}, we need to consider only the case $N = p^at^b$ with $t \in \cR(\Cl_n(q))$; in particular, $t > 2$. It suffices to show $\sP(N)$ holds for $G$. According to (ii), $\sP_u(N)$ holds for $G$. Consider a breakable $g \in G$ and write it as $\diag(g_1, \ldots ,g_m)$ lying in the natural subgroup $$\Cl(U_1) \times \ldots \times \Cl(U_m) \cong \Cl_{k_1}(q) \times \ldots \times \Cl_{k_m}(q)$$ that corresponds to an orthogonal decomposition $V = U_1 \oplus \ldots \oplus U_m$. Here, $1 \leq k_i < n$, and for each $i$ either $\Cl_{k_i}(q)$ is perfect or $g_i = \pm 1_{U_i}$. Relabeling the elements $g_i$ suitably, we may assume that there is some $m' \leq m$ such that $g_i$ is unbreakable if $1 \leq i \leq m'$ and $g_i = \pm 1_{U_i}$ if $i > m'$. Hence, according to (i), $\sP_u(N)$ holds for $\Cl_{k_i}(q)$ if $k_i \leq n_0$ and $i \leq m'$. Suppose $k_i > n_0$. Then $\sP_u(N)$ holds for $\Cl_{k_i}(q)$ if $t \in \cR(\Cl_{k_i}(q))$ by (ii). If $t \notin \cR(\Cl_{k_i}(q))$, then by Theorem \ref{prime1} every non-central element of $\Cl_{k_i}(q)$ is a product of two $N'$-elements, so it is a product of two $N$th powers. Furthermore, all central elements of $\Cl_{k_i}(q)$ are $N$th powers. Hence $\sP_u(N)$ holds for $\Cl_{k_i}(q)$ in this case as well. Thus for $i \leq m'$ we can write $g_i = x_i^Ny_i^N$ with $x_i,y_i \in \Cl(U_i)$.
Setting $$U := U_1 \oplus \ldots \oplus U_{m'}, ~W = U_{m'+1} \oplus \ldots \oplus U_{m},~ h := \diag(g_{m'+1}, \ldots, g_m) \in \Iso(W),$$ (where $\Iso(W) = \Sp(W)$ if $\Cl = \Sp$ and $\Iso(W) = \GO(W)$ if $\Cl = \Omega$), we see that either $|h| = 1$, or $p$ and $N$ are odd and $|h| = 2$. In particular, $h = h^N$ in either case. Letting $$x := \diag(x_1, \ldots ,x_{m'}) \in \Cl(U),~~y := \diag(y_1, \ldots, y_{m'}) \in \Cl(U)$$ we deduce that $g = x^Ny^Nh^N = x^N(yh)^N$. Also, $x,y \in G$, $g=\diag(g',h) \in G$ with $g':= \diag(g_1, \ldots ,g_{m'}) \in \Cl(U) \leq G$. It follows that $h \in G$, so $\sP(N)$ holds for $G$, as desired. \hal \subsection{Induction base} \begin{lem}\label{sp24} Let $q = p^f$ and let $N = p^at^b$ with $t \neq 2,p$ any prime. Then $\sP(N)$ holds for $G = \SL_2(q)$ with $q \geq 4$, and for $\Sp_4(q)$ with $q \geq 3$. \end{lem} \pf (i) Consider the case $G = \SL_2(q)$. If $t \nmid (q-1)$, then we check that $X^G \cdot X^G = G$ for $X = x\bfZ(G)$ and $x \in G$ of order $q-1$. On the other hand, if $t \nmid (q+1)$, then $Y_1^G \cdot Y_2^G \supseteq G \setminus \{1\}$ for $Y_i = y^i\bfZ(G)$, $i = 1,2$, and $y \in G$ of order $q+1$. Since $N$ is odd, we are done in both cases. \smallskip (ii) Consider the case $G = \Sp_4(q)$ with $2|q$. The character table of $G$ is given in \cite{E}. Suppose that $t|(q^2+1)$. We fix a regular semisimple $x_1 \in G$ of order $q-1$ belonging to the class $B_1(1,2)$ and a regular semisimple $x_2 \in G$ of order $q+1$ belonging to the class $B_4(1,2)$, in the notation of \cite[Table IV-1]{E}. There are $3$ non-principal characters of $G$ that are nonzero at both $x_1$ and $x_2$: namely, $\theta_{1,2}$ of degree $q(q^2+1)/2$, and $\St$ of degree $q^4$. For every $1 \neq g \in G$, $$\sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(x_1)\chi(x_2)\chi(g)|}{\chi(1)} \leq 2 \cdot \frac{q(q+1)/2}{q(q^2+1)/2} + \frac{q}{q^4} < 1,$$ so we are done by Lemma \ref{basic}(i). Suppose now that $t \nmid (q^2+1)$.
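(The character sum just bounded for the case $t \mid (q^2+1)$ simplifies to $2(q+1)/(q^2+1) + 1/q^3$; the following quick numerical check, illustrative only, confirms that it stays below $1$ for all even prime powers $q \geq 4$.)

```python
# For Sp_4(q), q even, t | (q^2+1): the character-sum bound
#   2 * (q(q+1)/2) / (q(q^2+1)/2) + q / q^4  =  2(q+1)/(q^2+1) + 1/q^3
# must be < 1; check it for small even prime powers q >= 4.
def sp4_even_sum(q):
    return 2 * (q + 1) / (q**2 + 1) + 1 / q**3

for q in [4, 8, 16, 32, 64, 128]:
    assert sp4_even_sum(q) < 1
```

The sum is decreasing in $q$, so checking small $q$ suffices.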
Then at least one of $x_1$ and $x_2$ has order coprime to $N$; denote it by $x$. We also fix a regular semisimple $y \in G$ of order $q^2+1$ belonging to the class $B_5(1)$. There are at most $2$ non-principal characters of $G$ that are nonzero at both $x$ and $y$: namely, $\St$ and possibly a character $\theta$ of degree $\geq q(q-1)^2/2$. For every $1 \neq g \in G$, $$\sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(x)\chi(y)\chi(g)|}{\chi(1)} \leq \frac{q(q-1)/2}{q(q-1)^2/2} + \frac{q}{q^4} < 1,$$ so we are done by Lemma \ref{basic}(i). \smallskip (iii) Assume that $G = \Sp_4(q)$ with $q \geq 7$ odd. The character table of $G$ is given in \cite{Sri}. If $t \nmid (q^2+1)$, then the statement follows from \cite[Theorem 7.3]{GM}. So we assume that $t|(q^2+1)$. Fix a regular semisimple $x_1 \in G$ of order $q^2-1$ belonging to the class $B_2(1)$ and a regular semisimple $x_2 \in G$ of order $(q^2-1)/2$ belonging to the class $B_5(1,1)$, in the notation of \cite{Sri}. There are $3$ non-principal characters of $G$ that are nonzero at both $x_1$ and $x_2$: namely, $\theta_{1,2}$ of degree $q(q^2+1)/2$, and $\St$ of degree $q^4$. For every $1 \neq g \in G$, $$\sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(x_1)\chi(x_2)\chi(g)|}{\chi(1)} \leq 2 \cdot \frac{q(q+1)/2}{q(q^2+1)/2} + \frac{q}{q^4} < 1,$$ so we are done by Lemma \ref{basic}(i). \hal \begin{lem}\label{base-clas} Let $G$ be one of the following groups: \begin{enumerate}[\rm(i)] \item $\Sp_{2n}(2)$ with $3 \leq n \leq 6$, $\Sp_{2n}(3)$ with $2 \leq n \leq 5$, and $\Sp_{2n}(4)$ with $n = 2,3$; \item $\Omega_{2n+1}(3)$ with $3 \leq n \leq 5$; \item $\Omega^\pm_{2n}(2)$ with $4 \leq n \leq 6$, $\Omega^\pm_{2n}(3)$ with $4 \leq n \leq 6$, and $\Omega^\pm_8(4)$. \end{enumerate} Let $N = p^at^b$ where $p$ is the defining characteristic of $G$ and $t \in \cR(G)$. Then $\sP(N)$ holds. \end{lem} \pf Direct calculations similar to those of Lemma \ref{alt-spor}.
\hal \subsection{Induction step: Symplectic groups} \begin{prop}\label{sp-odd-large} Suppose $G = \Sp_{2n}(q)$ with $n \geq 3$, $q = p^f \geq 7$ odd, and $t \in \cR(G)$. Then $\sP_u(N)$ holds for $G$ and for every $N = p^at^b$. \end{prop} \pf Consider an unbreakable $g \in G$; in particular, $$|\bfC_G(g)| \leq \left\{ \begin{array}{ll} 2q^n, & 2|n,\\ q^{2n-1}(q^2-1), & 2 \nmid n \end{array} \right.$$ by Lemma \ref{spunb}. Let $V = \F_q^{2n}$ denote the natural module for $G$. Inside $\Sp_{2n-2}(q)$ we can find a regular semisimple element $x_{-}$ of order $s_{-}=\p(q,2n-2)$, and, if $2|n$, a regular semisimple element $x_+$ of order $s_{+}=\p(q,n-1)$. For $\nu = \pm$, we fix $y_\nu \in \Sp_2(q)$ of order $q-\nu$. \smallskip (a) Here we consider the case $2|n$, and set $$g_1 := \diag(x_{+},y_+),~~g_2 := \diag(x_{-},y_-)$$ so that each $g_i$ is an $N'$-element and $|\bfC_G(g_i)| \leq (q^{n-1}+1)(q+1)$. We also choose $$D = \frac{(q^{n}-1)(q^{n}-q)}{2(q+1)}.$$ It follows that \begin{equation}\label{sp-sum1} \sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{n-1}+1)(q+1)(2q^n)^{1/2}}{D} < 0.54. \end{equation} By \cite[Theorem 5.2]{TZ1} the only non-principal irreducible characters of $G$ of degree less than $D$ are the four irreducible Weil characters: $\eta_{1,2}$ of degree $(q^n-1)/2$ and $\xi_{1,2}$ of degree $(q^n+1)/2$. The choice of $g_i$ implies that $\Ker(g_i \pm 1_V) = 0$. Hence, by \cite[Lemma 2.4]{GT2}, $$|\omega(g_i)|,~ |\omega(zg_i)| \leq 1,$$ where $\omega = \eta_1 + \xi_1$ is a reducible Weil character of $G$ and $z \in G$ is the central involution. Note that $$|\omega(g_i)| = |\eta_1(g_i)+\xi_1(g_i)|,~~|\omega(zg_i)| = |\eta_1(g_i)-\xi_1(g_i)|.$$ It follows that $$|\eta_1(g_i)| = \frac{|(\eta_1(g_i)+\xi_1(g_i))+(\eta_1(g_i)-\xi_1(g_i))|}{2} \leq \frac{|\omega(g_i)|+|\omega(zg_i)|}{2} \leq 1.$$ Similarly, \begin{equation}\label{sp-weil1} |\eta_j(g_i)| \leq 1, ~~|\xi_j(g_i)| \leq 1, ~~ \forall i,j = 1,2.
\end{equation} It follows that $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{4\cdot (2q^n)^{1/2}}{(q^n-1)/2} < 0.24.$$ Together with (\ref{sp-sum1}), this implies that $$\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} < 0.54 + 0.24 = 0.78,$$ whence $g \in g_1^G \cdot g_2^G$. Since both $g_1$ and $g_2$ are $N'$-elements, we are done. \smallskip (b) Next we consider the case $n \geq 3$ odd. Here we choose $$D = \left\{ \begin{array}{ll}(q^{2n}-1)(q^{n-1}-q)/2(q^2-1),& n \geq 5,\\ q^4(q^3-1)(q-1)/2, & n = 3, \end{array} \right.$$ so \begin{equation}\label{sp-sum2} \sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{n-1}+1)(q+1)(q^{2n-1}(q^2-1))^{1/2}}{D} < 0.15. \end{equation} Using \cite[Theorem 1.1]{Ng} for $n \geq 7$ and \cite{Lu} for $n=5$, we show that every non-principal irreducible character of $G$ of degree less than $D$ is one of the following: (b1) four irreducible Weil characters $\eta_{1,2}$, $\xi_{1,2}$ as above; (b2) four unipotent characters $\alpha_\nu$, $\beta_\nu$, $\nu = \pm$, of degree $$\alpha_\nu(1) = \frac{(q^n-\nu)(q^n+\nu q)}{2(q-1)},~~ \beta_\nu(1) = \frac{(q^n+\nu)(q^n+\nu q)}{2(q+1)};$$ (b3) two characters of degree $(q^{2n}-1)/2(q+1)$, two of degree $(q^{2n}-1)/2(q-1)$, $(q-1)/2$ of degree $(q^{2n}-1)/(q+1)$, and $(q-3)/2$ of degree $(q^{2n}-1)/(q-1)$. If $n = 3$, then we check using \cite{Lu} that the characters $\chi \in \Irr(G)$ with $1 < \chi(1) < D$ and such that $\chi$ has positive $s$-defect and positive $s_{-}$-defect are described in (b1) and (b2). Thus, in all cases, in considering characters of $G$ of degree less than $D$ we can restrict to the ones in (b1)--(b3). \smallskip Since $t \in \cR(G)$, there is an $\e = \pm$ such that $t|(q^n-\e)$. Now, we choose a regular semisimple element $g_1$ of order $s \in \cR(G) \setminus \{t\}$ and take $g_2 := \diag(x_{-},y_\e)$.
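(The constants $0.54$, $0.24$ and $0.15$ appearing in (\ref{sp-sum1}), the subsequent Weil estimate, and (\ref{sp-sum2}) can be re-checked numerically; the worst cases occur at $q = 7$ with $n = 4$, resp.\ $n = 3$. The following sketch is illustrative only.)

```python
from math import sqrt

# Sp_{2n}(q), q >= 7 odd.  Part (a) (n even): the bounds 0.54 and 0.24;
# part (b) (n odd): the bound (sp-sum2) < 0.15.
def part_a(q, n):  # n even, n >= 4
    D = (q**n - 1) * (q**n - q) / (2 * (q + 1))
    high = (q**(n - 1) + 1) * (q + 1) * sqrt(2 * q**n) / D
    weil = 4 * sqrt(2 * q**n) / ((q**n - 1) / 2)
    return high, weil

def part_b(q, n):  # n odd, n >= 3
    if n == 3:
        D = q**4 * (q**3 - 1) * (q - 1) / 2
    else:
        D = (q**(2 * n) - 1) * (q**(n - 1) - q) / (2 * (q**2 - 1))
    return (q**(n - 1) + 1) * (q + 1) * sqrt(q**(2 * n - 1) * (q**2 - 1)) / D

for q in [7, 9, 11, 13, 25, 27]:
    for n in [4, 6, 8, 10]:
        h, w = part_a(q, n)
        assert h < 0.54 and w < 0.24 and h + w < 0.78
    for n in [3, 5, 7, 9]:
        assert part_b(q, n) < 0.15
```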
In particular, $|\bfC_G(g_i)| \leq (q^{n-1}+1)(q+1)$. Note that all characters in (b3) have $s$-defect $0$, so vanish at $g_1$. Next, $\beta_\e$ and $\alpha_{-\e}$ have $s$-defect $0$, whence $$\beta_\e(g_1) = \alpha_{-\e}(g_1) = 0.$$ Likewise, $\beta_+$ and $\alpha_+$ have $s_-$-defect $0$, whence $$\beta_+(g_2) = \alpha_+(g_2) = 0.$$ Consider the case $\e = -$. We have shown that $\chi(g_1)\chi(g_2) = 0$ for $\chi = \alpha_+$, $\beta_+$, $\beta_-$, and $$\alpha_+(g_1)=\alpha_+(g_2) = 0.$$ On the other hand, $\rho := 1_G + \alpha_+ + \alpha_-$ is just the permutation character of the action of $G$ on the set of $1$-spaces of $V$, cf.\ \cite[Table 2]{ST}. The choice of $g_i$ ensures that $\rho(g_i) = 0$, whence $$\alpha_-(g_1) = \alpha_-(g_2) = -1.$$ Assume now that $\e = +$. We have shown that $\chi(g_1)\chi(g_2) = 0$ for $\chi = \alpha_+$, $\beta_+$, $\alpha_-$, and $$\beta_+(g_1)=\beta_+(g_2) = 0.$$ On the other hand, as shown in \cite{T}, $\zeta := \beta_+ + \beta_-$ is just the restriction to $G$ of the unipotent Weil character $\zeta_{0,0}$ of $\GU_{2n}(q)$ (as defined in the proof of Proposition \ref{su-large}) when we embed $$G = \Sp_{2n}(q) \hookrightarrow \SU_{2n}(q) \lhd \GU_{2n}(q).$$ The choice of $g_i$ ensures that $\zeta(g_i) = 0$, whence $$\beta_-(g_1) = \beta_-(g_2) = -1.$$ The same arguments as in (a) show that (\ref{sp-weil1}) holds in this case as well. Observe that, for $\mu =\pm 1$, $U_{\mu} := \Ker(g-\mu \cdot 1_V)$ has dimension at most $n$, as otherwise it cannot be totally isotropic, so $g$ acts as multiplication by $\mu$ on a $2$-dimensional non-degenerate subspace of $U_\mu$, contrary to the assumption that $g$ is unbreakable. Using \cite[Lemma 2.4]{GT2}, we see that $$|\omega(g)|,~|\omega(zg)| \leq q^{n/2},$$ so, arguing as in the above proof of (\ref{sp-weil1}), we obtain $$|\eta_i(g)|, ~|\xi_i(g)| \leq q^{n/2}.$$ Certainly, $|\gamma(g)| \leq |\bfC_G(g)|^{1/2} \leq (q^{2n-1}(q^2-1))^{1/2}$ for $\gamma = \alpha_-$, $\beta_-$.
In summary, $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{4\cdot q^{n/2}}{(q^n-1)/2} + \frac{(q^{2n-1}(q^2-1))^{1/2}}{(q^n-1)(q^n-q)/2(q-1)} < 0.53.$$ Together with (\ref{sp-sum2}), this implies that $$\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} < 0.15 + 0.53 = 0.68,$$ whence $g \in g_1^G \cdot g_2^G$. Since both $g_1$ and $g_2$ are $N'$-elements, we are done. \hal To handle the symplectic groups over $\F_3$, we need an explicit description of low-degree complex characters of $\Sp_{2n}(3)$. \begin{lem}\label{sp3-lowdim} Let $G = \Sp_{2n}(3)$ with $n \geq 6$ and let $D := (3^{2n}-1)(3^{n-1}-3)/16$. Then $$\{ \chi \in \Irr(G) \mid 1 < \chi(1) < D\}$$ consists of the following $13$ characters: \begin{enumerate}[\rm(i)] \item four irreducible Weil characters $\eta$, $\bar\eta$ of degree $(3^n-1)/2$, $\xi$, $\bar\xi$ of degree $(3^n+1)/2$; \item four characters $\SQ(\xi)$, $\wedge^2(\eta)$, $\xi\bar\xi-1_G$, and $\eta\bar\eta-1_G$, of respective degree $$\frac{(3^n+1)(3^n+3)}{8},~~\frac{(3^n-1)(3^n-3)}{8},~~\frac{(3^n-1)(3^n+3)}{4},~~ \frac{(3^n+1)(3^n-3)}{4};$$ \item two characters $\SQ(\eta)$, $\wedge^2(\xi)$ of degree $(3^{2n}-1)/8$, and three characters $\xi\bar\eta$, $\bar\xi\eta$, $\xi\eta = \bar\xi\bar\eta$ of degree $(3^{2n}-1)/4$. \end{enumerate} Also, $\SQ(\eta) = \bar\wedge^2(\xi)$. \end{lem} \pf Applying \cite[Theorem 1.1]{Ng}, we deduce that the degrees, and the multiplicity for each degree of non-principal irreducible character of $G$ of degree less than $D$ are as listed above. The proof of \cite[Proposition 5.4]{MT} shows that the six characters $\SQ(\xi)$, $\wedge^2(\eta)$, $\xi\bar\xi-1_G$, $\eta\bar\eta-1_G$, $\SQ(\eta)$, and $\wedge^2(\xi)$ have the degrees listed in (ii) and (iii). It also shows that $\xi\bar\eta$ and $\bar\xi\eta$ are two distinct irreducible constituents (of a certain real character $\tau$) of degree $(3^{2n}-1)/4$, so they are non-real. 
On the other hand, $\xi\eta$ is the unique irreducible constituent of degree $(3^{2n}-1)/4$ of a certain real character $\sigma$, whence it must be real. We have therefore identified the three characters of degree $(3^{2n}-1)/4$. Finally, $$[\SQ(\eta)+\wedge^2(\eta),\bar\SQ(\xi)+\bar\wedge^2(\xi)] = [\eta^2,\bar\xi^2] = [\xi\eta,\bar\xi\bar\eta]= 1,$$ so $\SQ(\eta) = \bar\wedge^2(\xi)$, since the involved characters are all irreducible, and only $\SQ(\eta)$ and $\bar\wedge^2(\xi)$ have equal degree. \hal \begin{prop}\label{sp3-large} Suppose $G = \Sp_{2n}(3)$ with $n \geq 6$, and $t \in \cR(G)$. Then $\sP_u(N)$ holds for $G$ and for every $N = 3^at^b$. \end{prop} \pf (i) Consider an unbreakable $g \in G$; in particular, \begin{equation}\label{sp3-cent} |\bfC_G(g)| \leq 16 \cdot 3^{2n+2} \end{equation} by Lemma \ref{spunb}. Let $V = \F_3^{2n}$ denote the natural module for $G$. Inside $\Sp_{2n-2}(3)$ we can find a regular semisimple element $x_{-}$ of order $s_{-}=\p(3,2n-2)$ and a regular semisimple element $x_+$ of order $s_{+}=\p(3,n-1)$. We fix $y \in \Sp_2(3)$ of order $4$. If $n$ is even, we set $$g_1 := \diag(x_{+},y),~~g_2 := \diag(x_{-},y),$$ whereas for odd $n$, we choose a regular semisimple $g_1 \in G$ of order $s \in \cR(G) \setminus \{t\}$ and set $g_2:= \diag(x_-,y)$. In particular, $g_i$ is an $N'$-element and $|\bfC_G(g_i)| \leq 4 \cdot (3^{n-1}+1)$ for $i = 1,2$. We also choose $$D = \frac{(3^{2n}-1)(3^{n-1}-3)}{16}.$$ Then the characters $\chi \in \Irr(G)$ with $1 < \chi(1)< D$ are described in Lemma \ref{sp3-lowdim}. The choice of $g_i$ implies that $\Ker(g_i \pm 1_V) = 0$. Hence, as in the proof of Proposition \ref{sp-odd-large}, \begin{equation}\label{sp3-weil1} |\chi(g_i)| \leq 1, ~~ \forall \chi \in \{\xi,\eta\}. \end{equation} On the other hand, $\dim_{\F_3}\Ker(g \pm 1_V) \leq 4$ by Lemma \ref{blocks}. 
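(Incidentally, the degrees in Lemma \ref{sp3-lowdim}(ii), (iii) are forced by the Weil degrees in (i) via the identities $\SQ(\chi)(1) = \chi(1)(\chi(1)+1)/2$ and $\wedge^2(\chi)(1) = \chi(1)(\chi(1)-1)/2$; the following exact-integer check, illustrative only, confirms this.)

```python
# Degrees in Lemma sp3-lowdim for Sp_{2n}(3):
# xi(1) = (3^n+1)/2, eta(1) = (3^n-1)/2, and the symmetric/alternating
# square and product degrees listed in parts (ii), (iii).
for n in range(6, 20):
    xi = (3**n + 1) // 2
    eta = (3**n - 1) // 2
    assert xi * (xi + 1) // 2 == (3**n + 1) * (3**n + 3) // 8      # SQ(xi)
    assert eta * (eta - 1) // 2 == (3**n - 1) * (3**n - 3) // 8    # wedge^2(eta)
    assert xi * xi - 1 == (3**n - 1) * (3**n + 3) // 4             # xi*bar(xi) - 1_G
    assert eta * eta - 1 == (3**n + 1) * (3**n - 3) // 4           # eta*bar(eta) - 1_G
    assert eta * (eta + 1) // 2 == (3**(2 * n) - 1) // 8           # SQ(eta)
    assert xi * (xi - 1) // 2 == (3**(2 * n) - 1) // 8             # wedge^2(xi)
    assert xi * eta == (3**(2 * n) - 1) // 4                       # xi*eta etc.
```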
Arguing as in part (a) of the proof of Proposition \ref{sp-odd-large}, we obtain \begin{equation}\label{sp3-weil2} |\chi(g)| \leq 3^2, ~~ \forall \chi \in \{\xi,\eta\}. \end{equation} It follows that \begin{equation}\label{sp3-sum1} \sum_{\chi \in \{\xi,\bar\xi,\eta,\bar\eta\}}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{4\cdot 3^2}{(3^n-1)/2} < 0.099. \end{equation} Let $\cX$ denote the set of nine characters listed in Lemma \ref{sp3-lowdim}(ii), (iii). Observe that $x_\nu$ has prime order $s_\nu$ for $\nu = \pm$. Hence $$\Ker(g_i^2 -1_V) = 0,~~\dim_{\F_3}\Ker(g_i^2+1_V) \leq 2.$$ This in turn implies that $$|\omega(g_i^2)| \leq 1,~~|\omega(zg_i^2)| \leq 3$$ for the reducible Weil character $\omega = \xi+\eta$ and the central involution $z \in G$. Arguing as in part (a) of the proof of Proposition \ref{sp-odd-large}, we obtain $$|\chi(g_i^2)| \leq (1+3)/2 = 2, ~~ \forall \chi \in \{\xi,\eta\}.$$ Together with (\ref{sp3-weil1}), this implies that \begin{equation}\label{sp3-weil3} |\chi(g_i)| \leq 3/2, ~~ \forall \chi \in \cX. \end{equation} \smallskip (ii) Here we assume that $n \geq 7$. If $2|n$, then the four characters listed in Lemma \ref{sp3-lowdim}(ii) have either $s_+$-defect $0$ or $s_-$-defect $0$. If $2\nmid n$, then the five characters listed in Lemma \ref{sp3-lowdim}(iii) have $s$-defect $0$. Thus, at most five characters from $\cX$ can be nonzero at both $g_1$ and $g_2$. Also, $|\chi(g)| \leq 4 \cdot 3^{n+1}$ for all $\chi \in \Irr(G)$ by (\ref{sp3-cent}).
Using (\ref{sp3-weil3}), we see that $$\sum_{\chi \in \cX}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{5 \cdot (3/2)^2 \cdot 4 \cdot 3^{n+1}}{(3^n-1)(3^n-3)/8} < 0.495.$$ On the other hand, $$\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{4(3^{n-1}+1) \cdot 4 \cdot 3^{n+1}}{D} < 0.354.$$ Together with (\ref{sp3-sum1}), these estimates imply that $$\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} < 0.099 + 0.495 + 0.354 = 0.948,$$ whence $g \in g_1^G \cdot g_2^G$. Since both $g_1$ and $g_2$ are $N'$-elements, we are done. \smallskip (iii) We may now assume that $n=6$. In this case $\cR(G) = \{7, 13, 73\}$ and $|g_1| = 44$, $|g_2| = 244$. Using \cite{Lu}, we check that $G$ has exactly $30$ irreducible characters $\chi$ that have both positive $11$-defect and positive $61$-defect: namely, $1_G$, four Weil characters, five characters from $\cX$, listed in Lemma \ref{sp3-lowdim}(iii), four characters $\psi_{1,2,3,4}$ with two of each of the degrees $$D = 15 \cdot (3^{12}-1), ~~D_1 := 15 \cdot (3^4+1) \cdot (3^8+3^4+1),$$ and $16$ more, of degree larger than $D_2 := 3^{19}$. In particular, \begin{equation}\label{sp3-sum2} \sum_{\chi \in \Irr(G),~\chi(1) \geq D_2}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{4 \cdot (3^{n-1}+1) \cdot 4 \cdot 3^{n+1}}{D_2} < 0.0074. \end{equation} Next we strengthen the bound on $|\chi(g)|$ for $\chi \in \cX$. Consider $\lambda = \pm 1$ and write $g = uv=vu$, with $u$ unipotent and $v$ semisimple. Let $\tilde V := V \otimes_{\F_3}\overline\F_3$. Note that if $w \in U_\lambda := \Ker(g^2-\lambda \cdot 1_{\tilde V})$, then $w$ belongs to $W_\mu := \Ker(v-\mu \cdot 1_{\tilde V})$ for some $\mu$ with $\mu^2 = \lambda$. Now $g^2$ acts on $W_\mu$ as $\lambda u'^2$, where $u' := u_{W_\mu}$ is unipotent.
Next, observe that $$\dim_{\overline\F_3}\Ker(u'^2-1_{W_\mu}) = \dim_{\overline\F_3}\Ker(u'-1_{W_\mu}).$$ It then follows from Lemma \ref{blocks} that $$\dim_{\overline\F_3}U_\lambda = \dim_{\overline\F_3}\Ker(g-\mu_0\cdot 1_{\tilde V}) + \dim_{\overline\F_3}\Ker(g+\mu_0 \cdot 1_{\tilde V}) \leq 8,$$ where $\mu_0$ is a fixed square root of $\lambda$. In turn, this implies by \cite[Lemma 2.4]{GT2} that $$|\omega(g^2)|,~|\omega(zg^2)| \leq 3^4.$$ Arguing as in part (a) of the proof of Proposition \ref{sp-odd-large}, we obtain $$|\chi(g^2)| \leq 3^4, ~~ \forall \chi \in \{\xi,\eta\}.$$ Using this bound and (\ref{sp3-weil2}), we see that $$|\chi(g)| \leq 3^4, ~~ \forall \chi \in \cX.$$ Since only five characters from $\cX$ can be nonzero at both $g_1$ and $g_2$, this last estimate together with (\ref{sp3-weil3}) yields \begin{equation}\label{sp3-sum3} \sum_{\chi \in \cX}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{5 \cdot (3/2)^2 \cdot 3^4}{(3^{2n}-1)/8} < 0.0138. \end{equation} Finally, we estimate character ratios for the four characters $\psi_{1,2,3,4}$ of degree $D$ and $D_1$. Since $|g_2| = 4 \cdot 61$, $\chi(g_2) = 0$ if and only if $\chi \in \Irr(G)$ has degree divisible by $61$. Using \cite{Lu}, we check that $$\Irr_{61'}(G) := \{ \chi \in \Irr(G) \mid 61 \nmid \chi(1) \}$$ consists of exactly $343$ characters. (Another way to check it is to observe that since $P \in \Syl_{61}(G)$ is cyclic, the {\it McKay conjecture} holds for $G$, i.e. $$|\Irr_{61'}(G)| = |\Irr_{61'}(\bfN_G(P))|.$$ Direct computation shows that $$\bfN_G(P) = (C_{244} \rtimes C_{10}) \times \Sp_2(3)$$ has exactly $343$ irreducible characters of degree coprime to $61$.) Certainly, $$\Irr_{61'}(G) \setminus \{\psi_{1,2,3,4}\}$$ is a union of some $\Gal(\overline\Q/\Q)$-orbits.
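(At this point, the estimates (\ref{sp3-sum2}) and (\ref{sp3-sum3}) for $n = 6$, as well as the agreement of $D = 15 \cdot (3^{12}-1)$ with the earlier choice $(3^{2n}-1)(3^{n-1}-3)/16$, can be re-checked numerically; illustrative only.)

```python
# n = 6, G = Sp_12(3): the sums (sp3-sum2) and (sp3-sum3).
n = 6
D2 = 3**19
sum2 = 4 * (3**(n - 1) + 1) * 4 * 3**(n + 1) / D2          # claimed < 0.0074
sum3 = 5 * (3 / 2)**2 * 3**4 / ((3**(2 * n) - 1) / 8)      # claimed < 0.0138
assert sum2 < 0.0074
assert sum3 < 0.0138
# The cutoff degree D = 15*(3^12-1) agrees with (3^{2n}-1)(3^{n-1}-3)/16:
assert 15 * (3**12 - 1) == (3**(2 * n) - 1) * (3**(n - 1) - 3) // 16
```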
Hence, by Lemma \ref{orbit}, $$\sum_{\chi \in \Irr(G) \setminus \{\psi_{1,2,3,4}\}}|\chi(g_2)|^2 = \sum_{\chi \in \Irr_{61'}(G) \setminus \{\psi_{1,2,3,4}\}}|\chi(g_2)|^2 \geq |\Irr_{61'}(G) \setminus \{\psi_{1,2,3,4}\}| = 343-4 = 339.$$ Since $\sum_{\chi \in \Irr(G)}|\chi(g_2)|^2 = |\bfC_G(g_2)| = 4 \cdot (3^5+1) = 976$, $$\sum^4_{j=1}|\psi_j(g_2)|^2 \leq 976-339 = 637.$$ Recall that $|\psi_j(g)| \leq 4 \cdot 3^7$ by (\ref{sp3-cent}) and $|\psi_j(g_1)|^2 \leq |\bfC_G(g_1)| = 4 \cdot (3^5-1) = 968.$ By the Cauchy-Schwarz inequality, $$\sum^4_{j=1}\frac{|\psi_j(g_1)\psi_j(g_2)\overline\psi_j(g)|}{\psi_j(1)} \leq \frac{4 \cdot 3^7 \cdot 968^{1/2}}{D} \cdot (\sum^4_{j=1}|\psi_j(g_2)|^2)^{1/2} \leq \frac{4 \cdot 3^7 \cdot (968 \cdot 637)^{1/2}}{15 \cdot (3^{12}-1)} < 0.8618. $$ Together with (\ref{sp3-sum1}), (\ref{sp3-sum2}), (\ref{sp3-sum3}), this implies that $$\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} < 0.099 + 0.0138 + 0.8618 + 0.0074 = 0.982,$$ whence $g \in g_1^G \cdot g_2^G$. Since both $g_1$ and $g_2$ are $N'$-elements, we are again done. \hal \begin{prop}\label{sp-even} Suppose $G = \Sp_{2n}(q)$ with $n \geq 3$, $2 | q$, and $t \in \cR(G)$. Assume that $n \geq 4$ if $q = 4$, and $n \geq 7$ if $q=2$. Then $\sP_u(N)$ holds for $G$ and for every $N = 2^at^b$. \end{prop} \pf Consider an unbreakable $g \in G$; in particular, $$|\bfC_G(g)| \leq B := \left\{ \begin{array}{ll} q^{2n}(q^2-1), & 2|n, q \geq 4\\ 2q^{2n}(q+1), & 2 \nmid n, q \geq 4 \\ 9 \cdot q^{2n+9}, & q = 2\end{array} \right.$$ by Lemma \ref{spunb}. Let $V = \overline\F_q^{2n}$ denote the natural module for $G$. Inside $\Sp_{2n-2}(q)$ we can find a regular semisimple element $x_{-}$ of order $s_{-}=\p(q,2n-2)$, and, if $2|n$, a regular semisimple element $x_+$ of order $s_{+}=\p(q,n-1)$. We fix $y \in \Sp_2(q)$ of order $q+1$.
Let $\cW$ denote the set of $q+3$ Weil characters $$\alpha_n, ~\beta_n, ~~\rho^1_n, ~\rho^2_n,~~ \zeta^i_n,~1 \leq i \leq q/2,~~\tau^j_n,~ 1 \leq j \leq q/2-1$$ (as described in \cite[Table 1]{GT2}). Assuming $n \geq 4$ and choosing $$D := \frac{(q^{2n}-1)(q^{n-1}-1)(q^{n-1}-q^2)}{2(q^4-1)},$$ we see by \cite[Corollary 6.2]{GT2} that $\cW$ is precisely the set $\{ \chi \in \Irr(G) \mid 1 < \chi(1) < D\}$. \smallskip (i) Here we consider the case $2|n$, and set $$g_1 := \diag(x_{+},y),~~g_2 := \diag(x_{-},y)$$ so that each $g_i$ is an $N'$-element and $|\bfC_G(g_i)| \leq (q^{n-1}+1)(q+1)$. In particular, \begin{equation}\label{sp2-sum1} \sum_{\tiny{\begin{array}{l}\chi \in \Irr(G),\\ \chi(1) \geq D\end{array}}} \frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{n-1}+1)(q+1) \cdot B^{1/2}}{D} < \left\{ \begin{array}{ll}0.8293, & n = 4\\ 0.1956, & n \geq 6. \end{array} \right. \end{equation} If $\chi \in \{\alpha_n,\beta_n,\rho^1_n,\rho^2_n\}$ then $\chi$ has $s_\nu$-defect $0$ for some $\nu = \pm$, so $\chi(g_1)\chi(g_2) = 0$. For $\gamma \in \overline\F_q^\times$, the choice of $g_i$ implies that $\dim_{\overline\F_q}\Ker(g_i - \gamma \cdot 1_V)$ equals $0$ if $\gamma^{q-1} = 1$, and is at most $1$ if $\gamma^{q+1} = 1$; in fact, it equals $1$ for exactly two primitive $(q+1)$th roots of unity in $\overline\F_q^\times$. Hence, by formulae (1) and (4) of \cite{GT2}, $$|\tau^j_n(g_i)| = 0,~~|\zeta^j_n(g_i)| \leq b,$$ where $b := 2$ if $q \geq 4$ and $b := 1$ if $q = 2$. For $n \geq 6$, it follows that $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{q}{2} \cdot \frac{b^2 \cdot B^{1/2}}{(q^{2n}-1)/(q+1)} < 0.7956.$$ Suppose that $n = 4$ and $q \geq 4$. Observe that $\dim_{\overline\F_q}\Ker(g - \gamma \cdot 1_V) \leq 4$ for $\gamma \in \overline\F_q^\times$ with $\gamma^{q + 1} = 1$. (Indeed, this bound is obvious if $\gamma \neq 1$.
If $\gamma = 1$, it follows from the condition that $g$ is unbreakable.) Hence, formula (4) of \cite{GT2} implies that $|\zeta^j_n(g)| \leq q^4$, so $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{q}{2} \cdot \frac{b^2 \cdot q^{4}}{(q^{8}-1)/(q+1)} < 0.1564.$$ Together with (\ref{sp2-sum1}), this implies that $$\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} < \left\{ \begin{array}{ll}0.8293 + 0.1564 = 0.9857, & n = 4\\ 0.1956 + 0.7956 = 0.9912, & n \geq 6 \end{array} \right.$$ whence $g \in g_1^G \cdot g_2^G$. Since both $g_1$ and $g_2$ are $N'$-elements, we are done. \smallskip (ii) From now on we assume $2 \nmid n$. By Proposition \ref{ratio} we may assume that $n \geq 5$. We choose a regular semisimple element $g_1$ of order $s \in \cR(G) \setminus \{t\}$ and take $g_2 := \diag(x_{-},y)$. In particular, again $|\bfC_G(g_i)| \leq (q^{n-1}+1)(q+1)$. Note that all characters $\zeta^j_n$ and $\tau^j_n$ have $s$-defect $0$, so vanish at $g_1$. Next, the choice of $g_i$ implies that, for $\gamma \in \overline\F_q^\times$, $\dim_{\overline\F_q}\Ker(g_i - \gamma \cdot 1_V)$ equals $0$ if $\gamma^{q-1} = 1$, and is at most $1$ if $\gamma^{q+1} = 1$; in fact, it equals $1$ for exactly two primitive $(q+1)$th roots of unity in $\overline\F_q^\times$. Using formulae (1), (3), (4), and (6) of \cite{GT2}, we obtain $$(\rho^1_n+\rho^2_n)(g_i) = -1,~~(\alpha_n+\beta_n)(g_1) = 1,~~(\alpha_n+\beta_n)(g_2) = -1.$$ Furthermore, exactly one character among $\alpha_n$, $\beta_n$, and exactly one character among $\rho^1_n$, $\rho^2_n$, have $s$-defect zero. It follows that $$|\chi(g_1)| \leq 1,~\forall \chi \in \{\alpha_n,\beta_n,\rho^1_n,\rho^2_n\}.$$ Likewise, $\beta_n$ and $\rho^2_n$ have $s_-$-defect $0$, so $$\beta_n(g_2) = \rho^2_n(g_2) = 0,~~|\alpha_n(g_2)| = |\rho^1_n(g_2)| = 1.$$ We also observe that $\alpha_n(g_1) = 0$ if $s|(q^n-1)$ and $\rho^1_n(g_1) = 0$ if $s|(q^n+1)$. 
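The case-(i) estimates above for $\Sp_{2n}(q)$ with $2|n$ can be re-checked numerically. The script below (a sanity check under the stated choices of $B$ and $D$, not part of the proof, using $|\oc(g)| \leq B^{1/2}$) evaluates the two worst cases $(n,q) = (4,4)$ and $(8,2)$:

```python
from math import sqrt

def tail_bound(n, q):
    """Right-hand side of (sp2-sum1): tail over characters of degree >= D."""
    B = q**(2*n) * (q**2 - 1) if q >= 4 else 9 * q**(2*n + 9)
    D = (q**(2*n) - 1) * (q**(n-1) - 1) * (q**(n-1) - q**2) / (2 * (q**4 - 1))
    return (q**(n-1) + 1) * (q + 1) * sqrt(B) / D

# Weil-character contribution for n = 4, q = 4 (so b = 2):
weil44 = (4 / 2) * 2**2 * 4**4 / ((4**8 - 1) / (4 + 1))
# Weil-character contribution for n >= 6: worst case (n, q) = (8, 2), b = 1,
# with B = 9 * 2^(2n+9):
weil82 = (2 / 2) * 1**2 * sqrt(9 * 2**25) / ((2**16 - 1) / (2 + 1))
```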
We have shown that, among the characters in $\cW$, exactly one character can be nonzero at both $g_1$ and $g_2$. Denoting this character by $\psi$, \begin{equation}\label{sp2-psi} |\psi(g)|/\psi(1) \leq 0.95, ~|\psi(g)| \leq B^{1/2},~~|\psi(g_i)| \leq 1. \end{equation} Here, the first bound follows from the main result of \cite{Glu}. \smallskip (iii) Assume in addition that $n \geq 9$ if $q = 2$. Now $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{B^{1/2}}{(q^n-1)(q^n-q)/2(q+1)} < 0.8003.$$ On the other hand, $$\sum_{\chi \in \Irr(G),~\chi(1) \geq D} \frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{n-1}+1)(q+1) \cdot B^{1/2}}{D} < 0.0478.$$ It follows that $$\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} < 0.8003 + 0.0478 = 0.8481,$$ whence $g \in g_1^G \cdot g_2^G$. Since both $g_1$ and $g_2$ are $N'$-elements, we are done. \smallskip (iv) Now we consider the case $(n,q)= (7,2)$ and choose $$D_1 := \frac{q^{35}(q^7-1)(q^7-q)}{2(q+1)}.$$ Using \cite{Lu}, we check that there is only one character $\chi \in \Irr(G)$ with $1 < \chi(1) < D_1$ that has both positive $s$-defect and $s_-$-defect, namely the character $\psi$ described in (ii). Now using (\ref{sp2-psi}) $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D_1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{|\psi(g)|}{\psi(1)} < 0.95.$$ On the other hand, $$\sum_{\chi \in \Irr(G),~\chi(1) \geq D_1} \frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{6}+1)(q+1) \cdot B^{1/2}}{D_1} < 0.01.$$ It follows that $$\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} < 0.95 + 0.01 = 0.96,$$ and we are done again. \hal \subsection{Induction step: Orthogonal groups} \begin{prop}\label{so-odd} Suppose $G = \Omega_{2n+1}(q)$ with $n \geq 3$, $q = p^f$ odd, and $t \in \cR(G)$. Assume that $n \geq 6$ if $q=3$. Then $\sP_u(N)$ holds for $G$ and for every $N = p^at^b$. 
\end{prop} \pf By Corollary \ref{main-real} and Proposition \ref{ratio}, we may assume that $q = 3$ and $n \geq 6$. Let $V = \overline\F_q^{2n+1}$ denote the natural module for $G$, and let $$F_0 := \{ \gamma \in \overline\F_q^\times \mid \gamma^{q \pm 1} = 1\}.$$ Consider an unbreakable $g \in G$; in particular, $|\bfC_G(g)| \leq B := 2^4 \cdot q^{2n+3}$ by Lemma \ref{orthogunb}. Let $\cX$ denote the set of $q+4$ characters described in \cite[Proposition 5.7]{ore}: each is of the form $D^\circ_\alpha$ for $\alpha \in \Irr(S)$ and $S := \Sp_2(q)$. Choosing $$D := q^{4n-8},$$ we see by \cite[Corollary 5.8]{ore} that $\cX$ is precisely the set $\{ \chi \in \Irr(G) \mid 1 < \chi(1) < D\}$. If $\gamma \in F_0$, then $$\dim_{\overline\F_q}\Ker(g-\gamma \cdot 1_V) \leq 4$$ by Lemma \ref{blocks}. Following the proof of \cite[Proposition 5.11]{ore}, one can show that \begin{equation}\label{so1-weil1} |D_\alpha(g)| \leq q^4 \cdot \alpha(1). \end{equation} Now we choose $g_1 = g_2$ to be a regular semisimple element of order $s \in \cR(G) \setminus \{t\}$, so that $g_i$ is an $N'$-element and $|\bfC_G(g_i)| \leq (q^{n-1}+1)(q+1)$. In particular, \begin{equation}\label{so1-sum1} \sum_{\tiny{\begin{array}{l}\chi \in \Irr(G),\\ \chi(1) \geq D\end{array}}} \frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{n-1}+1)(q+1) \cdot B^{1/2}}{D} < 0.35. \end{equation} The choice of $g_i$ implies that $$\dim_{\overline\F_q}\Ker(g_i-\gamma \cdot 1_V) \leq 1$$ for all $\gamma \in F_0$. Following the proof of \cite[Proposition 5.11]{ore}, one can show that \begin{equation}\label{so1-weil2} |D_\alpha(g_i)| \leq q \cdot \alpha(1). \end{equation} In the notation of \cite[Table I]{ore}, if $\alpha \neq \xi_{1,2}$, then $D^\circ_\alpha = D_{\alpha}$.
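The tail estimate (\ref{so1-sum1}) is sharpest at $n = 6$; the following script (verification only, not part of the proof) confirms the bound $0.35$ for $q = 3$ and $6 \leq n \leq 11$:

```python
from math import sqrt

def tail_bound(n, q=3):
    """Right-hand side of (so1-sum1): B = 2^4 * q^(2n+3), D = q^(4n-8)."""
    B = 2**4 * q**(2*n + 3)
    D = q**(4*n - 8)
    return (q**(n-1) + 1) * (q + 1) * sqrt(B) / D

vals = [tail_bound(n) for n in range(6, 12)]
```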
In this case, it follows from (\ref{so1-weil1}) and (\ref{so1-weil2}) for $\chi = D^\circ_\alpha$ that $$\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{q^6 \cdot \alpha(1)^3}{\chi(1)} < (1.1)\frac{q^6 \cdot \alpha(1)^2}{(q^{2n}-1)/(q^2-1)}.$$ In the case $\alpha = \xi_{1,2}$ (of degree $(q+1)/2$), for $\chi = D^\circ_\alpha = D_\alpha-1_G$, $$\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^4\alpha(1)+1)(q\alpha(1)+1)^2}{\chi(1)} < (1.4)\frac{q^6 \cdot \alpha(1)^2}{(q^{2n}-1)/(q^2-1)}.$$ It follows that $$\begin{array}{ll}\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\dfrac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} & \leq (1.4) \dfrac{q^6}{(q^{2n}-1)/(q^2-1)} \cdot \sum_{\alpha \in \Irr(S)}\alpha(1)^2 \\ \\ & = (1.4)\dfrac{q^6 \cdot q(q^2-1)}{(q^{2n}-1)/(q^2-1)} < 0.37.\end{array}$$ Together with (\ref{so1-sum1}), this implies that $$\sum_{\chi \in \Irr(G),~\chi(1) > 1}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} < 0.35 + 0.37 = 0.72,$$ whence $g \in g_1^G \cdot g_2^G$. Since both $g_1$ and $g_2$ are $N'$-elements, we are done. \hal \begin{prop}\label{so2-even} Suppose $G = \O^\e_{2n}(q)$ with $q = 2,4$, $\e = \pm$, and $t \in \cR(G)$. Assume that $n \geq 5$ if $q = 4$, and $n \geq 7$ if $q=2$. Then $\sP_u(N)$ holds for $G$ and for every $N = 2^at^b$. \end{prop} \pf (i) Consider an unbreakable $g \in G$; in particular, $$|\bfC_G(g)| \leq B := \left\{ \begin{array}{rl} 3 \cdot q^{2n+6}, & q = 2\\ 25 \cdot q^{2n-2}, & q = 4\end{array} \right.$$ by Lemma \ref{orthogunb}. We also choose $$D := \left\{ \begin{array}{ll}q^{4n-10}, & n \geq 6, (n,q) \neq (7,2),\\ q^{4n-8}, & (n,q) = (7,2),\\ q^3(q^3-1)(q^5-1)(q-1)^2/2, & n = 5, q=4.\end{array} \right.$$ Consider the prime $s \in \cR(G) \setminus \{t\}$. If $s|(q^{n-1}+1)$ with $q = 2$ and $\e = +$, then we choose $g_1 = \diag(x_1,y_1)$, where $x_1 \in \O^-_{2n-2}(q)$ is regular semisimple of order $s$ and $y_1 \in \O^-_2(q)$ has order $q+1$. In all other cases, we choose a regular semisimple $g_1 \in G$ of order $s$.
If $s|(q^{n-1}+1)$ and $(n,q,\e) = (7,2,+)$, then choose $g_2 := \diag(x_2,y_2)$, where $x_2 \in \O^{-}_{2n-4}(q)$ is regular semisimple of order $s_{\e}=\p(q,2n-4)=11$, and $y_2 \in \O^-_4(q)$ of order $\p(q,4) = 5$. In all other cases, let $g_2 := g_1$. Our choices of $g_i$ imply that each $g_i$ is an $N'$-element, and $|\bfC_G(g_i)| \leq (q+1)(q^{n-1}+1)$. It follows that \begin{equation}\label{so2-sum1} \sum_{\tiny{\begin{array}{l}\chi \in \Irr(G),\\ \chi(1) \geq D\end{array}}} \frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{n-1}+1)(q+1) \cdot B^{1/2}}{D} < \left\{ \begin{array}{ll}0.10, & n \geq 5, q =4\\ 0.33, & n \geq 7, q =2 . \end{array} \right. \end{equation} \smallskip (ii) Now we estimate character values for the characters in $$\cX:= \{ \chi \in \Irr(G) \mid 1 < \chi(1) < D\}.$$ By \cite[Theorem 1.3]{Ng}, when $n \geq 6$ and $(n,q) \neq (7,2)$ the set $\cX$ consists of $q+1$ characters: $\bullet$ $\varphi$ of degree $(q^n-\e)(q^{n-1}+\e q)/(q^2-1)$, $\bullet$ $\psi$ of degree $(q^{2n}-q^2)/(q^2-1)$, $\bullet$ $\zeta_i$ of degree $(q^n-\e)(q^{n-1}-\e)/(q+1)$ for $1 \leq i \leq q/2$, and $\bullet$ $\sigma$ of degree $(q^n-\e)(q^{n-1}+\e)/(q-1)$ if $q=4$.\\ If $(n,q) = (7,2)$ and $\chi \in \cX$ has positive $s$-defect, then using \cite{Lu} we show that $\chi$ must be one of these $q+1$ characters. Likewise, if $(n,q) = (5,4)$ and $\chi \in \cX$ has positive $s$-defect and positive $s_\e$-defect, then using \cite{Lu} we check that $\chi$ is again one of these characters. Let $V = \F_q^{2n}$ denote the natural module for $G$. Then \begin{equation}\label{so2-weil1} \rho_0 = 1_G + \varphi +\psi \end{equation} is the rank $3$ permutation character of the action of $G$ on singular $1$-spaces of $V$, see \cite[Table 1]{ST}. 
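The estimate (\ref{so2-sum1}) can likewise be checked numerically. The script below (verification only, not part of the proof, with $B$ and $D$ as chosen above) evaluates the relevant cases:

```python
from math import sqrt

def tail_bound(n, q):
    """Right-hand side of (so2-sum1) with B, D as chosen in the proof."""
    B = 3 * q**(2*n + 6) if q == 2 else 25 * q**(2*n - 2)
    if (n, q) == (7, 2):
        D = q**(4*n - 8)
    elif n >= 6:
        D = q**(4*n - 10)
    else:                       # n = 5, q = 4
        D = q**3 * (q**3 - 1) * (q**5 - 1) * (q - 1)**2 / 2
    return (q**(n-1) + 1) * (q + 1) * sqrt(B) / D

q4 = [tail_bound(n, 4) for n in range(5, 10)]
q2 = [tail_bound(n, 2) for n in range(7, 12)]
```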
It is shown in \cite{GMT} that \begin{equation}\label{so2-weil2} \rho_1 = 1_G + \psi + \sigma + \sum^{q/2}_{i=1}\zeta_i \end{equation} is the permutation character of the action of $G$ on non-singular $1$-spaces of $V$ (we use the convention that $\sigma = 0$ for $q=2$). We can identify $G$ with its dual group $G^*$, cf.\ \cite{C}. Then the non-identity elements of the natural subgroup $\O^-_2(q)$ of $G$ break into $q/2$ conjugacy classes with representatives $t_i$, $1 \leq i \leq q/2$, and $$\bfC_G(t_i) = \O^{-\e}_{2n-2}(q) \times \O^-_2(q).$$ All these semisimple elements have connected centralizer in the underlying algebraic group. Hence, these classes yield $q/2$ semisimple characters in $\Irr(G)$, which can then be identified with $\zeta_i$, $1 \leq i \leq q/2$. If $q = 4$ then $\zeta_1$ and $\zeta_2$ are Galois conjugate and $\Q(\zeta_i) = \Q(\sqrt{5})$. (Indeed, let $\omega$ denote a primitive $5$th root of unity in $\C$, so that $\Q(\omega+\omega^{-1}) = \Q(\sqrt{5})$. Let $\gamma:\omega \mapsto \omega^2$ be a generator of $\Gal(\Q(\omega)/\Q)$. Following the proof of \cite[Lemma 9.1]{NT}, one can show that $\Q(\zeta_i) \subseteq \Q(\omega)$, and $\gamma$ sends $\zeta_1$ to $\zeta_2$. Moreover, since $t_i$ is real, $\Q(\zeta_i)$ is fixed by $\gamma^2:\omega \mapsto \omega^{-1}$. It follows that $\Q(\zeta_i) \subseteq \Q(\omega)^{\gamma^2} = \Q(\sqrt{5})$. As $\zeta_1$ and $\zeta_2$ are distinct Galois conjugates, we conclude that $\Q(\zeta_i) = \Q(\sqrt{5})$.) In particular, since the $g_j$ are chosen to be $5'$-elements, $\zeta_i(g_j) \in \Q$, so \begin{equation}\label{so2-weil3} \zeta_1(g_i) = \zeta_2(g_i) \end{equation} when $q=4$. \smallskip (iii) Here we determine character values for the element $g_1$ of order $s$. Suppose that $s=\p(q,2n-2)$. Then $\psi$ has $s$-defect $0$, so $\psi(g_1) = 0$. Similarly, $\sigma(g_1) = 0$ if $\e = +$ and $\zeta_i(g_1) = 0$ if $\e = -$.
Next, $$(\rho_0(g_1),\rho_1(g_1)) = \left\{\begin{array}{ll} (0,q+1), & \e = +,q=4,\\ (0,0), & \e = +,q=2,\\ (2,q-1), & \e = -. \end{array}\right.$$ It follows by (\ref{so2-weil1})--(\ref{so2-weil3}) that $$\varphi(g_1) = \pm 1,\mbox{ and }\left\{ \begin{array}{ll} \zeta_i(g_1) = 2, & \mbox{ if } \e = +, q= 4,\\ \zeta_i(g_1) = -1, & \mbox{ if } \e = +, q= 2,\\ \sigma(g_1) = q-2, & \mbox{ if }\e = -.\end{array} \right.$$ Suppose that either $s=\p(q,2n)$ and $\e = -$, or $s = \p(q,n)$ with $2 \nmid n$ and $\e = +$. Then $\varphi$, $\zeta_i$, and $\sigma$ all have $s$-defect $0$, so they all vanish at $g_1$. Also, $\rho_0(g_1) = 0$, so (\ref{so2-weil1}) implies that $\psi(g_1) = -1$. The remaining case is that $s=\p(q,n-1)$, $\e = +$, and $2|n$. Then $\psi$ and $\zeta_i$ have $s$-defect $0$, so they vanish at $g_1$. Also, $\rho_0(g_1) = 2$, so (\ref{so2-weil1}) implies that $\varphi(g_1) = 1$. Similarly, $\rho_1(g_1) = q-1$, so (\ref{so2-weil2}) implies that $\sigma(g_1) = q-2$. \smallskip (iv) Suppose $n \geq 5$ and $q = 4$. The analysis in (iii) shows that there are at most $3$ characters $\chi \in \cX$ that can be nonzero at $g_1 = g_2$, in which case $|\chi(g_1)\chi(g_2)| \leq 4$. Also, one character in $\cX$ has degree $\geq d:= (q^n-1)(q^{n-1}-q)/(q^2-1)$ and all others have degree $\geq 3d$. It follows that $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{4 \cdot 5 \cdot q^{n-1}}{(q^{n}-1)(q^{n-1}-q)/(q^2-1)} \cdot \left(1+ \frac{2}{3}\right) < 0.497.$$ Suppose $n \geq 7$ and $q = 2$. The analysis in (iii) shows that there are at most $2$ characters $\chi \in \cX$ that can be nonzero at $g_1$, in which case $|\chi(g_1)| \leq 1$.
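The $q = 4$ estimate above is tightest at $n = 5$; a quick numerical check (not part of the proof, with $d$ and $B^{1/2} = 5q^{n-1}$ as in the display):

```python
# q = 4: at most three characters contribute, |chi(g1)chi(g2)| <= 4,
# |oc(g)| <= B^(1/2) = 5*q^(n-1), and the degrees are >= d, 3d, 3d,
# giving the factor (1 + 2/3).
def weil_sum(n, q=4):
    d = (q**n - 1) * (q**(n-1) - q) / (q**2 - 1)
    return 4 * 5 * q**(n-1) / d * (1 + 2/3)

vals = [weil_sum(n) for n in range(5, 10)]
```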
If $n \geq 8$, then $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{2 \cdot 3^{1/2} \cdot q^{n+3}}{(q^{n}-1)(q^{n-1}-q)/(q^2-1)} < 0.658.$$ If $(n,q) = (7,2)$, then the analysis in (iii) shows that the only case where two characters $\chi \in \cX$ are nonzero at $g_1$ is when $\e = +$, $s=\p(q,2n-2)$ and $\chi = \varphi$, $\zeta_1$. In this case, $\varphi$ has $s_\e$-defect $0$, so it vanishes at $g_2$. Furthermore, $$\rho_0(g_2) = \rho_1(g_2) = 0,$$ so (\ref{so2-weil1}) and (\ref{so2-weil2}) imply that $$\psi(g_2) = -1,~~\zeta_1(g_2) = 0,$$ so no character $\chi \in \cX$ can be nonzero at both $g_1$ and $g_2$. In all other cases, only one $\chi \in \cX$ can be nonzero at $g_1=g_2$ and $|\chi(g_1)\chi(g_2)| \leq 1$. It follows that $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{3^{1/2} \cdot q^{n+3}}{(q^{n}+1)(q^{n-1}-q)/(q^2-1)} < 0.666.$$ Combining with (\ref{so2-sum1}), we are done in all cases. \hal \begin{prop}\label{so3-even} Suppose that $G = \O^\e_{2n}(q)$, where $n \geq 5$, $q = 3,5$, and $\e = \pm$. Assume that $t \in \cR(G)$, $2 \nmid n$ if $q=5$, and $n \geq 7$ if $q=3$. Then $\sP_u(N)$ holds for $G$ and for every $N = q^at^b$. \end{prop} \pf (i) Consider an unbreakable $g \in G$; in particular, $$|\bfC_G(g)| \leq B = \left\{ \begin{array}{ll}2^6 \cdot q^{2n+4}, & q=3\\ 6^2 \cdot q^{2n-2}, & q = 5 \end{array} \right.$$ by Lemma \ref{orthogunb}. We also choose $$D := \left\{ \begin{array}{ll}q^{4n-10}, & (n,q) \neq (7,3), (5,5),\\ q^{19}, & (n,q) = (7,3), \\ q^3(q^3-1)(q^5-1)(q-1)^2/2, & (n,q) = (5,5). \end{array} \right.$$ For $(n,q) \neq (5,5)$, we fix regular semisimple $g_1=g_2 \in G$ of order $s \in \cR(G) \setminus \{t\}$. Suppose now that $(n,q) = (5,5)$. 
First, we fix a regular semisimple $u_1 \in \O^{-\e}_6(5)$ of order $\ell := 7$ if $\e = +$ and $\ell:= 31$ if $\e = -$, and a regular semisimple $u_2 \in \O^-_4(5)$ of order $13$, and set $g_1 = \diag(u_1,u_2)$. If $t \nmid (q^5-\e)$, we fix a regular semisimple $g_2 \in G$ of order $s \in \cR(G) \setminus \{t\}$. Note that the central involution $z$ of $\SO^-_8(5)$ does {\it not} belong to $\O^-_8(5)$. Also, a generator $v_2$ of $\SO^{-\e}_2(5)$ does not belong to $\O^{-\e}_2(5)$ and has two distinct eigenvalues $\nu$, $\nu^{-1}$ of order $q-\e$. Choosing a regular semisimple $v_1 \in \O^-_8(5)$ of order $s$, we can now set $g_2 := \diag(zv_1,v_2)$ in the case $t|(q^5-\e)$. Our choice of $g_i$ implies that each $g_i$ is an $N'$-element, and $|\bfC_G(g_i)| \leq (q+1)(q^{n-1}+1)$. It follows that \begin{equation}\label{so3-sum1} \sum_{\tiny{\begin{array}{l}\chi \in \Irr(G),\\ \chi(1) \geq D\end{array}}} \frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q+1)(q^{n-1}+1)\cdot B^{1/2}}{D} < \left\{ \begin{array}{ll}0.14, & (n,q) \neq (7,3) \\ 0.40, & (n,q) = (7,3). \end{array} \right. \end{equation} Also, $g_2$ is always $s$-singular. Furthermore, $g_1$ is $\ell$-singular when $(n,q) = (5,5)$. 
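The bounds in (\ref{so3-sum1}) can be re-evaluated directly; the following script (verification only, not part of the proof) uses $B$ and $D$ as chosen above:

```python
from math import sqrt

def tail_bound(n, q):
    """Right-hand side of (so3-sum1) with B, D as chosen in the proof."""
    B = 2**6 * q**(2*n + 4) if q == 3 else 6**2 * q**(2*n - 2)
    if (n, q) == (7, 3):
        D = q**19
    elif (n, q) == (5, 5):
        D = q**3 * (q**3 - 1) * (q**5 - 1) * (q - 1)**2 / 2
    else:
        D = q**(4*n - 10)
    return (q + 1) * (q**(n-1) + 1) * sqrt(B) / D

generic = [tail_bound(n, 3) for n in range(8, 12)] + [tail_bound(5, 5), tail_bound(7, 5)]
special = tail_bound(7, 3)
```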
\smallskip (ii) Now we estimate character values for the characters in $$\cX:= \{ \chi \in \Irr(G) \mid 1 < \chi(1) < D\}.$$ By \cite[Theorem 1.4]{Ng}, when $(n,q) \neq (7,3)$, $(5,5)$, the set $\cX$ consists of $q+4$ characters: $\bullet$ $\varphi = D_{1_S}-1_G$ of degree $(q^n-\e)(q^{n-1}+q\e)/(q^2-1)$, $\bullet$ $\psi = D_{\St}-1_G$ of degree $(q^{2n}-q^2)/(q^2-1)$, $\bullet$ $D_{\xi_i}$ of degree $(q^n-\e)(q^{n-1}+\e)/2(q-1)$ for $1 \leq i \leq 2$, $\bullet$ $D_{\eta_i}$ of degree $(q^n-\e)(q^{n-1}-\e)/2(q+1)$ for $1 \leq i \leq 2$, $\bullet$ $D_{\theta_j}$ of degree $(q^n-\e)(q^{n-1}-\e)/(q+1)$ for $1 \leq j \leq (q-1)/2$, and $\bullet$ $D_{\chi_j}$ of degree $(q^n-\e)(q^{n-1}+\e)/(q-1)$ for $1 \leq j \leq (q-3)/2.$\\ The characters $D_\alpha$ of $G$ with $\alpha \in \Irr(S)$ and $S := \Sp_2(q)$ are constructed in \cite[Proposition 5.7]{ore}. If $(n,q) = (7,3)$ and $\chi \in \cX$ has positive $s$-defect, then using \cite{Lu} we show that $\chi$ must be one of these $q+4$ characters. If $(n,q) = (5,5)$ and $\chi \in \cX$ has positive $s$-defect and positive $\ell$-defect, then using \cite{Lu} we again show that $\chi$ must be one of these characters. Let $V = \overline\F_q^{2n}$ denote the natural module for $G$ and let $F_0 := \{ \lambda \in \overline\F_q^\times \mid \lambda^{q \pm 1}=1\}$. By Lemma \ref{blocks}, $$\dim_{\overline\F_q}\Ker(g - \lambda \cdot 1_V) \leq c$$ for all $\lambda \in F_0$, where $c := 4$ for $q = 3$ and $c := 2$ for $q = 5$. Hence, arguing as in the proof of \cite[Proposition 5.11]{ore}, we show that \begin{equation}\label{so3-weil1} |D_\alpha(g)| \leq q^c\cdot \alpha(1) \end{equation} for every $\alpha \in \Irr(S)$. On the other hand, by our choice of $g_i$, $$\dim_{\overline\F_q}\Ker(g_i - \lambda \cdot 1_V) \leq e_i$$ for all $\lambda \in F_0$ and $i =1,2$, where $e_i := 2$ if $(n,q) \neq (5,5)$, $e_1 := 0$ and $e_2 \leq 1$ if $(n,q) = (5,5)$.
Arguing as in the proof of \cite[Proposition 5.11]{ore}, we obtain \begin{equation}\label{so3-weil2} |D_\alpha(g_i)| \leq q^{e_i} \cdot \alpha(1) \end{equation} for every $\alpha \in \Irr(S)$. \smallskip (iii) Recall that $D^\circ_\alpha = D_\alpha - k_\alpha \cdot 1_G$ where $k_\alpha = 1$ if $\alpha = 1_S$ or $\St$ and $k_\alpha = 0$ otherwise, cf.\ \cite[Table II]{ore}. Suppose that $q=3$ and $n \geq 8$. Then $\alpha(1) \leq 3$ for all $\alpha \in \Irr(S)$. It now follows from (\ref{so3-weil1}) and (\ref{so3-weil2}) that $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{7 \cdot (3^3+1)^2 \cdot (3^5+1)}{(3^{n}-1)(3^{n-1}-3)/8} < 0.75.$$ Together with (\ref{so3-sum1}), this implies that $g \in g_1^G\cdot g_2^G$, so we are done in this case. Assume now that either $q = 5$ or $(n,q) =(7,3)$; in particular, either $s|(q^n-\e)$ or $s|(q^{n-1}+1)$. In the former case, all $\chi \in \cX$ but $\psi = D_\St-1_G$ have $s$-defect $0$, so vanish at $g_i$. Also, $\St(1) = q$, whence by (\ref{so3-weil1}) and (\ref{so3-weil2}) $$\sum_{\chi \in \cX}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{e_1+1}+1)(q^{e_2+1}+1)(q^{c+1}+1)}{(q^{2n}-q^2)/(q^2-1)} < 0.33.$$ In the latter case, the only $\chi \in \cX$ that have positive $s$-defect are $\varphi = D_{1_S}-1_G$, and $k = (q+1)/2$ or $(q+3)/2$ characters $D_{\alpha_i}$, $1 \leq i \leq k$ with $$\sum^k_{i=1}\alpha_i(1)^2 \leq (q+1)^2(q-2)/2.$$ Moreover, $\varphi(1) \geq d := (q^n-1)(q^{n-1}-q)/(q^2-1)$ and $D_{\alpha_i}(1) \geq \alpha_i(1)d$. In this case, using (\ref{so3-weil1}) and (\ref{so3-weil2}), we obtain $$\sum_{\chi \in \cX}\frac{|\chi(g_1)\chi(g_2)\oc(g)|}{\chi(1)} \leq \frac{(q^{e_1}+1)(q^{e_2}+1)(q^c+1)}{d} + \sum^k_{i=1} \frac{q^{e_1+e_2}\alpha_i(1)^2 \cdot q^c\alpha_i(1)}{\alpha_i(1)d}< 0.44.$$ In either case, together with (\ref{so3-sum1}), this implies that $g \in g_1^G \cdot g_2^G$, so we are again done.
\hal \subsection{Completion of the proof of Theorem \ref{main1} for classical groups} \begin{prop}\label{clas-main} Theorem \ref{main1} holds for all finite non-abelian simple symplectic or orthogonal groups. \end{prop} \pf Let $G = \Cl_n(q)$ be such that $G/\bfZ(G)$ is simple non-abelian and $q = p^f$. By Corollary \ref{prime2}(i), we need to prove the surjectivity of the word map $(x,y) \mapsto x^Ny^N$ only in the case $N = p^at^b$ with $t \in \cR(G)$. In particular, $t \neq 2,p$. First we consider the case $G = \Sp_{2m}(q)$. By Lemma \ref{sp24}, we may assume that $m \geq 3$. We are also done by Corollary \ref{main-real} if $q \equiv 1 \bmod 4$. For the remaining cases, we take $n_0 = 4$ if $q \geq 7$, $n_0=6$ if $q = 3,4$, and $n_0 = 12$ if $q=2$, and set $n = 2m$. Note that condition (i) of Proposition \ref{ind-clas} holds by Lemmas \ref{sp24} and \ref{base-clas}. Next, condition (ii) of Proposition \ref{ind-clas} holds by Propositions \ref{sp-odd-large}, \ref{sp3-large}, and \ref{sp-even}. Hence we are done by Proposition \ref{ind-clas}. Next assume that $G = \Omega^\pm_{2m}(q)$ with $m \geq 3$ and $2|q$. Then we are done by Proposition \ref{ratio} if $q \geq 8$ and $m \geq 4$. Since \begin{equation}\label{iso1} \begin{array}{l} \O_3(q) \cong \PSL_2(q),~\Omega^+_4(q) \cong \SL_2(q) \circ \SL_2(q), ~\Omega^-_4(q) \cong \PSL_2(q^2),\\ \O_5(q) \cong \PSp_4(q),~ ~\Omega^+_6(q) \cong \SL_4(q)/Z,~\Omega^-_6(q) \cong \SU_4(q)/Z \end{array} \end{equation} (for all $q$ and for a suitable central $2$-subgroup $Z$), cf.\ \cite[Proposition 2.9.1]{KL}, we are done in the case $m = 3$ by the results of \S4. In the remaining cases of $q=2,4$ and $n=2m \geq 8$, we take $n_0 = 8$ for $q=4$ and $n_0=12$ for $q=2$. Note that condition (i) of Proposition \ref{ind-clas} holds by Lemmas \ref{sp24} and \ref{base-clas} for $8 \leq k \leq n_0$, and by the isomorphisms in (\ref{iso1}) for $k = 4,6$. Next, condition (ii) of Proposition \ref{ind-clas} holds by Proposition \ref{so2-even}.
Hence we are done by Proposition \ref{ind-clas}. Finally, let $G = \Omega^\pm_{n}(q)$ with $n \geq 7$ and $q$ odd. Then we take $n_0 = 6$ if $q > 3$ and $n_0=12$ if $q = 3$. Note that condition (i) of Proposition \ref{ind-clas} holds for $1 < k \leq 6$ by the isomorphisms in (\ref{iso1}) and Lemma \ref{sp24}, and for $7 \leq k \leq n_0$ by Lemma \ref{base-clas}. Next, condition (ii) of Proposition \ref{ind-clas} holds by Proposition \ref{so-odd} when $2\nmid k$, by Proposition \ref{ratio} if $2|k$, $q \geq 5$, and $(k,q) \neq (10,5)$, $(14,5)$, and by Proposition \ref{so3-even} if $2|k$, and $q=3$ or $(k,q) = (10,5)$, $(14,5)$. Hence we are done by Proposition \ref{ind-clas}. \hal \section{Theorem \ref{main1} for exceptional groups} \begin{lem}\label{sz} Theorem \ref{main1} holds for the Suzuki groups $\tw2 B_2(q^2)$ with $q^2 \geq 8$ and the Ree groups $\tw2 G_2(q^2)$ with $q^2 \geq 27$. \end{lem} \pf Let $S$ be one of these groups. Note that $|S|$ is divisible by at least four different odd primes. Hence we can find a prime divisor $\ell > 2$ of $|S|$ that is coprime to both $q^2$ and $N$, and a semisimple $x \in S$ of order $\ell$. By \cite[Theorem 7.1]{GM}, $x^S \cdot x^S \supseteq S \setminus \{1\}$, whence the claim follows. \hal \begin{lem}\label{exc-base} Theorem \ref{main1} holds for the following: $\tw2 F_4(2)'$; $G_2(q)$ with $q=3,4$; $\tw3 D_4(q)$ with $q = 2,4$; $F_4(2)$; $E_6(2)$; $\tw2 E_6(2)$. \end{lem} \pf The cases $\tw2F_4(2)'$, $G_2(3)$, $G_2(4)$ were checked directly using their character tables. For the remainder, by Corollary \ref{prime2}(i), it suffices to prove Theorem \ref{main1} for $N = p^at^b$, where $p$ is the defining characteristic and $t \in \cR(G) = \{r,s\}$, which is $\{13\}$, $\{241\}$, $\{13,17\}$, $\{73,17\}$, $\{19,17\}$, respectively. This was done by direct calculations similar to those of Lemma \ref{alt-spor}. \hal In what follows, let $\Phi'_{24} := q^4 +q^3\sqrt{2}+q^2+q\sqrt{2}+1$.
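The sets $\cR(G) = \{r,s\}$ of primitive prime divisors quoted in the proof of Lemma \ref{exc-base} can be recomputed with a few lines of code (a verification aid, not part of the proof; here `ppd(p, n)` returns the primitive prime divisors of $p^n - 1$):

```python
def prime_divisors(m):
    """Set of prime divisors of m, by trial division."""
    s, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            s.add(d)
            m //= d
        d += 1
    if m > 1:
        s.add(m)
    return s

def ppd(p, n):
    """Primitive prime divisors of p^n - 1: primes dividing p^n - 1
    but dividing no p^k - 1 with 1 <= k < n."""
    s = prime_divisors(p**n - 1)
    for k in range(1, n):
        s -= prime_divisors(p**k - 1)
    return sorted(s)
```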
\begin{lemma}\label{exc-norm} Let $S$ be one of $G_2(q)$, $\tw3 D_4(q)$, $\tw2 F_4(q)$, $F_4(q)$, $E_6^\e(q)$, $E_7(q)$, $E_8(q)$ where $q=p^f$. Define the primes $r,s$ as follows: \[ \begin{array}{ccccc} \hline S & r & s & |\bfN_S(T_r):T_r| & |\bfN_S(T_s):T_s| \\ \hline G_2(q) & \p(p,3f) & & 6 & \\ \tw3 D_4(q) & \p(p,12f) & & 4 & \\ \tw2 F_4(q^2) & \p(2,24f)|\Phi'_{24} & & 12 & \\ F_4(q) & \p(p,12f) & \p(p,8f) & 12 & 8 \\ E_6(q) & \p(p,9f) & \p(p,8f) & 9 & 8 \\ \tw2 E_6(q) & \p(p,18f) & \p(p,8f) & 9 & 8 \\ E_7(q) & \p(p,18f) & \p(p,7f) & 18 & 14 \\ E_8(q) & \p(p,24f) & \p(p,20f) & 24 & 20 \\ \hline \end{array} \] For $t \in \{r,s\}$ let $x_t \in \cX_t$, where $\cX_t$ is the set of $t$-singular elements in $S$. {\rm (i)} $\bfC_S(x_t) = T_t$, where $T_t$ is a uniquely determined maximal torus of $S$. {\rm (ii)} $|\bfN_S(T_t):T_t|$ is as in the table. {\rm (iii)} $|\cX_t| < |S|/|N_S(T_t):T_t|$. {\rm (iv)} If $S \not\cong G_2(q)$, $\tw3 D_4(q)$, then $|\cX_t| <|S|/8$. \end{lemma} \pf We know that $x_t$ lies in some maximal torus $T_t$ of $S$. The orders of maximal tori are given by \cite{car}. Inspection shows that for each $t$ there is a unique possible order $|T_t|$ divisible by $t$, as follows, where in most cases we give also the label of $T_t$ in \cite{car} (and $d = (3,q-\e)$ and $e = (2,q-1)$): \[ \begin{array}{lll} \hline S & |T_r|, \hbox{ label} & |T_s|, \hbox{ label} \\ \hline G_2(q) & q^2+q+1 & \\ \tw3 D_4(q) & q^4-q^2+1 & \\ \tw2 F_4(q^2) & \Phi'_{24} & \\ F_4(q) & q^4-q^2+1, \;F_4 & q^4+1,\;B_4 \\ E_6^\e(q) & (q^6+\e q^3+1)/d,\;E_6(a_1)& (q^4+1)(q^2-1)/d,\; D_5 \\ E_7(q) & (q^6-q^3+1)(q+1)/e,\; E_7 & (q^7-1)/e,\;A_6 \\ E_8(q) & q^8-q^4+1,\;E_8(a_1) & q^8-q^6+q^4-q^2+1,\;E_8(a_2) \\ \hline \end{array} \] Write $S = (\cG^F)'$, where $\cG$ is the corresponding adjoint algebraic group and $F$ a Frobenius endomorphism of $\cG$. By \cite[II,4.4]{SS}, $\bfC_{\cG}(x_t)$ is connected. Then $\bfC_{\cG}(x_t) = \cD\cZ$ where $\cD$ is semisimple and $\cZ$ is a torus. 
If $\cD\ne 1$ then $\cD^F$ contains a subsystem $\SL_2(q)$ or $\SU_3(q)$ subgroup $D$, so $x_t \in \bfC_S(D)$. However $\bfC_S(D)$ does not have order divisible by $t$. Hence $\cD=1$ and $\bfC_{\cG}(x_t)$ is a maximal torus, whence $\bfC_S(x_t) = T_t$, proving (i). Part (ii) follows from the tables in \cite[pp.\ 312--315]{car}. By (i), every element of $\cX_t$ lies in a unique conjugate of $T_t$, and the number of these conjugates is $|S:\bfN_S(T_t)|$; also, $1 \notin \cX_t$. This gives (iii), and (iv) follows immediately. \hal \begin{prop}\label{exc} Theorem \ref{main1} holds for the simple exceptional group $S = G/\bfZ(G)$, where $G$ is one of the following groups: \begin{enumerate}[\rm(i)] \item $G_2(q)$, $q \geq 5$; \item $\tw3 D_4(q)$, $q \neq 2,4$; \item $\tw2 F_4(q^2)$, $q^2 \geq 8$; \item $F_4(q)$, $q \geq 5$; \item $E_6(q)_\SC$ or $\tw2 E_6(q)_\SC$, $q \geq 3$; \item $E_7(q)_\SC$ or $E_8(q)$. \end{enumerate} \end{prop} \pf By Corollary \ref{prime2}(i), it suffices to prove Theorem \ref{main1} in the case $N = p^at^b$ with $p|q$ and $t \in \cR(G) = \{r,s\}$. First we consider the case $S = G_2(q)$ with $q \geq 5$; in particular, $t = \p(p,3f)$ (with $q = p^f$ as usual). Note that $|\cX_p|/|S| \leq 2/(q-1)-1/(q-1)^2 < 0.31$ for $q \geq 7$ by \cite[Theorem 3.1]{GL}, and $|\cX_p|/|S| \leq 1-0.68 = 0.32$ for $q = 5$ by \cite{Lu2}. Lemma \ref{exc-norm} implies that $$\frac{|\cX_t|}{|S|}+\frac{|\cX_p|}{|S|} < \frac{1}{6} + 0.32 < \frac{1}{2},$$ so we are done by Corollary \ref{prime2}(ii). We can argue similarly in other cases. In the case $S = \tw3 D_4(q)$ with $q \geq 5$, by Lemma \ref{exc-norm} and \cite[Theorem 3.1]{GL}, $$\frac{|\cX_t|}{|S|}+\frac{|\cX_p|}{|S|} < \frac{1}{4} + \frac{1}{q-1} \leq \frac{1}{2},$$ so we are done. Also, note that the odd $q$ case is covered by Corollary \ref{main-real}. Suppose $S = \tw2 F_4(q^2)$ with $q^2 \geq 8$. 
By \cite[Theorem 7.3]{GM}, $S \setminus \{1\} \subseteq x^S \cdot x^S$ for a regular semisimple $x \in S$ of order $\Phi'_{24}$. It remains therefore to consider the case $t|\Phi'_{24}$. By Lemma \ref{exc-norm} and \cite[Theorem 3.1]{GL}, $$\frac{|\cX_t|}{|S|}+\frac{|\cX_p|}{|S|} < \frac{1}{12} + \frac{2}{q^2-1} - \frac{1}{(q^2-1)^2} < \frac{1}{2}.$$ Next we consider the case $S = F_4(q)$ with $q \geq 5$. Note that $|\cX_p|/|S| \leq 2/(q-1)-1/(q-1)^2 < 0.3056$ for $q \geq 7$ by \cite[Theorem 3.1]{GL}, and $|\cX_p|/|S| \leq 1-0.6619 = 0.3381$ for $q = 5$ by \cite{Lu2}. It follows by Lemma \ref{exc-norm} that $$\frac{|\cX_t|}{|S|}+\frac{|\cX_p|}{|S|} \leq \frac{1}{8} + 0.3381 < \frac{1}{2}.$$ For cases (v) and (vi), we note that $|\cX_p|/|G| \leq 1/(q-1) \leq 1/3$ for $q \geq 4$ by \cite[Theorem 3.1]{GL}, and $|\cX_p|/|G| \leq 1-0.6627 = 0.3373$ for $q = 3$ by \cite{Lu2}. It follows by Lemma \ref{exc-norm} that $$\frac{|\cX_t|}{|G|}+\frac{|\cX_p|}{|G|} \leq \frac{1}{8} + 0.3373 < \frac{1}{2},$$ so we are done. If $G = E_8(q)$, then $G$ has two maximal tori $T_1$ and $T_2$ of orders $q^8-1$ and $\Phi_{15}$, respectively, and $t$ is coprime to both $|T_1|$ and $|T_2|$. According to \cite[Theorem 10.1]{LuM}, $T_i$ contains a regular semisimple element $s_i$ for $i =1,2$, such that $\Irr(G)$ contains exactly two irreducible characters $\chi$ with $\chi(s_1)\chi(s_2) \neq 0$, namely $1_G$ and $\St$. Since $|\St(s_i)| = 1$ and $|\St(g)| < \St(1)$ for $g \neq 1$, it follows that $$\sum_{\chi \in \Irr(G)}\frac{\chi(s_1)\chi(s_2)\oc(g)}{\chi(1)} \geq 1 - \frac{|\St(g)|}{\St(1)} > 0,$$ so $g \in s_1^G \cdot s_2^G$ for all $1 \neq g \in G$, and we are done. Finally, let $G = E_7(2)$, so that $t \in \{19, 127\}$. Consider $s_1 \in G$ of order $73$ and $s_2 \in G$ of order $43$. Using \cite{Lu}, we check that the only characters $\chi \in \Irr(G)$ that have positive $73$-defect and positive $43$-defect are $1_G$ and $\St$. Hence $s_1^G \cdot s_2^G = G \setminus \{1\}$ and we are done as above.
\hal \begin{lem}\label{f4a} {\rm (i)} Let $G = F_4(4)$ and let $x$ be a non-semisimple element of $G$ such that $|\bfC_G(x)|> 3\cdot 4^{19}$. Then there is a quasisimple classical subgroup $S$ of $G$ in characteristic $2$ such that $|\bfZ(S)|$ is a $3$-power and $x \in S$. {\rm (ii)} Let $G = F_4(3)$ and let $x$ be a non-semisimple element of $G$ such that $|\bfC_G(x)|> 3^{19}$. Then there is a quasisimple classical subgroup $S$ of $G$ in characteristic $3$ such that $|\bfZ(S)|$ is a $2$-power and $x \in S$. \end{lem} \pf (i) Suppose first that $x$ is unipotent. Following \cite[Table 22.2.4]{LSei}, the bound on $|\bfC_G(x)|$ forces $x$ to be in one of the following unipotent classes: \[ A_1,\,\tilde A_1,\, (\tilde A_1)_2,\, A_1\tilde A_1,\, A_2\,(2 \hbox{ classes}),\,\tilde A_2\,(2 \hbox{ classes}),\,B_2\,(2 \hbox{ classes}). \] In the first two cases $x$ lies in a subgroup $\SL_2(4)$. The third class $(\tilde A_1)_2$ has representative $x = u_{1232}(1)u_{2342}(1)$ (see \cite[Table 16.2 and (18.1)]{LSei}). This is centralized by the long root groups $U_{\pm 0100}$, and these generate $A \cong \SL_2(4)$. Then $x \in \bfC_G(A) = \Sp_6(4)$. The class $A_1\tilde A_1$ has a representative in a subgroup $A_1(4)\tilde A_1(4)$, which is contained in a subgroup $\Sp_8(4)$. Representatives of the four classes with labels $A_2,\tilde A_2$ lie in subgroups $\SL_3(4)$ or $\SU_3(4)$. Finally, representatives of the classes with label $B_2$ lie in a subgroup $\Sp_4(4)$. Now suppose $x$ is non-unipotent, with Jordan decomposition $x = su$, where $s\ne 1$ is semisimple and $u$ unipotent. As $x$ is assumed non-semisimple, $u\ne 1$. Then $\bfC_G(s)$ is a subsystem subgroup of order greater than $3\cdot 4^{19}$, and the only possibility is that $\bfC_G(s) = C_3 \times \Sp_6(4)$. But then $u \in \Sp_6(4)$ has centralizer of order greater than $4^{19}$, which is impossible for a nontrivial unipotent element of $\Sp_6(4)$. \vspace{2mm} (ii) This is similar to (i).
Suppose $x$ is unipotent. Then $x$ lies in one of the classes \[ A_1,\,\tilde A_1\,(2 \hbox{ classes}),\, A_1\tilde A_1,\, A_2\,(2 \hbox{ classes}),\,\tilde A_2. \] For the $A_1\tilde A_1$ class, as above $x$ lies in a subgroup $\Spin_9(3)$. Each of the other class representatives lies in a subgroup $\SL_3(3)$ or $\SU_3(3)$. Now suppose $x$ is non-unipotent, so $x = su$ with semisimple and unipotent parts $s,u \ne 1$. Then $\bfC_G(s)$ is a subsystem subgroup of type $B_4$, $A_1C_3$, $T_1C_3$ or $T_1B_3$, where $T_1$ denotes a 1-dimensional torus. The last two cases are not possible, as in (i). In the first case, $x \in \bfC_G(s) = \Spin_9(3)$. So assume finally that $\bfC_G(s)$ is of type $A_1C_3$, and let $u = u_1u_2$ with $u_1 \in \SL_2(3)$, $u_2\in \Sp_6(3)$. If $u_2 \ne 1$ then $$|\bfC_G(x)| \le |\SL_2(3)|\cdot |\bfC_{\Sp_6(3)}(u_2)| < 3^{19}.$$ Hence $u_2=1$ and $x = su \in \SL_2(3) < \SL_3(3)$. This completes the proof. \hal \begin{lem}\label{f4b} Theorem \ref{main1} holds for the simple exceptional groups $G = F_4(q)$ with $q = 3$, $4$. \end{lem} \pf By Corollary \ref{prime2}(i), it suffices to prove Theorem \ref{main1} in the case $N = p^at^b$, where $p$ is the defining characteristic of $G$ and $t \in \cR(G) = \{r,s\}$. Suppose that $t \nmid \Phi_8$. Then $G$ has two maximal tori $T_1$ and $T_2$ of orders $(q^2-1)(q^2+q+1)$ and $q^4+1$, which are coprime to $N$. It is shown in \cite[Theorem 10.1]{LuM} that $T_i$ contains a regular semisimple element $s_i$ for $i =1,2$, such that $\Irr(G)$ contains exactly two irreducible characters $\chi$ with $\chi(s_1)\chi(s_2) \neq 0$, namely $1_G$ and $\St$. It follows that $G \setminus \{1\} = s_1^G \cdot s_2^G$, so we are done as in the proof of Proposition \ref{exc}. Now consider the case where $t|\Phi_8$. Choose regular semisimple $s_1 = s_2 \in G$ of (prime) order $s=\Phi_{12}$.
Using \cite{Lu}, we check that if $\chi \in \Irr(G)$, $1 < \chi(1) < q^{18}$, and $\chi(s_1)\chi(s_2) \neq 0$ (in particular, $\chi$ has positive $s$-defect), then $\chi \in \{\chi_1,\chi_2\}$, where $$\chi_1(1) = q\Phi_1^2\Phi_3^2\Phi_8,~~\chi_2(1) = q\Phi_2^2\Phi_6^2\Phi_8.$$ It suffices to show that every nontrivial $g \in G$ belongs to $s_1^G \cdot s_2^G$. This is indeed the case if $g$ is semisimple by \cite{Gow}, so we assume $g$ is non-semisimple. Moreover, if $|\bfC_G(g)| > B$, where $B := 3^{19}$ for $q=3$ and $B := 3 \cdot 4^{19}$ for $q=4$, then by Lemma \ref{f4a} the element $g$ lies in a quasisimple classical subgroup $S$ of $G$ in the defining characteristic with $|\bfZ(S)|$ coprime to $N$, in which case we are done by applying Theorem \ref{main1} to $S/\bfZ(S)$. So we may assume that $|\bfC_G(g)| \leq B$. Next observe that $\chi_i$ is rational-valued (as it is the unique character in $\Irr(G)$ of its degree), and $\chi_i(1) \equiv \pm 1 \bmod s$. It follows that $\chi_i(s_1) \in \Z$ and $\chi_i(s_1) \equiv \pm 1 \bmod s$. Since $|\chi_i(s_1)| \leq |\bfC_G(s_1)|^{1/2} = s^{1/2}$, we conclude that $\chi_i(s_1) = \pm 1$. It follows that $$\sum^2_{i=1}\frac{|\chi_i(s_1)\chi_i(s_2)\chi_i(g)|}{\chi_i(1)} \leq \frac{B^{1/2}}{\chi_1(1)} + \frac{B^{1/2}}{\chi_2(1)} < 0.87.$$ On the other hand, since $|\bfC_G(s_i)| = \Phi_{12}$, $$\sum_{\chi \in \Irr(G),~\chi(1) \geq q^{18}} \frac{|\chi(s_1)\chi(s_2)\chi(g)|}{\chi(1)} \leq \frac{B^{1/2}\Phi_{12}}{q^{18}} < \frac{1}{q^4} \leq \frac{1}{81}.$$ It follows that $g \in s_1^G \cdot s_2^G$, as stated. \hal In summary, we have proved the following. \begin{cor}\label{exc-main} Theorem \ref{main1} holds for all finite non-abelian simple exceptional groups of Lie type. \end{cor} \noindent {\bf Proof of Theorem \ref{main1}.} The case of simple groups of Lie type is completed by Proposition \ref{clas-main} for classical groups and Corollary \ref{exc-main} for exceptional groups. 
Alternating and sporadic groups are handled by Lemma \ref{alt-spor} and Proposition \ref{alt1}. \hal \section{Odd power word maps} \subsection{Preliminaries} \begin{lem}\label{schur} Let $S$ be a finite non-abelian simple group. To prove Theorem \ref{main2} for all quasisimple groups $G$ with $G/\bfZ(G) \cong S$, it suffices to prove it for the $2'$-universal cover $H$ of $S$, that is, $H/\bfZ(H) \cong S$ and $|\bfZ(H)|$ is the $2'$-part of the order of the Schur multiplier of $S$. \end{lem} \pf It suffices to prove Theorem \ref{main2} for the universal cover $L$ of $S$. By assumption, Theorem \ref{main2} holds for $H = L/Z$, where $Z \leq \bfZ(L)$ is a $2$-group. Thus every $g \in L$ can be written in the form $g = xyzt$, where $x,y,z$ are $2$-elements of $L$ and $t \in Z$. It follows that $g = xy(zt)$ is a product of three $2$-elements in $L$. \hal \begin{lem}\label{alt-odd} Theorem \ref{main2} holds for all quasisimple covers of alternating groups $S = \A_n$ with $n \geq 5$. Moreover, every element of $S$ is a product of two $2$-elements. \end{lem} \pf The cases $S = \A_6, \A_7$ are checked directly using \cite{Atlas}. By Lemma \ref{schur}, it suffices to prove Theorem \ref{main2} for $G = \A_n$. \smallskip (i) First we show that if $g = (1,2,\ldots,m)$ is an $m$-cycle with $m = 2k+1 \geq 5$, then $g = x_1y_1 = x_2y_2$, where $x_i,y_i \in \SSS_m$ have order $2$ or $4$, and moreover $x_1,y_1 \in \A_m$, $x_2,y_2 \in \SSS_m \smallsetminus \A_m$. Indeed, $g$ is inverted by the involution $$x := (1,2k+1)(2,2k) \ldots (k-1,k+3)(k,k+2).$$ Setting $y := xg$, we get $y^2 = xgxg = g^{-1}g = 1$, so $g = xy$. Next, we set $$x' := (1,2k+1)(2,2k) \ldots (k-1,k+3),~~y' := x' g.$$ A computation establishes that $|x'| = 2$, $|y'| = 4$, and $g = x' y'$. Since exactly one of $x,x'$ belongs to $\A_m$ and $g \in \A_m$, the claims follow. \smallskip (ii) Now we show that every $g \in \A_n$ is a product of two $2$-elements. 
Indeed, if $g$ is real in $\A_n$ then the statement follows from Lemma \ref{real1}. Since $g$ is always real in $\SSS_n$, we may assume that $g$ is not real in $\A_n$, so it is not centralized by any {\it odd} permutation in $\SSS_n$. Thus $g = g_1g_2 \ldots g_s$ is a product of $s \geq 1$ disjoint cycles, where $g_i$ is an $n_i$-cycle, $3 \leq n_1 < n_2 < \ldots < n_s$, and $n_i$ is odd for all $i$. We may assume that $$\SSS_n \geq X_1 \times X_2 \times \ldots \times X_s,$$ where $X_i \cong \SSS_{n_i}$ and $g_i \in X_i$. Suppose $n_1 \geq 5$. Then, according to (i) we can write $g_i = x_iy_i$ where $x_i,y_i \in [X_i,X_i] \cong \A_{n_i}$ are $2$-elements. Hence $g = xy$ with $x := x_1x_2 \ldots x_s$ and $y := y_1y_2 \ldots y_s$, as desired. Assume now that $n_1 = 3$. Since $g$ is not real in $\A_n$ and $n \geq 5$, we observe that $s \geq 2$. Again by (i), for $i \geq 2$ we can write $g_i = x_iy_i$, where $x_i,y_i \in X_i$ are $2$-elements; moreover, $x_i,y_i \in [X_i,X_i]$ if $i \geq 3$ and $x_{2},y_{2} \in X_{2} \smallsetminus [X_{2},X_{2}]$. We may assume that $g_1 = (1,2,3)$ and write $g_1 = x_1y_1$ with $x_1 = (1,3)$, $y_1 = (1,2)$. Now setting $x:= x_1x_2 \ldots x_s$ and $y := y_1y_2 \ldots y_s$, again $g = xy$ is a product of two $2$-elements in $\A_n$. \hal \begin{lem}\label{lie2} Let $S$ be a non-abelian simple group of Lie type in characteristic $2$. Theorem \ref{main2} holds for all quasisimple covers of $S$. \end{lem} \pf The case $S = \tw2 F_4(2)'$ is checked directly using \cite{Atlas}; and $S = \A_6$ follows from Lemma \ref{alt-odd}. Suppose now that $S \not\cong \A_6$, $\tw2 F_4(2)'$. Then there is a quasisimple Lie-type group $H$ of simply connected type such that $H$ is a $2'$-universal cover of $S$. According to \cite[Corollary, p.\ 3661]{EG}, every non-central element of $H$ is a product of two $2$-elements. For $g \in \bfZ(H)$, consider a non-central $2$-element $t$ of $H$. 
Again $gt^{-1} = xy$ for some $2$-elements $x,y$ of $H$, so $g = xyt$ is a product of three $2$-elements. Hence we are done by Lemma \ref{schur}. \hal \begin{lem}\label{odd-base} {\rm (i)} Theorem \ref{main2} holds for the quasisimple group $G$ if $G/\bfZ(G)$ is one of the following simple groups: a sporadic group, $\PSU_4(3)$, $\PSp_6(3)$, $\O_7(3)$, $\PSp_8(3)$. {\rm (ii)} Suppose that $G = \GU_n(3)$ with $3 \leq n \leq 6$. Each $g \in G$ can be written as $g = xyz$, where $x, y, z$ are $2$-elements of $G$ and $\det(x) = \det(y) = 1$. \end{lem} \pf These statements were established using direct calculations similar to those of Lemma \ref{alt-spor}. \hal \subsection{Regular $2$-elements in classical groups in odd characteristic} We show that finite classical groups in odd characteristic admit regular $2$-elements with prescribed determinant or spinor norm. We begin with the general linear and unitary groups. \begin{lem}\label{reg-slu} Let $G = \GL^\e_n(q)$ with $n \geq 1$, $\e = \pm 1$, $q$ an odd prime power and let $\mu_{q-\e} := \{ \lambda \in \overline{\F}_q^\times \mid \lambda^{q-\e} = 1\}$. For every $2$-element $\delta$ of $\mu_{q-\e}$, there exists a regular $2$-element $s = s_n(\delta) $ of $ G$, such that $\det(s) = \delta$ and $s$ has at most two eigenvalues $\beta$ that belong to $\mu_{q-\e}$ (and each such eigenvalue appears with multiplicity one). \end{lem} \pf (i) First we consider the special case $n = 2^m \geq 2$ and construct a regular $2$-element $s_m$ of $ G$. Fix $\g \in \overline{\F}_q^\times$ with $|\g| = (q^{2^m}-1)_2 \geq 8$. Using the embeddings $$\GL_1(q^{2^m}) \hookrightarrow \GL_{2^{m-1}}(q^2) \hookrightarrow \GL^\e_{2^m}(q) = G,$$ we can find $s_m \in G$ which is conjugate over $\overline\F_q$ to $$\diag(\g,\g^{q\e},\g^{(q\e)^2}, \ldots,\g^{(q\e)^{n-1}}).$$ It is straightforward to check that all eigenvalues of $s_m$ appear with multiplicity one and have order $(q^{2^m}-1)_2$; in particular, $s_m$ is regular. 
\smallskip (ii) If $n = 1$, then we set $s_1(\delta) = \delta$. Suppose $n = 2$. If $\delta \neq 1$, then we choose $s_2(\delta) := \diag(1,\delta)$. If $\delta = 1$, then we can choose $\a = \pm 1$ such that $q \equiv \a \bmod 4$ and take $$s_2(1) \in C_{q-\a} \hookrightarrow \SL^\e_2(q) < G$$ with $|s_2(1)| = 4$. Note that $|s_n(\delta)| < (q^2-1)_2$ for all $\delta \in \mu_{q-\e}$ and $n = 1,2$. Consider the case $n \geq 3$ odd and write $$n = 2^{m_1} + 2^{m_2} + \ldots + 2^{m_t} + 1$$ with $m_1 > m_2 > \ldots > m_t \geq 1$. Setting $$s := \diag(s_{m_1},s_{m_2}, \ldots,s_{m_t},\a) \in \GL^\e_{2^{m_1}}(q) \times \ldots \times \GL^\e_{2^{m_t}}(q) \times \GL^\e_1(q) < G,$$ with $\a := \delta/\prod^t_{i=1}\det(s_{m_i})$, we deduce that $\det(s) = \delta$ and all eigenvalues of $s$ appear with multiplicity one, as required. \smallskip (iii) We may now assume that $$n = 2^{m_1} + 2^{m_2} + \ldots + 2^{m_t}$$ with $m_1 > m_2 > \ldots > m_t \geq 1$. Suppose first that $m_t = 1$. We choose $$s := \diag(s_{m_1},s_{m_2}, \ldots,s_{m_{t-1}},s_2(\a)) \in \GL^\e_{2^{m_1}}(q) \times \ldots \times \GL^\e_{2^{m_{t-1}}}(q) \times \GL^\e_2(q) < G,$$ with $\a := \delta/\prod^{t-1}_{i=1}\det(s_{m_i})$, so that $\det(s) = \delta$. The construction of $s$ ensures that all eigenvalues of $s$ appear with multiplicity one, so $s$ is regular. If $a := m_t \geq 2$, then we rewrite $$n = 2^{a_1} + 2^{a_2} + \ldots + 2^{a_{t-1}} + 2^{a_t} + 2^{a_{t+1}} + \ldots + 2^{a_k},$$ where $a_i = m_i$ for $1 \leq i \leq t-1$, $k = t+a-1$, and $(a_{t},a_{t+1}, \ldots ,a_k) = (a-1,a-2, \ldots,2,1,1)$. Now we can choose $$s := \diag(s_{a_1},s_{a_2}, \ldots,s_{a_{k-1}},s_2(\a)) \in \GL^\e_{2^{a_1}}(q) \times \ldots \times \GL^\e_{2^{a_{k-1}}}(q) \times \GL^\e_2(q) < G,$$ with $\a := \delta/\prod^{k-1}_{i=1}\det(s_{a_i})$. Again $\det(s) = \delta$, and all eigenvalues of $s$ appear with multiplicity one, as desired. The last condition on $s$ can be checked easily in all cases. 
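For instance, in the smallest odd case $n = 3$ we have $n = 2^{m_1} + 1$ with $m_1 = 1$, and the construction gives $s_3(\delta) = \diag(s_{m_1},\a)$ with $\a = \delta/\det(s_{m_1})$: the two eigenvalues of $s_{m_1}$ have order $(q^2-1)_2 = (q-\e)_2(q+\e)_2 \geq 8$ and hence lie outside $\mu_{q-\e}$, whereas $\a \in \mu_{q-\e}$, so $s_3(\delta)$ is regular and has exactly one eigenvalue in $\mu_{q-\e}$, in accordance with the final assertion of the lemma. 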
\hal \begin{lem}\label{reg-sp} Let $G = \Sp_{2n}(q)$ with $n \geq 1$ and $q$ an odd prime power. There exists a regular $2$-element $s$ of $G$ (and neither $1$ nor $-1$ is an eigenvalue of $s$). \end{lem} \pf First we consider the special case $n = 2^m \geq 2$. We fix $\g \in \overline{\F}_q^\times$ with $|\g| = (q^{2^m}-1)_2 \geq 8$ and use the element $s_m$ constructed in part (i) of the proof of Lemma \ref{reg-slu} via the embeddings $$\GL_1(q^{2^m}) \hookrightarrow \GL_{2^{m}}(q) \hookrightarrow \Sp_{2n}(q) = G.$$ Note that $s_m$ is conjugate over $\overline\F_q$ to $$\diag(\g,\g^{q},\g^{q^2}, \ldots,\g^{q^{n-1}},\g^{-1},\g^{-q},\g^{-q^2}, \ldots,\g^{-q^{n-1}}).$$ In particular, all eigenvalues of $s_m$ appear with multiplicity one and have order $(q^{2^m}-1)_2$, whence $s_m$ is regular. Consider the general case $$n = 2^{m_1} + 2^{m_2} + \ldots + 2^{m_t}$$ with $m_1 > m_2 > \ldots > m_t \geq 0$ and $t \geq 1$. If $m_t \geq 1$, set $$s := \diag(s_{m_1},s_{m_2}, \ldots,s_{m_t}) \in \Sp_{2^{m_1+1}}(q) \times \ldots \times \Sp_{2^{m_t+1}}(q) \leq G.$$ If $m_t = 0$, then we can choose $$s:= \diag(s_{m_1},s_{m_2}, \ldots,s_{m_{t-1}},s_2(1)) \in \Sp_{2^{m_1+1}}(q) \times \ldots \times \Sp_{2^{m_{t-1}+1}}(q) \times \Sp_2(q) < G,$$ where $s_2(1)$ is constructed in part (ii) of the proof of Lemma \ref{reg-slu}. It is easy to check that $s$ has the desired properties. \hal Recall that the {\it spinor norm} $\theta(g)$ of $g \in \SO^\e_{n}(q)$ is defined in \cite[pp.\ 29--30]{KL}. \begin{lem}\label{reg-so} Let $G = \SO^\e_n(q)$ with $n \geq 2$, $\e = \pm 1$, $q$ an odd prime power. For $\delta = \pm 1$, there exists a regular $2$-element $s = s^\e_n(\delta)$ of $G$ such that $\theta(s) = \delta$; moreover, every $\beta \in \F_{q^2}^\times$ can appear as an eigenvalue of $s$ with multiplicity at most two, and multiplicity two can occur only when $\beta = \pm 1$. \end{lem} \pf (i) First we consider the special case $n = 2^{m+1} \geq 4$. 
We fix $\g \in \overline{\F}_q^\times$ with $|\g| = (q^{2^m}-1)_2 \geq 8$ and use the element $s_m$ constructed in part (i) of the proof of Lemma \ref{reg-slu} via the embeddings $$\GL_1(q^{2^m}) \hookrightarrow \GL_{2^{m}}(q) \hookrightarrow \SO^+_{n}(q).$$ Note that $s_m$ is conjugate over $\overline\F_q$ to $$\diag(\g,\g^{q},\g^{q^2}, \ldots,\g^{q^{2^m-1}},\g^{-1},\g^{-q},\g^{-q^2}, \ldots,\g^{-q^{2^m-1}}).$$ In particular, all eigenvalues of $s_m$ appear with multiplicity one and have order $(q^{2^m}-1)_2$, whence $s_m$ is regular. As an element of $\GL_{2^m}(q)$, $s_m$ has determinant $$\nu : = \g^{1+q+q^2+ \ldots + q^{2^m-1}} = \g^{(q^{2^m}-1)/(q-1)}.$$ It follows that $\nu^{(q-1)/2} = \g^{(q^{2^m}-1)/2} =-1$, so $\theta(s_m) = -1$ by \cite[Lemma 2.7.2]{KL}. \smallskip (ii) Suppose that $n = 2$. We take $s^\e_2(-1) \in \SO^\e_2(q) \cong C_{q-\e}$ of order $(q-\e)_2$, and $s^\e_2(1) = I_2$. Next suppose that $n = 4$ and choose $\a = \pm 1$ such that $q \equiv \a \bmod 4$. We also fix $s_0 \in \SO^\a_2(q)$ of order $(q-\a)_2 \geq 4$ so that $\theta(s_0) = -1$ (note that we can take $s_0 = s^\a_2(-1)$). Since $\SO^\e_4(q) > \SO^\a_2(q) \times \SO^{\e\a}_2(q)$, we can choose $$s^+_4(1) = \diag(-I_2,I_2),~~s^-_4(1) = \diag(s_0,-I_2),~~s^+_4(-1) = \diag(s_0,-I_2),~~s^-_4(-1) = \diag(s_0,I_2).$$ Note that $|s^\e_n(\delta)| < (q^2-1)_2$ for all $\delta = \pm 1$ and $n = 2,4$. Also, we need later the fact that $s^\e_4(-\e)$ does not have $1$ as an eigenvalue. \smallskip (iii) Suppose that $6 \leq n \equiv 2 \bmod 4$. We write $$n = 2^{m_1+1} + 2^{m_2+1} + \ldots + 2^{m_t+1} + 2$$ with $m_1 > m_2 > \ldots > m_t \geq 1$, and choose $$s := \diag(s_{m_1},s_{m_2}, \ldots,s_{m_t},s^\e_2(\a)) \in \SO^+_{2^{m_1+1}}(q) \times \ldots \times \SO^+_{2^{m_t+1}}(q) \times \SO^\e_{2}(q) < G$$ with $\a := (-1)^t\delta$, so that $\theta(s) = \delta$. 
Consider the case $n \equiv 0 \bmod 4$ and write $$n = 2^{m_1+1} + 2^{m_2+1} + \ldots + 2^{m_t+1}$$ with $m_1 > m_2 > \ldots > m_t \geq 1$. We can rewrite $$n = 2^{a_1+1} + 2^{a_2+1} + \ldots + 2^{a_{t-1}+1} + 2^{a_t+1} + 2^{a_{t+1}+1} + \ldots + 2^{a_k+1},$$ where $a_i = m_i$ for $1 \leq i \leq t-1$, $k = t+m_t-1$, and $$(a_{t},a_{t+1}, \ldots ,a_k) = \left\{ \begin{array}{ll}(m_t-1,m_t-2, \ldots,2,1,1), & m_t \geq 2,\\ (m_t), & m_t = 1. \end{array} \right.$$ Now we can choose $$s := \diag(s_{a_1},s_{a_2}, \ldots,s_{a_{k-1}},s^\e_4(\a)) \in \SO^+_{2^{a_1+1}}(q) \times \ldots \times \SO^+_{2^{a_{k-1}+1}}(q) \times \SO^\e_{4}(q) < G$$ with $\a := (-1)^{k-1}\delta$, so that $\theta(s) = \delta$. \smallskip (iv) From now on, we may assume $$n = 2^{m_1+1} + 2^{m_2+1} + \ldots + 2^{m_t+1} + 1$$ with $m_1 > m_2 > \ldots > m_t \geq 0$ and $t \geq 1$. Again choose $\a = \pm 1$ such that $4|(q-\a)$. If $m_t = 0$, then we choose $$s:= \left\{ \begin{array}{ll} \diag(s_{m_1}, s_{m_2}, \ldots ,s_{m_{t-1}},s_{m_t},1), & \delta = (-1)^t,\\ \diag(s_{m_1}, s_{m_2}, \ldots ,s_{m_{t-1}},-I_2,1), & \delta = (-1)^{t-1},\end{array} \right.$$ and note that $s \in \SO^+_{2^{m_1+1}}(q) \times \ldots \times \SO^+_{2^{m_{t-1}+1}}(q) \times \SO^\a_2(q) \times \SO_1(q) < G$. Finally, suppose that $m_t \geq 1$. We rewrite $$n = 2^{a_1+1} + 2^{a_2+1} + \ldots + 2^{a_{t-1}+1} + 2^{a_t+1} + 2^{a_{t+1}+1} + \ldots + 2^{a_k+1}+1,$$ where $a_i = m_i$ for $1 \leq i \leq t-1$, $k = t+m_t-1$, and $$(a_{t},a_{t+1}, \ldots ,a_k) = \left\{ \begin{array}{ll}(m_t-1,m_t-2, \ldots,2,1,1), & m_t \geq 2,\\ (m_t), & m_t = 1. \end{array} \right.$$ Next, we set $$s:= \diag(s_{a_1}, s_{a_2}, \ldots ,s_{a_{k-1}},s^\beta_4(-\beta),1)$$ which belongs to $$\SO^+_{2^{a_1+1}}(q) \times \ldots \times \SO^+_{2^{a_{k-1}+1}}(q) \times \SO^\beta_4(q) \times \SO_1(q) < G,$$ where $\beta = (-1)^{k}\delta$. In all cases, one can verify that $\theta(s) = \delta$, and $s$ has the desired properties. 
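For instance, for $n = 7 = 2^{1+1} + 2^{0+1} + 1$ we are in case (iv) with $t = 2$ and $m_t = 0$; taking $\delta = (-1)^t = 1$, the construction gives $s = \diag(s_{m_1},s_0,1) \in \SO^+_4(q) \times \SO^\a_2(q) \times \SO_1(q)$, where $m_1 = 1$ and $s_0$ is as in (ii). The four eigenvalues of $s_{m_1}$ have order $(q^2-1)_2 = 2(q-\a)_2 \geq 8$ and the two eigenvalues of $s_0$ have order $(q-\a)_2 \geq 4$, so all seven eigenvalues of $s$ appear with multiplicity one; moreover, $\theta(s) = \theta(s_{m_1})\theta(s_0) = (-1)(-1) = 1 = \delta$. 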
\hal \subsection{Proof of Theorem \ref{main2} for classical groups in odd characteristic} First we deal with special linear and unitary groups in dimensions $3$ and $4$. The {\sf CHEVIE} project~\cite{CHEVIE} provides \emph{generic character tables} for the groups $\SL_3(q)$ and $\SU_3(q)$; these are symbolic parametrized descriptions of the character tables of all of these groups. To establish Lemma \ref{main2-slu34}, it suffices to prove that \[ c_{x,y,z} = \frac{|x^G| \cdot |y^G|}{|G|} \sum_{\chi \in \Irr(G)} \frac{\chi(x)\chi(y)\chi(z^{-1})}{\chi(1)} > 0 \] for suitable $x,y \in G$ and all relevant $z \in G$; here $c_{x,y,z}$ is the number of pairs $(a,b)$ with $a \in x^G$, $b \in y^G$ and $ab = z$. While there is, in principle, a function which computes $c_{x,y,z}$ from the generic tables, its application is often difficult because the result may depend in a complicated way on the parameters of a conjugacy class. We thank Frank L\"ubeck for providing us with the following alternative proof of this result. \begin{lem}\label{main2-slu34} Let $q$ be a power of an odd prime and let $G$ be one of the groups $\SL_n(q)$ or $\SU_n(q)$ with $n \in \{3,4\}$. Every element of $G$ is a product of three $2$-elements in $G$. \end{lem} \pf We choose conjugacy classes carefully, so that only very few character values from the generic character tables are needed (and these are also available for $n=4$). We first consider $G = \SL_3(q)$ or $G = \SU_3(q)$. Let $c \in \F_{q^2}^\times$ have order $(q^2-1)_2$. Since $q-1$ and $q+1$ are even, $c \notin \F_q$, $c \neq c^q$ and $c \neq c^{-q}$. Let $x$ be a regular semisimple element with eigenvalues $\{c,c^q,c^{-q-1}\}$ (in case $\SL$), or $\{c, c^{-q}, c^{q-1}\}$ (in case $\SU$). The centralizer of $x$ in $G$ is a maximal torus of order $q^2-1$. Let $y$ be a regular semisimple element of the maximal torus of order $q^2 \pm q +1$. 
By inspecting the generic character tables for $\SL_3$ and $\SU_3$ in {\sf CHEVIE}, we notice that there are only two irreducible characters that take a non-zero value on both of the conjugacy classes of $x$ and $y$ (the trivial character and the Steinberg character of degree $q^3$). This can be explained in terms of Deligne-Lusztig theory and Lusztig's Jordan decomposition of characters (see~\cite[13.16]{DM}): the only semisimple element of the dual group of $G$ whose centralizer contains maximal tori of the types of the centralizers of $x$ and of $y$ is the trivial element. From information about the values of Deligne-Lusztig characters, it follows that only unipotent characters can be non-zero on both $x$ and $y$. Which unipotent characters have this property can be read off from the character table of the Weyl group of $G$ (isomorphic to the symmetric group on $3$ letters), since up to sign this table describes the values of the unipotent characters on regular semisimple elements. Now let $z \in G$. Observe that \[ c_{x,y,z} = \frac{|x^G|\cdot |y^G|}{|G|} \left(1 - \frac{\St(z^{-1})}{q^3}\right). \] Hence $c_{x,y,z} > 0$ for every non-central element $z$. Taking $z = x$, we may write $x = x'y'$ with $x' \in x^G$ and $y' \in y^G$; then $y' = x'^{-1}x$ is a product of two $2$-power elements, hence so is $y$, and consequently every non-central $z$ is a product of three $2$-power elements. For some $q$ there are non-trivial $z$ in the center of $G$. To show that such $z$ can be written as a product of three $2$-power elements, we take a closer look at the generic character table to establish that $c_{x,x,xz} > 0$. We can readily compute a sufficient lower bound for this number: for example, in $\SL_3(q)$ there are $q-2$ irreducible characters of degree $q^2+q+1$ whose values on $x$ and $xz$ are roots of unity; for a lower bound we can replace the corresponding terms in $c_{x,x,xz}$ by $-(q-2)/(q^2+q+1)$. Now we turn to the cases $G = \SL_4(q)$ and $G = \SU_4(q)$. Here the center of $G$ has order $2$ or $4$, so all central elements are themselves $2$-elements and there is nothing to show for them. 
All groups of type $\SL_n(q)$ contain pairs of regular semisimple elements such that only two characters are non-zero on both elements; but for $n=4$ there are no such pairs containing $2$-power elements, so we need a slightly more involved argument than before. Let $c \in \F_{q^2}^\times$ have order $(q^2-1)_2$. Now $c \neq c^{-1}$ and $c \neq c^{\pm q}$, and $G$ contains a regular $2$-power element $x$ with eigenvalues $\{c,c^q,c^{-1},c^{-q}\}$; its centralizer in $G$ is a maximal torus of order $(q^2-1)(q \pm 1)$. We choose as $y$ a regular element of a cyclic maximal torus of order $q^3 \pm 1$. With the same arguments as sketched in the $\SL_3/\SU_3$ case, we find that only unipotent characters can have non-zero value on both $x$ and $y$. The unipotent characters of $G$ are obtained by restricting the unipotent characters of $\GL_4(q)$ or $\GU_4(q)$, respectively. These are available in {\sf CHEVIE}, and their values are all given by evaluating polynomials over the integers at $q$. There are three unipotent characters with non-zero value on $x$ and $y$, and we can compute the precise values of $c_{x,x,y}$ and $c_{x,y,z}$ for every non-central $z \in G$. For all resulting polynomials, it is easy to see that they evaluate to a positive number for all prime powers $q$. This shows that $y$ is a product of two $2$-power elements, so every non-central element is a product of three $2$-power elements. \hal \begin{prop}\label{main2-sl} Theorem \ref{main2} holds for all quasisimple covers of $S = \PSL_n(q)$ if $n \geq 5$ and $2 \nmid q$. \end{prop} \pf (i) By Lemma \ref{schur}, it suffices to prove Theorem \ref{main2} for $G = \SL_n(q)$. Let $s = s_n(1) \in G$ be as constructed in Lemma \ref{reg-slu}. It suffices to show that every $g \in G$ is a product of three conjugates of $s$, which is equivalent to \begin{equation}\label{3class1} \sum_{\chi \in \Irr(G)}\frac{\chi(s)^3\bar{\chi}(g)}{\chi(1)^2} \neq 0. 
\end{equation} As $|\chi(g)/\chi(1)| \leq 1$, it suffices to prove \begin{equation}\label{3class2} \sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(s)|^3}{\chi(1)} < 1. \end{equation} Set $$D := \left\{ \begin{array}{ll}\frac{(q^n-1)(q^{n-1}-q^2)}{(q-1)(q^2-1)}, & (n,q) \neq (6,3),\\ (q^5-1)(q^3-1), & (n,q) = (6,3). \end{array} \right.$$ By \cite[Theorem 3.1]{TZ1}, every character $\chi \in \Irr(G)$ of degree less than $D$ is either $1_G$ or one of $q-1$ irreducible Weil characters $\tau_{i}$, $0 \leq i \leq q-2$. \smallskip (ii) Consider the case $n \geq 6$. The construction of $s$ in Lemma \ref{reg-slu} shows that $$|\bfC_G(s)| \leq \left\{ \begin{array}{ll}(q^n-1)/(q-1), & (n,q) \neq (6,3),\\ (q^4-1)(q^2-1)/(q-1), & (n,q) = (6,3). \end{array} \right.$$ Hence $$\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{|\chi(s)|^3}{\chi(1)} < \frac{|\bfC_G(s)|^{1/2}}{D}\cdot\sum_{\chi \in \Irr(G)}|\chi(s)|^2 = \frac{|\bfC_G(s)|^{3/2}}{D} < 0.9099.$$ Next we estimate $|\tau_i(s)|$. Recall that $1_G+\tau_0$ is just the permutation character of $G$ acting on the set of $1$-spaces of $\F_q^n$. In the notation of the proof of Proposition \ref{sl-large}, by Lemma \ref{reg-slu}, $e(s,\delta^l) \leq 1$ for all $0 \leq l \leq q-2$ and equality can be attained at most twice. It follows that $s$ fixes at most two $1$-spaces, i.e.\ $0 \leq \tau_0(s) +1 \leq 2$, so $|\tau_0(s)| \leq 1$. Arguing as in part (i) of the proof of Proposition \ref{sl-large}, for $1 \leq i \leq q-2$ we obtain $$|\tau_i(s)| \leq \frac{q+q + 1 \cdot (q-3)}{q-1} = 3.$$ Hence $$\sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(s)|^3}{\chi(1)} = \sum^{q-2}_{i=0}\frac{|\tau_i(s)|^3}{\tau_i(1)} \leq \frac{1+(q-2) \cdot 3^3}{(q^n-q)/(q-1)} < 0.0772.$$ Thus $$\sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(s)|^3}{\chi(1)} < 0.9099 + 0.0772 = 0.9871,$$ so we are done by (\ref{3class2}). \smallskip (iii) Assume now that $n = 5$. 
The construction of $s$ in Lemma \ref{reg-slu} implies that $|\bfC_G(s)| \leq q^4-1$; furthermore, $e(s,\delta^l) \leq 1$ for all $0 \leq l \leq q-2$ and equality can be attained at most once. It follows by (\ref{weil-sl1}) that $|\tau_i(s)| \leq 1$ for all $i$. Arguing as in (ii), we obtain $$\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{|\chi(s)|^3}{\chi(1)} < \frac{(q^4-1)^{1.5}}{q^2(q^5-1)/(q-1)},$$ $$\sum_{\chi \in \Irr(G), 1 < \chi(1) < D}\frac{|\chi(s)|^3}{\chi(1)} = \sum^{q-2}_{i=0}\frac{|\tau_i(s)|^3}{\tau_i(1)} \leq \frac{q-1}{(q^5-q)/(q-1)}.$$ Thus $$\sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(s)|^3}{\chi(1)} < \frac{(q^4-1)^{1.5}}{q^2(q^5-1)/(q-1)} + \frac{q-1}{(q^5-q)/(q-1)} < \frac{q^4+q-1}{(q^5-q)/(q-1)} < 1,$$ so we are done again. \hal \begin{prop}\label{main2-su-large} Theorem \ref{main2} holds for all quasisimple covers of $S = \PSU_n(q)$ if $n \geq 5$ and $q \geq 5$ is odd, or if $(n,q) = (5,3)$. \end{prop} \pf (i) By Lemma \ref{schur}, it suffices to prove Theorem \ref{main2} for $G = \SU_n(q)$. Let $s = s_n(1) \in G$ be as constructed in Lemma \ref{reg-slu}. It suffices to show that every $g \in G$ is a product of three conjugates of $s$. Hence, it suffices to prove (\ref{3class2}). Set $$D := \frac{(q^n-1)(q^{n-1}-q^2)}{(q-1)(q^2-1)}.$$ By \cite[Theorem 4.1]{TZ1}, every character $\chi \in \Irr(G)$ of degree less than $D$ is either $1_G$ or one of $q+1$ irreducible Weil characters $\zeta_{i}$, $0 \leq i \leq q$. Consider the case $n \geq 6$. The construction of $s$ in Lemma \ref{reg-slu} shows that $$|\bfC_G(s)| \leq \left\{ \begin{array}{ll}(q+1)^{n-1}, & n \geq 8,\\ (q^4-1)(q+1)^2, & n = 7,\\ (q^4-1)(q+1), & n=6. \end{array} \right.$$ Hence, as in the proof of Lemma \ref{main2-sl}, $$\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{|\chi(s)|^3}{\chi(1)} < \frac{|\bfC_G(s)|^{3/2}}{D} < 0.6992.$$ Next we estimate $|\zeta_i(s)|$. 
In the notation of the proof of Proposition \ref{su-large}, by Lemma \ref{reg-slu}, $e(s,\xi^l) \leq 1$ for all $0 \leq l \leq q$ and equality can be attained at most twice. Arguing as in part (i) of the proof of Proposition \ref{su-large}, we obtain $$|\zeta_i(s)| \leq \frac{q+q + 1 \cdot (q-1)}{q+1} = \frac{3q-1}{q+1}.$$ Hence $$ \sum_{\chi \in \Irr(G),~1 < \chi(1) < D}\frac{|\chi(s)|^3}{\chi(1)} = \sum^{q}_{i=0}\frac{|\zeta_i(s)|^3}{\zeta_i(1)} \leq \frac{(q+1)((3q-1)/(q+1))^3}{(q^n-q)/(q+1)} < 0.1467.$$ Thus $$\sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(s)|^3}{\chi(1)} < 0.6992 + 0.1467 = 0.8459,$$ so we are done by (\ref{3class2}). \smallskip (ii) Assume now that $n = 5$. The construction of $s$ in Lemma \ref{reg-slu} implies that $|\bfC_G(s)| \leq q^4-1$; furthermore, $e(s,\xi^l) \leq 1$ for all $0 \leq l \leq q$ and equality can be attained at most once. It follows by (\ref{weil-su1}) that $|\zeta_i(s)| \leq 1$ for all $i$. Set $$D = (q-1)(q^2+1)(q^5+1)/(q+1).$$ Using \cite{Lu}, we check that if $\chi \in \Irr(G)$ satisfies $1 < \chi(1) < D$ then $\chi$ is either one of $q+1$ Weil characters $\zeta_i$, $0 \leq i \leq q$, or one of $q+1$ characters $\a_i$, $0 \leq i \leq q$, where $$\a_0(1) = q^2(q^5+1)/(q+1),~~\a_i(1) = (q^2+1)(q^5+1)/(q+1),~1 \leq i \leq q.$$ Inspecting the character table of $\GU_5(q)$ as given in \cite{Noz2}, we observe that each $\a_i$ extends to $\GU_5(q)$ and $|\a_i(s)| \leq 1$. Hence, $$\sum_{\chi \in \Irr(G), 1 < \chi(1) < D}\frac{|\chi(s)|^3}{\chi(1)} = \sum^{q}_{i=0}\frac{|\zeta_i(s)|^3}{\zeta_i(1)} + \sum^{q}_{i=0}\frac{|\a_i(s)|^3}{\a_i(1)} \leq \frac{2(q+1)}{(q^5-q)/(q+1)} < 0.1334.$$ On the other hand, $$\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{|\chi(s)|^3}{\chi(1)} < \frac{(q^4-1)^{1.5}}{(q-1)(q^2+1)(q^5+1)/(q+1)} < 0.5866.$$ Thus $$\sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(s)|^3}{\chi(1)} < 0.1334 +0.5866 = 0.72,$$ so we are done again. 
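We remark that the numerical bounds above are comfortable: for instance, in (i) with $(n,q) = (6,5)$ we have $|\bfC_G(s)| \leq (5^4-1)(5+1) = 3744$ and $D = (5^6-1)(5^5-5^2)/((5-1)(5^2-1)) = 504525$, so that $|\bfC_G(s)|^{3/2}/D \approx 0.454$, well within the stated bound $0.6992$. 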
\hal For $\PSU_n(3)$, respectively $\PSp_{2n}(q)$, we again employ the notion of breakable elements as defined in Definition \ref{br-slu}(iii), respectively Definition \ref{unbr-spo}. \begin{prop}\label{main2-su3} Theorem \ref{main2} holds for all quasisimple covers of $S = \PSU_n(3)$ if $n \geq 5$. \end{prop} \pf By Lemma \ref{schur}, it suffices to prove Theorem \ref{main2} for $L := \SU_n(3)$. Consider the following statements for $G := \GU_n(3)$: $$\sQ(n):\begin{array}{l}\mbox{Every }g \in G\mbox{ can be written as }g = xyz,\\ \mbox{where }x,y,z \in G\mbox{ are 2-elements and }\det(x) = \det(y) = 1,\end{array}$$ $$\sQ_u(n):\begin{array}{l}\mbox{Every unbreakable }g \in G\mbox{ can be written as }g = xyz,\\ \mbox{where }x,y,z \in G\mbox{ are 2-elements and }\det(x) = \det(y) = 1.\end{array}$$ By Lemma \ref{odd-base}(ii), $\sQ(n)$ holds for $3 \leq n \leq 6$. It is straightforward to check that Theorem \ref{main2} holds for $L$ with $n \geq 7$ once we show that $\sQ_u(n)$ holds. We now prove $\sQ_u(n)$ for $n \geq 7$. Consider an unbreakable $g \in G$. Lemma \ref{LUL3} implies that $|\bfC_G(g)| \leq 3^{n+2} \cdot 2^4$. Let $s_1 = s_2 := s_n(1)$ and $s_3 := s_n(\det(g))$, where $s_n(\delta)$ is constructed in Lemma \ref{reg-slu}; in particular, $|\bfC_G(s_i)| \leq 4^n$. Choosing $$D := (3^n-1)(3^{n-1}-9)/32,$$ by the Cauchy-Schwarz inequality, $$\sum_{\chi \in \Irr(G),~\chi(1) \geq D}\frac{|\chi(s_1)\chi(s_2)\chi(s_3)\bar\chi(g)|}{\chi(1)^2} < \frac{(4^n)^{3/2}(3^{n+2} \cdot 2^4)^{1/2}}{((3^n-1)(3^{n-1}-9)/32)^2} \leq 0.4866.$$ By \cite[Proposition 6.6]{ore}, the characters $\chi \in \Irr(G)$ of degree less than $D$ consist of $4$ linear characters and $4^2$ Weil characters $\zeta_{i,j}$, $0 \leq i,j \leq 3$. Arguing as in part (i) of the proof of Proposition \ref{su-large}, we obtain $$|\zeta_{i,j}(s_k)| \leq \frac{q+q + 1 \cdot (q-1)}{q+1} = \frac{3q-1}{q+1} = 2$$ for $q = 3$. 
Together with (\ref{weil-su31}), this implies that $$\sum_{{\tiny \begin{array}{l}\chi \in \Irr(G),\\1 < \chi(1) < D\end{array}}} \frac{|\prod^3_{k=1}\chi(s_k)\cdot\bar\chi(g)|}{\chi(1)^2} = \sum_{{\tiny \begin{array}{l}\chi=\zeta_{i,j}\\0 \leq i,j \leq 3\end{array}}} \frac{|\prod^3_{k=1}\chi(s_k)\cdot\bar\chi(g)|}{\chi(1)^2} \leq \frac{4^2 \cdot 2^3 \cdot 3^{n-4}}{((3^n-3)/4)^2} < 0.0117.$$ Since $$\sum_{\chi \in \Irr(G),~\chi(1) =1}\frac{\chi(s_1)\chi(s_2)\chi(s_3)\bar\chi(g)}{\chi(1)^2} = 4,$$ we conclude that $$\sum_{\chi \in \Irr(G)}\frac{\chi(s_1)\chi(s_2)\chi(s_3)\bar\chi(g)}{\chi(1)^2} \neq 0,$$ i.e.\ $g \in (s_1)^G\cdot(s_2)^G\cdot(s_3)^G$, as stated. \hal \begin{prop}\label{main2-sp} Theorem \ref{main2} holds for all quasisimple covers of $S = \PSp_{2n}(q)$ if $n \geq 1$, $2 \nmid q$, and $(n,q) \neq (1,3)$. \end{prop} \pf (i) Consider the case $n = 1$. The cases $\PSp_2(5) \cong \Sp_2(4)$, $\PSp_2(7) \cong \SL_3(2)$, and $\PSp_2(9) \cong \A_6$ are covered by Lemma \ref{lie2}, so we may assume $q \geq 11$. By Lemma \ref{schur}, it suffices to prove Theorem \ref{main2} for $L := \Sp_2(q)$. Using the character table of $L$ as given in \cite{DM}, it is straightforward to check that $g \in s^L \cdot s^L \cdot s^L$ for all $g \in L$ if $|s| = 4$. From now on we may assume $n \geq 2$. Hence by Lemma \ref{schur}, it suffices to prove Theorem \ref{main2} for $L := \Sp_{2n}(q)$. If $q \equiv 1 \bmod 4$, then $L$ is real by \cite[Theorem 1.2]{TZ2}, whence we are done by Lemma \ref{real1}. Also, the case $\PSp_4(3) \cong \SU_4(2)$ is covered by Lemma \ref{lie2}. Note that Theorem \ref{main2} holds for $\Sp_6(3)$ and $\Sp_8(3)$ by Lemma \ref{odd-base}(i). So we may assume $q \equiv 3 \bmod 4$ and $(n,q) \neq (2,3)$, $(3,3)$, $(4,3)$. \smallskip (ii) It suffices to prove that every unbreakable $g \in L$ is a product of three $2$-elements of $L$. 
By Lemma \ref{spunb}, $$|\bfC_L(g)| \leq B := \left\{ \begin{array}{ll}2q^n, & 2|n,~q \geq 5,\\ 48 \cdot 3^{2n+1}, & 2|n,~q = 3,\\ q^{2n-1}(q^2-1), & 2 \nmid n. \end{array} \right.$$ Let $s$ be as constructed in Lemma \ref{reg-sp}; in particular, $$|\bfC_L(s)| \leq C := \left\{ \begin{array}{ll}q^2-1, & n=2,\\ (q^2-1)(q+1), & n = 3,\\ (q+1)^n, & n \geq 4. \end{array} \right.$$ Choosing $$D := \frac{(q^n-1)(q^n-q)}{2(q+1)},$$ by the Cauchy-Schwarz inequality, $$\sum_{\chi \in \Irr(L),~\chi(1) \geq D}\frac{|\chi(s)^3\cdot\bar\chi(g)|}{\chi(1)^2} < \frac{C^{3/2}\cdot B^{1/2}}{D^2} \leq 0.5255.$$ By \cite[Theorem 5.2]{TZ1}, the characters $\chi \in \Irr(L)$ of degree less than $D$ consist of $1_L$ and four Weil characters: $\eta_{1,2}$ of degree $(q^n - 1)/2$ and $\xi_{1,2}$ of degree $(q^n+1)/2$. Recall by Lemma \ref{reg-sp} that neither $1$ nor $-1$ is an eigenvalue of $s$. Hence, (\ref{sp-weil1}) holds for $s$. Since $|\chi(g)| \leq \chi(1)$, $$\sum_{\chi \in \Irr(L),~1 < \chi(1) < D} \frac{|\chi(s)^3\cdot\bar\chi(g)|}{\chi(1)^2} \leq \sum_{\chi \in \Irr(L),~1 < \chi(1) < D}\frac{|\chi(s)^3|}{\chi(1)} \leq \frac{4}{(q^n-1)/2} < 0.1668.$$ Thus $$\sum_{1_L \neq \chi \in \Irr(L)}\frac{|\chi(s)^3\cdot\bar\chi(g)|}{\chi(1)^2} < 0.5255 + 0.1668 = 0.6923,$$ so $g \in s^L\cdot s^L \cdot s^L$, as stated. \hal \begin{prop}\label{main2-so} Theorem \ref{main2} holds for all quasisimple covers of $S = {\rm P}\Omega^\e_{m}(q)$ if $m \geq 7$ and $2 \nmid q$. \end{prop} \pf By Lemma \ref{real1} and \cite[Theorem 1.2]{TZ2}, we may assume that $m \neq 8,9$, and that $q \equiv 3 \bmod 4$ if $m = 7$. Note that Theorem \ref{main2} holds for $\Omega_7(3)$ by Lemma \ref{odd-base}. By Lemma \ref{schur}, it suffices to prove Theorem \ref{main2} for $L := \Omega^\e_m(q)$. Let $s = s^\e_m(1) \in L$ be as constructed in Lemma \ref{reg-so}. \smallskip (i) First we consider the case $m = 2n$; in particular, $n \geq 5$. 
The construction of $s$ implies that $$|\bfC_L(s)| \leq C := \left\{ \begin{array}{ll}(q^4-1)(q+1), & n=5,\\ (q+1)^n, & n \geq 6. \end{array} \right.$$ Choosing $$D := \left\{ \begin{array}{ll}q^{4n-10}, & (n,\e) \neq (5,-),\\ (q-1)(q^2+1)(q^3-1)(q^4+1), & (n,\e) = (5,-), \end{array} \right.$$ by the Cauchy-Schwarz inequality, $$\sum_{\chi \in \Irr(L),~\chi(1) \geq D}\frac{|\chi(s)^3|}{\chi(1)} < \frac{C^{3/2}}{D} \leq 0.135.$$ By \cite[Propositions 5.3, 5.7]{ore}, the characters $\chi \in \Irr(L)$ of degree less than $D$ consist of $1_L$ and $q+4$ characters $\DC_\a$, $\a \in \Irr(X)$, where $X := \Sp_2(q)$. By Lemma \ref{reg-so}, each $\beta \in \F_{q^2}^\times$ can appear as an eigenvalue of $s$ of multiplicity at most $2$. Arguing as in the proof of \cite[Proposition 5.11]{ore}, we obtain that $|D_\a(s)| \leq q^2\a(1)$. Recalling that $\DC_\a$ equals $D_\a$ if $\a \neq 1_X$, $\St_X$ and $D_\a - 1_L$ otherwise, cf.\ \cite[Table II]{ore}, for $n \geq 6$ $$\sum_{{\tiny \begin{array}{l}\chi \in \Irr(L),\\1 < \chi(1) < D \end{array}}}\frac{|\chi(s)^3|}{\chi(1)} \leq \sum_{\a = 1_X,\St_X}\frac{(q^2\a(1)+1)^3}{\DC_\a(1)} + \sum_{{\tiny \begin{array}{l}\a \in \Irr(X),\\ \a \neq 1_X,\St_X\end{array}}}\frac{(q^2\a(1))^3}{\DC_\a(1)} < 0.849.$$ Consider the case $n = 5$. If $4|(q-\e)$, then each $\beta \in \F_{q^2}^\times$ can appear as an eigenvalue of $s$ of multiplicity at most $1$, so arguing as above $|D_\a(s)| \leq q\a(1)$. Suppose that $4\nmid(q-\e)$. In this case, the only eigenvalue $\beta$ of $s$ that belongs to $\F_{q^2}^\times$ is $-1$ and its multiplicity is $2$. In the notation of the proof of \cite[Proposition 5.11]{ore}, for every $x \in X$ $$|\o(xs)| \leq q^{\dim \Ker(xs-I_{2m})/2} = q^{\dim \Ker(x+I_2)}.$$ When $x $ runs over $X$, $\dim \Ker(x+I_2)$ is $2$ only for $x = -I_2$, it is $1$ for $q^2-1$ elements, and it is $0$ for the rest. 
Hence, $$|D_\a(s)| \leq \frac{1}{|X|}\sum_{x \in X}|\o(xs)\overline{\a(x)}| \leq \frac{\a(1)}{|X|}(q^2 + q \cdot (q^2-1) + 1 \cdot(q(q^2-1)-q^2)) = 2\a(1).$$ We have shown that $|D_\a(s)| \leq q\a(1)$. Hence, $$\sum_{{\tiny \begin{array}{l}\chi \in \Irr(L),\\1 < \chi(1) < D \end{array}}}\frac{|\chi(s)^3|}{\chi(1)} \leq \sum_{\a = 1_X,\St_X}\frac{(q\a(1)+1)^3}{\DC_\a(1)} + \sum_{{\tiny \begin{array}{l}\a \in \Irr(X),\\ \a \neq 1_X,\St_X\end{array}}}\frac{(q\a(1))^3}{\DC_\a(1)} < 0.329.$$ Thus in all cases $$\sum_{1_L \neq \chi \in \Irr(L)}\frac{|\chi(s)^3|}{\chi(1)} < 1,$$ so $g \in s^L\cdot s^L \cdot s^L$ by (\ref{3class2}), as stated. \smallskip (ii) Now we consider the case $m = 2n+1 \geq 11$. Again $|\bfC_L(s)| \leq (q+1)^n$. Set $D := q^{4n-8}$. By the Cauchy-Schwarz inequality $$\sum_{\chi \in \Irr(L),~\chi(1) \geq D}\frac{|\chi(s)^3|}{\chi(1)} < \frac{C^{3/2}}{D} \leq 0.062.$$ By \cite[Corollary 5.8]{ore}, the characters $\chi \in \Irr(L)$ of degree less than $D$ consist of $1_L$ and $q+4$ characters $\DC_\a$, $\a \in \Irr(X)$. By Lemma \ref{reg-so}, each $\beta \in \F_{q^2}^\times$ can appear as an eigenvalue of $s$ of multiplicity at most $e$, where we can choose $e = 2$ for $n \geq 6$ and $e=1$ for $n = 5$. Arguing as in the proof of \cite[Proposition 5.11]{ore}, we obtain that $|D_\a(s)| \leq q^e\a(1)$. Recalling that $\DC_\a$ equals $D_\a$ if $\a \neq \xi_{1,2}$ (the two Weil characters of degree $(q+1)/2$ of $X$) and $D_\a - 1_L$ otherwise, cf.\ \cite[Table I]{ore}, $$\sum_{\chi \in \Irr(L),~1 < \chi(1) < D}\frac{|\chi(s)^3|}{\chi(1)} \leq \sum_{\a = \xi_{1,2}}\frac{(q^2\a(1)+1)^3}{\DC_\a(1)} + \sum_{\a \in \Irr(X),~ \a \neq \xi_{1,2}}\frac{(q^2\a(1))^3}{\DC_\a(1)} < 0.281,$$ so we are done by (\ref{3class2}). \smallskip (iii) Finally, we consider the case $m=7$, so $q \geq 7$. 
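(Similarly, the Cauchy-Schwarz constants $0.135$ and $0.062$ from parts (i) and (ii) can be confirmed numerically; the worst cases are $(n,\e,q) = (5,-,3)$ and $(n,q) = (5,3)$ respectively. A sketch, not part of the proof:)

```python
# Scan C^{3/2}/D for the orthogonal cases:
#  (i)  m = 2n:     C = (q^4-1)(q+1) if n = 5, else (q+1)^n;
#                   D = q^(4n-10), except D = (q-1)(q^2+1)(q^3-1)(q^4+1)
#                   when (n,eps) = (5,-).
#  (ii) m = 2n+1:   C = (q+1)^n, D = q^(4n-8), with n >= 5.
def ratio_even(n, q, minus_type_n5=False):
    C = (q**4 - 1) * (q + 1) if n == 5 else (q + 1) ** n
    if n == 5 and minus_type_n5:
        D = (q - 1) * (q**2 + 1) * (q**3 - 1) * (q**4 + 1)
    else:
        D = q ** (4 * n - 10)
    return C**1.5 / D

def ratio_odd(n, q):
    return ((q + 1) ** n) ** 1.5 / q ** (4 * n - 8)

qs = (3, 5, 7, 11)
worst_even = max(max(ratio_even(n, q), ratio_even(n, q, n == 5))
                 for n in range(5, 12) for q in qs)
worst_odd = max(ratio_odd(n, q) for n in range(5, 12) for q in qs)
assert worst_even <= 0.135 and worst_odd <= 0.062
```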
Theorem \ref{main2} holds for $$\O_3(q) \cong \PSL_2(q),~\O^+_4(q) \cong \SL_2(q) \circ \SL_2(q),~\O^-_4(q) \cong \PSL_2(q^2), ~\O_5(q) \cong \PSp_4(q)$$ by Proposition \ref{main2-sp}, and for $\Spin^+_6(q) \cong \SL_4(q),~\Spin^-_6(q) \cong \SU_4(q)$ by Lemma \ref{main2-slu34}. Hence, if $g \in L = \O_7(q)$ is breakable in the sense of Definition \ref{unbr-spo}, then $g$ is a product of three $2$-elements of $L$. If $g \in L$ is unbreakable then $|\bfC_L(g)| \leq q^4(q+1)^2$ by Lemma \ref{orthogunb}. Also, $\chi(1) \geq q^4+q^2+1$ for all $1_L \neq \chi \in \Irr(L)$ by \cite[Theorem 1.1]{TZ1}. As $|\bfC_L(s)| \leq (q+1)^3$, by the Cauchy-Schwarz inequality, $$\sum_{1_L \neq \chi \in \Irr(L)}\frac{|\chi(s)^3\cdot\bar\chi(g)|}{\chi(1)^2} \leq \frac{(q+1)^{4.5} \cdot q^2(q+1)}{(q^4+q^2+1)^2} < 0.757,$$ so we are done as well. \hal \subsection{Proof of Theorem \ref{main2} for exceptional groups in odd characteristic} Our goal is to prove the following result, which, together with the results of \S\S7.1 and 7.3, completes the proof of Theorem \ref{main2}. \begin{thm}\label{2eltexcep} Let $G$ be a quasisimple group such that $G/\bfZ(G)$ is an exceptional simple group of Lie type in odd characteristic. Every element of $G$ is a product of three $2$-elements. \end{thm} The proof consists of a series of lemmas. The first is immediate from \cite{lub}. \begin{lemma}\label{chardeg} Let $G$ be as in Theorem $\ref{2eltexcep}$, and let $\chi$ be a nontrivial irreducible character of $G$. Then $\chi(1)\ge N$, where $N$ is as in Table $\ref{chartab}$.
\end{lemma} \begin{table}[htb] \[ \begin{array}{|l|l|l|} \hline G & N & C \\ \hline E_8(q) & q(q^6+1)(q^{10}+1)(q^{12}+1) & q^8-1 \\ E_7(q) & q(q^{14}-1)(q^6+1)/(q^4-1) & (q+1)^2q^7 \\ E_6^\e(q)\,(\e = \pm) & q(q^4+1)(q^6+\e q^3+1) & (q^4-1)(q-\e)^2,\,q\equiv \e \bmod 4 \\ & & (q-\e)q^7,\,q\equiv -\e \bmod 4 \\ F_4(q) & q^8+q^4+1 & (q+1)^3q^3\\ G_2(q)\,(q>3) & q^3-1 & q^2-1 \\ \tw3D_4(q) & q(q^4-q^2+1) & (q^3-1)(q+1) \\ \tw2G_2(q)\,(q>3) & q^2-q+1 & q+1 \\ \hline \end{array} \] \caption{Bounds for character degrees and centralizers}\label{chartab} \end{table} \begin{lemma}\label{2eltcent} If $G$ is as in Theorem $\ref{2eltexcep}$, then $G$ has a $2$-element $s$ such that $|\bfC_G(s)|\le C$, where $C$ is as in Table $\ref{chartab}$. \end{lemma} \pf For the most part we construct the element $s$ within a suitable product of classical groups inside $G$, using the methods of \S7.2. For $G = E_8(q)$ we work in a subsystem subgroup $A$ of type $A_8$. This has shape $d.L_9(q).e$, where $e = (3,q-1)$ and $d = (9,q-1)/e$ (see for example \cite[Table 5.1]{LSS}); the derived subgroup is a quotient of $\SL_9(q)$ by a central subgroup $Z$. We shall define $s$ in $\SL_9(q)$, and identify it with its image modulo $Z$. Choose $\g \in \F_{q^8}$ of order $(q^8-1)_2$, and define $s_8 \in \GL_1(q^8) \le \GL_8(q)$ to be conjugate over $\bar \F_q$ to $\hbox{diag}(\g,\g^q,\g^{q^2},\ldots ,\g^{q^7})$. Let $s = \hbox{diag}(s_8,\a) \in \SL_9(q)$, where $\a^{-1} = \hbox{det}(s_8)$. Then $|\bfC_A(s)| = q^8-1$. Now, by \cite[11.2]{LSei}, \[ \cL(E_8)\downarrow A_8 = \cL(A_8) + V_{A_8}(\l_3) + V_{A_8}(\l_6). \] Here $ V_{A_8}(\l_3) \cong \wedge^3(V_9)$, the wedge-cube of the natural module for $\SL_9(q)$, and $ V_{A_8}(\l_6) $ is the dual of this. Since $\g^{q^i+q^j+q^k}$ cannot equal 1 for distinct $i,j,k$ between 0 and 7, and also $\g^{q^i+q^j}$ cannot lie in $\F_q$, the element $s$ has no nonzero fixed points in $ \wedge^3(V_9)$, so $\dim \bfC_{\cL(E_8)}(s) = 8$. 
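(Both eigenvalue claims are easy to confirm mechanically for small odd $q$: modelling $\g$ by exponent arithmetic modulo $|\g| = (q^8-1)_2$, one checks that no exponent $q^i+q^j+q^k$ is divisible by $|\g|$, and that $\g^{q^i+q^j}$ has order not dividing $q-1$, hence lies outside $\F_q^\times$. A sketch, not part of the proof:)

```python
from itertools import combinations

def two_part(m):
    # largest power of 2 dividing m
    return m & -m

for q in (3, 5, 7, 9, 11, 13, 25, 27):   # small odd prime powers
    order = two_part(q**8 - 1)           # |gamma| = (q^8 - 1)_2
    # gamma^e = 1 iff order | e; the exponents q^i+q^j+q^k are odd,
    # while order is a 2-power, so gamma^(q^i+q^j+q^k) is never 1
    for i, j, k in combinations(range(8), 3):
        assert (q**i + q**j + q**k) % order != 0
    # gamma^(q^i+q^j) lies in F_q iff its (q-1)-st power is trivial
    for i, j in combinations(range(8), 2):
        assert ((q**i + q**j) * (q - 1)) % order != 0
```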
Hence $\bfC_G(s)$ is a maximal torus, so $\bfC_G(s)=\bfC_A(s)$ of order $q^8-1$. Next consider $G = E_7(q)$. We shall work in the simply connected version of $G$; the element $s$ we construct works equally well for the adjoint version. Let $A$ be a subsystem subgroup of type $A_2^\e A_5^\e$ ($\e = \pm 1$), where $q\equiv -\e \bmod 4$. This has the subgroup $\SL_3^\e(q) \circ \SL_6^\e(q)$ of index $(3,q-\e)$. Let $\g \in \F_{q^4}$ have order $(q^4-1)_2$, and define $\a = \g^{(q^2+1)(\e q+1)}$, $\b = \g^{2(\e q+1)}$. Now define $s_1 \in \SL_3^\e(q)$, $s_2\in \SL_6^\e(q)$ so that they are conjugate over $\bar \F_q$ to \[ \hbox{diag}(\g^{-2},\g^{-2\e q},\b) \in \SL_3,\;\; \hbox{diag}(\g,\g^{\e q},\g^{q^2},\g^{\e q^3},1,\a) \in \SL_6, \] respectively. Let $s = s_1s_2 \in A$. Then $|\bfC_A(s)| = (q^4-1)(q^2-1)(q-\e)$. From \cite[11.8]{LSei}, \[ \cL(E_7)\downarrow A_2A_5 = \cL(A_2A_5) + (V_{A_2}(\l_1)\otimes V_{A_5}(\l_2))+(V_{A_2}(\l_2)\otimes V_{A_5}(\l_4)). \] Here $V_{A_2}(\l_1)\otimes V_{A_5}(\l_2) \cong V_3 \otimes \wedge^2(V_6)$, where $V_3,V_6$ are the natural modules for $\SL_3$ and $\SL_6$. One checks that $s$ has fixed space of dimension 1 on this module (coming from the product of the eigenvalues $\b,1,\a$). Hence $\dim \bfC_{\cL(E_7)}(s) = 9$, and so over $\bar \F_q$ we deduce that $\bfC_{E_7}(s) = A_1T_6$, where $T_6$ denotes a torus of rank 6. It follows that $|\bfC_G(s)| = |A_1(q)|\cdot |T_6(q)|$. As $\bfC_G(s)$ contains $\bfC_A(s)$, of the order given above, $|\bfC_G(s)| \le |A_1(q)|(q^4-1)(q+1)^2 < (q+1)^2q^7$. If $G = E_6^\e(q)$, then we work in a subsystem subgroup $A$ of type $A_1A_5$ containing $\SL_2(q) \circ \SL_6^\e(q)$. Again let $\g \in \F_{q^4}$ have order $(q^4-1)_2$ and define $s_2 \in \SL_6^\e(q)$ as the previous paragraph. Define $s_1 \in \SL_2(q)$ to be conjugate over $\bar \F_q$ to $\hbox{diag}(\g^{2(q+\e)}, \g^{-2(q+\e)})$ if $q \equiv \e \bmod 4$, and to $I_2$ otherwise. Set $s = s_1s_2$. 
Then $|\bfC_A(s)|$ is equal to $(q^4-1)(q-\e)^2$ if $q \equiv \e \bmod 4$, and to $|A_1(q)|(q^4-1)(q-\e)$ otherwise. By \cite[11.10]{LSei}, \[ \cL(E_6)\downarrow A_1A_5 = \cL(A_1A_5) + (V_{A_1}(1)\otimes V_{A_5}(\l_3)), \] and the second summand is $V_2\otimes \wedge^3(V_6)$, where $V_2,V_6$ are the natural modules for $A_1,A_5$. We check that $s$ has no nonzero fixed points on this tensor product, and it follows that $\bfC_{E_6}(s) = \bfC_{A_1A_5}(s)$; hence $\bfC_G(s) = \bfC_A(s)$, giving the result. Now let $G = F_4(q)$. Here we construct our element $s$ in a subsystem subgroup $A = B_4(q) \cong {\rm \Spin}_9(q)$. It is convenient to define it in the quotient $\Omega_9(q)$ and take a preimage. We follow the proof of Lemma \ref{reg-so}. Let $\g \in \F_{q^2}$ have order $(q^2-1)_2$, and define $s_4 \in \GL_1(q^2) \le \GL_2(q) \le SO_4^+(q)$ to be conjugate over $\bar \F_q$ to ${\rm diag}(\g,\g^q,\g^{-1},\g^{-q})$. Then $s_4$ has spinor norm $-1$. Let $q\equiv \e \bmod 4$ with $\e = \pm 1$, and define $s_2 \in SO_2^\e(q)$ to be conjugate to ${\rm diag}(\g^{q+\e},\g^{-(q+\e)})$. Then $s_2$ also has spinor norm $-1$, so $t_1:={\rm diag}(s_4,s_2) \in \Omega_6^\e(q)$. Finally, let $t_2:= {\rm diag}(-1,-1,1) \in \Omega_3(q)$ and define $s \in A$ to be the preimage of ${\rm diag}(t_1,t_2) \in \Omega_9(q)$. Then $|\bfC_A(s)| = (q^2-1)(q-\e)^2$. Now \[ \cL(F_4) \downarrow B_4 = \cL(B_4) \oplus V_{B_4}(\l_4). \] The second summand is the spin module for $B_4(q)$, which restricts to the preimage of $\O_6^\e(q) \times \O_3(q)$ as $(V_4\otimes V_2)\oplus (V_4^*\otimes V_2^*)$, where each summand is a tensor product of natural modules for the isomorphic group $\SL_4^\e(q) \times \SL_2(q)$. Elements of $\SL_4^\e(q)$, $\SL_2(q)$ inducing $t_1$, $t_2$ are $x_1:= {\rm diag}(1,\g,\g^{\e q},\g^{-\e q-1})$, $x_2:={\rm diag}(\g^{(q-\e)/2}, \g^{-(q-\e)/2})$, respectively. 
The tensor product of $x_1$ and $x_2$ has fixed point space of dimension at most 1, and it follows that $\dim \bfC_{\cL(F_4)}(s) = 4$ or 6. If it is 4 then $\bfC_G(s) = \bfC_A(s)$, while if it is 6, then $\bfC_{F_4}(s) = A_1T_3$, whence $|\bfC_G(s)| \le |A_1(q)|(q+1)^3$, as in the conclusion. For $G = G_2(q)$ or $\tw3D_4(q)$, we pick our element $s$ in a subgroup $A = \SL_3(q)$: let $\g \in \F_{q^2}$ have order $(q^2-1)_2$ and take $s$ to be conjugate over $\bar \F_q$ to ${\rm diag}(\g,\g^q,\a)$ where $\a = \g^{-(q+1)}$. Now $\cL(G_2) \downarrow A_2 = \cL(A_2)+V_3+V_3^*$ and $\cL(D_4)\downarrow A_2 = \cL(A_2)+V_3^3+(V_3^*)^3+V_1^2$, where $V_3$ is the natural 3-dimensional module and $V_1$ is trivial. It follows that $\bfC_{\cL(G_2)}(s)$ and $\bfC_{\cL(D_4)}(s)$ have dimensions 2 and 4 respectively, so $\bfC_G(s)$ is a maximal torus, as in the conclusion. Finally, for $G=\tw2 G_2(q)$, an element $s$ of order 4 has centralizer of order $q+1$ (see \cite{ward}). This completes the proof. \hal \begin{lemma}\label{first} Theorem $\ref{2eltexcep}$ holds for $E_8(q)$, $E_7(q)$, $G_2(q)$, $\tw2G_2(q)$, and also for $E_6^\e(q)$ with $q \equiv \e \bmod 4$. \end{lemma} \pf Let $G$ be one of these groups, and let $s$ be the 2-element of $G$ produced in Lemma \ref{2eltcent}. As in the proof of Proposition \ref{main2-sl}, it is sufficient to establish that for every $g\in G$, \begin{equation}\label{eqn} \sum_{\chi \in \Irr(G)} \frac{\chi(s)^3\bar \chi(g)}{\chi(1)^2} \ne 0, \end{equation} and to prove this it suffices to show \[ \sum_{1\ne \chi \in \Irr(G)} \frac{|\chi(s)|^3}{\chi(1)} < 1. \] Lemma \ref{chardeg} implies that $\chi(1)\ge N$ for all nontrivial irreducible characters $\chi$, where $N$ is as in Table \ref{chartab}. Hence \[ \sum_{1\ne \chi \in \Irr(G)} \frac{|\chi(s)|^3}{\chi(1)} < \frac{|\bfC_G(s)|^{1/2}}{N}\sum_{\chi \in \Irr(G)} |\chi(s)|^2 = \frac{|\bfC_G(s)|^{3/2}}{N} \le \frac{C^{3/2}}{N}, \] where $C$ is as in Table \ref{chartab}.
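(This final numerical verification is routine; for the record, here is a short script checking $C^{3/2}/N < 1$, with $N$ and $C$ taken from Table \ref{chartab}, over a range of admissible odd $q$ for each group covered by Lemma \ref{first}; the ratio is largest for $G_2(q)$, where it approaches $1$ from below. A sketch, not part of the proof:)

```python
def ratios(q):
    # (N, C) pairs from Table "chartab" for the groups of Lemma "first"
    out = []
    out.append((q * (q**6 + 1) * (q**10 + 1) * (q**12 + 1), q**8 - 1))  # E_8(q)
    out.append(((q * (q**14 - 1) * (q**6 + 1)) // (q**4 - 1),
                (q + 1) ** 2 * q**7))                                   # E_7(q)
    if q > 3:
        out.append((q**3 - 1, q**2 - 1))                                # G_2(q), q > 3
    for e in (+1, -1):
        if q % 4 == e % 4:   # E_6^eps(q) with q = eps (mod 4)
            out.append((q * (q**4 + 1) * (q**6 + e * q**3 + 1),
                        (q**4 - 1) * (q - e) ** 2))
    return [C**1.5 / N for N, C in out]

qs = [3, 5, 7, 9, 11, 13, 27, 81, 243]
assert all(r < 1 for q in qs for r in ratios(q))
# twisted 2G2(q), q = 3^(2a+1) >= 27: C = q+1, N = q^2-q+1
assert all((q + 1) ** 1.5 / (q**2 - q + 1) < 1 for q in (27, 243, 2187))
```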
One checks that $C^{3/2}/N < 1$ for the groups in the hypothesis, so the lemma follows. \hal \begin{lemma}\label{second} Theorem $\ref{2eltexcep}$ holds for $E_6^\e(q)$ with $q \equiv -\e \bmod 4$, $F_4(q)$ and $\tw3D_4(q)$. \end{lemma} \pf Let $G$ be one of these groups, let $s$ be the 2-element of $G$ from Lemma \ref{2eltcent}, and let $g \in G$. As in the previous proof, \[ \sum_{1\ne \chi \in \Irr(G)} \frac{|\chi(s)|^3 |\chi(g)|}{\chi(1)^2} < \frac{|\bfC_G(s)|^{3/2}|\bfC_G(g)|^{1/2}}{N^2} \le \frac{C^{3/2}|\bfC_G(g)|^{1/2}}{N^2}, \] where $C,N$ are as in Table \ref{chartab}. The result is proved if the above sum is less than 1, so we may assume that \begin{equation}\label{centeqn} |\bfC_G(g)| \ge \frac{N^4}{C^3}. \end{equation} Our strategy is to show that an element $g$ satisfying this bound must lie in a subgroup of $G$ that is a commuting product of quasisimple classical groups. (A similar strategy was carried out in Section 7 of \cite{ore}.) The conclusion then follows immediately from the results in Section 7.3, where Theorem \ref{2eltexcep} is established for classical groups. Consider $G = F_4(q)$. Here (\ref{centeqn}) gives \begin{equation}\label{f4cent} |\bfC_G(g)| \ge \frac{(q^8+q^4+1)^4}{(q+1)^9q^9}. \end{equation} Assume first that $g$ is a unipotent element. The classes and centralizers of unipotent elements in $G$ are given in \cite[Table 22.2.4]{LSei}, and every centralizer satisfying the above bound has even order. Hence there is an involution $t$ such that $g \in \bfC_G(t)$. Now $\bfC_G(t)$ is either a quasisimple group $B_4(q)$, or a group of the form $(\SL_2(q)\circ \Sp_6(q)).2$, with the unipotent element $g$ lying in the subgroup $\SL_2(q)\circ \Sp_6(q)$. Hence $g$ is in a product of quasisimple classical groups, except possibly in the case where $q=3$ and $g \in \bfC_G(t) = (\SL_2(3)\circ \Sp_6(3)).2$. In the latter case, a computation shows that every element of $\bfC_G(t)$ is a product of three 2-elements. 
Now assume $g$ is not unipotent; say $g = xu$ has semisimple part $x\ne 1$ and unipotent part $u \in \bfC_G(x)$. Now $\bfC_G(x)$ is a subsystem subgroup of $G$, and the bound (\ref{f4cent}) forces this to have a normal subgroup $D = B_4(q)$, $D_4^\e(q)$, $B_3(q)$, $C_3(q)$, $A_3^\e(q)$, $B_2(q)$ or $A_2^\e(q)$. Then $x \in \bfC_G(D)$, and the unipotent elements of $\bfN_G(D)$ generate a subgroup of $D\bfC_G(D)$, which is contained in a subsystem subgroup $S:=B_4(q)$, $A_1(q)C_3(q)$ or $A_2^\e(q)A_2^\e(q)$. Hence $g = xu \in S$. Observe that $S$ is a product of quasisimple classical groups, except for $A_1(q)C_3(q)$ when $q=3$; however, we already noted that every element of this subgroup is a product of three 2-elements in its normalizer. This completes the proof for $G = F_4(q)$. The proof for $G = E_6^\e(q)$ is similar. If $g$ is unipotent then the bound (\ref{centeqn}) and \cite[Table 22.2.3]{LSei} imply that $\bfC_G(g)$ has even order, so $g \in \bfC_G(t)$ for some involution $t$. This centralizer is either $(q-\e)\circ D_5^\e(q)$ or $(\SL_2(q) \circ \SL_6^\e(q)).2$. Hence the unipotent element $g$ lies in $D_5^\e(q)$ or $\SL_2(q) \circ \SL_6^\e(q)$, and this is a product of quasisimple groups, apart from the latter when $q=3$, in which case a computation shows that every element of $\bfC_G(t)$ is a product of three 2-elements. When $g$ is not unipotent, the bound (\ref{centeqn}) is actually stronger than the bound used in the proof of \cite[Theorem 7.1]{ore} for non-unipotent elements of $E_6^\e(q)$, and this proof shows that such elements lie in a product of quasisimple classical subgroups. Alternatively, an argument similar to that for $F_4(q)$ gives the result in this case. Finally, let $G = \tw3D_4(q)$. The unipotent classes and centralizers can be found in \cite{spalt}, and the unipotent case is handled exactly as for $F_4(q)$. For $g=xu$ non-unipotent as above, (\ref{centeqn}) implies that $\bfC_G(x)$ has a normal subgroup $D = A_1(q^3)$ or $A_2^\e(q)$. 
In the first case we argue as before that $g=xu$ lies in $D\bfC_G(D) = A_1(q^3)\circ A_1(q)$. In the second case $\bfC_G(x) = ((q^2+\e q+1)\circ D).(3,q-\e)$, and we can assume that $u\ne 1$ (otherwise $g=x$ is real, and the result follows from Lemma \ref{real1}). The group generated by the unipotent elements of $\bfC_G(x)$ is just $D$, so $u \in D$. But the centralizer of a nontrivial unipotent element of $D = A_2^\e(q)$ has order at most $(q+1)q^3$ (see \cite[Chapter 3]{LSei}), so this gives $|\bfC_G(g)| \le (q^2+\e q+1)(q+1)q^3$, which contradicts (\ref{centeqn}). \hal \section{Asymptotic surjectivity: Proofs of Theorems \ref{main3} and \ref{main4}} \begin{lem}\label{center} Let $k, Q \geq 2$ be integers. There is an integer $D = D(k,Q)$ depending on $k$ and $Q$ such that, for every integer $N$ with $\O(N) \leq k$ and for every $q < Q$, every central element of $G \in \{\SL_m(q),\SU_m(q),\Sp_{2m}(q),\O^+_{2m}(q)\}$ is an $N$th power in $G$ whenever $D|m$. \end{lem} \pf We define $D = 2(Q!)^{k+1}$ in the case $G = \SL$ or $\SU$, and $D = 2^{k+1}$ in the case $G = \Sp$ or $\O^+$. It suffices to prove the claim for nontrivial $z \in \bfZ(G)$. Consider the case $G = \SL_m(q)$ or $\SU_m(q)$, and set $\e = +$, respectively $\e = -$. Since $2|m$, $$\GL^\e_m(q) > \GL_{m/2}(q^2) \geq T:= C_{q^{m}-1}.$$ Furthermore, $T_1:= T \cap G$ has index dividing $q-\e$ and contains $\bfZ(G)$; in particular, $z \in T_1$. If $p$ is a prime dividing $|z|$, then $p|(q-\e)$, whence $p|(q^2-1)$ and $p \leq q+1 \leq Q$. Thus $$\left( \frac{q^m-1}{q^2-1} \right)_p = \left( \frac{m}{2} \right)_p \geq ((q-\e)_p)^{k+1},$$ so $$\left( \frac{|T_1|}{|z|} \right)_p \geq ((q-\e)_p)^k \geq p^k.$$ Write $N = N_1N_2$, where all prime divisors of $N_1$ divide $|z|$ and $\gcd(N_2,|z|) = 1$. Since $\O(N) \leq k$, we have shown that $N_1$ divides $|T_1|/|z|$. As $T_1$ is cyclic, we can find $t \in T_1$ such that all prime divisors of $|t|$ divide $|z|$ and $t^{N_1} = z$.
Since $\gcd(N_2,|t|) = 1$, $t = h^{N_2}$ for some $h \in T_1$. It follows that $z = h^N$, as desired. If $G$ is $\Sp_{2m}(q)$ or $\O^+_{2m}(q)$, then $|z| = 2$ and $q$ is odd. We can use the same argument as above, taking $T_1$ to be a cyclic maximal torus of order $q^m-1$ in $\Sp_{2m}(q)$, respectively $\SO^+_{2m}(q)$. \hal Let $q$ be a prime power, let $n \geq 13$ be an integer, and let $\e = \pm$. If $\e = +$, then we use $\ps(q^n-\e)$ to denote a primitive prime divisor $\p(q,n)$ if $2 \nmid n$, and $\p(q,n)\p(q,n/2)$ if $2|n$. If $\e = -$, then we use $\ps(q^n-\e)$ to denote a primitive prime divisor $\p(q,2n)$. These primitive prime divisors exist by \cite{Zs}. \begin{lem}\label{pair} Let $q$ be a prime power, let $n \geq m \geq 13$ be integers, and let $\alpha, \beta = \pm$. Suppose that $\gcd(\ps(q^n-\alpha),\ps(q^m-\beta)) > 1$. Then either $(n,\alpha) = (m,\beta)$, or $\alpha = +$ and $n \in \{2m,4m\}$. \end{lem} \pf If $n = m$, then $\gcd(\ps(q^n-\alpha),\ps(q^m-\beta)) > 1$ certainly implies $\alpha = \beta$. Suppose $n > m$. If $\alpha = -$, then $\ps(q^n-\alpha) = \p(q,2n)$ does not divide $\prod^{2n-1}_{i=1}(q^i-1)$, so it cannot be non-coprime to $\ps(q^m-\beta)$. So $\alpha = +$, and $\gcd(\ps(q^n-1),\ps(q^m-\beta)) > 1$ implies that $n = 2m$ or $n = 4m$. \hal Now we prove an analogue of \cite[Proposition 3.4.1]{LarShT} for groups of type $A$ and $C$: \begin{prop}\label{C-bounds} Fix $a\ge 1$, and let $n > 2a + 2$ be an integer. Let $s$ and $t$ be regular semisimple elements of $G := \Sp_{2n}(q)$ belonging to maximal tori $T_{1}$ and $T_{2}$ of type $T_{n-a,a}^{\e_1,\e_2}$ and $T_{a+1,n-a-1}^{\e_3,\e_4}$ respectively, where $\e_i = \pm$ and $\e_1\e_2 = -\e_3\e_4$. The number of distinct irreducible characters of $G$ which vanish neither on $s$ nor on $t$ is bounded, independent of $n$, $q$, and the choices of $s$ and $t$. 
Likewise, the absolute values of these characters on $s$ and $t$ are bounded independent of $n$, $q$, and the choices of $s$ and $t$. \end{prop} \pf (i) First we show that the maximal tori $T_1$ and $T_2$ are {\it weakly orthogonal} in the sense of \cite[Definition 2.2.1]{LarShT} whenever $\e_1\e_2 = -\e_3\e_4$. We follow the proof of \cite[Proposition 2.6.1]{LarShT}. The dual group $G^{*}$ is $\SO(V) \cong \SO_{2n+1}(q)$, where $V = \F_{q}^{2n+1}$ is endowed with a suitable quadratic form $Q$. Consider the tori dual to $T_1$ and $T_2$, and assume $g$ is an element belonging to both of them. We need to show that $g = 1$. We consider the spectrum $S$ of the semisimple element $g$ on $V$ as a multiset. Then $S$ can be represented as the joins of multisets $X \sqcup Y \sqcup \{1\}$ and $Z \sqcup T \sqcup \{1\}$, where $$\begin{array}{l} X := \{x,x^{q}, \ldots,x^{q^{n-a-1}},x^{-1},x^{-q}, \ldots, x^{-q^{n-a-1}}\},\\ Y := \{y,y^{q}, \ldots,y^{q^{a-1}},y^{-1},y^{-q}, \ldots, y^{-q^{a-1}}\},\\ Z := \{z,z^{q}, \ldots,z^{q^{n-a-2}},z^{-1},z^{-q}, \ldots, z^{-q^{n-a-2}}\},\\ T := \{t,t^{q}, \ldots,t^{q^{a}},t^{-1},t^{-q}, \ldots, t^{-q^{a}}\},\end{array}$$ for some $x,y,z,t \in \bar{\F}_{q}^{\times}$. Furthermore, $$x^{q^{n-a}-\e_1} = y^{q^{a}-\e_2} = z^{q^{n-a-1}-\e_3} = t^{q^{a+1}-\e_4} = 1.$$ Let $A$ be a multiset of elements of $\bar{\F}_{q}$, where $1 \in A$, the multiplicity of each element of $A$ is $2n+1$, and with the property that if $u \in A$ then $u^{q},u^{-1} \in A$. We claim that if $|A \cap S| > 1$ then $A \supseteq S$. Indeed, since the multiplicity of every $u \in S$ is at most $2n+1$, if $|A \cap (X \sqcup \{1\})| > 1$ then $A \supseteq X$, and if $|A \cap (X \sqcup \{1\})|, |A \cap (Y \sqcup \{1\})| > 1$ then $A \supseteq S$; and similarly for $Y$, $Z$, $T$. Now if $|A \cap S| > 1$ but $A \not\supseteq S$, then $S = X \sqcup Y \sqcup \{1\}$ implies that $|A \cap S| \in \{2a+1, 2(n-a)+1\}$.
But $S = Z \sqcup T \sqcup \{1\}$ also, so $|A \cap S| \in \{2a+3,2(n-a)-1\}$, which is a contradiction as $n \geq 2a+3$. Applying the claim to the multiset $A$ consisting of those $u \in \bar{\F}_{q}$ such that $u^{q^{n-a}-\e_1} = 1$, each with multiplicity $2n+1$, and noting that $A \supseteq X \sqcup \{1\}$, we deduce that $u^{q^{n-a}-\e_1} = 1$ for all $u \in S$. Arguing similarly, we obtain $$u^{q^{n-a}-\e_1} = u^{q^{a}-\e_2} = u^{q^{n-a-1}-\e_3} = u^{q^{a+1}-\e_4} = 1$$ for all $u \in S$. Consider $u \in S$. Suppose for instance that $\e_3 \neq \e_1$. In particular, $$u^{q^{n-a-1}+\e_1} = u^{q^{n-a}-\e_1} = 1,$$ whence $u^{q+1} = 1$. The condition $\e_1\e_2 = -\e_3\e_4$ now implies that $\e_2 = \e_4$, so $|u|$ divides $\gcd(q^{a+1}-\e_2,q^a-\e_2)|(q-1)$. It follows that $u^2 = 1$ for all $u \in S$. The same argument applies to the case $\e_3 = \e_1$. We have shown that $u^2 = 1$ for all $u \in S$. Now if $1$ has multiplicity at least $2$ in $S$, then applying the claim to the multiset $A'$ consisting only of $1$ with multiplicity $2n+1$, we see that $g = 1_V$ as stated. It remains to consider the case $g = \diag(-1,-1, \ldots,-1,1)$. Now $\Ker(g+1_V)$ is a quadratic subspace of $V$ of type $\e_1\e_2$ and also of type $\e_3\e_4$, a contradiction. \smallskip (ii) Now we proceed exactly as in the proof of \cite[Proposition 3.4.1]{LarShT}, using the main result of \cite{Lusztig} which holds for both types $B_n$ and $C_n$. Also note that the proof of \cite[Proposition 3.4.1]{LarShT} uses only the weak orthogonality of the two tori $T_1$ and $T_2$ but not the signs $\e_i$ in their definitions. \hal \begin{prop}\label{A-bounds} Fix $a\ge 1$, $\e = \pm$, and let $n$ be an integer greater than $2a+2$. Let $s$ and $t$ be regular semisimple elements of $G := \SL^\e_{n}(q)$ belonging to maximal tori $T_{1}$ and $T_{2}$ of type $T_{n-a,a}$ and $T_{a+1,n-a-1}$. 
The number of distinct irreducible characters of $G$ which vanish neither on $s$ nor on $t$ is bounded, independent of $n$, $q$, and the choices of $s$ and $t$. Likewise, the absolute values of these characters on $s$ and $t$ are bounded independent of $n$, $q$, and the choices of $s$ and $t$. \end{prop} \pf (i) Again, we show that the maximal tori $T_1$ and $T_2$ are weakly orthogonal. Here, the dual group $G^{*}$ is $\PGL^\e(V) \cong \PGL^\e_{n}(q)$, where $V = \F_q^n$ for $\e = +$ and $V = \F_{q^2}^n$ for $\e = -$. Consider the complete inverse images $T_{n-a,a}$ and $T_{n-a-1,a+1}$ of the tori dual to $T_1$ and $T_2$ in $H := \GL^\e(V)$, and assume $g$ is an element belonging to both of them. We need to show that $g \in \bfZ(H)$. The multiset $S$ of eigenvalues of the semisimple element $g$ on $V$ can be represented as the joins of multisets $X \sqcup Y$ and $Z \sqcup T$, where $$\begin{array}{l} X := \{x,x^{q\e}, \ldots,x^{(q\e)^{n-a-1}}\},~ Y := \{y,y^{q\e}, \ldots,y^{(q\e)^{a-1}}\},\\ Z := \{z,z^{q\e}, \ldots,z^{(q\e)^{n-a-2}}\},~ T := \{t,t^{q\e}, \ldots,t^{(q\e)^{a}}\},\end{array}$$ for some $x,y,z,t \in \bar{\F}_{q}^{\times}$; furthermore, $$x^{(q\e)^{n-a}-1} = y^{(q\e)^{a}-1} = z^{(q\e)^{n-a-1}-1} = t^{(q\e)^{a+1}-1} = 1.$$ Let $A$ be a multiset of elements of $\bar{\F}_{q}$, where the multiplicity of each element of $A$ is $n$, and with the property that if $u \in A$ then $u^{q\e} \in A$. We claim that if $A \cap S \neq \emptyset$ then $A \supseteq S$. Indeed, since the multiplicity of every $u \in S$ is at most $n$, if $A \cap X \neq \emptyset$ then $A \supseteq X$, and if $A \cap X, A \cap Y \neq \emptyset$ then $A \supseteq S$; and similarly for $Y$, $Z$, $T$. Now if $A \cap S \neq \emptyset$ but $A \not\supseteq S$, then $S = X \sqcup Y$ implies that $|A \cap S| \in \{a, n-a\}$. But $S = Z \sqcup T$ as well, so $|A \cap S| \in \{a+1,n-a-1\}$, which is a contradiction as $n \geq 2a+3$.
Applying the claim to the multiset $A$ consisting of those $u \in \bar{\F}_{q}$ such that $u^{(q\e)^{n-a}-1} = 1$, each with multiplicity $n$, and noting that $A \supseteq X$, we see that $u^{(q\e)^{n-a}-1} = 1$ for all $u \in S$. Arguing similarly, we see that $u^{(q\e)^{n-a-1}-1} = 1$, so $u^{q\e-1} = 1$ for all $u \in S$. Now applying the claim to the multiset $A'$ consisting only of $x$ with multiplicity $n$ (note that $A'$ is closed under $u \mapsto u^{q\e}$, since $x^{q\e} = x$), and noting that $x \in A' \cap S$, we conclude that $A' \supseteq S$, so $g = x \cdot 1_V$, as stated. \smallskip (ii) Now we proceed as in the proof of \cite[Proposition 3.1.5]{LarShT}. Assume that $\chi \in \Irr(G)$ and $\chi(s)\chi(t) \neq 0$. By (i) and \cite[Proposition 2.2.2]{LarShT}, $\chi = \chi_{\uni,\alpha}$ is a unipotent character of $G$ labeled by a partition $\alpha \vdash n$. If $\chi_\alpha \in \Irr(\SSS_n)$ corresponds to $\alpha$, then $$\chi_\alpha(s_1) = \chi(s) \neq 0,~~\chi_\alpha(t_1) = \chi(t) \neq 0,$$ where $s_1 \in \SSS_n$ has cycle type $(n-a,a)$ and $t_1 \in \SSS_n$ has cycle type $(n-a-1,a+1)$. Arguing as in the proof of \cite[Corollary 3.1.3]{LarShT}, one can show that there are at most $4a+6$ possibilities for $\alpha$, and $|\chi_\alpha(s_1)|$, $|\chi_\alpha(t_1)| \leq 4$. \hal \begin{prop}\label{tori} For every positive integer $k$, there are positive integers $A = A(k)$, $B_1 = B_1(k)$, and $B_2 = B_2(k)$, each depending on $k$, with the following property. For every $n \geq A$ and for every prime power $q$, a group $G \in \{ \SL_n(q), \SU_n(q), \Sp_n(q), \Spin^\pm_n(q)\}$ contains $k+1$ pairs $(s_i,t_i)$ of regular semisimple elements, $1 \leq i \leq k+1$, such that: \begin{enumerate}[\rm(a)] \item If $i \neq j$, then $\gcd(|s_i| \cdot |t_i|,|s_j| \cdot |t_j|) = 1$; \item For each $i$, there are at most $B_1$ irreducible characters of $G$ that vanish neither on $s_i$ nor on $t_i$. The absolute values of these characters at $s_i$ and $t_i$ are at most $B_2$.
\end{enumerate} \end{prop} \pf (i) First we consider the case $G = \Spin^\e_{2n}(q)$ with $n \geq 10k+70$. For {\it odd} $a_i = 2i+11$, $1 \leq i \leq k+1$, there are regular semisimple elements $s_i$, $t_i$ of $G$ belonging to maximal tori $T^1_{i}$ and $T^2_{i}$ of type $T_{n-a_i,a_i}^{\e,+}$ (of order $(q^{n-a_i}-\e)(q^{a_i}-1)$) and $T_{n-a_i-1,a_i+1}^{-\e,-}$ (of order $(q^{n-a_i-1}+\e)(q^{a_i+1}+1)$) respectively. In fact, we can choose $$|s_i| = \ps(q^{n-a_i}-\e) \cdot \ps(q^{a_i}-1),~~|t_i| = \ps(q^{n-a_i-1}+\e) \cdot \ps(q^{a_i+1}+1).$$ By \cite[Proposition 3.3.1]{LarShT} the number of distinct irreducible characters of $G$ that vanish neither on $s_i$ nor on $t_i$ is bounded by some integer $B_1(k)$, dependent on $k$ but independent of $n$, $q$. Likewise, the absolute values of these characters on $s_i$ and $t_i$ are bounded by some integer $B_2(k)$, dependent on $k$ but independent of $n$, $q$. It remains to check condition (a). Let $1 \leq i < j \leq k+1$. By the choice of $n$, $n/5 \geq a_j+1 \geq a_i+3 \geq 16$. It follows that $$2(n-a_j-1) > n-a_i > n-a_i-1 > \max(n-a_j,4(a_j+1)).$$ Hence, by Lemma \ref{pair}, each of $\ps(q^{n-a_i}-\e)$ and $\ps(q^{n-a_i-1}+\e)$ is coprime to $\ps(q^{n-a_j}-\e)\cdot\ps(q^{n-a_j-1}+\e)\cdot\ps(q^{a_j}-1)\cdot\ps(q^{a_j+1}+1)$. Similarly, as $n-a_j-1 > 4(a_i+1)$, each of $\ps(q^{a_i}-1)$ and $\ps(q^{a_i+1}+1)$ is coprime to $\ps(q^{n-a_j}-\e)\cdot\ps(q^{n-a_j-1}+\e)$. Finally, since $a_j$ and $a_i$ are distinct odd integers, Lemma \ref{pair} also yields that $\ps(q^{a_i}-1)\cdot\ps(q^{a_i+1}+1)$ is coprime to $\ps(q^{a_j}-1)\cdot \ps(q^{a_j+1}+1),$ and we are done. \smallskip (ii) Suppose $G = \Spin_{2n+1}(q)$ with $n \geq 10k+70$.
For {\it odd} $a_i = 2i+11$, $1 \leq i \leq k+1$, there are regular semisimple elements $s_i$, $t_i$ of $G$ belonging to maximal tori $T^1_{i}$ and $T^2_{i}$ of type $T_{n-a_i,a_i}^{+,+}$ (of order $(q^{n-a_i}-1)(q^{a_i}-1)$) and $T_{n-a_i-1,a_i+1}^{-,-}$ (of order $(q^{n-a_i-1}+1)(q^{a_i+1}+1)$) respectively. In fact, we can choose $$|s_i| = \ps(q^{n-a_i}-1) \cdot \ps(q^{a_i}-1),~~|t_i| = \ps(q^{n-a_i-1}+1) \cdot \ps(q^{a_i+1}+1).$$ By \cite[Proposition 3.4.1]{LarShT} the number of distinct irreducible characters of $G$ that vanish neither on $s_i$ nor on $t_i$ is bounded by some integer $B_1(k)$, dependent on $k$ but independent of $n$, $q$. Likewise, the absolute values of these characters on $s_i$ and $t_i$ are bounded by some integer $B_2(k)$, dependent on $k$ but independent of $n$, $q$. Finally, condition (a) is satisfied as shown in (i). \smallskip (iii) Consider the case $G = \Sp_{2n}(q)$ with $n \geq 10k+70$. For {\it odd} $a_i = 2i+11$, $1 \leq i \leq k+1$, there are regular semisimple elements $s_i$, $t_i$ of $G$ belonging to maximal tori $T^1_{i}$ and $T^2_{i}$ of type $T_{n-a_i,a_i}^{+,+}$ (of order $(q^{n-a_i}-1)(q^{a_i}-1)$) and $T_{n-a_i-1,a_i+1}^{+,-}$ (of order $(q^{n-a_i-1}-1)(q^{a_i+1}+1)$) respectively. In fact, we can choose $$|s_i| = \ps(q^{n-a_i}-1) \cdot \ps(q^{a_i}-1),~~|t_i| = \ps(q^{n-a_i-1}-1) \cdot \ps(q^{a_i+1}+1).$$ Now we can finish as in (ii) but using Proposition \ref{C-bounds}. \smallskip (iv) Consider the case $G = \SL^\e_n(q)$ with $n \geq 4k+17$. For $a_i = 2i+5$, $1 \leq i \leq k+1$, there are regular semisimple elements $s_i$, $t_i$ of $G$ belonging to maximal tori $T^1_{i}$ and $T^2_{i}$ of type $T_{n-a_i,a_i}$ (of order $(q^{n-a_i}-\e^{n-a_i})(q^{a_i}-\e^{a_i})$) and $T_{n-a_i-1,a_i+1}$ (of order $(q^{n-a_i-1}-\e^{n-a_i-1})(q^{a_i+1}-\e^{a_i+1})$) respectively.
Next, observe that for every $m \geq 7$, there is a prime $\p(-q,m)$ that divides $(-q)^m-1$ but does not divide $\prod^{m-1}_{i=1}((-q)^i-1)$; namely, we can take $\p(-q,m) = \p(q,2m)$ if $2 \nmid m$, $\p(-q,m) = \p(q,m)$ if $4|m$, and $\p(-q,m) = \p(q,m/2)$ if $4|(m-2)$. In particular, if $m \geq m' \geq 7$ and $\p(q\e,m) = \p(q\e,m')$, then $m = m'$. Now we can choose $$|s_i| = \p(q\e,n-a_i) \cdot \p(q\e,a_i),~~ |t_i| = \p(q\e,n-a_i-1) \cdot \p(q\e,a_i+1).$$ Condition (b) follows from Proposition \ref{A-bounds}. By the choice of $n$, $n/2 \geq a_j \geq a_i+2 \geq 9$ if $1 \leq i < j \leq k+1$. It follows that $$n-a_i-1 > n-a_j > n-a_j-1 > a_j+1 > a_j > a_i+1,$$ so condition (a) is satisfied. \hal \noindent {\bf Proof of Theorems \ref{main3} and \ref{main4}.} Let $k$ be a positive integer. For Theorem \ref{main3} we assume that $N$ is a positive integer with $\pi(N) \leq k$. For Theorem \ref{main4} we assume that $N$ is a positive integer with $\O(N) \leq k$. By Proposition \ref{alt2}, it suffices to prove the two theorems for finite simple classical groups $S$ of sufficiently large rank (and defined over a sufficiently large field $\F_q$, in the case of Theorem \ref{main3}). So we assume that $S = G/Z$, where $Z := \bfZ(G)$ and $G = \Cl_n(q)$ with $\Cl \in \{ \SL, \SU, \Sp, \O^\e\}$ (and $\e = \pm$). Let $V := \F_q^n$ (if $\Cl \neq \SU$) and $V := \F_{q^2}^n$ (for $\Cl = \SU$) denote the natural $G$-module. Also set $\e = +$ if $\Cl = \SL$ and $\e = -$ when $\Cl = \SU$. \smallskip (i) Apply Proposition \ref{tori} to $G$ and consider $n \geq A$. Since $\pi(N) \leq k$, by \ref{tori}(a) there is some $i_0$ between $1$ and $k+1$ such that the orders of $s := s_{i_0}$ and $t := t_{i_0}$ are coprime to $N$. Define $$Q = Q(k) := (B_1B_2^2)^{481}.$$ We claim that if $q \geq Q$, then every $g \in G \setminus Z$ belongs to $s^G \cdot t^G$, so it is a product of two $N$th powers; in particular, we are done with Theorem \ref{main3}.
Indeed, since $g \notin Z$, its {\it support} $\supp(g)$, as defined in \cite[Definition 4.1.1]{LarShT}, is at least 1. It follows by \cite[Theorem 4.3.6]{LarShT} and the condition on $q$ that $$\frac{|\chi(g)|}{\chi(1)} < q^{-1/481} \leq \frac{1}{B_1B_2^2}$$ for every $1_G \neq \chi \in \Irr(G)$. Now condition \ref{tori}(b) implies that $$\sum_{1_G \neq \chi \in \Irr(G)}\frac{|\chi(s)\chi(t)\chi(g)|}{\chi(1)} < \frac{B_1B_2^2}{B_1B_2^2} = 1,$$ so $g \in s^G \cdot t^G$ as desired. \smallskip (ii) Now we consider the case $2 \leq q < Q$ and $\O(N) \leq k$. Suppose that $g \in G$ satisfies $$\supp(g) \geq C = C(k) := (\log_2 Q)^2.$$ By \cite[Theorem 4.3.6]{LarShT}, $$\frac{|\chi(g)|}{\chi(1)} < q^{-\sqrt{\supp(g)}/481} \leq 2^{-(\log_2 Q)/481} = \frac{1}{B_1B_2^2}$$ for every $1_G \neq \chi \in \Irr(G)$. Hence, as in (i), $g \in s^G \cdot t^G$, so $g$ is a product of two $N$th powers. \smallskip (iii) It remains to consider the non-central $g \in G$ with $\supp(g) < C$. Recall the integer $D = D(k,Q)$ defined in the proof of Lemma \ref{center}, according to which \begin{equation}\label{div} 8|D, ~~(q-\e)|D. \end{equation} We also choose \begin{equation}\label{large} n \geq \max(A,2C+(9k+4)D). \end{equation} Since $\supp(g) < C \leq n/2$, by \cite[Proposition 4.1.2]{LarShT} we see that $g$ has a {\it primary eigenvalue} $\lambda$, where $\lambda^{q-\e} = 1$ in the case $\Cl = \SL^\e$ and $\lambda= \pm 1$ in the case $\Cl = \Sp, \O$. Moreover, arguing as in the proof of \cite[Lemma 6.3.4]{LarShT}, we get that $g$ fixes an (orthogonal if $\Cl \neq \SL$) decomposition $$V = U \oplus W,$$ where $\dim U \geq n-2C$ and $U \supseteq \Ker (g-\lambda \cdot 1_V)$. Now we consider a chain of (non-degenerate if $\Cl \neq \SL$) subspaces $$U_1 \subset U_2 \subset \ldots \subset U_{k+1} \subset U$$ with $\dim U_j = jD$, and moreover $U_j$ is of type $+$ if $\Cl = \O^\pm$ (this can be achieved since $\dim U \geq n-2C \geq (9k+4)D$ by (\ref{large})). 
We also define $$W_j:= W \oplus (U_j^\perp \cap U),$$ so $$V = U_j \oplus W_j,~~d_j := \dim W_j = n-jD.$$ Setting $\cR_j := \cR(\Cl(W_j))$, the set of primes defined in Theorem \ref{prime1}, we claim that \begin{equation}\label{2primes} \cR_i \cap \cR_j = \emptyset \end{equation} whenever $1 \leq i \neq j \leq k+1$. Assume the contrary: so $\ell \in \cR_i \cap \cR_j$ for some $i < j$. By the construction of $\cR_i$, $$\ell |(q^{2d_i}-1)(q^{2d_i-2}-1)(q^{2d_i-4}-1),$$ and similarly for $j$. Note that $$kD \geq d_i-d_j = (j-i)D \geq D \geq 8$$ by (\ref{div}). It follows that $\ell |(q^e-1)$, where $$12 \leq 2d_i-4-2d_j \leq e \leq 2d_i-2d_j+4 \leq 2kD+4.$$ On the other hand, (\ref{large}) implies that $$(d_j-2)/4 \geq (n-(k+1)D-4)/4 > 2kD+4.$$ We have shown that some $\ell \in \cR(\Cl_{d_j}(q))$ divides $p^{ef}-1$ with $12 \leq e < (d_j-2)/4$ and $q = p^f$. This contradicts the construction of $\cR(\Cl_{d_j}(q))$ in Theorem \ref{prime1}, according to which $\ell = \p(p,af)$ for some $a \geq (d_j-1)f/4$. Since $\pi(N) \leq k$, (\ref{2primes}) now implies that there is some $i$ such that $N$ is not divisible by any prime in $\cR_i$. Hence, by Theorem \ref{prime1}, $H:=\Cl(W_i)$ admits two regular semisimple elements $s',t'$, whose orders are coprime to $N$, and such that $(s')^H \cdot (t')^H \supseteq H \setminus \bfZ(H)$. Next, $G$ contains a subgroup $\Cl(U_i) \times \Cl(W_i)$. Note that $g$ acts on $U_i$ as the scalar $\lambda$. Condition (\ref{div}) now implies that $x= g|_{U_i} \in \bfZ(\Cl(U_i))$, whence $x = u^N$ for some $u \in \Cl(U_i)$ by Lemma \ref{center} (as $\O(N) \leq k$). Since $g$ fixes $W_i$, it follows that $g = xy$ with $y = g|_{W_i} \in H=\Cl(W_i)$. Note that $y$ has $\lambda$ as an eigenvalue but does not act as the scalar $\lambda$. It follows that $y \in H \setminus \bfZ(H) \subseteq (s')^H \cdot (t')^H$, so $y = v^Nw^N$ with $v,w \in H$. As $x = u^N$ with $u \in \Cl(U_i)$ centralizing $v \in H$, we conclude that $g = (uv)^Nw^N$, as desired. \hal
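As a quick numerical sanity check of the three-case description of $\p(-q,m)$ used in part (iv) above (purely illustrative, not part of the proof; all function names here are ours), one can verify by brute force, for small $q$ and $m \geq 7$, that every primitive prime divisor of $q^{2m}-1$ (for odd $m$), of $q^m-1$ (for $4\mid m$), respectively of $q^{m/2}-1$ (for $m \equiv 2 \pmod 4$), is indeed a valid choice of $\p(-q,m)$:

```python
def primitive_primes(b, m):
    """Primes dividing b^m - 1 but none of b^i - 1 for 1 <= i < m
    (b may be negative; we factor |b^m - 1| by trial division)."""
    n = abs(b ** m - 1)
    primes, d = set(), 2
    while d * d <= n:
        if n % d == 0:
            primes.add(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.add(n)
    return {p for p in primes
            if all((b ** i - 1) % p != 0 for i in range(1, m))}

def check(q, m):
    """Every prime produced by the recipe is a primitive prime of (-q)^m - 1."""
    if m % 2 == 1:
        recipe = primitive_primes(q, 2 * m)
    elif m % 4 == 0:
        recipe = primitive_primes(q, m)
    else:  # m = 2 mod 4
        recipe = primitive_primes(q, m // 2)
    return bool(recipe) and recipe <= primitive_primes(-q, m)

assert all(check(q, m) for q in (2, 3) for m in range(7, 13))
```

The ranges above cover all three congruence cases for $m$; the nonemptiness of each `recipe` set reflects Zsigmondy's theorem, whose exceptional cases do not occur for $m \geq 7$.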
\section{Introduction} In 1910, \cite{markov1910recherches} proved a central limit theorem for a two-state Markov chain. This initiated one of the longest histories in probability theory, the central limit theorem for stationary processes. One successful approach is the {\it martingale approximation} method, first applied by \cite{gordin69central} and then developed by many other researchers. Along this line, \cite{maxwell00central} proved the following result. Let $\{X_k\}_{k\in\mathbb Z}$ be a stationary process with $X_k = f\circ T^k$ for all $k\in\mathbb Z$, where $f$ is a measurable function from a probability space $(\Omega,{\cal A},\mathbb P)$ to $\mathbb R$, and $T$ is a bimeasurable, measure-preserving, one-to-one and onto map on $(\Omega,{\cal A},\mathbb P)$. Consider \begin{equation}\label{eq:Snf1} S_n(f) = \summ k1n f\circ T^k. \end{equation} Let $\{{\cal F}_k\}_{k\in\mathbb Z}$ be a filtration on $(\Omega,{\cal A},\mathbb P)$ such that $T^{-1} {\cal F}_k = {\cal F}_{k+1}$ for all $k\in\mathbb Z$. Suppose $\int f^2{\rm d}\mathbb P <\infty$, $\int f {\rm d}\mathbb P = 0$, $f\in{\cal F}_0$ (i.e., the sequence is {\it adapted}) and ${\mathbb E}(f\mid \bigcap_{k\in\mathbb Z}{\cal F}_k) = 0$. Maxwell and Woodroofe proved that, if \begin{equation}\label{cond:MW00} \sif k1\frac{\snn{{\mathbb E}(S_k(f)\mid{\cal F}_0)}_2}{k^{3/2}}<\infty\,, \end{equation} then $\sigma^2 = \lim_{n\to\infty} {\mathbb E}(S_n^2)/n$ exists, and \[ \frac{S_n(f)}{\sqrt n}\Rightarrow{\cal N}(0,\sigma^2)\,. \] Here `$\Rightarrow$' denotes the weak convergence of the random variables (convergence in distribution), and the $L^2$ norm $\nn\cdot_{2}$ is with respect to the measure $\mathbb P$. Note that~\eqref{cond:MW00} is implied by \begin{equation}\label{cond:MW00'} \sif k1\frac{\snn{{\mathbb E}(f\circ T^k\mid{\cal F}_0)}_2}{k^{1/2}}<\infty\,. \end{equation} Condition~\eqref{cond:MW00} is referred to as the Maxwell--Woodroofe condition. 
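For completeness, the implication~\eqref{cond:MW00'}~$\Rightarrow$~\eqref{cond:MW00} follows from a standard computation: apply the triangle inequality to ${\mathbb E}(S_k(f)\mid{\cal F}_0)$, interchange the order of summation, and use $\sum_{k\geq j}k^{-3/2}\leq Cj^{-1/2}$ for all $j\geq 1$:

```latex
\sif k1 \frac{\snn{{\mathbb E}(S_k(f)\mid{\cal F}_0)}_2}{k^{3/2}}
  \leq \sif k1 \frac{1}{k^{3/2}}\summ j1k \snn{{\mathbb E}(f\circ T^j\mid{\cal F}_0)}_2
  = \sif j1 \snn{{\mathbb E}(f\circ T^j\mid{\cal F}_0)}_2 \sum_{k\geq j}\frac{1}{k^{3/2}}
  \leq C\sif j1 \frac{\snn{{\mathbb E}(f\circ T^j\mid{\cal F}_0)}_2}{j^{1/2}}\,.
```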
Later on, \cite{peligrad05new} showed that~\eqref{cond:MW00} also implies the invariance principle. Indeed, let $\{\mathbb B(t)\}_{t\in[0,1]}$ denote the standard Brownian motion. Then,~\eqref{cond:MW00} implies \[ \frac{S_{\left\lfloor n\cdot\right\rfloor}(f)}{\sqrt n}\Rightarrow\sigma \mathbb B(\cdot) \] where $\left\lfloor x\right\rfloor$ denotes the largest integer smaller than or equal to $x\in\mathbb R$ and `$\Rightarrow$' is understood as the weak convergence in $C[0,1]$. Furthermore, Peligrad and Utev showed that~\eqref{cond:MW00} is the best possible (among conditions that only restrict the size of $\nn{{\mathbb E}( S_n(f)\mid{\cal F}_0)}_2$). See also \cite{dedecker07weak} and \cite{durieu08comparison} for comparisons of Conditions~\eqref{cond:MW00} and~\eqref{cond:MW00'} with other sufficient conditions for central limit theorems. For non-adapted sequences (i.e., $f\notin{\cal F}_0$), a similar condition guaranteeing the invariance principle was established by \cite{volny07nonadapted}. Other important references on central limit theorems by martingale approximation include \cite{gordin78central}, \cite{kipnis86central}, \cite{woodroofe92central}, \cite{wu04martingale}, \cite{dedecker07weak}, \cite{peligrad07maximal}, among others, and \cite{merlevede06recent} for a survey. The martingale approximation can also be applied to establish invariance principles for empirical processes, see for example \cite{wu03empirical,wu08empirical}, and for random walks in random environment, see for example \cite{rassoulagha05almost, rassoulagha07quenched}. In this paper, we establish a central limit theorem and an invariance principle for stationary multiparameter random fields. We briefly mention a few results in the literature. \cite{bolthausen82central}, \cite{goldie86central} and \cite{bradley89caution} studied this problem under suitable mixing conditions.
\cite{basu79functional}, \cite{nahapetian95billingsley}, and \cite{poghosyan98invariance} considered the problem for {\it multiparameter martingales}. Another important result is due to \cite{dedecker98central,dedecker01exponential}, whose approach was based on an adaptation of the {\it Lindeberg method}. As a particular case, \cite{cheng06central} established a central limit theorem for {\it functionals of linear random fields}, based on a lexicographically ordered martingale approximation. Here, we aim at establishing the so-called {\it projective-type} conditions such that the central limit theorem and invariance principle hold. Such conditions, often involving conditional expectations as in~\eqref{cond:MW00} and~\eqref{cond:MW00'}, have recently drawn much attention in central limit theorems for stationary sequences (see e.g.~\cite{dedecker07weak}). In particular, such conditions are easy to verify when applying these results to stochastic processes from statistics and econometrics (see e.g.~\cite{wu11asymptotic}). However, central limit theorems for stationary random fields based on projective conditions have been much less explored. This problem is not a simple extension of a one-dimensional problem to a high-dimensional one. An important reason is that the main technique for establishing central limit theorems with projective conditions in one dimension, the {\it martingale approximation} approach, does not apply to (high-dimensional) random fields as successfully as to (one-dimensional) stochastic processes. This obstacle has been known among researchers for more than 30 years. For example, \cite{bolthausen82central} remarked that `Gordin uses an approximation by martingales, but his method appears difficult to generalize to dimensions $\geq 2$.' Our result, with a condition similar to~\eqref{cond:MW00'}, is a first attempt at extending central limit theorems with projective-type conditions to multiparameter stationary random fields.
The result is obtained by a different approximation approach, namely, approximation by $m$-dependent random fields. To state our main result, we start with some notation. We consider a {\it product probability space} $(\Omega,\calA,\proba)$, i.e., a ${\mathbb Z^d}$-indexed-product of i.i.d.~probability spaces in form of \[ (\Omega,\calA,\proba) \equiv (\mathbb R^{\mathbb Z^d},{\cal B}^{\mathbb Z^d},P^{\mathbb Z^d})\,. \] Write $\epsilon_k(\omega) = \omega_k$, for all $\omega\in\mathbb R^{\mathbb Z^d}$ and $k\in{\mathbb Z^d}$. Then, $\indkd\epsilon$ are i.i.d.~random variables with distribution $P$. On such a space, we define the {\it natural filtration} $\indkd{\cal F}$ by \begin{equation}\label{eq:filFk} {\cal F}_k \mathrel{\mathop:}= \sigma\{\epsilon_l: l\preceq k, l\in{\mathbb Z^d}\}, \mbox{ for all } k\in{\mathbb Z^d}\,. \end{equation} Here and in the sequel, for every vector $x\in\mathbb R^d$, we write $x = (x_1,\dots,x_d)$ and for all $l,k\in\mathbb R^d$, let $l\preceq k$ stand for $l_i\leq k_i, i = 1,\dots,d$. We focus on mean-zero stationary random fields, defined on a product probability space. Let $\indkd T$ denote the group of shift operators on $\mathbb R^{\mathbb Z^d}$ with $(T_k\omega)_l = \omega_{k+l}$, for all $k,l\in{\mathbb Z^d}\,, \omega\in\mathbb R^{\mathbb Z^d}$. Then, we consider random fields in form of \[ \indkd {f\circ T}\,, \mbox{ or equivalently } \{f(\epsilon_{k+l}:l\in\mathbb Z^d)\}_{k\in\mathbb Z^d}\,, \] where $f$ is in the class ${\cal L}_0^p = \{f\in L^p({\cal F}_\infty), \int f{\rm d}\mathbb P = 0\}, p\geq 2$, with ${\cal F}_{\infty} = \bigvee_{k\in\mathbb Z^d}{\cal F}_k$. Throughout this paper, we consider a sequence $\indn V$ of finite rectangular subsets of ${\mathbb Z^d}$, in form of \begin{equation}\label{eq:Vn} V_n = \prod_{i=1}^d\{1,\dots,m_i\topp n\}\subset\mathbb N^d\,, \mbox{ for all } n\in\mathbb N\,, \end{equation} with $m_i\topp n$ increasing to infinity as $n\to\infty$ for all $i = 1,\dots,d$.
Let \begin{equation}\label{eq:Snf} S_n(f)\equiv S(V_n,f) = \sum_{k\in V_n}f\circ T_k \end{equation} denote the partial sums with respect to $V_n$. Moreover, write for $t\in[0,1]$, $V_n(t) = \prod_{i=1}^d[0,m_i\topp nt]\subset \mathbb R^d$ and $R_k = \prod_{i=1}^d(k_i-1,k_i]\subset\mathbb R^d$ for all $k\in{\mathbb Z^d}$. We write also \begin{equation}\label{eq:Bntf} B_{n,t}(f)\equiv B_{V_n,t}(f) = \sum_{{k\in{\mathbb N^d}}}\lambda(V_n(t)\cap R_k)f\circ T_k\,, \end{equation} where $\lambda$ is the Lebesgue measure on $\mathbb R^d$, and consider the weak convergence in the space $C[0,1]^d$, the space of continuous functions on $[0,1]^d$, equipped with the uniform metric. Recall that the standard $d$-parameter Brownian sheet on $[0,1]^d$, denoted by $\{\mathbb B(t)\}_{t\in[0,1]^d}$, is a mean-zero Gaussian random field with covariance ${\mathbb E}(\mathbb B(s)\mathbb B(t)) = \prod_{i=1}^d\min(s_i,t_i), s,t\in[0,1]^d$. Write $\vv 0 = (0,\dots,0), \vv 1 = (1,\dots,1)\in{\mathbb Z^d}$. In parallel to~\eqref{cond:MW00'}, our projective-type condition involves the following term: \begin{equation}\label{cond:wtDeltad} \wt\Delta_{d,p}(f) \mathrel{\mathop:}= \sum_{k\in\mathbb N^d} \frac{\snn{{\mathbb E}(f\circ T_{k}\mid{\cal F}_{\vv 1})}_p}{\prod_{i=1}^dk_i^{1/2}}\,. \end{equation} Our main result is the following. \begin{Thm}\label{thm:1} Consider a product probability space described above. If $f\in{\cal L}_0^2$, $f\in{\cal F}_{\vv 0}$ and $\wt\Delta_{d,2}(f)<\infty$, then \[ \sigma^2 = \lim_{n\to\infty} \frac{{\mathbb E}(S_n(f)^2)}{|V_n|} <\infty \] exists and \[ \frac{S_n(f)}{|V_n|^{1/2}} \Rightarrow{\cal N}(0,\sigma^2)\,. \] In addition, if $f\in{\cal L}_0^p$ and $\wt\Delta_{d,p}(f)<\infty$ for some $p>2$, then \begin{equation}\label{eq:IP} \frac{B_{n,\cdot}(f)}{|V_n|^{1/2}}\Rightarrow \sigma\mathbb B(\cdot) \end{equation} in $C[0,1]^d$. \end{Thm} For the sake of simplicity, we will prove Theorem~\ref{thm:1} in the case $d = 2$ in Sections~\ref{sec:CLT} and~\ref{sec:IP}. 
We develop two applications of the main result. First, we obtain a central limit theorem for {\it orthomartingales}, a special class of multiparameter martingales (see e.g.~\cite{khoshnevisan02multiparameter}), defined on a product probability space. To the best of our knowledge, this result is more general than existing central limit theorems for multiparameter martingales (\cite{basu79functional}, \cite{nahapetian95billingsley} and \cite{poghosyan98invariance}), on which we provide a detailed discussion in Section~\ref{sec:orthomartingales}. In particular, we demonstrate that one should not expect a central limit theorem even for general orthomartingales, without extra conditions on the structure of the underlying probability space. Second, we obtain an invariance principle for functionals of stationary causal linear random fields in Section~\ref{sec:LRF}. This result extends the work of \cite{wu02central} in the one-dimensional case. Another central limit theorem for functionals of stationary linear random fields has recently been developed by \cite{cheng06central}, following the approach of \cite{ho97limit} and \cite{cheng05asymptotic} in the one-dimensional case. We provide simple examples where our condition is weaker. \begin{Rem} After we finished this work, \cite{elmachkouri11central} obtained a central limit theorem and an invariance principle for stationary random fields, in a similar spirit to ours. They also took an $m$-approximation approach, based on the {\it physical dependence measure} introduced by \cite{wu05nonlinear}. Their results are more general, in the sense that they established an invariance principle for random fields indexed by arbitrary sets instead of rectangular ones. Their conditions are not directly comparable to ours. However, in the application to functionals of linear random fields, their condition on the coefficients is weaker (see Remark~\ref{rem:EVW}). \end{Rem} The paper is organized as follows.
In Section~\ref{sec:prelim} we provide preliminary results on $m$-dependent approximation. We establish the central limit theorem in Section~\ref{sec:CLT} and then the invariance principle in Section~\ref{sec:IP}. Sections~\ref{sec:orthomartingales} and~\ref{sec:LRF} are devoted to the applications to orthomartingales and functionals of stationary linear random fields, respectively. In Section~\ref{sec:momentInequality}, we prove a moment inequality, which plays a crucial role in proving our limit results. Some other auxiliary proofs are given in Section~\ref{sec:proofs}. \section{$m$-Dependent Approximation}\label{sec:prelim} We describe the general procedure of $m$-dependent approximation in this section. Here, we do not assume any structure on the underlying probability space, nor on the filtration. Instead, we simply assume $f\in L^2_0 = \{f\in L^2(\Omega,\calA,\proba), \int f{\rm d}\mathbb P = 0\}$, and $\indkd T$ is an Abelian group of bimeasurable, measure-preserving, one-to-one and onto maps on $(\Omega,\calA,\proba)$. The notion of $m$-dependence was introduced by \cite{hoeffding48central}. We say a random variable $f$ is {\it $m$-dependent} if $f\circ T_k, f\circ T_l$ are independent whenever $|k-l|_\infty \mathrel{\mathop:}=\max_{i= 1,\dots,d}|k_i-l_i|>m$. The following result on the asymptotic normality of sums of $m$-dependent random variables is a consequence of \cite{bolthausen82central} (see also \cite{rosen69note}). Recall $\indn V$ given in~\eqref{eq:Vn}. \begin{Thm}\label{thm:rosen} Suppose $f_m\in L^2_0$ is $m$-dependent. Write \begin{equation}\label{eq:sigmamVn} \sigma^2_m = \sum_{k\in{\mathbb Z^d}}{\mathbb E}[f_m(f_m\circ T_k)]\,. \end{equation} Then, \[ \frac{S_n(f_m)}{|V_n|^{1/2}} \Rightarrow {\cal N}(0,\sigma_m^2)\,. \] \end{Thm} Now, consider the function $f\in L_0^2(\mathbb P)$ and define \begin{equation}\label{eq:plus} \nn f_{V,+} = \limsup_{n\to\infty}\frac{\nn{S_n(f)}_2}{|V_n|^{1/2}}\,.
\end{equation} We refer to the pseudo-norm defined by $\nn\cdot_{V,+}$ as the {\it plus-norm}. \begin{Lem}\label{lem:CLT} Suppose $f,f_1,f_2,\dots\in L_0^2(\mathbb P)$ and $f_m$ is $m$-dependent for all $m\in\mathbb N$. If \begin{equation}\label{eq:2} \lim_{m\to\infty}\nn{f-f_m}_{V,+} = 0\,, \end{equation} then \begin{equation}\label{eq:1} \lim_{m\to\infty} \sigma_m = \lim_{m\to\infty} \nn{f_m}_{V,+} =:\sigma<\infty \end{equation} exists, and \begin{equation}\label{eq:CLT} \frac{S_n(f)}{|V_n|^{1/2}}\Rightarrow{\cal N}(0,\sigma^2)\,. \end{equation} \end{Lem} \begin{proof} It suffices to prove~\eqref{eq:1}; the convergence~\eqref{eq:CLT} then follows from Theorem~\ref{thm:rosen},~\eqref{eq:2} and a standard approximation argument. We will show that $\{\sigma_m\}_{m\in\mathbb N}$ forms a Cauchy sequence in $\mathbb R_+$. Observe that since $f_m$ is $m$-dependent with zero mean, \[ \sigma_m = \lim_{n\to\infty} \frac{\snn {S_n(f_{m})}_2}{|V_n|^{1/2}} \,. \] It then follows that \begin{eqnarray*} |\sigma_{m_1}-\sigma_{m_2}| & \leq & {\limsup_{n\to\infty}\frac{{\snn{S_n(f_{m_1}-f_{m_2})}_2}}{|V_n|^{1/2}}}\\ & \leq & \snn{f_{m_1}-f}_{V,+}+\snn{f_{m_2}-f}_{V,+}\,, \end{eqnarray*} which can be made arbitrarily small by taking $m_1, m_2$ large enough. We have thus shown that $\{\sigma_m\}_{m\in\mathbb N}$ is a Cauchy sequence in $\mathbb R_+$. \end{proof} \begin{Rem} The idea of establishing the central limit theorem by controlling the quantity $\snn{f-f_m}_{V,+}$ dates back to \cite{gordin69central}, where $f_m$ was selected from a different subspace. In the one-dimensional case, when $V_n = \{1,\dots,n\}$, \cite{zhao08martingale} named $\nn\cdot_{V,+}$ the plus-norm, and established a necessary and sufficient condition for the martingale approximation, in terms of the plus-norm. See \cite{peligrad10conditional} and \cite{gordin11functional} for improvements and more discussions on such conditions. \end{Rem} In the next section, we will establish conditions under which~\eqref{eq:2} holds.
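As a concrete illustration of Theorem~\ref{thm:rosen} and the series~\eqref{eq:sigmamVn}: for the $1$-dependent field ($d=2$) generated by $f=\epsilon_{0,0}+\theta\,\epsilon_{-1,-1}$ with i.i.d.~standard normal innovations, the series evaluates to $\sigma_1^2=1+\theta^2+2\theta=(1+\theta)^2$. The short simulation below (purely illustrative; the variable names are ours) checks that ${\mathbb E}(S_n(f)^2)/|V_n|$ is close to this value on a moderate square $V_n$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 0.5, 50, 2000
target = (1 + theta) ** 2          # sigma_1^2 = 2.25 for theta = 0.5

vals = np.empty(reps)
for r in range(reps):
    eps = rng.standard_normal((n + 1, n + 1))
    # X_{ij} = eps_{ij} + theta * eps_{i-1, j-1}, a 1-dependent field
    X = eps[1:, 1:] + theta * eps[:-1, :-1]
    vals[r] = X.sum()

est = vals.var() / n ** 2          # Monte Carlo estimate of E S_n(f)^2 / |V_n|
print(round(est, 3), "vs", target)
```

The estimate sits slightly below $(1+\theta)^2$ for finite $n$ because only $(n-1)^2$ of the $n^2$ lattice sites contribute the diagonal-lag covariance, a boundary effect of order $1/n$.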
\section{A Central Limit Theorem}\label{sec:CLT} From this section on, we will focus on stationary multiparameter random fields, defined on product probability spaces. On such a space, any integrable function has a natural $L^2$-approximation by $m$-dependent functions, and there is a natural commuting filtration. For the sake of simplicity, we consider only 2-parameter random fields in the sequel and simply say `random fields' for short. We will prove a central limit theorem here and then an invariance principle in the next section. The argument, however, can be generalized easily to $d$-parameter random fields, and the result has been stated in Theorem~\ref{thm:1}. We start with a product probability space with i.i.d.~random variables $\indij\epsilon$. Recall that $\indij T$ is the group of shift operators on $\mathbb R^{\mathbb Z^2}$ and write ${\cal F}_{\infty,\infty} = \sigma(\epsilon_{i,j}:(i,j)\in\mathbb Z^2)$. We focus on the class of functions ${\cal L}_0^p = \{f\in L^p({\cal F}_{\infty,\infty}): {\mathbb E} f = 0\}, p\geq 2$. For every measurable function $f\in{\cal L}_0^2$, define, for all $m\in\mathbb N$, \begin{equation}\label{eq:fm} f_m \mathrel{\mathop:}= {\mathbb E}\spp{f| {\cal F}_{\ip m}}\quad\mbox{ with }\quad {\cal F}_{\ip m} = \sigma(\epsilon_j:j\in\{-m,\dots,m\}^2)\,. \end{equation} Clearly, $f_m\in{\cal L}_0^2$, $\nn {f-f_m}_2\to 0$ as $m\to\infty$ and $\indij{f_m\circ T}$ are $m$-dependent functions. Now, recall the natural filtration $\indij{\cal F}$ defined by ${\cal F}_{k,l} = \sigma(\epsilon_{i,j}:i\leq k, j\leq l)$. This is a 2-parameter filtration, i.e., \begin{equation}\label{eq:multiFiltration} {\cal F}_{i,j}\subset{\cal F}_{k,l}\quad\mbox{ if }\quad i\leq k, j\leq l\,. \end{equation} Also, \begin{equation}\label{eq:nested} T_{-i,-j}{\cal F}_{k,l} = {\cal F}_{k+i,l+j}\,,\forall (i,j),(k,l)\in\mathbb Z^2\,. \end{equation} Moreover, the notion of commuting filtration is of importance to us.
\begin{Def}\label{def:commuting} A filtration $\indij{\cal F}$ is {\it commuting}, if for all ${\cal F}_{k,l}$-measurable bounded random variable $Y$, ${\mathbb E}(Y|{\cal F}_{i,j}) = {\mathbb E}(Y|{\cal F}_{i\wedge k,j\wedge l})$. \end{Def} Since $\{\epsilon_{k,l}\}_{(k,l)\in\mathbb Z^2}$ are independent random variables, $\indij{\cal F}$ is {commuting} (see Proposition~\ref{prop:commuting} in Section~\ref{sec:proofs}). This implies that the {\it marginal filtrations} \begin{equation}\label{eq:marginal} {\cal F}_{i,\infty} = \bigvee_{j\geq 0}{\cal F}_{i,j} \quad\mbox{ and }\quad {\cal F}_{\infty,j} = \bigvee_{i\geq 0}{\cal F}_{i,j} \end{equation} are commuting, in the sense that for all $Y\in L^1(\mathbb P)$, \begin{equation}\label{eq:marginalCommuting} {\mathbb E}[{\mathbb E}(Y|{\cal F}_{i,\infty})|{\cal F}_{\infty,j}] = {\mathbb E}[{\mathbb E}(Y|{\cal F}_{\infty,j})|{\cal F}_{i,\infty}] = {\mathbb E}(Y|{\cal F}_{i,j})\,. \end{equation} For more details on the commuting filtration, see \cite{khoshnevisan02multiparameter}. For every ${\cal F}_{0,0}$-measurable function $f\in{\cal L}_0^2$, write \begin{equation}\label{eq:Smn} S_{m,n}(f) = \summ i1m\summ j1{n} f\circ T_{i,j}\,. \end{equation} Thanks to the commuting structure of the filtration, applying the maximal inequality in \cite{peligrad07maximal} twice, we can prove the following moment inequality with $p\geq 2$: \begin{equation}\label{eq:PUW} \snn{S_{m,n}(f)}_p \leq Cm^{1/2}n^{1/2}\Delta_{(m,n),p}(f) \end{equation} with \[ \Delta_{(m,n),p}(f) = \summ k1{m}\summ l1{n}\frac{\snn{{\mathbb E}(S_{k,l}(f)\mid{\cal F}_{1,1})}_p}{k^{3/2}l^{3/2}}\,. \] In fact, we will prove a stronger inequality without the assumptions of product probability space and the ${\cal F}_{0,0}$-measurability of $f$. See Section~\ref{sec:momentInequality}, Proposition~\ref{prop:momentInequality} and Corollary~\ref{coro:1}.
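Identity~\eqref{eq:marginalCommuting} can be verified exactly on a tiny product space. The snippet below (illustrative only; all names are ours) takes four i.i.d.~fair bits $\epsilon_{i,j}$, $(i,j)\in\{0,1\}^2$, and checks that conditioning on ${\cal F}_{0,\infty}=\sigma(\epsilon_{0,0},\epsilon_{0,1})$ and on ${\cal F}_{\infty,0}=\sigma(\epsilon_{0,0},\epsilon_{1,0})$, in either order, agrees with conditioning on ${\cal F}_{0,0}=\sigma(\epsilon_{0,0})$:

```python
from itertools import product

# Outcomes w = (e00, e01, e10, e11), uniform on {0,1}^4.
omegas = list(product([0, 1], repeat=4))

def cond_exp(f, keep):
    """E[f | sigma(coordinates in `keep`)], returned as a function of the outcome."""
    table = {}
    for w in omegas:
        block = [f(v) for v in omegas if all(v[c] == w[c] for c in keep)]
        table[w] = sum(block) / len(block)
    return table.__getitem__

Y = lambda w: w[0] + 2 * w[1] * w[3] + w[2] * w[3]   # any function of all four bits

F_row = [0, 1]   # F_{0,infty} = sigma(e00, e01)
F_col = [0, 2]   # F_{infty,0} = sigma(e00, e10)
F_00  = [0]      # F_{0,0}     = sigma(e00)

lhs1 = cond_exp(cond_exp(Y, F_row), F_col)
lhs2 = cond_exp(cond_exp(Y, F_col), F_row)
rhs  = cond_exp(Y, F_00)
assert all(abs(lhs1(w) - rhs(w)) < 1e-12 and abs(lhs2(w) - rhs(w)) < 1e-12
           for w in omegas)
```

The identity holds exactly here because the coordinate variables are independent; for a general (non-commuting) filtration the two iterated conditionings need not agree.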
Recall that \begin{equation}\label{eq:wtDelta} \wt\Delta_{2,p}(f) = \sif k1\sif l1 \frac{\snn{{\mathbb E}(f\circ T_{k,l}\mid{\cal F}_{1,1})}_p}{k^{1/2}l^{1/2}}\,. \end{equation} Now, we can prove the following central limit theorem for adapted stationary random fields. \begin{Thm}\label{thm:CLT} Consider the product probability space discussed above. Let $\indn V$ be as in~\eqref{eq:Vn} with $d=2$. Suppose $f\in{\cal L}_0^2$, $f\in{\cal F}_{0,0}$, and define $f_m$ as in~\eqref{eq:fm}. If $\wt\Delta_{2,2}(f)<\infty$, then \[ \lim_{m\to\infty}\nn {f-f_m}_{V,+} = 0\,. \] Therefore, $\sigma \mathrel{\mathop:}= \lim_{m\to\infty}\nn{f_m}_{V,+}<\infty$ exists, and $S_n(f)/|V_n|^{1/2}\Rightarrow{\cal N}(0,\sigma^2)$. \end{Thm} \begin{proof} The second part follows immediately from Lemma~\ref{lem:CLT}. It suffices to prove $\nn{f-f_m}_{V,+}\to 0$ as $m\to\infty$. First, by the fact that \[ \snn{{\mathbb E}(S_{k,l}(f)\mid{\cal F}_{1,1})}_2\leq \summ i1k\summ j1l\snn{{\mathbb E}\spp{f\circ T_{i,j}\mid{\cal F}_{1,1}}}_2 \] and Fubini's theorem, we have $\Delta_{(\infty,\infty),2}(f)\leq 9\wt\Delta_{2,2}(f)$. So, by~\eqref{eq:plus} and~\eqref{eq:PUW}, it suffices to show \begin{equation}\label{eq:DCT0} \wt\Delta_{2,2}(f-f_m)= \sif k1\sif l1\frac{\snn{{\mathbb E}[(f-f_m)\circ T_{k,l}\mid{\cal F}_{1,1}]}_2}{k^{1/2}l^{1/2}} \to 0 \end{equation} as $m\to\infty$. Clearly, the summand in~\eqref{eq:DCT0} converges to 0 for each $k,l$ fixed, since~\eqref{eq:fm} implies $\nn{f-f_m}_2 \to 0$ as $m\to\infty$ and $\snn{{\mathbb E}[(f-f_m)\circ T_{k,l}\mid{\cal F}_{1,1}]}_2 \leq \snn{f-f_m}_2$.
Moreover, observe that \begin{eqnarray*} {\mathbb E}\spp{f_m\circ T_{k,l}\mid{\cal F}_{1,1}} & = & {\mathbb E}\sbb{{\mathbb E}\spp{f\circ T_{k,l}\mid T_{-k,-l}({\cal F}_{\left\langle m\right\rangle})}\mid{\cal F}_{1,1}}\\ & = & {\mathbb E}\sbb{{\mathbb E}\spp{f\circ T_{k,l}\mid{\cal F}_{1,1}}\mid T_{-k,-l}({\cal F}_{\left\langle m\right\rangle})}\,, \end{eqnarray*} where in the second equality we can exchange the order of conditional expectations by the definitions of ${\cal F}_{1,1}$ and $T_{-k,-l}({\cal F}_{\left\langle m\right\rangle})$ (see Proposition~\ref{prop:commuting} in Section~\ref{sec:proofs} for a detailed treatment). Therefore, \begin{eqnarray*} & & \snn{{\mathbb E}\sbb{(f-f_m)\circ T_{k,l}\mid{\cal F}_{1,1}}}_2 \\ & & \ \ \ \ \quad\quad\quad\leq \snn{{\mathbb E}\spp{f\circ T_{k,l}\mid{\cal F}_{1,1}}}_2 + \snn{{\mathbb E}\spp{f_m\circ T_{k,l}\mid{\cal F}_{1,1}}}_2 \\ & & \ \ \ \ \quad\quad\quad \leq 2\snn{{\mathbb E}\spp{f\circ T_{k,l}\mid{\cal F}_{1,1}}}_2\,. \end{eqnarray*} Then, the condition $\wt\Delta_{2,2}(f)<\infty$ combined with the dominated convergence theorem yields~\eqref{eq:DCT0}. The proof is thus completed. \end{proof} \begin{Rem}An `extension' of the Maxwell--Woodroofe condition~\eqref{cond:MW00} to high dimensions remains an open problem. Namely, if we replace $\wt\Delta_{2,2}(f)<\infty$ by $\Delta_{(\infty,\infty),2}(f)<\infty$ in Theorem~\ref{thm:CLT}, do we have the same conclusion? The latter condition is significantly weaker than the former one. \end{Rem} \section{An Invariance Principle}\label{sec:IP} Recall the space $C[0,1]^2$ and the 2-parameter Brownian sheet $\{\mathbb B(t)\}_{t\in[0,1]^2}$. \begin{Thm}\label{thm:IP} Under the assumptions in Theorem~\ref{thm:CLT}, suppose in addition that $f\in{\cal L}_0^p$ and $\wt\Delta_{2,p}(f)<\infty$ for some $p>2$. Write $B_{n,t}(f)$ as in~\eqref{eq:Bntf} with $d=2$.
Then, \[ \frac{B_{n,\cdot}(f)}{|V_n|^{1/2}}\Rightarrow\sigma\mathbb B(\cdot)\,, \] where `$\Rightarrow$' stands for weak convergence of probability measures on $C[0,1]^2$. \begin{proof} It suffices to show that the finite-dimensional distributions converge, and $\{B_{n,t}(f)/|V_n|^{1/2}\}_{t\in[0,1]^2}$ is tight. We first show that, for all $\wt t = (t\topp1,\dots,t\topp k)\subset[0,1]^2$, \begin{equation}\label{eq:fdd} \bpp{\frac{B_{n,t\topp1}(f)}{|V_n|^{1/2}},\cdots,\frac{B_{n,t\topp k}(f)}{|V_n|^{1/2}}}\Rightarrow \sigma(\mathbb B(t\topp1),\cdots,\mathbb B(t\topp k)) =:\sigma\wt{\mathbb B}_{\wt t}\,. \end{equation} Consider the $m$-dependent function $f_m$ defined in~\eqref{eq:fm}. Then, the convergence of the finite-dimensional distributions~\eqref{eq:fdd} with $f$ replaced by $f_m$ follows from the invariance principle for $m$-dependent random fields (see e.g.~\cite{shashkin03invariance}). Furthermore, by Theorem~\ref{thm:CLT}, $\wt\Delta_{2,2}(f)\leq\wt\Delta_{2,p}(f)<\infty$, so that $\nn{f-f_m}_{V,+}\to 0$ as $m\to\infty$, and therefore, letting $\wt B_{n,\wt t}(f)/|V_n|^{1/2}$ denote the left-hand side of~\eqref{eq:fdd}, $\wt B_{n,\wt t}(f_m-f)/|V_n|^{1/2}\to (0,\dots,0)\in\mathbb R^k$ in probability. The convergence of the finite-dimensional distributions~\eqref{eq:fdd} follows. Now, we prove the tightness of $\{B_{n,t}(f)\}_{t\in[0,1]^2}$. Fix $n$ and consider \[ V_n = \{1,\dots,n_1\}\times\{1,\dots,n_2\}\,. \] Write $B_{n,t} \equiv B_{n,t}(f)$ and $S_{m,n} \equiv S_{m,n}(f)$ for short. For all $0\leq r_1<s_1\leq 1, 0\leq r_2<s_2\leq 1$, set \[ B_n((r_1,s_1]\times(r_2,s_2]) \mathrel{\mathop:}= B_{n,(s_1,s_2)} - B_{n,(r_1,s_2)} - B_{n,(s_1,r_2)} + B_{n,(r_1,r_2)}\,. \] We will show that there exists a constant $C$, independent of $n, r_1,r_2,s_1$ and $s_2$, such that \begin{equation}\label{eq:bound} (n_1n_2)^{-1/2}\nn{B_n((r_1,s_1]\times(r_2,s_2])}_p\leq C\sqrt{(s_1-r_1)(s_2-r_2)}\wt\Delta_{2,p}(f)\,.
\end{equation} Inequality~\eqref{eq:bound} implies the tightness, by \cite{nagai74simple}, Theorem 1. Now, we prove~\eqref{eq:bound} to complete the proof. From now on, the constant $C$ may change from line to line. Write $m_i = \left\lfloor n_is_i\right\rfloor - \left\lfloor n_ir_i\right\rfloor, i = 1,2$. If $m_i\geq 2, i = 1,2$, then \begin{eqnarray} & & \snn{B_n((r_1,s_1]\times(r_2,s_2])}_p \nonumber\\ & &\leq \nn{S_{m_1,m_2}}_p + 2\snn{S_{m_1,1}}_p + 2\snn{S_{1,m_2}}_p+ 4\snn{S_{1,1}}_p\nonumber\\ & &\leq C(m_1m_2)^{1/2}\wt\Delta_{2,p}(f)\label{eq:bound1} \end{eqnarray} for some constant $C$, by~\eqref{eq:PUW}. Note that $m_i\geq 2$ also implies $n_i(s_i-r_i)>1$. Therefore, $m_i\leq n_i(s_i-r_i) + 1<2n_i(s_i-r_i)$, and~\eqref{eq:bound1} can be bounded by $C(n_1n_2)^{1/2}[(s_1-r_1)(s_2-r_2)]^{1/2}\wt\Delta_{2,p}(f)$, which yields~\eqref{eq:bound}. In the case $m_1<2$ or $m_2<2$, obtaining~\eqref{eq:bound} requires a more careful analysis. We only show the case when $m_1 = 1, m_2\geq 2$, as the proofs for the other cases are similar. Suppose that $m_1 = 1$, and exclude the case $n_1r_1 = \lfloor n_1r_1\rfloor = \lceil n_1r_1\rceil$ (it is easy to see that this case can be eventually controlled by continuity). Then, we have $n_1r_1<\lceil n_1r_1\rceil = \lfloor n_1s_1\rfloor\leq n_1s_1$. Hence, \begin{multline*} \snn{B_n((r_1,s_1]\times(r_2,s_2])}_p \\ \leq n_1(s_1-r_1)( \nn{S_{1,m_2}}_p + 2\snn{S_{1,1}}_p) \leq Cn_1(s_1-r_1)m_2^{1/2}\wt\Delta_{2,p}(f)\,. \end{multline*} Observe that $m_1 = 1$ also implies $n_1(s_1-r_1)\in(0,2)$. If $n_1(s_1-r_1)\leq 1$, then $n_1(s_1-r_1)\leq [n_1(s_1-r_1)]^{1/2}$. If $n_1(s_1-r_1)\in (1,2)$, then $n_1(s_1-r_1)< \sqrt 2[n_1(s_1-r_1)]^{1/2}$. It then follows that~\eqref{eq:bound} still holds. \end{proof} \begin{Rem} To prove the invariance principle for stationary random fields, most results in the literature require a finite moment of order strictly larger than 2.
See for example \cite{berkes81strong}, \cite{goldie86variance} and \cite{dedecker01exponential}. This is in contrast to the one-dimensional case, where the invariance principle can be established under a finite second moment assumption. To the best of our knowledge, the only invariance principle so far for stationary random fields that assumes only a finite second moment is due to \cite{shashkin03invariance}, where the random fields are assumed to be $BL(\theta)$-dependent (including $m$-dependent stationary random fields). In general, the $BL(\theta)$-dependence is difficult to check. Besides, \cite{basu79functional} proved an invariance principle for martingale difference random fields under a finite second moment assumption, but with stringent conditions on the filtration (see Remark~\ref{rem:orthomartingale} below). In our case, it remains an open problem whether $\wt\Delta_{2,2}(f)<\infty$ implies the invariance principle. See also a similar conjecture in \cite{dedecker01exponential}, Remark 1. \end{Rem} \section{Orthomartingales}\label{sec:orthomartingales} Central limit theorems and invariance principles for multiparameter martingales are more difficult to establish than in the one-dimensional case, due to the complex structure of multiparameter martingales. We will focus on orthomartingales first and establish an invariance principle, and then compare the results with those for other types of multiparameter martingales. The idea of orthomartingales is due to R.~Cairoli and J.~B.~Walsh. See e.g.~references in \cite{khoshnevisan02multiparameter}, which also provides a nice introduction to the material. For the sake of simplicity, we suppose $d=2$. Consider a probability space $(\Omega,\calA,\proba)$ and recall the definition of the 2-parameter filtration~\eqref{eq:multiFiltration}. We restrict ourselves to filtrations indexed by $\mathbb N^2$. 
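For orientation, we record the canonical example (an illustration only, not needed for the formal development): if $\indij\epsilon$ are i.i.d.~random variables and ${\cal F}_{i,j} = \sigma(\epsilon_{k,l}:k\leq i, l\leq j)$, then the filtration is commuting, in the sense that for all integrable $X$, \[ {\mathbb E}\bb{{\mathbb E}(X\mid{\cal F}_{i,\infty})\mid{\cal F}_{\infty,j}} = {\mathbb E}(X\mid{\cal F}_{i,j})\,. \] This can be checked via Proposition~\ref{prop:commuting} below, applied with ${\cal F} = \sigma(\epsilon_{k,l}:k\leq i, l>j)$, ${\cal G} = {\cal F}_{i,j}$ and ${\cal H} = \sigma(\epsilon_{k,l}:k>i, l\leq j)$, which are mutually independent and satisfy ${\cal F}_{i,\infty} = {\cal F}\vee{\cal G}$ and ${\cal F}_{\infty,j} = {\cal G}\vee{\cal H}$.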
\begin{Def} Given a commuting 2-parameter filtration $\indijn{\cal F}$ on $(\Omega,\calA,\proba)$, we say that a family of random variables $\indijn M$ is a {\it 2-parameter orthomartingale} on $(\Omega,\calA,\proba)$, with respect to $\indijn{\cal F}$, if for all $(i,j)\in\mathbb N^2$, $M_{i,j}$ is ${\cal F}_{i,j}$-measurable, and ${\mathbb E}(M_{i+1,j}\mid{\cal F}_{i,\infty}) = {\mathbb E}(M_{i,j+1}\mid{\cal F}_{\infty,j}) = M_{i,j}$, almost surely. \end{Def} In our case, for an ${\cal F}_{0,0}$-measurable function $f\in{\cal L}_0^2$, $M_{m,n} = S_{m,n}(f)$ as in~\eqref{eq:Smn} yields a 2-parameter orthomartingale, if \begin{equation}\label{eq:orthomartingale} {\mathbb E}(f\circ T_{i+1,j}\mid{\cal F}_{i,\infty}) = {\mathbb E}(f\circ T_{i,j+1}\mid{\cal F}_{\infty,j}) = 0\mbox{ almost surely}, \end{equation} for all $(i,j)\in\mathbb N^2$. In this case, we say that $\indijn{f\circ T}$ are {\it 2-parameter orthomartingale differences}. \begin{Rem} In our case, $\indijn M$ is also a {\it 2-parameter martingale} in the usual sense, i.e., ${\mathbb E}(M_{i,j}\mid{\cal F}_{k,l}) = M_{i\wedge k,j\wedge l}$, almost surely. Indeed, \[ {\mathbb E}(M_{i,j}\mid{\cal F}_{k,l}) = {\mathbb E}[{\mathbb E}(M_{i,j}\mid{\cal F}_{k,\infty})\mid{\cal F}_{\infty,l}] = {\mathbb E}(M_{i\wedge k,j}\mid{\cal F}_{\infty,l}) = M_{i\wedge k,j\wedge l}\,. \] In general, however, the converse is not true, i.e., multiparameter martingales are not necessarily orthomartingales (see e.g.~\cite{khoshnevisan02multiparameter} p.~33). The two notions are equivalent when the filtration is commuting (see e.g.~\cite{khoshnevisan02multiparameter}, Chapter I, Theorem 3.5.1). \end{Rem} \begin{Thm}\label{thm:orthomartingale} Consider a product probability space $(\Omega,\calA,\proba)$ with a natural filtration $\indijn{\cal F}$. Suppose $f\in{\cal L}_0^2$ and $f\in{\cal F}_{0,0}$. 
If $\indijn{f\circ T}$ are 2-parameter orthomartingale differences, i.e.,~\eqref{eq:orthomartingale} holds, then $\sigma^2 = \lim_{n\to\infty} {\mathbb E}(S_n(f)^2)/|V_n| <\infty$ exists, and \[ \frac{S_n(f)}{|V_n|^{1/2}}\Rightarrow\sigma{\cal N}(0,1)\,. \] In addition, if $f\in{\cal L}_0^p$ for some $p>2$, then the invariance principle~\eqref{eq:IP} holds. \end{Thm} \begin{proof} Observe that~\eqref{eq:orthomartingale} implies ${\mathbb E}(f\circ T_{i,j}\mid{\cal F}_{1,1}) = 0$ if $i>1$ or $j>1$. Then, for $f\in{\cal L}_0^p, p\geq 2$, \[ \wt\Delta_{\infty,p}(f) = \snn{{\mathbb E}(f\circ T_{1,1}\mid{\cal F}_{1,1})}_p = \nn f_p<\infty\,. \] The result then follows immediately from Theorem~\ref{thm:1}. Note that the argument holds for general $d$-parameter orthomartingales ($d\geq 2$) defined in \cite{khoshnevisan02multiparameter}. \end{proof} \begin{Rem}\label{rem:orthomartingale} Our result is more general than \cite{basu79functional}, \cite{nahapetian95billingsley} and \cite{poghosyan98invariance} in the following sense. Let $\indij\epsilon$ be i.i.d.~random variables. In \cite{nahapetian95billingsley}, the central limit theorem was established for the so-called {\it martingale-difference random fields} $\indijn M$ with $M_{i,j} = \summ k1i\summ l1j D_{k,l}$, such that \[ {\mathbb E}[D_{i,j}\mid\sigma(\epsilon_{k,l}:(k,l)\in\mathbb Z^2, (k,l)\neq (i,j))] = 0\,,\mbox{ for all } (i,j)\in\mathbb N^2\,. \] In \cite{basu79functional} and \cite{poghosyan98invariance}, the authors considered the multiparameter martingales $\indijn M$ with respect to the filtration defined by \[ \wt{\cal F}_{i,j} = \sigma(\epsilon_{k,l}: k\leq i \mbox{ or } l\leq j)\,. \] It is easy to see that, in both cases above, their assumptions are stronger, in the sense that they imply that $\indijn M$ is an orthomartingale, with the natural filtration $\indijn{\cal F}$~\eqref{eq:filFk}. 
On the other hand, however, the results in \cite{basu79functional,poghosyan98invariance} only assume that $\indij\epsilon$ is a stationary random field, which is weaker than our assumption. \end{Rem} \begin{Rem} By assumption, the $\sigma$-algebra of $\{T_{i,j}\}_{(i,j)\in\mathbb Z^2}$-invariant sets is $\mathbb P$-trivial. Therefore, our results are restricted to ergodic random fields, and exclude the following simple case: \[ X_{i,j} = Y\epsilon_{i,j}, (i,j)\in\mathbb Z^2\,, \] where $Y$ is a random variable independent of $\{\epsilon_{i,j}\}_{(i,j)\in\mathbb Z^2}$. Clearly, if $\epsilon_{0,0}$ has zero mean and finite variance $\sigma^2$, then \[ \frac1n\summ i1n\summ j1n X_{i,j}\Rightarrow YZ\,, \] where $Z\sim{\cal N}(0,\sigma^2)$ is independent of $Y$. For central limit theorems on non-ergodic random fields, see for example \cite{dedecker98central,dedecker01exponential}. \end{Rem} Lastly, we point out that the product structure of the probability space plays an important role. We provide an example of an orthomartingale with a different underlying probability structure. In this case, the limit behavior is quite different from the cases studied so far. \begin{Example}\label{example:productRWs} Suppose $\indz\epsilon$ and $\indz{\eta}$ are two families of i.i.d.~random variables. Define ${\cal G}_i = \sigma(\epsilon_j:j\leq i)$ and ${\cal H}_i = \sigma(\eta_j:j\leq i)$ for all $i\in\mathbb N$. Then, ${\cal G} = \indn{\cal G}$ and ${\cal H} = \indn{\cal H}$ are two filtrations. Now, let $\indn Y$ and $\indn Z$ be two arbitrary martingales with stationary increments with respect to the filtrations ${\cal G}$ and ${\cal H}$, respectively. Suppose $Y_n = \sum_{i=1}^n D_i, Z_n = \sum_{i=1}^n E_i$, where $\indn D$ and $\indn E$ are stationary martingale differences. 
Then, $\{D_iE_j\}_{(i,j)\in\mathbb N^2}$ is a stationary random field and \[ M_{m,n} \mathrel{\mathop:}= \summ i1m\summ j1n D_iE_j = Y_mZ_n \] is an orthomartingale with respect to the filtration $\{ {\cal G}_i\vee{\cal H}_j\}_{(i,j)\in\mathbb N^2}$. Clearly, \[ \frac{M_{n,n}}n = \frac{Y_n}{\sqrt n}\frac{Z_n}{\sqrt n}\Rightarrow {\cal N}(0,\sigma_Y^2)\times{\cal N}(0,\sigma_Z^2)\,, \] where the limit is the distribution of the product of two independent normal random variables (a Gaussian chaos). That is, $M_{n,n}/n$ has an asymptotically non-normal distribution. One can also define $\wt M_{m,n} = Y_m+Z_n$, which again gives an orthomartingale, and $\{D_i+E_j\}_{(i,j)\in\mathbb N^2}$ is the corresponding stationary random field. This time, one can show that \[ \frac{\wt M_{n,n}}{\sqrt n} = \frac{Y_n}{\sqrt n} + \frac{Z_n}{\sqrt n}\Rightarrow {\cal N}(0,\sigma_Y^2+\sigma_Z^2)\,. \] Here, the limit is a normal distribution, but the normalizing sequence is $\sqrt n$ instead of $n$. \end{Example} This example demonstrates that for general orthomartingales, to obtain a central limit theorem one must assume extra conditions on the structure of the underlying probability space. For the structure mentioned above, there is no $m$-dependent approximation for the random fields. Indeed, the example corresponds to the sample space $\Omega = \mathbb R^{\mathbb Z}\times\mathbb R^{\mathbb Z}$ with $[T_{k,l}(\epsilon,\eta)]_{i,j} = (\epsilon_{i+k},\eta_{j+l})$, and if we define $f_m$ similarly as in~\eqref{eq:fm} with \[ {\mathcal F}_{\ip m} \mathrel{\mathop:}= \sigma(\epsilon_i,\eta_j:-m\leq i,j\leq m)\,, \] then $f$ and $f\circ T_{k,l}$ are independent if and only if $\min(k,l)>m$. That is, the dependence can be very strong along the horizontal (resp.~vertical) direction of the random field. \section{Stationary Causal Linear Random Fields}\label{sec:LRF} We establish a central limit theorem for functionals of stationary causal linear random fields. We focus on $d=2$. 
Consider a stationary linear random field $\{Z_{i,j}\}_{(i,j)\in\mathbb Z^2}$ defined by \begin{equation}\label{eq:Zij} Z_{i,j} = \sumZ r\sumZ sa_{r,s}\epsilon_{i-r,j-s} = \sumZ r\sumZ sa_{i-r,j-s}\epsilon_{r,s}\,, \end{equation} where the coefficients $\{a_{i,j}\}_{(i,j)\in\mathbb Z^2}$ satisfy $\sum_{(i,j)\in\mathbb Z^2}a_{i,j}^2<\infty$, and $\{\epsilon_{i,j}\}_{(i,j)\in\mathbb Z^2}$ are i.i.d.~random variables with zero mean and finite variance as before. We restrict ourselves to {\it causal} linear random fields, i.e., $a_{i,j} = 0$ unless $i\geq 0$ and $j\geq 0$. They are also referred to as {\it adapted} to the filtration $\{{\cal F}_{i,j}\}_{(i,j)\in\mathbb Z^2}$. Now, consider the random fields $\{f\circ T_{k,l}\}_{(k,l)\in\mathbb Z^2}$ with the more specific form $f = K(\{Z_{i,j}\}_h^{0,0})$, where $h$ is a fixed strictly positive integer, $K$ is a measurable function from $\mathbb R^{h^2}$ to $\mathbb R$ and for all $(k,l)\in\mathbb Z^2$, \[ \{Z_{i,j}\}_h^{k,l} \mathrel{\mathop:}= \{Z_{i,j}:k-h+1\leq i\leq k, l-h+1\leq j\leq l\} \] is viewed as a random vector in $\mathbb R^{h^2}$ with covariates lexicographically ordered. In the sequel, the same definition applies similarly to $\{x_{i,j}\}_h^{k,l}$, given $\{x_{i,j}\}_{(i,j)\in\mathbb Z^2}$. Assume that \begin{equation}\label{eq:K} {\mathbb E} K(\{Z_{i,j}\}_h^{0,0}) = 0\quad\mbox{ and }\quad {\mathbb E} K^p(\{Z_{i,j}\}_h^{0,0}) <\infty \end{equation} for some $p\geq 2$. In this way, \begin{equation}\label{eq:Xkl} f\circ T_{k,l} = K(\{Z_{i,j}\}_h^{k,l})\,. \end{equation} The model~\eqref{eq:Xkl} is a natural extension of the functionals of causal linear processes considered by \cite{wu02central}. Next, we introduce some notation similar to \cite{ho97limit} and \cite{wu02central}. Here, our ultimate goal is to translate Condition~\eqref{eq:wtDelta} into a condition on the regularity of $K$ and the summability of $\{a_{i,j}\}_{(i,j)\in\mathbb Z^2}$. 
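For orientation (a simple illustration, not used in the sequel), consider the special case $h = 1$ and $K(x) = x$. Then~\eqref{eq:Xkl} reduces to the linear random field itself, $f\circ T_{k,l} = Z_{k,l}$, and Condition~\eqref{eq:K} holds as soon as ${\mathbb E}(|\epsilon_{0,0}|^p)<\infty$: indeed, ${\mathbb E} Z_{0,0} = 0$ and, applying Burkholder's inequality to the (lexicographically ordered) martingale differences $\{a_{i,j}\epsilon_{-i,-j}\}$, for $p\geq 2$, \[ \nn{Z_{0,0}}_p \leq C_p\bpp{\sif i0\sif j0 a_{i,j}^2}^{1/2}\nn{\epsilon_{0,0}}_p<\infty\,. \]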
For all $(i,j)\in\mathbb Z^2$, let \begin{equation}\label{eq:Gammaij} \Gamma(i,j) = \{(r,s)\in\mathbb Z^2:r\leq i,s\leq j\}\,, \end{equation} and write \begin{eqnarray} Z_{i,j} & = & \sum_{(r,s)\in\Gamma(i,j)}a_{i-r,j-s}\epsilon_{r,s}\nonumber\\ & = & \sum_{(r,s)\in\Gamma(i,j)\setminus\Gamma(1,1)}a_{i-r,j-s}\epsilon_{r,s} + \sum_{(r,s)\in\Gamma(1,1)}a_{i-r,j-s}\epsilon_{r,s}\nonumber\\ & =: & Z_{i,j,+}+Z_{i,j,-}\,.\label{eq:Zij+-} \end{eqnarray} Write $W_{k,l,-} = \{Z_{i,j,-}\}_h^{k,l}$ and define, for all $(k,l)\in\mathbb Z^2$, \[ K_{k,l}(\{x_{i,j}\}_h^{k,l}) = {\mathbb E} K(\{Z_{i,j,+} + x_{i,j}\}_h^{k,l})\,. \] In this way, \begin{equation}\label{eq:Kn1} {\mathbb E}\spp{f\circ T_{k,l}\mid{\cal F}_{1,1}} = K_{k,l}(\{Z_{i,j,-}\}_h^{k,l}) =: K_{k,l}(W_{k,l,-})\,. \end{equation} Plugging~\eqref{eq:Kn1} into~\eqref{eq:wtDelta}, we obtain a central limit theorem for functionals of stationary causal linear random fields. \begin{Thm} Consider the functionals of stationary causal linear random fields~\eqref{eq:Xkl}. If Conditions~\eqref{eq:K} hold and \begin{equation}\label{cond:1}\sif k1\sif l1\frac{\nn {K_{k,l}(W_{k,l,-})}_p}{k^{1/2}l^{1/2}}<\infty\,, \end{equation} for $p=2$, then $\sigma^2 = \lim_{n\to\infty}{\mathbb E}(S_n^2)/n^2 <\infty$ exists and $S_n/|V_n|^{1/2}\Rightarrow {\cal N}(0,\sigma^2)$. If the conditions hold with $p>2$, then the invariance principle~\eqref{eq:IP} holds. \end{Thm} Next, we will provide conditions on $K$ and $\{a_{i,j}\}_{(i,j)\in\mathbb Z^2}$ such that~\eqref{cond:1} holds. For all $\Lambda\subset \mathbb Z^2$, write \begin{equation}\label{eq:ZLambda} Z_\Lambda = \sum_{(i,j)\in\Lambda}a_{i,j}\epsilon_{-i,-j}\quad\mbox{ and }\quad A_\Lambda = \sum_{(i,j)\in\Lambda} a_{i,j}^2\,. \end{equation} In particular, our conditions involve summations of $a_{i,j}$ over regions of the following type: \[ \Lambda(k,l) \mathrel{\mathop:}= \{(i,j)\in\mathbb Z^2: i\geq k, j\geq l\}\,, (k,l)\in\mathbb Z^2. 
\] For the sake of simplicity, we write $A_{k,l} \equiv A_{\Lambda(k,l)}$. The following lemma is a simple extension of Lemma 2, part (b) in \cite{wu02central}. \begin{Lem}\label{lem:K} Suppose that there exist $\alpha,\beta\in\mathbb R$ such that $0<\alpha\leq 1\leq \beta<\infty$ and ${\mathbb E}(|\epsilon|^{2\beta})<\infty$. If \begin{equation}\label{eq:M} {\mathbb E} M_{\alpha,\beta}^2(W_{1,1})<\infty\mbox{ with } M_{\alpha,\beta}(x) = \sup_{\substack{y\in\mathbb R^{h^2},\ y\neq x}}\frac{|K(x)-K(y)|}{|x-y|^\alpha + |x-y|^\beta}\,, \end{equation} then, for all $p\geq 2$, \begin{equation}\label{eq:Kkl2} \snn{K_{k,l}(W_{k,l,-})}_p = O(A_{k+1-h,l+1-h}^{\alpha/2})\,. \end{equation} \end{Lem} The proof is deferred to Section~\ref{sec:proofs}. Consequently, Condition~\eqref{cond:1} can be replaced by specific conditions on $A_{k,l}$. \begin{Coro} Assume there exist $\alpha,\beta\in\mathbb R$ as in Lemma~\ref{lem:K}. Consider the functionals of stationary linear random fields in the form of~\eqref{eq:Xkl}. Suppose Condition~\eqref{eq:M} holds and \begin{equation}\label{cond:2} \sif k1\sif l1\frac{A_{k+1-h,l+1-h}^{\alpha/2}}{k^{1/2}l^{1/2}}<\infty\,. \end{equation} If ${\mathbb E}(|\epsilon|^p)<\infty$ and~\eqref{eq:K} hold with $p=2$, then $S_n/n\Rightarrow{\cal N}(0,\sigma^2)$ with some $\sigma<\infty$. If ${\mathbb E}(|\epsilon|^p)<\infty$ and~\eqref{eq:K} hold with $p>2$, then the invariance principle~\eqref{eq:IP} holds. \end{Coro} We compare our Condition~\eqref{cond:2} on the summability of $\{a_{i,j}\}_{(i,j)\in\mathbb Z^2}$ with the one considered by \cite{cheng06central}. They only established central limit theorems for functionals of stationary linear random fields, so we restrict ourselves to the case $p=2$. \cite{cheng06central} assumed \begin{equation}\label{eq:aij1/2} \sif i0\sif j0 |a_{i,j}|^{1/2}<\infty\,, \end{equation} and provided different regularity conditions on $K$. 
Namely, \[ \sup_{\Lambda\subset\mathbb Z^2}{\mathbb E} K^2(x + Z_\Lambda)<\infty \] for all $x\in\mathbb R$ with $Z_\Lambda$ defined in~\eqref{eq:ZLambda}, and that for any two independent random variables $X$ and $Y$ with ${\mathbb E}(K^2(X) + K^2(Y) + K^2(X+Y))<M<\infty$, \begin{equation}\label{eq:K@cheng06} {\mathbb E}[(K(X+Y)-K(X))^2]\leq C[{\mathbb E} (Y^2)]^\gamma \end{equation} for some $\gamma\geq 1/2$. In general, \cite{cheng06central}'s condition and ours on the regularity of $K$ are not comparable and thus have different ranges of application. Below, we focus on the simple case where $h = 1$ and $K$ is Lipschitz, covered by both works. This corresponds to $\alpha = \beta = 1$ in~\eqref{eq:M} and $\gamma = 1$ in~\eqref{eq:K@cheng06}. In the following two examples, our Condition~\eqref{cond:2} is weaker than Condition~\eqref{eq:aij1/2}. \begin{Example}\label{example:1} Consider $a_{i,j} = (i+j+1)^{-q}$ for all $i,j\geq 0$ and some $q>1$. Then, $A = \sif i0\sif j0 a_{i,j}^2<\infty$ and \[ A_{k,l} = \sif j1 j(k+l+j)^{-2q} = O((k+l)^{2-2q})\,. \] Then~\eqref{cond:2} is bounded by, up to a multiplicative constant, \[ \sif k1\sif l1 \frac{(k+l)^{1-q}}{k^{1/2}l^{1/2}} < \sif k1 \frac{k^{(1-q)/2}}{k^{1/2}}\sif l1 \frac{l^{(1-q)/2}}{l^{1/2}} \leq \bpp{\sif k1 k^{-q/2}}^2\,. \] Therefore, Condition~\eqref{cond:2} requires $q>2$. In this case, Condition~\eqref{eq:aij1/2} requires $q>4$. \end{Example} \begin{Example}\label{example:2} Consider $a_{i,j} = (i+1)^{-q}(j+1)^{-q}$ for all $i,j\geq 0$ and some $q>1$. Then, $A = \sif i0\sif j0 a_{i,j}^2<\infty$ and \begin{equation} A_{k,l} = \sif ik\sif jl a_{i,j}^2 = O(k^{-(2q-1)}l^{-(2q-1)})\,. \end{equation} One can thus check that Condition~\eqref{cond:2} requires $q>3/2$ while Condition~\eqref{eq:aij1/2} requires $q>2$. 
\end{Example} \begin{Rem}\label{rem:EVW} For the central limit theorem for functionals of linear random fields, the weakest condition known is due to \cite{elmachkouri11central} (Example 1 and Theorem 1), who showed that it suffices to require $K$ to be Lipschitz and \[ \sum_{i,j}|a_{i,j}|<\infty\,. \] Furthermore, their result and the one by \cite{cheng06central} do not assume the linear random field to be causal. \end{Rem} \section{A Moment Inequality}\label{sec:momentInequality} We establish a moment inequality for stationary 2-parameter random fields on general probability spaces, without assuming the product structure. We first review the Peligrad--Utev inequality, a maximal $L^p$-inequality in dimension one, with $p\geq 2$. Recall the partial summation in~\eqref{eq:Snf1} and the related probability space. Let $C$ denote a constant that may change from line to line. It is known that for all $f\in L^p({\cal F}_\infty)$ with ${\mathbb E} (f\mid{\cal F}_{-\infty}) = 0$, \begin{multline}\label{eq:Volny} \bnn{\max_{1\leq k\leq n}|S_k(f)|}_p \leq Cn^{1/2}\Bigg(\nn{{\mathbb E}(f\mid{\cal F}_0)}_p + \nn {f - {\mathbb E}(f\mid{\cal F}_0)}_p \\ + \summ k1n\frac{\nn{{\mathbb E}(S_k(f)\mid{\cal F}_{0})}_p}{k^{3/2}} +\summ k1n\frac{\nn{S_k(f) - {\mathbb E}(S_k(f)\mid{\cal F}_{k})}_p}{k^{3/2}}\Bigg)\,. \end{multline} The inequality above was first established for adapted stationary sequences in \cite{peligrad05new} and then extended to an $L^p$-inequality for $p\geq 2$ in \cite{peligrad07maximal}. The case $p\in(1,2)$ was addressed by \cite{wu08moderate}, and the non-adapted case for $p\geq 2$ by \cite{volny07nonadapted}. For the sake of simplicity, we simplify the bound in~\eqref{eq:Volny} by regrouping the summations. 
Observe that $\snn{{\mathbb E}(S_k(f)\mid{\cal F}_0)}_p\leq \snn{{\mathbb E}(S_k(f)\mid{\cal F}_1)}_p$, $\nn{{\mathbb E}(f\mid{\cal F}_0)}_p = \nn{{\mathbb E}(S_1(f)\mid{\cal F}_1)}_p$ and $\snn{f-{\mathbb E}(f\mid{\cal F}_0)}_p = \snn{S_1(f)-{\mathbb E}(S_1(f)\mid{\cal F}_1)}_p$. Thus, we obtain \begin{multline}\label{eq:Volny2} \bnn{\max_{1\leq k\leq n}|S_k(f)|}_p \\\leq Cn^{1/2}\Bigg(\summ k1{n}\frac{\nn{{\mathbb E}(S_k(f)\mid{\cal F}_{1})}_p}{k^{3/2}} +\summ k1{n}\frac{\nn{S_k(f) - {\mathbb E}(S_k(f)\mid{\cal F}_{k})}_p}{k^{3/2}}\Bigg)\,. \end{multline} Now, consider a general probability space $(\Omega,\calA,\proba)$, and suppose there exists a commuting 2-parameter filtration $\indzz{\cal F}$, and an Abelian group of bimeasurable, measure-preserving, one-to-one and onto maps $\indij T$ on $(\Omega,\calA,\proba)$, such that~\eqref{eq:nested} holds. Define ${\cal F}_{\infty,\infty} = \bigvee_{(i,j)\in\mathbb Z^2}{\cal F}_{i,j}$, ${\cal F}_{-\infty,\infty} = \bigcap_{i\in\mathbb Z}{\cal F}_{i,\infty}$ and ${\cal F}_{\infty,-\infty} = \bigcap_{j\in\mathbb Z}{\cal F}_{\infty,j}$. Note that when $(\Omega,\calA,\proba)$ is a product probability space, then ${\cal F}_{-\infty,\infty}$ and ${\cal F}_{\infty,-\infty}$ are trivial, by Kolmogorov's zero-one law. Recall the definition of $S_{m,n}(f)$ in~\eqref{eq:Smn}. Given $f$, write $S_{m,n} \equiv S_{m,n}(f)$ for the sake of simplicity. \begin{Prop}\label{prop:momentInequality} Consider $(\Omega,\calA,\proba)$, $\indij T$ and $\indij{\cal F}$ described as above. Suppose $p\geq 2$, $f\in L^p({\cal F}_{\infty,\infty})$ and ${\mathbb E}(f\mid{\cal F}_{-\infty,\infty}) = {\mathbb E}(f\mid{\cal F}_{\infty,-\infty}) = 0$. 
Then, \[ \snn{S_{m,n}}_p \leq Cm^{1/2}n^{1/2}\summ k1{m}\summ l1{n}\frac{d_{k,l}(f)}{k^{3/2}l^{3/2}} \] with \begin{eqnarray*} d_{k,l}(f) & = & \snn{{\mathbb E}(S_{k,l}\mid{\cal F}_{1,1})}_p\\ & & + \snn{{\mathbb E}(S_{k,l}\mid{\cal F}_{1,\infty}) - {\mathbb E}(S_{k,l}\mid{\cal F}_{1,l})}_p \\ & & + \snn{{\mathbb E}(S_{k,l}\mid{\cal F}_{\infty,1}) - {\mathbb E}(S_{k,l}\mid{\cal F}_{k,1})}_p\\ & & + \snn{S_{k,l} - {\mathbb E}(S_{k,l}\mid{\cal F}_{k,\infty}) - {\mathbb E}(S_{k,l}\mid{\cal F}_{\infty,l}) + {\mathbb E}(S_{k,l}\mid{\cal F}_{k,l})}_p\,. \end{eqnarray*} \end{Prop} \begin{Coro}\label{coro:1} Suppose the assumptions in Proposition~\ref{prop:momentInequality} hold. \begin{itemize} \item [(i)] If $f\in {\cal F}_{0,0}$, then \[ \snn{S_{m,n}(f)}_p \leq Cm^{1/2}n^{1/2}\summ k1{m}\summ l1{n}\frac{\snn{{\mathbb E}(S_{k,l}(f)\mid{\cal F}_{1,1})}_p}{k^{3/2}l^{3/2}}\,. \] \item[(ii)] If $\{f\circ T_{i,j}\}_{(i,j)\in\mathbb Z^2}$ are two-dimensional martingale differences, in the sense that $f\in L^p({\cal F}_{0,0})$ and ${\mathbb E}(f\mid{\cal F}_{0,-1}) = {\mathbb E}(f\mid{\cal F}_{-1,0}) = 0$, then \[ \snn{S_{m,n}(f)}_p\leq Cm^{1/2}n^{1/2}\nn f_p\,. \] \end{itemize} \end{Coro} The proof of Corollary~\ref{coro:1} is trivial. We only remark that the second case recovers Burkholder's inequality for multiparameter martingale differences established in \cite{fazekas05burkholder}. \begin{proof}[Proof of Proposition~\ref{prop:momentInequality}] Fix $f$. Define $\wt S_{0,n} = \summ j1n f\circ T_{0,j}$. Clearly, \begin{equation}\label{eq:S0n} S_{m,n} = \summ i1m\summ j1n f\circ T_{i,j} = \summ i1m\bpp{\summ j1n f\circ T_{0,j}}\circ T_{i,0} = \summ i1{m}\wt S_{0,n}\circ T_{i,0}\,. \end{equation} Fix $n$. Observe that ${\mathbb E} \wt S_{0,n}=0$ and $\wt S_{0,n}\circ T_{i,0}$ is a stationary sequence. 
Furthermore, $\{{\cal F}_{i,\infty}\}_{i\in\mathbb Z}$ is a filtration, $T_{i,0}^{-1}{\cal F}_{j,\infty} = T_{-i,0}{\cal F}_{j,\infty} = {\cal F}_{i+j,\infty}$ and ${\mathbb E}(\wt S_{0,n}\mid{\cal F}_{-\infty,\infty}) = 0$. Therefore, we can apply the Peligrad--Utev inequality~\eqref{eq:Volny2} and obtain \begin{eqnarray} \snn{S_{m,n}}_p & \leq & Cm^{1/2}\Big(\summ k1{m}k^{-3/2}\underbrace{\snn{{\mathbb E}(S_{k,n}\mid{\cal F}_{1,\infty})}_p}_{\Lambda_1}\nonumber\\ & & \quad\quad\quad + \summ k1{m}k^{-3/2}\underbrace{\snn{S_{k,n}-{\mathbb E}(S_{k,n}\mid{\cal F}_{k,\infty})}_p}_{\Lambda_2}\Big)\,.\label{eq:11} \end{eqnarray} We first deal with $\Lambda_1$. Define $\wt S_{m,0} = \summ i1m f\circ T_{i,0}$. Similarly as in~\eqref{eq:S0n}, $S_{k,n} = \summ j1n \wt S_{k,0}\circ T_{0,j}$, and \begin{eqnarray} {\mathbb E}(S_{k,n}\mid{\cal F}_{1,\infty}) & = & \summ j1{n}{\mathbb E}\spp{\wt S_{k,0}\circ T_{0,j}\mid{\cal F}_{1,\infty}}\nonumber\\ & = & \summ j1{n}{\mathbb E}\spp{\wt S_{k,0}\circ T_{0,j}\mid T_{0,-j}({\cal F}_{1,\infty})}\nonumber\,, \end{eqnarray} where in the last equality we used the fact that $T_{0,j}({\cal F}_{i,\infty}) = {\cal F}_{i,\infty}$, for all $i,j\in\mathbb Z$. Now, by the identity ${\mathbb E}(f\mid{\cal F})\circ T = {\mathbb E}(f\circ T\mid T^{-1}({\cal F}))$, we have \begin{equation}\label{eq:Lambda1} {\mathbb E}(S_{k,n}\mid{\cal F}_{1,\infty}) = \summ j1{n}{\mathbb E}(\wt S_{k,0}\mid{\cal F}_{1,\infty})\circ T_{0,j}\,. \end{equation} Observe that~\eqref{eq:Lambda1} is again a summation in the form of~\eqref{eq:Snf1}. Then, applying the Peligrad--Utev inequality~\eqref{eq:Volny2} again, we obtain \begin{eqnarray*} \Lambda_1 & \leq & Cn^{1/2}\Big(\summ l1{n}l^{-3/2}\snn{{\mathbb E}[{\mathbb E}(S_{k,l}\mid{\cal F}_{1,\infty})\mid{\cal F}_{\infty,1}]}_p\\ & & \quad\quad\quad + \summ l1{n}l^{-3/2}\snn{{\mathbb E}(S_{k,l}\mid{\cal F}_{1,\infty})-{\mathbb E}[{\mathbb E}(S_{k,l}\mid{\cal F}_{1,\infty})\mid{\cal F}_{\infty,l}]}_p\Big)\,. 
\end{eqnarray*} By the commuting property of the marginal filtrations~\eqref{eq:marginalCommuting}, the above inequality becomes \begin{eqnarray} \Lambda_1 & \leq & Cn^{1/2}\Big(\summ l1{n}l^{-3/2}\snn{{\mathbb E}(S_{k,l}\mid{\cal F}_{1,1})}_p\nonumber\\ & & \quad\quad\quad + \summ l1{n}l^{-3/2}\snn{{\mathbb E}(S_{k,l}\mid{\cal F}_{1,\infty})-{\mathbb E}(S_{k,l}\mid{\cal F}_{1,l})}_p\Big)\,.\label{eq:12} \end{eqnarray} Similarly, one can show \begin{eqnarray} \Lambda_2 & = & \bnn{\summ j1{n}\sbb{S_{k,0}-{\mathbb E}(S_{k,0}\mid{\cal F}_{k,\infty})}\circ T_{0,j}}_p\nonumber\\ & \leq & Cn^{1/2}\Big(\summ l1{n}l^{-3/2}\snn{{\mathbb E}(S_{k,l}\mid{\cal F}_{\infty,1})-{\mathbb E}(S_{k,l}\mid{\cal F}_{k,1})}_p\nonumber\\ & & \quad\quad\quad + \summ l1{n}l^{-3/2}\| S_{k,l}-{\mathbb E}(S_{k,l}\mid{\cal F}_{k,\infty})\nonumber\\ & & \quad\quad\quad\quad\quad\quad - {\mathbb E}(S_{k,l}\mid{\cal F}_{\infty,l})+{\mathbb E}(S_{k,l}\mid{\cal F}_{k,l})\|_p\Big)\,.\label{eq:13} \end{eqnarray} Combining~\eqref{eq:11},~\eqref{eq:12} and~\eqref{eq:13}, we have thus proved Proposition~\ref{prop:momentInequality}. \end{proof} \section{Auxiliary Proofs}\label{sec:proofs} For arbitrary $\sigma$-fields ${\cal F},{\cal G}$, let ${\cal F}\vee{\cal G}$ denote the smallest $\sigma$-field that contains ${\cal F}$ and ${\cal G}$. \begin{Prop}\label{prop:commuting} Let $(\Omega,{\cal B},\mathbb P)$ be a probability space and let ${\cal F},{\cal G},{\cal H}$ be mutually independent sub-$\sigma$-fields of ${\cal B}$. Then, for every random variable $X\in{\cal B}$ with ${\mathbb E} |X|<\infty$, we have \begin{equation}\label{eq:commuting} {\mathbb E}\bb{{\mathbb E}(X\mid {\cal F}\vee{\cal G})\mid{\cal G}\vee{\cal H}} = {\mathbb E}(X\mid{\cal G})\mbox{ a.s.} \end{equation} \end{Prop} Proposition~\ref{prop:commuting} is closely related to the notion of conditional independence (see e.g.~\cite{chow78probability}, Chapter 7.3). 
Namely, given a probability space $(\Omega,{\cal F},\mathbb P)$ and sub-$\sigma$-fields ${\cal G}_1,{\cal G}_2$ and ${\cal G}_3$ of ${\cal F}$, ${\cal G}_1$ and ${\cal G}_2$ are said to be {\it conditionally independent} given ${\cal G}_3$, if for all $A_1\in{\cal G}_1, A_2\in{\cal G}_2$, $\mathbb P(A_1\cap A_2\mid{\cal G}_3) = \mathbb P(A_1\mid{\cal G}_3)\mathbb P(A_2\mid{\cal G}_3)$ almost surely. \begin{proof}[Proof of Proposition~\ref{prop:commuting}] First, we show that ${\cal F}\vee{\cal G}$ and ${\cal G}\vee{\cal H}$ are conditionally independent, given ${\cal G}$. By Theorem 7.3.1 (ii) in \cite{chow78probability}, it is equivalent to show that, for all $F\in{\cal F}, G\in{\cal G}$, $\mathbb P(F\cap G\mid{\cal G}\vee{\cal H}) = \mathbb P(F\cap G\mid{\cal G})$ almost surely. This is true since \[ \mathbb P(F\cap G\mid{\cal G}\vee{\cal H}) = {\bf 1}_G{\mathbb E}({\bf 1}_F\mid{\cal G}\vee{\cal H}) = {\bf 1}_G{\mathbb E}({\bf 1}_F\mid{\cal G}) = \mathbb P(F\cap G\mid{\cal G})\mbox{ a.s.} \] Next, by Theorem 7.3.1 (iv) in \cite{chow78probability}, the conditional independence obtained above yields ${\mathbb E}(X\mid{\cal G}\vee{\cal H}) = {\mathbb E}(X\mid{\cal G})$ almost surely, for all $X\in{\cal F}\vee{\cal G}$, ${\mathbb E}|X|<\infty$. Replacing $X$ by ${\mathbb E}(X\mid{\cal F}\vee{\cal G})$, we have thus proved~\eqref{eq:commuting}. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:K}] Write $W_{k,l} = \{Z_{i,j}\}_h^{k,l}$. Define (and recall that) $W_{k,l,\pm} = \{Z_{i,j,\pm}\}_h^{k,l}$. Let $\wt W_{k,l,-}$ be a copy of $W_{k,l,-}$, independent of $W_{k,l,\pm}$. Set $\wt W_{k,l}\mathrel{\mathop:}= W_{k,l,+}+\wt W_{k,l,-}$. Recall $K_{k,l}(W_{k,l,-}) = {\mathbb E}(K(W_{k,l})\mid{\cal F}_{1,1})$ in~\eqref{eq:Kn1}. Observe that by~\eqref{eq:Zij+-}, $W_{k,l,-}\in{\cal F}_{1,1}$, and $W_{k,l,+}, \wt W_{k,l,-}$ are independent of ${\cal F}_{1,1}$. 
Therefore, ${\mathbb E}(K(\wt W_{k,l})\mid{\cal F}_{1,1}) = {\mathbb E}(K(\wt W_{k,l})) = 0$, and \begin{eqnarray*} |K_{k,l}(W_{k,l,-})| & = & \sabs{{\mathbb E}(K(W_{k,l}) - K(\wt W_{k,l})\mid{\cal F}_{1,1})}\\ & \leq & {{\mathbb E}(|K(W_{k,l}) - K(\wt W_{k,l})|\mid{\cal F}_{1,1})}\,. \end{eqnarray*} Observe that by~\eqref{eq:M}, \[ |K(W_{k,l}) - K(\wt W_{k,l})|\leq M_{\alpha,\beta}(\wt W_{k,l})(|W_{k,l,-}-\wt W_{k,l,-}|^\alpha + |W_{k,l,-}-\wt W_{k,l,-}|^\beta)\,. \] Write $U_{k,l} = W_{k,l,-}-\wt W_{k,l,-}$. By the Cauchy--Schwarz inequality, and noting that ${\mathbb E}(|M_{\alpha,\beta}(\wt W_{k,l})|^2\mid{\cal F}_{1,1}) = \snn{M_{\alpha,\beta}(\wt W_{k,l})}_2^2 = \snn{M_{\alpha,\beta}(\wt W_{1,1})}_2^2$, we have \[ |K_{k,l}(W_{k,l,-})|\leq \snn{M_{\alpha,\beta}(\wt W_{1,1})}_2 \sccbb{{\mathbb E}\sbb{(|U_{k,l}|^\alpha+|U_{k,l}|^\beta)^2\mid{\cal F}_{1,1}}}^{1/2}\,, \] whence, for $p\geq 2$, \begin{eqnarray} \snn{K_{k,l}(W_{k,l,-})}_p & \leq & \snn{M_{\alpha,\beta}(\wt W_{1,1})}_2\snn{|U_{k,l}|^\alpha+|U_{k,l}|^\beta}_p \nonumber\\ & \leq & \snn{M_{\alpha,\beta}(\wt W_{1,1})}_2(\snn{|U_{k,l}|^\alpha}_p+\snn{|U_{k,l}|^\beta}_p)\,. \label{eq:Kkl3} \end{eqnarray} Finally, since for all $\gamma>0$ and $n\in\mathbb N$, there exists a constant $C(\gamma,n)>0$ such that for all vectors $w=(w_1,\dots,w_n)\in\mathbb R^n$, \[ |w|^{2\gamma} = \bpp{\sum_{i=1}^nw_i^2}^\gamma \leq C(\gamma,n)\bpp{\sum_{i=1}^n w_i^{2\gamma}}\,, \] it follows that for all $\gamma>0$, \begin{eqnarray} {\mathbb E}(|U_{k,l}|^{2\gamma}) & = & {\mathbb E}(|W_{k,l,-}-\wt W_{k,l,-}|^{2\gamma}) = {\mathbb E}(|\{Z_{i,j,-} - \wt Z_{i,j,-}\}_h^{k,l}|^{2\gamma})\nonumber\\ & = & O\bbb{{\mathbb E}\bpp{\sum_{\substack{k-h<i\leq k\\l-h<j\leq l}}(Z_{i,j,-}-\wt Z_{i,j,-})^{2\gamma}}}\,. 
\nonumber \end{eqnarray} By \cite{wu02central}, Lemma 4, under the notation~\eqref{eq:ZLambda}, ${\mathbb E}(|\epsilon|^{2\vee2\gamma})<\infty$ implies that for all $\Lambda\subset \mathbb Z^2$, ${\mathbb E}(|Z_\Lambda|^{2\gamma}) \leq CA_\Lambda^\gamma$ for some universal constant $C$. It then follows that ${\mathbb E}(|U_{k,l}|^{2\gamma}) = O(A_{k+1-h,l+1-h}^\gamma)$. Consequently,~\eqref{eq:Kkl3} yields \begin{eqnarray*} \nn{K_{k,l}(W_{k,l,-})}_p & \leq & \nn{M_{\alpha,\beta}(W_{1,1})}_2\bbb{O(A_{k+1-h,l+1-h}^{\alpha/2}) + O(A_{k+1-h,l+1-h}^{\beta/2})} \\ & = & O(A_{k+1-h,l+1-h}^{\alpha/2})\,. \end{eqnarray*} The proof is thus completed. \end{proof} \noindent{\bf Acknowledgment} The authors thank Stilian Stoev for many constructive and helpful discussions, and in particular his suggestion of considering the $m$-dependent approximation approach. \def\cprime{$'$} \def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth \lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
\section{Introduction} In cold dark clouds, recent detections of complex organic molecules including some C$_{2}$H$_{4}$O$_{2}$ isomers have led to an influx of studies examining chemical formation routes \citep{jimenez-serra_spatial_2016, soma_complex_2018}. Warmer sources have well-understood thermal production routes for species formed both in the gas phase and on grains, with efficient grain processes producing COMs, which then thermally desorb with the majority of ice from dust grains \citep{charnley_molecular_1992}. The production of the same COMs in cold dark clouds has been studied, yet continues to be an area of interest, if not fully understood \citep{vasyunin_reactive_2013,chang_unified_2016,balucani_formation_2015, Jin_Garrod_2020}. Desorption from grain mantles is a common problem in dark clouds, as most models still under-produce gas-phase abundances of COMs, though non-thermal desorption methods are consistently being developed and applied to models of cold dark clouds, with a particular emphasis on reactive desorption and photodesorption \citep{garrod_non-thermal_2007,oberg_photodesorption_2009}. Cosmic rays are highly energetic ions travelling at significant portions of the speed of light, with H$^{+}$ being the most common ion \citep{cummings_galactic_2016}. Their role in interstellar grain chemistry has been increasingly examined, with radiolysis theory and experiments highlighting the importance of cosmic ray and grain interactions in generating chemical diversity through ionization and excitation of ices \citep{hudson_ir_2005, abplanalp_formation_2015, boyer_role_2016, shingledecker_general_2018}. 
When cosmic rays collide with interstellar ices and dust grains, perhaps the most noticeable effect is the heating of the aggregate, referred to as whole grain cosmic ray heating or simply cosmic ray heating (CRH), causing periods of enhanced thermal activity, such as desorption and possibly enhanced thermal diffusion \citep{hasegawa_new_1993, kalvans_temperature_2018}. The energy deposited to the surface by cosmic rays does not just cause thermal heating, ionization, and excitation; sputtering is a direct outcome of cosmic ray interactions with grain ices. Sputtering is defined as the desorption of material resulting from high energy particle impacts. This is a separate process from the thermal desorption resulting from the heating of the grains, as the act of collision itself causes not just heating, but also the ejection of surface molecules. Multiple experiments have been recently done to examine the sputtering of high-energy cosmic rays ($\ge$ MeV) impacting interstellar ices, and have shown that sputtering is an efficient method of desorption from surfaces \citep{dartois_swift_2013,dartois_heavy_2015}. Even more recently, chemical models including experimental sputtering data were utilized, demonstrating increased gas-phase abundances of various molecules in chemical kinetic models, although the experiments used heavier ions than the most common cosmic ray, H$^+$ \citep{dartois_cosmic_2018, wakelam_efficiency_2021}. The many theories of sputtering predict that the interaction of cosmic rays and ice surfaces varies based on the energy and mass of the incoming particle, resulting in competition between sputtering caused by high energy ($\ge$ MeV) particles and less energetic particles in the keV range. 
The competing regimes are further divided by two mechanisms of energy transfer: electronic sputtering, where inelastic collisions cause movement from Coulombic interactions, and nuclear sputtering, where elastic collisions between nuclei cause physical recoil and desorption \citep{sigmund_theory_1969, brown_sputtering_1978}. Lower energy particles more efficiently deposit energy into surfaces through elastic (nuclear) collisions, while for most cosmic ray energies, inelastic (electronic) collisions are the main source of energy transfer \citep{andersen_sputtering_1981, johnson_energetic_1990}. Reasons for the discrepancy in effectiveness will be examined in a later section. In this paper, we examine sputtering theory by incorporating sputtering into a rate-equation-based three-phase model under cold dark cloud conditions. Section~\ref{sec:model} presents the theoretical treatment of sputtering used in the model along with all physical parameters and chemical networks utilized. Section~\ref{sec:results} covers the results of the models for theories based on water, carbon dioxide, and a theoretical mixed ice. Section~\ref{sec:analysis} discusses the implications sputtering has on interstellar environments with comparisons to astronomical observations, and Section~\ref{sec:conclusions} concludes our work. \section{Models} \label{sec:model} The chemical models reported in this paper make use of the three-phase rate equation-based gas-grain program \texttt{Nautilus-1.1} \citep{wakelam_kinetic_2012,ruaud_gas_2016}. The three phases comprise the gas, the top layers of the grain ice (two monolayers in this instance) and the remaining ice on the dust grain mantle, called the bulk. This distinction is relevant to diffusion of ice species in the mantle, and a variety of adsorption and desorption methods utilized in \texttt{Nautilus-1.1}.
These methods include photodesorption, reactive desorption, and thermal desorption, which allow for the exchange of species between the ices on the grain surfaces and the gaseous interstellar medium; they are described further later in this section \citep{vasyunin_reactive_2013}. Sputtering, which removes particles from the ice surface and bulk, is slightly different from the non-thermal desorption methods already included, owing to the frequency of sputtering events and the number of species removed from both the ice surface and bulk. These differences will be examined further in Section~\ref{sec:sputtering}. With the Nautilus code, we are able to run a series of chemical models with specific parameters in which the diverse effects of cosmic ray interactions, in various combinations of activity or inactivity of sputtering, are explored. Further details that apply to all models, such as initial abundances and constant physical conditions, are shown in Tables~\ref{tab:abundances} and \ref{tab:physicalcond}. These physical conditions, which are the same as in \citet{ruaud_gas_2016}, are comparable to those observed in cold, dark molecular clouds, which are the environments on which we focus.
\begin{table} \centering \caption{Initial abundances of elements with respect to total hydrogen nuclei} \begin{tabular}{lc} \hline Species & Abundance \\ \hline H$_{2}$ & $0.499\,^{\textup{a}}$ \\ He & $9.000 \times 10^{-2}\,^{\textup{a}}$ \\ N & $6.200 \times 10^{-5}\,^{\textup{a}}$ \\ C$^{+}$ & $1.700 \times 10^{-4}\,^{\textup{b}}$ \\ O & $2.429 \times 10^{-4}\,^{\textup{c}}$ \\ S$^{+}$ & $8.000 \times 10^{-8}\,^{\textup{d}}$ \\ Na$^{+}$ & $2.000 \times 10^{-9}\,^{\textup{d}}$ \\ Mg$^{+}$ & $7.000 \times 10^{-9}\,^{\textup{d}}$ \\ Si$^{+}$ & $8.000 \times 10^{-9}\,^{\textup{d}}$ \\ P$^{+}$ & $2.000 \times 10^{-10}\,^{\textup{d}}$ \\ Cl$^{+}$ & $1.000 \times 10^{-9}\,^{\textup{d}}$ \\ Fe$^{+}$ & $3.000 \times 10^{-9}\,^{\textup{d}}$ \\ F & $6.680 \times 10^{-9}\,^{\textup{e}}$ \\ \hline $^{\textup{a}}$ \citep{wakelam_polycyclic_2008} \\ $^{\textup{b}}$ \citep{jenkins_unified_2009} \\ $^{\textup{c}}$ \citep{hincelin_oxygen_2011} \\ $^{\textup{d}}$ \citep{graedel_kinetic_1982} \\ $^{\textup{e}}$ \citep{neufeld_chemistry_2005} \\ \end{tabular} \label{tab:abundances} \end{table} \begin{table} \centering \caption{Physical conditions utilized in models for this work, based on TMC-1 conditions} \begin{tabular}{lc} \hline Parameter & TMC-1 \\ \hline $\textit{n}_{\textup{H}}$ (cm$^{-3}$) & $10^{4}$ \\ $\textit{n}_{\textup{dust}}$ (cm$^{-3}$) & $1.8 \times 10^{-8}$ \\ $\textit{T}_{\textup{gas}}$ (K) & $10$ \\ $\textit{T}_{\textup{dust}}$ (K) & $10$ \\ $\textit{N}_{\textup{site}}$ (cm$^{-2}$) & $1.5 \times 10^{15}$ \\ $\zeta$ (s$^{-1}$) & $1.3 \times 10^{-17}$ \\ $\textit{A}_{\textup{V}}$ & 10 \\ \hline \end{tabular} \label{tab:physicalcond} \end{table} To accurately gauge the impact of each new or updated process examined in this paper, we have devised separate models with differing processes. These are shown in Table~\ref{tab:model_key}, which lists the name by which the specific model will be referenced in the paper.
The table also indicates the inclusion or absence of cosmic ray whole grain heating (CRH) and sputtering in the labelled models \citep{hasegawa_new_1993}. Our models include some features not in the standard version of \texttt{Nautilus-1.1}. These additional options include a competitive tunneling mechanism between activation and diffusion barriers, set to the faster option, and an option that prevents all species but H and H$_2$ from tunneling under activation energy barriers of surface reactions, set to ``on'' \citep{smith_chemistry_2008,shingledecker_simulating_2019}. Nonthermal desorption mechanisms for surface species also have associated options. Photodesorption \citep{bertin_indirect_2013} and cosmic ray induced photodesorption \citep{hasegawa_new_1993} are included, along with reactive desorption at 1 per cent probability \citep{garrod_non-thermal_2007}. These three switches are set to ``on'', with both the photodesorption yield and the cosmic ray induced photodesorption yield set at $1 \times 10^{-4}$. All models have the same parameters for the features described above, except in the cases of sputtering and CRH. \begin{table} \centering \caption{Model identifiers with a listing of the active sputtering rate and whether cosmic ray heating was included} \begin{tabular}{lcc} \hline Model & Sputtering & Note \\ \hline 1H & Off & Base Model \\ 2HS & On - H$_{2}$O Ice & CRH \\ 3S & On - H$_{2}$O Ice & No CRH \\ 4HSC & On - CO$_{2}$ Ice & CRH \\ 5SC & On - CO$_{2}$ Ice & No CRH \\ 6HSM & On - Mixed Ice & CRH \\ 7SM & On - Mixed Ice & No CRH \\ \hline \end{tabular} \label{tab:model_key} \end{table} \subsection{Network} \label{sec:network} The \texttt{Nautilus-1.1} reaction network used in this study has been expanded to include more complex chemical species that lead up to the C$_{2}$H$_{4}$O$_{2}$ isomers, methyl formate (HCOOCH$_{3}$), glycolaldehyde (HCOCH$_{2}$OH), and acetic acid (CH$_{3}$COOH), as described in \citet{paulive_role_2021}.
These molecules are among a class of species known as complex organic molecules (COMs), which contain 6--13 atoms and are partially saturated. The base network of gaseous reactions is taken from the KIDA network \citep{wakelam_kinetic_2012}. The granular reaction network is from the \texttt{Nautilus-1.1} package, with additional thermal grain-surface reaction pathways leading to C$_{2}$H$_{4}$O$_{2}$ isomer precursors from \citet{garrod_formation_2006}. Used initially in hot core models, these reactions have been included here to provide likely thermal pathways to COMs, which may eventually leave the grain surface through cosmic ray interactions \citep{skouteris_genealogical_2018}. Species with a prefix of ``J'' are those on the surface of the ice mantle, while species with a ``K'' lie within the bulk of the ice, which, after the simulation has completed running ($10^7$ yr), contains close to 100 monolayers of ice. Desorption energies, $E_{\rm D}$, are taken from a combination of \citet{garrod_formation_2006,garrod_complex_2008,garrod_three-phase_2013}. The diffusion barriers ($E_{\rm b}$) are $0.4 \times E_{\rm D}$ for ice surface species and $0.8 \times E_{\rm D}$ for bulk species. All ices in this paper are considered amorphous solids. \subsection{Sputtering Theory} \label{sec:sputtering} As noted above, sputtering is the process of impacting a surface (solid or liquid) with high energy particles, resulting in the ejection of particles from the surface into the surrounding space. By applying sputtering to grain surfaces in the interstellar medium, it is possible to estimate rate coefficients based on cosmic ray grain interaction rates, stopping powers, and the average energy loss per collision, all of which can be used in rate-equation based models. In these models, the overall rate of desorption (R$^{\textup{tot}}$) for some mantle species, $i$, is the sum of the rates of all desorption processes.
Example desorption mechanisms include thermal desorption (R$^{\textup{Th}}$), photodesorption (R$^{\textup{Pho}}$), and reactive desorption (R$^{\textup{RD}}$). With the addition of sputtering in our models, the calculation of the rate of sputtering (R$^{\textup{s}}$) is necessary. The overall rate of desorption for species $i$ is as follows: \begin{equation} \mathrm{R^{tot}} = - \frac{dN_{i}}{dt} = \mathrm{R^{Th} + R^{Pho} + R^{RD} + \cdots + R^{s}}. \label{differential} \end{equation} \noindent Here $N_{i}$ is the concentration of mantle species $i$, so that $-\frac{dN_i}{dt}$ is its rate of loss to desorption, with units of cm$^{-3}$ s$^{-1}$. The individual rate processes, R, are each given by first-order rate coefficients, $k$, multiplied by N$_{i}$, so that the total rate, R$^{\textup{tot}}$, can be written as \begin{equation} \mathrm{R^{tot}} = - \frac{dN_{i}}{dt} = N_{i} \left( k_{Th} + k_{Pho} + k_{RD} + \cdots + k_{s} \right). \label{differential2} \end{equation} \noindent Here $k_{\textup{Th}}$, $k_{\textup{Pho}}$, $k_{\textup{RD}}$, and $k_{\textup{s}}$ are the rate coefficients for thermal desorption, photodesorption, reactive desorption, and sputtering, respectively. We will examine how to calculate $k_{\textup{s}}$ later in the section. It is important to note that our models assume all species are affected by sputtering with the same rate coefficient. When referring to the rate coefficient of sputtering off of water ice, carbon dioxide ice, or mixed ice, all ice mantle species in the model take on the respective sputtering rate coefficient for the assumed ice composition. For example, when referring to water ice sputtering, the model implements this by setting the rate coefficient of all ice mantle species to be the calculated rate coefficient for water ice. Models with CO$_{2}$ ice sputtering set the rate coefficient of all species to be equal to that of CO$_{2}$ ice. The units of concentration for species $i$ can vary, so long as they are consistent throughout the model.
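The bookkeeping of Equation~(\ref{differential2}) amounts to summing independent first-order loss channels; a minimal Python sketch (all numerical values here are arbitrary placeholders for illustration, not model output):

```python
# Total first-order desorption rate: R_tot = N_i * (k_Th + k_Pho + k_RD + ... + k_s).
# All numerical values below are arbitrary placeholders, not model output.

def total_desorption_rate(N_i, rate_coefficients):
    """Sum first-order desorption channels for one mantle species."""
    return N_i * sum(rate_coefficients.values())

k = {
    "thermal":    1.0e-30,  # k_Th,  s^-1 (placeholder)
    "photo":      5.0e-19,  # k_Pho, s^-1 (placeholder)
    "reactive":   2.0e-19,  # k_RD,  s^-1 (placeholder)
    "sputtering": 8.0e-18,  # k_s,   s^-1 (placeholder)
}
N_i = 1.0e4  # mantle concentration, cm^-3 (placeholder)

R_tot = total_desorption_rate(N_i, k)  # cm^-3 s^-1
```

Because the channels are independent and first order, adding a new mechanism such as sputtering only appends one more rate coefficient to the sum.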
The number of molecules on a grain, the fractional abundance with respect to hydrogen or water, and the relative density are all common choices. For this model the concentration has units of cm$^{-3}$. The assumption that all ice species in the model take on the same sputtering rate coefficient is justified by two factors. The first is the physical structure of amorphous solids. \citet{clements_kinetic_2018} show that the formation of interstellar ices at 10 K results in a highly porous structure, creating large ``chunks'' of ice that are interconnected to form the ice mantle. The ices themselves are observed to be approximately 70\% water ice \citep{1996A&A...315L.357W}, meaning that if a water ice molecule is successfully removed from either the surface or the mantle, there is a probability that the water molecule will be removed along with neighboring species, or, if the ice is porous enough, that a large ``chunk'' of ice will be desorbed alongside the sputtered molecule. The second is experimental evidence regarding sputtering yields. There are multiple studies on the effectiveness of sputtering multiple molecules and dimers from a chemisorbed metal surface \citep{gades_dimer_1995}. Recent experiments with swift heavy ions colliding with amorphous ices and crystalline water ice also sputter large quantities of ice, about 20,000 water molecules per incident ion, although the high number of desorbed molecules is not attributed to the porosity of the ice \citep{dartois_swift_2013,dartois_swift_2015, dartois_cosmic_2018}. These two factors suggest that sputtering can desorb multiple ice species per incident cosmic ray, because water, the main component of the ice, will either be desorbed alongside whatever it is next to, or be in the same cluster as the water molecules that are sputtered.
We can approximate a first-order sputtering rate coefficient, $k_{s}$, in units of s$^{-1}$, starting with a cosmic ray flux for the interstellar medium $\phi_{ism}$ (cosmic ray particles cm$^{-2}$ s$^{-1}$) and a cross section for the cosmic ray interacting with a molecule on the grain surface $\sigma$ (cm$^{2}$): \begin{equation} k_{s} = \sigma \phi_{ism}. \label{eq:basic_kinetics} \end{equation} We obtain a value for the cosmic ray flux by integrating the Spitzer-Tomasko energy function $j(E)$ \citep{spitzer_heating_1968} over the cosmic ray energy distribution from 0.3 MeV ($3 \times 10^{-4}$ GeV) to 100 GeV, given by the equation: \begin{equation} \label{eq:cr_e_dist} j(E) = \frac{0.90}{\left(0.85+E_{G}\right)^{2.6}} \frac{1}{\left(1+0.01/E_{G}\right)}. \end{equation} \noindent Here $E_{\textup{G}}$ is the energy of cosmic rays in GeV. The distribution is graphed in Figure~\ref{fig:cr_dist}. Assuming that cosmic rays are isotropic and then integrating over the energy distribution results in an interstellar cosmic ray flux $\phi_{st}$ of 8.6 particles cm$^{-2}$ s$^{-1}$. An additional term, $\zeta$, based on the cosmic ray ionization rate of $\approx 1.36 \times 10^{-17}$ s$^{-1}$, allows for easy scaling of the cosmic ray flux depending on the ionization rate in varying areas of the ISM, similar to the scaling factor included in \citet{shingledecker_general_2018}; the overall flux term is as follows: \begin{figure} \centering \includegraphics[width=\columnwidth]{cr_distribution.png} \caption{Cosmic ray energy distribution function, from \citet{spitzer_heating_1968}. The cosmic ray flux in particles cm$^{-2}$ s$^{-1}$ is obtained by integrating $j(E)$ over the cosmic ray energy distribution, from $3 \times 10^{-4}$ GeV to 100 GeV.
This results in an overall cosmic ray flux of 8.6 particles cm$^{-2}$ s$^{-1}$.} \label{fig:cr_dist} \end{figure} \begin{equation} \label{eq:scaling_factor} \phi_{ism} = \phi_{st} \frac{\zeta}{1.36 \times 10^{-17}}. \end{equation} Obtaining an exact sputtering cross section that will work with our averaged model is difficult, as the sputtering cross section is dependent on the stopping power of the target ice, as well as the mass and energy of the incident cosmic ray \citep{townsend_comparisons_1993,andersen_sputtering_1981}. Although there are experimentally determined sputtering cross sections, they are difficult to incorporate into our model for a number of reasons. First, \texttt{Nautilus-1.1} does not take into account specific cosmic ray nuclei. Second, many experimental and theoretical cross sections are not specific to sputtering, instead measuring all manners of destruction; these are called destruction cross sections. Using such a total destruction cross section to calculate sputtering rates would result in ``double counting'' other included desorption processes and radiolysis effects. A total cross section $\sigma_{t}$ is split into two general categories for collisions: a nuclear cross section $\sigma_{n}$, for elastic collisions, and an electronic (inelastic) cross section $\sigma_{e}$. The electronic cross section can be further split into ionizing $\sigma_{\textup{ion}}$ and excitation $\sigma_{\textup{exc}}$ cross sections. The different partial cross sections are shown in the following equation: \begin{equation} \label{eq:cross_sec} \sigma_{t} = \sigma_{n} + \sigma_{e} = \sigma_{n} + \sigma_{\textup{ion}} + \sigma_{\textup{exc}}. \end{equation} \noindent These cross sections all contribute to physical effects such as sputtering and whole grain heating. The fraction of each cross section that contributes to each resulting interaction type is not well defined.
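The flux value of 8.6 particles cm$^{-2}$ s$^{-1}$ quoted above can be reproduced by direct numerical integration of Equation~(\ref{eq:cr_e_dist}), assuming $j(E)$ is given per steradian so that the isotropic flux is $4\pi$ times the energy integral. A minimal Python sketch (the logarithmic grid and its resolution are arbitrary implementation choices):

```python
import math

# Numerically integrate the Spitzer-Tomasko distribution j(E)
# (particles cm^-2 s^-1 sr^-1 GeV^-1) from 3e-4 GeV to 100 GeV,
# then multiply by 4*pi sr for an isotropic cosmic ray field.

def j(E_GeV):
    """Spitzer-Tomasko cosmic ray energy distribution, E in GeV."""
    return 0.90 / (0.85 + E_GeV)**2.6 / (1.0 + 0.01 / E_GeV)

def flux(E_min=3.0e-4, E_max=100.0, n=4000):
    """Trapezoidal rule on a logarithmic energy grid; dE = E d(lnE)."""
    lo, hi = math.log(E_min), math.log(E_max)
    h = (hi - lo) / n
    xs = [math.exp(lo + i * h) for i in range(n + 1)]
    fs = [j(E) * E for E in xs]
    integral = h * (sum(fs) - 0.5 * (fs[0] + fs[-1]))
    return 4.0 * math.pi * integral

phi_st = flux()  # ~8.6 particles cm^-2 s^-1
```

Scaling to other regions of the ISM then follows Equation~(\ref{eq:scaling_factor}), multiplying \texttt{phi\_st} by $\zeta / 1.36 \times 10^{-17}$.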
In a similar manner to splitting the total cross section into more focused cross sections to describe different processes, it is helpful to divide $E_{\textup{total}}$, the total energy loss, into more specific terms: \begin{equation} \label{eq:e_partition} E_{\textup{total}} = N_{\textup{ion}} \Bar{E}_{\textup{ion}} + N_{\textup{exc}} \Bar{E}_{\textup{exc}} + N_{\textup{ion}} \Bar{E}_{s} + \nu_{\textup{n}}(E_{\textup{total}}), \end{equation} \noindent where $N_{\textup{ion}}$ and $N_{\textup{exc}}$ are the numbers of ionizations and excitations, respectively, while the average energies lost to ionization and excitation are $\Bar{E}_{\textup{ion}}$ and $\Bar{E}_{\textup{exc}}$. $\Bar{E}_{s}$ is the energy lost to sub-excitation secondary electrons, i.e.\ free electrons resulting from ionization that are too low in energy to cause further ionizations or excitations. These secondary electrons eventually lose their energy to the surface as vibrational or rotational energy, and these directly contribute to the motions of species on the ice surface, which can result in sputtering. The sum of the contributions of all contributing electronic effects on a single molecule can be referred to as the average electronic energy loss per ion pair, or $W_{\textup{e}}$, with units of eV. $W_{\textup{e}}$ for a single water molecule is approximately 30 eV \citep{johnson_electronic_1982}. The remaining term, $\nu_{\textup{n}}(E_{\textup{total}})$, where $\nu_{\textup{n}}$ is the fraction of $E_{\textup{total}}$ lost by nuclear effects, is the energy lost due to nuclear elastic collisions, and is related to the nuclear stopping power cross section, in units of eV cm$^{2}$, which contributes to multiple methods for sputtering. The quantity $\nu_{\textup{n}}(E_{\textup{total}})$, divided by the number of nuclear collisions per incident ion, is the average energy lost per nuclear collision, or $W_{\textup{n}}$ \citep{johnson_energetic_1990}.
$W_{\textup{n}}$ can be estimated by multiplying the energy of displacement, i.e.\ the energy required to remove a species from the energy well of the surface, $E_{\textup{dis}}$, by 2.5 \citep{sigmund_theory_1969}. Depending on the path of the cascade collisions, and how deep the ice particle in question is located within the bulk of the ice, $E_{\textup{dis}}$ can range from the desorption energy, $E_{\textup{b}}$, for surface species, to 5$E_{\textup{b}}$ for species embedded in the crystal structure. We approximate $E_{\textup{dis}}$ for our physisorbed ice as $2E_{\textup{b}}$. This results in a $W_{\textup{n}}$ for water ice of approximately 2.45 eV, from an $E_{\textup{b}}$ of 0.49 eV \citep{garrod_complex_2008}. The cross sections $\sigma_{n}$ and $\sigma_{e}$ are approximated from the nuclear and electronic stopping power cross sections $S_{\textup{n}}$ and $S_{\textup{e}}$. Figure~\ref{fig:stopping_powers} shows a graph of nuclear and electronic stopping powers for hydrogen and iron ions impacting water ice, as approximated using the values for liquid water and calculated by the \texttt{SR-NIEL} web tool \citep{boschini_screened_2014}. Stopping powers are dependent on the mass and energy of the incident ion and the target material. Figure~\ref{fig:stopping_powers} shows a three order of magnitude difference between electronic stopping powers and nuclear stopping powers for incident hydrogen ions, and an increasing difference in stopping power with energy for incident iron ions. This suggests that the majority of energy deposited into grain ices by cosmic rays comes from electronic interactions rather than nuclear interactions. Figure~\ref{fig:stopping_powers} also shows that the stopping power of iron ions impacting water is significantly higher than that of hydrogen for both nuclear and electronic collisions.
We will not further examine iron cosmic rays in this work, however, as they are significantly less abundant than hydrogen cosmic rays \citep{cummings_galactic_2016, blasi_origin_2013}. In order to account for the differences in terms and physical causation for sputtering, we will continue to examine nuclear and electronic sputtering separately. \begin{figure} \centering \includegraphics[width=\columnwidth]{stopping_powers.png} \caption{Electronic and nuclear stopping powers of H$^{+}$ and Fe$^{26+}$ ions impacting liquid water. H$^{+}$ electronic stopping powers are yellow dots, nuclear are purple dot-dashed. Iron incident ion electronic stopping powers are solid blue, nuclear are orange dashed. Energy ranges from $3 \times 10^{-4}$ GeV to 100 GeV. At these energies, $S_{e}$ of hydrogen is consistently more than $10^{2}$ times larger than $S_{n}$. Liquid water is used as an approximation for amorphous solid water, which is not included in the \texttt{SR-NIEL} web tool. Calculated using the \texttt{SR-NIEL} web tools, \url{http://www.sr-niel.org/index.php/}. } \label{fig:stopping_powers} \end{figure} \subsubsection{Nuclear (Elastic) Sputtering} Nuclear sputtering is caused by elastic collisions, and is most effective in the keV region, in which elastic collisions between nuclei within the ice lead to recoils and eventually sputtering \citep{johnson_energetic_1990}. This energy range is below that of the cosmic rays that bombard dust particles. Throughout the cosmic ray energy distribution used in this paper ($3 \times 10^{-4}$ GeV to 100 GeV), nuclear stopping powers are approximately three orders of magnitude lower than electronic stopping powers for water ice.
For use in our models, which do not account for differing cosmic ray ions and energies, we calculate a weighted average stopping power ($\bar{S}$) by the formula \begin{equation} \label{eq:average_s} \bar{S} = \frac{\sum S(E)j(E)}{\sum j(E)}, \end{equation} \noindent where $S(E)$ is the stopping power over a range of energies (treated separately for nuclear and electronic stopping powers), and $j(E)$ is the Spitzer-Tomasko cosmic ray energy function. Our calculation results in an average nuclear stopping power for H$^{+}$ impacting water ice of $3.489 \times 10^{-18}$ eV cm$^{2}$. To convert our nuclear stopping power to a nuclear cross section, we simply divide $S_{\textup{n}}$ by $W_{\textup{n}}$, resulting in an average nuclear sputtering cross section of $1.424 \times 10^{-18}$ cm$^{2}$. The rate coefficient for nuclear sputtering can now be expressed as \begin{equation} \label{nuc_no_y} k_{ns} = \sigma_{n} \phi_{ism} \approx \frac{S_{n}}{W_{n}} \phi_{st} \frac{\zeta}{1.36 \times 10^{-17}}. \end{equation} \noindent This expression assumes that only one particle is ejected per incident ion, which may not be the case. A suitable yield term has been derived by \citet{sigmund_theory_1969} and modified by \citet{johnson_energetic_1990}, resulting in the following expression for the nuclear sputtering yield $Y_{\rm ns}$: \begin{equation} Y_{ns} \approx \frac{3 \alpha S_{n}}{2 \pi^{2} \Bar{\sigma}_{\textup{diff}} U}. \label{eq:nuclear_yield} \end{equation} \noindent This is an equation for cascade sputtering of a planar surface, where $\alpha$ is an experimentally determined fraction of the nuclear stopping power that is involved in cascade collisions close to the surface, $S_{\textup{n}}$ is the nuclear stopping power, $\Bar{\sigma}_{\textup{diff}}$ is the average diffusion cross section, and $U$ is the average binding energy in eV.
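Equation~(\ref{eq:average_s}) is a discrete flux-weighted mean. In the Python sketch below, the stopping-power table is a purely hypothetical power law standing in for tabulated \texttt{SR-NIEL} values; only the weighting scheme itself is meaningful:

```python
# Flux-weighted average stopping power, Eq. (average_s):
#   S_bar = sum(S(E) * j(E)) / sum(j(E)).
# The S(E) values here are a HYPOTHETICAL power law, NOT SR-NIEL data;
# real calculations should substitute tabulated stopping powers.

def j(E):
    """Spitzer-Tomasko cosmic ray energy distribution, E in GeV."""
    return 0.90 / (0.85 + E)**2.6 / (1.0 + 0.01 / E)

energies = [3.0e-4 * 10.0**(i / 10.0) for i in range(56)]  # ~3e-4 to ~95 GeV
S_toy = [1.0e-18 * E**-0.8 for E in energies]              # placeholder, eV cm^2

weights = [j(E) for E in energies]
S_bar = sum(S * w for S, w in zip(S_toy, weights)) / sum(weights)
```

The weighting concentrates the average around the energies where $j(E)$ peaks, which is why a single averaged stopping power can stand in for the full ion energy distribution in a rate-equation model.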
For water ices being impacted by H$^+$ ions, Equation~(\ref{eq:nuclear_yield}) yields an average result of $1.8 \times 10^{-3}$ particles per cosmic ray proton, with the parameters shown in Table~\ref{tab:sput_params}. \begin{table} \centering \caption{Parameters for nuclear sputtering calculations on amorphous solid water.} \begin{tabular}{lcr} \hline Parameter & Value & Reference \\ \hline S$_{n}$ (eV cm$^{2}$) & $3.489 \times 10^{-18}$ & This work \\ $W_{n}$ (eV) & 2.45 & This work; \citep{sigmund_theory_1969} \\ $\Bar{\sigma}_{\textup{diff}}$ (cm$^{2}$) & $\sim 3.6 \times 10^{-16}$ & \citep{johnson_energetic_1990} \\ U (eV) & 0.49 & \citep{garrod_formation_2006} \\ $\alpha$ & 0.6 & \citep{andersen_sputtering_1981} \\ Y$_{ns}$ & $1.8 \times 10^{-3}$ H$_{2}$O per H$^{+}$ & This work \\ \hline $\sigma_{n}$ (cm$^{2}$) & $1.424 \times 10^{-18}$ & This work \\ \hline \end{tabular} \label{tab:sput_params} \end{table} The final form of the rate coefficient for nuclear sputtering is \begin{equation} \label{nuclear_sputtering_final} k_{ns} = Y_{ns} \sigma_{n} \phi_{ism}, \end{equation} \noindent which, fully expanded, is as follows: \begin{equation} \label{nuclear_sputtering_final_expanded} k_{ns}\,(\mathrm{s}^{-1}) = \left( \frac{3 \alpha S_{n}}{2 \pi^{2} \Bar{\sigma}_{\textup{diff}} U} \right) \left( \frac{S_{n}}{W_{n}} \right) \left( \phi_{st} \frac{\zeta}{1.36 \times 10^{-17}} \right). \end{equation} \subsubsection{Electronic (Inelastic) Sputtering} Our methodology for determining the electronic sputtering rate coefficient ($k_{\textup{es}}$) is very similar to that for nuclear sputtering, with some nuclear terms replaced by electronic terms. We start by estimating electronic cross sections from electronic stopping powers. As for the nuclear stopping power, we find an average electronic stopping power, weighted by the same cosmic ray energy distribution as for nuclear sputtering, as shown in Equation~(\ref{eq:average_s}).
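The numbers in this subsection can be chained together and checked numerically. The Python sketch below reproduces $W_{n}$, $\sigma_{n}$, and $Y_{ns}$ from the inputs of Table~\ref{tab:sput_params}, then forms $k_{ns}$ using the $\zeta$ of Table~\ref{tab:physicalcond}; the final $k_{ns}$ value is our illustration, not a tabulated result:

```python
import math

# Nuclear sputtering of water ice by H+, using the Table (sput_params) inputs.
S_n = 3.489e-18           # average nuclear stopping power, eV cm^2
E_b = 0.49                # water binding energy, eV
W_n = 2.5 * (2.0 * E_b)   # W_n = 2.5 * E_dis, with E_dis ~ 2 E_b -> 2.45 eV
sigma_n = S_n / W_n       # nuclear sputtering cross section, cm^2

alpha = 0.6               # fraction of S_n in near-surface cascades
sigma_diff = 3.6e-16      # average diffusion cross section, cm^2
U = 0.49                  # average binding energy, eV
Y_ns = 3.0 * alpha * S_n / (2.0 * math.pi**2 * sigma_diff * U)  # per incident H+

phi_st = 8.6              # interstellar cosmic ray flux, cm^-2 s^-1
zeta = 1.3e-17            # cosmic ray ionization rate, s^-1
phi_ism = phi_st * zeta / 1.36e-17
k_ns = Y_ns * sigma_n * phi_ism  # nuclear sputtering rate coefficient, s^-1
```

The tiny resulting $k_{ns}$ (of order $10^{-20}$ s$^{-1}$) reflects both the small cross section and the sub-unity yield, consistent with the statement that nuclear sputtering is a minor channel.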
This results in an average electronic stopping power of $1.131 \times 10^{-15}$ eV cm$^{2}$ for the bombardment of water ice by H$^{+}$. To convert electronic stopping powers to electronic cross sections, we simply divide the average stopping power by the average energy lost to the surface by ion-pair generation ($W_{\textup{e}}$) and by the fraction of electronic energy lost through repulsive decay ($f_{\textup{e}}$) \citep{brown_erosion_1982,schou_transport_1980}. As far as we know, there is no comprehensive table of $f_{\textup{e}}$ terms, and these must be either estimated or calculated for individual species. The product $W_{\textup{e}} \times f_{\textup{e}}$ is also referred to as the average energy lost to thermal relaxation, or $\overline{\Delta E}$ (in eV) \citep{rook_electronic_1985}. The process results in an electronic cross section of $2.09 \times 10^{-16}$ cm$^{2}$, using a $W_{\textup{e}}$ of 27 eV \citep{shingledecker_general_2018} and an $f_{\textup{e}}$ of 0.2 from \citet{johnson_electronic_1982}. All of these values are for amorphous water ice. $W_{\textup{e}}$ is well known for a variety of species \citep{fueki_reactions_1963} and can also be used to estimate cross sections from stopping powers for radiolysis reactions, as outlined in \citet{shingledecker_general_2018}. Related to $W_{\textup{e}}$ is $W_{\textup{s}}$, the average energy lost to sub-excitation electrons, which may be a good estimate for $\overline{\Delta E}$, the average energy lost to thermal excitation, assuming most of the energy of the sub-excitation electrons is lost to the surface as heat. Therefore, if $f_{e}$ is unknown, we can estimate the inefficiencies in energy transfer by using $W_{\textup{s}}$.
The terms are organized in Table~\ref{tab:electronic_yield_terms}, along with the parameters needed for calculating electronic yields ($Y_{\textup{es}}$), via the equation \citep{johnson_energetic_1990} \begin{table} \centering \caption{Parameters for calculating electronic sputtering terms on amorphous solid water.} \begin{tabular}{lcr} \hline Parameter & Value & Reference \\ \hline $\overline{S}_{e}$ (eV cm$^{2}$) & $1.131 \times 10^{-15}$ & This work \\ W$_{e}$ (eV) & 27 & \citep{shingledecker_general_2018} \\ $f_{e}$ (Unitless) & 0.20 & \citep{johnson_electronic_1982} \\ $\sigma_{es}$ (cm$^{2}$) & $2.09 \times 10^{-16}$ & This work \\ $C_{e} f_{e}^{2}$ (Unitless) & $8 \times 10^{-4}$ & \citep{brown_linear_1980} \\ n$_{B}$ (cm$^{-3}$) & $3.3 \times 10^{22}$ & \citep{brown_linear_1980} \\ U (eV) & 0.49 & \citep{garrod_formation_2006} \\ $Y_{es}$ & 0.0045 H$_{2}$O per H$^{+}$ & This work \\ \hline \end{tabular} \label{tab:electronic_yield_terms} \end{table} \begin{equation} \label{eq:electric_yield} Y_{es} \approx C_{e} f_{e}^{2} \left[ \frac{n_{B}^{-1/3} \left( n_{B} \bar{S}_{e} \right)}{U} \right]^{2}. \end{equation} \noindent Here $C_{e}$ is an experimental unitless constant, while $n_{B}$ (particles cm$^{-3}$) is the molecular number density of the ice, whose inverse cube root approximates the thickness of a monolayer of a surface species. The determined electronic sputtering yield is 0.0045 H$_{2}$O molecules per incident H$^+$ ion.
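Equation~(\ref{eq:electric_yield}) can be checked directly against the water-ice entries of Table~\ref{tab:electronic_yield_terms}; a short Python sketch (the corresponding rate coefficient at the end is our illustration, not a tabulated value):

```python
# Electronic sputtering yield for H+ on amorphous water ice,
# using the Table (electronic_yield_terms) inputs.
C_e_fe2 = 8.0e-4        # C_e * f_e^2, unitless
n_B = 3.3e22            # molecular number density of the ice, cm^-3
S_e_bar = 1.131e-15     # average electronic stopping power, eV cm^2
U = 0.49                # binding energy, eV

Y_es = C_e_fe2 * (n_B**(-1.0 / 3.0) * (n_B * S_e_bar) / U)**2  # ~0.0045 per H+

# Corresponding first-order rate coefficient, k_es = Y_es * sigma_es * phi_ism:
sigma_es = 2.09e-16     # electronic cross section, cm^2
phi_ism = 8.6 * 1.3e-17 / 1.36e-17   # scaled cosmic ray flux, cm^-2 s^-1
k_es = Y_es * sigma_es * phi_ism     # s^-1
```

Note that the quadratic dependence on $\bar{S}_{e}$ makes the yield sensitive to the averaged stopping power, which is why the flux-weighted average of Equation~(\ref{eq:average_s}) matters.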
The final form of the rate coefficient for electronic sputtering is \begin{equation} \label{eq:final_electronic_sputtering} k_{es} = Y_{es} \sigma_{es} \phi_{ism}, \end{equation} \noindent which, when fully expanded, takes the form: \begin{equation} \label{final_electronic_exploded} k_{es}\,(\mathrm{s}^{-1}) = \left( C_{e} f_{e}^{2} \left[ \frac{n_{B}^{-1/3} \left( n_{B} \bar{S}_{e} \right)}{U} \right]^{2} \right) \left( \frac{\overline{S}_{e}}{W_{e} f_{e}} \right) \left( \frac{\phi_{st} \zeta}{1.36 \times 10^{-17}} \right). \end{equation} \subsubsection{Carbon Dioxide and Mixed Ice Sputtering} \begin{table} \centering \caption{Parameters for CO$_{2}$ sputtering cross section and yield calculations} \begin{tabular}{lcr} \hline Parameter & Value & Reference \\ \hline $\overline{S}_{e}$ (eV cm$^{2}$) & $2.329 \times 10^{-15}$ & This work \\ $W_{e}$ (eV) & 34.2 & \citep{johnson_electronic_1982} \\ $f_{e}$ (Unitless) & 0.19 & \citep{johnson_electronic_1982} \\ $\sigma_{\textup{es}}$ (cm$^{2}$) & $3.58 \times 10^{-16}$ & This work \\ $C_{e} f_{e}^{2}$ (Unitless) & $1 \times 10^{-3}$ & \citep{brown_erosion_1982} \\ $n_{\rm B}$ (cm$^{-3}$) & $2.3 \times 10^{22}$ & \citep{johnson_energetic_1990} \\ U (eV) & 0.22 & \citep{garrod_formation_2006} \\ $Y_{es}$ & $0.073$ CO$_{2}$ per H$^{+}$ & This work \\ \hline \end{tabular} \label{tab:co2_sput_params} \end{table} For calculating the sputtering parameters for H$^{+}$ impacting CO$_{2}$ ice, chosen as a common ice constituent other than water, we calculate only the electronic sputtering terms, as nuclear sputtering is significantly less effective, as shown in the earlier calculations for water ice. The process is the same as for water ice, outlined in the previous section, with the constants and important terms used displayed in Table~\ref{tab:co2_sput_params}.
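The same two-step calculation, cross section then yield, can be repeated with the CO$_{2}$ entries of Table~\ref{tab:co2_sput_params}; a short Python sketch (note that a condensed-phase number density requires a positive exponent, so $n_{B}$ is taken as $2.3 \times 10^{22}$ cm$^{-3}$):

```python
# Electronic sputtering of CO2 ice by H+, using Table (co2_sput_params) inputs.
S_e_bar = 2.329e-15     # average electronic stopping power, eV cm^2
W_e = 34.2              # average energy per ion pair, eV
f_e = 0.19              # fraction of electronic energy lost via repulsive decay
sigma_es = S_e_bar / (W_e * f_e)   # electronic cross section, ~3.58e-16 cm^2

C_e_fe2 = 1.0e-3        # C_e * f_e^2, unitless
n_B = 2.3e22            # CO2 ice number density, cm^-3 (positive exponent assumed)
U = 0.22                # CO2 binding energy, eV
Y_es = C_e_fe2 * (n_B**(-1.0 / 3.0) * (n_B * S_e_bar) / U)**2  # ~0.073 per H+
```

The order-of-magnitude jump in yield relative to water ice comes almost entirely from the $1/U^{2}$ dependence, since the CO$_{2}$ binding energy is less than half that of water.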
Using Equation \ref{eq:average_s} for the range of $S(E)$ CO$_{2}$ values results in an average electronic carbon dioxide stopping power of $2.329 \times 10^{-15}$ eV cm$^{2}$. Dividing $\overline{S}_{e}$ by a $W_{e}$ of 34.2 eV and an $f_{e}$ of 0.19 \citep{johnson_electronic_1982}, we obtain an electronic sputtering cross section for CO$_{2}$ of $3.58 \times 10^{-16}$ cm$^{2}$. Applying the CO$_{2}$ values in Table~\ref{tab:co2_sput_params} to Equation~\ref{eq:electric_yield} returns a sputtering yield of 0.073 carbon dioxide molecules per incident hydrogen ion. The higher yield for carbon dioxide ice compared to water ice is due to the sputtering cross section of carbon dioxide ice being greater than that of water ice, and to the significantly lower binding energy of carbon dioxide ice. Our mixed ice model for sputtering simply adjusts the rate coefficient for sputtering based on the fractional amounts of water and CO$_{2}$ ice within the model, which leads to the following formula for any ice species with abundance N$_{i}$: \begin{multline} \label{eq:mixed_ice_rate} k_{es}^{\textup{mix}} = \left( k_{es(\textup{H}_{2}\textup{O})} \frac{N_{\textup{H}_{2}\textup{O}}}{N_{\textup{H}_{2}\textup{O}}+N_{\textup{CO}_{2}}} \right) N_{i} \\ + \left( k_{es(\textup{CO}_{2})} \frac{N_{\textup{CO}_{2}}}{N_{\textup{H}_{2}\textup{O}}+N_{\textup{CO}_{2}}} \right)N_{i}. \end{multline} \noindent This treatment of a mixed ice is simplistic, as we do not account for mixed ice yields, binding energies, or stopping powers, all of which will vary from those of a single-species ice. \section{Results and Analysis} \label{sec:results} \begin{figure} \centering \includegraphics[width=\columnwidth]{1_2_slim.png} \caption{Heatmaps comparing models 1H and 2HS; each heatmap has model time on the y axis, with various species on the x axis. Species with a ``J'' designation represent surface ices, and those with a ``K'' represent bulk ices. The different colours on the heatmap represent relative differences in abundance between models.
Note the different colour bar scales and ranges.}% \label{fig:sput} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{1_3_slim.png} \caption{Heatmaps comparing models 1H and 3S; each heatmap has model time on the y axis, with various species on the x axis. Species with a ``J'' designation represent surface ices, and those with a ``K'' represent bulk ices. The different colours on the heatmap represent relative differences in abundance between species in the compared models. Note the different colour bar scales and ranges.}% \label{fig:justsput} \end{figure} We utilize a series of low temperature models with various combinations of cosmic ray sputtering, cosmic ray whole grain heating, and cosmic ray induced diffusion, described in Table~\ref{tab:model_key}. We model the physical conditions based on the cold molecular core TMC-1, where the initial abundances are compiled in Table~\ref{tab:abundances}, and relevant physical conditions are displayed in Table~\ref{tab:physicalcond}. The standard ``reference'' model is designated 1H, a basic model with the same chemical network as in \citet{paulive_role_2021}, without radiolysis chemistry. Models with an ``H'' after the number have cosmic ray whole grain heating, an ``S'' if sputtering is on, based on the theory in Section~\ref{sec:sputtering} for water ice, a ``C'' for sputtering based on CO$_{2}$ ice, and an ``M'' for the mixed ice models. The chemical networks are the same for all models examined in this paper. All models that have CRH allow for periods of time where the temperature of the grain ices is temporarily increased to simulate the heating from cosmic rays, followed by cooling back to ambient temperatures as volatile species, such as CO, are thermally desorbed, as shown in \citet{hasegawa_new_1993}.
The rate coefficient for thermal desorption is the vibrational frequency of a species, $v_{\circ}$, multiplied by the exponential of the negative binding energy in kelvin divided by the ambient dust temperature, resulting in a rate coefficient of \begin{equation} k_{\textup{td}} = v_{\circ} \exp{ \left( -\frac{E_{\textup{b}}}{T_{\textup{dust}}} \right)}. \end{equation} This is the same equation used to calculate the CRH rate coefficient; however, the ambient dust temperature is changed to the increased temperature used for CRH, in this case 70 K. This increased temperature results in an increase in the desorption rate by a factor of $e^{7}$ compared to the thermal desorption rate at the dark cloud ambient dust temperature of approximately 10 K. \subsection{Water Ice Models} \label{sec:water_results} As an introduction, we examine the differences in the time-dependent results between the fiducial model (1H) and models 2HS and 3S. These non-fiducial models both include sputtering, while 2HS contains cosmic ray heating in addition to sputtering. Figures~\ref{fig:sput} and~\ref{fig:justsput} show multiple heatmaps of a suite of common interstellar molecules found in TMC-1. The colour gradient on the heatmap shows the relative difference in the abundance of a species between the base model 1H and the model to which it is compared. The relative difference between two numbers X and Y can be defined by the ratio (Y-X)/X, where X applies to the base model and Y refers to the other models, such as 2HS and 3S. If Y is much larger than X, the relative difference approaches Y/X. In Figure~\ref{fig:justsput}, the relative difference ranges from $10^{-1}$ to $10^{-4}$, where a relative difference of $1$ represents a species with an abundance Y that is double the abundance X in the base model.
Likewise, a relative difference of $10^{3}$ would mean an abundance in model 2HS or 3S 1000 times the base model abundance, while a relative difference of $10^{-4}$ would signify an abundance only 1.0001 times the abundance in model 1H. The white space within a heatmap signifies that there is no change in abundance between the two models at the given time. Specifically, for Figure~\ref{fig:sput}, these species are mostly abundant gas phase species within cold dark clouds, and we do not see large changes in their abundances: they are generally produced efficiently in the gas phase, and while they could have efficient grain production pathways, the gas phase routes still dominate, or they are adequately desorbed from the grain surface by thermal and non-thermal methods. Looking at gas-phase CO, we see that there is an approximate 1\% increase toward the end of the model, as CO is often depleted onto grain surfaces at later times. With the addition of sputtering, we increase the amount of desorbed CO, resulting in slightly higher gas phase abundances. There are similar results for the comparison between 1H and 3S, shown in Figure~\ref{fig:justsput}, with minor differences: there is less of an increase in abundance in general, which is expected because, unlike model 2HS, model 3S lacks cosmic ray whole grain heating. The lack of heating causes a lower desorption rate, which leads to the lower gas-phase abundance. In addition, the greater abundances in model 3S compared with 1H indicate that, relative to the whole grain heating already implemented in Nautilus, which temporarily heats the entirety of the grain ices to 70 K, sputtering is generally more effective at desorbing low abundance species from the grain surface and the bulk ice.
This is due to sputtering allowing desorption directly from the bulk of the ice more readily than thermal evaporation (including thermal evaporation following CRH), as there is less of a chance that the ejected species can get trapped by the upper layers of ice from physical changes in the ice following ion bombardment. Another contributing factor for the difference in abundances of models with solely CRH or sputtering is the relatively short time of increased thermal temperature from whole grain heating, which is on the order of $10^{-5}$ seconds, or the approximate time CO needs to thermally desorb from an ice surface \citep{hasegawa_new_1993}. During the duration of increased temperature, not all species are able to efficiently desorb at 70 K, but the more volatile CO is mostly desorbed from a surface in the time period of $10^{-5}$ seconds \citep{hasegawa_new_1993}. CO, and other species that contribute significantly to desorption at 70 K, then cool the surface to the ambient 10 K. There is no equivalent duration or limitation on what species are effectively removed by sputtering, save for the cosmic ray-grain encounter rate. In Figure~\ref{fig:water_sput_coms} we show the modelled fractional abundance against time using the models 1H, 2HS, and 3S for a variety of COMs detected toward TMC-1. The models are represented by dashed lines, solid lines, and dot-dashed lines for models 2HS, 1H, and 3S, respectively. The molecules examined are methanol (CH$_{3}$OH), acetaldehyde (CH$_{3}$CHO), methyl formate (HCOOCH$_{3}$), glycolaldehyde (HCOCH$_{2}$OH), and dimethyl ether (CH$_{3}$OCH$_{3}$). All fractional abundances mentioned in this paper are with respect to H$_{2}$. The molecules, which will be discussed later, have binding energies that prevent them from being efficiently desorbed either at the nominal 10 K, or at 70 K in models with CRH.
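To illustrate why CO escapes during a CRH event while COMs do not, one can evaluate $k_{\textup{td}} = v_{\circ}\exp(-E_{\textup{b}}/T_{\textup{dust}})$ at 10 K and 70 K. The characteristic frequency and binding energies below are typical literature values chosen for illustration, not parameters taken from this work.

```python
import math

def k_td(nu0, E_b, T):
    """Thermal desorption rate (s^-1): nu0 * exp(-E_b / T); E_b and T in K."""
    return nu0 * math.exp(-E_b / T)

nu0 = 1e12          # characteristic vibrational frequency, s^-1 (assumed)
E_b_CO = 1150.0     # CO binding energy in K (typical literature value)
E_b_CH3OH = 5000.0  # methanol binding energy in K (typical literature value)

for T in (10.0, 70.0):
    print(f"T = {T:>4} K: CO desorbs in ~{1 / k_td(nu0, E_b_CO, T):.1e} s, "
          f"CH3OH in ~{1 / k_td(nu0, E_b_CH3OH, T):.1e} s")
```

With these assumed values, the CO desorption timescale at 70 K is of order $10^{-5}$ s, consistent with the timescale quoted above, while the methanol timescale remains astronomically long at both temperatures, which is why CRH alone cannot desorb the COMs.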
There are little to no differences in the abundances of methanol, acetaldehyde, and dimethyl ether among models 1H, 2HS, and 3S at most times. However, there is an increase in abundance by more than a factor of 2 for methyl formate at $5 \times 10^{5}$ yr between model 1H and models 2HS and 3S. There is also an increase in gas-phase glycolaldehyde by less than a factor of 2 at the same time. \begin{figure} \centering \includegraphics[width=\columnwidth]{Water Sputtering Figures/water_coms.png} \caption{Modelled fractional abundance of relevant COMs under cold dark cloud conditions at 10 K. Models 2HS and 3S have water ice sputtering rates of $2.55 \times 10^{-18}$ cm$^{-3}$ s$^{-1}$ at $5\times10^{5}$ yr. The dashed line is model 2HS, the solid line is model 1H, and the dot-dashed line is model 3S. All COMs examined for this graph are gas-phase. Both sputtering models show increases in abundances for methyl formate and glycolaldehyde.} \label{fig:water_sput_coms} \end{figure} \subsection{Carbon Dioxide Ice Models} \label{sec:co2_results} Table~\ref{tab:co2_sput_params} shows relevant parameters used in calculating yields based on carbon dioxide sputtering. We examine the same suite of molecules as in the previous section, shown in Figures~\ref{fig:co2_ices_sput} and~\ref{fig:co2_sput_coms}. Figure~\ref{fig:co2_ices_sput} shows a sample of common interstellar ices and simple molecules found within the ice. Notably, the addition of sputtering does not significantly alter the bulk or surface abundances of the species. This suggests that sputtering does not remove enough material to leave the grain barren of ice when coupled with previously examined thermal, chemical, and photo-desorption methods for these common species at 10 K. \begin{figure} \centering \includegraphics[width=\linewidth]{CO2 Sputtering Figures/co2_coms.png} \caption{Gas-phase fractional abundances of select COMs with respect to hydrogen.
Models 4HSC and 5SC have carbon dioxide ice sputtering rates of $1.48 \times 10^{-16}$ cm$^{-3}$ s$^{-1}$ at $5\times10^{5}$ yr. Different colors represent different species. The fiducial model (1H) has a solid line, the model with carbon dioxide sputtering and heating (4HSC) a dashed line, and the model with just sputtering has a dot-dashed line (5SC). All molecules presented here show increases of varying degrees in gas-phase abundances, with acetaldehyde (CH$_{3}$CHO) showing less of an increase (a factor of 1.12) compared with other COMs when comparing abundances between the models with sputtering and the fiducial model.} \label{fig:co2_sput_coms} \end{figure} In contrast, Figure~\ref{fig:co2_sput_coms} shows significant increases in gas-phase abundances of the species presented in the figure for both models that include sputtering. Slight differences, especially at later times in the model, become apparent, with model 4HSC (both sputtering and cosmic ray heating) slightly outproducing glycolaldehyde (HCOCH$_{2}$OH) and methyl formate (HCOOCH$_{3}$) compared to model 5SC (sputtering, no CRH). Overall, we find significant differences between sputtering at carbon dioxide rates and water rates, partially due to higher yields and larger cross sections, leading to faster sputtering and greater amounts of desorption. Figure~\ref{fig:co2_sput_coms} shows increases of varying magnitude for COMs, where acetaldehyde has the lowest increase in magnitude, by a factor of 1.12, while methyl formate and glycolaldehyde see increases by factors of 33 and 24, respectively. These increases contrast with our water sputtering models, which only show increases in methyl formate and glycolaldehyde by an approximate factor of 2. For the other two examined COMs, methanol and dimethyl ether, we see increases by factors of 2.1 and 2.7, respectively, at the time $5\times10^{5}$ yr.
\subsection{Mixed Ice Models} \label{sec:mixed_ice} \begin{figure} \centering \includegraphics[width=\linewidth]{Mixed Sputtering Figures/mixed_coms.png} \caption{Gas-phase fractional abundances of select COMs with respect to hydrogen. Models 6HSM and 7SM have mixed water and carbon dioxide ice sputtering rates of $7.02 \times 10^{-17}$ cm$^{-3}$ s$^{-1}$ for water and $8.21 \times 10^{-17}$ cm$^{-3}$ s$^{-1}$ for carbon dioxide at $5\times10^{5}$ yr. Different colors represent different species; the fiducial model (1H) is a solid line, the model with mixed ice sputtering and heating (6HSM) is dashed, and the model with pure sputtering is dot-dashed (7SM). All molecules presented here show increases of varying degrees in gas-phase abundances, with acetaldehyde (CH$_{3}$CHO) showing less of an increase compared to other COMs.} \label{fig:mix_sput_coms} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{watervco2icecomp.png} \caption{Grain surface and mantle abundances of water ice and carbon dioxide ice, for models 1H, 6HSM, and 7SM. Other models used in this work differ in water and carbon dioxide fractional abundance by 10\% at most. Between $1 \times 10^{3}$ yr and $1 \times 10^{6}$ yr the models differ by factors of about 0.0001, or approximately 0.01\%.} \label{fig:mixicecomp} \end{figure} The models containing the mixed ice sputtering rates (Equation~\ref{eq:mixed_ice_rate}) are labelled 6HSM and 7SM, where both include mixed ice sputtering rates while only 6HSM has whole grain cosmic ray heating. Figures~\ref{fig:mix_sput_coms} and \ref{fig:mix_common_sput} show similar curves to the carbon dioxide ice sputtering results, with slight variation in most common and simple ice species, albeit to a lesser extent than the carbon dioxide sputtering models.
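A minimal sketch of the composition weighting of Equation~\ref{eq:mixed_ice_rate}: the pure-ice rate coefficients are scaled by the fractional ice composition (function and variable names are illustrative, not taken from the Nautilus code).

```python
def mixed_ice_rate(k_h2o, k_co2, n_h2o, n_co2, n_i):
    """Composition-weighted sputtering rate for an ice species with
    abundance n_i, following Eq. (mixed_ice_rate)."""
    total = n_h2o + n_co2
    return (k_h2o * n_h2o / total) * n_i + (k_co2 * n_co2 / total) * n_i

# Limiting cases: a pure ice recovers the corresponding single-species rate.
assert mixed_ice_rate(2.0, 5.0, 1.0, 0.0, 3.0) == 2.0 * 3.0
assert mixed_ice_rate(2.0, 5.0, 0.0, 1.0, 3.0) == 5.0 * 3.0
# A 50/50 ice gives the arithmetic mean of the two coefficients.
assert mixed_ice_rate(2.0, 5.0, 1.0, 1.0, 3.0) == 3.5 * 3.0
```

The limiting cases make the behaviour explicit: the mixed rate interpolates linearly between the pure water and pure CO$_{2}$ rates, which is why the mixed models fall between the two single-ice models.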
Similar to previous results, the mixed ice has desorption rates that vary based on the ratio of carbon dioxide to water ice, which results in gas-phase abundances between those of pure water ice and pure carbon dioxide ice sputtering. Figure~\ref{fig:mixicecomp} shows the difference in abundance of mantle and surface water ice and carbon dioxide ice in models 1H, 6HSM, and 7SM. The differences between ice abundances in the models are approximately 0.01\% for most times. There are no new notable results that have not been discussed in the two previous sections, apart from slightly different peak abundances for the examined COMs, shown in Figure~\ref{fig:mix_sput_coms}; these results will be further discussed in the next section, where we compare modelled abundances to observations. \section{Astrochemical Implications} \label{sec:analysis} \begin{table*} \centering \caption{List of observed column densities and estimated fractional abundances for examined COMs in the TMC-1 methanol peak (MP) and cyanopolyyne peak (CP).
Column densities are in cm$^{-2}$, while fractional abundances are with respect to hydrogen, and unitless.} \begin{tabular*}{\textwidth}{lcccc} \hline TMC-1 \\ \hline Molecule & Column Density (CP) & Fractional Abundance (CP) & Column Density (MP) & Fractional Abundance (MP) \\ \hline Methanol (CH$_{3}$OH) & $(1.7 \pm 0.3) \times 10^{13}$ & $(1.7 \pm 0.3) \times 10^{-9}$ & $(4.0 \pm 1.4) \times 10^{13}$ & $(4.0 \pm 1.4) \times 10^{-9}$ \\ Acetaldehyde (CH$_{3}$CHO) & $(3.5 \pm 0.2) \times 10^{12}$ & $(3.5 \pm 0.2) \times 10^{-10}$ & $(3.4 \pm 0.4) \times 10^{12}$ & $(3.4 \pm 0.4) \times 10^{-10}$ \\ Methyl Formate (HCOOCH$_{3}$) & $(1.1 \pm 0.2) \times 10^{12} $ & $(1.1 \pm 0.2) \times 10^{-10}$ & $(1.9 \pm 0.2) \times 10^{12}$ & $(1.9 \pm 0.2) \times 10^{-10}$ \\ Dimethyl Ether (CH$_{3}$OCH$_{3}$) & $(2.5 \pm 0.7) \times 10^{12}$ & $(2.5 \pm 0.7) \times 10^{-10}$ & $(2.1 \pm 0.7) \times 10^{12}$ & $(2.1 \pm 0.7) \times 10^{-10}$ \\ \hline \end{tabular*} \\ \begin{flushleft}Estimated hydrogen column density is $10^{22}$ cm$^{-2}$. All column densities are from \citet{soma_complex_2018}, except for methyl formate and dimethyl ether at the cyanopolyyne peak \citep{agundez_o-bearing_2021} and acetaldehyde at the cyanopolyyne peak \citep{cernicharo_discovery_2020}.
\end{flushleft} \label{tab:com_observation} \end{table*} \begin{table*} \centering \caption{List of modeled peak fractional abundances for select COMs, distinguished by model.} \begin{tabular}{lcccccccc} \hline Molecule & 1H & 2HS & 3S & 4HSC & 5SC & 6HSM & 7SM & 8HSC10 \\ \hline Methanol & $2.8 \times 10^{-10}$ & $3.0 \times 10^{-10}$ & $2.9 \times 10^{-10}$ & $7.6 \times 10^{-10}$ & $7.3 \times 10^{-10}$ & $3.5 \times 10^{-10}$ & $3.4 \times 10^{-10}$ & $7.8 \times 10^{-9}$ \\ Acetaldehyde & $1.9 \times 10^{-10}$ & $1.9 \times 10^{-10}$ & $1.9 \times 10^{-10}$ & $2.1 \times 10^{-10}$ & $2.1 \times 10^{-10}$ & $2.0 \times 10^{-10}$ & $2.0 \times 10^{-10}$ & $7.1 \times 10^{-10}$ \\ Methyl Formate & $7.5 \times 10^{-13}$ & $1.1 \times 10^{-12}$ & $9.5 \times 10^{-13}$ & $1.5 \times 10^{-11}$ & $1.5 \times 10^{-11}$ & $8.5 \times 10^{-12}$ & $8.5 \times 10^{-12}$ & $1.7 \times 10^{-10}$ \\ Dimethyl Ether & $2.4 \times 10^{-10}$ & $2.7 \times 10^{-10}$ & $2.6 \times 10^{-10}$ & $1.0 \times 10^{-9}$ & $1.0 \times 10^{-9}$ & $3.7 \times 10^{-10}$ & $3.7 \times 10^{-10}$ & $2.1 \times 10^{-8}$ \\ \hline \end{tabular} \label{tab:peak_abundances} \end{table*} Comparing the results of the various models to astronomical observations, we find mixed results: most models without sputtering do not produce enough gas-phase molecules, and while sputtering does adequately increase the gas-phase abundance to the observed values in some cases, even the faster sputtering models are not enough to adequately reproduce abundances for other molecules. \citet{soma_complex_2018} reported column densities of methanol, dimethyl ether, and acetaldehyde at the cyanopolyyne peak (referred to as TMC-1 CP) as approximately $1.7 \times 10^{13}$, $4.6 \times 10^{12}$ and $2.0 \times 10^{12}$ cm$^{-2}$, respectively.
They also report methanol peak (TMC-1 MP) column densities of methanol, acetaldehyde, methyl formate, and dimethyl ether as $4.0 \times 10^{13}$, $3.4 \times 10^{12}$, $1.9 \times 10^{12}$, and $2.1 \times 10^{12}$ cm$^{-2}$, respectively. \citet{agundez_o-bearing_2021} provides column densities for methyl formate and dimethyl ether toward TMC-1 CP of $1.1 \times 10^{12}$ and $2.5 \times 10^{12}$ cm$^{-2}$, respectively, while \citet{cernicharo_discovery_2020} provides an updated column density for cyanopolyyne peak acetaldehyde of $3.5 \times 10^{12}$ cm$^{-2}$. Dividing these column densities by an approximate molecular hydrogen column density of $10^{22}$ cm$^{-2}$ in TMC-1 yields fractional abundances of the examined molecules. The observed values used here are organized in Table~\ref{tab:com_observation}, while peak modeled abundances are in Table~\ref{tab:peak_abundances}. This results in an observed fractional abundance for methanol of $1.7 \times 10^{-9}$ toward TMC-1 CP, and an abundance of $4.0 \times 10^{-9}$ at TMC-1 MP. Compared with the peak abundance of methanol in model 1H of approximately $2.8 \times 10^{-10}$, we significantly under-produce methanol in TMC-1 by an approximate order of magnitude, at least in the gas phase. Our sputtering models have a peak gaseous abundance of $3.0 \times 10^{-10}$ in model 2HS, and $7.6 \times 10^{-10}$ in model 4HSC, both at times from $5 \times 10^{5}$ until $7 \times 10^{6}$ yr. Our mixed ice model is not as effective as the carbon dioxide ice, with a peak abundance of $3.5 \times 10^{-10}$. These results agree with the generally accepted methanol production method of hydrogenation of carbon on grain surfaces, because the inclusion of sputtering desorbs methanol that is already formed but trapped within the grain ices, bringing gas-phase values closer to observed abundances \citep{watanabe_efficient_2002, fuchs_hydrogenation_2009}.
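The conversion and comparison described above amount to simple ratios; a hedged sketch for methanol, using the observed column densities of Table~\ref{tab:com_observation} and the peak model abundances of Table~\ref{tab:peak_abundances}:

```python
N_H2 = 1e22             # assumed hydrogen column density in TMC-1, cm^-2
N_CH3OH_CP = 1.7e13     # observed methanol column density, TMC-1 CP, cm^-2
N_CH3OH_MP = 4.0e13     # observed methanol column density, TMC-1 MP, cm^-2

x_CP = N_CH3OH_CP / N_H2  # fractional abundance -> 1.7e-9
x_MP = N_CH3OH_MP / N_H2  # fractional abundance -> 4.0e-9

x_model_1H = 2.8e-10      # peak gas-phase methanol, fiducial model
x_model_4HSC = 7.6e-10    # peak gas-phase methanol, CO2 sputtering + CRH

print(f"1H under-produces CP methanol by a factor of {x_CP / x_model_1H:.1f}")
print(f"4HSC under-produces MP methanol by a factor of {x_MP / x_model_4HSC:.1f}")  # ~5.3
```

The second ratio reproduces the factor of 5.3 quoted below for model 4HSC at TMC-1 MP.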
The models suggest that faster sputtering is needed to reproduce the observed abundances in TMC-1 MP, because the mixed ice and water ice sputtering rate models still under-produce methanol by factors of 11.5 and 13, respectively, when comparing peak abundances. In contrast, CO$_{2}$ sputtering seems to produce and desorb enough CH$_{3}$OH to match peak abundances in TMC-1 CP, coming within a factor of 2.5 below the observed abundance, despite still under-producing methanol for TMC-1 MP by a factor of 5.3. Acetaldehyde is an order of magnitude less abundant compared to methanol at both peaks, with peak observed abundances of $3.5 \times 10^{-10}$ at TMC-1 CP \citep{cernicharo_discovery_2020}, and $3.4 \times 10^{-10}$ toward the methanol peak. Our modelled abundances for acetaldehyde are $1.9 \times 10^{-10}$ for the standard heating model 1H. For models including sputtering, the abundances are as follows: model 2HS has a peak abundance of $1.9 \times 10^{-10}$, model 4HSC peaks at $2.1 \times 10^{-10}$, and model 6HSM peaks at about $2.0 \times 10^{-10}$, with the faster carbon dioxide ice sputtering and heating better matching the observed abundances of the CP. While the base model is within a factor of 1.9 of the peak abundances toward TMC-1 CP, this model also under-produces such abundances for TMC-1 MP by a factor of 1.8; sputtering partially accounts for the difference for TMC-1 MP, but is less than the observed abundance by a factor of 1.6. The calculated peak abundances occur at approximately $5 \times 10^{5}$ yr, with another peak of similar abundances occurring at approximately $6 \times 10^{6}$ yr in all models examined in this paper, and are shown in Figures~\ref{fig:water_sput_coms}, \ref{fig:co2_sput_coms}, and \ref{fig:mix_sput_coms}. The observations for methyl formate put the fractional abundance at $1.9 \times 10^{-10}$ toward the TMC-1 MP.
Model 1H calculates the peak fractional abundance as $7.5 \times 10^{-13}$, significantly lower than the observed amount by over two orders of magnitude. The models with sputtering come closer to replicating the observed abundance, but still under-produce methyl formate in the gas phase by an approximate order of magnitude, with model 4HSC just reaching $1.5 \times 10^{-11}$. The mixed-ice and water models produce less methyl formate, producing peak amounts of $8.5 \times 10^{-12}$ and $1.1 \times 10^{-12}$, respectively. All models predict the peak of methyl formate to be at a similar time of $5 \times 10^{5}$ yr, much like the other molecules highlighted in this section. Recent observations by \citet{agundez_o-bearing_2021} have detected methyl formate toward TMC-1 CP, with a column density of $(1.1 \pm 0.2) \times 10^{12}$ cm$^{-2}$, and a fractional abundance of $1.1 \times 10^{-10}$. The results comparing the models to these observed amounts match the discrepancies between the models and the observed abundances in TMC-1 MP. Methyl formate has also been examined in cold dark clouds using radiolysis models, such as in \citet{shingledecker_cosmic-ray-driven_2018}, where gas-phase methyl formate production was found to be greatly enhanced with the inclusion of radiolysis chemistry and suprathermal reactions. A future possibility may be to combine such enhanced grain ice abundances with efficient desorption by sputtering. Dimethyl ether is reported to have an upper limit to the fractional abundance of $4.6 \times 10^{-10}$ toward the cyanopolyyne peak, and an observed fractional abundance of $2.1 \times 10^{-10}$ toward the methanol peak \citep{soma_complex_2018}. In the same publication reporting the methyl formate detection toward the cyanopolyyne peak, \citet{agundez_o-bearing_2021} report a column density of $(2.5 \pm 0.7) \times 10^{12}$ cm$^{-2}$, for a fractional abundance of $(2.5 \pm 0.7) \times 10^{-10}$.
All models reach peak abundance at approximately $6 \times 10^{6}$ yr. Model 4HSC produces the most dimethyl ether, reaching just above $1 \times 10^{-9}$, while 2HS and 6HSM have peak abundances at $2.7 \times 10^{-10}$ and $3.7 \times 10^{-10}$, respectively. However, at a time of $\sim 5 \times 10^{5}$ yr, which is similar to other peak times that match observations of molecules discussed earlier in the section, the modelled abundances are lower by a factor of approximately 2 than later in the model. Model 1H produces $\sim 2.0 \times 10^{-10}$, with 4HSC producing approximately $5 \times 10^{-10}$ dimethyl ether. Models 2HS and 6HSM produce abundances of $\sim 2.2 \times 10^{-10}$ and $\sim 3.6 \times 10^{-10}$, respectively, at 10$^{5}$ yr. Models 4HSC and 5SC produce dimethyl ether within a factor of 4 toward both peaks of TMC-1, at peak abundances, with the other models all producing dimethyl ether within a factor of 1 at peak abundances. The carbon dioxide ice models more closely match observations at times around $5 \times 10^{5}$ years, within a factor of 2 for the cyanopolyyne peak, and more than 2 for the methanol peak. Overall, the model that best fits the observed abundances for COMs in TMC-1 is either 4HSC or 5SC, despite under-producing methanol values for the MP by a factor of five. We came closer to replicating methanol values for the CP, only under-producing them by a factor of 2. For acetaldehyde, all models match the observations for the CP, at approximately $2 \times 10^{-10}$. For methyl formate at the MP, the carbon dioxide ice models under-produce methyl formate by a factor of 10, with other models under-producing methyl formate by a larger factor. At TMC-1 CP, models 4HSC and 5SC also under-produce methyl formate by an order of magnitude. Finally, the upper limit of observed dimethyl ether toward the CP is less than peak amounts in models 4HSC and 5SC by a factor of 2.
The peaks of other models are below the upper limit of dimethyl ether observed in the CP, while slightly overproducing dimethyl ether compared with the observed abundance in the MP by a factor of 2. The models with sputtering rates based on carbon dioxide ices consistently under-produce the observed molecules examined in this paper, though their calculated abundances come closest to replicating the abundances found toward TMC-1 CP. These COMs can be compared with observed abundances for other environments in the ISM: the cold dense core B1-b, the dense core L483, the prestellar core L1689B, and the molecular cloud Barnard 5, as listed in Appendix~\ref{sec:appa}, Tables~\ref{tab:B1b}, \ref{tab:L483}, \ref{tab:L1989B}, and \ref{tab:barnard5}. These environments have slightly different initial abundances and conditions compared with TMC-1, though the model parameters remain the same. Comparing modelled abundances with the observed abundances, we see that our variety of models does not adequately replicate methanol abundances in most environments. The observed methanol abundances range from $1.2 \times 10^{-9}$ in L1689B to $4.5 \times 10^{-8}$ toward Barnard 5. However, the abundances of the other molecules examined are relatively well matched at peak, or near-peak, abundances. To bring methanol fractional abundances to the approximate observed levels in all sources would require the rate coefficient for sputtering to be increased by approximately a factor of ten. We have run a supplementary model (8HSC10) with such an increase, and have included the figures presenting the fractional abundances for ten times the sputtering rate in Appendix~\ref{sec:appa}, Figure~\ref{fig:go_crazy}. Notable in this model is that the abundance of dimethyl ether is greatly increased, to a greater fractional abundance than that of methanol, which is not observed in any of the sources mentioned previously.
However, at this enhanced rate of sputtering, peak modelled methanol abundances match observed abundances well. This suggests that while sputtering rates greater than the ones presented here may be physical, there need to be further studies into modelling the production and destruction of the molecules highlighted here, especially methanol and dimethyl ether, as well as further examinations of sputtering. \section{Conclusions} \label{sec:conclusions} We present in this paper a way of both estimating and implementing theoretical sputtering parameters in a rate-equation based model of cold dark clouds. We show that sputtering is an effective way of desorbing multiple species from grain surfaces, even at slow sputtering rates. While there are experimental results that can be included in models, it is more difficult to obtain experimental data that match the mixture of amorphous ices that populate grain surfaces in the ISM. Fortunately, there are theoretical treatments of sputtering ices that seem to be reasonably effective. While further experimental work will need to be done to provide a basis for the sputtering of mixed ices by lighter ions, both current theory and experimental results suggest that sputtering is important as a method of non-thermal desorption in astrochemical models, similar to reactive desorption and photodesorption. Even when coupled with cosmic ray heating, a widely used ``thermal'' method of temporarily increasing thermal desorption, many species are neither excessively depleted from within the ice nor excessively ejected into the gas phase. In addition, in cases where there are efficient gas-phase destruction pathways, sputtering does not overcome them. To adequately match abundances using sputtering alone, the rate coefficient for sputtering would need to be increased by a factor of approximately 10; however, this causes overproduction of multiple molecules, including dimethyl ether and acetaldehyde.
Further examination of gas and grain destruction pathways is warranted, should sputtering become commonplace in models. \section*{Acknowledgements} E. H. thanks the National Science Foundation (US) for support of his research programme in astrochemistry through grant AST 19-06489. This research has made use of NASA's Astrophysics Data System Bibliographic Services. We would like to thank V. Wakelam for the use of the \texttt{Nautilus-1.1} program. \section*{Data Availability} The data underlying this article are available in the article, as well as cited online repositories (KIDA). \bibliographystyle{mnras}
\section{Introduction} For a compact Riemann surface $X$ of genus $g$, a finite set of points $A_{1},\dots,A_{n} \in X$ and a distribution of angles $2\pi(a_{1},\dots,a_{n})$, a natural generalization of the uniformization problem concerns the existence of a metric of constant positive curvature in $X \setminus \lbrace A_{1},\dots,A_{n} \rbrace$ that extends to the singularities $A_{1},\dots,A_{n}$ as conical points of prescribed angles.\newline In this problem, the Gauss-Bonnet theorem states that the sum of the diffuse and singular curvature is a topological invariant. The total diffuse curvature is $2\pi(2g-2+n-\sum a_{i})$. Depending on the sign of this quantity, the metric should be hyperbolic, flat or spherical in the complement of the singularities.\newline If the latter quantity is nonpositive, existence of a hyperbolic or a flat metric has been obtained by several authors, see \cite{Tr,Tr1} for recent references. On the contrary, when this quantity is positive, existence of a spherical metric is far from guaranteed.\newline Some authors have speculated an equivalence between existence of a metric of constant positive curvature and a notion of stability of bundles (in the spirit of the Yau-Tian-Donaldson conjecture for Fano manifolds), see \cite{SLX}.\newline We can gain a better understanding of cone spherical metrics by studying metrics with constrained monodromy. In this paper, we focus on a class of metrics introduced in~\cite{SCLX}. A cone spherical metric has \textit{dihedral monodromy} if its monodromy group (as a subgroup of $\SO(3)$) globally preserves a pair of antipodal points. This is equivalent to globally preserving the dual great circle. Rotations of the monodromy group are rotations around the axis and rotations of angle $\pi$ around any axis whose antipodal points belong to the preserved great circle (see Section~\ref{sec:SO3}).
A well-studied subclass of metrics with dihedral monodromy consists of metrics with co-axial monodromy, see \cite{Er}.\newline In \cite{SCLX}, the authors obtain the developing map of a spherical structure with dihedral monodromy by integration of a quadratic differential. In this way, they prove existence of cone spherical metrics with dihedral monodromy for some distributions of angles. The technical result of their paper (Theorem~1.8) amounts to a characterization of the configurations of residues a quadratic differential may have at its poles. They considered only the case of differentials with simple zeroes and double poles.\newline Their work can be extended and completed. Indeed, recent papers give a complete characterization of the configurations of local invariants that Abelian and quadratic differentials can realize, see \cite{GT,GT1}. Using the solution of this problem (Question 1.7 in \cite{SCLX}), we are able to give an explicit characterization of the distributions of angles that can be realized by a cone spherical metric with dihedral monodromy. The key point, explained in Section~\ref{sec:JSdif}, is the connection between a subclass of spherical metrics (hemispherical surfaces, introduced in Definition~\ref{def:hemsurf}) and a subclass of quadratic differentials (totally real Jenkins-Strebel differentials, introduced in Definition~\ref{def:totrealJS}).\newline The main necessary condition for a distribution of angles to be realized by a spherical metric with dihedral monodromy is the \textit{strengthened Gauss-Bonnet inequality} introduced in Proposition~\ref{prop:GBplus}. This inequality is the classical Gauss-Bonnet inequality where only the angles belonging to $\pi\mathbb{Z}$ are considered. We briefly explain why this condition appears.
A conical singularity of the spherical metric corresponds either to a double pole of the quadratic differential, whose residue is determined by the conical angle, or to a singularity of order $k \geq -1$ of the differential. The strengthened Gauss-Bonnet inequality then follows from the classical results about the degree of a divisor associated to a quadratic differential.\newline There are additional obstructions leading to exceptional distributions of angles that cannot be realized by a spherical metric with dihedral monodromy. They come from general arithmetic obstructions to the existence of quadratic differentials with integer residues obtained in \cite{GT,GT1}. The list of obstructions and the complete characterization of the distributions of angles that can be realized are contained in six theorems.\newline For spherical metrics in genus zero, the classification is given in Theorems~\ref{thm:63},~\ref{thm:65} and~\ref{thm:610} for the strict dihedral case. We recall \cite[Theorem 1]{Er} for the co-axial case in Theorem~\ref{thm:61}.\newline For spherical metrics in higher genus, the classification is given in Theorem~\ref{thm:52} for the strict dihedral case and Theorem~\ref{thm:53} for the co-axial case.\newline The organization of the paper is the following: \begin{itemize} \item In Section~\ref{sec:dihedral}, we introduce the co-axial and dihedral monodromy classes. We draw the connection between quadratic differentials and cone spherical metrics with dihedral monodromy. \item In Section~\ref{sec:difprescrites}, we present the results on quadratic and Abelian differentials with prescribed singularities. \item In Section~\ref{sec:GBplus}, we state the strengthened Gauss-Bonnet inequality, which is the main necessary condition for realization of a distribution of angles by a cone spherical metric with dihedral monodromy.
\item In Section~\ref{sec:highergenus}, we give a characterization of the distributions of angles that can be realized by a spherical surface of genus $g \geq 1$ with co-axial or dihedral monodromy (Theorems~\ref{thm:52} and~\ref{thm:53}), comparing the two classes. \item In Section~\ref{sec:genuszero}, we state Eremenko's theorem characterizing distributions of angles for cone spherical metrics with co-axial monodromy in genus zero. We then state and prove the analogous result for metrics with dihedral monodromy (Theorems~\ref{thm:63},~\ref{thm:65} and~\ref{thm:610}). \end{itemize} \section{From differentials to spherical metrics with dihedral monodromy} \label{sec:dihedral} In this section, we recall the basic statements about a special class of spherical surfaces. Then we show how they are related to quadratic differentials and give some background on them. \subsection{Spherical structures} \label{sec:SO3} A \textit{spherical structure} on a compact surface with finitely many punctures is an atlas of charts with values in the standard sphere $\mathbb{S}^{2}$ whose transition maps belong to $\SO(3)$.\newline A puncture of a spherical surface is a \textit{conical singularity} of angle $\theta$ if it is locally isometric to the singularity of a hemispherical sector. \begin{defn} A \textit{hemispherical sector} of angle $\alpha \in ]0;2\pi]$ is a singular surface with boundary, obtained by considering the sector of angle $\alpha$ between two meridians in the standard half-sphere and identifying these two sides. It has a conical singularity of angle $\alpha$ and a geodesic boundary of length $\alpha$. \end{defn} The definition of hemispherical sectors extends by ramified cover to any angle $\alpha \in \mathbb{R}_{+}^{\ast}$.
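Two simple instances may help fix ideas; the following example is an illustration added here and is not taken from the original text.

```latex
\begin{ex}
For $\alpha = 2\pi$, the two meridians coincide and the hemispherical sector is
the closed standard half-sphere: the point at the pole is actually smooth (angle
$2\pi$) and the boundary is the whole equator, of length $2\pi$. For
$\alpha = 4\pi$, the sector is the double cover of the previous one, ramified at
the pole: it has a genuine conical singularity of angle $4\pi$ and a geodesic
boundary of length $4\pi$.
\end{ex}
```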
They provide a geometric model for every conical singularity.\newline Given a spherical structure on a surface $S$, there is a representation of the fundamental group of the complement of the punctures in $S$ into the group of linear-fractional transformations. Its image is the {\em monodromy} of the spherical structure. Note that it is a subgroup of $\SO(3)$. A first natural subclass of spherical metrics is given by metrics with \textit{reducible} or \textit{co-axial} monodromy.\newline \begin{defn} A cone spherical metric has \textit{co-axial monodromy} if its monodromy group fixes an axis. Equivalently, the monodromy is conjugate in $\SO(3)$ to a subgroup of $\SO(2)$. \end{defn} In this paper, we focus on a class of metrics with slightly more general monodromy, introduced in~\cite{SCLX}. \begin{defn} A cone spherical metric has \textit{dihedral monodromy} if its monodromy group globally preserves a pair of antipodal points. Equivalently, the monodromy is conjugate in $\SO(3)$ to a subgroup of $\mathbb{Z}/2\mathbb{Z} \rtimes \SO(2)$. \end{defn} Among cone spherical metrics with dihedral monodromy, we distinguish co-axial monodromy (preserving pointwise the two antipodal points of the axis) and strict dihedral monodromy (dihedral but not co-axial).\newline \subsection{Latitude foliation} \label{sec:latitude} The dihedral monodromy preserves a great circle in the sphere. We will refer to it as the \textit{equator}. We will also refer to the \textit{equatorial net} as the locus in the spherical surface that is mapped to the equator in every chart. Since the monodromy acts by isometries, it also preserves the {\em latitude foliation} that decomposes the sphere into circles of constant latitude. Besides, the \textit{absolute latitude}, which is the absolute value of the angular distance of a point of the sphere to the equator, is also well-defined.\newline We rephrase the previous paragraph.
Given a surface $S$ with a cone spherical metric with dihedral monodromy, the \textit{absolute latitude} is the map $\phi\colon S \rightarrow [0,\frac{\pi}{2}]$ which associates to a point the norm of its latitude. The preimage of $0$ under $\phi$ is the \textit{equatorial net} while the preimages of $\frac{\pi}{2}$ are the \textit{poles}. As we will see, if the monodromy is in fact co-axial, then the sign of the latitude is globally defined.\newline It should be noted that the only circle of latitude which is a geodesic is the equator. The others are just loxodromic paths.\newline The local geometry of conical singularities induces constraints on their position in the latitude foliation. A conical singularity $A$ of angle $\alpha$ should satisfy the following conditions: \begin{itemize} \item if $\alpha \notin \pi\mathbb{Z}$, then $\phi(A)=\frac{\pi}{2}$; \item if $\alpha \notin 2\pi\mathbb{Z}$, then $\phi(A)\in \lbrace 0;\frac{\pi}{2} \rbrace$. \end{itemize} In other words, any singularity can be a pole of the latitude foliation. Otherwise, the angle should be an integer multiple of $\pi$ corresponding to the number of distinct branches of the foliation that approach the singularity. If this number is odd, then the singularity automatically belongs to the equatorial net. Indeed, the latitude circle the singularity belongs to should be preserved by the nontrivial monodromy of a simple loop around this singularity.\newline \begin{rem} For metrics with co-axial monodromy, the \textit{latitude} defined in each chart is preserved by the monodromy and is thus globally defined (not just the absolute latitude). This implies in particular that every singularity whose angle does not belong to $2\pi\mathbb{Z}$ should be a pole of the latitude foliation.\newline \end{rem} \subsection{Hemispherical surfaces} \label{sec:hemispherical} Hemispherical surfaces are a special class of cone spherical surfaces with dihedral monodromy.
\begin{defn}\label{def:hemsurf} A \textit{hemispherical surface} is a closed surface with a cone spherical metric obtained by gluing finitely many hemispherical sectors along their geodesic boundaries. The gluing can identify several boundary points and create conical singularities whose angle is an integer multiple of $\pi$. \end{defn} This class of surfaces is an example of spherical surfaces with dihedral monodromy. \begin{prop} A hemispherical surface has dihedral monodromy. \end{prop} \begin{proof} For a spherical surface obtained by gluing hemispherical sectors, a latitude foliation is defined in each sector and extends globally through the boundary. The monodromy preserves the great circle where the boundary of each sector lies. \end{proof} A specific property of hemispherical surfaces is that the latitude of every singularity is either $0$ or $\frac{\pi}{2}$.\newline Hemispherical surfaces are simple to describe. They are completely characterized by the lengths of the boundary segments and the combinatorics of the gluing pattern. The equatorial net of a hemispherical surface is the union of the boundaries of the hemispherical sectors. We show in Subsection~\ref{sec:loxodromic} that any spherical surface with dihedral monodromy can be deformed to a hemispherical surface. \begin{ex} The most basic hemispherical surface is obtained from a hemispherical sector of angle $\alpha$. We divide the boundary into two geodesic segments of length $\frac{\alpha}{2}$ and glue them to each other. We obtain a spherical surface with two conical singularities of angle $\pi$ and one singularity of angle $\alpha$.\newline \end{ex} \subsection{Loxodromic projection} \label{sec:loxodromic} In spherical geometry (and navigation), a \textit{loxodromic path} makes a constant angle with the circles of latitude. In particular, circles of latitude are loxodromic paths.\newline For any angle $\theta$, we can define the \textit{loxodromic projection flow} $L_{\theta}$.
For any point $x$ of the standard sphere with $\phi(x)\in [0;\frac{\pi}{2}[$, its loxodromic projection $L_{\theta}(x,t)$ is the intersection of the loxodromic path of constant angle $\theta$ passing through $x$ with the circle of latitude $(1-t)\phi(x)$ (with $t \in [0;1]$). In particular, $L_{\theta}(x,0)=x$ while $L_{\theta}(x,1)$ belongs to the equator. We define $L_{\theta}(x,1)$ as the \textit{loxodromic projection} of $x$ to the equator with angle $\theta$. \begin{prop}\label{prop:isodef} For any surface $X$ with a cone spherical metric with dihedral monodromy, there is an isomonodromic deformation $X_{0 \leq t \leq 1}$ such that $X=X_{0}$ and $X_{1}$ is a hemispherical surface. \end{prop} \begin{proof} The latitude foliation decomposes $X$ into ``spherical cylinders'' that are continuous families of circles of latitude. For a generic choice of angle $\theta$, the loxodromic projection flow does not make any pair of singularities of these cylinders collide. In the limit, every ``spherical cylinder'' which is not bounded by a pole of the latitude foliation degenerates to an interval exchange map. The conical angles are unchanged, but the latitude of every singularity is then either $0$ or $\frac{\pi}{2}$. \end{proof} \subsection{Quadratic differentials and half-translation surfaces} \label{sec:quaddifhalf} On a Riemann surface $X$, a quadratic differential $q$ is a meromorphic section of $K^{\otimes 2}_{X}$. Outside the singularities of odd order, it is locally the square of a meromorphic $1$-form $\pm \sqrt{q}$.\newline Antiderivatives of $\pm \sqrt{q}$ are the developing maps of a \textit{half-translation structure}. This structure is given by an atlas on the complement of the singularities of $q$, made of charts to $\mathbb{C}$ whose transition maps are of the form $z \mapsto \pm z +c$.\newline A singularity of order $a \geq -1$ corresponds to a conical singularity of angle $(2+a)\pi$ in the associated flat metric.
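A minimal worked check of the order/angle correspondence (the computation below is standard and added here for illustration):

```latex
\begin{ex}
Near a simple zero ($a=1$), we can write $q = z\,dz^{2}$, so that
$\pm\sqrt{q} = \pm z^{1/2}\,dz$ has antiderivative $\pm\tfrac{2}{3}\,z^{3/2}$.
A small loop around $z=0$ is unfolded by $z \mapsto z^{3/2}$ onto
$\tfrac{3}{2}$ turns, giving a conical angle of
$2\pi \cdot \tfrac{3}{2} = (2+1)\pi$ in the flat metric. Similarly, a simple
pole ($a=-1$) yields a conical angle of $(2-1)\pi = \pi$, and a regular point
($a=0$) yields the smooth angle $2\pi$.
\end{ex}
```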
A double pole is a point at infinity in an infinite cylinder. Besides, at a double pole, if $q=(\frac{r_{-2}}{z^{2}}+\frac{r_{-1}}{z}+r_{0}+r_{1}z+\dots)dz^{2}$, then the quadratic residue at the double pole is $r_{-2}$. It is always nonzero. In the cylinder neighboring a double pole of quadratic residue $r$, the flat cylinder is obtained by identifying the two sides of a half-infinite band by a translation of $\pm \sqrt{r}$, see \cite{St} for details.\newline We define a special class of quadratic differentials. \begin{defn}\label{def:totrealJS} A quadratic differential $q$ on a Riemann surface $X$ is a {\em totally real Jenkins-Strebel differential} if its associated half-translation surface is formed by the gluing of finitely many semi-infinite cylinders along segments of their boundaries. \end{defn} Recall that a {\em period} of $q$ is the integral of $\pm\sqrt{q}$ along a path between two singularities of order $\geq-1$. A period is {\em absolute} if its starting and ending points coincide. The set of absolute periods forms a subgroup of $\mathbb{C}$. Note that the periods of a totally real Jenkins-Strebel differential are real. It has exactly one double pole for each cylinder, and the other singularities are conical singularities of angle in $\pi\mathbb{Z}$. They are either simple poles or zeroes of the differential.\newline \subsection{Relation between hemispherical surfaces and totally real Jenkins-Strebel differentials} \label{sec:JSdif} There is a construction which associates to any hemispherical surface $S$ a flat surface $S_{\flatsu}$. It replaces every hemispherical sector of angle $2a\pi$ by a semi-infinite cylinder of circumference~$a$. The lengths of the segments in the boundary are preserved (up to a factor $2\pi$).\newline Conversely, given a half-translation surface defined by a totally real Jenkins-Strebel differential, we replace each semi-infinite cylinder by a hemispherical sector.
This operation is the inverse of the previous one.\newline The group of absolute periods of the quadratic differential is the image of the monodromy group of the hemispherical surface in the rotation group along the preserved axis (up to a factor $2\pi$).\newline A totally real Jenkins-Strebel differential $q$ is the global square of a $1$-form if and only if every cylinder of the flat surface can be given a sign such that every boundary segment bounds a positive and a negative cylinder.\newline The latter condition implies in particular that singularities should be of even order (with an angle in $2\pi\mathbb{Z}$).\newline Translated into the language of spherical geometry, a hemispherical surface has a globally defined latitude if and only if its totally real Jenkins-Strebel differential is the global square of a $1$-form. This property is equivalent to co-axial monodromy.\newline \subsection{Some background on complex projective structures and quadratic differentials} \label{sec:backgroundquad} In this section, we recall some facts about complex projective structures and their relation with quadratic differentials. This gives interesting background on this work but will not be used in the remaining sections.\newline A {\em complex projective structure} is defined by charts with values in $\mathbb{CP}^{1}$ and transition maps in $\PSL(2,\mathbb{C})$. In particular, a spherical structure is a complex projective structure whose monodromy is conjugate to a subgroup of $\SO(3)$.\newline A complex projective structure has {\em dihedral monodromy} if its monodromy globally preserves a pair of points of $\mathbb{CP}^{1}$, see Section~6 in \cite{FG}. Up to conjugation, we can assume these preserved points are $\lbrace 0 , \infty \rbrace$.
Then the monodromy acts by functions of the form $z \mapsto az^{\pm 1}$ with $a \in \mathbb{C}^{\ast}$.\newline This directly implies the existence of a quadratic differential $q$ such that any developing map of the structure is of the form $e^{\int^{z} \sqrt{-q}}$. Besides, in order to avoid irregular singularities of the developing map, it is reasonable to restrict to quadratic differentials with at worst double poles (logarithmic singularities).\newline In this class of projective structures, the monodromy preserves a metric if and only if $a \in \Un(1)$ for every transition map. In other words, the periods of the quadratic differential should be real (see Section~1.1 of \cite{SCLX} for details). That is, the quadratic differential should be Jenkins-Strebel.\newline Then, the totally real Jenkins-Strebel quadratic differentials introduced in Definition~\ref{def:totrealJS} are those for which singularities belong either to the equator or to the poles of the latitude foliation. The whole correspondence is summarised in Table~\ref{table:cor}.\newline \begin{table}[h] \begin{tabular}{|c|c|} \hline Complex-analytic object & Geometric structure\\ \hline \hline Quadratic differential & Complex projective structure with dihedral monodromy \\ \hline Jenkins-Strebel quadratic differential & Cone spherical metric with dihedral monodromy\\ \hline Totally real Jenkins-Strebel differential & Hemispherical surface \\ \hline \end{tabular} \caption{Correspondence between analytic and geometric objects.} \label{table:cor} \end{table} \section{Differentials with prescribed singularities} \label{sec:difprescrites} In this section, we recall some results of \cite{GT,GT1} about the existence of differentials with prescribed local behaviour. Moreover, we recall an important operation on differentials called the {\em contraction flow}.
\subsection{Quadratic differentials} \label{sec:quaddif} By abuse of notation, we refer to a pair $(X,q)$, where $X$ is a Riemann surface of genus $g$ and $q$ a quadratic differential on $X$, as a quadratic differential. The moduli spaces of these objects are stratified according to the orders of the singularities of~$q$. In this paper, quadratic differentials have at worst double poles. For $a_{1},\dots,a_{n} \in 2\mathbb{N}^{\ast}$ and $b_{1},\dots,b_{m} \in 2\mathbb{N}-1$, the {\em stratum of quadratic differentials} with zeroes and simple poles of multiplicities $a_{1},\dots,a_{n},b_{1},\dots,b_{m}$ and $p$ double poles is denoted $\mathcal{Q}(a_{1},\dots,a_{n},b_{1},\dots,b_{m},-2^{p})$. See \cite{La} for details. \newline We also require that quadratic differentials are not global squares of $1$-forms. Such differentials are called \textit{primitive}. The Riemann-Roch theorem (or the Gauss-Bonnet theorem) implies that $\sum a_{i}+\sum b_{j}-2p=4g-4$.\newline For such quadratic differentials, classical complex analysis shows that the only local invariants up to biholomorphism are the order of the singularity and, in the case of a double pole, the quadratic residue, see \cite{St} for details. At a double pole, every quadratic differential is (up to a biholomorphic change of variables) $\left(\frac{rdz}{z}\right)^{2}$, where $r^{2}$ is the quadratic residue.\newline Theorems~1.1,~1.2,~1.3 and~1.9 of~\cite{GT1} provide a complete characterization of the configurations of quadratic residues that can be realized in any given stratum of quadratic differentials. We extract from these results the following two theorems.
\begin{thm}\label{thm:31} Let $\mathcal{Q}$ be a stratum $\mathcal{Q}(a_{1},\dots,a_{n},b_{1},\dots,b_{m},-2^{p})$ of primitive quadratic differentials on a surface of genus $g \geq 1$ such that $p \geq 1$.\newline Every configuration of quadratic residues $(r_{1},\dots,r_{p})\in(\mathbb{R}_{+}^{\ast})^{p}$ is realized by a quadratic differential of $\mathcal{Q}$, with the exception of the configurations $(r,\dots,r)$ for the strata $\mathcal{Q}(4s,-2^{2s})$ and $\mathcal{Q}(2s+1,2s-1,-2^{2s})$ in genus one.\newline \end{thm} In genus zero, it should be noted that a quadratic differential is primitive if and only if at least one of its singularities is of odd order. \begin{thm}\label{thm:32} Let $\mathcal{Q}$ be a stratum $\mathcal{Q}(a_{1},\dots,a_{n},b_{1},\dots,b_{m},-2^{p})$ of primitive quadratic differentials on the Riemann sphere with $m,p \geq 1$.\newline Every configuration of quadratic residues $(r_{1},\dots,r_{p})\in(\mathbb{R}_{+}^{\ast})^{p}$ is realized by a quadratic differential of $\mathcal{Q}$ with the following exceptions: \begin{itemize} \item $\mathcal{Q}(p-2,p-2,-2^{p})$ with $p$ odd and configurations of the form $(A^{2},B^{2},C^{2},\dots,C^{2})$ with $C=A+B$ or $B=A+C$ and $A,B,C>0$; \item $\mathcal{Q}(p-1,p-3,-2^{p})$ with $p$ even and configurations of the form $(A^{2},A^{2},B^{2},\dots,B^{2})$ with $A,B>0$; \item $\mathcal{Q}(a_{1},\dots,a_{n},b_{1},b_{2},-2^{p})$ and configurations of the form $(L \cdot f_{1}^{2},\dots,L \cdot f_{p}^{2})$ with $L>0$, $f_{1},\dots,f_{p} \in \mathbb{N}^{\ast}$, $\sum f_{j}$ even and $\sum f_{j} < 2p$; \item $\mathcal{Q}(a_{1},\dots,a_{n},b_{1},b_{2},-2^{p})$ and configurations of the form $(L \cdot f_{1}^{2},\dots,L \cdot f_{p}^{2})$ with $L>0$, $f_{1},\dots,f_{p} \in \mathbb{N}^{\ast}$, $\sum f_{j}$ odd and $\sum f_{j} \leq \max(b_{1},b_{2})$.\newline \end{itemize} \end{thm} \subsection{Abelian differentials} \label{sec:difabel} Some quadratic differentials are global squares of Abelian differentials
($1$-forms). Therefore, the square roots of the quadratic residues are globally defined (up to a common factor). The obstructions to realizing configurations of residues in a given stratum are stated differently.\newline We consider the {\em stratum} $\mathcal{H}(a_{1},\dots,a_{n},-1^{p})$ of pairs $(X,\omega)$ where $X$ is a compact Riemann surface of genus $g$ and $\omega$ is a meromorphic $1$-form with zeroes of orders $a_{1},\dots,a_{n}$ and $p$ simple poles. The Gauss-Bonnet theorem implies that $\sum a_{i}-p=2g-2$.\newline We consider configurations of real residues $(\lambda_{1},\dots,\lambda_{x},-\mu_{1},\dots,-\mu_{y})$ with $x+y=p$, whose sum is zero (as required by the Residue theorem) and such that $\lambda_{1},\dots,\lambda_{x},\mu_{1},\dots,\mu_{y}>0$.\newline Theorem~1.1 of \cite{GT} shows that in genus at least one, the only obstruction is the Residue theorem. \begin{thm}\label{thm:33} In any stratum $\mathcal{H}(a_{1},\dots,a_{n},-1^{p})$ of meromorphic $1$-forms on a surface of genus $g \geq 1$ with $p \geq 1$, every configuration of residues $(\lambda_{1},\dots,\lambda_{x},-\mu_{1},\dots,-\mu_{y})$ with $\sum \lambda_{i} = \sum \mu_{j}$ is realized by a differential in the stratum. \end{thm} In genus zero, there is an additional arithmetic obstruction, described in Theorem~1.2 of~\cite{GT} and Theorem~2 of~\cite{Er}.
\begin{thm}\label{thm:34} In any stratum $\mathcal{H}(a_{1},\dots,a_{n},-1^{p})$ of meromorphic $1$-forms on a surface of genus zero with $p \geq 1$, every configuration of residues $(\lambda_{1},\dots,\lambda_{x},-\mu_{1},\dots,-\mu_{y})$ with $\sum \lambda_{i} = \sum \mu_{j}$ is realized by a differential in the stratum, with the following exception.\newline If the configuration of residues is of the form $(L \cdot f_{1},\dots,L \cdot f_{x},-L \cdot g_{1},\dots,-L \cdot g_{y})$ with $L>0$ and $f_{1},\dots,f_{x},g_{1},\dots,g_{y}$ integers without nontrivial common factor, then it can be realized in the stratum if and only if $\sum f_{i}= \sum g_{j} > \max(a_{1},\dots,a_{n})$.\newline \end{thm} \subsection{Contraction flow} \label{sec:flotcontr} In Sections~\ref{sec:quaddif} and~\ref{sec:difabel}, we considered quadratic differentials defining flat surfaces of infinite area (because the cylinders around double poles are semi-infinite). There is an action of $\GL^{+}(2,\mathbb{R})$ on strata of quadratic differentials. This group acts by postcomposition in the charts and acts naturally on the periods, see \cite{Zo} for details.\newline The \textit{contraction flow} is a one-parameter flow in $\GL^{+}(2,\mathbb{R})$ that preserves one direction and exponentially contracts another.
For a quadratic differential defining a flat surface of infinite area, if the contraction flow contracts a generic direction (a direction containing no saddle connection), then the surface converges to a surface in the stratum where every (absolute and relative) period belongs to the preserved direction, see Section~5.4 of \cite{Ta}.\newline The infinite-area hypothesis is necessary because otherwise the area of the flat surface would shrink along the flow.\newline If we apply the contraction flow to a differential that realizes a real configuration of residues (as in Sections~\ref{sec:quaddif} and~\ref{sec:difabel}), a generic contraction flow preserving the horizontal direction will converge to a \textit{totally real Jenkins-Strebel differential}. \begin{cor} If a distribution of angles is realized by a cone spherical metric with dihedral monodromy, it is realized by a hemispherical surface with the same monodromy. \end{cor} \begin{rem} The loxodromic projection defined in Section~\ref{sec:loxodromic} is the spherical counterpart of the contraction flow.\newline \end{rem} \section{Strengthened Gauss-Bonnet inequality} \label{sec:GBplus} Given a genus $g$ and a distribution of angles, we would like to know whether the distribution is realized by a cone spherical metric with dihedral monodromy on a surface of genus $g$. Conical singularities with integer or half-integer angles play a special role.\newline We distinguish between: \begin{itemize} \item \textit{even} conical singularities, whose angle is in $2\pi\mathbb{Z}$; \item \textit{odd} conical singularities, whose angle is in $\pi(2\mathbb{Z}+1)$; \item \textit{non-integer} conical singularities, whose angle is not in $\pi\mathbb{Z}$. \end{itemize} For a cone spherical metric with $n$ conical singularities, we have $n=n_{E}+n_{O}+n_{N}$.
These three terms are respectively the numbers of even, odd and non-integer conical singularities.\newline \begin{defn}\label{def:41} We consider distributions of angles $2\pi(a_{1},\dots,a_{n_{E}},b_{1},\dots,b_{n_{O}},c_{1},\dots,c_{n_{N}})$ where the three subfamilies are respectively even ($a_{i}\in\mathbb{Z}$), odd ($b_{i}\in\mathbb{Z}+\frac{1}{2}$) and non-integer angles ($c_{i}\notin\frac{1}{2}\mathbb{Z}$). We define: \begin{itemize} \item the \textit{total sum} $\sigma=\sum a_{i} + \sum b_{j} + \sum c_{k}$; \item the \textit{maximal integral sum} $T$ as the sum of $\sum a_{i}$ and the $2 \lfloor \frac{n_{O}}{2} \rfloor$ biggest numbers among the $b_{j}$ (in particular, $T \in \mathbb{Z}$). \end{itemize} \end{defn} Obviously, we have $\sigma \geq T$. If there is an even number of odd singularities, then $T=\sum a_{i} + \sum b_{j}$.\newline Just as a Gauss-Bonnet inequality is necessary for the realization of a distribution of angles by a cone spherical metric, a necessary condition for realization by a spherical metric with dihedral monodromy is that a Gauss-Bonnet inequality should be fulfilled by the integer singularities alone. \begin{prop}\label{prop:GBplus} Let $2\pi(a_{1},\dots,a_{n_{E}},b_{1},\dots,b_{n_{O}},c_{1},\dots,c_{n_{N}})$ be a distribution of angles. If this distribution of angles is realized by a cone spherical metric with dihedral monodromy on a surface of genus $g$, then it should satisfy the \textit{strengthened Gauss-Bonnet inequalities}: \begin{itemize} \item $T \geq 2g+n-1$ if $n_{O}$ is even and $n_{N}=0$; \item $T \geq 2g+n-2$ otherwise. \end{itemize} \end{prop} \begin{proof} Following Section~\ref{sec:hemispherical}, if such a distribution of angles is realized by a cone spherical metric with dihedral monodromy, then it can be realized by a hemispherical surface $S$.
In this hemispherical surface, the singularities with non-integer angles are automatically at the poles of the latitude foliation.\newline Among the even and odd singularities, some belong to the equatorial net while some others are at the poles. If the monodromy is in fact co-axial, then every odd singularity is also at a pole. We consider the totally real Jenkins-Strebel differential $q$ corresponding to the hemispherical surface $S$.\newline The quadratic differential $q$ belongs to a stratum $\mathcal{Q}(d_{1},\dots,d_{s},-2^{t})$ with conical singularities of orders $d_{1},\dots,d_{s}$ and $t$ double poles. We automatically have $d_{1}+\dots+d_{s}=4g-4+2t$.\newline Since an even or an odd singularity is either a conical singularity of $q$ or a double pole, we have $t \geq n_{N}+(n_{E}+n_{O}-s)$. Consequently, $s+t \geq n$ and $d_{1}+\dots+d_{s}+2s=4g-4+2t+2s \geq 4g+2n-4$.\newline Since every other singularity is a double pole, $d_{1}+\dots+d_{s}$ is even, so the number of odd orders among the $d_{i}$ is even. The conical singularities of $q$ have integer angles $(d_{i}+2)\pi$, hence $T \geq \sum_{i} \frac{d_{i}+2}{2} = \frac{1}{2}(d_{1}+\dots+d_{s}+2s)$. This implies $T \geq 2g+n-2$.\newline If $n_{O}$ is even, $n_{N}=0$ and $T=2g+n-2$, then $\sigma=T$ and the classical Gauss-Bonnet inequality requires $\sigma>2g+n-2$. \end{proof} \section{Spherical structures in higher genus} \label{sec:highergenus} We apply the results about quadratic and Abelian differentials with prescribed residues in order to construct hemispherical surfaces realizing the adequate distributions of angles.\newline Using the equivalence stated in Section~\ref{sec:loxodromic}, we construct hemispherical surfaces corresponding to totally real Jenkins-Strebel differentials. It should be noted that in genus at least one, in strata of quadratic and Abelian differentials, there are several degrees of freedom in addition to the residues. In other words, if there is a differential that realizes some configuration of real residues, then there is another differential that realizes it and such that the group of periods of the differential is dense in $\mathbb{R}$.
For such a totally real Jenkins-Strebel differential, the monodromy of the corresponding hemispherical surface is dense in the subgroup of $\SO(3)$ preserving a pair of antipodal points (pointwise in the co-axial case and globally in the dihedral case).\newline \subsection{Strict dihedral case} \label{sec:stricdihedral} Specific obstructions in genus one for quadratic differentials with prescribed residues (Theorem~\ref{thm:31}) lead to four exceptional families of distributions of angles that cannot appear in spherical surfaces with strict dihedral monodromy. \begin{prop}\label{prop:51} Four exceptional families of distributions of angles cannot be realized by a cone spherical metric with strict dihedral monodromy on a torus: \begin{itemize} \item $(4k+2)\pi,c,\dots,c$ with $2k$ equal non-integer angles $c \notin \pi\mathbb{Z}$; \item $(2k+3)\pi,(2k+1)\pi,c,\dots,c$ with $2k$ equal non-integer angles $c \notin \pi\mathbb{Z}$; \item $(4k+2)\pi$ for any integer $k$; \item $(2k+3)\pi,(2k+1)\pi$ for any integer $k$. \end{itemize} \end{prop} \begin{proof} According to Proposition~\ref{prop:isodef}, if such a distribution of angles is realized by a cone spherical metric with dihedral monodromy, then it can be realized by a hemispherical surface. In this hemispherical surface, the singularities with non-integer angles are automatically at the poles of the latitude foliation.\newline Following Section~\ref{sec:JSdif}, a hemispherical surface corresponds to a totally real Jenkins-Strebel differential. Zeroes and simple poles of the quadratic differential can only be the conical singularities with integer angle. Besides, since the other singularities are double poles, the sum of their orders should be even. Consequently, these (primitive) quadratic differentials are in $\mathcal{Q}(4k,-2^{2k})$ or $\mathcal{Q}(2k+1,2k-1,-2^{2k})$. In both cases, Theorem~\ref{thm:31} implies that in these strata, a configuration of equal residues cannot be realized.
If the distribution of angles contains $2k$ equal non-integer angles $c$, the configuration $c,\dots,c$ cannot be realized. If there is no non-integer angle, then the $2k$ double poles of the quadratic differentials correspond to regular points of the spherical metric and their quadratic residue is $1$. In this case, it cannot be realized either. \end{proof} Outside these four exceptional families, the only condition that has to be satisfied is the strengthened Gauss-Bonnet inequality stated in Proposition~\ref{prop:GBplus}. \begin{thm}\label{thm:52} Let $2\pi(a_{1},\dots,a_{n_{E}},b_{1},\dots,b_{n_{O}},c_{1},\dots,c_{n_{N}})$ be a distribution of angles as in Section~\ref{sec:GBplus}. Outside the obstructions in genus $1$ described in Proposition~\ref{prop:51}, there exists a cone spherical metric with strict dihedral monodromy on a surface of genus $g \geq 1$ with $n$ conical singularities of prescribed angles if and only if the strengthened Gauss-Bonnet inequality is satisfied. \end{thm} \begin{proof} We assume the distribution of angles satisfies the strengthened Gauss-Bonnet inequality (Proposition~\ref{prop:GBplus}) and is not affected by the obstructions of Proposition~\ref{prop:51}. We are going to prove the existence of a cone spherical metric with strict dihedral monodromy in the category of hemispherical surfaces.\newline Since the strengthened Gauss-Bonnet inequality is satisfied, we can find in the distribution of angles a subset of $s$ even or odd singularities of angles $\pi(2+d_{i})$ such that $d_{1}+\dots+d_{s}$ is an even number whose value is at least $4g-4+2(n-s)$ (or strictly bigger if $n_{O}$ is even and $n_{N}=0$).
Then, we work with a nonempty stratum $\mathcal{Q}(d_{1},\dots,d_{s},-2^{t})$ of primitive quadratic differentials on Riemann surfaces of genus $g$.\newline If $g \geq 2$, any configuration of real positive quadratic residues is realized in the stratum (Theorem~\ref{thm:31}), and it is also realized by a totally real Jenkins-Strebel differential (see Section~\ref{sec:flotcontr}). Therefore, we can construct the analogous hemispherical surface (see Section~\ref{sec:loxodromic}).\newline If $g=1$, unless the maximal integral sum (see Definition~\ref{def:41}) is realized by angles $(4k+2)\pi$ or $(2k+3)\pi,(2k+1)\pi$, the same construction can be carried out (indeed, in Theorem~\ref{thm:31}, the only strata with obstructions are $\mathcal{Q}(4k,-2^{2k})$ and $\mathcal{Q}(2k+1,2k-1,-2^{2k})$). In the remaining cases, the only configurations of residues that cannot be realized are formed by an even number of identical residues. Consequently, the remaining conical singularities of the spherical structure that correspond to double poles of the quadratic differentials should have equal angles. This rules out the possibility of odd singularities among them. There are thus two possibilities. Either every double pole has a quadratic residue equal to $1$ (because it corresponds to a regular point of the spherical metric) or they all correspond to non-integer singularities. These cases are covered by Proposition~\ref{prop:51}. \end{proof} \subsection{Co-axial case} \label{sec:coaxial} In genus at least one, there is no obstruction to the existence of Abelian differentials with prescribed singularities (apart from the residue theorem). We consider distributions of angles $\pi(a_{1},\dots,a_{n},c_{1},\dots,c_{p})$ with $a_{1},\dots,a_{n} \in 2\mathbb{N}+2$ and $c_{1},\dots,c_{p} \notin 2\mathbb{N}+2$.\newline The following result is analogous to Theorem~1 of \cite{Er} concerning spherical metrics with co-axial monodromy on punctured spheres.
\begin{thm}\label{thm:53} Let $2\pi(a_{1},\dots,a_{n},c_{1},\dots,c_{p})$ be a distribution of angles. There exists a spherical metric with co-axial monodromy on a surface of genus $g \geq 1$ with $n+p$ conical singularities of prescribed angles if and only if: \begin{itemize} \item there is a sequence of signs $\epsilon_{1},\dots,\epsilon_{p}$ such that $K=\sum_{j=1}^{p} \epsilon_{j}c_{j}\in \mathbb{N}$; \item $\sum a_{i} -2g+2-n-p-K$ is nonnegative and even if $p \geq 1$; \item $\sum a_{i} -2g+2-n$ is positive and even if $p=0$. \end{itemize} \end{thm} \begin{proof} A distribution of angles is realized by a metric with co-axial monodromy if and only if it is realized by some hemispherical surface (see Proposition~\ref{prop:isodef}). The totally real Jenkins-Strebel differential corresponding to such a hemispherical surface is the square of an Abelian differential (see Sections~\ref{sec:quaddifhalf} and~\ref{sec:JSdif}).\newline Therefore, there is a subset of $(a_{1},\dots,a_{n})$ corresponding to singularities that will belong to the equatorial net (and will be zeroes of the differential). Without loss of generality, we assume they are the $s$ first singularities of the list. Then, the simple poles of the differential will be the $n-s$ other integer singularities, the $p$ non-integer singularities and some additional regular points (simple poles with residues $\pm 1$). Each of them should be given a sign in such a way that they sum to zero (and satisfy the residue theorem). Since there is no obstruction in genus at least one (Theorem~\ref{thm:33}), we only have to check that the stratum is nonempty.\newline If a distribution of angles is realized in such a way that an integer singularity of angle $2a\pi$ counts as a simple pole of the Abelian differential (of residue $\pm a$), then it can also be realized in such a way that this singularity counts as a zero of the form.
Indeed, starting from a configuration of residues summing to zero realized in a stratum $\mathcal{H}$, we consider the stratum $\mathcal{H}'$ with $a-1$ additional simple poles and an additional zero of order $a-1$. We replace the simple pole of residue $a$ by $a$ simple poles of residue $1$ (up to a change of sign). Consequently, we just have to consider hemispherical surfaces where every integer singularity counts as a zero of the Abelian differential.\newline We need to find an Abelian differential in the stratum $\mathcal{H}(a_{1}-1,\dots,a_{n}-1,-1^{t})$ where $t=\sum a_{i} -n+2-2g$. Besides, simple poles are of residues $\pm 1$ or $\pm c_{i}$. Since the total residue is zero, there is a signed sum of $(c_{1},\dots,c_{p})$ which is an integer $K$. If $p=0$, we just need to have an even number of simple poles (one half having residue $1$ while the other half has residue $-1$). The condition amounts to $t=\sum a_{i} -n+2-2g$ being even and positive.\newline If $p \geq 1$, then $t-p$ should be nonnegative and its value should be at least $K$ with the same parity (if it is bigger, it should be bigger by an even number in order to keep a total residue equal to zero with residues compensating each other). Following Theorem~\ref{thm:33}, there are no obstructions to realizing differentials with prescribed residues in genus at least one, so the conditions described above are necessary and sufficient. \end{proof} \subsection{Comparison} \label{sec:comparison} In genus at least one, almost every distribution of angles that is realized with co-axial monodromy can also be realized with strict dihedral monodromy. The exceptions are two of the four families of Proposition~\ref{prop:51}. \begin{prop}\label{prop:54} Let $2\pi(a_{1},\dots,a_{n},c_{1},\dots,c_{p})$ be a distribution of angles.
If it is realized by a spherical metric with co-axial monodromy on a surface of genus $g$, then it can also be realized by a metric with strict dihedral monodromy on a surface of genus $g$, with the following two families of exceptions in genus one (parametrized by $k \in \mathbb{N}^{\ast}$): \begin{itemize} \item $(4k+2)\pi,c,\dots,c$ with $2k$ equal angles $c \notin \pi\mathbb{Z}$; \item $(4k+2)\pi$. \end{itemize} These present obstructions to dihedral monodromy (Proposition~\ref{prop:51}) but clearly satisfy the hypotheses of Theorem~\ref{thm:53}. \end{prop} \begin{proof} Since the distribution is realized by a metric with co-axial monodromy, we have $\sum a_{i} \geq 2g-2+n+p$. If $g \geq 2$ and $p \geq 1$, this implies the existence of a metric with strict dihedral monodromy (Theorem~\ref{thm:52}).\newline If $g \geq 2$ but $p=0$, classical Gauss-Bonnet implies $\sum a_{i} > 2g-2+n$ and thus $\sum a_{i} \geq 2g+n-1$. This also implies the existence of the adequate metric.\newline Then we consider the genus one case. If $p=0$, then we have $\sum a_{i} > n$. Theorem~\ref{thm:52} then implies the existence of a metric with strict dihedral monodromy except in the four exceptional cases of Proposition~\ref{prop:51}. Among them, the only family for which there is an obstruction is the one with a single conical singularity whose angle is of the form $(4k+2)\pi$.\newline Then, we assume $g=1$ and $p \geq 1$. We have $\sum a_{i} \geq 2g-2+n+p$. Therefore, the strengthened Gauss-Bonnet inequality is satisfied and Theorem~\ref{thm:52} implies the existence of a metric with strict dihedral monodromy except in the cases of Proposition~\ref{prop:51}. Since there is at least one even singularity and $p \geq 1$, we just have to avoid the case where the distribution of angles is $(4k+2)\pi,c,\dots,c$ with $2k$ non-integer equal angles $c \notin \pi\mathbb{Z}$.
\end{proof} \section{Spherical structures in genus zero} \label{sec:genuszero} In punctured spheres, the monodromy of cone spherical metrics is generated by rotations around singularities. In particular, if every singularity is even (its angle belongs to $2\pi\mathbb{Z}$), then the monodromy of the metric is trivial and the geometric structure is just a cover of the sphere ramified at these singularities. This case has already been classified, see Section~7 in \cite{Er1}.\newline In the following, we assume at least one conical singularity has a non-integer angle. If there is only one such singularity, then the monodromy is cyclic and thus automatically co-axial. In the next subsection, we will see that in fact, this implies the existence of a second conical singularity with a non-integer angle.\newline \subsection{Co-axial case} \label{sec:coaxialgzero} In this section, we consider geometric structures whose monodromy group is contained in the group of rotations around an axis. In particular, it could be a finite rotation group (for example, if the angles of non-integer singularities belong to $\pi\mathbb{Q}$). Theorem~1 in \cite{Er} gave a complete classification of the distributions of angles that are realized with co-axial monodromy. \begin{thm}\label{thm:61} Let $2\pi(a_{1},\dots,a_{n},c_{1},\dots,c_{p})$ be a distribution of angles with $p \geq 1$. There exists a spherical metric with co-axial monodromy on a punctured sphere with $n+p$ conical singularities of prescribed angles if and only if: \begin{itemize} \item there is a sequence of signs $\epsilon_{1},\dots,\epsilon_{p}$ such that $K=\sum_{j=1}^{p} \epsilon_{j}c_{j}\in \mathbb{N}$; \item $M=\sum a_{i} +2-n-p-K$ is nonnegative and even; \item if $c_{1},\dots,c_{p}$ are commensurable, an additional arithmetic condition should hold. \end{itemize} Let $v$ be the vector $(c_{1},\dots,c_{p},1,\dots,1)$ with $M+K$ elements equal to $1$.
If $v$ is of the form $L(b_{1},\dots,b_{p+M+K})$ with $L>0$ and $b_{1},\dots,b_{p+M+K}$ integers, then the additional condition is: $$2\max(a_{1},\dots,a_{n}) \leq \sum b_{i}.$$ \end{thm} The condition about signed sums of non-integer angles means in particular that there should be at least two non-integer singularities.\newline A proof of Theorem~\ref{thm:61} analogous to that of Theorem~\ref{thm:53} could be given using Theorem~\ref{thm:34}.\newline \subsection{Strict dihedral case} \label{sec:stricdihedralgzero} In the problem of realization of distributions of angles by a metric with strict dihedral monodromy, the nature of obstructions depends crucially on the number of odd singularities. We first give a restriction on the number of odd singularities. \begin{lem}\label{lem:62} Let $2\pi(a_{1},\dots,a_{n_{E}},b_{1},\dots,b_{n_{O}},c_{1},\dots,c_{n_{N}})$ be a distribution of angles as in Section~\ref{sec:GBplus}. If it is realized by a cone spherical metric with strict dihedral monodromy on the punctured sphere, then $n_{O} \geq 2$. \end{lem} \begin{proof} If the distribution is realized, then it can be realized by a hemispherical surface (Proposition~\ref{prop:isodef}). The latter corresponds to a primitive quadratic differential (Section~\ref{sec:quaddifhalf}). In genus zero, primitive quadratic differentials have singularities of odd order (otherwise they are global squares of $1$-forms). Besides, they have an even number of singularities of odd order (since the total order is $-4$). Consequently, there are at least two odd singularities belonging to the equatorial net of the hemispherical surface. \end{proof} We now treat the cases according to the number of odd singularities. We begin with the case with at least~$4$ odd singularities, then we treat the case with~$3$ odd singularities and finally the case with~$2$.
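The order count used in the proof of Lemma~\ref{lem:62} can be recorded explicitly: for a quadratic differential in a stratum $\mathcal{Q}(d_{1},\dots,d_{s},-2^{t})$ on the Riemann sphere,
\[
\sum_{i=1}^{s} d_{i}-2t=4g-4=-4\, ,
\]
so the number of odd orders among $d_{1},\dots,d_{s}$ is even; primitivity forces it to be nonzero, hence at least two.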
\subsubsection{At least four odd singularities} \label{sec:quatresing} In the case $n_{O} \geq 4$, there is no additional obstruction beyond the strengthened Gauss-Bonnet inequality. \begin{thm}\label{thm:63} Let $2\pi(a_{1},\dots,a_{n_{E}},b_{1},\dots,b_{n_{O}},c_{1},\dots,c_{n_{N}})$ be a distribution of angles as in Section~\ref{sec:GBplus}. If $n_{O} \geq 4$, there exists a cone spherical metric with strict dihedral monodromy on a punctured sphere with $n$ conical singularities of prescribed angles if and only if the strengthened Gauss-Bonnet inequality is satisfied. Besides, the metric can be chosen in such a way that its monodromy group is infinite. \end{thm} \begin{proof} We assume the distribution of angles satisfies the strengthened Gauss-Bonnet inequality (Proposition~\ref{prop:GBplus}). We prove the existence of a cone spherical metric with strict dihedral monodromy in the category of hemispherical surfaces.\newline Since the strengthened Gauss-Bonnet inequality is satisfied, we can find in the distribution of angles a subset of $s$ even or odd singularities of angles $\pi(2+d_{i})$ such that $d_{1}+\dots+d_{s}$ is an even number whose value is at least $4g-4+2(n-s)$ (or strictly bigger if $n_{O}$ is even and $n_{N}=0$). Besides, we assume there are at least four odd singularities among the $s$ chosen singularities. Then, we consider a nonempty stratum $\mathcal{Q}(d_{1},\dots,d_{s},-2^{t})$ of primitive quadratic differentials on the Riemann sphere. There are at least four singularities of odd order in quadratic differentials of these strata. Therefore, any configuration of real positive quadratic residues is realized in the stratum (Theorem~\ref{thm:32}), and it is also realized by a totally real Jenkins-Strebel differential (see Section~\ref{sec:flotcontr}).
Therefore, we can construct the analogous hemispherical surface (see Section~\ref{sec:loxodromic}).\newline Finally, we have to prove that we can realize the hemispherical surface in such a way that the projection of the monodromy group on the rotation group around the preserved axis has dense image. For a quadratic differential on the Riemann sphere with at least four odd singularities, the canonical double cover ramified at the odd singularities is of genus at least one (Riemann-Hurwitz formula). Therefore, there are other degrees of freedom than quadratic residues. Up to a small perturbation, we can assume the group of absolute periods of the Abelian differential in the cover is dense in $\mathbb{R}$. \end{proof} \subsubsection{Three odd singularities} \label{sec:troisimp} If $n_{O}=3$, there are specific obstructions that have to be handled separately. \begin{prop}\label{prop:64} Let $k \in \mathbb{N}$, $l \geq k$ and $\alpha,\beta \notin \pi\mathbb{Z}$. If $$\alpha+\beta=(2l+1)\pi \text{ or } \alpha+(2l+1)\pi=\beta \text{ or }\beta+(2l+1)\pi=\alpha\, ,$$ then the distribution of angles $( (2k+1)\pi,(2k+1)\pi,(2l+1)\pi,\alpha,\dots,\alpha,\beta)$ with $2k-1$ angles equal to $\alpha$ cannot be realized by a cone spherical metric with strict dihedral monodromy on a punctured sphere. \end{prop} \begin{proof} If such a distribution is realized by a cone spherical metric, then it is also realized by a hemispherical surface. The latter should have at least two half-integer singularities on its equatorial net. This implies the existence of a quadratic differential in $\mathcal{Q}(2k-1,2k-1,-2^{2k+1})$ whose quadratic residues are $\left(l+\frac{1}{2}\right)^{2},\left(\frac{\beta}{2\pi}\right)^{2}$ and $2k-1$ quadratic residues equal to $\left(\frac{\alpha}{2\pi}\right)^{2}$. This configuration is forbidden in Theorem~\ref{thm:32}.
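As a consistency check, the orders in the stratum above satisfy the genus-zero degree formula for quadratic differentials:
\[
(2k-1)+(2k-1)-2(2k+1)=-4\, .
\]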
\end{proof} In addition to the strengthened Gauss-Bonnet inequality stated in Proposition~\ref{prop:GBplus} and the obstruction of Proposition~\ref{prop:64}, an arithmetic condition should be satisfied if every conical singularity has a rational angle. \begin{thm}\label{thm:65} Let $2\pi(a_{1},\dots,a_{n_{E}},b_{1},b_{2},b_{3},c_{1},\dots,c_{n_{N}})$ be a distribution of angles. We assume $b_{1} \geq b_{2} \geq b_{3}$. Outside the obstructions described in Proposition~\ref{prop:64}, there exists a cone spherical metric with strict dihedral monodromy on a punctured sphere with $n$ conical singularities of prescribed angles if and only if: \begin{itemize} \item the strengthened Gauss-Bonnet inequality $T=\sum a_{i} +b_{1}+b_{2} \geq n-2$ holds; \item an additional arithmetic condition described below is satisfied when $c_{1},\dots,c_{n_{N}}\in \pi\mathbb{Q}$. \end{itemize} If the vector $(c_{1},\dots,c_{n_{N}},b_{3},1,\dots,1)$ with $T+2-n$ elements equal to $1$ is of the form $L(r_{1},\dots,r_{T-n_{E}})$ with $L>0$ and $r_{1},\dots,r_{T-n_{E}}$ integers, then: \begin{itemize} \item $\sum r_{i} \geq b_{1}+b_{2}$, if $\sum r_{i}$ is even; \item $\sum r_{i} \geq b_{1}$, if $\sum r_{i}$ is odd. \end{itemize} \end{thm} \begin{proof} We assume the distribution of angles satisfies the strengthened Gauss-Bonnet inequality (Proposition~\ref{prop:GBplus}) and is not affected by the obstructions of Proposition~\ref{prop:64}. We are going to prove the existence of a cone spherical metric with strict dihedral monodromy in the category of hemispherical surfaces.\newline Since the strengthened Gauss-Bonnet inequality is satisfied, we can find in the distribution of angles a subset of $s$ even or odd singularities of angles $\pi(2+d_{i})$ such that $d_{1}+\dots+d_{s}$ is an even number whose value is at least $4g-4+2(n-s)$. Among them there should be two of the three odd singularities.
Then, we consider a nonempty stratum $\mathcal{Q}(d_{1},\dots,d_{s},-2^{t})$ of primitive quadratic differentials on the Riemann sphere.\newline Then, we have to check if the configuration of residues determined by the remaining singularities and the additional double poles is realized in the stratum. If one of the four obstructions of Theorem~\ref{thm:32} holds and one even singularity of angle $2a_{i}\pi$ is not among the $s$ chosen singularities, then we can simply take the stratum $\mathcal{Q}(2a_{i}-2,d_{1},\dots,d_{s},-2^{t+a_{i}-1})$. The first two obstructions do not hold when there is a zero of even order in differentials of the stratum. Concerning the two other obstructions, we add an integer residue to a configuration in which there is a residue which is the square of an element of $\mathbb{N}+\frac{1}{2}$. Therefore, the sum of integer weights increases by at least two. If the initial configuration is not realized in its stratum, it cannot be worse with the new one. Consequently, we assume every even singularity is among the $s$ chosen singularities.\newline Similarly, we prove that we could have chosen $b_{1}$ and $b_{2}$ for the two odd singularities that will correspond to conical singularities of the flat metric induced by the quadratic differential. For the two last obstructions, replacing an odd singularity by one with a bigger order, we add at least one double pole with quadratic residue equal to $1$, increasing the sum of integer weights by at least two. The first two obstructions appear only if $n_{E}=0$. The second one involves a configuration of quadratic residues of the form $(A^{2},A^{2},B^{2},\dots,B^{2})$ with $A,B>0$ and an even number of $B^{2}$. Here, the third odd singularity corresponds to a double pole with a quadratic residue equal to $b_{j}^{2}$. There is no double pole with the same quadratic residue; otherwise, there would be a fourth odd singularity in the angle distribution.
Therefore, the second obstruction is not relevant when $n_{O}=3$. The first obstruction of Theorem~\ref{thm:32} is about configurations of quadratic residues of the form $(A^{2},B^{2},C^{2},\dots,C^{2})$ with $C=A+B$ or $B=A+C$ and $A,B,C>0$ in strata $\mathcal{Q}(p-2,p-2,-2^{p})$ with $p$ odd. If among them, one quadratic residue is equal to $1$ while another is equal to $b_{j}^{2}$, then the third value is the square of an element of $\mathbb{N}+\frac{1}{2}$. This would imply the existence of a fourth odd singularity. Therefore, the obstruction only impacts the case where there is no additional double pole. Consequently, we cannot reach a configuration forbidden by this obstruction by replacing an odd singularity by one with a bigger order.\newline From what precedes, it appears that if a distribution of angles is realized by a quadratic differential, then the singularities chosen to be in the equatorial net of the hemispherical surface (or equivalently the conical singularities of the flat metric) are the biggest possible. They realize the maximal integral sum.\newline Thus, we have to consider the stratum $\mathcal{Q}(2a_{1}-2,\dots,2a_{n_{E}}-2,2b_{1}-2,2b_{2}-2,-2^{t})$. We have $2t-4=2\sum a_{i} -2n_{E} +2b_{1}+2b_{2}-4$. Consequently, $t=b_{1}+b_{2}+\sum a_{i} -n_{E}$. Among these double poles, one corresponds to $b_{3}$ and there are $n_{N}$ non-integer singularities. There are $K=t-1-n_{N}=b_{1}+b_{2}+\sum a_{i} -n_{E}-n_{N}-1=T+2-n$ remaining double poles. The distribution of angles is realized by a hemispherical surface if and only if the configuration of quadratic residues formed by the squares of the numbers $c_{1},\dots,c_{n_{N}},b_{3},1,\dots,1$ with $K$ elements equal to $1$ is realized in the stratum.\newline The first obstruction of Theorem~\ref{thm:32} is already settled by eliminating the distributions of angles described in Proposition~\ref{prop:64}. Since $b_{3}$ is not equal to any other number of the list, the second obstruction of Theorem~\ref{thm:32} is not relevant.
The third and fourth obstructions are already encompassed in the arithmetic condition.\newline Finally, it should be noted that the obtained hemispherical surface has strict dihedral monodromy. Indeed, there are odd singularities on the equatorial net as well as at the poles of the latitude foliation. Therefore, the monodromy is not co-axial. \end{proof} \begin{rem} In distributions of angles that are forbidden by the additional arithmetic condition, there should always be two non-integer singularities with equal angles (at least two elements among $r_{1},\dots,r_{n_{N}+K+1}$ should be equal to one). They have a smaller angle than any other singularity. \end{rem} \begin{ex} The distribution of angles $3\pi,3\pi,3\pi,\frac{3\pi}{2},\frac{3\pi}{2}$ cannot be realized by a metric with dihedral monodromy because of the additional arithmetic obstruction.\newline \end{ex} \subsubsection{Two odd singularities} \label{sec:deuxodd} If $n_{O}=2$, then it has already been proved in Section~4 of~\cite{EGT} that in the absence of non-integer singularities, the monodromy is co-axial. Therefore we will assume $n_{N} \geq 1$.\newline As previously, some specific obstructions have to be handled separately. They correspond to the first two obstructions of Theorem~\ref{thm:32}. \begin{prop}\label{prop:68} For $k \in \mathbb{N}$, $\alpha,\beta \notin \pi\mathbb{Z}$, the following distributions of angles are not realized by a cone spherical metric with strict dihedral monodromy on a punctured sphere: \begin{itemize} \item $((2k+3)\pi,(2k+1)\pi,\alpha,\dots,\alpha,\beta,\beta)$ with $2k$ angles equal to $\alpha$; \item $((2k+3)\pi,(2k+1)\pi,\alpha,\dots,\alpha)$ with $2k$ angles equal to $\alpha$; \item $((2k+3)\pi,(2k+1)\pi,\alpha,\alpha)$. \end{itemize} \end{prop} \begin{proof} If such a distribution is realized by a cone spherical metric, then it is also realized by a hemispherical surface. The latter should have at least two half-integer singularities on its equatorial net.
This implies the existence of a quadratic differential in $\mathcal{Q}(2k+1,2k-1,-2^{2k+2})$ where the zeroes are the two odd singularities of the spherical metric while the double poles correspond either to non-integer singularities or regular points. Therefore, quadratic residues can be equal to $\left(\frac{\alpha}{2\pi}\right)^{2}$ or $\left(\frac{\beta}{2\pi}\right)^{2}$ (if the double pole corresponds to a non-integer singularity) or equal to one (if the double pole corresponds to a regular point of the spherical metric).\newline In any case, the second obstruction of Theorem~\ref{thm:32} forbids the existence of a quadratic differential with such a configuration of quadratic residues. \end{proof} \begin{prop}\label{prop:69} For $k \in \mathbb{N}$, $\alpha,\beta,\gamma \notin \pi\mathbb{Z}$, the following distributions of angles are not realized by a cone spherical metric with strict dihedral monodromy on a punctured sphere: \begin{itemize} \item $((2k+3)\pi,(2k+3)\pi,\alpha,\dots,\alpha,\beta,\gamma)$ with $2k+1$ angles equal to $\alpha$ and $\alpha=\beta+\gamma$; \item $((2k+3)\pi,(2k+3)\pi,\alpha,\dots,\alpha,\beta,\alpha+\beta)$ with $2k+1$ angles equal to $\alpha$; \item $((2k+3)\pi,(2k+3)\pi,\alpha,\dots,\alpha,\beta)$ with $2k+1$ angles equal to $\alpha$, $\alpha+\beta=2\pi$, $\beta=\alpha+2\pi$ or $\alpha=\beta+2\pi$; \item $((2k+3)\pi,(2k+3)\pi,\alpha,\alpha+2\pi)$; \item $((2k+3)\pi,(2k+3)\pi,\alpha,\beta)$ with $\alpha+\beta=2\pi$. \end{itemize} \end{prop} \begin{proof} We proceed in the same way as in the proof of Proposition~\ref{prop:68}, except that in this case we refer to the first obstruction of Theorem~\ref{thm:32}. \end{proof} In addition to the strengthened Gauss-Bonnet inequality (see Proposition~\ref{prop:GBplus}) and the obstructions of Propositions~\ref{prop:68} and~\ref{prop:69}, an arithmetic condition should be satisfied if non-integer singularities have commensurable angles.
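To illustrate the proof of Proposition~\ref{prop:68}, consider the third family with $k=1$, that is, the distribution of angles $(5\pi,3\pi,\alpha,\alpha)$. A hemispherical surface realizing it would correspond to a primitive quadratic differential with zeroes of orders $3$ and $1$ and four double poles (so that the total order is $3+1-2\cdot 4=-4$): two double poles for the two non-integer singularities and two for regular points. The configuration of quadratic residues is then
\[
\left(\left(\tfrac{\alpha}{2\pi}\right)^{2},\left(\tfrac{\alpha}{2\pi}\right)^{2},1,1\right),
\]
which is of the form $(A^{2},A^{2},B^{2},\dots,B^{2})$ with an even number of $B^{2}$, excluded by the second obstruction of Theorem~\ref{thm:32}.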
\begin{thm}\label{thm:610} Let $2\pi(a_{1},\dots,a_{n_{E}},b_{1},b_{2},c_{1},\dots,c_{n_{N}})$ be a distribution of angles with $n_{N} \geq 1$. We assume $b_{1} \geq b_{2}$. Outside the obstructions described in Propositions~\ref{prop:68} and~\ref{prop:69}, there exists a cone spherical metric with strict dihedral monodromy on a punctured sphere with $n$ conical singularities of prescribed angles if and only if the following two conditions hold: \begin{itemize} \item the strengthened Gauss-Bonnet inequality $T=\sum a_{i} +b_{1}+b_{2} \geq n-2$ holds; \item an additional arithmetic condition described below is satisfied when $c_{1},\dots,c_{n_{N}}$ are commensurable. \end{itemize} Consider the vector $v=(c_{1},\dots,c_{n_{N}},1,\dots,1)$ with $T+2-n$ elements equal to $1$. If $v$ is of the form $L(r_{1},\dots,r_{T-n_{E}})$ with $L>0$ and $r_{1},\dots,r_{T-n_{E}}$ integers, then: \begin{itemize} \item $\sum r_{i} \geq b_{1}+b_{2}$, if $\sum r_{i}$ is even; \item $\sum r_{i} \geq b_{1}$, if $\sum r_{i}$ is odd. \end{itemize} \end{thm} \begin{proof} We assume the distribution of angles satisfies the strengthened Gauss-Bonnet inequality (Proposition~\ref{prop:GBplus}) and is not affected by the obstructions of Propositions~\ref{prop:68} and~\ref{prop:69}. We are going to prove the existence of a cone spherical metric with strict dihedral monodromy in the category of hemispherical surfaces.\newline Since the strengthened Gauss-Bonnet inequality is satisfied, we can find in the distribution of angles a subset of $s$ even or odd singularities of angles $\pi(2+d_{i})$ such that $d_{1}+\dots+d_{s}$ is an even number whose value is at least $4g-4+2(n-s)$. Among them there should be the two odd singularities.
Then, we consider a nonempty stratum $\mathcal{Q}(d_{1},\dots,d_{s},-2^{t})$ of primitive quadratic differentials on the Riemann sphere.\newline Then, we have to check if the configuration of residues determined by the remaining singularities and the additional double poles is realized in the stratum. If one of the four obstructions of Theorem~\ref{thm:32} holds and one even singularity of angle $2a_{i}\pi$ is not among the $s$ chosen singularities, then we can simply take the stratum $\mathcal{Q}(2a_{i}-2,d_{1},\dots,d_{s},-2^{t+a_{i}-1})$. The first two obstructions do not hold when there is a zero of even order in differentials of the stratum. Concerning the two other obstructions, we add an integer residue. In any case, the sum of integer weights increases by at least one while the bound depends on the orders of the two odd singularities (in other words, the bound does not change). If the initial configuration is not realized in its stratum, it cannot be worse with the new one. Consequently, we assume every even singularity is among the $s$ chosen singularities.\newline Thus, we have to consider the stratum $\mathcal{Q}(2a_{1}-2,\dots,2a_{n_{E}}-2,2b_{1}-2,2b_{2}-2,-2^{t})$. We have $2t-4=2\sum a_{i} -2n_{E} +2b_{1}+2b_{2}-4$. Consequently, $t=b_{1}+b_{2}+\sum a_{i} -n_{E}$. Among these double poles, there are $n_{N}$ non-integer singularities. There are $K=t-n_{N}=b_{1}+b_{2}+\sum a_{i} -n_{E}-n_{N}=T+2-n$ remaining double poles. The distribution of angles is realized by a hemispherical surface if and only if the configuration of quadratic residues formed by the squares of the numbers $c_{1},\dots,c_{n_{N}},1,\dots,1$ with $K$ elements equal to $1$ is realized in the stratum.\newline The first and second obstructions are already settled by eliminating the distributions of angles described in Propositions~\ref{prop:68} and~\ref{prop:69}.
The third and fourth obstructions are already encompassed in the arithmetic condition.\newline Finally, the obtained hemispherical surface cannot have co-axial monodromy since it has odd singularities on the equator and, since $n_{N} \geq 1$, at least one non-integer singularity at the poles. \end{proof} \subsection{Comparison} \label{sec:comparisongzero} Just as in Proposition~\ref{prop:54}, some distributions of angles that can be realized by a spherical metric with co-axial monodromy can also be realized by a spherical metric with strict dihedral monodromy. We give a complete characterization. \begin{prop} Let $2\pi(a_{1},\dots,a_{n_{E}},b_{1},\dots,b_{n_{O}},c_{1},\dots,c_{n_{N}})$ be a distribution of angles. If it is realized by a spherical metric with co-axial monodromy on a punctured sphere, then it can also be realized by a metric with strict dihedral monodromy on a punctured sphere if and only if $n_{O} \geq 2$ and $n_{O}+n_{N}\geq 3$. \end{prop} \begin{proof} Lemma~\ref{lem:62} proves that $n_{O} \geq 2$ is a necessary condition for the existence of a spherical metric with dihedral monodromy on a punctured sphere. Besides, if the number of singularities with nontrivial monodromy is exactly two, then the monodromy of the metric is automatically co-axial. Therefore, $n_{O}+n_{N}\geq 3$ is a necessary condition. We will prove that these two conditions are also sufficient.\newline We consider a distribution of angles realized by a spherical metric with co-axial monodromy. We assume it satisfies the two necessary conditions. Theorem~\ref{thm:61} implies in particular that $\sum a_{i} \geq n_{E}+n_{O}+n_{N}-2$. Since we have $n_{O}+n_{N}\geq 3$, we deduce $n_{E} \geq 1$. Therefore, the distribution of angles is not forbidden by Propositions~\ref{prop:64},~\ref{prop:68} and~\ref{prop:69}.
Besides, the condition on the sum of orders of even singularities in Theorem~\ref{thm:61} clearly implies the strengthened Gauss-Bonnet inequality (see Proposition~\ref{prop:GBplus}). It remains to prove that the distribution of angles satisfies the hypotheses of Theorems~\ref{thm:63},~\ref{thm:65} and~\ref{thm:610}.\newline If $n_{O} \geq 4$, then there is no additional condition to check and Theorem~\ref{thm:63} proves that the distribution of angles is realized by a spherical metric with strict dihedral monodromy.\newline If $n_{O}=2$ or $n_{O}=3$, we assume $b_{1} \geq b_{2}$ (and $b_{2} \geq b_{3}$ when $n_{O}=3$). We already know that $\sum a_{i} \geq n-2$. We set $T=\sum a_{i} +b_{1}+b_{2}$ and $K=T+2-n$ (so that the strengthened Gauss-Bonnet inequality reads $T \geq n-2$). Thus, $K \geq b_{1}+b_{2}$. Theorems~\ref{thm:65} and~\ref{thm:610} require an additional arithmetic condition.\newline We first consider the case $n_{O}=3$. The condition of Theorem~\ref{thm:65} is the following. Let~$v$ be the vector $(c_{1},\dots,c_{n_{N}},b_{3},1,\dots,1)$ with $K$ elements equal to $1$. If the distribution of angles cannot be realized by a metric with strict dihedral monodromy, then~$v$ is of the form $L(r_{1},\dots,r_{n_{N}+K+1})$ with $L>0$ and $r_{1},\dots,r_{n_{N}+K+1}$ integers. We should have: \begin{itemize} \item $\sum r_{i} \geq b_{1}+b_{2}$ if $\sum r_{i}$ is even; \item $\sum r_{i} \geq b_{1}$ if $\sum r_{i}$ is odd. \end{itemize} If the condition is not satisfied, then $L=1$, because otherwise the $K \geq b_{1}+b_{2}$ elements of~$v$ that are equal to $1$ would be enough to satisfy the bound. However, $b_{3}$ is not an integer, which leads to a contradiction since $\frac{b_{3}}{L}$ is required to be an integer.\newline In the case $n_{O}=2$, the condition of Theorem~\ref{thm:610} is essentially the same. We have to consider the vector $(c_{1},\dots,c_{n_{N}},1,\dots,1)$ with $K$ elements equal to $1$. Similarly, if the condition is not satisfied, then $L=1$.
This implies $c_{1},\dots,c_{n_{N}}$ are integers, which is a contradiction. We already know by hypothesis that $n_{N} \geq 1$. This ends the proof. \end{proof} \paragraph{\bf Acknowledgements.} The second author is supported by the Israel Science Foundation (grant No. 1167/17) and the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant agreement No. 802107). The second author would also like to thank Boris Shapiro for introducing him to the field of spherical metrics.\newline \nopagebreak \vskip.5cm
\section{Introduction} The holographic duality, also known as the anti-de Sitter space and conformal field theory correspondence (AdS/CFT)~\cite{Witten1998ASSH,Witten1998ASSTPTCGT,Gubser1998GTCFNST,Maldacena1999LLSFTS}, is a duality between a CFT on a flat boundary and a gravitational theory in the AdS bulk with one higher dimension. It is intrinsically related to the renormalization group (RG) flow~\cite{de-Boer2000HRG,Skenderis2002LNHR,Heemskerk2011HWRG,Swingle2012CHSUER,Swingle2012ERH,Nozaki2012HGERQFT,Balasubramanian2013HIRG} of the boundary quantum field theory, since the dilation transformation, as a part of the conformal group, naturally corresponds to the coarse-graining procedure in the RG flow. The extra dimension emergent in the holographic bulk can be interpreted as the RG scale. In the traditional real-space RG~\cite{Kadanoff1966SLIMNT}, the coarse-graining procedure decimates irrelevant degrees of freedom along the RG flow; therefore the RG transformation is irreversible due to the information loss. However, if the decimated degrees of freedom are collected and hosted in the bulk, the RG transformation becomes a \emph{bijective} map between the degrees of freedom on the CFT boundary and the degrees of freedom in the AdS bulk. Such mappings, generated by information-preserving RG transforms, are called exact holographic mappings (EHM)~\cite{Qi2013EHMESG,Lee2015EHMFFS,Gu2016HDB2QAHS3TI}, which were first formulated for free fermion CFT. A similar idea was also implemented by the multiscale entanglement renormalization ansatz (MERA)~\cite{Vidal2007ER,Evenbly2014CHEMSTES}, a hierarchical quantum circuit for simulating quantum states, as well as by many of its generalizations~\cite{Haegeman2013ERQFRS,Lee2014QRGH,Mollabashi2014HGCQQFT,Leigh2014HGRGHSS,Lunts2015IH,Molina-Vilaplana2015IGERFQF,Miyaji2015BSHDTS,Wen2016HERTI,You2016EHMMLSSBRG, Cotler:2018ehb, Cotler:2018ufx}.
Under the EHM, the boundary features of a quantum field theory at different scales are mapped to different depths in the bulk, and vice versa. The field variable deep in the bulk represents the overall or infrared (IR) feature, while the variable close to the boundary controls the detailed or ultraviolet (UV) feature. Such a hierarchical arrangement of information is often observed in deep neural networks, particularly in convolutional neural networks (CNN)~\cite{Goodfellow2016DL}. The similarity between renormalization group and deep learning has been discussed in several works~\cite{Beny2013DLRG,Mehta2014EMBVRGDL,Beny2015RGSI,Oprisa2017CDLMRG,Lin2017DDCLWW,Gan2017HDL}. Deep learning techniques have also been applied to construct the optimal RG transformations~\cite{Li2018NNRG,Koch-Janusz2018MINNRG} and to uncover the holographic geometry~\cite{You2018MLSGFEF,Hashimoto2018DLMC,Hashimoto2018DLH,Hashimoto2019ADBM}. In this work, we further explore the possibility of designing the EHM for interacting quantum field theories using deep learning approaches. We first point out that the information-preserving RG and the deep generative model can be unified as the forward and backward applications of the EHM, so that designing a good RG scheme is equivalent to training an optimal generative model to produce field configurations following the Boltzmann weight. We then propose that the information theoretic goal for a good EHM is to minimize the mutual information in the holographic bulk, which serves as a guiding principle for the machine to design RG rules. Based on these understandings, we construct a flow-based hierarchical generative model~\cite{Dinh2016Density,Kingma2018Glow} with tractable and differentiable likelihood, which allows us to apply deep learning techniques to train the optimal EHM directly from the field theory action on the holographic boundary.
We show that the fluctuation of neural network parameters corresponds to the gravitational fluctuation in the holographic bulk, and optimizing these parameters resembles searching for a classical geometry approximation. The machine-learned holographic mapping can be used to perform both the sampling task (mapping from bulk to boundary) and the inference task (mapping from boundary to bulk), providing us with new tools to study both the boundary and the bulk theories. For the sampling task, we run the generative model to propose efficient global updates for boundary field configurations, which helps to boost the Monte Carlo simulation of the CFT. In the inference task, we push the boundary field theory to the bulk and establish the bulk effective theory, which enables us to probe the emergent dual geometry (on the classical level) by measuring the mutual information in the bulk field. \section{Renormalization Group and Generative Model}Renormalization group (RG) plays a central role in the study of quantum field theory (QFT) and many-body physics. The RG transformation progressively coarse-grains the field configuration to extract relevant features. The coarse-graining rules (or the RG schemes) are generally model-dependent and require human design. Take the real-space RG\cite{Kadanoff1966SLIMNT} for example: for a ferromagnetic Ising model, the RG rule should progressively extract the uniform spin components as the most relevant feature; however, for an antiferromagnetic Ising model, the staggered spin components should be extracted instead; if the spin couplings are randomly distributed on the lattice, the RG rule can be more complicated. When it comes to the momentum-space RG\cite{Wilson1983RGCP}, the rule is instead to renormalize the low-energy degrees of freedom by integrating out the high-energy degrees of freedom. What is the general designing principle behind all these seemingly different RG schemes?
Can a machine learn to design the optimal RG scheme based on the model action? \begin{figure}[htbp] \begin{center} \includegraphics[width=0.85\columnwidth]{fig_RG} \caption{Relation between (a) RG and (b) generative model. The inverse RG can be viewed as a generative model that generates the ensemble of field configurations from random sources. The random sources are supplied at different RG scales (coordinated by $z$), which can be viewed as a field $\zeta(x,z)$ living in the holographic bulk with one more dimension. The original field $\phi(x)$ will be generated on the holographic boundary.} \label{fig:RG} \end{center} \end{figure} With these questions in mind, we take a closer look at the RG procedure in a lattice field theory setting. In the traditional RG approach, the RG transformation is not invertible due to the information loss at each RG step when the irrelevant features are decimated, as illustrated in \figref{fig:RG}(a). However, if the decimated features are kept at each RG scale, the RG transformation can be inverted. Under the inverse RG flow, the decimated degrees of freedom $\zeta(x,z)$ are supplied to each layer (step) of the inverse RG transformation, such that the field configuration $\phi(x)$ can be regenerated, as shown in \figref{fig:RG}(b). Here we assume that the $\phi(x)$ field is defined in a flat Euclidean spacetime coordinated by $x=(x_1,x_2,\cdots)\in\mathbb{R}^d$; then $\zeta(x,z)$ will live on a manifold with one higher dimension, and the extra dimension $z$ corresponds to the RG scale. Given its close analogy to the holographic duality, we may view $\zeta(x,z)$ as the field in the holographic bulk and $\phi(x)$ as the field on the holographic boundary.
The inverse RG can be considered as a deep generative model $G$, which organizes the bulk field $\zeta(x,z)$ to generate the boundary field $\phi(x)$, \eq{\phi(x)=G[\zeta(x,z)].} The renormalization $G^{-1}$ and generation $G$ procedures are thus unified as the forward and backward maps of a bijective (invertible) map between the boundary and the bulk, known as the EHM.\cite{Qi2013EHMESG,Lee2015EHMFFS} At first glance, such an information-preserving RG does not seem to have much practical use, because it does not reduce the degrees of freedom and hence does not simplify our description. However, since the bulk field $\zeta(x,z)$ represents the irrelevant features to be decimated under RG, it should look like independent random noise, which contains a minimal amount of information. So instead of memorizing the bulk field configuration $\zeta(x,z)$ at each RG scale for reconstruction purposes, we can simply sample $\zeta(x,z)$ from uncorrelated (or weakly correlated) random sources and serve them to the inverse RG transformation. Suppose the bulk field $\zeta(x,z)$ is drawn from a prior distribution $P_\text{prior}[\zeta]$; then the transformation $\phi=G[\zeta]$ will deform the prior distribution to a posterior distribution $P_\text{post}[\phi]$ for the boundary field $\phi(x)$, \eq{\label{eq:P_post}P_\text{post}[\phi]=P_\text{prior}[\zeta]\Big|\det\Big(\frac{\delta G[\zeta]}{\delta \zeta}\Big)\Big|^{-1},} where $|\det(\delta_\zeta G)|$ is the Jacobian determinant of the transformation.
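As a sanity check of \eqnref{eq:P_post}, the change-of-variables formula can be verified numerically for a one-dimensional toy bijection; the map $G(\zeta)=\sinh\zeta$ used below is an illustrative stand-in for the EHM, not the network constructed in this work:

```python
import numpy as np

# Toy bijection G: zeta -> sinh(zeta), standing in for the EHM.
# Change of variables: p_post(phi) = p_prior(zeta) / |dG/dzeta|, with zeta = asinh(phi).
phi = np.linspace(-200.0, 200.0, 400001)
zeta = np.arcsinh(phi)                          # G^{-1}(phi)
p_prior = np.exp(-zeta**2/2)/np.sqrt(2*np.pi)   # standard Gaussian prior
jacobian = np.cosh(zeta)                        # dG/dzeta evaluated at zeta
p_post = p_prior/jacobian

# The deformed density still integrates to one (simple Riemann sum).
mass = np.sum(p_post)*(phi[1] - phi[0])
print(mass)  # ~ 1.0
```

The Jacobian factor $\cosh\zeta$ is exactly what redistributes the probability mass under the deformation so that normalization is preserved.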
In such a manner, the objective of the inverse RG is not to reconstruct a particular original field configuration, but to generate an ensemble of field configurations $\phi(x)$, whose probability distribution $P_\text{post}[\phi]$ should match the Boltzmann distribution \eq{\label{eq:P_target}P_\text{target}[\phi]=e^{-S_\text{QFT}[\phi]}/Z_\text{QFT}} specified by the action functional $S_\text{QFT}[\phi(x)]$ of the boundary field theory, where $Z_\text{QFT}=\sum_{[\phi]}e^{-S_\text{QFT}[\phi]}$ denotes the partition function. This setup provides us with a theoretical framework to discuss the designing principles of a good RG scheme. We propose two objectives for a good RG scheme (or EHM): the RG transformation should aim at decimating irrelevant features and preserving relevant features, and the inverse RG must aim at generating field configurations matching the target field theory distribution $P_\text{target}[\phi]$ in \eqnref{eq:P_target}. An information theoretic criterion for ``irrelevant'' features is that they should have minimal mutual information, so the prior distribution $P_\text{prior}[\zeta]$ should be chosen to minimize the mutual information between bulk fields at different points, i.e. $\min I(\zeta(x,z):\zeta(x',z'))$. We will refer to this designing principle as the minimal bulk mutual information (minBMI) principle, which is a general information theoretic principle behind different RG schemes and is independent of the notion of field pattern or energy scale. The close relation between RG and deep learning has been thoroughly discussed in several early works\cite{Mehta2014EMBVRGDL,Oprisa2017CDLMRG,Lin2017DDCLWW}. However, as pointed out in \refcite{Koch-Janusz2018MINNRG,Lenggenhager2018ORGTFIT}, the hierarchical architecture itself cannot guarantee the emergence of RG transformations in a deep neural network. Additional information theoretic principles must be imposed to guide the learning.
In light of this observation, \refcite{Koch-Janusz2018MINNRG,Lenggenhager2018ORGTFIT} proposed the maximal real-space mutual information (maxRSMI) principle, which aims at maximizing the mutual information between the coarse-grained field and the fine-grained field in the surrounding environment. Our minBMI principle is consistent with and more general than the maxRSMI principle (see \appref{sec:minBMI} for a detailed discussion of the relation between these two principles). In the simplest setting, we can hard-code the minBMI principle by assigning the prior distribution to the uncorrelated Gaussian distribution, \eq{\label{eq:P_prior1}P_\text{prior}[\zeta]=\mathcal{N}[\zeta;0,1]\propto e^{-\|\zeta\|^2},} where $\|\zeta\|^2=\sum_{x,z}|\zeta(x,z)|^2$. Hence the mutual information vanishes for every pair of points in the holographic bulk. Given the prior distribution, the problem of finding the optimal EHM boils down to training the optimal generative model $G$ to minimize the Kullback-Leibler (KL) divergence between the posterior distribution $P_\text{post}[\phi]$ in \eqnref{eq:P_post} and the target distribution $P_\text{target}[\phi]$ in \eqnref{eq:P_target}, i.e. $\min\mathcal{L}$ with \eqs{\label{eq:L}\mathcal{L}&=\mathsf{KL}(P_\text{post}[\phi]\parallel P_\text{target}[\phi])\\ &=\mathop{\mathbb{E}}\limits_{\zeta\sim P_\text{prior}}S_\text{QFT}[G[\zeta]]+\ln P_\text{prior}[\zeta]-\ln\det\Big(\frac{\delta G[\zeta]}{\delta \zeta}\Big),} where $\mathbb{E}_{\zeta\sim P_\text{prior}}$ denotes the average over the ensemble of $\zeta$ drawn from the prior distribution. This fits perfectly into the framework of flow-based generative models\cite{Dinh2016Density,Kingma2018Glow} in machine learning, which can be trained efficiently thanks to their tractable and differentiable posterior likelihood. We model the bijective map $G$ by a neural network (to be detailed later) with trainable network parameters.
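To see the loss \eqnref{eq:L} at work in the simplest possible setting, consider a toy model in which $G$ is a single trainable scale, $G[\zeta]=e^{s}\zeta$, and the target is a free massive field $S_\text{QFT}[\phi]=\sum_i\phi_i^2/(2\sigma^2)$; both choices are illustrative assumptions, not the model studied in this work. Stochastic gradient descent on \eqnref{eq:L} then drives $e^{s}\to\sigma$, matching the posterior to the target:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0   # toy target: S_QFT[phi] = sum_i phi_i^2 / (2 sigma^2)
N = 64        # number of field components per sample
s = 0.0       # log-scale of the one-parameter bijection G[zeta] = e^s * zeta
lr = 0.05     # learning rate

for step in range(2000):
    zeta = rng.standard_normal(N)          # sample the bulk prior
    # Per-component loss (up to s-independent terms):
    #   L = E[ e^{2s} zeta^2 / (2 sigma^2) ] - s     (the -s comes from -ln det / N)
    # hence dL/ds = E[ e^{2s} zeta^2 / sigma^2 ] - 1
    grad = np.mean(np.exp(2*s)*zeta**2/sigma**2 - 1.0)
    s -= lr*grad                           # stochastic gradient descent step

print(np.exp(s))  # approaches sigma = 2.0 as the KL divergence is minimized
```

In the full model the single parameter $s$ is replaced by the parameters of the bijective neural network, but the gradient estimator has the same structure: sample the prior, push through $G$, and differentiate the action plus log-Jacobian terms.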
We initiate the sampling from the bulk $\zeta\sim P_\text{prior}$ and push the bulk field to the boundary by $\phi=G[\zeta]$, collecting the logarithm of the Jacobian determinant along the way. Given the action $S_\text{QFT}[\phi]$, we can evaluate the loss function $\mathcal{L}$ in \eqnref{eq:L} and back-propagate its gradient with respect to the network parameters. We then update the network parameters by stochastic gradient descent. We iterate the above steps to train the neural network. In this way, simply by presenting the QFT action $S_\text{QFT}$ to the machine, the machine learns to design the optimal RG transformation $G$ by repeatedly probing $S_\text{QFT}$ with various machine-generated field configurations. Thus our algorithm may be called the neural network renormalization group (neural RG)\cite{Li2018NNRG}, which can be implemented using deep learning platforms such as TensorFlow\cite{MartinAbadi2015TLMLHS}. \section{Holographic Duality and Classical Approximation} We would like to provide an alternative interpretation of the loss function $\mathcal{L}$ in \eqnref{eq:L} in the context of holographic duality, which will deepen our understanding of the capabilities and limitations of our approach.
Suppose we can sample the boundary field configuration $\phi(x)$ from the target distribution $P_\text{target}[\phi]$ and map $\phi(x)$ to the bulk by applying the EHM along the RG direction $\zeta=G^{-1}[\phi]$; the obtained bulk field $\zeta(x,z)$ will follow the distribution \eqs{P_\text{bulk}[\zeta]&=P_\text{target}[\phi]\det(\delta_\phi G^{-1}[\phi])^{-1}\\ &=Z_\text{QFT}^{-1}e^{-S_\text{QFT}[G[\zeta]]}\det(\delta_\zeta G).} Then the normalization of the bulk field probability distribution $\sum_{[\zeta]}P_\text{bulk}[\zeta]=1$ implies that the QFT partition function $Z_\text{QFT}=\sum_{[\phi]}e^{-S[\phi]}$, which was originally defined on the holographic boundary, can now be written in terms of the bulk field $\zeta$ as well, \eq{\label{eq:Z_QFT dual}Z_\text{QFT}=\sum_{[\zeta]}e^{-S_\text{QFT}[G[\zeta]]+\ln\det(\delta_\zeta G)}.} Since $Z_\text{QFT}$ is by definition independent of $G$, we are allowed to sum over all possible $G$ on both sides of \eqnref{eq:Z_QFT dual}, which establishes a duality between the following two partition functions, \eq{\label{eq:duality}Z_\text{QFT}=\sum_{[\phi]}e^{-S_\text{QFT}[\phi]}\leftrightarrow Z_\text{grav}=\sum_{[\zeta,G]}e^{-S_\text{grav}[\zeta,G]},} with the bulk theory $S_\text{grav}$ given by \eq{\label{eq:S_grav}S_\text{grav}[\zeta,G]=S_\text{QFT}[G[\zeta]]-\ln\det(\delta_\zeta G).} By ``duality'' we mean that $Z_\text{QFT}$ and $Z_\text{grav}$ only differ by a proportionality constant (as $Z_\text{grav}=\sum_{[G]}Z_\text{QFT}$), so they are equivalent descriptions of the same physics. $S_\text{grav}[\zeta,G]$ describes how the bulk variable $\zeta$ (matter field) and the neural network $G$ (geometry) would fluctuate and interact with each other, which resembles a ``quantum gravity'' theory in the holographic bulk. The bulk has more degrees of freedom than the boundary, as there can be many different choices of $\zeta$ and $G$ that lead to the same boundary field configuration $\phi=G[\zeta]$.
This is a gauge redundancy in the bulk theory, which covers the diffeomorphism invariance as well as the interchangeable roles of matter and spacetime geometry in a gravity theory. At this level, the bulk theory looks intrinsically nonlocal and the geometry can fluctuate strongly. However, it is usually more desirable to work with quantum gravity theories with a classical limit, which describe weak fluctuations (matter fields and gravitons) around a classical geometry. Although not every CFT admits a classical gravity dual, we still attempt to find the \emph{classical approximation} of the dual quantum gravity theory, neglecting the fluctuation of $G$. Such classical approximations could serve as a starting point on which gravitational fluctuations may be further investigated in future works. Aiming at a classical geometry, we look for the optimal $G$ that maximizes its marginal probability $P_\text{EHM}[G]=Z_\text{grav}^{-1}\sum_{[\zeta]}e^{-S_\text{grav}[\zeta,G]}$ with the bulk matter field $\zeta$ traced out. This optimization problem seems trivial, because according to \eqnref{eq:Z_QFT dual}, $P_\text{EHM}[G]=Z_\text{QFT}/Z_\text{grav}$ is independent of $G$. It is understandable that any choice of $G$ is equally likely if we have no preference on the prior distribution $P_\text{prior}[\zeta]$ of the bulk matter field, because there is a trade-off between $G$ and $P_\text{prior}$: one can always adjust $P_\text{prior}$ to compensate for a change in $G$. Such a trade-off behavior is fundamentally required by the gauge redundancy in the bulk gravity theory. To fix the gauge, we invoke the minBMI principle to bias the bulk matter field towards independent random noise, such that the classical solution of $G$ will look like an RG transformation, in line with our expectation for a holographic mapping.
Choosing a minBMI prior distribution such as \eqnref{eq:P_prior1}, $P_\text{EHM}[G]$ can be cast into \eq{\label{eq:P_EHM}P_\text{EHM}[G]=\mathop{\mathbb{E}}\limits_{\zeta\sim P_\text{prior}}\frac{P_\text{target}[G[\zeta]]}{P_\text{post}[G[\zeta]]}\geq e^{-\mathcal{L}},} which is bounded by $e^{-\mathcal{L}}$ from below, with $\mathcal{L}$ being the KL divergence between $P_\text{post}$ and $P_\text{target}$ as defined in \eqnref{eq:L}. Therefore the objective of maximizing $P_\text{EHM}[G]$ can be approximately replaced by minimizing the loss function $\mathcal{L}$, which is no longer a trivial optimization problem. From this perspective, the loss function $\mathcal{L}$ can be approximately interpreted as the action (negative log-likelihood) for the holographic bulk geometry associated to the EHM $G$. Minimizing the loss function corresponds to finding the classical saddle point solution of the bulk geometry. We will build a flow-based generative model to parameterize $G$ and train the neural network using deep learning approaches. The fluctuation of neural network parameters in the learning dynamics reflects (at least partially) the gravitational fluctuation in the holographic bulk. At the classical saddle point $G_*=\text{argmin}_G\mathcal{L}$, we may extract an effective theory for the bulk matter field \eqs{\label{eq:S_eff}S_\text{eff}[\zeta]&\equiv S_\text{grav}[\zeta,G_*]\\ &=\|\zeta\|^2+\ln P_\text{post}[G_*[\zeta]]-\ln P_\text{target}[G_*[\zeta]].} As the KL divergence $\mathcal{L}=\mathsf{KL}(P_\text{post}\parallel P_\text{target})$ is minimized after training, we expect $P_\text{post}$ and $P_\text{target}$ to be similar, such that their log-likelihood difference $\ln P_\text{post}-\ln P_\text{target}$ will be small, so the effective theory $S_\text{eff}[\zeta]$ will be dominated by the first term $\|\zeta\|^2$ in \eqnref{eq:S_eff}, implying that the bulk field $\zeta$ will be massive. 
The small log-likelihood difference further provides kinetic terms (and interactions) for the bulk field $\zeta$, allowing it to propagate on a classical background that is implicitly specified by $G_*$. In this way, the bulk field will be correlated in general. Even though one of our objectives is to minimize the bulk mutual information as much as possible, the machine-learned EHM typically cannot resolve all correlations in the original QFT, so the residual correlations will be left in the bulk field $\zeta$ as described by the log-likelihood difference in \eqnref{eq:S_eff}. The mismatch between $P_\text{post}$ and $P_\text{target}$ may arise for several reasons: first, limited by the design of the neural network, the generative model $G$ may not be expressive enough to precisely deform the prior distribution to the target distribution; second, even if $G$ has sufficient representation power, the training may not be able to converge to the global minimum; finally, and perhaps most fundamentally, not every QFT has a classical gravitational dual, so the bulk theory should be quantum gravity in general. Taking the classical approximation and ignoring the gravitational fluctuation leads to unresolvable correlations and interactions of the matter field $\zeta$ that have to be kept in the bulk. Nevertheless, our framework could in principle include fluctuations of $G$ by falling back to $Z_\text{grav}$ in \eqnref{eq:duality}. We can either model the marginal distribution $P_\text{EHM}[G]$ by techniques like graph generative models, or directly analyze the gravitational fluctuations by observing the fluctuations of neural network parameters in the learning dynamics as mentioned below \eqnref{eq:P_EHM}. We will leave these ideas for future exploration.
In the following, we will use a concrete example, a 2D compact boson CFT on a lattice, to illustrate our approach of learning the EHM as a generative model and to demonstrate its applications in both the sampling and the inference tasks. \section{Application to Complex $\phi^4$ Model} We consider a lattice field theory defined on a 2D square lattice, described by the Euclidean action \eq{S_\text{QFT}[\phi]=-t\sum_{\langle ij\rangle}\phi_i^*\phi_j+\sum_{i}(\mu|\phi_i|^2+\lambda|\phi_i|^4),} where $\phi_i \in\mathbb{C}$ is a complex scalar field defined on each site $i$ of a square lattice and $\langle ij\rangle$ denotes the summation over all nearest neighbor sites. The model has a global $\mathrm{U}(1)$ symmetry, under which the field rotates by $\phi_i\to e^{\mathrm{i}\varphi}\phi_i$ on every site. We choose $\mu=-200+2t$ and $\lambda=25$ to create a deep Mexican hat potential that basically pins the complex field on a circle $\phi_i=\sqrt{\rho}e^{\mathrm{i}\theta_i}$ of radius $\sqrt{\rho}=2$. In this way, the field theory reduces to the XY model $S_\text{QFT}=-\tfrac{1}{T}\sum_{\langle ij \rangle}\cos(\theta_i-\theta_j)$ with an effective temperature $T=(\rho t)^{-1}$. By tuning the temperature $T$, the model exhibits two phases: the low-$T$ algebraic liquid phase with a power-law correlation $\langle\phi_i^*\phi_j\rangle\sim|x_i-x_j|^{-\alpha}$ and the high-$T$ disordered phase with a short-range correlation. The two phases are separated by the Kosterlitz-Thouless (KT) transition. Several recent works\cite{Beach2018MLVKT,Zhang2018MLPTPM,Rodriguez-Nieva2018ITOUML,Zhou2018RGNNSFT} have focused on applying machine learning methods to identify phase transitions or topological defects (vortices). Our purpose is different here: we stay in the algebraic liquid phase, described by a Luttinger liquid CFT, and seek to develop the optimal holographic mapping for the CFT.
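For concreteness, the lattice action can be evaluated with a few lines of numpy. Periodic boundary conditions and keeping the real part of the hopping term (so that the action is real) are our assumptions here, not specifications from the text:

```python
import numpy as np

def phi4_action(phi, t=1.0, lam=25.0):
    """Euclidean action of the complex phi^4 model on a periodic square lattice.

    phi : 2D complex array, one field value per site.
    Assumes mu = -200 + 2t; each nearest-neighbor bond is counted once via
    forward shifts, and the real part is taken so the action is real.
    """
    mu = -200.0 + 2.0*t
    hop = np.real(np.conj(phi)*(np.roll(phi, 1, axis=0) + np.roll(phi, 1, axis=1)))
    potential = mu*np.abs(phi)**2 + lam*np.abs(phi)**4
    return -t*np.sum(hop) + np.sum(potential)

# A uniform configuration pinned on the circle |phi| = 2:
phi = 2.0*np.ones((4, 4), dtype=complex)
print(phi4_action(phi, t=1.0))  # -400 per site => -6400.0
```

For this uniform configuration the hopping gain and the $2t$ shift in $\mu$ cancel exactly, giving $-400$ per site independently of $t$, consistent with the field being pinned at radius $2$.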
\begin{figure}[htbp] \begin{center} \includegraphics[width=0.94\columnwidth]{fig_network} \caption{(a) Side view of the neural-RG network. $x$ is the spatial dimension(s) and $z$ corresponds to the RG scale. There are two types of blocks: disentanglers (dark green) and decimators (light yellow). The network forms an EHM between the boundary variables (blue dots) and the bulk variables (red crosses). (b) Top view of one RG layer in the network. Disentanglers and decimators interweave in the spacetime (taking two-dimensional spacetime for example). Each decimator pushes the coarse-grained variable (black dot) to the higher layer and leaves the decimated variables (red crosses) in the holographic bulk. (c) The training contains two stages. In the first stage, we fix the prior distribution $P[\zeta]$ to be uncorrelated Gaussian and train the EHM $G$ to bring it to the Boltzmann distribution of the CFT. In the second stage, we learn the prior distribution with the trained EHM held fixed. (d) The behavior of the loss function $\mathcal{L}$ in the two training stages.} \label{fig:network} \end{center} \end{figure} We design the generative model $G$ as a bijective deep neural network following the architecture of the neural network renormalization group (neural-RG) proposed by \refcite{Li2018NNRG}. Its structure resembles the MERA network~\cite{Vidal2007ER} as depicted in \figref{fig:network}(a). Each RG step contains a layer of disentangler blocks (like a CNN convolutional layer) to resolve local correlations, and a layer of decimator blocks (like a CNN pooling layer) to separate the renormalized and decimated variables. Given that the spacetime dimension is two on the boundary, we can overlay decimators on top of disentanglers in an interweaving manner as shown in \figref{fig:network}(b).
Both the disentangler and the decimator are made of three bijective layers: a linear scaling layer, an orthogonal transformation layer and an invertible non-linear activation layer, as arranged in \figref{fig:bijector} (see \appref{sec:bijectors} for more details). They are designed to be invertible, non-linear and $\mathrm{U}(1)$-symmetric transformations, which are used to model generic RG transformations for the complex $\phi^4$ model. The bijector parameters are subject to training (the training procedure and the number of parameters are specified in \appref{sec:NNtraining}). The Jacobian matrices of these transformations are calculable. After each decimator, only one renormalized variable flows to the next RG layer, and the other three decimated variables are positioned into the bulk as little crosses as shown in \figref{fig:network}(a) and (b). The entire network constitutes an EHM between the original boundary field $\phi(x)$ and the dual field $\zeta(x,z)$ in the holographic bulk. \begin{figure}[htb] \begin{center} \includegraphics[width=0.55\columnwidth]{fig_bijector} \caption{Neural network architecture within a decimator block (the disentangler block shares the same architecture). Starting from the renormalized variable $\phi'$ and the bulk noise $\zeta_{1,2,3}$ as complex variables, the $\Re$ and $\Im$ channels are first separated, then $\mathcal{S}$ applies the scaling separately to the four variables within each channel and $\mathcal{O}$ implements the $\mathrm{O}(4)$ transformation that mixes the four variables together. $\mathcal{S}$ and $\mathcal{O}$ are identical for the $\Re$ and $\Im$ channels to preserve the $\mathrm{U}(1)$ symmetry.
Then the channels merge into complex variables, followed by an element-wise non-linear activation described by the invertible $\mathrm{U}(1)$-symmetric map $\phi_i\mapsto (\phi_i/|\phi_i|)\sinh|\phi_i|$.} \label{fig:bijector} \end{center} \end{figure} We start with a $32\times32$ square lattice as the holographic boundary and build up the neural-RG network. The network will have five layers in total. Since the boundary field theory has a global $\mathrm{U}(1)$ symmetry, the bijectors in the neural network are designed to respect the $\mathrm{U}(1)$ symmetry (see \figref{fig:bijector}), such that the bulk field also preserves the $\mathrm{U}(1)$ symmetry. The training will be divided into two stages, as pictured in \figref{fig:network}(c). In the training stage I, we fix the prior distribution in \eqnref{eq:P_prior1} and train the network parameters in the generative model $G$ to minimize the loss function $\mathcal{L}$. The training method is outlined below \eqnref{eq:L}. The loss function $\mathcal{L}$ decays with training steps, whose typical behavior is shown in \figref{fig:network}(d). We will discuss the stage II training later. \begin{figure}[htbp] \centering \includegraphics[width = 0.95\linewidth]{fig_XY} \caption{Performance of the trained EHM for the complex $\phi^4$ theory. (a) Order parameter $\langle\phi\rangle$ vs.~temperature $T$. Different models are trained separately at different temperatures. For a finite-sized system, $\langle\phi\rangle$ crosses over to zero around the KT transition. Correlation function $\langle\phi_i^*\phi_j\rangle$ scaling in log-log plot (b) and log-linear plot (c). Distribution of $\phi_i$ in a single sample generated by the neural network trained in (d) the algebraic liquid phase and (e) the disordered phase.
\label{fig:phi4} } \end{figure} We perform the stage I training for several neural networks at different temperatures $T$ separately, i.e.~we use $S_\text{QFT}[\phi]$ with different parameters to train different neural networks. After training, each neural network can generate configurations of the boundary field $\phi$ from the bulk uncorrelated Gaussian field $\zeta$ efficiently. To test how well these generative models work, we measured the order parameter $\langle\phi\rangle$ and the correlation function $\langle \phi_i^*\phi_j\rangle$ using the field configurations generated by the neural network. Although the order parameter $\langle\phi\rangle$ is expected to vanish in the thermodynamic limit, for our finite-size system it is not vanishing and can exhibit a crossover around the KT transition, as shown in \figref{fig:phi4}(a). The crossover temperature $T\simeq 0.9$ agrees with the previous Monte Carlo studies~\cite{Olsson1995MCATMICWKRE,Hasenbusch1997CRTISMBMM,Hasenbusch2005TMTTHMCS} of the KT transition temperature $T_\text{KT}=0.8929$ in the two-dimensional XY model. We measure the correlation function $\langle \phi_i^*\phi_j\rangle$ at two different temperatures: one at $T=0.5$ in the algebraic liquid phase, and the other at $T=1.0$ in the disordered phase. We plot the two-point function $\langle \phi_i^*\phi_j\rangle$ as a function of the Euclidean distance $r_{ij}\equiv |x_i-x_j|$ (on the square lattice) in both the log-log scale in \figref{fig:phi4}(b) and the log-linear scale in \figref{fig:phi4}(c). The comparison shows that the correlation function in the algebraic liquid (or the disordered) phase fits better to the power-law (or the exponential) decay. \figref{fig:phi4}(d) shows the statistics of $\phi_i$ in one sample generated by the machine trained in the algebraic liquid phase. It exhibits the ``spontaneous symmetry breaking'' behavior due to the finite-size effect, although accumulating over multiple samples will restore the $\mathrm{U}(1)$ symmetry.
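The phase diagnosis used in \figref{fig:phi4}(b,c) amounts to comparing straight-line fits of the correlation data in log-log versus log-linear coordinates. A sketch on synthetic data (the exponent $0.25$ and decay length $2$ below are illustrative, not measured values):

```python
import numpy as np

r = np.arange(1.0, 17.0)        # separations |x_i - x_j| on the lattice
C_algebraic = r**(-0.25)        # power law: straight line in log-log coordinates
C_disordered = np.exp(-r/2.0)   # exponential: straight line in log-linear coordinates

def line_fit_residual(x, y):
    """Sum of squared residuals of the best straight-line fit y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return np.sum((y - (a*x + b))**2)

# Power-law data is fit far better in log-log coordinates, and vice versa.
loglog_alg = line_fit_residual(np.log(r), np.log(C_algebraic))
loglin_alg = line_fit_residual(r, np.log(C_algebraic))
loglog_dis = line_fit_residual(np.log(r), np.log(C_disordered))
loglin_dis = line_fit_residual(r, np.log(C_disordered))
```

Whichever coordinate system yields the smaller residual identifies the decay law, which is the visual comparison made between panels (b) and (c).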
In contrast, the corresponding plot \figref{fig:phi4}(e) in the disordered phase respects the $\mathrm{U}(1)$ symmetry in every single sample. Based on these tests, we can conclude that the neural network has learned to generate field configurations $\phi(x)$ that reproduce the correct physics of the complex $\phi^4$ model. The trained generative model $G$ maps an almost uncorrelated bulk field $\zeta$ to a correlated boundary field $\phi$, and vice versa; therefore $G$ provides a good EHM for the $\phi^4$ theory. \begin{figure}[thb] \begin{center} \includegraphics[width=\columnwidth]{local_global_update} \caption{The boundary field configuration $\phi$ before (left) and after (right) a local update in the most IR layer of the bulk field $\zeta$. The complex field $\phi_i$ is represented by the small arrow on each site. The background color represents the vorticity. The inset shows the distribution of $\phi_i$ in the complex plane.} \label{fig:local_global} \end{center} \end{figure} The machine-learned EHM can be useful in both the backward and forward directions. The backward mapping from bulk to boundary provides efficient sampling of the CFT configurations, which can be used to boost the Monte Carlo simulation of the CFT. The forward mapping from boundary to bulk enables direct inference of bulk field configurations, allowing us to study the bulk effective theory and to probe the bulk geometry. Let us first discuss the sampling task. The EHM establishes a mapping between the massive bulk field $\zeta$ and the massless boundary field $\phi$. The bulk field admits efficient sampling in terms of local updates, because it is uncorrelated (or short-range correlated). Local updates in the bulk get mapped to global updates on the boundary, which allows us to sample the critical boundary field efficiently, minimizing the effect of critical slowing down. To demonstrate this, we tweak the bulk field in the most IR layer.
Under the EHM, we observe a global change of the boundary field configuration as shown in \figref{fig:local_global}. It is interesting to note that the change of the IR bulk field basically induces a global $\mathrm{U}(1)$ rotation of $\phi_i$ (see the insets of \figref{fig:local_global}), which corresponds to the ``Goldstone mode'' associated with the ``spontaneous symmetry breaking'' in the finite-size system, showing that the machine can identify the order parameter as the relevant IR degrees of freedom without prior knowledge about low-energy modes of the system. We also check that the Hamiltonian Monte Carlo sampling in the bulk converges much faster compared to applying the same algorithm on the boundary (see \appref{sec:MC} for more evidence). In connection to several recent works, our neural-RG architecture can be integrated into self-learning Monte Carlo approaches\cite{Liu2016SMCMFS,Aoki2016RBMLRIM,Huang2017AMCSWRBM,Liu2017SMCM,Nagai2017SMCMCA,Tanaka2017TRAML,Nagai2018SMCMWBNN} to boost the numerical efficiency in simulating CFTs. The inverse RG transformation can also be used to generate super-resolution samples\cite{Efthymiou2018SIMWCNN} for finite-size extrapolation of thermodynamic observables. Now let us turn to the inference task. We can use the optimal EHM to push the boundary field back into the bulk and investigate the effective bulk theory $S_\text{eff}[\zeta]$ induced by the boundary CFT. As analyzed below \eqnref{eq:S_eff}, the mismatch between $P_\text{post}$ and $P_\text{target}$ will give rise to the residual correlation (mutual information) of the bulk matter field, which can be used to probe the holographic bulk geometry.
Assuming an emergent locality in the holographic bulk, the expectation is that the bulk effective theory $S_\text{eff}[\zeta]$ will take the following form in the continuum limit, \eq{\label{eq:Seff}S_\text{eff}[\zeta]=\int_\mathcal{M} g^{\mu\nu}\partial_\mu\zeta^*\partial_\nu\zeta+m^2|\zeta|^2+u|\zeta|^4+\cdots,} which describes the bulk field $\zeta$ on a curved spacetime background $\mathcal{M}$ equipped with the metric tensor $g^{\mu\nu}$. Strictly speaking, $\zeta$ is not a single field but contains a tower of fields corresponding to different primary operators in the CFT. We choose to focus on the lightest component and model it by a scalar field, as it will dominate the bulk mutual information at large scale. Because the bulk field excitation is massive and can not propagate far, we expect the mutual information between the bulk variables at two different points to decay exponentially with their geodesic distance in the bulk. Following this idea, suppose $\zeta_i=\zeta(x_i,z_i)$ and $\zeta_j=\zeta(x_j,z_j)$ are two bulk field variables, then their distance $d(\zeta_i:\zeta_j)$ can be inferred from their mutual information $I(\zeta_i:\zeta_j)$ as follows \eq{\label{eq:d}d(\zeta_i:\zeta_j)=-\xi \ln \frac{I(\zeta_i:\zeta_j)}{I_0},} where the correlation length $\xi$ and the information unit $I_0$ are global fitting parameters. To estimate the mutual information among bulk field variables, we take a quadratic approximation of the bulk effective action $S_\text{eff}[\zeta]\simeq \sum_{ij}\zeta_i^*K_{ij}\zeta_j=\zeta^\dagger K \zeta$, ignoring the higher order interactions of $\zeta$ for now. This amounts to relaxing the prior distribution of the bulk field $\zeta$ to a correlated Gaussian distribution \eq{\label{eq:Pblk}P'_\text{prior}[\zeta]=\frac{1}{\sqrt{\det(2\pi K^{-1})}}e^{-\zeta^\dagger K \zeta}.} The kernel matrix $K$ is carefully designed to ensure positivity and bulk locality (see \appref{sec:kernel} for more details). 
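Given a candidate kernel $K$, drawing bulk configurations from \eqnref{eq:Pblk} is a linear transformation of white noise; a minimal sketch (our function name; we adopt the convention $\langle\zeta_i^*\zeta_j\rangle=(K^{-1})_{ij}$) is:

```python
import numpy as np

def sample_bulk_field(K, n_samples, seed=None):
    """Draw zeta from P'_prior ~ exp(-zeta^dagger K zeta) via
    zeta = (L^dagger)^{-1} eps, where K = L L^dagger (Cholesky) and eps
    is standard complex Gaussian noise, so <zeta_i^* zeta_j> = (K^{-1})_{ij}."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    L = np.linalg.cholesky(K)
    eps = (rng.standard_normal((n_samples, n))
           + 1j * rng.standard_normal((n_samples, n))) / np.sqrt(2.0)
    return np.linalg.solve(L.conj().T, eps.T).T
```

In the actual stage II training this draw is kept differentiable in $K$ (the reparametrization trick), so the gradient of the loss can flow back into the kernel.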
To determine the best fit of $K$, we initiate the stage II training to learn the prior distribution with the EHM fixed at its optimal solution obtained in the stage I training, as illustrated in \figref{fig:network}(c). We use the reparametrization trick~\cite{Kingma2013AVB} to sample the bulk field $\zeta$ from the correlated Gaussian in \eqnref{eq:Pblk}, then $\zeta$ is pushed to the boundary by the fixed EHM to evaluate the loss function $\mathcal{L}$ in \eqnref{eq:L}, and the gradient signal can back-propagate to train the kernel $K$. As we relax the Gaussian kernel $K$ for training, we can see that the loss function continues to drop in stage II, as shown in \figref{fig:network}(d). This indicates that the Gaussian model is learning to capture the residual bulk field correlation (at least partially), such that the overall performance of generation is improved. One may wonder why we do not train the generative model $G$ and the bulk field distribution $P_\text{prior}[\zeta]$ jointly. This is because there is a trade-off between these two objectives. For example, one can weaken the disentanglers in $G$ and push more correlation to the bulk field distribution $P_\text{prior}[\zeta]$. Such a trade-off would undermine our objective of minimizing bulk mutual information in training a good EHM, therefore the two training stages should be separated, or at least assigned very different learning rates. Intuitively, the machine learns the background geometry in the stage I training and the bulk field theory (to the quadratic order) in the stage II training. The trade-off between the two training stages resembles the interchangeable roles between matter and spacetime geometry in a gravity theory. \begin{figure}[htbp] \begin{center} \includegraphics[width=\columnwidth]{fig_geometry} \caption{(a) Distance matrix $D(A:B)$, indexed by the decimator indices $A,B$, obtained based on \eqnref{eq:D}.
(b) Visualization of the bulk geometry by multidimensional scaling projected to the leading three principal dimensions. Each point represents a decimator in the neural network, colored according to layers from UV to IR. The neighboring UV-IR links are added to guide the eye.} \label{fig:geometry} \end{center} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width = 0.95\linewidth]{new_distance_scaling} \caption{Distance scaling along (a) the radius and (b) the angular direction.\label{fig:distance}} \end{figure} After the stage II training, we obtain the fitted kernel matrix $K$. The mutual information $I(\zeta_i:\zeta_j)$ can be evaluated from \eq{I(\zeta_i:\zeta_j)=-\frac{1}{2}\ln\Big(1-\frac{|\langle\zeta_i^*\zeta_j\rangle|^2}{\langle\zeta_i^*\zeta_i\rangle\langle\zeta_j^*\zeta_j\rangle}\Big),} where the bulk correlation $\langle\zeta_i^*\zeta_j\rangle =(K^{-1})_{ij}$ is simply given by the inverse of the kernel matrix $K$. Then we can measure the holographic distance $d(\zeta_i:\zeta_j)$ between any pair of bulk variables $\zeta_{i}$ and $\zeta_j$ following \eqnref{eq:d}. To probe the bulk geometry, we further define the distance between two decimators $A$ and $B$ to be the average distance between all pairs of bulk variables separately associated with them, \eq{D(A:B)=\mathop{\mathrm{avg}}\limits_{\zeta_i\in A,\zeta_j \in B}d(\zeta_i:\zeta_j).\label{eq:D}} The result is presented in \figref{fig:geometry}(a). To visualize the bulk geometry qualitatively, we perform a multidimensional scaling to obtain a three-dimensional embedding of the decimators in \figref{fig:geometry}(b). One can see that a hyperbolic geometry emerges in the bulk. To be more quantitative, we label each decimator by three coordinates $(x^1,x^2,z)$, where $x=(x^1,x^2)$ denotes its center position projected to the holographic boundary and $z=2^l$ is related to its layer depth $l$ (ascending from UV to IR).
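The pipeline from the fitted kernel $K$ to the decimator distance matrix $D(A:B)$ is short enough to sketch explicitly (function names are ours; for Gaussian variables the mutual information depends only on the squared normalized correlation $|\rho_{ij}|^2$):

```python
import numpy as np

def bulk_mutual_information(K):
    """Pairwise Gaussian mutual information among bulk variables, using
    the bulk correlation <zeta_i^* zeta_j> = (K^{-1})_{ij}."""
    C = np.linalg.inv(K)
    var = np.real(np.diag(C))
    rho2 = np.abs(C) ** 2 / np.outer(var, var)  # squared correlation coefficient
    np.fill_diagonal(rho2, 0.0)
    return -0.5 * np.log1p(-rho2)

def decimator_distance(I, members_A, members_B, xi=1.0, I0=1.0):
    """Average holographic distance D(A:B) over all pairs of bulk
    variables attached to decimators A and B, with d = -xi*ln(I/I0)."""
    d = -xi * np.log(I[np.ix_(members_A, members_B)] / I0)
    return d.mean()
```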
We found that the measured distance function follows the scaling behavior \eqs{D(x^1,x^2,z:x^1+r,x^2,z)&\propto\ln r,\\ D(x^1,x^2,z:x^1,x^2,z+r)&\propto r,} as demonstrated in \figref{fig:distance}. These scaling behaviors agree with the geometry of a three-dimensional hyperbolic space $H^3$, which corresponds to the AdS$_3$ spacetime under the Wick rotation of the time dimension. This indicates that the emergent bulk geometry is indeed hyperbolic at the classical level. Our result demonstrates that the Luttinger liquid CFT can be approximately dual to a massive scalar field on an AdS$_3$ background geometry. The duality is only approximate because we have assumed a classical geometry in the bulk, ignoring all the gravitational fluctuations. In the AdS$_3$/CFT$_2$ correspondence, the bulk gravitational coupling $G_N=3\ell/2c$ is inversely proportional to the central charge $c$ of the CFT.\cite{Brown1986CCCRASEFTG} The Luttinger liquid CFT has a relatively small central charge $c=1$ and hence a large gravitational coupling in the bulk, so we should not expect a classical dual description. It would be more appropriate to consider holographic CFTs which admit classical duals. However, our current method only applies to lattice field theories of bosons with explicit action functionals, which prevents us from studying interesting holographic CFTs. Generalizing the neural RG approach to involve fermions and gauge fields and to work with continuous spacetime will be important directions for future development. \section{Summary and Discussions} In conclusion, we introduced the neural RG algorithm to allow automated construction of the EHM by machine learning instead of human design. Previously, the EHM was only designed for the free fermion CFT. Using machine learning approaches, we are able to develop more general EHMs that also apply to interacting field theories.
Given the QFT action as input, the machine effectively digests the information contained in the action and encodes it into the structure of the EHM network, which represents the emergent holographic geometry. Our result provides a concrete example that the holographic spacetime geometry can emerge as the optimal generative network of a quantum field theory.\cite{Dong2018SOGNQSRQ} The obtained EHM simultaneously provides an information-preserving RG scheme and a generative model to reproduce the QFT, which could be useful for both inference and sampling tasks. However, as a version of EHM, our approach also bears the limitations of EHM. By construction, the bulk geometry is discrete and classical, such that the model cannot resolve the sub-AdS geometry and cannot capture gravitational fluctuations. Recent developments of neural ordinary differential equation approaches\cite{Chen2018NODE,Zhang2018MFGM,Grathwohl2018FFCDSRGM} are natural ways to extend our flow-based generative model to the continuum limit. Continuous formulation of real-space RG has been discussed in the context of gradient flows\cite{Fujikawa2016GFET,Abe2018GFRG,Carosso2018NRONSUGF} and trivializing maps\cite{Luscher2010TMWFA}, where the RG flow equations are human-designed. Our research may pave the way for machine-learned RG flow equations for continuous holographic mappings. Our formalism also allows the inclusion of gravitational fluctuations in principle, by relaxing optimization to allow superposition of different EHMs. Our analysis indicates that the fluctuation of neural network parameters is related to the bulk gravitational fluctuation. The machine-learned EHM provides us with a starting point to investigate the corrections on top of the classical geometry approximation, which may enable us to go beyond holographic CFTs and study the quantum gravity dual of generic QFTs.
Another feature of EHM is that it is a one-to-one mapping of field configurations (operators) between bulk and boundary, while in holographic duality, a local bulk operator can be mapped to multiple boundary operators in different regions. A resolution\cite{Almheiri2015BLQECA,Pastawski2015HQECMBC} of the paradox is that the non-unique bulk-boundary correspondence only applies to the low-energy degrees of freedom in the bulk, which can be encoded on the boundary in a redundant and error-correcting manner. The bidirectional holographic code (BHC)\cite{Yang2016BHCSL,Qi2017HCSFRTN} was proposed as an extension of the EHM to capture the error-correction property of the holographic mapping. Extending our current network design to realize machine-learned BHC will be another open question for future research. \begin{acknowledgments} We acknowledge the stimulating discussions with Xiao-Liang Qi, John McGreevy, Maciej Koch-Janusz, C\'edric B\'eny, Koji Hashimoto, Wenbo Fu and Shang Liu. S.H.L. and L.W. are supported by the National Natural Science Foundation of China under the Grant No.~11774398 and the Strategic Priority Research Program of Chinese Academy of Sciences Grant No.~XDB28000000. \end{acknowledgments}
\section{Introduction}\label{sec1} The investigation of the properties of quasi-periodic Schr\"odinger-type operators remains very active, drawing on techniques from different areas of mathematics and physics \cite{marx_jitomirskaya_2017, Wilkinson2017,Akkermans2021}. The special case of the almost Mathieu operators (AMO) can be traced back to Harper, who proposed a model to describe crystal electrons in a uniform magnetic field \cite{Harper_1955}. Subsequently, Hofstadter observed that the spectra of the AMO can be fractal sets \cite{Hofstadter1976}. We refer to \cite{dinaburg_one-dimensional_1976, moser_example_1981} for more early examples of such operators whose spectra are Cantor-like sets, and to \cite{avila_ten_2009, bellissard_cantor_1982, jitomirskaya_metal-insulator_1999, v_mouche_coexistence_1989} for more results on the AMO and references therein. Independently, a line of investigations of self-similar Laplacian operators on graphs, fractals, and networks has emerged \cite{Rammal1984SpectrumOH, RammalToulouse1983, Alexander1984,KadanoffAlexander1983}. A fundamental tool in this framework is the spectral decimation method, initially used in physics to compute the spectrum of the Laplacian on a Sierpinski lattice \cite{KigamiAnaOnFractalsBook, BellissardRenormalizationGroup1992,StrichartzTep2012,Strichartz2003FractafoldsBO,malozemovteplyaev2003}. At the heart of this method is the fact that the spectrum of this Laplacian is completely described in terms of iterations of a rational function. For an overview of the modern mathematical approaches, applications, and extensions of the spectral decimation methods we refer to \cite{Shirai2000, StrichartzTrafoGraphLap2010 ,StrichartzMethodofAver2001, Fukushima1992OnAS, ShimaSierpinski1993, ShimapreSierpinski1991, TeplyaevInfiniteSG1998, BobsBook, BajorinSteinhurst2008, kron_asymptotics_2003} and references therein.
Recently, Chen and Teplyaev \cite{ChenTeplyPQmodel2016} used the general framework of the spectral decimation method to investigate the appearance of the singular continuous spectrum of a family of Laplacian operators. One of the ideas used in establishing this result is that these Laplacians are naturally related to self-similar operators with corresponding self-similar structures \cite{malozemovteplyaev2003}, which allows the use of complex dynamics techniques. \begin{figure} \begin{minipage}[b]{0.5\linewidth} \hspace*{-1cm} \centering \includegraphics[width=1.2\linewidth]{FiguresHofstadterButterflyPequal1over3Level7new500.png} \end{minipage \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1.16\linewidth]{FiguresHofstadterButterflyPequal1over2Level7new500.png} \end{minipage} \caption{The left panel of the figure shows a Hofstadter butterfly for a self-similar almost Mathieu operator corresponding to $\frac{1}{3}$-Laplacian whose spectrum is a Cantor set. For comparison, the (classical) Hofstadter butterfly corresponding to the standard AMO is shown in right panel.} \label{fig:HofstadterButterfliesVaryDiffPara} \end{figure} \begin{figure} \begin{minipage}[b]{0.5\linewidth} \hspace*{-1cm} \centering \includegraphics[width=1.16\linewidth]{FiguresvaryingBeta1To3ForAlphaIrrationalP1Over3and500steps.png} \end{minipage \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1.2\linewidth]{FiguresvaryingBeta1To3ForAlphaIrrationalP1Over2and500steps.png} \end{minipage} \caption{ The spectrum of $\hamilton_{p,\beta, \alpha, \theta}$ is plotted ($x$-axis) for the fixed parameters $\alpha = \frac{\sqrt{5}-1}{2}$ and $\theta=0$ while varying $\beta \in [0,3]$ (y-axis). The parameter $p$ is equal to $\frac{1}{3}$ for the left panel and to $\frac{1}{2}$ for the right panel. Both panels depict examples corresponding to an irrational $\alpha$.
} \label{fig:VaryBetaIrrationalAlpha} \end{figure} The present paper is the first in what we expect to be a research program dealing with quasi-periodic Schr\"odinger-type operators on self-similar sets such as fractals and graphs. Our goal is to initiate the study of a generalization of the discrete almost Mathieu operators for self-similar situations. In this paper we begin by considering finite or half-integer one-dimensional lattices endowed with particular self-similar structures. More general Jacobi matrices will be considered in the forthcoming work \cite{BaluMograbyOkoudjouTeplyaevJacobi2021}. In this setting, the \textit{self-similar almost Mathieu operators (s-AMO)} are formally defined in~\eqref{eq:AMOversionPQ} and will be denoted by $\hamilton_{p,\beta, \alpha, \theta }$, for $\alpha \in \rr$, $\theta \in [0,2 \pi)$ and $\beta \in \rr$. As we will show, these operators can be viewed as limits of finite dimensional analogues that can be completely understood using the spectral decimation methods developed by Malozemov and Teplyaev \cite{malozemovteplyaev2003}. Furthermore, the s-AMOs we consider are defined in terms of self-similar Laplacians $\{\Delta_p\}_{p \in (0,1)}$ which are given by~\eqref{pLaplace}. This class of self-similar Laplacians was first investigated in \cite{TeplyaevSpectralZeta2007} and arises naturally when studying the unit-interval endowed with a particular fractal measure, see also the related work \cite{fang_spectral_2019, Derfel_2012,chan_one-dimensional_2015,Bird_Ngai_Tep_2003}. Moreover, when $p=\frac{1}{2}$, the self-similar almost Mathieu operators coincide up to a multiplicative constant with the standard one-dimensional almost Mathieu operators (see (\ref{eq:UsualAMO})). Whereas in the case of the AMO the resulting spectrum is absolutely continuous when the magnetic flux is rational, the fractal case displays a spectrum that is singularly continuous. The paper is organized as follows.
In Section~\ref{HierarchicalAMO} we introduce the notations and the definition of the self-similar structure we impose on the half-integer lattice. In the first part of Section~\ref{sec-spectralDeci}, we focus on the discrete and finite s-AMOs and describe their spectra using the spectral decimation method, see Section~\ref{sec-finitegraphs}. Subsequently, in Section~\ref{sec:infinitegraphs} we prove one of our main results, Theorem~\ref{thm:SpectralSimiAMOandpqModelnew}, which states that the spectra of the AMO on $\integers_+$ can be completely described using the spectral decimation method when the parameter $\alpha$ belongs to a dense set of numbers. Moreover, these operators have purely singularly continuous spectra when $p\neq 1/2$. As will be seen, Theorem \ref{thm:SpectralSimiAMOandpqModelnew} provides a useful algebraic tool to relate the spectra of the almost Mathieu operators $\hamilton_{p,\beta, \alpha, \theta }$ to that of the family of self-similar Laplacians $\Delta_p$. In Section~\ref{IDSsection}, using the fact that the spectrum of a self-similar Laplacian $\Delta_{p}$ is the Julia set $\mathcal{J}(R_{\Delta_{p}})$ of a polynomial (defined in ~\eqref{eq:specDeciFuncLap}), we derive in Theorem \ref{thm:IDSofAMO} an explicit formula for the density of states of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ by identifying it with the weighted pre-images of the balanced invariant measure on the Julia set $\mathcal{J}(R_{\Delta_{p}})$. As a corollary, we obtain a gap labeling statement for $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$. In Section~\ref{sec:examplesappl} we present some numerical simulations pertaining to the spectra of the s-AMO as well as the integrated density of states for a variety of parameters. A first illustration of our numerical results is Figure~\ref{fig:HofstadterButterfliesVaryDiffPara}. The left panel of the figure shows a Hofstadter butterfly for a self-similar almost Mathieu operator corresponding to $\frac{1}{3}$-Laplacian.
For comparison, the Hofstadter butterfly corresponding to the standard AMO is shown in the right panel. In both cases, the spectrum is plotted ($x$-axis) for the fixed parameters $\beta=1$ and $\theta=0$ while varying $\alpha \in [0,1]$ (y-axis). For all $\alpha \in \{\tfrac{k}{3^n}, \, k=0, 1, \hdots, 3^n-1\}_{n=1}^l$ where $l\geq 1$, Theorem \ref{thm:SpectralSimiAMOandpqModelnew} describes the difference in these two figures as a transformation given by a spectral decimation function. Note that in the standard AMO case, corresponding to $p=1/2$ in our formulation, many important results are also obtained for $\alpha$ irrational, but in the fractal setting the methods for irrational $\alpha$ have not yet been developed. Figure~\ref{fig:VaryBetaIrrationalAlpha} depicts examples corresponding to the case of an irrational $\alpha = \frac{\sqrt{5}-1}{2}$. The spectrum of $\hamilton_{p,\beta, \alpha, \theta }^{(l)}$ is plotted ($x$-axis) for the fixed parameters $\alpha = \frac{\sqrt{5}-1}{2}$ and $\theta=0$ while varying $\beta \in [0,3]$ (y-axis). We end this introduction with a perspective on a general framework underlying the present paper. In a forthcoming and companion paper \cite{BaluMograbyOkoudjouTeplyaevJacobi2021}, we identify a class of Jacobi operators that extend the present results to almost Mathieu operators defined in the fractal setting. We refer to this class of operators as piecewise centrosymmetric Jacobi operators \cite{centrosymmetric1,centrosymmetric2,centrosymmetric3}. In this general setting we show that the spectral decimation function arises from a particular system of orthogonal polynomials associated with the aforementioned Jacobi matrix. In particular, this spectral decimation function is computable using a three-term recursion formula associated with this system of orthogonal polynomials. In the process, we avoid the Schur complement computation that could involve resolvent calculations of large matrices.
Additionally, the general setting we consider in \cite{BaluMograbyOkoudjouTeplyaevJacobi2021} can be further extended to higher-dimensional graphs and would allow us to define Jacobi-type operators on graphs like the Sierpinski lattices or Diamond graphs. We plan to use this approach to investigate some of the questions in \cite{AvniSimon2020}. \section{Self-similar Laplacians and almost Mathieu operators} \label{HierarchicalAMO} In this section we introduce the notations and the definition of the self-similar structure we impose on the half-integer lattice. This self-similar structure describes a random walk on the half-line and gives rise to a class of self-similar probabilistic graph Laplacians $\Delta_p$. Moreover, it provides a natural finite graph approximation for the half-integer lattice. Regarding an almost Mathieu operator as a Schr\"odinger-type operator of the form $\Delta + U$ (where $U$ is a potential operator) allows us to define the class of self-similar almost Mathieu operators as $\Delta_p + U$. \subsection{Self-similar $p$ Laplacians on the half-integer lattice}\label{subsec2.1} We consider a family of self-similar Laplacians on the integers half-line. This class of Laplacians was first investigated in \cite{TeplyaevSpectralZeta2007} and arises naturally when studying the unit-interval endowed with a particular fractal measure. For more on this Laplacian and some related work we refer to \cite{Derfel_2012, chan_one-dimensional_2015, wave17}. The Laplacian's spectral-type was investigated in \cite{ChenTeplyPQmodel2016}, where the emergence of singularly continuous spectra was proven. Furthermore, this class of Laplacians serves as a toy model for generating singularly continuous spectra. In this section, we introduce the $p$-Laplacians and review some of their properties that will be needed to state and prove our results, and refer to \cite{ChenTeplyPQmodel2016} for more details.
We also introduce a corresponding self-similar structure on the half-integer line. Let $\mathbb{Z}_+$ be the set of nonnegative integers and $\ell(\mathbb{Z}_+)$ be the linear space of complex-valued sequences $(f(x))_{x \in \mathbb{Z}_+}$. Let $p\in (0,1)$, for each $x\in \mathbb{Z}_+ \setminus \{0\}$, we define $m(x)$ to be the largest natural number $m$ such that $3^m$ divides $x$. For $f \in \ell(\mathbb{Z}_+)$ we define a \textit{self-similar Laplacian} $\Delta_p$ by, \begin{align}\label{pLaplace} (\Delta_p f)(x) = \left\{\begin{array}{ll} f(0)-f(1), & \text{if}~x=0 \\ f(x)-(1-p)f(x-1)-pf(x+1), &\text{if}~3^{-m(x)}x \equiv 1~\pmod 3 \\ f(x) - pf(x-1)-(1-p)f(x+1), &\text{if}~3^{-m(x)}x \equiv 2~\pmod 3 \end{array}\right.. \end{align} We equip $\ell(\mathbb{Z}_+)$ with its canonical basis $\{\delta_x\}_{x \in \mathbb{Z}_{+}}$ where \begin{equation} \label{eq:canonicalBasis} \delta_x(y) = \begin{cases} \ 0 & \quad \text{if } x \neq y \\ \ 1 & \quad \text{if } x=y. \end{cases} \end{equation} The matrix representation of $\Delta_p$ with respect to the canonical basis has the following Jacobi matrix \begin{equation} \jacobi_{+,p} = \begin{pmatrix} 1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & \dots \\ p-1 & 1 & - p & 0 & 0 & 0 & 0 & 0 & \dots \\ 0 & - p & 1 & p-1 & 0 & 0 & 0 & 0 & \dots \\ 0 & 0 & p-1 & 1 & - p & 0 & 0 & 0 & \dots \\ 0 & 0 & 0 & p-1 & 1 & - p & 0 & 0 & \dots \\ 0 & 0 & 0 & 0 & - p & 1 & p-1 & 0 & \dots \\ 0 & 0 & 0 & 0 & 0 & - p & 1 & p-1 & \dots \\ 0 & 0 & 0 & 0 & 0 & 0 & p-1 & 1 & \dots\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. \end{equation} \text{ } \\ The case $p = \frac{1}{2}$ recovers the classical one-dimensional Laplacian (probabilistic graph Laplacian). 
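For concreteness, a finite truncation of the matrix $\jacobi_{+,p}$ can be generated directly from the mod-3 rule in \eqref{pLaplace}; the following is a sketch (the function names are ours):

```python
import numpy as np

def m(x):
    """Largest m such that 3^m divides x (for x >= 1)."""
    out = 0
    while x % 3 == 0:
        x //= 3
        out += 1
    return out

def jacobi_plus(p, size):
    """Leading size-by-size block of the Jacobi matrix of Delta_p in the
    canonical basis; interior rows follow the mod-3 rule above."""
    J = np.eye(size)
    J[0, 1] = -1.0
    for x in range(1, size - 1):
        if (x // 3 ** m(x)) % 3 == 1:   # 3^{-m(x)} x = 1 (mod 3)
            J[x, x - 1], J[x, x + 1] = -(1 - p), -p
        else:                           # 3^{-m(x)} x = 2 (mod 3)
            J[x, x - 1], J[x, x + 1] = -p, -(1 - p)
    return J
```

Setting \verb|p = 0.5| reproduces the classical probabilistic graph Laplacian, with all off-diagonal entries equal to $-1/2$ in the interior.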
We adopt the notation used to describe a random walk on the half-line with reflection at the origin and refer to the off-diagonal entries in $\jacobi_{+,p}$ by the transition probabilities \begin{equation} \label{eq:transitionProbabilities} p(x,y) = - \jacobi_{+,p}[x,y], \quad \text{ for } x \neq y. \end{equation} Let $\pi$ be a $\sigma$-finite measure on $\mathbb{Z}_+$. We define the Hilbert space \begin{equation*} \ell^2(\integers_{+}, d \pi) = \{ \psi: \mathbb{Z}_+ \to \complex \ | \ \sum^{\infty}_{x =0} \ |\psi (x)|^2 \pi(x) < \infty \}, \quad \langle f, g \rangle_{\ell^2} = \sum_{x=0}^{\infty} \overline{f(x)} g(x) \pi(x). \end{equation*} Let $n \in \mathbb{Z}_+$; the ($n$-th) Wronskian of $f,g \in \ell(\mathbb{Z}_+)$ is given by \begin{align} W_n(f,g) = \pi(n) p(n,n+1) \Big( \overline{f(n+1)}g (n) -\overline{f(n)}g(n+1) \Big). \end{align} \begin{lemma} \label{lem:SelfAdjointnessOfLap} Let $f,g \in \ell^2(\integers_{+}, d \pi)$ and $n \in \mathbb{Z}_+$. Assume that the measure $\pi$ satisfies the reversibility condition, i.e., $\pi(x)p(x,y)=\pi(y)p(y,x)$ holds for every $x,y \in \mathbb{Z}_+$. Then the discrete Green's second identity holds. That is, we have: \begin{equation} \sum_{x=0}^n \overline{f(x)} \Delta_p g (x) \pi(x) - \sum_{x=0}^n \overline{ \Delta_p f(x) } g (x) \pi(x) = W_n(f,g). \end{equation} Moreover, the operator $\Delta_p$ is a bounded self-adjoint operator on $ \ell^2(\integers_{+}, d \pi)$. \end{lemma} \begin{proof} Direct computation gives for $n \in \mathbb{Z}_+\backslash \{0\}$, \begin{align*} \overline{f(n)} \Delta_p g(n) \pi(n) - \overline{\Delta_p f (n)} g (n) \pi(n) &= W_n(f,g) - \pi(n) p(n,n-1) \Big(\overline{f(n)}g (n-1) - \overline{f(n-1)} g(n) \Big). \end{align*} Using the reversibility condition, i.e. $\pi(n) p(n,n-1) = \pi(n-1) p(n-1,n)$, we obtain \begin{align*} \overline{f(n)} \Delta_p g(n) \pi(n) - \overline{\Delta_p f (n)} g (n) \pi(n) &= W_n(f,g) - W_{n-1}(f,g).
\end{align*} For $n=0$, we compute \begin{align*} \overline{f(0)} \Delta_p g(0) \pi(0) - \overline{\Delta_p f (0)} g (0) \pi(0) = \overline{f(1)} p(0,1) g(0) \pi(0) - \overline{f(0)} p(0,1) g(1) \pi(0) = W_0(f,g). \end{align*} Hence, a telescoping trick gives \begin{equation*} \sum_{x=0}^n \overline{f(x)} \Delta_p g (x) \pi(x) - \sum_{x=0}^n \overline{\Delta_p f(x)} g (x) \pi(x) = W_n(f,g). \end{equation*} For $f,g \in \ell^2( \mathbb{Z}_+, d \pi)$, we conclude that $ \langle f, \Delta_p g \rangle_{\ell^2} - \langle \Delta_p f, g \rangle_{\ell^2} = \lim_{n \to \infty} W_n(f,g) =0$. \end{proof} \begin{figure}[!htb] \centering \includegraphics[width=1.\textwidth]{figure3G1_initial.png} \caption{(Left) Initializing the graph $G_0$. (Right) The graph $G_1$. While the vertices are labeled by the addresses, the labeling of the edges represents the transition probabilities \eqref{eq:transitionProbabilities}. } \label{fig:G1_initialNeumann} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=1.\textwidth]{figure4NeumannModelGraph.png} \caption{The visual representation of the protograph indicates how to apply the substitution rule, see Definition \ref{def:finiteGraphApproxNeumann}.} \label{fig:modelGraphNeumann} \end{figure} We regard the integers half-line $ \mathbb{Z}_+$ endowed with $\Delta_p$ as a hierarchical or substitution infinite graph, see \cite{malozemovteplyaev1995, malozemovteplyaev2003} for more details. We define a sequence of finite directed weighted graphs $\{G_l \}_{l \in \nn}$, such that $G_l = (V_l, E_l) $ is constructed inductively according to a substitution rule. We set $V_l =\mathbb{Z}_+ \cap [0,3^l]$ for all $l \geq 0$, where $G_0 = (V_0, E_0)$ is the graph shown in Figure \ref{fig:G1_initialNeumann} (Left). We illustrate the substitution rule by constructing $G_1$ shown in Figure \ref{fig:G1_initialNeumann} (Right). We first introduce the \textit{protograph} shown in Figure \ref{fig:modelGraphNeumann}, which consists of the four vertices $\{m_0, m_1,m_2, m_3\}$.
We insert three copies of $G_0$ in the protograph according to the following rule. Between any two vertices $m_i$ and $m_{i+1}$, we substitute the three dots with a copy of $G_0$, identifying the vertex $0$ in $G_0$ with the vertex $m_i$, and the vertex $1$ in $G_0$ with the vertex $m_{i+1}$. We substitute the edges $(0,1)$ and $(1,0)$ in $G_0$ with the corresponding directed weighted edges as indicated in the protograph, see Figure \ref{fig:modelGraphNeumann}. We repeat the procedure and insert copies of $G_0$ between the vertices, $m_0$, $m_1$, then $m_1$, $m_2$ and finally $m_2$, $m_3$. The resulting linear directed weighted graph is denoted by $G_1$, Figure \ref{fig:G1_initialNeumann} (Right). The graph $G_1$ consists of $4$ vertices, which we rename to $\{0,1,2,3 \}$, so that $m_0$ corresponds to the vertex $0$, $m_1$ to $1$ , $m_2$ to $2$ and $m_3$ to $3$. In particular, this gives $V_1 =\mathbb{Z}_+ \cap [0,3^1]$ and $G_1$ can be viewed as a truncation of $\mathbb{Z}_+$ (regarded as a hierarchical infinite graph) to the vertices $\{0,1,2 ,3 \}$, whereby a reflecting boundary condition is imposed on the vertex $3$. Similarly, we construct $G_2$ by inserting $G_1$ in the protograph, see Figure \ref{fig:G2Neumann}. \begin{definition} \label{def:finiteGraphApproxNeumann} Let $G_0 = (V_0, E_0)$ be the graph shown in Figure \ref{fig:G1_initialNeumann} (Left). We define the sequence of graphs $\{G_l \}_{l \in \nn}$ inductively. Suppose $G_{l-1}=(V_{l-1}, E_{l-1})$ is given for some integer $l \geq 1$, where $V_{l-1} =\mathbb{Z}_+ \cap [0,3^{l-1}]$. The graph $G_{l}=(V_{l}, E_{l})$ is constructed according to the following \textit{substitution rule}. We repeat the following steps for $i \in \{0,1,2\}$: \begin{enumerate} \item Insert a copy of $G_{l-1}$ between the two vertices $m_i$ and $m_{i+1}$ of the protograph shown in Figure \ref{fig:modelGraphNeumann} in the following sense. 
We identify the vertex $0$ in $G_{l-1}$ with the vertex $m_i$ and similarly, we identify the vertex $3^{l-1}$ in $G_{l-1}$ with the vertex $m_{i+1}$. \item We substitute the edges $(0,1)$ and $(3^{l-1},3^{l-1}-1)$ in $G_{l-1}$ with the corresponding directed weighted edges as indicated in the protograph, see Figure \ref{fig:modelGraphNeumann}. \end{enumerate} \end{definition} The resulting linear directed weighted graph is denoted by $G_{l}=(V_{l}, E_{l})$. The graph $G_{l}$ consists of $3^l +1$ vertices, which we rename to $\{0,1,\dots,3^l \}$, so that $m_0$ corresponds to the vertex $0$, \dots, $m_3$ corresponds to the vertex $3^l$. In particular, this gives $V_l =\mathbb{Z}_+ \cap [0,3^l]$. The vertices $0$ and $3^l$ are the boundary vertices of $G_l$, and we denote them by $\partial G_l = \{0,3^l\}$. The interior vertices of $G_l$ are given by $V_l \backslash \partial G_l$. Each graph $G_{l}=(V_{l}, E_{l})$ is naturally associated with a \textit{probabilistic graph Laplacian}, denoted $\Delta^{(l)}_{p}$, and given by \begin{equation*} \Delta^{(l)}_{p} f (x)= \Delta_p f (x), \quad \text{for } \ l \geq 0 \ \text{ and } \ x \in [0,3^l-1]. \end{equation*} \text{ } \\ Note that for $l=0$, the probabilistic graph Laplacian $\Delta^{(0)}_{p}$ is independent of the parameter $p$, and therefore we omit it from the notation in this case: \begin{equation} \label{eq:probabilisticLaplacianLevel0} \Delta^{(0)}: =\Delta^{(0)}_{p}= \begin{pmatrix} 1 & -1\\ -1 & 1 \end{pmatrix}. \end{equation} \begin{figure}[htp] \centering \includegraphics[width=1.\textwidth]{figure5NeumannG2.png} \caption{ Visual illustration of the substitution rule. (Top) A copy of $G_1$. The deleted edges correspond to the edges that are replaced when applying the substitution rule. (Bottom) The graph $G_2$, which is constructed by inserting the three copies of $G_1$ in the protograph shown in Figure \ref{fig:modelGraphNeumann}.
While the vertices are labeled by the addresses, the labeling of the edges represents the transition probabilities (off-diagonal entries in the self-similar Laplacian).} \label{fig:G2Neumann} \end{figure} \subsection{The self-similar almost Mathieu operators}\label{subsec2.2} We introduce a \textit{self-similar version of almost Mathieu operators} defined with respect to the self-similar Laplacian $\Delta_p$ introduced in the previous section. Let $f \in \ell(\mathbb{Z}_+)$, $\alpha \in \rr$, $\theta \in [0,2 \pi)$ and $\beta \in \rr$. We define \begin{align} \label{eq:AMOversionPQ} (\hamilton_{p,\beta, \alpha, \theta } f)(x) = \left\{\begin{array}{ll} \beta \cos{( \theta)} f(0)-f(1), & \text{if}~x=0. \\ \beta \cos{(2 \pi \alpha x + \theta)} f(x) & p(x,x-1)=1-p, \ p(x,x+1)=p, \\ \quad \quad -p(x,x-1)f(x-1)-p(x,x+1)f(x+1), & \text{if }~3^{-m(x)}x \equiv 1\pmod 3. \\ \beta \cos{(2 \pi \alpha x + \theta)} f(x) & p(x,x-1)=p, \ p(x,x+1)=1-p, \\ \quad \quad -p(x,x-1)f(x-1)-p(x,x+1)f(x+1),&\text{if}~3^{-m(x)}x \equiv 2\pmod 3. \end{array}\right.. \end{align} Setting $p=\frac{1}{2}$ recovers, up to a multiplicative constant, the common form of the one-dimensional almost Mathieu operators, i.e. for $x \in \mathbb{Z}_+ \backslash \{0\}$, \begin{align} \label{eq:UsualAMO} (\hamilton_{\frac{1}{2},\beta, \alpha, \theta } f)(x) = -\frac{1}{2} \Big( f(x+1) + f(x-1) -2 \beta \cos{(2 \pi \alpha x + \theta)} f(x) \Big). \end{align} By Lemma \ref{lem:SelfAdjointnessOfLap}, $\hamilton_{p,\beta, \alpha, \theta}$ is a bounded self-adjoint operator on $ \ell^2(\integers_{+}, d \pi)$. For the sequence of graphs $\{G_l\}_{l \in \nn}$ given in Definition~\ref{def:finiteGraphApproxNeumann}, we associate the truncation $\hamilton_{p,\beta, \alpha, \theta }^{(l)} := \hamilton_{p,\beta, \alpha, \theta }|_{V_l} $ of the almost Mathieu operator~\eqref{eq:AMOversionPQ}, where we recall that $V_{l} = \mathbb{Z}_+ \cap [0,3^l]$.
In particular, $\hamilton_{p,\beta, \alpha, \theta }^{(l)} $ is given by \begin{align} \label{eq:truncatedAMO} (\hamilton_{p,\beta, \alpha, \theta }^{(l)} f)(x) = \left\{\begin{array}{ll} \beta \cos{( \theta)} f(0)-f(1), & \text{if}~x=0. \\ \beta \cos{(2 \pi \alpha 3^l + \theta)} f(3^l)-f(3^l-1), & \text{if}~x=3^l. \\ \beta \cos{(2 \pi \alpha x + \theta )} f(x) & p(x,x-1)=1-p, \ p(x,x+1)=p, \\ \quad \quad -p(x,x-1)f(x-1)-p(x,x+1)f(x+1), & \text{if }~3^{-m(x)}x \equiv 1\pmod 3. \\ \beta \cos{(2 \pi \alpha x + \theta)} f(x) & p(x,x-1)=p, \ p(x,x+1)=1-p, \\ \quad \quad -p(x,x-1)f(x-1)-p(x,x+1)f(x+1),&\text{if}~3^{-m(x)}x \equiv 2\pmod 3. \end{array}\right.. \end{align} \text{ } \\ Note that, similarly to the construction of the $\{G_{l}\}_{l \geq 0}$, we impose a reflecting boundary condition on the vertex $3^l$. The restriction of $\hamilton_{p,\beta, \alpha, \theta }^{(l)}$ to the interior vertices of $G_l$ is denoted by $\hamilton_{p,\beta, \alpha, \theta }^{(l), D}$, i.e. \begin{align} \label{eq:DiriAMO} \hamilton_{p,\beta, \alpha, \theta }^{(l)} = \left( \begin{array}{c | c | c} \beta \cos{( \theta)} & \begin{array}{c c c} -1 & 0 & \dots \end{array} & 0 \\ \hline \vdots & \begin{array}{c c c} & & \\ & \mbox{\normalfont\Large\bfseries $\hamilton_{p,\beta, \alpha, \theta }^{(l), D}$} & \\ & & \end{array} & \vdots \\ \hline 0 & \begin{array}{c c c} \dots & 0 & -1 \end{array} & \beta \cos{(2 \pi \alpha 3^l + \theta)} \end{array} \right). \end{align} \text{ } \\ We identify $\hamilton_{p,\beta, \alpha, \theta }^{(l),D}$ with $\hamilton_{p,\beta, \alpha, \theta }^{(l)}$ restricted to the domain $\{ f:V_{l} \to \complex \ | \ f(0)=f(3^l)=0 \ \}$. We refer to $\hamilton_{p,\beta, \alpha, \theta }^{(l),D}$ as the \textit{Dirichlet almost Mathieu operator of level $l$}. In the following, we regard the matrix $\hamilton_{p,\beta, \alpha, \theta }^{(l)}$ as extended by zeros to a semi-infinite matrix. \begin{proposition} Let $f \in \ell^2(\integers_{+}, d \pi) $.
Then $$\begin{cases} \ \lim_{l \to \infty} || \hamilton_{p,\beta, \alpha, \theta } f - \hamilton_{p,\beta, \alpha, \theta }^{(l)} f || = 0\\ \ \lim_{l \to \infty} || \big( z- \hamilton_{p,\beta, \alpha, \theta } \big)^{-1} f - \big( z- \hamilton_{p,\beta, \alpha, \theta }^{(l)} \big)^{-1} f || = 0.\end{cases}$$ \end{proposition} The strong convergence is evident, and the strong resolvent convergence follows from \cite{Weidmann1997}. The reader is also referred to \cite{reed1981functional}. We note that the statement holds as well for $\Delta^{(l)}_{p}$ and $\Delta_{p}$. \section{Spectral analysis of the self-similar almost Mathieu operators} \label{sec-spectralDeci} In this section we prove our two main results. First, we consider the truncated self-similar AMOs $\hamilton_{p,\beta, \alpha, \theta }^{(l)}$ and prove that their spectra can be determined using the spectral decimation method when the parameter $\alpha$ is restricted to the set $\{\tfrac{k}{3^n},\, k=0,1, \hdots, 3^n -1\}_{n=1}^l$, where $l\geq 1$ is the truncation level. In particular, this finite graph case is given in Theorem~\ref{thm:firstResultFiniteGraphs}. Subsequently, we state Theorem~\ref{thm:SpectralSimiAMOandpqModelnew} under the same restriction on the parameter $\alpha$. \subsection{Finite graphs case}\label{sec-finitegraphs} This section briefly reviews a now standard technique used in analysis on fractals, called \textit{spectral decimation}. We prove that it can be applied to the sequence of almost Mathieu operators $\hamilton_{p,\beta, \alpha, \theta }^{(l)}$ for $\alpha = \frac{k}{3^n}$, $k \in \integers$, $1 \leq n \leq l$ and $\theta =0$. The method has been applied intensively in the context of Laplacians on fractals and self-similar graphs. Its central idea is that the spectrum of such a Laplacian can be completely described in terms of iterations of a rational function, called the \textit{spectral decimation function}.
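As a concrete numerical illustration of this idea (not used in the proofs), one can compare the eigenvalues of $\Delta^{(2)}_{p}$ with the preimages predicted by the decimation function $R_{\Delta_{p}}$ recorded below in Proposition \ref{prop: SpectralDecimpLap}. The following sketch assumes that $m(x)$ denotes the $3$-adic valuation of $x$, so that the congruences $3^{-m(x)}x \equiv 1,2 \pmod 3$ select the two edge orientations:

```python
# Numerical illustration (not part of the proofs) of spectral decimation
# for the self-similar Laplacian Delta_p.
# Assumption: m(x) is the 3-adic valuation of x.
import numpy as np

def laplacian(p, l):
    """Matrix of Delta_p^{(l)} on {0, ..., 3^l} with reflecting boundaries."""
    N = 3**l + 1
    A = np.zeros((N, N))
    A[0, 0], A[0, 1] = 1.0, -1.0          # boundary vertex 0
    A[N-1, N-1], A[N-1, N-2] = 1.0, -1.0  # boundary vertex 3^l
    for x in range(1, N - 1):
        y = x
        while y % 3 == 0:                 # strip the factor 3^{m(x)}
            y //= 3
        pl, pr = (1 - p, p) if y % 3 == 1 else (p, 1 - p)
        A[x, x], A[x, x-1], A[x, x+1] = 1.0, -pl, -pr
    return A

def preimages(p, lam):
    """Real roots of R_{Delta_p}(z) = lam, i.e. of
    z^3 - 3 z^2 + (2 + p(1-p)) z - lam p(1-p) = 0."""
    return np.roots([1.0, -3.0, 2.0 + p*(1 - p), -lam * p*(1 - p)]).real

p, l = 1/3, 2
# Prediction for sigma(Delta_p^{(2)}): {0,2} with {p, 2-p} and R^{-1}({p, 2-p}).
predicted = np.sort(np.concatenate(
    [[0.0, 2.0, p, 2 - p], preimages(p, p), preimages(p, 2 - p)]))
computed = np.sort(np.linalg.eigvals(laplacian(p, l)).real)
```

Sorting both lists, one recovers numerically the identity $\sigma(\Delta^{(2)}_{p}) = \sigma(\Delta^{(0)}) \cup \{p, 2-p\} \cup R_{\Delta_{p}}^{-1}(\{p,2-p\})$ of Proposition \ref{prop: SpectralDecimpLap}.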
Below, we extend this method to the self-similar almost Mathieu operators when the frequency $\alpha$ is appropriately calibrated with the hierarchical structure of the self-similar Laplacian. In this case, we provide a complete description of the spectrum of the $l$th-level almost Mathieu operators $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$ by relating it to the spectrum of the $(l-n)$th-level Laplacian, i.e. $\sigma(\Delta^{(l-n)}_{p})$. The following theorem is the main result of this section. \begin{theorem} \label{thm:firstResultFiniteGraphs} Let $p\in (0,1)$, $\beta \in \rr$, $l \geq 1$, and $1 \leq n \leq l$ be fixed. Let $\theta =0$, and for $k \in \{0,\dots, 3^n-1\}$ set $\alpha = \frac{k}{3^n}$. There exists a polynomial $R_{p,\beta, \frac{k}{3^n},0 }$ of order $3^n$ such that \begin{equation} \sigma \Big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)} \Big) = R^{-1}_{p,\beta, \frac{k}{3^n},0 } \Big( \sigma(\Delta^{(l-n)}_{p}) \backslash \sigma(\Delta^{(0)}) \Big) \bigcup \sigma \Big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n)} \Big). \end{equation} \text{ } \\ Furthermore, for $n=1$ and $k \in \{1,2\}$, the polynomial is given by \begin{equation*} R_{p,\beta, \frac{k}{3}, 0 }(z) = \frac{\left( - \beta + 2 p - 2 z \right) \left(\beta^{2} + 2 \beta p + \beta z - 2 p z - 2 p - 2 z^{2} + 2\right)}{4 p \left(1-p\right)}. \end{equation*} \end{theorem} Before giving the proof of this result, we recall some facts that can be found in~\cite{malozemovteplyaev2003}. Let $\mathcal{H}$ and $\mathcal{H}_0$ be Hilbert spaces, and let $U:\mathcal{H}_0 \to \mathcal{H}$ be an isometry. Suppose that $H$ and $H_0$ are bounded linear operators on $\mathcal{H}$ and $\mathcal{H}_0$, respectively, and that $\phi,\psi$ are complex-valued functions.
Following \cite[Definition 2.1]{malozemovteplyaev2003}, we say that the operator $H$ is \textit{spectrally similar} to the operator $H_0$ with functions $\phi$ and $\psi$ if \begin{equation} \label{eq:OriginalSpectSimi} U^{\ast}(H-z)^{-1}U=(\phi(z)H_0 - \psi(z))^{-1}, \end{equation} for all $z \in \complex$ such that the two sides of~\eqref{eq:OriginalSpectSimi} are well defined. In particular, for $z$ in the domain of both $\phi$ and $\psi$ such that $\phi(z)\neq0$, we have $z\in\rho(H)$ (the resolvent set of $H$) if and only if $R(z)=\frac{\psi(z)}{\phi(z)}\in\rho(H_0)$ (the resolvent set of $H_0$). We call $R(z)$ the {\em spectral decimation function}. The functions $\phi(z)$ and $\psi(z)$ are usually difficult to express explicitly, but they can be computed effectively using the notion of the Schur complement. We refer to~\cite{malozemovteplyaev2003,BajorinSteinhurst2008,Bajorin_2007} for some examples. Identifying $\mathcal{H}_0$ with a closed subspace of $\mathcal{H}$ via $U$, let $\mathcal{H}_1$ be the orthogonal complement and decompose $H$ on $\mathcal{H}=\mathcal{H}_0 \oplus \mathcal{H}_1$ in the block form \begin{equation} \label{eq:lDecompo} H=\begin{pmatrix} T & J^T\\ J & X \end{pmatrix}. \end{equation} \begin{lemma}[\cite{malozemovteplyaev2003}, Lemma 3.3] \label{lem:Lemma33} For $z \in \rho(H)\cap \rho(X)$ the operators $H$ and $H_0$ are spectrally similar if and only if the Schur complement of $H-zI$, given by $S_{H}(z)=T-z - J^T(X-z)^{-1}J$, satisfies \begin{equation} \label{eq:schurComp1} S_{H}(z)= \phi(z) H_0 - \psi(z)I. \end{equation} \end{lemma} The set $\mathscr{E}_{H}:=\{z \in \complex \ | \ z \in \sigma(X) \text{ or } \phi(z)=0 \}$ is called the \textit{exceptional set} of $H$ and plays a crucial role in the spectral decimation method. Spectral decimation has already been implemented for $\{ \Delta^{(n)}_{p}\}_{n \geq 0}$. For the sake of completeness, we state this result and refer to \cite[Lemma 5.8]{TeplyaevSpectralZeta2007} for more details.
\begin{proposition}\cite[Lemma 5.8]{TeplyaevSpectralZeta2007} \label{prop: SpectralDecimpLap} Let $n \geq 1$. Then $\Delta^{(n)}_{p}$ is spectrally similar to $\Delta^{(n-1)}_{p}$ (with respect to the functions given in \cite{TeplyaevSpectralZeta2007}). The exceptional set is $$\mathscr{E}_{\Delta_{p}}=\{1+p, 1-p \},$$ and the spectral decimation function is given by \begin{equation} \label{eq:specDeciFuncLap} R_{\Delta_{p}}(z) = \frac{z^3-3z^2+(2+p(1-p))z}{p(1-p)} = \frac{z(z+p-2)(z-p-1)}{p(1-p)}. \end{equation} Moreover, $\sigma(\Delta^{(0)})=\{0,2\}$ and $\sigma(\Delta^{(n)}_{p})=\sigma(\Delta^{(0)}) \cup \bigcup_{i=0} ^{n-1} R_{\Delta_{p}}^{-i} (\{ p,2-p\}) $ for $n \geq 1$. \end{proposition} For the rest of this section, we fix $p\in (0,1)$, $\beta \in \rr$, $l \geq 1$, and $1 \leq n \leq l$. We set $\theta =0$, $k \in \{0,\dots, 3^n-1\}$ and $\alpha = \frac{k}{3^n}$. We apply Lemma \ref{lem:Lemma33} to the level-$l$ almost Mathieu operator $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$. We obtain the block form (\ref{eq:lDecompo}) by decomposing $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$ with respect to \begin{align} \mathcal{H}_0:=\text{span}\{\delta_v \ | \ v \text{ mod } 3^n \equiv 0 \} ,\quad \mathcal{H}_1:=\text{span}\{\delta_v \ | \ v \text{ mod } 3^n \not\equiv 0 \}, \end{align} where $\{\delta_x\}_{x \in V_{l}}$ is the canonical basis defined in (\ref{eq:canonicalBasis}) and $V_{l} = \mathbb{Z}_+ \cap [0,3^l]$. In practical terms: \begin{enumerate} \item We rearrange the vertices in such a way that all vertices $v \in V_l$ with $v \text{ mod } 3^n \equiv 0$ appear before all vertices with $v \text{ mod } 3^n \not\equiv 0$, i.e. $V_l=\{0,3^n,\dots, 3^l, 1, 2, \dots , 3^l-1\}$. \item We represent the matrix $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$ with respect to the canonical basis so that the order of the basis vectors follows the order of the vertices in step one.
\item The matrix $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$ is then decomposed into the following block form \begin{equation} \label{eq:lDecompoLevelL} \hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)} = \begin{pmatrix} T_l & J_l^T\\ J_l & X_l \end{pmatrix}, \end{equation} where $T_l$ and $X_l$ correspond to the basis vectors $\{\delta_v \ | \ v \text{ mod } 3^n \equiv 0 \}$ and $\{\delta_v \ | \ v \text{ mod } 3^n \not\equiv 0 \}$, respectively. \end{enumerate} We observe that $T_l$ is a multiple of the identity matrix and that $X_l$ is a block diagonal matrix in which the diagonal blocks are copies of the $n$th-level Dirichlet almost Mathieu operator $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n),D}$, i.e. \begin{equation} \label{eq: decompoMatrices} T_l = \beta \left(\begin{matrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{matrix}\right), \quad \quad X_l = \renewcommand*{\arraystretch}{1.4} \left( \begin{array}{@{}cc@{}c@{}c} \ \mbox{\normalfont\Large\bfseries $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n), D}$} & & & \\ & \mbox{\normalfont\Large\bfseries $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n), D}$} & & \\ & & \ \mbox{\normalfont\Large\bfseries $\ddots$} \ \text{ } & \\ &&& \mbox{\normalfont\Large\bfseries $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n), D}$} \end{array} \right). \end{equation} In particular, $\sigma(X_l)= \sigma(\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n),D})$. \begin{lemma} \label{lem:existOfphiandVerphi} Let $p\in (0,1)$, $\beta \in \rr$, $l \geq 1$, and $1 \leq n \leq l$ be fixed. Moreover, we set $\theta =0$, $\alpha = \frac{k}{3^n}$, for $k \in \{0,\dots, 3^n-1\}$. There exist functions $ \phi_{p,\beta, \frac{k}{3^n},0}$ and $\psi_{p,\beta, \frac{k}{3^n},0 }$, such that $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$ is spectrally similar to $\Delta^{(l-n)}_{p}$ with respect to $ \phi_{p,\beta, \frac{k}{3^n},0}$ and $\psi_{p,\beta, \frac{k}{3^n},0}$.
\end{lemma} \begin{proof} Due to \cite[Lemma 3.10]{malozemovteplyaev2003}, it is sufficient to prove the existence of functions $ \phi_{p,\beta, \frac{k}{3^n},0} $ and $\psi_{p,\beta, \frac{k}{3^n},0 }$ such that the $n$th-level operator $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n)}$ is spectrally similar to \begin{equation} \label{eq:probabilisticLaplacianLevel0new} \Delta^{(0)} = \begin{pmatrix} 1 & -1\\ -1 & 1 \end{pmatrix} \end{equation} with the same functions $ \phi_{p,\beta, \frac{k}{3^n},0} $ and $\psi_{p,\beta, \frac{k}{3^n},0 }$. The assumption $\alpha = \frac{k}{3^n}$ guarantees that the matrix $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n)}$ is symmetric with respect to its boundary vertices in the sense of \cite[Definition 4.1]{malozemovteplyaev2003}. The spectral similarity of $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n)}$ and $\Delta^{(0)}$ then follows from \cite[Lemma 4.2]{malozemovteplyaev2003}. \end{proof} \begin{remark} As the domain of $ \phi_{p,\beta, \frac{k}{3^n},0} $ and $\psi_{p,\beta, \frac{k}{3^n},0 }$ we use the resolvent set $\rho(X_l)$ of $X_l$, where $X_l$ is the block diagonal matrix in~\eqref{eq: decompoMatrices}. For more details about these facts we refer to~\cite[Corollary 3.4]{malozemovteplyaev2003}. \end{remark} \begin{proposition} \label{prop:PropertiesOfR} Let $p\in (0,1)$, $\beta \in \rr$, $l \geq 1$, and $1 \leq n \leq l$ be fixed, and set $\theta =0$, $\alpha = \frac{k}{3^n}$, for $k \in \{0,\dots, 3^n-1\}$. The following statements hold: \begin{enumerate} \item $\phi_{p,\beta, \frac{k}{3^n},0}(z) \neq 0 $ for all $z \in \rho(X_l)$. \item The exceptional set of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$ is given by $\mathscr{E}_{p,\beta, \frac{k}{3^n},0}=\sigma(\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n),D})$. \item The spectral decimation function $R_{p,\beta, \frac{k}{3^n},0}(z):=\frac{\psi_{p,\beta, \frac{k}{3^n},0 }(z)}{\phi_{p,\beta, \frac{k}{3^n},0}(z) }$ is a polynomial of order $3^n$.
\item $z \in \sigma \big(\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n)}\big) \bigcup \sigma \big(\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n),D} \big)$ if and only if $R_{p,\beta, \frac{k}{3^n},0}(z) \in \sigma(\Delta^{(0)})$. \end{enumerate} \end{proposition} \begin{proof} We prove this result in a more general setting of mirror-symmetric Jacobi matrices in a companion paper \cite{BaluMograbyOkoudjouTeplyaevJacobi2021}. \end{proof} The following could be derived immediately from Lemma~\ref{lem:existOfphiandVerphi}, but for the sake of completeness and clarity we give the details leading to explicit formulas for $\phi_{p,\beta,\frac{k}{3},0}$, $\psi_{p,\beta, \frac{k}{3} ,0}$ and $ R_{p,\beta,\frac{k}{3},0 }$. \begin{lemma} \label{prop:SpectralSimilarityToLap} Let $n=1$ and $k \in \{1,2\}$. Then $\hamilton_{p,\beta, \frac{k}{3}, 0 }^{(l)}$ is spectrally similar to $\Delta^{(l-1)}_{p}$ with the functions \begin{equation} \label{eq:phiAndpsi} \phi_{p,\beta,\frac{k}{3},0}(z)=\frac{4 p \left(p - 1\right)}{4 p^{2} - \left( \beta + 2 z\right)^{2}} , \quad \quad \psi_{p,\beta, \frac{k}{3},0 }(z) = - \frac{\beta^{2} + 2 \beta p + \beta z - 2 p z - 2 p - 2 z^{2} + 2}{\beta + 2 p + 2 z}. \end{equation} \text{ } \\ The spectral decimation function $R_{p,\beta,\frac{k}{3},0 }$ and the exceptional set $\mathscr{E}_{p,\beta, \frac{k}{3},0}$ are given by \begin{equation} \label{eq:SpecDeciandExcepSet} R_{p,\beta,\frac{k}{3},0 }(z) = \frac{\left( - \beta + 2 p - 2 z \right) \left(\beta^{2} + 2 \beta p + \beta z - 2 p z - 2 p - 2 z^{2} + 2\right)}{4 p \left(1-p\right)} , \quad \quad \mathscr{E}_{p,\beta, \frac{k}{3},0} = \left\{ - \frac{\beta}{2} - p, \ - \frac{\beta}{2} + p \right\}. \end{equation} \end{lemma} \begin{proof} With the same argument as in the proof of Lemma \ref{lem:existOfphiandVerphi}, it is sufficient to consider the spectral similarity between $\hamilton_{p,\beta, \frac{k}{3} , 0 }^{(1)}$ and $\Delta^{(0)}$. 
Applying the above three steps to the level-one almost Mathieu operator gives \begin{equation} \label{eq:natricesLevel1and2} \hamilton_{p,\beta, \frac{1}{3}, 0 }^{(1)} = \renewcommand*{\arraystretch}{1.3} \left( \begin{array}{cc|cc} \beta & 0 & -1 & 0 \\ 0 & \beta & 0 & -1 \\ \hline p - 1 & 0 & - \frac{ \beta}{2} & - p\\ 0 & p - 1 & - p & - \frac{ \beta}{2} \end{array} \right) ,\quad X_1= \left(\begin{array}{c c} - \frac{ \beta}{2} & - p \\ - p & - \frac{ \beta}{2} \end{array}\right). \end{equation} We compute the Schur complement and express it as a linear combination $ \phi_{p,\beta, \frac{k}{3},0}(z) \Delta^{(0)}- \psi_{p,\beta, \frac{k}{3},0}(z)I$, \begin{equation} \label{eq:SchurMatrix} \renewcommand*{\arraystretch}{1.3} \left( \begin{array}{cc} \beta - z + \frac{\left(\frac{ \beta}{2} + z\right) \left(p - 1\right)}{p^{2} - \left(\frac{ \beta}{2} + z\right)^{2}} & -\frac{4 p \left(p - 1\right)}{4 p^{2} - \left( \beta + 2 z\right)^{2}}\\ -\frac{4 p \left(p - 1\right)}{4 p^{2} - \left( \beta + 2 z\right)^{2}} & \beta - z + \frac{\left(\frac{ \beta}{2} + z\right) \left(p - 1\right)}{p^{2} - \left(\frac{ \beta}{2} + z\right)^{2}} \end{array} \right)= \phi_{p,\beta, \frac{k}{3},0 }(z) \left( \begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array} \right) - \psi_{p,\beta, \frac{k}{3},0 }(z) \left( \begin{array}{cc} 1 & 0 \\ 0& 1 \end{array} \right). \end{equation} \text{} \\ The formulas (\ref{eq:phiAndpsi}) and (\ref{eq:SpecDeciandExcepSet}) can be verified by comparing both sides of equation (\ref{eq:SchurMatrix}). \end{proof} \begin{proof}[Proof of Theorem \ref{thm:firstResultFiniteGraphs}] We note that the spectra of $\{\Delta^{(n)}_{p}\}^{\infty}_{n=0}$ are nested, i.e. $\{0,2\} = \sigma(\Delta^{(0)}) \subset \sigma(\Delta^{(1)}_{p}) \subset \dots \subset [0,2]$.
We split the set of preimages into two subsets: \begin{enumerate} \item $R^{-1}_{p,\beta, \frac{k}{3^n},0 } \Big( \sigma(\Delta^{(l-n)}_{p}) \backslash \sigma(\Delta^{(0)}) \Big)$: There are $3^{(l-n)}+1$ distinct eigenvalues in $\sigma(\Delta^{(l-n)}_{p})$. In particular, $\big| \sigma(\Delta^{(l-n)}_{p}) \backslash \sigma(\Delta^{(0)})\big|=3^{(l-n)}-1$ and \begin{align*} \Big|R^{-1}_{p,\beta, \frac{k}{3^n},0 } \Big( \sigma(\Delta^{(l-n)}_{p}) \backslash \sigma(\Delta^{(0)}) \Big) \Big|=3^n(3^{(l-n)}-1) = 3^l-3^n. \end{align*} Note that by Proposition \ref{prop:PropertiesOfR}(4), none of these $3^l-3^n$ preimages lies in the exceptional set, and therefore they are eigenvalues of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$, see \cite[Theorem 3.6.(2)]{malozemovteplyaev2003}. Moreover, this implies that all the $3^l-3^n$ preimages are distinct eigenvalues. \item $R^{-1}_{p,\beta, \frac{k}{3^n} ,0} \Big( \sigma(\Delta^{(0)}) \Big)$: By Proposition \ref{prop:PropertiesOfR}(4), we have \begin{equation*} R^{-1}_{p,\beta, \frac{k}{3^n},0 } \big( \sigma(\Delta^{(0)}) \big)=\sigma \big(\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n)}\big) \bigcup \sigma \big(\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n),D} \big). \end{equation*} By excluding the exceptional points, we see that $R^{-1}_{p,\beta, \frac{k}{3^n},0 } \big( \sigma(\Delta^{(0)}) \big)$ generates $3^n+1$ distinct eigenvalues of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$, namely the eigenvalues in $\sigma \big(\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n)}\big)$. \end{enumerate} In parts one and two, we generated $3^l-3^n + 3^n+1 = 3^l+1$ distinct eigenvalues; since $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$ acts on a space of dimension $3^l+1$, this dimension argument shows that we have completely determined the spectrum $\sigma \Big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)} \Big)$. \end{proof} \subsection{Infinite graphs case}\label{sec:infinitegraphs} We extend the statement of Theorem \ref{thm:firstResultFiniteGraphs} to infinite graphs.
We provide a complete description of the spectrum of the almost Mathieu operators $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ by relating it to the self-similar Laplacian's spectrum $\sigma(\Delta_{p})$. The following theorem is the main result. \begin{theorem}\label{thm:SpectralSimiAMOandpqModelnew} Let $\hamilton_{p,\beta, \alpha, \theta}$ and $\Delta_p$ be given as in (\ref{eq:AMOversionPQ}) and (\ref{pLaplace}). Let $p\in (0,1)$, $\beta \in \rr$ and $n \geq 1$ be fixed. We set $\theta =0$, $\alpha = \frac{k}{3^n}$, for $k \in \{1,\dots, 3^n-1\}$. There exists a polynomial $R_{p,\beta, \frac{k}{3^n},0 }$ of order $3^n$ such that \begin{equation} \sigma \Big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 } \Big) = R^{-1}_{p,\beta, \frac{k}{3^n},0 } \Big( \sigma(\Delta_p) \Big). \end{equation} \text{ } \\ Moreover, $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ has purely singularly continuous spectrum if $p\neq \frac{1}{2}$. \end{theorem} \begin{proof} The first part of Theorem \ref{thm:SpectralSimiAMOandpqModelnew} is a consequence of \cite[Lemma 3.10]{malozemovteplyaev2003}. We proceed as in the previous section and apply the spectral decimation method. We set $\mathcal{H} = \ell^2(\integers_{+}, d \pi) $ and $\mathcal{H}_0 = \ell^2(3^n \integers_{+}, d \pi) $, $n \geq 1$. Strictly speaking, the self-similar Laplacian $\Delta_p$ appearing in the spectral similarity is defined on $ \ell^2(3^n\integers_{+}, d \pi)$. To make this precise, we follow \cite[page 125]{BellissardRenormalizationGroup1992} and introduce a dilation operator \begin{equation} D: \ell^2(3^n\integers_{+}, d \pi)\to \ell^2(\integers_{+}, d \pi), \quad \quad (Df)(x)=f(3^n x), \end{equation} and its co-isometric adjoint \begin{equation} D^{\ast}: \ell^2(\integers_{+}, d \pi) \to \ell^2(3^n \integers_{+}, d \pi), \quad \quad (D^{\ast}f)(3^n x)=f(x). \end{equation} Next, we define the operator $\tilde{\Delta}_p$ on $ \ell^2(3^n \integers_{+}, d \pi) $ to be $\tilde{\Delta}_p = D^{\ast}\Delta_p D$.
According to \cite{ChenTeplyPQmodel2016}, $\tilde{\Delta}_p$ on $\ell^2(3^n \integers_{+}, d \pi)$ is isometrically equivalent to $\Delta_p$ on $ \ell^2(\integers_{+}, d \pi) $ and $\sigma(\tilde{\Delta}_p)=\sigma(\Delta_p)$. In the following, we omit the tilde and denote $\tilde{\Delta}_p$ by $\Delta_p$. We regard $\mathcal{H}_0 = \ell^2(3^n \integers_{+}, d \pi) $ as a subspace of $\ell^2(\integers_{+}, d \pi)$ and introduce $\mathcal{H}_1$ as the orthogonal complement of $\mathcal{H}_0$ in $\mathcal{H}$. Then $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ is decomposed with respect to $\mathcal{H}_0\oplus \mathcal{H}_1$ into the following block form \begin{equation} \label{eq:lDecompoInf} \hamilton_{p, \beta, \alpha, \theta}=\begin{pmatrix} T & J^T\\ J & X \end{pmatrix}. \end{equation} \text{ } \\ We observe that $T$ is a multiple of the identity and that $X$ is a semi-infinite block diagonal matrix in which the diagonal blocks are copies of the $n$th-level Dirichlet almost Mathieu operator $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n),D}$, i.e. \begin{equation} \label{eq: decompoMatricesInf} T = \beta \left(\begin{matrix} 1 & 0 & 0 & \dots \\ 0 & 1 & 0 & \dots \\ 0 & 0 & 1 & \ddots \\ \vdots & \vdots & \vdots & \ddots \end{matrix}\right), \quad \quad X = \renewcommand*{\arraystretch}{1.4} \left( \begin{array}{@{}cc@{}c@{}} \ \mbox{\normalfont\Large\bfseries $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n), D}$} & & \\ & \mbox{\normalfont\Large\bfseries $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n), D}$} & \\ & & \ \mbox{\normalfont\Large\bfseries $\ddots$} \ \text{ } \end{array} \right).
\end{equation} As in the proof of Lemma \ref{lem:existOfphiandVerphi}, the spectral similarity of $\hamilton_{p,\beta, \frac{k}{3^n} , 0 }^{(n)}$ and $\Delta^{(0)}$ implies the spectral similarity of $ \hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ and $\Delta_p$ with the same $ \phi_{p,\beta, \frac{k}{3^n},0} $, $\psi_{p,\beta, \frac{k}{3^n},0 }$, $\mathscr{E}_{p,\beta, \frac{k}{3^n},0}$ and $R_{p,\beta, \frac{k}{3^n},0 }$. By \cite[Theorem 3.6]{malozemovteplyaev2003}, we see that for $z \notin \mathscr{E}_{p,\beta, \frac{k}{3^n},0}$, \begin{align*} z \in \sigma\Big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 } \Big) \quad \Leftrightarrow \quad R_{p,\beta, \frac{k}{3^n},0 }(z) \in \sigma(\Delta_p) \quad \Leftrightarrow \quad \quad z \in R^{-1}_{p,\beta, \frac{k}{3^n},0 } \big( \sigma(\Delta_p) \big). \end{align*} Next, we show $ \mathscr{E}_{p,\beta, \frac{k}{3^n},0} \subset \sigma\big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 } \big)$. To this end we use Proposition \ref{prop:PropertiesOfR}(4), that is, $ \mathscr{E}_{p,\beta, \frac{k}{3^n},0} \subset R^{-1}_{p,\beta, \frac{k}{3^n},0 } (\{0,2\})$, and the fact that $0$ and $2$ are not isolated points in the spectrum $\sigma(\Delta_p)$. Let $z \in \mathscr{E}_{p,\beta, \frac{k}{3^n},0} \cap R^{-1}_{p,\beta, \frac{k}{3^n} ,0} (\{0\})$. By a continuity argument, we can find a sequence $\{\lambda_m\}_{m \in \nn} \subset \sigma(\Delta_p)$, $0 < \lambda_m <2 $, $\lambda_m \to 0$ and a partial inverse of $R_{p,\beta, \frac{k}{3^n} ,0}$ (which we will denote by $R^{-1}_{p,\beta, \frac{k}{3^n} ,0}$ to avoid extra notation), such that $R^{-1}_{p,\beta, \frac{k}{3^n},0 }(\lambda_m) \to z$.
Again by Proposition \ref{prop:PropertiesOfR}(4), we have $R^{-1}_{p,\beta, \frac{k}{3^n},0 }(\lambda_m) \notin \mathscr{E}_{p,\beta, \frac{k}{3^n},0}$ for all $m \in \nn$, and \cite[Theorem 3.6]{malozemovteplyaev2003} implies that \begin{equation} R^{-1}_{p,\beta, \frac{k}{3^n} ,0}(\lambda_m) \in \sigma\Big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 } \Big) \quad \forall \ m \in \nn. \end{equation} By the closedness of the spectrum, we conclude that $z \in \sigma\Big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 } \Big) $, see also Remark \ref{rem:continuityOfPreimages}. The same argument holds for $z \in \mathscr{E}_{p,\beta, \frac{k}{3^n},0} \cap R^{-1}_{p,\beta, \frac{k}{3^n},0 } (\{2\})$. The second part of the statement follows from \cite[Theorem 1]{ChenTeplyPQmodel2016} combined with \cite[Theorem 3.6]{malozemovteplyaev2003}. \end{proof} \section{Integrated density of states} \label{IDSsection} Throughout this section, we assume that $p\in (0,1)$ and $l \geq 1$ are fixed. We follow ideas presented in \cite[Section 5.4]{Kirsch2008Random} and define the \textit{density of states} of $\Delta_{p}$. We start by considering the spectrum of $\Delta^{(l)}_{p}$, which consists of finitely many simple eigenvalues. We refer to the normalized sum of Dirac measures concentrated on the eigenvalues \begin{equation} \nu_{l,p}(\{x\}) = \frac1{3^{l} +1} \sum_{\lambda\in \sigma(\Delta^{(l)}_{p})} \delta_\lambda(x) \end{equation} as the \textit{density of states} of $\Delta^{(l)}_{p}$. The \textit{normalized eigenvalue counting} function of $\Delta^{(l)}_{p}$ is then given by $N^{(l)}_{p}(x):=\nu_{l,p}((-\infty,x])$. We note that $\Delta^{(l)}_{p}$ is the restriction of $\Delta_{p}$ to the finite graph $G_{l}=(V_{l}, E_{l})$ while imposing Neumann boundary conditions. As the following results can be derived in the same way when Dirichlet boundary conditions are applied, we restrict our consideration to the former.
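The normalized eigenvalue counting function $N^{(l)}_{p}$ is straightforward to compute numerically. The following sketch is illustrative only and again assumes that $m(x)$ is the $3$-adic valuation of $x$:

```python
# Illustrative computation of the normalized eigenvalue counting function
# N_p^{(l)}; assumes m(x) is the 3-adic valuation of x.
import numpy as np

def laplacian(p, l):
    """Level-l probabilistic graph Laplacian with reflecting boundaries."""
    N = 3**l + 1
    A = np.zeros((N, N))
    A[0, 0], A[0, 1] = 1.0, -1.0
    A[N-1, N-1], A[N-1, N-2] = 1.0, -1.0
    for x in range(1, N - 1):
        y = x
        while y % 3 == 0:
            y //= 3
        pl, pr = (1 - p, p) if y % 3 == 1 else (p, 1 - p)
        A[x, x], A[x, x-1], A[x, x+1] = 1.0, -pl, -pr
    return A

def counting_function(p, l):
    """Sorted eigenvalues of Delta_p^{(l)} and the map x -> N_p^{(l)}(x)."""
    eigs = np.sort(np.linalg.eigvals(laplacian(p, l)).real)
    return eigs, lambda x: np.searchsorted(eigs, x, side='right') / len(eigs)

eigs1, _ = counting_function(1/3, 1)   # sigma(Delta^{(1)}) = {0, 1/3, 5/3, 2}
eigs2, N2 = counting_function(1/3, 2)  # 3^2 + 1 = 10 eigenvalues
```

For small levels one can also check the nestedness $\sigma(\Delta^{(1)}_{p}) \subset \sigma(\Delta^{(2)}_{p})$ numerically; $N^{(l)}_{p}$ increases by $\tfrac{1}{3^l+1}$ at each (simple) eigenvalue.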
Figure \ref{fig:IDSforLaps} depicts the normalized eigenvalue counting function $N^{(l)}_{p}$ for different parameters. \begin{figure}[!htb] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1.09\linewidth]{FigureseigenvalueCountingFunc_p1over2.png} \vspace{4ex} \end{minipage}% \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1.09\linewidth]{FigureseigenvalueCountingFunc_p1over3.png} \vspace{4ex} \end{minipage} \caption{Numerical computation of the normalized eigenvalue counting function $N^{(l)}_{p}$. The computations are done for level $l=7$. (Left) $p=\frac{1}{2}$, i.e., the standard probabilistic graph Laplacian with $\sigma(\Delta_{\frac{1}{2}})=[0,2]$. (Right) $p=\frac{1}{3}$, i.e., a self-similar graph Laplacian where $\sigma(\Delta_{\frac{1}{3}})$ is a Cantor set.} \label{fig:IDSforLaps} \end{figure} We recall some known facts about the spectrum of the self-similar Laplacian $\Delta_{p}$. Theorem 1 and Proposition 10 in \cite{ChenTeplyPQmodel2016} show that the spectrum $\sigma(\Delta_{p})$ is the Julia set $\mathcal{J}(R_{\Delta_{p}})$ of the polynomial $R_{\Delta_{p}}$ given in Proposition \ref{prop: SpectralDecimpLap}; for more general settings see \cite{HareTep2011}. For $p=\frac{1}{2}$, we have $\mathcal{J}(R_{\Delta_{\frac{1}{2}}})=[0,2]$ and the spectrum is absolutely continuous. For $p \neq \frac{1}{2}$, the Julia set $\mathcal{J}(R_{\Delta_{p}})$ is a Cantor set of Lebesgue measure zero and the spectrum is purely singularly continuous. Brolin in \cite{Brolin1965} proved the existence of a natural measure on polynomial Julia sets, namely the so-called balanced invariant measure. Moreover, he showed that the balanced invariant measure coincides with the equilibrium (harmonic) measure of potential theory. In higher generality, the uniqueness of the balanced invariant measure was established later in \cite{Freire1983,Ma1983}; the reader is referred to \cite{SmirnovThesis} for an overview.
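Brolin's construction lends itself to a simple numerical approximation: the uniform measure on the $3^d$ preimages of a non-exceptional point under the $d$th iterate of $R_{\Delta_{p}}$ converges weakly to the balanced invariant measure. A minimal sketch (illustrative only; it starts the backward iteration at $z=1$, which satisfies $R_{\Delta_{p}}(1)=1$ and is assumed to be non-exceptional):

```python
# Illustrative approximation of Brolin's balanced invariant measure:
# the uniform measure on the 3^d preimages of a point under the d-th
# iterate of R_{Delta_p} converges weakly to it.
import numpy as np

def preimages(p, lam):
    """Roots of R_{Delta_p}(z) = lam, i.e. of
    z^3 - 3 z^2 + (2 + p(1-p)) z - lam p(1-p) = 0."""
    return np.roots([1.0, -3.0, 2.0 + p*(1 - p), -lam * p*(1 - p)])

def balanced_measure_atoms(p, depth, z0=1.0):
    """All 3^depth preimages of z0 under the depth-th iterate of R_{Delta_p}.
    z0 = 1 is a fixed point (R_{Delta_p}(1) = 1), assumed non-exceptional,
    so its backward orbit accumulates on the Julia set."""
    pts = np.array([z0], dtype=complex)
    for _ in range(depth):
        pts = np.concatenate([preimages(p, z) for z in pts])
    return pts

atoms = balanced_measure_atoms(1/3, depth=6)  # 3^6 atoms, each of mass 3^{-6}
```

Since the atoms lie in the Julia set $\mathcal{J}(R_{\Delta_{p}}) = \sigma(\Delta_{p}) \subset [0,2]$, they are real; a histogram of their real parts exhibits the gap structure of the Cantor set for $p \neq \frac{1}{2}$.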
Denoting by $\nu_p$ the balanced invariant measure of the Julia set $\mathcal{J}(R_{\Delta_{p}})$ and using ideas similar to Brolin's, we obtain the following result. \begin{proposition} \label{prop:IDSofpLap} The sequence of densities of states $\{\nu_{l,p}\}_{l \in \nn}$ converges weakly to the balanced invariant measure $\nu_p$ of the Julia set $\mathcal{J}(R_{\Delta_{p}})$. \end{proposition} Let $\theta =0$, $\beta \in \rr$, $1 \leq n \leq l$ and $k \in \{0,\dots, 3^n-1\}$ be fixed. In the same way as above, we define the density of states and the normalized eigenvalue counting function of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}$ and denote them by $\nu^{(l)}_{p,\beta, \frac{k}{3^n}, 0 }$ and $N^{(l)}_{p,\beta, \frac{k}{3^n}, 0 }$, respectively. Theorem \ref{thm:firstResultFiniteGraphs} asserts the existence of a polynomial $R_{p,\beta, \frac{k}{3^n},0 }$ of degree $3^n$. Let $S_1,S_2, \dots, S_{3^n}$ be the $3^n$ branches of the inverse $R^{-1}_{p,\beta, \frac{k}{3^n},0 }$ and let $E \subset \rr$ be a $\nu_{p}$-measurable set. We define the measure \begin{align} \label{eq:DefNuForAMO} \nu_{p,\beta, \frac{k}{3^n}, 0 }(E) :=\frac{1 }{3^{n} } \sum_{i=1}^{3^n} \int_{\sigma(\Delta_{p})} \chi_E(S_i(x)) \nu_{p}(dx), \end{align} where $\chi_E$ is the characteristic function of the set $E$, i.e. \begin{equation} \chi_E(x) = \begin{cases} \ 0 & \quad \text{if } x \notin E \\ \ 1 & \quad \text{if } x \in E. \end{cases} \end{equation} \begin{theorem} \label{thm:IDSofAMO} Let $\text{supp}(\nu_{p,\beta, \frac{k}{3^n}, 0 })$ denote the support of $\nu_{p,\beta, \frac{k}{3^n}, 0 }$. Then $\text{supp}(\nu_{p,\beta, \frac{k}{3^n}, 0 })=\sigma\big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 } \big)$. The sequence of densities of states $\big\{\nu^{(l)}_{p,\beta, \frac{k}{3^n}, 0 } \big\}_{l \in \nn}$ converges weakly to $\nu_{p,\beta, \frac{k}{3^n}, 0 }$.
Moreover, the following identity holds \begin{align} \label{eq:almostSelfSimiId} \int_{\sigma( \hamilton_{p,\beta, \frac{k}{3^n}, 0 } )} f(x) \nu_{p,\beta, \frac{k}{3^n}, 0 }(dx)=\frac{1 }{3^{n} } \sum_{i=1}^{3^n} \int_{\sigma(\Delta_{p})} f(S_i(x)) \nu_{p}(dx) \end{align} for every $f \in C_b(\complex)$, i.e., every continuous bounded function $f$ on $\complex$. \end{theorem} \begin{proof} Let $f \in C_b(\complex)$. Theorem \ref{thm:firstResultFiniteGraphs} implies \begin{align} \label{eq:discIntAMOmeasure} \sum_{x \in \sigma( \hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(l)}) } f(x) \nu^{(l)}_{p,\beta, \frac{k}{3^n}, 0 }(\{x\}) = \frac{1}{3^l +1} \sum_{x \in R^{-1}_{p,\beta, \frac{k}{3^n},0 } ( \sigma(\Delta^{(l-n)}_{p}) \backslash \sigma(\Delta^{(0)}) ) } f(x) + \frac{1}{3^l +1} \sum_{x \in \sigma( \hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n)}) } f(x). \end{align} We show that the first term on the right-hand side of equation (\ref{eq:discIntAMOmeasure}) converges to the term on the right-hand side of equation (\ref{eq:almostSelfSimiId}). \begin{align*} \frac{1}{3^l +1} \sum_{x \in R^{-1}_{p,\beta, \frac{k}{3^n},0 } ( \sigma(\Delta^{(l-n)}_{p}) \backslash \sigma(\Delta^{(0)}) ) } f(x) & = \frac{1}{3^l +1} \sum_{x \in \sigma(\Delta^{(l-n)}_{p}) \backslash \sigma(\Delta^{(0)}) } \sum_{i=1}^{3^n} f(S_i(x)) \\ & = \frac{3^{l-n} +1}{3^l +1} \sum_{x \in \sigma(\Delta^{(l-n)}_{p}) \backslash \sigma(\Delta^{(0)}) } \sum_{i=1}^{3^n} f(S_i(x)) \nu_{l-n,p}(\{x\}) \\ & \to \frac{1 }{3^{n} } \sum_{i=1}^{3^n} \int_{\sigma(\Delta_{p})} f(S_i(x)) \nu_{p}(dx), \ \text{ as } \ l \to \infty. \end{align*} The second term on the right-hand side of equation (\ref{eq:discIntAMOmeasure}) tends to zero as $l \to \infty$, since $f$ is bounded and the sum runs over the fixed finite spectrum of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }^{(n)}$, while the prefactor $\frac{1}{3^l +1}$ vanishes in the limit. \end{proof} \begin{remark} \label{rem:continuityOfPreimages} The existence and continuity of the branches of the inverse $R^{-1}_{p,\beta, \frac{k}{3^n},0 }$ on the interval $[0,2]$ are established in a forthcoming work \cite{BaluMograbyOkoudjouTeplyaevJacobi2021}, where we develop a general framework by extending the results obtained in this paper to a large class of Jacobi operators.
\end{remark} Theorem \ref{thm:IDSofAMO} and Proposition \ref{prop:IDSofpLap} justify the following definitions. \begin{definition} We refer to $\nu_{p,\beta, \frac{k}{3^n}, 0 }$ and $\nu_p$ as the \textit{density of states} of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ and $\Delta_{p}$, respectively. The corresponding \textit{integrated densities of states} are given by $N_{p,\beta, \frac{k}{3^n}, 0}(x)=\nu_{p,\beta, \frac{k}{3^n}, 0}((-\infty,x])$ and $N_{p}(x)=\nu_{p}((-\infty,x])$. \end{definition} As a consequence, we obtain the following. \begin{corollary} \label{cor:scalingCopies} Let $E \subset \sigma(\Delta_{p})$, then $\nu_{p,\beta, \frac{k}{3^n}, 0 }(S_j(E)) = \frac{1}{3^n} \nu_{p}(E)$. \end{corollary} \begin{proof} We compute \begin{align} \nu_{p,\beta, \frac{k}{3^n}, 0 }(S_j(E)) =\frac{1 }{3^{n} } \sum_{i=1}^{3^n} \int_{\sigma(\Delta_{p})} \chi_{S_j(E)}(S_i(x)) \nu_{p}(dx) =\frac{1 }{3^{n} } \int_{\sigma(\Delta_{p})} \chi_{E}(x) \nu_{p}(dx), \end{align} where in the second equality we use that $\chi_{S_j(E)}(S_i(x)) = 0$ for $\nu_p$-almost every $x$ whenever $i \neq j$, and that $\chi_{S_j(E)}(S_j(x)) = \chi_{E}(x)$. \end{proof} Intuitively, Theorem \ref{thm:IDSofAMO} says that the density of states $\nu_{p,\beta, \frac{k}{3^n}, 0 }$ distributes the original mass $\nu_{p}$ of the spectrum $\sigma(\Delta_{p})$ equally over the $3^n$ branches of the inverse spectral decimation function $R^{-1}_{p,\beta, \frac{k}{3^n},0 }$. In particular, this enables us to compute the spectral gap labels of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$. Let $\rho(\Delta_{p})$ be the resolvent set of $\Delta_{p}$. We define the set of spectral gap labels of $\Delta_{p}$ by \begin{align} \mathscr{G} \mathscr{L}(\Delta_{p}) = \{ N_p(x) \ | \ x \in \rho(\Delta_{p}) \cap \rr \}. \end{align} It is not difficult to see that $\mathscr{G} \mathscr{L}(\Delta_{\frac{1}{2}}) =\{0,1\}$.
For $p \neq \frac{1}{2}$, we have \begin{align} \mathscr{G} \mathscr{L}(\Delta_{p}) = \Big\{ \frac{j}{3^i} \ \Big| \ i \in \nn, \ j \in \{0,1,\dots,3^i\} \Big\}. \end{align} We define the set of spectral gap labels of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ similarly and denote it by $\mathscr{G} \mathscr{L}(\hamilton_{p,\beta, \frac{k}{3^n}, 0 })$. \begin{corollary}[Gap labeling] \label{coro:GapLabel} The set of spectral gap labels of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ satisfies \begin{align} \mathscr{G} \mathscr{L}(\hamilton_{p,\beta, \frac{k}{3^n}, 0 }) \subset \bigcup_{j=0}^{3^n-1} \Big( \frac{j}{3^n}+ \frac{1}{3^n}\mathscr{G} \mathscr{L}(\Delta_{p}) \Big). \end{align} \end{corollary} \section{Examples and numerical results}\label{sec:examplesappl} \subsection{Spectra of $\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)}$ and $\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(2)}$} We apply the above framework for finite graphs to the case $p=\frac{1}{3}$, $\beta=1$ and $\alpha = \frac{1}{3}$. Direct computations give $\sigma(\Delta^{(0)})=\{0,2\}$. With Proposition \ref{prop:SpectralSimilarityToLap} we compute the exceptional set and $R_{\frac{1}{3},1, \frac{1}{3}, 0 }$: \begin{equation} \label{eq:spectralDecimationfunctionAndExceptset} \mathscr{E}_{\frac{1}{3},1, \frac{1}{3}, 0 } = \left\{ - \frac{1}{6} , \ - \frac{5}{6} \right\}, \quad \quad R_{\frac{1}{3},1, \frac{1}{3}, 0 }(z)= \frac{9 z^{3}}{2} - \frac{55 z}{8} - \frac{9}{8}. \end{equation} We give an illustration of Theorem~\ref{thm:firstResultFiniteGraphs}. Due to the spectral similarity between $\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)}$ and $\Delta^{(0)}$, we see that $z\in\sigma(\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)})\setminus\mathscr{E}_{\frac{1}{3},1, \frac{1}{3}, 0 }$ if and only if $R_{\frac{1}{3},1, \frac{1}{3}, 0 }(z)\in\sigma(\Delta^{(0)})$.
We compute the preimage sets $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(0) $ and $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(2) $, see Figure \ref{fig:firstOffsprings} and Table \hyperref[tab:table1]{1}. We note that $R_{\frac{1}{3},1, \frac{1}{3}, 0 }$ is a polynomial of degree 3; therefore, each of the eigenvalues $0, 2 \in \sigma(\Delta^{(0)})$ generates three preimages. Excluding the exceptional points results in four distinct eigenvalues of $\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)}$, which, in turn, determine the complete spectrum, since $\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)}$ is a $4\times4$ matrix. \begin{figure}[!htb] \centering \includegraphics[width=1.\textwidth]{figure6offspringFrom0level0To1.png} \caption{The preimage sets $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(0) $ and $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(2) $. Note that $-\frac{1}{6}$ and $-\frac{5}{6}$ are elements of the exceptional set. The numerical values are given in Table \hyperref[tab:table1]{1}.} \label{fig:firstOffsprings} \end{figure} \begin{table}[!htb] \begin{tabular}{SSSSSS} \toprule {$\sigma( \Delta^{(0)})$} & {$\lambda^{(0)}_1 = 0$} & {$\lambda^{(0)}_2 = 2$} & & & \\ \midrule {$\sigma( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)})$} & {$\lambda^{(1)}_1 = \frac{1}{12} - \frac{\sqrt{217}}{12}$} & {$\lambda^{(1)}_2 =\frac{5}{12} - \frac{\sqrt{145}}{12}$} & {$\lambda^{(1)}_3 = \frac{1}{12} + \frac{\sqrt{217}}{12}$} & {$\lambda^{(1)}_4 = \frac{5}{12} + \frac{\sqrt{145}}{12} $} & \\ \bottomrule \\ \end{tabular} \label{tab:table1} \caption{Numerical computation of the spectra $\sigma( \Delta^{(0)})$ and $\sigma( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)})$.
The spectrum $\sigma( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)})$ is computed using Proposition \ref{prop:SpectralSimilarityToLap} and $\sigma( \Delta^{(0)})$.} \end{table} To compute $\sigma(\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(2)})$, we first use Proposition \ref{prop: SpectralDecimpLap} and the spectral decimation function $R_{\Delta_{p}}$ to calculate $\sigma(\Delta^{(1)}_{p})$. It can be easily checked that $\sigma(\Delta^{(1)}_{p})=\{0,\frac{1}{3},\frac{5}{3},2\}$. In particular, four out of the ten eigenvalues in $\sigma( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(2)})$ are computed similarly as above, namely as the elements of the preimage sets $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(0) $ and $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(2) $ after excluding the points of the exceptional set. The preimage sets $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(1/3) $ and $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(5/3) $ are computed as shown in Figure \ref{fig:SecondOffsprings} with the numerical values in Table \hyperref[tab: spectraForAlpha1over3first3levelsh2]{2}. These sets generate the remaining six eigenvalues. Note that at level two the graph $G_2$ consists of $10$ vertices. \begin{figure}[!htb] \centering \includegraphics[width=1.\textwidth]{figure7offspringFrom1over3level1To2.png} \caption{The preimage sets $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(\frac{1}{3}) $ and $R^{-1}_{\frac{1}{3},1, \frac{1}{3}, 0 }(\frac{5}{3}) $. The numerical values are given in Table \hyperref[tab: spectraForAlpha1over3first3levelsh2]{2}.
} \label{fig:SecondOffsprings} \end{figure} \begin{table}[!htb] \begin{tabular}{SSSSSS} \toprule {$\sigma( \Delta^{(1)}_{\frac{1}{3}})$} & {$\lambda^{(1)}_1 = 0$} & {$\lambda^{(1)}_2 =\frac{1}{3}$} & {$\lambda^{(1)}_3 =\frac{5}{3}$} & {$\lambda^{(1)}_4 = 2$} & \\ \midrule {$\sigma( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(2)})$} & {$\lambda^{(2)}_1 =-1.14424$} & {$\lambda^{(2)}_2 = -1.11189$} & {$\lambda^{(2)}_3 = -0.92631$} & {$\lambda^{(2)}_4 = -0.58679$} & \\ & {$\lambda^{(2)}_5 = -0.47717$} & {$\lambda^{(2)}_6 =-0.21899$} & {$\lambda^{(2)}_7 = 1.31091$} & {$\lambda^{(2)}_8 = 1.33089$} & \\ & {$\lambda^{(2)}_9 = 1.40349$} & {$\lambda^{(2)}_{10} =1.42013$} & & & \\ \bottomrule \\ \end{tabular} \caption{Numerical computation of the spectra $\sigma( \Delta^{(1)}_{\frac{1}{3}})$ and $\sigma( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(2)})$.} \label{tab: spectraForAlpha1over3first3levelsh2} \end{table} \subsection{Spectral gaps}\label{sec:spectralgaps} The disconnectedness of the Julia set $\mathcal{J}(R_{\Delta_{p}})$, for $p \neq \frac{1}{2}$, implies that the self-similar Laplacian $\Delta_{p}$ has infinitely many spectral gaps. This fact combined with Theorem \ref{thm:SpectralSimiAMOandpqModelnew} leads us to the following two conclusions: \newline \begin{enumerate} \item For $p \neq \frac{1}{2}$, the spectrum $\sigma \Big( \hamilton_{p,\beta, \frac{k}{3^n}, 0 } \Big)$ has infinitely many spectral gaps. \item We can generate the spectral gaps iteratively using the spectral decimation function $ R_{p,\beta, \frac{k}{3^n},0 }$. \end{enumerate} We illustrate these ideas with the example $p=\frac{1}{3}$, $\beta=1$, $\alpha = \frac{1}{3}$, $\theta=0$ and generate the spectral gaps in $\sigma \big( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 } \big)$ using \begin{equation} R_{\frac{1}{3},1, \frac{1}{3}, 0 }(z)= \frac{9 z^{3}}{2} - \frac{55 z}{8} - \frac{9}{8}.
\end{equation} \begin{figure}[htb] \centering \includegraphics[width=0.85\textwidth]{FiguresFirst2gapsAMOp1over3.png} \vspace{0cm} \caption{(Top) The spectral decimation function $R_{\frac{1}{3},1, \frac{1}{3}, 0 }$ is plotted. The dashed lines represent the cutoffs at $y=0$ and $y=2$. (Bottom) The integrated density of states $N^{(l)}_{\frac{1}{3},1, \frac{1}{3}, 0 }$ is plotted for level $l=7$. The dashed cutoff lines and spectral decimation function are used to locate the spectral gaps, which coincide with the indicated plateaus of the integrated density of states.} \label{fig:GeneratinggapsAMOfirstTwoGaps} \end{figure} To locate the first two spectral gaps of $ \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 } $, we note that $\sigma(\Delta_{\frac{1}{3}}) \subset [0,2]$. By Theorem \ref{thm:SpectralSimiAMOandpqModelnew}, we obtain \begin{align*} z \in \sigma \Big( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 } \Big) \quad \Rightarrow \quad R_{\frac{1}{3},1, \frac{1}{3}, 0 }(z) \in [0,2], \end{align*} or equivalently \begin{align*} R_{\frac{1}{3},1, \frac{1}{3}, 0 }(z) \notin [0,2] \quad \Rightarrow \quad z \in \rho \Big( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 } \Big). \end{align*} Plotting the spectral decimation function $R_{\frac{1}{3},1, \frac{1}{3}, 0 }$ with the cutoffs $y=0$ and $y=2$ in Figure \ref{fig:GeneratinggapsAMOfirstTwoGaps} generates the first two spectral gaps, $gap_1$ and $gap_2$. By Proposition \ref{prop:PropertiesOfR}, we know that $z \in \sigma \big(\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)}\big) \cup \sigma \big(\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1),D} \big)$ if and only if $R_{\frac{1}{3},1, \frac{1}{3},0}(z) \in \{0,2\}$.
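The cutoff criterion above is also easy to implement without plotting: any $z$ with $R_{\frac{1}{3},1, \frac{1}{3}, 0 }(z) \notin [0,2]$ belongs to the resolvent set. A grid-based sketch (so the reported endpoints are only approximate) that lists the maximal excluded subintervals of a sample window:

```python
import numpy as np

# R_{1/3,1,1/3,0}(z) = (9/2) z^3 - (55/8) z - 9/8
R = np.array([9 / 2, 0.0, -55 / 8, -9 / 8])

def excluded_intervals(z_min=-1.3, z_max=1.5, n=200_001):
    """Maximal subintervals of [z_min, z_max] with R(z) outside [0, 2];
    by the criterion above these lie in the resolvent set."""
    z = np.linspace(z_min, z_max, n)
    y = np.polyval(R, z)
    outside = (y < 0) | (y > 2)
    gaps, start = [], None
    for zi, flag in zip(z, outside):
        if flag and start is None:
            start = zi
        elif not flag and start is not None:
            gaps.append((start, zi))
            start = None
    if start is not None:
        gaps.append((start, z[-1]))
    return gaps

for a, b in excluded_intervals():
    print(f"({a:+.4f}, {b:+.4f})")
# the two interior intervals approximate gap_1 = (-5/6, 5/12 - sqrt(145)/12)
# and gap_2 = (-1/6, 1/12 + sqrt(217)/12); the two outer intervals border
# the spectrum from below and above
```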
The eigenvalues of $\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1)}$ are listed in Table \hyperref[tab:table1]{1}, and we denote the elements of $\sigma \big(\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }^{(1),D} \big) = \{-\frac{5}{6}, -\frac{1}{6} \}$ by $\lambda^{(1),D}_1 =-\frac{5}{6}$ and $\lambda^{(1),D}_2 = -\frac{1}{6}$. This gives \begin{align} \label{eq:orderingEigenvalues} \lambda^{(1)}_1 \leq \lambda^{(1),D}_1 \leq \lambda^{(1)}_2 \leq \lambda^{(1),D}_2 \leq \lambda^{(1)}_3 \leq \lambda^{(1)}_4. \end{align} The spectrum of $\hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 }$ is then contained in the complement (in $\rr$) of the set \begin{align*} (-\infty, \lambda^{(1)}_1) \cup (\lambda^{(1),D}_1, \lambda^{(1)}_2) \cup (\lambda^{(1),D}_2, \lambda^{(1)}_3) \cup (\lambda^{(1)}_4,\infty), \end{align*} where $gap_1=(\lambda^{(1),D}_1, \lambda^{(1)}_2) $ and $gap_2 = (\lambda^{(1),D}_2, \lambda^{(1)}_3)$. \begin{figure}[htb] \centering \includegraphics[width=0.9\textwidth]{Figuresnext6gapsAMOp1over3.png} \vspace{0cm} \caption{(Top) The spectral decimation function $R_{\frac{1}{3},1, \frac{1}{3}, 0 }$ is plotted. The dashed lines represent the cutoffs at $y=\frac{1}{3}$ and $y=\frac{2}{3}$, and the dash-dot lines represent the cutoffs at $y=\frac{4}{3}$ and $y=\frac{5}{3}$. (Bottom) The integrated density of states $N^{(l)}_{\frac{1}{3},1, \frac{1}{3}, 0 }$ is plotted for level $l=7$.
The cutoff lines and spectral decimation function are used to locate the spectral gaps, which coincide with the indicated plateaus of the integrated density of states.} \label{fig:GeneratinggapsAMOnext6Gaps} \end{figure} To generate the next spectral gaps, we proceed similarly and note that \begin{equation} \sigma(\Delta_{\frac{1}{3}}) \subset \big[0,\frac{1}{3}\big] \cup \big[\frac{2}{3}, \frac{4}{3}\big] \cup \big[\frac{5}{3}, 2\big], \end{equation} where $\sigma(\Delta^{(1)}_{p})=\{0,\frac{1}{3},\frac{5}{3},2\}$ and $\sigma(\Delta^{(1),D}_{p})=\{\frac{2}{3},\frac{4}{3}\}$ (with Dirichlet boundary conditions). Hence, \begin{align*} R_{\frac{1}{3},1, \frac{1}{3}, 0 }(z) \in \Big( \frac{1}{3},\frac{2}{3} \Big) \cup \Big( \frac{4}{3},\frac{5}{3} \Big) \quad \Rightarrow \quad z \in \rho \Big( \hamilton_{\frac{1}{3},1, \frac{1}{3}, 0 } \Big). \end{align*} Plotting the spectral decimation function $R_{\frac{1}{3},1, \frac{1}{3}, 0 }$ with the cutoffs $y=\frac{1}{3}$, $y=\frac{2}{3}$, $y=\frac{4}{3}$ and $y=\frac{5}{3}$ in Figure \ref{fig:GeneratinggapsAMOnext6Gaps} generates the next six spectral gaps. \subsection{Gap labeling}\label{subsec:numerical} Figures \ref{fig:IDSAMOalpha1over9p1over2} and \ref{fig:IDSAMOalpha1over9p1over3} give a numerical illustration of~\eqref{eq:almostSelfSimiId}. We recall that the spectral decimation function $R_{\frac{1}{2},1, \frac{1}{9},0 }$ is a polynomial of degree $9$. As such, Figure \ref{fig:IDSAMOalpha1over9p1over2} shows that on the range of each of the nine branches, $S_i(\sigma(\Delta_{\frac{1}{2}}))$, we have a copy of Figure~\ref{fig:IDSforLaps} (left) rescaled by $\frac{1}{9}$, according to Corollary \ref{cor:scalingCopies}. This case corresponds to a periodic Jacobi matrix, and the spectrum consists of nine spectral bands. As expected from Corollary~\ref{coro:GapLabel}, the set of spectral gap labels is \begin{align} \mathscr{G} \mathscr{L}(\hamilton_{\frac{1}{2},1, \frac{1}{9}, 0 }) = \Big\{ 0, \frac{1}{9}, \frac{2}{9}, \dots , 1 \Big\}.
\end{align} \begin{figure}[!htb] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1.1\linewidth]{FiguresNEWIDSlevel8beta1p1over2alpha1over9theta0withBox.png} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1.1\linewidth]{FiguresBoxOfNEWIDSlevel8beta1p1over2alpha1over9theta0.png} \end{minipage} \caption{ (Left) Numerical computation of the integrated density of states for $\hamilton_{p,\beta, \alpha, \theta }^{(l)}$. The computations are done for level $l=8, p=\frac{1}{2}, \beta=1,\alpha=\frac{1}{9},\theta=0$. (Right) A resized version of the box in the left-hand figure is displayed. It shows a copy of Figure \ref{fig:IDSforLaps} (left) rescaled by $\frac{1}{9}$.} \label{fig:IDSAMOalpha1over9p1over2} \end{figure} Similarly, the spectral decimation function $R_{\frac{1}{3},1, \frac{1}{9},0 }$ is a polynomial of degree $9$. As such, Figure~\ref{fig:IDSAMOalpha1over9p1over3} shows that on the range of each of the nine branches, $S_i(\sigma(\Delta_{\frac{1}{3}}))$, we have a copy of Figure~\ref{fig:IDSforLaps} (right) rescaled by $\frac{1}{9}$. In particular, this highlights the Cantor set structure inherited from $\sigma(\Delta_{\frac{1}{3}})$; the set of spectral gap labels is deduced from Corollary \ref{coro:GapLabel}. \begin{figure}[!htb] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1.1\linewidth]{FiguresNEWIDSlevel8beta1p1over3alpha1over9theta0withBox.png} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[width=1.1\linewidth]{FiguresBoxOfNEWIDSlevel8beta1p1over3alpha1over9theta0.png} \end{minipage} \caption{ (Left) Numerical computation of the integrated density of states for $\hamilton_{p,\beta, \alpha, \theta }^{(l)}$. The computations are done for level $l=8, p=\frac{1}{3}, \beta=1,\alpha=\frac{1}{9},\theta=0$. (Right) A resized version of the box in the left-hand figure is displayed.
It shows a copy of Figure \ref{fig:IDSforLaps} (right) rescaled by $\frac{1}{9}$.} \label{fig:IDSAMOalpha1over9p1over3} \end{figure} \FloatBarrier \section{Connections to the approaches of Bellissard and Bessis-Geronimo-Moussa}\label{sec:Bellissard} { Our work is partially motivated by Bellissard's studies on Hamiltonians describing the motion of a particle in quasicrystals, more specifically by his construction of a large class of Hamiltonians with Cantor spectra starting from Jacobi matrices and their associated Julia sets, see \cite{BellissardRenormalizationGroup1992,Bellissard85}. To draw the parallel with our work we make the following observations. Given a polynomial $P(z)$, let $\mathcal{J}(P)$ be the corresponding (compact) Julia set, which, under some assumptions on $P(z)$, is a completely disconnected set \cite{Fatou1919,Julia1918}. In addition, let $\mu$ be the balanced invariant measure of $\mathcal{J}(P)$ and consider the Hilbert space $\mathcal{H}=L^2(\mathcal{J}(P), d\mu)$. The multiplication operator $H$ associated with the identity function $f(x)=x$ in $\mathcal{H}$ is bounded and self-adjoint, and has the Cantor set $\mathcal{J}(P)$ as its spectrum. Furthermore, $\mu$ is the spectral measure of $H$, leading to a singular spectrum. Because the linearly independent monomials $x \in \mathcal{J}(P) \mapsto x^n \in \complex$ span a dense linear subspace of $\mathcal{H}$, the operator $H$ can be represented as a semi-infinite Jacobi matrix~\cite{Barnsley1983,Barnsley1985}. Moreover, $H$ satisfies a \textit{renormalization group equation} \begin{align} \label{eq:RenoGroBelli} D \big( zI-H \big)^{-1} D^{\ast} =\frac{P'(z)}{\deg(P)} \big( P(z)I-H \big)^{-1} \end{align} where the partial isometry $D$ and its adjoint $D^{\ast}$ are given in~\cite[Theorem 1]{BellissardRenormalizationGroup1992}.
The connection between our work and Bellissard's original ideas as elaborated above begins by defining a probabilistic Laplacian (the so-called \textit{pq-model}) on the integers half-line $ \mathbb{Z}_+$, regarded as a hierarchical or substitution graph with $G_1$ in Figure~\ref{fig:G1_initialNeumann} (right) as its basic building block. The graph $G_1$ determines the spectral decimation function $R_{\Delta_{p}}(z)$ in~\eqref{eq:specDeciFuncLap}, a polynomial which plays the role of the polynomial $P(z)$ appearing in Bellissard's approach. On the one hand, each Laplacian $\Delta_p$ represents an example of a semi-infinite Jacobi matrix that, similarly to the operators in Bellissard's construction, has a spectrum that coincides with $\mathcal{J}(R_{\Delta_{p}})$: the Julia set of the polynomial $R_{\Delta_{p}}(z)$. On the other hand, the balanced invariant measure of $\mathcal{J}(R_{\Delta_{p}})$ plays the role of the spectral measure in Bellissard's approach and is also the density of states in our context, see Proposition~\ref{prop:IDSofpLap}. In a forthcoming work \cite{BaluMograbyOkoudjouTeplyaevJacobi2021}, we generalize the substitution rule in Definition~\ref{def:finiteGraphApproxNeumann}, leading to multi-parameter families of probabilistic Laplacians whose spectral properties are investigated with tools similar to those used for the aforementioned \textit{pq-model}. We note that the spectra of the self-similar almost Mathieu operators we introduced in this paper are not necessarily given as the Julia sets of some polynomials. Instead, as proved in Theorem~\ref{thm:SpectralSimiAMOandpqModelnew}, these spectra are preimages of Julia sets under certain polynomials. In \cite{BaluMograbyOkoudjouTeplyaevJacobi2021}, these results are extended to a class of Jacobi operators, for which we establish a renormalization group equation, see \cite[Theorem 4.6]{BaluMograbyOkoudjouTeplyaevJacobi2021}.
For example, when reduced to the \textit{pq-model}, this renormalization group equation for the resolvent, see \cite{Luke10,Luke12,TeplyaevInfiniteSG1998,malozemovteplyaev2003}, takes the form \begin{align} \label{eq:RenoGrEqOURapproach} U^{\ast}\big(z I- \Delta_{p} \big)^{-1}U=\frac{(z-1)^2-p^2}{p(1-p)}\Big( R_{\Delta_{p}}(z) I - \Delta_{p} \Big)^{-1}, \end{align} where $R_{\Delta_{p}}(z)$ is given in~\eqref{eq:specDeciFuncLap}, and $U$ and $U^{\ast}$ are defined in \cite{ChenTeplyPQmodel2016}. For comparison with~\eqref{eq:RenoGroBelli}, we compute \begin{align} \frac{\frac{d}{dz}R_{\Delta_{p}}(z) }{\deg(R_{\Delta_{p}})} = \frac{(z-1)^2-\big(\frac{1-p(1-p)}{3}\big)}{p(1-p)}, \end{align} which coincides with the factor on the right-hand side of equation (\ref{eq:RenoGrEqOURapproach}) if and only if $p=\frac{1}{2}$. Thus, our results are related to \cite[Theorem 2.2]{BGM-1988}, although we do not rely on \cite{BGM-1988}, as we consider a model which allows us to produce a more explicit computation of the spectrum for operators with potential. }
\section{Conclusions} In this paper we introduce and study a fractal version of the almost Mathieu operators $\hamilton_{p,\beta, \alpha, \theta }$. We propose a new adaptation of the method of spectral similarity to analyze their spectral properties. Our main conclusions are the following. \begin{enumerate} \item Theorem \ref{thm:SpectralSimiAMOandpqModelnew} presents a useful algebraic tool to relate the spectrum of the almost Mathieu operators $\hamilton_{p,\beta, \alpha, \theta }$ to that of a family of self-similar Laplacians $\Delta_p$. Our results are established when the parameter $\alpha$ belongs to the dense set $\{\tfrac{k}{3^n}, \, k=0,1, 2, \hdots, 3^n-1\}_{n=1}^l$ where $l\geq 1$. Note that in the classical case, corresponding to $p=1/2$ in our formulation, many important results are also obtained for $\alpha$ irrational, but in the fractal setting the methods for irrational $\alpha$ are not developed yet. The method of spectral similarity is applicable in many situations, and in a forthcoming work \cite{BaluMograbyOkoudjouTeplyaevJacobi2021} we develop a general framework by working with a large class of Jacobi operators. We call these operators piecewise centrosymmetric Jacobi operators \cite{centrosymmetric1,centrosymmetric2,centrosymmetric3}. In this general setting the spectral decimation function can be computed using the theory of orthogonal polynomials associated with the aforementioned Jacobi matrix.
As a result, the spectral decimation function in the generalizations of Theorem \ref{thm:firstResultFiniteGraphs} can be computed using a three-term recurrence relation, which provides a simple procedure to show that the spectral decimation function is a polynomial of a specific degree with properties that can be controlled. \item Our methods allow us to compute the density of states explicitly. In particular, we proved in Theorem \ref{thm:IDSofAMO} an explicit formula for the density of states of $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$, identifying it with the weighted preimages of the balanced invariant measure on the Julia set $\mathcal{J}(R_{\Delta_{p}})$. This approach can be generalized to many other situations, and can be verified numerically, see Section~\ref{sec:examplesappl}. \item In our particular situation we are able to conclude that the operators $\hamilton_{p,\beta, \frac{k}{3^n}, 0 }$ have singularly continuous spectrum when $p\neq \frac{1}{2}$, because it was proved in the recent work \cite{ChenTeplyPQmodel2016} that the spectrum of $\Delta_p$ is singularly continuous. That result requires a detailed analysis of a certain dynamical system describing the behavior of the generalized eigenfunctions. \item A particular novelty of our results is that we develop the spectral analysis of a self-similar Laplacian with a quasi-periodic potential. In our work we have made the essential steps towards the Fourier analysis on one-dimensional self-similar structures following the general approach developed by Strichartz et al.~\cite{Bob1989HarmonicAnalysis,Strichartz2003FractafoldsBO}. In certain particular situations this allows one to consider classical and quantum wave propagation on fractal and other irregular structures \cite{Akkermans2013StatMechFractals,Akkermans2014WavePropFractal,wave17}. \item Using these harmonic analysis tools, our work introduces a direct approach to the gap labeling for a self-similar Laplacian with a potential.
In particular, this direct approach to gap labeling complements \cite{be1,be2,be3}. In our case we do not make use of the dual of a group acting on the fractal lattice, and observe that gap labels of the form $\frac{j}{3^n}$ are consistent with the self-similar quasi-periodic structure where renormalization acts by the dilation of the space by $3^n$. This should be contrasted with the gap labeling for fractal structures with more complicated topology currently under investigation in \cite{BubbleDiamond2021}, including the classical Sierpinski gasket and a new model of bubble diamond fractals. In general, on a certain class of self-similar structures the gaps are labeled by the values of the integrated density of states of the Laplacian, with values $\frac{j}{C^n}$ where $C$ is the topological degree of the self-covering map of the fractal limit space. Thus, our work sets the stage for considering the Bloch theorem, noncommutative Chern characters and fractal-based quantum Hall systems, see \cite{Marcolli2006}, as well as \cite{Benameur-Mathai}. On fractal spaces this is an open problem that has not been previously considered in the literature besides the recent work \cite{Akkermans2021}. \item Our work is connected to several lines of investigation in mathematics and physics, including the topics highlighted at the recent workshop \href{https://alexander-teplyaev.uconn.edu/quasi-periodic-spectral-and-topological-analysis/}{Quasi-periodic spectral and topological analysis}, and in particular the work of E.~Akkermans \cite{akke17,akkermans2007mesoscopic,ovdat2021breaking,Akkermans2014WavePropFractal,Akkermans2013StatMechFractals,Akkermans2021}, D.~Damanik \cite{Damanik2021,Damanik2019,Damanik2015,Damanik2004,Damanik1999}, S.~Jitomirskaya \cite{jitomirskaya_metal-insulator_1999,marx_jitomirskaya_2017,jito19,jito20,jito21,avila_ten_2009}, and E.~Prodan \cite{Prodan2016a,Prodan2016b,Prodan2016c,rosa2021topological,ni2019observation} et al.
\end{enumerate} \subsection*{Acknowledgments} The work of G.~Mograby was supported by ARO grant W911NF1910366. K.~A.~Okoudjou was partially supported by ARO grant W911NF1910366 and the National Science Foundation under Grant No. DMS-1814253. A.~Teplyaev was partially supported by NSF DMS grant 1613025 and by the Simons Foundation. \bibliographystyle{plain}
\section{Introduction and Motivation} Understanding text often requires knowledge beyond what is explicitly stated in the text. While this was recognized in early AI works \cite{mccarthy1990example}, the recent success on traditional NLP tasks has led to many challenge datasets that focus on hard NLU tasks. The Winograd challenge was proposed in 2012, bAbI \cite{weston2015towards} in 2015, and an array of datasets was proposed in 2018--19. These include\footnote{The website https://quantumstat.com/dataset/dataset.html has a large list of NLP and QA datasets. The recent survey \cite{storks2019recent} also mentions many of the datasets.} QASC \cite{khot2019qasc}, CoQA \cite{reddy2019coqa}, DROP \cite{Dua2019DROPAR}, BoolQ \cite{clark2019boolq}, CODAH \cite{chen2019codah}, ComQA \cite{abujabal2018comqa}, CosmosQA \cite{huang2019cosmos}, NaturalQuestions \cite{kwiatkowski2019natural}, PhysicalIQA \cite{bisk2019piqa}, QuaRTz \cite{tafjord2019quartz}, QuoRef \cite{dasigi2019quoref}, SocialIQA \cite{sap2019socialiqa}, WinoGrande \cite{sakaguchi2019winogrande} in 2019 and ARC \cite{Clark2019FromT}, CommonsenseQA \cite{talmor2018commonsenseqa}, ComplexWebQuestions \cite{talmor2018web}, HotpotQA \cite{yang2018hotpotqa}, OBQA \cite{OpenBookQA2018}, Propara \cite{mishra2018tracking}, QuaRel \cite{tafjord2018quarel}, QuAC \cite{choi2018quac}, MultiRC \cite{MultiRC2018}, Record \cite{zhang2018record}, Qangaroo \cite{welbl2018constructing}, ShARC \cite{sharc}, SWAG \cite{zellers2018swag}, SQuADv2 \cite{rajpurkar2018know} in 2018. To understand the technical challenges and nuances in building natural language understanding (NLU) and QA systems, we consider a few examples from the various datasets. In his 1972 paper, Winograd \cite{winograd1972understanding} presented the following ``councilmen'' example.
\smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8pt}{8.5pt}\selectfont \textbf{WSC item:} The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.\\ \textbf{Question:} Who [feared/advocated] violence? \end{minipage} } \smallskip In this example, to understand what ``they'' refers to, one needs knowledge about the action of ``refusing a permit to demonstrate'': when such an action can happen and with respect to whom. The bAbI domain is a collection of 20 tasks, and one of the harder sub-domains in bAbI is the path-finding sub-domain. An example from that sub-domain is as follows: \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8pt}{8.5pt}\selectfont \textbf{Context:} The office is east of the hallway. The kitchen is north of the office. The garden is west of the bedroom. The office is west of the garden. The bathroom is north of the garden. \\ \textbf{Question:} How do you go from the kitchen to the garden?\\ \textbf{Answer:} South, East. \end{minipage} } \smallskip To answer the above question one needs knowledge about directions and their opposites, knowledge about the effects of actions of going in specific directions, and knowledge about composing actions (i.e., planning) to achieve a goal. Processes are actions with duration. Reasoning about processes, where they occur and what they change, is somewhat more complex. The ProPara dataset \cite{mishra2018tracking} consists of natural text about processes. Following is an example. \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8pt}{8.5pt}\selectfont \textbf{Passage:} Chloroplasts in the leaf of the plant trap light from the sun. The roots absorb water and minerals from the soil. This combination of water and minerals flows from the stem into the leaf. Carbon dioxide (CO$_2$) enters the leaf. Light, water and minerals, and CO$_2$ all combine into a mixture.
This mixture forms sugar (glucose) which is what the plant eats.\\ \textbf{Question}: Where is sugar produced? \textbf{Answer}: in the leaf \end{minipage} } \smallskip The knowledge needed in many question answering domains can be present in unstructured textual form. But often specific words and phrases in those texts have ``deeper meaning'' that may not be easy to learn from examples. Such a situation arises in the LifeCycleQA domain \cite{mitra2019declarative}, where the description of a life cycle is given and questions are asked about it. One LifeCycleQA domain has the description of the life cycle of a frog with five different stages: egg, tadpole, tadpole with legs, froglet and adult. It has descriptions of each of these stages, such as: \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8pt}{8.5pt}\selectfont \textbf{Lifecycle Description:} Froglet - In this stage, the almost mature frog breathes with lungs and still has some of its tail. Adult - The adult frog breathes with lungs and has no tail (it has been absorbed by the body). \\ \textbf{Question:} What best indicates that a frog has reached the adult stage? \textbf{Options:} A) When it has lungs B) \textit{When the tail is absorbed by the body} \end{minipage} } \smallskip To answer this question one needs a precise understanding (or definition) of the word ``indicates''. In this case, (B) indicates that a frog has reached the adult stage while (A) does not. That is because (A) is true for both the adult and froglet stages while (B) is only true for the adult stage. The precise definition of ``indicates'' is that a property P indicates a stage $S_i$ with respect to a set of stages $\{S_1, \ldots, S_n\}$ iff P is true in $S_i$, and P is false in all stages in $\{S_1, \ldots, S_{i-1}, S_{i+1}, \ldots S_n\}$. There are some datasets where it is explicitly stated that external knowledge is needed to answer the questions.
Examples of such datasets are the OpenBookQA dataset \cite{OpenBookQA2018} and the Machine Commonsense datasets \cite{sap2019socialiqa,bisk2019piqa,bhagavatula2019abductive}. Each OpenBookQA item has a question and four answer choices. An open book of science facts is also provided. In addition, QA systems are expected to use additional common knowledge about the domain. Following is an example item from OpenBookQA. \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8pt}{8.5pt}\selectfont \textbf{OpenBook Question:} In which of these would the most heat travel?\\ \textbf{Options:} A) a new pair of jeans. B) \textit{a steel spoon in a cafeteria}. C) a cotton candy at a store. D) a calvin klein cotton hat.\\ \textbf{Science Fact:} Metal is a thermal conductor\\ \textbf{Common Knowledge:} Steel is made of metal. Heat travels through a thermal conductor. \end{minipage} } \smallskip As part of the DARPA MCS (machine commonsense) program, Allen AI has developed 5 different QA datasets where reasoning with commonsense knowledge (which is not given) is required to answer questions correctly. One of those datasets is the Physical QA dataset, and following is an example from that dataset. \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8pt}{8.5pt}\selectfont \textbf{Physical QA:} You need to break a window. Which object would you rather use?\\ \textbf{Answer Options:} A) \textit{metal stool} B) bottle of water. \end{minipage} } \smallskip To answer the above question one needs knowledge about things that can break a window. One of the most challenging natural language QA tasks is solving grid puzzles \cite{mitra2015learning}. They contain a textual description of the puzzle and questions about the solution of the puzzle. Such puzzles are currently found in exams such as the LSAT and GMAT, and have also been used to evaluate constraint satisfaction algorithms and systems.
Building a system to solve them is a challenge, as it requires precise understanding of several clues of the puzzle, leaving very little room for error. In addition, there is often a need for external knowledge (including commonsense knowledge) that is not explicitly specified in the puzzle description. \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8pt}{8.5pt}\selectfont \textbf{Puzzle Context:} Waterford Spa had a full appointment calendar booked today. Help Janice figure out the schedule by matching each masseuse to her client, and determine the total price for each.\\ \textbf{Puzzle Conditions:} Hannah paid more than Teri's client. Freda paid 20 dollars more than Lynda's client. Hannah paid 10 dollars less than Nancy's client. Nancy's client, Hannah and Ginger were all different clients. Hannah was either the person who paid \$180 or Lynda's client. \end{minipage} } \smallskip The above are a small sample of examples from recent NLQA datasets that illustrate the need for ``reasoning'' with external knowledge in NLQA. In the rest of the paper we present several aspects of building NLQA systems for such datasets and give a brief survey of the existing methods. We start with a brief description of the knowledge repositories used by such NLQA systems and how they were created. We then present models and architectures of systems that do NLQA with external knowledge. \section{Knowledge Repositories and their creation} The knowledge repositories contain two kinds of knowledge: unstructured and structured. Unstructured knowledge is knowledge in the form of free text or natural language. Structured knowledge, on the other hand, has a well-defined form, such as facts in the form of tuples or Resource Description Framework (RDF) triples, or well-defined rules as used in Answer Set Program knowledge bases. Recently, pretrained language models are also being used as ``knowledge bases''. We discuss them in a later section.
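As a minimal illustration of structured knowledge stored as tuples, the following sketch implements pattern-matching queries over a toy triple store. The facts are modeled on the OpenBookQA common knowledge shown earlier; the relation names are our own illustrative choices, not drawn from any real knowledge base.

```python
# A toy triple store: structured knowledge as (subject, relation, object) tuples.
# The facts and relation names here are illustrative, not from any real KB.
TRIPLES = [
    ("steel", "madeOf", "metal"),
    ("metal", "isA", "thermal conductor"),
    ("heat", "travelsThrough", "thermal conductor"),
]

def query(subject=None, relation=None, obj=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [
        (s, r, o) for (s, r, o) in TRIPLES
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]
```

Chaining two such queries (steel is made of metal; metal is a thermal conductor) is a rudimentary form of the multi-hop lookup that structured knowledge bases support, which query languages such as SPARQL express over RDF data.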
{\bf Repositories of Unstructured Knowledge}: Any natural language text or book, or even the web, can be thought of as a source of unstructured knowledge. Two large repositories that are commonly used are the Wikipedia Corpus and the Toronto BookCorpus. The Wikipedia corpus contains 4.4M crowd-curated articles about varied fields. The Toronto BookCorpus consists of 11K books on various topics. Both of these are used in several recent neural language models to learn word representations and NLU. A few other notable sources of unstructured commonsense knowledge are the Aristo Reasoning Challenge (ARC) Corpus \cite{Clark2019FromT}, the WikiHow Text Summarization dataset \cite{koupaee2018wikihow}, and the short stories from the RoCStories and Story Cloze Task datasets \cite{mostafazadeh2016corpus}. The ARC Corpus contains 14M science sentences which are useful for the corresponding ARC Science QA task. The WikiHow dataset contains 230K articles and summaries extracted from the online WikiHow website. It contains articles written by several different human authors, which is different from the news articles present in the Penn Treebank. The RoCStories and Story Cloze datasets contain short stories that have ``rich causal and temporal commonsense relations between daily events''. {\bf Repositories of Structured Knowledge}: There are several structured knowledge repositories, such as Yago \cite{rebele2016yago}, NELL \cite{carlson2010toward}, DBPedia \cite{dbpedia} and ConceptNet \cite{liu2004conceptnet}. Out of these, Yago is human-verified with a confidence of 95\% accuracy in its content. NELL, on the other hand, continuously learns and updates new facts in its knowledge base. DBPedia is the structured variant of Wikipedia, with facts and relations present in RDF format. ConceptNet contains commonsense data and is an amalgamation of multiple sources, including DBPedia, Wiktionary and OpenCyc, to name a few.
OpenCyc is the open-source version of Cyc, a long-running AI project to build a comprehensive ontology, knowledge base and commonsense reasoning engine. Wiktionary \cite{wiki} is a multilingual dictionary describing words using definitions and descriptions with examples. WordNet \cite{fellbaum2012wordnet} contains much more lexical information than Wiktionary, with words grouped into sets of cognitive synonyms, each expressing a distinct concept. These sets are interlinked by conceptual-semantic and lexical relations. ATOMIC \cite{sap2019atomic}, VerbPhysics \cite{forbes2017verb} and WebChild \cite{tandon2014webchild} are recent collections of commonsense knowledge. WebChild is automatically created by extracting and disambiguating Web contents into triples of nouns, adjectives and relations like \emph{hasShape} and \emph{evokesEmotion}. VerbPhysics contains knowledge about physical phenomena. ATOMIC contains commonsense knowledge curated from human annotators and focuses on inferential knowledge about a given event, such as intentions of actors, attributes of actors, effects on actors, pre-existing conditions for an event, and reactions of actors. It also contains relations such as \emph{If-Event-Then-Event}, \emph{If-Event-Then-MentalState} and \emph{If-Event-Then-Persona}. \section{Reasoning with external knowledge: Models and Architectures} The first step in developing a QA system that incorporates reasoning with external knowledge is to have one or more ways to get the external knowledge. \subsection{Extracting the External Knowledge} {\bf Neural Language Models:} Large neural models such as BERT \cite{devlin2018bert}, trained on Masked Language Modeling (MLM), seem to encapsulate a large volume of external knowledge. Their use led to strong performance on multiple datasets and on the tasks of the GLUE leaderboard \cite{wang-etal-2018-glue}.
Recently, methods have been proposed that extract structured knowledge from such models; they are used for relation extraction and entity extraction by recasting MLM as a ``fill-in-the-blank'' task \cite{petroni2019language,bouraoui2019inducing,DBLP:journals/corr/abs-1906-05317}. These neural models can also be used to generate sentence vector representations, which are used to compute sentence similarity and for knowledge ranking. {\bf Word Vectors:} Even earlier, word vectors such as Word2Vec \cite{mikolov2013distributed} and Glove \cite{pennington2014glove} were used for extracting knowledge. These word vectors encode knowledge about word similarities, which can be used to create rules that are combined with other handcrafted rules in a symbolic logic reasoning framework. For example, \cite{beltagy-etal-2014-probabilistic} uses word similarities from word vectors and weighted rules in Probabilistic Soft Logic (PSL) to solve the task of Semantic Textual Similarity. {\bf Information/Knowledge Retrieval:} Open-domain NLQA tasks, such as the OpenBookQA task mentioned earlier, need external unstructured knowledge in the form of free text. The text is first processed using techniques aimed at removing noise and making retrieval more accurate, such as removal of punctuation and stop-words, lower-casing of words and lemmatization. The processed text is then stored in inverted-index in-memory storage such as Lucene \cite{10.5555/1893016} and retrieval servers like ElasticSearch \cite{gormley2015elasticsearch}. The knowledge retrieval step consists of search keyword identification, initial retrieval from the search engine and then, depending on the task, a knowledge re-ranking step. Some complex tasks, which require multi-hop reasoning, perform additional knowledge retrieval steps. In these tasks, the keyword identification step is modified to account for the knowledge sentences retrieved in the previous steps.
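The multi-hop retrieval loop just described can be sketched as follows. This is a simplified illustration using word overlap in place of a real search engine; the fact sentences echo the OpenBookQA example from earlier, and the query-expansion heuristic is our own assumption.

```python
def multihop_retrieve(question, sentences, hops=2):
    """Toy multi-hop retrieval: at each hop, pick the sentence with the most
    word overlap with the current query, then expand the query with its words."""
    query = set(question.lower().split())
    selected = []
    for _ in range(hops):
        best = max(
            (s for s in sentences if s not in selected),
            key=lambda s: len(query & set(s.lower().split())),
        )
        selected.append(best)
        query |= set(best.lower().split())  # expanded keywords for the next hop
    return selected

facts = [
    "Heat travels through a thermal conductor",
    "Metal is a thermal conductor",
    "Steel is made of metal",
]
selected_facts = multihop_retrieve(
    "would heat travel through a steel spoon", facts, hops=2)
```

Here the second fact is only ranked highly after the first hop contributes the words ``thermal conductor'' to the query, which mirrors how the keyword identification step is updated between hops.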
Multiple such queries make the task computationally expensive. A few of the structured knowledge sources, such as ATOMIC and ConceptNet, also contain natural language text examples. These are used as unstructured knowledge and retrieved using the above techniques. {\bf Semantic Knowledge Ranking/Retrieval:} The knowledge sentences retrieved through IR are re-ranked further using Semantic Knowledge Ranking/Retrieval (SKR) models, such as in \cite{pirtoaca2019answering,banerjee-etal-2019-careful,mitra2019exploring,banerjee-2019-asu}. In SKR, instead of traditional retrieval search engines, neural networks are used to rank knowledge sentences. These neural networks are trained on the tasks of semantic textual similarity (STS), knowledge relevance classification or natural language inference (NLI). {\bf Existing structured knowledge sources:} ConceptNet, WordNet and DBPedia are examples of structured knowledge bases, where knowledge is stored in a well-defined schema. This schema allows for more precise knowledge retrieval systems. On the other hand, the knowledge query, too, needs to be well defined and precise; this is the challenge for knowledge retrieval from structured knowledge bases. In these systems, the question is parsed into a well-defined structured query that follows the strict semantics of the target knowledge retrieval system. For example, DBPedia has knowledge stored in RDF triples; a question parser translates a natural language question into a SPARQL query, which is executed by the DBPedia server. {\bf Hand coded knowledge:} In domains such as LifeCycleQA, where precise definitions of certain terms are needed, such knowledge is hand coded. Similarly, in the puzzle solving domain, ASP rules encoding the fundamental assumptions about such puzzles are hand-crafted. One such assumption is that associations between the elements are unique.
The ASP rules for this are: \noindent { \centering \fontsize{8pt}{8.5pt}\selectfont 1 \{tuple(G,Cat,Elem): element(Cat,Elem) \} 1 :- cindex(Cat), eindex(G).\\ \hspace{5pt} :- tuple(G1,Cat,Elem), tuple(G2,Cat,Elem), G1!=G2.} \smallskip {\bf Knowledge learned using Inductive Logic Programming (ILP)}: Sometimes logical ASP rules can be learned from the dataset. Such a method is used in \cite{mitra2019declarative,mitra2019knowledge} to learn rules that are used to solve the bAbI, ProPara and Math word problem datasets. There, ILP (inductive logic programming) is used to learn the effects of actions and static causal connections between properties of the world. Following is an example of a rule that is learned. \noindent { \fontsize{8pt}{8.5pt}\selectfont \hspace{5pt} initiatedAt(carry(A, O), T ) :- happensAt(take(A, O), T ). } \noindent The rule says that ``A is carrying O'' gets initiated at time point T if the event take(A,O) occurs at T. \subsection{Models and architecture for NLQA with external knowledge} \begin{figure} \small \includegraphics[width=\linewidth,height=0.5\linewidth]{memnet.png} \caption{Memory Network with free-text knowledge. This is an example from the bAbI dataset. The memory unit is first updated by reading the knowledge passage, and finally a query is given.} \label{fig:memnet} \end{figure} NLQA systems with external knowledge can be grouped based on how knowledge is expressed (structured, free text, implicit in pre-trained neural networks, or a combination) and the type of reasoning module (symbolic, neural or mixed). \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8.5pt}{8.5pt}\selectfont \textbf{Answer Set Programs:} An answer set program (ASP) is a collection of rules of the form $L_0 \leftarrow L_1, \ldots , L_m, \ not \ L_{m+1}, ..., \ not \ L_n$, where each of the $L_i$ is a literal in the sense of classical logic.
Intuitively, the above rule means that if $L_1, \ldots , L_m$ are true and if $L_{m+1}, \ldots, L_n$ can be safely assumed to be false, then $L_0$ must be true. The semantics of ASP is based on the stable model (answer set) semantics of logic programming. ASP is a declarative knowledge representation and reasoning language with a large body of building-block results, efficient interpreters and a large variety of applications. \end{minipage} } \smallskip {\bf Structured Knowledge and Symbolic Reasoner}: While very early NLQA systems were symbolic systems, and most recent NLQA systems are neural, there are a few symbolic systems that do well on the recent datasets. As mentioned earlier, the system of \cite{beltagy-etal-2014-probabilistic} uses probabilistic soft logic to address semantic textual similarity. The system in \cite{mitra2016addressing} learns logical rules from the bAbI dataset using ILP and then does reasoning using ASP, and performs well on that dataset. \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8.5pt}{8.5pt}\selectfont \textbf{Inductive Logic Programming:} Inductive Logic Programming (ILP) is a subfield of machine learning that is focused on learning logic programs. Given a set of positive examples $E^+$, negative examples $E^-$ and some background knowledge B, an ILP algorithm finds a hypothesis H (an answer set program) such that $B \cup H \models E^+$ and $B \cup H \not \models E^-$. The possible hypothesis space is often restricted with a language bias that is specified by a series of mode declarations. Recently, ILP methods have been developed with respect to Answer Set Programming. \end{minipage} } \smallskip The system of \cite{mitra2019knowledge} addresses the ProPara dataset in a similar manner and performs well. The system that solves puzzles is also symbolic, and so far no neural system has addressed this dataset.
However, none of these systems are end-to-end symbolic systems, as they use neural models for tasks in the overall pipeline such as semantic parsing of text. \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8.5pt}{8.5pt}\selectfont \textbf{Transformers:} The Generative Pre-trained Transformer (GPT) \cite{radford2018improving} showed that pretraining on diverse unstructured knowledge texts using transformers \cite{vaswani2017attention} and then selectively fine-tuning on specific tasks helps achieve state-of-the-art systems. Its unidirectional way of learning representations from diverse texts was improved by the bidirectional approach of BERT. BERT uses the transformer encoder as part of its architecture, takes a fixed-length input, and lets every token attend to both its previous and next tokens through the self-attention layers of the transformer. The Robustly optimized BERT approach, RoBERTa \cite{liu2019roberta}, is another variant which uses a different pre-training approach. RoBERTa is trained on ten times more data than BERT and outperforms BERT on almost all NLP tasks and benchmarks. \end{minipage} } \smallskip {\bf Neural Implicit Knowledge with Neural Reasoners:} Neural network models based on transformers are able to use the knowledge embedded in their learned parameters in various downstream NLP tasks. These models learn the parameters as part of a \textit{pre-training} phase on a huge collection of diverse free texts. They fine-tune the parameters for specific NLP tasks and, in the process, refine the pre-trained knowledge even further for a domain and a task. This knowledge in the parameters of the models helps them in multiple NLP tasks. BERT learns the knowledge from a mere 16GB of free text, while RoBERTa uses 160GB for pre-training.
We can say that the knowledge learned during the pre-training phase is generic for any domain and any NLP task, and gets tuned for a particular domain and task during the fine-tuning phase. {\bf Structured Knowledge and Neural Reasoners:} Structured knowledge can be in the form of trees (abstract syntax trees, dependency trees, constituency trees), graphs, concepts or rules. Tree-structured knowledge extracted from input text has been used in the tree-based LSTM \cite{tai2015improved}. Here, aggregated knowledge from multiple child nodes, selected through the gating mechanism in the model, propagates to the parent nodes and creates a representation of the whole tree, thus improving downstream NLP tasks. \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8.5pt}{8.5pt}\selectfont \textbf{LSTMs:} Long Short-Term Memory \cite{hochreiter1997long} networks are a type of Recurrent Neural Network used to deal with variable-length sequence inputs. Reasoning and QA with LSTMs are usually modeled with an encoder-classifier architecture. The encoder network is used to learn a vector representation of the question and external knowledge. The classifier network either classifies word spans to extract an answer, or classifies answer options in an MCQA setting. These models are able to encode variable-length sentences and hence can take as input a considerable amount of external knowledge. The limitations of LSTMs are that they are computationally expensive and have limited memory, i.e., they can only remember the last $T$ time-steps. \end{minipage} } \smallskip While knowledge in the form of undirected graphs is mainly used by graph-based reasoning systems like Graph Neural Networks (GNN) \cite{scarselli2008graph}, directed graphs are better handled by convolutional neural networks with dense graph propagation \cite{kampffmeyer2019rethinking}.
If the knowledge is heterogeneous in nature, having multiple types of objects (graph nodes) and edges carrying different semantic information, heterogeneous graph attention networks \cite{wang2019heterogeneous} are a common choice. Figure \ref{fig:gnn} shows an example. Knowledge carrying information in a hierarchy (entity-level, sentence-level, paragraph-level and document-level) makes use of hierarchical graph neural networks \cite{fang2019hierarchical} for better performance. \begin{figure} \includegraphics[width=\linewidth,height=0.5\linewidth]{GNN.png} \caption{Heterogeneous Graph Neural Network with structured knowledge. The example is taken from the WikiHop dataset. Based on the documents, question and answer options, a heterogeneous graph is created (document nodes in green, option nodes in yellow and entity nodes in blue).} \label{fig:gnn} \end{figure} \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8.5pt}{8.5pt}\selectfont \textbf{Graph Neural Networks:} GNNs \cite{scarselli2008graph} have gained immense importance recently because of their higher expressive power and generalizability across various domains. A GNN learns the representation of each node of the input graph by aggregating information from its neighbors through message passing \cite{gilmer2017neural}. It learns both semantically and structurally rich representations by iterating over the neighbors in successive hops. In spite of their heavy usage across multiple domains, their shallow structure (in terms of the number of GNN layers), inability to handle dynamic graphs, and scalability are still open problems \cite{zhou2018graph}. \end{minipage} } \smallskip Knowledge can also be in the form of structured concepts. One such approach was taken in knowledge-powered convolutional neural networks \cite{wang2017combining}, which use concepts from Probase to classify short text.
In another approach, K-BERT \cite{liu2019k} injects knowledge graph triples along with the input text and jointly learns representations, which helps in NLQA and named entity recognition tasks. Structured knowledge can be in the form of rules which are compiled into the neural network; one such approach \cite{hu-etal-2016-harnessing} incorporates first-order logic rules into the neural network parameters. Structured knowledge can also be embedded into a neural network through synthetically created training samples. These can be either template-based or specially hand-crafted to exploit a particular behavior of a model. Knowledge bases such as ConceptNet, WordNet and DBPedia are used to create these hand-crafted training examples. For example, in the NLI task, new knowledge is hand-crafted from these knowledge bases by normalizing the names and switching the roles of the actors. This augmented knowledge helped in differentiating between similar sentences through a symbolic attention mechanism in a neural network \cite{mitra2019understanding}. {\bf Free text Knowledge and Neural Reasoners:} Memory networks have been used in prior work on tasks which need long-term memory, such as the tasks in the bAbI datasets \cite{kumar2016ask,liu2017gated}. In these models, free text is used as external knowledge and stored in the memory units. The answering module uses this stored memory representation to reason and answer questions. \smallskip \noindent \fbox{ \begin{minipage}{0.95\linewidth} \fontsize{8.5pt}{8.5pt}\selectfont \textbf{Memory Networks:} Memory Networks augment neural networks with a readable and writable \emph{memory} section. These networks are designed to read natural language text and learn to store the input in a specially designed \emph{memory}, in which the free-text input or a vector representation of it is written under certain conditions defined by gates. At inference time, a read operation is performed on the memory.
These networks are made robust to unseen words during inference with techniques for approximating word vector representations from neighbouring words. They outperform RNNs and LSTMs on tasks which require long-term memory, utilizing both the dedicated \emph{memory} unit and unseen-word vectors. Their major limitations are that they are difficult to train using back-propagation and require full supervision during training. \end{minipage} } \smallskip Free-text knowledge in the form of topics, tags and entities has been used along with texts to improve QA tasks using a knowledge-enhanced hybrid neural network \cite{wu2018knowledge}. It uses knowledge gates to match semantic information of the input text with the knowledge, weeding out the unrelated texts. \begin{figure} \includegraphics[width=\linewidth,height=0.5\linewidth]{Trans_free2.png} \caption{An example from OBQA. Transformer-based neural models with free-text knowledge obtained by information retrieval on knowledge bases. } \label{fig:trans_free} \end{figure} In scenarios where the knowledge embedded in the parameters of neural models is not enough for an NLP task, external knowledge is often infused in free-text form along with the model inputs. This has often led to improved reasoning systems. Commonsense reasoning is one such area where external knowledge in free-text form helps in achieving better performance. Multiple challenges that require such reasoning abilities have been mentioned earlier. A common approach for using free text along with the reasoning systems can be seen in Figure \ref{fig:trans_free}. The idea is to pre-select a set of appropriate knowledge texts (using IR) from an unstructured knowledge repository or document and then let the previously learned reasoning models use them \cite{banerjee-etal-2019-careful,clark2018think,mitra2019understanding}.
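The pre-selection step in this pipeline can be sketched with a minimal bag-of-words TF-IDF ranker. This is a simplified stand-in for the scoring done by engines such as Lucene or ElasticSearch (real pipelines also lemmatize and remove stop-words); the fact sentences reuse the OpenBookQA common knowledge from earlier.

```python
import math
from collections import Counter

def tfidf_rank(query, sentences, top_k=2):
    """Rank knowledge sentences against a query by a bag-of-words TF-IDF score."""
    tokenize = lambda s: s.lower().split()
    docs = [Counter(tokenize(s)) for s in sentences]
    n = len(docs)
    # Inverse document frequency of each term occurring in the corpus.
    idf = {t: math.log(n / sum(1 for d in docs if t in d))
           for d in docs for t in d}

    def score(doc):
        # Sum TF * IDF over the query terms (terms unseen in the corpus add 0).
        return sum(doc[t] * idf.get(t, 0.0) for t in tokenize(query))

    ranked = sorted(range(n), key=lambda i: score(docs[i]), reverse=True)
    return [sentences[i] for i in ranked[:top_k]]

knowledge = [
    "Metal is a thermal conductor",
    "Steel is made of metal",
    "Heat travels through a thermal conductor",
]
top = tfidf_rank("heat travels through what", knowledge, top_k=1)
```

The top-ranked sentences would then be concatenated with the question and answer options as input to the fine-tuned transformer, as in the pipeline of Figure \ref{fig:trans_free}.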
{\bf Free text Knowledge and Mixed Reasoners:} There have been a few approaches to bringing the neural and symbolic paradigms together for commonsense reasoning with knowledge represented in text. One such neuro-symbolic approach, NeRD \cite{chen2019neural}, was used to achieve state-of-the-art performance on two mathematical datasets, DROP and MathQA. NeRD consists of a \textit{reader}, which generates representations from questions with the passage, and a \textit{programmer}, which generates the reasoning steps that, when executed, produce the desired results. In the LifeCycleQA dataset, terms with deeper meaning, such as ``indicates'', are defined explicitly using Answer Set Programming (ASP). The overall reasoning system in \cite{mitra2019declarative} is an ASP program that makes use of such ASP-defined terms and performs external calls to a neural NLI system. In \cite{prakash2019combining} the Winograd dataset is addressed by using ``extracted sentences from web'' and BERT, and combined reasoning is done using PSL. {\bf Combination of Knowledge and Mixed Reasoners:} The ARISTO solver by AllenAI \cite{Clark2019FromT} consists of eight reasoners that utilize a combination of external knowledge sources. The external knowledge sources include Open-IE \cite{etzioni2008open}, the ARC corpus, TupleKB, Tablestore and TupleInfKB \cite{clark2018think}. The solvers range from retrieval-based and statistical models, and symbolic solvers using ILP, to large neural language models such as BERT. The models are combined by two-step ensembling using logistic regression over base reasoner scores. \section{Discussion: How Much Reasoning Are the Models Doing?} While it is easy to see the various kinds of reasoning done by symbolic models, it is a challenge to figure out how much reasoning is being done in neural models. The success with respect to datasets gives some indirect evidence, but more thorough studies are needed.
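The second ensembling step, logistic regression over base reasoner scores, can be sketched in a few lines. The feature vectors below are invented for illustration; ARISTO's actual feature engineering and calibration are described in \cite{Clark2019FromT}.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_ensemble(features, labels, lr=0.5, epochs=500):
    """Fit logistic-regression weights over base-reasoner scores.

    features: one score vector per (question, candidate answer) pair,
              with one entry per base reasoner.
    labels:   1 if the candidate answer was correct, else 0.
    """
    n = len(features[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y                       # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def ensemble_score(w, b, x):
    """Combined confidence that candidate x is the correct answer."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

The learned weights effectively tell the ensemble which base reasoners to trust; at test time the candidate with the highest combined score is returned.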
\noindent \paragraph{Commonsense Reasoning:} The commonsense datasets (CommonsenseQA, CosmosQA, SocialIQA, etc.) require commonsense knowledge to be solved, so the systems solving them should perform deductive reasoning with external commonsense knowledge. BERT-based neural models are able to perform reasonably well on such tasks, with accuracy ranging from 75\%--82\%, which falls quite short of the human performance of 90--94\% \cite{mitra2019exploring}. They leverage both external commonsense knowledge and knowledge learned through pre-training tasks. The inability of pure neural models to represent such a huge variety of commonsense knowledge as rules led to pure symbolic or neuro-symbolic approaches on such datasets. \noindent \paragraph{Multi-Hop Reasoning:} NLQA datasets such as Natural Questions, HotpotQA and OpenBookQA need systems to combine information spread across multiple sentences, i.e., to perform multi-hop reasoning. Current state-of-the-art neural models achieve performance around 65--75\% on these tasks \cite{banerjee-etal-2019-careful}. Transformer models have shown a considerable ability to perform multi-hop reasoning, particularly in the case where the entire knowledge passage can be given as input without truncation. Systems have also been designed to select and filter sentences to reduce confusion in neural models. \noindent \paragraph{Abductive Reasoning:} AbductiveNLI, a dataset from AllenAI, defines a task to abduce and infer which of the possible scenarios best explains the consequent events. Neural models have shown considerable performance on the simple two-choice MCQ task \cite{mitra2019exploring}, but perform poorly on the abduced-hypothesis generation task. This shows that neural models are able to perform abduction only to a limited extent. \noindent \paragraph{Quantitative and Qualitative Reasoning:} Datasets such as DROP, AQuA-RAT, Quarel and Quartz require systems to perform both qualitative and quantitative reasoning.
Quantitative reasoning datasets like DROP and AQuA-RAT led to the emergence of approaches which can learn symbolic mathematics using deep learning and neuro-symbolic models \cite{Lample2020Deep,chen2019neural}. Neural as well as symbolic approaches have shown decent performance (70--80\%) on qualitative reasoning, compared to human performance of 94--95\% \cite{tafjord2018quarel,tafjord2019quartz}. These datasets have shown a considerable diversity of systems, with all three types of models (neural, symbolic and neuro-symbolic) performing well. \noindent \paragraph{Non-Monotonic Reasoning:} In \cite{clark2020transformers} an attempt is made to understand the limits of reasoning of Transformers. They note that transformers can perform simple logical reasoning, can understand rules in natural language corresponding to classical logic, and can perform limited non-monotonic reasoning. However, there are only a few datasets which require such capabilities. \section{Conclusion and Future Directions} In this paper we have surveyed \footnote{\href{https://bit.ly/3836xAW}{A version of this paper with a larger bibliography is linked here.} If accepted, we plan to buy extra pages to accommodate them.} recent research on NLQA where external knowledge -- beyond what is given in the test part -- is needed to correctly answer the questions. We gave several motivating examples, mentioned several datasets, discussed available knowledge repositories and methods used in selecting needed knowledge from larger repositories, and analyzed and grouped several models and architectures of NLQA systems based on how the knowledge is expressed in them and the type of reasoning module used in them. Although there have been some related recent surveys, such as \cite{storks2019recent}, none of them focus on how exactly knowledge is represented and reasoning is done in NLQA systems dealing with datasets that require external knowledge.
Our survey touched upon knowledge types that include structured knowledge, textual knowledge, knowledge embedded in a neural network, knowledge provided via specially constructed examples, and combinations of them. We explored how symbolic, neural and mixed models process and reason with such knowledge. Based on our observations of various models, the following are some questions and future directions. Following up on the LifeCycleQA dataset, where some concepts, such as ``indicates'', were manually defined, several questions -- with partial answers -- may come to mind. (i) How big is the list of such concepts? If a list of them is made and they are defined, then these definitions can be directly used or compiled into neural models. Can Cyc and the book \cite{gordon2017formal} be starting points in this direction? (ii) Can these definitions be learned from data? How? Unless one focuses only on specific datasets, the challenge in learning these definitions is that for each of them specialized examples would have to be created. (iii) When is it easier to just write the definitions in a logical language, and when is it easier to learn them from datasets? Some concepts are easy to define logically but may require a lot of examples to teach a system. There are concepts whose definitions took decades for researchers to formalize; an example is the solution of the frame problem, so it is easier to use that formalization rather than learn it from scratch. On the other hand, the notion of ``cause'' is still being fine-tuned. Another direction of future work centers around the question of how well neural networks can reason: what kinds of reasoning can they do well, and what are the challenges? We hope this paper encourages further interactions between researchers in the traditional area of knowledge representation and reasoning, who mostly focus on structured knowledge, and researchers in NLP/NLU/QA who are interested in NLQA where reasoning with knowledge is important.
That would be a next step towards a better understanding of natural language and the development of cognitive skills. It will help us move from knowledge and comprehension to application, analysis and synthesis, as described in Bloom's taxonomy. \newpage \bibliographystyle{plain}
\section{Introduction} R. Feynman wrote: ``All things are made of atoms, and that everything that living things do can be understood in terms of the jiggling and wiggling of atoms'' (Feynman, 1963). To move beyond this assertion, it is necessary to adopt common principles of organization of atoms and molecules in living systems. Such principles, compatible with the existing analytical apparatus of thermodynamics and statistical physics, have been formulated by Gilbert Ling (Ling, 2006). The living cell model created by him was used as the starting point of his study. According to Ling, the fundamental properties of the living cell are explained by a single physical factor --- the sorption properties of its proteins. An unfolded (linear) protein molecule binding water (multilayer adsorption) and \({\rm K \mit}^+\) (in the presence of \({\rm Na \mit}^+\)) under the control of ATP represents the smallest part (unit) of living protoplasm which still keeps the main physical characteristics of the living cell. Later, Matveev (2005) proposed to call this unit a physiological atom, or physioatom. The main physical state of the physioatom, and accordingly of the cells comprising them, is, according to Ling, the resting state. The physical nature of this state determines, in Ling's theory, all forms of biological activity of the cell, and therefore the analysis of this state is a key issue for the physical theory of the living cell. The compatibility of Ling's resting-cell organization principles with the analytical methods of modern theoretical physics was first shown in our previous work (Prokhorenko and Matveev, 2011). The basis of our approach is the fact that the majority of statistical mechanics systems (including the most realistic ones) are non-ergodic, as was proved by one of us (Prokhorenko, 2009).
The generalized thermodynamic analysis (generalized thermodynamics) of Ling's cell that we proposed (Prokhorenko and Matveev, 2011) allowed us to explain (within the framework of the adopted boundary conditions) a number of physiological phenomena that occur when the cell is in the activated (excited) state: the exothermicity of the transition to the excited state, the change of cell volume, the folding of natively unfolded proteins (which, according to Ling, determine the main features of the resting state), the efflux of cell \({\rm K \mit}^+\) and, more broadly, the major redistribution of physiologically important ions between the cell and its environment. However, we determined only the sign (direction) of these processes; there were no numerical evaluations. In other words, the results were obtained as inequalities. These are normal features of thermodynamics (not only of the generalized one): it allows us to obtain general relations between thermodynamic variables which are independent of the nature of intermolecular interactions. One needs to construct models with the desired properties (including microscopic models) in order to obtain specific numerical values of thermodynamic variables and then investigate them by the methods of statistical mechanics. Indeed, the inequalities we determined (Prokhorenko and Matveev, 2011) were based on the postulate of a relative entropy maximum for the resting cell standing in a state of thermodynamic equilibrium with its environment. However, equilibrium with the environment does not mean the equilibrium state (the absolute maximum of entropy) of a system; that is why the relative nature of the entropy maximum is indicated. We consider the resting cell as a system in a steady non-equilibrium state described by the generalized Gibbs distribution (Prokhorenko and Matveev, 2011). In other words, what is meant is the maximum among all the states described by generalized Gibbs distributions and constructed using a fixed set of first integrals in involution.
These inequalities are evidence of the negative definiteness of the matrix of second derivatives of the entropy with respect to the parameters describing the system in a state corresponding to the relative entropy maximum. Involving the methods of statistical mechanics brings up the issue of certain structural properties of the investigated system. Ling's model of the living cell gives, in our opinion, interesting material for such analysis. So, after the construction of the generalized thermodynamics of the resting state, set out in (Prokhorenko and Matveev, 2011), it makes sense to turn to some structural characteristics of the investigated system (as has often happened in the history of thermodynamics and statistical mechanics). In our case, the problem arises of constructing various models of protoplasm and investigating them by various (mostly approximate) methods of theoretical physics. In this paper, the authors make the first steps in this direction. The first model, which we call the Van der Waals model, focuses only on the nature of interactions between protein molecules. Following Ling, we assume that protein molecules in the resting cell occupy specific sites of a crystal lattice, and the distances between proteins are so large that the interaction between them can be neglected. This assumption makes it possible to use the method for calculating thermodynamic potentials of ideal systems in order to determine the thermodynamic potentials of resting protoplasm with the specified structure. In the case of dead protoplasm (the state opposite to the living resting state), the protein molecules are associated, due to secondary (non-covalent) bonds, into large equilibrium aggregates. In this case, to calculate the thermodynamic potentials we use a formula, obtained in this paper, for the corrections to the free energy in the case of formation of large aggregates.
Within the Van der Waals model we have obtained (i) a numerical estimate of the amount of heat released when an erythrocyte dies, (ii) an estimate of the number of protein aggregates appearing in dead protoplasm, and (iii) an estimate of the fraction of the whole cell volume occupied by these aggregates. All these estimates are in good (qualitative) agreement with the available experimental data. In our second model, we also assume that protein molecules in the resting cell occupy specific sites of a crystal lattice, but this time the focus is on the internal structure of a protein. We consider this model at zero temperature (on the energy scale), which makes it possible to use (to calculate the ground state) an effective Hamiltonian describing a superfluid Bose gas in the configuration space of a protein molecule. Based on the representations set forth in (Prokhorenko and Matveev, 2011), we define the parameters of the effective Hamiltonian corresponding to the living and dead states of protoplasm; we show (in accordance with our assumptions) that the proteins in the resting state (which determine the key properties of the system) are natively unfolded, whereas in dead protoplasm the same protein molecules are folded. In Appendix 1 the process by which living protoplasm transforms into dead protoplasm is considered from the physical point of view. In Appendix 2 we discuss the mechanism by which ATP is able to effectively influence the sorption properties of the proteins of Ling's model for water and physiologically important cations. \section{Non-Ergodicity and Crystallization} Let us consider some non-trivial issues that arise when we consider the cell as a non-ergodic system. According to Ling's model (Ling, 2001), water in the resting cell is in a bound quasi-crystalline state (importantly, the water content is about 44 mole/kg wet weight of the cell).
The bound state of water and its massive amount in the cell have fundamental importance for the understanding of physiological processes (Ling, 1997). Therefore, one of the key issues of cell physics is the question: which properties does the crystal have in terms of our recently proposed approach, generalized thermodynamics (Prokhorenko and Matveev, 2011)? Let us begin the consideration of this issue by finding out the relation between the non-ergodicity of a system (for example, Ling's cell) and its capability to solidify at low temperatures. Although this problem definition is non-physiological, its solution will allow us to verify once more that a large number of systems of statistical mechanics, including our model, have the non-ergodicity property. Our argument will be of a largely heuristic rather than rigorously mathematical character. In mathematical physics a rigorous theory is often preceded by formal theories handling objects of poorly ascertained mathematical status. However, the heuristics presented here are of interest, in our view, as a basis for more rigorous methods. First, let us define ergodicity for statistical mechanics systems. \textbf{Definition.} Suppose the quantum system is described by a Hamiltonian \(H\) and \(K_1,...,K_l\) are some commuting (among themselves) self-adjoint integrals of motion. The system is called ergodic with respect to the set of integrals \(K_1,...,K_l\) if any dynamical variable commuting with \(H,\;K_1,...,K_l\) is a function of them. To give a classical analog of this definition we should just replace the word ``commutator'' by a Poisson bracket everywhere. Usually, the operator of the system momentum \(\vec{P}\) and the operator of the number of particles \(N\) are used as trivial integrals. First let us show how the non-ergodicity of a system arises from its capability to solidify at low temperatures. Let us consider the system at solidifying temperatures, supposing \(T<T_0\) for some \(T_0>0\).
In addition, the system can move through space as a solid body, and its coordinates (as a solid body) are six real numbers \begin{eqnarray} x_1,\;x_2,\;x_3,\;\varphi_1,\;\varphi_2,\;\varphi_3, \end{eqnarray} where \(x_1,\;x_2,\;x_3\) are the Cartesian coordinates of the system's center of mass, and \(\varphi_1,\;\varphi_2,\;\varphi_3\) are some coordinates characterizing the position of the system (as a solid body) relative to its center of mass, for example, Euler angles. Let \(p_1,...,p_3,\pi_1,...,\pi_3\) be the momenta canonically conjugate to them. The Hamiltonian of the whole system \begin{eqnarray} \hat{H}(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3) \end{eqnarray} is a function of the variables \(x_1,\;x_2,\;x_3,\;\varphi_1,\;\varphi_2,\;\varphi_3\), the momenta conjugate to them, and an operator ``in the other variables''. The free energy of the system is given by: \begin{eqnarray} F(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3|T)\nonumber\\ =-T {\ln \rm tr \mit} e^{-\frac{\hat{H}(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3)}{T}}, \end{eqnarray} where the trace is taken over the Hilbert space which is ``left'' after separation of the variables describing the motion of the system as a solid body. We will not refine the meaning of the words enclosed in quotation marks, considering them intuitively clear. It is clear that \(F(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3|T)\) does not depend on the variable \(\varphi_3\) (if, for example, \(\varphi_1,\;\varphi_2,\;\varphi_3\) are Euler angles). But \begin{eqnarray} {\ln \rm tr \mit} e^{-\frac{\hat{H}(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3)}{T}}=\sum \limits_{i=0}^{\infty}d_i(x_1,...,\pi_3)e^{-\frac{\lambda_i(x_1,...,\pi_3)}{T}}, \end{eqnarray} where \(d_i={\rm dim \mit} L_i\) is the dimension of the eigenspace \(L_i\) of the operator \(\hat{H}(x_1,...,\pi_3)\) corresponding to the eigenvalue \(\lambda_i\) of the operator \(\hat{H}(x_1,...,\pi_3)\).
But since \(\sum \limits_{i=0}^{\infty}d_i(x_1,...,\pi_3)e^{-\lambda_i(x_1,...,\pi_3)\beta}\) (\(\beta:=\frac{1}{T}\)) does not depend on \(\varphi_3\) (for different \(\beta\)), \(d_i(x_1,...,\pi_3)\) and \(\lambda_i(x_1,...,\pi_3)\) are also independent of \(\varphi_3\). Indeed, \(\sum \limits_{i=0}^{\infty}d_i(x_1,...,\pi_3)e^{-\lambda_i(x_1,...,\pi_3)\beta}\) is just the Laplace transform of the measure \begin{eqnarray} \sum \limits_{i=0}^{\infty}\delta(\lambda-\lambda_i(x_1,...,\pi_3))d_i(x_1,...,\pi_3). \end{eqnarray} So, for all values of \(\varphi_3\), the other parameters among \(x_1,...,\pi_3\) being the same, the operators \(\hat{H}(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3)\) are unitarily equivalent. And after making an appropriate unitary transformation of \(\mathcal{H}\) depending on \(x_1,...,\pi_3\), we can conclude that \(\hat{H}(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3)\) does not depend on \(\varphi_3\). We have considered the variables \(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3\) as classical ones since they describe macroscopic degrees of freedom and appear to be very large. Now we again consider \(\varphi_3\), \(\pi_3\) as quantum variables, replacing them by the corresponding operators \(\hat{\varphi_3}\), \(\hat{\pi_3}\), and considering the Hamiltonian \(\hat{H}_1(x_1,...,x_3,\varphi_1,\varphi_2,p_1,...,p_3,\pi_1,\pi_2)\), obtained by replacing the variables \(\varphi_3\), \(\pi_3\) in \(\hat{H}\) with the corresponding quantum-mechanical operators \(\hat{\varphi_3}\), \(\hat{\pi_3}\), to describe our system. This operator acts in the Hilbert space \(\mathcal{H}\otimes\Gamma\), where \(\Gamma\) is the Hilbert space corresponding to the operators \(\hat{\varphi_3}\), \(\hat{\pi_3}\).
The fact of independence of \(\hat{H}(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3)\) of \(\varphi_3\) is now stated as the commutativity of \(\hat{H}_1\) with \(\hat{\pi_3}\), and the presence of a nontrivial first integral of a system means, of course, a degeneracy of the Hamiltonian. The last fact can indicate the non-ergodicity of the system, but this new integral should commute with the momentum operator. This can be achieved, for example, by considering the integral \(\Pi:=\frac{\hat{\pi_3}}{G}\) instead of \(\hat{\pi_3}\), where \(G\) behaves like \(\sim V^{\frac{5}{3}}\) as the volume \(V\) of the system approaches infinity. Then \(\Pi\) asymptotically commutes with the momentum operator, which means the non-ergodicity of the system. The reason for choosing \(G\sim V^{\frac{5}{3}}\) is shown below. However, instead of considering \(\Pi\) as a new independent integral, we prefer another way. The variables \(x_1,...,x_3,\varphi_1,...,\varphi_3,p_1,...,p_3,\pi_1,...,\pi_3\) are canonically conjugate variables obeying Hamiltonian evolution at temperatures \(T<T_0\) with the Hamiltonian \(F(x_1,...,\pi_3|T)\) (see section 3). Just as we did before, we can show that \(F(x_1,...,\pi_3|T)\) does not depend on \(x_1,...,x_3,\varphi_1,...,\varphi_3\). But the system in the equilibrium state has an energy (in the thermodynamic limit) proportional to the volume of the system, therefore \(E\leq CV\) for some constant \(C\). On the other hand, if \(I_1,I_2,I_3\) are the eigenvalues of the inertia operator of our system as a solid body and \(\omega_1,\omega_2,\omega_3\) are the components of the angular velocity along the corresponding principal axes of the inertia operator, then \(E\geq\frac{I_1}{2} \omega_1^2+\frac{I_2}{2} \omega_2^2+\frac{I_3}{2} \omega_3^2\). However, \(I_1,I_2,I_3\sim V^{\frac{5}{3}}\).
Therefore, in the thermodynamic limit \(\omega_1=\omega_2=\omega_3=0\) and our system can move only parallel to itself; in other words, \(\dot{\varphi}_1=\dot{\varphi}_2=\dot{\varphi}_3=0\). So, the variables \(\varphi_1,\;\varphi_2,\;\varphi_3\) are integrals of motion of the system commuting with the momentum operator, which makes our system a non-ergodic one. Since \(CV\geq E\geq\frac{I_1}{2} \omega_1^2+\frac{I_2}{2} \omega_2^2+\frac{I_3}{2}\omega_3^2=\frac{\pi_1^2}{2I_1}+...+\frac{\pi_3^2}{2I_3}\), and \(I_1,I_2,I_3\sim V^{\frac{5}{3}}\), we have \(\pi_1,\;\pi_2,\;\pi_3\sim V^{\frac{4}{3}}\). At low angular velocities \(\omega_1,...,\omega_3\) the free energy of the whole system is represented as \begin{eqnarray} F(x_1,...,\pi_3)=F_0(x_1,...,p_3)+\frac{I_1\omega_1^2}{2}+...+\frac{I_3\omega_3^2}{2}, \end{eqnarray} for some function \(F_0(x_1,...,p_3)\) of the variables \(x_1,...,p_3\). The momentum variables \(\pi_1,...,\pi_3\) can be chosen so that the time rates of change of the canonically conjugate coordinates are equal to \(\omega_1,...,\omega_3\). Then \(\pi_1=I_1\omega_1\),...,\(\pi_3=I_3\omega_3\) and the effective Hamiltonian of the system equals \begin{eqnarray} F(x_1,...,\pi_3)=F_0(x_1,...,p_3)+\frac{\pi_1^2}{2I_1}+...+\frac{\pi_3^2}{2I_3}. \end{eqnarray} The form of this Hamiltonian, together with the fact that in the thermodynamic limit \(\pi_1,\;\pi_2,\;\pi_3\sim V^{\frac{4}{3}}\), implies that \(\varphi_1,\;\varphi_2,\;\varphi_3\) are integrals of motion commuting with the operator of the total system momentum. Now let us ask a question arising out of the context of our analysis: why is the crystalline state of matter stable at temperatures different from zero? The fact that matter solidifies at zero temperature is almost evident: the configuration of the system must achieve the minimum of the potential energy.
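For reference, the scaling estimates used in this argument can be collected into a single chain; this is just a restatement of the bounds above, with \(E \leq CV\) and \(I_k \sim V^{\frac{5}{3}}\):

```latex
\begin{eqnarray}
\frac{I_k}{2}\,\omega_k^2 \leq E \leq CV
\;\Longrightarrow\;
\omega_k^2 \leq \frac{2CV}{I_k} \sim \frac{V}{V^{\frac{5}{3}}} = V^{-\frac{2}{3}},
\qquad k=1,2,3,
\end{eqnarray}
so that \(\omega_k \sim V^{-\frac{1}{3}} \to 0\) in the thermodynamic limit, while the conjugate
momenta scale as
\begin{eqnarray}
\pi_k = I_k \omega_k \sim V^{\frac{5}{3}} \cdot V^{-\frac{1}{3}} = V^{\frac{4}{3}}.
\end{eqnarray}
```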
The question arises: why do atoms keep performing just small oscillations around the points of the lattice at non-zero temperature, and why does the lattice remain faultless, though thermal fluctuations would seem to break it? This question is closely related to the question of why Ling's cell is stable at relatively high temperatures. We shall try to answer it with the help of our generalized thermodynamics (Prokhorenko and Matveev, 2011). So, let \(\mathcal{H}\) be the Hilbert space of our system, \(\hat{H}\) the Hamiltonian of our system, \(E_0\) the lowest point of its spectrum, and \(\hat{E}_0\) the spectral projection of \(\hat{H}\) onto the eigensubspace of \(\hat{H}\) corresponding to the eigenvalue \({E}_0\). In classical terms, at \(T=0\) the atoms composing the system have an arrangement which meets the condition of minimum potential energy. This means the body solidifies at \(T=0\). As we suppose, at that moment the atoms are situated at the points of a crystal lattice. But according to the accepted approach these lattices are not invariant under infinitesimal rotations, i.e. the system state obtained from the initial one by an infinitesimal rotation does not coincide with the initial one. This results in a degeneracy of \(E_0\), i.e. \({\rm tr \mit}\hat{E}_0>1\). Now let us complete \(\{\hat{H}\}\) to a complete set of (commuting) observables by self-adjoint operators \(\hat{K}_1,\hat{K}_2,...\). Here we use Dirac's terminology (Dirac, 1958). Completeness of the system of observables \(\hat{H},\hat{K}_1,\hat{K}_2,...\) means that their joint spectrum is simple (non-degenerate). Let \(P_1,P_2,...\) be the orthogonal projectors in \(\mathcal{H}\) that project onto subspaces of the subspace \({\rm Im \mit} \hat{E}_0\) (i.e. \(P_i\hat{E}_0=\hat{E}_0P_i=P_i\)) and are projections onto common eigensubspaces of the family of operators \(\hat{H},\hat{K}_1,\hat{K}_2,...\). All \(P_1,P_2,...\) are clearly one-dimensional due to the completeness of the family of operators \(\hat{H},\hat{K}_1,\hat{K}_2,...\).
The generalized microcanonical distribution (Prokhorenko and Matveev, 2011) describing our system can be taken, for example, in the following form: \begin{eqnarray} \rho=P_f, \label{1} \end{eqnarray} for any \(f\). The observed values of the integrals \(K_i\), \(i=1,2,...\) in the state \(\rho\) are given by the following formula: \begin{eqnarray} K'_i={\rm tr \mit}(\rho K_i). \end{eqnarray} Let \(SO(3)\) be the group of rotations of Euclidean three-dimensional space, \(o\) an arbitrary element of this group, and \(\hat{o}\) a unitary representation of \(o\) in the state space \(\mathcal{H}\) of our system. When subjected to the transformation \(o \in SO(3)\), the state \(\rho\) goes to \(\hat{o}\rho\hat{o}^+\). Let us show that, if necessary, replacing the complete set \(\hat{H},\hat{K}_1,\hat{K}_2,...\) of observables by another complete set \(\hat{H},\hat{L}_1,\hat{L}_2,...\) allows us to choose \(f\) in (\ref{1}) so that for some \(o \in SO(3)\) and for some integer \(i\) \begin{eqnarray} {\rm tr \mit}(\rho K_i)\neq{\rm tr \mit}(\hat{o}\rho\hat{o}^+ K_i). \end{eqnarray} If the stated conclusion is false, then \(\forall i,j=1,2...\), \(\forall o \in SO(3)\), we have \begin{eqnarray} {\rm tr \mit}(\hat{o}P_i\hat{o}^+P_j)={\rm tr \mit}(P_iP_j)=\delta_{ij}. \end{eqnarray} But the latter means that \(\forall i=1,2,...\) \begin{eqnarray} \hat{o}P_i\hat{o}^+=P_i. \label{2} \end{eqnarray} Let \(f_i\) be a unit vector spanning the image of \(P_i\). It follows from (\ref{2}) that \(\forall i=1,2...\) \begin{eqnarray} \hat{o}f_i={\rm exp \mit}(i\varphi_i(o))f_i \label{3} \end{eqnarray} for some functions \(\varphi_i(o)\) on \(SO(3)\). The last conclusion is true for any complete set of observables \(\hat{H},\hat{K}_1,\hat{K}_2,...\). In particular, it is clear that in (\ref{3}) we can choose the orthonormal basis \(\{f_i\}\) in \({\rm Im \mit} \hat{E}_0\) arbitrarily.
Thus \(\forall o \in SO(3)\), the restriction of \(\hat{o}\) to \({\rm Im \mit} \hat{E}_0\) should be diagonal in any orthonormal basis; therefore, the restriction of \(\hat{o}\) to \({\rm Im \mit} \hat{E}_0\) must be proportional to the identity operator. But \(SO(3)\) has no one-dimensional representations except the trivial one. Therefore, \(\forall o \in SO(3)\) \(\hat{o}=1\) on \({\rm Im \mit} \hat{E}_0\). Thus, any ground state of our system should go to itself under each rotation; but this is false, as we have seen above. The statement is established. So, the generalized microcanonical distribution \(P_f\) has the property that some rotation of the system causes a change of the integral \(K_i\) averaged over this state for some \(f\), \(i=1,2,...\). Now, if we give a sufficiently small non-zero temperature to our system, then (as follows from the principle of physical continuity) for this temperature there exists a (generalized) equilibrium state described by a generalized microcanonical distribution \(\rho\), such that after some rotation of the system the corresponding observed value of the integral \(K_i\) must change for some \(i = 1,2...\). But the system entropy does not change under a rotation of the system. This means that for a fixed energy (corresponding to small enough temperatures) the system entropy has a plateau of dimension \(d > 0\), which provides the stability of our generalized microcanonical distributions, as was discussed in our previous work (Prokhorenko and Matveev, 2011). In addition, let us remark that instead of speaking about the completeness of the system of observables \(\hat{H},\hat{K}_1,\hat{K}_2,...\), we should speak about the macroscopic completeness of this system.
We say that a system of macroscopic observable quantities \(\hat{O}_1,\;\hat{O}_2,...\) is macroscopically complete if any macroscopic quantity (which clearly commutes with \(\hat{O}_1,\;\hat{O}_2,...\) because all macroscopic quantities are simultaneously measurable) is a function of the observables \(\hat{O}_1,\;\hat{O}_2,...\). Thus, for sufficiently small temperatures the system has stable stationary states which are described by generalized microcanonical distributions that do not reduce to the common microcanonical distribution. We identify the crystal states of matter exactly with such states, and the stability of the crystal state can be explained by the just-proven stability of the corresponding microcanonical distribution. \section{Van der Waals Model of Protoplasm} The general description of this model is presented in the introduction. Within the framework of this model, and based on our generalized thermodynamics, we give numerical estimates for some changes occurring in a cell when it is excited or damaged: the amount of heat emitted by the cell and the amount of potassium ions released from the cell to the environment (according to Ling, potassium ions in the resting state are bound by proteins). Let us consider two extreme protoplasm states: the resting state and ``dead'' protoplasm. First, let us discuss how the dead protoplasm appears in the context of our model. In dead protoplasm the protein molecules are in the folded state (Prokhorenko and Matveev, 2011), and we suppose they are homogeneous balls of radius \(r_0\) and dielectric permittivity \(\varepsilon'\). The dielectric permittivity of the other matter in the cell is denoted by \(\varepsilon\). Assume that \(M\) is the mass of a protein molecule. Since a protein molecule contains a lot of atoms, \(M\) is a very large quantity. Let us write out the equations of motion which describe the motion of the protein molecules.
We suppose that the cell is described by classical mechanics; the results obtained below imply, however, that the answer in the quantum case is the same. The motion of the protein molecules may be treated classically because of the large value of \(M\). Let \(x=(p,q)\) denote the coordinates and momenta of all protein molecules, and put by definition \(y=(p',q')\) for the coordinates and momenta of all other protoplasm components. Let \(H(x,y)\) be the Hamiltonian of the whole protoplasm. Then Hamilton's equations for \(x\) take the following form: \begin{eqnarray} \dot{p}=-\frac{\partial H(x,y)}{\partial q},\nonumber\\ \dot{q}=\frac{\partial H(x,y)}{\partial p} \label{I1}. \end{eqnarray} However, since the cell is dead, its distribution function is the Gibbs distribution. In particular, the conditional probability density that the variable \(x\) takes the value \(x'\) provided the variable \(y\) takes the value \(y'\) is given by: \begin{eqnarray} w(x'|y')=\frac{1}{Z_1(y')}e^{-\frac{H(x',y')}{T}}, \end{eqnarray} where \(T\) is the system temperature and \begin{eqnarray} Z_1(y'):=\int d x\, e^{-\frac{H(x,y')}{T}}. \end{eqnarray} Since the protein molecules move slowly and their mass \(M\) is very large (thousands of daltons), in (\ref{I1}) we can replace the right-hand sides by their averages over the conditional distribution \(w(y|x)\) of the fast variables. Omitting rather straightforward calculations, we find that the averaged system (\ref{I1}) is again Hamiltonian, and the corresponding Hamiltonian is a free energy of the system. More precisely, the averaged system (\ref{I1}) is given by: \begin{eqnarray} \dot{p}=-\frac{\partial F(x|T)}{\partial q},\nonumber\\ \dot{q}=\frac{\partial F(x|T)}{\partial p} \label{I2}, \end{eqnarray} where \begin{eqnarray} F(x|T):=-T \ln \int d y\, e^{-\frac{H(x,y)}{T}}. \end{eqnarray} This is the standard adiabatic limit. Note two more properties. 1.
If the whole system (the protoplasm) is described by the Gibbs distribution: \begin{eqnarray} w(x,y)=\frac{1}{Z} e^{-\frac{H(x,y)}{T}} \label{I12}, \end{eqnarray} then the probability density for the protein molecules to be at a given point of their phase space is obtained by integrating (\ref{I12}) over \(y\). This probability density \(w(x)\) is: \begin{eqnarray} w(x)={\rm const \mit}\; e^{-\frac{F(x|T)}{T}}. \end{eqnarray} That is, we again obtain a Gibbs distribution, in which the Hamiltonian is the effective Hamiltonian for the protein molecules obtained above. 2. Using the standard formula we can calculate the free energy \(F'(T)\) of the protein system with the Hamiltonian \(F(x|T)\). Elementary calculations give: \begin{eqnarray} F'(T)=F(T):=-T \ln \int dx\, dy\, e^{-\frac{H(x,y)}{T}}, \end{eqnarray} i.e. \(F'(T)\) equals the free energy of the whole system \(F(T)\). Now let's specify the form of the effective Hamiltonian \(F(x|T)\) as a function of the coordinates and momenta of the protein molecules. We suppose that the contribution to \(F(x|T)\) depending nontrivially on \(x\) is caused by the Van der Waals interaction between the protein molecules, so that \(F(x|T)\) is given by: \begin{eqnarray} F(x|T)=E_{kin}(p)+F_0(T)+\sum \limits_{i>j} V(q_i-q_j|T), \end{eqnarray} where \(E_{kin}(p)\) is the kinetic energy of the protein molecules regarded as material points, \(F_0(T)\) is a function of temperature only, and \(q_i,\;p_i\), \(i = 1,2,3,\ldots\) are the Cartesian coordinates of the protein molecules and the canonically conjugate momenta. \(V(q|T)\) is a pair interaction potential of the following form: \begin{eqnarray} V(q|T)=+\infty\;{\rm if \mit}\;|q|<2r_0,\nonumber\\ V(q|T)=-\frac{C}{|q|^6},\;{\rm if \mit}\;|q|\geq 2r_0, \label{POTENTIAL} \end{eqnarray} where \(C\) is some positive constant.
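Property 1 above (the marginal of the slow variables is again Gibbsian, with the effective Hamiltonian \(F(x|T)\)) can be illustrated numerically on a toy two-variable Hamiltonian; the Hamiltonian below is a hypothetical example chosen for checkability, not the protoplasm Hamiltonian:

```python
import numpy as np

# Toy illustration of the adiabatic limit (hypothetical two-variable
# Hamiltonian, not the protoplasm one): H(x, y) = x^2/2 + y^2/2 + g*x*y,
# with x slow and y fast.  F(x|T) = -T ln Int dy exp(-H(x, y)/T) is computed
# by quadrature and compared with the exact Gaussian result
# F(x|T) = (1 - g^2) x^2 / 2 + const.
T, g = 1.0, 0.5
y = np.linspace(-12.0, 12.0, 4001)
dy = y[1] - y[0]
xs = np.linspace(-3.0, 3.0, 61)

F = np.array([
    -T * np.log(np.sum(np.exp(-(0.5 * x**2 + 0.5 * y**2 + g * x * y) / T)) * dy)
    for x in xs
])

# Subtract F at x = 0 to drop the x-independent constant before comparing
F_shifted = F - F[len(xs) // 2]
F_exact = 0.5 * (1.0 - g**2) * xs**2
max_err = np.max(np.abs(F_shifted - F_exact))
```

Up to the additive constant, the numerically integrated effective Hamiltonian coincides with the exact one, so \(e^{-F(x|T)/T}\) is indeed the marginal Gibbs weight of the slow variable.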
The explicit formula expressing \(C\) in terms of \(\varepsilon\), \(\varepsilon'\), \(r_0\) can be found, for example, in (Lifshitz and Pitaevsky, 1978). Now let's find the thermodynamic variables for the case just described. The solution of this problem under the condition \begin{eqnarray} \frac{\max_{2r_0<r}|V(r|T)|}{T}\ll1 \label{LIP} \end{eqnarray} is given in many textbooks, for example (Landau and Lifshitz, 1995). However, in real cells (for example, the erythrocyte) this condition is not fulfilled; this is the subject of a special analysis in the next section. For simplicity we suppose here that condition (\ref{LIP}) is fulfilled. In the case of the living cell, as we shall see later, the free energy of the cell is given by the expression: \begin{eqnarray} F(V,T)_l=F_0(V,T)+F_{id}(V,T), \label{SS1} \end{eqnarray} where \(F_0(V,T)\) is the free energy of the water with all the proteins removed and \(F_{id}\) is the free energy of the proteins calculated as if they formed an ideal gas. For the dead protoplasm \begin{eqnarray} F(V,T)_d=F_0(V,T)+F_{id}(V,T)+\Delta F(V,T), \label{SS2} \end{eqnarray} where \begin{eqnarray} \Delta F(V,T)=-T \ln [ \int \limits_V ...\int \limits_V \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V} e^{-\frac{1}{T} \sum \limits_{1\leq i<j\leq N} V(q_i-q_j|T)}]. \label{CONF} \end{eqnarray} Note that in using formulas (\ref{SS1}), (\ref{SS2}), (\ref{CONF}) we neglect the conformational part of the free energy. There is one more omitted contribution to the free energy, caused by the possibility of rotation of a protein molecule as a whole. However, these contributions for the fully unfolded and the folded protein conformations differ only by a term of the form \({\rm const\; \mit} T\), and therefore, as will be clear from what follows, omitting them does not affect the final results.
For the case when (\ref{LIP}) is fulfilled and the gas is so rarefied that only pair collisions need to be taken into account, the calculation of the integral (\ref{CONF}) is carried out in many textbooks (for example, see (Landau and Lifshitz, 1995)). For use in the real case below, we give here the derivation of \(\Delta F(V,T)\) under condition (\ref{LIP}), with only pair collisions taken into account. Put by definition \(U(q|T):= \sum \limits_{1\leq i<j\leq N} V(q_i-q_j|T)\). Then \begin{eqnarray} \Delta F(V,T)=-T \ln [ \int \limits_V ...\int \limits_V \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V} \{e^{-\frac{U(q|T)}{T}}-1\}+1] \end{eqnarray} If we take into account only pair collisions and suppose them rare, then the whole configuration space of the system \(\mathcal{C}=\mathbb{R}^3\times...\times \mathbb{R}^3 \) can be divided into subregions \(\mathcal{C}_{i,j}\), \(1\leq i<j\leq N\), of equal volume, such that in each of them a collision of the \(i\)-th and \(j\)-th particles happens. But in \(\mathcal{C}_{i,j}\) we have \(U(q|T)=V(q_i-q_j|T)\). So: \begin{eqnarray} \Delta F(V,T)=-T \ln [ \sum \limits_{1\leq i<j\leq N} \int \limits_{\mathcal{C}_{i,j}} \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V} \{e^{-\frac{V(q_i-q_j|T)}{T}}-1\}+1]. \end{eqnarray} But the pairs \((i,j)\) with \(1\leq i<j\leq N\) can be chosen in \(\frac{N(N-1)}{2}\approx \frac{N^2}{2}\) ways. So we have \begin{eqnarray} \Delta F(V,T)=-T \ln [ \frac{N^2}{2V^2} \int \limits_{V} \int \limits_{V} d^3 q_1 d^3 q_2 \{e^{-\frac{V(q_1-q_2|T)}{T}}-1\}+1]=\nonumber\\ -T \ln [ \frac{N^2}{2V} \int \limits_{V} d^3 q \{e^{-\frac{V(q|T)}{T}}-1\}+1] \end{eqnarray} Expanding the logarithm in a Taylor series near \(1\), the final result is \begin{eqnarray} \Delta F(V,T)=-T \frac{N^2}{2V} \int \limits_{V} d^3 q \{e^{-\frac{V(q|T)}{T}}-1\}.
\end{eqnarray} In the region \(r:=|q|<2r_0\) we have \(\{e^{-\frac{V(q|T)}{T}}-1\}=-1\), and in the region \(r>2r_0\), owing to the weakness of the interaction, \(\{e^{-\frac{V(q|T)}{T}}-1\}=-\frac{V(q|T)}{T}\). Put by definition: \begin{eqnarray} b=\frac{16 {\pi}r_0^3}{3},\nonumber\\ a=2\pi \int \limits_{2r_0}^{+\infty} |V(r|T)|r^2dr. \end{eqnarray} Note in passing that with our choice of potential \begin{eqnarray} a=\frac{\pi C}{12r_0^3}. \end{eqnarray} With these notations we can rewrite \(\Delta F(V,T)\) in the following way: \begin{eqnarray} \Delta F(V,T)= T \frac{N^2}{V}(b-\frac{a}{T}). \end{eqnarray} So \begin{eqnarray} S_d=S_0+S_{id}-b\frac{N^2}{V},\nonumber\\ E_d=E_0+E_{id}-\frac{N^2a}{V}, \end{eqnarray} where \(S_d\) and \(E_d\) are the entropy and energy of the dead protoplasm, \(S_0\) and \(E_0\) are the entropy and energy of the protoplasm with all the protein molecules removed, and \(S_{id}\) and \(E_{id}\) are the entropy and energy of the ideal gas of protein molecules. This defines the thermodynamic characteristics of the dead protoplasm in our model. Now let's consider the living protoplasm. Let's divide the effective Hamiltonian of the protein system, obtained by the adiabatic-limit method, into two summands: \begin{eqnarray} H=H_{v-d-W}+H'. \end{eqnarray} Here \(H_{v-d-W}\) contains the kinetic energy of the proteins as material points and the energy of the Van der Waals interaction between them. The Hamiltonian \(H'\) depends on the variables describing the internal degrees of freedom of the proteins. We suppose that in a certain sense \(H'\ll H_{k}\), where \(H_k\) is the kinetic energy of all protein molecules.
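The constants \(a\) and \(b\) introduced above can be checked symbolically; a sketch (the hard-core constant \(b\) is half the excluded volume of a pair of spheres of radius \(r_0\)):

```python
import sympy as sp

# Symbolic check of the constants in Delta F = T (N^2/V) (b - a/T) for the
# potential V(r) = -C/r^6 at r >= 2 r_0 with a hard core at r < 2 r_0.
r, C, r0 = sp.symbols('r C r_0', positive=True)

# b: half the excluded volume of a pair of hard spheres of radius r_0
b = sp.Rational(1, 2) * sp.integrate(4 * sp.pi * r**2, (r, 0, 2 * r0))

# a = 2*pi * Integral_{2 r_0}^infinity |V(r)| r^2 dr
a = 2 * sp.pi * sp.integrate((C / r**6) * r**2, (r, 2 * r0, sp.oo))
```

Both integrals reproduce the values quoted in the text, \(b=16\pi r_0^3/3\) and \(a=\pi C/(12 r_0^3)\).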
Next, according to our general viewpoint (Prokhorenko and Matveev, 2011), in the living protoplasm some first integrals in involution \(K_1,...,K_n\) are active, and the statistical weight of the protein molecules is given by: \begin{eqnarray} W(E)=\int \prod \limits_{i=1}^N dp_idq_i \prod \limits_{j=1}^n\delta(K_j-K'_j)\delta(H_{v-d-W}+H'-E), \label{J2} \end{eqnarray} where \(p_i\), \(q_i\) are the momenta and coordinates of the \(i\)-th protein. Let's try to determine the form of the integrals \(K_i\). According to Ling, intracellular water in the resting living protoplasm is in a bound state and the protein molecules form a paracrystal. Since we suppose that the living state differs from the non-living one by the activity of the integrals \(K_i\), it is reasonable to assume that the fixation of the protein molecules at the lattice points is accomplished by the multiplier \(\prod \limits_{j=1}^n\delta(K_j-K'_j)\) in the integral defining the statistical weight. Therefore we simply suppose that \begin{eqnarray} \prod \limits_{j=1}^n\delta(K_j-K'_j)=\prod \limits_{i=1}^N\delta(q_i-q'_i), \end{eqnarray} where \(q'_i\) are the coordinates of the \(i\)-th point of the lattice of protein molecules. But on the support of \(\prod \limits_{i=1}^N\delta(q_i-q'_i)\) the potential energy of protein interaction \(\sum \limits_{i>j} V(q_i-q_j|T)\) is a constant. Furthermore, we can suppose that on the support of \(\prod \limits_{i=1}^N\delta(q_i-q'_i)\) the potential energy of the proteins \(\sum \limits_{i>j} V(q_i-q_j|T)=0\). Indeed, if the proteins are situated at the points of the lattice mentioned above, then \begin{eqnarray} \sum \limits_{i>j} V(q_i-q_j|T)\sim \frac{N}{r^6}, \end{eqnarray} where \(r\) is the minimal distance between proteins. But \(r\sim(\frac{V}{N})^{1/3}\). Therefore: \begin{eqnarray} \sum \limits_{i>j} V(q_i-q_j|T)\sim N(\frac{N}{V})^{2}.
\label{33} \end{eqnarray} Further on we show that if the potential energy of the protein-protein interaction is neglected, then, when the cell dies, the quantity of emitted heat and the number of potassium ions released from the cell, calculated per protein molecule, are linear in \(\frac{N}{V}\) up to logarithmic factors. So, as follows from (\ref{33}), in (\ref{J2}) in the low-density limit the potential energy of protein interaction in \(H_{v-d-W}\) can be neglected, leaving just the kinetic energy \(H_k\). Thus, for the statistical weight: \begin{eqnarray} W(E)=\int \prod \limits_{i=1}^N dp_idq_i \prod \limits_{j=1}^N\delta(q_j-q'_j)\delta(H_{k}+H'-E). \end{eqnarray} Since \(H'\ll H_{k}\) we can rewrite this expression in the following way: \begin{eqnarray} W(E)=\int \prod \limits_{i=1}^N dp_idq_i \prod \limits_{j=1}^N \delta(q_j-q'_j)\delta(\sum \limits_{i=1}^N \frac{p_i^2}{2M}-E). \end{eqnarray} Consequently, up to an inessential multiplier, the statistical weight \(W\) equals the statistical weight of an ideal gas. Therefore, in our model the thermodynamic variables of the living protoplasm take the form: \begin{eqnarray} E_l=E_0+E_{id},\nonumber \\ S_l=S_0+S_{id}. \end{eqnarray} Here \(E_l\) and \(S_l\) are the energy and entropy of the living protoplasm. Obviously \(E_d< E_l\), i.e. our model predicts that activation and death of the protoplasm are exothermic processes. Numerical evaluations are given in the next section. For the quantity of released heat we thus have: \begin{eqnarray} Q=\frac{N^2a}{V}. \end{eqnarray} That the variables \(q_i\), or variables close to them, can play the role of integrals of motion in our model follows from the following additional consideration. Let's denote by \(E_{kin}\) the kinetic energy of one protein molecule. Obviously \begin{eqnarray} |\dot{q}_i|\leq \sqrt{\frac{2E_{kin}}{M}}\sim \sqrt{\frac{T}{M}}.
\end{eqnarray} Therefore, in the limit \(M\rightarrow \infty\) (very large protein mass) \(\dot{q}_i=0\) and the \(q_i\) are integrals of motion. Now let us determine the number of potassium ions released from the dying cell. In our previous work (Prokhorenko and Matveev, 2011) the following formula was obtained. Suppose that the number of first integrals in involution is so large that the number of active integrals can be characterized by a continuous parameter \(s \in[0,1]\). In addition, the number of active first integrals is an increasing function of \(s\); the case \(s = 0\) corresponds to none of the integrals being active, and the case \(s = 1\) corresponds to all the first integrals being active. Suppose \(s\) varies infinitesimally, \(s\mapsto s'=s-\delta s\), \(\delta s>0\), where \(\delta s\) is infinitely small. Now let's define the function \((\delta f)(S)\) of the entropy by the condition \((\delta f)(S)=(\delta S)_E\) (for \(s\mapsto s'=s-\delta s\)). This definition is correct owing to the assertion proved in (Prokhorenko and Matveev, 2011) that \((\delta S)_E\) (for \(s\mapsto s'=s-\delta s\), \(\delta s>0\)) is constant along any adiabatic process. So, let \(s\mapsto s'=s-\delta s\), \(\delta s>0\), with \(\delta s\) infinitely small. As was shown in (Prokhorenko and Matveev, 2011), the change \(\delta N\) of the number of potassium ions in the cell can be found from the following formula: \begin{eqnarray} \delta N=T(\delta f)'(S)(\frac{\partial S}{\partial \mu})_T, \end{eqnarray} where \(\mu\) is the chemical potential of the potassium ions in the cell. First, let's determine \((\frac{\partial S}{\partial \mu})_T\). Denote by \(\tilde{F}_0(V,T)\) the free energy of the protoplasm in the absence of potassium ions. Denote by \(\varphi(V,T)\) the free energy of an ideal gas at temperature \(T\) whose particle mass equals the mass \(M\) of the potassium ion and which contains only one molecule.
It is known (Landau and Lifshitz, 1995) that \begin{eqnarray} \varphi(V,T)=-T \ln [V(\frac{MT}{2\pi\hbar^2})^{\frac{3}{2}}]. \end{eqnarray} Denote by \(\psi(V,T)\) the solubility of potassium in the protoplasm, i.e. the change of the free energy of the protoplasm upon the transfer of one potassium ion from infinity to a given point. Then the change of the free energy of the protoplasm after one potassium ion is added is just the sum of \(\varphi(V,T)\) and \(\psi(V,T)\), so that the free energy of the whole protoplasm becomes \begin{eqnarray} \tilde{F}_0(V,T) \mapsto \tilde{F}(V,T)=\tilde{F}_0(V,T)+\varphi(V,T)+\psi(V,T). \end{eqnarray} If \(N\) potassium ions are inserted into the protoplasm, with \(N\) still very small, then the interaction between different potassium ions can be neglected and we have: \begin{eqnarray} \tilde{F}_0(V,T) \mapsto \tilde{F}(V,T)=\tilde{F}_0(V,T)+N\varphi(V,T)+N\psi(V,T)+NT \ln(\frac{N}{e}). \end{eqnarray} The last summand arises from the usual combinatorial multiplier \(\frac{1}{N!}\) included in the definition of the partition function of the potassium ions. In our calculations we neglect the solubility of the potassium ions, i.e. we set \(\psi(V,T)=0\). Put by definition \begin{eqnarray} \mathcal{A}(V,T):=V(\frac{MT}{2\pi\hbar^2})^{\frac{3}{2}}. \end{eqnarray} Thus, after the introduction of \(N\) potassium ions into the protoplasm, the free energy of the protoplasm changes as follows: \begin{eqnarray} \tilde{F}_0(V,T) \mapsto \tilde{F}(V,T)=\tilde{F}_0(V,T)+N\psi(V,T)+NT \ln(\frac{N}{e\mathcal{A}(V,T)}). \end{eqnarray} Notice that \(\frac{N}{\mathcal{A}(V,T)}\ll 1\): by order of magnitude this value is the occupation density of phase-space cells on the constant-energy surface, and it is very small because the gas of potassium ions, as we have assumed, is strongly rarefied (in particular, this allows us to use the formulas of classical statistical mechanics).
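The thermodynamic derivatives of \(\tilde{F}\) used below can be verified symbolically, treating the phase-space factor \(\mathcal{A}=\mathcal{A}(V,T)\) as a constant in the differentiations (consistent with the approximate equalities adopted in the text, which neglect unity next to \(\ln\frac{N}{e\mathcal{A}}\)); a sketch:

```python
import sympy as sp

# Check of the potassium-gas formulas, with the phase-space factor A treated
# as constant; the text's approximation neglects unity next to ln(N/(e*A)).
N, T, A = sp.symbols('N T A', positive=True)

F = N * T * sp.log(N / (sp.E * A))      # free-energy shift from N inserted ions
S = -sp.diff(F, T)                      # entropy shift: -N ln(N/(e*A))
mu = sp.diff(F, N)                      # chemical potential: T ln(N/(e*A)) + T
dmu_dN = sp.simplify(sp.diff(mu, N))    # equals T/N exactly
```

The exact chemical potential is \(T\ln\frac{N}{e\mathcal{A}}+T=T\ln\frac{N}{\mathcal{A}}\); dropping the unity next to the large logarithm gives the expression used in the text, while \((\partial\mu/\partial N)_T=T/N\) holds exactly.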
Potassium adsorption is accounted for in the solubility \(\psi(V,T)\). We assumed that this solubility equals zero, but this assumption is not quite right. After cell death the chemical potential of the potassium ions, obtained by differentiating the free energy with respect to the number of potassium ions, remains constant. Since the dying cell releases \(\approx 0.98\) of its total amount of potassium ions, the solubility \(\psi(V,T)\) increases by \(T\ln 50\). Although this section contains exact equalities, these equalities are actually approximate, because we neglect unity compared to \(\ln(\frac{N}{e\mathcal{A}(V,T)})\) and the change of \(\psi(V,T)\) by the value \(T\ln 50\). However, the following analysis establishes that the contribution of these omitted summands is inessential. When \(N\) potassium ions are inserted into the protoplasm, the entropy changes as follows: \begin{eqnarray} \tilde{S}_0(V,T)\mapsto\tilde{S}(V,T)=\tilde{S}_0(V,T)-N\ln(\frac{N}{e\mathcal{A}(V,T)}). \end{eqnarray} We have \begin{eqnarray} (\frac{\partial \tilde{S}}{\partial \mu})_T=\frac{(\frac{\partial \tilde{S}}{\partial N})_T}{(\frac{\partial \mu}{\partial N})_T}. \end{eqnarray} Further: \begin{eqnarray} (\frac{\partial \tilde{S}}{\partial N})_T=-\ln(\frac{N}{e\mathcal{A}(V,T)}). \end{eqnarray} And: \begin{eqnarray} \mu=(\frac{\partial\tilde{F}(V,T,N)}{\partial N})_T=T\ln(\frac{N}{e\mathcal{A}(V,T)}). \end{eqnarray} Therefore: \begin{eqnarray} (\frac{\partial \mu}{\partial N})_T=\frac{T}{N}. \end{eqnarray} Eventually: \begin{eqnarray} (\frac{\partial \tilde{S}}{\partial \mu})_T=\frac{N}{T}\ln[\frac{e\mathcal{A}(V,T)}{N}], \end{eqnarray} and \begin{eqnarray} \delta N=N(\delta f)'(S)\ln[\frac{e\mathcal{A}(V,T)}{N}].\label{DIFFUR} \end{eqnarray} \section{Van der Waals Model of Protoplasm. Numerical Evaluations 1.
Potassium Ions Efflux from the Cell and Heat Release} This section is concerned with obtaining numerical estimates based on the Van der Waals model (see above) and comparing them with experimental data. To specify the model parameters (cell size, number of proteins and ions in the cell) we chose the human erythrocyte, a well-studied cell with a relatively simple structure-function organization. The density of potassium ions in the living erythrocyte is estimated as \(n=6.02\times10^{19}\ cm^{-3}\). Let's assume that when an erythrocyte dies the potassium concentration in it becomes equal to the potassium concentration in the blood plasma, \(2-4\, \rm mmol/l\mit\), i.e. about \(0.96-0.98\) of the potassium ions contained in the living cell are released from the erythrocyte. We consider the cell at a temperature of \(300\ K\). A potassium nucleus contains 19 protons and 20 neutrons; therefore the mass of a potassium ion can be estimated as \(39 m_n\), where \(m_n\) is the neutron mass. In the previous section we derived the formula describing the change of the number of potassium ions in the cell under an infinitesimal change of the number of active integrals, \(s\mapsto s'=s-\delta s\), \(\delta s\) infinitely small. Let's write it out once more: \begin{eqnarray} \delta N=N(\delta f)'(S)\ln[\frac{e\mathcal{A}(V,T)}{N}]. \end{eqnarray} Formula (\ref{DIFFUR}) can be interpreted as a differential equation for \(N\). Once integrated, this equation expresses the potassium efflux from the dying cell in terms of the other cell parameters. First, let's find \((\delta f)'(S)\). We have: \begin{eqnarray} \delta f(S)=(\delta S)_E=(\delta S)_T-(\delta E)_T (\frac{\partial S}{\partial E})_T. \end{eqnarray} But \begin{eqnarray} (\frac{\partial S}{\partial E})_N=\frac{1}{T} \end{eqnarray} Therefore \begin{eqnarray} \delta f(S)=(\delta S)_T-\frac{1}{T} (\delta E)_T.
\end{eqnarray} Using the fact that (see Prokhorenko and Matveev, 2011) \begin{eqnarray} \frac{\partial (\delta S)_T}{\partial (\delta E)_T}=\frac{1}{T}, \end{eqnarray} we find \begin{eqnarray} \frac{d\, \delta f(S(T))}{d T}=\frac{1}{T^2} (\delta E)_T. \end{eqnarray} And finally \begin{eqnarray} (\delta f)'(S)=\frac{1}{T^2} (\delta E)_T\frac{\partial T}{\partial S} =\frac{1}{C_V T}(\delta E)_T, \end{eqnarray} where \(C_V\) is the heat capacity of the cell at constant volume. Now we assume that the heat capacities of the living and dead cells are almost the same, so that in all the following formulas the heat capacity of the cell can be replaced by an average value \(C_V\) which does not depend on the number of active first integrals in involution. Eventually we have \begin{eqnarray} \frac{\delta N}{N}=-\frac{1}{TC_V}(\delta E)_T \ln[\frac{N}{e\mathcal{A}(V,T)}]. \end{eqnarray} For convenience of further calculations let's introduce a new variable \begin{eqnarray} \Lambda:=\frac{N}{e\mathcal{A}(V,T)}. \end{eqnarray} Then we have: \begin{eqnarray} \delta \ln \Lambda=-\frac{1}{TC_V} (\ln \Lambda)(\delta E)_T. \label{DIFFUR1} \end{eqnarray} As before, we use the lower indices \(l\) and \(d\) to denote quantities relating to the living and dead cell respectively. Equation (\ref{DIFFUR1}) is easy to integrate, yielding: \begin{eqnarray} \frac{\ln \Lambda_d}{\ln \Lambda_l}={\rm exp \mit} (\frac{Q}{C_VT}), \label{FIRST} \end{eqnarray} where \(Q\) is the amount of heat released by the cell while it is dying. The last formula gives an implicit expression for the heat generation \(Q\); its derivation was based purely on general thermodynamic considerations, without regard to the properties of any particular model. Therefore it can be used for verification of our generalized thermodynamics. However, (\ref{FIRST}) is inconvenient for calculations, so let's simplify it using the smallness of \(\Lambda_l\).
For this purpose consider the expression \(\ln x\), where \(x\) is a very small positive number. Then \(\ln x\) is very large in absolute value and negative. If we increase \(x\) by a factor \(k\), where \(k\) is a not too large natural number, then \(\ln x \mapsto \ln x +\ln k\), i.e. \(\ln x\) remains practically unchanged. We have: \begin{eqnarray} \frac{\ln \Lambda_d}{\ln \Lambda_l}=|\frac{\ln \Lambda_d}{\ln \Lambda_l}|={\rm exp \mit}\{ \int \limits_{|\ln \Lambda_l|}^{|\ln \Lambda_d|} \frac{dx}{x}\}. \end{eqnarray} By the remark just made, on the whole interval of integration we can replace \(x\) by \(|\ln \Lambda_l|\), and we obtain: \begin{eqnarray} \frac{\ln \Lambda_d}{\ln \Lambda_l}=\{\frac{\Lambda_l}{\Lambda_d}\}^{\frac{1}{|\ln \Lambda_l|}}. \end{eqnarray} Eventually we arrive at the following implicit expression for the heat generation: \begin{eqnarray} \frac{N_l}{N_d}={\rm exp \mit} \{|\ln \Lambda_l| \frac{Q}{C_VT}\}. \end{eqnarray} The data listed in this section are enough to calculate \(|\ln \Lambda_l|\). Omitting the corresponding numerical calculations, we present the result: \(|\ln \Lambda_l|\approx 16.1\). The resulting formula for the heat generation is: \begin{eqnarray} \frac{N_l}{N_d}={\rm exp \mit}\{\frac{16.1Q}{C_VT}\}.\label{STAR} \end{eqnarray} From here it is easy to find \(\frac{Q}{C_VT}\): \(\frac{N_l}{N_d}\approx50\) and \(\ln 50\approx3.91\), so taking the logarithm of both sides we find: \begin{eqnarray} \frac{Q}{C_VT}\approx 0.24. \end{eqnarray} To check this value of the heat generation we can carry out the following qualitative reasoning. Consider a human cell at the temperature \(T_h=37\,^{\circ}C=310\ K\). The room temperature (normal for human functioning) is \(T_r=20\,^{\circ}C=293\ K\). According to Ling's theory, the cell's vital activity is expressed as the cyclic motion "resting\(\leftrightarrow\)exciting".
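Before turning to this qualitative check, the quoted values \(|\ln \Lambda_l|\approx 16.1\) and \(Q/(C_VT)\approx 0.24\) can be reproduced from the erythrocyte parameters given above; a numerical sketch in CGS units:

```python
import math

# Reproduce |ln Lambda_l| and Q/(C_V T) from the erythrocyte data (CGS units)
k_B = 1.38e-16          # erg/K
hbar = 1.054e-27        # erg*s
m_n = 1.675e-24         # g, neutron mass
T = k_B * 300.0         # temperature in energy units
M = 39.0 * m_n          # potassium-ion mass
V = 1e-10               # cell volume, cm^3
N_l = 6.02e19 * V       # potassium ions in the living cell

A = V * (M * T / (2.0 * math.pi * hbar**2)) ** 1.5   # phase-space factor A(V, T)
log_Lambda_l = abs(math.log(N_l / (math.e * A)))     # ~ 16
Q_over_CVT = math.log(50.0) / log_Lambda_l           # from N_l / N_d ~ 50
```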
If the cell were always in thermodynamic equilibrium with the thermostat, then at the transition "resting\(\leftrightarrow\)exciting" the heat \(Q\) would be released. Let's accept that in the resting state the cell is at room temperature. Let \(\alpha\) be the number defined by \(Q=\alpha C_VT\). If the cell is thermally insulated, then in the transition from the resting state to the excited state the temperature of the cell rises from \(T_r\) to \(T_e\), and the approximate relation \((T_e-T_r)/T_h=\alpha\) is likely to hold. Now, since the cell moves through its cycle in time, \(T_h\) is its mean temperature over the cycle. It is reasonable to suggest that \(T_h=(T_e+T_r)/2\), in other words \(T_e-T_r=2(T_h-T_r)\). But \(T_h-T_r=17\ K\). Therefore \(T_e-T_r=34\ K\) and \(\alpha=(T_e-T_r)/T_h=34/310=0.11\), which coincides with our result in order of magnitude. The fact that the heat emission of the dying erythrocyte calculated from the data on potassium efflux comes out approximately 2.5 times higher than the value just stated is reasonable, because excitation can be considered a stage towards death (Matveev, 2005). Our assumption that the erythrocyte temperature at rest equals the room temperature \(T_{room}\) can be explained as follows. Since the erythrocyte moves through its cycle resting\(\leftrightarrow\)exciting, it can be considered a heat engine, and the room temperature is the temperature of the cooler of this engine. As the room temperature is the most comfortable for a man, we may consider that at the room temperature as the cooler temperature the erythrocyte operates in its optimal mode. The resting temperature \(T_r\) is the lowest temperature reached by the erythrocyte during the whole resting\(\leftrightarrow\)exciting cycle. If \(T_r>T_{room}\), then the heat transfer from the erythrocyte to the cooler will be accompanied by heat transfer from a warmer body to a colder one, i.e.
by an entropy increase. This means that the erythrocyte as a heat engine would operate non-optimally. Conversely, suppose that \(T_r<T_{room}\). If the erythrocyte functions in the optimal mode (without entropy increase), then it has to pass through the part of the cycle where its temperature \(T <T_r\) surrounded by an adiabatic "cover". It is then unclear why the erythrocyte would need this part of the cycle; moreover, by Carnot's theorem on the efficiency of heat engines, the erythrocyte's efficiency could be raised by lowering the cooler temperature (the room temperature). Now let's try to calculate \(Q\) on the basis of the (Van der Waals) model under investigation. For this purpose we should find the interaction constant \(C\) in the law \(V(r)=-\frac{C}{r^6}\). To do this we proceed from a formula taken from (Lifshitz and Pitaevsky, 1978). Suppose there are two parallel non-overlapping half-spaces separated by a distance \(l\). Let \(\varepsilon\) be the dielectric permittivity of the half-spaces and \(\varepsilon'\) the dielectric permittivity of the gap between them. Suppose \(l\) is so large that if \(\omega\) is the frequency of an electromagnetic wave propagating in the gap between the half-spaces with wavelength \(l\), then \(\hbar\omega \ll T\). This is obviously our case. Then the force of attraction between the two half-spaces per unit area of the boundary surface of each half-space is equal to: \begin{eqnarray} P=\frac{T}{16 \pi l^3} \int \limits_0^{+\infty} x^2[\{\frac{\varepsilon+\varepsilon'}{\varepsilon-\varepsilon'}\}^2e^x-1]^{-1}dx. \end{eqnarray} Now suppose that the half-spaces consist of folded hemoglobin molecules and that the gap between them is filled with water. The dielectric permittivity of water is \(\varepsilon=81\) and the dielectric permittivity of hemoglobin is \(\varepsilon'\approx 2\).
Therefore the factor \(\{\frac{\varepsilon+\varepsilon'}{\varepsilon-\varepsilon'}\}^2\approx1\), and the force of attraction between the two half-spaces per unit area of the boundary surface of each of them is equal to \begin{eqnarray} P=\frac{T}{16 \pi l^3} \int \limits_0^{+\infty} x^2[e^x-1]^{-1}dx. \end{eqnarray} The integral involved can be easily calculated to any desired degree of precision, for example: \begin{eqnarray} \int \limits_0^{+\infty} x^2[e^x-1]^{-1}dx=\int \limits_0^{+\infty} x^2e^{-x}[1-e^{-x}]^{-1}dx=\nonumber\\ =\int \limits_0^{+\infty} x^2[e^{-x}+e^{-2x}+e^{-3x}+.....]dx=\nonumber\\ =2[1+\frac{1}{2^3}+\frac{1}{3^3}+....]=2[1+1/8+1/27+1/64+...]\approx2.35. \end{eqnarray} Therefore we come to the following formula for the force of attraction between the half-spaces: \begin{eqnarray} P=\frac{2.35 T}{16 \pi l^3}. \label{KAZIMIR} \end{eqnarray} Let \(V_{1H}\) be the volume of one hemoglobin molecule. Using the attraction potential of two hemoglobin molecules (\ref{POTENTIAL}), we can calculate the potential energy \(\mathcal{U}(l)\) per unit area of the boundary plane as follows: \begin{eqnarray} \mathcal{U}(l)=-\frac{C}{V_{1H}^2} 2\pi \int \limits_0^{+\infty} r dr \int \limits_0^{+\infty}dx \int \limits_0^{+\infty} dy \frac{1}{((x+y+l)^2+r^2)^3}. \end{eqnarray} Omitting the rather trivial integration, we find \begin{eqnarray} \mathcal{U}(l)=-\frac{\pi C}{12V^2_{1H}}\times \frac{1}{l^2}. \end{eqnarray} Differentiating the last expression with respect to \(l\) we find: \begin{eqnarray} P=\frac{\pi C}{6V^2_{1H}}\times \frac{1}{l^3} \end{eqnarray} Comparing the last formula with (\ref{KAZIMIR}) gives the following result: \begin{eqnarray} C=V_{1H}^2 \frac{7.05 T}{8 \pi^2}. \end{eqnarray} It follows that \begin{eqnarray} a=V_{1H}T\times \frac{7.05}{72}=9.79\times 10^{-2}\, V_{1H} T \end{eqnarray} Note that everywhere above we have measured the temperature \(T\) in energy units. Let \(T'\) be the absolute temperature expressed in kelvins.
There is the relation \(T=k_B T'\), where \(k_B=1.38\times 10^{-16}\ erg\, K^{-1}\). Hereafter \(Q_v\) denotes the heat released by the cell within the framework of the Van der Waals model; we want to estimate \(\frac{Q_v}{C_VT}\). For the heat generation we obtain: \begin{eqnarray} Q_v=1.35\times10^{-17}\, N^2 (\frac{V_H}{V})(\frac{V_{1H}}{V_H})\ erg\times K^{-1}\,T' \end{eqnarray} where \(V_H\) is the volume of all the hemoglobin contained in the dead erythrocyte. We need to know the cell volume \(V\), the volume \(V_H\) of all the hemoglobin contained in the dead cell, the mass of all the hemoglobin \(M_H\), and the specific heat capacity of hemoglobin per unit mass \(C_M\). For these data we find \(V=10^{-10}\ cm^3\) (Levine et al., 2001) and \(C_M=3.2\times 10^7\ erg\; g^{-1} K^{-1}\) (Kholodny et al., 1987). Further, it is known that the hemoglobin concentration is \(C_H=5\ mmol \times L^{-1}=5\times 10^{-6}\ mol\times cm^{-3}\) (Van Beekvelt et al., 2001), the molar volume of "dead" hemoglobin is \(32.6\times10^{3}\ cm^3/mol\) (Arosio et al., 2002), and \(N=3.02\times10^8\) (Van Beekvelt et al., 2001). Therefore \begin{eqnarray} \frac{V_H}{V}=0.163 \end{eqnarray} But the hemoglobin concentration in the human erythrocyte is \(0.33\ g/cm^3\) (Van Beekvelt et al., 2001) and its density in the dead cell (in the supercluster) is approximately two times greater than the water density (Van Beekvelt et al., 2001; Arosio et al., 2002). It follows that the mass of the erythrocyte is \(M_C=1.16\times 10^{-10}\ g\). The heat capacity \(C_V\) is \(C_V=M_C C_M=3.71\times 10^{-3}\; erg \;K^{-1}\). Let's calculate \(N^2\frac{V_H}{V} \). Taking into account that \(N^2=9.12\times 10^{16}\), we find \begin{eqnarray} \frac{N^2 V_H}{V}=1.49\times 10^{16}.
\end{eqnarray} Eventually \begin{eqnarray} \frac{Q_v}{C_V T}=54 \frac{V_{1H}}{V_H} \end{eqnarray} Here we see that if we take the true value \(\approx 3.3 \times 10^{-9}\) for \(\frac{V_{1H}}{V_H}\), then the resulting value of \(\frac{Q_v}{C_V T}\) is many times smaller than the value \(\frac{Q_v}{C_V T}=0.24\) found before. This fact has the following explanation. Our derivation of the corrections to the free energy was correct only under the condition: \begin{eqnarray} \frac{\max_{2r_0<r}|V(r|T)|}{T}\ll1 \label{LIPA} \end{eqnarray} Let's calculate the quantity on the left-hand side of this inequality. Suppose two protein molecules are situated so that the distance between their centers is slightly more than \(2r_0\). Then the potential energy between them is \(U=-C\frac{1}{(2r_0)^6}\). We have shown above that \(C=V_{1H}^2 T \frac{7.05}{8 \pi^2}\). From here, after elementary calculations, we find \begin{eqnarray} \frac{|U|}{T}=\frac{7.05}{288}\approx 2.5 \times 10^{-2}, \end{eqnarray} i.e. our criterion is indeed fulfilled. However, when two protein molecules approach each other so closely that \(2 \pi\hbar \frac{c}{l} \approx T\), the forces whose contribution we did not take into account before begin to play a significant role. We mean the Casimir forces caused by the energy of the zero-point oscillations of the electromagnetic field in the gap between the proteins, which considerably exceed the forces taken into account before. These forces can be interpreted as chemical ones. As a result, taking these new forces into consideration, it is reasonable to expect that in real dead protoplasm the protein molecules stick together into balls, or superclusters, and we should take this effect into account in the derivation of \(\Delta\tilde{F}(V,T)\).
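The coefficient \(54\) obtained above can be rechecked from the numbers quoted in this section; a numerical sketch in CGS units (the absolute temperature \(T'\) cancels out):

```python
# Recheck Q_v/(C_V T) = 54 * (V_1H / V_H) from the quoted erythrocyte data
k_B = 1.38e-16            # erg/K
a_coeff = 9.79e-2         # a = a_coeff * V_1H * k_B * T'
N2_VH_over_V = 1.49e16    # N^2 * V_H / V (dimensionless)
C_V = 3.71e-3             # erg/K, heat capacity of the erythrocyte

# Q_v = N^2 a / V = a_coeff * k_B * T' * (N^2 V_H / V) * (V_1H / V_H),
# so dividing by C_V * T' leaves the coefficient of V_1H / V_H:
coeff = a_coeff * k_B * N2_VH_over_V / C_V
```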
The following section generalizes the derivation of the corrections to the free energy to the case of possible clustering, and it becomes clear that if \(V_{KH}\) denotes the volume of such a cluster, \(Q_v\) is the heat generation of the Van der Waals model, and \(Q_{kv}\) is the heat generation for the same model with the possible clustering taken into account, then \begin{eqnarray} Q_{kv}=Q_v\frac{V_{KH}}{V_{1H}}.\label{HEAT} \end{eqnarray} As a result, we have \begin{eqnarray} \frac{Q_{kv}}{C_V T}=54 \frac{V_{KH}}{V_H} \end{eqnarray} If this new formula taking the collective phenomena into account is correct, then we find that the dead protoplasm should contain \(\approx 225\) superclusters or aggregates, whose properties we cannot characterize yet, because the conclusion about the number of clusters is the result of a rather general theoretical analysis. On the other hand, protein aggregation upon cell death is a well-known phenomenon. \section{Van der Waals Model of Protoplasm. Numerical Evaluations 2. Clustering of Protein Molecules} In this section we estimate the correction \(\Delta F\) to the free energy of the proteins with respect to their clustering. The correction \(\Delta F(V,T,N)\) is \begin{eqnarray} \Delta F=-T \ln \int \frac {d^3 q_1}{V}...\frac{d^3q_N}{V}e^{-\frac{U(q)}{T}}. \end{eqnarray} Here \(q_1\),...,\(q_N\) are the Cartesian coordinates of all molecules numbered \(1,...,N\), and the symbol \(q\) denotes the coordinates of all molecules: \(q:=(q_1,...,q_N)\). \(U(q)\) is given by \begin{eqnarray} U(q)=U_u(q)+U_s(q). \end{eqnarray} Here \(U_u\) is the potential energy of the Van der Waals attraction between particles: \begin{eqnarray} U_u(q)=\sum \limits_{1\leq i< j\leq N} V_u(q_i-q_j),\nonumber\\ \end{eqnarray} where \begin{eqnarray} V_u(q)=-C\frac{1}{|q|^6}. \end{eqnarray} Hereafter the constant \(C\) is treated as a small parameter in which all asymptotic expansions are made.
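As a consistency check, the quoted number of superclusters follows directly from equating the measured ratio with the clustered formula; a one-line computation (our variable names):

```python
# N_k = V_H / V_KH follows from equating 0.24 = 54 * V_KH / V_H
ratio_measured = 0.24              # measured Q/(C_V*T) for the dead cell
VKH_over_VH = ratio_measured / 54  # volume fraction of one supercluster
n_clusters = 1 / VKH_over_VH       # number of superclusters
print(round(n_clusters))           # 225
```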
\begin{eqnarray} U_s(q)=U^1_s(q)+U^2_s(q), \end{eqnarray} where \(U^1_s(q)\) is the potential energy of repulsion between protein molecules, arising because the molecules occupy a finite volume. For example: \begin{eqnarray} U^1_s(q)=\sum \limits_{1\leq i< j\leq N} V^1_s(q_i-q_j),\nonumber\\ V^1_s(q)=0,\; {\rm if \mit} |q|>2r_0,\nonumber\\ V^1_s(q)=+\infty\; {\rm if \mit} |q|\leq 2 r_0, \end{eqnarray} where \(r_0\) is the radius of the protein molecule. \(U^2_s\) is the potential energy of the Casimir forces, arising at very close distances between protein molecules due to the zero-point oscillations of the electromagnetic field in the gap between separate proteins. These forces are short-ranged but they have a high degree of cooperativity, which promotes cluster formation. The potentials \(U_s\), \(U_s^1\), \(U_s^2\) we sometimes call superpotentials, and the forces corresponding to them superforces, since \(U_s^1\), \(U_s^2\) are much greater than \(U_u(q)\), and the effective account of \(U_s\) in the Gibbs exponent comes down to replacing the whole configuration space of the molecules by a part of it. Let's represent \(\Delta F\) in the following form \begin{eqnarray} \Delta F=\Delta F_0+\Delta F_1+..., \end{eqnarray} where \(\Delta F_0\) is of zeroth order of vanishing in \(C\), \(\Delta F_1\) is of first order of vanishing in \(C\), and so on. Let's begin with the determination of \(\Delta F_0\); in other words, put \(C=0\) and \(U(q)=U_s(q)\). Due to the presence of \(U_s\), all the molecules stick together into clusters (balls \(B\)) of \(N_B\) molecules each. Let \(N_k\) be the number of such balls (clusters). Obviously \(N = N_BN_k\). By definition all the clusters are identical, and there are \begin{eqnarray} \frac{1}{N_k!} \frac{N!}{(N_B!)^{N_k}} \end{eqnarray} equivalent ways to distribute the molecules among the clusters.
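The combinatorial factor above can be verified by brute force for a small toy case (a sketch; `count_partitions` is our helper, not part of the model):

```python
from itertools import combinations
from math import factorial

def count_partitions(elems, block):
    """Count the ways to split `elems` into unordered blocks of size `block`."""
    if not elems:
        return 1
    total = 0
    rest = elems[1:]
    # the first element fixes one block; choose its block-1 companions
    for others in combinations(rest, block - 1):
        remaining = [e for e in rest if e not in others]
        total += count_partitions(remaining, block)
    return total

N_B, N_k = 3, 3            # small toy case: 9 molecules, 3 clusters of 3
N = N_B * N_k
brute = count_partitions(list(range(N)), N_B)
formula = factorial(N) // (factorial(N_k) * factorial(N_B) ** N_k)
print(brute, formula)      # both 280
```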
Let's fix one of these ways, which we will follow hereafter, where the molecules having coordinates \(q_1,...,q_{N_B}\) belong to the first cluster \(B_1\), the molecules having coordinates \(q_{N_B+1},....,q_{2N_B}\) belong to the second cluster \(B_2\), and so on. In calculating \(\Delta F_0\) we naturally neglect the interaction between clusters, so: \begin{eqnarray} \Delta F_0=-T \ln \frac{N!}{N_k!} [\frac{1}{N_B!} \int \frac{d^3q_1}{V}....\frac{d^3 q_{N_B}}{V} e^{-\frac {U_B(q_1,..., q_{N_B})}{T}}]^{N_k}, \end{eqnarray} where by definition we set: \begin{eqnarray} U_B(q_1,...,q_{N_B})=\sum \limits_{1\leq i<j\leq N_B} V_s^1(q_i-q_j)+U_s^2(q_1,...,q_{N_B}). \end{eqnarray} We proceed from the assumption that the cluster is a ball and the bonding forces between the molecules in the cluster are so strong that the cluster volume equals the sum of the volumes of the molecules it consists of (that is why the cluster density can exceed the water density). Further, one of the variables \(q_1\),....,\(q_{N_B}\) is the center of inertia of cluster \(B\); the choice of such a variable can be made in \(N_B\) ways. Supposing the center of inertia of the cluster is \(q_1\), we find by integrating over it: \begin{eqnarray} \Delta F_0=-T \ln \frac{N!}{N_k!} [\frac{1}{(N_B-1)!} \int \frac{d^3q_2}{V}....\frac{d^3 q_{N_B}}{V} e^{-\frac {U_B(0,..., q_{N_B})}{T}}]^{N_k}, \end{eqnarray} and the center of cluster \(B\) is at 0. Then we assume that every molecule in the cluster moves in an effective field \(W\), so that \( U_s^2(0,..., q_{N_B})=W (N_B-1)\). Instead of the integral \(\int \limits_{B\times...\times B} d^3q_2...d^3q_{N_B}\), the presence of the multiplier \(e^{-\frac{U_s^1(q_1,...,q_{N_B})}{T}}\) imposes the use of the integral \begin{eqnarray} \int \limits_{B\times...\times B}' d^3q_2...d^3q_{N_B}, \end{eqnarray} where the prime indicates the incompressibility of the molecules.
Using \(N!\approx (\frac{N}{e})^N\) we obtain: \begin{eqnarray} \Delta F_0=-T \ln \frac{1}{N_k!} [\frac{1}{(N_B-1)!} (\frac{N}{e})^{N_B}\int \limits_{B\times...\times B}' \frac{d^3q_2}{V}....\frac{d^3 q_{N_B}}{V} e^{\frac{-W (N_B-1)}{T}}]^{N_k}. \end{eqnarray} But the integral \(\int \limits_{B\times...\times B}' d^3q_2....d^3 q_{N_B}\) is elementary and equals \begin{eqnarray} \int \limits_{B\times...\times B}' d^3q_2....d^3 q_{N_B}=(N_B-1)!(\frac{V_B}{N_B})^{N_B-1}, \end{eqnarray} where \(V_B\) is the cluster volume and \(V_b:=\frac{V_B}{N_B}\) is the volume of one protein molecule. As a result, we obtain: \begin{eqnarray} \Delta F_0=-T N_k\ln \{N_B (\frac{V_B}{V_k})^{N_B-1} \frac{1}{e^{N_B-1}} e^{\frac{-W (N_B-1)}{T}}\}, \end{eqnarray} where \(V_k:=\frac{V}{N_k}\) is the volume which falls on one cluster. Now let's derive the equilibrium condition for the cluster \(B\), from which we can determine the volume of this cluster. Let's consider one equilibrium cluster in the volume \(V_k\). We are interested only in the configurational part of the free energy, since it separates from the kinetic part. Let's add one more protein molecule to the volume \(V_k\). Its free energy is \begin{eqnarray} f_1=-T \ln \int \limits_{V_k} \frac{1}{V_k} d^3q=0. \end{eqnarray} If this protein falls into the cluster, then \(N_B\) is increased by one, but since the cluster is in equilibrium, \(\Delta F_0\) should not change. In other words, the following equality should take place: \begin{eqnarray} \frac{d \Delta F_0}{d N_B}=0. \label{EQUILIB1} \end{eqnarray} The equilibrium condition (\ref{EQUILIB1}) is actually the usual thermodynamic condition of equilibrium if we regard the number \(N\) of protein molecules in a cell as a parameter influencing the state of the system. But we can use this condition only if \(N\) is not conserved, in other words, is not an integral of motion.
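The stationarity condition (\ref{EQUILIB1}) can be checked numerically against the formula for \(\Delta F_0\) above (per cluster, in units of \(T\), with \(N_k\) and \(V_k\) held fixed); the test values below are arbitrary:

```python
import math

# Finite-difference check that d(Delta F_0)/dN_B = 0 reproduces the
# equilibrium relation 1/N_B + ln(V_B/V_k) + (N_B-1)*V_b/V_B - 1 - W/T = 0.
# Arbitrary test numbers (T set to 1; N_k and V_k fixed; V_B = N_B * V_b).
T, W, V_b, V_k, N_B = 1.0, -2.0, 0.5, 30.0, 7.0

def dF0_per_cluster(nb):
    """Delta F_0 per cluster (configurational part), in units of T."""
    VB = nb * V_b
    return -(math.log(nb) + (nb - 1) * math.log(VB / V_k)
             - (nb - 1) - W * (nb - 1) / T)

eps = 1e-6
numeric = (dF0_per_cluster(N_B + eps) - dF0_per_cluster(N_B - eps)) / (2 * eps)
VB = N_B * V_b
analytic = -(1 / N_B + math.log(VB / V_k) + (N_B - 1) * V_b / VB - 1 - W / T)

print(numeric, analytic)   # the two derivatives agree
```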
But the fact that \(N\) is not an integral of motion follows from the fact that in the living cell the number of protein molecules does not remain invariant, because of the permanently proceeding processes of protein synthesis and degradation. The equation (\ref{EQUILIB1}) leads to the equality: \begin{eqnarray} \frac{1}{N_B}+\ln (\frac{V_B}{V_k})+(N_B-1)\frac{V_b}{V_B}-1-\frac{W}{T}=0. \end{eqnarray} In other words: \begin{eqnarray} \frac{V_B}{V_k}=e^{\frac{W}{T}}. \end{eqnarray} Finally: \begin{eqnarray} \Delta F_0=-T N_k \ln N_B +T[N-N_k]. \end{eqnarray} As the number of clusters is small (\( N_k\ll N\)), in the square bracket of the second summand \(N_k\) can be neglected as compared with \(N\). Since \(N_B\), conversely, is very large, we can neglect the logarithmic term as compared with \(TN\). As a result, we find: \begin{eqnarray} \Delta F_0=TN. \end{eqnarray} This term comes down simply to an additive renormalization of the entropy and therefore can be omitted. Now let's calculate \(\Delta F_1(V,T,N)\). We have: \begin{eqnarray} \Delta F_1=\Delta F-\Delta F_0+O(C^2). \end{eqnarray} Let's recall that \(C\) is considered as a small parameter. It won't be a great mistake if instead of \(\Delta F_0\) we take \(\Delta F_0\) calculated for \(N - 2\) molecules in the same volume \(V\) at the same temperature \(T\). We have: \begin{eqnarray} \Delta F_1=-T \ln \int \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V} e^{\frac{-U(q)+\Delta F_0}{T}}+O(C^2)\nonumber\\ =-T \ln \int \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V} \prod \limits_{1\leq i<j\leq N} [e^{-\frac{V_u(q_i-q_j)}{T}}-1+1] e^{\frac{-U_s(q)+\Delta F_0}{T}}+O(C^2)\nonumber\\ =-T\ln \int \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V}\{1+\sum \limits_{1\leq i<j\leq N} [e^{-\frac{V_u(q_i-q_j)}{T}}-1]\}e^{\frac{-U_s(q)+\Delta F_0}{T}}+O(C^2)\nonumber\\ =-T\ln \{1+ \int \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V}\sum \limits_{1\leq i<j\leq N} [e^{-\frac{V_u(q_i-q_j)}{T}}-1]e^{\frac{-U_s(q)+\Delta F_0}{T}}\}+O(C^2).
\end{eqnarray} Eventually, we obtain: \begin{eqnarray} \Delta F_1=-T\ln \{1+ \frac{N^2}{2}\int \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V}[e^{-\frac{V_u(q_1-q_2)}{T}}-1]e^{\frac{-U_s(q)+\Delta F_0}{T}}\}+O(C^2). \end{eqnarray} Since the number of clusters \(N_k\) is still sufficiently large, we neglect the cases when \(q_1\) and \(q_2\) lie in the same cluster and suppose them lying in different clusters \(B_1\) and \(B_2\). The values \(q_1\) and \(q_2\) are simultaneously the centers of the corresponding clusters with probability \(\frac{1}{N_B^2}\). Taking this into account: \begin{eqnarray} \Delta F_1=-T\ln \{1+ \frac{N^2}{2} N_B^2\int \frac{d^3 q_1}{V}....\frac{d^3 q_N}{V}[\langle e^{-\frac{V_u(q'_1-q'_2)}{T}}\rangle-1]e^{\frac{-U_s(q)+\Delta F_0}{T}}\}+O(C^2). \end{eqnarray} Here the variables \(q_1\) and \(q_2\) are already the centers of inertia of the clusters \(B_1\) and \(B_2\). The variables \(q'_1\), \(q'_2\) are uniformly and independently distributed over the clusters \(B_1\) and \(B_2\), and the angle brackets denote averaging over the positions \(q'_1\) and \(q'_2\). Considering the meaning of \(\Delta F_0\), the last equation can be rewritten in the following way: \begin{eqnarray} \Delta F_1=-T\ln \{1+ \frac{N^2}{2V} N_B^2\int \limits_{|q|>2r_B} d^3 q [\langle e^{-\frac{V_u(q'_1-q'_2)}{T}}\rangle-1] \nonumber\\ \times[\frac{N^{N_B-1}}{(N_B-1)!} \int \limits_{B\times...\times B} \frac{d^3 q_2}{V}....\frac{d^3 q_{N_B}}{V}e^{\frac{-U_s(0,...,q_{N_B})}{T}}]^2\}+O(C^2)\nonumber\\ =-T\ln \{1+ \frac{N^2}{2V} N_B^2\int \limits_{|q|>2r_B} d^3 q [\langle e^{-\frac{V_u(q'_1-q'_2)}{T}}\rangle-1] \nonumber\\ \times[(\frac{V_B}{V_k})^{N_B-1}e^{\frac{-W(N_B-1)}{T}}]^2\}+O(C^2), \end{eqnarray} where \(r_B\) is the radius of a cluster, \(V_B=\frac{4 \pi}{3} r_B^3\). But taking into account the cluster equilibrium conditions we derived, the expression inside the last square bracket equals 1.
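In the small-\(C\) regime the remaining pair integral can be done in closed form, \(\int_{|q|>2r_B}[e^{C/(T|q|^6)}-1]\,d^3q\approx\frac{4\pi C}{3T(2r_B)^3}\) (here we treat the clusters as points, i.e. neglect the finite cluster size in the average \(\langle\cdot\rangle\)); a quadrature sketch with toy values in hypothetical units:

```python
import math

# Small-C check of the pair integral: for |V_u| << T,
#   I = int_{|q| > 2 r_B} [exp(C/(T |q|^6)) - 1] d^3 q
# should approach 4*pi*C / (3*T*(2 r_B)^3).  Toy values chosen so that
# |V_u(2 r_B)|/T << 1, as required by the criterion in the text.
T, rB, C = 1.0, 1.0, 1e-3

def integrand(r):
    # spherical shell element 4*pi*r^2 times the Mayer-type factor
    return (math.exp(C / (T * r**6)) - 1) * 4 * math.pi * r**2

# midpoint rule on r in [2*rB, R_max]; the tail beyond R_max is negligible
R_max, n = 50.0, 200000
h = (R_max - 2 * rB) / n
I = sum(integrand(2 * rB + (k + 0.5) * h) for k in range(n)) * h

analytic = 4 * math.pi * C / (3 * T * (2 * rB)**3)
print(I, analytic)   # the two values agree to better than 1%
```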
Eventually: \begin{eqnarray} \Delta F_1=-T\ln \{1+\frac{N^2}{2V} N_B^2\int \limits_{|q|>2r_B} d^3 q [\langle e^{-\frac{V_u(q'_1-q'_2)}{T}}\rangle-1]\}+O(C^2)\nonumber\\ =-T \frac{N^2}{2V} N_B^2\int \limits_{|q|>2r_B} d^3 q [\langle e^{-\frac{V_u(q'_1-q'_2)}{T}}\rangle-1]+O(C^2). \end{eqnarray} This formula assumes that the cluster \(B_1\) has its center of inertia at zero, \(q\) is the coordinate of the center of inertia of the cluster \(B_2\), and \(q'_1\), \(q'_2\) are variables uniformly and independently distributed over the clusters \(B_1\) and \(B_2\) respectively. This formula, in particular, proves the rule (\ref{HEAT}) quoted at the end of the previous section. Now let's determine the fraction of the whole cell volume occupied by the "dead" protein in the protoplasm. Suppose the cluster of dead protoplasm proteins is situated in the volume \(V':=\frac{V}{N_k}\) and has the equilibrium volume. If the cluster has the equilibrium size, then inserting one protein molecule into it requires zero work \(A\); in other words, the average potential energy of a protein molecule in the cluster equals zero. All the protein molecules are situated in a neutral external field outside the cluster and in the field \(W\) inside the cluster. Let's consider a chosen protein \(\mathfrak{B}\); it has four degrees of freedom, described by the Cartesian coordinates \(q^1\), \(q^2\), \(q^3\) and the radius \(q^4\) (the protein can be compressed). As a rough estimate for the potential energy \(V(q^1,..., q^4)\) of the protein inside the cluster, we can take a positive definite quadratic form of the coordinates \(q^1,...,q^4\): \begin{eqnarray} V(q^1,...,q^4)=W+\sum \limits_{i,j=1,...,4} (q^i-q^i_0)B_{i,j} (q^j-q^j_0), \end{eqnarray} where \(q^1_0,...,q^4_0\) are the equilibrium coordinates of the protein molecule.
As a rough estimate for the kinetic energy of the chosen protein \(\mathfrak{B}\) we likewise take a positive definite quadratic form of the velocities \(\dot{q}^1,...,\dot{q}^4\): \begin{eqnarray} E_{kin}=\sum \limits_{i,j=1,...,4} \dot{q}^i A_{i,j} \dot{q}^j \end{eqnarray} Note that a protein molecule in water does not behave as an oscillator along \(q^4\), because water is incompressible. But in classical mechanics the average potential energy of a one-dimensional oscillator equals \(\frac{1}{2}T\). Therefore, the condition \(A = 0\) leads to \(W = -2T\). This equilibrium condition for the cluster lets us find: \begin{eqnarray} \frac{V_B}{V'}=e^{\frac{W}{T}}=e^{-2}\approx0.135, \end{eqnarray} which is in good agreement with the true value \(\frac{V_B}{V'}=0.163\) (see section 4). \section{Superfluid Bose Gas on Protein Configuration Space as Model of Living Cell Protoplasm} In this section we discuss the second model of the living cell protoplasm, the superfluid Bose gas on the protein configuration space. A general description of this model is given in the introduction. Investigating this model, we will show that when Ling's protoplasm dies, the folding of protein molecules occurs, as Ling's theory postulates on the qualitative level (Ling, 2001). The protoplasm is considered at zero temperature. This allows us to apply essential tools of theoretical physics, and the conclusions made for such extreme conditions are useful for understanding the properties of the real cell. Let's proceed to construct our "superfluid" model. Suppose there are \(N\) protein molecules in the protoplasm, \(x_i\), \(i=1,...,N\) are the coordinates of their centers of inertia, and \(\sigma_i\) are parameters ranging over some space \(X\) with measure \(d\mu\), describing the inner degrees of freedom of the protein molecule (its configuration).
Then the state of the \(N\) protein molecules is described by the wave function \(\Psi(x_1,\sigma_1,...,x_N,\sigma_N)\) depending on the coordinates \(x_i,\sigma_i\) of the protein molecules. This wave function fulfills the normalization condition \begin{eqnarray} \int |\Psi(x_1,\sigma_1,...,x_N,\sigma_N)|^2dx_1...dx_Nd\mu(\sigma_1)...d\mu(\sigma_N)=1 \end{eqnarray} and the symmetry condition: for every permutation \(P \in S_N\) of the set of \(N\) elements \begin{eqnarray} (\hat{P}\Psi)(x_1,\sigma_1,...,x_N,\sigma_N):=\Psi(x_{P(1)},\sigma_{P(1)},...,x_{P(N)},\sigma_{P(N)})=\chi(P) \Psi(x_1,\sigma_1,...,x_N,\sigma_N), \end{eqnarray} where \(\chi(P)\equiv 1\) if the protein molecules obey the Bose--Einstein statistics, and \(\chi(P)=\rm sgn \mit (P)\) (the sign of the permutation) if the protein molecules obey the Fermi--Dirac statistics. To describe this system we will try to find its Hamiltonian by general considerations, as per the adiabatic limit method (see the previous section). Suppose \(V(x_1,\sigma_1,...,x_N,\sigma_N)\) is the change of the smallest eigenvalue of the protoplasm Hamiltonian upon adding \(N\) protein molecules with coordinates \(x_1,\sigma_1,...,x_N,\sigma_N\), and \(\hat{V}\) is the operator of multiplication of the wave function of our system by \(V(x_1,\sigma_1,...,x_N,\sigma_N)\). Then, obviously, the effective Hamiltonian of our system is \begin{eqnarray} \hat{H}_{eff}=\sum \limits_{i=1}^N \hat{T}_i+\hat{V}, \end{eqnarray} where \(\hat{T}_i\) is the kinetic energy operator of the \(i\)-th protein as a material point: \begin{eqnarray} \hat{T}_i=-\frac{1}{2M}\nabla_i^2. \end{eqnarray} To simplify our model, let's suppose \(\hat{V}\) describes only pair interactions of the protein molecules.
\begin{eqnarray} \hat{V}=\sum \limits_{i=1}^N \hat{V}_i+\sum \limits_{N \geq i>j\geq 1} \hat{V}_{i,j}, \end{eqnarray} where in Dirac notation \begin{eqnarray} \langle x_1,\sigma_1,...,x_N,\sigma_N|\hat{V}_i|x_1,\sigma'_1,...,x_N,\sigma'_N\rangle=U_1(x_i,\sigma_i|x'_i,\sigma'_i) \end{eqnarray} for some function \(U_1(x_i,\sigma_i|x'_i,\sigma'_i)\) fulfilling the obvious conditions arising from the Hermitian nature of \(\hat{V}_i\). Similarly, \begin{eqnarray} \langle x_1,\sigma_1,...,x_N,\sigma_N|\hat{V}_{i,j}|x_1,\sigma'_1,...,x_N,\sigma'_N\rangle=U_2(x_i,\sigma_i,x_j,\sigma_j|x'_i,\sigma'_i,x'_j,\sigma'_j) \end{eqnarray} for some function \(U_2(x_i,\sigma_i,x_j,\sigma_j|x'_i,\sigma'_i,x'_j,\sigma'_j)\) fulfilling the evident conditions arising from the Hermitian nature of \(\hat{V}_{i,j}\). We consider the protoplasm at zero temperature (measured in the energy scale). The following objection may arise against this assumption: at zero temperature all metabolic processes in the cell stop, and the cell dies (this is the way non-equilibrium thermodynamics understands life and death). However, it is well known that cells (even embryos, including human ones) come back to life after freezing in liquid nitrogen, when there are no flows of matter or energy at all. Our approach differs: we consider the living state as stationary and thermodynamically sustainable, but not as the equilibrium one. It is precisely this state that is "frozen", and it can keep its properties even at absolute zero. That is why the frozen cell comes back to life when the temperature conditions become normal again. The thermodynamic identity of the resting states of the living and the frozen cells allows us to apply the same analytical apparatus. As the average kinetic energy of the heat motion of a protein molecule at temperature \(T\) is proportional to \(kT\), at zero temperature the kinetic energy equals zero.
Therefore we can omit the term corresponding to the kinetic energy from the Hamiltonian \(\hat{H}_{eff}\). In other words, at \(T =0\) we have the following expression for \(\hat{H}_{eff}\): \begin{eqnarray} \hat{H}_{eff}=\sum \limits_{i=1}^N \hat{V}_i+\sum \limits_{N \geq i>j\geq 1} \hat{V}_{i,j}. \end{eqnarray} As regards the protein molecules themselves, according to Ling's model of the cell (Ling, 2001) they are situated at the points of a lattice and immersed in the volume of water associated with these proteins (the water is likewise structured and ordered), the \(i\)-th point having coordinates \(x_i\). \(\Psi(x_1,\sigma_1,...,x_N,\sigma_N)\) has the following form: \begin{eqnarray} \Psi(x_1,\sigma_1,...,x_N,\sigma_N)=C\sum \limits_{P \in S_N} \prod \limits_{i=1}^N\delta(x'_i-x_{P(i)})A^P(\sigma_1,...,\sigma_N)\chi(P), \label{Anzatz} \end{eqnarray} where \(A^P\) fulfills the following property, arising from the requirement that \(\Psi(x_1,\sigma_1,...,x_N,\sigma_N)\) obey the Bose--Einstein or Fermi--Dirac statistics: \begin{eqnarray} A^{PQ^{-1}}(\sigma_{Q(1)},....,\sigma_{Q(N)})=A^P(\sigma_1,...,\sigma_N), \end{eqnarray} and the (divergent) constant \(C\) is chosen from the condition \begin{eqnarray} C \int dx_1...dx_N(\prod \limits_{i=1}^N\delta(x_i-x'_i))^2=1. \end{eqnarray} At \(T = 0\) the whole protein system is in the ground state \(\Psi_0(x_1,\sigma_1,...,x_N,\sigma_N)\), which can be defined by the requirement that the average of the Hamiltonian \(\hat{H}_{eff}\) be minimal over all normalized states. In other words, in the state \(\Psi_0\) the average value \(\langle \Psi|\hat{H}_{eff}|\Psi\rangle\) of the effective Hamiltonian reaches its minimum \begin{eqnarray} E_0=\min \limits_{\|\Psi\|=1}\langle \Psi|\hat{H}_{eff}|\Psi\rangle. \end{eqnarray} Taking the aforesaid into account, in searching for the minimum of \(\langle \Psi|\hat{H}_{eff}|\Psi\rangle\) we can restrict ourselves to states of the form (\ref{Anzatz}).
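A toy illustration of such a variational search (our construction, not the paper's actual operators): take a two-point configuration space \(X=\{0,1\}\), a one-body term \(U_1=\mathrm{diag}(0,1)\), a two-body term with the single non-zero weight \(w_{00}=g\) penalizing pairs in configuration 0, and minimize the per-particle energy over normalized product trial states \(\xi=(\cos t,\sin t)\):

```python
import math

# Per-particle variational energy for a product trial state on X = {0, 1}:
#   e(t) = <xi|U1|xi> + (1/2) <xi (x) xi | U2 | xi (x) xi>
# with U1 = diag(0, 1) and U2 diagonal with the only weight w_00 = g.
g = 4.0

def energy(t):
    c2, s2 = math.cos(t) ** 2, math.sin(t) ** 2
    return s2 + (g / 2) * c2 ** 2

# crude grid search over the one-parameter family of normalized states
ts = [k * (math.pi / 2) / 10000 for k in range(10001)]
t_min = min(ts, key=energy)

# for g = 4 the analytic minimum is sin^2 t = 3/4 with e = 7/8,
# below both "pure" configurations e(0) = 2 and e(pi/2) = 1
print(energy(t_min), math.sin(t_min) ** 2)
```

The repulsion mixes the configurations: the optimal \(\xi\) is a nontrivial superposition of the two basis configurations rather than either pure one.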
Since for states of the form (\ref{Anzatz}) the coordinates of the centers of inertia of the proteins are localized, we may say that \(\hat{V}_i\) acts only in \(L^2(X,d \mu)\) and \(\hat{V}_{i,j}\) acts in \(L^2(X^2,d \mu\times d\mu)\). Then our variational problem comes down to finding the minimum of the functional \begin{eqnarray} \int d\mu(\sigma_1)...d\mu(\sigma_N)A^{\star}(\sigma_1,...,\sigma_N)\{(\sum \limits_{i=1}^N \hat{V}_i+\sum \limits_{N \geq i>j\geq 1} \hat{V}_{i,j})A\}(\sigma_1,...,\sigma_N) \end{eqnarray} under the additional condition \begin{eqnarray} \int|A(\sigma_1,...,\sigma_N)|^2d\mu(\sigma_1)...d\mu(\sigma_N)=1. \end{eqnarray} Let's introduce a function \(V_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2|x'_i,x'_j)\) by the formula \begin{eqnarray} V_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2|x'_i,x'_j)=U_2(x'_i,\sigma_1,x'_j,\sigma_2|x'_i,\sigma'_1,x'_j,\sigma'_2). \end{eqnarray} Then, in obvious notation, \begin{eqnarray} \langle \sigma_1,...,\sigma_N|\hat{V}_{i,j}|\sigma'_1,....,\sigma'_N\rangle=V_2(\sigma_i,\sigma_j|\sigma'_i,\sigma'_j|x_i,x_j). \end{eqnarray} Let \(P\) be any permutation from \(S_N\). Let's define the operator \(\hat{V}_{i,j}^P\) by the formula \begin{eqnarray} \langle \sigma_1,...,\sigma_N|\hat{V}_{i,j}^P|\sigma'_1,...,\sigma'_N\rangle=V_2(\sigma_i,\sigma_j|\sigma'_i,\sigma'_j|x_{P(i)},x_{P(j)}). \end{eqnarray} By definition, let \begin{eqnarray} \hat{V}^e_{ij}:=\frac{1}{|S_N|} \sum \limits_{P \in S_N} \hat{V}_{i,j}^P. \end{eqnarray} Let's renumber the protein molecules in such a way that the protein which starts the numeration is situated sufficiently far from the cell surface. Then \begin{eqnarray} \langle\sigma_1,...,\sigma_N|\hat{V}_{ij}^e|\sigma'_1,...,\sigma'_N\rangle=\frac{1}{N}\sum \limits_{k=1}^N V_2(\sigma_i,\sigma_j|\sigma'_i,\sigma'_j|x_0,x_k). \end{eqnarray} By definition, let \begin{eqnarray} U_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2):=\frac{1}{N}\sum \limits_{k=1}^N V_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2|x_0,x_k).
\end{eqnarray} Now we'll show that, within the framework of some approximation, the class of functions in which the solution of the variational problem just posed is sought can be narrowed down to symmetric functions, i.e. such functions that \(\forall P \in S_N\) \begin{eqnarray} A^P(\sigma_1,...,\sigma_N)=A^{id}(\sigma_1,...,\sigma_N)=:A(\sigma_1,...,\sigma_N). \end{eqnarray} We suppose that \(\hat{V}_{i,j}\) is proportional to a real constant \(\lambda\) which is a small parameter. Therefore, if the parameter \(\lambda\) is small enough, the different protein molecules can be considered as uncorrelated, in other words, as behaving independently. This means that the wave function \(A(\sigma_1,...,\sigma_N)\) can be represented in the following form: \begin{eqnarray} A(\sigma_1,...,\sigma_N)=\prod \limits_{i=1}^N \xi(\sigma_i) \end{eqnarray} for a function \(\xi(\sigma)\in L_2(X,d\mu)\) fulfilling the normalization condition: \begin{eqnarray} \int |\xi(\sigma)|^2d\mu(\sigma)=1. \end{eqnarray} Let \(\hat{U}^1\) be the operator in \(L_2(X,d\mu)\) whose matrix elements in the \(\sigma\)-representation are defined by the function \(U_1\), and \(\hat{U}^2\) the operator in \(L_2(X\times X,d\mu\times d\mu)\) whose matrix elements in the \(\sigma\)-representation are defined by the function \(U_2\). Let's denote by \(\hat{U}^2(\xi)\) the operator in \(L_2(X,d\mu)\) defined by the following relation: \begin{eqnarray} \langle f|\hat{U}^2(\xi)|g\rangle=\langle f\otimes \xi|\hat{U}^2|g\otimes\xi\rangle. \end{eqnarray} The approximation we've just described is called the mean field approximation, and the time evolution of the function \(\xi\) in it is defined by the following equation: \begin{eqnarray} i\frac{\partial}{\partial t}\xi=\hat{U}^1\xi+N\hat{U}^2(\xi)\xi. \end{eqnarray} Anyhow, in our approximation \begin{eqnarray} A(\sigma_1,...,\sigma_N)\sim\prod \limits_{i=1}^N \xi(\sigma_i).
\end{eqnarray} As the class of test functions in our variational problem is narrowed down to the symmetric ones, then, instead of calculating the minimum eigenvalue of \(\hat{H}_{eff}\), we can seek the minimum eigenvalue of the Hamiltonian \begin{eqnarray} \hat{H'}_{eff}=\sum \limits_{i=1}^N \hat{V}_i+\sum \limits_{N \geq i>j\geq 1} \hat{V}_{i,j}^e. \end{eqnarray} Within the framework of our approximation, the energy of the ground state of \(\hat{H}'_{eff}\) is equal to the energy of the ground state of \(\hat{H}_{eff}\). \textbf{Proposition.} There exist functions \(U'_1(\sigma_1|\sigma'_1)\), \(U'_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2)\) such that if \(\hat{V'}_i\), \(\hat{V}'_{i,j}\) are the operators defined by the relations \begin{eqnarray} \langle \sigma_1,....,\sigma_N|\hat{V}'_{i}|\sigma'_1,...,\sigma'_N\rangle=U'_1(\sigma_i|\sigma'_i),\nonumber\\ \langle\sigma_1,....,\sigma_N|\hat{V}'_{ij}|\sigma'_1,...,\sigma'_N\rangle=U'_2(\sigma_i,\sigma_j|\sigma'_i,\sigma'_j), \end{eqnarray} then the operators \(\hat{V}'_i\) and \(\hat{V}'_{ij}\) are Hermitian, \begin{eqnarray} \hat{H'}_{eff}=\sum \limits_{i=1}^N \hat{V}'_i+\sum \limits_{N \geq i>j\geq 1} \hat{V'}_{i,j}, \end{eqnarray} and the operators \(\hat{V}'_i\) and \(\hat{V}'_{ij}\) fulfill one more condition, which is as follows. Let \(\{\varphi'_n|n=0,1,...\}\) be an orthonormal basis of eigenfunctions of the operator \(\hat{U}'_1\) in \(L_2(X,d\mu)\) defined by the following equation: \begin{eqnarray} (\hat{U}'_1\varphi)(\sigma)=\int U'_1(\sigma,\sigma')\varphi(\sigma')d\mu(\sigma'). \end{eqnarray} Let \(\hat{U}'_2\) be the operator in \(L_2(X\times X,d\mu \times d\mu)\) specified by the relation \begin{eqnarray} (\hat{U}'_2\varphi)(\sigma_1,\sigma_2)=\int U'_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2)\varphi(\sigma'_1,\sigma'_2)d\mu(\sigma'_1)d\mu(\sigma'_2). \end{eqnarray} Let \((\hat{U'}_2)_{mn,m'n'}\) be the matrix element of the operator \(\hat{U'}_2\) with respect to the basis \(\{\varphi'_i\otimes\varphi'_j|i,j=0,1,...\}\).
\begin{eqnarray} (\hat{U}'_2)_{mn,m'n'}=\langle \varphi'_m\otimes\varphi'_n|\hat{U}'_2|\varphi'_{m'}\otimes\varphi'_{n'}\rangle. \end{eqnarray} Then, for any \(n = 1,2,3...\) \begin{eqnarray} (U'_2)_{00,0n}=0 \end{eqnarray} and similar equalities take place in which the index \(n\) occupies any of the remaining three places while the others are zero. \textbf{Proof.} Let \(\hat{U}_1\) be the operator in \(L_2(X,d\mu)\) specified by the following relation: \begin{eqnarray} (\hat{U}_1\varphi)(\sigma)=\int U_1(\sigma,\sigma')\varphi(\sigma')d\mu(\sigma'), \end{eqnarray} and let \(\{\varphi_n|n=0,1,...\}\) be an orthonormal basis of eigenfunctions of the operator \(\hat{U}_1\). We represent the operator \(\hat{V}^e_{i,j}\) as a sum of three summands: \begin{eqnarray} \hat{V}^e_{i,j}=\hat{V}^{e,1}_{i,j}+\hat{V}^{e,2}_{i,j}+\hat{V}^{e,3}_{i,j}, \end{eqnarray} where the operators \(\hat{V}^{e,1}_{i,j}\), \(\hat{V}^{e,2}_{i,j}\), \(\hat{V}^{e,3}_{i,j}\) are based on the functions \(U^1_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2)\), \(U^2_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2)\), \(U^3_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2)\), defined below, by the same principle by which \(\hat{V}^e_{ij}\) is based on \(U_2(\sigma_1,\sigma_2|\sigma'_1,\sigma'_2)\). The functions \(U^1_2,\;U^2_2,\;U^3_2\) are defined by the following condition. Let \(\hat{U}^1_2,\;\hat{U}^2_2,\;\hat{U}^3_2\) be the operators based on the functions \(U^1_2,\;U^2_2,\;U^3_2\) by the same scheme by which the operator \(\hat{U}_2\) is based on the function \(U_2\). Among all matrix elements of the operators \(\hat{U}^1_2\), \(\hat{U}^2_2\) only the following elements are non-zero: \((U_2^1)_{00,0m}\), \((U_2^1)_{00,m0}\), \((U_2^2)_{m0,00}\), \((U_2^2)_{0m,00}\), \(m=1,2,...\), and among all matrix elements of the operator \(\hat{U}^3_2\) only the remaining elements are non-zero. It is easily seen that if we redefine \(\hat{V}_{i}\) in a proper manner, then we can include the summands corresponding to \(\hat{V}^{e,1}_{i,j}\), \(\hat{V}^{e,2}_{i,j}\) into \(\hat{V}_{i}\).
But \(\hat{V}_{ij}\) and, consequently, \(\hat{V}_{ij}^e\) are of first order in the protein interaction constant \(\lambda\). Therefore, under the substitution just described, the basis of eigenfunctions \(\{\varphi_n|n=0,1,2,...\}\) of the operator \(\hat{U}_1\) passes to the basis of eigenfunctions \(\{\psi_n|n=0,1,2,...\}\) of the redefined operator \(\hat{U}'_1\), such that \(\varphi_n-\psi_n\) is of first order of vanishing in the coupling constant \(\lambda\) \(\forall n=0,1,2,...\). The matrix elements of the just redefined \(\hat{V}^e_{ij}\) of the form \((\hat{U}_2)_{00,0m}\), \(m=1,2,...\) (\(\hat{U}_2\) is coupled with \(\hat{V}^e_{ij}\) by the relation we've mentioned above), related to the basis \(\{\varphi_n\}\), are equal to zero. But since \(\hat{V}^e_{ij}\) is of first order of vanishing in the interaction constant, the matrix elements of the form \((\hat{U}_2)_{00,0m}\) of the redefined \(\hat{U}_2\), related to the basis \(\{\psi_n\}\), are of second order of vanishing in the coupling constant. The procedure described above can be continued endlessly, sequentially lowering the order of vanishing of those terms we want to omit in the Hamiltonian. The proposition may be considered proven even without discussing the problem of convergence of the above-described iteration procedure, as all formulas which may be derived hereafter are just asymptotic expansions in the small parameter \(\lambda\). So, the effective Hamiltonian for the considered system of protein molecules, which we want to use for determining the ground state of our system, has the following form: \begin{eqnarray} H'_{eff}=\sum \limits_{i=1}^N \hat{V}_i+\sum \limits_{1\leq i< j\leq N} \hat{V}^e_{ij}. \label{effHam} \end{eqnarray} Here the operators \(\hat{V}_i\) "act" only on the coordinate \(\sigma_i\) of the wave function and differ from each other only by the number of the coordinate on which they "act".
The operators \(\hat{V}_{ij}\) "act" only on the coordinates with numbers \(i\) and \(j\) of the wave function and differ from each other only by the numbers of the coordinates on which they "act". If in (\ref{effHam}) we extract explicitly the dependence on the interaction constant and the number of particles, we find \begin{eqnarray} H_{eff}=\sum \limits_{i=1}^N \hat{V}_i+\frac{\lambda}{N}\sum \limits_{1\leq i< j\leq N} \tilde{V}^e_{ij}, \end{eqnarray} where the prime of \(H_{eff}\) is omitted and \(\tilde{V}^e_{ij}\) does not depend on \(N\) (asymptotically as \(N\rightarrow \infty\)). We intend to investigate the Hamiltonian \(H_{eff}\) using the apparatus of second quantization. Let \(\{\varphi_n|n=0,1,2...\}\) still be an orthonormal basis of eigenfunctions of the operator \(\hat{U}_1\). Let \(\mathcal{F}=\Gamma(L_2(X,d\mu))\) be the boson Fock space over \(L_2(X,d\mu)\). Let \(a^+_i,\;a_i\) be the creation and annihilation operators for particles in the state \(\varphi_i\), acting in \(\mathcal{F}\). \(\forall i=0,1,...\) the operators \(a^+_i\) and \(a_i\) are adjoint to each other and fulfill the following canonical commutation relations: \begin{eqnarray} {[a_i,a_j]=[a^+_i,a^+_j]=0},\;i,j=0,1,2...,\nonumber\\ {[a_i,a^+_j]}=\delta_{ij}, \end{eqnarray} where \(\delta_{ij}\) is the usual Kronecker delta. We suppose that the eigenfunctions \(\varphi_n\) are numbered in such a way that the corresponding eigenvalues \(E_n\) increase with the number \(n\).
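These relations can be checked explicitly on a Fock space truncated at occupation \(n_{max}\), where \((a)_{m,n}=\sqrt{n}\,\delta_{m,n-1}\); the commutator \([a,a^{+}]\) then equals the identity except in the last diagonal entry, which is a truncation artifact (a sketch in our notation):

```python
import math

# Single-mode creation/annihilation operators on a Fock space truncated
# at occupation n_max: (a)_{m,n} = sqrt(n) * delta_{m, n-1}.
n_max = 6
a = [[math.sqrt(n) if m == n - 1 else 0.0 for n in range(n_max)]
     for m in range(n_max)]
ad = [[a[n][m] for n in range(n_max)] for m in range(n_max)]   # transpose

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n_max))
             for j in range(n_max)] for i in range(n_max)]

# commutator [a, a+] = a a+ - a+ a, computed entrywise
comm = matmul(a, ad)
aa = matmul(ad, a)
for i in range(n_max):
    for j in range(n_max):
        comm[i][j] -= aa[i][j]

# identity on all states below the truncation level; off-diagonal zero
print(comm[0][0], comm[n_max - 1][n_max - 1])   # 1.0 and -(n_max - 1)
```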
In the representation of second quantization the Hamiltonian \(H_{eff}\) is given by: \begin{eqnarray} H_{eff}=\sum \limits_{n=0}^{\infty} E_n a^+_na_n+\frac{\lambda}{2N}\sum \limits_{m,n,m',n'}a^+_ma^+_n (U_2)_{mn,m'n'}a_{m'}a_{n'}.\label{effHam1} \end{eqnarray} But this is the standard Hamiltonian studied in the theory of superfluidity, and we can use the same methods for its investigation. At \(T = 0\) a macroscopic portion of the proteins is in the ground state \(\varphi_0\). Let \(N_0\) be the number of protein molecules in the ground state. As \(\frac{\lambda}{N}\ll1\), then \(\frac{N-N_0}{N_0}\ll1\). For this reason, on the right-hand side of the relation \(a_0a^+_0-a^+_0a_0=1\) we can neglect the unity and suppose \(a_0\) and \(a^+_0\) to be usual \(c\)-numbers. So, we can simply suppose that \(a_0=\sqrt{N_0}e^{-i\varphi}\) and \(a^+_0=\sqrt{N_0}e^{i\varphi}\) for some real number \(\varphi\). Now let's consider the second summand on the right-hand side of (\ref{effHam1}). According to the proposition proven above, we can suppose that this summand has no terms linear in \(a_i^+,\;a_i\), \(i=1,2,...\). The quadratic terms are of order \(\lambda\), the cubic ones of order \(\frac{\lambda}{\sqrt{N}}\), and the quartic ones of order \(\frac{\lambda}{N}\). Therefore, supposing the number of proteins in the cell is finite but sufficiently large and the interaction constant \(\lambda\) is small, in the Hamiltonian (\ref{effHam1}) we can keep only the terms quadratic in the operators \(a_i^+,\;a_i\), \(i=1,2,...\). Consequently, the effective Hamiltonian \(H_{eff}\) is given by: \begin{eqnarray} H_{eff}=\pi c N_0+\sum \limits_{n=1}^{\infty}E_na^+_na_n\nonumber\\ +{\lambda}\{\sum \limits_{m,n=1}^{\infty}A_{mn}e^{-2i\varphi}a^+_ma^+_n+\sum \limits_{m,n=1}^{\infty}A_{mn}^{\star}e^{2i\varphi}a_m a_n+\sum \limits_{m,n=1}^{\infty}B_{mn}a^+_ma_n\}.
\label{HamSupp} \end{eqnarray} Here \(c\) is a material constant (it does not depend on \(N_0\)), and \(A_{mn}\), \(B_{mn}\) are certain matrices with \(A_{mn}=A_{nm}\), \(B^{\star}_{mn}=B_{nm}\), where \(\star\) denotes complex conjugation. There is a classical variable \(\varphi\) which is canonically conjugate to the variable \(J=\pi N_0\). The variable \(\varphi\) satisfies the equation \(\dot{\varphi}=c\) and, therefore, the quantity \(\psi=\varphi-ct\) (\(t\) is time) is an integral of motion. Our whole system (denoted by \(M\)) has the structure of a direct product of two systems, \(M=M_1\times M_2\), where, roughly speaking, the system \(M_1\) is described by the variables \(\varphi\) and \(J\), and the system \(M_2\) by the (noncommutative) variables \(a^+_n,\;a_n\), \(n=1,2...\). Now let us consider two cases: when the cell is alive and when it is dead. In the case of the living cell, the integrand of the statistical weight contains the factor \(\delta(\varphi-\varphi')\), where \(\varphi'\) is a fixed real number, and the free energy \(F(\varphi')\) is now a function of \(\varphi'\). However, one can readily see that the free energy \(F(\varphi')\) does not depend on \(\varphi'\). Indeed, a shift of \(\varphi'\) by the value \(\delta\varphi\) can be compensated by the proper canonical transformation of the operators \(a^+_n,\;a_n\), \(n=1,2...\): \begin{eqnarray} a^+_n\mapsto a^+_ne^{i\delta\varphi}, \nonumber\\ a_n\mapsto a_ne^{-i\delta\varphi}. \end{eqnarray} Thus the integral of motion \(\varphi\) satisfies the equivalence principle we have considered above (Prokhorenko and Matveev, 2011). As the formula of the generalized microcanonical distribution includes the factor \(\delta(\varphi-\varphi')\), the system \(M_2\) may be described by the Hamiltonian (\ref{HamSupp}), where \(\varphi\) is fixed at some value and \(N_0\) is determined by the condition that the total number of particles in the system is equal to \(N\). Now let us consider a dead cell.
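The independence of the spectrum (and hence of any free energy built from it) from \(\varphi\) can be illustrated numerically for a single-mode version of the Hamiltonian (\ref{HamSupp}): the phase enters only through the factors \(e^{\mp2i\varphi}\) in the anomalous terms and is removed by the canonical (unitary) transformation \(a\mapsto ae^{-i\delta\varphi}\). The sketch below checks this in a truncated Fock space; the parameter values and the truncation are illustrative assumptions, not taken from the text.

```python
import numpy as np

M = 40                  # Fock-space truncation (illustrative)
E, A = 1.0, 0.2 + 0.1j  # single-mode stand-ins for E_n and lambda*A_mn

a = np.diag(np.sqrt(np.arange(1, M)), k=1)
ad = a.conj().T

def spectrum(phi):
    """Eigenvalues of E a^+a + A e^{-2i phi} a^+a^+ + A* e^{2i phi} a a."""
    H = E*ad@a + A*np.exp(-2j*phi)*ad@ad + np.conj(A)*np.exp(2j*phi)*a@a
    return np.linalg.eigvalsh(H)

s0 = spectrum(0.0)
# the unitary exp(i*phi*a^+a) maps H(0) into H(phi), even in the truncated
# space, so the spectra for different phi coincide
print(all(np.allclose(spectrum(phi), s0, atol=1e-8) for phi in (0.7, 1.9, 3.1)))
```

The same phase rotation is exactly the canonical transformation \(a^+_n\mapsto a^+_ne^{i\delta\varphi}\), \(a_n\mapsto a_ne^{-i\delta\varphi}\) used in the text.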
In this case the cell is described by the equilibrium Gibbs distribution. Let us write out Hamilton's equations for \(\varphi\) and \(J\): \begin{eqnarray} \dot{\varphi}=c,\nonumber\\ \dot{J}=\lambda L(a^+,a), \end{eqnarray} where \(L(a^+,a)\) is a quadratic function of the operators \(a^+_n,\;a_n\), \(n=1,2...\). But if \(N\rightarrow \infty\), then \(J\sim N\), while \(L(a^+,a)\sim 1\). Therefore, for sufficiently large \(N\) the term \(\lambda L(a^+,a)\) may be neglected in the Hamilton equations for \(\varphi\) and \(J\), and the following equations are obtained: \begin{eqnarray} \dot{\varphi}=c,\nonumber\\ \dot{J}=0. \end{eqnarray} Thus, for sufficiently large \(N\), the dynamics of the system \(M_1\) separates from the dynamics of \(M_2\). This means that the dynamical variables of the systems \(M_1\) and \(M_2\) are independent, and the probability distribution for the system \(M_1\), corresponding to the Gibbs distribution for the whole system, is defined by the formula: \begin{eqnarray} \rho(\varphi,J)=\mathrm{const}\;\delta(J-J'). \end{eqnarray} Now let us define the distribution function for the system \(M_2\). For this we use a classical analogy. Suppose the Hamiltonian system \(K\) has the structure of a direct product of systems \(K_1\) and \(K_2\), \(K = K_1 \times K_2\), and \(H(x,y)\) is the Hamiltonian of the whole system, \(x\in K_1,\;y\in K_2\). If the system \(K\) is described by the canonical Gibbs distribution corresponding to temperature \(T\), then the probability distribution for the system \(K_2\) is the following: \begin{eqnarray} \rho(y)=\frac{1}{Z}e^{-\frac{F(y|T)}{T}}, \end{eqnarray} where \begin{eqnarray} F(y|T)=-T\ln \int d\Gamma_x e^{-\frac{H(x,y)}{T}}, \end{eqnarray} and \(Z\) is a normalization factor. Alternatively, as we now show, the effective Hamiltonian for the system \(K_2\) can be obtained in the following way. Let us write out the Hamilton equations for the system \(K_2\).
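The claim that averaging the right-hand sides of Hamilton's equations over the conditional Gibbs distribution \(w(x|y)\) reproduces the dynamics generated by \(F(y|T)\) can be checked on a toy example. In the sketch below the quadratic Hamiltonian, the temperature and the coupling are illustrative assumptions, and the \(K_1\) phase-space integral is reduced to a single Gaussian integral over \(x\); the check is that \(\langle \partial H/\partial y\rangle_{w(x|y)} = \partial F(y|T)/\partial y\).

```python
import numpy as np

T, g, y = 0.7, 0.4, 1.3               # illustrative temperature, coupling, point y
x = np.linspace(-30.0, 30.0, 200001)  # quadrature grid for the K_1 variable
dx = x[1] - x[0]

def H(x, y):
    return 0.5*x**2 + g*x*y + 0.5*y**2

# conditional Gibbs density w(x|y) ~ exp(-H(x,y)/T)
w = np.exp(-H(x, y)/T)
w /= w.sum()*dx

# left-hand side: <dH/dy> averaged over w(x|y)
force_avg = (w*(g*x + y)).sum()*dx

# right-hand side: dF/dy with F(y|T) = -T ln Int dx exp(-H/T), central difference
def F(yy):
    return -T*np.log(np.exp(-H(x, yy)/T).sum()*dx)

eps = 1e-4
dFdy = (F(y + eps) - F(y - eps))/(2*eps)

print(abs(force_avg - dFdy) < 1e-6)         # the two expressions coincide
print(abs(force_avg - y*(1 - g*g)) < 1e-6)  # analytic value for this Gaussian model
```

For this Gaussian toy model both sides equal \(y(1-g^2)\), so the averaged Hamilton equations are indeed the Hamilton equations of \(F(y|T)\) (up to a \(y\)-independent additive constant).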
Assume \(p\) and \(q\) are canonical coordinates and momenta for the system \(K_2\). Then: \begin{eqnarray} \dot{p}=-\frac{\partial H(x,y)}{\partial q}, \nonumber\\ \dot{q}=\frac{\partial H(x,y)}{\partial p}. \end{eqnarray} Now, if we average the right-hand sides of the two last equations over the conditional Gibbs distribution \(w(x|y)\) for \(K_1\) at a specified value \(y \in K_2\), then we obtain Hamiltonian equations closed with respect to \(y\), in which \(F(y|T)\) plays the role of the Hamiltonian. Therefore, in order to define the effective dynamics of the system \(M_2\) for a dead cell, the following method, reasoning by analogy, can be used. The Heisenberg equations for \(a^+_n,\;a_n\), \(n=1,2...\) can be written out and averaged over \(\varphi\) and \(J\). However, as shown above, if \(N\rightarrow\infty\), then \(\varphi\) is independent of \(a^+_n,\;a_n\), \(n=1,2...\), and the distribution of \(\varphi\) is uniform. A direct calculation shows that if the right-hand sides of the Heisenberg equations for \(a^+_n,\;a_n\), \(n=1,2...\) are averaged over the uniformly distributed \(\varphi\), we obtain Heisenberg equations with the Hamiltonian: \begin{eqnarray} H_{eff}=cN_0+\sum \limits_{n=1}^{\infty}E_na^+_na_n\nonumber\\ +{\lambda}\sum \limits_{m,n=1}^{\infty}B_{mn}a^+_ma_n. \end{eqnarray} Evidently, at \(T = 0\) we have \(N = N_0\), i.e. all protein molecules are folded. Let us now pass to the analysis of \(H_{eff}\) using the well-known theory of normal forms of quadratic Hamiltonians. For Hamiltonians of general type this theory was developed by H. Poincar\'e and G.D. Birkhoff (see Arnold, 2003), and for quantum mechanics it was adapted by N.N. Bogoliubov (Bogoliubov and Bogoliubov (jr), 1984). This theory was established only for dynamical systems with a finite number of degrees of freedom, but we use it for systems with an infinite number of degrees of freedom, as is standard practice in physics.
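The effect of the averaging over the uniformly distributed phase \(\varphi\), used above to obtain the dead-cell effective Hamiltonian, can be seen directly at the level of the Hamiltonian matrix: the anomalous terms carry the factors \(e^{\mp2i\varphi}\), whose uniform average vanishes, so only the \(B\)-term survives. A minimal single-mode sketch (parameter values and the Fock truncation are illustrative assumptions):

```python
import numpy as np

M = 12
a = np.diag(np.sqrt(np.arange(1, M)), k=1)
ad = a.conj().T
E, A = 1.0, 0.3  # illustrative stand-ins for E_n and lambda*A_mn

# average the phase-dependent Hamiltonian over a uniform grid of phi in [0, 2pi)
phis = np.linspace(0.0, 2*np.pi, 400, endpoint=False)
Havg = sum(E*ad@a + A*np.exp(-2j*p)*ad@ad + A*np.exp(2j*p)*a@a
           for p in phis)/len(phis)

# the e^{+-2i phi} factors average to zero, so only the diagonal term remains
print(np.allclose(Havg, E*ad@a, atol=1e-10))
```

On an equally spaced grid the averages of \(e^{\pm2i\varphi}\) vanish exactly (sums of roots of unity), mirroring the continuous average \(\frac{1}{2\pi}\int_0^{2\pi}e^{\pm2i\varphi}d\varphi=0\).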
For this purpose we replace the Hamiltonian \(H_{eff}\) by the truncated Hamiltonian \(H^c_{eff}\) describing a system with \(L\) degrees of freedom: \begin{eqnarray} H_{eff}^c=\pi cN_0+\sum \limits_{n=1}^{L}E_na^+_na_n\nonumber\\ +{\lambda}\{\sum \limits_{m,n=1}^{L}A_{mn}e^{-2i\varphi}a^+_ma^+_n+\sum \limits_{m,n=1}^{L}A_{mn}^{\star}e^{2i\varphi}a_m a_n+\sum \limits_{m,n=1}^{L}B_{mn}a^+_ma_n\}. \label{HamSup} \end{eqnarray} In this case, the theory of quadratic Hamiltonians guarantees that for generic matrices \(A_{mn}\), \(B_{mn}\) there exists a linear transformation of the creation and annihilation operators \(a^+_n,\;a_n\), \(n=1,2...\) to operators \(\xi^+_n,\;\xi_n\), \begin{eqnarray} \xi^+_n=\sum \limits_{m=1}^L U_{nm}a^+_m+\sum \limits_{m=1}^L V_{nm}a_m,\nonumber\\ \xi_n=\sum \limits_{m=1}^L U^{\star}_{nm}a_m+\sum \limits_{m=1}^L V^{\star}_{nm}a^+_m, \end{eqnarray} such that \(\forall n=1,2...\) \(\xi_n\) is conjugate to \(\xi_n^+\); this transformation is canonical: \begin{eqnarray} {[\xi_n,\xi_m]}={[\xi^+_n,\xi^+_m]}=0,\nonumber\\ {[\xi_n,\xi_m^+]}=\delta_{nm},\;m,n=1,2,3... \end{eqnarray} and in the new variables the Hamiltonian \(H^c_{eff}\) takes the form: \begin{eqnarray} H^c_{eff}=\sum \limits_{n=1}^K\omega_n\xi^+_n\xi_n+\sum \limits_{n=K+1}^L\chi_n(\xi^+_n\xi^+_n+\xi_n\xi_n).\label{CEH} \end{eqnarray} Here \(\omega_n\), \(\chi_n\) are some real numbers. However, physical considerations make clear that \(H^c_{eff}\) should be bounded from below. Let us denote by \(\mathcal{H}^c\) the Hilbert space on which the Hamiltonian \(H^c_{eff}\) acts. This Hilbert space is the tensor product of Hilbert spaces \(\mathcal{H}^c_i\): \begin{eqnarray} \mathcal{H}^c=\bigotimes \limits_{i=1}^L \mathcal{H}^c_i,\label{TENSOR} \end{eqnarray} where every \(\mathcal{H}^c_i\) is isomorphic to \(L_2(R)\), and this isomorphism can be chosen in such a way that the following condition is fulfilled.
In \(L_2(R)\) the coordinate operator \(\hat{q}\) and the momentum operator \(\hat{p}\) act in the standard way: \begin{eqnarray} \hat{p}\hat{q}-\hat{q}\hat{p}=\frac{1}{i}. \end{eqnarray} The operators \(\xi^+_n\) and \(\xi_n\) act only on the \(n\)-th factor in (\ref{TENSOR}). The isomorphism between \(\mathcal{H}^c_n\) and \(L_2(R)\) mentioned above can be chosen in such a way that under it \begin{eqnarray} \xi_n=\frac{1}{\sqrt{2}}(\hat{p}-i\hat{q}),\nonumber\\ \xi_n^+=\frac{1}{\sqrt{2}}(\hat{p}+i\hat{q}). \end{eqnarray} But then, under this isomorphism, the operator \(\xi^+_n\xi^+_n+\xi_n\xi_n\) is equal to the operator \(\hat{p}^2-\hat{q}^2\). Evidently, the last operator is bounded neither from below nor from above. Therefore, the physical requirement that \(H^c_{eff}\) be bounded from below implies that only the first sum on the right-hand side of (\ref{CEH}) is non-zero. As a result: \begin{eqnarray} H^c_{eff}=\sum \limits_{n=1}^L\omega_n\xi^+_n\xi_n,\label{CEH1} \end{eqnarray} where for every \(n=1,2,3...\) \(\omega_n\) is a real positive number. The operators \(\xi^+_n,\;\xi_n\) are creation and annihilation operators of certain quasi-particles. As formula (\ref{CEH1}) indicates, at zero temperature all occupation numbers of these quasi-particles are equal to zero: \(n^\xi_l=0,\;l=1,2...\). We are now interested in the average \begin{eqnarray} \langle a^+_n a_n\rangle,\;n=1,2... \end{eqnarray} in the ground state of our effective Hamiltonian. Evidently, the condition \(n^\xi_l=0,\;l=1,2...\) implies that \(\forall n=1,2...\) \begin{eqnarray} \langle a^+_n a_n\rangle=\sum \limits_{m=1}^{L}|{V'}_{nm}|^2, \end{eqnarray} where the matrix \(V'_{nm}\) is defined by the following equation: \begin{eqnarray} a^+_n=\sum \limits_{m=1}^L {U'}_{nm}\xi^+_m+\sum \limits_{m=1}^L {V'}_{nm}\xi_m.
\end{eqnarray} Clearly, for an interaction \(\hat{U}_2\) of general form between protein molecules, \(\sum \limits_{m,n=1}^L |{V'}_{nm}|^2>0\). This means that \(\langle a^+_n a_n\rangle>0\) for at least one \(n=1,2...\). So, if the cell is alive, then the number of protein molecules in the unfolded state is non-zero. In conclusion of this section we note that, according to the above reasoning, the number of protein molecules in the unfolded state approaches a finite value as the cell volume tends to infinity. This would mean that the number of unfolded protein molecules in a real cell is negligible. A possible solution of this difficulty is that the protoplasm has a quasi-crystalline structure only locally, within a small volume we call a domain, consisting of several physioatoms. Therefore, we can apply the stated analysis only within one domain. Since the number of domains in the cell is proportional to its volume, the number of protein molecules in the unfolded state is also proportional to the volume and has a non-zero value (per unit volume) in the thermodynamic limit. \section{Discussion of Relation between Mean Field Model and Superfluid Bose Gas Model on Protein Configuration Space} In the previous section we quoted the nonlinear Schr\"odinger equation describing our system in the mean field approximation: \begin{eqnarray} i\frac{\partial}{\partial t}\xi=\hat{V}^1\xi+N\hat{V}^2(\xi)\xi,\nonumber\\ \xi \in L_2(X,d\mu),\label{efield} \end{eqnarray} where \(\hat{V}^1,\;\hat{V}^2(\xi)\) are operators in \(L_2(X,d\mu)\). The question arises to what degree the results obtained within the framework of the mean field method correlate with the results obtained within the model of the superfluid Bose gas on the protein configuration space. We shall now show that these two models give the same spectrum of single-particle excitations.
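The chain of statements above, from the normal form (\ref{CEH1}) to the non-zero depletion \(\langle a^+_na_n\rangle\), can be checked numerically for a single mode. For \(H=\varepsilon a^+a+A(a^+a^+ + aa)\) with \(\varepsilon>2A>0\), the Bogoliubov transformation gives the quasi-particle frequency \(\omega=\sqrt{\varepsilon^2-4A^2}\) and the ground-state occupation \(\langle a^+a\rangle=v^2=\frac{1}{2}(\varepsilon/\omega-1)>0\). The sketch below (parameter values and the Fock truncation are illustrative assumptions) verifies both against exact diagonalization.

```python
import numpy as np

eps, A = 1.0, 0.2  # illustrative values with eps > 2A (boundedness from below)
N = 60             # Fock-space truncation

a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T
H = eps*ad@a + A*(ad@ad + a@a)

vals, vecs = np.linalg.eigh(H)
gap = vals[1] - vals[0]          # numerically exact quasi-particle energy
gs = vecs[:, 0]                  # ground state (a squeezed vacuum)
depletion = gs @ (ad @ a) @ gs   # <a^+ a> in the ground state

omega = np.sqrt(eps**2 - 4*A**2) # Bogoliubov frequency
v2 = 0.5*(eps/omega - 1.0)       # analytic depletion

print(abs(gap - omega) < 1e-8, abs(depletion - v2) < 1e-8)
```

The positive depletion \(v^2\) is the single-mode analogue of \(\sum_m|V'_{nm}|^2>0\): the ground state of the interacting system necessarily contains particles outside the condensate mode.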
Suppose that the operator \(\hat{U}_2\) used to construct \(\hat{V}^2(\xi)\) satisfies the condition, stated above, that every matrix element of \(\hat{U}_2\) in which three of the four indices are equal to zero and the fourth is non-zero vanishes. Here the matrix elements are taken with respect to the basis of eigenfunctions of the operator \(\hat{V}^1\). Since the interaction constant \(\lambda\) is small and almost all protein molecules are in the ground folded state, \(\xi\) is given by: \begin{eqnarray} \xi=\varphi_0+\eta, \label{decomp} \end{eqnarray} where \(\eta\) is orthogonal to \(\varphi_0\), \(\varphi_0,\varphi_1,\varphi_2...\) is an orthonormal basis of eigenfunctions of the operator \(\hat{V}^1\), and \(\varphi_0\) corresponds to the minimal eigenvalue of \(\hat{V}^1\). The quantity \(\eta\) is of first order in the coupling constant \(\lambda\). Let \(\eta_n\) be the Fourier coefficients of the vector \(\eta\) with respect to the basis \(\varphi_n\): \begin{eqnarray} \eta=\sum \limits_{n=1}^\infty \eta_n\varphi_n. \end{eqnarray} If we substitute (\ref{decomp}) into (\ref{efield}) and keep only the terms linear in \(\eta\), then the complex conjugate of the resulting equation has the following form: \begin{eqnarray} -i\frac{\partial}{\partial t}\eta_n^{\star}=E_n\eta_n^{\star}+\sum \limits_{m=1}^{\infty} B_{mn}\eta_m^{\star}+2\sum \limits_{m=1}^{\infty} A_{mn}^{\star}\eta_m,\;n=1,2,...\label{Bog1} \end{eqnarray} A similar derivation can be made for the equation for \(\frac{\partial}{\partial t}\eta_n\). On the other hand, in the superfluid model, after a suitable canonical transformation, \(a^+_n,a_n\) satisfy the Heisenberg equations: \begin{eqnarray} \dot{a}^+_n={i[H_{eff},a^+_n]},\;\dot{a}_n={i[H_{eff},a_n]}.
\end{eqnarray} Writing these equations out explicitly, we obtain: \begin{eqnarray} -i\frac{\partial}{\partial t}a_n^{+}=E_na_n^{+}+\sum \limits_{m=1}^{\infty} B_{mn}a_m^{+}+2\sum \limits_{m=1}^{\infty} A_{mn}^{*}a_m,\;n=1,2,...\label{Bog2} \end{eqnarray} and an equation obtained from the previous one by Hermitian conjugation. But equations (\ref{Bog1}) and (\ref{Bog2}) have the same form; therefore, if the variables \(\{a^+_n,a_n\}\) and \(\{\eta^{\star}_n,\eta_n\}\) are subjected to linear transformations of the same form, the equations (\ref{Bog1}) and (\ref{Bog2}) for the transformed variables coincide. In particular, if the linear transformation transfers \(\{a^+_n,a_n\}\) into \(\{\xi^+_n,\xi_n\}\), then the \(\eta_n\) transformed under the same transformation change according to the following law: \begin{eqnarray} \eta_n=\mathrm{const}\; e^{-i\omega_n t}. \end{eqnarray} So \(\eta\) is given by \begin{eqnarray} \eta=\sum \limits_{n=1}^{\infty}f_ne^{-i\omega_n t}. \end{eqnarray} This formula shows that the Bogoliubov frequencies also describe the spectrum of single-particle excitations in the mean field model. \section{Comparison of Van der Waals Protoplasm Model and Superfluid Bose Gas Model on Configuration Space of Protein Molecule} The present work considers two models of the microstructure of the protoplasm of Ling's cell. In the first model, called the Van der Waals model, we supposed that the nontrivial first integrals of the system are such that fixing them at the predefined values characterizing the resting state of a cell leads to the protein molecules being situated at the points of a lattice in the unfolded conformation. In addition, if the rapid decrease of the Van der Waals interaction between protein molecules with increasing distance is taken into account, the thermodynamic properties of the living protoplasm can be calculated just as for an ideal gas of proteins.
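The coincidence of the two spectra asserted above can be verified numerically for a single mode (with the \(B\)-term absorbed into \(\varepsilon\)). Linearizing the mean-field equation gives the classical system \(i\dot\eta=\varepsilon\eta+2A\eta^{\star}\), \(-i\dot\eta^{\star}=\varepsilon\eta^{\star}+2A^{\star}\eta\), whose eigenfrequencies are \(\pm\sqrt{\varepsilon^2-4|A|^2}\); the same number must appear as the quasi-particle gap of the quantum Hamiltonian. A sketch under these single-mode assumptions (the parameter values are illustrative):

```python
import numpy as np

eps, A = 1.0, 0.2 + 0.15j  # illustrative values (eps absorbs the B-term)

# (i) frequencies of the linearized mean-field equations for (eta, eta*)
Mcl = np.array([[eps, 2*A], [-2*np.conj(A), -eps]])
omega_mf = np.linalg.eigvals(Mcl).real.max()

# (ii) quasi-particle gap of H = eps a^+a + A a^+a^+ + A* aa, truncated Fock space
N = 80
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
ad = a.conj().T
H = eps*ad@a + A*ad@ad + np.conj(A)*a@a
E = np.linalg.eigvalsh(H)
gap = E[1] - E[0]

print(abs(gap - omega_mf) < 1e-8)  # both equal sqrt(eps^2 - 4|A|^2)
```

The classical \(2\times2\) matrix here is the single-mode Bogoliubov-de Gennes matrix of equations (\ref{Bog1}), (\ref{Bog2}); its positive eigenfrequency coincides with the excitation gap of the diagonalized quadratic Hamiltonian.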
The thermodynamic properties of the dead protoplasm can be calculated using the well-known Van der Waals interpolation formula for the free energy of a system. With this model we obtained an expression for the quantity of heat released by the cell while dying, and for the number of potassium ions released from the cell during this process. In the second model we emphasize the analysis of the internal structure of protein molecules. As we have already mentioned, the protein molecules are supposed to be situated at the points of a lattice, but they have nontrivial internal degrees of freedom, and we study the structure of the ground state of this molecular system. The present work shows that in the regime of weak interaction between protein molecules, the ground state of the system of interacting proteins should be described as the ground state of the Hamiltonian of a Bose gas with weak interaction on the configuration space of a protein molecule. To analyze this Hamiltonian we used standard methods of the theory of superfluidity. Taking into account that the interaction between protein molecules is weak and that the bulk of the protein molecules of the system is in the ground folded state, our Hamiltonian can be replaced by an effective quadratic one, which, however, has different forms depending on whether the cell is alive or dead. In addition, it becomes clear which nontrivial integrals of motion in involution should be included in the generalized Gibbs distribution describing the living cell. The use of these effective quadratic Hamiltonians reveals that in the dead cell all the protein molecules significant for the properties of the model are folded, while in the resting living cell the number of unfolded protein molecules is non-zero. The question arises: what is the relation between the considered protoplasm models? In particular, how are the conservation laws postulated for them connected? We believe that the considered models complement each other, and the connection between the named conservation laws comes down to the following.
If we fix the integrals of motion for the second model at some predefined values, then the protein molecules are in the unfolded state, and we can suppose that the nature of the interaction between different protein molecules provides the formation of a lattice as an energetically favorable structure. In other words, the conservation laws for the first model appear to be consequences of the conservation laws for the second model. If, conversely, we consider the Van der Waals protoplasm model as the main one and assign to it conservation laws according to which the protein molecules in the living state are situated at the points of a lattice, then it is reasonable to suggest that such a spatial configuration of protein molecules would, for a number of reasons, favor the unfolded (not folded) state of the protein molecules. In other words, the conservation laws for the second model are consequences of the conservation laws for the first model. This picture is certainly very approximate and hypothetical. The questions of which principles new models (besides the ones considered here) should be constructed on, and of how to develop the kinetic theory of the living cell, are the subject of further investigations. \section{Conclusion} In the present work we have constructed and investigated the properties of two complementary models of protoplasm, the physical basis of life. The work is based on the generalized thermodynamics we proposed earlier (Prokhorenko and Matveev, 2011). Within the framework of the stated assumptions we explained a number of properties and phenomena observed in living cells, formalized (in the extended sense) by Ling's physical theory of the living cell (Ling, 2001). However, the results we had formerly obtained were inequalities and indicated only the direction of processes (upon excitation and death of a cell, heat is released, the volume decreases, and so on); no definite numerical estimates were obtained.
To obtain such estimates we need to specify properties of the intermolecular interactions which escape the available methods of analysis. That is why we should construct different models of protoplasm, emphasizing one system parameter after another. In the present work we constructed and investigated two models of protoplasm: one of them is naturally called the Van der Waals model, and the other is the model of a superfluid Bose gas on the configuration space of a protein molecule. Our aim in this work was not to construct a model giving the most exact agreement with experimental data, but to show that the construction of such models is reasonable and possible. The qualitative agreement of the obtained results with experimental data gives evidence of the viability of the thermodynamic theory of the living cell we proposed (Prokhorenko and Matveev, 2011). Our theory may face misapprehension from devotees of non-equilibrium thermodynamics, as it is based on equilibrium statistical physics and thermodynamics. Their main argument is evident: in the equilibrium state the maximum entropy is reached and, therefore, the cell cannot perform biological work. However, the other fact is also evident: non-equilibrium thermodynamics still has not given a quantitative description even of elementary phenomena, for example, the electric resting potential. In addition, if we accept that the cell lives under the laws of non-equilibrium thermodynamics, we must also recognize that 200 years of experience in successfully modeling properties of the cell by non-living systems is untenable in principle. This would ruin all our knowledge about the living cell without giving anything in return.
As for our generalized thermodynamics, an essential clarification is required: though the states it operates with are stationary in time, they are not equilibrium states, in the sense that their entropy is not the maximal one among all the states having the same energy (this is an essential feature of the state of living matter). The existence of such states even for quite realistic systems of statistical mechanics was proven by one of us (Prokhorenko, 2009). We are very grateful to P. Agutter, A.V. Koshelkin, Yu. E. Lozovik, A.V. Zayakin, E.N. Telshevskiy and N.V. Puzyrnikova for valuable critical comments on this article and very useful discussions. \section{Appendix 1. Transition of Cell from Living to Dead State in Weak External Alternating Field} Here and in our previous work (Prokhorenko and Matveev, 2011) we considered two extreme states of a cell: the non-equilibrium state of rest and the equilibrium state corresponding to death. Let us now ask whether we can describe the death of the protoplasm within the framework of our generalized thermodynamics, i.e. construct an example of a process transferring a cell from the living state of rest to the state corresponding to death. If we could not construct an example of such a process, our whole theory would appear doubtful. In addition, up till now we have considered only the equilibrium statistical mechanics of the living cell, and the construction of an example of the transformation process is the first step toward the construction of the non-equilibrium statistical mechanics of the living cell. Let us start the analysis of the mechanism of cell death with a certain problem of non-equilibrium statistical mechanics.
Consider a Hamiltonian system with \(n\) degrees of freedom, described by the Hamiltonian \(H(p, q)\), in the field of external forces \(\varepsilon f(t),\;\varepsilon \in \mathbb{R}\), such that the complete Hamiltonian of the system is as follows: \begin{eqnarray} \Gamma=H+\varepsilon f(t) P, \end{eqnarray} where \(P(p,q)\) is a function of the canonically conjugate momenta and coordinates of the system. We suppose that the applied force \(f(t)\) is a real function of time of the following form: \begin{eqnarray} f(t)=\sum \limits_{\nu}a_\nu \cos(\nu t+\varphi_\nu), \end{eqnarray} where the phases \(\varphi_\nu\) are independent random values uniformly distributed on the circle. We suppose the frequency spectrum of the applied force is essentially continuous, so that sums of the form \begin{eqnarray} \sum \limits_{\nu}F(\nu)a_\nu^2 \end{eqnarray} with continuous \(F(\nu)\) can in the limit be replaced by integrals \begin{eqnarray} \int \limits_{0}^{+\infty}F(\nu)I(\nu)d\nu. \end{eqnarray} This problem was considered by N.N. Bogoliubov (1945), and here we briefly quote the results obtained by him (relevant only to classical mechanics). Let \(D_t\) denote the probability density of coordinates and momenta at time \(t\), provided that the phases \(\varphi_\nu\) have some definite values. The probability density for the distribution of \(p\) and \(q\) in the usual sense, i.e. when the phase values \(\varphi_\nu\) are irrelevant, can be found from \(D_t\) by averaging over all phases: \begin{eqnarray} \rho_t=\overline{D_t}. \end{eqnarray} At the initial moment \(t = 0\) we suppose that the distribution of coordinates and momenta does not depend on the phases, in other words \begin{eqnarray} D_0=\rho_0.
\end{eqnarray} The temporal evolution of \(D_t\) obeys the well-known Liouville equation: \begin{eqnarray} \frac{\partial D_t}{\partial t}=(H,D_t)+\varepsilon f(t)(P,D_t), \end{eqnarray} where \((A,B)\) is the Poisson bracket defined by the formula: \begin{eqnarray} (A,B)=\sum \limits_{i=1}^n(\frac{\partial A}{\partial q_i}\frac{\partial B}{\partial p_i}-\frac{\partial A}{\partial p_i}\frac{\partial B}{\partial q_i}). \end{eqnarray} Let us also introduce the one-parameter group of operators \(T_t\), \(t\in \mathbb{R}\), acting on dynamical variables according to the formula: \begin{eqnarray} T_tF(p,q)=F(p_t,q_t), \end{eqnarray} where \(p_t,q_t\) is the solution of the canonical Hamilton equations \begin{eqnarray} \frac{d p_t}{dt}=-\frac{\partial H(p,q)}{\partial q}, \nonumber\\ \frac{d q_t}{d t}=\frac{\partial H(p,q)}{\partial p}, \end{eqnarray} with initial data \begin{eqnarray} p_0=p,\; q_0=q. \end{eqnarray} Using these assumptions and notations, Bogoliubov (1945) derived, in the limit of small \(\varepsilon\), the following equation for \(\rho_t\): \begin{eqnarray} \frac{\partial \rho_t}{\partial t}=(H,\rho_t)+\varepsilon^2\int \limits_{0}^{t} \Delta (t-\tau)(P,T_{\tau-t}(P,\rho_\tau))d\tau, \label{12} \end{eqnarray} where \begin{eqnarray} \Delta(\tau)=\frac{1}{2} \int \limits_{0}^{+\infty} I(\nu) \cos(\nu\tau) d\nu. \end{eqnarray} Further, Bogoliubov (1945) considered the case when the system exposed to the external force is an \(n\)-dimensional harmonic oscillator with incommensurable frequencies \(\vec{\omega}=(\omega_1,...,\omega_n)\). Let \(\sigma_t=\overline{\rho_t}\) denote the function of action variables obtained from \(\rho_t\) by averaging over the angle variables \(\theta_1,...,\theta_n\). The function \(P(p,q)\) becomes a function of the action-angle variables and can be expanded in a Fourier series: \begin{eqnarray} P=\sum \limits_{\vec{n}}P_{\vec{n}} e^{i\vec{\theta}\cdot \vec{n}}.
\end{eqnarray} In this case, under the same initial conditions on \(D_t\), in the limit of small \(\varepsilon\) the evolution of \(\sigma_t\) is described by the Fokker-Planck equation: \begin{eqnarray} \frac{\partial \sigma_t}{\partial t}=\varepsilon^2 \sum \limits_{k=1}^{n} \sum \limits_{s=1}^n \frac{\partial}{\partial I_s}A_{sk}(I)\frac{\partial \sigma_t}{\partial I_k}, \end{eqnarray} where \begin{eqnarray} A_{sk}(I)=\frac{\pi}{4} \sum \limits_{\vec{n}} n_k n_s I(\vec{n} \cdot \vec{\omega})|P_{\vec{n}}(I)|^2. \end{eqnarray} This equation, in particular, describes the diffusion of distributions which at the initial moment may be concentrated at one point, i.e. \begin{eqnarray} \sigma_0(I)=\frac{1}{(2\pi)^n}\prod \limits_{k=1}^n \delta(I_k-I_k^0). \end{eqnarray} But the last distribution is a generalized microcanonical distribution of the kind we used to describe the states of rest of the Ling cell. Therefore, it seems natural that the method proposed by Bogoliubov (1945) can be used to construct an example of the transformation process of a cell from the living to the dead (equilibrium) state. We shall now show how this can be done, using the results of Appendix 1 of the work by Prokhorenko and Matveev (2011). So, assume there is a Hamiltonian system \(M\) with \(n+k\) degrees of freedom, where \(k\ll n\), which is described by the Hamiltonian \(H\) and on which \(k\) independent first integrals in involution are defined. It was shown in Appendix 1 of the above-mentioned work that a certain covering Hamiltonian system \(M'\) of the system \(M\) can be represented as a direct product \(M'=M'_1\times M'_2\), so that the canonically conjugate coordinates on \(M'_2\) are the first integrals \(K'_1,...,K'_k\) (more precisely, their lifts to \(M'\)), playing the role of momenta, and ``angle'' variables \(\varphi_1,...,\varphi_k\), ranging over the whole real axis and playing the role of coordinates. Let \(\pi\) be the canonical projection of \(M'\) onto \(M\).
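The diffusive spreading of an initially microcanonical (delta-like) distribution of action variables, described by the Fokker-Planck equation above, can be illustrated by a one-dimensional finite-difference sketch. Everything below is an illustrative assumption: one action variable on \(0<I<1\), a positive diffusion coefficient \(A(I)\), zero-flux (reflecting) boundaries, and a smoothed delta as the initial condition. The distribution relaxes toward the constant one, and its entropy grows in the process.

```python
import numpy as np

n = 100
dI = 1.0/n
I = (np.arange(n) + 0.5)*dI
A = 0.5 + 0.5*I              # positive diffusion coefficient (illustrative)
Amid = 0.5*(A[1:] + A[:-1])  # A at the interior cell interfaces

sigma = np.exp(-((I - 0.3)/0.02)**2)  # smoothed delta concentrated at I^0 = 0.3
sigma /= sigma.sum()*dI               # normalize

def entropy(s):
    return -(s*np.log(np.maximum(s, 1e-300))).sum()*dI

S0 = entropy(sigma)
dt = 0.2*dI**2/A.max()  # stable explicit time step
for _ in range(50000):  # integrate to t = 1, long past the relaxation time
    flux = np.zeros(n + 1)  # zero flux at both boundaries (reflecting walls)
    flux[1:-1] = Amid*(sigma[1:] - sigma[:-1])/dI
    sigma += dt*(flux[1:] - flux[:-1])/dI
S1 = entropy(sigma)

print(S1 > S0)                            # entropy has grown
print(np.allclose(sigma, 1.0, atol=0.02)) # near the constant distribution
```

The conservative flux form keeps the total probability fixed, so the stationary end state is exactly the constant (microcanonically uniform) distribution discussed below.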
A function \(f\) on \(M'\) is called periodic (Prokhorenko and Matveev, 2011) if \(f(x)=h(\pi(x))\) for some function \(h\) defined on \(M\). It has been suggested (Prokhorenko and Matveev, 2011) to represent a mixed state of the system \(M'\) by some positive periodic function \(\rho\) on \(M'\). Put by definition \(R_L:=\{x \in M'||\varphi_1|<L,...,|\varphi_k|<L\}\). If \(A\) is a periodic function on \(M'\), then its average over the state corresponding to the distribution \(\rho\) is calculated (Prokhorenko and Matveev, 2011) by the following formula: \begin{eqnarray} \langle A\rangle=\lim_{L\rightarrow \infty} \frac{\int \limits_{R_L} \rho( K_1,...,K_k,\varphi_1,...,\varphi_k)A(K_1,...,K_k,\varphi_1,...,\varphi_k)dK_1...d\varphi_k}{\int \limits_{R_L} \rho( K_1,...,K_k,\varphi_1,...,\varphi_k)dK_1...d\varphi_k}.\label{11} \end{eqnarray} It was shown there that the limit (\ref{11}) always exists. We call the distribution function \(\rho(K_1,...,K_k,\varphi_1,...,\varphi_k)\) normalized if \begin{eqnarray} \lim_{L\rightarrow \infty}\frac{1}{L^k}\int \limits_{R_L}\rho( K_1,...,K_k,\varphi_1,...,\varphi_k) dK_1...d\varphi_k=1. \end{eqnarray} The entropy of a state corresponding to the normalized distribution \(\rho(K_1,...,K_k,\varphi_1,...,\varphi_k)\) is defined by the following formula: \begin{eqnarray} S=-\lim_{L\rightarrow \infty}\frac{1}{L^k} \int \limits_{R_L}\rho( K_1,...,K_k,\varphi_1,...,\varphi_k) \ln \rho( K_1,...,K_k,\varphi_1,...,\varphi_k) dK_1...d\varphi_k. \label{14} \end{eqnarray} The last limit exists, the proof of its existence being the same as for the limit (\ref{11}). The Hamiltonian of the covering system \(M'\) we also denote by \(H\) (without risk of confusion). If our Hamiltonian system is a Ling cell, then \(n\gg k\). In this case, as shown by Prokhorenko and Matveev (2011), the dynamics of the system \(M'_2\) can be considered separately.
This dynamics is Hamiltonian and is generated by the Hamiltonian \(F(x| T)\), where \(F(x| T)\) is the free energy of the system \(M'_1\) at temperature \(T\) provided that the system \(M'_2\) is situated at the point \(x\): \begin{eqnarray} F(x|T)=-T \ln \int d\Gamma_y^1 e^{-\frac{H(y,x)}{T}}, \end{eqnarray} where \(y \in M'_1\), \(x \in M'_2\), and \(d\Gamma^1_y\) is an element of phase volume on \(M'_1\). But \(K_1,...,K_k\) are integrals of motion; therefore \(F(x|T)\) does not depend on the angle variables \(\varphi_1,...,\varphi_k\), but only on \(K_1,...,K_k\). Put by definition \begin{eqnarray} F'(K_1,...,K_k|T):=F(x|T). \end{eqnarray} Further, we are interested in the case when \(F'(K_1,...,K_k)\) reaches its minimum on a whole domain \(\mathcal{O}\) of non-zero volume. In the work by Prokhorenko and Matveev (2011) the corresponding statement is called the equivalence principle, and it was shown there that only under the condition that \(F'(K_1,...,K_k)\) is constant on a whole domain \(\mathcal{O}\) of non-zero volume does our generalized thermodynamics give new results in comparison with the ordinary one and provide a thermodynamic description of the resting state of the Ling cell. So \(F(x|T)\) is constant on the direct product \(\mathcal{O}\times\mathbb{R}^k\) and, therefore, it defines a trivial dynamics in this domain, i.e. if \((p_0,q_0)\in \mathcal{O}\times\mathbb{R}^k\), then \((p_t,q_t)=(p_0,q_0)\) for all \(t \in \mathbb{R}\). Using this circumstance, equation (\ref{12}) can be simplified as follows: \begin{eqnarray} \frac{\partial \rho_t}{\partial t}=\varepsilon^2\int \limits_{0}^{t} \Delta (t-\tau)(P,(P,\rho_\tau))d\tau. \end{eqnarray} However, in the limit of small \(\varepsilon\), \(\rho_\tau\) varies slowly with time and is almost constant wherever \(\Delta(t-\tau)\) noticeably differs from zero. Therefore, in the integral on the right-hand side of the last equation we can replace \(\rho_\tau\) by its value at \(\tau=t\).
Since \begin{eqnarray} \int \limits_0^{+\infty}\Delta(t)dt=\frac{\pi I(0)}{2}, \end{eqnarray} we find \begin{eqnarray} \frac{\partial\rho_t}{\partial t}=\frac{\varepsilon^2\pi I(0)}{2}(P,(P,\rho_t)). \label{13} \end{eqnarray} This equation of Fokker--Planck type is valid in the region \(\mathcal{O}\times\mathbb{R}^k\). As for the behaviour of \(\rho_t\) outside \(\mathcal{O}\times\mathbb{R}^k\), physical considerations make it clear that the probability of finding the system \(M'\) outside this domain is negligible. It is therefore sufficient to study the behaviour of \(\rho_t\) inside the domain \(\mathcal{O}\times\mathbb{R}^k\), applying on the boundary of \(\mathcal{O}\times\mathbb{R}^k\) boundary conditions which ensure the self-adjointness of equation (\ref{13}) and conserve the total probability: \begin{eqnarray} \int \limits_{\mathcal{O}\times\mathbb{R}^k}\rho_t(x)d\Gamma^2_x=1, \end{eqnarray} where \(d\Gamma^2_x\) is an element of phase volume on \(M'_2\). We shall not, however, specify the form of these boundary conditions explicitly; we simply assume that the domain \(\mathcal{O}\) is a compact manifold (without boundary). The self-adjointness of the operator on the right-hand side of the Fokker--Planck equation is a consequence of the equality of the forward and backward probabilities of an arbitrary transition, which in turn follows from the complete invariance of the Hamiltonian under time reversal (see the principle of symmetry of kinetic coefficients due to L. Onsager). Now, following Bogolyubov (1945), one can show that the entropy of a state defined by relation (\ref{14}) increases monotonically in time under equation (\ref{13}). The entropy (\ref{14}) reaches its maximum at the constant distribution function, i.e. 
at \(t=+\infty\) \begin{eqnarray} \rho_{+\infty}(K_1,...,\varphi_k)=\mathrm{const}. \end{eqnarray} But if the distribution function \(\rho_{+\infty}(K_1,...,\varphi_k)\) is constant, it corresponds to the Gibbs equilibrium microcanonical distribution. Thus, an example of the transformation of the Ling's cell from the living (resting) state to the dead state is its evolution in a weak external field given by a sum of harmonic oscillations with a continuous frequency spectrum and random independent phases uniformly distributed on the circle, provided that the spectral intensity density \(I(\nu)\) of this field satisfies \(I(0)\neq 0\). The fact that the Fokker--Planck equation involves the spectral intensity density only through \(I(0)\) is very significant and can be verified experimentally. This conclusion agrees well with the observed destructive influence of infrasound, magnetic storms and white noise of any kind on living organisms. \section{Appendix 2. On the Structuring Role of ATP as Part of the (ATP)m-(Protein)n-(H2O)p-(\({\rm K \mit}^+\))q Complex} In this Appendix we discuss the theoretical relation between the capability of an ATP molecule to define the structure of a physioatom in the resting state and the non-ergodicity of the Ling's cell. The purpose of this Appendix is to give an illustrative physical picture of the structuring influence of ATP on the physioatom; we do not attempt a mathematically rigorous description of the ATP-protein-water-\({\rm K \mit}^+\) interaction. We are interested in the following questions. How many protein molecules in the water-protein complex are controlled by a single ATP molecule? Why does the physical disturbance generated by ATP propagate without dissipation to a large number of proteins, and how does this disturbance affect their conformation? 
The fact that, according to our main assumption, the Ling's cell is a Hamiltonian system possessing a vast number of first integrals in involution raises the question of whether the theory of completely integrable (in the sense of Liouville) systems can be used to describe such a cell. The Korteweg--de Vries equation (Arnold, 2003), an example of a completely integrable system, \begin{eqnarray} u_t=6uu_x-u_{xxx} \end{eqnarray} originated in the theory of shallow water waves (in narrow ship channels). This equation is remarkable in that it admits solutions in the form of solitary waves (solitons) propagating without dissipation; the only parameter characterizing a soliton is its velocity. The Korteweg--de Vries equation also admits multi-soliton solutions which, as \(t\rightarrow\pm \infty\), break up into separate solitons propagating with different velocities. A significant property of multi-soliton solutions is that when solitons collide their velocities do not change. We suggest that the transfer of the physical influence of an ATP molecule to the surrounding protein molecules is analogous to the propagation of solitons: since the Ling's cell possesses many commuting first integrals, the situation should resemble that arising in the theory of completely integrable systems. For a single ATP molecule to control the surrounding complex of water and proteins effectively, the disturbance carried by the solitons should not dissipate. For simplicity of analysis, let us suppose that the propagation of the physical impulse from the ATP is described by the one-dimensional Korteweg--de Vries equation. Then every soliton is unambiguously characterized by a single parameter, its velocity. Therefore, in order that no information from the ATP molecule be lost, the solitons' velocities must not change after their collisions. 
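The dissipation-free propagation invoked here can be checked directly on the equation above. For the sign convention \(u_t=6uu_x-u_{xxx}\), the standard one-soliton profile is \(u(x,t)=-(c/2)\,\mathrm{sech}^2\!\big(\tfrac{\sqrt{c}}{2}(x-ct)\big)\), a travelling wave of permanent shape with speed \(c\). The pure-Python sketch below verifies by finite differences that this profile satisfies the equation to discretization accuracy; the test points and step size are arbitrary choices of ours.

```python
import math

# Numerical check that the one-soliton profile solves u_t = 6*u*u_x - u_xxx.
# For this sign convention the soliton is u(x,t) = -(c/2)*sech(s)^2 with
# s = (sqrt(c)/2)*(x - c*t), where c > 0 is the soliton speed.

def u(x, t, c=1.0):
    s = 0.5 * math.sqrt(c) * (x - c * t)
    return -0.5 * c / math.cosh(s) ** 2

def residual(x, t, c=1.0, h=0.01):
    """|u_t - 6*u*u_x + u_xxx| evaluated via central finite differences."""
    u_t = (u(x, t + h, c) - u(x, t - h, c)) / (2 * h)
    u_x = (u(x + h, t, c) - u(x - h, t, c)) / (2 * h)
    u_xxx = (-u(x - 2 * h, t, c) + 2 * u(x - h, t, c)
             - 2 * u(x + h, t, c) + u(x + 2 * h, t, c)) / (2 * h ** 3)
    return abs(u_t - 6 * u(x, t, c) * u_x + u_xxx)

# The residual stays at the level of the O(h^2) discretization error.
print(max(residual(x, 0.3) for x in (-2.0, -0.5, 0.0, 0.7, 1.5)))
```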
But precisely this property, the invariance of soliton velocities under collisions, holds for multi-soliton solutions of the Korteweg--de Vries equation, as we have mentioned above. The theory of completely integrable systems is well developed (Bullaf and Caudry, 1999). Many integrable equations on the line have been constructed, for example the nonlinear Schr\"odinger equation, the sine-Gordon equation, the Toda chain and so on. The main method for integrating such equations is the inverse scattering method (Bullaf and Caudry, 1999). The property of solitary solutions passing through each other without change of velocity is common to all of them and is explained by the presence of a complete set of independent commuting first integrals. Let us take, for instance, the finite Toda chain of \(N\) particles on the line (Mozer, 1975). The state of this system is fully described by the \(N\) particle coordinates \(\{x_i|i=1,...,N\}\) and the \(N\) momenta \(\{p_i|i=1,...,N\}\). The Hamiltonian of this system is by definition \begin{eqnarray} H=\sum \limits_{i=1}^N \frac{p_i^2}{2}+\sum \limits_{i=1}^{N-1}e^{x_{i+1}-x_i}. \end{eqnarray} As shown by Mozer (1975), at \(t\rightarrow\pm\infty\) the distances between the particles tend to infinity. This system is completely integrable (Mozer, 1975), and it possesses a set of \(N\) independent first integrals in involution. When the distances between the particles are so large that their interaction can be neglected, these integrals reduce to the elementary symmetric polynomials of the momenta (velocities) (Mozer, 1975). 
\begin{eqnarray} I_1=p_1+p_2+...+p_N,\nonumber\\ I_2=p_1p_2+...+p_1p_N+...+p_{N-1}p_N,\nonumber\\ \cdots\cdots\cdots\cdots\cdots\nonumber\\ I_N=p_1p_2...p_N. \end{eqnarray} Since the values of the integrals \(I_1,...,I_N\) coincide before and after the collisions of the particles, the velocities after the collisions satisfy a system of \(N\) algebraic equations, which implies that they are the roots of the algebraic equation \begin{eqnarray} (v-v_1)...(v-v_N)=0, \end{eqnarray} where \(v_1,...,v_N\) are the velocities of the particles before the collisions and \(v\) is an unknown. So the velocities of the particles after the collisions coincide with the velocities before the collisions up to a permutation. Note also that for integrable systems the equivalence principle is fulfilled in the limit where the number of degrees of freedom \(N\rightarrow \infty\). Suppose \(I_1,...,I_N\) are action variables, and \(\varphi_1,...,\varphi_N\) are the angle variables conjugate to them. We can take \(I_2,...,I_N\) as the independent first integrals in involution entering the generalized microcanonical distribution. We have: \begin{eqnarray} S(E,I_2,...,I_N)=\ln \int dI_1 d\varphi_1 \prod \limits_{i=2}^N d \varphi_i \delta(H(I_1,...,I_N)-E). \end{eqnarray} Integrating over \(dI_1\) yields: \begin{eqnarray} S(E,I_2,...,I_N)=\ln \frac{1}{|\frac{\partial H(I_1,...,I_N)}{\partial I_1}|}+N \ln 2\pi. \end{eqnarray} But \(\frac{\partial H}{\partial I_1}=\omega_1(I_1,...,I_N)\) is the frequency corresponding to \(\varphi_1\), and it is reasonable to restrict attention to systems for which \(\omega_1\) is asymptotically constant as \(N\rightarrow \infty\). 
So: \begin{eqnarray} S(E,I_2,...,I_N)=-\ln \omega_1(I_1,...,I_N)+N \ln 2\pi. \end{eqnarray} But since \(\omega_1(I_1,...,I_N)\) is asymptotically constant, the first term can be neglected in the limit \(N \rightarrow \infty\), and the equivalence principle is fulfilled for the integrals \(I_2,...,I_N\) in this limit.
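Returning to the collision argument for the Toda chain: since the conserved integrals \(I_1,...,I_N\) are, up to sign, the coefficients of \(\prod_i(v-v_i)\), any admissible set of post-collision velocities must be a root set of that polynomial, i.e. a permutation of the pre-collision velocities. The following pure-Python check makes this concrete; the velocities are arbitrary illustrative numbers.

```python
# The conserved elementary symmetric polynomials fix the coefficients of
# prod_i (v - v_i); hence post-collision velocities, sharing the same I_k,
# are a permutation of the pre-collision ones. Velocities are illustrative.

def poly_from_roots(roots):
    """Coefficients of prod_i (v - v_i), highest degree first."""
    coeffs = [1.0]
    for r in roots:
        coeffs = [a - r * b for a, b in zip(coeffs + [0.0], [0.0] + coeffs)]
    return coeffs

def eval_poly(coeffs, v):
    """Horner evaluation of the polynomial at v."""
    out = 0.0
    for c in coeffs:
        out = out * v + c
    return out

before = [3.0, 1.0, -2.0, 0.5]          # velocities before the collisions
coeffs = poly_from_roots(before)

# Every admissible post-collision velocity is a root of this polynomial:
print([eval_poly(coeffs, v) for v in before])
# A permuted velocity set reproduces exactly the same conserved coefficients:
print(poly_from_roots([-2.0, 0.5, 3.0, 1.0]))
```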
\section{Introduction}% \label{sec:Introduction} It is now accepted, within the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model, that galaxy mergers play a fundamental role in the formation and evolution of galaxies \citep{White1978}. These events change the nature of galaxies in a number of ways: post-mergers exhibit a diminished central metallicity \citep[e.g.][]{Kewley2006,Scudder2012}; while merging galaxies not only have both enhanced morphological disturbances \citep[e.g.][]{Conselice2003,Lotz2008,Casteels2014} and star formation rates \citep[e.g.][]{Patton2013,Thorp2019} in comparison to non-mergers, but also tend to exhibit active galactic nuclei \citep[e.g.][]{Ellison2011,Ellison2019}. Modern cosmological simulations, such as those from the EAGLE \citep{Schaye2015,Crain2015}, Illustris \citep{Vogelsberger2014,Vogelsberger2014a,Genel2014,Sijacki2015}, IllustrisTNG \citep{Marinacci2018,Naiman2018,Nelson2018,Pillepich2018, Springel2018,Nelson2019} and Horizon-AGN \citep{Dubois2014} projects, have been able not only to corroborate most observational findings about mergers, but also to provide additional insights that can be used to inform and interpret observations. For example, one of the most interesting and studied topics is the determination of the galaxy merger rate, namely, the number of mergers per unit time. A proper determination of this quantity is important to fully understand the role of interactions in galactic structure and star formation rates, as well as to test hierarchical galaxy formation models. Theoretically, the merger rate is estimated via semi-empirical (e.g. \citealt{Stewart2009,Hopkins2010}) and semi-analytic (e.g. \citealt{Guo2008}) models as well as using hydrodynamic cosmological simulations (e.g. \citealt{Rodriguez-Gomez2015}). 
In particular, \citet{Rodriguez-Gomez2015} used the Illustris simulation to quantify the galaxy-galaxy merger rate as a function of stellar mass, merger mass ratio, and redshift, finding that it increases steadily with stellar mass and redshift, while being in good agreement with observational constraints for both intermediate-sized and massive galaxies. On the observational side, it is important to note that the merger rate cannot be estimated directly. Instead, the merger fraction, i.e. the fraction of galaxies observed to be undergoing a merger, must be computed first, and then translated into a rate by dividing by an appropriate \textit{observability} time-scale reflecting the period during which a merger is detectable. The merger fraction is typically computed by using observations of close pairs or morphologically disturbed galaxies. Close-pair merger candidates have been identified by several authors (e.g. \citealt{Lin2004,Propris2005,Kartaltepe2007,Besla2018,Duncan2019}) as galaxies with a neighbour within a small projected angular separation and with a small line-of-sight relative radial velocity. Alternatively, since there is a close connection between galactic structure and merging processes, some types of morphologically disturbed galaxies are also considered to be ideal merger candidates. In particular, non-parametric morphological diagnostics, such as the concentration--asymmetry--smoothness (CAS, \citealt{Conselice2003}), Gini--$M_{20}$ \citep{Lotz2004}, and multimode--intensity--deviation \citep[MID,][]{Freeman2013} statistics, have been successfully used to identify galaxy mergers and to study, among other things, the evolution of the observational merger rate \citep[e.g.][]{Lotz2011}, its dependence on galaxy stellar mass \citep[e.g.][]{Casteels2014}, as well as to obtain the galaxy merger rate and merging time-scales from hydrodynamical simulations, allowing direct comparisons with observational estimates \citep[e.g.][]{Bignone2016,Whitney2021}. 
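The fraction-to-rate conversion described above amounts to a one-line computation; the sketch below only makes the bookkeeping explicit. The numerical values are purely illustrative and are not taken from any of the cited works.

```python
# Sketch of the fraction-to-rate conversion: the observed merger fraction
# f_merge is divided by an observability time-scale T_obs, i.e. the period
# during which a merger remains detectable. All values are illustrative.

def merger_rate(f_merge, t_obs_gyr):
    """Mergers per galaxy per Gyr, given a fraction and a time-scale in Gyr."""
    if not 0.0 <= f_merge <= 1.0:
        raise ValueError("merger fraction must lie in [0, 1]")
    return f_merge / t_obs_gyr

# e.g. a 4 per cent merger fraction with a 0.5 Gyr observability window
rate = merger_rate(0.04, 0.5)
print(rate)
```

The entire systematic burden of the method sits in \(T_{\mathrm{obs}}\), which depends on the detection technique (close pairs versus morphological disturbance) and must itself be calibrated, typically against simulations.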
More recently, machine learning and deep learning methods have been adopted, for the same purpose, as an alternative to more standard methods. For instance, \citet{Snyder2019} considered a high-mass galaxy sample from the original Illustris simulation to create, based on image-based morphological calculations and merger statistics, a training data set that was fed to a random forest classifier. The resulting model was then used to perform observational and theoretical estimates of the merger rate as a function of redshift. This method was also recently used to perform merger classifications on JWST-like simulated images \citep{2022arXiv220811164R}. Similarly, \citet{2019ApJ...872...76N} combined non-parametric morphological statistics with a linear discriminant analysis (LDA) classifier to characterize simulated mergers and found that the LDA classifier outperformed the individual metrics. Furthermore, convolutional neural networks have been adopted to study galaxy morphology in different contexts. To list a few, they have been applied to differentiate disk galaxies from bulge-dominated systems in both observations and cosmological simulations \citep{HuertasCompany2015,Huertas-Company2019}, to identify merging galaxies in cosmological simulations \citep{Ciprijanovic2020}, to determine the effect of merging events on the star formation rates of galaxies \citep{Pearson2019}, to predict the stage of interaction on synthetic galaxy images and to assess the degree of realism and post-processing needed on these synthetic images to perform adequate deep learning estimations \citep{Bottrell2019}, and to identify high-mass major merger events in observations and simulations and subsequently estimate the merger fraction \citep{2020A&A...644A..87W}. Despite the well-established link between mergers and morphology in massive galaxies, the incidence and effects of mergers in dwarf galaxies $(M_\ast < 10^{9.5}\,\mathrm{M}_\odot)$ are more uncertain. 
This is an important topic to explore, since galaxy mergers can be transformative events and dwarfs represent the majority of the galaxy population. For example, \citet{Casteels2014} studied a local galaxy sample at $z<0.2$ and $10^{8}\,\mathrm{M}_\odot<M_\ast<10^{11.5}\,\mathrm{M}_\odot$ to infer the mass-dependent galaxy merger fraction and merger rate by measuring the asymmetry parameter from galaxy images. Their estimated major merger fraction is a decreasing function of stellar mass, falling from $4\%$ at $M_\ast\sim10^{9}\,\mathrm{M}_\odot$ to $2\%$ at $M_\ast\sim10^{11}\,\mathrm{M}_\odot$. This finding suggests that galaxy interactions might become increasingly important for lower-mass galaxies. Similarly, \citet{Besla2018} computed the frequency of companions for a low-redshift dwarf galaxy ($0.013<z<0.0252$; $2\times10^{8}\,\text{M}_\odot<M_\ast<5\times10^{9}\,\text{M}_\odot$) sample from the Sloan Digital Sky Survey (SDSS), comparing it to a mock galaxy sample from the original Illustris simulation. One of the goals of their study was to estimate the major pair fraction as a function of stellar mass, finding that this quantity increases slowly with stellar mass, but does not follow the decreasing trend reported by \citet{Casteels2014}. Motivated by such opposing results, in this paper we revisit the topic of the mass-dependent merger fraction, with an emphasis on the regime of dwarf galaxies. In this work, we use the TNG$50$ simulation from the IllustrisTNG project to investigate the galaxy merger fraction at the low-mass end ($8.5\leqslant\log(M_\ast/\text{M}_\odot)\leqslant11$), and whether it can be inferred statistically from morphological disturbances in large samples of galaxies. For this purpose, we generate a large set of synthetic images of TNG$50$ galaxies, including the effects of dust attenuation and scattering, designed to match observations from the Kilo-Degree Survey (KiDS). 
We then calculate several image-based morphological statistics for both the real and simulated galaxy samples, and explore the connection between morphology and mergers in the simulation. Instead of solely using the asymmetry statistic as a merger indicator, we follow a similar approach to that of \citet{Snyder2019} and train a random forest classifier using several non-parametric morphological indicators as the model features and merger statistics from the simulation merger trees as the ground truth. The trained models are then applied to the observational sample in order to estimate the local merger fraction as a function of galaxy mass in the real Universe. This paper is structured as follows. In \cref{sub:The IllustrisTNG simulations} we briefly review the IllustrisTNG simulations, and in \cref{sub:Observational sample,sub:Simulated sample} we describe the observational and simulated galaxy samples used in our analysis. In \cref{sub:Synthetic image generation,sub:Source segmentation} we describe the design and generation of synthetic images from the simulated sample, as well as the source segmentation and deblending procedures. Subsequently, in \cref{sub:Morphological_measurements} we present the morphological diagnostics measured on both galaxy samples. In \cref{sub:Merger identification} we define our merging and non-merging simulated samples, and in \cref{sub:Random forest classification} we describe the training and calibration of our random forest classifier. 
Our main results are given in \cref{sec:results}, where we examine the morphological differences between our observational and synthetic galaxy samples (\cref{sub:The morphologies of TNG50 galaxies}) as well as between merging and non-merging simulated galaxies (\cref{sub:The morphologies of intrinsic mergers}), evaluate the performance of our random forest classifier (\cref{sub:Random forest classification performance}), and present the resulting mass-dependent merger fraction (\cref{sub:The merger incidence of GAMA-KiDS observations}). Finally, we discuss our results in \cref{sec:Discussion} and present our conclusions in \cref{sec:Summary}. \section{Methodology}% \label{sec:Methodology} \subsection{The IllustrisTNG simulations}% \label{sub:The IllustrisTNG simulations} The IllustrisTNG project is a suite of $N$-body magneto-hydrodynamical cosmological simulations that model dark and baryonic matter assuming a $\Lambda$CDM framework \citep{Marinacci2018,Naiman2018,Nelson2018,Pillepich2018,Springel2018,Nelson2019}. The simulation suite consists of three cubic volumes with periodic boundary conditions: TNG50, TNG100, and TNG300, which measure 51.7, 110.7, and 302.6 Mpc on a side, respectively. In this work we use the highest resolution version of the TNG$50$ simulation \citep{2019MNRAS.490.3196P,2019MNRAS.490.3234N}, which has a volume of $\quantity(\SI{51.7}{\mega\pc})^3$ at a baryonic (dark) mass resolution of $8.5\cdot10^4\,\mathrm{M}_\odot$ ($4.5\cdot10^5\,\mathrm{M}_\odot$) and a spatial resolution (effectively set by the gravitational softening length of stellar and DM particles) of ${\sim}\SI{300}{\pc}$ at $z=0$. The simulation starts at redshift $z=127$ and is evolved down to $z=0$. The assumed cosmological parameters, obtained from \citet{Ade2016}, are $\Omega_{\Lambda,\,0}=0.6911$, $\Omega_{m,\,0}=0.3089$, $\Omega_{b,\,0}=0.0486$, $\sigma_8=0.8159$, $n_s=0.9667$ and $h=0.6774$. 
The galaxy formation model in IllustrisTNG includes prescriptions for gas radiative cooling, star formation and evolution, supernova feedback, metal enrichment, and feedback from supermassive black holes \citep[see][for a full description]{Weinberger2017a,2018MNRAS.473.4077P}. This model was tuned to approximately match several observational properties, such as the star formation rate density at $z=0-8$, the galaxy mass function and sizes at $z=0$, the stellar-to-halo and BH-to-halo mass relations at $z=0$ and the gas mass fraction within galaxy clusters \citep{2018MNRAS.473.4077P}. Since the simulations were not adjusted to match galaxy morphology, it is noteworthy that there is a reasonable level of morphological consistency between TNG$100$ galaxies and a comparable observational sample \citep{Rodriguez-Gomez2019}, as well as other properties of galaxies such as their star formation activity \citep{2019MNRAS.485.4817D}, resolved star formation \citep{10.1093/mnras/stab2131}, and metallicities \citep{2019MNRAS.484.5587T}. In order to identify DM haloes in the simulation, friends-of-friends groups are constructed using the percolation algorithm by \citet{Davis1985}, linking dark matter particles based on their inter-particle separation. The linking length used in the simulations is $b=0.2$ (in units of the mean interparticle distance). Furthermore, subhaloes are identified with the \textsc{\textsf{subfind}} algorithm \citep{Springel2001a,Dolag2009}. \begin{figure} \begin{center} \includegraphics[scale=0.6] {./figures/cropped_ra_dec.pdf-1.png} \end{center} \caption[Position of GAMA galaxies with respect to KiDS-N tiles.] {Position of GAMA galaxies with respect to KiDS-N tiles. This figure demonstrates that the GAMA sample is fully contained within KiDS. 
Galaxies shown here constitute our final GAMA-KiDS sample with $z<0.05$ and $8.5\leqslant\log \quantity( M_\ast/\mathrm{M}_\odot)\leqslant11$.} \label{fig:gamakidstiles} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.45]{./figures/z_vs_mass_obs_v999x.pdf-1.png} \end{center} \caption[Distribution of stellar masses and redshifts of GAMA galaxies.]{Distribution of stellar masses and redshifts of GAMA galaxies. The rectangle encloses the examined ($z<0.05;\, 8.5\leqslant\log \quantity( M_\ast / \mathrm{M}_\odot)\leqslant11$) GAMA-KiDS sample.} \label{fig:z_vs_mass} \end{figure} \subsection{Observational sample}% \label{sub:Observational sample} The Kilo-Degree Survey (KiDS; \citealt{deJong2013}) is an ongoing optical wide-field imaging survey operating with the OmegaCAM camera at the Very Large Telescope (VLT) Survey Telescope, whose main goal is to map the Universe's large-scale matter distribution using weak lensing shear and photometric redshift measurements. KiDS is divided into two patches, one in the north (KiDS-N) and the other in the south (KiDS-S). The first of these is found near the equator in the Northern Galactic Cap, while the second is found around the South Galactic Pole; in combination, they cover $\SI{\sim1350}{\degg\squared}$ of the sky. The analysis presented in this work was conducted using products from the fourth data release of KiDS \citep{Kuijken2019}. The survey has a typical seeing of \ang[angle-symbol-over-decimal=true]{;;0.7} with $5\sigma$ depths of 24.2, 25.1, 25.0 and 23.7 mag for the filters \emph{u}, \emph{g}, \emph{r} and \emph{i}, respectively. We only considered $r$-band stacked images (pixel scale of \SI{0.2}{\arcsec\per\pixel}) from KiDS-N. Our galaxy sample was constructed using data from the Galaxy and Mass Assembly (GAMA; \citealt{Baldry2018}) survey, a large catalogue of galaxies with reliable redshifts obtained from spectroscopic and multi-wavelength observations. 
The Anglo-Australian Telescope's (AAT) AAOmega multi-object spectrograph was used to conduct the survey, which covered three equatorial (G$09$, G$12$ and G$15$) and two southern regions (G$02$ and G$23$). The first three, chosen for this study, each encompassed \ang{\sim5} in declination and \ang{\sim12} in right ascension, and were centred at roughly \SI{9}{\ahour} (G$09$), \SI{12}{\ahour} (G$12$) and \SI{15}{\ahour} (G$15$). \cref{fig:gamakidstiles} shows the locations of galaxies from regions G$09$, G$12$ and G$15$ in relation to tiles from KiDS-N, revealing that practically all of these galaxies are included within the KiDS footprint. This makes it straightforward to match GAMA galaxies to KiDS tiles and create cutouts centred on any individual galaxy. Our \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{./figures/tng-50_comp_examples_v1.2.2.png} \end{center} \caption{Composite $g,r,i$ idealized synthetic images from the simulated TNG50 sample, as obtained with the process described in \cref{sub:Synthetic image generation}. Realistic $r$-band counterparts are shown in the first three rows of \cref{fig:sim_vs_obs_synth_images}. Labels denote the stellar mass of each galaxy.} \label{fig:comp_images} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{./figures/tng-50_examples_v1.2.2.png}\\ \includegraphics[width=\textwidth]{./figures/gama-kids_examples_v999x_new.png} \end{center} \caption{First three rows: $r$-band synthetic images of TNG50 galaxies after applying realism (convolution with a PSF and addition of shot and background noise) to the underlying idealized images, as described in \cref{sub:Synthetic image generation}, with the labels indicating the corresponding stellar masses. 
Last three rows: GAMA-KiDS $r$-band galaxy images with upper (lower) labels indicating their redshift (stellar mass).} \label{fig:sim_vs_obs_synth_images} \end{figure*} \noindent position matching approach yields a sample of $104~993$ objects with KiDS imaging from an initial sample of $105~474$ GAMA galaxies. From GAMA's third data release \citep{Baldry2018} we use the \textsf{StellarMasses} \citep{Taylor2011} and \textsf{SpecCat} \citep{Liske2015} products to obtain the stellar mass, redshift and location (right ascension and declination) for the examined galaxies. The distribution of stellar masses and redshifts of GAMA galaxies can be seen in \cref{fig:z_vs_mass}. For this study, we selected galaxies in the mass range $8.5\leqslant\log \quantity( M_\ast/\mathrm{M}_\odot)\leqslant11$ with redshift $z<0.05$ in order to perform a direct comparison to a single snapshot from the TNG$50$ simulation. Our final GAMA-KiDS sample consisted of $1238$ objects, and is illustrated in both \cref{fig:gamakidstiles,fig:z_vs_mass}. Finally, having defined our galaxy sample based on the GAMA catalogues, KiDS cutouts for individual galaxies were created using utilities from the \textsf{astropy} library \citep{Robitaille2013,Price-Whelan2018}. Specifically, the function \texttt{match\_coordinates\_sky} was used to locate the nearest KiDS tile that contained a given GAMA galaxy. Then, the function \texttt{Cutout2D} was employed to create individual images with a fixed size of $240\times240$ pixels. Overall, $104~993$ cutouts were obtained from this matching procedure. \subsection{Simulated sample}% \label{sub:Simulated sample} We consider galaxies from the TNG50 simulation with stellar masses in the range $8.5\leqslant\log(M_\ast/\text{M}_\odot)\leqslant11$, which is consistent with our GAMA-KiDS sample, from a single simulation snapshot at $z=0.034$ (snapshot $96$) that is close to the median redshift of the galaxies in our observational sample. 
The resulting mock galaxy catalogue consists of $5561$ galaxies. \begin{figure*} \begin{center} \includegraphics[width=\textwidth]{./figures/deblending_example_v2_low_vx_noid.png} \end{center} \caption{Segmentation procedure. Leftmost panel: real KiDS $r$-band image showing multiple sources; second from left: segmentation map obtained from the previous image; second from right: regularised segmentation map for the object of interest (central galaxy); rightmost entry: regularised mask.} \label{fig:deblending} \end{figure*} \subsection{Synthetic image generation}% \label{sub:Synthetic image generation} Galaxy images were constructed from the light distributions of stellar populations, including dust effects such as scattering and attenuation. Following \citet{Rodriguez-Gomez2019}, we use different image generation pipelines based on the value of the star-forming gas fraction ($f_{\mathrm{ gas,\,sf }}$) in simulated galaxies: synthetic images for galaxies with $f_{\mathrm{ gas,\,sf }} < 0.01$ were generated using the \textsc{\textsf{galaxev}} stellar population synthesis code \citep{Bruzual2003}, while galaxies with $f_{\mathrm{ gas,\,sf }} \geqslant 0.01$ additionally model young stellar populations (age $<$ 10 Myr) with the \textsc{\textsf{mappings-iii}} libraries \citep{Groves2008} and include dust radiative transfer using the \textsc{\textsf{skirt}} code \citep{Baes2011,Camps2015}. We note that we avoid processing low-gas galaxies with \textsc{\textsf{skirt}} simply for performance reasons, and that both pipelines would produce essentially indistinguishable images for such objects (further details can be consulted in \citealt{Rodriguez-Gomez2019}). For both pipelines, the light contribution from each stellar particle was smoothed using a smoothed particle hydrodynamics spline kernel \citep{Hernquist1989,Springel2001} with an adaptive smoothing scale given, for each simulation particle, as the three-dimensional distance to the $32$nd nearest neighbour. 
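The adaptive smoothing scale described here (the three-dimensional distance of each particle to its 32nd nearest neighbour) can be sketched with a brute-force neighbour search. A production pipeline would use a spatial tree instead; the random mock positions and the smaller $k$ below are illustrative assumptions of ours.

```python
import numpy as np

# Sketch of the adaptive smoothing scale: for each particle, the 3D distance
# to its k-th nearest neighbour (k = 32 in the text; a smaller k and random
# mock particle positions are used here purely for illustration).

def knn_smoothing_lengths(pos, k):
    """pos: (N, 3) particle positions; returns, for each particle, the
    distance to its k-th nearest neighbour (brute force, fine for small N)."""
    diff = pos[:, None, :] - pos[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))   # (N, N) pairwise distances
    d_sorted = np.sort(d, axis=1)           # column 0 is the zero self-distance
    return d_sorted[:, k]

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 1.0, size=(200, 3))
h = knn_smoothing_lengths(pos, k=8)
print(h.min(), h.max())
```

By construction the smoothing length shrinks in dense regions and grows in sparse ones, which is exactly the adaptivity the image pipeline relies on.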
Furthermore, the synthetic images were created with the pixel scale of KiDS (\SI{0.2}{\arcsec\per\pixel}) and were mock-observed from the Cartesian projections \emph{xy, yz} and \emph{zx}, also setting the field of view of each object to $240\times240$ pixels, and taking cosmological effects into account (such as surface brightness dimming) assuming they are located at $z = 0.034$. This procedure yielded idealised images in four broadband filters corresponding to the $g,r,i,z$ bands, of which we exclusively used the $r$-band for our analysis. We point out that galaxies from the $xy$ projection were used in the discussion presented in \cref{sub:The morphologies of TNG50 galaxies,sub:The morphologies of intrinsic mergers}, while galaxies from all three projections were employed from \cref{sub:Random forest classification performance} onwards. The units of the synthetic images are analog-to-digital units (ADU) per second, consistent with real KiDS science images. \cref{fig:comp_images} shows composite images of randomly selected galaxies from the simulated sample, presented in order of increasing stellar mass, using the KiDS $g,r,i$ filters. Finally, realism was added to the idealised images via convolution with a point spread function (PSF) and addition of shot and uniform background noise. We convolved each image with a 2D Gaussian PSF with full width at half maximum (FWHM) equal to \ang[angle-symbol-over-decimal=true]{;;0.7}, which corresponds to the median PSF from KiDS $r$-band images. Shot noise was included by assuming an effective gain of $3\times10^{13}$ electrons per data unit, also consistent with KiDS data products, while background noise was modelled as a Gaussian random variable with uniform standard deviation $\sigma_{\mathrm{ bkg }}=2\times10^{-12}\,\mathrm{ADU\,s^{-1}}$ across each simulated image. 
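The realism step can be sketched as follows. The FWHM, gain and background level below mirror the values quoted in the text, but the implementation (FFT convolution with a Gaussian kernel, Poisson sampling in electron counts) is our own minimal illustration rather than the pipeline actually used.

```python
import numpy as np

# Sketch of the "realism" step: Gaussian PSF convolution (via FFT), Poisson
# shot noise for a given gain, and uniform Gaussian background noise.
# Parameter values mirror the text; the implementation is illustrative.

def add_realism(image, fwhm_pix, gain, sigma_bkg, seed=0):
    rng = np.random.default_rng(seed)
    n = image.shape[0]
    sigma = fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    y, x = np.mgrid[0:n, 0:n] - n // 2
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    psf /= psf.sum()                                       # flux-preserving
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                   np.fft.fft2(np.fft.ifftshift(psf))))
    electrons = np.clip(blurred, 0.0, None) * gain         # ADU/s -> electrons
    shot = rng.poisson(electrons).astype(float) / gain     # back to ADU/s
    return shot + rng.normal(0.0, sigma_bkg, image.shape)

# FWHM of 0.7 arcsec at 0.2 arcsec/pixel corresponds to 3.5 pixels.
img = np.zeros((64, 64)); img[32, 32] = 1e-12              # point source, ADU/s
obs = add_realism(img, fwhm_pix=3.5, gain=3e13, sigma_bkg=2e-12)
print(obs.shape)
```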
The first three rows from \cref{fig:sim_vs_obs_synth_images} show the same synthetic images of \cref{fig:comp_images} after adding realism; they are compared in the next three rows to randomly selected galaxies from observations. \begin{table} \caption{Galaxy deblending and segmentation parameters used by \textsc{\textsf{sep}}.} \centering \bgroup \def\arraystretch{1.2} \begin{tabular}{ccc} \toprule Parameter & Description & Value \\ \midrule \texttt{\textbf{minarea}} & Minimum galaxy area & 10 pixels \\ \texttt{\textbf{deblend\_nthresh}} & Number of deblending levels & 16 \\ \texttt{\textbf{deblend\_cont}} & Minimum deblending contrast & 0.0001 \\ \texttt{\textbf{thresh}} & Minimum detection threshold & 0.75 \\ \bottomrule \end{tabular} \egroup \label{tab:sepparams} \end{table} \subsection{Source segmentation and deblending}% \label{sub:Source segmentation} Segmentation is the process in which distinct sources within an astronomical image are labelled with different integer values, reserving the zero value for the background. The resulting array, with the same shape as the original image, is called a segmentation map or segmentation image. Proper segmentation is a crucial step for the morphological measurements presented in \cref{sub:Morphological_measurements}, since it defines the region that corresponds to the galaxy of interest while removing contaminant objects. The most challenging stage of this procedure is deblending, i.e. the separation of two or more distinct overlapping sources. Since we are interested in morphology-based merger identification on individual galaxies (instead of, for example, counting close pairs), and because it is almost impossible, based on imaging alone, to distinguish between true companions and contaminants (such as chance projections along the line of sight), deblending was applied to all examined sources.
Thus, we have created segmentation maps for both the observational and mock samples using \textsc{\textsf{sep}} \citep{Barbary2016}, a \textsf{Python} library that implements the core functionality of \textsf{SE}\textsc{\textsf{xtractor}} \citep{Bertin1996}. We have controlled source detection and deblending using the following \textsc{\textsf{sep}} input parameters: \texttt{\textbf{thresh}}, the detection threshold in standard deviations; \texttt{\textbf{minarea}}, the minimum number of pixels in an object; \texttt{\textbf{deblend\_nthresh}}, the number of deblending levels, and \texttt{\textbf{deblend\_cont}}, the minimum contrast ratio for source deblending. The values of these input parameters are given in \cref{tab:sepparams}. Based on segmentation maps, we produced a mask for each galaxy. This was accomplished by identifying the object that coincided with the galaxy of interest and labelling it as the main source; the remainder of the segments, excluding the background, constituted the mask. It is worth mentioning that the main segment and the mask were \emph{regularised} in the sense that they were smoothed by a uniform filter with a size of $10\times10$ pixels. \cref{fig:deblending} shows a schematic of this procedure. \subsection{Morphological measurements}% \label{sub:Morphological_measurements} Morphology calculations were done using \textsf{statmorph} \citep{Rodriguez-Gomez2019}, a \textsf{Python} package for computing non-parametric morphological diagnostics of galaxy images, as well as fitting 2D Sérsic profiles. In order to run the code we used the science images, their segmentation maps and their associated masks, as well as the \texttt{\textbf{gain}} factor, a scalar that converts the image units into $e^{-}\,\text{pixel}^{-1}$, using the same value as in \cref{sub:Synthetic image generation}. 
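The regularisation of the main segment and mask described above can be sketched with a uniform filter; the re-binarisation threshold of $0.5$ and the toy segmentation map are assumptions of this illustration, not necessarily the exact rule used:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regularise(segment, size=10):
    """Smooth a binary segment/mask with a size x size uniform filter and re-binarise.

    The 0.5 threshold is an illustrative choice."""
    smooth = uniform_filter(segment.astype(float), size=size)
    return smooth >= 0.5

segmap = np.zeros((40, 40), dtype=int)  # toy segmentation map (0 = background)
segmap[10:26, 10:26] = 1                # main source segment
segmap[28:38, 28:38] = 2                # a contaminant source

main = regularise(segmap == 1)          # regularised main segment
mask = regularise(segmap >= 2)          # regularised mask of contaminants
```

The smoothing suppresses single-pixel holes and ragged edges in the segment boundaries before the morphological measurements are made.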
In this work we consider the following parameters: concentration, asymmetry and smoothness (CAS; \citealp{Conselice2003}), the Gini-$M_{20}$ statistics \citep{Lotz2004,Snyder2015a,Snyder2015b}, multimode, intensity and deviation (MID; \citealp{Freeman2013}), and variations of the asymmetry parameter, such as the outer ($A_O$; \citealp{Wen2014}) and shape ($A_S$; \citealp{Pawlik2016}) asymmetries. Below we briefly describe each of them. The concentration parameter ($C$; \citealt{Bershady2000,Conselice2003}) is a measure of the quantity of light at a galaxy's centre in comparison to its outskirts and is given by $5 \log(r_{80}/r_{20})$, where $r_{20}$ and $r_{80}$ are the radii of circular apertures containing 20\% and 80\% of the galaxy's total flux, respectively. Elliptical galaxies exhibit high concentration values ($\sim4$), whereas spiral galaxies have smaller values ($\sim3$). The asymmetry index ($A$; \citealt{Abraham1996x,Conselice2000,Conselice2003}) is calculated by subtracting a galaxy image from its $ { 180 }^{ \circ } $-rotated counterpart, and measures the fraction of light contained in non-symmetric components. The equation for computing this parameter is given by \begin{equation} A = \frac{ \sum_{i,\,j}\abs{I_{ij} - I_{ij}^{180}} }{ \sum_{i,\,j} \abs{I_{ij}}} - A_{ \textsc{bkg} }, \label{eq:asymmetry} \end{equation} where $ I_{ij} $ and $ { I }_{ij}^{ 180 } $ are, respectively, the pixel flux values of the original and rotated distributions, and $ A_ \textsc{bkg} $ is the average asymmetry of the background. High asymmetry values are often used to identify possible recent interactions and galaxy mergers.
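\cref{eq:asymmetry} can be evaluated directly on a pixel array. The sketch below rotates about the array centre and leaves the background term to the caller; note that \textsf{statmorph} additionally optimises the rotation centre, which this simplified version omits:

```python
import numpy as np

def asymmetry(img, a_bkg=0.0):
    """Eq. (asymmetry): A = sum|I - I_180| / sum|I| - A_bkg (background term supplied by the caller)."""
    rotated = img[::-1, ::-1]  # 180-degree rotation about the array centre
    return np.abs(img - rotated).sum() / np.abs(img).sum() - a_bkg

rng = np.random.default_rng(1)
noise = rng.random((64, 64))
symmetric = noise + noise[::-1, ::-1]  # exactly symmetric under 180-degree rotation

off_centre = np.zeros((64, 64))
off_centre[5, 7] = 1.0                 # a single off-centre bright pixel
```

A rotationally symmetric image gives $A=0$, while a single off-centre pixel gives the maximal value $A=2$ (all flux unmatched in both the image and its rotation).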
\begin{table*} \centering \bgroup \def\arraystretch{1.1} \begin{tabular}{ccccccccccc} \toprule \multirow{2}{*}{Combination} & \multicolumn{10}{c}{Parameter} \\ & $C$ & $ A $ & $ S $ & $F\quantity(G,\,M_{20})$ & $S\quantity(G,\,M_{20})$ & $M$ & $I$ & $D$ & $A_O$ & $A_S$ \\ \midrule 1 &\textbf{Yes} & \textbf{Yes} &\textbf{Yes} &\textbf{Yes}&\textbf{Yes} &No&No&No&No&No \\ 2 &\textbf{Yes} & \textbf{Yes} &\textbf{Yes} &\textbf{Yes}&\textbf{Yes} &\textbf{Yes} &\textbf{Yes} &\textbf{Yes} &No&No \\ 3 &No & \textbf{Yes} &\textbf{Yes} &No&No &No &No &No &\textbf{Yes} &\textbf{Yes} \\ \bottomrule \end{tabular} \egroup \caption[Feature combinations considered during training.]{Feature combinations considered during hyper-parameter tuning and cross-validation.} \label{tab:comb_features} \end{table*} The smoothness parameter ($S$; \citealt{Conselice2003}) is estimated in a similar way to the asymmetry, by subtracting a galaxy distribution from a counterpart that has been smoothed by a boxcar filter of width $ \sigma $, and it indicates the fraction of light that is contained in \emph{clumpy} regions (i.e. high-frequency disturbances). Following \citet{Lotz2004}, we set the value of $\sigma$ to $25\%$ of the Petrosian radius. The Gini index ($G$; \citealt{Abraham2003,Lotz2004}) quantifies the degree of inequality of the brightness distribution in a set of pixels. The Gini coefficient is equal to $1$ when all the galaxy light is concentrated in one pixel; conversely, it is equal to zero when the light distribution is homogeneous across all pixels. Early-type galaxies, as well as galaxies with one or more bright nuclei, exhibit high Gini values. The $ M_{\mathrm{ 20 }} $ coefficient is the normalised second moment of a galaxy's brightest regions, which contain $20\%$ of the total flux. Mergers and star-forming disk galaxies tend to have high $M_{20}$ values. The bulge parameter ($F(G,\,M_{20})$; \citealt{Snyder2015b}) is a linear combination of $G$ and $M_{\mathrm{ 20 }}$.
In the $G$-$M_{\mathrm{ 20 }}$ space, early-type, late-type and merging galaxies are found by the position they occupy relative to two intersecting lines \citep{Lotz2008}. The bulge parameter, $F(G,\,M_{20})$, is then defined as the position along the line with origin at the intersection $ \quantity( G_0=0.565,M_{20,\,0}=-1.679 ) $ that is perpendicular to the line separating early-type and late-type galaxies, scaled by a factor of $5$, \begin{equation} F \quantity( G,\,M_{20} ) = -0.693M_{20}+4.95G-3.96. \end{equation} The merger parameter ($S(G,\,M_{20})$; \citealt{Snyder2015a}) has a similar definition to $F \quantity( G,\,M_{20} )$. It is given as the position along a line with origin at $ \quantity( G_0,M_{20,\,0} ) $ that is perpendicular to the line separating mergers from non-mergers, \begin{equation} S \quantity( G,\,M_{20} ) = 0.139M_{20}+0.990G-0.327. \end{equation} The multimode ($M$; \citealt{Freeman2013}) parameter is the ratio of the pixel areas of the two brightest regions of a galaxy, which are identified with a threshold method: regions brighter than a threshold value are selected, and the process is repeated for different thresholds until the area ratio is maximised. Double-nuclei systems tend to have values close to one. The intensity ($I$; \citealt{Freeman2013}) parameter is the flux ratio between the two brightest subregions of a galaxy. For its computation the watershed algorithm is used, i.e. the galaxy image is divided into groups such that each subregion consists of all pixels whose maximum gradient paths lead to the same local maximum. Clumpy systems often exhibit high intensity values. The deviation ($D$; \citealt{Freeman2013}) parameter is given as the normalised distance between the image centroid and the centre of the brightest region found during the computation of the $I$-statistic, and quantifies the offset between the bright regions of a galaxy and its centroid.
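The Gini coefficient and the two $G$--$M_{20}$ statistics above can be written compactly as follows (a sketch; on real images the Gini computation is restricted to the galaxy's segmentation region). A useful sanity check is that both $F$ and $S$ vanish, up to the rounding of the published coefficients, at the intersection point $(G_0,\,M_{20,\,0})$, since both lines pass through it:

```python
import numpy as np

def gini(pixels):
    """Gini coefficient of a set of pixel fluxes (Lotz et al. 2004 convention)."""
    x = np.sort(np.abs(np.asarray(pixels, dtype=float).ravel()))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (x.mean() * n * (n - 1))

def bulge_stat(g, m20):
    """Bulge statistic F(G, M20) of Snyder et al. (2015b)."""
    return -0.693 * m20 + 4.95 * g - 3.96

def merger_stat(g, m20):
    """Merger statistic S(G, M20) of Snyder et al. (2015a)."""
    return 0.139 * m20 + 0.990 * g - 0.327

one_hot = np.zeros(100)
one_hot[0] = 5.0            # all flux in a single pixel -> Gini = 1
uniform = np.full(100, 3.0)  # perfectly homogeneous light -> Gini = 0
```

The two limiting cases of the Gini coefficient stated in the text (all flux in one pixel versus a homogeneous distribution) follow directly from this definition.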
The outer asymmetry ($A_O$; \citealt{Wen2014}) parameter is defined in the same way as the conventional asymmetry (see \cref{eq:asymmetry}), with the exception that pixels from the inner elliptical aperture that contains $50\%$ of the galaxy's light are not included in the computation. Pixels outside this area build up the outer half-flux region, for which $A_O$ is estimated. Lastly, the shape asymmetry ($A_S$; \citealt{Pawlik2016}) parameter is also calculated in the same way as the standard asymmetry, with the difference that the measurement is done over a binary segmentation map rather than the galaxy brightness distribution. Finally, \texttt{\textbf{statmorph}} provides a \texttt{\textbf{flag}} quality parameter to distinguish between reliable and unsuccessful measurements, which are labelled with \texttt{\textbf{flag} == 0} and \texttt{\textbf{flag} == 1}, respectively. We find that the overall fraction of flagged galaxies is low, representing ${\lesssim}10\%$ for both our observational and simulated samples. From this point onwards we only consider galaxies with reliable morphological measurements, also imposing, for each object, a minimum signal-to-noise ratio $\expval{S/N}\geqslant2.5$. \subsection{Merger identification}% \label{sub:Merger identification} Galaxy mergers in the TNG50 simulation can be identified using merger trees created using the \textsc{\textsf{sublink}} code \citep{Rodriguez-Gomez2015}. The idea behind the merger trees is to associate a given subhalo with its progenitors and descendants from adjacent snapshots, in such a way that a merger event occurs when a subhalo has two or more different progenitors. From the merging history catalogues of the TNG50 simulation, we have determined which galaxies from our synthetic sample have experienced a merger within a given period. 
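The merger-tree-based labelling can be sketched as follows, assuming a hypothetical record format of (snapshot, mass ratio) pairs per galaxy; the snapshot window (94--99) and the mass-ratio cut ($\mu > 1/10$) correspond to the selection adopted in this work:

```python
REFERENCE_SNAPSHOT = 96            # z = 0.034
WINDOW_SNAPSHOTS = range(94, 100)  # mergers recorded at snapshots 94..99 (+/- 0.5 Gyr)
MIN_MASS_RATIO = 1.0 / 10.0        # combined major + minor merger sample

def is_intrinsic_merger(events):
    """events: list of (snapshot, mu) merger records along one galaxy's tree branch.

    A merger recorded at snapshot k actually occurred between snapshots k-1 and k."""
    return any(snap in WINDOW_SNAPSHOTS and mu > MIN_MASS_RATIO
               for snap, mu in events)

# Toy merger histories (snapshot, stellar mass ratio mu = M2/M1):
galaxies = {
    "a": [(97, 0.30)],             # major merger inside the window -> merger
    "b": [(95, 0.02)],             # mass ratio below 1/10 -> non-merger
    "c": [(80, 0.50)],             # outside the time window -> non-merger
    "d": [(99, 0.15), (60, 0.9)],  # minor merger inside the window -> merger
}
labels = {name: int(is_intrinsic_merger(ev)) for name, ev in galaxies.items()}
```

These binary labels play the role of the ground truth used to train the classifier.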
The merger mass ratio is defined as $\mu = M_2 / M_1$, where $M_1$ and $M_2$ are the stellar masses of the primary and secondary progenitors, respectively, measured at the moment when the secondary progenitor reaches its maximum stellar mass \citep{Rodriguez-Gomez2015}. Traditionally, major and minor mergers are defined as those with $\mu > 1/4$ and $1/10 < \mu < 1/4$, respectively. Throughout this paper we consider a combined sample of major + minor mergers, with mass ratios $\mu > 1/10$, for all our computations. The main reason for this choice is to have a larger training sample for our classifier, but it also has the advantage of potentially detecting merger signatures that are more subtle or long-lasting than those produced by major mergers. As discussed in \citet{Lotz2011}, both the Gini--$M_{20}$ statistics and the asymmetry parameter are sensitive to minor mergers as well as major ones. In this context, our intrinsic merger sample is composed of mergers that occurred within ${\pm}0.5$ Gyr relative to the reference redshift ($z=0.034$; snapshot 96). For definiteness, we note that this time window includes mergers that were recorded in the snapshots 94 to 99 (since for a merger event recorded at snapshot $k$, the merger must have actually taken place at some time between snapshots $k-1$ and $k$), and represents an observability time-scale of $\approx 1$ Gyr. From a sample of 15~463 galaxy images with successful morphological measurements, we identified 833 major + minor mergers within the specified time window, representing an overall merger fraction of about 5\%. \subsection{Random forest classification}% \label{sub:Random forest classification} The random forest (RF) algorithm \citep{Breiman2001} is an ensemble method based on independent decision trees, each of them learning from subsamples of the input data. Predictions on unseen data are then given as a majority vote among all uncorrelated models given by the trees in the forest. 
The goal of the algorithm is to learn a rule from the galaxy inputs $\vb{x}$ (morphological measurements) to the labels $y$ (merger statistics), and then generalise over novel inputs. In this study, feature space is defined by the CAS, Gini-$M_{20}$, MID, $A_O$ and $A_S$ parameters, representing a total of 10 features, with inputs given as values of these attributes for the galaxies. In other words, inputs can be seen as vectors $\vb{x}_i$ having different morphological values in each entry. Likewise, the target values of the algorithm are given by the merger label $y_i$ of each galaxy, with $y_i=1$ if the given input is a true intrinsic merger (as defined in \cref{sub:Merger identification}), and $y_i=0$ otherwise. The \textsf{scikit-learn} module \citep{Pedregosa2011} was used to construct random forests. The library's internal implementation of the classifier is based on the algorithm of \citet{Breiman2001}, which incorporates bootstrapping of the training set and randomised feature selection. Utilities from this and the \textsf{imbalanced-learn} library \citep{Lemaitre2017} were also used. The receiver operating characteristic (ROC) curve was considered to assess the performance of each model. In this sense, positive predictive value (PPV, also known as purity or precision) is the ratio between the true positives (TP; true mergers selected by the classifier) and the sum of all objects selected, which are the false positives (FP; non-mergers selected by the classification) and true positives, that is, \( \text{PPV} = \text{TP}/(\text{TP}+\text{FP}). \) Similarly, true positive rate (TPR, also known as completeness or recall) is the ratio between TP and the total number of intrinsic mergers, which is the sum of true positives and false negatives (FN; true mergers rejected by the classifier), i.e. 
\( \text{TPR} = \text{TP}/(\text{TP}+\text{FN}). \) The ROC curve is a plot of the TPR against the false positive rate (FPR) at various threshold values, where the latter is computed as the ratio of FP and the total number of intrinsic non-mergers, which in turn are given as the sum of FP and true negatives (TN; non-mergers rejected by the algorithm): \( \text{FPR} = \text{FP}/(\text{FP}+\text{TN}) \). The ROC curve of a random classifier (with no predictive power) is a diagonal line from the origin to the point $(1, 1)$, whereas a perfect classifier is described by two lines, the first going from the origin to $(0, 1)$ and the second from $(0,1)$ to $(1, 1)$, so that it has $\text{TPR} = 1$ and $\text{FPR} = 0$. Our full learning dataset consists of 15~463 inputs corresponding to successful morphological measurements performed on synthetic galaxy images in three orientations along the axes of the simulation volume (each yielding different parameter values), along with their corresponding merger labels (ground truth). We split our full dataset into training and test sets, assigning $70\%$ of the inputs to the first set and the remaining $30\%$ to the second. This division was done in a stratified manner, which keeps the original proportion of the distinct classes (merger or non-merger) in both samples. We point out that in the full dataset there are ${\sim}18$ non-mergers for each merger. Thus, class imbalance was offset by randomly under-sampling (RUS) the majority class (non-mergers) to bring it to the same size as the merger set. This method is implemented in the \textsf{imbalanced-learn} library, and was applied before fitting the random forest. We also tried over-sampling the minority class (mergers) using the Synthetic Minority Over-sampling Technique (SMOTE; \citealt{Chawla2002}), which resulted in approximately the same performance. Therefore, from this point onwards we will only show results obtained with the RUS technique.
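The purity, completeness and false positive rate defined above follow directly from the confusion counts. Evaluating them with the test-set counts reported in \cref{sub:Random forest classification performance} reproduces the quoted ${\approx}8.4\%$ purity and ${\approx}72\%$ completeness:

```python
def classification_metrics(tp, fp, fn, tn):
    """Purity (PPV), completeness (TPR) and false positive rate from confusion counts."""
    ppv = tp / (tp + fp)  # purity / precision
    tpr = tp / (tp + fn)  # completeness / recall
    fpr = fp / (fp + tn)  # false positive rate
    return ppv, tpr, fpr

# Confusion counts for the test set, as reported in the classification-performance section
ppv, tpr, fpr = classification_metrics(tp=179, fp=1964, fn=71, tn=2425)
```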
Similarly, the tuning of hyper-parameters (i.e. parameters used to control the training process, as opposed to those derived from it) was done by means of cross-validation on the training set, using a stratified 5-fold scheme. This entails randomly dividing the training set into five folds, retaining the proportion of mergers and non-mergers, and repeatedly training the classifier on a combination of four of these samples while validating (testing) on the remaining one. Cycling over all splits, the random forest classifier is trained and validated on five different subsets of the original training set. Simultaneously, the hyper-parameters of each forest can be optimised in a grid-search fashion. In this step the main optimised parameters were the number of trees in the forest, the depth of each tree, the maximum number of leaf (terminal) nodes, and the balancing of the sample. Accordingly, we used the \texttt{\textbf{RandomForestClassifier}} class from {\textsf{scikit-learn}} and tuned, respectively, its \texttt{\textbf{n\_estimators}}, \texttt{\textbf{max\_depth}}, \texttt{\textbf{max\_leaf\_nodes}} and \texttt{\textbf{class\_weight}} parameters, while keeping the rest of the hyper-parameters at their default values. Furthermore, during training we explored several combinations and sub-samples of all morphological attributes in order to test how well they would perform on their own. \cref{tab:comb_features} shows all such combinations considered in the RF models. Lastly, the entire procedure was carried out with the \texttt{\textbf{pipeline}} and \texttt{\textbf{GridSearchCV}} functions from the \textsf{scikit-learn} and \textsf{imbalanced-learn} libraries, which allow for cross-validation and exhaustive hyper-parameter tuning at the same time. The end result of the process is a trained model and a combination of hyper-parameters that give, according to a particular metric, the ``best'' classification of the data.
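A minimal version of this tuning step, with an illustrative grid and synthetic stand-in data (not the actual training set or search ranges used in this work), could look like:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))  # stand-in for the 10 morphological features
# Toy "merger" labels correlated with the first feature
y = (X[:, 0] + 0.3 * rng.normal(size=300) > 1.0).astype(int)

# Illustrative grid over the four tuned hyper-parameters
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [3, None],
    "max_leaf_nodes": [None, 20],
    "class_weight": [None, "balanced"],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="roc_auc",
)
search.fit(X, y)
best = search.best_params_  # hyper-parameters of the best cross-validated model
```

In the actual pipeline the under-sampling step is chained before the forest via the \textsf{imbalanced-learn} pipeline utilities, so that resampling is applied only to the training folds.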
Additionally, the classifier provides, for each galaxy considered, a probability score that is used to label it as a merger or non-merger. The threshold applied to this score can be varied in order to reach a compromise between successful and unsuccessful classifications. This is usually done by maximising certain metrics, such as the $F_1$-score or the Matthews correlation coefficient\footnote{The $F_1$-score is defined as the harmonic mean of precision and recall, taking values between 0 (worst) and 1 (best). The Matthews correlation coefficient (MCC) correlates the ground truth with predictions in a binary classification and is given by \[ \text{MCC} = \frac{\text{TP}\times \text{TN}-\text{FP}\times\text{FN}}{\sqrt{(\text{TP}+\text{FP})(\text{TP}+\text{FN})(\text{TN}+\text{FP})(\text{TN}+\text{FN})}}, \] being equal to $+1$ for a perfect prediction, to $0$ for random classifications, and to $-1$ for wrong predictions.}, or by computing the balance point, namely the point for which $\mathrm{TPR = 1-FPR}$. In this work we use the latter approach to set the default probability threshold; we compared this value to those computed with the $F_1$-score and the Matthews coefficient, finding similar results, which in turn means that there are no significant differences in the performance metrics that we present in \cref{sub:Random forest classification performance}. \begin{figure*} \begin{center} \includegraphics[width=0.98\textwidth]{./figures/pairplots_tng50_vs_gama-kids_v1.2x.pdf-1.png} \end{center} \caption{Pairwise plots of $r$-band morphological parameters from the observational GAMA-KiDS (blue) and simulated TNG50 (red) samples, with univariate distributions shown on the diagonal. The bi-dimensional kernel density estimates are shown with contours at $\{0.1, 0.2, 0.5, 0.8, 0.95\}$.
This plot indicates that there is good overall agreement between the morphologies of these samples.} \label{fig:pairs_obs_Sim_comb} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.88\textwidth]{./figures/medians_tng50_vs_gama-kids_v1.2x.pdf-1.png} \end{center} \caption{Median trends as a function of stellar mass for several $r$-band morphological parameters. The solid red and blue lines indicate the simulated (\textsc{\textsf{skirt}} pipeline) and observational results, respectively; the dashed line corresponds to synthetic images generated without the effects of a dust distribution (\textsc{\textsf{galaxev}} pipeline). This figure again shows that there is good agreement between theory and observations, with the median values (at fixed stellar mass) of all morphological parameters in TNG50 lying within 1$\sigma$ of the observational trends; it also shows that the simulated galaxies are more concentrated and asymmetric than their observational counterparts.} \label{fig:medians} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.98\textwidth]{./figures/pairplots_merg_vs_non-merg_v1.2x.pdf-1.png} \end{center} \caption{Pairwise distributions of $r$-band morphological parameters for the non-merging population (black) against the distributions of mergers (green), with contours located at $\{0.05,0.1,0.2,0.5,0.8,0.95\}$. This figure demonstrates that simulated merging and non-merging systems exhibit a high degree of overlap in their morphologies, occupying similar regions in parameter space; it also indicates that the distributions of the merging population tend to extend to higher values, an effect most noticeable for the asymmetry parameter.} \label{fig:pairs_merg_nonmerg} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.88\textwidth]{./figures/medians_merg_vs_non-merg_v1.2x.pdf-1.png} \end{center} \caption{Median trends as a function of stellar mass for several $r$-band morphological parameters.
The solid black and green lines indicate the low-mass non-merging and merging TNG50 galaxy populations, respectively. Although merging systems tend to be more asymmetric at all stellar masses, this plot shows that the morphologies of these two samples are highly comparable, particularly at the low-mass end.} \label{fig:medians_merg_nonmerg} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=0.97\textwidth]{./figures/roc_feature_importance_v1.2x.pdf-1.png} \end{center} \caption{Top row: ROC curves for random forest models trained on the full dataset using the combinations listed in \cref{tab:comb_features}; the asymmetry and merger statistic lines indicate merger selections using only one of these parameters. The hexagons, triangles and circles indicate the default threshold values for the classifier (balance point), asymmetry parameter ($0.25$) and merger statistic ($0.0$) selections. Bottom row: feature importance for each model corresponding to the random forest ROC curves above. The vertical axis indicates the name of the corresponding attributes, while the horizontal axis denotes the percentage, i.e. each parameter's contribution to the classification decisions. In all cases the asymmetry or outer asymmetry parameters had the highest weight on decisions, with the merger statistic, smoothness or asymmetry parameter (combination 3) ranking second in their respective combination.} \label{fig:roc_fimp} \end{figure*} \section{Results}% \label{sec:results} \subsection{The morphologies of TNG50 and GAMA-KiDS galaxies}% \label{sub:The morphologies of TNG50 galaxies} \cref{fig:pairs_obs_Sim_comb} shows pairwise plots of various morphological parameters measured by \textsf{statmorph} for our simulated (TNG50, red shaded regions) and observational (KiDS, blue contours) galaxy samples. In the former case, we only show measurements for a single projection (onto the $xy$-plane).
The plots in the lower triangle of \cref{fig:pairs_obs_Sim_comb} show the joint distributions of the various morphological parameters, while the plots on the diagonal show their univariate distributions. As can be seen, there is reasonable agreement between the morphologies of simulated galaxies and those of the real observational sample. However, \cref{fig:pairs_obs_Sim_comb} also reveals some differences between the two samples. For instance, the concentrations and Gini coefficients of TNG50 galaxies peak at slightly higher values than their observational counterparts, while the distribution of the $M_{20}$ parameter reaches a maximum at a slightly lower value. These trends indicate that TNG50 galaxies tend to be slightly more concentrated objects than real galaxies of similar stellar mass. On the other hand, the asymmetry distribution reaches a peak very close to zero for both samples, but displays a tail toward higher values for TNG50 galaxies, indicating that these objects tend to be slightly more asymmetric than their observational counterparts. \cref{fig:medians} shows median trends as a function of stellar mass for all morphological parameters considered. The red solid line corresponds to galaxies modelled following our full radiative transfer pipeline, while the dashed one was obtained using simpler models without including the effects of a dust distribution; similarly, the blue solid line indicates the median trend for our GAMA-KiDS sample. The blue and red shaded regions denote the corresponding 16th to 84th percentile range, at a fixed stellar mass, for the observational and simulated (dust effects included) samples, respectively. These figures confirm the overall morphological agreement between observations and simulations: all median trends from TNG50 lie within the $1\sigma$ scatter of the observational measurements. 
However, closer inspection of \cref{fig:medians} shows again that TNG50 galaxies tend to be slightly more concentrated and asymmetric than their observational counterparts. Finally, \cref{fig:medians} corroborates that simulated galaxies modelled with dust effects have morphologies better aligned with observations, since dust attenuation tends to reduce the brightness of the central regions, where higher concentrations of gas and dust are typically encountered in the simulation. This effect, however, is not enough to bring the concentrations of low-mass galaxies into full agreement with observations. \subsection{The morphologies of true mergers and non-mergers}% \label{sub:The morphologies of intrinsic mergers} \cref{fig:pairs_merg_nonmerg} shows morphology distributions for the merging sample (green shaded regions) defined in \cref{sub:Merger identification} as well as for the non-merging TNG50 population (black contours). As can be seen, the morphologies of these two groups do not differ significantly from each other, i.e. they occupy similar regions in parameter space, as previously pointed out for the case of galaxies at higher redshifts \citep{Snyder2019}. The morphological similarity between mergers and non-mergers can also be seen in \cref{fig:medians_merg_nonmerg}, which shows median trends (solid lines) and $1\sigma$ scatter regions (shaded zones) for the relevant morphological parameters as a function of stellar mass. Nevertheless, the asymmetry and outer asymmetry parameters reveal that, although there is significant morphological overlap between mergers and non-mergers, the former are more asymmetric than the latter. In the next sections, we exploit these differences in order to train an image-based galaxy merger classifier. 
\subsection{Classifying mergers with random forests}% \label{sub:Random forest classification performance} \subsubsection{Performance and feature importance} In this section we present results on the merger classification of the mock catalogue, as detailed in \cref{sub:Random forest classification}. The hyper-parameter tuning and cross-validation procedures performed on the training sets result in the best trained models for each of the feature combinations listed in \cref{tab:comb_features}. Each of these models was then applied to the corresponding test set, which had ${\approx}4389$ non-merging objects and ${\approx}250$ merging galaxies, with stellar masses $8.5\leqslant\log \quantity( M_\ast / \mathrm{M}_\odot)\leqslant11$. We found that all models yield similar classifications independently of the feature combination used: ${\approx}179$ and $2425$ objects were correctly classified as mergers and non-mergers, respectively; $1964$ non-merging galaxies were misclassified as mergers, and $71$ mergers were misclassified as non-mergers. These findings translate, for the merger class, into an average purity and completeness of ${\approx}8.4\%$ and ${\approx}72\%$, respectively.\footnote{A random classifier (without predictive power) would have a purity of 5.4\%, equal to the overall merger fraction.} The upper panels of \cref{fig:roc_fimp} show the ROC curves (see \cref{sub:Random forest classification}) for the trained models using combinations 1--3 from \cref{tab:comb_features}. For reference, we also show the ROC curves that would be obtained by using only the asymmetry parameter or the Gini--$M_{20}$ merger statistic to select merging galaxies. A perfect classifier lies at the upper-left corner and has $\text{TPR}=1$ and $\text{FPR}=0$, while a model whose curve runs diagonally from $(0,0)$ to $(1,1)$ has no predictive power.
Our models have loci between these two regimes, with an area under the ROC curve of ${\approx}0.7$, indicating that they are moderately capable classifiers. The lower panels of \cref{fig:roc_fimp} show, also for the model combinations 1--3, the input features and their relative importance to the classification decisions. In this context, feature importance is defined as the mean decrease in impurity achieved by each variable at all relevant nodes in the random forest. In all cases the asymmetry or outer asymmetry parameters had the highest weight (${\sim}30$--$50\%$) on the classifier decisions, followed in second place by the smoothness parameter ($15\%$) or the merger statistic ($12\%$). None of the other parameters had a feature importance above 15\%. \subsubsection{Default random forest model}% \label{sub:Default_rf_model} The models presented above have similar performance regarding purity and completeness values. These results are consistent with the work by \citet{Snyder2019}, who designed random forest models for high-mass galaxies at different redshifts from the original Illustris simulation. For their lowest redshift sample at $z=0.5$, they obtained purity values of up to $10\%$, with completeness at roughly $70\%$. Similarly, the metrics produced by our models are consistent with those found by \citet{Bignone2016} for the Illustris simulation. Specifically, they studied the morphologies of a galaxy sample at $z=0$ with $M_\ast > 10^{10}\,\text{M}_\odot$ and subsequently used the Gini--$M_{20}$ criterion from \citet{Lotz2004} to identify galaxy mergers. Their results, as a function of the time $t$ elapsed since the last merger, show that for $t\sim1$ Gyr the purity metric is around $5\%$ for cases with $\mu > 1/10$ and equal to $9\%$ for $\mu > 1/4$. Throughout the rest of this paper we set the RUS+RF model trained with combination 1 in \cref{tab:comb_features} as our default random forest model.
This decision is based mainly on the robustness of the classification performance across different sets of features, but also on the fact that the morphological parameters included in that combination are widely used in the literature (e.g. \citealt{Lotz2011}, and references therein) to identify both major and minor mergers. In \cref{sub:The merger incidence of GAMA-KiDS observations} we use this model to estimate the galaxy merger fraction in the TNG50 simulation, which we compare to the intrinsic merger fraction (i.e. computed with the merger trees), as well as to estimate the galaxy merger fraction in the real Universe by applying our classifier to GAMA-KiDS observations. \begin{figure*} \begin{center} \includegraphics[width=0.49\textwidth]{./figures/random_tp_v1.2.2xxx.png} \includegraphics[width=0.49\textwidth]{./figures/random_tn_v1.2.2xxx.png} \includegraphics[width=0.49\textwidth]{./figures/random_fp_v1.2.2xxx.png} \includegraphics[width=0.49\textwidth]{./figures/random_fn_v1.2.2xxx.png} \end{center} \caption{Classification result examples made by the random forest model. The upper-left block of figures shows true positives (true mergers selected) while the upper-right block shows true negatives (non-mergers rejected). These objects typically exhibit the expected attributes: true positives (mergers) are perturbed and often have companions, whereas true negatives (non-mergers) tend to be isolated and unperturbed. Similarly, the lower-left block of figures shows false positives (non-mergers selected) while the lower-right block shows false negatives (true mergers rejected). False positives might arise from mergers taking place outside the detection window or from non-mergers with morphologies similar to those of true mergers; false negatives could emerge from minor mergers failing to trigger morphological disturbances at the time of detection. 
In all cases the upper text labels indicate the probability value assigned to that object by the random forest, as well as its corresponding asymmetry value.} \label{fig:tp_tn_fp_fn} \end{figure*} \subsubsection{Classification result examples}% \label{sub:Classification result examples} In \cref{fig:tp_tn_fp_fn} we present examples of simulated galaxies that have been classified by our default random forest model and were categorised as true positives and true negatives as well as false positives and false negatives. Note that these objects are sorted in ascending order according to stellar mass. As can be seen in the upper row of \cref{fig:tp_tn_fp_fn}, most true mergers exhibit clear signs of interaction, such as asymmetric structures and neighbouring or overlapping companions. Likewise, most non-mergers do not have significantly perturbed morphologies and look relatively isolated. This is particularly noticeable for low-mass objects. Similarly, the lower row in \cref{fig:tp_tn_fp_fn} shows examples of the failure modes of the classifier. We found that some false positives had relatively asymmetric structures but were not labelled as mergers in our merger tree-based selection (see \cref{sub:Merger identification}). Thus, such cases might arise from merging events taking place outside our detection window or from isolated galaxies that are morphologically similar to mergers (see \cref{fig:pairs_merg_nonmerg,fig:medians_merg_nonmerg}). Conversely, some false negatives appear unperturbed, suggesting that they are probably the result of minor mergers that did not trigger perceptible morphological signatures. These cases are reminiscent of the findings by \citet{2021MNRAS.500.4937M}, who found that mergers induce limited morphological changes in dwarf galaxies. Thus, the lower part of \cref{fig:tp_tn_fp_fn} illustrates the most common challenges faced by the models. 
On the one hand, merging events are rare, which reduces the number of class examples and statistics for training; on the other hand, there is a significant degree of similarity between mergers and non-mergers, which explains most of the false positives. \begin{figure*} \begin{center} \subfloat[\label{fig:merger_frac_rf}]{\includegraphics[width=0.45\textwidth]{./figures/merger_fraction_int_rus+rf_run_new_v1.2.pdf-1.png}} \subfloat[\label{fig:merger_frac_asym}]{\includegraphics[width=0.45\textwidth]{./figures/merger_fraction_int_asymetry_run_new_v1.2.pdf-1.png}} \end{center} \caption{(a) Galaxy merger fraction as a function of stellar mass as predicted by our random forest classifier for our simulated and observational galaxy samples, as indicated by the legend. (b) Mass-dependent merger fraction for the observational and simulated galaxy samples as predicted by an asymmetry-based criterion. In both panels, the solid blue line represents the intrinsic merger fraction measured in TNG50 using the merger trees. The error bars represent Poisson uncertainty from the number of mergers in each mass bin. This figure shows that the merger fraction increases steadily as a function of stellar mass for all the approaches considered.} \label{fig:merger_frac_ab} \end{figure*} \subsection{The merger incidence of GAMA-KiDS observations}% \label{sub:The merger incidence of GAMA-KiDS observations} We used our default random forest model to estimate the merger fraction in our observational $(z<0.05; 8.5\leqslant\log \quantity( M_\ast /\mathrm{M}_\odot)\leqslant11)$ galaxy sample. The input entries for this model consist of the morphological measurements carried out on the GAMA-KiDS sample (see \cref{sub:Morphological_measurements}), corresponding to combination one in \cref{tab:comb_features}. 
Following \citet{Snyder2019}, we estimate the merger fraction $f_{\mathrm{ merger }}$ as \begin{equation} f_{\mathrm{ merger }}=\frac{ N_{\mathrm{ RF }} }{ N_{\mathrm{T}} }\frac{\mathrm{ PPV } }{ \mathrm{ TPR } }\expval{M/N}, \label{eq:merger_frac_rf} \end{equation} where $N_{\mathrm{ RF }}$ and $N_{\mathrm{T}}$ are the number of galaxies selected by the forest and the total number of galaxies, respectively; the factor $\expval{M/N}$ is the average total number of simulated mergers divided by the number of galaxies with at least one such merger, which in our case is approximately equal to one; the term $\mathrm{ PPV }/\mathrm{ TPR }$ is a corrective factor that takes into account the completeness and purity of the default model, so that when used on novel data the best-guess merger fraction is obtained \citep{Snyder2019}. For comparison, we have determined the intrinsic merger fraction of simulated galaxies by estimating, for each mass bin, the quantity $N_{\mathrm{ merger }}/N_{\mathrm{T}}$, where $N_{\mathrm{merger}}$ is the number of intrinsic mergers, obtained from the merger trees as described in \cref{sub:Merger identification}. \cref{fig:merger_frac_rf} shows the estimated merger fraction, as a function of stellar mass, for the simulated and observational samples. As can be seen, both follow a qualitatively similar trend to the intrinsic merger fraction, but show differences within a factor of ${\sim}2$, as discussed below. The error bars in the estimated merger fractions are given by Poisson statistics from the number of mergers in each bin. 
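The estimator of \cref{eq:merger_frac_rf} and its Poisson uncertainty can be sketched as follows; the input numbers here are illustrative placeholders, not values from the paper:

```python
import math

def merger_fraction(n_rf, n_total, ppv, tpr, avg_m_over_n=1.0):
    """Best-guess merger fraction, f = (N_RF / N_T) * (PPV / TPR) * <M/N>,
    following the corrective factor of Snyder et al. (2019)."""
    return (n_rf / n_total) * (ppv / tpr) * avg_m_over_n

def poisson_error(n_mergers, n_total):
    """Poisson uncertainty on the fraction, from the merger counts in a mass bin."""
    return math.sqrt(n_mergers) / n_total

# Illustrative bin: 300 of 2000 galaxies flagged by the forest,
# corrected with PPV ~ 0.084 and TPR ~ 0.72 measured on the test set.
f = merger_fraction(300, 2000, ppv=0.084, tpr=0.72)
err = poisson_error(300, 2000)
```

Note that the correction $\mathrm{PPV}/\mathrm{TPR}$ deflates the raw flagged fraction when the classifier's purity is low relative to its completeness, as is the case for our models.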
The fact that the random forest classifier predicts a lower merger fraction for observed galaxies than for simulated ones is perhaps not surprising, given that the most important feature for the classifications is the asymmetry parameter (\cref{sub:Random forest classification performance}), and that observed galaxies have somewhat lower asymmetry values than simulated ones (\cref{sub:The morphologies of TNG50 galaxies}). A more robust result is that the galaxy merger fraction is an increasing function of stellar mass for all the cases considered. These findings are in contrast with the major merger fraction estimate by \citet{Casteels2014}, who found a decreasing merger fraction within the stellar mass range $9.0\leq\log \quantity( M_\ast / \mathrm{M}_\odot)\leq9.5$, and a roughly constant trend above that mass range, whereas all of our estimates indicate that the merger fraction increases steadily with stellar mass. This qualitative difference is puzzling, considering that the estimate by \citet{Casteels2014} was performed on a similar observational galaxy sample, using the asymmetry parameter to estimate the fraction of asymmetric galaxies and subsequently the major merger fraction as a function of stellar mass. For comparison, we have applied an asymmetry-based criterion to our samples to identify highly asymmetric galaxies. These objects were selected as those for which $A>0.25$. We then computed the PPV and TPR for these predictions, and we applied a modified version of \cref{eq:merger_frac_rf} to estimate the merger fraction derived from the asymmetry criterion. We note that $\text{PPV}\approx8.5\%$ and $\text{TPR}\approx7.5\%$ for this classification, which represents a purity of the same order as that of our RF models but with a completeness that is considerably smaller. 
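The asymmetry-based selection used for this comparison amounts to a simple threshold cut; a schematic version, applied to a hypothetical toy sample rather than our actual catalogues:

```python
A_CUT = 0.25  # asymmetry threshold for merger candidates

def select_by_asymmetry(asymmetries, cut=A_CUT):
    """Flag galaxies whose asymmetry exceeds the cut as merger candidates."""
    return [a > cut for a in asymmetries]

def ppv_tpr(predicted, truth):
    """Purity (PPV) and completeness (TPR) of a boolean classification."""
    tp = sum(1 for p, t in zip(predicted, truth) if p and t)
    fp = sum(1 for p, t in zip(predicted, truth) if p and not t)
    fn = sum(1 for p, t in zip(predicted, truth) if t and not p)
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical (asymmetry, is_true_merger) pairs:
sample = [(0.31, True), (0.05, False), (0.28, False), (0.12, True), (0.40, True)]
pred = select_by_asymmetry([a for a, _ in sample])
truth = [t for _, t in sample]
ppv, tpr = ppv_tpr(pred, truth)  # both 2/3 for this toy sample
```

The resulting PPV and TPR then enter the same corrective factor used for the random forest predictions.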
\cref{fig:merger_frac_asym} shows a comparison between the intrinsic merger fraction from the simulation and the asymmetry-based merger fraction, $f_{\mathrm{m},\,A}$, for both simulation and observations. As can be seen, such estimates qualitatively follow the trend of the intrinsic merger fraction: both are increasing functions of stellar mass. However, $f_{\mathrm{m},\,A}(\mathrm{obs})$ is smaller than the other two fractions by a factor of ${\sim}2$, which again reflects the fact that the asymmetry parameter tends to be lower for our observational sample than for our simulated one. \section{Discussion}% \label{sec:Discussion} Using the state-of-the-art TNG50 cosmological simulation and KiDS observations, we have studied the optical morphologies of galaxies at low redshift ($z < 0.05$) over a wide range of stellar masses ($8.5 < \log_{10}(M_\ast/\mathrm{M}_\odot) < 11$). The goal of this analysis has been threefold: (i) to carry out an `apples-to-apples' comparison between the optical morphologies of TNG50 and KiDS galaxies, allowing us to identify possible weaknesses in the IllustrisTNG galaxy formation model at unprecedentedly high mass resolution (16 times better than TNG100); (ii) to combine morphological measurements of the simulated galaxies with information from the merger trees in order to train and evaluate the performance of an algorithm for identifying merging galaxies based on morphological diagnostics alone; and (iii) to apply this simulation-trained algorithm to observations in order to estimate the galaxy merger fraction in the real Universe. The first step for carrying out this work was to prepare the observational data set, shown in \cref{fig:gamakidstiles,fig:z_vs_mass}, which consisted of selecting galaxies from the GAMA catalogues satisfying $8.5 \leqslant \log_{10}(M_\ast/\mathrm{M}_\odot) \leqslant 11$ and $z < 0.05$, and extracting their corresponding `cutouts' from KiDS mosaic images. 
Similarly, we prepared a simulation data set by selecting TNG50 galaxies from snapshot 96 (corresponding to $z = 0.034$, close to the median redshift of the observational sample) also satisfying $8.5 \leqslant \log_{10}(M_\ast/\mathrm{M}_\odot) \leqslant 11$, and then generating synthetic images for all simulated galaxies (including the effects of dust attenuation and scattering, and for three different projections) designed to match the KiDS data set. \cref{fig:comp_images} shows idealized, composite ($g,r,i$ bands) images for some of our simulated galaxies, while \cref{fig:sim_vs_obs_synth_images} shows the corresponding $r$-band images after including realism (convolution with a PSF and noise modelling), along with some example galaxies from the observational sample. After preparing the observational and simulated data sets, we performed source segmentation and deblending on each image in order to isolate the galaxy of interest and remove unwanted or contaminating sources, as illustrated in \cref{fig:deblending}. We then measured various morphological diagnostics in the $r$-band for galaxies from both data sets using the same code (\textsf{statmorph}), which represents a robust, quantitative comparison between theory and observations. This comparison showed good overall agreement between TNG50 and KiDS galaxies, with the median trend as a function of stellar mass for TNG50 galaxies lying within $\sim$1$\sigma$ of the observational distribution for every morphological parameter considered (\cref{fig:pairs_obs_Sim_comb,fig:medians}). However, TNG50 galaxies tend to be slightly more concentrated and asymmetric than their observational counterparts, and show wider distributions for most parameters. Interestingly, using the TNG100 simulation, \citet{Rodriguez-Gomez2019} also found that some IllustrisTNG galaxies are more concentrated compared to their observational counterparts from the Pan-STARRS $3\pi$ Survey \citep{Chambers2016}. 
However, this discrepancy was observed at higher masses, $M_{\ast} \sim 10^{11} \, {\rm M}_{\odot}$, and was attributed to the implementation details of the active galactic nuclei (AGN) feedback -- specifically, it was argued that the spherical region over which energy and momentum are injected by the AGN might be too large, and therefore ineffective at small radii. In the present work we reach much lower masses than those achievable in TNG100 (by a factor of 16) and find that the discrepancy pointed out by \citet{Rodriguez-Gomez2019} reappears at $M_{\ast} \sim 10^{9} \, {\rm M}_{\odot}$. Despite the different stellar masses, it is possible that the reason for the higher concentrations of TNG50 galaxies at $M_{\ast} \sim 10^{9} \, {\rm M}_{\odot}$ relative to observations is essentially the same as that for TNG100 galaxies at $M_{\ast} \sim 10^{11} \, {\rm M}_{\odot}$, namely, inefficient AGN feedback at the smallest radii. While the AGN feedback implementation operates in very different modes in such different mass ranges (`thermal' versus `kinetic'; \citealt{Weinberger2017a}), the size of the `injection region' is determined by the same prescription in both feedback modes (a sphere enclosing an approximately fixed number of gas cells). It will be interesting and important to explore in future galaxy formation models whether reducing the size of the injection region for AGN feedback produces galaxies with concentrations in better agreement with observations. We note, however, that TNG50 produces deficits in the star formation density on small scales that agree well with observations \citep{10.1093/mnras/stab2131}. On the other hand, the slightly higher asymmetries of TNG50 galaxies compared to observations could simply be a matter of resolution. 
Young stellar populations, in particular, are undersampled in hydrodynamic cosmological simulations, and manifest as bright clumps that become more noticeable in synthetic images produced with `bluer' broadband filters \citep{Torrey2015}. In principle, this issue could be mitigated by resampling the young stellar populations at a higher resolution \citep{Trayford2017}. However, such procedures would introduce additional complexity to our modelling and, importantly for our goal of characterising galaxy morphology, it is unclear what would be an appropriate spatial distribution for the resampled stellar populations. Therefore, we have adopted the simpler approach of smoothing the light contribution from every stellar particle using the same SPH-like kernel, regardless of the age of the stellar population. A related issue is that the outskirts of simulated galaxies are subject to particle noise, which could further contribute to overestimating the asymmetry parameter. It seems plausible that the asymmetries of simulated galaxies will automatically become more realistic with improved resolution, without the need to make substantial changes to the galaxy formation model. Having compared the optical morphologies of TNG50 galaxies to those from KiDS observations, we proceeded to compare the morphologies of merging and non-merging galaxies in the simulation. In order to do this, we first defined a merger sample composed of simulated galaxies that experienced a major or minor merger (i.e. those with stellar mass ratio $\mu > 1/10$) within a time window of approximately $\pm 0.5$ Gyr. We found that the morphology distributions of our merging and non-merging samples show a large degree of overlap, with the exception of the asymmetry-based statistics, as shown in \cref{fig:pairs_merg_nonmerg,fig:medians_merg_nonmerg}. 
However, despite such visually similar morphological distributions of our merging and non-merging samples, it is in principle possible that a combination of various morphological parameters would encode information about the merger histories of the galaxies that is unavailable when using the morphological parameters individually. To this end, we trained RFs using several combinations of morphological parameters of TNG50 galaxies as the model features, along with the merger label (0 or 1 for our non-merging and merging samples, respectively) as the ground truth, finding in all cases that the most important feature for identifying mergers is the asymmetry statistic. Therefore, the performance of our RF algorithm, usually quantified by the so-called ROC curve, is comparable to that of the more traditional method of selecting highly asymmetric galaxies, but is superior to a direct application of the Gini--$M_{20}$ merger statistic (\cref{fig:roc_fimp}). \cref{fig:tp_tn_fp_fn} shows some examples of the galaxy merger classifications (both successful and unsuccessful) returned by our RF algorithm. The high importance of the asymmetry parameter might appear to be in tension with \citet{Snyder2019}, where the bulge indicators ($F(G,M_{20})$, concentration) had similar or greater importance than the asymmetry statistic, and the RFs clearly outperformed asymmetry alone. We attribute these differences to the distinct nature of the galaxies considered: massive galaxies ($M_\ast > 10^{10}\,\text{M}_\odot$) at high redshifts in the case of \citet{Snyder2019}, and dwarf galaxies (mostly $M_\ast \lesssim 10^{10}\,\text{M}_\odot$) at low redshifts in the present work. In fact, the RF classification models by \citet{2022arXiv220811164R} indicate that the asymmetry parameter is more significant for identifying mergers at low redshift than indicators of bulge strength (such as the concentration and Gini statistics), while the latter have a higher importance for high-redshift events. 
These findings help to reconcile our results with those of \citet{Snyder2019}. Another possible factor is the choice of broadband filters. The varying importance of different image-based merger diagnostics in different redshift and stellar mass ranges, as well as for different broadband filters, will be explored in upcoming work. Finally, we applied our RFs to a test sample from the TNG50 simulation and to the observational sample, in order to estimate the galaxy merger fraction as a function of stellar mass in both simulations and observations (\cref{fig:merger_frac_rf}). In the case of the simulation, our RF was able to recover the `intrinsic' merger fraction (obtained directly from the merger trees) reasonably well (within a factor of $\sim$2). When applied to KiDS observations, our RF returned a galaxy merger fraction that increases steadily with stellar mass, just like the intrinsic merger fraction in TNG50, although with a systematic offset of a factor of $\sim$2. For comparison, we repeated this experiment using the asymmetry statistic alone, separating mergers from non-mergers using a `standard' cut at $A = 0.25$ (\cref{fig:merger_frac_asym}). This yielded a steadily rising merger fraction in both simulations and observations, but again with a persistent offset between the two data sets, with the observational merger fraction lying a factor of $\sim$2--3 below the simulation trend. This offset probably reflects the fact that our simulated galaxies are slightly more asymmetric than their observational counterparts. The results shown in \cref{fig:merger_frac_ab} imply that the merger fraction increases steadily with stellar mass, whether using the RF or the asymmetry parameter alone. 
These findings are qualitatively consistent with those of \citet{Besla2018}, who considered a low-redshift dwarf galaxy ($0.013<z<0.0252$; $2\times10^{8}\,\text{M}_\odot<M_\ast<5\times10^{9}\,\text{M}_\odot$) sample from SDSS to compute and compare the major pair fraction (the fraction of primary dwarf galaxies that have a secondary with a stellar mass ratio $\mu > 1/4$) with estimates from the original Illustris simulation, also finding an increasing trend (their fig. 14). However, our results are in stark contrast with those of \citet{Casteels2014}, who found a decreasing merger fraction over a comparable stellar mass range (their fig. 13), also using the asymmetry statistic as a merger indicator. \section{Summary and outlook} \label{sec:Summary} We have carried out an `apples-to-apples' comparison between the optical morphologies of galaxies from the high-resolution, state-of-the-art TNG50 simulation and those of a comparable galaxy sample from KiDS observations. Overall, we have found good agreement between the simulated and observed data sets, which is remarkable considering that the IllustrisTNG galaxy formation model was not tuned to match morphological observations. The TNG50 galaxies, however, are somewhat more concentrated and asymmetric than their observational counterparts. Using additional information from the simulation that is not available in observations -- namely, the merger trees -- we have trained a random forest algorithm to classify merging galaxies using image-based morphological diagnostics, and applied the random forest to observations in order to estimate the merger fraction in the real Universe. 
We found that the asymmetry statistic is the single most useful parameter for identifying galaxy mergers, at least in the mass and redshift regime we considered ($8.5\leqslant\log(M_\ast/\text{M}_\odot)\leqslant11$; $z<0.05$), and that the merger fraction is a steadily increasing function of stellar mass for both the simulated and observational samples. Currently, it is still challenging to precisely determine the merger fraction in observations, especially using galaxy morphology alone. However, we are approaching an era in which galaxy formation models will become so realistic that it will be possible to exploit subtle trends in morphological measurements -- such as the ones studied in this paper -- to infer properties of galaxies that are not directly observable in the real Universe, such as their merging histories or even the assembly histories of their host DM haloes. At the same time, our work highlights the importance of developing sophisticated tools to carry out robust comparisons between theory and observations, which will become indispensable in the upcoming years as both computational capacity and astronomical instruments continue to evolve. \section*{Acknowledgements} We thank Gurtina Besla for useful comments and discussions. VRG acknowledges support from UC MEXUS-CONACyT grant CN-19-154. This work used the Extreme Science and Engineering Discovery Environment \citep[XSEDE;][]{Towns2014}, which is supported by NSF grant ACI-1548562. The XSEDE allocation TG-AST160043 utilized the Comet and Data Oasis resources provided by the San Diego Supercomputer Center. The IllustrisTNG flagship simulations were run on the HazelHen Cray XC40 supercomputer at the High Performance Computing Center Stuttgart (HLRS) as part of project GCS-ILLU of the Gauss Centre for Supercomputing (GCS). 
Ancillary and test runs of the project were also run on the compute cluster operated by HITS, on the Stampede supercomputer at TACC/XSEDE (allocation AST140063), at the Hydra and Draco supercomputers at the Max Planck Computing and Data Facility, and on the MIT/Harvard computing facilities supported by FAS and MIT MKI. This research is based on observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 177.A-3016, 177.A-3017, 177.A-3018 and 179.A-2004, and on data products produced by the KiDS consortium. The KiDS production team acknowledges support from: Deutsche Forschungsgemeinschaft, ERC, NOVA and NWO-M grants; Target; the University of Padova, and the University Federico II (Naples). GAMA is a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. \section*{Data availability} The data from the IllustrisTNG simulations used in this work are publicly available at the website \href{https://www.tng-project.org}{https://www.tng-project.org} \citep{Nelson2019}. The KiDS and GAMA data are available at the websites \href{http://kids.strw.leidenuniv.nl/}{http://kids.strw.leidenuniv.nl/} and \href{http://www.gama-survey.org/}{http://www.gama-survey.org/}. \bibliographystyle{mnras}
\section{Introduction} The microscopic physics underlying high $T_c$ superconductivity in the cuprates is believed to be purely electronic in origin, in contrast to ordinary superconductors where the attractive mechanism is due to phonons. Strongly correlated electron models such as the two-dimensional Hubbard model have been proposed to describe it. The Hubbard model simply describes electrons hopping on a square lattice subject to strong, local, coulombic {\it repulsion}. Since it is known that the condensed charge carriers have charge $2e$, and thus some kind of Cooper pairing is involved, the main open problem is to identify the precise pairing mechanism. Several mechanisms were proposed early on, in particular mechanisms based on spin fluctuations\cite{Emery,ScalapinoSpin, Emery2, Pines}, charge fluctuations\cite{Ruvalds,Varma,Hirsch}, and other more exotic ideas such as resonating valence bonds\cite{Anderson}. A more recent work studies the possibility of superconductivity at small coupling, where here the mechanism goes back to ideas of Kohn-Luttinger\cite{Raghu,Kohn}. Since the Mott-insulating anti-ferromagnetic phase at half-filling is well understood, much of the theoretical literature attempts to understand how doping ``melts'' the anti-ferromagnetic (AF) order, and how the resulting state can become superconducting. This has proven to be difficult to study, perhaps in part due to the fact that AF order is spatial, whereas superconducting order is in momentum space. (For a review and other references, see \cite{Wen}.) Furthermore, superconductivity exists at reasonably low densities far from the AF order at half-filling, which suggests that one can perhaps treat the model as a gas, with superconductivity arising as a condensation of Cooper pairs as in the BCS theory, and we will adopt this point of view in the present work. 
Since there is as yet no consensus on the precise pairing mechanism in the cuprates, it is worthwhile continuing the search for new ones. In this paper we take a very conservative approach within the Hubbard model, wherein we do not postulate any particular quasi-particle excitations that would play the role of the phonons, and we ignore the AF order at very low doping. In a field-theoretic approach to the BCS theory, the pairing instability arises from the sum of the ladder diagrams for phonon exchange, as shown in Figure \ref{ladders}. The sum of these diagrams has a pole at energy equal to the gap, and this pole is sufficient to cause the instability, and signifies gapped charge 2 excitations. (See e.g. \cite{BCS,MahanBook}). This will be reviewed in section III. \begin{figure}[htb] \begin{center} \hspace{-15mm} \includegraphics[width=7cm]{ladders.eps} \end{center} \caption{Feynman diagrams that lead to the BCS pairing instability, where dashed lines are phonons.} \vspace{-2mm} \label{ladders} \end{figure} Suppose that the electron-phonon interaction were treated as an effective, short-ranged electron-electron interaction, so that exchange of a single phonon were replaced by an effective 4-electron vertex. Then the diagrams in Figure \ref{ladders} become the diagrams in Figure \ref{Vscreen}. In this work we explore the possibility of pairing instabilities arising from certain similar classes of Feynman diagrams in the Hubbard model. Whereas the diagrams in Figure \ref{Vscreen} lead to the BCS pairing instability for bare {\it attractive} interactions (see below), for repulsive interactions they merely screen the strength of the Coulomb interaction. However, as we will show, the sum of another class of diagrams shown in Figure \ref{Vex} do indeed exhibit poles possibly signifying pairing instabilities very near the Fermi surface. An obvious criticism of the present work is that for the cuprates, the coupling is large, and one should not trust perturbation theory. 
In answer to this, the diagrams we will focus upon can be summed up, and have a well-defined limit as the coupling goes to infinity. Furthermore, since we are focussing on sums of diagrams that lead to poles in the effective pairing potential, it is possible that these singular diagrams dominate the perturbative expansion. Certainly our calculation can be improved upon; we wish here to present a possible pairing mechanism in its simplest form. For instance, we ignore the effect of interactions on the quasi-particle energies, and take $\xi_{\bf k}$ to be that of the free theory, ignoring self-energy corrections, which are known to be significant in the pseudo-gap region. One can repeat the analysis for instance by extracting $\xi_{\bf k}$ from experiments, however we leave this for the future. At this stage, it is more useful to study the mechanism and its plausibility in its simplest form. Comments on the connection between the present work and our earlier one\cite{HubbardGap} are called for. In \cite{HubbardGap} the diagrams in Figure \ref{Vscreen} at {\it zero chemical potential and temperature} were viewed as contributions to an effective interaction between 2 electrons in vacuum, and it was shown that there are regions in the Brillouin zone where this interaction is attractive. This effective interaction was then fed into a BCS gap equation, which showed solutions. Our current understanding is that this may not be consistent for the following reasons. Superconductivity is a phenomenon that relies strongly on properties near the Fermi surface, Fermi blocking, etc, and thus effectively attractive interactions in free space, though they may lead to bound states, are believed to be insufficient to cause superconductivity. In this paper the diagrams in Figures \ref{Vscreen}, \ref{Vex} are calculated at finite temperature and chemical potential, and can thus be studied near the Fermi surface. In fact, the diagrams in Figure \ref{Vex} are zero at zero density. 
Although in a different channel than the diagrams in Figure \ref{Vscreen}, they still contribute to Cooper-pair Green functions, and poles in these functions could signify pairing, just as do the poles of the direct-channel diagrams in Figure \ref{Vscreen}. In summary, as far as superconductivity is concerned, there is no connection between the present work and \cite{HubbardGap}. Furthermore, it was suggested that the results in \cite{HubbardGap} were perhaps more relevant to the so-called pseudo-gap, since the solutions of the gap equation increased all the way down to zero doping. The remainder of this paper is organized as follows. In the next section we define the Cooper `pairing' Green functions of interest. In section III we first review how the Cooper pairing instability manifests itself as poles in such Green's functions in the BCS theory. The remainder of this section specializes to the Hubbard model, where analogous poles are found for a different class of diagrams. The latter diagrams have been studied before in the context of Landau damping and plasmons. However, in the present work these diagrams are used to define an effective potential, which leads to a different dependence on kinematic variables near the Fermi surface. Thus, the detailed effective potential studied in this paper has not been considered before in connection with pairing; this is explained in detail below. The gap is related to the location of these poles, and solutions are found numerically. For $U/t = 10$, we find non-zero gap solutions in the anti-nodal directions in the range of hole doping $0.03<h<0.24$ with a dome-like shape. The maximum of the gap is approximately $\Delta/t = 0.08$ and occurs around $h=0.11$. 
\begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{mkd}{$-{\bf k}\downarrow$} \psfrag{ku}{${\bf k}\uparrow$} \psfrag{mkpd}{$-{\bf k}'\downarrow$} \psfrag{kpu}{${\bf k}'\uparrow$} \includegraphics[width=10cm]{Vscreen.eps} \end{center} \caption{Feynman diagrams contributing to $\CV_{\rm eff}^{(sc)}$.} \vspace{-2mm} \label{Vscreen} \end{figure} \begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{mkd}{$-{\bf k}\downarrow$} \psfrag{ku}{${\bf k}\uparrow$} \psfrag{kpd}{${\bf k}'\downarrow$} \psfrag{mkpu}{$-{\bf k}'\uparrow$} \includegraphics[width=10cm]{Vex.eps} \end{center} \caption{Feynman diagrams contributing to $\CV_{\rm eff}^{(ex)}$.} \vspace{-2mm} \label{Vex} \end{figure} \section{Green functions and the effective Cooper pair potential} We study the two-dimensional Hubbard model: \begin{equation} \label{HubbardHamiltonian} H = -t \sum_{ <i,j>, \alpha = \uparrow, \downarrow} c^\dagger_{i,\alpha} c_{j, \alpha} +U \sum_i n_{i, \uparrow} n_{i, \downarrow} \end{equation} where $i,j$ label sites of a two-dimensional square lattice, $<i,j>$ denotes nearest neighbors, and $n_{i, \alpha} = c^\dagger_{i, \alpha} c_{i, \alpha}$. Our convention for neighboring interactions is e.g. $\sum_{<i,j>} c^\dagger_i c_j = c^\dagger_1 c_2 + c^\dagger_2 c_1 + \ldots$ such that $H$ is hermitian. The lattice operators $c_{{\bf r}_i, \alpha}$, where ${\bf r}_{i}$ is a lattice site, will correspond to the continuum fields \begin{equation} \label{fields} \psi_\alpha ({\bf x}, t ) = \int \frac{ d^2 {\bf k}}{2\pi} \, c_{{\bf k}, \alpha} (t) \, e^{i {\bf k} \cdot {\bf x} } \end{equation} The free hopping terms are diagonal in momentum space: \begin{equation} \label{Hfree} H_{\rm free} = \int d^2 {\bf k} \, \omega_{\bf k} \sum_{\alpha = \uparrow, \downarrow} c^\dagger_{{\bf k}, \alpha} c_{{\bf k}, \alpha} \end{equation} with the 1-particle energies \begin{equation} \label{omegak} \omega_{\bf k} = - 2 t (\cos (k_x a ) + \cos (k_y a) ) \end{equation} where $a$ is the lattice spacing. 
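As a concrete numerical reference for the dispersion relation (\ref{omegak}), the following sketch (ours, not code from the paper) tabulates $\omega_{\bf k}$ in the units $t = a = 1$ used below:

```python
import numpy as np

# Illustrative sketch: the square-lattice tight-binding dispersion
# omega_k = -2t(cos kx + cos ky), in units t = a = 1.
def omega(kx, ky, t=1.0):
    return -2.0 * t * (np.cos(kx) + np.cos(ky))

# Band bottom at the zone center and band top at the zone corner:
# omega(0,0) = -4t, omega(pi,pi) = +4t, so the bandwidth is 8t.
```

At half filling the locus $\omega_{\bf k} = 0$ is the square $|k_x| + |k_y| = \pi$, which will serve as the Fermi surface at $\mu = 0$.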
We henceforth scale out the dependence on $t$ and $a$ such that all energies, temperatures and chemical potentials are in units of $t$. The interactions then depend on the dimensionless coupling $g \equiv U/t$, which is positive for repulsive interactions. Henceforth $\xi_{\bf k} \equiv \omega_{\bf k} - \mu$ is the 1-particle energy measured relative to the Fermi surface, where $\mu$ is the chemical potential. In order to probe possible Cooper pairing instabilities, we consider the Green function $\langle \psi^\dagger_\uparrow (x_1) \psi^\dagger_\downarrow (x_2) \psi_\downarrow (x_3) \psi_\uparrow (x_4 ) \rangle $ where $x = ({\bf x}, t)$. The Fourier transforms in both space and time of these functions are correlation functions of the operators $c^\dagger_{{\bf k},\alpha} ( { \scriptstyle \xi }) = \int \frac{dt}{\sqrt{2\pi}} e^{-i \xi t } c^\dagger_{{\bf k}, \alpha} (t) $, and their hermitian conjugates. We study Green functions specialized to Cooper pairs: $ \langle c^\dagger_{-{\bf k}' \uparrow} ({ \scriptstyle {\hat{\xi}} '}) c^\dagger_{{\bf k}' \downarrow} ( {\scriptstyle \xi' }) c_{-{\bf k} \downarrow} ( {\scriptstyle {\hat{\xi}} }) c_{{\bf k} \uparrow} ( {\scriptstyle \xi} ) \rangle $. It should be emphasized that since the above Green functions are just Fourier transforms of the spatial/temporal Green functions, although the $\xi$ are energy variables, they are not necessarily `on-shell', i.e. $\xi$ is not necessarily $\xi_{\bf k}$ (borrowing the relativistic terminology). The only constraint is energy conservation: ${\hat{\xi}} + \xi = {\hat{\xi}}' + \xi'$. There are two important types of quantum corrections to the vertex, which we denote as $ \CV_{\rm eff}^{(sc)} $ and $ \CV_{\rm eff}^{(ex)} $, where, for reasons explained below, {\it sc} refers to {\it screened} and {\it ex} to {\it exchange}. 
$\CV_{\rm eff}^{(sc)}$ is defined as \begin{equation} \label{Vscreendef} \CV_{\rm eff}^{(sc)} = \langle c^\dagger_{-{\bf k}' \uparrow} ( {\scriptstyle {\hat{\xi}}'} ) c^\dagger_{{\bf k}'\downarrow} ( {\scriptstyle \xi'} ) c_{-{\bf k} \downarrow} ({ \scriptstyle {\hat{\xi}} } ) c_{{\bf k} \uparrow} ( {\scriptstyle \xi } ) \rangle_{\rm trunc}^{(sc)} \end{equation} where {\it trunc} refers to the truncated Green function, i.e. stripped of external propagators and energy-momentum conserving delta functions, and {\it sc} refers to the diagrams in Figure \ref{Vscreen}. Incoming (outgoing) arrows correspond to annihilation (creation) operator fields. The other class of diagrams are shown in Figure \ref{Vex} and define $\CV_{\rm eff}^{(ex)}$ as in eqn. (\ref{Vscreendef}). When `on-shell', which is to say that frequencies are 1-particle energies, i.e. $\xi = \widehat{\xi} = \xi_{\bf k}$ and $ \xi' = \widehat{\xi}' = \xi_{{\bf k}'}$, then these truncated Green functions contribute to the matrix element, i.e. form-factor, of the integrated interaction hamiltonian density: \begin{equation} \label{VonShell} V({\bf k}, {\bf k}' ) = \int d^2 {\bf x} \langle -{\bf k}'\uparrow , {\bf k}'\downarrow | \CH_{\rm int} ({\bf x} ) | {\bf k}\uparrow, -{\bf k}\downarrow \rangle \end{equation} Since there is no integration over time in the above equation, $\xi_{\bf k}$ and $\xi_{{\bf k}'}$ are not necessarily equal. To lowest order, $V = g$. The form factor $V^{\rm (sc) } ({\bf k}, {\bf k}')$ corresponding to the diagrams of $\CV_{\rm eff}^{(sc)}$ is equal to $\CV_{\rm eff}^{(sc)}$ (as computed below) with $\xi \to (\xi_{\bf k} + \xi_{{\bf k}'} ) /2$\cite{HubbardGap}, whereas $V^{\rm (ex)}$ is simply $\Veffex$ placed on shell. For both, one has the necessary symmetry: $V ({\bf k}, {\bf k}' ) = V ({\bf k}' , {\bf k} )$. 
The evaluation of $\CV_{\rm eff}^{(sc, ex)}$ at finite density and temperature is standard; however, for completeness we provide some details. Consider first $\CV_{\rm eff}^{(sc)}$. These diagrams factorize into 1-loop integrals and form a geometric series. There is no fermionic minus sign coming from each loop since the arrows do not form a {\it closed} loop. Momentum conservation at each vertex gives a loop integral that is independent of ${\bf k}, {\bf k}'$: \begin{equation} \label{Vscsum} \CV_{\rm eff}^{(sc)} (\xi ) = \frac{ g}{1- g L^{(sc)} (\xi )} \end{equation} where $L^{(sc)}$ is the one-loop integral: \begin{equation} \label{Lscdef} L^{(sc)} (\xi ) = - T \sum_n \int \frac{d^2 {\bf p}}{(2\pi)^2} \( \inv{ i \nu_n - \xi_{\bf p} } \) \( \inv{ 2 \xi - i \nu_n - \xi_{-{\bf p}} } \) \end{equation} where $T$ is temperature, $\nu_n$ is a fermionic Matsubara frequency, $\nu_n = 2\pi (n+\inv{2}) T$ with $n$ an integer, and $\xi_{\bf p} = \omega_{\bf p} - \mu$ where $\mu$ is the chemical potential. One needs the following identity: \begin{equation} \label{scident} T \sum_n \( \inv{i\nu_n - \xi_{\bf p}} \) \( \inv{ 2 \xi - i\nu_n - \xi_{-{\bf p}} }\) = \frac{ f(\xi_{\bf p}) -1/2 }{\xi - \xi_{\bf p}} \end{equation} where $f(\xi) = 1/ (e^{\xi/T} +1) $ is the fermionic occupation number, and we have used $\xi_{\bf p} = \xi_{-{\bf p}}$. The above identity is valid before analytic continuation from imaginary to real time, i.e. when $2 \xi $ is twice a fermionic Matsubara frequency. The final result is then: \begin{equation} \label{LscFinal} L^{(sc)} (\xi ) = \inv{2} \int \frac{d^2 {\bf p}}{(2\pi)^2} \( \frac{ 1 } { \xi - \xi_{\bf p} +i \eta } \) \tanh(\xi_{\bf p} /2T) \end{equation} where $\eta$ is infinitesimally small and positive. Integration is over the first Brillouin zone, $-\pi \leq p_{x,y} \leq \pi$. 
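As a hedged numerical illustration (our own discretization choices, not the paper's code), eq. (\ref{LscFinal}) can be evaluated at $\xi = 0$ by a grid sum over the Brillouin zone; there the real part of the integrand reduces to $-\tanh(\xi_{\bf p}/2T)/\xi_{\bf p}$, which is smooth (approaching $-1/2T$ as $\xi_{\bf p} \to 0$), so no principal-value prescription is needed:

```python
import numpy as np

# Sketch: grid evaluation of Re L^(sc)(xi = 0) from eq. (LscFinal),
# units t = 1.  The integrand -tanh(xi_p/2T)/xi_p is smooth, with
# limit -1/(2T) as xi_p -> 0, handled explicitly below.
def L_sc_zero(mu=0.0, T=0.05, N=400):
    k = np.linspace(-np.pi, np.pi, N, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    xi = -2.0 * (np.cos(kx) + np.cos(ky)) - mu
    safe_xi = np.where(np.abs(xi) < 1e-9, 1.0, xi)
    integrand = np.where(np.abs(xi) < 1e-9,
                         -1.0 / (2.0 * T),
                         -np.tanh(xi / (2.0 * T)) / safe_xi)
    # the grid mean equals the BZ integral with measure d^2p/(2pi)^2
    return 0.5 * integrand.mean()
```

Since $\tanh(x)/x \geq 0$, the result is negative for any $\mu$, consistent with the sign of $L^{(sc)}$ invoked later to argue that $\CV_{\rm eff}^{(sc)}$ has no poles for repulsive $g$.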
The diagrams for $\CV_{\rm eff}^{(ex)}$, though simply an `exchanged' version of those for $\CV_{\rm eff}^{(sc)}$, have a rather different and more complicated structure. Here there is a fermionic minus sign associated with each loop since the arrows form a closed loop. Momentum and energy conservation at each vertex now leads to a loop integral that depends on ${\bf{q}} \equiv {\bf k}' - {\bf k}$ and $\Delta_\xi \equiv \xi' - \xi$: \begin{equation} \label{Vexddef} \CV_{\rm eff}^{(ex)} ({\bf{q}}, \Delta_\xi ) = \frac{g}{1- g L^{(ex)}} \end{equation} where \begin{equation} \label{LexdefT} L^{(ex)} ({\bf{q}}, \Delta_\xi ) = T \sum_n \int \frac{d^2 \pvec}{(2\pi)^2} \( \inv{ i \nu_n - \xi_{\bf p} } \) \( \inv{ \Delta_\xi + i \nu_n - \xi_{{\bf p} + {\bf{q}}}}\) \end{equation} Now one needs the identity \begin{equation} \label{ident2} T \sum_{n} \( \inv{i \nu_n - \xi_{\bf p}} \) \( \inv{i\nu_n + i \omega_m - \xi_{{\bf p}+ {\bf{q}}} } \) = \frac{ f(\xi_{\bf p}) - f(\xi_{{\bf p}+ {\bf{q}}} ) }{i \omega_m + \xi_{\bf p} - \xi_{{\bf p}+ {\bf{q}}} } \end{equation} which is valid for $\omega_m $ twice a bosonic Matsubara frequency. After analytic continuation $i\omega_m \to \Delta_\xi + i \eta $, one has \begin{equation} \label{Lexdef} L^{(ex)} ({\bf{q}}, \Delta_\xi ) = \int \frac{d^2 \pvec}{(2\pi)^2} \( \frac{ f(\xi_{\bf p} ) - f(\xi_{{\bf p} + {\bf{q}}} ) }{\Delta_\xi + \xi_{\bf p} - \xi_{{\bf p} + {\bf{q}}} +i \eta } \) \end{equation} In the above formulas it is implicit that $\CV_{\rm eff}^{(\rm sc, ex)}$ is defined by the real part of $L^{\rm (sc, ex)}$ in the limit $\eta \to 0$. In the limit of zero density, i.e. $f=0$, note that $\Veffex =0$, whereas $\CV_{\rm eff}^{(sc)}$ is non-zero. \section{Cooper pairing instabilities as poles in the effective potential} It is well-known that the Cooper pairing instability of the BCS theory can be understood as a pole in $\CV_{\rm eff}^{(sc)}$\cite{BCS,MahanBook}. 
This can be seen by inserting a complete set of states between the pair creation and annihilation operators in eqn. (\ref{Vscreendef}), and noting that poles in the energy $\xi$ signify bound states of electric charge $2$, which are interpreted as the Cooper pairs. To see that this gives the correct result for the gap, let us leave behind the Hubbard model and specialize the coupling $g$ to that of the BCS theory, where it arises from the interaction with phonons. For simplicity, $g = - |g|$ is taken to be negative, signifying attractive interactions, in a narrow region near the Fermi surface $| \xi | < \omega_D$, where $\omega_D$ is the Debye frequency, otherwise zero. We approximate the integral over ${\bf p}$ near the Fermi surface as $\int \rho (\xi_{\bf p} ) d \xi_{\bf p} \approx \rho (0) \int d \xi_{\bf p}$, where $\rho (\xi )$ is the density of states. Letting $\xi=\Delta/2$, in the limit of zero temperature, \begin{equation} \label{LscBCS} L^{(sc)} \approx \frac{\rho (0)}{2} \( \int_{0<\xi_{\bf p} < \omega_D} d\xi_{\bf p} \inv{\Delta/2 - \xi_{\bf p}} - \int_{-\omega_D <\xi_{\bf p} < 0 } d\xi_{\bf p} \inv{\Delta/2 - \xi_{\bf p}} \) \approx \rho(0) \log \( \frac{\Delta}{2\omega_D} \) \end{equation} (The first integral should be interpreted as the principal value.) There is a pole in $\CV_{\rm eff}^{(sc)}$ because both $L^{(sc)}$ and $g$ are negative. The location of the pole is given by $1 + |g| \rho (0) \log (\Delta/2 \omega_D) =0$, which leads to the standard result for the gap: $\Delta = 2 \omega_D \exp (-1/ ( |g| \rho(0)) )$. Note that there is a gap $\Delta$ for arbitrarily small negative coupling $g$. Returning to the positive $g$ Hubbard model, i.e. with repulsive interactions, there are no poles in $\CV_{\rm eff}^{(sc)}$ since $\Lsc$ turns out to be negative at finite temperature and density. These quantum corrections merely screen the Coulomb potential. 
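As a quick consistency check of the weak-coupling algebra above, one can verify numerically that the quoted gap solves the pole condition $1 + |g| \rho(0) \log(\Delta/2\omega_D) = 0$; the parameter values below are arbitrary illustrative choices, not fits to any material:

```python
import numpy as np

# Check that Delta = 2*omega_D*exp(-1/(|g| rho0)) satisfies the BCS
# pole condition 1 + |g|*rho0*log(Delta/(2*omega_D)) = 0.
def bcs_gap(g_abs, rho0, omega_D):
    return 2.0 * omega_D * np.exp(-1.0 / (g_abs * rho0))

g_abs, rho0, omega_D = 0.3, 1.0, 0.1   # illustrative numbers only
Delta = bcs_gap(g_abs, rho0, omega_D)
residual = 1.0 + g_abs * rho0 * np.log(Delta / (2.0 * omega_D))
# Delta is exponentially small but non-zero for arbitrarily weak coupling.
```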
It is meaningful to define a screened coupling near the Fermi surface as $g_{\rm sc} = \CV_{\rm eff}^{(sc)}(\xi= 0)$. We wish to point out that at zero density, i.e. $f=0$, and $T=0$, $\Lsc$ can become positive, which means that $\CV_{\rm eff}^{(sc)}$ can be attractive. This was explored in \cite{HubbardGap}, and with the help of a hypothesized gap equation, one finds anisotropic solutions with properties suggestive of the pseudo-gap. We now turn to the investigation of the possibility of analogous pairing instabilities in $\CV_{\rm eff}^{(ex)}$. By construction, $\CV_{\rm eff}^{(ex)}$ contributes to the full effective potential in the same way that $\CV_{\rm eff}^{(sc)}$ does; thus, for the same reason as above, poles in $\CV_{\rm eff}^{(ex)}$ could signify Cooper pairs. Poles in $\CV_{\rm eff}^{(ex)}$ for positive $g$ only exist if $L^{(ex)}$ can be positive. For $\Delta_\xi =0$, the integrand in $L^{(ex)}$ is always negative, thus $L^{(ex)}$ is negative and there are no poles for positive $g$. However, we are interested precisely in the case $\Delta_\xi \neq 0$, since $\Delta_\xi $ represents a difference of incoming and outgoing energies, and it is thus poles at non-zero values of $\Delta_\xi$ that could signify Cooper-paired bound states. As we will see, there are indeed regions of the parameter space near the Fermi surface where $L^{(ex)}$ is positive. For ${\bf{q}}$ and $\Delta_\xi$ unrelated, the integral is easily evaluated numerically, and has a smooth, well-defined limit as $\eta \to 0$. However, physically we are interested in this function {\it on-shell}, i.e. the corresponding form factor $V^{\rm (ex)}$ with $\Delta_\xi = \xi_{{\bf k}'} - \xi_{\bf k} $ when ${\bf{q}} = {\bf k}' - {\bf k}$, so that $\CV_{\rm eff}^{(ex)}$ now depends on ${\bf k}, {\bf k}'$, not only ${\bf{q}}$. With this identification of $\Delta_\xi$, $V^{\rm (ex)}$ is properly viewed as an effective potential, as is $\CV_{\rm eff}^{(sc)}$. 
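The negativity of $L^{(ex)}$ at $\Delta_\xi = 0$ can be illustrated by a direct grid sum of eq. (\ref{Lexdef}). In the sketch below (ours, not the paper's code) the small $\eta$ merely regulates near-degenerate grid points, and all parameter values are our own illustrative choices:

```python
import numpy as np

def fermi(x, T):
    # 1/(exp(x/T)+1), written via tanh for numerical stability
    return 0.5 * (1.0 - np.tanh(x / (2.0 * T)))

# Sketch: Re L^(ex)(q, Delta_xi) of eq. (Lexdef) as a grid sum over
# the Brillouin zone, with a small eta regulator in the denominator.
def L_ex(qx, qy, d_xi, mu=-0.5, T=0.05, N=300, eta=1e-3):
    k = np.linspace(-np.pi, np.pi, N, endpoint=False)
    px, py = np.meshgrid(k, k)
    xi_p  = -2.0 * (np.cos(px) + np.cos(py)) - mu
    xi_pq = -2.0 * (np.cos(px + qx) + np.cos(py + qy)) - mu
    num = fermi(xi_p, T) - fermi(xi_pq, T)
    den = d_xi + xi_p - xi_pq
    return np.mean(num * den / (den ** 2 + eta ** 2))
```

At $\Delta_\xi = 0$ every term carries the sign of $(f_{\bf p} - f_{{\bf p}+{\bf q}})/(\xi_{\bf p} - \xi_{{\bf p}+{\bf q}})$, which is non-positive because $f$ is decreasing, so the sum comes out negative for any ${\bf q}$, in line with the argument in the text.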
For clarity, let us define the on-shell version of $L^{(ex)}$ that plays an important role in this paper as follows: \begin{equation} \label{Lon} \hat{L}^{(ex)} ( {\bf k}, {\bf k}' ) = L^{(ex)}( {\bf{q}}={\bf k}' - {\bf k}, \, \Delta_\xi = \xi_{{\bf k}'} - \xi_{\bf k}) \end{equation} Versions of the function $L^{(ex)} ({\bf{q}}, \Delta_\xi)$ appear in other essentially different physical contexts, and we wish to clarify this point. The analysis in \cite{ScalapinoSpin,Emery2} involves the same diagrams that define $L^{(ex)}$, with the important difference that there $\Delta_\xi = 0$; this is here interpreted as the effective potential $\CV_{\rm eff}^{(ex)}$ with both ${\bf k}, {\bf k}'$ on the Fermi surface and is thus a different function of ${\bf k}, {\bf k}'$ than our on-shell $\hat{L}^{(ex)}$. We also point out that the function $L^{(ex)}$ off-shell, i.e. with $\Delta_\xi $ equal to an arbitrary frequency $\omega$ unrelated to ${\bf{q}}$, appears in the RPA expression for the dielectric response function $\varepsilon_{\rm RPA} (\omega, {\bf{q}} ) = 1- g L^{(ex)} ( \Delta_\xi = \omega, {\bf{q}} )$\cite{MahanBook}. Here, $\omega$ is the frequency of an external probe, such as an electric field. Solutions $\omega ({\bf{q}})$ to the equation $\varepsilon_{\rm RPA} (\omega ({\bf{q}}) , {\bf{q}})= 0$ are interpreted as Landau damping. Plasmons, i.e. quantized electric charge fluctuations, are manifested as delta-function peaks in $- {\rm Im} (1/\varepsilon_{\rm RPA} )$ as a function of $\omega$. The pairing mechanism studied here thus appears closest to the idea of plasmon mediated superconductivity\cite{Ruvalds,Mahan}. However, our analysis differs in important ways: no low energy plasmons were postulated, and once again, the properties of the on-shell $1-g \hat{L}^{(ex)}$ are very different from those of $\varepsilon_{\rm RPA}$. The integral defining $\hat{L}^{(ex)}$ is rather delicate in comparison with $L^{(sc)}$ and the off-shell $L^{(ex)}$. 
In this case there is a pole in the integrand when ${\bf p} = {\bf k}$. Thus the ${\bf p}$ integral should be understood as the Cauchy principal value (PV). Namely, inside an integral over a variable $x$, one has the identity: \begin{equation} \label{CPV} \lim_{\eta \to 0^+} \inv{x + i \eta } = {\rm PV} \( \inv{x} \) - i \pi \delta(x) \end{equation} Recall the PV is defined by excising a small region around the pole: $\int d p_x \to \lim_{\delta \to 0} \( \int_{-\pi}^{k_x - \delta} dp_x + \int_{k_x + \delta}^\pi dp_x \)$ and similarly for $\int dp_y$. In order to better understand other features of the function $L^{(ex)}$ on-shell, it is instructive to study the one-dimensional version of $L^{(ex)}$, appropriate to a chain of lattice sites, since here the integral can be performed analytically. This is described in the Appendix. (It should be emphasized that this exercise is not meant to capture the physics of the one-dimensional Hubbard model, which is exactly solvable\cite{Korepin}, but rather simply to gain intuition on the function $L^{(ex)}$ in two dimensions.) As shown analytically in the appendix, the on-shell $L^{(ex)}$ actually diverges as the temperature goes to zero in one dimension. The reason is that in the limit of zero $T$, the occupation number $f$ is a step function, and the $\int_{k_F + \delta}^\pi d{\bf p}$ piece of the PV is absent, leading it to be ill-defined. The temperature $T$ should thus be viewed as an infra-red regulator in one dimension. In two dimensions we will also not take $T\to 0$ for analogous reasons. For arbitrarily small $T$, we verified that $L^{(ex)}$ is well defined numerically as $\delta \to 0$. As shown in the appendix, one can demonstrate analytically that there is a narrow region around the Fermi surface where $L^{(ex)}$ is positive and thus $\CV_{\rm eff}^{(ex)}$ has poles. As we now show, the same is true in two dimensions. 
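Before turning to two dimensions, the principal-value prescription (\ref{CPV}) can be illustrated on a solvable model integral: $\mathrm{PV}\int_{-1}^{1} dx/(x-a) = \log\frac{1-a}{1+a}$, where the symmetric excision makes the divergences on either side of the pole cancel exactly for any $\delta$. A sketch of ours:

```python
import numpy as np

def trapz(y, x):
    # simple composite trapezoid rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Sketch: PV of int_{-1}^{1} dx/(x - a) by symmetric excision of
# (a - delta, a + delta).  For this integrand the excised-edge
# divergences cancel, so the answer is independent of delta.
def pv_demo(a, delta=0.05, n=20001):
    xl = np.linspace(-1.0, a - delta, n)
    xr = np.linspace(a + delta, 1.0, n)
    return trapz(1.0 / (xl - a), xl) + trapz(1.0 / (xr - a), xr)

a = 0.3
exact = np.log((1.0 - a) / (1.0 + a))
```

This is exactly the mechanism that fails at $T = 0$ in the on-shell integral above: when one side of the excision falls outside the occupied region, the cancellation is lost.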
\begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{mkd}{$-{\bf k}\downarrow$} \psfrag{kx}{$k_x$} \psfrag{ky}{$k_y$} \psfrag{q}{${\bf{q}}$} \psfrag{kp}{${\bf k}'$} \psfrag{kF}{${\bf k}_F$} \psfrag{FS}{${\rm \scriptstyle{Fermi ~surface}}$} \psfrag{th}{$\theta$} \includegraphics[width=8cm]{FS.eps} \end{center} \caption{Geometry near the Fermi surface in the first quadrant of the Brillouin zone.} \vspace{-2mm} \label{FS} \end{figure} We are interested in potential instabilities when ${\bf k}, {\bf k}'$ are near the Fermi surface. To be more specific, let ${\bf k} = {\bf k}_F (\mu)$ be right on the Fermi surface, i.e. $\xi_{\bf k}=0$, and ${\bf k}'$ be slightly off the surface, with some small $\xi_{{\bf k}'}$, so that now $\Delta_\xi = \xi_{{\bf k}'}$. See Figure \ref{FS}. As usual, the $\theta = \pi/4$ direction will be referred to as nodal, whereas the $\theta = 0$ as anti-nodal. For a fixed value of the chemical potential, $\hat{L}^{(ex)} ({\bf k}_F, {\bf k}' )$ depends only on $|{\bf k}'|$ and the angular directions $\theta, \theta'$ of ${\bf k}_F, {\bf k}'$. The magnitude $|{\bf k}'|$ can be related to $\Delta_\xi $ as follows: $\Delta_\xi = \omega_{{\bf k}'} - \mu$. At fixed chemical potential, with ${\bf k}$ on the Fermi surface, we can thus view $\hat{L}^{(ex)}$ as a function only of $\theta, \theta'$ and $\Delta_\xi$. The chemical potential will be related to the hole doping $h$ by the formula: \begin{equation} \label{hdef} 1- h = 2 \int \frac{d^2 {\bf k}} {(2\pi)^2 } \, \inv{ e^{\xi({\bf k})/T} +1 } \end{equation} This is clearly an approximation since there are self-energy corrections that modify $\xi_{\bf k}$ which we ignore; however, since the experimentally measured Fermi surfaces can be fit to a $\xi_{\bf k}$ of a tight-binding form, we do not expect these corrections to drastically affect the main features of our results. Henceforth, we express various properties in terms of $h$ defined by the above equation in the limit $T\to 0$. 
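Eq. (\ref{hdef}) can be evaluated by a grid sum. At $\mu = 0$ the particle-hole symmetry of $\omega_{\bf k}$ forces half filling, $h = 0$, and lowering $\mu$ increases the hole doping. A sketch under our own grid and temperature choices (not the paper's code):

```python
import numpy as np

# Sketch: hole doping h(mu) from eq. (hdef), 1 - h = 2 <f(xi_k)>,
# averaged over an N x N grid covering the Brillouin zone.
def hole_doping(mu, T=0.02, N=400):
    k = np.linspace(-np.pi, np.pi, N, endpoint=False)
    kx, ky = np.meshgrid(k, k)
    xi = -2.0 * (np.cos(kx) + np.cos(ky)) - mu
    f = 0.5 * (1.0 - np.tanh(xi / (2.0 * T)))   # Fermi occupation
    return 1.0 - 2.0 * f.mean()
```

In practice one inverts this relation numerically (e.g. by bisection in $\mu$) to fix a target doping such as $h = 0.15$.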
Let us first illustrate our findings at the fixed doping $h=0.15$. In Figure \ref{Lexh2} we plot $\hat{L}^{(ex)}$ for both ${\bf k}, {\bf k}'$ in the anti-nodal direction as a function of $\Delta_\xi$ at the low temperature $T_0=0.001$ (compared to the bandwidth). One sees that for small enough $\Delta_\xi$, i.e. close enough to the Fermi surface, $\hat{L}^{(ex)}$ becomes positive. As in the case of the BCS instability reviewed above, let us identify the gap $\Delta_{\rm gap}=\Delta_\xi$ with the location of the pole in $\CV_{\rm eff}^{(ex)}$ in the anti-nodal direction, i.e. the solution to the equation: \begin{equation} \label{gapeqn} \inv{g} = \hat{L}^{(ex)} ( \Delta_{\rm gap}, \mu, T) \end{equation} where it is implicit that $\theta = \theta' =0$. For $g = 10$, appropriate to the cuprates, one sees from Figure \ref{Lexh2} that $\Delta_{\rm gap} \approx 0.045$. Two other features are apparent from Figure \ref{Lexh2}: \medskip \noindent (i) There is a solution to eqn. (\ref{gapeqn}) for arbitrarily large $g$ since $\hat{L}^{(ex)}$ passes through zero. Furthermore, as $g \to \infty$, $\Delta_{\rm gap} $ saturates to where $\hat{L}^{(ex)}$ crosses the real axis, in this case $\Delta_{\rm gap} \approx 0.047$. This is reminiscent of claims of superconductivity in the t-J model\cite{Maier}, since it is a strong coupling version of the Hubbard model, and since the interactions considered here are also instantaneous. \medskip \noindent (ii) Since $\hat{L}^{(ex)}$ has a maximum, there are only solutions to eqn. (\ref{gapeqn}) for $g$ larger than a minimum value, in this case $g$ greater than approximately $0.2$. Since no such threshold was found in \cite{Raghu}, one should conclude that the pairing mechanism presented here is essentially different. The solutions $\Delta_{\rm gap}$ to eqn. (\ref{gapeqn}) depend on the temperature. However, since we view $T$ as an infra-red regulator, one should think of this in terms of the renormalization group. 
Namely, $g$ can be made to depend on $T$ in such a manner as to keep the solution $\Delta_{\rm gap} $ fixed. One finds that $g$ increases with decreasing $T$. For instance, at $h=0.15$, if $g = 1$ at $T_0 = 10^{-3}$, then $g \approx 2.3$ for $T_0 = 10^{-4}$. We emphasize that there is only one free parameter in our calculation, the value of $g$ at the reference temperature $T_0$. Henceforth we fix the reference temperature $T_0 = 0.001$, keeping $g =10$. \begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{x}{$\Delta_\xi$} \psfrag{y}{$\hat{L}^{(ex)}$} \includegraphics[width=7cm]{Lexh15.eps} \end{center} \caption{The loop integral $\hat{L}^{(ex)}$ in the anti-nodal direction as a function of $\Delta_\xi$ for hole doping $h=0.15$ and temperature $T_0=0.001$. The straight line represents $1/g =0.1$ and where it intersects $\hat{L}^{(ex)}$ gives the value of the gap $\Delta_{\rm gap}$.} \vspace{-2mm} \label{Lexh2} \end{figure} In order to understand how $\Delta_{\rm gap}$ depends on doping, in Figure \ref{Lexh3} we plot $\hat{L}^{(ex)}$ for $h=0.05, 0.15$ and $0.24$. One sees that as $h$ is decreased, $\hat{L}^{(ex)}$ crosses zero at smaller values of $\Delta_\xi$, signifying a smaller $\Delta_{\rm gap}$. Finally, when $h$ is large enough, $\hat{L}^{(ex)}$ is nowhere positive, signifying no solution to eqn. (\ref{gapeqn}). In Figure \ref{Gapofh} is shown the anti-nodal solution $\Delta_{\rm gap}$ as a function of doping $h$. The largest gap $\Delta_{\rm gap} =0.08$ occurs around $h=0.11$. For the cuprates, $t \approx 0.4\, {\rm eV}$, which gives $\Delta_{\rm gap} = 32\, {\rm meV}$, which compares favorably with experiments. It is important to note that reasonable values for $\Delta_{\rm gap}$ were obtained without introducing any explicit cut-off in momentum space related to the bandwidth. 
\begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{x}{$\Delta_\xi$} \psfrag{y}{$\hat{L}^{(ex)}$} \psfrag{h1}{$h=0.15$} \psfrag{h2}{$h=0.24$} \includegraphics[width=10cm]{Lexh3.eps} \end{center} \caption{The loop integral $\hat{L}^{(ex)}$ in the anti-nodal direction as a function of $\Delta_\xi$ for hole doping $h=0.05, 0.15$ and $0.24$.} \vspace{-2mm} \label{Lexh3} \end{figure} \begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{y}{$\Delta_{\rm gap}$} \psfrag{x}{$h={\rm hole ~ doping}$} \includegraphics[width=7cm]{Gapofh2.eps} \end{center} \caption{$\Delta_{\rm gap}$ in the anti-nodal direction as a function of hole doping $h$.} \vspace{-2mm} \label{Gapofh} \end{figure} \begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{y}{$\Delta_{\rm gap}$} \psfrag{x}{$g$} \includegraphics[width=7cm]{Gapofg.eps} \end{center} \caption{$\Delta_{\rm gap}$ in the anti-nodal direction as a function of $g$ (hole-doping $h=0.15$).} \vspace{-2mm} \label{Gapofg} \end{figure} Numerically, we found no solutions to eqn. (\ref{gapeqn}) in the nodal direction. In fact, solutions only exist in a narrow range of directions around the anti-nodal region. Since the anti-nodal direction is along a 1-dimensional chain of lattice sites, this suggests that the effect we are describing is closely tied to the properties of the 1-dimensional chains studied in the appendix. Although this may help to explain the d-wave nature of the gap in the cuprates, it by no means establishes it. The potential $\Veffex$ is invariant under $90^\circ$ rotations of the Brillouin zone, as is the hamiltonian. A d-wave gap, by definition, spontaneously breaks this symmetry, i.e. it alternates in sign under such a rotation, thus one cannot deduce a d-wave gap from the poles in $\Veffex$ alone. In order to study this further, one needs a BCS-like gap equation built upon the effective interaction $\Veffex$, or equations analogous to those in \cite{Raghu}, which is beyond the scope of this paper. 
It is known that a BCS gap equation based on a potential $V( {\bf k}, {\bf k}')$ which is invariant under $90^\circ$ rotations has both s- and d-wave solutions\cite{Kotliar}. \section{Conclusions} In summary, we identified the possibility of Cooper pairing instabilities which arise as poles in certain Green functions in the two dimensional Hubbard model, in a manner analogous to the BCS theory. The only parameter in the calculation is the value of the coupling $U/t = g$ at the reference temperature $T_0 /t = 0.001$, which we took to be $g=10$. Reasonable magnitudes for the gap in the anti-nodal direction were found, $\Delta_{\rm gap}/t < 0.08$, in the region of hole doping $0.03 < h < 0.24$, without introducing an explicit cut-off. In the BCS theory, the analogous poles are `resolved' by the BCS gap equation. The latter has not been developed in the present case, since it is not clear that the usual BCS gap equation with the effective pair potential studied in this paper is valid; this should be investigated as a next step in exploring the consequences of the pairing mechanism identified in this work. Certainly the very existence of these poles implies that the effective pairing potential can change sign, thereby becoming attractive, thus non-zero solutions to the proper gap equation should exist. \section{Acknowledgments} We wish to thank Neil Ashcroft and Erich Mueller for discussions. This work is supported by the National Science Foundation under grant number NSF-PHY-0757868. \section{Appendix: Chains: The loop integrals in one dimension} In this appendix we study the one-dimensional version of the integrals for $L^{(ex)}$, i.e. eq. (\ref{Lexdef}) with $\int d^2 {\bf p} /(2\pi)^2 \to \int d{\bf p} /2\pi $ where ${\bf p}$ is now a one-dimensional vector, and $\omega_{\bf k} = -2 \cos {\bf k} $. 
In the limit of zero temperature, the occupation number $f$ becomes a step function, and the integral can be performed analytically. Namely, \begin{equation} \label{A1} L^{(ex)} = L_+ - L_- \end{equation} where \begin{equation} \label{A2} L_+ = \inv{2\pi} \int_{|{\bf p}| \leq {\bf k}_F (\mu) } \( \frac{d{\bf p}}{\Delta_\xi + \xi_{\bf p} - \xi_{{\bf p} + {\bf{q}}}} \) , ~~~~~ L_- = \inv{2\pi} \int_{|{\bf p}+ {\bf{q}} | \leq {\bf k}_F (\mu) } \( \frac{d{\bf p}}{\Delta_\xi + \xi_{\bf p} - \xi_{{\bf p} + {\bf{q}}}} \) \end{equation} where ${\bf k}_F (\mu) = \arccos (-\mu/2)$ is the Fermi momentum. In the region of small ${\bf{q}}$ that we are interested in, the appropriate branch of the above integrals is given in terms of the functions \begin{equation} \label{Idef} I( {\bf p} ) = \inv{\pi \sqrt{\Delta_\xi^2 - 8 ( 1- \cos {\bf{q}} )}} \arctan \( \frac{ (\Delta_\xi + 2 - 2 \cos {\bf{q}} ) \tan ({\bf p}/2) -2 \sin {\bf{q}} } {\sqrt{ \Delta_\xi^2 - 8 (1-\cos {\bf{q}})}} \), \end{equation} as follows: \begin{equation} \label{A3} L_+ = I ( {\bf k}_F (\mu) ) - I (-{\bf k}_F (\mu) ), ~~~~~ L_- = I ({\bf k}_F (\mu) - {\bf{q}} ) - I(-{\bf k}_F (\mu) -{\bf{q}} ) \end{equation} In the limit of zero temperature, $L^{(ex)}$ is then only a function of ${\bf{q}}, \Delta_\xi$ and the chemical potential $\mu$, by using the identities \begin{equation} \label{A4} \tan( {\bf k}_F (\mu)/2) = \sqrt{ \frac{ 2+\mu}{2-\mu} }, ~~~~ \tan ( ({\bf k}_F (\mu) -{\bf{q}})/2) = \frac{ \sqrt{2+\mu} - \sqrt{2-\mu} \tan({\bf{q}}/2) } {\sqrt{2-\mu} + \sqrt{2+\mu} \tan ({\bf{q}}/2) } \end{equation} Using the above expressions, one can show that $L^{(ex)}$ can indeed be positive, which is required for pole singularities in $\CV_{\rm eff}^{(ex)}$. 
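As a side check, the identities (\ref{A4}) follow from the half-angle formula together with $\cos {\bf k}_F = -\mu/2$ and the tangent subtraction formula; a quick numerical verification of ours:

```python
import numpy as np

# Check of the identities in eq. (A4), with k_F(mu) = arccos(-mu/2).
# Returns the absolute errors of the two identities.
def check_A4(mu, q=0.2):
    kF = np.arccos(-mu / 2.0)
    sp, sm = np.sqrt(2.0 + mu), np.sqrt(2.0 - mu)
    err1 = abs(np.tan(kF / 2.0) - sp / sm)
    tq = np.tan(q / 2.0)
    err2 = abs(np.tan((kF - q) / 2.0) - (sp - sm * tq) / (sm + sp * tq))
    return err1, err2
```

Both identities hold to machine precision for any $|\mu| < 2$, i.e. for any chemical potential inside the band.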
For instance, for small $q/\Delta_\xi$, $L^{(ex)}$ has the following asymptotic form: \begin{equation} \label{Lexsmallq} L^{(ex)} (q, \Delta_\xi, \mu) \approx \frac{ q^2 \sqrt{4-\mu^2}}{ \Delta_\xi^2 \pi } \( 1 + \frac{q^2}{\Delta_\xi^2} \( 4 - \mu^2 - \Delta_\xi^2/12 \) \) \end{equation} Whereas ${\bf{q}}$ and $\Delta_\xi$ were treated as independent in the above formulas, physically we are interested in the situation where ${\bf{q}} = {\bf k}' - {\bf k}$ and $\Delta_\xi = \xi_{{\bf k}'} - \xi_{\bf k}$, referred to as `on-shell' above. In this case there is a divergence in $L^{(ex)}$ which arises from the pole in the integrand when ${\bf p} = {\bf k}$. Since we are interested in ${\bf k}, {\bf k}'$ near the Fermi surface, let us be more specific and let ${\bf k} = {\bf k}_F (\mu) $ be right on the Fermi surface, and ${\bf k}'$ be slightly off of it, with a small $\xi_{{\bf k}'}$. Here $\Delta_\xi = \xi_{{\bf k}'} $ and the singularity occurs at ${\bf{q}}_F \equiv \arccos(-(\xi_{{\bf k}'} + \mu)/2) - {\bf k}_F (\mu)$. For $\xi_{{\bf k}'} =0.01$ and $\mu = -0.618$, which corresponds to hole doping $h=0.2$, the singularity is at ${\bf{q}}_F = \pm 0.0053$, which is shown in Figure \ref{qc}. Analytically, this divergence arises as $\arctan (i)$ in the above formulas. \begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{x}{$q$} \psfrag{y}{$L^{(ex)}$} \psfrag{qc}{$q_F$} \includegraphics[width=10cm]{qc.eps} \end{center} \caption{The loop integral $L^{(ex)} $ as a function of $q$ for $\Delta_\xi =0.01$ and hole doping $h=0.20$.} \vspace{-2mm} \label{qc} \end{figure} The ${\bf p}$-integral for $L^{(ex)}$ should thus be understood as the Cauchy principal value (PV), i.e. $\int d{\bf p} \to \lim_{\delta \to 0} \( \int_{-\pi}^{{\bf k}_F - \delta} d{\bf p} + \int_{{\bf k}_F + \delta}^\pi d {\bf p} \)$. For finite temperature $T$, one can check numerically that this PV integral is well-defined and finite. 
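The leading term of the asymptotic form (\ref{Lexsmallq}) can be checked against a direct $T=0$, off-shell evaluation of eqs. (\ref{A1})--(\ref{A2}): the step-function occupations differ only on the two slivers $(k_F - q, k_F)$ and $(-k_F - q, -k_F)$, and for $\Delta_\xi$ large compared to $\max|\xi_{\bf p} - \xi_{{\bf p}+{\bf q}}| \sim 2q$ the integrand is pole-free. A numerical sketch of ours, with illustrative parameters:

```python
import numpy as np

def trap(f, a, b, n=20001):
    x = np.linspace(a, b, n)
    y = f(x)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Sketch: exact T=0, off-shell 1d L^(ex) from eqs. (A1)-(A2), reduced
# to the two intervals where the step functions differ (q > 0).
def L_ex_1d(q, d_xi, mu):
    kF = np.arccos(-mu / 2.0)
    den = lambda p: d_xi + 2.0 * (np.cos(p + q) - np.cos(p))
    I_plus  = trap(lambda p: 1.0 / den(p), kF - q, kF)
    I_minus = trap(lambda p: 1.0 / den(p), -kF - q, -kF)
    return (I_plus - I_minus) / (2.0 * np.pi)

q, d_xi, mu = 0.02, 0.5, -0.5     # illustrative, pole-free regime
exact = L_ex_1d(q, d_xi, mu)
leading = q ** 2 * np.sqrt(4.0 - mu ** 2) / (np.pi * d_xi ** 2)
```

At these parameters the exact value is positive and agrees with the leading term of (\ref{Lexsmallq}) to within about one percent.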
However, it is important to note that $L^{(ex)}$ continues to be divergent as $T\to 0$, which the above analytic formulas demonstrate. The reason is that in the limit of zero $T$, $f$ is a step function and the $\int_{{\bf k}_F + \delta}^\pi d{\bf p}$ piece of the PV is absent, leading it to be ill-defined. This is interpreted as an infra-red divergence that needs to be regulated by a finite $T$, however small. Let us now turn to the 1-d analogs of the gap $\Delta_{\rm gap}$ defined in section III. In this 1d case, because of the divergence as $T\to 0$, it makes sense to define $\Delta_{\rm gap}$ as solutions to eqn. (\ref{gapeqn}) with $T=\Delta_{\rm gap}$, i.e. to determine $\Delta_{\rm gap}$ at a temperature comparable to it. The result is shown in Figure \ref{Gapofh1D}, where $g$ was taken to be the screened coupling for a bare $g=1$. In Figure \ref{Deltaofg} we plot $\Delta_{\rm gap}$ as a function of the bare coupling $g$; here, contrary to the 2d case, there is no minimal value of $g$ required for the existence of solutions. On the other hand, $\Delta_{\rm gap}$ saturates with increasing $g$ as in the two-dimensional case. \begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{x}{$h$} \psfrag{y}{$\Delta_{\rm gap}$} \includegraphics[width=7cm]{Gapofh1D.eps} \end{center} \caption{The $\Delta_{\rm gap}$ pole in $\CV_{\rm eff}^{(ex)}$ as a function of hole doping $h$ for bare coupling $g=1$. } \vspace{-2mm} \label{Gapofh1D} \end{figure} \begin{figure}[htb] \begin{center} \hspace{-15mm} \psfrag{x}{$g$} \psfrag{y}{$\Delta_{\rm gap}$} \includegraphics[width=7cm]{Deltaofg.eps} \end{center} \caption{The $\Delta_{\rm gap}$ pole in $\CV_{\rm eff}^{(ex)}$ as a function of coupling $g$ for hole doping $h=0.15$.} \vspace{-2mm} \label{Deltaofg} \end{figure}
\section{Introduction} One of the more fascinating objects in low dimensional topology is the colored Jones polynomial. To a knot $K$ one assigns a sequence of knot polynomials $J_{N,K}(q)$ in the Laurent ring $\mathbb{Z}[q,1/q]$ that is indexed by a natural number $N$. For each $N$ the colored Jones polynomial $J_{N,K}(q)$ is normalized to be $1$ for $K$ the unknot. The ordinary Jones polynomial is $J_{2,K}(q)$. It was shown by Xiao-Song Lin and the second author \cite{DasbachLin:HeadAndTail} that for an alternating link $K$ the three leading coefficients of the colored Jones polynomial $J_{N,K}(q)$ are up to a common sign independent of $N$, for $N\geq 3$. Furthermore, the volume of the hyperbolic link complement of an alternating link is bounded from above and below linearly in the absolute values of the second and the penultimate coefficient of the colored Jones polynomial. It was conjectured \cite{DasbachLin:HeadAndTail} that more generally for an alternating link $K$ and every $N$ the $N$ leading coefficients of the colored Jones polynomial $J_{N,K}$ stabilize, i.e. they are - up to a sign - the $N$ leading coefficients of $J_{N+1,K}(q)$. For the right-handed trefoil and for the figure-8 knot those first $N$ coefficients are the first $N$ coefficients of the Dedekind $\eta$-function in $q=e^{2 \pi i \tau}$ (sometimes called the Euler function). For the left-handed trefoil the first coefficient is $1$ followed by $N-1$ vanishing terms. A power series that describes the first $N$ coefficients of the colored Jones polynomial is called the {\it tail}. The tail of $J_{N,K}(1/q)$ is called the {\it head}. For non-alternating links a tail does not exist in general, e.g. for (negative) $(p,q)$-torus knots with $p, q>2$. The proof of the conjecture for all alternating links was recently completed and will be posted in a separate paper by the first author \cite{Armond:HeadAndTailConjecture}. We do not make use of this result here.
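The coefficient pattern mentioned above is easy to generate: by Euler's pentagonal number theorem the Euler function $(q;q)_\infty = \prod_{n\geq 1}(1-q^n)$ has the sparse expansion $\sum_{k\in\mathbb{Z}} (-1)^k q^{k(3k-1)/2}$, so its first $N$ coefficients (and hence, by the result quoted above, the stable leading coefficients for the trefoil and the figure-8 knot) can be computed either way. A short sketch:

```python
def euler_product(N):
    """Coefficients of (q;q)_inf = prod_{n>=1} (1 - q^n) modulo q^N."""
    coeffs = [0] * N
    coeffs[0] = 1
    for n in range(1, N):
        # multiply in place by (1 - q^n), highest degree first
        for d in range(N - 1, n - 1, -1):
            coeffs[d] -= coeffs[d - n]
    return coeffs

def pentagonal_series(N):
    """Same series via Euler's pentagonal number theorem:
    sum over k in Z of (-1)^k q^{k(3k-1)/2}."""
    coeffs = [0] * N
    k = 0
    while True:
        hit = False
        e1 = k * (3 * k - 1) // 2   # exponent for +k
        e2 = k * (3 * k + 1) // 2   # exponent for -k
        if e1 < N:
            coeffs[e1] += (-1) ** k
            hit = True
        if k > 0 and e2 < N:
            coeffs[e2] += (-1) ** k
            hit = True
        if not hit:
            break
        k += 1
    return coeffs
```

The two computations agree term by term, giving the familiar sparse sequence $1 - q - q^2 + q^5 + q^7 - \cdots$.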
We learned that independently, L\^e and Garoufalidis also announced a proof of the conjecture for alternating knots \cite{Zagier:TalkAtWalterfest}. The main purpose of this paper is to understand the head and the tail of the colored Jones polynomial while focusing mainly on alternating knots. The paper has three parts. First, different ways to compute the colored Jones polynomial of knots and thus their heads and tails lead to number theoretic identities of those power series. We give examples, and we will recover and reprove some well-known classical identities, e.g. \begin{maintheorem}[Ramanujan] $$\sum_{k=0}^{\infty} (-1)^k q^{(k^2+k)/2} = (q;q)_{\infty}^2 \sum_{k=0}^{\infty} \frac {q^k} {(q;q)_k^2},$$ \end{maintheorem} where $(q;q)_k=\prod_{j=1}^{k} (1-q^j)$ are the $q$-Pochhammer symbols. Second, we will show that the head and tail functions for alternating knots only depend on the reduced checkerboard graphs of the knots: \begin{maintheorem} Let $K_1$ and $K_2$ be two alternating links together with alternating diagrams with $A$-checkerboard graphs (respectively $B$-checkerboard graphs) $G_1$ and $G_2$. The reduced graphs $G_1'$ and $G_2'$ are obtained from $G_1$ and $G_2$ by replacing parallel edges by single edges. Then if $G_1'=G_2'$ the tails (respectively the heads) of the colored Jones polynomials of $K_1$ and $K_2$ coincide. \end{maintheorem} More geometrically, this means that if in an alternating link diagram $\pm k$ half twists are replaced by $\pm (k + l)$ half twists for some positive numbers $k$ and $l$, then either the head or the tail of the colored Jones polynomial does not change. For example, this explains why the tail of the colored Jones polynomial of the right-handed trefoil equals the tail of the colored Jones polynomial of the figure-8 knot. Third, we will show a product structure on the heads and tails for prime alternating links.
\begin{maintheorem} For an arbitrary pair of prime alternating links a new prime alternating link can be constructed such that the tail of its colored Jones polynomial is the product of the tails of the colored Jones polynomials of the two links. In this sense the heads and the tails of prime alternating links form a monoid. \end{maintheorem} Lawrence and Zagier \cite{LawrenceZagier:ModularFormsQuantumInvariants} were the first to realize the occurrence of modular forms in the study of quantum invariants. Hikami \cite{Hikami:VCAsymptoticExpansion} understood that the series in the second Rogers-Ramanujan identity and in the Andrews-Gordon identities appear in the study of the colored Jones polynomial of torus knots. However, our point of view is very different in that the restriction to only the head and tail of the colored Jones polynomial allows one, in principle, to prove combinatorial identities for heads of colored Jones polynomials of alternating knots in general, in a structured way. Some examples of heads and tails for knots with small crossing number were recently computed by Garoufalidis, L\^e and Zagier \cite{Zagier:TalkAtWalterfest}. Many of the phenomena in their table can be explained with the methods developed here. Habiro \cite{Habiro:KnotsInPoland} computed examples of tails of the coefficient functions in the Habiro expansion of the colored Jones polynomial \cite{Habiro:Cyclotomic}. Those coefficient functions are called the reduced colored Jones polynomials. The paper is structured as follows: In Section \ref{SettingTheScene} we give the necessary background from number theory and recall three combinatorial ways to compute the colored Jones polynomial. The first is a skein theoretical approach, the second a combinatorial interpretation by the first author \cite{Armond:Walks} of the quantum determinant formulation for the colored Jones polynomial of Huynh and L\^e \cite{VuLe:ColoredJonesDeterminant}.
The third way is a combinatorial model based on a description of the colored Jones polynomial by Murakami \cite{Murakami:IntroToVolumeConjecture}. It generalizes ideas of Lin and Wang \cite{LinWang:RandomWalkColoredJones}. Section \ref{DependenceOnReducedCheckerboardGraph} will show that the head and tail functions only depend on the reduced checkerboard graphs of the knot. Section \ref{ProductFormula} will give the product formula for heads and tails. In a final section we will briefly discuss torus knots as examples of knots that are not alternating. \noindent {\bf Acknowledgements: } Some of the initial ideas to this paper are based on work of the second author with Xiao-Song Lin. Moreover, we thank Pat Gilmer, Kazuo Habiro, Effie Kalfagianni, Matilde Lalin, Gregor Masbaum, Roland van der Veen and Don Zagier for helpful discussions at various occasions over the years. The first author thanks Thang L\^e and Stavros Garoufalidis for stimulating discussions and the hospitality during a recent visit to Georgia Tech University. Very important to the results in this paper were the search sites KnotInfo by Cha and Livingston \cite{ChaLivingston:KnotInfo} and The On-Line Encyclopedia of Integer Sequences by Sloane \cite{Sloane:IntegerSequences}. Furthermore, Bar-Natan's Mathematica package KnotTheory \cite{BarNatan:KnotTheory} contains an implementation of an algorithm of Garoufalidis and L\^e to compute the colored Jones polynomial, which is feasible for knots with small crossing number and color $N<9$. \section{Setting the scene} \label{SettingTheScene} \subsection{Number theoretical background} The $q$-Pochhammer symbol is defined as $$\qP a q n :=\prod _{j=0}^{n-1} (1- a q^j)$$ and a fundamental identity is given by the Jacobi triple product identity (e.g.
\cite{Wilf:JacobiTripleProduct}): $$\sum_{m=-\infty}^{\infty} x^{m^2} y^{2m} = \prod_{n=1}^{\infty} (1-x^{2n}) (1+x^{2n-1} y^2) \left (1+\frac{x^{2n-1}} {y^2} \right ).$$ Two functions will be useful tools in our computations: the Ramanujan general theta function and the false theta function. \begin{df} We will follow the notations in the literature: \begin{enumerate} \item The general (two variable) Ramanujan theta function (e.g. \cite{AndrewsBerndt:RamanujansLostNotebookI}): \begin{eqnarray*} f(a,b)&:=&\sum_{k=-\infty}^{\infty} a^{k(k+1)/2} b^{k (k-1)/2}\\ &=& \sum_{k=0}^{\infty} a^{k(k+1)/2} b^{k(k-1)/2} + \sum_{k=1}^{\infty} a^{k(k-1)/2} b^{k(k+1)/2} \end{eqnarray*} By the Jacobi triple product identity $f(a,b)$ can be expressed as: $$f(a,b)=\qP {-a} {a b} {\infty} \qP {-b} {a b} {\infty} \qP {a b} {a b} {\infty} .$$ In particular $f(-q^2,-q)=(q;q)_{\infty}$. Note that $f(a,b)=f(b,a).$ \item The false theta function (e.g. \cite{McLaughlin:RogersRamanujanSlater}): $$\Psi(a,b):=\sum_{k=0}^{\infty} a^{k(k+1)/2} b^{k(k-1)/2} - \sum_{k=1}^{\infty} a^{k(k-1)/2} b^{k(k+1)/2}$$ Note that $\Psi(a,b)-2 = -\Psi(b,a).$ \end{enumerate} \end{df} \subsubsection{Rogers-Ramanujan type identities and page 200 in Ramanujan's lost notebook} \begin{enumerate} \item In terms of the Ramanujan theta function the celebrated (second) Rogers-Ramanujan identity states: $$ f(-q^4,-q) = (q;q)_{\infty} \sum_{k=0}^{\infty} \frac{q^{k^2+k}}{(q;q)_k} $$ The Andrews-Gordon identities are generalizations of the second Rogers-Ramanujan identity: \begin{equation} \label{Andrews-Gordon} f(-q^{2k},-q)= (q;q)_{\infty} \sum_{n_1,\dots,n_{k-1}\geq 0} \frac {q^{N_1^2+\dots+N_{k-1}^2+N_1+\dots+N_{k-1}}} {(q;q)_{n_1} \cdots (q;q)_{n_{k-1}}} \end{equation} with $N_j$ defined as $$N_j=n_j+\dots+n_{k-1}.$$ \item A corresponding identity for the false theta function is: \begin{eqnarray*} \Psi(q^3,q)&=& \sum_{k=0}^{\infty} (-1)^k q^{(k^2+k)/2}\\ &=& (q;q)_{\infty} \sum_{k=0}^{\infty}
\frac{q^{k^2+k}}{(q;q)^2_k} \hspace{1cm} \mbox{ (Ramanujan's notebook, Part III, Entry 9 \cite{RamanujanNotebooks:PartIII})}\\ &=& (q;q)^2_{\infty} \sum_{k=0}^{\infty} \frac{q^k} {(q;q)_k^2} \hspace{1cm} \mbox{ (Ramanujan's lost notebook; page 200 \cite{AndrewsBerndt:RamanujansLostNotebookI})} \end{eqnarray*} \end{enumerate} The equality between the series in line 1 and line 3 will be proven in this paper. \subsection{How to compute the colored Jones polynomial?} We briefly recall three ways to compute the colored Jones polynomial of a link that are going to be important to us. For links we assume that all components are colored with the same color. For a link $L$ we denote by $J_{N,L}(q)$ the reduced colored Jones polynomial of the link, i.e. for an unknot $U$ the colored Jones polynomial is $J_{N,U}(q)=1.$ Furthermore, for $N=2$ the colored Jones polynomial $J_{2,K}(q)$ equals the ordinary Jones polynomial of $K$. The first approach is the skein theoretical approach. It gives, with $A^{-4}=q$, a graphical calculus to compute the (unreduced) colored Jones polynomial $J_{N+1,L}(q)$ up to a power of $q$. \subsubsection{Skein theory} \label{SectionSkeinTheory} For details see e.g. \cite{Lickorish:KnotTheoryBook, MasbaumVogel:3valentGraphs}. By convention an $n$ next to a component of a link indicates that the component is replaced by $n$ parallel ones. The Jones-Wenzl idempotent is indicated by a box on a component.
It can be defined as follows: With $$\Delta_{n}:=(-1)^{n} \frac {A^{2(n+1)}-A^{-2 (n+1)}}{A^{2}-A^{-2}}$$ and $\Delta_{n}!:= \Delta_{n} \Delta_{n-1} \dots \Delta_{1}$ the Jones-Wenzl idempotent satisfies $$\begin{tikzpicture}[baseline=0.8cm] \draw(0,0)--(0,1) node[rectangle, draw, ultra thick, fill=white]{ } --(0,2) node[right]{\scriptsize{n+1}}; \end{tikzpicture} = \begin{tikzpicture}[baseline=0.8cm] \draw(0,0)--(0,1) node[rectangle, draw, ultra thick, fill=white]{ } --(0,2) node[right]{\scriptsize{n}}; \draw(0.7,0)--(0.7,2) node[right]{\scriptsize{1}}; \end{tikzpicture} - \left( \frac {\Delta_{n-1}} {\Delta_{n}} \right ) \, \, \begin{tikzpicture}[baseline=0.8cm] \draw (0,0) node[right]{\scriptsize{n}}--(0,1) node[left]{\scriptsize{n-1}} --(0,2) node[right]{\scriptsize{n}}; \draw[ultra thick, fill=white] (-0.2 ,0.5) rectangle (0.4, 0.65); \draw (0.2, 0.65) arc (180:0:0.25); \draw (0.7,0) node[right]{1}--(0.7, 0.65); \draw[ultra thick, fill=white] (-0.2 ,1.5) rectangle (0.4, 1.35); \draw (0.2, 1.35) arc (-180:0:0.25); \draw (0.7,2) node [right]{\scriptsize{1}} --(0.7, 1.35); \end{tikzpicture} $$ with the properties $$\begin{tikzpicture}[baseline=0.8cm] \draw(0,0)--(0,1) node[rectangle, draw, ultra thick, fill=white]{ } --(0,2) node[right]{\scriptsize{i+j}}; \end{tikzpicture} = \begin{tikzpicture}[baseline=0.8cm] \draw(0,0) node[right]{\scriptsize{i+j}}--(0,0.6); \draw[ultra thick, fill=white] (-.4,0.6) rectangle (.4, 0.8); \draw (0.2,0.8)-- (0.2,1.4) node[rectangle, draw, ultra thick, fill=white]{ } -- (0.2,2)node[right]{\scriptsize{j}}; \draw (-0.2,0.8)--(-0.2,2) node[right]{\scriptsize{i}}; \end{tikzpicture}, \, \qquad \qquad \qquad \begin{tikzpicture}[baseline=0.8cm] \draw(0,0)--(0,1) node[rectangle, draw, ultra thick, fill=white]{ } --(0,2) node[right]{\scriptsize{1}}; \end{tikzpicture} = \, \, \begin{tikzpicture}[baseline=0.8cm] \draw(0,0)--(0,2) node[right]{\scriptsize{1}}; \end{tikzpicture}$$ and $$\begin{tikzpicture}[baseline=0cm] \draw (0,0) circle(0.8); 
\draw (-0.8,0) node[left] {\scriptsize{n}}; \draw (0.75,0) node[rectangle, draw, ultra thick, fill=white]{ } ; \end{tikzpicture} = \Delta_{{n}}. $$ A triple $(a,b,c)$ is admissible if \begin{enumerate} \item $a+b+c$ is even \item $|a -b | \leq c \leq a+b.$ \end{enumerate} For an admissible triple $(a,b,c)$ a trivalent vertex is defined by: $$ \begin{tikzpicture}[baseline=2.8cm] \draw (1,1.6) node[right]{\scriptsize{c}}-- (1,3); \draw (1,3)--(0,4) node[left] {\scriptsize{a}}; \draw (1,3)--(2,4) node[right] {\scriptsize{b}}; \fill (1,3) circle (0.07); \end{tikzpicture} = \begin{tikzpicture}[baseline=2.8cm] \draw (1.7,3.5)--(1.7,4) node[right] {\scriptsize{b}}; \draw (0.3,3.5)--(0.3,4) node[left] {\scriptsize{a}}; \draw[ultra thick, fill=white] (1.4,3.35) rectangle (2.0, 3.5); \draw[ultra thick, fill=white] (0,3.35) rectangle (0.6, 3.5); \draw (0.45,3.35) .. controls (0.8,2.9) and (1.2,2.9) .. (1.55, 3.35); \draw (1, 3.2) node {\scriptsize{k}}; \draw (1,1.6) node[right]{\scriptsize{c}}-- (1,2.1); \draw[ultra thick, fill=white] (0.7,2.1) rectangle (1.3, 2.25); \draw (0.85,2.25)-- (0.15, 3.35) ; \draw (0.45, 2.8) node[left] {\scriptsize{i}}; \draw(1.15,2.25)--(1.85, 3.35); \draw (1.5,2.8) node[right] {\scriptsize{j}}; \end{tikzpicture} $$ where $i,j$ and $k$ are defined by $i+k=a, j+k=b$ and $i+j=c$. In a trivalent diagram all edges are implied to be equipped with the Jones-Wenzl idempotent. Fusion is given by $$\begin{tikzpicture}[baseline=0.8cm] \draw (0.7,0) .. controls (0.2 ,0.6) and (0.2,1.4) .. (0.7,2) node[right]{\scriptsize{b}}; \draw (0.35,1) node[shape=rectangle, draw, ultra thick, fill=white]{ }; \draw (-0.7,0).. controls (-.2,0.6) and (-0.2,1.4) ..
(-0.7,2) node[left]{\scriptsize{a}}; \draw (-0.35,1) node[rectangle, draw, ultra thick, fill=white]{ }; \end{tikzpicture} = \sum_{c} \frac {\Delta_c} {\theta(a,b,c)} \begin{tikzpicture}[baseline=0.8cm] \draw (1,0) node[right]{\scriptsize{b}} --(0.5,0.5); \draw (0,0) node[left]{\scriptsize{a}} --(0.5,0.5); \draw (0.5,0.5)--(0.5,1) node[right]{\scriptsize{c}}-- (0.5,1.5); \draw (0.5,1.5)--(0,2) node[left] {\scriptsize{a}}; \draw (0.5,1.5)--(1,2) node[right] {\scriptsize{b}}; \fill (0.5,0.5) circle (0.07); \fill (0.5,1.5) circle (0.07); \end{tikzpicture} $$ where the sum is over all $c$ such that $(a,b,c)$ is admissible. \bigskip To define $\theta(a,b,c)$ let $a,b$ and $c$ be related as above and let $x, y$ and $z$ be defined by $a=y+z, b=z+x$ and $c=x+y$; then $$\theta(a,b,c):= \, \begin{tikzpicture}[baseline=0.8cm, rounded corners=2mm] \draw (0,1) ellipse (0.6 and 1) ; \draw (0.7,0.96) node[right]{\scriptsize{c}} ; \draw (-0.7,0.96) node[left]{\scriptsize{a}}; \draw (0,0)--(0,1) node[right]{\scriptsize{b}}--(0,2); \fill (0,0) circle (0.07); \fill (0,2) circle (0.07); \end{tikzpicture}$$ and one can show that $$\theta(a,b,c)= \frac { \Delta_{x+y+z}! \Delta_{x-1}! \Delta_{y-1}! \Delta_{z-1}! } {\Delta_{y+z-1}! \Delta_{z+x-1}! \Delta_{x+y-1}!}.$$ Furthermore, one has: $$\begin{tikzpicture}[baseline=.5cm, rounded corners=2mm] \draw(0,0)-- (-.5,.5) -- (-.1,0.9); \draw (.1,1.1)--(.5,1.5) node[right]{\scriptsize{b}}; \draw (0,0) -- (.5,.5)--(-.5,1.5) node[left]{\scriptsize{a}}; \draw (0,0)--(0,-0.8) node[right]{\scriptsize{c}}; \fill (0,0) circle (0.07); \end{tikzpicture} = (-1)^{\frac{a+b-c} 2} A^{a+b-c+\frac{a^2+b^2-c^2}{2}} \begin{tikzpicture}[baseline=.5cm, rounded corners=2mm] \draw (0,0)--(0,-0.8) node[right]{\scriptsize{c}}; \draw (0,0)--(0.5,1.5) node[right]{\scriptsize{b}}; \draw (0,0)--(-0.5,1.5) node[left]{\scriptsize{a}}; \fill (0,0) circle (0.07); \end{tikzpicture}$$ We are only interested in the list of coefficients of the colored Jones polynomial.
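The closed formulas for $\Delta_n$, $\Delta_n!$ and $\theta(a,b,c)$ are easy to check against each other numerically; for instance one can verify $\theta(a,b,a+b)=\Delta_{a+b}$, since for $c=a+b$ the trivalent vertex is just the idempotent. A small sketch with exact rational arithmetic at a generic sample value of $A$ (the empty-product convention $\Delta_{-1}! = \Delta_0! = 1$ is assumed):

```python
from fractions import Fraction

A = Fraction(3, 2)  # a generic sample value of the Kauffman variable

def delta(n):
    # Delta_n = (-1)^n (A^{2(n+1)} - A^{-2(n+1)}) / (A^2 - A^{-2})
    return ((-1) ** n * (A ** (2 * (n + 1)) - A ** (-2 * (n + 1)))
            / (A ** 2 - A ** (-2)))

def delta_fact(n):
    # Delta_n! = Delta_n Delta_{n-1} ... Delta_1, empty product for n <= 0
    out = Fraction(1)
    for j in range(1, n + 1):
        out *= delta(j)
    return out

def theta(a, b, c):
    # theta(a,b,c) for an admissible triple, via x, y, z with
    # a = y+z, b = z+x, c = x+y
    x, y, z = (b + c - a) // 2, (a + c - b) // 2, (a + b - c) // 2
    return (delta_fact(x + y + z) * delta_fact(x - 1) * delta_fact(y - 1)
            * delta_fact(z - 1)
            / (delta_fact(y + z - 1) * delta_fact(z + x - 1)
               * delta_fact(x + y - 1)))
```

Using `Fraction` keeps all checks exact, so the identities below hold on the nose rather than up to floating-point error.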
In particular we consider polynomials up to powers of their variable. Up to a factor of $\pm A^s$, for some power $s$ that depends on the writhe of the link diagram, the (unreduced) colored Jones polynomial $\tilde J_{N+1,L}(A)$ of a link $L$ can be defined as the value of the skein relation applied to the link where every component is decorated by an $N$ together with the Jones-Wenzl idempotent. Recall that $A^{-4}=q$. To obtain the reduced colored Jones polynomial we have to divide $\tilde J_{N+1,L}(A)$ by its value on the unknot. Thus $$J_{N+1,L}(A):=\frac{\tilde J_{N+1,L}(A)} {\Delta_{N}}.$$ \subsubsection{Walks and the quantum determinant formulation of Huynh and L\^e} We briefly recall the combinatorial walk model for computing the colored Jones polynomial developed in \cite{Armond:Walks} that is based on the quantum determinant formulation of Huynh and L\^e \cite{VuLe:ColoredJonesDeterminant}. For more details see \cite{Armond:Walks}. Given a braid $\beta$ whose closure is the knot $K$, a walk along $\beta$ is defined as follows: At each element of a subset $J$ of the lower ends of the strands of $\beta$, begin walking up the braid. Upon reaching a crossing, if the strand walked along is the over strand, then there is a choice to continue along the over strand or to jump down to the under strand and continue walking up the braid. If the strand is the under-strand of the crossing, then continue along that strand. The set of ending positions must match the set of starting positions, thus inducing a permutation $\pi$ of the set $J$. A walk is simple if it does not traverse any arc more than once. Each walk is assigned a weight as follows: Beginning from the first element of $J$, follow the walk and record $a_{j,\pm}$ for a jump down at crossing $j$, $b_{j,\pm}$ for following the under-strand at crossing $j$, and $c_{j,\pm}$ for following the over-strand at crossing $j$. Here the $\pm$ indicates the sign of the crossing.
Once the top of the braid is reached, begin again from the next element of $J$. The weight of the walk is $(-1)(-q)^{|J|+\text{inv}(\pi)}$ times the product of the previous letters with the order as explained above. The letters $a_{j,\pm}$, $b_{j,\pm}$, and $c_{j,\pm}$ satisfy the relations: \begin{tabular}{c c c} $a_{j,+}b_{j,+}=b_{j,+}a_{j,+},$ & $a_{j,+}c_{j,+}=qc_{j,+}a_{j,+},$ & $b_{j,+}c_{j,+}=q^2c_{j,+}b_{j,+}$\\ $a_{j,-}b_{j,-}=q^2b_{j,-}a_{j,-},$ & $c_{j,-}a_{j,-}=qa_{j,-}c_{j,-},$ & $c_{j,-}b_{j,-}=q^2b_{j,-}c_{j,-}$\\ \end{tabular} Also any letters with different subscripts commute. The weights also have an evaluation: $$\mathcal{E}_N(b_+^s c_+^r a_+^d)=q^{r(N-1-d)} \prod_{i=0}^{d-1}(1-q^{N-1-r-i})$$ $$\mathcal{E}_N(b_-^s c_-^r a_-^d)=q^{-r(N-1)}\prod_{i=0}^{d-1}(1-q^{r+i+1-N})$$ In a word with letters of different subscripts, the word is separated into pieces corresponding to each subscript and each is evaluated separately. \begin{theorem} \cite{Armond:Walks} The colored Jones polynomial of $K$ is $$J_{N,K} = q^{(N-1)(\omega(\beta)-m+1)/2}\sum_{n=0}^\infty \mathcal{E}_N(C^n)$$ where $\omega(\beta)$ is the writhe of $\beta$, $m$ is the number of strands in $\beta$, and $C$ is the sum of the weights of the simple walks along $\beta$ with $J \subset \{2,\ldots,m\}$. \end{theorem} \begin{remark} This sum is finite due to the fact that any monomial in $C^n$ corresponding to a collection of walks with at least $N$ of the walks traversing any given point will evaluate to $0$.
\end{remark} \begin{example}[The $(2,5)$-torus knot] \label{25torus} The (positive) $(2,5)$-torus knot $T(2,5)$ can be expressed as the closure of the braid $\beta = (\sigma_4\sigma_3\sigma_2\sigma_1)^2$. \begin{figure}[htbp] \centering \includegraphics[width=1.25in]{Torus25smooth} \end{figure} The two simple walks along $\beta$ are the following: \begin{center} \begin{tabular}{c c} \includegraphics[width=1.25in]{T25W1}&\includegraphics[width=1.25in]{T25W2}\\ $W_1$ & $W_2$ \end{tabular} \end{center} with weights $W_1=qb_{8,+}c_{4,+}a_{3,+}$, and $W_2=q^3b_{8,+}c_{4,+}c_{3,+}c_{2,+}a_{1,+}b_{6,+}b_{3,+}$ with the relation $W_1W_2=qW_2W_1$. Thus for $K$ the $(2,5)$-torus knot the colored Jones polynomial $J_{N,K}(q)$ is, with $${n\choose k}_{q}:=\frac{(q;q)_n}{(q;q)_{n-k} (q;q)_k}$$ the $q$-binomials: \begin{eqnarray*} &&q^{2(N-1)}\sum_{n=0}^{N-1}\mathcal{E}_N((W_1+W_2)^n)\\ &=&q^{2(N-1)}\sum_{n=0}^{N-1}\sum_{k=0}^n {n \choose k}_{q} \mathcal{E}_N(W_2^kW_1^{n-k})\\ &=&q^{2(N-1)}\sum_{n=0}^{N-1}\sum_{k=0}^n {n \choose k}_{q} q^{3k+(n-k)}\mathcal{E}_N(b_{8,+}^n b_{6,+}^{k} c_{4,+}^n (c_{3,+}b_{3,+})^k a_{3,+}^{n-k} c_{2,+} ^k a_{1,+}^k )\\ &=&q^{2(N-1)}\sum_{n=0}^{N-1}\sum_{k=0}^n {n \choose k}_{q} q^{2k+n-k(k+1)}\mathcal{E}_N(b_+^n) \mathcal{E}_N(b_+^k) \mathcal{E}_N(c_{+}^n) \mathcal{E}_N(b_{+}^k c_{+}^k a_{+}^{n-k}) \mathcal{E}_N(c_{+}^k) \mathcal{E}_N(a_{+}^k)\\ &=& q^{2(N-1)}\sum_{n=0}^{N-1} \sum_{k=0}^n {n \choose k}_{q} q^{nN+k(2N-1-n)} \prod_{i=0}^{n-1}(1-q^{N-1-i}) \end{eqnarray*} \end{example} \subsubsection{Walks and Hitoshi Murakami's formulation of the colored Jones polynomial} The concept of describing quantum
link invariants via $R$-matrices goes back to Turaev \cite{Turaev:YangBaxter}. In particular one can use $R$-matrices to compute the colored Jones polynomial. The following discussion is a simplification of the description given by Hitoshi Murakami in \cite{Murakami:IntroToVolumeConjecture}. We will need to define the $R$-matrix, its inverse and another function $\mu$ (see also \cite{KM:cablingFormulaCoJP}): \begin{eqnarray*} R^{i~j}_{k~l} &=& \sum_{m=0}^{\text{Min}(N-1-i,j)} (-1)^m \delta_{l,i+m} \delta_{k,j-m}\frac{\{l\}!\{N-1-k\}!}{\{i\}!\{m\}!\{N-1-j\}!}\\ && \times q^{-(i-(N-1)/2)(j-(N-1)/2)+m(i-j)/2+m(m+1)/4} \end{eqnarray*} \begin{eqnarray*} (R^{-1})^{i~j}_{k~l} &=& \sum_{m=0}^{\text{Min}(N-1-j,i)}\delta_{l,i-m} \delta_{k,j+m}\frac{\{k\}!\{N-1-l\}!}{\{j\}!\{m\}!\{N-1-i\}!}\\ && \times q^{(i-(N-1)/2)(j-(N-1)/2)+m(i-j)/2-m(m+1)/4} \end{eqnarray*} \begin{eqnarray*} \mu_j &=& q^{-(2j-N+1)/2} \end{eqnarray*} where $\{m\} = q^{m/2} - q^{-m/2}$, and $\{m\}! = \prod_{i=1}^m \{i\}$. These together form an enhanced Yang-Baxter operator, but for our purposes we only need to think of them as functions of the integers $i,j,k,$ and $l$ where $0\leq i,j,k,l \leq N-1$. To calculate the $N$-th colored Jones polynomial, take a braid $\beta$ whose closure is the knot $K$ and close each strand except for the left-most one. Choose a label which is an integer between $0$ and $N-1$ for the top and bottom of the left-most strand which will be fixed throughout. A state is a choice of labels on each of the other arcs of this semi-closure. Assign to each crossing the polynomial $(R^{\epsilon_c})^{i~j}_{k~l}$ as in Figure \ref{crossR}, where $\epsilon_c$ is the sign of crossing $c$. Also assign to each arc which attaches the bottom of the braid to the top of the braid the polynomial $\mu_j$ where $j$ is the label of that arc. The weight of the state is the product of the $R$-matrix entries for the crossings times the product of the $\mu_j$'s.
Finally, the colored Jones polynomial is $q^{-\omega(\beta)(N^2-1)/4}$ times the sum of the weights of each state. \begin{figure}[h] $$\begin{tikzpicture}[scale=1, baseline=0cm] \draw (-.5, -0.5)node[left]{\tiny{k}}-- (0,0) -- (.5,.5) node{\tiny{\, j}}; \draw (.5,-0.5)node{\tiny{\, l}} -- (0.1,-0.1); \draw (-0.1,0.1) -- (-.5,.5) node[left]{\tiny{i}}; \end{tikzpicture} \longrightarrow R^{i~j}_{k~l} $$ $$\begin{tikzpicture}[scale=1, baseline=0cm] \draw (-.5, -0.5)node[left]{\tiny{k}}-- (-0.1,-0.1); \draw (.5,-0.5)node{\tiny{\, l}} -- (0,0)-- (-.5,.5) node[left]{\tiny{i}}; \draw (0.1,0.1) -- (.5,.5) node{\tiny{\, j}}; \end{tikzpicture} \longrightarrow (R^{-1})^{i~j}_{k~l} $$ \caption{The $R$-matrix} \label{crossR} \end{figure} The delta functions in the definition of the $R$-matrix tell us that many of the states will have weight $0$. Figure \ref{jump} gives the conditions for which a state will have a non-zero weight: \begin{figure}[h] $$\begin{tikzpicture}[scale=1, baseline=0cm] \draw (-.5, -0.5)node[left]{\tiny{k}}-- (0,0) -- (.5,.5) node{\tiny{\, j}}; \draw (.5,-0.5)node{\tiny{\, l}} -- (0.1,-0.1); \draw (-0.1,0.1) -- (-.5,.5) node[left]{\tiny{i}}; \end{tikzpicture} : i+j=k+l, l\geq i, k\leq j, $$ $$\begin{tikzpicture}[scale=1, baseline=0 cm] \draw (-.5, -0.5)node[left]{\tiny{k}}-- (-0.1,-0.1); \draw (.5,-0.5)node{\tiny{\, l}} -- (0,0)-- (-.5,.5) node[left]{\tiny{i}}; \draw (0.1,0.1) -- (.5,.5) node{\tiny{\, j}}; \end{tikzpicture} : i+j=k+l, l\leq i , k\geq j. $$ \caption{Conditions for a state to have non-zero weight.} \label{jump} \end{figure} This tells us that the labels can be thought of in terms of walks. The label represents how many ``walkers'' are walking along the labeled arc, walking from top to bottom. The conditions given in Figure \ref{jump} tell us that some number of the walkers walking along the over strand can jump down onto the lower strand, but no strand can have more than $N-1$ walkers on it.
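The $R$-matrix formulas above are simple enough to transcribe directly. The sketch below is a plain transcription evaluated at a generic real $q$; it lets one check numerically, here spot-checked for $N=2$, that stacking a positive crossing on a negative crossing gives the identity and that $R$ satisfies the Yang-Baxter equation.

```python
def qint(m, q):
    # {m} = q^{m/2} - q^{-m/2}
    return q ** (m / 2) - q ** (-m / 2)

def qfact(m, q):
    # {m}! = {1}{2}...{m}, empty product for m = 0
    out = 1.0
    for i in range(1, m + 1):
        out *= qint(i, q)
    return out

def R(i, j, k, l, N, q, sign=+1):
    """Entry R^{ij}_{kl} (sign=+1) or (R^{-1})^{ij}_{kl} (sign=-1),
    transcribed from the formulas above."""
    h = (N - 1) / 2
    val = 0.0
    if sign == +1:
        for m in range(min(N - 1 - i, j) + 1):
            if l == i + m and k == j - m:
                val += ((-1) ** m * qfact(l, q) * qfact(N - 1 - k, q)
                        / (qfact(i, q) * qfact(m, q) * qfact(N - 1 - j, q))
                        * q ** (-(i - h) * (j - h) + m * (i - j) / 2
                                + m * (m + 1) / 4))
    else:
        for m in range(min(N - 1 - j, i) + 1):
            if l == i - m and k == j + m:
                val += (qfact(k, q) * qfact(N - 1 - l, q)
                        / (qfact(j, q) * qfact(m, q) * qfact(N - 1 - i, q))
                        * q ** ((i - h) * (j - h) + m * (i - j) / 2
                                - m * (m + 1) / 4))
    return val

def mu(j, N, q):
    # mu_j = q^{-(2j - N + 1)/2}
    return q ** (-(2 * j - N + 1) / 2)

def R_matrix(N, q, sign=+1):
    # R as an (N^2 x N^2) matrix: rows indexed by (i,j), columns by (k,l)
    idx = [(i, j) for i in range(N) for j in range(N)]
    return [[R(i, j, k, l, N, q, sign) for (k, l) in idx] for (i, j) in idx]
```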
The walk model for the Jones polynomial that Xiao-Song Lin and Zhenghan Wang give in \cite{LinWang:RandomWalkColoredJones} coincides with this model of the colored Jones polynomial for $N=2$. \begin{example}[The $(2,-4)$ torus link] \label{toruslink} Consider the braid $\sigma_1^{-4}$. The closure of this braid is the $(2,-4)$ torus link. To calculate the colored Jones polynomial we must first choose a label for the top left strand. We can choose any integer between $0$ and $N-1$, but choosing $0$ will simplify the calculations, so we will choose $0$. Next we must find all of the non-zero states. There are no restrictions on the top right arc, so we shall label it $j$. This forces the arcs below the first crossing to be labeled $j$ and $0$: $$\begin{tikzpicture}[scale=0.8, baseline=0.5cm, rounded corners=2mm] \draw (-.5, -0.5)-- (-0.1,-0.1); \draw (.5,-0.5) -- (0,0)-- (-.5,.5) node[left]{\tiny{c}} -- (-.1,0.9); \draw (.1,1.1)--(.5,1.5) node{\tiny{\, b}} --(-.5,2.5)node[left]{\tiny{j}} --(-0.1,2.9); \draw (0.1,0.1) -- (.5,.5) node{\tiny{\, d}} --(-.5,1.5) node[left]{\tiny{a}} -- (-.1,1.9); \draw (0.1,2.1)--(0.5,2.5) node{\tiny{\, 0}} --(-.5,3.5) node[left]{\tiny{0}}; \draw (.1, 3.1)--(0.5,3.5) node[left]{\tiny{j}}; \draw (0.5,-0.5) .. controls (1.5,-.9) and (1.5,3.9) .. (0.5,3.5); \end{tikzpicture} .$$ For the next crossing, some number of the $j$ walkers can jump down to the lower strand. Let us say $l$ of them stay on the upper strand and $j-l$ jump down. $$\begin{tikzpicture}[scale=0.8, baseline=0.5cm, rounded corners=2mm] \draw (-.5, -0.5)-- (-0.1,-0.1); \draw (.5,-0.5) -- (0,0)-- (-.5,.5) node[left]{\tiny{c}} -- (-.1,0.9); \draw (.1,1.1)--(.5,1.5) node{\tiny{\, l}} --(-.5,2.5)node[left]{\tiny{j}} --(-0.1,2.9); \draw (0.1,0.1) -- (.5,.5) node{\tiny{\, d}} --(-.5,1.5) node[left]{\tiny{j-l}} -- (-.1,1.9); \draw (0.1,2.1)--(0.5,2.5) node{\tiny{\, 0}} --(-.5,3.5) node[left]{\tiny{0}}; \draw (.1, 3.1)--(0.5,3.5) node[left]{\tiny{j}}; \draw (0.5,-0.5) ..
controls (1.5,-.9) and (1.5,3.9) .. (0.5,3.5); \end{tikzpicture} .$$ For the next crossing, the same situation arises. Let us say $m$ of them stay on the upper strand, and thus $j-m$ end up on the lower strand. $$\begin{tikzpicture}[scale=0.8, baseline=0.5cm, rounded corners=2mm] \draw (-.5, -0.5)-- (-0.1,-0.1); \draw (.5,-0.5) -- (0,0)-- (-.5,.5) node[left]{\tiny{j-m}} -- (-.1,0.9); \draw (.1,1.1)--(.5,1.5) node{\tiny{\, l}} --(-.5,2.5)node[left]{\tiny{j}} --(-0.1,2.9); \draw (0.1,0.1) -- (.5,.5) node{\tiny{\, m}} --(-.5,1.5) node[left]{\tiny{j-l}} -- (-.1,1.9); \draw (0.1,2.1)--(0.5,2.5) node{\tiny{\, 0}} --(-.5,3.5) node[left]{\tiny{0}}; \draw (.1, 3.1)--(0.5,3.5) node[left]{\tiny{j}}; \draw (0.5,-0.5) .. controls (1.5,-.9) and (1.5,3.9) .. (0.5,3.5); \end{tikzpicture} .$$ However, the lowest labels must be $0$ and $j$; therefore, for the final crossing, no walkers can end on the lower strand, which means there must have been no walkers on the lower strand to begin with. Thus $m=0$. $$\begin{tikzpicture}[scale=0.8, baseline=0.5cm, rounded corners=2mm] \draw (-.5, -0.5)-- (-0.1,-0.1); \draw (.5,-0.5) -- (0,0)-- (-.5,.5) node[left]{\tiny{j}} -- (-.1,0.9); \draw (.1,1.1)--(.5,1.5) node{\tiny{\, l}} --(-.5,2.5)node[left]{\tiny{j}} --(-0.1,2.9); \draw (0.1,0.1) -- (.5,.5) node{\tiny{\, 0}} --(-.5,1.5) node[left]{\tiny{j-l}} -- (-.1,1.9); \draw (0.1,2.1)--(0.5,2.5) node{\tiny{\, 0}} --(-.5,3.5) node[left]{\tiny{0}}; \draw (.1, 3.1)--(0.5,3.5) node[left]{\tiny{j}}; \draw (0.5,-0.5) ..
(0.5,3.5); \end{tikzpicture} .$$ Now the colored Jones polynomial for the $(2,-4)$ torus link is: \begin{eqnarray*} J_{N,T(2,-4)}(q) &=& q^{1-N^2}\sum_{0\leq l \leq j \leq N-1} (R^{-1})^{0~j}_{j~0} (R^{-1})^{j~0}_{j-l~l} (R^{-1})^{j-l~l}_{j~0} (R^{-1})^{j~0}_{0~j} \mu_j\\ &=& q^{1-N^2}\sum_{0\leq l \leq j \leq N-1} \frac{\{N-1-l\}!\{j\}!\{N-1\}!}{\{N-1-j\}!\{l\}!\{j-l\}!\{N-1-j+l\}!}\\ && \times q^{-(2j-N+1)/2- 3(j-(N-1)/2)(N-1)/2 -(l-j+1)(j-l)/2+(j-l-(N-1)/2)(l-(N-1)/2)} \end{eqnarray*} \end{example} \section{Head and tail of the colored Jones polynomial} We are interested in the $N$ leading coefficients of the colored Jones polynomial $J_{N,K}(q)$: \begin{df} For a Laurent polynomial $P_1(q)$ and a power series $P_2(q)$ we define $$P_1(q) \dot{=}_n P_2(q)$$ if $P_1(q)$ coincides - up to multiplication with $\pm q^s$ for some power $s$ - with $P_2(q) \mod q^n$. For example $-q^{-4} + 2 q^{-3}- 3+11 q \dot{=}_5 1-2 q+3 q^4.$ \end{df} \begin{df}[Head and Tail of the colored Jones polynomial] The tail of the colored Jones polynomial of a knot $K$ - if it exists - is a series $T_K(q)=\sum_{j=0}^{\infty} a_j q^j$ with $$J_{N,K}(q) \dot {=}_N T_K(q), \mbox{ for all } N.$$ Similarly the head of the colored Jones polynomial $J_{N,K}(q)$ is defined to be - if it exists - the tail of $J_{N,K}(1/q).$ In particular this means that - assuming existence - the head of the colored Jones polynomial of a knot $K$ is the tail of the colored Jones polynomial of its mirror image $K^{*}$. \end{df} A theorem of the first author gives the existence of the head and tail in certain cases: \begin{theorem}[\cite{Armond:Walks, Armond:HeadAndTailConjecture}] \label{Armond:Walks} Suppose a link $K$ is \begin{enumerate} \item a knot and the closure of a positive braid. Then the tail of $K$ is $T_{K}(q)=1.$ \label{ArmondTheoremPartI} \label{Armond:WalksPositive} \item an alternating link. Then both the head and the tail exist.
\end{enumerate} \end{theorem} \begin{remark} \begin{enumerate} \item Part (\ref{ArmondTheoremPartI}) of Theorem \ref{Armond:Walks} for braid-positive knots cannot be extended to positive knots in general. The knot $7_5$ (see Figure \ref{Fig75}) is an example of a positive knot whose tail is $\neq 1$. \item Champanerkar and Kofman \cite{ChampanerkarKofman:TailFullTwist} showed that if the closed positive braid contains a full twist in the braid group, then Theorem \ref{Armond:Walks} (\ref{Armond:WalksPositive}) can be strengthened. \end{enumerate} \end{remark} \begin{figure} \includegraphics[width=4.5cm]{7_5} \caption{The knot $7_5$ is positive \label{Fig75}} \end{figure} In fact, it follows: \begin{cor} Every braid-positive alternating prime knot is a torus knot. \end{cor} In \cite{EffieAtAl:GutsAndColoredJones} a generalization of this corollary is given. \subsection{Rogers-Ramanujan type identities coming from knots} The following theorem is essentially a corollary to a theorem of Hugh Morton \cite{Morton:ColoredJonesTorusKnots}: \begin{theorem}[Left-hand side of the Andrews-Gordon identities (\ref{Andrews-Gordon})] For a (negative) $(2,2k+1)$-torus knot $K$ the tail is $$T_K(q)=f(-q^{2k},-q).$$ \end{theorem} \begin{proof} Let $p:=2 k+1$. By \cite{Morton:ColoredJonesTorusKnots} we have \begin{eqnarray*} (q^N-1) J_{N,K}(q) &\dot{=}& \sum_{r=-(N-1)/2}^{(N-1)/2} q^{p (2 r^2+r)} \left ( q^{2 r+1}-q^{-2r} \right )\\ &\dot{=}& \sum_{r=-(N-1)/2}^{(N-1)/2} q^{p (2 r^2+r)} q^{2 r+1} - q^{p (2 r^2-r)} q^{2 r}\\ &\dot{=}& \sum_{R=-N+1}^N (-1)^R q^{p (R^2-R)/2} q^{R}\\ &\dot{=}& \sum_{R=-N+1}^N (-1)^R q^{k (R^2-R)} q^{(R^2+R)/2} \end{eqnarray*} Since $k(R^2-R)+(R^2+R)/2$ is increasing in $|R|$, the result follows from the definition of $f(a,b)$.
\end{proof} The methods developed by the first author in \cite{Armond:Walks} allow us to obtain the other side of the Andrews-Gordon identities: \begin{theorem}[Right-hand side of the Andrews-Gordon identities (\ref{Andrews-Gordon})] For a (negative) $(2, 2k+1)$-torus knot $K$ the tail is $$T_K(q)= (q;q)_{\infty} \sum_{n_1,\dots,n_{k-1}\geq 0} \frac {q^{N_1^2+\dots+N_{k-1}^2+N_1+\dots+N_{k-1}}} {(q;q)_{n_1} \cdots (q;q)_{n_{k-1}}} $$ with $N_j$ defined as $$N_j=n_1+\dots+n_{j}.$$ \end{theorem} \begin{proof} Let $p=2k+1$ and let $\beta=(\omega_{p-1}\omega_{p-2}\ldots\omega_2\omega_1)^2\in B_p$. Thus $\hat{\beta}= \bar{K}$, the (positive) $(2,2k+1)$-torus knot. Generalizing Example \ref{25torus}, there are $k$ simple walks along $\beta$: $$W_j=b_{2(p-1)}\left(\prod_{l=0}^{2(j-1)} c_{p-1-l}\right)a_{p-2j}\left(\prod_{l=1}^{j-1}b_{2(p-1-l)}b_{p-2l}\right)$$ Here since $\beta$ is a positive braid, all of the letters have a $+$ subscript, which we leave off to simplify the notation. One can check that these satisfy the relations $W_iW_j=qW_jW_i$ when $i\leq j$. Therefore we can calculate the colored Jones polynomial of $\bar{K}$, with the notation: $$\left( \begin{matrix}&n\\n_1 & \ldots & n_{k}\end{matrix} \right)_q:= \frac {(q;q)_{n}} {(q;q)_{n_{1}} \cdots (q;q)_{n_{k}}}$$ where $n=n_1 + \ldots + n_k$.
\begin{eqnarray*} q^{\frac{-(N-1)(p-1)}{2}}J_{N,\bar{K}}(q)&=&\sum_{n=0}^{N-1} \mathcal{E}_N((W_1+\ldots+W_k)^n)\\ &=&\sum_{n=0}^{N-1}\sum_{\stackrel {0\leq n_1,\ldots,n_k\leq N-1} {n_1+\ldots + n_k=n}}\left( \begin{matrix}&n\\n_1 & \ldots & n_{k}\end{matrix} \right)_q \mathcal{E}_N(W_k^{n_k}\ldots W_1^{n_1})\\ &=&\sum_{n=0}^{N-1}\sum_{\stackrel {0\leq n_1,\ldots,n_k\leq N-1} {n_1+\ldots + n_k=n}}\left( \begin{matrix}&n\\n_1 & \ldots & n_{k}\end{matrix} \right)_q\\ &&\times q^\xi\prod_{j=1}^{n_k}(1-q^{N-j})\prod_{j=1}^{n_{k-1}}(1-q^{N-n_k-j})\ldots\prod_{j=1}^{n_1}(1-q^{N-n_k-n_{k-1}-\ldots-n_2-j})\\ &=&\sum_{n=0}^{N-1}\sum_{\stackrel { 0\leq n_1,\ldots,n_k\leq N-1 } { n_1+\ldots + n_k=n}}\left( \begin{matrix}&n\\n_1 & \ldots & n_{k}\end{matrix} \right)_q q^\xi\prod_{j=0}^{n-1}(1-q^{N-j-1})\\ \end{eqnarray*} where \begin{eqnarray*} \xi &=& \sum_{j=1}^{k}((N+1)j-1)n_j + \sum_{j=2}^k (N-1-n_{j-1}-N'_j-1)N'_j\\ N'_j &=& \sum_{i=j}^{k} n_i \end{eqnarray*} Since $J_{N,K}(q) = J_{N,\bar{K}}(q^{-1})$, we get \begin{eqnarray*} q^{\frac{(N-1)(p-1)}{2}}J_{N,K}(q) &=&\sum_{n=0}^{N-1}\sum_{\stackrel{0\leq n_1,\ldots,n_k\leq N-1} { n_1+\ldots + n_k=n}}\left( \begin{matrix}&n\\n_1 & \ldots & n_{k}\end{matrix} \right)_{q^{-1}} q^{-\xi}\prod_{j=0}^{n-1}(1-q^{-(N-j-1)})\\ &=& \sum_{n=0}^{N-1}\sum_{\stackrel{0\leq n_1,\ldots,n_k\leq N-1 } { n_1+\ldots + n_k=n}}\frac{\prod_{j=1}^n(1-q^{-j})}{\prod_{j=1}^{n_1}(1-q^{-j})\ldots\prod_{j=1}^{n_k}(1-q^{-j})} q^{-\xi}\prod_{j=0}^{n-1}(1-q^{-(N-j-1)})\\ &=& \sum_{n=0}^{N-1}\sum_{\stackrel{0\leq n_1,\ldots,n_k\leq N-1 } { n_1+\ldots + n_k=n}}\frac{\prod_{j=1}^n(1-q^{j})}{\prod_{j=1}^{n_1}(1-q^{j})\ldots\prod_{j=1}^{n_k}(1-q^{j})} (-1)^{\xi''}q^{\xi'}\prod_{j=0}^{n-1}(1-q^{(N-j-1)})\\ \end{eqnarray*} Here $$\xi' = \sum_{j=1}^k \frac{n_j(n_j+1)}{2} - \frac{n(n+1)}{2} - \frac{N(N-1)}{2} + \frac{(N-n)(N-n-1)}{2} - \xi$$ and $$\xi'' = \sum_{i=1}^{k} n_{i}=n.$$ The lowest $N$ terms all come
from terms where $n= N-1$. In particular $(-1)^{\xi''}$ only depends on $N$. So we get $$J_{N,K}(q)\dot{=}_N\sum_{\stackrel{0\leq n_1,\ldots,n_k\leq N-1 }{ n_1+\ldots +n_k=N-1}}\frac{\prod_{j=1}^{N-1}(1-q^{j})}{\prod_{j=1}^{n_1}(1-q^{j})\ldots\prod_{j=1}^{n_k}(1-q^{j})} q^{\xi'}\prod_{j=1}^{N-1}(1-q^{j})$$ where now $\xi' = \sum_{j=1}^k \frac{n_jn_j}{2} - \sum_{j=1}^k(N+1)jn_j - \sum_{j=2}^k(N-2-N_{j-1})N_j$ Since $N_j = \sum_{i=1}^j n_i$ we have $N'_{j}=N-1-N_{j-1}$ and \begin{eqnarray*} \xi' &=& \sum_{j=1}^k \frac{n_jn_j}{2} - \sum_{j=1}^k(N+1)(N'_{j}) - \sum_{j=2}^{k}(N-2-(N-1-N_{j-2}))(N-1-N_{j-1})\\ &=&\sum_{j=1}^k \frac{n_jn_j}{2} - \sum_{j=1}^k(N+1)(N-1-N_{j-1}) - \sum_{j=2}^{k} (N_{j-2}-1)(N-1-N_{j-1}) \end{eqnarray*} We are only interested in $J_{N,K}(q)$ up to multiplication with powers of $q$. Thus we can simplify terms in $\xi'$ that only depend on $N$. We denote those simplifications by $\sim$. \begin{eqnarray*} \xi'&\sim&\sum_{j=1}^k \frac{n_jn_j}{2} + \sum_{j=2}^k(N+1)N_{j-1} - \sum_{j=2}^{k} N_{j-1} - \sum_{j=2}^{k} N_{j-2}(N-1-N_{j-1})\\ &\sim&\sum_{j=1}^k \frac{n_jn_j}{2} + \sum_{j=2}^{k}N \, N_{j-1} - \sum_{j=3}^{k} (N-1)N_{j-2} + \sum_{j=3}^{k} N_{j-1}\, N_{j-2}\\ &\sim&\sum_{j=1}^k \frac{n_jn_j}{2} + N \, N_{k-1} + \sum_{j=2}^{k-1}[N\, N_{j-1}-(N-1)N_{j-1}] + \sum_{j=2}^{k-1} N_{j}\, N_{j-1}\\ &\sim&\sum_{j=1}^k \frac{n_jn_j}{2} + (n_k+N_{k-1}+1)N_{k-1} + \sum_{j=2}^{k-1}N_{j-1} + \sum_{j=2}^{k-1} N_{j} \, N_{j-1}\\ \end{eqnarray*} Hence \begin{eqnarray*} \xi'&\sim&\sum_{j=1}^k \frac{n_jn_j}{2} + n_k \, N_{k-1} + N_{k-1}(N_{k-1}+1) + \sum_{j=2}^{k-1}n_j \, N_{j-1}+ \sum_{j=1}^{k-2}N_{j}(N_{j}+1)\\ &\sim&\sum_{j=1}^k \frac{n_jn_j}{2} + \sum_{j=2}^{k}n_j \, N_{j-1}+ \sum_{j=1}^{k-1}N_{j}(N_{j}+1)\\ &\sim&\sum_{j=1}^k \frac{n_jn_j}{2} + \sum_{j=2}^{k}[n_j \, \sum_{i=1}^{j-1}n_i]+ \sum_{j=1}^{k-1}N_{j}(N_{j}+1)\\ &\sim&\sum_{i=1}^k\sum_{j=1}^k \frac{n_jn_i}{2} + \sum_{j=1}^{k-1}N_{j}(N_{j}+1)\\ &\sim&\frac{(N-1)(N-1)}{2} + 
\sum_{j=1}^{k-1}N_{j}(N_{j}+1)\\ &\sim&\sum_{j=1}^{k-1}N_{j}(N_{j}+1)\\ \end{eqnarray*} Thus $$J_{N,K}(q) \dot{=}_N \sum_{\stackrel{0\leq n_1,\ldots,n_k\leq N-1 } {n_1+\ldots + n_k=N-1}} \frac{\prod_{j=1}^{N-1}(1-q^{j})q^{\sum_{j=1}^{k-1}N_{j}(N_{j}+1)}}{\prod_{j=1}^{n_1}(1-q^{j})\ldots\prod_{j=1}^{n_{k-1}}(1-q^{j})} \frac{\prod_{j=1}^{N-1}(1-q^{j})}{\prod_{j=1}^{n_k}(1-q^{j})}$$ But $\frac{\prod_{j=1}^{N-1}(1-q^{j})}{\prod_{j=1}^{n_k}(1-q^{j})} = \prod_{j=n_k+1}^{N-1}(1-q^{j}) = 1 - q^{n_k + 1} + \text{higher order terms}$. Thus because $\sum_{j=1}^{k-1}N_{j}(N_{j}+1) + n_k + 1 \geq N_k + 1 = N$, we get $$J_{N,K}(q) \dot{=}_N \prod_{j=1}^{N-1}(1-q^{j})\sum_{\stackrel{0\leq n_1,\ldots,n_{k-1}\leq N-1 } { n_1+\ldots +n_{k-1} = N-1}} \frac{q^{\sum_{j=1}^{k-1}N_{j}(N_{j}+1)}}{\prod_{j=1}^{n_1}(1-q^{j})\ldots\prod_{j=1}^{n_{k-1}}(1-q^{j})}$$ \end{proof} \subsection{Identities coming from links} The following theorem is essentially a corollary to a theorem of Kazuhiro Hikami \cite{Hikami:ColoredJonesTorusLinks} using the methods developed in \cite{Morton:ColoredJonesTorusKnots}. For links we understand all components to be colored with the same color. \begin{theorem} For a (negative) $(2, 2k)$-torus link $L$ the tail is $$T_{L}(q)=\Psi(q^{2k-1},q).$$ \end{theorem} \begin{proof} By \cite{Hikami:ColoredJonesTorusLinks}: \begin{eqnarray*} q^{-(N+1)/2}(q^{N+1}-1) J_{N+1,L}(q)&=&q^{-\frac{ k (N^{2}-1)+1}{2}} \left ( \sum_{r=0}^{N-1} q^{k r^{2}+(k+1)r +1} - \sum_{r=0}^{N-1} q^{k r^{2}+(k-1)r} \right ) \end{eqnarray*} Together with \begin{eqnarray*} \Psi(q^{2k-1},q)&=&\sum_{r=0}^{\infty} q^{(2k-1) r (r+1)/2} q^{r (r-1)/2} - \sum_{r=1}^{\infty} q^{(2k-1) r (r-1)/2} q^{r (r+1)/2}\\ &=& \sum_{r=0}^{\infty} q^{k r^{2}+(k-1) r} - \sum_{r=1}^{\infty} q^{k(r^{2}-r)+r}\\ &=& \sum_{r=0}^{\infty} q^{k r^{2}+(k-1) r} - \sum_{r=0}^{\infty} q^{k r^{2}+(k+1) r +1} \end{eqnarray*} the result follows. 
\end{proof} Thus, in particular, the tail of the (negative) $(2,4)$-torus link is given by $\Psi(q^3,q).$ On the other hand, a closer look at Example \ref{toruslink} gives us \begin{theorem} The tail of the colored Jones polynomial for the (negative) $(2,4)$-torus link is given by $$T_{L}(q)=\Psi(q^3,q)=(q;q)_{\infty}^2 \sum_{k=0}^{\infty} \frac{q^k}{(q;q)^2_k}$$ \end{theorem} \begin{proof} Using the fact that $\{m\}! = q^{-m(m+1)/4}(q;q)_m$ we can rewrite the formula from Example \ref{toruslink} to get the following: \begin{eqnarray*} J_{T(2,-4),N}(q) &=& q^{1-N^2}\sum_{0\leq l \leq j \leq N-1} \frac{\{N-1-l\}!\{j\}!\{N-1\}!}{\{N-1-j\}!\{l\}!\{j-l\}!\{N-1-j+l\}!}\\ && \times q^{-(2j-N+1)/2- 3(j-(N-1)/2)(N-1)/2 -(l-j+1)(j-l)/2+(j-l-(N-1)/2)(l-(N-1)/2)}\\ &=&q^{1-N^2}\sum_{0\leq l \leq j \leq N-1} \frac{(q;q)_{N-1-l}(q;q)_{j}(q;q)_{N-1}}{(q;q)_{N-1-j}(q;q)_{l}(q;q)_{j-l}(q;q)_{N-1-j+l}}\\ && \times q^{(1 - 3 N)/2 + j + j^2 - 3 j N + l (N - j) + N^2}\\ \end{eqnarray*} Note that among all the terms with a fixed value for $j$, the lowest degree comes from the terms where $l=0$. So, restricting to $l=0$, the minimum degree decreases as $j$ increases. Also the difference between the minimum degree when $j=N-2$ and when $j=N-1$ is $N+2$; thus all terms which contribute to the tail have $j=N-1$. \begin{eqnarray*} J_{T(2,-4),N}(q) &\dot{=}_N& \sum_{l=0}^{N-1} \frac{(q;q)_{N-1-l}(q;q)_{N-1}(q;q)_{N-1}}{(q;q)_{l}(q;q)_{N-1-l}(q;q)_{l}} q^{l}\\ &=&(q;q)_{N-1}^2\sum_{l=0}^{N-1} \frac{q^{l}}{(q;q)_{l}^2} \\ &\dot{=}_N&(q;q)_{\infty}^2\sum_{l=0}^{\infty} \frac{q^{l}}{(q;q)_{l}^2} \\ \end{eqnarray*} \end{proof} \section{The heads and tails for alternating links only depend on the reduced checkerboard graphs} \label{DependenceOnReducedCheckerboardGraph} Without loss of generality we assume that all alternating knot diagrams are reduced, i.e. they do not contain nugatory crossings.
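The Rogers-Ramanujan type identities above can be spot-checked numerically to any fixed order. The following sketch (illustrative only, not part of the paper; series are stored as truncated integer coefficient lists) verifies the Andrews-Gordon case $k=2$, namely $f(-q^4,-q)=(q;q)_{\infty}\sum_{n\geq 0}q^{n^2+n}/(q;q)_n$, and the $(2,4)$-torus link identity $\Psi(q^3,q)=(q;q)_{\infty}^2\sum_{l\geq 0}q^l/(q;q)_l^2$, expanding $f$ and $\Psi$ through the explicit bilateral sums appearing in the proofs above.

```python
TRUNC = 60  # all series are computed modulo q^TRUNC

def mul(a, b):
    """Product of two truncated power series (coefficient lists)."""
    c = [0] * TRUNC
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j >= TRUNC:
                    break
                c[i + j] += ai * bj
    return c

def inv(a):
    """Multiplicative inverse of a truncated series with a[0] = 1."""
    c = [0] * TRUNC
    c[0] = 1
    for n in range(1, TRUNC):
        c[n] = -sum(a[k] * c[n - k] for k in range(1, n + 1))
    return c

def poch(n):
    """(q;q)_n as a truncated series; n=None gives (q;q)_infinity."""
    c = [0] * TRUNC
    c[0] = 1
    top = TRUNC if n is None else n
    for j in range(1, top + 1):
        if j >= TRUNC:
            break  # factors (1 - q^j) with j >= TRUNC are 1 mod q^TRUNC
        factor = [0] * TRUNC
        factor[0], factor[j] = 1, -1
        c = mul(c, factor)
    return c

# Andrews-Gordon, k = 2: f(-q^4,-q) has exponents (5R^2+3R)/2 with sign (-1)^R.
lhs = [0] * TRUNC
R = 0
while (5 * R * R - 3 * R) // 2 < TRUNC:
    for S in {R, -R}:
        e = (5 * S * S + 3 * S) // 2
        if 0 <= e < TRUNC:
            lhs[e] += 1 if S % 2 == 0 else -1
    R += 1
rhs = [0] * TRUNC
n = 0
while n * n + n < TRUNC:
    t = inv(poch(n))
    for e, coef in enumerate(t):           # multiply by q^{n^2+n}
        if e + n * n + n < TRUNC:
            rhs[e + n * n + n] += coef
    n += 1
rhs = mul(poch(None), rhs)
assert lhs == rhs

# (2,4)-torus link: Psi(q^3,q) = sum q^{2r^2+r} - sum q^{2r^2+3r+1}.
psi = [0] * TRUNC
r = 0
while 2 * r * r + r < TRUNC:
    psi[2 * r * r + r] += 1
    if 2 * r * r + 3 * r + 1 < TRUNC:
        psi[2 * r * r + 3 * r + 1] -= 1
    r += 1
s = [0] * TRUNC
for l in range(TRUNC):
    t2 = mul(inv(poch(l)), inv(poch(l)))
    for e, coef in enumerate(t2):          # multiply by q^l
        if e + l < TRUNC:
            s[e + l] += coef
eul2 = mul(poch(None), poch(None))
assert psi == mul(eul2, s)
```

Raising `TRUNC` checks more coefficients at the cost of running time.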
Given an alternating knot $K$ with alternating diagram $D$, a checkerboard (black/white) shading of the faces of $D$ defines two (plane) checkerboard graphs in a natural way: The vertices of the first (second) graph are given by the white (black) faces and two vertices are connected by an edge if they meet at a crossing of the diagram. The two graphs can be distinguished from each other in the following way: The $A$-checkerboard graph is the graph where the edges correspond to arcs that overcross from the right to the left and correspondingly the $B$-checkerboard graph is the graph where the edges correspond to arcs that overcross from the left to the right. In particular the $A$-checkerboard graph of the diagram is the $B$-checkerboard graph of the diagram of the mirror image and vice versa. The two graphs are dual to each other. We obtain the reduced checkerboard graphs by replacing parallel edges by single edges. Note that it is easy to see that by applying flypes to the knot diagram we can assume that parallel edges in the graph are also parallel in the embedding. For the knot itself this reduction means that $k$ suitable half-twists are replaced by a single half-twist. Figure \ref{Fig920} gives an example. Note that in general the two reduced checkerboard graphs are not dual to each other anymore, and one cannot be constructed from the other.
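Reduced checkerboard graphs are small enough to experiment with by machine. The sketch below (illustrative only; the multigraph in the first assertion is made up) implements the parallel-edge reduction just described on edge lists, and computes for the reduced $A$-checkerboard graph of $9_{20}$ of Figure \ref{Fig920} (vertices numbered $1,\dots,4$) its first Betti number and triangle count, the two quantities that control the first three coefficients of the colored Jones polynomial by the remark below.

```python
from itertools import combinations

def reduce_parallel(edges):
    """Replace parallel edges by single edges (the reduced checkerboard graph)."""
    return sorted({tuple(sorted(e)) for e in edges})

def betti1(edges):
    """First Betti number E - V + (number of connected components)."""
    verts = {v for e in edges for v in e}
    parent = {v: v for v in verts}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, w in edges:
        parent[find(u)] = find(w)
    comps = len({find(v) for v in verts})
    return len(edges) - len(verts) + comps

def triangles(edges):
    """Number of triangles in the (simple) graph."""
    es = {tuple(sorted(e)) for e in edges}
    verts = sorted({v for e in edges for v in e})
    return sum(1 for a, b, c in combinations(verts, 3)
               if (a, b) in es and (b, c) in es and (a, c) in es)

# Made-up multigraph: a square with one doubled edge reduces to a square.
assert reduce_parallel([(1, 2), (1, 2), (2, 3), (3, 4), (4, 1)]) == \
       [(1, 2), (1, 4), (2, 3), (3, 4)]

# Reduced A-checkerboard graph of 9_20, vertices numbered as in the figure.
g920 = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
a, t = betti1(g920), triangles(g920)
assert (a, t) == (2, 2)   # hence b = binom(a,2) - t = 1 - 2 = -1
```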
\begin{figure} \includegraphics[width=5cm]{9_20} \begin{tikzpicture}[scale=1.7] \path (0,1) coordinate (X1); \path (-1,1) coordinate (X2); \path (-1,0) coordinate (X3); \path (0,0) coordinate (X4); \path (1,0) coordinate (X5); \path (1,1) coordinate (X6); \path (0.5,0.5) coordinate (X7); \foreach \j in {1, ..., 7} \fill (X\j) circle (1pt); \draw (X1)--(X2)--(X3)--(X4)--(X5)--(X6)--(X1)--(X4)--(X7)--(X5); \end{tikzpicture} \hspace{1cm} \begin{tikzpicture}[scale=1.7] \path (0,0) coordinate (X1); \path (1,0) coordinate (X2); \path (0,1) coordinate (X3); \path (-1,0) coordinate (X4); \foreach \j in {1, ..., 4} \fill (X\j) circle (1pt); \draw (X1)--(X2)--(X3)--(X4)--(X1)--(X3); \end{tikzpicture} \caption{The knot $9_{20}$ and its two reduced checkerboard graphs \label{Fig920}. The reduced $B$-checkerboard graph is on the left and the reduced $A$-checkerboard graph on the right} \end{figure} \begin{remark} In \cite{DasbachLin:HeadAndTail} (and compare with \cite{DL:VolumeIsh}) it was shown that for an alternating link $K$ with diagram $D$ and reduced $A$-checkerboard graph $G$ the colored Jones polynomial satisfies: $$J_{N,K}\dot{=}_3 \, 1-a q + b q^2, \mbox { for } N \geq 3,$$ where $a=\beta_1(G)$, the first Betti number of $G$ and $$b={a \choose 2}-t(G)$$with $t(G)$ the number of triangles in $G$. For the knot $9_{20}$ in Figure \ref{Fig920} $a=2$, $t(G)=2$ thus $b={2 \choose 2} - 2= 1-2=-1$ and the colored Jones polynomial for $N \geq 3$ starts with: $$J_{N,K}\dot{=}_3 \, 1-2 q - q^2.$$ As an application \cite{DasbachLin:HeadAndTail} one sees that the volume of the hyperbolic link complement of an alternating link is bounded from above and below linearly in the absolute values of the second and the penultimate coefficient of the colored Jones polynomial. This was generalized to other classes of knots and links by Futer, Kalfagianni and Purcell \cite{FKP:VolumeJones}. 
\end{remark} The property that the first three coefficients of the colored Jones polynomial only depend on the reduced $A$-checkerboard graph of the knot diagram holds for the whole tail of the colored Jones polynomial: \begin{theorem} Let $K_1$ and $K_2$ be two alternating links with alternating diagrams $D_1$ and $D_2$ such that the reduced $A$-checkerboard (respectively $B$-checkerboard) graphs of $D_1$ and $D_2$ coincide. Then the tails (respectively heads) of the colored Jones polynomials of $K_1$ and $K_2$ are identical. \end{theorem} \begin{proof} First recall the computation of the colored Jones polynomial via skein theory as in Section \ref{SectionSkeinTheory}. By convention $q=A^{-4}$. Thus the meanings of head and tail interchange if we switch from the variable $A$ to $q$. A negative twist region in the knot diagram with $m$ negative half-twists corresponds to $m$ parallel edges in the $B$-checkerboard graph: $$\begin{tikzpicture}[scale=0.6, baseline=0.7cm, rounded corners=2mm] \draw (-.5,2) -- (-0.1, 2.4); \draw (0.1,2.6) -- (0.5,3); \draw (0.5, 2) -- (-0.5,3); \draw[dashed] (0, 1.5) -- (0,1.75) node[left]{\small{m}} --(0, 2); \draw (-.5, -0.5) -- (-0.1,-0.1); \draw (.5,-0.5) -- (0,0)-- (-.5,.5) -- (-.1,0.9); \draw (.1,1.1)--(.5,1.5); \draw (0.1,0.1) -- (.5,.5)--(-.5,1.5) ; \end{tikzpicture} \leadsto \, \, \begin{tikzpicture}[scale=0.6, baseline=0.7cm] \draw (-3, 1.25) -- (0,1.25) node[above]{\small{m}} -- (3, 1.25); \draw[dashed] (0, 2) -- (0, 2.8); \draw[dashed] (0, 1.24) -- (0, -0.4); \draw (-3, 1.25) .. controls (-2.5, 3.5) and (2.5,3.5) .. (3,1.25); \draw (-3, 1.25) .. controls (-2.5, -1) and (2.5, -1) .. (3,1.25); \end{tikzpicture} $$ Thus, to prove the theorem, it is sufficient to show that the tail of the colored Jones polynomial in the variable $A$ is invariant under negative twists. This will be done in the remainder of this section. \end{proof} We appeal to the notations of Section \ref{SectionSkeinTheory}.
Given an alternating diagram $D$ of a link $L$, consider a negative twist region. Apply the identities of Section \ref{SectionSkeinTheory} to get the equation: $$\begin{tikzpicture}[scale=0.6, baseline=0.8cm, rounded corners=2mm] \draw (-.5,2) -- (-0.1, 2.4); \draw (0.1,2.6) -- (0.5,3); \draw (0.5, 2) -- (-0.5,3); \draw[dashed] (0, 1.5) -- (0,1.75) node[left]{\small{m}} --(0, 2); \draw (-.5, -0.5) node[left]{\tiny{n}} -- (-0.1,-0.1); \draw (.5,-0.5) node[right]{\tiny{n}}-- (0,0)-- (-.5,.5) -- (-.1,0.9); \draw (.1,1.1)--(.5,1.5); \draw (0.1,0.1) -- (.5,.5)--(-.5,1.5) ; \end{tikzpicture} \, = \sum_{j=0}^n (\gamma(n,n,2j))^m \frac{\Delta_{2j}}{\theta(n,n,2j)} \begin{tikzpicture}[scale=0.6, baseline=0.8cm] \draw (-.5,3) node[left]{\tiny{n}}-- (0, 2.5); \draw (0.5,3) node[right]{\tiny{n}}-- (0,2.5); \draw (0, 2.5) --(0,1.25) node[right]{\tiny{2j}}-- (0,0); \draw (0,0)--(-.5,-.5) node[left]{\tiny{n}}; \draw (0,0)--(.5,-.5) node[right]{\tiny{n}}; \end{tikzpicture} .$$ Here $\gamma(a,b,c):=(-1)^{\frac {a+b-c} 2} A^{a+b-c+ \frac{a^2+b^2-c^2} 2}.$ We would like to say that the tail of the left-hand side is equivalent to the tail of the right-hand side with the sum removed and $j$ replaced with $n$. However, this statement is difficult to make precise. Instead we will apply this operation to every maximal negative twist region to get a trivalent graph $\Gamma$. We will get a colored graph $\Gamma_{n,(j_1,\ldots,j_k)}$, where $k$ is the number of maximal negative twist regions and $0\leq j_i \leq n$, by coloring the edge coming from the $i$-th twist region by $j_i$ and coloring all of the other edges by $n$. Figure \ref{CodysFavorite} gives an example.
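The coefficient $\gamma(n,n,2j)$ just defined satisfies $\gamma(n,n,2j)=\pm A^{2n-2j+n^2-2j^2}$, so its minimal $A$-degree is strictly decreasing in $j$, with a gap of exactly $4n$ at the top color $j=n$; this is the content of Lemma \ref{gamma} below and is easy to confirm by machine. A quick sketch (not from the paper):

```python
def d_gamma(n, j):
    """Minimal A-degree of gamma(n,n,2j) = +/- A^{2n-2j+n^2-2j^2}."""
    return 2 * n - 2 * j + n * n - 2 * j * j

for n in range(1, 40):
    for j in range(1, n + 1):
        # the degree drops by 4j at each step, so it is monotone in j
        assert d_gamma(n, j) < d_gamma(n, j - 1)
    # the gap at the top color j = n is exactly 4n
    assert d_gamma(n, n) == d_gamma(n, n - 1) - 4 * n
```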
\begin{figure} \includegraphics[width=1.8in]{6_2} \hspace {1cm} \begin{tikzpicture}[baseline=-2.3cm, scale=1] \draw (0,0) circle (2); \path (0,-2) coordinate (X0); \path (-1.732, -1) coordinate (X1); \path (1.732, -1) coordinate (X2); \path (-0.4, -1) coordinate (X3); \path (0.4, -1) coordinate (X4); \path (0, -0.6) coordinate (X6); \path (0,-1.4) coordinate (X5); \path (0, 0) coordinate (X7); \path (-0.3,0.953939) coordinate (X8); \path (0.3, 0.953939) coordinate (X9); \path (-0.6, 1.90788) coordinate (X10); \path (0.6, 1.90788) coordinate (X11); \foreach \j in {0, ..., 11} \fill (X\j) circle (1pt); \draw (X1)--(-1, -1) node[above]{\tiny{$2j_1$}}-- (X3); \draw (X4)--(1,-1) node[above]{\tiny{$2j_2$}}--(X2); \draw (X0)--(0,-1.7) node[right]{\tiny{$2j_3$}}--(X5); \draw (X5)--(X4); \draw (X5)--(X3); \draw (X3)--(X6); \draw (X6)--(X4); \draw (X6)--(0,-0.3) node[right]{\tiny{$2j_3$}} -- (X7); \draw (X7)--(X10); \draw (X7)--(X11); \draw (X8) arc (107:72:1); \draw (0.68, 1.45) node{\tiny{$2j_4$}}; \draw (-0.68, 1.45) node{\tiny{$2 j_5$}}; \end{tikzpicture} \caption{The knot $6_2$ and the corresponding $\Gamma_{n,(j_1, \dots, j_5 )}$. All missing labels are $n$ \label{CodysFavorite}} \end{figure} \begin{theorem}\label{skein} $$\tilde J_{n+1,L} \dot{=}_{4(n+1)} \Gamma_{n,(n,\ldots,n)}$$ \end{theorem} For the proof, first note that by applying the previous equation $k$ times we get $$\tilde J_{n+1,L} \dot{=} \sum_{j_1,\ldots,j_k=0}^n \prod_{i=1}^k (\gamma(n,n,2j_i))^{m_i} \prod_{i=1}^k \frac{\Delta_{2j_i}}{\theta(n,n,2j_i)} \Gamma_{n,(j_1,\ldots,j_k)}$$ where $m_i$ is the number of negative half-twists in the $i$-th twist region. For a rational function $R$, let $d(R)$ be the minimum degree of $R$ considered as a power series. The theorem will now follow from the following three lemmas.
\begin{lem} \label{gamma} \begin{eqnarray*} d(\gamma(n,n,2n)) &\leq& d(\gamma(n,n,2(n-1))) - 4n\\ d(\gamma(n,n,2j)) &\leq& d(\gamma(n,n,2(j-1))) \end{eqnarray*} \end{lem} \begin{lem} \label{fuse} $$d\left(\frac{\Delta_{2j}}{\theta(n,n,2j)}\right) = d\left(\frac{\Delta_{2(j-1)}}{\theta(n,n,2(j-1))}\right) - 2$$ \end{lem} \begin{lem} \label{graph} \begin{eqnarray*} d(\Gamma_{n,(j_1,\ldots,j_{(i-1)},j_i,j_{i+1},\ldots,j_k)}) &\leq& d(\Gamma_{n,(j_1,\ldots,j_{(i-1)},j_i-1,j_{i+1},\ldots,j_k)}) \pm 2\\ d(\Gamma_{n,(n,\ldots,n,\ldots,n)}) &\leq& d(\Gamma_{n,(n,\ldots,n-1,\ldots,n)}) - 2 \end{eqnarray*} \end{lem} \begin{proof}[Proof of Lemma \ref{gamma}] \begin{eqnarray*} \gamma(n,n,2j) &=& \pm A^{n+n-2j+\frac{n^2+n^2-(2j)^2}{2}}\\ &=& \pm A^{2n-2j+n^2-2j^2} \end{eqnarray*} Clearly $d(\gamma(n,n,2j))$ increases as $j$ decreases. Furthermore: \begin{eqnarray*} d(\gamma(n,n,2n)) &=& -n^2\\ d(\gamma(n,n,2(n-1))) &=& 2n-2(n-1)+n^2-2(n-1)^2\\ &=&-n^2 + 4n\\ \end{eqnarray*} \end{proof} \begin{proof}[Proof of Lemma \ref{fuse}] To calculate $\theta(n,n,2j)$ note that in the previous formula for $\theta$ we get $x= j$, $y=j$, and $z=n-j$. Using this and the fact that $d(\Delta_n) = -2n$, we get: \begin{eqnarray*} d\left(\frac{\Delta_{2j}}{\theta(n,n,2j)}\right) &=& d\left(\frac{\Delta_{2j}\Delta_{n-1}!\Delta_{n-1}!\Delta_{2j-1}!}{\Delta_{n+j}!\Delta_{j-1}!\Delta_{j-1}!\Delta_{n-j-1}!}\right)\\ &=& d\left(\frac{\Delta_{2j}\Delta_{2j-1}\Delta_{n-j}}{\Delta_{n+j}\Delta_{j-1}\Delta_{j-1}}\right) + d\left(\frac{\Delta_{2(j-1)}!\Delta_{n-1}!\Delta_{n-1}!}{\Delta_{n+j-1}!\Delta_{j-2}!\Delta_{j-2}!\Delta_{n-j}!}\right)\\ &=& -4j -2(2j-1) - 2(n-j)+4(j-1) +2(n+j) + d\left(\frac{\Delta_{2(j-1)}}{\theta(n,n,2(j-1))}\right)\\ &=& -2 + d\left(\frac{\Delta_{2(j-1)}}{\theta(n,n,2(j-1))}\right) \end{eqnarray*} \end{proof} Before we can prove Lemma \ref{graph}, we need to consider a property of the Jones-Wenzl idempotent. 
This idempotent is a linear combination of crossingless matchings, where the coefficients are rational functions in $A$. It is well-known that the crossingless matchings form a monoid, which we will call the Temperley-Lieb monoid (this monoid is very similar to the Temperley-Lieb algebra). The Temperley-Lieb monoid is generated by elements $h_i$ called hooks. \begin{proposition} \label{idem} The coefficient of the crossingless matching $M$ in the expansion of the Jones-Wenzl idempotent has minimum degree at least twice the minimum word length of $M$ in terms of the $h_i$'s. \end{proposition} \begin{proof} This follows easily from the recursive definition of the idempotent by an inductive argument. The only issue is that terms of the form $\begin{tikzpicture}[baseline=0.8cm] \draw (0,0) node[right]{\small{n}}--(0,1) node[left]{\scriptsize{n-1}} --(0,2) node[right]{\small{n}}; \draw[ultra thick, fill=white] (-0.2 ,0.5) rectangle (0.4, 0.65); \draw (0.2, 0.65) arc (180:0:0.25); \draw (0.7,0) node[right]{1}--(0.7, 0.65); \draw[ultra thick, fill=white] (-0.2 ,1.5) rectangle (0.4, 1.35); \draw (0.2, 1.35) arc (-180:0:0.25); \draw (0.7,2) node [right]{\small{1}} --(0.7, 1.35); \end{tikzpicture}$ may have a circle which needs to be removed. In this situation, the minimum degree of the coefficient is reduced by two, but the number of generators used is also reduced by one. \end{proof} \begin{proof}[Proof of Lemma \ref{graph}] Consider the graph $\Gamma_{n,(j_1,\ldots,j_k)}$ viewed as an element in the Kauffman bracket skein module of $S^3$. We can expand each of the Jones-Wenzl idempotents that appear as in Proposition \ref{idem}. Consider a single term $T_1$ in the expansion. Unless all of the idempotents have been replaced by the identity in this term, there will be a hook somewhere in the diagram. By removing a single hook, we get a different term $T_2$ in the expansion. The number of circles in $T_1$ differs from the number of circles in $T_2$ by exactly one.
Also there are fewer hooks in $T_2$, so by Proposition \ref{idem}, the minimum degree of $T_1$ is at least as large as the minimum degree of $T_2$. If, however, $T_2$ is the term with no hooks, then $T_2$ has one more circle than $T_1$ and thus $d(T_2) \leq d(T_1) -4$. This argument implies that the minimum degree of $\Gamma_{n,(j_1,\ldots,j_k)}$ comes from the term with each idempotent replaced with the identity. Now we must compare $d(\Gamma_{n,(j_1,\ldots,j_{(i-1)},j_i,j_{i+1},\ldots,j_k)})$ with $d(\Gamma_{n,(j_1,\ldots,j_{(i-1)},j_i-1,j_{i+1},\ldots,j_k)})$. From the previous paragraph, we know that both minimum degrees come from terms with the identities plugged into the idempotents. But comparing these two terms coming from these graphs, we see that the coefficient is $1$ in both cases, but the number of circles differs by $1$. So we get $$d(\Gamma_{n,(j_1,\ldots,j_{(i-1)},j_i,j_{i+1},\ldots,j_k)}) \leq d(\Gamma_{n,(j_1,\ldots,j_{(i-1)},j_i-1,j_{i+1},\ldots,j_k)}) \pm 2$$ Finally $\Gamma_{n,(n,\ldots,n,\ldots,n)}$ has one more circle than $\Gamma_{n,(n,\ldots,n-1,\ldots,n)}$, so we get $$d(\Gamma_{n,(n,\ldots,n,\ldots,n)}) \leq d(\Gamma_{n,(n,\ldots,n-1,\ldots,n)}) - 2$$ \end{proof} \section{The monoid of heads and tails of prime alternating links} \label{ProductFormula} The heads and tails of the colored Jones polynomials of prime alternating links form a monoid in the following sense: \begin{theorem} \label{Thm:monoid} Let $K_1$ and $K_2$ be two prime alternating links. Then there exists a prime alternating link $K_3$ such that the tails of the colored Jones polynomials of the three links satisfy: $$T_{K_1}(q) T_{K_2}(q) \dot{=} T_{K_3}(q).$$ In particular, any alternating link $K_3$ whose reduced $A$-graph can be formed by gluing the reduced $A$-graphs of $K_1$ and $K_2$ along a single edge (as in Figure \ref{prod}) satisfies this statement. A corresponding result holds for the heads.
\end{theorem} \begin{proof} The proof of this theorem uses Theorem \ref{skein}; because in the skein picture $q=A^{-4}$, we will consider the mirror images $K_1^*, K_2^*$ and $K_3^*$ of $K_1$, $K_2$, and $K_3$. Thus it is their reduced $B$-graphs which are related as in the statement of the theorem. In Figures \ref{pairing} and \ref{closure}, the interiors of the dotted regions represent $S(D^2,N,N,2N)$, the Kauffman bracket skein module of the disk with three colored points. This is known to be one dimensional when the three colored points are admissibly colored, generated by a single trivalent vertex \cite{Lickorish:KnotTheoryBook}. Thus any element of $S(D^2,N,N,2N)$ is some rational function times the generator. Let $\bar{\Gamma}$ be the closure of $\Gamma$ by filling in the outside of the dotted circle by a single trivalent vertex as in Figure \ref{closure}. Also define a bilinear pairing $<\Gamma_1, \Gamma_2>$ which identifies the boundaries of $\Gamma_1$ and $\Gamma_2$ as in Figure \ref{pairing}. By Theorem \ref{skein}, there are $\Gamma_1$ and $\Gamma_2$ such that \begin{eqnarray*} \Delta_N J_{K_1^*,N+1} &\dot{=}_{4(N+1)}& \bar{\Gamma}_1\\ \Delta_N J_{K_2^*,N+1} &\dot{=}_{4(N+1)}& \bar{\Gamma}_2\\ \Delta_N J_{K_3^*,N+1} &\dot{=}_{4(N+1)}& <\Gamma_1,\Gamma_2>\\ \end{eqnarray*} By the fact that $S(D^2,N,N,2N)$ is one-dimensional, there are rational functions $R_i$ for $i=1,2$ such that $\Gamma_i$ equals $R_i$ times the trivalent vertex.
Thus \begin{eqnarray*} \bar{\Gamma}_1 &=& R_1 \theta(N,N,2N)\\ \bar{\Gamma}_2 &=& R_2 \theta(N,N,2N)\\ <\Gamma_1,\Gamma_2> &=& R_1 R_2 \theta(N,N,2N)\\ \end{eqnarray*} Therefore \begin{eqnarray*} J_{K_1^*,N+1} J_{K_2^*,N+1} &\dot{=}_{4(N+1)}& R_1 R_2 \left(\frac{\theta(N,N,2N)}{\Delta_N}\right)^2\\ &=&R_1 R_2 \left(\frac{\Delta_{2N}}{\Delta_N}\right)^2\\ &=&R_1 R_2 \left(\frac{A^{2(2N+1)}-A^{-2(2N+1)}}{A^{2(N+1)}-A^{-2(N+1)}}\right)^2\\ &\dot{=}_{4(N+1)}& J_{K_3^*,N+1} \end{eqnarray*} This last line is true because $\frac{A^{2(2N+1)}-A^{-2(2N+1)}}{A^{2(N+1)}-A^{-2(N+1)}}\dot{=}_{4(N+1)} 1$. \end{proof} \begin{figure} \begin{tikzpicture}[scale=1] \draw[style=dashed] (0,1) ellipse (2 and 0.5) node{$G_{1}$}; \draw (-1,0) -- (1,0); \draw (1,0) -- (1.4,0.645); \draw (-1,0)-- (-1.4,0.645); \draw [color=white] (0,-1.45); \draw (2,0) node {*}; \end{tikzpicture} \begin{tikzpicture}[scale=1] \draw[style=dashed] (0,-1) ellipse (2 and 0.5) node{$G_{2}$}; \draw (-1,0) -- (1,0); \draw (1,0) -- (1.4,-0.645); \draw (-1,0)-- (-1.4,-0.645); \draw [color=white] (0,1.45); \draw (2,0) node {=}; \end{tikzpicture} \begin{tikzpicture}[scale=1] \draw[style=dashed] (0,1) ellipse (2 and 0.5) node{$G_{1}$}; \draw (-1,0) -- (1,0); \draw (1,0) -- (1.4,0.645); \draw (1,0)-- (1.4,-0.645); \draw (-1,0)-- (-1.4, 0.645); \draw (-1,0)-- (-1.4,-0.645); \draw[style=dashed] (0,-1) ellipse (2 and 0.5) node{$G_{2}$}; \end{tikzpicture} \caption{Product of two checkerboard graphs} \label{prod} \end{figure} \begin{figure} \begin{tikzpicture}[scale=1] \draw[style=dashed] (0,1) ellipse (2 and 0.5) node{$\Gamma_{1}$}; \draw[style=dashed] (-2.5,0) -- (2.5,0); \draw (1.38,-0.64) -- (1.38, -0.14) node[right] {n}-- (1.38,0.64); \draw (0,-0.5)--(0,-0.14) node[right]{2n} -- (0,0.5); \draw (-1.38,-0.64) -- (-1.38 ,-0.14) node[right]{n} --(-1.38,0.64); \draw[style=dashed] (0,-1) ellipse (2 and 0.5) node{$\Gamma_{2}$}; \end{tikzpicture} \caption{The bilinear pairing $<\Gamma_1,\Gamma_2>$} \label{pairing} \end{figure}
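The last equivalence in the proof above can also be checked mechanically: after normalizing the lowest degree to zero, the ratio $\frac{A^{2(2N+1)}-A^{-2(2N+1)}}{A^{2(N+1)}-A^{-2(N+1)}}$ equals $(1-A^{8N+4})/(1-A^{4N+4})$, whose expansion is $1+A^{4N+4}-\cdots$, so it agrees with $1$ modulo $A^{4(N+1)}$. A sketch (not from the paper):

```python
def ratio_coeffs(N, order):
    """Coefficients of (1 - A^{8N+4}) / (1 - A^{4N+4}) up to the given order."""
    num = {0: 1, 8 * N + 4: -1}
    c = [0] * order
    for e in range(order):
        # geometric-series recurrence: c[e] = num[e] + c[e - (4N+4)]
        c[e] = num.get(e, 0) + (c[e - (4 * N + 4)] if e >= 4 * N + 4 else 0)
    return c

for N in range(1, 12):
    c = ratio_coeffs(N, 4 * N + 6)
    assert c[0] == 1                            # constant term is 1
    assert all(x == 0 for x in c[1:4 * N + 4])  # vanishes below A^{4(N+1)}
    assert c[4 * N + 4] == 1                    # first correction term
```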
\begin{figure}[ht] \begin{tikzpicture}[scale=1] \draw[style=dashed] (0,1) ellipse (2 and 0.5) node{$\Gamma$}; \draw (0,-0.5)--(0,0.14) node[right]{2n} -- (0,0.5); \fill (0,-0.5) circle(0.07); \draw (-1,0.56) arc (-183:3:1); \draw (1.0,0.14) node[right]{n}; \draw (-0.9,0.14) node[right]{n}; \end{tikzpicture} \caption{The closure of $\Gamma$} \label{closure} \end{figure} \begin{example} Figure \ref{Fig920} depicts the knot $9_{20}$ together with its two reduced checkerboard graphs. In the sense of the proof of Theorem \ref{Thm:monoid}, the first reduced checkerboard graph is the product of two squares and a triangle. The second reduced checkerboard graph is the product of two triangles. Thus the head and tail functions of the colored Jones polynomial of $9_{20}$ are: $$T_{9_{20}}(q)\dot{=} \, f(-q^2,-q)^2$$ and $$H_{9_{20}}(q)\dot{=} \, \Psi(q^3,q)^2 f(-q^2,-q).$$ \end{example} \section{Multiple Heads} We single out a sample result for certain non-alternating knots: \begin{proposition} \label{multihead} Let $p>m$. An $(m,p)$-torus knot has one head and one tail if $m=2$ and two heads and one tail if $m>2$. The two heads correspond to even or odd $N$. \end{proposition} \begin{proof} The torus knot is the closure of a positive braid. Thus, by Theorem \ref{Armond:Walks}, the tail is identical to $1$. For the head we have: \begin{eqnarray*} (q^N-1) J_{N,K}(q)&\dot =& \sum_{r=-(N-1)/2}^{(N-1)/2} q^{r^2 m p + r p +r m +1}-q^{r^2 m p - r p + r m}\\ &\dot =& \sum_{r=-(N-1)/2}^{(N-1)/2} q^{\psi_{m,p}(r+ 1/m)} - q^{\psi_{m,p}(r)} \end{eqnarray*} with $\psi_{m,p}(r)=r^2 m p - r p + r m.$ Here $\dot{=}$ means up to a sign and up to some power of $q^{1/2}$. The summation variable $r$ runs over integers for odd $N$ and over half-integers for even $N$; for $m=2$ the two ranges give the same series up to sign, while for $m>2$ they produce two different heads.
\end{proof} \begin{theorem} For $K$ a $(4,p)$-torus knot we have \cite{Morton:ColoredJonesTorusKnots}: $$H_{K}^{\mbox{odd}}(q^2) - q^{p-2} H_{K}^{\mbox{even}}(q^2) = f(-q^2,-q^{p-2}).$$ \end{theorem} \begin{proof} By Proposition \ref{multihead} we have: \begin{eqnarray*} H_{K}^{\mbox{odd}}(q) - q^{p/2-1} H_{K}^{\mbox{even}}(q) &=& \sum_{r=-\infty}^{\infty} \left( q^{\psi_{4,p}(r)} - q^{\psi_{4,p}(r+1/4)}+q^{\psi_{4,p}(r+1/2)}-q^{\psi_{4,p}(r+3/4)} \right)\\ &=& \sum_{R=-\infty}^{\infty} (-1)^R q^{\psi_{4,p}(R/4)} \end{eqnarray*} \end{proof}
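The theorem can be verified numerically to high order. Assuming Ramanujan's general theta function $f(a,b)=\sum_{R=-\infty}^{\infty}a^{R(R+1)/2}b^{R(R-1)/2}$ (as used earlier in the paper), the right-hand side has $R$-th exponent $R(R+1)+(p-2)R(R-1)/2$, while substituting $q\mapsto q^2$ in the proof turns the left-hand side into $\sum_R(-1)^Rq^{pR(R-1)/2+2R}$. The sketch below (not from the paper) compares the two bilateral sums for $p=5,7,9$:

```python
TRUNC = 400  # compare coefficients of q^0, ..., q^{TRUNC-1}

def theta_series(expo):
    """Truncated bilateral sum over all integers R of (-1)^R q^{expo(R)}."""
    c = [0] * TRUNC
    for R in range(-TRUNC, TRUNC + 1):  # exponents grow quadratically in R
        e = expo(R)
        if 0 <= e < TRUNC:
            c[e] += 1 if R % 2 == 0 else -1
    return c

for p in (5, 7, 9):
    # Left-hand side: exponent 2*psi_{4,p}(R/4) = p R(R-1)/2 + 2R.
    lhs = theta_series(lambda R: p * R * (R - 1) // 2 + 2 * R)
    # Right-hand side: f(-q^2,-q^{p-2}) has exponent R(R+1) + (p-2)R(R-1)/2.
    rhs = theta_series(lambda R: R * (R + 1) + (p - 2) * R * (R - 1) // 2)
    assert lhs == rhs
```

In fact the two exponent formulas agree termwise for every integer $R$, which is exactly the content of the proof above.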
\section*{Acknowledgements} N.E.M. wishes to thank Juan Fuster and IFIC-University of Valencia (Spain) for their interest and support, and P. Sodano and INFN-Sezione di Perugia (Italy) for their hospitality and support during the final stages of this work. The work of D.V.N. is supported by D.O.E. grant DE-FG03-95-ER-40917.
\section{Introduction} Cluster algebras were introduced by Fomin and Zelevinsky in \cite{FZ1}. A cluster algebra $\mathcal{A}$ is a~subalgebra of a rational function f\/ield with a distinguished set of generators, called \emph{cluster variables}, that are generated by an iterative procedure called \emph{mutation}. By construction cluster variables are rational functions, but it is shown in \emph{loc.~cit.}~that they are Laurent polynomials with integer coef\/f\/icients. Moreover, these coef\/f\/icients are known to be non-negative \cite{GHKK, LS}. \looseness=-1 Each cluster algebra $\mathcal{A}$ also determines an \emph{upper cluster algebra} $\mathcal{U}$, where $\mathcal{A}\subseteq \mathcal{U}$ \cite{BFZ}. It is believed, especially in the context of algebraic geometry, that $\mathcal{U}$ is better behaved than $\mathcal{A}$ (for instance, see \cite{BMRS, GHK, GHKK}). Matherne and Muller \cite{MM} gave a general algorithm to compute generators of $\mathcal{U}$. Plamondon \cite{P1,P2} obtained a (not necessarily positive) formula for certain elements of skew-symmetric upper cluster algebras using quiver representations. However, a directly computable and manifestly positive formula for (non-trivial) elements in $\mathcal{U}$ is not available yet. In this paper we develop an elementary formula for a family of elements $\{\tilde{x}[{\bf a}]\}_{{\bf a}\in\mathbb{Z}^n}$ of the upper cluster algebra for any f\/ixed initial seed $\Sigma$. We write $\tilde{x}_\Sigma[{\bf a}]$ for $\tilde{x}[{\bf a}]$ when we need to emphasize the dependence on the initial seed $\Sigma$. This family of elements is constructed in Def\/inition~\ref{definition:GCC2} in terms of sequences of sequences, and an equivalent def\/inition is given in terms of Dyck paths and globally compatible collections in Def\/inition~\ref{definition:GCC}. One of our main theorems is the following. For the def\/inition of {\it geometric type}, see Section~\ref{section2}. 
\begin{Theorem}\label{thm:x[a]} Let $\mathcal{U}$ be the upper cluster algebra of a $($not necessarily acyclic$)$ cluster algebra~$\mathcal{A}$ of geometric type, and~$\Sigma$ be any seed. Then $\tilde{x}_\Sigma[\mathbf{a}]\in\mathcal{U}$ for all~$\mathbf{a}\in \mathbb{Z}^n$. \end{Theorem} These elements have some nice properties. They have positive coef\/f\/icients by def\/inition; they are multiplicative in the sense that we can factorize an element $\tilde{x}[{\bf a}]$ (${\bf a}\in\mathbb{Z}_{\ge0}^n$) into ``elementary pieces'' $\tilde{x}[{\bf a}']$ where all entries of ${\bf a}'$ are $0$ or $1$; for an equioriented quiver of type $A$, these elements form a canonical basis~\cite{B9}. Moreover, we shall prove in Section~\ref{section6} the following result. (For the terminology and notation therein, see Section~\ref{section2}.) \begin{Theorem}\label{thm:basis} Let $\mathcal{A}$ be an acyclic cluster algebra of geometric type, and $\Sigma$ be an acyclic seed. Then $\{\tilde{x}_\Sigma[\mathbf{a}]\}_{\mathbf{a}\in\mathbb{Z}^n}$ form a $\mathbb{ZP}$-basis of $\mathcal{A}$. \end{Theorem} For a non-acyclic seed $\Sigma$, the family $\{\tilde{x}_\Sigma[\mathbf{a}]\}$ may neither span $\mathcal{U}$ nor be linearly independent. For a linearly dependent example, see Example~\ref{example}(b). Nevertheless, for certain non-acyclic cluster algebras and for some choice of ${\bf a} \in \mathbb{Z}^n$, the element $\tilde{x}_\Sigma[{\bf a}]$ can be used to construct elements in $\mathcal{U}\setminus \mathcal{A}$. One of our main results in this direction is the following: \begin{Theorem}\label{main-cor} A non-acyclic rank three skew-symmetric cluster algebra $\mathcal{A}$ of geometric type is not equal to its upper cluster algebra $\mathcal{U}$. 
\end{Theorem} This theorem is inspired by the following results: Berenstein, Fomin and Zelevinsky \cite[Proposition 1.26]{BFZ} showed that $\mathcal{A}\neq \mathcal{U}$ for the Markov skew-symmetric matrix $M(2)$, where \begin{gather*} M(a)=\left(\begin{matrix}0 & a & -a\\ -a & 0 & a\\ a & -a & 0\end{matrix}\right). \end{gather*} Speyer \cite{S} found an inf\/initely generated upper cluster algebra, which is the one associated to the skew-symmetric matrix $M(3)$ with generic coef\/f\/icients. On the contrary, \cite[Proposition~6.2.2]{MM} showed that the upper cluster algebra associated to $M(a)$ for $a\geq 2$ but with trivial coef\/f\/icients is f\/initely generated, which implies $\mathcal{A}\neq \mathcal{U}$ because $\mathcal{A}$ is known to be inf\/initely generated \cite[Theorem~1.24]{BFZ}. The paper is organized as follows. In Section~\ref{section2} we review def\/initions of cluster algebras and upper cluster algebras. Section~\ref{section3} is devoted to the construction of $\tilde{x}[{\bf a}]$ and the proof of Theorem~\ref{thm:x[a]}, and Section~\ref{section4} to the proof of Theorem \ref{main-cor}, that $\mathcal{A}\neq \mathcal{U}$ for non-acyclic rank~3 skew-symmetric cluster algebras. Section~\ref{section5} introduces the Dyck path formula and its relation with the construction in Section~\ref{section3}. Finally, in Section~\ref{section6}, we present two proofs of Theorem~\ref{thm:basis}. \section{Cluster algebras and upper cluster algebras}\label{section2} Let $m$, $n$ be positive integers such that $m\geq n$. Denote $\mathcal{F}=\mathbb{Q}(x_1,\dots,x_m)$. 
A \emph{seed} $\Sigma=(\tilde{\bf x},\tilde{B})$ is a pair where $\tilde{\bf x}=\{x_1,\ldots,x_m\}$ is an $m$-tuple of elements of $\mathcal{F}$ that form a free generating set, $\tilde{B}$ is an $m\times n$ integer matrix such that the submatrix $B$ (called the \emph{principal part}) formed by the top $n$ rows is sign-skew-symmetric (that is, either $b_{ij}=b_{ji}=0$, or else~$b_{ji}$ and~$b_{ij}$ are of opposite sign; in particular, $b_{ii}=0$ for all~$i$). The integer $n$ is called the \emph{rank} of the seed. For any integer $a$, let $[a]_+:= \max(0,a)$. Given a seed $(\tilde{\bf x},\tilde{B})$ and a specif\/ied index $1\leq k \leq n$, we def\/ine \emph{mutation} of $(\tilde{\bf x},\tilde{B})$ at $k$, denoted $\mu_{k}(\tilde{\bf x},\tilde{B})$, to be a new seed $(\tilde{\bf x}',\tilde{B}')$, where \begin{gather*} x_i' = \begin{cases} \displaystyle x_k^{-1}\left(\prod\limits_{l=1}^{m}x_l^{[b_{lk}]_+}+\prod\limits_{l=1}^{m}x_l^{[-b_{lk}]_+}\right), & \mbox{if} \ \ i=k, \\ x_i, & \mbox{otherwise}, \end{cases}\\ b_{ij}' = \begin{cases} -b_{ij}, & \mbox{if} \ \ i=k \ \ \text{or} \ \ j=k, \\ b_{ij} + \dfrac{|b_{ik}|b_{kj} + b_{ik}|b_{kj}|}{2}, & \mbox{otherwise.} \end{cases} \end{gather*} If the principal part of $\tilde{B}'$ is also sign-skew-symmetric, we say that the mutation is well-def\/ined. Note that a well-def\/ined mutation is an involution, that is, mutating $(\tilde{\bf x}', \tilde{B}')$ at~$k$ will return to our original seed $(\tilde{\bf x}, \tilde{B})$. Two seeds $\Sigma_1$ and $\Sigma_2$ are said to be \emph{mutation-equivalent} or in the same \emph{mutation class} if~$\Sigma_2$ can be obtained by a sequence of well-def\/ined mutations from~$\Sigma_1$. This is obviously an equivalence relation. A seed~$\Sigma$ is said to be \emph{totally mutable} if every sequence of mutations from~$\Sigma$ consists of well-def\/ined ones. 
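The matrix part of the mutation rule above is directly computable. The following Python sketch (the function name is ours, and rows and columns are indexed from $0$) mutates an $m\times n$ extended exchange matrix and illustrates that a well-def\/ined mutation is an involution:

```python
def mutate_matrix(B, k):
    """Mutate the m x n integer matrix B = (b_ij) at column index k:
    b'_ij = -b_ij if i == k or j == k, and otherwise
    b'_ij = b_ij + (|b_ik| * b_kj + b_ik * |b_kj|) / 2."""
    m, n = len(B), len(B[0])
    return [[-B[i][j] if i == k or j == k
             else B[i][j] + (abs(B[i][k]) * B[k][j]
                             + B[i][k] * abs(B[k][j])) // 2
             for j in range(n)] for i in range(m)]
```

For the $3\times 2$ matrix $\tilde B$ with rows $(0,1)$, $(-1,0)$, $(1,-1)$, mutating twice at the same index returns the original matrix, as expected.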
It is shown in \cite[Proposition~4.5]{FZ1} that a seed is totally mutable if $B$ is skew-symmetrizable, that is, if there exists a diagonal matrix $D$ with positive diagonal entries such that $DB$ is skew-symmetric. To emphasize the dif\/ferent roles played by $x_i$ $(i\le n)$ and $x_i$ $(i>n)$, we also use $({\bf x},{\bf y},B)$ to denote the seed $(\tilde{\bf x}, \tilde{B})$, where ${\bf x}=\{x_1,\dots,x_n\}$ and ${\bf y}=\{y_1,\dots,y_n\}$ with $y_j=\prod\limits_{i=n+1}^mx_i^{b_{ij}}$. We call~${\bf x}$ a~\emph{cluster}, ${\bf y}$ a~\emph{coefficient tuple}, $B$ the \emph{exchange matrix}, and the elements of a cluster \emph{cluster variables}. We denote \begin{gather*} \mathbb{ZP}=\mathbb{Z}\big[x_{n+1}^{\pm1},\dots,x_m^{\pm1}\big]. \end{gather*} In this paper, we shall only study cluster algebras of geometric type, def\/ined as follows. \begin{Definition} Given a totally mutable seed $({\bf x}, {\bf y},B)$, the \emph{cluster algebra} $\mathcal{A}({\bf x},{\bf y},B)$ \emph{of geometric type} is the subring of $\mathcal{F}$ generated over $\mathbb{ZP}$ by \begin{gather*} \bigcup_{({\bf x}',{\bf y}',B')} {\bf x}', \end{gather*} where the union runs over all seeds $({\bf x}', {\bf y}',B')$ that are mutation-equivalent to $({\bf x},{\bf y},B)$. The seed $({\bf x},{\bf y},B)$ is called the \emph{initial seed} of $\mathcal{A}({\bf x},{\bf y},B)$. (Since $({\bf x},{\bf y},B)=(\tilde{\bf x},\tilde{B})$ in our notation, $\mathcal{A}({\bf x},{\bf y},B)$ is also denoted $\mathcal{A}(\tilde{\bf x},\tilde{B})$.) \end{Definition} It follows from the def\/inition that any seed in the same mutation class will generate the same cluster algebra up to isomorphism. For any $n \times n$ sign-skew-symmetric matrix $B$, we associate a (simple) directed graph $Q_B$ with vertices $1,\ldots,n$, such that for each pair $(i,j)$ with $b_{ij}>0$, there is \emph{exactly one} arrow from vertex~$i$ to vertex~$j$. 
(Note that even if $B$ is skew-symmetric, $Q_B$ is not the usual quiver associated to~$B$ which can have multiple edges.) We call $B$ (as well as the digraph $Q_B$ and the seed $\Sigma=({\bf x},{\bf y},B)$) \emph{acyclic} if there are no oriented cycles in $Q_B$. We say that the cluster algebra $\mathcal{A}({\bf x},{\bf y},B)$ is \emph{acyclic} if there exists an acyclic seed; otherwise we say that the cluster algebra is \emph{non-acyclic}. \begin{Definition} Given a cluster algebra $\mathcal{A}$, the \emph{upper cluster algebra} $\mathcal{U}$ is def\/ined as \begin{gather*} \mathcal{U} = \bigcap_{{\bf x}=\{x_1,\ldots,x_n\}} \mathbb{ZP}\big[x_1^{\pm 1},\ldots,x_n^{\pm 1}\big], \end{gather*} where ${\bf x}$ runs over all clusters of~$\mathcal{A}$. \end{Definition} Now we can give the following def\/inition of coprime when the cluster algebra is of geometric type (given in \cite[Lemma~3.1]{BFZ}). \begin{Definition} A seed $(\tilde{\bf x},\tilde{B})$ is \emph{coprime} if no two columns of $\tilde{B}$ are proportional to each other with the proportionality coef\/f\/icient being a ratio of two odd integers. A cluster algebra is \emph{totally coprime} if every seed is coprime. \end{Definition} In certain cases it is suf\/f\/icient to consider only the clusters of the initial seed and the seeds that are a single mutation away from it, rather than all the seeds in the entire mutation class. For a cluster ${\bf x}$, let $\mathcal{U}_{{\bf x}}$ be the intersection in $\mathbb{ZP}(x_1,\ldots,x_n)$ of the $n+1$ Laurent rings corresponding to ${\bf x}$ and its one-step mutations: \begin{gather*} \mathcal{U}_{\bf x} := \mathbb{ZP}\big[x_1^{\pm 1},\ldots, x_n^{\pm 1}\big] \cap \left(\bigcap_i \mathbb{ZP}\big[x_1^{\pm 1},\ldots,x_i'^{\pm 1},\ldots, x_n^{\pm 1}\big]\right). \end{gather*} \begin{Theorem}[\cite{BFZ,M}]\label{UB} We have $\mathcal{A}\subseteq\mathcal{U}\subseteq\mathcal{U}_{\bf x}$. 
Moreover, \begin{enumerate}\itemsep=0pt \item[$(i)$] If $\mathcal{A}$ is acyclic, then $\mathcal{A}=\mathcal{U}$. \item[$(ii)$] If $\mathcal{A}$ is totally coprime, then $\mathcal{U}=\mathcal{U}_{{\bf x}}$ for any seed $({\bf x},{\bf y},B)$. In particular, this holds when the matrix~$\tilde{B}$ has full rank. \end{enumerate} \end{Theorem} \section{Construction of some elements in the upper cluster algebra}\label{section3} Fix an initial seed $\Sigma$ (thus $\tilde{B}$ is f\/ixed). In this section, we construct Laurent polyno\-mials~$\tilde{x}[{\bf a}]$ $(=\tilde{x}_\Sigma[{\bf a}])$ and show that they are in the upper cluster algebra. We def\/ine $b_{ij}=-b_{ji}$ for $1\le i\le n$, $n+1\le j\le m$, and def\/ine $b_{ij}=0$ if $i,j>n$. Def\/ine \begin{gather*} Q_{\tilde B}=\{(i,j)\, |\, 1\le i,j\le m,\; b_{ij}>0\}. \end{gather*} By abuse of notation we also use~$Q_{\tilde B}$ to denote the digraph with vertex set~$\{1,\dots,m\}$ and edge set~$Q_{\tilde B}$. Then~$Q_B$ is a full sub-digraph of~$Q_{\tilde B}$ that consists of the f\/irst $n$ vertices. We def\/ine the following operations on the set of f\/inite $\{0,1\}$-sequences. Let $t=(t_1,\dots,t_a)$, $t'=(t'_1,\dots,t'_b)$. Def\/ine \begin{gather}\label{eq:t sum and dot} \bar{t}=(\bar{t}_1,\dots,\bar{t}_a)=(1-t_1,\dots,1-t_a),\qquad |t|=\sum_{r=1}^a t_r,\qquad t\cdot t'=\sum_{r=1}^{\min(a,b)} t_rt'_r. \end{gather} and for $t=()\in \{0,1\}^0$, def\/ine $\bar{t}=()$. \begin{Definition}\label{definition:GCC2} Let $\mathbf{a}=(a_i)\in \mathbb{Z}^n$. \begin{enumerate}\itemsep=0pt \item[(i)] Let $\mathbf{s}=(\mathbf{s}_1,\dots,\mathbf{s}_n)$ where $\mathbf{s}_i=(s_{i,1},s_{i,2},\dots,s_{i,[a_i]_+})\in\{0,1\}^{[a_i]_+}$ for $i=1,\dots,n$. Let $S_{\rm all}=S_{\rm all}({\bf a})$ be the set of all such $\mathbf{s}$. Let $S_{\rm gcc}=S_{\rm gcc}({\bf a})=\{\mathbf{s}\in S_{\rm all}\, |\, \mathbf{s}_i\cdot\bar{\mathbf{s}}_j=0$ for every $(i,j)\in Q_B\}$. By convention, we assume that $a_i=0$ and $\mathbf{s}_i=()$ for $i>n$. 
\item[(ii)] Def\/ine $\tilde{x}[\mathbf{a}]$ $(=\tilde{x}_\Sigma[{\bf a}])$ to be the Laurent polynomial \begin{gather*} \tilde{x}[\mathbf{a}]:=\left(\prod_{l=1}^n x_l^{-a_l}\right) \sum_{\mathbf{s}\in S_{\rm gcc}}\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}| \bar{\mathbf{s}}_j |} x_{j}^{-b_{ji}|\mathbf{s}_i|}\right). \end{gather*} (Note that the exponent $-b_{ji}|\mathbf{s}_i|$ is nonnegative since $b_{ji}<0$ for $(i,j)\in Q_{\tilde B}$.) \item[(iii)] Def\/ine $z[\mathbf{a}]$ $(=z_\Sigma[{\bf a}])$ to be the Laurent polynomial \begin{gather*} z[\mathbf{a}]:=\left(\prod_{l=1}^n x_l^{-a_l}\right) \sum_{\mathbf{s}\in S_{\rm all}}\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}|\bar{\mathbf{s}}_j|} x_{j}^{-b_{ji}|\mathbf{s}_i|}\right). \end{gather*} \end{enumerate} \end{Definition} \begin{Example}\label{example} (a) Use Def\/inition \ref{definition:GCC2} to compute $\tilde{x}[\mathbf{a}]$ for $n=2$, $m=3$, $\mathbf{a}=(1,1)$, and \begin{gather*} \tilde{B}=\begin{bmatrix} 0&a\\ -a'&0\\ c&-b \end{bmatrix},\qquad a,a',b,c>0. \end{gather*} Then we have $Q_{\tilde B}=\{(1,2),(2,3),(3,1)\}$, $S_{\rm gcc}=\{((0),(0)), ((0),(1)), ((1),(1))\}$. Note that $\mathbf{s}=((1),(0))$ is not in $S_{\rm gcc}$ because $(1,2)\in Q_{\tilde B}$ but $\mathbf{s}_1\cdot\bar{\mathbf{s}}_2=1(1-0)=1\neq 0$. Thus \begin{gather*} \tilde{x}[\mathbf{a}] =x_1^{-1}x_2^{-1} \big( \big(x_1^ax_2^0\big)\big(x_2^0x_3^0\big)\big(x_3^cx_1^0\big)+ \big(x_1^0x_2^0\big)\big(x_2^0x_3^b\big)\big(x_3^cx_1^0\big)+ \big(x_1^0x_2^{a'}\big)\big(x_2^0x_3^b\big)\big(x_3^0x_1^0\big)\big)\\ \hphantom{\tilde{x}[\mathbf{a}]}{} =x_1^{-1}x_2^{-1} \big(x_1^ax_3^c+x_3^bx_3^c+x_2^{a'} x_3^b\big). \end{gather*} (b) For a non-acyclic seed, $\tilde{x}[\mathbf{a}]$ is less interesting for certain choices of ${\bf a}$: take $n=m=3$, $\mathbf{a}=(1,1,1)$ and \begin{gather*} \tilde{B}=\begin{bmatrix} 0&a&-c'\\ -a'&0&b\\ c&-b'&0 \end{bmatrix},\qquad a,a',b,b',c,c'>0. 
\end{gather*} Then $Q_{\tilde B}=\{(1,2),(2,3),(3,1)\}$, $S_{\rm gcc}=\{((0),(0),(0)), ((1),(1),(1))\}$. Thus \begin{gather*} \tilde{x}[\mathbf{a}] =\frac{x_1^ax_2^bx_3^c+x_2^{a'}x_3^{b'}x_1^{c'}}{x_1x_2x_3}, \end{gather*} which can be reduced to $x_1^{a-1}x_2^{b-1}x_3^{c-1}+x_2^{a'-1}x_3^{b'-1}x_1^{c'-1}$, that is \begin{gather*} \tilde{x}[1,1,1]=\tilde{x}[(1-a,1-b,1-c)]+\tilde{x}[(1-a',1-b',1-c')]. \end{gather*} Thus $\{\tilde{x}[{\bf a}]\}$ is not $\mathbb{ZP}$-linear independent. \end{Example} \begin{Lemma}[multiplicative property of $\tilde{x}{[{\bf a}]}$ and $z{[{\bf a}]}$]\label{multiplicative} Fix a seed $\Sigma$. \begin{enumerate}\itemsep=0pt \item[$(i)$] For $k,a\in\mathbb{Z}$, ${\bf a}\in\mathbb{Z}^n$, define \begin{gather*} f_k(a) :=\begin{cases} 1,&\textrm{if} \ \ k\le a,\\ 0, &\textrm{otherwise}, \end{cases} \end{gather*} $f_k(\mathbf{a}):=(f_k(a_1),\dots,f_k(a_n))\in\{0,1\}^n$, $\mathbf{a}_+:=([a_1]_+, [a_2]_+,\dots, [a_n]_+)$. Then \begin{gather*} \tilde{x}[\mathbf{a}] =\left(\prod_{i=1}^n x_i^{[-a_i]_+}\right)\tilde{x}[\mathbf{a}_+] =\left(\prod_{i=1}^n x_i^{[-a_i]_+}\right)\prod_{k\ge1} \tilde{x}[f_k(\mathbf{a}_+)]. \end{gather*} \item[$(ii)$] Assume that the underlying undirected graph of $Q_B$ has $c>1$ components, inducing a~partition of the vertex set $\{1,\dots,n\}=I_1\cup\cdots\cup I_c$. Define ${\bf a}^{(j)}=\big(a^{(j)}_1,\dots,a^{(j)}_c\big)$ by $a^{(j)}_i=a_i$ if $i\in I_j$, otherwise $a^{(j)}_i=0$. Then \begin{gather*} \tilde{x}[{\bf a}]=\prod_{j=1}^c\tilde{x}\big[{\bf a}^{(j)}\big]. \end{gather*} $($Note that each factor $\tilde{x}[{\bf a}^{(j)}]$ can be regarded as an element in a cluster algebra of rank $|I_j|<n$, with the same $\mathbb{ZP}.)$ \item[$(iii)$] We have $z[{\bf a}]=\prod\limits_{i=1}^nx_i^{\langle-a_i\rangle}$ where we use the notation \begin{gather*} x_i^{\langle r\rangle}=\begin{cases} x_i^r, &\textrm{if} \ \ r\ge 0,\\ (x_i')^{-r}, &\textrm{if} \ \ r< 0. 
\end{cases} \end{gather*} $($Recall that $x_i'$ is obtained by mutating the initial seed at $i)$. As a consequence, $(i)$, $(ii)$ still hold if we replace~$\tilde{x}[-]$ by~$z[-]$. Moreover, if the seed $\Sigma$ is acyclic, then $\{z[\mathbf{a}]\}_{\mathbf{a}\in\mathbb{Z}^n}$ is the standard monomial basis, i.e., the set of monomials in $x_1,\dots,x_n$, $x_1',\dots,x_n'$ which contain no product of the form~$x_jx_j'$. \end{enumerate} \end{Lemma} \begin{proof} (i)~The f\/irst equality is obvious. For the second equality, we may assume ${\bf a}={\bf a}_+$. For a~sequence $t=(t_1,\dots,t_a)\in\{0,1\}^a$, we can regard it as an inf\/inite sequence $(t_1,\dots,t_a,0,0,\dots)$. Then $\bar{t}=(f_r(a)-t_r)_{r=1}^\infty$. The sum and dot product in~\eqref{eq:t sum and dot} extend naturally. Then $\tilde{x}[\mathbf{a}]$ is the coef\/f\/icient of $z^0$ in the following polynomial in $\mathbb{Z}[x_1^{\pm 1},\ldots,x_m^{\pm 1}][z]$ (note that all the products $\prod\limits_{k\ge 1}$ appearing below are f\/inite products since the factors are $1$ if $k>\max([a_1]_+,\dots,[a_n]_+)$) \begin{gather*} \left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{\mathbf{s}\in S_{\rm all}}\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}|\bar{\mathbf{s}}_j|} x_{j}^{-b_{ji}|\mathbf{s}_i|}z^{\mathbf{s}_i\cdot \bar{\mathbf{s}}_j}\right)\\ \qquad {} =\left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{\mathbf{s}\in S_{\rm all}}\prod_{k\ge1}\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}\bar{s}_{j,k}} x_{j}^{-b_{ji}s_{i,k}}z^{s_{i,k}\bar{s}_{j,k}}\right)\\ \qquad {} =\left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{\mathbf{s}\in S_{\rm all}}\prod_{k\ge1}\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}(f_k(a_j)-s_{j,k})} x_{j}^{-b_{ji}s_{i,k}}z^{s_{i,k}(f_k(a_j)-s_{j,k})}\right)\\ \qquad {} =\left(\prod_{i=1}^n x_i^{-a_i}\right)\prod_{k\ge1} \left(\sum_{\mathbf{s}^k\in S_{\rm all}^k}\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}(f_k(a_j)-s_{j,k})} x_{j}^{-b_{ji}s_{i,k}}z^{s_{i,k}(f_k(a_j)-s_{j,k})}\right), \end{gather*} 
where $S^k_{\rm all}$ is the set of all possible $\mathbf{s}^k=(s_{1,k},\dots,s_{n,k})$ with $(\mathbf{s}_1,\dots,\mathbf{s}_m)$ running through $S_{\rm all}$ (recall that~$s_{i,k}$ is the $k$-th number of ${\bf s}_i$, and by convention $s_{i,k}=0$ if $k>[a_i]_+$). Equivalently, \begin{gather*} S^k_{\rm all}=\big\{\mathbf{s}^k=(s_{1,k},\dots,s_{n,k})\in\{0,1\}^n\,|\,0\le s_{i,k}\le f_k(a_i) \; \textrm{for} \; i=1,\dots,n\big\}. \end{gather*} Meanwhile, denote $f_k(\mathbf{a})=(f_k(a_1),\dots,f_k(a_n))\in\{0,1\}^n$. Then $\tilde{x}[f_k(\mathbf{a})]$ is the coef\/f\/icient of~$z^0$ of \begin{gather*} \left(\prod_{i=1}^n x_i^{-f_k(a_i)}\right) \sum_{\mathbf{s}^k\in S_{\rm all}^k}\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}(f_k(a_j)-s_{j,k})} x_{j}^{-b_{ji}s_{i,k}}z^{s_{i,k}(f_k(a_j)-s_{j,k})}\right). \end{gather*} So we conclude that \begin{gather*} \tilde{x}[\mathbf{a}] =\left(\prod_{i=1}^n x_i^{-a_i}\right)\prod_{k\ge1}\left(\tilde{x}[f_k(\mathbf{a})]\prod_{i=1}^n x_i^{f_k(a_i)}\right)\\ \hphantom{\tilde{x}[\mathbf{a}]}{} =\left(\prod_{i=1}^n x_i^{-a_i+\sum\limits_{k\ge1}f_k(a_i)}\right)\prod_{k\ge1} \tilde{x}[f_k(\mathbf{a})] =\prod_{k\ge1} \tilde{x}[f_k(\mathbf{a})]. \end{gather*} (ii) There is a bijection \begin{gather*} \prod_{j=1}^c S_{\rm gcc}\big({\bf a}^{(j)}\big)\to S_{\rm gcc}({\bf a}), \qquad \big({\bf s}^{(1)},\dots, {\bf s}^{(c)}\big)\mapsto {\bf s}=({\bf s}_1,\dots,{\bf s}_n), \end{gather*} where ${\bf s}_i={\bf s}^{(j)}_i$ if $i\in I_j$. This bijection induces the expected equality. 
(iii) Rewrite \begin{gather*} z[{\bf a}] =\left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{{\bf s}_1,\dots,{\bf s}_n}\left(\prod_{i,j} x_{i}^{|\bar{\mathbf{s}}_j|[b_{ij}]_+} x_{j}^{|\mathbf{s}_i|[-b_{ji}]_+}\right)\\ \hphantom{z[{\bf a}]}{} = \sum_{{\bf s}_1,\dots,{\bf s}_n}\prod_{k=1}^n \left(x_k^{-a_k}\prod_{i} x_{i}^{|\bar{\mathbf{s}}_k|[b_{ik}]_+}\prod_j x_{j}^{|\mathbf{s}_k|[-b_{jk}]_+}\right)\\ \hphantom{z[{\bf a}]}{} = \prod_{k=1}^n x_k^{-a_k}\sum_{{\bf s}_k} \left(\prod_{i} x_{i}^{|\bar{\mathbf{s}}_k|[b_{ik}]_+}\prod_j x_{j}^{|\mathbf{s}_k|[-b_{jk}]_+}\right)=\prod_{k=1}^n z[a_ke_k].\\ \end{gather*} If $a_k\le 0$, then ${\bf s}_k=()$, therefore $z[a_ke_k]=x_k^{-a_k}=x_k^{\langle -a_k\rangle}$. If $a_k> 0$, then \begin{gather*} z[a_ke_k] = x_k^{-a_k}\sum_{s_{k,1},\dots,s_{k,a_k}} \left(\prod_{i} x_{i}^{\sum\limits_{r=1}^{a_k}(1-s_{k,r})[b_{ik}]_+}\prod_j x_{j}^{\sum\limits_{r=1}^{a_k}(s_{k,r})[-b_{jk}]_+}\right)\\ \hphantom{z[a_ke_k]}{} = x_k^{-a_k} \prod_{r=1}^{a_k} \sum_{s_{k,r}\in\{0,1\}} \left(\prod_{i} x_{i}^{(1-s_{k,r})[b_{ik}]_+}\prod_j x_{j}^{(s_{k,r})[-b_{jk}]_+}\right)\\ \hphantom{z[a_ke_k]}{} = x_k^{-a_k} \prod_{r=1}^{a_k} \left(\prod_{i} x_{i}^{[b_{ik}]_+}+\prod_j x_{j}^{[-b_{jk}]_+}\right) =(x'_k)^{a_k}=x_k^{\langle -a_k\rangle}. \end{gather*} This proves $z[{\bf a}]=\prod\limits_{i=1}^nx_i^{\langle-a_i\rangle}$. The analogue of~(i),~(ii) immediately follows. The fact that $\{z[\mathbf{a}]\}_{\mathbf{a}\in\mathbb{Z}^n}$ forms a basis is proved in \cite[Theorem~1.16]{BFZ}. \end{proof} \begin{Remark} For readers who are familiar with \cite[Lemma~4.2]{B9}, they may notice that the decomposition therein is similar to Lemma~\ref{multiplicative}(i) above. Indeed, \cite[Lemma~4.2]{B9} gives a~f\/iner decomposition. 
For example, for the coef\/f\/icient-free cluster algebra of the quiver $1\to 2\to 3$, $\tilde{x}[(2,1,3)]$ will decompose as $\tilde{x}[(1,1,1)]\tilde{x}[(1,0,1)]\tilde{x}[(0,0,1)]$ in Lemma~\ref{multiplicative}(i), but decompose as $\tilde{x}[(1,1,1)]\tilde{x}[(1,0,0)]\tilde{x}[(0,0,1)]^2$ in \cite[Lemma~4.2]{B9}. \end{Remark} The following lemma focuses on the case where $\mathbf{a}=(a_i)\in \{0,1\}^n$ as opposed to that in~$\mathbb{Z}^n$. The condition of~$S_{\rm gcc}$ takes a much simpler form: we can treat sequences~$(0)$ and~$(1)$ as numbers~$0$ and~$1$ respectively, and the condition $\mathbf{s}_i\cdot\bar{\mathbf{s}}_j=0$ can be written as $(s_i,a_j-s_j)\neq(1,1)$. We shall use $\mathbf{S}$ to denote this simpler form of~$S_{\rm gcc}$. \begin{Lemma}\label{lemma:01} For $\mathbf{a}=(a_i)\in \{0,1\}^n$, denote by $\mathbf{S}$ the set of all $n$-tuples $\mathbf{s}=(s_1,\dots,s_n)\in\{0,1\}^n$ such that $0\le s_i\le a_i$ for $i=1,\dots, n$, and $(s_i,a_j-s_j)\neq(1,1)$ for every $(i,j)\in Q_B$. By convention we assume $a_i=0$ and $s_i=0$ if $i>n$. Then \begin{enumerate}\itemsep=0pt \item[$(i)$] $\tilde{x}[\mathbf{a}]$ can be written as \begin{gather}\label{df:x[a]} \left(\prod_{i=1}^n x_i^{-a_i}\right)\sum_{\mathbf{s}\in \mathbf{S}}\prod_{i=1}^m x_i^{\sum\limits_{j=1}^n (a_j-s_j)[b_{ij}]_+ + s_j[-b_{ij}]_+}. \end{gather} \item[$(ii)$] $\tilde{x}[\mathbf{a}]$ is in the upper cluster algebra $\mathcal{U}$. 
\end{enumerate} \end{Lemma} \begin{proof} (i) By Def\/inition \ref{definition:GCC2}, \begin{gather*} \tilde{x}[\mathbf{a}] =\left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{\mathbf{s}\in \mathbf{S}}\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{(a_j-s_j)b_{ij}} x_{j}^{s_i(-b_{ji})}\right)\\ \hphantom{\tilde{x}[\mathbf{a}]}{} =\left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{\mathbf{s}\in \mathbf{S}}\left(\prod_{i,j=1}^m x_{i}^{(a_j-s_j)[b_{ij}]_+} x_{j}^{s_i[-b_{ji}]_+}\right)\\ \hphantom{\tilde{x}[\mathbf{a}]}{} =\left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{\mathbf{s}\in \mathbf{S}}\left(\prod_{i,j=1}^m x_{i}^{(a_j-s_j)[b_{ij}]_+}\right) \left(\prod_{i,j=1}^m x_j^{s_i[-b_{ji}]_+}\right) \\ \hphantom{\tilde{x}[\mathbf{a}]}{} =\left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{\mathbf{s}\in \mathbf{S}}\left(\prod_{i,j=1}^m x_{i}^{(a_j-s_j)[b_{ij}]_+}\right) \left(\prod_{i,j=1}^m x_i^{s_j[-b_{ij}]_+}\right) \\ \hphantom{\tilde{x}[\mathbf{a}]}{} =\left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{\mathbf{s}\in \mathbf{S}}\left(\prod_{i,j=1}^m x_{i}^{(a_j-s_j)[b_{ij}]_++s_j[-b_{ij}]_+}\right)=\eqref{df:x[a]}. \end{gather*} (ii) We introduce $n$ extra variables $x_{m+1},\dots, x_{m+n}$. Let \begin{gather*} \mathbb{ZP'}= \mathbb{ZP}\big[x_{m+1}^{\pm1},\dots,x_{m+n}^{\pm1}\big]= \mathbb{Z}\big[x_{n+1}^{\pm1},\dots,x_{m+n}^{\pm1}\big] \end{gather*} be the ring of Laurent polynomials in the variables $x_{n+1},\dots,x_{m+n}$. Let \begin{gather*} \tilde{B}'=\begin{bmatrix}\tilde{B}\\I_n\end{bmatrix} \end{gather*} be the $(m+n)\times n$ matrix that encodes a new cluster algebra $\mathcal{A}'$. Assume $a_{n+1}=\cdots=a_{m+n}=0$ and def\/ine $Q_{\tilde B}'$, $\mathbf{S}'$, $\tilde{x}'[\mathbf{a}]$ for $\mathbb{ZP}'$ similarly as the def\/inition of $Q_{\tilde B}$, $\mathbf{S}$, $\tilde{x}[\mathbf{a}]$ for $\mathbb{ZP}$. (Of course $\mathbf{S}'=\mathbf{S}$, but we use dif\/ferent notation to emphasize that they are for dif\/ferent cluster algebras). 
So \begin{gather*} \tilde{x}'[\mathbf{a}]=\left(\prod_{i=1}^n x_i^{-a_i}\right)\sum_{{\bf s}\in {\bf S}'}P_{\bf s}, \qquad P_{\bf s}:=\prod_{i=1}^{m+n} x_i^{\sum\limits_{j=1}^n (a_j-s_j)[b_{ij}]_+ + s_j[-b_{ij}]_+}. \end{gather*} We will show that $\tilde{x}'[{\bf a}]$ is in the upper bound \begin{gather*} \mathcal{U}'_x=\mathbb{ZP'}\big[{\bf x}^{\pm1}\big]\cap\mathbb{ZP'}\big[{\bf x}_1^{\pm1}\big]\cap\cdots\cap\mathbb{ZP'}\big[{\bf x}_n^{\pm1}\big] \end{gather*} (where ${\bf x}=\{x_1,\dots,x_n\}$ and for $1\le k\le n$, the adjacent cluster ${\bf x}_k$ is def\/ined by ${\bf x}_k={\bf x}-\{x_k\}\cup \{x_k'\}$), therefore~$\tilde{x}'[{\bf a}]$ is in the upper cluster algebra~$\mathcal{U}'$, thanks to the fact that $\mathcal{U}'_x=\mathcal{U}'$ when $\tilde{B}'$ is of full rank \cite[Corollary~1.7 and Proposition~1.8]{BFZ}. Then we substitute $x_{n+1}=\cdots=x_{m+n}=1$ and conclude that $\tilde{x}[\mathbf{a}]$ is in the upper cluster algebra $\mathcal{U}$. Since $\tilde{x}'[\mathbf{a}]$ is obviously in $\mathbb{ZP'}[{\bf x}^{\pm1}]$ from its def\/inition, we only need to show that $\tilde{x}'[\mathbf{a}]$ is in~$\mathbb{ZP'}[{\bf x}_k^{\pm1}]$ for $1\le k\le n$. Again from its def\/inition we see that $\tilde{x}'[\mathbf{a}]$ is in $\mathbb{ZP'}[{\bf x}_k^{\pm1}]$ when $a_k=0$. So we may assume $a_k=1$. Let $N\subset{\bf S}'$ contain those ${\bf s}$ such that $P_{\bf s}$ is not divisible by $x_k$; equivalently, \begin{gather*} N=\{{\bf s}\in {\bf S}' \,| \,s_j=a_j\textrm{ if }b_{kj}>0; \; s_j=0 \; \textrm{if} \; b_{kj}<0 \big\}. \end{gather*} Then it suf\/f\/ices to show that $\sum\limits_{{\bf s}\in N}P_{\bf s}$ is divisible by $A$, where \begin{gather*} A=x_k'x_k=\prod_{i=1}^{m+n}x_i^{[b_{ik}]_+}+\prod_{i=1}^{m+n}x_i^{[-b_{ik}]_+}. \end{gather*} Write $N$ into a partition $N=N_0\cup N_1$ where $N_0=\{{\bf s}\in N | s_k=0\}$, $N_1=\{{\bf s}\in N | s_k=1\}$. 
Def\/ine $\varphi\colon N_0\to N_1$ by \begin{gather*} \varphi(s_1,\dots,s_{k-1},0,s_{k+1},\dots,s_n)=(s_1,\dots,s_{k-1},1,s_{k+1},\dots,s_n). \end{gather*} Then $\varphi$ is a well-def\/ined bijection because in the def\/inition of $N$ there is no condition imposed on~$s_k$. Thus \begin{gather*} \sum_{{\bf s}\in N}P_{\bf s} =\sum_{{\bf s}\in N_0}P_{\bf s}+\sum_{{\bf s}\in N_1}P_{\bf s}= \sum_{{\bf s}\in N_0}(P_{\bf s}+P_{\varphi({\bf s})})\\ \hphantom{\sum_{{\bf s}\in N}P_{\bf s}}{} =\sum_{{\bf s}\in N_0}\prod_{i=1}^{m+n}\prod_{\stackrel{1\le j\le n}{j\neq k}} x_i^{(a_j-s_j)[b_{ij}]_+ + s_j[-b_{ij}]_+} \left(\prod_{i=1}^{m+n} x_i^{[b_{ik}]_+}+\prod_{i=1}^{m+n} x_i^{[-b_{ik}]_+}\right)\\ \hphantom{\sum_{{\bf s}\in N}P_{\bf s}}{} =\sum_{{\bf s}\in N_0}\prod_{i=1}^{m+n}\prod_{\stackrel{1\le j\le n}{j\neq k}} x_i^{(a_j-s_j)[b_{ij}]_+ + s_j[-b_{ij}]_+} A \end{gather*} is divisible by $A$. \end{proof} We can now prove the main theorem of this section. \begin{proof}[Proof of Theorem~\ref{thm:x[a]}] It follows immediately from Lemma~\ref{multiplicative}(i) and Lemma~\ref{lemma:01}(ii). Indeed, we use Lemma \ref{multiplicative}(i) to factor $\tilde{x}[{\bf a}]$ (for ${\bf a}\in\mathbb{Z}_{\ge0}^n$) into the product of a usual monomial (which is in $\mathcal{U}$) and those $\tilde{x}[{\bf a}']$'s where all entries of ${\bf a}'$ are $0$ or $1$ (which are also in~$\mathcal{U}$ by Lemma~\ref{lemma:01}(ii)). \end{proof} \begin{Remark} We claim that if $\Sigma$ is acyclic (i.e., $Q_B$ is acyclic), then $\big(\prod\limits_{i=1}^n x_i^{a_i}\big)\tilde{x}[\mathbf{a}]$ is not divisible by~$x_k$ for any $1\le k\le n$, i.e., there exists $\mathbf{s}=(\mathbf{s}_1,\dots,\mathbf{s}_n)\in S_{\rm gcc}$ such that \begin{gather*} \prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}|\bar{\mathbf{s}}_j|} x_{j}^{-b_{ji}|\mathbf{s}_i|} \end{gather*} is not divisible by~$x_k$. Fix $k$ such that $1\le k\le n$. 
We need to f\/ind ${\bf s}$ such that \begin{gather*} |\bar{\mathbf{s}}_j|=0 \quad \text{if} \ \ (k,j)\in Q_{\tilde B} \qquad \text{and} \qquad |\mathbf{s}_i|=0 \quad \text{if} \ \ (i,k)\in Q_{\tilde B}. \end{gather*} This condition is equivalent to \begin{gather*} |\bar{\mathbf{s}}_j|=0 \quad \text{if} \ \ (k,j)\in Q_B \qquad \text{and} \qquad |\mathbf{s}_i|=0 \quad \text{if} \quad (i,k)\in Q_B, \end{gather*} because ${\bf s}_i=()$ for $i>n$ by convention. Such an ${\bf s}$ can be constructed as follows: since the initial seed is acyclic, $Q_B$ has no oriented cycles, therefore it determines a partial order where $i\prec j$ if there is a (directed) path from~$i$ to~$j$ in~$Q_B$. Def\/ine $\mathbf{s}_l\in\{0,1\}^{[a_l]_+}$ for $l=1,\dots,n$ as \begin{gather*} \mathbf{s}_l=\begin{cases} (1,\dots,1),& \textrm{if} \ \ k\prec l,\\ (0,\dots,0),& \textrm{otherwise}. \end{cases} \end{gather*} Note that the construction of ${\bf s}$ depends on $k$. To check that ${\bf s}$ is in $S_{\rm gcc}$, we need to show that $\mathbf{s}_i\cdot\bar{\mathbf{s}}_j=0$ for every $(i,j)\in Q_B$. This can be proved by contradiction as follows. Assume $\mathbf{s}_i\cdot\bar{\mathbf{s}}_j\neq 0$ for some $(i,j)\in Q_B$. Then there exists $r\le\min([a_i]_+,[a_j]_+)$ such that $s_{i,r}=1$, $s_{j,r}=0$. By our choice of $\mathbf{s}$, we have $k\prec i$, $k\not\prec j$, therefore $i\not\prec j$. This contradicts the assumption that $(i,j)\in Q_B$. \end{Remark} \section{Non-acyclic rank 3 cluster algebras}\label{section4} In this section, we consider non-acyclic skew-symmetric rank 3 cluster algebras of geometric type. \subsection[Def\/inition and properties of $\tau$]{Def\/inition and properties of $\boldsymbol{\tau}$}\label{section4.1} Let $m\geq 3$ be an integer, $\tilde{B}=(b_{ij})$ be an $m \times 3$ matrix whose principal part $B$ is skew-symmetric and non-acyclic (i.e., $b_{12}$, $b_{23}$, $b_{31}$ are of the same sign). 
Def\/ine the map \begin{gather*} \tau(B):=(|b_{23}|,|b_{31}|,|b_{12}|)\in \mathbb{Z}^3_{\geq 0}. \end{gather*} \begin{Definition} Let $(x,y,z) \in \mathbb{R}^3$. We def\/ine a partial ordering ``$\le$'' on $\mathbb{R}^3$ by $(x,y,z) \leq (x',y',z')$ if and only if $x \leq x'$, $y \leq y'$, and $z \leq z'$. \end{Definition} Def\/ine three involutary functions $\mu_1,\mu_2,\mu_3\colon \mathbb{R}^3\to\mathbb{R}^3$ as follows \begin{gather*} \mu_1\colon \ (x,y,z)\mapsto (x,xz-y,z), \qquad \mu_2\colon \ (x,y,z) \mapsto (x,y,xy-z), \\ \mu_3\colon \ (x,y,z) \mapsto(yz-x,y,z). \end{gather*} Let $\Gamma$ be the group generated by $\mu_1$, $\mu_2$, and $\mu_3$. It follows that the $\Gamma$-orbit of $\tau(B)$ is identical to the set $\{\tau(B')$: $B'$ is in the mutation class of~$B \}$. Consider the following two situations. \begin{enumerate}\itemsep=0pt \item[(M1)] $(a,b,c)\leq \mu_i(a,b,c)$ for all $i=1,2,3$. \item[(M2)] $(a,b,c)\leq \mu_i(a,b,c)$ for precisely two indices~$i$. \end{enumerate} The following statement follows from \cite[Theorem~1.2, Lemma~2.1]{BBH} for a coef\/f\/icient free cluster algebra. The result holds more generally for a cluster algebra of geometric type since the coef\/f\/icients do not af\/fect whether or not the seed is acyclic. \begin{Theorem} \label{cyclic} Let $B$ be as above, $(a,b,c)=\tau(B)$. Then the following are equivalent: \begin{itemize}\itemsep=0pt \item[\rm(i)]$\mathcal{A}({\bf x},{\bf y},B)$ is non-acyclic. \item[\rm(ii)] $a,b,c \geq 2$ and $abc+4\ge a^2+b^2+c^2$. \item[\rm(iii)] $a,b,c\ge 2$ and there exists a unique triple in the $\Gamma$-orbit of $\tau(B)$ that satisfies~{\rm(M1)}. \end{itemize} Moreover, under these conditions, triples in the $\Gamma$-orbit of $\tau(B)$ not satisfying~{\rm(M1)} must sa\-tis\-fy~{\rm(M2)}. \end{Theorem} Assume $\mathcal{A}({\bf x},{\bf y},B)$ is non-acyclic. We call the unique triple in Theorem \ref{cyclic}(iii) the \emph{root} of the $\Gamma$-orbit of~$\tau(B)$. 
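The non-acyclicity criterion of Theorem \ref{cyclic} and the action of $\Gamma$ are easy to experiment with numerically. A small Python sketch (the helper names are ours, and we assume a $3\times 3$ skew-symmetric principal part with $b_{12}$, $b_{23}$, $b_{31}$ of the same sign):

```python
def tau(B):
    """tau(B) = (|b_23|, |b_31|, |b_12|) for a 3 x 3 matrix (0-indexed)."""
    return (abs(B[1][2]), abs(B[2][0]), abs(B[0][1]))

def mu(i, t):
    """The involutions mu_1, mu_2, mu_3 acting on triples (x, y, z)."""
    x, y, z = t
    return {1: (x, x * z - y, z),
            2: (x, y, x * y - z),
            3: (y * z - x, y, z)}[i]

def is_nonacyclic(t):
    """Criterion (ii) of the theorem: a, b, c >= 2 and
    a*b*c + 4 >= a^2 + b^2 + c^2."""
    a, b, c = t
    return min(a, b, c) >= 2 and a * b * c + 4 >= a * a + b * b + c * c
```

For example, the Markov matrix $M(2)$ gives $\tau(M(2))=(2,2,2)$, which every $\mu_i$ fixes, so $(2,2,2)$ is the root of its one-element $\Gamma$-orbit; the triple $(2,3,2)$ fails the criterion, consistent with acyclicity.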
\begin{Lemma}\label{lemma:chain inequality} Let $B$ be as above, $(a,b,c)=\tau(B)$. \begin{enumerate}\itemsep=0pt \item[$(i)$] The root $(a,b,c)$ of the $\Gamma$-orbit of $\tau(B)$ is the minimum of the $\Gamma$-orbit, i.e., $(a,b,c)\le(a',b',c')$ for any $(a',b',c')$ in the $\Gamma$-orbit. \item[$(ii)$] If $\mu_k(a,b,c)=(a,b,c)$ for all $k=1,2,3$, then $a=b=c=2$ and $(2,2,2)$ is the unique triple in the $\Gamma$-orbit of~$\tau(B)$. Therefore $(2,2,2)$ must also be the root. \item[$(iii)$] Assume that $\tau(B)$ is the root of the $\Gamma$-orbit of $\tau(B)$, $i_1,\dots,i_l\in\{1,2,3\}$, $i_s\neq i_{s+1}$ for $1\le s\le l-1$. Let $B_0=B$, $B_j=\mu_{i_j}(B_{j-1})$ for $j=1,\dots,l$. Then \begin{gather*}\tau(B_0)\le\tau(B_1)\le\cdots\le\tau(B_l). \end{gather*} \end{enumerate} \end{Lemma} \begin{proof} (i) If $\tau(B')$ is a minimal triple in the $\Gamma$-orbit of $\tau(B)$, then it satisf\/ies~(M1); since by Theo\-rem~\ref{cyclic}(iii) the root is the unique triple satisfying~(M1), $\tau(B')$ must be the root. For~(ii): $a=b=c=2$ follows easily from Theorem~\ref{cyclic}(ii), and the uniqueness claim follows from the def\/inition of~$\mu_k$. (iii)~is obvious if $\tau(B)=(2,2,2)$. If not, assume~(iii) is false; then there are $1\le j<j'\le l$ such that \begin{gather*} \tau(B_j)<\tau(B_{j+1})=\tau(B_{j+2})=\cdots=\tau(B_{j'})>\tau(B_{j'+1}). \end{gather*} By (ii) it must be that $j+1=j'$ so that $\tau(B_j)< \tau(B_{j+1}) > \tau(B_{j+2})$, but $\tau(B_{j+1})$ does not satisfy either~(M1) or~(M2). This is impossible by Theorem~\ref{cyclic}(iii). \end{proof} \subsection[Grading on~$\mathcal{A}$]{Grading on $\boldsymbol{\mathcal{A}}$}\label{section4.2} We now adapt the grading introduced in \cite[Def\/inition 3.1]{G} to the geometric type.
\begin{Definition} A graded seed is a quadruple $({\bf x},{\bf y},B,G)$ such that \begin{enumerate}\itemsep=0pt \item[(i)] $({\bf x},{\bf y},B)$ is a seed of rank $n$, and \item[(ii)] $G=[g_1,\dots,g_n]^T\in \mathbb{Z}^n$ is an integer column vector such that $BG=0$. \end{enumerate} \end{Definition} Set $\deg(x_i)=g_i$ and $\deg\big(x_i^{-1}\big)=-g_i$ for $i\le n$, and set $\deg(y_j)=0$ for all $j$. Extend the grading additively to Laurent monomials and hence to the cluster algebra $\mathcal{A}({\bf x},{\bf y},B)$. In~\cite{G} it is proved that under this grading every exchange relation is homogeneous and thus the grading is compatible with mutation. \begin{Theorem}[\protect{\cite[Corollary 3.4]{G}}] The cluster algebra $\mathcal{A}({\bf x},{\bf y},B)$ under the above grading is a~$\mathbb{Z}$-graded algebra. \end{Theorem} The following two results come from \cite{S1}, where they were used to show that rank three non-acyclic cluster algebras have no maximal green sequences. \begin{Theorem}[\protect{\cite[Proposition 2.2]{S1}}] Suppose that $B$ is a $3\times 3$ skew-symmetric non-acyclic matrix. Then the $($column$)$ vector $G=\tau(B)^T$ satisfies $BG=0$. \end{Theorem} \begin{Lemma}\label{lemma: grading} For any graded seed $({\bf x}',{\bf y}',B',G')$ in the mutation class of our initial graded seed, we have $G'= \tau(B')^T$. \end{Lemma} \begin{proof} This result follows from \cite[Lemma~2.3]{S1}, but its proof is short, so we reproduce it here. We use induction on mutations. Suppose that $G'= \tau(B')^T$ for a given graded seed $({\bf x}',{\bf y}',B',G')$. Let $({\bf x}'',{\bf y}'',B'',G'')$ be the graded seed obtained by taking mutation $\mu_1$ from $({\bf x}',{\bf y}',B',G')$. Then $x_1''=((x_2')^{|b'_{12}|}+(x_3')^{|b'_{13}|})/x_1'$ (omitting frozen-variable factors, which have degree~$0$), so its degree is $g_2' |b'_{12}|-g_1'$. By induction we have \begin{gather*} g_2' |b'_{12}|-g_1' =|b'_{31}| |b'_{12}|-|b'_{23}| = |b'_{31}b'_{12}-b'_{23}| =|b''_{23}|, \end{gather*} so we get $G''= \tau(B'')^T$. The cases of~$\mu_2$ and~$\mu_3$ are similar.
\end{proof} In the rest of Section~\ref{section4} we assume that $B$ is a $3\times 3$ skew-symmetric non-acyclic matrix and $G=\tau(B)^T$. In other words, if $\tau(B)=(b,c,a)$ then $\deg(x_1)=b$, $\deg(x_2)=c$, $\deg(x_3)=a$ for any seed $({\bf x}, {\bf y}, B)$. If we look at the quiver associated to our exchange matrix, we see that the degree of the cluster variable $x_i$ is the number of arrows between the other two mutable vertices in~$Q_B$. Furthermore, this is a~canonical grading for non-acyclic rank three cluster algebras: regardless of the choice of initial seed, the grading imposed on~$\mathcal{A}$ is the same. \subsection[Construction of an element in $\mathcal{U}\setminus\mathcal{A}$]{Construction of an element in $\boldsymbol{\mathcal{U}\setminus\mathcal{A}}$}\label{section4.3} Here we shall prove Theorem~\ref{main-cor}, the main theorem of the paper. We give $\mathcal{A}$ the $\mathbb{Z}$-grading as in Section~\ref{section4.2}. By Theorem~\ref{cyclic}, we can assume that the initial seed \begin{gather*} \Sigma=(\{x_1,x_2,x_3\},{\bf y},B,G),\qquad\textrm{where} \qquad B= \left[ \begin{matrix} 0 & a &-c \\ -a & 0 & b \\ c& -b & 0 \end{matrix} \right], \qquad G=\left[ \begin{matrix} b \\ c \\ a \end{matrix} \right] \end{gather*} satisf\/ies $a,b,c\ge2$ and (M1). Then $\deg x_{1}=b$, $\deg x_{2}=c$, $\deg x_{3}=a$, and we let $x_{4},\dots,x_{m}$ be of degree~0. Furthermore, by permuting the indices if necessary, we assume $a\ge b\ge c$. Def\/ine six degree-0 elements as follows \begin{gather*} \alpha_i^\pm := \prod_{j=4}^m x_j^{[\pm b_{ji}]_+}, \qquad i=1,2,3. \end{gather*} Looking at the seeds neighboring $\Sigma$ we obtain three new cluster variables. Namely, \begin{gather*} z_1=\frac{\alpha_1^-x_2^a+\alpha_1^+x_3^c}{x_1}, \qquad z_2=\frac{\alpha_2^+x_1^a+\alpha_2^-x_3^b}{x_2}, \qquad z_3=\frac{\alpha_3^-x_1^c+\alpha_3^+x_2^b}{x_3}, \end{gather*} which have degree $ac-b$, $ab-c$, and $bc-a$, respectively.
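As a quick numerical sanity check (our own illustration; the helper name `check` is made up), one can verify that $BG=0$ and that the two terms of each exchange binomial above share a degree, which yields the stated degrees of $z_1$, $z_2$, $z_3$:

```python
# Sketch: verify BG = 0 for the initial matrix of Section 4.3 and that each
# exchange binomial defining z1, z2, z3 is homogeneous, giving
# deg z1 = ac - b, deg z2 = ab - c, deg z3 = bc - a.

def check(a, b, c):
    B = [[0, a, -c],
         [-a, 0, b],
         [c, -b, 0]]
    G = [b, c, a]                  # deg x1 = b, deg x2 = c, deg x3 = a
    assert all(sum(B[i][j] * G[j] for j in range(3)) == 0 for i in range(3))
    # z1 = (x2^a + x3^c)/x1: both numerator terms have degree a*c
    assert a * G[1] == c * G[2] and a * G[1] - G[0] == a * c - b
    # z2 = (x1^a + x3^b)/x2: both numerator terms have degree a*b
    assert a * G[0] == b * G[2] and a * G[0] - G[1] == a * b - c
    # z3 = (x1^c + x2^b)/x3: both numerator terms have degree b*c
    assert c * G[0] == b * G[1] and c * G[0] - G[2] == b * c - a
    return True
```

These identities hold for arbitrary integers $a$, $b$, $c$, independently of the non-acyclicity assumption.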
We also use the following theorem, whose proof in~\cite{MM} is for coef\/f\/icient-free cluster algebras but clearly applies to cluster algebras of geometric type. \begin{Theorem}[\protect{\cite[Proposition 6.1.2]{MM}}] \label{coprime} If $\mathcal{A}$ is a rank three skew-symmetric non-acyclic cluster algebra, then~$\mathcal{A}$ is totally coprime. \end{Theorem} Now we consider a special element in $\mathbb{Q}(x_1,\dots,x_m)$: \begin{gather*} Y:=\tilde{x}[(1,0,1)]/x_2^b =\frac{\alpha_1^-\alpha_3^-x_1^cx_2^{a-b}+\alpha_1^-\alpha_3^+x_2^a+\alpha_1^+\alpha_3^+x_3^c}{x_1x_3}. \end{gather*} We shall prove Theorem~\ref{main-cor} by showing that the element constructed above is in $\mathcal{U}\setminus\mathcal{A}$. \begin{proof}[Proof of Theorem \ref{main-cor}] We f\/irst show that $Y\in \mathcal{U}$. Note that $\mathcal{A}$ is a rank 3 skew-symmetric non-acyclic cluster algebra, so it is totally coprime by Theorem~\ref{coprime}. It then suf\/f\/ices to show that $Y\in \mathcal{U}_{\bf x}$ by Theorem~\ref{UB}. Clearly $Y\in\mathbb{ZP}[x_1^{\pm 1},x_2^{\pm1},x_3^{\pm 1}]$, where $\mathbb{ZP}=\mathbb{Z}[x_4^{\pm1},\dots,x_m^{\pm1}]$. Also, \begin{gather*} Y=\frac{\alpha_3^+z_1^c+\alpha_1^-\alpha_3^-\big(\alpha_1^-x_2^a+\alpha_1^+x_3^c\big)^{c-1} x_2^{a-b}}{z_1^{c-1}x_3} \in \mathbb{ZP}\big[z_1^{\pm 1},x_2^{\pm 1},x_3^{\pm 1}\big],\\ Y=\frac{\alpha_1^-\alpha_3^-x_1^c\!\big(\alpha_2^+x_1^a\!+\!\alpha_2^-x_3^b\big)^{a{-}b}\!z_2^b\!+\!\alpha_1^+\alpha_3^+z_2^ax_3^c\!+\!\alpha_1^-\alpha_3^+ \!\big(\alpha_2^+x_1^a\!+\!\alpha_2^-x_3^b\big)^{a}}{x_1z_2^ax_3} \in \mathbb{ZP}\big[x_1^{\pm 1}\!,z_2^{\pm 1}\!,x_3^{\pm 1}\big],\\ Y=\frac{\alpha_1^-x_2^{a-b}z_3^c+\alpha_1^+\alpha_3^+\big(\alpha_3^-x_1^c+\alpha_3^+x_2^b\big)^{c-1}}{x_1z_3^{c-1}} \in\mathbb{ZP}\big[x_1^{\pm 1},x_2^{\pm 1},z_3^{\pm 1}\big]. \end{gather*} Therefore we conclude that $Y\in \mathcal{U}_{\bf x}$. Next, we show that $Y\notin \mathcal{A}$.
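As a concrete sanity check of the three expressions for $Y$ above, consider the hypothetical coef\/f\/icient-free instance $a=b=c=3$ (so every $\alpha_i^{\pm}=1$); evaluating at rational points with exact arithmetic conf\/irms the identities numerically:

```python
# Sketch: numerically verify that Y agrees with its three rewritings over
# {z1, x2, x3}, {x1, z2, x3}, {x1, x2, z3} in the coefficient-free case
# a = b = c = 3 (all alpha_i^± = 1).  Exact arithmetic via Fraction.
from fractions import Fraction as F

a = b = c = 3

def values(x1, x2, x3):
    z1 = (x2**a + x3**c) / x1
    z2 = (x1**a + x3**b) / x2
    z3 = (x1**c + x2**b) / x3
    Y = (x1**c * x2**(a - b) + x2**a + x3**c) / (x1 * x3)
    Y1 = (z1**c + (x2**a + x3**c)**(c - 1) * x2**(a - b)) / (z1**(c - 1) * x3)
    Y2 = (x1**c * (x1**a + x3**b)**(a - b) * z2**b + z2**a * x3**c
          + (x1**a + x3**b)**a) / (x1 * z2**a * x3)
    Y3 = (x2**(a - b) * z3**c + (x1**c + x2**b)**(c - 1)) / (x1 * z3**(c - 1))
    return Y, Y1, Y2, Y3

# a few sample points; all four values agree at each
for point in [(F(2), F(3), F(5)), (F(1, 2), F(3), F(7, 4))]:
    Y, Y1, Y2, Y3 = values(*point)
    assert Y == Y1 == Y2 == Y3
```

Agreement at sample points is of course only a spot check, not a proof; the identities themselves are routine to verify symbolically.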
With respect to our grading of $\mathcal{A}$, $Y$ is homogeneous of degree $ac-b-a.$ Combining Lemmas \ref{lemma:chain inequality} and~\ref{lemma: grading} we have already shown that the degree of cluster variables is non-decreasing as we mutate away from our initial seed. We use this fact to explicitly prove that all cluster variables, except possibly $x_1$, $x_2$, $x_3$, $z_3$, have degree strictly larger than $ac-b-a$. Indeed, let $x$ be a cluster variable such that $x\neq x_1,x_2,x_3,z_3$. Then $x$ can be written as \begin{gather*} x=\mu_{i_l}\cdots\mu_{i_2}\mu_{i_1}(x_k),\quad i_1,\dots,i_l,k\in\{1,2,3\}, \qquad \text{and}\qquad i_s\neq i_{s+1} \quad \text{for} \ \ 1\le s\le l-1. \end{gather*} Let $B_0=B$, $B_j=\mu_{i_j}(B_{j-1})$ for $j=1,\dots,l$. Let $\tau(B_j)=(b_j,c_j,a_j)$ for $j=0,\dots,l$. By Lemma~\ref{lemma:chain inequality}, we have that \begin{gather*} (b,c,a)=(b_0,c_0,a_0)\le(b_1,c_1,a_1)\le\cdots\le(b_l,c_l,a_l). \end{gather*} We prove the claim in the following f\/ive cases. \textit{Case $k=1$}. We let $r>0$ be the smallest integer such that $i_r=k$ (which exists since $x\neq x_1,x_2,x_3$). It suf\/f\/ices to prove that the degree of $w=\mu_{i_r}\cdots\mu_{i_2}\mu_{i_1}(x_k)$ is larger than $ac-b-a$. Indeed, \begin{gather*} \deg w=b_r=a_{r-1}c_{r-1}-b_{r-1}=a_{r-1}c_{r-1}-b\ge ac-b>ac-b-a. \end{gather*} \textit{Case $k=2$}. The above proof still works: \begin{gather*} \deg w=c_r=a_{r-1}b_{r-1}-c_{r-1}=a_{r-1}b_{r-1}-c\ge ab-c>ac-b-a. \end{gather*} \textit{Case $k=3$, $i_1=1$}. We let $r>0$ be the smallest integer such that $i_r=k$. Then $(b_1,c_1,a_1)=(ac-b,c,a)$, and \begin{gather*} \deg w=a_r=b_{r-1}c_{r-1}-a_{r-1}=b_{r-1}c_{r-1}-a\ge b_1c_1-a=(ac-b)c-a>ac-b-a. \end{gather*} \textit{Case $k=3$, $i_1=2$}. Similar to the above case, $(b_1,c_1,a_1)=(b,ab-c,a)$, \begin{gather*} \deg w=a_r\ge b_1c_1-a=b(ab-c)-a\ge b^2(a-1)-a>c(a-1)-a\ge ac-b-a. \end{gather*} The second and fourth inequality follow from our assumption that $b \geq c$. 
\textit{Case $k=3$, $i_1=3$}. Then $(b_1,c_1,a_1)=(b,c,bc-a)$, $i_2\neq 3$. We let $r\ge3$ be the smallest integer such that $i_r=k$ (which exists since $x\neq z_3$). It suf\/f\/ices to prove that the degree of $w=\mu_{i_r}\cdots\mu_{i_2}\mu_{i_1}(x_k)$ is larger than $ac-b-a$. If $i_2=1$, then $(b_2,c_2,a_2)=(c(bc-a)-b,c,bc-a)$, \begin{gather*} \begin{split} &\deg w =a_r=b_{r-1}c_{r-1}-(bc-a)\ge b_2c_2-(bc-a)=(c(bc-a)-b)c-(bc-a) \\ & \hphantom{\deg w }{} =(c^2-2)(bc-a)-a\ge(c^2-2)a-a=a(c^2-3)\ge a(c-1)>ac-b-a. \end{split} \end{gather*} If $i_2=2$, then $(b_2,c_2,a_2)=(b,(bc-a)b-c,bc-a)$, \begin{gather*} \deg w\ge b((bc-a)b-c)-(bc-a)\ge (c(bc-a)-b)c-(bc-a)>ac-b-a. \end{gather*} The second inequality above follows from our assumption that $b \geq c$. One application gives the inequality $(bc-a)b-c\ge c(bc-a)-b$ and a second application replaces a factor of $b$ in the f\/irst term with a factor of~$c$. The last inequality follows from our computation in the previous case. This completes the proof of the claim that all cluster variables, except possibly $x_1$, $x_2$,~$x_3$,~$z_3$, have degree strictly larger than $ac-b-a$. Therefore it is suf\/f\/icient to check that $Y$ cannot be written as a linear combination of products of the cluster variables $x_1$, $x_2$, $x_3$, and~$z_3$, since all other cluster variables have larger degree than~$Y$. This is clear as the numerator of~$Y$ is irreducible and none of these cluster variables have a~factor of~$x_1$ in the denominator. \end{proof} \section{Dyck path formula}\label{section5} In this section, we give the Dyck path construction of $\tilde{x}[{\bf a}]$ that is equivalent to Section~\ref{section3} (under the mild condition that there is no isolated vertex in the digraph $Q_{\tilde B}$).
It is in fact our original def\/inition of $\tilde{x}[{\bf a}]$, and arises naturally from the attempt to generalize greedy bases for cluster algebras of rank~2 in~\cite{LLZ} and the construction of bases of type~$A$ cluster algebras in~\cite{B9} to more general cases. Let $(a_1, a_2)$ be a pair of nonnegative integers. Let $c=\min(a_1,a_2)$. The \emph{maximal Dyck path} of type $a_1\times a_2$, denoted by $\mathcal{D}=\mathcal{D}^{a_1\times a_2}$, is a lattice path from $(0, 0)$ to $(a_1,a_2)$ that is as close as possible to the diagonal joining $(0,0)$ and $(a_1,a_2)$, but never goes above it. A~\emph{corner} is a~subpath consisting of a~horizontal edge followed by a~vertical edge. \begin{Definition}\label{corner-first} Let $\mathcal{D}_1$ (resp.~$\mathcal{D}_2$) be the set of horizontal (resp.~vertical) edges of a maximal Dyck path $\mathcal{D}=\mathcal{D}^{a_1\times a_2}$. We label $\mathcal{D}$ with the \emph{corner-first index} in the following sense: \begin{enumerate}\itemsep=0pt \item[(a)] edges in $\mathcal{D}_1$ are indexed as $u_1,\dots,u_{a_1}$ such that $u_i$ is the horizontal edge of the $i$-th corner for $i\in [1,c]$ and $u_{c+i}$ is the $i$-th of the remaining horizontal ones for $i\in [1, a_1-c]$, \item[(b)] edges in $\mathcal{D}_2$ are indexed as $v_1,\dots, v_{a_2}$ such that $v_i$ is the vertical edge of the $i$-th corner for $i\in [1,c]$ and $v_{c+i}$ is the $i$-th of the remaining vertical ones for $i\in [1, a_2-c]$. \end{enumerate} (Here we count corners from bottom left to top right, vertical edges from bottom to top, and horizontal edges from left to right.) \end{Definition} \begin{Definition}\label{local_compatibility} Let $S_1\subseteq \mathcal{D}_1$ and $S_2 \subseteq \mathcal{D}_2$. We say that $S_1$ and $S_2$ are \emph{locally compatible} with respect to $\mathcal{D}$ if and only if no horizontal edge in $S_1$ is the immediate predecessor of any vertical edge in $S_2$ on $\mathcal{D}$.
In other words, the subpath $S_1\cup S_2$ contains no corners. \end{Definition} \begin{Remark} Notice that there are $|\mathcal{P}(\mathcal{D}_1)\times\mathcal{P}(\mathcal{D}_2)|$ possible pairs for $(S_1,S_2)$, where $\mathcal{P}(\mathcal{D}_1)$ denotes the power set of $\mathcal{D}_1$ and $\mathcal{P}(\mathcal{D}_2)$ denotes the power set of $\mathcal{D}_2$. Further, when either $S_1$ or $S_2$ is $\varnothing$, any choice of the other yields local compatibility by the above def\/inition. \end{Remark} \begin{Definition}\label{definition:GCC} Let $\mathbf{a}=(a_i)\in \mathbb{Z}^n$. By convention we assume $a_i=0$ for $i>n$. For each pair $(i,j)\in Q_{\tilde B}$ (def\/ined at the beginning of Section~\ref{section3}), denote $\mathcal{D}^{[a_i]_+\times [a_j]_+}$ by $\mathcal{D}^{(i,j)}$. We label $\mathcal{D}^{(i,j)}$ with the corner-f\/irst index (Def\/inition~\ref{corner-first}), whose horizontal edges are denoted by $u_{1}^{(i,j)},\dots,u_{[a_{i}]_{+}}^{(i,j)}$ and vertical edges are denoted by $v_{1}^{(i,j)},\dots,v_{[a_{j}]_{+}}^{(i,j)}$. We say that the collection \begin{gather*} \big\{S_\ell^{(i,j)} \subseteq\mathcal{D}^{(i,j)}_\ell\,\big|\, (i,j)\in Q_{\tilde B},\; \ell\in\{1,2\}\big\} \end{gather*} is \emph{globally compatible} if and only if \begin{enumerate}[(i)]\itemsep=0pt \item $S^{(i,j)}_1$ and $S^{(i,j)}_2$ are \emph{locally compatible} with respect to $\mathcal{D}^{(i,j)}$ for all $(i,j)\in Q_{\tilde B}$; \item if $(i,j)$ and $(j,k)$ are in $Q_{\tilde B}$, then $v^{(i,j)}_r\in S^{(i,j)}_2\Longleftrightarrow u^{(j,k)}_r\not\in S^{(j,k)}_1$ for all $r\in[1,[a_{j}]_{+}]$; \item if $(j,k)$ and $(j,i)$ are in $Q_{\tilde B}$, then $u^{(j,k)}_r \in S^{(j,k)}_1\Longleftrightarrow u^{(j,i)}_r \in S^{(j,i)}_1$ for all $r\in[1,[a_{j}]_{+}]$; \item if $(i,j)$ and $(k,j)$ are in $Q_{\tilde B}$, then $v^{(i,j)}_r \in S^{(i,j)}_2\Longleftrightarrow v^{(k,j)}_r \in S^{(k,j)}_2$ for all $r\in[1,[a_{j}]_{+}]$.
\end{enumerate} We say that the collection is \emph{quasi-compatible} if it satisf\/ies (ii), (iii), and (iv). So a globally compatible collection is also a quasi-compatible collection. For an illustration of the conditions, see Fig.~\ref{GCCillust}. \end{Definition} \begin{figure}[h!] \begin{enumerate}[(i)] \itemsep=0pt \item For each corner, \includegraphics{Li-Fig1-i1}, \includegraphics{Li-Fig1-i2}, or \includegraphics{Li-Fig1-i3} are allowed, but \includegraphics{Li-Fig1-i4} is not. \item \includegraphics{Li-Fig1-ii1} \ or \ \includegraphics{Li-Fig1-ii2} \item \includegraphics{Li-Fig1-iii1} \ or \ \includegraphics{Li-Fig1-iii2} \item \includegraphics{Li-Fig1-iv1} \ or \ \includegraphics{Li-Fig1-iv2} \end{enumerate} \caption{Pictorial descriptions for Def\/inition~\ref{definition:GCC}(i)--(iv).} \label{GCCillust} \end{figure} \begin{Proposition}\label{prop:GCCxa} Same notation as in Definition~{\rm \ref{definition:GCC}}. Assume that there is no isolated vertex in the digraph $Q_{\tilde B}$. Then \begin{gather*} \tilde{x}[\mathbf{a}]=\left(\prod_{l=1}^n x_l^{-a_l}\right) \sum\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}\big|S^{(i,j)}_2\big|} x_{j}^{-b_{ji}\big|S^{(i,j)}_1\big|}\right) , \end{gather*} where the sum runs over all globally compatible collections $($abbreviated GCCs$)$. A similar formula holds for~$z[\mathbf{a}]$ if the sum runs over all quasi-compatible collections. \end{Proposition} \begin{proof} We def\/ine a mapping that sends a globally compatible collection \begin{gather*} \big\{S_\ell^{(i,j)} \subseteq\mathcal{D}^{(i,j)}_\ell\,\big|\, (i,j)\in Q_{\tilde B},\; \ell\in\{1,2\}\big\}, \end{gather*} to a sequence of sequences ${\bf s}=({\bf s}_1,\dots,{\bf s}_n)$ as follows. 
For $1\le j\le n$ and $1\le r\le [a_j]_+$, the $r$-th number $s_{j,r}$ of the sequence $\mathbf{s}_j=(s_{j,1}, s_{j,2},\dots,s_{j,[a_j]_+})\in\{0,1\}^{[a_j]_+}$ is determined by \begin{enumerate}[(i)]\itemsep=0pt \item if $(i,j)\in Q_{\tilde B}$, then $v^{(i,j)}_r\in S^{(i,j)}_2$ if and only if $s_{j,r}=0$; \item if $(j,k)\in Q_{\tilde B}$, then $u^{(j,k)}_r\in S^{(j,k)}_1$ if and only if $s_{j,r}=1$. \end{enumerate} Since we assume that~$Q_{\tilde B}$ has no isolated vertices and the collection $\{S_\ell^{(i,j)}\}$ is globally compatible, all~$s_{j,r}$ are well-def\/ined. Noting that the number of corners in $\mathcal{D}^{(i,j)}$ is $\min([a_i]_+,[a_j]_+)$ and that we use the corner-f\/irst index, it is easy to conclude that ${\bf s}$ is indeed in~$S_{\rm gcc}$, and that the mapping is bijective and that the exponents that appear in $\tilde{x}[{\bf a}]$ work out correctly. The formula for~$z[{\bf a}]$ is proved similarly. \end{proof} \begin{Remark} Note that the local compatibility condition in Def\/inition~\ref{definition:GCC}(i) is weaker than the compatibility condition for greedy elements given in \cite{LLZ}. Even for rank 2 cluster algebras, not all cluster variables are of the form $\tilde{x}[\mathbf{a}]$. As an example, let \begin{gather*} B=\begin{bmatrix} 0&2\\-2&0 \end{bmatrix}. \end{gather*} Then by Lemma \ref{multiplicative}(i), \begin{gather*} \tilde{x}[(1,2)]=\tilde{x}[(1,1)]\tilde{x}[(0,1)]=\left(\frac{1+x_1^2+x_2^2}{x_1x_2}\right) \left(\frac{1+x_1^2}{x_2}\right)=\frac{1+2x_1^2+x_1^4+x_2^2+x_1^2x_2^2}{x_1x_2^2}, \end{gather*} which is not equal to the following cluster variable (which is a greedy element) \begin{gather*} x[(1,2)]=\frac{1+2x_1^2+x_1^4+x_2^2}{x_1x_2^2}. \end{gather*} Other $\tilde{x}[{\bf a}]$ are not equal to this cluster variable either, since their denominators do not match.
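The collections in this example (8 quasi-compatible ones, of which 6 are globally compatible) and the resulting numerator of $\tilde{x}[(1,2)]$ can be conf\/irmed by brute force; the following sketch (our own, not part of the paper) enumerates them on $\mathcal{D}^{1\times 2}$:

```python
# Sketch: D^{1x2} has one horizontal edge u1 and vertical edges v1, v2;
# its single corner is (u1, v1).  With b12 = 2, b21 = -2, a collection
# (S1, S2) contributes x1^(2|S2|) x2^(2|S1|) to the numerator.
from itertools import combinations, product

def subsets(xs):
    return [set(s) for r in range(len(xs) + 1) for s in combinations(xs, r)]

quasi = list(product(subsets(['u1']), subsets(['v1', 'v2'])))
# local compatibility: the corner u1 -> v1 must not be fully selected
gcc = [(S1, S2) for S1, S2 in quasi if not ('u1' in S1 and 'v1' in S2)]

def numerator(colls):
    """Collect exponent vectors (of x1, x2) with multiplicities."""
    terms = {}
    for S1, S2 in colls:
        key = (2 * len(S2), 2 * len(S1))
        terms[key] = terms.get(key, 0) + 1
    return terms

assert len(quasi) == 8 and len(gcc) == 6
# numerator of xtilde[(1,2)]: 1 + 2 x1^2 + x1^4 + x2^2 + x1^2 x2^2
assert numerator(gcc) == {(0, 0): 1, (2, 0): 2, (4, 0): 1, (0, 2): 1, (2, 2): 1}
# numerator of z[(1,2)]: the two extra collections contribute
# a second x1^2 x2^2 and the term x1^4 x2^2
assert numerator(quasi) == {(0, 0): 1, (2, 0): 2, (4, 0): 1,
                            (0, 2): 1, (2, 2): 2, (4, 2): 1}
```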
It is also instructive to compare with the standard monomial basis element \begin{gather*} z[(1,2)]=\left(\frac{1+x_2^2}{x_1}\right)\left(\frac{1+x_1^2}{x_2}\right)^2=\frac{1+2x_1^2+x_1^4+x_2^2+2x_1^2x_2^2+x_1^4x_2^2}{x_1x_2^2}. \end{gather*} As with the standard monomial basis, $\{\tilde{x}[{\bf a}]\}$ in general depends on the initial cluster. For interested readers, we explain the above example using pictures: in Fig.~\ref{figure1}, all 8 possible collections of edges in~$\mathcal{D}^{1\times 2}$ together with their contributions to the numerator of~$z[(1,2)]$ are listed; among them, the f\/irst 6 are GCCs and contribute to the numerator of~$\tilde{x}[(1,2)]$, and the f\/irst 5 are compatible pairs in the def\/inition of greedy elements and contribute to the numerator of~$x[(1,2)]$. \begin{figure}[h!]\centering \includegraphics{Li-Fig2} \caption{Quasi-compatible collections for ${\bf a}=(1,2)$.} \label{figure1} \end{figure} \end{Remark} \section{Proof of Theorem \ref{thm:basis}}\label{section6} In this section, we give two proofs of the fact that $\{\tilde{x}[\textbf{a}]\}_{\textbf{a}\in\mathbb{Z}^n}$ constructed from an {\it acyclic seed} (i.e., {\it $Q_B$~is acyclic}) form a basis of an acyclic cluster algebra $\mathcal{A}$ of geometric type, using dif\/ferent approaches. The main idea behind both proofs is to compare $\{\tilde{x}[\textbf{a}]\}_{\textbf{a}\in\mathbb{Z}^n}$ with the standard monomial basis $\{z[\textbf{a}]\}_{\textbf{a}\in\mathbb{Z}^n}$ which is known to have the desired property. The f\/irst approach is to consider certain orders on Laurent monomials and use the fact that $\tilde{x}[{\bf a}]$ and $z[{\bf a}]$ have the same lowest Laurent monomial to draw the conclusion. The second approach is to use the multiplicative property of $\tilde{x}[{\bf a}]$ and~$z[{\bf a}]$ and the combinatorial descriptions of these elements. \subsection{The f\/irst proof}\label{section6.1} Note that $Q_B$ is acyclic by assumption.
If $Q_B$ has only one vertex, then $\tilde{x}[{\bf a}]=z[{\bf a}]$ and we are done. If~$Q_B$ is not weakly connected (recall that a digraph is weakly connected if replacing all of its directed edges with undirected edges produces a connected undirected graph), we can factorize $\tilde{x}[{\bf a}]$ (resp.\ $z[{\bf a}]$) into a product of those with smaller ranks, each corresponding to a~connected component (see Lemma \ref{multiplicative}(ii)). Thus we can reduce to the situation that~$Q_B$ is weakly connected and~$n\ge 2$. Relabeling vertices $1,\dots,n$ if necessary, we may assume that \begin{gather*} \text{if} \ i\to j \ \text{is in} \ Q_B, \ \text{then} \ i<j. \end{gather*} Let $\{e_i\}$ be the standard basis of $\mathbb{Z}^n$. We say that a map $f\colon \mathbb{Z}^n\to \mathbb{Z}^n$ is {\it coordinate-wise $\mathbb{Z}_{\ge0}$-linear} if \begin{gather*} f\left(\sum_i m_i\epsilon_ie_i\right)=\sum_i m_i f(\epsilon_ie_i),\quad \text{for all} \ \ m_i\in\mathbb{Z}_{\ge0},\ \ \epsilon_i\in\{1,-1\}. \end{gather*} We call such a map {\it triangularizable} if there is an ordering $(r_1,\dots,r_n)$ of the set $\{1,\dots,n\}$ such that \begin{gather*} f(\epsilon e_{r_i})+\epsilon e_{r_i}\in \mathbb{Z}e_{r_{i+1}}\oplus\cdots \oplus\mathbb{Z}e_{r_n},\qquad \forall\, 1\le i\le n, \quad \epsilon\in\{1,-1\}. \end{gather*} \begin{Lemma}\label{lem:triangularizable bijective} Let $f$ be a triangularizable coordinate-wise $\mathbb{Z}_{\ge0}$-linear map. Then $f$ is bijective. \end{Lemma} \begin{proof} Without loss of generality we assume $r_i=i$ for $i=1,\dots,n$. We show that $f$ is invertible, i.e., for any $\sum c_ie_i\in\mathbb{Z}^n$, there exists a unique $\sum b_ie_i\in\mathbb{Z}^n$ such that \begin{equation}\label{fb=c} f\left(\sum b_ie_i\right)=\sum c_ie_i. \end{equation} We f\/irst show that $f$ is injective. Assume $\sum b_ie_i$ satisf\/ies \eqref{fb=c}.
Def\/ine a function \begin{gather*} s\colon \ \mathbb{Z}\to\{1,-1\}, \qquad s(b)= \begin{cases} \hphantom{-}1,& \text{if} \ \ b\ge0,\\ -1, & \text{otherwise}. \end{cases} \end{gather*} Let $\langle\cdot,\cdot\rangle$ denote the usual inner product on $\mathbb{Z}^n$, i.e., $\langle e_i,e_j\rangle=1$ if $i=j$ and $\langle e_i,e_j\rangle=0$ otherwise. Then for each $1\le k\le n$, we must have \begin{gather*} c_k =\left\langle\sum c_ie_i,e_k\right\rangle=\sum_{i=1}^{n}|b_i| \langle f(s(b_i) e_i),e_k\rangle \\ \hphantom{c_k}{} =\left(\sum_{i=1}^{k-1}|b_i| \langle f(s(b_i) e_i),e_k\rangle\right)+|b_k|\langle f(s(b_k) e_k),e_k\rangle=\left(\sum_{i=1}^{k-1}|b_i| \langle f(s(b_i) e_i),e_k\rangle\right)-b_k, \end{gather*} where the third equality is because of the following: for $i>k$, the triangularizability of $f$ ensures that $f(\pm e_i)$ is a linear combination of $e_j$'s with $j>k$; since $\langle e_j,e_k\rangle=0$ for these $j$'s, we have $\langle f(\pm e_i),e_k\rangle=0$. Therefore $b_1,\dots,b_n$ are uniquely determined (in that order) by the equation \begin{gather}\label{bk} b_k=\left(\sum_{i=1}^{k-1}|b_i| \langle f(s(b_i) e_i),e_k\rangle\right)-c_k, \end{gather} implying that $f$ is injective. To show that $f$ is surjective, we construct $b_1,\dots,b_n$, in that order, by~\eqref{bk}. Then we claim that~\eqref{fb=c} holds. Indeed, \begin{gather*} \left\langle f\left(\sum b_ie_i\right),e_k\right\rangle=\sum_{i=1}^n |b_i|\langle f(s(b_i) e_i),e_k\rangle=\left(\sum_{i=1}^{k-1}|b_i| \langle f(s(b_i) e_i),e_k\rangle\right)-b_k=c_k, \end{gather*} for all $1\le k\le n$, which implies~\eqref{fb=c}. \end{proof} Take the lexicographic order \begin{gather*} x_1>x_2>\cdots >x_n>1, \end{gather*} that is, $\prod x_i^{a_i}< \prod x_i^{b_i}$ if the f\/irst pair of unequal exponents $a_i$ and $b_i$ satisfy $a_i<b_i$.
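The forward-substitution inversion~\eqref{bk} is easy to exercise on a randomly generated triangularizable map (an illustration of our own; all names below are made up):

```python
# Sketch: invert a triangularizable coordinate-wise Z>=0-linear map by
# forward substitution.  f is specified by its values on ±e_k, with
# f(±e_k) ∓ e_k supported in coordinates > k (i.e., the order r_i = i).
import random

random.seed(0)
n = 4
fpos = [[0] * n for _ in range(n)]   # fpos[k] = f(+e_k)
fneg = [[0] * n for _ in range(n)]   # fneg[k] = f(-e_k)
for k in range(n):
    fpos[k][k], fneg[k][k] = -1, 1
    for j in range(k + 1, n):
        fpos[k][j] = random.randint(-3, 3)
        fneg[k][j] = random.randint(-3, 3)

def f(b):
    """Coordinate-wise Z>=0-linear extension of k -> f(±e_k)."""
    out = [0] * n
    for k, bk in enumerate(b):
        row = fpos[k] if bk >= 0 else fneg[k]
        for j in range(n):
            out[j] += abs(bk) * row[j]
    return out

def solve(c):
    """Recover b with f(b) = c, coordinate by coordinate."""
    b = [0] * n
    for k in range(n):
        b[k] = sum(abs(b[i]) * (fpos[i][k] if b[i] >= 0 else fneg[i][k])
                   for i in range(k)) - c[k]
    return b

for _ in range(200):
    b = [random.randint(-5, 5) for _ in range(n)]
    assert solve(f(b)) == b
```

The round trip `solve(f(b)) == b` illustrates injectivity, and `f(solve(c)) == c` illustrates surjectivity, exactly as in the proof.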
\begin{Lemma}\label{lem:l injective} The map $f\colon \mathbb{Z}^n\to\mathbb{Z}^n$ sending ${\bf a}$ to the exponent vector of the lowest Laurent monomial of $z[{\bf a}]$ is triangularizable coordinate-wise $\mathbb{Z}_{\ge0}$-linear, so is injective. Moreover, the coefficient of the lowest Laurent monomial of $z[{\bf a}]$ $($which a priori is a Laurent polynomial in $\mathbb{ZP})$ is a Laurent monomial in $\mathbb{ZP}$, i.e., invertible in $\mathbb{ZP}$. \end{Lemma} \begin{proof} It follows from the def\/inition of $z[{\bf a}]$ that $f$ is coordinate-wise $\mathbb{Z}_{\ge0}$-linear. Next we show that $f$ is triangularizable. Take $1\le k\le n$. By the def\/inition of mutation, \begin{gather*} x_k' = x_k^{-1} \Bigg( \underbrace{\prod_{1\leq i<k} x_i^{b_{ik}}}_{P_1} \underbrace{\prod_{i>n} x_i^{[b_{ik}]_+}}_{Q_1} + \underbrace{\prod_{k < i \leq n} x_i^{-b_{ik}}}_{P_2} \underbrace{\prod_{i>n} x_i^{[-b_{ik}]_+}}_{Q_2} \Bigg), \end{gather*} so its lowest monomial is \begin{gather*} \begin{cases} x_k^{-1} P_2 Q_2, & \text{if $k$ is not a source in $Q_B$,} \\ x_k^{-1} P_1Q_1= x_k^{-1} Q_1, & \text{if $k$ is a source in $Q_B$.} \end{cases} \end{gather*} So \begin{gather*} f(e_k)=\begin{cases} \displaystyle -e_k+\sum_{k<i\le n}(-b_{ik})e_i, & \text{if $k$ is not a source in $Q_B$,} \\ -e_k, & \text{if $k$ is a source in $Q_B$.} \end{cases} \end{gather*} We also have the obvious equality $f(-e_k)=e_k$. Thus $f$ is triangularizable by taking the order $(1,2,\dots,n)$. Therefore, $f$ is injective by Lemma~\ref{lem:triangularizable bijective}. For the ``moreover'' statement, it suf\/f\/ices to note that $P_1$ and $P_2$ in the above def\/inition of~$x'_k$ have dif\/ferent exponent vectors. This is because of the assumption that $n\ge 2$ and~$Q_B$ is weakly connected, so it is impossible for $P_1=P_2=1$ to hold. \end{proof} \begin{Lemma}\label{lem:lowest equal} For any ${\bf a} \in \mathbb{Z}^n$, the lowest Laurent monomials in $z[{\bf a}]$ and $\tilde{x}[{\bf a}]$ are equal. 
As a~consequence, $\{\tilde{x}[{\bf a}]\}_{{\bf a}\in\mathbb{Z}^n}$ is $\mathbb{ZP}$-linearly independent. \end{Lemma} \begin{proof} First note that the consequence follows from Lemma~\ref{lem:l injective}, so for the rest we focus on the f\/irst statement. By the multiplicative property of $z[{\bf a}]$ and $\tilde{x}[{\bf a}]$ (Lemma~\ref{multiplicative}), it suf\/f\/ices to consider the case when $\mathbf{a}=(a_i)\in \{0,1\}^n$. In the following, we use the notation from Lemma~\ref{lemma:01}. Def\/ine \begin{gather*} J:=\{1\le k\le n \,|\, a_k=1\},\qquad H:=\{1\le k\le n \,|\, k\textrm{ is a source in } Q_B\}. \end{gather*} Since $z[{\bf a}]=\prod\limits_{k\in J} x'_k$, its lowest Laurent monomial is \begin{gather}\label{lowest} \left(\prod_{l=1}^n x_l^{-a_l}\right)\left(\prod_{k\in J\setminus H} \left(\prod_{k < i \leq n} x_i^{-b_{ik}}\prod_{i>n} x_i^{[-b_{ik}]_+}\right) \right)\left( \prod_{k\in J\cap H} \prod_{i>n} x_i^{[b_{ik}]_+}\right). \end{gather} Def\/ine ${\bf s}=(s_1,\dots,s_n)$ by $s_k=1$ if $k\in J\setminus H$, otherwise take $s_k=0$. For $i\le n$, the degree of~$x_i$ in~\eqref{lowest} is \begin{gather*} -a_i+\sum_{k\in J\setminus H}[-b_{ik}]_+ =-a_i+\sum_{k=1}^ns_k[-b_{ik}]_+ =-a_i+ \sum_{k=1}^n \big(s_k[-b_{ik}]_++(a_k-s_k)[b_{ik}]_+\big), \end{gather*} where the second equality is because $(a_k-s_k)[b_{ik}]_+=0$. Indeed, if $a_k-s_k\neq0$, then $a_k=1$ and $s_k=0$, thus $k\in J\cap H$. Then $k\in H$ implies $b_{ik}\le 0$, therefore $[-b_{ik}]_+=0$. For $i> n$, the degree of $x_i$ in \eqref{lowest} is \begin{gather*} \sum_{k\in J\setminus H}[-b_{ik}]_++\sum_{k\in J\cap H}[b_{ik}]_+ =\sum_{k=1}^ns_k[-b_{ik}]_++(a_k-s_k) [b_{ik}]_+. \end{gather*} So we can rewrite \eqref{lowest} as \begin{gather*} \left(\prod_{i=1}^n x_i^{-a_i}\right)\prod_{i=1}^m x_i^{\sum\limits_{j=1}^n (a_j-s_j)[b_{ij}]_+ + s_j[-b_{ij}]_+}. \end{gather*} It is easy to check that ${\bf s}$ is in ${\bf S}$.
Comparing with~\eqref{df:x[a]} in Lemma~\ref{lemma:01} we draw the expected conclusion. \end{proof} \begin{Lemma}\label{supportofS} Let $t\in\{1,\dots,n\}$ and $M\in\mathbb{Z}$. Assume that all the Laurent monomials $cx_1^{r_1}\cdots x_n^{r_n}$ $(c\neq0\in\mathbb{ZP})$ appearing in the sum $S:=\sum\limits_{\bf b} u({\bf b}) z[{\bf b}] $ $(u({\bf b}) \in \mathbb{ZP})$ satisfy $-r_t\le M$. Then $\langle{\bf b},e_t\rangle\le M$ for all ${\bf b}$ with $u({\bf b})\neq 0$ in~$S$. \end{Lemma} \begin{proof} We f\/irst assume that $r_t\le 0$. Take the lexicographic order \begin{gather*} x_t>x_{t-1}>\cdots >x_1>x_{t+1}>x_{t+2}>\cdots>x_n>1. \end{gather*} Consider the map $f\colon \mathbb{Z}^n\to\mathbb{Z}^n$ that sends ${\bf b}$ to the exponent vector of the lowest term of $z[{\bf b}]$. We claim that $f$ is a triangularizable coordinate-wise $\mathbb{Z}_{\ge0}$-linear map by taking the order $(t,t-1,\dots,1,t+1,t+2,\dots,n)$. Indeed, the proof is similar to the one of Lemma \ref{lem:l injective}, the only dif\/ference is that now we have{\samepage \begin{gather*} f(e_k)=\begin{cases} \displaystyle -e_k \ \ \text{or} \ \ -e_k+\sum_{i<k}b_{ik}e_i, & \text{if $k<t$,} \\ \displaystyle -e_k \ \ \text{or} \ \ -e_k+\sum_{k<i\le n}(-b_{ik})e_i, & \text{if $k>t$,} \\ \displaystyle -e_k+\sum_{i<k}b_{ik}e_i \ \ \text{or} \ \ -e_k+\sum_{k<i\le n}(-b_{ik})e_i , & \text{if $k=t$.} \\ \end{cases} \end{gather*} (It is unnecessary for our purpose to f\/igure out the exact value of $f(e_k)$.)} \looseness=-1 Thus we conclude that $f$ is bijective by Lemma~\ref{lem:triangularizable bijective}. Let~$z[{\bf b}']$ be the one that appears in~$S$ whose lowest Laurent monomial $cx_t^{-\langle{\bf b}',e_t\rangle}\prod\limits_{i\neq t} x_i^{r_i}$ is smallest. Note that $\langle{\bf b},e_t\rangle\le\langle{\bf b}',e_t\rangle$ for all~${\bf b}$ appearing in~$S$. 
By the injectivity of $f$, the lowest Laurent monomial $cx_t^{-\langle{\bf b}',e_t\rangle}\prod\limits_{i\neq t} x_i^{r_i}$ will not cancel out with other terms in~$S$. So $\langle{\bf b}',e_t\rangle\le M$ by the assumption that all the Laurent monomials $cx_1^{r_1}\cdots x_n^{r_n}$ that appear in~$S$ satisfy $-r_t\le M$. Thus $\langle{\bf b},e_t\rangle\le M$ for all~${\bf b}$ appearing in~$S$. \end{proof} Def\/ine a partial order $\prec$ on $\mathbb{Z}^n$: \begin{gather*} {\bf b} \prec {\bf a} \ \Leftrightarrow\ b_i\le a_i \ \ \textrm{for all} \ 1\le i\le n, \qquad \text{and} \qquad \sum_i[b_i]_+<\sum_i [a_i]_+. \end{gather*} \begin{Lemma}\label{lem:xinz} For each ${\bf a} \in \mathbb{Z}^n$, we have \begin{gather*} \tilde{x}[{\bf a}] - z[{\bf a}] = \sum_{{\bf b} \prec {\bf a}} u({\bf a},{\bf b}) z[{\bf b}], \qquad \text{where} \quad u({\bf a},{\bf b}) \in \mathbb{ZP}. \end{gather*} \end{Lemma} \begin{proof} Since the elements $z[{\bf b}]$ form a $\mathbb{ZP}$-basis of the cluster algebra $\mathcal{A}$, we can write \begin{gather*} \tilde{x}[{\bf a}] = \sum_{{\bf b}} u({\bf a},{\bf b}) z[{\bf b}], \end{gather*} where the sum runs over those ${\bf b}$ with $u({\bf a},{\bf b})\neq 0$ in $\mathbb{ZP}$. For $1\le t\le n$, it is easy to see that all the Laurent monomials $cx_1^{r_1}\cdots x_n^{r_n}$ ($c\neq0\in\mathbb{ZP}$) appearing in $\tilde{x}[{\bf a}]$ satisfy $-r_t\le a_t$, thus $b_t\le a_t$ by Lemma \ref{supportofS}. Together with Lemma~\ref{lem:lowest equal} we can conclude that \begin{gather}\label{weaker expansion x-z} \tilde{x}[{\bf a}] - z[{\bf a}] = \sum_{\bf b} u({\bf a},{\bf b}) z[{\bf b}], \qquad u({\bf a},{\bf b}) \in \mathbb{ZP}, \end{gather} where ${\bf b}\neq{\bf a}$ satisf\/ies $b_i\le a_i$ for all $1\le i\le n$. In the rest of the proof we show that $\sum\limits_i[b_i]_+<\sum\limits_i [a_i]_+$. For simplicity, we assume that the f\/irst $n'$ coordinates of ${\bf a}$ are positive and the rest are non-positive. If $n'=n$, then $\sum\limits_i[b_i]_+<\sum\limits_i [a_i]_+$ since ${\bf b}\neq{\bf a}$. Assume $n'<n$.
Regard $x_i$ for $n'<i\le n$ as frozen variables. Dividing~\eqref{weaker expansion x-z} by the monomial $x_{n'+1}^{-a_{n'+1}}\cdots x_{n}^{-a_{n}}$, we get the expansion of \begin{gather*} \tilde{x}[(a_1,\dots,a_{n'})] - z[(a_1,\dots,a_{n'})] \end{gather*} in the basis $\{z[{\bf b}']\}_{{\bf b}'\in\mathbb{Z}^{n'}}$ with coef\/f\/icients in $\mathbb{ZP}[x_{n'+1}^{\pm1},\dots, x_n^{\pm1}]=\mathbb{Z}[x_{n'+1}^{\pm1},\dots, x_m^{\pm1}]$. Using induction on $n$, we conclude that ${\bf b}'\neq(a_1,\dots,a_{n'})$, thus $\sum\limits_i[b_i]_+<\sum\limits_i [a_i]_+$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:basis}] The $\mathbb{ZP}$-linear independence of $\{\tilde{x}[{\bf a}]\}_{{\bf a}\in\mathbb{Z}^n}$ is asserted in Lemma~\ref{lem:lowest equal}. Now we show that the $\mathbb{ZP}$-linear span of~$\{\tilde{x}[{\bf a}]\}_{{\bf a}\in\mathbb{Z}^n}$ equals~$\mathcal{A}$. Since $\{z[{\bf a}]\}_{{\bf a}\in\mathbb{Z}^n}$ is a~$\mathbb{ZP}$-basis of~$\mathcal{A}$, it suf\/f\/ices to show that, for any~${\bf a}\in\mathbb{Z}^n$, $z[{\bf a}]$~can be expressed as a $\mathbb{ZP}$-linear combination of~$\tilde{x}[{\bf a}']$ for ${\bf a'}\in\mathbb{Z}^n$. Iteratively applying Lemma~\ref{lem:xinz}, and noting that~$z[{\bf b}]=\tilde{x}[{\bf b}]$ for~${\bf b}\in\mathbb{Z}^n_{\le0}$ (actually the equality holds if ${\bf b}$ has at most one positive coordinate), we can express~$z[{\bf a}]$ as a~linear combination of~$\tilde{x}[{\bf b}]$'s. \end{proof} \subsection{The second proof}\label{section6.2} Def\/ine a partial order on $\mathbb{Z}^n $ by \begin{gather}\label{eq:prec2} {\bf b} \prec {\bf a} \ \ \text{if and only if}\ \ \sum_{i=1}^n [b_i]_+ < \sum_{i=1}^n [a_i]_+. \end{gather} \begin{Definition} For ${\bf a}\in\mathbb{Z}^n$, let $T^{\bf a}$ be the set of all quasi-compatible collections and let $T^{\bf a}_{\rm gcc} \subset T^{\bf a}$ be the subset of globally compatible collections. (See Def\/inition~\ref{definition:GCC} for the terminology.) \end{Definition} Let $ {\bf a} \in \{0,1\}^n $.
Let $C^{(i,j)} = \{ u_1^{(i,j)} , v_1^{(i,j)} \}$ be the corner associated to the edge $(i,j)$ (if it exists). Let $\overline{T}^{\bf a}=T^{\bf a}\setminus T^{\bf a}_{\rm gcc}$ and let $t$ be a quasi-compatible collection in $\overline{T}^{\bf a}$. Let $c(t)$ be the set of all corners of $t$. We can def\/ine an equivalence relation on quasi-compatible collections $t,s \in \overline{T}^{\bf a}$ by $t \sim s$ if and only if $c(t)=c(s)$. Let $M_t \subset \overline{T}^{\bf a}$ denote the equivalence class of $t$ with respect to $\sim.$ To each equivalence class we can associate a vector \begin{gather*} {\bf c}_t = \sum_{C^{(i,j)} \in c(t)}\left(\sum_{(i,h)\in Q_B} b_{ih} e_h + \sum_{(k,j)\in Q_B} (-b_{jk}) e_k\right) \in \mathbb{Z}_{\geq 0}^n. \end{gather*} For each $M_t \subset \overline{T}^{\bf a}$ we also associate a vector ${\bf b}_t={\bf a}-{\bf c}_t$. Note that ${\bf b}_t \prec {\bf a}$. Indeed, since $t\notin T^{\bf a}_{\rm gcc}$, there is at least one corner $C^{(i,j)}$ in $c(t)$, then $({\bf b}_t)_i\le a_i-(-b_{ji})<a_i=1$, $({\bf b}_t)_j\le a_j-b_{ij}<a_j=1$, and $({\bf b}_t)_k\le a_k$ for all $k$, thus $\sum\limits_{r=1}^n [({\bf b}_t)_r]_+ < \sum\limits_{r=1}^n [a_r]_+$. Let $\mathcal{D}_{\bf a}$ be the set of edges of Dyck paths for the vector ${\bf a}$. Let $\mathcal{D}_{{\bf b}_t}$ be def\/ined analogously. Since ${\bf b}_t \prec {\bf a}$, $\mathcal{D}_{{\bf b}_t} \subset \mathcal{D}_{\bf a}$ (since each horizontal (resp.~vertical) edge in $\mathcal{D}_{{\bf b}_t}$ naturally corresponds to a horizontal (resp. vertical) edge in $\mathcal{D}_{\bf a}$), and by the def\/inition of ${\bf b}_t$ we have \begin{gather*} \mathcal{D}_{\bf a} \setminus \mathcal{D}_{{\bf b}_t} = \bigcup_{C^{(i,j)}\subset s \in M_t}\big\{ v_1^{(k,j)},u_1^{(i,h)},u_1^{(j,f)},v_1^{(g,i)} \,|\, (j,f),(g,i),(i,h),(k,j) \in Q_{\tilde B} \big\}. 
\end{gather*} \begin{Definition} Def\/ine the map $\phi_{{\bf b}_t}\colon T^{{\bf b}_t} \rightarrow \overline{T}^{\bf a}$ as follows \begin{gather*} \phi_{{\bf b}_t}(d)= \left( \bigcup_{C^{(i,j)}\subset s \in M_t}\big\{u_1^{(i,h)}, v_1^{(k,j)} \,|\, (i,h),(k,j) \in Q_{\tilde B} \big\}\right) \sqcup d. \end{gather*} \end{Definition} It is easy to see that if $d \in T^{{\bf b}_t}$ is quasi-compatible then so is $\phi_{{\bf b}_t}(d) \in \overline{T}^{\bf a}$ and that $\phi_{{\bf b}_t}$ is injective. \begin{Lemma}\label{lem:phi} Let ${\bf a} \in \{0,1\}^n$, and $t$, $M_t$, ${\bf b}_t$, $\phi_{{\bf b}_t}$ be defined as above. \begin{enumerate}\itemsep=0pt \item[$1.$] If there exists a quasi-compatible collection $d\in T^{{\bf b}_t}$ such that $\phi_{{\bf b}_t}(d) \in M_t$, then $M_t$ is contained in the image of $\phi_{{\bf b}_t}$. Furthermore, $\{\Ima \phi_{{\bf b}_t} \,|\, M_t\subset \overline{T}^{\bf a}\}$ covers $\overline{T}^{\bf a}$. \item[$2.$] For $A \subset T^{\bf a}$ define \begin{gather*} z[A] := \left(\prod_{i=1}^n x_i^{-a_i}\right) \sum _{s \in A}\left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij} \big|S^{(i,j)}_2 \big|} x_{j}^{-b_{ji} \big|S^{(i,j)}_1 \big|}\right). \end{gather*} Then $z[\Ima \phi_{{\bf b}_t}]=z[{\bf b}_t]$. It follows immediately that if $\Ima \phi_{{\bf b}_t}=M_t$ then~$z[M_t]$ is a standard monomial. \item[$3.$] We can define a partial order on~$\{ M_t\}$ by inclusion of sets of corners. If~$M_t$ is maximal with respect to this order then $\Ima \phi_{{\bf b}_t}=M_t$. \end{enumerate} \end{Lemma} \begin{proof} (1)~To see that the $\{\Ima \phi_{{\bf b}_t}\}$ cover $\overline{T}^{\bf a}$, we will show that $M_t \subset \Ima\phi_{{\bf b}_t}$.
It suf\/f\/ices to f\/ind a~single quasi-compatible collection that maps into~$M_t$, because if $\phi_{{\bf b}_t}(d) \in M_s$ then any~$w$ in~$M_s$ is a disjoint union of \begin{gather*} \bigcup_{C^{(i,j)}\subset s \in M_t}\big\{ v_1^{(k,j)},u_1^{(i,h)} \, |\, (i,h),(k,j) \in Q_{\tilde B} \big\} \end{gather*} with an element in $T^{{\bf b}_t}$. To produce this collection, take a globally compatible collection in~$T^{{\bf b}_t}$ that contains no~$u_1^{(k,j)}$ or~$v_1^{(i,h)}$ for all~$i$,~$j$ such that $C^{(i,j)} \subset t$ and for all $(k,j),(i,h) \in Q_{\tilde B}$. This property guarantees that no new corners besides those in~$c(t)$ will be created when we map it into~$\overline{T}^{\bf a}$. Therefore its image under $\phi_{{\bf b}_t}$ is in~$M_t$. (2)~Using the fact that every quasi-compatible collection in the image of $\phi_{{\bf b}_t}$ shares the edges \begin{gather*} \bigcup_{C^{(i,j)}\subset s \in M_t}\big\{ v_1^{(k,j)},u_1^{(i,h)} \,|\,(i,h),(k,j) \in Q_{\tilde B} \big\}, \end{gather*} and the def\/inition of~${\bf b}_t$, it follows that \begin{gather*} z[{\bf b}_t] = \left(\prod_{i=1}^n x_i^{-({\bf b}_t)_i}\right) \sum_{s \in T^{{\bf b}_t}}\left(\prod_{(i,j)\in Q_{\tilde B} } x_{i}^{b_{ij}\big|S^{(i,j)}_2\big|} x_{j}^{-b_{ji}\big|S^{(i,j)}_1\big|}\right) \\ \hphantom{z[{\bf b}_t]}{} = \left(\prod_{i=1}^n x_i^{-a_i}\right)\left(\prod_{i=1}^n x_i^{({\bf c}_t)_i}\right) \sum_{s \in T^{{\bf b}_t}}\left(\prod_{(i,j)\in Q_{\tilde B} } x_{i}^{b_{ij}\big|S^{(i,j)}_2\big|} x_{j}^{-b_{ji}\big|S^{(i,j)}_1\big|}\right) \\ \hphantom{z[{\bf b}_t]}{} = \left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{s \in \Ima \phi_{{\bf b}_t}}\left(\prod_{(i,j)\in Q_{\tilde B} } x_{i}^{b_{ij}\big|S^{(i,j)}_2\big|} x_{j}^{-b_{ji}\big|S^{(i,j)}_1\big|}\right) = z[\Ima \phi_{{\bf b}_t}]. \end{gather*} (3) We have already shown that $M_t \subset \Ima \phi_{{\bf b}_t}$.
Since $M_t$ is maximal, $c(t)$~is a maximal set of corners and every element in the image of $\phi_{{\bf b}_t}$ contains $c(t)$. So $ \Ima \phi_{{\bf b}_t}\subset M_t.$ \end{proof} We need the following facts about $z[{\bf a}]$. \begin{Lemma}\label{zproduct} For any ${\bf a},{\bf b}\in\mathbb{Z}^n$, \begin{gather*} z[{\bf a}]z[{\bf b}]=\sum_{\bf c} u({\bf c})z[{\bf c}],\quad u({\bf c})\in\mathbb{ZP}, \end{gather*} where ${\bf c}$ satisfies $[c_t]_+\le [a_t+b_t]_+$ for $1\le t\le n$. \end{Lemma} \begin{proof} Regard $x_i$ ($i\neq t$) as frozen variables, i.e., $x_t$ as the only non-frozen variable. Then the problem is reduced to proving the lemma for $n=1$, which is easy. \end{proof} Now we can prove the following lemma. \begin{Lemma}\label{lem:xinz2} For ${\bf a} \in \mathbb{Z}^n$ the expansion of $\tilde{x}[{\bf a}]$ in the basis of standard monomials is of the form \begin{gather*} \tilde{x}[{\bf a}] - z[{\bf a}] = \sum_{{\bf b} \prec {\bf a}} u({\bf a},{\bf b}) z[{\bf b}], \end{gather*} where $u({\bf a},{\bf b}) \in \mathbb{ZP}$ and all but finitely many are zero, and~$\prec$ is as defined in~\eqref{eq:prec2}. \end{Lemma} \begin{proof} \emph{Step~I.} We prove the case when $m=n$, i.e., $\mathbb{ZP}=\mathbb{Z}$. By the multiplicative property of $\{\tilde{x}[{\bf a}]\}$ and $\{z[{\bf a}]\}$ (Lemma~\ref{multiplicative}), it is suf\/f\/icient to show the result for ${\bf a}\in \{0,1\}^n$.
Indeed, if ${\bf a}={\bf a}'+{\bf a}''$ and we have the result for ${\bf a}'$ and ${\bf a}''$, then{\samepage \begin{gather*} \tilde{x}[{\bf a}] =\tilde{x}[{\bf a}']\tilde{x}[{\bf a}''] =\left(z[{\bf a}']+\sum_{{\bf b}'\prec {\bf a}'} u({\bf a}',{\bf b}') z[{\bf b}']\right)\left(z[{\bf a}'']+\sum_{{\bf b}''\prec {\bf a}''} u({\bf a}'',{\bf b}'') z[{\bf b}'']\right)\\ \hphantom{\tilde{x}[{\bf a}]}{} \stackrel{(*)}{=}z[{\bf a}']z[{\bf a}''] +\sum_{{\bf b}\prec {\bf a}'+{\bf a}''} u({\bf b}) z[{\bf b}] = z[{\bf a}] +\sum_{{\bf b}\prec {\bf a}} u({\bf b}) z[{\bf b}], \end{gather*} where $(*)$ uses Lemma \ref{zproduct}.} We have that \begin{gather*} z[{\bf a}]=z[T^{\bf a}]=z[T^{\bf a}_{\rm gcc}]+z[\overline{T}^{\bf a}]=\tilde{x}[{\bf a}]+z[\overline{T}^{\bf a}]. \end{gather*} So we only need to write $z[\overline{T}^{\bf a}]$ in terms of standard monomials. Now $\{M_t\}$ forms a partition of~$\overline{T}^{\bf a}$. So, \begin{gather*} z[\overline{T}^{\bf a}] = \left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{s \in \overline{T}^{\bf a}} \left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}\big|S^{(i,j)}_2\big|} x_{j}^{-b_{ji}\big|S^{(i,j)}_1\big|}\right) \\ \hphantom{z[\overline{T}^{\bf a}]}{} = \left(\prod_{i=1}^n x_i^{-a_i}\right) \sum_{M_t \subset \overline{T}^{\bf a}} \sum_{s \in M_t} \left(\prod_{(i,j)\in Q_{\tilde B}} x_{i}^{b_{ij}\big|S^{(i,j)}_2\big|} x_{j}^{-b_{ji}\big|S^{(i,j)}_1\big|}\right) = \sum_{ M_t \subset \overline{T}^{\bf a}} z[M_t]. \end{gather*} Note that the $z[M_t]$ are not necessarily standard monomials. However, by Lemma~\ref{lem:phi}, $z[\Ima \phi_{{\bf b}_t}]$ are standard monomials, and $\{M_t\}$ partition the image of $\phi_{{\bf b}_t}$. Therefore we can do the same manipulation as above to get the equation \begin{gather}\label{zbt} z[{\bf b}_t] = z[\Ima \phi_{{\bf b}_t}]=\sum_{ M_s \subset \Ima \phi_{{\bf b}_t}} z[M_s] .
\end{gather} Now there are f\/initely many ${\bf b}_t$ and ${\bf b}_t \prec {\bf a}$ (since~${\bf c}_t \in \mathbb{Z}^n_{\geq 0}$), so to f\/inish the proof of our claim, we only need to show that~$z[\overline{T}^{\bf a}]$ is a~$\mathbb{Z}$-linear combination of these~$z[{\bf b}_t]$'s. Note that $M_s \subset \Ima \phi_{{\bf b}_t}$ has at least the corners of $c(t)$ but could have more. So $M_s \not\subset \Ima\phi_{{\bf b}_t}$ if $M_s < M_t$ with respect to our order in Lemma~\ref{lem:phi}(3), and we can rewrite~\eqref{zbt} as \begin{gather*} z[{\bf b}_t]=z[M_t]+\sum_{M_t<M_s} d_sz[M_s], \qquad d_s\in\{0,1\}. \end{gather*} By extending the partial order on~$\{M_t\}$ to a total order, we see that the transition matrix between~$\{z[{\bf b}_t]\}$ and~$\{z[M_t]\}$ is triangular with all diagonal entries~1, so it is invertible over~$\mathbb{Z}$. Therefore every~$z[M_t]$ is a linear combination of~$z[{\bf b}_t]$. Therefore, $z[\overline{T}^{\bf a}]$ is a $\mathbb{Z}$-linear combination of~$z[{\bf b}_t]$, where~$t$ satisf\/ies $M_t \subset \overline{T}^{\bf a}$. \emph{Step II.} We prove the principal coefficient case, i.e., $m=2n$ and $\tilde{B}=\begin{bmatrix} B\\ I_n \end{bmatrix}$ where $I_n$ is the $n\times n$ identity matrix. Let $\mathbb{ZP}=\mathbb{Z}[y_1,\dots,y_n]$ where $y_j=x_{n+j}$ for $1\le j\le n$. Note that this will imply the general coef\/f\/icient case by replacing the principal coef\/f\/icients $y_j$ by $\prod\limits_{i=n+1}^mx_i^{b_{ij}}$ for $1\le j\le n$. Apply Step I to the coef\/f\/icient-free cluster algebra $\mathcal{A}'$ with $B$-matrix $\begin{bmatrix} B&-I_n\\ I_n&0 \end{bmatrix}$. For any ${\bf a}\in\mathbb{Z}^n$, denote $\tilde{\bf a}=(a_1,\dots,a_n,0,\dots,0)\in\mathbb{Z}^{2n}$. Then $\tilde{x}[\tilde{\bf a}]=\tilde{x}[{\bf a}]$ and $z[\tilde{\bf a}]=z[{\bf a}]$.
Thus \begin{gather*} \tilde{x}[{\bf a}] - z[{\bf a}]=\tilde{x}[\tilde{\bf a}] - z[\tilde{\bf a}] = \sum u({\bf b}') z[{\bf b}'], \end{gather*} where $u({\bf b}') \in \mathbb{Z}$ and ${\bf b}'=(b_1,\dots,b_{2n})$ satisf\/ies $\sum\limits_{i=1}^m [b_i]_+<\sum\limits_{i=1}^n[a_i]_+$. So it suf\/f\/ices to show that $z[{\bf b}']$ is a $\mathbb{ZP}$-linear combination of $z[{\bf c}]$ with ${\bf c}=(c_1,\dots,c_n)\in \mathbb{Z}^n$ satisfying $\sum\limits_{i=1}^n[c_i]_+\le \sum\limits_{i=1}^n[b_i]_+$ (so $\sum\limits_{i=1}^n[c_i]_+\le \sum\limits_{i=1}^m[b_i]_+<\sum\limits_{i=1}^n[a_i]_+$). Denote ${\bf b}=(b_1,\dots,b_n)$. Since $z[{\bf b}']=z[{\bf b}] \, z[(0,\dots,0,b_{n+1},\dots,b_{2n})]$ where the second factor is a $\mathbb{ZP}$-linear combination of $z[{\bf d}]$ with ${\bf d}=(d_1,\dots,d_n)\in \mathbb{Z}_{\le 0}^n$, it suf\/f\/ices to show that $z[{\bf b}]z[{\bf d}]$ is a $\mathbb{ZP}$-linear combination of $z[{\bf c}]$ with ${\bf c}=(c_1,\dots,c_n)\in \mathbb{Z}^n$ satisfying $\sum[c_i]_+\le \sum[b_i]_+$. This follows from Lemma~\ref{zproduct} since $\sum\limits_{i=1}^n[c_i]_+ \le \sum\limits_{i=1}^n [b_i+d_i]_+\le \sum\limits_{i=1}^n [b_i]_+$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:basis}] Similar to the proof in Section~\ref{section6.1}, except that we use Lemma~\ref{lem:xinz2} in place of Lemma~\ref{lem:xinz}. \end{proof} \subsection*{Acknowledgements} The authors are grateful to the anonymous referees for carefully reading through the manuscript and giving us many constructive suggestions to improve the presentation. KL is supported by Wayne State University, Korea Institute for Advanced Study, AMS Centennial Fellowship and NSA grant H98230-14-1-0323. MM is supported by GAANN Fellowship. \pdfbookmark[1]{References}{ref}
\section{Introduction} An outstanding unsolved problem in lattice statistics is the dimer-monomer problem. While it is known \cite{lieb} that the dimer-monomer system does not exhibit a phase transition, there have been only limited closed-form results. The case of close-packed dimers on planar lattices has been solved by Kasteleyn \cite{kas} and by Temperley and Fisher \cite{ft,fisher}, and the solution has been extended to nonorientable surfaces \cite{luwu,tesler}. But the general dimer-monomer problem has proven to be intractable \cite{jerrum}. \medskip In 1974 Temperley \cite{temp} pointed out a bijection between configurations of a single monomer on the boundary of a planar lattice and spanning trees on a related lattice. The bijection was used in \cite{temp} to explain why enumerations of close-packed dimers and spanning trees on square lattices yield the same Catalan constant. More recently Tzeng and Wu \cite{tw} made further use of the Temperley bijection to obtain the closed-form generating function for a single monomer on the boundary. The derivation is, however, indirect, since it makes use of the Temperley bijection, which obscures the underlying mathematics of the closed-form solution. \medskip Motivated by the Tzeng-Wu result, there has been renewed interest in the general dimer-monomer problem. In a series of papers Kong \cite{kong,kong1,kong2} has studied numerical enumerations of such configurations on $m\times n$ rectangular lattices for varying $m,n$, and extracted finite-size correction terms for the single-monomer \cite{kong} and general monomer-dimer \cite{kong1,kong2} problems. Of particular interest is the finding \cite{kong} that in the case of a single monomer the enumeration exhibits a regular pattern similar to that found in the Kasteleyn solution of close-packed dimers. This suggests that the general single-monomer problem might be soluble.
\medskip As a first step toward finding that solution, it is necessary to have an alternate and direct derivation of the Tzeng-Wu solution without recourse to the Temperley bijection. Here we present such a derivation. Our new approach points the way toward a possible extension to the general single-monomer problem. It also identifies that, besides an overall constant, the Tzeng-Wu solution is given by the square root of the product of the nonzero eigenvalues of the Kasteleyn matrix, and thus clarifies its underlying mathematics. \section{The single monomer problem} Consider a rectangular lattice ${\cal L}$ consisting of an array of $M$ rows and $N$ columns, where both $M$ and $N$ are odd. The lattice consists of two sublattices $A$ and $B$. Since the total number of sites $MN$ is odd, the four corner sites belong to the same sublattice, say, $A$, and there is one more $A$ site than there are $B$ sites. The lattice can therefore be completely covered by dimers if one $A$ site is left open. The open $A$ site can be regarded as a monomer. \medskip Assign non-negative weights $x$ and $y$ respectively to horizontal and vertical dimers. When the monomer is on the boundary, Tzeng and Wu \cite{tw} obtained the following closed-form expression for the generating function, \begin{eqnarray} && G(x,y) = x^{(M-1)/2}y^{(N-1)/2} \nonumber \\ && \times \prod_{m=1}^{\frac {M-1} 2} \prod_{n=1}^{\frac {N-1} 2} \bigg[ 4 x^2 \cos^2 \frac {m\pi}{M+1} +4 y^2 \cos^2 \frac {n\pi}{N+1} \bigg].\label{part} \end{eqnarray} This result is independent of the location of the monomer provided that it is on the boundary. \medskip We rederive the result (\ref{part}) using a formulation which is applicable to any dimer-monomer problem. We first expand ${\cal L}$ into an extended lattice ${\cal L}'$ constructed by connecting each site occupied by a monomer to a newly added site, and then consider close-packed dimers on ${\cal L}'$.
Since the newly added sites are all of degree 1, all edges originating from the new sites must be covered by dimers. Consequently, the dimer-monomer problem on ${\cal L}$ (with fixed monomer sites) is mapped to a close-packed dimer problem on ${\cal L}'$, which can be treated by standard means. \medskip We use the Kasteleyn method \cite{kas} to treat the latter problem. Returning to the single-monomer problem, let the boundary monomer be at site $s_0=(1,n)$ as demonstrated in Fig. 1a. The site $s_0$ is connected to a new site $s'$ by an edge with weight $1$ as shown in Fig. 1b. To enumerate close-packed dimers on ${\cal L}'$ using the Kasteleyn approach, we need to orient, and associate phase factors to, edges so that all terms in the resulting Pfaffian yield the same sign. \begin{figure} \center{\includegraphics [angle=0,height=1.5in]{VacancyDir_fig1.eps}} \caption{ (a) A dimer-monomer configuration on a $5\times 5$ lattice ${\cal L}$ with a single monomer at $s_0 =(1,3)$. (b) The extended lattice ${\cal L}'$ with edge orientation and a phase factor $i$ to horizontal edges. (c) The reference dimer configuration $C_0$.} \end{figure} \medskip A convenient choice of orientation and assignment of phase factors is the one suggested by T. T. Wu \cite{ttwu}. While Wu considered the case $MN$ even, the consideration can be extended to the present case. Orient all horizontal (resp. vertical) edges in the same direction and the new edge from $s'$ to $s_0$, and introduce a phase factor $i$ to all horizontal edges as shown in Fig. 1b. Then all terms in the Pfaffian assume the same sign. To prove this assertion, it suffices to show that a typical term in the Pfaffian associated with a dimer configuration $C$ has the same sign as the term associated with a reference configuration $C_0$. For $C_0$ we choose the configuration shown in Fig. 1c, in which horizontal dimers are placed in the first row with vertical dimers covering the rest of the lattice.
Then $C$ and $C_0$ assume the same sign. \medskip The simplest way to verify the last statement is to start from a configuration in which every heavy edge in $C_0$ shown in Fig. 1c is occupied by two dimers, and view each of the doubly occupied edges as a polygon of two edges. Then the `transposition polygon' (cf.~\cite{kas}) formed by superimposing any $C$ and $C_0$ can always be generated by deforming some of the doubly occupied edges into bigger polygons, a process which does not alter the overall sign. It follows that $C$ and $C_0$ have the same sign for any $C$. This completes the proof. \medskip Here we have implicitly made use of the fact that the monomer is on the boundary. If the monomer resides in the interior of ${\cal L}$, then there exist transposition polygons encircling the monomer site which may not necessarily carry the correct sign. The Pfaffian, while it can still be evaluated, does not yield the dimer-monomer generating function. We shall consider this general single-monomer problem subsequently \cite{luwu1}. \medskip With the edge orientation and phase factors in place, the dimer generating function $G$ is obtained by evaluating a Pfaffian \begin{equation} G(x,y) = {\rm Pf} (A')= \sqrt {{\rm Det}\ A'} \label{G} \end{equation} where $A'$ is the antisymmetric Kasteleyn matrix of dimension $(MN+1)\times (MN+1)$. Explicitly, it reads \begin{equation} A' = \pmatrix {0&0&\cdots&0&1&0&\cdots&0\cr 0 & & & & & & & \cr \vdots& & & & & & & \cr 0& & & & & & & \cr -1& & & &A & & & \cr 0 & & & & & & & \cr \vdots& & & & & & & \cr 0 & & & & & & & \cr }.
\label{A0matrix} \end{equation} Here, $A$ is the Kasteleyn matrix of dimension $MN$ for ${\cal L}$ given by \begin{equation} A = i\,x\,T_M \otimes I_N + y\, I_M \otimes T_N, \label{Amatrix} \end{equation} with $I_N$ the $N\times N$ identity matrix and $T_N$ the $N\times N$ matrix \begin{equation} T_N = \pmatrix{ 0& 1 & 0& \cdots & 0 &0 \cr -1& 0 & 1& \cdots & 0 &0 \cr 0& -1& 0& \cdots & 0 &0 \cr \dots& \dots &\dots &\dots&\dots&\dots \cr 0& 0& 0& \dots & 0&1 \cr 0& 0& 0& \dots & -1&0}. \label{Tmatrix} \end{equation} Note that elements of $A$ are labeled by $\{(m,n);(m',n')\}$, where $(m,n)$ is a site index, and the element $1$ in the first row of $A'$ is at position $(1,n)$ of $A$, $n$ odd. \medskip Expanding (\ref{A0matrix}) along the first row and column, we obtain \begin{equation} {\rm Det}\, A' = C(A;\{( 1,n);(1,n)\}) \label{Aprime} \end{equation} where $C(A;\{( 1,n);(1,n)\}) $ is the cofactor of the $\{(1,n);(1,n)\}$-th element of ${\rm Det}\, A$. \medskip The cofactor $C(\alpha,\beta)$ of the $(\alpha,\beta)$-th element of any non-singular $A$ can be computed using the identity \begin{equation} C(A; \alpha,\beta) = A^{-1}(\beta,\alpha) \times {\rm Det} \, A, \label{cofactor} \end{equation} where $A^{-1}(\beta,\alpha)$ is the $(\beta,\alpha)$-th element of $A^{-1}$. However, the formula is not directly useful in the present case since the matrix $A$ is singular. We shall return to its evaluation in Sec. IV. \section{Eigenvalues of the determinant $A$} In this section we enumerate the eigenvalues of $A$.
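As a concrete check (ours, not part of the original derivation; all variable names are our own choices), the matrices in (\ref{A0matrix})--(\ref{Tmatrix}) can be assembled numerically for the smallest lattice $M=N=3$ at $x=y=1$, and the expansion (\ref{Aprime}) verified directly. Since the phase factors $i$ can contribute an overall phase, we compare absolute values where that matters; a $3\times3$ lattice with a corner monomer has $G=4$ coverings (easily counted by hand), so $|{\rm Det}\,A'|=G^2=16$.

```python
import numpy as np

def T(n):
    """Tridiagonal matrix T_n of (Tmatrix): +1 on the super-, -1 on the subdiagonal."""
    t = np.zeros((n, n))
    for k in range(n - 1):
        t[k, k + 1], t[k + 1, k] = 1.0, -1.0
    return t

def kasteleyn(M, N, x=1.0, y=1.0):
    """Kasteleyn matrix A of (Amatrix); site (m,n) -> row (m-1)*N + (n-1)."""
    return 1j * x * np.kron(T(M), np.eye(N)) + y * np.kron(np.eye(M), T(N))

M = N = 3
A = kasteleyn(M, N)

# Extended matrix A' of (A0matrix): an extra site s' joined to the monomer at (1, n0).
n0 = 1                                # monomer at the corner (1,1), an A site
idx = 0 * N + (n0 - 1)
Ap = np.zeros((M * N + 1, M * N + 1), dtype=complex)
Ap[1:, 1:] = A
Ap[0, 1 + idx], Ap[1 + idx, 0] = 1.0, -1.0

assert np.allclose(Ap, -Ap.T)         # A' is antisymmetric

# (Aprime): Det A' equals the cofactor of the ((1,n0);(1,n0)) element of A,
# i.e. the determinant of A with that row and column deleted.
minor = np.delete(np.delete(A, idx, axis=0), idx, axis=1)
assert np.isclose(np.linalg.det(Ap), np.linalg.det(minor))

# |Det A'| = G^2 = 16 for the 3x3 lattice with a corner monomer at x = y = 1.
assert np.isclose(abs(np.linalg.det(Ap)), 16.0)
```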
\medskip The matrix $T_N$ is diagonalized by the similarity transformation \begin{eqnarray} U^{-1}_N \, T_N \, U_N = \Lambda _N \nonumber \end{eqnarray} where $U_N$ and $U^{-1}_N$ are $N\times N$ matrices with elements \begin{eqnarray} U_{N}(n_1,n_2) &=& \sqrt \frac 2 {N+1}\,i\,^{n_1} \sin \frac {n_1n_2\pi}{N+1} \nonumber \\ U_{N}^{-1} (n_1,n_2) &=& \sqrt \frac 2 {N+1} \, (-i)^{n_2} \sin \frac {n_1n_2\pi}{N+1} , \label{UU} \end{eqnarray} and $\Lambda_N$ is an $N\times N$ diagonal matrix whose diagonal elements are the eigenvalues of $T_N$, \begin{equation} \lambda_m = 2i\, \cos \frac {m\pi}{N+1}, \quad m = 1,2,\cdots, N. \end{equation} \medskip Similarly, the $MN \times MN$ matrix $A$ is diagonalized by the similarity transformation generated by $U_{MN} = U_M\otimes U_N$. Namely, \begin{eqnarray} U_{MN}^{-1} \,A\, U_{MN} = \Lambda_{MN} \label{UMN} \end{eqnarray} where $\Lambda_{MN}$ is a diagonal matrix with eigenvalues \begin{eqnarray} \lambda_{mn} &=& 2 i \bigg[i\, x \,\cos \frac {m\pi}{M+1} + y\, \cos \frac {n\pi}{N+1}\bigg], \nonumber \\ && \hskip .5cm m=1,2,...,M,\, n=1,2,...,N, \label{eigenvalue} \end{eqnarray} on the diagonal, and elements of $U_{MN}$ and $U^{-1}_{MN}$ are \begin{eqnarray} U_{MN}(m_1,n_1;m_2,n_2) &=& U_M(m_1,m_2)\, U_N(n_1,n_2) \nonumber \\ U^{-1}_{MN}(m_1,n_1;m_2,n_2) &=& U^{-1}_M(m_1,m_2) \, U^{-1}_N(n_1,n_2) .\nonumber \end{eqnarray} Then we have \begin{equation} {\rm Det}\, A = \prod_{m=1}^M \prod_{n=1}^N \lambda_{mn}. \label{DetA} \end{equation} As in (\ref{G}), close-packed dimers on ${\cal L}$ are enumerated by evaluating $\sqrt { {\rm Det} A}$. For $MN$ even, this procedure gives precisely the Kasteleyn solution \cite{kas}. For $MN$ odd, the case we are considering, the eigenvalue $\lambda_{mn} = 0$ for $m=(M+1)/2, n=(N+1)/2$, and hence ${\rm Det}\, A = 0$, correctly indicating that there is no dimer covering of ${\cal L}$.
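The diagonalization is easy to confirm numerically (our sketch, not part of the paper): the eigenvalues of $A$ obtained from a generic solver match the closed form (\ref{eigenvalue}), and for $M$, $N$ both odd the spectrum contains $0$, so ${\rm Det}\,A=0$ as claimed.

```python
import numpy as np

def T(n):
    """Tridiagonal matrix T_n: +1 on the super-, -1 on the subdiagonal."""
    t = np.zeros((n, n))
    for k in range(n - 1):
        t[k, k + 1], t[k + 1, k] = 1.0, -1.0
    return t

M, N, x, y = 3, 5, 0.7, 1.3
A = 1j * x * np.kron(T(M), np.eye(N)) + y * np.kron(np.eye(M), T(N))

# Eigenvalues predicted by the closed form lambda_{mn}.
lam = np.array([2j * (1j * x * np.cos(m * np.pi / (M + 1))
                      + y * np.cos(n * np.pi / (N + 1)))
                for m in range(1, M + 1) for n in range(1, N + 1)])

# Compare the two spectra as multisets (sorted by rounded real, then imaginary part).
key = lambda z: (round(z.real, 8), round(z.imag, 8))
assert np.allclose(sorted(np.linalg.eigvals(A), key=key), sorted(lam, key=key))

# For M, N both odd the eigenvalue with m=(M+1)/2, n=(N+1)/2 vanishes, so Det A = 0.
assert abs(2j * (1j * x * np.cos(np.pi / 2) + y * np.cos(np.pi / 2))) < 1e-12
assert abs(np.linalg.det(A)) < 1e-6
```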
However, it is useful for later purposes to consider the product of the nonzero eigenvalues of $A$, \begin{equation} P\equiv {{\prod_{m=1}^M \prod_{n=1}^N}}\, '\, \lambda_{mn} , \label{P1} \end{equation} where the prime over the product denotes the restriction $ (m,n) \not= \big(\frac {M+1} 2,\frac {N+1} 2 \big)$. \medskip Using the identity \begin{eqnarray} \cos \bigg( \frac {m}{M+1}\bigg) \pi = - \cos \bigg( \frac {M-m+1}{M+1}\bigg) \pi, \nonumber \end{eqnarray} one can rearrange factors in the product to arrive at \begin{equation} P = Q \,\prod_{m=1}^{\frac {M-1} 2} \prod_{n=1}^{\frac {N-1} 2} \, \bigg( 4 x^2 \cos^2 \frac {m\pi}{M+1} +4 y^2 \cos^2 \frac {n\pi}{N+1}\bigg)^2 \label{P} \end{equation} where the factor $Q$ is the product of factors with either $m=\frac {M+1} 2$ or $n=\frac {N+1} 2$. Namely, \begin{eqnarray} Q &=& \Bigg[ \prod_{m=1}^{\frac {M-1} 2} 4x^2\cos ^2 \frac {m\pi}{M+1} \Bigg] \times \Bigg[ \prod_{n=1}^{\frac {N-1} 2} 4y^2\cos ^2 \frac {n\pi}{N+1} \Bigg] \nonumber \\ &=& \bigg[\frac {(M+1)(N+1)}4\bigg] x^{M-1} y^{N-1}, \label{Q} \end{eqnarray} where we have made use of the identity \begin{eqnarray} \prod_{n=1}^{\frac {N-1} 2}\bigg( 4 \cos^2 \frac {n\pi}{N+1}\bigg) = \frac {N+1} 2, \quad N \ {\rm odd}. \nonumber \end{eqnarray} The expression (\ref{P}) for $P$ will be used in the next section. \section{Evaluation of the cofactor} We now return to the evaluation of the cofactor $C(A;\{( 1,n);(1,n)\}) $. We shall, however, evaluate the cofactor $C(A;\{( m,n);(m',n')\}) $ for general $m,m',n,n'$, although only the result for $m=m'=1$, $n =n'$ is needed here.
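The product (\ref{P1}) and the resulting generating function can be checked against a brute-force enumeration in the smallest case $M=N=3$, $x=y=1$ (our sketch, not from the paper; because the phase factors $i$ can contribute an overall phase to $P$, we compare absolute values): by (\ref{P})--(\ref{Q}), $|P|=Q\cdot(2+2)^2=4\cdot16=64$, and $G=\sqrt{4|P|/((M+1)(N+1))}=4$ agrees with a direct count of dimer coverings.

```python
import numpy as np

M = N = 3                      # x = y = 1 throughout

# Product of the nonzero eigenvalues, omitting (m,n) = ((M+1)/2, (N+1)/2).
P = np.prod([2j * (1j * np.cos(m * np.pi / (M + 1)) + np.cos(n * np.pi / (N + 1)))
             for m in range(1, M + 1) for n in range(1, N + 1)
             if (m, n) != ((M + 1) // 2, (N + 1) // 2)])
assert np.isclose(abs(P), 64.0)

# Brute-force count of dimer coverings of the 3x3 lattice with a corner monomer:
def matchings(free):
    """Number of perfect matchings of the grid-graph vertices in the set `free`."""
    if not free:
        return 1
    m, n = min(free)           # match the smallest remaining site with a neighbor
    return sum(matchings(free - {(m, n), t})
               for t in ((m + 1, n), (m, n + 1), (m - 1, n), (m, n - 1))
               if t in free)

sites = frozenset((m, n) for m in range(1, M + 1) for n in range(1, N + 1))
G = matchings(sites - {(1, 1)})
assert G == 4

# G = sqrt(4 |P| / ((M+1)(N+1))), the form obtained in the next section.
assert np.isclose(G, np.sqrt(4 * abs(P) / ((M + 1) * (N + 1))))
```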
\medskip To circumvent the problem of using (\ref{cofactor}) caused by the vanishing of ${\rm Det}\, A$, we replace $A$ by the matrix \begin{eqnarray} A(\epsilon) = A + \epsilon \, I_{MN}\, , \quad \epsilon \not= 0 \nonumber \end{eqnarray} whose inverse exists, and take the $\epsilon \to 0$ limit to rewrite (\ref{cofactor}) as \begin{eqnarray} && C(A;\{( m,n);(m',n')\}) \nonumber \\ && \quad =\lim _{\epsilon \to 0} \bigg[ \big[A^{-1}(\epsilon)\big] (m',n';m,n) \times {\rm Det}\ A(\epsilon) \bigg] .\label{part1} \end{eqnarray} Quantities on the r.h.s. of (\ref{part1}) are now well-defined and the cofactor can be evaluated accordingly. The consideration of the inverse of a singular matrix along this line is known in the mathematics literature as finding the pseudo-inverse \cite{inverse,inverse1}. The method of taking the small $\epsilon$ limit used here has previously been used successfully in the analyses of resistance \cite{resistor} and impedance \cite{impedance} networks. \medskip The eigenvalues of $A(\epsilon) $ are $\lambda_{mn}(\epsilon) =\lambda_{mn} + \epsilon$ and hence we have \begin{equation} {\rm Det}\, A(\epsilon) = \prod _{m=1}^M \prod_{n=1}^N \big[\lambda_{mn} +\epsilon \big] = \epsilon\, P + O(\epsilon^2),\label{Aepsilon} \end{equation} where $P$ is the product of nonzero eigenvalues given by (\ref{P}). \medskip We next evaluate $A^{-1}(\epsilon)(m',n';m,n)$ and retain only terms of the order of $1/\epsilon$. Taking the inverse of (\ref{UMN}) with $A(\epsilon)$ in place of $A$, we obtain \begin{eqnarray} A^{-1}(\epsilon) = U_{MN}\, \Lambda_{MN}^{-1}(\epsilon)\, U_{MN}^{-1}. \nonumber \end{eqnarray} Writing out its matrix elements explicitly, we have \begin{widetext} \begin{equation} A^{-1}(\epsilon) (m',n'; m,n) = \sum_{m''=1}^M \sum_{n''=1}^N \frac {U_{MN}(m',n';m'',n'') U^{-1}_{MN}(m'',n'';m,n)} {\lambda_{m'',n''}+\epsilon}.
\end{equation} For $\epsilon$ small the leading term comes from $(m'',n'')= (\frac {M+1} 2, \frac {N+1} 2)$ for which $\lambda_{m'',n''} =0$. Using $U^{-1}_{MN}(m,n;m',n') = U^{-1}_{M}(m,m')\,U^{-1}_{N}(n,n')$ and (\ref{UU}), this leads to the expression \begin{eqnarray} A^{-1}(\epsilon) (m',n'; m,n) &=& \bigg(\frac 1 \epsilon\bigg) \bigg[\frac {4\, i^{m'+n'} (-i)^{m+n} } {(M+1)(N+1)} \bigg] \sin \frac {m'\pi} 2 \sin \frac {n'\pi} 2 \sin \frac {m\pi} 2 \sin \frac {n\pi} 2 +\ O(1). \nonumber \end{eqnarray} \end{widetext} Thus, after making use of (\ref{part1}) and (\ref{Aepsilon}) we obtain \begin{eqnarray} &&C(A;\{( m,n);(m',n')\}) \nonumber \\ && =\sin \frac {m \pi} 2 \sin \frac {n \pi} 2 \sin \frac {m' \pi} 2 \sin \frac {n '\pi} 2 \bigg[ {\frac {4\, i^{m'+n'} (-i)^{m+n} P} {(M+1)(N+1)} }\bigg] .\nonumber \\ \label{CA} \end{eqnarray} Finally, specializing to $m=m'=1, \,n=n'$ and combining (\ref{G}), (\ref{Aprime}), and (\ref{CA}), we obtain \begin{eqnarray} G(x,y) &=& \sqrt{ C(A;\{( 1,n);(1,n)\}) } \nonumber \\ &=& \sqrt{ \frac {4 P} {(M+1)(N+1)} }\, ,\hskip 0.2cm {\rm for \>\>} n {\rm \>\> odd\>\>} (A \>\> {\rm site}) \nonumber \\ &=& 0, \hskip 2.9cm {\rm for \>\>} n {\rm \>\> even\>\>} (B \>{\rm \>site})\nonumber \\ \label{final} \end{eqnarray} This gives the result (\ref{part}) after introducing (\ref{P}) for $P$. It also says that there is no dimer covering if the monomer is on a $B$ site. \medskip The expression (\ref{final}) clarifies the underlying mathematical content of the Tzeng-Wu solution (\ref{part}) by identifying it, up to an overall constant, as the square root of the product of the {\it nonzero} eigenvalues of the Kasteleyn matrix. This is to be compared with the Kasteleyn result \cite{kas} that for $MN$ even the dimer generating function is given by the product of {\it all} eigenvalues. \section{Discussions} We have used a direct approach to derive the closed-form expression of the dimer-monomer generating function for the rectangular lattice with a single monomer on the boundary.
Our approach is to first convert the problem into one of close-packed dimers without monomers, and consider the latter problem using established means. This approach suggests a possible route toward analyzing the general dimer-monomer problem. \medskip We have also established that the Tzeng-Wu solution (\ref{part}) is given by the product of the nonzero eigenvalues of the Kasteleyn matrix of the lattice. This is reminiscent of the well-known result in algebraic graph theory \cite{graph} that spanning trees on a graph are enumerated by evaluating the product of the nonzero eigenvalues of its tree matrix. The method of evaluating cofactors of a singular matrix as indicated by (\ref{part1}), when applied to the tree matrix of spanning trees (the details of which can be easily worked out), offers a simple and direct proof of the fact that all cofactors of a tree matrix are equal, and equal to the product of its nonzero eigenvalues. The intriguing similarity of the results suggests there might be something deeper lurking behind our analysis. \medskip I am grateful to W. T. Lu for help in the preparation of the manuscript.
\section{Introduction}\label{sec: Intro} Analyzing environmental data sets often requires joint modeling of multiple spatially dependent variables accounting for dependence among the variables and the spatial association for each variable. {Joint modeling approaches have two primary benefits over independently analyzing each variable: (i) we can estimate posited associations among the variables that are precluded by independent analyses; and (ii) we can obtain improved predictive inference by borrowing strength across the variables using their dependence. A further consequence of these benefits is that the estimated residuals from joint or multivariate regression models are a better reflection of random noise as the model accounts for different sources of dependence. } For point-referenced variables, multivariate Gaussian processes (GPs) serve as versatile tools for joint modeling of spatial variables \citep[see, e.g.,][and references therein]{scha04, cressie2015statistics, banerjee2014hierarchical, genton2015cross, wackernagel2003}. These texts discuss the substantial literature on modeling multivariate spatial processes, a field referred to as \emph{multivariate geostatistics}, and include several references that have cogently demonstrated some of the aforementioned benefits of joint modeling over independent modeling. However, for a data set with $n$ observed locations, fitting a GP-based spatial model typically requires floating point operations (flops) and memory requirements of the order $\sim n^3$ and $\sim n^2$, respectively. This is challenging when $n$ is large. This ``Big Data'' problem in spatial statistics continues to receive much attention and a comprehensive review is beyond the scope of this article \citep[see, e.g.,][]{banerjee2017high, heaton2019case, sun2012geostatistics}. Much of the aforementioned literature for scalable models has focused on univariate spatial processes, i.e., assuming only one response for each location.
Recently \cite{bradley2015multivariate} and \cite{bradley2018computationally} proposed novel classes of multivariate spatiotemporal models for high-dimensional areal data. On the other hand, our current work emphasizes multivariate continuous spatial processes as is customarily used for point-referenced geostatistical data analysis. Multivariate processes \citep[see, e.g.,][and references therein]{genton2015cross, salvana2020nonstationary, le2006statistical} have received relatively limited development in the context of massive data. Bayesian models are attractive for inference on multivariate spatial processes because they can accommodate uncertainties in the process parameters more flexibly through their hierarchical structure. Multivariate spatial interpolation using conjugate Bayesian modeling can be found in \citet[][]{brown1994multivariate,le1997bayesian,sun1998assessment, le2001spatial,gamerman2004multivariate}, but these methods do not address the challenges encountered in massive data sets. More flexible methods for joint modeling, including spatial factor models, have been investigated in Bayesian contexts \citep[see, e.g.,][]{schmidt2003bayesian, renbanerjee2013, taylor2019spatial}, but these methods have focused upon delivering full Bayesian inference through iterative algorithms such as Markov chain Monte Carlo (MCMC), Integrated Nested Laplace Approximations or INLA \citep{ruemartinochopin2009} or variational Bayes \citep{renbanerjee2011vb}. Our current contribution is {to extend conjugate Bayesian multivariate regression models to spatial process settings and deliver inference using exact posterior samples obtained directly from closed-form posterior distributions. We specifically address the scenario where the number of locations is massive but the number of dependent variables is modestly small so that dimension reduction is sought on the number of locations and not on the number of variables.
Our primary contribution is, therefore, to see how the conjugate Bayesian multivariate regression models can be adapted to accommodate closed-form posterior distributions and avoid iterative algorithms (such as MCMC or INLA or variational Bayes).} We develop an augmented Bayesian multivariate linear model framework that accommodates conjugate distribution theory, similar to \cite{gamerman2004multivariate}, but that can scale up to massive data sets with locations numbering in the millions. {We also extend the univariate conjugate Nearest-Neighbor Gaussian process (NNGP) models in \citet{finley2019efficient} and \citet{zdb2019} to multivariate conjugate spatial regression models. We achieve this by embedding the NNGP covariance matrix as a parameter in the Matrix-Normal and Inverse-Wishart family of conjugate priors for multivariate Bayesian regression. We will consider two classes of models. The first is obtained by modeling the spatially dependent variables jointly as a multivariate spatial process, while the second models a latent multivariate spatial process in a hierarchical setup. We refer to the former as the ``response'' model and the latter as the ``latent'' model. While the univariate versions of the response and latent models have been discussed in \citet{finley2019efficient} and \citet{zdb2019}, here we provide some new theoretical insights that help understand the performance of these models as approximations to full Gaussian processes.} The balance of our paper is arranged as follows. Section~\ref{sec: spatial modeling} develops a conjugate Bayesian multivariate spatial regression framework using Matrix-Normal and Inverse-Wishart prior distributions. Section~\ref{subsec: conj_models} develops two classes of conjugate multivariate models, the response models and the latent models, and shows how they provide posterior distributions in closed forms.
Subsequently, in Section~\ref{subsec: scalable_conj_models} we develop scalable versions of these models using the Nearest Neighbor Gaussian process (NNGP), and a cross-validation algorithm to fix certain hyperparameters required for closed-form posteriors is presented in Section~\ref{subsec: CV_conj_NNGP}. Section~\ref{sec: simulation} presents simulation experiments, while Section~\ref{sec: real_data_analy} analyzes a massive Normalized Difference Vegetation Index data set with a few million locations. Finally, Section~\ref{sec: Conclusions} concludes the manuscript with some discussion. \section{Bayesian Multivariate Geostatistical Modeling}\label{sec: spatial modeling} \subsection{Conjugate Multivariate Spatial Models}\label{subsec: conj_models} \paragraph{Conjugate Multivariate Response Model}\label{sec: conj_resp} Let $\mathbf{y}(\mathbf{s}) = (y_1(\mathbf{s}), \ldots, y_q(\mathbf{s}))^\top \in \mathbb{R}^q$ be a $q\times 1$ vector of outcomes at location $\mathbf{s} \in \mathcal{D} \subset \mathbb{R}^d$ and let $\mathbf{x}(\mathbf{s}) = (x_1(\mathbf{s}), \ldots, x_p(\mathbf{s}))^\top \in \mathbb{R}^p$ be a $p\times 1$ vector of explanatory variables observed at $\mathbf{s}$.
Conditional on these explanatory variables, $\mathbf{y}(\mathbf{s})$ is modeled as a multivariate Gaussian process, \begin{equation}\label{eq: spatial_GP_model_proportion} \mathbf{y}(\mathbf{s}) \sim \mbox{GP}(\boldsymbol{\beta}^\top \mathbf{x}(\mathbf{s}), \mathbf{C}(\cdot, \cdot))\;;\quad \mathbf{C}(\mathbf{s}, \mathbf{s}') = [\rho_{\psi}(\mathbf{s}, \mathbf{s}') + (\alpha^{-1} - 1)\delta_{\mathbf{s} = \mathbf{s}'}]\boldsymbol{\Sigma}\;, \end{equation} where the mean of $\mathbf{y}(\mathbf{s})$ is $\boldsymbol{\beta}^{\top}\mathbf{x}(\mathbf{s})$, $\boldsymbol{\beta}$ is a $p \times q$ matrix of regression coefficients and $\mathbf{C}(\mathbf{s},\mathbf{s}') = \{\mbox{cov}\{y_i(\mathbf{s}),y_j(\mathbf{s}')\}\}$ is a $q\times q$ cross-covariance matrix \citep[][]{genton2015cross} whose $(i,j)$-th element is the covariance between $y_i(\mathbf{s})$ and $y_j(\mathbf{s}')$. The cross-covariance matrix is defined for each pair of locations and is further specified as a multiple of a nonspatial positive definite matrix $\boldsymbol{\Sigma}$. The multiplication factor is a function of the two locations and is composed of two components: a spatial correlation function, $\rho_{\psi}(\mathbf{s}, \mathbf{s}')$, which introduces spatial dependence between the outcomes through hyperparameters $\psi$, and a micro-scale adjustment $(1/\alpha - 1)\delta_{\mathbf{s} = \mathbf{s}'}$, where $\delta_{\mathbf{s}=\mathbf{s}'} = 1$ if $\mathbf{s}=\mathbf{s}'$ and $\delta_{\mathbf{s}=\mathbf{s}'}=0$ if $\mathbf{s}\neq\mathbf{s}'$, and $\alpha\in (0,1]$ is a scalar parameter representing the overall strength of the spatial variability as a proportion of the total variation. The covariance among the elements of $\mathbf{y}(\mathbf{s})$ within a location $\mathbf{s}$ is given by the elements of $\mathbf{C}(\mathbf{s},\mathbf{s}) = (1/\alpha)\boldsymbol{\Sigma}$. 
Thus, $\boldsymbol{\Sigma}$ is the within-location (nonspatial) dependence among the outcomes adjusted by a scale of $1/\alpha$ to accommodate additional variation at local scales. The interpretation of $\alpha$ is analogous to the ratio of the ``partial sill'' to the ``sill'' in classical geostatistics. For example, in the special case when $\boldsymbol{\Sigma} = \sigma^2\mathbf{I}_q$, $\mbox{cov}\{y_i(\mathbf{s}),y_i(\mathbf{s}')\} = \sigma^2\rho(\mathbf{s},\mathbf{s}') + \sigma^2(1/\alpha - 1)\delta_{\mathbf{s}=\mathbf{s}'}$, which shows that $\sigma^2(1/\alpha-1) = \tau^2$ is the variance of the micro-scale process (the ``nugget''), so that $\alpha = \sigma^2/(\sigma^2 + \tau^2)$ is the ratio of the spatial variance (partial sill) to the total variance (sill). A similar interpretation for $\alpha$ results in the univariate setting with $q=1$. Let ${\cal S} = \{\mathbf{s}_1, \ldots, \mathbf{s}_n\} \subset \mathcal{D}$ be a set of $n$ locations yielding observations on $\mathbf{y}(\mathbf{s})$. Then $\mathbf{Y} = \mathbf{y}(\mathcal{S}) = [\mathbf{y}(\mathbf{s}_1): \cdots : \mathbf{y}(\mathbf{s}_n)]^\top$ is $n \times q$ and $\mathbf{X} = \mathbf{x}(\mathcal{S}) = [\mathbf{x}(\mathbf{s}_1) : \cdots : \mathbf{x}(\mathbf{s}_n)]^\top$ is the corresponding $n\times p$ matrix of explanatory variables observed over ${\cal S}$. We will assume that $\mathbf{X}$ has full column rank ($=p < n$). The likelihood emerging from (\ref{eq: spatial_GP_model_proportion}) is $\mathbf{Y} \,|\, \boldsymbol{\beta}, \boldsymbol{\Sigma} \sim \mbox{MN}_{n, q}(\mathbf{X}\boldsymbol{\beta}, \boldsymbol{\mathcal{K}}, \boldsymbol{\Sigma})$, where $\mbox{MN}$ denotes the Matrix-Normal distribution, $\boldsymbol{\mathcal{K}} = \boldsymbol{\rho}_{\psi} + (\alpha^{-1} - 1)\mathbf{I}_n$ {with $\mathbf{I}_n$ the $n \times n$ identity matrix and $\boldsymbol{\rho}_{\psi} = \{\rho_{\psi}(\mathbf{s}_i,\mathbf{s}_j)\}$ the $n \times n$ spatial correlation matrix.
A random matrix $\mathbf{Z}_{n \times p}$ that follows a Matrix-Normal distribution $\mbox{MN}_{n,p}(\mathbf{M}, \mathbf{U}, \mathbf{V})$ has a probability density function \begin{equation}\label{eq: MN_density} p(\mathbf{Z}\mid\mathbf{M}, \mathbf{U}, \mathbf{V}) = \frac{\exp\left( -\frac{1}{2} \, \mbox{tr}\left[ \mathbf{V}^{-1} (\mathbf{Z} - \mathbf{M})^{T} \mathbf{U}^{-1} (\mathbf{Z} - \mathbf{M}) \right] \right)}{(2\pi)^{np/2} |\mathbf{V}|^{n/2} |\mathbf{U}|^{p/2}}\; , \end{equation} where $\mbox{tr}$ denotes trace, $\mathbf{M}$ is the mean matrix, $\mathbf{U}$ is the first scale matrix with dimension $n \times n$ and $\mathbf{V}$ is the second scale matrix with dimension $p \times p$ \citep{ding2014dimension}. The vectorization of the $n \times p$ random matrix $\mathbf{Z} = [\mathbf{z}_1, \ldots, \mathbf{z}_p]$, denoted $\mbox{vec}(\mathbf{Z}) = [\mathbf{z}_1^\top, \ldots, \mathbf{z}_p^\top]^\top$, follows a Gaussian distribution $\mbox{vec}(\mathbf{Z}) \sim \mbox{N}_{np}(\mbox{vec}(\mathbf{M}), \mathbf{V} \otimes \mathbf{U})$, where $\otimes$ denotes the Kronecker product.} A conjugate Bayesian model is obtained by a Matrix-Normal-Inverse-Wishart (MNIW) prior on $\{\boldsymbol{\beta}, \boldsymbol{\Sigma}\}$, which we denote as \begin{equation}\label{eq: MNIW prior} \mbox{MNIW}(\boldsymbol{\beta},\boldsymbol{\Sigma}\,|\, \boldsymbol{\mu}_{\beta}, \mathbf{V}_{r}, \boldsymbol{\Psi}, \nu) = \mbox{IW}(\boldsymbol{\Sigma}\,|\, \boldsymbol{\Psi}, \nu)\times \mbox{MN}_{p, q}(\boldsymbol{\beta}\,|\, \boldsymbol{\mu}_{\boldsymbol{\beta}} , \mathbf{V}_r, \boldsymbol{\Sigma})\; , \end{equation} where $\mbox{IW}(\boldsymbol{\Sigma}\,|\, \cdot,\cdot)$ is the inverted-Wishart distribution.
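The vectorization identity above is easy to verify by simulation. The sketch below is illustrative (the dimensions and scale matrices are assumed values, not from the paper): it draws $\mathbf{Z} = \mathbf{M} + \mathbf{L}_U \mathbf{E}\mathbf{L}_V^\top$ with $\mathbf{E}$ filled with iid standard normals, so that $\mbox{vec}(\mathbf{Z}) \sim \mbox{N}(\mbox{vec}(\mathbf{M}), \mathbf{V}\otimes\mathbf{U})$, and checks the empirical covariance.

```python
# Sketch: sampling MN_{n,p}(M, U, V) via Z = M + L_U E L_V^T, where
# U = L_U L_U^T, V = L_V L_V^T and E has iid N(0,1) entries; then
# vec(Z) (column stacking) has covariance V kron U.  All numbers below
# are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, p, N = 3, 2, 200_000
M = np.zeros((n, p))
U = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])
V = np.array([[1.0, 0.4],
              [0.4, 2.0]])
L_U, L_V = np.linalg.cholesky(U), np.linalg.cholesky(V)

E = rng.standard_normal((N, n, p))
Z = M + L_U @ E @ L_V.T                        # batched matrix-normal draws
vecZ = Z.transpose(0, 2, 1).reshape(N, n * p)  # stack columns of each draw
emp = np.cov(vecZ, rowvar=False)
print(np.max(np.abs(emp - np.kron(V, U))))     # close to 0
```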
The MNIW family is a conjugate prior with respect to the likelihood (\ref{eq: MN_density}) and, for any fixed values of $\alpha$, $\psi$ and the hyperparameters in the prior density, we obtain the posterior density \begin{equation}\label{eq: MNIW_posterior} \begin{split} p(\boldsymbol{\beta},\boldsymbol{\Sigma}\,|\, \mathbf{Y}) &\propto \mbox{MNIW}(\boldsymbol{\beta},\boldsymbol{\Sigma}\,|\, \boldsymbol{\mu}_{\beta}, \mathbf{V}_{r}, \boldsymbol{\Psi}, \nu) \times \mbox{MN}_{n,q}(\mathbf{Y}\,|\,\mathbf{X}\boldsymbol{\beta}, \boldsymbol{\mathcal{K}}, \boldsymbol{\Sigma}) \propto \mbox{MNIW}(\boldsymbol{\mu}^\ast, \mathbf{V}^\ast, \mathbf{\Psi}^\ast, \nu^\ast)\;, \end{split} \end{equation} where \begin{equation}\label{eq: collapsed_spatial_pars1} \begin{aligned} \mathbf{V}^\ast &= (\mathbf{X}^\top \boldsymbol{\mathcal{K}}^{-1} \mathbf{X} + \mathbf{V}_r^{-1})^{-1}\; , \; \boldsymbol{\mu}^\ast = \mathbf{V}^\ast (\mathbf{X}^\top \boldsymbol{\mathcal{K}}^{-1}\mathbf{Y} + \mathbf{V}_r^{-1}\boldsymbol{\mu}_{\boldsymbol{\beta}})\; ,\\ \boldsymbol{\Psi}^\ast &= \boldsymbol{\Psi} + \mathbf{Y}^\top \boldsymbol{\mathcal{K}}^{-1}\mathbf{Y} +\boldsymbol{\mu}_{\boldsymbol{\beta}}^\top \mathbf{V}_r^{-1} \boldsymbol{\mu}_{\boldsymbol{\beta}} - \boldsymbol{\mu}^{\ast\top} \mathbf{V}^{\ast-1} \boldsymbol{\mu}^\ast\; , \mbox{ and }\nu^\ast = \nu + n\;. \end{aligned} \end{equation} Direct sampling from the $\mbox{MNIW}$ posterior distribution in (\ref{eq: MNIW_posterior}) is achieved by first sampling $\boldsymbol{\Sigma} \sim \mbox{IW}(\mathbf{\Psi}^\ast, \nu^\ast)$ and then sampling one draw of $\boldsymbol{\beta} \sim \mbox{MN}_{p,q}(\boldsymbol{\mu}^{\ast}, \mathbf{V}^{\ast}, \boldsymbol{\Sigma})$ for each draw of $\boldsymbol{\Sigma}$. The resulting pairs $\{\boldsymbol{\beta},\boldsymbol{\Sigma}\}$ will be samples from (\ref{eq: MNIW_posterior}). Since this scheme draws directly from the posterior distribution, the sample is exact and does not require burn-in or convergence. 
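The two sampling steps just described can be sketched in a few lines. The snippet below is illustrative only: the exponential correlation function, prior hyperparameters, and synthetic data are stand-in assumptions, not the paper's implementation. It forms the posterior parameters in \eqref{eq: collapsed_spatial_pars1} for fixed $(\alpha,\psi)$ and draws one exact sample of $\{\boldsymbol{\Sigma},\boldsymbol{\beta}\}$.

```python
# Sketch (illustrative; correlation function, priors, and data are
# stand-in assumptions): form the MNIW posterior parameters and draw
# Sigma ~ IW(Psi*, nu*), then beta | Sigma ~ MN(mu*, V*, Sigma).
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(1)
n, p, q, alpha = 50, 2, 3, 0.8
coords = rng.uniform(size=(n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
rho = np.exp(-3.0 * dist)                         # rho_psi (exponential)
K = rho + (1.0 / alpha - 1.0) * np.eye(n)         # K = rho_psi + (1/alpha - 1) I
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
Y = rng.standard_normal((n, q))                   # placeholder responses

mu_b, V_r = np.zeros((p, q)), 100.0 * np.eye(p)   # assumed prior values
Psi, nu = np.eye(q), q + 2

Kinv, Vr_inv = np.linalg.inv(K), np.linalg.inv(V_r)
prec = X.T @ Kinv @ X + Vr_inv                    # V*^{-1}
V_star = np.linalg.inv(prec)
mu_star = V_star @ (X.T @ Kinv @ Y + Vr_inv @ mu_b)
Psi_star = (Psi + Y.T @ Kinv @ Y + mu_b.T @ Vr_inv @ mu_b
            - mu_star.T @ prec @ mu_star)
Psi_star = 0.5 * (Psi_star + Psi_star.T)          # symmetrize for stability
nu_star = nu + n

Sigma = invwishart.rvs(df=nu_star, scale=Psi_star, random_state=rng)
L_v, L_s = np.linalg.cholesky(V_star), np.linalg.cholesky(Sigma)
beta = mu_star + L_v @ rng.standard_normal((p, q)) @ L_s.T
print(beta.shape, Sigma.shape)                    # (2, 3) (3, 3)
```

Repeating the last three lines yields iid draws from the exact posterior, with no burn-in or convergence diagnostics needed.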
Turning to predictions, let ${\cal U} = \{\mathbf{u}_1, \ldots, \mathbf{u}_{n'}\}$ be a finite set of locations where we intend to predict or impute the value of $\mathbf{y}(\mathbf{s})$ based upon an observed $n'\times p$ design matrix $\mathbf{X}_{\cal U} = [\mathbf{x}(\mathbf{u}_1) : \cdots : \mathbf{x}(\mathbf{u}_{n'})]^\top$ for ${\cal U}$. If $\mathbf{Y}_{\cal U} = [\mathbf{y}(\mathbf{u}_1): \cdots : \mathbf{y}(\mathbf{u}_{n'})]^\top$ is the $n'\times q$ matrix of predictive random variables, then the conditional predictive distribution is \begin{equation}\label{eq: posterior_predictive_conditional} \begin{aligned} p(\mathbf{Y}_{\cal U} \,|\, \mathbf{Y}, \boldsymbol{\beta}, \boldsymbol{\Sigma}) =\mbox{MN}_{n',q}(\mathbf{Y}_{\cal U}\,|\, \mathbf{X}_{\cal U} \boldsymbol{\beta} &+ \boldsymbol{\rho}_{\psi}({\cal U}, \mathcal{S})\boldsymbol{\mathcal{K}}^{-1}[\mathbf{Y} - \mathbf{X}\boldsymbol{\beta}], \\ & \quad \boldsymbol{\rho}_{\psi}({\cal U}, {\cal U}) + (\alpha^{-1} - 1)\mathbf{I}_{n'} - \boldsymbol{\rho}_{\psi}({\cal U}, \mathcal{S})\boldsymbol{\mathcal{K}}^{-1}\boldsymbol{\rho}_{\psi}(\mathcal{S}, {\cal U}),\; \boldsymbol{\Sigma}) \;, \end{aligned} \end{equation} where $\boldsymbol{\rho}_{\psi}({\cal U}, \mathcal{S}) = \{\rho_{\psi}(\mathbf{u}_i,\mathbf{s}_j)\}$ is $n'\times n$ and $\boldsymbol{\rho}_{\psi}(\mathcal{S}, {\cal U}) = \{\rho_{\psi}(\mathbf{s}_i,\mathbf{u}_j)\} = \boldsymbol{\rho}_{\psi}({\cal U}, \mathcal{S})^{\top}$. 
Predictions can be directly carried out in posterior predictive fashion, where we sample from \begin{multline}\label{eq: posterior_predictive} p(\mathbf{Y}_{\cal U} \,|\, \mathbf{Y}) = \int \mbox{MN}_{n',q}(\mathbf{X}_{\cal U} \boldsymbol{\beta} + \boldsymbol{\rho}_{\psi}({\cal U}, \mathcal{S})\boldsymbol{\mathcal{K}}^{-1}[\mathbf{Y} - \mathbf{X}\boldsymbol{\beta}],\\ \boldsymbol{\rho}_{\psi}({\cal U}, {\cal U}) + (\alpha^{-1} - 1)\mathbf{I}_{n'} - \boldsymbol{\rho}_{\psi}({\cal U}, \mathcal{S})\boldsymbol{\mathcal{K}}^{-1}\boldsymbol{\rho}_{\psi}(\mathcal{S}, {\cal U}),\; \boldsymbol{\Sigma}) \times \mbox{MNIW}(\boldsymbol{\mu}^\ast, \mathbf{V}^\ast, \mathbf{\Psi}^\ast, \nu^\ast)\,d\boldsymbol{\beta}\,d\boldsymbol{\Sigma}\;. \end{multline} Sampling from (\ref{eq: posterior_predictive}) is achieved by drawing one $\mathbf{Y}_{{\cal U}}$ from (\ref{eq: posterior_predictive_conditional}) for each posterior draw of $\{\boldsymbol{\beta},\boldsymbol{\Sigma}\}$. \paragraph{Conjugate Multivariate Latent Model}\label{subsec: con_latent} We now discuss a conjugate Bayesian model for a latent process. Consider the spatial regression model \begin{equation}\label{eq: conj_latent_model} \mathbf{y}(\mathbf{s}) = \boldsymbol{\beta}^\top\mathbf{x}(\mathbf{s}) + \boldsymbol{\omega}(\mathbf{s}) + \boldsymbol{\epsilon}(\mathbf{s})\;, \; \mathbf{s} \in {\cal D}\;, \end{equation} where $\boldsymbol{\omega}(\mathbf{s}) \sim \mbox{GP}(\mathbf{0}_{q \times 1}, \rho_{\psi}(\cdot, \cdot)\boldsymbol{\Sigma})$ is a $q\times 1$ multivariate latent process with cross-covariance matrix $\rho_{\psi}(\mathbf{s},\mathbf{s}')\boldsymbol{\Sigma}$ and $\boldsymbol{\epsilon}(\mathbf{s}) \stackrel{iid}{\sim}\mbox{N}(\mathbf{0}_{q \times 1}, (\alpha^{-1} - 1)\boldsymbol{\Sigma})$ captures micro-scale variation. The ``proportionality'' assumption for the variance of $\boldsymbol{\epsilon}(\mathbf{s})$ will allow us to derive analytic posterior distributions using conjugate priors. 
The latent process $\boldsymbol{\omega}(\mathbf{s})$ captures the underlying spatial pattern and holds specific interest in many applications. Let $\boldsymbol{\omega} = \boldsymbol{\omega}(\mathcal{S}) = [\boldsymbol{\omega}(\mathbf{s}_1): \cdots : \boldsymbol{\omega}(\mathbf{s}_n)]^\top$ {be the latent process on $\mathcal{S}$; the parameter set of the latent model \eqref{eq: conj_latent_model} then becomes $\{\boldsymbol{\beta}, \boldsymbol{\omega}, \boldsymbol{\Sigma}\}$}. Letting $\boldsymbol{\gamma}^{\top} = [\boldsymbol{\beta}^{\top}, \boldsymbol{\omega}^{\top}]$ be {$q\times (p+n)$}, we assume that $\{\boldsymbol{\gamma}, \boldsymbol{\Sigma}\}\sim \mbox{MNIW}(\boldsymbol{\mu}_{\boldsymbol{\gamma}}, \mathbf{V}_{\boldsymbol{\gamma}}, \boldsymbol{\Psi}, \nu)$, where $\boldsymbol{\mu}_{\boldsymbol{\gamma}}^\top = [\boldsymbol{\mu}_{\boldsymbol{\beta}}^\top, \mathbf{0}_{q \times n}]$ and $\mathbf{V}_{\boldsymbol{\gamma}} = \mbox{blockdiag}\{\mathbf{V}_r, \boldsymbol{\rho}_{\psi}(\mathcal{S}, \mathcal{S})\}$.
The posterior density is \begin{equation}\label{eq: augmented_conj_post_v1} \begin{aligned} p(\boldsymbol{\gamma}, \boldsymbol{\Sigma} \,|\, \mathbf{Y}) &\propto \mbox{MNIW}(\boldsymbol{\gamma}, \boldsymbol{\Sigma}\,|\,\boldsymbol{\mu}_{\boldsymbol{\gamma}}, \mathbf{V}_{\boldsymbol{\gamma}}, \boldsymbol{\Psi}, \nu)\times \mbox{MN}_{n,q}(\mathbf{Y}_{n \times q} \,|\, {[\mathbf{X} : \mathbf{I}_n]}\boldsymbol{\gamma}, {(\alpha^{-1} - 1)}\mathbf{I}_n, \boldsymbol{\Sigma}) \\ &\propto \mbox{MNIW}(\boldsymbol{\gamma}, \boldsymbol{\Sigma}\,|\, \boldsymbol{\mu}_{\boldsymbol{\gamma}}^\ast, \mathbf{V}^\ast, \boldsymbol{\Psi}^\ast, \nu^\ast)\;, \end{aligned} \end{equation} where \begin{equation}\label{eq: augmented_conj_post_v1_pars} \begin{aligned} \mathbf{V}^{\ast} &= \left[\begin{array}{cc} \frac{\alpha}{1 - \alpha} \mathbf{X}^\top \mathbf{X} + \mathbf{V}_r^{-1} & \frac{\alpha}{1 - \alpha}\mathbf{X}^\top \\ \frac{\alpha}{1 - \alpha}\mathbf{X} & \boldsymbol{\rho}_{\psi}^{-1}(\mathcal{S}, \mathcal{S}) + \frac{\alpha}{1 - \alpha}\mathbf{I}_n \end{array} \right]^{-1}\;,\quad \boldsymbol{\mu}_{\boldsymbol{\gamma}}^\ast = \mathbf{V}^{\ast} \left[\begin{array}{c} \frac{\alpha}{1 - \alpha}\mathbf{X}^\top \mathbf{Y} + \mathbf{V}_r^{-1}\boldsymbol{\mu}_{\boldsymbol{\beta}} \\ \frac{\alpha}{1 - \alpha} \mathbf{Y} \end{array} \right],\\ \boldsymbol{\Psi}^\ast &= \boldsymbol{\Psi} + \frac{\alpha}{1 - \alpha} \mathbf{Y}^\top\mathbf{Y} + \boldsymbol{\mu}_{\boldsymbol{\beta}}^\top \mathbf{V}_r^{-1}\boldsymbol{\mu}_{\boldsymbol{\beta}} - \boldsymbol{\mu}_{\boldsymbol{\gamma}}^{\ast\top}\mathbf{V}^{\ast-1} \boldsymbol{\mu}_{\boldsymbol{\gamma}}^\ast \; \mbox{ and } \nu^\ast = \nu + n\; . 
\end{aligned} \end{equation} For prediction on a set of locations ${\cal U}$, we can estimate the unobserved latent process {$\boldsymbol{\omega}_{\cal U} = \boldsymbol{\omega}({\cal U}) = [\boldsymbol{\omega}(\mathbf{u}_1): \cdots: \boldsymbol{\omega}(\mathbf{u}_{n'})]^\top$} and the response $\mathbf{Y}_{\cal U}$ through \begin{multline}\label{eq: posterior_predict_augmented} p(\mathbf{Y}_{\cal U}, \boldsymbol{\omega}_{\cal U} \,|\, \mathbf{Y}) = \int \mbox{MN}_{n',q}(\mathbf{Y}_{\cal U}\,|\, \mathbf{X}_{\cal U} \boldsymbol{\beta} + \boldsymbol{\omega}_{\cal U},\; (\alpha^{-1} -1)\mathbf{I}_{n'},\; \boldsymbol{\Sigma}) \times \mbox{MN}_{n',q}(\boldsymbol{\omega}_{\cal U}\,|\, \mathbf{M}_{{\cal U}}\boldsymbol{\omega}, \mathbf{V}_{\omega_{{\cal U}}},\boldsymbol{\Sigma})\\ \times \mbox{MNIW}(\boldsymbol{\gamma}, \boldsymbol{\Sigma}\,|\, \boldsymbol{\mu}_{\boldsymbol{\gamma}}^\ast, \mathbf{V}^\ast, \boldsymbol{\Psi}^\ast, \nu^\ast)\; d\boldsymbol{\gamma} d\boldsymbol{\Sigma}\; , \end{multline} where $\mathbf{M}_{{\cal U}} = \boldsymbol{\rho}_{\psi}({\cal U}, \mathcal{S})\boldsymbol{\rho}_{\psi}^{-1}(\mathcal{S},\mathcal{S})$ and $\mathbf{V}_{\omega_{{\cal U}}} = \boldsymbol{\rho}_{\psi}({\cal U}, {\cal U}) - \boldsymbol{\rho}_{\psi}({\cal U}, \mathcal{S})\boldsymbol{\rho}_{\psi}^{-1}(\mathcal{S},\mathcal{S})\boldsymbol{\rho}_{\psi}(\mathcal{S},{\cal U})$. Posterior predictive inference proceeds by sampling one draw of $\boldsymbol{\omega}_{\cal U}\sim \mbox{MN}_{n',q}(\boldsymbol{\omega}_{\cal U}\,|\, \mathbf{M}_{{\cal U}}\boldsymbol{\omega}, \mathbf{V}_{\omega_{{\cal U}}},\boldsymbol{\Sigma})$ for each posterior draw of $\{\boldsymbol{\gamma},\boldsymbol{\Sigma}\}$ and then one draw of $\mathbf{Y}_{\cal U} \sim \mbox{MN}(\mathbf{X}_{\cal U} \boldsymbol{\beta} + \boldsymbol{\omega}_{\cal U},(\alpha^{-1} -1)\mathbf{I}_{n'},\; \boldsymbol{\Sigma})$ for each drawn $\{\boldsymbol{\omega}_{\cal U},\boldsymbol{\gamma},\boldsymbol{\Sigma}\}$. 
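A useful numerical sanity check connects the latent and response models: marginalizing $\boldsymbol{\omega}$ out of \eqref{eq: conj_latent_model} recovers the response model with $\boldsymbol{\mathcal{K}} = \boldsymbol{\rho}_{\psi} + (\alpha^{-1}-1)\mathbf{I}_n$, so the $\boldsymbol{\beta}$-block of $\boldsymbol{\mu}_{\boldsymbol{\gamma}}^\ast$ in \eqref{eq: augmented_conj_post_v1_pars} must coincide with $\boldsymbol{\mu}^\ast$ in \eqref{eq: collapsed_spatial_pars1}. The sketch below (assumed correlation function, priors, and synthetic data, not the paper's code) confirms this.

```python
# Sanity check (illustrative): the beta-block of the augmented latent
# posterior mean equals the response-model posterior mean, because
# marginalizing omega gives K = rho_psi + (1/alpha - 1) I.
import numpy as np

rng = np.random.default_rng(3)
n, p, q, alpha = 40, 2, 2, 0.7
coords = rng.uniform(size=(n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
rho = np.exp(-2.0 * dist)
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
Y = rng.standard_normal((n, q))
mu_b, V_r = np.zeros((p, q)), 10.0 * np.eye(p)
c = alpha / (1.0 - alpha)

# augmented (latent) model: gamma = [beta; omega]
Vr_inv, rho_inv = np.linalg.inv(V_r), np.linalg.inv(rho)
Vstar_inv = np.block([[c * X.T @ X + Vr_inv, c * X.T],
                      [c * X, rho_inv + c * np.eye(n)]])
rhs = np.vstack([c * X.T @ Y + Vr_inv @ mu_b, c * Y])
mu_gamma = np.linalg.solve(Vstar_inv, rhs)        # (p + n) x q

# marginalized (response) model
Kinv = np.linalg.inv(rho + (1.0 / alpha - 1.0) * np.eye(n))
mu_beta = np.linalg.solve(X.T @ Kinv @ X + Vr_inv,
                          X.T @ Kinv @ Y + Vr_inv @ mu_b)

print(np.max(np.abs(mu_gamma[:p] - mu_beta)))     # round-off level
```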
\subsection{Scalable Conjugate Bayesian Multivariate Models}\label{subsec: scalable_conj_models} \paragraph{Conjugate multivariate response NNGP model} A conjugate Bayesian modeling framework is appealing for massive spatial data sets because the posterior distribution of the parameters is available in closed form, circumventing the need for MCMC algorithms. The key computational bottleneck for Bayesian estimation of spatial process models concerns the computation and storage involving $\boldsymbol{\mathcal{K}}^{-1}$ in \eqref{eq: collapsed_spatial_pars1}. These matrix computations require $\mathcal{O}(n^3)$ flops and $\mathcal{O}(n^2)$ storage when $\boldsymbol{\mathcal{K}}$ is $n \times n$ and dense. While conjugate models reduce computational expenses by enabling direct sampling from closed-form posterior and posterior predictive distributions, the computation and storage of $\boldsymbol{\mathcal{K}}$ is still substantial for massive datasets. One approach to circumvent the overwhelming computations is to develop a sparse alternative for $\boldsymbol{\mathcal{K}}^{-1}$ in \eqref{eq: collapsed_spatial_pars1}. One such approximation that has generated substantial recent attention in the spatial literature is due to \cite{ve88}. Consider the spatial covariance matrix $\boldsymbol{\mathcal{K}} = \boldsymbol{\rho}_{\psi} + (\alpha^{-1} - 1)\mathbf{I}_n$ in (\ref{eq: MN_density}). This is a dense $n\times n$ matrix with apparently no exploitable structure.
Instead, we specify a sparse Cholesky representation \begin{equation}\label{eq: NNGP_approx_invK} \boldsymbol{\mathcal{K}}^{-1} = (\mathbf{I} - \mathbf{A}_{\boldsymbol{\mathcal{K}}})^\top \mathbf{D}_{\boldsymbol{\mathcal{K}}}^{-1} (\mathbf{I} - \mathbf{A}_{\boldsymbol{\mathcal{K}}})\; , \end{equation} where $\mathbf{D}_{\boldsymbol{\mathcal{K}}}$ is a diagonal matrix and $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ is a sparse lower-triangular matrix with $0$ along the diagonal and with no more than a fixed small number $m$ of nonzero entries in each row of $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$. The diagonal entries of $\mathbf{D}_{\boldsymbol{\mathcal{K}}}$ and the nonzero entries of $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ are obtained from the conditional variances and conditional expectations for a Gaussian process with covariance function $\rho_{\psi}(\mathbf{s},\mathbf{s}')$. {Vecchia's approximation begins with a fixed ordering of the locations in $\mathcal{S}$ and approximates the joint density of $\mathbf{Y}$ as \begin{equation}\label{eq: vecchia_approx} p(\mathbf{Y}) = p(\mathbf{y}(\mathbf{s}_1))\prod_{i=2}^n p(\mathbf{y}(\mathbf{s}_i)\,|\, \mathbf{y}(H(\mathbf{s}_i))) \approx p(\mathbf{y}(\mathbf{s}_1))\prod_{i=2}^n p(\mathbf{y}(\mathbf{s}_i)\,|\, \mathbf{y}(N_m(\mathbf{s}_i)))\;, \end{equation} where $H(\mathbf{s}_i) = \{\mathbf{s}_1,\mathbf{s}_2,\ldots,\mathbf{s}_{i-1}\}$, $N_m(\mathbf{s}_i) = H(\mathbf{s}_i)$ for $i=2,\ldots,m$ and $N_m(\mathbf{s}_i)\subset H(\mathbf{s}_i)$ for $i=m+1,\ldots,n$ comprises at most $m$ neighbors of $\mathbf{s}_i$ among locations $\mathbf{s}_j\in \mathcal{S}$ such that $j<i$. Also, $\mathbf{y}(\mathcal{A})$ for any set $\mathcal{A} \subseteq \mathcal{D}$ is the collection of $\mathbf{y}(\mathbf{s}_i)$'s for $\mathbf{s}_i\in \mathcal{A}$. Practical advice has been provided in \cite{stein04}, and simple orderings by the $x$-coordinate, the $y$-coordinate, or the sum $x+y$ are all practicable solutions for massive data.
A more formal algorithm based upon theoretical insights on the effect of ordering on inference has been provided by \cite{guinness2018permutation}. Here we focus on constructing (\ref{eq: NNGP_approx_invK}) using a fixed ordering of the locations in $\mathcal{S}$.} The matrix $\boldsymbol{\mathcal{K}}^{-1}$ in (\ref{eq: NNGP_approx_invK}) can be derived from (\ref{eq: vecchia_approx}) using standard results in multivariate Gaussian distribution theory \citep[see, e.g.][]{banerjee2017high}. The $(i,j)$-th entry of $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ is $0$ whenever $\mathbf{s}_j \notin N_m(\mathbf{s}_i)$. This means that each row of $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ contains at most $m$ nonzero entries. Suppose $i_1 < i_2 < \ldots < i_m$ are the $m$ column indices that contain nonzero entries in the $i$-th row of $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$. {Let $\mathbf{A}_{\boldsymbol{\mathcal{K}}} = [\mathbf{a}_1:\cdots:\mathbf{a}_n]^\top$ and $\mathbf{D}_{\boldsymbol{\mathcal{K}}} = \mbox{diag}(d_1, d_2,\ldots, d_n)$, where $d_1 = \alpha^{-1}$. The first row of $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ has all elements equal to $0$ and for $i = 2, \ldots, n$ we obtain \begin{equation}\label{eq: NNGP_a_d_construct} \begin{split} [\{\mathbf{a}_i\}_{i_1}, \ldots, \{\mathbf{a}_i\}_{i_m}] &= \boldsymbol{\rho}_{\psi}(\mathbf{s}_i, N_m(\mathbf{s}_i))\{\boldsymbol{\rho}_{\psi}(N_m(\mathbf{s}_i), N_m(\mathbf{s}_i)) + (\alpha^{-1} - 1) \mathbf{I}_{m}\}^{-1}\; ,\\ d_i &= \alpha^{-1} - [\{\mathbf{a}_i\}_{i_1}, \ldots, \{\mathbf{a}_i\}_{i_m}]\boldsymbol{\rho}_{\psi}(N_m(\mathbf{s}_i), \mathbf{s}_i)\; , \end{split} \end{equation} where $\{\cdot\}_{k}$ denotes the $k$-th element of a vector.} Equation~(\ref{eq: NNGP_a_d_construct}) completely specifies $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ and $\mathbf{D}_{\boldsymbol{\mathcal{K}}}$ and, hence, a sparse $\boldsymbol{\mathcal{K}}^{-1}$ in (\ref{eq: NNGP_approx_invK}).
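The construction can be sketched compactly. The code below is illustrative (an exponential correlation, ordering by the coordinate sum, and a brute-force neighbor search are all assumptions); a dense matrix is used only to compare the factorization against the exact inverse.

```python
# Sketch of eq. (NNGP_a_d_construct): build sparse A and diagonal D so
# that Kinv is approximated by (I - A)^T D^{-1} (I - A).  Illustrative
# assumptions: exponential correlation, ordering by x + y, brute force.
import numpy as np

rng = np.random.default_rng(2)
n, m, alpha = 200, 10, 0.8
coords = rng.uniform(size=(n, 2))
coords = coords[np.argsort(coords.sum(axis=1))]   # fixed ordering

def rho(a, b):                                    # exponential correlation
    return np.exp(-3.0 * np.linalg.norm(a[:, None] - b[None, :], axis=-1))

A = np.zeros((n, n))
D = np.full(n, 1.0 / alpha)                       # d_1 = 1/alpha
for i in range(1, n):
    # N_m(s_i): (at most) m nearest neighbors among s_1, ..., s_{i-1}
    nbrs = np.argsort(np.linalg.norm(coords[:i] - coords[i], axis=1))[:m]
    C_nn = rho(coords[nbrs], coords[nbrs]) + (1/alpha - 1) * np.eye(len(nbrs))
    c_i = rho(coords[i:i + 1], coords[nbrs]).ravel()
    a_i = np.linalg.solve(C_nn, c_i)              # nonzero entries of row i
    A[i, nbrs] = a_i
    D[i] = 1.0 / alpha - a_i @ c_i

Kinv_vecchia = (np.eye(n) - A).T @ np.diag(1.0 / D) @ (np.eye(n) - A)
K = rho(coords, coords) + (1/alpha - 1) * np.eye(n)
err = np.max(np.abs(np.linalg.inv(K) - Kinv_vecchia))
print(err)                                        # shrinks as m grows
```

In practice $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ is stored sparsely and the dense check is skipped; the loop over $i$ parallelizes trivially.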
The construction in (\ref{eq: NNGP_a_d_construct}) can be implemented in parallel and requires storage or computation of matrices of sizes no greater than $m\times m$, where $m \ll n$, and costs $\mathcal{O}(n)$ flops and storage. Based on Section~\ref{sec: conj_resp}, the posterior distribution $\boldsymbol{\beta}, \boldsymbol{\Sigma} \,|\, \mathbf{Y}$ follows $\mbox{MNIW}(\boldsymbol{\mu}^\ast, \mathbf{V}^\ast, \boldsymbol{\Psi}^\ast, \nu^\ast)$, where $\{\boldsymbol{\mu}^\ast, \mathbf{V}^\ast, \boldsymbol{\Psi}^\ast, \nu^\ast\}$ are given in \eqref{eq: collapsed_spatial_pars1}. With the sparse representation of $\boldsymbol{\mathcal{K}}^{-1}$ in \eqref{eq: NNGP_approx_invK}, the process of obtaining posterior inference for $\{\boldsymbol{\beta}, \boldsymbol{\Sigma}\}$ only involves steps with storage and computational requirements of $\mathcal{O}(n)$. {To obtain predictions at arbitrary (unobserved) locations ${\cal U} = \{\mathbf{u}_1, \ldots, \mathbf{u}_{n'}\}$, we follow \cite{datta16} and extend the approximation in (\ref{eq: vecchia_approx}) to the Nearest Neighbor Gaussian Process (NNGP). We} extend the definition of $N_m(\mathbf{s}_i)$'s to arbitrary locations in ${\cal U}$ by defining $N_m(\mathbf{u}_i)$ to be the set of $m$ nearest neighbors of $\mathbf{u}_i$ from $\mathcal{S}$. Furthermore, we assume that $\mathbf{y}(\mathbf{u})$ and $\mathbf{y}(\mathbf{u}')$ are conditionally independent of each other given {$\mathbf{Y} = \mathbf{y}(\mathcal{S})$} and the other model parameters. Thus, for any $\mathbf{u}_i\in {\cal U}$, we have \begin{equation} \mathbf{y}(\mathbf{u}_i)\,|\, \mathbf{Y}, \boldsymbol{\beta}, \boldsymbol{\Sigma} \sim \mbox{N}(\boldsymbol{\beta}^{\top}\mathbf{x}(\mathbf{u}_i) + {[\mathbf{Y} - \mathbf{X}\boldsymbol{\beta}]^\top\Tilde{\mathbf{a}}_i}, \, \Tilde{d}_i \boldsymbol{\Sigma}), \; i = 1, \ldots, n'\; , \end{equation} {where $\Tilde{\mathbf{a}}_i$ is an $n \times 1$ vector with $m$ non-zero elements.
If $N_m(\mathbf{u}_i) = \{\mathbf{s}_{i_k}\}_{k = 1}^m$, then} \begin{equation}\label{eq: NNGP_predict_collapsed_par} \begin{split} (\{\Tilde{\mathbf{a}}_i\}_{i_1}, &\ldots, \{\Tilde{\mathbf{a}}_i\}_{i_m}) = \boldsymbol{\rho}_{\psi}(\mathbf{u}_i, N_m(\mathbf{u}_i))\{\boldsymbol{\rho}_{\psi}(N_m(\mathbf{u}_i), N_m(\mathbf{u}_i)) + (\alpha^{-1} - 1) \mathbf{I}_{m}\}^{-1}\; ,\\ \Tilde{d}_i &= \alpha^{-1} - \boldsymbol{\rho}_{\psi}(\mathbf{u}_i, N_m(\mathbf{u}_i))[\boldsymbol{\rho}_{\psi}(N_m(\mathbf{u}_i), N_m(\mathbf{u}_i)) + (\alpha^{-1} - 1) \mathbf{I}_{m}]^{-1}\boldsymbol{\rho}_{\psi}(N_m(\mathbf{u}_i), \mathbf{u}_i)\; . \end{split} \end{equation} If $\Tilde{\mathbf{A}} = [\Tilde{\mathbf{a}}_1: \cdots : \Tilde{\mathbf{a}}_{n'}]^\top$ and $\Tilde{\mathbf{D}} = \mbox{diag}(\{\Tilde{d}_i\}_{i = 1}^{n'})$, then the conditional predictive density for $\mathbf{Y}_{\cal U}$ is \begin{equation}\label{eq: resp_NNGP_YU} \mathbf{Y}_{\cal U} \,|\, \mathbf{Y}, \boldsymbol{\beta}, \boldsymbol{\Sigma} \sim \mbox{MN}(\mathbf{X}_{\cal U}\boldsymbol{\beta} + \Tilde{\mathbf{A}}[\mathbf{Y} - \mathbf{X}\boldsymbol{\beta}], \Tilde{\mathbf{D}}, \boldsymbol{\Sigma}) \; . \end{equation} Since the posterior distribution of $\{\boldsymbol{\beta},\boldsymbol{\Sigma}\}$ and the conditional predictive distribution of $\mathbf{Y}_{\cal U}$ are both available in closed form, direct sampling from the posterior predictive distribution is straightforward. A detailed algorithm for obtaining the posterior inference on the parameter set $\{\boldsymbol{\beta}, \boldsymbol{\Sigma}\}$ and the posterior prediction over a new set of locations ${\cal U}$ is given below.
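As a complement to Algorithm~1 below, the predictive quantities for a single new location can be sketched as follows (the correlation function and locations are illustrative assumptions; composition with posterior draws of $\{\boldsymbol{\beta},\boldsymbol{\Sigma}\}$ is indicated in the final comment).

```python
# Sketch of eq. (NNGP_predict_collapsed_par) for one new location u:
# find the m nearest observed neighbors, then compute tilde_a, tilde_d.
# The correlation function and locations are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, m, alpha = 100, 10, 0.8
coords = rng.uniform(size=(n, 2))                 # observed locations S
u = np.array([0.5, 0.5])                          # a new location in U

def rho(a, b):
    return np.exp(-3.0 * np.linalg.norm(a[:, None] - b[None, :], axis=-1))

nbrs = np.argsort(np.linalg.norm(coords - u, axis=1))[:m]   # N_m(u)
C_nn = rho(coords[nbrs], coords[nbrs]) + (1/alpha - 1) * np.eye(m)
c_u = rho(u[None, :], coords[nbrs]).ravel()
a_tilde = np.linalg.solve(C_nn, c_u)              # nonzero entries of tilde a
d_tilde = 1.0 / alpha - a_tilde @ c_u

# with posterior draws beta, Sigma and residuals R = Y - X beta, one then
# samples y(u) ~ N(beta^T x(u) + R[nbrs].T @ a_tilde, d_tilde * Sigma)
print(d_tilde)                                    # in (0, 1/alpha]
```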
\noindent{ \rule{\textwidth}{1pt} {\fontsize{8}{8}\selectfont \textbf{Algorithm~1}: Obtaining posterior inference of $\{\boldsymbol{\beta}, \boldsymbol{\Sigma}\}$ and predictions on ${\cal U}$ for the conjugate multivariate response NNGP model \\ \rule{\textwidth}{1pt}\\ \begin{enumerate} \item Construct $\mathbf{V}^\ast$, $\boldsymbol{\mu}^\ast$, $\boldsymbol{\Psi}^\ast$ and $\nu^\ast$: \begin{enumerate} \item Compute $\mathbf{L}_r$, the Cholesky decomposition of $\mathbf{V}_r$ \hfill{$\mathcal{O}(p^3)$ flops} \item Compute $\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{X} = \mathbf{D}_{\boldsymbol{\mathcal{K}}}^{-\frac{1}{2}}(\mathbf{I} -\mathbf{A}_{\boldsymbol{\mathcal{K}}}) \mathbf{X}$ and $\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{Y} = \mathbf{D}_{\boldsymbol{\mathcal{K}}}^{-\frac{1}{2}}(\mathbf{I} -\mathbf{A}_{\boldsymbol{\mathcal{K}}})\mathbf{Y}$ \begin{itemize} \item {Construct $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ and $\mathbf{D}_{\boldsymbol{\mathcal{K}}}$ as described in \eqref{eq: NNGP_a_d_construct} \hfill{$\mathcal{O}(nm^3)$ flops} } \item Compute $\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{X}$ and $\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{Y}$ from $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ and $\mathbf{D}_{\boldsymbol{\mathcal{K}}}$ \hfill{$\mathcal{O}(nm(p + q))$ flops} \end{itemize} \item Obtain $\mathbf{V}^\ast$, $\boldsymbol{\mu}^\ast$ and $\boldsymbol{\Psi}^\ast$ \begin{itemize} \item Compute $\mathbf{V}^\ast = (\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{X}^\top\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{X} + \mathbf{V}_r^{-1})^{-1}$ and its Cholesky decomposition $\mathbf{L}_{v\ast}$ \hfill{$\mathcal{O}(np^2)$ flops} \item Compute $\boldsymbol{\mu}^\ast =
\mathbf{V}^{\ast}(\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{X}^\top\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{Y} + \mathbf{V}_r^{-1}\boldsymbol{\mu}_{\boldsymbol{\beta}})$ \hfill{$\mathcal{O}(npq)$ flops} \item Compute $\boldsymbol{\Psi}^\ast =\boldsymbol{\Psi} + \mathbf{D}\mathbf{I}\mathbf{A}\mathbf{Y}^\top\mathbf{D}\mathbf{I}\mathbf{A}\mathbf{Y} + (\mathbf{L}_r^{-1} \boldsymbol{\mu}_{\boldsymbol{\beta}})^\top (\mathbf{L}_r^{-1} \boldsymbol{\mu}_{\boldsymbol{\beta}}) - (\mathbf{L}_{v\ast}^{-1} \boldsymbol{\mu}^\ast)^\top (\mathbf{L}_{v\ast}^{-1} \boldsymbol{\mu}^\ast)$ \hfill{$\mathcal{O}(nq^2)$ flops} \item Compute $\nu^\ast = \nu + n$ \hfill{$\mathcal{O}(1)$ flops} \end{itemize} \end{enumerate} \item Generate posterior samples $\{\mathbf{Y}_{\cal U}^{(l)}\}_{l = 1}^L$ on a new set ${\cal U}$ given $\mathbf{X}_{\cal U}$ \begin{enumerate} \item Construct $\Tilde{\mathbf{A}}$ and $\Tilde{\mathbf{D}}$ as described in \eqref{eq: NNGP_predict_collapsed_par} \hfill{$\mathcal{O}(n'm^3)$ flops} \item \text{For} $l$ in $1:L$ \begin{enumerate} \item Sample $\boldsymbol{\Sigma}^{(l)} \sim \mathrm{IW}(\boldsymbol{\Psi}^\ast, \nu^\ast)$ \hfill{$\mathcal{O}(q^3)$ flops} \item Sample $\boldsymbol{\beta}^{(l)} \sim \mbox{MN}(\boldsymbol{\mu}^\ast, \mathbf{V}^\ast, \boldsymbol{\Sigma}^{(l)})$ \begin{itemize} \item Calculate Cholesky decomposition of $\boldsymbol{\Sigma}^{(l)}$, $\boldsymbol{\Sigma}^{(l)} = \mathbf{L}_{\Sigma^{(l)}}\mathbf{L}^\top_{\Sigma^{(l)}}$ \hfill{$\mathcal{O}(q^3)$ flops} \item Sample $\mathbf{u} \sim \mbox{MN}(\mathbf{0}_{p \times q}, \mathbf{I}_p, \mathbf{I}_q)$ (i.e. 
$\mbox{vec}(\mathbf{u})\sim \mbox{MVN}(\mathbf{0}_{pq \times 1}, \mathbf{I}_{pq})$) \hfill{$\mathcal{O}(pq)$ flops} \item Generate $\boldsymbol{\beta}^{(l)} = \boldsymbol{\mu}^\ast + \mathbf{L}_{v\ast}\mathbf{u} \mathbf{L}_{\boldsymbol{\Sigma}^{(l)}}^\top$ \hfill{$\mathcal{O}(p^2q + pq^2)$ flops} \end{itemize} \item Sample $\mathbf{Y}_{\cal U}^{(l)} \sim \mbox{MN}(\mathbf{X}_{\cal U}\boldsymbol{\beta}^{(l)} + \Tilde{\mathbf{A}}[\mathbf{Y} - \mathbf{X}\boldsymbol{\beta}^{(l)}], \Tilde{\mathbf{D}}, \boldsymbol{\Sigma}^{(l)})$ \begin{itemize} \item Sample $\mathbf{u} \sim \mbox{MN}(\mathbf{0}_{n' \times q}, \mathbf{I}_{n'}, \mathbf{I}_q)$. \hfill{$\mathcal{O}(n'q)$ flops} \item Generate $\mathbf{Y}_{\cal U}^{(l)} = \mathbf{X}_{\cal U}\boldsymbol{\beta}^{(l)} + \Tilde{\mathbf{A}}[\mathbf{Y} - \mathbf{X}\boldsymbol{\beta}^{(l)}] + \Tilde{\mathbf{D}}^{\frac{1}{2}}\mathbf{u}\mathbf{L}_{\boldsymbol{\Sigma}^{(l)}}^\top$ \hfill{$\mathcal{O}((n'+n)pq + n'(q^2 + mq))$ flops } \end{itemize} \end{enumerate} \end{enumerate} \end{enumerate} \vspace*{-8pt} \rule{\textwidth}{1pt} } } \paragraph{Conjugate multivariate latent NNGP model} Bayesian estimation for the conjugate multivariate latent model is more challenging because inference is usually sought on the (high-dimensional) latent process itself. In particular, the calculations involved in $\mathbf{V}^\ast$ in \eqref{eq: augmented_conj_post_v1} are often too expensive for large data sets even when the precision matrix $\boldsymbol{\rho}^{-1}_\psi(\mathcal{S}, \mathcal{S})$ is sparse. Here, the latent process $\boldsymbol{\omega}(\mathbf{s})$ in (\ref{eq: conj_latent_model}) follows a multivariate Gaussian process so that its realizations over $\mathcal{S}$ follow $\boldsymbol{\omega} \sim \mbox{MN}(\mathbf{0}_{n \times q}, \Tilde{\boldsymbol{\rho}}, \boldsymbol{\Sigma})$, where $\Tilde{\boldsymbol{\rho}}$ is the Vecchia approximation of $\boldsymbol{\rho}_{\psi}(\mathcal{S}, \mathcal{S})$.
Hence, $\Tilde{\boldsymbol{\rho}}^{-1} = (\mathbf{I} - \mathbf{A}_{\boldsymbol{\rho}})^{\top}\mathbf{D}_{\boldsymbol{\rho}}^{-1}(\mathbf{I} - \mathbf{A}_{\boldsymbol{\rho}})$, where $\mathbf{A}_{\boldsymbol{\rho}}$ and $\mathbf{D}_{\boldsymbol{\rho}}$ are constructed analogously to $\mathbf{A}_{\boldsymbol{\mathcal{K}}}$ and $\mathbf{D}_{\boldsymbol{\mathcal{K}}}$ in (\ref{eq: NNGP_approx_invK}) with {$\alpha$ replaced by $1$}. This corresponds to modeling $\boldsymbol{\omega}(\mathbf{s})$ as an NNGP \citep[see, e.g.,][for details on the NNGP and its properties]{datta16, datta16b, banerjee2017high}. {The distribution theory for $\boldsymbol{\omega}(\mathbf{s})$ over $\mathcal{S}$ and ${\cal U}$ is analogous to that of $\mathbf{y}(\mathbf{s})$ in the previous section.} The posterior distribution of $\{\boldsymbol{\gamma}, \boldsymbol{\Sigma}\}$ follows a Matrix-Normal distribution similar to (\ref{eq: augmented_conj_post_v1}), but with $\boldsymbol{\rho}_{\psi}(\mathcal{S},\mathcal{S})^{-1}$ in (\ref{eq: augmented_conj_post_v1_pars}) replaced by its Vecchia approximation $\Tilde{\boldsymbol{\rho}}^{-1}_{\psi}(\mathcal{S},\mathcal{S})$. However, sampling $\{\boldsymbol{\gamma}, \boldsymbol{\Sigma}\}$ is still challenging for massive data sets, where we seek to minimize storage and operations with large matrices. Here we introduce a useful representation. Let $\mathbf{V}_{\boldsymbol{\rho}}$ be the non-singular square matrix satisfying $\Tilde{\boldsymbol{\rho}}^{-1} = \mathbf{V}_{\boldsymbol{\rho}}^\top\mathbf{V}_{\boldsymbol{\rho}}$, i.e., $\mathbf{V}_{\boldsymbol{\rho}} = \mathbf{D}_{\boldsymbol{\rho}}^{-1/2}(\mathbf{I}-\mathbf{A}_{\boldsymbol{\rho}})$.
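The sparse factor $\mathbf{V}_{\boldsymbol{\rho}}$ is assembled row by row from the ordered neighbor sets, with at most $m+1$ non-zero entries per row. A minimal sketch of this assembly (exponential correlation and the given coordinate ordering are illustrative assumptions; helper names are hypothetical):

```python
import numpy as np
from scipy.sparse import csr_matrix

def exp_corr(A, B, phi):
    """Exponential correlation between two location sets (illustrative choice of rho_psi)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return np.exp(-phi * d)

def build_V_rho(S, phi, m=10):
    """Sparse factor V_rho = D^{-1/2}(I - A) with rho_tilde^{-1} = V^T V.

    Vecchia construction: the neighbor set of s_i holds the m nearest
    neighbors among the previously ordered locations s_1, ..., s_{i-1}.
    """
    n = S.shape[0]
    rows, cols, vals = [], [], []
    for i in range(n):
        if i == 0:
            rows.append(0); cols.append(0); vals.append(1.0)   # d_1 = 1
            continue
        past = np.arange(i)
        nb = past[np.argsort(np.linalg.norm(S[past] - S[i], axis=1))[:m]]
        r = exp_corr(S[i][None, :], S[nb], phi).ravel()
        a = np.linalg.solve(exp_corr(S[nb], S[nb], phi), r)    # row of A
        d_i = 1.0 - r @ a                                      # conditional variance
        s = 1.0 / np.sqrt(d_i)
        rows.append(i); cols.append(i); vals.append(s)         # diagonal: D^{-1/2}
        rows.extend([i] * len(nb)); cols.extend(nb.tolist())
        vals.extend((-s * a).tolist())                         # off-diagonal: -D^{-1/2} A
    return csr_matrix((vals, (rows, cols)), shape=(n, n))
```

When $m$ is taken as large as $n-1$ the construction conditions on the full past and $\mathbf{V}_{\boldsymbol{\rho}}^\top\mathbf{V}_{\boldsymbol{\rho}}$ recovers $\boldsymbol{\rho}_{\psi}^{-1}(\mathcal{S},\mathcal{S})$ exactly; with $m \ll n$ it is the sparse Vecchia factor used above.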
We treat the prior of $\boldsymbol{\gamma}$ as additional ``observations'' and recast $p(\mathbf{Y}, \boldsymbol{\gamma} \,|\, \boldsymbol{\Sigma}) = p(\mathbf{Y} \,|\, \boldsymbol{\gamma}, \boldsymbol{\Sigma}) \times p(\boldsymbol{\gamma} \,|\, \boldsymbol{\Sigma})$ into an augmented linear model \begin{equation}\label{eq: augment_linear_latent} \begin{array}{c} \underbrace{ \left[ \begin{array}{c} \sqrt{\frac{\alpha}{1 - \alpha}} \mathbf{Y}\\ \mathbf{L}_r^{-1} \boldsymbol{\mu}_{\boldsymbol{\beta}} \\ \mathbf{0}_{n \times q} \end{array} \right]}_{\mathbf{Y}^{*}} = \underbrace{ \left[ \begin{array}{cc} \sqrt{\frac{\alpha}{1 - \alpha}} \mathbf{X} & \sqrt{\frac{\alpha}{1 - \alpha}} \mathbf{I}_n \\ \mathbf{L}_r^{-1}& \mathbf{0}_{p \times n} \\ \mathbf{0}_{n \times p}& \mathbf{V}_{\boldsymbol{\rho}} \end{array} \right] }_{\mathbf{X}^{*}} \underbrace{ \left[ \begin{array}{c} \boldsymbol{\beta} \\ \boldsymbol{\omega} \end{array} \right]}_{\boldsymbol{\gamma}}+ \underbrace{ \left[ \begin{array}{c} \boldsymbol{\eta}_1 \\ \boldsymbol{\eta}_2 \\ \boldsymbol{\eta}_3 \end{array} \right]}_{\boldsymbol{\eta}} \end{array} , \end{equation} where $\mathbf{L}_r$ is the Cholesky decomposition of $\mathbf{V}_r$, and $\boldsymbol{\eta} \sim \mbox{MN}(\mathbf{0}_{(2n+p) \times q}, \mathbf{I}_{2n + p}, \boldsymbol{\Sigma})$. With a flat prior for $\boldsymbol{\beta}$, $\mathbf{L}_r^{-1}$ degenerates to $\mathbf{O}$ and does not contribute to the linear system. Equation~\eqref{eq: augmented_conj_post_v1_pars} simplifies to \begin{equation}\label{eq: augmented_conj_post_v2_pars} \begin{aligned} \mathbf{V}^\ast &= (\mathbf{X}^{\ast\top}\mathbf{X}^\ast)^{-1}\; , \; \boldsymbol{\mu}^\ast = (\mathbf{X}^{\ast\top}\mathbf{X}^\ast)^{-1}\mathbf{X}^{\ast\top}\mathbf{Y}^\ast \; ,\\ \boldsymbol{\Psi}^\ast &= \boldsymbol{\Psi} + (\mathbf{Y}^\ast - \mathbf{X}^\ast \boldsymbol{\mu}^\ast)^\top(\mathbf{Y}^\ast - \mathbf{X}^\ast \boldsymbol{\mu}^\ast)\; , \; \nu^\ast = \nu + n \; .
\end{aligned} \end{equation} Following developments in \citet{zdb2019} for the univariate case, one can efficiently generate posterior samples through a conjugate gradient algorithm exploiting the sparsity of $\mathbf{V}_{\boldsymbol{\rho}}$. The sampling process for $\boldsymbol{\gamma}$ is scalable whenever the precision matrix $\boldsymbol{\rho}_\psi^{-1}(\mathcal{S}, \mathcal{S})$ is sparse. It is also possible to construct $\mathbf{V}^\ast$ and $\boldsymbol{\mu}^\ast$ in \eqref{eq: augmented_conj_post_v2_pars} using $\boldsymbol{\rho}_\psi^{-1}(\mathcal{S}, \mathcal{S})$ instead of $\mathbf{V}_{\boldsymbol{\rho}}$. We refer to \citet{zdb2019} for further details of this construction. We provide a detailed algorithm for the conjugate multivariate latent NNGP model in {Algorithm~2. We solve the linear system $\mathbf{X}^{\ast\top}\mathbf{X}^\ast\boldsymbol{\mu}^\ast = \mathbf{X}^{\ast\top}\mathbf{Y}^\ast$ for $\boldsymbol{\mu}^\ast$, compute $\{\boldsymbol{\Psi}^\ast, \nu^\ast\}$ and generate posterior samples of $\boldsymbol{\Sigma}$ from $\mbox{IW}(\boldsymbol{\Psi}^\ast, \nu^\ast)$. Posterior samples of $\boldsymbol{\gamma}$ are obtained by generating $\boldsymbol{\eta} \sim \mbox{MN}(\mathbf{0}_{ (2n+p) \times q}, \mathbf{I}_{2n + p}, \boldsymbol{\Sigma})$, solving $\mathbf{X}^{\ast\top}\mathbf{X}^\ast \mathbf{v} = \mathbf{X}^{\ast\top}\boldsymbol{\eta}$ for $\mathbf{v}$ and then setting $\boldsymbol{\gamma} = \boldsymbol{\mu}^\ast + \mathbf{v}$. } We implement the LSMR algorithm \citep{fong2011lsmr}, an iterative method for sparse linear equations and sparse least squares, to solve the linear systems $\mathbf{X}^{\ast\top}\mathbf{X}^\ast\boldsymbol{\mu}^\ast = \mathbf{X}^{\ast\top}\mathbf{Y}^\ast$ and $\mathbf{X}^{\ast\top}\mathbf{X}^\ast \mathbf{v} = \mathbf{X}^{\ast\top}\boldsymbol{\eta}$ needed to generate $\boldsymbol{\gamma}$.
LSMR is a conjugate-gradient type algorithm for solving sparse linear equations $\mathbf{A} \mathbf{x} = \mathbf{b}$, where the matrix $\mathbf{A}$ may be square or rectangular; in our setting $\mathbf{A} := \mathbf{X}^\ast$ is a sparse, tall matrix. LSMR only requires storing $\mathbf{X}^{\ast}$, $\mathbf{Y}^{\ast}$ and $\boldsymbol{\eta}$ and, unlike the conjugate gradient algorithm, avoids forming $\mathbf{X}^{\ast\top}\mathbf{X}^{\ast}$, $\mathbf{X}^{\ast\top}\mathbf{Y}^\ast$ and $\mathbf{X}^{\ast\top}\boldsymbol{\eta}$. LSMR also tends to produce more stable estimates than conjugate gradient. We have also tested a variety of conjugate gradient and preconditioning methods, whose performance varied across data sets; LSMR without preconditioning performed relatively well for the latent models, so we adopt it for our current illustrations. Posterior predictive inference adapts (\ref{eq: posterior_predict_augmented}) to the scalable models.
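As an illustration of how such solves look in practice, here is a sketch using SciPy's `lsmr` on a stand-in augmented system; the `Xstar` and `Ystar` below are synthetic placeholders (any sparse tall matrix with an identity block), not the model's actual matrices:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsmr

# Stand-in for the augmented design X* (tall, sparse) and response Y*.
rng = np.random.default_rng(0)
n, p, q = 500, 3, 2
Xstar = sp.vstack([sp.random(n, n + p, density=0.01, random_state=0),
                   sp.eye(n + p)]).tocsr()           # (2n + p) x (n + p)
Ystar = rng.standard_normal((2 * n + p, q))

# lsmr finds argmin_x ||X* x - y||_2; the minimizer satisfies the normal
# equations X*^T X* x = X*^T y, so the cross-product matrix is never formed.
mu_star = np.column_stack(
    [lsmr(Xstar, Ystar[:, j], atol=1e-12, btol=1e-12)[0] for j in range(q)]
)
```

The same call, with $\boldsymbol{\eta}$ columns in place of $\mathbf{Y}^\ast$ columns, yields the $\mathbf{v}$ needed for each posterior draw of $\boldsymbol{\gamma}$.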
After sampling $\{\boldsymbol{\gamma},\boldsymbol{\Sigma}\}$, we sample one draw of $\boldsymbol{\omega}_{\cal U} \,|\, \boldsymbol{\gamma}, \boldsymbol{\Sigma} \sim \mbox{MN}([\mathbf{0}_{n' \times p}, \Tilde{\mathbf{A}}]\boldsymbol{\gamma}, \Tilde{\mathbf{D}}, \boldsymbol{\Sigma})$ for each sampled $\{\boldsymbol{\gamma},\boldsymbol{\Sigma}\}$, where $\Tilde{\mathbf{A}} = [\Tilde{\mathbf{a}}_1: \cdots: \Tilde{\mathbf{a}}_{n'}]^\top$ and $\Tilde{\mathbf{D}} = \mbox{diag}(\{\Tilde{d}_i\}_{i = 1}^{n'})$ with \begin{equation}\label{eq: conj_latent_NNGP_A_D_U} \begin{split} \left[\{\Tilde{\mathbf{a}}_i\}_{i_1}, \ldots, \{\Tilde{\mathbf{a}}_i\}_{i_m}\right] &= \boldsymbol{\rho}_{\psi}(\mathbf{u}_i, N_m(\mathbf{u}_i))\boldsymbol{\rho}_{\psi}^{-1}(N_m(\mathbf{u}_i), N_m(\mathbf{u}_i))\;,\\ \Tilde{d}_i &= 1 - \boldsymbol{\rho}_{\psi}(\mathbf{u}_i, N_m(\mathbf{u}_i))\boldsymbol{\rho}_{\psi}^{-1}(N_m(\mathbf{u}_i), N_m(\mathbf{u}_i))\boldsymbol{\rho}_{\psi}(N_m(\mathbf{u}_i),\mathbf{u}_i)\;. \end{split} \end{equation} Finally, for each sampled $\{\boldsymbol{\beta},\boldsymbol{\omega}_{\cal U},\boldsymbol{\Sigma}\}$ we make one draw of $\mathbf{Y}_{\cal U}\sim \mbox{MN}(\mathbf{X}_{\cal U}\boldsymbol{\beta}+\boldsymbol{\omega}_{\cal U}, {(\alpha^{-1} - 1)}\mathbf{I}_{n'}, \boldsymbol{\Sigma})$. The following algorithm provides the steps for predictive inference.
\noindent{ \rule{\textwidth}{1pt} {\fontsize{8}{8}\selectfont \textbf{Algorithm~2}: Obtaining posterior inference of $\{\boldsymbol{\gamma}, \boldsymbol{\Sigma}\}$ and predictions on set ${\cal U}$ for the conjugate multivariate latent NNGP model\\ \rule{\textwidth}{1pt}\\ \begin{enumerate} \item Construct $\mathbf{X}^\ast$ and $\mathbf{Y}^\ast$ in \eqref{eq: augment_linear_latent} \begin{enumerate} \item $\mathbf{L}_r^{-1}$ and $\mathbf{L}_r^{-1} \boldsymbol{\mu}_{\boldsymbol{\beta}}$ \begin{itemize} \item Compute $\mathbf{L}_r$, the Cholesky decomposition of $\mathbf{V}_r$ \hfill{$\mathcal{O}(p^3)$ flops} \item Compute $\mathbf{L}_r^{-1}$ and $\mathbf{L}_r^{-1} \boldsymbol{\mu}_{\boldsymbol{\beta}}$ \hfill{$\mathcal{O}(p^2q)$ flops} \end{itemize} \item $\mathbf{V}_{\boldsymbol{\rho}}$ \begin{itemize} \item Construct $\mathbf{A}_{\boldsymbol{\rho}}$ and $\mathbf{D}_{\boldsymbol{\rho}}$ \hfill{$\mathcal{O}(nm^3)$ flops} \item Compute $\mathbf{V}_{\boldsymbol{\rho}} = \mathbf{D}_{\boldsymbol{\rho}}^{-\frac{1}{2}}(\mathbf{I} - \mathbf{A}_{\boldsymbol{\rho}})$ \hfill{$\mathcal{O}(nm)$ flops} \end{itemize} \item Construct $\mathbf{X}^\ast$ and $\mathbf{Y}^\ast$ \end{enumerate} \item Obtain $\boldsymbol{\mu}^\ast$, $\boldsymbol{\Psi}^\ast$ and $\nu^\ast$. \begin{enumerate} \item Obtain $\boldsymbol{\mu}^\ast = [\boldsymbol{\mu}^\ast_1: \cdots: \boldsymbol{\mu}^\ast_q]$ \begin{itemize} \item Solve $\boldsymbol{\mu}^{\ast}_i$ from $\mathbf{X}^{\ast}\boldsymbol{\mu}^{\ast}_i = \mathbf{Y}^{\ast}_i$ by LSMR for $i = 1, \ldots, q$.
\end{itemize} \item Obtain $\boldsymbol{\Psi}^\ast$ and $\nu^\ast$ \begin{itemize} \item Generate $\mathbf{u} = \mathbf{Y}^\ast - \mathbf{X}^\ast\boldsymbol{\mu}^\ast$ \hfill{$\mathcal{O}(nq(p + m))$ flops} \item Compute $\boldsymbol{\Psi}^\ast = \boldsymbol{\Psi} + \mathbf{u}^\top\mathbf{u}$ \hfill{$\mathcal{O}(nq^2)$ flops } \item Compute $\nu^\ast = \nu + n$ \hfill{$\mathcal{O}(1)$ flops} \end{itemize} \end{enumerate} \item Generate posterior samples of $\{\boldsymbol{\gamma}^{(l)}, \boldsymbol{\Sigma}^{(l)}\}_{l = 1}^L$. For $l$ in $1:L$ \begin{enumerate} \item Sample $\boldsymbol{\Sigma}^{(l)} \sim \mathrm{IW}(\boldsymbol{\Psi}^\ast, \nu^\ast)$ \hfill{$\mathcal{O}(q^3)$ flops} \item Sample $\boldsymbol{\gamma}^{(l)} \sim \mbox{MN}(\boldsymbol{\mu}^\ast, \mathbf{V}^\ast, \boldsymbol{\Sigma}^{(l)})$ \begin{itemize} \item Sample $\mathbf{u} \sim \mbox{MN}(\mathbf{0}_{(2n+p) \times q}, \mathbf{I}_{2n + p}, \mathbf{I}_q)$ \hfill{$\mathcal{O}(nq)$ flops} \item Calculate the Cholesky decomposition of $\boldsymbol{\Sigma}^{(l)}$, $\boldsymbol{\Sigma}^{(l)} = \mathbf{L}_{\boldsymbol{\Sigma}^{(l)}}\mathbf{L}^\top_{\boldsymbol{\Sigma}^{(l)}}$ \hfill{$\mathcal{O}(q^3)$ flops} \item Generate $\boldsymbol{\eta} = \mathbf{u}\mathbf{L}_{\boldsymbol{\Sigma}^{(l)}}^\top = [\boldsymbol{\eta}_1 : \cdots :\boldsymbol{\eta}_q]$ \hfill{$\mathcal{O}(nq^2)$ flops} \item Solve $\mathbf{v}_i$ from $\mathbf{X}^\ast\mathbf{v}_i = \boldsymbol{\eta}_i$ by LSMR for $i = 1, \ldots, q$. \item Generate $\boldsymbol{\gamma}^{(l)} = \boldsymbol{\mu}^\ast + \mathbf{v}$ with $\mathbf{v} = [\mathbf{v}_1: \cdots:\mathbf{v}_q]$ \hfill{$\mathcal{O}(nq)$ flops } \end{itemize} \end{enumerate} \item Generate posterior samples of $\{\mathbf{Y}_{\cal U}^{(l)}\}$ on a new set ${\cal U}$ given $\mathbf{X}_{\cal U}$.
\begin{enumerate} \item Construct $\Tilde{\mathbf{A}}$ and $\Tilde{\mathbf{D}}$ using \eqref{eq: conj_latent_NNGP_A_D_U} \hfill{$\mathcal{O}(n'm^3)$ flops} \item For $l$ in $1:L$ \begin{enumerate} \item Sample $\boldsymbol{\omega}_{\cal U}^{(l)} \sim \mbox{MN}([\mathbf{0}_{n'\times p}, \Tilde{\mathbf{A}}]\boldsymbol{\gamma}^{(l)}, \Tilde{\mathbf{D}}, \boldsymbol{\Sigma}^{(l)})$ \begin{itemize} \item Sample $\mathbf{u} \sim \mbox{MN}(\mathbf{0}_{n' \times q}, \mathbf{I}_{n'}, \mathbf{I}_q)$ \hfill{$\mathcal{O}(n'q)$ flops} \item Generate $\boldsymbol{\omega}_{\cal U}^{(l)} = [\mathbf{0}_{n'\times p}, \Tilde{\mathbf{A}}]\boldsymbol{\gamma}^{(l)} + \Tilde{\mathbf{D}}^{\frac{1}{2}}\mathbf{u}\mathbf{L}_{\boldsymbol{\Sigma}^{(l)}}^\top$ \hfill{$\mathcal{O}(n'mq+ n'q^2)$ flops } \end{itemize} \item Sample $\mathbf{Y}_{\cal U}^{(l)} \,|\, \boldsymbol{\omega}_{\cal U}^{(l)}, \boldsymbol{\gamma}^{(l)}, \boldsymbol{\Sigma}^{(l)} \sim \mbox{MN}(\mathbf{X}_{\cal U}\boldsymbol{\beta}^{(l)}+\boldsymbol{\omega}_{\cal U}^{(l)}, (\alpha^{-1} - 1)\mathbf{I}_{n'}, \boldsymbol{\Sigma}^{(l)})$ \begin{itemize} \item Sample $\mathbf{u} \sim \mbox{MN}(\mathbf{0}_{n' \times q}, \mathbf{I}_{n'}, \mathbf{I}_q)$ \hfill{$\mathcal{O}(n'q)$ flops} \item Generate $\mathbf{Y}_{\cal U}^{(l)} = \mathbf{X}_{\cal U}\boldsymbol{\beta}^{(l)} + \boldsymbol{\omega}_{\cal U}^{(l)} + {(\alpha^{-1} - 1)^{\frac{1}{2}}} \mathbf{u} \mathbf{L}_{\boldsymbol{\Sigma}^{(l)}}^\top$ \hfill{$\mathcal{O}(n'pq + n'q^2)$ flops } \end{itemize} \end{enumerate} \end{enumerate} \end{enumerate} \vspace*{-8pt} \rule{\textwidth}{1pt} } } { \paragraph{Model comparisons} We will use posterior predictive performance as a key measure to compare inferential performance among the multivariate spatial models. For multivariate models we investigate model fit for each variable as well as by combining across variables.
For example, using a common hold-out set $\{\mathbf{u}_1,\mathbf{u}_2,\ldots,\mathbf{u}_{n'}\}$ for each model, we evaluate the root mean squared prediction error for each outcome as $\mbox{RMSPE} = \sqrt{\sum_{i = 1}^{n'}(y_j(\mathbf{u}_i) - \hat{y}_j(\mathbf{u}_i))^2/{n'}}$ for $j = 1,\ldots, q$ and also for all the responses combined as $\mbox{RMSPE} = \sqrt{\sum_{j=1}^q\sum_{i = 1}^{n'}(y_j(\mathbf{u}_i) - \hat{y}_j(\mathbf{u}_i))^2/(n'q)}$, where $y_j(\mathbf{u}_i)$ and $\hat{y}_j(\mathbf{u}_i)$ are the observed and predicted (posterior predictive mean) values of the outcome, respectively. Other metrics we compute are the prediction interval coverage (CVG; the percent of intervals containing the true value), the interval coverage of the intercept-centered latent processes for the observed responses (CVGL), and the mean continuous ranked probability score ($\mbox{MCRPS} = \sum_{i = 1}^{n'} \mbox{CRPS}_j(\mathbf{u}_i)/{n'}$ for each $j= 1, \ldots, q$, where $\mbox{CRPS}_j(\mathbf{u}_i)$ is the CRPS of the $j$-th response at held-out location $\mathbf{u}_i$ \citep{gneiting2007strictly}). To calculate $\mbox{CRPS}_j(\mathbf{u}_i)$, we approximate the predictive distribution by a Normal distribution with mean centered at the predicted value {$\hat{y}_j(\mathbf{u}_i)$} and standard deviation equal to the predictive standard error $\hat{\sigma}_j(\mathbf{u}_i)$.
Therefore, $\mbox{CRPS}_j(\mathbf{u}_i) = \hat{\sigma}_j(\mathbf{u}_i)[1/\sqrt{\pi} - 2\varphi(z_{ij}) - z_{ij} (2 \Phi(z_{ij})-1)]$, where $z_{ij} = (y_j(\mathbf{u}_i) - \hat{y}_j(\mathbf{u}_i) ) / \hat{\sigma}_j(\mathbf{u}_i)$, and $\varphi$ and $\Phi$ are the density and the cumulative distribution function of a standard Normal variable, respectively.} {In simulation experiments, where we know the values of the true parameters (and the latent process) generating the data, we also evaluate the mean squared error of the intercept-centered latent process $\mbox{MSEL}= {\sum_{i = 1}^n(\omega_j(\mathbf{s}_i) + \mathbf{x}(\mathbf{s}_i)^{\top}\boldsymbol{\beta}_{j}- \hat{\omega}_j(\mathbf{s}_i) - \mathbf{x}(\mathbf{s}_i)^{\top}\hat{\boldsymbol{\beta}}_{j})^2/n}$ for each $j = 1, \ldots, q$ and for all responses combined as $\mbox{MSEL}= {\sum_{j=1}^q\sum_{i = 1}^n(\omega_j(\mathbf{s}_i) + \mathbf{x}(\mathbf{s}_i)^{\top}\boldsymbol{\beta}_{j}- \hat{\omega}_j(\mathbf{s}_i) - \mathbf{x}(\mathbf{s}_i)^{\top}\hat{\boldsymbol{\beta}}_{j})^2/(nq)}$, where $\hat{\omega}_j(\mathbf{s}_i)$ and $\hat{\boldsymbol{\beta}}_{j}$ are the posterior mean estimates of the latent process and regression slopes, respectively, obtained by analyzing the simulated data. For NNGP models there is also the question of sensitivity to the number of neighbors $m$. Here, we rely upon the recommendations in \cite{datta16}, based upon extensive simulation experiments, that for standard covariance functions $m = 10$ nearest neighbors usually suffice to deliver robust substantive inference even for massive data sets. \cite{datta2016nearest} investigated NNGP models with an unknown $m$ modeled using a discrete prior, but reported inference almost indistinguishable from that obtained by fixing $m$. Hence, we do not pursue modeling $m$ here.} \subsection{Cross-validation for Conjugate Multivariate NNGP Models}\label{subsec: CV_conj_NNGP} Conjugate Bayesian multivariate regression models depend on fixed hyperparameters.
We adapt a univariate $K$-fold cross-validation algorithm for choosing $\{\psi, \alpha\}$ \citep{finley2019efficient} to our multivariate setting. For each $\{\psi,\alpha\}$ on a grid of candidate values we fit the conjugate model and perform predictive inference. We compare the model predictions and choose the $\{\psi,\alpha\}$ that produces the {best performance under some model fitting criterion, e.g.,} the smallest root mean squared prediction error (RMSPE). {Specifically, we divide the data into $K$ folds (holdout sets), say $\mathcal{S}_1,\ldots,\mathcal{S}_K$, and predict $\mathbf{y}(\mathbf{s})$ over each of these folds using the remaining data (outside of the fold) for training. We calculate the RMSPE over the $K$ folds as $\sum_{k=1}^K\sqrt{\sum_{\mathbf{s}\in \mathcal{S}_k}\|\mathbf{y}(\mathbf{s}) - \hat{\mathbf{y}}(\mathbf{s})\|^2/(|\mathcal{S}_k|q)}$, where $\hat{\mathbf{y}}(\mathbf{s})$ is the posterior predictive mean of $\mathbf{y}(\mathbf{s})$ and $|\mathcal{S}_k|$ is the number of locations in $\mathcal{S}_k$. This completes the cross-validation exercise for one choice of $\{\psi,\alpha\}$. We repeat this for all $\{\psi,\alpha\}$ over the grid and choose the point that corresponds to the smallest RMSPE.} Inference corresponding to this choice of hyperparameters is then presented. This is appealing for scalable Gaussian process models, which, for any fixed $\{\psi, \alpha\}$, can deliver posterior inference rapidly at new locations requiring storage and flops in $\mathcal{O}(n)$ only. The next algorithm presents the details of the steps for implementing $K$-fold cross-validation to choose the hyperparameters. \noindent{ \rule{\textwidth}{1pt} {\fontsize{8}{8}\selectfont \textbf{Algorithm~3}: Cross-validation for tuning $\psi$ and $\alpha$ in the conjugate multivariate response or latent NNGP model\\ \rule{\textwidth}{1pt}\\ \begin{enumerate} \item[1.] Split $\mathcal{S}$ into $K$ folds, and build the neighbor index.
\begin{itemize} \item Split $\mathcal{S}$ into $K$ folds $\{\mathcal{S}_k\}_{k = 1}^K$, where $\mathcal{S}_{-k}$ denotes the set of locations in $\mathcal{S}$ that are not included in $\mathcal{S}_k$ \item Build nearest neighbors for $\{\mathcal{S}_{-k}\}_{k = 1}^K$ \item Find the collection of nearest neighbor sets for $\mathcal{S}_k$ among $\mathcal{S}_{-k}$ for $k = 1, \ldots, K$. \end{itemize} \item[2.] (For response NNGP) Fix $\psi$ and $\alpha$, and obtain the posterior mean of $\boldsymbol{\beta}$ after removing the $k$th fold of the data: \begin{itemize} \item Use step 1 in Algorithm~1, taking $\mathcal{S}$ to be $\mathcal{S}_{-k}$, and set $\hat{\boldsymbol{\beta}}_k$ to the resulting $\boldsymbol{\mu}^\ast$. \end{itemize} \item[ ] (For latent NNGP) Fix $\psi$ and $\alpha$, and obtain the posterior mean of $\boldsymbol{\gamma}_k= \{ \boldsymbol{\beta}, \boldsymbol{\omega}(\mathcal{S}_{-k}) \}$ after removing the $k$th fold of the data: \begin{itemize} \item Use steps 1-2 in Algorithm~2, taking $\mathcal{S}$ to be $\mathcal{S}_{-k}$, and set $\hat{\boldsymbol{\gamma}}_k$ to the resulting $\boldsymbol{\mu}^\ast$. \end{itemize} \item[3.] (For response NNGP) Predict the posterior means of $\mathbf{y}(\mathcal{S}_{k})$ \begin{itemize} \item Construct the matrix $\Tilde{\mathbf{A}}$ through \eqref{eq: NNGP_predict_collapsed_par} by taking $\mathcal{S}$ to be $\mathcal{S}_{-k}$ and ${\cal U}$ to be $\mathcal{S}_{k}$.
\item According to \eqref{eq: resp_NNGP_YU}, the predicted posterior mean of $\mathbf{y}(\mathcal{S}_k)$ is\\ $\hat{\mathbf{y}}(\mathcal{S}_k) = \mbox{E}[\mathbf{y}(\mathcal{S}_k) \,|\, \mathbf{y}(\mathcal{S}_{-k})] = \mathbf{x}(\mathcal{S}_{k})\hat{\boldsymbol{\beta}}_k + \Tilde{\mathbf{A}}[\mathbf{y}(\mathcal{S}_{-k}) - \mathbf{x}(\mathcal{S}_{-k})\hat{\boldsymbol{\beta}}_k]$ \end{itemize} \item[ ] (For latent NNGP) Predict the posterior means of $\mathbf{y}(\mathcal{S}_{k})$ \begin{itemize} \item Construct the matrix $\Tilde{\mathbf{A}}$ by taking $\mathcal{S}$ to be $\mathcal{S}_{-k}$ and ${\cal U}$ to be $\mathcal{S}_{k}$. \item The predicted posterior mean of $\mathbf{y}(\mathcal{S}_k)$ is\\ $\hat{\mathbf{y}}(\mathcal{S}_k) = \mbox{E}[\mathbf{y}(\mathcal{S}_k) \,|\, \mathbf{y}(\mathcal{S}_{-k})] = \mbox{E}_{\boldsymbol{\omega}}[\mbox{E}_{\mathbf{y}}[\mathbf{y}(\mathcal{S}_{k}) \,|\, \boldsymbol{\omega}(\mathcal{S}_{-k}), \mathbf{y}(\mathcal{S}_{-k})]] = [\mathbf{x}(\mathcal{S}_{k}), \Tilde{\mathbf{A}}]\hat{\boldsymbol{\gamma}}_k$ \end{itemize} \item[4.] Sum the root mean square predictive error (RMSPE) over the $K$ folds \begin{itemize} \item Initialize $e = 0$\\ \text{ } for ($k$ in $1:K$) \\ \text{ } \quad \quad $ e = e + \sqrt{\sum_{j = 1}^q \sum_{\mathbf{s} \in \mathcal{S}_k} (y_j(\mathbf{s}) - \hat{y}_j(\mathbf{s}))^2 / (|\mathcal{S}_k|q)}$ \end{itemize} \item[5.] Cross-validation for choosing $\psi$ and $\alpha$ \begin{itemize} \item Repeat steps (2) - (4) for all candidate values of $\psi$ and $\alpha$ \item Choose $\psi_0$ and $\alpha_0$ as the values that minimize the sum of RMSPE \end{itemize} \end{enumerate} \vspace*{-8pt} \rule{\textwidth}{1pt} }} \subsection{Comparison of Response and Latent Models}\label{subsec: response_vs_latent_NNGP} Modeling the response as an NNGP produces a different model from modeling the latent process as an NNGP.
In the former, Vecchia approximation to the joint density of the response yields a sparse precision matrix for the response. In the latter, it is the precision matrix of the realizations of the latent process that is sparse. This has been discussed in \cite{datta16} and also explored in greater generality by \cite{katzfuss2017general}. Comparisons based on the Kullback-Leibler divergence (KL-D) between the NNGP based models and their parent full GP models reveal that the latent NNGP model tends to be closer to the full GP than the response NNGP. A proof of such a result is provided by \cite{katzfuss2017general}, but this result holds only in the context of an augmented directed acyclical graphical model with nodes comprising the response and the latent variables. However, if we compute the KL-D between the NNGP models and their full GP counterparts in terms of the collapsed or marginal distribution for $\mathbf{Y}$, then it is theoretically possible for the response model to be closer to the full GP. Here we provide a simple example where a response NNGP model outperforms a latent NNGP model on a collapsed space. Consider $q=1$ variable. Assume that the observed location set is $\mathcal{S} = \{\mathbf{s}_1, \mathbf{s}_2, \mathbf{s}_3\}$, the latent process realization $\boldsymbol{\omega}(\mathcal{S})$ has covariance matrix $\sigma^2 \mathbf{R}$ and $\mathbf{y}(\mathcal{S})$ has covariance matrix $\sigma^2\mathbf{R} + \tau^2\mathbf{I}_3$, where \begin{equation} \mathbf{R} = \begin{bmatrix} 1& \rho_{12} & \rho_{13}\\ \rho_{12}& 1 & \rho_{23} \\ \rho_{13}& \rho_{23} & 1 \end{bmatrix}\; . \end{equation} Let us construct the response NNGP and latent NNGP models using Vecchia's approximation in (\ref{eq: vecchia_approx}) with neighbor sets $N_m(\mathbf{s}_2)=\{\mathbf{s}_1\}$ and $N_m(\mathbf{s}_3) = \{\mathbf{s}_2\}$. 
Then the covariance matrices of $\mathbf{y}(\mathcal{S})$ from the response NNGP model and from the latent NNGP model on the collapsed space, i.e., after $\boldsymbol{\omega}(\mathcal{S})$ is integrated out, are \begin{equation} \boldsymbol{\Sigma}_R = \sigma^2 \begin{bmatrix} 1 + \delta^2 & \rho_{12} & \frac{\rho_{12} \rho_{23}}{ 1 + \delta^2}\\ \rho_{12}& 1 + \delta^2 & \rho_{23} \\ \frac{\rho_{12} \rho_{23}}{ 1 + \delta^2}& \rho_{23} & 1 + \delta^2 \end{bmatrix}\; \mbox{ and } \; \boldsymbol{\Sigma}_l = \sigma^2 \begin{bmatrix} 1 + \delta^2 & \rho_{12} & \rho_{12} \rho_{23}\\ \rho_{12}& 1 + \delta^2 & \rho_{23} \\ \rho_{12} \rho_{23}& \rho_{23} & 1 + \delta^2 \end{bmatrix}\;, \end{equation} respectively, where $\delta^2 = \frac{\tau^2}{\sigma^2}$ is the noise-to-signal ratio with $\tau^2$ as the variance of the noise process $\epsilon(\mathbf{s})$. Since $\mathbf{R}$ is positive-definite, we must have \begin{equation} 1- (\rho_{12}^2 + \rho_{13}^2 + \rho_{23}^2) + 2\rho_{12}\rho_{13}\rho_{23} > 0 \; ,\; 1 - \rho_{12}^2 > 0\; . \end{equation} It is easy to show that $\boldsymbol{\Sigma}_R$ and $\boldsymbol{\Sigma}_l$ are also positive-definite. If $\rho_{13} = \frac{\rho_{12} \rho_{23}}{ 1 + \delta^2}$, then the KL-D from the response NNGP model to the true model always equals zero, which is no more than the KL-D from the latent NNGP model to the true model. If $\rho_{13} = \rho_{12} \rho_{23}$, then the KL-D from the latent NNGP model to the true model always equals zero, which reverses the relationship. Numerical examples can be found at \url{https://luzhangstat.github.io/notes/KL-D_com.html}. While theoretically neither model always excels over the other, our simulations indicate that the latent NNGP model tends to outperform the response NNGP model in approximating their parent GP based models.
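The three-location example can be checked numerically. A short sketch (with $\sigma^2 = 1$; the values of $\delta^2$, $\rho_{12}$, $\rho_{23}$ are illustrative, and $\rho_{13}$ is set to $\rho_{12}\rho_{23}/(1+\delta^2)$ so that the response NNGP covariance coincides with the true covariance exactly):

```python
import numpy as np

def kl_gauss(S0, S1):
    """KL( N(0, S0) || N(0, S1) ) for zero-mean Gaussians."""
    k = S0.shape[0]
    S1inv = np.linalg.inv(S1)
    return 0.5 * (np.trace(S1inv @ S0) - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

d2, r12, r23 = 0.25, 0.5, 0.4          # delta^2 = tau^2 / sigma^2
r13 = r12 * r23 / (1 + d2)             # makes Sigma_R equal the true covariance
R = np.array([[1.0, r12, r13], [r12, 1.0, r23], [r13, r23, 1.0]])
K = R + d2 * np.eye(3)                 # true covariance of y(S), sigma^2 = 1

Sigma_R = np.array([[1 + d2, r12, r12 * r23 / (1 + d2)],
                    [r12, 1 + d2, r23],
                    [r12 * r23 / (1 + d2), r23, 1 + d2]])
Sigma_l = np.array([[1 + d2, r12, r12 * r23],
                    [r12, 1 + d2, r23],
                    [r12 * r23, r23, 1 + d2]])

kl_R, kl_l = kl_gauss(K, Sigma_R), kl_gauss(K, Sigma_l)
```

With this choice `kl_R` vanishes while `kl_l` is strictly positive; setting $\rho_{13} = \rho_{12}\rho_{23}$ instead makes `kl_l` vanish and reverses the inequality, as stated above.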
This is consistent with the theoretical result of \citet{katzfuss2017general} and also with our intuition: the presence of the latent process should certainly improve the goodness of fit of the model. Without loss of generality, our discussion here considers the univariate case, but the argument applies to the multivariate setting as well. For the remainder of this section, let $\{y(\mathbf{s}): \mathbf{s} \in \mathcal{D}\}$ be the process of interest over $\mathcal{D} \subset \mathbb{R}^d$, $d \in \mathbb{N}^+$, and let $y(\mathbf{s}) = \omega(\mathbf{s}) + \epsilon(\mathbf{s})$ for some latent spatial GP $\omega(\mathbf{s})$ and white noise process $\epsilon(\mathbf{s})$. A response NNGP model specifies the NNGP on $y(\mathbf{s})$, while a latent NNGP model assumes that $\omega(\mathbf{s})$ follows the NNGP. The latter induces a spatial process on $y(\mathbf{s})$ too, but it is not an NNGP. Let the covariance matrix of $\mathbf{y} = y(\mathcal{S})$ under the parent GP based models be $\mathbf{C} + \tau^2\mathbf{I}$, where $\mathbf{C}$ is the covariance matrix of the latent process $\omega(\mathcal{S})$. {Consider ${\tilde{\mathbf{C}}}^{-1}$, the Vecchia approximation of the precision matrix $\mathbf{C}^{-1}$, and $\tilde{\mathbf{K}}^{-1}$, the Vecchia approximation of $\mathbf{K}^{-1} = (\mathbf{C} + \tau^2 \mathbf{I})^{-1}$.} The covariance matrix of $y(\mathcal{S})$ under the latent NNGP model is $\tilde{\mathbf{C}} + \tau^2 \mathbf{I}$, while the precision matrix of $y(\mathcal{S})$ under the response NNGP model is $\tilde{\mathbf{K}}^{-1}$. We denote by $\mathbf{E}$ the error matrix of the Vecchia approximation of $\mathbf{C}^{-1}$. We assume that $\mathbf{E}$ is small so that $\tilde{\mathbf{C}}^{-1}$ approximates $\mathbf{C}^{-1}$ well.
With the same observed locations $\mathcal{S}$ and a fixed number of nearest neighbors, the error matrix of the Vecchia approximation of $\mathbf{K}^{-1}$ is expected to be close to $\mathbf{E}$, i.e., \begin{equation} \mathbf{C}^{-1} = \tilde{\mathbf{C}}^{-1} + \mathbf{E}\; ; \; \mathbf{K}^{-1} = \tilde{\mathbf{K}}^{-1} + \mathcal{O}(\mathbf{E}). \end{equation} Representing the precision matrices of $y(\mathcal{S})$ of the parent GP based model and the latent NNGP model by \begin{equation} \begin{aligned} (\mathbf{C}+\tau^2\mathbf{I})^{-1} &= \mathbf{C}^{-1} - \mathbf{C}^{-1}\mathbf{M}^{-1}\mathbf{C}^{-1}\; , \mathbf{M} = \mathbf{C}^{-1} + \tau^{-2} \mathbf{I}\; ,\\ (\tilde{\mathbf{C}} + \tau^2\mathbf{I})^{-1} &= \tilde{\mathbf{C}}^{-1} - \tilde{\mathbf{C}}^{-1} \mathbf{M}^{\ast-1}\tilde{\mathbf{C}}^{-1}\; , \mathbf{M}^\ast = \tilde{\mathbf{C}}^{-1} + \tau^{-2}\mathbf{I}\; , \end{aligned} \end{equation} we find that the difference between the precision matrices over the collapsed space for the parent GP based model and for the latent NNGP model is \begin{align*} &(\mathbf{C}+\tau^2\mathbf{I})^{-1} - (\tilde{\mathbf{C}}+ \tau^2\mathbf{I})^{-1} = \mathbf{C}^{-1} - \mathbf{C}^{-1}\mathbf{M}^{-1}\mathbf{C}^{-1} - \tilde{\mathbf{C}}^{-1} + \tilde{\mathbf{C}}^{-1} \mathbf{M}^{\ast-1}\tilde{\mathbf{C}}^{-1}\\ &\quad= \underbrace{\mathbf{E} - \mathbf{E}\mathbf{M}^{-1}\tilde{\mathbf{C}}^{-1} - \tilde{\mathbf{C}}^{-1}\mathbf{M}^{-1}\mathbf{E} - \tilde{\mathbf{C}}^{-1}(\mathbf{M}^{-1} - \mathbf{M}^{\ast-1})\tilde{\mathbf{C}}^{-1}}_{\mathbf{B}} - \underbrace{\mathbf{E}\mathbf{M}^{-1}\mathbf{E}}_{\mathcal{O}(\mathbf{E}^2)} \end{align*} Representing $\mathbf{B}$ in terms of $\tilde{\mathbf{C}}^{-1}$, $\mathbf{M}^\ast$ and $\mathbf{E}$, where $\mathbf{E}$ is assumed to be nonsingular, we find \begin{equation} \label{eq: B_1} \begin{aligned} &\mathbf{B} = \;\mathbf{E} - \mathbf{E}\mathbf{M}^{\ast-1}\tilde{\mathbf{C}}^{-1} + \mathbf{E}\mathbf{M}^{\ast-1}(\mathbf{E}^{-1} +
\mathbf{M}^{\ast-1})^{-1}\mathbf{M}^{\ast-1}\tilde{\mathbf{C}}^{-1} - \tilde{\mathbf{C}}^{-1}\mathbf{M}^{\ast-1}\mathbf{E} \\ &\quad+ \tilde{\mathbf{C}}^{-1}\mathbf{M}^{\ast-1}(\mathbf{E}^{-1} + \mathbf{M}^{\ast-1})^{-1}\mathbf{M}^{\ast-1}\mathbf{E} +\tilde{\mathbf{C}}^{-1}\mathbf{M}^{\ast-1}(\mathbf{E}^{-1} + \mathbf{M}^{\ast-1})^{-1}\mathbf{M}^{\ast-1}\tilde{\mathbf{C}}^{-1}\; . \end{aligned} \end{equation} Using the Woodbury matrix identity and the Neumann-series expansion $(\mathbf{I} + \mathbf{X})^{-1} = \sum_{n = 0}^{\infty} (-\mathbf{X})^{n}$, valid when the spectral radius of $\mathbf{X}$ is less than one, we find \begin{align*} (\mathbf{E}^{-1} + \mathbf{M}^{\ast-1})^{-1}\mathbf{M}^{\ast-1} &= \{\mathbf{M}^{\ast}(\mathbf{E}^{-1} + \mathbf{M}^{\ast-1})\}^{-1} = \{\mathbf{M}^\ast \mathbf{E}^{-1} + \mathbf{I}\}^{-1} \\ & = \mathbf{I} - \{\mathbf{I} + \mathbf{E} \mathbf{M}^{\ast-1}\}^{-1} = \mathbf{I} - \{\mathbf{I} - \mathbf{E}\mathbf{M}^{\ast-1} + \mathcal{O}(\mathbf{E}^2)\} \\ &= \mathbf{E}\mathbf{M}^{\ast-1} + \mathcal{O}(\mathbf{E}^2)\;. \end{align*} Using the above identities and excluding the terms of order $\mathcal{O}(\mathbf{E}^2)$ in the expression for $\mathbf{B}$, the leading term in the difference is \begin{equation} \mathbf{B} = (\mathbf{I} - \tilde{\mathbf{C}}^{-1}\mathbf{M}^{\ast-1})\mathbf{E} (\mathbf{I} - \mathbf{M}^{\ast-1}\tilde{\mathbf{C}}^{-1}) = (\mathbf{I} + \tau^2 \tilde{\mathbf{C}}^{-1})^{-1}\mathbf{E}(\mathbf{I} + \tau^2 \tilde{\mathbf{C}}^{-1})^{-1} \; .
\end{equation} Using the spectral decomposition $(\mathbf{I} + \tau^2 \tilde{\mathbf{C}}^{-1}) = \mathbf{P}^\top (\mathbf{I} + \tau^2 \mathbf{D})\mathbf{P}$, where $\mathbf{P}$ is orthogonal and $\mathbf{D}$ is diagonal with positive diagonal elements, we obtain \begin{equation} \begin{aligned} \|\mathbf{B}\|_{F} &= \|\mathbf{P}^\top (\mathbf{I} + \tau^2 \mathbf{D})^{-1}\mathbf{P}\mathbf{E}\mathbf{P}^\top(\mathbf{I} + \tau^2 \mathbf{D})^{-1}\mathbf{P}\|_F = \| (\mathbf{I} + \tau^2 \mathbf{D})^{-1}\mathbf{P}\mathbf{E}\mathbf{P}^\top(\mathbf{I} + \tau^2 \mathbf{D})^{-1}\|_F\\ &\leq \| \mathbf{P}\mathbf{E}\mathbf{P}^\top\|_F = \|\mathbf{E}\|_F\; , \end{aligned} \end{equation} where $\|\cdot\|_F$ denotes the Frobenius matrix norm. The inequality also holds for the absolute value of the determinant and for induced $p$-norms. The equality holds if and only if $\tau^2 = 0$, in which case the difference coincides with the error matrix of the response NNGP model. Thus, the latent model tends to shrink the error from the Vecchia approximation, which explains the expected superior performance of the latent NNGP model over the response NNGP model in terms of KL-Ds. \section{Simulation}\label{sec: simulation} We implemented our models in the \texttt{Julia}~1.2.0 numerical computing environment \citep{bezanson2017julia}. All computations were conducted on an Intel Core i7-7700K CPU @ 4.20GHz processor with 4 cores and 2 threads per core---totaling 8 possible threads for use in parallel---running a Linux operating system (Ubuntu 18.04.2 LTS) with 32 Gbytes of random-access memory. Model diagnostics and other posterior summaries were implemented within the \texttt{Julia} and \texttt{R}~3.6.1 statistical computing environments. We simulated $\mathbf{y}(\mathbf{s}_i)$s using {\eqref{eq: conj_latent_model}} with $q = 2$ and $p = 2$ over $n=1200$ randomly generated locations inside a unit square. The following describes this process.
First, a design matrix $\mathbf{X}$ was fixed, with a first column of $1$'s and a single predictor generated from a standard normal distribution. We then generated $\boldsymbol{\omega}(\mathbf{s}_i)$'s over these locations and fixed their values. Finally, we generated $\mathbf{y}(\mathbf{s}_i)$s using {\eqref{eq: conj_latent_model}}. An exponential covariance function with decay $\phi$ was used to model $\rho_\psi(\cdot, \cdot)$ in {\eqref{eq: conj_latent_model}}, i.e., $ \rho_\psi(\mathbf{s}', \mathbf{s}'') = \exp{(-\phi\|\mathbf{s}' - \mathbf{s}''\|)}, \text{ for } \mathbf{s}', \mathbf{s}'' \in {\cal D}\;, $ where $\|\mathbf{s}' - \mathbf{s}''\|$ is the Euclidean distance between $\mathbf{s}'$ and $\mathbf{s}''$, and $\psi = \phi$. The parameter values fixed to generate the data are listed in Table~\ref{table:sim1}. We withheld 200 locations to evaluate predictive performance for the conjugate models and the benchmark models. For our analysis, we assigned a flat prior to $\boldsymbol{\beta}$ and an inverse-Wishart prior $\boldsymbol{\Sigma}\sim \mbox{IW}(\boldsymbol{\Psi}, \nu)$ with $\boldsymbol{\Psi} = \mathbf{I}_2$ (the $2\times 2$ identity matrix) and $\nu = 3$. {The candidate values for $\{\phi, \alpha\}$ in our cross-validation algorithms were chosen over a 25 by 25 grid defined by the range $[2.12, 26.52] \times [0.8, 0.99]$, where the support of $\phi$ corresponds to an effective spatial range (i.e., the distance at which the spatial correlation drops below 0.05) between $\sqrt{2}/12.5$ and $\sqrt{2}$, and the support of $\alpha$ is an interval centered at the actual value with width $0.2$. We used $K=5$-fold cross-validation for choosing the hyperparameters. The same setup was used for all models to ensure a fair comparison.} We used $500$ posterior samples for both the conjugate response and conjugate latent NNGP models. Note that these are draws from the exact posterior distribution, so there are no concerns about iterative convergence.
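For concreteness, the data-generating step can be sketched as follows (in Python/NumPy rather than the paper's \texttt{Julia}; the matrix-normal draw for $\boldsymbol{\omega}$ and the nugget specification $\mbox{cov}(\boldsymbol{\epsilon}) = (\alpha^{-1}-1)\boldsymbol{\Sigma}$ follow the description in the text and Table~\ref{table:sim1}, while the exact form of \eqref{eq: conj_latent_model} is assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, q = 1200, 2, 2
S = rng.uniform(size=(n, 2))                       # sites in the unit square
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta = np.array([[1.0, 1.0],
                 [-2.0, 2.0]])                     # true values in Table 1

phi = 6.0                                          # true decay in Table 1
dist = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=-1)
R = np.exp(-phi * dist)                            # exponential correlation

Sigma = np.array([[1.234, -0.701],
                  [-0.701, 1.077]])                # cov(omega) in Table 1
L_R = np.linalg.cholesky(R + 1e-10 * np.eye(n))    # jitter for stability
omega = L_R @ rng.standard_normal((n, q)) @ np.linalg.cholesky(Sigma).T

alpha = 0.9
cov_eps = (1.0 / alpha - 1.0) * Sigma              # nugget covariance
eps = rng.standard_normal((n, q)) @ np.linalg.cholesky(cov_eps).T
Y = X @ beta + omega + eps                         # n x q response matrix
```

The matrix-normal draw `L_R @ Z @ chol(Sigma).T` gives $\mbox{vec}(\boldsymbol{\omega})$ the covariance $\boldsymbol{\Sigma} \otimes \mathbf{R}$, so the two latent columns are negatively cross-correlated at every site.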
The run times for the conjugate models included the time for choosing hyperparameters through the cross-validation algorithm and the time for obtaining the posterior samples. Table~\ref{table:sim1} presents the posterior estimates of the regression coefficients $\boldsymbol{\beta} = \{\boldsymbol{\beta}_{ij}\}_{i = 1, j = 1}^{p, q}$, the covariance of the measurement error (labeled $\mbox{cov}(\boldsymbol{\epsilon})$), the covariance among the different latent processes (labeled $\mbox{cov}(\boldsymbol{\omega})$; this applies only to the latent NNGP model) and the hyperparameters $\{\phi, \alpha\}$. \begin{table}[t] \caption{Simulation study summary table: posterior mean (2.5\%, 97.5\%) percentiles} \begin{minipage}[t]{\textwidth} \centering \begin{tabular}{c|c|cc} \hline\hline & True & Conj resp & Conj latent \\ \hline $\boldsymbol{\beta}_{11}$ & 1.0 & 1.391 (0.814,1.902) & 1.459 (0.865,2.057) \\ $\boldsymbol{\beta}_{12}$ & 1.0 & 0.813 (0.344,1.286) & 0.734 (0.201,1.276) \\ $\boldsymbol{\beta}_{21}$ & -2.0 & -1.978 (-2.114,-1.841) & -1.979 (-2.121,-1.842) \\ $\boldsymbol{\beta}_{22}$ & 2.0 & 2.076 (1.952,2.21) & 2.082 (1.961,2.208) \\ $\mbox{cov}(\boldsymbol{\epsilon})_{11}$ & 0.222 & 0.226 (0.205,0.248) & 0.231 (0.212,0.252)\\ $\mbox{cov}(\boldsymbol{\epsilon})_{12}$ & -0.111 & -0.113 (-0.129,-0.099) & -0.115 (-0.128,-0.103)\\ $\mbox{cov}(\boldsymbol{\epsilon})_{22}$ & 0.167 & 0.172 (0.158,0.188) & 0.175 (0.16,0.189) \\ $\mbox{cov}(\boldsymbol{\omega})_{11}$ & 1.234 & -- & 1.208 (1.148,1.268) \\ $\mbox{cov}(\boldsymbol{\omega})_{12}$ & -0.701 & -- & -0.705 (-0.75,-0.658)\\ $\mbox{cov}(\boldsymbol{\omega})_{22}$ & 1.077 & -- & 1.077 (1.023,1.131)\\ $\phi$& 6.0 & 8.220 & 7.204 \\ $\alpha$& 0.9 & 0.863 & 0.871 \\ \hline RMSPE &--& \footnotemark[1][0.727; 0.602; 0.668] & \footnotemark[1][0.723; 0.6; 0.664]\\ MSEL &--& -- & \footnotemark[1][0.112; 0.112; 0.103] \\ CVG &--& \footnotemark[1][0.935; 0.955; 0.945] & \footnotemark[1][0.925; 0.95;
0.9375]\\ CVGL &--& -- & \footnotemark[1][0.957; 0.945; 0.951]\\ {MCRPS} &-- & \footnotemark[1][-0.408; -0.336; -0.372] & \footnotemark[1][-0.405; -0.334; -0.37]\\ time(s) &--& \footnotemark[2][12; 1] & \footnotemark[2][17; 1] \\ \hline\hline \end{tabular} \footnotetext[1]{[response 1; response 2; all responses]} \footnotetext[2]{[time for cross-validation in seconds; time for sampling in seconds]} \label{table:sim1} \end{minipage} \end{table} Table~\ref{table:sim1} lists the parameter estimates and performance metrics of the candidate models. The NNGP models in these experiments used $m = 10$ nearest neighbors. {The posterior inference of the regression slopes $\{\boldsymbol{\beta}_{21}, \boldsymbol{\beta}_{22}\}$ is similar between the response and latent models.} The 95\% {credible} intervals of the intercepts $\{\boldsymbol{\beta}_{11}, \boldsymbol{\beta}_{12}\}$ include the values used to generate the data. The covariance matrix of the measurement errors $\mbox{cov}(\boldsymbol{\epsilon})$ is defined as $(\alpha^{-1}-1)\boldsymbol{\Sigma}$ and computed using the posterior samples of $\boldsymbol{\Sigma}$ and the value of $\alpha$ obtained from cross-validation. The posterior samples of $\mbox{cov}(\boldsymbol{\omega})$ are computed directly from the posterior samples of $\boldsymbol{\omega}$. The conjugate NNGP models all yielded very similar RMSPEs {and MCRPSs}. The CVG and CVGL are close to 0.95, supporting reliable inference from the conjugate NNGP models. {The run time required by each conjugate model is less than 20 seconds.} The simulation example shows that fitting a conjugate model is a pragmatic method for quick inference in multivariate spatial data analysis. Figure~\ref{fig:sim1} presents interpolated maps of the simulated latent processes and of their posterior means. Panel~(a) presents an interpolated map of the values of the first spatial process $\omega_1(\mathbf{s})$ added to the corresponding intercept $\beta_{11}$ over the unit square.
Panel~(b) is the corresponding posterior estimate from the latent process model. Panels~(c)~and~(d) are the corresponding interpolated maps of the second process $\omega_2(\mathbf{s})$ added to the intercept $\beta_{12}$ and of its posterior estimate. The similarity in spatial patterns between the two data generating processes and their corresponding estimates reveals that these models and their fitting algorithms are able to effectively capture the features of the underlying processes and the differences between them. \begin{figure}[!ht] \subfloat[$\boldsymbol{\omega}_1 + \boldsymbol{\beta}_{11}$ true\label{subfig:sim1a}]{% \includegraphics[width=0.45\textwidth]{sim1_map-w1-true_conj.jpg} } \hfill \subfloat[$\boldsymbol{\omega}_1 + \boldsymbol{\beta}_{11}$ latent NNGP\label{subfig:sim1b}]{% \includegraphics[width=0.45\textwidth]{sim1_map-w1-fit_conj.jpg} }\\ \subfloat[$\boldsymbol{\omega}_2 + \boldsymbol{\beta}_{12}$ true\label{subfig:sim1e}]{% \includegraphics[width=0.45\textwidth]{sim1_map-w2-true_conj.jpg} } \hfill \subfloat[$\boldsymbol{\omega}_2 + \boldsymbol{\beta}_{12}$ latent NNGP\label{subfig:sim1f}]{% \includegraphics[width=0.45\textwidth]{sim1_map-w2-fit_conj.jpg} } \caption{Panels~(a)~and~(c) present interpolated maps of the true generated latent processes. Panels~(b)~and~(d) present the posterior means of the spatial latent processes $\boldsymbol{\omega}$ estimated using the conjugate latent NNGP model for the true processes (a)~and~(c), respectively. The NNGP based models were all fit using $m = 10$ nearest neighbors.\label{fig:sim1}} \end{figure} \section{Normalized Vegetation Index Data Analysis}\label{sec: real_data_analy} We implemented all our proposed models on vegetation index and land cover data \citep[see][for further details]{ramon2010modis, sulla2018user}. {We deal with two outcomes: (i) the standard Normalized Difference Vegetation Index (NDVI); and (ii) red reflectance.
NDVI is a robust and empirical measure of vegetation activity on the land surface that is important for understanding the global distribution of vegetation types, their biophysical and structural properties, and spatial-temporal variations \citep{ramon2010modis}. Red reflectance measures the spectral response in the red (0.6--0.7~$\mu$m) wavelength region. Both outcomes are sensitive to the vegetation amount. All data were mapped to Euclidean planar coordinates using the sinusoidal (SIN) grid projection following \citet{banerjee2005geodetic}. For the current analysis we restrict ourselves to zone \textit{h08v05}, which runs between 11,119,505 and 10,007,555 meters west of the prime meridian and between 3,335,852 and 4,447,802 meters north of the equator. This corresponds to the western United States. We included an intercept specific to each outcome and an indicator variable for no vegetation (or urban area), obtained from the 2016 land cover data, as our explanatory variables. All other data were measured by the MODIS satellite over a 16-day period from 2016.04.06 to 2016.04.21. Some variables were rescaled and transformed for numerical robustness following exploratory data analysis. The data sets were downloaded using the \texttt{R} package \texttt{MODIS}, and the code for the exploratory data analysis is available at \url{https://github.com/LuZhangstat/Conj_Multi_NNGP}.} There are 3,115,934 observed locations. We used transformed NDVI ($\log(\mbox{NDVI} + 1)$, labeled NDVI) and red reflectance as responses. The NNGP based models were constructed with $m = 10$ nearest neighbors. {We held out NDVI and red reflectance at 67,132 locations, with about half of them in the region between 10,400,000 and 10,300,000 meters west of the prime meridian and between 3,800,000 and 3,900,000 meters north of the equator. We evaluate the predictive performance of our models on these held-out locations and use the remaining locations for training the models.
Figure~\ref{subfig:real_conj_latent_mapsa} illustrates the map of the transformed NDVI data. The white square is the region held out for prediction.} Posterior inference from our conjugate models was based on 500 independent samples drawn directly from the exact posterior distribution. Since these samples are drawn directly from the conjugate posterior distribution, there is no need to monitor convergence. {We assigned a flat prior to all regression coefficients and assumed $\boldsymbol{\Sigma}\sim \mbox{IW}(\boldsymbol{\Psi}, \nu)$ with $\boldsymbol{\Psi} = \mbox{diag}([1.0, 1.0])$ and $\nu = 3$.} We recursively shrank the domain and grid of candidate values for $\{\phi, \alpha\}$ by repeatedly applying the cross-validation algorithm in Section~\ref{subsec: CV_conj_NNGP} to select and fix these parameters. The recorded run time for the cross-validation algorithms therefore varied substantially across different models. {We took $K=5$ folds in our cross-validation algorithm and ran the prediction on each fold in parallel}. All subsequent computations were run on a single thread. The results for the conjugate models are listed in Table~\ref{table:real_conj}. {Note that the cross-validation yielded optimal values of $\alpha \approx 1$, which implies a negligible nugget. This causes the predictions on the hold-out set to be smooth.} Consistent with the underlying science, the regression coefficient of the ``no vegetation'' (urban area) indicator is significantly negative for NDVI, which is to be expected since NDVI is a measure of greenness and low vegetation represents a lack of greenness. On the other hand, the ``no vegetation'' (urban area) coefficient is significantly positive for red reflectance, which is consistent with the fact that NDVI and red reflectance tend to be negatively associated.
In fact, this negative association persists even after we account for the ``no vegetation'' index, as seen from the estimated covariance matrices of the residual noise in both models and of the latent spatial process in the latent NNGP model. Model performance was compared in terms of RMSPE, CVG, {M}CRPS and run time. The spatial models, unsurprisingly, greatly improved predictive accuracy. In fact, the conjugate Bayesian spatial models effected a 35\% shrinkage in the magnitude of RMSPE over a non-spatial Bayesian linear model, i.e., one with $\boldsymbol{\omega}(\mathbf{s})=\mathbf{0}$ in \eqref{eq: conj_latent_model}. We therefore do not show the estimates from the non-spatial model. Table~\ref{table:real_conj} shows that the latent model slightly outperforms the response model in terms of RMSPE and MCRPS, while the CVG values for the two models are very comparable. Posterior sampling for the conjugate response and latent models took 1.8 and 18.88 minutes, respectively, which is impressive given our sample size of around $3\times 10^6$ locations. The run times for the cross-validation algorithm and the posterior sampling from the conjugate models are appealing for such massive data sets.
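The comparison metrics for Gaussian predictive distributions have simple closed forms. The sketch below (Python; the helper names are hypothetical, and the convention that MCRPS is reported as a negated mean CRPS, so that larger is better, is our assumption about the tables) computes RMSPE, 95\% interval coverage (CVG) and MCRPS from point predictions, predictive standard deviations and hold-out observations:

```python
import numpy as np
from math import erf, exp, pi, sqrt

def crps_gaussian(mu, sd, y):
    """CRPS of the Gaussian predictive N(mu, sd^2) at observation y
    (Gneiting-Raftery closed form)."""
    z = (y - mu) / sd
    Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF at z
    phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # standard normal density at z
    return sd * (z * (2.0 * Phi - 1.0) + 2.0 * phi - 1.0 / sqrt(pi))

def holdout_scores(mu, sd, y):
    """RMSPE, 95% coverage (CVG) and MCRPS (negated mean CRPS,
    an assumed sign convention so that larger is better)."""
    mu, sd, y = (np.asarray(a, dtype=float) for a in (mu, sd, y))
    rmspe = sqrt(np.mean((y - mu) ** 2))
    zc = 1.959964                             # 97.5% normal quantile
    cvg = np.mean((y >= mu - zc * sd) & (y <= mu + zc * sd))
    mcrps = -np.mean([crps_gaussian(m, s, o) for m, s, o in zip(mu, sd, y)])
    return rmspe, cvg, mcrps
```

For a perfect point prediction with unit predictive spread, the CRPS reduces to $2\phi(0) - 1/\sqrt{\pi} \approx 0.2337$, so MCRPS approaches zero from below as the predictive distributions concentrate on the observations.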
\begin{table}[!t] \caption{Vegetation Index data analysis: Posterior means (2.5\%,97.5\%) percentiles, model comparison metrics and run times (in minutes).} \centering \begin{minipage}[t]{\textwidth} \begin{tabular}{ccccc} \hline\hline & conj response & conj latent \\ \hline $\mbox{intercept}_1$ & 0.1023 (0.0822,0.1223)& 0.240729 (0.240723,0.240736) \\ $\mbox{intercept}_2$ & 0.2218 (0.2094,0.2338)& 0.144277 (0.144273,0.144281) \\ $\mbox{no vege or urban area}_1$ & -8.010e-3 (-8.233e-3,-7.796e-3) & -8.025e-3 (-8.050e-3,-8.001e-3) \\ $\mbox{no vege or urban area}_2$ & 4.381e-3 (4.261e-3,4.514e-3) & 4.390e-3 (4.376e-3,4.402e-3)\\ $\mbox{cov}(\boldsymbol{\epsilon})_{11}$ & 3.493e-5 (3.487e-5,3.499e-5) & 3.125e-5 (3.120e-5,3.130e-5)\\ $\mbox{cov}(\boldsymbol{\epsilon})_{12}$ & -1.214e-5 (-1.217e-5,-1.212e-5) & -1.086e-5 (-1.089e-5,-1.085e-5)\\ $\mbox{cov}(\boldsymbol{\epsilon})_{22}$ &1.090e-5 (1.089e-5,1.092e-5) & 9.760e-6 (9.745e-6,9.776e-6)\\ $\mbox{cov}(\boldsymbol{\omega})_{11}$ & -- & 1.7192e-2 ( 1.7190e-2,1.7193e-2) \\ $\mbox{cov}(\boldsymbol{\omega})_{12}$ & -- & -7.0307e-3 (-7.0314e-3,-7.03e-3) \\ $\mbox{cov}(\boldsymbol{\omega})_{22}$ & -- & 3.8897e-3 (3.8893e-3,3.8901e-3)\\ $(\phi,\alpha)$ & (17.919,0.999551) & (20.1755,0.999551)\\ \hline RMSPE & \footnotemark[1][0.05707; 0.03187; 0.04622] & \footnotemark[1][0.0503; 0.02572; 0.03995]\\ {MCRPS} & \footnotemark[1][-0.03301; -0.0188; -0.02591]& \footnotemark[1][-0.0314; -0.01748; -0.02444]\\ CVG & \footnotemark[1][0.9756; 0.9707; 0.9732] & \footnotemark[1][0.9764; 0.9715; 0.974]\\ time(mins) & \footnotemark[2][1012.18; 1.8] & \footnotemark[2][270.28; 18.88] \\ \hline\hline \end{tabular} \footnotetext[1]{[response 1; response 2; all responses]} \footnotetext[2]{[time for cross-validation in minutes; time for generating 500 samples in minutes]} \end{minipage} \label{table:real_conj} \end{table} Visual inspections of the predictive surfaces based on the conjugate response NNGP model are depicted in 
Figure~\ref{fig:real_conj_latent_maps}. The maps of the latent processes recovered by the conjugate latent NNGP model, also shown in Figure~\ref{fig:real_conj_latent_maps}, further corroborate the findings in Table~\ref{table:real_conj} regarding the negative association between the two latent processes for transformed NDVI and red reflectance. We see from Figure~\ref{fig:real_conj_latent_maps} that the blue and red regions for NDVI are essentially swapped in the map for red reflectance. Notably, the proposed methods smooth out the predictions in the held-out region, which is also a consequence of the cross-validation estimate of $\alpha\approx 1$. \begin{figure}[!ht] \subfloat[ \label{subfig:real_conj_latent_mapsa}]{% \includegraphics[width=0.2\textwidth]{conj_latent_map-raw1.jpg} } \hfill \subfloat[ \label{subfig:real_conj_latent_mapsb}]{% \includegraphics[width=0.2\textwidth]{conj_resp_map-fit1.jpg} } \hfill \subfloat[ \label{subfig:real_conj_latent_mapsc}]{% \includegraphics[width=0.2\textwidth]{conj_latent_map-fit1.jpg} } \hfill \subfloat[ \label{subfig:real_conj_latent_mapsd}]{% \includegraphics[width=0.2\textwidth]{conj_latent_map-incpw1.jpg} }\\ \subfloat[ \label{subfig:real_conj_latent_mapse}]{% \includegraphics[width=0.2\textwidth]{conj_latent_map-raw2.jpg} } \hfill \subfloat[\label{subfig:real_conj_latent_mapsf}]{% \includegraphics[width=0.2\textwidth]{conj_resp_map-fit2.jpg} } \hfill \subfloat[\label{subfig:real_conj_latent_mapsg}]{% \includegraphics[width=0.2\textwidth]{conj_latent_map-fit2.jpg} } \hfill \subfloat[\label{subfig:real_conj_latent_mapsh}]{% \includegraphics[width=0.2\textwidth]{conj_latent_map-incpw2.jpg} } \caption{Colored NDVI and red reflectance images (first and second rows, respectively) of the western United States (zone h08v05).
Panels (a)~and~(e) present interpolated maps of the raw data, (b)~and~(f) present interpolated posterior predictive means of NDVI and red reflectance from the conjugate NNGP response models, (c)~and~(g) present the posterior predictive maps of NDVI and red reflectance from the conjugate NNGP latent models, and (d)~and~(h) present the posterior predictive means of the intercept-centered latent processes corresponding to NDVI and red reflectance recovered from the conjugate NNGP latent model. \label{fig:real_conj_latent_maps}} \end{figure} \section{Summary and Discussion}\label{sec: Conclusions} We have presented a conjugate Bayesian multivariate spatial regression model using Matrix-Normal and Inverse-Wishart distributions. A specific contribution is to embed the latent spatial process within an augmented Bayesian multivariate regression to obtain posterior inference for the high-dimensional latent process with stochastic uncertainty quantification. For scalability to massive spatial datasets---our examples here comprise locations in the millions---we adopt the increasingly popular Vecchia approximation and, more specifically, the NNGP models that render savings in terms of storage and floating point operations. We present elaborate simulation experiments to test the performance of different models using datasets exhibiting different behaviors. Our conjugate modeling framework fixes hyperparameters using a $K$-fold cross-validation approach. While our analysis is based upon fixing these hyperparameters, the subsequent inference is seen to be effective in capturing the features of the generating latent process (in our simulation experiments) and is orders of magnitude faster than iterative alternatives at such massive scales as ours. We also applied our models, and compared them, in our analysis of an NDVI dataset. The scalability of our approach is guaranteed whenever the underlying univariate scalable model can exploit a tractable precision or covariance matrix.
Our approach can, therefore, incorporate other methods such as the multiresolution approximation (MRA) and more general Vecchia-type approximations \citep[see, e.g.,][]{katzfuss2017general,pbf2020}. Future work can extend and adapt this framework to univariate and multivariate spatiotemporal modeling. One modification is to use a dynamic nearest-neighbor Gaussian process (DNNGP) \citep{datta16b} instead of the NNGP in our models, which dynamically learns about space-time neighbors rather than fixing them. We can also develop conjugate Bayesian modeling frameworks for spatially-varying coefficient models, where the regression coefficients $\boldsymbol{\beta}$ are themselves random fields capturing the spatially-varying impact of predictors on the vector of outcomes. While conceptually straightforward, their actual implementation at massive scales will require substantial development. Developments in scalable statistical models must be accompanied by explorations in high performance computing. While the algorithms presented here are efficient in terms of storage and flops, they have been implemented on modest hardware. Implementations exploiting Graphical Processing Units (GPUs) and parallel CPUs can be explored further. For the latent NNGP models, the algorithms relied upon sparse iterative solvers such as the conjugate gradient and LSMR algorithms. Adapting such libraries to GPUs and other high performance computing hardware will need to be explored and tested further in the context of our spatial Gaussian process models. \section*{Supporting Information} The work of the first and second authors was supported, in part, by federal grants NSF/DMS 1513654, NSF/IIS 1562303, and NIH/NIEHS 1R01ES027027. The third author was supported by NSF/EF 1253225 and NSF/DMS 1916395, and by the National Aeronautics and Space Administration's Carbon Monitoring System project.
\section{Introduction} \label{Intro} Deuteron-proton elastic scattering is extensively used in the study of, e.g., meson production mechanisms in few nucleon systems at intermediate energies. For such experiments $dp$ elastic scattering is well suited for normalisation purposes, due to its high cross section over a large momentum transfer range (cf.\ Fig.~\ref{fig:dpreferencedatalog}). Previous work on meson production, e.g., Refs.~\cite{Mersmann:2007gw,Rausmann:2009dn,Mielke:2014xbu}, used the existing database~\cite{Dalkhazhav:1969cma,Winkelmann:1980ca,Irom:1984wr,Velichko:1988ed,Guelmez:1991he} for data normalisation, assuming that for low momentum transfers, i.e., $|t| < 0.4\;(\textrm{GeV}/c)^2$, the differential cross section as a function of $t$ is independent of the beam momentum in the proton kinetic energy range between $T_p = 641\;\textrm{MeV}$ and $T_p = 1000\;\textrm{MeV}$. \begin{figure}[h] \centering \includegraphics[width=1\linewidth]{Figure1} \caption{Unpolarised differential cross sections of $dp$ elastic scattering plotted as a function of the momentum transfer squared $-t$ for different data sets~\cite{Dalkhazhav:1969cma,Winkelmann:1980ca,Irom:1984wr,Velichko:1988ed,Guelmez:1991he}.} \label{fig:dpreferencedatalog} \end{figure} In contrast to the database at smaller momentum transfers $|t| < 0.1\;(\textrm{GeV}/c)^2$, that at larger $|t|$ is much poorer. High-precision data from the ANKE spectrometer, taken with a deuteron beam and a hydrogen target, allow further study of the behaviour of the unpolarised differential cross sections. They enlarge the database in the momentum transfer range $0.08 < |t| < 0.26\;(\textrm{GeV}/c)^2$ at deuteron momenta that correspond to proton energies between $T_p = 882.2\;\textrm{MeV}$ and $T_p = 918.3\;\textrm{MeV}$.
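The correspondence between deuteron beam momenta and equivalent proton energies can be checked with elementary kinematics. The short sketch below (Python; mass values taken from the PDG, and the identification via equal $\sqrt{s}$ is our assumption about the quoted conversion) maps a deuteron beam momentum on a hydrogen target to the proton kinetic energy of a proton beam on a deuterium target with the same centre-of-mass energy:

```python
import numpy as np

M_P, M_D = 938.2720813, 1875.612928    # proton, deuteron masses (MeV/c^2)

def equivalent_tp(p_d):
    """Proton kinetic energy (MeV) of a proton beam on a deuteron target
    reaching the same sqrt(s) as a deuteron beam of momentum p_d (MeV/c)
    on a proton target."""
    e_d = np.hypot(p_d, M_D)                   # deuteron total energy
    s = M_D**2 + M_P**2 + 2.0 * M_P * e_d      # Mandelstam s
    e_p = (s - M_P**2 - M_D**2) / (2.0 * M_D)  # proton total energy
    return e_p - M_P

print(equivalent_tp(3120.17), equivalent_tp(3204.16))  # ~883 and ~919 MeV
```

The results agree with the quoted $T_p = 882.2$--$918.3\;\textrm{MeV}$ range to within about $1\;\textrm{MeV}$; the small residual difference presumably reflects the exact conversion convention and mass inputs used.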
On the theoretical side, $pd$ elastic scattering in the GeV energy region has usually been analysed in terms of the Glauber diffraction model (or its various extensions), which is a high-energy and low-momentum-transfer approximation to the exact multiple-scattering series for the hadron-nucleus scattering amplitude. The original Glauber model~\cite{Franco:1965wi}, where spin degrees of freedom were neglected (or included only partially), has been refined~\cite{Platonova:2016xjq,Platonova:2010zz} by taking fully into account the spin structure of colliding particles, i.e., the spin-dependent $NN$ amplitudes and the $D$-wave component of the deuteron wave function, and also the double-charge-exchange process $p+d \to n+(pp) \to p+d$. In addition, while the majority of previous calculations made within the Glauber model employed simple parameterisations for the forward $NN$ amplitudes, the refined model~\cite{Platonova:2016xjq,Platonova:2010zz} suggests using accurate $NN$ amplitudes, based on modern $NN$ partial-wave analysis (PWA). By using the $NN$ PWA of the George Washington University SAID group (SAID)~\cite{Arndt:2007qn}, the model has been shown to describe small-angle $pd$ differential cross sections and also the more sensitive polarisation observables very well in the energy range $T_p = 200$\,--$1000\;\textrm{MeV}$~\cite{Platonova:2016xjq}. The refined Glauber model therefore seems ideally suited for the description of the experimental data presented here. On the other hand, the new high-precision data can provide a precise test for applicability of the Glauber model. The SAID group has recently published an updated $NN$ PWA solution~\cite{Workman:2016ysf}, which incorporates the new COSY-ANKE data on the near-forward cross section~\cite{Mchedlishvili:2015iwa} and analysing power $A_y$~\cite{Bagdasarian:2014mdj} in $pp$ elastic scattering, as well as the recent COSY-WASA $A_y$ data~\cite{Adlarson:2014pxj} in $np$ elastic scattering. 
We can therefore re-examine the predictions of the refined Glauber model obtained with the use of the previous PWA solution of 2007~\cite{Arndt:2007qn}. By performing calculations at various incident energies, we can also test the widely-used assumption of energy independence of the $pd$ elastic differential cross section at low momentum transfers. \section{Experimental Setup} \label{ExpSetup} The data were taken with the magnetic spectrometer ANKE~\cite{Barsov:2001xj} (cf.\ Fig.~\ref{ANKE} for a schematic representation of the setup), which is part of an internal fixed-target experimental setup located at the COoler SYnchrotron -- COSY of the Forschungszentrum J\"ulich. One of the main components of ANKE is the magnetic system, with its three dipole magnets D1--D3. The accelerated beam of unpolarised deuterons is deflected by the first dipole magnet D1 (cf.\ Fig.~\ref{ANKE}) into the target chamber, where the beam interacts with the internal hydrogen cluster-jet target~\cite{Khoukaz:1999}. \begin{figure}[!ht] \centering \includegraphics[width=1\linewidth]{Figure2} \caption{Schematic view of the ANKE magnetic spectrometer. It mainly consists of three dipole magnets, an internal hydrogen cluster-jet target and three detection systems (Pd-, Nd- and Fd-systems). The red lines represent possible tracks of positively charged particles and the blue lines those of negatively charged particles.} \label{ANKE} \end{figure} The second dipole magnet D2 separates the ejectiles by their electric charge and momentum into three different detection systems. The deuterons associated with $dp$ elastic scattering are deflected by D2 into the Forward (Fd) detection system, which was the only element used in this experiment. The Fd was designed and installed near the beam pipe to detect high-momentum particles. Beam particles not interacting with the internal target are deflected by the dipole magnets D2 and D3 back onto the nominal ring orbit.
A special feature of this magnetic spectrometer is the movable D2 magnet, which can be shifted perpendicular to the beam line. It is thus possible to optimise the geometrical acceptance of the detection system for each reaction that one would like to investigate. The deuteron beam momentum range from $3120.17\;\textrm{MeV}/c$ to $3204.16\;\textrm{MeV}/c$ was divided into 16 different fixed beam momenta (cf.\ Table~\ref{tab:BeamMomenta}; these settings were originally chosen for the determination of the $\eta$ meson mass~\cite{Goslawski:2012dn}) using the supercycle mode of COSY. In each supercycle it is possible to alternate between up to seven different beam settings, each with a cycle length of $206\;\textrm{s}$. The beam momentum spread $\Delta p_{\rm d}/p_{\rm d} < 6 \times 10^{-5}$ was determined using the spin depolarisation technique~\cite{Goslawski:2009vf}. \begin{table}[h] \centering \caption{Beam momenta $p_d$ in MeV/$c$ for each supercycle (SC) and flattop (FT).} \scriptsize \begin{tabular}{p{0.5cm}|p{0.7cm}p{0.7cm}p{0.7cm}p{0.7cm}p{0.7cm}p{0.7cm}p{0.7cm}}\hline & FT1 & FT2 & FT3 & FT4 & FT5 & FT6 & FT7 \\ \hline SC1 & 3120.17 & 3146.41 & 3148.45 & 3152.45 & 3158.71 & 3168.05 & 3177.51 \\ SC2 & 3120.17 & 3147.35 & 3150.42 & 3154.49 & 3162.78 & 3172.15 & 3184.87 \\ SC3 & & & & 3157.48 & 3160.62 & & 3204.16 \\ \hline \end{tabular} \label{tab:BeamMomenta} \end{table} \section{Event Selection and Analysis} \label{Analysis} As described above, deuterons originating from $dp$ elastic scattering are deflected by D2 into the Forward detection system, which consists of one multiwire drift chamber as well as two multiwire proportional chambers for track reconstruction. In addition, two scintillator hodoscopes, comprising eight vertically aligned scintillator strips for the first and nine for the second hodoscope, are used for particle identification using the energy-loss information and time-of-flight measurements. 
During the data taking, a dedicated hardware trigger was used, which required two coincident scintillator signals, one in each of the two Fd hodoscopes. Since the cross section for $dp$ elastic scattering is very large, this hardware trigger was pre-scaled by a factor of 1024 to reduce the dead time of the data acquisition system. On account of the small momentum transfer to the target proton, the forward-going deuterons, whose tracks are reconstructed in the Forward detection system, have momenta close to that of the beam. Since only deuterons from elastic scattering have such a high momentum, the reaction can be identified with no physical background from meson production. Reconstructed particles with a momentum $p$ satisfying $p/p_{d} < 0.913$ are discarded to improve the signal-to-noise ratio. In order to avoid uncertainties caused by small inhomogeneities of the magnetic field at the edges of the D2 magnet, an additional cut on the $y$ hit position in the first multiwire proportional chamber (with $y$ being the axis perpendicular to the COSY plane) is applied: events with $|y_{\textrm{{\scriptsize hit}}}| > 105\;\textrm{mm}$ are discarded. For $dp$ elastic scattering the geometrical acceptance of the ANKE magnetic spectrometer is limited to $0.06 < |t| < 0.31\;(\textrm{GeV}/c)^2$. However, to avoid systematic edge effects, only events in the region $0.08 < |t| < 0.26\;(\textrm{GeV}/c)^2$ were analysed, with a bin width of $\Delta t = 0.01\;(\textrm{GeV}/c)^2$. The missing-mass analysis of Fig.~\ref{fig:MMElastic} shows a prominent signal at the proton mass sitting on top of a very small and seemingly constant background. \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{Figure3} \caption{Missing-mass spectrum of the $dp\to dX$ reaction at $p_{d} = 3120.17\;\textrm{MeV}/c$ for $0.08 < |t| < 0.09\;(\textrm{GeV}/c)^2$. The blue dashed line represents a constant background fit to the spectrum, excluding the $\pm3\sigma$ region around the peak.} \label{fig:MMElastic} \end{figure} A Gaussian fit to the peak was used to define its position and width, and a constant background was fitted outside the $\pm 3\sigma$ region. After subtracting this background, the missing-mass spectra are integrated to obtain the number of $dp$ elastic scattering events for each of the 18 momentum transfer bins at all 16 different beam momenta. The detector acceptance, which drops from 15\% to 7\% with increasing momentum transfer, was determined using Monte Carlo simulations. The simulated events were subjected to the same software cuts as the data, so that the acceptance-corrected yield can be determined for each beam momentum setting. The resulting differential cross sections are presented in Sec.~\ref{Results}. \section{Theoretical calculation} \label{Theory} The theoretical calculation of the $pd$ elastic scattering cross section was performed at four incident proton energies $T_p = 800$, $900$, $950$ and $1000\;\textrm{MeV}$ within the refined Glauber model~\cite{Platonova:2016xjq,Platonova:2010zz}. The differential cross section is related to the amplitude $M$ as \begin{equation} \label{dsm} \mbox{$\textrm{d}$}\sigma\! / \!\mbox{$\textrm{d}$} t = {\textstyle \frac{1}{6}}{\rm Sp}\,\bigl(MM^+\bigr). \end{equation} The $pd$ amplitude $M$ in the Glauber approach contains two terms, corresponding to single and double scattering of the projectile off the nucleons in the deuteron. 
These terms are expressed through the on-shell $NN$ amplitudes ($pp$ amplitude $M_p$ and $pn$ amplitude $M_n$) and the deuteron wave function $\Psi_d$: \begin{equation} \label{msd} M({\bf q}) = M^{(s)}({\bf q}) + M^{(d)}({\bf q}), \end{equation} \begin{equation} \label{ms} M^{(s)}({\bf q}) = \int \mbox{$\textrm{d}$}^{3}r\, e^{i{\bf q}{\bf r}/2} \Psi_d({\bf r}) \left[M_n({\bf q}) + M_p({\bf q})\right] \Psi_d({\bf r}), \end{equation} \begin{equation} \label{md} M^{(d)}({\bf q}) = \frac{i}{4 \pi^{3/2}}\int \mbox{$\textrm{d}$}^{2}q' \int \mbox{$\textrm{d}$}^{3}r\, e^{i{\bf q'}{\bf r}} \Psi_d({\bf r}) \times \end{equation} \begin{equation*} \Bigl[M_n({\bf q_2}) M_p({\bf q_1}) + M_p({\bf q_2}) M_n({\bf q_1}) - M_c({\bf q_2}) M_c({\bf q_1})\Bigr] \Psi_d({\bf r}), \end{equation*} where ${\bf q}$ is the overall 3-momentum transfer (so that $t = -q^2$ in the centre-of-mass system), while ${\bf q_1} = {\bf q}/2 - {\bf q'}$ and ${\bf q_2} = {\bf q}/2 + {\bf q'}$ are the momenta transferred in collisions with individual target nucleons, and $M_c({\bf q}) = M_n({\bf q}) - M_p({\bf q})$ is the amplitude of the charge-exchange process $pn \to np$. When spin dependence is taken into account, the $NN$ amplitudes $M_n$, $M_p$ and the deuteron wave function $\Psi_d$ are non-commuting operators in the three-nucleon spin space. They can be expanded into several independent terms that are invariant under spatial rotations and space and time reflections, and the coefficients of the expansions are, respectively, the $NN$ invariant amplitudes (five for both $pp$ and $pn$ scattering) and $S$- and $D$-wave components of the deuteron wave function. The $pd$ amplitude $M$ is also expanded into $12$ independent terms. 
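To make the structure of the single- and double-scattering terms above more concrete, the following sketch evaluates them numerically in a deliberately simplified, spinless setting: the $NN$ amplitudes are taken purely imaginary and Gaussian, and the deuteron is represented by a single Gaussian form factor. All parameter values here are illustrative assumptions, not the inputs of the actual calculation; only the qualitative interplay of the two terms is meant to be reproduced.

```python
import numpy as np

# Toy, spinless sketch of the single- and double-scattering terms of the
# Glauber amplitude. All parameters are illustrative assumptions, chosen
# only to mimic a rapidly falling NN amplitude and a compact deuteron.
A, beta = 5.0, 4.0   # NN amplitude M_N(q) = i*A*exp(-beta*|q|^2), |q|^2 in (GeV/c)^2
alpha = 12.0         # Gaussian deuteron form factor S(q) = exp(-alpha*|q|^2)

def M_N(q2):
    return 1j * A * np.exp(-beta * q2)

def S(q2):
    return np.exp(-alpha * q2)

def M_single(q):
    # M^(s)(q) = [M_n(q) + M_p(q)] S(q/2); here M_n = M_p = M_N (spinless toy)
    return 2.0 * M_N(q**2) * S((q / 2.0)**2)

def M_double(q, n=201, qmax=2.0):
    # Crude 2D quadrature over the internal momentum transfer q', with
    # q_{1,2} = q/2 -/+ q' and the deuteron form factor evaluated at q'.
    qx = np.linspace(-qmax, qmax, n)
    QX, QY = np.meshgrid(qx, qx)
    q1_2 = (q / 2.0 - QX)**2 + QY**2
    q2_2 = (q / 2.0 + QX)**2 + QY**2
    integrand = S(QX**2 + QY**2) * M_N(q1_2) * M_N(q2_2)
    dq2 = (qx[1] - qx[0])**2
    return 1j / (4.0 * np.pi**1.5) * integrand.sum() * dq2

for t in (0.05, 0.15, 0.30):
    q = np.sqrt(t)
    print(f"|t|={t:.2f}: |M_s|={abs(M_single(q)):.3f}, |M_d|={abs(M_double(q)):.4f}")
```

With such Gaussian inputs the internal integral could of course be done in closed form; the brute-force quadrature is used here only to mirror the double-scattering integral directly. One sees the expected pattern: single scattering dominates at small $|t|$ but falls off steeply, while the slowly varying double-scattering term becomes relatively more important at larger momentum transfers.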
After some spin algebra and an integration over the spatial coordinate, all the $pd$ invariant amplitudes can be explicitly related to the $NN$ invariant amplitudes and the various components of the deuteron form factor $S({\bf q}) = {\textstyle \int} \mbox{$\textrm{d}$}^3 r \, e^{i{\bf q}{\bf r}}|\Psi_d({\bf r})|^2$. The detailed derivation and the final formulae of the refined Glauber model can be found in Refs.~\cite{Platonova:2016xjq,Platonova:2010zz}. The $NN$ invariant amplitudes at low momentum transfers are easily evaluated from the centre-of-mass helicity amplitudes, which can be constructed from empirical $NN$ phase shifts. For the present calculation, we used the phase shifts of the latest PWA solution of the SAID group~\cite{Workman:2016ysf}. There are, in fact, two PWA solutions published in Ref.~\cite{Workman:2016ysf}, viz.\ the unweighted fit SM16 and the weighted fit WF16. Unlike their earlier solution SP07~\cite{Arndt:2007qn}, both new SAID solutions incorporate the recent high-precision COSY-ANKE data~\cite{Mchedlishvili:2015iwa,Bagdasarian:2014mdj} on the near-forward differential cross section ($1.0 \leq T_p \leq 2.8$~GeV) and analysing power $A_y$ ($0.8 \leq T_p \leq 2.4$~GeV) in $pp$ elastic scattering and the COSY-WASA data~\cite{Adlarson:2014pxj} on $A_y$ in $np$ scattering at $T_n = 1.135\;\textrm{GeV}$. However, by construction the WF16 solution describes the new COSY-ANKE results better, since the weights of these data were enhanced in that fit. The $NN$ partial-wave amplitudes obtained in the SM16, WF16 and SP07 solutions begin to deviate significantly from each other only for $T_p \geq 1\;\textrm{GeV}$. We examined both new PWA solutions at $T_p = 900\;\textrm{MeV}$ and found the $pd$ differential cross section with WF16 input to be lower than that produced by SM16 by between 1\% and 3\% for $0.08 < |t| < 0.26\;(\textrm{GeV}/c)^2$. 
This small difference gives some measure of the uncertainties arising from the input on-shell $NN$ amplitudes. For the three other energies ($T_p = 800$, $950$ and $1000\;\textrm{MeV}$) we employed the WF16 $NN$ PWA solution, and at $T_p = 1\;\textrm{GeV}$ we also compared the results with those obtained with the SP07 input used in earlier works~\cite{Platonova:2016xjq,Platonova:2010zz}. The changes ranged from 1\% to 8\% in the momentum transfer interval $0.08 < |t| < 0.26\;(\textrm{GeV}/c)^2$. Due to the rapid fall-off of the $NN$ amplitudes with momentum transfer, the $pd$ predictions in the Glauber model are sensitive mainly to the long-range behaviour of the deuteron wave function. We used the one derived from the CD-Bonn $NN$-potential model~\cite{Machleidt:2000ge}, though choosing a different (but realistic) wave function would change the resulting $pd$ cross section by not more than about $1$--$2$\%~\cite{Platonova:2010zz}. The dependence of the $NN$ helicity amplitudes on the momentum transfer $q$, as well as the dependence of the deuteron $S$- and $D$-wave functions on the inter-nucleon distance $r$, were parameterised by convenient five-Gaussian fits~\cite{Platonova:2016xjq,Platonova:2010zz}. The fitted $NN$ amplitudes coincide with the exact ones at momentum transfers $q < 0.7\;\textrm{GeV}/c$, and the fitted deuteron wave functions with the exact ones at distances $r < 20\;\textrm{fm}$. This parameterisation allows us to perform the calculations fully analytically. 
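Schematically (this generic form is our paraphrase; the actual coefficient values are those tabulated in Refs.~\cite{Platonova:2016xjq,Platonova:2010zz}), each fitted quantity $f$ (an $NN$ helicity amplitude as a function of $q$, or a deuteron radial wave function as a function of $r$) is represented as

```latex
% Generic five-Gaussian fit; the coefficients c_j, \lambda_j are complex for
% the amplitudes and real for the wave functions (values from the cited refs.).
f(x) \;\simeq\; \sum_{j=1}^{5} c_j \, e^{-\lambda_j x^2}.
```

Since products, Fourier transforms and convolutions of Gaussians are again Gaussian, all the single- and double-scattering integrals then reduce to closed-form expressions, which is what makes the calculation fully analytic.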
\section{Results} \label{Results} The normalisation of the data presented here is obtained using the fit \begin{equation} \mbox{$\textrm{d}$}\sigma/\mbox{$\textrm{d}$} t = \exp(a + b|t| + c|t|^2)~\mu\textrm{b}/(\textrm{GeV}/c)^2 \end{equation} in the momentum transfer range $0.05 < |t| < 0.4\;(\textrm{GeV}/c)^2$ to the combined database from Refs.~\cite{Dalkhazhav:1969cma,Winkelmann:1980ca,Irom:1984wr,Velichko:1988ed,Guelmez:1991he}, which led to the parameters $a = 12.45$, $b = -27.24\;(\textrm{GeV}/c)^{-2}$ and $c = 26.31\;(\textrm{GeV}/c)^{-4}$. To normalise the acceptance-corrected counts at each beam momentum, both the fit to the reference database and the numbers of counts are integrated over the momentum transfer range $0.08 < |t| < 0.09\;(\textrm{GeV}/c)^2$. Assuming $\mbox{$\textrm{d}$}\sigma/\mbox{$\textrm{d}$} t$ is independent of the beam momentum, the ratio between the two integrals defines the scaling factor for each beam momentum that takes into account, e.g., different integrated luminosities. The differential cross sections thus determined for all 16 beam momenta are shown in Fig.~\ref{fig:DiffCrossSection}. The plots of the differential cross sections at the 16 different beam momenta show that their shapes are independent of beam momentum over the available momentum range. As a consequence, it is possible to evaluate the differential cross section for each of the 18 momentum transfer bins averaged over the 16 energies (cf.\ Fig.~\ref{fig:DiffCrossSection}, Fig.~\ref{fig:NewWQDatabase}, and Table~\ref{tab:AveragedWQ}). The systematic uncertainties caused by, e.g., the uncertainty in the angle calibration in the D2 magnet are negligible compared to the statistical uncertainties that are presented in Table~\ref{tab:AveragedWQ}. 
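As a concrete sketch of this normalisation step (the fit parameters are those quoted above, while the raw count number below is a hypothetical stand-in for one beam-momentum setting):

```python
import numpy as np

# Reference fit d(sigma)/dt = exp(a + b|t| + c|t|^2) in microbarn/(GeV/c)^2,
# with the parameters quoted in the text.
a, b, c = 12.45, -27.24, 26.31

def dsigma_dt(t_abs):
    return np.exp(a + b * t_abs + c * t_abs**2)

# Integrate the reference fit over the normalisation bin 0.08 < |t| < 0.09
t = np.linspace(0.08, 0.09, 1001)
y = dsigma_dt(t)
ref_integral = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))  # trapezoidal rule

# Hypothetical acceptance-corrected counts in the same bin for one setting;
# the ratio defines the scaling factor applied to all t-bins of that setting.
counts_in_bin = 1.23e5
scale = ref_integral / counts_in_bin
print(f"reference integral: {ref_integral:.1f} microbarn, scale: {scale:.3e}")
```

Note that the fit evaluated at the bin centre $|t| = 0.085\;(\textrm{GeV}/c)^2$ is indeed close to the averaged value of about $3.0 \times 10^{4}\;\mu\textrm{b}/(\textrm{GeV}/c)^2$ listed in Table~\ref{tab:AveragedWQ}.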
\begin{table}[h] \centering \caption{Differential cross section $\overline{\mbox{$\textrm{d}$}\sigma/\mbox{$\textrm{d}$} t}$ and statistical uncertainty of $dp$ elastic scattering averaged over all 16 different beam momenta.} \begin{tabular}{ccc} $|t|$ & $\overline{\mbox{$\textrm{d}$}\sigma/\mbox{$\textrm{d}$} t}$ & $\Delta \overline{\mbox{$\textrm{d}$}\sigma/\mbox{$\textrm{d}$} t}_{\textrm{\scriptsize stat}}$ \\ $(\textrm{GeV}/c)^2$ & $\mu \textrm{b}/(\textrm{GeV}/c)^2$ & $\mu \textrm{b}/(\textrm{GeV}/c)^2$ \\ \hline 0.085 & 29898 & 193 \\ 0.095 & 23624 & 155 \\ 0.105 & 21014 & 140 \\ 0.115 & 16448 & 112 \\ 0.125 & 13562 & 95 \\ 0.135 & 11295 & 82 \\ 0.145 & 8546 & 65 \\ 0.155 & 7534 & 59 \\ 0.165 & 6212 & 51 \\ 0.175 & 5098 & 45 \\ 0.185 & 4264 & 39 \\ 0.195 & 3575 & 35 \\ 0.205 & 2963 & 31 \\ 0.215 & 2573 & 29 \\ 0.225 & 2249 & 26 \\ 0.235 & 1909 & 24 \\ 0.245 & 1575 & 21 \\ 0.255 & 1379 & 20 \\ \hline \end{tabular} \label{tab:AveragedWQ} \end{table} From the comparison of the results with the theoretical calculation at $T_p=900\;\textrm{MeV}$ (see Figs.~\ref{fig:DiffCrossSection} and \ref{fig:NewWQDatabase}), it is seen that the refined Glauber model describes our data very well at low momentum transfers $0.08 < |t| < 0.2\;(\textrm{GeV}/c)^2$. It is also evident from Fig.~\ref{fig:NewWQDatabase} that the refined Glauber model calculation agrees similarly with the existing database for $|t| < 0.1\;(\textrm{GeV}/c)^2$. Fig.~\ref{fig:WQGlauberCompared} shows the ratio of the averaged cross section determined in the present experiment to that calculated within the refined Glauber model. The scatter of this ratio around unity for $0.08 < |t| < 0.18\;(\textrm{GeV}/c)^2$ is consistent with the scatter of experimental data around the smooth curve fitting the reference database (see Fig.~\ref{fig:WQGlauberCompared}). 
\begin{figure*}[!h] \centering \includegraphics[width=0.87\linewidth,trim = 0cm 0cm 0cm 1cm]{Figure4} \caption{Differential cross sections for deuteron-proton elastic scattering for deuteron laboratory momenta between 3120.17 and $3204.16\;\textrm{MeV}/c$. These are labelled in terms of the proton kinetic energy for a deuteron target ($882.2 \leq T_p \leq 918.3\;\textrm{MeV}$). Also shown is the average over the 16 available measurements. The purple ($T_p = 800\;\textrm{MeV}$), red ($T_p = 900\;\textrm{MeV}$), green ($T_p = 950\;\textrm{MeV}$), and blue ($T_p = 1000\;\textrm{MeV}$) lines represent the refined Glauber model calculations (with the use of the SAID $NN$ PWA, solution WF16~\cite{Workman:2016ysf}) and the dashed black line the fit to the $dp$-elastic database from \cite{Dalkhazhav:1969cma,Winkelmann:1980ca,Irom:1984wr,Velichko:1988ed,Guelmez:1991he}.} \label{fig:DiffCrossSection} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=1\linewidth]{Figure5} \caption{Differential cross sections $\overline{d\sigma/dt}$ averaged over the 16 available energies in the range $882.2 \leq T_p \leq 918.3\;\textrm{MeV}$ compared with the existing database \cite{Dalkhazhav:1969cma,Winkelmann:1980ca,Irom:1984wr,Velichko:1988ed,Guelmez:1991he} and the refined Glauber model calculation at $T_p = 900\;\textrm{MeV}$ (with the use of the SAID $NN$ PWA, solutions WF16 and SM16~\cite{Workman:2016ysf}).} \label{fig:NewWQDatabase} \end{figure*} At the higher momentum transfers, the theoretical curve begins to deviate from experiment, and this is likely to be due to a failure of the small-momentum-transfer approximations (the inclusion of only single and double scattering, the neglect of recoil, etc.) involved in the Glauber theory. 
On the other hand, it was found in Ref.~\cite{Platonova:2016xjq} that at the lower energies of $T_p = 250$ and $440\;\textrm{MeV}$ the refined Glauber model calculations agree with the data on the $pd$ elastic differential cross section out to at least $|t| = 0.3\;(\textrm{GeV}/c)^2$, i.e., in the same region where exact three-body Faddeev equations describe the data. However, the accuracy of the Glauber model, which is a high-energy approximation to the exact theory, should get better at higher collision energy. The deviations noted here for $|t| > 0.2\;(\textrm{GeV}/c)^2$ might arise from dynamical mechanisms that are not taken into account in either the approximate (Glauber-like) or the exact (Faddeev-type) approach. For example, there could be contributions from a three-nucleon ($3N$) force whose importance rises with collision energy and momentum transfer. One conventional $3N$-force, induced by two-pion exchange with an intermediate $\Delta(1232)$-isobar excitation, is known to contribute to $pd$ large-angle scattering at intermediate energies (see, e.g.,~\cite{Sekiguchi:2014vva}). However, one might also consider three-body forces caused by the meson exchange between the proton and the six-quark core of the deuteron (the deuteron dibaryon)~\cite{Kukulin:2003tk}. Indeed, at larger momentum transfers, the incident proton probes shorter $NN$ distances in the deuteron, so that scattering of the proton off the deuteron as a whole could occur with increasing probability. \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{Figure6} \caption{Ratio of our measured differential cross sections $\overline{d\sigma/dt}$ averaged over the 16 available energies to the refined Glauber model calculation at $T_p = 900\;\textrm{MeV}$ (with the use of the SAID $NN$ PWA, solutions WF16 and SM16~\cite{Workman:2016ysf}). 
The grey bars represent the ratio of the averaged differential cross sections to the fit to the reference database.} \label{fig:WQGlauberCompared} \end{figure} The preliminary results of taking the one-meson exchange between the incident proton and the deuteron dibaryon into account in $pd$ elastic scattering have shown this $3N$-force contribution to increase slightly the $pd$ differential cross section already at moderate momentum transfers~\cite{Platonova:2012md}. This interesting question clearly requires further investigation. The calculations at different proton energies from $800$ to $1000\;\textrm{MeV}$ show a gradual energy dependence of the $pd$ differential cross section (see Fig.~\ref{fig:DiffCrossSection}). The theoretical curves at the four energies intersect at around $|t| = 0.08\;(\textrm{GeV}/c)^2$ and then begin to deviate from each other. The difference between the calculated cross sections at $T_p = 800$ and $1000\;\textrm{MeV}$ reaches 13\% at $|t| = 0.2\;(\textrm{GeV}/c)^2$. The increasing slope of the curve implies that at these energies the interaction radius in $pd$ (as well as $NN$) elastic scattering effectively increases with energy. As a result, the forward diffraction peak in the cross section becomes higher and narrower. This means that the $pd$ elastic cross section integrated over $0 < |t| < 0.2\;(\textrm{GeV}/c)^2$ increases slightly with energy (by $4$\% from $800$ to $1000\;\textrm{MeV}$), though its part above $|t| = 0.08\;(\textrm{GeV}/c)^2$ (the lower limit of the present experiment) decreases a little. Hence, whereas the $pd$ elastic cross section as a function of the momentum transfer squared is usually assumed to be constant in the energy and momentum-transfer range considered, the present model calculations reveal a slight energy dependence of the magnitude and slope of the $pd$ elastic cross section. 
This result has already been taken into account for the normalisation of the recent COSY-WASA experimental data on $\eta$-meson production in $pd$ collisions~\cite{Adlarson:2018rgs}. \section{Summary} \label{Summary} Due to the small number of particles involved, deuteron-proton elastic scattering at intermediate energies is well suited for the study of various non-standard mechanisms of hadron interaction, such as the production of nucleon isobars, dibaryon resonances, etc. However, even for $dp$ elastic scattering, the experimental database is sparse at momentum transfers $|t| > 0.1\;(\textrm{GeV}/c)^2$. In this work, new precise measurements of the differential cross sections for $dp$ elastic scattering at 16 equivalent proton energies between $T_p = 882.2\;\textrm{MeV}$ and $T_p = 918.3\;\textrm{MeV}$ in the range $0.08 < |t| < 0.26\;(\textrm{GeV}/c)^2$ have been presented. Since the shapes of the differential cross sections were found to be independent of beam momentum, it was possible to determine precise average values over the whole momentum transfer range. The experimental data at low momentum transfers $|t| < 0.2\;(\textrm{GeV}/c)^2$ are well described by the refined Glauber approach at an average energy $T_p = 900\;\textrm{MeV}$. These calculations take full account of spin degrees of freedom and use accurate input $NN$ amplitudes based on the most recent partial-wave analysis of the SAID group~\cite{Workman:2016ysf}. The deviations of the theoretical predictions from experimental data observed at the higher momentum transfers are likely to be due to a failure of the small-momentum-transfer approximations involved in the Glauber model. These deviations might also reflect the missing contributions of some dynamical mechanisms such as $3N$ forces. 
The calculations at different energies, i.e., $T_p = 800$, $900$, $950$ and $1000\;\textrm{MeV}$, show a slight energy dependence (increasing slope) of the $pd$ elastic cross section as a function of the momentum transfer squared $|t|$. The predicted energy dependence may be trusted in the momentum transfer region where the refined Glauber model describes the data. This behaviour should be taken into account when using $pd$ elastic scattering for the normalisation of other data. However, the energy dependence found in this region is so weak that it cannot be identified in existing data. Very precise measurements for at least two distinct energies (say, $T_p = 800$ and $1000\;\textrm{MeV}$) would be needed to observe it. In addition to the unpolarised differential cross sections, it would be interesting to study the momentum transfer and energy behaviour of polarisation observables (analysing powers, etc.), which can readily be calculated within the refined Glauber model at the same energies $T_p = 800$--$1000\;\textrm{MeV}$. The theoretical predictions for such observables will be presented in a forthcoming paper. \section*{Acknowledgments} \label{Acknow} We are grateful to the other members of the ANKE collaboration and the COSY crew for their work and the good experimental conditions during the beam time. This work has been supported by the JCHP FEE and the Russian Foundation for Basic Research, grant No.~16-02-00265. \bibliographystyle{model1a-num-names}
\section{Introduction} The following system describes the evolution of heterogeneous incompressible flows under the influence of gravity, \begin{equation}\label{eq:non-hydrostatic} \begin{aligned} {\partial}_t \rho + ({\bm{u}}+{\bm{u}}_\star) \cdot \nabla_{\bm{x}} \rho + (w+w_\star) {\partial}_z \rho&=0, \\ \rho\big( {\partial}_t {\bm{u}} + (({\bm{u}}+{\bm{u}}_\star) \cdot \nabla_{\bm{x}}) {\bm{u}} + (w+w_\star) {\partial}_z {\bm{u}}\big) + \nabla_{\bm{x}} P&=0, \\ \rho\big({\partial}_t w +({\bm{u}}+ {\bm{u}}_\star) \cdot \nabla_{\bm{x}} w + (w+w_\star) {\partial}_z w \big)+ {\partial}_z P +g\,\rho&=0, \\ \nabla_{\bm{x}} \cdot{\bm{u}} +{\partial}_z w&=0,\\ P|_{z=\zeta}-P_{\rm atm}&=0,\\ {\partial}_t\zeta+({\bm{u}}+{\bm{u}}_\star)|_{z=\zeta}\cdot\nabla_{\bm{x}} \zeta-(w+w_\star)|_{z=\zeta}&=0,\\ w|_{z=-H}&=0. \end{aligned} \end{equation} Here, $t$ and $({\bm{x}},z)$ are the time, and horizontal-vertical space variables, and we denote by $\nabla_{\bm{x}}, \Delta_{\bm{x}}$ the gradient and Laplacian with respect to ${\bm{x}}$. The vector field $({\bm{u}}, w) \in \mathbb{R}^d \times \mathbb{R}$ is the (horizontal and vertical) velocity, $\rho>0 $ is the density, $P\in\mathbb{R}$ is the incompressible pressure, all being defined in the spatial domain \[ \Omega_t=\{({\bm{x}},z) \ : \ {\bm{x}}\in\mathbb{R}^d ,\ -H<z<\zeta(t,{\bm{x}})\}, \] where $\zeta(t, {\bm{x}})$ describes the location of a free surface, and $H$ is the depth of the layer at rest. The gravity field is assumed to be constant and vertical, and $g>0$ is the gravity acceleration constant. Finally, $({\bm{u}}_\star, w_\star) \in \mathbb{R}^d \times \mathbb{R}$ are correctors of the effective transport velocities that take into account eddy correlations in non-eddy-resolving (large-scale) models. 
Their specific forms were proposed by Gent \& McWilliams~\cite{GMCW90} and read as follows \begin{equation}\label{eq:McWilliams} {\bm{u}}_\star=\kappa {\partial}_z\left(\frac{\nabla_{\bm{x}}\rho}{{\partial}_z\rho}\right)\ , \quad w_\star = -\kappa \nabla_{\bm{x}}\cdot\left(\frac{\nabla_{\bm{x}}\rho}{{\partial}_z\rho}\right)\,, \qquad \kappa>0\,. \end{equation} Discarding the effective advection terms (i.e. setting $\kappa=0$), one recovers the Euler equations for heterogeneous incompressible fluids under the influence of vertical gravity forces, where the last two lines of~\eqref{eq:non-hydrostatic} model the kinematic equation at the free surface and the impermeability condition of the rigid bottom respectively. In~\eqref{eq:non-hydrostatic}, the pressure $P$ can be recovered from its (atmospheric) value at the surface, $P_{\rm atm}$, by solving the elliptic boundary-value problem induced by the incompressibility constraint of divergence-free velocity fields. Yet in the shallow-water regime, where the horizontal scale of the perturbation is large compared with the depth of the layer $H$, formal computations (see below) suggest that vertical accelerations can be neglected and that the pressure $P$ approximately satisfies the hydrostatic balance law, that is \begin{equation}\label{eq:hydr-balance} {\partial}_z P+g\,\rho=0. 
\end{equation} Replacing the equation for the vertical velocity in ~\eqref{eq:non-hydrostatic} by the identity in \eqref{eq:hydr-balance} yields the so-called \emph{hydrostatic equations}: \begin{equation}\label{eq:hydrostatic} \begin{aligned} {\partial}_t \rho + {\bm{u}} \cdot \nabla_{\bm{x}} \rho + w {\partial}_z \rho&=\kappa \left[\nabla_{\bm{x}} \cdot \left( \frac{\nabla_{\bm{x}} \rho}{{\partial}_z \rho} \right) {\partial}_z \rho - {\partial}_z \left( \frac{\nabla_{\bm{x}} \rho}{{\partial}_z \rho} \right) \cdot \nabla_{\bm{x}} \rho\right], \\ \rho\big( {\partial}_t {\bm{u}} + ({\bm{u}} \cdot \nabla_{\bm{x}}) {\bm{u}} + w {\partial}_z {\bm{u}}\big) + \nabla_{\bm{x}} P&= \kappa \rho \left[\nabla_{\bm{x}} \cdot \left( \frac{\nabla_{\bm{x}} \rho}{{\partial}_z \rho} \right) {\partial}_z {\bm{u}} - {\partial}_z \left( \frac{\nabla_{\bm{x}} \rho}{{\partial}_z \rho} \right) \cdot \nabla_{\bm{x}} {\bm{u}}\right], \\ P&=P_{\rm atm}+g\int_z^\zeta \rho(z',\cdot) \dd z',\\ w&=-\int_{-H}^z\nabla_{\bm{x}} \cdot{\bm{u}}(z',\cdot) \dd z',\\ {\partial}_t\zeta+{\bm{u}}|_{z=\zeta} \cdot\nabla_{\bm{x}} \zeta -w|_{z=\zeta} &=-\kappa \left[ \nabla_{\bm{x}} \cdot \left( \frac{\nabla_{\bm{x}} \rho}{{\partial}_z \rho} \right)\big|_{z=\zeta} +{\partial}_z \left( \frac{\nabla_{\bm{x}} \rho}{{\partial}_z \rho} \right)\big|_{z=\zeta} \cdot \nabla_{\bm{x}} \zeta\right]. \end{aligned} \end{equation} Our aim in this work is to rigorously justify the hydrostatic equations~\eqref{eq:hydrostatic} as an asymptotic model for the non-hydrostatic equations~\eqref{eq:non-hydrostatic}-\eqref{eq:McWilliams} in the shallow-water regime, for regular and stably stratified flows. \subsection*{Modeling aspects} Let us now discuss the relevance and our motivation behind the introduction of the additional transport velocities ${\bm{u}}_\star$ and $w_\star$ defined in~\eqref{eq:McWilliams}. 
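Before doing so, let us record an elementary but useful observation: since ${\partial}_z$ and $\nabla_{\bm{x}}$ commute, the corrector field~\eqref{eq:McWilliams} is itself divergence-free, so that the effective advecting field $({\bm{u}}+{\bm{u}}_\star, w+w_\star)$ in~\eqref{eq:non-hydrostatic} satisfies the same incompressibility constraint as $({\bm{u}}, w)$:

```latex
% Divergence of the Gent--McWilliams correctors:
\nabla_{\bm{x}}\cdot {\bm{u}}_\star + {\partial}_z w_\star
  = \kappa\,{\partial}_z\,\nabla_{\bm{x}}\cdot\left(\frac{\nabla_{\bm{x}}\rho}{{\partial}_z\rho}\right)
    - \kappa\,{\partial}_z\,\nabla_{\bm{x}}\cdot\left(\frac{\nabla_{\bm{x}}\rho}{{\partial}_z\rho}\right)
  = 0.
```

In particular, adding the correctors does not alter the divergence-free structure of the transport terms.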
While taking into account viscosity effects is standard in mathematical treatments of fluid mechanics, it should be mentioned that the aforementioned shallow-water regime where horizontal scales are larger than vertical scales produces anisotropic viscosity terms which are predominant in the vertical direction. However, it is worth pointing out that in theoretical and laboratory studies on density-stratified geophysical flows, viscosity effects do not model molecular viscosity but rather ``turbulent'' or ``eddy'' viscosities, and are widely reported to be anisotropic and only relevant in the horizontal (or more precisely isopycnal) direction; see {\em e.g.}~\cite{Griffies2003}*{Section 17.6}. In this work we decide to neglect viscosity effects altogether and rather focus on diffusivity. The deterministic modeling of effective diffusivity induced by eddy correlation that we adopt in this work has its roots in the 1990s and is due to Gent \& McWilliams~\cite{GMCW90}, see also~\cites{GMCW95, GMCW96}. It adds suitable correctors, specifically suggesting~\eqref{eq:McWilliams}, to the advective velocity field of the system of inhomogeneous incompressible fluids subject to gravity as in~\eqref{eq:non-hydrostatic}. Since mesoscale eddies have an averaged dissipative effect on the large-scale flow at the macroscopic level, it is natural to consider our unknowns $(\rho, {\bm{u}}, w)$ as the large-scale components of the density and the velocity field respectively, according to~\cite{GMCW90}. This work is motivated by the theoretical study of the interplay between (stable) stratification, shallow-water limits, and diffusive effects, and leaves aside other important ingredients which are usually considered in the so-called primitive equations modeling large-scale flows (see {\em e.g.} \cite{Griffies2003}): typically the rotational effects, vertical boundaries, bathymetry and several tracers ---say salinity and temperature--- coupled by an equation of state. 
Many of these constituents could be easily incorporated in our study at the price of blurring the main mechanisms at stake, while interesting singular limits (geostrophic balance, boundary layers, {\em etc.}) would of course deserve a specific treatment. Let us finally mention that there exists a huge mathematical literature dedicated to the investigation of fluid-dynamics equations in the probabilistic setting, where the cumulative effect of mesoscale eddies on the large-scale flow is modeled by means of suitable (additive or multiplicative) noises. For that context, we refer to~\cites{flandoliGL2021,flandoliGL2021-2}, while our setting will be completely deterministic. \subsection*{Previous mathematical results and motivation} At a technical level, the main reason to introduce viscosity or diffusivity contributions in the equations is that, without any of them, the initial-value problem for the hydrostatic equations is not known to be well-posed in finite-regularity functional spaces. In fact, restricting to homogeneous flows (that is, $\rho$ being constant), ill-posedness was established by Renardy~\cite{Renardy09} at the linear level, and by Han-Kwan and Nguyen~\cite{HanKwanNguyen16} at the nonlinear level. Yet if we additionally assume that the initial data satisfies the Rayleigh condition of (strict) convexity/concavity in the vertical direction, well-posedness is restored~\cites{Brenier99,Grenier99,MasmoudiWong12}. Now, assuming stably stratified flows, the celebrated Miles and Howard criterion~\cites{Miles61,Howard61} states that the linearized equations about equilibria $(\bar\rho(z),\bar{\bm{u}}(z))$ do not exhibit unstable modes (in dimension $d = 1$; see~\cite{Gallay}*{Remark 1.3} when $d = 2$) provided that the local Richardson number is greater than 1/4 everywhere, that is \[\forall z\in [-H,0],\qquad |\bar{\bm{u}}'(z)|^2\leq 4 g \left(\frac{-\bar\rho'(z)}{\bar\rho(z)}\right).\] Notice that the stabilizing (resp. 
destabilizing) effect of the stable stratification (resp. shear velocity) is clearly encoded by the above criterion. However, we underline again that the well-posedness of the (nonlinear) hydrostatic equations for initial data (strictly) satisfying the above inequality is still an open problem. This is in sharp contrast with the available results on the non-hydrostatic equations. In this context, we mention the recent work by Desjardins, Lannes and Saut~\cite{Desjardins-Lannes-Saut}, which is close to our framework and provides the well-posedness of the (inviscid and non-diffusive) non-hydrostatic equations in Sobolev spaces (using the rigid-lid assumption). Even though the stabilizing effect of the stable stratification is also a key ingredient of that work, it is not powerful enough to prove that the lifespan of the solutions to the non-hydrostatic equations is uniform with respect to the shallow-water parameter measuring the ratio of vertical to horizontal lengths, without additional smallness conditions on the initial data. A more detailed comparison between~\cite{Desjardins-Lannes-Saut} and our results is provided later on. From the technical viewpoint, the reason for this discrepancy -- in terms of the available results -- between the non-hydrostatic and hydrostatic equations is that the vertical velocity variable $w$ changes its role from prognostic (when it belongs to the set of unknowns) to diagnostic (when it is reconstructed from the unknowns), thereby losing one order of regularity; see the fourth equation in~\eqref{eq:hydrostatic}. In order to overcome the difficulties related to this loss of derivatives, without restricting the analysis to the analytic setting as already done in~\cites{KukavicaTemamVicolEtAl11,PaicuZhangZhang20}, one natural approach is the introduction of (regularizing) viscosity contributions. 
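To make the Miles--Howard criterion recalled above concrete, the following self-contained Python sketch evaluates the local Richardson number along a hypothetical exponential density profile with constant shear. All numerical values and profile choices are illustrative assumptions and are not taken from the present analysis.

```python
# Hypothetical stably stratified column on z in [-H, 0]:
#   rho_bar(z) = rho0 * exp(-alpha * z)   (density decreases upward since alpha > 0)
#   bar_u'(z)  = S                        (constant shear, an illustrative choice)
g, H, alpha = 9.81, 100.0, 1e-4

def richardson(z, S):
    # Local Richardson number Ri(z) = g * (-rho_bar'(z) / rho_bar(z)) / |bar_u'(z)|^2.
    # For the exponential profile, -rho_bar'/rho_bar = alpha, independent of z.
    return g * alpha / S ** 2

def miles_howard_stable(S, n=101):
    # No unstable modes of the linearized equations when Ri(z) >= 1/4 for all z.
    return all(richardson(-H * i / (n - 1), S) >= 0.25 for i in range(n))
```

With these (hypothetical) numbers, a weak shear $S=0.01$ satisfies the criterion, while $S=0.1$ violates it, reflecting the competition between stratification and shear encoded in the inequality above.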
This is the framework of most of the theoretical studies concerning the hydrostatic equations and/or the hydrostatic limit, starting with the work of Azérad and Guillén~\cite{AzeradGuillen2001}. A landmark in the theory is the work of Cao and Titi~\cite{CaoTiti07}, where the global well-posedness of the initial-value problem for the hydrostatic equations was proved in dimension $d+1=3$: this striking result should be compared with the state of the art on the Navier-Stokes equations. Several mathematical studies, where partial viscosities and diffusivities and/or more physically relevant boundary conditions are investigated, were carried out later on. Rather than providing an extensive bibliography for this huge set of results, we limit ourselves to pointing out the works~\cites{CT2014,CT2016}, which extended the previous results to the case where only horizontal viscosity and diffusivity are added to the equations. We also mention~\cites{LiTiti2019,FurukawaGigaHieberEtAl20,LiTitiYuan21} (in the homogeneous case) and~\cites{PuZhou21,PuZhou22} (in the heterogeneous case) for recent results on the hydrostatic limit and an extended list of references therein. A peculiarity of our analysis with respect to the previous ones (with the exception of~\cite{Desjardins-Lannes-Saut}) is that we shall crucially {\em use} the (stable) density stratification assumption, but we completely neglect viscosity-induced regularization and only allow for diffusivity effects. We shall also keep track of all relevant parameters in our estimates, and in particular will use diffusivity-induced regularization only when crucially needed. This allows us to characterize the relevant convergence rates and timescales, and to exhibit a balance between the destabilizing effect of shear velocities and the stabilizing effect of diffusivity. 
Moreover, to the best of our knowledge, this is the first rigorous mathematical study where the specific form of the diffusivity contributions, due to Gent and McWilliams~\cite{GMCW90} and modeling effective diffusivity induced by eddy correlation, is taken into account. It is worth highlighting that the specific form of the effective advection terms in~\eqref{eq:McWilliams} stems from {\em isopycnal diffusivity}: consistently, we will study equations~\eqref{eq:non-hydrostatic} and~\eqref{eq:hydrostatic} in the coordinate system known as {\em isopycnal coordinates} (hence intrinsically relying on the stable stratification assumption). In the following paragraph, we rewrite the equations passing to isopycnal coordinates. Our main results are described and discussed thereafter, and proved in the next sections. \subsection*{The model in isopycnal coordinates and non-dimensionalization} Let us consider smooth solutions to~\eqref{eq:non-hydrostatic} defined on a time interval $I_t$. Assuming that the flow is \emph{stably stratified}, i.e. \[ \inf(-{\partial}_z \rho ) > 0, \] the density $\rho: z \rightarrow \rho(\cdot, \cdot, z)$ is an invertible function of $z$. We denote its inverse $\cH: \varrho \rightarrow \cH(\cdot, \cdot, \varrho)$, so that \[ \rho(t, {\bm{x}}, \cH(t, {\bm{x}}, \varrho))=\varrho, \quad \cH(t, {\bm{x}}, \rho(t, {\bm{x}}, z))=z. \] We also assume that $\rho(t, {\bm{x}}, -H)=\rho_1, \; \rho(t, {\bm{x}}, \zeta(t, {\bm{x}}))=\rho_0$ for $(t, {\bm{x}}) \in I_t \times \mathbb{R}^d$, where $\rho_0 < \rho_1$ are two fixed and positive constant reference densities. Then we have \begin{equation}\label{def:eta} \cH :I_t\times\Omega\to \mathbb{R} \quad \text{ with } \quad \Omega := \mathbb{R}^d\times (\rho_0, \rho_1) \quad \text{ and } \quad h:=- \partial_\varrho \cH>0 , \end{equation} the latter inequality accounting for the stable stratification assumption. 
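As a concrete (purely illustrative) sketch of this change of variables, the following pure-Python snippet inverts a hypothetical strictly decreasing linear density profile by bisection and recovers $h=-\partial_\varrho \cH>0$ by a centered finite difference; the profile, the domain and the tolerances are assumptions made for the example only.

```python
# Hypothetical strictly decreasing density profile on [-H_depth, 0]:
#   rho(z) = rho1 + (rho0 - rho1) * (z + H_depth) / H_depth,
# so that rho(-H_depth) = rho1 and rho(0) = rho0 (stable stratification).
H_depth, rho0, rho1 = 1.0, 1.0, 2.0

def rho(z):
    return rho1 + (rho0 - rho1) * (z + H_depth) / H_depth

def cH(varrho, tol=1e-12):
    # Inverse profile: the depth z solving rho(z) = varrho, found by bisection.
    # Well defined precisely because the stratification is stable (rho is monotone).
    lo, hi = -H_depth, 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rho(mid) > varrho:
            lo = mid  # density at mid too large: the sought depth lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def h(varrho, eps=1e-6):
    # h = -d(cH)/d(varrho) > 0, approximated by a centered difference.
    return -(cH(varrho + eps) - cH(varrho - eps)) / (2 * eps)
```

For this linear profile one has exactly $\cH(\varrho)=1-\varrho$, so the sketch recovers $\rho(\cH(\varrho))=\varrho$ and $h\equiv 1>0$, in agreement with the stable stratification assumption.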
We now introduce \[ \check{\bm{u}}(t,{\bm{x}},\varrho)={\bm{u}}(t,{\bm{x}},\cH(t,{\bm{x}},\varrho)), \quad \check w(t,{\bm{x}},\varrho)=w(t,{\bm{x}}, \cH(t,{\bm{x}},\varrho)), \quad \check P(t,{\bm{x}},\varrho)=P(t,{\bm{x}}, \cH(t,{\bm{x}},\varrho)). \] From the chain rule, we infer that system~\eqref{eq:non-hydrostatic} in isopycnal coordinates reads \begin{equation}\label{eq:nonhydro-iso} \begin{aligned} \partial_t \cH+\check{\bm{u}} \cdot\nabla_{\bm{x}} \cH-\check w&=\kappa \Delta_{\bm{x}} \cH,\\ \varrho\Big( \partial_t\check{\bm{u}}+\big((\check{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{ h}\big)\cdot\nabla_{\bm{x}} \big)\check{\bm{u}}\Big)+ \nabla_{\bm{x}} \check P+\frac{\nabla_{\bm{x}} \cH}{ h} \partial_\varrho \check P&=0,\\ \varrho\Big( \partial_t\check w+\big(\check {\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{ h}\big)\cdot\nabla_{\bm{x}} \check w\Big)- \frac{{\partial}_\varrho \check P}{ h} + g \varrho&=0,\\ -h\nabla_{\bm{x}}\cdot \check{\bm{u}}-(\nabla_{\bm{x}} \cH)\cdot (\partial_\varrho\check{\bm{u}}) +\partial_\varrho \check w&=0,\\ \check P\big|_{\varrho=\rho_0} =P_{\rm atm}, \qquad \check w\big|_{\varrho=\rho_1}&=0. \end{aligned} \end{equation} Notice that differentiating with respect to $\varrho$ the first equation and using the fourth equation (stemming from the incompressibility constraint), the mass conservation reads \begin{equation}\label{eq:h} \partial_t h+\nabla_{\bm{x}}\cdot(h\check{\bm{u}})=\kappa\Delta_{\bm{x}} h. \end{equation} At this point, we are ready to introduce a dimensionless version of the previous system. We are interested in departures from steady solutions to the incompressible Euler equations with variable density: \[ ({h_{\rm eq}},{\bm{u}}_{\rm eq}, w_{\rm eq}, P_{\rm eq})=({\bar h(\varrho)},\bar{\bm{u}}(\varrho), 0, \bar P(\varrho)), \] which satisfy the equilibrium condition \[ {\partial}_\varrho \bar P(\varrho) = g\varrho \bar h(\varrho). 
\] Therefore, we consider (not necessarily small) fluctuations of that steady solution, so that our unknowns admit the following decomposition: \begin{align*} h(t,{\bm{x}},\varrho)&{=\bar h(\varrho)+h_{\rm pert}(t,{\bm{x}},\varrho) },& \quad \check{\bm{u}}(t,{\bm{x}},\varrho)&=\bar{{\bm{u}}}(\varrho)+ {\bm{u}}_{\rm pert}(t,{\bm{x}},\varrho), \\ \check w(t,{\bm{x}},\varrho)&=0+ w_{\rm pert}(t,{\bm{x}},\varrho), &\quad \check P(t,{\bm{x}},\varrho)&=\bar P(\varrho)+ P_{\rm pert}(t,{\bm{x}},\varrho). \end{align*} Furthermore, we non-dimensionalize the equations through the following scaled variables: we set \[ \frac{h (t, {\bm{x}}, \varrho)}{H}=\tilde {\bar h}(\varrho) + \tilde h (\tilde t, \tilde{\bm{x}}, \varrho) \quad \text{ and } \quad \frac{\check {\bm{u}}(t,{\bm{x}},\varrho)}{\sqrt{gH}}=\tilde{\bar{\bm{u}}}( \varrho)+ \tilde {\bm{u}}(\tilde t,\tilde{\bm{x}}, \varrho), \] and \footnote{Notice the different scaling between the horizontal and vertical velocity fields. Here, $\lambda$ is a reference horizontal length.} \[ \frac{\lambda}{H}\frac{\check w(t,{\bm{x}},\varrho)}{\sqrt{gH}}= \tilde w(\tilde t,\tilde{\bm{x}}, \varrho), \quad \frac{\check P(t,{\bm{x}},\varrho)}{gH}=\frac{P_{\rm atm}}{ g H} +\int_{\rho_0}^\varrho \varrho' \tilde{\bar h}(\varrho') \, d\varrho'+ \tilde P(\tilde t,\tilde{\bm{x}}, \varrho),\] where we use the following scaled coordinates\footnote{\label{f.rho}We could also scale the $\varrho$-coordinate. Adjusting the other variables accordingly, this amounts to setting $\rho_0=1$ (say). In the following we shall not discuss the dependency with respect to $\rho_1$, and in particular the physically relevant limit of small density contrast, $\frac{\rho_1-\rho_0}{\rho_0}\ll1$; see~\cite{Duchene16} and references therein. 
} \[ \tilde {\bm{x}}=\frac{{\bm{x}}}{\lambda} \quad \text{ and } \quad \tilde t=\frac{\sqrt{gH}}{\lambda} t.\] Introducing the dimensionless diffusion parameter, $\tilde\kappa$ and the shallowness parameter, $\mu$, through \[ \tilde \kappa = \frac{\kappa}{ \lambda\sqrt{gH}} \quad \text{ and } \quad \mu=\frac{H^2}{\lambda^2},\] substituting the scaled coordinates/variables in system~\eqref{eq:nonhydro-iso} and the subsequent equation {\em and dropping the tildes for the sake of readability} yields \begin{align}\label{eq:nonhydro-iso-intro} \partial_{ t} h+\nabla_{ {\bm{x}}} \cdot\big(({\bar h}+ h)({\bar {\bm{u}}}+ {\bm{u}})\big)&= \kappa \Delta_{ {\bm{x}}} h,\notag\\ \varrho\Big( \partial_{ t} {\bm{u}}+\big(({\bar {\bm{u}}} + {\bm{u}} - \kappa\tfrac{\nabla_{ {\bm{x}}} h}{ {\bar h}+ h})\cdot\nabla_{\bm{x}}\big) {\bm{u}}\Big)+ \nabla_{ {\bm{x}}} P+\frac{\nabla_{ {\bm{x}}} \cH}{ {\bar h}+ h}(\varrho {\bar h} + \partial_\varrho P ) &=0,\\ \mu \varrho\Big( \partial_{ t} w+(\bar {\bm{u}} + {\bm{u}} - \kappa\tfrac{\nabla_{ {\bm{x}}} h}{ \bar h+ h})\cdot\nabla_{ {\bm{x}}} w\Big)- \frac{ {\partial}_\varrho P}{\bar h+ h} + \frac{\varrho h}{\bar h + h}&=0,\notag\\ -(\bar h + h) \nabla_{{\bm{x}}} \cdot {\bm{u}}-\nabla_{{\bm{x}}} \cH \cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}) +\partial_\varrho w&=0, \notag\quad \text{(div.-free cond.)} \\ \cH(\cdot, \varrho)=\int_{\varrho}^{\rho_1} h(\cdot, \varrho')\dd\varrho', \qquad P\big|_{\varrho=\rho_0} =0, \qquad w\big|_{\varrho=\rho_1}&=0. \notag \quad \text{(bound. cond.)} \end{align} The hydrostatic system is obtained by setting $\mu=0$ in~\eqref{eq:nonhydro-iso-intro}. 
Specifically, plugging the hydrostatic balance \[ \frac{ {\partial}_\varrho P}{\bar h+ h} = \frac{\varrho h}{\bar h + h} \quad \text{ and } \quad P\big|_{\varrho=\rho_0} =0\] into the second equation of~\eqref{eq:nonhydro-iso-intro} yields \begin{subequations}\label{eq:hydro-iso-intro} \begin{equation}\label{eq:hydro-iso-intro-eq} \begin{aligned} \partial_t h+\nabla_{\bm{x}}\cdot((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}}))&=\kappa\Delta_{\bm{x}} h,\\ \varrho\Big( \partial_t{\bm{u}}+\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}} \big){\bm{u}}\Big)+ \nabla_{\bm{x}} \psi&=0, \end{aligned} \end{equation} with \begin{align} \psi(t,{\bm{x}},\varrho) &=\int_{\rho_0}^\varrho \varrho' h(t,{\bm{x}},\varrho')\dd\varrho' + \varrho \int_{\varrho}^{\rho_1} h(t, {\bm{x}}, \varrho')\dd\varrho' \nonumber\\ &=\rho_0\int_{\rho_0}^{\rho_1} h(t,{\bm{x}},\varrho')\dd\varrho' +\int_{\rho_0}^\varrho \int_{\varrho'}^{\rho_1} h(t,{\bm{x}},\varrho'')\dd\varrho''\dd\varrho'. \label{eq:nablaphi-intro} \end{align} \end{subequations} We shall provide a rigorous proof of the convergence of (smooth) solutions to~\eqref{eq:nonhydro-iso-intro} towards (smooth) solutions to~\eqref{eq:hydro-iso-intro} as $\mu \searrow 0$, under the stable stratification assumption, $\bar h+h>0$. \subsection*{Our main results} Our main results are stated and commented below. Some notations, and in particular the Sobolev spaces $H^{s,k}(\Omega)$, are introduced right after. First, we prove the existence, uniqueness and control of the solutions to the hydrostatic system~\eqref{eq:hydro-iso-intro} for sufficiently smooth initial data. Let us point out that the existence time of our solutions encodes the aforementioned stabilizing (resp. destabilizing) effect of the stable stratification (resp. shear velocity). 
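For orientation only, the dimensionless parameters $\mu$ and $\tilde\kappa$ introduced in the non-dimensionalization above can be evaluated on illustrative large-scale values; all numbers in the following minimal Python sketch are hypothetical and are not taken from the text.

```python
import math

# Illustrative (hypothetical) large-scale values, for orientation only.
g     = 9.81    # gravity, m/s^2
H     = 1.0e3   # vertical length scale, m
lam   = 1.0e5   # horizontal length scale lambda, m
kappa = 1.0e3   # eddy diffusivity, m^2/s (an assumed order of magnitude)

mu          = (H / lam) ** 2                    # shallowness parameter mu = H^2 / lambda^2
kappa_tilde = kappa / (lam * math.sqrt(g * H))  # dimensionless diffusivity
```

With these assumed scales both $\mu$ and $\tilde\kappa$ come out of order $10^{-4}$, i.e.~small, which is the regime (shallow water, weak diffusivity) in which the hydrostatic limit is studied below.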
\begin{theorem}\label{thm-well-posedness} Let $s,k\in\mathbb{N}$ be such that $s> 2 +\frac d 2$, $2\leq k\leq s$, and $\bar M,M,h_\star,h^\star>0$ and $0<\rho_0<\rho_1$ be fixed. Then there exists $C>0$ such that for any $\kappa\in(0,1]$, any $\bar h,\bar {\bm{u}}\in W^{k,\infty}((\rho_0,\rho_1)) $ satisfying \[ \norm{\bar h}_{W^{k,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho }\leq \bar M\] and any initial data $(h_0, {\bm{u}}_0) \in H^{s,k}(\Omega)$, with $\cH_0(\cdot,\varrho)=\int_{\varrho}^{\rho_1} h_0(\cdot,\varrho')\dd\varrho'$, satisfying the following estimate \[ M_0:=\Norm{\cH_0}_{H^{s,k}}+\Norm{{\bm{u}}_0}_{H^{s,k}}+\norm{\cH_0\big\vert_{\varrho=\rho_0}}_{H^s_{\bm{x}}}+\kappa^{1/2}\Norm{h_0}_{H^{s,k}} \le M; \] and the stable stratification assumption \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star \leq \bar h(\varrho)+h_0({\bm{x}},\varrho) \leq h^\star , \] there exists a unique $(h^{\rm h},{\bm{u}}^{\rm h})\in \mathcal{C}^0([0,T];H^{s,k}(\Omega)^{1+d})$ solution to~\eqref{eq:hydro-iso-intro} with $(h^{\rm h},{\bm{u}}^{\rm h})\big\vert_{t=0}=(h_0,{\bm{u}}_0)$, where \begin{equation}\label{def:time} T^{-1}= C\, \big(1+ \kappa^{-1} \big(\norm{\bar {\bm{u}}'}_{L^2_\varrho}^2+M_0^2\big) \big). 
\end{equation} Moreover, $h^{\rm h}\in L^2(0,T;H^{s+1,k}(\Omega))$ and one has, for any $t\in[0,T]$, \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star/2 \leq \bar h(\varrho)+h^{\rm h}(t,{\bm{x}},\varrho) \leq 2\,h^\star , \] and, denoting $\cH^{\rm h}(\cdot,\varrho)=\int_{\varrho}^{\rho_1} h^{\rm h}(\cdot,\varrho')\dd\varrho'$, \begin{multline*} \Norm{\cH^{\rm h}(t,\cdot)}_{H^{s,k}}+\Norm{{\bm{u}}^{\rm h}(t,\cdot)}_{H^{s,k}} +\norm{\cH^{\rm h}\big\vert_{\varrho=\rho_0}(t,\cdot)}_{H^s_{\bm{x}}} +\kappa^{1/2}\Norm{h^{\rm h}(t,\cdot)}_{H^{s,k}} \\+ \kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH^{\rm h}}_{L^2(0,T;H^{s,k})} + \kappa^{1/2} \norm{\nabla_{\bm{x}} \cH^{\rm h} \big\vert_{\varrho=\rho_0} }_{ L^2(0,T;H^s_{\bm{x}})} +\kappa \Norm{\nabla_{\bm{x}} h^{\rm h}}_{L^2(0,T;H^{s,k})} \leq CM_0. \end{multline*} \end{theorem} In our second main result, we prove that within the timescale defined by~\eqref{def:time}, there exist solutions to the non-hydrostatic equations~\eqref{eq:nonhydro-iso-intro} for $\mu$ sufficiently small, and they converge towards the corresponding solutions of the hydrostatic equations, with the expected $\mathcal O(\mu)$ convergence rate. \begin{theorem}\label{thm-convergence} There exists $p\in\mathbb{N}$ such that for any $ k=s \in \mathbb{N}$, $\bar M,M,h_\star,h^\star>0$ and $0<\rho_0<\rho_1$, there exists $C>0$ such that the following holds. 
For any $ 0< M_0\leq M$, $0<\kappa\leq 1$, and $\mu>0$ such that \[\mu \leq \kappa/(C M_0^2 ),\] for any $(\bar h, \bar {\bm{u}}) \in W^{k+p,\infty}((\rho_0,\rho_1))^2 $ satisfying \[ \norm{\bar h}_{W^{k+p,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k+p-1,\infty}_\varrho }\leq \bar M \,;\] for any initial data $(h_0, {\bm{u}}_0,w_0)\in H^{s+p,k+p}(\Omega)^{2+d}$ satisfying the boundary condition $w_0|_{\varrho=\rho_1}=0$ and the incompressibility condition \[-(\bar h+h_0)\nabla_{\bm{x}}\cdot {\bm{u}}_0-(\nabla_{\bm{x}} \cH_0)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}_0)+\partial_\varrho w_0=0,\] (denoting $\cH_0(\cdot,\varrho)=\int_{\varrho}^{\rho_1} h_0(\cdot,\varrho')\dd\varrho'$), the inequality \[ \Norm{\cH_0}_{H^{s+p,k+p}}+\Norm{{\bm{u}}_0}_{H^{s+p,k+p}}+\norm{\cH_0\big\vert_{\varrho=\rho_0}}_{H^{s+p}_{\bm{x}}}+\kappa^{1/2}\Norm{h_0}_{H^{s+p,k+p}} =M_0\leq M \] and the stable stratification assumption \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star \leq \bar h(\varrho)+h_0({\bm{x}},\varrho) \leq h^\star , \] the following holds. Denoting $(h^{\rm h},{\bm{u}}^{\rm h})\in \mathcal{C}^0([0,T^{\rm h}];H^{s+p,k+p}(\Omega)^{1+d})$ the solution to the hydrostatic equations~\eqref{eq:hydro-iso-intro} with initial data $(h^{\rm h},{\bm{u}}^{\rm h})\big\vert_{t=0}=(h_0,{\bm{u}}_0)$ provided by Theorem~\ref{thm-well-posedness}, there exists a unique strong solution $(h^{\rm nh},{\bm{u}}^{\rm nh},w^{\rm nh})\in \mathcal{C}([0,T^{\rm h}];H^{s,k}(\Omega)^{2+d})$ to the non-hydrostatic equations~\eqref{eq:nonhydro-iso-intro} with initial data $(h^{\rm nh},{\bm{u}}^{\rm nh}, w^{\rm nh})\big\vert_{t=0}=(h_0,{\bm{u}}_0, w_0)$. 
Moreover, one has \[ \Norm{h^{\rm nh}-h^{\rm h}}_{L^\infty(0,T^{\rm h};H^{s,k})}+\Norm{{\bm{u}}^{\rm nh}-{\bm{u}}^{\rm h}}_{L^\infty(0,T^{\rm h};H^{s,k})}\leq C\,\mu.\] \end{theorem} \subsection*{Strategy of the proofs} The proofs of our results rely mainly on the energy method, exhibiting the structure of the systems of equations through well-chosen energy functionals and making use of product, commutator and composition estimates in the $L^2$-based Sobolev spaces $H^{s,k}(\Omega)$ (that are summarized in the Appendix). The natural energy functional associated with the hydrostatic equations~\eqref{eq:hydro-iso-intro} involves $\cH$, ${\bm{u}}$ as well as $\cH\vert_{\varrho=\rho_0}$ (which represents the free surface), and their derivatives. A key point is that we do {\em not} control $h=-{\partial}_\varrho \cH$ (see~\eqref{def:eta}) in the same regularity class as $\cH$, unless it is multiplied by the prefactor $\kappa^{1/2}$. We crucially use the diffusivity-induced regularization in order to control terms stemming from the commutator between advection and density integration in the equation of mass conservation, {\em i.e.} the first equation in~\eqref{eq:hydro-iso-intro}. This explains why the time of existence of our solution in~\eqref{def:time} vanishes as $\kappa\searrow 0$, yet with a prefactor involving the shear velocity, $\bar{\bm{u}}'(\varrho)$ (since advection with a $\varrho$-independent velocity commutes with density integration). It is interesting to notice that the index of regularity with respect to the space variable, $s$, and the one with respect to the density variable, $k$, are decoupled (yet only in the hydrostatic framework). This is due to the fact that the isopycnal change of coordinates is semi-Lagrangian: the advection in isopycnal coordinates occurs only in the horizontal space directions. 
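The observation that advection by a $\varrho$-independent velocity commutes with density integration can be illustrated numerically. The following pure-Python sketch, with the hypothetical datum $h(x,\varrho)=\sin(x)(1+\varrho)$ in one horizontal dimension, compares $\int_\varrho^{\rho_1}\bar u(\varrho')\,\partial_x h\,d\varrho'$ with $\bar u(\varrho)\,\partial_x\int_\varrho^{\rho_1} h\,d\varrho'$ for a constant and for a sheared velocity; all data and quadrature choices are illustrative assumptions.

```python
import math

rho0, rho1 = 1.0, 2.0

def dx_h(x, r):
    # x-derivative of the hypothetical datum h(x, rho) = sin(x) * (1 + rho)
    return math.cos(x) * (1.0 + r)

def quad(f, a, b, n=2000):
    # composite midpoint rule on (a, b)
    dr = (b - a) / n
    return sum(f(a + (i + 0.5) * dr) for i in range(n)) * dr

def commutator(x, varrho, ubar):
    # [ density integration , advection ] applied to h, evaluated at (x, varrho)
    advect_then_integrate = quad(lambda r: ubar(r) * dx_h(x, r), varrho, rho1)
    integrate_then_advect = ubar(varrho) * quad(lambda r: dx_h(x, r), varrho, rho1)
    return advect_then_integrate - integrate_then_advect

c_const = commutator(0.3, 1.2, ubar=lambda r: 0.7)      # rho-independent velocity
c_shear = commutator(0.3, 1.2, ubar=lambda r: 0.7 * r)  # sheared velocity
```

The commutator vanishes (up to rounding) for the constant velocity and is of order one for the sheared one, consistently with the role played by $\bar{\bm{u}}'$ in the existence time~\eqref{def:time}.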
It would be of utmost interest (but outside the scope of the present work) to decrease the regularity assumption with respect to the density variable, so as to admit discontinuities, representing density interfaces. Concerning the non-hydrostatic system,~\eqref{eq:nonhydro-iso-intro}, the natural energy space involves additionally $\sqrt\mu w$ and its derivatives (hence the control vanishes as $\mu\searrow 0$). In order to obtain suitable energy estimates, we decompose the pressure as the sum of the hydrostatic contribution and the non-hydrostatic contribution, the latter being of lower order in terms of regularity and/or smallness with respect to $\mu\ll 1$. Then we use the structure of the hydrostatic equations, which we complement with an additional symmetric structure for the non-hydrostatic contributions. There, the difficulty consists in providing controls of the energy norms that are uniform with respect to the vanishing parameter $\mu\ll1$. Our estimates concerning the non-hydrostatic contribution of the pressure stem from elliptic estimates on a boundary-value problem. This strategy is heavily inspired by the work of Desjardins, Lannes and Saut, and it is interesting to compare our results with the analogous ``large-time'' well-posedness result in~\cite{Desjardins-Lannes-Saut}*{Theorem~2}. Firstly, due to the choice of isopycnal coordinates, our boundary-value problem is no longer an anisotropic Poisson equation but involves a fully nonlinear elliptic operator. Since this operator involves $h$, which is not controlled in the energy space, we again use the diffusivity-induced regularization at this stage. On the plus side, using isopycnal coordinates rather than Eulerian coordinates allows us to consider the free-surface framework (since isopycnal coordinates readily set the domain as a flat strip, thanks to our assumption that the density is constant at the surface and at the bottom) rather than the rigid-lid setting. 
We believe that our study can be extended to the rigid-lid framework with small adjustments. Incidentally, we do not employ the often-used Boussinesq approximation, since it is not useful in our context. Additionally, we do not rely on the use of strong boundary conditions on the initial density and velocities and their derivatives at the surface and the bottom, which instead are assumed in~\cite{Desjardins-Lannes-Saut} (and in most of the other works, often set in a periodic framework), and rather use only the natural no-slip boundary condition at the bottom; the former allow one to cancel the trace contributions resulting from vertical integration by parts. We also consider the general situation where the velocity field is a perturbation of a non-zero background current, $\bar{\bm{u}}$. In turn, the price to pay to handle this general framework manifests in some restrictions on the time of existence of our solutions, which is inversely proportional to the size of the fluctuations in~\cite{Desjardins-Lannes-Saut}*{Theorem~2}. We now describe our strategy to prove the convergence of the solutions to the non-hydrostatic system towards the hydrostatic one. First, we point out that a direct use of energy estimates as previously allows us to obtain the existence of solutions of the non-hydrostatic equations on a timescale uniform with respect to $\mu$, but not necessarily the same as the existence time of the corresponding solution to the hydrostatic equations, and (for technical reasons) restricted to sufficiently small data. 
To overcome these issues, we look at the hydrostatic solution as an approximate solution to the non-hydrostatic system (in the sense of consistency, as it approximately solves the non-hydrostatic equations), and deduce, using the aforementioned structure of the non-hydrostatic equations, an energy inequality that controls the difference between the solution to the non-hydrostatic system and the corresponding solution to the hydrostatic one. This strategy allows us to bootstrap the control of the difference of the two solutions (and hence the control of the solution to the non-hydrostatic equations) within the timescale of existence (in a higher-regularity space) of the hydrostatic solution, provided that the parameter $\mu$ is sufficiently small. However, the rate of convergence obtained by this method is not optimal. Therefore, in a second step we implement another strategy to obtain the expected (optimal) convergence rate. It simply consists in taking the opposite viewpoint: to look at the solution to the non-hydrostatic equations as an approximate solution to the hydrostatic equations (again in the sense of consistency) and use the structure of the hydrostatic equations to infer the $\mathcal{O}(\mu)$ convergence rate. Both steps involve loss of derivatives, described by the parameter $p$ in Theorem~\ref{thm-convergence}. \subsection*{Plan of the paper} Section~\ref{S.Hydro} is dedicated to the proof of Theorem~\ref{thm-well-posedness} concerning the initial-value problem for the hydrostatic equations,~\eqref{eq:hydro-iso-intro}. In Section~\ref{S.NONHydro}, we analyze the non-hydrostatic equations,~\eqref{eq:nonhydro-iso-intro}. 
We first provide elliptic estimates for the boundary-value problem of the pressure reconstruction (Lemma~\ref{L.Poisson} and Corollary~\ref{C.Poisson}), and use them to infer two partial results concerning the initial-value problem: Proposition~\ref{P.NONHydro-small-time} (restricted to small time) and Proposition~\ref{P.NONHydro-large-time} (restricted to small data). In Section~\ref{S.Convergence}, we show the convergence of solutions to the non-hydrostatic equations towards corresponding solutions to the hydrostatic equations as $\mu \searrow 0$, concluding the proof of Theorem~\ref{thm-convergence}. Finally, in Appendix~\ref{S.Appendix} we provide product, commutator and composition estimates in the Sobolev spaces $H^{s,k}(\Omega)$. \subsection*{Notation and conventions} We highlight the following conventions used throughout the paper. \begin{itemize} \item $\rho_0$ and $\rho_1$ are fixed constants such that $0<\rho_0<\rho_1$, and the dependency with respect to these constants is never explicitly displayed. \item For $k,s \in \mathbb{N}$ and $k \leq s$, and $\Omega=\mathbb{R}^d\times(\rho_0,\rho_1)$, we define the functional space \begin{align}\label{def:space} H^{s,k}(\Omega)=\left\{ f \ : \ \forall ({\bm{\alpha}},j)\in \mathbb{N}^{d+1}, \ |{\bm{\alpha}}|+ j\leq s,\ j\leq k,\ \partial_{\bm{x}}^{\bm{\alpha}}\partial_\varrho^j f \in L^2(\Omega) \right\}, \end{align} endowed with the topology of the norm \begin{equation}\label{def:Hsk} \Norm{f}_{H^{s,k}}^2:=\sum_{j=0}^k \sum_{|{\bm{\alpha}}|=0}^{s-j} \Norm{\partial_{\bm{x}}^{\bm{\alpha}}\partial_\varrho^j f}_{L^2(\Omega)}^2 . \end{equation} When $s'\in\mathbb{R}$ (and $k\in\mathbb{N}$) we define $H^{s',k}(\Omega)=\left\{ f \ : \ \forall j\in \mathbb{N},\ j\leq k,\ \Lambda^{s'-j}\partial_\varrho^j f \in L^2(\Omega) \right\}$ and \[ \Norm{f}_{H^{s',k}}^2:=\sum_{j=0}^k \Norm{\Lambda^{s'-j}\partial_\varrho^j f}_{L^2(\Omega)}^2, \] where $\Lambda=(\Id-\Delta_{\bm{x}})^{1/2}$. 
Of course the two notations are consistent when $s'=s \in \mathbb{N}$, up to harmless factors in the definition of the norm. \item We use both the equivalent notations $H^s(\mathbb{R}^d)=H^s_{\bm{x}}$ (the usual $L^2$-based Sobolev space on $\mathbb{R}^d$) and $W^{k,\infty}(\mathbb{R}^d)=W^{k,\infty}_{\bm{x}}$ (the $L^\infty$-based Sobolev space on $\mathbb{R}^d$), and similarly $L^2((\rho_0,\rho_1))=L^2_\varrho$ and $W^{k,\infty}((\rho_0,\rho_1))=W^{k,\infty}_\varrho$. For functions with variables in $\Omega$ we denote for instance \[L^2_\varrho L^\infty_{\bm{x}}=L^2(\rho_0,\rho_1;L^\infty(\mathbb{R}^d))=\{f \ : \ \esssup_{{\bm{x}}\in\mathbb{R}^d} |f(\cdot,{\bm{x}})|\in L^2((\rho_0,\rho_1))\}.\] Notice $L^2_\varrho L^2_{\bm{x}}=L^2_{\bm{x}} L^2_\varrho=L^2(\Omega)$ and $L^\infty_\varrho L^\infty_{\bm{x}}=L^\infty_{\bm{x}} L^\infty_\varrho=L^\infty(\Omega)$. We use similar notations for functions also depending on time. For instance, for $k\in\mathbb{N}$, and $X$ a Banach space as above, $\mathcal{C}^k([0,T];X)$ is the space of functions with values in $X$ which are continuously differentiable up to order $k$, and $L^p(0,T;X)$ the $p$-integrable $X$-valued functions. All these spaces are endowed with their natural norms. \item For any operator $A: f \rightarrow Af$, we denote by $[A,f]g=A(fg)-f(Ag)$ the usual commutator, while $[A;f,g]=A(fg)-f(Ag)-g(Af)$ is the symmetric commutator. \item $C(\lambda_1,\lambda_2,\dots)$ denotes a constant which depends continuously on its parameters. \item For any $a, b \in \mathbb{R}$, we use the notation $a \lesssim b$ (resp. $ a \gtrsim b$) if there exists $C>0$, independent of relevant parameters, such that $a \le C b$ (resp. $a \ge C b$). We write $a\approx b$ if $a\lesssim b$ and $a\gtrsim b$. \item We put $a\vee b:=\max(a,b)$. 
Finally, \[ \big\langle B_a\big\rangle_{a> b} =\begin{cases} 0 &\text{ if } a\leq b\,,\\ B_a& \text{otherwise,} \end{cases} \quad \text{ and } \quad \big\langle B_a\big\rangle_{a= b} =\begin{cases} 0 &\text{ if } a\neq b\,,\\ B_a& \text{otherwise.} \end{cases} \] \end{itemize} \section{The hydrostatic system}\label{S.Hydro} In this section we study the hydrostatic system in isopycnal coordinates. Specifically, we provide a well-posedness result on the initial-value problem, namely Theorem~\ref{thm-well-posedness}. The result follows from careful {\em a priori} energy estimates, and the standard method of parabolic regularization. Therefore we will first study the system \begin{equation}\label{eq:hydro-iso-nu} \begin{aligned} \partial_t h+\nabla_{\bm{x}}\cdot((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}}))&=\kappa\Delta_{\bm{x}} h,\\[1ex] \partial_t{\bm{u}}+\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}} \big){\bm{u}}+ \tfrac{1}{\varrho}\nabla_{\bm{x}} \psi&=\nu \Delta_{\bm{x}} {\bm{u}}, \end{aligned} \end{equation} with \[ \nabla_{\bm{x}}\psi(t,{\bm{x}},\varrho):= \rho_0\int_{\rho_0}^{\rho_1} \nabla_{\bm{x}} h(t,{\bm{x}},\varrho')\dd\varrho' +\int_{\rho_0}^\varrho \int_{\varrho'}^{\rho_1} \nabla_{\bm{x}} h(t,{\bm{x}},\varrho'')\dd\varrho''\dd\varrho', \] and $\nu>0$, and will rigorously establish the limit $\nu \rightarrow 0$. \subsection{Well-posedness of the regularized hydrostatic system} We start by proving the well-posedness of the initial-value problem. \begin{proposition}\label{P.WP-nu} Let $s>\frac32 + \frac d 2$, $k\in\mathbb{N}$ with $1\leq k\leq s$, and $\bar M$, $M_0 $, $h_\star$, $\nu$, $\kappa>0$ and $C_0>1$. 
Then there exists $T=T(s,k, \bar M,M_0, h_\star, \nu, \kappa,C_0)$ such that for any $(\bar h,\bar{\bm{u}}) =(\bar h(\varrho),\bar{\bm{u}}(\varrho)) \in W^{k,\infty}((\rho_0, \rho_1))$ and for any $(h_0, {\bm{u}}_0)=(h_0({\bm{x}}, \varrho), {\bm{u}}_0({\bm{x}}, \varrho)) \in H^{s,k}(\Omega)$ such that \[ \inf_{({\bm{x}},\varrho)\in\Omega}(\bar h(\varrho) + h_0({\bm{x}},\varrho))\geq h_\star , \quad \norm{(\bar h,\bar{\bm{u}})}_{W^{k,\infty}_\varrho}\leq \bar M, \quad \Norm{(h_0,{\bm{u}}_0)}_{H^{s,k}} \leq M_0,\] there exists a unique solution $(h, {\bm{u}})\in \mathcal{C}^0([0,T];H^{s,k}(\Omega))$ to system~\eqref{eq:hydro-iso-nu} with $(h, {\bm{u}})\big\vert_{t=0}=(h_0,{\bm{u}}_0)$. Moreover, $(h, {\bm{u}})\in L^2(0,T;H^{s+1,k}(\Omega))$ and, for a universal constant $c_0>0$, the following estimates hold \begin{equation}\label{eq:bound-parabolic} \begin{cases} \Norm{h}_{L^\infty(0,T;H^{s,k})}+c_0\kappa^{1/2} \Norm{\nabla_{\bm{x}} h}_{L^2(0,T;H^{s,k})} < C_0 \Norm{h_0}_{H^{s,k}};\\ \Norm{{\bm{u}}}_{L^\infty(0,T;H^{s,k})}+c_0\nu^{1/2} \Norm{\nabla_{\bm{x}} {\bm{u}} }_{L^2(0,T;H^{s,k})} < C_0 \Norm{{\bm{u}}_0}_{H^{s,k}};\\ \inf_{(t,{\bm{x}},\varrho)\in(0,T)\times \Omega}(\bar h(\varrho)+h(t,{\bm{x}},\varrho))> h_\star/C_0. 
\end{cases} \end{equation} \end{proposition} \begin{proof} We will construct the solution as the fixed point of the Duhamel formula \begin{align*} h(t,\cdot) &= e^{\kappa t \Delta_{\bm{x}} } h_0+\int_0^t e^{\kappa(t-\tau)\Delta_{\bm{x}} } f(h(\tau,\cdot),{\bm{u}}(\tau,\cdot)) \dd\tau , \\ {\bm{u}}(t,\cdot) &= e^{\nu t \Delta_{\bm{x}} } {\bm{u}}_0+\int_0^t e^{\nu(t-\tau)\Delta_{\bm{x}} } (\bm{f}_1+\bm{f}_2)(h(\tau,\cdot),{\bm{u}}(\tau,\cdot)) \dd\tau \end{align*} where $e^{\alpha t \Delta_{\bm{x}} } $ with $\alpha >0$ is the heat semigroup defined by $\mathcal F[e^{\alpha t \Delta_{\bm{x}} } f](\bm{\xi})=e^{-\alpha t |\bm{\xi}|^2} \mathcal F[ f](\bm{\xi})$, where $\mathcal F$ is the Fourier transform with respect to the variable ${\bm{x}}$, and \begin{align*} f(h,{\bm{u}})&= -\nabla_{\bm{x}}\cdot((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}})),\\ \bm{f}_1(h,{\bm{u}})&= -\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}}\big) {\bm{u}},\\ \bm{f}_2(h,{\bm{u}})&= - \frac1\varrho\left(\rho_0\int_{\rho_0}^{\rho_1} \nabla_{\bm{x}} h(\cdot,\varrho')\dd\varrho' +\int_{\rho_0}^\varrho \int_{\varrho'}^{\rho_1} \nabla_{\bm{x}} h(\cdot,\varrho'')\dd\varrho''\dd\varrho'\right). \end{align*} Let us first recall the standard regularization properties of the heat flow. For any $\nu>0$, $T>0$ and for any $u_0\in H^{s,k}(\Omega)$ and $ g\in L^1(0,T;H^{s,k}(\Omega))$, there exists a unique $u \in \mathcal{C}^0([0,T];H^{s,k}(\Omega)) \cap L^2(0,T;H^{s+1,k}(\Omega))$ solution to $\partial_t u-\nu\Delta_{\bm{x}} u=g$ with $u(0,\cdot)=u_0$, which reads by definition \[u= e^{\nu t \Delta_{\bm{x}} } u_0+\int_0^t e^{\nu(t-\tau)\Delta_{\bm{x}} } g(\tau,\cdot)\dd\tau,\] and we have \[\Norm{u}_{L^\infty(0,T;H^{s,k})}+c_0\nu^{1/2} \Norm{\nabla_{\bm{x}} u}_{L^2(0,T;H^{s,k})} \leq \Norm{u_0}_{H^{s,k}} + \Norm{g}_{L^1(0,T;H^{s,k})},\] where $c_0>0$ is a universal constant. 
The existence and uniqueness of the solution as well as the above estimate easily follow from solving the equation (for almost every $\varrho\in(\rho_0,\rho_1)$) in Fourier space and using Plancherel's formula, then using that $\partial_\varrho$ commutes with $\partial_t$ and $\Delta_{\bm{x}}$, and invoking Minkowski's integral inequality (resp. Fubini's theorem) to exchange the order of integration in the variables $(t,\varrho)$ (resp. $({\bm{x}},\varrho)$). We also remark that, by the positivity of the heat kernel and the continuous embedding $H^{s-1,1}(\Omega)\subset L^\infty(\Omega)$ for $s>\frac32+\frac{d}{2}$ (see Lemma~\ref{L.embedding}), \[\inf_{\Omega} u \geq \inf_\Omega u_0 - \Norm{g}_{L^1(0,T;H^{s-1,1})}.\] Now we consider $(f,\bm{f}_1+\bm{f}_2)$ as a bounded operator from $H^{s+1,k}(\Omega)^{1+d}$ to $H^{s,k}(\Omega)^{1+d}$. Indeed, there exists $C_{s,k}>0$ such that for any $(h,{\bm{u}})\in H^{s+1,k}(\Omega)^{1+d}$, \begin{align*} \Norm{f(h,{\bm{u}})}_{H^{s,k}} &\leq \Norm{\nabla_{\bm{x}}\cdot(\bar h {\bm{u}} + h{\bar {\bm{u}}}+h {\bm{u}})}_{H^{s,k}} \\ &\leq C_{s,k}\times\left( \norm{\bar h}_{W^{k,\infty}_\varrho} \Norm{{\bm{u}}}_{H^{s+1,k}}+\norm{\bar {\bm{u}}}_{W^{k,\infty}_\varrho} \Norm{h}_{H^{s+1,k}}+ \Norm{ h}_{H^{s,k}} \Norm{ {\bm{u}}}_{H^{s+1,k}}+ \Norm{ h}_{H^{s+1,k}} \Norm{ {\bm{u}}}_{H^{s,k}}\right), \end{align*} where we used straightforward product estimates for the first two terms, and Lemma~\ref{L.product-Hsk} for the last ones. 
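Concerning the lower bound on $\inf_\Omega u$ stated above, the positivity argument can be made explicit: the heat kernel is nonnegative and has unit mass, so that for almost every $\varrho$ and every ${\bm{x}}\in\mathbb{R}^d$,
\[ \big(e^{\nu t \Delta_{\bm{x}} } u_0\big)({\bm{x}})=\int_{\mathbb{R}^d} (4\pi\nu t)^{-\frac d2} e^{-\frac{|{\bm{x}}-{\bm{y}}|^2}{4\nu t}}\, u_0({\bm{y}})\dd{\bm{y}} \geq \inf_{\Omega} u_0, \]
while the contribution of the source term in Duhamel's formula is bounded pointwise by $\Norm{g}_{L^1(0,T;L^\infty(\Omega))}$, which the embedding $H^{s-1,1}(\Omega)\subset L^\infty(\Omega)$ controls by $\Norm{g}_{L^1(0,T;H^{s-1,1})}$ (up to a multiplicative constant).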
Similarly, we have \begin{align*} \Norm{\bm{f}_1(h,{\bm{u}})}_{H^{s,k}} &\leq \Norm{\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}} \big){\bm{u}} }_{H^{s,k}} \\ &\leq C_{s,k}\left( \norm{\bar {\bm{u}}}_{W^{k,\infty}_\varrho} +\Norm{{\bm{u}}}_{H^{s,k}} \right) \Norm{{\bm{u}}}_{H^{s+1,k}}\\ &\quad + \kappa C_{s,k} \big( \norm{\bar h^{-1}}_{W^{k,\infty}_\varrho} + \Norm{\tfrac{h}{\bar h(\bar h+ h)}}_{H^{s,k}} \big) \Norm{(\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}}) {\bm{u}} }_{H^{s,k}}. \end{align*} Using the constraint $\inf_{(\rho_0,\rho_1)} \bar h\geq \inf_{\Omega}(\bar h + h_0)\geq h_\star>0$ and Lemma~\ref{L.composition-Hsk-ex}, we find that for any $h_\star>0$ and $M_0,\bar M\geq 0$ there exists $C_{s,k}(h_\star,\bar M,M_0,C_0)$ such that for any $h\in H^{s,k}(\Omega)$ bounded by $\Norm{h}_{H^{s,k}}\leq C_0M_0$ and satisfying $\inf_{({\bm{x}},\varrho)\in\Omega}(\bar h(\varrho) + h({\bm{x}},\varrho))\geq h_\star/C_0$, one has \[ \norm{\bar h^{-1}}_{W^{k,\infty}_\varrho} + \Norm{\tfrac{h}{\bar h(\bar h+ h)}}_{H^{s,k}} \leq C_{s,k}(h_\star,\bar M,M_0,C_0) .\] Using the last estimates in Lemma~\ref{L.product-Hsk}, since $s>\frac32+\frac{d}{2}$, we have \[\Norm{(\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}}) {\bm{u}} }_{H^{s,k}} \leq \Norm{h}_{H^{s,k}} \Norm{ {\bm{u}} }_{H^{s+1,k}}+\Norm{ h}_{H^{s+1,k}} \Norm{ {\bm{u}} }_{H^{s,k}}.\] Finally, from the continuous embedding $L^\infty((\rho_0,\rho_1))\subset L^2((\rho_0,\rho_1)) \subset L^1((\rho_0,\rho_1))$ we immediately infer \[\Norm{\bm{f}_2(h,{\bm{u}})}_{H^{s,k}} \leq C_{s,k}\Norm{ h}_{H^{s+1,k}} .\] Altogether, we find that for any $h_\star,C_0>0$ and $\bar M,M_0\geq 0$ there exists $C_{s,k}(h_\star,\bar M,M_0,C_0)$ such that for any $(h,{\bm{u}})\in H^{s+1,k}(\Omega)^{1+d}$ satisfying $\Norm{(h,{\bm{u}})}_{H^{s,k}}\leq C_0M_0$ and $\inf_{({\bm{x}},\varrho)\in\Omega}(\bar h(\varrho) + h({\bm{x}},\varrho))\geq h_\star/C_0$, we have \[\Norm{\big( 
f(h,{\bm{u}}),\bm{f}_1(h,{\bm{u}}),\bm{f}_2(h,{\bm{u}})\big)}_{H^{s,k}} \leq C_{s,k}(h_\star,\bar M,M_0,C_0)\, (1+\kappa)\, \Norm{ (h,{\bm{u}})}_{H^{s+1,k}} .\] By similar considerations, we find that for any $h_\star,C_0>0$ and $\bar M,M_0\geq 0$ there exists $C_{s,k}(h_\star,\bar M,M_0,C_0)$ such that for any $(h_1,{\bm{u}}_1,h_2,{\bm{u}}_2)\in H^{s+1,k}(\Omega)^{2(1+d)}$ satisfying the bound $\Norm{(h_i,{\bm{u}}_i)}_{H^{s,k}}\leq C_0M_0$ as well as $\inf_{({\bm{x}},\varrho)\in\Omega}(\bar h(\varrho) + h_i({\bm{x}},\varrho))\geq h_\star/C_0$ (with $i\in\{1,2\}$), one has \begin{multline*}\Norm{ \big( f(h_2,{\bm{u}}_2)-f(h_1,{\bm{u}}_1), \bm f_1(h_2,{\bm{u}}_2)- \bm f_1(h_1,{\bm{u}}_1) , \bm f_2(h_2,{\bm{u}}_2)-\bm f_2(h_1,{\bm{u}}_1) \big)}_{H^{s,k}} \\ \leq C_{s,k}(h_\star,\bar M,M_0,C_0) (1+\kappa)\Big( \Norm{ ( h_2-h_1, {\bm{u}}_2-{\bm{u}}_1)}_{H^{s+1,k}}\\ + \Norm{(h_1,h_2,{\bm{u}}_1,{\bm{u}}_2)}_{H^{s+1,k}} \Norm{ ( h_2-h_1, {\bm{u}}_2-{\bm{u}}_1)}_{H^{s,k}} \Big). \end{multline*} From the above estimates, we easily infer that for $T>0$ sufficiently small (uniquely depending on $s,k,M_0,\bar M,h_\star,\nu,\kappa,C_0$), \[ \mathcal T:\begin{pmatrix}h\\ {\bm{u}} \end{pmatrix}\mapsto \begin{pmatrix} e^{\kappa t \Delta_{\bm{x}} } h_0+\int_0^t e^{\kappa(t-\tau)\Delta_{\bm{x}} } f(h(\tau,\cdot),{\bm{u}}(\tau,\cdot)) \dd\tau \\ e^{\nu t \Delta_{\bm{x}} } {\bm{u}}_0+\int_0^t e^{\nu(t-\tau)\Delta_{\bm{x}} } (\bm{f}_1+\bm{f}_2)(h(\tau,\cdot),{\bm{u}}(\tau,\cdot)) \dd\tau \end{pmatrix} \] is a contraction mapping on \[ X=\left\{ (h,{\bm{u}}) \in \mathcal{C}^0([0,T];H^{s,k}(\Omega)) \cap L^2(0,T;H^{s+1,k}(\Omega)) \ : \ \text{\eqref{eq:bound-parabolic} holds}\right\}.\] The Banach fixed point theorem provides the existence and uniqueness of a fixed point (and hence solution to~\eqref{eq:hydro-iso-nu}) in $X$, and uniqueness in $\mathcal{C}^0([0,T];H^{s,k}(\Omega))$ is easily checked (for instance by the energy method). 
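Let us indicate, schematically, how the smallness condition on $T$ arises. Writing $C=C_{s,k}(h_\star,\bar M,M_0,C_0)$, the Duhamel estimate recalled above combined with the Cauchy-Schwarz inequality in time gives
\[ \Norm{\mathcal T(h_2,{\bm{u}}_2)-\mathcal T(h_1,{\bm{u}}_1)}_{L^\infty(0,T;H^{s,k})} \leq T^{1/2}\, C\, (1+\kappa)\, \Norm{( h_2-h_1,{\bm{u}}_2-{\bm{u}}_1)}_{L^2(0,T;H^{s+1,k})}+\cdots, \]
where the dots stand for the quadratic contributions, and the $L^2(0,T;H^{s+1,k})$ norm of the difference is in turn controlled by $1+\min(\{\kappa,\nu\})^{-1/2}$ times its norm in $X$. The Lipschitz constant of $\mathcal T$ on $X$ is therefore of size $O\big(T^{1/2}(1+\kappa)(1+\min(\{\kappa,\nu\})^{-1/2})\big)$, and may be made smaller than $1$ by choosing $T$ small enough, in agreement with the lower bound on the time of existence given in the remark below.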
\end{proof} \begin{remark} It should be emphasized that the time of existence provided by Proposition~\ref{P.WP-nu} is not uniform with respect to the parameters $\kappa,\nu>0$. More precisely, the proof provides a lower bound as \[T\gtrsim \min(\{1,\kappa,\nu\}) , \qquad \text{\em i.e.} \quad T^{-1}\lesssim 1+\kappa^{-1}+\nu^{-1}.\] \end{remark} \subsection{Quasilinearization}\label{S.Hydro:quasi} In the result below, we apply spatial derivatives to system~\eqref{eq:hydro-iso-nu} and rewrite it in such a way that the linearized equations satisfied by the highest-order terms exhibit a skew-symmetric structure, which will allow us to obtain improved energy estimates in the subsequent section. \begin{lemma}\label{lem:quasilinearization} Let $s, k \in \mathbb{N}$ such that $s>2+\frac d 2$ and $2\leq k\leq s$, and $\bar M,M,h_\star>0$. Then there exists $C=C(s,k,\bar M,M,h_\star)>0$ such that for any $\kappa\in[0,1]$, $\nu\geq0$, for any $(\bar h, \bar {\bm{u}}) \in W^{k,\infty}((\rho_0,\rho_1))$ such that \[ \norm{\bar h}_{W^{k,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho }\leq \bar M \,;\] and any $(h, {\bm{u}}) \in L^\infty(0,T;H^{s,k}(\Omega))$ solution to~\eqref{eq:hydro-iso-nu} with some $T>0$ and satisfying for almost every $t\in[0,T]$ \[ \Norm{h(t,\cdot)}_{H^{s-1, k-1}} + \Norm{\cH(t,\cdot)}_{H^{s,k}}+\Norm{{\bm{u}}(t,\cdot)}_{H^{s,k}}+\norm{\cH(t,\cdot)\big\vert_{\varrho=\rho_0}}_{H^s_{\bm{x}}}+\kappa^{1/2}\Norm{h(t,\cdot)}_{H^{s,k}} \le M \] (where $\cH(t,{\bm{x}},\varrho):=\int_{\varrho}^{\rho_1} h(t,{\bm{x}},\varrho')\dd\varrho'$) and \[ \inf_{({\bm{x}},\varrho)\in \Omega } \bar h(\varrho)+h(t, {\bm{x}},\varrho) \geq h_\star,\] the following holds. Denote, for any multi-index ${\bm{\alpha}} \in \mathbb{N}^d$, $ \cH^{({\bm{\alpha}})}={\partial}_{\bm{x}}^{\bm{\alpha}} \cH, \, {\bm{u}}^{({\bm{\alpha}})}={\partial}_{\bm{x}}^{\bm{\alpha}} {\bm{u}}$. 
\begin{itemize} \item For any ${\bm{\alpha}} \in \mathbb{N}^d$ with $0 \le |{\bm{\alpha}}| \le s$, we have that \begin{subequations} \begin{equation}\label{eq.quasilin} \begin{aligned} \partial_t \cH^{({\bm{\alpha}})}+ (\bar {\bm{u}} + {\bm{u}}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}})} + \int_\varrho^{\rho_1} (\bar {\bm{u}}' + {\partial}_{\varrho}{\bm{u}}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}})} \, d{\varrho'} &\\ +\int_{\varrho}^{\rho_1} (\bar h+h)\nabla_{\bm{x}} \cdot{\bm{u}}^{({\bm{\alpha}})} \dd\varrho'&=\kappa\Delta_{\bm{x}} \cH^{({\bm{\alpha}})}+R_{{\bm{\alpha}},0},\\ \partial_t{\bm{u}}^{({\bm{\alpha}})}+\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}} \big){\bm{u}}^{({\bm{\alpha}})} \hspace{3.5cm} &\\ + \frac{\rho_0}{\varrho}\nabla_{\bm{x}} \cH^{({\bm{\alpha}})}\big\vert_{\varrho=\rho_0} +\frac1\varrho\int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH^{({\bm{\alpha}})} \dd\varrho'&=\nu\Delta_{\bm{x}} {\bm{u}}^{({\bm{\alpha}})}+{\bm{R}}_{{\bm{\alpha}},0} , \end{aligned} \end{equation} where for almost every $t\in[0,T]$, $(R_{{\bm{\alpha}},0}(t,\cdot),{\bm{R}}_{{\bm{\alpha}},0}(t,\cdot))\in \mathcal{C}^0([\rho_0,\rho_1];L^2(\mathbb{R}^d))\times L^2(\Omega)^d$ and \begin{align} \label{eq.est-quasilin} \Norm{R_{{\bm{\alpha}},0}}_{L^2(\Omega) }+\Norm{ {\bm{R}}_{{\bm{\alpha}},0}}_{L^2(\Omega)} +\norm{ R_{{\bm{\alpha}},0}\big\vert_{\varrho=\rho_0}}_{L^2_{\bm{x}}} &\leq C\, M \, \big(1 +\kappa\Norm{ \nabla_{\bm{x}} h}_{H^{s,k}} \big). 
\end{align} \end{subequations} \item For any $j\in\mathbb{N}$, $1\leq j\leq k$ and any ${\bm{\alpha}} \in \mathbb{N}^d$, $0 \le |{\bm{\alpha}}| \le s-j$, it holds \begin{subequations} \begin{equation}\label{eq.quasilin-j} \begin{aligned} \partial_t \partial_\varrho^{j} \cH^{({\bm{\alpha}})}+(\bar{\bm{u}}+{\bm{u}})\cdot \nabla_{\bm{x}} \partial_\varrho^{j} \cH^{({\bm{\alpha}})} &=\kappa\Delta_{\bm{x}} \partial_\varrho^{j} \cH^{({\bm{\alpha}})}+R_{{\bm{\alpha}},j},\\ \partial_t\partial_\varrho^j {\bm{u}}^{({\bm{\alpha}})}+\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}} \big)\partial_\varrho^j {\bm{u}}^{({\bm{\alpha}})} &=\nu \Delta_{\bm{x}} \partial_\varrho^j {\bm{u}}^{({\bm{\alpha}})}+{\bm{R}}_{{\bm{\alpha}},j}, \end{aligned} \end{equation} where for almost every $t\in[0,T]$, $(R_{{\bm{\alpha}},j}(t,\cdot),{\bm{R}}_{{\bm{\alpha}},j}(t,\cdot))\in L^2(\Omega)\times L^2(\Omega)^{d}$ and \begin{equation}\label{eq.est-quasilin-j} \Norm{R_{{\bm{\alpha}},j}}_{L^2(\Omega) }+\Norm{ {\bm{R}}_{{\bm{\alpha}},j}}_{L^2(\Omega)} \leq C\, M \, \big(1 +\kappa\Norm{ \nabla_{\bm{x}} h}_{H^{s,k}} \big). \end{equation} \end{subequations} \item For any $j\in\mathbb{N}$, $0\leq j\leq k$ and any multi-index ${\bm{\alpha}} \in \mathbb{N}^d$, $0 \le |{\bm{\alpha}}| \le s-j$, it holds \begin{subequations} \begin{equation}\label{eq.quasilin-j-h} \begin{aligned} \partial_t \partial_\varrho^{j} h^{({\bm{\alpha}})}+(\bar{\bm{u}}+{\bm{u}})\cdot \nabla_{\bm{x}} \partial_\varrho^{j} h^{({\bm{\alpha}})} &=\kappa\Delta_{\bm{x}} \partial_\varrho^{j} h^{({\bm{\alpha}})}+r_{{\bm{\alpha}},j}+\nabla_{{\bm{x}}} \cdot {\bm{r}}_{{\bm{\alpha}},j}, \end{aligned} \end{equation} where for almost every $t\in[0,T]$, $(r_{{\bm{\alpha}},j}(t,\cdot),{\bm{r}}_{{\bm{\alpha}},j}(t,\cdot))\in L^2(\Omega)^{1+d}$ and \begin{equation}\label{eq.est-quasilin-j-h} \kappa^{1/2} \Norm{r_{{\bm{\alpha}},j}}_{L^2(\Omega) } + \Norm{{\bm{r}}_{{\bm{\alpha}},j}}_{L^2(\Omega) } \leq C\,M. 
\end{equation} \end{subequations} \end{itemize} \end{lemma} \begin{proof} In this proof, we denote $s_0=s-2>\frac d 2$. \noindent {\em Estimate of $R_{{\bm{\alpha}},0}$.} First we notice the identity, obtained by integration by parts in $\varrho$, \[ (\bar {\bm{u}} + {\bm{u}}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}})} + \int_\varrho^{\rho_1} (\bar {\bm{u}}' + {\partial}_{\varrho}{\bm{u}}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}})} \dd\varrho'=\int_{\varrho}^{\rho_1}(\bar{\bm{u}}+{\bm{u}})\cdot \nabla_{\bm{x}} h^{({\bm{\alpha}})}\dd\varrho'.\] Hence, recalling the notation $[P;u,v]=P(uv)-u(Pv)-v(Pu)$ and integrating by parts in $\varrho$, we get \begin{align*} R_{{\bm{\alpha}},0}&:=-\int_\varrho^{\rho_1} [{\partial}_{\bm{x}}^{\bm{\alpha}}, {\bm{u}} ]\cdot \nabla_{\bm{x}} h + [{\partial}_{\bm{x}}^{\bm{\alpha}}, h] \nabla_{\bm{x}} \cdot {\bm{u}} \, \dd\varrho' \\ &=-\int_\varrho^{\rho_1} [{\partial}_{\bm{x}}^{\bm{\alpha}}; {\bm{u}} , \nabla_{\bm{x}} h ]+ {\bm{u}}^{({\bm{\alpha}})} \cdot \nabla_{\bm{x}} h + [{\partial}_{\bm{x}}^{\bm{\alpha}}; h,\nabla_{\bm{x}} \cdot {\bm{u}}]+ h^{({\bm{\alpha}})} (\nabla_{\bm{x}} \cdot {\bm{u}}) \, \dd\varrho'\\ &=-[{\partial}_{\bm{x}}^{\bm{\alpha}}; {\bm{u}} , \nabla_{\bm{x}} \cH] - \cH^{({\bm{\alpha}})} \nabla_{\bm{x}} \cdot {\bm{u}} \\ &\qquad-\int_\varrho^{\rho_1} [{\partial}_{\bm{x}}^{\bm{\alpha}}; {\partial}_\varrho{\bm{u}} , \nabla_{\bm{x}} \cH ] + {\bm{u}}^{({\bm{\alpha}})} \cdot \nabla_{\bm{x}} h + [{\partial}_{\bm{x}}^{\bm{\alpha}}; h,\nabla_{\bm{x}} \cdot {\bm{u}}]+ \cH^{({\bm{\alpha}})} \nabla_{\bm{x}} \cdot {\partial}_\varrho {\bm{u}} \, \dd\varrho'.
\end{align*} By the standard Sobolev embedding $H^{s_0}(\mathbb{R}^d)\subset L^\infty(\mathbb{R}^d)$ and Lemma~\ref{L.embedding}, one gets \begin{align*} \Norm{ \cH^{({\bm{\alpha}})} \nabla_{\bm{x}} \cdot {\bm{u}} }_{L^2(\Omega)} &\leq \Norm{\cH^{({\bm{\alpha}})} }_{ L^2(\Omega)} \Norm{ \nabla_{\bm{x}} \cdot {\bm{u}} }_{ L^\infty(\Omega)} \lesssim \Norm{\cH}_{ H^{s,0}}\Norm{ {\bm{u}} }_{ H^{s_0+\frac 3 2,1}} \end{align*} and \begin{align*} \norm{ \big(\cH^{({\bm{\alpha}})} \nabla_{\bm{x}} \cdot {\bm{u}} \big)\big\vert_{\varrho=\rho_0} }_{L^2_{\bm{x}}} &\lesssim \norm{\cH\big\vert_{\varrho=\rho_0} }_{ H^s_{\bm{x}} } \Norm{ {\bm{u}} }_{ H^{s_0+\frac32,1}}. \end{align*} By Lemma~\ref{L.commutator-Hs}(\ref{L.commutator-Hs-3}) and Lemma~\ref{L.embedding}, we have \begin{align*} \Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}; {\bm{u}},\nabla_{\bm{x}} \cH] }_{L^2(\Omega)} &\lesssim \Norm{{\bm{u}} }_{ L^\infty_\varrho H^{s-1}_{\bm{x}}} \Norm{\nabla_{\bm{x}} \cH}_{ L^2_\varrho H^{s_0+1}_{\bm{x}}} + \Norm{{\bm{u}} }_{L^\infty_\varrho H^{s_0+1}_{\bm{x}} } \Norm{\nabla_{\bm{x}} \cH}_{ L^2_\varrho H^{s-1}_{\bm{x}}}\\ &\lesssim \Norm{{\bm{u}} }_{ H^{s-\frac 12,1}} \Norm{\cH}_{ H^{s_0+2,0} } + \Norm{{\bm{u}} }_{H^{s_0+\frac 3 2,1} } \Norm{\cH}_{H^{s,0}}, \end{align*} \begin{align*} \norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}; {\bm{u}}\big\vert_{\varrho=\rho_0},\nabla_{\bm{x}} \cH\big\vert_{\varrho=\rho_0} ] }_{ L^2_{\bm{x}}} &\lesssim \norm{{\bm{u}}\big\vert_{\varrho=\rho_0} }_{ H^{s-1}_{\bm{x}}} \norm{\nabla_{\bm{x}} \cH\big\vert_{\varrho=\rho_0} }_{ H^{s_0+1}_{\bm{x}}} + \norm{{\bm{u}}\big\vert_{\varrho=\rho_0} }_{ H^{s_0+1}_{\bm{x}} } \norm{\nabla_{\bm{x}} \cH\big\vert_{\varrho=\rho_0} }_{ H^{s-1}_{\bm{x}}}\\ &\lesssim \Norm{{\bm{u}} }_{ H^{s-\frac 12,1}} \norm{ \cH \big\vert_{\varrho=\rho_0} }_{ H^{s_0+2}_{\bm{x}}}+\Norm{{\bm{u}} }_{H^{s_0+\frac 32,1} } \norm{ \cH \big\vert_{\varrho=\rho_0} }_{ H^{s}_{\bm{x}}}, \end{align*} and using additionally the Cauchy-Schwarz inequality,
\begin{align*} \Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}; {\partial}_\varrho {\bm{u}},\nabla_{\bm{x}} \cH ] }_{ L^1_\varrho L^2_{\bm{x}} } &\lesssim \Norm{{\partial}_\varrho {\bm{u}} }_{ L^2_\varrho H^{s-1}_{\bm{x}} } \Norm{ \nabla_{\bm{x}} \cH}_{ L^2_\varrho H^{s_0+1}_{\bm{x}} }+\Norm{{\partial}_\varrho {\bm{u}} }_{ L^2_\varrho H^{s_0+1}_{\bm{x}} } \Norm{ \nabla_{\bm{x}} \cH}_{ L^2_\varrho H^{s-1}_{\bm{x}} } \\ &\lesssim \Norm{ {\bm{u}} }_{ H^{s,1} }\Norm{ \cH}_{ H^{s_0+2 ,0} } + \Norm{ {\bm{u}} }_{ H^{s_0+2 ,1} }\Norm{ \cH}_{ H^{s,0} } , \end{align*} \begin{align*} \Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}; h,\nabla_{\bm{x}} \cdot {\bm{u}} ] }_{ L^1_\varrho L^2_{\bm{x}} } &\lesssim\Norm{h }_{ L^2_\varrho H^{s-1}_{\bm{x}} } \Norm{ \nabla_{\bm{x}} \cdot{\bm{u}}}_{ L^2_\varrho H^{s_0+1}_{\bm{x}} } + \Norm{h }_{ L^2_\varrho H^{s_0+1}_{\bm{x}} } \Norm{ \nabla_{\bm{x}} \cdot{\bm{u}}}_{ L^2_\varrho H^{s-1}_{\bm{x}} } \\ &\lesssim \Norm{ h }_{ H^{s-1,0} } {\Norm{ {\bm{u}} }_{ H^{s_0+2,0} }}+ \Norm{h }_{ H^{s_0+1,0}} \Norm{ {\bm{u}} }_{ H^{s,0} } , \end{align*} and \begin{align*} \Norm{ {\bm{u}}^{({\bm{\alpha}})} \cdot \nabla_{\bm{x}} h }_{L^1_\varrho L^2_{\bm{x}} } &\lesssim \Norm{{\bm{u}}}_{H^{s,0}}\Norm{h}_{ H^{s_0+1,0}},\\ \Norm{ \cH^{({\bm{\alpha}})} (\nabla_{\bm{x}} \cdot {\partial}_\varrho {\bm{u}}) }_{L^1_\varrho L^2_{\bm{x}} } &\lesssim \Norm{\cH}_{H^{s,0}}\Norm{ {\bm{u}}}_{ H^{s_0+2,1} }. \end{align*} Altogether, using the continuous embedding $L^\infty((\rho_0,\rho_1))\subset L^2((\rho_0,\rho_1)) \subset L^1((\rho_0,\rho_1))$, the Minkowski and triangle inequalities and $s\geq s_0+2$, we get \begin{equation}\label{eq:est-R0} \norm{R_{{\bm{\alpha}},0}\big\vert_{\varrho=\rho_0}}_{L^2_{\bm{x}}} +\Norm{R_{{\bm{\alpha}},0}}_{L^2(\Omega)} \lesssim (\Norm{\cH}_{H^{s,0}}+\Norm{h}_{H^{s-1,0}}+\norm{ \cH \big\vert_{\varrho=\rho_0} }_{ H^{s}_{\bm{x}}}) \Norm{{\bm{u}}}_{H^{s, 1}}. 
\end{equation} \noindent {\em Estimate of $R_{{\bm{\alpha}},j}$ for $1\leq j\leq k$.} We have \begin{align*} R_{{\bm{\alpha}},j}&:= - [{\partial}_{\bm{x}}^{{\bm{\alpha}}} {\partial}_\varrho^{j-1} \nabla_{\bm{x}} \cdot, \bar {\bm{u}} + {\bm{u}}] h - {\partial}_{\bm{x}}^{{\bm{\alpha}}} {\partial}_\varrho^{j-1} \nabla_{\bm{x}} \cdot (\bar h (\bar{\bm{u}}+{\bm{u}})) \\ &=-\sum_{i=1}^d[{\partial}_{\bm{x}}^{{\bm{\alpha}}}\partial_{x_i}{\partial}_\varrho^{j-1} , {\bm{u}}_i]h-[{\partial}_\varrho^{j-1}, \bar {\bm{u}}]\cdot {\partial}_{\bm{x}}^{\bm{\alpha}} \nabla_{\bm{x}} h- {\partial}_\varrho^{j-1} (\bar h {\partial}_{\bm{x}}^{{\bm{\alpha}}}\nabla_{\bm{x}} \cdot {\bm{u}}), \end{align*} where ${\bm{u}}_i$ is the $i^{\rm th}$ component of ${\bm{u}}$. By Lemma~\ref{L.commutator-Hsk} and since $(|{\bm{\alpha}}|+1)+(j-1)\leq s$ and $ j-1\leq k-1$, and $s\geq s_0+\frac32$, we find for $2\leq k-1\leq s$ \[ \Norm{[{\partial}_{\bm{x}}^{{\bm{\alpha}}}\partial_{x_i}{\partial}_\varrho^{j-1} ,{\bm{u}}_i ] h}_{ L^2(\Omega) } \lesssim \Norm{h}_{H^{s-1,k-1}}\Norm{{\bm{u}}}_{H^{s,k}}. \] It remains to consider $1\leq j\leq k\leq 2$.
If $j=1$ we have by Lemma~\ref{L.commutator-Hs}(\ref{L.commutator-Hs-2}) and since $|{\bm{\alpha}}|\leq s-1$ and $s\geq s_0+\frac32$ \[\Norm{[{\partial}_{\bm{x}}^{{\bm{\alpha}}}\partial_{x_i} ,{\bm{u}}_i ] h}_{ L^2(\Omega) } \lesssim \Norm{h }_{ L^\infty_\varrho H^{s_0}_{\bm{x}} } \Norm{ {\bm{u}} }_{ L^2_\varrho H^{s}_{\bm{x}} } +\Norm{h }_{ L^2_\varrho H^{s-1}_{\bm{x}} } \Norm{ {\bm{u}} }_{ L^\infty_\varrho H^{s_0+1}_{\bm{x}} } \lesssim \Norm{h}_{H^{s-1,1}}\Norm{{\bm{u}}}_{H^{s,1}}.\] If $j=k=2$, and since $|{\bm{\alpha}}|\leq s-2$ and $s\geq s_0+\frac32$, \begin{align*} \Norm{[{\partial}_{\bm{x}}^{{\bm{\alpha}}}\partial_{x_i}{\partial}_\varrho^{j-1} ,{\bm{u}}_i ] h}_{ L^2(\Omega) } &\leq \Norm{[{\partial}_{\bm{x}}^{{\bm{\alpha}}}\partial_{x_i},{\bm{u}}_i] \partial_\varrho h}_{ L^2(\Omega) } + \Norm{\partial_{\bm{x}}^{{\bm{\alpha}}}\partial_{x_i} ( h\partial_\varrho{\bm{u}}_i)}_{ L^2(\Omega) } \\ &\lesssim \Norm{ \partial_\varrho h }_{ L^2_\varrho H^{s_0}_{\bm{x}} } \Norm{ {\bm{u}} }_{ L^\infty_\varrho H^{s-1}_{\bm{x}} } +\Norm{\partial_\varrho h }_{ L^2_\varrho H^{s-2}_{\bm{x}} } \Norm{ {\bm{u}} }_{ L^\infty_\varrho H^{s_0+1}_{\bm{x}} } \\ &\qquad + \Norm{h }_{ L^\infty_\varrho H^{s_0}_{\bm{x}} } \Norm{\partial_\varrho {\bm{u}} }_{ L^2_\varrho H^{s-1}_{\bm{x}} } +\Norm{h }_{ L^2_\varrho H^{s-1}_{\bm{x}} } \Norm{ \partial_\varrho{\bm{u}} }_{ L^\infty_\varrho H^{s_0}_{\bm{x}} } \\ & \lesssim \Norm{h}_{H^{s-1,1}}\Norm{{\bm{u}}}_{H^{s,2}}. \end{align*} Finally, we have immediately \begin{align*} \Norm{[{\partial}_\varrho^{j-1}, \bar {\bm{u}}]\cdot{\partial}_{\bm{x}}^{\bm{\alpha}} \nabla_{\bm{x}} h }_{ L^2(\Omega) } & \lesssim \norm{\bar {\bm{u}}'}_{W^{j-2,\infty}_\varrho }\Norm{h}_{H^{s-1,j-2}},\\ \Norm{{\partial}_\varrho^{j-1} (\bar h {\partial}_{\bm{x}}^{{\bm{\alpha}}}\nabla_{\bm{x}} \cdot{\bm{u}}) }_{ L^2(\Omega) } & \lesssim \norm{\bar h}_{W^{j-1,\infty}_\varrho }\Norm{{\bm{u}}}_{H^{s,j}}. 
\end{align*} Altogether, we find that for any $1\leq j\leq k$ \begin{equation}\label{eq:est-Rj} \Norm{R_{{\bm{\alpha}},j}}_{L^2(\Omega)} \lesssim ( \norm{\bar h}_{W^{k-1,\infty}_\varrho} + \norm{\bar {\bm{u}}'}_{W^{k-2,\infty}_\varrho} +\Norm{h}_{H^{s-1,k-1}}) \big( \Norm{{\bm{u}}}_{H^{s, k}}+\Norm{h}_{H^{s-1,k-2}}\big). \end{equation} \noindent {\em Estimate of ${\bm{r}}_{{\bm{\alpha}},j}$ and $r_{{\bm{\alpha}},j}$ for $0\leq j\leq k$.} We have~\eqref{eq.quasilin-j-h} with \begin{align*} {\bm{r}}_{{\bm{\alpha}},j}&:= - [{\partial}_\varrho^{j} , \bar {\bm{u}} ] {\partial}_{\bm{x}}^{{\bm{\alpha}}} h - {\partial}_{\bm{x}}^{{\bm{\alpha}}} {\partial}_\varrho^{j} (\bar h {\bm{u}})- (\partial_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^{j}{\bm{u}}) h, \\ r_{{\bm{\alpha}},j}&:=-[\partial^{\bm{\alpha}}{\partial}_\varrho^{j}\nabla_{\bm{x}}\cdot;{\bm{u}},h] +(\partial_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^{j}{\bm{u}})\cdot\nabla_{\bm{x}} h . \end{align*} We have immediately (since $|{\bm{\alpha}}|+j\leq s$, $ j\leq k$, and using Lemma~\ref{L.embedding}) \begin{align*} \Norm{[{\partial}_\varrho^{j}, \bar {\bm{u}}]{\partial}_{\bm{x}}^{\bm{\alpha}} h }_{ L^2(\Omega) } & \lesssim \norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho }\Norm{h}_{H^{s-1,k-1}},\\ \Norm{\partial_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^{j} (\bar h {\bm{u}}) }_{ L^2(\Omega) } & \lesssim \norm{\bar h}_{W^{k,\infty}_\varrho }\Norm{{\bm{u}}}_{H^{s,k}},\\ \Norm{ (\partial_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^{j}{\bm{u}}) h}_{L^2(\Omega)} & \lesssim \Norm{{\bm{u}}}_{H^{s,k}} \Norm{h}_{H^{s_0+\frac12,1}}. 
\end{align*} By Lemma~\ref{L.commutator-Hsk-sym} and since $|{\bm{\alpha}}|+j+1\leq s+1$, $ j\leq k\leq s$, $s+1\geq s_0+\frac52$, we find for $2\leq k\leq s$ \[\Norm{[\partial^{\bm{\alpha}}{\partial}_\varrho^{j}\nabla_{\bm{x}}\cdot;{\bm{u}},h] }_{ L^2(\Omega) } \lesssim \Norm{h}_{H^{s,k}}\Norm{{\bm{u}}}_{H^{s,k}},\] and we have by Lemma~\ref{L.embedding} \[\Norm{ (\partial_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^{j}{\bm{u}})\cdot\nabla_{\bm{x}} h}_{L^2(\Omega)} \lesssim \Norm{{\bm{u}}}_{H^{s,k}} \Norm{h}_{H^{s_0+\frac32,1}}.\] Altogether, we find that for any $0\leq j\leq k$ \begin{align}\label{eq:est-brj} \Norm{{\bm{r}}_{{\bm{\alpha}},j}}_{L^2(\Omega)} &\lesssim ( \norm{\bar h}_{W^{k,\infty}_\varrho} + \norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho} +\Norm{h}_{H^{s-1,k-1}}) \big( \Norm{{\bm{u}}}_{H^{s, k}}+\Norm{h}_{H^{s-1,k-1}}\big),\\ \label{eq:est-rj} \Norm{r_{{\bm{\alpha}},j}}_{L^2(\Omega)} & \lesssim \Norm{{\bm{u}}}_{H^{s, k}}\Norm{h}_{H^{s,k}}. \end{align} \noindent {\em Estimate of ${\bm{R}}_{{\bm{\alpha}},0}$.} The precise expression of the second remainder in~\eqref{eq.quasilin} is the following: \[ {\bm{R}}_{{\bm{\alpha}},0}:=- \big( [{\partial}_{\bm{x}}^{\bm{\alpha}}, {\bm{u}}] \cdot \nabla_{\bm{x}} \big){\bm{u}} + \kappa [{\partial}_{\bm{x}}^{\bm{\alpha}}, \tfrac{1}{\bar h+h}] (\nabla_{\bm{x}} h\cdot \nabla_{\bm{x}}) {\bm{u}} + \tfrac{\kappa }{\bar h+h} \big( [{\partial}_{\bm{x}}^{\bm{\alpha}}, \nabla_{\bm{x}} h] \cdot\nabla_{\bm{x}} \big){\bm{u}}.\] By Lemma~\ref{L.commutator-Hs}(\ref{L.commutator-Hs-2}) and Lemma~\ref{L.embedding} we have \begin{align*} \Norm{\big([{\partial}_{\bm{x}}^{\bm{\alpha}}, {\bm{u}}]\cdot \nabla_{\bm{x}}\big) {\bm{u}}}_{L^2(\Omega)} &\lesssim \Norm{{\bm{u}}}_{L^\infty_\varrho H^{s_0+1}_{\bm{x}}}\Norm{{\bm{u}}}_{L^2_\varrho H^s_{\bm{x}}}\lesssim \Norm{{\bm{u}}}_{H^{s_0+\frac32,1}}\Norm{{\bm{u}}}_{H^{s,0}}. 
\end{align*} Next, appealing again to Lemma~\ref{L.commutator-Hs}(\ref{L.commutator-Hs-2}), we have \begin{align*} \kappa \Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}, \tfrac{1}{\bar h+h}] (\nabla_{\bm{x}} h \cdot \nabla_{\bm{x}}) {\bm{u}} }_{L^2(\Omega)} & \lesssim\kappa \Norm{\nabla_{\bm{x}} \tfrac1{\bar h+h}}_{L^\infty_\varrho H^{s_0}_{\bm{x}}}\Norm{(\nabla_{\bm{x}} h \cdot \nabla_{\bm{x}}) {\bm{u}}}_{L^2_\varrho H^{s-1}_{\bm{x}}} \\ &\quad + \kappa \Norm{\nabla_{\bm{x}} \tfrac1{\bar h+h}}_{L^2_\varrho H^{s-1}_{\bm{x}}}\Norm{(\nabla_{\bm{x}} h \cdot \nabla_{\bm{x}}) {\bm{u}}}_{L^\infty_\varrho H^{s_0}_{\bm{x}}}. \end{align*} Now, by Lemma~\ref{L.product-Hs}(\ref{L.product-Hs-2}) and Lemma~\ref{L.composition-Hs}, one has for any $t\geq 0$ that \begin{align} \norm{\nabla_{\bm{x}} \tfrac1{\bar h+h}}_{ H^{t}_{\bm{x}}} &= \norm{ \tfrac{\nabla_{\bm{x}} h}{(\bar h+h)^2}}_{ H^{t}_{\bm{x}}} \leq \norm{ \tfrac{\nabla_{\bm{x}} h}{\bar h^2}}_{ H^{t}_{\bm{x}}} +\norm{ (\tfrac1{\bar h^2}-\tfrac{1}{(\bar h+h)^2})\nabla_{\bm{x}} h}_{ H^{t}_{\bm{x}}} \notag \\ & \lesssim (h_\star)^{-2} \norm{\nabla_{\bm{x}} h}_{H^{t}_{\bm{x}} }+ \norm{\tfrac1{\bar h^2}-\tfrac{1}{(\bar h+h)^2}}_{H^{s_0}_{\bm{x}}}\norm{\nabla_{\bm{x}} h}_{H^t_{\bm{x}}} +\left\langle \norm{\tfrac1{\bar h^2}-\tfrac{1}{(\bar h+h)^2}}_{H^{t}_{\bm{x}}}\norm{\nabla_{\bm{x}} h}_{H^{s_0}_{\bm{x}}}\right\rangle_{t>s_0} \notag\\ &\leq C(h_\star,\norm{h}_{H^{s_0}_{\bm{x}}}) \norm{\nabla_{\bm{x}} h}_{H^{t}_{\bm{x}} }, \label{eq:last-fraction} \end{align} where in the last step we used that, by Lemma~\ref{L.composition-Hs}, \[ \norm{\tfrac1{\bar h^2}-\tfrac{1}{(\bar h+h)^2}}_{H^{s_0}_{\bm{x}}}\leq C(h_\star,\norm{h}_{H^{s_0}_{\bm{x}}}) \] and, provided that $t>s_0$, \begin{align*} \norm{\tfrac1{\bar h^2}-\tfrac{1}{(\bar h+h)^2}}_{H^{t}_{\bm{x}}}\leq \norm{\tfrac1{\bar h^2}-\tfrac{1}{(\bar h+h)^2}}_{H^{s_0}_{\bm{x}}}+\norm{\nabla_{\bm{x}} \tfrac{1}{(\bar h+h)^2}}_{H^{t-1}_{\bm{x}}}, \end{align*} and a finite induction on $t$, until
$\norm{ \nabla_{\bm{x}} \tfrac{1}{(\bar h+h)^2} }_{L^2_{\bm{x}}} = 2\norm{\tfrac{\nabla_{\bm{x}} h}{(\bar h+h)^3}}_{L^2_{\bm{x}}} \le 2 h_\star^{-3} \norm{\nabla_{\bm{x}} h}_{L^2_{\bm{x}}}$. Then, by Lemma~\ref{L.product-Hs}(\ref{L.product-Hs-2}) and Lemma~\ref{L.embedding}, we have \begin{align*} \Norm{(\nabla_{\bm{x}} h \cdot \nabla_{\bm{x}}) {\bm{u}}}_{L^2_\varrho H^{s-1}_{\bm{x}}} &\lesssim \Norm{\nabla_{\bm{x}} h}_{L^2_\varrho H^{s-1}_{\bm{x}}} \Norm{{\bm{u}}}_{L^\infty_\varrho H^{s_0+1}_{\bm{x}}} +\Norm{\nabla_{\bm{x}} h}_{L^\infty_\varrho H^{s_0}_{\bm{x}}} \Norm{{\bm{u}}}_{L^2_\varrho H^{s}_{\bm{x}}}\\ &\lesssim \Norm{h}_{H^{s,0}} \Norm{{\bm{u}}}_{H^{s_0+\frac32,1}}+ \Norm{h}_{H^{s_0+\frac32,1}} \Norm{{\bm{u}}}_{H^{s,0}} \end{align*} and \[\Norm{(\nabla_{\bm{x}} h \cdot \nabla_{\bm{x}}) {\bm{u}}}_{L^\infty_\varrho H^{s_0}_{\bm{x}}}\lesssim \Norm{\nabla_{\bm{x}} h}_{L^\infty_\varrho H^{s_0}_{\bm{x}}} \Norm{{\bm{u}}}_{L^\infty_\varrho H^{s_0+1}_{\bm{x}}} \lesssim \Norm{h}_{H^{s_0+\frac32,1}} \Norm{{\bm{u}}}_{H^{s_0+\frac32,1}} .\] Finally, we have by Lemma~\ref{L.commutator-Hs}(\ref{L.commutator-Hs-2}) and Lemma~\ref{L.embedding} \begin{align*} \Norm{ \big([{\partial}_{\bm{x}}^{\bm{\alpha}}, \nabla_{\bm{x}} h] \cdot\nabla_{\bm{x}}\big) {\bm{u}}}_{L^2(\Omega)} & \lesssim \Norm{\nabla_{\bm{x}} h}_{L^\infty_\varrho H^{s_0+1}_{\bm{x}}}\Norm{\nabla_{\bm{x}} {\bm{u}}}_{L^2_\varrho H^{s-1}_{\bm{x}}}+ \Norm{\nabla_{\bm{x}} h}_{L^2_\varrho H^{s}_{\bm{x}}}\Norm{\nabla_{\bm{x}} {\bm{u}}}_{L^\infty_\varrho H^{s_0}_{\bm{x}}}\\ & \lesssim \Norm{ \nabla_{\bm{x}} h}_{ H^{s_0+\frac 32, 1}}\Norm{ {\bm{u}}}_{H^{s,0}} + \Norm{ \nabla_{\bm{x}} h}_{ H^{s,0}}\Norm{ {\bm{u}}}_{ H^{s_0+\frac32,1}}.
\end{align*} Collecting the estimates above and using that $s \ge s_0+\frac32$, we obtain \begin{equation}\label{eq:est-bR0} \Norm{{\bm{R}}_{{\bm{\alpha}},0}}_{L^2(\Omega)} \lesssim \Norm{ {\bm{u}}}_{H^{s,0}} \Norm{ {\bm{u}}}_{H^{s,1}} + \kappa C(h_\star,\Norm{h}_{H^{s_0+\frac12,1}}) \big( \Norm{ h}_{ H^{s,1}}^2+\Norm{ \nabla_{\bm{x}} h}_{ H^{s,1}} \big) \Norm{ {\bm{u}}}_{H^{s,1}} . \end{equation} \noindent {\em Estimate of ${\bm{R}}_{{\bm{\alpha}},j}$ for $1\leq j\leq k$.} The explicit expression of the second remainder in~\eqref{eq.quasilin-j} is the following: \begin{multline*} {\bm{R}}_{{\bm{\alpha}},j}:= - \big([{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j, \bar {\bm{u}} + {\bm{u}}] \cdot\nabla_{\bm{x}}\big) {\bm{u}} + \kappa [{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j, \tfrac{1}{\bar h+h}] \big( (\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}}) {\bm{u}} \big) + \tfrac{\kappa }{\bar h+h} \big( [{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j, \nabla_{\bm{x}} h] \cdot\nabla_{\bm{x}}\big) {\bm{u}}\\ + {\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} \left( \frac{\rho_0}{\varrho} \int_{\rho_0}^{\rho_1} \nabla_{\bm{x}} h\, \dd \varrho' +\frac1\varrho\int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH \dd\varrho'\right). \end{multline*} By Lemma~\ref{L.commutator-Hsk} we have for $s\geq s_0+\frac32$ and since $0 \le |{\bm{\alpha}}|\leq s-j$ and $j\leq k$ with $k \ge 2$, that \[ \Norm{ \big( [{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j, {\bm{u}}] \cdot\nabla_{\bm{x}} \big) {\bm{u}}}_{L^2(\Omega)} \lesssim \Norm{{\bm{u}}}_{H^{s,k}} \Norm{\nabla_{\bm{x}} {\bm{u}}}_{H^{s-1,k}}. \] Then, \[ \Norm{ \big([{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j, \bar {\bm{u}}] \cdot\nabla_{\bm{x}}\big) {\bm{u}}}_{L^2(\Omega)} =\Norm{ [{\partial}_\varrho^j, \bar {\bm{u}}] \cdot\nabla_{\bm{x}} {\partial}_{\bm{x}}^{\bm{\alpha}} {\bm{u}}}_{L^2(\Omega)} \lesssim \norm{\bar{\bm{u}}'}_{W^{j-1,\infty}_\varrho}\Norm{{\bm{u}}}_{H^{s, j-1}}.
\] Next, using Lemma~\ref{L.product-Hsk}, \begin{align*} \Norm{[{\partial}_\varrho^j,\tfrac1{\bar h}] {\partial}_{\bm{x}}^{\bm{\alpha}} \big( (\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}}) {\bm{u}} \big)}_{L^2(\Omega)} &\lesssim C(h_\star)\norm{\bar h'}_{W^{j-1,\infty}_\varrho} \Norm{ (\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}}) {\bm{u}}}_{H^{s-1, j-1}}\\ &\lesssim C(h_\star)\norm{\bar h'}_{W^{j-1,\infty}_\varrho} \Norm{ h}_{H^{s,j}} \Norm{ {\bm{u}}}_{H^{s,j}}, \end{align*} and by Lemma~\ref{L.commutator-Hsk}, since $s\geq s_0+\frac 32$ and $2\leq j\leq s$, $|{\bm{\alpha}}|+j\leq s$, Lemma~\ref{L.composition-Hsk-ex} and Lemma~\ref{L.product-Hsk}, \begin{align*} \Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j, \tfrac{1}{\bar h+h} -\tfrac1{\bar h}] \big( (\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}}) {\bm{u}} \big)}_{L^2(\Omega)} &\lesssim \Norm{\tfrac{1}{\bar h+h} -\tfrac1{\bar h} }_{H^{s,k}} \Norm{(\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}}) {\bm{u}} }_{H^{s-1,\min(\{k,s-1\})}}\\ &\lesssim C(h_\star, \norm{\bar h'}_{W^{k-1,\infty}_\varrho},\Norm{h}_{H^{s-1,k-1}}) \Norm{ h}_{H^{s,k}}^2 \Norm{ {\bm{u}}}_{H^{s, k}}. \end{align*} By Lemma~\ref{L.commutator-Hsk} we have for $s\geq s_0+\frac32$ and since $|{\bm{\alpha}}|+j\leq s$ and $2\leq j\leq s$ \[ \Norm{\big([{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j, \nabla_{\bm{x}} h] \cdot\nabla_{\bm{x}}\big) {\bm{u}} }_{L^2(\Omega)} \lesssim \Norm{ \nabla_{\bm{x}} h}_{H^{s,k}}\Norm{{\bm{u}} }_{H^{s,k}}. 
\] We have immediately, since $|{\bm{\alpha}}|\leq s-j\leq s-1$, \[\Norm{ {\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}}\Big( \frac{\rho_0}{\varrho} \int_{\rho_0}^{\rho_1} \nabla_{\bm{x}} h \, \dd \varrho'\Big) }_{L^2(\Omega)} \lesssim \norm{{\partial}_{\bm{x}}^{\bm{\alpha}} \nabla_{\bm{x}} \cH\big\vert_{\varrho=\rho_0} }_{L^2_{\bm{x}}} \lesssim \norm{\cH\big\vert_{\varrho=\rho_0}}_{H^{s}_{\bm{x}}} \] and since $(|{\bm{\alpha}}|+1)+(j-1)\leq s$, \[\Norm{ {\partial}_\varrho^j\Big( \frac1\varrho\int_{\rho_0}^\varrho {\partial}_{\bm{x}}^{\bm{\alpha}} \nabla_{\bm{x}} \cH \dd\varrho'\Big)}_{L^2(\Omega)}\lesssim \sum_{i=0}^{j-1}\Norm{\partial_\varrho^i {\partial}_{\bm{x}}^{\bm{\alpha}} \nabla_{\bm{x}} \cH }_{L^2(\Omega)} \lesssim \Norm{\cH}_{H^{s,j-1}}.\] Collecting the estimates above, we obtain for $1\leq j\leq k$ \begin{multline}\label{eq:est-bRj} \Norm{{\bm{R}}_{{\bm{\alpha}},j}}_{L^2(\Omega)} \lesssim \norm{\cH\big\vert_{\varrho=\rho_0}}_{H^{s}_{\bm{x}}}+ \Norm{\cH}_{H^{s,k-1}} + \big(\norm{\bar{\bm{u}}'}_{W^{k-1,\infty}_\varrho}+\Norm{ {\bm{u}}}_{H^{s,k}} \big)\Norm{ {\bm{u}}}_{H^{s,k}} \\ + \kappa C(h_\star, \norm{\bar h'}_{W^{k-1,\infty}_\varrho},\Norm{h}_{H^{s-1,k-1}}) \big( \Norm{ h}_{ H^{s,k}}^2+\Norm{ \nabla_{\bm{x}} h}_{ H^{s,k}} \big) \Norm{ {\bm{u}}}_{H^{s,k}} . \end{multline} We infer the bound~\eqref{eq.est-quasilin} from~\eqref{eq:est-R0} and~\eqref{eq:est-bR0}, the bound~\eqref{eq.est-quasilin-j} from~\eqref{eq:est-Rj} and~\eqref{eq:est-bRj}, and the bound~\eqref{eq.est-quasilin-j-h} from~\eqref{eq:est-brj} and~\eqref{eq:est-rj}, and the proof is complete. \end{proof} \subsection{A priori energy estimates} In this section we provide {\em a priori} energy estimates associated with the equations featured in Lemma~\ref{lem:quasilinearization}.
We start with the transport-diffusion equations in~\eqref{eq.quasilin-j} and~\eqref{eq.quasilin-j-h}, which we rewrite as \begin{equation}\label{eq:transport-diffusion} \partial_t \dot h+{\bm{u}}\cdot\nabla_{\bm{x}} \dot h=\kappa\Delta_{\bm{x}} \dot h+r+\nabla_{\bm{x}}\cdot{\bm{r}}. \end{equation} \begin{lemma}\label{lem:estimate-transport-diffusion} There exists a universal constant $C_0>0$ such that for any $\kappa>0$ and $T>0$, for any ${\bm{u}}\in L^\infty(0,T;L^\infty (\Omega))$ with $\nabla_{\bm{x}}\cdot {\bm{u}} \in L^1(0,T;L^\infty(\Omega))$, for any $(r,{\bm{r}})\in L^2(0,T;L^2(\Omega))$ and for any $\dot h\in L^\infty(0,T;L^2(\Omega))$ with $\nabla_{\bm{x}} \dot h\in L^2(0,T;L^2(\Omega))$, such that~\eqref{eq:transport-diffusion} holds in $L^2(0,T;H^{1,0}(\Omega)')$, we have \begin{multline} \Norm{\dot h}_{L^\infty(0,T;L^2(\Omega))}+\kappa^{1/2}\Norm{\nabla_{\bm{x}} \dot h}_{L^2(0,T;L^2(\Omega))} \\ \leq C_0\big(\Norm{ \dot h\big|_{t=0}}_{L^2(\Omega)} + \Norm{r}_{L^1(0,T;L^2(\Omega))}+\kappa^{-1/2}\Norm{{\bm{r}}}_{L^2(0,T;L^2(\Omega))}\big)\\ \times\exp\Big(C_0\int_0^T \Norm{\nabla_{\bm{x}}\cdot{\bm{u}}(t,\cdot)}_{L^\infty(\Omega)}\dd t\Big). \end{multline} \end{lemma} \begin{proof} Testing the equation against $\dot h$ and integrating by parts (with respect to the variable ${\bm{x}}$) yields \[ \frac12\frac{\dd}{\dd t}\Norm{\dot h}_{L^2(\Omega)}^2 + \kappa \Norm{\nabla_{\bm{x}} \dot h}_{L^2(\Omega)}^2 =\frac12\iint_\Omega (\nabla_{\bm{x}}\cdot{\bm{u}}) \dot h^2\dd{\bm{x}}\dd\varrho + \iint_\Omega r\dot h\dd{\bm{x}}\dd\varrho- \iint_\Omega {\bm{r}}\cdot\nabla_{\bm{x}}\dot h\dd{\bm{x}}\dd\varrho.\] The estimate then follows from the Cauchy-Schwarz inequality, Young's inequality to absorb the contribution of ${\bm{r}}$, namely \[ \Big|\iint_\Omega {\bm{r}}\cdot\nabla_{\bm{x}}\dot h\dd{\bm{x}}\dd\varrho\Big| \leq \frac\kappa2 \Norm{\nabla_{\bm{x}} \dot h}_{L^2(\Omega)}^2+\frac1{2\kappa}\Norm{{\bm{r}}}_{L^2(\Omega)}^2 \] (whence the factor $\kappa^{-1/2}$ in front of $\Norm{{\bm{r}}}_{L^2(0,T;L^2(\Omega))}$), and Gronwall's lemma.
\end{proof} Next, we consider system~\eqref{eq.quasilin}, which we rewrite as \begin{equation}\label{eq.system} \begin{aligned} \partial_t \dot\cH+(\bar{\bm{u}}+{\bm{u}})\cdot \nabla_{\bm{x}} \dot \cH+\int_{\varrho}^{\rho_1}(\bar{\bm{u}}'+{\partial}_\varrho{\bm{u}})\cdot \nabla_{\bm{x}} \dot \cH \dd\varrho'+\int_{\varrho}^{\rho_1} (\bar h+h)\nabla_{\bm{x}} \cdot\dot{\bm{u}} \dd\varrho'&=\kappa\Delta_{\bm{x}} \dot\cH+R,\\[1ex] \varrho \left(\partial_t\dot{\bm{u}}+\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}} \big)\dot{\bm{u}} \right) + {\rho_0}\nabla_{\bm{x}} \dot\cH\big\vert_{\varrho=\rho_0} + \int_{\rho_0}^\varrho \nabla_{\bm{x}} \dot\cH \dd\varrho'&=\varrho\nu\Delta_{\bm{x}} \dot{\bm{u}}+{\bm{R}} . \end{aligned} \end{equation} For the sake of readability, we introduce the following notation \begin{equation}\label{eq:X01spaces} X^0:= \mathcal{C}^0([\rho_0,\rho_1];L^2(\mathbb{R}^d))\times L^2(\Omega)^d; \quad X^1:= \mathcal{C}^0([\rho_0,\rho_1];H^1(\mathbb{R}^d))\times H^{1,0}(\Omega)^d. \end{equation} \begin{lemma}\label{lem:estimate-system} Let $h_\star,h^\star, M>0$ be fixed. 
There exists $C(h_\star,h^\star,M)>0$ such that for any $\kappa>0$ and $\nu\in [0,1]$, for any $(\bar h,\bar{\bm{u}})\in W^{1,\infty}((\rho_0,\rho_1))$, for any $T>0$ and $(h,{\bm{u}})\in L^\infty(0,T;W^{1,\infty}(\Omega))$ with $\Delta_{\bm{x}} h\in L^1(0,T;L^\infty(\Omega))$ satisfying~\eqref{eq:hydro-iso-nu} and, for almost every $t\in [0,T]$, the upper bound \[ \Norm{ h(t,\cdot)}_{L^\infty(\Omega)} +\Norm{ \nabla_{\bm{x}} h(t,\cdot)}_{L^\infty_{\bm{x}} L^2_\varrho} +\nu^{1/2}\Norm{ \nabla_{\bm{x}} h(t,\cdot) }_{L^\infty(\Omega)} +\Norm{ \nabla_{\bm{x}}\cdot {\bm{u}}(t,\cdot) }_{L^\infty(\Omega)} \le M \] and the lower and upper bounds \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star \leq \bar h(\varrho)+h(t,{\bm{x}},\varrho) \leq h^\star ; \] and for any $(\dot\cH, \dot{\bm{u}}) \in \mathcal{C}^0([0,T];X^0)\cap L^2(0,T;X^1)$, with $X^0, X^1$ in~\eqref{eq:X01spaces}, and $(R,{\bm{R}})\in L^2(0,T;X^0)$ satisfying system~\eqref{eq.system} in $L^2(0,T;X^1)'$, the following estimate holds: \begin{multline*}\mathcal{E}(\dot\cH(t,\cdot),\dot{\bm{u}}(t,\cdot))^{1/2} + \kappa^{1/2} \Norm{\nabla_{\bm{x}} \dot \cH}_{L^2(0,t;L^2(\Omega))} + \kappa^{1/2} \norm{\nabla_{\bm{x}} \dot \cH \big\vert_{\varrho=\rho_0} }_{ L^2(0,t;L^2_{\bm{x}})}+\nu^{1/2}\Norm{\nabla_{\bm{x}} \dot {\bm{u}}}_{L^2(0,t;L^2(\Omega))} \\ \leq \left( \mathcal{E}(\dot\cH(0,\cdot),\dot{\bm{u}}(0,\cdot))^{1/2}+C\int_0^t \mathcal{E}(R(\tau,\cdot),{\bm{R}}(\tau,\cdot))^{1/2}\dd \tau \right)\\ \times \exp\Big( C \int_0^t \big(1+ \kappa^{-1}\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} (\tau,\cdot)}_{L^\infty_{\bm{x}} L^2_\varrho}^2 \big)\dd\tau\Big), \end{multline*} where we denote \[ \mathcal{E}(\dot\cH,\dot{\bm{u}}):= \frac12 \int_{\rho_0}^{\rho_1}\int_{\mathbb{R}^d} \dot\cH^2+ \varrho(\bar h+h)\big|\dot{\bm{u}}\big|^2\dd{\bm{x}}\dd \varrho\ + \ \frac{\rho_0}2\int_{\mathbb{R}^d} \dot\cH^2\big\vert_{\varrho=\rho_0}\dd{\bm{x}}.\] \end{lemma} \begin{proof} We test the first equation against $\dot\cH\in 
L^2(0,T;H^{1,0}(\Omega)) $, its trace on $\{({\bm{x}},\rho_0 ),{\bm{x}}\in\mathbb{R}^d\}$ against $\rho_0\dot\cH\big\vert_{\varrho=\rho_0}\in L^2(0,T;H^1(\mathbb{R}^d))$, and the second equation against $(\bar h+h) \dot{\bm{u}}\in L^2(0,T;H^{1,0}(\Omega)) $. This yields, after integration by parts \begin{align*} & \frac{\dd}{\dd t} \mathcal{E}(\dot\cH, \dot{\bm{u}})+ \kappa \Norm{\nabla_{\bm{x}} \dot\cH}_{L^2(\Omega)}^2 + \rho_0\kappa \norm{\nabla_{\bm{x}} \dot \cH \big\vert_{\varrho=\rho_0} }_{ L^2_{\bm{x}}}^2+\nu\sum_{i=1}^d\int_\Omega \varrho (\bar h+h) |\partial_{x_i}\dot{\bm{u}}|^2\dd{\bm{x}}\dd\varrho \\ & = - \left((\bar {\bm{u}} + {\bm{u}}) \cdot\nabla_{\bm{x}} \dot\cH , \dot\cH\right)_{L^2(\Omega)}-\left(\int_\varrho^{\rho_1} (\bar {\bm{u}}'+{\partial}_\varrho{\bm{u}}) \cdot \nabla_{\bm{x}} \dot \cH \, d\varrho', \dot\cH\right)_{L^2(\Omega)} & {\rm (i)}\\ &\quad - \left(\int_\varrho^{\rho_1} (\bar h+h) \nabla_{\bm{x}} \cdot \dot{\bm{u}}\, d\varrho', \dot\cH\right)_{L^2(\Omega)}+ \big(R, \dot\cH\big)_{L^2(\Omega)} & {\rm (ii)}\\ &\quad - \left(\varrho (\bar {\bm{u}} + {\bm{u}}) \cdot \nabla_{\bm{x}} \dot{\bm{u}}, (\bar h+h) \dot{\bm{u}}\right)_{L^2(\Omega)} + \kappa \left( \varrho (\nabla_{\bm{x}} h \cdot \nabla_{\bm{x}}) \dot{\bm{u}}, \dot{\bm{u}}\right)_{L^2(\Omega)}& {\rm (iii)} \\ & \quad -\big( \rho_0\nabla_{\bm{x}} \dot\cH\big\vert_{\varrho=\rho_0} , (\bar h+h) \dot{\bm{u}}\big)_{L^2(\Omega)} - \big( \int_{\rho_0}^\varrho \nabla_{\bm{x}} \dot\cH \, d\varrho', (\bar h+h) \dot{\bm{u}} \big)_{L^2(\Omega)}& {\rm (iv)} \\ & \quad -\nu \big( \varrho (\nabla_{\bm{x}} h\cdot\nabla)\dot{\bm{u}} , \dot{\bm{u}}\big)_{L^2(\Omega)} + \big(\varrho{\bm{R}}, (\bar h+h) \dot{\bm{u}}\big)_{L^2(\Omega)}& {\rm (v)} \\ &\quad -\rho_0\left(\big((\bar{\bm{u}}+{\bm{u}})\cdot \nabla_{\bm{x}} \dot \cH\big)\big\vert_{\varrho=\rho_0},\dot\cH\big\vert_{\varrho=\rho_0} \right)_{L^2_{\bm{x}}} -\rho_0\left(\int_{\rho_0}^{\rho_1}(\bar{\bm{u}}'+{\partial}_\varrho{\bm{u}})\cdot 
\nabla_{\bm{x}} \dot \cH\dd\varrho',\dot\cH\big\vert_{\varrho=\rho_0} \right)_{L^2_{\bm{x}}} & {\rm (vi)}\\ &\quad -\rho_0\left(\int_{\rho_0}^{\rho_1}(\bar h+h)\nabla_{\bm{x}} \cdot\dot{\bm{u}} \dd\varrho',\dot\cH\big\vert_{\varrho=\rho_0} \right)_{L^2_{\bm{x}}} +\rho_0\left(R\big\vert_{\varrho=\rho_0} ,\dot\cH\big\vert_{\varrho=\rho_0} \right)_{L^2_{\bm{x}}}& {\rm (vii)}\\ & \quad + \tfrac 12 (\varrho ({\partial}_t h) \dot {\bm{u}}, \dot {\bm{u}})_{L^2(\Omega)} . & {\rm (viii)}\\ \end{align*} We first consider the second terms in (i) and (vi). By an immediate application of the Cauchy-Schwarz inequality and the continuous embedding $L^\infty((\rho_0,\rho_1))\subset L^2((\rho_0,\rho_1))$, we have \begin{align} & \left| \left( \int_\varrho^{\rho_1} (\bar {\bm{u}}' + {\partial}_\varrho {\bm{u}}) \cdot \nabla_{\bm{x}} \dot\cH \, d\varrho' , \dot\cH \right)_{L^2(\Omega)} \right| +\rho_0\left| \left( \int_{\rho_0}^{\rho_1} (\bar {\bm{u}}' + {\partial}_\varrho {\bm{u}}) \cdot \nabla_{\bm{x}} \dot\cH \, d\varrho' , \dot\cH\big\vert_{\varrho=\rho_0} \right)_{ L^2_{\bm{x}}} \right| \nonumber\\ &\qquad \lesssim \Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho} \Norm{\nabla_{\bm{x}} \dot\cH }_{L^2(\Omega)} \Big(\Norm{\dot\cH }_{L^2(\Omega)}+ \norm{ \dot\cH\big\vert_{\varrho=\rho_0} }_{ L^2_{\bm{x}}} \Big). \label{eq:badterms} \end{align} Notice that the right-hand side of~\eqref{eq:badterms} cannot be bounded by the energy functional $\mathcal{E}(\dot\cH,\dot{\bm{u}})$, and this is exactly the point where we use the assumption $\kappa>0$. Let us now estimate all the other terms. 
Using integration by parts in the variable ${\bm{x}}$, we estimate the first addend of (i) and (vi) as follows: \begin{multline*} \left| \left( (\bar {\bm{u}} + {\bm{u}}) \cdot \nabla_{\bm{x}} \dot\cH , \dot\cH \right)_{L^2(\Omega)} \right| +\rho_0\left| \left( \big((\bar {\bm{u}} + {\bm{u}}) \cdot \nabla_{\bm{x}} \dot\cH\big)\big\vert_{\varrho=\rho_0} , \dot\cH\big\vert_{\varrho=\rho_0} \right)_{ L^2_{\bm{x}}} \right| \nonumber\\ \lesssim \Norm{ \nabla_{\bm{x}}\cdot {\bm{u}} }_{L^\infty(\Omega)} \Norm{ \dot\cH }_{L^2(\Omega)}^2 + \norm{ \nabla_{\bm{x}} \cdot{\bm{u}}\big\vert_{\varrho=\rho_0} }_{ L^\infty_{\bm{x}}} \norm{ \dot\cH\big\vert_{\varrho=\rho_0} }_{ L^2_{\bm{x}}}^2 . \end{multline*} The contributions in (iii) and (viii) compensate after integration by parts in ${\bm{x}}$, using the first equation in~\eqref{eq:hydro-iso-nu}. Now consider the first addend of (ii) together with the second addend of (iv). By application of Fubini's theorem we have \[ \int_{\mathbb{R}^d} \int_{\rho_0}^{\rho_1} \left(\int_{\rho_0}^\varrho \nabla_{\bm{x}} \dot\cH(\varrho') \, \dd\varrho'\,\right) \cdot (\bar h+h)(\varrho) \dot{\bm{u}}(\varrho) \, \dd\varrho \, \dd{\bm{x}}= \int_{\mathbb{R}^d} \int_{\rho_0}^{\rho_1}\left(\int_{\varrho'}^{\rho_1} (\bar h+h)(\varrho) \dot{\bm{u}}(\varrho) \, \dd \varrho \right)\cdot \nabla_{\bm{x}} \dot\cH(\varrho') \, \dd\varrho'\, \dd{\bm{x}}\] and hence, integrating by parts in ${\bm{x}}$, we infer \begin{align*} &\Bigg| \int_{\mathbb{R}^d} \int_{\rho_0}^{\rho_1} \int_\varrho^{\rho_1} (\bar h+h)(\varrho') \nabla_{\bm{x}} \cdot \dot{\bm{u}}(\varrho')\, \dd\varrho'\, \dot\cH(\varrho) \, \dd\varrho\, \dd{\bm{x}} \\ & \qquad + \int_{\mathbb{R}^d} \int_{\rho_0}^{\rho_1} \left(\int_{\rho_0}^\varrho \nabla_{\bm{x}} \dot\cH(\varrho') \, \dd\varrho'\,\right) \cdot (\bar h+h)(\varrho) \dot{\bm{u}}(\varrho) \, \dd\varrho \, \dd{\bm{x}} \Bigg| \\ & \quad = \Bigg|\int_{\mathbb{R}^d} \int_{\rho_0}^{\rho_1} \int_\varrho^{\rho_1} (\nabla_{\bm{x}} h)(\varrho') 
\cdot\dot{\bm{u}}(\varrho') \dot\cH(\varrho) \, \, \dd \varrho'\, \dd\varrho\, \dd{\bm{x}}\Bigg| \lesssim \Norm{ \nabla_{\bm{x}} h}_{L^\infty_{\bm{x}} L^2_\varrho} \Norm{\dot{\bm{u}}}_{L^2(\Omega)}\Norm{\dot\cH}_{L^2(\Omega)}. \end{align*} Concerning the first addend of (iv) and the first addend of (vii), we have after integrating by parts with respect to the ${\bm{x}}$ variable and using the Cauchy-Schwarz inequality \begin{align*}&\left|-\left(\rho_0\nabla_{\bm{x}} \dot\cH\big\vert_{\varrho=\rho_0} , (\bar h+h) \dot{\bm{u}}\right)_{L^2(\Omega)} - \rho_0\left(\int_{\rho_0}^{\rho_1}(\bar h+h)\nabla_{\bm{x}} \cdot\dot{\bm{u}} \dd\varrho,\dot\cH\big\vert_{\varrho=\rho_0} \right)_{L^2_{\bm{x}}} \right| \\ &\quad =\rho_0\left| \left(\int_{\rho_0}^{\rho_1}(\nabla_{\bm{x}} h) \cdot\dot{\bm{u}} \dd\varrho,\dot\cH\big\vert_{\varrho=\rho_0} \right)_{L^2_{\bm{x}}} \right| \lesssim\Norm{ \nabla_{\bm{x}} h}_{L^\infty_{\bm{x}} L^2_\varrho} \Norm{\dot{\bm{u}}}_{L^2(\Omega)} \norm{\dot\cH\big\vert_{\varrho=\rho_0}}_{L^2_{\bm{x}}}. \end{align*} Concerning the first addend of (v), we have for an arbitrarily large constant $K>0$, \[ \nu \left| \big( \varrho (\nabla_{\bm{x}} h\cdot\nabla)\dot{\bm{u}} , \dot{\bm{u}}\big)_{L^2(\Omega)} \right| \leq \frac{1}{2K} \nu \Norm{ \nabla_{\bm{x}}\dot{\bm{u}}}_{L^2(\Omega)}^2 + \frac{K\rho_1^2}2 \nu\Norm{ \nabla_{\bm{x}} h }_{L^\infty(\Omega)}^2 \Norm{\dot{\bm{u}} }_{L^2(\Omega)}^2.\] The last contributions, namely \[\Big| \big(R, \dot\cH\big)_{L^2(\Omega)} + \big({\bm{R}},\varrho (\bar h+h) \dot{\bm{u}}\big)_{L^2(\Omega)} + {\rho_0\big(R\big\vert_{\varrho=\rho_0} ,\dot\cH\big\vert_{\varrho=\rho_0} \big)_{L^2_{\bm{x}}}} \Big|,\] are easily controlled by means of the Cauchy-Schwarz inequality. 
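Before concluding, let us point out how the contribution of~\eqref{eq:badterms} is eventually absorbed by the dissipation: by Young's inequality,

```latex
C\,\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho}\,
\mathcal{E}(\dot\cH, \dot{\bm{u}})^{1/2}\, \Norm{\nabla_{\bm{x}} \dot \cH}_{L^2(\Omega)}
\leq \frac{\kappa}{2}\Norm{\nabla_{\bm{x}} \dot \cH}_{L^2(\Omega)}^2
+ \frac{C^2}{2\kappa}\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho}^2\,
\mathcal{E}(\dot\cH, \dot{\bm{u}}) ,
```

which is the source of the factor $\kappa^{-1}$ in the exponential of the final estimate.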
Collecting all of the above, and using that \[ \mathcal{E}(\dot\cH,\dot{\bm{u}}) \approx \Norm{\dot\cH}_{L^2(\Omega)}^2 + \Norm{\dot{\bm{u}}}_{L^2(\Omega)}^2+ \norm{\dot\cH\big\vert_{\varrho=\rho_0}}_{L^2_{\bm{x}}}^2\] and \[\nu\sum_{i=1}^d\int_\Omega \varrho (\bar h+h) |\partial_{x_i}\dot{\bm{u}}|^2\dd{\bm{x}}\dd\varrho \gtrsim \nu \Norm{\nabla_{\bm{x}}\dot{\bm{u}}}_{L^2(\Omega)}^2\] since $\rho_0 h_\star\leq \varrho(\bar h+h)\leq \rho_1 h^\star$, we obtain (choosing $K$ sufficiently large) \begin{multline*} \frac{\dd}{\dd t} \mathcal{E}(\dot\cH, \dot{\bm{u}})+ \kappa \Norm{\nabla_{\bm{x}} \dot \cH}_{L^2(\Omega)}^2 + \rho_0\kappa \norm{\nabla_{\bm{x}} \dot \cH \big\vert_{\varrho=\rho_0} }_{ L^2_{\bm{x}}}^2\\ \leq C\, \mathcal{E}(\dot\cH, \dot{\bm{u}}) +C \Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho} \mathcal{E}(\dot\cH, \dot{\bm{u}})^{1/2} \Norm{\nabla_{\bm{x}} \dot \cH}_{L^2(\Omega)}\\ + C \mathcal{E}(\dot\cH, \dot{\bm{u}})^{1/2} \mathcal{E}(R, {\bm{R}})^{1/2} , \end{multline*} with $C=C(h_\star,h^\star,M)$. We deduce (augmenting $C$ if necessary) \begin{multline*} \frac{\dd}{\dd t} \mathcal{E}(\dot\cH, \dot{\bm{u}})+ \frac\kappa2 \Norm{\nabla_{\bm{x}} \dot\cH}_{L^2(\Omega)}^2 + \rho_0\kappa \norm{\nabla_{\bm{x}} \dot \cH \big\vert_{\varrho=\rho_0} }_{ L^2_{\bm{x}}}^2\\ \leq C\, \big(1+ \kappa^{-1}\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho}^2 \big)\mathcal{E}(\dot\cH, \dot{\bm{u}}) + C \mathcal{E}(\dot\cH, \dot{\bm{u}})^{1/2} \mathcal{E}(R, {\bm{R}})^{1/2} , \end{multline*} and the desired estimate follows by Gronwall's inequality. \end{proof} \subsection{Large-time existence; proof of Theorem~\ref{thm-well-posedness}} We prove the large-time existence and energy estimates on solutions to the regularized system~\eqref{eq:hydro-iso-nu} in the following result. 
Compared with Proposition~\ref{P.WP-nu}, we provide an existence time which is uniformly bounded (from below) with respect to the artificial regularization parameter $\nu>0$, and specify the dependence on the diffusivity parameter $\kappa$, in relation to the size of the data. It is in this sense that the existence of strong solutions to the hydrostatic system holds for \emph{large} times. We then complete the proof of Theorem~\ref{thm-well-posedness} at the end of this section. \begin{proposition}\label{P.regularized-large-time-WP} Let $s, k \in \mathbb{N}$ be such that $s> 2+\frac d 2$, $2\leq k\leq s$, and $\bar M,M^\star,h_\star,h^\star>0$. Then, there exists $C>0$ such that, for any $ 0< \nu \leq \kappa\leq 1$, and \begin{itemize} \item for any $(\bar h, \bar {\bm{u}}) \in W^{k,\infty}((\rho_0,\rho_1))$ such that \[ \norm{\bar h}_{W^{k,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho }\leq \bar M\,;\] \item for any initial data $(h_0, {\bm{u}}_0)=(h_0({\bm{x}}, \varrho), {\bm{u}}_0({\bm{x}}, \varrho)) \in H^{s,k}(\Omega)$ with \[ M_0:= \Norm{\cH_0}_{H^{s,k}}+\Norm{{\bm{u}}_0}_{H^{s,k}}+\norm{\cH_0\big\vert_{\varrho=\rho_0}}_{H^s_{\bm{x}}}+\kappa^{1/2}\Norm{h_0}_{H^{s,k}} \leq M^\star, \] and \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star \leq \bar h(\varrho)+h_0({\bm{x}},\varrho) \leq h^\star , \] \end{itemize} the following holds. Denoting \[ T^{-1}= C\, \big(1+ \kappa^{-1} \big(\norm{\bar {\bm{u}}'}_{L^2_\varrho}^2+M_0^2\big) \big), \] there exists a unique strong solution $(h,{\bm{u}})\in \mathcal{C}([0,T];H^{s,k}(\Omega)^{1+d})$ to the Cauchy problem associated with~\eqref{eq:hydro-iso-nu} and initial data $(h,{\bm{u}})\big\vert_{t=0}=(h_0,{\bm{u}}_0)$. 
Moreover, $h\in L^2(0,T;H^{s+1,k}(\Omega))$ and one has, for any $t\in[0,T]$, the lower and the upper bounds \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star/2 \leq \bar h(\varrho)+h(t,{\bm{x}},\varrho) \leq 2\,h^\star , \] and the estimate \begin{multline*}\mathcal{F}(t):= \Norm{\cH(t,\cdot)}_{H^{s,k}}+\Norm{{\bm{u}}(t,\cdot)}_{H^{s,k}} +\norm{\cH\big\vert_{\varrho=\rho_0}(t,\cdot)}_{H^s_{\bm{x}}} +\kappa^{1/2}\Norm{h(t,\cdot)}_{H^{s,k}} \\+ \kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH}_{L^2(0,t;H^{s,k})} + \kappa^{1/2} \norm{\nabla_{\bm{x}} \cH \big\vert_{\varrho=\rho_0} }_{ L^2(0,t;H^s_{\bm{x}})} +\kappa \Norm{\nabla_{\bm{x}} h}_{L^2(0,t;H^{s,k})} \leq C M_0. \end{multline*} \end{proposition} \begin{proof}Let us denote by $T^\star\in(0,+\infty]$ the maximal time of existence of $(h,{\bm{u}})\in \mathcal{C}^0([0,T^\star);H^{s,k}(\Omega))$ as provided by Proposition~\ref{P.WP-nu}, and \begin{multline*}T_\star=\sup\Big\{ 0<T< T^\star \ : \ \forall t\in (0,T), \qquad h_\star/2\leq \bar h(\varrho)+h(t,{\bm{x}},\varrho) \leq 2\, h^\star \quad \text{ and } \quad \mathcal{F}(t)\leq C_0 M_0\Big\} , \end{multline*} where $C_0>1$ will be determined later on. By the continuity in time of the solution, and using that the linear operator $h\mapsto \cH:= \int_{\varrho}^{\rho_1}h(\cdot,\varrho' )\dd\varrho'$ (resp. $h\mapsto\cH\big\vert_{\varrho=\rho_0}$) is well-defined and bounded from $H^{s,k}(\Omega)$ to itself (resp. $H^s_{\bm{x}}(\mathbb{R}^d)$), we have $T_\star>0$. 
Using Lemma~\ref{lem:quasilinearization},~\ref{lem:estimate-transport-diffusion} and~\ref{lem:estimate-system} and, therein, the inequalities $\Norm{h}_{H^{s-1,k-1}}=\Norm{{\partial}_\varrho \cH}_{H^{s-1,k-1}}\leq \Norm{\cH}_{H^{s,k}}$, and (since $\nu\leq \kappa$) $\nu^{1/2}\Norm{ \nabla_{\bm{x}} h }_{L^\infty(\Omega)}\leq \kappa^{1/2}\Norm{h}_{H^{s,k}}$, we find that there exists $c_0>1$ depending only on $\rho_0h_\star, \rho_1h^\star$; and $C>0$ depending on $\bar M, h_\star, h^\star, C_0M_0$ such that for any $0<t<T_\star$, \begin{multline} \Norm{\cH(t,\cdot)}_{H^{s,0}}+\Norm{{\bm{u}}(t,\cdot)}_{H^{s,0}} + \norm{ \cH\big\vert_{\varrho=\rho_0}(t,\cdot) }_{H^s_{\bm{x}}} + \kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH}_{L^2(0,t;H^{s,0})} + \kappa^{1/2} \norm{\nabla_{\bm{x}} \cH \big\vert_{\varrho=\rho_0} }_{ L^2(0,t;H^s_{\bm{x}})}\\ \leq c_0 \left( \Norm{\cH_0}_{H^{s,0}}+\Norm{{\bm{u}}_0}_{H^{s,0}} + \norm{ \cH_0\big\vert_{\varrho=\rho_0} }_{H^s_{\bm{x}}} +C \,C_0M_0 \, \big(t+\sqrt t \big)\right)\\ \times \exp\Big( C \int_0^t \big(1 +\kappa^{-1}\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho}^2 \big)\dd\tau\Big); \end{multline} and (using a slightly adapted version of Lemma~\ref{lem:estimate-system} which does not involve the trace of $\partial_\varrho^j\cH$ at the surface) for any $1\leq j\leq k$ \begin{multline} \Norm{\partial_\varrho^j\cH(t,\cdot)}_{H^{s-j,0}} + \kappa^{1/2} \Norm{\nabla_{\bm{x}} \partial_\varrho^j \cH}_{L^2(0,t;H^{s-j,0})} \\ \leq \left( \Norm{\partial_\varrho^j\cH(0,\cdot)}_{H^{s-j,0}}+C\, C_0 M_0 \, \big(t+\sqrt t\big)\right)\\ \times \exp\Big( C \int_0^t \Norm{\nabla_{\bm{x}}\cdot {\bm{u}}(\tau,\cdot) }_{L^\infty(\Omega)} \dd \tau\Big), \end{multline} and \begin{multline} \Norm{\partial_\varrho^j{\bm{u}}(t,\cdot)}_{H^{s-j,0}} + \nu^{1/2} \Norm{\nabla_{\bm{x}} \partial_\varrho^j {\bm{u}}}_{L^2(0,t;H^{s-j,0})} \\ \leq \left( \Norm{\partial_\varrho^j{\bm{u}}(0,\cdot)}_{H^{s-j,0}}+C \, C_0M_0 \, \big(t+\sqrt t 
\big)\right)\\ \times \exp\Big( C \int_0^t \Norm{\nabla_{\bm{x}}\cdot \big({\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+h}\big)(\tau,\cdot) }_{L^\infty(\Omega)} \dd \tau\Big); \end{multline} and finally for any $0\leq j\leq k$ \begin{multline} \kappa^{1/2}\Norm{\partial_\varrho^jh(t,\cdot)}_{H^{s-j,0}} + \kappa \Norm{\nabla_{\bm{x}} \partial_\varrho^j h}_{L^2(0,t;H^{s-j,0})} \\ \leq \left(\kappa^{1/2} \Norm{\partial_\varrho^jh(0,\cdot)}_{H^{s-j,0}}+C\, C_0 M_0\, \big( t+\sqrt{t} \big)\right)\\ \times \exp\Big( C \int_0^t \Norm{\nabla_{\bm{x}}\cdot {\bm{u}}(\tau,\cdot) }_{L^\infty(\Omega)} \dd \tau\Big). \end{multline} By the continuous embeddings $H^{s_0+\frac12,1}\subset L^\infty_\varrho H^{s_0}\subset L^\infty(\Omega)$ for any $s_0>d/2$ (see Lemma~\ref{L.embedding}) and since $k\geq 1$ and $s>\frac32+\frac{d}2$, we have \[\Norm{\nabla_{\bm{x}}\cdot {\bm{u}}}_{L^\infty(\Omega)} + \Norm{\nabla_{\bm{x}}\cdot \big({\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+h}\big) }_{L^\infty(\Omega)} \leq C(h_\star)\big( \Norm{{\bm{u}}}_{H^{s,k}} +\kappa\Norm{ h}_{H^{s,k}}^2 +\kappa\Norm{ \nabla_{\bm{x}} h}_{H^{s,k}}\big) .\] We deduce that \[ \mathcal{F}(t)\leq c_0 \Big( M_0 +C \, C_0 M_0 \, \big(t +\sqrt t\big)\Big) \times \exp\Big( C\big( t+\sqrt t + \kappa^{-1}\int_0^t\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}(\tau,\cdot) }_{L^\infty_{\bm{x}} L^2_\varrho}^2\dd\tau \big)\Big), \] where we recall that $c_0>1$ depends only on $\rho_0h_\star$ and $\rho_1h^\star$; and $C>0$ depends on $\bar M, C_0M_0, h_\star, h^\star$. 
Hence choosing $C_0=2c_0$ and using that (by Lemma~\ref{L.embedding} and since $k\geq 2$ and $s>\frac32+\frac{d}2$) \[ \Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho}^2\leq \Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^2_\varrho L^\infty_{\bm{x}}}^2 \lesssim \norm{\bar {\bm{u}}'}_{L^2_\varrho}^2 +\Norm{{\bm{u}}}_{H^{s,k}}^2\leq \norm{\bar {\bm{u}}'}_{L^2_\varrho}^2+(C_0M_0)^2,\] we find that there exists $C_1\geq 1$ depending only on $\bar M,M^\star, h_\star, h^\star$ such that \[t \big(1+\kappa^{-1}\big(\norm{\bar {\bm{u}}'}_{L^2_\varrho}^2+M_0^2\big)\big) \leq C_1^{-1} \quad \Longrightarrow \quad \mathcal{F}(t)\leq \frac34 C_0 M_0.\] Now we remark that, since \[\partial_th+\bar{\bm{u}}\cdot\nabla_{\bm{x}} h=\kappa\Delta_{\bm{x}} h+g \quad \text{ with } \quad g=-\nabla_{\bm{x}}\cdot(\bar h {\bm{u}}+h{\bm{u}}),\] the positivity of the heat kernel yields \[\inf_{\Omega} h(t,\cdot) \geq \inf_\Omega h_0 - \Norm{g}_{L^1(0,t;L^\infty(\Omega) )}, \qquad \sup_{\Omega} h(t,\cdot) \leq \sup_\Omega h_0 + \Norm{g}_{L^1(0,t;L^\infty(\Omega))}.\] Now, by the continuous embedding $H^{s-1,1}(\Omega)\subset L^\infty(\Omega)$ (since $s>\frac 3 2 + \frac d 2$), we have \[ \Norm{g}_{L^\infty(\Omega)} \lesssim \norm{\bar h}_{W^{1,\infty}_\varrho} \Norm{{\bm{u}}}_{H^{s,1}} +\Norm{h}_{H^{s,1}}\Norm{{\bm{u}}}_{H^{s,1}} \leq C(\bar M) (1+\kappa^{-1}M_0^2) . \] Hence augmenting $C_1$ if necessary we find that \[ t (1+\kappa^{-1}M_0^2)\leq C_1^{-1} \quad \Longrightarrow \quad \forall({\bm{x}},\varrho)\in\Omega,\quad \frac23 h_\star\leq \bar h(\varrho)+h(t,{\bm{x}},\varrho) \leq \frac32 h^\star.\] By a continuity argument we infer $T_\star\geq \Big( C \big(1+\kappa^{-1}\big(\norm{\bar {\bm{u}}'}_{L^2_\varrho}^2+M_0^2\big)\big) \Big)^{-1}$, and the proof is complete. 
\end{proof} \bigskip {\bf Completion of the proof of Theorem~\ref{thm-well-posedness}} \medskip In order to complete the proof of Theorem~\ref{thm-well-posedness}, it remains to consider the vanishing viscosity limit $\nu\searrow 0$ in Proposition~\ref{P.regularized-large-time-WP}. Let us briefly sketch the standard argument. By Proposition~\ref{P.regularized-large-time-WP}, we construct a family $(h_\nu,{\bm{u}}_\nu)\in \mathcal{C}^0([0,T];H^{s,k}(\Omega))$ of solutions to~\eqref{eq:hydro-iso-nu} with $(h_\nu,{\bm{u}}_\nu)\big\vert_{t=0}=(h_0,{\bm{u}}_0)$, indexed by the parameter $\nu>0$. Notice that the time of existence and the associated bounds provided by Proposition~\ref{P.regularized-large-time-WP} are uniform with respect to the parameter $\nu>0$. Hence by the Banach-Alaoglu theorem there exists a subsequence which converges weakly towards $(h,{\bm{u}})\in L^\infty(0,T;H^{s,k}(\Omega)^{1+d})$, satisfying the estimates of Proposition~\ref{P.regularized-large-time-WP}. Using the equations, we find that $(\partial_t h_\nu,\partial_t {\bm{u}}_\nu)$ are uniformly bounded in $L^\infty(0,T;H^{s-2,k})$. The Aubin-Lions lemma (see~\cite{Simon87}) implies that, up to extracting a subsequence, the convergence towards $(h,{\bm{u}})$ holds strongly in $\mathcal{C}^0([0,T];H^{s',k}(B)^{1+d})$ for any $0\leq s'<s$ and any bounded $B\subset \mathbb{R}^d\times(\rho_0,\rho_1)$. Choosing $s'>3/2+d/2$ and using Lemma~\ref{L.embedding} and the Sobolev embedding, we can pass to the limit in the nonlinear terms of the equation and infer that $(h,{\bm{u}})$ is a strong solution to~\eqref{eq:hydro-iso-nu} with $\nu=0$. Moreover, since $(h,{\bm{u}})\in \mathcal{C}^0([0,T];H^{s-2,k}(\Omega)^{1+d})$, we have by interpolation $(h,{\bm{u}})\in \mathcal{C}^0([0,T];H^{s',k}(\Omega)^{1+d})$ for any $0\leq s'<s$. Uniqueness of the solution $(h,{\bm{u}})\in L^\infty(0,T;H^{s,k}(\Omega)^{1+d})$ follows by using Lemma~\ref{lem:estimate-system} on the difference between two solutions, and Gronwall's Lemma. 
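Schematically, if $(h_1,{\bm{u}}_1)$ and $(h_2,{\bm{u}}_2)$ are two such solutions emanating from the same initial data, the differences $(\dot\cH,\dot{\bm{u}})=(\cH_1-\cH_2,{\bm{u}}_1-{\bm{u}}_2)$ satisfy a system of the form~\eqref{eq.system} with remainders $(R,{\bm{R}})$ controlled by $\mathcal{E}(\dot\cH,\dot{\bm{u}})^{1/2}$, so that Lemma~\ref{lem:estimate-system} and Gronwall's Lemma yield

```latex
\mathcal{E}\big(\dot\cH(t,\cdot),\dot{\bm{u}}(t,\cdot)\big)
\leq \mathcal{E}\big(\dot\cH(0,\cdot),\dot{\bm{u}}(0,\cdot)\big)
\exp\Big( C \int_0^t \big(1+ \kappa^{-1}\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}_1 (\tau,\cdot)}_{L^\infty_{\bm{x}} L^2_\varrho}^2 \big)\dd\tau\Big)=0 ,
```

whence $\dot\cH\equiv0$, $\dot{\bm{u}}\equiv0$, and finally $h_1=h_2$ since $h=-{\partial}_\varrho\cH$.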
It remains to prove that $(h,{\bm{u}})\in \mathcal{C}^0([0,T];H^{s,k}(\Omega)^{1+d})$. We prove the equivalent statement that for any ${\bm{\alpha}}\in\mathbb{N}^d$ and $j\in\mathbb{N}$ such that $0\leq |{\bm{\alpha}}|+j\leq s$ and $0\leq j\leq k$, $({\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} h, {\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}}{\bm{u}})\in \mathcal{C}^0([0,T];L^2(\Omega)^{1+d})$. By Lemma~\ref{lem:quasilinearization}, as long as $\kappa>0$, we can write \begin{align*} \partial_t ({\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} h)-\kappa\Delta_{\bm{x}} ({\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} h) &=r_{{\bm{\alpha}},j}+\nabla_{{\bm{x}}} \cdot {\bm{r}}_{{\bm{\alpha}},j},\\ \partial_t ({\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} {\bm{u}})+({\bm{v}}\cdot \nabla_{\bm{x}})({\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} {\bm{u}}) &=R_{{\bm{\alpha}},j}, \end{align*} with $(r_{{\bm{\alpha}},j},{\bm{r}}_{{\bm{\alpha}},j},R_{{\bm{\alpha}},j})\in L^2(0,T;L^2(\Omega))^{1+2d}$ and ${\bm{v}}(\cdot,\varrho):=\bar{\bm{u}}(\varrho)+\big({\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+h}\big)(\cdot,\varrho)\in W^{k,\infty}((\rho_0,\rho_1))^d+ L^2(0,T;H^{s,k}(\Omega))^d$. In other words, ${\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} h$ satisfies a heat equation and continuity in time stems from the Duhamel formula, as already used in Proposition~\ref{P.WP-nu}; and ${\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} {\bm{u}}$ satisfies a transport equation and continuity in time is standard, see {\em e.g.}~\cite{BCD11}*{Th.~3.19}. Let us acknowledge however that our situation is slightly different, since $\Omega$ is neither the Euclidean space nor the torus, and advection occurs only in the direction ${\bm{x}}$ (and not $\varrho$). 
It is however easy to adapt the proof of~\cite{BCD11}*{Th.~3.19} to infer ${\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} {\bm{u}} \in L^2(\rho_0,\rho_1;\mathcal{C}^0([0,T];L^2(\mathbb{R}^d)))^d\subset \mathcal{C}^0([0,T];L^2(\Omega))^d$ from the facts that $R_{{\bm{\alpha}},j} \in L^2(0,T;L^2(\Omega))^d\subset L^2(\rho_0,\rho_1; L^1(0,T;L^2(\mathbb{R}^d)))^d $ and $\nabla_{\bm{x}} {\bm{v}}\in L^2(0,T;H^{s-1,k}(\Omega))^{d\times d} \subset L^1(0,T;L^\infty(\rho_0,\rho_1;H^{s-3/2}(\mathbb{R}^d)))^{d\times d}$ and $s-3/2>d/2$, the continuous embeddings following from Minkowski inequality and Lemma~\ref{L.embedding}. This concludes the proof of Theorem~\ref{thm-well-posedness}. \section{The non-hydrostatic system}\label{S.NONHydro} In this section we study the local well-posedness theory for the non-hydrostatic system in isopycnal coordinates, which we recall below. \begin{equation}\label{eq:nonhydro-iso-recall} \begin{aligned} \partial_t h+\nabla_{ {\bm{x}}} \cdot\big((\bar h +h)(\bar{\bm{u}} +{\bm{u}})\big)&= \kappa \Delta_{ {\bm{x}}} h,\\ \varrho\Big( \partial_t {\bm{u}}+\big((\bar {\bm{u}} + {\bm{u}} - \kappa\tfrac{\nabla_{ {\bm{x}}} h}{ \bar h+ h})\cdot\nabla_{\bm{x}}\big) {\bm{u}}\Big)+ \nabla_{ {\bm{x}}} P+ \frac{\nabla_{ {\bm{x}}} \cH}{\bar h+ h}( {\partial}_\varrho P + \varrho \bar h) &=0,\\ \mu \varrho\Big( \partial_t w+\big(\bar {\bm{u}} + {\bm{u}} - \kappa\tfrac{\nabla_{ {\bm{x}}} h}{ \bar h+ h}\big)\cdot\nabla_{ {\bm{x}}} w\Big)- \frac{{\partial}_\varrho P}{\bar h+ h} + \frac{\varrho h}{\bar h + h}&=0,\\ -(\bar h + h) \nabla_{{\bm{x}}} \cdot {\bm{u}}-\nabla_{{\bm{x}}} \cH \cdot ({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}) +\partial_\varrho w&=0, \quad \text{(div.-free cond.)} \\ \cH(\cdot, \varrho)=\int_{\varrho}^{\rho_1} h(\cdot, \varrho')\dd\varrho', \qquad P\big|_{\varrho=\rho_0} = 0, \qquad w\big|_{\varrho=\rho_1}&=0. \quad \text{(bound. 
cond.)} \end{aligned} \end{equation} \subsection{The pressure reconstruction}\label{S.Nonhydro:Poisson} The first step of our analysis consists in showing how the pressure variable, $P$, can be uniquely reconstructed (thanks to the ``divergence-free'' incompressibility constraint) from prognostic variables ${\bm{u}}$, $w$ and $h$ (or, equivalently, $\cH$), through an elliptic boundary-value problem. Differentiating the ``divergence-free'' incompressibility constraint in~\eqref{eq:nonhydro-iso-recall} with respect to time yields \[-(\bar h+h)\nabla_{\bm{x}}\cdot \partial_t {\bm{u}}-(\nabla_{\bm{x}} \cH)\cdot(\partial_\varrho \partial_t {\bm{u}})+\partial_\varrho \partial_t w=(\partial_t h )(\nabla_{\bm{x}}\cdot {\bm{u}}) +(\nabla_{\bm{x}} \partial_t \cH)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}).\] We plug the expressions for ${\partial}_t {\bm{u}}, {\partial}_t w, {\partial}_t h, {\partial}_t \cH$ provided by~\eqref{eq:nonhydro-iso-recall} inside the above identity. Reorganizing terms, this yields the following \begin{multline*}\textstyle (\bar h+h)\nabla_{\bm{x}}\cdot \left( \frac1\varrho\nabla_{\bm{x}} P+\frac{ \nabla_{\bm{x}} \cH (\partial_\varrho P+\varrho \bar h)}{ \varrho (\bar h+h)}\right)+(\nabla_{\bm{x}} \cH)\cdot\left(\partial_\varrho \left( \frac1\varrho\nabla_{\bm{x}} P+\frac{\nabla_{\bm{x}} \cH(\partial_\varrho P + \varrho \bar h)}{\varrho (\bar h+h)} \right)\right) +\partial_\varrho \left( \frac{{\partial}_\varrho P + \varrho \bar h }{\mu\varrho (\bar h+h)} \right)\\ \textstyle=-(\bar h + h) \nabla_{\bm{x}}\cdot \left(\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}}\big) {\bm{u}} \right) -(\nabla_{\bm{x}} \cH)\cdot\left(\partial_\varrho \left(\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}} \big){\bm{u}} \right) \right)\\ +\partial_\varrho\left(\big({\bar {\bm{u}}}+ {\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h}\big)\cdot\nabla_{\bm{x}} w\right) 
+\Big(\kappa\Delta_{\bm{x}} h - \nabla_{\bm{x}}\cdot\big((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}})\big) \Big)(\nabla_{\bm{x}}\cdot {\bm{u}}) \\ \textstyle+\left( \kappa\nabla_{\bm{x}}\Delta_{\bm{x}} \cH -\nabla_{\bm{x}}\int_{\varrho}^{\rho_1} \nabla_{\bm{x}}\cdot((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}})) \dd\varrho' \right)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}). \end{multline*} Using that ${\partial}_\varrho\nabla_{\bm{x}}\cH =-\nabla_{\bm{x}} h$ we can rewrite the left-hand side in a compact formulation as \[{\rm (LHS)}=\frac1\mu \begin{pmatrix} \sqrt\mu\nabla_{\bm{x}} \\ {\partial}_\varrho \end{pmatrix}\cdot \left( \begin{pmatrix} \frac{\bar h+h}{\varrho}\Id& \frac{\sqrt\mu\nabla_{\bm{x}} \cH}{\varrho} \\ \frac{\sqrt\mu\nabla_{\bm{x}}^\top \cH}{\varrho}& \frac{1+\mu |\nabla_{\bm{x}} \cH|^2}{\varrho (\bar h+h)} \end{pmatrix} \begin{pmatrix} \sqrt\mu\nabla_{\bm{x}} \\ {\partial}_\varrho \end{pmatrix} (P+ \bar P_{\rm eq})\right), \] with $ \bar P_{\rm eq}:=\int_{\rho_0}^{\varrho} \varrho' \bar h(\varrho') \, \dd\varrho'$. As for the right-hand side, we denote \begin{equation}\label{def:ustar} {\bm{u}}_\star:= -\kappa\frac{\nabla_{\bm{x}} h}{\bar h+ h}, \end{equation} and we infer \begin{multline*} {\rm (RHS)}=-(\bar h+h) \nabla_{\bm{x}}\cdot \left(\big(({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star)\cdot\nabla_{\bm{x}} \big){\bm{u}} \right) -(\nabla_{\bm{x}} \cH)\cdot\left(\partial_\varrho \left(\big({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star\big)\cdot\nabla_{\bm{x}} {\bm{u}} \right) \right)\\ +\partial_\varrho\left(\big({\bar {\bm{u}}}+ {\bm{u}}+{\bm{u}}_\star\big)\cdot\nabla_{\bm{x}} w\right) - \nabla_{\bm{x}}\cdot((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star)) (\nabla_{\bm{x}}\cdot {\bm{u}}) \\ \textstyle\quad-\left(\nabla_{\bm{x}} \int_{\varrho}^{\rho_1} \nabla_{\bm{x}}\cdot((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star)) \dd\varrho' \right)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}). 
\end{multline*} Notice the identity (reminiscent of~\eqref{eq:h}) \begin{equation}\label{cancellation} \int_{\varrho}^{\rho_1} \nabla_{\bm{x}}\cdot((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star)) \dd\varrho' = ({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star)\cdot\nabla_{\bm{x}} \cH-w-w_\star, \end{equation} where \begin{equation}\label{def:wstar} w_\star:=\kappa\Delta_{\bm{x}} \cH-\kappa\frac{\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}} \cH}{\bar h+h}, \end{equation} which is obtained by integrating with respect to $\varrho$ the divergence-free identities \begin{align*}-(\bar h+h)\nabla_{\bm{x}}\cdot {\bm{u}}-(\nabla_{\bm{x}} \cH)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}})+\partial_\varrho w&=0,\\ -(\bar h+h)\nabla_{\bm{x}}\cdot {\bm{u}}_\star-(\nabla_{\bm{x}} \cH)\cdot(\partial_\varrho{\bm{u}}_\star)+\partial_\varrho w_\star&=0, \end{align*} integrating by parts with respect to $\varrho$, using the boundary condition $w\vert_{\varrho=\rho_1}=0=w_\star\vert_{\varrho=\rho_1}$ and $h=-{\partial}_\varrho \cH$. Hence the above can be equivalently written as \begin{multline}\label{eq.RHS} {\rm (RHS)}=-(\bar h+h) \nabla_{\bm{x}}\cdot \left(\big({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star\big)\cdot\nabla_{\bm{x}} {\bm{u}} \right) -(\nabla_{\bm{x}} \cH)\cdot\left(\partial_\varrho \left(\big({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star\big)\cdot\nabla_{\bm{x}} {\bm{u}} \right) \right)\\ +\partial_\varrho\left(\big({\bar {\bm{u}}}+ {\bm{u}}+{\bm{u}}_\star\big)\cdot\nabla_{\bm{x}} w\right) - \nabla_{\bm{x}}\cdot((\bar h+ h)({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star)) (\nabla_{\bm{x}}\cdot {\bm{u}}) \\ -\left(\nabla_{\bm{x}} \left(({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star)\cdot\nabla_{\bm{x}} \cH-w-w_\star \right) \right)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}). 
\end{multline} Taking into account the boundary conditions in~\eqref{eq:nonhydro-iso-recall}, we find that the pressure satisfies the following problem (recalling $ \bar P_{\rm eq}:=\int_{\rho_0}^{\varrho} \varrho' \bar h(\varrho') \, \dd\varrho'$): \begin{equation}\label{eq.Poisson} \left\{\begin{array}{l} \displaystyle\frac1\mu \begin{pmatrix} \sqrt\mu\nabla_{\bm{x}} \\ {\partial}_\varrho \end{pmatrix}\cdot \left( \begin{pmatrix} \frac{\bar h+h}{\varrho}\Id& \frac{\sqrt\mu\nabla_{\bm{x}} \cH}{\varrho} \\ \frac{\sqrt\mu\nabla_{\bm{x}}^\top \cH}{\varrho}& \frac{1+\mu |\nabla_{\bm{x}} \cH|^2}{\varrho (\bar h+h)} \end{pmatrix} \begin{pmatrix} \sqrt\mu\nabla_{\bm{x}} \\ {\partial}_\varrho \end{pmatrix} (P+ \bar P_{\rm eq}) \right)= {\rm (RHS)},\\ P\big\vert_{\varrho=\rho_0}=0, \qquad ({\partial}_\varrho P)\big\vert_{\varrho=\rho_1}=\rho_1 h\big\vert_{\varrho=\rho_1}. \end{array}\right. \end{equation} This boundary value problem corresponds to~\cite{Desjardins-Lannes-Saut}*{(7)} written in isopycnal coordinates, adapting the boundary conditions to the free-surface framework, and taking into account the effective transport velocities from eddy correlation. Following~\cite{Desjardins-Lannes-Saut}, we shall infer the existence and uniqueness as well as estimates on the pressure $P$ from the elliptic theory applied to the above boundary value problem, as stated below. \begin{lemma}\label{L.Poisson} Let $s_0>d/2$, $s,k\in\mathbb{N}$ such that $s\geq s_0+\frac52$ and $1\leq k\leq s$. Let $\bar M,M,h_\star>0$. 
There exists $C>0$ such that for any $\mu\in(0,1]$, and for any $\bar h,h,\cH$ satisfying the following bound \[ \norm{\bar h}_{W^{1\vee k-1,\infty}_\varrho}\leq \bar M,\quad \Norm{h}_{H^{s-1,1\vee k-1}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{s-1,1\vee k-1}} \leq M; \] (where we recall the notation $a\vee b=\max(a,b)$) and the stable stratification assumption \[ \inf_{({\bm{x}},\varrho)\in\Omega} \bar h(\varrho)+h({\bm{x}},\varrho) \geq {h_\star}; \] and for any $(Q_0,Q_1,{\bm{R}})\in H^{s,k-1}(\Omega)^2\times H^{s,k}(\Omega)^{d+1}$, there exists a unique $P\in H^{s+1,k+1}(\Omega)$ solution to \begin{equation}\label{eq.Poisson-simple} \left\{\begin{array}{l} \nabla^\mu_{{\bm{x}},\varrho}\cdot\big( A^\mu \nabla^\mu_{{\bm{x}},\varrho} P\big) = Q_0+\sqrt\mu \Lambda Q_1 +\nabla^\mu_{{\bm{x}},\varrho}\cdot {\bm{R}} \\ P\big\vert_{\varrho=\rho_0}=0, \qquad {\bm{e}}_{d+1}\cdot (A^\mu \nabla^\mu_{{\bm{x}},\varrho} P)\big\vert_{\varrho=\rho_1}={\bm{e}}_{d+1}\cdot {\bm{R}}\big\vert_{\varrho=\rho_1} \end{array}\right.
\end{equation} where we denote $\Lambda:=(\Id-\Delta_{\bm{x}})^{1/2}$, \[ \nabla^\mu_{{\bm{x}},\varrho}:=\begin{pmatrix} \sqrt\mu\nabla_{\bm{x}} \\ {\partial}_\varrho \end{pmatrix} \qquad ; \qquad A^\mu:= \begin{pmatrix} \frac{\bar h+h}{\varrho}\Id& \frac{\sqrt\mu\nabla_{\bm{x}} \cH}{\varrho} \\ \frac{\sqrt\mu\nabla_{\bm{x}}^\top \cH}{\varrho}& \frac{1+ \mu|\nabla_{\bm{x}} \cH|^2}{\varrho (\bar h+h)} \end{pmatrix},\] and one has, denoting $\Norm{(Q_0,Q_1,{\bm{R}})}_{r,j}:=\Norm{Q_0}_{H^{r,j-1}} +\Norm{Q_1}_{H^{r,j-1}} +\Norm{{\bm{R}}}_{H^{r,j}}$, \begin{multline}\label{ineq:est-P} \Norm{P}_{L^2(\Omega)} +\Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{s,k}} \leq C\,\times \Big(\Norm{(Q_0,Q_1,{\bm{R}})}_{s,k}\\ + \big( \Norm{ h}_{H^{s,k}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{s,k}}\big)\, \Norm{(Q_0,Q_1,{\bm{R}})}_{s-1,1\vee k-1} \Big) \end{multline} and, when $k\geq 2$, \begin{equation}\label{ineq:est-Plow} \Norm{P}_{L^2(\Omega)} + \Norm{\nabla^\mu_{{\bm{x}},\varrho}P}_{H^{s-1,k-1}} \leq C \Norm{(Q_0,Q_1,{\bm{R}})}_{s-1,k-1}. \end{equation} \end{lemma} \begin{proof} Testing~\eqref{eq.Poisson-simple} with $ P$, using integration by parts and the boundary conditions, we find \[ -\int_{\mathbb{R}^d} \int_{\rho_0}^{\rho_1} A^\mu \nabla^\mu_{{\bm{x}},\varrho} P \cdot \nabla^\mu_{{\bm{x}},\varrho} P \, \dd \varrho\, \dd {\bm{x}} = \int_{\mathbb{R}^d} \int_{\rho_0}^{\rho_1} Q_0 {P} + Q_1 (\sqrt\mu\Lambda {P}) - {\bm{R}}\cdot \nabla^\mu_{{\bm{x}},\varrho} P \, \dd \varrho \, \dd {\bm{x}}.
\] For $(Q_0,Q_1,{\bm{R}})\in L^2(\Omega)^{2+d+1}$, the existence and uniqueness of a (variational) solution to~\eqref{eq.Poisson-simple} in the functional space \[ H^1_0(\Omega):=\{ P\in L^2(\Omega) \ : \ \nabla_{{\bm{x}},\varrho} P \in L^2(\Omega) , \ P\big\vert_{\varrho=\rho_0} = 0\}\] classically follows from the Lax-Milgram Lemma thanks to the boundedness and the coercivity of the matrix $A^\mu$ (recall that $\bar h+h\geq h_\star>0$ and the embedding of Lemma~\ref{L.embedding}), and the Poincar\'e inequality \begin{equation}\label{eq.Poincare} \forall P\in H^1_0(\Omega), \qquad \Norm{P}_{L^2(\Omega)}^2 = \int_{\mathbb{R}^d} \int_{\rho_0}^{\rho_1} \left|\int_{\rho_0}^{\varrho} \partial_{\varrho'} P\dd \varrho'\right|^2 \dd\varrho \dd{\bm{x}} \leq (\rho_1-\rho_0)^2 \Norm{{\partial}_\varrho P }_{L^2(\Omega)}^2, \end{equation} and we have \begin{equation} \Norm{\nabla^\mu_{{\bm{x}},\varrho}P}_{L^2(\Omega)} \lesssim \Norm{Q_0}_{L^2(\Omega)} +\Norm{Q_1}_{L^2(\Omega)} +{\Norm{{\bm{R}}}_{L^2(\Omega)}} .\label{eq.basic-elliptic} \end{equation} The desired regularity for $(Q_0,Q_1,{\bm{R}})\in H^{s,k-1}(\Omega)^2\times H^{s,k}(\Omega)^{d+1}$ is then deduced following the standard approach for elliptic equations (notice the domain is flat) from the estimates which we obtain below. For more details, we refer for instance to~\cite{Lannesbook}*{Chapter~2} where a very similar elliptic problem is thoroughly studied. We now focus on the estimates, assuming {\em a priori} the needed regularity to justify the following computations. First, we provide an estimate for $\Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r,0}(\Omega)}$ for $1\leq r\leq s$. One readily checks that $P_r:=\Lambda^r P$ with $\Lambda^r:=(\Id-\Delta_{\bm{x}})^{r/2}$ satisfies~\eqref{eq.Poisson-simple} with $Q_0\leftarrow \Lambda^r Q_0$, $Q_1\leftarrow \Lambda^r Q_1$ and ${\bm{R}}\leftarrow \Lambda^r {\bm{R}}-[\Lambda^r,A^\mu ]\nabla^\mu_{{\bm{x}},\varrho} P $.
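Let us briefly justify this claim. Since $\Lambda^r$ commutes with $\nabla_{\bm{x}}$, ${\partial}_\varrho$ and $\Lambda$, we may write $A^\mu \nabla^\mu_{{\bm{x}},\varrho} P_r=\Lambda^r\big(A^\mu \nabla^\mu_{{\bm{x}},\varrho} P\big)-[\Lambda^r,A^\mu ]\nabla^\mu_{{\bm{x}},\varrho} P$, and hence
\[ \nabla^\mu_{{\bm{x}},\varrho}\cdot\big( A^\mu \nabla^\mu_{{\bm{x}},\varrho} P_r\big) = \Lambda^r Q_0+\sqrt\mu \Lambda (\Lambda^r Q_1) +\nabla^\mu_{{\bm{x}},\varrho}\cdot \big(\Lambda^r {\bm{R}}-[\Lambda^r,A^\mu ]\nabla^\mu_{{\bm{x}},\varrho} P\big), \]
while the boundary conditions are preserved, $\Lambda^r$ acting only on the horizontal variables.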
We focus now on the contribution of ${\bm{P}}_r:=[\Lambda^r,A^\mu ]\nabla^\mu_{{\bm{x}},\varrho} P $. By continuous embedding (Lemma~\ref{L.embedding}) and commutator estimates (Lemma~\ref{L.commutator-Hs}), we have \[ \Norm{{\bm{P}}_r}_{L^2(\Omega)} \lesssim \Norm{\nabla_{\bm{x}} A^\mu}_{H^{s_0+\frac12,1}} \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r-1,0}} + \left\langle \Norm{\nabla_{\bm{x}} A^\mu}_{H^{r-1,0}} \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{s_0+\frac12,1}} \right\rangle_{r>s_0+1} . \] Hence, using product (Lemmas~\ref{L.product-Hs} and~\ref{L.product-Hsk}) and composition (Lemmas~\ref{L.composition-Hs} and~\ref{L.composition-Hsk-ex}) estimates, we deduce \begin{equation} \label{est-P} \Norm{{\bm{P}}_r}_{L^2(\Omega)} \leq C\,\Big( \Norm{ \nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r-1,0}} +\left\langle\big(\Norm{h}_{H^{r,0}} + \sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{r,0}}\big) \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{s_0+\frac12,1}} \right\rangle_{r>s_0+1} \Big) \end{equation} with $ C= C(h_\star,\norm{\bar h}_{W^{1,\infty}_\varrho},\Norm{(h,\sqrt\mu\nabla_{\bm{x}}\cH)}_{ H^{s_0+\frac32,1}} ) $. Plugging~\eqref{est-P} in~\eqref{eq.basic-elliptic} and using continuous embedding (Lemma~\ref{L.embedding}) and $s_0+\frac32\leq s-1$ yields \begin{multline}\label{eq.P-in-Hs1} \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r,0}} \lesssim \Norm{Q_0}_{H^{r,0}} +\Norm{Q_1}_{H^{r,0}} +\Norm{{\bm{R}}}_{H^{r,0}} + \Norm{ \nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r-1,0}}\ \\ + \ C\, \big( \Norm{ h}_{H^{r,0}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{r,0}} \big) \left\langle \Norm{\nabla^\mu_{{\bm{x}},\varrho } P}_{H^{s_0+\frac12,1}} \right\rangle_{r>s_0+1}, \end{multline} where we denote, here and in what follows, $a\lesssim b$ for $a\leq Cb$ with \[ C=C(h_\star,\norm{\bar h}_{W^{1,\infty}_\varrho},\Norm{h}_{H^{s-1,1}},\sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{s-1,1}} ) =C(h_\star,\bar M,M) .
\] Next we provide an estimate for $\Norm{ \nabla_{{\bm{x}},\varrho} P}_{H^{r,1}(\Omega)}$ appearing in the above right-hand side. This term involves ${\partial}_\varrho^2P$, which we control by rewriting~\eqref{eq.Poisson-simple} as \begin{equation}\label{eq.Poisson-varrho} \tfrac{1+ \mu|\nabla_{\bm{x}} \cH|^2}{\varrho (\bar h+h)} \partial_\varrho^2 P = -\partial_\varrho\big( \tfrac{1+\mu |\nabla_{\bm{x}} \cH|^2}{\varrho (\bar h+h)}\big) (\partial_\varrho P ) - \nabla^\mu_{{\bm{x}},\varrho}\cdot A_0^\mu \nabla^\mu_{{\bm{x}},\varrho} P +Q_0+\sqrt\mu\Lambda Q_1 +\nabla^\mu_{{\bm{x}},\varrho}\cdot {\bm{R}} =:\widetilde{\bm{R}} \end{equation} where we denote \begin{align*} \nabla^\mu_{{\bm{x}},\varrho}\cdot A_0^\mu \nabla^\mu_{{\bm{x}},\varrho} P &:= \nabla^\mu_{{\bm{x}},\varrho}\cdot \left( \begin{pmatrix} \frac{\bar h+h}{\varrho}\Id& \frac{\sqrt\mu\nabla_{\bm{x}} \cH}{\varrho} \\ \frac{\sqrt\mu\nabla_{\bm{x}}^\top \cH}{\varrho}& 0 \end{pmatrix}\nabla^\mu_{{\bm{x}},\varrho} P\right) \\ &= \frac\mu\varrho \nabla_{\bm{x}}\cdot \big(h \nabla_{{\bm{x}}} P +(\nabla_{\bm{x}} \cH)( \partial_\varrho P)\big)+\partial_\varrho \big(\frac\mu\varrho (\nabla_{\bm{x}} \cH)\cdot(\nabla_{\bm{x}} P)\big) . 
\end{align*} When estimating the above, we use product estimates (Lemma~\ref{L.product-Hs}) and then continuous embedding (Lemma~\ref{L.embedding}), treating differently terms involving $ \Delta_{\bm{x}} P$ or $\nabla_{\bm{x}}{\partial}_\varrho P$: for instance \begin{align*} \Norm{\Lambda^{r-1} (h \Delta_{\bm{x}} P)}_{L^2(\Omega)} \lesssim \Norm{h}_{H^{s_0+\frac 12, 1}} \Norm{\Delta_{\bm{x}} P}_{H^{r-1,0}} + \big\langle \Norm{h}_{H^{r-\frac 12,1}} \Norm{\Delta_{\bm{x}} P}_{H^{s_0,0}} \big\rangle_{r-1> s_0}; \end{align*} and terms involving only $\nabla_{{\bm{x}}} P$ or ${\partial}_\varrho P$: for instance \begin{align*} \Norm{\Lambda^{r-1}((\Delta_{\bm{x}} \cH) ({\partial}_\varrho P))}_{L^2(\Omega)} & \lesssim \Norm{\Delta_{\bm{x}} \cH}_{H^{s_0+\frac12,1}} \Norm{{\partial}_\varrho P}_{H^{r-1,0}} + \big\langle \Norm{\Delta_{\bm{x}} \cH}_{H^{r-1,0}} \Norm{{\partial}_\varrho P}_{H^{s_0+\frac 12, 1}} \big\rangle_{r-1>s_0} . \end{align*} We infer, using Lemma~\ref{L.embedding}, $\mu\in(0,1]$ and $s_0+\frac32\leq s-1$, that for any $1\leq r\leq s$, \begin{multline}\label{eq.P-in-Hs2} \Norm{\partial_\varrho^2 P}_{H^{r-1,0}}\lesssim \Norm{Q_0}_{H^{r-1,0}} +\Norm{Q_1}_{H^{r,0}} +\Norm{{\bm{R}}}_{H^{r,1}} + \Norm{ \nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r,0}} \\ +\left( \Norm{ h}_{H^{r,1}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{r,1}}\right)\left\langle \Norm{\nabla^\mu_{{\bm{x}},\varrho } P}_{H^{s_0+1,1}} \right\rangle_{r>s_0+1}. 
\end{multline} By combining~\eqref{eq.P-in-Hs1} and~\eqref{eq.P-in-Hs2} we obtain \begin{multline*} \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r,1}} \lesssim \Norm{Q_0}_{H^{r,0}} +\Norm{Q_1}_{H^{r,0}} +\Norm{{\bm{R}}}_{H^{r,1}} + \Norm{ \nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r-1,0}}\ \\ + \ C\, {\left\langle \left( \Norm{ h}_{H^{r,1}}+\sqrt\mu \Norm{\nabla_{\bm{x}}\cH}_{H^{r,1}}\right) \Norm{\nabla^{\mu}_{{\bm{x}},\varrho } P}_{H^{s_0+1,1}} \right\rangle_{r>s_0+1} } \end{multline*} which, after finite induction on $1\leq r\leq s$ and using~\eqref{eq.basic-elliptic} for the initialization, yields \begin{multline}\label{eq.k=1} \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{r,1}} \lesssim \Norm{Q_0}_{H^{r,0}} +\Norm{Q_1}_{H^{r,0}} +\Norm{{\bm{R}}}_{H^{r,1}}\\ + \left( \Norm{ h}_{H^{r,1}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{r,1}}\right)\, \times \left\langle\Norm{Q_0}_{H^{r-1,0}} +\Norm{Q_1}_{H^{r-1,0}} +\Norm{{\bm{R}}}_{H^{r-1,1}} \right\rangle_{r>s_0+1} . \end{multline} This, together with~\eqref{eq.Poincare}, proves~\eqref{ineq:est-P} when $k=1$. \medskip We now proceed to estimate higher $\varrho$-derivatives. In what follows, we denote \[ C=C(h_\star,\norm{\bar h}_{W^{k-1,\infty}_\varrho},\Norm{h}_{H^{s-1,k-1}},\sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{s-1,k-1}} ) =C(h_\star,\bar M,M) . \] Let $2\leq j\leq k$. By definition, and using $\mu\in(0,1]$, we have \[ \Norm{\nabla^\mu_{{\bm{x}},\varrho }P}_{H^{s,j}} \leq \Norm{\nabla^\mu_{{\bm{x}},\varrho }P}_{H^{s,j-1}} +\Norm{{\partial}_\varrho\nabla^\mu_{{\bm{x}},\varrho }P}_{H^{s-1,j-1}} \lesssim \Norm{\nabla^\mu_{{\bm{x}},\varrho }P}_{H^{s,j-1}} +\Norm{{\partial}_\varrho^2 P}_{H^{s-1,j-1}} . \] We shall also use, when $j\leq k-1$, the corresponding estimate \[ \Norm{\nabla^\mu_{{\bm{x}},\varrho }P}_{H^{s-1,j}} \lesssim \Norm{\nabla^\mu_{{\bm{x}},\varrho }P}_{H^{s-1,j-1}} +\Norm{{\partial}_\varrho^2 P}_{H^{s-2,j-1}}. 
\] By using~\eqref{eq.Poisson-varrho} (according to which ${\partial}_\varrho^2 P=\tfrac{\varrho (\bar h+h)}{1+ \mu|\nabla_{\bm{x}} \cH|^2} \tilde {\bm{R}}$), and since $1\leq j-1 \leq s-1$, using Lemma~\ref{L.product-Hsk} and Lemma~\ref{L.composition-Hsk} yields \[ \Norm{\partial_\varrho^{2} P}_{H^{s-1,j-1}} \lesssim \Norm{\tfrac{\varrho (\bar h+h)}{1+ \mu|\nabla_{\bm{x}} \cH|^2} }_{H^{s-1,j-1}}\Norm{ \widetilde{\bm{R}} }_{H^{s-1,j-1}} \leq C \Norm{ \widetilde{\bm{R}} }_{H^{s-1,j-1}}.\] If moreover $j\leq k-1\leq s-1$, then \[ \Norm{\partial_\varrho^{2} P}_{H^{s-2,j-1}} \lesssim \Norm{\tfrac{\varrho (\bar h+h)}{1+\mu |\nabla_{\bm{x}} \cH|^2} }_{H^{s-2,j-1}}\Norm{ \widetilde{\bm{R}} }_{H^{s-2,j-1}} \leq C \Norm{ \widetilde{\bm{R}} }_{H^{s-2,j-1}}.\] Applying Lemma~\ref{L.product-Hsk} and Lemma~\ref{L.composition-Hsk} to $\widetilde{\bm{R}}$ defined in~\eqref{eq.Poisson-varrho}, we obtain \begin{multline*}\Norm{ \widetilde{\bm{R}} }_{H^{s-1,j-1}} \leq \Norm{ Q_0 }_{H^{s-1,j-1}} + \Norm{ Q_1 }_{H^{s,j-1}} + \Norm{ {\bm{R}} }_{H^{s,j}} \\ + C \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{s,j-1}} + C\, \times\left( \Norm{ h}_{H^{s,j}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{s,j}}\right) \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{s-1,j-1}} \end{multline*} and, if moreover $j\leq k-1\leq s-1$, \[ \Norm{ \widetilde{\bm{R}} }_{H^{s-2,j-1}} \leq \Norm{ Q_0 }_{H^{s-2,j-1}} + \Norm{ Q_1 }_{H^{s-1,j-1}} + \Norm{ {\bm{R}} }_{H^{s-1,j}} + C \Norm{\nabla^\mu_{{\bm{x}},\varrho} P}_{H^{s-1,j-1}} . 
\] From the second set of inequalities,~\eqref{eq.k=1} with $r=s-1$ and finite induction on $2\leq j\leq k-1$ we infer \[ \Norm{\nabla^\mu_{{\bm{x}},\varrho }P}_{H^{s-1,j}} \leq C \, \big( \Norm{ Q_0 }_{H^{s-1,j-1}} +\Norm{ Q_1 }_{H^{s-1,j-1}} + \Norm{ {\bm{R}} }_{H^{s-1,j}} \big).\] Then, from the first set of inequalities,~\eqref{eq.k=1} with $r=s$ and the previous result, we infer by finite induction on $2\leq j\leq k$ \begin{multline*} \Norm{\nabla^\mu_{{\bm{x}},\varrho }P}_{H^{s,j}} \leq C \, \big( \Norm{Q_0}_{H^{s,j-1}} + \Norm{Q_1}_{H^{s,j-1}} +\Norm{{\bm{R}}}_{H^{s,j}}\big) \\ + C\, \big(\Norm{ h}_{H^{s,j}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{s,j}}\big)\, \big( \Norm{Q_0}_{H^{s-1,j-2}} + \Norm{Q_1}_{H^{s-1,j-2}} +\Norm{{\bm{R}}}_{H^{s-1,j-1}}\big) . \end{multline*} The result is proved. \end{proof} We now apply Lemma~\ref{L.Poisson} to obtain several estimates on the solution to~\eqref{eq.RHS}-\eqref{eq.Poisson}. \begin{corollary}\label{C.Poisson} Let $s_0>d/2$, $s,k\in\mathbb{N}$ such that $s\geq s_0+\frac52$ and $2\leq k\leq s$. Let $\bar M,M,h_\star>0$. 
There exists $C>0$ such that for any $\mu\in(0,1]$ and $\kappa\in\mathbb{R}$, for any $(\bar h,\bar{\bm{u}})\in W^{k,\infty}((\rho_0,\rho_1))^{1+d}$, and for any $(h,{\bm{u}},w)\in H^{s+1,k}(\Omega)\times H^{s,k}(\Omega)^d\times H^{s,k-1}(\Omega) $ satisfying (denoting $\cH(\cdot,\varrho):=\int_\varrho^{\rho_1}h(\cdot,\varrho')\dd\varrho'$) \begin{itemize} \item the following bounds \[ \norm{\bar h}_{W^{k,\infty}_\varrho}+\norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho}\leq \bar M,\] \[\Norm{h}_{H^{s-1,k-1}}+\Norm{\nabla_{\bm{x}}\cH}_{H^{s-1,k-1}} +\Norm{{\bm{u}}}_{H^{s,k}} +\sqrt\mu\Norm{w}_{H^{s,k-1}}\leq M; \] \item the stable stratification assumption \[ \inf_{({\bm{x}},\varrho)\in\Omega} \bar h(\varrho)+h({\bm{x}},\varrho) \geq {h_\star}; \] \item the incompressibility condition \[-(\bar h+h)\nabla_{\bm{x}}\cdot {\bm{u}}-(\nabla_{\bm{x}} \cH)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}})+\partial_\varrho w=0,\] \end{itemize} there exists a unique solution $P\in H^{s+1,k+1}(\Omega)$ to~\eqref{eq.RHS}-\eqref{eq.Poisson} and one has \begin{multline}\label{ineq:est-P-hydro} \Norm{P}_{L^2(\Omega)} +\Norm{\nabla^\mu_{{\bm{x}},\varrho}P}_{H^{s,k}} \leq C \, (1+\Norm{ h}_{H^{s,k}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{s,k}}) \\ \times \left( \Norm{h}_{H^{s,k}} + \sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{s,k}}+ \Norm{({\bm{u}},{\bm{u}}_\star)}_{H^{s,k}} +\sqrt\mu\Norm{(w,w_\star)}_{H^{s,k-1}} \right) \end{multline} where we recall the notations ${\bm{u}}_\star:=-\kappa\frac{\nabla_{\bm{x}} h}{\bar h+ h} $ and $w_\star:=\kappa\Delta_{\bm{x}} \cH-\kappa\frac{\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}} \cH}{\bar h+h}$. 
Moreover, decomposing \[ P=P_{\rm h}+P_{\rm nh}, \qquad P_{\rm h}:=\int_{\rho_0}^{\varrho} \varrho' h(\cdot,\varrho') \, \dd\varrho',\] we have \begin{multline}\label{ineq:est-Psmall-nonhydro} \Norm{P_{\rm nh}}_{L^2(\Omega)} + \Norm{\nabla^\mu_{{\bm{x}},\varrho} P_{\rm nh}}_{H^{s-1,k-1}} \leq C\sqrt\mu \Big(\Norm{ \nabla_{\bm{x}} \cH}_{H^{s-1,k-1}} + \norm{\cH\big\vert_{\varrho=\rho_0}}_{H^{s}_{\bm{x}}}\\ + \Norm{({\bm{u}},{\bm{u}}_\star)}_{H^{s,k}} +\sqrt\mu\Norm{(w,w_\star)}_{H^{s,k-1}} \Big), \end{multline} and, setting $\Lambda^\mu=1+\sqrt\mu|D|$, \begin{multline}\label{ineq:est-Ptame-nonhydro} \Norm{P_{\rm nh}}_{L^2(\Omega)} + \Norm{\nabla^\mu_{{\bm{x}},\varrho} P_{\rm nh}}_{H^{s-1,k-1}} \leq C\,\mu\, \Big( \Norm{ (\Lambda^\mu)^{-1} \nabla_{\bm{x}} \cH}_{H^{s,k-1}} + \norm{(\Lambda^\mu)^{-1} \cH\big\vert_{\varrho=\rho_0}}_{H^{s+1}_{\bm{x}}} \\ + \big( \norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho}+\Norm{{\bm{u}}}_{H^{s,k}}\big)\big( \Norm{({\bm{u}},{\bm{u}}_\star)}_{H^{s,k}} +\Norm{( w,w_\star)}_{H^{s,k-1}}\big) + \Norm{{\bm{u}}_\star}_{H^{s,k}} \Norm{w}_{H^{s,k-1}}\Big). \end{multline} \end{corollary} \begin{proof} In view of Lemma~\ref{L.Poisson}, we shall first estimate {\rm (RHS)}, defined in~\eqref{eq.RHS}. We decompose \[ {\rm (RHS)} = R_1+R_2\] where $R_1$ collects the terms involving at most one derivative of $h, \cH, {\bm{u}}, {\bm{u}}_\star, w, w_\star$, while \begin{multline*} R_2:= -( \bar h+h) \left(\big({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star\big)\cdot\nabla_{\bm{x}} (\nabla_{\bm{x}}\cdot{\bm{u}}) \right) -(\nabla_{\bm{x}} \cH)\cdot\left(\big({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star\big)\cdot\nabla_{\bm{x}} \partial_\varrho {\bm{u}} \right) \\ +\big({\bar {\bm{u}}}+ {\bm{u}}+{\bm{u}}_\star\big)\cdot\nabla_{\bm{x}} \partial_\varrho w-\left(({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star)\cdot\nabla_{\bm{x}} (\nabla_{\bm{x}} \cH) \right)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}).
\end{multline*} Appealing to the incompressibility condition \[-(\bar h+h)\nabla_{\bm{x}}\cdot {\bm{u}}-(\nabla_{\bm{x}} \cH)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}})+\partial_\varrho w=0,\] we have simply \[ R_2= \left(\big({\bar {\bm{u}}}+{\bm{u}}+{\bm{u}}_\star\big)\cdot(\nabla_{\bm{x}} h) \right) (\nabla_{\bm{x}}\cdot{\bm{u}}). \] In fact, this term cancels the second summand of $R_1$, so that no contribution of $h$ is differentiated. By inspecting the remaining terms and using Lemma~\ref{L.product-Hsk}, we infer that for any $r\geq s_0+1/2$ and $1\leq j\leq r \leq s-1$, \begin{multline} \label{est-RHS}\Norm{{\rm (RHS)}}_{H^{r,j}} \lesssim (1+\norm{\bar h}_{W^{j,\infty}_\varrho}+\Norm{h}_{H^{r,j}}+\Norm{\nabla_{\bm{x}} \cH}_{H^{r,j}})\\ \times\left( \big( \norm{\bar {\bm{u}}'}_{W^{j,\infty}_\varrho}+\Norm{{\bm{u}}}_{H^{r+1,j+1}}\big) \big( \Norm{({\bm{u}},{\bm{u}}_\star)}_{H^{r+1,j+1}} +\Norm{(w,w_\star)}_{H^{r+1,j}} \big)+\Norm{w}_{H^{r+1,j}}\Norm{{\bm{u}}_\star}_{H^{r+1,j+1}}\right). \end{multline} Since the contributions of $(h,\cH,w,w_\star)$ to ${\rm (RHS)}$ are affine, and since $\mu\in(0,1]$, we have for any $r\geq s_0+3/2 $ and $2\leq j\leq r\leq s$ \begin{multline} \label{est-Q} \sqrt\mu\Norm{{\rm (RHS)}}_{H^{r-1,j-1}} \lesssim (1+\norm{\bar h}_{W^{j-1,\infty}_\varrho}+\sqrt\mu\Norm{h}_{H^{r-1,j-1}}+\sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{r-1,j-1}})\\ \times \left(\big(\norm{\bar {\bm{u}}'}_{W^{j-1,\infty}_\varrho}+\Norm{{\bm{u}}}_{H^{r,j}}\big) \big( \Norm{({\bm{u}},{\bm{u}}_\star)}_{H^{r,j}} +\sqrt\mu\Norm{(w,w_\star)}_{H^{r,j-1}} \big)+\sqrt\mu\Norm{w}_{H^{r,j-1}} \Norm{{\bm{u}}_\star}_{H^{r,j}}\right).
\end{multline} For the first estimate, we write~\eqref{eq.Poisson} as~\eqref{eq.Poisson-simple} with $Q_0=0$, $Q_1=\sqrt \mu\Lambda^{-1}{\rm (RHS)}$ and \[ {\bm{R}} = \begin{pmatrix} -\sqrt\mu \bar h \nabla_{\bm{x}} \cH\\ \frac{h}{\bar h+h} (1+\mu |\nabla_{\bm{x}} \cH|^2)-\mu |\nabla_{\bm{x}} \cH|^2\end{pmatrix}\] where we used the identities \[ \frac{1+\mu|\nabla_{{\bm{x}}} \cH|^2}{\varrho (\bar h + h)} {\partial}_\varrho \bar P_{\rm eq} = \frac{\bar h (1+\mu|\nabla_{{\bm{x}}} \cH|^2)}{ \bar h + h}=1+\mu|\nabla_{\bm{x}} \cH|^2-\frac{h}{\bar h+h} (1+\mu|\nabla_{\bm{x}} \cH|^2). \] Product (Lemma~\ref{L.product-Hsk}) and composition (Lemma~\ref{L.composition-Hsk-ex}) estimates yield if $r\geq s_0+1/2 $ and $1\leq j\leq r\leq s-1$ \begin{equation}\label{est-R0} \Norm{{\bm{R}}}_{H^{r,j}} \leq C(h_\star,\norm{\bar h}_{W^{j,\infty}_\varrho},\Norm{h}_{ H^{r,j}},\sqrt\mu\Norm{\nabla_{\bm{x}} \cH }_{H^{r,j}} ) \times \big( \Norm{h}_{H^{r,j}} + \sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{r,j}} \big); \end{equation} and, if $r\geq s_0+3/2 $ and $2\leq j\leq r\leq s$, using the tame estimates, we obtain \begin{equation}\label{est-R1} \Norm{{\bm{R}}}_{H^{r,j}} \leq C(h_\star,\norm{\bar h}_{W^{j,\infty}_\varrho},\Norm{h}_{ H^{r-1,j-1}},\sqrt\mu\Norm{\nabla_{\bm{x}} \cH }_{H^{r-1,j-1}} ) \times \big( \Norm{h}_{H^{r,j}} + \sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{r,j}} \big). \end{equation} Plugging in~\eqref{ineq:est-P} the estimates~\eqref{est-Q} and~\eqref{est-R1} with $(r,j)=(s,k)$, and~\eqref{est-R0} with $(r,j)=(s-1,k-1)$, yields~\eqref{ineq:est-P-hydro}. 
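For the reader's convenience, let us check that these choices are consistent with the boundary conditions in~\eqref{eq.Poisson}. Using ${\partial}_\varrho \bar P_{\rm eq}=\varrho\bar h$ and the above identities, one computes
\[ {\bm{R}}+A^\mu \nabla^\mu_{{\bm{x}},\varrho} \bar P_{\rm eq} = {\bm{e}}_{d+1}, \]
whose $\nabla^\mu_{{\bm{x}},\varrho}$-divergence vanishes; moreover, at $\varrho=\rho_1$, where $\nabla_{\bm{x}}\cH$ vanishes (recall $\cH\big\vert_{\varrho=\rho_1}=0$), the condition $({\partial}_\varrho P)\big\vert_{\varrho=\rho_1}=\rho_1 h\big\vert_{\varrho=\rho_1}$ yields
\[ {\bm{e}}_{d+1}\cdot \big(A^\mu \nabla^\mu_{{\bm{x}},\varrho} P\big)\big\vert_{\varrho=\rho_1} = \frac{{\partial}_\varrho P}{\rho_1(\bar h+h)}\Big\vert_{\varrho=\rho_1} = \frac{h}{\bar h+h}\Big\vert_{\varrho=\rho_1} ={\bm{e}}_{d+1}\cdot {\bm{R}}\big\vert_{\varrho=\rho_1}, \]
as required in~\eqref{eq.Poisson-simple}.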
For the next set of estimates, we notice that, by~\eqref{eq.Poisson}, $P-P_{{\rm h}}$ satisfies~\eqref{eq.Poisson-simple} with ${\bm{e}}_{d+1}\cdot {\bm{R}}\big\vert_{\varrho=\rho_1} = 0$ and $Q_0+\sqrt\mu\Lambda Q_1+\nabla^\mu_{{\bm{x}},\varrho}\cdot{\bm{R}}=\mu{\rm (RHS)}+\nabla^\mu_{{\bm{x}},\varrho}\cdot\widetilde{\bm{R}}$ where \[ \widetilde {\bm{R}} := -\begin{pmatrix} \sqrt{\mu} \varrho^{-1} (\bar h+h) \nabla_{\bm{x}} \psi \\ {\mu} \varrho^{-1} (\nabla_{\bm{x}} \cH) \cdot (\nabla_{\bm{x}} \psi) \end{pmatrix}, \quad \psi:=\int_{\rho_0}^{\varrho} \cH(\cdot,\varrho')\dd \varrho'+\rho_0 \cH\big\vert_{\varrho=\rho_0} .\] Indeed, we have immediately ${\bm{e}}_{d+1}\cdot \widetilde{\bm{R}}\big\vert_{\varrho=\rho_1} = 0 = \partial_\varrho \big( P-P_{\rm h})\big\vert_{\varrho=\rho_1}$ and we infer \[ - \nabla_{{\bm{x}}, \varrho}^\mu \cdot \widetilde {\bm{R}} = \nabla_{{\bm{x}}, \varrho}^\mu \cdot \big(A^\mu \nabla^\mu_{{\bm{x}},\varrho} (\bar P_{\rm eq}+P_{\rm h}) \big)\] from (integrating by parts as in~\eqref{eq:nablaphi-intro}) \begin{equation}\label{eq.id-Phydro} P_{\rm h}:=\int_{\rho_0}^{\varrho} \varrho' h(\cdot,\varrho') \, \dd\varrho' = -\varrho\cH+\int_{\rho_0}^{\varrho} \cH(\cdot,\varrho')\dd \varrho'+\rho_0 \cH\big\vert_{\varrho=\rho_0}. \end{equation} By product estimates (Lemma~\ref{L.product-Hsk}), we infer immediately for any $r\geq s_0+1/2 $ and $1\leq j\leq r$ \begin{equation}\label{est-tR0}\Norm{\widetilde{\bm{R}}}_{H^{r,j}} \lesssim \Big(\norm{\bar h}_{W^{j,\infty}_\varrho}+\Norm{ h}_{H^{r,j}}+\sqrt\mu\Norm{\nabla_{\bm{x}}\cH}_{H^{r,j}}\Big) \, \Big( \sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{r,j}} +\sqrt\mu \norm{\cH\big\vert_{\varrho=\rho_0}}_{H^{r+1}_{\bm{x}}}\Big). 
\end{equation} Moreover, using the identity \[ \nabla^\mu_{{\bm{x}},\varrho}\cdot\widetilde{\bm{R}} = \mu \nabla_{\bm{x}}\cH\cdot {\partial}_\varrho(\varrho^{-1} \nabla_{\bm{x}}\psi)+\mu\varrho^{-1}(\bar h+h)\nabla_{\bm{x}}\cdot\nabla_{\bm{x}}\psi\] we infer for any $r\geq s_0+1/2 $ and $1\leq j\leq r$ \begin{equation}\label{est-tR2}\Norm{\nabla^\mu_{{\bm{x}},\varrho}\cdot\widetilde{\bm{R}}}_{H^{r,j}} \lesssim \mu \Big(\norm{\bar h}_{W^{j,\infty}_\varrho}+\Norm{ h}_{H^{r,j}}+\Norm{\nabla_{\bm{x}}\cH}_{H^{r,j}}\Big) \, \Big( \Norm{\nabla_{\bm{x}} \cH}_{H^{r+1,j}} + \norm{\cH\big\vert_{\varrho=\rho_0}}_{H^{r+2}_{\bm{x}}}\Big). \end{equation} Finally, recalling $\Lambda^\mu=1+\sqrt\mu|D|$ and introducing \[\widetilde{\bm{R}}_0 := -\begin{pmatrix} \sqrt{\mu} \varrho^{-1} (\bar h+ (\Lambda^\mu)^{-1} h) ( (\Lambda^\mu)^{-1} \nabla_{\bm{x}} \psi) \\ {\mu} \varrho^{-1} ((\Lambda^\mu)^{-1}\nabla_{\bm{x}} \cH) \cdot ((\Lambda^\mu)^{-1}\nabla_{\bm{x}} \psi) \end{pmatrix} \] proceeding as for~\eqref{est-tR0} and~\eqref{est-tR2} and using $\Id- (\Lambda^\mu)^{-1}=\sqrt\mu|D| (\Lambda^\mu)^{-1}$ and $\Norm{(\Lambda^\mu)^{-1}}_{L^2_{\bm{x}}\to L^2_{\bm{x}}}=1$, for any $r\geq s_0+1/2 $ and $1\leq j\leq r$ one has \begin{multline}\label{est-tRtame} \Norm{\widetilde{\bm{R}}-\widetilde{\bm{R}}_0}_{H^{r,j}}+\Norm{\nabla^\mu_{{\bm{x}},\varrho}\cdot\widetilde{\bm{R}}_0}_{H^{r,j}} \lesssim \mu\, \Big(\norm{\bar h}_{W^{j,\infty}_\varrho}+\Norm{ h}_{H^{r,j}}+\Norm{\nabla_{\bm{x}}\cH}_{H^{r,j}} \Big)\\ \times \Big( \Norm{(\Lambda^\mu)^{-1}\nabla_{\bm{x}} \cH}_{H^{r+1,j}} + \norm{(\Lambda^\mu)^{-1}\cH\big\vert_{\varrho=\rho_0}}_{H^{r+2}_{\bm{x}}}\Big). \end{multline} We obtain~\eqref{ineq:est-Psmall-nonhydro} by setting $Q_0=\mu{\rm (RHS)}$, $Q_1=0$ and ${\bm{R}}=\widetilde{\bm{R}}$, and plugging into~\eqref{ineq:est-Plow} the estimates~\eqref{est-Q} with $(r,j)=(s,k)$ and~\eqref{est-tR0} with $(r,j)=(s-1,k-1)$.
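The mechanism behind~\eqref{est-tRtame} can be read on the Fourier side: $\Id-(\Lambda^\mu)^{-1}=\sqrt\mu|D|(\Lambda^\mu)^{-1}$ has symbol
\[ 1-\frac1{1+\sqrt\mu|{\bm{\xi}}|}=\frac{\sqrt\mu|{\bm{\xi}}|}{1+\sqrt\mu|{\bm{\xi}}|}\leq \min\big(1,\sqrt\mu|{\bm{\xi}}|\big), \]
and each difference is decomposed as, for instance,
\[ h\,\nabla_{\bm{x}} \psi-((\Lambda^\mu)^{-1} h)\,((\Lambda^\mu)^{-1}\nabla_{\bm{x}} \psi) = \big((\Id-(\Lambda^\mu)^{-1})h\big)\nabla_{\bm{x}} \psi+((\Lambda^\mu)^{-1} h)\big(\Id-(\Lambda^\mu)^{-1}\big)\nabla_{\bm{x}} \psi, \]
so that each term gains a factor $\sqrt\mu$ at the price of one horizontal derivative, itself absorbed by $(\Lambda^\mu)^{-1}$.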
For~\eqref{ineq:est-Ptame-nonhydro}, we set instead $Q_0=\mu{\rm (RHS)}+\nabla^\mu_{{\bm{x}},\varrho}\cdot\widetilde{\bm{R}}_0$ and $Q_1=0$, ${\bm{R}}=\widetilde{\bm{R}}-\widetilde{\bm{R}}_0$, and plugging into~\eqref{ineq:est-Plow} the estimates~\eqref{est-RHS} and~\eqref{est-tRtame} with $(r,j)=(s-1,k-1)$. \end{proof} \subsection{Small-time well-posedness}\label{S.Nonhydro:small-time} We infer small-time existence and uniqueness of regular solutions to the Cauchy problem associated with the non-hydrostatic problem~\eqref{eq:nonhydro-iso-recall}, proceeding as for the hydrostatic system in Section~\ref{S.Hydro}, that is, considering the system as the combination of a transport-diffusion equation and transport equations, coupled through order-zero source terms (by the estimate \eqref{ineq:est-P-hydro} in Corollary~\ref{C.Poisson}). Specifically, we rewrite~\eqref{eq:nonhydro-iso-recall} as \begin{equation}\label{eq:nonhydro-iso-redef} \begin{aligned} \partial_t h+\nabla_{ {\bm{x}}} \cdot\big((\bar h +h)(\bar{\bm{u}} + {\bm{u}})\big)&= \kappa \Delta_{ {\bm{x}}} h,\\ \partial_t {\bm{u}}+\big((\bar {\bm{u}} + {\bm{u}} - \kappa\tfrac{\nabla_{ {\bm{x}}} h}{ \bar h+ h})\cdot\nabla_{\bm{x}}\big) {\bm{u}}+ \frac1\varrho \nabla_{ {\bm{x}}} P+ \big(1+\frac{{\partial}_\varrho P-\varrho h}{\varrho(\bar h+ h)} \big)\nabla_{ {\bm{x}}} \cH &=0,\\ \partial_t w+\big(\bar {\bm{u}} + {\bm{u}} - \kappa\tfrac{\nabla_{ {\bm{x}}} h}{ \bar h+ h}\big)\cdot\nabla_{ {\bm{x}}} w-\frac1{\mu} \frac{{\partial}_\varrho P-\varrho h}{\varrho(\bar h+ h)}&=0, \end{aligned} \end{equation} where $\cH(\cdot, \varrho)=\int_{\varrho}^{\rho_1} h(\cdot, \varrho')\dd\varrho'$ and $P$ is defined by Corollary~\ref{C.Poisson}.
Systems~\eqref{eq:nonhydro-iso-recall} and~\eqref{eq:nonhydro-iso-redef} are equivalent (for sufficiently regular data) by the computations of Section~\ref{S.Nonhydro:Poisson}, and in particular regular solutions to~\eqref{eq:nonhydro-iso-redef} satisfy the boundary condition $ w\big|_{\varrho=\rho_1}=0$ and the incompressibility constraint \begin{equation}\label{eq:incompressibility-redef} (\bar h + h) \nabla_{{\bm{x}}} \cdot {\bm{u}} + \nabla_{{\bm{x}}} \cH \cdot ({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}) -\partial_\varrho w=0 \end{equation} provided these identities hold initially. \begin{proposition}\label{P.NONHydro-small-time} Let $s_0>d/2$, $s,k\in\mathbb{N}$ such that $s\geq s_0+\frac {5} 2$ and $2\leq k\leq s$. Let $h_\star,\bar M,M,\mu,\kappa>0$ and $C_0>1$. There exists $T>0$ such that for any $(\bar h,\bar{\bm{u}})\in W^{k,\infty}((\rho_0,\rho_1))^{1+d}$, and for any initial data $(h_0,{\bm{u}}_0,w_0)\in H^{s+1,k}(\Omega)\times H^{s,k}(\Omega)^d\times H^{s,k}(\Omega) $ satisfying \begin{itemize} \item the following bounds (where $\cH_0(\cdot, \varrho):=\int_{\varrho}^{\rho_1} h_0(\cdot, \varrho')\dd\varrho'$) \[ \norm{\bar h}_{W^{k,\infty}_\varrho}+\norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho}\leq \bar M\] \[ M_0:=\Norm{h_0}_{H^{s,k}}+ \Norm{\nabla_{\bm{x}} \cH_0}_{H^{s,k}}+\Norm{{\bm{u}}_0}_{H^{s,k}} +\Norm{w_0}_{H^{s,k}}\leq M, \] \item the stable stratification assumption \[ \inf_{({\bm{x}},\varrho)\in\Omega} \bar h(\varrho)+h_0({\bm{x}},\varrho) \geq h_\star, \] \item the boundary condition $w_0|_{\varrho=\rho_1}=0$ and the incompressibility condition \[-(\bar h+h_0)\nabla_{\bm{x}}\cdot {\bm{u}}_0-(\nabla_{\bm{x}} \cH_0)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}_0)+\partial_\varrho w_0=0,\] \end{itemize} there exists a unique $(h,{\bm{u}},w)\in \mathcal{C}^0(0,T;H^{s,k}(\Omega)^{2+d})$ and $P\in L^2(0,T;H^{s+1,k+1}(\Omega))$ solution to~\eqref{eq:nonhydro-iso-redef} satisfying the initial data 
$(h,{\bm{u}},w)\big\vert_{t=0}=(h_0,{\bm{u}}_0,w_0)$. Moreover, the conditions $w\vert_{\varrho=\rho_1}=P\vert_{\varrho=\rho_0}=0$ and the incompressibility condition \eqref{eq:incompressibility-redef} hold on $[0,T]$ (and hence the solution satisfies \eqref{eq:nonhydro-iso-recall}) and one has $\cH\in L^\infty(0,T;H^{s+1,k}(\Omega))$ and $(h,\nabla_{\bm{x}}\cH)\in L^2(0,T;H^{s+1,k}(\Omega))$ and \begin{multline*}\mathcal{F}_{s,k,T}:= \Norm{h}_{L^\infty(0,T;H^{s,k})}+\Norm{\nabla_{\bm{x}} \cH}_{L^\infty(0,T;H^{s,k})}+\Norm{{\bm{u}}}_{L^\infty(0,T;H^{s,k})}+\Norm{w}_{L^\infty(0,T;H^{s,k})}\\ + c_0 \kappa^{1/2} \Norm{ h}_{L^2(0,T;H^{s,k})} + c_0\kappa^{1/2} \Norm{\nabla_{\bm{x}}^2\cH}_{L^2(0,T;H^{s,k})} \leq C_0M_0 \end{multline*} with $c_0$ a universal constant. \end{proposition} \begin{proof} Since a very similar proof has been detailed in the hydrostatic framework in Section~\ref{S.Hydro}, we will only briefly sketch the main arguments. As mentioned above, thanks to Corollary~\ref{C.Poisson}, we may consider the contributions of the pressure as zero-order source terms in the energy space displayed in the statement, and~\eqref{eq:nonhydro-iso-redef} is then interpreted as a standard set of evolution equations. We now explain how to infer the necessary bounds on all contributions to $\mathcal{F}_{s,k,T}$, assuming enough regularity. The desired control of $ \Norm{h}_{L^\infty(0,T;H^{s,k})}+ c_0 \kappa^{1/2} \Norm{\nabla_{\bm{x}} h}_{L^2(0,T;H^{s,k})} $ is a direct consequence of the first equation of~\eqref{eq:nonhydro-iso-redef}, and the regularization properties of the heat semigroup already invoked in Proposition~\ref{P.WP-nu}. The corresponding control of $ \Norm{\nabla_x \cH}_{L^\infty(0,T;H^{s,k})}+ c_0 \kappa^{1/2} \Norm{\nabla_{\bm{x}}^2 \cH}_{L^2(0,T;H^{s,k})} $ requires additional structure.
We recall (see~\eqref{eq:h} or~\eqref{cancellation}) that by the identity~\eqref{eq:incompressibility-redef} and integrating the first equation of~\eqref{eq:nonhydro-iso-redef}, one has \begin{equation} \label{eq.id-eta} \partial_t \cH +(\bar {\bm{u}}+{\bm{u}})\cdot\nabla_{\bm{x}}\cH -w=\kappa\Delta_{\bm{x}}\cH. \end{equation} By the regularization properties of the heat semigroup, we infer (with $c_0$ a universal constant) \[\Norm{\nabla_{\bm{x}}\cH }_{L^\infty(0,T;H^{s,k})}+c_0\kappa^{1/2} \Norm{\nabla_{\bm{x}}^2 \cH}_{L^2(0,T;H^{s,k})} \leq \Norm{\nabla_{\bm{x}}\cH_0}_{H^{s,k}} +\frac1{c_0\kappa^{1/2}} \Norm{(\bar {\bm{u}}+{\bm{u}})\cdot\nabla_{\bm{x}}\cH -w}_{L^2(0,T;H^{s,k})},\] and the right-hand side is estimated by product estimates (Lemma~\ref{L.product-Hsk}). Finally, the desired {\em a priori} estimates on $\Norm{{\bm{u}}}_{L^\infty(0,T;H^{s,k})}$ and $\Norm{w}_{L^\infty(0,T;H^{s,k})}$ for sufficiently regular solutions follow by the energy method (that is integrating by parts in the variable ${\bm{x}}$) on the second and third equations of~\eqref{eq:nonhydro-iso-redef}, which can be seen as transport equations with source terms. 
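We also record, for completeness, the elementary energy computation behind the parabolic estimates invoked above, on the model problem $\partial_t f-\kappa\Delta_{\bm{x}} f=g$ (applied to $f=\cH$ and $g=w-(\bar{\bm{u}}+{\bm{u}})\cdot\nabla_{\bm{x}}\cH$ by~\eqref{eq.id-eta}, after applying derivatives $\partial_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j$ and handling the resulting commutators as source terms): testing with $-\Delta_{\bm{x}} f$ and using Young's inequality gives
\[ \frac12\frac{\dd}{\dd t}\Norm{\nabla_{\bm{x}} f}_{L^2(\Omega)}^2+\kappa\Norm{\Delta_{\bm{x}} f}_{L^2(\Omega)}^2 =-\int_{\mathbb{R}^d}\int_{\rho_0}^{\rho_1} g\,\Delta_{\bm{x}} f\dd\varrho\dd{\bm{x}} \leq \frac\kappa2\Norm{\Delta_{\bm{x}} f}_{L^2(\Omega)}^2+\frac1{2\kappa}\Norm{g}_{L^2(\Omega)}^2, \]
and hence, integrating in time,
\[ \Norm{\nabla_{\bm{x}} f(t)}_{L^2(\Omega)}^2+\kappa\int_0^t\Norm{\Delta_{\bm{x}} f(t')}_{L^2(\Omega)}^2\dd t' \leq \Norm{\nabla_{\bm{x}} f(0)}_{L^2(\Omega)}^2+\frac1{\kappa}\int_0^t\Norm{g(t')}_{L^2(\Omega)}^2\dd t'. \]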
More precisely, by \eqref{ineq:est-P-hydro} in Corollary~\ref{C.Poisson}, we have the existence and uniqueness of $P\in L^2(0,T;H^{s+1,k+1}(\Omega))$, satisfying the bound \[ \Norm{P}_{L^2(0,T;H^{s+1,k+1})} \leq C(h_\star,\mu,\kappa,\bar M,\mathcal{F}_{s,k,T})\mathcal{F}_{s,k,T}.\] Moreover, the advection velocity is controlled (using Lemma~\ref{L.embedding}, $s-2\geq s_0+\frac12$, $k\geq 1$) by \[ \Norm{\nabla\cdot\big(\bar {\bm{u}} + {\bm{u}} - \kappa\tfrac{\nabla_{ {\bm{x}}} h}{ \bar h+ h}\big)}_{L^\infty(0,T;L^\infty(\Omega))} \leq C(h_\star,\kappa,\mathcal{F}_{s,k,T}),\] and using commutator (Lemma~\ref{L.commutator-Hsk}) and composition (Lemma~\ref{L.composition-Hsk-ex}) estimates, one has for any $f\in H^{s,k}(\Omega)$, and any ${\bm{\alpha}} \in\mathbb{N}^d$, $j\in\mathbb{N}$ with $0\leq j\leq k$ and $|{\bm{\alpha}}|+j\leq s$, \[ \Norm{[\partial_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j, \bar {\bm{u}} + {\bm{u}} - \kappa\tfrac{\nabla_{ {\bm{x}}} h}{ \bar h+ h}] \nabla_{\bm{x}} f}_{L^2(0,T;L^2(\Omega))} \leq C(h_\star,\kappa,\bar M,\mathcal{F}_{s,k,T}) \Norm{f}_{H^{s,k}}.\] It follows \[ \Norm{{\bm{u}}}_{H^{s,k}}+\Norm{w}_{H^{s,k}}\leq \Big( \Norm{{\bm{u}}_0}_{H^{s,k}}+\Norm{w_0}_{H^{s,k}}+C\sqrt T \Big)\exp(C T),\] with $C= C(h_\star,\mu,\kappa,\bar M,\mathcal{F}_{s,k,T})$. Altogether, and using standard continuity arguments, we find that for any $C_0>1$ we can restrict the time $T=T(h_\star,\mu,\kappa,\bar M,C_0 M_0)>0$ so that all sufficiently regular solutions to~\eqref{eq:nonhydro-iso-redef} satisfy the bound $\mathcal{F}_{s,k,T}\leq C_0 M_0$. We may infer the existence of solutions using for instance the parabolic regularization approach (see the closing paragraph of Section~\ref{S.Hydro}), and uniqueness is straightforward. This concludes the proof. 
\end{proof} \begin{remark} Proposition~\ref{P.NONHydro-small-time} does not provide any lower bound on the time of existence (and control) of solutions with respect to either $\mu\ll 1$ or $\kappa\ll 1$, hence the ``small-time'' terminology. \end{remark} \subsection{Quasi-linearization of the non-hydrostatic system}\label{S.Nonhydro:quasi} In this section we extract the leading-order terms of the equations satisfied by the spatial derivatives of the solutions to system~\eqref{eq:nonhydro-iso-redef}. This will allow us to obtain improved energy estimates in the subsequent section. Notice that starting from here, our study is restricted to the situation $k=s$. \begin{lemma}\label{lem.quasilin-nonhydro} Let $s, k \in \mathbb{N}$ such that $k=s> \frac 52+\frac d 2$ and $\bar M,M,h_\star>0$. Then there exists $C>0$ such that for any $\mu,\kappa\in(0,1]$, and for any $(\bar h, \bar {\bm{u}}) \in W^{k,\infty}((\rho_0,\rho_1)) \times W^{k+1,\infty}((\rho_0,\rho_1))$ satisfying \[ \norm{\bar h}_{W^{k,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k,\infty}_\varrho }\leq \bar M \,;\] and any $(h, {\bm{u}},w) \in L^\infty(0,T;H^{s,k}(\Omega)^{d+2})$ solution to~\eqref{eq:nonhydro-iso-redef} (with $P$ defined by Corollary~\ref{C.Poisson}) with any $T>0$ and satisfying for almost every $t \in [0,T]$, \[ \Norm{h(t, \cdot)}_{H^{s-1, k-1}} + \Norm{\cH(t, \cdot)}_{H^{s,k}}+\norm{\cH\big\vert_{\varrho=\rho_0}(t, \cdot)}_{H^s_{\bm{x}}}+\Norm{{\bm{u}}(t, \cdot)}_{H^{s,k}} +\sqrt\mu\Norm{w(t, \cdot)}_{H^{s,k}}+\kappa^{1/2}\Norm{h(t, \cdot)}_{H^{s,k}} \le M\] (where $\cH(t,{\bm{x}},\varrho):=\int_{\varrho}^{\rho_1} h(t,{\bm{x}},\varrho')\dd\varrho'$) and \[ \inf_{(t,{\bm{x}},\varrho)\in (0,T)\times\Omega } \bar h(\varrho)+h(t, {\bm{x}},\varrho) \geq h_\star,\] the following results hold. 
Denote, for any multi-index ${\bm{\alpha}} \in \mathbb{N}^d$ and any $j\in\mathbb{N}$ such that $|{\bm{\alpha}}|+j\leq s$, $ h^{({\bm{\alpha}},j)}={\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} h,\, \cH^{({\bm{\alpha}},j)}={\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} \cH, \, {\bm{u}}^{({\bm{\alpha}},j)}={\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} {\bm{u}},\, w^{({\bm{\alpha}},j)}={\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} w$, and $P_{\rm nh}^{({\bm{\alpha}},j)}={\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} P_{\rm nh}$, with \[ P_{\rm nh} :=P-P_{\rm h}, \qquad P_{\rm h}:=\int_{\rho_0}^{\varrho} \varrho' h(\cdot,\varrho') \, \dd\varrho'.\] We have \begin{subequations} \begin{equation}\label{eq.quasilin-j-nonhydro} \begin{aligned} \partial_t \cH^{({\bm{\alpha}},j)}+ (\bar {\bm{u}} + {\bm{u}}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} -w^{({\bm{\alpha}},j)}-\kappa\Delta_{\bm{x}} \cH^{({\bm{\alpha}},j)} &=\widetilde R_{{\bm{\alpha}},j},\\ \partial_t \cH^{({\bm{\alpha}},j)}+ (\bar {\bm{u}} + {\bm{u}}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} +\Big\langle \int_\varrho^{\rho_1} (\bar {\bm{u}}' + {\partial}_{\varrho}{\bm{u}}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} \, \dd\varrho'\phantom{-\kappa\Delta_{\bm{x}} \cH^{({\bm{\alpha}},j)}}\qquad &\\ +\int_{\varrho}^{\rho_1} (\bar h+h)\nabla_{\bm{x}} \cdot{\bm{u}}^{({\bm{\alpha}},j)} \dd\varrho'\Big\rangle_{j=0}-\kappa\Delta_{\bm{x}} \cH^{({\bm{\alpha}},j)}&=R_{{\bm{\alpha}},j},\\ \partial_t{\bm{u}}^{({\bm{\alpha}},j)}+\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}}\big) {\bm{u}}^{({\bm{\alpha}},j)} + \Big\langle \frac{\rho_0}{ \varrho }\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)}\big\vert_{\varrho=\rho_0} +\frac 1 \varrho\int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} \dd\varrho'\Big\rangle_{j=0} &\\ +\frac 1 \varrho{\nabla_{\bm{x}} P_{\rm nh}^{({\bm{\alpha}},j)}} + \frac{\nabla_{\bm{x}} \cH}{\varrho(\bar h + h)} 
{\partial}_\varrho P_{\rm nh}^{({\bm{\alpha}},j)} &={\bm{R}}^{\rm nh}_{{\bm{\alpha}},j},\\ \sqrt\mu \left( \partial_t w^{({\bm{\alpha}},j)} + (\bar {\bm{u}} + {\bm{u}} - \kappa \tfrac{\nabla_{\bm{x}} h}{\bar h + h} ) \cdot \nabla_{\bm{x}} w^{({\bm{\alpha}},j)}\right) - \frac1{\sqrt\mu}\frac{{\partial}_\varrho P_{\rm nh}^{({\bm{\alpha}},j)}}{ \varrho( \bar h + h)} &= R^{\rm nh}_{{\bm{\alpha}}, j}, \\ - {\partial}_\varrho w^{({\bm{\alpha}},j)} + (\bar h + h) \nabla_{\bm{x}} \cdot {\bm{u}}^{({\bm{\alpha}},j)} + (\nabla_{\bm{x}} \cH )\cdot ( {\partial}_\varrho {\bm{u}}^{({\bm{\alpha}},j)})&\\ + (\nabla_{\bm{x}} \cdot {\bm{u}})h^{({\bm{\alpha}},j)} + (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}) \cdot \nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} &=R^{\rm div}_{{\bm{\alpha}},j}, \end{aligned} \end{equation} where for almost every $t\in[0,T]$, one has $(R_{{\bm{\alpha}},j}(t,\cdot),{\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}(t,\cdot), R^{\rm nh}_{{\bm{\alpha}}, j}(t,\cdot), R^{\rm div}_{{\bm{\alpha}}, j}(t,\cdot) )\in L^2(\Omega)^{d+3}$ and $R_{{\bm{\alpha}},0}(t,\cdot)\in\mathcal{C}((\rho_0,\rho_1);L^2(\mathbb{R}^d))$, and \begin{equation} \begin{aligned} &\Norm{R_{{\bm{\alpha}},j}}_{L^2(\Omega) } + \norm{R_{{\bm{\alpha}}, 0}|_{\varrho=\rho_0}}_{L^2_{\bm{x}}} +\Norm{\widetilde R_{{\bm{\alpha}},j}}_{L^2(\Omega) } + \Norm{R^{\rm div}_{{\bm{\alpha}}, j}}_{L^2(\Omega)} \leq C\, M \,, \\ &\Norm{ {\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}}_{L^2(\Omega)}+\Norm{ R^{\rm nh}_{{\bm{\alpha}}, j}}_{L^2(\Omega)} \leq C\, M \, \big(1 +\kappa\Norm{ \nabla_{\bm{x}} h}_{H^{s,k}} \big) \\ &\phantom{\Norm{ {\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}}_{L^2(\Omega)}+\Norm{ R^{\rm nh}_{{\bm{\alpha}}, j}}_{L^2(\Omega)} \leq \qquad} + C\, \big( \Norm{ h}_{H^{s,k}}+\sqrt\mu\Norm{\nabla_{\bm{x}} \cH}_{H^{s,k}} \big) \big(M+ \Norm{{\bm{u}}_\star}_{H^{s,k}}+\sqrt\mu \Norm{w_\star}_{H^{s,k-1} } \big) \,, \end{aligned} \label{eq.est-quasilin-j-nonhydro} \end{equation} \end{subequations} and \begin{subequations} 
\begin{equation}\label{eq.quasilin-j-h-nonhydro} \begin{aligned} \partial_t h^{({\bm{\alpha}},j)}+(\bar{\bm{u}}+{\bm{u}})\cdot \nabla_{\bm{x}} h^{({\bm{\alpha}},j)} &=\kappa\Delta_{\bm{x}} h^{({\bm{\alpha}},j)}+r_{{\bm{\alpha}},j}+\nabla_{{\bm{x}}} \cdot {\bm{r}}_{{\bm{\alpha}},j}, \end{aligned} \end{equation} where for almost every $t\in[0,T]$, one has $(r_{{\bm{\alpha}},j}(t,\cdot),{\bm{r}}_{{\bm{\alpha}},j}(t,\cdot))\in L^2(\Omega)^{1+d}$ and \begin{align}\label{eq.est-quasilin-j-h-nonhydro} \kappa^{1/2} \Norm{r_{{\bm{\alpha}},j}}_{L^2(\Omega) } + \Norm{{\bm{r}}_{{\bm{\alpha}},j}}_{L^2(\Omega) } & \leq C\,M. \end{align} \end{subequations} \end{lemma} \begin{proof} Let us first point out that the estimates for $\norm{R_{{\bm{\alpha}}, 0}|_{\varrho=\rho_0}}_{L^2_{\bm{x}}}$, $ \Norm{R_{{\bm{\alpha}}, j}}_{L^2(\Omega)}$, $\Norm{r_{{\bm{\alpha}},j}}_{L^2(\Omega) }$ and $ \Norm{{\bm{r}}_{{\bm{\alpha}},j}}_{L^2(\Omega) }$ have been stated and proved in Lemma~\ref{lem:quasilinearization}. Thus we only need to focus on the other terms. In the following, we denote $s_0=s-\frac52>\frac d2$. 
Using the identity (already pointed out in~\eqref{eq.id-eta}) \[ \partial_t \cH +(\bar {\bm{u}}+{\bm{u}})\cdot\nabla_{\bm{x}}\cH -w=\kappa\Delta_{\bm{x}}\cH, \] and the commutator estimate in Lemma~\ref{L.commutator-Hsk}, we find immediately \[ \widetilde R_{{\bm{\alpha}},j} = [{\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}} ,\bar {\bm{u}}+{\bm{u}}]\cdot\nabla_{\bm{x}}\cH, \qquad \Norm{\widetilde R_{{\bm{\alpha}},j}}_{L^2(\Omega)} \lesssim \big(\norm{\bar{\bm{u}}'}_{W^{k-1,\infty}_\varrho}+\Norm{{\bm{u}}}_{H^{s,k}} \big) \Norm{\cH}_{H^{s,k}}.\] Using the decomposition $P=P_{\rm h}+P_{\rm nh}$ as in Corollary~\ref{C.Poisson} and the identity~\eqref{eq.id-Phydro} we have \begin{align*} \nabla_{\bm{x}} P_{\rm h}:= \int_{\rho_0}^\varrho \varrho' \nabla_{\bm{x}} h (\cdot, \varrho') \, \dd \varrho' = - \varrho \nabla_{\bm{x}} \cH + \rho_0 \nabla_{\bm{x}} \cH |_{\varrho=\rho_0} + \int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH(\cdot, \varrho') \, \dd \varrho', \end{align*} and hence the evolution equation for $ {\bm{u}}$ reads \begin{multline*} {\partial}_t {\bm{u}}+ \big((\bar {\bm{u}} + {\bm{u}} - \kappa \tfrac{\nabla_{\bm{x}} h}{\bar h+h} ) \cdot \nabla_{\bm{x}} \big){\bm{u}}+ \frac{\rho_0 }{\varrho}\nabla_{\bm{x}} \cH |_{\varrho=\rho_0}+ \frac1{\varrho}\int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH(\cdot, \varrho') \, \dd \varrho' \\ +\frac1{\varrho}\nabla_{\bm{x}} {{P_{\rm nh}}} + \frac{\nabla_{\bm{x}} \cH}{\varrho(\bar h +h)} {\partial}_\varrho {{P_{\rm nh}}}= 0. 
\end{multline*} Differentiating ${\bm{\alpha}}$ times with respect to ${\bm{x}}$ and $j$ times with respect to $\varrho$ yields the corresponding equations in~\eqref{eq.quasilin-j-nonhydro}, with remainder terms \[ {\bm{R}}_{{\bm{\alpha}}, j}^{\rm nh}:={\bm{R}}_{{\bm{\alpha}}, j} - \big[ {\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}}, \tfrac{\nabla_{\bm{x}} \cH}{\varrho(\bar h + h)} \big] {\partial}_\varrho P_{\rm nh}, \] using the notation $ {\bm{R}}_{{\bm{\alpha}}, j}$ for the hydrostatic contributions introduced in Lemma~\ref{lem:quasilinearization}. The first addend has been estimated in Lemma~\ref{lem:quasilinearization}: see~\eqref{eq.est-quasilin} (when $j=0$) and~\eqref{eq.est-quasilin-j} (when $j\ge1$). We now estimate the second addend as follows. By the commutator estimate in Lemma~\ref{L.commutator-Hsk} with $k=s\geq s_0+3/2$, we have \[\Norm{ \big[ {\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}}, \tfrac{\nabla_{\bm{x}} \cH}{\varrho(\bar h + h)} \big] {\partial}_\varrho P_{\rm nh}}_{L^2(\Omega)} \lesssim \Norm{\tfrac{\nabla_{\bm{x}} \cH}{\varrho(\bar h + h)}}_{H^{s,k}} \Norm{{\partial}_\varrho P_{\rm nh}}_{H^{s-1, k-1}}.\] Then, by the tame product estimate of Lemma~\ref{L.product-Hsk} and the composition estimates of Lemma~\ref{L.composition-Hsk-ex}, we have \[ \Norm{\tfrac{\nabla_{\bm{x}} \cH}{\varrho(\bar h + h)}}_{H^{s,k}} \leq C(h_\star,\norm{\bar h}_{W^{k,\infty}},\Norm{h}_{H^{s-1,k-1}}) (\Norm{h}_{H^{s,k}} \Norm{\nabla_{\bm{x}} \cH}_{H^{s-1,k-1}} +\Norm{\nabla_{\bm{x}} \cH}_{H^{s,k}}) \] and it remains to use estimate~\eqref{ineq:est-Psmall-nonhydro} in Corollary~\ref{C.Poisson} to infer \begin{multline*}\Norm{ {\bm{R}}_{{\bm{\alpha}}, j}^{\rm nh}}_{L^2(\Omega)} \leq C(h_\star,\bar M,M) \, M \, \big(1 +\kappa\Norm{ \nabla_{\bm{x}} h}_{H^{s,k}} \big)\\ + C(h_\star,\bar M,M)\, \sqrt\mu\, (\Norm{h}_{H^{s,k}} +\Norm{\nabla_{\bm{x}} \cH}_{H^{s,k}}) \big(M+ \Norm{{\bm{u}}_\star}_{H^{s,k}}+\sqrt\mu \Norm{w_\star}_{H^{s,k-1} } \big). 
\end{multline*} Now consider \[ R^{\rm nh}_{{\bm{\alpha}}, j} := - \sqrt\mu [{\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}}, \bar {\bm{u}} + {\bm{u}}] \cdot \nabla_{\bm{x}} w + \kappa \sqrt\mu \left[{\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}}, \tfrac{\nabla_{\bm{x}} h}{\bar h+h} \right] \cdot \nabla_{\bm{x}} w + \tfrac1{\sqrt\mu} [{\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}}, \tfrac{1}{\varrho(\bar h + h)}] {\partial}_\varrho P. \] We have, by Lemma~\ref{L.commutator-Hsk} with $k=s\geq s_0+3/2$, \begin{align*} \sqrt\mu\Norm{[{\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}}, \bar {\bm{u}}+{\bm{u}}] \cdot \nabla_{\bm{x}} w}_{L^2(\Omega)} \lesssim \sqrt\mu \big(\norm{\bar{\bm{u}}'}_{W^{k-1,\infty}_\varrho} +\Norm{ {\bm{u}}}_{H^{s,k}}\big) \Norm{\nabla_{\bm{x}} w}_{H^{s-1,k-1}}, \end{align*} and similarly, using the tame product estimate of Lemma~\ref{L.product-Hsk} and the composition estimates of Lemma~\ref{L.composition-Hsk-ex} as above, \[ \kappa \sqrt\mu \Norm{ \big[ {\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}}, \tfrac{\nabla_{\bm{x}} h}{\bar h+h} \big] \cdot \nabla_{\bm{x}} w}_{L^2(\Omega)} \leq \kappa\sqrt\mu\, C(h_\star,\bar M,M) (\Norm{h}_{H^{s,k}}^2 +\Norm{\nabla_{\bm{x}} h}_{H^{s,k}}) \Norm{w}_{H^{s,k-1}} \] and \[ \tfrac1{\sqrt\mu} \Norm{ [{\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}}, \tfrac1{\varrho(\bar h + h)}] {\partial}_\varrho P_{\rm nh}}_{L^2(\Omega)} \le \, \tfrac1{\sqrt\mu} C(h_\star,\bar M,M) \Norm{h}_{H^{s,k}} \Norm{{\partial}_\varrho P_{\rm nh}}_{H^{s-1,k-1}}. \] Collecting the above and using estimate~\eqref{ineq:est-Psmall-nonhydro} in Corollary~\ref{C.Poisson} yields \begin{align*}\Norm{ R_{{\bm{\alpha}}, j}^{\rm nh}}_{L^2(\Omega)} &\leq C(h_\star,\bar M,M) \, M \, \big(1 +\kappa\Norm{ \nabla_{\bm{x}} h}_{H^{s,k}} \big)\\ &\quad + C(h_\star,\bar M,M)\, \Norm{h}_{H^{s,k}} \big(M+ \Norm{{\bm{u}}_\star}_{H^{s,k}}+\sqrt\mu \Norm{w_\star}_{H^{s,k-1} } \big). 
\end{align*} Finally, we consider the remainder (stemming from differentiating the incompressibility condition~\eqref{eq:incompressibility-redef}) \begin{align*} R^{\rm{div}}_{{\bm{\alpha}},j}&= \big({\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}} (h \nabla_{\bm{x}} \cdot {\bm{u}})- ( h^{({\bm{\alpha}},j)} )\nabla_{\bm{x}} \cdot {\bm{u}} - h \nabla_{\bm{x}} \cdot {\bm{u}}^{({\bm{\alpha}},j)}\big) \\ & \qquad + \big({\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}}(({\partial}_\varrho {\bm{u}}) \cdot (\nabla_{\bm{x}} \cH)) - ({\partial}_\varrho{\bm{u}}^{({\bm{\alpha}},j)} ) \cdot (\nabla_{\bm{x}} \cH)-({\partial}_\varrho {\bm{u}}) \cdot( \nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} )\big)\\ & \qquad + \big({\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}} (\bar h \nabla_{\bm{x}} \cdot {\bm{u}})- \bar h \nabla_{\bm{x}} \cdot {\bm{u}}^{({\bm{\alpha}},j)}\big) + \big({\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}}(\bar {\bm{u}}' \cdot \nabla_{\bm{x}} \cH) -\bar {\bm{u}}'\cdot \nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} \big). \end{align*} Using Lemma~\ref{L.commutator-Hsk-sym} for the first two terms and direct estimates for the last two terms, together with $k=s\geq s_0+5/2$, we obtain \begin{align*} \Norm{R_{{\bm{\alpha}},j}^{\rm{div}}}_{L^2(\Omega)}&\lesssim \Norm{h}_{H^{s-1, k-1}} \Norm{\nabla_{\bm{x}} \cdot {\bm{u}}}_{H^{s-1, k-1}} + \Norm{{\partial}_\varrho {\bm{u}}}_{H^{s-1, k-1}} \Norm{\nabla_{\bm{x}} \cH}_{H^{s-1, k-1}}\\ &\qquad + \Norm{\bar h}_{W^{k, \infty}_\varrho} \Norm{\nabla_{\bm{x}} \cdot {\bm{u}}}_{H^{s-1,k-1}} + \Norm{\bar {\bm{u}}'}_{W^{k, \infty}_\varrho} \Norm{\nabla_{\bm{x}} \cH}_{H^{s-1, k-1}}\\ &\lesssim \big( \Norm{\bar h}_{W^{k, \infty}_\varrho}+ \Norm{\bar {\bm{u}}'}_{W^{k, \infty}_\varrho}+ \Norm{h}_{H^{s-1, k-1}}+\Norm{ {\bm{u}}}_{H^{s, k}}\big) \big( \Norm{ \cH}_{H^{s, k-1}}+\Norm{ {\bm{u}}}_{H^{s, k-1}} \big). \end{align*} This concludes the proof. 
\end{proof} \subsection{A priori energy estimates} In this section we provide {\em a priori} energy estimates associated with the equations featured in Lemma~\ref{lem.quasilin-nonhydro}. We point out that such estimates concerning $ h^{({\bm{\alpha}},j)}$ solving the transport-diffusion equation~\eqref{eq.quasilin-j-h-nonhydro} have been provided in Lemma~\ref{lem:estimate-transport-diffusion}. Corresponding estimates for $\nabla_{\bm{x}}\cH$ stemming from the first equation of~\eqref{eq.quasilin-j-nonhydro} are easily obtained. Hence we consider the remaining equations in~\eqref{eq.quasilin-j-nonhydro}. Specifically, recalling the notation $\dot \cH= \cH^{({\bm{\alpha}},j)}, \dot h=h^{({\bm{\alpha}}, j)}, \dot {\bm{u}}={\bm{u}}^{({\bm{\alpha}}, j)}, \dot w=w^{({\bm{\alpha}}, j)}, \dot P_{\text{nh}}= P_{\text{nh}}^{({\bm{\alpha}}, j)}$, we consider the following linearized system: \begin{equation}\label{eq.nonhydro-linearized} \begin{aligned} \partial_t \dot\cH+(\bar{\bm{u}}+{\bm{u}})\cdot \nabla_{\bm{x}} \dot \cH + \int_\varrho^{\rho_1} \big( (\bar {\bm{u}}' + {\partial}_{\varrho}{\bm{u}}) \cdot\nabla_{\bm{x}} \dot \cH + (\bar h+h)\nabla_{\bm{x}} \cdot \dot{\bm{u}}\big) \dd\varrho' -\kappa\Delta_{\bm{x}} \dot\cH &=R,\\ \varrho \big( \partial_t\dot{\bm{u}}+\big(({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h})\cdot\nabla_{\bm{x}} \big)\dot{\bm{u}} \big) +\rho_0 \nabla_{\bm{x}} \dot \cH |_{\varrho=\rho_0} + \int_{\rho_0}^\varrho \nabla_{\bm{x}} \dot \cH\dd\varrho'+\nabla_{\bm{x}} \dot{{P_{\rm nh}}} + \frac{\nabla_{\bm{x}} \cH}{\bar h + h} {\partial}_\varrho \dot{P_{\rm nh}} &={\bm{R}}^{\rm nh} ,\\ \sqrt\mu \varrho \big( \partial_t \dot w + \big({\bar {\bm{u}}}+{\bm{u}}-\kappa\tfrac{\nabla_{\bm{x}} h}{\bar h+ h}\big)\cdot\nabla_{\bm{x}} \dot w\big) - \frac1{\sqrt\mu}\frac{{\partial}_\varrho \dot{P_{\rm nh}} }{\bar h + h} &= R^{\rm nh},\\ -{\partial}_\varrho \dot w + (\bar h + h) \nabla_{\bm{x}} \cdot \dot {\bm{u}} + \nabla_{\bm{x}} \cH \cdot 
{\partial}_\varrho \dot {\bm{u}} - ({\partial}_\varrho \dot \cH)\nabla_{\bm{x}} \cdot {\bm{u}} + \nabla_{\bm{x}} \dot \cH \cdot (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}) &= R^{\rm div} ,\end{aligned} \end{equation} where we denote as always $\cH(\cdot,\varrho)=\int_{\varrho}^{\rho_1}h(\cdot,\varrho')\dd\varrho'$. We shall use the following definitions of the spaces $Y^0$ and $Y^1$: \begin{equation}\label{eq:Y01spaces} \begin{aligned} Y^0&:= \mathcal{C}^0([\rho_0,\rho_1];L^2(\mathbb{R}^d))\times L^2(\Omega)^d \times L^2(\Omega) \times L^2(\Omega)\ , \qquad \text{ and } \\ Y^1&:=\left\{ (\cH,{\bm{u}},w,P)\in H^{1,1}(\Omega)^{d+3} \ : \ \cH\big\vert_{\varrho=\rho_0}\in H^1(\mathbb{R}^d),\ w\big\vert_{\varrho=\rho_1}=0,\ P\big\vert_{\varrho=\rho_0}=0\right\}. \end{aligned} \end{equation} \begin{lemma}\label{lem:estimates-nonhydro} Let $M, h_\star,h^\star>0$ be fixed. There exists $C(M,h_\star,h^\star)>0$ such that for any $\kappa>0$ and $\mu > 0$, and for any $(\bar h, \bar {\bm{u}}) \in W^{1, \infty}((\rho_0, \rho_1))^{1+d}$ and any $T>0$ and $(h,{\bm{u}}, w)\in L^\infty(0,T;W^{1,\infty}(\Omega))$ with $\Delta_{\bm{x}} h\in L^1(0,T;L^\infty(\Omega))$ satisfying~\eqref{eq:nonhydro-iso-redef} and, for almost any $t\in [0,T]$, the estimate \[ \Norm{ h(t,\cdot)}_{L^\infty(\Omega)} +\Norm{ \nabla_{\bm{x}} h(t,\cdot)}_{L^\infty_{\bm{x}} L^2_\varrho} +\Norm{ \nabla_{\bm{x}}\cdot {\bm{u}}(t,\cdot) }_{L^\infty(\Omega)} \le M \] and the upper and lower bounds \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star \leq \bar h(\varrho)+h(t,{\bm{x}},\varrho) \leq h^\star ; \] and for any $(\dot\cH, \dot{\bm{u}}, \dot w,\dot P_{\rm nh}) \in \mathcal{C}^0([0,T]; Y^0)\cap L^2(0,T; Y^1)$ and $(R,{\bm{R}}^{\rm nh},R^{\rm nh},R^{\rm div})\in L^2(0,T; Y^0)$ satisfying system~\eqref{eq.nonhydro-linearized} in $L^2(0,T; Y^1)'$, the following inequality holds: \begin{align*} \frac{\dd}{\dd t} \mathcal{E}(\dot\cH, \dot{\bm{u}},\dot w)+ \tfrac\kappa2 \Norm{\nabla_{\bm{x}} \dot 
\cH}_{L^2(\Omega)}^2& + \rho_0\kappa \norm{\nabla_{\bm{x}} \dot \cH \big\vert_{\varrho=\rho_0} }_{ L^2_{\bm{x}}}^2\\ \leq &\, C\, (1+\kappa^{-1}\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho}^2 ) \mathcal{E}(\dot\cH, \dot{\bm{u}},\dot w) \\ &+C\,\big( M+\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{ L^\infty_{\bm{x}} L^\infty_\varrho} \big) \Norm{\dot{P_{\rm nh}}}_{L^2(\Omega)} \big( \Norm{{\partial}_\varrho\dot \cH}_{L^2(\Omega)}+\Norm{\nabla_{\bm{x}} \dot\cH}_{L^2(\Omega)} \big) \\ &+\Norm{\dot{P_{\rm nh}}}_{L^2(\Omega)} \Norm{R^{\rm div}}_{L^2(\Omega)} + C\, \mathcal{E}(\dot\cH, \dot{\bm{u}},\dot w)^{1/2} \mathcal{E}(R, {\bm{R}}^{\rm nh}, R^{\rm nh})^{1/2} , \end{align*} where we denote \begin{equation}\label{eq:energy-nonhydro} \mathcal{E}(\dot \cH, \dot {\bm{u}}, \dot w) = \frac 12 \int_{\rho_0}^{\rho_1} \int_{\mathbb{R}^d} \dot \cH^2 + \varrho (\bar h + h) |\dot{\bm{u}}|^2+\mu \varrho (\bar h + h) \dot w^2 \, \dd{\bm{x}} \dd \varrho + \frac 12 \int_{\mathbb{R}^d} \dot \cH^2|_{\varrho=\rho_0} \, \dd {\bm{x}} . \end{equation} \end{lemma} \begin{proof} We test the first equation against $\dot \cH \in L^2(0,T; H^{1,1}(\Omega))$ and its trace on $\{({\bm{x}},\rho_0),\ {\bm{x}}\in\mathbb{R}^d\}$ against $\rho_0\dot \cH|_{\varrho=\rho_0}\in L^2(0,T; H^{1}_{\bm{x}}(\mathbb{R}^d))$, the second equation against $(\bar h + h) \dot {\bm{u}} \in L^2(0,T; H^{1,1}(\Omega)^d)$ and the third equation against $\sqrt\mu(\bar h +h) \dot w \in L^2(0,T; H^{1,1}(\Omega))$. 
This yields: \begin{align*} &\frac{\dd}{\dd t}\mathcal{E}(\dot \cH, \dot {\bm{u}}, \dot w) + \kappa \Norm{ \nabla_{\bm{x}} \dot \cH}_{L^2(\Omega)}^2+\kappa \norm{ \nabla_{\bm{x}} \dot \cH|_{\varrho=\rho_0}}_{L^2_{\bm{x}} }^2 \\ & = -\big( (\bar {\bm{u}} + {\bm{u}}) \cdot \nabla_{\bm{x}} \dot \cH, \dot \cH\big)_{L^2(\Omega)} - \left(\int_\varrho^{\rho_1} (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}) \cdot \nabla_{\bm{x}} \dot \cH \, \dd\varrho', \dot \cH \right)_{L^2(\Omega)} & {\rm (i)}\\ & \quad - \left( \int_\varrho^{\rho_1} (\bar h+h) \nabla_{\bm{x}} \cdot \dot {\bm{u}} \, \dd \varrho', \dot \cH\right)_{L^2(\Omega)} + \big(R, \dot \cH\big)_{L^2(\Omega)} & {\rm (ii)} \\ &\quad - \big(\varrho (\bar {\bm{u}} + {\bm{u}}) \cdot \nabla_{\bm{x}} \dot {\bm{u}}, (\bar h + h) \dot {\bm{u}}\big)_{L^2(\Omega)} + \kappa \big(\varrho (\nabla_{\bm{x}} h \cdot \nabla_{\bm{x}}) \dot {\bm{u}}, \dot {\bm{u}}\big)_{L^2(\Omega)} & {\rm (iii)}\\ & \quad - \big(\rho_0 \nabla_{\bm{x}} \dot \cH|_{\varrho=\rho_0},(\bar h +h) \dot {\bm{u}}\big)_{L^2(\Omega)} - \left(\int_{\rho_0}^\varrho \nabla_{\bm{x}} \dot \cH \, \dd \varrho', (\bar h +h) \dot {\bm{u}} \right)_{L^2(\Omega)} & {\rm (iv)}\\ &\quad - \big(\nabla_{\bm{x}} \dot {P_{\rm nh}}, (\bar h+h)\dot {\bm{u}}\big)_{L^2(\Omega)} - \big(({\partial}_\varrho \dot{P_{\rm nh}}) \, \nabla_{\bm{x}} \cH , \dot {\bm{u}}\big)_{L^2(\Omega)} + \big({\bm{R}}^{\rm nh}, (\bar h + h) \dot {\bm{u}}\big)_{L^2(\Omega)} & {\rm (v)}\\ & \quad - \mu \big(\varrho (\bar {\bm{u}} + {\bm{u}}) \cdot \nabla_{\bm{x}} \dot w, (\bar h + h) \dot w\big)_{L^2(\Omega)} +\mu \kappa \big(\varrho (\nabla_{\bm{x}} h \cdot \nabla_{\bm{x}} ) \dot w, \dot w\big)_{L^2(\Omega)} & {\rm (vi)}\\ &\quad + \big({\partial}_\varrho \dot{P_{\rm nh}}, \dot w\big)_{L^2(\Omega)} + \sqrt\mu\big( R^{\rm nh}, (\bar h + h) \dot w\big)_{L^2(\Omega)} & {\rm (vii)}\\ & \quad - \rho_0 \left ((\bar {\bm{u}}+{\bm{u}}) (\nabla_{\bm{x}} \dot \cH|_{\varrho=\rho_0}), \dot \cH|_{\varrho=\rho_0} 
\right)_{L^2_{\bm{x}}} - \rho_0 \left( \int_{\rho_0}^{\rho_1} (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}) \cdot \nabla_{\bm{x}} \dot \cH \dd \varrho', \dot \cH|_{\varrho=\rho_0} \right)_{L^2_{\bm{x}}} & {\rm (viii)}\\ & \quad - \rho_0 \left( \int_{\rho_0}^{\rho_1} (h+\bar h) (\nabla_{\bm{x}} \cdot \dot {\bm{u}}) \dd \varrho', \dot \cH|_{\varrho=\rho_0} \right)_{L^2_{\bm{x}}} + \big(R|_{\varrho=\rho_0}, \dot \cH|_{\varrho=\rho_0}\big)_{L^2_{\bm{x}}} & {\rm (ix)}\\ &\quad + \frac 12 \big(\varrho ({\partial}_t h) \dot {\bm{u}}, \dot {\bm{u}}\big)_{L^2(\Omega)} + \frac \mu 2 \big(\varrho ({\partial}_t h) \dot w, \dot w\big)_{L^2(\Omega)}. & {\rm (x)} \end{align*} Some terms have already been treated in the course of the proof of Lemma~\ref{lem:estimate-system}: the second term in \rm{(i)} and the second term in \rm{(viii)} require $\kappa>0$; the first terms in \rm{(i)}, \rm{(viii)} are advection terms; the first addend of \rm{(ii)} is handled together with the second term in \rm{(iv)} after integration by parts; the first addend of \rm{(iv)} with the first term in \rm{(ix)}. The contributions in \rm{(iii)} compensate with the first addend of \rm{(x)}, using the first equation of~\eqref{eq:nonhydro-iso-redef} and, in the same way, the contributions in \rm{(vi)} compensate with the second addend of \rm{(x)}. It remains only to deal with the contributions from the non-hydrostatic pressure terms in \rm{(v)} and \rm{(vii)}, and the remainder terms. Consider the sum of the first two terms in \rm{(v)} and the first term in \rm{(vii)}. We integrate by parts in ${\bm{x}}$ in the first term and in $\varrho$ in the last two terms. 
Thus we have \begin{multline*} -\big(\nabla_{\bm{x}} \dot{P_{\rm nh}}, (\bar h + h) \dot {\bm{u}}\big)_{L^2(\Omega)} - \big(({\partial}_\varrho \dot{P_{\rm nh}}) \nabla_{\bm{x}} \cH, \dot {\bm{u}}\big)_{L^2(\Omega)} + \big({\partial}_\varrho \dot{P_{\rm nh}}, \dot w\big)_{L^2(\Omega)} \\ = \big(\dot{P_{\rm nh}}, (\bar h + h) \nabla_{\bm{x}} \cdot \dot {\bm{u}}\big)_{L^2(\Omega)}+\big(\dot{P_{\rm nh}} \nabla_{\bm{x}} \cH, {\partial}_\varrho \dot {\bm{u}}\big)_{L^2(\Omega)} -\big(\dot{P_{\rm nh}}, {\partial}_\varrho \dot w\big)_{L^2(\Omega)}, \end{multline*} where we used the identity $ h=-{\partial}_\varrho \cH$ and the boundary conditions $\dot{P_{\rm nh}}|_{\varrho=\rho_0}= \dot \cH|_{\varrho=\rho_1}=\dot w|_{\varrho=\rho_1}=0$ when integrating by parts with respect to $\varrho$. Using the last equation in~\eqref{eq.nonhydro-linearized} (stemming from the incompressibility condition), the above term reads \[ \big(\dot{P_{\rm nh}}, (\nabla_{\bm{x}} \cdot {\bm{u}})({\partial}_\varrho\dot \cH)- (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}})\cdot \nabla_{\bm{x}} \dot \cH +R^{\rm div}\big)_{L^2(\Omega)}. \] These terms, as well as the remainder terms \[ \big\vert \big(R, \dot \cH\big)_{L^2(\Omega)}\big\vert+\big\vert\big(R|_{\varrho=\rho_0}, \dot \cH|_{\varrho=\rho_0}\big)_{L^2_{\bm{x}}} \big\vert+\big\vert\big({\bm{R}}^{\rm nh}, (\bar h+h) \dot {\bm{u}}\big)_{L^2(\Omega)}\big\vert+ \sqrt\mu\big\vert (R^{\rm nh}, (\bar h + h) \dot w\big)_{L^2(\Omega)}\big\vert\ , \] are bounded by means of the Cauchy-Schwarz inequality, using $\rho_0 h_\star\leq \varrho(\bar h+h)\leq \rho_1 h^\star$. 
Altogether, we obtain the differential inequality \begin{align*} \frac{\dd}{\dd t} \mathcal{E}(\dot\cH, \dot{\bm{u}},\dot w)+ \kappa \Norm{\nabla_{\bm{x}} \dot\cH}_{L^2(\Omega)}^2 &+ \rho_0\kappa \norm{\nabla_{\bm{x}} \dot\cH \big\vert_{\varrho=\rho_0} }_{ L^2_{\bm{x}}}^2\\ \leq &\,C\, \mathcal{E}(\dot\cH, \dot{\bm{u}},0) +C \Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^2_\varrho L^\infty_{\bm{x}}} \mathcal{E}(\dot\cH, \dot{\bm{u}},0)^{1/2} \Norm{\nabla_{\bm{x}} \dot \cH}_{L^2(\Omega)}\\ &+C\, \big( M+\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_\varrho L^\infty_{\bm{x}}} \big) \Norm{\dot{P_{\rm nh}}}_{L^2(\Omega)} \big( \Norm{{\partial}_\varrho\dot \cH}_{L^2(\Omega)}+\Norm{\nabla_{\bm{x}} \dot\cH}_{L^2(\Omega)} \big) \\ &+\Norm{\dot{P_{\rm nh}}}_{L^2(\Omega)} \Norm{R^{\rm div}}_{L^2(\Omega)} + C \mathcal{E}(\dot\cH, \dot{\bm{u}},\dot w)^{1/2} \mathcal{E}(R, {\bm{R}}^{\rm nh}, R^{\rm nh})^{1/2} \end{align*} with $C=C(h_\star,h^\star,M)$, and the desired estimate follows straightforwardly. \end{proof} \begin{remark}\label{R.estimates-nonhydro} Lemma~\ref{lem:estimates-nonhydro} will be applied to the system~\eqref{eq.quasilin-j-nonhydro}-\eqref{eq.est-quasilin-j-nonhydro} appearing in Lemma~\ref{lem.quasilin-nonhydro}, {\em when $j=0$}. A similar result holds for the simplified system when $j\neq 0$. The main difference is that the result does not require nor provide the control of the trace ${\partial}_\varrho^j\cH\big\vert_{\varrho=\rho_0}$. \end{remark} \subsection{Large-time well-posedness} We prove the large-time existence of strong solutions to system~\eqref{eq:nonhydro-iso-recall}. As for the hydrostatic system, \emph{large time} underlines the fact that the existence time provided by the following result is bounded from below uniformly with respect to the vanishing parameter $\mu\in(0,1]$. Besides, the result below keeps track of the dependence of this large time scale on the diffusivity parameter $\kappa\in[\mu,1]$. 
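Before stating the result, let us record schematically how differential inequalities of the type provided by Lemma~\ref{lem:estimates-nonhydro} produce time-uniform bounds (this is only an informal guide to the bootstrap argument below, with generic constants). If $\mathcal{E}\geq 0$ satisfies $\mathcal{E}'\leq A\,\mathcal{E}+B\,\mathcal{E}^{1/2}$ on $[0,T]$ with $A,B\geq 0$, then $y:=\mathcal{E}^{1/2}$ satisfies $y'\leq \tfrac A2\, y+\tfrac B2$, and Gr\"onwall's lemma yields \[ \sup_{t\in[0,T]}\mathcal{E}(t)^{1/2}\leq \Big(\mathcal{E}(0)^{1/2}+\tfrac B2\, T\Big)\,e^{AT/2}, \] so that choosing $T$ with $AT\lesssim 1$ and $B\,T\lesssim \mathcal{E}(0)^{1/2}$ keeps the energy comparable to its initial value.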
\begin{proposition}\label{P.NONHydro-large-time} Let $s, k \in \mathbb{N}$ be such that $k=s> \frac 52 +\frac d 2$ and $\bar M,M,h_\star,h^\star>0$. Then, there exists $C>0$ such that, for any $ 0<\mu\leq \kappa\leq 1$, and any $(\bar h, \bar {\bm{u}}) \in W^{k,\infty}((\rho_0,\rho_1)) \times W^{k+1,\infty}((\rho_0,\rho_1))^d $ such that \[ \norm{\bar h}_{W^{k,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k,\infty}_\varrho }\leq \bar M\,;\] for any initial data $(h_0, {\bm{u}}_0, w_0)\in H^{s,k}(\Omega)^{d+2}$ with \[ M_0:= \Norm{\cH_0}_{H^{s,k}}+\Norm{{\bm{u}}_0}_{H^{s,k}}+\sqrt \mu \Norm{w_0}_{H^{s,k}}+\norm{\cH_0\big\vert_{\varrho=\rho_0}}_{H^s_{\bm{x}}}+\kappa^{1/2}\Norm{h_0}_{H^{s,k}} + \mu^{1/2}\kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH_0}_{H^{s,k}} \leq M\,, \] and satisfying the boundary condition $w_0|_{\varrho=\rho_1}=0$ and the incompressibility condition \[-(\bar h+h_0)\nabla_{\bm{x}}\cdot {\bm{u}}_0-(\nabla_{\bm{x}} \cH_0)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}_0)+\partial_\varrho w_0=0,\] the lower and upper bounds \[\forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star \leq \bar h(\varrho)+h_0({\bm{x}},\varrho) \leq h^\star \,, \] and the smallness assumption \[ C \kappa^{-1}\, \big(\norm{ \bar {\bm{u}}'}_{L^\infty_\varrho }^2+ M_0^2\big) \leq 1\,,\] the following holds. Setting \[ T^{-1}= C\, \big(1+ \kappa^{-1} \big(\norm{\bar {\bm{u}}'}_{L^2_\varrho}^2+M_0^2\big) \big), \] there exists a unique $(h,{\bm{u}},w)\in \mathcal{C}^0([0,T];H^{s,k}(\Omega)^{2+d})$ and $P\in L^2(0,T;H^{s+1,k+1}(\Omega))$ strong solution to~\eqref{eq:nonhydro-iso-redef} with initial data $(h,{\bm{u}},w)\big\vert_{t=0}=(h_0,{\bm{u}}_0,w_0)$. 
Moreover, one has $\cH\in L^\infty(0,T;H^{s+1,k}(\Omega))$ and $(h,\nabla_{\bm{x}}\cH)\in L^2(0,T;H^{s+1,k}(\Omega))$ and, for any $t\in[0,T]$, the lower and upper bounds \[\forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star/2 \leq \bar h(\varrho)+h(t,{\bm{x}},\varrho) \leq 2 h^\star \,, \] and the estimate below hold true \begin{align}\mathcal{F}(t):={}& \Norm{\cH(t,\cdot)}_{H^{s,k}}+\Norm{{\bm{u}}(t,\cdot)}_{H^{s,k}} +\mu^{1/2}\Norm{w(t,\cdot)}_{H^{s,k}}+\norm{\cH\big\vert_{\varrho=\rho_0}(t,\cdot)}_{H^s_{\bm{x}}} \notag\\ &\quad+\kappa^{1/2}\Norm{h(t,\cdot)}_{H^{s,k}} + \mu^{1/2}\kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH (t, \cdot)}_{H^{s,k}} \notag\\ &\quad+ \kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH}_{L^2(0,t;H^{s,k})} + \kappa^{1/2} \norm{\nabla_{\bm{x}} \cH \big\vert_{\varrho=\rho_0} }_{ L^2(0,t;H^s_{\bm{x}})} \notag\\ &\quad+\kappa \Norm{\nabla_{\bm{x}} h}_{L^2(0,t;H^{s,k})} + \mu^{1/2}\kappa \Norm{\nabla_{\bm{x}}^2 \cH }_{L^2(0,t; H^{s,k})} \leq C\, M_0. \label{eq:functional-nonhydro} \end{align} \end{proposition} \begin{proof} As for the large-time existence for the hydrostatic system (see Proposition~\ref{P.regularized-large-time-WP}), the proof is based on a bootstrap argument on the functional $\mathcal{F}$. Recalling that the (short-time) existence and uniqueness of the solution have been provided in Proposition~\ref{P.NONHydro-small-time}, we denote by $T^\star$ the maximal existence time, and set \begin{equation}\label{control-F} T_\star= \sup \{0 < T < T^\star \, : \; \forall \, t \in (0,T), \; h_\star/2 \le \bar h(\varrho) + h(t, {\bm{x}}, \varrho) \le 2h^\star \quad \text{and} \quad \mathcal{F}(t) \le C_0 M_0\}, \end{equation} with $C_0=C(h_\star,h^\star,\bar M, M)$ sufficiently large (to be determined). Henceforth, we restrict to $0<T<T_\star$, and denote by $C$ any positive constant depending only on $\bar M, h_\star, h^\star, C_0M_0$ and $s,k$. 
By means of~\eqref{eq.quasilin-j-h-nonhydro}-\eqref{eq.est-quasilin-j-h-nonhydro} in Lemma~\ref{lem.quasilin-nonhydro} and Lemma~\ref{lem:estimate-transport-diffusion}, we infer as in the proof of Proposition~\ref{P.regularized-large-time-WP} the control \begin{equation}\label{control-h} \kappa^{1/2}\Norm{h}_{L^\infty(0,T;H^{s,k})}+\kappa\Norm{\nabla_{\bm{x}} h}_{L^2(0,T;H^{s,k})} \leq \left(c_0 M_0+C\, C_0 M_0\, \big( T+\sqrt{T} \big)\right) \times \exp\Big( C C_0 M_0\, T\Big) \end{equation} with the same notations as above and $c_0$ a universal constant. In the non-hydrostatic situation, additional controls can be inferred on $\cH$. Indeed, from the first equation in Lemma~\ref{lem.quasilin-nonhydro},~\eqref{eq.quasilin-j-nonhydro}-\eqref{eq.est-quasilin-j-nonhydro}, we find that \[ \partial_t \cH^{({\bm{\alpha}},j)}+ (\bar {\bm{u}} + {\bm{u}}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} = \kappa \Delta_{\bm{x}} \cH^{({\bm{\alpha}})}+\widetilde R_{{\bm{\alpha}},j}+ w^{({\bm{\alpha}},j)} \] with \[ \sqrt\mu \Norm{\widetilde R_{{\bm{\alpha}},j}+ w^{({\bm{\alpha}},j)}}_{L^2(\Omega) } \leq C\, C_0 M_0\, . \] Differentiating once with respect to the space variables and proceeding as in Lemma~\ref{lem:estimate-transport-diffusion}, we infer \begin{multline}\label{control-cH} \mu^{1/2}\kappa^{1/2}\Norm{\nabla_{\bm{x}} \cH}_{L^\infty(0,T;H^{s,k})}+\mu^{1/2}\kappa\Norm{\nabla_{\bm{x}}^2 \cH}_{L^2(0,T;H^{s,k})} \\ \leq \big(c_0M_0 + C C_0 M_0(T +\sqrt T)\big)\times\exp\Big(C C_0 M_0 T\Big).
\end{multline} Next we use again Lemma~\ref{lem.quasilin-nonhydro},~\eqref{eq.quasilin-j-nonhydro}-\eqref{eq.est-quasilin-j-nonhydro}, together with Lemma~\ref{lem:estimates-nonhydro} (see also Remark~\ref{R.estimates-nonhydro}) to obtain that the functional \[\mathcal{E}^{s,k} :=\frac12\sum_{j=0}^k \sum_{|{\bm{\alpha}}|=0}^{s-j} \iint_{\Omega} ( {\partial}_\varrho^j {\partial}_{\bm{x}}^{\bm{\alpha}} \cH)^2 + \varrho (\bar h + h) |{\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}}{\bm{u}}|^2+\mu \varrho (\bar h + h) ( {\partial}_\varrho^j{\partial}_{\bm{x}}^{\bm{\alpha}} w)^2 \, \dd{\bm{x}} \dd \varrho + \frac 12 \sum_{|{\bm{\alpha}}|=0}^{s}\int_{\mathbb{R}^d} ({\partial}_{\bm{x}}^{\bm{\alpha}} \cH|_{\varrho=\rho_0})^2 \, \dd {\bm{x}} ,\] satisfies the differential inequality \begin{equation} \label{ineq}\frac{\dd}{\dd t} \mathcal{E}^{s,k}+ \tfrac\kappa2 \Norm{\nabla_{\bm{x}} \cH}_{H^{s,k}}^2 + \rho_0\kappa \norm{\nabla_{\bm{x}} \cH \big\vert_{\varrho=\rho_0} }_{ H^s_{\bm{x}}}^2\leq C\, \big( R_1+R_2+R_3 \big); \end{equation} with \begin{align*} R_1 &:= (1+\kappa^{-1}\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_{\bm{x}} L^2_\varrho }^2 )\mathcal{E}^{s,k} ,\\ R_2 &:= \big( C_0 M_0+\Norm{ \bar {\bm{u}}'+{\partial}_\varrho {\bm{u}} }_{L^\infty_\varrho L^\infty_{\bm{x}}} \big) \Norm{{P_{\rm nh}}}_{H^{s,k}} \big( \Norm{ h}_{H^{s,k}}+ \Norm{\nabla_{\bm{x}} \cH}_{H^{s,k}}\big),\\ R_3 &:=\Norm{ P_{\rm nh}}_{H^{s,k}} \Norm{R^{\rm{div}}_{s,k}}_{L^2(\Omega)} + (\mathcal{E}^{s,k})^{1/2} \Norm{\mathcal{R}_{s,k}}_{L^2(\Omega)}, \end{align*} and \begin{align} \Norm{R^{\rm{div}}_{s,k}}_{L^2(\Omega)} & \leq C\ C_0 M_0\,, \label{ineq-Rdiv} \\ \Norm{\mathcal R_{s,k}}_{L^2(\Omega)} & \leq C\, C_0M_0\, \big(1+\kappa \Norm{\nabla_{\bm{x}} h}_{H^{s,k}} \big) \notag\\ & \quad + C\, \big( \Norm{ h}_{H^{s,k}}+\mu^{1/2}\Norm{\nabla_{\bm{x}} \cH}_{H^{s,k}} \big) \big(C_0M_0+ \Norm{{\bm{u}}_\star}_{H^{s,k}}+\mu^{1/2} \Norm{w_\star}_{H^{s,k} } \big) \,. 
\label{ineq-R} \end{align} By~\eqref{control-F}, we obviously have, for any $0<t<T_\star$, \[ \frac1{2\rho_1 h^\star}\mathcal{E}^{s,k}(t) \leq \Norm{\cH(t,\cdot)}_{H^{s,k}}^2+\Norm{{\bm{u}}(t,\cdot)}_{H^{s,k}}^2 +\mu\Norm{w(t,\cdot)}_{H^{s,k}}^2+\norm{\cH\big\vert_{\varrho=\rho_0}(t,\cdot)}_{H^s_{\bm{x}}}^2 \leq \frac2{\rho_0 h_\star}\mathcal{E}^{s,k}(t).\] Moreover, we have the following control on ${\bm{u}}_\star:=-\kappa\frac{\nabla_{\bm{x}} h}{\bar h+ h} $ and $w_\star:=\kappa\Delta_{\bm{x}} \cH-\kappa\frac{\nabla_{\bm{x}} h\cdot\nabla_{\bm{x}} \cH}{\bar h+h}$ stemming from (tame) product and composition estimates (Lemmas~\ref{L.product-Hsk} and~\ref{L.composition-Hsk-ex}), and using that $\mu\leq \kappa\leq 1$: \begin{equation}\label{control-u-w-star} \Norm{{\bm{u}}_\star}_{L^2(0,T;H^{s,k})}+\mu^{1/2}\Norm{w_\star}_{L^2(0,T;H^{s,k})} \leq C \,C_0 M_0 (1+\sqrt T)\,. \end{equation} Finally, using estimate~\eqref{ineq:est-Psmall-nonhydro} in Corollary~\ref{C.Poisson} yields \begin{align*} \Norm{P_{\rm nh}}_{H^{s,k}} & \leq \Norm{P_{\rm nh}}_{L^2} +\Norm{\nabla_{{\bm{x}},\varrho} P_{\rm nh}}_{H^{s-1,k-1}} \leq \Norm{P_{\rm nh}}_{L^2} + \mu^{-1/2} {\Norm{\nabla^\mu_{{\bm{x}},\varrho} P_{\rm nh}}_{H^{s-1,k-1}} }\notag \\ & \le C\, \left(\Norm{ \nabla_{\bm{x}} \cH}_{H^{s-1,k-1}} + \norm{\cH\big\vert_{\varrho=\rho_0}}_{H^{s}_{\bm{x}}}+ \Norm{({\bm{u}},{\bm{u}}_\star)}_{H^{s,k}} +\mu^{1/2}\Norm{(w,w_\star)}_{H^{s,k-1}} \right), \end{align*} from which we infer, using the controls~\eqref{control-F} and~\eqref{control-u-w-star}, that \begin{equation}\label{ineq-P} \Norm{P_{\rm nh}}_{L^2(0,T;H^{s,k})}\leq C\, C_0 M_0(1+\sqrt T).
\end{equation} From~\eqref{control-F} and~\eqref{ineq-Rdiv}-\eqref{ineq-R}-\eqref{control-u-w-star}-\eqref{ineq-P} we infer \begin{align*} \int_0^T R_1(t)\dd t &\leq C\, (C_0 M_0)^2 (1+\kappa^{-1}\norm{ \bar {\bm{u}}'}_{L^2_\varrho}^2+\kappa^{-1}(C_0M_0)^2 )\, T,\\ \int_0^T R_2(t)\dd t &\leq C\, \kappa^{-1/2} \, \big( C_0 M_0+\norm{ \bar {\bm{u}}'}_{L^\infty_\varrho } \big) \, (C_0 M_0)^2(1+\sqrt T)^2 \, ,\\ \int_0^T R_3(t)\dd t &\leq C\, (C_0 M_0)^2(T+\sqrt T)\ +\ C\, (C_0 M_0)^2\, \big( T+C_0 M_0 \sqrt T+ \kappa^{-1/2}(C_0 M_0)(T+\sqrt T)\big) . \end{align*} Hence there exists $C>0$, depending on $\bar M, h_\star, h^\star, C_0, M_0$ (and $s,k$), such that if \[C\, T\, \big( 1+\kappa^{-1}(\norm{\bar {\bm{u}}'}_{L^2_\varrho}^2 +M_0^2)\big)\leq 1,\] and imposing additionally \footnote{We point out that the only term requiring the above smallness condition~\eqref{eq:smallness} on the initial data is (the time integral of) $R_2$, and more precisely the product $ \Norm{P_{\text{nh}}}_{H^{s,k}}\Norm{\nabla_{\bm{x}} \cH}_{H^{s,k}}$, where both terms are only square-integrable in time.} that \begin{equation}\label{eq:smallness} C\, \kappa^{-1/2}\, \big( C_0 M_0+\norm{ \bar {\bm{u}}'}_{L^\infty_\varrho } \big) \leq \tfrac1{16} \rho_0 h_\star \end{equation} then, integrating the differential inequality~\eqref{ineq} and combining with~\eqref{control-h} and~\eqref{control-cH}, we have \[ \mathcal{E}^{s,k}(t) \leq \mathcal{E}^{s,k}(0)+ \tfrac18 (\rho_0h_\star) (C_0 M_0)^2\,.\] Now, setting $C_0=\max\big\{ 4(\frac{\rho_1 h^\star}{\rho_0h_\star})^{1/2}, 8 c_0\big\}$, and $C$ accordingly, one has $\mathcal{F}(t) \leq C_0 M_0/2$ for all ${0<t<T}$. We obtain as in the proof of Proposition~\ref{P.regularized-large-time-WP} the lower and upper bounds $2h_\star/3 \le \bar h(\varrho) + h(t, {\bm{x}}, \varrho) \le 3h^\star/2$, augmenting $C$ if necessary, and the standard continuity argument allows us to conclude the proof.
\end{proof} \section{Convergence}\label{S.Convergence} This section is devoted to the proof of the convergence of regular solutions to the non-hydrostatic equations~\eqref{eq:nonhydro-iso-intro} towards the corresponding solutions to the limit hydrostatic equations~\eqref{eq:hydro-iso-intro}, namely Theorem~\ref{thm-convergence}. Our convergence result holds in the strong sense and ``with loss of derivatives'': we prove that the solutions to the approximating (non-hydrostatic) equations converge towards the solutions to the limit (hydrostatic) equations in a suitable strong topology that is strictly weaker than the one measuring the size of the initial data. For a given set of initial data, we use the superscript ${\rm h}$ to refer to the solution to the hydrostatic equations (provided by the analysis of Section~\ref{S.Hydro} culminating in Theorem~\ref{thm-well-posedness}), and the superscript ${\rm nh}$ for the corresponding solution to the non-hydrostatic equations (provided by the analysis of Section~\ref{S.NONHydro}, specifically Proposition~\ref{P.NONHydro-small-time}). The superscript ${\rm d}$ denotes the difference between the non-hydrostatic solution and the hydrostatic one, whose size will be controlled in the limit $\mu \searrow 0$. While we can appeal to Theorem~\ref{thm-well-posedness} to obtain the existence, uniqueness and control of solutions to the hydrostatic equations over a large time interval, Proposition~\ref{P.NONHydro-small-time} provides only a time interval which {\em a priori} vanishes as $\mu \searrow 0$, and Proposition~\ref{P.NONHydro-large-time} only applies to sufficiently small initial data.
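Schematically, and without tracking the precise thresholds and constants, the convergence result takes the following form: for initial data of size $M$ in $H^{s+p,k+p}(\Omega)$ (with $p\in\mathbb{N}$ a fixed loss of derivatives), one obtains, on the time interval $[0,T]$ associated with the hydrostatic solution,
\[ \sup_{t\in[0,T]}\Big( \Norm{(\cH^{\rm nh}-\cH^{\rm h})(t,\cdot)}_{H^{s,k}}+\Norm{({\bm{u}}^{\rm nh}-{\bm{u}}^{\rm h})(t,\cdot)}_{H^{s,k}}\Big)\ \longrightarrow\ 0 \qquad \text{as } \mu\searrow 0\,,\]
with a convergence rate quantified in Proposition~\ref{P.CV-convergence}.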
The standard strategy (used for instance in~\cite{KlainermanMajda82} in the context of weakly compressible flows) that we apply here relies on a bootstrap argument to control the difference between the non-hydrostatic solution and the hydrostatic one in the time-interval provided by the hydrostatic solution, from which the existence and control of the non-hydrostatic solution (again, with loss of derivatives) can be inferred. We perform this analysis in Sections~\ref{S.CV-consistency} to~\ref{S.CV-control}, where we first provide a consistency result (Lemma~\ref{L.CV-consistency}), then exhibit the (non-hydrostatic) quasilinear structure of the equations satisfied by the difference (Lemma~\ref{L.CV-quasilinear}), and finally infer the uniform control of the non-hydrostatic solution and the strong convergence towards the corresponding hydrostatic solution (Proposition~\ref{P.CV-control}). In a final step, in Section~\ref{S.CV-convergence}, we use this uniform control to derive an improved convergence rate, based this time on the structure of the hydrostatic equations (Proposition~\ref{P.CV-convergence}). Propositions~\ref{P.CV-control} and~\ref{P.CV-convergence} immediately yield Theorem~\ref{thm-convergence}. \subsection{Consistency}\label{S.CV-consistency} In the following result we prove that solutions to the hydrostatic equations~\eqref{eq:hydro-iso-intro} emerging from smooth initial data satisfy (suitably defining the vertical velocity and pressure variables) the non-hydrostatic equations~\eqref{eq:nonhydro-iso-intro}, up to small remainder terms. \begin{lemma}\label{L.CV-consistency} There exists $p\in\mathbb{N}$ such that for any $s,k\in \mathbb{N}$ with $0\leq k\leq s$, the following holds. Let $\bar M,M,h_\star,h^\star>0$ be fixed.
Then there exist $C_0>0$ and $C_1>0$ such that for any $\kappa\in(0,1]$, any $(\bar h,\bar {\bm{u}})\in W^{k+p,\infty}((\rho_0,\rho_1))^{1+d} $ satisfying \[ \norm{\bar h}_{W^{k+p,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k+p-1,\infty}_\varrho }\leq \bar M,\] and any initial data $(h_0, {\bm{u}}_0) \in H^{s+p,k+p}(\Omega)$ satisfying the estimate \[ M_0:= \Norm{\cH_0}_{H^{s+p,k+p}}+\Norm{{\bm{u}}_0}_{H^{s+p,k+p}}+\norm{\cH_0\big\vert_{\varrho=\rho_0}}_{H^{s+p}_{\bm{x}}}+\kappa^{1/2}\Norm{h_0}_{H^{s+p,k+p}} \le M \] (where we denote $\cH_0(\cdot,\varrho):=\int_\varrho^{\rho_1} h_0(\cdot,\varrho')\dd\varrho'$) and the stable stratification assumption \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star\leq \bar h(\varrho)+h_0({\bm{x}},\varrho) \leq h^\star,\] there exists a unique $(h^{\rm h},{\bm{u}}^{\rm h})\in \mathcal{C}^0([0,T];H^{s+p,k+p}(\Omega)^{1+d})$ strong solution to~\eqref{eq:hydro-iso-intro} with initial data $(h^{\rm h},{\bm{u}}^{\rm h})\big\vert_{t=0}=(h_0,{\bm{u}}_0)$, where \[ T^{-1}= C_0\, \big(1+ \kappa^{-1} \big(\norm{\bar {\bm{u}}'}_{L^2_\varrho}^2+M_0^2\big) \big) \, .\] Moreover, one has for all $t\in[0,T]$, \[ \forall ({\bm{x}},\varrho)\in \Omega , \qquad h_\star/2 \leq \bar h(\varrho)+h^{\rm h}(t,{\bm{x}},\varrho) \leq 2\,h^\star , \] and, denoting $\cH^{\rm h}(\cdot,\varrho):=\int_\varrho^{\rho_1} h^{\rm h}(\cdot,\varrho')\dd\varrho'$ and \begin{align}\label{def-wh} w^{\rm h}(\cdot,\varrho) &:=-\int_\varrho^{\rho_1} (\bar h(\varrho') + h^{\rm h}(\cdot,\varrho')) \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h}(\cdot,\varrho')+\nabla_{\bm{x}} \cH^{\rm h}(\cdot,\varrho') \cdot ({\bar {\bm{u}}}'(\varrho')+\partial_\varrho{\bm{u}}^{\rm h}(\cdot,\varrho'))\dd\varrho'\,,\\ \label{def-Ph} P^{\rm h}(\cdot,\varrho)&:=\int_{\rho_0}^\varrho \varrho' h^{\rm h}(\cdot,\varrho')\dd\varrho'\,, \end{align} one has for any $ t\in[0,T]$, \begin{equation}\label{eq:hydro-estimate} \Norm{\big(h^{\rm h}(t,\cdot),\cH^{\rm h}(t,\cdot),{\bm{u}}^{\rm
h}(t,\cdot),w^{\rm h}(t,\cdot),P^{\rm h}(t,\cdot)\big)}_{H^{s+1,k+1}}\leq C_1\, M_0\,, \end{equation} and \begin{subequations} \begin{equation}\label{eq:hydro-consistency} \begin{aligned} \partial_t h^{\rm h}+\nabla_{\bm{x}} \cdot\big((\bar h +h^{\rm h})(\bar{\bm{u}} +{\bm{u}}^{\rm h})\big)&= \kappa \Delta_{\bm{x}} h^{\rm h},\\ \varrho\Big( \partial_t {\bm{u}}^{\rm h}+\big((\bar {\bm{u}} + {\bm{u}}^{\rm h} - \kappa\tfrac{\nabla_{\bm{x}} h^{\rm h}}{ \bar h+ h^{\rm h}})\cdot\nabla_{\bm{x}}\big) {\bm{u}}^{\rm h}\Big)+ \nabla_{\bm{x}} P^{\rm h}+ \frac{\nabla_{\bm{x}} \cH^{\rm h}}{\bar h+ h^{\rm h}}( {\partial}_\varrho P^{\rm h} + \varrho \bar h) &=0,\\ \mu \varrho\Big( \partial_t w^{\rm h}+\big(\bar {\bm{u}} + {\bm{u}}^{\rm h} - \kappa\tfrac{\nabla_{\bm{x}} h^{\rm h}}{ \bar h+ h^{\rm h}}\big)\cdot\nabla_{\bm{x}} w^{\rm h}\Big)- \frac{{\partial}_\varrho P^{\rm h}}{\bar h+ h^{\rm h}} + \frac{\varrho h^{\rm h}}{\bar h + h^{\rm h}}&=\mu\, R^{\rm h},\\ -(\bar h + h^{\rm h}) \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h}-\nabla_{\bm{x}} \cH^{\rm h} \cdot ({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}^{\rm h}) +\partial_\varrho w^{\rm h}&=0, \end{aligned} \end{equation} with $R^{\rm h}(t,\cdot) \in \mathcal{C}^0([0,T];H^{s,k}(\Omega))$ and satisfying for any $ t\in[0,T]$, \begin{equation}\label{est:hydro-consistency} \Norm{R^{\rm h}(t,\cdot)}_{H^{s,k}} \leq C_1\, M_0. \end{equation} \end{subequations} \end{lemma} \begin{proof} From Theorem~\ref{thm-well-posedness} we infer immediately (for $p>2+d/2$) the existence, uniqueness and control of the hydrostatic solution $(h^{\rm h},{\bm{u}}^{\rm h})\in \mathcal{C}^0([0,T];H^{s+p,k+p}(\Omega)^{1+d})$, as well as the constant $C_0>0$. From the formulas~\eqref{def-wh},~\eqref{def-Ph} and product estimates (Lemma~\ref{L.product-Hsk}) in the space $H^{s+p',k+p'}(\Omega)$ (for $1\leq p'\leq p$ sufficiently large) we infer the estimate~\eqref{eq:hydro-estimate}.
We obtain similarly the desired consistency estimate,~\eqref{eq:hydro-consistency}-\eqref{est:hydro-consistency}, using the identity (recall~\eqref{eq.id-Phydro}) \[ P^{\rm h} +\varrho\cH^{\rm h} = \int_{\rho_0}^{\varrho} \cH^{\rm h}(\cdot,\varrho')\dd \varrho'+\rho_0 \cH^{\rm h}\big\vert_{\varrho=\rho_0}\,, \] and denoting \[R^{\rm h}:= \varrho\Big( \partial_t w^{\rm h}+\big(\bar {\bm{u}} + {\bm{u}}^{\rm h} - \kappa\tfrac{\nabla_{\bm{x}} h^{\rm h}}{ \bar h+ h^{\rm h}}\big)\cdot\nabla_{\bm{x}} w^{\rm h}\Big),\] differentiating with respect to time the identity~\eqref{def-wh}, and using~\eqref{eq:hydro-iso-intro} to infer the control of $ {\partial}_t{\bm{u}}^{\rm h}$ and ${\partial}_t w^{\rm h}$. \end{proof} As a corollary to the above, we can write the equations satisfied by the difference between $(h^{\rm h},{\bm{u}}^{\rm h},w^{\rm h})$, i.e. the maximal solution to the hydrostatic equations emerging from given regular, well-prepared initial data, and $(h^{\rm nh},{\bm{u}}^{\rm nh},w^{\rm nh})$, i.e. the maximal solution to the non-hydrostatic equations with the same data (see Proposition~\ref{P.NONHydro-small-time}).
Specifically, under the assumptions and using the notations of Lemma~\ref{L.CV-consistency}, the differences \[ h^{\rm d}:=h^{\rm nh}-h^{\rm h}; \quad {\bm{u}}^{\rm d}:={\bm{u}}^{\rm nh}-{\bm{u}}^{\rm h}; \quad w^{\rm d}:=w^{\rm nh}-w^{\rm h} \] satisfy $(h^{\rm d},{\bm{u}}^{\rm d},w^{\rm d})\big\vert_{t=0}=(0,0,0)$ and \begin{equation}\label{eq:difference} \begin{aligned} {\partial}_t h^{\rm d} +\nabla_{\bm{x}}\cdot \big( (\bar {\bm{u}}+{\bm{u}}^{\rm nh}) h^{\rm d} + (\bar h + h^{\rm h}) {\bm{u}}^{\rm d} \big) & =\kappa \Delta_{\bm{x}} h^{\rm d},\\ {\partial}_t \cH^{\rm d} + (\bar {\bm{u}}+{\bm{u}}^{\rm nh}) \cdot \nabla_{\bm{x}} \cH^{\rm d} +\int_\varrho^{\rho_1} (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}^{\rm nh}) \cdot \nabla_{\bm{x}} \cH^{\rm d} \, \dd \varrho' +\int_\varrho^{\rho_1} (\bar h + h^{\rm nh}) \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} \, \dd\varrho' & \\ +\int_\varrho^{\rho_1} {\bm{u}}^{\rm d} \cdot \nabla_{\bm{x}} h^{\rm h} + h^{\rm d} \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h} \, \dd \varrho' & = \kappa\Delta_{\bm{x}}\cH^{\rm d},\\ {\partial}_t {\bm{u}}^{\rm d} + \big((\bar{\bm{u}} + {\bm{u}}^{\rm nh} - \kappa \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} ) \cdot \nabla_{\bm{x}}\big) {\bm{u}}^{\rm d} + {\frac{\rho_0}{\varrho} \nabla_{\bm{x}} \cH^{\rm d}|_{\varrho=\rho_0}} + {\frac 1 \varrho \int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH^{\rm d} \, \dd\varrho'} & \\ + \big(( {\bm{u}}^{\rm d} - \kappa ( \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} - \tfrac{\nabla_{\bm{x}} h^{\rm h}}{\bar h+h^{\rm h}})) \cdot \nabla_{\bm{x}}\big) {\bm{u}}^{\rm h} + \frac{\nabla_{\bm{x}}{ {P_{\rm nh}} }}{\varrho} + \frac{\nabla_{\bm{x}} \cH^{\rm nh}}{\varrho (\bar h+h^{\rm nh})} {\partial}_\varrho{ {P_{\rm nh}} } &= 0,\\ \mu \big( \partial_t w^{\rm d}+\big(\bar {\bm{u}} + {\bm{u}}^{\rm nh} - \kappa\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{ \bar h+ h^{\rm nh}}\big)\cdot\nabla_{\bm{x}} w^{\rm d} + ( {\bm{u}}^{\rm d} - \kappa ( \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar
h + h^{\rm nh}} - \tfrac{\nabla_{\bm{x}} h^{\rm h}}{\bar h+h^{\rm h}})) \cdot \nabla_{\bm{x}} w^{\rm h}\big) - \frac{{\partial}_\varrho P_{\rm nh}}{\varrho(\bar h+ h^{\rm nh})} &=-\mu\, R^{\rm h},\\ -(\bar h + h^{\rm nh}) \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} - h^{\rm d} \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h} -\nabla_{\bm{x}} \cH^{\rm d} \cdot ({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}^{\rm nh}) -\nabla_{\bm{x}} \cH^{\rm h} \cdot (\partial_\varrho{\bm{u}}^{\rm d}) +\partial_\varrho w^{\rm d} &=0, \end{aligned} \end{equation} where we denote as usual $\cH^{\rm h}(\cdot,\varrho)=\int_\varrho^{\rho_1} h^{\rm h}(\cdot,\varrho')\dd\varrho'$ (and analogously $\cH^{\rm nh}$, $\cH^{\rm d}$), and define the non-hydrostatic pressure ${P_{\rm nh}(\cdot,\varrho):=P^{\rm nh}(\cdot,\varrho)-\int_{\rho_0}^\varrho \varrho' h^{\rm nh}(\cdot,\varrho')\dd\varrho'}$ where $P^{\rm nh}$ is defined by Corollary~\ref{C.Poisson}. \subsection{Quasi-linearization}\label{S.CV-quasilinear} In this section we extract the leading order terms of the system~\eqref{eq:difference}, in the spirit of Lemma~\ref{lem.quasilin-nonhydro}. \begin{lemma}\label{L.CV-quasilinear} There exists $p\in\mathbb{N}$ such that for any $s, k \in \mathbb{N}$ such that $k=s> \frac 52+\frac d 2$ and $\bar M,M,h_\star>0$, there exists $C>0$ and $C_1>0$ such that the following holds. 
For any $0<\mu\leq \kappa \leq 1$, and for any $(\bar h, \bar {\bm{u}}) \in W^{k+p,\infty}((\rho_0,\rho_1))^{1+d} $ satisfying \[ \norm{\bar h}_{W^{k+p,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k+p-1,\infty}_\varrho }\leq \bar M \,;\] and any $(h^{\rm nh},{\bm{u}}^{\rm nh},w^{\rm nh}) \in \mathcal{C}^0([0,T^{\rm nh}];H^{s,k}(\Omega)^{d+2})$ and $P^{\rm nh}\in L^2(0,T^{\rm nh};H^{s+1,k+1}(\Omega))$ solution to~\eqref{eq:nonhydro-iso-redef} with some $T^{\rm nh}>0$ and satisfying for any $t\in [0,T^{\rm nh}]$ \begin{multline*} \Norm{h^{\rm nh}(t, \cdot)}_{H^{s-1, k-1}} + \Norm{\cH^{\rm nh}(t, \cdot)}_{H^{s,k}}+\norm{\cH^{\rm nh}(t, \cdot)\big\vert_{\varrho=\rho_0}}_{H^s_{\bm{x}}}+\Norm{{\bm{u}}^{\rm nh}(t, \cdot)}_{H^{s,k}}+\mu^{1/2}\Norm{w^{\rm nh}(t, \cdot)}_{H^{s,k}}\\ +\kappa^{1/2}\Norm{h^{\rm nh}(t, \cdot)}_{H^{s,k}}+ \mu^{1/2} \kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH^{\rm nh}(t, \cdot)}_{H^{s,k}} \le M \end{multline*} (where $\cH^{\rm nh}(t,{\bm{x}},\varrho):=\int_\varrho^{\rho_1} h^{\rm nh}(t,{\bm{x}},\varrho')\dd\varrho'$), the stable stratification assumption \[ \inf_{({\bm{x}},\varrho)\in \Omega } \bar h(\varrho)+h^{\rm nh}(t, {\bm{x}},\varrho) \geq h_\star,\] and the initial bound \[ M_0:= \Norm{\cH^{\rm nh}\big\vert_{t=0}}_{H^{s+p,k+p}}+\Norm{{\bm{u}}^{\rm nh}\big\vert_{t=0}}_{H^{s+p,k+p}}+\norm{(\cH^{\rm nh}\big\vert_{\varrho=\rho_0})\big\vert_{t=0}}_{H^{s+p}_{\bm{x}}}+\kappa^{1/2}\Norm{h^{\rm nh}\big\vert_{t=0}}_{H^{s+p,k+p}} \le M, \] we have the following. 
Denote by $(h^{\rm h},{\bm{u}}^{\rm h},w^{\rm h}) \in \mathcal{C}^0([0,T^{\rm h}];H^{s+1,k+1}(\Omega)^{2+d})$ the corresponding strong solution to the hydrostatic equations~\eqref{eq:hydro-iso-nu} (see Lemma~\ref{L.CV-consistency}), which satisfies \[ \Norm{h^{\rm h}(t,\cdot)}_{H^{s+1,k+1}} +\Norm{{\bm{u}}^{\rm h}(t,\cdot)}_{H^{s+1,k+1}}+\Norm{\cH^{\rm h}(t,\cdot)}_{H^{s+1,k+1}} +\Norm{w^{\rm h}(t,\cdot)}_{H^{s+1,k+1}} \leq C_1M_0 \,.\] For any multi-index ${\bm{\alpha}} \in \mathbb{N}^d$ and $j\in\mathbb{N}$ such that $0 \le |{\bm{\alpha}}| +j\le s$, set \[ \cH^{({\bm{\alpha}},j)}:={\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j\cH^{\rm nh}-{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j\cH^{\rm h}; \quad {\bm{u}}^{({\bm{\alpha}},j)}:={\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j{\bm{u}}^{\rm nh}-{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j{\bm{u}}^{\rm h}; \quad w^{({\bm{\alpha}},j)}:={\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j w^{\rm nh}-{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j w^{\rm h} \] and ${P^{({\bm{\alpha}}, j)}_{\rm nh}(\cdot,\varrho)={\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j \big(P^{\rm nh}(\cdot,\varrho)-\int_{\rho_0}^\varrho \varrho' h^{\rm nh}(\cdot,\varrho')\dd\varrho'\big)}$.
Then restricting to $t\in[0,\min(T^{\rm h},T^{\rm nh})]$ and such that \begin{align*} \mathcal{F}_{s,k}:=\Norm{h^{\rm d}}_{H^{s-1, k-1}} + \Norm{\cH^{\rm d}}_{H^{s,k}}+\norm{\cH^{\rm d}\big\vert_{\varrho=\rho_0}}_{H^s_{\bm{x}}}+\Norm{{\bm{u}}^{\rm d}}_{H^{s,k}} +\mu^{1/2}\Norm{w^{\rm d}}_{H^{s,k}} &\\ +\kappa^{1/2}\Norm{h^{\rm d}}_{H^{s,k}}+\mu^{1/2}\kappa^{1/2}\Norm{\nabla_{\bm{x}}\cH^{\rm d}}_{H^{s,k}} & \leq \kappa^{1/2}M \end{align*} we have \begin{subequations} \begin{equation}\label{eq.quasilin-diff} \begin{aligned} \partial_t \cH^{({\bm{\alpha}},j)}+ (\bar {\bm{u}} + {\bm{u}}^{\rm nh}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} + w^{({\bm{\alpha}},j)} - \kappa \Delta_{\bm{x}} \cH^{({\bm{\alpha}})}&= \widetilde R_{{\bm{\alpha}},j}, \\ \partial_t \cH^{({\bm{\alpha}},j)}+ (\bar {\bm{u}} + {\bm{u}}^{\rm nh}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} +\Big\langle \int_\varrho^{\rho_1} (\bar {\bm{u}}' + {\partial}_\varrho{\bm{u}}^{\rm nh}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} \, \dd\varrho'\phantom{\qquad -\kappa\Delta_{\bm{x}} \cH^{({\bm{\alpha}},j)}} &\\ +\int_\varrho^{\rho_1} (\bar h+h^{\rm nh})\nabla_{\bm{x}} \cdot{\bm{u}}^{({\bm{\alpha}},j)} \dd\varrho'\Big\rangle_{j=0}-\kappa\Delta_{\bm{x}} \cH^{({\bm{\alpha}},j)}&=R_{{\bm{\alpha}},j},\\ \partial_t{\bm{u}}^{({\bm{\alpha}},j)}+\big(({\bar {\bm{u}}}+{\bm{u}}^{\rm nh}-\kappa\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h+ h^{\rm nh}})\cdot\nabla_{\bm{x}}\big) {\bm{u}}^{({\bm{\alpha}},j)} +\Big\langle \frac{\rho_0}{ \varrho }\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)}\big\vert_{\varrho=\rho_0} +\frac 1 \varrho\int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} \dd\varrho' \Big\rangle_{j=0}&\\ +\frac 1 \varrho\nabla_{\bm{x}} P_{\rm nh}^{({\bm{\alpha}},j)} + \frac{\nabla_{\bm{x}} \cH^{\rm nh}}{\varrho(\bar h + h^{\rm nh})} {\partial}_\varrho P_{\rm nh}^{({\bm{\alpha}},j)} &={\bm{R}}^{\rm nh}_{{\bm{\alpha}},j},\\ \mu^{1/2} \left( \partial_t w^{({\bm{\alpha}},j)} + (\bar {\bm{u}} + {\bm{u}}^{\rm nh} - 
\kappa \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} ) \cdot \nabla_{\bm{x}} w^{({\bm{\alpha}},j)}\right) - \frac1{\mu^{1/2}}\frac{{\partial}_\varrho P_{\rm nh}^{({\bm{\alpha}},j)}}{ \varrho( \bar h + h^{\rm nh})} &= R^{\rm nh}_{{\bm{\alpha}}, j}, \\ - {\partial}_\varrho w^{({\bm{\alpha}},j)} + (\bar h + h^{\rm nh}) \nabla_{\bm{x}} \cdot {\bm{u}}^{({\bm{\alpha}},j)} + (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}^{\rm nh}) \cdot \nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)}&\\ + (\nabla_{\bm{x}} \cdot {\bm{u}}^{\rm nh})h^{({\bm{\alpha}},j)}+ (\nabla_{\bm{x}} \cH^{\rm nh} )\cdot ( {\partial}_\varrho {\bm{u}}^{({\bm{\alpha}},j)}) &=R^{\rm div}_{{\bm{\alpha}},j}, \end{aligned} \end{equation} where $(R_{{\bm{\alpha}},j}(t,\cdot),{\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}(t,\cdot),R_{{\bm{\alpha}}, j}^{\rm nh}(t,\cdot),R_{{\bm{\alpha}}, j}^{\rm div})\in L^2(\Omega)^{d+3}$, $R_{{\bm{\alpha}},0}(t,\cdot)\in\mathcal{C}((\rho_0,\rho_1);L^2(\mathbb{R}^d))$ and \begin{align} \label{eq.est-quasilin-diff0} & \Norm{ R_{{\bm{\alpha}},j}}_{L^2(\Omega) } +\norm{R_{{\bm{\alpha}},0}\big\vert_{\varrho=\rho_0}}_{L^2_{\bm{x}} } + \Norm{ \widetilde R_{{\bm{\alpha}},j}}_{L^2(\Omega) } +\Norm{R^{\rm div}_{{\bm{\alpha}}, j}}_{L^2(\Omega)} \leq C\,\mathcal{F}_{s,k}, \\ \label{eq.est-quasilin-diff1} & \Norm{ {\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}}_{L^2(\Omega)}+\Norm{ R^{\rm nh}_{{\bm{\alpha}}, j}}_{L^2(\Omega)} \leq C \, \big( \mathcal{F}_{s,k} + \kappa \Norm{\nabla_{\bm{x}} h^{\rm d}}_{H^{s,k}} +\mu^{1/2}\kappa \Norm{\Delta_{\bm{x}} \cH^{\rm d}}_{H^{s,k}} \big)+C\, \mu^{1/2}M , \end{align} \end{subequations} and \begin{subequations} \begin{equation}\label{eq.quasilin-h-diff} \partial_t h^{({\bm{\alpha}},j)}+(\bar{\bm{u}}+{\bm{u}}^{\rm nh})\cdot \nabla_{\bm{x}} h^{({\bm{\alpha}},j)}-\kappa\Delta_{\bm{x}} h^{({\bm{\alpha}},j)} =r_{{\bm{\alpha}},j}+\nabla_{\bm{x}} \cdot {\bm{r}}_{{\bm{\alpha}},j}, \end{equation} where $(r_{{\bm{\alpha}},j}(t,\cdot),{\bm{r}}_{{\bm{\alpha}},j}(t,\cdot))\in 
L^2(\Omega)^{1+d}$ and \begin{equation}\label{eq.est-quasilin-h-diff} \kappa^{1/2} \Norm{r_{{\bm{\alpha}},j}}_{L^2(\Omega) } + \Norm{{\bm{r}}_{{\bm{\alpha}},j}}_{L^2(\Omega) } \leq C\,\mathcal{F}_{s,k}. \end{equation} \end{subequations} \end{lemma} \begin{proof} Explicit expressions for the remainder terms follow from~\eqref{eq:difference}. Specifically, the following equation is obtained by combining the second and last equation (recall~\eqref{eq.id-eta}) \[\partial_t \cH^{\rm d}+ (\bar {\bm{u}} + {\bm{u}}^{\rm nh}) \cdot\nabla_{\bm{x}} \cH^{\rm d} + {\bm{u}}^{\rm d}\cdot\nabla_{\bm{x}}\cH^{\rm h} - w^{\rm d} = \kappa \Delta_{\bm{x}} \cH^{\rm d} \] and hence \[ \widetilde R_{{\bm{\alpha}},j}:=-[{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j, \bar {\bm{u}}+{\bm{u}}^{\rm nh}]\cdot\nabla_{\bm{x}}\cH^{\rm d}- {\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j({\bm{u}}^{\rm d}\cdot\nabla_{\bm{x}}\cH^{\rm h}),\] and it follows from product (Lemma~\ref{L.product-Hsk}) and commutator (Lemma~\ref{L.commutator-Hsk}) estimates \[ \Norm{\widetilde R_{{\bm{\alpha}},j}}_{L^2(\Omega)} \lesssim \big(\norm{\bar{\bm{u}}'}_{W^{k-1,\infty}_\varrho}+\Norm{{\bm{u}}^{\rm nh}}_{H^{s,k}} +\Norm{\cH^{\rm h}}_{H^{s+1,k}} \big)\,\big(\Norm{\cH^{\rm d}}_{H^{s,k-1}}+\Norm{ {\bm{u}}^{\rm d}}_{H^{s,k}}\big) .\] Then, from the second equation we have \[ R_{{\bm{\alpha}},j} := R^{(i)}_{{\bm{\alpha}},j}+R^{(ii)}_{{\bm{\alpha}},j}\] with $R^{(i)}_{{\bm{\alpha}},j}:=- [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\bar{\bm{u}}+{\bm{u}}^{\rm nh}] \cdot \nabla_{\bm{x}} \cH^{\rm d}$ and \[R^{(ii)}_{{\bm{\alpha}},j}:=\begin{cases} -\int_\varrho^{\rho_1} [\partial_{\bm{x}}^{\bm{\alpha}} , {\partial}_\varrho {\bm{u}}^{\rm nh} ]\cdot \nabla_{\bm{x}} \cH^{\rm d} + [\partial_{\bm{x}}^{\bm{\alpha}} , h^{\rm nh}] \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} + \partial_{\bm{x}}^{\bm{\alpha}} \big( {\bm{u}}^{\rm d} \cdot \nabla_{\bm{x}} h^{\rm h} + h^{\rm d} \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h}\big) 
\, \dd \varrho' & \text{if $j=0$,} \\ {\partial}_\varrho^{j-1}\partial_{\bm{x}}^{\bm{\alpha}} \big( (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}^{\rm nh}) \cdot \nabla_{\bm{x}} \cH^{\rm d}+ (\bar h + h^{\rm nh}) \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} + {\bm{u}}^{\rm d} \cdot \nabla_{\bm{x}} h^{\rm h} + h^{\rm d} \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h}\big) & \text{if $j\geq1$.} \end{cases} \] Using Lemma~\ref{L.product-Hsk}, Lemma~\ref{L.commutator-Hsk} and the continuous embedding $L^\infty((\rho_0,\rho_1))\subset L^2((\rho_0,\rho_1)) \subset L^1((\rho_0,\rho_1))$ we find \begin{multline*}\Norm{ R_{{\bm{\alpha}},j}}_{L^2(\Omega)} \lesssim \big( \norm{\bar{\bm{u}}'}_{W^{k-1,\infty}_\varrho}+\Norm{{\bm{u}}^{\rm nh}}_{H^{s,k}}+\Norm{h^{\rm nh}}_{H^{s-1,k-1}}+\Norm{\cH^{\rm nh}}_{H^{s,k-1}} +\Norm{h^{\rm h}}_{H^{s+1,k-1}} +\Norm{{\bm{u}}^{\rm h}}_{H^{s+1,k-1}}\big)\\ \times\big(\Norm{\cH^{\rm d}}_{H^{s,k-1}}+\Norm{ {\bm{u}}^{\rm d}}_{H^{s,k-1}}+\Norm{ h^{\rm d}}_{H^{s-1,k-1}}\big) \end{multline*} where for $j=0$ we used the identities (and Lemma~\ref{L.embedding} and Lemma~\ref{L.commutator-Hs},(\ref{L.commutator-Hs-3}) and (\ref{L.commutator-Hs-3})) \begin{multline*} \int_\varrho^{\rho_1} [\partial_{\bm{x}}^{\bm{\alpha}} , {\partial}_\varrho {\bm{u}}^{\rm nh} ]\cdot \nabla_{\bm{x}} \cH^{\rm d} + [\partial_{\bm{x}}^{\bm{\alpha}} , h^{\rm nh}] \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} \dd\varrho' = \int_\varrho^{\rho_1} [\partial_{\bm{x}}^{\bm{\alpha}} ; {\partial}_\varrho {\bm{u}}^{\rm nh} ,\nabla_{\bm{x}} \cH^{\rm d}]+ [\partial_{\bm{x}}^{\bm{\alpha}} ; h^{\rm nh}, \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} ] \dd\varrho'\\ +\int_\varrho^{\rho_1} \partial_{\bm{x}}^{\bm{\alpha}} {\bm{u}}^{\rm nh}\cdot \nabla_{\bm{x}} h^{\rm d} +(\partial_{\bm{x}}^{\bm{\alpha}} \cH^{\rm nh}) (\nabla_{\bm{x}} \cdot {\partial}_\varrho{\bm{u}}^{\rm d}) \dd\varrho'+ \partial_{\bm{x}}^{\bm{\alpha}} {\bm{u}}^{\rm nh}\cdot\nabla_{\bm{x}} \cH^{\rm d} +(\partial_{\bm{x}}^{\bm{\alpha}} \cH^{\rm nh}) 
(\nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d}) \end{multline*} and \[ \int_\varrho^{\rho_1} \partial_{\bm{x}}^{\bm{\alpha}} ( h^{\rm d} \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h}) \dd\varrho' = \int_\varrho^{\rho_1} [\partial_{\bm{x}}^{\bm{\alpha}} ,\nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h}] h^{\rm d} +(\partial_{\bm{x}}^{\bm{\alpha}} \cH^{\rm d})(\nabla_{\bm{x}} \cdot {\partial}_\varrho{\bm{u}}^{\rm h})\dd\varrho' + (\partial_{\bm{x}}^{\bm{\alpha}} \cH^{\rm d})(\nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h}).\] This yields the desired estimate for $\Norm{ R_{{\bm{\alpha}},j}}_{L^2(\Omega)}$ and the corresponding estimate for $\norm{R_{{\bm{\alpha}},0}\vert_{\varrho=\rho_0}}_{L^2_{\bm{x}} }$ relies on the additional estimate (stemming from Lemma \ref{L.commutator-Hs}(\ref{L.commutator-Hs-2}) and Lemma \ref{L.embedding}) \[ \norm{ (R^{(i)}_{{\bm{\alpha}},0}+ \partial_{\bm{x}}^{\bm{\alpha}} {\bm{u}}^{\rm nh}\cdot\nabla_{\bm{x}} \cH^{\rm d} )\vert_{\varrho=\rho_0}}_{L^2_{\bm{x}} }= \norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}};{\bm{u}}^{\rm nh}\vert_{\varrho=\rho_0}, \nabla_{\bm{x}} \cH^{\rm d}\vert_{\varrho=\rho_0}]}_{L^2_{\bm{x}} }\lesssim \Norm{{\bm{u}}^{\rm nh}}_{H^{s,1}}\norm{\cH^{\rm d}\vert_{\varrho=\rho_0}}_{H^s_{\bm{x}}}.\] Then, we have \begin{align*} R^{\rm div}_{{\bm{\alpha}}, j}&:= [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j, \bar h ]\nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} + [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j, \bar {\bm{u}}' ]\cdot \nabla_{\bm{x}} \cH^{\rm d} \\ &\quad + [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j, h^{\rm nh} ]\nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} +[{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j, \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h} ] h^{\rm d} + \nabla_{\bm{x}} \cdot ({\bm{u}}^{\rm h}-{\bm{u}}^{\rm nh}){\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j h^{\rm d}\\ &\quad + [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j, {\partial}_\varrho {\bm{u}}^{\rm nh} ]\cdot \nabla_{\bm{x}} \cH^{\rm 
d}+[{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j, \nabla_{\bm{x}} \cH^{\rm h} ] \cdot {\partial}_\varrho {\bm{u}}^{\rm d} +\nabla_{\bm{x}} (\cH^{\rm h}-\cH^{\rm nh})\cdot {\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j {\partial}_\varrho {\bm{u}}^{\rm d} . \end{align*} Decomposing $ h^{\rm nh}= h^{\rm h}+ h^{\rm d}$, $ {\partial}_\varrho {\bm{u}}^{\rm nh}= {\partial}_\varrho {\bm{u}}^{\rm h}+ {\partial}_\varrho {\bm{u}}^{\rm d}$, some manipulations of the terms to exhibit symmetric commutators and the use of Lemma~\ref{L.commutator-Hsk} and~\ref{L.commutator-Hsk-sym} lead to \begin{multline*} \Norm{ R^{\rm div}_{{\bm{\alpha}}, j} }_{L^2(\Omega)} \lesssim \big(\norm{\bar h}_{W^{k,\infty}_\varrho}+\norm{\bar {\bm{u}}'}_{W^{k,\infty}_\varrho}+\Norm{ h^{\rm d}}_{H^{s-1,k-1}}+\Norm{ {\partial}_\varrho {\bm{u}}^{\rm d}}_{H^{s-1,k-1}}\\ +\Norm{ h^{\rm h}}_{H^{s,k}} +\Norm{ \nabla_{\bm{x}} \cH^{\rm h}}_{H^{s,k}}+\Norm{ {\bm{u}}^{\rm h}}_{H^{s+1,k+1}} \big) \times\big( \Norm{{\bm{u}}^{\rm d}}_{H^{s,k}}+\Norm{\cH^{\rm d}}_{H^{s,k-1}}+\Norm{h^{\rm d}}_{H^{s-1,k-1}}\big) \end{multline*} which concludes the estimate~\eqref{eq.est-quasilin-diff0}. We focus now on $\Norm{ {\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}}_{L^2(\Omega)}$ and $\Norm{ R^{\rm nh}_{{\bm{\alpha}}, j}}_{L^2(\Omega)} $. 
We have \begin{align*} {\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}&\textstyle:=[{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,(\bar{\bm{u}} + {\bm{u}}^{\rm nh} - \kappa \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}})\cdot\nabla_{\bm{x}} ] {\bm{u}}^{\rm d} + \big\langle{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j \big(\frac{\rho_0}{\varrho} \nabla_{\bm{x}} \cH^{\rm d}|_{\varrho=\rho_0} + \frac 1 \varrho \int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH^{\rm d} \, \dd\varrho' \big)\big\rangle_{j\geq 1} \\ &\quad + {\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j\big(\big(( {\bm{u}}^{\rm d} - \kappa ( \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} - \tfrac{\nabla_{\bm{x}} h^{\rm h}}{\bar h+h^{\rm h}})) \cdot \nabla_{\bm{x}}\big) {\bm{u}}^{\rm h}\big) + [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\tfrac{1}{\varrho}]\nabla_{\bm{x}} {P_{\rm nh}} + [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\tfrac{\nabla_{\bm{x}} \cH^{\rm nh}}{\varrho (\bar h+h^{\rm nh})}] {\partial}_\varrho {P_{\rm nh}} ,\\ R^{\rm nh}_{{\bm{\alpha}}, j}&:=\mu^{1/2} [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\big(\bar {\bm{u}} + {\bm{u}}^{\rm nh} - \kappa\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{ \bar h+ h^{\rm nh}}\big)\cdot\nabla_{\bm{x}}] w^{\rm d} + \mu^{1/2} {\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j\big( ( {\bm{u}}^{\rm d} - \kappa ( \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} - \tfrac{\nabla_{\bm{x}} h^{\rm h}}{\bar h+h^{\rm h}})) \cdot \nabla_{\bm{x}} w^{\rm h}\big)\\ &\quad -\tfrac1{\mu^{1/2}} [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\tfrac{1}{\varrho(\bar h+ h^{\rm nh})}] {\partial}_\varrho P_{\rm nh} -\mu^{1/2}{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j R^{\rm h}, \end{align*} where $R^{\rm h}$ is the consistency remainder introduced in Lemma~\ref{L.CV-consistency},~\eqref{eq:hydro-consistency} and estimated in~\eqref{est:hydro-consistency}, namely 
\[\Norm{{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j R^{\rm h}}_{L^2(\Omega)} \lesssim M_0\leq M.\] Let us estimate each contribution. In the following, we shall use repeatedly that $\mathcal{F}_{s,k}\leq \kappa^{1/2}M$ and hence $\Norm{ h^{\rm d}}_{H^{s,k}}\leq M$. As a consequence, by Lemma~\ref{L.composition-Hsk-ex} and the triangle inequality, \[\Norm{\tfrac{h^{\rm nh} }{\bar h+ h^{\rm nh}}}_{H^{s,k}} \leq C(h_\star,\norm{\bar h}_{W^{k,\infty}_\varrho},\Norm{h^{\rm nh}}_{H^{s-1,k-1}})\Norm{h^{\rm nh} }_{H^{s,k}} \leq C(h_\star,\bar M,M)M. \] By Lemma~\ref{L.commutator-Hsk}, we have \begin{multline*}\Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j,(\bar{\bm{u}}+{\bm{u}}^{\rm nh} )\cdot \nabla_{\bm{x}}] {\bm{u}}^{\rm d}}_{L^2(\Omega)}+\mu^{1/2}\Norm{[{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j,(\bar{\bm{u}}+{\bm{u}}^{\rm nh} )\cdot \nabla_{\bm{x}}] w^{\rm d} }_{L^2(\Omega)}\\ \lesssim \big(\norm{\bar{\bm{u}}'}_{W^{k-1,\infty}_\varrho}+\Norm{{\bm{u}}^{\rm nh}}_{H^{s,k}}\big)\big(\Norm{\nabla_{\bm{x}} {\bm{u}}^{\rm d}}_{H^{s-1,k-1}}+\mu^{1/2}\Norm{\nabla_{\bm{x}} w^{\rm d}}_{H^{s-1,k-1}}\big) \leq C(\bar M, M) \mathcal{F}_{s,k}. \end{multline*} By Lemma~\ref{L.commutator-Hsk-sym}, Lemma~\ref{L.product-Hsk} and Lemma~\ref{L.composition-Hsk-ex}, \begin{align*}\Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\big(\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h+ h^{\rm nh}} \cdot \nabla_{\bm{x}}\big)] {\bm{u}}^{\rm d}}_{L^2(\Omega)} &\lesssim \Norm{\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h+ h^{\rm nh}}}_{H^{s,k}}\Norm{\nabla_{\bm{x}} {\bm{u}}^{\rm d}}_{H^{s-1,k-1}}\\ &\lesssim \Norm{\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h}}_{H^{s,k}}\big(1+\Norm{\tfrac{h^{\rm nh} }{\bar h+ h^{\rm nh}}}_{H^{s,k}}\big) \mathcal{F}_{s,k}\\ &\leq C(h_\star,\bar M,M) \big(M+\Norm{\nabla_{\bm{x}} h^{\rm d}}_{H^{s,k}} \big) \mathcal{F}_{s,k}.
\end{align*} In the same way, we have \[ \mu^{1/2}\Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\big(\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h+ h^{\rm nh}} \cdot \nabla_{\bm{x}}\big)] w^{\rm d}}_{L^2(\Omega)}\leq C(h_\star,\bar M,M) \big(M+\Norm{\nabla_{\bm{x}} h^{\rm d}}_{H^{s,k}} \big) \mathcal{F}_{s,k}.\] When $j\geq 1$, \begin{multline*}\Norm{ {\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j \big(\frac{\rho_0}{\varrho} \nabla_{\bm{x}} \cH^{\rm d}|_{\varrho=\rho_0} \big)}_{L^2(\Omega)} + \Norm{ {\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j \big(\frac 1 \varrho \int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH^{\rm d} \, \dd\varrho' \big)}_{L^2(\Omega)}\\ \lesssim \norm{ \nabla_{\bm{x}} \cH^{\rm d}|_{\varrho=\rho_0}}_{H^{s-1}_{\bm{x}}}+ \Norm{ \nabla_{\bm{x}} \cH^{\rm d}}_{H^{s-1,k-1}}\leq \mathcal{F}_{s,k}. \end{multline*} By Lemma~\ref{L.product-Hsk}, we have \begin{multline*}\Norm{ {\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j \big(( {\bm{u}}^{\rm d} \cdot \nabla_{\bm{x}}) {\bm{u}}^{\rm h}\big)}_{L^2(\Omega)}+\mu^{1/2}\Norm{{\partial}_{\bm{x}}^{\bm{\alpha}} {\partial}_\varrho^j \big(( {\bm{u}}^{\rm d} \cdot \nabla_{\bm{x}}) w^{\rm h}\big) }_{L^2(\Omega)}\\ \lesssim \Norm{{\bm{u}}^{\rm d}}_{H^{s,k}}\big(\Norm{\nabla_{\bm{x}} {\bm{u}}^{\rm h}}_{H^{s,k}}+\mu^{1/2}\Norm{\nabla_{\bm{x}} w^{\rm h}}_{H^{s,k}}\big) \leq C(\bar M, M) \mathcal{F}_{s,k}.
\end{multline*} By repeated use of tame estimates in Lemma~\ref{L.product-Hsk} and Lemma~\ref{L.composition-Hsk}, we find \begin{align*} \Norm{{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j\big(\big( (\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} - \tfrac{\nabla_{\bm{x}} h^{\rm h}}{\bar h+h^{\rm h}}) \cdot \nabla_{\bm{x}}\big) {\bm{u}}^{\rm h}\big)}_{L^2(\Omega)} &\lesssim \Norm{\tfrac{\nabla_{\bm{x}} h^{\rm d}}{\bar h +h^{\rm h}}+ \tfrac{h^{\rm d}\nabla_{\bm{x}} h^{\rm nh}}{(\bar h+h^{\rm nh})(\bar h+h^{\rm h})}}_{H^{s,k}} \Norm{\nabla_{\bm{x}} {\bm{u}}^{\rm h}}_{H^{s,k}}\\ &\leq C(h_\star,\bar M,M) M ( \Norm{\nabla_{\bm{x}} h^{\rm d}}_{H^{s,k}} + M\Norm{ h^{\rm d}}_{H^{s,k}}), \end{align*} and similarly \[\mu^{1/2} \Norm{{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j\big(\big( (\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} - \tfrac{\nabla_{\bm{x}} h^{\rm h}}{\bar h+h^{\rm h}}) \cdot \nabla_{\bm{x}}\big) w^{\rm h}\big)}_{L^2(\Omega)}\leq C(h_\star,\bar M,M) M ( \Norm{\nabla_{\bm{x}} h^{\rm d}}_{H^{s,k}} + M\Norm{ h^{\rm d}}_{H^{s,k}}).\] It remains to estimate the contributions from the pressure. By direct inspection, and since $|{\bm{\alpha}}|+j-1\leq s-1$, \[ \Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\tfrac{1}{\varrho} ]\nabla_{\bm{x}} P_{\rm nh} }_{L^2(\Omega)}\lesssim \Norm{\nabla_{\bm{x}} P_{\rm nh}}_{H^{s-1,k-1}}.\] By Lemma~\ref{L.commutator-Hsk} and since $s=k>\frac52+\frac d2$, using the above and Lemma~\ref{L.embedding}, \begin{align*} \Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\tfrac{\nabla_{\bm{x}} \cH^{\rm nh}}{\varrho(\bar h + h^{\rm nh})} ]{\partial}_\varrho P_{\rm nh}}_{L^2(\Omega)} &\lesssim \Norm{\tfrac{\nabla_{\bm{x}} \cH^{\rm nh}}{\varrho(\bar h + h^{\rm nh})}}_{H^{s,k}}\Norm{{\partial}_\varrho P_{\rm nh}}_{H^{s-1,k-1}}\\ &\leq C(h_\star,\bar M,M)\,\big( M+ \Norm{\nabla_{\bm{x}} \cH^{\rm d}}_{H^{s,k}} \big) \Norm{{\partial}_\varrho P_{\rm nh}}_{H^{s-1,k-1}}.
\end{align*} Similarly, \begin{align*} \Norm{ [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j,\tfrac{1}{\varrho(\bar h + h^{\rm nh})}]{\partial}_\varrho P_{\rm nh} }_{L^2(\Omega)} &\lesssim \big(\norm{\tfrac1{\varrho\bar h}}_{W^{k,\infty}_\varrho}+\Norm{\tfrac{h^{\rm nh}}{\varrho(\bar h + h^{\rm nh})}}_{H^{s,k}}\big)\Norm{{\partial}_\varrho P_{\rm nh}}_{H^{s-1,k-1}}\\ &\leq C(h_\star,\bar M,M)\, \Norm{{\partial}_\varrho P_{\rm nh} }_{H^{s-1,k-1}}. \end{align*} Altogether, and using $\mathcal{F}_{s,k} \leq \kappa^{1/2}M$ and $\mu\leq \kappa$, we find \begin{equation}\label{est-Rj} \Norm{ {\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}}_{L^2(\Omega)}+\Norm{ R^{\rm nh}_{{\bm{\alpha}}, j}}_{L^2(\Omega)} \leq C(h_\star,\bar M,M) \, \big( \mathcal{F}_{s,k} + \kappa \Norm{\nabla_{\bm{x}} h^{\rm d}}_{H^{s,k}} + \mu^{-1/2} \Norm{\nabla^\mu_{{\bm{x}},\varrho} P_{\rm nh} }_{H^{s-1,k-1}}\big). \end{equation} Now, we use Corollary~\ref{C.Poisson}, specifically~\eqref{ineq:est-Ptame-nonhydro}: \begin{align*} \Norm{\nabla^\mu_{{\bm{x}},\varrho} P_{\rm nh}}_{H^{s-1,k-1}} & \leq C(h_\star,\bar M,M)\,\mu\, \Big( \Norm{ (\Lambda^\mu)^{-1} \nabla_{\bm{x}} \cH^{\rm nh}}_{H^{s,k}} + \norm{(\Lambda^\mu)^{-1} \cH^{\rm nh}\big\vert_{\varrho=\rho_0}}_{H^{s+1}_{\bm{x}}} \\ &\qquad +\Norm{({\bm{u}}^{\rm nh},{\bm{u}}_\star^{\rm nh})}_{H^{s,k}} +\Norm{( w^{\rm nh},w_\star^{\rm nh})}_{H^{s,k-1}} + \Norm{{\bm{u}}_\star^{\rm nh}}_{H^{s,k}} \Norm{ w^{\rm nh}}_{H^{s,k-1}}\Big), \end{align*} where we recall the notation $\Lambda^\mu:=1+\sqrt \mu |D|$, ${\bm{u}}_\star^{\rm nh}:=-\kappa\frac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h+ h^{\rm nh}} $ and $w_\star^{\rm nh}:=\kappa\Delta_{\bm{x}} \cH^{\rm nh}-\kappa\frac{\nabla_{\bm{x}} h^{\rm nh}\cdot\nabla_{\bm{x}} \cH^{\rm nh}}{\bar h+h^{\rm nh}}$.
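Before estimating the right-hand side, let us record the elementary bounds underlying the forthcoming manipulations (stated here only as a guide; for $0<\mu\leq 1$ they follow from the pointwise inequality $(1+\sqrt{\mu}|\xi|)^{-1}\leq \min(1,\mu^{-1/2}|\xi|^{-1})$ on the Fourier side): for any $\sigma\in\mathbb{R}$ and $0\leq j\leq k$,
\[ \Norm{ (\Lambda^\mu)^{-1} F}_{H^{\sigma,j}} \leq \Norm{F}_{H^{\sigma,j}} \qquad \text{and} \qquad \Norm{ (\Lambda^\mu)^{-1} F}_{H^{\sigma,j}} \lesssim \mu^{-1/2}\Norm{F}_{H^{\sigma-1,j}}.\]
In other words, each occurrence of $(\Lambda^\mu)^{-1}$ may be traded either for nothing, or for a factor $\mu^{-1/2}$ together with the loss of one horizontal derivative.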
We then use, on the one hand, that \[ \Norm{ (\Lambda^\mu)^{-1} h^{\rm nh}}_{H^{s,k-1}} \leq \Norm{ h^{\rm h}}_{H^{s,k-1}}+\mu^{-1/2}\Norm{ h^{\rm d}}_{H^{s-1,k-1}} \lesssim M+\mu^{-1/2} \mathcal{F}_{s,k},\] and, similarly, \begin{align*}\Norm{ (\Lambda^\mu)^{-1} \nabla_{\bm{x}} \cH^{\rm nh}}_{H^{s,k}} & \leq \Norm{ \nabla_{\bm{x}} \cH^{\rm h}}_{H^{s,k}} +\mu^{-1/2}\Norm{ \nabla_{\bm{x}} \cH^{\rm d}}_{H^{s-1,k}}\lesssim M+\mu^{-1/2} \mathcal{F}_{s,k},\\ \norm{(\Lambda^\mu)^{-1} \cH^{\rm nh}\big\vert_{\varrho=\rho_0}}_{H^{s+1}_{\bm{x}}}&\leq \norm{\cH^{\rm h}\big\vert_{\varrho=\rho_0}}_{H^{s+1}_{\bm{x}}} +\mu^{-1/2} \norm{\cH^{\rm d}\big\vert_{\varrho=\rho_0}}_{H^{s}_{\bm{x}}} \lesssim M+\mu^{-1/2} \mathcal{F}_{s,k}. \end{align*} On the other hand, \[\Norm{ w^{\rm nh}}_{H^{s,k-1}} \leq \Norm{ w^{\rm h}}_{H^{s,k-1}}+\Norm{ w^{\rm d}}_{H^{s,k-1}} \lesssim C(\bar M,M) M+\mu^{-1/2} \mathcal{F}_{s,k}\] where, for the first contribution, we applied the product estimates to the expression in~\eqref{def-wh}. Then, we have \begin{align*}\Norm{{\bm{u}}_\star^{\rm nh}}_{H^{s,k}} &\leq \kappa \Norm{ \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h+h^{\rm nh}} }_{H^{s,k}} \leq \kappa \big( \Norm{\nabla_{\bm{x}} h^{\rm nh} }_{H^{s,k}} + \Norm{h^{\rm nh} }_{H^{s,k}}^2\big) \\ &\leq C(h_\star,\bar M,M)\,\big( M+\kappa \Norm{\nabla_{\bm{x}} h^{\rm d} }_{H^{s,k}}\big),\\ \Norm{ w_\star^{\rm nh}}_{H^{s,k-1}} &\leq \kappa\Norm{\Delta_{\bm{x}} \cH^{\rm nh}}_{H^{s,k-1}}+C(h_\star,\bar M,M)\kappa\Norm{\nabla_{\bm{x}} h^{\rm nh}}_{H^{s,k-1}} \Norm{\nabla_{\bm{x}} \cH^{\rm nh}}_{H^{s,k-1}}\\ &\leq C(h_\star,\bar M,M)\,\big(M+ \kappa\Norm{\Delta_{\bm{x}} \cH^{\rm d}}_{H^{s,k-1}} + \kappa^{1/2}M\Norm{\nabla_{\bm{x}} h^{\rm d}}_{H^{s,k-1}} \big).
\end{align*} Altogether, this yields \begin{multline*} \mu^{-1/2}\Norm{\nabla^\mu_{{\bm{x}},\varrho} P_{\rm nh}}_{H^{s-1,k-1}} \leq C_0\, \Big( \mu^{1/2}M+ \mathcal{F}_{s,k} + \mu^{1/2}\kappa^{1/2} \Norm{\nabla_{\bm{x}} h^{\rm d} }_{H^{s,k}}+\mu^{1/2}\kappa \Norm{\Delta_{\bm{x}} \cH^{\rm d}}_{H^{s,k}} \\ +\big( M+\kappa \Norm{\nabla_{\bm{x}} h^{\rm d} }_{H^{s,k}} \big)\,\big( \mu^{1/2}M+ \mathcal{F}_{s,k} \big)\Big). \end{multline*} Plugging this estimate into~\eqref{est-Rj}, using $\mathcal{F}_{s,k}\leq \kappa^{1/2} M$ and $\mu\leq \kappa$, we obtain~\eqref{eq.est-quasilin-diff1}. \medskip Finally, we set \[ {\bm{r}}_{{\bm{\alpha}},j}:= - [{\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j, \bar {\bm{u}}+{\bm{u}}^{\rm nh} ] h^{\rm d} - {\partial}_{\bm{x}}^{{\bm{\alpha}}} {\partial}_\varrho^{j} \big( (\bar h+ h^{\rm h}) {\bm{u}}^{\rm d}\big), \qquad r_{{\bm{\alpha}},j}:= -({\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j h^{\rm d} )\nabla_{\bm{x}} \cdot {\bm{u}}^{\rm nh} . \] By Lemma~\ref{L.product-Hsk} and Lemma~\ref{L.commutator-Hsk}, and since $s\geq s_0+\frac32$ and $2\leq k= s$, we have \[\Norm{{\bm{r}}_{{\bm{\alpha}},j}}_{L^2(\Omega)} \lesssim \big(\norm{\bar {\bm{u}}'}_{W^{k-1,\infty}_\varrho}+\Norm{{\bm{u}}^{\rm nh}}_{H^{s,k}}\big)\Norm{h^{\rm d}}_{H^{s-1,k-1}} +\big(\norm{\bar h}_{W^{k,\infty}_\varrho}+\Norm{h^{\rm h}}_{H^{s,k}}\big)\Norm{{\bm{u}}^{\rm d}}_{H^{s,k}}\] and (by Lemma~\ref{L.embedding}) \[\Norm{r_{{\bm{\alpha}},j}}_{L^2(\Omega)} \lesssim \Norm{h^{\rm d}}_{H^{s,k}}\Norm{{\bm{u}}^{\rm nh}}_{H^{s,k}}.\] This immediately yields~\eqref{eq.est-quasilin-h-diff}. The proof is complete. \end{proof} \subsection{Strong convergence}\label{S.CV-control} In this section, we prove that for $\mu$ sufficiently small and starting from regular and well-prepared initial data, the solution to the non-hydrostatic equations exists at least on the existence time interval of the solution to the hydrostatic equations.
We also prove the strong convergence of the non-hydrostatic system to the hydrostatic one as $\mu\searrow0$. \begin{proposition}\label{P.CV-control} There exists $p\in\mathbb{N}$ such that for any $s, k \in \mathbb{N}$ such that $k=s> \frac 52+\frac d 2$ and any $\bar M,M,h_\star,h^\star>0$, there exists $C=C(s,k,\bar M,M,h_\star,h^\star)>0$ such that the following holds. For any $ 0< M_0\leq M$, $0<\kappa\leq 1$, and $\mu>0$ such that \[\mu \leq \kappa/(C M_0^2 ),\] for any $(\bar h, \bar {\bm{u}}) \in W^{k+p,\infty}((\rho_0,\rho_1))^{1+d} $ satisfying \[ \norm{\bar h}_{W^{k+p,\infty}_\varrho } + \norm{\bar {\bm{u}}'}_{W^{k+p-1,\infty}_\varrho }\leq \bar M \,;\] for any initial data $(h_0, {\bm{u}}_0, w_0)\in H^{s,k}(\Omega)^{2+d}$ satisfying the boundary condition $w_0|_{\varrho=\rho_1}=0$ and the incompressibility condition \[-(\bar h+h_0)\nabla_{\bm{x}}\cdot {\bm{u}}_0-(\nabla_{\bm{x}} \cH_0)\cdot({\bar {\bm{u}}}'+\partial_\varrho{\bm{u}}_0)+\partial_\varrho w_0=0,\] (denoting $\cH_0(\cdot,\varrho)=\int_\varrho^{\rho_1} h_0(\cdot,\varrho')\dd\varrho'$), the bounds \[ \Norm{\cH_0}_{H^{s+p,k+p}}+\Norm{{\bm{u}}_0}_{H^{s+p,k+p}}+\norm{\cH_0\big\vert_{\varrho=\rho_0}}_{H^{s+p}_{\bm{x}}}+\kappa^{1/2}\Norm{h_0}_{H^{s+p,k+p}} =M_0 \le M \] and the stable stratification assumption \[ \inf_{({\bm{x}},\varrho)\in \Omega } h_\star\leq \bar h(\varrho)+h_0({\bm{x}},\varrho) \leq h^\star,\] the following holds. Denoting \[ (T^{\rm h})^{-1}= C^{\rm h}\, \big(1+ \kappa^{-1} \big(\norm{\bar {\bm{u}}'}_{L^2_\varrho}^2+M_0^2\big) \big), \] as in Lemma~\ref{L.CV-consistency} there exists a unique strong solution $(h^{\rm nh},{\bm{u}}^{\rm nh},w^{\rm nh})\in \mathcal{C}^0([0,T^{\rm h}];H^{s,k}(\Omega)^{1+d})$ to the non-hydrostatic equations~\eqref{eq:nonhydro-iso-intro} with initial data $(h^{\rm nh},{\bm{u}}^{\rm nh}, w^{\rm nh})\big\vert_{t=0}=(h_0,{\bm{u}}_0, w_0)$. 
Moreover, one has $h^{\rm nh}\in L^2(0,T^{\rm h};H^{s+1,k}(\Omega))$, $\cH^{\rm nh}\in L^2(0,T^{\rm h};H^{s+2,k}(\Omega))$ and, for any $t\in[0,T^{\rm h}]$, the lower and the upper bounds hold \[ \inf_{({\bm{x}},\varrho)\in \Omega } \bar h(\varrho)+h^{\rm nh}(t,{\bm{x}},\varrho) \geq h_\star/3, \quad \sup_{({\bm{x}},\varrho)\in \Omega } \bar h(\varrho)+h^{\rm nh}(t,{\bm{x}},\varrho) \le 3h^\star, \] and the estimate below holds true \begin{align} \Norm{\cH^{\rm nh}(t,\cdot)}_{H^{s,k}}+\Norm{{\bm{u}}^{\rm nh}(t,\cdot)}_{H^{s,k}} +\mu^{1/2}\Norm{w^{\rm nh}(t,\cdot)}_{H^{s,k}}+\norm{\cH^{\rm nh}\big\vert_{\varrho=\rho_0}(t,\cdot)}_{H^s_{\bm{x}}} \notag\\ &\quad+\kappa^{1/2}\Norm{h^{\rm nh}(t,\cdot)}_{H^{s,k}} + \mu^{1/2}\kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH^{\rm nh} (t, \cdot)}_{H^{s,k}} \notag\\ &\quad+ \kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH^{\rm nh}}_{L^2(0,t;H^{s,k})} + \kappa^{1/2} \norm{\nabla_{\bm{x}} \cH^{\rm nh} \big\vert_{\varrho=\rho_0} }_{ L^2(0,t;H^s_{\bm{x}})} \notag\\ &\quad+\kappa \Norm{\nabla_{\bm{x}} h^{\rm nh}}_{L^2(0,t;H^{s,k})} + \mu^{1/2}\kappa \Norm{\nabla_{\bm{x}}^2 \cH^{\rm nh} }_{L^2(0,t; H^{s,k})} \leq C\, M_0, \label{eq:control-F-nh} \end{align} and $(h^{\rm nh},{\bm{u}}^{\rm nh})$ converges strongly in $L^\infty(0,T;H^{s,k}(\Omega)^{1+d})$ towards $(h^{\rm h},{\bm{u}}^{\rm h})$ the corresponding solution to the hydrostatic equations~\eqref{eq:hydro-iso-intro}, as $\mu\searrow 0$. 
\end{proposition} \begin{proof} We closely follow the proof of Proposition~\ref{P.NONHydro-large-time} and exhibit a bootstrap argument on the functional \begin{align*}\mathcal{F}(t):=& \Norm{\cH^{\rm d}(t,\cdot)}_{H^{s,k}}+\Norm{{\bm{u}}^{\rm d}(t,\cdot)}_{H^{s,k}} +\mu^{1/2}\Norm{w^{\rm d}(t,\cdot)}_{H^{s,k}}+\norm{\cH^{\rm d}\big\vert_{\varrho=\rho_0}(t,\cdot)}_{H^s_{\bm{x}}} \notag\\ &\quad+\kappa^{1/2}\Norm{h^{\rm d}(t,\cdot)}_{H^{s,k}} + \mu^{1/2}\kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH^{\rm d} (t, \cdot)}_{H^{s,k}} \notag\\ &\quad+ \kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH^{\rm d}}_{L^2(0,t;H^{s,k})} + \kappa^{1/2} \norm{\nabla_{\bm{x}} \cH^{\rm d} \big\vert_{\varrho=\rho_0} }_{ L^2(0,t;H^s_{\bm{x}})} \notag\\ &\quad+\kappa \Norm{\nabla_{\bm{x}} h^{\rm d}}_{L^2(0,t;H^{s,k})} + \mu^{1/2}\kappa \Norm{\nabla_{\bm{x}}^2 \cH^{\rm d} }_{L^2(0,t; H^{s,k})} \end{align*} where we denote \[ h^{\rm d}:= h^{\rm nh}-h^{\rm h};\quad \cH^{\rm d}:=\cH^{\rm nh}-\cH^{\rm h}; \quad {\bm{u}}^{\rm d}:={\bm{u}}^{\rm nh}-{\bm{u}}^{\rm h};\quad w^{\rm d}:=w^{\rm nh}-w^{\rm h}\] with the usual notation for $\cH^{\rm nh}$ and $\cH^{\rm h}$, and where $w^{\rm h}$ is defined by~\eqref{def-wh}. Denoting by $T^\star$ the maximal existence time of the non-hydrostatic solution provided by Proposition~\ref{P.NONHydro-small-time}, we set \begin{multline}\label{eq:control-F-diff} T^{\rm nh}= \sup \Big\{0 < T < \min(T^\star,T^{\rm h}) \, : \; \forall \, t \in (0,T), \; h_\star/3 \le \bar h(\varrho) + h^{\rm nh}(t, {\bm{x}}, \varrho) \le 3h^\star \quad \\ \text{and} \quad \mathcal{F}(t) \le \mu^{1/2} M_0 \exp(C_0t), \quad \mathcal{F}(t) \le \kappa^{1/2} M_0\Big\}, \end{multline} with $C_0$ sufficiently large (to be determined later on). We will show by the standard continuity argument that $T^{\rm nh}=\min(T^\star,T^{\rm h})$, which in turn yields $T^\star> T^{\rm h}$ and shows the result.
Indeed, the converse inequality $T^{\rm nh}=T^\star\leq T^{\rm h}$ yields a contradiction by Proposition~\ref{P.NONHydro-small-time} and the desired estimates immediately follow from the control of $\mathcal{F}$, the bound \begin{equation} \label{eq:control-F-h} \Norm{h^{\rm h}(t,\cdot)}_{H^{s+1,k+1}} +\Norm{{\bm{u}}^{\rm h}(t,\cdot)}_{H^{s+1,k+1}}+\Norm{\cH^{\rm h}(t,\cdot)}_{H^{s+1,k+1}} +\Norm{w^{\rm h}(t,\cdot)}_{H^{s+1,k+1}} \leq C^{\rm h} M_0 \end{equation} provided by Lemma~\ref{L.CV-consistency}, and the triangle inequality (when $C$ is chosen sufficiently large). Let us now derive from Lemma~\ref{L.CV-quasilinear} the necessary estimates for the bootstrap argument. In the following we repeatedly use the triangle inequality to infer from~\eqref{eq:control-F-diff} and~\eqref{eq:control-F-h} the corresponding control~\eqref{eq:control-F-nh} with $C$ depending only on $C^{\rm h}$ and $T^{\rm h}$ (and $\kappa\leq 1$). We shall denote by $C$ a constant depending only on $s,k,\bar M,M,h_\star,h^\star$ and $C^{\rm h}$, $T^{\rm h}$, but not on $C_0$, and which may change from line to line. By means of~\eqref{eq.quasilin-h-diff}-\eqref{eq.est-quasilin-h-diff} and Lemma~\ref{lem:estimate-transport-diffusion}, we infer from~\eqref{eq:control-F-diff}-\eqref{eq:control-F-h} \[ \kappa^{1/2}\Norm{h^{\rm d}}_{L^\infty(0,T;H^{s,k})}+\kappa\Norm{\nabla_{\bm{x}} h^{\rm d}}_{L^2(0,T;H^{s,k})} \leq C\, \big( \norm{\mathcal{F}}_{L^1_T}+\norm{\mathcal{F}}_{L^2_T} \big). \] Next, differentiating the first equation of~\eqref{eq.quasilin-diff} with respect to the space variable and using~\eqref{eq.est-quasilin-diff0} and Lemma~\ref{lem:estimate-transport-diffusion}, we infer \[ \mu^{1/2}\kappa^{1/2}\Norm{\nabla_{\bm{x}} \cH^{\rm d}}_{L^\infty(0,T;H^{s,k})}+\mu^{1/2}\kappa\Norm{\nabla_{\bm{x}}^2 \cH^{\rm d}}_{L^2(0,T;H^{s,k})} \leq C\, \big( \norm{\mathcal{F}}_{L^1_T}+\norm{\mathcal{F}}_{L^2_T} \big).
\] Now, using~\eqref{eq.quasilin-diff}-\eqref{eq.est-quasilin-diff0}-\eqref{eq.est-quasilin-diff1} and proceeding as in the proof of Proposition~\ref{P.NONHydro-large-time} (together with the above estimates), we infer that for any $t\in(0,T)$, \[ \mathcal{F}(t) \leq C_1 \norm{\mathcal{F}}_{L^1_t}+C_2 \norm{\mathcal{F}}_{L^2_t}+ C_3\, \mu^{1/2}M_0 \, t\] with $C_i$ ($i\in\{1,2,3\}$) depending only on $s,k,\bar M,M,h_\star,h^\star$ and $C^{\rm h}$, $T^{\rm h}$. By using the inequality $ \mathcal{F}(t) \le \mu^{1/2} M_0 \exp(C_0t)$ from~\eqref{eq:control-F-diff} and the inequality $\tau\leq \exp(\tau)$ (for $\tau\geq0$), we deduce \[ \mathcal{F}(t) \leq C_1 \mu^{1/2} M_0 C_0^{-1} \exp(C_0t) +C_2 \mu^{1/2} M_0 (2C_0)^{-1/2} \exp(C_0t) + C_3\mu^{1/2}M_0 C_0^{-1} \exp(C_0t).\] It remains to choose $C_0$ sufficiently large so that $C_1 C_0^{-1} + C_2 (2C_0)^{-1/2} + C_3 C_0^{-1} <1$, and restrict to $\mu$ sufficiently small so that $\mu^{1/2} M_0 \exp(C_0 T^{\rm h}) \leq \mu^{1/2} M_0 C^{1/2}/2\leq \kappa^{1/2}/2$. The upper and lower bounds for $\bar h+h^{\rm nh}$ follow immediately from the corresponding ones for $\bar h+h^{\rm h} $ provided by Lemma~\ref{L.CV-consistency} and the triangle inequality, augmenting $C$ if necessary. Then the usual continuity argument yields, as desired, $T^{\rm nh}=\min(T^\star,T^{\rm h})$. \end{proof} \subsection{Improved convergence rate}\label{S.CV-convergence} Proposition~\ref{P.CV-control} established, for regular and well-prepared initial data, the strong convergence of the solution to the non-hydrostatic equations, \eqref{eq:nonhydro-iso-intro}, towards the corresponding solution to the hydrostatic equations, \eqref{eq:hydro-iso-intro}, as $\mu \searrow 0$. The convergence rate displayed in the proof is $\mathcal O(\mu^{1/2})$. The aim of this section is to provide an improved and \emph{optimal} convergence rate $\mathcal O (\mu)$.
The strategy is based on the interpretation of the non-hydrostatic solution as an approximate solution to the hydrostatic equations (in the sense of consistency) and the use of the uniform control obtained in Proposition~\ref{P.CV-control}. \begin{proposition}\label{P.CV-convergence} There exists $p\in\mathbb{N}$ such that for any $s, k \in \mathbb{N}$ with $k=s> \frac 52+\frac d 2$ and $\bar M,M,h_\star,h^\star>0$, there exists $C=C(s,k,\bar M,M,h_\star,h^\star)>0$ such that under the assumptions of Proposition~\ref{P.CV-control} and using the notations therein, \[ \Norm{ h^{\rm nh}-h^{\rm h}}_{L^\infty(0,T^{\rm h};H^{s-1,k-1})}+\Norm{ \cH^{\rm nh}-\cH^{\rm h}}_{L^\infty(0,T^{\rm h};H^{s,k})}+\Norm{{\bm{u}}^{\rm nh}-{\bm{u}}^{\rm h}}_{L^\infty(0,T^{\rm h};H^{s,k})}\leq C\,\mu.\] \end{proposition} \begin{corollary} Incrementing $p\in\mathbb{N}$, we find that for any $s, k \in \mathbb{N}$ such that $k=s> \frac 32+\frac d 2$, \[ \Norm{ h^{\rm nh}-h^{\rm h}}_{L^\infty(0,T^{\rm h};H^{s,k})}+\Norm{ \cH^{\rm nh}-\cH^{\rm h}}_{L^\infty(0,T^{\rm h};H^{s+1,k+1})}+\Norm{{\bm{u}}^{\rm nh}-{\bm{u}}^{\rm h}}_{L^\infty(0,T^{\rm h};H^{s+1,k+1})}\leq C\,\mu\] with $C=C(s+1,k+1,\bar M,M,h_\star,h^\star)>0$. \end{corollary} \begin{proof} Since all the arguments of the proof have already been used in slightly different contexts, we only briefly sketch the argument. For any $p'\in\mathbb{N}$, we may use Proposition~\ref{P.CV-control} with indices $s+p'$ and $k+p'$ to infer the existence of the non-hydrostatic solution $(h^{\rm nh},{\bm{u}}^{\rm nh},w^{\rm nh})\in \mathcal{C}([0,T^{\rm h}];H^{s+p',k+p'}(\Omega)^{2+d})$ and the control \[ \sup_{t\in [0,T^{\rm h}]}\Big( \Norm{\cH^{\rm nh}(t,\cdot)}_{H^{s+p',k+p'}}+\Norm{{\bm{u}}^{\rm nh}(t,\cdot)}_{H^{s+p',k+p'}} +\norm{\cH^{\rm nh}\big\vert_{\varrho=\rho_0}(t,\cdot)}_{H^{s+p'}_{\bm{x}}} \Big) \leq C\, M_0.
\] By using $h^{\rm nh}=-{\partial}_\varrho \cH^{\rm nh}$ and the divergence-free condition \[ w^{\rm nh}= ({\bar {\bm{u}}}+{\bm{u}}^{\rm nh})\cdot\nabla_{\bm{x}} \cH^{\rm nh}-\int_\varrho^{\rho_1} \nabla_{\bm{x}}\cdot((\bar h+ h^{\rm nh})({\bar {\bm{u}}}+{\bm{u}}^{\rm nh})) \dd\varrho'\] we obtain (augmenting $C$ if necessary) \[ \sup_{t\in [0,T^{\rm h}]}\Big( \Norm{h^{\rm nh}(t,\cdot)}_{H^{s+p'-1,k+p'-1}}+\Norm{w^{\rm nh}(t,\cdot)}_{H^{s+p'-1,k+p'-1}} \Big)\leq C\, M_0 \] and hence, by Corollary~\ref{C.Poisson} (specifically~\eqref{ineq:est-Ptame-nonhydro}), the Poincar\'e inequality~\eqref{eq.Poincare} and choosing $p'$ sufficiently large, that $P_{\rm nh}(\cdot,\varrho):=P^{\rm nh}(\cdot,\varrho)-\int_{\rho_0}^\varrho \varrho' h^{\rm nh}(\cdot,\varrho')\dd\varrho'$ satisfies \[ \sup_{t\in [0,T^{\rm h}]} \Norm{P_{\rm nh}(t,\cdot)}_{H^{s+1,k+1}}\leq C\,\mu \, M_0.\] From this estimate we infer (by Lemma~\ref{L.CV-consistency}) that $ h^{\rm d}:=h^{\rm nh}-h^{\rm h}$ and $ {\bm{u}}^{\rm d}:={\bm{u}}^{\rm nh}-{\bm{u}}^{\rm h}$ satisfy \begin{equation}\label{eq:difference-CV} \begin{aligned} {\partial}_t \cH^{\rm d} + (\bar {\bm{u}}+{\bm{u}}^{\rm nh}) \cdot \nabla_{\bm{x}} \cH^{\rm d} +\int_\varrho^{\rho_1} (\bar {\bm{u}}'+{\partial}_\varrho {\bm{u}}^{\rm nh}) \cdot \nabla_{\bm{x}} \cH^{\rm d} \, \dd \varrho' +\int_\varrho^{\rho_1} (\bar h + h^{\rm nh}) \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm d} \, \dd\varrho' & \\ +\int_\varrho^{\rho_1} {\bm{u}}^{\rm d} \cdot \nabla_{\bm{x}} h^{\rm h} + h^{\rm d} \nabla_{\bm{x}} \cdot {\bm{u}}^{\rm h} \, \dd \varrho' & = \kappa\Delta_{\bm{x}}\cH^{\rm d},\\ {\partial}_t {\bm{u}}^{\rm d} + \big((\bar{\bm{u}} + {\bm{u}}^{\rm nh} - \kappa \tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} ) \cdot \nabla_{\bm{x}}\big) {\bm{u}}^{\rm d} + {\frac{\rho_0}{\varrho} \nabla_{\bm{x}} \cH^{\rm d}|_{\varrho=\rho_0}} + {\frac 1 \varrho \int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH^{\rm d} \, \dd\varrho'} & \\ + \big(( {\bm{u}}^{\rm d} - \kappa (
\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h + h^{\rm nh}} - \tfrac{\nabla_{\bm{x}} h^{\rm h}}{\bar h+h^{\rm h}})) \cdot \nabla_{\bm{x}}\big) {\bm{u}}^{\rm h} &= {\bm{R}}^{\rm nh}, \end{aligned} \end{equation} with ${\bm{R}}^{\rm nh}:=- \frac{\nabla_{\bm{x}}{ {P_{\rm nh}} }}{\varrho} - \frac{\nabla_{\bm{x}} \cH^{\rm nh}}{\varrho (\bar h+h^{\rm nh})} {\partial}_\varrho{ {P_{\rm nh}} } $ satisfying (by Lemma~\ref{L.product-Hsk} and Lemma~\ref{L.composition-Hsk-ex}) the bound \[ \sup_{t\in [0,T^{\rm h}]}\Norm{{\bm{R}}^{\rm nh}(t,\cdot)}_{H^{s,k}}\leq C\,\mu\, M_0.\] From this, inspecting the proof of Lemma~\ref{L.CV-quasilinear}, we infer that as long as \[ \mathcal{F}_{s,k}:=\Norm{h^{\rm d}}_{H^{s-1, k-1}} + \Norm{\cH^{\rm d}}_{H^{s,k}}+\norm{\cH^{\rm d}\big\vert_{\varrho=\rho_0}}_{H^s_{\bm{x}}}+\Norm{{\bm{u}}^{\rm d}}_{H^{s,k}} +\kappa^{1/2}\Norm{h^{\rm d}}_{H^{s,k}} \leq \kappa^{1/2}M_0,\] one has for any ${\bm{\alpha}}\in\mathbb{N}^d$ and $j\in\mathbb{N}$ such that $|{\bm{\alpha}}|+j\leq s$ that $\cH^{({\bm{\alpha}},j)}:={\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j\cH^{\rm d}$, ${\bm{u}}^{({\bm{\alpha}},j)}:={\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^j{\bm{u}}^{\rm d}$ and $h^{({\bm{\alpha}},j)}:={\partial}_{\bm{x}}^{\bm{\alpha}}{\partial}_\varrho^jh^{\rm d}$ satisfy \[ \begin{aligned} \partial_t \cH^{({\bm{\alpha}},j)}+ (\bar {\bm{u}} + {\bm{u}}^{\rm nh}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} +\Big\langle \int_\varrho^{\rho_1} (\bar {\bm{u}}' + {\partial}_\varrho{\bm{u}}^{\rm nh}) \cdot\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} \, \dd\varrho' \phantom{-\kappa\Delta_{\bm{x}} \cH^{({\bm{\alpha}},j)}}\qquad &\\ +\int_\varrho^{\rho_1} (\bar h+h^{\rm nh})\nabla_{\bm{x}} \cdot{\bm{u}}^{({\bm{\alpha}},j)} \dd\varrho'\Big\rangle_{j=0}-\kappa\Delta_{\bm{x}} \cH^{({\bm{\alpha}},j)}&=R_{{\bm{\alpha}},j},\\ \partial_t{\bm{u}}^{({\bm{\alpha}},j)}+\big(({\bar {\bm{u}}}+{\bm{u}}^{\rm nh}-\kappa\tfrac{\nabla_{\bm{x}} h^{\rm nh}}{\bar h+ h^{\rm 
nh}})\cdot\nabla_{\bm{x}}\big) {\bm{u}}^{({\bm{\alpha}},j)} +\Big\langle \frac{\rho_0}{ \varrho }\nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)}\big\vert_{\varrho=\rho_0} \qquad &\\ +\frac 1 \varrho\int_{\rho_0}^\varrho \nabla_{\bm{x}} \cH^{({\bm{\alpha}},j)} \dd\varrho' \Big\rangle_{j=0} &={\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}, \end{aligned} \] and \[ \partial_t h^{({\bm{\alpha}},j)}+(\bar{\bm{u}}+{\bm{u}}^{\rm nh})\cdot \nabla_{\bm{x}} h^{({\bm{\alpha}},j)} =\kappa\Delta_{\bm{x}} h^{({\bm{\alpha}},j)}+r_{{\bm{\alpha}},j}+\nabla_{\bm{x}} \cdot {\bm{r}}_{{\bm{\alpha}},j}, \] with \[ \Norm{R_{{\bm{\alpha}},j}}_{L^2(\Omega) }+\Norm{ {\bm{R}}^{\rm nh}_{{\bm{\alpha}},j}}_{L^2(\Omega)} \leq C \, \big( \mathcal{F}_{s,k} + M_0\kappa \Norm{\nabla_{\bm{x}} h^{\rm d}}_{H^{s,k}} \big) +C\, \mu\, M_0 \] and \[ \kappa^{1/2} \Norm{r_{{\bm{\alpha}},j}}_{L^2(\Omega) } + \Norm{{\bm{r}}_{{\bm{\alpha}},j}}_{L^2(\Omega) } \leq C\,\mathcal{F}_{s,k}. \] We may then proceed as in the proof of Proposition~\ref{P.regularized-large-time-WP}, and bootstrap the control \[\mathcal{F}_{s,k}(t)+ \kappa^{1/2} \Norm{\nabla_{\bm{x}} \cH^{\rm d}}_{L^2(0,t;H^{s,k})} + \kappa^{1/2} \norm{\nabla_{\bm{x}} \cH^{\rm d} \big\vert_{\varrho=\rho_0} }_{ L^2(0,t;H^s_{\bm{x}})} +\kappa \Norm{\nabla_{\bm{x}} h^{\rm d}}_{L^2(0,t;H^{s,k})} \leq C \,\mu\, M_0 \] (choosing $C$ large enough) on the time interval $[0,T^{\rm h}]$. This concludes the proof. \end{proof} \section{Product, composition and commutator estimates}\label{S.Appendix} In this section we collect useful estimates in the spaces $H^{s,k}(\Omega)$ introduced in~\eqref{def:Hsk}. Our results will follow from standard estimates in Sobolev spaces $H^s(\mathbb{R}^d)$ (see {\em e.g.} \cite{Lannesbook}*{Appendix~B} and references therein), and the following continuous embedding. Henceforth we denote $\Omega=\mathbb{R}^d\times (\rho_0,\rho_1)$.
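As a guide to the first embedding below, one may keep in mind its one-dimensional prototype (a simple sketch; the implicit constant depends only on $\rho_1-\rho_0$): for any $f\in H^1((\rho_0,\rho_1))$,
\[ \max_{\varrho\in[\rho_0,\rho_1]} |f(\varrho)|^2 \lesssim \norm{f}_{L^2_\varrho}\norm{f'}_{L^2_\varrho}+\norm{f}_{L^2_\varrho}^2,\]
which is obtained by integrating $\partial_\varrho(\phi f^2)$ for a suitable cutoff $\phi$. In the lemma, the product $\norm{f}_{L^2_\varrho}\norm{f'}_{L^2_\varrho}$ is replaced by an $H^{1/2}_{\bm{x}}\times H^{-1/2}_{\bm{x}}$ duality pairing at each fixed $\varrho$, which accounts for the extra half-derivative in the horizontal variables appearing in the statement.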
\begin{lemma}\label{L.embedding} For any $s\in \mathbb{R}$ and $\rho_0<\rho_1$, $H^{s+1/2,1}(\Omega) \subset \mathcal{C}^0([\rho_0,\rho_1]; H^s(\mathbb{R}^d) )$ and there exists $C>0$ such that for any $F\in H^{s+1/2,1}(\Omega) $, \[ \max_{\varrho \in [\rho_0, \rho_1]}\norm{F(\cdot,\varrho)}_{H^s_{\bm{x}} } \leq C \Norm{F}_{H^{s+1/2,1}} .\] More generally, for any $k\geq 1$, $H^{s+1/2,k}(\Omega) \subset \bigcap_{j=0}^{k-1}\mathcal{C}^j([\rho_0,\rho_1]; H^{s-j}(\mathbb{R}^d) )$, and in particular, for any $s_0>d/2$ and $j\in\mathbb{N}$, $H^{j+s_0+\frac12,j+1}(\Omega)\subset\big( \mathcal{C}^j(\Omega)\cap W^{j,\infty}(\Omega)\big)$. \end{lemma} \begin{proof} By a density argument, we only need to prove the inequality for smooth functions $F$. Let $\phi:[\rho_0,\rho_1]\to\mathbb{R}^+$ be a smooth function such that $\phi(\rho_0)=0$ and $\phi(\varrho)=1$ if $\varrho\geq \tfrac{\rho_0+\rho_1}2$, and deduce that for any $\varrho\geq \tfrac{\rho_0+\rho_1}2$, recalling the notation $\Lambda^s:=(\Id-\Delta_{\bm{x}})^{s/2}$, \begin{align*} \int_{\mathbb{R}^d} (\Lambda^s F)^2({\bm{x}},\varrho) \dd{\bm{x}}&= \int_{\mathbb{R}^d}\int_{\rho_0}^\varrho \partial_\varrho \big(\phi(\varrho')(\Lambda^s F)^2({\bm{x}},\varrho')\big)\dd \varrho'\dd{\bm{x}} \\ &\leq 2\norm{\phi}_{L^\infty_\varrho}\int_{\rho_0}^{\rho_1}\norm{\Lambda^s F(\cdot, \varrho)}_{H^{1/2}_{\bm{x}}}\norm{\Lambda^s \partial_\varrho F(\cdot,\varrho)}_{H^{-1/2}_{\bm{x}}}\dd \varrho\\ &\quad +\norm{\phi'}_{L^\infty_\varrho}\int_{\rho_0}^{\rho_1}\norm{\Lambda^s F(\cdot, \varrho)}_{L^2}^2\dd \varrho\\ &\lesssim \Norm{F}_{H^{s+1/2,0}}^2+\Norm{\partial_\varrho F}_{H^{s-1/2,0}}^2. \end{align*} Using symmetrical considerations when $\varrho< \tfrac{\rho_0+\rho_1}2$, we prove {the claimed inequality, which yields the first continuous embedding.
Higher-order embeddings follow immediately.} \end{proof} Recall the notation \[ A_s+\big\langle B_s\big\rangle_{s>s_\star} =\begin{cases} A_s &\text{ if } s\leq s_\star\,,\\ A_s+B_s& \text{otherwise.} \end{cases}\] \subsection*{Product estimates} Recall the standard product estimates in Sobolev spaces $H^s(\mathbb{R}^d)$. \begin{lemma}\label{L.product-Hs} Let $d\in\mathbb{N}^\star$, $s_0>d/2$. \begin{enumerate} \item \label{L.product-Hs-1} For any $s,s_1,s_2\in\mathbb{R}$ such that $s_1\geq s$, $s_2\geq s$ and $s_1+s_2\geq s+s_0$, there exists $C>0$ such that for any $f\in H^{s_1}(\mathbb{R}^d)$ and $g\in H^{s_2}(\mathbb{R}^d)$, $fg\in H^s(\mathbb{R}^d)$ and \[\norm{fg}_{H^s}\leq C \norm{f}_{H^{s_1}}\norm{g}_{H^{s_2}}.\] \item \label{L.product-Hs-2} For any $s\geq -s_0$, there exists $C>0$ such that for any $f\in H^{s}(\mathbb{R}^d)$ and $g\in H^{s}(\mathbb{R}^d)\cap H^{s_0}(\mathbb{R}^d)$, $fg\in H^s(\mathbb{R}^d)$ and \[\norm{fg}_{H^s}\leq C \norm{f}_{H^{s_0}}\norm{g}_{H^{s}} +C\left\langle \norm{f}_{H^{s}}\norm{g}_{H^{s_0}}\right\rangle_{s>s_0}.\] \item \label{L.product-Hs-3} For any $s_1,\dots,s_n\in\mathbb{R}$ such that $s_i\geq 0$ and $s_1+\dots+s_n\geq (n-1)s_0$, there exists $C>0$ such that for any $(f_1,\dots,f_n)\in H^{s_1}(\mathbb{R}^d)\times \cdots\times H^{s_n}(\mathbb{R}^d) $, $\prod_{i=1}^n f_i\in L^2(\mathbb{R}^d)$ and \[\norm{\prod_{i=1}^n f_i }_{L^2}\leq C \prod_{i=1}^n \norm{f_i}_{H^{s_i}}.\] \end{enumerate} \end{lemma} Let us turn to product estimates in $H^{s,k}(\Omega)$ spaces. \begin{lemma}\label{L.product-Hsk} Let $d\in\mathbb{N}^\star$, $s_0>d/2$. Let $s,k\in\mathbb{N}$ such that $s\geq s_0+\frac12$ and $1\leq k\leq s$. 
Then $H^{s,k}(\Omega)$ is a Banach algebra and there exists $C>0$ such that for any $F,G\in H^{s,k}(\Omega)$, \[\Norm{FG}_{H^{s,k}}\leq C\Norm{F}_{H^{s,k}}\Norm{G}_{H^{s,k}}.\] Moreover, if $s\geq s_0+\frac32$ and $2\leq k\leq s$, then there exists $C'>0$ such that for any $F,G\in H^{s,k}(\Omega)$, \[\Norm{FG}_{H^{s,k}}\leq C'\Norm{F}_{H^{s,k}}\Norm{G}_{H^{s-1,k-1}}+C'\Norm{F}_{H^{s-1,k-1}}\Norm{G}_{H^{s,k}},\] and if $s\geq s_0+\frac32$ and $k=1$, then there exists $C''>0$ such that for any $F,G\in H^{s,1}(\Omega)$, \[\Norm{FG}_{H^{s,1}}\leq C''\Norm{F}_{H^{s,1}}\Norm{G}_{H^{s-1,1}}+C''\Norm{F}_{H^{s-1,1}}\Norm{G}_{H^{s,1}}.\] \end{lemma} \begin{proof} We set two multi-indices ${\bm{\beta}}=({\bm{\beta}}_{\bm{x}},{\bm{\beta}}_\varrho)\in\mathbb{N}^{d+1}$ and ${\bm{\gamma}}=({\bm{\gamma}}_{\bm{x}},{\bm{\gamma}}_\varrho)\in\mathbb{N}^{d+1}$ such that $|{\bm{\beta}}|+|{\bm{\gamma}}|\leq s$ and ${\bm{\beta}}_\varrho+{\bm{\gamma}}_\varrho\leq k$. Assume first, in addition, that ${\bm{\gamma}}_\varrho\leq k-1$ and $|{\bm{\gamma}}|\leq s-1$. Then \begin{align*} \Norm{(\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)}_{L^2(\Omega)}^2 &\lesssim \int_{\rho_0}^{\rho_1} \norm{\partial^{\bm{\beta}} F(\cdot,\varrho)}_{H^{s-|{\bm{\beta}}|}_{\bm{x}}}^2\norm{\partial^{\bm{\gamma}} G(\cdot,\varrho)}_{H^{s-|{\bm{\gamma}}|-\frac12}_{\bm{x}}}^2\dd \varrho\\ &\lesssim \Norm{\partial^{\bm{\beta}} F}_{H^{s-|{\bm{\beta}}|,0}}^2\Norm{\partial^{\bm{\gamma}} G}_{H^{s-|{\bm{\gamma}}|,1}}^2\leq \Norm{ F}_{H^{s,k}}^2\Norm{ G}_{H^{s,k}}^2, \end{align*} where we used Lemma~\ref{L.product-Hs}(\ref{L.product-Hs-1}) with $(s,s_1,s_2)=(0,s-|{\bm{\beta}}|,s-|{\bm{\gamma}}|-\frac12)$, and Lemma~\ref{L.embedding}. If ${\bm{\gamma}}_\varrho=k$ or $|{\bm{\gamma}}|=s$, then since $1\leq k\leq s$, we have ${\bm{\beta}}_\varrho\leq k-1$ and $|{\bm{\beta}}|\leq s-1$ and we may make use of the symmetric estimate. Hence the proof of the first statement follows from the Leibniz rule. 
For the second statement, we assume first that $\max(\{{\bm{\beta}}_\varrho,{\bm{\gamma}}_\varrho\})\leq k-1$ and $\max(\{|{\bm{\beta}}|,|{\bm{\gamma}}|\})\leq s-1$. Then, using Lemma~\ref{L.product-Hs} with $(s,s_1,s_2)=(0,s-|{\bm{\beta}}|-\frac12,s-|{\bm{\gamma}}|-1)$ (recall $s\geq s_0+\frac32$), and Lemma~\ref{L.embedding}, \begin{align*} \Norm{(\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)}_{L^2(\Omega)} &\lesssim \norm{\norm{\partial^{\bm{\beta}} F(\cdot,\varrho)}_{H^{s-|{\bm{\beta}}|-\frac12}_{\bm{x}}}\norm{\partial^{\bm{\gamma}} G(\cdot,\varrho)}_{H^{s-|{\bm{\gamma}}|-1}_{\bm{x}}} }_{L^2_\varrho}\\ &\lesssim \Norm{ F}_{H^{s,{\bm{\beta}}_\varrho+1}}\Norm{ G}_{H^{s-1,{\bm{\gamma}}_\varrho}}\leq \Norm{F}_{H^{s,k}}\Norm{G}_{H^{s-1,k-1}}. \end{align*} Then if ${\bm{\beta}}_\varrho=k$ or $|{\bm{\beta}}|= s$, we have (since $s\geq k\geq 2$) ${\bm{\gamma}}_\varrho\leq k-2$ and $|{\bm{\gamma}}|\leq s-2$, and we infer \begin{align*} \Norm{(\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)}_{L^2(\Omega)} &\lesssim \norm{ \norm{\partial^{\bm{\beta}} F(\cdot,\varrho)}_{H^{s-|{\bm{\beta}}|}_{\bm{x}}}\norm{\partial^{\bm{\gamma}} G(\cdot,\varrho)}_{H^{s-|{\bm{\gamma}}|-\frac32}_{\bm{x}}} }_{L^2_\varrho}\\ &\lesssim \Norm{ F}_{H^{s,{\bm{\beta}}_\varrho}}\Norm{ G}_{H^{s-1,{\bm{\gamma}}_\varrho+1}}\leq \Norm{F}_{H^{s,k}}\Norm{G}_{H^{s-1,k-1}}. \end{align*} Of course we have the symmetrical result when ${\bm{\gamma}}_\varrho=k$ or $|{\bm{\gamma}}|= s$, which completes the proof of the second statement. 
{Finally, for the last statement, we consider first the case ${\bm{\beta}}_\varrho=0$ and $\max(\{|{\bm{\beta}}|,|{\bm{\gamma}}|\})\leq s-1$, and infer as above \[ \Norm{(\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)}_{L^2(\Omega)} \lesssim \norm{\norm{\partial^{\bm{\beta}} F(\cdot,\varrho)}_{H^{s-|{\bm{\beta}}|-\frac12}_{\bm{x}}}\norm{\partial^{\bm{\gamma}} G(\cdot,\varrho)}_{H^{s-|{\bm{\gamma}}|-1}_{\bm{x}}} }_{L^2_\varrho} \lesssim \Norm{ F}_{H^{s,1}}\Norm{ G}_{H^{s-1,1}}.\] The case ${\bm{\beta}}_\varrho=1$ (and hence ${\bm{\gamma}}_\varrho=0$) and $\max(\{|{\bm{\beta}}|,|{\bm{\gamma}}|\})\leq s-1$ is treated symmetrically. Then if $|{\bm{\beta}}|= s$ we have ${\bm{\gamma}}_\varrho=|{\bm{\gamma}}|=0$, and we infer \[ \Norm{(\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)}_{L^2(\Omega)} \lesssim \norm{ \norm{\partial^{\bm{\beta}} F(\cdot,\varrho)}_{H^{s-|{\bm{\beta}}|}_{\bm{x}}}\norm{\partial^{\bm{\gamma}} G(\cdot,\varrho)}_{H^{s-|{\bm{\gamma}}|-\frac32}_{\bm{x}}} }_{L^2_\varrho} \lesssim \Norm{F}_{H^{s,1}}\Norm{G}_{H^{s-1,1}}.\] The case $|{\bm{\gamma}}|= s$ is treated symmetrically, and the proof is complete.} \end{proof} \subsection*{Composition estimates} Let us recall the standard composition estimate in Sobolev spaces $H^s(\mathbb{R}^d)$. \begin{lemma}\label{L.composition-Hs} Let $d\in\mathbb{N}^\star$, $s_0>d/2$. For any $\varphi\in \mathcal{C}^\infty(\mathbb{R};\mathbb{R})$ such that $\varphi(0)=0$, and any $M>0$, there exists $C>0$ such that for any $f\in H^{s_0}(\mathbb{R}^d) \cap H^s(\mathbb{R}^d)$ with $\norm{f}_{H^{s_0}} \leq M$, one has $\varphi(f)\in H^s(\mathbb{R}^d)$ and \[\norm{\varphi(f)}_{H^s} \leq C\norm{f}_{H^s}.\] \end{lemma} We now consider composition estimates in $H^{s,k}(\Omega)$. \begin{lemma}\label{L.composition-Hsk} Let $d\in\mathbb{N}^\star$, $s_0>d/2$. Let $s,k\in\mathbb{N}$ with $s\geq s_0+\frac12$ and $1\leq k\leq s$, and $M>0$. 
There exists $C>0$ such that for any $\varphi\in W^{s,\infty}(\mathbb{R};W^{k,\infty}((\rho_0,\rho_1)))$ with $\varphi(0;\cdot)\equiv 0$, and any $F\in H^{s,k}(\Omega)$ such that $\Norm{F}_{H^{s,k}} \leq M$, one has $\varphi\circ F:({\bm{x}},\varrho)\mapsto\varphi(F({\bm{x}},\varrho);\varrho)\in H^{s,k}(\Omega)$ and \[ \Norm{\varphi\circ F}_{H^{s,k}}\leq C\norm{\varphi}_{W^{s,\infty}( \mathbb{R} ; W^{k,\infty}( (\rho_0,\rho_1) ) )}\Norm{F}_{H^{s,k}}.\] If moreover $s\geq s_0+\frac32$ and $2\leq k\leq s$, then there exists $C'>0$ such that for any $F\in H^{s,k}(\Omega)$ with $\Norm{F}_{H^{s-1,k-1}} \leq M$, \[ \Norm{\varphi\circ F}_{H^{s,k}}\leq C'\norm{\varphi}_{W^{s,\infty}(\mathbb{R};W^{k,\infty}((\rho_0,\rho_1)))}\Norm{F}_{H^{s,k}}.\] \end{lemma} \begin{proof} Let ${\bm{\alpha}}=({\bm{\alpha}}_{\bm{x}},{\bm{\alpha}}_\varrho)\in\mathbb{N}^{d+1}\setminus\{{\bf 0}\}$ with $0\leq|{\bm{\alpha}}|\leq s$ and $0\leq{\bm{\alpha}}_\varrho\leq k$. We have by Fa\`a di Bruno's formula \[ \Norm{\partial^{\bm{\alpha}} (\varphi\circ F)}_{L^2(\Omega)} \lesssim \sum\Norm{\big(({\partial}_1^i{\partial}_2^j\varphi)\circ F \big)\, (\partial^{{\bm{\alpha}}^{i,j}_1} F)\cdots (\partial^{{\bm{\alpha}}^{i,j}_i} F)}_{L^2(\Omega)}, \] where $i,j\in\mathbb{N}$ with $i+j\leq |{\bm{\alpha}}|\leq s$, and the multi-indices ${\bm{\alpha}}^{i,j}_\ell=({\bm{\alpha}}^{i,j}_{\ell,{\bm{x}}},{\bm{\alpha}}^{i,j}_{\ell,\varrho})\in\mathbb{N}^{d+1}\setminus\{{\bf 0}\}$ satisfy $ \sum_{\ell=1}^i {\bm{\alpha}}^{i,j}_{\ell,{\bm{x}}}={\bm{\alpha}}_{\bm{x}} $ and $ j+\sum_{\ell=1}^i {\bm{\alpha}}^{i,j}_{\ell,\varrho}={\bm{\alpha}}_\varrho $. 
If $i=0$ then we have from the mean value theorem that for any $0\leq j\leq k$ \[ \Norm{({\partial}_2^j\varphi)\circ F}_{L^2(\Omega)}=\Norm{({\partial}_2^j\varphi)\circ F-({\partial}_2^j\varphi)\circ 0}_{L^2(\Omega)}\leq \norm{{\partial}_1{\partial}_2^j\varphi}_{L^\infty(\mathbb{R}\times (\rho_0,\rho_1))} \Norm{F}_{L^2(\Omega)}.\] The case $i=1$ is straightforward, and we now focus on the case $i\geq 2$. We assume without loss of generality that $|{\bm{\alpha}}^{i,j}_{1,\varrho}|\geq |{\bm{\alpha}}^{i,j}_{2,\varrho} |\geq \cdots \geq |{\bm{\alpha}}^{i,j}_{i,\varrho} |$ and remark that for $\ell\neq 1$, $|{\bm{\alpha}}^{i,j}_{\ell,\varrho}|\leq k-1$ (otherwise $|{\bm{\alpha}}^{i,j}_{1,\varrho}|+|{\bm{\alpha}}^{i,j}_{\ell,\varrho}|=2k>k\geq |{\bm{\alpha}}_\varrho|$) and $ |{\bm{\alpha}}^{i,j}_{\ell} |\leq s-|{\bm{\alpha}}^{1,j}_1|\leq s-1 $. Hence we have \begin{align*}\textstyle\Norm{\prod_{\ell=1}^i (\partial^{{\bm{\alpha}}^{i,j}_\ell} F)}_{L^2(\Omega)} &\textstyle\lesssim \norm{\norm{\partial^{{\bm{\alpha}}^{i,j}_1} F}_{H^{s-|{\bm{\alpha}}^{i,j}_1|}_{\bm{x}}} \big(\prod_{\ell=2}^i \norm{\partial^{{\bm{\alpha}}^{i,j}_\ell} F}_{H^{s-|{\bm{\alpha}}^{i,j}_2|-\frac12}_{\bm{x}}}\big) }_{L^2_\varrho}\\ &\textstyle\lesssim \Norm{ F}_{H^{s,{\bm{\alpha}}^{i,j}_{1,\varrho}}} \big(\prod_{\ell=2}^i \Norm{ F}_{H^{s,{\bm{\alpha}}^{i,j}_{\ell,\varrho}+1}} \big)\leq \Norm{ F}_{H^{s,k}}^i \end{align*} where we used Lemma~\ref{L.product-Hs}(\ref{L.product-Hs-3}) and $(i-1)(s-\frac12)\geq (i-1)s_0 $ and Lemma~\ref{L.embedding}. The first claim follows. Now we assume additionally that $k\geq 2$ and $s\geq s_0+\frac32$. The cases $i\in\{0,1\}$ can be treated exactly as above and we deal only with the case $i\geq2$, ordering $|{\bm{\alpha}}^{i,j}_{1,\varrho}|\geq |{\bm{\alpha}}^{i,j}_{2,\varrho}| \geq \cdots \geq |{\bm{\alpha}}^{i,j}_{i,\varrho}| $ as above. Assume first that $|{\bm{\alpha}}^{i,j}_{1,\varrho}|=k\geq 2$. 
Then for all $\ell\neq 1$, $|{\bm{\alpha}}^{i,j}_{\ell,\varrho}|=0$ and $|{\bm{\alpha}}^{i,j}_{\ell}|\leq s-2$, and we conclude as before with \[\textstyle\Norm{\prod_{\ell=1}^i (\partial^{{\bm{\alpha}}^{i,j}_\ell} F)}_{L^2(\Omega)} \lesssim \norm{\norm{\partial^{{\bm{\alpha}}^{i,j}_1} F}_{H^{s-|{\bm{\alpha}}^{i,j}_1|}_{\bm{x}}} \big(\prod_{\ell=1}^i \norm{\partial^{{\bm{\alpha}}^{i,j}_\ell} F}_{H^{s-|{\bm{\alpha}}^{i,j}_\ell|-\frac32}_{\bm{x}}} \big)}_{L^2_\varrho} \lesssim \Norm{ F}_{H^{s,k}}\Norm{ F}_{H^{s-1,1}}^{i-1}. \] Otherwise we have $|{\bm{\alpha}}^{i,j}_{2,\varrho}|\leq |{\bm{\alpha}}^{i,j}_{1,\varrho}|\leq k-1$ and $|{\bm{\alpha}}^{i,j}_{2}|\leq s-|{\bm{\alpha}}^{i,j}_{1}|\leq s-1$ and notice that for $\ell \geq 3$, $|{\bm{\alpha}}^{i,j}_{\ell,\varrho}|\leq k-2$ (since otherwise we have the contradiction $|{\bm{\alpha}}^{i,j}_{1,\varrho}|+ |{\bm{\alpha}}^{i,j}_{2,\varrho}|+|{\bm{\alpha}}^{i,j}_{3,\varrho}|\geq 3(k-1)\geq k+1\geq |{\bm{\alpha}}_\varrho|+1$) and $|{\bm{\alpha}}^{i,j}_{\ell}|\leq s-|{\bm{\alpha}}^{i,j}_{1}|-|{\bm{\alpha}}^{i,j}_{2}|\leq s-2$. Hence \begin{align*}\textstyle\Norm{\prod_{\ell=1}^i (\partial^{{\bm{\alpha}}^{i,j}_\ell} F)}_{L^2(\Omega)} &\textstyle\lesssim \norm{\norm{\partial^{{\bm{\alpha}}^{i,j}_1} F}_{H^{s-|{\bm{\alpha}}^{i,j}_1|-\frac12}_{\bm{x}}}\norm{\partial^{{\bm{\alpha}}^{i,j}_2} F}_{H^{s-|{\bm{\alpha}}^{i,j}_2|-1}_{\bm{x}}}\big(\prod_{\ell=3}^i \norm{\partial^{{\bm{\alpha}}^{i,j}_\ell} F}_{H^{s-|{\bm{\alpha}}^{i,j}_\ell|-\frac32}_{\bm{x}}}\big)}_{L^2_\varrho}\\ &\textstyle\lesssim \Norm{ F}_{H^{s,{\bm{\alpha}}^{i,j}_{1,\varrho}+1}}\Norm{ F}_{H^{s-1,{\bm{\alpha}}^{i,j}_{2,\varrho}}} \big(\prod_{\ell=3}^i\Norm{ F}_{H^{s-1,{\bm{\alpha}}^{i,j}_{\ell,\varrho}+1}}\big) \lesssim \Norm{ F}_{H^{s,k}}\Norm{ F}_{H^{s-1,k-1}}^{i-1}. \end{align*} This concludes the proof. 
\end{proof} We shall apply the above to estimate quantities such as (but not restricted to) \[ \Phi:({\bm{x}},\varrho)\in\Omega\mapsto\frac{h({\bm{x}},\varrho)}{\bar h(\varrho)+h({\bm{x}},\varrho)},\] with $\bar h\in W^{k,\infty}((\rho_0,\rho_1))$ and $h\in H^{s,k}(\Omega)$ satisfying the condition $\inf_{({\bm{x}},\varrho)\in\Omega}\bar h(\varrho)+h({\bm{x}},\varrho)\geq h_\star>0$. Let us detail the result and its proof for this specific example. \begin{lemma}\label{L.composition-Hsk-ex} Let $d\in\mathbb{N}^\star$, $s_0>d/2$. Let $s,k\in\mathbb{N}$ with $s\geq s_0+\frac12$ and $1\leq k\leq s$, and $M,\bar M,h_\star>0$. There exists $C>0$ such that for any $\bar h\in W^{k,\infty}((\rho_0,\rho_1))$ with $\norm{\bar h}_{W^{k,\infty}_\varrho}\leq \bar M$ and any $h\in H^{s,k}(\Omega)$ with $\Norm{h}_{H^{s,k}}\leq M$ and satisfying the condition $\inf_{({\bm{x}},\varrho)\in\Omega}\bar h(\varrho)+h({\bm{x}},\varrho)\geq h_\star$, one has \[ \Phi:({\bm{x}},\varrho)\mapsto\frac{h({\bm{x}},\varrho)}{\bar h(\varrho)+h({\bm{x}},\varrho)} \in H^{s,k}(\Omega),\] and \[ \Norm{\Phi}_{H^{s,k}} \leq C \Norm{h}_{H^{s,k}}.\] If moreover $s> \frac d2+\frac32$ and $2\leq k\leq s$, then the above holds for any $h\in H^{s,k}(\Omega)$ with $\Norm{h}_{H^{s-1,k-1}}\leq M$. \end{lemma} \begin{proof} We can write $\Phi=\varphi\circ h$ with $\varphi(\cdot,\varrho)=f(\cdot,\bar h(\varrho))$ where $f\in \mathcal{C}^\infty(\mathbb{R}^2)$ is chosen such that $f(y,z)=\frac{y}{y+z}$ on $\omega:=\{(y,z) \ : \ \ |y|\leq \Norm{h}_{L^\infty(\Omega)} ,\ |z|\leq \norm{\bar h}_{L^\infty((\rho_0,\rho_1))},\ y+z\geq h_\star \}$. We can construct such an $f$ so that the control of $\norm{\varphi}_{W^{s,\infty}(\mathbb{R};W^{k,\infty}((\rho_0,\rho_1)))} $ depends only on $\Norm{h}_{L^\infty(\Omega)} $ (which is bounded, by Lemma~\ref{L.embedding}, since $h \in H^{s,k}$ with $s>\frac d2 + \frac 12, \, 1 \le k \le s$), $ \norm{\bar h}_{W^{k,\infty}((\rho_0,\rho_1))}$ and $h_\star>0$. 
The result is now a direct application of Lemma~\ref{L.composition-Hsk}.\end{proof} \subsection*{Commutator estimates} We now recall standard commutator estimates in $H^s(\mathbb{R}^d)$. \begin{lemma}\label{L.commutator-Hs} Let $d\in\mathbb{N}^\star$, $s_0>d/2$ and $s\geq0$. \begin{enumerate} \item \label{L.commutator-Hs-1} For any $s_1,s_2\in\mathbb{R}$ such that $s_1\geq s$, $s_2\geq s-1$ and $s_1+s_2\geq s+s_0$, there exists $C>0$ such that for any $f\in H^{s_1}(\mathbb{R}^d)$ and $g\in H^{s_2}(\mathbb{R}^d)$, $[\Lambda^s,f]g:=\Lambda^s(fg)-f\Lambda^s g\in L^2(\mathbb{R}^d)$ and \[\norm{[\Lambda^s,f]g}_{L^2}\leq C \norm{f}_{H^{s_1}}\norm{g}_{H^{s_2}}.\] \item \label{L.commutator-Hs-2} There exists $C>0$ such that for any $f\in L^\infty(\mathbb{R}^d)$ such that $\nabla f \in H^{s-1}(\mathbb{R}^d)\cap H^{s_0}(\mathbb{R}^d)$ and for any $g\in H^{s-1}(\mathbb{R}^d)$, one has $[\Lambda^s,f]g\in L^2(\mathbb{R}^d)$ and \[\norm{[\Lambda^s,f]g}_{L^2}\leq C \norm{\nabla f}_{H^{s_0}}\norm{g}_{H^{s-1}} +C\left\langle \norm{\nabla f}_{H^{s-1}}\norm{g}_{H^{s_0}}\right\rangle_{s>s_0+1}.\] \item \label{L.commutator-Hs-3} There exists $C>0$ such that for any $f,g\in H^{s}(\mathbb{R}^d)\cap H^{s_0+1}(\mathbb{R}^d)$, the symmetric commutator $[\Lambda^s;f,g]:=\Lambda^s(fg)-f\Lambda^s g - g\Lambda^s f \in L^2(\mathbb{R}^d)$ and \[\norm{[\Lambda^s;f,g]}_{L^2}\leq C \norm{f}_{H^{s_0+1}}\norm{g}_{H^{s-1}} +C \norm{f}_{H^{s-1}}\norm{g}_{H^{s_0+1}} .\] \end{enumerate} {The validity of the above estimates persist when replacing the operator $\Lambda^s$ with the operator $\partial^{\bm{\alpha}}$ with ${\bm{\alpha}}\in\mathbb{N}^d$ a multi-index such that $|{\bm{\alpha}}|\leq s$.} \end{lemma} We conclude with commutator estimates in the spaces $H^{s,k}(\Omega)$. \begin{lemma}\label{L.commutator-Hsk} Let $d\in\mathbb{N}^\star$, $s_0>d/2$. Let $s\geq s_0+\frac32$ and $k\in\mathbb{N}$ such that $2\leq k\leq s$. 
Then there exists $C>0$ such that for any ${\bm{\alpha}}=({\bm{\alpha}}_{\bm{x}},{\bm{\alpha}}_\varrho)\in\mathbb{N}^{d+1}$ with $|{\bm{\alpha}}|\leq s$ and ${\bm{\alpha}}_\varrho\leq k$, one has \[\Norm{[\partial^{\bm{\alpha}},F]G}_{L^2(\Omega)}\leq C\Norm{F}_{H^{s,k}}\Norm{G}_{H^{s-1,\min(\{k,s-1\})}}.\] \end{lemma} \begin{proof} We set two multi-indices ${\bm{\beta}}=({\bm{\beta}}_{\bm{x}},{\bm{\beta}}_\varrho)\in\mathbb{N}^{d+1}$ and ${\bm{\gamma}}=({\bm{\gamma}}_{\bm{x}},{\bm{\gamma}}_\varrho)\in\mathbb{N}^{d+1}$ with ${\bm{\beta}}+{\bm{\gamma}}={\bm{\alpha}}$, and $|{\bm{\gamma}}|\leq s-1$. Assume first that ${\bm{\beta}}_\varrho\leq k-1$ and $|{\bm{\beta}}|\leq s-1$. Then \begin{align*} \Norm{(\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)}_{L^2(\Omega)} &\lesssim \norm{\norm{\partial^{\bm{\beta}} F(\cdot,\varrho)}_{H^{s-|{\bm{\beta}}|-\frac12}_{\bm{x}}}\norm{\partial^{\bm{\gamma}} G(\cdot,\varrho)}_{H^{s-|{\bm{\gamma}}|-1}_{\bm{x}}} }_{L^2_\varrho}\\ &\lesssim \Norm{ F}_{H^{s,{\bm{\beta}}_\varrho+1}}\Norm{ G}_{H^{s-1,{\bm{\gamma}}_\varrho}}\leq \Norm{F}_{H^{s,k}}\Norm{G}_{H^{s-1,\min(\{k,s-1\})}}, \end{align*} where we used Lemma~\ref{L.product-Hs}(\ref{L.product-Hs-1}) with $(s,s_1,s_2)=(0,s-|{\bm{\beta}}|-\frac12,s-|{\bm{\gamma}}|-1)$, and Lemma~\ref{L.embedding}. Otherwise ${\bm{\gamma}}_\varrho=0$ and $|{\bm{\gamma}}|\leq s-|{\bm{\beta}}|\leq s-2$, and we have \begin{align*} \Norm{(\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)}_{L^2(\Omega)} &\lesssim \norm{ \norm{\partial^{\bm{\beta}} F(\cdot,\varrho)}_{H^{s-|{\bm{\beta}}|}_{\bm{x}}}\norm{\partial^{\bm{\gamma}} G(\cdot,\varrho)}_{H^{s-|{\bm{\gamma}}|-\frac32}_{\bm{x}}} }_{L^2_\varrho}\\ &\lesssim \Norm{ F}_{H^{s,{\bm{\beta}}_\varrho}}\Norm{ G}_{H^{s-1,1}} \leq \Norm{F}_{H^{s,k}}\Norm{G}_{H^{s-1,\min(\{k,s-1\})}}. \end{align*} The claim follows from decomposing $[\partial^{\bm{\alpha}},F]G$ as a sum of products as above. 
\end{proof} \begin{lemma}\label{L.commutator-Hsk-sym} Let $d\in\mathbb{N}^\star$, $s_0>d/2$. Let $s\geq s_0+\frac52$ and $k\in\mathbb{N}$ such that $2\leq k\leq s$. Then there exists $C>0$ such that for any ${\bm{\alpha}}=({\bm{\alpha}}_{\bm{x}},{\bm{\alpha}}_\varrho)\in\mathbb{N}^{d+1}$ with $|{\bm{\alpha}}|\leq s$ and ${\bm{\alpha}}_\varrho\leq k$, one has \[\Norm{[\partial^{\bm{\alpha}};F,G]}_{L^2(\Omega)}\leq C\Norm{F}_{H^{s-1,\min(\{k,s-1\})}}\Norm{G}_{H^{s-1,\min(\{k,s-1\})}}. \] \end{lemma} \begin{proof} We can decompose \[[\partial^{\bm{\alpha}};F,G]= \sum_{{\bm{\beta}}+{\bm{\gamma}}={\bm{\alpha}}} (\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)\] with multi-indices ${\bm{\beta}}=({\bm{\beta}}_{\bm{x}},{\bm{\beta}}_\varrho)\in\mathbb{N}^{d+1}$ and ${\bm{\gamma}}=({\bm{\gamma}}_{\bm{x}},{\bm{\gamma}}_\varrho)\in\mathbb{N}^{d+1}$ such that $|{\bm{\beta}}|+|{\bm{\gamma}}|\leq s$ and ${\bm{\beta}}_\varrho+{\bm{\gamma}}_\varrho\leq k$, and $1\leq |{\bm{\beta}}|,|{\bm{\gamma}}|\leq s-1$. Assume furthermore that ${\bm{\beta}}_\varrho\leq k-1$ and $|{\bm{\beta}}|\leq s-2$. Then \begin{align*} \Norm{(\partial^{\bm{\beta}} F)(\partial^{\bm{\gamma}} G)}_{L^2(\Omega)} &\lesssim \norm{ \norm{\partial^{\bm{\beta}} F(\cdot,\varrho)}_{H^{s-|{\bm{\beta}}|-\frac32}_{\bm{x}}}\norm{\partial^{\bm{\gamma}} G(\cdot,\varrho)}_{H^{s-|{\bm{\gamma}}|-1}_{\bm{x}}} }_{L^2_\varrho}\\ &\lesssim \Norm{ F}_{H^{s-1,{\bm{\beta}}_\varrho+1}}\Norm{ G}_{H^{s-1,{\bm{\gamma}}_\varrho}}\leq \Norm{F}_{H^{s-1,\min(\{k,s-1\})}}\Norm{G}_{H^{s-1,\min(\{k,s-1\})}}, \end{align*} where we used Lemma~\ref{L.product-Hs}(\ref{L.product-Hs-1}) with $(s,s_1,s_2)=(0,s-|{\bm{\beta}}|-\frac32,s-|{\bm{\gamma}}|-1)$, and Lemma~\ref{L.embedding}. By symmetry, the result holds if ${\bm{\gamma}}_\varrho\leq k-1$ and $|{\bm{\gamma}}|\leq s-2$. Hence there remains to consider the situation where (${\bm{\beta}}_\varrho=k$ or $|{\bm{\beta}}| =s-1$) and (${\bm{\gamma}}_\varrho=k$ or $|{\bm{\gamma}}| =s-1$). 
Since $s> 2$ and $|{\bm{\beta}}|+|{\bm{\gamma}}|\leq s$, we cannot have $|{\bm{\beta}}| =|{\bm{\gamma}}|=s-1$. In the same way, we cannot have ${\bm{\beta}}_\varrho={\bm{\gamma}}_\varrho=k$ since $k> 0$. Furthermore, we cannot have ${\bm{\beta}}_\varrho=k$ and $|{\bm{\gamma}}| =s-1$, since the former implies $|{\bm{\beta}}|\geq {\bm{\beta}}_\varrho=k\geq 2$ and the latter implies $|{\bm{\beta}}|\leq 1$. Symmetrically, we cannot have ${\bm{\gamma}}_\varrho=k$ and $|{\bm{\beta}}| =s-1$. This concludes the proof. \end{proof}
\section{Related Work} \subsection{Distributed Training} Training large machine learning models (e.g., deep neural networks) is computationally intensive. In order to finish the training process in a reasonable time, many studies have worked on distributed training for speedup. There are many works that aim to improve the scalability of distributed training, both at the algorithm level~\cite{li2014scaling, dean2012large, recht2011hogwild, goyal2017accurate, jia2018highly} and at the framework level~\cite{chen2015mxnet, chainermn_mlsys2017, tensorflow2015-whitepaper, paszke2017pytorch, sergeev2018horovod}. Most of them adopt synchronous SGD as the backbone because of its stable performance when scaling up. In general, distributed training can be classified into two categories: with a parameter server (centralized)~\cite{li2014scaling, iandola2016firecaffe, kim2016deepspark} and without a parameter server (decentralized)~\cite{barney2010introduction_to_parallel_computing, patarasuk2009ring_all_reduce, sergeev2018horovod}. In both schemes, each node first performs the computation to update its local weights, and then sends the gradients to other nodes. In the centralized mode, the gradients are first aggregated and then delivered back to each node. In the decentralized mode, gradients are exchanged between neighboring nodes. In many application scenarios, the training data is privacy-sensitive. For example, a patient's medical condition cannot be shared across hospitals. To prevent sensitive information from being leaked, collaborative learning has recently emerged~\cite{jochems2017developing, jochems2016distributed, mcmahan2016communication}, where two or more participants can jointly train a model while the training dataset never leaves each participant's local server. Only the gradients are shared across the network. 
This technique has been used to train models for medical treatments across multiple hospitals~\cite{jochems2016distributed}, analyze patient survival situations from various countries~\cite{jochems2017developing} and build predictive keyboards to improve typing experience~\cite{mcmahan2016communication, google2016federated, bonawitz2019towards}. \subsection{``Shallow'' Leakage from Gradients} Previous works have explored how to infer information about the training data from gradients. For some layers, the gradients already leak a certain level of information. For example, the embedding layer in language tasks only produces gradients for words that occur in the training data, which reveals what words have been used in the other participants' training sets~\cite{Melis2018unintended}. But such leakage is ``shallow'': the leaked words are unordered, and it is hard to infer the original sentence due to ambiguity. Another case is fully connected layers, where observations of gradient updates can be used to infer output feature values. However, this does not extend to convolutional layers, because the size of the features is far larger than the size of the weights. Some recent works develop learning-based methods to infer properties of the batch. They show that a binary classifier trained on gradients is able to determine whether an exact data record (membership inference~\cite{shokri2017membership, Melis2018unintended}) or a data record with certain properties (property inference~\cite{Melis2018unintended}) is included in the other participant's batch. Furthermore, they train GAN models~\cite{goodfellow2014generative} to synthesize images that look similar to the training data from the gradients~\cite{hitaj2017information, Melis2018unintended, fredrikson2015model_inversion}, but the attack is limited and only works when all class members look alike (e.g., face recognition). 
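The fully-connected-layer observation above can be made concrete with a minimal numpy sketch (our illustration, not code from the cited works; the layer sizes, the squared-error loss, and the single-sample batch are assumptions). For a layer $z = Wx + b$, the weight gradient is the outer product of the error signal $\partial\ell/\partial z$ with the input, so when a bias term is present an observer can read the input feature vector off the shared gradients exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected layer z = W x + b with squared-error loss,
# applied to a single (private) sample x.
W = rng.normal(size=(4, 6))
b = rng.normal(size=4)
x = rng.normal(size=6)            # the input an observer tries to recover
y = rng.normal(size=4)

r = W @ x + b - y                 # dL/dz for L = 0.5 * ||z - y||^2
grad_W = np.outer(r, x)           # dL/dW = r x^T : leaks x up to scale
grad_b = r                        # dL/db = r     : pins down the scale

# Least-squares recovery of x from the two shared gradient tensors.
x_rec = grad_W.T @ grad_b / (grad_b @ grad_b)
```

Here `grad_W.T @ grad_b` equals $x\,\|r\|^2$, so the division recovers $x$ exactly; without the bias gradient, $x$ would only be determined up to scale.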
\section{Defense Strategies} \label{sec:defense} \subsection{Noisy Gradients} \label{sec:defense:noisy} One straightforward attempt to defend against DLG is to add noise to the gradients before sharing. To evaluate this, we experiment with Gaussian and Laplacian noise (widely used in differential privacy studies) with variances ranging from $10^{-4}$ to $10^{-1}$ and centered at $0$. From Fig.~\ref{fig:defense:gaussian} and \ref{fig:defense:Laplacian}, we observe that the defense effect mainly depends on the magnitude of the variance and is less related to the noise type. When the variance is at the scale of $10^{-4}$, the noisy gradients do not prevent the leak. For noise with variance $10^{-3}$, though with artifacts, the leakage can still be performed. Only when the variance is larger than $10^{-2}$, at which point the noise starts to affect the accuracy, does DLG fail to execute; Laplacian noise tends to be slightly better at the scale $10^{-3}$. However, noise with variance larger than $10^{-2}$ degrades the accuracy significantly (Tab.~\ref{tab:defense_vs_accuracy}). Another common perturbation on gradients is half precision, which was initially designed to save GPU memory footprints and is also widely used to reduce communication bandwidth. We test two popular half precision implementations, IEEE float16 (\textit{half-precision floating-point format}) and bfloat16 (\textit{Brain Floating Point}~\cite{tagliavini2017transprecision}, a truncated version of the 32-bit float). As shown in Fig.~\ref{fig:defense:fp16}, both half precision formats fail to protect the training data. We also test another popular low-bit representation, Int-8. Though it successfully prevents the leakage, the model's performance drops severely (Tab.~\ref{tab:defense_vs_accuracy}). 
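The noise defense above can be sketched as follows (a hypothetical helper, not our experiment code; numpy stands in for the training framework, and `variance` plays the role of the knob swept in the experiments). One detail worth noting: a Laplace distribution with scale $b$ has variance $2b^2$, so matching a target variance requires $b=\sqrt{\mathrm{var}/2}$:

```python
import numpy as np

def noisy_gradients(grads, variance=1e-2, kind="gaussian", seed=0):
    """Perturb each gradient tensor before sharing it with other nodes."""
    rng = np.random.default_rng(seed)
    out = []
    for g in grads:
        if kind == "gaussian":
            noise = rng.normal(0.0, np.sqrt(variance), size=g.shape)
        else:  # "laplacian": Laplace(b) has variance 2*b**2
            noise = rng.laplace(0.0, np.sqrt(variance / 2.0), size=g.shape)
        out.append(g + noise)
    return out

rng = np.random.default_rng(1)
grads = [rng.normal(size=(8, 4)), rng.normal(size=4)]   # e.g. dL/dW, dL/db
shared = noisy_gradients(grads, variance=1e-2, kind="laplacian")
```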
\input{figures/defense.tex} \subsection{Gradient Compression and Sparsification} \label{sec:defense:few_gradients} We next experiment with defending via gradient compression~\cite{lin2017deep, tsuzuku2018variance}: gradients with small magnitudes are pruned to zero. It is more difficult for DLG to match the gradients when the optimization targets are pruned. We evaluate how different levels of sparsity (ranging from 1\% to 70\%) defend against the leakage. When the sparsity is 1\% to 10\%, it has almost no effect against DLG. When the prune ratio increases to 20\%, as shown in Fig.~\ref{fig:defense:dgc}, there are obvious artifact pixels on the recovered images. We notice that the maximum tolerable sparsity is around 20\%. When the pruning ratio is larger, the recovered images are no longer visually recognizable and thus gradient compression successfully prevents the leakage. Previous works~\cite{tsuzuku2018variance, lin2017deep} show that gradients can be compressed by more than 300$\times$ without losing accuracy using error compensation techniques. In this case, the sparsity is above 99\% and already exceeds the maximum tolerance of DLG (which is around 20\%). This suggests that compressing the gradients is a practical approach to avoid deep leakage. \subsection{Large Batch, High Resolution and Cryptology}\label{sec:defend:others} If changes to the training settings are allowed, there are more defense strategies. As suggested in Tab.~\ref{tab:batched_restore}, increasing the batch size makes the leakage more difficult because there are more variables to solve for during optimization. Following this idea, upscaling the input images can also be a good defense, though some changes to the CNN architecture are required. According to our experiments, DLG currently only works for batch sizes up to 8 and image resolutions up to 64$\times$64. 
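For reference, the magnitude pruning used in the gradient compression defense of Sec.~\ref{sec:defense:few_gradients} can be sketched as below (a simplified single-tensor version; real pipelines such as deep gradient compression add momentum correction and error feedback, which are omitted here):

```python
import numpy as np

def sparsify(grad, sparsity=0.99):
    """Zero out the smallest-magnitude entries, keeping only the largest
    (1 - sparsity) fraction (error feedback omitted for brevity)."""
    k = max(1, int(round((1.0 - sparsity) * grad.size)))
    threshold = np.partition(np.abs(grad).ravel(), -k)[-k]
    return np.where(np.abs(grad) >= threshold, grad, 0.0)

rng = np.random.default_rng(0)
g = rng.normal(size=(100, 100))
g_sparse = sparsify(g, sparsity=0.99)   # only ~1% of entries survive
```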
Besides the methods above, cryptology can also be used to prevent the leakage: Bonawitz~\emph{et al}\onedot~\cite{bonawitz16secure_Aggregation} design a secure aggregation protocol and Phong~\emph{et al}\onedot~\cite{phong2018privacy} propose to encrypt the gradients before sending. Among all defenses, cryptology is the most secure one. However, both methods have their limitations and are not general enough: secure aggregation~\cite{bonawitz16secure_Aggregation} requires gradients to be integers and is thus not compatible with most CNNs, and homomorphic encryption~\cite{phong2018privacy} protects against the parameter server only. \section{Conclusions} In this paper, we introduce \emph{Deep Leakage from Gradients} (DLG): an algorithm that can obtain the local training data from publicly shared gradients. DLG does not rely on any generative model or extra prior about the data. Our experiments on vision and language tasks both demonstrate the critical risks of such deep leakage and show that it can only be prevented when defense strategies start to degrade the accuracy. This poses a challenge to modern multi-node learning systems (e.g., distributed training, federated learning). We hope this work raises awareness of the security of gradients and brings the community to rethink the safety of existing gradient sharing schemes. \section*{Acknowledgments} We sincerely thank MIT-IBM Watson AI lab, Intel, Facebook and AWS for supporting this work. We sincerely thank John Cohn for the discussions. \section{Experiments}\label{sec:experiments} \input{experiments/vision_compare.tex} \myparagraph{Setup.} Implementing Algorithm~\ref{algorithm:deep_leakage} requires calculating high-order gradients, and we choose PyTorch~\cite{paszke2017pytorch} as our experiment platform. We use L-BFGS~\cite{liu1989lbfgs} with learning rate 1, history size 100 and max iterations 20, and optimize for 1200 iterations and 100 iterations for the image and text tasks, respectively. 
We aim to match the gradients from all trainable parameters. Notably, DLG has no requirements on the model's convergence status; in other words, \textit{the attack can happen anytime during the training}. To be more general, all our experiments use \textit{randomly initialized weights}. More task-specific details can be found in the following subsections. \subsection{Deep Leakage on Image Classification} \label{sec:experiments_vision} Given an image containing objects, image classification aims to determine the class of the item. We experiment with our algorithm on the modern CNN architecture ResNet-56~\cite{he2016deep} and pictures from MNIST~\cite{lecun1998mnist}, CIFAR-100~\cite{krizhevsky2009cifar}, SVHN~\cite{netzer2011svhn} and LFW~\cite{LFWTech}. Two changes we have made to the models are replacing the ReLU activation with Sigmoid and removing strides, as our algorithm requires the model to be twice differentiable. For the image labels, instead of directly optimizing the discrete categorical values, we randomly initialize a vector with shape $N \times C$, where $N$ is the batch size and $C$ is the number of classes, and then take its softmax output as the one-hot label for optimization. The leaking process is visualized in Fig.~\ref{fig:experiment_vision}. We start with random Gaussian noise (first column) and try to match the gradients produced by the dummy data and the real ones. As shown in Fig.~\ref{fig:vision_fig3}, minimizing the distance between gradients also reduces the gap between the data. We observe that monochrome images with a clean background (MNIST) are the easiest to recover, while complex images like faces take more iterations to recover (Fig.~\ref{fig:experiment_vision}). When the optimization finishes, the recovered results are almost identical to the ground truth images, despite a few negligible artifact pixels. We visually compare the results from the other method~\cite{Melis2018unintended} and ours in Fig.~\ref{fig:experiment_vision}. 
The previous method uses GAN models when the class label is given and only works well on MNIST. Though the result on SVHN is still visually recognizable as the digit ``9'', it is no longer the original training image. The results are even worse on LFW and collapse on CIFAR. We also make a numerical comparison by performing the leaking and measuring the MSE on all dataset images in Fig.~\ref{fig:vision_fig_bar}. Images are normalized to the range $[0, 1]$ and our algorithm achieves much better results (ours $< 0.03$ v.s. previous $> 0.2$) on all four datasets. \subsection{Deep Leakage on Masked Language Model} \label{sec:experiments_nlp} \input{experiments/nlp.tex} For the language task, we verify our algorithm on the Masked Language Model (MLM) task. In each sequence, 15\% of the words are replaced with a [MASK] token and the MLM model attempts to predict the original value of the masked words from the given context. We choose BERT~\cite{devlin2018bert} as our backbone and adapt the hyperparameters from the official implementation~\footnote{\url{https://github.com/google-research/bert}}. Different from vision tasks, where RGB inputs are continuous values, language models need to preprocess discrete words into embeddings. We apply DLG on the embedding space and minimize the distance between the dummy gradients and the real ones. After the optimization finishes, we derive the original words by reverse-querying the closest entries in the embedding matrix. In Tab.~\ref{tab:experiment_nlp}, we exhibit the leaking history on three sentences selected from the NeurIPS conference page. Similar to the vision task, we start with randomly initialized embeddings: the reverse query results at iteration 0 are meaningless. During the optimization, the gradients produced by the dummy embeddings gradually match the original ones, and so do the embeddings. In later iterations, part of the sequence gradually appears. 
In example 3, `annual conference' appears at iteration 20, and by iteration 30 the leaked sentence is already close to the original one. When DLG finishes, though there are a few mismatches caused by ambiguity in tokenization, the main content is already fully leaked. \section{Discussion of the uniqueness}\label{proof} \newcommand\imgwidth{0.07} \section{Method}\label{sec:methods} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/fig2-crop.pdf} \caption{The overview of our DLG algorithm. Variables to be updated are marked with a bold border. While normal participants calculate $\nabla W$ to update parameters using their private training data, the malicious attacker updates its dummy inputs and labels to minimize the gradient distance. When the optimization finishes, the evil user is able to obtain the training set from honest participants. } \label{fig:methods} \end{figure} We show that it is possible to steal an image pixel-wise and a sentence token-wise from the gradients. We focus on standard synchronous distributed training: at each step $t$, every node $i$ samples a minibatch $(\mathbf{x}_{t, i}, \mathbf{y}_{t, i})$ from its own dataset to compute the gradients \begin{equation} \nabla W_{t, i} = \frac{ \partial \ell (F(\mathbf{x}_{t, i}, W_{t}), \mathbf{y}_{t, i} ) }{ \partial W_t }. \end{equation} The gradients are averaged across the $N$ servers and then used to update the weights: \begin{equation} \begin{split} \nabla W_{t} = \frac{1}{N} \sum_{j}^{N} \nabla W_{t, j}; \quad W_{t+1} = W_{t} - \eta \nabla W_{t}. \end{split} \end{equation} Given the gradients $\nabla W_{t, k}$ received from another participant $k$, we aim to steal participant $k$'s training data $(\mathbf{x}_{t, k}, \mathbf{y}_{t, k})$. Note that $F()$ and $W_{t}$ are shared by default in synchronized distributed optimization.
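The synchronous update above can be sketched directly (a numpy sketch with toy per-worker gradients; $N$ and $\eta$ are as in the equations, everything else is illustrative):

```python
import numpy as np

def sync_step(W, worker_grads, lr):
    """One synchronous SGD step, mirroring
    grad_W_t = (1/N) * sum_j grad_W_{t,j};  W_{t+1} = W_t - eta * grad_W_t."""
    g = np.mean(worker_grads, axis=0)   # average across the N workers
    return W - lr * g, g

W = np.zeros(3)
grads = [np.array([1.0, 0.0, 2.0]),     # worker 1
         np.array([3.0, 0.0, 0.0])]     # worker 2  (N = 2)
W_next, g_avg = sync_step(W, grads, lr=0.1)
```

Every participant (and the parameter server, if any) sees `g_avg` or the individual `grads`, which is exactly the quantity our attack consumes.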
\subsection{Training Data Leakage through Gradients Matching}\label{sec:algorithm} \input{contents/algo1_single_batch.tex} To recover the data from gradients, we first randomly initialize a dummy input $\mathbf{x'}$ and label input $\mathbf{y'}$. We then feed these ``dummy data'' into the model and get ``dummy gradients'' \begin{equation} \nabla W' = \frac{ \partial \ell (F(\mathbf{x}', W), \mathbf{y}' ) }{ \partial W }. \end{equation} Bringing the dummy gradients close to the original ones also makes the dummy data close to the real training data (the trend shown in Fig.~\ref{fig:vision_fig3}). Given gradients at a certain step, we obtain the training data by minimizing the following objective \begin{align} \begin{split} \mathbf{x'}^{*}, \mathbf{y'}^{*} = \argmin_{\mathbf{x'}, \mathbf{y}'} ||\nabla W' - \nabla W||^2 = \argmin_{\mathbf{x'}, \mathbf{y}'} || \frac{\partial \ell (F(\mathbf{x}', W), \mathbf{y}')}{\partial W} - \nabla W ||^2 \end{split} \end{align} The distance $||\nabla W' - \nabla W||^2$ is differentiable w.r.t.\ the dummy inputs $\mathbf{x'}$ and labels $\mathbf{y'}$, and can thus be optimized using standard gradient-based methods. Note that this optimization requires second-order derivatives. We make the mild assumption that $F$ is twice differentiable, which holds for the majority of modern machine learning models (e.g., most neural networks) and tasks. \subsection{Deep Leakage for Batched Data}\label{sec:algorithm:batched} \input{experiments/vision.tex} Algorithm~\ref{algorithm:deep_leakage} works well when there is only a single pair of input and label in the batch. However, when we naively apply it to the case where the batch size $N > 1$, the algorithm is too slow to converge. We believe the reason is that batched data can have $N!$ different permutations, which makes it hard for the optimizer to choose gradient directions. To force the optimization closer to a solution, instead of updating the whole batch, we update only a single training sample at a time.
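As a toy illustration of the gradient-matching objective above, the following numpy sketch runs the attack against a one-parameter-layer linear model, where the gradient of the matching distance w.r.t.\ the dummy data can be derived by hand. The model, data, step size, and initialization are all illustrative; the actual DLG attack differentiates through the full network using automatic second-order differentiation instead of a hand derivation:

```python
import numpy as np

def real_grad(w, x, y):
    """Gradient of the loss 0.5*(w.x - y)^2 w.r.t. w, i.e. (w.x - y) * x."""
    return (w @ x - y) * x

def dlg_toy(w, G, steps=8000, lr=0.01):
    """Recover dummy data (x', y') by minimizing D = ||(w.x' - y') x' - G||^2."""
    xp = np.array([0.5, -0.5])   # dummy input (illustrative init)
    yp = 0.0                     # dummy label
    hist = []
    for _ in range(steps):
        s = w @ xp - yp          # dummy residual
        g = s * xp - G           # mismatch between dummy and real gradients
        hist.append(float(g @ g))
        # hand-derived gradients of D w.r.t. x' and y'
        gx = 2.0 * ((g @ xp) * w + s * g)
        gy = -2.0 * (g @ xp)
        xp = xp - lr * gx
        yp = yp - lr * gy
    return xp, yp, hist

w = np.array([0.5, -0.3])
G = real_grad(w, np.array([1.0, 2.0]), 1.0)   # the "real" shared gradient
xp, yp, hist = dlg_toy(w, G)                  # gradient distance shrinks
```

Note that for a model this tiny the minimizer is not unique (a whole family of $(\mathbf{x'},\mathbf{y'})$ pairs reproduces the same gradient), which is why the sketch only checks that the gradient distance collapses; the deep networks attacked in our experiments are far more constrained.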
We modify \textit{line 6} in Algorithm~\ref{algorithm:deep_leakage} to: \begin{align} \begin{split} \mathbf{x'}^{i \bmod N}_{t+1} &\leftarrow \mathbf{x'}^{i \bmod N}_{t} - \nabla_{\mathbf{x'}^{i \bmod N}_{t}} \mathbb{D} \\ \mathbf{y'}^{i \bmod N}_{t+1} &\leftarrow \mathbf{y'}^{i \bmod N}_{t} - \nabla_{\mathbf{y'}^{i \bmod N}_{t}} \mathbb{D} \end{split} \end{align} Then we observe fast and stable convergence. We list the iterations required for convergence for different batch sizes in Tab.~\ref{tab:batched_restore} and provide visualized results in Fig.~\ref{fig:experiment_vision_multi_batch}. The larger the batch size, the more iterations DLG requires for the attack. \begin{table}[h] \centering \begin{tabular}{c|c|c|c|c} & BS=1 & BS=2 & BS=4 & BS=8 \\ \hline ResNet-20 & 270 & 602 & 1173 & 2711 \end{tabular} \caption{The iterations required for restoring batched data on the CIFAR~\cite{krizhevsky2009cifar} dataset. } \label{tab:batched_restore} \end{table} \input{experiments/vision_multibatch.tex} \pagebreak \section{Introduction} Distributed training becomes necessary to speed up training on large-scale datasets. In a distributed learning system, the computation is executed in parallel on each worker and synchronized by exchanging gradients (in both the parameter-server~\cite{li2014scaling, iandola2016firecaffe} and all-reduce~\cite{barney2010introduction_to_parallel_computing, patarasuk2009ring_all_reduce} paradigms). The distribution of computation naturally leads to a splitting of the data: each client has its own training data and only communicates gradients during training (that is, the training set never leaves the local machine). This scheme allows a model to be trained using data from multiple sources without centralizing them. It is known as collaborative learning and is widely used when the training set contains private information~\cite{jochems2016distributed, google2016federated}.
For example, multiple hospitals can train a model jointly without sharing their patients' medical data~\cite{jochems2017developing, mcmahan2016communication}. Distributed training and collaborative learning have been widely used in large-scale machine learning tasks. However, \textbf{does the ``gradient sharing'' scheme protect the privacy of the training datasets of each participant?} In most scenarios, people assume that gradients are safe to share and will not expose the training data. Some recent studies show that gradients reveal certain properties of the training data, for example, via a property classifier~\cite{Melis2018unintended} (whether a sample with a certain property is in the batch) or by using generative adversarial networks to generate pictures that look similar to the training images~\cite{hitaj2017information, Melis2018unintended, fredrikson2015model_inversion}. Here we consider a more challenging case: can we \textbf{completely} steal the training data from gradients? Formally, given a machine learning model $F()$ and its weights $W$, if we have the gradients $\nabla W$ w.r.t.\ a pair of input and label, can we recover the training data? Conventional wisdom suggests that the answer is no, but we show that this is actually possible. In this work, we demonstrate \emph{Deep Leakage from Gradients} (\textbf{DLG}): sharing the gradients can leak private training data. We present an optimization algorithm that can obtain both the training inputs and the labels in just a few iterations. To perform the attack, we first randomly generate a pair of ``dummy'' inputs and labels and then perform the usual forward and backward passes. After deriving the dummy gradients from the dummy data, instead of optimizing model weights as in typical training, we optimize the dummy inputs and labels to minimize the distance between the dummy gradients and the real gradients (illustrated in Fig.~\ref{fig:methods}).
Matching the gradients makes the dummy data close to the original ones (Fig.~\ref{fig:vision_fig3}). When the optimization finishes, the private training data (both inputs and labels) is fully revealed. Conventional ``shallow'' leakages (property inference~\cite{shokri2017membership, Melis2018unintended} and generative models~\cite{hitaj2017information} using class labels) require extra label information and can only generate similar synthetic images. Our ``deep'' leakage is an optimization process and does not depend on any generative model; therefore, DLG does not require any extra prior about the training set. Instead, it infers the labels from the shared gradients, and the results produced by DLG (both images and texts) are the exact original training samples rather than synthetic look-alikes. We evaluate the effectiveness of our algorithm on both vision (image classification) and language (masked language model) tasks. On various datasets and tasks, DLG fully recovers the training data in just a few gradient steps. To our knowledge, this deep leakage from gradients had not been discovered before, and we want to raise awareness of the need to rethink the safety of gradients. The deep leakage poses a severe challenge to multi-node machine learning systems. The fundamental gradient sharing scheme, as shown in our work, is not always reliable for protecting the privacy of the training data. In centralized distributed training (Fig.~\ref{fig:overview:left}), the parameter server, which usually does not store any training data, is able to steal the local training data of all participants. For decentralized distributed training (Fig.~\ref{fig:overview:right}), it is even worse, since any participant can steal its neighbors' private training data. To prevent the deep leakage, we demonstrate three defense strategies: gradient perturbation, low precision, and gradient compression.
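Two of these defenses, noise perturbation and magnitude-based gradient pruning, can be sketched as simple transformations applied to the shared gradient before communication (a numpy sketch; the helper names and example values are illustrative):

```python
import numpy as np

def perturb(grad, scale, rng):
    """Gradient perturbation: add zero-mean Gaussian noise of a given scale
    to the gradient before sharing it."""
    return grad + rng.normal(0.0, scale, size=grad.shape)

def compress(grad, ratio):
    """Gradient compression: zero out the smallest-magnitude entries,
    pruning roughly a `ratio` fraction of the gradient."""
    k = int(np.ceil(grad.size * ratio))
    thresh = np.partition(np.abs(grad).ravel(), k - 1)[k - 1]
    out = grad.copy()
    out[np.abs(out) <= thresh] = 0.0
    return out

rng = np.random.default_rng(0)
g = np.array([0.5, -0.01, 0.2, 0.003, -0.4])
g_noisy = perturb(g, scale=1e-2, rng=rng)   # gradient perturbation
g_sparse = compress(g, ratio=0.4)           # gradient compression (prune 40%)
```

Both transformations destroy exactly the fine-grained gradient information the attack optimizes against, at some cost to training signal.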
For gradient perturbation, we find that both Gaussian and Laplacian noise with a scale higher than $10^{-2}$ are a good defense. While half precision fails to protect the data, gradient compression successfully defends against the attack when more than 20\% of the gradients are pruned. \begin{figure} \centering \begin{subfigure}[b]{0.53\textwidth} \centering \includegraphics[width=\textwidth]{figures/fig1_left-crop.pdf} \caption{Distributed training with a centralized server} \label{fig:overview:left} \end{subfigure} \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[width=\textwidth]{figures/fig1_right-crop.pdf} \caption{Distributed training without a centralized server} \label{fig:overview:right} \end{subfigure} \caption{The deep leakage scenarios in two categories of classical multi-node training. The little red demon marks the locations where the deep leakage might happen. When performing centralized training, the parameter server is capable of stealing all training data from the gradients received from worker nodes. When training in a decentralized manner (e.g., ring all-reduce~\cite{patarasuk2009ring_all_reduce}), any participant can be malicious and steal the training data from its neighbors.} \label{fig:overview} \end{figure} Our contributions include: \begin{itemize} \item We demonstrate that it is possible to obtain the private training data from the publicly shared gradients. To the best of our knowledge, DLG is the first algorithm achieving this. \item DLG only requires the gradients and can reveal pixel-wise accurate images and token-wise matching texts, whereas conventional approaches usually need extra information to attack and only produce partial properties or synthetic alternatives. \item To prevent potential leakage of important data, we analyze the attack difficulty in various settings and discuss several defense strategies against the attack. \end{itemize} \section{More Deep Leakage Image Examples } \input{supplementary/vision.tex} \end{document}
\section{Introduction} \label{sec:in} \vspace{-0.2cm} Euclidean statistical methods can generally not be used to analyse anatomical shapes because of the non-linearity of shape data spaces. Taking into account non-linearity and curvature of the data space in statistical analysis often requires implementation of concepts from differential geometry. Numerical implementation of even simple concepts in differential geometry is often a complex task requiring manual implementation of long and complicated expressions involving high-order derivatives. We propose to use the Theano framework in Python to make implementation of differential geometry and non-linear statistics algorithms a simpler task. One of the main advantages of Theano is that it can perform symbolic calculations and take symbolic derivatives of even complex constructs such as symbolic integrators. As a consequence, mathematical equations can almost directly be translated into Theano code. For more information on the Theano framework, see~\cite{theano}. Even though Theano makes use of symbolic calculations, it is still able to perform fast computations on high-dimensional data. One of the main reasons why Theano can handle complicated data is its ability to use both the CPU and GPU for calculations. As an example, Fig. \ref{fig:match} shows matching of $20000$ landmarks on two different ellipsoids performed on a $40000$-dimensional landmark manifold. The matching code was implemented symbolically using no explicit GPU code. The paper will discuss multiple concepts in differential geometry and non-linear statistics relevant to computational anatomy and provide corresponding examples of Theano implementations. We start by considering simple theoretical concepts and then move to more complex constructions from sub-Riemannian geometry on fiber bundles.
Examples of the implemented theory will be shown for landmark representations of Corpus Callosum shapes using the Riemannian manifold structure on the landmark space defined in the Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework. \vspace{-0.7cm} \begin{figure}[H] \begin{center} \includegraphics[scale=0.4, trim = 20 20 40 30,clip]{pictures/det_matching.pdf} \caption{Matching of $20000$ landmarks on two ellipsoids. Only the matching curves for $20$ landmarks are plotted to make the plot interpretable. The GPU computation is automatic in Theano and no explicit GPU code is used for the implementation.} \label{fig:match} \end{center} \end{figure} \vspace{-0.2cm} The presented Theano code is available in the Theano Geometry repository \textcolor{blue}{\url{http://bitbucket.org/stefansommer/theanogeometry}} that includes Theano implementations of additional differential geometry, Lie group, and non-linear statistics algorithms. The described implementations are not specific to the LDDMM landmark manifold used for the examples here. The code is completely general and can be directly applied to analysis of data modelled in spaces with different non-linear structures. For more examples of Theano implementations of algorithms directly targeting landmark dynamics, see~\cite{arnaudon_stochastic_2016,arnaudon_geometric_2017}. The paper is structured as follows. Section \ref{sec:LDDMM} gives a short introduction to the LDDMM manifold. Section \ref{sec:geo} concerns the Theano implementation of geodesics as solutions to Hamilton's equations. In Section \ref{sec:chris}, we use Christoffel symbols to define and implement parallel transport of tangent vectors. In Section \ref{sec:FMean}, the Fr\'echet mean algorithm is considered, while stochastics, Brownian motions, and normal distributions are described in Section \ref{sec:Norm}. Section \ref{sec:FMeanfm} gives an example of calculating sample mean and covariance by estimating the Fr\'echet mean on the frame bundle.
The paper ends with concluding remarks. \subsection{Background} \label{sec:LDDMM} \vspace{-0.1cm} The implemented theory is applied to data on a landmark manifold defined in the LDDMM framework~\cite{shapes}. More specifically, we will exemplify the theoretical concepts with landmark representations of Corpus Callosum (CC) shapes. Consider a landmark manifold, $\mathcal{M}$, with elements $q = (x_1^1,x_1^2,\ldots,x_n^1,x_n^2)$ as illustrated in Fig. \ref{fig:data}. In the LDDMM framework, deformation of shapes is modelled as a flow of diffeomorphisms. Let $V$ denote a Reproducing Kernel Hilbert Space (RKHS) of vector fields and let $K$ be the corresponding reproducing kernel, i.e. a vector field $v\in V$ satisfies $v(q) = \langle K_q,v\rangle_V$ for all $q\in\mathcal{M}$ with $K_q = K(.,q)$. Deformation of shapes in $\mathcal{M}$ is then modelled by flows $\varphi_t$ of diffeomorphisms acting on the landmarks. The flow solves the ordinary differential equation $\partial_t\varphi_t = v(t)\circ\varphi_t$, for $v\in V$. With suitable conditions on $K$, the norm on $V$ defines a right-invariant metric on the diffeomorphism group that descends to a Riemannian structure on $\mathcal{M}$. The induced cometric $g^*_q\colon T^*_q\mathcal{M}\times T^*_q\mathcal{M}\to\mathbb R$ takes the form \begin{equation} g^*_q(\nu,\xi) = \sum_{i,j=1}^n \nu_i K(\boldsymbol{x}_i,\boldsymbol{x}_j)\xi_j, \label{eq:metric} \end{equation} where $\boldsymbol{x}_i = (x_i^1,x_i^2)$ for $i\in\{1,\ldots,n\}$. The coordinate matrix of the cometric is $g^{ij}=K(\boldsymbol{x}_i,\boldsymbol{x}_j)$, which results in the metric $g$ having coordinates $g_{ij} = K^{-1}(\boldsymbol{x}_i,\boldsymbol{x}_j)$.
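As a plain numpy illustration (outside the Theano code) of the cometric above, the following sketch builds the kernel matrix $g^{ij}=K(\boldsymbol{x}_i,\boldsymbol{x}_j)$ for a few illustrative landmark positions and inverts it to obtain the metric coordinates $g_{ij}$; the landmark values and kernel width are toy choices:

```python
import numpy as np

def k_gauss(xi, xj, sigma):
    """Gaussian kernel K(x_i, x_j) = exp(-|x_i - x_j|^2 / (2 sigma^2))."""
    d2 = ((xi - xj) ** 2).sum()
    return np.exp(-d2 / (2.0 * sigma ** 2))

def cometric_matrix(q, sigma):
    """Component matrix g^{ij} = K(x_i, x_j) for landmarks q of shape (n, 2)."""
    n = q.shape[0]
    return np.array([[k_gauss(q[i], q[j], sigma) for j in range(n)]
                     for i in range(n)])

q = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # n = 3 toy landmarks
Gsharp = cometric_matrix(q, sigma=1.0)   # cometric coordinates g^{ij}
Gflat = np.linalg.inv(Gsharp)            # metric coordinates g_{ij}
```

For distinct landmark positions the Gaussian kernel matrix is symmetric positive definite, so the inverse defining $g_{ij}$ always exists.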
In the examples, we use $39$ landmarks representing the CC shape outlines, and the kernel used is a Gaussian kernel defined by $K(\boldsymbol{x}_i,\boldsymbol{x}_j) = \exp\left(-\frac{\|\boldsymbol{x}_i-\boldsymbol{x}_j\|^2}{2\sigma^2}\right)$ with the width parameter $\sigma$ set to the average distance between landmarks in the CC data. Samples of CC outlines are shown in the right plot of Fig. \ref{fig:data}. \vspace{-0.4cm} \begin{figure}[H] \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4, trim = 20 20 40 30,clip]{pictures/sample.pdf} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4,trim = 20 20 40 30,clip]{pictures/CCdat.pdf} \end{minipage} \caption{(left) An example of a point in $\mathcal{M}$. (right) A subset of the data considered in the examples of this paper. The black curve represents the mean CC of the data.} \label{fig:data} \end{center} \end{figure} \section{Geodesics} \label{sec:geo} \vspace{-0.2cm} Geodesics on $\mathcal{M}$ can be obtained as solutions to Hamilton's equations, used in Hamiltonian mechanics to describe the change in position and momentum of a particle in a physical system. Let $(U,\varphi)$ be a chart on $\mathcal{M}$ and assume $(\mathcal{M},g)$ is a Riemannian manifold. The Hamiltonian $H$ describes the total amount of energy in the physical system. From the cometric $g^*$, the Hamiltonian can be defined as $H(q,p) = \frac{1}{2}p^Tg^*_qp$, where $g^*_q=(g^{ij})$ is the component matrix of $g^*$ at $q$. Hamilton's equations are given as the system of ordinary differential equations \begin{align*} \dot q_t = \nabla_p H(q_t,p_t), \quad \dot p_t = -\nabla_q H(q_t,p_t).
\end{align*} Using the symbolic derivative feature of Theano, the system of ODEs can be represented and discretely integrated with the following code snippet: \begin{lstlisting} """ Hamiltonian function and equations """ # Hamiltonian function: H = lambda q,p: 0.5*T.dot(p,T.dot(gMsharp(q),p)) # Hamiltonian equations: dq = lambda q,p: T.grad(H(q,p),p) dp = lambda q,p: -T.grad(H(q,p),q) def @ode_Ham@(t,x): dqt = dq(x[0],x[1]) dpt = dp(x[0],x[1]) return T.stack((dqt,dpt)) # Geodesic: Exp = lambda q,v: integrate(ode_Ham,T.stack((q,gMflat(v)))) \end{lstlisting} where \lstinline!gMflat! is the $\flat$ map turning tangent vectors in $T\mathcal{M}$ to elements in $T^\star\mathcal{M}$. \lstinline!integrate! denotes a function that integrates the ODE by finite time discretization. For the examples considered here, we use a simple Euler integration method. Higher-order integrators are available in the implemented repository mentioned in Section \ref{sec:in}. A great advantage of Theano is that such integrators can be implemented symbolically as done below using a symbolic \lstinline!for!-loop specified with \lstinline!theano.scan!. The actual numerical scheme is only available after asking Theano to compile the function. \begin{lstlisting} """ Numerical Integration Method """ def @integrator@(ode_f): def @euler@(*y): t = y[-2] x = y[-1] return (t+dt,x+dt*ode_f(*y)) return euler def @integrate@(ode,x): (cout, updates) = theano.scan(fn=integrator(ode), outputs_info=[T.constant(0.),x], n_steps=n_steps) return cout \end{lstlisting} In the above, \lstinline!integrator! specifies the chosen integration method, in this example the Euler method. As the \lstinline!integrate! function is a symbolic Theano function, symbolic derivatives can be obtained for the integrator, allowing e.g. gradient-based optimization over the initial conditions of the ODE. An example of a geodesic found as the solution to Hamilton's equations is visualized in the right plot of Fig. \ref{fig:geo}.
The initial point $q_0\in\mathcal{M}$ was set to the average CC for the data shown in Fig. \ref{fig:data} and the initial tangent vector $v_0\in T_{q_0}\mathcal{M}$ was given as the tangent vector plotted in Fig. \ref{fig:geo}. \begin{figure} \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4, trim = 20 20 40 30,clip]{pictures/geoCCmean.pdf} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4,trim = 20 20 40 30,clip]{pictures/geoCC.pdf} \end{minipage} \caption{(left) The initial point and tangent vector for the geodesic. (right) A geodesic obtained as solution to Hamilton's equations.} \label{fig:geo} \end{center} \end{figure} The exponential map, $\exp_x\colon T_x\mathcal{M}\to\mathcal{M}$, $x\in\mathcal{M}$ is defined as $\exp_x(v) = \gamma^v_1$, where $\gamma^v_t$, $t\in [0,1]$ is a geodesic with $\dot{\gamma}^v_0=v$. The inverse of the exponential map is called the logarithm map, denoted $\log$. Given two points $q_1,q_2\in\mathcal{M}$, the logarithm map returns the tangent vector $v\in T_{q_1}\mathcal{M}$ that results in the minimal geodesic from $q_1$ to $q_2$, i.e. $v$ satisfies $\exp_{q_1}(v) = q_2$. The logarithm map can be implemented using derivative-based optimization by taking a symbolic derivative of the exponential map, \lstinline!Exp!, implemented above: \begin{lstlisting} """ Logarithm map """ # Loss function for landmarks: loss = lambda v,q1,q2: 1./d*T.sum(T.sqr(Exp(q1,v)-q2)) dloss = lambda v,q1,q2: T.grad(loss(v,q1,q2),v) # Logarithm map: (v0 initial guess) Log = minimize(loss, v0, jac=dloss, args=(q1,q2)) \end{lstlisting} The use of the derivative features provided in Theano to take symbolic derivatives of a discrete integrator makes the implementation of the logarithm map extremely simple. The actual compiled code internally in Theano corresponds to a discrete backwards integration of the adjoint of the Hamiltonian system.
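The shooting strategy behind the \lstinline!Log! function can be illustrated on a manifold with a closed-form exponential map, the unit circle. The following plain numpy sketch (not part of the Theano code) recovers the logarithm by gradient descent on the squared endpoint mismatch, with a finite-difference gradient standing in for Theano's symbolic derivative; angles and step sizes are illustrative:

```python
import numpy as np

def exp_circle(theta, v):
    """Exponential map on S^1: follow the geodesic (arc) of length v
    from the point at angle theta."""
    t = theta + v
    return np.array([np.cos(t), np.sin(t)])

def log_circle(theta1, theta2, steps=200, lr=0.5, eps=1e-6):
    """Recover v with Exp_{q1}(v) = q2 by minimizing |Exp(q1, v) - q2|^2."""
    target = np.array([np.cos(theta2), np.sin(theta2)])
    v = 0.0
    for _ in range(steps):
        def loss(u):
            d = exp_circle(theta1, u) - target
            return float(d @ d)
        g = (loss(v + eps) - loss(v - eps)) / (2 * eps)  # finite difference
        v -= lr * g
    return v

v = log_circle(0.0, 1.2)   # should recover the angle difference 1.2
```

On the landmark manifold the same idea applies, with \lstinline!Exp! replaced by the discretized Hamiltonian flow and the finite difference replaced by the exact symbolic gradient.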
An example of matching shapes by the logarithm map was shown in Fig. \ref{fig:match}. Here two ellipsoids of $20000$ landmarks were matched by applying the above \lstinline!Log! function. \section{Christoffel Symbols} \label{sec:chris} \vspace{-0.2cm} We here describe how Christoffel symbols can be computed and used in the Theano framework. A connection $\nabla$ defines links between tangent spaces on $\mathcal{M}$ and describes how tangent vectors for different tangent spaces relate. Let $(U,\varphi)$ denote a coordinate chart on $\mathcal{M}$ with basis coordinates $\partial_i$, $i=1,\ldots,d$. The connection $\nabla$ is uniquely described by its Christoffel symbols, $\Gamma_{ij}^k$, defined as $\nabla_{\partial_i} \partial_j = \Gamma_{ij}^k\partial_k$. An example of a frequently used connection is the Levi-Civita connection for Riemannian manifolds. Based on the metric $g$ on $\mathcal{M}$, the Levi-Civita Christoffel symbols are found by \begin{equation} \Gamma_{ij}^k = \frac{1}{2}g^{kl}(\partial_i g_{jl} + \partial_j g_{il} - \partial_l g_{ij}). \label{eq:chris} \end{equation} The Theano implementation below of the Christoffel symbols directly translates \eqref{eq:chris} into code: \begin{lstlisting} """ Christoffel Symbols """ ## Cometric: gsharp = lambda q: T.nlinalg.matrix_inverse(g(q)) ## Derivative of metric: Dg = lambda q: T.jacobian(g(q).flatten(),q).reshape((d,d,d)) ## Christoffel symbols: Gamma_gM = lambda q: 0.5*(T.tensordot(gsharp(q),Dg(q),axes = [1,0])\ + T.tensordot(gsharp(q),Dg(q),axes = [1,0]).dimshuffle(0,2,1)\ - T.tensordot(gsharp(q),Dg(q),axes = [1,2])) \end{lstlisting} The connection, $\nabla$, and Christoffel symbols, $\Gamma_{ij}^k$, can be used to define parallel transport of tangent vectors on $\mathcal{M}$. Let $\gamma\colon I\to\mathcal{M}$ be a curve and let $t_0\in I$. A vector field $V$ is said to be parallel along $\gamma$ if the covariant derivative of $V$ along $\gamma$ is zero, i.e. $\nabla_{\dot\gamma_t} V = 0$.
For a tangent vector $v_0=v_0^i\partial_i\in T_{\gamma_{t_0}}\mathcal{M}$, there exists a unique parallel vector field $V$ along $\gamma$ s.t. $V_{t_0} = v_0$. Assume $V_t = v^i(t)\partial_i$; then the vector field $V$ is parallel along $\gamma$ if the coordinates follow the differential equation \begin{equation} \dot v^k(t) + \Gamma_{ij}^k(\gamma_t)\dot\gamma_t^i v^j(t) = 0, \end{equation} with initial values $v^i(0) = v_0^i$. In Theano code, the ODE can be written as \vspace{-0.15cm} \begin{lstlisting} """ Parallel transport """ def @ode_partrans@(gamma,dgamma,t,x): dpt = - T.tensordot(T.tensordot(dgamma, Gamma_gM(gamma), axes = [0,1]),x, axes = [1,0]) return dpt pt = lambda v,gamma,dgamma: integrate(ode_partrans,v,gamma,dgamma) \end{lstlisting} Let $q_0$ be the mean CC plotted in Fig. \ref{fig:geo} and consider $v_1,v_2\in T_{q_0}\mathcal{M}$ s.t. $v_1$ is the vector consisting of $39$ copies (one for each landmark) of $e_1 = (1,0)$ and $v_2$ the vector of $39$ copies of $e_2 = (0,1)$. The tangent vector $v_2$ is shown in Fig. \ref{fig:geo}. Define $\gamma$ as the geodesic calculated in Section \ref{sec:geo} with initial values $(q_0,v_2)$. The parallel transport of $v_1,v_2$ along $\gamma$ is visualized in Fig.~\ref{fig:par}. To make the plot easier to interpret, the parallel transported vectors are only shown for five landmarks. \vspace{-0.2cm} \begin{figure} \begin{center} \includegraphics[scale=0.5, trim = 20 20 40 30,clip]{pictures/partrans.pdf} \caption{Example of parallel transport of basis vectors $v_1$, $v_2$ along the geodesic with initial values $q_0$, $v_2$. The parallel transported vectors are only plotted for $5$ landmarks.} \label{fig:par} \end{center} \end{figure} \section{Fr\'echet Mean} \label{sec:FMean} \vspace{-0.2cm} The Fr\'echet mean~\cite{fréchet1944intégrale} is a generalization of the Euclidean mean value to manifolds. Let $d$ be a distance map on $\mathcal{M}$.
The Fr\'echet mean set is defined as $F(x) = \argmin_{y\in\mathcal{M}}\mathbb{E}[d(y,x)^2]$. For a sample of data points $x_1,\ldots,x_n\in\mathcal{M}$, the empirical Fr\'echet mean is \begin{equation} \label{eq:FMean} F_{\bar{x}} = \argmin_{y\in\mathcal{M}} \frac{1}{n}\sum_{i=1}^n d(y,x_i)^2. \end{equation} Letting $d$ be the Riemannian distance function determined by the metric $g$, the squared distance can be formulated in terms of the logarithm map, defined in Section \ref{sec:geo}, as $d(x,y)^2 = \|\log(x,y)\|^2$. In Theano, the Fr\'echet mean can be obtained by optimizing the function implemented below, again using symbolic derivatives. \begin{lstlisting} """ Frechet Mean """ def @Frechet_mean@(q,y): (cout,updates) = theano.scan(fn=loss, non_sequences=[v0,q], sequences=[y], n_steps=n_samples) return 1./n_samples*T.sum(cout) dFrechet_mean = lambda q,y: T.grad(Frechet_mean(q,y),q) \end{lstlisting} The variable \lstinline!v0! denotes the optimal tangent vector found with the \lstinline!Log! function in each iteration of the optimization procedure. Consider a sample of $20$ observations of the CC data shown in the right plot of Fig. \ref{fig:frechet}. To calculate the empirical Fr\'echet mean on $\mathcal{M}$, the initial point $q_0\in\mathcal{M}$ was set to one of the CC observations plotted in the left plot of Fig. \ref{fig:frechet}. The result of the optimization is shown in Fig. \ref{fig:frechet} (bold outline). \begin{figure} \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4, trim = 20 20 40 30,clip]{pictures/FrechetMeanuv1.pdf} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4,trim = 20 20 40 30,clip]{pictures/mfca17/FrechetMean1.pdf} \end{minipage} \caption{(left) The estimated empirical Fr\'echet mean (black), the initial value (blue) and the Euclidean mean of the $20$ samples (red).
(right) Plot of the $20$ samples of CC, with the Fr\'echet mean shown as the black curve.} \label{fig:frechet} \end{center} \end{figure} So far we have shown how Theano can be used to implement simple and frequently used concepts in differential geometry. In the following sections, we will exemplify how Theano can be used for stochastic dynamics and for implementation of more complex concepts from sub-Riemannian geometry on the frame bundle of $\mathcal{M}$. \section{Normal Distributions and Stochastic Development} \label{sec:Norm} \vspace{-0.2cm} We here consider Brownian motion and normal distributions on manifolds. Brownian motion on Riemannian manifolds can be constructed in several ways. Here, we consider two definitions based on stochastic development and on the coordinate representation of Brownian motion as an It\^o SDE. The first approach \cite{elworthy_geometric_1988} allows anisotropic generalizations of the Brownian motion \cite{sommer_modelling_2015,sommer_anisotropic}, which we will use later. Stochastic processes on $\mathcal{M}$ can be defined by transport of processes from $\mathbb R^m$, $m\leq d$, to $\mathcal{M}$ by the stochastic development map. In order to describe stochastic development of processes onto $\mathcal{M}$, the frame bundle has to be considered. The frame bundle, $F\mathcal{M}$, is the space of points $u=(q,\nu)$ s.t. $q\in\mathcal{M}$ and $\nu$ is a frame for the tangent space $T_q\mathcal{M}$. The tangent space of $F\mathcal{M}$, $TF\mathcal{M}$, can be split into a vertical subspace, $VF\mathcal{M}$, and a horizontal subspace, $HF\mathcal{M}$, i.e. $TF\mathcal{M} = VF\mathcal{M}\oplus HF\mathcal{M}$. The vertical space, $VF\mathcal{M}$, describes changes in the frame $\nu$, while $HF\mathcal{M}$ defines changes in the point $q\in\mathcal{M}$ when the frame $\nu$ is fixed in the sense of having zero acceleration measured by the connection.
The frame bundle can be equipped with a sub-Riemannian structure by considering the distribution $HF\mathcal{M}$ and a corresponding degenerate cometric $g^*_{F\mathcal{M}}\colon T^*F\mathcal{M}\to HF\mathcal{M}$. Let $(U,\varphi)$ denote a chart on $\mathcal{M}$ with coordinates $(x^i)_{i=1,\ldots,d}$ and coordinate frame $\partial_i = \frac{\partial}{\partial x^i}$ for $i=1,\ldots,d$. Let $\nu_\alpha$, $\alpha=1,\ldots,d$, denote the basis vectors of the frame $\nu$. Then $(q,\nu)$ has coordinates $(q^i,\nu_\alpha^i)$, where $\nu_\alpha = \nu_\alpha^i\partial_i$ and $\nu^\alpha_i$ defines the inverse coordinates of $\nu_\alpha$. The coordinate representation of the sub-Riemannian cometric is then given as \begin{equation} (g_{F\mathcal{M}})^{ij} = \begin{pmatrix} W^{-1} & -W^{-1}\Gamma^T \\ -\Gamma W^{-1} & \Gamma W^{-1}\Gamma^T \end{pmatrix}, \end{equation} where $W$ is the matrix with components $W_{ij} = \delta_{\alpha\beta}\nu_i^\alpha\nu_j^\beta$ and $\Gamma = (\Gamma_j^{h_\gamma})$ for $\Gamma_j^{h_\gamma}=\Gamma^h_{ji}\nu_\gamma^i$, with $\Gamma^h_{ji}$ denoting the Christoffel symbols for the connection, $\nabla$. The sub-Riemannian structure restricts infinitesimal movements to be only along horizontal tangent vectors. Let $\pi^*_u\colon T_q\mathcal{M}\to H_u F\mathcal{M}$ denote the horizontal lift of a tangent vector in $T_q\mathcal{M}$ to the horizontal subspace at $u=(q,\nu)$, and let $e\in\mathbb R^d$ be given. A horizontal vector at $u=(q,\nu)$ can be defined as the horizontal lift of the tangent vector $\nu e\in T_q\mathcal{M}$, i.e. $H_e(u) = (\nu e)^*$. A basis for the horizontal subspace at $u\in F\mathcal{M}$ is then defined as $H_i = H_{e_i}(u)$, where $e_1,\ldots,e_d$ denote the canonical basis of $\mathbb R^d$. Let $W_t$ denote a stochastic process on $\mathbb R^m$, $m\leq d$.
A stochastic process $U_t$ on $F\mathcal{M}$ can be obtained as the solution to the Stratonovich stochastic differential equation, $dU_t = \sum_{i=1}^m H_i(U_t)\circ dW^i_t$, with initial point $u_0\in F\mathcal{M}$. A stochastic process on $\mathcal{M}$ can then be defined as the natural projection of $U_t$ to $\mathcal{M}$. In Theano, the stochastic development map is implemented as
\begin{lstlisting}
""" Stochastic Development """
def @sde_SD@(dWt,t,q,nu):
    return T.tensordot(Hori(q,nu), dWt, axes = [1,0])
stoc_dev = lambda q,u,dWt: integrate_sde(sde_SD,
               integrator_stratonovich,q,u,dWt)[1]
\end{lstlisting}
Here, \lstinline!integrate_sde! is a function performing stochastic integration of the SDE; it is defined in a similar manner to \lstinline!integrate! described in Section \ref{sec:geo}. Fig. \ref{fig:stoc} shows an example of the stochastic development of a stochastic process $W_t$ in $\mathbb R^2$ to the landmark manifold. Notice that for $m < d$, only the first $m$ basis vectors $H_i$ are used in the stochastic development. \vspace{-0.4cm} \begin{figure} \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4, trim = 20 20 40 30,clip]{pictures/stoc.pdf} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4,trim = 20 20 40 30,clip]{pictures/stocCC.pdf} \end{minipage} \caption{(left) Stochastic process $W_t$ on $\mathbb R^2$. (right) The stochastic development of $W_t$ on $\mathcal{M}$. The blue points represent the initial point chosen as the mean CC. The red points visualize the endpoint of the stochastic development.} \label{fig:stoc} \end{center} \end{figure} Given the stochastic development map, Brownian motions on $\mathcal{M}$ can be defined as the projection of the stochastic development of Brownian motions in $\mathbb R^d$.
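The routine \lstinline!integrate_sde! itself is not reproduced above. For intuition, a minimal numpy sketch of a Heun (predictor--corrector) scheme, whose limit is the Stratonovich solution, could look as follows; names and signature here are illustrative, not the paper's API.

```python
import numpy as np

def integrate_sde_stratonovich(f, x0, dWs):
    """Heun scheme for the Stratonovich SDE dX_t = f(X_t) o dW_t.

    f(x, dW) returns the increment f(x) dW, and dWs is the sequence of
    Brownian increments.  Returns the full sample path.
    """
    x = np.asarray(x0, dtype=float)
    path = [x]
    for dW in dWs:
        pred = x + f(x, dW)                      # Euler predictor
        x = x + 0.5 * (f(x, dW) + f(pred, dW))   # trapezoidal corrector
        path.append(x)
    return np.array(path)
```

For the development SDE above, `f(u, dW)` would evaluate the horizontal fields, $\sum_i H_i(u)\, dW^i$, at the current point of the frame bundle.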
Defining Brownian motions by stochastic development makes it possible to consider Brownian motions with anisotropic covariance by choosing the initial frame as not being orthonormal. However, if one is only interested in isotropic Brownian motions, a different definition can be applied. In~\cite{sommer_bridge_2017}, the coordinates of a Brownian motion are defined as the solution to the It\^o SDE, \begin{equation} dq_t^i = g_q^{kl}\Gamma_{kl}^i dt + \sqrt{g_q^*}^i dW_t. \label{eq:coordB} \end{equation} This stochastic differential equation is implemented in Theano by the following code.
\begin{lstlisting}
""" Brownian Motion in Coordinates """
def @sde_Brownian_coords@(dW,t,q):
    gMsharpq = gMsharp(q)
    X = theano.tensor.slinalg.Cholesky()(gMsharpq)
    det = T.tensordot(gMsharpq,Gamma_gM(q),((0,1),(0,1)))
    sto = T.tensordot(X,dW,(1,0))
    return (det,sto,X)
Brownian_coords = lambda x,dWt: integrate_sde(sde_Brownian_coords,
    integrator_ito,x,dWt)
\end{lstlisting}
An example of an isotropic Brownian motion obtained by solving $(\ref{eq:coordB})$ is shown in Fig. \ref{fig:norm}. \vspace{-0.4cm} \begin{figure} \begin{center} \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4, trim = 20 20 40 30,clip]{pictures/browncoord.pdf} \end{minipage}% \begin{minipage}{0.5\textwidth} \centering \includegraphics[scale=0.4,trim = 20 20 40 30,clip]{pictures/normsamples.pdf} \end{minipage} \caption{(left) Brownian motion on $\mathcal{M}$. (right) Samples drawn from an isotropic normal distribution defined as the transition distribution of a Brownian motion obtained as a solution to $(\ref{eq:coordB})$.} \label{fig:norm} \end{center} \end{figure} In Euclidean statistical theory, a normal distribution can be considered as the transition distribution of a Brownian motion. A similar definition was described in~\cite{sommer_modelling_2015}: here a normal distribution on $\mathcal{M}$ is defined as the transition distribution of a Brownian motion on $\mathcal{M}$. In Fig.
\ref{fig:norm}, samples drawn from a normal distribution on $\mathcal{M}$ are shown, with the mean set to the average CC shown in Fig. \ref{fig:data} and with isotropic covariance. The Brownian motions are in this example defined in terms of $(\ref{eq:coordB})$. \section{Fr\'echet Mean on Frame Bundle} \label{sec:FMeanfm} \vspace{-0.2cm} A common task in statistical analysis is to estimate the distribution of data samples. If the observations are assumed to be normally distributed, the goal is to estimate the mean vector and covariance matrix. In~\cite{sommer_modelling_2015}, it was proposed to estimate the mean and covariance of a normal distribution on $\mathcal{M}$ by the Fr\'echet mean on the frame bundle. Consider Brownian motions on $\mathcal{M}$ defined as the projected stochastic development of Brownian motions on $\mathbb R^d$. A normal distribution on $\mathcal{M}$ is given as the transition distribution of a Brownian motion on $\mathcal{M}$. The initial point for the stochastic development, $u_0 = (q_0,\nu_0)\in F\mathcal{M}$, corresponds to the mean and covariance, i.e. $q_0\in\mathcal{M}$ denotes the mean shape and $\nu_0$ the covariance of the normal distribution. As a consequence, normal distributions with anisotropic covariance can be obtained by letting $\nu_0$ be a non-orthonormal frame. In Section \ref{sec:FMean}, the Fr\'echet mean on $\mathcal{M}$ was defined as the point $y\in\mathcal{M}$ minimizing the average geodesic distance to the observations. However, as only a sub-Riemannian structure is defined on $F\mathcal{M}$, the logarithm map does not exist and hence the geodesic distance cannot be used to define the Fr\'echet mean on $F\mathcal{M}$. Instead, the distance function will be defined based on the most probable paths (MPPs) defined in~\cite{sommer_anisotropic}. In this section, an algorithm for estimating the mean and covariance of a normal distribution is proposed that differs slightly from the one defined in~\cite{sommer_modelling_2015}.
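For comparison with the frame bundle construction, the ordinary Riemannian Fr\'echet mean of Section \ref{sec:FMean} can be computed by the classical Karcher fixed-point iteration $p \leftarrow \exp_p(\tfrac{1}{n}\sum_i \log_p(y_i))$ whenever exp and log are available in closed form. A small self-contained illustration on the unit sphere $S^2$ (not the landmark manifold used in the paper):

```python
import numpy as np

def sphere_exp(p, v):
    """Riemannian exponential on the unit sphere S^2."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return p
    return np.cos(nv) * p + np.sin(nv) * v / nv

def sphere_log(p, q):
    """Riemannian logarithm on S^2 (q not antipodal to p)."""
    c = np.clip(p @ q, -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    return theta / np.sin(theta) * (q - c * p)

def frechet_mean_sphere(ys, p0, iters=100):
    """Karcher fixed-point iteration for the Frechet mean on S^2."""
    p = p0
    for _ in range(iters):
        p = sphere_exp(p, np.mean([sphere_log(p, y) for y in ys], axis=0))
    return p
```

For points placed symmetrically around the north pole, the iteration converges to the pole, the unique minimizer of the average squared geodesic distance.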
Let $u=(q,\nu)\in F\mathcal{M}$ be given such that $q$ and $\nu$ are the mean and covariance of a normal distribution on $\mathcal{M}$. Assume that observations $y_1,\ldots,y_n\in\mathcal{M}$ are given and let $p_1,\ldots,p_n\in H^* F\mathcal{M}$ be momentum vectors. The Fr\'echet mean on $F\mathcal{M}$ can then be obtained by the optimization \begin{equation*} F_{F\mathcal{M}} = \argmin_{(u,p_1,\ldots,p_n)} \frac{1}{n}\sum_{i=1}^n \|p_i\|_{g^*_{FM}}^2 + \frac{\lambda}{n}\sum_{i=1}^n d_{\mathcal{M}}(\pi(\exp_{u}(p_i^\sharp)),y_i)^2 -\frac{1}{2}\log(\det\nu), \end{equation*} where $\sharp$ denotes the sharp map on $F\mathcal{M}$ changing a momentum vector in $T^* F\mathcal{M}$ to the corresponding tangent vector in $TF\mathcal{M}$. Notice that the optimization is performed over $u\in F\mathcal{M}$, which is the Fr\'echet mean, but also over the momentum vectors $p_1,\ldots,p_n$. When optimizing over these momentum vectors, the geodesics $\exp_{u}(p_i^\sharp)$ become MPPs. The Fr\'echet mean on $F\mathcal{M}$ is implemented in Theano and numpy as
\begin{lstlisting}
""" Frechet Mean on FM """
detg = lambda q,nu: T.nlinalg.Det()(T.tensordot(nu.T,
           T.tensordot(gM(q),nu,axes=(1,0)),axes=(1,0)))
lossf = lambda q1,q2: 1./d.eval()*np.sum((q1-q2)**2)
def @Frechet_meanFM@(u,p,y0):
    q = u[0:d.eval()]
    nu = u[d.eval():].reshape((d.eval(),rank.eval()))
    for i in range(n_samples):
        distv[i,:] = lossf(Expfmf(u,p[i,:])[0:d.eval()],y0)
        normp[i,:] = 2*Hfm(u,p[i,:]) # Hamiltonian on FM
    return 1./n_samples*np.sum(normp) \
        + lambda0/n_samples*np.sum(distv**2) - 1./2*np.log(detgf(x,u))
\end{lstlisting}
\section{Conclusion} \label{sec:con} \vspace{-0.2cm} In the paper, it has been shown how different concepts in differential geometry and non-linear statistics can be implemented using the Theano framework. Integration of geodesics, computation of Christoffel symbols and parallel transport, stochastic development, and Fr\'echet mean estimation were considered and demonstrated on landmark shape manifolds.
In addition, we showed how the Fr\'echet mean on the frame bundle $F\mathcal{M}$ can be computed for estimating the mean and covariance of an anisotropic normal distribution on $\mathcal{M}$. For the cases shown in this paper, Theano has been an effective framework for implementing differential geometry concepts and non-linear statistics in a simple and concise way, while still allowing efficient and fast computations. We emphasize that Theano is able to perform calculations on high-dimensional manifolds using either CPU or GPU computations. In the future, we plan to extend the presented ideas to derive Theano implementations of differential geometry concepts in more general fiber bundle and Lie group geometries. \vspace{0.1cm} \textbf{Acknowledgements.} This work was supported by the Centre for Stochastic Geometry and Advanced Bioimaging (CSGB), funded by a grant from the Villum Foundation. \bibliographystyle{plain} \input{Junepaper.bbl} \end{document}
\section{Introduction} String Theory is a candidate for the theory of all matter and interactions including gravity. But it has a dimensionful parameter, the tension of the string, in its standard formulation. The appearance of a dimensionful string tension from the start appears somewhat unnatural. Previously, however, in the framework of a Modified Measure Theory, a formalism originally used for gravity theories, see for example \cite{d,b}, the tension was derived as an additional degree of freedom \cite{a,c,supermod, cnish, T1, T2}. See also the treatment by Townsend and collaborators \cite{xx,xxx}. This essay is organized as follows. In Section 2 we review the modified-measure theory in the string context. In Section 3 we discuss the fact that this tension generation could take place independently for each world sheet separately, which would mean that the string tension is not a fundamental coupling in nature and could be different for different strings. In Section 4 we discuss possible consequences for physics that are derived from these string theories with dynamical string tension generation. In particular, we review the results obtained when we define a new Tension scalar background field which changes locally the value of the string tension along the world sheets of the strings. When there are many strings with different string tensions, this Tension field can be determined from the requirement of world sheet conformal invariance of all the string world sheets. For two types of string tensions, the outcome depends on the relative sign of the tensions: if the tensions have opposite signs, we obtain non-singular cosmologies and warp space scenarios, while if both tensions are positive, we obtain cosmological and warp space scenarios in which the Hagedorn temperature is avoided in regions of space time with very large string tensions (which means that in those regions the Hagedorn temperature becomes infinite).
\\ \section{The Modified Measure Theory String Theory} The standard world sheet string sigma-model action using a world sheet metric is: \begin{equation}\label{eq:1} S_{sigma-model} = -T\int d^2 \sigma \frac12 \sqrt{-\gamma} \gamma^{ab} \partial_a X^{\mu} \partial_b X^{\nu} g_{\mu \nu}. \end{equation} Here $\gamma^{ab}$ is the intrinsic Riemannian metric on the 2-dimensional string worldsheet and $\gamma = \det(\gamma_{ab})$; $g_{\mu \nu}$ denotes the Riemannian metric on the embedding spacetime. $T$ is a string tension, a dimensionful scale introduced into the theory by hand. \\ From the variations of the action with respect to $\gamma^{ab}$ and $X^{\mu}$ we get the following equations of motion: \begin{equation} \label{eq:tab} T_{ab} = (\partial_a X^{\mu} \partial_b X^{\nu} - \frac12 \gamma_{ab}\gamma^{cd}\partial_cX^{\mu}\partial_dX^{\nu}) g_{\mu\nu}=0, \end{equation} \begin{equation} \label{eq:3} \frac{1}{\sqrt{-\gamma}}\partial_a(\sqrt{-\gamma} \gamma^{ab}\partial_b X^{\mu}) + \gamma^{ab} \partial_a X^{\nu} \partial_b X^{\lambda}\Gamma^{\mu}_{\nu\lambda}=0, \end{equation} where $\Gamma^{\mu}_{\nu\lambda}$ is the affine connection for the external metric. \\ There are no limitations on employing any other measure of integration different from $\sqrt{-\gamma}$. The only restriction is that it must be a density under arbitrary diffeomorphisms (reparametrizations) on the underlying spacetime manifold. The modified-measure theory is an example of such a theory. \\ In the framework of this theory two additional worldsheet scalar fields $\varphi^i (i=1,2)$ are introduced. A new measure density is \begin{equation} \Phi(\varphi) = \frac12 \epsilon_{ij}\epsilon^{ab} \partial_a \varphi^i \partial_b \varphi^j.
\end{equation} Then the modified bosonic string action is (as formulated first in \cite{a} and later discussed and generalized also in \cite{c}) \begin{equation} \label{eq:5} S = -\int d^2 \sigma \Phi(\varphi)(\frac12 \gamma^{ab} \partial_a X^{\mu} \partial_b X^{\nu} g_{\mu\nu} - \frac{\epsilon^{ab}}{2\sqrt{-\gamma}}F_{ab}(A)), \end{equation} where $F_{ab}$ is the field-strength of an auxiliary Abelian gauge field $A_a$: $F_{ab} = \partial_a A_b - \partial_b A_a$. \\ To check that the new action is consistent with the sigma-model one, let us derive the equations of motion of the action (\ref{eq:5}). \\ The variation with respect to $\varphi^i$ leads to the following equations of motion: \begin{equation} \label{eq:6} \epsilon^{ab} \partial_b \varphi^i \partial_a (\gamma^{cd} \partial_c X^{\mu} \partial_d X^{\nu} g_{\mu\nu} - \frac{\epsilon^{cd}}{\sqrt{-\gamma}}F_{cd}) = 0. \end{equation} It implies \begin{equation} \label{eq:a} \gamma^{cd} \partial_c X^{\mu} \partial_d X^{\nu} g_{\mu\nu} - \frac{\epsilon^{cd}}{\sqrt{-\gamma}}F_{cd} = M = const. \end{equation} The equations of motion with respect to $\gamma^{ab}$ are \begin{equation} \label{eq:8} T_{ab} = \partial_a X^{\mu} \partial_b X^{\nu} g_{\mu\nu} - \frac12 \gamma_{ab} \frac{\epsilon^{cd}}{\sqrt{-\gamma}}F_{cd}=0. \end{equation} We see that these equations are the same as in the sigma-model formulation (\ref{eq:tab}), (\ref{eq:3}). Namely, taking the trace of (\ref{eq:8}) we get that $M = 0$. By solving $\frac{\epsilon^{cd}}{\sqrt{-\gamma}}F_{cd}$ from (\ref{eq:a}) (with $M = 0$) we obtain (\ref{eq:tab}). \\ A most significant result is obtained by varying the action with respect to $A_a$: \begin{equation} \epsilon^{ab} \partial_b (\frac{\Phi(\varphi)}{\sqrt{-\gamma}}) = 0. \end{equation} Then by integrating and comparing it with the standard action it is seen that \begin{equation} \frac{\Phi(\varphi)}{\sqrt{-\gamma}} = T.
\end{equation} That is how the string tension $T$ is derived as a world sheet constant of integration, in contrast to the standard action (\ref{eq:1}), where the tension is put in ad hoc. The variation with respect to $X^{\mu}$ leads to the second sigma-model-type equation (\ref{eq:3}). The idea of modifying the measure of integration proved itself effective and profitable. This can be generalized to incorporate supersymmetry, see for example \cite{c}, \cite{cnish}, \cite{supermod}, \cite{T1}. For other mechanisms for dynamical string tension generation from added string world sheet fields, see for example \cite{xx} and \cite{xxx}. However, the fact that this string tension generation is a world sheet effect, and not a universal uniform string tension generation effect for all strings, has not been sufficiently emphasized before; this we do in our next section. \section{Each String in its own world sheet determines its own string tension. Therefore the string tension is not universal for all strings} Let us now observe that it does not appear that the string tension derived above corresponds to ``the'' string tension of the theory. The derivation of the string tension in the previous section holds for a given string; another string could acquire a different string tension. A similar situation takes place in the dynamical string generation proposed by Townsend, for example \cite{xx}; in that paper the world sheet fields include an electromagnetic gauge potential. Its equations of motion are those of the Green-Schwarz superstring, but with the string tension given by the circulation of the world sheet electric field around the string. So again, in \cite{xx}, a given string will determine a given tension, but another string may determine another tension.
If the tension is a universal constant valid for all strings, that would require an explanation in the context of these dynamical tension string theories, for example some kind of interactions that tend to equalize string tensions, or that all strings in the universe originated from the splittings of one primordial string, or some other mechanism. In any case, if one believes in strings, in the light of the dynamical string tension mechanism being a process that takes place at each string independently, we must ask whether all strings have the same string tension. \section{The Tension Scalar and its Consequences} The string tension can be influenced by external fields, see for example \cite{Ansoldi}. Indeed, if to the action of the string we add a coupling to a world-sheet current $j ^{a}$, i.e. a term \begin{equation} S _{\mathrm{current}} = \int d ^{2} \sigma A _{a} j ^{a} , \label{eq:bracuract} \end{equation} then the variation of the total action with respect to $A _{a }$ gives \begin{equation} \epsilon ^{a b} \partial _{a } \left( \frac{\Phi}{\sqrt{- \gamma}} \right) = j ^{b} . \label{eq:gauvarbracurmodtotact} \end{equation} Suppose that we have an external scalar field $\phi (x ^{\mu})$ defined in the bulk. From this field we can define the induced conserved world-sheet current \begin{equation} j ^{b} = e \partial _{\mu} \phi \frac{\partial X ^{\mu}}{\partial \sigma ^{a}} \epsilon ^{a b} \equiv e \partial _{a} \phi \epsilon ^{a b} , \label{eq:curfroscafie} \end{equation} where $e$ is some coupling constant. The interaction of this current with the world sheet gauge field is also invariant under local gauge transformations of the gauge fields in the world sheet, $A _{a} \rightarrow A _{a} + \partial_{a}\lambda $.
For this case, (\ref{eq:gauvarbracurmodtotact}) can be integrated to obtain \begin{equation} T = \frac{\Phi}{\sqrt{- \gamma}} = e \phi + T _{i} , \label{eq:solgauvarbracurmodtotact2} \end{equation} or equivalently \begin{equation} \Phi = \sqrt{- \gamma}( e \phi + T _{i}) . \label{eq:solgauvarbracurmodtotact} \end{equation} The constant of integration $T _{i}$ may vary from one string to the other. Notice that the interaction is metric independent, since the internal gauge field does not transform under the conformal transformations. This interaction therefore does not spoil the world sheet conformal invariance in the case that the field $\phi$ does not transform under this transformation. One may interpret (\ref{eq:solgauvarbracurmodtotact}) as the result of integrating out the internal gauge field classically (through integration of its equations of motion) or quantum mechanically (by functional integration, respecting the boundary condition that characterizes the constant of integration $T _{i}$ for a given string). Then replacing $ \Phi = \sqrt{- \gamma}( e \phi + T _{i})$ back into the remaining terms in the action gives a correct effective action for each string. Each string is going to be quantized with each one having a different $ T _{i}$. The consequences of an independent quantization of many strings with different $ T _{i}$ covering the same region of space time will be studied next.
We can incorporate the result of the tension as a function of the scalar field $\phi$, given as $e\phi+T_i$ for a string with the constant of integration $T_i$, by defining the action that produces the correct equations of motion for such a string, adding also other background fields: the antisymmetric two index field $A_{\mu \nu}$ that couples to $\epsilon^{ab}\partial_a X^{\mu} \partial_b X^{\nu}$ and the dilaton field $\varphi $ that couples to the topological density $\sqrt{-\gamma} R$, \begin{equation}\label{variablestringtensioneffectiveacton} S_{i} = -\int d^2 \sigma (e\phi+T_i)\frac12 \sqrt{-\gamma} \gamma^{ab} \partial_a X^{\mu} \partial_b X^{\nu} g_{\mu \nu} + \int d^2 \sigma A_{\mu \nu}\epsilon^{ab}\partial_a X^{\mu} \partial_b X^{\nu}+\int d^2 \sigma \sqrt{-\gamma}\varphi R . \end{equation} Notice that the same action would apply if we had just one string, or if all strings had the same constant of integration $T_i = T_0$. In any case, it is not our purpose here to do a full generic analysis of all possible background metrics, antisymmetric two index tensor fields and dilaton fields; instead, following \cite{cosmologyandwarped} and \cite{noHagedorn}, we will take cases where the dilaton field is a constant or zero, and the antisymmetric two index tensor field is pure gauge or zero. Then the demand of conformal invariance for $D=26$ (see for example \cite{Polchinski})
becomes the demand that all the metrics, for two types of string tensions ($i=1,2$), \begin{equation}\label{tensiondependentmetrics} g^i_{\mu \nu} = (e\phi+T_i)g_{\mu \nu} \end{equation} will satisfy simultaneously the vacuum Einstein's equations, \begin{equation}\label{Einstein1} R_{\mu \nu} (g^1_{\alpha \beta}) = 0 \end{equation} and, at the same time, \begin{equation}\label{Einstein2} R_{\mu \nu} (g^2_{\alpha \beta}) = 0 \end{equation} These two simultaneous conditions impose a constraint on the tension field $\phi$, because the metrics $g^1_{\alpha \beta}$ and $g^2_{\alpha \beta}$ are conformally related, but Einstein's equations are not conformally invariant, so the condition that Einstein's equations hold for both $g^1_{\alpha \beta}$ and $g^2_{\alpha \beta}$ is highly nontrivial. For these situations we then have \begin{equation}\label{relationbetweentensions} e\phi+T_1 = \Omega(e\phi+T_2) \end{equation} which leads to a solution for $e\phi$, \begin{equation}\label{solutionforphi} e\phi = \frac{\Omega T_2 -T_1}{1 - \Omega} \end{equation} and hence to the tensions of the different strings, \begin{equation}\label{stringtension1} e\phi+T_1 = \frac{\Omega(T_2 -T_1)}{1 - \Omega} \end{equation} and \begin{equation}\label{stringtension2} e\phi+T_2 = \frac{(T_2 -T_1)}{1 - \Omega} \end{equation} There are many solutions with constant $\Omega$: for example, multiplying a Schwarzschild solution by a constant gives another Schwarzschild solution of the vacuum Einstein's equations, and the same is true for Kasner solutions. There are also solutions with two types of strings covering the same region of space time where $\Omega$ is not a constant, and in this case we differentiate between $\Omega$ positive and $\Omega$ negative.
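The algebra leading from (\ref{relationbetweentensions}) to (\ref{stringtension1}) and (\ref{stringtension2}) is elementary and can be verified symbolically, e.g. with sympy:

```python
import sympy as sp

Om, T1, T2 = sp.symbols('Omega T_1 T_2')
ephi = (Om * T2 - T1) / (1 - Om)  # the claimed solution for e*phi

# the conformal relation e*phi + T_1 = Omega (e*phi + T_2) holds
assert sp.simplify(ephi + T1 - Om * (ephi + T2)) == 0
# the resulting tensions of the two string types
assert sp.simplify(ephi + T1 - Om * (T2 - T1) / (1 - Om)) == 0
assert sp.simplify(ephi + T2 - (T2 - T1) / (1 - Om)) == 0
```

All three identities reduce to zero, confirming the displayed expressions.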
To obtain first cosmological solutions, it is useful to consider flat space in the Milne representation; in $D=4$ this reads \begin{equation}\label{Milne4D} ds^2 = -dt^2 + t^{2}(d\chi^2 + sinh^2\chi d\Omega_2^2) \end{equation} where $ d\Omega_2^2 $ represents the contribution of the two angles to the metric when using spherical coordinates, that is, it represents the metric of a two dimensional sphere of unit radius. In $D$ dimensions we will have a similar expression, but now we must introduce the metric of a $D-2$ dimensional unit sphere $ d\Omega_{D-2}^2 $, so we end up with the following metric, which we will take as metric 2: \begin{equation}\label{MilneD1} ds_2^2 = -dt^2 + t^{2}(d\chi^2 + sinh^2\chi d\Omega_{D-2}^2) \end{equation} For metric $1$ we will take the metric that we would obtain from the coordinate transformation $t \rightarrow 1/t $ (using Minkowski coordinates $x^\mu $, this corresponds to the inversion transformation $x^\mu \rightarrow x^\mu/(x^\nu x_\nu ) $; for a review and generalizations see \cite{Kastrup}), and then we furthermore multiply by a constant $\sigma$, so \begin{equation}\label{MilneD2} ds_1^2 =\frac{\sigma}{t^4} (-dt^2 + t^{2}(d\chi^2 + sinh^2\chi d\Omega_{D-2}^2)) \end{equation} Then equations (\ref{relationbetweentensions}), (\ref{solutionforphi}), (\ref{stringtension1}) and (\ref{stringtension2}) apply with $ \Omega= \frac{\sigma}{t^4}$. The strings 1 and 2 both have positive tensions if $\sigma $ is positive, if the sign of $T_2 -T_1$ is positive, and if the solution is not continued before the singularity, which takes place when $ \Omega= \frac{\sigma}{t^4} = 1$. At that point both string tensions approach infinity, and we have the possibility of avoiding the Hagedorn temperature in the early universe. If $\sigma$ is negative, we obtain a non-singular cosmology with negative tensions dominating in the early universe and positive tensions dominating in the late universe.
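The claimed early/late behaviour for negative $\sigma$ follows directly from (\ref{stringtension1}) and (\ref{stringtension2}) with $\Omega = \sigma/t^4$; a quick numerical check, using illustrative values $T_1 = 0$, $T_2 = 1$:

```python
def tensions(t, sigma, T1, T2):
    """String tensions e*phi + T_1 and e*phi + T_2 for Omega = sigma / t**4."""
    Om = sigma / t**4
    return Om * (T2 - T1) / (1 - Om), (T2 - T1) / (1 - Om)

early = tensions(0.1, -1.0, 0.0, 1.0)   # Omega = -1e4: early universe
late = tensions(10.0, -1.0, 0.0, 1.0)   # Omega = -1e-4: late universe
assert early[0] < 0 and abs(early[1]) < 1e-3  # negative tension dominates early
assert late[1] > 0 and abs(late[0]) < 1e-3    # positive tension dominates late
```

At early times the type-1 (negative) tension dominates, while at late times the type-2 (positive) tension does, with no singular point in between since $\Omega < 0$ never reaches $1$.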
The metric $g_{\mu \nu}$ contains a bounce, \begin{equation}\label{universalmetric} ds^2 =g_{\mu \nu}dx^{\mu}dx^{\nu} = (\frac{{1 - \Omega}}{T_2 -T_1})(-dt^2 + t^{2}(d\chi^2 + sinh^2\chi d\Omega_{D-2}^2)) \end{equation} considering that $\Omega = \frac{\sigma}{t^4} = \frac{-K}{t^4}$, where $K$ is positive. The coefficient of the hyperbolic $D-1$ dimensional metric $d\chi^2 + sinh^2\chi d\Omega_{D-2}^2$ is then $\frac{{t^{2} + \frac{K}{t^2}}}{T_2 -T_1}$, showing a contraction, a bounce and a subsequent expansion. The initial and final spacetimes are flat and satisfy the vacuum Einstein's equations, but the full space does not, with the most appreciable deviations from Einstein's equations at the bouncing time, $t =t_{*}= K^{1/4}$. One can also consider warped spaces, such as the ones discussed by Wesson; these are solutions of the higher dimensional Einstein's equations. In five dimensions, for example, the following warped solution is found, \begin{equation}\label{Wesson1} ds^2 =l^2dt^2 -l^{2} cosh^{2}t (\frac{dr^2}{1-r^{2}} + r^2 d\Omega_{2}^2) - dl^{2} \end{equation} where $l$ is the fourth spatial dimension. We see that such a solution is homogeneous of degree two in $l$, just as the Milne spacetime was homogeneous of degree two with respect to the time. Notice that maximally symmetric de Sitter spacetimes appear as the subspaces $l =$ constant, instead of the Euclidean spheres that appear in the Milne universe for $t =$ constant.
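That the bounce occurs at $t_{*} = K^{1/4}$ follows from minimizing the coefficient $t^{2} + K/t^{2}$ (note that with $\Omega = -K/t^4$ one has $1-\Omega = 1 + K/t^4$); this can be confirmed symbolically:

```python
import sympy as sp

t, K = sp.symbols('t K', positive=True)
coeff = t**2 + K / t**2  # scale factor squared, up to the 1/(T_2 - T_1) factor
crit = sp.solve(sp.diff(coeff, t), t)

# the unique positive critical point satisfies t**4 = K, i.e. t_* = K**(1/4)
assert len(crit) == 1
assert sp.simplify(crit[0]**4 - K) == 0
# the minimal value of the coefficient at the bounce is 2*sqrt(K)
assert sp.simplify(coeff.subs(t, K**sp.Rational(1, 4)) - 2 * sp.sqrt(K)) == 0
```

The coefficient decreases for $t < t_{*}$ and increases afterwards, which is the contraction-bounce-expansion behaviour described above.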
The list of space times of this type is quite large; for example, one can find solutions of empty GR with Schwarzschild de Sitter subspaces for $l =$ constant, as in \begin{equation}\label{Wesson2} ds^2 =\frac{\Lambda l^{2}}{3} (dt^2 (1-\frac{2M}{r} -\frac{\Lambda r^2}{3}) - \frac{dr^2}{1-\frac{2M}{r} -\frac{\Lambda r^2}{3}} - r^2 d\Omega_{2}^2) - dl^{2} \end{equation} This of course can be extended to $D$ dimensions, where we choose one dimension $l$ to provide an $l^2$ warp factor for the other dimensions; generically, in $D$ dimensions, \begin{equation}\label{Wessongeneric} ds_2^2 =l^{2}\bar{g}_{\mu \nu}(x)dx^{\mu}dx^{\nu} - dl^{2} \end{equation} where $\bar{g}_{\mu \nu}(x)$ is a $D-1$ dimensional Schwarzschild de Sitter metric, for example \cite{Wesson}. This we will take as our metric $2$. In any case, working with this generic metric of the form (\ref{Wessongeneric}), but now in $D$ dimensions, we can perform the inversion transformation $l \rightarrow \frac{1}{l} $, and, multiplying also by a factor $ \sigma$, obtain the conformally transformed metric $1$ that also satisfies the vacuum Einstein's equations, \begin{equation}\label{Wessongenericinverted} ds_1^2 = \sigma l^{-2}\bar{g}_{\mu \nu}(x)dx^{\mu}dx^{\nu} - \sigma\frac{dl^{2}}{l^{4}} = \sigma l^{-4}ds_2^2 \end{equation} From this point on, the solutions for the tensions of the strings of types $1$ and $2$ behave similarly to the cosmological case, with $t \rightarrow l$, so that now $\Omega= \sigma l^{-4}$; we simply insert this expression for $\Omega$ in (\ref{stringtension1}) and in (\ref{stringtension2}).
Here, as in the cosmological case, we can distinguish two qualitatively different cases. In the first, the string tensions of the two types of strings have different signs, that is, $\sigma$ is negative; then there is no singularity for the metrics or the string tensions, and negative string tensions dominate at one limit of $l$ while positive string tensions dominate at the other limit. The other case is when $\sigma$ is positive and we take the two types of string tensions positive; here there is a value of $l$ where the string tensions approach infinity, which means that the Hagedorn temperature also approaches infinity at this point, so strings can escape the Hagedorn phase transition by getting close to this value of $l$. Just as in the cosmological case, when $\sigma$ is positive and we take the two types of string tensions positive, strings in the early universe are also able to escape the Hagedorn temperature. One final comment on the methodology for finding the solutions: we indeed consider two conformally related space times which are obtained from each other by a coordinate transformation. There is, however, no local coordinate transformation that can bring the two spacetimes to the Minkowski form simultaneously. To see this, one may consider the ratio $\frac{\sqrt{-g^1}}{\sqrt{-g^2}} = \Omega^{D/2}$, so $\Omega$ is a coordinate invariant and $\neq 1$. We notice finally that in the case that both the metric $1$ and the metric $2$ are flat, as in our studies of cosmological solutions, or, for the warped spaces, in the case that the warped space (\ref{Wessongeneric}) is defined with $\bar{g}_{\mu \nu}(x)$ being de Sitter rather than the more generic Schwarzschild de Sitter case, the solutions are most likely exact and not just valid to first order in the slope, since not only does the Ricci tensor vanish but also the Riemann curvature, and so will all higher curvature corrections to the beta function.
\textbf{Acknowledgments} I thank Oleg Andreev, David Andriot, Stefano Ansoldi, David Benisty, Hitoshi Nishino, Emil Nissimov, Svetlana Pacheva, Subhash Rajpoot, Euro Spallucci and Tatiana Vulfs for useful discussions. I also want to thank FQXi and the COST actions, CA18108 and CA16104 for support. \\
\section{Introduction} Let $G$ be a topological group and let $X$ be a compact space. A continuous action $G\curvearrowright X$ is called a {\em $G$-flow} (or just a flow, if the group $G$ is understood from the context). A \emph{$G$-map} between two flows $G\curvearrowright X$ and $G\curvearrowright Y$ is a map $f:X\to Y$ such that for every $g\in G$ and $x\in X$ we have $f(gx)=gf(x).$ A flow is {\em minimal} if all of its orbits are dense. It is a general result in topological dynamics, due to Ellis, that for any topological group $G$ there is a {\em universal minimal flow} $M(G)$, that is, a minimal flow $G\curvearrowright M(G)$ such that for any minimal flow $G\curvearrowright X$ there is a continuous $G$-map from $M(G)$ onto $X$. For a compact group $G$, the flow $G\curvearrowright M(G)$ can be identified with the action of $G$ on itself by left translations. If $G$ is locally compact, but not compact (such as $G$ discrete), $M(G)$ is a very large space; in particular, it is always non-metrizable. For example, when $G$ is the set of integers $\mathbb{Z}$, $M(G)$ is the Gleason space of $2^{\mathfrak{c}},$ that is, the Stone space of the Boolean algebra of regular open sets of $2^{\mathfrak{c}},$ where $\mathfrak{c}$ is the cardinality of the real numbers. As it turned out in the last 10-20 years, many groups that have been studied by descriptive set-theorists and model-theorists have metrizable universal minimal flows that can be computed explicitly, and a surprising number of them have a trivial universal minimal flow; we call such groups {\em extremely amenable}. Pestov \cite{P} applied the finite Ramsey theorem to show that ${\rm Aut}(\mathbb{Q},<)$, the group of order-preserving bijections of the rationals with the pointwise convergence topology, is extremely amenable.
Glasner and Weiss \cite{GW} used the finite Ramsey theorem to identify the universal minimal flow of $S_\infty$, the group of all permutations of the natural numbers $\mathbb{N}$, with its canonical action on the space of all linear orderings on $\mathbb{N}$. In 2005, Kechris, Pestov, and Todorcevic \cite{KPT} developed a powerful general tool for computing universal minimal flows of automorphism groups of countable model-theoretic structures by establishing a strong connection between the dynamics of such groups and the structural Ramsey theory. The focus of our paper is on homeomorphism groups of compact spaces. We compute the universal minimal flow of the homeomorphism group $H(L)$ of the Lelek fan $L$. We will show that it is equal to the action of $H(L)$ on the space of all maximal chains on $L$ consisting of continua containing the top point of $L$, which is induced from the evaluation action of $H(L)$ on $L$. An important motivation for our work was the following (still open) question due to Uspenskij from 2000. \begin{question}[Uspenskij, \cite{U}] Let $P$ be the pseudo-arc and let $H(P)$ be its homeomorphism group. What is the universal minimal flow of $H(P)$? In particular, is the action $H(P)\curvearrowright P$ given by $(h,x)\mapsto h(x)$ the universal minimal flow of $H(P)$? \end{question} Both the pseudo-arc and the Lelek fan are well-known, highly homogeneous continua that can be constructed as natural quotients of projective Fra\"{i}ss\'{e} limits of finite structures. 
Pestov~\cite{P} showed as a consequence of the Ramsey theorem that the group of increasing homeomorphisms of the unit interval is extremely amenable and identified the universal minimal flow of the orientation preserving homeomorphisms of the circle $S^1$ with its natural action on $S^1.$ Glasner and Weiss \cite{GW2} proved that the universal minimal flow of the homeomorphism group of the Cantor set is the action on the space of maximal chains of closed subsets of the Cantor set, which is induced from the evaluation action. These seem to be the only examples of homeomorphism groups for which the universal minimal flow was computed. In each of these examples, the description of the universal minimal flow follows directly from a description of the universal minimal flow of a certain automorphism group of a countable structure (rational numbers in the case of $S^1$ and $[0,1]$, and the countable atomless Boolean algebra in the case of the Cantor set). Universal minimal flows of homeomorphism groups of many very simple compact spaces, such as $[0,1]^2$ or the sphere $S^2$, are unknown. Unlike for the automorphism groups of countable structures, there are no general techniques to compute universal minimal flows of homeomorphism groups of compact spaces. To obtain the universal minimal flow of $H(L)$, we will use our earlier construction, presented in~\cite{BK}, of the Lelek fan $L$ as a quotient of a projective Fra\"{i}ss\'{e} limit $\mathbb{L}$. First, we compute the universal minimal flow of the automorphism group ${\rm Aut}(\mathbb{L})$; we will use tools provided by Kechris, Pestov, and Todorcevic \cite{KPT} and a new Ramsey theorem, which we prove using the Dual Ramsey theorem \cite{GR}. Second, by relating $\mathbb{L}$ and $L$, we compute the universal minimal flow of $H(L)$. This second step is novel and nontrivial, and we hope it will find applications to homeomorphism groups of other compact spaces. 
\section{Discussion of results} A {\em continuum} is a compact connected metric space. Denoting by $C$ the Cantor set and by $[0,1]$ the unit interval, the {\em Cantor fan} is the quotient of $C\times [0,1]$ by the equivalence relation $\sim$ given by $(a,b)\sim (c,d)$ if and only if either $(a,b)=(c,d)$ or $b=d=0.$ For a continuum $X,$ a point $x\in X$ is an {\em endpoint} of $X$ if for every homeomorphic embedding $h:[0,1]\to X$ with $x$ in the image of $h$, either $x=h(0)$ or $x=h(1).$ The {\em Lelek fan} $L$, constructed by Lelek in \cite{L}, can be characterized as the unique non-degenerate subcontinuum of the Cantor fan whose endpoints are dense in $L$ (see \cite{BO} and \cite{C}). Denote by $v$ the {\em top} (which we will also sometimes call the {\em root}) $(0,0)/\!\!\sim$ of the Lelek fan. If $K$ is a compact topological space, a chain $\mathcal{C}$ on $K$ is a family of closed subsets of $K$ such that for every $C_1, C_2\in\mathcal{C}$, either $C_1\subset C_2$ or $C_2\subset C_1$. We say that a chain $\mathcal{C}$ is maximal if for every closed set $C\subset K$, if $\{C\}\cup\mathcal{C}$ is a chain then $C\in\mathcal{C}$. The set ${\rm Exp}(K)$ of all closed subsets of a compact topological space $K$ equipped with the Vietoris topology is a compact space, which we introduce in Section~\ref{sec:chain}. Let $Y^*\subset {\rm Exp (Exp}(L))$ be the space of all maximal chains $\mathcal{C}$ on $L$ such that each $C\in\mathcal{C}$ is connected and contains the root of $L$. This space is compact, which we prove in Proposition~\ref{ccompact}, and the natural action of $H(L)$ -- the homeomorphism group of the Lelek fan $L$ -- on $L$ given by $(g,x)\mapsto g(x)$ induces an action on ${\rm Exp}(L)$, which in turn induces an action on ${\rm Exp (Exp}(L))$ that leaves $Y^*$ invariant. 
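To fix intuition, we note one explicit element of $Y^*$ (an illustration only, using the Cantor fan coordinates; it will not be needed later):

```latex
% View L inside the Cantor fan and write |x| for the second
% coordinate of x, i.e. its height above the top v.  For t in [0,1] set
C_t=\{x\in L\colon |x|\le t\}.
% Each C_t is a closed connected subset of L containing v, and the
% chain \{C_t\colon t\in[0,1]\} is maximal: any closed set D
% comparable with every C_t equals C_s for s=\sup\{t\colon C_t\subseteq D\}.
```

A comparability argument of this kind reappears in Section~\ref{sec:chain}.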
The main result of this article is the following: \begin{thm}\label{thumf} The universal minimal flow of $H(L)$ -- the homeomorphism group of the Lelek fan $L$ -- is \[ H(L)\curvearrowright Y^*. \] \end{thm} To prove Theorem \ref{thumf}, we will first find a ``quotient'' description of the universal minimal flow of $H(L)$. Let $H$ be the closed subgroup of $H(L)$ consisting of homeomorphisms that preserve the ``generic'' maximal chain in $Y^*$. Such a chain is constructed explicitly: we expand the projective Fra\"{i}ss\'e family $\mathcal{F}$ of finite fans, whose limit gives the Lelek fan, to the projective Fra\"{i}ss\'e family $\mathcal{F}_c$ of finite fans expanded by a maximal chain of connected sets containing the root. The limit of this new family gives the Lelek fan equipped with the required ``generic'' chain. The details and necessary definitions are contained in the next sections. The quotient space $H(L)/H$ is precompact in the quotient of the right uniformity on $H(L)$, and consequently its completion $\widehat{H(L)/H}$ is compact. The group $H(L)$ acts on itself by left translations. This action induces an action on the quotient, which extends to the completion. Theorem \ref{thumf} will follow from Theorem \ref{thumf2}. \begin{thm}\label{thumf2} The universal minimal flow of $H(L)$ -- the homeomorphism group of the Lelek fan $L$ -- is \[ H(L)\curvearrowright\widehat{H(L)/H}. \] \end{thm} Let $\mathbb{L}$ and $\mathbb{L}_c$ be the projective Fra\"{i}ss\'{e} limits of $\mathcal{F}$ and $\mathcal{F}_c$ respectively, and let $ {\rm Aut}(\mathbb{L})$ and $ {\rm Aut}(\mathbb{L}_c)$ be their automorphism groups. In Section \ref{extamen}, we show that ${\rm Aut}(\mathbb{L}_c) $ is extremely amenable, and in Section \ref{umfa}, we provide two equivalent descriptions of the universal minimal flow of $ {\rm Aut}(\mathbb{L})$. We prove our main result in Section \ref{umfh}. 
\section{Preliminaries}\label{prelim} We first review the Fra\"{i}ss\'{e} and the projective Fra\"{i}ss\'{e} constructions, as well as the construction of the Lelek fan in the projective Fra\"{i}ss\'{e} framework that the authors introduced in \cite{BK} (Sections \ref{sec:fra}, \ref{pff}, and \ref{sec:lelek}). We then discuss topics specifically relevant to studying the universal minimal flow of the homeomorphism group of the Lelek fan: maximal chains on compact spaces (Section \ref{sec:chain}), uniform spaces (Section \ref{pus}), and the Kechris-Pestov-Todorcevic correspondence for Fra\"{i}ss\'{e}-HP families (Section \ref{sec:KPT}). \subsection{Fra\"{i}ss\'{e} families}\label{sec:fra} Given a first-order language $\mathcal{L}$ that consists of relation symbols $r_i$, with arity $m_i$, $i\in I$, and function symbols $f_j$, with arity $n_j$, $j\in J$, and two structures $A$ and $B$ in $\mathcal{L}$, say that $i: A\to B$ is an {\em embedding} if it is an injection such that for a function symbol $f$ in $\mathcal{L}$ of arity $n$ and $x_1,\ldots,x_n\in A$ we have $i( f^A(x_1,\ldots,x_n))=f^B(i(x_1),\ldots,i(x_n))$; and for a relation symbol $r$ in $\mathcal{L}$ of arity $m$ and $x_1,\ldots,x_m\in A$ we require $ r^A(x_1,\ldots,x_m)$ iff $r^B(i(x_1),\ldots,i(x_m))$. For a relation symbol $r\in\mathcal{L}$ with arity $k$ and a function $f: A\to B$, say that $f$ is {\em $r$-preserving} if for every $x_1,\ldots,x_k\in A$ we have $ r^A(x_1,\ldots,x_k)$ iff $r^B(f(x_1),\ldots,f(x_k))$. A countable first-order structure $M$ in $\mathcal{L}$ is {\em locally finite} if every finite subset of $M$ generates a finite substructure. It is {\em ultrahomogeneous} if every isomorphism between finite substructures of $M$ can be extended to an automorphism of $M$. 
In that case, $\mathcal{F}={\rm Age}(M)$, the family of all finite substructures of $M$, has the following three properties: the {\em hereditary property} {\em (HP)}, that is, if $A\in\mathcal{F}$ and $B$ is a substructure of $A$, then $B\in\mathcal{F}$; the {\em joint embedding property} {\em (JEP)}, that is, for any $A,B\in\mathcal{F}$ there is $C\in\mathcal{F}$ such that both $A$ and $B$ embed into $C$; and the {\em amalgamation property} {\em (AP)}, that is, for any $A,B_1,B_2\in\mathcal{F}$ and any embeddings $\phi_1: A\to B_1$ and $\phi_2: A\to B_2$, there exist $C\in\mathcal{F}$ and embeddings $\psi_1: B_1\to C$ and $\psi_2: B_2\to C$ such that $\psi_1\circ\phi_1=\psi_2\circ\phi_2$. Conversely, by a classical theorem due to Fra\"{i}ss\'{e}, if a countable family of finite structures $\mathcal{F}$ in some language $\mathcal{L}$ has the HP, the JEP and the AP, then there is a unique countable locally finite ultrahomogeneous structure $M$ such that $\mathcal{F}={\rm Age}(M)$. In this paper, we will call a countable family of finite structures that satisfies the JEP and the AP a {\em Fra\"{i}ss\'{e}-HP family} (read as Fra\"{i}ss\'{e} minus HP family), and we will call a countable family of finite structures that satisfies the HP, the JEP, and the AP a {\em Fra\"{i}ss\'{e} family}. A {\em Fra\"{i}ss\'{e} limit} of a Fra\"{i}ss\'{e} family $\mathcal{F}$ is a countable locally finite ultrahomogeneous structure $M$ such that $\mathcal{F}={\rm Age}(M)$, and a {\em Fra\"{i}ss\'{e} limit} of a Fra\"{i}ss\'{e}-HP family $\mathcal{F}$ is a countable structure $M$ such that every structure in $\mathcal{F}$ embeds into $M$, for every finite subset $X$ of $M$ there is $A\in\mathcal{F}$ and an embedding $i:A\to M$ such that $X\subset i(A)$, and $M$ is ultrahomogeneous with respect to $\mathcal{F}$, that is, every isomorphism between finite substructures of $M$ which are isomorphic to a structure in $\mathcal{F}$ can be extended to an automorphism of $M$. 
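As a minimal illustration of the amalgamation property (a toy example of our own, not needed later), consider the family of finite linear orders:

```latex
% Amalgamating B_1 and B_2 over A in the family of finite linear orders:
A=\{a\},\qquad B_1=\{a<b\},\qquad B_2=\{c<a\},
% with \phi_1\colon A\to B_1 and \phi_2\colon A\to B_2 the inclusions.
% Taking
C=\{c<a<b\}
% and the inclusions \psi_1\colon B_1\to C and \psi_2\colon B_2\to C,
% we get \psi_1\circ\phi_1=\psi_2\circ\phi_2 (both send a to a).
```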
Clearly every Fra\"{i}ss\'{e} family is also a Fra\"{i}ss\'{e}-HP family. If $\mathcal{F}$ is a Fra\"{i}ss\'{e} family or it is a Fra\"{i}ss\'{e}-HP family, the Fra\"{i}ss\'{e} limit of $\mathcal{F}$ always exists and it is unique up to an isomorphism. For example, the rationals with the usual ordering is the Fra\"{i}ss\'{e} limit of the family of finite linear orders, the Rado graph is the Fra\"{i}ss\'{e} limit of the family of finite graphs, and the countable atomless Boolean algebra is the Fra\"{i}ss\'{e} limit of the family of finite Boolean algebras. We say that a family $\mathcal{G}_1$ is {\em cofinal } in a family $\mathcal{G}_2$ if for every $A\in\mathcal{G}_2$ there are $B\in\mathcal{G}_1$ and an embedding $\phi: A\to B$. \begin{rem}\label{cofinall} {\rm Suppose that a family $\mathcal{G}_1$ is contained in and cofinal in a Fra\"{i}ss\'{e}-HP family $\mathcal{G}_2$. Then $\mathcal{G}_1$ is also a Fra\"{i}ss\'{e}-HP family and moreover Fra\"{i}ss\'{e} limits of $\mathcal{G}_1$ and of $\mathcal{G}_2$ are isomorphic. } \end{rem} \subsection{Projective Fra\"{i}ss\'{e} families}\label{pff} Given a first-order language $\mathcal{L}$ that consists of relation symbols $r_i$, with arity $m_i$, $i\in I$, and function symbols $f_j$, with arity $n_j$, $j\in J$, a \emph{topological $\mathcal{L}$-structure} is a compact zero-dimensional second-countable space $A$ equipped with closed (in the product topology) relations $r_i^A\subset A^{m_i}$ and continuous functions $f_j^A: A^{n_j}\to A$, $i\in I, j\in J$. 
A~continuous surjection $\phi: B\to A$ between two topological $\mathcal{L}$-structures is an \emph{epimorphism} if it preserves the structure, that is, for a function symbol $f$ in $\mathcal{L}$ of arity $n$ and $x_1,\ldots,x_n\in B$ we require: \[ f^A(\phi(x_1),\ldots,\phi(x_n))=\phi(f^B(x_1,\ldots,x_n)); \] and for a relation symbol $r$ in $\mathcal{L}$ of arity $m$ and $x_1,\ldots,x_m\in A$ we require: \begin{equation*} \begin{split} & r^A(x_1,\ldots,x_m) \\ &\iff \exists y_1,\ldots,y_m\in B\left(\phi(y_1)=x_1,\ldots,\phi(y_m)=x_m, \mbox{ and } r^B(y_1,\ldots,y_m)\right). \end{split} \end{equation*} By an \emph{isomorphism} we mean a bijective epimorphism. Let $\mathcal{G}$ be a countable family of finite topological $\mathcal{L}$-structures. We say that $\mathcal{G}$ is a \emph{ projective Fra\"{i}ss\'{e} family} if the following two conditions hold: (JPP) (the joint projection property) for any $A,B\in\mathcal{G}$ there are $C\in \mathcal{G}$ and epimorphisms from $C$ onto $A$ and from $C$ onto $B$; (AP) (the amalgamation property) for $A,B_1,B_2\in\mathcal{G}$ and any epimorphisms $\phi_1: B_1\to A$ and $\phi_2: B_2\to A$, there exists $C\in\mathcal{G}$ with epimorphisms $\psi_1: C\to B_1$ and $\psi_2: C\to B_2$ such that $\phi_1\circ \psi_1=\phi_2\circ \psi_2$. 
A topological $\mathcal{L}$-structure $\mathbb{G}$ is a \emph{projective Fra\"{i}ss\'{e} limit} of a projective Fra\"{i}ss\'{e} family $\mathcal{G}$ if the following three conditions hold: (L1) (the projective universality) for any $A\in\mathcal{G}$ there is an epimorphism from $\mathbb{G}$ onto~$A$; (L2) for any finite discrete topological space $X$ and any continuous function $f: \mathbb{G} \to X$ there are $A\in\mathcal{G}$, an epimorphism $\phi: \mathbb{G}\to A$, and a function $f_0: A\to X$ such that $f = f_0\circ \phi$; (L3) (the projective ultrahomogeneity) for any $A\in \mathcal{G}$ and any epimorphisms $\phi_1: \mathbb{G}\to A$ and $\phi_2: \mathbb{G}\to A$ there exists an isomorphism $\psi: \mathbb{G}\to \mathbb{G}$ such that $\phi_2=\phi_1\circ \psi$. \begin{rem}\label{coveri} {\rm It follows from (L2) above that if $\mathbb{G}$ is the projective Fra\"{i}ss\'{e} limit of $\mathcal{G}$, then every finite open cover can be {\em refined by an epimorphism}, i.e., for every finite open cover $\mathcal{U}$ of $\mathbb{G}$ there is an epimorphism $\phi:\mathbb{G}\to A$, for some $A\in\mathcal{G}$, such that for every $a\in A$, $\phi^{-1}(a)$ is contained in an open set in $\mathcal{U}$. } \end{rem} \begin{thm}[Irwin-Solecki, \cite{IS}]\label{is} Let $\mathcal{G}$ be a projective Fra\"{i}ss\'{e} family of finite topological $\mathcal{L}$-structures. Then: \begin{enumerate} \item there exists a projective Fra\"{i}ss\'{e} limit of $\mathcal{G}$; \item any two projective Fra\"{i}ss\'{e} limits of $\mathcal{G}$ are isomorphic. \end{enumerate} \end{thm} The theorem below is folklore; nevertheless, it has not been published. It says that the projective Fra\"{i}ss\'{e} theory is a special case of the (injective) Fra\"{i}ss\'{e} theory via a generalization of Stone duality. 
\begin{thm}\label{dual} For a projective Fra\"{i}ss\'{e} family $\mathcal{F}$ in a relational language, with projective Fra\"{i}ss\'{e} limit $\mathbb{F}$, there is a Fra\"{i}ss\'{e}-HP family $\mathcal{G}$, with Fra\"{i}ss\'{e} limit $\mathbb{G}$, that is equivalent to it via a contravariant functor (defined on $\mathcal{F}\cup\{\mathbb{F}\}$ and on all epimorphisms between structures in $\mathcal{F}\cup\{\mathbb{F}\}$). \end{thm} Theorem \ref{dual} will follow from Proposition \ref{stone1}, a generalization of the classical Stone duality between Boolean algebras with embeddings and compact totally disconnected spaces with continuous surjections, which we recall here. In this paper, we will not consider $\mathcal{F}$ in a language that contains function symbols. \begin{prop}\label{stone0} The family $\mathcal{F}_0$ of compact totally disconnected spaces with continuous surjections is equivalent via a contravariant functor to the family $\mathcal{G}_0$ of Boolean algebras with embeddings. \end{prop} In the Stone duality, to $K\in\mathcal{F}_0$ we associate the Boolean algebra ${\rm Clop}(K)$ of clopen sets of $K$ with the usual operations: the union $\cup^{{\rm Clop}(K)}$, the intersection $\cap^{{\rm Clop}(K)}$, and the complement $^{-{\rm Clop}(K)}$, where 0 is the empty set and 1 is identified with $K$; and to a continuous surjection $f:L\to K$ we associate the embedding $F:{\rm Clop}(K)\to {\rm Clop}(L)$ given by $F(X)=f^{-1}(X)$. \begin{prop}\label{stone1} Let $\mathcal{L}$ be a relational language and let $\mathcal{F}_1$ be a family of topological $\mathcal{L}$-structures, where the maps between structures are epimorphisms. Then there is a family $\mathcal{G}_1$ of countable structures in the language equal to the union of the language of Boolean algebras and of $\mathcal{L}$, where the maps between structures are embeddings, such that $\mathcal{F}_1$ is equivalent to $\mathcal{G}_1$ via a contravariant functor. 
\end{prop} \begin{proof} Let $\mathcal{L}= \{R_1,\ldots, R_n\}$, where $R_i$ is a relation symbol of arity $m_i,$ be the language of $\mathcal{F}_1.$ Let $\mathcal{L}'= \{S_1,\ldots, S_n,\cup,\cap,^-,0,1\}$ be the language where $S_i$ is a relation symbol of arity $m_i$ and $\{\cup,\cap,^-, 0,1\}$ is the language of Boolean algebras. For $K=(K,R_1^K,\ldots, R_n^K)\in\mathcal{F}_1$, let $M=(M, S_1^M, \ldots, S^M_n, \cup^M, \cap^M, ^{-M}, 0^M,1^M)$ be the structure such that $M={\rm Clop}(K)$ is the family of all clopen sets of $K$, $\cup^M$ is the union, $\cap^M$ is the intersection, $^{-M}$ is the complement, $0^M$ is the empty set and $1^M=K$. Moreover, we require that for every~$i$, $S_i^M(X_1,\ldots, X_{m_i})$ iff for some $c_1\in X_1,\ldots, c_{m_i}\in X_{m_i}$, we have $R_i^K(c_1,\ldots, c_{m_i})$. Let $\mathcal{G}_1$ be the family of all $M$'s obtained in this way from a $K\in\mathcal{F}_1$, with embeddings as the maps. Let $f:L\to K$, where $K,L\in\mathcal{F}_1$, be a continuous surjection and let $F:{\rm Clop}(K)\to {\rm Clop}(L)$ be the map given by $F(X)=f^{-1}(X)$. In view of Proposition \ref{stone0}, all we have to check is that $f$ is $R_i$-preserving if and only if $F$ is $S_i$-preserving, and that will follow from the two claims below. \smallskip \noindent {\bf{Claim.}} If $f$ is $R_i$-preserving then $F$ is $S_i$-preserving. \begin{proof} If $S_i^{{\rm Clop}(K)}(X_1,\ldots, X_{m_i})$ then $R_i^K(a_1,\ldots, a_{m_i})$ for some $a_i\in X_i$, which implies that for some $c_i\in f^{-1}(a_i)\subset f^{-1}(X_i)$, $R_i^L(c_1,\ldots, c_{m_i})$, hence $S_i^{{\rm Clop}(L)}(f^{-1}(X_1),\ldots, f^{-1}(X_{m_i}))$, i.e. $S_i^{{\rm Clop}(L)}(F(X_1),\ldots, F(X_{m_i})).$ Conversely, if $S_i^{{\rm Clop}(L)}(F(X_1),\ldots, F(X_{m_i}))$, that is, $S_i^{{\rm Clop}(L)}(f^{-1}(X_1),\ldots, f^{-1}(X_{m_i}))$, then for some $c_i\in f^{-1}(X_i)$, $R_i^L(c_1,\ldots, c_{m_i})$, therefore $R_i^K(f(c_1),\ldots, f(c_{m_i}))$, which gives $S_i^{{\rm Clop}(K)}(X_1,\ldots, X_{m_i})$. 
\end{proof} \smallskip \noindent {\bf{Claim.}} If $F$ is $S_i$-preserving then $f$ is $R_i$-preserving. \begin{proof} We have $R_i^{K}(a_1,\ldots, a_{m_i})$ iff for every $X_i\in{\rm Clop}(K)$ such that $a_i\in X_i$ we have $S_i^{{\rm Clop}(K)}(X_1, \ldots,X_{m_i} )$ iff for every $X_i\in{\rm Clop}(K)$ such that $a_i\in X_i$ we have $S_i^{{\rm Clop}(L)}(f^{-1}(X_1), \ldots, f^{-1}(X_{m_i}))$ iff for every $X_i\in{\rm Clop}(K)$ such that $a_i\in X_i$ there exist $c_i\in f^{-1}(X_i)$ for which we have $R_i^L(c_1,\ldots, c_{m_i})$ iff there exist $c_i\in f^{-1}(a_i)$ for which we have $R_i^L(c_1,\ldots, c_{m_i})$. In the first and last equivalences we used that the relations $R_i^K$ and $R_i^L$ are closed in $K^{m_i}$ and $L^{m_i}$, respectively. \end{proof} \end{proof} Now one may ask why we study projective Fra\"{i}ss\'{e} families at all. The reason is that it is more natural to use projective Fra\"{i}ss\'{e} families to construct and study compact spaces, like the Lelek fan or the pseudo-arc, than to study them via families of finite Boolean algebras equipped with relations. In further sections, we will introduce families $\mathcal{F}_c$ and $\mathcal{F}_{cc}$ of finite fans expanded by an additional structure, which will be neither a function nor a relation, for which we will have to prove an analog of Theorem \ref{dual}. These families will not exactly fall into the framework of the projective Fra\"{i}ss\'{e} theory discussed in this section. Nevertheless, we will still call them projective Fra\"{i}ss\'{e} families, and their limits we will call projective Fra\"{i}ss\'{e} limits. \subsection{Construction of the Lelek fan}\label{sec:lelek} For completeness, we repeat here, more or less, Section 3.1 from \cite{BK2}, where we review the construction of the Lelek fan from \cite{BK}. Unlike in \cite{BK} and \cite{BK2}, we will not assume that all branches in a finite fan are of the same length. 
By a {\em fan} we mean an undirected connected graph with a loop at every vertex, with no cycles of length greater than one, and with a distinguished point $r$, called the {\em root}, such that all elements other than $r$ have degree at most 2. On a fan $T,$ there is a natural partial tree order $\preceq_T$: for $t,s\in T$ we let $s\preceq_T t$ if and only if $s$ belongs to the path connecting $t$ and the root. We say that $t$ is a {\em successor} of $s$ if $s\preceq_T t$ and $s\neq t$. It is an {\em immediate successor} if additionally there is no $p\in T$, $p\neq s,t$, with $s\preceq_T p\preceq_T t$. For a fan $T$ and $x,y\in T$ which are on the same branch and $x\preceq_T y$, by $[x,y]_{\preceq_T}$ we denote the interval $\{z\in T: x\preceq_T z \preceq_T y\}$. A {\em chain} in a fan $T$ is a subset of $T$ on which the order $\preceq_T$ is linear. A {\em branch} of a fan $T$ is a maximal chain in $(T,\preceq_T)$. If $b$ is a branch in $T$ with $n+1$ elements, we will sometimes enumerate $b$ as $(b^0,\ldots,b^n)$, where $b^0$ is the root of $T$, and $b^i$ is an immediate successor of $b^{i-1}$, for every $i=1, 2, \ldots, n$. In that case, $n$ will be called the {\em height} of the branch $b$. Define the {\em height} of the fan to be the maximum of the heights of all of its branches and define the {\em width} of the fan to be the number of its branches. Let $\mathcal{L}=\{R\}$ be the language with $R$ a binary relation symbol. For a fan $T$ and $s,t\in T$, we let $R^T(s,t)$ if and only if $s=t$ or $t$ is an immediate successor of $s$. Let $\mathcal{F}$ be the family of all finite fans, viewed as topological $\mathcal{L}$-structures, equipped with the discrete topology. \begin{rem} {\rm{ For two fans $(S,R^S)$ and $(T,R^T)$ in $\mathcal{F}$, a function $\phi: (S,R^S)\to (T,R^T)$ is an epimorphism if and only if it is a surjective homomorphism, i.e., for every $s_1,s_2\in S$, $R^S(s_1,s_2)$ implies $R^T(\phi(s_1),\phi(s_2))$. 
}} \end{rem} We say that a projective Fra\"{i}ss\'{e} family $\mathcal{G}_1$ is {\em coinitial} in a projective Fra\"{i}ss\'{e} family $\mathcal{G}_2$ if for every $A\in\mathcal{G}_2$ there are $B\in\mathcal{G}_1$ and an epimorphism $\phi: B\to A$. \begin{prop}\label{Fraissef} The family $\mathcal{F}$ is a projective Fra\"{i}ss\'{e} family. \end{prop} In \cite[Proposition 2.3]{BK}, we proved that the family, which we now call $\mathcal{F}_1$, of finite fans with all branches of the same length, is a projective Fra\"{i}ss\'{e} family. The proof of Proposition \ref{Fraissef} is essentially the same as the proof that $\mathcal{F}_1$ is a projective Fra\"{i}ss\'{e} family. By Theorem \ref{is}, there exists a unique projective Fra\"{i}ss\'{e} limit of $\mathcal{F}$, which we denote by $\mathbb{L}=(\mathbb{L}, R^{\mathbb{L}})$. The underlying set $\mathbb{L}$ is homeomorphic to the Cantor set. The family $\mathcal{F}_1$ is coinitial in $\mathcal{F}$, and this implies (by Remark \ref{cofinall} and Theorem \ref{dual}) that the projective Fra\"{i}ss\'{e} limits of $\mathcal{F}$ and $\mathcal{F}_1$ are isomorphic. Let $ R_S^{\mathbb{L}}$ be the symmetrization of $ R^{\mathbb{L}}$, that is, $ R_S^{\mathbb{L}}(s,t)$ if and only if $ R^{\mathbb{L}}(s,t)$ or $ R^{\mathbb{L}}(t,s)$, for $s,t\in\mathbb{L}$. \begin{thm}[Theorem 2.5, \cite{BK}] The relation $ R_S^{\mathbb{L}}$ is an equivalence relation which has only one- and two-element equivalence classes. \end{thm} \begin{thm}[Theorem 2.6, \cite{BK}] The quotient space $\mathbb{L}/R^{\mathbb{L}}_S$ is homeomorphic to the Lelek fan $L.$ \end{thm} Let $\pi:\mathbb{L}\to L$ denote the quotient map given by $R^{\mathbb{L}}_S$. We denote by ${\rm Aut}(\mathbb{L})$ the group of all automorphisms of $\mathbb{L}$, that is, the group of all homeomorphisms of $\mathbb{L}$ that preserve the relation $R^{\mathbb{L}}$. 
This is a topological group when equipped with the compact-open topology inherited from $H(\mathbb{L})$, the group of all homeomorphisms of the Cantor set underlying the structure $\mathbb{L}$. Since $R^\mathbb{L}$ is closed in $\mathbb{L}\times \mathbb{L}$, the group ${\rm Aut}(\mathbb{L})$ is closed in $H(\mathbb{L})$. Let $\pi^*$ be the map that takes $h\in {\rm Aut}(\mathbb{L})$ to the $h^*\in H(L)$ satisfying $h^*(\pi(x))= \pi(h(x))$ for every $x\in\mathbb{L}$. We will frequently identify ${\rm Aut}(\mathbb{L})$ with the corresponding subgroup $\{h^*: h\in {\rm Aut}(\mathbb{L})\}$ of $H(L)$. Observe that the compact-open topology on ${\rm Aut}(\mathbb{L})$ is finer than the topology on ${\rm Aut}(\mathbb{L})$ that is inherited from the compact-open topology on $H(L)$. \subsection{Spaces of maximal chains}\label{sec:chain} We will assume throughout the paper that every compact space is Hausdorff. Let $K$ be a compact topological space. A {\em chain} $\mathcal{C}$ on $K$ is a family of closed subsets of $K$ such that for every $C_1, C_2\in\mathcal{C}$, either $C_1\subset C_2$ or $C_2\subset C_1$. Sometimes we will call the sets in a chain {\em links}. We say that a chain $\mathcal{C}$ is {\em maximal} if for every closed set $C\subset K$, if $\{C\}\cup\mathcal{C}$ is a chain then $C\in\mathcal{C}$. Note that if $\mathcal{C}$ is a maximal chain and $A\subset \mathcal{C}$, then $\bigcap A\in\mathcal{C}$ and $\overline{\bigcup A}\in\mathcal{C}$. The set ${\rm Exp}(K)$ of all closed subsets of $K$ is equipped with the Vietoris topology generated by the sets \[ [U_1,\ldots, U_n]=\{ F\in {\rm Exp}(K): F\subset U_1\cup\ldots\cup U_n { \rm\ and\ for\ every\ } i=1,\ldots, n, \ F\cap U_i\neq\emptyset\}, \] where $n\in\mathbb{N}$ and $U_1,\ldots, U_n$ are open in $K$. Without loss of generality, $U_1,\ldots, U_n$ can be taken from some fixed basis of $K$. 
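As a small worked instance of this notation (an example of our own), take $K=[0,1]$, $U_1=(0,\frac12)$ and $U_2=(\frac14,1)$:

```latex
[U_1,U_2]=\bigl\{F\in{\rm Exp}([0,1])\colon F\subset U_1\cup U_2,\
F\cap U_1\neq\emptyset,\ F\cap U_2\neq\emptyset\bigr\}.
% For instance, \{1/3,1/2\} and [1/8,3/4] belong to [U_1,U_2],
% while \{0\} does not (0 lies in neither U_i) and \{1/8\} does not
% (it misses U_2).
```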
If $K$ is metrizable by a metric $d_0$, then the space ${\rm Exp}(K)$ is metrizable by the {\em Hausdorff metric} given by \[d(X,Y)= \max \{ \sup_{x\in X} \inf_{y\in Y} d_0(x,y), \sup_{y\in Y} \inf_{x\in X} d_0(x,y) \}. \] It is not hard to show (see \cite[Lemma 6.4.7]{P2}) that every maximal chain is closed in the Vietoris topology. It is well known that ${\rm Exp}(K)$ is compact. Uspenskij \cite{U} showed that the set of maximal chains on $K$ is closed in ${\rm Exp (Exp}(K))$, and therefore it is compact. Let $\leq$ be a partial order on $K$. We say that a set $C\subset K$ is {\em downwards closed} if it is closed and for every $x, y\in K$, if $y\in C$ and $x\leq y$ then $x\in C$, and that a chain $\mathcal{C}$ on $K$ is {\em downwards closed} if every $C\in\mathcal{C}$ is downwards closed. A downwards closed chain $\mathcal{C}$ is {\em downwards closed maximal} if for every downwards closed set $C\subset K$, if $\{C\}\cup\mathcal{C}$ is a chain then $C\in\mathcal{C}$. For a map $f: Y\to X$ and a chain $\mathcal{C}$ on $Y$, by $f(\mathcal{C})$ we will denote the chain $\{f(C): C\in\mathcal{C}\}$. We start with the following observations. \begin{lemma}\label{image3} Let $K,M$ be compact sets and let $f: M\to K$ be a continuous surjection. If $\mathcal{C}$ is a maximal chain in $M$, then the chain $f(\mathcal{C})$ is also maximal. \end{lemma} \begin{proof} Suppose a closed set $D\subset K$ is such that $\{D\}\cup f(\mathcal{C})$ is a chain. We will show that there is $J\in\mathcal{C}$ satisfying $f(J)=D$. Let $K_1=\{ C\in f(\mathcal{C}): C\supset D\}$ and let $M_1=\{E\in\mathcal{C}: f(E)\in K_1\}.$ As $\mathcal{C}$ is maximal, $E_0=\bigcap M_1\in \mathcal{C}$. Since $D\subset f(E_0)$, we have that $J=f^{-1}(D)\cap E_0$ satisfies $f(J)=D$ and has the property that $\{J\}\cup\mathcal{C}$ is a chain, and hence by the maximality of $\mathcal{C}$, $J\in\mathcal{C}$ as required. \end{proof} Using Zorn's Lemma, we get the following. 
\begin{lemma} Let $K$ be a compact set and let $\mathcal{D}$ be a chain on $K$. Then there is a maximal chain on $K$ that extends $\mathcal{D}$. \end{lemma} Let $\mathcal{F}^*$ be the family of all topological $\mathcal{L}$-structures that are countable inverse limits of finite fans in $\mathcal{F}$. If $P$ is the inverse limit of $(A_n, f_m^n)$, the relation $R^P$ on $P$ is defined as follows \[ R^P(x,y) \text{ iff for every } n, \ R^{A_n}(f^\infty_n(x), f^\infty_n(y)). \] Clearly we can identify $\mathcal{F}$ with a subset of $\mathcal{F}^*$ by assigning to $A$ the inverse limit of $(A, {\rm{Id}}_m^n)$. Recall that $A\in\mathcal{F}$ is equipped with the tree partial order $\preceq_A$; we let $x\preceq_A y$ iff $x$ belongs to the segment joining the root of $A$, $v_A$, with $y$. We let \[x\preceq_P y \text{ iff for every } n, \ f^\infty_n(x)\preceq_{A_n} f^\infty_n(y).\] In particular, we have just defined a partial order on $\mathbb{L}$, the projective Fra\"{i}ss\'{e} limit of the family $\mathcal{F}$ of finite fans. This in turn defines a partial order on $L$ by $x\preceq_L y$ if and only if for some (equivalently, for any) $v,w\in\mathbb{L}$ such that $\pi(v)=x$ and $\pi(w)=y$, we have $v\preceq_{\mathbb{L}} w$, where $\pi: \mathbb{L} \to L$ is the quotient map. Whenever we talk about downwards closed sets on $P\in\mathcal{F}^*$ or on $L$, we will understand that they are downwards closed with respect to $\preceq_P$ or $\preceq_L$, respectively. \begin{rem} {\rm A closed set in $L$ is downwards closed if and only if it is connected and it contains the root of $L$.} \end{rem} \begin{lemma}\label{cm} Every downwards closed maximal chain on $P\in\mathcal{F}^*$ is maximal. \end{lemma} \begin{proof} It is not hard to see that the conclusion is true for a structure in $\mathcal{F}$. Let $\mathcal{C}$ be a downwards closed maximal chain on $P=\varprojlim(A_n, f^n_m)\in\mathcal{F}^*$. 
Then for each $n$, $\mathcal{C}^{A_n}=\{ f^{\infty}_n(C): C\in\mathcal{C}\}$ is a downwards closed chain which is maximal, by the same argument as in Lemma~\ref{image3}. If a closed set $D\subset P$ is such that $\mathcal{C}\cup\{D\}$ is a chain, then for each $n$, $f^{\infty}_n(D)\in\mathcal{C}^{A_n}$ by the maximality of $\mathcal{C}^{A_n}$, and therefore $f^{\infty}_n(D)$ is downwards closed, and consequently so is $D$, which implies $D\in\mathcal{C}$. \end{proof} \begin{prop}\label{ccompact} For every $P\in\mathcal{F}^*$ the set of all downwards closed maximal chains on $P$ is compact. In particular, the set of all downwards closed maximal chains on $\mathbb{L}$ is compact. \end{prop} \begin{proof} We first show that the set ${\rm CExp}(P)$ of all downwards closed subsets of $P=\varprojlim(A_n, f^n_m)$ is closed in ${\rm Exp}(P)$. Let $K\subset P$ be closed but not downwards closed, witnessed by $x\notin K$ and $y\in K$ such that $x\preceq_P y$. Pick $n$ and $a\in A_n$ such that $a=f^\infty_n(x)\neq f^\infty_n(y)$ and such that $A=(f^\infty_n)^{-1}(a)$ satisfies $A\cap K=\emptyset$. Let $B=(f^\infty_n)^{-1}(\{b\in A_n: a\prec_{A_n} b\})$. Clearly $B$ is open and $y\in B$. Then \[ V := [B,P]\cap [P\setminus A]=\{F\in {\rm Exp}(P): F\cap B\neq\emptyset {\rm \ and \ } F\subset P\setminus A\} \] is such that $K\in V$ and no set in $V$ is downwards closed, which finishes the proof that ${\rm CExp}(P)$ is closed. Since Uspenskij proved in \cite{U} that the set of maximal chains is closed in ${\rm Exp (Exp}(P))$, by Lemma \ref{cm} it is enough to show that the set of those points of ${\rm Exp (Exp}(P))$ that are contained in ${\rm CExp}(P)$ is again a closed set. This follows from the following simple general observation: if $K$ is a compact space and $D\in {\rm Exp}(K)$, then $\{E\in {\rm Exp}(K): E\subset D\}$ is closed in ${\rm Exp}(K)$. Finally, take $K={\rm Exp} (P)$ and $D={\rm CExp}(P)$. 
\end{proof} \subsection{Precompact uniform spaces}\label{pus} A good introduction to uniform spaces can be found in Engelking \cite{E}, Chapter 8 (precompact spaces are called totally bounded there). Below we briefly review the bare minimum that is needed for this paper; all undefined concepts can be found in Engelking \cite{E}. A {\em uniform space} is a set $X$ together with a family $\mathcal{U}$ of subsets of $X\times X$, called a {\em uniformity}, having the following properties: \begin{enumerate} \item each $U\in\mathcal{U}$ contains the diagonal $\{(x,x): x\in X\}$; \item if $U\in \mathcal{U}$ and $U\subset V$, then $V\in\mathcal{U}$; \item if $U,V\in\mathcal{U}$, then $U\cap V\in\mathcal{U}$; \item if $U\in\mathcal{U}$, then $U^{-1}=\{(y,x): (x,y)\in U\}\in\mathcal{U}$; \item if $U\in\mathcal{U}$ then there is $V\in \mathcal{U}$ such that $V\circ V=\{(x,z): { \rm\ there\ exists\ } y\in X { \rm\ such\ that\ } (x,y)\in V { \rm\ and\ } (y,z)\in V\}\subset U$. \end{enumerate} For example, every metric space $(X,d)$ is a uniform space, with the uniformity consisting of all sets that contain $\{(x,y)\in X\times X: d(x,y)<\epsilon\}$ for some $\epsilon>0$. Every uniform space $(X,\mathcal{U})$ becomes a topological space if we declare $U\subset X$ to be open if and only if for every $x \in U$ there exists $V\in\mathcal{U}$ such that $V[x]=\{y\in X: (x,y)\in V\}\subset~U$. A function $f: (X,\mathcal{U})\to (Y,\mathcal{V})$ between uniform spaces is called {\em uniformly continuous} if for every $V\in\mathcal{V}$ there exists $U\in\mathcal{U}$ such that $(f\times f)(U)\subset V$. We say that a uniform space $(X,\mathcal{U})$ is {\em precompact} if for every $U\in\mathcal{U}$ there are finitely many $x_1,\ldots, x_n\in X$ such that $X=\{x\in X: (x,x_i)\in U \text{ for some } i\}$. Equivalently, a uniform space $(X,\mathcal{U})$ is precompact if its completion is compact. If $(X,\mathcal{U})$ is metrizable by a metric $d$ (i.e. the topology induced by $(X,\mathcal{U})$ is equal to the topology induced by $d$) then the completion of $(X,\mathcal{U})$ is equal to the completion of $(X,d)$ (see Lemma 8.3.7 and Proposition 8.3.5 in Engelking \cite{E}).
This implies that $(X,\mathcal{U})$ is precompact if and only if the metric space $(X,d)$ is precompact. Recall that any compact space $X$ has a unique uniformity compatible with the topology. This uniformity is generated by the symmetric neighbourhoods of the diagonal in $X\times X$. A topological group $G$ admits a few natural uniform structures compatible with its topology. We will be working with the right uniformity, which is generated by the sets \[ O_V=\{(x,y): xy^{-1}\in V\},\] where $V$ is an open symmetric neighbourhood of the identity in $G$. For a closed subgroup $H$ of $G$ we consider the quotient space $G/H$ with the quotient uniformity generated by the sets \[ U_V=\{(xH,yH): xy^{-1}\in V\}=\{(gH,vgH): g\in G, v\in V\} ,\] where $V$ is an open symmetric neighbourhood of the identity in $G$. This uniformity is compatible with the quotient topology of $G/H$, and $G/H$ is precompact if and only if for every open symmetric neighbourhood $V$ of the identity in $G$, there exist finitely many $x_1,\ldots, x_n\in G$ such that $G=\bigcup_{i=1}^n (Vx_iH)$. If $G$ is a Polish group and $d_R$ is a right-invariant metric on $G$, then the uniformity on $G/H$ is metrizable by the metric \[ d(g_1H, g_2H)=\inf_{h\in H} d_R(g_1 h, g_2).\] The following is folklore, but we could not find a proof, therefore we include it here. \begin{prop}\label{extension} Suppose that $G/H$ is precompact, where $G,H$ are Polish groups, $H$ is a closed subgroup of $G$. Then the continuous action of $G$ on $G/H$ by left translations, $g_1\cdot (g_2H)=(g_1g_2)H$, extends to a continuous action of $G$ on the completion $\widehat{G/H}$. \end{prop} Suppose that $G$ acts on a uniform space $X=(X,\mathcal{U}_X)$ by uniform space isomorphisms. The action is called {\em bounded or motion equicontinuous} (see Pestov \cite{P2}, page 70, and references therein) if for every $U\in\mathcal{U}_X$, the set $\{g\in G: \forall x\in X \ (x, g\cdot x)\in U\}$ is a neighbourhood of 1 in $G$.
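To illustrate this definition, we note a standard example.

\begin{rem} {\rm The action of a topological group $G$ on itself by left translations is motion equicontinuous with respect to the right uniformity on $G$: if $V$ is an open symmetric neighbourhood of the identity and $g\in V$, then for every $x\in G$ we have $x(g\cdot x)^{-1}=g^{-1}\in V$, that is, $(x, g\cdot x)\in O_V$, and hence $\{g\in G: \forall x\in G \ (x, g\cdot x)\in O_V\}\supset V$.} \end{rem}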
Assuming additionally that $X$ is a Polish space and $G$ is a Polish group, we immediately see the following. \begin{enumerate} \item If $X$ is the completion of an invariant space $X_0$ and the action of $G$ on $X_0$ is motion equicontinuous, then so is the action of $G$ on $X$. \item If the action of $G$ on $X$ is motion equicontinuous, then for a fixed $x\in X$ the function $g\to g\cdot x$ is continuous with respect to the compatible topologies. (The definition immediately implies that $g\to g\cdot x$ is continuous at the identity, which implies that this function is in fact continuous.) \end{enumerate} \begin{proof}[Proof of Proposition \ref{extension}] First observe that for a fixed $g\in G$, the bijection $f_g: G/H\to G/H$ given by $f_g(hH)=ghH$ is uniformly continuous. Indeed, for any open symmetric neighbourhood $V$ of the identity in $G$, we have $(f_g\times f_g)(U_{g^{-1}Vg})\subset U_V$. Similarly, $f^{-1}_g$ is uniformly continuous. Therefore $f_g$ extends to a uniform isomorphism of $\widehat{G/H}$. This gives an action of $G$ on $\widehat{G/H}$, which is continuous if we fix $g\in G$. Since separately continuous actions of Polish groups are continuous (see \cite{BKe}, Proposition 2.2.1), it suffices to show that it is continuous if we fix $x\in\widehat{G/H}$. By the remarks before the proof, it suffices to check that the action of $G$ on $G/H$ by left translations is motion equicontinuous. However, this is clear, as for any open symmetric neighbourhood $V$ of the identity in $G$, $h\in G$, and $g\in V$ we have $(hH, ghH)\in U_V$. \end{proof} \subsection{Kechris-Pestov-Todorcevic correspondence}\label{sec:KPT} In this section, we review the Kechris-Pestov-Todorcevic correspondence between the structural Ramsey theory of a Fra\"{i}ss\'{e}-HP family and the dynamics (extreme amenability, the universal minimal flow) of the automorphism group of its Fra\"{i}ss\'{e} limit. A topological group $G$ is {\em extremely amenable} if every $G$-flow has a fixed point.
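We record a fundamental example of an extremely amenable group.

\begin{rem} {\rm By a theorem of Pestov, the group ${\rm Aut}(\mathbb{Q},<)$ of all order-preserving bijections of the rationals, taken with the topology of pointwise convergence, is extremely amenable. In the framework of Theorem \ref{kpt1} below, this corresponds to the classical finite Ramsey theorem, which asserts that the family of finite linear orders is a Ramsey class.} \end{rem}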
A~{\em colouring} of a set $X$ is any function $c: X\to \{1,2,\ldots,r\}$, for some $r\geq 2$; we say that $Y\subset X$ is {\em $c$-monochromatic} (or just {\em monochromatic}) if $c\restriction Y$ is constant. Let $\mathcal{G}$ be a family of finite structures in a language $\mathcal{L}$. For $A,B$ in $\mathcal{G}$, let ${B \choose A}$~denote the set of all embeddings of $A$ into $B$. We say that $\mathcal{G}$ is a {\em Ramsey class} if for every integer $r\geq 2$ and for $A,B\in \mathcal{G}$ there exists $C\in \mathcal{G}$ such that for every colouring $c: {C \choose A} \to\{1,2,\ldots,r\}$ there exists $h\in {C \choose B}$ such that $\{ h\circ f: f\in {B \choose A} \}$ is monochromatic. We say that $A\in\mathcal{G}$ is {\em rigid} if it has a trivial automorphism group. Kechris-Pestov-Todorcevic \cite{KPT} worked with Fra\"{i}ss\'{e} families and their ordered Fra\"{i}ss\'{e} expansions, which was later generalized by Nguyen Van Th\'e \cite{NVT} to arbitrary relational Fra\"{i}ss\'{e} expansions. The Kechris-Pestov-Todorcevic correspondence remains true for Fra\"{i}ss\'{e}-HP families, which was checked by several people, and it appears in \cite{Z}. \begin{thm}[Kechris-Pestov-Todorcevic \cite{KPT}, see Theorem 5.1 in \cite{Z}] \label{kpt1} Let $\mathcal{G}$ be a Fra\"{i}ss\'{e}-HP family, let $\mathbb{G}$ be its Fra\"{i}ss\'{e} limit, and let $G={\rm Aut}(\mathbb{G})$. Then the following are equivalent: \begin{enumerate} \item The group $G$ is extremely amenable. \item The family $\mathcal{G}$ is a Ramsey class and it consists of rigid structures. \end{enumerate} \end{thm} Let $\mathcal{G}$ be a Fra\"{i}ss\'{e}-HP family in a language $\mathcal{L}$, let $\mathbb{G}$ be its Fra\"{i}ss\'{e} limit, and let $G={\rm Aut}(\mathbb{G})$.
Let $\mathcal{G}^*$ be a Fra\"{i}ss\'{e}-HP family, in a language $\mathcal{L}^*\supset \mathcal{L}$, with $\mathcal{L}^*\setminus \mathcal{L}$ relational, such that for every $A^*\in\mathcal{G}^*$, $A^*\restriction\mathcal{L}\in\mathcal{G}$, that is, every $A^*\in\mathcal{G}^*$ is an {\em expansion} of some $A\in\mathcal{G}$; we then say that $\mathcal{G}^*$ is an {\em expansion} of $\mathcal{G}$. Let $\mathbb{G}^*$ be the Fra\"{i}ss\'{e} limit of $\mathcal{G}^*$, and let $G^*={\rm Aut}(\mathbb{G}^*)$. We say that the expansion $\mathcal{G}^*$ of $\mathcal{G}$ is {\em reasonable} if for any $A,B\in\mathcal{G}$, an embedding $\alpha: A\to B$, and an expansion $A^*\in\mathcal{G}^*$ of $A$, there is an expansion $B^*\in\mathcal{G}^*$ of $B$ such that $\alpha: A^*\to B^*$ is an embedding. It is {\em precompact} if for every $A\in\mathcal{G}$ there are only finitely many $A^*\in\mathcal{G}^*$ such that $A^*\restriction \mathcal{L}= A$. We say that $\mathcal{G}^*$ has the {\em expansion property} relative to $\mathcal{G}$ if for any $A^*\in\mathcal{G}^*$ there is $B\in\mathcal{G}$ such that for any expansion $B^*\in\mathcal{G}^*$ of $B$, there is an embedding $\alpha\colon A^*\to B^*$. \begin{prop}[\cite{KPT}, \cite{NVT}, see Proposition 5.3 in \cite{Z}]\label{kpt_reas} The expansion $\mathcal{G}^*$ of $\mathcal{G}$ is reasonable if and only if $\mathbb{G}^*\restriction \mathcal{L}= \mathbb{G}$. \end{prop} {\bf{From now on until the end of this section,}} we will assume that the expansion $\mathcal{G}^*$ of $\mathcal{G}$ is reasonable, precompact, and satisfies the property $(*)$ below. \begin{equation*} \begin{split} (*) & \text{ For any $A\in\mathcal{G}$ and an embedding $i:A\to \mathbb{G}$ there is an expansion }\\ & \text{$A^*\in\mathcal{G}^*$ of $A$ such that $i:A^*\to \mathbb{G}^*$ is an embedding.} \end{split} \end{equation*} Below $(\mathbb{G}, \vec{R})$, $(\mathbb{G}, \vec{S})$, etc. denote an expansion of $\mathbb{G}$ to a structure in $\mathcal{L}^*$.
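A standard example of such an expansion, going back to \cite{KPT}, is the following.

\begin{rem} {\rm Let $\mathcal{G}$ be the family of all finite graphs and let $\mathcal{G}^*$ be the family of all finite graphs equipped with a linear order. Then $\mathcal{G}^*$ is a reasonable and precompact expansion of $\mathcal{G}$: a linear order on a graph can be extended to a linear order on any graph it embeds into, and a graph on $n$ vertices admits only finitely many linear orders.} \end{rem}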
Instead of $(\mathbb{G}, \vec{R})$ we will often just write $\vec{R}$. Define \begin{equation*} \begin{split} X_{\mathbb{G}^*}=&\{ \vec{R}: \text{ for every } A\in\mathcal{G} \text{ and every embedding } i: A\to \mathbb{G} \text{ there exists } \\ & A^*\in\mathcal{G}^* \text{ such that } i: A^*\to(\mathbb{G},\vec{R}) \text{ is an embedding}\}. \end{split} \end{equation*} We make $X_{\mathbb{G}^*}$ a topological space by declaring the sets \[ V_{i, A^*}=\{\vec{R}\in X_{\mathbb{G}^*} : i: A^*\to (\mathbb{G},\vec{R}) \text{ is an embedding} \}, \] where $i: A\to \mathbb{G}$ is an embedding, $A^*\in\mathcal{G}^*$, and $A^*\restriction \mathcal{L}= A$, to be open. The group $G={\rm Aut}(\mathbb{G})$ acts continuously on $X_{\mathbb{G}^*}$ via \[ g\cdot \vec{R}(\bar{a})=\vec{R}(g^{-1}(\bar{a})).\] Since the expansion $\mathcal{G}^*$ of $\mathcal{G}$ is reasonable and precompact, the space $X_{\mathbb{G}^*}$ is compact and zero-dimensional, and it is nonempty as $\mathbb{G}^*\in X_{\mathbb{G}^*}$. \begin{thm}[\cite{KPT}, \cite{NVT}, see Proposition 5.5 in \cite{Z}]\label{kpt_minim} The following are equivalent: \begin{enumerate} \item The flow $G\curvearrowright X_{\mathbb{G}^*} $ is minimal. \item The family $\mathcal{G}^*$ has the expansion property relative to $\mathcal{G}$. \end{enumerate} \end{thm} \begin{thm}[Kechris-Pestov-Todorcevic \cite{KPT}, Nguyen Van Th\'e \cite{NVT}, see Theorem 5.7 in \cite{Z}]\label{kpt2} The following are equivalent: \begin{enumerate} \item The flow $G\curvearrowright X_{\mathbb{G}^*} $ is the universal minimal flow of $G$. \item The family $\mathcal{G}^*$ is a rigid Ramsey class and has the expansion property relative to $\mathcal{G}$. \end{enumerate} \end{thm} We finish this section with several general observations, which are adaptations of those in \cite{NVT} (pages 6--8) to the framework of the Fra\"{i}ss\'{e}-HP theory.
For an embedding $\alpha: A\to \mathbb{G}$, $A\in\mathcal{G}$, let $V_\alpha$ denote the pointwise stabilizer of $\alpha$, that is, \[ V_\alpha=\{g\in {\rm Aut}(\mathbb{G}): \text{ for every } a\in A, \ g(\alpha(a))=\alpha(a) \}. \] Then $V_{\alpha}$ is a symmetric clopen neighbourhood of the identity in ${\rm Aut}(\mathbb{G})$, in fact it is also a subgroup of ${\rm Aut}(\mathbb{G})$, and sets of this form constitute a neighbourhood basis of the identity in ${\rm Aut}(\mathbb{G})$. \begin{lemma} The right uniform space ${\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)$ is precompact. \end{lemma} \begin{proof} Let $V=V_{\alpha}$ for some embedding $\alpha: A\to\mathbb{G}$. Enumerate all expansions of $A$ in $\mathcal{G}^*$ as $A^*_1, A^*_2,\ldots, A^*_N$; there are only finitely many of them by the precompactness of the expansion. For each $i=1,2,\ldots , N,$ using the universality of $\mathbb{G}^*$, pick an embedding $y_i: A^*_i\to\mathbb{G}^*$. The ultrahomogeneity of $\mathbb{G}$ with respect to $\mathcal{G}$ implies that there are $x_i\in {\rm Aut}(\mathbb{G})$ such that $y_i=x_i\circ\alpha: A^*_i\to\mathbb{G}^*.$ Pick any $g\in {\rm Aut}(\mathbb{G})$; we will show that $g^{-1}\in Vx_i^{-1}{\rm Aut}(\mathbb{G}^*)$ for some $i=1,2,\ldots , N$, which will finish the proof of the lemma. Using the property $(*)$, take $i$ such that $g\circ\alpha: A^*_i\to\mathbb{G}^*$ is an embedding. From the ultrahomogeneity of $\mathbb{G}^*$, we get $h\in {\rm Aut}(\mathbb{G}^*)$ such that $h\circ g\circ\alpha=x_i\circ\alpha$. That implies $x_i^{-1}\circ h\circ g\in V$, and hence we get $g^{-1}\in Vx_i^{-1}{\rm Aut}(\mathbb{G}^*)$. \end{proof} We have just seen that the space $X_{\mathbb{G}^*}$ is compact and that the right uniform space ${\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)$ is precompact. In Theorem \ref{iden} we show that $X_{\mathbb{G}^*}$ and $\widehat{{\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)}$ are isomorphic.
Let $X,Y$ be uniform spaces on which a topological group $G$ acts continuously. We call a map $t:X\to Y$ a {\em uniform $G$-isomorphism} if it is a $G$-map which is an isomorphism between the uniform spaces $X$ and $Y$. \begin{thm}\label{iden} The map $g{\rm Aut}(\mathbb{G}^*)\to g\cdot \vec{R}^\mathbb{G}$ from ${\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)$ to $X_{\mathbb{G}^*}$ is a uniform $G$-isomorphism from $G\curvearrowright {\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)$ to $G\curvearrowright X_{\mathbb{G}^*} $. \end{thm} We will say that flows $G\curvearrowright X$ and $G\curvearrowright Y$ are {\em isomorphic} if there is a homeomorphism from $X$ onto $Y$ which is a $G$-map. \begin{cor}\label{ident} The flow $G\curvearrowright \widehat{{\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)}$ is isomorphic to the flow $G\curvearrowright X_{\mathbb{G}^*} $. \end{cor} To prove Theorem \ref{iden}, we first show the following lemma. \begin{lemma} \begin{enumerate} \item The uniformity on $X_{\mathbb{G}^*}$ is generated by the sets \begin{align*} U^\alpha=&\{(\vec{R}, \vec{S}): \text{ for some } A^*\in\mathcal{G}^*\text{ with } A^*\restriction \mathcal{L}= A\ \\ & \alpha: A^*\to(\mathbb{G}, \vec{R})\text{ and } \alpha: A^*\to (\mathbb{G}, \vec{S})\text{ are embeddings}\}, \end{align*} where $\alpha: A\to\mathbb{G}$ is an embedding. \item The quotient uniformity on ${\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)$ given by the right uniformity on ${\rm Aut}(\mathbb{G})$ is generated by the sets \[U_\alpha=\{(x{\rm Aut}(\mathbb{G}^*),y{\rm Aut}(\mathbb{G}^*)): x^{-1}\circ \alpha=y^{-1} \circ \alpha\},\] where $\alpha: A\to \mathbb{G}$ is an embedding.
\end{enumerate} \end{lemma} \begin{proof} To see (1), using the compactness of $X_{\mathbb{G}^*}$, note that for each open neighbourhood $U$ of the diagonal of $X_{\mathbb{G}^*}$ we can find $A\in\mathcal{G}$ and an embedding $\alpha: A\to \mathbb{G}$ such that the partition of $X_{\mathbb{G}^*}$ into clopen sets \[ V_{\alpha, A^*}=\{\vec{R}\in X_{\mathbb{G}^*} : \alpha: A^*\to(\mathbb{G}, \vec{R}) \text{ is an embedding} \}, \] where $A^*\in \mathcal{G}^*$ and $A^*\restriction\mathcal{L}= A$, has the property that \[ \bigcup_{A^*\in\mathcal{G}^*, \ A^*\restriction\mathcal{L}= A} V_{\alpha, A^*}\times V_{\alpha, A^*}\subset U.\] Part (2) follows immediately from the definition of the uniformity on ${\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)$. \end{proof} \begin{proof}[Proof of Theorem \ref{iden}] Let $\alpha: A\to \mathbb{G}$ be an embedding. It is enough to show that \[U_\alpha=\{(x{\rm Aut}(\mathbb{G}^*),y{\rm Aut}(\mathbb{G}^*)): x^{-1}\circ \alpha=y^{-1} \circ \alpha\}\] on ${\rm Aut}(\mathbb{G})/{\rm Aut}(\mathbb{G}^*)$ is mapped to \begin{align*} U^\alpha=&\{(x\cdot \vec{R}^\mathbb{G}, y\cdot \vec{R}^\mathbb{G}): \text{ for some } A^*\in\mathcal{G}^*\text{ with } A^*\restriction\mathcal{L}= A\ \\ & \alpha: A^*\to(\mathbb{G}, x\cdot \vec{R}^\mathbb{G})\text{ and } \alpha: A^*\to (\mathbb{G}, y\cdot \vec{R}^\mathbb{G})\text{ are embeddings}\} \end{align*} on ${\rm Aut}(\mathbb{G})\cdot \vec{R}^\mathbb{G}$. Clearly, the set $U_\alpha$ is mapped to \[\overline{U}_\alpha = \{(x\cdot \vec{R}^\mathbb{G},y\cdot \vec{R}^\mathbb{G}): x^{-1}\circ \alpha=y^{-1} \circ \alpha\}.\] Let $(x\cdot \vec{R}^\mathbb{G}, y\cdot \vec{R}^\mathbb{G})\in U^\alpha$. Let $A^*$ be such that $\alpha: A^*\to(\mathbb{G}, x\cdot \vec{R}^\mathbb{G})$ and $\alpha: A^*\to (\mathbb{G}, y\cdot \vec{R}^\mathbb{G})$ are embeddings. Then $x^{-1}\circ\alpha: A^*\to (\mathbb{G}, \vec{R}^\mathbb{G})$ and $y^{-1}\circ\alpha: A^*\to (\mathbb{G}, \vec{R}^\mathbb{G})$ are embeddings.
By the ultrahomogeneity of $\mathbb{G}^*$ with respect to $\mathcal{G}^*$, there is $h\in{\rm Aut}(\mathbb{G}^*)$ such that $h \circ x^{-1}\circ\alpha= y^{-1}\circ\alpha$. This implies $((x\circ h^{-1}) \cdot \vec{R}^\mathbb{G}, y \cdot\vec{R}^\mathbb{G})\in \overline{U}_\alpha$, and since $h^{-1}\in{\rm Aut}(\mathbb{G}^*)$ and so $h^{-1} \cdot \vec{R}^\mathbb{G}=\vec{R}^\mathbb{G}$, we get $(x \cdot \vec{R}^\mathbb{G}, y \cdot \vec{R}^\mathbb{G})\in \overline{U}_\alpha$. Now suppose that $(x \cdot \vec{R}^\mathbb{G}, y \cdot \vec{R}^\mathbb{G})\in \overline{U}_\alpha$. From the property $(*)$, it follows that there is $A^*\in\mathcal{G}^*$ with $A^*\restriction\mathcal{L}= A$ such that $x^{-1}\circ \alpha : A^*\to (\mathbb{G}, \vec{R}^\mathbb{G})$ is an embedding. Since $x^{-1}\circ\alpha=y^{-1}\circ\alpha$, clearly $y^{-1}\circ \alpha : A^*\to (\mathbb{G}, \vec{R}^\mathbb{G})$ is an embedding. This implies that $\alpha : A^*\to(\mathbb{G}, x\cdot \vec{R}^\mathbb{G})$ and $\alpha :A^*\to (\mathbb{G}, y\cdot \vec{R}^\mathbb{G})$ are embeddings, which gives $(x\cdot \vec{R}^\mathbb{G}, y\cdot \vec{R}^\mathbb{G})\in U^\alpha$. \end{proof} \section{The universal minimal flow of ${\rm Aut}(\mathbb{L})$} We will expand each finite fan in $\mathcal{F}$ by a maximal chain of downwards closed subsets and obtain a class $\mathcal{F}_c,$ which does not directly fall into the framework of the projective Fra\"iss\'e theory. However, we will show that $\mathcal{F}_c$ is equivalent to a Fra\"iss\'e-HP class of first-order structures, which is reasonable and precompact with respect to $\mathcal{F}.$ In Section 4.2, we prove the main combinatorial result that $\mathcal{F}_c$ is a Ramsey class. We do so indirectly by showing that a certain class $\mathcal{F}_{cc}$ coinitial in $\mathcal{F}_c$ is Ramsey. In our proof we will use the dual Ramsey theorem of Graham and Rothschild.
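For the reader's convenience, let us recall a standard formulation of the dual Ramsey theorem: for all positive integers $k\leq m$ and $r$ there exists $n$ such that for every colouring of the set of all partitions of $\{1,\ldots,n\}$ into $k$ nonempty sets with $r$ colours, there is a partition $Q$ of $\{1,\ldots,n\}$ into $m$ nonempty sets such that all partitions into $k$ nonempty sets that are coarser than $Q$ receive the same colour.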
Finally, in Section 4.3, we apply methods from Section 3.6 to compute the universal minimal flow of ${\rm Aut}(\mathbb{L})$ in two ways: as the completion of a precompact space equal to the quotient of ${\rm Aut}(\mathbb{L})$ by an extremely amenable subgroup, and as the space of maximal downwards closed chains on $\mathbb{L}$; we also exhibit an explicit isomorphism between them. \subsection{Finite fans equipped with chains -- the family $\boldsymbol{\mathcal{F}_c}$}\label{sec:fc} Define \[\mathcal{F}_c=\{(A, \mathcal{C}^A): A\in\mathcal{F} \text{ and } \mathcal{C}^A \text{ is a downwards closed maximal chain}\}\] and for $(A, \mathcal{C}^A), (B, \mathcal{C}^B)\in\mathcal{F}_c$ we say that $f: (B, \mathcal{C}^B)\to (A, \mathcal{C}^A)$ is an {\em epimorphism} iff $f: B\to A$ is an epimorphism and for every $C\in \mathcal{C}^B$, we have $f(C)\in \mathcal{C}^A$ (short: $f(\mathcal{C}^B)=\mathcal{C}^A$). If $A\in\mathcal{F}$ and $\mathcal{C}^A$ are such that $A_c=(A,\mathcal{C}^A)\in\mathcal{F}_c$, then $\mathcal{C}^A$ induces a linear order $\leq^{A_c}$ on $A$ given by $x<^{A_c} y$ iff for some $C\in\mathcal{C}^A$, $x\in C$ and $y\notin C$. For $(A, \mathcal{C}^A)\in\mathcal{F}_c$ we say that $\mathcal{C}^A$ is {\em canonical} on $A$ if for some ordering of branches $b_1<\ldots< b_n$ of $A$, it holds that whenever $C\in\mathcal{C}^A$ and $C\cap b_j\neq\emptyset$, then $b_i\subset C$ for every $1\leq i<j\leq n.$ Analogously as for topological $\mathcal{L}$-structures, we define the JPP and the AP for the family $\mathcal{F}_c$ with the epimorphisms as above. \begin{lemma}\label{fcapp} The family $\mathcal{F}_c$ has the JPP and the AP. \end{lemma} \begin{proof} As there is $A\in\mathcal{F}_c$ which has just one element, the AP will imply the JPP. For the AP, let $f: B\to A$ and $g: D\to A$ be epimorphisms. We need to find $E$, $k: E\to B$, and $l: E\to D$ such that $f\circ k=g\circ l$.
It is straightforward to see that the conclusion holds when each of $A$, $B$, $D$ has exactly one branch. In the general situation, we take $E$ of width $w=b+d-a$, where $b,d,a$ are the numbers of elements in $B,D,A,$ respectively, all of whose branches have height $h$, where $h$ is the height of any $E_0\in \mathcal{F}_c$ that witnesses the AP for $A_0,B_0, D_0\in \mathcal{F}_c$, all with one branch only, whose heights are the same as those of $A,B,D,$ respectively, and any epimorphisms $f_0:B_0\to A_0$ and $g_0:D_0\to A_0.$ Let $\mathcal{C}^E$ be the canonical chain on $E$ with respect to some enumeration $e_1,\ldots, e_w$ of branches of $E$. Let $(a^i)_{1\leq i \leq a}$ be the increasing enumeration of $A$ according to the linear order $\leq^A$ induced by $\mathcal{C}^A$. For every $i$ let $b^i\in B$ and $d^i\in D$ be the smallest (with respect to the linear orders $\leq^B$ induced by $\mathcal{C}^B$ and $\leq^D$ induced by $\mathcal{C}^D$, respectively) such that $f(b^i)=a^i$ and $g(d^i)=a^i$. Let $v_A,v_B,v_D$ be the roots of $A,B,D$, respectively. We now proceed to define epimorphisms $k: E\to B$ and $l: E\to D$ such that $f\circ k=g\circ l$. For every $i=1,2,\ldots, a$, let $J_i=|\{b\in B: b<^B b^i \}|+|\{d\in D: d<^D d^i\}|-(i-1)$. For $j=J_i+1$, let $k\restriction e_j$ and $l\restriction e_j$ be chosen so that $(k\restriction e_j) (e_j)=[v_B,b^i]_{\preceq_B}$, $(l\restriction e_j) (e_j)=[v_D,d^i]_{\preceq_D}$, and $(f\circ k)\restriction e_j=(g\circ l)\restriction e_j$, which we can do by the choice of $h$. Let $b^{i,1},\ldots, b^{i,m_i}$ be the increasing (with respect to $\leq^B$) enumeration of $\{b\in B: b^i<^B b<^B b^{i+1} \}$ and let $d^{i,1},\ldots, d^{i,n_i}$ be the increasing (with respect to $\leq^D$) enumeration of $\{d\in D: d^i<^D d<^D d^{i+1} \}$.
For $j=J_i+m+1$, $m=1,\ldots, m_i$, let $k\restriction e_j$ and $l\restriction e_j$ be such that $(k\restriction e_j) (e_j)=[v_B,b^{i,m}]_{\preceq_B}$, $(l\restriction e_j) (e_j)\leq^D d^i$, and $(f\circ k)\restriction e_j=(g\circ l)\restriction e_j$. For $j=J_i+m_i+n+1$, $n=1,\ldots, n_i$, let $k\restriction e_j$ and $l\restriction e_j$ be such that $(k\restriction e_j) (e_j)\leq^B b^i$, $(l\restriction e_j) (e_j)=[v_D,d^{i,n}]_{\preceq_D}$, and $(f\circ k)\restriction e_j=(g\circ l)\restriction e_j$. This defines the required epimorphisms $k$ and $l$. \end{proof} At this point we would like to say that there is a limit of the family $\mathcal{F}_c$, i.e. there is some $\mathbb{F}_c$ which satisfies properties (L1), (L2), and (L3). However, structures in $\mathcal{F}_c$ are not first-order structures: chains are neither functions nor relations. Therefore we cannot directly apply the Irwin-Solecki theorem about the existence and uniqueness of projective Fra\"{i}ss\'{e} limits. It does not seem that we can realize the family $\mathcal{F}_c$ as a projective Fra\"{i}ss\'{e} family. In particular, the following natural attempt fails. \begin{rem} {\rm To a structure $(A,\mathcal{C}^A)\in\mathcal{F}_c$ we associate a structure $(A,<^A)$, where $<^A$ is the linear order induced by $\mathcal{C}^A$, and we consider the family $\mathcal{F}_<$ of all $(A,<^A)$ obtained in this way. However, epimorphisms between structures in $\mathcal{F}_c$ and epimorphisms between structures in $\mathcal{F}_<$ are not the same. For example, let $A=\{a_1,a_2\}$ consist of a single branch and let $\mathcal{C}^A=\{\{a_1\}, \{a_1, a_2\}\}$, and let $B=\{b_1,b_2,b_3\}$ consist of a single branch and let $\mathcal{C}^B=\{\{b_1\},\{b_1,b_2\}, \{b_1,b_2,b_3\}\}$. Let $\phi$ satisfy $\phi(b_1)=\phi(b_3)=a_1$ and $\phi(b_2)=a_2$.
Then $\phi:(B,\mathcal{C}^B)\to (A,\mathcal{C}^A)$ is an epimorphism, whereas $\phi:(B,<^B)\to (A,<^A)$ is not an epimorphism.} \end{rem} Nevertheless, we will show that the family $\mathcal{F}_c$ can be identified with a Fra\"{i}ss\'{e}-HP family, similarly to Theorem \ref{dual}. For this we will have to consider inverse limits of structures in $\mathcal{F}_c$. Recall from Section \ref{sec:chain} that by $\mathcal{F}^*$ we denoted the family of all topological $\mathcal{L}$-structures that are countable inverse limits of finite fans in $\mathcal{F}$. If $P\in\mathcal{F}^*$ is the inverse limit of an inverse sequence $(A_n, f_m^n)$, denoted by $P=\varprojlim(A_n, f^n_m),$ we consider the partial order on $P$ given by \[x\preceq_P y \text{ iff for every } n, \ f^\infty_n(x)\preceq_{A_n} f^\infty_n(y),\] where $\preceq_{A_n}$ is the tree partial order on $A_n$. Downwards closed sets on $P\in\mathcal{F}^*$ will be taken with respect to $\preceq_P.$ If $((A_n,\mathcal{C}^{A_n}), f_m^n)$ is an inverse sequence in $\mathcal{F}_c$, we say that $(P,\mathcal{C}^P)$ is its inverse limit if $P=\varprojlim(A_n, f^n_m)$ and $\mathcal{C}^P$ is the collection of all closed subsets $C$ of $P$ such that $f^\infty_n(C)\in \mathcal{C}^{A_n}$ for every $n$. We will show that $\mathcal{C}^P$ is a downwards closed maximal chain. To see that $\mathcal{C}^P$ is a chain, note that if $C_1, C_2\in\mathcal{C}^P$, then either for all $n$, $f_n^\infty(C_1)\subset f_n^\infty(C_2)$ or for all $n$, $f_n^\infty(C_2)\subset f_n^\infty(C_1)$. In the first case, $C_1\subset C_2$, and in the second, $C_2\subset C_1$. The chain $\mathcal{C}^P$ is downwards closed, since a closed set all of whose projections are downwards closed is itself downwards closed. Finally, the chain $\mathcal{C}^P$ is maximal. Indeed, if a downwards closed set $C$ is such that $\{C\}\cup \mathcal{C}^P$ is a chain, then for each $n$, by the maximality of $\mathcal{C}^{A_n}$, $f^{\infty}_n(C)\in \mathcal{C}^{A_n}$, which by the definition of $\mathcal{C}^P$ gives $C\in\mathcal{C}^P$; by Lemma \ref{cm}, this suffices for maximality.
Let $\mathcal{F}_c^*$ be the family of all inverse limits of structures from $\mathcal{F}_c$. Clearly, we can identify $\mathcal{F}_c$ with a subfamily of $\mathcal{F}^*_c$ by assigning to $(A,\mathcal{C}^A)$ the inverse limit of $((A,\mathcal{C}^{A}), {\rm{Id}}_m^n)$. For $A_c=(A,\mathcal{C}^A)\in\mathcal{F}^*_c$ denote by $A_c\restriction\mathcal{L}$ the structure $A$. Generalizing the definition for $\mathcal{F}_c$, for $ (Q, \mathcal{C}^Q), (P, \mathcal{C}^P)\in\mathcal{F}^*_c$ we will say that $f: (Q, \mathcal{C}^Q)\to (P, \mathcal{C}^P)$ is an {\em epimorphism} iff $f: Q\to P$ is an epimorphism and for every $C\in \mathcal{C}^Q$, we have $f(C)\in \mathcal{C}^P$. If $(P, \mathcal{C}^P)=\varprojlim((A_n,\mathcal{C}^{A_n}), f^n_m)$, then $f^\infty_n: (P,\mathcal{C}^P)\to (A_n, \mathcal{C}^{A_n})$ is an epimorphism for every~$n$. We say that a function $f:(Q,\mathcal{C}^Q)\to (P,\mathcal{C}^P)$ is chain preserving iff $f(\mathcal{C}^Q)= \mathcal{C}^P$. \begin{thm}\label{stone2} The family $\mathcal{F}_c^*$ with epimorphisms is equivalent via a contravariant functor to a family of first-order structures with embeddings. \end{thm} \begin{proof} Recall that $R$ is the binary relation symbol of the language $\mathcal{L}$, and take the language $\{S,\leq_{BA},\cup,\cap,^-,0,1\}$, where $S$ and $\leq_{BA}$ are binary relation symbols and $\{\cup,\cap,^-, 0,1\}$ is the language of Boolean algebras. For $K=(K,R^K, \mathcal{C}^K)\in\mathcal{F}^*_c$, let $M=(M, S^M,\leq_{BA}^M, \cup^M, \cap^M, ^{-M}, 0^M, 1^M)$ be the structure such that $M={\rm Clop}(K)$ is the family of all clopen sets of $K$, $\cup^M$ is the union, $\cap^M$ is the intersection, $^{-M}$ is the complement, $0^M$ is the empty set, and $1^M=K$. As in Proposition \ref{stone1}, we set for every $X, Y\in M$, $S^M(X, Y)$ if and only if for some $a\in X, b\in Y$, we have $R^K(a,b)$. We first define $\leq_{BA}$ for $K\in\mathcal{F}_c$, and then we provide a definition for $K\in\mathcal{F}^*_c$. So let $K\in\mathcal{F}_c$.
As before, denote by $\leq^K$ the linear order on $K$ induced by $\mathcal{C}^K$ by letting $x<^K y$ iff there exists $C\in\mathcal{C}^K$ such that $x\in C$ and $y\notin C$. From the maximality of $\mathcal{C}^K$, the order $\leq^K$ is total. For $K\in\mathcal{F}_c$, let $\leq^K_{op}$ denote the order opposite to the order $\leq^K$, that is, we let $x\leq^K_{op} y$ iff $y\leq^K x$. Take $\leq_{BA}^M $ to be the antilexicographical order with respect to $\leq^K_{op} $, that is, for $X,Y\in M={\rm Clop}(K)=P(K)$, where $P(K)$ denotes the power set of $K$, let $X <_{BA}^M Y$ iff for $a\in K$ which is the largest with respect to $\leq^K_{op} $ such that $a\in X\triangle Y$, we have $a\in Y$. Let $f:L\to K$, where $K,L\in\mathcal{F}$, be a continuous surjection and let $F: P(K)\to P(L)$ be the map given by $F(X)=f^{-1}(X)$. \smallskip \noindent {\bf{Claim.}} $F$ is $\leq_{BA}$-preserving iff $f(\mathcal{C}^L)= \mathcal{C}^K$. \begin{proof} The function $f$ is chain preserving iff $f$ maps cofinal segments of $\leq^L_{op}$ onto cofinal segments of $\leq^K_{op}$, that is, iff $f$ maps each set of the form $\{z\in L: a \leq^L_{op} z\}$, $a\in L$, onto a set of the form $\{z\in K: b \leq^K_{op} z\}$, $b\in K$; and this holds iff $F$ is $\leq_{BA}$-preserving. \end{proof} Proposition \ref{stone1}, together with the Claim above, already implies the conclusion of the theorem for the family $\mathcal{F}_c$. Let $\mathcal{G}$ be the family of all $M$'s obtained in this way from some $K\in\mathcal{F}_c$. The maps we consider between structures in $\mathcal{G}$ are embeddings. Now let us come to the general situation where the structures come from $\mathcal{F}^*_c$. The Claim above implies that an inverse sequence $((K_n,\mathcal{C}^{K_n}), f_m^n)$ in $\mathcal{F}_c$ corresponds to a direct sequence $((M_n,\leq_{BA}^{M_n}), g_m^n)$ in $\mathcal{G}$. Let $(K, \mathcal{C}^K)$ together with epimorphisms $f^\infty_n: (K,\mathcal{C}^K)\to (K_n, \mathcal{C}^{K_n})$ be the inverse limit of $((K_n,\mathcal{C}^{K_n}), f_m^n)$.
Let $M$ together with embeddings $g^\infty_n: M_n\to M$ be the direct limit of $(M_n, g_m^n)$. By the definition of the direct limit, $X \leq_{BA}^M Y$ iff for some (equivalently every) $n$ such that there are $X_n, Y_n\in M_n$ with $g_n^\infty(X_n)=X$ and $g_n^\infty(Y_n)=Y$, we have $ X_n \leq_{BA}^{M_n} Y_n $. Then $(M,\leq_{BA}^M)$ together with embeddings $g^\infty_n: M_n\to M$ is the direct limit of $((M_n,\leq_{BA}^{M_n}), g_m^n)$. The following claim will finish the proof. \noindent{\bf Claim.} Let $f:L\to K$, where $K,L\in\mathcal{F}^*_c$, be a continuous surjection and let $F: {\rm Clop}(K)\to {\rm Clop}(L)$ be the map given by $F(X)=f^{-1}(X)$. Then $f$ preserves chains iff $F$ preserves $\leq_{BA}$. \begin{proof} Let $(L_n, l_m^n)$ be an inverse sequence in $\mathcal{F}_c$ with the limit $L$ and let $(K_n, k_m^n)$ be an inverse sequence in $\mathcal{F}_c$ with the limit $K$. Let $p_m^n: {\rm Clop}(K_m) \to {\rm Clop}(K_n)$ be the dual map to $k_m^n$ and let $q_m^n: {\rm Clop}(L_m) \to {\rm Clop}(L_n)$ be the dual map to $l_m^n$. Then $({\rm Clop}(K_m), p_m^n)$ is a direct sequence with the limit ${\rm Clop}(K)$ and $({\rm Clop}(L_m), q_m^n)$ is a direct sequence with the limit ${\rm Clop}(L)$. Given $f: L\to K$, find a strictly increasing sequence $(t_n)$ and continuous surjections $h_n: L_{t_{n}}\to K_n$ such that $h_n\circ l^{t_{n+1}}_{t_n}=k^{n+1}_{n}\circ h_{n+1}$. Let $H_n: {\rm Clop}( K_{n}) \to {\rm Clop}(L_{t_{n}})$ be the dual maps to $h_n$. Then we have: $f: L\to K$ preserves chains iff for every $n$, $h_n$ preserves chains, iff for every $n$, $H_n$ preserves linear orders, iff $F: {\rm Clop}(K)\to {\rm Clop}(L)$ preserves linear orders.
\end{proof} \end{proof} Thanks to Theorem \ref{stone2}, we not only know that there is a structure in $\mathcal{F}^*_c$, unique up to isomorphism, that satisfies conditions (L1), (L2) and (L3) for the family $\mathcal{F}_c$, but also all theorems that were proved for Fra\"{i}ss\'{e}-HP families are available to us. Let $\mathbb{L}_c$ denote the limit of the family $\mathcal{F}_c$. The lemma below, together with its corollary, tells us that $\mathbb{L}_c$ is equal to $(\mathbb{L}, \mathcal{C}^\mathbb{L})$ for some downwards closed maximal chain $\mathcal{C}^\mathbb{L}$. \begin{lemma}\label{reason} The expansion $\mathcal{F}_c$ of $\mathcal{F}$ is reasonable, that is, for every $A,B\in\mathcal{F}$, an epimorphism $\phi: B\to A$, and $A_c\in\mathcal{F}_c$ such that $A_c\restriction\mathcal{L}= A$, there is $B_c\in\mathcal{F}_c$ such that $B_c\restriction\mathcal{L}= B$ and $\phi: B_c\to A_c$ is an epimorphism. \end{lemma} \begin{proof} Let $A,B\in\mathcal{F}$, an epimorphism $\phi: B\to A$, and $A_c\in\mathcal{F}_c$ such that $A_c\restriction\mathcal{L}= A$ be given. We get $\mathcal{C}^B$ by extending in an arbitrary way the downwards closed chain $\{\phi^{-1}(C): C\in\mathcal{C}^A\}$ to a downwards closed maximal chain. \end{proof} Lemma \ref{reason} and Proposition \ref{kpt_reas} immediately imply the following corollary. \begin{cor}\label{reasoncor} We have $\mathbb{L}_c\restriction \mathcal{L}= \mathbb{L}$. \end{cor} Corollary \ref{reasoncor} implies that ${\rm Aut}(\mathbb{L}_c)$, the automorphism group of $\mathbb{L}_c$, is a subgroup of ${\rm Aut}(\mathbb{L})$, the automorphism group of $\mathbb{L}$. Recall that the group ${\rm Aut}(\mathbb{L})$ is equipped with the compact-open topology inherited from $H(\mathbb{L})$, the homeomorphism group of $\mathbb{L}$. \begin{lemma}\label{autclosed} The group ${\rm Aut}(\mathbb{L}_c)$ is a closed subgroup of ${\rm Aut}(\mathbb{L})$.
\end{lemma} \begin{proof} For any $D\in {\rm Exp}(\mathbb{L})$, the map that takes $f\in {\rm Aut}(\mathbb{L})$ and assigns to it $f(D)\in {\rm Exp}(\mathbb{L})$ is continuous. Since $\mathcal{C}^\mathbb{L}$ is maximal, it is closed in ${\rm Exp}(\mathbb{L})$, which implies that ${\rm Aut}(\mathbb{L}_c)$ is closed in ${\rm Aut}(\mathbb{L})$. \end{proof} Lemma \ref{autclosed} also follows from Proposition \ref{cont}, which we prove later. \subsection{ Extreme amenability of $\boldsymbol{{\rm Aut}(\mathbb{L}_c)}$}\label{extamen} In this section, we define a family $\mathcal{F}_{cc}$ coinitial in $\mathcal{F}_c$ and show that it is a Ramsey class. This will imply that ${\rm Aut}(\mathbb{L}_c)$ is extremely amenable and that $\mathcal{F}_c$ is a Ramsey class. Recall that for $A_c\in\mathcal{F}_c$, the chain $\mathcal{C}^A$ is canonical if there is an order on branches of $A_c$, which we denote by $\leq_{cc}^{A_c}$, given by $b\leq_{cc}^{A_c} c$ iff for every $x\in b$ and $y\in c$, we have $x\leq^{A_c} y$. Let \[\mathcal{F}_{cc}=\{(A, \mathcal{C}^A)\in\mathcal{F}_c: \mathcal{C}^A \text{ is canonical and all branches in } A \text{ have the same height} \}.\] For $A,B\in\mathcal{F}_{cc}$, let ${B \choose A}$~denote the set of all epimorphisms from $B$ onto $A$. The main result of this section is the following theorem. \begin{thm}\label{rams} The class $\mathcal{F}_{cc}$ is a Ramsey class, that is, for every integer $r\geq 2$ and for $S,T\in \mathcal{F}_{cc}$ with ${T \choose S}\neq\emptyset$ there exists $U\in \mathcal{F}_{cc}$ such that for every colouring $e: {U \choose S} \to\{1,2,\ldots,r\}$ there is $g\in {U \choose T}$ such that $\{ h\circ g: h\in {T \choose S} \}$ is monochromatic. \end{thm} \begin{prop}\label{coin} The family $\mathcal{F}_{cc}$ is coinitial in $\mathcal{F}_c$, that is, for every $A_c\in\mathcal{F}_c$ there exist $B_c\in\mathcal{F}_{cc}$ and an epimorphism from $B_c$ onto $A_c$.
Moreover, we can choose $B_c$ in a way that its height and width depend only on the height and width of $A_c$. \end{prop} \begin{proof} Let $A_c\in\mathcal{F}_c$ be of height $k$ and let $v_{A_c}$ denote its root. Let $B_c\in\mathcal{F}_{cc}$ be of height $k$ and width $l$ equal to the number of elements in $A_c$, and let $\mathcal{C}^{B_c}$ be canonical. Enumerate $A_c$ according to $\leq^{A_c}$ into $a^1,\ldots, a^l$, and enumerate branches in $B_c$ according to $\leq^{B_c}_{cc}$ into $b_1,\ldots, b_l$. Now, for each $i=1,\ldots, l$, let the branch $b_i$ be mapped onto the segment $[v_{A_c}, a^i]_{\preceq_{A}}$ in $A_c$ in an $R$-preserving way. This defines the required epimorphism from $B_c$ onto~$A_c$. \end{proof} \begin{rem}\label{fccfc} {\rm Proposition \ref{coin} implies (by Lemma \ref{fcapp} and Remark \ref{cofinall}) that $\mathcal{F}_{cc}$ satisfies the JPP and the AP and that the limits of $\mathcal{F}_c$ and $\mathcal{F}_{cc}$ are isomorphic to $\mathbb{L}_c$.} \end{rem} From Theorems \ref{kpt1} and \ref{rams}, using Remark \ref{fccfc}, we will obtain the following corollary. \begin{cor} The automorphism group ${\rm Aut}(\mathbb{L}_c)$ is extremely amenable. \end{cor} The family $\mathcal{F}_{cc}$ is easier to work with than the family $\mathcal{F}_c$. Nevertheless, $\mathcal{F}_c$ is a Ramsey class as well, which follows from the following proposition. \begin{prop} Let $\mathcal{G}_1\subset \mathcal{G}_2$ be Fra\"{i}ss\'{e}-HP families and suppose that $\mathcal{G}_1$ is coinitial in $\mathcal{G}_2$. If $\mathcal{G}_1$ is a Ramsey class, so is $\mathcal{G}_2$. \end{prop} \begin{proof} Let $\mathbb{G}$ be the Fra\"{i}ss\'{e} limit of both $\mathcal{G}_1$ and $\mathcal{G}_2$ and let $G={\rm Aut}(\mathbb{G})$. As $\mathcal{G}_1$ is a Ramsey class, Theorem \ref{kpt1} (applied to $G$ and $\mathcal{G}_1$) implies that $G$ is extremely amenable.
Then again applying Theorem \ref{kpt1}, this time to $G$ and $\mathcal{G}_2$, we get that $\mathcal{G}_2$ is a Ramsey class. \end{proof} \begin{cor}\label{xyza} The family $\mathcal{F}_c$ is a Ramsey class. \end{cor} The two main ingredients in the proof of Theorem \ref{rams} will be Theorem \ref{lelek} and Corollary \ref{dhe}. Let $\mathbb{N}=\{1,2,3,\ldots\}$ denote the set of natural numbers and let $k\in\mathbb{N}$. For a function $p:\mathbb{N}\to\{0,1,\ldots,k\}$, we define the {\em support} of $p$ to be the set ${\rm{ supp }}(p)=\{l\in\mathbb{N}: p(l)\neq 0\}$. Let \[ {\rm{ FIN }}_k=\{p:\mathbb{N}\to\{0,1,\ldots,k\}: {\rm{ supp }}(p) {\rm\ is\ finite\ and\ } \exists l\in{\rm{ supp }}(p)\ \left(p(l)=k\right)\}, \] and for each $n\in\mathbb{N}$, let \[{\rm{ FIN }}_k(n)=\{p\in{\rm{ FIN }}_k: {\rm{ supp }}(p)\subset\{1,2,\ldots,n\}\}.\] We equip ${\rm{ FIN }}_k$ and each ${\rm{ FIN }}_k(n)$ with a partial semigroup operation $+$ defined for $p$ and $q$ whenever ${\rm{ supp }}(p)\cap {\rm{ supp }}(q)=\emptyset$ by $(p+q)(x)= p(x)+ q(x)$. Gowers' {\em tetris} operation is the function $T: {\rm{ FIN }}_k\to {\rm{ FIN }}_{k-1}$ defined by \[ T(p)(l)=\max\{0,p(l)-1\}.\] We define for every $0<i\leq k$ a function $T^{(k)}_i:{\rm{ FIN }}_k\to{\rm{ FIN }}_{k-1}$, which behaves like the identity on values below $i$ and like the tetris operation on values greater than or equal to $i$, as follows. \[ T^{(k)}_i(p)(l) = \begin{cases} p(l) &\text{if } p(l)<i \\ p(l)-1 &\text{if } p(l)\geq i. \end{cases} \] We also define $T^{(k)}_0={\rm{Id}}\restriction_{{\rm{ FIN }}_k}$. It may seem more natural to denote the identity by $T^{(k)}_{k+1}$ or $T^{(k)}_\infty$; it is only for notational convenience later on that we use $T^{(k)}_0$. Note that $T^{(k)}_1$ is Gowers' tetris operation. We will usually drop superscripts and write $T_i$ rather than $T^{(k)}_i$.
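The combinatorics of ${\rm FIN}_k$ and of the operations $T_i$ are easy to experiment with. The following Python sketch is our illustration only; in particular, representing $p$ as a dictionary of its non-zero values is an assumption of ours, not part of the paper.

```python
def in_fin(p, k):
    """p is a finitely supported map N -> {0,...,k}, stored as a dict of its
    non-zero values; p lies in FIN_k iff all values are in {1,...,k} and the
    value k is attained on the support."""
    return all(0 < v <= k for v in p.values()) and k in p.values()

def tetris(p, i):
    """T_i: identity on values below i, subtract one from values >= i.
    T_0 is the identity; T_1 is Gowers' tetris operation max{0, p(l)-1}."""
    if i == 0:
        return dict(p)
    q = {l: (v if v < i else v - 1) for l, v in p.items()}
    return {l: v for l, v in q.items() if v > 0}  # drop values that hit 0
```

For instance, for $p$ with $p(2)=1$, $p(5)=3$, $p(7)=2$ and $k=3$, the map $T_1$ erases the value at $2$ and lowers the remaining values by one, while $T_3$ lowers only the value at $5$; in both cases the result lies in ${\rm FIN}_2$.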
Define \begin{equation*} \begin{split} {\rm{ FIN }}_k^{(d)}(n)=&\{(p_1,\ldots, p_d) : p_i\in{\rm{ FIN }}_k(n) \text{ and } \forall_{i< j} \left({\rm{ supp }}(p_i)\cap{\rm{ supp }}(p_j)=\emptyset\text{ and } \right. \\ &\left. \min({\rm{ supp }}(p_i))<\min({\rm{ supp }}(p_j))\right) \} \end{split} \end{equation*} and \[{\rm{ FIN }}_k^{*(d)}(n)=\{(p_1,\ldots, p_d) \in{\rm{ FIN }}_k^{(d)}(n) : \forall_{i<d}\ \min({\rm{ supp }}_k(p_{i}))<\min({\rm{ supp }}(p_{i+1})) \},\] where ${\rm{ supp }}_j(p)=\{l\in\{1,\ldots, n\}: p(l)=j\}$ for $p\in{\rm{ FIN }}_k(n)$ and $j=1,\ldots, k.$ Let $P_k=\prod_{j=1}^k\{0,1,\ldots,j\}.$ For any $\vec{i}=(i(1),\ldots,i(k))\in P_k$ denote \[ T_{\vec{i}}=T_{i(1)}\circ\ldots \circ T_{i(k)}. \] For $l> k,$ let $P_{k+1}^l=\prod_{j=k+1}^{l} \{1,2,\ldots,j\},$ and let $P_{k+1}^k$ contain only the constant sequence $(0,\ldots,0)$. Note that if $p\in{\rm{ FIN }}_l$ and $\vec{i}\in P^l_{k+1}$, then $T_{\vec{i}}(p)\in{\rm{ FIN }}_k$. Let $l\geq k$ and let $B=(b_s)_{s=1}^m\in{\rm{ FIN }}_l^{*(m)}(n)$. We denote by $\left<\bigcup_{\vec{i}\in P_{k+1}^{l}} T_{\vec{i}}(B)\right>_{P_{k}}$ the partial subsemigroup of ${\rm{ FIN }}_{k}$ consisting of elements of the form \[ \sum_{s=1}^m T_{\vec{t}_s} \circ T_{\vec{i}_s}(b_{s}), \] where $\vec{i}_1,\ldots,\vec{i}_m\in P_{k+1}^{l}$, $\vec{t}_1,\ldots,\vec{t}_m\in P_k$, and there is an $s$ such that all entries of $\vec{t}_s$ are 0. By $\left<\bigcup_{\vec{i}\in P_{k+1}^{l}} T_{\vec{i}}(B)\right>_{P_{k}}^{*(d)}$, we denote the set of all $(p_1,\ldots, p_d) \in{\rm{ FIN }}_k^{*(d)} $ such that $p_i\in \left<\bigcup_{\vec{i}\in P_{k+1}^{l}} T_{\vec{i}}(B)\right>_{P_{k}}$ for each $i$. The following theorem is the combinatorial core of Theorem \ref{rams}; its proof was inspired by the proof of a Ramsey theorem in \cite{BLLM}. \begin{thm}\label{lelek} Let $k\geq 1$.
Then for every $m\geq d,$ for every $ l\geq k,$ and for every $r\geq 2$ there exists a natural number $n$ such that for every colouring $c:{\rm{ FIN }}^{*(d)}_k(n)\to \{1,2,\ldots,r\},$ there is $B\in{\rm{ FIN }}_l^{*(m)}(n)$ such that the set $\left< \bigcup_{\vec{i}\in P_{k+1}^l}T_{\vec{i}} (B)\right>_{P_k}^{*(d)}$ is $c$-monochromatic. Denote the smallest such $n$ by $L(d,m,k,l,r).$ \end{thm} In the proof of Theorem \ref{lelek}, we will use the Graham-Rothschild theorem about colourings of partitions \cite{GR}. For natural numbers $d,n$ let $\mc P^d(n)$ denote the set of all partitions of the set $\{1,2,\ldots,n\}$ into exactly $d$ non-empty sets. We say that a partition $\mathcal{P}$ is a {\em coarsening} of a partition $\mathcal{Q}$ if for every $Q\in\mathcal{Q}$ there is $P\in\mathcal{P}$ such that $Q\subset P$. Every partition $\mathcal{P}\in \mc P^d(n)$ carries a {\em canonical enumeration}, where for $P, Q\in\mathcal{P}$ we set $P<Q$ iff $\min(P)<\min(Q)$. \begin{thm}[Graham-Rothschild, \cite{GR}] Let $k<l $ and $r\geq 2$ be given natural numbers. Then there is a natural number~$n$ such that for any colouring of $\mc P^k(n)$ into $r$ colours there is a partition $\mathcal{P}\in \mc P^l(n)$ such that the set $\{\mathcal{Q}\in \mc P^k(n): \mathcal{Q} \text{ is a coarsening of } \mathcal{P} \}$ is monochromatic. Let $\mathbf{GR}(k,l,r)$ denote the smallest such natural number $n$. \end{thm} \begin{proof}[Proof of Theorem \ref{lelek}] We set $n=\mathbf{GR}(dk+1,ml+1,r)$ and let $c:{\rm{ FIN }}^{*(d)}_k(n)\to \{1,2,\ldots,r\}$ be an arbitrary colouring. Define the map $\Phi:\mc P^{dk+1}(n)\to {\rm{ FIN }}^{*(d)}_k(n)$ that to a canonically enumerated partition $\mc P=(P_i)_{i=0}^{dk}$ assigns \[ \Phi(\mc P)_j=\sum_{s=1}^k s\cdot \mathbbm{1}_{P_{(j-1)k+s}} \] for $j=1,\ldots,d.$ Then $c\circ\Phi$ is a colouring of $\mc P^{dk+1}(n)$. Let a canonically enumerated partition $\mc Q= (Q_i)_{i=0}^{ml}\in\mc P^{ml+1}(n)$ be such that the set of its coarsenings in $\mc P^{dk+1}(n)$ is $c\circ\Phi$-monochromatic.
We define $B=(b_j)_{j=1}^m$ by \[ b_j=\sum_{s=1}^l s\cdot \mathbbm{1}_{Q_{(j-1)l+s}}. \] The following claim verifies that $\left< \bigcup_{\vec{i}\in P_{k+1}^l}T_{\vec{i}} (B)\right>_{P_k}^{*(d)}$ is contained in the image of coarsenings of $\mc Q$ of size $dk+1$ under $\Phi$, which implies that $\left< \bigcup_{\vec{i}\in P_{k+1}^l}T_{\vec{i}} (B)\right>_{P_k}^{*(d)}$ is $c$-monochromatic, as required. \smallskip \noindent {\bf{Claim.}} Let $A=(A_1,\ldots,A_d)\in \left< \bigcup_{\vec{i}\in P_{k+1}^l}T_{\vec{i}} (B)\right>_{P_k}^{*(d)}.$ Then $A=\Phi(\mathcal{P})$ for $\mc P=(P_j)_{j=0}^{dk}$ given by $P_{(s-1)k+i}:={\rm{ supp }}_i(A_s)$ for $s=1,\ldots,d$ and $i=1,\ldots,k, $ and $P_0=\{1,\ldots, n\}\setminus \bigcup_{j=1}^{dk} P_j.$ \begin{proof} Clearly, $\mc P$ is a coarsening of $\mc Q$. Therefore all we have to show is that $\mc P$ is a partition into exactly $dk+1$ nonempty sets (see (1) below) and that the enumeration of sets in $\mc P$ is the canonical enumeration (see (2)--(4) below). \begin{enumerate} \item For every $i=1,\ldots, k$ and $s=1,\ldots, d$, we have $ {\rm{ supp }}_i(A_s)\neq\emptyset$. \item We have $Q_0\subset P_0$. \item For every $i,i'=1,\ldots, k$, $i<i'$, and $s=1,\ldots, d$ \[\min {\rm{ supp }}_i(A_s)<\min {\rm{ supp }}_{i'}(A_{s}).\] \item For every $s=1,\ldots, d-1$, \[\min {\rm{ supp }}_k(A_s)<\min {\rm{ supp }}(A_{s+1}).\] \end{enumerate} Property (2) is clear. Properties (1) and (3) follow from the definition of $B$ and from the fact that we can write \[ A_s=\sum_{j\in J_s} T_{\vec{i}_j^s} b_j \] for some $\vec{i}_j^s\in P_k P_{k+1}^l$ (where $P_k P_{k+1}^l$ is the set of concatenations of sequences in $P_k$ and $P_{k+1}^l$) and $J_s\subset\{1,\ldots, m\}$.
Property (4) follows from $A\in{\rm{ FIN }}_k^{*(d)}(n).$ \end{proof} \end{proof} For a natural number $N$ we let $N^{[j]}$ denote the collection of all $j$-element subsets of $\{1,\ldots, N\}$ and let $N^{[\leq j]}$ denote the collection of all at most $j$-element subsets of $\{1,\ldots,N\}.$ Note that $N^{[\leq j]}=\bigcup_{i=0}^j N^{[i]}$. We will often write $N$ instead of $N^{[1]}$. Let $m, k_1,\ldots,k_m, l_1,\ldots,l_m, r\geq 2$, and $N$ be natural numbers and let \[ c: \prod_{i=1}^m N^{[\leq k_i]}\to \{1,2,\ldots,r\} \] be a colouring. Given $B_i\subset N$ for $i=1,2,\ldots,m,$ we say that $c$ is {\em size-determined} on $(B_i)_{i=1}^m$ if whenever $A_i, A_i'\subset B_i$ are such that $0\leq |A_i|=|A_i'|\leq k_i$ for $i=1,2,\ldots,m$ then \[ c(A_1,\ldots,A_m)=c(A_1',\ldots, A_m'). \] \begin{thm}[Theorem 4.10, \cite{BK2}]\label{Tdifheights} Let $m, k_1,\ldots,k_m, l_1,\ldots,l_m$ and $ r\geq 2$ be natural numbers such that $k_i\leq l_i$ for every $i=1,2,\ldots,m$. Then there exists $N$ such that for every colouring \[ c: \prod_{i=1}^m N^{[\leq k_i]}\to \{1,2,\ldots,r\} \] there exist $B_1,\ldots, B_m\subset N$ with $|B_i|=l_i$ such that $c$ is size-determined on $(B_i)_{i=1}^m$. Denote by $S(m,k_1,\ldots,k_m,l_1,\ldots,l_m,r)$ the minimal such $N.$ \end{thm} For $f\in \prod_{i=1}^m N^{[\leq k_i]}$, we define ${\rm{ supp }}(f)=\{i: f(i)\neq\emptyset\}$. For a natural number~$d,$ let $\left(\prod_{i=1}^m N^{[\leq k_i]}\right)^{*(d)}$ be the set of all sequences $(f_s)_{s=1}^d$ with $f_s\in \prod_{i=1}^m N^{[\leq k_i]}$ and ${\rm{ supp }}(f_s)\cap {\rm{ supp }}(f_{s+1})=\emptyset$ for each $s<d$. Note that supports of some of the $f_s$ may be empty.
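The size-determined condition of Theorem \ref{Tdifheights} can be checked by brute force on small instances; the Python sketch below is an illustration of ours and is not used in any proof.

```python
from itertools import combinations, product

def is_size_determined(c, B, k):
    """Check that c(A_1,...,A_m) depends only on (|A_1|,...,|A_m|) as each A_i
    ranges over the subsets of B_i of size at most k_i."""
    choices = [
        [frozenset(A) for j in range(k_i + 1) for A in combinations(B_i, j)]
        for B_i, k_i in zip(B, k)
    ]
    colour_of_sizes = {}  # size vector -> the one colour it may have
    for tup in product(*choices):
        sizes = tuple(len(A) for A in tup)
        if colour_of_sizes.setdefault(sizes, c(*tup)) != c(*tup):
            return False
    return True
```

For instance, a colouring depending only on the parity of $\sum_i |A_i|$ is size-determined on any $(B_i)_{i=1}^m$, while a colouring that reads off $\min A_1$ in general is not.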
Then, more generally, if \[ \chi: \left(\prod_{i=1}^m N^{[\leq k_i]}\right)^{*(d)}\to \{1,2,\ldots,r\} \] is a colouring and $B_i\subset N$ for $i=1,2,\ldots,m,$ we say that $\chi$ is {\em size-determined} on $(B_i)_{i=1}^m$ if whenever $(f_s)_{s=1}^d$ and $(g_s)_{s=1}^d$ are such that for each $i$ and $s$, ${\rm{ supp }}(f_s)={\rm{ supp }}(g_s)$, $f_s(i),g_s(i)\subset B_i$, and $|f_s(i)|=|g_s(i)|$, then \[\chi \left( (f_s)_{s=1}^d\right)=\chi \left( (g_s)_{s=1}^d\right).\] Corollary \ref{dhe} is a multidimensional version of Theorem \ref{Tdifheights}. \begin{cor}\label{dhe} Let $d\leq m,$ and $k_1\leq l_1,\ldots,k_m\leq l_m,$ and $ r\geq 2$ be natural numbers. Then there exists $N$ such that for every colouring \[ \chi: \left(\prod_{i=1}^m N^{[\leq k_i]}\right)^{*(d)}\to \{1,2,\ldots,r\} \] there exist $B_1,\ldots, B_m\subset N$ with $|B_i|=l_i$ such that $\chi$ is size-determined on $(B_i)_{i=1}^m$. Denote by $S(d,m,k_1,\ldots,k_m,l_1,\ldots,l_m,r)$ the minimal such $N.$ \end{cor} \begin{proof} Denote by $\Gamma$ the set of all surjections $\gamma:\{1,\ldots, m\}\to\{1,\ldots, d\}$, viewed as ordered partitions of $\{1,\ldots,m\}$ into $d$ non-empty pieces. Let $c: \prod_{i=1}^m N^{[\leq k_i]}\to r^{\Gamma}$ be the colouring given by \[ c(A_1,\ldots, A_m)(\gamma)=\chi\left((f^\gamma_s)^d_{s=1}\right), \text{ where }\] \[f_s^\gamma(n) = \begin{cases} A_n & \text{ if } n\in\gamma^{-1}(s), \\ \emptyset & \text{otherwise}. \end{cases} \] Applying Theorem \ref{Tdifheights}, we get $B_1,\ldots, B_m\subset N$ with $|B_i|=l_i$ such that $c$ is size-determined on $(B_i)_{i=1}^m$. It follows that $\chi$ is size-determined on $(B_i)_{i=1}^m$. \end{proof} The proof below is similar to the proof of Theorem 4.12 in Barto\v sov\'a-Kwiatkowska~\cite{BK2}. \begin{proof}[Proof of Theorem \ref{rams}] Let $S\in \mathcal{F}_{cc}$ be of height $k$ and width $d$, and let $T\in \mathcal{F}_{cc}$ be of height $l\geq k$ and width $m\geq d$ (so that ${T\choose S}\neq \emptyset$). Let $r\geq 2$ be the number of colours.
Let $n$ be as in Theorem \ref{lelek} for $d,m,k,l,r$, that is $n=L(d,m,k,l,r)$, and let $N$ be as in Corollary \ref{dhe} for $d, n,k,\ldots,k,l,\ldots,l,r$, that is $N=S(d,n,k,\ldots,k,l,\ldots,l,r)$. Let $U\in\mathcal{F}_{cc}$ consist of $n$ branches of height $N.$ We will show that this $U$ works for $S,T$ and $r$ colours. Let $a_1,\ldots, a_d$ and $c_1,\ldots,c_n$ be the increasing (according to $<^S_{cc}$ and $<^U_{cc}$) enumerations of branches in $S$ and $U$, respectively. Let $(a_j^i)_{i=0}^k$ be the increasing enumeration of the branch $a_j$, $j=1,\ldots,d$, and let $(c_j^i)_{i=0}^N$ be the increasing enumeration of the branch $c_j$ for $j=1,\ldots,n.$ To each $f\in {U\choose S}$, we associate $f^*=(p^f_i )_{i=1}^d\in {\rm{ FIN }}_k^{*(d)}(n)$ such that \[ {\rm{ supp }}(p^f_i)=\{j: a^1_i\in f(c_j)\} \] and for $j\in {\rm{ supp }}(p_i^f)$ \[ p_i^f(j)=z \ \iff \ f(c_j^N)=a_i^z. \] We moreover associate to $f$ a sequence $(F_i^f)_{i=1}^d\in (\prod_{j=1}^n (c_j\setminus \{c_j^0\})^{[\leq k]})^{*(d)}$ such that for each $i$ there is $j$ with $F_i^f(j)\in (c_j\setminus \{c_j^0\})^{[k]}$ as follows. For $j\in{\rm{ supp }}(p_i^f),$ we let \[ F_i^f(j)=\{\min\{c_j^y\in c_j : f(c_j^y)=a_i^x\} : 0<x\leq p^f_i (j)\},\] where the $\min$ above is taken with respect to the partial order $\preceq_U$ on the fan $U$. Let us remark that $f\mapsto (F_i^f)_{i=1}^d$ is an injection from ${U\choose S}$ to $(\prod_{j=1}^n(c_j\setminus \{c_j^0\})^{[\leq k]})^{*(d)}$ and $f\mapsto f^*$ is a surjection from ${U\choose S}$ to ${\rm{ FIN }}_k^{*(d)}(n)$. Note that if $f_1^*=f_2^*$ then $|F_i^{f_1}(j)|=|F_i^{f_2}(j)|$ for all $i,j.$ Analogously, to any $g\in{U\choose T}$, we associate $g^*\in{\rm{ FIN }}_l^{*(m)}(n)$ and $(F_i^g)_{i=1}^m\in (\prod_{j=1}^n (c_j\setminus \{c_j^0\})^{[\leq l]})^{*(m)}$. Let $e: {U\choose S}\to \{1,\ldots, r\}$ be a colouring.
Let $e_0$ be a colouring of $(\prod_{j=1}^n (c_j\setminus \{c_j^0\})^{[\leq k]})^{*(d)}$ induced by the colouring $e$ via the injection $f\mapsto (F_i^f)_{i=1}^d$. We colour elements in $(\prod_{j=1}^n (c_j\setminus \{c_j^0\})^{[\leq k]})^{*(d)}$ not of the form $(F_i^f)_{i=1}^d$ in an arbitrary way by one of the colours in $\{1,\ldots, r\}$. Applying Corollary \ref{dhe}, we can find $C_j\subset c_j\setminus \{c_j^0\}$ of size $l$ for $j=1,\ldots,n$ such that $e_0$ is size-determined on $(C_j)_{j=1}^n$. It follows that the colouring $e^*:{\rm{ FIN }}_k^{*(d)}(n)\to \{1,2,\ldots,r\}$ given by $e^*(f^*)=e(f)$ for $f\in {U\choose S}$ with $(F_i^f)_{i=1}^d\in (\prod_{j=1}^n C_j^{[\leq k]})^{*(d)}$ is well-defined. Now we can apply Theorem \ref{lelek} to get $D=(d_j)_{j=1}^m\in{\rm{ FIN }}_l^{*(m)}(n)$ such that $\left<\bigcup_{\vec{i}\in P_{k+1}^l} T_{\vec{i}}(D)\right>^{*(d)}_{P_k}$ is $e^*$-monochromatic. Let $g\in{U\choose T}$ be any epimorphism such that $(F_i^g)_{i=1}^m\in (\prod_{j=1}^n C_j^{[\leq l]})^{*(m)}$ and $g^*=D$. Then for every $h\in{T\choose S}$, we have $(h\circ g)^*\in \left<\bigcup_{\vec{i}\in P_{k+1}^l} T_{\vec{i}}(D)\right>^{*(d)}_{P_k}$. Since $\left<\bigcup_{\vec{i}\in P_{k+1}^l} T_{\vec{i}}(D)\right>^{*(d)}_{P_k}$ is $e^*$-monochromatic, we conclude that ${T\choose S}\circ g$ is $e$-monochromatic. \end{proof} \subsection{The universal minimal flow of $\boldsymbol{{\rm Aut}(\mathbb{L})}$}\label{umfa} In this section, we present two descriptions of the universal minimal flow of ${\rm Aut}(\mathbb{L})$ and we exhibit an explicit isomorphism between them. Let $X^*$ be the set of downwards closed maximal chains on $\mathbb{L}$ equipped with the topology inherited from ${\rm Exp}({\rm Exp}(\mathbb{L}))$. This space is compact, as we proved in Proposition \ref{ccompact}.
The group ${\rm Aut}(\mathbb{L})$ acts on $X^*$ by left translations \[g\cdot \mathcal{C}_1=\mathcal{C}_2 \ \ \iff \ \ \mathcal{C}_2=\{g(C): C\in \mathcal{C}_1\}.\] \begin{prop}\label{cont} The action $ {\rm Aut}(\mathbb{L})\curvearrowright X^*$ is continuous. \end{prop} \begin{proof} We have the following general fact: whenever a Polish group $H$ acts continuously on a compact space $K$, the corresponding action of $H$ on ${\rm Exp}(K)$ by left translations is also continuous (see page 20 in \cite{BKe}, Example (ii)). It follows, using this fact twice, that the action of ${\rm Aut}(\mathbb{L})$ on ${\rm Exp}({\rm Exp}(\mathbb{L}))$ by left translations is continuous. Therefore the restriction of this action to the closed invariant set $X^*$ is also continuous, which is what we wanted to show. \end{proof} In this section, we prove the following: \begin{thm}\label{umfauto} The universal minimal flow of ${\rm Aut}(\mathbb{L})$ -- the automorphism group of the projective Fra\"{i}ss\'{e} limit of finite fans -- is equal to \[ {\rm Aut}(\mathbb{L})\curvearrowright X^* \] and it is isomorphic to \[ {\rm Aut}(\mathbb{L})\curvearrowright\widehat{{\rm Aut}(\mathbb{L})/{\rm Aut}(\mathbb{L}_c)}. \] \end{thm} Let \begin{equation*} \begin{split} X_{\mathbb{L}_c}=&\{\mathcal{C}\in X^*: {\rm\ for\ every\ } A\in\mathcal{F} {\rm\ and\ an\ epimorphism\ } \phi: \mathbb{L}\to A {\rm\ there \ exists\ } \\ & A_c\in\mathcal{F}_c, {\rm such \ that\ } \phi: (\mathbb{L},\mathcal{C})\to A_c {\rm \ is\ an\ epimorphism }\}. \end{split} \end{equation*} We make $X_{\mathbb{L}_c}$ a topological space by declaring the sets \[ V_{\phi, A_c}=\{\mathcal{C}\in X_{\mathbb{L}_c} : \phi: (\mathbb{L},\mathcal{C})\to A_c \text{ is an epimorphism} \}, \] where $\phi: \mathbb{L}\to A$ is an epimorphism, $A_c\in\mathcal{F}_c$, and $A_c\restriction \mathcal{L}= A$, to be open. \begin{prop}\label{ident2} We have $X_{\mathbb{L}_c}=X^*$.
\end{prop} \begin{proof} We have to show two things: $X^*\subset X_{\mathbb{L}_c}$ and the topologies on $X^*$ and $X_{\mathbb{L}_c}$ agree. Lemma \ref{image3} implies that if $\mathcal{C}\in X^*$ and $\phi: \mathbb{L}\to A$ is an epimorphism, then there is a (necessarily unique) $A_c\in\mathcal{F}_c$ with $A_c\restriction \mathcal{L}= A$ such that $\phi: (\mathbb{L},\mathcal{C})\to A_c$ is an epimorphism, from which it follows that $X^*\subset X_{\mathbb{L}_c}$. Since $X^*$ and $X_{\mathbb{L}_c}$ are both compact, it suffices to show that the identity map from $X^*$ to $X_{\mathbb{L}_c}$ is continuous. For this we show that sets of the form $V_{\phi, A_c}$ are open in $X^*$. For any partition $P$ of $\mathbb{L}$ into clopen sets and any $P_1,\ldots, P_n\subset P$, by the definition of the Vietoris topology, each of the sets \begin{align*} \overline{P}_i= & \{ D\in {\rm Exp}( \mathbb{L}): D\subset\bigcup P_i \text{ and for every } p\in P_i,\ D\cap p\neq\emptyset \} \\ =&\{ D\in {\rm Exp}( \mathbb{L}): P_i =\{p\in P: D\cap p\neq\emptyset\} \} \end{align*} is open in ${\rm Exp}( \mathbb{L})$ and therefore \begin{align*} V_{P_1,\ldots,P_n}=& \{\mathcal{C}\in { \rm Exp( Exp(}\mathbb{L})) : \mathcal{C}\subset \overline{P}_1\cup\ldots \cup\overline{P}_n \text{ and for every } i,\ \mathcal{C}\cap\overline{P}_i\neq\emptyset \} \\ =&\{\mathcal{C}\in { \rm Exp( Exp(}\mathbb{L})) : \\ & \{P_1,\ldots,P_n\}=\{Q\subset P : \text{ for some } D\in\mathcal{C}, \ Q=\{p\in P: D\cap p\neq\emptyset\} \} \ \} \end{align*} is open in ${ \rm Exp( Exp(}\mathbb{L})) $. Now if $P=\{\phi^{-1}(a): a\in A\}$, where $\phi: \mathbb{L}\to A$ is an epimorphism, and if $P_i =\{\phi^{-1}(a_j): a_j\leq^{A_c} a_i\}$, where $A=\{a_1,\ldots, a_n\}$ is linearly ordered by $\leq^{A_c}$, the order on $A$ induced by the chain on $A_c$, then $ V_{\phi, A_c}=V_{P_1,\ldots,P_n}\cap X^*$, which implies that $ V_{\phi, A_c}$ is open. 
\end{proof} \begin{lemma}\label{minim} The family $\mathcal{F}_c$ has the expansion property with respect to $\mathcal{F}$, that is, for any $A_c\in\mathcal{F}_c$ there is $D\in\mathcal{F}$ such that for any expansion $D_c\in\mathcal{F}_c$ of $D$, there is an epimorphism $\phi: D_c\to A_c$. \end{lemma} \begin{proof} Using Proposition \ref{coin}, find $B_c\in\mathcal{F}_{cc}$ such that there is an epimorphism from $B_c$ onto $A_c$. Let $k$ and $l$ be the height and width of $B_c$, respectively. We show that $D\in\mathcal{F}$ of height $m=kl$ and width $l$ is as required, that is, for any $D_c\in\mathcal{F}_c$ with $D_c\restriction \mathcal{L}= D$, there is an epimorphism from $D_c$ onto $B_c$. Let $b_1 <^{B_c}_{cc} b_2 <^{B_c}_{cc}\ldots <^{B_c}_{cc} b_l$ be the increasing enumeration of branches of $B_c$ and let $d_1, d_2, \ldots, d_l$ be a list of all branches of $D$. For each $i$, let $(d_i^j)_{j=0}^m$ be the $\preceq_D$-increasing enumeration of the branch $d_i$ and let $(b_i^j)_{j=0}^k$ be the $\preceq_B$-increasing enumeration of the branch $b_i$. Let $v_B$ and $v_D$ denote the roots of $B$ and $D$, respectively. Take any $D_c\in\mathcal{F}_c$ with $D_c\restriction \mathcal{L}= D$. We recursively define a sequence $(x^i)_{i=1}^l$ as follows. Let $x^i$ be the least (with respect to the linear order $\leq^{D_c}$ induced by $D_c$) element of the set $\{d_j^{ik}: j=1,2,\ldots, l\}$ which is not on the same branch as any of $x^1,\ldots, x^{i-1}$. Let $\bar{d}_i$ denote the branch of $D$ containing $x^i$, so that $x^i=\bar{d}_i^{ik}$. Let $\psi: D_c\to B_c$ be the $R$-preserving map satisfying for each $i$, \begin{align*} &\psi(\bar{d}_i^{(i-1)k+t})=b_i^t, \ t=1,2,\ldots, k,\\ &\psi([\bar{d}_i^{ik}, \bar{d}_i^m]_{\preceq_D})=b_i^k, \text { and} \\ &\psi([v_D, \bar{d}_i^{(i-1)k}]_{\preceq_D})=v_B. \end{align*} Then $\psi$ preserves chains and therefore is the required epimorphism.
\end{proof} \begin{proof}[Proof of Theorem \ref{umfauto}] We will apply Theorem \ref{kpt2}, via Theorem \ref{stone2}, to families $\mathcal{F}$ and $\mathcal{F}_c$ to show that the flow ${\rm Aut}(\mathbb{L})\curvearrowright X_{\mathbb{L}_c}$ is the universal minimal flow of ${\rm Aut}(\mathbb{L})$. Clearly, $\mathcal{F}_c$ is a precompact expansion of $\mathcal{F}$ and from Lemma \ref{image3} it follows that the property~$(*)$ holds for $\mathcal{F}$ and $\mathcal{F}_c$. Further, the expansion $\mathcal{F}_c$ of $\mathcal{F}$ is reasonable (Lemma \ref{reason}), $\mathcal{F}_c$ has the expansion property with respect to $\mathcal{F}$ (Lemma \ref{minim}), and $\mathcal{F}_{c}$ is a Ramsey class (Theorem \ref{rams} and Corollary \ref{xyza}). Finally, Corollary \ref{ident} implies that ${\rm Aut}(\mathbb{L})\curvearrowright \widehat{{\rm Aut}(\mathbb{L})/{\rm Aut}(\mathbb{L}_c)}$ and Proposition \ref{ident2} implies that ${\rm Aut}(\mathbb{L})\curvearrowright X^*$ describe the universal minimal flow of ${\rm Aut}(\mathbb{L})$. \end{proof} \section{The universal minimal flow of $H(L)$}\label{umfh} We will compute the universal minimal flow of $H(L)$ -- the homeomorphism group of the Lelek fan $L$ -- in two ways, as we did for ${\rm Aut}(\mathbb{L})$; one description will correspond to the quotient description, and the other to the chain description. \subsection{The 1st description} Let $\pi:\mathbb{L}\to L$ denote the continuous surjection and let $\pi^*: {\rm Aut}(\mathbb{L})\to H(L)$ denote the continuous homomorphism with a dense image, both defined in Section~\ref{sec:lelek}. The chain $\mathcal{C}^L=\pi(\mathcal{C}^\mathbb{L})$ is downwards closed and it is maximal by Lemma \ref{image3}. Let $L_c=(L, \mathcal{C}^L)$ and set \[H(L_c)=\{h\in H(L): h(\mathcal{C}^L)=\mathcal{C}^L\}.\] Note that $H(L_c)$ is closed in $H(L)$. 
Indeed, since for any $D\in {\rm Exp}(L)$ the map that takes $h\in H(L)$ and assigns to it $h(D)\in {\rm Exp}(L)$ is continuous, and since $\mathcal{C}^L$ is maximal and hence closed in ${\rm Exp}(L)$, we get that $H(L_c)$ is closed. In this section, we will prove the following theorem. \begin{thm}\label{umfhom1} The universal minimal flow of $H(L)$ is equal to $H(L)\curvearrowright \widehat{H(L)/H(L_c)}$. \end{thm} Theorem \ref{image} below will immediately imply that the universal minimal flow of $H(L)$ is equal to $\widehat{H(L)/H_1}$, where $H_1=\overline{\pi^*({\rm Aut}(\mathbb{L}_c))}$. We will identify $H_1$ with $H(L_c)$ in Proposition \ref{hl}. First, we need Theorem \ref{unii}, whose proof is in \cite{NVT} (the proof of $ii)\to i)$ of Theorem 5). We will say that a flow $G\curvearrowright X$ is {\em universal} in a family of $G$-flows $\mathcal{FL}$ if $G\curvearrowright X\in \mathcal{FL}$ and for every $G$-flow $G\curvearrowright Y\in \mathcal{FL}$ there is a continuous $G$-map from $X$ onto $Y$. \begin{thm}\label{unii} Let $G$ be a Polish group and let $H$ be its closed subgroup. Suppose that $H$ is extremely amenable and that $G/H$ is precompact. Then the $G$-flow $G\curvearrowright\widehat{G/H}$ is universal in the family of $G$-flows in which there is an $H$-fixed point with a dense orbit. \end{thm} \begin{thm}\label{image} Let $G$ be a Polish group and let $H$ be a closed subgroup of $G$. Suppose that $H$ is extremely amenable, $G/H$ is precompact, and $M(G)=\widehat{G/H}$. Let $G_1$ be a Polish group and let $\phi: G\to G_1$ be a continuous homomorphism with a dense image. Then $M(G_1)=\widehat{G_1/H_1}$, where $H_1=\overline{\phi(H)}$. \end{thm} \begin{proof} First note that $H_1$ is extremely amenable (see Lemma 6.18 in \cite{KPT}) and that $\phi$ is uniformly continuous. We show that $G_1/H_1$ is precompact. Consider $\tilde{\phi}: G/H\to G_1/H_1$ given by $\tilde{\phi}(gH)=\phi(g)H_1$.
This map is well defined, uniformly continuous (with respect to the quotients of the right uniformities), and has a dense image. As the image of a precompact space under a uniformly continuous map is precompact (a straightforward calculation; this property is also stated in Engelking \cite{E}, page 445, 2nd paragraph), $\tilde{\phi}(G/H)$ is precompact in the uniformity inherited from $G_1/H_1$. Moreover, as $\tilde{\phi}(G/H)$ is dense in $G_1/H_1$, the completions of the two spaces are equal (see \cite{E}, 8.3.12). This gives that $G_1/H_1$ is precompact. Since $\tilde{\phi}$ is uniformly continuous, $\tilde{\phi}$ extends to a $G$-map $\tilde{\Phi}: \widehat{G/H}\to \widehat{G_1/H_1}$ (see 8.3.10 in \cite{E}). As $\tilde{\phi}(G/H)$ is dense in $G_1/H_1$ and the image $\tilde{\Phi}(\widehat{G/H})$ is closed in $\widehat{G_1/H_1}$, $\tilde{\Phi}$ is onto. The continuous action $G_1\curvearrowright \widehat{G_1/H_1}$ and $\phi: G\to G_1$ induce a continuous action $G\curvearrowright \widehat{G_1/H_1}$. As $G\curvearrowright \widehat{G/H}$ is minimal, so is $G\curvearrowright \widehat{G_1/H_1}$; therefore the flow $G_1\curvearrowright \widehat{G_1/H_1}$ is minimal as well. By Theorem \ref{unii}, the flow $G_1\curvearrowright \widehat{G_1/H_1}$ is universal in the family of minimal $G_1$-flows. \end{proof} To finish the proof of Theorem \ref{umfhom1}, we show the proposition below. \begin{prop}\label{hl} We have $H_1=H(L_c)$. \end{prop} To show Proposition \ref{hl}, we need Lemma \ref{hl2}, which generalizes the following lemma. \begin{lemma}[Barto\v sov\'a-Kwiatkowska \cite{BK}, Lemma 2.14]\label{coverold} Let $d<1$ be any metric on~$L$. Let $\epsilon>0$ and let $v$ be the root of $L$.
Then there is $A\in\mathcal{F}$ and an open cover $(U_a)_{a\in A}$ of $L$ such that \begin{itemize} \item[(C1)] for each $a\in A$, $\text{diam}(U_a)<\epsilon$, \item[(C2)] for each $a,a'\in A$, if $U_a\cap U_{a'}\neq\emptyset$ then $R^{A}(a,a') $ or $R^{A}(a', a) $, \item[(C3)] for each $x,y\in L$ with $x\in [v,y]$, if $x\in U_a$ and $y\in U_b$, $a\neq b$, and $\{x,y\}\not\subset U_a\cap U_b$, then $a\preceq_A b$, where $\preceq_A$ is the partial order on $A$, \item[(C4)] for every $a\in A$ there is $x\in L$ such that $x\in U_a\setminus(\bigcup \{U_{a'}: a'\in A, a'\neq a\}).$ \end{itemize} \end{lemma} \begin{lemma}\label{hl2} Let $d<1$ be any metric on $L$. Let $\epsilon>0$ and let $v$ be the root of $L$. Then there is $A_c=(A,\mathcal{C}^A)\in\mathcal{F}_c$ and an open cover $(U_a)_{a\in A_c}$ of $L_c$ such that (C1)--(C4) from Lemma \ref{coverold} hold and additionally we have the following. {\rm (C5)} For any $c\in \mathcal{C}^L$, the set $\{a\in A: c\cap U_a\neq\emptyset\}$ is in $\mathcal{C}^A$. \end{lemma} \begin{proof} Take the cover $(U_a)_{a\in A}$ of $L$ as in Lemma \ref{coverold}. Then the set $\{\{a\in A: c\cap U_a\neq\emptyset\}: c\in\mathcal{C}^L\}$ is a downwards closed chain in $A$. To get $A_c$, extend this chain to a downwards closed maximal one. \end{proof} \begin{proof}[Proof of Proposition \ref{hl}] Since $H(L_c)$ is closed, it follows that $H_1\subset H(L_c)$. To show the converse, take $h\in H(L_c)$ and $\epsilon>0$. Let $d<1$ be any metric on $L$ and let $d_{\sup}$ be the corresponding supremum metric on $H(L)$. We will find $\gamma\in{\rm Aut}(\mathbb{L}_c)$ such that $d_{\sup}(h,\gamma^*)<\epsilon$, which will finish the proof, as $\gamma^*\in H_1$ and $H_1$ is closed. Let $A_c\in\mathcal{F}_c$ and $(U_a)_{a\in A}$, an open cover of $L$, be as in Lemma \ref{hl2} taken for $d$ and $\epsilon$. Since $h$ is uniformly continuous, we can assume additionally that for each $a\in A$, $\text{diam}(h(U_a))<\epsilon$.
Let $(V^1_a)_{a\in A}$ and $(V^2_a)_{a\in A}$ be the open covers of $\mathbb{L}$ given by $V^1_a=\pi^{-1}(U_a)$ and $V^2_a=\pi^{-1}(h(U_a))$. For $B^i\in\mathcal{F}$ and an epimorphism $\phi^i: \mathbb{L}\to B^i$ such that $\{(\phi^i)^{-1}(b): b\in B^i\}$ refines $(V^i_a)_{a\in A}$, $i=1,2$, denote by $(W^i_a(\phi^i))_{a\in A}$ the clopen partition of $\mathbb{L}$ such that for every~$a$, $W^i_a(\phi^i)$ is the union of all $(\phi^i)^{-1}(b)$ which lie in $V^i_a$ and do not lie in a $V^i_{a'}$ for some $a'$ with $R^A(a,a')$. This partition defines an epimorphism $\psi_{\phi^i}$ from $\mathbb{L}$ to $A$. Indeed, by (C4), $\psi_{\phi^i}$ is onto. The properties (C2) and (C3) imply that if $x,y\in \mathbb{L}$ satisfy $R^{\mathbb{L}}(x,y)$ then $R^{A}(\psi_{\phi^i}(x),\psi_{\phi^i}(y))$, which already gives that $\psi_{\phi^i}:\mathbb{L}\to A$ is an epimorphism. Observe that if we take arbitrary $B^i\in\mathcal{F}$ and epimorphisms $\phi^i:\mathbb{L}\to B^i$, $i=1,2$, such that $\{(\phi^i)^{-1}(b): b\in B^i\}$ refines $(V^i_a)_{a\in A}$, then for any $c\in\mathcal{C}^\mathbb{L}$ and $a\in A$, we have $a\in\psi_{\phi^i}(c)$ iff $c\cap W_a(\phi^i)\neq\emptyset$. Define $\mathcal{D}=\{\{a: c\cap V^i_a\neq\emptyset\}: c\in\mathcal{C}^\mathbb{L}\}$ and observe that $\mathcal{D}=\{\{a: c\cap U_a\neq\emptyset\}: c\in\mathcal{C}^L\}$, and hence $\mathcal{D}\subseteq \mathcal{C}^A$, by the definition of $\mathcal{C}^A$. This follows for $i=1$ from the fact that for any $c\in\mathcal{C}^\mathbb{L}$ and $a\in A$, $c\cap V^1_a\neq\emptyset$ iff $c\cap \pi^{-1}(U_a)\neq\emptyset$ iff $\pi(c)\cap U_a\neq\emptyset$; and for $i=2$ it follows from the fact that for any $c\in\mathcal{C}^\mathbb{L}$ and $a\in A$, $c\cap V^2_a\neq\emptyset$ iff $c\cap \pi^{-1}(h(U_a))\neq\emptyset$ iff $\pi(c)\cap h(U_a)\neq\emptyset$ iff $h^{-1}(\pi(c))\cap U_a\neq\emptyset$, using that $h^{-1}$ preserves $\mathcal{C}^L$. 
\smallskip \noindent{\bf{Claim.}} There are $B^i\in\mathcal{F}$ and epimorphisms $\phi^i:\mathbb{L}\to B^i$, $i=1,2$, such that $\{(\phi^i)^{-1}(b): b\in B^i\}$ refines $(V^i_a)_{a\in A}$, satisfying $\mathcal{C}^A=\psi_{\phi^i}(\mathcal{C}^\mathbb{L})$. \smallskip \begin{proof}[Proof of the Claim] We fix $i=1,2$. Let $k$ be the number of elements in $A$ and let $\mathcal{D}$ be as before. Note that $\mathcal{D}$ may not be maximal. Inductively on $1\leq m\leq k$ we construct $B_m\in\mathcal{F}$ together with an epimorphism $\phi_m:\mathbb{L}\to B_m$ such that the partition $P_m=\{\phi_m^{-1}(b): b\in B_m\}$ refines $P_{m-1}$ and the first $m$ links of the maximal chain $\psi_{\phi_m}(\mathcal{C}^\mathbb{L})$ are equal to the first $m$ links of $\mathcal{C}^A$. Then $\phi^i=\phi_k$ will be as required. We let $\phi_1:\mathbb{L}\to B_1$ be an arbitrary epimorphism such that $\{\phi_1^{-1}(b): b\in B_1\}$ refines $(V^i_a)_{a\in A}$. Suppose that we have already constructed $\phi_1:\mathbb{L}\to B_1,\ldots, \phi_m:\mathbb{L}\to B_m$, $m<k$, and that there is an $m$-element link $M$ in $\mathcal{D}$. We construct $\phi_{m+1}:\mathbb{L}\to B_{m+1},\ldots, \phi_{m+l}:\mathbb{L}\to B_{m+l}$, where $l>0$ is the smallest number such that there is an $(m+l)$-element link $N$ in $\mathcal{D}$. If $l=1$ and the $(m+1)$-st link of the maximal chain $\psi_{\phi_m}(\mathcal{C}^\mathbb{L})$ is equal to the $(m+1)$-st link of $\mathcal{C}^A$ (which may not be the case even if $l=1$), we let $\phi_{m+1}=\phi_m$. Otherwise, let us see that \[S=\{d\in\mathcal{C}^\mathbb{L}: N=\{a\in A: d\cap V^i_a\neq\emptyset\} \text{ and } \psi_{\phi_m}(d)=M\}\] has at least $l$ elements. If $|M|+1=|N|$ and $S=\emptyset$, then using that for every $d\in \mathcal{C}^\mathbb{L}$, $ \psi_{\phi_m}(d)\subseteq \{a\in A: d\cap V^i_a\neq\emptyset\} $, we obtain that the $(m+1)$-st link of $\psi_{\phi_m}(\mathcal{C}^\mathbb{L})$ is equal to the $(m+1)$-st link of $\mathcal{C}^A$ (which in this case is equal to $N$). 
But we explicitly ruled out this happening. If $|M|+2\leq |N|$, we will show that $S$ is infinite. For this it will suffice to show that there is no smallest $d\in\mathcal{C}^{\mathbb{L}}$ (with respect to the inclusion) such that $N=\{a\in A: d\cap V_a^i\neq\emptyset\}$ and that simultaneously there is the smallest $d\in\mathcal{C}^{\mathbb{L}}$ such that $\psi_{\phi_m}(d)=M$. The second claim is clear, since $B_m$ is a clopen partition. For the first claim, suppose towards a contradiction that there is the smallest $d_s\in\mathcal{C}^\mathbb{L}$ such that $N=\{a\in A: d_s\cap V^i_a\neq\emptyset\}$. Clearly there is the largest $d_l\in\mathcal{C}^\mathbb{L}$ such that $M=\{a\in A: d_l\cap V^i_a\neq\emptyset\} $ and $d_l\subseteq d_s$. However, this contradicts the maximality of $\mathcal{C}^\mathbb{L}$. If $a_1\neq a_2\in N\setminus M$, then $d_s\setminus V^i_{a_2}$ is downwards closed, properly contained in $d_s$, and it properly contains $d_l$. Therefore we can find $c_1\subset \ldots\subset c_l\in\mathcal{C}^\mathbb{L}$ with $c_j\in S$. Let $M=M_0\subset\ldots\subset M_l=N$ be links in $\mathcal{C}^A$ such that $|M_{j+1}\setminus M_j|=1$. We can now successively define $ \phi_{m+1},\ldots, \phi_{m+l}$ in a way that $\psi_{\phi_{m+j}}(c_j)=M_j$ and the first $m+j$ links of the maximal chain $\psi_{\phi_{m+j}}(\mathcal{C}^\mathbb{L})$ are equal to the first $m+j$ links of $\mathcal{C}^A$, $j=1,\ldots, l$. To get $\phi_{m+j}$, we take $a\in M_j\setminus M_{j-1}$ and we pick $p\in P_{m+j-1}$ such that $p\cap c_j\neq \emptyset$, $p\cap V_a^i\neq\emptyset$ and $p\cap V_{a^-}^i\neq\emptyset$, where $a^-\in A$ is such that $R^A(a^-,a)$. We then partition $p$ into clopen sets $r$ and $s$ such that $s\subset V_a^i$ and $\phi_{m+j}$ with $P_{m+j}=(P_{m+j-1}\setminus \{p\})\cup \{r,s\}$ is an epimorphism. Then $\phi_{m+j}$ is as required. We are done with steps $m+1$,...,$m+l$. \end{proof} Denote $\psi_1=\psi_{\phi^1}$ and $\psi_2=\psi_{\phi^2}$. 
Property (L3) provides us with $\gamma\in{\rm Aut}(\mathbb{L}_c)$ such that $\psi_1=\psi_2\circ \gamma .$ We will show that $d_{\sup}(h,\gamma^*)<\epsilon$. Pick any $x\in \mathbb{L}$, let $a=\psi_1(x)$, and note that \[ \gamma^*(\pi(x))\in\gamma^*(\pi\circ\psi_1^{-1}(a))=\pi\circ\gamma\circ\psi_1^{-1}(a)=\pi\circ\psi_2^{-1}(a)\subset h(U_a).\] Therefore $\gamma^*(\pi(x))\in h(U_a)$, $h(\pi(x))\in h(U_a)$, and $\text{diam}(h(U_a))<\epsilon$, and we get the required conclusion. \end{proof} \subsection{The second description} Let $C$ be the Cantor set viewed as the middle third Cantor set in $[0,1]$. Each point of $C$ can be expanded in a ternary sequence $0.a_1a_2 a_3\ldots$, where $a_i\in\{0,2\}$ for every $i$, and each point of the interval $[0,1]$ can be expanded in a binary sequence $0.b_1b_2 b_3\ldots$, where $b_i\in\{0,1\}$ for every $i$. Let $\pi_0: C\to [0,1]$ be given by $\pi_0(0.a_1a_2 a_3\ldots)=0.b_1b_2 b_3\ldots$, where $b_i=0$ if $a_i=0$ and $b_i=1$ if $a_i=2$. We can view the Cantor fan $F$ as the union of segments joining the point $v=(\frac{1}{2},0)\in\mathbb{R}^2$ and a point $(c, 1)\in\mathbb{R}^2$, where $c\in C$. We first describe a topological $\mathcal{L}$-structure $\mathbb{F}$ such that $\pi(\mathbb{F})=F$, where $\pi$ is the continuous surjection such that $\pi(x)=\pi(y)$ if and only if $R^{\mathbb{F}}(x,y)$, and for every $(a,b)\in\mathbb{F}$ the second coordinate of $\pi((a,b))$ is equal to $\pi_0(b)$. For this we let $\mathbb{F}= F\cap (C\times C)$ and $R^{\mathbb{F}}((a,b),(c,d)) $ if and only if $a=c$ and $b=d$, or if $v$, $(a,b)$, and $(c,d)$ are collinear and $(b,d)$ is an interval removed from $[0,1]$ in the construction of $C$. We view the Lelek fan $L$ as a subset of $F$ with its root equal to $v$ and we notice that $\mathbb{L}$ is isomorphic to $\pi^{-1}(L)$. Let $m_0$ denote the metric on $L$, equal to the restriction of the Euclidean distance on $\mathbb{R}^2$. Let $d_0$ be the corresponding supremum metric on $H(L)$. 
It is a right-invariant metric and it induces a right-invariant metric $d$ on $H(L)/H(L_c)$ via \[ d(gH(L_c), hH(L_c))=\inf\{d_0(gk, h): k\in H(L_c)\}. \] Let $m$ be the metric on ${\rm Exp}({\rm Exp}(L))$ obtained from $m_0$. To be more specific, to get $m$, we first take the Hausdorff metric on ${\rm Exp}(L)$ with respect to $m_0$ and then we take the Hausdorff metric on ${\rm Exp}({\rm Exp}(L))$ with respect to that metric on ${\rm Exp}(L)$. Let $Y^*$ be the set of downwards closed maximal chains on $L$. Note that $\pi:\mathbb{L}\to L$ induces a continuous surjection from ${\rm Exp}(\mathbb{L})$ to ${\rm Exp}(L)$, which further induces a continuous surjection from ${\rm Exp}({\rm Exp}(\mathbb{L}))$ to ${\rm Exp}({\rm Exp}(L))$. Let $\pi_c: X^*\to Y^*$ be the restriction of this last map to $X^*$. Note that $\pi_c$ is onto. Indeed, take $\mathcal{D}\in Y^*$ and observe that $\pi_c^{-1}(\mathcal{D})$ is a downwards closed chain in $\mathbb{L}$. Using Zorn's Lemma, extend it to a downwards closed maximal chain $\mathcal{C}$, and note that $\pi_c(\mathcal{C})=\mathcal{D}$. Consider the $H(L)$-flow $H(L)\curvearrowright Y^*$ induced from the natural action of $H(L)$ on $L$ given by $(g,x)\to g(x)$. In Theorem \ref{umfhom2} we will show that this flow is isomorphic to the flow $H(L)\curvearrowright \widehat{H(L)/H(L_c)}$. The main ingredient of the proof will be Theorem \ref{equival}. \begin{thm}\label{umfhom2} The universal minimal flow of $H(L)$ is isomorphic to $H(L)\curvearrowright Y^*$. \end{thm} Let $G=H(L)$ and let $H=H(L_c)$. \begin{thm}\label{equival} The bijection \[ gH \ \to \ g\cdot \mathcal{C}^L \] is a uniform $G$-isomorphism from $G/H$ to $G\cdot \mathcal{C}^L$. \end{thm} Let us first see that Theorem \ref{equival} implies Theorem \ref{umfhom2}. 
\begin{proof}[Proof of Theorem \ref{umfhom2}] The uniform $G$-isomorphism $gH \to g\cdot \mathcal{C}^L$ from $G/H$ to $G\cdot \mathcal{C}^L$ extends to a uniform $G$-isomorphism $h$ between the completion $\widehat{G/H}$ of $G/H$ and the completion of $G\cdot \mathcal{C}^L$, which is the closure of $G\cdot \mathcal{C}^L$ in $Y^*$; we have to show that this closure is equal to $Y^*$. Let $f: \widehat{{\rm Aut}(\mathbb{L})/{\rm Aut}(\mathbb{L}_c)}\to X^*$ be the uniform $G$-isomorphism that extends the map $g {\rm Aut}(\mathbb{L}_c)\to g\cdot \mathcal{C}^\mathbb{L}$, and let $\rho$ be the extension of the map $g {\rm Aut}(\mathbb{L}_c)\to gH$ to the respective completions $\widehat{{\rm Aut}(\mathbb{L})/{\rm Aut}(\mathbb{L}_c)}$ and $\widehat{G/H}$. Let $\pi_c: X^*\to Y^*$, as before, be the map induced from $\pi:\mathbb{L}\to L$. As $\pi_c f= h\rho$ and $\pi_c$ is onto, we obtain that $h$ is onto. \end{proof} We define the {\em mesh} of a finite open cover $\mathcal{U}$ of $L$ to be the maximum of the diameters of the sets in $\mathcal{U}$ and denote it by ${\rm mesh}(\mathcal{U})$, and we define the {\em spread} of $\mathcal{U}$ to be the minimum of the distances between non-intersecting sets in $\mathcal{U}$ and denote it by ${\rm spr}(\mathcal{U})$. Let $A\in\mathcal{F}$ and let $\mu_0^A$ be the {\em path metric} on $A$ in which $\mu_0^A(x,y)$ is equal to the length of the shortest path joining $x$ and $y$ in the undirected graph obtained from $A$ via symmetrization of the relation $R^A$. The {\em length} of a path is understood to be the number of edges in the path. The metric $\mu_0^A$ on $A$ induces the Hausdorff metric $\mu_1^A$ on ${\rm Exp}(A)$, which in turn induces the Hausdorff metric $\mu^A=\mu_2^A$ on ${\rm Exp}({\rm Exp}(A))$. \noindent{\bf{Special covers:}} We will need the following covers $\mathcal{U}_l$ of $L$, $l=1,2,\ldots$. Let $J_i=[x_i,y_i]$, $i=0,1,\ldots, 2^l-1$, be the intervals we obtain in the $l$-th step of the construction of the middle third Cantor set. 
In particular, we have $y_i-x_i=\frac{1}{3^l}$. We assume that $y_i<x_{i+1}$, $i=0,1,\ldots, 2^l-2$. Let $b_j=\frac{j}{2^l}$, $j=0,1,\ldots, 2^l$. We let \[ \mathcal{U}_l =\left(\{P(J_i, b_j, b_{j+1}): i=0,1,\ldots, 2^l -1, j=1,\ldots, 2^l -1\}\setminus \{\emptyset\}\right)\cup \{ T(b_1) \}, \] where $P(J_i, b_j, b_{j+1})$ is the intersection of $L$ with the 4-gon determined by the lines $y=b_j$, $y=b_{j+1}$, the line passing through $v$ and $(x_i,1)$, and the line passing through $v$ and $(y_{i},1)$, and $T(b_1)$ is the intersection of $L$ with the triangle determined by the line $y=b_1$, the line passing through $v$ and $(0,1)$, and the line passing through $v$ and $(1,1)$. Any cover $\mathcal{U}_l$ obtained as above will be called a {\em special cover}. Note that, by the definition of $\pi$, for any $l$, the open cover $\mathcal{V}_l=\{\pi^{-1}(U): U\in\mathcal{U}_l\}$ consists of disjoint sets, and, as $l\to\infty$, ${\rm mesh}(\mathcal{U}_l)\to 0$ and ${\rm spr}(\mathcal{U}_l)\to 0$. For a special cover $\mathcal{U}_l$, let $A_l=(a_V)_{V\in\mathcal{V}_l}\in\mathcal{F}$ be such that the map $\theta_l:\mathbb{L}\to A_l$ given by $\theta_l(x)=a_V$ iff $x\in V$ is an epimorphism. We will call $\theta_l$ a {\em special epimorphism}. In the proof of Theorem \ref{equival}, we will use the following lemma. \begin{lemma}\label{nakr} Let $A, B\in\mathcal{F}$, let $K$ be the number of elements in $A$, and let $\phi: B\to A$ be an epimorphism such that for every branch $b$ in $B$ and $a\in A$, if $\phi^{-1}(a)\cap b$ does not contain the endpoint of $b$ then it has at least $2K+1$ elements. Let $\mathcal{C}$ and $\mathcal{D}$ be maximal chains on $B$ such that $\mu^B(\mathcal{C},\mathcal{D})\leq 1$. 
Then there is an epimorphism $\psi: B\to A$ such that the maximal chains $\psi(\mathcal{C})$ and $\psi(\mathcal{D})$ are equal and for every $a\in A$ \[\psi^{-1}(a)\subset \phi^{-1}(a)\cup \bigcup_{a'\in {\rm is}(a)} \phi^{-1}(a'),\] where ${\rm is}(a)$ denotes the set of immediate successors of $a$ (and it is a singleton unless $a$ is the root). \end{lemma} \begin{proof} We will construct $\psi$ by induction on $n\leq K$. In step $n$ we will construct $\psi_n:B\to A$ and a downwards closed set $D_n$ of $A$ such that the first $n$ links in both $\psi_n(\mathcal{C})$ and $\psi_n(\mathcal{D})$ are equal to $D_1\subset\ldots\subset D_n$. Moreover, for any $a\in A$ and $n\leq K$ it will hold \begin{equation*} \begin{split} \psi_{n}^{-1}(a)\subset \phi^{-1}(a)\cup &\{x: \exists_{ y_0,\ldots, y_m} \text{ such that } m\leq 2(n-1), \\ & y_0\in \phi^{-1}(a), x=y_m \text{ and } \forall_{i<m} R^B(y_i, y_{i+1})\}. \end{split} \end{equation*} Note that this last containment implies that: \noindent $(\triangle_n)$ \ \ \ for every branch $b$ in $B$, and $a\in A$, if $\psi_n^{-1}(a)\cap b$ does not contain the endpoint of $b$ then it has at least $2(K-n+1)+1$ elements. Then we set $\psi=\psi_K$ and we will have $\psi(\mathcal{C})=\psi(\mathcal{D})=\{D_n: n\leq K\}.$ This $\psi$ will be as required. Observe that by the definition of the metric $\mu^B$, $\mu^B(\mathcal{C},\mathcal{D})\leq 1$ means: for every $C\in \mathcal{C}$ there is $D\in \mathcal{D}$ such that $\mu_1^B(C,D)\leq 1$ and for every $D\in \mathcal{D}$ there is $C\in \mathcal{C}$ such that $\mu_1^B(C,D)\leq 1$. {\bf{ Step 1.}} Take $\psi_1=\phi$ and let $D_1$ be the set whose only element is the root of $A$. {\bf{ Step $\mathbf{n+1}$.}} Let $E=\psi_n^{-1}(D_n)$, $E$ is clearly downwards closed. 
Let $C\in\mathcal{C}$ be the least (with respect to containment) such that there is a branch $b=(b^i)$ in $B$ and $i_0$ such that $p=b^{i_0}\notin E$ and $p, q=b^{i_0+1}\in C$, and let $D\in\mathcal{D}$ be such that $\mu_1^B(C,D)\leq 1$. Take $\psi_{n+1}$ such that $\psi_{n+1}(x)=\psi_n(x)$ for $x\in E\cup(B\setminus (C\cup D))\cup b$. For $x\in (C\cup D)\setminus (E\cup b)$ let $c=(c^j)$ be the branch in $B$ such that for some $j_0$, we have $x=c^{j_0}$, and let $\psi_{n+1}(x)=\psi_n(z)$, where $z\in E\cap c$ is the largest with respect to the tree order on $B$. Take $D_{n+1}=D_n\cup\{\psi_n(p)\}$. Note that $b\cap ((C\cup D)\setminus E)$ consists of two or three elements; in fact, it is equal to $\{p,q\}$ or to $\{p,q, r\}$, where $r$ is the immediate successor of $q$. Further, by $(\triangle_n)$, $\psi_{n+1}$ is constant on $b\cap ((C\cup D)\setminus E)$. \end{proof} \begin{proof}[Proof of Theorem \ref{equival}] It is easy to see that $gH\to g\cdot \mathcal{C}^L$ is uniformly continuous. It is straightforward, by the definitions of the metrics $d$ and $m$, that if $d(gH, hH)<\epsilon$ then $m(g\cdot \mathcal{C}^L, h\cdot \mathcal{C}^L)<\epsilon$. For the opposite direction, we show that for every $\epsilon$ there is $\delta$ such that for every $g,h\in G$, whenever $m(g\cdot \mathcal{C}^L, h\cdot \mathcal{C}^L)<\delta$, then $d(gH, hH)<3\epsilon$, that is, for some $k\in H$ we have $d_0(gk, h)<3\epsilon$. This direction requires more work; we will use Lemma \ref{nakr} and the ultrahomogeneity of $\mathbb{L}_c$. Pick $\epsilon>0$ and let $l$ be such that ${\rm mesh}(\mathcal{U})<\epsilon$, where $\mathcal{U}$ is the cover $\{U_a\cup\bigcup_{b\in{\rm is}(a)} U_b : a\in A_l\}$ obtained from the cover $\mathcal{U}_l=(U_a)_{a\in A_l}$. Let $K=|A_l|$ and let $k\geq (2K+1)l$. Let $\theta_l:\mathbb{L}\to A_l$ and $\theta_k:\mathbb{L}\to A_k$ be special epimorphisms, and let $\phi: A_k\to A_l$ be the epimorphism such that $\phi\theta_k=\theta_l$. Denote $A=A_l$ and $B=A_k$. 
Take $\delta={\rm spr}(\mathcal{U}_k)$ and suppose that $m(g\cdot \mathcal{C}^L, h\cdot \mathcal{C}^L)<\delta$. Suppose first that there are $g_0, h_0\in {\rm Aut}(\mathbb{L})$ such that $g=\pi^*(g_0)\in H(L)$ and $h=\pi^*(h_0)\in H(L)$. Since $m(g\cdot \mathcal{C}^L, h\cdot \mathcal{C}^L)<{\rm spr}(\mathcal{U}_k)$, we obtain $\mu^B(\theta_k(g_0\cdot\mathcal{C}^{\mathbb{L}}), \theta_k(h_0\cdot\mathcal{C}^{\mathbb{L}}) )\leq 1$. Applying Lemma \ref{nakr} to $\phi:B\to A$, $\mathcal{C}=\theta_k(h_0\cdot\mathcal{C}^{\mathbb{L}})$ and $\mathcal{D}=\theta_k(g_0\cdot\mathcal{C}^{\mathbb{L}})$, we get an epimorphism $\psi: B\to A$ such that $\psi(\mathcal{C})=\psi(\mathcal{D})=:\mathcal{C}^A$. The conclusion of Lemma \ref{nakr} implies that $\psi\theta_k: (\mathbb{L}, h_0\cdot\mathcal{C}^{\mathbb{L}})\to (A,\mathcal{C}^A)$ and $\psi\theta_k: (\mathbb{L}, g_0\cdot\mathcal{C}^{\mathbb{L}})\to (A,\mathcal{C}^A)$ are epimorphisms, as well as that $\psi^{-1}(a)\subset \phi^{-1}(a)\cup \bigcup_{b\in {\rm is}(a)} \phi^{-1}(b)$ for every $a\in A$, and hence ${\rm mesh}(\mathcal{U}_0)\leq {\rm mesh}(\mathcal{U})<\epsilon$, where $\mathcal{U}_0=\{\pi(( \psi\theta_k)^{-1}(a)): a\in A\}.$ From the ultrahomogeneity of $\mathbb{L}_c$ applied to the epimorphisms $\psi\theta_k g_0:(\mathbb{L}, \mathcal{C}^\mathbb{L})\to (A,\mathcal{C}^A)$ and $\psi\theta_k h_0:(\mathbb{L}, \mathcal{C}^\mathbb{L})\to (A,\mathcal{C}^A)$, we get $k_0\in {\rm Aut}(\mathbb{L}_c)$ such that $\psi\theta_k g_0 k_0=\psi\theta_k h_0$, that is, for every $x\in\mathbb{L}$, $g_0k_0(x)$ and $h_0(x)$ are in the same set of the cover $\{(\psi\theta_k)^{-1}(a): a\in A\}$ of $\mathbb{L}$. This implies that, denoting $k=\pi^*(k_0)$, for every $y\in L$, $gk(y)$ and $h(y)$ are in the same set of the cover $\mathcal{U}_0$, and therefore, since ${\rm mesh}(\mathcal{U}_0)<\epsilon$, we get $d_0(gk, h)<\epsilon$, as needed. 
In general, if $g,h\in H(L)$, take $g',h'\in \pi^*({\rm Aut}(\mathbb{L}))\subset H(L)$ such that $d_0(g,g')<\epsilon$ and $d_0(h,h')<\epsilon$. Then, if $k$ is such that $d_0(g'k,h')<\epsilon$, using the right-invariance of $d_0$ and the triangle inequality, we obtain $d_0(gk,h)<3\epsilon$. \end{proof}
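The iterated Hausdorff construction $\mu_0^A\to\mu_1^A\to\mu^A=\mu_2^A$ used above is purely finitary, so it can be checked on small examples. The following Python sketch (function names are ours, purely illustrative) computes the path metric of a symmetrized tree relation and then iterates the Hausdorff distance twice:

```python
def path_metric(adj, x, y):
    """mu_0: breadth-first-search distance in the undirected graph obtained
    by symmetrizing a tree relation (adj maps a vertex to its neighbours)."""
    if x == y:
        return 0
    frontier, seen, d = {x}, {x}, 0
    while frontier:
        d += 1
        frontier = {w for v in frontier for w in adj[v]} - seen
        if y in frontier:
            return d
        seen |= frontier
    raise ValueError("vertices lie in different components")

def hausdorff(dist, A, B):
    """Hausdorff distance between finite non-empty collections A, B
    with respect to the base metric `dist`."""
    return max(
        max(min(dist(a, b) for b in B) for a in A),
        max(min(dist(a, b) for a in A) for b in B),
    )

# Example: tree with root 0 and two leaves 1, 2.
adj = {0: {1, 2}, 1: {0}, 2: {0}}
mu0 = lambda x, y: path_metric(adj, x, y)
mu1 = lambda C, D: hausdorff(mu0, C, D)   # metric on Exp(A)
mu2 = lambda X, Y: hausdorff(mu1, X, Y)   # metric on Exp(Exp(A))
```

For this three-element tree one gets $\mu_0(1,2)=2$, and the chains $\{\{0\},\{0,1\}\}$ and $\{\{0\},\{0,2\}\}$ are at $\mu_2$-distance $1$.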
\section{Introduction} Discs in late-type galaxies contain two distinct dynamical populations, the ``cold'' thin disc and the ``hot'' thick disc, as found in the Milky Way \citep[MW; e.g.][]{Gilmore83} and nearby edge-on galaxies \citep{Yoachim06, Comeron19}. The thin disc stars are younger, with rotational velocities close to that of the collisional gas \citep{Roberts66}. The thick disc stars are older; through dynamical heating, via secular evolution of the disc \citep{Sellwood14} or mergers with satellite galaxies \citep{Quinn86}, their rotational velocity decreases and their velocity dispersion increases. In the solar neighborhood, velocity dispersion appears to increase monotonically with age \citep{Delhaye65, Casagrande11}, but it is not known if that is representative of the entire MW disc. A major part of the MW thick disc is also found to be chemically distinct from its dominant thin disc \citep[see review by][]{bhg16}. While mergers of satellite galaxies can dynamically heat galactic discs, the cold MW disc does not seem to have undergone any major merger event in the last 10 Gyr. The Andromeda (M31) galaxy is the closest giant spiral galaxy to the MW. A number of substructures have been observed in the inner halo of M31 \citep{mcc18}, which may have resulted from a recent merger \citep{Fardal13,ham18}. This may also be linked to an observed burst of star formation $\sim2$ Gyr ago \citep{bernard15,wil17}. It is well known that the velocity dispersion of a disc stellar population increases with age \citep{Stromberg25,Wielen77}. \citet[][hereafter D15]{dorman15} estimated the age-velocity dispersion relation (AVR) for the M31 disc with kinematics of stars from the SPLASH spectroscopic survey \citep{Guha05, Guha06}. They assigned ages to stars based on their position on the colour-magnitude diagram (CMD, see their Figure 6) from the PHAT survey \citep{dal12}. 
While their main-sequence (MS) stars are well separated in this CMD, the red giant branch (RGB) and asymptotic giant branch (AGB) loci overlap, resulting in a more ambiguous age separation. They constructed the line-of-sight (LOS) velocity dispersion ($\rm\sigma_{LOS}$) versus radius profiles for the different populations. Their Figure 16 shows a general trend of increasing $\rm\sigma_{LOS}$ with age, although with significant overlap of these profiles among populations and radii. On the basis of this trend, they state that the RGB population in M31 has a velocity dispersion that is nearly three times that of the MW. Planetary Nebulae (PNe) are discrete tracers of stellar populations and their kinematics have been measured in galaxies of different morphological types \citep[e.g.][]{coc09, cor13, pul18, Aniyan18}. PNe in M31 have negligible contamination from MW PNe or background galaxies \citep{Bh+19}. A number of PN properties are related to the mass, luminosity and ages of their progenitor stars. For example, from the central star properties derived from modelling nebular emission lines of PNe in the Magellanic clouds and M31, \citet{Ciardullo99} find a correlation between PNe circumstellar extinction and their central-star masses. Dust production of stars in the AGB phase scales exponentially with their initial progenitor masses in the $1$--$2.5\,M_{\odot}$ range, after which it remains roughly constant \citep{Ventura14}. Additionally, PNe with dusty high-mass progenitors evolve faster \citep{MillerB16} and so their circumstellar matter has little time to disperse, while PNe with lower central star masses evolve sufficiently slowly that a larger fraction of dust is dissipated from their envelopes \citep{Ciardullo99}. Kinematics of young and old stellar populations are well traced by high- and low-mass giant stars, respectively, in the MW through their rotational velocity and velocity dispersion \citep[e.g.][]{Aniyan16}. 
In the M31 disc, different kinematics of younger and older stellar populations are expected to correlate with high- and low-extinction PNe respectively. \begin{figure}[t] \centering \includegraphics[width=\columnwidth,angle=0]{av_hist.png} \caption{Histogram of the extinction values for the \citetalias{san12} and Bh+19b PNe. The high- and low-extinction PNe lie in the blue and red shaded regions respectively. The distribution of extinction values of the LMC PNe observed by \citet{rp10}, shifted such that its peak corresponds to that of M31, is shown in yellow.} \label{fig:av_hist} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth,angle=0]{spat_pos.png} \caption{Position on sky of \citetalias{san12} PNe (squares) and Bh+19b PNe (circles). The high-extinction PNe are shown in blue while the low-extinction PNe are shown in red. The PNe are divided into elliptical bins to obtain rotation curves.} \label{fig:spat_pos} \end{figure} In this letter, we identify PNe populations based on their extinction for the first time, allowing us to obtain two distinct age populations in the M31 disc. The data used in this work are described in Section~\ref{sect:data}. In Section~\ref{sect:ana}, we first discuss the classification of PNe based on their extinction values. We then obtain the rotational velocity curve and rotational velocity dispersion for the M31 disc high- and low-extinction PNe. We assign ages to the two PNe populations by comparing modelled central star properties in \citet[][hereafter Kw+12]{Kw12} to the \citet{MillerB16} stellar evolution tracks. We then obtain the AVR for the M31 disc in Section~\ref{sect:avr} and compare it with previous determinations in M31 and the MW. From a comparison with simulated galaxies, we estimate the mass ratio of a possible merger event in the M31 disc. We summarise our results and conclude in Section~\ref{sect:sum}. 
\section{Data description} \label{sect:data} \citet{Bh+19} identified PNe candidates in a 16 sq. deg. imaging survey of M31 with MegaCam at the CFHT, covering the disc and inner halo. Spectroscopic observations of a complete subsample of these PNe candidates were carried out with the Hectospec multifiber positioner and spectrograph on the Multiple Mirror Telescope \citep[MMT;][]{fab05}. The Hectospec 270 gpm grating was used and provided spectral coverage from 3650 to 9200 $\AA$ at a resolution of $\sim5~\AA$. The Hectospec fibers each subtend 1.5$\arcsec$ on the sky and were positioned on the PNe candidates in each field. On September 15, 2018, and October 10, 2018, with an exposure time of 9000 seconds each, two fields in the south-west region of the M31 disc were observed, while on December 4, 2018, with an exposure time of 3600 seconds, one field covering the northern part of the M31 disc and the Northern Spur substructure was observed. Of the 343 observed PNe candidates, 129 had confirmed detection of the [\ion{O}{iii}] 4959/5007 $\AA$ emission-lines. Of these observed PNe, 92 showed the H$\beta$ line and their extinctions (A$_V$) could be determined from the Balmer decrement. Details of the spectroscopic observations of the PNe, along with the extinction determination and chemical abundances, will be presented in a forthcoming paper (Bhattacharya et al. 2019b in preparation, hereafter Bh+19b). \citet[][hereafter San+12]{san12} also studied PNe and \ion{H}{ii} regions in the M31 disc and outer bulge using Hectospec on the MMT. They observed 407 PNe, 321 of which had the H$\beta$ line detected and hence reliable extinction measurements. The combined sample of PNe with extinction measurements in M31 from \citetalias{san12} and Bh+19b thus consists of 413 PNe. \begin{figure}[t] \centering \includegraphics[width=\columnwidth,angle=0]{vrot.png} \caption{Rotational velocity of the high- and low-extinction PNe, shown in blue and red respectively. 
The black line shows the \ion{H}{i} rotation velocity from \citet{Chemin09}. The co-rotation radius (black dotted line) and outer Lindblad resonance (OLR; black dashed line) of the M31 bar are as found by the models of \citet{Blana18}. The grey shaded region is possibly influenced by different dynamical heating events and is not discussed here.} \label{fig:vrot} \end{figure} \section{Analysis} \label{sect:ana} \subsection{Classification of Planetary Nebulae based on extinction measurements} \label{sect:extb} The distribution of the M31 PNe extinction values (see Figure~\ref{fig:av_hist}) exhibits a sharp drop at A$\rm_V = 0.75$ mag, increases again at A$\rm_V =$1--1.25 mag, and drops off gradually at larger values of A$\rm_V$. Figure~\ref{fig:av_hist} also shows the distribution of the LMC PNe extinction values \citep{rp10}, shifted such that their peak (originally in the A$\rm_V =$ 0.75--1 mag bin) is coincident with that of the M31 PNe extinction values (A$\rm_V =$ 0.25--0.5 mag bin). The shifted distribution of the LMC PNe extinction values also shows a sharp drop at A$\rm_V = 0.75$ mag and gradually falls off, while that of the M31 disc PNe shows a secondary peak at A$\rm_V=$1--1.25 mag. The distribution of M31 PNe extinction values around the first, higher peak possibly results from an older parent stellar population (numerically more prevalent) spawning PNe with lower circumstellar extinction values (see further discussion in Section~\ref{sect:age}), while the secondary peak at higher circumstellar extinction values indicates the presence of a younger parent stellar population. We thus classify M31 PNe with extinction values higher and lower than A$\rm_V = 0.75$ mag as high- and low-extinction PNe respectively. Our PNe sample is then divided into 145 high- and 268 low-extinction PNe, which are expected to be associated with younger and older parent stellar populations respectively. 
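The extinction values behind this classification come from the Balmer decrement (Section~\ref{sect:data}); a minimal sketch of that estimate and of the A$\rm_V = 0.75$ mag split is given below. The Case~B intrinsic ratio of 2.86, the extinction-curve values $k({\rm H}\alpha)=2.53$ and $k({\rm H}\beta)=3.61$, and $R_V=3.1$ are standard assumptions of ours, not values quoted in this letter:

```python
import math

# Assumed constants (ours, not from the paper): Case B intrinsic
# Halpha/Hbeta ratio and Cardelli-type extinction-curve values, R_V = 3.1.
INTRINSIC_RATIO = 2.86
K_HALPHA, K_HBETA, R_V = 2.53, 3.61, 3.1

def balmer_av(f_halpha, f_hbeta):
    """A_V (mag) from the observed Halpha/Hbeta flux ratio (Balmer decrement)."""
    ebv = 2.5 / (K_HBETA - K_HALPHA) * math.log10((f_halpha / f_hbeta) / INTRINSIC_RATIO)
    return R_V * max(ebv, 0.0)  # clip unphysical negative extinction to zero

def classify(av, threshold=0.75):
    """Split at A_V = 0.75 mag into the two PN populations of Section 3.1."""
    return "high" if av > threshold else "low"
```

An observed ratio equal to the intrinsic one gives A$\rm_V = 0$ (low-extinction), while a ratio of 4 gives A$\rm_V \approx 1$ mag (high-extinction).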
We note that using a different extinction value within the A$\rm_V = 0.65$--$0.85$ mag range for the classification of the two PN populations has a negligible effect on the rotation curves obtained in Section~\ref{sect:rot}. The high-extinction PNe classification is not biased by the LOS dust attenuation in M31, as per our investigation in Appendix~\ref{app:dust}. Figure~\ref{fig:spat_pos} shows the spatial distribution of the PNe in the M31 disc. \subsection{Rotation curves} \label{sect:rot} For both \citetalias{san12} and Bh+19b PNe, the LOS velocities (LOSV) are obtained from full spectral fitting, resulting in an uncertainty of 3 km s$^{-1}$. The PNe are de-projected on to the galaxy plane based on the position angle (PA = 38$^\circ$) and inclination (i = $77^\circ$) of M31 in the planar disc approximation. They are then binned into seven elliptical bins (Figure~\ref{fig:spat_pos}), with the first six bins covering 3 kpc each, starting at a deprojected major axis radius R$\rm_{GC}= 2$ kpc from the center of M31, and the final bin covering R$\rm_{GC}=$ 20--30 kpc. PNe observed outside R$\rm_{GC}=$ 30 kpc probably belong to the inner halo substructures, possibly the Northern Spur, and are hence not included in the analysis. The position of the PNe in each bin can be described using cylindrical coordinates, with the \textit{z} = 0 kpc plane as the local plane of the galaxy, \textit{r} = 0 kpc as the galactic center, and $\phi$ measured counterclockwise from the position angle of M31. 
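The planar-disc deprojection described above can be sketched as follows. Only the PA and inclination are taken from the text; the axis orientation and sign conventions in this snippet are illustrative assumptions of ours:

```python
import math

PA_DEG, INC_DEG = 38.0, 77.0  # position angle and inclination of M31 (see text)

def deproject(x, y, pa_deg=PA_DEG, inc_deg=INC_DEG):
    """Deproject sky-plane offsets (x east, y north; e.g. in kpc) onto the
    galaxy plane in the planar-disc approximation.  Returns (R_gc, phi),
    with phi measured from the position angle.  Orientation conventions
    here are illustrative assumptions, not taken from the paper."""
    pa, inc = math.radians(pa_deg), math.radians(inc_deg)
    xp = x * math.sin(pa) + y * math.cos(pa)    # along the apparent major axis
    yp = -x * math.cos(pa) + y * math.sin(pa)   # along the apparent minor axis
    yg = yp / math.cos(inc)                     # undo inclination foreshortening
    return math.hypot(xp, yg), math.atan2(yg, xp)
```

A point one unit along the major axis keeps its radius, while a point one unit along the apparent minor axis is stretched by $1/\cos(77^\circ)\approx 4.4$, which is why projected positions must be corrected before binning in R$\rm_{GC}$.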
The LOSV for the PNe, $V\rm_{LOS}$, in each bin is then fitted by the following equation: \begin{equation} \label{eq:vel} V_{\rm LOS} = V_{\rm sys} + V_{\phi}\,\cos(\phi)\,\sin(i) + V_{R}\,\sin(\phi)\,\sin(i) + V_{\rm err} \end{equation} where $V\rm_{sys}$ is the systemic velocity of M31, assumed to be $-309$ km s$^{-1}$ \citep{merrett06}; $V\rm_{\phi}$ is the rotational velocity in the plane of the galaxy; $V\rm_{R}$ is the radial streaming motion that can be inwards or outwards; $\rm i$ is the inclination of M31 mentioned previously; and $V\rm_{err}= 3 $ km s$^{-1}$ is the uncertainty in measurement. LOSVs for the high- and low-extinction PNe are fitted separately in each elliptical bin using LMFIT \citep{lmfit} to obtain $V\rm_{\phi}$, $V\rm_{R}$ and $\rm\phi$ as the parameters describing the mean motion of the PNe populations in each bin. We note that $V\rm_{Z}$, the off-plane motion in the \textit{z} direction, is considered to be zero, as no net off-plane motion is expected for PNe in the disc. \begin{figure}[t] \centering \includegraphics[width=\columnwidth,angle=0]{sigma_rot.png} \caption{Rotational velocity dispersion of the high- and low-extinction PNe, shown in blue and red respectively. The black lines and grey shaded region are the same as in Figure~\ref{fig:vrot}. } \label{fig:sigrot} \end{figure} The obtained $V\rm_{\phi}$ rotation curves for the high- and low-extinction PNe are shown in blue and red respectively in Figure~\ref{fig:vrot}. The uncertainty in the fitted $V\rm_{R}$ is quite high and its values are close to zero in each bin. Thus, no clear evidence of radial streaming motion is found in either PNe population. Setting $V\rm_{R}=0$ km s$^{-1}$ also has a negligible effect on the rotation curves. The difference in rotational velocities between the gas and the stellar population in a disc is a measure of the asymmetric drift; it is larger for older stellar populations, which have more non-circular orbits as a result of dynamical heating \citep{Str46}. 
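For fixed $\phi$, the $V\rm_{LOS}$ model above is linear in ($V\rm_{\phi}$, $V\rm_{R}$), so a plain linear least-squares solve can stand in for the LMFIT minimisation as a sketch of the per-bin fit; the $\rm\sigma_{\phi}$ estimate below is one simple reading of "standard deviation with respect to the fitted $V\rm_{\phi}$", which is our assumption rather than the paper's exact procedure:

```python
import numpy as np

V_SYS = -309.0                       # systemic velocity of M31 (km/s, see text)
SIN_I = np.sin(np.radians(77.0))     # sin of the inclination

def fit_bin(v_los, phi):
    """Least-squares fit of V_LOS = V_sys + V_phi cos(phi) sin(i)
    + V_R sin(phi) sin(i) for the PNe of one elliptical bin.
    Returns (V_phi, V_R, sigma_phi).  Bins with phi near +-pi/2
    would need care, since cos(phi) ~ 0 there."""
    phi = np.asarray(phi)
    A = np.column_stack([np.cos(phi) * SIN_I, np.sin(phi) * SIN_I])
    b = np.asarray(v_los) - V_SYS
    (v_phi, v_r), *_ = np.linalg.lstsq(A, b, rcond=None)
    # Per-object rotational velocities; their scatter estimates sigma_phi.
    v_phi_i = (b - v_r * A[:, 1]) / A[:, 0]
    return v_phi, v_r, float(np.std(v_phi_i))
```

On synthetic data drawn exactly from the model, the fit recovers the input $V\rm_{\phi}$ and $V\rm_{R}$ with zero scatter.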
Outside R$\rm_{GC}= 14$ kpc, the high-extinction PNe have a rotational velocity closer to that of the \ion{H}{i} gas derived by \citet{Chemin09}, indicative of a dynamically young population, while that of the low-extinction PNe is much further away from that of the \ion{H}{i} gas, indicative of a dynamically older population \citep{Str46}. \begin{figure}[t] \centering \includegraphics[width=\columnwidth,angle=0]{kwit_ages.png} \caption{The high- and low-extinction PNe observed by \citetalias{Kw12} are shown in blue and red respectively in the $\rm log(L/L_{\odot})$ vs. $\rm log(T_{eff})$ plot. The stellar evolution tracks from \citet{MillerB16} corresponding to metallicity $\rm Z_{0}=0.01$ are shown in black. The initial stellar mass and $\rm\tau_{MS+AGB}$ are also labeled.} \label{fig:kwit_ages} \end{figure} We estimate the rotational velocity dispersion, $\rm\sigma_{\phi}$, as the standard deviation with respect to the fitted $V\rm_{\phi}$ in each bin. The $\rm\sigma_{\phi}$ profiles for the high- and low-extinction PNe are shown in Figure~\ref{fig:sigrot}. Outside R$\rm_{GC}= 14$ kpc, $\rm\sigma_{\phi}$ is lower for the high-extinction PNe, as expected for a dynamically young population, than for the low-extinction PNe, a dynamically older population. $\rm\sigma_{\phi}$ for the low-extinction PNe population increases sharply in the outermost bin. This may be due to the presence of PNe associated with the M31 inner halo substructures like the Northern Spur or the NGC 205 loop at this distance from the M31 center. Within 14 kpc, both the high- and low-extinction PNe samples show an overall reversal in the $V\rm_{\phi}$ rotation curves and the $\rm\sigma_{\phi}$ profiles, but both populations are dynamically hot.
While this may be linked to the interaction of the disc with the bar in M31, as modelled by \citet{Blana18} for the inner two bins, other sources of dynamical heating may be at play for R$\rm_{GC}= 8-14$ kpc, stemming either from the secular evolution of the disc and/or from a merger event. This will be investigated in a forthcoming paper (Bhattacharya et al. 2019b in preparation). Given the large values of $\rm\sigma_{\phi} \approx 130 $ km s$^{-1}$ for the low-extinction PNe, their parent stellar population may be distributed as a flattened spheroid, rather than a planar disc. Given the inclination of the M31 disc, deprojecting these PNe as a planar disc may result in an overestimate of their R$\rm_{GC}$ values, leading to a bias in the estimated $\rm\sigma_{\phi}$. We investigate the effect of disc thickness in Appendix~\ref{app:scale}. We find that the scale height of the low-extinction PNe is H$\rm_{Low~ext} \approx 0.86$ kpc. Within our 3 kpc bin sizes, only $\sim10\%$ of the low-extinction PNe may be included in a different bin. The effect on the estimated $\rm\sigma_{\phi}$ values of these $\sim10\%$ PNe in different bins is within the measurement uncertainties. \subsection{Ages of the M31 disc Planetary Nebulae} \label{sect:age} \citetalias{Kw12} observed sixteen PNe in the outer disc of M31 to measure various emission lines and determine chemical abundances. They used the CLOUDY photoionization code \citep{cloudy} to estimate the bolometric luminosity ($\rm L/L_{\odot}$) and effective temperature ($T\rm_{eff}$) of the central stars of these PNe. Figure~\ref{fig:kwit_ages} shows their estimated $\rm log(L/L_{\odot})$ vs. $\rm log(T_{eff})$, coloured by their extinction classification (high-extinction: blue; low-extinction: red). The post-AGB stellar evolution tracks from \citet{MillerB16} for a metallicity $Z\rm_{0}=0.01$ are also plotted in Figure~\ref{fig:kwit_ages}.
It is clear that the high-extinction PNe in this sub-sample lie either around the tracks corresponding to an initial progenitor mass of 1.5 $M\rm_{\odot}$ and age ($\rm\tau_{MS+AGB}$; lifetime in main-sequence and AGB phases) of 2.3 Gyr or are even younger with higher initial progenitor masses. The low-extinction PNe in this sub-sample, barring one, are older than 4.2 Gyr with initial progenitor mass lower than 1.25 $M\rm_{\odot}$. We note that these ages may be uncertain by up to $\sim1$ Gyr based on the estimates of \citetalias{Kw12}. We may thus assign the mean ages corresponding to the \citetalias{Kw12} high- ($\sim2.5$ Gyr) and low- ($\sim4.5$ Gyr) extinction PNe to those with the corresponding extinction values in the \citetalias{san12} and Bh+19b PNe population. \begin{figure}[t] \centering \includegraphics[width=\columnwidth,angle=0]{age_vd_mw.png} \caption{The AVR of PNe in the M31 disc at R$\rm_{GC}=$ 14--17 and 17--20 kpc is shown in magenta and cyan respectively. The assigned age is shown in log scale with the MS \citepalias[$\sim30$ Myr age;][]{dorman15} in the R$\rm_{GC}=$ 14--17 kpc bin shown at 0.8 Gyr for visual clarity. The AVR obtained in the solar neighbourhood in the MW \citep{Nord04} is shown in grey for comparison. Their total velocity dispersion is shown with squares while the velocity dispersion in the space velocity components (\textit{U, V, W}) are shown with filled circles, open circles and filled triangles respectively. We also present, with open triangles, the velocity dispersion in the W component from \citet{Aniyan18} for only those MW stars with $\rm [Fe/H] < -0.3$, showing a flattening in the MW AVR at older ages. } \label{fig:age_vd_mw} \end{figure} \section{Age-velocity dispersion relation} \label{sect:avr} \subsection{The observed age-velocity dispersion relation in M31} We obtain the AVR in M31 for two elliptical bins with R$\rm_{GC}=$ 14--17 and 17--20 kpc.
They are presented in Figure~\ref{fig:age_vd_mw}, clearly showing the increase in the velocity dispersion with age. In Figure~\ref{fig:age_vd_mw}, we also present the age-velocity dispersion value for the MS ($\sim30$ Myr age), $\rm \sigma_{\phi,~MS}= 30\pm 10$ km s$^{-1}$, obtained by \citetalias{dorman15} in the R$\rm_{GC}=$ 14--17 kpc bin. Based on models fitting the star formation rate, gas profiles and metallicity distributions of the M31 and MW discs, \citet{Yin09} find that R$\rm_{GC}$=19 kpc in the M31 disc is the equivalent distance, in disc scale lengths, of the Sun (R$\rm_{\odot}$=8 kpc) in the MW disc. We thus compare the velocity dispersion of the MW disc obtained in the solar neighbourhood by \citet{Nord04} to our $\rm\sigma_{\phi}$ in the R$\rm_{GC}$=17--20 kpc bin, where $\rm\sigma_{\phi,~2.5~Gyr}=\rm\sigma_{\phi,~High~ext}= 61\pm 14$ km s$^{-1}$ and $\rm\sigma_{\phi,~4.5~Gyr}=\rm\sigma_{\phi,~Low~ext}= 101\pm 13$ km s$^{-1}$. \citet{Nord04} describe the MW velocity dispersion in space-velocity components (\textit{U, V, W}), defined in a right-handed Galactic system with \textit{U} pointing towards the Galactic centre, \textit{V} in the direction of rotation, and \textit{W} towards the north Galactic pole (Figure~\ref{fig:age_vd_mw}). The equivalent in the MW disc for the $\rm\sigma_{\phi}$ in the M31 disc would be some combination of $\rm\sigma_{MW, U}$ and $\rm\sigma_{MW, V}$ with a value intermediate between the two \citepalias{dorman15}. We compare our obtained $\rm\sigma_{\phi}$ of the M31 disc with the $\rm\sigma_{MW, U}$, which is $\sim29$ km s$^{-1}$ and $\sim35$ km s$^{-1}$ for 2.5 Gyr and 4.5 Gyr old populations respectively. In the R$\rm_{GC}$=17--20 kpc bin, the $\rm\sigma_{\phi}$ of the 2.5 Gyr and 4.5 Gyr old population in M31 are about twice and thrice that of the $\rm\sigma_{MW, U}$ of the 2.5 Gyr and 4.5 Gyr old MW thin disc population respectively. 
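The "twice and thrice" comparison above follows directly from the quoted dispersions; a minimal numeric check, using the values stated in the text:

```python
# M31 sigma_phi in the R_GC = 17-20 kpc bin vs MW sigma_U at matched ages (km/s)
ratio_2p5 = 61.0 / 29.0   # 2.5 Gyr old populations: M31 about twice as hot
ratio_4p5 = 101.0 / 35.0  # 4.5 Gyr old populations: M31 about three times as hot
```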
\subsection{Comparison with previously measured and simulated age-velocity dispersion relations} \label{sect:dis} The AVR in M31 was previously estimated by \citetalias{dorman15} from the $\rm\sigma_{LOS}$ of stars whose classification in different age bins suffered from ambiguity. Their observations were also limited to the PHAT survey footprint, covering about a quarter of the M31 disc along its major axis out to R$\rm_{GC}\sim 18$ kpc. Our observed PNe sample covers the entire M31 disc out to R$\rm_{GC}= 30$ kpc, and the high- and low-extinction PNe are well separated in age (Figure~\ref{fig:kwit_ages}). The $\rm\sigma_{\phi}$ values for the high- and low-extinction PNe agree within errors with that obtained by \citetalias{dorman15} for older AGB ($\sim 2$ Gyr old) and RGB ($\sim 4$ Gyr old) stars respectively. \citet{Quirk19} fitted the rotation curves for stellar populations identified by \citetalias{dorman15}. In the R$\rm_{GC}=14-17$ kpc bin, $V\rm_{\phi}$ for the high-extinction PNe is in good agreement with that obtained by \citet{Quirk19} for older AGB stars, but for the low-extinction PNe it is lower than that of RGB stars by $\sim 30$ km s$^{-1}$. This is possibly due to their RGB population being contaminated by younger AGB stars, resulting in a $V\rm_{\phi}$ value that is closer to that of the \ion{H}{i} gas. The AVR of the M31 disc shows a steeper slope in the 0--2.5 Gyr age range, and an even steeper one in the 2.5--4.5 Gyr range, than those of the MW disc in similar age bins. The AVR of the MW disc is considered to be driven by secular evolution channels \citep[see review by][]{Sellwood14}. An AVR with velocity dispersion increasing gradually with age is also measured in simulated disc galaxies with similarly quiescent merger histories \citep[from zoom-in cosmological simulations by][]{House11, Martig14}.
However, simulated disc galaxies undergoing a single merger show a significant increase in the velocity dispersion for stellar populations formed before the end of the merger \citep[][see their Figure 2]{Martig14}, with larger velocity dispersion for higher merger mass ratios. After the end of the merger, it takes $\sim2$ Gyr for stellar populations to form with velocity dispersion values similar to those for quiescent discs. The high $\rm\sigma_{\phi,~4.5~Gyr}$ values in the M31 disc are reminiscent of those seen in populations older than the merger event in simulated galaxies. The lower $\rm\sigma_{\phi,~2.5~Gyr}$ values in the M31 disc are reminiscent of the lower values predicted by simulations some time after the end of the merger. Finally, the velocity dispersion for the MS in M31 is akin to that for quiescent discs, also observed at least $\sim2$ Gyr after the merger event in the simulated galaxies. Hence, we may deduce from the observed AVR in the M31 disc that a single merger event took place 2.5--4.5 Gyr ago. \subsection{Estimation of the merger mass ratio} \label{sect:ratio} In the framework of a single merger in the M31 disc, we estimate the merger mass ratio and satellite mass required to produce the dynamically hot 4.5 Gyr old population with disc scale height H$\rm_{4.5~Gyr} = H_{Low~ext} \approx 0.86$ kpc (Appendix~\ref{app:scale}). We utilize the relation between disc scale height (H) and satellite-to-disc-mass-ratio ($\rm M_{sat}/M_{disc}$) described by \citet{Hopkins08} for a satellite galaxy (assumed to be a rigid body) that merged with a disc galaxy (assumed to be a thin disc) on an in-plane prograde radial orbit.
The relation in the case of a satellite merging with a \citet{Mestel63} disc galaxy, having constant circular velocity $V_{c,disc}$, is as follows: \begin{equation} \label{eq:mass} \rm\frac{\Delta H}{R_{e,disc}} = \alpha_{H}~ (1-f_{gas}) ~\frac{M_{sat}}{M_{disc}}~\tilde{h}(R/R_{e,disc}) \end{equation} where $\rm\Delta H$ gives the increase in scale height in the disc galaxy following the merger; $\rm\alpha_{H}= 1.6\tilde{v}$ is a derived constant with $\rm\tilde{v}=(V_{c,disc}/V_{h})^2$, $V_{h}$ being the halo circular velocity; $\rm f_{gas}$ is the gas fraction in both the disc galaxy and satellite (assumed to be equal) before the merger; $\rm R$ is the galactocentric radius of the population with scale height H; and $\rm R_{e,disc}$ is the disc effective radius. We assume that the M31 disc evolved by secular evolution prior to the merger event. Hence we adopt the scale height of the old thin disc of the MW as measured in the solar neighbourhood, H$_{MW}\approx300$ pc \citep[see][and references therein]{bhg16}, as the pre-merger scale height $\rm H_{pre-merger}$ for the M31 disc. Thus, $\rm H_{pre-merger} \approx H_{MW}\approx0.3$ kpc and $\rm\Delta H = H_{4.5~Gyr}- H_{pre-merger} \approx 0.56$ kpc. $\rm R=18.5$ kpc is the median of the R$\rm_{GC}$=17--20 kpc bin, which is the equivalent, in disc scale lengths, of the solar neighbourhood. From \citet{Blana18}, we adopt $V_{c,disc}=250$ km s$^{-1}$, $\rm R_{e,disc}=9.88$ kpc and $V_{h}=182$ km s$^{-1}$. The present day gas fraction in M31 is $\sim9\%$ \citep{Yin09} but M31 is observed to have undergone a burst of star formation $\sim2$ Gyr ago which produced $\sim10\%$ of its mass \citep{wil17}. Assuming that the stellar mass formed in this burst was present as gas mass before the merger, we adopt $\rm f_{gas}=0.19$. Plugging these values into Equation~\ref{eq:mass}, we obtain $\rm M_{sat}/M_{disc} \approx 0.21$ or $\rm M_{sat}:M_{disc} \approx 1:5$.
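The inversion of Equation (2) for the mass ratio can be sketched numerically with the values quoted in the text. The one assumption we must add is the value of the dimensionless profile $\tilde{h}(R/R_{e,disc})$, whose functional form \citep{Hopkins08} is not reproduced here; the value $\tilde{h}\approx0.11$ at $R/R_{e,disc}\approx1.9$ used below is our assumption, chosen to be consistent with the quoted result.

```python
# Invert dH/R_e = alpha_H * (1 - f_gas) * (M_sat/M_disc) * h_tilde(R/R_e)
# for the satellite-to-disc mass ratio. h_tilde is an ASSUMED value here
# (~0.11 at R/R_e ~ 1.9), since Hopkins et al. (2008) give its full form.
dH      = 0.86 - 0.30          # kpc: H_4.5Gyr minus assumed pre-merger scale height
R_e     = 9.88                 # kpc: disc effective radius (Blana et al. 2018)
v_c, v_h = 250.0, 182.0        # km/s: disc and halo circular velocities
alpha_H = 1.6 * (v_c / v_h) ** 2
f_gas   = 0.19                 # pre-merger gas fraction
h_tilde = 0.11                 # assumed h~(R/R_e) at R = 18.5 kpc

mass_ratio = (dH / R_e) / (alpha_H * (1.0 - f_gas) * h_tilde)  # ~0.21, i.e. ~1:5
m_sat = mass_ratio * 7e10      # M_sun, using M_disc = 7e10 M_sun
```

With these inputs the ratio evaluates to $\approx0.21$ and the satellite mass to $\approx1.5\times10^{10}\,\rm M_\odot$, matching the values quoted in the text.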
Given that the total mass of the M31 disc is $\rm 7 \times 10^{10} ~M_{\odot}$ \citep{Yin09}, a $\rm 1.4 \times 10^{10} ~M_{\odot}$ satellite is required to dynamically heat the M31 disc. \section{Summary and conclusion} \label{sect:sum} We classify the observed sample of PNe based on their measured extinction values into high- and low-extinction PNe which are respectively associated with 2.5 Gyr and 4.5 Gyr parent populations. By fitting rotation curves to the two PNe populations in de-projected elliptical bins, we find that the high- and low-extinction PNe are dynamically colder and hotter respectively, especially at R$\rm_{GC}=14-20$ kpc (Figures~\ref{fig:vrot},~\ref{fig:sigrot}). We thus obtain the AVR at these radii to find that $\rm\sigma_{\phi}$ increases with age in the M31 disc, which is dynamically much hotter than the stars in the MW disc of corresponding ages. There is an interesting timescale coincidence between the age of the high-extinction PNe and the $\sim2$ Gyr old burst of star formation observed both in the stellar disc and inner halo of M31 \citep{bernard15,wil17}. We speculate that most of the high-extinction PNe, causing the secondary peak in the extinction distribution (Figure~\ref{fig:av_hist}), are those whose progenitors formed during the $\sim2$ Gyr old star formation burst, while the low-extinction PNe were likely formed earlier. The high-extinction PNe are kinematically tracing the younger thin disc of M31 outside R$\rm_{GC}= 14$ kpc from the center. They are clearly separated, both in $\rm V_{\phi}$ and $\rm\sigma_{\phi}$, from the dynamically hotter low-extinction PNe which may be associated with the thicker disc. Some low-extinction PNe may also be associated with the old thin disc and inner halo of M31. 
Using hydrodynamical simulations, \citet{ham18} argue that a single major merger 2 -- 3 Gyr ago, where the satellite eventually coalesced to build up the M31 bulge after multiple passages, can explain the dynamical heating of the M31 disc. They predict a merger mass ratio of at least 1:4.5 from their simulations, as well as a decreasing trend in the velocity dispersion with radius, as observed, albeit within errors, in Figure~\ref{fig:sigrot}. Such a merger could also explain the burst of star formation $\sim2$ Gyr ago and the presence of the M31 inner halo substructures. \citet{Fardal13} also use hydrodynamical simulations to predict the formation of the giant stream from a merger $\sim1$ Gyr ago with a $\rm \sim 3.2 \times 10^{9} M_{\odot}$ satellite. The AVR measured in the M31 disc using PNe is indicative of a single merger occurring 2.5--4.5 Gyr ago with a merger mass ratio $\approx$ 1:5, with a $\rm 1.4 \times 10^{10} ~M_{\odot}$ satellite galaxy. Such a galaxy would have been the third largest member of the Local Group, more massive than M33 \citep{Kam17}. This is consistent with the prediction from \citet{ham18}. In conclusion, the kinematics of the M31 disc PNe have been able to shed light on the recent dynamical evolution of M31. Our next step is to utilize PNe to further investigate the interface of the disc and inner halo of M31. \begin{acknowledgements} SB acknowledges support from the IMPRS on Astrophysics at the LMU Munich. We are grateful to the anonymous referee for the constructive comments that improved the manuscript. Based on observations obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT).
This research made use of Astropy, a community-developed core Python package for Astronomy \citep{Rob13}, as well as Numpy \citep{numpy} and Matplotlib \citep{matplotlib}. This research also made use of NASA's Astrophysics Data System (ADS\footnote{\url{https://ui.adsabs.harvard.edu}}). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:intro} Our Galaxy consists of a few observationally distinct components, which differ in their chemical abundances and kinematic properties: the thin disk, the Thick disk\footnote{To facilitate the reading of thin and Thick labels of disk samples, we will write the Thick one starting with a capital letter.}, the bulge, and the halo. Thick disks are observed in most disk galaxies besides the Milky Way. The Milky Way Thick disk bears evidence of the early history of the Galaxy, and is regarded as a significant component for understanding the process of Galaxy formation. The Milky Way Thick disk was discovered by \citet{gilmore1983new}. Significant effort has been made afterwards to understand the origin and the physical properties of the Milky Way Thick disk. However, after almost four decades, there is no consensus yet regarding the origin of the Thick disk of the Milky Way. To explain the origin of the Thick disk, a few scenarios have been suggested. A natural scenario for the origin of the Thick disk is the dynamical heating of a pre-existing thin disk by some mechanism. The heating of the thin disk can be produced by the scattering of thin disk stars by giant molecular clouds \citep{spitzer1951possible}, by spirals or barred structures \citep{sellwood1984spiral}, or by minor mergers of small companion galaxies \citep{quinn1993heating}. Radial migration of stars can also work as a possible mechanism of Thick disk formation \citep{Roskar2008riding, schonrich2009chemical}. Another group of scenarios involves a major accretion and a merger of a large satellite galaxy. \citet{abadi2003simulations} were the first to suggest that the Thick disk was formed by stars originating from a disrupted satellite galaxy.
Somewhat intermediate to the pictures of the {\it internal} and {\it external} origin of the Thick disk is the scenario of the dual origin of the Galactic Thick disk, where the accretion of a significant merger triggers a centrally concentrated burst of star formation that marks the end of the formation of the rotationally-supported in-situ Thick disk that began forming prior to the merger. \citet{Bovy12} advocate for a continuous vertical disc structure connected by different individual stellar populations that smoothly go from thin to Thick with increasing age. The integrated vertical space density of all these mono-abundance populations identifies the thin and Thick disks \citep{RixBovy}. \citet{Bird13} confirmed this finding by simulating the vertical mass density profiles of individual age cohorts, which are progressively steeper for younger populations. The superposition of age cohorts in the solar annulus results indeed in a double-exponential profile compatible with that observed in the Milky Way star counts. In other words, the ``upside-down'' evolution that \citet{Bird12,Bird13} demonstrate in their simulations indicates that the Thick disk arises from continuous trends between stellar age and metallicity. Therefore, the whole disk structure as it appears now is the result of a continuous evolution of the pristine disk stellar population, and does not originate from a discrete merger event, secular heating, or stellar radial migration after formation. Recent progress in Thick disk studies is related to high-resolution zoom-in cosmological simulations. Such simulations demonstrate that the cohorts of older stars have larger scale heights and shorter scale lengths, thus representing the Thick disk stellar population, while younger stars form the thin disk population of galaxies.
In this picture the Thick disk is not a distinct component, but is rather a part of a double component system with a gradually-varying-with-height mixture of young and old stars \citep{buck2020nihao}. \citet{park2021exploring}, using the high-resolution cosmological simulations GALACTICA and NEWHORIZON, focused on the question of whether the spatially defined thin and Thick disks are formed by different mechanisms. The authors traced the birthplaces of the stellar particles in the thin and Thick disks and found that most of the Thick disk stars in simulated galaxies were formed close to the mid-plane of the galaxies. This suggests that the two disks are not distinct in terms of the formation process but are rather the signature of a complex evolution of galaxies. The only way to clarify which of the proposed formation scenarios was in place is a detailed study of the properties of the Milky Way Thick disk. To select Thick disk stars a few criteria have been suggested. \citet{fuhrmann2000deep} selected Thick disk stars using the relative abundance of alpha-elements of the stars, taking into account the considerable difference in the abundances of such elements in the Milky Way thin and Thick disks. Attempts were made to distinguish each population based on the fact that the Thick disk is an older subsystem of the Galaxy while the thin disk stars are relatively young \citep{fuhrmann2000deep}. However, due to large errors in the determination of the age of individual stars this criterion seems rather unreliable. The majority of studies of the properties of the Thick disk are based on the selection of Thick disk stars located at 1--2 kpc above or below the Galactic mid-plane where, presumably, most of the stellar population is represented by the Thick disk \citep{girard2006abundances, kordopatis2011spectroscopic}. In this study we use a different approach to select Thick disk stars.
The stars that belong to the Thick disk are located not only above or below the Galactic thin disk, but are also present in the solar neighborhood close to the Galactic mid-plane. Moreover, the concentration of Thick disk stars is expected to be highest near the mid-plane of the Galactic disk. Using kinematical data for the stars in the solar neighborhood we can choose the stars that have relatively high velocities in the direction perpendicular to the Galactic plane, so they will leave the solar neighborhood in the near future. The approach has a few obvious advantages. Kinematical data of nearby stars are determined with better accuracy than the kinematics of distant stars. The volume density of the Thick disk stars is expected to be highest close to the mid-plane of the Galaxy, decreasing exponentially perpendicular to the Galactic disk. Thus, selecting Thick disk stars in the solar neighborhood in this way, we obtain a richer stellar sample with better determined kinematical properties. This study is organized as follows: section 2 explains how the data samples were selected and their completeness assessed, section 3 is dedicated to the statistical calculation and analysis of the velocities' mean and dispersion, and the number density ratio of the Milky Way thin and Thick disks. Section 4 provides a brief discussion and section 5 summarizes our conclusions. An appendix is finally included with several additional explanations. \section{Data selection and completeness} \label{sec:datasel} We based our study on data taken from the Gaia Early Data Release 3 (EDR3) catalog \citep{2016A&A...595A...1G, 2021A&A...649A...1G} selecting a complete sample of good quality red giant branch (RGB) stars within the cylinder centered at the Sun that has $\pm$ 0.5 kpc height and 1 kpc radius.
A first selection from the Gaia catalog was done by imposing the following criteria: astrometric quality \texttt{ruwe}$<1.4$, parallax errors less than 10\%, apparent magnitudes $G<13.5$, and \texttt{parallax}$>0.89$ mas (4,863,317 sources). A further cut was done to select all sources within the above mentioned cylinder volume (4,274,340 sources), and having not null \texttt{bp\_rp} and \texttt{dr2\_radial\_velocity}, that is measured $B-R$ color and Gaia radial velocity (2,387,462 sources; $\sim 56$\% of the cylinder sample). The value of \texttt{ruwe} is a renormalized unit-weight error which indicates how good the astrometric solution of the star is. Following the general recommendation from \citet{2021gdr3.reptE....V} we use the value 1.4 as a safe upper limit to select well-behaved data. The parallax error cut assures that we can use the inverse of the parallax as an unbiased measure of the star distance, as explained by \citet{2021gdr3.reptE....V}. As described by the GAIA collaboration, GAIA EDR3 has proper motions and parallaxes that are in general twice as precise as those in the GAIA DR2 catalog. Radial velocities in the GAIA EDR3 catalog, however, are just a copy of the radial velocities from the GAIA DR2 catalog and are available only for a portion of the stars with apparent magnitude $4 \lesssim G \lesssim 13$. The plot of absolute magnitude $G_{abs}$ vs $B-R$ for the latter sample shows the presence of the red clump (RC) stars, as an over-density at $G_{abs}\sim 0.5$ on its blue end and tilted towards fainter $G$ magnitudes and redder $B-R$ colours due to the effect of interstellar extinction. The selection $B-R>1$ and $G_{abs}-1.85(B-R)<-0.7$ allowed us to choose all the RGB stars from the RC upwards (see Figure \ref{fig:cmd}, left panel), yielding a sample of 278,228 stars named Gaia RGB in Table \ref{tab_basic_samples}.
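The quality and colour-magnitude cuts above can be sketched as a single boolean mask. This is an illustrative sketch, not the actual query used by the authors: the function name is ours, parallaxes are taken in mas (so $M_G = G + 5\log_{10}\varpi - 10$), and the cylinder cut, which requires sky positions, is omitted.

```python
import numpy as np

def gaia_rgb_mask(ruwe, parallax, parallax_error, g_mag, bp_rp):
    """Boolean mask sketching the quality and RGB selection cuts.

    parallax and parallax_error in mas, magnitudes in the Gaia system.
    The additional cylinder cut (|Z| < 0.5 kpc, R_xy < 1 kpc) needs sky
    positions and is left out of this sketch.
    """
    # Absolute magnitude from the parallax (mas): M_G = G + 5 log10(plx) - 10
    g_abs = g_mag + 5.0 * np.log10(parallax) - 10.0
    return (
        (ruwe < 1.4)                          # well-behaved astrometric solution
        & (parallax_error / parallax < 0.1)   # <10% parallax error
        & (g_mag < 13.5)
        & (parallax > 0.89)                   # mas, i.e. within ~1.1 kpc
        & (bp_rp > 1.0)                       # B-R > 1
        & (g_abs - 1.85 * bp_rp < -0.7)       # RGB from the red clump upwards
    )
```

Applied to Gaia EDR3 columns (`ruwe`, `parallax`, `parallax_error`, `phot_g_mean_mag`, `bp_rp`), this mask reproduces the selection logic described in the text.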
This sample represents 94\% of all stars listed in Gaia EDR3 located within the cylinder and having the same color and magnitude selection of our sample (296,879 stars), meaning we are missing only 6\% of what Gaia observed in that volume for that kind of stars, due to lacking radial velocity information. A plot of $G_{abs}$ vs. distance (Figure \ref{fig:cmd}, right panel) shows the RC stars being well sampled up to 1.15 kpc in distance by our RGB sample. The only signature of incompleteness comes from the bright limit in Gaia, visible as the curved brighter end of the plotted data, meaning our sample is missing only some nearby ($\lesssim 200$ pc) RC stars. By selecting the red giant population, which is presumably older and less affected by kinematical features associated with their birth place, our results will be more indicative of the overall disk dynamics. \begin{figure}[ht!] \plottwo{fig_cmd.png}{fig_mag_dist.png} \caption{Left panel: $G_{abs}$ vs. $B-R$ for all the Gaia data in the cylinder studied in this investigation, colour-coded by density of points (the whiter the denser, every 20 stars plotted). Our Gaia RGB sample was selected from the area delimited by the grey dashed line. Right panel: $G_{abs}$ vs. $(R_{xy}/1000)^2$ for our Gaia RGB sample, colour-coded by density as in left panel (every 2 stars plotted). Grey contours are the Kernel Density Estimation of all the Gaia data in the cylinder (iso-density curves from 20\% to 80\% levels).} \label{fig:cmd} \end{figure} We use Topcat \citep{2005ASPC..347...29T} functions \texttt{astromXYZ, astromUVW} and \texttt{icrsToGal} to compute rectangular coordinates and velocities oriented such that X(U) points towards the Galactic center, Y(V) points towards the galactic rotation, and Z(W) points towards the North Galactic Pole.
Finally, the Galacto-centric velocities in the cylindrical coordinate system $(V_R,V_\phi,V_Z)$ were computed assuming the solar motion with respect to the Local Standard of Rest (LSR) of $(U,V,W)_\odot=(11.1,12.2,7.3)$ km s$^{-1}$ \citep{2010MNRAS.403.1829S} and a galactic rotation for the LSR of -244.5 km s$^{-1}$ \citep{2020A&A...644A..83F}, with the $V_R$-axis pointing away from the Galactic center, the $V_\phi$-axis oriented against the galactic rotation at the LSR, and the $V_Z$-axis directed towards the North Galactic Pole. In total, 133,218 stars were selected as RGB stars in the northern part of the cylinder $(b>0)$ and 145,010 in the southern one ($b\leq 0)$ with the total number of stars being 278,228. Selected samples were chosen according to their $V_Z$ values, so that they are dominated by the thin and Thick disk populations respectively. We call the thin disk sample those stars having $|V_z|<15$ km s$^{-1}$ and heights $|Z|<200$ pc, and the Thick disk sample those with $40<|V_z|<80$ km s$^{-1}$ (see Table \ref{tab_basic_samples}). We did not split the GAIA$\times$RAVE samples into northern and southern parts. The metallicity value $[Fe/H]$ (labeled $Met_K$ in RAVE) was used to select the thin disk stars as those having the kinematical properties described above and $[Fe/H]>-0.4$, while the Thick disk stars have the kinematical properties described above and $-1<[Fe/H]<-0.4$. Table \ref{tab_basic_samples} summarizes the number of stars in each Gaia/Gaia$\times$RAVE, North/South, thin/Thick disk sample. Details on the Gaia$\times$RAVE cross-match are discussed in Appendix \ref{app_xmatch}.
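The heliocentric $(U,V,W)$ step performed by Topcat's \texttt{astromUVW} can be sketched directly from Galactic coordinates. This is a hedged illustration rather than Topcat's actual implementation: the function name is ours, and it assumes proper motions in mas yr$^{-1}$ (with $\mu_{l}$ including the $\cos b$ factor) and distances in kpc; adding the solar motion $(11.1, 12.2, 7.3)$ km s$^{-1}$ afterwards gives velocities relative to the LSR.

```python
import numpy as np

K = 4.74057  # km/s per (mas/yr * kpc): tangential-velocity conversion constant

def uvw(l_deg, b_deg, d_kpc, pm_l_cosb, pm_b, rv):
    """Heliocentric (U, V, W) from Galactic coordinates.

    U points towards the Galactic centre, V along the Galactic rotation,
    W towards the North Galactic Pole. Proper motions in mas/yr, distance
    in kpc, radial velocity rv in km/s.
    """
    l, b = np.radians(l_deg), np.radians(b_deg)
    v_l = K * pm_l_cosb * d_kpc   # tangential velocity along l (km/s)
    v_b = K * pm_b * d_kpc        # tangential velocity along b (km/s)
    # Project the radial and two tangential components on the U, V, W axes
    u = rv * np.cos(b) * np.cos(l) - v_l * np.sin(l) - v_b * np.sin(b) * np.cos(l)
    v = rv * np.cos(b) * np.sin(l) + v_l * np.cos(l) - v_b * np.sin(b) * np.sin(l)
    w = rv * np.sin(b) + v_b * np.cos(b)
    return u, v, w
```

A quick sanity check: a star at $(l,b)=(90^\circ,0^\circ)$ moving purely along the line of sight has all of its velocity in $V$, as expected for the direction of Galactic rotation.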
\begin{table}[ht] \centering \begin{tabular}{p{6cm}rrr} Sample name & North & South & Total \\ \hline\hline Gaia RGB & 133,218 & 145,010 & 278,228 \\ Gaia RGB thin disk & 54,197 & 55,628 & 109,825 \\ Gaia RGB Thick disk & 6,776 & 7,022 & 13,798 \\ \hline Gaia$\times$RAVE RGB & \multicolumn{3}{r}{30,170} \\ Gaia$\times$RAVE RGB thin disk & \multicolumn{3}{r}{3,381} \\ Gaia$\times$RAVE RGB Thick disk & \multicolumn{3}{r}{1,113} \\ \hline \end{tabular} \caption{Sample names and sizes used in this work as described in Section \ref{sec:datasel}.} \label{tab_basic_samples} \end{table} \subsection{Completeness}\label{sec:datacomp} Assuming that the stellar density changes only vertically and is symmetrically distributed with respect to the Galactic plane, the empirical cumulative distribution function (CDF) of the normalized cylindrical radius squared $(R_{xy}/1000)^2=(X^2+Y^2)/1000^2$ should grow linearly, following very closely the identity function. In other words, in a complete sample, the number of stars that are uniformly distributed in a cylinder should grow proportionally to the square of the radius of the cylinder. The CDF is obtained by sorting the $(R_{xy}/1000)^2$ data and computing the percentile of each datum versus the datum itself. Any deviation from this law means that the sample is not homogeneous or incomplete. Figure \ref{fig:cdf} plots the CDF minus the identity function to check how much the CDF deviates from the latter. For the samples studied in this work we find that the Gaia ones are essentially complete with incompleteness being about 1\%. Their CDF being below the identity means these Gaia stars should occupy higher percentiles, i.e.
some stars are missing at smaller distances, and this is consistent with Figure \ref{fig:cmd} (right panel), which shows that a slight incompleteness occurs at the fainter end of the RGB stars at smaller cylindrical radius distances. The Gaia$\times$RAVE samples deviate more from the identity function. For example the Gaia$\times$RAVE thin disk exceeds the identity at $(R_{xy}/1000)^2\gtrsim 0.64$ (i.e. $R_{xy}\gtrsim 800$ pc), i.e. the data there accumulate radially faster than expected for a radially uniform distribution. Gaia$\times$RAVE samples are about 5\% incomplete in radius sampling. Details on how incompleteness is computed are provided in Appendix \ref{app_complete}. \begin{figure}[ht!] \plottwo{fig_density_gaia.png}{fig_density_gaia_rave.png} \caption{Cumulative distribution function (CDF) of the normalized cylindrical radius squared minus the identity function, for the samples studied in this work.} \label{fig:cdf} \end{figure} \begin{table}[ht] \centering \begin{tabular}{p{6cm}rr} Sample name & North & South \\ \hline\hline Gaia RGB & 1.02 & 0.91 \\ Gaia RGB thin disk & 0.84 & 0.77 \\ Gaia RGB Thick disk & 1.19 & 1.46 \\ \hline Gaia$\times$RAVE RGB & \multicolumn{2}{c}{4.87} \\ Gaia$\times$RAVE RGB thin disk & \multicolumn{2}{c}{3.36} \\ Gaia$\times$RAVE RGB Thick disk & \multicolumn{2}{c}{5.84} \\ \hline \end{tabular} \caption{Sample incompleteness percentages, as described in Section \ref{sec:datacomp}.} \label{tab_complete} \end{table} \section{Kinematics of the thin and Thick disks} \subsection{Radial and azimuthal velocities} Figure \ref{fig:histovr} shows distributions of the radial velocity $V_R$ computed with 5 km s$^{-1}$ bins separately for the thin and Thick disks in the northern and southern parts of the cylinder. The left panel presents the radial velocity distributions measured for our complete samples of stars. The right panel presents the same distributions for the samples that also have the chemical abundance information.
The similarity of the two distributions shows that the lack of chemical abundance information does not substantially bias the conclusions based on purely kinematical data. Mean values and standard deviations of the radial velocity distributions in all samples were computed using the standard algebraic expressions (see Table \ref{tab_momvr}), and were also estimated with a Gaussian fit using the Python {\tt\string scipy} package (see Table \ref{tab_gfitvr}). As one can see from the tables, the mean radial velocities are of the order of 1 - 2 km s$^{-1}$ in both the thin and Thick samples, and the radial velocity dispersions are $\sim$ 50 km s$^{-1}$ for the Thick disk and $\sim$ 30 km s$^{-1}$ for the thin disk samples. Figure \ref{fig:histovphi}, together with Tables \ref{tab_momvphi} and \ref{tab_gfitvphi}, presents measurements of the azimuthal velocity mean and dispersion for the selected samples of stars. As one can see, the $V_\phi$-velocity distribution is not symmetric, showing the presence of asymmetric drift in the samples. Again, the kinematically selected samples have values comparable to those estimated with the help of the much smaller Gaia$\times$RAVE samples that include stellar abundance information. This confirms that the lack of chemical information does not bias the results. As one can see from Tables \ref{tab_momvphi} and \ref{tab_gfitvphi}, the mean velocities of both the thin and Thick disks lag behind the assumed velocity of the local standard of rest (LSR) of -244.5 km s$^{-1}$. The thin disk sample lags behind the LSR rotational velocity by 5 to 8 km s$^{-1}$. The Thick disk in the solar neighborhood lags behind the LSR rotational velocity by about 20 km s$^{-1}$. The observed lag of the rotational velocity in both samples has two causes.
The first is asymmetric drift, the difference between the gravitational force and the centrifugal force caused by the nonzero components of the velocity dispersion of the systems \citep{binney2008galactic}. The second is that the thin and Thick disks are mixed, as will be shown in the next section by the analysis of the kinematics of RGB stars in the direction perpendicular to the Galactic disk.
\begin{figure}[ht!] \plottwo{fig_histo_vr_gaia.png}{fig_histo_vr_gaia_rave.png} \caption{Histograms and Gaussian fits of $V_R$ for the samples studied in this work. See also Tables \ref{tab_momvr} and \ref{tab_gfitvr}. \label{fig:histovr}} \end{figure}
\begin{table}[ht] \centering \begin{tabular}{p{5cm}rcccc} & \multicolumn{2}{c}{Mean} & \multicolumn{2}{c}{StdDev} \\ \hline Sample name & North & South & North & South \\ \hline\hline Gaia RGB thin disk & 0.01 $\pm$ 0.14 & -1.04 $\pm$ 0.14 & 32.65 $\pm$ 0.10 & 32.64 $\pm$ 0.10 \\ Gaia RGB Thick disk & 1.67 $\pm$ 0.64 & 3.53 $\pm$ 0.63 & 52.54 $\pm$ 0.45 & 52.48 $\pm$ 0.44 \\ \hline Gaia$\times$RAVE RGB thin disk & \multicolumn{2}{c}{0.48 $\pm$ 0.54} & \multicolumn{2}{c}{31.29 $\pm$ 0.38} \\ Gaia$\times$RAVE RGB Thick disk & \multicolumn{2}{c}{4.00 $\pm$ 1.52} & \multicolumn{2}{c}{50.85 $\pm$ 1.08} \\ \hline \end{tabular} \caption{$V_R$ mean and standard deviation values for the samples studied in this work.
Errors are computed as StdDev/$\sqrt{n-1}$ and StdDev/$\sqrt{2n}$ respectively.} \label{tab_momvr} \end{table} \begin{table}[ht] \centering \begin{tabular}{p{5cm}rcccc} & \multicolumn{2}{c}{$\mu$} & \multicolumn{2}{c}{$\sigma$} \\ \hline Sample name & North & South & North & South \\ \hline\hline Gaia RGB thin disk & 1.05 $\pm$ 0.30 & -0.64 $\pm$ 0.27 & 30.81 $\pm$ 0.25 & 31.26 $\pm$ 0.22 \\ Gaia RGB Thick disk & 0.98 $\pm$ 0.48 & 2.50 $\pm$ 0.57 & 48.58 $\pm$ 0.39 & 50.04 $\pm$ 0.47 \\ \hline Gaia$\times$RAVE RGB thin disk & \multicolumn{2}{c}{2.51 $\pm$ 0.70} & \multicolumn{2}{c}{30.33 $\pm$ 0.57} \\ Gaia$\times$RAVE RGB Thick disk & \multicolumn{2}{c}{3.56 $\pm$ 1.41} & \multicolumn{2}{c}{50.31 $\pm$ 1.15} \\ \hline \end{tabular} \caption{$V_R$ mean $\mu$ and dispersion $\sigma$ values from a Gaussian fit (Figure \ref{fig:histovr}) with their respective errors, for the samples studied in this work.} \label{tab_gfitvr} \end{table} \begin{figure}[ht!] \plottwo{fig_histo_vphi_gaia.png}{fig_histo_vphi_gaia_rave.png} \caption{Histograms and Gaussian fit of $V_\phi$ for the samples studied in this work. See also Tables \ref{tab_momvphi} and \ref{tab_gfitvphi}. \label{fig:histovphi}} \end{figure} \begin{table}[ht] \centering \begin{tabular}{p{5cm}rcccc} & \multicolumn{2}{c}{Mean} & \multicolumn{2}{c}{StdDev} \\ \hline Sample name & North & South & North & South \\ \hline\hline Gaia RGB thin disk & -236.80 $\pm$ 0.09 & -237.11 $\pm$ 0.09 & 22.06 $\pm$ 0.07 & 22.32 $\pm$ 0.07 \\ Gaia RGB Thick disk & -218.07 $\pm$ 0.52 & -218.26 $\pm$ 0.50 & 43.07 $\pm$ 0.37 & 42.19 $\pm$ 0.36 \\ \hline Gaia$\times$RAVE RGB thin disk & \multicolumn{2}{c}{-234.22 $\pm$ 0.37} & \multicolumn{2}{c}{21.30 $\pm$ 0.26} \\ Gaia$\times$RAVE RGB Thick disk & \multicolumn{2}{c}{-221.13 $\pm$ 1.09} & \multicolumn{2}{c}{36.38 $\pm$ 0.77} \\ \hline \end{tabular} \caption{$V_\phi$ mean and standard deviation values for the samples studied in this work. 
Errors are computed as StdDev/$\sqrt{n-1}$ and StdDev/$\sqrt{2n}$ respectively.} \label{tab_momvphi} \end{table}
\begin{table}[ht] \centering \begin{tabular}{p{5cm}rcccc} & \multicolumn{2}{c}{$\mu$} & \multicolumn{2}{c}{$\sigma$} \\ \hline Sample name & North & South & North & South \\ \hline\hline Gaia RGB thin disk & -239.31 $\pm$ 0.23 & -239.40 $\pm$ 0.19 & 19.67 $\pm$ 0.19 & 20.15 $\pm$ 0.15 \\ Gaia RGB Thick disk & -225.67 $\pm$ 0.70 & -225.80 $\pm$ 0.79 & 35.01 $\pm$ 0.57 & 35.84 $\pm$ 0.65 \\ \hline Gaia$\times$RAVE RGB thin disk & \multicolumn{2}{c}{-236.82 $\pm$ 0.54} & \multicolumn{2}{c}{20.50 $\pm$ 0.44} \\ Gaia$\times$RAVE RGB Thick disk & \multicolumn{2}{c}{-226.61 $\pm$ 1.31} & \multicolumn{2}{c}{34.97 $\pm$ 1.07} \\ \hline \end{tabular} \caption{$V_\phi$ mean $\mu$ and dispersion $\sigma$ values from a Gaussian fit (Figure \ref{fig:histovphi}) with their respective errors, for the samples studied in this work.} \label{tab_gfitvphi} \end{table}
\subsection{Vertical velocities, density ratio and North-South symmetry}
The distribution of vertical velocities $V_Z$ for the Gaia RGB sample cannot be fitted by a single Gaussian function. Our selection of the thin and Thick disk samples was based, respectively, on the $|V_Z|<15$ km s$^{-1}$ {\it core} and the $40<|V_Z|<80$ km s$^{-1}$ {\it wings} of the $V_Z$ distribution. A zero-centered Gaussian fit corresponding to each of these portions of the non-normalized histogram is shown in Figure \ref{fig:histovz}. The obtained amplitude, mean and dispersion $(A,\mu,\sigma)$ values are given in Table \ref{tab_gfitvz1}. These plots demonstrate that the thin disk sample can be heavily contaminated by Thick disk stars, although it must be kept in mind that the thin disk samples have $|Z|<200$ pc while the Thick disk samples extend to $|Z|<500$ pc.
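The core/wings kinematic selection described above can be sketched in a few lines of Python. This is a minimal illustration on synthetic data: the array name \texttt{v\_z} and the dispersion used to generate it are assumptions, not the actual catalog values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the vertical velocities (km/s) of the cylinder
# sample; in the real analysis v_z would come from the Gaia RGB catalog.
v_z = rng.normal(0.0, 25.0, size=100_000)

# Kinematic selection used in the text:
#   thin-disk candidates : the |V_Z| < 15 km/s "core"
#   Thick-disk candidates: the 40 < |V_Z| < 80 km/s "wings"
thin_mask = np.abs(v_z) < 15.0
thick_mask = (np.abs(v_z) > 40.0) & (np.abs(v_z) < 80.0)

print(thin_mask.sum(), thick_mask.sum())
```

By construction the two masks are disjoint, so each star enters at most one of the candidate samples.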
\begin{table}[ht] \centering \begin{tabular}{p{4.5cm}rcccccc} & \multicolumn{2}{c}{$\mu$} & \multicolumn{2}{c}{$\sigma$} & \multicolumn{2}{c}{$A$} \\ \hline Sample name & North & South & North & South & North & South \\ \hline\hline Gaia RGB thin disk & -0.14 $\pm$ 0.47 & -0.21 $\pm$ 0.41 & 11.44 $\pm$ 0.65 & 10.94 $\pm$ 0.55 & 129609 $\pm$ 5584 & 130213 $\pm$ 5008 \\ Gaia RGB Thick disk & -0.39 $\pm$ 0.18 & -0.11 $\pm$ 0.26 & 26.73 $\pm$ 0.36 & 27.22 $\pm$ 0.50 & 101016 $\pm$ 3103 & 98312 $\pm$ 3991 \\ \hline Gaia$\times$RAVE RGB thin disk & \multicolumn{2}{c}{0.02 $\pm$ 0.55} & \multicolumn{2}{c}{10.74 $\pm$ 0.73} & \multicolumn{2}{c}{7849 $\pm$ 410} \\ Gaia$\times$RAVE RGB Thick disk & \multicolumn{2}{c}{-1.13 $\pm$ 0.46} & \multicolumn{2}{c}{28.34 $\pm$ 0.88} & \multicolumn{2}{c}{14251 $\pm$ 905} \\ \hline \end{tabular} \caption{$V_Z$ mean $\mu$, dispersion $\sigma$ and amplitude $A$ values from a Gaussian fit (Figure \ref{fig:histovz}) with their respective errors, for the samples studied in this work.} \label{tab_gfitvz1} \end{table}
\begin{figure}[ht!] \plottwo{fig_histo_vz_gaia.png}{fig_histo_vz_gaia_rave.png} \caption{Histograms and Gaussian fits of $V_Z$ for the samples studied in this work. See also Table \ref{tab_gfitvz1}. \label{fig:histovz}} \end{figure}
A proper estimate of the proportion of Thick and thin disk stars must be performed on the same volume, by fitting the normalized $V_Z$ histogram with the sum of two Gaussians. For simplicity, both Gaussians can be assumed to have zero mean, each with its own dispersion (the smaller for the thin and the larger for the Thick disk population), together with a factor representing the proportion of Thick disk stars within the whole sampled volume. We did so for the normalized $V_Z$ histogram with 2 km s$^{-1}$ bins, using the function {\tt\string optimize.curve\_fit} from the Python {\tt\string scipy} package.
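A minimal sketch of this two-component fit is given below. It runs on synthetic data: the input mixture fraction and dispersions are illustrative assumptions chosen near the Table values, not our measurements.

```python
import numpy as np
from scipy import optimize

rng = np.random.default_rng(1)

# Synthetic stand-in for V_Z: a 50/50 mix of a "thin" (sigma ~ 11 km/s)
# and a "Thick" (sigma ~ 24 km/s) zero-centered population.
v_z = np.concatenate([rng.normal(0, 11, 50_000), rng.normal(0, 24, 50_000)])

# Normalized histogram with 2 km/s bins, as in the text.
edges = np.arange(-100, 102, 2)
hist, edges = np.histogram(v_z, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def two_gauss(v, f_thick, s_thin, s_thick):
    """Sum of two zero-centered Gaussians; f_thick is the Thick-disk fraction."""
    g = lambda v, s: np.exp(-0.5 * (v / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return (1 - f_thick) * g(v, s_thin) + f_thick * g(v, s_thick)

popt, pcov = optimize.curve_fit(two_gauss, centers, hist,
                                p0=[0.5, 10.0, 25.0],
                                bounds=([0, 1, 1], [1, 50, 100]))
f_thick, s_thin, s_thick = popt
print(f"Thick fraction = {f_thick:.2f}, "
      f"sigma_thin = {s_thin:.1f}, sigma_thick = {s_thick:.1f}")
```

The bounds keep the fit in the physical region and prevent the two dispersions from swapping roles during optimization.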
Figure \ref{fig:fit2g0} shows the obtained results for the Gaia RGB sample ($|Z|<500$ pc) and for a sub-sample of it up to $|Z|<200$ pc. Table \ref{tab_gfitvz2} shows the results for samples of various heights.
\begin{figure}[ht!] \plottwo{fig_2g0_500.png}{fig_2g0_200.png} \caption{Histogram and sum-of-two-Gaussians fit of $V_Z$ for the samples studied in this work. The left panel is for $|Z|\leq 500$ pc and the right one for $|Z|\leq 200$ pc. See also Table \ref{tab_gfitvz2}. \label{fig:fit2g0}} \end{figure}
\begin{table}[ht] \centering \begin{tabular}{cccccccc} Gaia Cylinder RGB & \multicolumn{2}{c}{$\sigma_{thin}$} & \multicolumn{2}{c}{$\sigma_{Thick}$} & \multicolumn{2}{c}{Thick Disk $\%$} \\ \hline Sample height & North & South & North & South & North & South \\ \hline\hline $|Z|<500$ pc & 11.09 $\pm$ 0.28 & 11.78 $\pm$ 0.33 & 23.90 $\pm$ 0.64 & 23.49 $\pm$ 0.73 & 55 $\pm$ 3 & 53 $\pm$ 4 \\ $|Z|<200$ pc & 11.68 $\pm$ 0.28 & 10.25 $\pm$ 0.31 & 24.76 $\pm$ 1.22 & 21.81 $\pm$ 0.72 & 39 $\pm$ 4 & 55 $\pm$ 4 \\ $|Z|<100$ pc & 11.51 $\pm$ 0.31 & 10.15 $\pm$ 0.29 & 24.27 $\pm$ 1.31 & 21.73 $\pm$ 0.77 & 39 $\pm$ 4 & 51 $\pm$ 4 \\ $|Z|<\;\;50$ pc & 10.90 $\pm$ 0.32 & 10.32 $\pm$ 0.31 & 23.19 $\pm$ 1.12 & 22.02 $\pm$ 0.89 & 44 $\pm$ 4 & 49 $\pm$ 4 \end{tabular} \caption{$V_Z$ dispersion $\sigma$ values and errors from the sum of two zero-centered Gaussians fit (Figure \ref{fig:fit2g0}).} \label{tab_gfitvz2} \end{table}
A few interesting results emerge: 1) around half of the cylinder samples are in fact Thick disk stars (see Table \ref{tab_gfitvz2}); 2) there is a North-South asymmetry in the number of stars, more visible in the $|Z|<200$ pc cylinder sample (see Figure \ref{fig:fit2g0}, right panel); and 3) there is a slight excess of stars at $-40<V_Z<-20$ km s$^{-1}$ and a dearth at $25<V_Z<50$ km s$^{-1}$, more visible when plotting the residuals between the $V_Z$ histogram and the fitted sum of two Gaussians (see Figure \ref{fig:histores}), which occurs similarly in both North and South and becomes more
evident in the $|Z|<500$ pc samples. As for point 2), from the sum-of-two-Gaussians fit we estimate that in the $|Z|<200$ pc sample there is a $27\%$ (North) and $42\%$ (South) contamination by Thick disk stars in the thin disk ($|V_Z|<15$ km s$^{-1}$) samples, which rises to 41\% in both North and South for the $|Z|<500$ pc sample (see Appendix \ref{app_thickcontam} for the calculations). As for point 3), the residuals in Figure \ref{fig:histores} do not look random in $V_Z$, and the above-mentioned excess and dearth of stars clearly go beyond the Poisson noise computed for each histogram bin as the properly normalized square root of the fit at the bin mid-value (statistically, residuals are expected to be within this noise 68\% of the time). An additional but much smaller excess unaccounted for by the fit is also visible at larger velocities in the $|Z|\leq 500$ pc sample, which is probably caused by halo star contamination.
\begin{figure}[ht!] \plottwo{fig_v2040_bump_500.png}{fig_v2040_bump_200.png} \caption{Residuals from the $V_Z$ sum-of-two-Gaussians fit for the $|Z|\leq 500$ pc sample (left panel) and the $|Z|\leq 200$ pc sample (right panel). Dashed lines mark the corresponding Poisson noise (1$\sigma$) for each histogram bin. The excess at $-40<V_Z<-20$ and the dearth at $25<V_Z<50$ visibly extend beyond the expected noise. Another smaller excess unaccounted for by the fit is also visible at larger velocities in the $|Z|\leq 500$ pc sample.} \label{fig:histores} \end{figure}
Knowing that the scale height of the Thick disk is significantly larger than that of the thin disk ($900 \pm 180$ pc vs. $300 \pm 50$ pc, as summarized by \citet{2016ARA&A..54..529B}), the proportion of Thick disk stars is expected to decrease in samples at smaller heights, just as the North samples show. This is why we limited our thin disk samples to $|Z|<200$ pc, in an attempt to reduce Thick disk contamination, while the Thick disk samples extend to $|Z|<500$ pc.
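The contamination of a kinematically selected thin-disk window by the broader Thick-disk Gaussian can be estimated analytically from the fitted two-Gaussian model. The sketch below illustrates the kind of calculation detailed in the Appendix; the numerical inputs are illustrative values close to those in Table \ref{tab_gfitvz2}, not the exact fitted parameters.

```python
import math

def contamination(f_thick, s_thin, s_thick, v_cut=15.0):
    """Fraction of Thick-disk stars among stars with |V_Z| < v_cut,
    for a mixture of two zero-centered Gaussians where f_thick is the
    Thick-disk fraction of the whole sample."""
    # P(|V_Z| < v_cut) for a zero-centered Gaussian of dispersion s
    p = lambda s: math.erf(v_cut / (s * math.sqrt(2.0)))
    n_thin = (1.0 - f_thick) * p(s_thin)
    n_thick = f_thick * p(s_thick)
    return n_thick / (n_thin + n_thick)

# Illustrative inputs (dispersions in km/s), near the Table values:
print(f"{contamination(0.5, 11.0, 24.0):.2f}")
```

With these inputs the contamination of the $|V_Z|<15$ km s$^{-1}$ window comes out at about 36\%, within the 27--42\% range quoted in the text for the fitted samples.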
The histograms of $|Z|$ for the North and South Gaia cylinder RGB samples show that the number of stars in the South sample decreases more slowly for $|Z|\gtrsim 250$ pc (see Figure \ref{fig_histozz}, left panel), yet the excess of Thick disk stars with respect to the thin disk sample in Table \ref{tab_gfitvz2} was detected at heights below 200 pc. A kernel density estimate plot of $|Z|$ vs. $V_Z$ (see Figure \ref{fig_histozz}, right panel) for the Gaia RGB sample clarifies these findings. The following two features emerge: 1) the outermost and lowest (10\%) iso-density level for both hemispheres extends farther in distance at $-40<V_Z<-20$ than at $25<V_Z<50$. This explains the ``bump'' seen at $-40<V_Z<-20$ and the ``dip'' at $25<V_Z<50$ in Figure \ref{fig:histores}. 2) There are two visible excesses of stars in the southern hemisphere that extend farther in height from the Galactic plane than their northern counterparts, at iso-density levels 30\% ($300\lesssim |Z|\lesssim 450$ pc) and 90\% ($|Z|\lesssim 100$ pc). The first of these excesses is compatible with the South vs. North excess for $|Z|>250$ pc in Figure \ref{fig_histozz} (left panel), while the second may explain the excess of Thick disk stars in the $|Z|<200$ pc cylinder seen in the right panel of Figure \ref{fig:fit2g0}.
\begin{figure}[ht!] \plottwo{fig_histo_zz.png}{fig_kde_contour.png} \caption{Left: Histogram of $|Z|$ for the Gaia Cylinder RGB North and South samples. Right: Kernel Density Estimation for $|Z|$ vs. $V_Z$ on the Gaia Cylinder RGB sample.
This was implemented using the function \texttt{kdeplot} of the Python Seaborn package; the iso-density levels shown are 10\% (outermost) to 90\% (innermost) in steps of 20\%.} \label{fig_histozz} \end{figure}
From Figure \ref{fig:histores} we estimate the numbers of stars present in the ``bump'' and missing from the ``dip'' at the aforementioned $V_Z$ values as follows. For the 500-pc samples, 1254 and 1319 North stars are in the ``bump'' and missing from the ``dip'', of which at most 265 and 368 stars, respectively, can be attributed to noise; 768 and 1451 South stars are in the ``bump'' and missing from the ``dip'', of which at most 276 and 379 stars, respectively, can be attributed to noise. For the 200-pc samples these numbers are, for the North, 754 and 576 with a maximum possible noise of 188 and 258 stars, and, for the South, 338 and 711 with a maximum noise of 198 and 268 stars. As seen from the histograms above, these stars are just a small fraction of the whole RGB sample, amounting to only 0.5 to 1\% of the stars, yet their presence is visible in all the plots analyzed. In light of the $V_Z$ results, we also examined the $V_R$ and $V_\phi$ velocities for similar features. When analyzing the whole RGB cylinder, with the thin and Thick disk populations mixed, it is hard to see an asymmetry except possibly at the innermost density level (see Figure \ref{fig_kde}, left panel). But the same plot for the Gaia Thick disk sample shows some localized asymmetries (see Figure \ref{fig_kde}, right panel). We do not attempt this analysis on the Gaia thin disk sample, as we have already shown that about half of that sample is composed of Thick disk stars.
\begin{figure}[ht!] \plottwo{fig_kde_vphi_vr_cylinder.png}{fig_kde_vphi_vr_THICK.png} \caption{Left: Kernel Density Estimation for $V_\phi$ vs. $V_R$ on the Gaia RGB North and South samples. Parameters as in the previous figure.
Right: Same for the Gaia Thick disk samples.} \label{fig_kde} \end{figure}
\section{Discussion}
Using RAVE data, \citet{williams2013wobbly} studied the kinematical properties of red clump giants within a volume of a few kpc in radius and height around the Sun. They found differences between the North and South in the radial velocity streaming motions, $V_R$. \citet{williams2013wobbly} also found a surprisingly complex behavior of the $V_Z$ velocity component in the solar neighborhood. Interior to the solar circle, stars move upwards above the plane and downwards below the Galactic plane. Exterior to the solar circle, stars both above and below the plane move towards the Galactic plane with velocities up to $|V_Z|=17$ km s$^{-1}$. The authors interpret such behavior of $V_Z$ as a wave of compression and rarefaction in the direction perpendicular to the Galactic plane. Such a wave, as \citet{williams2013wobbly} suggest, could be caused either by a recently engulfed satellite or by the disk spiral arms. Our analysis also shows an anomaly in the $V_Z$ velocity field in the solar neighborhood. The distribution of the $V_Z$ velocity of red giant stars has an excess of stars with velocities of -40 to -20 km s$^{-1}$ (see Figures \ref{fig:fit2g0} and \ref{fig_histozz}, right panel) and a dearth of stars with $V_Z$ velocities of 25 to 50 km s$^{-1}$. Such features are also seen in Figure \ref{fig_histozz}, which shows $|Z|$ vs. $V_Z$ iso-density contours in both the southern and northern Galactic hemispheres. At the same time, the $V_R$ and $V_{\phi}$ velocity components of red giant stars do not show such an anomaly. Several mechanisms can be responsible for the observed peculiarity in the $V_Z$ stellar velocity distribution in the solar neighborhood. The disk is strongly affected by the Galactic bar and spiral structure \citep{2018Natur.561..360A}.
Non-equilibrium phase mixing can occur due to the tidal disturbance of the Galactic disk by the crossing of a Sagittarius-like satellite with mass $\sim 3 \times 10^{10}$ M$_\odot$ \citep{2019MNRAS.486.1167B}. Further study is needed to identify the mechanism responsible for these features. \citet{lee2011formation} used a sample of 17,277 G-dwarfs with measured $[\alpha/Fe]$ ratios and chemically separated the disk into thin and Thick disk populations. \citet{lee2011formation} also used proper motion information for their sample, with typical proper motion errors of about 3-4 mas yr$^{-1}$, together with distances to individual stars estimated with the help of calibrated stellar isochrones. Using these data, \citet{lee2011formation} measured the velocity lag between the chemically separated thin and Thick disks to be nearly constant, $\sim$30 km s$^{-1}$, at any given distance $|Z|$ from the Galactic plane. Our estimate of the velocity lag between the thin and Thick disks is about 14 km s$^{-1}$. We checked the influence of metallicity information on the value of the velocity lag. Cross-matching the sample with the RAVE DR5 catalog and taking the metallicity information into account shows that metallicity does not essentially change the value of the velocity lag of the Thick disk in the solar neighborhood. Recently, \citet{anguiano2020stellar} used stellar metallicity information to discriminate the three primary stellar populations: thin disk, Thick disk and halo. The chemistry-based selection of stars belonging to different Galactic subsystems allowed \citet{anguiano2020stellar} to select 211,820 stars associated with the Milky Way thin disk, 52,709 stars associated by their abundances with the Thick disk, and 5,795 stars belonging to the halo population. The sample of stars used by \citet{anguiano2020stellar} spans approximately $6<R<10$ kpc in Galactocentric cylindrical radius and $-1<Z<2$ kpc in the $Z$-coordinate.
\citet{anguiano2020stellar} found that the chemically selected thin disk has a velocity dispersion $(\sigma_R, \sigma_\phi, \sigma_Z)$ of $(36.81, 24.35, 18.03) \pm (0.07, 0.04, 0.03)$ km s$^{-1}$. For the Thick disk, the authors obtain a velocity dispersion $(\sigma_R, \sigma_\phi, \sigma_Z)$ of $(62.44, 44.95, 41.45) \pm (0.21, 0.15, 0.15)$ km s$^{-1}$. The mean rotational velocity of the chemically selected Thick disk according to \citet{anguiano2020stellar} is $191.82 \pm 0.24$ km s$^{-1}$, and their value of the asymmetric drift between the Thick and thin disks is about 30 km s$^{-1}$. Our analysis of the kinematical properties of red giant stars selected close to the Sun gives a smaller asymmetric drift of about 19 km s$^{-1}$. The discrepancy between the values of the velocity lag of the Thick disk may be due to the fact that the sample of stars selected by \citet{anguiano2020stellar} covers a much larger volume than our sample, selected in close proximity to the Sun. Also, \citet{anguiano2020stellar} did not discuss the completeness of their sample, which can be essential in determining the relative velocity lag between the subsystems. A more serious discrepancy between our study and the results of \citet{anguiano2020stellar} appears in the estimate of the proportion of stars belonging to the different subsystems of the Galaxy. \citet{anguiano2020stellar} estimate that in their data set 81.9 percent of the stars belong to the thin disk, 16.6 percent are Thick disk stars, and about 1.5 percent belong to the Milky Way halo. The local Thick-to-thin density normalization $\rho_{Thick}$ / $\rho_{thin}$ was estimated by \citet{anguiano2020stellar} to be about 2 percent. We find that in our complete, kinematically selected sample of stars the ratio of the local Thick-to-thin number density is about 90 percent (from the last row in Table \ref{tab_gfitvz2}). Two comments should be made here.
First, as \citet{kawata2016milky} have noted, the Thick disks that are selected chemically or kinematically are, strictly speaking, different objects. Our selection criterion allows us to choose a complete sample of stars that have Thick disk {\it kinematics} in the solar neighborhood, i.e., stars that will deviate in the direction perpendicular to the Galactic plane by 1-2 kpc in the near future. The stars selected this way represent the wings of the Thick disk vertical velocity distribution. This result was confirmed independently by fitting the complete sample of stars with the sum of two Gaussian distributions. Incorporating metallicity information for our sample and dividing it into thin ($[Fe/H]>-0.4$) and Thick ($-1<[Fe/H]<-0.4$) disk stars does not change the kinematical properties of the thin and Thick disks in the solar neighborhood. Second, \citet{everall1,everall2} used Gaia photometry and astrometry to estimate the spatial distribution of the Milky Way disk at the solar radius. To correct for sample incompleteness, they used a solution for the selection function of the Gaia source catalog \citep{evebou,boueve} and recovered the densities of the thin and Thick disks in the solar neighborhood, as well as the scale heights of the vertical density distributions of both disks. \citet{everall1} find a ratio of the Thick-to-thin local density of 0.147 $\pm$ 0.005, and a value of the surface density of the stellar disk of 23.17 $\pm$ 0.08 (stat) $\pm$ 2.43 (sys) M$_{\odot}$ pc$^{-2}$. Their first value is considerably lower than our estimate of the local Thick-to-thin density ratio obtained from a complete sample of red giant stars, and their second value is also lower than previous estimates of the surface density of stars in the solar neighborhood, as they themselves conclude. Our finding that the Milky Way Thick disk has a mass comparable to that of the thin disk concurs with the result of \citet{kawata2016milky}.
These authors determined the Milky Way star formation history using the imprint left on the chemical abundances of long-lived stars. \citet{kawata2016milky} find that the formation of the Galactic Thick disk occurred during an intense star formation phase between 9.0 and 12.5 Gyr ago, which was followed by a dip in the star formation rate lasting about 1 Gyr. This intense phase of star formation in the past of the Milky Way resulted in the formation of a massive Galactic Thick disk. In another paper, \citet{lehnert2014milky} compared the star formation history of the Milky Way with the properties of distant disk galaxies. They found that during the first 4 Gyr of its evolution the Milky Way formed stars at a high rate ($\sim 0.6 M_{\odot}$ yr$^{-1}$ kpc$^{-2}$), resulting in the formation of a Thick Milky Way disk with a mass approximately equal to that of the thin Milky Way disk. An additional piece of information comes from the comparison of the Milky Way Thick disk stellar eccentricity distribution with that of simulated disks formed via accretion, radial migration, and gas-rich mergers \citep{wilson2011testing}. The authors find that the broad peak at moderately high eccentricities in the accretion model is inconsistent with the relatively narrow peak at low eccentricity observed in the Milky Way Thick disk, which indicates that the Galactic Thick disk was formed predominantly {\it in situ}. The above-mentioned results are in agreement with the recent high-resolution NEWHORIZON and GALACTICA simulations \citep{park2021exploring}. These authors find that, although spatially separated, the two disks contain overlapping components, so that even in the Galactic mid-plane Thick disk stars contribute on average $\sim$ 30 percent of the total stellar density and about 10 percent of the luminosity. \citet{park2021exploring} conclude that spatially defined thin and Thick disks are not entirely distinct components in terms of their formation process.
The two disks represent parts of a single disk that evolves with time due to continuous star formation and disk heating, which is in fact confirmed by our analysis of the kinematics of stars in the solar neighborhood.
\section{Conclusions}
Using a complete sample of 296,879 RGB stars distributed in a cylinder centered at the Sun with a 1 kpc radius and a half-height of 0.5 kpc, we study the kinematical properties of the Milky Way disk in the solar neighborhood. The analysis of the kinematical properties of RGB stars in the solar neighborhood was done with the help of a two-component fit to the distribution of the $V_Z$ velocity component. Our results can be summarized as follows. 1. The kinematical properties of the selected stars point to the existence of two distinct components: the thin disk, with mean velocities $V_R$, $V_{\phi}$, $V_Z$ of -1, -239, 0 km s$^{-1}$ and velocity dispersions $\sigma_R$, $\sigma_{\phi}$, $\sigma_Z$ of 31, 20 and 11 km s$^{-1}$, respectively. The Thick disk component has, on the other hand, mean velocities $V_R$, $V_{\phi}$, $V_Z$ of +1, -225, 0 km s$^{-1}$ and velocity dispersions $\sigma_R$, $\sigma_{\phi}$, $\sigma_Z$ of 49, 35, and 22 km s$^{-1}$. The completeness of our RGB sample allows us to estimate the density ratio of the thin and Thick disks in the solar neighborhood. We find that Thick disk stars comprise about half the stars of the disk. Such a high density of stars with Thick disk kinematics points to an {\it in situ} rather than {\it ex situ} formation of the Thick disk. 2. The $V_Z$ velocity field has a small but real anomaly in the solar neighborhood. The velocity distribution of red giant stars in the direction perpendicular to the Galactic plane has an excess of stars with $V_Z$ velocities of -40 to -20 km s$^{-1}$ and a dearth of stars with $25<V_Z<50$ km s$^{-1}$. The anomaly is observed in both the Northern and Southern Galactic hemispheres.
\begin{acknowledgments} VK acknowledges support from the Russian Science Foundation (Project No. 18-12-00213-P). \end{acknowledgments} \vspace{5mm} \software{Python v. 3.8.10 (\url{https://www.python.org/}), Topcat \citep{2005ASPC..347...29T}, SciPy \citep{2020SciPy-NMeth}, Seaborn \citep{Waskom2021}}
\head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. 
[21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. 
\begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. \head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. 
Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \texttt{natbib}\ package is compatible with the \texttt{chapterbib} package, which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \texttt{natbib}\ packages are loaded is unimportant.
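As a minimal sketch of this setup (the file names here are hypothetical):
\begin{quote}
\begin{verbatim}
% main.tex -- each \include'd file carries its own bibliography
\documentclass{book}
\usepackage{chapterbib}
\usepackage[round,authoryear]{natbib}
\begin{document}
\include{chapter1}  % chapter1.tex ends with its own
\include{chapter2}  %   \bibliographystyle and \bibliography commands
\end{document}
\end{verbatim}
\end{quote}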
The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \texttt{natbib}\ is also loaded; instead, add the option to \texttt{natbib}. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. \head{Sorting and compressing citations} Do not use the \texttt{cite} package with \texttt{natbib}; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use option \texttt{longnamesfirst} to have the first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \texttt{natbib.cfg}, which is read in after the main package file.
\head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with colons; \item[\ttfamily comma] to use commas as separators; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort} but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography| to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description} \end{document}
\section{Introduction} The nuclei $^{12}$C and $^{16}$O are typical light nuclei, and they have been extensively studied with cluster approaches~\cite{Fujiwara}. Since there is no bound nucleus with mass number 5 or 8, the formation of $^{12}$C from three $^4$He nuclei ($\alpha$ clusters) is a key process of nucleosynthesis. Here, the second $0^+$ state at $E_x = 7.6542$ MeV plays a crucial role; it is the second excited state of $^{12}$C and is located just above the threshold energy for decay into three $^4$He nuclei~\cite{Hoyle}. The existence of this three $\alpha$ state just at this energy is an essential factor in the synthesis of various elements in stars. Also for $^{16}$O, cluster structure has been shown to be extremely important. Although the ground state corresponds to the doubly closed $p$ shell of the shell model, this configuration can also be interpreted from the four $\alpha$ and $^{12}$C+$\alpha$ viewpoints, if we take a certain limit for the inter-cluster distances. Also, the first excited state of $^{16}$O at $E_x = 6.0494$ MeV, very close to the threshold for decay into $^{12}$C and $^4$He, can be interpreted as a $^{12}$C+$^4$He cluster state~\cite{Horiuchi}, and low-lying cluster states just around the threshold are quite important in the synthesis of $^{16}$O in stars~\cite{Descouvemont}. Various cluster models have been proposed and successfully applied to these nuclei. However, a consistent description of $^{12}$C and $^{16}$O has been a long-standing problem of microscopic $\alpha$ cluster models. Here, the definition of a microscopic model is that the wave function is fully antisymmetrized and the effective interaction acts not between $\alpha$ clusters but between nucleons.
When the effective interaction is designed to reproduce the binding energy of $^{16}$O (four $\alpha$), the binding energy of $^{12}$C (three $\alpha$) becomes underbound by about 10 MeV, and on the contrary, when the binding energy of $^{12}$C is reproduced, $^{16}$O becomes overbound by about 20 MeV. We have previously utilized the Tohsaki interaction~\cite{Tohsaki}, which has finite-range three-body terms, and the obtained result is better than those with two-body terms only; however, the problem has not been fully solved~\cite{C-O}. One clue to solving this problem is the inclusion of the spin-orbit interaction. In most cluster models, $\alpha$ clusters are defined as simple $(0s)^4$ configurations at some point. These $\alpha$ clusters are spin-singlet systems, and the spin-orbit interaction contributes neither inside $\alpha$ clusters nor between $\alpha$ clusters. In the $jj$-coupling shell model, the spin-orbit interaction is quite important and plays an essential role in explaining the observed magic numbers. According to the $jj$-coupling shell model, $^{12}$C corresponds to the subclosure configuration of the spin-orbit attractive orbits ($p_{3/2}$), and the spin-orbit interaction works attractively, whereas $^{16}$O corresponds to the closure of a major shell (the $p$ shell), where both spin-orbit attractive ($p_{3/2}$) and repulsive ($p_{1/2}$) orbits are filled and the contribution of the spin-orbit interaction cancels. Therefore, the inclusion of $\alpha$-breaking wave functions and taking the spin-orbit contribution into account are expected to decrease the binding energy difference between $^{12}$C and $^{16}$O. To describe the $jj$-coupling shell model states with the spin-orbit contribution starting from the cluster model wave function, we proposed the antisymmetrized quasi-cluster model (AQCM)~\cite{Simple,Masui,Yoshida2,Ne-Mg,Suhara,Suhara2015,Itagaki}.
In the AQCM, the transition from the cluster structure to the shell-model structure can be described by only two parameters: $R$, representing the distance between $\alpha$ clusters, and $\Lambda$, which characterizes the transition of $\alpha$ cluster(s) to quasi-cluster(s) and quantifies the role of the spin-orbit interaction. In nuclear structure calculations, it is well known that the central part of the nucleon-nucleon interaction should have a proper density dependence in order to satisfy the saturation property of nuclear systems. If we just introduce a simple two-body interaction, for instance the Volkov interaction~\cite{Volkov}, which has been widely used in cluster studies, we have to choose the Majorana exchange parameter appropriately for each nucleus, and a consistent description of two different nuclei with the same Hamiltonian becomes difficult. Thus, it is rather difficult to reproduce the threshold energies for decay into different subsystems. Concerning the density dependence of the interaction, adding a zero-range three-body interaction term yields better agreement with experiment, as in the modified Volkov (MV) interaction~\cite{MV1}. However, in this case the binding energies become quite sensitive to the choice of the size parameter of the Gaussian-type single-particle wave function. In particular, the binding energy and radius of $^4$He cannot be reproduced consistently. This situation is essentially the same in the case of the Gogny interaction widely used in mean-field studies~\cite{Gogny}. The nucleus $^4$He is a building block of $\alpha$ cluster states, and it is desirable that its size and binding energy be reproduced. The Tohsaki interaction, which has finite-range three-body terms, has many advantages compared with zero-range three-body interactions. This interaction is a phenomenological one, designed to reproduce the $\alpha$-$\alpha$ scattering phase shift.
Also, it gives a reasonable size and binding energy for the $\alpha$ cluster, which is rather difficult in the case of a zero-range three-body interaction, and the binding energy is less sensitive to the choice of the size parameter of the Gaussian-type single-particle wave function because of the finite-range effect of the three-body interaction. Furthermore, the saturation properties of nuclear matter are reproduced rather satisfactorily. Of course, introducing a term proportional to a fractional power of the density is another possibility for reproducing the saturation properties of nuclear systems, as in density functional theories (DFT), instead of introducing three-body interaction terms. However, here we perform parity and angular momentum projections, and we also superpose many Slater determinants based on the generator coordinate method (GCM). In this case, it is desirable that the Hamiltonian be expressed in an operator form, such as a three-body interaction, which enables us to calculate the transition matrix elements between different Slater determinants. From this viewpoint, a simplified version of the finite-range three-body interaction was proposed in Ref.~\cite{Enyo}. The purpose of the present work is to combine the finite-range three-body interaction for the interaction part with the AQCM for the wave function part to establish a consistent understanding of $^{12}$C and $^{16}$O. The original Tohsaki interaction gives a small overbinding for $^{16}$O (about 3 MeV)~\cite{C-O}, and here we try to improve this by slightly modifying the three-body Majorana exchange parameter. For $^{12}$C, the subclosure configuration of the $jj$-coupling shell model, where the spin-orbit interaction plays an important role, is coupled to the three $\alpha$ model space based on the AQCM. For $^{16}$O, the closed-shell configuration of the $p$ shell is the dominant configuration of the ground state, and we apply a four $\alpha$ model, which covers the model space of the closed $p$ shell.
Also, the application of the Tohsaki interaction has been limited to $4N$ nuclei, and we add Bartlett and Heisenberg exchange terms to the two-body interaction for the purpose of applying it to neutron-rich systems. \section{The Model\label{model}} \subsection{Hamiltonian} The Hamiltonian ($\hat{H}$) consists of kinetic energy ($\hat{T}$) and potential energy ($\hat{V}$) terms, \begin{equation} \hat{H} = \hat{T} +\hat{V}, \end{equation} and the kinetic energy term is described as a one-body operator, \begin{equation} \hat{T} = \sum_i \hat{t_i} - T_{cm}, \end{equation} where the center of mass kinetic energy ($T_{cm}$), which is constant, is subtracted. The potential energy has central ($\hat{V}_{central}$), spin-orbit ($\hat{V}_{spin-orbit}$), and Coulomb parts. \subsection{Tohsaki Interaction} For the central part of the potential energy ($\hat{V}_{central}$), we adopt the Tohsaki interaction. The Tohsaki interaction consists of two-body ($V^{(2)}$) and three-body ($V^{(3)}$) terms: \begin{equation} \hat{V}_{central} = {1 \over 2} \sum_{ij} V^{(2)}_{ij} + {1 \over 6} \sum_{ijk} V^{(3)}_{ijk}, \end{equation} where $V^{(2)}_{ij}$ and $V^{(3)}_{ijk}$ consist of three terms, \begin{equation} V^{(2)}_{ij} = \sum_{\alpha=1}^3 V^{(2)}_\alpha \exp[- (\vec r_i - \vec r_j )^2 / \mu_\alpha^2] (W^{(2)}_\alpha + M^{(2)}_\alpha P^r)_{ij}, \label{2body} \end{equation} \begin{eqnarray} V^{(3)}_{ijk} = \sum_{\alpha=1}^3 && V^{(3)}_\alpha \exp[- (\vec r_i - \vec r_j )^2 / \mu_\alpha^2 - (\vec r_i - \vec r_k)^2 / \mu_\alpha^2 ] \nonumber \\ \times && (W_\alpha^{(3)} + M_\alpha^{(3)} P^r)_{ij} (W_\alpha^{(3)} + M_\alpha^{(3)} P^r)_{ik}. \end{eqnarray} Here, $P^r$ represents the exchange of the spatial parts of the wave functions of two interacting nucleons, and it is equal to $-P^\sigma P^\tau$ due to the Pauli principle ($P^rP^\sigma P^\tau = -1$), where $P^\sigma$ and $P^\tau$ are the spin and isospin exchange operators, respectively.
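Since $P^r$ multiplies a relative spatial state of orbital angular momentum $\ell$ by $(-1)^\ell$, the Wigner--Majorana mixture acts with an $\ell$-dependent strength (a standard identity, stated here for clarity):
\begin{equation}
\left( W^{(2)}_\alpha + M^{(2)}_\alpha P^r \right) \psi_\ell = \left[ W^{(2)}_\alpha + (-1)^\ell M^{(2)}_\alpha \right] \psi_\ell ,
\end{equation}
so that even and odd partial waves feel the strengths $W^{(2)}_\alpha + M^{(2)}_\alpha$ and $W^{(2)}_\alpha - M^{(2)}_\alpha$, respectively.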
The range parameters $\{ \mu_\alpha \}$ are set to be common for the two-body and three-body parts, and the values are listed in Table~\ref{Tohsaki-2} together with the strengths of the two-body interaction $\{ V^{(2)}_\alpha \}$, the three-body interaction $\{ V^{(3)}_\alpha \}$, and the Majorana exchange parameters of the two-body interaction. The values of the Wigner parameters, $\{W^{(2)}_\alpha \}$, are given as $W^{(2)}_\alpha = 1 - M^{(2)}_\alpha$. We employ the F1 parameter set of Ref.~\cite{Tohsaki}. Until now, the Tohsaki interaction has been applied only to $4N$ nuclei, and in this article we extend its application to neutron-rich nuclei. The original Tohsaki interaction has Wigner and Majorana exchange terms, but spin and isospin exchange terms are missing. Because of this, the interaction gives a weakly bound state for a two-neutron system, as in the case of the Volkov interaction. Therefore, here we add Bartlett ($BP^\sigma$) and Heisenberg ($HP^\tau$) exchange terms in Eq.~\ref{2body} as $(W^{(2)}_\alpha + BP^\sigma - HP^\tau + M^{(2)}_\alpha P^r)_{ij}$. The values of $B$ and $H$ are chosen to be 0.1. By adding these terms, the neutron-neutron interaction (or proton-proton interaction) becomes weaker than in the original interaction, while the $\alpha$-$\alpha$ scattering phase shift is not influenced by this modification. \begin{table} \caption{ Parameter set for the two-body part of the Tohsaki interaction (F1 parameterization in Ref.~\cite{Tohsaki}) together with the strengths of the three-body interaction.} \begin{tabular}{cccccc} \hline \hline $\alpha$ & $\mu_\alpha$ (fm) & $V^{(2)}_\alpha$ (MeV) & $V^{(3)}_\alpha$ (MeV) & $M^{(2)}_\alpha$ & $W^{(2)}_\alpha$ \\ \hline 1 & 2.5 & $-5.00$ & $-0.31$ & 0.75 & 0.25 \\ 2 & 1.8 & $-43.51$& 7.73 & 0.462 & 0.538 \\ 3 & 0.7 & $60.38$ & 219.0 & 0.522 & 0.478 \\ \hline \end{tabular} \\ \label{Tohsaki-2} \end{table} \begin{table} \caption{ Majorana exchange parameters for the three-body interaction terms.
F1 stands for the original F1 set of the Tohsaki interaction~\cite{Tohsaki}, and F1' is the modified version introduced in the present article. } \begin{tabular}{ccc} \hline \hline & F1 & F1' \\ \hline $M^{(3)}_1$ & 0.0 & 0.0 \\ $M^{(3)}_2$ & 0.0 & 0.0 \\ $M^{(3)}_3$ & 1.909 & 1.5 \\ \hline \end{tabular} \\ \label{Tohsaki-3} \end{table} \subsection{Spin-orbit interaction} For the spin-orbit part, the G3RS interaction \cite{G3RS}, a realistic interaction originally determined to reproduce the nucleon-nucleon scattering phase shift, is adopted; \begin{equation} \hat{V}_{spin-orbit}= {1 \over 2} \sum_{ij} V^{ls}_{ij} \end{equation} \begin{equation} V^{ls}_{ij}= V_{ls}( e^{-d_{1} (\vec r_i - \vec r_j)^{2}} -e^{-d_{2} (\vec r_i - \vec r_j)^{2}}) P(^{3}O){\vec{L}}\cdot{\vec{S}}, \label{Vls} \end{equation} where $d_{1} = 5.0$ fm$^{-2}$, $d_{2} = 2.778$ fm$^{-2}$, and $P(^{3}O)$ is a projection operator onto a triplet odd state. The operator $\vec{L}$ stands for the relative angular momentum and $\vec{S}$ is the total spin ($\vec{S} = \vec{S_{1}}+\vec{S_{2}}$). The strength $V_{ls}$ has been determined to reproduce the $^4$He+$n$ scattering phase shift~\cite{Okabe}, and $V_{ls} = 1600-2000$ MeV has been suggested. \subsection{Single particle wave function (Brink model)} In conventional $\alpha$ cluster models, the single-particle wave function has a Gaussian shape \cite{Brink}; \begin{equation} \phi_{i} = \left( \frac{2\nu}{\pi} \right)^{\frac{3}{4}} \exp \left[- \nu \left(\bm{r}_{i} - \bm{R}_i \right)^{2} \right] \eta_{i}, \label{Brink-wf} \end{equation} where $\eta_{i}$ represents the spin-isospin part of the wave function, and $\bm{R}_i$ is a real parameter representing the center of the Gaussian wave function for the $i$-th particle. In this Brink-Bloch wave function, the four nucleons in one $\alpha$ cluster share a common $\bm{R}_i$ value. Hence, the contribution of the spin-orbit interaction vanishes.
\subsection{Single particle wave function in the AQCM} In the AQCM, $\alpha$ clusters are changed into quasi clusters. For nucleons in a quasi cluster, the single-particle wave function is described by a Gaussian wave packet, and the center of this packet, $\bm{\zeta}_{i}$, is a complex parameter; \begin{equation} \psi_{i} = \left( \frac{2\nu}{\pi} \right)^{\frac{3}{4}} \exp \left[- \nu \left(\bm{r}_{i} - \bm{\zeta}_{i} \right)^{2} \right] \chi_{i} \tau_{i}, \label{AQCM_sp} \end{equation} where $\chi_{i}$ and $\tau_{i}$ in Eq.~\eqref{AQCM_sp} represent the spin and isospin parts of the $i$-th single-particle wave function, respectively. The spin orientation is governed by the parameters $\xi_{i\uparrow}$ and $\xi_{i\downarrow}$, which are in general complex, while the isospin part is fixed to be `up' (proton) or `down' (neutron), \begin{equation} \chi_{i} = \xi_{i\uparrow} |\uparrow \ \rangle + \xi_{i\downarrow} |\downarrow \ \rangle,\\ \tau_{i} = |p \rangle \ \text{or} \ |n \rangle. \end{equation} The center of the Gaussian wave packet is given as \begin{equation} \bm{\zeta}_{i} = \bm{R}_i + i \Lambda \bm{e}^{\text{spin}}_{i} \times \bm{R}_i, \label{center} \end{equation} where $\bm{e}^{\text{spin}}_{i}$ is a unit vector along the intrinsic-spin orientation, and $\Lambda$ is a real control parameter describing the dissolution of the $\alpha$ cluster. As one can see immediately, the AQCM wave function with $\Lambda = 0$, which has no imaginary part, is identical to the conventional Brink-Bloch wave function. The AQCM wave function corresponds to the $jj$-coupling shell model wave function, such as a subshell closure configuration, when $\Lambda = 1$ and $\bm{R}_i \rightarrow 0$. The mathematical explanation for this is summarized in Ref.~\cite{Suhara}. For the width parameter, we use the value $\nu = 0.23$ fm$^{-2}$.
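As a sketch of why the $\Lambda = 1$, $\bm{R}_i \rightarrow 0$ limit yields $jj$-coupling orbits (the full derivation is given in Ref.~\cite{Suhara}): taking $\bm{e}^{\text{spin}}_{i} = \hat{z}$ and $\bm{R}_i = R\hat{x}$, Eq.~\eqref{center} gives $\bm{\zeta}_i = R(\hat{x} + i\hat{y})$ with $\bm{\zeta}_i^2 = 0$, and expanding the Gaussian for small $R$,
\begin{equation}
\exp\left[ -\nu \left( \bm{r} - \bm{\zeta}_i \right)^2 \right] \propto e^{-\nu r^2} \left[ 1 + 2\nu R \left( x + iy \right) + \mathcal{O}(R^2) \right].
\end{equation}
The first-order term is proportional to $r Y_{11}(\hat{r})\, e^{-\nu r^2}$, a $p$ orbit with $m_l = +1$; combined with the spin-up spinor, this is the stretched $p_{3/2}$ orbit of the $jj$-coupling shell model.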
\subsection{AQCM wave function of the total system} The wave function of the total system, $\Psi$, is the antisymmetrized product of these single-particle wave functions; \begin{equation} \Psi = {\cal A} \{ \psi_1 \psi_2 \psi_3 \cdots \psi_A \}. \label{total-wf} \end{equation} The projections onto parity and angular momentum eigenstates can be performed by introducing the projection operators $P^J_{MK}$ and $P^\pi$, and these are carried out numerically in the actual calculation. \subsection{Superposition of different configurations} Based on the GCM, different AQCM wave functions can be superposed, \begin{equation} \Phi = \sum_i c_i P^J_{MK} P^\pi \Psi_i. \label{GCM} \end{equation} Here, $\{ \Psi_i\}$ is a set of AQCM wave functions with different values of the $R$ and $\Lambda$ parameters, and the coefficients of the linear combination, $\{ c_i \}$, are obtained by solving the Hill-Wheeler equation~\cite{Brink}. We obtain a set of coefficients $\{c_j\}$ for each energy eigenvalue $E$. \section{Results} We start the application of the Tohsaki interaction with $^6$He. So far, the Tohsaki interaction has been applied only to 4$N$ nuclei, and here a neutron-rich system is examined. Figure \ref{he6conv} shows the energy convergence of the ground $0^+$ state of $^{6}$He. The model space is $\alpha$+$n$+$n$; the positions of the two neutrons are randomly generated, and we superpose the different configurations for the neutrons. The horizontal line at $-27.31$ MeV shows the threshold energy of $^4$He+$n$+$n$ (the experimental value is $-28.29566$ MeV). Here, the dotted line is the result of the original F1 parameter set. We add the Bartlett and Heisenberg terms with the parameters $B = H = 0.1$, and the result is shown as the dashed line.
Furthermore, we slightly modify the Majorana exchange parameter of the three-body interaction term, and the result of the F1' parameter set is shown as the solid line (this modification is made to reproduce the binding energy of $^{16}$O, which will be discussed shortly). For the spin-orbit part, the strength $V_{ls}$ has been determined to be $1600-2000$ MeV in the analysis of the $^4$He+$n$ scattering phase shift~\cite{Okabe}, and here we adopt $V_{ls} = 1600$ MeV. The dashed and solid lines are similar and difficult to distinguish. Nevertheless, by adding the Bartlett and Heisenberg terms, we obtain a binding energy close to the experimental one (the experimental value of $S_{2n}$ is 0.975 MeV). \begin{figure}[t] \centering \includegraphics[width=6.0cm]{he6conv.eps} \caption{ The energy convergence of the ground $0^+$ state of $^{6}$He with an $\alpha$+$n$+$n$ model. We prepare different configurations for the two neutrons outside $^4$He and superpose them. The horizontal line at $-27.31$ MeV shows the threshold energy of $^4$He+$n$+$n$. The dotted line is the result of the original F1 parameter set, and the dashed line is the result after adding $B=H=0.1$. The result of the F1' parameter set is shown as the solid line. } \label{he6conv} \end{figure} \begin{figure}[t] \centering \includegraphics[width=6.0cm]{o16-aa.eps} \caption{ The $0^+$ energy of $^{16}$O with a tetrahedron configuration of four $\alpha$'s as a function of the distance between $\alpha$ clusters. The dotted line is the result of the original F1 parameter set. The result of the F1' parameter set is shown as the solid line. The dashed line is the result calculated using the Volkov No.2 interaction~\cite{Volkov} with $M = 0.63$. } \label{O16aa} \end{figure} For $^{16}$O, according to the $jj$-coupling shell model, the ground state corresponds to the closure of a major shell (the $p$ shell), where both spin-orbit attractive ($p_{3/2}$) and repulsive ($p_{1/2}$) orbits are filled and the contribution of the spin-orbit interaction cancels.
Therefore, $\alpha$-breaking configurations are not expected to mix strongly, and here we introduce a four $\alpha$ model space, which is known to coincide with the closed $p$ shell configuration in the limit of zero relative distance between $\alpha$ clusters. The $0^+$ energy of $^{16}$O with a tetrahedron configuration of four $\alpha$'s is shown in Fig.~\ref{O16aa} as a function of the relative distance between $\alpha$ clusters. The dotted line is the result of the original F1 parameter set, and the result of the newly introduced F1' parameter set is shown as the solid line. Here, F1' is designed to avoid the small overbinding of $^{16}$O found when the original F1 parameter set is used, and the solid line is less attractive than the dotted line by about $2 \sim 3$ MeV. The energy of $^{16}$O is obtained by superposing the basis states with $\alpha$-$\alpha$ distances of 0.1, 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 fm based on the GCM, and the newly introduced F1' parameter set gives $-126.9$ MeV for the ground state, compared to the experimental value of $-127.619293$ MeV. \begin{figure}[t] \centering \includegraphics[width=6.0cm]{o16-rms.eps} \caption{ The matter rms radius for the $0^+$ state of $^{16}$O calculated with the tetrahedron configuration of four $\alpha$'s. The horizontal axis shows the relative distance between $\alpha$ clusters. } \label{O16rms} \end{figure} \begin{figure}[t] \centering \includegraphics[width=6.0cm]{aa-en.eps} \caption{ The $0^+$ energy curves of $^{8}$Be calculated with the $\alpha$+$\alpha$ model as a function of the relative $\alpha$-$\alpha$ distance. The dotted and solid lines are the results calculated with the original F1 parameter set and the newly introduced F1' parameter set, respectively. The horizontal line at $-54.61$ MeV shows the threshold energy of $\alpha$+$\alpha$.
} \label{aa-en} \end{figure} The observed root mean square (rms) radius of $^{16}$O is quite large; the charge radius is 2.69 fm \cite{Angeli}, and we often underestimate the radius by 0.1-0.2 fm if we calculate with four $\alpha$ models and use only a two-body effective interaction such as the Volkov interaction~\cite{Volkov}. The dashed line in Fig.~\ref{O16aa} is the result calculated using the Volkov No.2 interaction with $M = 0.63$. In this case, the energy curve is much shallower, and the feature of the curve is quite different from the Tohsaki interaction cases. The dashed line shows an energy minimum around an $\alpha$-$\alpha$ distance of 2 fm. Using the Tohsaki interaction with finite-range three-body interaction terms, the solid line in Fig.~\ref{O16aa} shows that the lowest energy is obtained at an $\alpha$-$\alpha$ distance of 2.5 fm, larger than the result of the Volkov interaction (dashed line) by 0.5 fm. The matter rms radius for the $0^+$ state of $^{16}$O with a tetrahedron configuration of four $\alpha$'s is shown in Fig.~\ref{O16rms} as a function of the distance between $\alpha$ clusters. When the $\alpha$-$\alpha$ distance is 2.5 fm, which gives the lowest energy in the Tohsaki interaction cases, the matter radius is 2.49 fm. This matter radius decreases to 2.35 fm if the $\alpha$-$\alpha$ distance is 2.0 fm, which gives the lowest energy in the Volkov interaction case. The ground state of $^{16}$O obtained by superposing the basis states with $\alpha$-$\alpha$ distances of 0.1, 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 fm gives an rms matter radius of 2.49 fm (Tohsaki interaction, F1' parameter set). This value corresponds to a charge radius of 2.64 fm, and the experimental value is almost reproduced. Here we compare the F1 (original) and F1' (modified) parameter sets of the Tohsaki interaction. The $0^+$ energy curves of $\alpha$-$\alpha$ ($^8$Be) calculated with F1 (dotted line) and F1' (solid line) are compared in Fig.~\ref{aa-en}.
As explained previously, F1' is designed to avoid the small overbinding of $^{16}$O calculated with F1, and the solid line is slightly more repulsive at short $\alpha$-$\alpha$ relative distances. However, the difference is quite small, less than 1 MeV, and the character of the original F1, namely that the $\alpha$-$\alpha$ scattering phase shift is reproduced, is not influenced by this modification. For $^{12}$C, we prepare $jj$-coupling ($\alpha$-breaking) components of the wave functions based on the AQCM. We introduce basis states with equilateral triangular configurations of three $\alpha$ clusters with relative distances of $R$ = 0.5, 1.0, 1.5, 2.0, 2.5, and 3.0 fm and change the $\alpha$ clusters to quasi clusters by introducing the dissolution parameter $\Lambda$. The values of $\Lambda$ are chosen to be 0.2 and 0.4, since states in between pure three $\alpha$ clusters ($\Lambda = 0$) and the $jj$-coupling shell model limit ($\Lambda = 1$) are known to be important for the description of the ground state. In addition to these 12 basis states, we prepare 28 basis states with various three $\alpha$ configurations by randomly generating the Gaussian center parameters. This is because the Hoyle state is a gas-like state without a specific shape, and it is known that not only the equilateral triangular configuration but also various three $\alpha$ cluster configurations couple in this state. By superposing these 40 basis states based on the GCM and diagonalizing the Hamiltonian, energy eigenstates are obtained. The F1' parameter set of the Tohsaki interaction is adopted for the central part. In Fig.~\ref{c12-level}, the ground $0^+$, first $2^+$, and second $0^+$ states of $^{12}$C are shown together with the calculated three $\alpha$ threshold energy (dotted line). The strength of the spin-orbit interaction, $V_{ls}$ in Eq.~\ref{Vls}, is chosen as $V_{ls} =$ 0 MeV, 1600 MeV, 1800 MeV, and 2000 MeV in (a), (b), (c), and (d), respectively.
The reasonable range of the strength, $V_{ls} = 1600-2000$ MeV, has been suggested by the $^4$He+$n$ scattering phase shift analysis~\cite{Okabe}. Without the spin-orbit interaction ($V_{ls} =$ 0 MeV), the ground $0^+$ state of $^{12}$C is obtained at $-85.2$ MeV in Fig.~\ref{c12-level} (a), compared with the experimental value of $-92.161726$ MeV. However, with the spin-orbit effect, the ground state is obtained at $-87.8$ MeV in Fig.~\ref{c12-level} (b) ($V_{ls} =$ 1600 MeV), $-88.9$ MeV in (c) ($V_{ls} =$ 1800 MeV), and $-90.5$ MeV in (d) ($V_{ls} =$ 2000 MeV). Therefore, the absolute value of the binding energy of $^{12}$C can be almost reproduced with the present interaction and model, together with the binding energies of $^4$He and $^{16}$O. If we measure the energy from the three $\alpha$ threshold energy, the ground state of $^{12}$C is at $-3.3$ MeV in Fig.~\ref{c12-level} (a), $-5.9$ MeV in (b), $-7.0$ MeV in (c), and $-8.6$ MeV in (d), compared with the experimental value of $-7.2747$ MeV. Thus the binding energy from the three $\alpha$ threshold is also reproduced in the case of $V_{ls} = 1800$ MeV (Fig.~\ref{c12-level} (c)). The famous Hoyle state, the second $0^+$ state experimentally observed at $E_x = 7.65420$ MeV, appears at $E_x = 6.1$ MeV in Fig.~\ref{c12-level} (a), $E_x = 7.5$ MeV in (b), $E_x = 7.9$ MeV in (c), and $E_x = 8.6$ MeV in (d). Measured from the three $\alpha$ threshold energy, these energies correspond to $E_x = 2.8$ MeV in Fig.~\ref{c12-level} (a), $E_x = 1.6$ MeV in (b), $E_x = 0.9$ MeV in (c), and $E_x = -0.1$ MeV in (d). Here again $V_{ls} = 1800$ MeV gives reasonable agreement with the experiment; however, the result implies that we need a slightly larger number of basis states to reproduce the Hoyle state just above (experimentally 0.38 MeV) the three $\alpha$ threshold energy. \begin{figure}[t] \centering \includegraphics[width=6.0cm]{c12-level.eps} \caption{ The energy levels of $^{12}$C.
Here (a) is the result without the spin-orbit interaction, and (b), (c), and (d) show the results calculated with 1600 MeV, 1800 MeV, and 2000 MeV for the strength of the spin-orbit terms of the G3RS interaction ($V_{ls}$ in Eq.~\ref{Vls}), respectively. The dotted line at $-81.92$ MeV shows the three $\alpha$ threshold energy. } \label{c12-level} \end{figure} The traditional three $\alpha$ cluster models have a serious problem in that they give a very small level spacing between the ground $0^+$ state and the first $2^+$ state, which is the first excited state of $^{12}$C~\cite{Fujiwara}. In our case, the $V_{ls} = 0$ MeV result in Fig.~\ref{c12-level} (a) shows that the level spacing is 2.5 MeV, much smaller than the experimental value of 4.4389131 MeV. This is improved by the spin-orbit effect, since the excitation from the ground $0^+$ state to the $2^+$ state corresponds to a one-particle one-hole excitation to a spin-orbit unfavored orbit from the closed configuration of spin-orbit attractive orbits in the $jj$-coupling shell model. The $0^+$-$2^+$ level spacing becomes 3.1 MeV in (b), 3.7 MeV in (c), and 4.6 MeV in (d). A similar trend has been reported in recent antisymmetrized molecular dynamics~\cite{EnyoC} and fermionic molecular dynamics~\cite{Neff} calculations. The appearance of negative parity states at low excitation energies has been considered as a signature of the importance of the three $\alpha$ cluster structure in $^{12}$C. Experimentally, $3^-$ and $1^-$ states have been observed at $E_x = 9.6415$ MeV and $E_x = 10.84416$ MeV, respectively. These $3^-$ and $1^-$ states are reproduced at 9.8 MeV and 12.6 MeV, respectively, when the strength of the spin-orbit interaction is chosen as $V_{ls} = 1800$ MeV, which gives reasonable results for the $0^+$ and $2^+$ states. \section{Summary}\label{summary} We have tried to achieve a consistent description of $^{12}$C and $^{16}$O, which has been a long-standing problem of microscopic cluster models.
By taking into account the coupling with the $jj$-coupling shell model and utilizing the Tohsaki interaction, which contains finite-range three-body interaction terms, we have shown that a consistent understanding of these nuclei can be achieved. The original Tohsaki interaction gives a small overbinding of about 3 MeV for $^{16}$O, and this is improved by slightly modifying the three-body Majorana exchange parameter. Also, so far the application of the Tohsaki interaction has been limited to $4N$ nuclei; here, we added Bartlett and Heisenberg exchange terms in the two-body interaction for the purpose of applying it to neutron-rich systems. By applying the Tohsaki interaction with finite-range three-body interaction terms to $^{16}$O, the lowest energy of the tetrahedron configuration of four $\alpha$'s is obtained at a rather large $\alpha$-$\alpha$ distance (2.5 fm). After performing GCM, the ground state is obtained with a charge radius of 2.64 fm, compared with the observed value of 2.69 fm. With four $\alpha$ models we often underestimate the radius by 0.1-0.2 fm if we calculate only within two-body effective interactions, and this is significantly improved. For $^{12}$C, we prepared various three $\alpha$ configurations by randomly generating Gaussian center parameters, and we mixed $jj$-coupling ($\alpha$ breaking) components based on AQCM. The ground $0^+$ state of $^{12}$C is obtained at $-88.0 \sim -90.5$ MeV with a reasonable strength of the spin-orbit interaction, compared with the experimental value of $-92.2$ MeV. The absolute value of the binding energy of $^{12}$C (and also $^4$He and $^{16}$O) can be almost reproduced with the present interaction and model. If we measure the energy from the three $\alpha$ threshold energy, the agreement with the experiment is even more reasonable. The famous Hoyle state (second $0^+$ state) is reproduced just around the three $\alpha$ threshold energy.
Also, traditional three $\alpha$ cluster models give a very small level spacing between the ground $0^+$ state and the first $2^+$ state, and this is significantly improved by the spin-orbit effect. The appearance of negative parity states at low excitation energies has been considered as a signature of the three $\alpha$ cluster structure of $^{12}$C. Experimentally, $3^-$ and $1^-$ states have been observed at $E_x = 9.6415$ MeV and $E_x = 10.84416$ MeV, respectively, and these states are also reproduced within the present framework. \begin{acknowledgments} The author thanks Prof. A. Tohsaki for valuable discussions. Numerical calculations have been performed at the Yukawa Institute for Theoretical Physics, Kyoto University. This work was supported by JSPS KAKENHI Grant Number 716143900002. \end{acknowledgments}
\section{Introduction} The problem of superdiffusion and subdiffusion processes, and the corresponding correlated random walks, has received attention within the literature to describe many physical scenarios, mostly for crowded systems~\cite{metzler2000random,bouchaud1990anomalous}. Superdiffusion of proteins within cells and diffusion in porous media are important examples. The problem involves a large number of phenomena, ranging from heartbeat intervals to DNA sequences~\cite{buldyrev1994fractals}. Recently, anomalous diffusion was found in several systems, including single-particle movements in the cytoplasm~\cite{regner2013anomalous}, worm-like micellar solutions~\cite{jeon2013anomalous}, telomeres in the nucleus of cells~\cite{bronstein2009transient}, and ultra-cold atoms~\cite{sagi2012observation}. When environmental disorder is present, the problem falls into the more general class of critical phenomena on imperfect and fractal hosts~\cite{dhar1977lattices,dhar1978d,gefen1980critical}. This situation is realized simply by forbidding the diffusing particles to enter some islands, which causes their diffusion properties to change. For example, by modeling the imperfections of the host media with uncorrelated percolation theory, it has been shown that the diffusion becomes anomalous~\cite{havlin1987diffusion,bouchaud1990anomalous,sahimi2011flow}, and that the fractal dimension ($D_f$) of the trace of the particles changes considerably~\cite{kremer1981self,cheraghalizadeh2018self,daryaei2014loop,rammal1984self,chakrabarti1981statistics}. When the occupation probability of the host media $p$ is larger than the percolation threshold ($p_c$), two different scales are present: for small scales (compared with the correlation length of the percolation system) the exponents are anomalous, whereas for large scales the walkers behave in the same manner as in the regular system~\cite{ben2000diffusion}.
The problem is completely different for the critical case ($p=p_c$), for which the anomalous exponents are obtained at all scales~\cite{daryaei2014loop}. \\ Anomalous diffusion problems can mostly be realized by correlated random walks. The loop-erased random walk (LERW) is a correlated self-avoiding random trace which has deep relations with important statistical models. It is defined as the trace produced by a random walker who erases all loops of the trace. It is well-known that on the two-dimensional regular lattice the fractal dimension of these traces is $\frac{5}{4}$, corresponding to a $\nu=\frac{4}{5}$ diffusion process~\cite{Majumdar1992Exact} ($\nu$ being the end-to-end exponent, defined in the following sections). For simulating the $\nu=4/5$ diffusion process, one can consider LERW traces and test their properties, which is the purpose of the present paper. This model is related to uniform spanning trees~\cite{wilson1996symposium}, the BTW sandpiles~\cite{Majumdar1992Equivalence,najafi2012avalanche} and the $q$-state Potts model~\cite{Majumdar1992Exact}. The properties of the model have been well-studied on regular lattices; e.g., its upper-critical dimension is $d_u=4$, and in two dimensions it is related to the Schramm-Loewner evolution (SLE) with the diffusivity parameter $\kappa=2$, which corresponds to the $c=-2$ conformal field theory. The ghost action of this universality class has elegantly been extracted by Bauer \textit{et al.}~\cite{bauer2008lerw}, who have also considered the problem in (mass-less ghosts) and out (massive ghosts) of criticality. Despite this huge literature, little attention has been paid to the response of the diffusing particles to the configuration of the allowed areas of the host system. In~\cite{daryaei2014loop}, the authors revealed that loop-erased random walkers create non-intersecting traces with $D_f\approx 1.22$ on the critical uncorrelated percolation lattice.
In that problem the configuration of diluted sites is uncorrelated. One may study the same problem on a correlated percolation lattice, in order to investigate the diffusion properties of natural systems. In nature, the environmental disorder is commonly correlated, which is related to the process of formation of the host media~\cite{cheraghalizadeh2017mapping}. In this case, it is desirable to have a parameter that tunes the correlation as well as the fraction of active sites. This is the aim of the present paper. We bring the correlations into the calculations by means of the Ising model with an artificial temperature $T$, which tunes the correlations between the (quenched) environmental imperfections in the host media in which the loop-erased random walkers take their steps. We show that these correlations crucially change the properties of the model with respect to the uncorrelated percolation system.\\ The paper is organized as follows: In the following section, we motivate this study and introduce and describe the model. The numerical methods and the results are presented in Section~\ref{NUMDet}, which contains critical as well as off-critical results. We end the paper with a conclusion. \section{The construction of the problem} \label{sec:model} \begin{figure} \centerline{\includegraphics[scale=.35]{Movement.pdf}} \caption{A schematic set up of the LERW on the site-diluted lattice. The random walker reaches the central site, having $n=2$ possibilities to move in the next step (the front site is not active ($s(i,j+1)=-1$), and the random walker cannot return to $(i,j-1)$ since this would form a loop that should be removed). Therefore, the random walker moves right or left with equal probabilities $p=1/2$.} \label{fig:Movement} \end{figure} Consider a diffusion process in which the mean squared displacement (MSD) $\left\langle R^2\right\rangle $ scales with time. In a typical diffusion process the MSD scales linearly with time in any dimension.
Anomalous diffusion is, however, a diffusion process with a non-linear relationship to time, i.e. $\left\langle R^2\right\rangle =D t^{2\nu}$, where $D$ is the diffusion coefficient and $\nu=0.5$ for the ordinary diffusion process. If $\nu>0.5$ ($\nu<0.5$) the system is in the superdiffusion (subdiffusion) phase. \\ Diffusion processes can be realized by random walks, and the above-mentioned non-linearity in time arises from correlations in the random walks. Two important examples are self-avoiding walks (SAW) and loop-erased random walks (LERW). The LERW is an important prototype of a superdiffusion process and is analyzed in this paper. For this model $\nu=0.8$~\cite{lawler1999loop}. The LERW is defined as the trace of a random walker with removed loops. To be more precise, let $\left\lbrace \vec{r}_i\right\rbrace_{i=1}^{N}$ be a normal two-dimensional path of a random walker (which may include some loops). A loop with length $l$ is defined as the sub-path $\left\lbrace \vec{r}_{i_0+i}\right\rbrace_{i=0}^{l}$ with $\vec{r}_{i_0}=\vec{r}_{i_0+l}$. Removing this loop is equivalent to considering the path $\vec{r}_1,\vec{r}_2,...,\vec{r}_{i_0},\vec{r}_{i_0+l+1},...,\vec{r}_{N}$. By repetition of this procedure, we obtain a loop-less path, which is a LERW. It is well-known that this model belongs to the $c=-2$ conformal field theory~\cite{lawler2011conformal}, and to the $\kappa=2$ Schramm-Loewner evolution (SLE$_{\kappa=2}$) in the continuum limit~\cite{schramm2011scaling}.\\ An important question arises concerning the effect of the geometry of the metric space on this superdiffusion process. In geometries of non-integral dimensionality, the properties of critical phenomena change considerably~\cite{dhar1977lattices,dhar1978d}, and depend on the fractal dimension, the order of ramification and the connectivity of the host system~\cite{gefen1980critical}.
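The loop-removal procedure just described can be sketched in a few lines; this is a minimal illustration on an abstract path (sites may be 2D lattice coordinates), not the optimized sampler used in large-scale simulations:

```python
def loop_erase(path):
    """Erase loops in order of appearance: whenever a site is revisited,
    the sub-path between the two visits (the loop) is removed."""
    erased = []
    for site in path:
        if site in erased:
            # keep the path only up to the first visit of `site`
            del erased[erased.index(site) + 1:]
        else:
            erased.append(site)
    return erased

# the walk 0 -> 1 -> 2 -> 3 -> 1 -> 4 contains the loop 1 -> 2 -> 3 -> 1
print(loop_erase([0, 1, 2, 3, 1, 4]))  # -> [0, 1, 4]
```

Repeating this site-by-site scan implements exactly the chronological repetition of the loop-removal step defined above.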
The LERW on the uncorrelated percolation lattice has been studied in~\cite{daryaei2014loop}, in which it was shown that for the critical host cluster $p=p_c$, the fractal dimension is equal to $D_f\approx 1.22$. To bring correlations into the diluteness pattern of the host media, one may use a binary model with tunable correlations. The Ising model is a good candidate for this purpose, since it deals with artificial spins ($s$), and the correlations are tuned by an artificial temperature ($T$). In the present paper the spins play the role of the field of activity-inactivity (diluteness pattern) of the media. We consider the majority-spin sites as the active sites through which the random walkers can pass. To this end, we identify the spanning cluster (the set of connected active sites which connects two opposite boundaries), and let the random walkers have dynamics only over the active area, i.e. walks are only possible on the active ($s=+1$) sites of the spanning cluster. A site $j$ of the cluster of the active area has the coordination number $z_j\equiv \sum_{i\in \delta_j}\delta_{s_i,1}$, in which $\delta_j$ is the set of neighbors of the $j$th site and $\delta$ is the Kronecker delta function. This is schematically shown in Fig.~\ref{fig:Movement}, for which the coordination number of the central site is three. It is notable that the activity configuration of the media is quenched, i.e. once an Ising configuration is obtained, LERW samples are generated on the resulting dilute lattice. \\ If we denote the Ising spins by $s$, then $s=+1$ ($s=-1$) is attributed to the active (inactive) sites, and the temperature is the control parameter which tunes the correlations of the host system.
The Ising Hamiltonian is \begin{equation} H=-J\sum_{\left\langle i,j\right\rangle}s_is_j-h\sum_{i}s_i, \ \ \ \ \ s_i=\pm 1 \label{Eq:Ising} \end{equation} in which $J$ is the coupling constant, $h$ is the magnetic field, $s_i$ and $s_j$ are the spins at the sites $i$ and $j$ respectively, and $\left\langle i,j \right\rangle$ denotes that the sites $i$ and $j$ are nearest neighbors. $J>0$ corresponds to a positively correlated host system, whereas $J<0$ corresponds to a negatively correlated one. The artificial temperature $T$, in addition to being the control parameter of the correlations, also controls the ratio of the number of active sites to the total number of sites. This population can also be directly controlled by $h$, which determines the preferred direction of the spins in the Ising model. For $h=0$ (which is the case in the present paper) the model is well-known to exhibit a non-zero magnetization per site $M=\left\langle s_i\right\rangle $ at temperatures below the critical temperature $T_c$. For the 2D regular Ising model at $h=0$ the magnetic transition and the percolation transition of the majority-spin clusters occur simultaneously~\cite{delfino2009field}, although this is not the case for all versions of the Ising model, e.g. for the site-diluted Ising model~\cite{najafi2016monte}. \\ We define the Ising model on the $L\times L$ square lattice. Then, by simulating the model of Eq.~\ref{Eq:Ising} for $h=0$, an Ising sample at a temperature $T\leq T_c$ is made, and some random walkers start their motion from the boundaries on the largest spanning cluster of the sample as the host media. Some samples of this process are shown in Fig.~\ref{IsingSamples}. The first two figures show the Ising-correlated site-diluted lattice at the two artificial temperatures $T=1.9$ and $T=2.269$; LERWs are evident in these figures. In the last two figures we show the diffusion of particles for two other Ising samples at the same temperatures. The forbidden islands are evident in these figures.
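For concreteness, the energy of Eq.~\ref{Eq:Ising} at $h=0$ on a periodic square lattice can be evaluated as follows; this is only a sketch of the energy function (the configurations used in the paper are generated with the Wolff cluster algorithm, not shown here):

```python
import numpy as np

def ising_energy(s, J=1.0):
    """Energy of the Ising Hamiltonian at h = 0 on a periodic square
    lattice; each nearest-neighbor bond is counted exactly once."""
    return -J * np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

s_up = np.ones((8, 8), dtype=int)   # fully ordered configuration
print(ising_energy(s_up))           # -> -128.0, i.e. -2J per site
```

The fully ordered state gives $-2JN$ (two bonds per site), the ground-state energy at $T=0$.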
A random walk is terminated when it reaches one of the boundaries. \begin{figure*} \begin{subfigure}{0.43\textwidth}\includegraphics[width=\textwidth]{sampleLowT.pdf} \caption{} \label{fig:sampleLowT} \end{subfigure} \centering \begin{subfigure}{0.43\textwidth}\includegraphics[width=\textwidth]{sampleHighT.pdf} \caption{} \label{fig:sampleHighT} \end{subfigure} \begin{subfigure}{0.43\textwidth}\includegraphics[width=\textwidth]{500.pdf} \caption{} \label{fig:500} \end{subfigure} \centering \begin{subfigure}{0.43\textwidth}\includegraphics[width=\textwidth]{500xy.pdf} \caption{} \label{fig:500xy} \end{subfigure} \caption{(Color online): An Ising sample with a loop-erased random walker ($n=1500$ steps) for (a) $T=1.9\ll T_c$ and (b) $T=T_c\approx 2.269$. The diffusion process: the particle traces in the background of the Ising-correlated lattice for (c) $T=1.9$, and (d) $T=T_c$.} \label{IsingSamples} \end{figure*} \subsection{Numerical details} We have used the Wolff Monte Carlo method to simulate the system in the vicinity of the critical point to avoid the problem of critical slowing down. Our ensemble averaging contains both random-walk averaging and Ising-configuration averaging. For the latter we have generated $2\times 10^3$ uncorrelated Ising samples for each temperature on the lattice of size $L=2048$. To make the Ising samples uncorrelated, between each successive sampling we have applied $L^2/3$ random spin flips and let the sample equilibrate for $500L^2$ Monte Carlo steps. The main lattice has been chosen to be square, for which the Ising critical temperature is $T_c\approx 2.269$. Only samples with temperatures $T\leq T_c$ have been generated, since the spanning clusters (active space) are present only in this case. As stated in the previous section, the random walkers move only on the active space.
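The preparation of the active space can be sketched as follows; here scipy's `ndimage.label` stands in for the cluster-identification step, and the spin configuration is a random placeholder rather than a thermalized Ising sample:

```python
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(7)
spins = rng.choice([-1, 1], size=(64, 64))   # placeholder spin configuration

# label nearest-neighbor clusters of active (s = +1) sites; scipy's
# `label` plays the role of a Hoshen-Kopelman-style cluster finder
clusters, n_clusters = label(spins == 1)

# restrict the walkers to the largest cluster of active sites
sizes = np.bincount(clusters.ravel())[1:]    # skip the background label 0
largest = np.argmax(sizes) + 1
active_area = clusters == largest
```

In the actual simulations one additionally checks that the selected cluster spans two opposite boundaries before launching the walkers.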
The temperatures considered in this paper are $T=T_c-\delta t_1\times i$ ($i=1,2,...,5$ and $\delta t_1=0.01$), to obtain statistics in the close vicinity of the critical temperature $T_c\simeq 2.269$, and $T=T_c-\delta t_2\times i$ ($i=1,2,...,10$ and $\delta t_2=0.05$) for the more distant temperatures. To equilibrate the Ising sample and obtain the desired samples we have started from high temperatures ($T>T_c$). For each temperature, $2\times 10^6$ LERWs were generated on $2\times 10^3$ Ising samples. We have used the Hoshen-Kopelman~\cite{hoshen1976percolation} algorithm for identifying the clusters in the lattice. \section{Measures and results}\label{NUMDet} The winding-angle statistics is a very powerful method to extract the key parameters of a two-dimensional critical system, i.e. the diffusivity parameter of the Schramm-Loewner evolution (SLE). The SLE theory aims to describe the interfaces of two-dimensional (2D) critical models via growth processes, and classifies them into one-parameter classes represented by $\kappa$. Thanks to this theory, a deep connection between the local properties and the global (geometrical) features of the 2D critical models has been discovered. These non-intersecting interfaces are assumed to have two essential properties, conformal invariance and the domain Markov property~\cite{cardy2005sle}. For more information on the SLE theory see \cite{cardy2005sle,lowner1923untersuchungen}. The relation between the fractal dimension of the curves $D_f\equiv \frac{1}{\nu}$ and the diffusivity parameter ($\kappa$) is $D_f=1+\frac{\kappa}{8}$. Also the CFT/SLE correspondence is expressed by a relation between the central charge of the CFT and the diffusivity parameter of SLE, namely $c=\frac{(6-\kappa)(3\kappa-8)}{2\kappa}$.
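The two SLE relations quoted above can be checked numerically for the regular-lattice LERW value $\kappa=2$:

```python
def df_from_kappa(kappa):
    # fractal dimension of an SLE_kappa trace
    return 1 + kappa / 8

def central_charge(kappa):
    # CFT/SLE correspondence
    return (6 - kappa) * (3 * kappa - 8) / (2 * kappa)

kappa = 2.0                      # LERW on the regular lattice
print(df_from_kappa(kappa))      # -> 1.25, i.e. nu = 1/D_f = 4/5
print(central_charge(kappa))     # -> -2.0, the c = -2 CFT
```

Both numbers reproduce the known LERW values, $D_f=5/4$ and $c=-2$.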
\\ There are many tests for SLE, most importantly the left passage probability~\cite{najafi2013left,Najafi2015Fokker}, the direct SLE mapping~\cite{Najafi2012Observation,najafi2015observation} and the winding angle statistics~\cite{duplantier1988winding}. The winding angle $\theta$ is the total angle by which the trace winds between its end point and a fixed global direction, and its statistics obey \begin{equation} \text{Var}\left[ \theta\right] = \kappa\log R \end{equation} in which $\text{Var}\left[ \theta\right]\equiv \left\langle \theta^2\right\rangle-\left\langle \theta\right\rangle^2$, $\left\langle \right\rangle$ means ensemble averaging, and $R$ is the end-to-end distance. Noting that $R$ scales with the trace length (or equivalently the time) $l$ in the form $R\sim l^{\nu}$ (which defines the $\nu$ exponent), one finds that $\text{Var}\left[ \theta\right] = \kappa\nu\log l$. For the LERW at $T=0$ (the regular lattice), this slope is $1.6$. Also, the fractal dimension of a curve (in the box-counting scheme) is defined by the relation $l(L)\sim L^{D_f}$, in which $L$ is the linear size of a box which contains a curve of length $l$.\\ \begin{figure*} \centering \begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{r_Tc.pdf} \caption{} \label{fig:r_Tc} \end{subfigure} \begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{WA-Tc.pdf} \caption{} \label{fig:WA-Tc} \end{subfigure} \caption{(Color online): (a) $\log_{10}\left\langle R^2\right\rangle^{\frac{1}{2}}$ in terms of $\log_{10}N$ for $T=T_c$. Inset: $\log_{10}\left\langle N(L)\right\rangle$ in terms of $\log_{10}L$. (b) $\left\langle \theta^2\right\rangle $ in terms of $\ln N$ with the slope $\gamma=1.52\pm 0.02$ for $T=T_c$. Upper inset: the distribution function of $\theta$ for various values of $N$. Lower inset: $\left\langle \theta\right\rangle$ in terms of $N$, which is identically zero.} \label{fig:Tc} \end{figure*} Now let us consider the critical lattice, i.e. $T=T_c$.
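As an aside, the relation $\text{Var}[\theta]=\kappa\nu\log l$ quoted above can be illustrated with synthetic Gaussian winding angles; a least-squares fit of the variance against $\ln l$ recovers the regular-lattice slope $\kappa\nu=1.6$ (the data here are generated, not measured):

```python
import numpy as np

rng = np.random.default_rng(1)
kappa, nu = 2.0, 0.8                       # regular-lattice LERW values
lengths = np.array([100, 300, 1000, 3000, 10000])

# synthetic winding angles: Gaussian with Var[theta] = kappa * nu * ln(l)
variances = [
    np.var(rng.normal(0.0, np.sqrt(kappa * nu * np.log(l)), size=200_000))
    for l in lengths
]
slope, intercept = np.polyfit(np.log(lengths), variances, 1)
```

In the actual analysis the variances come from the measured winding angles of the LERW traces, and the fitted slope gives $\gamma=\kappa\nu$.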
In this case the exponents deviate from those of the regular lattice. In Fig.~\ref{fig:Tc} the exponents $\nu$, $D_f$, and $\kappa$ have been calculated. Figure \ref{fig:r_Tc} reveals that $\nu=\frac{1}{D_f}$ is valid on the critical lattice. We note that the $\nu$ exponent changes by $0.9\%$ from its value for the regular lattice, i.e. $\nu_{\text{LERW}}=0.8$.\\ The statistics of $\theta$ is presented in Fig.~\ref{fig:WA-Tc}. We see that the distribution of $\theta$ ($p(\theta)$) remains Gaussian at $T=T_c$, and its variance grows with $N$ (the number of walker steps). The variance of $\theta$ grows linearly with $\ln N$ and $\left\langle \theta\right\rangle\approx 0$. In this figure $\gamma=\kappa\nu$ is the slope of the linear graph. This is consistent with the traces of the walkers being conformally invariant, i.e. SLE traces. The results are gathered in TABLE~\ref{tab:nu}. \begin{table} \begin{tabular}{c | c c c c} \hline & $\kappa_{T_c}$ & $\kappa_{T_c}(D_f)$ & $\nu_{T_c}$ & $D_F^{T_c}$ \\ \hline direct & $1.89\pm 0.05$ & $1.93\pm 0.02$ & $0.807\pm 0.002$ & $1.241\pm 0.002$ \\ \hline \end{tabular} \caption{The exponents of the LERW at $T=T_c$. $\kappa_{T_c}$ has been obtained by the winding angle analysis, and $\kappa_{T_c}(D_f)$ has been obtained from the relation $\kappa=8(D_f-1)$.
The $\nu_{T_c}$ and $D_F^{T_c}$ have been obtained using the relation $R\sim l^{\nu}$ and the box-counting method.} \label{tab:nu} \end{table} \subsection{The general behavior in terms of $T$}\label{offcritical} \begin{figure*} \centering \begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{BC.pdf} \caption{} \label{fig:BC} \end{subfigure} \begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{rms.pdf} \caption{} \label{fig:rms} \end{subfigure} \begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{theta.pdf} \caption{} \label{fig:theta} \end{subfigure} \begin{subfigure}{0.49\textwidth}\includegraphics[width=\textwidth]{WA.pdf} \caption{} \label{fig:WA} \end{subfigure} \caption{(Color online) (a) $\log_{10}\left\langle N(L)\right\rangle$ in terms of $\log_{10}L$ for various values of temperature. Upper inset: $D_F$ in terms of $T$. Lower inset: power-law behavior of the fractal dimension. (b) $\log_{10}\left\langle R^2\right\rangle^{\frac{1}{2}}$ in terms of $\log_{10}N$ for various values of temperature. Upper inset: the $\nu$ exponent in terms of $T$. Lower inset: power-law behavior of $\nu$. (c) The distribution of the winding angle $\theta$ for $N=1000$ and $T=T_c$, which is fitted to $\exp\left[-\left(\theta/\eta\right)^2\right]$. Insets: the power-law and ordinary behaviors of $\eta$ in terms of $T$. (d) $\left\langle \theta^2\right\rangle $ in terms of $\ln N$ with the slope $\gamma$, which is $T$-dependent. Inset: $\gamma$ in terms of $T$.} \label{fig:Off-Tc} \end{figure*} We have seen that for the critical lattice ($T=T_c$), the exponents are shifted with respect to the regular lattice. In this case the effective dimension of the system is $\bar{d}=\frac{187}{96}\simeq 1.948$~\cite{duplantier1989exact}. An equally worthy problem is the diffusion process in super-critical systems, i.e. $T<T_c$, in which the effective dimension of the host media varies in the range $\bar{d}(T_c)<\bar{d}(T)<2$.
In this case, the trace of the particles in the diffusion process should be denser (due to more available active regions), resulting in larger fractal dimensions and, equivalently, lower values of $\nu$. The form of this variation is, however, not predetermined. Importantly, one may expect some power-law behaviors, due to the criticality of the host system in this regime. These power-law behaviors of the geometrical observables arise from the scaling laws of the correlation length $\zeta(T)$ of the Ising model as the host of the system. This characteristic length tends to infinity at $T=T_c$ in the thermodynamic limit. \\ In Fig.~\ref{fig:Off-Tc} we show the results for this case. The most important results are sketched in Figs.~\ref{fig:BC} and~\ref{fig:rms}. One reads from these figures that $D_f(T)-D_f(T_c)\sim t^{\alpha}$ and $\nu(T)-\nu(T_c)\sim t^{\beta}$, in which $t\equiv\frac{T_c-T}{T_c}$, $\alpha=0.43\pm 0.05$, and $\beta=0.49\pm 0.03$. We note that the correlation length of the Ising model scales as $\zeta\sim t^{-1}$. Approximating $\alpha$ and $\beta$ by their closest fractional value ($\frac{1}{2}$), one finds that $D_f(T)-D_f(T_c)\sim \frac{1}{\sqrt{\zeta(T)}}$. This result has been found in many statistical models on Ising-correlated lattices, such as the Gaussian free field~\cite{cheraghalizadeh2018gaussian}, the self-avoiding walk~\cite{cheraghalizadeh2018self}, and sandpile models~\cite{cheraghalizadeh2017mapping}.\\ The winding angle statistics also yields some information concerning the diffusivity parameter $\kappa$. In Figs.~\ref{fig:theta} and~\ref{fig:WA} we show the same functions as in Fig.~\ref{fig:WA-Tc}. It is seen that the width of the distribution of $\theta$ ($\sim\eta$) for $N=1000$ increases with decreasing $T$. This means that the traces tend to twist more at lower temperatures. This is compatible with the obtained fractal dimensions, i.e. larger fractal dimensions correspond to denser traces, which should twist more than in the dilute system.
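The chain of substitutions above ($D_f(T)-D_f(T_c)\sim t^{1/2}$ together with $\zeta\sim t^{-1}$) can be made explicit; with synthetic data obeying the assumed $t^{1/2}$ law, a log-log fit against the correlation length returns the exponent $-1/2$:

```python
import numpy as np

t = np.logspace(-3, -1, 20)      # reduced temperature t = (T_c - T)/T_c
zeta = 1.0 / t                   # Ising correlation length, zeta ~ t^{-1}
delta_Df = 0.3 * t ** 0.5        # assumed scaling of D_f(T) - D_f(T_c)

# slope of log(delta_Df) vs log(zeta) gives the exponent -1/2
exponent = np.polyfit(np.log(zeta), np.log(delta_Df), 1)[0]
```

The prefactor 0.3 is arbitrary; only the exponent is meaningful, and it matches the $\zeta^{-1/2}$ law quoted in the text.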
$\gamma$ also increases with decreasing $T$, as shown in the inset of Fig.~\ref{fig:WA}. Extending the relation $\gamma=\kappa\nu=8\nu(D_f-1)$ to off-critical temperatures, we expect that this exponent does not show power-law behavior. \section*{Discussion and Conclusion} \label{sec:conc} In the present paper we have considered the effects of correlations in the dilution pattern of the system on the properties of the superdiffusion process with $\nu=\frac{4}{5}$. The loop-erased random walk has been employed to simulate the particle motion in this process. We have modeled the forbidden regions, into which the particles are not allowed to enter, by the Ising model with an artificial temperature $T$ that tunes the correlations of the host system. The spins of the Ising model play the role of the occupation field of the real system. We have observed that the results change considerably with respect to the uncorrelated host system. Importantly, at the critical temperature $T=T_c$, the exponent of the end-to-end distance becomes $\nu=0.807\pm 0.002$. This shows that the resulting particle traces become more dilute in this case. The winding angle test also shows that the variance of the angle $\theta$ behaves linearly with $\ln R$, which yields the diffusivity parameter of the SLE theory $\kappa=1.89\pm 0.05$. The other predictions of conformal invariance are also satisfied, i.e. the fractal dimension is related to $\nu$ and $\kappa$ through $D_f=\frac{1}{\nu}=1+\frac{\kappa}{8}$.\\ In the off-critical regime $T<T_c$, in addition to the ordinary power-law behaviors of the statistical observables (which give rise to some critical exponents), some power-law behaviors are also seen for the exponents themselves in terms of $t\equiv\frac{T_c-T}{T_c}$. Importantly, we have observed that $D_f(T)-D_f(T_c)\sim t^{0.43\pm 0.05}$.
Using the well-known result of the Ising model for the correlation length, $\zeta(T)\sim t^{-1}$ in the thermodynamic limit, and approximating the exponent by $\frac{1}{2}$, we see that $D_f(T)-D_f(T_c)\sim \zeta^{-\frac{1}{2}}$, which has been observed in many other statistical models on Ising-correlated lattices. The full set of exponents is listed in TABLE~\ref{tab:off}. \begin{table} \begin{tabular}{c | c c c} \hline exponent & definition & value & closest fractional value \\ \hline $\alpha$ & $D_f(T)-D_f(T_c)\sim t^{\alpha}$ & $0.43\pm 0.05$ & $\frac{1}{2}$ \\ \hline $\beta$ & $\nu(T)-\nu(T_c)\sim t^{\beta}$ & $0.49\pm 0.02$ & $\frac{1}{2}$ \\ \hline $\xi$ & $\eta(T)-\eta(T_c)\sim t^{\xi}$ & $0.33\pm 0.05$ & $\frac{1}{3}$ \\ \hline \end{tabular} \caption{The off-critical exponents of the problem, with their definitions and the suggested fractional values.} \label{tab:off} \end{table}
\section{Introduction} Resilience represents the inherent ability of a given system to oppose external disturbances and eventually recover the unperturbed state. The concept of resilience is particularly relevant to ecology \cite{resilience1}. Here, perturbations of sufficient magnitude may force the system beyond the stability threshold of a reference equilibrium. When the threshold is breached, recovery is not possible and the system under scrutiny steers towards an alternative attractor, distinct from the original one. Resilience, namely the capacity of a system to withstand changes in its environment, plays a role of paramount importance for a wide range of applications beyond ecology, ranging from climate change to material science, via information security and energy development \cite{resilience2}. To grasp the mathematical essence of the phenomenon, one can customarily rely on a straightforward linear stability analysis of the governing, supposedly deterministic, dynamical equations. The eigenvalues of the Jacobian matrix, evaluated at equilibrium, convey information on the associated stability. If the largest real part of the eigenvalues is negative, the examined deterministic system is deemed stable against tiny disturbances \cite{murray,strogatz}. To state it differently, the system is able to regain its deputed equilibrium, by exponentially damping the imposed perturbation. However, a linearly stable equilibrium can be made unstable, through non-linearities, by a sufficiently large perturbation: this occurs for instance when the enforced disturbance takes the system outside the basin of attraction, i.e. the set of initial conditions leading to long-time behavior that approaches the attractor \cite{menck}. In the following, we shall focus on sufficiently small perturbations, so that the linear stability analysis holds true.
A transient growth can, however, take place at short times, before the perturbation eventually fades away, as established by the spectrum of the Jacobian matrix. This short time amplification is instigated by the non-normal character of the interaction scheme and may drive the system unstable, even when the eigenvalues of the Jacobian display a negative real part \cite{trefethen,teo1}. The elemental ability of a non-normal system to prompt an initial rise of the associated norm can be made perpetual by an enduring stochastic drive \cite{zagli,sara}. Taken altogether, these observations call for a revised notion of resilience for systems driven by non-normal couplings and shaken by stochastic forcing. We shall be in particular interested in interacting multi-species models, diffusively coupled via a networked arrangement \cite{othmer1,othmer2,Mikhailov,Nakao,ACPSF,HNM,ABCFP,Kouv,PFMC,CBF,PLFC,LFC,teo2}. The investigated systems are assumed to admit a homogeneous equilibrium. For a specific selection of the involved parameters, the homogeneous fixed point can turn unstable upon injection of a non-homogeneous perturbation, which activates the diffusion component. The ensuing symmetry breaking instability, as signaled by the so-called dispersion relation, constitutes the natural generalization of the celebrated Turing instability to reaction-diffusion systems hosted on a complex network. In \cite{muolo}, we showed that non-normal, hence asymmetric, networks may drive a deterministic system unstable, even when the latter is predicted to be stable by the linear stability analysis. As we shall here prove, the effect is definitely more remarkable when the non-normal system is made inherently stochastic \cite{gardiner,vankampen}. At variance with the analysis in \cite{biancalani}, it is the non-normality of the network that yields the self-consistent amplification of the noisy component, a resonant mechanism which prevents the resilient recovery.
To elaborate along these lines, we will consider a generic reaction-diffusion model defined on a directed one dimensional lattice. The degenerate spectrum of the Laplacian that governs the diffusive exchanges between adjacent nodes sits at the root of the generalized class of instability that we shall here address. Making the directed lattice longer, i.e. adding successive nodes to the one-dimensional chain, allows the perturbation to grow in strength, and the system to eventually cross the boundaries of stability \cite{clement}. A similar scenario is met when the hosting network is assumed quasi degenerate, meaning that the Laplacian eigenvalues are densely packed within a limited portion of the complex plane. Many real networks, from different realms of investigation ranging from biology (neuronal, protein, genetic) to ecology (food webs), via sociology (communication, citations) and transport (airlines), have been reported to possess a pronounced degree of non-normality \cite{teo1}. In particular, it was shown that their adjacency matrix is almost triangular (when properly re-organizing the indexing of the nodes), which, in spectral terms, enhances the probability of yielding a degenerate spectrum for the associated Laplacian matrix. Networks that display a triangular adjacency matrix are known as directed acyclic graphs (DAG). The paper is organized as follows. In the next section, we shall introduce our reference setting, a reaction-diffusion system anchored on a directed one dimensional lattice. We will in particular show that a self-consistent amplification of the noisy component of the dynamics is produced when successively incrementing the number of nodes that form the chain. This bears non-trivial consequences for the resilience displayed by the system under scrutiny. Moving from this preliminary information, in Section II we will modify the lattice structure by accommodating uniform and random return loops.
This breaks the degeneracy that arises from the directed lattice topology. The coherent amplification of the stochastic drive is however persistent as long as the spectrum is close to degenerate. Random directed acyclic graphs with quasi degenerate spectrum can also be created, which make reaction-diffusion systems equally prone to the stochastically driven instability. This topic is discussed in Section III. Finally, in Section IV we sum up and draw our conclusions. \section{Reaction-diffusion dynamics on a directed lattice} We begin by considering the coupled evolution of two species which are bound to diffuse on a (directed) network. Introduce the index $i = 1,...,\Omega$ to identify the $\Omega$ nodes of the collection and denote by $\phi_i$ and $\psi_i$ the concentrations of the species on the $i$-th node. The local (on site) reactive dynamics is respectively governed by the non-linear functions $f(\phi_i,\psi_i)$ and $g(\phi_i,\psi_i)$. The structure of the underlying network is specified by the adjacency matrix $\mathbf{A}$: $A_{ij}$ is different from zero if a weighted link exists which connects node $j$ to node $i$. The species can relocate across the network, traveling the available edges, subject to standard Fickian diffusion. With reference to species $\phi_i$, the net flux at node $i$ reads $D_{\phi} \sum_{j=1}^{\Omega} A_{ij} (\phi_j-\phi_i)$, where $D_{\phi}$ stands for the diffusion coefficient and the sum is restricted to the subset of nodes $j$ for which $A_{ij} \ne 0$. Furthermore, we shall assume that the dynamics on each node gets perturbed by an additive noise component of amplitude $\sigma_i$.
In formulae, the system under investigation can be cast as: \begin{subequations} \begin{align} \frac{d}{dt} \phi_i &= f\left(\phi_i,\psi_i\right)+D_{\phi}\sum_{j=1}^\Omega \Delta_{ij} \phi_j + \sigma_i (\mu_{\phi})_i \label{eqn: eq di partenza1} \\ \frac{d}{dt} \psi_i &= g\left(\phi_i,\psi_i\right)+D_{\psi}\sum_{j=1}^\Omega \Delta_{ij} \psi_j + \sigma_i (\mu_{\psi})_i\label{eqn: eq di partenza2} \end{align} \end{subequations} where $\Delta_{ij}=A_{ij}-k_i\delta_{ij}$ stands for the discrete Laplacian and $k_i = \sum_j A_{ij}$ is the incoming connectivity. Here, $(\mu_{\phi})_i$ and $(\mu_{\psi})_i$ are Gaussian random variables with zero mean and correlations $\langle (\mu_{\phi})_i(t) (\mu_{\phi})_j (t') \rangle = \langle (\mu_{\psi})_i(t) (\mu_{\psi})_j (t') \rangle = \delta_{ij} \delta(t-t')$ and $\langle (\mu_{\phi})_i(t) (\mu_{\psi})_j (t') \rangle =0$. In this section, we will assume a directed lattice as the underlying network. A schematic layout of the system is depicted in Fig. \ref{fig1_scheme}. The $\Omega \times \Omega$ Laplacian matrix associated to the directed lattice reads: \begin{equation} \mathbf{\Delta}= \begin{pmatrix} 0 & 0 & \dots \\ 1 & -1 & 0 & \dots \\ 0 & 1 & -1 & 0 & \dots \\ \vdots & \ddots & \ddots & \ddots \\ \end{pmatrix} \end{equation} and admits a degenerate spectrum, an observation that will become crucial for what follows. More precisely, the eigenvalues of the Laplacian operator are $\Lambda^{(1)}=0$, with multiplicity $1$, and $\Lambda^{(2)}=-1$, with multiplicity $\Omega-1$. Notice that the eigenvalues are real, even though the Laplacian matrix is asymmetric. \begin{figure}[ht!] \includegraphics[scale=0.25]{Fig1_QD.png} \caption{The scheme of the model is illustrated. Two populations, respectively denoted by $\phi$ and $\psi$, are distributed on a collection of $\Omega$ nodes. The nodes are arranged so as to form a one dimensional lattice subject to unidirectional couplings, as outlined in the scheme. 
The weight of the link between adjacent nodes is set to one. The local, on site, interaction among species is ruled by generic non-linear functions of the concentration amount.} \label{fig1_scheme} \end{figure} \subsection{The deterministic limit ($\sigma_i=0$).} To continue with the analysis, we will assume that the deterministic analogue of system \eqref{eqn: eq di partenza1}-\eqref{eqn: eq di partenza2} (obtained when setting $\sigma_i =0$, $\forall i$) admits a homogeneous fixed point and label it $(\phi^*,\psi^*)$. In practice, $\phi_i=\phi^*$ and $\psi_i=\psi^*$, $\forall i$, is an equilibrium solution of the model equations. Furthermore, we will assume that the aforementioned fixed point is stable against homogeneous perturbations, a working assumption which can be formally quantified by considering the associated Jacobian matrix: \begin{equation} \mathbf{J}= \begin{pmatrix} f_{\phi} & f_{\psi}\\ g_{\phi} & g_{\psi}\ \end{pmatrix} \end{equation} where $f_{\phi}$ stands for the partial derivative of $f(\phi, \psi)$ with respect to $\phi$, evaluated at the fixed point $(\phi^*,\psi^*)$. Similar definitions hold for $f_{\psi}, g_{\phi}$ and $g_{\psi}$. The homogeneous fixed point is stable provided $tr(J)=f_{\phi}+g_{\psi}<0$ and $det(J)=f_{\phi} g_{\psi} - f_{\psi} g_{\phi}>0$. Here, $tr(...)$ and $det(...)$ denote, respectively, the trace and the determinant. When the above inequalities are met, the largest real part of the eigenvalues of the Jacobian matrix $J$ is negative, pointing to asymptotic stability. Peculiar behaviors however arise when the spectrum of $J$ is degenerate, namely when the eigenvalues come in identical pairs. In this case, the solution to the linear problem ruled by matrix $J$ contains a secular term, which, depending on the initial conditions, might yield a counterintuitive growth of the perturbation at short times.
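The degeneracy invoked above can be read off directly from the Laplacian of the directed chain: the matrix is lower triangular, so its eigenvalues sit on the diagonal. A minimal pure-Python sketch (the chain length $\Omega=8$ is an arbitrary illustrative choice):

```python
def directed_chain_laplacian(omega):
    """Laplacian Delta_ij = A_ij - k_i delta_ij for the unidirectional chain:
    node i receives a unit-weight link from node i-1 (nothing enters node 1)."""
    delta = [[0.0] * omega for _ in range(omega)]
    for i in range(1, omega):
        delta[i][i - 1] = 1.0   # incoming edge from the previous node
        delta[i][i] = -1.0      # minus the incoming connectivity k_i = 1
    return delta

omega = 8
delta = directed_chain_laplacian(omega)

# Lower-triangular matrix: the spectrum is the diagonal.
eigenvalues = [delta[i][i] for i in range(omega)]
print(eigenvalues.count(0.0))    # multiplicity of Lambda^(1) = 0  -> 1
print(eigenvalues.count(-1.0))   # multiplicity of Lambda^(2) = -1 -> omega - 1
```

The same triangularity argument carries over to the block matrix of the full linearized problem, which is why the degeneracy grows with the chain length.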
For long enough times the exponential damping takes over and the system relaxes back to the stable equilibrium (or, equivalently, the imposed perturbation is damped away). Short time transients can also be found in stable linear systems, ruled by non-normal matrices. A matrix is said to be non-normal if it does not commute with its adjoint. For the case at hand, $J$ is assumed to be real. Hence, taking the adjoint is identical to considering the transpose of the matrix. In formulae, $J$ is non-normal provided $[J, J^T]=J J^T - J^T J \ne 0$, where the superscript $T$ identifies the transpose operation. Assume that the non-normal matrix $J$ is stable, hence its eigenvalues have negative real parts. Consider then the Hermitian part of $J$, a symmetric matrix defined as $H(J)=(J+J^T)/2$. If the largest eigenvalue of $H$ is positive, then the linear system governed by $J$ can display a short time growth, for a specific range of initial conditions. Systems characterized by a stable Jacobian matrix, with an associated unstable Hermitian part, are termed reactive. In the following we shall consider a reactive two-component system, namely a two-species model that possesses the elemental ability to grow the imposed perturbation at short times, even when deemed stable. This latter ability will be considerably augmented by replicating such a fundamental unit on the directed chain, and so engendering a secular behavior which eventually stems from the degenerate structure of the associated Jacobian. Heading in this direction, we shall first make sure that the examined system is stable when formulated in its spatially extended variant, namely when the two-species dynamical system is mirrored on a large set of nodes, coupled diffusively via unidirectional links. A non-homogeneous perturbation can in fact be imposed which activates the diffusion component and consequently turns, under specific conditions, the homogeneous solution unstable.
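The reactivity criterion just stated is easy to probe numerically. The sketch below uses a hypothetical stable, non-normal $2\times 2$ matrix (its entries are illustrative and not taken from the model studied here): the eigenvalues of $J$ are negative, yet the Hermitian part $H(J)$ possesses a positive eigenvalue.

```python
import math

J = [[-1.0, 5.0],
     [0.0, -2.0]]   # hypothetical stable, non-normal matrix (triangular)

def sym_eigs_2x2(m):
    """Eigenvalues of a symmetric 2x2 matrix via trace and determinant."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(tr * tr - 4.0 * det)   # real for symmetric matrices
    return (tr - disc) / 2.0, (tr + disc) / 2.0

# Stability: J is triangular, so its eigenvalues are the diagonal entries.
assert max(J[0][0], J[1][1]) < 0

# Non-normality: [J, J^T] != 0; here the (0,0) entries of J J^T and J^T J differ.
jjT00 = J[0][0] ** 2 + J[0][1] ** 2   # 26
jTj00 = J[0][0] ** 2 + J[1][0] ** 2   # 1
assert jjT00 != jTj00

# Reactivity: the Hermitian part H = (J + J^T)/2 has a positive eigenvalue.
H = [[J[0][0], (J[0][1] + J[1][0]) / 2.0],
     [(J[0][1] + J[1][0]) / 2.0, J[1][1]]]
lam_min, lam_max = sym_eigs_2x2(H)
print(lam_max > 0)   # True: stable but reactive
```

The large off-diagonal asymmetry is what makes the Hermitian part unstable; a symmetric stable matrix can never be reactive, since then $H(J)=J$.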
The subtle interplay between diffusion and reaction therefore weakens the resilience of the system, by opposing its ability to fight external disturbances and eventually regain the unperturbed (homogeneous) state. The conditions for the onset of the diffusion driven instability are obtained via a linear stability analysis, that we shall hereafter revisit for the case at hand. The analysis yields a dispersion relation which bears information on the outbreak of the instability. This latter constitutes a straightforward generalization of the celebrated Turing mechanism to the case of a discrete, possibly directed support. To further elaborate along this axis, we focus on the deterministic version of system \eqref{eqn: eq di partenza1}-\eqref{eqn: eq di partenza2} and impose a supposedly small, non-homogeneous perturbation of the homogeneous equilibrium. In formulae, we set: \begin{eqnarray} \phi_i&=&\phi^*+\xi_i \\ \nonumber \psi_i&=&\psi^*+\eta_i \label{perturb} \end{eqnarray} where $\xi_i$ and $\eta_i$ stand for the imposed perturbation. In the following we will label $\boldsymbol{\zeta}=\left(\xi_1,\eta_1,...,\xi_\Omega,\eta_\Omega \right)$ the vector which characterizes the fluctuations around the fixed point. Insert now the above condition in the governing equations and linearize around the fixed point, assuming the perturbation to be small. This readily yields a $2 \Omega \times 2 \Omega$ linear system in the variable $\boldsymbol{\zeta}$: \begin{equation} \frac{d}{dt} \boldsymbol{\zeta} = \mathcal{J} \boldsymbol{\zeta} \label{linear_syst} \end{equation} where \begin{equation} \label{generalizedLapl} \mathcal{J}= \begin{pmatrix} \mathbf{J} & \mathbf{0} & \dots \\ \mathbf{D} & \mathbf{J}-\mathbf{D} & \mathbf{0} & \dots \\ \mathbf{0} & \mathbf{D} & \mathbf{J}-\mathbf{D} \\ \vdots & \ddots & \ddots & \ddots \end{pmatrix} \end{equation} and $\mathbf{D}=\begin{pmatrix} D_{\phi} & 0\\ 0 & D_{\psi}\ \end{pmatrix}$ is the diagonal diffusion matrix.
The spectrum of the generalized Jacobian matrix $\mathcal{J}$ conveys information on the asymptotic fate of the imposed perturbation. If the eigenvalues display negative real parts, then the perturbation is bound to fade away at sufficiently large times. The system hence recovers the unperturbed homogeneous configuration. Conversely, the perturbation grows when the eigenvalues possess a positive real part. A Turing-like instability sets in and the system evolves towards a different, non-homogeneous attractor. To compute the eigenvalues $\lambda$ of matrix $\mathcal{J}$, we label $\varepsilon_i=\mathbf{J}+\mathbf{D}\Delta_{ii}$ for $i=1,\dots,\Omega$. Then, the characteristic polynomial of $\mathcal{J}$ reads: \begin{eqnarray} 0&=&\det(\mathcal{J}-\lambda I)=\prod_{i=1}^{\Omega}\det(\varepsilon_i-\lambda I) \\ &=& \det(\varepsilon_1-\lambda I) \left[ \det(\varepsilon_2-\lambda I) \right]^{\Omega-1} \end{eqnarray} where $I$ stands for the $2 \times 2$ identity matrix and \begin{equation} \det(\varepsilon_1-\lambda I)= \lambda^2 -tr(\mathbf{J}) \lambda + det (\mathbf{J}) \end{equation} \begin{eqnarray} \nonumber \det(\varepsilon_2-\lambda I)&=&\lambda^2-tr(\mathbf{J'})\lambda+\det(\mathbf{J'}) \end{eqnarray} where we have introduced the matrix $\mathbf{J'}=\mathbf{J}-\mathbf{D}$. The spectrum of the generalized Jacobian is hence degenerate if $\Omega>2$ and the multiplicity of the degeneracy of the spectrum grows with $\Omega$, the number of nodes that compose the lattice. In formulae: \begin{equation} \lambda_{1,2}=\frac{tr(\mathbf{J})\pm\sqrt{tr^2(\mathbf{J})-4\det (\mathbf{J})}}{2} \end{equation} and \begin{equation} \lambda_{3,4} = \frac{tr(\mathbf{J'})\pm\sqrt{tr^2(\mathbf{J'})-4\det(\mathbf{J'})}}{2} \end{equation} with degeneracy $\Omega-1$. The stability of the fixed point $(\phi^*,\psi^*)$ to external non-homogeneous perturbations is hence determined by $\left(\lambda_{re}\right)_{max}$, the largest real part of the above eigenvalues.
\\ For $\Omega>2$, the solution of the linear system (\ref{linear_syst}) can be cast in a closed analytical form, by invoking the concept of generalized eigenvectors. Denote by $\boldsymbol{v_0}$ and $\boldsymbol{w_0}$ the ordinary eigenvectors associated to the non degenerate eigenvalues $\lambda_1$ and $\lambda_2$. The ordinary eigenvectors $\boldsymbol{v_1}$ and $\boldsymbol{w_1}$, respectively associated to $\lambda_3$ and $\lambda_4$, are non degenerate (i.e. the geometric multiplicity is one). This can be proven for a generic dimension $2\Omega$ of the matrix $\mathcal{J}$ due to the simple block structure of the matrix. We then introduce the generalized eigenvectors associated to $\lambda_3$, $\lambda_4$ through the recursions: \begin{align} (\mathcal{J}-\lambda_3 I)\boldsymbol{v_{i+1}} &=\boldsymbol{v_i}, \quad i=1,\dots,\Omega-2 \\ (\mathcal{J}-\lambda_4 I) \boldsymbol{w_{i+1}} &=\boldsymbol{w_i}, \quad i=1,\dots,\Omega-2. \end{align} The solution of (\ref{linear_syst}) therefore reads: \begin{multline} \boldsymbol{\zeta}=c_0e^{\lambda_1t}\boldsymbol{v_0}+d_0e^{\lambda_2t}\boldsymbol{w_0}\\ +[c_1\boldsymbol{v_1}+c_2(\boldsymbol{v_1}t+\boldsymbol{v_2})+c_3(\boldsymbol{v_1}\frac{t^2}{2!}+\boldsymbol{v_2}t+\boldsymbol{v_3})\\ +\dots+c_{\Omega-1}(\boldsymbol{v_1}\frac{t^{\Omega-2}}{(\Omega-2)!}+\boldsymbol{v_2}\frac{t^{\Omega-3}}{(\Omega-3)!}+\dots+\boldsymbol{v_{\Omega-1}})]e^{\lambda_3t}\\ +[d_1\boldsymbol{w_1}+d_2(\boldsymbol{w_1}t+\boldsymbol{w_2})+d_3(\boldsymbol{w_1}\frac{t^2}{2!}+\boldsymbol{w_2}t+\boldsymbol{w_3})+\dots\\ +d_{\Omega-1}(\boldsymbol{w_1}\frac{t^{\Omega-2}}{(\Omega-2)!}+\boldsymbol{w_2}\frac{t^{\Omega-3}}{(\Omega-3)!}+\dots+\boldsymbol{w_{\Omega-1}})]e^{\lambda_4t}. \end{multline} Secular terms, which bear the imprint of the spectrum degeneracy, may boost the short time amplification of the norm of the imposed perturbation. Recall that the transient growth occurs for $\left(\lambda_{re}\right)_{max}<0$, i.e. when the perturbation is bound to fade away asymptotically. Interestingly, the degree of the polynomial increases with the lattice size.
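The role of the secular terms can be appreciated on the leading contribution to the norm, $|\boldsymbol{\zeta}|\sim t^{\Omega-2}e^{\lambda_3 t}$ with $\lambda_3<0$: setting the time derivative of $t^{k}e^{-|\lambda_3| t}$ to zero locates the maximum at $t^*=(\Omega-2)/|\lambda_3|$, so the transient peak grows and shifts to later times as the chain is made longer. A minimal sketch, with an illustrative value for $\lambda_3$:

```python
import math

lam3 = -0.45   # illustrative negative eigenvalue (asymptotic decay rate)

def secular_term(t, omega):
    """Leading secular contribution t^(omega - 2) * exp(lam3 * t)."""
    return t ** (omega - 2) * math.exp(lam3 * t)

for omega in (5, 10, 25):
    k = omega - 2
    t_star = k / abs(lam3)          # analytic maximizer of t^k e^{-|lam3| t}
    # numerical check: the analytic peak beats its neighbours
    assert secular_term(t_star, omega) >= secular_term(t_star * 0.9, omega)
    assert secular_term(t_star, omega) >= secular_term(t_star * 1.1, omega)
    print(omega, round(t_star, 2))  # the peak time grows linearly with omega
```

This reproduces, at the level of a single scalar term, the trend discussed below: longer chains display larger and later transient amplification.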
This implies, in turn, that the short time amplification of a stable system can progressively grow as the number of lattice nodes is increased. Without loss of generality, and to test the implication of the above reasoning, we shall hereafter assume the Brusselator model as a reference reaction scheme. The Brusselator is a paradigmatic testbed for nonlinear dynamics, and it is often invoked in the literature as a representative model of self-organisation, synchronisation and pattern formation. Our choice amounts to setting $f\left(\phi_i,\psi_i\right)=1-(b+1) \phi_i + c \phi_i^2 \psi_i$ and $g\left(\phi_i,\psi_i\right)=b \phi_i-c \phi_i^2 \psi_i$, where $b$ and $c$ stand for positive parameters. The system admits a trivial homogeneous fixed point at $\left(\phi_i,\psi_i\right) = (1, b/c)$. This latter is stable to homogeneous perturbations provided $c > b-1$. We can then isolate in the parameter plane $(b,c)$ the domain where $\left(\lambda_{re}\right)_{max}>0$, or stated differently, the region that corresponds to a generalized Turing instability. The result of the analysis is displayed in Fig. \ref{fig2}, for a specific choice of the diffusion parameters, with $D_{\psi}>D_{\phi}$: the region of Turing instability falls inside the black solid lines (the red line follows from the analysis of the stochastic analogue of the model, as we will explain in the following). The symbols identify two distinct operating points, positioned outside the region of deterministic instability. Working in this setting, after a short time transient, the perturbations get exponentially damped and the system relaxes back to its homogeneous equilibrium. The effect of the short time amplification should become more visible for increasing values of $\Omega$, the lattice size, and the closer the operating point is to the threshold of instability. This scenario is confirmed by inspection of Fig. \ref{fig3}, where the norm of the perturbation is plotted against time, for the chosen parameter values.
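The location of the two operating points outside the instability region can be verified directly from the closed-form eigenvalues derived above. A minimal pure-Python check for the Brusselator, with the parameters of Fig. \ref{fig2} ($b=2$, $D_{\phi}=1$, $D_{\psi}=10$, and the two values of $c$):

```python
import cmath

def max_real_eig(b, c, d_phi, d_psi):
    """Largest real part among lambda_{1..4} for the Brusselator on the chain."""
    # Jacobian at the homogeneous fixed point (phi*, psi*) = (1, b/c)
    f_phi, f_psi = b - 1.0, c
    g_phi, g_psi = -b, -c
    eigs = []
    for (tr, det) in (
        # block J (eigenvalues lambda_{1,2})
        (f_phi + g_psi, f_phi * g_psi - f_psi * g_phi),
        # block J' = J - D (eigenvalues lambda_{3,4}, degeneracy Omega - 1)
        ((f_phi - d_phi) + (g_psi - d_psi),
         (f_phi - d_phi) * (g_psi - d_psi) - f_psi * g_phi),
    ):
        disc = cmath.sqrt(tr * tr - 4.0 * det)
        eigs += [(tr + disc) / 2.0, (tr - disc) / 2.0]
    return max(z.real for z in eigs)

for c in (2.8, 4.3):  # the two operating points of Fig. 2, at b = 2
    print(c, max_real_eig(2.0, c, 1.0, 10.0) < 0)  # True: linearly stable
```

Both operating points return a negative $\left(\lambda_{re}\right)_{max}$, confirming that any residual growth observed there is a purely transient, non-normal effect.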
To further elaborate on this observation we display in Fig. \ref{fig4} the density of species $\phi$, on different nodes of the chain, against time. Panel (a) refers to the choice of parameters that corresponds to the blue square in Fig. \ref{fig2}, while panel (b) follows for the parameters associated with the orange circle. By making the chain longer, one better appreciates the initial growth of the perturbation, which materializes in transient patterns. For sufficiently long times, the patterns disappear and the system converges back to the unperturbed homogeneous fixed point. The time for equilibration grows with the size of the lattice, i.e. with the number of nodes $\Omega$. Transient patterns are more pronounced and persistent the closer the operating point is to the region of deterministic instability. \begin{figure}[ht!] \includegraphics[scale=0.14]{fig1_NNpatt.pdf} \caption{The region of instability is depicted in the parameter plane $(b,c)$. This is the portion of the plane contained between the two black solid lines. The symbols refer to two distinct operating points, positioned outside the domain of deterministic instability. The two points are located at $b=2$, and have respectively $c=2.8$ and $c=4.3$. The red line delimits the upper boundary of the parameter region where the stochastic amplification can eventually take place, as discussed in the remaining part of the section. Here, $D_{\phi}=1$ and $D_{\psi}=10$.} \label{fig2} \end{figure} \begin{figure} \begin{tabular}{cc} \includegraphics[scale=0.25]{ampli1.pdf} & \includegraphics[scale=0.25]{ampli2.pdf} \\ (a) & (b) \end{tabular} \caption{The evolution of the norm of the perturbation is displayed for different choices of the reaction parameter $c$, corresponding to the two points selected in Fig. \ref{fig2} (at $b=2$). (a) Here $c=2.8$, the lower point (blue square) in Fig. \ref{fig2}. The solid line refers to $\Omega=5$ while the dashed line is obtained for $\Omega=10$.
(b) Here $c=4.3$, the upper point (orange circle) in Fig. \ref{fig2}. The solid line stands for $\Omega=5$ while the dashed line refers to $\Omega=25$. Notice that the amplification gets more pronounced the closer the working point is to the deterministic transition line. Also, the peak of the norm against time shifts towards the right as the power of the leading secular terms is increased. Here, $D_{\phi}= 1$ and $D_{\psi}=10$.} \label{fig3} \end{figure} \begin{figure} \includegraphics[scale=0.14]{fig4_NNpatt.pdf} \\ (a) \\ \includegraphics[scale=0.14]{fig6_NNpatt.pdf} \\ (b) \caption{The time evolution of species $\phi$ is displayed with an appropriate color code, on different nodes of the chain and against time. The Brusselator reaction scheme is assumed. Diffusion constants are set to $D_{\phi}= 1$ and $D_{\psi}=10$. Panel (a) refers to the values of $b$ and $c$ associated to the blue square in Fig. \ref{fig2}, while panel (b) refers to the parameters associated with the orange circle as the operating point.} \label{fig4} \end{figure} \subsection{The stochastic evolution ($\sigma_i \ne 0$).} We shall now move on to consider the effect produced by perpetual noise. This amounts to assuming $\sigma_i \ne 0$ in \eqref{eqn: eq di partenza1}-\eqref{eqn: eq di partenza2}. For the sake of simplicity we will set in the following $\sigma_i=\sigma$. We anticipate, however, that our conclusions still hold when accounting for an arbitrary degree of heterogeneity in the strength of the noise.
Linearizing the governing dynamical system around the fixed point yields a set of linear Langevin equations: \begin{equation}\label{eq: Langevin_LIN} \frac{d}{d\tau}\zeta_i=\left(\mathcal{J}\boldsymbol{\zeta}\right)_i+\hat{\lambda}_i \end{equation} where $\hat{\boldsymbol{\lambda}}$ is a $2 \Omega$ vector of random Gaussian entries with zero mean, $<\hat{\boldsymbol{\lambda}}>=0$, and correlation given by $<\hat{\lambda}_i(\tau)\hat{\lambda}_j(\tau')>=\sigma^2\delta_{ij}\delta(\tau-\tau')$. The linear Langevin equations (\ref{eq: Langevin_LIN}) are equivalent to the following Fokker-Planck equation for the distribution function $\Pi$ of the fluctuations \begin{equation}\label{eq:eq Fokker Planck linearizzata vari nodi} \frac{\partial}{\partial \tau}\Pi =\sum_{i=1}^{2\Omega}\left[-\frac{\partial}{\partial \zeta_i}\left(\mathcal{J}\boldsymbol{\zeta}\right)_i\Pi+\frac{1}{2}\sigma^2\frac{\partial^2}{\partial \zeta_i^2}\Pi\right]. \end{equation} The solution of the Fokker-Planck equation is a multivariate Gaussian that we can univocally characterize in terms of the associated first and second moments. It is immediate to show that the first moment converges in time to zero. We focus instead on the $2 \Omega \times 2 \Omega$ family of second moments, defined as $\langle \zeta_l \zeta_m \rangle = \int \zeta_l \zeta_m \Pi d {\boldsymbol \zeta}$.
A straightforward calculation returns: \begin{widetext} \begin{equation} \label{eqn: eq momenti diagonali} \frac{d}{d\tau}<\zeta_l^2>=2<\left(\mathcal{J}\boldsymbol{\zeta}\right)_l\zeta_l>+\sigma^2=2\sum_{j=1}^{2\Omega}\mathcal{J}_{lj}<\zeta_l\zeta_j>+\sigma^2 \end{equation} \begin{equation} \label{eqn: eq momenti misti} \frac{d}{d\tau}<\zeta_l\zeta_m>=<\left(\mathcal{J}\boldsymbol{\zeta}\right)_l\zeta_m>+<\left(\mathcal{J}\boldsymbol{\zeta}\right)_m\zeta_l>=\sum_{j=1}^{2\Omega}\left[\mathcal{J}_{lj}<\zeta_m\zeta_j>+\mathcal{J}_{mj}<\zeta_l\zeta_j>\right] \end{equation} \end{widetext} for the diagonal and off-diagonal ($l \ne m$) moments, respectively. The stationary values of the moments can be analytically computed by setting to zero the time derivatives on the left hand side of equations \eqref{eqn: eq momenti diagonali}-\eqref{eqn: eq momenti misti} and solving the linear system that is consequently obtained. Particularly relevant for our purposes is the quantity $\delta_i=\langle \zeta_i^2 \rangle$, the variance of the fluctuations displayed, around the deterministic equilibrium, on node $i$. The value of $\delta_i$, normalized to $\delta_1$, is plotted in Fig. \ref{fig5}, for both species of the Brusselator model. The parameters correspond to the working point identified by the blue square in Fig. \ref{fig2}. The solid line stands for the analytically determined variance of the fluctuations predicted for species $\phi$, while the dashed line refers to species $\psi$. The symbols are obtained after direct stochastic simulations of system \eqref{eqn: eq di partenza1}-\eqref{eqn: eq di partenza2}, assuming the Brusselator as the reference reaction scheme. As can be appreciated by visual inspection of Fig. \ref{fig5}, the fluctuations grow progressively node after node. The predicted variances agree nicely with the results of the simulations on the first nodes of the collection. For $\Omega>15$, deviations are abruptly found and the linear noise approximation fails.
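The node-after-node growth of the variance can be made explicit on a toy reduction of the moment equations above: a single species with unit linear decay, relocating along a two-node directed chain. The equations then collapse to $d\Sigma/d\tau = A\Sigma + \Sigma A^T + \sigma^2 I$ with $A=\begin{pmatrix} -1 & 0 \\ 1 & -1 \end{pmatrix}$, whose stationary solution is $\langle\zeta_1^2\rangle=\sigma^2/2$ and $\langle\zeta_2^2\rangle=3\sigma^2/4$: the downstream variance is already amplified by a factor $3/2$. This is an illustrative sketch, not the full Brusselator computation:

```python
A = [[-1.0, 0.0],
     [1.0, -1.0]]   # toy drift: unit decay plus unidirectional coupling 1 -> 2
sigma2 = 1.0        # noise amplitude sigma^2

# second moments Sigma = [[<z1 z1>, <z1 z2>], [<z1 z2>, <z2 z2>]]
S = [[0.0, 0.0], [0.0, 0.0]]
dt, steps = 0.01, 5000

for _ in range(steps):
    # Euler step for dS/dtau = A S + S A^T + sigma^2 I
    dS = [[0.0, 0.0], [0.0, 0.0]]
    for l in range(2):
        for m in range(2):
            dS[l][m] = sum(A[l][j] * S[j][m] + A[m][j] * S[l][j]
                           for j in range(2))
            if l == m:
                dS[l][m] += sigma2
    S = [[S[l][m] + dt * dS[l][m] for m in range(2)] for l in range(2)]

print(round(S[0][0], 4), round(S[1][1], 4))   # 0.5 0.75: variance grows downstream
```

Iterating the same construction on longer chains reproduces, qualitatively, the monotone growth of $\delta_i/\delta_1$ shown in Fig. \ref{fig5}.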
Our interpretation goes as follows: the amplification mechanism is manifestly triggered by the imposed noise, which resonates with the peculiar topology of the embedding support. Due to this interplay, noise-seeded fluctuations grow across the chain and make it possible for the system to explore the phase space landscape, beyond the local basin of attraction to which it is deterministically bound. From here on, it is no longer legitimate to simplify the dynamics of the system as if it was evolving in the vicinity of the homogeneous solution, and the assumptions that sit at the root of the linear noise estimate are consequently invalidated. The time evolution of species $\phi$ is plotted, with an appropriate color code, on different nodes of the chain and versus time in Fig. \ref{fig6}. Noise secures the stabilization of complex dynamical patterns, which are perpetually maintained in the stochastic version of the model, so breaking the spatial symmetry that characterizes the asymptotic deterministic solution. The amplification mechanism driven by the stochastic component of the dynamics can solely occur within a closed domain of the parameter space $(b,c)$ adjacent to the region of deterministic instability. The domain of interest is delimited by the red solid line in Fig. \ref{fig2}: for the parameters that fall below the red line, the variance of the fluctuations is analytically predicted to increase along the chain. Even more importantly, the system may be frozen in the heterogeneous state, when silencing the noise (so regaining the deterministic limit) after a transient of the stochastic dynamics. This is clearly the case provided the deterministic dynamics possesses a stable non-homogeneous attractor, for the chosen parameter set. In Fig. \ref{fig7} (a) the pattern is stably displayed only when the noise is active. When the stochastic forcing is turned off (at the time identified by the white dashed line), the system regains the homogeneous equilibrium.
A selection of individual trajectories, recorded on specific nodes of the chain, is plotted in Fig. \ref{fig7} (b) and yields the same qualitative conclusion. A different scenario is instead met when operating with a slightly smaller value of the parameter $c$ (still outside the region of deterministic Turing instability). When turning off the noise, the system spontaneously sediments in a Chimera-like pattern, see Fig. \ref{fig7} (c), a superposition of homogeneous (in the beginning of the chain) and heterogeneous states (at the bottom of the chain) \cite{chimera1,chimera2,chimera3}. The same conclusion is reached upon inspection of Fig. \ref{fig7} (d). Here, the deterministic attractor is represented with a collection of crosses. A straightforward calculation confirms that it is indeed one of the several non-homogeneous, stable attractors displayed by the system in its deterministic version. \begin{figure}[ht!] \includegraphics[scale=0.2]{fig8_NNpatt.pdf} \caption{The phenomenon of noise driven amplification is illustrated: $\delta_i/\delta_1$ is plotted against the node number along the chain. The solid (dashed) line refers to the variance of the fluctuations, as predicted for species $\phi$ ($\psi$). The symbols stand for the homologous quantities as computed via direct stochastic simulations. The Brusselator model is assumed as the reference reaction scheme and parameters refer to the blue square displayed in Fig. \ref{fig2}.} \label{fig5} \end{figure} \begin{figure} \includegraphics[scale=0.2]{fig5_NNpatt.pdf} \\ (a) \\ \includegraphics[scale=0.2]{fig7_NNpatt.pdf} \\ (b) \caption{The time evolution of species $\phi$ is displayed with an appropriate color code, on different nodes of the chain and against time. The Brusselator reaction scheme is assumed with the same operating choice of Fig. \ref{fig5}.
Integrating the stochastic dynamics on a sufficiently long chain yields a robust pattern, which persists indefinitely, at variance with its deterministic analogue.} \label{fig6} \end{figure} \begin{figure*} \begin{tabular}{cc} \includegraphics[scale=0.4]{fig7a.pdf} & \includegraphics[scale=0.4]{fig7b.pdf} \\ (a) & (b) \\ \includegraphics[scale=0.4]{fig7c.pdf} & \includegraphics[scale=0.4]{fig7d.pdf} \\ (c) & (d) \\ \end{tabular} \caption{Panels (a) and (c). The time evolution of species $\phi$ is displayed with an appropriate color code, on different nodes of the chain and against time. The vertical dashed line identifies the instant in time when the external noise is turned off: from this point on the system evolves according to a purely deterministic scheme. The Brusselator reaction scheme is assumed with the same parameter choice of Fig. \ref{fig5}, except for $c$. In (a), the pattern eventually fades away. Here $c=2.8$, as in Fig. \ref{fig5}. In (c), the system sediments in a stationary pattern of the Chimera type. Here $c=2.4$. In panels (b) and (d), the density of species $\phi$ is displayed on a few nodes of the collection (one node every five, across the chain). In panel (b) (corresponding to pattern (a)), the system converges to the homogeneous fixed point (black cross). In panel (d) (corresponding to pattern (c)), the system reaches a stable heterogeneous attractor.} \label{fig7} \end{figure*} \section{Quasi-degenerate directed lattice} This section is devoted to generalizing the above analysis to the relevant setting where the degeneracy of the problem is removed by the insertion of return links among adjacent nodes. More specifically, we will assume that an edge with weight $\epsilon$ exists that goes from node $i$ to node $i-1$, for all $i>1$. At the same time, the strength of the corresponding forward link is set to $1-\epsilon$, so as to preserve the node strength (for $i>1$) when modulating $\epsilon$.
In the following we will consider $\epsilon$ to be small. In particular, for $\epsilon \rightarrow 0$ one recovers the limiting case discussed in the previous section. On the other hand, the introduction of a tiny return probability suffices to break the degeneracy of the problem: the $\Omega$ eigenvalues become distinct and the eigenvectors of the Laplacian define a basis which can be used to solve the linear problem (\ref{linear_syst}) that stems from the deterministic version of the inspected model. Denote by $\mathbf{v^{\alpha}}$ the eigenvector of $\mathbf{\Delta}$ relative to the eigenvalue $\Lambda^{(\alpha)}$, with $\alpha=1,\ldots,\Omega$. In formulae, $\sum_{j=1}^{\Omega}\Delta_{ij}v_j^{\alpha}=\Lambda^{(\alpha)}v_i^{\alpha}$. The perturbation $\boldsymbol{\zeta}$ in equation (\ref{linear_syst}) can be expanded as $\zeta_i=\sum_{\alpha=1}^{\Omega} c_{\alpha} e^{\lambda_{\alpha}t}v_i^{\alpha}$, where the constants $c_{\alpha}$ are determined by the initial condition. Inserting this ansatz into the linear system (\ref{linear_syst}) yields the self-consistent condition \begin{equation} \det \begin{pmatrix} f_{\phi}+D_{\phi}\Lambda^{(\alpha)}-\lambda_{\alpha} & f_{\psi} \\ g_{\phi} & g_{\psi}+D_{\psi}\Lambda^{(\alpha)}-\lambda_{\alpha} \end{pmatrix} =0. \end{equation} \begin{figure} \includegraphics[scale=0.2]{disp1.pdf}\\ (a)\\ \includegraphics[scale=0.2]{disp2.pdf}\\ (b)\\ \includegraphics[scale=0.2]{disp3.pdf}\\ (c) \caption{Laplacian eigenvalues on the real axis in correspondence with their respective points on the dispersion relation, for different values of $\epsilon$ [(a) $\epsilon=0.01$, (b) $\epsilon=0.05$, (c) $\epsilon=0.1$] and $\Omega=10$. The solid lines stand for the dispersion curve obtained when placing the system on a continuous spatial support. Vertical dashed lines are a guide for the eye to project the discrete dispersion relation back to the real axis, where the spectrum of the Laplacian falls.
The red circle is positioned at $-1$, the value to which the eigenvalues tend when sending $\epsilon \rightarrow 0$. } \label{fig8} \end{figure} The stability of the homogeneous fixed point can therefore be determined from the above condition, by computing $\lambda_{\alpha}$ as a function of the Laplacian eigenvalues $\Lambda^{(\alpha)}$. This is the generalization of the so-called dispersion relation to a setting where the spatial support is a network. In Fig. \ref{fig8}, we plot the dispersion relation for a chain made of $\Omega=10$ nodes and for different values of $\epsilon$, assuming the Brusselator model as the reference scheme. The reaction parameters are set so as to yield the square symbol in Fig. \ref{fig2}. Remarkably, the spectrum of the Laplacian operator is real and the largest eigenvalue is $\Lambda^{(1)}=0$, as readily follows from its definition. The remaining $\Omega-1$ eigenvalues cluster in the vicinity of $\bar{\Lambda}=-1$: the smaller $\epsilon$, the closer they get to this value, as illustrated in Fig. \ref{fig8}. The case of a degenerate chain can be formally recovered by sending $\epsilon$ to zero, which in turn implies that the non-trivial portion of the dispersion curve, as depicted in Fig. \ref{fig8}, collapses towards an asymptotic attractor located at $(\bar{\Lambda},\lambda(\bar{\Lambda}))$. Building on this observation, it can be shown that the solution of the deterministic linear problem (\ref{linear_syst}), for the system defined on a degenerate chain, can be obtained by performing the limit $\epsilon \rightarrow 0$ of the non-degenerate linear solution. It is hence tempting to speculate that the stochastically driven instability outlined in the preceding section can readily extend to a setting where the chain is non-degenerate, provided $\epsilon$ is sufficiently small. The remaining part of this section is entirely devoted to exploring this generalization.
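The clustering of the Laplacian spectrum and the discrete dispersion relation can be reproduced with a few lines of code. In the sketch below, the Laplacian convention, the Brusselator-like Jacobian entries and the diffusivities are illustrative assumptions, chosen so that the homogeneous state is deterministically stable:

```python
import numpy as np

# Sketch of the quasi-degenerate chain Laplacian and the resulting discrete
# dispersion relation.  Parameter values are illustrative stand-ins, not the
# exact ones behind the figures.
Omega, eps = 10, 0.01
b, c = 2.2, 2.8
J = np.array([[b - 1.0, c],
              [-b,     -c]])      # Jacobian at the homogeneous fixed point
D = np.diag([1.0, 10.0])          # diffusion coefficients of the two species

# Node i receives weight 1 - eps from node i - 1 and eps from node i + 1;
# the Laplacian has zero row sums, hence one eigenvalue is exactly zero.
W = np.zeros((Omega, Omega))
for i in range(Omega - 1):
    W[i + 1, i] = 1.0 - eps       # forward link
    W[i, i + 1] = eps             # weak return link
Delta = W - np.diag(W.sum(axis=1))

Lam = np.sort(np.linalg.eigvals(Delta).real)   # spectrum is real here

# Discrete dispersion relation: for each Laplacian eigenvalue Lambda, the
# growth rate is the largest real part of the eigenvalues of J + Lambda D.
def growth_rate(Lambda):
    return np.linalg.eigvals(J + Lambda * D).real.max()

lam = np.array([growth_rate(L) for L in Lam])
print(list(zip(Lam.round(3), lam.round(3))))
```

For $\epsilon \rightarrow 0$ the $\Omega-1$ clustered eigenvalues collapse onto $\bar{\Lambda}=-1$, recovering the degenerate chain discussed in the previous section.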
Following the strategy discussed above, we can proceed to calculate the stationary values of the moments of the stochastic fluctuations. In Fig. \ref{fig9}, the stationary values of the moments $\delta_i=\langle \zeta_i^2 \rangle$ are normalized to $\delta_1$ and plotted against the node label across the chain. In analogy with the above, the solid line stands for the variance of the fluctuations associated with species $\phi$, while the dashed line stands for species $\psi$. As expected, the fluctuations magnify along the chain, as happens when the system is made to evolve with $\epsilon=0$. The predicted variances agree with the results of the stochastic simulations, up to a critical length of the chain above which the system begins to feel the nonlinearities that steer it towards a non-homogeneous attractor. Noise stabilizes the heterogeneous patterns, which hence become perpetual in the stochastic version of the model, as can be appreciated by inspection of Fig. \ref{fig10}. \begin{figure} \includegraphics[scale=0.2]{fig14_NNpatt.pdf}\\ \caption{$\delta_i/\delta_1$ is plotted against the node number along the chain. Here we set $\epsilon=0.01$. The solid (dashed) line refers to the variance of the fluctuations as predicted for species $\phi$ ($\psi$). The symbols stand for the homologous quantities as computed via direct stochastic simulations. The parameters refer to the blue square displayed in Fig. \ref{fig2}.} \label{fig9} \end{figure} \begin{figure} \includegraphics[scale=0.2]{fig18_NNpatt.pdf} \caption{The time evolution of species $\phi$ is displayed with an appropriate color code, on different nodes of the chain and against time. Here $\epsilon=0.01$ and the Brusselator reaction scheme is assumed with the same parameter choice as in Fig.
\ref{fig9}.} \label{fig10} \end{figure} As a further attempt to grasp the complexity of the phenomenon, we consider again a chain with return links, but assume the weights $\epsilon$ to be random entries drawn from a Gaussian distribution, centered at $\bar{\epsilon}$, with variance $\eta_{\bar{\epsilon}}$. The spectrum of the Laplacian operator is now complex (at variance with the case where the $\epsilon$ are uniform) and the discrete dispersion relation is no longer bound to the idealized continuum curve, see Fig. \ref{fig11}. For a sufficiently small $\bar{\epsilon}$, and modest scattering around the mean, the negative portion of the dispersion relation clusters in the vicinity of the point $(\bar{\Lambda},\lambda(\bar{\Lambda}))$, which stems from the fully degenerate solution. Arguing as above, one can expect that noise and spatial coupling will cooperate also in this setting to yield robust stochastic patterns, in a region of the parameters for which deterministic stability is granted. In Fig. \ref{fig13}, we show that the signal amplification for the stochastic Brusselator model defined on a chain with random return weights is on average identical to that observed when the $\epsilon$ are uniform and set to the average value $\bar{\epsilon}$. The depicted points are computed after averaging over $100$ realizations of the random quasi-degenerate network, and the error in the computed quantities is of the order of the symbol size. The ensuing stochastic pattern is reported in Fig. \ref{fig12}. \begin{figure} \includegraphics[scale=0.2]{disp4.pdf} \caption{Dispersion relation for $\epsilon$ selected randomly, for each pair of nodes, from a Gaussian distribution with mean $\bar{\epsilon}=0.01$ and variance $\eta_{\bar{\epsilon}}=10^{-4}$.
Here the chain is assumed to be $\Omega=80$ nodes long.} \label{fig11} \end{figure} \begin{figure} \includegraphics[scale=0.2]{barre_errore.pdf} \caption{The ratio between the norm of the fluctuations on the last and first nodes of the chain is plotted against the length of the chain. The solid line refers to the norm as predicted analytically. The symbols stand for the homologous quantities as computed via direct stochastic simulations. Blue diamonds refer to fixed weights $\epsilon=0.01$, red squares to random weights (averaging over $25$ realizations) chosen from a Gaussian distribution centered at $\bar{\epsilon}=0.01$, with variance $\eta_{\bar{\epsilon}}=10^{-4}$. The parameters are the same as those in Fig. \ref{fig9}.} \label{fig13} \end{figure} \begin{figure} \includegraphics[scale=0.2]{fig19_NNpatt.pdf} \caption{The time evolution of species $\phi$ is displayed with an appropriate color code, on different nodes of the chain and against time. Here we assume the weights $\epsilon$ to be random entries chosen from a Gaussian distribution centered at $\bar{\epsilon}=0.01$ with variance $\eta_{\bar{\epsilon}}=10^{-4}$. The parameters are the same as those in Fig. \ref{fig9}.} \label{fig12} \end{figure} In conclusion, we have shown that a generic reaction model, which is stable when defined on a continuous or lattice-like support, can turn unstable due to the cooperative interplay of two effects: noise and the quasi-degenerate nature of the generalized Jacobian operator, which reflects the specific spatial support assumed here. An increase in the node number yields a progressive amplification of the fluctuations at the rightmost end of the directed chain, a process which eventually drives the uniform attractor unstable. In reciprocal space, one gains a complementary insight into the scrutinized phenomenon.
Driving stochastically unstable a system that is deemed stable from a deterministic standpoint requires packing a large collection of Laplacian eigenvalues within a bounded domain of the complex plane. The eigenmodes associated with the quasi-degenerate spectrum provide an effective route to convey the instability. One could therefore imagine generating networks prone to the instability by hierarchically assembling nodes in such a way that the associated Laplacian possesses a quasi-degenerate spectrum, according to the above interpretation. To challenge this view in the simplest possible scenario, we implemented a generative scheme which builds on the following steps. First we consider two nodes, linked via a directed edge that goes from the first to the second. The Laplacian associated with this pair displays two eigenvalues, one at zero and the other at $-1$. We then add a third node to the collection. We select at random a node from the pool of existing ones and identify it as the target of a link that originates from the newly added node. The strength of the link is chosen randomly from a Gaussian distribution with given mean and standard deviation. We then compute the spectrum of the obtained network and accept the move if the new eigenvalue falls sufficiently close to $-1$, or reject it otherwise. The procedure is iterated for a maximum number of times which scales extensively with the size of the network. Once the third node is aggregated to the initial pair, we move on to adding the fourth node according to an identical strategy, which is then iterated forward. One example of a network generated according to this procedure is displayed in Fig. \ref{fig14}. This is the skeleton of a distorted one-dimensional directed chain, with short segments added on the lateral sides, and it falls in the class of so-called random directed acyclic graphs. The Brusselator model evolved on this network (see Fig.
\ref{fig15}) returns noise-triggered patterns which share similar features with those obtained earlier. In principle, one could devise more complicated networks that would return a patchy distribution of eigenvalues, engineered to densely populate distinct regions of the complex plane. This extension is left for future work. \begin{figure} \includegraphics[scale=0.2]{netw.pdf} \caption{Example of a network made of $\Omega=80$ nodes, generated with the procedure described in the text. The Brusselator reaction scheme is assumed with the parameter choice of the blue square in Fig. \ref{fig2}.} \label{fig14} \end{figure} \begin{figure} \includegraphics[scale=0.2]{fig22_NNpatt.pdf} \caption{The time evolution of species $\phi$ is displayed with an appropriate color code, on different nodes of the network of Fig. \ref{fig14} and against time.} \label{fig15} \end{figure} \section{Conclusions} To investigate the stability of an equilibrium solution of a given dynamical system, it is customary to perform a linear stability analysis, which aims at characterizing the asymptotic evolution of an imposed perturbation. In doing so, one obtains a Jacobian matrix, evaluated at the fixed point of interest, whose eigenvalues bear information on the stability of the system. If the eigenvalues of the Jacobian matrix display negative real parts, the system is resilient, meaning that it will eventually regain the equilibrium condition by exponentially damping the initial perturbation. Non-normal reactive components may however yield a short-time amplification of the disturbance, before the crossover to the exponential regime that drives the deterministic system back to its stable equilibrium. Particularly interesting is the interplay between noise, assumed to act as a perpetual stochastic forcing, and the inherent non-normality stemming from the existing interactions.
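The short-time amplification produced by a non-normal yet linearly stable Jacobian can be illustrated with a minimal two-dimensional example (the matrix below is purely illustrative and unrelated to the Brusselator parameters):

```python
import numpy as np

# Minimal illustration of non-normal transient amplification: both
# eigenvalues of A are negative, yet the perturbation norm grows before
# the asymptotic exponential decay sets in.
A = np.array([[-1.0, 20.0],
              [ 0.0, -2.0]])
assert np.all(np.linalg.eigvals(A).real < 0)   # linearly stable

# Diagonalize A to evaluate the propagator exp(A t) = V exp(L t) V^{-1}.
L, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

x0 = np.array([0.0, 1.0])                      # unit initial perturbation
ts = np.linspace(0.0, 20.0, 2001)
norms = [np.linalg.norm((V @ np.diag(np.exp(L * t)) @ Vinv) @ x0)
         for t in ts]

print(max(norms))   # transient peak, well above the initial norm of 1
```

Although both eigenvalues are negative, the perturbation norm transiently grows by a factor of about five before the exponential decay takes over; a sustained stochastic forcing keeps re-exciting this transient, which is the essence of the noise-driven amplification discussed above.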
Patterns have been reported to occur, for instance, in spatially extended stochastic models with a pronounced degree of non-normal reactivity, outside the region of deterministic instability. Building on these premises, we have here taken one step forward, towards the interesting setting where the degree of inherent non-normality is magnified by the embedding spatial support. Indeed, by replicating a two-species model on diffusively coupled patches of a directed lattice, we enhanced the ability of the system to amplify perturbations at short times. This effect is the byproduct of the degeneracy in the spectrum of the Jacobian matrix associated with the examined system, which ultimately reflects the architecture of the couplings between adjacent units. A non-trivial amplification of the noise across the lattice is observed and explained, which materializes in self-organized patterns that are instead lacking in the deterministic analogue of the analyzed model. Our conclusions are then extended to a quasi-degenerate support: the ingredient that we have identified as crucial for the onset of the amplification is the presence of a compact region in the complex plane where the eigenvalues of the Laplacian operator accumulate. Beyond a critical size of the system, expressed in terms of the number of nodes that define the support, the system may lose its deterministic resilience. In fact, it can eventually migrate towards another attractor, which is stably maintained even when the noise forcing is turned off. Taken all together, our investigations point to the importance of properly accounting for the unavoidable sources of stochasticity when gauging the resilience of a system: non-normality and quasi-degenerate networks might dramatically alter the deterministic prediction, turning unstable a system that would otherwise be presumed stable under a conventional deterministic perspective.
Furthermore, our findings bear a remarkable similarity to the phenomenon of convective instability \cite{deissler, lepri1, lepri2, jiotsa}, a possible connection that we aim to investigate in a future contribution.
\section{Convolutional Neural Networks for Dense Image Labeling} \label{sec:convnets} Herein we describe how we have re-purposed and finetuned the publicly available Imagenet-pretrained state-of-the-art 16-layer classification network of \cite{simonyan2014very} (VGG-16) into an efficient and effective dense feature extractor for our dense semantic image segmentation system. \subsection{Efficient Dense Sliding Window Feature Extraction with the Hole Algorithm} \label{sec:convnet-hole} Dense spatial score evaluation is instrumental in the success of our dense CNN feature extractor. As a first step to implement this, we convert the fully-connected layers of VGG-16 into convolutional ones and run the network in a convolutional fashion on the image at its original resolution. However, this is not enough, as it yields very sparsely computed detection scores (with a stride of 32 pixels). To compute scores more densely at our target stride of 8 pixels, we develop a variation of the method previously employed by \citet{GCMG+13, sermanet2013overfeat}. We skip subsampling after the last two max-pooling layers in the network of \citet{simonyan2014very} and modify the convolutional filters in the layers that follow them by introducing zeros to increase their length (\by{2}{} in the last three convolutional layers and \by{4}{} in the first fully connected layer). We can implement this more efficiently by keeping the filters intact and instead sparsely sampling the feature maps on which they are applied, using an input stride of 2 or 4 pixels, respectively. This approach, illustrated in \figref{fig:hole}, is known as the `hole algorithm' (`atrous algorithm') and has been developed before for efficient computation of the undecimated wavelet transform \cite{Mall99}.
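The equivalence exploited here can be checked in one dimension with a few lines of numpy (a minimal sketch, not the actual Caffe \textsl{im2col} code): convolving the input with a zero-inflated filter gives the same result as gathering input samples with a stride at every output position.

```python
import numpy as np

# 1-D sketch of the hole ("atrous") trick: convolving with a filter whose
# taps are spread apart by inserting zeros is equivalent to applying the
# original filter on input samples taken with a stride (input_stride = 2).
rng = np.random.default_rng(0)
x = rng.standard_normal(32)           # 1-D feature map
w = np.array([1.0, -2.0, 0.5])        # kernel_size = 3

# Version 1: dilate the kernel by inserting zeros between its taps.
w_holes = np.zeros(5)                 # effective size 5 for a dilation of 2
w_holes[::2] = w
y_dilated = np.convolve(x, w_holes, mode='valid')

# Version 2: keep the kernel intact, but gather input samples with stride 2
# at every output position (what the modified im2col does).
y_strided = np.array([x[n:n + 5:2] @ w[::-1] for n in range(len(x) - 4)])

assert np.allclose(y_dilated, y_strided)
```

Either formulation yields the same dense output; the second one avoids multiplications by the inserted zeros, which is why it is the more efficient implementation.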
We have implemented this within the Caffe framework \citep{jia2014caffe} by adding to the \textsl{im2col} function (which converts multi-channel feature maps to vectorized patches) the option to sparsely sample the underlying feature map. This approach is generally applicable and allows us to efficiently compute dense CNN feature maps at any target subsampling rate without introducing any approximations. \begin{figure} \centering \includegraphics[width=0.5\linewidth]{fig/atrous2.pdf} \caption{Illustration of the hole algorithm in 1-D, when \textsl{kernel\_size = 3}, \textsl{input\_stride = 2}, and \textsl{output\_stride = 1}.} \label{fig:hole} \end{figure} We finetune the model weights of the Imagenet-pretrained VGG-16 network to adapt it to the semantic image segmentation task in a straightforward fashion, following the procedure of \citet{long2014fully}. We replace the 1000-way Imagenet classifier in the last layer of VGG-16 with a 21-way one. Our loss function is the sum of cross-entropy terms for each spatial position in the CNN output map (subsampled by 8 compared to the original image). All positions and labels are equally weighted in the overall loss function. Our targets are the ground truth labels (subsampled by 8). We optimize the objective function with respect to the weights at all network layers by the standard SGD procedure of \citet{KrizhevskyNIPS2013}. During testing, we need class score maps at the original image resolution. As illustrated in Figure~\ref{fig:score-maps} and further elaborated in Section~\ref{sec:local-chal}, the class score maps (corresponding to log-probabilities) are quite smooth, which allows us to use simple bilinear interpolation to increase their resolution by a factor of 8 at a negligible computational cost. Note that the method of \citet{long2014fully} does not use the hole algorithm and produces very coarse scores (subsampled by a factor of 32) at the CNN output.
This forced them to use learned upsampling layers, significantly increasing the complexity and training time of their system: Fine-tuning our network on PASCAL VOC 2012 takes about 10 hours, while they report a training time of several days (both timings on a modern GPU). \subsection{Controlling the Receptive Field Size and Accelerating Dense Computation with Convolutional Nets} \label{sec:convnet-field} Another key ingredient in re-purposing our network for dense score computation is explicitly controlling the network's receptive field size. Most recent DCNN-based image recognition methods rely on networks pre-trained on the Imagenet large-scale classification task. These networks typically have large receptive field size: in the case of the VGG-16 net we consider, its receptive field is \by{224}{224} (with zero-padding) and \by{404}{404} pixels if the net is applied convolutionally. After converting the network to a fully convolutional one, the first fully connected layer has 4,096 filters of large \by{7}{7} spatial size and becomes the computational bottleneck in our dense score map computation. We have addressed this practical problem by spatially subsampling (by simple decimation) the first FC layer to \by{4}{4} (or \by{3}{3}) spatial size. This has reduced the receptive field of the network down to \by{128}{128} (with zero-padding) or \by{308}{308} (in convolutional mode) and has reduced computation time for the first FC layer by $2 - 3$ times. Using our Caffe-based implementation and a Titan GPU, the resulting VGG-derived network is very efficient: Given a \by{306}{306} input image, it produces \by{39}{39} dense raw feature scores at the top of the network at a rate of about 8 frames/sec during testing. The speed during training is 3 frames/sec. 
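The receptive-field numbers quoted above follow from the standard layer-by-layer recurrence (rf += (kernel - 1) * jump; jump *= stride). The sketch below applies it to a simplified VGG-16-style stack (the layer list is an assumption, not read off the released model definition):

```python
# Receptive-field bookkeeping for a VGG-16-style stack (simplified layer
# list, assumed rather than extracted from the released prototxt).
def receptive_field(fc6_kernel):
    # (kernel, stride) for 5 blocks of 3x3 convs followed by 2x2 pooling.
    convs_per_block = [2, 2, 3, 3, 3]
    layers = []
    for n in convs_per_block:
        layers += [(3, 1)] * n + [(2, 2)]
    layers.append((fc6_kernel, 1))     # first FC layer run convolutionally
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump           # growth contributed by this layer
        jump *= s                      # distance between adjacent outputs
    return rf

print(receptive_field(7))   # 404: original 7x7 fc6 filters
print(receptive_field(4))   # 308: fc6 decimated to 4x4, as in the paper
```

This reproduces the $\by{404}{404}$ figure for the original net and shows how decimating the first FC layer to $\by{4}{4}$ shrinks the convolutional-mode receptive field to $\by{308}{308}$.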
We have also successfully experimented with reducing the number of channels at the fully connected layers from 4,096 down to 1,024, considerably further decreasing computation time and memory footprint without sacrificing performance, as detailed in Section~\ref{sec:experiments}. Using smaller networks such as that of \citet{KrizhevskyNIPS2013} could allow video-rate test-time dense feature computation even on light-weight GPUs. \section{Detailed Boundary Recovery: Fully-Connected Conditional Random Fields and Multi-scale Prediction} \label{sec:boundary-recovery} \subsection{Deep Convolutional Networks and the Localization Challenge} \label{sec:local-chal} As illustrated in Figure~\ref{fig:score-maps}, DCNN score maps can reliably predict the presence and rough position of objects in an image but are less well suited for pinpointing their exact outline. There is a natural trade-off between classification accuracy and localization accuracy with convolutional networks: deeper models with multiple max-pooling layers have proven most successful in classification tasks; however, their increased invariance and large receptive fields make the problem of inferring position from the scores at their top output levels more challenging. Recent work has pursued two directions to address this localization challenge. The first approach is to harness information from multiple layers in the convolutional network in order to better estimate the object boundaries \citep{long2014fully, eigen2014predicting}. The second approach is to employ a super-pixel representation, essentially delegating the localization task to a low-level segmentation method. This route is followed by the very successful recent method of \citet{mostajabi2014feedforward}.
In Section~\ref{sec:dense-crf}, we pursue a novel alternative direction based on coupling the recognition capacity of DCNNs and the fine-grained localization accuracy of fully connected CRFs and show that it is remarkably successful in addressing the localization challenge, producing accurate semantic segmentation results and recovering object boundaries at a level of detail that is well beyond the reach of existing methods. \subsection{Fully-Connected Conditional Random Fields for Accurate Localization} \label{sec:dense-crf} \begin{figure}[ht] \centering \begin{tabular}{c c c c c} \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/2007_007470.jpg} & \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Score_Class1_Itr0.pdf} & \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Score_Class1_Itr1.pdf} & \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Score_Class1_Itr2.pdf} & \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Score_Class1_Itr10.pdf} \\ \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/2007_007470.png} & \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Belief_Class1_Itr0.pdf} & \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Belief_Class1_Itr1.pdf} & \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Belief_Class1_Itr2.pdf} & \includegraphics[width=0.16\linewidth]{fig/mean_field_illustration/Belief_Class1_Itr10.pdf} \\ Image/G.T. & DCNN output & CRF Iteration 1 & CRF Iteration 2 & CRF Iteration 10 \\ \end{tabular} \caption{Score map (input before softmax function) and belief map (output of softmax function) for Aeroplane. We show the score (1st row) and belief (2nd row) maps after each mean field iteration. The output of the last DCNN layer is used as input to the mean field inference.
Best viewed in color.} \label{fig:score-maps} \end{figure} Traditionally, conditional random fields (CRFs) have been employed to smooth noisy segmentation maps \cite{rother2004grabcut, kohli2009robust}. Typically these models contain energy terms that couple neighboring nodes, favoring same-label assignments to spatially proximal pixels. Qualitatively, the primary function of these short-range CRFs has been to clean up the spurious predictions of weak classifiers built on top of local hand-engineered features. Compared to these weaker classifiers, modern DCNN architectures such as the one we use in this work produce score maps and semantic label predictions which are qualitatively different. As illustrated in Figure~\ref{fig:score-maps}, the score maps are typically quite smooth and produce homogeneous classification results. In this regime, using short-range CRFs can be detrimental, as our goal should be to recover detailed local structure rather than further smooth it. Using contrast-sensitive potentials \cite{rother2004grabcut} in conjunction with local-range CRFs can potentially improve localization, but still misses thin structures and typically requires solving an expensive discrete optimization problem. To overcome these limitations of short-range CRFs, we integrate into our system the fully connected CRF model of \citet{krahenbuhl2011efficient}. The model employs the energy function \begin{align} E(\boldsymbol{x}) = \sum_i \theta_i(x_i) + \sum_{ij} \theta_{ij}(x_i, x_j) \end{align} where $\boldsymbol{x}$ is the label assignment for pixels. We use as unary potential $\theta_i(x_i) = - \log P(x_i)$, where $P(x_i)$ is the label assignment probability at pixel $i$ as computed by the DCNN. The pairwise potential is $\theta_{ij}(x_i, x_j) = \mu(x_i,x_j)\sum_{m=1}^{K} w_m \cdot k^m(\boldsymbol{f}_i, \boldsymbol{f}_j)$, where $\mu(x_i,x_j)=1 \text{ if } x_i \neq x_j$, and zero otherwise (\ie, Potts model).
There is one pairwise term for each pair of pixels $i$ and $j$ in the image, no matter how far from each other they lie, \ie, the model's factor graph is fully connected. Each $k^m$ is a Gaussian kernel that depends on features (denoted as $\boldsymbol{f}$) extracted for pixels $i$ and $j$, and is weighted by the parameter $w_m$. We adopt bilateral position and color terms; specifically, the kernels are \begin{align} \label{eq:fully_crf} w_1 \exp \Big(-\frac{||p_i-p_j||^2}{2\sigma_\alpha^2} -\frac{||I_i-I_j||^2}{2\sigma_\beta^2} \Big) + w_2 \exp \Big(-\frac{||p_i-p_j||^2}{2\sigma_\gamma^2}\Big) \end{align} where the first kernel depends on both pixel positions (denoted as $p$) and pixel color intensities (denoted as $I$), and the second kernel only depends on pixel positions. The hyperparameters $\sigma_\alpha$, $\sigma_\beta$ and $\sigma_\gamma$ control the ``scale'' of the Gaussian kernels. Crucially, this model is amenable to efficient approximate probabilistic inference \citep{krahenbuhl2011efficient}. The message passing updates under a fully decomposable mean field approximation $b(\boldsymbol{x}) = \prod_i b_i(x_i)$ can be expressed as convolutions with a Gaussian kernel in feature space. High-dimensional filtering algorithms \citep{adams2010fast} significantly speed up this computation, resulting in an algorithm that is very fast in practice: less than 0.5 sec on average for Pascal VOC images using the publicly available implementation of \citep{krahenbuhl2011efficient}. \begin{figure} \centering \includegraphics[width=0.7\linewidth]{fig/model_illustration3.pdf} \caption{Model illustration. The coarse score map from the Deep Convolutional Neural Network (with fully convolutional layers) is upsampled by bilinear interpolation. A fully connected CRF is applied to refine the segmentation result.
Best viewed in color.} \label{fig:ModelIllustration} \end{figure} \subsection{Multi-Scale Prediction} \label{sec:multiscale} Following the promising recent results of \cite{hariharan2014hypercolumns, long2014fully}, we have also explored a multi-scale prediction method to increase the boundary localization accuracy. Specifically, we attach to the input image and the output of each of the first four max pooling layers a two-layer MLP (first layer: 128 \by{3}{3} convolutional filters, second layer: 128 \by{1}{1} convolutional filters) whose feature map is concatenated to the main network's last layer feature map. The aggregate feature map fed into the softmax layer is thus enhanced by $5 \times 128 = 640$ channels. We only adjust the newly added weights, keeping the other network parameters fixed at the values learned by the method of Section~\ref{sec:convnets}. As discussed in the experimental section, introducing these extra direct connections from fine-resolution layers improves localization performance, yet the effect is not as dramatic as the one obtained with the fully-connected CRF. \section{Discussion} \label{sec:discussion} Our work combines ideas from deep convolutional neural networks and fully-connected conditional random fields, yielding a novel method able to produce semantically accurate predictions and detailed segmentation maps, while being computationally efficient. Our experimental results show that the proposed method significantly advances the state-of-the-art in the challenging PASCAL VOC 2012 semantic image segmentation task. There are multiple aspects of our model that we intend to refine, such as fully integrating its two main components (CNN and CRF) and training the whole system in an end-to-end fashion, similar to \citet{Koltun13, chen2014learning, zheng2015crfrnn}. We also plan to experiment with more datasets and apply our method to other sources of data such as depth maps or videos.
Recently, we have pursued model training with weakly supervised annotations, in the form of bounding boxes or image-level labels \citep{papandreou15weak}. At a higher level, our work lies in the intersection of convolutional neural networks and probabilistic graphical models. We plan to further investigate the interplay of these two powerful classes of methods and explore their synergistic potential for solving challenging computer vision tasks. \subsection*{Acknowledgments} This work was partly supported by ARO 62250-CS, NIH Grant 5R01EY022247-03, EU Project RECONFIG FP7-ICT-600825 and EU Project MOBOT FP7-ICT-2011-600796. We also gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research. We would like to thank the anonymous reviewers for their detailed comments and constructive feedback. \subsection*{Paper Revisions} Here we present the list of major paper revisions for the convenience of the readers. \paragraph{v1} Submission to ICLR 2015. Introduces the model DeepLab-CRF, which attains the performance of $66.4\%$ on the PASCAL VOC 2012 test set. \paragraph{v2} Rebuttal for ICLR 2015. Adds the model DeepLab-MSc-CRF, which incorporates multi-scale features from the intermediate layers. DeepLab-MSc-CRF yields the performance of $67.1\%$ on the PASCAL VOC 2012 test set. \paragraph{v3} Camera-ready for ICLR 2015. Experiments with large Field-Of-View. On the PASCAL VOC 2012 test set, DeepLab-CRF-LargeFOV achieves the performance of $70.3\%$. When exploiting both multi-scale features and large FOV, DeepLab-MSc-CRF-LargeFOV attains the performance of $71.6\%$. \paragraph{v4} Reference to our updated ``DeepLab'' system \cite{chen2016deeplab} with much improved results. \section{Introduction} \label{sec:intro} Deep Convolutional Neural Networks (DCNNs) have been the method of choice for document recognition since \citet{LeCun1998}, but have only recently become mainstream in high-level vision research.
Over the past two years DCNNs have pushed the performance of computer vision systems to soaring heights on a broad array of high-level problems, including image classification \citep{KrizhevskyNIPS2013, sermanet2013overfeat, simonyan2014very, szegedy2014going, papandreou2014untangling}, object detection \citep{girshick2014rcnn}, and fine-grained categorization \citep{zhang2014part}, among others. A common theme in these works is that DCNNs trained in an end-to-end manner deliver strikingly better results than systems relying on carefully engineered representations, such as SIFT or HOG features. This success can be partially attributed to the built-in invariance of DCNNs to local image transformations, which underpins their ability to learn hierarchical abstractions of data \citep{zeiler2014visualizing}. While this invariance is clearly desirable for high-level vision tasks, it can hamper low-level tasks, such as pose estimation \citep{chen2014articulated, tompson2014joint} and semantic segmentation, where we want precise localization rather than abstraction of spatial details. There are two technical hurdles in the application of DCNNs to image labeling tasks: signal downsampling and spatial `insensitivity' (invariance). The first problem relates to the reduction of signal resolution incurred by the repeated combination of max-pooling and downsampling (`striding') performed at every layer of standard DCNNs \citep{KrizhevskyNIPS2013, simonyan2014very, szegedy2014going}. Instead, as in \citet{papandreou2014untangling}, we employ the `atrous' (with holes) algorithm originally developed for efficiently computing the undecimated discrete wavelet transform \cite{Mall99}. This allows efficient dense computation of DCNN responses in a scheme substantially simpler than earlier solutions to this problem \cite{GCMG+13, sermanet2013overfeat}.
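To make the `atrous' idea concrete, here is a minimal one-dimensional sketch (our own illustration, not the paper's implementation): the filter taps are spaced \emph{rate} samples apart, so the receptive field grows without any downsampling of the input.

```python
import numpy as np

def atrous_conv1d(signal, kernel, rate):
    """1-D 'atrous' convolution (correlation form): the kernel taps are
    spaced `rate` samples apart, enlarging the receptive field without
    reducing the input resolution. Illustrative sketch only."""
    k = len(kernel)
    span = (k - 1) * rate  # effective kernel extent in input samples
    out = np.zeros(len(signal) - span)
    for i in range(len(out)):
        # sample the input through 'holes' of size `rate`
        out[i] = sum(kernel[j] * signal[i + j * rate] for j in range(k))
    return out

x = np.arange(8.0) ** 2             # toy input: 0, 1, 4, 9, ...
w = np.array([1.0, -2.0, 1.0])      # toy 3-tap second-difference filter
print(atrous_conv1d(x, w, rate=1))  # [2. 2. 2. 2. 2. 2.]
print(atrous_conv1d(x, w, rate=2))  # [8. 8. 8. 8.]
```

At rate 1 this reduces to an ordinary convolution; at rate 2 the same 3-tap filter covers a span of 5 input samples, which is the mechanism the atrous scheme uses to keep responses dense while enlarging the field of view.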
The second problem relates to the fact that obtaining object-centric decisions from a classifier requires invariance to spatial transformations, inherently limiting the spatial accuracy of the DCNN model. We boost our model's ability to capture fine details by employing a fully-connected Conditional Random Field (CRF). Conditional Random Fields have been broadly used in semantic segmentation to combine class scores computed by multi-way classifiers with the low-level information captured by the local interactions of pixels and edges \citep{rother2004grabcut, shotton2009textonboost} or superpixels \citep{lucchi2011spatial}. Even though works of increased sophistication have been proposed to model the hierarchical dependency \citep{he2004multiscale, ladicky2009associative, lempitsky2011pylon} and/or high-order dependencies of segments \citep{delong2012fast, gonfaus2010harmony, kohli2009robust, CPY13, Wang15}, we use the fully connected pairwise CRF proposed by \citet{krahenbuhl2011efficient} for its efficient computation, and ability to capture fine edge details while also catering for long range dependencies. That model was shown in \citet{krahenbuhl2011efficient} to largely improve the performance of a boosting-based pixel-level classifier, and in our work we demonstrate that it leads to state-of-the-art results when coupled with a DCNN-based pixel-level classifier. The three main advantages of our ``DeepLab'' system are (i) speed: by virtue of the `atrous' algorithm, our dense DCNN operates at 8 fps, while mean field inference for the fully-connected CRF requires 0.5 seconds, (ii) accuracy: we obtain state-of-the-art results on the PASCAL semantic segmentation challenge, outperforming the second-best approach of \citet{mostajabi2014feedforward} by a margin of 7.2$\%$ and (iii) simplicity: our system is composed of a cascade of two fairly well-established modules, DCNNs and CRFs.
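For intuition about the CRF component, the mean-field scheme can be prototyped naively in $O(N^2)$ for a handful of ``pixels''; this is purely our own illustrative sketch with toy parameters, whereas the real system relies on the fast high-dimensional filtering of Kr\"ahenb\"uhl and Koltun.

```python
import numpy as np

def dense_crf_meanfield(unary, pos, rgb, w1=5.0, w2=3.0,
                        sa=10.0, sb=10.0, sg=3.0, iters=5):
    """Naive O(N^2) mean field for a fully connected Potts CRF with the
    bilateral + spatial Gaussian kernels. Workable only for tiny inputs;
    illustrative sketch, not the paper's implementation."""
    dp = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)  # position dists
    dc = ((rgb[:, None, :] - rgb[None, :, :]) ** 2).sum(-1)  # color dists
    K = w1 * np.exp(-dp / (2 * sa**2) - dc / (2 * sb**2)) \
        + w2 * np.exp(-dp / (2 * sg**2))
    np.fill_diagonal(K, 0.0)             # no self-interaction
    Q = np.exp(-unary)
    Q /= Q.sum(1, keepdims=True)         # initialise with softmax of unaries
    for _ in range(iters):
        msg = K @ Q                      # message passing = kernel filtering
        # Potts compatibility: sum_j K_ij (1 - Q_j(l)) = rowsum(K) - (KQ)_l
        Q = np.exp(-unary - (msg.sum(1, keepdims=True) - msg))
        Q /= Q.sum(1, keepdims=True)
    return Q

unary = np.array([[0.0, 5.0],   # pixel 0: strongly label 0
                  [0.0, 5.0],   # pixel 1: strongly label 0
                  [2.0, 0.0]])  # pixel 2: weakly label 1
pos = np.array([[0.0], [1.0], [2.0]])   # 1-D 'positions' for brevity
rgb = np.zeros((3, 1))                  # identical colors
Q = dense_crf_meanfield(unary, pos, rgb)
print(Q.argmax(1))   # [0 0 0]
```

In the demo the third pixel weakly prefers label 1 from its unary term alone, but the Gaussian pairwise coupling to its two similar neighbours flips it to label 0, which is the smoothing behaviour the CRF contributes on top of the DCNN scores.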
\section{Related Work} Our system works directly on the pixel representation, similarly to \citet{long2014fully}. This is in contrast to the two-stage approaches that are now most common in semantic segmentation with DCNNs: such techniques typically use a cascade of bottom-up image segmentation and DCNN-based region classification, which makes the system commit to potential errors of the front-end segmentation system. For instance, the bounding box proposals and masked regions delivered by \citet{arbelaez2014multiscale, Uijlings13} are used in \citet{girshick2014rcnn} and \citet{hariharan2014simultaneous} as inputs to a DCNN to introduce shape information into the classification process. Similarly, the authors of \citet{mostajabi2014feedforward} rely on a superpixel representation. A celebrated non-DCNN precursor to these works is the second order pooling method of \citet{carreira2012semantic}, which also assigns labels to the region proposals delivered by \citet{carreira2012cpmc}. Understanding the perils of committing to a single segmentation, the authors of \citet{cogswell2014combining} build on \citet{yadollahpour2013discriminative} to explore a diverse set of CRF-based segmentation proposals, also computed by \citet{carreira2012cpmc}. These segmentation proposals are then re-ranked according to a DCNN trained in particular for this reranking task. Even though this approach explicitly tries to handle the temperamental nature of a front-end segmentation algorithm, there is still no explicit exploitation of the DCNN scores in the CRF-based segmentation algorithm: the DCNN is only applied post-hoc, while it would make sense to directly try to use its results {\em during} segmentation. Moving towards works that lie closer to our approach, several other researchers have considered the use of convolutionally computed DCNN features for dense image labeling.
Among the first were \citet{farabet2013learning}, who apply DCNNs at multiple image resolutions and then employ a segmentation tree to smooth the prediction results; more recently, \citet{hariharan2014hypercolumns} propose to concatenate the computed intermediate feature maps within the DCNNs for pixel classification, and \citet{dai2014convolutional} propose to pool the intermediate feature maps by region proposals. Even though these works still employ segmentation algorithms that are decoupled from the DCNN classifier's results, we believe it is advantageous that segmentation is only used at a later stage, avoiding the commitment to premature decisions. More recently, the segmentation-free techniques of \citep{long2014fully, eigen2014predicting} directly apply DCNNs to the whole image in a sliding window fashion, replacing the last fully connected layers of a DCNN by convolutional layers. In order to deal with the spatial localization issues outlined in the beginning of the introduction, \citet{long2014fully} upsample and concatenate the scores from intermediate feature maps, while \citet{eigen2014predicting} refine the prediction result from coarse to fine by propagating the coarse results to another DCNN. The main difference between our model and other state-of-the-art models is the combination of pixel-level CRFs and DCNN-based `unary terms'. Focusing on the closest works in this direction, \citet{cogswell2014combining} use CRFs as a proposal mechanism for a DCNN-based reranking system, while \citet{farabet2013learning} treat superpixels as nodes for a local pairwise CRF and use graph-cuts for discrete inference; as such their results can be limited by errors in superpixel computations, while ignoring long-range superpixel dependencies. Our approach instead treats every pixel as a CRF node, exploits long-range dependencies, and uses CRF inference to directly optimize a DCNN-driven cost function.
We note that mean field inference has been extensively studied for traditional image segmentation/edge detection tasks, \eg, \citep{geiger1991parallel, geiger1991common, kokkinos2008computational}, but recently \citet{krahenbuhl2011efficient} showed that the inference can be very efficient for fully connected CRFs and particularly effective in the context of semantic segmentation. After the first version of our manuscript was made publicly available, it came to our attention that two other groups had independently and concurrently pursued a very similar direction, combining DCNNs and densely connected CRFs \citep{bell2014material, zheng2015crfrnn}. There are several differences in technical aspects of the respective models. \citet{bell2014material} focus on the problem of material classification, while \citet{zheng2015crfrnn} unroll the CRF mean-field inference steps to convert the whole system into an end-to-end trainable feed-forward network. We have updated our proposed ``DeepLab'' system with much improved methods and results in our latest work \cite{chen2016deeplab}. We refer the interested reader to that paper for details. \subsection{Related Work} Our model is mostly related to two fields, and the goal of this work is to combine the best of both. {\bf{Conditional Random Fields for segmentation: }} Many semantic segmentation methods rely on Conditional Random Fields (CRFs), which model the local interactions between pixels \citep{rother2004grabcut, shotton2009textonboost} or superpixels \citep{lucchi2011spatial} via the pairwise potential. Several works have been proposed to model the hierarchical dependency \citep{he2004multiscale, ladicky2009associative, lempitsky2011pylon} and the high-order potential (in addition to the pairwise potential) \citep{delong2012fast, gonfaus2010harmony, kohli2009robust, krahenbuhl2011efficient}.
Our model makes use of the efficient inference algorithm in the fully connected CRF proposed by \citet{krahenbuhl2011efficient} for its ability to capture long-range dependencies. Note that mean field inference has been extensively studied for traditional image segmentation/edge detection tasks \citep{geiger1991common, geiger1991parallel, kokkinos2008computational}, but \citet{krahenbuhl2011efficient} showed that it can be very efficient in the context of semantic segmentation. {\bf{Deep Convolutional Neural Network for segmentation: }} Most of the systems built on top of DCNNs classify either a single object label for an entire image \citep{KrizhevskyNIPS2013, simonyan2014very, szegedy2014going}, or several object labels for bounding boxes within an image \citep{papandreou2014untangling, girshick2014rcnn}. Recently, several works have attempted to semantically segment an image with DCNNs. \citet{girshick2014rcnn, hariharan2014simultaneous} take both bounding box proposals and masked regions as input to the DCNN. The masked regions provide extra object shape information to the neural networks. These methods heavily depend on the region proposals \citep{arbelaez2014multiscale, Uijlings13}. \citet{farabet2013learning} apply a DCNN to multiple resolutions of the input image and employ a segmentation tree to smooth the prediction results. We note that there are several concurrent works which bear similarities to our proposed model. \citet{mostajabi2014feedforward} propose to combine the multi-scale cues extracted by a DCNN (and some hand-crafted features) to label one superpixel. Similarly, their results depend on the superpixels, which may bleed across object boundaries, producing errors that are difficult to recover from. \citet{hariharan2014hypercolumns} propose to concatenate the computed intermediate feature maps within the DCNN for pixel classification, and \citet{dai2014convolutional} propose to pool the intermediate feature maps by region proposals.
Their models are two-step models, which depend on the accuracy of the first step (\ie region proposals). On the other hand, our model is most similar to \citet{long2014fully, eigen2014predicting}, which directly apply DCNNs to the whole image in a sliding window fashion. The last fully connected layers within the DCNN are replaced by convolutional layers. \citet{long2014fully} upsample and concatenate the scores from intermediate feature maps, while \citet{eigen2014predicting} refine the prediction result from coarse to fine by propagating the coarse results to another DCNN. {\bf{Combining Graphical Models and DCNNs: }} The main difference between our model and other state-of-the-art models is the combination of graphical models and DCNNs. Other works that are similar to ours include \citet{cogswell2014combining}, which employs a CRF to propose diverse region proposals \citep{yadollahpour2013discriminative} and extracts features via a DCNN, and \citet{chen2014learning}, which efficiently blends inference and learning to jointly train the DCNN and CRF. Our current model employs piecewise training, and joint training is our next goal. \section{Experimental Evaluation} \label{sec:experiments} \paragraph{Dataset} We test our DeepLab model on the PASCAL VOC 2012 segmentation benchmark \citep{everingham2014pascal}, consisting of 20 foreground object classes and one background class. The original dataset contains $1,464$, $1,449$, and $1,456$ images for training, validation, and testing, respectively. The dataset is augmented by the extra annotations provided by \citet{hariharan2011semantic}, resulting in $10,582$ training images. The performance is measured in terms of pixel intersection-over-union (IOU) averaged across the 21 classes. \paragraph{Training} We adopt the simplest form of piecewise training, decoupling the DCNN and CRF training stages, assuming the unary terms provided by the DCNN are fixed during CRF training.
For DCNN training we employ the VGG-16 network which has been pre-trained on ImageNet. We fine-tuned the VGG-16 network on the VOC 21-way pixel-classification task by stochastic gradient descent on the cross-entropy loss function, as described in Section~\ref{sec:convnet-hole}. We use a mini-batch of 20 images and an initial learning rate of $0.001$ ($0.01$ for the final classifier layer), multiplying the learning rate by 0.1 every 2000 iterations. We use momentum of $0.9$ and a weight decay of $0.0005$. After the DCNN has been fine-tuned, we cross-validate the parameters of the fully connected CRF model in \equref{eq:fully_crf} along the lines of \citet{krahenbuhl2011efficient}. We use the default values of $w_2 = 3$ and $\sigma_\gamma = 3$ and we search for the best values of $w_1$, $\sigma_\alpha$, and $\sigma_\beta$ by cross-validation on a small subset of the validation set (we use 100 images). We employ a coarse-to-fine search scheme. Specifically, the initial search ranges of the parameters are $w_1 \in [5, 10]$, $\sigma_\alpha \in [50:10:100]$ and $\sigma_\beta \in [3:1:10]$ (MATLAB notation), and we then refine the search step sizes around the first round's best values. We fix the number of mean field iterations to 10 for all reported experiments.
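The coarse-to-fine cross-validation described above amounts to two nested grid sweeps; a small sketch follows, where the scoring function is a toy stand-in for mean IOU on the 100 held-out validation images (all names are ours):

```python
import itertools

def coarse_to_fine_search(score_fn, grids, refine_step):
    """Two-round grid search: a coarse sweep over the given ranges,
    then a finer sweep centred on the best coarse setting.
    Illustrative sketch of the scheme, not the paper's code."""
    best = max(itertools.product(*grids), key=score_fn)
    fine_grids = [[b - s, b, b + s] for b, s in zip(best, refine_step)]
    return max(itertools.product(*fine_grids), key=score_fn)

# toy objective with optimum near (w1, sigma_alpha, sigma_beta) = (7, 72, 6)
score = lambda p: -((p[0] - 7) ** 2 + ((p[1] - 72) / 10) ** 2 + (p[2] - 6) ** 2)
grids = [range(5, 11), range(50, 101, 10), range(3, 11)]  # paper's coarse ranges
print(coarse_to_fine_search(score, grids, refine_step=(1, 5, 1)))  # (7, 70, 6)
```

With three parameters and small ranges the total number of CRF evaluations stays modest, which is why restricting the cross-validation to a 100-image subset is practical.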
\begin{table}[t] \centering \begin{tabular}{c c} \hspace{-0.7cm} \raisebox{0cm}{ \begin{tabular}{l | c} Method & mean IOU (\%) \\ \hline \hline DeepLab & 59.80 \\ DeepLab-CRF & 63.74 \\ \hline DeepLab-MSc & 61.30 \\ DeepLab-MSc-CRF & 65.21 \\ \hline \hline DeepLab-7x7 & 64.38 \\ DeepLab-CRF-7x7 & 67.64 \\ \hline DeepLab-LargeFOV & 62.25 \\ DeepLab-CRF-LargeFOV & 67.64 \\ \hline DeepLab-MSc-LargeFOV & 64.21 \\ DeepLab-MSc-CRF-LargeFOV & 68.70 \\ \end{tabular} } & \raisebox{0.4cm}{ \begin{tabular}{l | c} Method & mean IOU (\%) \\ \hline \hline MSRA-CFM & 61.8 \\ FCN-8s & 62.2 \\ TTI-Zoomout-16 & 64.4 \\ \hline \hline DeepLab-CRF & 66.4 \\ DeepLab-MSc-CRF & 67.1 \\ DeepLab-CRF-7x7 & 70.3 \\ DeepLab-CRF-LargeFOV & 70.3 \\ DeepLab-MSc-CRF-LargeFOV & 71.6 \\ \end{tabular} } \\ (a) & (b) \end{tabular} \caption{(a) Performance of our proposed models on the PASCAL VOC 2012 `val' set (with training in the augmented `train' set). The best performance is achieved by exploiting both multi-scale features and large field-of-view. (b) Performance of our proposed models (with training in the augmented `trainval' set) compared to other state-of-the-art methods on the PASCAL VOC 2012 `test' set.} \label{tb:valIOU} \end{table} \paragraph{Evaluation on Validation set} We conduct the majority of our evaluations on the PASCAL `val' set, training our model on the augmented PASCAL `train' set. As shown in \tabref{tb:valIOU} (a), incorporating the fully connected CRF into our model (denoted by DeepLab-CRF) yields a substantial performance boost, about 4\% improvement over DeepLab. We note that the work of \citet{krahenbuhl2011efficient} improved the $27.6\%$ result of TextonBoost \citep{shotton2009textonboost} to $29.1\%$, which makes the improvement we report here (from $59.8\%$ to $63.7\%$) all the more impressive. Turning to qualitative results, we provide visual comparisons between DeepLab and DeepLab-CRF in \figref{fig:ValResults}.
Employing a fully connected CRF significantly improves the results, allowing the model to accurately capture intricate object boundaries. \paragraph{Multi-Scale features} We also exploit the features from the intermediate layers, similar to \citet{hariharan2014hypercolumns, long2014fully}. As shown in \tabref{tb:valIOU} (a), adding the multi-scale features to our DeepLab model (denoted as DeepLab-MSc) improves performance by about $1.5\%$, and further incorporating the fully connected CRF (denoted as DeepLab-MSc-CRF) yields about another 4\% improvement. The qualitative comparisons between DeepLab and DeepLab-MSc are shown in \figref{fig:msBoundary}. Leveraging the multi-scale features can slightly refine the object boundaries. \paragraph{Field of View} The `atrous algorithm' we employed allows us to arbitrarily control the Field-of-View (FOV) of the models by adjusting the input stride, as illustrated in \figref{fig:hole}. In \tabref{tab:fov}, we experiment with several kernel sizes and input strides at the first fully connected layer. The method DeepLab-CRF-7x7 is a direct modification of the VGG-16 net, with kernel size = \by{7}{7} and input stride = 4. This model yields performance of $67.64\%$ on the `val' set, but it is relatively slow ($1.44$ images per second during training). We have improved model speed to $2.9$ images per second by reducing the kernel size to \by{4}{4}. We have experimented with two such network variants with different FOV sizes, DeepLab-CRF and DeepLab-CRF-4x4; the latter has large FOV (\ie, large input stride) and attains better performance. Finally, we employ kernel size \by{3}{3} and input stride = 12, and further change the filter sizes from 4096 to 1024 for the last two layers. Interestingly, the resulting model, DeepLab-CRF-LargeFOV, matches the performance of the expensive DeepLab-CRF-7x7. At the same time, it is $3.36$ times faster to run and has significantly fewer parameters (20.5M instead of 134.3M).
The performance of several model variants is summarized in \tabref{tb:valIOU}, showing the benefit of exploiting multi-scale features and large FOV. \begin{table}[t]\scriptsize \centering \begin{tabular}{l | c c c c || c c} Method & kernel size & input stride & receptive field & \# parameters & mean IOU (\%) & Training speed (img/sec) \\ \hline \hline DeepLab-CRF-7x7 & \by{7}{7} & 4 & 224 & 134.3M & 67.64 & 1.44 \\ \hline DeepLab-CRF & \by{4}{4} & 4 & 128 & 65.1M & 63.74 & 2.90 \\ \hline DeepLab-CRF-4x4 & \by{4}{4} & 8 & 224 & 65.1M & 67.14 & 2.90 \\ \hline DeepLab-CRF-LargeFOV & \by{3}{3} & 12 & 224 & 20.5M & 67.64 & 4.84 \\ \end{tabular} \caption{Effect of Field-Of-View. We show the performance (after CRF) and training speed on the PASCAL VOC 2012 `val' set as a function of (1) the kernel size of the first fully connected layer and (2) the input stride value employed in the atrous algorithm.} \label{tab:fov} \end{table} \begin{figure}[ht] \centering \begin{tabular}{c c c c c} \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2007_003022.png} & \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2007_001284.png} & \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2007_001289.png} & \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2007_001311.png} & \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128noup_2009_000573.png} \\ \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2007_003022.png} & \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2007_001284.png} & \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2007_001289.png} & \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2007_001311.png} & \includegraphics[height=0.11\linewidth]{fig/boundary_refine/vgg128ms_2009_000573.png} \\ \end{tabular} \caption{Incorporating multi-scale features improves the boundary segmentation.
We show the results obtained by DeepLab and DeepLab-MSc in the first and second row, respectively. Best viewed in color.} \label{fig:msBoundary} \end{figure} \paragraph{Mean Pixel IOU along Object Boundaries} To quantify the accuracy of the proposed model near object boundaries, we evaluate the segmentation accuracy with an experiment similar to \citet{kohli2009robust, krahenbuhl2011efficient}. Specifically, we use the `void' label annotated in the val set, which usually occurs around object boundaries. We compute the mean IOU for those pixels that are located within a narrow band (called a trimap) around the `void' labels. As shown in \figref{fig:IOUBoundary}, exploiting the multi-scale features from the intermediate layers and refining the segmentation results by a fully connected CRF significantly improve the results around object boundaries. \paragraph{Comparison with the State-of-the-art} In \figref{fig:val_comparison}, we qualitatively compare our proposed model, DeepLab-CRF, with two state-of-the-art models, FCN-8s \citep{long2014fully} and TTI-Zoomout-16 \citep{mostajabi2014feedforward}, on the `val' set (the results are extracted from their papers). Our model is able to capture the intricate object boundaries. \paragraph{Reproducibility} We have implemented the proposed methods by extending the excellent Caffe framework \citep{jia2014caffe}. We share our source code, configuration files, and trained models that allow reproducing the results in this paper at a companion web site \url{https://bitbucket.org/deeplab/deeplab-public}.
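The trimap construction can be sketched as follows: mark pixels where the label changes, then dilate that set to a band of the desired width and score IOU only inside it. This is an illustrative sketch under our own simplifications; the paper derives the band from the `void' annotations of the val set rather than from a label map.

```python
import numpy as np

def dilate(mask):
    """4-neighbourhood binary dilation (copy-based to avoid aliasing)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def boundary_trimap(labels, width):
    """Band of roughly `width` pixels around label boundaries; IOU would
    then be measured only for pixels inside this band."""
    edge = np.zeros(labels.shape, dtype=bool)
    # mark the pixel on the right/bottom side of each label change
    edge[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    edge[1:, :] |= labels[1:, :] != labels[:-1, :]
    band = edge
    for _ in range(width - 1):
        band = dilate(band)
    return band

seg = np.zeros((6, 6), int)
seg[:, 3:] = 1                        # one vertical boundary at column 3
print(boundary_trimap(seg, 1).sum())  # 6
print(boundary_trimap(seg, 2).sum())  # 18
```

Sweeping the trimap width from a couple of pixels upward, as in \figref{fig:IOUBoundary}, shows how quickly each method's accuracy recovers away from object boundaries.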
\begin{figure}[!tbp] \centering \resizebox{\columnwidth}{!}{ \begin{tabular} {c c c} \raisebox{1.7cm} { \begin{tabular}{c c} \includegraphics[height=0.1\linewidth]{fig/trimap/2007_000363.jpg} & \includegraphics[height=0.1\linewidth]{fig/trimap/2007_000363.png} \\ \includegraphics[height=0.1\linewidth]{fig/trimap/TrimapWidth2.pdf} & \includegraphics[height=0.1\linewidth]{fig/trimap/TrimapWidth10.pdf} \\ \end{tabular} } & \includegraphics[height=0.25\linewidth]{fig/SegPixelAccWithinTrimap.pdf} & \includegraphics[height=0.25\linewidth]{fig/SegPixelIOUWithinTrimap.pdf} \\ (a) & (b) & (c) \\ \end{tabular} } \caption{(a) Some trimap examples (top-left: image. top-right: ground-truth. bottom-left: trimap of 2 pixels. bottom-right: trimap of 10 pixels). Quality of segmentation result within a band around the object boundaries for the proposed methods. (b) Pixelwise accuracy. (c) Pixel mean IOU. } \label{fig:IOUBoundary} \end{figure} \begin{figure}[t] \centering \begin{tabular}{c c} \includegraphics[height=0.55\linewidth]{fig/comparedWithFCN.pdf} & \includegraphics[height=0.55\linewidth]{fig/comparedWithRoomOut.pdf} \\ (a) FCN-8s vs. DeepLab-CRF & (b) TTI-Zoomout-16 vs. DeepLab-CRF \\ \end{tabular} \caption{Comparisons with state-of-the-art models on the val set. First row: images. Second row: ground truths. Third row: other recent models (Left: FCN-8s, Right: TTI-Zoomout-16). Fourth row: our DeepLab-CRF. Best viewed in color.} \label{fig:val_comparison} \end{figure} \paragraph{Test set results} Having set our model choices on the validation set, we evaluate our model variants on the PASCAL VOC 2012 official `test' set. As shown in \tabref{tab:voc2012}, our DeepLab-CRF and DeepLab-MSc-CRF models achieve performance of $66.4\%$ and $67.1\%$ mean IOU\footnote{\url{http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=6}}, respectively. 
Our models outperform all the other state-of-the-art models (specifically, TTI-Zoomout-16 \citep{mostajabi2014feedforward}, FCN-8s \citep{long2014fully}, and MSRA-CFM \citep{dai2014convolutional}). When we increase the FOV of the models, DeepLab-CRF-LargeFOV yields performance of $70.3\%$, the same as DeepLab-CRF-7x7, while its training speed is faster. Furthermore, our best model, DeepLab-MSc-CRF-LargeFOV, attains the best performance of $71.6\%$ by employing both multi-scale features and large FOV. \begin{table*}[!tbp] \setlength{\tabcolsep}{3pt} \resizebox{\columnwidth}{!}{ \begin{tabular}{|l||c*{20}{|c}||c|} \hline Method & bkg & aero & bike & bird & boat & bottle& bus & car & cat & chair& cow &table & dog & horse & mbike& person& plant&sheep& sofa &train & tv & mean \\ \hline \hline MSRA-CFM & - & 75.7 & 26.7 & 69.5 & 48.8 & 65.6 & 81.0 & 69.2 & 73.3 & 30.0 & 68.7 & 51.5 & 69.1 & 68.1 & 71.7 & 67.5 & 50.4 & 66.5 & 44.4 & 58.9 & 53.5 & 61.8 \\ FCN-8s & - & 76.8 & 34.2 & 68.9 & 49.4 & 60.3 & 75.3 & 74.7 & 77.6 & 21.4 & 62.5 & 46.8 & 71.8 & 63.9 & 76.5 & 73.9 & 45.2 & 72.4 & 37.4 & 70.9 & 55.1 & 62.2 \\ TTI-Zoomout-16 & 89.8 & 81.9 & 35.1 & 78.2 & 57.4 & 56.5 & 80.5 & 74.0 & 79.8 & 22.4 & 69.6 & 53.7 & 74.0 & 76.0 & 76.6 & 68.8 & 44.3 & 70.2 & 40.2 & 68.9 & 55.3 & 64.4 \\ \hline DeepLab-CRF & 92.1 & 78.4 & 33.1 & 78.2 & 55.6 & 65.3 & 81.3 & 75.5 & 78.6 & 25.3 & 69.2 & 52.7 & 75.2 & 69.0 & 79.1 & 77.6 & 54.7 & 78.3 & 45.1 & 73.3 & 56.2 & 66.4 \\ DeepLab-MSc-CRF & 92.6 & 80.4 & 36.8 & 77.4 & 55.2 & 66.4 & 81.5 & 77.5 & 78.9 & 27.1 & 68.2 & 52.7 & 74.3 & 69.6 & 79.4 & 79.0 & 56.9 & 78.8 & 45.2 & 72.7 & 59.3 & 67.1 \\ \href{http://host.robots.ox.ac.uk:8080/anonymous/EKRH3N.html}{DeepLab-CRF-7x7} & 92.8 & 83.9 & 36.6 & 77.5 & 58.4 & {\bf 68.0} & 84.6 & {\bf 79.7} & 83.1 & 29.5 & {\bf 74.6} & 59.3 & 78.9 & 76.0 & 82.1 & 80.6 & {\bf 60.3} & 81.7 & 49.2 & {\bf 78.0} & 60.7 & 70.3 \\ DeepLab-CRF-LargeFOV & 92.6 & 83.5 & 36.6 & {\bf 82.5} & 62.3 & 66.5 & {\bf 85.4} & 78.5 
& {\bf 83.7} & 30.4 & 72.9 & {\bf 60.4} & 78.5 & 75.5 & 82.1 & 79.7 & 58.2 & 82.0 & 48.8 & 73.7 & 63.3 & 70.3 \\ DeepLab-MSc-CRF-LargeFOV & {\bf 93.1} & {\bf 84.4} & {\bf 54.5} & 81.5 & {\bf 63.6} & 65.9 & 85.1 & 79.1 & 83.4 & {\bf 30.7} & 74.1 & 59.8 & {\bf 79.0} & {\bf 76.1} & {\bf 83.2} & {\bf 80.8} & 59.7 & {\bf 82.2} & {\bf 50.4} & 73.1 & {\bf 63.7} & {\bf 71.6} \\ \hline \end{tabular} } \caption{Labeling IOU (\%) on the PASCAL VOC 2012 test set, using the trainval set for training.} \label{tab:voc2012} \end{table*} \begin{figure}[!htbp] \centering \scalebox{0.82} { \begin{tabular}{c c c | c c c} \includegraphics[height=0.12\linewidth]{fig/img/2007_002094.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_002094.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_002094.png} & \includegraphics[height=0.12\linewidth]{fig/img/2007_002719.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_002719.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_002719.png} \\ \includegraphics[height=0.12\linewidth]{fig/img/2007_003957.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_003957.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_003957.png} & \includegraphics[height=0.12\linewidth]{fig/img/2007_003991.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_003991.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_003991.png} \\ \includegraphics[height=0.10\linewidth]{fig/img/2008_001439.jpg} & \includegraphics[height=0.10\linewidth]{fig/res_none/2008_001439.png} & \includegraphics[height=0.10\linewidth]{fig/res_crf/2008_001439.png} & \includegraphics[height=0.12\linewidth]{fig/img/2008_004363.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2008_004363.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2008_004363.png} \\ \includegraphics[height=0.12\linewidth]{fig/img/2008_006229.jpg} & 
\includegraphics[height=0.12\linewidth]{fig/res_none/2008_006229.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2008_006229.png} & \includegraphics[height=0.12\linewidth]{fig/img/2009_000412.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2009_000412.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2009_000412.png} \\ \includegraphics[height=0.12\linewidth]{fig/img/2009_000421.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2009_000421.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2009_000421.png} & \includegraphics[height=0.12\linewidth]{fig/img/2010_001079.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2010_001079.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2010_001079.png} \\ \includegraphics[height=0.12\linewidth]{fig/img/2010_000038.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2010_000038.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2010_000038.png} & \includegraphics[height=0.12\linewidth]{fig/img/2010_001024.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2010_001024.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2010_001024.png} \\ \includegraphics[height=0.24\linewidth]{fig/img/2007_005331.jpg} & \includegraphics[height=0.24\linewidth]{fig/res_none/2007_005331.png} & \includegraphics[height=0.24\linewidth]{fig/res_crf/2007_005331.png} & \includegraphics[height=0.24\linewidth]{fig/img/2008_004654.jpg} & \includegraphics[height=0.24\linewidth]{fig/res_none/2008_004654.png} & \includegraphics[height=0.24\linewidth]{fig/res_crf/2008_004654.png} \\ \includegraphics[height=0.24\linewidth]{fig/img/2007_000129.jpg} & \includegraphics[height=0.24\linewidth]{fig/res_none/2007_000129.png} & \includegraphics[height=0.24\linewidth]{fig/res_crf/2007_000129.png} & \includegraphics[height=0.24\linewidth]{fig/img/2007_002619.jpg} & \includegraphics[height=0.24\linewidth]{fig/res_none/2007_002619.png} & 
\includegraphics[height=0.24\linewidth]{fig/res_crf/2007_002619.png} \\ \includegraphics[height=0.12\linewidth]{fig/img/2007_002852.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_002852.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_002852.png} & \includegraphics[height=0.12\linewidth]{fig/img/2010_001069.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2010_001069.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2010_001069.png} \\ \hline \hline \includegraphics[height=0.12\linewidth]{fig/img/2007_000491.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_000491.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000491.png} & \includegraphics[height=0.12\linewidth]{fig/img/2007_000529.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_000529.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000529.png} \\ \includegraphics[height=0.12\linewidth]{fig/img/2007_000559.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_000559.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000559.png} & \includegraphics[height=0.12\linewidth]{fig/img/2007_000663.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_000663.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000663.png} \\ \includegraphics[height=0.12\linewidth]{fig/img/2007_000452.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_000452.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_000452.png} & \includegraphics[height=0.12\linewidth]{fig/img/2007_002268.jpg} & \includegraphics[height=0.12\linewidth]{fig/res_none/2007_002268.png} & \includegraphics[height=0.12\linewidth]{fig/res_crf/2007_002268.png} \\ \end{tabular} } \caption{Visualization results on VOC 2012-val. 
For each row, we show the input image, the segmentation result delivered by the DCNN (DeepLab), and the refined segmentation result of the Fully Connected CRF (DeepLab-CRF). We show our failure modes in the last three rows. Best viewed in color.} \label{fig:ValResults} \end{figure}
\section{Introduction} The ever-growing number of consumer wireless devices, together with newly emerged data-hungry applications, has introduced unprecedented spectrum usage challenges \cite{saad2019vision}. Even with cutting-edge technologies, such as massive MIMO \cite{lu2014overview} and mmWave communications \cite{hong2021role}, currently in place, there are still several challenges to further improving end-user data rates, cost efficiency, and communication latency. These challenges have raised the need for new techniques to enhance system performance. Reference signal (RS) overhead reduction approaches have recently gained significant interest as a means of improving system performance \cite{ahmed2019overhead}. RS occupy specific resource elements (REs) in the time-frequency grid. The 5G NR specifications define several types of RS that are transmitted in different ways and intended for different purposes \cite{3gpp38211}, \cite{3gpp38213}, \cite{3gpp38214}. For instance, CSI-RS are downlink RS dedicated to downlink channel estimation, while demodulation reference signals (DM-RS) are intended for estimating the effective (precoded) channel as part of coherent demodulation at the receiver. Decoding the signal of interest at the receiver generally requires a two-step solution: first estimating the channel, then designing an equalizer as a function of the estimated channel. In cellular systems, RS-based channel estimation may require a large number of RS to ensure acceptable estimation performance; a requirement that hurts the data transmission rate, since more REs are reserved for RS as opposed to data. The situation is further complicated in the multiuser MIMO setup, where the RS need to be orthogonal across the different UEs -- a difficult proposition on its own, as it requires tight coordination between the different base stations. 
Once the channel is estimated, an equalizer is designed to recover the signal of interest. Several equalization approaches with various levels of complexity can be used to decode the signal of interest \cite{madhow1994mmse,wiesel2005semidefinite}. However, such approaches require an accurate channel estimate to achieve acceptable detection performance. This paper seeks to address the aforementioned limitations of pilot-based approaches by proposing a new DM-RS-free data structure that employs a simple repetition protocol on a portion of the user data in the time-frequency grid. The repetition structure is then utilized at the receiver side to create two data views -- assuming that the repetition pattern is known at the receiver and is different across layers. Applying canonical correlation analysis (CCA) \cite{hotelling1936relations} to the two constructed data views, for each layer, recovers the desired signal at the UE. We introduce a strategy for choosing the repetition type that relies on the channel parameters. In addition, we propose a solution to boost the CCA performance in scenarios where frequency selectivity is severe. Finally, we show through simulations that the proposed method outperforms the state-of-the-art method, while having lower complexity. CCA is a statistical learning tool with a wide variety of applications in signal processing and wireless communications. These include, but are not limited to, direction-of-arrival estimation~\cite{wu1994music}, equalization \cite{dogandzic2002finite}, spectrum sharing \cite{SalahUnderlay22}, blind source separation~\cite{bertrand2015distributed}, and cell-edge user detection~\cite{salahtwc}. \section{System Model and Problem Statement}\label{Sec: probdesc} Consider a downlink (DL) transmission in a 5G NR network where a single base station (gNB) sends data to a single user equipment (UE) through a physical DL shared channel (PDSCH). 
The gNB is equipped with $N_{\mathrm{t}}$ antennas and serves a UE with $N_{\mathrm{r}}$ antennas. For sub-band $n$, the gNB transmits $L$ data streams, each of length $N_{\mathrm{data}}$ and represented by the matrix $\mathbf{X}^{(n)}\in \mathbb{C}^{N_{\mathrm{data}}\times L}$, {for $n \in [N] := \{1,\cdots,N\}$}, where $N$ is the number of sub-bands. The received base-band signal for sub-band $n$ can then be described as follows, \begin{equation} {\bf Y}^{(n)} = {\bf H}^{(n)}\sum\limits_{\ell = 1}^{L} \sqrt{\alpha_\ell}{\bf f}^{(n)}_\ell{\bf x}^{\top(n)}_\ell + {\bf W}^{(n)} \end{equation} where $\mathbf{Y}^{(n)}\in\mathbb{C}^{N_{\mathrm{r}}\times N_{\mathrm{data}}}$ is the received signal at the UE, $\mathbf{H}^{(n)}\in\mathbb{C}^{N_{\mathrm{r}}\times N_{\mathrm{t}}}$ is the DL channel response matrix associated with the $n$-th sub-band, and $\mathbf{W}^{(n)}\in\mathbb{C}^{N_{\mathrm{r}}\times N_{\mathrm{data}}}$ contains independent and identically distributed (i.i.d.) complex Gaussian entries of zero mean and variance $\sigma^{2}$ each. The term $\alpha_\ell$ represents the power allocated to the $\ell$-th layer, where, unless stated otherwise, $\alpha_\ell$ is chosen to be $1/\sqrt{L}$, $\forall \ell$. In order to support multi-stream transmission, the gNB precodes the $\ell$-th stream data symbols $\mathbf{x}_{\ell}^{(n)} \in \mathbb{C}^{N_{\mathrm{data}}}$ using the precoder $\mathbf{f}_\ell^{(n)}\in\mathbb{C}^{N_{\mathrm{t}}}$, {where $\|{\bf f}_\ell\|_2^2 = 1$}. For the SU-MIMO case considered herein, we assume SVD-based precoding, where the precoder ${\bf f}^{(n)}_\ell$ is the $\ell$-th dominant right singular vector of the matrix ${\bf{H}}^{(n)}$. Throughout this work, the DL effective channel $\bar{\mathbf{h}}_{\ell}^{(n)}\stackrel{\Delta}{=}\mathbf{H}^{(n)}\mathbf{f}_\ell^{(n)}$ is assumed to be unknown at the UE. We assume a wide band (WB) precoding of the entire resource grid (RG). 
This yields a single precoder for each layer, $\mathbf{f}^{(n)}_\ell = \mathbf{f}_\ell, \forall n \in [N]$, that is chosen to be the $\ell$-th dominant right singular vector of the average of all channel matrices across the RG; hence, we suppress the superscript dependence of $\mathbf{f}_\ell^{(n)}$. \begin{figure}[t] \begin{center} \includegraphics[scale=.35]{DMRS.eps} \caption{Traditional wideband data channel structure with data REs colored in blue while DM-RS REs are colored in yellow.} \label{DM-RS} \end{center} \end{figure} In 3GPP 5G NR, data transmission is accompanied by a set of reference signals, such as DM-RS. DM-RS for PDSCH are intended for the estimation of $\bar{\mathbf{h}}_{\ell}^{(n)}$ as part of coherent demodulation. DM-RS based channel estimation and equalization is a computationally complex process. The channel estimation accuracy depends on the DM-RS density; increasing the DM-RS density may improve the channel estimation accuracy, but it could potentially hurt the data rate, as fewer REs are then used for data transmission. Moreover, for the multi-user (MU) transmission case, the DM-RS sequences associated with the co-scheduled UEs need to be orthogonal -- a constraint that is difficult to achieve in multi-cell networks. We will next show how to overcome the aforementioned DM-RS shortcomings by designing a machine learning (ML) based detection method that recovers the transmitted data symbols in an unsupervised manner, and at much lower complexity relative to the DM-RS approach. 
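To make the system model concrete, the following NumPy sketch generates one sub-band of received data with SVD-based precoding. All dimensions, the QPSK mapping, and the noise level are illustrative choices; the equal power split $\alpha_\ell = 1/L$ is a simplifying assumption of this sketch rather than the $1/\sqrt{L}$ convention stated above.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, L, Ndata = 8, 2, 2, 64          # illustrative dimensions only

# Random narrowband channel for one sub-band
H = (rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# SVD-based precoders: columns are the L dominant right singular vectors of H
_, _, Vh = np.linalg.svd(H)
F = Vh.conj().T[:, :L]                   # each column f_l has unit 2-norm

# QPSK symbols, one column per layer
bits = rng.integers(0, 2, size=(Ndata, 2 * L))
X = ((2 * bits[:, ::2] - 1) + 1j * (2 * bits[:, 1::2] - 1)) / np.sqrt(2)

# Y = H * sum_l sqrt(alpha_l) f_l x_l^T + W  (equal power split, alpha_l = 1/L)
alpha = 1.0 / L
sigma = 0.05
W = sigma * (rng.standard_normal((Nr, Ndata))
             + 1j * rng.standard_normal((Nr, Ndata))) / np.sqrt(2)
Y = H @ sum(np.sqrt(alpha) * np.outer(F[:, l], X[:, l]) for l in range(L)) + W
```

Because the precoders are right singular vectors of $\mathbf{H}$, the effective channels $\mathbf{H}\mathbf{f}_\ell$ come out mutually orthogonal, which is the property the SVD-based precoding provides.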
\begin{figure}[t] \centering \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=1\textwidth]{CCApat1.eps} \caption{Pattern 1.} \label{fig:CCApatt1} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \centering \includegraphics[width=1\textwidth]{CCApat3.eps} \caption{Pattern 2.} \label{fig:CCApatt3} \end{subfigure} \caption{Proposed data structure with data REs colored in blue and light blue and reserved REs colored in yellow.} \label{fig:CCApatts} \end{figure} \section{Proposed Method} In this section, we explain how CCA can be exploited to solve the problem described in Section \ref{Sec: probdesc}. Instead of using the frame structure in Fig. \ref{DM-RS}, which contains REs reserved for DM-RS symbols (colored in yellow), we propose a new DM-RS-free data transmission structure. The new structure requires repeating a few data symbols in the time-frequency grid, as depicted in Fig. \ref{fig:CCApatts}. The repetition may be employed either in time or in frequency, where different repetition patterns can yield different performance, as will be explained in Section \ref{Sec:CCAPatterns}. The repetition structure is then utilized at the UE side to derive the combiners that will be used to decode the PDSCH data in the repeated locations and in the neighbourhood of the repeated symbols, as will be shown later. To explain how the proposed method works, let ${\bf x}_{\mathrm{c}\ell} \in \mathbb{C}^{\Bar{N}}$ denote the common/repeated signal associated with the $\ell$-th layer within a part of the sub-band, where $\Bar{N}$ represents the length of the common signal and $\Bar{N} \ll N_{\mathrm{data}}$. 
Towards this end, the baseband equivalent model of the received signal at each of the two separate regions defined by the repetition pattern of the $\ell$-th layer can be expressed as, \begin{equation} \label{CCAViews} {\bf Y}^{(i)}_{\ell} = \bar{\bf h}_\ell^{(i)}{\bf x}^{\top}_{\mathrm{c}\ell} + \sum\limits_{j \neq \ell}^{L} \bar{\bf h}_j^{(i)}{\bf x}^{\top}_{ij} +\overline{\mathbf{W}}^{(i)}_{\ell} \end{equation} where $i=1,2$ refers to the region/view index, and ${\bf x}_{ij}$ is the signal associated with the received signal of the $j$-th layer in the $i$-th region. The power allocation terms are absorbed in the respective channel vectors. It should be noted that, under the assumption that the repetition patterns are unique, the two views in \eqref{CCAViews} share one common signal, associated with the layer whose repetition pattern is used -- all the other layers are randomly permuted, and hence contribute different signals to the two views. We will now show how CCA can be utilized at the UE to recover the desired signal. CCA is a widely-used machine learning tool that seeks to find two linear combinations of two given random vectors such that the resulting pair of random variables is maximally correlated. In a recent work \cite{salahtwc}, the authors derived a new algebraic interpretation of CCA as a method that can identify a common subspace between two given data views, under a linear generative model. It was shown that, if two signal views share a common component in addition to individual components in each view, then applying CCA to those two views will recover the common component up to a global complex scaling ambiguity, where the scaling ambiguity can be resolved by correlating a few pilot symbols with the corresponding received ones. 
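As a sanity check of this subspace interpretation, the following sketch builds two noiseless views that share a single common signal and verifies that the top canonical pair recovers it up to a complex scaling. The whitening-plus-SVD routine and all dimensions are illustrative choices, not the paper's implementation.

```python
import numpy as np

def top_cca(Y1, Y2, eps=1e-12):
    """Top canonical pair of two complex views (rows = antennas, cols = samples).
    Returns the two combiners, the shared-component estimate g (up to a complex
    scaling), and the top canonical correlation."""
    def inv_sqrt(R):
        w, V = np.linalg.eigh(R)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, eps))) @ V.conj().T
    W1 = inv_sqrt(Y1 @ Y1.conj().T)      # whitening transforms
    W2 = inv_sqrt(Y2 @ Y2.conj().T)
    U, s, Vh = np.linalg.svd((W1 @ Y1) @ (W2 @ Y2).conj().T)
    q1, q2 = W1 @ U[:, 0], W2 @ Vh[0].conj()
    g = q1.conj() @ Y1                   # = gamma * x_c in the noiseless case
    return q1, q2, g, s[0]

# Noiseless demo: L = 3 layers, N_r = 3 antennas, view length Nbar = 64
rng = np.random.default_rng(1)
L, Nr, Nbar = 3, 3, 64
cplx = lambda *sh: (rng.standard_normal(sh) + 1j * rng.standard_normal(sh)) / np.sqrt(2)
x_c = cplx(Nbar)                         # repeated (common) signal of one layer
Y1 = cplx(Nr, L) @ np.column_stack([x_c, cplx(Nbar), cplx(Nbar)]).T
Y2 = cplx(Nr, L) @ np.column_stack([x_c, cplx(Nbar), cplx(Nbar)]).T
q1, q2, g, rho1 = top_cca(Y1, Y2)
```

The phase ambiguity can then be resolved exactly as described above, by correlating a few known symbols with the corresponding recovered ones.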
It can be easily seen that such an interpretation is applicable to the model in \eqref{CCAViews}, where the two data views in \eqref{CCAViews} only share the signal of the layer whose repetition pattern is used. Consider the following MAXVAR formulation of CCA \cite{hardoon2004canonical}, \begin{subequations}\label{MAXVAR} \begin{align} &\underset{{\bf g}_{\ell},{\bf q}_{1\ell},{\bf q}_{2 \ell}}{\min}~ \sum_{i = 1}^{2}\| {\bf Y}^{(i)\top}_{\ell}{\bf q}_{i\ell} - {\bf g}_{\ell}\|^2_2,\\ & \text{s.t.} \quad \|{\bf g}_{\ell}\|^2_2 = 1. \end{align} \end{subequations} The optimal canonical vectors (referred to as combiners in the considered problem), ${\bf q}^\star_{1\ell}$ and ${\bf q}^\star_{2 \ell}$, can be obtained by solving an eigen-decomposition problem. To clarify how the results of \cite{salahtwc} can be mapped to the problem considered herein, let us define the matrix ${\bf X}_{i\ell} := [{\bf x}_{\mathrm{c}\ell}, {\bf X}_{i}] \in \mathbb{C}^{\Bar{N} \times L}$, where the columns of ${\bf X}_{i} \in \mathbb{C}^{\Bar{N} \times (L-1)}$ represent the signals associated with the other/interfering $L-1$ layers. Similarly, let the matrix $\overline{\bf H}_i \in \mathbb{C}^{N_r \times L}$ hold the corresponding effective channel vectors. Towards this end, we have the following result. \begin{theorem} In the noiseless case, if the matrices ${\bf X}_{i\ell}$ and $\overline{\bf H}_i$, for $i=1,2$ and $\ell \in [L]\stackrel{\Delta}{=}\{1,\dots,L\}$, are full column rank, then the optimal solution ${\bf g}_\ell^\star$ of \eqref{MAXVAR} is given as ${\bf g}_\ell^\star = \gamma_\ell {\bf x}_{\mathrm{c}\ell}$, where $\gamma_\ell \in \mathbb{C}$ is a complex phase ambiguity. \end{theorem} \begin{proof} The proof follows from Theorem 1 in~\cite{salahtwc}. 
\end{proof} \begin{remark} The full rank condition on the matrix ${\bf X}_{i\ell}$ requires the CCA view length $\Bar{N}$ to be greater than or equal to the number of layers $L$ and the columns of ${\bf X}_{i\ell}$ to be linearly independent. Since the transmissions associated with the different layers are independent, such requirements can easily be satisfied with modest $\Bar{N}$. On the other hand, the full rank condition on the effective channel matrix $\overline{\bf H}_i$ is satisfied if $i)$ the number of receive antennas is greater than or equal to the number of layers and $ii)$ the columns of $\overline{\bf H}_i$ are linearly independent. Both conditions are satisfied with probability one, since $i)$ the number of transmitted streams $L$ is always upper-bounded by $N_r$ and $ii)$ the columns of $\overline{\bf H}_i$ are in fact orthogonal (because of the SVD-based precoding). \end{remark} \subsection{Pattern Design}\label{Sec:CCAPatterns} For the $\ell$-th layer, we consider a RG of $N_{\mathrm{RB}}$ resource blocks (RB)s. Each RB is composed of REs that are divided among 12 sub-carriers vertically and 14 OFDM symbols horizontally. $N_{\mathrm{data}}$ REs in the RG are loaded with data symbols from the transmit vector $\mathbf{x}_{\ell}$, such that each symbol corresponds to an RE, while $N_{\mathrm{res}}$ REs are reserved for control signals. REs in the $\ell$-th layer RG are located using matrix linear indexing. Assuming that the (sub-carrier, OFDM symbol) location of an RE in the RG is $(m,n)$, where both $m$ and $n$ start from 0, the corresponding index can be found as $i_{m,n}=12 n N_{\mathrm{RB}}+m+1$. 
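The linear indexing rule above is straightforward to express in code. This small sketch (the function name is hypothetical) computes the 1-based RE index from a 0-based (sub-carrier, OFDM symbol) pair:

```python
def re_index(m, n, n_rb):
    """1-based linear index of the RE at sub-carrier m and OFDM symbol n
    (both 0-based) in a grid of n_rb resource blocks of 12 sub-carriers each,
    following i_{m,n} = 12 * n * N_RB + m + 1."""
    if not 0 <= m < 12 * n_rb:
        raise ValueError("sub-carrier index out of range")
    return 12 * n * n_rb + m + 1
```

For a 50-RB grid, the first OFDM symbol maps to indices 1 through 600, and the full 14-symbol grid spans indices up to $12 \cdot 50 \cdot 14$.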
We define $\mathcal{I}_{\mathrm{RE}}$ as the set of all indices in a RG, where for $\ell \in [L]$, $\mathcal{I}_{\mathrm{RE}}=\mathcal{I}_{\mathrm{data}}^{\ell}\cup\mathcal{I}_{\mathrm{res}}^{\ell}$, such that $\mathcal{I}_{\mathrm{data}}^{\ell}$ and $\mathcal{I}_{\mathrm{res}}^{\ell}$ are two disjoint sets that correspond to the indices of data and reserved REs for layer $\ell$, respectively. Let $\mathcal{I}_{\mathrm{s}}^{\ell}\subset \mathcal{I}_{\mathrm{data}}^{\ell}$ be an index set such that $|\mathcal{I}_{\mathrm{s}}^{\ell}|=N_{\ell}<|\mathcal{I}_{\mathrm{data}}^{\ell}|=N_{\mathrm{data}}$, where $|\cdot|$ denotes the cardinality of a set. For ease of notation, we assume equal view length for all layers, i.e., $N_{\ell}=\Bar{N}, \forall \ell$. In this case, the common signal $\mathbf{x}_{\mathrm{c}\ell}\in\mathbb{C}^{\Bar{N}}$ is the vector of symbols formed by the data in the REs located by the index set $\mathcal{I}_{\mathrm{s}}^{\ell}$. We also let $\mathcal{I}_{\mathrm{d}}^{\ell}\subseteq \mathcal{I}_{\mathrm{res}}^{\ell}$ be an index set such that $|\mathcal{I}_{\mathrm{d}}^{\ell}|=|\mathcal{I}_{\mathrm{s}}^{\ell}|=\Bar{N}$. We refer to $\mathcal{I}_{\mathrm{s}}^{\ell}$, $\mathcal{I}_{\mathrm{d}}^{\ell}$, and the tuple $\mathcal{P}^{\ell}\stackrel{\Delta}{=}(\mathcal{I}_{\mathrm{s}}^{\ell}, \mathcal{I}_{\mathrm{d}}^{\ell})$ as the source index set, the destination index set, and the CCA pattern, respectively. \subsection{Data repetition} \label{Datarep} At time slot $t$, the UE generates the source index set for the $\ell$-th layer, $\mathcal{I}_{\mathrm{s}}^{\ell}$, such that the symbols of the corresponding CCA signal $\mathbf{x}_{\mathrm{c}\ell}$ lie within the time-bandwidth coherence (TBC) block $\mathfrak{B}_{1}^{\ell}$. Similarly, the destination index set $\mathcal{I}_{\mathrm{d}}^{\ell}$ is generated such that $|\mathcal{I}_{\mathrm{d}}^{\ell}|=|\mathcal{I}_{\mathrm{s}}^{\ell}|=\Bar{N}$ with REs in the TBC block $\mathfrak{B}_{2}^{\ell}$. 
The CCA pattern $\mathcal{P}^{\ell}=(\mathcal{I}_{\mathrm{s}}^{\ell},\mathcal{I}_{\mathrm{d}}^{\ell})$ is then signaled to the gNB to be used in the next slot transmission. At time slot $t+1$, the gNB repeats the CCA signal $\mathbf{x}_{\mathrm{c}\ell}$, constructed from the indices in $\mathcal{I}_{\mathrm{s}}^{\ell}$, in the locations defined by $\mathcal{I}_{\mathrm{d}}^{\ell}$. The UE constructs the two views $\mathbf{Y}_{\ell}^{(i)}\in\mathbb{C}^{N_{\mathrm{r}}\times \Bar{N}}, i\in\{1,2\}$, of the CCA signal $\mathbf{x}_{\mathrm{c}\ell}$ defined in \eqref{CCAViews}. Given the two views $\mathbf{Y}_{\ell}^{(i)}, i\in\{1,2\}$, the UE then solves the problem in \eqref{MAXVAR} to find the combiners $\mathbf{q}^{*}_{i\ell}\in\mathbb{C}^{N_{\mathrm{r}}}, i\in\{1,2\}$. $\mathbf{q}_{i\ell}^{*}$ is then used to combine the TBC block $\mathfrak{B}_{i}^{\ell}$ and the $N_{\mathrm{\mathfrak{B}}_{i}\ell}$ REs in its vicinity, such that $N_{\mathrm{\mathfrak{B}}_{1}\ell}+N_{\mathrm{\mathfrak{B}}_{2}\ell}=N_{\mathrm{data}}-\Bar{N}$. \subsection{Sub-gridding} As mentioned in the previous section, the combiner $\mathbf{q}_{i\ell}^{*}$ is used to combine the REs in the vicinity region of $\mathfrak{B}_{i}^{\ell}$. However, this only serves as an approximation of the optimal combiners required for those REs. In order to reduce that approximation error, we introduce the RG sub-grids (SG)s solution. The RG is divided into equal-sized non-overlapping SGs, each of size $N_{\mathrm{BSG}}$ RBs, where SG $j$ contains its own CCA pattern $\mathcal{P}_{j}^{\ell}=(\mathcal{I}_{s,j}^{\ell},\mathcal{I}_{d,j}^{\ell})$ and signal $\mathbf{x}_{\mathrm{c}\ell,j}$. The pattern $\mathcal{P}_{j}^{\ell}$ is then used to construct the two views, $\mathbf{Y}_{\ell,j}^{(i)}, i\in\{1,2\}$, of the CCA signal $\mathbf{x}_{\mathrm{c}\ell,j}$ within SG $j$. Given the per-SG views, the CCA problem is solved to find the optimal combiners $\mathbf{q}_{i\ell,j}^{*}, i\in\{1,2\}$. 
Combiner $\mathbf{q}_{i\ell,j}^{*}$ is then used to combine the REs in its vicinity in SG $j$. We define the set $\mathcal{I}_{s,j}^{\ell,m}$ $(\mathcal{I}_{d,j}^{\ell,m})$ to be the set of source (destination) REs in RB $m$ of SG $j$, such that $\bigcup_{m\in [N_{\mathrm{BSG}}]}\mathcal{I}_{x,j}^{\ell,m}=\mathcal{I}_{x,j}^{\ell}$ and $|\mathcal{I}_{x,j}^{\ell,m}|=N_{j}^{m}$ with $x\in\{s,d\}$. In order to reduce the signaling overhead between the UE and the gNB, we assume that the sets $\mathcal{I}_{s,j}^{\ell,m}$ and $\mathcal{I}_{d,j}^{\ell,m}$ are symmetric across all RBs in the entire RG. Hence, for layer $\ell$, the UE signals $\mathcal{P}_{1}^{\ell,1}\stackrel{\Delta}{=}(\mathcal{I}_{s,1}^{\ell,1},\mathcal{I}_{d,1}^{\ell,1})$ to the gNB, which conveys all the information required by the gNB's transmission scheme. Due to the symmetry assumption, we drop the RB and SG dependence, i.e., the indices $j$ and $m$, in the per layer RB (PL-RB) pattern $\mathcal{P}_{j}^{\ell,m}=(\mathcal{I}_{s,j}^{\ell,m},\mathcal{I}_{d,j}^{\ell,m})$ and in $N_{j}^{m}$, and refer to them as $\Bar{\Bar{\mathcal{P}}}^{\ell}$ and $\Bar{\Bar{N}}$, respectively. When a RG is divided into SGs, the size of the TBC blocks per SG, denoted by $\Bar{N}\stackrel{\Delta}{=}\Bar{\Bar{N}}N_{\mathrm{BSG}}$ and referred to as the view length, decreases. Hence, although sub-gridding reduces the combining approximation error (by reducing the number of REs equalized with the same combiner), it also motivates investigating the effect of the view length on the performance. In the numerical results section, we provide a detailed study of the effect of the view length $\Bar{N}$ and the SG size $N_{\mathrm{BSG}}$ on the system performance. 
\begin{figure}[t] \begin{center} \includegraphics[scale=.2,width=0.9\columnwidth]{pattdescombined_ICASP.eps} \caption{SER vs SNR for a RG of 50 RBs, and $N_{\mathrm{BSG}}=2$, CDL-C channel model with SCS=30kHz.} \label{fig:pattdes} \end{center} \end{figure} \section{Simulation Results}\label{Simulations} We consider a DL transmission where a gNB with $N_{\mathrm{t}}=8$ antennas transmits QPSK data symbols to a UE with $N_{\mathrm{r}}=2$ antennas through a CDL-C channel with 15 kHz sub-carrier spacing (SCS) and a carrier frequency of 4 GHz. Data is transmitted in the form of 10 ms frames, where each frame is composed of 10 slots of 1 ms each. We average the simulation results over 100 different channel seeds and 100 frames per seed point. We first simulate the effect of different CCA patterns on the average SER. Figure \ref{fig:CCApatts} provides different CCA patterns for SGs of $N_{\mathrm{BSG}}=2$ RBs each. On one hand, pattern 1 in Fig. \ref{fig:CCApatt1} is concentrated in one OFDM symbol per view, spans the entire frequency domain, and is repeated in time. On the other hand, pattern 2 in Fig. \ref{fig:CCApatt3} spans the entire time domain, occupies only two sub-carriers per view, and is repeated in frequency. We simulate the average SER of the two patterns for a CDL-C channel with SCS = 30 kHz and a RG of 50 RBs that is divided into SGs of size $N_{\mathrm{BSG}}=2$ RBs each. We consider two different channel setups: 1) a CDL-C channel with 30 ns delay spread (DS) and 60 km/h UE speed, where, as seen in Fig. \ref{fig:pattdes}, pattern 2 outperforms pattern 1; and 2) a CDL-C channel with 300 ns DS and 1 km/h UE speed, where pattern 2 completely fails to detect the transmitted symbols while pattern 1 performs better. The simulation results in Fig. \ref{fig:pattdes} suggest that a CCA pattern must be designed such that the symbols within a pattern experience the same channel, i.e., the symbols must lie within the same TBC block. 
\begin{figure}[t] \begin{center} \includegraphics[scale=.2,width=0.9\columnwidth]{SGexp_res2.eps} \caption{SER vs SNR for a RG of 50 RBs, SCS=30kHz and $N_{\mathrm{p}}=1$ symbols per SG.} \label{fig:SGSres1} \end{center} \end{figure} We then study the impact of the SG size $N_{\mathrm{BSG}}$ on the SER performance. We consider a RG of $N_{\mathrm{RB}}=50$ RBs with an SCS of 30 kHz, a delay spread of 30 ns, and a PL-RB pattern $\Bar{\Bar{\mathcal{P}}}$ with $\Bar{\Bar{N}}=8$ REs. These choices ensure that the channel's coherence BW is less than the transmission BW, leading to a frequency-selective channel. In Fig. \ref{fig:SGSres1}, we simulate the average SER for different values of $N_{\mathrm{BSG}}$. It can be seen that when the SG size is large, $N_{\mathrm{BSG}}\in\{25,50\}$, CCA transmission provides a degraded average SER. This is because, due to the frequency-selective nature of the channel, the symbols within a CCA signal experience different channel responses, which violates the requirement that they be constructed, and repeated, within a TBC block. Upon decreasing the SG size, $N_{\mathrm{BSG}}\in\{10,5,2\}$, the SER decreases, as the channel affecting the CCA signal within each SG tends to be flatter. For a very small SG size, $N_{\mathrm{BSG}}=1$, the average SER degrades again because the view length $\Bar{N}$ per SG becomes too small. \begin{figure}[t] \begin{center} \includegraphics[scale=.2,width=0.9\columnwidth]{MLayerTX.eps} \caption{SER vs SNR for a RG of 52 RBs, DS=30 ns \& UE speed=5km/hour, $N_{\mathrm{BSG}}=4$ and an $8\times4$ antennas system.} \label{fig:MLTX} \end{center} \end{figure} Next, we consider a DL transmission scenario where a gNB transmits two data layers, i.e., $L=2$, to the UE. A CCA pattern with $\Bar{\Bar{N}}=8$ REs is used in the first layer. 
In order to ensure that the CCA recovers one common signal at each layer, we use a shifted version of the first layer's pattern in the second layer's transmission; this way, the patterns do not overlap. In Fig. \ref{fig:MLTX}, we present the average SER for each of the layers separately and compare the CCA equalization performance with a DM-RS pattern of mapping type 1, configuration type 2, length 1, and 1 additional position, as defined in \cite{3gpp38211,3gpp38212,3gpp38213,3gpp38214}. Perfect channel knowledge (PCHAN) results are incorporated as a lower bound on the performance. It can be seen that the considered CCA pattern outperforms the DM-RS one in the simulated channel conditions. Moreover, the gap between CCA and DM-RS in layer 1 is smaller than that in layer 2. This is because the interference power on layer 2 is larger than that on layer 1, assuming equal power allocation across layers, which leads to a degraded SER for DM-RS, as an inaccurate channel estimate is used for equalization. However, CCA is not affected, as it recovers the common signal regardless of the interference power. To show how robust CCA is to strong interference, we now treat the second layer as an unknown interference. This can be used to model a multi-user (MU) MIMO network with two gNBs, each transmitting data to an assigned UE. For either of the UEs in that network, the interference is then due to the transmission of the secondary gNB to its intended UE. We set the first layer power to $\alpha_1$, while the second layer power is set to $1-\alpha_1$. The signal-to-interference ratio (SIR) can then easily be found to be $\text{SIR}=\frac{\alpha_1}{1-\alpha_1}$. We vary the SIR value in our simulation while satisfying a total power budget constraint. In order to ensure that the SNR remains fixed for the first layer, the noise power is scaled by a factor of $\alpha_1$. 
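The power split behind this SIR sweep can be sketched as follows (the function name is hypothetical); inverting $\text{SIR}=\alpha_1/(1-\alpha_1)$ under a unit total power budget gives the per-layer powers:

```python
def layer_powers(sir):
    """Two-layer power split (alpha1, alpha2) for a target linear SIR under a
    unit total power budget, inverting SIR = alpha1 / (1 - alpha1)."""
    alpha1 = sir / (1.0 + sir)
    return alpha1, 1.0 - alpha1
```

Scaling the noise power by the resulting $\alpha_1$ then keeps the first layer's SNR fixed across the sweep, as described above.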
In Fig. \ref{fig:interference}, we simulate the average SER vs SIR for a CCA pattern with $\Bar{\Bar{N}}=8$ REs and compare it with the same DM-RS pattern as in the previous experiment. On one hand, CCA equalization is not affected by low SIR values. This is because CCA aims to recover the common signal regardless of the interference power. On the other hand, DM-RS is highly influenced by the interference power. It fails to compute an accurate effective channel estimate to be used for equalization, and therefore it provides a high SER in the interference-limited (low SIR) range. As the SIR increases, the system becomes noise-limited and the performance converges to the average SER determined by the simulated noise level, i.e., to zero in the noiseless case and to an average SER of $2\times 10^{-3}$ for 2 dB noise in the DM-RS case. \begin{figure}[t] \begin{center} \includegraphics[scale=.2,width=0.9\columnwidth]{interference.eps} \caption{SER vs SIR for a RG of 52 RBs, DS=30 ns, UE speed=5km/hour.} \label{fig:interference} \end{center} \end{figure} \section{Conclusions} \label{Conc} We considered the problem of RS overhead reduction by proposing a new data structure in a downlink single-user MIMO system. The new data structure requires repetition of a portion of the PDSCH data across all sub-bands. Utilizing the repetition structure at the UE side, we showed that reliable detection of the desired signal is possible via CCA. The proposed solution is shown to provide both performance and computational gains, rendering it practically feasible. To further boost the CCA performance, we presented two effective strategies: one to select the repetition pattern type and one to deal with highly frequency-selective channel scenarios. We showed through numerical results that both strategies are effective in improving the CCA performance. 
We demonstrated through extensive simulations on a 3GPP link-level testbench that the proposed approach considerably outperforms the state-of-the-art methods, while having a lower computational complexity. \bibliographystyle{IEEEtran}
\section{Introduction} Quiet-Sun gamma-ray astronomy started playing a significant role in the early 1990s thanks to the EGRET mission. \citet{hudson} pointed out the importance of gamma-ray emission from the quiet Sun as an interesting possibility for highly sensitive instruments such as EGRET. \citet{seckel} estimated that the flux of gamma rays produced by cosmic-ray interactions on the solar surface would be detectable by EGRET. This emission is the result of particle cascades initiated in the solar atmosphere by Galactic cosmic rays. Meanwhile, \citet{thompson97} detected in the EGRET data the gamma rays produced by cosmic-ray interactions with the lunar surface. Although they expected similar interactions on the Sun, the EGRET data they analyzed did not show any excess of gamma rays consistent with the position of the Sun. They obtained an upper limit on the flux above 100 MeV of 2 $\times$ 10$^{-7}$ cm$^{-2}$ s$^{-1}$. \citet{fairbairn}, analyzing the EGRET data of the solar occultation of 3C279 in 1991, found a photon flux of about 6 $\times$ 10$^{-7}$ cm$^{-2}$ s$^{-1}$ above 100 MeV in the direction of the occulted source, which they used to put limits on axion models; however, they did not consider the extended solar emission. Inverse Compton scattering of cosmic-ray electrons on the solar photon halo around the Sun has been estimated to be an important source of gamma-ray emission \citep{orlando,moskalenko} \footnote{The same process from nearby stars has been studied in \citet{orlando} and \citet{orlandoc}}. In our previous work, we predicted this to be comparable to the diffuse background even at large angular distance from the Sun. The formalism has now been improved and more accurate estimates have been obtained. The anisotropic scattering formulation and the solar modulation have been implemented, in order to give a more precise model for the EGRET data. 
In this work we explain in detail our model of the extended emission produced via inverse Compton scattering of cosmic-ray electrons on the solar radiation field. A first report of our detection with EGRET data was given in \citet{orlandob}. Here we describe the full analysis, including the spectra of the disk and extended components. Our result is very promising for the forthcoming GLAST mission, which will certainly be able to detect the emission even at larger angular separation from the Sun. Moreover, the extended emission from the Sun has to be taken into account, since it can be strong enough to be a confusing background for Galactic and extragalactic diffuse emission studies. \section{Theoretical model} Inverse-Compton scattering of solar optical photons by $\sim$ GeV cosmic-ray electrons produces $\gamma$-radiation with energies of 100 MeV and above. Improving on the approximate estimates given in \citet{orlando}, in this paper the anisotropic formulation of the Klein-Nishina cross section has been introduced, taking into account the radial distribution of the photons propagating outward from the Sun. Our analytical formulation has been replaced by a numerical solution depending on the angle between the momenta of the cosmic-ray electrons and the incoming photons. As pointed out in \citet{orlando}, this affects the results at about the 10$\%$ level. Moreover, while in our previous work we used the modulated cosmic-ray electron spectrum as observed at Earth, here a formulation of the solar modulation as a function of the distance from the Sun has been taken into account. 
The gamma-ray intensity spectrum is: \begin{equation} \label{eq1} I(E_{\gamma})=\frac{1}{4\pi}\int\epsilon(E_{\gamma})dx \end{equation} where the emissivity $\epsilon$ is given by: \begin{eqnarray} \epsilon(E_{\gamma})=\int dE_{e} \times \nonumber\\ \times \int \sigma_{K-N}(\gamma,E_{ph},E_{\gamma})~ n_{ph}(E_{ph},r)~c~N(E_{e},r)~dE_{ph} \label{eq2} \end{eqnarray} where $N(E_{e},r)$ is the electron density, with $E_{e}$ the electron energy, $n_{ph}(E_{ph},r)$ the solar photon density as a function of the distance $r$ from the Sun and the photon energy $E_{ph}$, $\sigma_{K-N}$ the Klein-Nishina cross section and $\gamma=E_{e}/m_{e}$. The cosmic-ray electrons have been assumed to be isotropically distributed everywhere in the heliosphere. In \citet{orlando}, an analytical formulation, eq.(\ref{eq3}), was obtained for the isotropic case, assuming a simple inverse square law for the photon density at all distances from the Sun and a cosmic-ray electron spectrum assumed constant for all distances and determined from experimental data at 1 AU (see Fig.(\ref{fig4})). In order to obtain the inverse Compton radiation over a line of sight at an angle $\alpha$ from the Sun, the photon field variation over the line of sight has to be known. \begin{figure}[!t] \includegraphics[width=9cm, angle=0]{8817fig1.ps} \caption{Definition of variables for eq.(\ref{eq3}), describing the geometry of the solar inverse Compton emission.} \label{fig1} \end{figure} As shown in Fig.(\ref{fig1}), for a given point $x$ on the line of sight, the surface photon density is proportional to $1/r^{2}$, where $r^{2}=x^{2}+d^{2}-2~x~d \cdot cos~\alpha$, with $d$ the distance of the observer from the Sun. 
Integrating the photon density over the line of sight from $x=0$ to $x=\infty$ one obtains: {\setlength\arraycolsep{2pt} \begin{eqnarray} n_{ph}(E_{ph})~R^{2}\int_{0}^{\infty}\frac{dx}{x^{2}+d^{2}-2~x~d \cdot cos~\alpha}= \nonumber\\ =\left. -n_{ph}(E_{ph})~R^{2}~arctan\left( \frac{d \cdot cos~\alpha-x}{d \cdot sin~\alpha}\right) /(d \cdot sin~\alpha~)\right|_{0}^{\infty}= \nonumber\\ =n_{ph}(E_{ph})~R^{2}\left( \frac{ \pi/2+arctan(cot~\alpha)}{d \cdot sin~\alpha} \right) \label{eq3} \end{eqnarray}} which depends only on the angle $\alpha$, where $R$ is the solar radius and $n_{ph}(E_{ph})$ is the solar photon density. For large distances from the source, $n_{ph}(E_{ph})$=1/4 $n_{BB}(E_{ph})$, where $n_{BB}(E_{ph})$ is the black-body density of the Sun. Integrating over solid angle and using eq.(\ref{eq1}) and eq.(\ref{eq2}) with the Klein-Nishina cross section, the total photon flux produced by inverse Compton scattering within an angle $\alpha$ becomes: {\setlength\arraycolsep{2pt} \begin{eqnarray} I(E_{\gamma})=\frac{1}{4\pi} \int_{0}^{2\pi} d\varphi \int_{0}^{\alpha} sin~\alpha~ d \alpha \int dE_{ph} {} \nonumber\\ {}\times \int \sigma_{K-N}(\gamma,E_{ph}, E_{\gamma})~c~N(E_{e})~dE_{e}~ n_{ph}(E_{ph})~R^{2} \times {} \nonumber\\ {}\int_{0}^{\infty}\frac{dx}{x^{2}+d^{2}-2~x~d \cdot cos~\alpha}= {} \nonumber\\ {}=\frac{R^{2}}{16~d}\left( \pi \alpha + (\frac{\pi}{2})^{2}-arctan^{2}~(cot~\alpha)\right) {} \nonumber\\ {} \times \int dE_{ph} \int \sigma_{K-N}~c~N(E_{e})~n_{BB}(E_{ph})~dE_{e} {} \label{eq4} \end{eqnarray}} which for small $\alpha$ is proportional to $\alpha/d$, so that the intensity $I$ (per solid angle) is proportional to $1/(\alpha d)$. For the anisotropic formulation of the Klein-Nishina cross section and for the electron modulation along the line of sight, we have to use numerical computations, as already adopted in \citet{orlandob}. 
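The closed form of eq.(\ref{eq3}) is easy to cross-check numerically. The following Python sketch (an illustrative check only, not part of the analysis code; all function names are ours) integrates $1/(x^{2}+d^{2}-2xd\cos\alpha)$ along the line of sight with Simpson's rule, after mapping the half-infinite range onto $[0,1]$ via $x=t/(1-t)$, and compares the result with the closed form:

```python
import math

def los_integral_closed_form(d, alpha):
    """Closed-form line-of-sight integral of eq.(3):
    int_0^inf dx / (x^2 + d^2 - 2 x d cos(alpha))."""
    return (math.pi / 2 + math.atan(1.0 / math.tan(alpha))) / (d * math.sin(alpha))

def los_integral_numeric(d, alpha, n=20000):
    """Composite Simpson integration after the substitution x = t/(1-t)."""
    cos_a = math.cos(alpha)

    def g(t):
        # Transformed integrand f(x(t)) * dx/dt; its limit as t -> 1 is 1.
        if t >= 1.0:
            return 1.0
        x = t / (1.0 - t)
        return 1.0 / ((x * x + d * d - 2.0 * x * d * cos_a) * (1.0 - t) ** 2)

    h = 1.0 / n
    total = g(0.0) + g(1.0)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * g(i * h)
    return total * h / 3.0
```

For $d=1$ and $\alpha=30^{\circ}$ the closed form gives $(\pi/2+\pi/3)/\sin\alpha \simeq 5.236$, which the numerical quadrature reproduces to better than one part in $10^{5}$.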
\subsection{Solar photon field} \subsubsection{Basic relations} The Sun is treated as a black body whose energy density is characterized by the effective surface temperature (T=5777 K) following the Stefan-Boltzmann law. For the photon density close to the Sun, the simple inverse square law is inappropriate, so in this work the emission from an extended source has been evaluated. The photon density around the Sun, treated as an extended source, is given by integrating over the solid angle with the variables shown in Fig.(\ref{fig2}): \begin{figure}[!h] \centering \includegraphics[width=.5\textwidth, angle=0]{8817fig2.ps} \caption{ Variables involved in eq.(\ref{eq5}) for the calculation of the photon density around the Sun, where $R$ is the solar radius and $r$ is shown also in Fig. (\ref{fig1}).} \label{fig2} \end{figure} {\setlength\arraycolsep{2pt} \begin{eqnarray} n_{ph}(E_{ph},r)=n_{BB}(E_{ph})\int_{0}^{\phi_{MAX}} \frac{2\pi~R^{2}~sin~\phi ~cos~\beta ~d\phi}{4\pi~s^{2}} \nonumber\\ =0.5 ~n_{BB}(E_{ph}) \left[ 1-\sqrt{1-(R/r)^{2}}\right] \label{eq5} \end{eqnarray}} with $cos~\beta$=$(r~cos~\phi~-~R)/s$ and $cos~\phi_{MAX}~=~R$/$r$. For large distances from the Sun it reduces to the inverse square law. \subsubsection{Deviations of solar spectrum from black body} We checked the effect of deviations of the solar spectrum from a black body. We compared the black-body approximation with the AM0 solar spectrum \footnote{taken from http://www.spacewx.com/}, referring to 2007, a period of solar minimum, extrapolated back to the Sun. Since the estimated inverse Compton emission from the Sun is not affected significantly ($<$1$\%$) when the actual spectrum is used, we decided to keep the black-body approximation for simplicity. We also verified that the EUV and XUV ranges of the solar spectrum do not affect the inverse-Compton emission. 
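Equation (\ref{eq5}) interpolates between the surface value $0.5\,n_{BB}$ and the inverse square law. A minimal sketch (illustrative only; the function name and unit convention are ours, with $r$ and $R$ in the same units) makes the two limits explicit:

```python
import math

def photon_density_ratio(r, R=1.0):
    """n_ph(r)/n_BB from eq.(5): fraction of the black-body photon
    density seen at distance r from the centre of a sphere of radius R."""
    return 0.5 * (1.0 - math.sqrt(1.0 - (R / r) ** 2))
```

At the surface ($r=R$) the ratio is exactly 0.5, while for $r \gg R$ a Taylor expansion gives $0.5\,[1-\sqrt{1-(R/r)^{2}}] \simeq (R/2r)^{2}$, i.e. the inverse-square behaviour quoted in the text.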
The irradiance at the shortest wavelengths varies significantly with solar conditions: the variability spans a few percent at the longest wavelengths and ranges from 40\% to more than 500\% across the EUV and XUV, from solar minimum to solar maximum. Moreover, during solar flares the XUV increases, reaching factors of eight or more, and the EUV increases on the order of 10-40\% \citep{eparvier}. However, \citet{oncica} found that X-ray flares cannot contribute to the total solar irradiance fluctuations, and even the most energetic X-ray flares cannot account for more than 1 mW/m$^{2}$. We found that the estimated inverse-Compton emission from the Sun is not affected significantly by the XUV/EUV flux and its variability. \subsection{Comparison of isotropic/anisotropic formulations} In \citet{orlando} the isotropic formulation of the Klein-Nishina cross section given in \citet{moskalenkoa} was used in the analytical formulation for estimating the IC gamma-ray emission. In the present analysis we used the following anisotropic Klein-Nishina cross section, in a more convenient form than in \citet{moskalenkoa}: {\setlength\arraycolsep{2pt} \begin{eqnarray} \sigma_{K-N}(\gamma,E_{ph}, E_{\gamma}) = \frac{ \pi r_{e}^{2}m_{e}^{2}}{E_{ph} E_{e}^{2}}\times \nonumber\\ \left[ \left( \frac{m_{e}}{E'_{ph}}\right) ^{2} (\frac{v}{1-v})^{2} {} -2 \frac{m_{e}}{E'_{ph}} \frac{v}{(1-v)}+(1-v)+\frac{1}{1-v} \right] \label{eq6} \end{eqnarray}} where $v=E_{\gamma}/E_{e}$ and $\gamma$=$E_{e}/m_{e}$, with $m_{e}$ the electron mass, and \begin{equation} E'_{ph}=\gamma E_{ph}(1+cos~ \eta) \label{eq7} \end{equation} where $\eta$ is the scattering angle for the relativistic case, with $\eta=0$ for a head-on collision. Fig.(\ref{fig3}) shows the contribution to the flux as a function of distance along the line of sight for the isotropic and anisotropic cross sections. 
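Equation (\ref{eq6}) can be transcribed directly into code. The sketch below is illustrative only (the function name, constants and unit choices are ours: energies in MeV, the classical electron radius in cm); it evaluates the anisotropic cross section for given electron, target-photon and gamma-ray energies and scattering angle $\eta$. Writing the bracket as $(aw-1)^{2}-1+(1-v)+1/(1-v)$, with $a=m_{e}/E'_{ph}$ and $w=v/(1-v)$, shows it is always $\geq 1$, so the cross section stays positive:

```python
import math

R_E = 2.8179403262e-13   # classical electron radius [cm]
M_E = 0.5109989          # electron rest-mass energy [MeV]

def sigma_kn_aniso(E_e, E_ph, E_gamma, eta):
    """Anisotropic Klein-Nishina cross section of eq.(6).
    E_e, E_ph, E_gamma in MeV; eta = 0 is a head-on collision."""
    gamma = E_e / M_E
    v = E_gamma / E_e
    E_ph_prime = gamma * E_ph * (1.0 + math.cos(eta))   # eq.(7)
    a = M_E / E_ph_prime
    w = v / (1.0 - v)
    bracket = (a * w) ** 2 - 2.0 * a * w + (1.0 - v) + 1.0 / (1.0 - v)
    return math.pi * R_E ** 2 * M_E ** 2 / (E_ph * E_e ** 2) * bracket
```

For a 10 GeV electron scattering a 2 eV optical photon to $E_{\gamma}=100$ MeV, the head-on ($\eta=0$) and side-on ($\eta=\pi/2$) values differ, which is precisely the anisotropy the numerical treatment captures.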
\begin{figure}[!h] \centering \includegraphics[width=.35\textwidth, angle=270]{8817fig3.ps} \caption{ Relative contribution (arbitrary units) of the isotropic (gray lines) and anisotropic (black lines) cross sections, multiplied by the photon density along the line of sight for E$_{e}$=10$^{4}$ MeV and E$_{\gamma}$=100 MeV. $x$ and $d$ are defined in Fig. \ref{fig1}. The lines represent different angles from the Sun, from 1$^{\circ}$ to 6$^{\circ}$ (right to left).} \label{fig3} \end{figure} The integrated inverse-Compton emission was compared for the isotropic and anisotropic cross sections; the maximum difference between the two intensities is about 15$\%$, around 300 MeV and at 0.5$^{\circ}$ angular distance from the Sun. \subsection{Electron spectrum} The solar modulation of cosmic-ray electrons has been widely studied, but is still not completely understood. \citet{mueller} analyzed the data of Helios-1 from 1974 to 1975, a period corresponding to the approach to solar minimum, and reported measurements of proton and helium cosmic-ray intensities in the 20-50 MeV/n energy interval between 0.3 and 1 AU. The proton gradients they obtained were small and generally consistent with zero, while the helium gradients were positive but small. On the basis of Helios-1 data, \citet{kunow} stated: `The radial variation seems to be very small for all particles and all energy ranges'. The study of \citet{seckel} was based on this hypothesis. Since that time, modulation theories have been developed including effects like drifts and current-sheet tilt variations \citep{thomas}, as well as short-time variations. In this work we consider only long-period modulations for solar maximum and minimum, and we assume that the longitudinal gradient is negligible. \citet{mcdonald}, analyzing cycles 20 and 22 of solar minimum between 15 and 72 AU, found that most of the modulation in this period occurs near the termination shock (assumed to be at 100 AU). 
On the other hand, from solar minimum to solar maximum the modulation increases mainly inside the termination shock. Recently \citet{morales}, investigating the radial intensity gradients of cosmic rays from 1 AU to the distant heliosphere and interpreting the data from IMP8, Voyagers 1 and 2, Pioneer 10 and BESS for Cycles 21, 22 and 23, found different radial gradients in the inner heliosphere compared to \citet{mcdonald}. In this region they obtained an average radial gradient of $\simeq$3\%/AU for 175 MeV H and $\simeq$2.2\%/AU for 265 MeV/n He, which, at 1 AU, gives an intensity smaller than \citet{mcdonald} for H and higher for He. \citet{gieseler} analyzed the data from Ulysses from 1997 to 2006 at 5 AU. They found a radial gradient of 4.5\%/AU for $\alpha$ particles with energies from 125 to 200 MeV/n, which is consistent with previous measurements. In this work, our model of the modulation of cosmic rays uses the formulation given in \citet{gleeson}, together with the studies of the radial distribution of cosmic rays in the heliosphere at solar minimum and maximum. The cosmic-ray electron spectrum is given by the well-known force-field approximation for cosmic-ray nuclei, used to obtain the modulated differential intensity $J(r,E)$ at energy $E$ and distance $r$ from the Sun \citep{gleeson}. As demonstrated by \citet{caballero}, it is a good approximation for Galactic cosmic rays in the inner heliosphere. The differential intensity of the modulated spectrum is given by: \begin{equation} J(r, E)=~J ( \infty, ~E+ ~\Phi(r) ) ~ \frac{E(E+2E_{0})}{(E+~\Phi(r)+2E_{0})(E+\Phi(r))} \label{eq8} \end{equation} where $E$ is the kinetic energy in MeV and $E_{0}$ is the electron rest-mass energy. We use the local interstellar electron spectrum for $J ( \infty, ~E+ ~\Phi(r))$. $\Phi(r)=(Ze/A)\phi$, where $\phi$ is the modulation potential, $Z$ the charge number and $A$ the mass number. 
In order to compute the modulation potential in the inner heliosphere, we used the parameterization found by \citet{fujii} for Cycle 21 from 100 AU to 1 AU, neglecting the time dependence and normalizing at 1 AU to 500 MV (solar minimum) and 1000 MV (solar maximum). As in \citet{moskalenko} we find: \begin{equation} \Phi(r)=\Phi_{0}(r^{-0.1}-r_{b}^{-0.1})/(1-r_{b}^{-0.1}) \label{eq9} \end{equation} where $\Phi_{0}$ is the modulation potential at 1 AU (500 and 1000 MV for solar minimum and maximum respectively), $r_{b}=100$ AU and $r$ is the distance from the Sun in AU. Figure (\ref{fig4}) shows the local interstellar electron spectrum \citep{strong} and the modulated spectrum at 1 AU, compared with the data. \begin{figure}[!h] \centering \includegraphics[width=.35\textwidth, angle=270]{8817fig4.ps} \caption{ Local interstellar electron spectrum (blue line) as in \citet{strong} and the spectra modulated at 1 AU with $\Phi_{0}$=500 MV (red line) and 1000 MV (black line), compared with the data. See \citet{strong} for data references. } \label{fig4} \end{figure} The electron intensity as a function of distance from the Sun is given in Fig.(\ref{fig5}) for different energies and for modulation potential $\Phi_{0}=500$ MV. \begin{figure}[!h] \centering \includegraphics[width=.45\textwidth, angle=0]{8817fig5.ps} \caption{ Electron intensity as a function of distance from the Sun in AU, where 0 corresponds to the Sun, for different energies and for modulation potential $\Phi_{0}=500$ MV. } \label{fig5} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=.36\textwidth, angle=270]{8817fig6.ps} \caption{ Contribution to the IC emission for different electron energy ranges, left to right: $100-10^{3}$ MeV, $(1-3) \times 10^{3}$ MeV, $(3-5) \times 10^{3}$ MeV, $(0.5-1) \times 10^{4}$ MeV, $(1-5) \times 10^{4}$ MeV, $(0.5-1) \times 10^{5}$ MeV, $1 \times 10^{5}- 5 \times 10^{7}$ MeV. 
} \label{fig6} \end{figure} The major contribution to the energy range 100-500 MeV in which we are interested comes from electrons with energies between 1 and 10 GeV, as shown in Fig. (\ref{fig6}), obtained for the local interstellar electron spectrum with no modulation. To compute the inverse-Compton extended emission from the heliosphere and compare it with the EGRET data, details of the solar atmosphere, the non-isotropic solar wind and the asymmetries of the magnetic field have been neglected, considering the limited sensitivity of EGRET. Since the biggest uncertainties in the gamma-ray emission come from the cosmic-ray electron spectrum close to the Sun, in our model we considered two possible configurations of the solar modulation within 1 AU. The ``naive'' approximation is to assume that the cosmic-ray flux towards the Sun equals the flux observed at Earth, since there is evidence that modulation by the solar wind does not significantly alter the spectrum once cosmic rays have penetrated as far as the Earth; moreover, high-energy electrons are less sensitive to the modulation. This approximation gives an upper limit of the modelled flux. The other approach is to assume that the electron spectrum varies due to solar-wind effects within 1 AU. With this ``nominal'' approximation we assume that the formulation for the solar modulation from 100 AU to 1 AU can be extrapolated also below 1 AU, using eq. (\ref{eq8}). This gives an approximate lower limit in our model. \subsection{Calculated extended solar emission} Figure (\ref{fig7}) shows the spectrum of the emission for two different angular distances from the Sun, without modulation, for two levels of solar modulation ($\Phi$=500, 1000 MV, respectively for solar minimum and solar maximum), and for the cases of upper and lower limits described above. 
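The force-field machinery of eqs.(\ref{eq8})-(\ref{eq9}) can be sketched in a few lines. The code below is illustrative only (the power-law local interstellar spectrum is a toy stand-in for the \citet{strong} spectrum), and the denominator of the modulation potential is normalized with $r_{b}^{-0.1}$ so that $\Phi(1\,{\rm AU})=\Phi_{0}$ and $\Phi(r_{b})=0$:

```python
E0 = 0.511  # electron rest-mass energy [MeV]

def phi_r(r, phi0, r_b=100.0):
    """Modulation potential vs. heliocentric distance r [AU], eq.(9).
    phi0 is the potential at 1 AU [MV]; for electrons (Z/A = 1) the
    resulting energy shift Phi is numerically equal to phi in MeV."""
    return phi0 * (r ** -0.1 - r_b ** -0.1) / (1.0 - r_b ** -0.1)

def modulated_intensity(lis, E, r, phi0):
    """Force-field approximation, eq.(8).  `lis` is the local
    interstellar spectrum J(inf, E) supplied as a callable."""
    phi = phi_r(r, phi0)
    return lis(E + phi) * E * (E + 2.0 * E0) / ((E + phi + 2.0 * E0) * (E + phi))
```

With a toy power-law LIS, the modulated intensity is strongly suppressed at 100 MeV and approaches the interstellar value at high energies, as in Fig.(\ref{fig4}).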
\begin{figure}[!h] \centering \includegraphics[width=.45\textwidth, angle=0]{8817fig7.ps} \caption{ Spectrum of the emission for (top to bottom) 0.5$^{\circ}$ and 5$^{\circ}$ angular distances from the Sun and for different conditions of solar modulation. Solid lines: no modulation, pink lines: $\Phi_{0}$=500 MV, blue lines: $\Phi_{0}$=1000 MV, dashed lines: naive model, dotted lines: nominal model. EGB is the extragalactic background as in \citet{stronga}.} \label{fig7} \end{figure} The angular profile of the emission above 100 MeV is shown in Fig.(\ref{fig8}) without modulation, for two levels of solar modulation ($\Phi$=500, 1000 MV) and for the naive and nominal cases. The emission is extended and is important compared to the extragalactic background (around 10$^{-5}$cm$^{-2}$s$^{-1}$sr$^{-1}$) even at very large angles from the Sun. Even around 10$^{\circ}$ it is still about 10$\%$ of the extragalactic background, and with the sensitivity of GLAST it should be included in whole-sky diffuse emission studies. \begin{figure}[!h] \centering \includegraphics[width=.45\textwidth, angle=0]{8817fig8.ps} \caption{ Angular profile of the emission as a function of the angular distance from the Sun above 100 MeV. Blue line: no modulation, pink lines: $\Phi_{0}$=500 MV, green lines: $\Phi_{0}$=1000 MV, solid lines: naive model, dashed lines: nominal model.} \label{fig8} \end{figure} An example of the intensity of the inverse-Compton emission predicted in a region of 10$^{\circ}$ from the Sun is shown in Fig.(\ref{fig9}) for 300-500 MeV. \begin{figure}[h!] \centering \includegraphics[width=.3\textwidth, angle=0]{8817fig9.ps} \caption{Inverse-Compton emission modelled for a region of 10$^{\circ}$ from the Sun for 300-500 MeV and for the naive model of 1000 MV modulation. Intensity is given in cm$^{-2}$s$^{-1}$sr$^{-1}$. 
In the figure, the minimum value of the intensity is 1.2 $\times$ 10$^{-7}$ cm$^{-2}$s$^{-1}$sr$^{-1}$.} \label{fig9} \end{figure} While the inverse Compton emission is expected to be readily detectable in future by GLAST, the situation for the available EGRET data is more challenging. In the following section we present our study with the EGRET database. \section{EGRET data preparation and selection} We analyzed the EGRET data using the code developed for moving targets such as the Earth \citep{petry}. The software allows us to produce images centred on a moving source, together with the traces of other sources relative to the centred source. To perform the analysis for the Sun, we analyzed the data in a Sun-centred system. In order to improve the sensitivity, and hence the detection of our emission component, the diffuse background was reduced by excluding the Galactic plane within 15$^{\circ}$ latitude. Otherwise all available exposure from October 1991 to June 1995 was used (viewing periods 1, 11, 12, 19, 26, 209, 221, 320, 325, 410, 420), excluding the large solar flare of June 1991. When the Sun passed by other gamma-ray sources (the moon, 3C 279 and several quasars), these sources were included in the analysis. In order to obtain the flux of the moon to be fitted, the same method used for the Sun was applied with all data moon-centred. Details will be given in \citet{petrya}. \section{EGRET analysis} We fitted the EGRET data in the Sun-centred system using a multi-parameter likelihood fitting technique (see Appendix). The region used for fitting is a circle of radius 10$^{\circ}$ around the Sun. The total number of counts predicted includes the inverse Compton and solar disk fluxes, the moon, QSO 3C279, other background sources and the background. 
We left as free parameters the solar extended inverse-Compton flux from the model, the solar disk flux (treated as a point source, since the solar disk is not resolvable by EGRET), a uniform background, and the flux of 3C279, the dominant background point source. The moon flux was determined from moon-centred fits and the 3EG source fluxes were fixed at their catalogue values. All components were convolved with the energy-dependent EGRET PSF. \subsection{Model of extended solar emission} The solar extended inverse-Compton emission was modelled within a 10$^{\circ}$ radius from the Sun and with a pixel size of 0.5$^{\circ}$. The model was convolved with the energy-dependent EGRET PSF and implemented in the fit, with a scale factor as free parameter. We took into account the different approximations to the modulation described above, since the data we used cover a period from October 1991, when there was a solar maximum, to June 1995, close to the solar minimum. Since the emission we are looking for is very close to the EGRET sensitivity limit, we had to take as much exposure as possible, and it was not possible to perform an analysis splitting the data according to solar modulation. \subsection{3C279} Since 3C 279 is the brightest source that passes close to the Sun, we decided to leave its flux as a free parameter in the fit. We fixed the spectral index at 1.96, the value given in the EGRET catalogue. The flux obtained was then compared to that in the EGRET catalogue. \subsection{Other background sources} Many sources passing within 10$^{\circ}$ of the Sun were included in the analysis. We decided to implement their fluxes as fixed parameters. With the code of \citet{petry} it was possible to determine the exact viewing period of any source and hence to take its flux from the 3rd EGRET catalogue \citep{hartmann}. Hence, we took the flux value in the EGRET catalogue corresponding to the period we used, when this was listed. 
Since the catalogue does not contain the flux values corresponding to all the periods in which the sources were close to the Sun, in case of uncertainty we decided to use the first entry in the catalogue, which is the one from which the source position was derived. In almost all cases, this is the detection with the highest statistical significance, which in most cases corresponds to the maximum value of the flux. We also took the spectral index from the EGRET catalogue. The values adopted for the sources are listed in Table (\ref{table1}). Because the spectral index of J2321-0328 is not known, the fit above 300 MeV was performed without this source, as there is no evidence in the literature of its detection above this energy. In order to verify that this choice does not significantly affect the fit result, we also tested a typical spectral index of 2. Both fitting results are reported in the results section below. The traces of the background quasars were added together in the same data file, rescaled according to their fluxes. \begin{table} \begin{minipage}[t]{\columnwidth} \caption{Parameters of the background sources used for the analysis.} \label{table1} \centering \renewcommand{\footnoterule}{} \begin{tabular}{l l l } \hline \hline Source& Flux ($>$100 MeV)\footnote{Units 10$^{-7}$ cm$^{-2}$s$^{-1}$} & Spectral index \\ \hline J0204+1458 &2.36 & 2.23 \\ J0215+123 &1.80& 2.03 \\ J1230-0204 &1.13 & 2.85 \\ J1235+0233 &1.24 & 2.39 \\ J1246-0651 &1.29 & 2.73 \\ J1310-0517 &1.04 & 2.34 \\ J1409-0745 &2.74 & 2.29 \\ J2321-0328 &3.82 & - \footnote{The choice of the spectral index we used is explained in the text} \\ \hline \end{tabular} \end{minipage} \end{table} \subsection{Diffuse background} Since the count maps used for our analysis include only data at latitudes $|b|>15^{\circ}$, with the Galactic plane excluded, the Galactic emission can be well approximated by an isotropic background. 
We left its intensity as a free parameter in the fitting analysis. \subsection{Moon} The flux of the moon was determined from moon-centred data, fitting the flux and an isotropic background. Since the moon moves quickly across the sky, all other sources moving by are taken into account as a constant component included in the isotropic background. The fit was performed for two energy ranges: 100-300 MeV and $>$300 MeV. The flux of the moon in each energy range was obtained by maximum likelihood. The maximum likelihood ratio statistic used for this analysis is described in the appendix. The values of the best-fit fluxes and 1$\sigma$ errors are given in Table (\ref{table2}). \begin{table} \caption{Fitting results for the moon} \label{table2} \centering \begin{tabular}{l l l } \hline\hline Energy (MeV)& Moon flux ($cm^{-2}s^{-1}$) & Background ($cm^{-2}s^{-1}sr^{-1}$) \\ \hline $> 100$ & (5.55~$\pm$~0.65)~ $\times 10^{-7}$ & 3.47 $\times 10^{-5}$ \\ $> 300$ & (5.76~$\pm$~1.66)~ $\times 10^{-8}$ & 1.01 $\times 10^{-5}$ \\ $100-300$ & (4.98~$\pm$~0.57)~ $\times 10^{-7}$ & 2.13 $\times 10^{-5}$ \\ \hline \end{tabular} \end{table} A previous study of the EGRET data \citep{thompson97} gave a flux of ($5.4 \pm 0.7) \times 10^{-7}$ cm$^{-2}$s$^{-1}$ above 100 MeV, in agreement with our analysis. \begin{figure}[!h] \centering \includegraphics[width=.45\textwidth, angle=0]{8817fig10.ps} \caption{Spectrum of the moon obtained by the fitting analysis of the EGRET data between October 1991 and June 1995 (dashed region). The thickness of the line shows the error bars. Black bars are the fluxes obtained by \citet{thompson97} for different solar conditions, while the lines are the theoretical model of \citet{moskalenkob}, for solar maximum (red line) and solar minimum (green line).} \label{fig10} \end{figure} The spectrum of the moon shown in Fig.(\ref{fig10}) has been obtained imposing a constant spectral index $\gamma$, extrapolated up to 2 GeV. 
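The maximum-likelihood ratio statistic used here (detailed in the appendix) can be illustrated with a minimal one-parameter Poisson sketch; the template, background and grid below are invented for illustration, whereas the real fit maximizes over several components simultaneously:

```python
import math

def poisson_loglike(counts, model):
    """Poisson log-likelihood, dropping the data-only ln(n!) term."""
    return sum(n * math.log(mu) - mu for n, mu in zip(counts, model))

def fit_source_flux(counts, template, background, s_grid):
    """Scan a source-scaling parameter s with model_i = s*template_i + background.
    Returns (best-fit s, 2 ln(Lmax/L0)), L0 being the background-only likelihood."""
    best_s, best_ll = None, -float("inf")
    for s in s_grid:
        ll = poisson_loglike(counts, [s * t + background for t in template])
        if ll > best_ll:
            best_s, best_ll = s, ll
    ll0 = poisson_loglike(counts, [background] * len(counts))
    return best_s, 2.0 * (best_ll - ll0)
```

With ``Asimov'' data (the expected counts themselves used as data), the scan recovers the true scaling, and the ratio $2\ln(L_{max}/L_{0})$ is positive whenever the source component improves on the background-only model.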
The lunar spectral index was determined as $3.05^{+0.38}_{-0.29}$. Errors are calculated from the uncertainties on the integrated fluxes. Since the EGRET data we used to analyze the emission from the Sun extend from solar maximum to minimum, in this analysis we estimated an average lunar flux between the two periods of solar modulation. Comparing our spectrum with the model of \citet{moskalenkob} for different solar conditions, we find a good agreement at 100 MeV, while at higher energies the model is about a factor of two higher. A more extensive analysis will be given in \citet{petrya}. \section{Solar analysis results} The analysis was performed for two energy ranges, 100-300 MeV and above 300 MeV, as well as their combination to improve the detection statistics. This choice was determined by the limitations of the EGRET data. Fig.\ref{fig11} shows the smoothed count maps, centred on the Sun, for those energy ranges and for a region of 20$^{\circ}$ on a side. The fitting region has a radius of 10$^{\circ}$ centred on the Sun. Since the parameters of interest are the solar disk source and the extended emission, the likelihood is maximized over the other components. The analysis was performed for four different estimates of the solar modulation. \begin{figure}[!h] \centering \includegraphics[width=.25\textwidth, angle=0]{8817fig11a.ps}\\%Log_100_500sup.eps \includegraphics[width=.25\textwidth, angle=0]{8817fig11b.ps}\\ \includegraphics[width=.25\textwidth, angle=0]{8817fig11c.ps} \caption{EGRET Sun-centred counts maps, top to bottom: $>$100 MeV, $>$300 MeV and 100-300 MeV. The colorbar shows the counts per pixel. The area is 20$^{\circ}$ on a side and the maps are Gaussian smoothed to 3$^{\circ}$.} \label{fig11} \end{figure} \begin{table*} \caption{Results for different energy ranges and for the naive model of 1000 MV solar modulation. Fluxes are given in cm$^{-2}$s$^{-1}$ and intensities in cm$^{-2}$s$^{-1}$sr$^{-1}$. For details see text. 
} \label{table3} \centering \begin{tabular}{l l l l l l } \hline\hline Model& & $>$100 MeV & $>$300 MeV with J2321-0328 & $>$300 MeV no J2321-0328 &100-300 MeV\\ \hline\hline Source flux&best fit& 4.68$^{+4.17}_{-3.82}$ $\times 10^{-8}$ & 3.38$^{+1.80}_{-1.81}$ $\times 10^{-8}$& 3.38$^{+1.88}_{-1.84}$ $\times 10^{-8}$ &14.04 $\times 10^{-8}$ \\ &mean& (5.42$\pm$3.20)$\times 10^{-8}$ & (3.03$\pm$1.10)$\times 10^{-8}$& (3.65$\pm$1.71)$\times 10^{-8}$ &(14.2$\pm$9.1)$\times 10^{-8}$ \\ &counts & 28 & 20 & 20 &39\\ \hline Extended flux&best fit & 3.83$^{+2.78}_{-2.80}$ $\times 10^{-7}$& 1.54$^{+1.06}_{-1.18}$ $\times 10^{-7}$&1.75$^{+1.08}_{-1.13}$ $\times 10^{-7}$&1.15 $\times 10^{-7}$\\ &mean & (3.89$\pm$2.16)$\times 10^{-7}$& (1.49$\pm$0.67)$\times 10^{-7}$&(1.70$\pm$0.87)$\times 10^{-7}$&(2.07$\pm$1.34)$\times 10^{-7}$\\ & counts & 273&116&132& 83\\ \hline Extended solar model&flux &2.18 $\times 10^{-7}$& 0.90 $\times 10^{-7}$&0.90 $\times 10^{-7}$&1.28 $\times 10^{-7}$\\ \hline Background& intensity & 3.50$\times 10^{-5}$&1.13$\times 10^{-5}$&1.12$\times 10^{-5}$&2.48$\times 10^{-5}$\\ &counts & 2220&750&741&1603\\ \hline 3C279& flux & 4.88$\times 10^{-7}$&1.30$\times 10^{-7}$&1.30$\times 10^{-7}$&5.85$\times 10^{-7}$\\ &counts & 103&29&29&79\\ \hline Prob. (flux=0)&& 6.4$\times 10^{-3}$&2.3$\times 10^{-4}$& 9.7$\times 10^{-5}$&0.24\\ \hline Strongest bkg source &flux& 3.82$\times 10^{-7}$ & 1.28$\times 10^{-7}$ & 0.63$\times 10^{-7}$ & 2.54$\times 10^{-7}$\\ Total bkg sources & counts &113 & 27 & 21 & 66\\ \hline Moon& flux& 5.55$\times 10^{-7}$ & 0.57 $\times 10^{-7}$ & 0.57$\times 10^{-7}$ & 4.98$\times 10^{-7}$\\ & counts& 37 & 4 & 4 & 29\\ \hline \end{tabular} \end{table*} For the different models of solar modulation, the expected values of the inverse Compton emission are in agreement with the data within 1$\sigma$, albeit with rather large error bars. 
This also means that, because of the limited sensitivity of EGRET, it is not possible to prove which model better describes the data. We found detection significances corresponding to 2.7$\sigma$ above 100 MeV, 3.6$\sigma$/4$\sigma$ above 300 MeV (including/excluding J2321-0328) and 1$\sigma$ for 100-300 MeV, respectively. However, we chose the naive model of 1000 MV solar modulation in order to derive the spectra of the extended and disk emission, for two reasons: first, this model produces the highest value of the likelihood; second, it should be the most realistic for the period 1991-1995, since the data cover the total period of solar maximum and the upcoming solar minimum. {\it Hence, we report only the results for the naive case of 1000 MV solar modulation.} \begin{figure*}[!h] \centering \includegraphics[width=.45\textwidth, angle=0]{8817fig12a.ps} \includegraphics[width=.45\textwidth, angle=0]{8817fig12b.ps} \includegraphics[width=.45\textwidth, angle=0]{8817fig12c.ps} \caption{Logarithm of the likelihood ratio (ln~$(L/L_{0})$) as a function of the solar disk flux and an extended component for (top to bottom) $>$100 MeV, $>$300 MeV and 100-300 MeV. Color contours are different values of the ratio, as explained in the colorbar. Contours are obtained by allowing the background and the QSO 3C279 component to vary to maximize the likelihood for each value of disk flux and extended component. Contour lines define 1, 2 and 3$\sigma$ intervals for the two separate parameters.} \label{fig12} \end{figure*} We used both frequentist and Bayesian approaches to analyze the results. The statistical methods are described in the appendix. Values of the best-fit fluxes, 1$\sigma$ errors, counts and mean values with errors are given in Table \ref{table3}. Best-fit values are calculated from the log-likelihood ratio statistic, while mean values are from the Bayesian method. 
Counts are given for the maximum-likelihood values, as are the background, the 3C279 fluxes and the probability of the null hypothesis. The values in the first column were used to gain statistics for the detection, while the separate energy ranges are best suited for the spectral analysis. For the 100-300 MeV range, error bars cannot be determined by the frequentist method. We also give the probability of the null hypothesis (i.e. zero flux from disk and extended emission) using the likelihood ratio statistic. The table contains two cases above 300 MeV with different fluxes of the background sources. The first is obtained including the source J2321-0328, whose spectral index was fixed at a typical value of 2, since it is not given in the 3EG catalogue. The second case does not include that source, since it has not been detected at energies above 300 MeV. We took the second case for the following analysis. These two cases also give an estimate of the uncertainty of our results. \begin{figure*}[!h] \centering \includegraphics[width=.32\textwidth, angle=0]{8817fig13a.ps} \includegraphics[width=.32\textwidth, angle=0]{8817fig13b.ps} \includegraphics[width=.32\textwidth, angle=0]{8817fig13c.ps} \includegraphics[width=.32\textwidth, angle=0]{8817fig13d.ps} \includegraphics[width=.32\textwidth, angle=0]{8817fig13e.ps} \includegraphics[width=.32\textwidth, angle=0]{8817fig13f.ps} \caption{Marginal probability of the two components calculated from eq.\ref{eqA4} and cumulative probability as a function of the extended (upper panels) and disk (lower panels) fluxes for different energy ranges, (left to right) above 100 MeV, above 300 MeV and 100-300 MeV.} \label{fig13} \end{figure*} The log-likelihood ratios for the three energy ranges we analyzed are displayed in Fig.\ref{fig12} as a function of solar disk flux and extended flux. 
Colors show different values of the ratio, obtained by allowing the background and the QSO 3C279 component to vary to maximize the likelihood for each value of the disk flux and extended component, while contour lines define 1, 2 and 3$\sigma$ confidence intervals for the two separate parameters. Marginal probabilities of the two components calculated with the Bayesian method and the cumulative probability as a function of the disk and extended fluxes are shown in Fig. \ref{fig13}. The sum of the two components, disk and extended, is given in Table \ref{table4}. \begin{table} \caption{{\it Sum} of disk and extended components of solar emission for different energies. Values are from the Bayesian method.} \label{table4} \centering \begin{tabular}{l l } \hline\hline Energy (MeV)& Total flux ($10^{-7}$ cm$^{-2}$s$^{-1}$) \\ \hline $> 100$ & (4.44$\pm$2.03) \\ $> 300$ & (2.07$\pm$0.79) \\ $100-300$ & (3.49$\pm$1.35) \\ \hline \end{tabular} \end{table} Counts of the source components centred on the Sun resulting from the fitting technique above 300 MeV are shown in Fig.\ref{fig14}. \begin{figure*}[!h] \centering \includegraphics[width=1.\textwidth, angle=0]{8817fig14.ps} \caption{Counts of the source components resulting from the fitting technique above 300 MeV. Left to right: solar disk, inverse Compton, QSO 3C279, moon and background sources. Colorbars show counts per pixel.} \label{fig14} \end{figure*} A summary of the main results is given in Table \ref{table5}. \begin{table} \caption{Fluxes used to produce the plotted solar spectra. Fluxes are in 10$^{-7}$ cm$^{-2}$s$^{-1}$.} \label{table5} \centering \begin{tabular}{l l l} \hline\hline Source & 100-300 MeV &$>$300 MeV \\ \hline \hline Extended&2.1$\pm$1.3 & 1.7$\pm$0.9\\ Model extended & 1.3 & 0.9\\ Disk & 1.4$\pm$0.9 &0.4$\pm$0.2 \\ Seckel's disk model &0-1.1 &0.1-0.5\\ \hline \end{tabular} \end{table} The solar disk spectral index we found is 2.4$^{+0.9}_{-0.8}$. 
Errors are calculated from the uncertainties on the integrated fluxes. Figure \ref{fig15} shows the solar disk spectrum obtained by imposing a constant spectral index of 2.4, its mean value, and following a simple power law up to 2 GeV. For the extended emission, we found a spectral index of 1.7$^{+0.8}_{-0.5}$. The extended spectrum is shown in Fig. \ref{fig16}. As for the disk spectrum, the spectral index has been fixed at the mean value. The spectrum is compared with the modelled spectrum for the ``naive'' cases of 500 MV and 1000 MV solar modulation. \begin{figure}[!h] \centering \includegraphics[width=.5\textwidth, angle=0]{8817fig15.ps} \caption{Solar disk spectrum. The orange region defines the possible values obtained by varying the mean flux within 1$\sigma$ errors and for $\gamma$=2.4, the mean value of the spectral index.} \label{fig15} \end{figure} \begin{figure}[!h] \centering \includegraphics[width=.5\textwidth, angle=0]{8817fig16.ps} \caption{Solar extended spectrum. The gray region defines the possible values obtained by varying the mean flux within 1$\sigma$ errors and for $\gamma$=1.7, the mean value of the spectral index. The black line is the model for the naive case of 1000 MV modulation. For comparison, the pink line shows the model for the naive case of 500 MV modulation. } \label{fig16} \end{figure} \section{Tests of analysis procedure} In order to test the solar detection, we divided the total exposure used for the analysis described above into two periods of about the same exposure time. We then analyzed each period with the same fitting technique. Both time periods contain about 3.6$\times$10$^{8}$ cm$^{2}$s of exposure on the Sun. Assuming equal periods and a constant flux from the Sun, the two periods should give compatible fit results. As an example, the analysis was performed above 300 MeV, where the detection was most significant, for the naive case with solar modulation 1000 MV.
The first period contains only 3C279, J1409-0745 and the moon, while the second contains only the other background sources and the moon; 3C279 is not present. The total emission from the Sun was detected at a level of 2.6$\sigma$ and 3.4$\sigma$ for the first and second periods, respectively. The values obtained for the different emission components are consistent within 1$\sigma$. Moreover, they are also in agreement with those obtained for the total period. This confirms the validity of our method and the solar detection. In order to exclude possible contamination from some faint solar flares detected by CGRO/COMPTEL during the same period, we also applied the fitting method with solar-flare times excluded. The values are not reported since, as expected, we did not find any change in the results. \section{Discussion} For the extended solar emission, Fig. \ref{fig16} shows that our model is fully consistent with the measured spectrum. The main uncertainty in the model is the electron spectrum (the solar radiation field and the physics of inverse Compton scattering are precisely known). The electron spectrum at the relevant energies has an experimental uncertainty of about a factor of 2. In fact the measured extended fluxes are higher by a factor $\sim$2 (although not very significantly), which suggests our electron spectrum could be too low. Regarding the solar disk, the flux in all cases is in agreement with the theoretical value for pion decay obtained by \citet{seckel}. The Galactic background obtained with the fitting technique is compatible with the expected Galactic emission (GALPROP) \citep{stronga}. The flux of 3C~279, $7.2 \times 10^{-7}$ cm$^{-2}$s$^{-1}$ (the sum of the fluxes for 100-300 MeV and above 300 MeV), is in agreement with the EGRET catalogue.
In the catalogue, for period 11 (used in this analysis) the flux is $(7.94 \pm 0.75) \times 10^{-7}$ cm$^{-2}$s$^{-1}$ above 100 MeV, while for period 12 (the other viewing period used in the analysis) there is hardly any contribution. With respect to the analysis of EGRET data performed by \citet{thompson97}, our study has some improvements which explain why we succeeded in detecting the Sun. Instead of excluding data near the source 3C279 and the moon, we include them in our analysis. This yields more exposure, using contributions from all observing periods. The dedicated analysis software for moving targets and the inclusion of the extended emission from the Sun produced a more realistic prediction of the total data and hence a more sensitive analysis. \section{Conclusions} In this paper the theory of gamma-ray emission from IC scattering of the solar radiation field by Galactic CR electrons has been given in detail. Analyzing the EGRET database, we find evidence for emission from the Sun and its vicinity. For all models the expected values are in good agreement with the data, albeit with rather large error bars. This also means that, because of the limited sensitivity of EGRET, it is not possible to establish which model best describes the data. The spectrum of the moon has also been derived. Since the inverse Compton emission from the heliosphere is extended, it contributes to the whole-sky foreground and has to be taken into account for diffuse background studies. Moreover, since the emission depends on the electron spectrum and its modulation in the heliosphere, observations in different directions from the Sun can be used to determine the electron spectrum at different positions, even very close to the Sun. To distinguish the modulation models (e.g. with GLAST) an accuracy of $\sim$10$\%$ would be required.
With the point-source sensitivity of GLAST of around 10$^{-7}$ cm$^{-2}$s$^{-1}$ per day above 100 MeV, it will be possible to detect this emission on a daily basis when the Sun is not close to the Galactic plane. In addition, a precise model of the extended emission, such as that presented here, would be required in searches for exotic effects such as those in \citet{fairbairn}. \begin{acknowledgements} We thank Dirk Petry for providing counts maps, exposure and source traces of the EGRET data and for useful discussions. We are also grateful to Berndt Klecker for useful discussions and for drawing our attention to the Helios results. Thanks are due to the referee David Thompson for helpful suggestions which significantly improved the presentation and analysis. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The dKP hierarchy (= dispersionless limit of the KP hierarchy) and various reductions thereof (such as dNKdV) play an increasingly important role in topological field theory and its connections to strings and 2-D gravity (cf. \cite{aa,ac,af,ca,cj,dp,kk,tb,tc}). There are also connections to twistor theory in the spirit of \cite{tb,tc} but we do not pursue this direction here. We address rather the dispersionless Hirota equations which arise from the dispersionless differential Fay identity and characterize dKP. We show that this identity is equivalent to a kernel expansion which generates the dispersionless Hirota equations in a universal algebraic manner depending only on the expressions $\lambda = P + \sum_1^{\infty}U_{n+1}P^{-n}$ or equivalently $P = \lambda - \sum_1^{\infty}P_{j+1}\lambda^{-j}$. The $P_{j+1}$ play the role of universal coordinates and this aspect has important consequences in topological field theory (cf. \cite{af}). The solutions are given in terms of a residue formula $F_{mn} = Res_P[\lambda^md\lambda^n_{+}]$, which in fact are shown to be generated by the universal coordinates $P_{n+1} = F_{1n}/n$. We also show how the Hamilton-Jacobi and hodograph analysis for dNKdV yields D-bar data for the action $S$ and this gives a moment type formula for the $F_j$ in terms of $\bar{\partial}S$. In the dKdV situation this actually yields a direct formula for the $F_{ij}$ providing an immediate alternative method of calculation. 
\medskip \section{Background on KP and dKP} \renewcommand{\theequation}{2.\arabic{equation}}\setcounter{equation}{0} One can begin with two pseudodifferential operators ($\partial = \partial/\partial x$), \begin{equation} L = \partial + \sum_1^{\infty}u_{n+1}\partial^{-n};\, \ \,W = 1 + \sum_1^ {\infty}w_n\partial^{-n} \ , \label{AA} \end{equation} called the Lax operator and gauge operator respectively, where the generalized Leibnitz rule with $\partial^{-1} \partial = \partial \partial^{-1} = 1$ applies \begin{equation} \partial^if\cdot \ = \sum_{j=0}^{\infty}{i \choose j}(\partial^j f) \partial^ {i-j} \cdot \ , \label{AB} \end{equation} for any $i \in {\bf Z}$, and $L = W\partial\,W^{-1}$. The KP hierarchy then is determined by the Lax equations ($\partial_n = \partial/\partial t_n$), \begin{equation} \partial_n L = [B_n,L] := B_n L - L B_n \ , \label{AC} \end{equation} where $B_n = L^n_{+} $ is the differential part of $L^n = L^n_{+} + L^n_{-} = \sum_0^{\infty}\ell_i^n\partial^i + \sum_{-\infty}^ {-1}\ell_i^n\partial^i$. One can also express this via the Sato equation, \begin{equation} \partial_n W\,W^{-1} = -L^n_{-} \ , \label{AD} \end{equation} which is particularly well adapted to the dKP theory. Now define the wave function via \begin{equation} \psi = W\,e^{\xi} = w(t,\lambda)e^{\xi};\, \ \,\xi := \sum_1^{\infty}t_n \lambda^n; \, \ \,w(t,\lambda) = 1 + \sum_1^{\infty}w_n(t)\lambda^{-n} \ , \label{AE} \end{equation} where $t_1 = x$. There is also an adjoint wave function $\psi^{*} = W^{*-1} \exp(-\xi) = w^{*}(t,\lambda)\exp(-\xi),\,\,w^{*}(t,\lambda) = 1 + \sum_1^ {\infty}w_i^{*}(t)\lambda^{-i}$, and one has equations \begin{equation} L\psi = \lambda\psi;\, \ \,\partial_n\psi = B_n\psi;\, \ \,L^{*}\psi^{*} = \lambda\psi^{*};\, \ \,\partial_n\psi^{*} = -B_n^{*}\psi^{*} \ . \label{AF} \end{equation} Note that the KP hierarchy (\ref{AC}) is then given by the compatibility conditions among these equations with an iso-spectral property. 
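As an illustration of the generalized Leibnitz rule (\ref{AB}), take $i = -1$: since ${-1 \choose j} = (-1)^j$, one has $\partial^{-1}f = \sum_0^{\infty}(-1)^j(\partial^jf)\partial^{-1-j}$. Applying both sides to the test function $e^{ax}$ (on which $\partial^{-n}$ acts as $a^{-n}$) turns the operator identity into a statement about antiderivatives; a short symbolic check for a polynomial coefficient, where the series terminates (our illustration, not part of the original text):

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
f = x**2   # polynomial coefficient, so the Leibnitz series terminates

# partial^{-1} f = f p^{-1} - f' p^{-2} + f'' p^{-3} - ... applied to exp(a*x),
# where p^{-n} exp(a*x) = a^{-n} exp(a*x)
lhs = sum((-1)**j * sp.diff(f, x, j) / a**(j + 1) for j in range(3)) * sp.exp(a*x)

# lhs must then be an antiderivative of f*exp(a*x)
assert sp.simplify(sp.diff(lhs, x) - f*sp.exp(a*x)) == 0
print("Leibnitz rule for the i = -1 case verified on f = x**2")
```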
Next one has the fundamental tau function $\tau(t)$ and vertex operators ${\bf X},\,\,{\bf X}^{*}$ satisfying \begin{equation} \psi(t,\lambda) = \frac{{\bf X}(\lambda)\tau (t)}{\tau (t)} = \frac{e^{\xi} G_{-}(\lambda)\tau (t)}{\tau (t)} = \frac{e^{\xi}\tau(t-[\lambda^{-1}])} {\tau (t)}; \label{AG} \end{equation} $$\psi^{*}(t,\lambda) = \frac{{\bf X}^{*}(\lambda)\tau (t)} {\tau (t)} = \frac{e^{-\xi} G_{+}(\lambda)\tau (t)}{\tau (t)} = \frac{e^{-\xi}\tau(t+[\lambda^{-1}])} {\tau (t)} \ ,$$ where $G_{\pm}(\lambda) = \exp(\pm\xi(\tilde{\partial},\lambda^{-1}))$ with $\tilde{\partial} = (\partial_1,(1/2)\partial_2,(1/3)\partial_3, \cdots)$ and $t\pm[\lambda^{-1}]= (t_1\pm \lambda^{-1},t_2 \pm (1/2) \lambda^{-2}, \cdots)$. One writes also \begin{equation} e^{\xi} := \exp \left({\sum_1^{\infty}t_n\lambda^n}\right) = \sum_0^ {\infty}\chi_j(t_1, t_2, \cdots ,t_j) \lambda^j \ , \label{AH} \end{equation} where the $\chi_j$ are the elementary Schur polynomials, which arise in many important formulas (cf. below). \\[3mm]\indent We mention now the famous bilinear identity which generates the entire KP hierarchy. This has the form \begin{equation} \oint_{\infty}\psi(t,\lambda)\psi^{*}(t',\lambda)d\lambda = 0 \label{AI} \end{equation} where $\oint_{\infty}(\cdot)d\lambda$ is the residue integral about $\infty$, which we also denote $Res_{\lambda}[(\cdot)d\lambda]$. Using (\ref{AG}) this can also be written in terms of tau functions as \begin{equation} \oint_{\infty}\tau(t-[\lambda^{-1}])\tau(t'+[\lambda^{-1}]) e^{\xi(t,\lambda)-\xi(t',\lambda)}d\lambda = 0 \ . 
\label{AJ} \end{equation} This leads to the characterization of the tau function in bilinear form expressed via ($t\to t-y,\,\,t'\to t+y$) \begin{equation} \left(\sum_0^{\infty}\chi_n(-2y)\chi_{n+1}(\tilde D)e^{\sum_1^{\infty} y_i D_i}\right)\tau\,\cdot\,\tau = 0 \ , \label{AK} \end{equation} where $D_i$ is the Hirota derivative defined as $D^m_j a\,\cdot\,b = (\partial^m/\partial s_j^m) a(t_j+s_j)b(t_j-s_j)|_{s=0}$ and $\tilde D = (D_1,(1/2) D_2,(1/3)D_3,\cdots)$. In particular, we have from the coefficients of $y_n$ in (\ref{AK}), \begin{equation} \label{hirota} D_1D_n\tau \cdot \tau = 2 \chi_{n+1} (\tilde D) \tau \cdot \tau \ , \end{equation} which are called the Hirota bilinear equations. Such calculations with vertex operator equations and residues, in the context of finite zone situations where the tau function is intimately related to theta functions, also led historically to the Fay trisecant identity, which can be expressed generally as the Fay identity via (cf. \cite{aa,ce}; c.p. means cyclic permutations) \begin{equation} \sum_{c.p.}(s_0-s_1)(s_2-s_3)\tau(t+[s_0]+[s_1])\tau(t+[s_2]+[s_3]) = 0 \ . \label{ZA} \end{equation} This can also be derived from the bilinear identity (\ref{AJ}). Differentiating this in $s_0$, then setting $s_0 = s_3 = 0$, then dividing by $s_1 s_2$, and finally shifting $t\to t-[s_2]$, leads to the differential Fay identity, \begin{eqnarray} \nonumber & &\tau(t)\partial\tau(t+[s_1]-[s_2]) - \tau(t+[s_1] -[s_2])\partial \tau(t) \\ & &= (s_1^{-1}-s_2^{-1}) \left[\tau(t+[s_1]-[s_2]) \tau(t) - \tau(t+[s_1])\tau(t-[s_2])\right] \ . \label{AL} \end{eqnarray} The Hirota equations (\ref{hirota}) can also be derived from (\ref{AL}) by taking the limit $s_1 \to s_2$. The identity (\ref{AL}) will play an important role later.
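The elementary Schur polynomials of (\ref{AH}) will be used repeatedly below; they are generated mechanically by expanding the exponential, e.g. $\chi_0 = 1$, $\chi_1 = t_1$, $\chi_2 = t_1^2/2 + t_2$, $\chi_3 = t_1^3/6 + t_1t_2 + t_3$. A quick symbolic check (an illustration only):

```python
import sympy as sp

z, t1, t2, t3 = sp.symbols('z t1 t2 t3')

# e^{xi} = exp(sum_n t_n z^n) = sum_j chi_j(t_1,...,t_j) z^j   (cf. (AH))
gen = sp.series(sp.exp(t1*z + t2*z**2 + t3*z**3), z, 0, 4).removeO()
chi = [gen.coeff(z, j) for j in range(4)]

assert chi[0] == 1 and chi[1] == t1
assert sp.expand(chi[2] - (t1**2/2 + t2)) == 0
assert sp.expand(chi[3] - (t1**3/6 + t1*t2 + t3)) == 0
print("chi_0..chi_3 reproduced from the generating function")
```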
\\[3mm]\indent Now for the dispersionless theory (dKP) one can think of fast and slow variables, etc., or averaging procedures, but simply one takes $t_n\to\epsilon t_n = T_n\,\,(t_1 = x\to \epsilon x = X)$ in the KP equation $u_t = (1/4)u_{xxx} + 3uu_x + (3/4)\partial^{-1}u_{yy},\,\, (y=t_2,\,\,t=t_3)$, with $\partial_n\to \epsilon\partial/\partial T_n$ and $u(t_n)\to U(T_n)$ to obtain $\partial_T U = 3UU_X + (3/4)\partial^ {-1}U_{YY}$ when $\epsilon\to 0\,\,(\partial =\partial/\partial X$ now). Thus the dispersion term $u_{xxx}$ is removed. In terms of hierarchies we write \begin{equation} L_{\epsilon} = \epsilon\partial + \sum_1^{\infty}u_{n+1}(T/\epsilon) (\epsilon\partial)^{-n} \ , \label{AM} \end{equation} and think of $u_n( T/\epsilon)= U_n(T) + O(\epsilon)$, etc. One takes then a WKB form for the wave function with the action $S$ {\cite{ka}}, \begin{equation} \psi = \exp \left[\frac{1}{\epsilon}S(T,\lambda) \right]. \label{AN} \end{equation} Replacing now $\partial_n$ by $\epsilon\partial_n$, where $\partial_n = \partial/\partial T_n$ now, we define $P := \partial S = S_X$. Then $\epsilon^i\partial^i\psi\to P^i\psi$ as $\epsilon\to 0$ and the equation $L\psi = \lambda\psi$ becomes \begin{equation} \lambda = P + \sum_1^{\infty}U_{n+1}P^{-n};\, \ \,P = \lambda - \sum_1^{\infty}P_{i+1}\lambda^{-i} \ , \label{AO} \end{equation} where the second equation is simply the inversion of the first. We also note from $\partial_n\psi = B_n\psi = \sum_0^nb_{nm}(\epsilon\partial)^m\psi$ that one obtains $\partial_n S = {\cal B}_n(P) = \lambda^n_{+}$ where the subscript (+) refers now to powers of $P$ (note $\epsilon\partial_n\psi/\psi \to \partial_n S$). Thus $B_n = L^n_{+}\to {\cal B}_n(P) = \lambda^n_{+} = \sum_0^nb_{nm}P^m$ and the KP hierarchy goes to \begin{equation} \partial_n P = \partial {\cal B}_n \ , \label{YC} \end{equation} which is the dKP hierarchy (note $\partial_n S = {\cal B}_n\Rightarrow \partial_n P = \partial{\cal B}_n$). 
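The inversion in (\ref{AO}) can be carried out order by order: substituting $P = \lambda - \sum P_{j+1}\lambda^{-j}$ back into the first relation gives $P_2 = U_2$, $P_3 = U_3$, $P_4 = U_4 + U_2^2$, and so on. A symbolic verification of these first coefficients (a sketch, truncated at the order shown):

```python
import sympy as sp

e = sp.symbols('epsilon')            # epsilon plays the role of 1/lambda
U2, U3, U4 = sp.symbols('U2 U3 U4')

lam = 1/e
# claimed inverse series: P = lambda - P2/lambda - P3/lambda^2 - P4/lambda^3 - ...
P2, P3, P4 = U2, U3, U4 + U2**2
P = lam - P2*e - P3*e**2 - P4*e**3

# substitute into lambda = P + U2/P + U3/P^2 + U4/P^3 and expand at epsilon = 0
recovered = P + U2/P + U3/P**2 + U4/P**3
residual = sp.series(recovered - lam, e, 0, 4).removeO()
assert sp.simplify(residual) == 0    # the two expansions in (AO) agree
print("P2 = U2, P3 = U3, P4 = U4 + U2**2 verified")
```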
The action $S$ in (\ref{AN}) can be computed from (\ref{AG}) in the limit $\epsilon \to 0$ as \begin{equation} \label{action} S = \sum_{1}^{\infty} T_n \lambda^n - \sum_{1}^{\infty} {\partial_mF \over m} \lambda^{-m}, \end{equation} where the function $F=F(T)$ (free energy) is defined by {\cite{tb}} \begin{equation} \label{tau} \tau = \exp \left[ {1 \over \epsilon^2} F(T) \right]. \end{equation} The formula (\ref{action}) then solves the dKP hierarchy (\ref{YC}), i.e. $P={\cal B}_1 = \partial S$ and \begin{equation} \label{B} {\cal B}_n = \partial_n S = \lambda^n - \sum_{1}^{\infty} {F_{nm} \over m} \lambda^{-m} \ , \end{equation} where $F_{nm} = \partial_n\partial_m F$, which play an important role in the theory of dKP. \\[3mm]\indent Now following \cite{tc} we write the differential Fay identity (\ref{AL}) with $\epsilon\partial_n$ replacing $\partial_n$ etc. in the form \begin{eqnarray} \label{AR} \nonumber & &\frac{\tau(T-\epsilon[\mu^{-1}]-\epsilon[\lambda^{-1}])\tau(T)} {\tau(T-\epsilon[\mu^{-1}])\tau(T-\epsilon[\lambda^{-1}])} \\ & & = 1 + \frac{\epsilon\partial}{\mu-\lambda}\left[\log\left( \tau(T- \epsilon[\mu^{-1}])\right) - \log\left(\tau(T-\epsilon[\lambda^{-1}])\right)\right] \end{eqnarray} (in (\ref{AL}) take $t\to t-[s_1],\,\,s_1=\mu^{-1},\,\,s_2=\lambda^{-1}$ and insert $\epsilon$ at the appropriate places; note $T$ is used in (\ref{AR})). One notes from (\ref{AH}) that $\exp(-\xi(\tilde{\partial},\lambda^{-1})) = \sum_0^{\infty}\chi_j(-\tilde{\partial})\lambda^{-j}$, so taking logarithms in (\ref{AR}) and using (\ref{tau}) yields \begin{eqnarray} \label{AS} \nonumber & &{1 \over \epsilon^2} \sum_{m,n=1}^{\infty}\mu^{-m} \lambda^{-n}\chi_n(-\epsilon\tilde{\partial}) \chi_m(-\epsilon\tilde{\partial})F \\ & &= \log\left[1+ {1 \over \epsilon} \sum_1^{\infty}\frac{\mu^{-n}- \lambda^{-n}}{\mu-\lambda} \chi_n(-\epsilon\tilde{\partial})\partial_XF \right] \ .
\end{eqnarray} In passing to the limit only the second-order derivatives survive, and one gets the dispersionless differential Fay identity (note that $\chi_n (-\epsilon\tilde{\partial})$ contributes here only $-\epsilon\partial_n/n$) \begin{equation} \sum_{m,n=1}^{\infty}\mu^{-m}\lambda^{-n}\frac{F_{mn}}{mn} = \log \left(1- \sum_1^{\infty}\frac{\mu^{-n}-\lambda^{-n}}{\mu-\lambda} \frac{F_{1n}}{n} \right) \ . \label{AT} \end{equation} Although (\ref{AT}) only uses a subset of the Pl\"ucker relations defining the KP hierarchy, it was shown in \cite{tc} that this subset is sufficient to determine KP; hence (\ref{AT}) characterizes the function $F$ for dKP. Following \cite{cb}, we now derive a dispersionless limit of the Hirota bilinear equations (\ref{hirota}), which we call the dispersionless Hirota equations. We first note from (\ref{action}) and (\ref{AO}) that $F_{1n} = nP_{n+1}$, so \begin{equation} \sum_1^{\infty}\lambda^{-n}\frac{F_{1n}}{n} = \sum_1^{\infty}P_{n+1} \lambda^{-n} = \lambda - P(\lambda) \ . \label{AU} \end{equation} Consequently the right side of (\ref{AT}) becomes $\log[\frac{P(\mu) - P(\lambda)}{\mu-\lambda}]$ and for $\mu\to \lambda$ with $\dot{P}:= \partial_{\lambda}P$ we have \begin{equation} \log\dot{P}(\lambda) = \sum_{m,n=1}^{\infty}\lambda^{-m-n}\frac{F_{mn}}{mn} = \sum_{j=1}^{\infty} \left(\sum_{n+m=j} {F_{mn} \over mn} \right) \lambda^{-j} \ . \label{AV} \end{equation} Then using the elementary Schur polynomials defined in (\ref{AH}) and (\ref{AO}), we obtain \begin{eqnarray} \nonumber \dot{P}(\lambda) &=& \sum_0^{\infty} \chi_j(Z_2, \cdots,Z_j) \lambda^{-j} \\ &=&1 + \sum_1^{\infty}jP_{j+1}\lambda^{-j-1} = 1 + \sum_1^{\infty}F_{1j} \lambda^{-j-1} \ , \label{AW} \end{eqnarray} where the $Z_i, \, i \ge 2$, are defined by \begin{equation} \label{SV} Z_i = \sum_{m+n=i} {F_{mn} \over mn} \ . \end{equation} Note that $Z_1=0$ is assumed for the polynomials $\chi_j$.
Thus we obtain the dispersionless Hirota equations, \begin{equation} \label{F} F_{1j} = \chi_{j+1}(Z_2, \cdots,Z_{j+1}) \ . \end{equation} These can be also derived directly from (\ref{hirota}) with (\ref{tau}) in the limit $\epsilon \to 0$ or by expanding (\ref{AV}) in powers of $\lambda^{-n}$. We list here a few entries from such an expansion (cf. \cite{cb}): \begin{eqnarray} \lambda^{-4}&:& \frac{1}{2}F^2_{11} - \frac{1}{3}F_{13} + \frac{1}{4} F_{22} = 0;\label{AX}\\ \lambda^{-5}&:& F_{11}F_{12} - \frac{1}{2}F_{14} + \frac{1}{3}F_{23} = 0; \nonumber\\ \lambda^{-6}&:& \frac{1}{3}F^3_{11} - \frac{1}{2}F^2_{12} - F_{11}F_{13} + \frac{3}{5}F_{15} - \frac{1}{9}F_{33} - \frac{1}{4}F_{24} = 0; \nonumber\\ \lambda^{-7}&:& F^2_{11}F_{12} - F_{12}F_{13}-F_{11}F_{14} + \frac{2}{3}F_{16} - \frac{1}{6}F_{34} - \frac{1}{5}F_{25} = 0; \nonumber\\ \lambda^{-8}&:& \frac{1}{4}F^4_{11} - F_{11}F_{12}^2 - F^2_{11}F_{13} + \frac{1}{2}F^2_{13} + F_{12}F_{14}+\nonumber\\ & & + F_{11}F_{15} - \frac{5}{7}F_{17} + \frac{1}{16}F_{44} + \frac{2}{15} F_{35} + \frac{1}{6}F_{26} = 0;\nonumber\\ \lambda^{-9}&:& F^3_{11}F_{12} - \frac{1}{3}F^3_{12} - 2F_{11}F_{12}F_{13} -F^2_{11}F_{14} + F_{13}F_{14} \nonumber\\ & & + F_{12}F_{15} + F_{11}F_{16} - \frac{3}{4}F_{18} + \frac{1}{10}F_{45} + \frac{1}{9}F_{36} + \frac{1}{7}F_{27} = 0;\nonumber\\ \lambda^{-10}&:& \frac{1}{5}F^5_{11}-\frac{3}{2}F^2_{11}F^2_{12} -F^3_{11}F_{13} + F^2_{12}F_{13} + F_{11}F^2_{13}\nonumber\\ & & +2F_{11}F_{12}F_{14} - \frac{1}{2}F^2_{14} + F^2_{11}F_{15} - F_{13}F_{15} -F_{12}F_{16} -F_{11}F_{17}\nonumber\\ & & +\frac{7}{9}F_{19} -\frac{1}{25}F_{55} -\frac{1}{12}F_{46} - \frac{2}{21} F_{37} - \frac{1}{8}F_{28} = 0 \ . \nonumber \end{eqnarray} These equations are discussed in various ways below and we will also show the equivalence of the dispersionless Fay differential identity with another formula of a Cauchy kernel in Section 3. 
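These identities can be tested symbolically. Anticipating the residue formula $F_{mn} = Res_P[\lambda^m d\lambda^n_+]$ of Lemma 1 below, a truncated $\lambda(P)$ gives $F_{11} = U_2$, $F_{13} = 3U_4 + 3U_2^2$, $F_{22} = 4U_4 + 2U_2^2$, and the $\lambda^{-4}$ equation vanishes identically (a sketch):

```python
import sympy as sp

P, U2, U3, U4, U5 = sp.symbols('P U2 U3 U4 U5')
lam = P + U2/P + U3/P**2 + U4/P**3 + U5/P**4   # truncated lambda(P)

def plus_part(expr):
    """Non-negative powers of P: the '+' projection."""
    return sum(t for t in sp.Add.make_args(sp.expand(expr))
               if not sp.fraction(t)[1].has(P))

def F(m, n):
    """F_{mn} = Res_P[lambda^m d(lambda^n_+)] = coefficient of P^{-1}."""
    integrand = sp.expand(lam**m * sp.diff(plus_part(lam**n), P))
    return integrand.coeff(P, -1)

assert F(1, 1) == U2
# first dispersionless Hirota equation: F11^2/2 - F13/3 + F22/4 = 0
assert sp.simplify(F(1, 1)**2/2 - F(1, 3)/3 + F(2, 2)/4) == 0
print("lambda^-4 Hirota equation verified from the residue formula")
```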
Note here that for $U = F_{11}$ the first equation in (\ref{AX}) is a dKP equation $U_T = 3UU_X + (3/4)\partial^{-1} U_{YY}$ and other equations in the hierarchy are similarly generated. \medskip It is also interesting to note that the dispersionless Hirota equations (\ref{F}) or (\ref{AX}) can be regarded as algebraic equations for ``symbols" $F_{mn}$, which are defined via (\ref{B}), i.e. \begin{equation} {\cal B}_n := \lambda^n_+= \lambda^n - \sum_1^{\infty}\frac{F_{nm}}{m} \lambda^{-m}\ . \label{AY} \end{equation} \begin{Lemma} The symbols satisfy \begin{equation} F_{nm} = F_{mn} = Res_P[\lambda^m d \lambda^n_+] \ . \label{AZ} \end{equation} \end{Lemma} \begin{Proof} One need simply observe that \begin{eqnarray} \nonumber F_{nm} &=&- Res_{\lambda}[{\cal B}_nd\lambda^m] = - Res_P[{\cal B}_nd\lambda^m] \\ \nonumber &=& - Res_P[\lambda^n_{+}d\lambda^m_{-}] =- Res_P[\lambda^nd\lambda^m_{-}] \\ &=& Res_P[\lambda^nd\lambda^m_{+}] = Res_P[\lambda^md\lambda^n_{+}] \ = \ F_{mn}. \label{BA} \end{eqnarray} Here we have used $\lambda^m_{-} = \lambda^m - \lambda^m_{+}$ and $Res[d(ab)] = 0 = Res[da\,b + a\,db]$ for pseudo-differential or formal Laurent expansions $a$ and $b$. \end{Proof} \medskip \noindent Thus we have: \begin{Theorem} For $\lambda,\,\,P$ given algebraically as in (\ref{AO}), with no a priori connection to dKP, and for ${\cal B}_n$ defined as in the last equation of (\ref{AY}) via a formal collection of the symbols with two indices $F_{mn}$, it follows that the dispersionless Hirota equations (\ref{F}) or (\ref{AX}) are nothing but the polynomial identities among $F_{mn}$. \end{Theorem} In the next section we give a direct proof of this fact, that is, the $F_{mn}$ defined in (\ref{AY}) satisfy the dispersionless Hirota equations, which we designate by (\ref{F}) in what follows. \medskip Now one very natural way of developing dKP begins with (\ref{AO}) and (\ref{YC}) since eventually the $P_{j+1}$ can serve as universal coordinates (cf. 
here \cite{af} for a discussion of this in connection with topological field theory = TFT). This point of view is also natural in terms of developing a Hamilton-Jacobi theory involving ideas from the hodograph $-$ Riemann invariant approach (cf. \cite{ca,gf,ka,kc} and (\ref{YE}) below) and in connecting NKdV ideas to TFT, strings, and quantum gravity (cf. \cite{cj} for a survey of this). It is natural here to work with $Q_n := (1/n){\cal B}_n$ and note that $\partial_n S = {\cal B}_n$ corresponds to $\partial_n P = \partial {\cal B}_n = n\partial Q_n$. In this connection one often uses different time variables, say $T'_n = nT_n$, so that $\partial'_nP = \partial Q_n$, and $G_{mn} = F_{mn}/mn$ is used in place of $F_{mn}$. Here however we will retain the $T_n$ notation with $\partial_n S = nQ_n$ and $\partial_n P = n\partial Q_n$ since one will be connecting a number of formulas to standard KP notation. Now given (\ref{AO}) and (\ref{YC}) the equation $\partial_n P = n\partial Q_n$ corresponds to Benney's moment equations and is equivalent to a system of Hamiltonian equations defining the dKP hierarchy (cf. \cite{ca,kc} and remarks after (\ref{YC}); the Hamilton-Jacobi equations are $\partial_n S = nQ_n$ with Hamiltonians $nQ_n(X, P=\partial S)$). \medskip We now have an important formula for the functions $Q_n$: \begin{Proposition} \cite{kc} The generating function of $\partial_P Q_n(\lambda)$ is given by \begin{equation} \frac{1}{P(\mu) - P(\lambda)} = \sum_1^{\infty}\partial_P Q_n(\lambda) \mu^{-n} \ . \label{BF} \end{equation} \end{Proposition} \begin{Proof} Multiplying (\ref{AO}) by $\lambda^{n-1}\partial_P\lambda$, we have \begin{equation} \lambda^n\partial_P\lambda = P(\lambda)\lambda^{n-1}\partial_P\lambda + P_2\lambda^{n-2}\partial_P \lambda + \cdots \ . 
\label{BG} \end{equation} Taking the polynomial part leads to a recurrence relation \begin{equation} \partial_PQ_{n+1}(\lambda) = P(\lambda)\partial_P Q_n (\lambda)+ P_2\partial_P Q_{n-1}(\lambda) + \cdots + P_n\partial_P Q_1(\lambda) \ . \label{BH} \end{equation} Then noting $\partial_PQ_1=1, Q_0=1$, and summing up (\ref{BH}) as follows, we obtain \begin{eqnarray} \label{sum} \nonumber & &\sum_{n=0}^{\infty} \left( \partial_P Q_{n+1}(\lambda) - P(\lambda) \partial_P Q_n(\lambda) - \sum_{i+j=n} P_{j+1} \partial_P Q_i(\lambda) \right)\mu^{-n} \\ & & = \left( \mu - P(\lambda) - \sum_1^{\infty} P_{j+1}\mu^{-j} \right) \sum_1^{\infty} \partial_P Q_i(\lambda) \mu^{-i} = 1 \ , \end{eqnarray} which is just (\ref{BF}). \end{Proof} \medskip This is a very important kernel formula which will come up in various ways in what follows. In particular we note \begin{equation} \oint_{\infty} {\mu^n \over P(\mu) - P(\lambda)} d\mu = \partial_P Q_{n+1}(\lambda) \ , \label{canonicalT} \end{equation} which gives a key formula in the Hamilton-Jacobi method for the dKP \cite{kc}. Thus it represents a Cauchy kernel and has a version on Riemann surfaces related to the prime form (cf. \cite{kd}). In fact the kernel is a dispersionless limit of the Fay prime form. Also note here that the function $P(\lambda)$ alone provides all the information necessary for the dKP theory. This will be discussed further in the next section. \section{Solutions and variations} \renewcommand{\theequation}{3.\arabic{equation}}\setcounter{equation}{0} We have already discussed solutions of the dispersionless Hirota equations (\ref{F}) briefly in Theorem 1, but we go now to some different points of view. First let us prove: \begin{Theorem} The kernel formula (\ref{BF}) is equivalent to the dispersionless differential Fay identity (\ref{AT}). \end{Theorem} This will follow from the following lemma. \begin{Lemma} Let $\chi_n(Q_1,\cdots,Q_n)$ denote the elementary Schur polynomials defined as in (\ref{AH}). 
Then \begin{equation} \label{ShP} \partial_P Q_n = \chi_{n-1}(Q_1,\cdots,Q_{n-1}) \ . \end{equation} \end{Lemma} \begin{Proof} We integrate the kernel formula (\ref{BF}) with respect to $P(k)$, and normalize at $\lambda = \infty$ to obtain \begin{equation} \frac{1}{P(\lambda) - P(k)} = \frac{1}{\lambda}\exp \left({\sum_1^{\infty} Q_n(k)\lambda^{-n}}\right) = \sum_1^{\infty}\chi_{m-1}(Q_1,\cdots,Q_{m-1}) \lambda^{-m} \ . \label{BI} \end{equation} Equating this to (\ref{BF}) gives the result. \end{Proof} \medskip {\it Proof of Theorem 2}. $ \ \ $ Using (\ref{B}), the left hand side of (\ref{AT}) can be written as \begin{eqnarray} \label{BJ} \nonumber LHS &=& \sum_1^{\infty}\frac{1}{n\lambda^n} \left[{\sum_1^{\infty}\frac{F_{mn}} {m\mu^m}}\right] = \sum_1^{\infty}\frac{1}{n\lambda^n} [\mu^n - {\cal B}_n(\mu)] \\ &=& \sum_1^{\infty}\frac{1}{n} \left({\frac{\mu}{\lambda} }\right)^n - \sum_1^{\infty} \frac{1}{n\lambda^n}{\cal B}_n(\mu) = -\log \left({1-\frac{\mu}{\lambda} }\right) - \sum_1^{\infty}\frac{Q_n(\mu)}{\lambda^n} \ . \end{eqnarray} On the other hand the right hand side of (\ref{AT}) is calculated as indicated in (\ref{AU}) and remarks thereafter. Thus \begin{eqnarray} \label{BK} \nonumber RHS &=& \log \left[{1 - \frac{1}{\lambda - \mu}\sum_1^{\infty} \left({\frac{1}{\lambda^n} - \frac{1}{\mu^n}}\right)\frac{F_{1n}}{n}}\right] \\ \nonumber &=& -\log(\lambda - \mu) + \log \left[{ \left({\lambda - \sum_1^{\infty}\frac{F_{1n}} {n\lambda^n}} \right) - \left({\mu - \sum_1^{\infty}\frac{F_{1n}}{n\mu^n}}\right)} \right] \\ &=& -\log(\lambda - \mu) + \log [P(\lambda) - P(\mu)] \ . \end{eqnarray} This implies \begin{equation} \log[P(\lambda) - P(\mu)] = \log\lambda -\sum_1^{\infty}{Q_n(\mu)} {\lambda^{-n}} \ , \label{BL} \end{equation} which leads to (\ref{BI}). Then Lemma 2 implies the assertion. 
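The kernel formula (\ref{BF}) can also be checked to finite order: with $\lambda(P)$ truncated and the inverse-series coefficients $P_2 = U_2$, $P_3 = U_3$, $P_4 = U_4 + U_2^2$ from (\ref{AO}), the expansion of $1/(P(\mu)-P(\lambda))$ in $\mu^{-1}$ reproduces $\partial_PQ_n(\lambda)$ (a symbolic sketch):

```python
import sympy as sp

P, mu, e = sp.symbols('P mu epsilon')
U2, U3, U4 = sp.symbols('U2 U3 U4')
lam = P + U2/P + U3/P**2 + U4/P**3             # truncated lambda(P)

def plus_part(expr):
    return sum(t for t in sp.Add.make_args(sp.expand(expr))
               if not sp.fraction(t)[1].has(P))

dQ = {n: sp.diff(plus_part(lam**n)/n, P) for n in range(1, 6)}

# P(mu) from the inverse series in (AO): P2 = U2, P3 = U3, P4 = U4 + U2^2
Pmu = mu - U2/mu - U3/mu**2 - (U4 + U2**2)/mu**3

# expand the Cauchy kernel 1/(P(mu) - P(lambda)) in powers of 1/mu
kernel = sp.series((1/(Pmu - P)).subs(mu, 1/e), e, 0, 6).removeO()
for n in range(1, 6):
    assert sp.expand(kernel.coeff(e, n) - dQ[n]) == 0
print("kernel formula (BF) verified through order mu^-5")
```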
\medskip \noindent Theorem 2 implies that the dispersionless Hirota equations (\ref{F}) can be derived from the kernel formula (\ref{BF}) which is a direct consequence of the definition of $F_{mn}$ in (\ref{AY}). Thus Theorem 2 gives a direct proof of Theorem 1. \medskip We will now express the $\chi_n(Q_1,\cdots,Q_n)$ as polynomials in $Q_1 = P$ with the coefficients given by polynomials of $P_{j+1}$. First we have: \begin{Lemma} One can write \begin{equation} \chi_n = det \left[ \begin{array}{ccccccc} P & -1 & 0 & 0 & 0 & \cdots & 0\\ P_2 & P & -1 & 0 & 0 & \cdots & 0\\ P_3 & P_2 & P & -1 & 0 & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ P_n & P_{n-1} & \cdots & P_4 & P_3 & P_2 & P \end{array} \right] = \partial_P Q_{n+1} \ {}. \label{BM} \end{equation} \end{Lemma} \begin{Proof} In terms of $\chi_n(Q_1,\cdots,Q_n) = \partial_P Q_{n+1}$ the recursion relation (\ref{BH}) can be written in the form $\chi_n = P\chi_{n-1} + P_2\chi_{n-2} + \cdots + P_n\chi_0$. It is easy to derive the determinant expression from this form. \end{Proof} \medskip This leads to a rather evident fact, which we express as a proposition because of its importance. Thus: \begin{Proposition} The $F_{mn}$ can be expressed as polynomials in $P_{j+1} = F_{1j}/j$. \end{Proposition} \begin{Proof} $\partial_{\lambda}Q_{n} = \chi_{n-1}(Q_1;P_2,\cdots,P_{n-1}) \partial_{\lambda}Q_1 \,\,(Q_1 = P)$. Put this together with $Q_n= {\cal B}_n/n$ in (\ref{B}) and $P $ in (\ref{AO}) to arrive at the conclusion (cf. also Remark 1). \end{Proof} \begin{Corollary} The dispersionless Hirota equations can be solved totally algebraically via $F_{mn} = \Phi_{mn}(P_2,P_3,\cdots,P_{m+n})$ where $\Phi_{mn}$ is a polynomial in the $P_{j+1}$. Thus the $F_{1n} = nP_{n+1}$ are generating elements for the $F_{mn}$. \end{Corollary} This is of course evident from the dispersionless differential Fay identity (\ref{AT}). 
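The determinant formula (\ref{BM}) is easily tested against $Q_{n+1} = \lambda^{n+1}_+/(n+1)$ computed directly from a truncated $\lambda(P)$, with $P_2, P_3, P_4$ the inverse-series coefficients from (\ref{AO}) (a symbolic sketch):

```python
import sympy as sp

P, U2, U3, U4 = sp.symbols('P U2 U3 U4')
lam = P + U2/P + U3/P**2 + U4/P**3             # truncated lambda(P)

def plus_part(expr):
    return sum(t for t in sp.Add.make_args(sp.expand(expr))
               if not sp.fraction(t)[1].has(P))

Q = {n: plus_part(lam**n)/n for n in range(2, 6)}
c = [P, U2, U3, U4 + U2**2]                    # [Q1, P2, P3, P4]

def chi_det(n):
    """the n x n banded determinant of (BM)"""
    M = sp.zeros(n, n)
    for i in range(n):
        for j in range(n):
            if j == i + 1:
                M[i, j] = -1
            elif j <= i:
                M[i, j] = c[i - j]
    return M.det()

for n in range(1, 5):
    assert sp.expand(chi_det(n) - sp.diff(Q[n + 1], P)) == 0
print("chi_n = dQ_{n+1}/dP confirmed for n = 1,...,4")
```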
The point here however is to give explicit formulas for $F_{mn}$ in terms of the elementary Schur polynomials $\chi_n$ in (\ref{BM}). \medskip \noindent {\bf Remark 1} $\,\,$ One can also arrive at this polynomial dependence via the residue formulas of Lemma 1. As an adjunct to the proof of Proposition 2 we note now the following explicit calculations. Recall that $Q_1 = P = \lambda - \sum_1^{\infty}P_{j+1}\lambda^{-j}$ and generally $Q_n = \lambda^n/n - \sum_1^{\infty}G_{jn}\lambda^{-j} \ (G_{jn} :=F_{jn}/jn)$. Thus from $Q_2 = \lambda^2_{+}/2 = P^2/2 + P_2$ we get \begin{equation} \frac{1}{2}\left(\lambda - \sum_1^{\infty}\frac{P_{j+1}}{\lambda^j}\right)^2 + P_2 =\frac{\lambda^2}{2} -\sum_1^{\infty}\frac{G_{2j}}{\lambda^j} \ . \label{YY} \end{equation} Writing this out yields \begin{equation} G_{2m} = P_{2+m} -\frac{1}{2}\sum_{j+k=m}P_{j+1}P_{k+1} \ . \label{YB} \end{equation} This process can be continued with $Q_3 = \lambda^3_{+}/3 = P^3/3 + P_2P + P_3$, etc. \medskip It is also interesting to note the following relations: \begin{Proposition} For any $n \ge 2$, \begin{equation} \label{SchurP} \chi_n(-Q_1,\cdots,-Q_n) = -P_n . \end{equation} \end{Proposition} \begin{Proof} From (\ref{AH}) we obtain \begin{equation} \left(\sum_0^{\infty}\frac{\chi_n(Q)}{\lambda^n}\right) \left(\sum_0^{\infty}\frac{\chi_m(-Q)}{\lambda^m}\right) = \sum_0^{\infty}\frac{1}{\lambda^p}\sum_0^p\chi_{p-k}(Q)\chi_k(-Q) = 1 \ . \label{BN} \end{equation} Hence $\sum_0^p\chi_{p-k}(Q)\chi_k(-Q) = 0$ for $p\geq 1$, which implies $\chi_k(-Q) = -P_k$ for $k\geq 2$ from the recursion relation (\ref{BH}). \end{Proof} \medskip \noindent Note from $P_{n+1}=F_{1n}/n$ that the equations (\ref{SchurP}) give another representation of the dispersionless Hirota equations (\ref{F}).
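Proposition 3 can likewise be confirmed to low order from a truncated $\lambda(P)$: $\chi_2(-Q) = -U_2$, $\chi_3(-Q) = -U_3$, $\chi_4(-Q) = -(U_4 + U_2^2)$ (a symbolic sketch):

```python
import sympy as sp

P, z = sp.symbols('P z')
U2, U3, U4 = sp.symbols('U2 U3 U4')
lam = P + U2/P + U3/P**2 + U4/P**3             # truncated lambda(P)

def plus_part(expr):
    return sum(t for t in sp.Add.make_args(sp.expand(expr))
               if not sp.fraction(t)[1].has(P))

Q = {n: plus_part(lam**n)/n for n in range(1, 5)}   # Q_n = lambda^n_+ / n
Pn = {2: U2, 3: U3, 4: U4 + U2**2}                  # inverse-series coefficients

# chi_n(-Q_1,...,-Q_n) read off from the generating function exp(-sum Q_i z^i)
gen = sp.series(sp.exp(-sum(Q[i]*z**i for i in range(1, 5))), z, 0, 5).removeO()
for n in range(2, 5):
    assert sp.simplify(gen.coeff(z, n) + Pn[n]) == 0   # chi_n(-Q) = -P_n
print("chi_n(-Q_1,...,-Q_n) = -P_n verified for n = 2, 3, 4")
```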
\medskip \noindent {\bf Remark 2} $\,\,$ Formulas such as (\ref{BM}) and the statement in Proposition 2 indicate that in fact dKP theory can be characterized using only elementary Schur polynomials since these provide all the information necessary for the kernel (\ref{BF}) or equivalently for the dispersionless differential Fay identity. This amounts also to observing that in the passage from KP to dKP only certain Schur polynomials survive the limiting process $\epsilon\to 0$. Such terms involve second derivatives of $F$ and these may be characterized in terms of Young diagrams with only vertical or horizontal boxes. This is also related to the explicit form of the hodograph transformation where one needs only $\partial_P Q_n = \chi_{n-1}(Q_1,\cdots,Q_{n-1})$ and the $P_{j+1}$ in the expansion of $P$ (cf. here (\ref{YE})). \section{Connections to D-bar} \renewcommand{\theequation}{4.\arabic{equation}}\setcounter{equation}{0} It was shown in \cite{ca,cb,gb,za} how inverse scattering information is connected to the dispersionless theory for KdV and some other situations (Benney equations and vector nonlinear Schr\"odinger equations for example). We will see here that although such connections seem generally not to be expected, nevertheless, one can isolate D-bar data for $S$ and $P$ in the dispersionless NKdV situations leading to an expression for the generating elements $F_{1n}=nP_{n+1}$ which can be useful in computation. The technique also indicates another role for the Cauchy type kernel $1/(P(\lambda) -P(\mu))$ in (\ref{BF}). Thus as background consider KdV and the scattering problem in the standard form (cf. \cite{ce}) \begin{equation} u_t = u''' -6uu';\, \ \ \, \psi'' - u\psi = -k^2\psi \ . \label{CA} \end{equation} Let $\psi_{\pm}(k,x) \sim \exp(\pm ikx)$ as $x\to \pm\infty$ be Jost solutions with scattering data, $T$ and $R$, determined via \begin{equation} \label{TR} T(k)\psi_{-}(k,x) = R(k)\psi_{+}(k,x) + \psi_{+}(-k,x) \ . 
\end{equation} Setting $\psi_{-} = \exp(-ikx + \phi(k,x))$ we have from (\ref{CA}) \begin{equation} v := - \partial \log (\psi_-) =ik - \phi'; \, \, \ \phi'' - 2ik\phi' + \phi'^2 = u \ . \label{CB} \end{equation} One looks for expansions, \begin{equation} \phi'= \sum_1^{\infty}{\phi_n \over (ik)^n};\, \ \ \, v= ik + \sum_1^{\infty}{v_n \over (ik)^n} \ , \label{CC} \end{equation} entailing $\phi_n = -v_n$. Here for example one can assume $\int_{-\infty}^{\infty}(1 + x^2)|u|dx < \infty$ or $u\in {\cal S}$ = Schwartz space for convenience, so that all of the inverse scattering machinery applies. Then $T(k)$ will be meromorphic for $Im(k) > 0$ with (possibly) a finite number of simple poles at $k_j = i\beta_j\,\,(\beta_j > 0)$; moreover $|R(k)|$ is small for large $|k|,\,\,k\in {\bf R}$, and $\log(T)=\sum_0^{\infty}c_{2n+1}/k^{2n+1}$, where the $c_n$ can be written in terms of the $\beta_j$ and the normalization of the wave functions $\psi_{\pm}$. For $x\to\infty$ and $Im(k) > 0$ one has $\psi_{-} \exp(ikx) \to 1/T$ from (\ref{TR}), and taking logarithms gives $\phi(k,\infty) = -\log(T)$, which implies \begin{equation} -\log(T) = \sum_1^{\infty}\frac{1}{(ik)^n}\int_{-\infty}^{\infty} \phi_n dx \label{CD} \end{equation} leading to $\int_{-\infty}^{\infty} \phi_{2m} dx = 0$ with $ i^{2m}c_{2m+1} = \int_{-\infty}^{\infty}\phi_{2m+1} dx$. Hence the scattering information ($\sim c_{2n+1}$, arising from $x$-asymptotics, given via $R$ and $T$) is related to the $k$-asymptotics $\phi_n$ of the wave function $\psi_{-}$.
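Substituting the expansion (\ref{CC}) into the Riccati equation (\ref{CB}) and matching powers of $(ik)^{-1}$ gives the standard recursion $\phi_1 = -u/2$ and $\phi_{m+1} = (\phi_m' + \sum_{j+k=m}\phi_j\phi_k)/2$, which can be run exactly with a small differential-polynomial arithmetic. The Python sketch below (our names and encoding, not from the text) also illustrates why $\int\phi_{2m}\,dx = 0$: the even coefficients come out as total $x$-derivatives.

```python
from fractions import Fraction

# Differential polynomials in u: a monomial is a sorted tuple of derivative
# orders, e.g. (0, 1) stands for u*u'; an expression maps monomials to coefficients.
def add(a, b):
    out = dict(a)
    for m, c in b.items():
        out[m] = out.get(m, Fraction(0)) + c
    return {m: c for m, c in out.items() if c}

def mul(a, b):
    out = {}
    for m1, c1 in a.items():
        for m2, c2 in b.items():
            m = tuple(sorted(m1 + m2))
            out[m] = out.get(m, Fraction(0)) + c1 * c2
    return {m: c for m, c in out.items() if c}

def D(a):
    # total x-derivative: product rule, u^{(d)} -> u^{(d+1)}
    out = {}
    for m, c in a.items():
        for i in range(len(m)):
            mm = tuple(sorted(m[:i] + (m[i] + 1,) + m[i + 1:]))
            out[mm] = out.get(mm, Fraction(0)) + c
    return {m: c for m, c in out.items() if c}

# phi_1 = -u/2 and phi_{m+1} = (phi_m' + sum_{j+k=m} phi_j phi_k)/2
phi = {1: {(0,): Fraction(-1, 2)}}
for m in range(1, 4):
    s = D(phi[m])
    for j in range(1, m):
        s = add(s, mul(phi[j], phi[m - j]))
    phi[m + 1] = {mo: c / 2 for mo, c in s.items()}

assert phi[2] == {(1,): Fraction(-1, 4)}                          # -u'/4
assert phi[3] == {(0, 0): Fraction(1, 8), (2,): Fraction(-1, 8)}  # (u^2 - u'')/8
# phi_4 = u u'/4 - u'''/16 = d/dx (u^2/8 - u''/16): a total derivative
assert phi[4] == {(0, 1): Fraction(1, 4), (3,): Fraction(-1, 16)}
```

Here $\phi_2$ and $\phi_4$ are total derivatives, consistent with $\int_{-\infty}^{\infty}\phi_{2m}\,dx = 0$, while $\phi_3 \sim u^2 - u''$ reproduces a familiar KdV conserved density.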
This kind of connection between space asymptotics (or spectral data - which is generated by these asymptotic conditions and forced by boundary conditions on $u$ here) and the spectral asymptotics of the wave function is generally more complicated and we refer to a formula in the Appendix to \cite{ca} describing the Davey-Stewartson (DS) situation \begin{equation} -2\pi i\chi_{n+1} = \int_{\Omega}\int\zeta^n\bar{\partial}_{\zeta} \chi \ d\zeta \wedge d\bar{\zeta} \ , \label{CE} \end{equation} where $\bar{\partial}_{\zeta}\chi$ is D-bar data and the wave function $\psi$ has the form $\psi = \chi \exp(\Xi)$ with $\chi = 1 + \sum_1^{\infty}\chi_j\lambda^{-j}$ for $|\lambda|$ large (this is in a matrix form). The potentials in (\ref{CE}) occur in $\chi_1$. The D-bar data, or departure from analyticity of $\chi$, corresponds to spectral data in a sometimes complicated way and we refer to \cite{ab,bd,ca,cf,cl,fb,gh,kg,lb,ma} for more on this. \\[3mm]\indent Now one expects some connections between inverse scattering for KdV and the dKdV theory since $\epsilon x = X$ means e.g. $x\to \pm \infty\sim \epsilon\to 0$ for $X$ fixed. From (\ref{TR}), we have \begin{equation} {1 \over T }= \frac{1}{2ik}W(\psi_{+},\psi_{-}) := \frac{1}{2ik} (\psi'_{+}\psi_{-} - \psi_{+}\psi'_{-}) \ . \label{CF} \end{equation} Given $\psi_{+}\sim\psi = \exp(S/\epsilon)$ and $\psi_{-}\sim \psi^{*}= \exp(-S/\epsilon)$ (cf. \cite{ca,cb}) one sees that $W(\psi_+,\psi_-) \to 2\partial S/\partial X = 2P$, and from (\ref{AO}) with $\lambda=ik$, \begin{equation} \label{T} {1 \over T} = {P \over ik} = 1 - \sum_1^{\infty} {P_{n+1} \over (ik)^{n+1}} \ . \end{equation} Now $\log|T|$ and $\arg(R/T)$ have natural roles as action-angle variables (cf. \cite{cb}) and from $R = W(\psi_{-}(k,x),\psi_{+}(-k,x))/ W(\psi_{+}, \psi_{-})$ one obtains \begin{equation} R = -\left({P(-k) + P(k) \over 2P(k)}\right) \exp\left({\frac{S(k) - S(-k)}{\epsilon}} \right) \ .
\label{CG} \end{equation} (We have suppressed the slow variables $T_n$.) Thus the scattering data, which are considered to be averaged over the fast variables, now depend on the slow variables (in particular, see (\ref{T}) and (\ref{CG})). This corresponds to a dynamics of the Riemann surface determined by the scattering problem (the Whitham averaging approach). The exponential terms in (\ref{CG}) could in principle give problems here, so we consider this in the spirit of \cite{ca,cb,gb}. Thus consider $\psi_{-} = \exp (-ikx + \phi)$ with $\phi(x) = \int^x_{-\infty}\phi'(\xi)d\xi$. Now $\psi_{-}$ is going to have the form $\psi_{-}= \exp(-S(X,k)/ \epsilon)$ and we can legitimately expect $-S/{\epsilon} = -ikX/\epsilon + \phi(X/{\epsilon},k)$. For this to make sense let us write (note $\epsilon x = X,\,\,\epsilon \xi = \Xi$) \begin{equation} \phi(X/{\epsilon},k) = \int_{-\infty}^{X/{\epsilon}} \phi'(\xi)d\xi = \frac{1}{\epsilon}\int_{-\infty}^X\phi'(\Xi/\epsilon) d\Xi = \frac{1}{\epsilon}(ikX - S(X,k)) \ . \label{CH} \end{equation} Thus for $\phi(X/\epsilon,k)$ to equal ${\epsilon}^{-1}(ikX - S(X,k))$ we must have \begin{equation} \int^X_{-\infty}\phi'(\Xi/\epsilon)d\Xi = ikX - S(X,k) \ , \label{CI} \end{equation} where $\phi'=\partial\phi/\partial x$. This says that $\phi'(\Xi/\epsilon) = f(\Xi/\epsilon) = \tilde{f}(\Xi) + O(\epsilon)$, which is reasonable in the same spirit as the earlier assumption $u_n(T_j/\epsilon) = U_n(T_j) + O(\epsilon)$. Thus we have \begin{equation} P = {\partial S \over \partial X} = ik - \phi' = v; \ \ \, P^2 - U = -k^2 \ , \label{CJ} \end{equation} which corresponds to a dispersionless form of (\ref{CB}) in new variables. Note here that $\phi'' = \epsilon \partial \phi'/\partial X \to 0$ as $\epsilon \to 0$. Thus we have: \begin{Proposition} The manipulation of variables $x,\,X$ in (\ref{CH}) and (\ref{CI}) is consistent and mandatory.
It shows that scattering information (expressed via $\phi$ for example) is related to the dispersionless quantity $S$ or $P$. \end{Proposition} Further we have isolated D-bar data ($\bar{\partial}_kS$) for $S$ since for $P = \sqrt{U - k^2}$ one can write for arbitrary $X_0$ (cf. (\ref{CI})) \begin{equation} ikX - S(X,k) = \int^X_{-\infty}\left(ik - P(\Xi,k)\right) \ d\Xi \ . \label{CK} \end{equation} Replace $-\infty$ by $X_0$ for large negative $X_0$ to get $S = \int^X_{X_0} P\,d\Xi + ikX_0$. Thus one expects a pole for $|k|\to\infty$ plus D-bar data along $(-\sqrt{U},\sqrt{U})$. This is in the spirit of \cite{gb,gf} where it is phrased differently. Observe that the expression $P = \sqrt{U - k^2}$ places us on a two-sheeted Riemann surface with a cut along $(-\sqrt{U},\sqrt{U})$ in the $k$-plane. Going around, say, $\sqrt{U}$ on the $+$ sheet one has on opposite sides of the cut $P_{+} = i\sqrt{k^2 - U}$ and $P_{-} = -i\sqrt{k^2 - U}$, so $P_{+} - P_{-} = \Delta P = 2 i\sqrt{k^2 - U}$, which corresponds to D-bar data. This then leads to \begin{equation} \Delta S = S_{+} - S_{-} = \int^X_{X_0} \Delta P d\Xi = 2\int^X_{X_0}\sqrt{U(\Xi)-k^2} \ d\Xi = 2Re S_{+} \ . \label{CL} \end{equation} Note here that $P_{\pm}$ is real on the cut with $P_{-} = -P_{+}$ (cf. Remark 3 for some general comments). \\[2mm]\indent Consider next the situation of \cite{ca,gf,kc} with a reduction of dKP to \begin{equation} \Lambda:=\lambda^N = P^N + a_0P^{N-2} + \cdots + a_{N-2} \ , \label{CM} \end{equation} which is called the dNKdV reduction. Assume $\Lambda(P)$ has $N$ distinct real zeros $P_1 > P_2 > \cdots > P_N$ with $N-1$ interwoven turning points $\Lambda_k = \Lambda(\tilde{P}_k), \,\, P_{k+1} > \tilde{P}_k > P_k$. Assume, as $X\to -\infty$, these Riemann invariants $\Lambda_k\to 0$ monotonically so $\Lambda\to P^N$ with $(P - \lambda)\to 0$.
Such situations are considered in \cite{gf} and techniques from \cite{kc} are adapted to produce formulas of the type ($W_m = \partial_P{\cal B}_m$) \begin{eqnarray} \label{CN1} \nonumber & &S(X,\mu) - P(X,\mu)X - {\cal B}_m(P(X,\mu),X)T_m \\ & &= -{1 \over 2\pi i}\oint_{\Gamma}\frac{S(X,\lambda) \partial_{\lambda}P(X,\lambda)}{P(X,\lambda) - P(X,\mu)}d\lambda \ , \end{eqnarray} \begin{eqnarray} \label{CN} \nonumber & & \frac{\partial S}{\partial \mu} - \frac{\partial P} {\partial\mu}\left(X + W_m(P,X)T_m\right) = -{1 \over 2\pi i}\frac{\partial P} {\partial\mu}\oint_{\Gamma}\frac{S(X,\lambda)\partial_{\lambda}P(X,\lambda)} {(P(X,\lambda)-P(X,\mu))^2} d\lambda \\ & &= -{\partial_{\mu}P \over 2\pi i}\oint_{\Gamma} \frac{\partial_{\lambda}S(X,\lambda)} {P(X,\lambda) - P(X,\mu)}d\lambda = \frac{\partial_{\mu}P}{\pi} \int_{\Gamma_{+}}\frac{\partial_{\lambda}Im S(X,\lambda)}{P(X,\lambda) - P(X,\mu)}d\lambda \ . \end{eqnarray} We see here the emergence of the Cauchy type kernel from (\ref{BF}) in an important role. At $\lambda_k:=\lambda(\tilde P_k)$ where $S_{\mu}$ is bounded and $P_{\mu}\to \infty$ (\ref{CN}) yields Tsarev type generalized hodograph formulas (cf. \cite{gf}). Thus such a formula is \begin{equation} X + W_m( \tilde P_k(X),X)T_m = -\frac{1}{\pi}\int_{\Gamma_{+}} \frac{\partial_{\lambda} Im S(X, \lambda)}{P(X,\lambda) - \tilde P_k(X)}d\lambda \ . \label{YE} \end{equation} Here one has Riemann invariants $\lambda_k=\lambda(\tilde{P}_k)$ where $\partial_P\Lambda = 0$ and there is a collection $L$ of finite cuts through the origin of angles $k\pi/N\,\, (1\leq k\leq N-1)$ in the $\lambda$ plane with branch points $\lambda_k$. One takes $\Gamma$ to be a contour encircling the cuts clockwise (not containing $\mu$) and sets $\Gamma = \Gamma_{-} - \Gamma_{+}$ where + refers to the upper half plane. 
It turns out that $S|_{\Gamma_{+}} = \bar{S}|_{\Gamma_{-}}$ and the contours can be collapsed onto the cuts to yield the last integral in (\ref{CN}) (see \cite{cl,gf} for pictures and details). By reorganizing the terms in the integrals one can express now integrals such as (\ref{CN}) in terms of D-bar data of $P$ on the cuts (cf. \cite{cl} for details). Thus $P$ and $S$ are analytic in $\lambda$ for finite $\lambda$ except on the cuts $L$ where there is a jump discontinuity $\Delta P$ (yielding $\Delta S$ by integration in $X$ as in (\ref{CL})). We have seen for dKdV that $\Delta P=2\sqrt{U-k^2}$ on $(-\sqrt{U}, \sqrt{U})$ and other dNKdV situations are similar (cf. \cite{cl,gf}). Moreover there is no need to restrict ourselves to one time variable $T_m$ in (\ref{CN}), or for that matter to dNKdV. Indeed the techniques leading to formulas of the type (\ref{CN}) are based on \cite{kc} (cf. also \cite{ca}), and apply equally well to dKP provided the D-bar data $\bar{\partial}P$ for dKP lie in a bounded set $\Omega$ (cf. \cite{cl} and remarks below). In this respect, concerning dKP, one notes first that the transformation $t\to -t,\,\,U\to -U$ sends dKP-1 to dKP-2, so one expects that any D-bar data for $S$ or $P$ will be the same. Secondly, we know from \cite{ca,kc} that for large $|\lambda|$ there will be formulas of the type (\ref{BF}). Let us assume that $\bar{\partial}P$ (and hence $\bar{\partial}S$) is nontrivial only for a region $\Omega$ where say $|\lambda|\leq M < \infty$, and let $\Gamma$ be a contour enclosing $|\lambda| \leq M$. Then without regard for the nature of such data (Riemann--Hilbert data, poles, simple nonanalyticity, etc.) one can in fact derive formulas (\ref{CN}) following \cite{ca,gf,kc} (see \cite{cl} for details). We cannot collapse on cuts as in the last equation of (\ref{CN}) but we can think of the other formulas in (\ref{CN}) as integrals over D-bar data $\bar{\partial}P$ or $\bar{\partial}S$.
It remains open to describe D-bar data for the dKP situations, however. The idea of some kind of limit of spiked cut collections arises, but we have not investigated this. In summary (cf. \cite{cl} for more detail) we can state: \begin{Proposition} For dNKdV (or dKP with given bounded D-bar data) one can write \begin{eqnarray} S(X,\mu) = P(\mu)X + \sum_2^{\infty}{\cal B}_n(P(\mu),X)T_n - \frac{1}{2\pi i}\oint_{\Gamma} \frac{S(X,\lambda)\partial_{\lambda}P(X, \lambda)} {P(X,\lambda) - P(X,\mu)}d\lambda \ , \label{ZD} \end{eqnarray} where $\Gamma$ encircles the cuts (or D-bar data) clockwise. The determination of D-bar data here is to be made via analysis of the polynomial $\lambda^N$ in (\ref{CM}) for dNKdV and in this situation one can again collapse (\ref{CN}) to the cuts. \end{Proposition} {\bf Remark 3} $ \, \, $ Now perhaps the main point of this has been to show that $S$ can be characterized via D-bar data of $S$ or $P$. In general we do not expect D-bar or scattering data for NKdV or KP to give D-bar data for $S$. The case of KdV is probably exceptional here and the situation of \cite{gb,za}, where spectral data for a system of nonlinear Schr\"odinger equations is related to the Benney or dKP hierarchy, involves a different situation (the spectral data is not related to KP). Further since KP-1 and KP-2 have vastly different spectral or D-bar properties, and both pass to the same dKP, one does not expect the spectral data to play a role. We note also that spectral data is created by the potentials via asymptotic conditions for example (and vice versa), whereas D-bar data for dNKdV, arising from the polynomial $\lambda^N$, is a purely algebraic matter. \\[3mm]\indent There is an interesting way in which D-bar data for $P$ or $S$ can be exploited.
Thus, given that $S$ has D-bar data as indicated above in a bounded region, one can say that for $|\lambda|$ large and with some analytic function $A$ of $\lambda$, \begin{eqnarray} \nonumber S &=& A + {1 \over 2\pi i}\int\int\frac{\bar{\partial}_{\zeta}S} {\zeta - \lambda}d\zeta\wedge d\bar{\zeta} \\ &=& A - {1 \over 2\pi i}\sum_0^{\infty}\frac{1}{\lambda^{j+1}}\int\int \zeta^j\bar{\partial}_{\zeta}S \ d\zeta\wedge d\bar{\zeta} \ , \label{CO} \end{eqnarray} and from (\ref{action}), we get \begin{equation} \partial_j F = {j \over 2\pi i}\int\int\zeta^{j-1} \bar{\partial}_{\zeta}S \ d\zeta\wedge d\bar{\zeta} \ . \label{CP} \end{equation} This is very useful information about $F_j$, since from the computation of (\ref{CP}) one could in principle compute all of the functions $F_{ij}$ for example. In fact, for dKdV, and possibly some other dNKdV situations, one can obtain a direct formula for the $F_{1j}$, which we know to generate all the $F_{ij}$ via Lemma 1. Thus for dKdV we know from (\ref{CL}) that $\Delta S = \int_{X_0}^X\Delta P dX' = 2\int^X_ {X_0}\sqrt{U(X',T)-k^2}dX'$ on the cut $L = (-\sqrt{U},\sqrt{U})$ in the $k$-plane, and $\bar{\partial}S=\Delta S (i/2)\delta_L$ where $\delta_L$ is a suitable delta function on $L$. Adjusting variables ($\zeta^2 = -k^2$ etc.) one can write then \begin{equation} \partial F_j = -\frac{j(i)^{j-1}}{\pi}\partial\int_{-\sqrt{U}}^{\sqrt{U}} k^{j-1}\int_{X_0}^X\sqrt{U(X',T) - k^2} \ dX'dk \ . \label{CQ} \end{equation} The only $X$ dependence here is visible and one can differentiate under the integral sign in (\ref{CQ}).
Since all $F_{1 \ 2m} =0$ by the residue formula (\ref{AZ}), we take, for example, $j = 2n-1$ to obtain \begin{eqnarray} \nonumber & & F_{1 \ 2n-1} = (-1)^n\frac{2n-1}{\pi}\int_{-\sqrt{U}}^{\sqrt{U}}k^{2n-2} \sqrt{U-k^2}dk = \\ & &=(-1)^n\frac{(2n-1)U^n}{\pi}\int_{-\frac{\pi}{2}}^ {\frac{\pi}{2}}\sin^{2n-2}\theta \cos^2\theta d\theta = (-1)^n \left(\frac{U}{2}\right)^n\prod_1^n\frac{2l-1}{l} \label{CR} \end{eqnarray} (here $k = a\sin\theta$ and $a^2 = U$). We summarize this (cf. \cite{cl} for more detail) in: \begin{Theorem} For any situation with bounded D-bar data for $S$ one can compute $F_j$ from (\ref{CP}). In the case of dKdV with $\Delta S$ as indicated one can use (\ref{CR}) to determine directly the $F_{1 \, 2n-1}$ which will generate all the functions $F_{ij}$. Generally, if $\bar{\partial}S = \int_{X_0}^X\bar{\partial}P(X',T, \zeta,\bar{\zeta})dX'$ (as in (\ref{CQ})), then from (\ref{CP}) $F_{1j} = (j/2\pi i)\int\int\zeta^{j-1}\bar{\partial}Pd\zeta\wedge d\bar{\zeta}$, and formally, $\partial_n\partial_j F = (j/2\pi i) \int\int\zeta^{j-1}\bar{\partial}{\cal B}_n d\zeta\wedge d\bar{\zeta}$ since formally $\partial_n\bar{\partial}S = \bar{\partial}\partial_n S = \bar{\partial}{\cal B}_n$. \end{Theorem} {\bf Remark 4} $\,\,$ The calculations using (\ref{CQ}) in fact agree with the determination of $F_{1 \ 2n-1}$ from residue calculations as in Lemma 1 (note here $2U_2= -U$ when adjusting notations between Sections 2 and 4). In general one may have some difficulties in determining $\bar{\partial}{\cal B}_n$ or $\bar{\partial}P$ for example, so the formulas (\ref{CQ}) and (\ref{CR}) may be exceptional in this regard. However for dKdV we have checked the validity of these formal integrals for $F_{nj}$ involving $\bar{\partial}{\cal B}_n$ for a number of terms (e.g. $F_{31},\,\,F_{51},\,\,F_{33},\,\,F_{35}$, and $F_{53}$) against the results of residue calculations.
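The closed form in (\ref{CR}) can be checked numerically against the trigonometric integral it came from. A Python sketch (our names; a plain trapezoid rule suffices since the integrand is smooth):

```python
import math

def trig_integral(n, steps=20000):
    # int_{-pi/2}^{pi/2} sin^{2n-2}(t) cos^2(t) dt by the trapezoid rule
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        t = -math.pi / 2 + i * h
        f = math.sin(t) ** (2 * n - 2) * math.cos(t) ** 2
        total += f if 0 < i < steps else f / 2
    return total * h

def F1_integral(n, U):
    # F_{1,2n-1} via the middle expression of (CR)
    return (-1) ** n * (2 * n - 1) * U ** n / math.pi * trig_integral(n)

def F1_closed(n, U):
    # the closed form on the right of (CR): (-1)^n (U/2)^n prod (2l-1)/l
    prod = 1.0
    for l in range(1, n + 1):
        prod *= (2 * l - 1) / l
    return (-1) ** n * (U / 2) ** n * prod

for n in range(1, 6):
    assert abs(F1_integral(n, 2.0) - F1_closed(n, 2.0)) < 1e-6
```

For instance $n=1$ gives $F_{11} = -U/2$, in agreement with $P_2 = F_{11}$ read off from the residue calculations.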
Here one can write for $n$ odd $\bar{\partial}{\cal B}_n = \bar{\partial}\zeta^n_{+} = \Delta\zeta^n_{+} = P^{-1}\zeta^n_{+}\Delta P$ and one gets for $n = 2k-1,\,\,j = 2m-1$ \begin{equation} F_{nj} = (-1)^m\frac{2(2m-1)U^{k+m-1}}{\pi}\int_0^{\frac{\pi}{2}} \sin^{2m-2}\theta\hat{\zeta}^{2k-1}_{+}(\cos\theta)\cos\theta d\theta \ , \label{NN} \end{equation} where $a^{2k-1}\hat{\zeta}^{2k-1}_{+}(\cos\theta)=\zeta^{2k-1}_{+}(P)$ for $P = a\cos\theta\,\,(a^2 = U)$. Let us mention also that one can easily write down the tables of $F_{ij}$ for dNKdV from the table (\ref{AX}). Thus for dNKdV we have a reduction of the table (\ref{AX}) given via $F_{Nm}=F_{mN}=0$ for all $m$. \par\medskip\medskip \noindent {\bf Acknowledgment} We thank Mr. Seung Hwan Son for his Mathematica programs yielding (\ref{AX}) from (\ref{AV}) and $F_{1 \ 2n-1}$ for the dKdV from (\ref{AZ}). The work of YK is partially supported by an NSF grant DMS-9403597.
\section{Introduction} \label{sec:introduction} \paragraph*{Distributed checking of minimum spanning tree.} The problem is the following: we are given a weighted graph where some edges are selected, and we want to enable the nodes of the graph to collectively check whether the selected edges form a minimum spanning tree (MST) of this weighted graph or not. More precisely, we want the nodes to take a \emph{distributed decision}. For this, every node takes its own decision whether to accept or to reject, and the collective decision is considered to be an acceptance, if and only if, all nodes accept. The difficulty is that the nodes have a limited knowledge of the graph. Each node has a local view that contains only its adjacent edges and nodes, along with the weight of these adjacent edges and whether they are selected or not. In addition, we assume that the nodes are given unique identifiers. Actually we will need to provide more information to the nodes. Indeed, it is impossible to check whether the selected edges form a minimum spanning tree or not in this restricted setting, as the following example shows. Consider a ring, where the nodes have arbitrary distinct identifiers, and all edges are selected. Now take an arbitrary node. Given its local view, it cannot distinguish whether it is indeed in a ring, or if it is in the middle of a long path, where all edges are selected. If this node chooses to reject, then the distributed decision will automatically be a rejection, and in the long path this would be a wrong decision. Thus it has to accept. But then, in the ring, by symmetry, every node has to accept, and then the distributed decision is an acceptance, although the ring is not a correct instance. \paragraph*{Distributed proof.} The mechanism used to bypass the impossibility above is called a \emph{distributed proof}. In such a mechanism, every node will be given a label, and a node can see not only its own label but also the ones of its neighbors. 
These labels are supposed to certify the correctness of the minimum spanning tree, in the following sense: \begin{itemize} \item If the set of selected edges form a minimum spanning tree, then there exists a label assignment such that all nodes accept. \item If the set of selected edges does not form a minimum spanning tree, then for any label assignment, at least one node will reject. \end{itemize} In other words, the set of labels certifies that the configuration is correct, and the nodes can check this certification. \paragraph*{A bit of vocabulary and notations.} Let us introduce some terminology. The set of selected edges is \emph{the solution}, and the graph and the solution form together \emph{the configuration}. The labels used in a distributed proof are called the \emph{certificates}, and a distributed proof may also be called a \emph{distributed certification}. (The original name for distributed proof is \emph{proof-labeling scheme}.) When describing the mechanism, it is convenient to refer to an entity that assigns the labels to the nodes, this is \emph{the prover}. Note that we are not interested in the question of how the prover computes the certificates. Also note that given the definition of certification, we can see the prover as an entity trying to make the nodes accept, regardless of whether the solution is correct or not. And the nodes are capable of detecting when the prover is trying to fool them. When describing a scheme, we only describe what the prover does on correct instances, as on incorrect instances it can assign arbitrary labels. The number of nodes in the graph is denoted by $n$, the weight of an edge $(u,v)$ is denoted by $w(u,v)$, and the maximum weight is $W$. We assume that the identifiers are on $O(\log n)$ bits. The size of a distributed proof for MST is a function of $n$ and $W$: it is the size of the largest label over the instances with $n$ nodes and maximum weight $W$.
We refer to the survey~\cite{FeuilloleyF16} for more information about distributed decision and certification. \section{Warm-up: non-optimal MST distributed proofs} \label{sec:non-optimal} The goal of this note is to explain the MST distributed proof of~\cite{KormanK07}, which uses $O(\log n \log W)$ bits. This size is optimal, as proved in~\cite{KormanK07}. As a warm-up, we sketch a few simpler distributed proofs for minimum spanning tree, that use larger certificates. \paragraph*{Universal proof.} A basic result about distributed certification is that it is possible to certify any solution of any problem~\cite{KormanKP10}. On a correct instance, the prover just has to provide to every node a complete description of the configuration. More precisely, every node is assigned a certificate containing: (1) the adjacency matrix of the graph, (2) the bijection saying which row of the matrix corresponds to which node of the graph (designated by its identifier) and (3) a description of the solution (and extra input information when needed, \emph{e.g.} edge weights for minimum spanning tree). It is then easy for the nodes to check this distributed proof: every node checks that it has the same certificate as its neighbors, that its local view is consistent with the graph described in the certificate, and that the solution is correct in this described graph. The problem with this technique is that it requires large certificates: the adjacency matrix alone takes $\Omega(n^2)$ bits. \paragraph*{Proofs based on algorithms and `à la Kruskal' proof.} A classic technique in distributed certification is to take a distributed algorithm that can produce the solution at hand, and to describe its run in the certificates. More precisely, the prover can give to every node the list of all the messages that it would send and receive during the run.
Then the nodes can check that this run is correct, by virtually running the algorithm, based on this transcript, and they can check that it produces the solution at hand. The certificates given by this technique are unfortunately very large in general. A strategy is to keep only the essential pieces of information, and to design the scheme so that the nodes do not verify the run \emph{per se}, but only some key properties of it. Let us exemplify this technique with Kruskal's algorithm. In this algorithm (which is a centralized algorithm), the edges are ranked by increasing weight, and are considered in this order. Each edge is selected to be in the minimum spanning tree if it does not create a cycle with the edges already selected. The certificates that are derived from this algorithm are the following. Every node is given the list of the selected edges (described by the identifiers of the two endpoints), along with the weights of these edges. It is proved in \cite{FeuilloleyH18} that this is enough to allow the nodes to check the solution using a procedure based on Kruskal's algorithm: checking at every step that no lighter edge could be added without closing a cycle. This scheme uses labels of size $\Theta(n \log n + n \log W)$, which is better than the universal proof but still very large. \paragraph*{`À la Borůvka' proof.} Yet another scheme is based on another MST algorithm, namely Borůvka's algorithm (or its distributed variant the GHS algorithm~\cite{GallagerHS83}). It uses $O(\log^2 n +\log n\log W)$-bit certificates~\cite{KormanKP10}. The lower bound for MST certification being $\Omega(\log n \log W)$~\cite{KormanK07}, this scheme is optimal for the regime where $W$ is at least polynomial in $n$. Let us recall Borůvka's algorithm. It proceeds in phases, using objects called \emph{fragments}, that are acyclic subgraphs. At the beginning of the run, every node is a fragment, and during the run of the algorithm, edges are added between fragments so that they get merged.
At the end there is only one fragment, and it is a minimum spanning tree. The way the edges are added is the following. There are phases, and at each phase every fragment chooses a so called \emph{outgoing edge} (that is an edge that has exactly one endpoint in the fragment) of minimum weight. These outgoing edges are then added to the solution, and we proceed to the next phase. At each phase a fragment merges with at least one other fragment; therefore there are at most $O(\log n)$ phases. In the certificates a node is given, for each of the $O(\log n)$ phases, the name and weight of the outgoing edge chosen by its fragment (along with a proof that this edge exists). This is enough to check that the solution is an MST. It uses $\Theta(\log n +\log W)$ bits per phase, hence the size of the scheme is $\Theta(\log^2 n +\log n\log W)$. \section{Theorem and outline} The focus of this note is the following theorem. \begin{theorem}[\cite{KormanK07}] There exists a distributed proof for MST of size $O(\log n \log W)$. \end{theorem} \noindent The rest of the note is an explanation of the proof of this theorem. \section{A basic tool: spanning trees} \label{sec:tree} \paragraph*{Certifying a spanning tree.} Before certifying a minimum spanning tree, it is useful to know how to certify a spanning tree. The classic scheme is the following: on correct instances, the prover chooses an arbitrary node as the root of the tree, and assigns to every node $u$ a certificate containing the identifier of the root, and the distance from $u$ to the root. It is folklore that this is enough to certify a spanning tree: the acyclicity is certified by the distance, and the connectivity is certified by the root identifier (see \cite{FeuilloleyF16} for explanations). The two pieces of information above (the identifier and the distance) both use $\Theta(\log n)$ bits, thus certifying a spanning tree can be done using $\Theta(\log n)$ bits.
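The local verification of these root-and-distance certificates can be sketched in a few lines. The Python model below uses hypothetical names and one possible set of local tests (same claimed root everywhere, tree edges change the distance by exactly one, a unique closer tree neighbor playing the role of the parent):

```python
# adj: adjacency lists; tree: set of selected edges; cert[u] = (root_id, dist)
def node_accepts(u, adj, tree, cert):
    root, d = cert[u]
    for v in adj[u]:
        if cert[v][0] != root:                    # same claimed root everywhere
            return False
        if frozenset((u, v)) in tree and abs(cert[v][1] - d) != 1:
            return False                          # tree edges change dist by 1
    if d == 0:
        return u == root                          # the root certifies itself
    # exactly one tree neighbour closer to the root: the parent
    parents = [v for v in adj[u]
               if frozenset((u, v)) in tree and cert[v][1] == d - 1]
    return len(parents) == 1

def all_accept(adj, tree, cert):
    return all(node_accepts(u, adj, tree, cert) for u in adj)

# a path 1-2-3-4 rooted at 1, with honest certificates: accepted everywhere
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
tree = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4)]}
cert = {1: (1, 0), 2: (1, 1), 3: (1, 2), 4: (1, 3)}
assert all_accept(adj, tree, cert)

# a 4-cycle with all edges selected: this forged labeling is rejected,
# since node 3 sees two closer tree neighbours
adj_c = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [3, 1]}
tree_c = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1)]}
cert_c = {1: (1, 0), 2: (1, 1), 3: (1, 2), 4: (1, 1)}
assert not all_accept(adj_c, tree_c, cert_c)
```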
\paragraph*{Uses of a certified spanning tree.} The first use we make of the certification scheme above is to simply certify that the solution at hand is a spanning tree. This way, we can focus on proving the minimality. The solution can now be called \emph{the input tree}. Second, we will use other spanning trees than the input tree. More precisely, in order to certify the minimality, the certificates will contain a distributed description and the certification of spanning trees different from the input tree. For example, suppose that at some point you want to certify that there is a node with identifier $x$ in the graph. To do so, the prover can describe and certify a spanning tree of the graph whose root is this special node. As this spanning tree is certified, the nodes will be able to check its structure, and if the root also checks that it has identifier $x$, and no node rejects, it means that indeed a node with identifier $x$ exists. \section{Idea 1: Checking the cycle property} \label{subsec:checking-weights} Now that we have the spanning tree tool, let us start to discuss the ideas that lead to the distributed proof of~\cite{KormanK07}. The first idea is to differ from the approaches described in Section~\ref{sec:non-optimal}, by \emph{not} mimicking an algorithm. Instead, we will allow the nodes to check a fundamental property of minimum spanning trees: the \emph{cycle property}. \paragraph*{Cycle property.} Consider a spanning tree $T$ of a weighted graph. For any edge $(u,v)$ of the graph, let $\max_T(u,v)$ be the largest weight on the path from $u$ to $v$ in $T$. The cycle property states that $T$ is minimum, if and only if, for every edge $(u,v)$, $w(u,v)\geq \max_T(u,v)$. Hence if the nodes get to know the values $\max_T(u,v)$, they can check the inequality for all edges, and thus check the minimality.
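As a centralized illustration of the cycle property (not the distributed scheme itself), one can check a candidate tree edge by edge. A Python sketch with hypothetical names:

```python
from itertools import combinations

def max_on_tree_path(tree_adj, w, u, v):
    """Largest tree-edge weight on the unique u-v path (DFS in the tree)."""
    stack = [(u, None, 0)]
    while stack:
        node, parent, best = stack.pop()
        if node == v:
            return best
        for nxt in tree_adj[node]:
            if nxt != parent:
                stack.append((nxt, node, max(best, w[frozenset((node, nxt))])))
    raise ValueError("v not reachable in the tree")

def is_mst_by_cycle_property(nodes, w, tree_adj):
    """T is an MST iff w(u,v) >= max_T(u,v) for every graph edge (u,v)."""
    return all(w[frozenset((u, v))] >= max_on_tree_path(tree_adj, w, u, v)
               for u, v in combinations(nodes, 2) if frozenset((u, v)) in w)

# 4-cycle with weights; the tree {12, 23, 34} is an MST, {12, 23, 14} is not
w = {frozenset(e): wt for e, wt in
     [((1, 2), 1), ((2, 3), 2), ((3, 4), 1), ((1, 4), 3)]}
nodes = [1, 2, 3, 4]
mst = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
bad = {1: [2, 4], 2: [1, 3], 3: [2], 4: [1]}
assert is_mst_by_cycle_property(nodes, w, mst)
assert not is_mst_by_cycle_property(nodes, w, bad)
```

In the bad tree, the non-tree edge $(3,4)$ of weight $1$ is lighter than the maximum weight $3$ on the tree path $3$-$2$-$1$-$4$, so the cycle property fails.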
\paragraph*{Naive approach.} A naive idea consists in writing, in the certificate of every node $u$, the values $\max_T(u,v)$, for every neighbor $v$ of $u$. Without even looking into how the nodes could check that these values are correct, there is already a problem of size: this can take $\Theta(n\log W)$ bits per node, as we have no control on the degree. Thus, we want to allow every node $u$ to retrieve the values $\max_T(u,v)$, without explicitly writing them in the certificate. \section{Idea 2: Using intermediate nodes} \paragraph*{Using non-neighbor intermediate nodes} In the naive approach above, a node would not use the fact that it can see the certificates of its neighbors. But if $v$ is a neighbor of $u$, then $u$ has access to the certificate of $v$. Now, an important idea is that it can be useful for a node $u$, to know the value $\max_T(u,w)$, even if the node $w$ is not a neighbor of $u$. Indeed, if $w$ is a node on the path between $u$ and $v$ in the input tree, and if $u$ and $v$ respectively hold $\max_T(u,w)$ and $\max_T(v,w)$, then they can compute $\max_T(u,v)$, because: \[ {\max}_T(u,v)=\max({\max}_T(u,w),{\max}_T(v,w)). \] In this case, there is no need for $u$ or $v$ to store $\max_T(u,v)$, as they can compute it. Now our scheme will be based on the idea of giving to any node $u$, a list of values $\max_T(u,w)$, for some carefully chosen $w$. For a given node $u$, such an intermediate node $w$ will be called \emph{an anchor} of $u$. Note that we actually need to provide pairs $(ID(w),\max_T(u,w))$, where $ID(w)$ is the identifier of $w$, in order to allow the nodes to compare their sets of anchors. \paragraph*{Which nodes to use?} Now the question is: how to choose the sets of anchors? A set of anchors has to fulfill several goals.
First, it should allow any pair of neighbors $u,v$ to retrieve the value $\max_T(u,v)$; thus any pair of neighbors should have an anchor in common, and this anchor should be on the path between them in the input tree. Second, we want the nodes to be able to check that the pairs $(ID(w),\max_T(u,w))$ are correct. Third, we want to minimize the number of pairs $(ID(w),\max_T(u,w))$ that any node gets. \section{Idea 3: Ancestors as anchors} Remember that thanks to Section~\ref{sec:tree}, we can assume that the solution is a spanning tree of the graph, and that it is oriented towards a root. We will use this structure to choose our anchors. \paragraph*{Ancestors.} We simply use the ancestors of $u$ in the tree as its anchors (including $u$ itself). We claim that this is a correct set of anchors. Indeed, the nearest common ancestor of $u$ and $v$ is an anchor of both $u$ and $v$, as it is an ancestor of both, and it is on the path between the two nodes. But the nodes may have many ancestors in common, and only the nearest common ancestor is on the path between them. How can the nodes identify the nearest common ancestor? We simply list the pairs $(ID(w),\max_T(u,w))$ from the closest ancestor to the root. This way, the nodes simply have to look for the first anchor that appears in both lists. \paragraph*{Checking.} Now we need to describe how the nodes check that the values they are given are correct. For the ancestors, every node just has to check that its ancestor list is consistent with the lists of its parent and children in the tree. For the maximum weights, a similar check works, except that now the nodes also check that the weights of their local views are consistent with the certificates. \paragraph*{Size.} At this point, we have described a correct distributed proof for minimum spanning tree, based on checking the cycle property. A problem is again the size.
If $d$ is the depth of the tree with the orientation we took at the beginning, the scheme described above has size $\Theta(d\log n+d\log W)$. As we have no control over the depth of the input tree, the size can be as large as $\Omega(n\log n + n\log W)$, for example when the tree is a path. \section{Idea 4: Using another tree} An idea to cope with the depth problem is to define the anchors using a tree that is \emph{not} the input tree. That is, the anchors of a node will still be its ancestors in a tree, but not the same tree. This new tree will be called \emph{the overlay tree}. We will make sure that this overlay tree is balanced, in order to get a small depth, and avoid the problem of the previous section. \paragraph*{Virtual edges.} In general, the overlay tree cannot be defined as a spanning tree of the graph. Indeed, if the graph is a path, for example, it has no small-depth spanning tree. Instead, we will use virtual edges; that is, the overlay tree may have an edge between nodes $a$ and $b$ even if there is no edge $(a,b)$ in the graph. Of course this has to be done carefully, and will make the certificates more complicated. \paragraph*{Top-down construction of an overlay tree.} Let us now describe a general construction to build an overlay tree. We will take care of the depth later. The overlay tree is built in a top-down fashion. Take an arbitrary node, and make it the root of the overlay tree. After removing it from the input tree, we are left with some $k$ connected components (of the input tree). For each of them, choose an arbitrary node; these are the $k$ children of the root. Continue recursively until all the nodes appear in the overlay tree. \paragraph*{The nearest common ancestor is still a correct anchor choice.} We claim that the construction above preserves the following key property: given two nodes $u$ and $v$, their nearest common ancestor \emph{in the overlay tree} is on the path between them \emph{in the input tree}.
At each step of the top-down construction of the tree, as long as $u$ and $v$ remain in the same connected component, they are assigned the same ancestors. Then at some point a new node~$c$ is chosen to be the root of one of the connected components, and its removal leaves $u$ and $v$ in different components. On the one hand, as the removal of $c$ separates $u$ and $v$, the node $c$ must be on the path from $u$ to $v$ in the input tree. On the other hand, $c$ is the nearest common ancestor they have in the overlay tree. This proves the claim. \paragraph*{Encoding and checking of the overlay tree.} The certificate of a node contains, as before, a list of pairs $(ID(w),\max_T(u,w))$, but checking that the nodes $w$ are ancestors in the overlay tree requires more work than the analogous check in the input tree. Let $w$ be one of these ancestor nodes. How can the node $u$ check that $w$ really exists and is an ancestor? Or, more precisely, how can we encode the overlay tree to make this checking possible? By construction, the nodes that have $w$ as an ancestor in the overlay tree form a connected component of the input tree. We can then define a tree spanning this component, whose root is $w$. Then, the certificate of $u$ will contain, in addition to $(ID(w),\max_T(u,w))$, the local encoding and certification of this tree. That is, it contains the name of the parent of~$u$ in this `component tree' as well as the distance from $u$ to $w$ in this tree. As this component tree is based on edges of the graph (contrary to the overlay tree), it can be checked easily in the same way as in Section~\ref{sec:tree}. We do so for every ancestor of every node. That is, a node is given the local encoding and certification of a spanning tree for each of its $O(\log n)$ ancestors. Not only does this ensure that the ancestors really exist, but by checking the consistency of these spanning trees, the nodes can also check the whole overlay tree structure.
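For concreteness, the top-down construction can be sketched centrally as follows (names and representation are ours, for illustration). The function \texttt{choose}, which picks the root of each component, is arbitrary here; the key property above holds regardless of how it picks.

```python
# A centralized sketch of the top-down overlay-tree construction.
# `choose` picks the root of each component (arbitrary choice here).

def build_overlay(tree_adj, nodes, choose=min):
    """Return overlay-tree parent pointers for the given input tree."""
    parent = {}

    def components(remaining):
        """Connected components of the input tree restricted to `remaining`."""
        seen, comps = set(), []
        for start in remaining:
            if start in seen:
                continue
            comp, stack = set(), [start]
            seen.add(start)
            while stack:
                x = stack.pop()
                comp.add(x)
                for y in tree_adj[x]:
                    if y in remaining and y not in seen:
                        seen.add(y)
                        stack.append(y)
            comps.append(comp)
        return comps

    def recurse(comp, overlay_parent):
        root = choose(comp)
        parent[root] = overlay_parent
        for sub in components(comp - {root}):
            recurse(sub, root)

    recurse(set(nodes), None)
    return parent
```

On a path $1$--$2$--$3$--$4$--$5$ with \texttt{choose=min}, the overlay tree degenerates into the path itself, which is exactly why the choice of the roots matters for the depth.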
\paragraph*{Balanced overlay tree.} In the construction of the overlay tree, there is freedom in the choice of the root. We can choose this node to be a center of the tree, that is, a node whose removal leaves the tree with components of size at most $n/2$. By choosing such a center at each step of the recursion, we get an overlay tree that is balanced, and has depth $O(\log n)$. \paragraph*{Size.} Let us now consider the size of this new scheme. We still have, in the first part of the certificate, the certification that the input tree is a spanning tree of the graph, which takes $\Theta(\log n)$ bits. This is negligible compared to what follows (and it will remain so in the next sections). In addition to this, the certificate of a node $u$ contains $O(\log n)$ fields (one for each ancestor in the overlay tree), with: \begin{itemize} \item the identifier of the ancestor in the overlay tree, \item the maximum weight on the path between $u$ and the ancestor in the input tree, \item the identifier of a neighbor of $u$ (the parent in the component tree), \item a distance (from $u$ to its ancestor through the component tree). \end{itemize} In total, the size is $\Theta(\log^2\!n+\log n\log W)$. Therefore, this scheme is as good as the scheme based on Borůvka's algorithm, but not better. In particular, even if $W$ is very small, the size will still be $\Omega(\log^2\!n)$. In the next sections, we show how to make the $\log^2\!n$ term disappear, to get the $\Theta(\log n\log W)$ size. To do so, we first encode the overlay tree in a more compact manner (to avoid using the identifier of a neighbor at each level), and then make the names of the ancestors more compact too (to avoid using the identifier of an ancestor at each level). \section{Idea 5: Compact encoding of the overlay tree} To describe and check the structure of the overlay tree, we can do better than giving the identifier of a neighbor for each of the $O(\log n)$ levels.
To do so, we will rely on the input tree (and its orientation to a root) that we have. For each level of the overlay tree, instead of giving the identifier of a neighbor as a pointer to the parent in the component tree, we simply write in which direction this neighbor is with respect to the input tree. That is, if this neighbor is the parent in the input tree, then the label will simply say \emph{up}; otherwise, it is a child in the input tree, and the label says \emph{down}. (The third case is that the node is the root of this tree structure, in which case it has the label \emph{root}.) Two things are worth pointing out here. First, as the input tree is certified to be acyclic, there is no risk of having something like a cycle of nodes with label \emph{up}. Second, if the flag is \emph{down}, at first a node cannot know which of its children is the right one. But actually, if the labeling is correct, exactly one of its children has a label that is not \emph{up} (that is, either \emph{root} or \emph{down}), and this is its parent. Thus, from this up-down labeling, the nodes can recompute the identifiers of their parents, and check the correctness. The good thing is that this takes only $O(1)$ bits per level. Thus in total this is $O(\log n)$ bits, hence we have eliminated the first $\log^2\!n$ term of the size. \section{Idea 6: Compact encoding of the ancestors} The remaining $\log^2\!n$ term comes from the list of $\Theta(\log n)$ ancestor identifiers that each node holds. Note that it is crucial to have some way of naming these ancestors, such that two neighbors of the graph can decide which anchor of their lists they should use to compute the max value, and then check the cycle property. Also, in general, there is no better way of naming a set of $\Theta(\log n)$ nodes than to use $\Theta(\log^2\!n)$ bits. But the sets of nodes we are interested in are not arbitrary: they are chains of nodes that form a path from a node to the root, through all its ancestors.
We can tweak the names such that retrieving the concatenation of the names of such chains requires small labels. \paragraph*{Subtree numbering to avoid identifiers.} A way to give distinct names to the nodes of a tree is the following: every node gives a distinct number to each of its $k$ children (from 1 to $k$), in an arbitrary way. Then the name of a node is the concatenation of the numbers of all its ancestors, from the root to itself. It is easy to check that these names are unique. Now, in our context, this is especially useful, because it is no longer necessary to give the name of each ancestor: from its own name, a node can retrieve the names of all its ancestors, as they are just prefixes. Also, because of the overlay tree, the nodes can check that these new names are correct. We are not yet done, as the new names might have $\Omega(\log^2\!n)$ bits, for example if all the ancestors of a node have (almost) $\Omega(n)$ children. \paragraph*{Compact subtree numbers.} We can play with the way a node numbers its children. Namely, we number the subtrees in a similar fashion as in \cite{GavoilleKKPP01}, that is, in the inverse order of the sizes of the subtrees. More precisely, we number the subtrees from~1, for the subtree with the largest number of nodes, to $k$, for the subtree with the smallest number of nodes. Let us compute the size of the name of an arbitrary node $u$ at depth $r$. For every $i$, from~0 to~$r$, let $a_i$ be the ancestor of $u$ of depth $i$ (with $u=a_r$). Also let $n_i$ be the number of nodes in the tree of $a_i$ (that is, $a_i$ and all its descendants). Finally, for $i\geq 1$, let $k_i$ be the number given to $a_i$ by its parent. Now, if a node is given a number $k_i>1$, it means that its parent had at least $k_i-1$ children with larger subtrees. Thus, for all $i$ from 1 to $r$, $n_{i-1}\geq k_i\cdot n_{i}$, that is, $k_i\leq n_{i-1}/n_{i}$.
Then the size of the name of $u$, which is the size of the concatenation of the subtree numbers of the ancestors of $u$, is of order: \[ \sum_{i=1}^{r} \log(k_i) = \log\left(\prod_{i=1}^{r} k_i\right) \leq \log \left( \prod_{i=1}^{r} \frac{n_{i-1}}{n_{i}} \right)=\log\left(\frac{n_0}{n_r}\right) \leq \log (n_0)=\log (n). \] That is, the name of every node is on $O(\log n)$ bits, thus we have eliminated the second $\log^2\!n$ term.\footnote{A subtlety is that, to be able to retrieve the subtree numbers from the concatenation, one has to add commas between the numbers, but as there are at most $O(\log n)$ ancestors, this incurs no overhead. Actually, more complex nearest-common-ancestor labelings can cope with more ancestors (see~\cite{AlstrupGKR}, used in \cite{BlinF15}).} \section{Wrap-up} Putting all the pieces together, we get $O(\log n \log W)$-bit labels for certifying MST. This is optimal~\cite{KormanK07}. Something that may be a bit surprising at first is that the nodes cannot check that the overlay tree is balanced, nor that the subtree numbering is an inverse-size numbering. This is no problem in our setting, as we only want to (1) certify the solution, and (2) have small labels on correct instances. It can be problematic if we want to build the certificates in a distributed manner without exceeding some space limit. A recent preprint~\cite{BlinDF19} tackles the problem of how to construct the labels to get an optimal-space self-stabilizing algorithm, and solves this problem by allowing the nodes to check the size of their labels. \DeclareUrlCommand{\Doi}{\urlstyle{same}} \renewcommand{\doi}[1]{\href{https://doi.org/#1}{\footnotesize\sf doi:\Doi{#1}}} \bibliographystyle{plainnat}
\section{Appendix} Here, in Section~\ref{app:uppaal} we explain how the different \code{UPPAAL} models work, and in Section~\ref{app:blockc} we dive into the \textsf{\small MTL}\xspace specifications we use to verify the 3-party swap and the auction protocols. \subsection{\code{UPPAAL} Models} \label{app:uppaal} Below we explain in detail how each of the \code{UPPAAL} models works. With respect to our monitoring algorithm, we consider multiple instances of each of the models as different processes. Each event consists of the action that was taken along with the time of occurrence of the event. In addition to this, we assume a unique clock for each instance, synchronized by a clock synchronization algorithm with a maximum clock skew of $\epsilon$. \paragraph{The Train-Gate} It models a railway control system which controls access to a bridge for several trains. The bridge can be considered a shared resource and can be accessed by one train at a time. Each train is identified by a unique \texttt{id}, and whenever a new train appears in the system, it sends an \texttt{appr} message along with its \texttt{id}. The gate controller has two options: (1) send a \texttt{stop} message and keep the train in a waiting state, or (2) let the train cross the bridge. Once the train crosses the bridge, it sends a \texttt{leave} message signifying that the bridge is free for any other train waiting to cross. \input{train} The gate keeps track of the state of the bridge; in other words, the gate acts as the controller of the bridge for the trains. If the bridge is currently not being used, the gate immediately offers any appearing train to go ahead; otherwise, it sends a \texttt{stop} message. Once the gate is freed by a train leaving the bridge, it sends out a \texttt{go} message to any train that had appeared in the meantime and was waiting in the queue. \input{gate} \paragraph{The Fischer's Protocol} It is a mutual exclusion protocol designed for $n$ processes.
A process starts by sending a request to enter the critical section (\texttt{cs}). On receiving the request, a unique \texttt{pid} is generated and the process moves to a \texttt{wait} state. A process can only enter the critical section when it has the correct \texttt{id}. Upon exiting the critical section, the process resets the \texttt{id}, which enables other processes to enter the \texttt{cs}. \input{fischer} \input{gossip} \paragraph{The Gossiping People} The model consists of $n$ people, each having a private secret they wish to share with each other. Each person can \texttt{Call} another person, and after a conversation, both persons mutually know all their secrets. With respect to our monitoring problem, we make sure that each person generates a new secret that needs to be shared with the others infinitely often. \subsection{Blockchain} \label{app:blockc} Below we show the specifications we used to verify the correctness of the hedged three-party swap and auction protocols, as described in \cite{xue2021hedging}. The structure of the specifications is similar to that of the hedged two-party swap protocol. \subsubsection{Hedged 3-Party Swap Protocol} The three-party swap example we implemented can be described as a digraph with directed edges between Alice, Bob, and Carol. For simplicity, we consider that each party transfers 100 assets. The transfer between Alice and Bob is called $ApricotSwap$, meaning Alice proposes to transfer 100 apricot tokens to Bob; the transfer between Bob and Carol is called $BananaSwap$, meaning Bob proposes to transfer 100 banana tokens to Carol; and the transfer between Carol and Alice is called $CherrySwap$, meaning Carol proposes to transfer 100 cherry tokens to Alice. Different tokens are managed by different blockchains (Apricot, Banana, and Cherry, respectively). We denote the time at which they reach an agreement on the swap as $startTime$.
$\Delta$ is the maximum time for parties to observe a state change of the contracts by others and take a step to make changes on the contracts. According to the protocol, the execution proceeds in the following steps: \begin{itemize} \item Step 1. Alice deposits 3 tokens as $escrow\_premium$ in $ApricotSwap$ before $\Delta$ elapses after $startTime$. \item Step 2. Bob deposits 3 tokens as $escrow\_premium$ in $BananaSwap$ before $2\Delta$ elapses after $startTime$. \item Step 3. Carol deposits 3 tokens as $escrow\_premium$ in $CherrySwap$ before $3\Delta$ elapses after $startTime$. \item Step 4. Alice deposits 3 tokens as $redemption\_premium$ in $CherrySwap$ before $4\Delta$ elapses after $startTime$. \item Step 5. Carol deposits 2 tokens as $redemption\_premium$ in $BananaSwap$ before $5\Delta$ elapses after $startTime$. \item Step 6. Bob deposits 1 token as $redemption\_premium$ in $ApricotSwap$ before $6\Delta$ elapses after $startTime$. \item Step 7. Alice escrows 100 ERC20 tokens to $ApricotSwap$ before $7\Delta$ elapses after $startTime$. \item Step 8. Bob escrows 100 ERC20 tokens to $BananaSwap$ before $8\Delta$ elapses after $startTime$. \item Step 9. Carol escrows 100 ERC20 tokens to $CherrySwap$ before $9\Delta$ elapses after $startTime$. \item Step 10. Alice sends the preimage of the hashlock to $CherrySwap$ to redeem Carol's 100 tokens before $10\Delta$ elapses after $startTime$. \item Step 11. Carol sends the preimage of the hashlock to $BananaSwap$ to redeem Bob's 100 tokens before $11\Delta$ elapses after $startTime$. \item Step 12. Bob sends the preimage of the hashlock to $ApricotSwap$ to redeem Alice's 100 tokens before $12\Delta$ elapses after $startTime$. \end{itemize} If all parties are conforming, the protocol is executed as above. Otherwise, some asset refund and premium redemption events will be triggered to resolve the case where some party deviates. To avoid distraction, we do not provide the details here.
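For illustration, the conforming schedule above can be checked mechanically from a timed log of contract events. The following sketch uses hypothetical event names (they mirror our specifications, not an actual contract API), and checks that step $k$ occurred strictly before $k\Delta$ after $startTime$:

```python
# A sketch of checking the 12 step deadlines of the 3-party swap.
# Event names are illustrative, not the actual contract interface.
EXPECTED_STEPS = [
    "apr.depositEscrowPr(alice)",      # step 1, deadline startTime + 1*delta
    "ban.depositEscrowPr(bob)",        # step 2
    "che.depositEscrowPr(carol)",      # step 3
    "che.depositRedemptionPr(alice)",  # step 4
    "ban.depositRedemptionPr(carol)",  # step 5
    "apr.depositRedemptionPr(bob)",    # step 6
    "apr.assetEscrowed(alice)",        # step 7
    "ban.assetEscrowed(bob)",          # step 8
    "che.assetEscrowed(carol)",        # step 9
    "che.hashlockUnlocked(alice)",     # step 10
    "ban.hashlockUnlocked(carol)",     # step 11
    "apr.hashlockUnlocked(bob)",       # step 12, deadline startTime + 12*delta
]

def swap_on_schedule(events, start_time, delta):
    """events: dict mapping event name -> occurrence time.
    True iff every step occurred before its deadline."""
    return all(
        step in events and events[step] < start_time + (k + 1) * delta
        for k, step in enumerate(EXPECTED_STEPS)
    )
```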
\paragraph{Liveness} Below we show the liveness specification, stating that all the steps of the protocol have been taken: \begin{align*} &\varphi_{\mathsf{liveness}}=\F_{[0, \Delta)} \texttt{apr.depositEscrowPr(alice)}\\ & \land \F_{[0, 2\Delta)} \texttt{ban.depositEscrowPr(bob)} \\ & \land \F_{[0, 3\Delta)} \texttt{che.depositEscrowPr(carol)} \\ & \land \F_{[0, 4\Delta)} \texttt{che.depositRedemptionPr(alice)} \\ & \land \F_{[0, 5\Delta)} \texttt{ban.depositRedemptionPr(carol)} \\ & \land \F_{[0, 6\Delta)} \texttt{apr.depositRedemptionPr(bob)} \\ & \land \F_{[0, 7\Delta)} \texttt{apr.assetEscrowed(alice)} \\ & \land \F_{[0, 8\Delta)} \texttt{ban.assetEscrowed(bob)} \\ & \land \F_{[0, 9\Delta)} \texttt{che.assetEscrowed(carol)} \\ & \land \F_{[0, 10\Delta)} \texttt{che.hashlockUnlocked(alice)} \\ & \land \F_{[0, 11\Delta)} \texttt{ban.hashlockUnlocked(carol)} \\ & \land \F_{[0, 12\Delta)} \texttt{apr.hashlockUnlocked(bob)} \\ & \land \F \texttt{assetRedeemed(alice)} \\ & \land \F \texttt{assetRedeemed(bob)} \\ & \land \F \texttt{assetRedeemed(carol)} \\ & \land \F \texttt{EscrowPremiumRefunded(alice)} \\ & \land \F \texttt{EscrowPremiumRefunded(bob)} \\ & \land \F \texttt{EscrowPremiumRefunded(carol)} \\ &\land \F \texttt{RedemptionPremiumRefunded(alice)}\\ &\land \F \texttt{RedemptionPremiumRefunded(bob)}\\ &\land \F \texttt{RedemptionPremiumRefunded(carol)} \end{align*} \paragraph{Safety} Below we show the specifications to check whether an individual party is conforming. If a party is found to be conforming, we ensure that there is no negative payoff for that party.
Specification to check that Alice is conforming: \begin{align*} &\varphi_{\mathsf{alice\_conform}} = \F_{[0, \Delta)} \texttt{apr.depositEscrowPr(alice)} \\ & \land \big( \F_{[0, 3\Delta)} \texttt{che.depositEscrowPr(carol)} \rightarrow \\ & \F_{[0, 4\Delta)} \texttt{che.depositRedemptionPr(alice)} \big) \\ & \land \big( \neg \texttt{che.depositRedemptionPr(alice)} \U \\ & \texttt{che.depositEscrowPr(carol)} \big) \land \\ & \big( \F_{[0, 6\Delta)} \texttt{apr.depositRedemptionPr(bob)} \rightarrow \\ & \F_{[0, 7\Delta)} \texttt{apr.assetEscrowed(alice)} \big) \\ & \land \big( \neg \texttt{apr.assetEscrowed(alice)} \U \\ & \texttt{apr.depositRedemptionPr(bob)} \big) \\ & \land \big( \F_{[0, 9\Delta)} \texttt{che.assetEscrowed(carol)} \rightarrow \\ & \F_{[0, 10\Delta)} \texttt{che.hashlockUnlocked(alice)} \big) \\ & \land \big( \neg \texttt{che.hashlockUnlocked(alice)} \U \\ & \texttt{che.assetEscrowed(carol)} \big) \land \\ & \big( \neg \texttt{ban.hashlockUnlocked(carol)} \U \\ & \texttt{che.hashlockUnlocked(alice)} \big) \\ & \land \big( \neg \texttt{apr.hashlockUnlocked(bob)} \U \\ & \texttt{che.hashlockUnlocked(alice)} \big) \end{align*} Specification to check that a conforming Alice does not have a negative payoff: \begin{align*} \varphi_{\mathsf{alice\_safety}} &= \varphi_{\mathsf{alice\_conform}} \rightarrow \\ \big( \sum_{\texttt{TransTo = alice}} \texttt{amount} &\geq \sum_{\texttt{TransFrom = alice}} \texttt{amount} \big) \end{align*} \paragraph{Hedged} Below we show the specification to check that, if a party is conforming and its escrowed asset is refunded, then it gets a premium as compensation.
\begin{align*} \varphi_{\mathsf{alice\_hedged}}= &\F \big( \varphi_{\mathsf{alice\_conform}}\\ &\land \texttt{apr.assetEscrowed(alice)} \big) \\ &\rightarrow \F \big( \sum_{\texttt{TransTo = alice}} \texttt{amount} \\ &\geq \sum_{\texttt{TransFrom = alice}} \texttt{amount}\\ &+\texttt{apr.redemptionPremium.amount} \big) \end{align*} \subsubsection{Auction Protocol} In the auction example, we consider Alice to be the auctioneer who would like to sell a ticket (worth 100 ERC20 tokens) on the ticket (\texttt{tckt}) blockchain, while Bob and Carol bid on the \texttt{coin} blockchain; the winner should get the ticket and pay the auctioneer what they bid, and the loser should get refunded. We denote the time at which they reach an agreement on the auction as $startTime$. $\Delta$ is the maximum time for parties to observe a state change of the contracts by others and take a step to make changes on the contracts. Let $TicketAuction$ be a contract managing the ``ticket'' on the ticket blockchain, and $CoinAuction$ be a contract managing the bids on the coin blockchain. The protocol is summarized as follows. \begin{itemize} \item Setup. Alice generates two hashes $h(s_b)$ and $h(s_c)$. $h(s_b)$ is assigned to Bob and $h(s_c)$ is assigned to Carol. If Bob is the winner, then Alice releases $s_b$. If Carol is the winner, then Alice releases $s_c$. If both $s_b$ and $s_c$ are released in $TicketAuction$, then the ticket is refunded. If both $s_b$ and $s_c$ are released in $CoinAuction$, then all coins are refunded. In addition, Alice escrows her ticket as 100 ERC20 tokens in $TicketAuction$ and deposits 2 tokens as premiums in $CoinAuction$. \item Step 1 (Bidding). Bob and Carol bid before $\Delta$ elapses after $startTime$. \item Step 2 (Declaration). Alice sends the winner's secret to both chains to declare a winner before $2\Delta$ elapses after $startTime$. \item Step 3 (Challenge). Bob and Carol challenge if they see two secrets or one secret missing, i.e.
Alice cheats, before $4\Delta$ elapses after $startTime$. They challenge by forwarding the secret released by Alice using a path signature scheme \cite{herlihy2018atomic}. \item Step 4 (Settle). After $4\Delta$ elapses after $startTime$, on the $CoinAuction$, if only the hashlock corresponding to the actual winner is unlocked, then the winner's bid goes to Alice. Otherwise, the winner's bid is refunded. The loser's bid is always refunded. If the winner's bid is refunded, all bidders, including the loser, get 1 token as premium to compensate them. On the $TicketAuction$, if only one secret is released, then the ticket is transferred to the corresponding party who is assigned the hash of the secret. Otherwise, the ticket is refunded. \end{itemize} \paragraph{Liveness} Below we show the specification to check that, if all parties are conforming, the winner (Bob) gets the ticket and the auctioneer gets the winner's bid. \begin{align*} \varphi_{\mathsf{liveness}} &= \F_{[0, \Delta)} \texttt{coin.bid(bob)} \\ &\land \F_{[0, 2\Delta)} \texttt{coin.declaration(alice, $s_b$)} \\ &\land \F_{[0, 2\Delta)} \texttt{tckt.declaration(alice, $s_b$)} \\ &\land \F_{(4\Delta,\infty)} \texttt{coin.redeemBid(any)} \\ & \land \F_{(4\Delta,\infty)} \texttt{coin.refundPremium(any)} \\ &\land \big(\texttt{coin.bid(carol)} \rightarrow \\ &\F_{[0, \Delta)} \texttt{coin.refundBid(any)}\big) \\ &\land \texttt{tckt.redeemTicket(any)} \\ & \land \neg \texttt{coin.challenge(any)} \\ & \land \neg \texttt{tckt.challenge(any)} \end{align*} \paragraph{Safety} Below we show the specification to check that, if a party is conforming, this party does not end up worse off. Take Bob (the winner) for example.
Specification to check that Bob is conforming: \begin{align*} \varphi_{\mathsf{bob\_conform}} &= \F_{[0, \Delta)} \texttt{coin.bid(bob)} \\ & \land \Big( \big( \texttt{coin.declaration(alice, $s_c$)} \lor \\ & \texttt{coin.challenge(carol, $s_c$)} \big) \rightarrow \\ & \big( \texttt{tckt.declaration(alice, $s_c$)} \lor \\ & \texttt{tckt.challenge(carol, $s_c$)} \lor \\ & \texttt{tckt.challenge(bob, $s_c$)} \big) \Big) \\ & \land \Big( \big( \texttt{coin.declaration(alice, $s_b$)} \lor \\ & \texttt{coin.challenge(carol, $s_b$)} \big) \rightarrow \\ & \big( \texttt{tckt.declaration(alice, $s_b$)} \lor \\ & \texttt{tckt.challenge(carol, $s_b$)} \lor \\ & \texttt{tckt.challenge(bob, $s_b$)} \big) \Big) \\ & \land \Big( \big( \texttt{tckt.declaration(alice, $s_c$)} \lor \\ & \texttt{tckt.challenge(carol, $s_c$)} \big) \rightarrow \\ & \big( \texttt{coin.declaration(alice, $s_c$)} \lor \\ & \texttt{coin.challenge(carol, $s_c$)} \lor \\ & \texttt{coin.challenge(bob, $s_c$)} \big) \Big) \\ & \land \Big( \big( \texttt{tckt.declaration(alice, $s_b$)} \lor \\ & \texttt{tckt.challenge(carol, $s_b$)} \big) \rightarrow \\ & \big( \texttt{coin.declaration(alice, $s_b$)} \lor \\ & \texttt{coin.challenge(carol, $s_b$)} \lor \\ & \texttt{coin.challenge(bob, $s_b$)} \big) \Big) \end{align*} Specification to check that Bob does not end up worse off: \begin{align*} \varphi_{\mathsf{bob\_safety}} &= \varphi_{\mathsf{bob\_conform}} \rightarrow \\ & \F \Big( \big( \texttt{coin.refundBid(any)} \\ & \land \texttt{coin.redeemPremium(any)}\big) \lor \\ & \texttt{tckt.redeemTicket(any)} \Big) \end{align*} \paragraph{Hedged} Below we show the specification to check that, if a party is conforming and its escrowed asset is refunded, then it gets a premium as compensation.
\begin{align*} \varphi_{\mathsf{bob\_hedged}} &= \G \Big(\varphi_{\mathsf{bob\_conform}} \\ & \land \big( \texttt{tckt.refundTicket(alice)} \lor \\ & \texttt{tckt.redeemTicket(carol)} \big) \Big) \rightarrow \\ & \F \big( \texttt{coin.refundBid(any)} \\ & \land \texttt{coin.redeemPremium(any)} \big) \end{align*} \section{Case Study and Evaluation} \label{sec:eval} In this section, we analyze our SMT-based solution. We note that we are not concerned with data collection, data transfer, etc., since in a distributed setting, the runtime of the actual SMT encoding is the dominating aspect of the monitoring process. We evaluate our proposed solution using traces collected from benchmarks of the tool \code{UPPAAL}~\cite{lpy97}\footnote{\code{\footnotesize UPPAAL} is a model checker for a network of timed automata. The tool-set is accompanied by a set of benchmarks for real-time systems. Here, we assume that the components of the network are partially synchronized.} models (Section~\ref{sec:uppaal}) and a case study involving smart contracts over multiple blockchains (Section~\ref{sec:blockchain}). \subsection{UPPAAL Benchmarks} \label{sec:uppaal} \subsubsection{Setup} We base our synthetic experiments on $3$ different \code{UPPAAL} benchmark models described in~\cite{uppaal04}. {\em The Train Gate} models a railway control system which controls access to a bridge. The bridge is controlled by a gate/operator and can be accessed by one train at a time. We monitor two properties: \begin{align*} \varphi_1 &= (\bigwedge_{i \in \mathcal{P}} \neg \texttt{Train[i].Cross}) \;\U\; \texttt{Train[1].Cross} \\ \varphi_2 &= \bigwedge_{i \in \mathcal{P}} \G \big( \texttt{Train[i].Appr} \rightarrow \\ & ~~~\F (\texttt{Gate.Occ} \;\U\; \texttt{Train[i].Cross}) \big) \end{align*} where $\mathcal{P}$ is the set of trains. {\em Fischer's Protocol} is a mutual exclusion protocol for $n$ processes.
We verify, first, that no two processes (\texttt{P}) are in the critical section (\texttt{cs}) at the same time, and second, that every request (\texttt{req}) is followed by the requesting process entering the critical section within some time bound. \begin{align*} \varphi_3 &= \G (\sum_{i \in \mathcal{P}} \texttt{P[i].cs} \leq 1) \\ \varphi_4 &= \G (\bigwedge_{i \in \mathcal{P}} \texttt{P[i].req} \rightarrow \F_\mathcal{I} \texttt{P[i].cs} ) \end{align*} {\em The Gossiping People} is a model consisting of $n$ people who wish to share their secrets with each other. We monitor, first, that each \texttt{Person} gets to know everyone else's \texttt{secret} within some time bound, and second, that each \texttt{Person} has \texttt{secrets} to share infinitely often. \begin{align*} \varphi_5 &= \F_\mathcal{I} (\bigwedge_{i, j \in \mathcal{P}} (i \neq j) \rightarrow \texttt{Person[i].secret[j]} ) \\ \varphi_6 &= \bigwedge_{i \in \mathcal{P}} \G (\F_\mathcal{I} \texttt{Person[i].secrets}) \end{align*} Each experiment involves two steps: (1) distributed computation/trace generation and (2) trace verification. For each \code{UPPAAL} model, we consider that each pair of consecutive events is $0.1 s$ apart, i.e., there are $10$ events per second per process. For the verification step, our monitoring algorithm executes on the generated computation and verifies it against an \textsf{\small MTL}\xspace specification. We consider the following parameters: (1) the time synchronization constant ($\epsilon$), (2) the \textsf{\small MTL}\xspace formula under monitoring, (3) the number of segments ($g$), (4) the computation length ($l$), (5) the number of processes in the system ($\mathcal{P}$), and (6) the event rate. We study the runtime of our monitoring algorithm against each of these parameters. We use a machine with $2\times$ Intel Xeon Platinum $8180$ ($2.5$ GHz) processors, $768$ GB of RAM, and $112$ vcores, with gcc version $9.3.1$.
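For intuition, a bounded-response property such as $\varphi_4$ can be checked directly on a single totally ordered trace. The sketch below is a simplification of our SMT-based monitor (which instead explores all interleavings consistent with the skew $\epsilon$); here, as an assumption of this sketch, the skew is handled pessimistically by shrinking the response deadline.

```python
# Direct check of a bounded-response property G(req -> F_[0,b] cs)
# on one timestamped trace. Simplification of the SMT-based monitor:
# the clock skew epsilon is handled pessimistically by shrinking
# the response deadline.

def bounded_response(trace, bound, epsilon=0.0):
    """trace: list of (time, event) pairs, sorted by time."""
    for t_req, ev in trace:
        if ev == "req":
            answered = any(
                ev2 == "cs" and t_req <= t2 <= t_req + bound - epsilon
                for t2, ev2 in trace
            )
            if not answered:
                return False
    return True
```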
\subsubsection{Analysis} We now study each of the parameters individually and analyze how it affects the runtime of our monitoring approach. Unless mentioned otherwise, all results correspond to $\epsilon = 15\,ms$, $|\mathcal{P}| = 2$, $g = 15$, $l = 2\,sec$, an event rate of $10$ events/sec, and $\varphi_4$ as the specification. \begin{figure*}[t] \centering \subcaptionbox{Different Formula\label{graph:diffFormula}} {\scalebox{0.3}{\input{graph_diffFormula}}} \hfill \subcaptionbox{Epsilon\label{graph:epsilon}} {\scalebox{0.3}{\input{graph_epsilon}}} \hfill \subcaptionbox{Segment Frequency\label{graph:segFreq}} {\scalebox{0.3}{\input{graph_segFreq}}} \hfill \subcaptionbox{Computation Length\label{graph:compLength}} {\scalebox{0.3}{\input{graph_compLength}}} \hfill \subcaptionbox{Number of Process\label{graph:numProcess}} {\scalebox{0.3}{\input{graph_numProcess}}} \hfill \subcaptionbox{Event Rate\label{graph:eventRate}} {\scalebox{0.3}{\input{graph_eventRate}}} \caption{Impact of different parameters on synthetic data} \vspace{-2mm} \end{figure*} \vspace{2mm} \noindent {\bf Impact of different formulas.} Fig.~\ref{graph:diffFormula} shows that the runtime of the monitor depends on two factors: the number of sub-formulas and the depth of nested temporal operators. $\varphi_3$ and $\varphi_6$ consist of the same number of predicates, but since $\varphi_6$ has nested temporal operators, it takes more time to verify, and its runtime is comparable to that of $\varphi_1$, which consists of two sub-formulas. This is because verification of the inner temporal formula often requires observing states in the next segment in order to come to the final verdict, which accounts for the additional runtime of the monitor. \vspace{2mm} \noindent {\bf Impact of epsilon.} Increasing the time synchronization constant ($\epsilon$) increases the number of potentially concurrent events that need to be considered.
This increases the complexity of verifying the computation and thereby increases the runtime of the algorithm. In addition, higher values of $\epsilon$ correspond to a larger number of possible traces that must be taken into consideration. In Fig.~\ref{graph:epsilon}, we observe that the runtime increases exponentially with the time synchronization constant. An interesting observation is that with a longer segment length, the runtime increases at a higher rate than with a shorter one. This is because a longer segment combined with a higher $\epsilon$ yields a larger number of possible traces that the monitoring algorithm must take into consideration, which increases the overall runtime of the verification algorithm considerably. \vspace{2mm} \noindent {\bf Impact of segment frequency.} Increasing the segment frequency shortens each segment, so verifying a segment involves fewer events. We observe the effect of segment frequency on the runtime of our verification algorithm in Fig.~\ref{graph:segFreq}. As the segment frequency increases, the runtime decreases until a certain value (here, $\approx 0.6$), after which the benefit of working with fewer events is outweighed by the time required to set up each SMT instance. Working with more segments means solving more SMT problems for the same computation length, and setting up each SMT problem takes a considerable amount of time, which explains the slight increase in runtime for higher values of segment frequency. \vspace{2mm} \noindent {\bf Impact of computation length.} As can be inferred from the previous results, the runtime of our verification algorithm is largely dictated by the number of events in the computation.
Thus, when working with a longer computation while keeping the maximum clock skew and the number of segments constant, we should see a longer verification time as well. The results in Fig.~\ref{graph:compLength} confirm this claim. \vspace{2mm} \noindent {\bf Impact of the number of truth values per segment.} In order to take into consideration all possible truth values of a computation, we execute the SMT problem multiple times, with the verdicts of all previous executions added to the SMT problem so that no verdict is repeated. In Fig.~\ref{graph:numProcess} we see that the runtime is affected linearly by the number of distinct verdicts. This is because the complexity of the problem that the SMT solver is trying to solve does not change when it searches for a different solution. \vspace{2mm} \noindent {\bf Impact of event rate.} Increasing the event rate means more events must be processed by our verification algorithm per segment, which increases the runtime at an exponential rate, as seen in Fig.~\ref{graph:eventRate}. We also observe that with a higher number of processes, the runtime of our algorithm grows faster for the same increase in event rate. \subsection{Blockchain} \label{sec:blockchain} \subsubsection{Setup} We implemented the following cross-chain protocols from \cite{xue2021hedging}: two-party swap, multi-party swap, and auction. The protocols were written as smart contracts in \code{Solidity} and tested using \code{Ganache}, a tool that creates mocked \code{Ethereum} blockchains. Using a single mocked chain, we mimicked cross-chain protocols via several (discrete) tokens and smart contracts, which do not communicate with each other. We use the hedged two-party swap example from \cite{xue2021hedging} to describe our experiments. The implementations of the other two protocols are similar.
Suppose Alice would like to exchange her apricot tokens for Bob's banana tokens, using the hedged two-party swap protocol shown in Fig.~\ref{fig:intro1}. This protocol provides protection for parties compared to a standard two-party swap protocol \cite{tiernolan}, in that if one party locks assets for exchange that are later refunded, this party gets a premium as compensation for locking its assets. The protocol consists of six steps to be executed by Alice and Bob in turn. In our example, we let the amount of tokens they are exchanging be 100 ERC20 tokens, the premium $p_b$ be 1 token, and $p_a+p_b$ be 2 tokens. We deploy two contracts, one on the apricot blockchain (denoted $ApricotSwap$) and one on the banana blockchain (denoted $BananaSwap$), by mimicking the two blockchains on Ethereum. Denote the time at which the parties reach an agreement on the swap as $startTime$. $\Delta$ is the maximum time for parties to observe the state change of contracts by others and take a step to make changes on contracts. In our experiment, $\Delta = 500$ milliseconds. By the definition of the protocol, the execution should be: \begin{itemize} \item Step 1. Alice deposits 2 tokens as premium in $BananaSwap$ before $\Delta$ elapses after $startTime$. \item Step 2. Bob deposits 1 token as premium in $ApricotSwap$ before $2\Delta$ elapses after $startTime$. \item Step 3. Alice escrows her 100 ERC20 tokens to $ApricotSwap$ before $3\Delta$ elapses after $startTime$. \item Step 4. Bob escrows his 100 ERC20 tokens to $BananaSwap$ before $4\Delta$ elapses after $startTime$. \item Step 5. Alice sends the preimage of the hashlock to $BananaSwap$ to redeem Bob's 100 tokens before $5\Delta$ elapses after $startTime$. Her premium is refunded. \item Step 6. Bob sends the preimage of the hashlock to $ApricotSwap$ to redeem Alice's 100 tokens before $6\Delta$ elapses after $startTime$. His premium is refunded. \end{itemize} If all parties are conforming, the protocol is executed as above.
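The $k\Delta$ deadlines of the six steps above can be checked mechanically. The sketch below (with names of our own choosing, not from the actual implementation) classifies a step's observed timestamp as in time, late, or ambiguous when it lies within the clock skew $\epsilon$ of the deadline; this ambiguity is exactly what the evaluation observes when $\epsilon$ approaches $\Delta$.

```python
DELTA = 500  # ms; maximum observation/reaction time, as in the experiment

# By the protocol description, step k must happen before k * DELTA
# elapses after startTime.
DEADLINES = {k: k * DELTA for k in range(1, 7)}

def step_in_time(step, timestamp_ms, eps=0):
    """Classify a step's observed timestamp (ms after startTime) against
    its deadline; with clock skew eps the verdict near the deadline is
    ambiguous, since the true occurrence time may lie on either side."""
    d = DEADLINES[step]
    if timestamp_ms + eps < d:
        return True    # definitely in time
    if timestamp_ms - eps >= d:
        return False   # definitely late
    return None        # verdict depends on the actual (unknown) skew
```

For instance, an observed timestamp of 1495 ms for Step 3 (deadline 1500 ms) is ambiguous under $\epsilon = 15$ ms.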
Otherwise, asset refund and premium redeem events are triggered to resolve the case where some party deviates. To avoid distraction, we do not provide details here. Each smart contract provides functions to let parties deposit premiums (\texttt{DepositPremium()}), escrow an asset (\texttt{EscrowAsset()}), send a secret to redeem assets (\texttt{RedeemAsset()}), refund an asset that is not redeemed before the timeout (\texttt{RefundAsset()}), and the counterparts for premiums (\texttt{RedeemPremium()} and \texttt{RefundPremium()}). Whenever a function is called successfully (meaning the transaction sent to the blockchain is included in a block), the blockchain emits an event that we then capture and log. The event interface is provided by the Solidity language. For example, when a party successfully calls \texttt{DepositPremium()}, the \texttt{PremiumDeposited} event is emitted on the blockchain. We capture and log this event, allowing us to view the values of \texttt{PremiumDeposited}'s declared fields: the time when it is emitted, the party that called \texttt{DepositPremium()}, and the amount of premium sent. These values are later used by the monitor to check against the specification. \subsubsection{Log Generation and Monitoring} Our tests simulated different executions of the protocols and generated 1024, 4096, and 3888 different sets of logs for the aforementioned protocols, respectively. We use the hedged two-party swap as an example to show how we generate different logs to simulate different executions of the protocol. Each contract enforces the order in which the steps are executed. For example, Step 3, \texttt{EscrowAsset()} on $ApricotSwap$, cannot be executed before Step 1 is taken, i.e., before the premium is deposited. This enforcement restricts the number of possible states of the contract. Assume we use a binary indicator to denote whether a step is attempted by the corresponding party.
$1$ denotes that a step is attempted, and $0$ denotes that the step is skipped. If the previous step is skipped, then the later step need not be attempted, since it will be rejected by the contract. We use an array to denote whether each step is taken on each contract. On each contract, the different executions of those steps can be $[1,1,1]$, meaning all steps are attempted, $[1,1,0]$, meaning the last step is skipped, and so on. Each chain has 4 different executions. We take the Cartesian product of the arrays of the two contracts to simulate different combinations of executions on the two contracts. Furthermore, if a step is attempted, we also simulate whether the step is taken late or in time, giving $2^6$ possibilities for the 6 steps. In summary, we generate $4 \cdot 4 \cdot 2^6=1024$ different logs. In our testing, after deploying the two contracts, we iterate over a 2D array of size $1024 \times 12$, each row being one possible execution, denoted as an array of length 12, to simulate the behavior of participants. For example, $[1,0,1,1,1,1,1,1,1,1,1,1]$ means the first step is attempted but late, and the remaining steps are all attempted in time. Indexed from 0, an even index denotes whether a step is attempted and the following odd index denotes whether that step is attempted in time or late. Guided by the indicators in the array, we let parties attempt to call a function of the contract or skip it. In this way, we produce $1024$ different logs containing the events emitted in each iteration. We check the policies mentioned in~\cite{xue2021hedging}: liveness, safety, and the ability to hedge against sore loser attacks. {\em Liveness} means that Alice should deposit her premium on the banana blockchain within $\Delta$ from when the swap started ($\F_{[0, \Delta)} \texttt{ban.premium\_deposited(alice)}$) and then Bob should deposit his premium; then they escrow their assets to exchange, redeem their assets (i.e.
the assets are swapped), and the premiums are refunded. In our testing, we always call a function to settle all assets in the contract if the asset transfer is triggered by timeout. Thus, in the specification, we also check that all assets are settled: \begin{align*} \varphi_{\mathsf{liveness}} &= \F_{[0, \Delta)} \texttt{ban.premium\_deposited(alice)} \land \\ & \F_{[0, 2\Delta)} \texttt{apr.premium\_deposited(bob)} \land \\ & \F_{[0, 3\Delta)} \texttt{apr.asset\_escrowed(alice)} \land \\ & \F_{[0, 4\Delta)} \texttt{ban.asset\_escrowed(bob)} \land \\ & \F_{[0, 5\Delta)} \texttt{ban.asset\_redeemed(alice)} \land \\ & \F_{[0, 6\Delta)} \texttt{apr.asset\_redeemed(bob)} \land \\ & \F_{[0, 5\Delta)} \texttt{ban.premium\_refunded(alice)} \land \\ & \F_{[0, 6\Delta)} \texttt{apr.premium\_refunded(bob)} \land \\ & \F_{[6\Delta, \infty)} \texttt{apr.all\_asset\_settled(any)} \land \\ & \F_{[5\Delta, \infty)} \texttt{ban.all\_asset\_settled(any)} \end{align*} {\em Safety} is provided only for conforming parties, since if a party deviates and behaves unreasonably, it is out of the scope of the protocol to protect them. Alice should always deposit her premium first to start the execution of the protocol ($\F_{[0, \Delta)} \texttt{ban.premium\_deposited(alice)}$) and proceed whenever Bob proceeds with the next step. For example, if Bob deposits his premium, then Alice should always go ahead and escrow her asset to exchange ($\F_{[0, 2\Delta)} \texttt{apr.premium\_deposited(bob)} \rightarrow \F_{[0, 3\Delta)} \texttt{apr.asset\_escrowed(alice)}$).
Alice should never release her secret if she does not redeem, which means Bob should not be able to redeem unless Alice redeems, which is expressed as $\neg \texttt{apr.asset\_redeemed(bob)} \U$ $\texttt{ban.asset\_redeemed(alice)}$: {\small \begin{align*} \varphi_{\mathsf{alice\_conform}} &= \F_{[0, \Delta)} \texttt{ban.premium\_deposited(alice)} \land \\ & \big( \F_{[0, 2\Delta)} \texttt{apr.premium\_deposited(bob)} \rightarrow \\ &~~~\F_{[0, 3\Delta)} \texttt{apr.asset\_escrowed(alice)} \big) \, \land \\ & \big( \F_{[0, 4\Delta)} \texttt{ban.asset\_escrowed(bob)} \rightarrow \\ &~~~\F_{[0, 5\Delta)} \texttt{ban.asset\_redeemed(alice)} \big) \, \land \\ & \big( \neg \texttt{apr.asset\_redeemed(bob)} \U \\ &~~~\texttt{ban.asset\_redeemed(alice)} \big) \end{align*} } By definition, safety means that a conforming party does not end up with a negative payoff. We track the assets transferred from and to each party in our logs. The requirement that a conforming party, e.g., Alice, is safe is specified as $\varphi_{\mathsf{alice\_safety}}$: \begin{align*} \varphi_{\mathsf{alice\_safety}} =& \varphi_{\mathsf{alice\_conform}} \rightarrow \\ \big( \sum_{\texttt{TransTo = alice}} \texttt{amount} &\geq \sum_{\texttt{TransFrom = alice}} \texttt{amount} \big) \end{align*} To enable a conforming party who escrows assets that are refunded in the end to hedge against the sore loser attack, our protocol should guarantee that this party gets a premium as compensation, which is expressed as $\varphi_{\mathsf{alice\_hedged}}$: \begin{align*} \varphi_{\mathsf{alice\_hedged}}= &\F \big( \varphi_{\mathsf{alice\_conform}} \land \\ & \texttt{apr.asset\_escrowed(alice)} \land \\ & \texttt{apr.asset\_refunded(any)} \big) \rightarrow \\ & \F \big( \sum_{\texttt{TransferTo = alice}} \texttt{amount} \geq \\ & \sum_{\texttt{TransferFrom = alice}} \texttt{amount}\\ &+\texttt{apr.premium.amount} \big) \end{align*} \subsubsection{Analysis of Results} \input{graph_blockc} We put our
monitor to the test on the traces generated by the Truffle-Ganache framework. To monitor the 2-party swap protocol, we do not divide the trace into multiple segments, due to the low number of events involved in the protocol. On the other hand, both the 3-party swap and the auction protocol involve a higher number of events, and thus we divide their traces into two segments ($g=2$). In Fig.~\ref{graph:blockc}, we show how the runtime of the monitor is affected by the number of events in each transaction log. Additionally, we generate transaction logs with different values for the deadline ($\Delta$) and the time synchronization constant ($\epsilon$) to put the safety of the protocol in jeopardy. We observe both $\mathtt{true}$ and $\mathtt{false}$ verdicts when $\epsilon \gtrapprox \Delta$. This is due to the nondeterministic timestamps owing to the assumption of a partially synchronous system: the observed timestamp of each event can be off by at most $\epsilon$. Thus, we recommend not to use a value of $\Delta$ that is comparable to the value of $\epsilon$ when designing the smart contract. \section{Introduction} \label{sec:intro} {\em Blockchain} technology~\cite{lu2019blockchain,nakamoto2008bitcoin} has drawn extensive attention from both industry and academia. With blockchain technology, people can trade in a peer-to-peer manner without mutually trusting each other, removing the need for a trusted centralized party. The concept of decentralization is extremely appealing, and the transparency, anonymity, and persistent storage provided by blockchain make it even more attractive. This revolutionary technology has triggered many applications in industry, ranging from cryptocurrency~\cite{herlihy2018atomic}, non-fungible tokens~\cite{herlihy2021cross}, and the internet of things~\cite{christidis2016blockchains} to health services~\cite{xu2019healthchain}.
Besides the huge success of cryptocurrencies, known as blockchain 1.0, especially Bitcoin~\cite{nakamoto2008bitcoin}, blockchain 2.0, known as {\em smart contracts}~\cite{cong2019blockchain}, is also promising in many scenarios. A smart contract is a program running on the blockchain. Its execution is triggered automatically and enforced by conditions preset in the code. In this way, the transfer of assets can be automated by the rules in the smart contracts, and human intervention cannot stop it. A typical smart contract implementation is provided by \code{Ethereum}~\cite{dannen2017introducing}, which uses the Turing-complete language {\em Solidity}~\cite{dannen2017introducing}. However, automating transactions by smart contracts also has its downsides. If a smart contract has bugs and does not do what is expected, then the lack of human intervention may lead to massive financial losses. For example, as pointed out by~\cite{ellul2018runtime}, version 1.5 of the Parity Multisig Wallet smart contract~\cite{parity} included a vulnerability which led to the loss of 30 million US dollars. Thus, developing effective techniques to verify the correctness of smart contracts is both urgent and important to protect against possible losses. Furthermore, when a protocol is made up of multiple smart contracts across different blockchains, the correctness of the protocol also needs to be verified. In this paper, we advocate a {\em runtime verification} (RV) approach to monitor the behavior of a system of blockchains with respect to a set of temporal logic formulas. Applying RV to multiple blockchains can be reduced to {\em distributed RV}, where a centralized or decentralized monitor observes the behavior of a distributed system in which processes do not share a global clock. Although RV deals with finite executions, the lack of a common global clock prevents a unique ordering of events in a distributed setting.
Put another way, the monitor can only form a partial order of events, which may result in different verification verdicts. Enumerating all possible orderings of events at run time incurs an exponential blow-up, making the approach unscalable. To add to this already complex task, most specifications for verifying blockchain smart contracts come with a time bound. This means that not only the ordering of events but also the actual physical time of occurrence of each event dictates the verification verdict. \begin{figure} \centering \includegraphics[width=0.47\textwidth]{two_party_swap_icdcs.pdf} \caption{Hedged Two-party Swap} \label{fig:intro1} \vspace{-5mm} \end{figure} In this paper, we propose an effective, sound, and complete solution to distributed RV for timed specifications expressed in the {\em metric temporal logic} (\textsf{\small MTL}\xspace)~\cite{koy90}. To present a high-level view of \textsf{\small MTL}\xspace, consider the {\em two-party swap} protocol~\cite{herlihy2021cross} shown in Fig.~\ref{fig:intro1}. Alice and Bob, in possession of Apricot and Banana blockchain assets respectively, want to swap their assets without falling victim to a sore-loser attack. There are a number of requirements that should be followed by conforming parties to discourage any attack on themselves. We use \textsf{\small MTL}\xspace to express such requirements. One such requirement, that Bob should not redeem his asset before Alice redeems hers within eight time units, can be represented by the \textsf{\small MTL}\xspace formula: $$ \varphi_{\mathsf{spec}} = \neg \texttt{Apr.Redeem}(\mathit{bob}) \, \U_{[0, 8)} \texttt{Ban.Redeem}(\mathit{alice}). $$ We consider a fault-proof central monitor which has a complete view of the system but no access to a global clock.
In order to limit the blow-up of states posed by the absence of a global clock, we make a practical assumption about the presence of a {\em bounded clock skew} $\epsilon$ between the local clocks of every pair of processes. This is guaranteed by a synchronization algorithm (e.g., NTP~\cite{ntp}). This setting, where no global clock is assumed and the impact of asynchrony is limited to bounded clock drift, is known as partial synchrony. Such an assumption limits the window of partial orders of events to within $\epsilon$ time units and significantly reduces the combinatorial blow-up caused by nondeterminism due to concurrency. Existing distributed RV techniques either assume a global clock when working with time-sensitive specifications~\cite{basin2015, worrell2019} or use untimed specifications when assuming partial synchrony~\cite{ganguly2020,mbab21}. \input{fig_intro2} We introduce an SMT\footnote{{\em Satisfiability modulo theories} (SMT) is the problem of determining whether a formula over Boolean combinations of constraints involving real numbers, integers, and/or various data structures is satisfiable.}-based {\em progression-based} formula rewriting technique over distributed computations, which takes into consideration the events observed thus far to rewrite the specification for future extensions. Our monitoring algorithm accounts for all possible orderings of events without explicitly generating them when evaluating \textsf{\small MTL}\xspace formulas. For example, in Fig.~\ref{fig:intro2}, we see the events and their times of occurrence in the two blockchains, Apricot ($Apr$) and Banana ($Ban$), divided into two segments, $seg_1$ and $seg_2$, for computational purposes.
Considering a maximum clock skew $\epsilon = 2$ and the specification $\varphi_{\mathsf{spec}}$, at the end of the first segment, we have two possible rewritten formulas for the next segment: \begin{align*} \varphi_{\mathsf{spec}_1} & = \neg \texttt{Apr.Redeem}(bob) \, \U_{[0, 4)} \texttt{Ban.Redeem}(alice)\\ \varphi_{\mathsf{spec}_2} & = \neg \texttt{Apr.Redeem}(bob) \, \U_{[0, 3)} \texttt{Ban.Redeem}(alice) \end{align*} This is possible due to the different orderings and different times of occurrence of the events $\texttt{Deposit}(p_b)$ and $\texttt{Deposit}(p_a + p_b)$. In other words, the possible time of occurrence of the event $\texttt{Deposit}(p_b)$ (resp. $\texttt{Deposit}(p_a + p_b)$) is either 2, 3, or 4 (resp. 3, 4, or 5) due to the maximum clock skew of 2. Likewise, at the end of $seg_2$, $\varphi_{\mathsf{spec}_1}$ evaluates to $\mathtt{true}$, whereas $\varphi_{\mathsf{spec}_2}$ evaluates to $\mathtt{false}$. This is because, even if we consider the scenario where $\texttt{Ban.Redeem(alice)}$ occurs before $\texttt{Apr.Redeem(bob)}$, a possible time of occurrence of $\texttt{Ban.Redeem(alice)}$ is $8$ (resp. $6$), which makes $\varphi_{\mathsf{spec}_2}$ (resp. $\varphi_{\mathsf{spec}_1}$) evaluate to $\mathtt{false}$ (resp. $\mathtt{true}$). We have fully implemented our technique\footnote{\url{https://github.com/ritam9495/rv-mtl-blockc}} and report the results of rigorous experiments on monitoring synthetic data, using benchmarks of the tool \code{UPPAAL}~\cite{lpy97}, as well as on monitoring correctness, liveness, and conformance conditions for smart contracts on blockchains. We put our monitoring algorithm to the test by studying the effect of different parameters on its runtime and report on each of them. Using our technique, we learn that one should not use a value of $\Delta$ (transaction deadline) comparable to the value of the clock skew $\epsilon$ when designing a smart contract.
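The interval rewriting illustrated above can be sketched in a few lines of Python. This is a simplified view under our own naming: it only enumerates the candidate true times of a locally stamped event and shrinks the until interval by a candidate elapsed duration, ignoring the rest of the progression machinery.

```python
def possible_times(sigma, eps):
    """Candidate true times of an event locally stamped sigma, under a
    maximum clock skew eps (discrete time)."""
    return list(range(sigma - eps + 1, sigma + eps))

def progress_until_interval(interval, elapsed):
    """Rewrite the interval of U_[lo, hi) after `elapsed` time units pass
    without the right-hand operand holding."""
    lo, hi = interval
    return (max(lo - elapsed, 0), hi - elapsed)
```

With $\epsilon = 2$, an event locally stamped 3 may truly occur at time 2, 3, or 4; progressing $\U_{[0,8)}$ by the two candidate elapsed durations 4 and 5 yields the two rewritten intervals $[0,4)$ and $[0,3)$ of $\varphi_{\mathsf{spec}_1}$ and $\varphi_{\mathsf{spec}_2}$.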
\paragraph*{Organization} Section~\ref{sec:prelim} presents the background concepts. The formal statement of our RV problem is discussed in Section~\ref{sec:sol}. The formula progression rules and the SMT-based solution are described in Sections~\ref{sec:progress} and~\ref{sec:smt}, respectively, while experimental results are analyzed in Section~\ref{sec:eval}. Related work is discussed in Section~\ref{sec:related} before we make concluding remarks in Section~\ref{sec:concl}. The appendix includes more details about our case studies. \section{Related Work} \label{sec:related} Centralized and decentralized online predicate detection in asynchronous distributed systems has been studied in~\cite{cgnm13,mg05}. Extensions to include temporal operators appear in~\cite{og07,mb15}. The line of work in~\cite{cgnm13,mg05,og07,mb15,svar04} considers a fully asynchronous system. An SMT-based predicate detection solution has been introduced in~\cite{vyktd17}. On the other hand, runtime monitoring for synchronous distributed systems has been studied in~\cite{ds19,cf16,bf16}. This approach has shortcomings, the major one being the assumption of a common global clock shared among all processes. Finally, fault-tolerant monitoring, where monitors can crash, has been investigated in~\cite{bfrrt16} for asynchronous and in~\cite{kb18} for synchronized distributed processes. Runtime monitoring of time-sensitive distributed systems has been studied in~\cite{basin2015, basin2010, worrell2019, rosu2005}. With the onset of blockchains, the security vulnerabilities posed by smart contracts have been studied in~\cite{garcia2020, pace2021, pace2021a, rosu2018, rosu2018a}. The major limitation of these works is that all of them consider the system to be synchronous, with a global clock present.
However, smart contracts often involve multiple blockchains, and thus we consider a partially synchronous system where a synchronization algorithm limits the maximum clock skew among processes to a constant. An SMT-based solution was studied in~\cite{ganguly2020}, which we extend to a more expressive time-bounded logic. \section{Conclusion} \label{sec:concl} In this paper, we study distributed runtime verification. We propose a technique which takes an \textsf{\small MTL}\xspace formula and a distributed computation as input. Assuming partial synchrony among all processes, we first chop the computation into several segments and then apply a progression-based formula rewriting monitoring algorithm, implemented as an SMT decision problem, in order to verify the correctness of the distributed system with respect to the formula. We conducted extensive synthetic experiments on traces generated by the \code{UPPAAL} tool and on a set of blockchain smart contracts. For future work, we plan to study the trade-off between the accuracy and scalability of our approach. Another important extension of our work is distributed runtime verification where the processes are dynamic, i.e., a process can crash and can also restore its state at any given time during execution. This will let us study a wide range of applications, including airspace monitoring. \bibliographystyle{IEEEtran} \section{Preliminaries} \label{sec:prelim} In this section, we present an overview of distributed computations and the metric temporal logic (\textsf{\small MTL}\xspace). \subsection{Distributed Computation} We consider a loosely coupled asynchronous message-passing system consisting of $n$ reliable processes (that do not fail), denoted by $\mathcal{P} = \{ P_1, P_2, \cdots, P_n \}$. The processes share neither memory nor a common global clock. Channels are assumed to be FIFO and lossless.
In our model, each local state change is represented by an event, and each message activity (send or receive) is represented by an event as well. Message passing does not change the state of the process, and we disregard the content of messages, as it is of no use to our monitoring technique. We refer to a global clock which acts as the ``real'' timekeeper. It is to be noted that this global clock exists only for theoretical reasons and is not available to any of the individual processes. We assume a partially synchronous system. For each process $P_i$, where $i \in [1, n]$, the local clock can be represented as a monotonically increasing function $c_i: \mathbb{Z}_{\geq 0} \rightarrow \mathbb{Z}_{\geq 0}$, where $c_i(\mathcal{G})$ is the value of the local clock at global time $\mathcal{G}$. Since we are dealing with discrete-time systems, for simplicity and without loss of generality, we represent time with non-negative integers $\mathbb{Z}_{\geq 0}$. For any two processes $P_i$ and $P_j$, where $i \neq j$, we assume: $$ \forall \mathcal{G} \in \mathbb{Z}_{\geq 0}. \mid c_i(\mathcal{G}) - c_j(\mathcal{G}) \mid < \epsilon, $$ where $\epsilon > 0$ is the maximum clock skew. The value of $\epsilon$ is constant and known to the monitor. This assumption is met by the presence of a clock synchronization algorithm, like NTP~\cite{ntp}, ensuring bounded clock skew among all processes. We denote an {\em event} on process $P_i$ by $e^i_{\sigma}$, where $\sigma = c_i(\mathcal{G})$, that is, the local time of occurrence of the event at some global time $\mathcal{G}$.
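The clock-based part of the event ordering induced by this assumption (formalized in the definition that follows) can be sketched as below. The representation of events as (process, local timestamp) pairs is our own, and this sketch deliberately omits the message send/receive rule and transitivity.

```python
def happened_before(e, f, eps):
    """Timestamp-based happened-before under partial synchrony.
    Events are (process_id, local_timestamp) pairs: events on the same
    process are ordered by timestamp, and across processes e precedes f
    only when e's timestamp plus the maximum skew is before f's."""
    (i, s1), (j, s2) = e, f
    if i == j:
        return s1 < s2
    return s1 + eps < s2
```

Two cross-process events whose timestamps differ by at most $\epsilon$ are ordered in neither direction, i.e., they are treated as concurrent.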
\begin{definition} A {\em distributed computation} consisting of $n$ processes is represented by the pair $(\mathcal{E}, \rightsquigarrow)$, where $\mathcal{E}$ is a set of events partially ordered by Lamport's happened-before ($\rightsquigarrow$) relation~\cite{hb1978}, subject to the partial synchrony assumption: \begin{itemize} \item For every process $P_i$, $1 \leq i \leq n$, all the events happening on it are totally ordered, that is, $$\forall \sigma, \sigma' \in \mathbb{Z}_{\geq 0}: (\sigma < \sigma') \rightarrow (e^i_{\sigma} \rightsquigarrow e^i_{\sigma'}); $$ \item If $e$ is a message sending event in a process and $f$ is the corresponding message receiving event in another process, then we have $e \rightsquigarrow f$; \item For any two processes $P_i$ and $P_j$ and two corresponding events $e^i_{\sigma}, e^j_{\sigma'} \in \mathcal{E}$, if $\sigma + \epsilon < \sigma'$, then $e^i_{\sigma} \rightsquigarrow e^j_{\sigma'}$, where $\epsilon$ is the maximum clock skew, and \item If $e \rightsquigarrow f$ and $f \rightsquigarrow g$, then $e \rightsquigarrow g$.$~\blacksquare$ \end{itemize} \end{definition} \begin{definition} Given a distributed computation $(\mathcal{E}, \rightsquigarrow)$, a subset of events $\mathcal{C} \subseteq \mathcal{E}$ is said to form a {\em consistent cut} if and only if, whenever $\mathcal{C}$ contains an event $e$, it also contains all events that happened before $e$. Formally, $$ \forall e, f \in \mathcal{E}. (e \in \mathcal{C}) \land (f \rightsquigarrow e) \rightarrow f \in \mathcal{C}.~\blacksquare $$ \end{definition} The frontier of a consistent cut $\mathcal{C}$, denoted by $\mathsf{front}(\mathcal{C})$, is the set of events that happened last in each process in the cut. That is, $\mathsf{front}(\mathcal{C})$ is the set of $e^i_{last}$ for each $i \in [1, |\mathcal{P}|]$ with $e^i_{last} \in \mathcal{C}$, where $e^i_{last}$ is the last event of $P_i$ in the cut, i.e., $\forall e^i_{\sigma} \in \mathcal{C}. 
(e^i_{\sigma} \neq e^i_{last}) \rightarrow (e^i_{\sigma} \rightsquigarrow e^i_{last})$. \subsection{Metric Temporal Logic (MTL)~\cite{mtl1992,mtl1994}} \label{subsec:mtl} Let $\mathbb{I}$ be a set of nonempty intervals over $\mathbb{Z}_{\geq 0}$. We define an interval, $\mathcal{I}$, to be $$ [\mathit{start}, \mathit{end}) \triangleq \{ a \in \mathbb{Z}_{\geq 0} \mid \mathit{start} \leq a < \mathit{end} \} $$ where $\mathit{start} \in \mathbb{Z}_{\geq 0}$, $\mathit{end} \in \mathbb{Z}_{\geq 0} \cup \{ \infty \}$, and $\mathit{start} < \mathit{end}$. We define $\mathsf{AP}$ as the set of all {\em atomic propositions} and $\Sigma = 2^{\mathsf{AP}}$ as the set of all possible {\em states}. A {\em trace} is represented by a pair consisting of a sequence of states, denoted by $\alpha = s_0s_1 \cdots$, where $s_i \in \Sigma$ for every $i \geq 0$, and a sequence of non-negative numbers, denoted by $\bar{\tau} = \tau_0\tau_1 \cdots$, where $\tau_i \in \mathbb{Z}_{\geq 0}$ for all $i \geq 0$. We represent the set of all infinite traces by the pair of infinite sets $(\Sigma^\omega, \mathbb{Z}_{\geq 0}^\omega)$. The suffix $s_ks_{k+1}\cdots$ (resp. $\tau_k\tau_{k+1}\cdots$) is represented by $\alpha^k$ (resp. $\tau^k$). For an infinite trace $\alpha = s_0s_1 \cdots$ and $\bar{\tau} = \tau_0\tau_1 \cdots$, the sequence $\bar{\tau}$ is non-decreasing, meaning $\tau_{i+1} \geq \tau_{i}$ for all $i \geq 0$. \paragraph*{Syntax.} The syntax of metric temporal logic (\textsf{\small MTL}\xspace) over infinite traces is defined by the following grammar: $$ \varphi ::= p \mid \neg \varphi \mid \varphi_1 \lor\varphi_2 \mid \varphi_1 \U_\mathcal{I} \varphi_2 $$ where $p \in \mathsf{AP}$ and $\U_\mathcal{I}$ is the `until' temporal operator with time interval $\mathcal{I}$. Note that other propositional and temporal operators can be represented using the ones mentioned above.
For example, $\mathtt{true} = p \lor \neg p$, $\mathtt{false} = \neg \mathtt{true}$, $\varphi_1 \rightarrow \varphi_2 = \neg \varphi_1 \lor \varphi_2$, $\varphi_1 \land \varphi_2 = \neg (\neg \varphi_1 \lor \neg \varphi_2)$, $\F_\mathcal{I} \varphi = \mathtt{true} \U_\mathcal{I} \varphi$ (``eventually'') and $\G_\mathcal{I} \varphi = \neg (\F_\mathcal{I} \neg \varphi)$ (``always''). We denote the set of all \textsf{\small MTL}\xspace formulas by $\Phi_{\textsf{\small MTL}\xspace}$. \paragraph*{Semantics.} The semantics of metric temporal logic (\textsf{\small MTL}\xspace) is defined over a trace $\alpha = s_0s_1 \cdots$ with time sequence $\bar{\tau} = \tau_0\tau_1 \cdots$ as follows: \[ \begin{array}{l l l} (\alpha, \bar{\tau}, i) \models p~ & \text{iff} & p \in s_i \\ (\alpha, \bar{\tau}, i) \models \neg \varphi & \text{iff} & (\alpha, \bar{\tau}, i) \not\models \varphi \\ (\alpha, \bar{\tau}, i) \models \varphi_1 \lor \varphi_2 & \text{iff} & (\alpha, \bar{\tau}, i) \models \varphi_1 \text{ or } (\alpha, \bar{\tau}, i) \models \varphi_2 \\ (\alpha, \bar{\tau}, i) \models \varphi_1 \U_\mathcal{I} \varphi_2 & \text{iff} & \exists j \geq i. \tau_j - \tau_i \in \mathcal{I} \land (\alpha, \bar{\tau}, j) \models \\ & & \varphi_2 \land \forall k \in [i, j), (\alpha, \bar{\tau}, k) \models \varphi_1 \end{array} \] Note that $(\alpha, \bar{\tau}) \models \varphi$ holds if and only if $(\alpha, \bar{\tau}, 0) \models \varphi$. In the context of runtime verification, we introduce the notion of finite \textsf{\small MTL}\xspace. The truth values are represented by the set $\mathsf{B}_2 = \{ \top, \bot \}$, where $\top$ (resp. $\bot$) denotes that a formula is satisfied (resp. violated) on a given finite trace. We represent the set of all finite traces by a pair of finite sets, $(\Sigma^{*}, \mathbb{Z}_{\geq 0}^{*})$.
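To make finite-trace evaluation concrete, the derived operators $\F_\mathcal{I}$ and $\G_\mathcal{I}$ can be checked directly on a finite timed word. The following is a minimal sketch (the helper names and the sample word are our own, not part of the formalism):

```python
def finally_holds(word, start, end, p):
    """F_[start,end) p on a finite timed word [(props, t), ...]:
    p must hold at some position whose time offset lies in the interval."""
    t0 = word[0][1]
    return any(start <= t - t0 < end and p in props for props, t in word)

def globally_holds(word, start, end, p):
    """G_[start,end) p, the dual of F: no position inside the interval violates p."""
    t0 = word[0][1]
    return all(p in props for props, t in word if start <= t - t0 < end)

word = [({"p"}, 0), ({"p"}, 3), (set(), 8)]
print(finally_holds(word, 0, 5, "p"),   # True: p holds at offsets 0 and 3
      globally_holds(word, 0, 5, "p"),  # True: only offsets 0 and 3 lie in [0, 5)
      globally_holds(word, 0, 10, "p")) # False: offset 8 lies in [0, 10) without p
```

On a finite trace these checks can only inspect the observed prefix; an interval that reaches beyond the trace boundary is precisely the situation that formula progression has to handle.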
For a finite trace $\alpha = s_0s_1\cdots s_n$ and $\bar{\tau} = \tau_0\tau_1\cdots \tau_n$, the only semantics that needs to be redefined is that of $\U$ (`until'), as follows: \begin{equation} \nonumber [(\alpha, \bar{\tau}, i) \models_F \varphi_1 \U_\mathcal{I} \varphi_2] = \begin{cases} \top & \text{if }\exists j \geq i.~ (\tau_j - \tau_i \in \mathcal{I}) \land \\ & ~([(\alpha, \bar{\tau}, j) \models_F \varphi_2] = \top) \land \forall k \in \\ & ~[i, j) : ([(\alpha, \bar{\tau}, k) \models_F \varphi_1] = \top)\\ \bot & \text{otherwise}. \end{cases} \end{equation} In order to further illustrate the difference between \textsf{\small MTL}\xspace and finite \textsf{\small MTL}\xspace, we consider the formula $\varphi = \F_\mathcal{I} p$ and a trace $\alpha = s_0s_1\cdots s_n$ and $\bar{\tau} = \tau_0\tau_1\cdots\tau_n$. We have $[(\alpha, \bar{\tau}) \models_F \varphi] = \top$ if for some $j \in [0, n]$ we have $\tau_j - \tau_0 \in \mathcal{I}$ and $p \in s_j$, and $\bot$ otherwise. Now, consider the formula $\varphi = \G_\mathcal{I} p$: we have $[(\alpha, \bar{\tau}) \models_F \varphi] = \bot$ if for some $j \in [0, n]$ we have $\tau_j - \tau_0 \in \mathcal{I}$ and $p \not\in s_j$, and $\top$ otherwise. \section{Formal Problem Statement} \label{sec:sol} In a partially synchronous system, different orderings of events are possible, and each unique {\em ordering} of events~\cite{ylies2012} might evaluate to a different verdict. In other words, a partially synchronous distributed computation $(\mathcal{E}, \rightsquigarrow)$ may admit different orderings of events, primarily due to the different interleavings that are possible. Thus, the same distributed computation may yield different verdicts for different orderings of events. Let $(\mathcal{E}, \rightsquigarrow)$ be a distributed computation.
A sequence of consistent cuts is of the form $\mathcal{C}_0\mathcal{C}_1\mathcal{C}_2 \cdots$, where for all $i \geq 0$, we have (1) $\mathcal{C}_i \subset \mathcal{C}_{i+1}$, (2) $|\mathcal{C}_i| + 1 = |\mathcal{C}_{i+1}|$, and (3) $\mathcal{C}_0 = \emptyset$. The set of all sequences of consistent cuts is denoted by $\mathbb{C}$. We note that the time interval $\mathcal{I}$ in the syntax of \textsf{\small MTL}\xspace is interpreted in terms of the physical (global) time $\mathcal{G}$. Thus, when deriving all the possible traces given the distributed computation \linebreak $(\mathcal{E}, \rightsquigarrow)$, we have to account for all the different orders in which the events could possibly have occurred with respect to $\mathcal{G}$. This involves replacing the local time of occurrence of an event $e^i_\sigma$ with the set of events $\{e^i_{\sigma'} \mid \sigma' \in [\max\{0, \sigma - \epsilon + 1\}, \sigma + \epsilon)\}$. This accounts for the maximum clock drift that is possible on the local clock of a process when compared to the global clock. \input{fig_prelim_ps} For example, given the computation in Figure~\ref{fig:prelim}, a maximum clock skew $\epsilon = 2$ and an \textsf{\small MTL}\xspace formula $\varphi = a \U_{[0, 6)} b$, one has to consider all possible traces, including $(a,1)(a,2)(b,4)(\neg a,5) \models \varphi$ and $(a,1)(a,2)(\neg a,4)(b,5) \not\models \varphi$. The contradictory results are due to the different times of occurrence of events that need to be considered. Given a sequence of consistent cuts, it is evident that for all $j > 0$, $|\mathcal{C}_j - \mathcal{C}_{j-1}| = 1$, and the event in $\mathcal{C}_j - \mathcal{C}_{j-1}$ is the last event that was added to the cut $\mathcal{C}_j$.
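The two contradictory orderings in the example above can be checked mechanically. The sketch below (a toy evaluator with names of our choosing) evaluates $a \U_{[0, 6)} b$ on both timed words and reproduces the two verdicts:

```python
def until_holds(word, start, end):
    """Evaluate a U_[start,end) b on a finite timed word [(props, t), ...]."""
    t0 = word[0][1]
    for j, (props, tj) in enumerate(word):
        # b must occur at a time offset inside the interval...
        if start <= tj - t0 < end and "b" in props:
            # ...with a holding at every earlier position.
            if all("a" in p for p, _ in word[:j]):
                return True
    return False

# The two interleavings of the concurrent events from the running example:
t1 = [({"a"}, 1), ({"a"}, 2), ({"b"}, 4), (set(), 5)]
t2 = [({"a"}, 1), ({"a"}, 2), (set(), 4), ({"b"}, 5)]
print(until_holds(t1, 0, 6), until_holds(t2, 0, 6))  # True False
```

Collecting such verdicts over all admissible orderings of one computation is what yields a set of verdicts rather than a single one.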
To translate monitoring of a distributed system into monitoring a trace, we define a sequence of natural numbers $\bar{\pi} = \pi_0\pi_1\cdots$, where $\pi_0 = 0$ and for each $j \geq 1$, we have $\pi_{j} = \sigma$, such that $\mathsf{front}(\mathcal{C}_{j}) - \mathsf{front}(\mathcal{C}_{j-1}) = \{e^i_{\sigma}\}$. To maintain time monotonicity, we only consider sequences where for all $i \geq 0$, $\pi_{i+1} \geq \pi_i$. The set of all traces that can be formed from $(\mathcal{E}, \rightsquigarrow)$ is defined as: $$ \mathsf{Tr}(\mathcal{E}, \rightsquigarrow) = \Big\{ \mathsf{front}(\mathcal{C}_0)\mathsf{front}(\mathcal{C}_1)\cdots \mid \mathcal{C}_0\mathcal{C}_1\cdots \in \mathbb{C} \Big\}. $$ In the sequel, we assume that every sequence $\alpha$ of frontiers in $\mathsf{Tr}(\mathcal{E}, \rightsquigarrow)$ is associated with a sequence $\bar{\pi}$. Thus, to comply with the semantics of \textsf{\small MTL}\xspace, we refer to the elements of $\mathsf{Tr}(\mathcal{E}, \rightsquigarrow)$ by pairs of the form $(\alpha, \bar{\pi})$. Now that we have the set of all possible traces, we evaluate an \textsf{\small MTL}\xspace formula $\varphi$ with respect to the computation $(\mathcal{E}, \rightsquigarrow)$ as follows: $$ [(\mathcal{E}, \rightsquigarrow) \models_F \varphi] = \Big\{ (\alpha, \bar{\pi}, 0) \models_F \varphi \mid (\alpha, \bar{\pi}) \in \mathsf{Tr}(\mathcal{E}, \rightsquigarrow) \Big\}. $$ This boils down to a set of verdicts, since a distributed computation may involve several traces and each trace might evaluate to a different verdict. \paragraph*{Overall idea of our solution.} To solve the above problem (evaluating all possible verdicts), we propose a monitoring approach based on formula-rewriting (Section~\ref{sec:progress}) and SMT solving (Section~\ref{sec:smt}).
Our approach involves iteratively (1) chopping a distributed computation into a sequence of smaller segments to reduce the problem size, and (2) progressing the \textsf{\small MTL}\xspace formula through each segment by invoking an SMT solver, which results in a new \textsf{\small MTL}\xspace formula for the next segment. Since each computation/segment corresponds to a set of possible traces due to partial synchrony, each invocation of the SMT solver may result in a different verdict. \section{Formula Progression for MTL} \label{sec:progress} We start describing our solution by explaining the formula progression technique. \begin{definition} A {\em progression function} is of the form $\mathsf{Pr} : \Sigma^{*} \times \mathbb{Z}_{\geq 0}^{*} \times \Phi_{\textsf{\small MTL}\xspace} \rightarrow \Phi_{\textsf{\small MTL}\xspace}$ and is defined for all finite traces $(\alpha, \bar{\tau}) \in (\Sigma^{*}, \mathbb{Z}_{\geq 0}^{*})$, infinite traces $(\alpha', \bar{\tau}') \in (\Sigma^{\omega}, \mathbb{Z}_{\geq 0}^{\omega})$ and \textsf{\small MTL}\xspace formulas $\varphi \in \Phi_{\textsf{\small MTL}\xspace}$, such that $(\alpha.\alpha', \bar{\tau}.\bar{\tau}') \models \varphi$ if and only if $(\alpha', \bar{\tau}') \models \mathsf{Pr}(\alpha, \bar{\tau}, \varphi)$. $~\blacksquare$ \end{definition} \vspace{2mm} It is to be noted that, compared to the classic formula progression technique in~\cite{Klaus2001}, the function $\mathsf{Pr}$ here takes a finite trace as input, while the algorithm in~\cite{Klaus2001} rewrites the formula after every observed state. When monitoring a partially synchronous distributed system, where multiple verdicts are possible and no unique ordering of events exists, the classical state-by-state formula rewriting technique is of little use. The motivation for our approach comes from the fact that, for computational reasons, we chop the computation into smaller segments and the verification of each segment is done through an SMT query.
A state-by-state approach would incur a huge number of SMT queries. Let $\mathcal{I} = [\mathit{start}, \mathit{end})$ denote an interval. By $\mathcal{I} - \tau$, we mean the interval $\mathcal{I}' = [\mathit{start}', \mathit{end}')$, where $\mathit{start}' = \max\{0, \mathit{start} - \tau \}$ and $\mathit{end}' = \max\{0, \mathit{end} - \tau \}$. Also, for two time instances, $\tau_i$ and $\tau_0$, we let $\InInt{i}$ return $\mathtt{true}$ or $\mathtt{false}$ depending on whether $\tau_i - \tau_0 \in \mathcal{I}$. \begin{figure*} \begin{minipage}{0.48\textwidth} \input{alg_globally} \end{minipage} \hfill \begin{minipage}{0.48\textwidth} \input{alg_eventually} \end{minipage} \end{figure*} \input{alg_until} \vspace{2mm} \noindent \textbf{Progressing atomic propositions.} For an \textsf{\small MTL}\xspace formula of the form $\varphi = p$, where $p \in \mathsf{AP}$, the result depends on whether or not $p \in \alpha(0)$. This serves as the base case for the other temporal and logical operators: \begin{equation} \nonumber \mathsf{Pr}(\alpha, \bar{\tau}, \varphi) = \begin{cases} \mathtt{true} & \text{if }~p \in \alpha(0) \\ \mathtt{false} & \text{if }~p \not\in \alpha(0) \end{cases} \end{equation} \noindent \textbf{Progressing negation.} For an \textsf{\small MTL}\xspace formula of the form $\varphi = \neg \phi$, we have: $$ \mathsf{Pr}(\alpha, \bar{\tau}, \varphi) = \neg \mathsf{Pr}(\alpha, \bar{\tau}, \phi). $$ \noindent \textbf{Progressing disjunction.} Let $\varphi = \varphi_1 \lor \varphi_2$.
Apart from the trivial cases, the result of progressing $\varphi_1 \lor \varphi_2$ is based on the progression of $\varphi_1$ and/or that of $\varphi_2$: \begin{equation} \nonumber \mathsf{Pr}(\alpha, \bar{\tau}, \varphi) = \begin{cases} \mathtt{true} & \text{if }~\mathsf{Pr}(\alpha, \bar{\tau}, \varphi_1) = \mathtt{true} \: \lor \\ & \mathsf{Pr}(\alpha, \bar{\tau}, \varphi_2) = \mathtt{true} \\ \mathtt{false} & \text{if }~\mathsf{Pr}(\alpha, \bar{\tau}, \varphi_1) = \mathtt{false} \: \land \\ & \mathsf{Pr}(\alpha, \bar{\tau}, \varphi_2) = \mathtt{false} \\ \varphi_2' & \text{if }~\mathsf{Pr}(\alpha, \bar{\tau}, \varphi_1) = \mathtt{false} \: \land \\ & \mathsf{Pr}(\alpha, \bar{\tau}, \varphi_2) = \varphi_2' \\ \varphi_1' & \text{if }~\mathsf{Pr}(\alpha, \bar{\tau}, \varphi_2) = \mathtt{false} \: \land \\ & \mathsf{Pr}(\alpha, \bar{\tau}, \varphi_1) = \varphi_1' \\ \varphi_1' \lor \varphi_2' & \text{if }~\mathsf{Pr}(\alpha, \bar{\tau}, \varphi_1) = \varphi_1' \: \land \\ & \mathsf{Pr}(\alpha, \bar{\tau}, \varphi_2) = \varphi_2' \end{cases} \end{equation} \noindent \textbf{Always and eventually operators.} As shown in Algorithms~\ref{alg:globally} and~\ref{alg:eventually}, the progression for `always' ($\G_\mathcal{I} \varphi$) and `eventually' ($\F_\mathcal{I} \varphi$) depends on the value of $\InInt{i}$ and the progression of the inner formula $\varphi$. In Algorithms~\ref{alg:globally} and~\ref{alg:eventually}, we distinguish three cases: (1) line 4 handles the case where the interval $\mathcal{I}$ lies entirely within the sequence $\bar{\tau}$; (2) line 6 handles the case where $\mathcal{I}$ starts in the current trace but its end is beyond the boundary of the sequence $\bar{\tau}$; and (3) line 9 handles the case where the entire interval $\mathcal{I}$ is beyond the boundary of the sequence $\bar{\tau}$. In Algorithm~\ref{alg:globally}, we are only concerned with the progression of $\varphi$ on the suffix $(\alpha^i, \bar{\tau}^i)$ if $\InInt{i} = \mathtt{true}$.
In case $\InInt{i} = \mathtt{false}$, the antecedent fails and the entire implication evaluates to $\mathtt{true}$. In other words, taking the conjunction over all $i \in [0, |\alpha|]$, we are only left with the conjuncts $\mathsf{Pr}(\alpha^i, \bar{\tau}^i, \varphi)$ for which $\InInt{i} = \mathtt{true}$. In addition to this, we add the initial formula with an updated interval for the next trace. Similarly, in Algorithm~\ref{alg:eventually}, taking the disjunction over all $i \in [0, |\alpha|]$, if $\InInt{i} = \mathtt{false}$ the corresponding $\mathsf{Pr}(\alpha^i, \bar{\tau}^i, \varphi)$ is disregarded, and the final formula is a disjunction of the $\mathsf{Pr}(\alpha^i, \bar{\tau}^i, \varphi)$ with $\InInt{i} = \mathtt{true}$. \vspace{2mm} \noindent \textbf{Progressing the until operator.} Let the formula be of the form $\varphi_1 \U_\mathcal{I} \varphi_2$. According to the semantics of until, $\varphi_1$ should evaluate to true in all states leading up to some position $i$ with $\tau_i - \tau_0 \in \mathcal{I}$ where $\varphi_2$ evaluates to true. We start by progressing $\varphi_1$ (resp. $\varphi_2$) as $\G_{[0, \tau_i - \tau_0)} \varphi_1$ (resp. $\F_{[\tau_i, \tau_i + 1)} \varphi_2$) for some such $i$. Since we are only verifying the sub-formula $\F_{[\tau_i, \tau_i + 1)} \varphi_2$ on the trace sequence $(\alpha, \bar{\tau})$, this is equivalent to verifying the sub-formula $\F_{[0, 1)} \varphi_2 \equiv \varphi_2$ over the trace sequence $(\alpha^i, \bar{\tau}^i)$. Similar to Algorithms~\ref{alg:globally} and~\ref{alg:eventually}, in Algorithm~\ref{alg:until} we need to consider three cases. In lines 4, 6 and 9, following the semantics of the until operator, we make sure that for all $i \in [0, |\alpha|]$, if $\tau_i < \mathcal{I}_{start} + \tau_0$, then $\varphi_1$ is satisfied in the suffix $(\alpha^i, \bar{\tau}^i)$.
In addition, there should be some $j \in [0, |\alpha|]$ with $\InInt{j} = \mathtt{true}$ for which the trace satisfies the sub-formulas $\G_{[0, \tau_j - \tau_0)} \varphi_1$ and $\F_{[\tau_j, \tau_j + 1)} \varphi_2$. In lines 6 and 9, we also account for future traces satisfying the formula $\varphi_1 \U_\mathcal{I} \varphi_2$ with updated intervals. \input{smt_prog_example} \paragraph*{Example.} In Fig.~\ref{fig:progression}, the timeline shows propositions and their times of occurrence for the formula $\F_{[0,6)} r \rightarrow ( \neg p \U_{[2, 9)} q)$. The entire computation has been divided into three segments, $(\alpha, \bar{\tau})$, $(\alpha', \bar{\tau}')$, and $(\alpha'', \bar{\tau}'')$, and each state is represented by $(s, \tau)$: \begin{itemize} \item We start with segment $(\alpha, \bar{\tau})$. First we evaluate $\F_{[0,6)} r$, which requires evaluating $\mathsf{Pr}(\alpha^i, \bar{\tau}^i, r)$ for $i \in \{0, 1, 2\}$, all of which return the verdict $\mathtt{false}$, thereby rewriting the sub-formula as $\F_{[0, 4)} r$. Next, to evaluate the sub-formula $\neg p \U_{[2, 9)} q$, we need to evaluate (1) $\mathsf{Pr}(\alpha^i, \bar{\tau}^i, \neg p)$ for $i \in \{0, 1\}$, since $\tau_i - \tau_0 < 2$, both of which evaluate to $\mathtt{true}$, (2) $\mathsf{Pr}(\alpha, \bar{\tau}, \G_{[0, 2)} \neg p)$, which also evaluates to $\mathtt{true}$, and (3) $\mathsf{Pr}(\alpha^2, \bar{\tau}^2, q)$, which evaluates to $\mathtt{false}$. Thereby, the rewritten formula after observing $(\alpha, \bar{\tau})$ is $\F_{[0, 3)} r \rightarrow (\neg p \U_{[0, 6)} q)$.
\item Similarly, we evaluate the formula with respect to $(\alpha', \bar{\tau}')$: the sub-formula $\F_{[0, 3)} r$ evaluates to $\mathtt{true}$ at $\tau = 3$, and the sub-formula $\neg p \U_{[0, 6)} q$ is rewritten as $\neg p \U_{[0, 4)} q$ (there is no $i \in \{0, 1, 2\}$ with $\tau_i - \tau_0 < 0$, and for all $j \in \{0, 1, 2\}$, $\mathsf{Pr}(\alpha'^j, \bar{\tau}'^j, q) = \mathtt{false}$). \item In $(\alpha'', \bar{\tau}'')$, for $j = 1$, $\mathsf{Pr}(\alpha'', \bar{\tau}'', \G_{[0, 2)} \neg p) = \mathtt{true}$ and $\mathsf{Pr}(\alpha''^j, \bar{\tau}''^j, q) = \mathtt{true}$, thereby rewriting the entire formula as $\mathtt{true}$. \end{itemize} \section{SMT-based Solution} \label{sec:smt} \subsection{SMT Entities} SMT entities represent (1) sub-formulas of the \textsf{\small MTL}\xspace specification, and (2) variables used to represent the distributed computation. After we have the verdicts for each of the individual sub-formulas, we use the progression laws discussed in Section~\ref{sec:progress} to construct the formula for the subsequent segments of the computation. \vspace{2mm} \noindent \textbf{Distributed computation.} We represent a distributed computation $(\mathcal{E}, \rightsquigarrow)$ by a function $f: \mathcal{E} \rightarrow \{0, 1, \ldots, |\mathcal{E}|-1\}$ that enumerates the events. To represent the happened-before relation, we define an $\mathcal{E} \times \mathcal{E}$ matrix called $\mathsf{hbSet}$, where $\mathsf{hbSet}[e^i_\sigma][e^j_{\sigma'}] = 1$ represents $e^i_\sigma \rightsquigarrow e^j_{\sigma'}$ for $e^i_\sigma, e^j_{\sigma'} \in \mathcal{E}$. In particular, following the partial synchrony assumption, if $\sigma + \epsilon < \sigma'$, then $\mathsf{hbSet}[e^i_\sigma][e^j_{\sigma'}] = 1$. This is all done in the pre-processing phase of the algorithm; in the rest of the paper, we represent the events by the set $\mathcal{E}$ and the happened-before relation by $\rightsquigarrow$ for simplicity.
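The pre-processing that fills $\mathsf{hbSet}$ can be sketched as follows. This is a minimal illustration under our own naming (not the paper's implementation): it combines process-local order, message edges, the partial-synchrony rule $\sigma + \epsilon < \sigma'$ for events on different processes, and a transitive closure.

```python
def build_hb(n_events, proc, local_time, msg_edges, eps):
    """hb[e][f] == True encodes e ~> f (happened-before)."""
    hb = [[False] * n_events for _ in range(n_events)]
    for e in range(n_events):
        for f in range(n_events):
            if e == f:
                continue
            if proc[e] == proc[f]:
                hb[e][f] = local_time[e] < local_time[f]        # process-local order
            else:
                hb[e][f] = local_time[e] + eps < local_time[f]  # partial synchrony
    for e, f in msg_edges:  # a send happened-before its receive
        hb[e][f] = True
    for k in range(n_events):  # transitive closure (Floyd-Warshall style)
        for i in range(n_events):
            for j in range(n_events):
                hb[i][j] = hb[i][j] or (hb[i][k] and hb[k][j])
    return hb

# Hypothetical computation: P0 = {e0@1, e1@4}, P1 = {e2@2}, message e0 -> e2, eps = 2.
hb = build_hb(3, [0, 0, 1], [1, 4, 2], [(0, 2)], 2)
print(hb[0][1], hb[0][2], hb[2][1])  # True True False: e1 and e2 remain concurrent
```

Pairs left unordered by this matrix are exactly the concurrent events whose interleavings the SMT encoding has to explore.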
In order to represent the possible times of occurrence of an event, we define a function $\delta: \mathcal{E} \rightarrow \mathbb{Z}_{\geq 0}$, where $$\forall e^i_\sigma \in \mathcal{E}. \exists \sigma' \in [\max\{0, \sigma - \epsilon + 1\}, \sigma + \epsilon - 1]. \delta(e^i_\sigma) = \sigma'$$ To connect the events in $\mathcal{E}$ with the propositions in $\mathsf{AP}$, over which the \textsf{\small MTL}\xspace formula $\varphi$ is constructed, we define a Boolean function $\mu : \mathsf{AP} \times \mathcal{E} \rightarrow \{ \mathtt{true}, \mathtt{false} \}$. For formulas involving non-boolean variables (e.g., $x_1 + x_2 \leq 7$), we can update the function $\mu$ accordingly. To represent a sequence of consistent cuts that starts from $\emptyset$ and ends in $\mathcal{E}$, we introduce an {\em uninterpreted function} $\rho: \mathbb{Z}_{\geq 0} \rightarrow 2^\mathcal{E}$, from which a verdict is reached provided it satisfies all the constraints explained in Section~\ref{subsec:smt_const}. Lastly, to represent the sequence of times associated with the sequence of consistent cuts, we introduce a function $\tau : \mathbb{Z}_{\geq 0} \rightarrow \mathbb{Z}_{\geq 0}$. \subsection{SMT Constraints} \label{subsec:smt_const} Once we have the necessary SMT entities, we move on to the constraints for both generating a sequence of consistent cuts and representing the \textsf{\small MTL}\xspace formula as an SMT constraint. \vspace{2mm} \noindent \textbf{Consistent cut constraints over $\rho$:} First, we make sure that the sequence of cuts represented by the uninterpreted function $\rho$ is a sequence of consistent cuts, i.e., that it respects the happened-before relation between events in the distributed system: $$\forall i \in [0, |\mathcal{E}|]. \forall e, e' \in \mathcal{E}.
\Big( (e' \rightsquigarrow e) \land \big( e \in \rho(i) \big) \Big) \rightarrow \big( e' \in \rho(i) \big)$$ Next, we make sure that in the sequence of consistent cuts, the number of events present in a consistent cut is one more than the number of events present in the consistent cut before it: $$\forall i \in [0, |\mathcal{E}|). \mid\rho(i+1)\mid = \mid\rho(i)\mid + 1$$ Next, we make sure that in the sequence of consistent cuts, each consistent cut includes all the events that were present in the consistent cut before it, i.e., each cut is a superset of the one preceding it in the sequence: $$\forall i \in [0, |\mathcal{E}|). \rho(i) \subset \rho(i+1)$$ The sequence of consistent cuts starts from $\emptyset$ and ends at $\mathcal{E}$: $$\rho(0) = \emptyset; \; \rho(|\mathcal{E}|) = \mathcal{E}$$ The sequence of times reflects the time of occurrence of the event that has just been added to the sequence of consistent cuts: $$\forall i \geq 1. \tau(i) = \delta(e) \text{, where } \rho(i) - \rho(i-1) = \{e\}$$ Finally, we make sure that the monotonicity of time is maintained in the time sequence: $$\forall i \in [0, |\mathcal{E}|). \tau(i + 1) \geq \tau(i)$$ \textbf{Constraints for \textsf{\small MTL}\xspace formulas over $\rho$:} These constraints make sure that $\rho$ not only represents a valid sequence of consistent cuts, but also that this sequence of consistent cuts satisfies the \textsf{\small MTL}\xspace formula. As is evident, a distributed computation can often yield two contradictory evaluations. Thus, we need to check for both satisfaction and violation of all the sub-formulas in the \textsf{\small MTL}\xspace formula provided. Note that monitoring any \textsf{\small MTL}\xspace formula using our progression rules results in monitoring sub-formulas that are atomic propositions or `eventually' and `globally' temporal operators. Below we give the SMT constraint for each type of sub-formula. The violation (resp.
satisfaction) constraint for atomic propositions and `eventually' (resp. `globally') is the negation of the one shown. \begin{align*} \varphi = \mathsf{p} & & \bigvee_{e \in \mathsf{front}(\rho(0))} \mu[\mathsf{p}, e] = \mathtt{true}, \text{for } \mathsf{p} \in \mathsf{AP} \\ & &~~~~\text{(satisfaction, i.e., $\top$)}\\ \varphi = \G_\mathcal{I} \psi & & \exists i \in [0, |\mathcal{E}|]. \tau(i) - \tau(0) \in \mathcal{I} \land \rho(i) \not\models \psi \\ & &~~~~\text{(violation, i.e., $\bot$)} \\ \varphi = \F_\mathcal{I} \psi & & \exists i \in [0, |\mathcal{E}|]. \tau(i) - \tau(0) \in \mathcal{I} \land \rho(i) \models \psi \\ & &~~~~\text{(satisfaction, i.e., $\top$)} \end{align*} A satisfiable SMT instance indicates that the uninterpreted function was not only able to generate a valid sequence of consistent cuts, but also that this sequence satisfies or violates the \textsf{\small MTL}\xspace formula given the computation. This result is then fed to the progression cases to generate the final verdict. \subsection{Segmentation and Parallelization of Distributed Computation} We know that predicate detection, let alone runtime verification, is NP-complete~\cite{garg2002} in the size of the system (number of processes). This complexity grows to higher classes when working with nested temporal operators. To make the problem computationally viable, we aim to chop the computation $(\mathcal{E}, \rightsquigarrow)$ into $g$ segments, $(\textit{seg}_1, \rightsquigarrow), (\textit{seg}_2, \rightsquigarrow), \cdots, (\textit{seg}_g, \rightsquigarrow)$. This involves creating small SMT instances for each of the segments, which improves the runtime of the overall problem.
In a computation of length $l$, if we were to chop it into $g$ segments, each segment would be of length $\frac{l}{g} + \epsilon$, and the set of events included in segment $j$ is given by: \begin{align*} \textit{seg}_j &= \Big\{ e^i_\sigma \mid \sigma \in \bigg[\max\big(0, \frac{(j-1)\times l}{g} - \epsilon\big), \frac{j \times l}{g}\bigg] \land \\ &~~~ i \in [1, \mid\mathcal{P}\mid]\Big\} \end{align*} Note that the monitoring of a segment should also include the events that happened within $\epsilon$ time before the segment starts, since these may be concurrent with events in the segment that were not accounted for in the previous segment.
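The segmentation above can be sketched as follows. This toy Python sketch (names are ours; integer division stands in for $l/g$) assigns each event, given as a (process, local time) pair, to every segment whose $\epsilon$-padded window covers it:

```python
def segment_events(events, l, g, eps):
    """Segment j (1 <= j <= g) covers local times
    [max(0, (j-1)*l/g - eps), j*l/g], so an event within eps of a
    boundary deliberately lands in two consecutive segments."""
    segs = {}
    for j in range(1, g + 1):
        lo = max(0, (j - 1) * l // g - eps)
        hi = j * l // g
        segs[j] = [(p, t) for (p, t) in events if lo <= t <= hi]
    return segs

# Hypothetical computation of length l = 12 chopped into g = 3 segments, eps = 2.
events = [(1, 0), (1, 3), (2, 5), (1, 7), (2, 11)]
segs = segment_events(events, 12, 3, 2)
print(segs[1], segs[2], segs[3])
```

Here the event $(1, 3)$ falls in both segment 1 (window $[0, 4]$) and segment 2 (window $[2, 8]$), which is exactly the $\epsilon$-overlap needed to catch events concurrent across a segment boundary.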
\section{Deep radio surveys} \subsection{The modern radio sky} The GHz radio bright ($\ga 1$ mJy) sky consists mainly of ``classical'' radio sources, that is, radio quasars and radio galaxies. These are active galactic nuclei (AGN) whose radio emission is generated from the gravitational energy associated with a supermassive black hole and emitted through relativistic jets of particles as synchrotron radiation. Below 1 mJy there is an increasing contribution to the radio source population from synchrotron emission resulting from relativistic plasma ejected from supernovae associated with massive star formation in galaxies. Star forming galaxies (SFG), however, appear not to be the only component of the faint radio sky, at least down to $\sim 50~\mu$Jy at a few GHz (e.g., \cite[Gruppioni et al. 2003] {Gruppioni_etal2003}, \cite[Padovani et al. 2009]{Padovani_etal09}, \cite[2011]{Padovani_etal11}, \cite[Padovani 2011]{Padovani_etal2011}, \cite[Norris et al. 2013]{Norris_etal2013}), contrary to the (until recently) most accepted paradigm. It turned out that there are still plenty of AGN down there, but of the more numerous, radio-fainter type, and therefore not associated with radio quasars and radio galaxies. These AGN, in fact, are of the so-called radio-quiet (RQ) type. Why should every astronomer care about all this? For a variety of reasons: 1) radio emission, through the so-called ``radio-mode'' feedback, appears to play a very important role in galaxy evolution (e.g., \cite[Croton et al.
2006]{Croton_etal06}); 2) RQ AGN typically reside in spiral galaxies, which are still forming stars, and therefore are likely to provide a vital contribution to our understanding of the AGN -- galaxy co-evolution issue; 3) radio observations are unaffected by absorption and therefore are sensitive to all types of AGN, independently of their orientation (i.e., type 1s and type 2s); 4) finally, and most importantly, the fact that by going radio faint one starts to detect the bulk of the AGN population (and not only the small minority of radio quasars and radio galaxies) means that radio astronomy is no longer a ``niche'' activity but is extremely relevant to a wide range of extragalactic studies. \subsection{The issue of radio-quiet AGN} A further point of general interest has to do with RQ AGN. Soon after the discovery of quasars in 1963 it was realized that the majority of them were not as strong radio sources as the first quasars and were undetected by the radio telescopes of the time: they were ``radio-quiet.'' It was later realized that these sources were actually only ``radio-faint.'' For the same optical power their radio powers were $\approx 3$ orders of magnitude smaller than those of their radio-loud (RL) counterparts. RQ AGN were until recently normally found in optically selected samples and are characterized by relatively low radio-to-optical flux density ratios and radio powers. It is important to realize that the distinction between the two types of AGN is not simply a matter of semantics: the two classes represent intrinsically different objects, with RL AGN emitting most of their energy over the entire electromagnetic spectrum non-thermally and in association with powerful relativistic jets, while the multi-wavelength emission of RQ AGN is dominated by thermal emission, directly or indirectly related to the accretion disk. The mechanism responsible for radio emission in RQ AGN has been a matter of debate for the past fifty years.
Alternatives have included a scaled down version of the RL AGN mechanism (e.g., \cite[Miller et al. 1993]{Miller_etal93}, \cite[Ulvestad et al. 2005]{Ulvestad_etal05}), star formation (\cite[Sopp \& Alexander 1991]{Sopp_91}), a black hole rotating more slowly than in RL AGN (\cite[Wilson \& Colbert 1995]{Wilson_95}), and many more. This is a non-trivial issue for various reasons: 1) most ($> 90\%$) AGN are RQ; 2) some of the proposed explanations have profound implications on our understanding of AGN physics (jets, accretion, black hole spin, etc.); 3) some others are very relevant for the relationship between AGN and star formation in the Universe (related to ``AGN feedback''), which is a hot topic in extragalactic research. We note that it is important to compare the properties of the two AGN classes in the band where they differ most, that is the radio band. \section{The {\it Chandra} Deep Field South} This is exactly what we did. Our observations were done in the {\it Chandra} Deep Field South (CDFS) area, which is part of the Great Observatories Origins Deep Survey (GOODS) and as such one of the most intensively studied regions in the sky. The CDFS is a brainchild of Riccardo Giacconi, who conceived the idea of having the {\it Chandra} X-ray observatory stare at the same spot of the sky for a long time to reach very faint X-ray fluxes. After the initial 1 Msec (11.6 days; \cite[Giacconi et al. 2002]{Giacconi_etal2002}), one more was added (\cite[Luo et al. 2008]{Luo_etal2008}). Four Msecs (1.5 months) were finally reached by \cite[Xue et al. (2011)]{Xue_etal2011}. More observing time has been granted in the meantime so that a total of 7 Msecs (almost three months) of observing time should be available by the end of 2014. \cite[Kellermann et al. 
(2008)]{Kellermann_etal08} used the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) to obtain 1.4 GHz data, with 8.5~$\mu$Jy rms noise per 3.5" x 3.5" beam, in a field centered on the CDFS, defining a complete sample of 198 radio sources reaching $\sim 43~\mu$Jy over 0.2 deg$^2$. These data were exploited in a series of papers on optical counterparts (\cite[Mainieri et al. 2008]{Mainieri_etal08}), X-ray properties (\cite[Tozzi et al. 2009]{Tozzi_etal09}), and source populations, evolution, and luminosity functions (LFs) (\cite[Padovani et al. 2009]{Padovani_etal09}, \cite[2011]{Padovani_etal11}). \cite[Miller et al. (2008]{Miller_etal08}, \cite[2013)]{Miller_etal13} observed the so-called Extended CDFS (E-CDFS), again using the VLA, reaching a somewhat smaller rms over a larger area, namely $\sim 6~\mu$Jy rms noise, and therefore $\sim 30~\mu$Jy at $5\sigma$, in a 2.8" x 1.6" beam over 0.3 deg$^2$. This resulted in a sample of almost 900 sources. Our group is exploiting these new radio data with the aim of addressing the issues of the faint radio source population and RQ AGN in more detail, given the larger and slightly deeper E-CDFS sample, as compared to the CDFS one. In particular, \cite[Bonzini et al. (2012)]{Bonzini_etal2012} have identified the optical and infrared (IR) counterparts of the E-CDFS sources, finding reliable matches and redshifts for $\sim 95\%$ and $\sim 81\%$ of them respectively, while \cite[Vattakunnel et al. (2012)]{Vattakunnel_etal2012} have identified the X-ray counterparts and studied the radio -- X-ray correlation for SFG. Finally, \cite[Bonzini et al. (2013)]{Bonzini_etal2013} have provided reliable source classification. \subsection{The classification of faint radio sources} The classification of faint radio sources is complex. First, these objects are quite faint in the optical/near-IR regimes.
The sources with an optical counterpart have a median magnitude $R \sim 22.8$, but can be hosted in galaxies as faint as $R \sim 27$. Therefore, getting spectra, which could be used for an optical classification, is not feasible for the bulk of the objects. But even if we had optical spectra for all our sources we would still have problems since, as also remarked by a few speakers at this conference (e.g., Laura Trouille), a single band only gives a biased view of the properties of AGN, and the optical band is the worst one, being strongly affected by obscuration. Indeed, there are quite a few examples of optically boring sources where the AGN is detected only in the X-ray band (i.e., some of the so-called X-ray Bright Optically Normal Galaxies; XBONGs). One then needs to use all the multi-wavelength information available for the E-CDFS, which is substantial, to figure out what a radio source really is, looking for AGN in the IR and X-ray bands as well. \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{Padovani_fig1.eps} \caption{Preliminary Euclidean normalized E-CDFS source counts: whole sample (black squares), SFG (green diamonds), all AGN (magenta triangles), radio-quiet AGN (blue circles), and radio-loud AGN (red squares). Error bars correspond to $1\sigma$ Poisson errors (\cite[Gehrels 1986]{Gehrels_86}).} \label{counts} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=9cm]{Padovani_fig2.eps} \caption{Relative fraction of the various classes of E-CDFS sources as a function of radio flux density: SFG (green diamonds), all AGN (magenta triangles), radio-quiet AGN (blue circles) and radio-loud AGN (red squares). Error bars correspond to $1\sigma$ Poisson errors (\cite[Gehrels 1986]{Gehrels_86}). Adapted from \cite[Bonzini et al. (2013)]{Bonzini_etal2013}.} \label{fractions} \end{center} \end{figure} \cite[Bonzini et al.
(2013)]{Bonzini_etal2013} have used a modified version of the classification scheme used by \cite[Padovani et al. (2011)] {Padovani_etal11} to disentangle SFG, RQ, and RL AGN. In short, they identify RL AGN using the so-called $q_{24}$ parameter, defined as $q_{24} = \log (f_{24 \mu m}/f_{1.4 GHz})$, while RQ AGN are distinguished from SFG according to their IRAC (near-IR) colours and X-ray power. Finally, other parameters were examined to check the classification and sort out possible outliers. These included the presence of an inverted radio spectrum ($\alpha_{\rm r} < 0$, where $S_{\nu} \propto \nu^{-\alpha_{\rm r}}$), possible Very Long Baseline Array (VLBA) detections, broad/high excitation lines in the optical spectra, Polycyclic Aromatic Hydrocarbon (PAH) features, X-ray absorption and variability. \subsection{The faint radio source population} Thanks to this careful classification, we now have what we believe is the most accurate determination of the source population of faint radio sources. Fig. \ref{counts} shows the preliminary Euclidean normalized number counts for our sample, along with the population subsets (Padovani et al., in preparation). Fig. \ref{fractions}, adapted from \cite[Bonzini et al. (2013)] {Bonzini_etal2013}, plots the relative fractions of the various classes as a function of radio flux density. One can see that AGN dominate at large flux densities ($\ga 1$ mJy) but SFG become the main population below $\approx 0.1$ mJy. Similarly, RL AGN are the predominant type of AGN above 0.1 mJy but their contribution drops fast at lower flux densities. In more detail, AGN make up $43\pm4$\% of sub-mJy sources, going from 100\% of the total at $\sim 10$ mJy, down to 39\% at the survey limit. SFG, on the other hand, which represent $57\pm3$\% of the sub-mJy sky, are missing at high flux densities but become the dominant population below $\approx 0.1$ mJy, reaching 61\% at the survey limit.
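The Euclidean-normalized counts of Fig.~\ref{counts} are straightforward to compute from a flux-limited catalogue. The sketch below is our own illustration (function name, bin edges, and units are our choices, not taken from the paper); for a non-evolving Euclidean population the resulting curve is flat:

```python
import numpy as np

def euclidean_counts(fluxes_jy, area_sr, bin_edges_jy):
    """Euclidean-normalised differential source counts S^2.5 dN/dS
    in Jy^1.5 sr^-1, from source flux densities and the survey area."""
    counts, edges = np.histogram(fluxes_jy, bins=bin_edges_jy)
    s_mid = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centres
    dnds = counts / np.diff(edges) / area_sr    # dN/dS per steradian
    return s_mid, s_mid ** 2.5 * dnds
```

Poisson error bars on the binned counts, as in the figure, would follow Gehrels (1986) rather than a simple $\sqrt{N}$ when the bins are sparsely populated.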
RQ AGN represent $26\pm6$\% (or 60\% of all AGN) of sub-mJy sources but their fraction appears to increase at lower flux densities, where they make up 73\% of all AGN and $\approx 30$\% of all sources at the survey limit, up from $\approx 6$\% at $\approx 1$ mJy. These results are in good agreement with those of \cite[Padovani et al. (2011)]{Padovani_etal11}. The strong message is that below 0.1 mJy the radio sky, which at large flux densities is characterized by the prevalence of synchrotron radiation associated with powerful jets, is instead dominated by star-formation-related processes. \subsection{Evolution and luminosity functions} Having redshifts for the majority of our sources, we can also study the LFs and evolution of faint radio sources. The most general approach to do this is to perform a maximum likelihood fit of an evolving LF to the observed distribution in luminosity and redshift (see details in \cite[Padovani et al. 2011] {Padovani_etal11}). This method makes maximal use of the data and is free from arbitrary binning, although it is model dependent. \begin{figure}[ht] \begin{center} \includegraphics[width=8cm]{Padovani_fig3.eps} \caption{Preliminary maximum likelihood confidence contours ($1\sigma$, $2\sigma$, and $3\sigma$) for the E-CDFS sample assuming a pure luminosity evolution of the type $\propto (1+z)^{k_L}$ and a single power-law for the LF. The best fit values for the various classes are indicated by the different symbols.} \label{contours} \end{center} \end{figure} Fig. \ref{contours} plots preliminary maximum likelihood confidence contours ($1\sigma$, $2\sigma$, and $3\sigma$) under the simple assumptions of a pure luminosity evolution of the type $\propto (1+z)^{k_L}$ and a single power-law for the LF. $k_L > 0$ indicates positive evolution, i.e., the higher the redshift the larger the power, while $k_L < 0$ indicates negative evolution, i.e., the higher the redshift the smaller the power. 
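The pure-luminosity-evolution parametrization behind the contours can be written compactly. The sketch below is a minimal illustration of ours (the slope $\gamma$ and normalization are placeholders, not fitted values); it makes explicit how the sign of $k_L$ maps onto positive or negative evolution:

```python
def ple_lf(lum, z, k_l, gamma=2.0, norm=1.0):
    """Single power-law LF Phi_0(L) = norm * L**-gamma, evolved by pure
    luminosity evolution L -> L * (1+z)**k_l.  For a power law this is
    equivalent to Phi_0(L) * (1+z)**(gamma * k_l)."""
    return norm * (lum / (1.0 + z) ** k_l) ** (-gamma)

# k_l > 0: space density at fixed L grows with redshift (SFG, RQ AGN);
# k_l < 0: it declines with redshift (RL AGN in our fits).
```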
RQ AGN and SFG occupy the same region of parameter space (strong positive evolution and steep LF), while RL and RQ AGN are totally disconnected, the former displaying negative evolution and a flat LF. Note that, even considering a more complicated picture, as suggested by a more detailed analysis of the data (i.e., a dual power-law LF: Padovani et al. in prep.), RQ AGN and SFG still evolve similarly in the radio band. In particular, the derivation of the local LF confirms that RL AGN have a much flatter LF than RQ ones and shows that the RQ AGN LF seems to be an extension of the SFG LF at higher radio powers (see also \cite[Padovani et al. 2009] {Padovani_etal09}). \section{Radio emission in radio-quiet AGN} Our results suggest very close ties between star formation and radio emission in RQ AGN at $z \sim 1.5 - 2$, the mean redshift of our sources, since their evolution is similar to that of SFG and their LF appears to be an extension of the SFG LF. Furthermore, radio emission in the two classes of AGN appears to have a different origin. Indeed, if RQ AGN were simply ``mini RL'' AGN, they would have to share the evolutionary properties of the latter and their LF should also be on the extrapolation of the RL LF at low powers. Neither of these is borne out by our data. This conclusion agrees with our previous results (\cite[Padovani et al. 2011] {Padovani_etal11}) and with those of \cite[Kimball et al. (2011)]{Kimball_etal2011}. This result is further confirmed by the comparison of the star formation rates (SFRs) derived from the far-IR and radio luminosities, assuming that all the radio emission is due to SF. For RQ AGN and SFG the two SFR estimates are fully consistent, while for RL AGN the agreement is poor due to the large contribution of the relativistic jet to their radio luminosity (Bonzini et al., this volume).
\section{A revolution in radio astronomy} The Square Kilometre Array (SKA; http://www.skatelescope.org) will offer an observing opportunity extending well into the {\it nanoJy} regime with unprecedented versatility (we stress that at present we have reached $\sim 15~\mu$Jy at 1.4 GHz). First science with the so-called SKA$_1$ is scheduled early in the next decade. The LOw Frequency ARray (LOFAR) has already started operations and will carry out large area surveys at 15, 30, 60, 120 and 200 MHz (\cite[Morganti et al. 2009]{Morganti_etal09}), opening up a whole new region of parameter space at low radio frequencies. The VLA has been upgraded and the new instrument, the Jansky Very Large Array (JVLA; https://science.nrao.edu/facilities/vla), has hugely improved capabilities, while MERLIN, which is now e-MERLIN (http://www.e-merlin.ac.uk/), has greatly improved sensitivity. Many other radio telescopes are currently under construction in the lead-up to the SKA, including the Australian Square Kilometre Array Pathfinder (ASKAP; http://www.atnf.csiro.au/projects/askap/), Apertif (http://www.astron.nl/\-general/\-apertif/apertif), and MeerKAT (http://www.ska.ac.za/\-meerkat). These projects will survey the sky faster than has been possible with existing radio telescopes, producing surveys covering large areas of the sky down to fainter flux densities than presently available (see \cite[Norris et al. 2013]{Norris_etal2013} for details). As an example, simple modelling based on our results shows that a radio survey reaching $\sim 1~\mu$Jy at 1.4 GHz will need to cover only $\sim 30$ deg$^2$ to detect a number of ``potential'' AGN larger than those currently known, while $\sim 200$ deg$^2$ should reveal about one million such sources.
The Evolutionary Map of the Universe (EMU), one of the ASKAP projects, expects to reach flux densities similar to those of the E-CDFS over $\sim 3/4$ of the sky, detecting more than 70 million sources, about half of which will be ``potential'' AGN (\cite[Norris et al. 2011]{Norris_etal2011}). The ``potential'' here is important because, as detailed above, the classification of faint radio sources requires a great deal of ancillary, multi-wavelength information, which will not be easy to get at very faint levels (\cite[Padovani 2011]{Padovani_11}) or over very large areas. \section{Conclusions} We have shown that the GHz sub-mJy sky is a complex mix of positively evolving star-forming galaxies and radio-quiet AGN and of negatively evolving radio-loud AGN. Contrary to what used to be the predominant view, AGN still make up $\sim 40\%$ of the faint radio sky, although the deep ($< 0.1$ mJy) radio sky appears to be dominated by star forming galaxies and therefore stellar processes. Radio-quiet AGN are also very important, accounting for $\sim 60\%$ of all sub-mJy AGN and $\sim 25\%$ of the total radio faint population. Radio surveys, which were in the past only useful to study radio-loud AGN, now reach so deep that they are dominated by the same galaxies detected by IR, optical and X-ray surveys. As a result, the next-generation radio surveys, which will produce samples of tens of millions of sources, will be an increasingly important component of multiwavelength studies of galaxy evolution. Finally, by studying the evolution and luminosity functions of faint radio sources, we have found a close relationship between star formation and radio emission in radio-quiet AGN at $z \sim 1.5 - 2$, which is confirmed by the very similar star formation rates obtained independently in the radio and far-IR bands.
\section{Introduction} Gas-grain interactions play a key role in the chemical evolution of star-forming regions. During the first stages of star formation virtually all species accrete onto grains in dense cold cores. Later on in the star formation sequence --- when so-called hot cores are formed --- grains are warmed to temperatures where molecules can desorb again. In order to characterize this astrophysical process quantitatively it is necessary to understand the underlying molecular physics by studying interstellar ice analogs under laboratory controlled conditions. A series of recent papers shows that even for simple molecules such a quantification is far from trivial (e.g. Fraser et al. 2001, Collings et al. 2003, 2004, {\"O}berg et al. 2005, Bisschop et al. 2006). In the present work results for CO and O$_2$ ices are discussed that extend recent work comparing CO-N$_2$ and CO-O$_2$ ice features (Fuchs et al. 2006) to an empirical kinetic model characterizing the desorption behavior. The reason for focusing on ices containing O$_2$ is that a substantial amount of interstellar oxygen may well freeze out onto grains in the form of molecular oxygen. Attempts to determine gaseous O$_2$-abundances from recent SWAS and ODIN campaigns put upper limits on the O$_2$ abundance in cold dark clouds in the range of 3$\times$10$^{-6}$-1$\times$10$^{-7}$ (Goldsmith et al. 2000, Pagani et al. 2003, Liseau et al. 2005). This low abundance, along with the low abundance of gaseous H$_2$O, raises serious questions about the total oxygen budget when compared with the well observed atomic oxygen abundance of 3$\times 10^{-4}$ in diffuse clouds (Meyer et al. 1998, Ehrenfreund \& van Dishoeck 1998). One possible explanation for the `missing' oxygen is that it is frozen out onto grains in the coldest regions. 
The fundamental vibration of solid O$_2$ around 1550 cm$^{-1}$ (6.45 $\mu$m) becomes observable through perturbations of the symmetry of O$_2$ in a matrix or ice containing other molecules (Ehrenfreund et al. 1992). This band has been sought towards the proto-stellar sources RCrA IRS2 and NGC 7538 IRS9 (Vandenbussche et al.\ 1999), but not detected. Upper limits between 50\% and 100\% of solid O$_2$ relative to solid CO have been reported from analysis of the CO profile, since solid CO, in contrast with solid O$_2$, has been observed through its vibrational band at 4.67 $\mu$m (2140 cm$^{-1}$) (e.g. Chiar et al. 1998, Pontoppidan et al. 2003). Here the amount of frozen O$_2$ can be estimated by observing its influence on the shape of the CO absorption band. Transmission spectra recorded for mixed CO-O$_2$ ices show indeed significant changes compared to pure CO ices (Ehrenfreund et al. 1997, Elsila et al. 1997). However, such changes can also be caused by grain size and shape effects (Dartois 2006). The best limits on solid O$_2$ therefore come from analysis of the weak solid $^{13}$CO band which is not affected by grain shape effects. This band leads to upper limits of 100\% on the amount of O$_2$ that can be mixed with CO (Boogert et al.\ 2003, Pontoppidan et al.\ 2003). \begin{figure}[t] \includegraphics[width=8cm]{6272fig1.ps} \vspace{0.0cm} \caption{The cryo-pumping factor ($C_x(T)= p_{rt}/p_{t}$) as a function of temperature for CO (solid) and O$_2$ (dashed).} \end{figure} In this study, laboratory results for CO-O$_2$ ices are presented, both in pure, layered and mixed ice morphologies for varying ice thicknesses and for different relative abundances (Section 2). The focus is on temperature programmed desorption (TPD) data that are recorded to study the desorption process and to visualize changes in ice morphology during heating (Section 3). An empirical kinetic model is used to interpret the TPD data (Section 4). 
The aim of this work is to derive accurate molecular parameters (CO-CO, CO-O$_2$ and O$_2$-O$_2$ binding energies, classification of desorption kinetics, and temperature dependent sticking coefficients) to allow reliable predictions of typical behavior under astrophysical conditions (Section 5). \section{Experiment} \label{exp-sec} The experimental apparatus consists of an ultra-high vacuum setup (background pressure $<$ 5$\times$10$^{-11}$ mbar) in which interstellar ice analogs are grown with mono-layer precision on a 2.5$\times$2.5 cm$^2$ sample surface made out of a 0.1 $\mu$m thick polycrystalline gold film. Physical-chemical interactions can be studied using TPD and/or reflection absorption infrared spectroscopy (RAIRS). Details of the setup are available from van Broekhuizen (2005) and Fuchs et al. (2006); here only the relevant features are summarized. We assume that for a sticking coefficient of 1, molecules hitting the cold surface (14 K) build up one monolayer (ML) at a steady gas exposure of 1.33$\cdot$10$^{-6}$ mbar~s (10$^{-6}$ Torr~s). This is by definition one Langmuir [L] and corresponds to roughly 10$^{15}$ particles cm$^{-2}$ in a non-porous condensed solid. In the present experiment amorphous ice is grown using a 0.01 L s$^{-1}$ growth rate, resulting in a well-specified layer thickness as long as the growth is linear with the exposure time. Typical layer thicknesses are generated in the range between 20 and 80~L, as astrophysical observations of young stellar objects in nearby low-mass star-forming clouds indicate that CO ices exist at thicknesses around 40 monolayers (Pontoppidan et al. 2003). In addition, astrochemical models suggest that O$_2$ may freeze out onto a pre-existing CO layer, and consequently both pure, mixed and layered ices are studied here (Hasegawa et al. 1992, d'Hendecourt et al. 1985, Bergin et al. 2000). Studies of O$_2$ in an H$_2$O-rich environment have been performed by Collings et al.\ (2004).
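The exposure bookkeeping above is simple enough to check directly. The following sketch (our own, assuming a sticking coefficient of 1 and the quoted conversion 1 L $\approx$ 10$^{15}$ molecules cm$^{-2}$) converts an exposure in Langmuir into deposition time and column density:

```python
L_MBAR_S = 1.33e-6   # 1 Langmuir = 1.33e-6 mbar s (1e-6 Torr s)
N_PER_L = 1.0e15     # molecules cm^-2 per Langmuir (~1 ML, non-porous solid)

def deposition(exposure_l, rate_l_per_s=0.01):
    """Deposition time [s] and column density [cm^-2] needed to grow
    `exposure_l` Langmuir at the 0.01 L/s growth rate used here."""
    return exposure_l / rate_l_per_s, exposure_l * N_PER_L

time_s, column = deposition(40.0)  # ~40 ML, typical of CO ices around YSOs
# -> 4000 s of deposition, 4e16 molecules cm^-2
```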
\begin{table}[t] \begin{center} \caption[]{Overview of all ice morphologies used in the CO-O$_2$ experiments. The thickness is given in Langmuir [L].} \begin{tabular}{|l|c|c|c|c|} \hline \hline & $^{13}$CO \hspace{+0.25cm} $^{18}$O$_2$ & Total & $^{12}$CO \hspace{+0.25cm} $^{16}$O$_2$ & Total \\ \hline CO & 20 \hspace{0.25cm} - & 20 & 20 \hspace {0.25cm} - & 20 \\ pure & 40 \hspace{0.25cm} - & 40 & 40 \hspace {0.25cm} - & 40 \\ & 60 \hspace{0.25cm} - & 60 & - \hspace {0.25cm} - & - \\ & 80 \hspace{0.25cm} - & 80 & 80 \hspace {0.25cm} - & 80 \\ \hline O$_2$& - \hspace{0.25cm} 20 & 20 & - \hspace {0.25cm} 20 & 20 \\ pure & - \hspace{0.25cm} 40 & 40 & - \hspace {0.25cm} 40 & 40 \\ & - \hspace{0.25cm} 60 & 60 & - \hspace {0.25cm} - & - \\ & - \hspace{0.25cm} 80 & 80 & - \hspace {0.25cm} 80 & 80 \\ \hline CO:O$_2$& 20 \hspace{0.25cm} 20 & 40 & - \hspace{0.25cm} - & - \\ mixed & 30 \hspace{0.25cm} 30 & 60 & - \hspace{0.25cm} - & - \\ & 40 \hspace{0.25cm} 40 & 80 & 40 \hspace{0.25cm} 40 & 80 \\ & 60 \hspace{0.25cm} 60 & 120 & - \hspace{0.25cm} - & - \\ & 80 \hspace{0.25cm} 80 & 160 & - \hspace{0.25cm} - & - \\ \hline O$_2$/CO $^a$& 20 \hspace{0.25cm} 20 & 40 & 20 \hspace{0.25cm} 20 & 40 \\ layered & 30 \hspace{0.25cm} 30 & 60 & - \hspace{0.25cm} - & 80 \\ & 40 \hspace{0.25cm} 40 & 80 & 40 \hspace{0.25cm} 40 & 80 \\ & 60 \hspace{0.25cm} 60 & 120 & - \hspace{0.25cm} - & - \\ & 80 \hspace{0.25cm} 80 & 160 & 80 \hspace{0.25cm} 80 & - \\ & 40 \hspace{0.25cm} \,\, 5 & 45 & - \hspace{0.25cm} - & - \\ & 40 \hspace{0.25cm} 10 & 50 & 40 \hspace{0.25cm} 10 & 50 \\ & 40 \hspace{0.25cm} 20 & 60 & 40 \hspace{0.25cm} 20 & 60 \\ & 40 \hspace{0.25cm} 30 & 70 & - \hspace{0.25cm} - & - \\ \hline CO/O$_2$ $^a$& 20 \hspace{0.25cm} 20 & 40 & - \hspace{0.25cm} - & 40 \\ layered & 40 \hspace{0.25cm} 40 & 80 & 40 \hspace{0.25cm} 40 & 80 \\ & 80 \hspace{0.25cm} 80 & 160 & - \hspace{0.25cm} - & - \\ \hline \hline \end{tabular} $^a$ See text for explanation of the used notation. 
\label{table-1} \end{center} \end{table} Two sets of ices have been investigated, $^{13}$CO-$^{18}$O$_2$ and $^{12}$CO-$^{16}$O$_2$, both using isotopes of at least 99\% purity. Most experiments were performed using $^{13}$CO-$^{18}$O$_2$ to distinguish the deposited species from possible impurities in the vacuum system. An overview of the ice samples used is given in Table 1. Throughout this paper, the notation x/y and x:y denote layered (x on top of y) and mixed ice morphologies, respectively. The notation 1/1 indicates layered ices of equal thickness, 1:1 refers to equally mixed ices, and x/40~L stands for ices where the thickness of the top layer is varied (x~L) while the bottom layer is kept constant at 40 L. \begin{figure}[t] \centering \includegraphics[angle=270,width=9cm]{6272fig2.ps} \caption{TPD spectra of pure ices. (a) $^{13}$CO and (b) $^{18}$O$_2$ - for exposures of 20 L (solid), 40 L (dot-dashed), 60 L (dashed) and 80 L (dotted) - and (c) $^{12}$CO and (d) $^{16}$O$_2$ - for exposures of 20 L (solid), 40 L (dot-dashed) and 80 L (dotted). The grey vertical lines indicate the peak temperatures at 40 L.} \end{figure} Once an ice is grown, the desorption behavior is examined by a controlled linear temperature rise from 12 to 80 K. The heating rate in all experiments is 0.1 K min$^{-1}$, unless stated otherwise. The CO and O$_2$ molecules that are released are monitored by a quadrupole mass spectrometer (QMS). For a constant pumping speed the QMS signal strengths are proportional to the amounts of the desorbed species. Since the deposition surface, together with other cold surface areas such as the radiation shield, acts as a cryogenic pump, a temperature-dependent cryo-pumping factor has to be introduced: the pressure in the chamber decreases when the temperature drops below 50 K.
This effect is measured by opening the leakage valve up to a certain pressure ($p_{rt}$) at room temperature, after which the system is cooled down to 14~K and the corresponding pressure ($p_t$) is determined. In Fig. 1 the ratio $C_x(T)= p_{rt}/p_{t}$ is shown as a function of temperature for both CO and O$_2$, revealing that at 14 K the effective pumping speed is about a factor of 7 larger than the pumping speed, $s_{tp}$, of the turbo pump alone. This effect is important and will be explicitly taken into account in the kinetic model. Reflection-Absorption Infrared (RAIR) spectra have been taken simultaneously with the TPD data but are not included here. Spectra for selected CO-O$_2$ experiments were presented and discussed in Fuchs et al.\ (2006); they provide additional information on the processes occurring in the ice during heating. \section{Experimental results} \label{exp-res-sec} TPD spectra of pure and mixed/layered CO-O$_2$ ices are shown in Figs. 2 and 3, respectively. \vspace{2mm} \noindent {\bf Pure ices.} In the case of pure $^{13}$CO ice (Fig. 2a) and pure $^{12}$CO ice (Fig. 2c) the desorption starts around 26.5 K and peaks between 28 and 29 K. All leading edges clearly coincide and peak temperatures shift towards higher temperatures for larger thicknesses. This behavior is typical for a 0$^{th}$-order desorption process (see \S 4.1) and corresponds to a desorption rate that is independent of the ice coverage and therefore remains constant until no molecules are left on the surface (see also Collings et al. 2003, Bisschop et al. 2006). The binding energies for both isotopes are similar and consequently nearly the same peak temperature is found. For pure $^{18}$O$_2$ ice (Fig. 2b) and pure $^{16}$O$_2$ ice (Fig. 2d) a behavior similar to that of CO is found: all desorption curves have the same leading edges and the peak temperature slightly shifts for increasing thicknesses (from 29.5 to 31.3~K).
The desorption peak of O$_2$ is clearly shifted to higher temperatures by $\sim$2~K compared with CO. A small shift of peak temperatures of less than 0.5~K is observed when comparing $^{18}$O$_2$ and $^{16}$O$_2$. \begin{figure}[t] \includegraphics[angle=270,width=9cm]{6272fig3.ps} \caption{ TPD spectra for mixed/layered ices. (a) O$_2$ trace of (5, 10, 20, 30 and 40 L) $^{18}$O$_2$/40 L $^{13}$CO - differential layer, (b) (20, 40 and 80 L) $^{18}$O$_2$ (solid line) and $^{13}$CO (dashed line) for 1/1 O$_2$/CO system. (The shoulder observed in the 40 L measurement is an experimental artifact absent in other experiments). (c) (10, 20, 40, 60, 80 L) $^{18}$O$_2$ (solid line) and $^{13}$CO (dashed line) for 1:1 mixed ices, (d) (20, 40 and 80 L) $^{16}$O$_2$ (solid line) and $^{12}$CO (dashed line) for the 1/1 O$_2$/CO system. The grey vertical lines indicate the peak temperatures for pure O$_2$ (solid) and pure CO (dashed) for 40~L. } \label{fig-3} \end{figure} \vspace{2mm} \noindent {\bf Layered ices.} In Fig.~\ref{fig-3}a TPD spectra are shown for different layers of $^{18}$O$_2$ on top of 40 L $^{13}$CO and in Fig.~\ref{fig-3}b (\ref{fig-3}d) for $^{18}$O$_2$ ($^{16}$O$_2$) on top of $^{13}$CO ($^{12}$CO) for different 1/1 configurations (i.e. 20 L $^{18}$O$_2$ /20 L $^{13}$CO, 40 L $^{16}$O$_2$ / 40 L $^{12}$CO, etc.). In all cases the O$_2$ desorption behaves as a 0$^{th}$-order process and is very similar to that observed for pure O$_2$ ice. This can be expected as O$_2$ is less volatile than CO, i.e., at the desorbing temperatures of O$_2$ there is little CO left to influence the O$_2$ desorption process. The only noticeable difference compared to pure ices is that the peak temperature shifts to a lower value for O$_2$ and to a higher temperature for CO by about 0.5~K. This indicates that in the layered ice systems the binding energies change; for CO the binding energy increases whereas for O$_2$ it decreases. 
With the exception of the 20~L $^{13}$CO spectrum, all layered CO traces reveal a 0$^{th}$-order process, as can be seen in Figs.~3b and 3d for $^{12}$CO. Potentially, the presence of O$_2$ on top of the CO ice can change the desorption process for CO, but RAIR spectra of layered ices do not change much with respect to pure ices. The lack of co-desorption of CO with the O$_2$ suggests that the molecular interaction between these species is weak. \vspace{2mm} \noindent {\bf Mixed ices.} In Fig. 3c the TPD spectra are shown for different layer thicknesses of 1:1 $^{18}$O$_2$:$^{13}$CO mixed ices. The O$_2$ desorption again follows a 0$^{th}$-order process, but the band as a whole is slightly broadened, by 10-15\%, compared to pure O$_2$. In the $^{13}$CO spectra a shoulder around 29.6~K appears, i.e. at the O$_2$ desorption temperature, which suggests that a fraction of the CO desorbs from CO-CO binding sites as in pure ices whereas the rest desorbs from a mixed environment, e.g. CO-O$_2$ binding sites. The main isotopes (not shown in the figure) exhibit a similar behavior. Compared with previous experiments performed on CO-N$_2$ ices (\"Oberg et al. 2005, Bisschop et al. 2006) these results have been interpreted as follows (Fuchs et al. 2006): neither N$_2$ nor O$_2$ possesses an electric dipole moment, so they interact with CO mainly via quadrupole interactions. However, solid O$_2$ has a 4 to 6 times weaker quadrupole moment than N$_2$ and CO. Furthermore, N$_2$ and CO possess the same $\alpha$-crystalline structure below 30~K, but $\alpha$-phase O$_2$ has a different crystalline structure and also undergoes a phase change to the $\beta$-form at 23.5~K. The combination of these two effects can lead to the absence of mixing and co-desorption in the CO-O$_2$ system compared with CO-N$_2$, as observed in our experiments.
\begin{figure}[t] \includegraphics[angle=270,width=9cm]{6272fig4.ps} \caption{Model TPD spectra of pure ice (solid lines) compared with laboratory data (dashed lines) for 20, 40, 60 and 80~L ices. (a) $^{13}$CO, (b) $^{18}$O$_2$, (c) Model $^{13}$CO (dashed) and $^{18}$O$_2$ (solid) using recommended energy values and (d) variation of $\chi^2$ with energy for an exposure of 40 L, illustrating the well-determined minima. The effect of fits with and without using the cryo-pumping function $C_x(T)$ are shown.} \label{mod-1-fig} \end{figure} \begin{figure}[h] \centering \includegraphics[angle=270,width=9cm]{6272fig5.ps} \caption{Model TPD spectra of layered and mixed ices; (a) Experimental (dashed) and model (solid) results of $^{13}$CO in mixed ice; (b) Best fitting $^{13}$CO and $^{18}$O$_2$ model results for mixed ices; (c) Best fitting $^{18}$O$_2$ model results for x/40 L $^{18}$O$_2$ / $^{13}$CO and (d) best fitting model results for $^{13}$CO and $^{18}$O$_2$ in 1/1 layered ices.} \label{mod-2-fig} \end{figure} \section{Empirical kinetic model of CO-O$_2$ desorption} The experimental TPD results are interpreted in terms of an empirical kinetic model, describing the desorption kinetics and providing values for fundamental molecular properties to be used in astrochemical models. 
\subsection{The model} The kinetic desorption process for a species $X$ can be expressed by the well known Polanyi-Wigner type equations of the form, \begin{equation} R_{des}=\frac{dN_g(X)}{dt}=\nu_i\left[ N_s(X)\right]^i \, {\exp [- \frac{E_{d}(X)}{T}]} \end{equation} \noindent with $R_{des}$ the desorption rate (molecules cm$^{-2}$s$^{-1}$), $N_g(X)$ the number density (cm$^{-2}$) of molecules desorbing from the substrate, $\nu_i$ a pre-exponential factor (molecules$^{1-i}$ cm$^{2(i-1)}$s$^{-1}$) for desorption order $i$, $N_s(X)$ the number density (cm$^{-2}$) of molecules on the surface at a given time $t$, $E_{d}(X)$ the binding energy (in K) and $T$ the surface temperature (also in K). The desorption order reflects the nature of the desorption process and is usually expressed in integer values, although non-integer values are also allowed. Ideally, the pre-exponential factor and the desorption energy are determined independently. However, since both parameters have a similar effect on the fitting routine, a unique parameter set could not be found. Therefore, the pre-exponential factor is approximated by the harmonic-oscillator frequency of the solid, i.e., a vibrational frequency (s$^{-1}$) given by \begin{equation} \nu=\sqrt{\frac{2 \, N_s \,E_{d}(X)}{\pi^2 \,M}} \end{equation} \noindent with $N_s$ $\approx$ $10^{15}$ cm$^{-2}$ and $M$ the mass of species $X$ (Hasegawa et al. 1992). This equation gives values around 10$^{12}$ s$^{-1}$. In previous papers (see e.g., Bisschop et al. 2006, Collings et al. 2003) the pre-exponential factor has been taken as a free parameter. In the present work this parameter has been linked to $E_{d}(X)$ following Eq.~(2). For a 0$^{th}$-order desorption process, $\nu$ has been multiplied by the surface density $N_s$, i.e. $\nu_0 = \nu \cdot N_s = \nu \cdot N_s(t=0)$. For a 1$^{st}$-order process the vibrational frequency is taken as a pre-exponential factor, i.e. $\nu_1 = \nu$.
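Eq.~(2) is easy to evaluate numerically; note that with $E_d$ expressed in K a factor $k_B$ is implied. The sketch below (our own check, with CODATA constant values) reproduces the $\sim$10$^{12}$ s$^{-1}$ scale quoted above and, after multiplication by $N_s$, the 0$^{th}$-order pre-exponential factors listed in Table 2:

```python
import math

K_B = 1.380649e-23       # Boltzmann constant, J K^-1
AMU = 1.66053907e-27     # atomic mass unit, kg
N_S_SI = 1.0e15 * 1.0e4  # N_s ~ 1e15 cm^-2, converted to m^-2

def nu(e_d_kelvin, mass_amu):
    """Eq. (2): nu = sqrt(2 N_s k_B E_d / (pi^2 M)), in s^-1."""
    return math.sqrt(2.0 * N_S_SI * K_B * e_d_kelvin
                     / (math.pi ** 2 * mass_amu * AMU))

nu_co = nu(858.0, 28.0)   # pure 12CO: ~7e11 s^-1
nu0_co = nu_co * 1.0e15   # 0th order: nu0 = nu * N_s -> ~7.2e26 (cf. Table 2)
```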
Thus the only parameter that is floating in the present model fit to the experimental data is the binding energy $E_{d}(X)$ of the species. \begin{table*}[t] \caption{Best fitting model parameters (recommended values) for CO and O$_2$.} \label{para-table} \centering \begin{tabular}{c c c c c c} \hline\hline Type of ice & Reaction & Rate equation & $\nu$ & $E_{d}$ & $i$ \\ & & & [molecules $^{1-i}$ & & \\ & & & cm$^{2(i-1)}$s$^{-1}$] & [K] & \\ \hline \multicolumn{2}{l}{$^{12}$CO-$^{16}$O$_2$} \\ Pure & CO(s) $\rightarrow$ CO(g) & $\nu_0 \,\exp(-E_d/T)$ & 7.2E26 &858$\pm$15 & 0 \\ Pure & O$_2$(s) $\rightarrow$ O$_2$(g) & $\nu_0 \,\exp(-E_d/T)$ & 6.9E26 &912$\pm$15 & 0 \\ Layered & CO(s) $\rightarrow$ CO(g) & $\nu_0 \,\exp(-E_d/T)$ & 7.2E26 &856$\pm$15 & 0 \\ Layered & O$_2$(s) $\rightarrow$ O$_2$(g) & $\nu_0 \,\exp(-E_d/T)$ & 6.9E26 &904$\pm$15 & 0 \\ Mixed & CO(s) $\rightarrow$ CO(g) & $\nu_1[{\rm CO}] \,\exp(-E_d/T)$ & 7.6E11 &955$\pm$18 & 1 \\ & CO(s) $\rightarrow$ CO(g) & $\nu_0 \, \exp(-E_d/T)$ & 7.2E26 &865$\pm$18 & 0 \\ Mixed & O$_2$(s) $\rightarrow$ O$_2$(g) & $\nu_0 \, \exp(-E_d/T)$ & 6.8E26 &896$\pm$18 & 0 \\ \\ \multicolumn{2}{l}{$^{13}$CO-$^{18}$O$_2$} \\ Pure & CO(s) $\rightarrow$ CO(g) & $\nu_0 \,\exp(-E_d/T)$ &7.0E26 & 854$\pm$10 & 0 \\ Pure & O$_2$(s) $\rightarrow$ O$_2$(g) & $\nu_0 \,\exp(-E_d/T)$ &6.6E26 & 925$\pm$10 & 0 \\ Layered & CO(s) $\rightarrow$ CO(g) & $\nu_0 \,\exp(-E_d/T)$ &7.1E26 & 860$\pm$15$^*$ & 0 \\ Layered & O$_2$(s) $\rightarrow$ O$_2$(g) & $\nu_0 \,\exp(-E_d/T)$ &6.5E26 & 915$\pm$10 & 0 \\ Mixed & CO(s) $\rightarrow$ CO(g) & $\nu_1[{\rm CO}] \,\exp(-E_d/T)$ &7.5E11 & 965$\pm$10 & 1 \\ & CO(s) $\rightarrow$ CO(g) & $\nu_0 \, \exp(-E_d/T)$ &7.2E26 & 890$\pm$10 & 0 \\ Mixed & O$_2$(s) $\rightarrow$ O$_2$(g) & $\nu_0 \, \exp(-E_d/T)$ &6.2E26 & 915$\pm$10 & 0 \\ \\ Pump \\ & CO(g) $\rightarrow$ CO(pump) & k$_{pump}$[CO(g)] & 0.00024 & - & 1 \\ & O$_2$(g) $\rightarrow$ O$_2$(pump) & k$_{pump}$[O$_2$(g)] & 0.00036 & - & 1 \\ \hline 
\end{tabular} \\ $^*$ Average value for ice thicknesses greater than 30~L. For ices between 20 and 30~L use 888$\pm$15~K. \end{table*} \vspace{2mm} \noindent Starting from Eq.~(1), the kinetics are represented in terms of two coupled differential equations \begin{equation} \frac{dN_{s}(X)}{dt} = - \sum_{i=0,1} \nu_{i} \left[ N_s(X) \right]^i {\exp [- \frac{E_{d,i}(X)}{T}]} \end{equation} \pagebreak \begin{equation*} \frac{dN_{g}(X)}{dt} = \sum_{i=0,1} \nu_{i} \left[ N_s(X) \right]^i {\exp [- \frac{E_{d,i}(X)}{T}]} \end{equation*} \begin{equation} - C_x(T)k_{pump}N_{g}(X) \label{eq-des} \end{equation} \noindent representing the density change of the solid phase (s) and gas phase (g) molecules. The last term of Eq.~(\ref{eq-des}) gives the removal of gaseous species from the vacuum chamber by the pump. Here, $k_{pump}$ is the pumping constant in s$^{-1}$ and $C_x(T)$ is a dimensionless cryo-pumping factor between 1 and 7 as discussed in \S\ref{exp-sec}. The pumping constant, $k_{pump}$, is constrained by fitting the pump down curve of both species, i.e., the slope of the TPD curve at temperatures higher than the desorption peak temperature. In most cases, the peak tail is not well reproduced due to the presence of other cold surfaces in the system. Very small amounts of gas that have missed the target can get deposited and can be desorbed afterwards at a temperature different from that of the target surface. A direct consequence is a deviation in model curves from the experimental plots at high temperatures. However, this does not affect the determination of the binding energies that are mainly sensitive to the peak value and the leading edge. 
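A minimal numerical integration of the solid-phase rate equation illustrates how a 0$^{th}$-order TPD peak arises: the rate grows exponentially with $T$ until the ice is exhausted, so the peak coincides with exhaustion. The sketch below is a deliberate simplification of ours (pure CO only, simple Euler stepping, gas-phase and pumping terms of Eq.~(\ref{eq-des}) omitted), using the Table 2 values:

```python
import math

NU0 = 7.2e26       # 0th-order pre-exponential [molecules cm^-2 s^-1] (Table 2)
E_D = 858.0        # pure 12CO binding energy [K] (Table 2)
BETA = 0.1 / 60.0  # heating rate: 0.1 K min^-1, in K s^-1

def tpd_peak(n0=4.0e16, t_start=14.0, dt=1.0):
    """Euler-integrate dNs/dt = -nu0 exp(-Ed/T) along the linear ramp
    T = t_start + beta*t.  For 0th-order kinetics the desorption rate
    rises monotonically, so the TPD peak is where Ns reaches zero."""
    ns, temp = n0, t_start
    while ns > 0.0:
        ns -= NU0 * math.exp(-E_D / temp) * dt
        temp += BETA * dt
    return temp

print(f"40 L pure CO peak: {tpd_peak():.1f} K")  # in the observed 28-29 K range
```

The same integration also reproduces the coinciding leading edges and the shift of the peak to higher temperature for thicker ices described in Section 3.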
In order to calculate the temperature dependent rate measured in the TPD experiments, $dn/dt$ is written as \begin{equation} \frac{dn}{dt}=\frac{dn}{dT}\frac{dT}{dt}=\beta\frac{dn}{dT} \end{equation} \noindent with $dn/dT$ the temperature dependent rate (molecules cm$^{-2}$K$^{-1}$), and $\beta$ = $dT/dt$ the constant TPD heating rate (K s$^{-1}$), which corresponds to 0.1 K/minute in the present study unless stated otherwise. \vspace{2mm} \noindent To extract the binding energy from the observations, a standard $\chi^2$ minimization is used, where $\chi^2$ is the sum of the squared differences between the experimental points and the calculated ones. In this procedure, first a 0$^{th}$- or 1$^{st}$-order process is assumed and subsequently the corresponding parameters are optimized. \subsection{Results} The results are shown in Figs.~\ref{mod-1-fig} and \ref{mod-2-fig} and Table 2 summarizes the model equations with best-fit parameters. In nearly all experiments a linear dependence between ice thicknesses and desorption energies has been found. \vspace{2mm} \noindent {\bf Pure ices.} For pure $^{18}$O$_2$ five experiments are performed using different thicknesses (20 L, 40 L, 50 L, 60 L and 80 L) and the best fits to these experiments (Fig.~\ref{mod-1-fig}b) give 932, 927, 924, 928 and 918 K, respectively, corresponding to an average value of 925 $\pm$ 10 K. The error on the mean value is chosen to give a conservative estimate of these parameters. Similarly, for pure CO (Fig.~\ref{mod-1-fig}a) the binding energy is 854 $\pm$ 10 K, which is consistent with the value of 855~K reported by {\" O}berg et al. (2005) and Bisschop et al. (2006). Both pure CO and pure O$_2$ ice exhibit 0$^{th}$-order kinetics. The TPD spectra are reproduced very well with this model using only one free parameter, as is demonstrated in Figs.~\ref{mod-1-fig}a and b, where the experimental and model results are plotted on top of each other.
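The grid search over binding energies can be illustrated with a toy example in which synthetic "data" are generated from an assumed 0$^{th}$-order leading edge (where depletion is negligible) and $\chi^2$ is scanned over a grid of $E_d$ values; the pre-exponential factor and temperature grid are illustrative choices, not the measured quantities.

```python
import math

# Toy chi^2 grid search for E_d: synthetic "data" are generated from an
# assumed 0th-order leading edge (no depletion); nu0 and the temperature
# grid are illustrative choices, not the measured quantities.
nu0 = 7.2e26                                   # cm^-2 s^-1
T_grid = [24.0 + 0.2 * i for i in range(16)]   # 24.0-27.0 K leading edge

def model(E_d):
    return [nu0 * math.exp(-E_d / T) for T in T_grid]

data = model(858.0)                            # synthetic measurement, E_d = 858 K

def chi2(E_d):
    # sum of squared differences between calculated and "experimental" points
    return sum((m - d) ** 2 for m, d in zip(model(E_d), data))

best = min(range(840, 876), key=lambda E: chi2(float(E)))
print("best-fit E_d =", best, "K")             # -> best-fit E_d = 858 K
```

In the real analysis the model curve comes from the full kinetic integration rather than the bare Arrhenius expression, but the minimization step is the same.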
In Fig.~\ref{mod-1-fig}d the variations of $\chi^2$ for the experiments of 40 L for the pure CO and O$_2$ are shown. The energy values for which $\chi^2$ is minimum are taken as the final values. This yields 854~K for $^{13}$CO and 927~K for $^{18}$O$_2$. As mentioned in the experimental section we have explicitly taken into account the effect of cryo-pumping. A systematic deviation of 2 to 3~K in the desorption energy is induced when this factor is neglected. This is illustrated in Fig.~\ref{mod-1-fig}d. Inclusion of the $C_x(T)$ function generally improves the fit substantially.\\ $^{12}$CO and $^{16}$O$_2$ show a similar desorption behavior and the binding energies are 858 $\pm$ 15~K and 912 $\pm$ 15~K, respectively. Thus $^{12}$CO has the same binding energy as $^{13}$CO within the errors. For $^{16}$O$_2$ ices the binding energy is lowered by 1.6\% with respect to $^{18}$O$_2$ on average. Because of the smaller data set for the main isotope the errors in the fitted parameters of the $^{12}$CO and $^{16}$O$_2$ isotopes are larger than those for $^{13}$CO and $^{18}$O$_2$. \vspace{2mm} \noindent {\bf Layered ices.} The desorption of $^{18}$O$_2$ in 1/1 (Fig.~\ref{mod-2-fig}d) and x/40 (Fig.~\ref{mod-2-fig}c) layered ices is 0$^{th}$-order and the binding energy is determined as 915 $\pm$ 10 K, which is slightly lower than for the pure ices. $^{16}$O$_2$ shows a similar behavior with a binding energy of 904 $\pm$ 10 K. $^{13}$CO desorption is a 0$^{th}$-order process and only at low coverages a 1$^{st}$-order process may be involved. Since there is no signature in the TPD spectra (nor in the RAIR spectra) of mixing or segregation and since the spectrum looks very similar to pure ices it is modeled in 0$^{th}$-order with an average binding energy of 860 $\pm$ 15 K for ices thicker than 30~L. The correctness of this procedure is confirmed by the TPD spectra of $^{12}$CO which exhibit a 0$^{th}$-order desorption with a binding energy of 856 $\pm$ 15 K. 
\vspace{2mm} \noindent {\bf Mixed ices.} O$_2$ in mixed ices (Fig.~\ref{mod-2-fig}c) is again 0$^{th}$-order with a binding energy similar to that in layered ices. In mixed ices, TPD spectra of $^{13}$CO are broader than for pure ices and a shoulder around the desorption temperature of O$_2$ is observed. This suggests that CO is desorbing from a wider range of binding sites, which is also supported by RAIR spectra that are broader than the pure ice spectra. Both the peak and shoulder are fitted very well using a combination of 0$^{th}$- and 1$^{st}$-order processes (see Fig.~\ref{mod-2-fig}a). About 50\% of the CO desorbs from CO-CO binding sites as in the pure ices, but with a binding energy of 890 $\pm$ 10 K. The remaining CO molecules desorb through a 1$^{st}$-order process from a mixed environment, i.e. including CO-O$_2$ binding sites, with a binding energy of 965 $\pm$ 5~K. Table 2 has separate entries for the $^{13}$CO-$^{18}$O$_2$ and $^{12}$CO-$^{16}$O$_2$ isotopomers. Independent of the isotopomer used, it can be concluded that the binding energies of CO increase in the following order: $E_{\rm pure}$ $\lesssim$ $E_{\rm layered}$ $<$ $E_{\rm mixed}$. For O$_2$ the order is inverted: $E_{\rm pure}$ $>$ $E_{\rm layered}$ $\gtrsim$ $E_{\rm mix}$. \vspace{2mm} \noindent {\bf The effect of $\beta$ on the binding energies.} In order to rule out any dependence on the adopted heating rate, experiments have been performed for $\beta$=0.1, 0.2 and 0.5 K min$^{-1}$ on 40~L CO ices. This is of relevance when applying laboratory values to interstellar warm-up time scales. The calculated $\nu$ and $E_{d}$ values for the 0.5 K min$^{-1}$ experiment have been used to predict the experimental TPD curves for 0.2 and 0.1 K min$^{-1}$. The deviation between the calculated and experimental values is within the experimental error, i.e. $\Delta E_{d}$ = $\pm$ 15~K, with a slight tendency for lower inferred $E_{d}$ at lower heating rates.
\subsection{Sticking probability} In addition to the desorption rates, the sticking coefficients also play an important role in describing freeze-out onto interstellar grains. The measurement procedure has been extensively discussed in Bisschop et al. (2006) and Fuchs et al. (2006). It is important to note that the measurements only provide an `uptake' coefficient $\gamma$ rather than a sticking coefficient, $S$, as only the net rate of molecules sticking to and leaving the surface can be given. Consequently, the values given in Fig.~\ref{stick-fig} represent lower limits for the sticking coefficients of O$_2$ and CO. It is found that freeze-out dominates for O$_2$ on O$_2$, CO on CO and O$_2$ on CO up to 25~K, with lower limits on the sticking probabilities between 0.85 and 0.9, i.e., close to unity. Under real astrophysical conditions this value will increase and approach unity at 10 K. \begin{figure}[t] \includegraphics[angle=270,width=9cm]{6272fig6.ps} \caption{Uptake coefficient $\gamma$ representing the lower limits on the sticking coefficients as a function of temperature below 25~K. } \label{stick-fig} \end{figure} \section{Astrophysical implications} The scenarios put forward to explain the absence of gaseous O$_2$ in interstellar clouds can roughly be divided into two categories: time-dependent models of cold cores invoking freeze-out of oxygen (in all its forms) onto grains (e.g., Bergin et al.\ 2000, Aikawa et al.\ 2005), and depth-dependent models of large-scale warm clouds invoking deep penetration by UV radiation in a clumpy structure enhancing the O$_2$ photodissociation (e.g., Spaans \& van Dishoeck 2002). The laboratory experiments presented here are relevant to the first scenario. In these models, atomic O is gradually transformed with time into O$_2$ in the gas phase. Freeze-out of oxygen occurs on a similar timescale, and any O arriving on the grains is assumed to be converted efficiently into H$_2$O ice, which remains on the surface.
No formation of solid O$_2$ on the grain is expected because of the presence of atomic hydrogen, which is much more mobile at low temperatures. The H$_2$O ice formation lowers the gaseous [O]/[C] ratio and potentially leaves only a small abundance of O$_2$ in the gas. The amount of solid O$_2$ and the remaining fraction of gaseous O$_2$ of $\sim 10^{-8} - 10^{-7}$ with respect to H$_2$ depend sensitively on the temperature at which O$_2$ is frozen out. A related question is to what extent O$_2$ differs in this respect from CO, since CO is readily observed in the gas phase and in solid form. \begin{figure*}[t] \includegraphics[width=17cm]{6272fig7-comp.ps} \caption{Astrophysical simulations of gaseous CO and O$_2$ abundances relative to the total gas + solid abundances. Panel (a) shows pure CO and O$_2$ in steady-state, panel (b) 0$^{th}$-order desorption for heating rates of 1~K/1000 yr, panel (c) the same but for 1~K/10$^6$ yr, and panel (d) shows the desorption of CO from mixed (layered) ices for 1~K/10$^3$ yr. In panels (b)-(d) the solid line indicates accretion corresponding to $n$(H$_2$)=10$^3$ cm$^{-3}$, and the dashed line corresponds to accretion for $n$(H$_2$)=10$^7$ cm$^{-3}$. For panels (b)--(d) the adopted initial gaseous abundance of CO is $10^{-4}$, and for O$_2$ it is $10^{-4}$ or $10^{-6}$ with respect to H$_2$. } \label{astro-fig} \end{figure*} Astrochemical models usually assume 1$^{st}$-order desorption with rates in cm$^{-3}$ s$^{-1}$. Appendix A (see online material) summarizes the equations used to apply our laboratory results to astrochemical models for both 1$^{st}$ and 0$^{th}$ order kinetics. The latter is more appropriate for thick ices in interstellar clouds and consistent with our laboratory data. We consider the simple case of pure desorption and accretion of $^{12}$CO and $^{16}$O$_2$, without any gas-phase or solid-state chemistry.
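The balance of 0$^{th}$-order desorption against accretion in steady state (cf. Fig.~\ref{astro-fig}a) can be estimated with a short sketch; the kinetic-theory accretion flux, the gas temperature of 10 K and the abundances are assumptions of this example, not part of the laboratory analysis.

```python
import math

# Rough steady-state estimate: the temperature at which the 0th-order
# desorbing flux equals a kinetic-theory accretion flux.  Gas temperature,
# abundances and the flux expression are assumptions of this sketch.
k_B, amu = 1.380649e-16, 1.66054e-24          # cgs units

def desorption_temperature(nu0, E_d, n_gas, mass_amu, T_gas=10.0):
    """Solve nu0*exp(-E_d/T) = (1/4) n v_th for T (both sides in cm^-2 s^-1)."""
    v_th = math.sqrt(8.0 * k_B * T_gas / (math.pi * mass_amu * amu))
    flux_acc = 0.25 * n_gas * v_th
    return E_d / math.log(nu0 / flux_acc)

n_H2 = 1e3                                     # cm^-3
T_CO = desorption_temperature(7.2e26, 858.0, 1e-4 * n_H2, 28.0)
T_O2 = desorption_temperature(6.9e26, 912.0, 1e-4 * n_H2, 32.0)
print(f"CO: {T_CO:.1f} K,  O2: {T_O2:.1f} K")
```

Raising $n$(H$_2$) pushes both temperatures up by a few K, reproducing the density trend of Fig.~\ref{astro-fig}a, and O$_2$ comes out slightly less volatile than CO, as in the experiments.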
Thus, the main difference from the equations in \S 4.1 is that desorption is now balanced by a freeze-out (accretion) term, rather than a pumping factor. Also, the laboratory heating rate can be replaced by a heating rate appropriate for the astrophysical object under consideration, for example a protostar heating its envelope. Our model only considers classical dust grains with a radius of 0.1~$\mu$m. The density $n$(H$_2$) is varied to simulate the effects for clouds of different densities. In 0$^{th}$ order, the results depend on the initial number densities of the CO and O$_2$ molecules. For CO, an abundance of $10^{-4}$ relative to H$_2$ is chosen, initially all on the grains. For O$_2$, two options are considered. The first option is that most oxygen has been converted into O$_2$ at an abundance of $10^{-4}$, as found in gas-phase models and the maximum allowed by the observed upper limits on solid O$_2$. The second case considers a much lower O$_2$ abundance of 10$^{-6}$, as found in gas-grain models. Fig.~\ref{astro-fig}a shows the results for pure CO and pure O$_2$ ices in the simplest case of steady-state. The temperature at which these species desorb depends strongly on density: the higher the density, the faster the accretion rate, which needs to be balanced by desorption. Because of the larger binding energy of O$_2$, it always desorbs at $\sim$2 K higher temperatures than CO. Fig.~\ref{astro-fig}b shows the result for 0$^{th}$ order desorption at a heating rate typical for a proto-stellar environment of 1 K/1000 yr (Lee et al. 2005). For abundances of CO and O$_2$ of $10^{-4}$, both species desorb at $\sim$2.5 K higher temperatures than in the steady-state case at low densities; at high densities, there is little difference. If the O$_2$ abundance is lowered to $10^{-6}$ the 0$^{th}$ order curve shifts to lower temperature by $\sim$2 K.
Thus, in contrast with the steady-state or equal abundance cases, O$_2$ can desorb at {\it lower} temperatures than CO if its abundance is significantly lower. This reversal is a specific feature of the 0$^{th}$ order desorption and is not found in the 1$^{st}$ order formulation (see Appendix A in the online version). An important question is whether such a situation is astrochemically relevant. For O$_2$ abundances as low as $10^{-6}$, the coverage becomes less than a mono-layer and 1$^{st}$ order kinetics or other effects due to the peculiarities of the ice (e.g. polar vs. apolar environment, compact vs. porous ice) will determine the desorption behavior. Fig.~\ref{astro-fig}c shows the abundance curve of the species for a heating rate of 1~K/10$^6$ yr, as expected for a cold core at near constant temperature. Since the time scale is increased by 3 orders of magnitude with respect to the previously considered heating rate there is simply more time for the molecules to desorb and consequently the entire profile shifts to lower temperatures by 0.5 -- 2.5 K depending on the density of the species. Finally, Fig.~\ref{astro-fig}d shows the abundance curves of CO and O$_2$ from mixed ices. The graphs of mixed and layered ices are nearly identical, thus the layered ices are not shown separately. The relative abundance curve is shifted to a higher temperature by about 0.5~K compared with the pure ices because of the slightly higher binding energies. Overall, Fig.~\ref{astro-fig} shows that the differences in the desorption behavior of O$_2$ and CO with temperature are very minor for a wide range of realistic cloud densities and abundances. Thus, it is unlikely that a large reservoir of solid O$_2$ is hidden in the bulk of molecular clouds which show abundant gaseous CO but no O$_2$, unless the O$_2$ is in a more strongly bound ice environment. Conversely, any region with significant CO freeze-out should also have some solid O$_2$. 
As noted in the introduction, the best limits come from analysis of the weak solid $^{13}$CO band, which gives upper limits of 100\% on the amount of O$_2$ that can be mixed with CO, i.e., about $(0.5-1)\times 10^{-4}$ with respect to H$_2$ (Boogert et al.\ 2003, Pontoppidan et al.\ 2003). Direct freeze-out of the gas-phase O$_2$ abundances inferred from the ODIN measurements would give much lower limits. The small differences between CO and O$_2$ desorption found here may become relevant in the interpretation of high spatial resolution observations of individual cold cores with temperatures in the 10--20 K range, such as the pre-stellar core B68. \section{Concluding remarks} The desorption processes of CO-O$_2$ pure, mixed and layered ice systems have been investigated experimentally and modeled using an empirical kinetic model. The resulting molecular parameters can be used to model the desorption behavior of these ices under astrophysical conditions. We find that both pure $^{16}$O$_2$ and pure $^{12}$CO desorb through 0$^{th}$-order processes with binding energies of 912 $\pm$ 15 K and 858 $\pm$ 15 K, respectively. In mixed and layered ices the $^{16}$O$_2$ binding energy decreases to 896 $\pm$ 18 K and 904 $\pm$ 15~K, respectively. The $^{12}$CO desorption from layered ices is 0$^{th}$-order with a binding energy of 856 $\pm$ 15~K. In mixed ices a combination of 0$^{th}$- and 1$^{st}$-order kinetics is found, with desorption energies of 865 $\pm$ 18 K and 955 $\pm$ 18 K, respectively. For $^{18}$O$_2$ and $^{13}$CO, these numbers change by a few percent. O$_2$ is less volatile than CO, but CO does not co-desorb with O$_2$. This is in contrast with the CO--N$_2$ ice system, for which Bisschop et al. (2006) found that N$_2$ is more volatile than CO and that significant amounts of N$_2$ co-desorb with CO. The sticking coefficients of CO and O$_2$ at temperatures below 20~K are close to unity, with 0.85 as a lower limit.
In cold clouds ($T_d<18$~K), O$_2$ can be frozen out onto the grains, but the relative difference in desorption between CO and O$_2$ is so small that this is unlikely to be the explanation for the missing gaseous O$_2$ in interstellar clouds which show significant gaseous CO. \begin{acknowledgements} We are grateful to F.A. van Broekhuizen, S.E. Bisschop, K.I. {\"O}berg and S. Schlemmer for useful discussions and help in the construction of the experimental setup. We acknowledge funding through the Netherlands Research School for Astronomy (NOVA), FOM and a Spinoza grant from the Netherlands Organization for Scientific Research (NWO). K.A. thanks the Greenberg family for a Greenberg research fellowship and the ICSC-World Laboratory Fund for additional funding. \end{acknowledgements}
\section{Introduction} Cavity optomechanics \citep{7} has been playing an important role in the exploration of quantum mechanical systems, especially the coupling between the electromagnetic field of the cavity and a mechanical oscillator \citep{1,2,3}. The photons inside an ultrahigh finesse cavity are capable of displacing the mechanical mirror through radiation pressure, and this has been a subject of early research in the context of nanomechanical cantilevers \citep{4,5,6}, vibrating microtoroids \citep{8}, membranes and Bose-Einstein condensates \citep{9}. Recent advances in laser cooling and high-finesse nanomechanical mirrors have made it possible to study ultracold atoms with the tools of cavity quantum electrodynamics. The experimental realization of quantum entanglement and gravitational wave detection \citep{10,11} in the last few years has added new interest to the field of optomechanics. A system of an ensemble of N atoms coupled to a single optical mode has been a recurring theme in quantum optics since the work of Dicke \citep{12}, which revealed a quantum phase transition and superradiant phases. The phase transition from a superfluid to a self-organized phase, above a certain threshold pump strength, when a laser-driven BEC \cite{46, 47, 48, 49} is coupled to the vacuum field of the cavity, realizes the basic Dicke model \citep{35, 36, 37}. The ultracold atoms self-organize into a checkerboard pattern trapped in the interference pattern of the pump and cavity beams \citep{13,14,15}. This self-organization sets in at the onset of superradiance in an effective non-equilibrium Dicke model. Since then, many theoretical proposals for single-mode, multi-mode \citep{38} and optomechanical Dicke models have been made, which are expected to exhibit interesting physics \citep{39} and applications in the fields of quantum simulation and quantum information \citep{40, 41, 42, 43}.
In the present cold-atom setting, the splitting of the two distinct momentum states of the BEC is set by the atomic recoil energy, and this enables the phase transition to be observed with optical light. This is quite similar to the theoretical approach proposed by Dimer $\textit{et al.}$ \citep{17} for attaining Dicke phase transitions using Raman pumping schemes between hyperfine levels \citep{44}.\\ In this paper, we propose an optomechanical system consisting of an elongated, cigar-shaped BEC of N two-level atoms interacting with light in a high-finesse optical cavity with a movable mirror. Such a system can be used to investigate the optomechanical effects on the second-order phase transition to a superradiant regime. We study the dynamics of the system, bring out all the possible phases by analytical arguments, and further propose a modification of the system that can be used to alter its phase portraits and transition point. \section{The Model} We consider a Fabry-Perot optical cavity with one fixed and one movable high-finesse mirror of mass $M$, capable of oscillating freely with frequency $\omega_m$. A two-level, cigar-shaped BEC with transition frequency $\omega_a$ is trapped within the cavity. The optical cavity is subjected to a transverse pump beam with Rabi frequency $\Omega_p$, wave vector $k$ and frequency $\omega_p$. In order to avoid population inversion, the latter is far detuned from the atomic transition $\omega_a$. Absorption and emission of cavity photons generates an effective two-level spin system, with spin down and spin up corresponding to the ground $\Ket{0,0}$ and excited states $\Ket{\pm k, \pm k}$ respectively.
The effective Hamiltonian of such a system can be written as ($\hbar$= 1 throughout the paper) \citep{18, 21, 22}:- \begin{eqnarray} H&=& \omega_a S_z+ \omega a^{\dagger} a+ \omega_m b^{\dagger} b+ \delta_0 a^{\dagger} a (b+ b^{\dagger}) \nonumber\\ &+& g(a+ a^{\dagger})(S_{+}+ S_{-})+ US_z a^{\dagger}a, \end{eqnarray} \begin{figure}[h!] \includegraphics[width=0.45\textwidth]{pic.jpg} \caption{The schematic representation of the model considered. One of the mirrors is movable under the radiation pressure of the cavity beams. The optical cavity has a decay rate of $\kappa$ and the mechanical mirror has a damping rate $\Gamma_m$.} \centering \end{figure} where $\omega= \omega_c- \omega_p- N(5/8)g_0^2/ (\omega_a- \omega_c)$ \citep{22}, $\omega_c$ being the cavity frequency. $U$ represents the back reaction of the cavity light on the BEC and is given by $U= -(1/4)g_0^2/ (\omega_a- \omega_c)$, which is generally negative; however, both signs are achievable experimentally and we shall deal with both in the present paper. $\delta_0$ is the optomechanical single-photon coupling strength, which represents the optical frequency shift produced by a zero-point displacement. $\delta_0$ can be identified as $\omega x_{ZPF}/{L}$, $L$ being the cavity length and $x_{ZPF}$ denoting the mechanical zero-point fluctuation (the width of the mechanical ground-state wave function) \citep{50}. $\omega_m$ represents the frequency of the mechanical mirror, which generates phonons with $b (b^{\dagger})$ as the annihilation (creation) operator. In the experiments of \citep{14}, both the pump and cavity were red detuned from the atomic transition and hence $U$ was considered negative for the observed Dicke phase transition. $a (a^{\dagger})$ is the annihilation (creation) operator of the optical mode, while $b (b^{\dagger})$ plays the same role for the mechanical mode; both obey the commutation relation $[a (b), a^{\dagger} (b^{\dagger})]= 1$.
$S_{+}, S_{-}$ and $S_z$ are the spin operators obeying the relations [$S_+, S_-$]= $2S_z$ and [$S_{\pm}, S_z$]= $\mp S_{\pm}$. $\textbf{S}= (S_x, S_y, S_z)$ is the effective collective spin of length $N/2$. The co- and counter-rotating matter-light couplings have been taken equal throughout the paper and are denoted by $g$. The schematic representation of the model considered in this paper is shown in Fig. (1). \\ In the thermodynamic limit, the semiclassical equations for our system take the form: - \begin{equation} \dot{S_{-}}= -i(\omega_a+ U \mid a \mid ^2) S_{-}+ 2ig(a+ a^{\dagger})S_z, \end{equation} \begin{equation} \dot{S_z}= -ig(a+ a^{\dagger})S_{+}+ ig(a+ a^{\dagger})S_{-}, \end{equation} \begin{eqnarray} \dot{a}&=& -[\kappa+ i(\omega+ US_z+ \delta_0 (b+ b^{\dagger}))]a\nonumber\\ &-& ig(S_{+}+ S_{-}), \end{eqnarray} \begin{equation} \dot{b}= -i\omega_m b- i\delta_0 \mid a \mid^2- \Gamma_m b, \end{equation} where $\kappa$ and $\Gamma_m$ are the cavity decay rate and the damping rate of the mechanical mode, respectively. We employ the steady-state analysis $(\dot{S}_-= \dot{S_z}= \dot{a}= \dot{b}= 0)$ of the above equations to determine the critical atom-cavity coupling strength. We adopt a numerical approach in this paper to determine the critical value $\lambda_c$ ($g\sqrt{N_c}$). The analytical treatment uses c-number variables and quantum fluctuations, and one can refer to \citep{21} for the complete procedure in the absence of the back-reaction term. $\lambda$ ($g\sqrt{N}$) $>$ $\lambda_c$ ($g\sqrt{N_c}$) marks the onset of superradiance, which was first observed experimentally by Tilman Esslinger and his group \citep{14} for BEC atoms in 2010. \section{Superradiant phases and Phase Portraits} To study the dynamics of the present system, we employ the same mathematical technique as \citep{22, 27} and define $a= a_1+ ia_2$, $b= b_1+ ib_2$ and $S_{\pm}= S_x \pm iS_y$. Substituting these into the above semiclassical equations (Eq.
(2)--(5)) and comparing the real and imaginary parts on both sides yields: - \begin{equation} (\omega_a+ U\mid a \mid^2)S_y= 0, \end{equation} \begin{equation} (\omega_a+ U\mid a \mid^2)S_x- 4ga_1S_z= 0, \end{equation} \begin{equation} -\kappa a_1+ (\omega+ US_z+ 2\delta_0 b_1)a_2= 0, \end{equation} \begin{equation} \kappa a_2+ (\omega+ US_z+ 2\delta_0 b_1)a_1+ 2gS_x= 0. \end{equation} \begin{equation} b= -\Big(\frac{\delta_0 \mid a \mid ^2 \omega_m}{\Gamma_m^2+ \omega_m^2}\Big) - i\Big(\frac{\delta_0 \mid a \mid^2 \Gamma_m}{\Gamma_m^2+ \omega_m^2}\Big) \end{equation} Clearly, from Eq. (6), either $S_y$= 0 or $(\omega_a+ U\mid a \mid^2)$= 0. We define the case arising from the first condition as the superradiant phase A (SRA) and the second condition as the superradiant phase B (SRB). SRA represents the quantum phase transition from normal (N) or inverted (I) states to a self-organized state defined by $S_x$ and $S_z$ only. Similarly, SRB represents the transition from the mixed states (N+ I) to a superradiant phase defined by all the components of $\textbf{S}$. The difference between the transition from mixed states (N+ I) in the SRB phase and that from the normal (N) or inverted (I) state in the SRA phase can be understood from the phase diagrams. Of course, with increasing back-action parameter, we expect a reduced phase-transition region. Moreover, the SRB phase condition limits $U$ to negative values, since the phase is defined by $(\omega_a+ U\mid a \mid^2)$= 0. However, what might be the effect of the mechanical mirror motion on the phase transition of the system? In the absence of the back-reaction parameter $U$, Refs. \citep{21, 27} suggest no change in the critical transition point $\lambda_c$ for the SRA phase. However, in the presence of the back-reaction term and in the SRB phase, what role the mirror frequency can play in defining the phase portraits is a question to be analyzed in this paper.
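For orientation, the semiclassical equations (Eqs. (2)-(5)) can also be integrated directly in time. The following sketch uses a simple Euler scheme with $U$= 0, a weak optomechanical coupling and illustrative parameter values; it is only meant to show that the photon number stays microscopic below the critical coupling and becomes macroscopic above it, not to reproduce the full phase portraits.

```python
import math

# Numerical sketch of the semiclassical equations (2)-(5) with U = 0 and a
# weak optomechanical coupling; all parameter values (in MHz, time in us)
# are illustrative assumptions of this example.
def evolve(lam, N=1000, t_end=60.0, dt=1e-3,
           omega_a=0.05, omega=20.0, kappa=8.1,
           omega_m=1.0, Gamma_m=0.05, delta0=0.05, U=0.0):
    """Euler integration; returns the largest photon number |a|^2 reached."""
    g = lam / math.sqrt(N)
    S_m, Sz, a, b = 10.0 + 0j, -N / 2.0, 0j, 0j   # small initial spin tilt
    max_n = 0.0
    for _ in range(int(t_end / dt)):
        ax = a + a.conjugate()                    # a + a^dagger
        dS_m = -1j * (omega_a + U * abs(a) ** 2) * S_m + 2j * g * ax * Sz
        dSz = (-2.0 * g * ax * S_m.imag).real     # = -ig ax S+ + ig ax S-
        da = -(kappa + 1j * (omega + U * Sz + delta0 * (b + b.conjugate()))) * a \
             - 1j * g * (S_m + S_m.conjugate())
        db = -(Gamma_m + 1j * omega_m) * b - 1j * delta0 * abs(a) ** 2
        S_m, Sz, a, b = S_m + dS_m * dt, Sz + dSz * dt, a + da * dt, b + db * dt
        max_n = max(max_n, abs(a) ** 2)
    return max_n

# below vs above the critical coupling (~0.54 MHz for these numbers)
n_below, n_above = evolve(0.3), evolve(2.0)
print(f"max |a|^2: below threshold {n_below:.2e}, above threshold {n_above:.2e}")
```

The Euler step is adequate here because the fast cavity mode is strongly damped by $\kappa$; a stiff solver would be preferable for a quantitative study.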
In the next section, we shall analyze all the possible conditions and present the phase portraits of the system for both positive and negative back-reaction parameter. \subsection{SRA Phase} As defined before, $S_y$= 0 marks the SRA phase, which is simply the transition from the normal (N) or inverted (I) state to the regime of superradiance. The critical atom-cavity coupling point can be determined by setting $[S_x, S_y, S_z]= [0, 0, \pm N/2]$, which signifies the presence of either spin-up (inverted) or spin-down (normal) particles and no photons. The steady-state equations (Eqs. 6- 9) can be straightforwardly solved for $S_z$ using the matrix method, which yields a quadratic equation supporting two roots of $S_z$. The determinant representing the steady-state equations takes the form: - \begin{equation} \begin{vmatrix} \omega_{a} + U\mid a \mid^2 & 0 & -4gS_{z} & 0 \\ 0 & \omega_{a} + U\mid a \mid^2 & 0 & 0 \\ 2g & 0 & \chi & \kappa \\ 0 & 0 & -\kappa & \chi \end{vmatrix}= 0, \end{equation} where $\chi=\omega+ US_z- \frac{2\delta_0^2\mid a \mid^2 \omega_m}{\Gamma_m^2+ \omega_m^2}$. The above determinant has been solved numerically, and the results are too cumbersome to be reproduced here. The two supported roots for $S_z$, when equated to $\pm N/2$ and solved for $\omega$, represent the dynamical phase portrait of the SRA phase, showing the transition from the normal (N) and inverted (I) phases to regimes of superradiance. An important point to note here is that the SRA phase exists for any value of the back-action parameter $U$. Although the two roots of $S_z$ are in general independent, we shall find a small region in the phase portraits where both roots of $S_z$ are satisfied. Such regions have been defined as the 2SRA phase, or more precisely as the SRA (N)+ SRA (I) phase.
The same coexisting regime also appeared in \citep{22}; in this paper, however, we shall find that the mirror frequency $\omega_m$ determines the physics of this regime, and we shall exploit this to alter the phase portraits. \subsection{SRB phase} We defined the condition $(\omega_a+ U\mid a \mid^2)$= 0 as the origin of the B-type superradiance. This condition, when incorporated in Eq. (7), yields $4gS_z a_1$= 0. Evidently, for $S_z \ne 0$ this forces $a$ to be purely imaginary. Correspondingly, the initial condition also yields:- \begin{equation} \mid a \mid ^2= -\frac{\omega_a}{U}, \end{equation} which again requires $U$ to be negative. Hence $a_1$= 0, which when plugged into Eq. (8) and Eq. (9) yields $\omega+ US_z+ 2\delta_0 b_1= 0$ and $a_2= -2gS_x/\kappa$, so that: - \begin{equation} S_x^2= -\frac{\kappa^2 \omega_a}{4g^2U} \end{equation} and \begin{equation} S_z^2= \Big(\frac{\omega+ 2\delta_0 b_1}{U}\Big)^2. \end{equation} As noted previously, $S_y$= 0 was defined in the SRA phase, while in the SRB phase $S_y$ $\ne$ 0. Hence, it follows from the normalization condition that $S_x^2+ S_z^2 \le$ $N^2/4$, where the above expressions give the corresponding values, with Eq. (10) determining the expressions for $b_1$ and $\mid b \mid^2$. \begin{figure}[h!] \includegraphics[width=0.45\textwidth]{1.pdf} \includegraphics[width=0.45\textwidth]{2.pdf} \includegraphics[width=0.45\textwidth]{3.pdf} \caption{Dynamical phase portrait of the stable attractors as a function of cavity frequency $\omega_c$ and $g\sqrt{N}$. The panels represent plots for $UN$= 0 MHz, -40 MHz and 40 MHz. Other parameters chosen were $\omega_m= 1$, $\omega_a= 0.05$ MHz \citep{14}, $\kappa$= 8.1 MHz \citep{14, 22, 23}, $\delta_0$= 0.05 and $\Gamma_m$= 0.05 $\omega_m$ \citep{21}.} \centering \end{figure} \subsection{The Phase portraits} We finally summarize the phase portraits of the dynamical system, with chosen parameters that satisfy the Routh-Hurwitz criteria \citep{33, 34} for a stable optomechanical system.
We plot the phase portraits as a function of $g\sqrt{N}$, where N is the number of atoms $\approx$ $10^6$. We consider all the cases possible through an analytical treatment of the dynamical equations of the system, and it is noteworthy that although all these phase regions can be investigated under various experimental conditions, not all will emerge in a single experiment. The design of such a system to observe the various phase regions discussed here is a matter of technological advancement in controlling the parameters of the system. Experiments reported by K. Baumann $\textit{et al.}$ \citep{14, 15} showed the system evolving from the normal phase (N) with all spins pointing downwards and no photons. The first panel of Fig. 2 shows the phase diagram for UN= 0 MHz. The purple line marks the onset of superradiance from both the normal (N) and inverted (I) states, with all spins pointing downwards and upwards respectively. For $\omega<$ 0, the normal state (N) becomes unstable and the inverted state (I) becomes stable instead. As the back-action parameter is reduced (UN= -40 MHz), the SRA phase boundaries (purple lines) of the (N) and (I) states shift to higher and lower frequency, respectively. Simultaneously, the SRB phase (red line) emerges, which coincides with SRA(N) and SRA(I) for negative U as discussed previously, and a few new regimes come into play, as seen from the second panel of Fig. 2. The (N) and (I) phases coexist due to the shift of the SRA boundary, which also gives rise to the (SRB+ N), (SRB+ I) and (SRB+ N+ I) regions. Due to the frequency shifts induced by negative U, there exists a small region where SRA(N) and SRA(I) coexist, where both roots of $S_z$ are supported. These phases are denoted 2SRA (SRA(N)+ SRA(I)) in this paper and we shall deal with them in the next section. \\ Although we have portrayed all the possible cases (for $UN$= -40 MHz) in the middle panel of Fig.
(2), not all of them can be simultaneously observed in any single experiment. As reported by Esslinger's group \citep{14}, the first superradiant transition was observed from the inverted state (I) to SRA(I), which corresponds to the lower symmetrical half of the phase portrait. The realization of the other transitions depends entirely on the conditions of the system. For $S_y = 0$ and the initial state being the normal state ($-N/2$ and no photons), the phase transition corresponds to SRA(N), denoted by the purple line on the positive Y axis of the top panel of Fig. (2), and vice versa for a system prepared in the inverted state ($N/2$) and operated with negative effective cavity frequency ($\omega$). The purple line marks the phase transition from the superfluid to a self-organized state and, as seen from the figure, the critical transition point increases as the effective cavity frequency ($\omega$) is increased. This also supports the analytical results in \citep{14, 17, 21, 22, 23, 29}, which placed the critical point at $\frac{1}{2} \Big( \frac{\omega_a}{\omega} (\kappa^2 + \omega^2) \Big)^{1/2}$ for $U = 0$. \\ Interestingly, as we reduce the backaction parameter $U$ (middle panel, Fig. (2)), the SRA phase boundaries shift towards each other by $\pm UN/2$, so as to offer an identical superradiant phase (the area between the purple lines and the black horizontal lines at $\pm UN/2$ in the middle panel of Fig. 2). Although we can never witness both transitions in a single experiment, the theoretical study predicts an identical superradiant phase when the system is operated with an effective cavity frequency between $\pm UN/2$ and the initial state is either normal or inverted. In simple words, for $UN = -40$ MHz we predict an identical phase transition when operating with $-20 \le \omega \le 20$ MHz, regardless of whether we started from the normal (N) or inverted (I) state.
Thus we can start from any mixed-state configuration ($i.e.$ a combination of spin up (I) and spin down (N)) and still expect a phase transition if $UN$ is negative. This is analogous to preparing a mixture of atoms with 50$\%$ spin up and 50$\%$ spin down and still obtaining an identical superradiance, as in the two-atom Dicke model \citep{52}. The advantage lies in the fact that with a negative backaction parameter we get a window for selecting the effective cavity frequency ($\mid \omega \mid \le \mid UN \mid/2$) without worrying about the initial condition (N or I) to observe a type A superradiance. The comparison becomes evident from the top panel of Fig. (2), which shows a phase transition only when the system is operated and prepared in a combination of either positive $\omega$ and the normal state ($+\omega$, N) or negative $\omega$ and the inverted state ($-\omega$, I). A negative $U$ thus gives us the freedom to choose both the effective cavity frequency ($\omega$) and the initial state. \\ As the backaction parameter is made positive ($UN = 40$ MHz), the SRB phase vanishes, for the reasons discussed previously. The SRA(N) and SRA(I) boundaries shift away from each other by $\pm UN/2$, as seen from the lower panel of Fig. (2). The separation of the boundaries in opposite directions leads to the formation of another region, termed here the persistent oscillation regime. Evidently, no phase transition can be observed when the system is operated with an effective cavity frequency ($\omega$) between $\pm UN/2$. As the name suggests, this regime displays persistent oscillations: no steady state is reached even in long-duration experiments, thereby predicting the presence of a limit cycle.
The notion of persistent oscillation will become clear in the time-evolution section, where we simulate the system with initial conditions described by point (c), which lies in the concerned region.\\ We observe the type B superradiance only when $(\omega_a + U\mid a \mid^2) = 0$, $i.e.$ when $U$ is negative, since $\omega_a > 0$. The critical line separating the superfluid and the self-organized state is denoted in red in the middle panel of Fig. (2). The condition imposed on the SRB phase ($S_x^2 + S_z^2 \le N^2/4$) itself reveals that $S_z$ can take both values $\pm N/2$ ($i.e.$ both the normal (N) and the inverted (I) state), which marks its appearance between $\pm UN/2$ in the phase portraits. Thus for $UN = -40$ MHz, with an initial mixed-state configuration (N+I) and an effective cavity frequency ($\omega$) operated between $\pm 20$ MHz, we can get either a type A or a type B superradiance, depending on whether $S_y = 0$ or $(\omega_a + U\mid a \mid^2) = 0$ respectively, but never both simultaneously in a single experiment. \section{2SRA phase} \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{4.pdf} \includegraphics[width=0.45\textwidth]{5.pdf} \includegraphics[width=0.45\textwidth]{6.pdf} \caption{Magnified view of the dynamical phase diagram for $UN = -30$ MHz, with $\omega_m = 0.2$, 0.4 and 0.8 in the upper, middle and lower panels respectively. Other parameters are the same as in the previous plots.} \end{figure} In this section we discuss the role of the mechanical mirror in defining the phase portraits of the system. As hinted previously, there are regions where both roots of $S_z$ are supported and the SRA(N) and SRA(I) regions coincide to describe the new phase. Although it is evident from the previous discussion that the mirror frequency plays no role in defining the SRA region, the SRB phase does have an explicit dependence on $\omega_m$, as seen from Eqs. (13) and (14), together with Eq. (10) for the expression of $b_1$.
We produce here a magnified view of the dynamical phase diagram for $UN = -30$ MHz and determine the variation of the transition point for different values of $\omega_m$. Interestingly, the 2SRA phase is no longer as distinct as in the case of a fixed mirror \citep{22}; in the optomechanical case, the mirror frequency determines the physics of this tricritical point, where all the phase boundaries cross each other. \\ For $\omega_m = 0.2$, the top panel of Fig. (3) shows no 2SRA region, and the region becomes prominent as the mirror frequency $\omega_m$ is increased, as seen from the middle and lowermost plots of Fig. (3). The mirror is therefore found to alter the coexisting regime for experimentally realizable values of the mirror frequency. Optical systems with a movable cantilever can therefore be used to control the critical point as well as the coexisting regime. With these plots, the effect of the mirror frequency is well established. However, the change produced by the mechanical mirror alone is almost negligible for applications such as quantum entanglement detection or manipulation, so one may demand a stronger alteration of the phase portraits. Can we then devise a further modification of the system that allows additional manipulation of the critical transition point? We address this in Sec. VI, with the aim of modifying the phase diagrams through an easily controllable parameter. \section{Time Evolution} In order to gain insight into the distinction between the described phases, we examine the time evolution of the system from various initial conditions lying in different phase regions. We mainly consider the points (a), (b) and (c) marked in the dynamical phase diagrams (Fig. 2), which lie in the (SRB+N+I), SRA and persistent oscillation regimes respectively.
We solve the semiclassical equations of the system numerically for $S_x$, $S_y$, and $S_z$ by the fourth-order Runge-Kutta method and illustrate the relaxation time needed to reach the corresponding stable attractors. The plots below show the time evolution of the system from different initial conditions. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{7.pdf} \includegraphics[width=0.45\textwidth]{8.pdf} \caption{Time evolution of the system from different initial conditions. The top panel describes points (a) and (b) of Fig. 2, and the lower panel shows the persistent oscillations, predicting a limit cycle in the persistent oscillation regime for positive backaction parameters. Other parameters are the same as in the previous plots.} \end{figure} The top panel of Fig. 4 shows the time evolution for point (a) ($UN = -40$ MHz) and point (b) ($UN = 40$ MHz) in the superradiant regime, close to the normal and inverted states respectively. The plot illustrates the relaxation time needed to reach the stable attractors. For point (a), $S_z$ initially increases and finally attains a stable value in approximately 0.7 ms, thereby predicting a stable case for realistic experiments. Point (b) lies in the SRA(N) region just above the persistent oscillation region, and the time evolution of $S_z$ (blue curve) shows the system reaching its stable attractor in approximately 0.7 ms. As the initial condition enters the oscillation regime (point (c)), the system parameters ($S_y$, $S_z$) start to oscillate periodically at long times and no stable points are reached even after long durations, as shown in the lower panel of Fig. 4. Since the motion is described in a two-dimensional plane, the attractor represents a simple limit cycle \citep{24}, thereby tagging the entire bounded region as the persistent oscillation regime.
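The integration scheme used above can be sketched as follows. Since the full semiclassical equations are not reproduced in this section, a damped toy oscillator stands in for them here; the step function, however, is the standard fourth-order Runge-Kutta update used for the spin components.

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)
    k3 = f(t + dt / 2, y + dt / 2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Stand-in dynamics: a damped oscillator relaxing to the fixed point (0, 0),
# playing the role of the spin components relaxing to a stable attractor.
gamma = 0.5  # hypothetical damping rate

def f(t, y):
    x, v = y
    return np.array([v, -x - gamma * v])

y = np.array([1.0, 0.0])  # initial condition away from the attractor
t, dt = 0.0, 0.01
for _ in range(10_000):   # integrate up to t = 100
    y = rk4_step(f, t, y, dt)
    t += dt

print(np.allclose(y, [0.0, 0.0], atol=1e-6))  # True: attractor reached
```

In the persistent oscillation regime the same integrator would instead settle onto a closed orbit rather than a fixed point, which is how the limit cycle in the lower panel of Fig. 4 is detected numerically.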
\section{Dicke model in the presence of an external pump} We modify our previous model by adding an external mechanical pump, which can be any external object in physical contact with the mirror, or an external laser capable of changing the mirror frequency via radiation pressure. The pump can excite the mirror by coupling to the mirror fluctuation quadratures. The Hamiltonian of the new system takes the form ($\hbar = 1$ throughout the paper) \citep{19, 27}: \begin{eqnarray} \label{eqn1} H &=& \omega_a S_z + \omega a^{\dagger}a + \omega_m b^{\dagger}b + \delta_0 a^{\dagger}a(b + b^{\dagger}) \nonumber \\ &+& g(a + a^{\dagger})(S_{+} + S_{-}) + US_z a^{\dagger}a + \eta_p(b + b^{\dagger}), \end{eqnarray} \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{9.pdf} \includegraphics[width=0.45\textwidth]{10.pdf} \caption{The dynamical phase diagram in the presence of the external mechanical pump. $UN = -20$ MHz and $\eta_p = 1$. Other parameters are the same as in the previous plots.} \end{figure} where $\eta_p$ represents the mechanical pump frequency and the last additional term describes the energy associated with it. The mechanical pump frequency is considered to be small here, $\textit{i.e.}$ $0 \le \eta_p \le 1$. To proceed, we begin with the semiclassical equations, of which only the following equation is modified: \begin{equation} \dot{b} = -i\omega_m b - i\delta_0 \mid a \mid^2 - i\eta_p - \Gamma_m b. \end{equation} We repeat the same analysis to determine the dynamical phase diagram of the system in the presence of the mechanical pump, for both the SRA and SRB phases. We present the dynamical phase portraits for SRA and SRB separately in Fig. 5 to unveil the effect of the mechanical pump. The dotted lines in both plots of Fig. 5 mark the SRA and SRB phase boundaries in the absence of the mechanical pump, and the bold curves represent the phase portraits when the external mechanical pump is switched on.
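The effect of the pump term on the mirror can be checked directly from the modified equation above: setting $\dot{b} = 0$ gives the steady state $b_{ss} = -i(\delta_0 \mid a \mid^2 + \eta_p)/(i\omega_m + \Gamma_m)$, so $\eta_p$ enters on the same footing as the radiation-pressure drive $\delta_0 \mid a \mid^2$. The following sketch (with a hypothetical placeholder for $\mid a \mid^2$; the other values follow the parameters quoted in the text) verifies this by direct time integration:

```python
import numpy as np

# Parameters follow the values quoted in the text (omega_m = 1,
# Gamma_m = 0.05*omega_m, delta_0 = 0.05, eta_p = 1); the photon
# number |a|^2 is a hypothetical placeholder.
omega_m, Gamma_m = 1.0, 0.05
delta_0, eta_p = 0.05, 1.0
a_sq = 10.0

drive = delta_0 * a_sq + eta_p                  # total c-number drive on the mirror
b_ss = -1j * drive / (1j * omega_m + Gamma_m)   # analytic steady state

# Direct Euler integration of db/dt = -i*omega_m*b - i*drive - Gamma_m*b
b, dt = 0.0 + 0.0j, 1e-3
for _ in range(200_000):                        # integrate up to t = 200 >> 1/Gamma_m
    b += dt * (-1j * omega_m * b - 1j * drive - Gamma_m * b)

print(np.isclose(b, b_ss, atol=1e-3))           # True: trajectory relaxes onto b_ss
```

Since $\eta_p$ simply adds to $\delta_0 \mid a \mid^2$ in the drive, the pump shifts the mirror displacement $b_1$ and, through Eq. (10), the phase boundaries, consistent with the shaded regions in Fig. 5.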
The blue shaded regions, labelled $\eta$-SRA and $\eta$-SRB, represent the extra regions created by the external mechanical pump. Clearly, the shaded regions decrease the critical transition point symmetrically in both the positive and negative directions. Although the external mechanical pump changes the dynamical phase diagram to a large extent, the physics behind the time evolution remains almost the same as in the previous case, with only a minor change in the relaxation time. The external mechanical pump frequency $\eta_p$ also enhances the 2SRA region considerably, since both the SRA and SRB phases show a considerable increase in area. We do not reproduce these plots, as the effect is evident from the plots of Fig. 5. Thus it is clear from the discussion in this section and in Section IV that the phase portraits can be altered and enhanced by a simple modification. Although the SRA phase region is initially unaltered by the mirror frequency, it can be modified once an external force is added to the mirror. These systems can therefore be used to alter the phase transition in the Dicke model through simple controllable parameters such as the external mechanical pump, which can find use in experiments such as detecting quantum entanglement, which tends to infinity at the critical point \citep{51}. \section{Conclusion} In this paper, we have explored the dynamics of an optomechanical system with ultracold atoms inside an optical cavity. Within the framework of the non-equilibrium Dicke model, we presented the rich phase portrait of attractors, including regimes of coexistence and persistent oscillations. We conclude from the analytical treatment that the optomechanical system offers greater control over the phase transition and the dynamical phase regions than a purely optical system.
The cantilever was found to enhance the coexisting region to a large extent, and the persistent oscillation regime predicted the existence of a limit cycle that prevents the system from reaching any stable state even in very long-duration experiments. To study the system further, we added an external mechanical pump and found that it enhances both the SRA and SRB phases, thereby predicting an enhancement of the coexisting regions as well. We thereby propose a system that alters the phase transition in a Dicke model through a simple and effective process. Such a system can also be used to study the dynamical entanglement in different regimes, in the presence and absence of the mechanical pump, which can serve as a tool to selectively modify and alter the entanglement \citep{45} between the modes. \section{Acknowledgements} Aranya B Bhattacherjee acknowledges financial support from the Department of Science and Technology, New Delhi, vide grant SR/S2/LOP-0034/2010.
\section*{Introduction} Simple oxides are suitable candidates for microelectronic applications as they are easy to deposit and silicon compatible. Among them, hafnia-based systems are especially preferred since the material is already part of the \textit{complementary metal oxide semiconductor} (CMOS) process flow. Once interesting for its relatively high dielectric permittivity, replacing SiO$_2$ as gate oxide (to hold the capacitance while reducing the dielectric thickness), hafnia (HfO$_2$) is nowadays the subject of many research efforts by academia and industry. The discovery of ferroelectricity in doped and undoped hafnia layers \citep{boscke_ferroelectricity_2011,polakowski_ferroelectricity_2015} comprises a step forward towards the miniaturization needed to increase the packing density and consequently improve overall computer performance. In the meantime, a (not so) newly conceived signal-processing paradigm broadly referred to as 'neuromorphic computing' \citep{schuller_neuromorphic_2015} takes features identified in biological brains as a source of inspiration for some desired new functionalities (such as temporal coding or spiking ability). This inspiration, in addition to the striking progress made in the last decades by \textit{Artificial Neural Networks} (\textit{ANN}), which are software implementations still running on CMOS hardware \citep{lecun_deep_2015}, seems to point towards new functionalities and huge energy savings if computer platforms include electronic devices that share some similarities with biological neurons and synapses. The two pushes, for mimicking biological features and for simplifying complex computations by projecting them onto the hardware, aim for more stable, more controllable, and more flexible electrical behaviours.
These requirements challenge materials scientists, since a deep understanding of the underlying mechanisms, which in turn would allow improving the controllability over the obtained properties, is key for unlocking some of these proposed advances. Many requirements seem to be posed for the next generation of electronic devices \citep{schuller_neuromorphic_2015}. However, many baby steps can be taken to partially solve current technological issues and set milestones on the way to a more ambitious long-term goal. Aligned with that strategy, this work analyses how the properties of our devices might help tackle specific issues in current designs, and how to engineer them to make them suitable for a realistic technological implementation. This study deals with a system based on atomic-layer-deposited HfO$_2$ and sputtered TiO$_x$ on silicon and, in particular, with different batches of physical samples. The original batch, which gave rise to our special interest in the stack (and whose in-depth characterization has been the subject of previous works \citep{quinteros_atomic_2018,quinteros_oxidos_2016}), is briefly introduced in subsection \textit{Preceding research}, together with a description of the former microscopical picture used to rationalize the obtained electrical response. Moreover, we emphasize the reasons why such a system is remarkable both from the basic-science point of view and from the technological one. In section \textit{Results}, additional data is reported, providing new structural insight into the electronic picture of that former batch. Afterwards, we describe the attempts to reproduce the samples and the impossibility of mimicking the desired overall response. As a corollary, in the \textit{Discussion}, we consider alternative microscopical mechanisms that could produce the sought behaviour: threshold switching, ferroelectricity, and \textit{anti-ferroelectric} (\textit{AFE}) behaviour.
The reconsideration of the underlying mechanism points to new aspects for further research; we hope this will also encourage others to follow this approach and finally succeed in reproducing this unique behaviour. \subsection*{Preceding research} The original batch was synthesized starting from a highly-doped p-type Si substrate ($\rho$ = 4-40 m$\Omega$ cm) with a thermally grown 150 nm-thick SiO$_{2-x}$\footnote{The off-stoichiometry relies on the fact that the dopants, present in the highly-doped semiconducting substrate, substantially affect the composition of the oxide achieved by this method \citep{ho_si_1979}.} layer \citep{quinteros_atomic_2018}. A metallic 20 nm-thick Ti layer was sputtered on the SiO$_{2-x}$. Subsequently, a 20 nm-thick HfO$_{2}$ layer was grown by means of \textit{Atomic Layer Deposition} (\textit{ALD}), using tetrakis(dimethylamido) hafnium (TDMAHf) and ozone as hafnium and oxygen precursors, respectively \citep{quinteros_atomic_2018}. A final capping of Co (35 nm) / Pd (40 nm), photolithographically defined as square-shaped electrodes of 200 $\mu$m lateral size, completed the stack. This deposition was conducted in a process optimized to obtain the most stable switching operation \citep{zazpe_thesis}. The devices obtained by this means displayed a remarkable IV loop (Fig. \ref{fig:IV}) that, upon increasing the voltage amplitude, demonstrates an abrupt change (SET) from a highly-resistive state (\textit{HRS}) to a low-resistive state (\textit{LRS}) \citep{quinteros_atomic_2018}. The achieved \textit{LRS} can be retained or lost upon removing the external source, depending on the specific pulsing scheme selected for ramping the voltage \citep{quinteros_atomic_2018}. Fig. \ref{fig:logIV} demonstrates that upon a 5 ms ON / 5 ms OFF pulsed voltage sweep the achieved \textit{LRS}s are retained after having turned off the power supply (non-volatile).
The description is qualitatively the same whether the voltage ramp goes towards positive or negative polarities. This corresponds to two switching units being present within the stack \citep{quinteros_atomic_2018}. The opposite change, from \textit{LRS} to \textit{HRS} (RESET), can be observed only if reading pulses are applied in between stimulating pulses \citep{quinteros_atomic_2018}. The combination of two switching units (two observable SET operations) with the fact that each of them is bipolar determines that, upon polarity reversal, the current is always ruled by the unit that persists in its \textit{HRS} \citep{quinteros_atomic_2018}. \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{IVlinOriginal.PNG} \subcaption{} \label{fig:linIV} \end{subfigure} \hfill \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{IVlogOriginal.PNG} \subcaption{} \label{fig:logIV} \end{subfigure} \caption{Previously reported results showing a typical IV loop recorded applying a pulsed V ramp (5 ms ON / 5 ms OFF) while recording current in Auto Scale mode (adapted from \citep{quinteros_atomic_2018}). \ref{fig:linIV}) Plot in linear scale. Starting from a \textit{HRS} (A), a SET operation occurs close to +5 V, defining a \textit{LRS} that remains when reducing the stimulus amplitude (B). The two polarities appear very similar, with no measurable RESET operation but a new SET event (C) instead. Besides, \ref{fig:logIV}), plotted in semi-logarithmic scale and including four successive voltage ramps acquired on the same device, reveals the non-volatility of the achieved low-resistive states upon cycling.} \label{fig:IV} \end{figure} The interest in such a complex stack lies in a couple of advantageous properties that, combined, make the devices very promising. Among those it is possible to mention: \begin{itemize} \item \textbf{forming free}.
The electrical response of the devices does not rely on a previous step to enable the switching \citep{quinteros_atomic_2018}. Upon cycling, devices behave the same as if they were pristine. \item \textbf{two non-volatile bipolar units in the same stack}. One SET operation observable per polarity \citep{quinteros_atomic_2018}, similar to the so-called complementary switching \citep{linn_complementary_2010}. \item \textbf{rectifying behaviour}. Due to the coupling of two bipolar units, the overall response can be thought of as a connection of two oppositely connected diodes \citep{quinteros_atomic_2018}. \item \textbf{self-limiting switching}. No external control is required to limit the runaway of the current during the SET \citep{quinteros_atomic_2018}. \item \textbf{high-ratio current switching}. 5 orders of magnitude can be observed between the two sets of \textit{HRS} and \textit{LRS} for positive and negative polarity \citep{quinteros_atomic_2018}. \item \textbf{low currents for all the available states}. Even at the two \textit{LRS}: \textbf{I} $_{HRS}^{unit1}$ @+2V = 2 $\cdot$ 10$^{-13}$A,\\ \textbf{I} $_{LRS}^{unit1}$ @+2V = 10$^{-8}$A, \textbf{I} $_{HRS}^{unit2}$ @-2V = 3 $\cdot$ 10$^{-13}$A, \textbf{I} $_{LRS}^{unit2}$ @-2V = 7 $\cdot$ 10$^{-9}$A \citep{quinteros_atomic_2018}. \item \textbf{coexistence of volatile and non-volatile modes}. The retentivity of the states depends upon the width of, and the separation between, the voltage pulses \citep{quinteros_atomic_2018}. \item \textbf{high repeatability along consecutive cycles} (cycle-to-cycle, \textit{C2C}, stability) \citep{quinteros_oxidos_2016}. \item \textbf{high reproducibility among different devices} (device-to-device, \textit{D2D}, stability) \citep{quinteros_oxidos_2016}.
\end{itemize} However, coupled to those promising aspects, there are others that make the devices unsuitable for a technological application without further optimization: \begin{itemize} \item \textbf{need for (too) high voltages} (\textbf{V} $_{SET}^{unit1} \sim$ +5.5V, \textbf{V} $_{RESET}^{unit1} \sim$ -2V, \textbf{V} $_{SET}^{unit2} \sim$ -4V, \textbf{V} $_{RESET}^{unit2} \sim$ +1.5V), prohibitive for a realistic implementation \citep{quinteros_atomic_2018,quinteros_oxidos_2016}. \item \textbf{5-orders-of-magnitude ratio between the two \textit{HRS} and \textit{LRS}}. This is as much an advantage, since distinguishing between the two states is quite evident, as a drawback, due to the requirement of handling such dissimilar current levels \citep{quinteros_atomic_2018}. \item \textbf{big-size electrodes} (unfeasible in terms of layout footprint) \citep{quinteros_oxidos_2016}. \item \textbf{complexity of the stack} (to be reduced to its minimum) \citep{quinteros_oxidos_2016}. \item \textbf{too-thick layers} (which would impact the packing density of any eventual device) \citep{quinteros_oxidos_2016}. \end{itemize} Fully understanding the physical processes that explain the observed behaviour would allow tuning the inconvenient aspects of the stacks and exploring the possibilities and limitations of the underlying physical mechanisms. The observed spatial (device-to-device) and temporal (cycle-to-cycle) stability \citep{quinteros_oxidos_2016}, lacking in many other systems, keeps pushing the effort towards comprehensively understanding the physical mechanism that gives rise to these highly-desired properties. In previous works, a physical mechanism compatible with all the available experimental evidence was proposed \citep{quinteros_oxidos_2016,quinteros_atomic_2018}. Experimental evidence of oxygen being present all the way across the Ti layer \citep{quinteros_atomic_2018} pointed out the formation of an unintentional TiO$_x$ layer.
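The quoted switching window can be cross-checked against the current levels listed among the advantages above; a short sketch (values copied directly from the text) computes the \textit{LRS}/\textit{HRS} ratio per polarity, which indeed comes out at roughly five orders of magnitude:

```python
import math

# Current levels reported in the text at the +/-2 V read-out points
I_HRS_pos, I_LRS_pos = 2e-13, 1e-8   # unit 1, read at +2 V
I_HRS_neg, I_LRS_neg = 3e-13, 7e-9   # unit 2, read at -2 V

ratio_pos = I_LRS_pos / I_HRS_pos
ratio_neg = I_LRS_neg / I_HRS_neg

# Both ratios sit at roughly 4-5 orders of magnitude, consistent with
# the "high-ratio current switching" entry above.
print(round(math.log10(ratio_pos), 1))  # 4.7
print(round(math.log10(ratio_neg), 1))  # 4.4
```

Even the larger of the two \textit{LRS} currents stays at the 10 nA scale, which is what makes the combination of high ratio and low absolute currents noteworthy.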
The explanation, considering a stack as complex (and thick) as the one described here, was rationalized in terms of charge trapping and the semiconducting nature of the specific TiO$_{x}$ phase \citep{quinteros_atomic_2018}. Within this model, the TiO$_{x}$ would play a key role, while the SiO$_{2-x}$ and HfO$_2$ would (surprisingly) only contribute as leaky dielectrics through which electrons would flow by hopping between trap states. Those states would originate from the defective off-stoichiometric nature of both oxides: the SiO$_{2-x}$ due to the growth method (from a highly-doped semiconductor \citep{ho_si_1979}) and the HfO$_2$ due to its known oxygen-transport ability and to the oxygen scavenging of the two neighbouring layers, Ti and Co \citep{quinteros_atomic_2018}. In the present study, we report that this explanation might not be enough to account for the observed changes in the electrical response, and we open the discussion to other plausible underlying mechanisms. \section*{Results} \subsection*{Microscopy analysis} In the original batch (from now on referred to as batch B1), the two accessible electrodes of each device are the highly-doped Si substrate and the photolithographically defined Co/Pd pads. As previously mentioned, ex-situ chemical analysis has shown that oxygen is present across the whole Ti layer \citep{quinteros_atomic_2018}, featuring an unintentional oxidation of the metallic Ti layer promoted by the successive deposition steps. Consequently, the stack actually consists of a nanolaminated dielectric composed of SiO$_{2-x}$/TiO$_x$/HfO$_2$ sandwiched between two metallic-like electrodes. In order to improve our understanding of the structure, a cross-section was prepared and imaged under the electron microscope. \begin{figure}[ht!]
\centering \includegraphics[width=0.5\textwidth]{RZ37-STEM-HAADF.png} \caption{\textit{HAADF} image of a 1 $\mu$m-sized window of a lamella taken from a typical stack (sample B1) by means of the focused ion beam technique.} \label{fig:STEM} \end{figure} Fig. \ref{fig:STEM} shows a \textit{Scanning Transmission Electron Microscopy} (\textit{STEM}) image of a lamella prepared by means of the focused ion beam technique (Helios 660 Thermo Fisher Scientific) from one device belonging to sample B1\footnote{Throughout the manuscript, 'samples' are understood as the pieces of substrate subjected to one specific set of deposition and processing steps. Alternatively, each top electrode defined within a sample determines a device. Thus one sample comprises many devices.}. \textit{STEM} images are formed by convergent electrons passing through a sufficiently thin specimen ($\sim$ 100 nm) and scanned across the field of view \citep{williams_transmission_2009}. In particular, the \textit{annular dark-field} (\textit{ADF}) mode was used. In \textit{ADF}, images are formed by forward-scattered electrons on an annular detector, which lies outside the path of the directly transmitted beam. By using the \textit{high-angle ADF} (\textit{HAADF}) mode to detect incoherently scattered electrons, it is possible to form atomic-resolution images where the contrast of an atomic column is directly related to the atomic number (Z-contrast imaging). Fig. \ref{fig:STEM} is a \textit{HAADF} image of a 1 $\mu$m-sized window. The stack can be easily identified (with the successive layers, from bottom to top, being SiO$_{2-x}$/TiO$_{x}$/HfO$_2$/Co/Pd), in turn covered by two protective layers of Pt deposited during the lamella preparation (using the electron and ion beams of the dual-beam system, respectively). The most remarkable Z-contrast is given by the insulating HfO$_2$ film and the surrounding layers.
In this imaging method, the brighter layers are those that incoherently scatter more electrons towards the annular detector. The interface between the Co and the Pd appears sharply defined (darker than its surroundings), which might relate to the fact that Co was deposited by electron beam while Pd was sputtered \citep{zazpe_thesis}. Moreover, the Pd layer follows the topographic landscape of the outermost Co surface and no inter-diffusion can be clearly identified. On the contrary, the smooth transition from the HfO$_2$ to the Co layer seems to suggest that oxygen (or even HfO$_2$) is present within the Co layer, which could be explained in terms of its reactivity and scavenging ability \citep{khotseng_oxygen_2018}. As can be observed in Fig. \ref{fig:STEM}, specifically in the TiO$_{x}$ layer, a columnar arrangement of alternating brighter and darker areas is noticeable, which indicates inhomogeneous and localized oxidation. This new insight offered by the \textit{HAADF} image, in agreement with the previously mentioned oxygen presence registered along the whole thickness of the Ti layer \citep{quinteros_atomic_2018}, supports the hypothesis of a stack more complex than a pure HfO$_2$ switching layer alone. Moreover, the apparent columnar structure of dark and bright zones within the TiO$_x$ layer may reflect the quality of the as-deposited Ti layer, featuring a columnar growth with grain boundaries that could facilitate selective oxygen diffusion, which in turn could have led to a laterally inhomogeneous oxidation across the layer's thickness.
It is also worth mentioning that, even though the presence of oxygen along the whole Ti layer is strongly supported by the experimental evidence (the previously reported Secondary Ion Mass Spectroscopy \citep{quinteros_atomic_2018}, the indirect quantification of the capacitive term \citep{quinteros_oxidos_2016}, and the new insight offered here by the \textit{HAADF} measurement), none of these enables an exact determination of the stoichiometry. A plethora of stable titanium oxides has been reported \citep{szot_tio2prototypical_2011}, displaying a variety of dissimilar electrical behaviours. This, in addition to the mentioned inhomogeneity observed in the \textit{HAADF} image, prevents the intentional deposition of a specific titanium oxide phase; one has to rely instead on following the same process flow, such that the conditions are appropriate to promote the oxidation of the Ti layer as it occurred in the original batch. \subsection*{Subsequent deposition processes} Based on the aforementioned picture, the two subsequent deposition processes (batches B2 and B3) focused on reproducing the TiO$_{x}$ and HfO$_2$ layers. All the samples of the subsequent batches consist of a highly-doped silicon substrate, a silicon oxide layer, and a sputtered layer of Ti, on top of which a layer of HfO$_2$ was deposited by \textit{ALD}. Top electrodes (\textit{TE}) of different sizes, shapes, and metals were defined. Except where otherwise specified, all the deposited layers were continuous, excluding the \textit{TE}, which were lithographically defined. B2 samples were aimed at investigating the effect of the Ti layer thickness\footnote{Within each batch, samples with different specifications were grown, resulting in labels such as B2 $\#$\textbf{i}.}, while keeping the thickness of the underlying SiO$_2$ layer within the same order of magnitude (further details in Table \ref{tab:FZJ-RUG-sample-details}).
The substrates were commercial, and the doping level of the semiconductor ensured both its metallic-like behaviour (to serve as a back contact) and the defective SiO$_{2-x}$ obtained as a consequence of thermal growth from a highly-doped buffer \citep{ho_si_1979}. The reactant used for the \textit{ALD} deposition differed from the original recipe in the use of water, instead of ozone, as oxygen precursor, in an attempt to decouple the two possible sources of oxygen in the Ti layer: the ozone during the \textit{ALD} reaction and the oxygen content in the HfO$_2$ and/or SiO$_{2-x}$ layers acting as reservoirs. Fig. \ref{fig:FZJ-samples-summary} displays the electrical response of B2 samples, a sketch of which is included as an inset. Within this batch (B2) four different samples were grown: B2 $\#$1 (Si/SiO$_{2-x}$ (120 nm)/HfO$_2$ (20 nm)/Pt), B2 $\#$2 (Si/SiO$_{2-x}$ (120 nm)/Ti (20 nm)/HfO$_2$ (10 nm)/Pt), B2 $\#$3 (Si/SiO$_{2-x}$ (120 nm)/Ti (10 nm)/HfO$_2$ (20 nm)/Pt), and B2 $\#$4 (Si/SiO$_{2-x}$ (120 nm)/Ti (20 nm)/HfO$_2$ (20 nm)/Pt). Sample B2 $\#$4 is the most similar to B1 in terms of partial and total thicknesses of the laminated layers and is the one chosen for the devices in Fig. \ref{fig:FZJ-samples-summary}. On the one hand, three successive loops (labelled with Roman numerals) of two different pristine devices are shown. During a first forming-like curve (I), an extremely high-current regime is observed, even reaching the current compliance (10 $\mu$A) defined to prevent an irreversible breakdown\footnote{This is not strictly accurate, since the compliance provided by the equipment relies on the switching of a relay which takes some time to react, during which the irreversible breakdown may occur anyway. Better alternatives consist of serially connecting physical devices that limit the current in a sustained way \citep{menghini_resistive_2015}.}.
This regime is also observed after removing the power supply when stimulating towards the same polarity as in the previous run (II). Moreover, when reversing the polarity (III), the previously achieved low-resistance state is retained, opposite to the rectifying ability highlighted in the original case (B1). This occurs whether the first explored polarity is positive or negative, as demonstrated in Fig. \ref{fig:FZJ-IV} (successive I, II, and III and i, ii, and iii loops, for devices 1 and 2, respectively). Other samples of the same batch (B2) showed similar characteristics that, in turn, differ noticeably from B1, mainly regarding the lack of self-limitation and the impossibility of recovering the pristine state. \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{FZJ_typical-IV.PNG} \subcaption{} \label{fig:FZJ-IV} \end{subfigure} \hfill \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{FZJ_dielectric-constants.PNG} \subcaption{} \label{fig:FZJ-dielectric} \end{subfigure} \vspace{0.2cm} \caption{Graphical summary of B2 samples' electrical properties. \ref{fig:FZJ-IV} Current as a function of voltage (IV) in semilog scale. IV loops, starting towards opposite polarities for two equivalent pristine devices (1 and 2, respectively) of the same sample (B2$\#$4), are presented. A sketch of the stack is included (inset), demonstrating the continuity of the layers except for the \textit{TE}. The cartoon shows two different top electrodes defining two different devices within the same stack. \ref{fig:FZJ-dielectric} Dielectric constants determined from the normalized ratio $\frac{\mathrm{Capacitance}}{\mathrm{Area}}$ divided by the thickness of the dominant capacitor in each case. B2 samples, containing a metallic-like Ti layer, display a permittivity matching the HfO$_2$ value, while the B1 sample (with a fully oxidized TiO$_x$ layer) matches the SiO$_2$ dielectric constant.
The top electrodes are omitted to emphasize the differences among the stacks themselves.} \label{fig:FZJ-samples-summary} \end{figure} Furthermore, Fig. \ref{fig:FZJ-dielectric} compares the dielectric permittivity (measured at 10 kHz) obtained for different stacks, with their labels accounting for the various B2 samples (see Table \ref{tab:FZJ-RUG-sample-details}). An original B1 sample is included for the sake of comparison. The measured capacitance of B2 samples agrees with a dominant HfO$_2$-based capacitor in all cases except for the sample without a Ti layer. This can be rationalized as a persistently metallic Ti layer which defines two independent capacitors within the stack: a top one, composed of Pt/HfO$_2$/Ti, whose area is determined by the top electrode's size, and a bottom one, corresponding to Ti/SiO$_{2-x}$/Si, whose area corresponds to the whole film area (due to the devices' definition, see Fig. \ref{fig:FZJ-IV}'s inset). In summary, the electric behaviour of the B2 samples seems to indicate a simple bipolar switching scheme, instead of the striking properties observed in B1. Moreover, the capacitive measurements are consistent with two serially-connected capacitive units decoupled from each other. Taken together, these two pieces of evidence are interpreted as an indication of incomplete (if any) Ti oxidation. A posterior analysis suggested that the decision of mimicking the SiO$_{2-x}$ thickness could have been detrimental to reproducing the voltage drop distribution. An additional consideration concerns having used Pt in contact with the HfO$_2$ layer, which might have implied a ``less defective'' HfO$_2$ layer (more stoichiometric and consequently less leaky) due to the lower reactivity of Pt compared to Co (as in B1).
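The two-capacitor interpretation above can be sketched with the textbook parallel-plate and series relations (the symbols stand for nominal device parameters; a back-of-the-envelope picture, not a fit):

```latex
C_{\mathrm{top}} = \frac{\varepsilon_0\,\varepsilon_{\mathrm{HfO_2}}\,A_{TE}}{d_{\mathrm{HfO_2}}}, \qquad
C_{\mathrm{bot}} = \frac{\varepsilon_0\,\varepsilon_{\mathrm{SiO_2}}\,A_{\mathrm{film}}}{d_{\mathrm{SiO_2}}}, \qquad
\frac{1}{C_{\mathrm{meas}}} = \frac{1}{C_{\mathrm{top}}} + \frac{1}{C_{\mathrm{bot}}}
```

With a metallic Ti layer, $A_{\mathrm{film}} \gg A_{TE}$ implies $C_{\mathrm{bot}} \gg C_{\mathrm{top}}$ and hence $C_{\mathrm{meas}} \approx C_{\mathrm{top}}$, so the extracted permittivity $\varepsilon_r = (C_{\mathrm{meas}}/A_{TE})\, d_{\mathrm{HfO_2}}/\varepsilon_0$ tracks the HfO$_2$ value; with a fully oxidized (insulating) TiO$_x$, the whole stack instead acts as a single series capacitor of area $A_{TE}$, dominated by the thick, low-permittivity SiO$_{2-x}$.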
The conclusion that can be drawn is that the reactive environment within the \textit{ALD} chamber is key for the formation of the specific TiO$_x$ phase, as we have already suggested in a previous communication \citep{quinteros_atomic_2018}. This, in turn, is in agreement with the electrical behaviour found by Yoon and colleagues \citep{yoon_pt/ta2o5/hfo2x/ti_2015}, who reported similar results on \textit{ALD}-grown HfO$_2$-based stacks using ozone as oxygen precursor\footnote{Their stack can be considered as half of the one introduced here as B1.}. A second attempt consisted of reproducing in more detail the HfO$_2$ deposition itself (batch B3). In this case, both the Hf and O precursors were the same as during the original process and the \textit{TE} material was also reproduced. The substrate was a highly-doped Si (same doping type) covered only by a native oxide layer as thin as $\sim$ 2 nm (no intentional growth of thermal SiO$_{2-x}$\footnote{This decision is partly supported by the assumption that the SiO$_2$ layer merely acts as a series resistor. Additionally, the structural properties of TiO$_x$ grown on chemically treated Si and thermally grown SiO$_2$ were demonstrated to be equivalent up to thicknesses of 20 nm \citep{puurunen_controlling_2011}.}). Apart from this last observation, the main difference between B1 and B3 is the temperature at which the \textit{ALD} was conducted. \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.47\textwidth} \centering \includegraphics[width=\textwidth]{RUG-typical-IV.PNG} \subcaption{} \label{fig:RUG-IV} \end{subfigure} \hfill \begin{subfigure}[b]{0.46\textwidth} \centering \includegraphics[width=\textwidth]{RUG-conduction-mechanisms.PNG} \subcaption{} \label{fig:RUG-conduction} \end{subfigure} \vspace{0.2cm} \caption{Electrical properties of a typical B3 sample.
The comparison with a reference device (from a B1 sample) reflects a self-limiting mechanism at work in B3 samples, even when the SiO$_{2-x}$ thickness is reduced to its minimum. Furthermore, a metastable state (identified as branch B) partially matches the results obtained for the original batch. \ref{fig:RUG-IV} Current as a function of voltage (IV) in semilog scale of a B3 sample (coloured symbols) compared to the original device (grey scale). Successive IV loops are measured. \ref{fig:RUG-conduction} Double-logarithmic IV loop for the two successive excursions presented in Fig. \ref{fig:RUG-IV}. The dependence obtained for a reference sample is included for comparison.} \label{fig:RUG-samples-summary} \end{figure} Fig. \ref{fig:RUG-samples-summary} includes IV loops recorded in a B3 sample consisting of the same nominal thicknesses of both Ti and HfO$_2$, with the \textit{ALD} process employing exactly the same precursors as in B1. Once again the electrical properties differ from those of B1, mainly in the absence of coupling between the two units, which is what gives rise to the rectifying ability, and in the lack of C2C and D2D stability (highlighted as a relevant and distinctive aspect of B1). Interestingly, as in the original case, a trace of self-limiting switching is observed that prevents a current runaway immediately after the SET operation. However, this was not the case for all the samples of this batch. Fig. \ref{fig:RUG-conduction} shows that whatever similarity exists between the two types of samples, it is not stable enough to persist after cycling. Moreover, the voltage needed to SET the B3 devices is even higher than in B1. This value would even surpass the nominal breakdown field if the voltage were to drop fully across the HfO$_2$ layer.
Even though this is not the case (no irreversible breakdown is observed, as demonstrated by the self-limited current), the process does not appear as repeatable and sustainable as the one registered in B1. In addition, a second loop towards the same polarity starts at a resistance level lower than that at the end of the previous voltage run and evidences a RESET operation typical of a unipolar resistive switch (instead of the bipolar nature claimed in the rationalization of the B1 electric behaviour). \begin{table}[h!] \centering \begin{tabular}{c|c|c|c} & B1 & B2 & B3 \\ \hline Condition or parameter & & & \\ \hline Si doping ($cm^{-3}$) & $10^{18}/10^{19}$ (p-type) & $10^{18}$ \colorbox{red}{\color{white}{(n-type)}} & \colorbox{green}{$10^{19}$ (p-type)}\\ Oxidized SiO$_{2-x}$ thickness ($nm$) & 150 & 120 & \colorbox{red}{\color{white}{0}} \\ Ti thickness ($nm$) & 20 & 0, 10, \colorbox{green}{20} & 0, 5, 10, \colorbox{green}{17}, 35 \\ Ti deposition method & sputtering & sputtering & sputtering \\ ALD pre-process & - & - & \colorbox{red}{\color{white}{O$_3$ and N$_2$ pulses at 300$^{\circ}$C}} \\ T during ALD ($^{\circ}$C) & 300 & \colorbox{red}{\color{white}{175}} & \colorbox{red}{\color{white}{100}} \\ Oxygen precursor & ozone & \colorbox{red}{\color{white}{water}} & \colorbox{green}{ozone} \\ Hf precursor & TDMAHf & \colorbox{red}{\color{white}{TEMAHf}} & \colorbox{green}{TDMAHf} \\ HfO$_2$ thickness ($nm$) & 20 & 10, \colorbox{green}{20} & \colorbox{green}{20} \\ TE material (thickness in $nm$) & Co(35)/Pd(40) &\colorbox{red}{\color{white}{Pt(70)}} & \colorbox{green}{Co(30)}/\colorbox{red}{\color{white}{Pt(40)}}, Pt(70), Ti(10)/Au(60) \\ TE deposition method & e-beam/sputtering & & \colorbox{red}{\color{white}{sputtering}} \\ TE shape & square & circle & circle \\ TE dimension ($\mu$m) & L = 200 & $\phi$ = 150, \colorbox{green}{250} & $\phi$ = 100, 125, 200,
\colorbox{green}{250}, 350, 500, 1000 \end{tabular} \caption{Comparison of parameters between the original batch (B1) samples and the two subsequent deposition processes (B2 and B3) aiming to mimic the remarkable electrical properties of the former batch. TEMAHf stands for tetrakis(ethylmethylamido)hafnium = $Hf[N(CH_3)(C_2H_5)]_4$. TDMAHf stands for tetrakis(dimethylamido)hafnium = $Hf[N(CH_3)_2]_4$. Similarities are highlighted in green while differences are highlighted in red.} \label{tab:FZJ-RUG-sample-details} \end{table} Table \ref{tab:FZJ-RUG-sample-details} summarizes \colorbox{green}{similarities} (highlighted in green) and \colorbox{red}{\color{white}{differences}} (highlighted in red) compared to the original process. Unfortunately, neither batch B2 nor batch B3 gave the expected results. Even though it remains unclear whether the accidentally formed TiO$_x$ phase of the original samples was matched in the later ones, the fact that the most remarkable properties were not reproduced makes us consider other possibilities. \section*{Discussion} Even though the parameter space of such a complex stack is far from fully explored, the evident contrast between the remarkable features of the original samples and the electrical response of the subsequently designed ones led us to wonder whether there is another suitable description for the observed behaviour. Perhaps the key to unlocking the desired properties lies in an alternative physical mechanism at work, while the previous picture, based on the role of the TiO$_x$ interface, might have been misleading the efforts in that regard. In the search for other explanations, in the following we discuss two alternatives that seem feasible and deserve further investigation: threshold switching and (anti)ferroelectricity, including a mention of ferroelectricity for the sake of completeness.
\subsection*{Threshold switching} In the early '60s, the observation of a sudden change in the resistance state of bulky chalcogenides was referred to as threshold switching \citep{ovshinsky_reversible_1968}. In those systems, when a certain threshold voltage (V$_{th}$) is reached\footnote{That value depends on the thickness of the material.}, a dramatic reduction of the resistance defines a new state that is held until the so-called holding voltage (V$_h$) is reached on the way back (see Fig. \ref{fig:Typical-threshold}). This second voltage condition is met while reducing the applied stimulus, since V$_h$ is lower than V$_{th}$ and of the same polarity. In such a case, when the device is not being polarized, the highly resistive state is recovered. The same qualitative behaviour is observed regardless of the polarity. An archetypical electrical response is presented in Fig. \ref{fig:Typical-threshold} (adapted from ref. \citep{adler_mechanism_1978}). \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{Typical-IV_threshold-switching_adapted.PNG} \subcaption{} \label{fig:Typical-threshold} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{nG_threshold-like-IV.PNG} \subcaption{} \label{fig:Cont-LinIV} \end{subfigure} \vspace{0.2cm} \caption{\ref{fig:Typical-threshold} Typical threshold switching IV dependence. Adapted from Adler \textit{et al.} \citep{adler_mechanism_1978}. The device naturally starts in the OFF state and, upon reaching V$_{th}$, undergoes a transition to an ON state retained only until V$_h$ is crossed on the way back. \ref{fig:Cont-LinIV} Current measured on a B1 sample upon a continuous voltage sweep. The \textit{LRS} vanishes before removing the power supply.} \label{fig:threshold-switching} \end{figure} Fig.
\ref{fig:Cont-LinIV} displays an IV loop conveniently presented to analyse differences from and similarities to threshold switching. The loop was recorded in a B1 sample and acquired under a continuous voltage ramp\footnote{A device measured on the same B1 sample was shown for the sake of comparison in Fig. \ref{fig:RUG-IV}. The different electrical response (non-volatile in that case) relies on the difference in the applied protocol, as highlighted in the subsection \textit{Original batch}.}. Under such conditions, the volatile switching nature is evident, since the ON state vanishes before returning to the 0 V condition. Presented in this way, a threshold can be spotted, even though the switch is not as abrupt as in the archetypical case (Fig. \ref{fig:Typical-threshold}). In addition, an asymmetry between the positive and the negative polarities is observed, which could screen some other relevant aspect not taken into account. The fact that the original samples (B1) appear so dependent on the specific pulsing scheme seems to suggest a strong dependence either on thermal activity or on transient responses, which would enable different responses depending on the waiting time in between pulses. Even if there is a reminiscence of the electrical signature of the original threshold switches, the nanometric stacks are radically different from the glassy semiconductors in which the effect was first identified. Nowadays, such stacks are composed of nanolaminated layers of binary or ternary oxides. For contemporary systems like these, a Poole-Frenkel barrier lowering (a field-driven effect) and an associated thermal runaway, for instance, have been shown to satisfactorily reproduce experimental results in niobium oxides \citep{funck_multidimensional_2016}. In fact, quite recently, the debate between thermal and non-thermal mechanisms has been stirred up by an experimental demonstration of the heat dissipated during a CC-NDR experiment \citep{goodwill_spontaneous_2019}.
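A minimal sketch can make the V$_{th}$/V$_{h}$ phenomenology above concrete. The toy model below (thresholds and resistances are illustrative values, not fitted to any of our devices) switches to a low-resistance state once the bias exceeds V$_{th}$ and loses it as soon as the bias falls below V$_{h}$, reproducing the volatile character of the loop:

```python
# Toy model of threshold (volatile) switching: the device turns ON when the
# applied bias exceeds the threshold voltage V_th and relaxes back to OFF once
# the bias drops below the holding voltage V_h (with V_h < V_th).
# All parameter values are illustrative, not fitted to the measured devices.

def threshold_switch_iv(voltages, v_th=1.5, v_h=0.5, r_off=1e8, r_on=1e3):
    """Return the current trace for a voltage sweep of a two-state device."""
    state_on = False
    currents = []
    for v in voltages:
        if abs(v) >= v_th:      # SET: field large enough to trigger the ON state
            state_on = True
        elif abs(v) < v_h:      # bias below the holding voltage: ON state is lost
            state_on = False
        r = r_on if state_on else r_off
        currents.append(v / r)
    return currents

# Triangular sweep 0 V -> 2 V -> 0 V: the ON state is volatile and vanishes
# before the bias returns to zero, as in the continuous-ramp loop of the text.
up = [i * 0.1 for i in range(21)]     # 0.0, 0.1, ..., 2.0 V
sweep = up + up[::-1]
iv = threshold_switch_iv(sweep)
```

On the way down, the ON branch survives only between V$_{th}$ and V$_h$ and collapses before 0 V, which is exactly the volatile signature discussed above.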
\vspace{0.2 cm} Therefore, on the one hand, the threshold nature of the electrical response offers a framework to rationalize the observed changes in thick layers without any requirement for specific crystal structures, as is the case for the samples presented in this study. It would also allow one to rationalize the occurrence of volatile switching. Possibly, the coexistence of volatile and non-volatile states might also be explained in terms of the heat dissipated under the specific applied protocol. On the other hand, this picture considers only two relevant interfaces, while our stacks are nanolaminated or multi-layered by construction. In order to further test whether this mechanism is at play, it would be necessary to identify which interfaces those are and what role the others play. In addition, the similarity of the SET voltages for the two polarities would have to be explained in terms of a stack that does not present any symmetry. \subsection*{Ferroelectric HfO$_2$-based systems} Ferroic materials are characterized by a spontaneous order parameter that can be reversibly switched between at least two energetically-equivalent ground states by an applied conjugate field. \textit{Ferroelectrics} (\textit{FE}) are insulators possessing a spontaneous electric polarization switchable by an electric field \citep{rabe_physics_2007}. Recently, ferroelectricity has been observed in crystalline doped HfO$_2$. The most interesting aspect of this ferroelectricity\footnote{Understood as the presence of a remnant polarization able to be switched upon external stimuli.} is that, contrary to the expectation of losing the polarization as the thickness is reduced, hafnia seems to display stronger polarization the thinner the layer becomes.
And even though there are many nuances still under investigation, the surface energy contribution and the associated internal pressure inside the nano-sized grains seem to be the reason for this non-intuitive behaviour \citep{martin_ferroelectricity_2014,sang_structural_2015}. Within this context, one could wonder whether there is any chance that the HfO$_2$ layer formed in our stack also presents these characteristics. Even though the process temperature should not be high enough to crystallise hafnia, it shows a polycrystalline nature \citep{zazpe_resistive_2013}, while usual \textit{ALD} processes give rise to amorphous layers, in need of a \textit{rapid thermal anneal} (\textit{RTA}) to crystallize. In this regard, it is worth bearing in mind that doping, in the form of ions migrating from layer to layer, could also affect this aspect by introducing distortion in the lattice. Even though one might wonder whether the Pd could have diffused into the HfO$_2$ layer \citep{chakraborty_mechanisms_2010} to introduce that kind of crystalline defect, the STEM measurement rules out this possibility, given the sharply defined Co/Pd imaged interface. In addition, the thickness seems to also play a role in triggering the crystallization \citep{polakowski_ferroelectricity_2015}. In any case, even if we accept the evidence of the polycrystallinity of the layer, grazing-incidence X-ray measurements suggested a dominant monoclinic phase \citep{zazpe_resistive_2013} which, being centro-symmetric, would be incompatible with the presence of ferroelectricity. Even though a refinement of the identified peaks might be necessary, there is no evidence for a ferroelectric behaviour either from the structural point of view or from electrical switching measurements (not shown) using the so-called PUND protocol \citep{schenk_about_2014}.
\vspace{0.2 cm} Having concluded that none of the signatures seems to indicate a ferroelectric nature of our stack, a closer look at the I-V loop allows one to recognize a striking similarity with the polarization vs field (P-E) dependence of an antiferroelectric (\textit{AFE}). One may wonder whether there is any mechanism relating the current to the polarization such that the IV curve so closely resembles the dependence of the strain upon the same driving force. \subsection*{Antiferroelectric HfO$_2$} \begin{figure}[ht!] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{AFE_hysteresis-states_adapted.png} \subcaption{} \label{fig:typicalAFEloop} \end{subfigure} \hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=\textwidth]{nG_cont-IV.PNG} \subcaption{} \label{fig:nGlinIV-similAFE} \end{subfigure} \vspace{0.2cm} \caption{Antiferroelectric behaviour compared to experimental data obtained in a B1 device. \ref{fig:typicalAFEloop} Archetypical polarization vs electric field (P-E) loop for an \textit{AFE} with cartoons of ferroelectric domains' orientation upon external electric field representing the \textit{AFE/FE} transition (adapted from Hao \textit{et al.} \citep{hao_comprehensive_2014}). \ref{fig:nGlinIV-similAFE} Semi-logarithmic plot of current (absolute value) vs voltage measured upon a continuous voltage ramp (same curve as displayed in Fig. \ref{fig:Cont-LinIV}).} \label{fig:AFE_P-V_loop} \end{figure} The P-E loop for an \textit{AFE} material can be described as a two-threshold excursion \citep{kittel_theory_1951}. Firstly, upon increasing the applied electric field, a pronounced increase of the polarization accompanied by a transient current is observed, very much like in a \textit{FE} itself \citep{schenk_about_2014}. That condition corresponds to the re-orientation of the domains that were antiparallel to the external field direction.
The \textit{FE} state of \textit{AFE} materials is, however, unstable, and depending on their ability to hold this state they are classified as soft or hard \textit{AFE} \citep{hao_comprehensive_2014}. In turn, this characteristic depends on thermal dissipation, strain and other factors (such as charge rearrangement, as shown in \textit{AFE}-like loops that upon stimulation undergo a transition to a pure \textit{FE} nature, also known as the wake-up effect \citep{zhou_wake-up_2013}). Most \textit{AFE} materials show a distinctive feature as the applied electric field is reduced: at a certain value (E$_{AFE}$ in Fig. \ref{fig:typicalAFEloop}), the external stimulus is no longer enough to hold the parallel polarization and the material relaxes to its energetically more stable \textit{AFE} state. An abrupt reduction of the polarization (which, neglecting the paraelectric component, would ideally reach zero) accompanied by a current flow in the direction of the field gradient \citep{schenk_about_2014} marks the \textit{FE}/\textit{AFE} transition. Ideally, at zero bias, there is no trace of the phase transition. Even though the \textit{FE} and \textit{AFE} definitions are different, and so are the requirements for the materials able to display those behaviours \citep{rabe_antiferroelectricity_2013}, there is a close relationship between the two. The energies of the \textit{FE} and \textit{AFE} phases are presumably very close, allowing transitions from one to the other upon varying different parameters. In fact, \textit{FE} behaviour of thin films of otherwise \textit{AFE} materials was demonstrated \citep{ayyub_ferroelectric_1998}, while tuning the dopant content allows covering many possibilities in a complex phase diagram of commensurate and incommensurate \textit{FE} \citep{asada_coexistence_2004} as well as ferrielectricity and \textit{AFE} \citep{asada_coexistence_2004}.
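Kittel's two-sublattice picture \citep{kittel_theory_1951} makes the double-loop phenomenology concrete. A schematic Landau-type free energy in terms of the sublattice polarizations $P_a$ and $P_b$ (with phenomenological coefficients $f$, $g>0$, $h>0$; quoted as a sketch, not as a model of our stack) reads

```latex
F = f\,(P_a^2 + P_b^2) + g\,P_a P_b + h\,(P_a^4 + P_b^4) - E\,(P_a + P_b)
```

At $E=0$ the positive cross-coupling $g$ favours antiparallel sublattices ($P_a=-P_b$, zero net polarization); a sufficiently large field tilts the balance towards the parallel, \textit{FE}-like configuration, which is released again once the field drops below E$_{AFE}$, producing the two thresholds of the double loop.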
Therefore, given that ferroelectricity was identified in HfO$_2$-based systems, one may wonder whether \textit{AFE} behaviour could take place upon a proper orientation. Hafnia-based systems have broadly demonstrated \textit{FE} properties attributed to their intrinsic nature but achieved by stabilizing otherwise unstable phases \citep{polakowski_ferroelectricity_2015}. Even though a polar distortion of a high-symmetry phase is needed for \textit{FE} to occur, pinched hysteresis loops as well as wake-up and fatigue are still being closely studied \citep{pesic_physical_2016}, and hysteresis deformation in general \citep{schenk_about_2014} suggests subtle acting mechanisms that could lead to extremely interesting behaviours still to be detected. In particular, quite recently the pinched loop during the so-called wake-up of \textit{FE} hafnia was compared to the double loop typical of the \textit{AFE} response \citep{lomenzo_depolarization_2020}. \vspace{0.2 cm} As a summary, it is now well established that \textit{FE} in HfO$_2$-based systems can be achieved by different methods, either by doping, encapsulating or both, such that metastable phases of lower symmetry can be stabilized. Epitaxial strain can also stabilize the \textit{FE} phase, but it is incompatible with our system. In addition, \textit{AFE} in hafnia-based materials has also been reported. Our system of interest, while possibly displaying some fraction of a \textit{FE} phase, resembles to some extent the behaviour of an \textit{AFE} (Fig. \ref{fig:nGlinIV-similAFE}) coupled to an additional mechanism that is able to map polarization into current. Therefore, the next step will be to explore the possibility of having a ferroic hafnia which, adequately coupled to other layers, results in the occurrence of alternative volatile and non-volatile states of high interest for improved-functionality devices with neuromorphic scope.
\section*{Final remarks and perspectives} There exist several differences among samples grown in different set-ups. Assuming that the SiO$_{2-x}$ layer does not play a key role in the switching mechanism, the effort was focused on mimicking the TiO$_x$ and HfO$_2$ quality. Also, the role of the top electrode was somewhat disregarded. Nevertheless, the impossibility of reproducing the combination of properties featured by B1 indicates that a key aspect remained undetected. Given the ubiquity of threshold switching and the variety of underlying processes leading to similar macroscopic responses, it seems clear that charge injection could be at play in our stack. This, in particular, could help identify the mechanism that enables the observed volatile switching. In addition, the determination of the \textit{FE} nature of many hafnia-based systems leads us to consider the plausibility of an unintentional ion migration that would populate the layer, introducing the necessary distortion to form lower-symmetry phases. Nonetheless, the impossibility of recording a \textit{FE} loop and the lack of evidence for a predominant structural phase contradict this scenario. However, given the striking similarities between the IV loop and the displacement in an \textit{AFE}, the hypothesis of this connection appears unavoidable. If anything like a ferroic order of the HfO$_2$ layer is involved (\textit{FE} or \textit{AFE}), then the temperature inside the \textit{ALD} chamber during deposition would be of paramount importance. So far, for the subsequent batches (B2 and B3), the temperature was held high (at 300$^{\circ}$C) only for a couple of minutes before the introduction of the HfO$_2$ precursors, to promote the oxidation of the underlying layer. Depositing HfO$_2$ at such temperatures had been, until now, intentionally avoided to preserve the quality of the precursors.
Performing an ex-situ \textit{RTA} would make it possible to crystallize the HfO$_2$ (which in batches B2 and B3 seems to be amorphous) but would not produce the same impact on the Ti layer underneath. An unexplored aspect of the stack is the role of the Co layer used as top electrode in the original batch. Cobalt is ferromagnetic (\textit{FM}), with out-of-plane magnetization when the layer is $\leq$ 1.2 nm thick and in-plane magnetization otherwise \citep{metaxas_creep_2007}. In addition, CoO$_x$, which could have formed (as suggested by the \textit{HAADF} image) due to oxygen scavenging from the oxide matrix, is antiferromagnetic. Any of these scenarios could impact the overall properties of the stack, and they have not been extensively studied. In fact, the deposition of \textit{FM} layers on top of \textit{FE} dielectrics has been a strategy exploited to achieve electrical control of \textit{FM} loops \citep{vermeulen_ferroelectric_2019}. Another unconsidered ingredient is the role of oxygen vacancies. Even though we cannot rule out the presence of oxygen vacancies within any of our stacks, the experimental evidence seems to indicate that other mechanism/s control the overall response. For that reason, and given that involving oxygen vacancies would have required a detailed analysis of its own, we have avoided invoking them in our proposed alternative explanations. \vspace{0.2cm} In summary, a list of changes to be applied to the fabrication flow, as well as a complementary characterization, have been identified as necessary in order to spot traces of the mentioned mechanisms. We hope that this discussion will encourage others to join efforts towards a tight control of the stack in the direction of obtaining the desired properties. \vspace{6pt} \section*{Acknowledgements} \noindent The authors would like to acknowledge R. Zazpe and L. Hueso for providing the samples that triggered this project. Special thanks go to A. Hardtdegen and S.
Hoffmann-Eifert, who were involved in the deposition of the \textit{FZJ} complementary batch. \vspace{0.2cm} \noindent The authors gratefully acknowledge financial support from NWO's TOP-PUNT grant 718.016002. C.P. Quinteros also wants to acknowledge the \textit{Deutscher Akademischer Austauschdienst} (\textit{DAAD}) and the Argentinian Education Ministry for funding her stay at the \textit{Forschungszentrum Jülich}. This project has also received funding from the EU-H2020-RISE project \textit{Memristive and multiferroic materials for logic units in nanoelectronics} 'MELON' (SEP-2106565560). \section*{Conflicts of interest} The authors declare no conflict of interest. \bibliographystyle{ieeetr}
\section{Introduction} Many extensions of the Standard Model (SM), such as light hidden dark-sector models \cite{hidden} and the Next-to-Minimal Supersymmetric Standard Model (NMSSM) \cite{nmssm}, introduce light weakly-interacting degrees of freedom. In the hidden dark-sector model, WIMP-like fermionic dark matter particles are charged under a new force carrier. The corresponding gauge field, the dark photon ($\gamma'$), couples to the SM particles via kinetic mixing \cite{epsilon} with mixing strength $\epsilon$. In the framework of this hidden sector, dark matter particles annihilate into a pair of dark photons, which subsequently decay into SM particles. The mass of the dark photon is constrained to be at most a few GeV to accommodate the features of recent experimental anomalies observed in cosmic rays \cite{pamela}. The NMSSM adds an additional singlet superfield to the Minimal Supersymmetric Standard Model (MSSM) to solve the so-called naturalness problem of the MSSM \cite{MSSM}. The Higgs sector of the NMSSM contains three CP-even Higgs bosons, two CP-odd Higgs bosons and two charged Higgs bosons. The mass of the lightest CP-odd Higgs boson ($A^0$) can be less than twice the mass of a charm quark, and the $A^0$ can be produced via $J/\psi \rightarrow \gamma A^0$ \cite{wilzeck}. The coupling of the Higgs field to a down-type (up-type) quark pair is proportional to tan$\beta$ (cot$\beta$), where tan$\beta$ is the ratio of the vacuum expectation values of the up- and down-type Higgs doublets. The branching fraction of the $J/\psi \rightarrow \gamma A^0$ decay is expected to be in the range of $10^{-9} - 10^{-7}$, depending on the $A^0$ mass and coupling \cite{bra0}. The final state to which the $A^0$ decays depends on various parameters such as tan$\beta$ and the $A^0$ mass \cite{Dermisek}. The discovery of such low-mass states would open a new frontier in particle physics beyond the SM.
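For reference, the $\epsilon^2$ suppression exploited in the searches below originates from the tree-level width of the dark photon into a lepton pair (a standard result of the dark-photon literature, quoted here schematically rather than taken from a specific reference):

```latex
\Gamma(\gamma' \rightarrow \ell^+ \ell^-) = \frac{\alpha\,\epsilon^2}{3}\, m_{\gamma'}
\sqrt{1 - \frac{4 m_\ell^2}{m_{\gamma'}^2}} \left(1 + \frac{2 m_\ell^2}{m_{\gamma'}^2}\right)
```

For $\epsilon \sim 10^{-3}$ and $m_{\gamma'}$ of order 1 GeV/$c^2$ this width is of order eV, far below any realistic detector resolution, which justifies treating the $\gamma'$ as a narrow resonance.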
The BABAR \cite{babardarkphot,babarlighthigh09,babarlighthiggs13}, Belle \cite{belledarkphot}, CMS \cite{cmslighthiggs} and CLEO \cite{cleohiggs} experiments have searched for these low-mass states and reported negative results so far. More recently, BESIII has also searched for low-mass Higgs and dark bosons using the data collected at the $J/\psi$, $\psi(2S)$ and $\psi(3770)$ resonances. The following sections describe these new-physics searches at BESIII in detail. \section{Search for a low-mass dark photon} High-intensity $e^+e^-$ collider experiments provide a clean environment in which to probe low-mass dark sectors \cite{Batell, Essig}. The width of the dark photon is suppressed by a factor of $\epsilon^2$ and is expected to be much narrower than the experimental resolution. The dark photon can therefore be detected via the initial state radiation (ISR) process $e^+e^- \rightarrow \gamma l^+l^-$ ($l = e, \mu$). BABAR has searched for a dark photon using this ISR process and set some of the best exclusion limits in the mass range $0.02 - 10.2$ GeV/$c^2$ \cite{babardarkphot}. Having collected large data samples at several center-of-mass energies, BESIII has the capability to produce competitive exclusion limits on the coupling of the dark photon to SM particles in the absence of any signal observation. The search for di-lepton decays of a light dark photon is performed in the ISR process $e^+e^- \rightarrow \gamma_{ISR} \gamma'$, $\gamma' \rightarrow l^+l^-$ ($l=e, \mu$) using $2.9$ fb$^{-1}$ of $\psi(3770)$ data collected by the BESIII experiment during $2010-2011$. Events of interest are selected with exactly two oppositely charged tracks, where both tracks are identified as muons (electrons) using the standard muon (electron) particle identification (PID) system developed by the BESIII experiment \cite{muonpid}.
In order to remove beam-related backgrounds, both tracks are required to originate within 10.0 cm of the interaction point along the beam direction and within 1.0 cm in the direction transverse to the beam. The polar angle of both tracks is required to be within the main drift chamber (MDC) acceptance (i.e. $|\cos\theta| < 0.93$). This search is based on an untagged-ISR-photon method: the ISR photon is emitted at a small polar angle and is not detected within the angular acceptance of the electromagnetic calorimeter (EMC), which suppresses the non-ISR backgrounds (Figure~\ref{fig:ISRGam} (left)). In order to improve the mass resolution of the dark photon, a one-constraint (1C) kinematic fit is performed with the two charged tracks and the missing track, with the condition that the mass of the missing track be zero. The fit quality requirement $\chi_{1C}^2 < 20$ ($\chi_{1C}^2 < 5$) is applied in the $\gamma_{ISR} \mu^+\mu^-$ ($\gamma_{ISR}e^+e^-$) case. We finally require that the di-lepton invariant mass be within $1.5-3.4$ GeV/$c^2$. Figure~\ref{fig:ISRGam} (right) shows the di-muon invariant mass distribution of both data and Monte Carlo (MC) samples. No evidence of dark photon production is found in any of the decay processes at any mass point, and we set $90\%$ C.L. upper limits on $\epsilon$ as a function of $m_{\gamma'}$ using Equation 19 of \cite{epsformula}. \begin{figure}[htb] \centering \includegraphics[height=2.0in,width=2.9in]{thetagam.eps} \includegraphics[height=2.0in,width=2.9in]{mmumu.eps} \caption{(Left) Polar angle ($\theta_{\gamma}$) distribution of the ISR photon and (right) the di-muon invariant mass ($m_{2\mu}$) distribution of data (black dots) and MC (shaded area). The marked area around the $J/\psi$ resonance is excluded from the analysis. The inlay in the upper left of the right-hand plot displays an enlargement of the $m_{2\mu}$ distribution.
The $\theta_{\gamma}$ range covered by the BESIII detector is $[0.376,2.765]$ radians. In the untagged-photon method, the ISR photon is required not to be detected in the EMC acceptance. } \label{fig:ISRGam} \end{figure} \section{Search for a low-mass Higgs boson} Previous searches for the $A^0$ performed by the BABAR \cite{babarlighthigh09,babarlighthiggs13}, CLEO \cite{cleohiggs}, and CMS \cite{cmslighthiggs} experiments have placed very strong exclusion limits on the coupling of the $b$-quark to the $A^0$ \cite{Dermisek,cmslighthiggs,babarlighthiggs13}. BESIII has also previously searched for di-muon decays of a light Higgs boson in radiative $J/\psi$ decays using data collected at the $\psi(2S)$ resonance, where the $J/\psi$ events were selected by tagging the pion pair from $\psi(2S) \rightarrow \pi^+\pi^- J/\psi$ transitions \cite{BESIII_Higgs0}. No evidence of an $A^0$ candidate was found, and exclusion limits on $\mathcal{B}(J/\psi \rightarrow \gamma A^0) \times \mathcal{B}(A^0 \rightarrow \mu^+\mu^-)$ were set in the range of $(0.4-21.0) \times 10^{-6}$ for $0.212 \le m_{A^0} \le 3.0$ GeV/$c^2$, which is still above the theoretical prediction. The large data samples collected at the $J/\psi$ resonance by the BESIII experiment allow us to repeat this study with higher precision. The search for a light Higgs boson is performed in the fully reconstructed decay process $J/\psi \rightarrow \gamma A^0$, $A^0 \rightarrow \mu^+\mu^-$ using 225 million $J/\psi$ events collected by the BESIII experiment. An equal number of simulated generic $J/\psi$ decays is used for the background studies. This work is based on a blind analysis; the full data sample is not examined until all the selection criteria are finalized. Events of interest are selected with two oppositely charged tracks and at least one good photon candidate.
The minimum energy of this photon is required to be 25 MeV in the barrel region ($|\cos\theta| < 0.8$) and 80 MeV in the end-cap region ($0.86 < |\cos\theta| < 0.92$). The EMC time is also required to be within $[0,14](\times 50)$ ns of the event start time to suppress electronic noise and energy deposits unrelated to the signal event. Additional low-energy photons are allowed in the event. In order to reduce beam-related backgrounds, charged tracks are required to originate within $\pm 10.0$ cm of the interaction point along the beam direction and within $\pm 1.0$ cm perpendicular to the beam. The charged tracks are required to lie in the polar-angle region $|\cos\theta| < 0.93$, thus having a reliable measurement in the MDC. The two charged tracks are assumed to be muons, and at least one of them must be identified as a muon using a muon PID system based on the following three variables: (1) the energy deposited in the EMC by the muon candidate ($E_{cal}^{\mu}$) must be within the range $0.1 < E_{cal}^{\mu} < 0.3$ GeV, (2) the absolute value of the difference between the time-of-flight (TOF) measurement and the expected muon time ($\Delta t^{TOF}$) must be less than 0.26 ns, and (3) the penetration depth in the muon counter (MuC) must be greater than $(-40.0 +70\times p)$ cm for $0.5 \le p \le 1.1$ GeV/$c$ and 40 cm for $p > 1.1$ GeV/$c$. A 4C kinematic fit is performed with the two charged tracks and one of the good photon candidates to improve the mass resolution of the $A^0$ candidate. The $\chi^2$ from the 4C kinematic fit is required to be less than $40$ to suppress background contributions from the decay processes $J/\psi \rightarrow \rho\pi$ and $e^+e^- \rightarrow \gamma \pi^+\pi^-\pi^0$.
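The three muon-PID requirements above can be collected into a single predicate. The following is an illustrative sketch using the thresholds quoted in the text; the function, its argument names, and the treatment of tracks below $0.5$ GeV/$c$ (which the text does not specify) are our own, not BESIII analysis code:

```python
# Sketch of the muon PID requirements quoted above (thresholds as given in
# the text; the helper and its inputs are illustrative, not BESIII code).
def passes_muon_pid(e_cal, dt_tof, muc_depth, p):
    """e_cal in GeV, dt_tof in ns, muc_depth in cm, p in GeV/c."""
    # (1) EMC energy deposit consistent with a minimum-ionizing muon
    if not (0.1 < e_cal < 0.3):
        return False
    # (2) TOF time consistent with the muon hypothesis
    if abs(dt_tof) >= 0.26:
        return False
    # (3) momentum-dependent penetration depth in the muon counter
    if 0.5 <= p <= 1.1:
        required_depth = -40.0 + 70.0 * p
    elif p > 1.1:
        required_depth = 40.0
    else:
        return False  # below the momentum range quoted in the text (assumption)
    return muc_depth > required_depth

# A 1 GeV/c track must penetrate deeper than -40 + 70*1.0 = 30 cm:
print(passes_muon_pid(e_cal=0.2, dt_tof=0.1, muc_depth=35.0, p=1.0))  # True
```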
We further require that at least one of the tracks have cosine of the muon helicity angle ($\cos\theta_{\mu}^{hel}$), defined as the angle between the direction of one of the muons and the direction of the $J/\psi$ in the $A^0$ rest frame, less than 0.92, to suppress backgrounds peaking in the forward direction. Figure~\ref{fig:mred} shows the reduced di-muon mass, $m_{\rm red} = \sqrt{m_{\mu^+\mu^-}^2 -4m_{\mu}^2}$, distribution of $10\%$ of the $J/\psi$ data together with a composite MC sample of generic $J/\psi$ decays and simulated ISR production of $e^+e^- \rightarrow \gamma \mu^+\mu^-$ events. The quantity $m_{\rm red}$ is equal to twice the muon momentum in the $A^0$ rest frame and is easier to model near threshold than the di-muon invariant mass. In the $10\%$ $J/\psi$ data-set, the background is dominated by the following two types of processes: a \rm{\lq\lq non-peaking\rq\rq} component from $e^+e^- \rightarrow \gamma \mu^+\mu^-$ and \rm{\lq\lq peaking\rq\rq} components from $J/\psi \rightarrow \rho\pi$ and $J/\psi \rightarrow \gamma X$ decays, where $X = f_2(1270)$ or $f_0(1710)$. In addition, a peaking component at the $f_4(2050)$ mass position appears in the MC sample of generic $J/\psi$ decays. Since this additional background source may be visible in the full $J/\psi$ data-set, we also take it into account when developing our maximum likelihood (ML) fitting procedure to extract the signal yield as a function of $m_{A^0}$. \begin{figure}[htb] \centering \includegraphics[height=2.0in,width=3.2in]{mred.eps} \caption{The $m_{red}$ distribution of the cocktail MC (shaded area) and $10\%$ of the $J/\psi$ data (red error bars). The cocktail MC sample is a combination of simulated generic $J/\psi$ decays and $e^+e^- \rightarrow \gamma \mu^+\mu^-$ events. The MC samples are normalized to the data luminosity.
Due to the very low statistics in the $10\%$ $J/\psi$ data sample, the additional peaking background at the $f_4(2050)$ mass position is not visible there; it may, however, appear in the full $J/\psi$ data sample.} \label{fig:mred} \end{figure} We use the sum of two Crystal Ball (CB) functions to model the $m_{red}$ distribution of the signal, generating signal MC samples at $23$ assumed $A^0$ mass points. The $m_{red}$ resolution of the signal varies in the range $2-12$ MeV/$c^2$, while the signal efficiency varies from $49\%$ to $33\%$ depending on the momenta of the two muons at the different Higgs mass points. The $m_{red}$ distribution of the non-peaking background is modeled by an $n^{th}$-order ($n=2,3,4,5$) Chebyshev polynomial, depending on the $m_{A^0}$ interval. In the low-mass region, we constrain the curve to pass through the origin, where the cross-section is zero (i.e. $m_{red} \approx 0$). The $m_{red}$ distribution of the $\rho$ resonance is modeled by a Cruijff function \cite{cruijff}, and those of the $f_2(1270)$, $f_0(1710)$ and $f_4(2050)$ resonances by the sum of two CB functions. Both additive and multiplicative systematic uncertainties are considered in this analysis. The additive uncertainty arises from fit bias and fixed PDF parameters, and does not scale with the number of reconstructed signal events. The multiplicative uncertainties scale with the number of reconstructed signal events and arise from the reconstruction efficiency, the uncertainty in the number of $J/\psi$ mesons ($1.30\%$), the resolutions of the peaking backgrounds ($1.2\%$ for the $\rho$ resonance and $6.7\%$ for the $f_2(1270)$, $f_0(1710)$ and $f_4(2050)$ resonances) and the PDF parameters of the non-peaking backgrounds ($3.18\%$).
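The statement above that $m_{\rm red}$ equals twice the muon momentum in the $A^0$ rest frame follows because each muon there carries energy $m_{\mu^+\mu^-}/2$, so $p=\sqrt{m_{\mu^+\mu^-}^2/4-m_\mu^2}$ and $2p=m_{\rm red}$. A minimal numerical check (PDG muon mass; illustrative only, not analysis code):

```python
# Check that m_red = sqrt(m^2 - 4 m_mu^2) equals twice the muon momentum
# in the di-muon rest frame (illustrative sketch, not BESIII code).
import math

M_MU = 0.1056584  # muon mass in GeV/c^2 (PDG value)

def m_red(m_mumu):
    return math.sqrt(m_mumu**2 - 4.0 * M_MU**2)

def p_mu_rest_frame(m_mumu):
    e_mu = 0.5 * m_mumu  # each muon carries half the A0 energy at rest
    return math.sqrt(e_mu**2 - M_MU**2)

for m in (0.25, 1.0, 3.0):  # GeV/c^2, spanning the search range
    assert abs(m_red(m) - 2.0 * p_mu_rest_frame(m)) < 1e-12
```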
The uncertainty associated with the photon detection efficiency is measured to be less than $1\%$ using a control sample of the ISR process $e^+e^- \rightarrow \gamma \mu^+\mu^-$, where the ISR photon is reconstructed from the four-momenta of the two charged tracks \cite{photon_eff}. We use a control sample of $J/\psi \rightarrow \mu^+\mu^- (\gamma)$, in which one track is tagged with tight muon PID, to study the systematic uncertainties associated with the muon PID ($(4.0-5.73)\%$), the $\chi_{4C}^2$ requirement ($1.56\%$) and the $\cos\theta_{\mu}^{hel}$ requirement ($0.34\%$). The final value of the muon PID uncertainty also takes into account the fractions of events with one or two tracks identified as muons, which are obtained from the signal MC. The total additive and multiplicative uncertainties vary in the ranges $(0.502 - 0.767)$ events and $(5.95 - 8.96)\%$, respectively, depending on $m_{A^0}$. We search for a narrow resonance in steps of half the $m_{red}$ resolution at 2035 mass points using the cocktail MC sample. The systematic uncertainty is included in the final results by convolving the negative log-likelihood (NLL) versus branching fraction curve with a Gaussian distribution whose width equals the systematic uncertainty. The projected $90\%$ C.L. upper limits on the product branching fraction $\mathcal{B}(J/\psi \rightarrow \gamma A^0) \times \mathcal{B}(A^0 \rightarrow \mu^+\mu^-)$ are in the range $(2.8 - 386.5)\times 10^{-8}$ for $0.212 \le m_{A^0} \le 3.0$ GeV/$c^2$, depending on the $m_{A^0}$ mass point (Figure~\ref{exclusion:lim} (left)). The new BESIII expected limits represent an order-of-magnitude improvement over the previous measurement \cite{BESIII_Higgs0} (Figure~\ref{exclusion:lim} (right)) and can significantly constrain the parameters of new-physics models \cite{bra0,new_physics}. We will unblind the full $J/\psi$ data sample soon to produce the final results.
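The Gaussian convolution of the NLL curve described above can be sketched as follows. One common implementation convolves the likelihood $e^{-\mathrm{NLL}}$ with the Gaussian; the grid, function names and normalization below are our own illustrative choices, and the details of the BESIII procedure may differ:

```python
# Sketch: fold a systematic uncertainty into an NLL-vs-branching-fraction
# curve by convolving the likelihood with a Gaussian (illustrative only).
import math

def smear_nll(b_grid, nll, sigma_sys):
    """Return the smeared NLL on the same grid (up to an overall
    normalization constant, which does not affect limit setting)."""
    like = [math.exp(-v) for v in nll]
    smeared = []
    for b in b_grid:
        acc = sum(l * math.exp(-0.5 * ((b - bj) / sigma_sys) ** 2)
                  for bj, l in zip(b_grid, like))
        smeared.append(-math.log(acc))
    return smeared

# Toy example: a parabolic NLL with minimum at B = 0.5.
grid = [0.1 * i for i in range(11)]
nll = [100.0 * (b - 0.5) ** 2 for b in grid]
out = smear_nll(grid, nll, sigma_sys=0.1)
# The minimum stays at B = 0.5, but the curve becomes shallower, which
# translates into a slightly weaker (more conservative) upper limit.
assert min(range(len(out)), key=out.__getitem__) == 5
```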
\begin{figure}[htb] \centering \includegraphics[height=2.0in,width=2.9in]{Clulcocktail.eps} \includegraphics[height=2.0in,width=2.9in]{Clulcocktail_BES_oldvsnew.eps} \caption{The $90\%$ C.L. upper limits on the product branching fraction $\mathcal{B}(J/\psi \rightarrow \gamma A^0) \times \mathcal{B}(A^0 \rightarrow \mu^+\mu^-)$ as a function of $m_{A^0}$ for the new BESIII measurement, with and without the systematic uncertainty, shown in cyan and blue, respectively (left), and the old versus new BESIII measurements (right). The new BESIII exclusion limits are based on a cocktail MC sample containing the same statistics as expected for the full data sample. } \label{exclusion:lim} \end{figure} \section{Summary and conclusion} BESIII has searched for di-lepton decays of low-mass Higgs and dark bosons using the data samples collected at the $J/\psi$ and $\psi(3770)$ resonances, respectively. No evidence of any narrow resonance is found in the data-sets, and some of the most stringent exclusion limits are set. These exclusion limits can constrain a large fraction of the parameter space of new-physics models, including the NMSSM. All the results are preliminary. BESIII is also searching for new physics in other decay processes, and we look forward to more exciting results in the near future. \bigskip \bigskip \begin{center} \begin{large} \textbf{Acknowledgments} \end{large} \end{center} This work is supported in part by the CAS/SAFEA International Partnership Program for Creative Research Teams, CAS and IHEP grants for the Thousand/Hundred Talent programs, and the National Natural Science Foundation of China under Contracts Nos. 11175189 and 11125525.
\section{Introduction} The prototype of a scheme $Z$ with \emph{perfect obstruction theory} \cite{BF} is the zero locus of a section of a vector bundle $E$ on a smooth ambient variety $A$. We recall the construction in the next Section. \emph{All} perfect obstruction theories are \emph{locally} of this form. In the rare situations where this is also true \emph{globally}, the natural virtual cycle \cite{BF} pushes forward to what we might expect, namely the Euler class of the bundle: \beq{Euler} \iota_*[Z]^{\vir}\=c_r(E)\ \in\ A_{\vd}(A). \eeq Here $\iota\colon Z\into A$ is the inclusion, $r=\rk E,\ \vd=\dim A-r$ is the virtual dimension of the problem, and $[Z]^{\vir}$ lies in $A_{\vd}(Z)$ or $H_{2\!\vd}(Z)$. Equation \eqref{Euler} can help in computing integrals over the virtual cycle. Examples include the computation of the 27 lines on a cubic surface, the numbers of lines and conics on quintic threefolds, and the quantum hyperplane principle. A more relevant example for us is the reduced stable pair computations in \cite{KT}, carried out by writing the moduli space of stable pairs (and its reduced perfect obstruction theory) as the zero locus of a section of a tautological bundle over a certain Hilbert scheme.\medskip In this paper we study a generalisation of zero loci, namely degeneracy loci. We show these give another prototype of a perfect obstruction theory.\footnote{In fact we prove this by reducing to the model \eqref{model} in a bigger ambient space.} Again, when this can be done \emph{globally}, it allows us to express integrals over the virtual cycle in terms of integrals over the ambient space, via the Thom-Porteous formula. So fix a two-term complex of vector bundles $E_\bullet=\{E_0\rt\sigma E_1\}$ on a smooth ambient space $A$. Set $n=\dim A,\ r_i=\rk(E_i)$, and denote the $r$th degeneracy locus by $$ Z_r\ :=\ \big\{x\in A\colon\rk(\sigma|_x)\le r\big\}. $$ We work with the smallest $r$ for which $Z:=Z_r$ is nonempty.
Our first result is the following, made more precise in Theorem \ref{prop}. \begin{thm*} Assume $Z_{r-1}=\emptyset$. Then both $h^0(E_\bullet|_Z)=\ker(\sigma|_Z)$ and $h^1(E_\bullet|_Z)=\coker(\sigma|_Z)$ are locally free on $Z:=Z_r$, which inherits a perfect obstruction theory $$ \Big\{h^1(E_\bullet|_Z)^*\otimes h^0(E_\bullet|_Z)\To\Omega_A|\_Z\Big\}\To\LL_Z. $$ The push forward of the resulting virtual cycle $[Z]^{\vir}\in A_{n-k}(Z)$ to $A$ is given by the Thom-Porteous formula $$ \Delta\;_{r_1-r}^{r_0-r}\big(c(E_1-E_0)\big)\ \in\ A_{n-k}(A), $$ where $k=(r_0-r)(r_1-r)$ and $\Delta^a_b(c):=\det\!\big(c_{b+j-i}\big)_{1\le i,j\le a}\,.$ \end{thm*} \smallskip\noindent\textbf{Nested Hilbert schemes.} Our main application is to the punctual Hilbert schemes of \emph{nested} subschemes of a fixed projective surface $S$. Full details and notation will be described later; for now, for simplicity, we restrict attention to the simplest case of the 2-step nested punctual Hilbert scheme $$ S^{[n_1,n_2]}\ :=\ \big\{I_1\subseteq I_2\subseteq\cO_S\ \colon\ \mathrm{length}\;(\cO_S/I_i)=n_i\big\}. $$ Now $S^{[n_1,n_2]}$ lies in the ambient space $S^{[n_1]}\times S^{[n_2]}$ as the locus of points $(I_1,I_2)$ for which there is a nonzero map, i.e. $\Hom_S(I_1,I_2)\ne0$. Thus it can be seen as the degeneracy locus of the complex of vector bundles \beq{fortnite} R\hom_\pi(\cI_1,\cI_2) \quad\mathrm{over}\quad S^{[n_1]}\times S^{[n_2]} \eeq which, when restricted to the point $(I_1,I_2)$, computes $\Ext^*_S(I_1,I_2)$. When $H^{0,2}(S)=0$ this complex is 2-term, so we can apply the above theory. The resulting perfect obstruction theory on $S^{[n_1,n_2]}$ agrees with that of \cite{GSY1}. In turn this arises in local DT theory \cite{GSY2}, so we can express DT integrals in terms of Chern classes of tautological sheaves over $S^{[n_1]}\times S^{[n_2]}$. When $H^{0,1}(S)\ne0$ the result is zero; when $H^{0,2}(S)\ne0$ the theory does not apply.
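As a quick illustration of the determinant $\Delta^a_b(c)=\det\!\big(c_{b+j-i}\big)$ appearing in the Theorem above, the following stand-alone sketch (our own illustrative code, not part of the paper) evaluates it for a rank 3 bundle with total Chern class $(1+t)^3$. The $1\times1$ case is a single Chern class, which with $E_0=\cO_A$ (so $r_0=1$, $r=0$) is the zero-locus Euler class; the $a=2$ case unwinds to the classical formula $c_b^2-c_{b-1}c_{b+1}$:

```python
# Numerical check of the Thom-Porteous determinant Delta^a_b(c) =
# det(c_{b+j-i}) from the Theorem (illustrative, not part of the paper).
def delta(a, b, c):
    """Delta^a_b for a list c = [c_0, c_1, ...] of Chern numbers,
    with c_k = 0 outside the given range; Laplace expansion on row 0."""
    coeff = lambda k: c[k] if 0 <= k < len(c) else 0
    M = [[coeff(b + j - i) for j in range(a)] for i in range(a)]
    def det(m):
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j]
                   * det([row[:j] + row[j + 1:] for row in m[1:]])
                   for j in range(len(m)))
    return det(M)

# Rank-3 bundle with c(E) = (1 + t)^3 = 1 + 3t + 3t^2 + t^3:
c = [1, 3, 3, 1]
# a = 1 is a single Chern class (the zero-locus case): Delta^1_2 = c_2.
print(delta(1, 2, c))  # 3
# a = 2 gives the 2x2 formula c_2^2 - c_1 c_3 = 9 - 3.
print(delta(2, 2, c))  # 6
```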
So for a general projective surface $S$ we modify the complex $\Ext^*_S(I_1,I_2)$ with $H^1(\cO_S)$ and $H^2(\cO_S)$ terms. The modification is canonical over $S^{[n_1,n_2]}$, recovering the \emph{reduced} version of the local DT deformation theory that arises in the $SU(r)$ Vafa-Witten theory of $S$ \cite{TT1}. \smallskip \noindent\textbf{Splitting trick.} We would like to extend this modification over the rest of $S^{[n_1]}\times S^{[n_2]}$, so we can apply the Thom-Porteous formula. Such modifications exist locally but \emph{not} globally, so in Section \ref{splitpr} we use a trick reminiscent of the splitting principle in topology, pulling back to a certain bundle over $S^{[n_1]}\times S^{[n_2]}$ where there \emph{is} a canonical modification. This allows us to prove the following (whose notation will be explained more fully in Sections \ref{00}--\ref{kstep}, in particular \eqref{trayce}). \begin{thm*} Let $S$ be any smooth projective surface. The $k$-step nested Hilbert scheme $S^{[n_1,\ldots,n_k]}$ can be seen as an intersection of degeneracy loci after pulling back to an affine bundle over $S^{[n_1]}\times\cdots\times S^{[n_k]}$. The resulting perfect obstruction theory $F\udot\to\LL_{S^{[n_1,\ldots,n_k]}}$ has virtual tangent bundle $$ (F\udot)^\vee\ \cong\ \Big\{T_{S^{[n_1]}}\oplus\cdots\oplus T_{S^{[n_k]}}\To\ext^1_p(\cI_1,\cI_2)\_0\oplus\cdots\oplus\ext^1_p(\cI_{k-1},\cI_k)\_0\Big\}, $$ the same as the one in Vafa-Witten theory \cite{TT1} or ``reduced local DT theory'' \cite{GSY1, GSY2}. The virtual cycle $$ \big[S^{[n_1,\ldots,n_k]}\big]^{\vir}\,\in\,A_{n_1+n_k}\big(S^{[n_1,\ldots,n_k]}\big) $$ pushes forward to \beq{CO} c_{n_1+n_2}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big)\cup\cdots\cup c_{n_{k-1}+n_k}\big(R\hom_{\pi}(\cI_{k-1},\cI_k)[1]\big) \eeq in $A_{n_1+n_k}\big(S^{[n_1]}\times\cdots\times S^{[n_k]}\big)$. \end{thm*} The formula \eqref{CO} for the pushforward of the virtual class was conjectured in \cite{GSY1} for $k=2$ and proved for toric surfaces.
It was also shown to be true for more general surfaces when integrated against some natural classes. The classes $c_{n_{i-1}+n_i}\big(R\hom_{\pi}(\cI_{i-1},\cI_i)[1]\big)$, considered as maps $H^*(S^{[n_{i-1}]})\to H^{*+2n_i-2n_{i-1}}(S^{[n_i]})$, are called Carlsson-Okounkov operators. Carlsson-Okounkov \cite{CO} calculate them in terms of Grojnowski-Nakajima operators, and prove vanishing of the higher Chern classes: \beq{vanz} c_{n_1+n_2+i}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big)\=0, \qquad i>0, \eeq by showing the left hand side is a universal expression in Chern numbers of $S$, and that this universal expression vanishes for toric surfaces by a localisation computation. This gives enough relations to prove the universal expression is in fact zero. In Section \ref{COvan} we reprove the vanishing \eqref{vanz} rather easily and geometrically using the Thom-Porteous formula, as well as the following generalisation. \begin{thm*} Let $S$ be any smooth projective surface. For any curve class $\beta\in H_2(S,\Z)$, any Poincar\'e line bundle $\cL\to S\times\Pic_{\beta}(S)$, and any $i>0$, $$ c_{n_1+n_2+i}\big(\!\;R\pi_*\;\cL-R\hom_{\pi}(\cI_1,\cI_2\otimes\cL)\big)\=0 \quad\mathrm{on}\ S^{[n_1]}\times S^{[n_2]}\times\Pic_\beta(S). $$ \end{thm*} \medskip \noindent\textbf{The other degeneracy loci.} In the companion paper \cite{GT2} we work with \emph{all} the degeneracy loci $Z_k$. These do not generally admit perfect obstruction theories when $k>r$. However there are natural spaces $\widetilde Z_k\to Z_k$ dominating them which are actually resolutions of their singularities in the transverse case (when all the $Z_k$ have the correct codimension). For this reason we call the $\widetilde Z_k$ ``\emph{virtual resolutions}''.
Though they are singular in general, we show they admit natural perfect obstruction theories and virtual cycles whose pushforwards we can again describe by Chern class formulae.\footnote{Since $\widetilde Z_r\cong Z_r$ the constructions in \cite{GT2} and this paper coincide when $k=r$.} In this paper the natural application is to nested \emph{punctual} Hilbert schemes of a smooth surface $S$. In \cite{GT2} the natural application is to nested Hilbert schemes of both points \emph{and curves} in $S$. Fundamentally the difference is the following. If $I_1,I_2\subset\cO_S$ are ideal sheaves of 0-dimensional subschemes of $S$, then \beq{hom12} \Hom(I_1,I_2) \eeq either vanishes, or --- for $I_1\subset I_2$ in the nested Hilbert scheme --- is at most $\C$. Hence $S^{[n_1,n_2]}$ is the degeneracy locus of the complex \eqref{fortnite}. Conversely, when $I_1$ or $I_2$ have divisorial components, \eqref{hom12} can become arbitrarily big, and different elements correspond to different subschemes of $S$. (In the case $I_1=\cO_S(-D)$ and $I_2=\cO_S$, elements correspond --- up to scale --- to divisors in the same linear system as the divisor $D\subset S$.) Therefore the corresponding nested Hilbert scheme \emph{dominates} the degeneracy locus of the complex \eqref{fortnite} but need not equal it. In \cite{GT2} we show it is naturally a virtual resolution of the type $\widetilde Z_k$. \smallskip\noindent\textbf{Acknowledgements.} Artan Sheshmani was part of a good portion of this project, but decided to concentrate on developing virtual fundamental classes in higher rank situations \cite{SY}. We thank him for many useful conversations, as well as Davesh Maulik, Andrei Negut and Andrei Okounkov. A.G. acknowledges partial support from NSF grant DMS-1406788.
There are some constructions in the literature which are closely related to ours; see for instance \cite[Equation 6.2]{Ne1} for $S=\PP^2$, \cite[Section 2.3]{Ne2} for some special types of nested sheaves, and \cite[Equation (2.20)]{Ne2} for their perfect obstruction theory. Just before posting this paper we became aware of the old notes \cite{MO}, which describe the local DT obstruction theory \cite{GSY2} on the nested Hilbert scheme, and a K-theoretic version of the Carlsson-Okounkov operator on toric surfaces. With hindsight it seems that Okounkov et al probably knew of some form of relationship between the virtual class and the Thom-Porteous formula for toric surfaces with $H^{\ge1}(\cO_S)=0$, even if they're too modest to admit it now. \medskip\noindent\textbf{Notation.} Given a map $f\colon X\to Y$, we often use the same letter $f$ to denote its basechange by any map $Z\to Y$, i.e. $f\colon X\times_YZ\to Z$. We also sometimes suppress pullback maps $f^*$ on sheaves. \section{Zero loci} \label{zl} We start by recalling the standard construction of a perfect obstruction theory on the zero scheme $Z$ of a section $\sigma$ of a vector bundle $E$ over a smooth ambient space $A$: \beq{model} \xymatrix@R=15pt@C=0pt{ & E\dto \\ Z=\sigma^{-1}(0)\ \subset & A.\ar@/^{-2ex}/[u]_\sigma} \eeq On $Z$ the derivative of this diagram gives \beq{model2} \xymatrix@R=18pt{ E^*|_Z \ar[d]_\sigma\ar[r]^{d\sigma|_Z}& \Omega_A|_Z\ar@{=}[d] \\ I/I^2 \ar[r]^d& \Omega_A|_Z,} \eeq where $I\subset\cO_A$ is the ideal sheaf of $Z$ generated by $\sigma$.
The bottom row is a representative of the truncated cotangent complex $\LL_Z$ of $Z$; denoting the two-term locally free complex on the top row by $F\udot$ we get a morphism\footnote{\eqref{model} also induces a natural map from $F\udot$ to the \emph{full} cotangent complex of $Z$ \cite[Section 6]{BF}, but we shall not need this.} \beq{pot} F\udot\To\LL_Z \eeq in $D(\mathrm{Coh}\,Z)$ which induces an isomorphism on $0$th cohomology sheaves $h^0$ and a surjection on $h^{-1}$. This data is called a \emph{perfect obstruction theory} \cite{BF} on $Z$, and induces a virtual cycle $$ [Z]^{\vir}\ \in\ A_{\vd}(Z)\To H_{2\!\vd}(Z) $$ satisfying natural properties. Here $H$ denotes Borel-Moore homology, and $\vd:=\dim A-\rk E$ is the \emph{virtual dimension} of the perfect obstruction theory. \section{Degeneracy loci}\label{degloc} We work on a smooth complex quasi-projective variety $A$ with a map $$ E_0\rt{\sigma}E_1 $$ between vector bundles of ranks $r_0$ and $r_1$. We denote by \beq{Zkay} Z_k\ \subset\ A \eeq the degeneracy locus where $\rk(\sigma)$ drops to $\le k$. This has a scheme structure defined by the vanishing of the $(k+1)\times(k+1)$ minors of $\sigma$, i.e. of \beq{wwedge} \mbox{\Large $\wedge$}^{k+1}\sigma\,\colon\,\mbox{\Large $\wedge$}^{k+1}E_0\To\mbox{\Large $\wedge$}^{k+1}E_1. \eeq The $Z_k$ can be characterised by the rank of the cokernel of $\sigma$ over them \cite[Section 20.2]{Ei}. In Section \ref{arb} we will need a characterisation in terms of the kernel. Though this does not basechange well, it works for the smallest $Z_k$. That is, let $r$ denote the minimal rank of $\sigma$, so that $Z_{r-1}=\emptyset$, and set $Z:=Z_r$. This is the largest subscheme of $A$ on which $\ker\sigma|_Z$ is locally free of rank $r_0-r$: \begin{lem} \label{h0base} For a map of schemes $f\colon T\to A$, the following are equivalent. 
\begin{enumerate} \item $f$ factors through $Z=Z_r\subset A$, \item $\ker\big(f^*\sigma\colon f^*E_0\to f^*E_1\big)$ is a rank $r_0-r$ subbundle of $f^*E_0$, \item $\ker\big(f^*\sigma\colon f^*E_0\to f^*E_1\big)$ has a locally free subsheaf of rank $r_0-r$. \end{enumerate} \end{lem} \begin{proof} If $f$ factors through $Z$ then $\mbox{\Large $\wedge$}^{r+1}f^*\sigma\cong f^*\mbox{\Large $\wedge$}^{r+1}\sigma|_Z\equiv0$. Since $Z_{r-1}=\emptyset$ it follows from \cite[Proposition 20.8]{Ei} that $\coker f^*\sigma$ is locally free of rank $r_1-r$. Thus $\ker f^*\sigma$ is a rank $r_0-r$ subbundle of $f^*E_0$. This proves $(1)\Longrightarrow(2)\Longrightarrow(3)$. For $(3)\Longrightarrow(1)$, we suppose the kernel $K$ of $f^*E_0\to f^*E_1$ contains a locally free subsheaf of rank $r_0-r$. Therefore the rank of $f^*\sigma$ on the generic point of $T$ is $\le r$, and thus in fact equal to $r$ since we are assuming it drops no lower. In particular, $\coker(f^*\sigma)$ is a rank $r_1-r$ sheaf. By lower semi-continuity of rank, $f^*\sigma|_t$ is of rank $\le r$ for any closed point $t\in T$, so, by our assumption on $r$ again, it is equal to $r$. Combined with the exact sequence \beq{kerr} f^*E_0|\_t\rt{\sigma|_t}f^*E_1|\_t\To(\coker f^*\sigma)|\_t\To0, \eeq i.e. the fact that $\coker(f^*\sigma|_t)=(\coker f^*\sigma)|_t$, this shows that $(\coker f^*\sigma)|_t$ has dimension $r_1-r$ for every closed point $t$. Therefore $\coker f^*\sigma$ is locally free of rank $r_1-r$ by the Nakayama lemma. This implies that $\ker f^*\sigma$ is a rank $r_0-r$ subbundle (rather than just a locally free subsheaf) of $f^*E_0$. In particular $f^*E_0/K$ is locally free of rank $r$, so $\mbox{\Large $\wedge$}^{r+1}(f^*E_0/K)=0$.
But $$ f^*\mbox{\Large $\wedge$}^{r+1}\sigma\=\mbox{\Large $\wedge$}^{r+1}f^*\sigma\,\colon\ \mbox{\Large $\wedge$}^{r+1}f^*E_0\To\mbox{\Large $\wedge$}^{r+1}f^*E_1 $$ factors through $\mbox{\Large $\wedge$}^{r+1}(f^*E_0/K)$, so it is also zero. That is, $f$ factors through the zero scheme $Z\left(\mbox{\Large $\wedge$}^{r+1}\sigma\right)=Z_r$ of $\mbox{\Large $\wedge$}^{r+1}\sigma$. \end{proof} So $\sigma|_Z$ has rank precisely $r$, and its kernel $h^0:=h^0(E_\bullet|_Z)$ and cokernel $h^1:=h^1(E_\bullet|_Z)$ are \emph{vector bundles} on $Z$ of rank $r_0-r$ and $r_1-r$ respectively, \beq{h*} 0\To h^0\To E_0|_Z\rt{\sigma|_Z}E_1|_Z\To h^1\To0. \eeq For instance if $r=r_0-1$ then $\sigma$ is generically injective (and globally injective as a map of coherent sheaves) and $Z$ is the locus where it fails to be injective as a map of bundles. Its kernel is a line bundle over $Z$. If $E_0=\cO_A$ then $Z$ is the zero locus of $\sigma$ and we are back in the setting of Section \ref{zl}. \begin{thm} \label{prop} The degeneracy locus $Z=Z_r$ inherits a 2-term perfect obstruction theory $$ \big\{(h^1)^*\otimes h^0\To\Omega_A|_Z\big\}\To\LL_Z. $$ The push forward of the resulting virtual cycle $[Z]^{\vir}\in A_{n-k}(Z)$ to $A$ is given by the Thom-Porteous formula $$ \Delta\;_{r_1-r}^{r_0-r}\big(c(E_1-E_0)\big)\ \in\ A_{n-k}(A). $$ Here $n=\dim A,\ k=(r_0-r)(r_1-r)$ and $\Delta\;^a_b(c):=\det\!\big(c_{b+j-i}\big)_{1\le i,j\le a}$\,. \end{thm} \begin{proof} We work on the relative Grassmannian of $(r_0-r)$-dimensional subspaces of $E_0$, $$ \Gr\,:=\,\Gr(r_0-r,E_0)\rt{q}A $$ with universal subbundle $U\into q^*E_0$. Composing with $q^*\sigma$ gives a section \beq{tilda} \widetilde\sigma\ \in\ \Gamma(U^*\otimes q^*E_1). \eeq \textbf{Claim 1.} The zero locus $Z(\widetilde\sigma)\subset\Gr$ is isomorphic to $Z\subset A$ under the restriction $\overline{q}\colon Z(\widetilde\sigma)\to A$ of the projection $q\colon\!\Gr\to A$. 
\smallskip At the level of closed points this is obvious: for $x\in A$ \begin{align*} x\in Z&\iff\rk(\sigma|_x)=r \\ &\iff\rk(\ker(\sigma_x))=r_0-r \\ &\iff(E_0)|_x\mathrm{\ has\ a\ unique\ }(r_0-r)\mathrm{-dimensional\ subspace} \\ &\hspace{36mm}U_x =\ker(\sigma_x)\mathrm{\ on\ which\ }\sigma|_x\mathrm{\ vanishes}\\ &\iff U_x\ \mathrm{is\ the\ unique\ point\ of\ }Z(\widetilde\sigma)\cap q^{-1}\{x\}. \end{align*} So $\overline{q}$ maps $Z(\widetilde\sigma)$ bijectively to $Z\subset A$. To see that it does so scheme-theoretically, note that, by construction, the composition $$ U\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} q^*E_0\rt{q^*\sigma}q^*E_1 $$ is zero over $Z(\widetilde\sigma)$, so $\ker(\overline{q}^*\sigma)$ contains a locally free sheaf $U|_{Z(\widetilde\sigma)}$ of rank $r_0-r$. Thus $\overline{q}$ factors through $Z\subset A$ by Lemma \ref{h0base}. By Lemma \ref{h0base} again, $\ker(\sigma|_Z)$ is a rank $r_0-r$ subbundle of $E_0$. Its classifying map $Z\to\Gr(r_0-r,E_0)$ has image in $Z(\widetilde\sigma)$ and clearly defines a right inverse to $\overline{q}\colon Z(\widetilde\sigma)\to Z$. So to prove that $\overline{q}$ is an isomorphism to $Z$ we need only show that the inverse image $\overline q^{\,-1}\{x\}$ of any closed point $x\in Z$ is a closed point of $Z(\widetilde\sigma)$. Given a rank $r$ linear map $\Sigma\colon V\to W$ between vector spaces of dimensions $r_0,r_1$, an elementary calculation shows that the composition $$ U\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} V\otimes\cO\rt{\Sigma}W\otimes\cO $$ on the Grassmannian $\Gr(r_0-r,V)$ cuts out the \emph{reduced} point $\big[\!\;\ker\Sigma\subset V\big]\in\Gr(r_0-r,V)$. Applying this to $\Sigma=\sigma|_x$ proves Claim 1. \smallskip \noindent\textbf{Perfect obstruction theory.} Since $Z\cong Z(\widetilde\sigma)$ is cut out of $\Gr$ by $\widetilde\sigma\in\Gamma(U^*\otimes q^*E_1)$, it inherits the standard perfect obstruction theory \eqref{model2}, i.e.
\beq{mod3} U\otimes q^*E_1^*|_{Z(\widetilde\sigma)}\rt{d\;\widetilde\sigma|_{Z(\widetilde\sigma)}}\Omega_{\Gr}|_{Z(\widetilde\sigma)} \eeq mapping to $\LL_{Z(\widetilde\sigma)}=\LL_Z$. Now \eqref{mod3} fits into a diagram \beq{dg} \xymatrix@R=18pt{ U\big|_Z\otimes(h^1)^* \ar[d]\ar[r] & q^*\Omega_A\big|_{Z(\widetilde\sigma)} \ar[d] \\ U\otimes E_1^*\big|_Z \ar[d]_{\id_U\otimes\!}^{\!\sigma^*}\ar[r]^{d\;\widetilde\sigma|_{Z(\widetilde\sigma)}} & \Omega_{\Gr}\big|_{Z(\widetilde\sigma)} \ar[d] \\ U|_Z\otimes\big(E_0|_Z\big/\ker\sigma\big)^* \ar@{=}[r]& \Omega_{\Gr/A}\big|_{Z(\widetilde\sigma)\,,}} \eeq with left hand column the short exact sequence $U|_Z\otimes\,$\eqref{h*}${}^*$, and right hand column the natural short exact sequence of the fibration $\Gr\to A$. The bottom equality is dual to the standard identification $T_{\Gr/A}\cong\hom(U,E_0/U)$. \medskip Assuming \eqref{dg} is commutative for now, we can consider it as providing a quasi-isomorphism between the top row and the middle row (which is \eqref{mod3}). Hence the perfect obstruction theory \eqref{mod3} is $$ h^0\otimes(h^1)^*\To\Omega_A|_Z, $$ as claimed. Just as in \eqref{Euler}, the pushforward of the resulting virtual cycle to $\Gr$ is the Euler class $c_{(r_0-r)r_1}(U^*\otimes q^*E_1)$. Pushing this down to $A$ gives the pushforward of $[Z]^{\vir}$ to $A$, by the commutativity of the diagram $$ \xymatrix@R=18pt{ Z(\widetilde\sigma) \INTO\ar@{=}[d] & \Gr \ar[d]^ q \\ Z \INTO & A.\!} $$ But pushing forward $c_{(r_0-r)r_1}(U^*\otimes q^*E_1)$ to $A$ gives $\Delta\;_{r_1-r}^{r_0-r}\big(c(E_1-E_0)\big)$ by \cite[Theorem 14.4]{Fu}. So we are left to prove \medskip \noindent\textbf{Claim 2.} The diagram \eqref{dg} is commutative. \smallskip We need only show that the lower square of \eqref{dg} commutes; the upper one is then induced from it. Let $\Gr\_Z:=\Gr\times\_{\!A}\;Z$ and observe $Z(\widetilde\sigma)\subset\Gr\_Z$, with ideal sheaf $I$ say.
We let $$ 2Z\ \ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}\ \Gr_Z $$ be its scheme-theoretic doubling with ideal sheaf $I^2$. Let $p:= q|_{2Z}$ be the induced projection $2Z\to Z$ and consider the maps \beq{ars} U|\_{2Z}\ \ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}\ (q^*E_0)|\_{2Z}\,\cong\,p^*(E_0|_Z) \To p^*(E_0/U|_Z)\rt{\sigma|_Z}p^*(E_1|_Z). \eeq The final arrow is constructed from $\sigma|_Z\colon E_0|_Z\to E_1|_Z$ by recalling that $U|_Z\cong\ker(\sigma|_Z)$. The composition of the first two arrows of \eqref{ars} is a section of $U^*|\_{2Z}\otimes p^*(E_0/U|_Z)$ on $2Z$ which vanishes on $Z$. Since the ideal of $Z\subset2Z$ is $\Omega_{\Gr\_{Z\!}/Z}$ it is a section of $(U|_Z)^*\otimes(E_0/U|_Z)\otimes\Omega_{\Gr\_{Z\!}/Z}$. This is precisely (the adjoint of) the standard description of the isomorphism $$ U|\_Z\otimes(E_0/U)|_Z^*\ \cong\ \Omega_{\Gr\_{Z\!}/Z}\,, $$ i.e. the bottom row of \eqref{dg}. Since $p^*(E_1|_Z)=(q^*E_1)|_{2Z}$, the composition of all the arrows in \eqref{ars} is just $\widetilde\sigma|_{2Z}$. It vanishes on $Z$, defining the section $[d\;\widetilde\sigma|_Z]$ of $$ (U|_Z)^*\otimes E_1|_Z\otimes I/I^2\ \cong\ \hom\big(U\otimes E_1^*|\_Z,\Omega_{\Gr/A}|\_Z\big) $$ which defines the central arrow of \eqref{dg}. Thus \eqref{dg} commutes. \end{proof} \subsection{Higher Thom-Porteous formula} When $r_0-r=1$, so the sheaf $h^0$ is a line bundle on the degeneracy locus $Z$, the following ``higher'' Thom-Porteous formula will be useful later. Let $\iota\colon Z\into A$ denote the inclusion. \begin{prop} \label{simplah} If $r_0-r=1$ then the Thom-Porteous formula becomes $$ \iota_*[Z]^{\vir}\=c_{r_1-r_0+1}(E_1-E_0) $$ in $A_{n+r-r_1}(A)$, and for any $i\ge0$ we have the following extension to higher Chern classes: \beq{higher} \iota_*\Big(c_1\big((h^0)^*\big)^i\cap[Z]^{\vir}\Big)\=c_{r_1-r_0+1+i\;}(E_1-E_0).
\eeq \end{prop} \begin{proof} The first part follows from the simplification $$ \Delta\;_b^a\big(c(\ \cdot\ )\big)\= c\_b(\ \cdot\ ) $$ when $a=r_0-r=1$. For the second part, recall from \eqref{tilda} that $Z$ is cut out of $\PP(E_0)\rt{q}A$ by the vanishing of the composition $$ \cO_{\PP(E_0)}(-1)\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow} q^*E_0\rt{q^*\sigma}q^*E_1. $$ Moreover, over this copy of $Z$, we see that the kernel $h^0$ of $E_0\to E_1$ is $\cO_{\PP(E_0)}(-1)$. Therefore, denoting Segre classes by $s_i$, we have \begin{align*} \iota_*\Big(c_1\big((h^0)^*\big)^i&\cap\big[Z\big]^{\vir}\Big)\=q_*\!\left(c_1\big(\cO_{\PP(E_0)}(1)\big)^i\cup c_{r\_1}(q^*E_1(1))\right) \nonumber \\ &\=q_*\left(c_1\big(\cO_{\PP(E_0)}(1)\big)^i\cup \sum_{j=0}^{r_1}c_j(q^*E_1)\cup c_1\big(\cO_{\PP(E_0)}(1)\big)^{r_1-j}\right) \nonumber \\ &\=\sum_{j=0}^{r_1}q_*\left(c_1\big(\cO_{\PP(E_0)}(1)\big)^{i+r_1-j}\cup q^*c_j(E_1)\right) \nonumber \\ &\=\sum_{j=0}^{r_1}s_{i+r_1-j-r_0+1}(E_0)\cap c_j(E_1) \nonumber \\ &\=c_{r_1-r_0+i+1}(E_1-E_0). \qedhere \end{align*} \end{proof} Working throughout this Section with $\sigma^*\colon E_1^*\to E_0^*$ instead of $\sigma\colon E_0\to E_1$ gives the same results, up to some reindexing of notation. \section{Jumping loci of direct image sheaves} Suppose $f\colon X\to Y$ is a morphism of projective schemes, with $Y$ smooth. Fix either a coherent sheaf $\cF$ on $X$ which is \emph{flat over $Y$}, or a perfect complex $\cF$ on $X$ and assume that \emph{$X$ is flat over $Y$.} We assume that the cohomologies of $\cF$ on any closed fibre $X_y,\ y\in Y$, are concentrated in only two adjacent degrees $i,\,i+1$. Let $a$ denote the maximal dimension of $h^i(X_y,\cF_y)$ as $y$ varies throughout $Y$. That is, we assume there exists $i\in\N$ such that \begin{itemize} \item $h^j(X_y,\cF_y)\=0 \quad \forall j\not\in\{i,i+1\},\ \forall y\in Y$, \item $h^i(X_y,\cF_y)\ \le\ a \quad \forall y\in Y$.
\end{itemize} It follows that $h^{i+1}(X_y,\cF_y)$ has maximal dimension $b:=a-(-1)^i\chi(\cF_y)$. Now $Rf_*\;\cF$ is a perfect complex on $Y$ which, by basechange and the Nakayama lemma, can be trimmed to be a 2-term complex of locally free sheaves $$ Rf_*\,\cF\ \simeq\ \big\{E_i\to E_{i+1}\big\} $$ in degrees $i$ and $i+1$. On restriction to the maximal degeneracy locus $$ Z_a\ :=\ \big\{y\in Y\colon h^i(X_y,\cF_y)=a\big\}\ \subset\ Y $$ it has kernel of rank $a$. (Note this labelling convention differs slightly from \eqref{Zkay}.) Let $X_Z:=X\times_YZ$ and $\bar f:=f|_{X_Z}$. By \eqref{wwedge} and Theorem \ref{prop} we deduce the following. \begin{prop} \label{prop2} The maximal jumping locus $Z=Z_a$ has a natural scheme structure and perfect obstruction theory $$ \Big\{\big(R^{i+1}\bar f_*\;\cF\big)^*\otimes R^i\bar f_*\;\cF\To\Omega_Y|_Z\Big\}\To\LL_Z, $$ with the $R^j\bar f_*\;\cF$ locally free. The resulting virtual cycle $$ [Z]^{\vir}\,\in\,A_d(Z), \quad d:=\dim Y-ab, $$ when pushed forward to $Y$, is given by $$ \Delta\;_b^a\big(c(Rf_*\cF[i+1])\big)\ \in\ A_d(Y). \vspace{-6mm} $$ \ \hfill$\square$ \end{prop} The result can also be applied to jump loci of relative Ext sheaves (the cohomology sheaves of $R\hom_f(A,B):=Rf_*R\hom(A,B)$) by setting $\cF:=R\hom(A,B)$. We shall use this on punctual Hilbert schemes next. \section{Nested Hilbert schemes on surfaces with $b_1=0=p_g$}\label{00} Given positive integers $n_1\ge n_2\ge\cdots\ge n_k$, the $k$-step nested punctual Hilbert scheme of $S$ is, as a set, \beqa S^{[n_1,n_2,\ldots,n_k]} &\!:=& \big\{S\supseteq Z_1\supseteq Z_2\supseteq\cdots\supseteq Z_k\ \colon\ \mathrm{length}(Z_i)=n_i\big\} \\ &=& \big\{I_1\subseteq I_2\subseteq\cdots\subseteq I_k\subset\cO_S \colon\ \mathrm{length}(\cO_S/I_i)=n_i\big\}. 
\eeqa As a scheme it represents the functor which takes any base scheme $B$ to the set of ideals $\cI_1\subseteq\cI_2\subseteq\cdots\subseteq\cI_k\subset\cO_{S\times B}$, flat over $B$, such that the restriction of $\cI_i$ to any closed fibre $S\times\{b\}$ has colength $n_i$. For simplicity we restrict to $k=2$ for now; we will return to general $k$ in Section \ref{kstep}. \medskip Let $S$ be a smooth complex projective surface with (for now) $h^{0,1}(S)=0=h^{0,2}(S)$, and fix integers $n_1\ge n_2$. Over $$ S^{[n_1]}\times S^{[n_2]}\times S\rt{\pi}S^{[n_1]}\times S^{[n_2]} $$ we have the two universal subschemes $\cZ_1,\,\cZ_2$ and their ideal sheaves $\cI_1,\,\cI_2$. We will apply Proposition \ref{prop2} to the perfect complex $$ R\hom_\pi(\cI_1,\cI_2)\ :=\ R\pi_*R\hom(\cI_1,\cI_2) $$ on $S^{[n_1]}\times S^{[n_2]}$. Over the closed point $(I_1,I_2)\in S^{[n_1]}\times S^{[n_2]}$ we have \beq{uro} \Ext^i(I_1,I_2)\=0, \qquad i\ne0,1, \eeq by Serre duality. Moreover \beq{homjp} \Hom(I_1,I_2)\=\left\{\!\! \begin{array}{ll} 0 & Z_1\not\supseteq Z_2, \\ \C & Z_1\supseteq Z_2 \end{array} \right. \eeq is generically zero and jumps by 1 (but never more) over the nested Hilbert scheme \beq{nst} S^{[n_1,n_2]}\ :=\ \big\{Z_2\subseteq Z_1\subset S,\ \mathrm{length}(Z_i)=n_i\big\}, \eeq at least set-theoretically. Despite our usual notational conventions (to denote $\pi$ basechanged by $S^{[n_1,n_2]}\into S^{[n_1]}\times S^{[n_2]}$ also by $\pi$) we reserve $$ p\colon S^{[n_1,n_2]}\times S\To S^{[n_1,n_2]} $$ for the obvious projection. Since $\cI_1,\cI_2$ are flat over $S^{[n_1]}\times S^{[n_2]}$ they restrict to ideal sheaves over $S^{[n_1,n_2]}$; we denote them by the same letters. 
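For later reference we record the Euler characteristic count behind the numerology of this Section; this is a standard computation, sketched here. In K-theory $[\cI_i]=[\cO]-[\cO_{\cZ_i}]$, so for any closed point $(I_1,I_2)\in S^{[n_1]}\times S^{[n_2]}$ we have
$$
\chi(I_1,I_2)\=\chi(\cO_S,\cO_S)-\chi(\cO_S,\cO_{Z_2})-\chi(\cO_{Z_1},\cO_S)+\chi(\cO_{Z_1},\cO_{Z_2})\=\chi(\cO_S)-n_1-n_2,
$$
since $\chi(\cO_S,\cO_{Z_2})=n_2$, Serre duality gives $\chi(\cO_{Z_1},\cO_S)=\chi(\cO_S,\cO_{Z_1}\otimes K_S)=n_1$, and $\chi(\cO_{Z_1},\cO_{Z_2})=0$ because it depends only on the K-theory classes $n_i[\cO_{\mathrm{pt}}]$ and vanishes when $Z_1$ and $Z_2$ are disjoint. Thus $R\hom_\pi(\cI_1,\cI_2)[1]$ has rank $n_1+n_2-\chi(\cO_S)=n_1+n_2-1$ here, one less than the degree of the Chern class appearing below, which is exactly the situation of Proposition \ref{simplah}.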
\begin{prop} \label{nest0} If $h^{0,1}(S)=0=h^{0,2}(S)$ then the 2-step nested Hilbert scheme $S^{[n_1,n_2]}$ carries a perfect obstruction theory \beq{nhpot} \Big(\big(\ext^1_p(\cI_1,\cI_2)\big)^*\To\Omega_{S^{[n_1]}\times S^{[n_2]}}\big|_{S^{[n_1,n_2]}}\Big)\To\LL_{S^{[n_1,n_2]}} \eeq and virtual cycle $$ \big[S^{[n_1,n_2]}\big]^{\vir}\,\in\,A_{n_1+n_2}\big(S^{[n_1,n_2]}\big). $$ Its pushforward to $S^{[n_1]}\times S^{[n_2]}$ is given by \beq{pfwd} c_{n_1+n_2}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big)\ \in\ A_{n_1+n_2}\big(S^{[n_1]}\times S^{[n_2]}\big). \eeq \end{prop} \begin{proof} By \eqref{homjp} we may apply Proposition \ref{prop2} to the degeneracy locus $Z$ of $R\hom_\pi(\cI_1,\cI_2)$ by setting $\cF=R\hom(\cI_1,\cI_2)$. By \eqref{uro} and the Nakayama lemma $\cF$ is quasi-isomorphic to a 2-term complex of vector bundles. As sets $Z\cong S^{[n_1,n_2]}$ by \eqref{homjp}. Over the degeneracy locus $Z$ we have the exact sequence \eqref{h*} with $h^0$ a rank one locally free sheaf, i.e. a line bundle $L$. Thus over $Z\times S$ we obtain a map $$ \cI_1\otimes p^*L\To \cI_2 $$ which is nonzero on any fibre of $p$. Taking determinants or double duals shows that $L$ is trivial, $h^0\cong\cO_{S^{[n_1,n_2]}}$, and we get a map $\cI_1\to \cI_2$ whose classifying map gives a morphism $Z\to S^{[n_1,n_2]}$. Conversely, since $p_*\;\hom(\cI_1,\cI_2)=\cO$ over $S^{[n_1,n_2]}$, the latter lies in the degeneracy locus of $R\hom_\pi(\cI_1,\cI_2)$, i.e. $S^{[n_1,n_2]}\subset Z$. It is clear these two maps are inverses. The rest follows from Proposition \ref{prop2}, simplified as in Proposition \ref{simplah}, and the fact that $h^0\cong\cO_{S^{[n_1,n_2]}}$. \end{proof} \begin{rmks} In Theorem \ref{nest} we will identify our virtual cycle with that of \cite{GSY1}. The formula \eqref{pfwd} for the pushforward of this cycle was conjectured in \cite{GSY1}, proved there for toric surfaces, and shown to be true for more general surfaces when integrated against some natural classes. 
From \eqref{dg} one can work out that the dual of the first arrow in \eqref{nhpot} is $$ \ext^1_p(\cI_1,\cI_1)\oplus\ext^1_p(\cI_2,\cI_2)\rt{(\iota,\,-\iota^*)}\ext^1_p(\cI_1,\cI_2), $$ where $\iota\colon\cI_1\to\cI_2$ is the natural inclusion. This complex is therefore the virtual tangent bundle of our perfect obstruction theory on $S^{[n_1,n_2]}$. \end{rmks} \section{Removing $H^1(\cO_S)$ and $H^2(\cO_S)$ on arbitrary surfaces} \label{arb} When $h^{0,1}(S)>0$ the virtual cycle constructed in the last section becomes zero due to a trivial $H^1(\cO_S)$ piece in its obstruction sheaf. And when $h^{0,2}(S)>0$ the perfect complex $R\hom_\pi(\cI_1,\cI_2)$ over $S^{[n_1]}\times S^{[n_2]}$ becomes \emph{3-term}, as it has nonzero $h^2=\ext^2_\pi(\cI_1,\cI_2)$. So we want to modify $R\hom_\pi(\cI_1,\cI_2)$ with $H^1(\cO_S)$ and $H^2(\cO_S)$ terms. The correct geometric way to do this is to take the product of our ambient space $S^{[n_1]}\times S^{[n_2]}$ with $\Jac(S)$ --- we do this in Section \ref{jack} when $h^{0,2}(S)=0$.\footnote{When $h^{0,2}(S)>0$ one should do the same with the \emph{derived scheme} $\Jac(S)$ with nonzero obstruction bundle $H^2(\cO_S)\otimes\cO$. We don't go this far.} In this Section we use a more \emph{ad hoc} fix which is less geometric but appears to give stronger results. To describe it, consider the natural composition $$ \xymatrix@C=0pt@R=16pt{ H^2(\cO_S)\otimes\_\C\cO_{S^{[n_1]}\times S^{[n_2]}} \ar[rrd]& \cong\ R^2\pi_*\;\cO\ \cong\ R^2\pi_*\;\cI_2\ = & \!\ext^2_\pi(\cO,\cI_2)\ \ar[d]^{\iota^*_1} \\ &&\!\ext^2_\pi(\cI_1,\cI_2),}\vspace{-9mm} $$ \beq{ont} \eeq induced by $\iota\_1\colon\cI_1\to\cO_{S^{[n_1]}\times S^{[n_2]}\times S}$\;. Since $\ext^3_\pi(\cO/\cI_1,\cI_2)=0$ (because $\pi$ has relative dimension 2) the composition \eqref{ont} is \emph{surjective}. 
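Here we have used the identification $R^2\pi_*\;\cO\cong R^2\pi_*\;\cI_2$ implicitly in \eqref{ont}; for clarity we spell out the (standard) reason. Applying $R\pi_*$ to the ideal sheaf sequence
$$
0\To\cI_2\To\cO\To\cO/\cI_2\To0
$$
and using $R^{\ge1}\pi_*\big(\cO/\cI_2\big)=0$ (the fibres of $\cZ_2$ over $S^{[n_1]}\times S^{[n_2]}$ are 0-dimensional) gives an isomorphism $R^2\pi_*\;\cI_2\cong R^2\pi_*\;\cO$ from the long exact sequence.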
Therefore, if there were a lifting \beq{find} \xymatrix@R=18pt{ H^2(\cO_S)\otimes\cO[-2]\ar@{->>}[dr]\ar@{.>}[d] \\ R\hom_\pi(\cI_1,\cI_2) \ar[r]& \ext^2_\pi(\cI_1,\cI_2)[-2],\!} \eeq then the cone on the dotted arrow \eqref{find} would have no $h^2$ and so would be quasi-isomorphic to a 2-term complex of vector bundles. So we could replace $R\hom_\pi(\cI_1,\cI_2)$ by this cone: they have the same $h^0$ jumping locus $S^{[n_1,n_2]}$ (this is proved in Lemma \ref{gw}; it is not true for the $h^{\ge1}$ jumping loci, however) and the same Chern classes. Assuming we could find a similar lift for $H^1(\cO_S)\otimes\cO[-1]$ as well, applying Theorem \ref{prop} to the cone would give the following. \begin{thm} \label{aim} Let $S$ be any smooth projective surface. The 2-step nested Hilbert scheme $S^{[n_1,n_2]}$ carries a natural\,\footnote{Naturality will follow from the fact that the lift \eqref{find} is canonical on restriction to $S^{[n_1,n_2]}\subset S^{[n_1]}\times S^{[n_2]}$; see \eqref{rite}.} perfect obstruction theory and virtual cycle $$ \big[S^{[n_1,n_2]}\big]^{\vir}\,\in\,A_{n_1+n_2}\big(S^{[n_1,n_2]}\big) $$ whose pushforward to $S^{[n_1]}\times S^{[n_2]}$ is $c_{n_1+n_2}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big).$ \end{thm} Unfortunately the lifting \eqref{find} does not exist in general, so to prove the Theorem we will use a trick borrowed from the splitting principle in topology: we pull back to a bigger space $\cA\to S^{[n_1]}\times S^{[n_2]}$ where there is such a splitting, then show that the passage does not destroy any information. For the rest of this Section we carry this out, dealing similarly with $H^1(\cO_S)$ at the same time. \medskip We denote by $R^{\ge1}\pi_*\;\cO$ the truncation $\tau^{\ge1}R\pi_*\;\cO$.
Choosing once and for all a splitting of $R\Gamma(\cO_S)$ into its cohomologies induces a splitting \beq{1stsplit} R^{\ge1}\pi_*\;\cO\ \cong\ H^1[-1]\ \oplus\ H^2[-2], \eeq where $$ H^i\ \colon=\ H^i(\cO_S)\otimes\cO_{S^{[n_1]}\times S^{[n_2]}} $$ is the trivial vector bundle of rank $h^{0,i}(S)$ over $S^{[n_1]}\times S^{[n_2]}$. As described above, we wish to map this to $R\hom_{\pi}(\cI_1,\cI_2)$ in an appropriate way, which we will do by factoring through the map \beq{amic} \iota_1^*\,\colon\ R\pi_*\;\cI_2\To R\hom_{\pi}(\cI_1,\cI_2) \eeq induced by $\iota\_1\colon\cI_1\to\cO$. We relate $R\pi_*\;\cI_2$ and $R^{\ge1}\pi_*\;\cO$ by the commutative diagram of exact triangles $$ \xymatrix@=18pt{ & \cO \ar[d]_{h^0}\ar@{=}[r]& \cO \ar[d] \\ R\pi_*\;\cI_2 \ar@{=}[d]\ar[r]& R\pi_*\;\cO \ar[d]\ar[r]& \pi_*\big(\cO/\cI_2\big) \ar[d] \\ R\pi_*\;\cI_2 \ar[r]& R^{\ge1}\pi_*\;\cO \ar[r]& \cO^{[n_2]}/\cO.\!} $$ Here $\cO^{[n_2]}:=\pi_*(\cO/\cI_2)$ is the tautological vector bundle, and the top two rows induce the bottom one. This gives the exact triangle \beq{split?} \xymatrix{ \cO^{[n_2]}/\cO\,[-1] \ar[r]& R\pi_*\;\cI_2 \ar@<.5ex>[r]& R^{\ge1}\pi_*\;\cO \ar@{.>}@<.5ex>[l]} \eeq which we want to split (to then compose with \eqref{amic}). To write this more explicitly, we split $R^{\ge1}\pi_*\;\cO$ by \eqref{1stsplit} and fix a 2-term locally free resolution $F_1\to F_2$ of $R\pi_*\;\cI_2$, with $F_i$ in degree $i$. Then \eqref{split?} gives \beq{dig} \xymatrix{ & \cO^{[n_2]}/\cO \\ 0 \ar[r]& R^1\pi_*\;\cI_2 \ar[r]^-{h^1}\ar@{->>}[d]_{\iota\_2}\ar@{<-^)}[u]& F_1 \ar[r]& F_2 \ar[r]^-{h^2}& R^2\pi_*\;\cI_2 \ar@{=}[d]\ar[r]& 0 \\ & H^1 \ar@{.>}@/^{-2ex}/[u]_{\phi_1} &&& H^2,\!\!\! \ar@{.>}[ul]^-{\phi_2}} \eeq where $\iota\_2\colon\cI_2\to\cO$ and the left hand column is a short exact sequence. Choices of splittings $\phi_1,\,\phi_2$ would induce a splitting of \eqref{split?}. Since the $H^i$ are \emph{free}, splittings $(\phi_1,\phi_2)$ of \eqref{dig} exist \emph{locally}.
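Concretely, taking cohomology sheaves in \eqref{dig}, the data of a splitting $(\phi_1,\phi_2)$ amounts to the following: $\phi_1$ is a splitting of the short exact sequence forming the left hand column,
$$
0\To\cO^{[n_2]}/\cO\To R^1\pi_*\;\cI_2\rt{\iota\_2}H^1\To0,
$$
while $\phi_2$ is a lift of $\id_{H^2}$ through the surjection $h^2\colon F_2\To R^2\pi_*\;\cI_2\cong H^2$.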
But unfortunately we can show they do \emph{not} exist globally in general. So we use a trick, pulling back to a bigger space $\cA\to S^{[n_1]}\times S^{[n_2]}$ where there is a tautological such splitting. \subsection{A splitting trick.} \label{splitpr} Inside the total space of the bundle $$ \cE\ :=\ (H^1)^*\!\otimes\!R^1\pi_*\;\cI_2\ \ \oplus\ \ (H^2)^*\!\otimes\!F_2 $$ over $S^{[n_1]}\times S^{[n_2]}$ there is a natural affine bundle\footnote{Modelled on the vector bundle $(H^1)^*\otimes\frac{\cO^{[n_2]}}{\cO}\ \ \oplus\,\ (H^2)^*\otimes\;\ker(h^2)$. Bhargav Bhatt pointed out that we could have used the Jouanolou trick here to find an affine bundle whose total space is an affine variety on which therefore there exist (non-canonical) splittings.} $\cA\subset\cE$ of pointwise splittings $(\phi_1,\phi_2)$ of \eqref{dig}. That is, the surjective map of locally free sheaves $$ (1\otimes\iota\_2\,,1\otimes h^2)\,\colon\ \cE\To\ \End H^1\ \oplus\ \End H^2 $$ induces a map on the total spaces of the associated vector bundles. Taking the inverse image of the section $\big(\!\id_{H^1},\id_{H^2}\!\big)$ defines the affine bundle $$ \rho\ \colon\ \cA\To S^{[n_1]}\times S^{[n_2]}. $$ Pulling \eqref{dig} back to $\cA$, it now has a \emph{canonical} tautological splitting $\Phi=(\phi_1,\phi_2)$, giving \beq{donesplit} \Phi\,:\ \rho^*H^1[-1]\ \oplus\ \rho^*H^2[-2]\To\rho^*R\pi_*\;\cI_2 \eeq as sought in \eqref{split?}. That is, composing $\Phi$ with (the pullback by $\rho^*$ of) $\iota_2\colon R\pi_*\;\cI_2\to R^{\ge1}\pi_*\;\cO$ gives the identity: $\iota_2\circ\Phi=\id$. So finally we may compose \eqref{donesplit} with (the pullback by $\rho^*$ of) $\iota_1^*$ \eqref{amic} to give a map \beq{coney} \iota_1^*\circ\Phi\,\colon\ \rho^*R^{\ge1}\pi_*\;\cO\To\rho^*R\hom_\pi(\cI_1,\cI_2). \eeq By construction, on taking $h^2$ it induces (the pullback by $\rho^*$ of) the surjection \eqref{ont}. 
Therefore the cone $C(\iota_1^*\circ\Phi)$ on \eqref{coney} has no $h^2$ and is quasi-isomorphic to a 2-term complex of locally free sheaves. We next give a more explicit description of $C(\iota_1^*\circ\Phi)$. It is nicest over $\rho^{-1}\big(S^{[n_1,n_2]}\big)$, since on $S^{[n_1,n_2]}$ the natural inclusion $\iota\colon\cI_1\to\cI_2$ induces a \emph{canonical} lift given by the composition \beq{rite} R^{\ge1}\pi_*\cO\To R\pi_*\cO\rt{\id}R\hom_\pi(\cI_1,\cI_1)\rt{\iota}R\hom_\pi(\cI_1,\cI_2). \eeq \begin{lem} \label{invariant} The cone $C(\iota_1^*\circ\Phi)$ can be represented by a 3-term complex of vector bundles\footnote{This can be truncated to a 2-term complex of vector bundles by removing the third term and replacing the second term by the kernel of the surjection $(\rho^*\sigma_2,\psi_2)$.} \beq{tri} \xymatrix@R=0pt@C=35pt{ \rho^*E_0 \ar[r]^{\rho^*\sigma_1}& \rho^*E_1 \ar[r]^{\rho^*\sigma_2}& \rho^*E_2, \\ \oplus & \oplus \\ \rho^*H^1 \ar[ruu]_{\psi_1} & \rho^*H^2 \ar[ruu]_{\psi_2}} \eeq where $E_0\to E_1\to E_2$ represents $R\hom_\pi(\cI_1,\cI_2)$. Moreover the maps may be chosen so that, on restriction to $\rho^{-1}\big(S^{[n_1,n_2]}\big)$, they are the pullbacks by $\rho^*$ of maps on $S^{[n_1,n_2]}$, and $C(\iota_1^*\circ\Phi)$ is the pullback $\rho^*\;C$ of the cone $C$ on the composition \eqref{rite}. \end{lem} \begin{rmk} Recall that by our notation convention, we are using the same notation $\rho$ for the restriction of $\rho$ to $\rho^{-1}\big(S^{[n_1,n_2]}\big)$. The Lemma tells us that on $\rho^{-1}\big(S^{[n_1,n_2]}\big)$, the explicit resolution \eqref{tri} can be taken to be constant on the fibres of $\rho$ --- i.e. independent of the choice of lifts $(\phi_1,\phi_2)$ of \eqref{dig} --- since, after composition with $\iota_1^*$, all lifts become quasi-isomorphic to the canonical one \eqref{rite} on $\rho^{-1}\big(S^{[n_1,n_2]}\big)$.
\end{rmk} \begin{proof} First we show that $C(\iota_1^*\circ\Phi)$ restricted to $\rho^{-1}\big(S^{[n_1,n_2]}\big)$ is quasi-isomorphic to $\rho^*\;C$. Consider the diagram $$ \xymatrix{ \rho^*R^{\ge1}\pi_*\cO \ar[r]\ar[d]_\Phi& \rho^*R\pi_*\cO \ar[r]^-{\id}& \rho^*R\hom_\pi(\cI_1,\cI_1) \ar[r]^{\iota}& \rho^*R\hom_\pi(\cI_1,\cI_2) \\ \rho^*R\pi_*\;\cI_2 \ar[ru]_(.6){\iota\_2}\ar[urrr]_{\iota_1^*}\ar@/^{-2ex}/[u]} $$ on $\rho^{-1}\big(S^{[n_1,n_2]}\big)$, where we have the canonical map $\iota\colon\rho^*\cI_1\into\rho^*\cI_2$. Here the curved arrow is from \eqref{split?} and makes the first triangle commute. Since by construction $\Phi$ is a right inverse to this map, the first triangle also commutes if we start at the top left corner. Since the second triangle also commutes, everything does, which means that $\iota_1^*\Phi$ equals the composition of the arrows along the top row.\medskip Next we resolve $R\hom_\pi(\cI_1,\cI_2)^\vee$ by a complex of \emph{very negative} vector bundles $G\udot$. This means that they behave like projectives in the abelian category of coherent sheaves. In particular, by making them sufficiently negative, we can arrange that the map $(\iota_1^*\Phi)^\vee$ can be represented by a genuine map of complexes \beq{firs} \rho^*G\udot\To\rho^*(H^1)^*[1]\ \oplus\ \rho^*(H^2)^*[2], \eeq and, on $S^{[n_1,n_2]}$, the dual of the composition \eqref{rite} is represented by a genuine map of complexes \beq{seco} G\udot\To(H^1)^*[1]\,\oplus\,(H^2)^*[2]. \eeq On restriction to $\rho^{-1}\big(S^{[n_1,n_2]}\big)\subset\cA$, we have shown that the first map \eqref{firs} is quasi-isomorphic to the pullback by $\rho^*$ of the second \eqref{seco}. Again we may assume we took the $G^i$ sufficiently negative that --- by the usual proof that quasi-isomorphic maps of complexes of projectives are homotopic --- there is a homotopy between \eqref{firs} and $\rho^*\;$\eqref{seco}. 
This homotopy is a pair of maps $$ \rho^*G^0\To\rho^*(H^1)^*, \qquad \rho^*G^{-1}\To\rho^*(H^2)^*, $$ over $\rho^{-1}\big(S^{[n_1,n_2]}\big)$. By the sufficient negativity of the $G^i$ they can be extended\footnote{For $N\gg0$ the restriction $\Hom\_{\cA}(G(-N),F)\to\Hom_{\rho^{-1}(S^{[n_1,n_2]})}(G(-N),F)$ is \emph{onto} for locally free $F$ and $G$.} to maps on all of $\cA$. Modifying \eqref{firs} by this homotopy, dualising and then truncating $(G\udot)^\vee$ to a 3-term complex now gives \eqref{tri}. \end{proof} So $C(\iota_1^*\circ\Phi)$ is quasi-isomorphic to the 2-term complex of vector bundles \beq{EF} \rho^*(E_0\oplus H^1)\rt\sigma F, \eeq where $F$ is defined to be the kernel \beq{Fdef} 0\To F\To\rho^*(E_1\oplus H^2)\To\rho^*E_2\To0. \eeq And over $\rho^{-1}\big(S^{[n_1,n_2]}\big)$, the complex \eqref{EF} can be seen as a pull back by $\rho^*$. \begin{lem}\label{gw} The $h^0$ jumping locus of $C(\iota_1^*\circ\Phi)$ is the same as that of $\rho^*R\hom_\pi(\cI_1,\cI_2)$, i.e. it is $\rho^{-1}\big(S^{[n_1,n_2]}\big)$. \end{lem} \begin{proof} Given any map $T\rt{f}\cA\to S^{[n_1]}\times S^{[n_2]}$, we denote the basechange of $\pi$ by $$ \pi\_T\,\colon\, T\times S\To T. $$ We denote the pull backs of $\cI_1,\,\cI_2$ to $T\times S$ by the same notation. Pulling $C(\iota_1^*\circ\Phi)$ back to $T$, the long exact sequence associated to the cone becomes $$ 0\To\hom_{\pi\_T}(\cI_1,\cI_2)\To h^0\big(f^*C(\iota_1^*\circ\Phi)\big)\To R^1\pi\_{T*}\cO\rt{\iota_1^*\Phi}\ext^1_{\pi\_T}(\cI_1,\cI_2). $$ It remains to prove that the last arrow is an injection, since that implies $\hom_{\pi\_T}(\cI_1,\cI_2)\cong h^0\big(f^*C(\iota_1^*\circ\Phi)\big)$ on any $T$, to which we can apply Lemma \ref{h0base} to conclude. 
The last arrow is the composition $\iota_1^*\circ\Phi$ in the diagram $$\xymatrix{ R^1\pi\_{T*}\;\cO \ar@{.>}@/^{2ex}/[r]^\Phi& R^1\pi\_{T*}\;\cI_2 \ar[l]^{\iota\_2}\ar[d]^{\iota_1^*} \\ \ext^1_{\pi\_T}(\cI_1,\cO) \ar@<-1ex>@{<-^)}[u]+<0pt,-12pt>^-{\iota_1^*} & \ext^1_{\pi\_T}(\cI_1,\cI_2). \ar[l]_{\iota\_2}} $$ To prove it is an injection it is sufficient to do so after composing with $\iota\_2$ along the bottom. Since the diagram commutes and $\Phi$ is a right inverse of the $\iota\_2$ along the top, this is equivalent to the left hand $\iota_1^*$ being injective. But this follows from the vanishing of $\ext^1_{\pi\_T}(\cO/\cI_1,\cO)$. \end{proof} For brevity we set $Z:=S^{[n_1,n_2]}$. By Lemmas \ref{gw} and \ref{invariant} we can see $\rho^{-1}(Z)$ as the degeneracy locus of any of the four maps \begin{eqnarray} \rho^*\sigma_1 \!&\colon& \rho^*E_0\To\rho^*E_1, \label{1} \\ (\rho^*\sigma_1,\psi_1) \!&\colon& \rho^*(E_0\oplus H^1)\To\rho^*E_1, \label{2} \\ \mat{\rho^*\sigma_1}{\psi_1}00 \!\!\!&\colon& \rho^*(E_0\oplus H^1)\To\rho^*(E_1\oplus H^2), \label{3} \\ \sigma \!&\colon& \rho^*(E_0\oplus H^1)\To K, \label{4} \end{eqnarray} where $K:=\ker\!\big(\rho^*(E_1\oplus H^2)\to\rho^*E_2\big)$. These give rise to four different perfect obstruction theories for $\rho^{-1}(Z)$. The one we are interested in is the fourth \eqref{4}, but we will use the third \eqref{3} and the second \eqref{2} to relate this to the first \eqref{1} which has the desirable property that it is $\rho$-invariant: it is pulled back from a perfect obstruction theory on $Z$. \medskip By Lemma \ref{invariant} we can write each of (\ref{1}--\ref{4}) as the degeneracy locus of a map $$ s\,\colon\ \rho^*A\To B, $$ which on restriction to $\rho^{-1}(Z)$ becomes a pullback from $Z$ --- i.e. there exists a bundle $B'$ on $Z$ and $s'\colon A|_Z\to B'$ such that \beq{pullbk} B|_{\rho^{-1}(Z)}\cong\rho^*B'\quad\mathrm{and}\quad s|_{\rho^{-1}(Z)}\cong\rho^*s'. 
\eeq Now apply Section \ref{degloc} with $r_0-r=1$ to this. We see $\rho^{-1}(Z)$ as being cut out of $$ \rho^*\PP(A)\ \cong\ \PP(\rho^*A)\rt{q}\cA $$ by the induced section $\widetilde s$ \eqref{tilda} of $q^*B(1)$, inducing the perfect obstruction theory \eqref{mod3} \beq{rho1} \xymatrix@R=18pt@C=35pt{ q^*B^*(-1)\big|_{\rho^{-1}(Z)} \ar[d]_{\widetilde s}\ar[r]^-{d\;\widetilde s}& \Omega\_{\rho^*\PP(A)}\big|_{\rho^{-1}(Z)} \ar@{=}[d] \\ \rho^*(I/I^2) \ar[r]^-d& \Omega\_{\rho^*\PP(A)}\big|_{\rho^{-1}(Z)}\,.\!\!\!} \eeq Here $I$ is the ideal of $Z\subset \PP(A)$, so the bottom row is the truncated cotangent complex $\LL_{\rho^{-1}(Z)}$\;. The bottom arrow factors through $\rho^*\Omega_{\PP(A)}|_{\rho^{-1}(Z)}$, so using \eqref{pullbk} the diagram factors through \beq{doh} \xymatrix@R=18pt{ q^*\rho^*(B')^*(-1)\big|_{\rho^{-1}(Z)} \ar[d]_{\widetilde s}\ar[r]^-{d\;\widetilde s}& \rho^*\Omega\_{\PP(A)}\big|_{\rho^{-1}(Z)} \ar@{=}[d] \\ \rho^*(I/I^2) \ar[r]^-d& \rho^*\Omega\_{\PP(A)}\big|_{\rho^{-1}(Z)}\,.\!\!\!} \eeq All of the sheaves here are pullbacks by $\rho^*$. Although on $\rho^{-1}(Z)$ the map $s$ is also a pullback \eqref{pullbk}, that does \emph{not} immediately mean that the maps in the above diagram are pulled back --- they use the restriction of $s$ not just to $\rho^{-1}(Z)$ but to its scheme theoretic doubling defined by the ideal $\rho^*I^2$. However, in the first set-up \eqref{1} the maps clearly are pulled back. Using the second \eqref{2} and third \eqref{3} we will prove the same is true for the fourth \eqref{4}, so that it descends to give a perfect obstruction theory for $Z$ independent of the $(\phi_1,\phi_2)$ choices built into $\cA$. \begin{prop} \label{rhocom} Using the description \eqref{4} of $\rho^{-1}(Z)$, the resulting diagram \eqref{doh} is $\rho$-invariant: it is the pullback by $\rho^*$ of a perfect obstruction theory $F\udot\to\LL_Z$ for $Z=S^{[n_1,n_2]}$. 
\end{prop} \begin{proof} Applying \eqref{doh} to the first set-up \eqref{1} gives $$ \xymatrix@R=18pt@C=40pt{ \rho^*q^*E_1^*(-1)\big|_{\rho^{-1}(Z)} \ar[d]_{\rho^*\widetilde\sigma_1}\ar[r]^-{\rho^*d(\widetilde\sigma_1)}& \rho^*\Omega\_{\PP(E_0)}\big|_{\rho^{-1}(Z)} \ar@{=}[d] \\ \rho^*(I/I^2) \ar[r]^-d& \rho^*\Omega\_{\PP(E_0)}\big|_{\rho^{-1}(Z)}\,,\!\!\!} $$ where $I$ is the ideal of $Z\subset\PP(E_0)$. Applied instead to the second \eqref{2}, we get the diagram \beq{dghell} \xymatrix@R=18pt@C=50pt{ \rho^*q^*E_1^*(-1)\big|_{\rho^{-1}(Z)} \ar[d]_{\widetilde{(\rho^*\sigma_1,\psi_1)}}\ar[r]^-{d\widetilde{(\rho^*\sigma_1,\psi_1)}}& \rho^*\Omega\_{\PP(E_0\oplus H^1)}\big|_{\rho^{-1}(Z)} \ar@{=}[d] \\ J/J^2 \ar[r]^-d& \rho^*\Omega\_{\PP(E_0\oplus H^1)}\big|_{\rho^{-1}(Z)}\,,\!\!\!} \eeq where $J$ is the ideal of $\rho^{-1}(Z)\subset\PP(\rho^*E_0\oplus H^1)$. (Throughout this proof we denote $q^*H^i,\,\rho^*H^i$ and $q^*\rho^*H^i$ simply by $H^i$.) This inclusion factors $$ \rho^{-1}(Z)\,\subset\,\PP(\rho^*E_0)\,\subset\,\PP(\rho^*E_0\oplus H^1). $$ The first has conormal sheaf $\rho^*I/I^2$, while the second has conormal bundle $(H^1)^*(-1)$. The splitting of $\rho^*E_0\oplus H^1$ induces a splitting $$ \Omega_{\PP(\rho^*E_0\oplus H^1)}|_{\rho^{-1}(Z)}\cong\Omega_{\PP(\rho^*E_0)}|_{\rho^{-1}(Z)}\oplus (H^1)^*(-1)|_{\rho^{-1}(Z)} $$ and so $$ J/J^2\=\rho^*(I/I^2)\oplus(H^1)^*(-1). $$ When substituted into \eqref{dghell} it becomes \beq{arss} \xymatrix@R=18pt@C=40pt{ \rho^*q^*E_1^*(-1)\big|_{\rho^{-1}(Z)} \ar[d]_{(\rho^*\widetilde\sigma_1,\,\psi_1^*)}\ar[r]^-{(\rho^*d\;\widetilde\sigma_1,\,\psi_1^*)}& \rho^*\Omega\_{\PP(E_0)}\big|_{\rho^{-1}(Z)} \oplus(H^1)^*(-1)\ar@{=}[d] \\ \rho^*(I/I^2)\oplus(H^1)^*(-1) \ar[r]^-{(d,\id)}& \rho^*\Omega\_{\PP(E_0)}\big|_{\rho^{-1}(Z)}\oplus(H^1)^*(-1)\,.\!\!\!} \eeq The \emph{key point} of this proof is that the above diagram is pulled back by $\rho^*$ from a similar diagram on $Z$.
This is clear for all the bundles involved, and also for the first summand of the upper and left hand arrows. But these are the only parts of the arrows which depend on the thickening of $\rho^{-1}(Z)$. The other summands $\psi_1^*$ depend only on their restriction to $\rho^{-1}(Z)$, where they are also pullbacks by Lemma \ref{invariant}. So the second degeneracy locus description of $\rho^{-1}(Z)$ \eqref{2} gives rise to a diagram which descends to (a perfect obstruction theory on) $Z$. For the third description \eqref{3} we add an extra $(H^2)^*(-1)$ summand to the diagram \eqref{arss} with all maps from it zero: $$\qquad \xymatrix@R=20pt@C=60pt{ \rho^*q^*(E_1\oplus H^2)^*(-1)\big|_{\rho^{-1}(Z)} \ar[d]_{(\rho^*\widetilde\sigma_1,\,\psi_1^*)}^{\!\oplus\,(0,0)}\ar[r]^-{(\rho^*d\;\widetilde\sigma_1,\,\psi_1^*)\!}_-{\oplus\,(0,0)}& \rho^*\Omega\_{\PP(E_0)}\big|_{\rho^{-1}(Z)} \oplus(H^1)^*(-1)\ar@{=}[d] \\ \rho^*(I/I^2)\oplus(H^1)^*(-1) \ar[r]^-{(d,\id)}& \rho^*\Omega\_{\PP(E_0)}\big|_{\rho^{-1}(Z)}\oplus(H^1)^*(-1).\!\!} \vspace{-17mm} $$ \beq{bb} \vspace{1cm} \eeq This is therefore also a pullback by $\rho^*$. Finally, since \eqref{tri} is a complex, the map \eqref{3} takes values in $K\subset\rho^*(E_1\oplus H^2)$. Thus the equation cutting out $\rho^{-1}(Z)$ takes values in $q^*K(1)\subset q^*\rho^*(E_1\oplus H^2)(1)$. Therefore the upper horizontal and left hand vertical arrows of \eqref{bb} factor through $q^*K^*(-1)$, giving $$ \xymatrix@R=18pt@C=40pt{ q^*K^*(-1)\big|_{\rho^{-1}(Z)} \ar[d]\ar[r]& \rho^*\Omega\_{\PP(E_0)}\big|_{\rho^{-1}(Z)} \oplus(H^1)^*(-1)\ar@{=}[d] \\ \rho^*(I/I^2)\oplus(H^1)^*(-1) \ar[r]^-{(d,\id)}& \rho^*\Omega\_{\PP(E_0)}\big|_{\rho^{-1}(Z)}\oplus(H^1)^*(-1)\,,\!\!\!} \vspace{-17mm} $$ \beq{rho2} \vspace{8mm} \eeq which is the diagram \eqref{doh} applied to the fourth degeneracy locus \eqref{4}. By Lemma \ref{invariant}, both $K$ and its inclusion into $\rho^*E_1\oplus H^2$ are $\rho$-invariant. 
Thus the quotient diagram \eqref{rho2} of the diagram \eqref{bb} is also a pull back by $\rho^*$. \end{proof} \begin{proof}[Proof of Theorem \ref{aim}] Applying \eqref{rho1} (with $A=E_0\oplus H^1$ and $B=K$) to the fourth description \eqref{4} induces a perfect obstruction theory on $\rho^{-1}\big(S^{[n_1,n_2]}\big)$. And diagram \eqref{doh} applied to \eqref{4} gives \eqref{rho2}, which descends --- by Proposition \ref{rhocom} --- to give a compatible perfect obstruction theory on $S^{[n_1,n_2]}$. This compatibility means they satisfy $$ \rho^*\big[S^{[n_1,n_2]}\big]^{\vir}\= \big[\rho^{-1}\big(S^{[n_1,n_2]}\big)\big]^{\vir} \ \in\ A_{\;\dim\cA-k}(\cA). $$ By Theorem \ref{prop} the second term is $\Delta\;_{r_1-r}^{r_0-r}\big(c(K-(\rho^*E_0\oplus H^1))\big) $. But the Chern classes of $K-(\rho^*E_0\oplus H^1)$ are the same as those of $\rho^*(-E_0+E_1-E_2)$ and so those of $\rho^*R\hom_\pi(\cI_1,\cI_2)[1]$. Thus $$ \rho^*\big[S^{[n_1,n_2]}\big]^{\vir}\=\rho^* \Delta\;_{r_1-r}^{r_0-r}\big(c(R\hom_\pi(\cI_1,\cI_2)[1])\big)\ \in\ A_{\;\dim\cA-k}(\cA). $$ Here $r_0-r=1$ is the rank of $\ker(\rho^*E_0\to\rho^*E_1)$ over the degeneracy locus. And $r_1-r_0=\rk K-\rk E_0-h^1(\cO_S)=\rk E_1+h^2(\cO_S)-\rk E_2-\rk E_0-h^1(\cO_S)=-\chi(I_1,I_2)+\chi(\cO_S)-1=n_1+n_2-1$, so $r_1-r=n_1+n_2$ and $k=(r_0-r)(r_1-r)=n_1+n_2$. Therefore the above becomes $$ \rho^*\big[S^{[n_1,n_2]}\big]^{\vir}\=\rho^*\, c\_{n_1+n_2}(R\hom_\pi(\cI_1,\cI_2)[1])\ \in\ A_{\;\dim\cA-n_1-n_2}(\cA). $$ But since $\rho$ is an affine bundle, \beq{krush} \rho^*\,\colon\,A_{n_1+n_2}(S^{[n_1]}\times S^{[n_2]})\To A_{\;\dim\cA-n_1-n_2}(\cA) \eeq is an isomorphism \cite[Corollary 2.5.7]{Kr}, so the result follows. \end{proof} Over the degeneracy locus $\rho^{-1}\big(S^{[n_1,n_2]}\big)$, our complex $C(\iota_1^*\Phi)$ has $$ h^0\=\cO, $$ trivialised by the inclusion $\iota:\cI_1\into\cI_2$. 
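The rank bookkeeping in the proof above can be sanity-checked numerically. In the following sketch the Hodge numbers and bundle ranks are arbitrary illustrative choices; only the relations used in the proof (the Euler characteristic constraints and the definition of $K$) are taken from the text.

```python
# Illustrative sanity check of the rank bookkeeping in the proof of Theorem aim.
# All concrete numbers are made-up choices subject to the stated constraints.
n1, n2 = 3, 2          # lengths of the nested subschemes
h1, h2 = 2, 1          # h^1(O_S), h^2(O_S)
chi_O_S = 1 - h1 + h2  # Euler characteristic chi(O_S)

rk_E0 = 10
rk_E2 = 4
# E0 -> E1 -> E2 resolves RHom_pi(I_1, I_2), whose Euler characteristic is
# chi(I_1, I_2) = chi(O_S) - n1 - n2:
chi_I1_I2 = chi_O_S - n1 - n2
rk_E1 = rk_E0 + rk_E2 - chi_I1_I2

rk_K = rk_E1 + h2 - rk_E2  # K = ker(rho^* E_1 (+) H^2 -> rho^* E_2), generic rank
r0 = rk_E0 + h1            # source rank of rho^* E_0 (+) H^1 -> K
r1 = rk_K
r = r0 - 1                 # on the degeneracy locus the kernel is 1-dimensional

assert r1 - r0 == n1 + n2 - 1
assert r1 - r == n1 + n2
assert (r0 - r) * (r1 - r) == n1 + n2  # the codimension k in the proof
```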
And $h^1[-1]$ is the cone on $$ h^0\big(C(\iota_1^*\Phi)\big)\ \cong\ \cO_{\rho^{-1\,}S^{[n_1,n_2]}}\,\rt{h^0}\,C(\iota_1^*\Phi)\big|_{\rho^{-1\,}S^{[n_1,n_2]}}\,. $$ By Lemma \ref{invariant} and the description \eqref{rite}, this is \beq{trayce} R\hom_p(\cI_1,\cI_2)\_0\ :=\ \Cone\!\;\big(Rp_*\;\cO\rt{\iota\cdot\id}R\hom_p(\cI_1,\cI_2)\big), \eeq where we recall that $p$ is the basechange of $\pi$ to $S^{[n_1,n_2]}\subset S^{[n_1]}\times S^{[n_2]}$. Thus \beq{h11} h^1\=\ext^1_p(\cI_1,\cI_2)\_0. \eeq Theorem \ref{prop} shows the perfect obstruction theory of a degeneracy locus has virtual tangent bundle $$ T_{\cA}|_{\rho^{-1}(Z)}\To(h^0)^*\otimes h^1. $$ As in the proof of Theorem \ref{aim} this descends to give our perfect obstruction theory on $Z=S^{[n_1,n_2]}$, yielding the following. \begin{cor}\label{vtb} The perfect obstruction theory on $S^{[n_1,n_2]}$ of Theorem \ref{aim} can be written, in the notation of \eqref{trayce}, as \beq{tun} \big\{T_{S^{[n_1]}\times S^{[n_2]}}\big|_{S^{[n_1,n_2]}}\to\ext^1_p(\cI_1,\cI_2)\_0\big\}^\vee\To\LL_{S^{[n_1,n_2]}}. \eeq \end{cor} \section{$k$-step nested Hilbert schemes} \label{kstep} For $n_1\ge n_2\ge\cdots\ge n_k$, the $k$-step nested Hilbert scheme $$ S^{[n_1,n_2,\ldots,n_k]}\ :=\ \big\{I_1\subseteq I_2\subseteq\cdots\subseteq I_k\subseteq\cO_S,\ \mathrm{length}(\cO_S/I_i)=n_i\big\} $$ can be seen inside $S^{[n_1]}\times\cdots\times S^{[n_k]}$ as the intersection of the $(k-1)$ degeneracy loci $$ \big\{\!\Hom(I_i,I_{i+1})=\C\big\},\quad i=1,2,\ldots,k-1 $$ where the maps in the complexes $R\hom_\pi(\cI_i,\cI_{i+1})$ drop rank. So when $H^{\ge1}(\cO_S)=0$ we can employ the exact same method as in Proposition \ref{nest0}, using $k-1$ sections of tautological bundles on a $(k-1)$-fold fibre product of relative Grassmannians, to describe a perfect obstruction theory, virtual cycle, and product of Thom-Porteous terms to compute its pushforward. 
For general $S$, possibly with $H^{\ge1}(\cO_S)\ne0$, we can replace the complexes $R\hom_\pi(\cI_i,\cI_{i+1})$ with their modifications $C(\iota_i^*\circ\Phi_i)$ of \eqref{coney} after pulling back to an affine bundle of splittings. Then we use the same method as in Theorem \ref{aim} to produce the following result. We use the projections \begin{align*} \pi\,\colon\ S^{[n_1]}\times\cdots\times S^{[n_k]}\times S\ &\To\, S^{[n_1]}\times\cdots\times S^{[n_k]}, \\ p\,\colon\ S^{[n_1,\ldots,n_k]}\times S\ &\To\,S^{[n_1,\ldots,n_k]}, \end{align*} and, when $I\subset J$, the same $\Ext(I,J)\_0$ notation as in (\ref{trayce}, \ref{h11}). \begin{thm} \label{nest} Fix a smooth complex projective surface $S$. Via degeneracy loci the $k$-step nested Hilbert scheme $S^{[n_1,\ldots,n_k]}$ inherits a perfect obstruction theory $F\udot\to\LL_{S^{[n_1,\ldots,n_k]}}$ with virtual tangent bundle $$ (F\udot)^\vee\ \cong\ \Big\{T_{S^{[n_1]}}\oplus\cdots\oplus T_{S^{[n_k]}}\To\ext^1_p(\cI_1,\cI_2)\_0\oplus\cdots\oplus\ext^1_p(\cI_{k-1},\cI_k)\_0\Big\}, $$ where the arrow is the obvious direct sum of the maps \eqref{tun}. This is isomorphic to the virtual tangent bundle $$ \Cone\left\{\bigg(\bigoplus_{i=1}^kR\hom_p(\cI_i,\cI_i)\bigg)_{\!0} \To\bigoplus_{i=1}^{k-1}R\hom_p(\cI_i,\cI_{i+1})\right\} $$ of the perfect obstruction theory of \cite{GSY1} or Vafa-Witten theory \cite{TT1} when the latter are defined. The pushforward of the resulting virtual cycle $$ \big[S^{[n_1,\ldots,n_k]}\big]^{\vir}\,\in\,A_{n_1+n_k}\big(S^{[n_1,\ldots,n_k]}\big) $$ to $S^{[n_1]}\times\cdots\times S^{[n_k]}$ is given by the product $$ c_{n_1+n_2}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big)\cup\cdots\cup c_{n_{k-1}+n_k}\big(R\hom_{\pi}(\cI_{k-1},\cI_k)[1]\big). $$ \end{thm} \begin{rmk} Note that we are not claiming the two perfect obstruction theories are the same, although they undoubtedly are. 
Proving this would involve identifying the map $F\udot\to\LL$ produced by our degeneracy locus construction with the one induced by Atiyah classes in \cite{GSY2,TT1}. We do not need this because the virtual cycles depend only on the scheme structure of $S^{[n_1,\ldots, n_k]}$ and the K-theory class of $F\udot$. \end{rmk} \vspace{-3mm} \begin{proof} All that is left to do is relate the two virtual tangent bundles. The virtual tangent bundle of \cite{GSY1} is the cone on the bottom row of the diagram \beq{gsy} \xymatrix@R=18pt{ Rp_*\;\cO \ar[d]^(.42){\oplus_{i=1}^k\id} \\ \bigoplus_{i=1}^kR\hom_p(\cI_i,\cI_i) \ar[r]\ar[d]& \bigoplus_{i=1}^{k-1}R\hom_p(\cI_i,\cI_{i+1}) \ar@{=}[d] \\ \bigg(\!\bigoplus_{i=1}^kR\hom_p(\cI_i,\cI_i)\bigg)_{\!0} \ar[r]& \bigoplus_{i=1}^{k-1}R\hom_p(\cI_i,\cI_{i+1}).\!\!} \eeq Here the left hand column is an exact triangle which defines the term in the lower left corner. The central horizontal arrow acts on the $j$th summand ($1\le j\le k$) of the left hand side by taking it to $(0,\ldots,0,-i_{j-1}^*,i_j,0,\ldots,0)$ on the right hand side, where $i_j$ appears in the $j$th position and is the canonical map $\cI_j\into \cI_{j+1}$. (For $j=1$ we ignore the $-i_{j-1}^*$ term to get $(i_1,0,\ldots,0)$; for $j=k$ we ignore the $i_j$ term to get $(0,\ldots,0,-i^*_{k-1})$.) This has zero composition with $\oplus_{i=1}^k\id$, so induces the lower horizontal arrow. The identity map from $(Rp_*\;\cO)^{\oplus k}=Rp_*\;\cO\otimes\C^k$ to the central left hand term of \eqref{gsy} induces a map from $Rp_*\;\cO\otimes(\C^k/\C)$ to the bottom left hand term, where $\C$ sits in $\C^k$ via $(1,1,\ldots,1)$. Projecting the elements $(1,0,\ldots,0)$, $(1,1,0,\ldots,0),\,\ldots,\,(1,1,\ldots,1,0)$ of $\C^k$ defines a basis in $\C^k/\C$ and so identifies $Rp_*\;\cO\otimes(\C^k/\C)\cong(Rp_*\;\cO)^{\oplus(k-1)}$. 
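The linear-algebra claim in this step --- that the listed projections form a basis of $\C^k/\C$ --- is equivalent to the vectors $(1,0,\ldots,0),(1,1,0,\ldots,0),\ldots,(1,\ldots,1,0)$, together with the diagonal vector $(1,\ldots,1)$, spanning $\C^k$. A minimal numeric check (the value of $k$ below is an arbitrary illustration):

```python
from fractions import Fraction

def det(m):
    """Exact determinant by fraction-arithmetic Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in m]
    n, sign, d = len(m), 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        d *= m[c][c]
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            for cc in range(c, n):
                m[r][cc] -= f * m[c][cc]
    return sign * d

k = 5  # illustrative value
# The k-1 vectors (1,0,...,0), (1,1,0,...,0), ..., (1,...,1,0), plus the
# diagonal vector (1,...,1) spanning the copy of C inside C^k:
rows = [[1] * j + [0] * (k - j) for j in range(1, k)] + [[1] * k]
assert det(rows) != 0  # so their images form a basis of C^k / C.(1,...,1)
```

The matrix is lower triangular with unit diagonal, so the determinant is $1$ and the claim holds for every $k$.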
Using our description of the central arrow, this identifies the induced map $$ Rp_*\;\cO\otimes(\C^k/\C)\To\bigoplus_{i=1}^{k-1}R\hom_p(\cI_i,\cI_{i+1}) $$ with $$ (Rp_*\;\cO)^{\oplus(k-1)}\rt{\operatorname{diag}(i_1,i_2,\cdots,i_{k-1})}\bigoplus_{i=1}^{k-1}R\hom_p(\cI_i,\cI_{i+1}). $$ Taking the cone on these two maps from $(Rp_*\;\cO)^{\oplus(k-1)}$ to the two entries on the bottom row of \eqref{gsy} shows the bottom row is quasi-isomorphic to $$ \bigoplus_{i=1}^kR\hom_p(\cI_i,\cI_i)\_0\To\ \bigoplus_{i=1}^{k-1}R\hom_p(\cI_i,\cI_{i+1})\_0 $$ in the notation of \eqref{trayce}. Each of these complexes has cohomology only in degree 1, so the virtual tangent bundle of \cite{GSY1} is the cone on $$ \bigoplus_{i=1}^k\ext^1_p(\cI_i,\cI_i)\_0\To\ \bigoplus_{i=1}^{k-1}\ext^1_p(\cI_i,\cI_{i+1})\_0 $$ in the notation of \eqref{h11}. On the $j$th summand on the left the arrow is $(0,\ldots,0,-i_{j-1}^*,i_j,0,\ldots,0)$. But this is $(F\udot)^\vee$, as required. In \cite{GSY2} it is shown that the perfect obstruction theory of \cite{GSY1} is a summand of the obstruction theory one gets from localised local DT theory. The piece one has to remove is explained in terms of a more global perfect obstruction theory arising in Vafa-Witten theory in \cite{TT1}. \end{proof} \section{Generalised Carlsson-Okounkov vanishing} \label{COvan} Theorem \ref{aim} expresses $\big[S^{[n_1,n_2]}\big]^{\vir}$ as a degeneracy class. This allows us to give a topological proof of the following result of Carlsson-Okounkov \cite{CO}, which we will then generalise below. \begin{cor} Let $S$ be any smooth projective surface. Over $S^{[n_1]}\times S^{[n_2]}$ we have the vanishing \beq{COzero} c_{n_1+n_2+i}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big)\=0, \qquad i>0. \eeq \end{cor} \begin{proof} We apply the higher Thom-Porteous formula \eqref{higher} to our modified complex $C(\iota_1^*\circ\Phi)$ \eqref{coney} on $\cA$. 
It has degeneracy locus $\rho^{-1}\big(S^{[n_1,n_2]}\big)$, over which $h^0$ is just $\cO$, trivialised by the tautological inclusion $\cI_1\into\cI_2$ over the nested Hilbert scheme. Hence \eqref{higher} gives $$ c_{r_1-r_0+i+1}\big(C(\iota_1^*\circ\Phi)[1]\big)\=0 $$ for $i>0$, where $r_1-r_0=n_1+n_2-1$. Since $C(\iota_1^*\circ\Phi)[1]$ only differs from $\rho^*R\hom_{\pi}(\cI_1,\cI_2)[1]$ by some trivial bundles $H^1,\,H^2$, this gives $$ \rho^*\;c_{n_1+n_2+i}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big)\=0. $$ But $\rho^*\colon A_{n_1+n_2-i}(S^{[n_1]}\times S^{[n_2]})\to A_{\;\dim\cA-n_1-n_2-i}(\cA)$ is an isomorphism \cite[Corollary 2.5.7]{Kr}, which gives the result. \end{proof} The rest of this Section is devoted to proving the following generalisation. \begin{thm} \label{park} Let $S$ be any smooth projective surface. For any curve class $\beta\in H_2(S,\Z)$, any Poincar\'e line bundle $\cL\to S\times\Pic_{\beta}(S)$, and any $i>0$, \beq{film} c_{n_1+n_2+i}\big(\!\;R\pi_*\;\cL-R\hom_{\pi}(\cI_1,\cI_2\otimes\cL)\big)\=0 \eeq on $S^{[n_1]}\times S^{[n_2]}\times\Pic_\beta(S)$. \end{thm} To prove this we will work with more general nested Hilbert schemes of subschemes $S\supset Z_1\supseteq Z_2$, by allowing $Z_1$ to have dimension $\le1$ instead of just 0. Separating out its divisorial and 0-dimensional parts, we are then led, for $\beta\in H_2(S,\Z)$, to the nested Hilbert scheme $S^{[n_1,n_2]}_{\beta}$. As a set it is \begin{multline} \label{nestbeta} S^{[n_1,n_2]}_{\beta}\ :=\ \big\{I_1(-D)\subset I_2\subset\cO_S\ \colon \\\mathrm{length}(\cO_S/I_i)=n_i,\ D\ \mathrm{Cartier\ with}\ [D]=\beta\big\}. \end{multline} As a scheme it represents the functor taking schemes $B$ to families of nested ideals $\cI_1(-\cD)\into \cI_2\into\cO_{S\times B}$, flat over $B$. Here $\cD$ is a Cartier divisor, the $\cO_S/\cI_i$ are finite over $B$ of length $n_i$, and --- on restriction to any closed fibre $S_b$ --- $\cD_b$ has class $\beta$ and the maps are still injections. 
Setting $\beta=0$ and $n_1\ge n_2$ recovers the punctual nested Hilbert scheme \eqref{nst}. Instead setting $n_1=0=n_2$ gives the Hilbert scheme of curves $S_\beta$, which fibres over $\Pic_\beta(S)\ni L$ with fibres $\PP(H^0(L))$. \medskip In the sequel \cite{GT2} we will construct a natural perfect obstruction theory and virtual cycle on $S^{[n_1,n_2]}_{\beta}$ for any $\beta$. Here we only sketch a less general construction for classes $\beta\gg0$ since we do not actually need the virtual class, only the degeneracy locus expression, in order to prove Theorem \ref{park}. \subsection{Another degeneracy locus construction} So fix $\beta\gg0$ sufficiently positive that $H^{\ge1}(L)=0$ for all $L\in\Pic_\beta(S)$. The Abel-Jacobi map $\AJ\colon S_\beta\to\Pic_\beta(S)$ is then a projective bundle. Let $\cD$ be the universal curve in $S_\beta\times S$ (or any basechange thereof) and as usual let $\pi$ denote any projection down $S$. Then $$ R\hom_\pi(\cI_1(-\cD),\cO) \quad\mathrm{over}\quad S^{[n_1]}\times S^{[n_2]}\times S_\beta $$ has $h^2=0$. Also $h^0=\pi_*\;\cO(\cD)$ and $$ h^1\=\ext^1_\pi(\cI_1(-\cD),\cO)\ \cong\ \ext^2_\pi(\cO_{\cZ_1}(-\cD),\cO)\ \cong\ \Big[\big(K_S(-\cD)\big)^{[n_1]}\Big]^*, $$ with the last isomorphism\footnote{Given any line bundle $L$ on $S$, there is a tautological rank $n_1$ vector bundle $L^{[n_1]}:=\pi_*\big[(\cO_{S^{[n_1]}}\boxtimes L)\otimes\cO_{\cZ_1}\big]$ over $S^{[n_1]}$ whose fibre over $Z_1\in S^{[n_1]}$ is $\Gamma(L|_{Z_1})$. Here we are using the obvious family generalisation applied to the line bundle $K_S(-\cD)$ over $S\times S_\beta$.} given by Serre duality down the fibres of $\pi$. Thus $R\hom_\pi(\cI_1(-\cD),\cO)$ can be trimmed to a 2-term complex of vector bundles $E_0\to E_1$ sitting in an exact sequence $$ 0\To\pi_*\;\cO(\cD)\To E_0\To E_1\To\Big[\big(K_S(-\cD)\big)^{[n_1]}\Big]^*\To0, $$ all of whose terms are locally free. 
So just as in Section \ref{splitpr} we may work on an affine bundle $\rho\colon\cA\to S^{[n_1]}\times S^{[n_2]}\times S_\beta$ over which this splits canonically, giving an isomorphism $$ \rho^*R\hom_\pi(\cI_1(-\cD),\cO)\ \cong\ \rho^*\pi_*\;\cO(\cD)\ \oplus\ \rho^*\Big[\big(K_S(-\cD)\big)^{[n_1]}\Big]^*[-1] $$ which induces the identity on cohomology sheaves. From now on we shall omit $\rho^*$ from our notation and work as if this splitting holds on $S^{[n_1]}\times S^{[n_2]}\times S_\beta$ since we know that $\rho^*$ induces an isomorphism on Chow groups \eqref{krush}. In particular we get an induced composition $$\xymatrix{ R\hom_\pi(\cI_1(-\cD),\cI_2) \ar[r]\ar[rrd]_{\Psi}& R\hom_\pi(\cI_1(-\cD),\cO) \ar[r]& \pi_*\;\cO(\cD) \ar[d] \\ && \frac{\pi_*\;\cO(\cD)}{s_{\cD}\cdot\;\cO}\;,\!} \vspace{-14mm} $$ \beq{sie}\vspace{6mm}\eeq where $s_{\cD}\colon\cO\to\pi_*\;\cO(\cD)$ is induced by adjunction from the section $s_{\cD}\colon\pi^*\cO\to\cO(\cD)$ cutting out $\cD$. At a closed point $(I_1,I_2,D)$ of $S^{[n_1]}\times S^{[n_2]}\times S_\beta$, the horizontal composition along the top of \eqref{sie} acts on $h^0$ as follows. It takes a nonzero element of $\Hom(I_1(-D),I_2)$ --- i.e. a point of the nested Hilbert scheme up to scale --- to its divisorial part in $H^0(\cO(D))$; this is injective. The vertical map then compares this to the divisor $D$. Thus $h^0(\Psi)$ has one-dimensional kernel $\cO$ (canonically trivialised by $s_{\cD}$) at precisely the points of the nested Hilbert scheme \beq{sett} S^{[n_1,n_2]}_\beta\ \stackrel\iota\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}\ S^{[n_1]}\times S^{[n_2]}\times S_\beta, \eeq and the kernel is never any bigger. Said differently, the 2-term complex of vector bundles $$ \Cone(\Psi)[-1] $$ drops rank by $1$ on the subset \eqref{sett}, and no further. 
By arguments very similar to those in Proposition \ref{nest0} one can easily show that \eqref{sett} also describes the degeneracy locus scheme-theoretically, inducing a perfect obstruction theory on $S^{[n_1,n_2]}_\beta$. By the Thom-Porteous formula of Proposition \ref{simplah} the resulting virtual cycle therefore satisfies $$ \iota_*\big[S^{[n_1,n_2]}_\beta\big]^{\vir}\= c\_b\big(\!\Cone(\Psi)\big), $$ where $b=\chi(\Cone(\Psi))+1=n_1+n_2$. More generally, by \eqref{higher}, $$ \iota_*\!\left(\!\;c_1\big((h^0)^*\big)^i\cap\big[S^{[n_1,n_2]}_\beta\big]^{\vir}\right)\=c_{n_1+n_2+i\;}\big(\!\Cone(\Psi)\big). $$ Since we have already observed that $h^0(\Cone(\Psi)[-1])\cong\cO$ is trivialised by the restriction of $s_{\cD}$ to \eqref{sett}, this gives \beq{park2} c_{n_1+n_2+i}\big(R\pi_*\;\cO(\cD)-R\hom_\pi(\cI_1(-\cD),\cI_2)\big)\=0 \quad\text{on}\ S^{[n_1]}\times S^{[n_2]}\times S_\beta \eeq for $\beta\gg0$ and all $i>0$. Notice how close this is to the result claimed in Theorem \ref{park}. \begin{proof}[Proof of Theorem \ref{park}.] We want to descend \eqref{park2} from $S_\beta$ to $\Pic_\beta(S)$ and then extend from $\beta\gg0$ to all $\beta\in H_2(S,\Z)$. We will use the formula of \cite[Proposition 1]{Ma}, $$ c_{n+i}(F\otimes M)\=\sum_{j=0}^{n+i}{\rk F-j\choose n+i-j}c_j(F)\;c_1(M)^{n+i-j}, $$ for any perfect complex $F$ and line bundle $M$, using the usual conventions for negative binomial coefficients. Applying this to $F=R\pi_*\;\cO(\cD)-R\hom_\pi(\cI_1(-\cD),\cI_2)$ of rank $n:=n_1+n_2$ gives \beq{FoM} c_{n_1+n_2+i}(F\otimes M)\=\sum_{j=n_1+n_2+1}^{n_1+n_2+i}{n_1+n_2-j\choose n_1+n_2+i-j}c_j(F)\;c_1(M)^{n_1+n_2+i-j}, \eeq because for smaller $j$ the inequalities $n_1+n_2+i-j>n_1+n_2-j\ge0$ force the binomial coefficient to vanish. By the vanishing \eqref{park2} this gives \beq{M} c_{n_1+n_2+i}(F\otimes M)\=0 \eeq for $i>0$ and any line bundle $M$ on $S^{[n_1]}\times S^{[n_2]}\times S_\beta$. 
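Both ingredients of this step admit a quick numerical check: the twisting formula of \cite[Proposition 1]{Ma}, tested here for an honest bundle via its Chern roots, and the vanishing of the binomial coefficients for $j\le n_1+n_2$. The ranks, roots and values of $n,i$ below are arbitrary stand-ins.

```python
from itertools import combinations
from math import comb, prod

def e(k, xs):
    """Elementary symmetric polynomial e_k in the entries of xs."""
    return sum(prod(c) for c in combinations(xs, k))

# Chern roots of a rank-3 bundle E and c_1(M): arbitrary numeric stand-ins.
roots, m = [2, 3, 5], 7
r = len(roots)

# c_k(E (x) M) computed from the twisted Chern roots a_i + m, versus the
# binomial formula (for an honest bundle all tops are non-negative).
for k in range(r + 1):
    lhs = e(k, [a + m for a in roots])
    rhs = sum(comb(r - j, k - j) * e(j, roots) * m ** (k - j) for j in range(k + 1))
    assert lhs == rhs

# The vanishing used above: binom(n-j, n+i-j) = 0 for 0 <= j <= n and i > 0,
# since the non-negative top index is strictly smaller than the bottom one.
n, i = 5, 2
assert all(comb(n - j, n + i - j) == 0 for j in range(n + 1))
```

Python's `comb` implements exactly the convention needed here: it returns $0$ whenever the (non-negative) top index is smaller than the bottom one.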
For any Poincar\'e line bundle $\cL$ pulled back from $S\times\Pic_\beta(S)$, the line bundle $\cL(-\cD)$ is trivial on each $S$ fibre and is the pullback $\pi^*M$ of a line bundle $M$ on $S^{[n_1]}\times S^{[n_2]}\times S_\beta$. (In fact $M=\cO(-1)$ is the tautological bundle if we consider $S_\beta\to\Pic_\beta(S)$ to be the projectivisation of the vector bundle $\pi_*\;\cL$.) Substituting into \eqref{M} gives $$ c_{n_1+n_2+i}\big(R\pi_*\;\cL-R\hom_\pi(\cI_1,\cI_2\otimes\cL)\big)\=0 $$ on $S^{[n_1]}\times S^{[n_2]}\times S_\beta$. Since this is pulled back from $S^{[n_1]}\times S^{[n_2]}\times\Pic_\beta(S)$ the Leray-Hirsch theorem shows we have the same vanishing there. \medskip So we have proved the vanishing \eqref{film} for $\beta\gg0$, and we need to generalise it to all $\beta\in H_2(S,\Z)$. We write the left hand side of \eqref{film} on $S^{[n_1]}\times S^{[n_2]}\times\Pic_\beta(S)$ in terms of characteristic classes using the Grothendieck-Riemann-Roch theorem applied to $\pi$. The result is an $H^{2(n_1+n_2+i)}\big(S^{[n_1]}\times S^{[n_2]}\times\Pic_\beta(S)\big)$-valued polynomial expression in the variables $$ \xymatrix@R=10pt@C=0pt{ (\beta,\id,\gamma) \ar@{=}[d]&\in& H^2(S)\ \oplus\ H^1(S)\otimes H^1(S)^*\ \oplus\ H^2(\Pic_\beta(S)) \ar@{=}[d]<-12ex> \\ c_1(\cL) & \in & \hspace{-4cm} H^2(\Pic_\beta(S)\times S).} $$ We have shown that this polynomial vanishes on an open cone of classes $\beta\gg0$ (for any $\gamma$). It therefore vanishes for all $\beta$. \end{proof} \begin{cor}\label{sero} For any curve class $\beta$, let $\cD \subset S\times S_\beta$ be the universal divisor. 
Then for $i>0$ $$c_{n_1+n_2+i}\big(R\pi_*\;\cO(\cD)-R\hom_\pi(\cI_1(-\cD),\cI_2)\big)\=0 \quad\text{on}\ S^{[n_1]}\times S^{[n_2]}\times S_\beta.$$ \end{cor} \begin{proof} By \cite[Lem 2.15]{DKO} we can identify the Hilbert scheme $S_\beta$ with the projective cone $\PP^*(R^2\pi_* \cL^*(K_S))$ of quotient line bundles of $R^2\pi_* \cL^*(K_S)$, in such a way that its natural projection to $\Pic_\beta(S)$ is given by the Abel-Jacobi morphism, and $\cO(\cD)\cong \AJ^*\cL \otimes \cO_{\PP^*}(1)$ over $S\times S_\beta$. Now substitute $$ F:=R\pi_*\;\cL-R\hom_{\pi}(\cI_1,\cI_2\otimes\cL),\qquad M:=\cO_{\PP^*}(1) $$ over $S^{[n_1]}\times S^{[n_2]}\times S_\beta$ into \eqref{FoM}. Each of the terms on the right hand side vanishes for any $\beta$ by Theorem \ref{park}. \end{proof} \begin{rmk} This result suggests that $R\pi_*\;\cO(\cD)-R\hom_\pi(\cI_1,\cI_2(\cD))$ has the same K-theory class as an honest vector bundle of rank $n_1+n_2$ on $S^{[n_1]}\times S^{[n_2]}\times S_\beta$. We show in \cite[Equation 4.27]{GT2} that this is actually true after we pull back an affine bundle over $S^{[n_1]}\times S^{[n_2]}\times S_\beta$. Therefore its higher Chern classes are zero after pulling back to this affine bundle. Since this pullback is an isomorphism on Chow groups \cite[Corollary 2.5.7]{Kr}, this gives another explanation for the vanishing of Corollary \ref{sero}. Aravind Asok kindly pointed out that it is possible that any bundle on the affine bundle is pulled back from the base; this would prove $R\pi_*\;\cO(\cD)-R\hom_\pi(\cI_1,\cI_2(\cD))$ is represented by a bundle on $S^{[n_1]}\times S^{[n_2]}\times S_\beta$. \end{rmk} \section{Alternative approach to the virtual cycle using $\Jac(S)$}\label{jack} Instead of removing $H^1(\cO_S)$ by hand, as we did in Section \ref{arb}, we can do it geometrically by replacing the moduli space $S^{[n]}$ of ideal sheaves by the moduli space $S^{[n]}\times\Jac(S)$ of rank 1 torsion free sheaves. 
Let $\cL$ be a Poincar\'e line bundle over $S\times\Jac(S)$, and let $$ \cL_1,\,\cL_2\To\big[S^{[n_1]}\times\Jac(S)\big]\times\big[S^{[n_2]}\times\Jac(S)\big]\times S $$ be $\pi_{25}^*\;\cL$ and $\pi_{45}^*\;\cL$ respectively, where $\pi_{ij}$ is projection to the product of the $i$th and $j$th factors. Then the degeneracy locus of the 2-term complex\footnote{It is only 2-term if $p_g(S)=0$. If $p_g(S)>0$ then we can pull back to an affine bundle where $H^2(\cO_S)$ splits off, as in Section \ref{splitpr}.} \beq{2turm} R\hom_{\pi}(\cI_1\otimes\cL_1,\cI_2\otimes\cL_2) \eeq is $$ S^{[n_1,n_2]}\times\Jac(S)\ \subset\ \big[S^{[n_1]}\times\Jac(S)\big]\times\big[S^{[n_2]}\times\Jac(S)\big], $$ where the map is the product of the usual inclusion $S^{[n_1,n_2]}\subset S^{[n_1]}\times S^{[n_2]}$ with the diagonal map $\Jac(S)\subset\Jac(S)\times\Jac(S)$. Therefore, just as in Sections \ref{degloc} and \ref{00}, $S^{[n_1,n_2]}\times\Jac(S)$ inherits a perfect obstruction theory $$ \big(\ext^1_p(\cI_1,\cI_2)\big)^*\To\Omega_{S^{[n_1]}\times\Jac(S)\times S^{[n_2]}\times\Jac(S)}\big|_{S^{[n_1,n_2]}\times\Jac(S)} $$ (note the $\cL_i$ cancel over the diagonal $\Jac(S)$). And the resulting virtual cycle, pushed forward to $S^{[n_1]}\times\Jac(S)\times S^{[n_2]}\times\Jac(S)$, is $$ c\_{n_1+n_2+g}\big(R\hom_\pi(\cI_1\otimes\cL_1,\cI_2\otimes\cL_2)\big),\qquad g:=h^{0,1}(S). $$ Everything so far has been invariant under the obvious diagonal action of $\Jac(S)$. Taking a slice by pulling back to $\{\cO_S\}\times\Jac(S)\subset\Jac(S)\times\Jac(S)$ gives the following. \begin{prop} \label{night} There is a perfect obstruction theory \beq{mapp} \big(\ext^1_p(\cI_1,\cI_2)\big)^*\To\Omega_{S^{[n_1]}\times S^{[n_2]}\times\Jac(S)}\big|_{S^{[n_1,n_2]}\times\{\cO_S\}} \eeq on $S^{[n_1,n_2]}$. 
The push forward of the resulting virtual cycle $$ \big[S^{[n_1,n_2]}\big]^{\vir}\,\in\,A_{n_1+n_2}\big(S^{[n_1,n_2]}\big) $$ to $S^{[n_1]}\times S^{[n_2]}\times\Jac(S)$ is \beq{PD} c_{n_1+n_2+g}\big(R\hom_{\pi}(\cI_1,\cI_2\otimes\cL)[1]\big). \vspace{-6mm} \eeq \ \hfill$\square$ \end{prop} \begin{rmk} The canonical section $$ \cO\To\hom(\cI_1,\cI_2)\To R\hom(\cI_1,\cI_2) $$ over $S^{[n_1,n_2]}\times S$ gives \beq{mapq} R^1p_*\;\cO\To\ext^1_p(\cI_1,\cI_2). \eeq Dualising gives $$ \big(\ext^1_p(\cI_1,\cI_2)\big)^*\To H^1(\cO_S)^*\otimes\cO_{S^{[n_1,n_2]}}\ \cong\ \Omega_{\Jac(S)}. $$ One can show that this map is the projection of \eqref{mapp} to $\Omega_{\Jac(S)}$. So letting $\ext^1_p(\cI_1,\cI_2)_0$ denote the cokernel of the injection \eqref{mapq}, we can simplify the perfect obstruction theory \eqref{mapp} to $$ \big(\ext^1_p(\cI_1,\cI_2)\_0\big)^*\To\Omega_{S^{[n_1]}\times S^{[n_2]}}\big|_{S^{[n_1,n_2]}}, $$ recovering the one of Section \ref{arb} by Corollary \ref{vtb}. \end{rmk} \begin{rmk} The degeneracy locus $S^{[n_1,n_2]}$ of Proposition \ref{night} lies in \beq{innn} S^{[n_1]}\times S^{[n_2]}\times\{\cO_S\}\ \stackrel j\ensuremath{\lhook\joinrel\relbar\joinrel\rightarrow}\ S^{[n_1]}\times S^{[n_2]}\times\Jac(S), \eeq and \eqref{PD} gives an expression for the pushforward of the virtual cycle to the right hand side of \eqref{innn}. It would be nice to deduce a similar expression for the pushforward of the virtual cycle to the left hand side of \eqref{innn} (as we managed in Theorem \ref{aim} using the \emph{ad hoc} method of Section \ref{splitpr} to remove $H^1(\cO_S)$). The more geometric method of this Section does not seem to give such an expression directly. But we \emph{can} deduce it from \eqref{PD} if we use the generalised Carlsson-Okounkov vanishing result of Theorem \ref{park}. 
This allows us to write \begin{multline} \label{mult} c_{n_1+n_2+g}\big(R\hom_{\pi}(\cI_1,\cI_2\otimes\cL)[1]\big)\= \\ c_g\big(R\pi_*\;\cL[1]\big)\cdot c_{n_1+n_2}\big(R\pi_*\;\cL-R\hom_{\pi}(\cI_1,\cI_2\otimes\cL)\big) \end{multline} on $S^{[n_1]}\times S^{[n_2]}\times\Jac(S)$, because the higher Chern classes of $R\pi_*\;\cL-R\hom_{\pi}(\cI_1,\cI_2\otimes\cL)$ vanish. (The lower Chern classes do not feature because they are multiplied by $c_{>g}\big(R\pi_*(\cL)\big)$ which are pulled back from $\Jac(S)$ of dimension $g$ and so are zero.) Setting $n_1=0=n_2$ in \eqref{PD} shows $c_g\big(R\pi_*\;\cL[1]\big)$ is Poincar\'e dual to the origin $\cO_S\in\Jac(S)$ (all multiplied by $S^{[n_1]}\times S^{[n_2]}$). Since $\cL$ and $R\pi_*\;\cL$ become trivial on this locus, the right hand side of \eqref{mult} becomes $$ j_*\,c_{n_1+n_2}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big), $$ using the pushforward map \eqref{innn}. Combined again with \eqref{PD} this recovers the result of Theorem \ref{aim}, that the virtual cycle's pushforward to $S^{[n_1]}\times S^{[n_2]}$ is $c_{n_1+n_2}\big(R\hom_{\pi}(\cI_1,\cI_2)[1]\big)$. However, this argument avoids circularity only if the generalised Carlsson-Okounkov vanishing of Theorem \ref{park} can be proved without using Theorem \ref{aim}. \end{rmk} \bibliographystyle{halphanum}
\section{Introduction} \label{sec:intro} Quasibound nuclear states of $\bar K$ mesons have been studied by us recently in a series of articles \cite{MFG05,MFG06,GFGM07,GFGM08}, using a self-consistent extension of nuclear relativistic mean-field (RMF) models. References \cite{MFG05,MFG06,GFGM07} focused on the widths expected for $\bar K$ quasibound states, particularly in the range of $\bar K$ separation energy $B_{\bar K} \sim 100-150$ MeV deemed relevant from $K^-$-atom phenomenology \cite{MFG06,FGa07} and from the KEK-PS E548 $^{12}{\rm C}(K^-,N)$ missing-mass spectra \cite{kish07} that suggest values of Re~$V_{\bar K}(\rho_0) \sim -(150-200)$ MeV. Such deep potentials are not reproduced at present by chirally based approaches that yield values of Re~$V_{\bar K}(\rho_0)$ of order $-100$ MeV or less attractive, as summarized recently in Ref.~\cite{WHa08}. For a recent overview of $\bar K N$ and $\bar K$-nucleus dynamics, see Ref.~\cite{galexa08}. The subject of multi-$\bar K$ nuclei was studied in Refs.~\cite{GFGM07,GFGM08}, where the focal question considered was whether or not kaon condensation could occur in strong-interaction self-bound nuclear matter. Yamazaki {\it et al.}~\cite{YDA04} argued that $\bar{K}$ mesons might provide the relevant physical degrees of freedom for reaching high-density self-bound strange matter that could then be realized as multi-$\bar{K}$ nuclear matter. This scenario requires that $B_{\bar{K}}$ beyond some threshold value of strangeness exceeds $m_Kc^2 + \mu_N - m_{\Lambda}c^2 \gtrsim 320$~MeV, where $\mu_N$ is the nucleon chemical potential, thus allowing for the conversion $\Lambda \to \bar{K} + N$ in matter. 
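For orientation only (this estimate is not part of the cited calculations), the quoted threshold can be reproduced from vacuum masses if $\mu_N$ is crudely approximated by the free nucleon mass; the in-medium nucleon chemical potential would lower the result somewhat.

```python
# Vacuum hadron masses in MeV (PDG values, rounded). Approximating mu_N by the
# free nucleon mass gives a rough upper estimate of the conversion threshold.
m_K      = 493.7    # K^- meson
m_N      = 938.9    # average nucleon
m_Lambda = 1115.7   # Lambda hyperon

mu_N = m_N
threshold = m_K + mu_N - m_Lambda  # B_Kbar needed to open Lambda -> Kbar + N
print(f"threshold ~ {threshold:.0f} MeV")
```

This gives roughly $317$ MeV, of the order of the $\gtrsim 320$ MeV quoted above.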
For this strong $\bar K$ binding, $\Lambda$ and $\Xi$ hyperons would no longer combine with nucleons to compose the more conventional kaon-free form of strange hadronic matter, which is made out of $\{ N,\Lambda,\Xi \}$ particle-stable configurations \cite{SDG93,SDG94} (see Ref.~\cite{SBG00} for an update), and $\bar K$ mesons would condense then macroscopically. However, our detailed calculations in Ref.~\cite{GFGM08} demonstrated a robust pattern of saturation for $B_{\bar{K}}$ and for nuclear densities upon increasing the number of $\bar K$ mesons embedded in the nuclear medium. For a wide range of phenomenologically allowed values of meson-field coupling constants compatible with assuming a deep $\bar K$-nucleus potential, the saturation values of $B_{\bar K}$ were found generally to be below 200 MeV, considerably short of the threshold value of $\approx 320$ MeV required for the onset of kaon condensation under laboratory conditions. Similar results were subsequently published by Muto {\it et al.} \cite{MMT09}. Our discussion here concerns kaon condensation in self-bound systems, constrained by the strong interactions. It differs from discussions of kaon condensation in neutron stars where weak-interaction constraints are operative for any given value of density. For very recent works on kaon condensation in neutron-star matter, see Ref.~\cite{BGB08}, where hyperon degrees of freedom were disregarded, and Ref.~\cite{Muto08a}, where the interplay between kaon condensation and hyperons was studied, and references to earlier relevant work cited therein. In our calculations of multi-$\bar{K}$ nuclei \cite{GFGM08}, the saturation of $B_{\bar{K}}$ emerged for any boson-field composition that included the dominant vector $\omega$-meson field, using the F-type SU(3) value $g_{\omega KK}\approx 3$ associated with the leading-order Tomozawa-Weinberg term of the meson-baryon effective Lagrangian. 
This value is smaller than in any of the other commonly used models \cite{GFGM08}. Moreover, the contribution of each one of the vector $\phi$-meson and $\rho$-meson fields was found to be substantially repulsive for systems with a large number of antikaons, reducing $B_{\bar{K}}$ as well as lowering the threshold value of the number of antikaons required for saturation to occur. We also verified that the saturation behavior of $B_{\bar K}$ is qualitatively independent of the RMF model applied to the nucleonic sector. The onset of saturation was found to depend on the atomic number. Generally, the heavier the nucleus is, the more antikaons it takes to saturate their separation energies. We concluded that $\bar{K}$ mesons do not provide a constituent degree of freedom for self-bound strange dense matter. In the present work we extend our previous RMF calculations of multi-$\bar{K}$ nuclei into the domain of multi-$\bar{K}$ \emph{hypernuclei}, to check whether a joint consideration of $\bar{K}$ mesons together with hyperons could bring new features or change our previous conclusions. This is the first RMF calculation that considers both $\bar{K}$ mesons and hyperons together within {\it finite self-bound} hadronic configurations. The effect of hyperonic strangeness in bulk on the dispersion of kaons and antikaons was considered by Schaffner and Mishustin \cite{SMi96}. More recently, kaon-condensed hypernuclei as highly dense self-bound objects have been studied by Muto \cite{Muto08b}, using liquid-drop estimates. The plan of the article is as follows. In Sec.~\ref{sec:model} we briefly outline the RMF methodology for multi-$\bar{K}$ hypernuclei and discuss the hyperon and ${\bar K}$ couplings to the meson fields used in the present work. Results of these RMF calculations for multi-$\bar{K}$ hypernuclei are shown and discussed in Sec.~\ref{sec:res}. We conclude with a brief summary and outlook in Sec.~\ref{sec:fin}. 
\section{Model} \label{sec:model} \subsection{RMF formalism} In the present work, our interest is primarily aimed at multiply strange baryonic systems containing (anti)kaons. We employed the relativistic mean-field approach where the strong interactions among pointlike hadrons are mediated by \emph{effective} mesonic degrees of freedom. In the following calculations we started from the Lagrangian density \begin{equation} \label{eq:l} \begin{split} {\mathcal L} &=\bar{B}\left[{\rm i}\gamma^\mu D_\mu -(M_B-g_{\sigma B}\sigma-g_{\sigma^* B}\sigma^*) \right]B \\ &+\left( D_\mu K \right)^\dagger \left( D^{\,\mu} K \right)-(m_K^2-g_{\sigma K}\,m_K\sigma- g_{\sigma^* K}\,m_K\sigma^*)K^\dagger K \\ &+ (\sigma,\sigma^*,\omega_\mu,\vec{\rho}_\mu,\phi_\mu,A_\mu \, \text{free-field terms}) -U(\sigma)-V(\omega), \end{split} \end{equation} which includes, in addition to the common isoscalar scalar ($\sigma$), isoscalar vector ($\omega$), isovector vector ($\rho$), electromagnetic ($A$) fields, and nonlinear self-couplings $U(\sigma)$ and $V(\omega)$, also \emph{hidden strangeness} isoscalar $\sigma^*$ and $\phi$ fields that couple exclusively to strangeness degrees of freedom. Vector fields are coupled to baryons $B$ (nucleons, hyperons) and $K$ mesons via the covariant derivative \begin{equation} D_\mu = \partial_\mu + {\rm i}\, g_{\omega \Phi}\, \omega_\mu + {\rm i}\, g_{\rho \Phi}\, \vec{I} \cdot \vec{\rho}_\mu + {\rm i}\, g_{\phi \Phi}\, \phi_\mu + {\rm i}\, e\, (I_3+ \textstyle\frac{1}{2}\displaystyle Y) A_\mu \:, \end{equation} where $\Phi=B$ and $K$, with $\vec{I}$ denoting the isospin operator, $I_3$ being its $z$ component, and $Y$ standing for hypercharge. 
This particular choice of the coupling scheme for $K^-$ mesons ensures the existence of a conserved Noether current, the timelike component of which can then be normalized to the number of $K^-$ mesons in the medium, \begin{equation} \rho_{K^-}=2 (E_{K^-}+g_{\omega K}\,\omega+g_{\rho K}\,\rho+g_{\phi K}\,\phi+e\, A) K^+K^-, \qquad \int {\rm d}^3 x\, \rho_{K^-} = \kappa , \end{equation} and serves as a dynamical source in the equations of motion for the boson fields in matter: \begin{equation} \begin{split} (-\nabla^2 + m_\sigma^2)\sigma =& \:g_{\sigma B} \bar{B}B + g_{\sigma K} m_K K^+\hspace{-1pt} K^- -\frac{\partial}{\partial \sigma} U(\sigma) \\ (-\nabla^2 \hspace{-2pt} + m_\sigma^{*2})\sigma^*\hspace{-4pt} =& \:g_{\sigma^* B} \bar{B}B + g_{\sigma^* K} m_K K^+\hspace{-1pt} K^- \\ (-\nabla^2 + m_\omega^2) \omega =& \:g_{\omega B} B^\dagger \hspace{-1pt}B - g_{\omega K} \rho_{K^-} +\frac{\partial}{\partial \omega} V(\omega) \\ (\,-\nabla^2 + m_\rho^2) \rho =& \:g_{\rho B} B^\dagger I_3 B - g_{\rho K} \rho_{K^-} \\ (-\nabla^2 + m_\phi^2)\phi =& \:g_{\phi B} B^\dagger \hspace{-1pt}B - g_{\phi K} \rho_{K^-} \\ -\nabla^2 A =& \:e\, B^\dagger(I_3+\textstyle\frac{1}{2}\displaystyle Y)B - e\, \rho_{K^-}. 
\end{split} \end{equation} These \emph{dynamically} generated intermediate fields then enter the Dirac equation for baryons, \begin{equation} \left[ -{\rm i} \mbox{\boldmath$\alpha$\unboldmath}\cdot\mbox{\boldmath$\nabla$\unboldmath} +\beta \left( M_B-g_{\sigma B}\sigma-g_{\sigma^* B}\sigma^* \right) +g_{\omega B}\omega +g_{\rho B}I_3\rho +g_{\phi B}\phi +e\left(I_3+\textstyle\frac{1}{2}\displaystyle Y \right)A \right]B=\epsilon B \end{equation} and the Klein-Gordon equation for $K^-$ mesons, \begin{equation} \label{eq:Kkg} [-\mbox{\boldmath$\nabla$\unboldmath}^2-E_{K^-}^2 +m_K^2 + \Pi_{K^-} ]K^-=0, \end{equation} with the in-medium $K^-$ self-energy, \begin{equation} \begin{split} \Pi_{K^-}=&-g_{\sigma K}m_K\sigma-g_{\sigma^* K}m_K\sigma^* -2E_{K^-}(g_{\omega K}\omega+g_{\rho K}\rho+g_{\phi K}\phi+eA) \\ &-(g_{\omega K}\omega+g_{\rho K}\rho+g_{\phi K}\phi+eA)^2. \end{split} \end{equation} Hence, the presence of the $\bar{K}$ mesons modifies the scalar and vector mean fields entering the Dirac equation, consequently leading to a \emph{dynamical} rearrangement of the baryon configurations and densities that, in turn, modify the ${\bar K}$ quasibound states in the medium. This requires a self-consistent solution of these coupled wave equations, a procedure followed numerically in the present as well as in our previous works. In the present work, for the sake of simplicity, we have suppressed the imaginary part of $\Pi_{K^-}$ arising from in-medium $K^-$ absorption processes except for demonstrating its effect in one example. Note that, for the range of values $B_{K^-} \gtrsim 100$ MeV mostly considered here, the effect of Im~$\Pi_{K^-}$ was found to be negligible (see Fig.~1 of Ref.~\cite{GFGM08}). 
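In uniform matter, the self-consistency cycle described above reduces to a fixed-point iteration between the mean fields and the $K^-$ energy. The following toy sketch (not the finite-nucleus solver actually used in this work; all couplings, response constants, and densities below are illustrative, in units of $m_K=1$) displays the mechanism: the antikaons deepen the scalar field while weakening the vector field felt by the other antikaons.

```python
import math

# Illustrative toy constants (NOT the fitted RMF values of this work).
m_K = 1.0                # kaon mass sets the scale
g_s, g_w = 0.3, 0.4      # toy scalar/vector antikaon couplings
c_s, c_w = 0.5, 0.5      # toy field response per unit source (~ g_B / m_field^2)

def separation_energy(n_K, rho_B=0.5, V=10.0, tol=1e-12, max_iter=1000):
    """Iterate fields <-> K^- energy to a fixed point; return B_K = m_K - E."""
    sigma = omega = 0.0
    E = m_K
    for _ in range(max_iter):
        E_star = E + g_w * omega           # shifted energy entering the K^- density
        KK = n_K / (2.0 * E_star * V)      # |K|^2 from normalizing rho_K- to n_K
        sigma_new = c_s * (rho_B + g_s * m_K * KK)
        # K^- mesons source the omega field with a sign opposite to baryons:
        omega_new = c_w * (rho_B - g_w * n_K / V)
        # uniform-matter Klein-Gordon solution: (E + g_w*omega)^2 = m_K^2 - g_s*m_K*sigma
        E_new = math.sqrt(m_K**2 - g_s * m_K * sigma_new) - g_w * omega_new
        if abs(E_new - E) < tol:
            return m_K - E_new
        sigma, omega, E = sigma_new, omega_new, E_new
    raise RuntimeError("self-consistency loop did not converge")
```

Already in this toy the per-antikaon separation energy decreases as more antikaons are added, because each $K^-$ weakens the attractive $\omega$ field felt by the others; this is the vector repulsion behind the saturation discussed in the Results section.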
\subsection{Choice of the model parameters} To parametrize the nucleonic part of the Lagrangian density (\ref{eq:l}) we considered the standard RMF parameter sets NL-SH \cite{SNR93} and NL-TM1(2) \cite{STo94}, which have been successfully used in numerous calculations of various nuclear systems. In the case of hyperons the coupling constants to the vector fields were fixed using SU(6) symmetry. For $\Lambda$ hyperons this leads to \begin{equation} \label{eq:lambda} g_{\omega \Lambda}=\frac{2}{3}g_{\omega N}, \,\, g_{\rho \Lambda}=0, \,\, g_{\phi \Lambda}=\frac{-\sqrt{2}}{3}g_{\omega N}. \end{equation} The coupling to the scalar $\sigma$ field, $g_{\sigma \Lambda}/g_{\sigma N}=0.6184\, (0.623)$ for the NL-SH (NL-TM) RMF model, was then estimated by fitting to measured $\Lambda$-hypernuclear binding energies~\cite{HTa06}. This essentially ensures the well depth of 28~MeV for $\Lambda$ in nuclear matter. The coupling of the $\Lambda$ hyperon to the scalar $\sigma^*$ field was fixed by fitting to the measured value $\Delta B_{\Lambda\Lambda}\approx1$~MeV of the uniquely identified hypernucleus $_{\Lambda\Lambda}^{~~6}{\rm He}$ \cite{Tak01}. For $\Xi$ hyperons, SU(6) symmetry gives \begin{equation} \label{eq:xi} g_{\omega\Xi}=\frac{1}{3}g_{\omega N}, \,\, g_{\rho \Xi}=-g_{\rho N}, \,\, g_{\phi\Xi}=-2\frac{\sqrt{2}}{3}g_{\omega N}. \end{equation} Because there are no experimental data for $\Xi(\Lambda)$-$\Xi$ interactions, we set $g_{\phi\Xi}=g_{\sigma^* \Xi}=0$ to avoid parameters that might lead to unphysical consequences and that, in addition, are expected to play a minor role (in analogy to the small effect, of order 1 MeV for $B_{K^-}$, found upon putting $g_{\phi\Lambda}$ and $g_{\sigma^*\Lambda}$ to zero, and as is demonstrated below in Fig.\ 7 within a different context). The coupling to the scalar $\sigma$ field was then constrained to yield an optical potential Re~$V_{\Xi^-}=-14$~MeV in the center of $^{12}$C \cite{Kha00}. 
This corresponds to $g_{\sigma \Xi}=0.299 g_{\sigma N}$ for the NL-TM2 RMF model. \begin{table} \caption{$\bar K$ and $K^-$ separation energies, $B_{\bar K}$ and $B_{K^-}$, respectively, calculated statically (in MeV) for a single antikaon $1s$ state in several nuclei, using the NL-TM nuclear RMF parametrizations (TM2 for $^{12}$C and $^{16}$O, TM1 for $^{40}$Ca and above) and vector SU(3) coupling constants, Eq.~(\ref{eq:KSU(3)}). The difference $B_{K^-}-B_{\bar K}$ is due to the $K^-$ finite-size Coulomb potential.} \label{tab:t1} \begin{ruledtabular} \begin{tabular}{lccccc} & $^{12}$C & $^{16}$O & $^{40}$Ca & $^{90}$Zr & $^{208}$Pb \\ \hline $B_{\bar K}$ & 44.8 & 42.7 & 49.8 & 54.5 & 53.6 \\ $B_{K^-}$ & 49.0 & 47.6 & 59.2 & 69.4 & 76.6 \\ \end{tabular} \end{ruledtabular} \end{table} Finally, for the antikaon couplings to the vector meson fields we adopted a purely F-type, vector SU(3) symmetry: \begin{equation} \label{eq:KSU(3)} 2g_{\omega K}=2g_{\rho K} = \sqrt{2}g_{\phi K}=g_{\rho \pi}=6.04, \end{equation} where $g_{\rho\pi}$ is due to the $\rho\rightarrow 2\pi$ decay width \cite{WHa08}. (Here we denoted by $g_{VP}$ the VPP electric coupling constant $g_{VPP}$.) Using this ``minimal'' set of coupling constants to establish correspondence with chirally based approaches, we calculate the single antikaon $1s$ separation energies $B_{\bar K}$ and $B_{K^-}$ listed in Table~\ref{tab:t1}. These separation energies are roughly 25~MeV lower than those anticipated from $\bar K N - \Sigma \pi$ coupled-channel chiral approaches \cite{WHa08}, most likely because the $K^{\star}$ vector-meson off-diagonal coupling is not included in the standard RMF formulation. The missing attraction, and beyond it, is incorporated here by coupling the antikaon to scalar fields $\sigma$ and $\sigma^*$. SU(3) symmetry is not of much help when fixing the coupling constants of scalar fields. 
Because there still is no consensus about the microscopic origin of the scalar $\sigma$ field and the strength of its coupling to ${\bar K}$ mesons \cite{TKO08,KMN09}, in this work we fitted $g_{\sigma K}$ to several assumed $K^-$ separation energies $B_{K^-}$ in the range of $100-150$ MeV for a single $K^-$ meson in selected nuclei across the periodic table, as implied by the deep $K^-$-nucleus potential phenomenology of Refs.~\cite{MFG06,kish07}. Furthermore, for use in multistrange configurations, the coupling constant to the $\sigma^*$ field is taken from $f_0(980)\rightarrow K{\bar K}$ decay to be $g_{\sigma^* K}=2.65$ \cite{SMi96}. The effect of the $\sigma^*$ field was found generally to be minor. For a more comprehensive discussion of the issue of scalar couplings, see our previous work \cite{GFGM08}. \subsection{Inclusion of the SU(3) baryon octet} We considered many-body systems consisting of the SU(3) octet $N,\Lambda,\Sigma$, and $\Xi$ baryons that can be made particle stable against strong interactions \cite{SDG93,SDG94}. The energy release $Q$ values for various conversion reactions of the type $B_1B_2\rightarrow B_3B_4$ together with phenomenological guidance on hyperon-nucleus interactions suggest that only the conversions $\Xi^-p\rightarrow \Lambda\Lambda$ and $\Xi^0n\rightarrow \Lambda\Lambda$ (for which $Q\simeq 20$~MeV) can be overcome by binding effects. It becomes possible then to form particle-stable multi-$\{ N,\Lambda,\Xi \}$ configurations for which the conversion $\Xi N\rightarrow\Lambda\Lambda$ is Pauli blocked owing to the $\Lambda$ orbitals being filled up to the Fermi level. For composite configurations with $\Sigma$ hyperons the energy release in the $\Sigma N \rightarrow \Lambda N$ conversion is too high ($Q\gtrsim 75$~MeV) and, hence, it is unlikely for hypernuclear systems with $\Sigma$ hyperons to be particle stable. 
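All hyperon and antikaon vector couplings used above follow from just $g_{\omega N}$, $g_{\rho N}$, and $g_{\rho\pi}$; the following is a minimal numerical transcription of the SU(6) relations for $\Lambda$ and $\Xi$ and of the F-type vector SU(3) relation quoted above (function and variable names are ours, for illustration only).

```python
import math

def hyperon_vector_couplings(g_wN, g_rN):
    """SU(6) vector couplings of the Lambda and Xi hyperons."""
    lam = {"omega": 2.0 / 3.0 * g_wN,
           "rho":   0.0,
           "phi":  -math.sqrt(2.0) / 3.0 * g_wN}
    xi = {"omega": 1.0 / 3.0 * g_wN,
          "rho":  -g_rN,
          "phi":  -2.0 * math.sqrt(2.0) / 3.0 * g_wN}
    return lam, xi

def antikaon_vector_couplings(g_rho_pi=6.04):
    """F-type vector SU(3): 2 g_wK = 2 g_rK = sqrt(2) g_phiK = g_rho_pi."""
    g_wK = g_rK = g_rho_pi / 2.0
    g_phiK = g_rho_pi / math.sqrt(2.0)
    return g_wK, g_rK, g_phiK
```

With $g_{\rho\pi}=6.04$ this gives $g_{\omega K}=3.02$, the ``minimal'' value $g_{\omega KK}\approx 3$ referred to in the Introduction.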
\section{Results and discussion} \label{sec:res} In Refs.~\cite{GFGM07,GFGM08} we studied multi-$\bar K$ nuclei, observing that the calculated $K^-$ separation energies as well as the nuclear densities saturate upon increasing the number of $K^-$ mesons embedded dynamically in the nuclear medium. This saturation phenomenon, which is qualitatively independent of the applied RMF model, emerged for any boson-field composition containing the dominant vector $\omega$-meson field which acts repulsively between $\bar K$ mesons. Because the calculated $K^-$ separation energies did not exceed 200~MeV, for coupling-constant combinations designed to bind a single $K^-$ meson in the range $B_{K^-} \sim 100-150$ MeV, it was argued that kaon condensation is unlikely to occur in strong-interaction self-bound hadronic matter. In this section we demonstrate that these conclusions hold also when adding, within particle-stable multistrange configurations, large numbers of hyperons to nuclei across the periodic table. \subsection{Multi-$\{ N, \Lambda, K^- \}$ configurations} Figure~\ref{fig:f1} presents $1s$ $K^-$ separation energies $B_{K^-}$ in $^{16}{\rm O}+\eta \Lambda +\kappa K^-$ multi-$K^-\Lambda$ hypernuclei as a function of the number $\kappa$ of $K^-$ mesons for $\eta = 0,2,4, 6$, and 8 $\Lambda$ hyperons, calculated in the NL-SH model for two values of $g_{\sigma K}$ ($g_{\sigma K}=0.233 g_{\sigma N}$ and $0.391 g_{\sigma N}$) chosen to produce $B_{K^-}=100$ and 150~MeV, respectively, for $\eta=0$, $\kappa=1$. In addition, the lower group of curves with $B_{K^-}<60$~MeV corresponds to $g_{\sigma K}=0$. The figure illustrates saturation of $B_{K^-}$ with the number of antikaons in multi-$\Lambda$ hypernuclei. There is an apparent increase of $B_{K^-}$ (up to 15\%) when the first two $\Lambda$ hyperons fill the $1s$ shell. Further $\Lambda$ hyperons, placed in the $p$ shell, cause only insignificant variation of $B_{K^-}$ for small values of $\kappa$. 
However, the effect of the $1p_{3/2}$-shell hyperons increases with the number of antikaons, and for $\kappa=8$ it adds another $5-10$~MeV to $B_{K^-}$. The separation energy $B_{K^-}$ is almost unaffected, or even slightly decreased, by the next two $\Lambda$ hyperons placed in the $1p_{1/2}$ shell. The figure thus suggests saturation of the $K^-$ separation energy also with the number $\eta$ of $\Lambda$ hyperons in the nuclear medium. When the $K^-$ coupling to the $\sigma$ field is switched off, $g_{\sigma K}=0$, the $K^-$ separation energy assumes relatively low values, $B_{K^-}\lesssim 50$~MeV, and decreases as a function of $\kappa$ when Im~$\Pi_{K^-}$ is considered (solid lines). In this case, the effect of $K^-$ absorption is not negligible as illustrated by the dot-dashed line showing $B_{K^-}$ for Im~$\Pi_{K^-}=0$. The effect of Im~$\Pi_{K^-}\neq 0$ for $B_{K^-}>100$~MeV in the upper groups of curves is negligible and is not shown here or in any of the subsequent figures. \begin{figure} \includegraphics[scale=0.7]{orev.eps} \caption{(Color online) The $1s$ $K^-$ separation energy $B_{K^-}$ in $^{16}{\rm O}+\eta\Lambda +\kappa K^-$ as a function of the number $\kappa$ of antikaons for several values of the number $\eta$ of $\Lambda$ hyperons, with initial values $B_{K^-}=100$ and 150~MeV for $\eta=0$, $\kappa =1$, calculated in the NL-SH RMF model. The solid (dot-dashed) lines with open symbols correspond to $g_{\sigma K}=0$ including (excluding) Im~$\Pi_{K^-}$.} \label{fig:f1} \end{figure} It is worth noting that $\eta=8$ is the maximum number of $\Lambda$ hyperons in our calculation that can be bound in the $^{16}$O nuclear core. In some of the $^{16}{\rm O}+\eta\Lambda+\kappa K^-$ allowed configurations, $1p_{1/2}$ neutrons became less bound than $1d_{5/2}$ neutrons because of the strong spin-orbit interaction. (This occurs, e.g., for $\eta=0$ when $\kappa\geq 5$ or for $\eta=8$ when $\kappa \geq 3$.) 
However, the total binding energy of the system was found always to be higher for configurations with $1p_{1/2}$ neutrons. Consequently, the standard shell configurations of oxygen are more bound and are thus energetically favorable. \begin{figure} \includegraphics[scale=0.6]{pblshell.eps} \caption{(Color online) The $1s$ $K^-$ separation energy $B_{K^-}$ in $^{208}{\rm Pb} + \eta \Lambda + \kappa K^-$ as a function of the number $\kappa$ of antikaons for several values of the number $\eta$ of $\Lambda$ hyperons, with initial value $B_{K^-}=100$~MeV for $\eta=0$, $\kappa = 1$, calculated in the NL-TM1 RMF model.} \label{fig:f2} \end{figure} The saturation of $B_{K^-}$ upon increasing the number of $\Lambda$ hyperons in multi-$K^-\Lambda$ hypernuclei based on a $^{16}$O nuclear core holds also when going over to heavier core nuclei. Figure~\ref{fig:f2} shows the $1s$ $K^-$ separation energy $B_{K^-}$ in $^{208}{\rm Pb}+\eta \Lambda +\kappa K^-$ multi-$K^-\Lambda$ hypernuclei as a function of both the number $\kappa$ of $K^-$ mesons and the number $\eta$ of $\Lambda$ hyperons, calculated in the NL-TM1 model for $g_{\sigma K}=0.133g_{\sigma N}$ such that $B_{K^-}=100$~MeV for $\eta=0$, $\kappa=1$. For any given number $\eta$ of $\Lambda$ hyperons, $B_{K^-}$ saturates with the number $\kappa$ of $K^-$ mesons, reaching its maximum value for $\kappa =12$. Moreover, $B_{K^-}$ increases with the number of hyperons up to $\eta=20$, when it reaches its maximum value $B_{K^-} \approx 110$~MeV for $\kappa=12$, and then starts to decrease with $\eta$. Consequently, in the Pb configurations with 100 $\Lambda$ hyperons and more than 5 $K^-$ mesons, $K^-$ mesons are even less bound than in configurations with no $\Lambda$ hyperons. 
The decrease of $B_{K^-}$ with $\eta$ beyond $\eta=20$ is apparently related to a depletion of the central nuclear density in the presence of a massive number of hyperons in outer shells, as confirmed by some of the subsequent figures, because $B_{K^-}$ is greatly affected by the central nuclear density. \subsection{Multi-$\{ N, \Lambda, \Xi, K^- \}$ configurations} When building up baryonic multi-$\{ N,\Lambda,\Xi \}$ configurations with maximum strangeness for selected core nuclei, we first filled $\Lambda$ hyperon single-particle states in a given nuclear core up to the $\Lambda$ Fermi level. Subsequently, we added $\Xi^0$ and $\Xi^-$ hyperons as long as the reaction $[AN,\eta\Lambda,\mu\Xi]\rightarrow[(A-1)N,\eta\Lambda,(\mu-1)\Xi]+2\Lambda$ was energetically forbidden (here, $[...]$ denotes a bound configuration). Finally, we checked that the inverse reaction $[AN,\eta\Lambda,\mu\Xi]\rightarrow[(A+1)N,(\eta-2)\Lambda,(\mu+1)\Xi]$ is kinematically blocked as well. These conditions guarantee that such $\{ N,\Lambda, \Xi \}$ multistrange configurations are particle stable against strong interactions, decaying only via weak interactions. Clearly, the number of $\Xi$ hyperons bound in a given system depends on the depth $-V_{\Xi}$ of the $\Xi$-nucleus potential. We adopted a value for $g_{\sigma\Xi}$ that gives $V_{\Xi}^{\rm Dirac}=V_{\rm S}+V_{\rm V}=-18$~MeV, corresponding to a depth of $-V_{\Xi}^{\rm Schr.}\simeq 14$~MeV for use in the Schr\"{o}dinger equation \cite{Kha00}. For comparison, in some cases we also considered $V_{\Xi}^{\rm Dirac}=-25$~MeV. 
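The two energy-balance conditions just described can be sketched as follows. Here `E` stands for the total binding energy of a configuration (in our work this is the RMF result; the linear `toy_E` below is purely illustrative), and `dm` is the free-space mass excess $m_{\Xi^-}+m_p-2m_\Lambda\approx 28.6$~MeV, of the same order as the $Q\simeq 20$~MeV quoted above.

```python
def is_particle_stable(E, A, eta, mu, dm=28.6):
    """Energy-balance check for [A N, eta Lambda, mu Xi]; E(A, eta, mu) is
    the total binding energy (MeV, positive for a bound configuration)."""
    # forward: [A, eta, mu] -> [A-1, eta, mu-1] + 2 free Lambdas,
    # blocked iff the removed (N + Xi) pair is bound by at least dm
    dE_fwd = E(A, eta, mu) - E(A - 1, eta, mu - 1)
    # inverse: [A, eta, mu] -> [A+1, eta-2, mu+1],
    # blocked iff the daughter does not gain more than dm in binding
    dE_inv = E(A + 1, eta - 2, mu + 1) - E(A, eta, mu)
    return dE_fwd >= dm and dE_inv <= dm

def toy_E(b_Xi):
    """Toy binding model: 8 MeV per nucleon, 20 per Lambda, b_Xi per Xi."""
    return lambda A, eta, mu: 8.0 * A + 20.0 * eta + b_Xi * mu
```

In this toy, a sufficiently deep $\Xi$ binding stabilizes the configuration while a shallow one does not, mirroring the sensitivity to $V_{\Xi}$ discussed below.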
\begin{figure} \includegraphics[scale=0.7]{olx2.eps} \caption{(Color online) The $1s$ $K^-$ separation energy $B_{K^-}$ in $^{40}{\rm Ca}$, $^{90}{\rm Zr}$, and $^{208}{\rm Pb}$ with $\eta\Lambda + \mu\Xi +\kappa K^-$ as a function of the number $\kappa$ of antikaons, with initial value $B_{K^-}=100$~MeV for $\eta=\mu=0$, $\kappa = 1$, calculated in the NL-TM1 RMF model.} \label{fig:f3} \end{figure} The $^{16}$O core can accommodate up to $\eta=8$ $\Lambda$ hyperons in particle-stable configurations, and the $^{16}{\rm O} + 8 \Lambda$ system admits many more, of order 40 $K^-$ mesons. However, we have not found any energetically favorable conversion $\Lambda\Lambda\rightarrow\Xi N$ in $^{16}{\rm O}+\eta\Lambda+\kappa K^-$ systems. Therefore, $\Xi$ hyperons are not part of any particle-stable multistrange configurations built upon the $^{16}$O core. While checking the energy balance in heavier systems with $^{40}$Ca, $^{90}$Zr, and $^{208}$Pb nuclear cores, we found particle-stable configurations: $^{40}{\rm Ca}+20\Lambda +2\Xi^0$, $^{90}{\rm Zr}+40\Lambda +2\Xi^0+2\Xi^-$, and $^{208}{\rm Pb}+106\Lambda +8\Xi^0+18\Xi^-$. We then embedded several $K^-$ mesons in these configurations and studied density distributions and binding energies in such multi-$K^-$ hypernuclear systems. Figure~\ref{fig:f3} demonstrates the calculated $1s$ $K^-$ separation energy $B_{K^-}$ in $^{40}{\rm Ca}+20\Lambda +2\Xi^0+\kappa K^-$, $^{90}{\rm Zr}+40\Lambda + 2\Xi^0+2\Xi^-+\kappa K^-$, and $^{208}{\rm Pb}+106\Lambda +8\Xi^0+18\Xi^- +\kappa K^-$ as a function of the number $\kappa$ of $K^-$ mesons. For comparison, in the case of the $^{208}$Pb core, we also present calculations done excluding $\Xi$ hyperons but keeping the same number, $\eta=106$, of $\Lambda$ hyperons. A decrease of $B_{K^-}$ upon adding hyperons ($\Xi$ in this case) is noted, in line with the trend observed and discussed for Fig.~\ref{fig:f2} above. 
The calculations shown in Fig.~\ref{fig:f3} were performed within the NL-TM1 nuclear RMF scheme using values of $g_{\sigma K}=0.211 g_{\sigma N}$ ($^{40}$Ca) and $0.163 g_{\sigma N}$ ($^{90}$Zr), which yield $B_{K^-}=100$ MeV for a single $K^-$ nuclear configuration with $\eta=\mu=0$, where $\mu$ denotes the number of $\Xi$ hyperons. The figure demonstrates that the saturation of $K^-$ separation energies, observed for multi-$\Lambda$ hypernuclei in Figs.~\ref{fig:f1} and \ref{fig:f2}, holds also when $\Xi$ hyperons are added dynamically within particle-stable configurations and that the heavier the system is, the larger the number $\kappa$ of antikaons it takes to saturate $B_{K^-}$. It is worth noting that in all cases $B_{K^-}$ does not exceed 120~MeV. Finally, the two curves for a $^{90}$Zr nuclear core in Fig.~\ref{fig:f3} (using diamond symbols) show the sensitivity to the value assumed for the $\Xi$ hyperon potential depth, the standard $-V_{\Xi}^{\rm Dirac}=18$ MeV, and a somewhat increased depth $-V_{\Xi}^{\rm Dirac}=25$~MeV, illustrating the tiny effect it exerts on $B_{K^-}$ that is noticeable only for $\kappa < 12$. \begin{figure} \includegraphics[scale=0.65]{ca-zr-25.eps} \caption{(Color online) The $1s$ $K^-$ separation energy $B_{K^-}$ in $^{40}{\rm Ca}$ and $^{90}{\rm Zr}$ with $\eta\Lambda +\mu\Xi +\kappa K^-$, for $V_{\Xi}^{\rm Dirac}=-25$~MeV, as a function of the number $\kappa$ of antikaons, with initial value $B_{K^-}=100$~MeV for $\eta=\mu=0$, $\kappa=1$, calculated in the NL-TM1 RMF model.} \label{fig:f4} \end{figure} A deeper $\Xi$ potential supports binding of more $\Xi$ hyperons in a given multi-$\Lambda$ hypernucleus. For $V_{\Xi}^{\rm Dirac}=-18$~MeV, only 2$\Xi^0$ and $2\Xi^0+2\Xi^-$ hyperons were found to be bound in $^{40}{\rm Ca}+ 20\Lambda$ and $^{90}{\rm Zr}+40\Lambda$, respectively. 
However, for $V_{\Xi}^{\rm Dirac}=-25$~MeV it is possible to accommodate up to $8\Xi^0+ 2\Xi^-$ hyperons in $^{40}{\rm Ca}+20\Lambda$ and $8\Xi^0+8\Xi^-$ hyperons in $^{90}{\rm Zr} + 40\Lambda$. Figure~\ref{fig:f4} presents the $1s$ $K^-$ separation energy $B_{K^-}$ in multi-$K^-$ hypernuclei $^{40}{\rm Ca} + 20\Lambda + 8\Xi^0 + 2\Xi^- + \kappa K^-$ and $^{90}{\rm Zr} + 40\Lambda + 8\Xi^0 + 8\Xi^- + \kappa K^-$ as a function of the number $\kappa$ of $K^-$ mesons, calculated in the NL-TM1 model for $V_{\Xi}^{\rm Dirac}=-25$ MeV, using values for $g_{\sigma K}$ such that $B_{K^-}=100$ MeV in $^{40}{\rm Ca} +1K^-$ and in $^{90}{\rm Zr} +1K^-$. The figure illustrates that the saturation of the $K^-$ separation energy occurs also in baryonic systems with three species of hyperons, $\Lambda$, $\Xi^0$, and $\Xi^-$, reaching quite large fractions of strangeness [${|S|}/{B} = 0.57(0.8)$ for a Ca(Zr) core]. We note that the separation energy $B_{K^-}$ barely exceeds 120~MeV in these cases too. \begin{figure} \includegraphics[scale=0.7]{zrny3rev.eps} \caption{(Color online) Density distributions in $^{90}{\rm Zr}+40\Lambda +2\Xi^0 +2\Xi^- + \kappa K^-$, for $\kappa=0$ (top panel) and $\kappa=10$ (bottom panel), with $B_{K^-}=100$~MeV for $\eta=\mu=0$, $\kappa = 1$, calculated in the NL-TM1 RMF model. The dotted line corresponds to the nucleon density $\rho_N$ in $^{90}$Zr. The densities $\rho_{\Lambda}$ (open diamonds) and $\rho_N$ (open circles) in $^{90}{\rm Zr} + 40\Lambda + \kappa K^-$ are shown for comparison.} \label{fig:f5} \end{figure} We also studied the rearrangement of nuclear systems induced by embedding hyperons and $K^-$ mesons. Figure~\ref{fig:f5} presents the evolution of the density distributions in Zr after first adding $40\Lambda+4\Xi$ hyperons (top panel) and then 10 $K^-$ mesons (bottom panel). The nucleon density $\rho_N$ in $^{90}$Zr is denoted by a dotted line. 
The relatively weakly bound hyperons with extended density distributions (dashed line, solid diamonds) attract nucleons, thus depleting the central nucleon density $\rho_N$ (dashed line, circles). Adding 10 more $K^-$ mesons to the hypernuclear system induces a large rearrangement of the baryons. The $K^-$ mesons, which pile up near the origin (solid line, squares), attract the surrounding nucleons and hyperons. Consequently, the densities $\rho_N$ and $\rho_Y$ (solid lines, solid circles and diamonds, respectively) increase considerably in the central region. The resulting configuration $^{90}{\rm Zr}+40\Lambda +2\Xi^0+2\Xi^- +10K^-$ is thus significantly compressed, with central baryon density $\rho_B$ exceeding the nuclear density in $^{90}$Zr by a factor of roughly 3. For comparison, we also present in Fig.~\ref{fig:f5} the $\Lambda$ hyperon ($\rho_{\Lambda}$, open diamonds) and nucleon ($\rho_N$, open circles) density distributions calculated in $^{90}{\rm Zr}+40\Lambda+\kappa K^-$ for $\kappa=0$ and $10$ $K^-$ mesons. The removal of the $1s$-state $\Xi$ hyperons from the primary baryonic configuration $^{90}{\rm Zr}+40\Lambda+ 2\Xi^0+2\Xi^-$ considerably affects the hyperon density distribution $\rho_Y$ in the central region of the nucleus, this effect being magnified by the presence of $K^-$ mesons. In contrast, the nucleon density $\rho_N$ remains almost intact. For $\kappa=10$, $\Xi$ hyperons appear to repel nucleons from the center of the multi-$\{ N, Y, {\bar K}\}$ system, much like $\Lambda$ hyperons do. \subsection{Multi-$\{ N, \Lambda, \Xi, K^+ \}$ configurations} \begin{figure} \includegraphics[scale=0.7]{vkp2.eps} \caption{(Color online) The $K^+$ static potential in $^{16}{\rm O} + \eta \Lambda - \nu p$, calculated in the NL-SH RMF model.} \label{fig:f6} \end{figure} The $K^+$-nucleus potential is known to be repulsive, with $V_{K^+} \approx 30$~MeV at central nuclear density \cite{FGa07}. 
Schaffner and Mishustin \cite{SMi96} suggested that the presence of hyperons could lead eventually to a decrease of the repulsion that $K^+$ mesons undergo in nuclear matter so that the $K^+$ potential might even become attractive. Here we studied the possibility of binding $K^+$ mesons in hypernuclear matter, neglecting for simplicity dynamical effects arising from coupling $K^+$ mesons to the hypernuclear system. The $K^+$-nucleus potential was constructed simply by applying a $G$-parity transformation to the corresponding $K^-$ potential, choosing $g_{\sigma K}$ such that it produces $B_{K^-}=100$~MeV in the given core nucleus. Figure~\ref{fig:f6} shows the radial dependence of the real part of the static $K^+$ potential in various hypernuclear systems connected with $^{16}$O. The dotted line shows the repulsive $K^+$ potential in $^{16}$O for comparison. The figure indeed shows that the repulsion decreases, from roughly 30 MeV down to roughly 20 MeV with the number of $\Lambda$ hyperons added to the nuclear core, but the $K^+$ potential remains always repulsive in $^{16}$O+$\eta\Lambda$ systems. Searching for a $K^+$ bound state in hadronic systems we also calculated the $K^+$ potential in more exotic multistrange hypernuclei $^{\rm A}{\rm Z} + \eta\Lambda - \nu p$, where several protons are removed from the nuclear core in an attempt to increase the $|S|/B$ ratio and to reduce Coulomb repulsion. Figure~\ref{fig:f6} indicates that such removal of protons from $^{16}$O has a sizable effect on the shape of the $K^+$ potential, which may result in a shallow attractive pocket. However, the attraction is insufficient to bind a $K^+$ meson in these hadronic systems. Our calculations confirmed that the above conclusion holds also in heavier hypernuclear configurations based on Ca, Zr, and Pb cores. 
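The $G$-parity construction used here amounts to flipping the sign of the vector part of the $K^-$ potential while retaining the scalar part. A toy decomposition illustrates this; the 35/65~MeV scalar/vector split below is an assumption of ours, chosen only so that the $K^-$ depth is 100~MeV and the $K^+$ repulsion 30~MeV, as in the text.

```python
def kaon_potentials(S, V):
    """Toy G-parity split of the static kaon potentials (MeV).
    S: scalar (sigma, sigma*) part, attractive for both K^- and K^+;
    V: vector (omega, rho, phi) part, attractive for K^-, repulsive for K^+."""
    V_Kminus = -(S + V)   # both parts attract the K^-
    V_Kplus = -S + V      # vector part flips sign under G parity
    return V_Kminus, V_Kplus

V_Km, V_Kp = kaon_potentials(35.0, 65.0)   # -> (-100.0, 30.0)
```

In this toy, diluting the nucleon-sourced vector fields (as added hyperons do) lowers the $K^+$ repulsion, but the $K^+$ potential stays repulsive as long as the vector part exceeds the scalar one, in line with Figs.~\ref{fig:f6} and \ref{fig:f7}.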
\begin{figure} \includegraphics[scale=0.7]{zr2.eps} \caption{(Color online) The $K^+$ static potential in $^{90}{\rm Zr} + \eta \Lambda + \mu (\Xi^0 + \Xi^-)$, calculated in the NL-TM1 RMF model.} \label{fig:f7} \end{figure} In heavier nuclei, where it becomes possible to also accommodate $\Xi$ hyperons in addition to $\Lambda$ hyperons, the $K^+$ repulsion may be further reduced. This is demonstrated in Fig.~\ref{fig:f7} for a $^{90}$Zr nuclear core. However, this reduction is insufficient to reverse the repulsion into attraction. The figure also shows that the {\it hidden strangeness} couplings (chosen to be $g_{i\Xi}=2g_{i\Lambda},\; i=\sigma^*,\; \phi $) have no effect whatsoever on the reduction accomplished by the presence of $\Xi$ hyperons. Finally, we searched for $K^+$ bound states in nuclei sustained by $K^-$ mesons. The presence of deeply bound $K^-$ mesons makes the $K^+$ potential immensely deep (more than 100~MeV in $^{16}{\rm O}+8{\rm K}^-$). However, because the $K^-$ mesons are concentrated at the very center of the nucleus, the $K^+$ potential is of a rather short range of about 1~fm. As a result, we found only very weakly bound $K^+$ states (by 1~MeV) in multi-$\{ N,Y,K^- \}$ configurations. A more careful treatment of $K^+K^-$ dynamics near threshold is necessary before coming to further conclusions, but our conclusion is not at odds with recent studies of the $I=1/2, J^{\pi}={1/2}^+$ $K{\bar K}N$ system \cite{JKE08,TKO09}. \section{Summary and conclusions} \label{sec:fin} In this work, the RMF equations of motion for multi-$\bar K$ hypernuclei were formulated and solved for self-bound finite multistrange configurations. The choice of coupling constants of the constituents -- nucleons, hyperons, and $\bar K$ mesons -- to the vector and scalar meson fields was guided by a combination of accepted models and by phenomenology. The sensitivity to particular chosen values was studied. 
The results of the RMF calculations show a robust pattern of binding-energy saturation for $\bar K$ mesons as a function of their number $\kappa$. Compared to our previous RMF results for multi-$\bar K$ nuclei \cite{GFGM08}, the added hyperons do not bring about any quantitative change in the saturating $B_{K^-}(\kappa)$ curve. The main reason for saturation remains the repulsion induced by the vector meson fields, primarily $\omega$, between $\bar K$ mesons. The SU(3)$_V$ values adopted here for $g_{vK}$, Eq.~(\ref{eq:KSU(3)}), provide the ``minimal'' strength for $g_{\omega K}$ out of several other choices made in the literature, implying that the saturation of $B_{K^-}(\kappa)$ persists also for other choices of coupling-constant sets, as discussed in Ref.~\cite{GFGM08}. The repulsion between $\bar K$ mesons was also the primary reason for saturation in multi-$\bar K$ nuclei, both in our previous work \cite{GFGM08} and in Ref.~\cite{MMT09}. The saturation of $B_{K^-}$ with typical values below 200 MeV, considerably short of what it takes to replace a $\Lambda$ hyperon by a nucleon and a $\bar K$ meson, means that $\bar K$ mesons do not compete favorably and thus cannot replace hyperons as constituents of strange hadronic matter. In other words, $\bar K$ mesons do not condense in self-bound hadronic matter. The baryon densities of multi-$\bar K$ hypernuclei lie in the range $(2-3)\rho_0$, where $\rho_0$ is nuclear-matter density. This is somewhat above the values obtained without $\bar K$ mesons, but still within the density range where hadronic models are likely to be applicable. Our conclusion of no ``kaon condensation'' is specific to self-bound finite hadronic systems run under strong-interaction constraints. It is not directly related to the Kaplan-Nelson conjecture of macroscopic kaon condensation \cite{KNe86}, nor to hadronic systems evolving subject to weak-interaction constraints, such as neutron stars. 
Yet, this conclusion has been challenged recently by Muto \cite{Muto08b}, who uses the liquid-drop approach to claim that multi-$\bar K$ hypernuclei (termed by him ``kaon-condensed hypernuclei'') may provide the ground-state configuration of finite strange hadronic systems at densities of about $9\rho_0$. Of course this high value of density for kaon-condensed hypernuclei is beyond the range of applicability of hadronic models, because quark-gluon degrees of freedom must enter in this density range. His calculation also reveals an isomeric multistrange hypernuclear state, without $\bar K$ mesons, at a density of about $2\rho_0$, which is close to what we find here within an RMF bound-state calculation. The appearance of a high-density kaon-condensed hypernuclear bound state in Muto's calculation might be just an artifact of the applied liquid-drop methodology, which does not provide an accurate substitute for a more microscopically oriented bound-state calculation. The role of $K^-$ strong decays in hadronic matter was played down in the present calculation of multi-$K^-$ hypernuclei because our aim, primarily, was to discuss and compare (real) binding energies of strange hadronic matter with and without $K^-$ mesons. The width of deeply bound $K^-$ nuclear configurations was explored by us in Refs.~\cite{MFG05,MFG06,GFGM07}, concluding that residual widths of order $\Gamma_{K^-} \sim 50$~MeV due to $K^-NN \to \Lambda N,\Sigma N$ pionless conversion reactions are expected in the relevant range of binding energy $B_{K^-} \sim 100 - 200$~MeV. This estimate should hold also in multi-$K^-$ hypernuclei where added conversion channels are allowed: $K^-NY \to \Lambda Y, \Sigma Y$, $K^-N \Lambda \to N \Xi$, and $K^- \Lambda Y \to \Xi Y$. 
We know of no physical mechanism capable of reducing substantially these widths, and therefore we do not anticipate multi-$K^-$ nuclei or multi-$K^-$ hypernuclei to exist as relatively long-lived isomeric states of strange hadronic matter which consists of multi-$\{ N,\Lambda,\Xi \}$ configurations. \begin{acknowledgments} This work was supported in part by GACR grant 202/09/1441 and by SPHERE within the FP7 research grant system. AG acknowledges instructive discussions with Wolfram Weise and the support extended by the DFG Cluster of Excellence, Origin and Structure of the Universe, during a visit to the Technische Universit\"{a}t M\"{u}nchen. \end{acknowledgments}
\section{Introduction} \label{sec:introduction} The study of the social impact of automated decision making has focused largely on issues of fairness at the point of decision, evaluating the fairness (with respect to a population) of a sequence or pipeline of decisions, or examining the dynamics of a game between the decision-maker and the decision subject. What is missing from this study is an examination of \emph{precarity}: a term coined by Judith Butler to describe an unstable state of existence in which negative decisions can have ripple effects on one's well-being. Such ripple effects are not captured by changes in income or wealth alone or by one decision alone. To study precarity, we must reorient our frame of reference away from the decision-maker and towards the decision subject; away from aggregates of decisions over a population and towards aggregates of decisions (for an individual) over time. An individual who lives with higher precarity is more affected by, and less able to recover from, the same negative decision than another with low precarity. Thus, considering only the direct impact of a single decision or a few decisions is insufficient to judge whether a system is fair. However, precarity is not an attribute of an individual; it is a result of being subject to greater risks and fewer supports, in addition to starting from a less secure position. Precarity is impacted by racism, sexism, ableism, heterosexism, and other systems of oppression, and an individual's intersectional identity may put one at greater risk in society: subject to a lower income for the same job, less able to build wealth even at the same income level, and less able to recover from harm. Given that automated decision systems and public policy rules operate in a world in which some people's long-term well-being is impacted more by the same action, how do we account for the effects of automated decisions and, more generally, proposed public policy rules? 
One may advocate for pilot studies, in which the policy or algorithm is deployed on some group. However, since precarity is a long-term consequence, a pilot study will necessarily take a long time to evaluate its effects. When a policy is needed for urgent circumstances, such as addressing the impact of a pandemic, there is little opportunity for testing policies in pilots. In this paper, we propose a modeling framework to simulate the effects of compounded decisions on an individual over time, incorporating a quantification of their precarity. Our framework allows us to explore the effects of different kinds of decision-making processes on individuals' levels of precarity. In particular, we are able to demonstrate the ill effects of compounded decision making on the fairness of automated decisions and policies. While our model does not capture the full extent of the realities which place some individuals in the precarious position of being more harmed by the same decision compared to someone in a less precarious state, our model does add sufficient complexity to demonstrate how this can happen and, further, provides a method to quantify the effect. The message for fairness advocates is that one must look beyond the effect of a single decision on a large number of people and examine how aggregates of decisions over time impact individuals as a function of their precarity. \paragraph{Our contributions:} The main contributions of this paper can be summarized as follows: \begin{itemize} \item We introduce the idea of \emph{precarity} to the world of automated decision making, drawing on an extensive literature in sociology and economics. \item We build a simulation framework to experiment with and understand the evolution of precarity in a population. This framework incorporates ideas from macroeconomics as well as the framework of bounded rationality to capture the way income \emph{shocks} affect the long-term dynamics of individual wealth. 
\item We present a suite of insights drawn from our simulation platform that validate observations on precarity made especially visible during the recent pandemic and illustrate how we can evaluate the effectiveness of proposed policy interventions. \end{itemize} The simulation framework, along with associated scripts, is available at \url{https://github.com/pnokhiz/precarity}. \section{Background and related work} \label{motivation} \paragraph{Precarity:} Precarity \cite{anthro} is a multi-faceted concept that very broadly speaks to the instability and \emph{precariousness} of people's lives. It has been interpreted as an economic condition \cite{nancyworth, greigDepeuter}, a sociological condition that speaks to the interconnectedness and therefore vulnerability of human lives \cite{butler2006precarious,butler2016frames}, as a descriptor of a political class characterized by irregular or transient employment \cite{standing2014precariat,gill2016century}, or as a psychological condition of exclusion and displacement \cite{allison2014precarious}. In this work, our focus is on algorithmic decision systems and the effect of compounded decision-making on households and groups. In that regard, we interpret precarity as the instability associated with sequences of negative decisions: specifically, the way in which repeated negative outcomes can increase the likelihood of one falling into poverty. Ritschard et al.\ \cite{ritschard2018index} were the first to attempt to quantify precarity (in the context of the labor market) by looking at transitions between more or less precarious states (for example, a full-time versus a part-time job). This work observes that negative transitions have the most critical role in increasing precarity. 
Aneja et al.\ \cite{aneja2019no} study the effect of incarceration on access to credit -- arguing that incarceration reduces access to credit, which in turn increases recidivism. An important recent work that has greatly influenced our thinking is by Abebe et al.\ \cite{abebe2020subsidy}. In it, they build a theoretical model to capture the effect of \emph{income shocks} on one's chance of going bankrupt and propose efficient allocations of limited stimulus to maximize the expected number of individuals saved from bankruptcy. \paragraph{Fairness in sequential decision-making:} Zhang and Liu \cite{zhang2020fairness} present a comprehensive review of work on fairness in sequential decision making, broken down by whether or not the decision process affects input features. A large body of work considers the case where input features do not change \cite{heidari2018preventing, gupta2019individual, bechavod2019equal, joseph2018meritocratic, hebert2017calibration, valera2018enhancing, li2019combinatorial, chen2019fair, gillen2018online, patil2019achieving, dwork2018fairness}. When considering how a population might evolve in response to decisions, two broad lines of work emerge -- those that consider two decision stages \cite{liu2018delayed, heidari2019long, downstream} and those that consider finite or infinite-horizon decision making \cite{labor_market_lily_hu, hashimoto2018fairness,zhang2019group,mouzannar,disparat-equi,emelianov2019price}. This latter body of work is more closely related to our study. The broad goal here is to understand how the qualifications of different groups evolve in the long run under various fairness interventions, and the conditions needed to achieve social equality. These works often focus on the problems of access and ``dropout'' (when decisions lead to withdrawal from the market) as causes of disparity between groups and propose various interventions to address them. 
Another approach to understanding sequential decision making has been to take advantage of simulations on Markov decision processes (MDPs). As \cite{fairness-static} argues, long-term fairness dynamics are hard to evaluate, and so we need simulations to assess fairness over time. MDPs can also be formally analyzed for long-term effects on group and individual fairness, as explored by \cite{jabbari2017fairness,NIPS2016_6355,wen2019fairness,fairness-static}. Our work also uses simulations to reveal insights about the underlying dynamics. Our focus, however, differs from the above works in that we a) model a system heterogeneously, where different states capture different levels of precarity, and b) focus on the effect of the system on individual \emph{trajectories} rather than only population-level outcomes. While we do explore fairness concerns, we do this in the context of precarity rather than focusing on tradeoffs between utility and fairness. \vspace{-2.0mm} \paragraph{Economic models of consumption and savings:} Macroeconomics emphasizes time-related decisions, such as consumption plans. It is often useful to assume that the time horizon is infinite in these settings, which necessitates the use of dynamic optimization. Infinite-horizon models concern optimal consumption and savings at various points in time, given that production is subject to random shocks (e.g., the optimal growth model) \cite{stokey1989recursive, ljungqvist2018recursive}. For analyzing capital income risk, which is essential for understanding the joint distribution of income and wealth \cite{benhabib2015wealth, stachurski2019impossibility}, there are other infinite-horizon models, such as the Income Fluctuations Model (the household consumption and savings problem), which is formulated in a more dynamic manner, with state-dependent returns on assets that fluctuate over time \cite{ma2020income}. 
\section{Quantifying precarity} \label{sec:quant-prec} Precarity (as discussed above) is a broad interdisciplinary notion describing the instability of modern life. In this paper, we focus on the \emph{economic} aspects of precarity -- how financial and other shocks create uncertainty around one's financial status. Within the social sciences, it has long been recognized that standard measures of inequality -- like the Gini index and others -- cannot quantify the dynamics of a precarious trajectory. Indeed, precarity has been referred to as a ``slow death'' \cite{johnson2020precarious, puar2012precarity} because of its progressive nature that unfolds for an individual over time. Much research \cite{pelletier2020measuring} has therefore gone into characterizing properties of \emph{sequences} that describe the state of an individual over time. Researchers have proposed measures that seek to capture the \emph{number} of distinct states, the number and direction of transitions between states, and even incorporate the significance and meaning of individual states in the sequence. For example, to capture the variability in states in a sequence, the entropy of the frequency distribution of states has been regularly used. To capture effects at different time scales, other researchers have proposed first constructing subsequences of the trajectory (akin to the use of skip n-gram models in text analysis). In this paper, we use one of these measures, proposed by Ritschard et al.\ \cite{ritschard2018index}, that seeks to capture three key aspects of precarity. We assume that an individual's trajectory is described as a sequence of states $\sigma = s_1, s_2, \dots, s_t$ where $s_i \in S$ and $S$ is the set of states. A \emph{quality} function $r : S \to \mathbb R$ indicates the level of financial wherewithal (where a higher quality implies a better condition). 
Then the measure of precarity for a given sequence $\sigma$ depends on \begin{itemize} \item The quality of the starting state $r(s_1)$ \item The net decline in state over $\sigma$ \item The amount of variability in $\sigma$ \end{itemize} \paragraph{Net decline in state:} We assume the states in $S$ are sorted from lowest to highest ``quality''. In any sequence $\sigma$, we can classify transitions between states as either negative or positive, depending on which state is higher. Let $q^-(\sigma)$ be the proportion of transitions that are negative, and $q^+(\sigma)$ be the proportion that are positive, and set $q(\sigma) = q^-(\sigma) - q^+(\sigma)$. The quantity $q(\sigma)$ represents the net magnitude of negative transitions and ranges between $-1$ (for purely positive transitions) and $1$ (for purely negative transitions). We note in passing that the transitions can be weighted: in that case, the proportions are appropriately calculated in a weighted manner. We weight them by the number of hops (distance) from a state to the least precarious state in the sequence. \paragraph{The variability in the sequence:} There are two factors used to define variability in $\sigma$. The first is the number of states visited, or more generally the distribution of the states entered during the sequence. This can easily be captured by computing the entropy $h(\sigma)$ of the (normalized) frequency distribution of states. This, in turn, must be normalized by the maximum possible entropy, which is simply $\log |S|$. This does not, however, capture the transitions between states. For example, consider the sequences $\sigma = (1,1, 1, 1, 0, 0, 0, 0)$ and $\sigma' = (1,0,1,0,1,0,1,0)$. Clearly $h(\sigma) = h(\sigma')$, but $\sigma'$ reflects a more erratic state of existence. To account for this, Ritschard et al.\ \cite{ritschard2018index} add in a term $t(\sigma)$ that merely counts the number of transitions to different states (normalized by $|\sigma|-1$). 
Note that $t(\sigma)/(|\sigma|-1) = 1/7$ but $t(\sigma')/(|\sigma'|-1) = 1$. These two terms are combined using their geometric mean: \[ c(\sigma) = \sqrt{\frac{h(\sigma)}{\log |S|} \frac{t(\sigma)}{|\sigma|-1}} \] \paragraph{The precarity index:} The overall precarity index of a sequence $p(\sigma)$ is a function of the initial quality $r(s_1)$, the net decline in state $q(\sigma)$, and the amount of variability $c(\sigma)$. In this paper, we use \cite{ritschard2018index}'s formulation of the index; whether other functional forms might provide different sensitivity is a matter we defer to further research. The precarity index can then be defined as: \[ p(\sigma) = \lambda r(s_1) + (1 - \lambda) c(\sigma) ^\alpha(1 + q(\sigma))^\gamma \] This can be seen as a convex combination of the starting position and terms involving dynamic components (controlled by $\lambda$). The two dynamic components are weighted by different exponents to reflect different degrees of sensitivity and importance. We set these values as $\lambda=0.2$, $\alpha=1$, and $\gamma=1.2$, as is done in \cite{ritschard2018index}. Note that we use the term $1+q(\sigma)$ to yield a factor between $0$ and $2$: if the trajectory of the sequence is purely positive (thus setting $q(\sigma) = -1$), the precarity is merely a function of the initial state. In our experiments, we test several values of $\alpha$ and $\gamma$, and they do not affect the results as long as they are above zero, since the overall effects on the underlying population are similar for all data points. We can also now elaborate on why measures like the Gini index fail to capture precarity. Precarity is a notion evaluated for an individual over time -- the precarity index is a way to quantify this as a kind of time average. The Gini index, instead, is a measure of inequality of a population measured at a snapshot in time and acts as a population-aggregate measure. 
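As a concrete illustration, the index can be computed directly from a state sequence. The Python sketch below follows the formulas above, with two simplifications that are ours rather than from the released code: transitions are unweighted, and the starting quality $r(s_1)$ is passed in as a number.

```python
import math
from collections import Counter

def precarity_index(sigma, r_first, n_states, lam=0.2, alpha=1.0, gamma=1.2):
    """Precarity index p(sigma) per the formulation above (unweighted transitions).

    sigma    : sequence of states, each an integer rank (higher = better quality)
    r_first  : quality r(s_1) of the starting state
    n_states : |S|, used to normalize the entropy term
    """
    n = len(sigma) - 1                              # number of adjacent pairs
    moves = [(a, b) for a, b in zip(sigma, sigma[1:]) if a != b]
    # Net decline q(sigma): proportion of negative minus positive transitions,
    # both normalized by |sigma| - 1, so q lies in [-1, 1].
    q = (sum(b < a for a, b in moves) - sum(b > a for a, b in moves)) / n
    # Entropy of the state-frequency distribution, normalized by log|S|.
    counts = Counter(sigma)
    h = -sum((c / len(sigma)) * math.log(c / len(sigma)) for c in counts.values())
    h_norm = h / math.log(n_states)
    # Number of transitions to a different state, normalized by |sigma| - 1.
    t_norm = len(moves) / n
    c_var = math.sqrt(h_norm * t_norm)              # geometric mean of both terms
    return lam * r_first + (1 - lam) * c_var**alpha * (1 + q)**gamma

# The two example sequences from the text: same entropy, different variability.
p_calm    = precarity_index((1, 1, 1, 1, 0, 0, 0, 0), r_first=0.5, n_states=2)
p_erratic = precarity_index((1, 0, 1, 0, 1, 0, 1, 0), r_first=0.5, n_states=2)
assert p_erratic > p_calm
```

On the two example sequences of this section, the erratic sequence $\sigma'$ receives the higher index, as intended, even though both have the same net decline and the same entropy.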
\section{A simulation based methodology for exploring precarity} \label{sec:simul-based-meth} Continuing in the line of works like \cite{fairness-static} and \cite{zheng2020ai}, we use a simulation framework to explore the dynamics of precarity. In this simulation framework, individual \emph{agents} make choices (and are subject to decisions) within a system, and are described by parameters for income, wealth and health. We use population-level economic data to initialize the system, and allow the agents to make either locally reasonable decisions (in a \emph{bounded rationality}-like framework) or allow them to maximize expected utility within epochs. Using a simulation framework with realistic input parameters and controls allows us to observe the evolution of the system in a way that would be difficult to do formally (like for example, \cite{abebe2020subsidy} is able to do for the more specific problem of income shocks), and allows us to experiment with different kinds of interventions. \paragraph{Agents:} The agents are households who interact with simulated environments in an alternating loop. Each agent is specified by their \textbf{income}, \textbf{net worth}, and \textbf{health}. An agent incurs \textbf{expenses} each time period and also earns income. Agents must make decisions about their assets -- whether to consume, pay for expenses, save, or improve their health. \paragraph{States:} We associate each agent with a set of three states (one for each of income, net worth, and health). Each state indicates which decile of the overall population they are in for that attribute (so there are a total of $10\times 10\times 10 = 1000$ possible states). \paragraph{Metrics:} We use sequences of states for each attribute separately to perform precarity computations for each agent as described in Section~\ref{sec:quant-prec} above. We record the precarity value of all households for each income decile. 
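The decile-based state encoding described above can be sketched directly; this is a minimal illustration, where the function name and the synthetic income draw are our own choices, not taken from the released code.

```python
import numpy as np

def decile_state(values):
    """Map each household's attribute value to its population decile (0-9)."""
    edges = np.percentile(values, np.arange(10, 100, 10))  # nine interior cut points
    return np.searchsorted(edges, values, side="right")

# A household's full state is the triple of deciles for income, net worth, and
# health, giving the 10 x 10 x 10 = 1000 possible states mentioned above.
incomes = np.random.default_rng(0).lognormal(mean=10.8, sigma=0.7, size=10_000)
states = decile_state(incomes)
assert set(states) == set(range(10))
```

Recomputing the percentile edges at each round lets the state labels track the evolving population distribution rather than a fixed grid.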
\paragraph{Initialization and updates:} We initialize a population of agents using parameters drawn from published statistics. For initial income, we generate an income distribution of 10,000 points using 2019 income data of the US Census Bureau's Annual ASEC survey of the Consumer Price Index (2019 dollar values) as detailed by the IPUMS Consumer Price Survey \cite{flood2020integrated, DQYDJ}. To each household, we assign a net worth (their financial and non-financial assets minus their liabilities). The net worth is assigned by detailed median percentile net worth data and median net worth by income by percentile data from the Federal Reserve.\footnote{\url{https://www.federalreserve.gov/}} The health index average of the population is extracted from the Census Bureau CPS Annual Social and Economic (March) Supplement 2019 \cite{census}.\footnote{\url{https://www.census.gov/programs-surveys/cps.html}} Note that we consider one health feature for the entire household. While health is of course a personal state, this allows us to combine this data with the household-based data for the other attributes. Each household has a set of basic expenditures each month (e.g., for food, housing, transportation, etc.). These expenses are extracted from 2019 mean annual expenditures from the Consumer Expenditure Surveys of the US Bureau of Labor Statistics.\footnote{\url{https://www.bls.gov/cex/tables.htm}} \textbf{Updating health information.} Net worth automatically updates as agents spend their income and/or save it. Income updates happen via a decision process that we describe below. What remains is to describe how the health status updates. The relationship between health and income has been observed to be positive and concave \cite{preston1975changing}. Wagstaff et al.\ \cite{doi:10.1146/annurev.publhealth.21.1.543} have proposed modeling this as a second-degree polynomial whose first derivative is positive and whose second derivative is negative. 
To define the function, we use the principle of \emph{relative income theory}, where ``health depends on income relative to average incomes of one or more reference groups'' \cite{deaton2003health}. That is, a household's health (we treat the household as a single entity) equals the population mean health, plus a term linear in the household's income relative to its reference group's mean income, minus a term quadratic in that difference: \[ h_i = \Bar{h} + \eta (w_i - w_g) - \sigma (w_i - w_g)^2, \] where $h_i$ is the household's health, $\Bar{h}$ is the mean health index of the whole population (extracted from the CPS), $w_i$ is the household's income, and $w_g$ is the mean income of the reference group, i.e., the income decile the household belongs to in each round of decision making. Here $\eta$ and $\sigma$ are positive model parameters; we choose values that result in a wider range of indices for precarity states, namely $\eta = 1$ and $\sigma = 10^{-20}$. \subsection{Income shocks} \label{sec:income-shocks} Decisions are made over 10 rounds, with income and expenditures accounted on a monthly basis. The effects of positive or negative decisions are reflected in income after each round: if a household does not stay in the same state, we deduct (add) a unit based on a negative (positive) outcome. We set this unit to 10\% of the household's income, which is less financially ruinous for lower incomes than a fixed value (e.g., \$500), since it decreases proportionally with income. Clearly, a higher value would be more beneficial for the wealthy and more ruinous for middle- and lower-income households. \textbf{Benefit decision policy.} Public policy can improve household financial stability by providing benefits, and in these simulations, we explore the effect of decision classifiers which make these decisions based on an individual's current state. 
We introduce a lenient classifier, which accepts $50\%$ of the initial population applying for the service based on their current income. The threshold is a global fixed value for the whole population, regardless of their previous transitions, highlighting the fact that the decision-maker is unaware of the precariousness of the household. We ran the experiments for a range of classifier thresholds to observe the precarity of the population under the most lenient classifier, the harshest classifier, and all classifiers in between. We chose the most lenient classifier to consider the most optimistic scenario for assigning positive decisions; the harshest classifier has the smallest impact on precarity levels. We set the default simulation specification in the interest of lower-income households. \subsection{Strategies} \label{sec:strategies} The final piece of the simulation is specifying how agents behave at each time step. The economics literature typically views agents as rational (discounted) utility maximizers, and an extensive literature has developed around different stochastic models under which to maximize utility. An alternative approach is to take a viewpoint of \emph{bounded rationality}: each agent makes realistic choices (stochastically) from a collection of options that are locally rational, but cannot perform long-range utility maximization. We simulate agent behavior under both of these models, which we describe below. 
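Before turning to the agent models, the pieces above (the relative-income health update with $\eta = 1$ and $\sigma = 10^{-20}$, the 10\%-of-income shock unit, and a fixed-threshold benefit classifier) can be put together in one simulation round. The sketch below is a simplified illustration: the function names, the median-income threshold standing in for the 50\% lenient classifier, and applying the shock to every decision are our assumptions, not code from the released repository.

```python
import numpy as np

def update_health(income, group_mean, mean_health, eta=1.0, sig=1e-20):
    """Concave health-income relation: h = h_bar + eta*d - sig*d^2, d = w_i - w_g."""
    d = income - group_mean
    return mean_health + eta * d - sig * d**2

def one_round(incomes, mean_health, shock=0.10):
    """One decision round: a fixed global threshold grants or denies the benefit,
    and a unit of 10% of income is added or deducted accordingly (simplified)."""
    threshold = np.median(incomes)  # lenient classifier: accepts ~50% of applicants
    approved = incomes >= threshold
    incomes = np.where(approved, incomes * (1 + shock), incomes * (1 - shock))
    # Reference group for the health update: the household's current income decile.
    edges = np.percentile(incomes, np.arange(10, 100, 10))
    deciles = np.searchsorted(edges, incomes, side="right")
    group_means = np.array([incomes[deciles == d].mean() for d in range(10)])
    health = update_health(incomes, group_means[deciles], mean_health)
    return incomes, health

# One round on a synthetic log-normal income distribution.
rng = np.random.default_rng(7)
incomes0 = rng.lognormal(mean=10.8, sigma=0.7, size=1000)
incomes1, health1 = one_round(incomes0, mean_health=70.0)
```

Because the shock unit is proportional to income, the same round widens the gap between accepted and rejected households more gently at the bottom of the distribution than a fixed-dollar unit would.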
\subsubsection{Rational agents and Income Fluctuations Problem (IFP)} In this model \cite{ma2020income, sargent2014quantitative, deaton1989saving, den2010comparison, kuhn2013recursive, rabault2002borrowing, reiter2009solving, schechtman1977some}, an agent finds a consumption-asset path $ \{(c_t, a_t)\} $, where $a_t$ is the assets (net worth) at time $t$ and $c_t$ is the consumption at time $t$, with the goal of maximizing \begin{equation} \mathbb E \left\{ \sum_{t=0}^\infty \beta^t u(c_t) \right\} \label{obj1} \end{equation} such that \begin{equation} a_{t+1} = R_{t+1} (a_t - c_t) + Y_{t+1} \; \text{ and } \; 0 \leq c_t \leq a_t \label{cons} \end{equation} where $\beta \in (0,1) $ is the discount factor, $Y_t $ is non-capital income (i.e., via labor), and $ R_t $ is the interest rate on savings. For simplicity, in this paper we disregard gains from savings by setting $R_t = 1$. The non-capital income $Y_t$ is controlled by an exogenous state process $z = \{Z_t\}$. As we shall see, this is how we can introduce income shocks via decision processes. The quantity $u$ is the utility to the household. We use the Constant Relative Risk Aversion (CRRA) \cite{ljungqvist2008recursive,wakker2008explaining} utility \[ u(c) = \frac{c^{1 - \gamma_c}} {1 - \gamma_c}, \] which is a commonly used utility function in finance and economics that captures the idea that risk aversion is independent of scale. Here, risk aversion refers to an individual's inclination to prefer low-uncertainty (more predictable) but lower-payoff outcomes over outcomes with high uncertainty but higher payoffs \cite{o2018modeling}. We pick $\gamma_c = 2$ since the utility function has a $c^{1-\gamma_c}$ term, and for a smaller (non-integer) value of $\gamma_c$, $u(c - b)$ could become imaginary; here $b$ is the household's monthly basic expenditures, and we use $u(c - b)$ to ensure that households cover their basic needs in every round (if $c < b$ then there is negative utility). 
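The CRRA utility over consumption net of basic expenditures can be written down directly; the short check below (our illustration, not repository code) verifies the two properties the text relies on: utility rises with consumption, but with diminishing marginal returns.

```python
def crra_utility(c, b, gamma_c=2.0):
    """CRRA utility u(c - b) = (c - b)^(1 - gamma_c) / (1 - gamma_c),
    where b is the household's monthly basic expenditures."""
    x = c - b
    return x ** (1.0 - gamma_c) / (1.0 - gamma_c)

# For gamma_c = 2 this is -1/(c - b): increasing in c, with diminishing returns.
u1, u2, u3 = (crra_utility(c, b=1000.0) for c in (2000.0, 3000.0, 4000.0))
assert u1 < u2 < u3              # more consumption is always better
assert (u2 - u1) > (u3 - u2)     # concavity: the source of risk aversion
```

Scale-independence of risk aversion here means that doubling both $c - b$ and the stakes of a gamble leaves the agent's relative preference between gambles unchanged.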
A \emph{feasible} consumption path from $(a,z) \in \mathsf S$ is a consumption path $\{c_t\}$ whose associated asset path $\{a_t\}$ satisfies the following: \begin{itemize} \item $ (a_0, z_0) = (a, z) $ \item the feasibility constraints in Eq.~\ref{cons} \item measurability: the consumption at time $t$ can be a function only of random variables observed up to time $t$; it cannot depend on outcomes that have not yet been observed. \end{itemize} An \emph{optimal consumption path} from $(a,z)$ is a feasible consumption path that attains the supremum of the objective~\ref{obj1}; it can be shown to satisfy \begin{equation} u' (c_t) = \max \left\{ \beta R \, \mathbb{E}_t u'(c_{t+1}) \,,\; u'(a_t) \right\} \label{5} \end{equation} This condition can be iterated on to find the optimal consumption. Please see Appendix \ref{sec:supp-optimal-IFP} for details.\footnote{Our explanations and implementation for this model are built upon \url{https://python.quantecon.org/ifp_advanced.html}} \subsubsection{Markov Decision Process (MDP) model} We now turn to our second approach to modeling agents. Here, each agent occupies a state of a Markov decision process, with transitions out of each state based on locally reasonable decisions about asset management. Agents can use their savings to pay for their necessities and liabilities, sell a tangible asset, or convert a health-related tangible asset (such as an insurance plan). They can use their income to increase their savings, invest in health improvement, or build assets through consumption. The decision they make (stochastically) moves them to a new state with modified attributes (income, wealth, and health) accordingly. See Appendix \ref{sec:supp-mcmc} for details on the transitions in this system. 
We note that the transitions are designed based on prior studies of income and precarity \cite{income-precarity} that describe typical behaviors of individuals in different income classes when faced with income and health shocks. \paragraph{The decision making process:} The IFP model presents challenges for the income shock process that we described in Section~\ref{sec:income-shocks}. In the macroeconomic literature on income fluctuation (and indeed also in the work by \cite{abebe2020subsidy}), shocks are assumed to arrive in a stochastic form. Thus, while there is randomness in the shock generation process, it is a predictable kind that can be optimized for (in an expected sense). However, shocks generated by an external decision cannot be optimized for in the same way (and indeed, this is an important element of precarity). In our simulation, we think of the optimization process as happening in epochs \emph{between} decision points. This model captures the idea that long-range planning is constrained by decision points that the individual has no control over. \section{An empirical inquiry} \label{sec:assess_prec} With our simulation framework now in place, we are ready to explore a set of questions relating to how precarity manifests itself. For each of these questions, we will run both simulation methodologies described above in Section~\ref{sec:strategies}. We will run the simulation for a fixed number of time steps, recording the (cumulative) precarity indices of individuals for each of the three state variables as their state strings grow longer. We will show the distribution of precarity index values across the population at each time step in order to illustrate how the distribution evolves over time. \subsection{Evolution of precarity} \label{sec:evolution-precarity} Our first sequence of experiments acts as a baseline to demonstrate how income shocks affect the precarity of individuals over time with respect to each of the three state variables. 
The results (for each of the variables) for the IFP model are shown in Figure~\ref{fig:ifp 10 rounds precarity} and the corresponding results for the MDP model are shown in Figure~\ref{fig:10 rounds precarity}. \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_income.png} \caption{Income precarity} \label{fig:ifp-prec_inc_population} \end{subfigure} \qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty.png} \caption{Net worth precarity} \label{fig:ifp-prec_savings} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_health.png} \caption{Health index precarity} \label{fig:ifp-prec_health} \end{subfigure} \caption{Assessing household precarity over time - IFP model} \label{fig:ifp 10 rounds precarity} \end{figure*} \begin{figure*}[!htbp] \centering \begin{subfigure}[c]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/precarity_income_all.png} \caption{Income precarity} \label{fig:prec_inc_population} \end{subfigure} \qquad \begin{subfigure}[c]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/savings_dist_rounds.png} \caption{Net worth precarity} \label{fig:prec_networth} \end{subfigure} \qquad \begin{subfigure}[c]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/precarity_health_all.png} \caption{Health index precarity} \label{fig:prec_health} \end{subfigure} \caption{Assessing household precarity over time - MDP Model} \label{fig:10 rounds precarity} \vspace{-2.0mm} \end{figure*} \textbf{Analysis.} In both models we observe that as the system progresses the precarity distribution for net worth shifts rightward (i.e., there is an overall increase in precarity). 
The changes are of different magnitude (and we will explore the reasons for that next), but it is worth noting that \emph{income} shocks affect both net worth and health indices because of the interconnected nature of these attributes in reality. The health index precarity changes in a less consistent manner: indeed, in the IFP model health precarity appears to decrease in certain parts of the distribution. We suspect this is because of two factors. Firstly, the health index is computed relative to the average income level in a particular state. Thus, even if income decreases, the health index might appear to be ``further'' from that mean value and spuriously indicate better health (see the discussion in Section~\ref{sec:simul-based-meth}). A second cause of this effect could be that individuals starting off with high precarity might have their precarity \emph{reduced} as they see similar states (even if they are inferior states): this is linked to the way in which the different terms in the precarity index are weighted. \subsection{Heterogeneity in precarity evolution} \label{sec:heter-prec-evol} The above picture is a global view of precarity across all income levels. One of the observed effects of precarity is the non-uniform way in which individuals at different income levels might be affected by financial shocks. To investigate this, we look at precarity distributions segmented by income level. These classes are the lower 29\%, the middle 52\%, and the upper 19\% of incomes.\footnote{\url{https://www.pewresearch.org/fact-tank/2020/07/23/are-you-in-the-american-middle-class/}} In the IFP model, the precarity index of income, net worth, and health can be seen in Figures \ref{fig:ifp income classes}, \ref{fig:ifp savings income classes}, and \ref{fig:ifp health income classes}, respectively. 
Figures \ref{fig:income classes}, \ref{fig:savings income classes} and \ref{fig:health income classes} show this for the MDP model. \textbf{Analysis.} In general, we see the following consistent behavior. Higher income individuals maintain a (low) level of precarity over time and sometimes even experience a \emph{decrease} in precarity. Lower income individuals experience a clear increase in precarity, and middle income individuals also experience a precarity increase (though a smaller one). In other words, there is a \emph{compounding} effect of income shocks for individuals who are already in precarious positions, the exact concern that precarity seeks to capture. While our simulation is a gross oversimplification of reality, this phenomenon has been observed in the real world. During the pandemic, for example, around $34.4\%$ of low income people earning less than \$27,000 lost their jobs in March, compared to only $13.2\%$ of high income people earning more than \$60,000.\footnote{\url{https://tracktherecovery.org/}} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_income-low.png} \caption{Low income precarity} \label{fig:ifp-prec_income_low} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_income-middle.png} \caption{Middle income precarity} \label{fig:ifp-prec_income_middle} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_income-high.png} \caption{High income precarity} \label{fig:ifp-prec_income_high} \end{subfigure} \caption{Assessing income classes' precarity over time - IFP model} \label{fig:ifp income classes} \end{figure*} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty-low.png}
\caption{Low income net worth precarity} \label{fig:ifp-prec_sav_low} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty-middle.png} \caption{Middle income net worth precarity} \label{fig:ifp-prec_sav_mid} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty-high.png} \caption{High income net worth precarity} \label{fig:ifp-prec_sav_high} \end{subfigure} \caption{Assessing income classes' net worth precarity over time - IFP model} \label{fig:ifp savings income classes} \end{figure*} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_health-low.png} \caption{Low income health precarity} \label{fig:ifp-prec_health_low} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_health-middle.png} \caption{Middle income health precarity} \label{fig:ifp-prec_health_mid} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_health-high.png} \caption{High income health precarity} \label{fig:ifp-prec_health_high} \end{subfigure} \caption{Assessing income classes' health precarity over time - IFP Model} \label{fig:ifp health income classes} \end{figure*} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/lower_income_precarity.png} \caption{Low income precarity} \label{fig:prec_income_low} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/middle_income_precarity.png} \caption{Middle income precarity} \label{fig:prec_income_middle} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering
\includegraphics[width=\columnwidth]{figures/higher_income_precarity.png} \caption{High income precarity} \label{fig:prec_income_high} \end{subfigure} \caption{Assessing income classes' precarity over time - MDP Model} \label{fig:income classes} \end{figure*} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/lower_savings_precarity.png} \caption{Low income net worth precarity} \label{fig:prec_sav_low} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/middle_precarity_savings.png} \caption{Middle income net worth precarity} \label{fig:prec_sav_mid} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/higher_income_savings.png} \caption{High income net worth precarity} \label{fig:prec_sav_high} \end{subfigure} \caption{Assessing income classes' net worth precarity over time - MDP Model} \label{fig:savings income classes} \end{figure*} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/lower_health_precarity.png} \caption{Low income health precarity} \label{fig:prec_health_low} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/middle_health_precarity.png} \caption{Middle income health precarity} \label{fig:prec_health_mid} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/higher_health_precarity.png} \caption{High income health precarity} \label{fig:prec_health_high} \end{subfigure} \caption{Assessing income classes' health precarity over time - MDP Model} \label{fig:health income classes} \end{figure*} \subsection{Policy interventions} A potential value of a simulation framework is our ability to experiment with interventions that would be 
difficult if not impossible to test ``in the wild''. We demonstrate the value of our simulation with two policy interventions that might be implemented to alleviate precarity. Both interventions are motivated by concrete measures that have been proposed to alleviate wealth shocks experienced during the pandemic. \begin{enumerate} \item \textit{Fixed stimulus intervention}: We consider a fixed stimulus (a fixed monthly value of \$1500, similar to the monthly stimulus checks issued during the pandemic\footnote{\url{https://www.nytimes.com/article/coronavirus-stimulus-package-questions-answers.html}}) given in every round to all households who fall below the classifier threshold. This form of fixed stimulus is similar to the mitigation model suggested by Abebe et al.\ \cite{abebe2020subsidy} (although in their model the goal is to allocate different fixed amounts of stimulus to different individuals). \item \textit{Precarity resistance}: An alternate approach to dealing with income shocks was demonstrated (among others) by Germany, where the government instituted a program to help people keep their jobs and remain on the payroll.\footnote{Germany's Kurzarbeit Program: \url{https://tinyurl.com/yd9qpahs}} We model this by reducing the probability of a transition to a poorer economic state after an adverse decision in our simulations. We implement this in the MDP model by adjusting the transition probabilities directly and in the IFP model by adjusting the transition process that generates the exogenous state $Z$. \end{enumerate} We first show the outcomes of these policy interventions in the IFP model; these results are shown in Figures \ref{fig:ifp-stimulus} and \ref{fig:ifp-markov}. Figures \ref{fig:stimulus} and \ref{fig:markov} show the corresponding results for the MDP model.
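For concreteness, the two interventions can be sketched as follows. This is a hedged illustration rather than our exact simulation code: the classifier threshold, the worst-to-best ordering of MDP states, and the softening factor are assumptions made for the sketch.

```python
import numpy as np

def apply_fixed_stimulus(incomes, threshold, amount=1500.0):
    """Fixed stimulus: each household below the classifier threshold
    receives a flat payment of `amount` in the current round."""
    incomes = np.asarray(incomes, dtype=float).copy()
    incomes[incomes < threshold] += amount
    return incomes

def soften_downward_transitions(P, factor=0.5):
    """Precarity resistance, sketched for the MDP model: scale down the
    probability of moving to a poorer state (here, any lower-indexed
    state, assuming states are ordered worst-to-best) and renormalize
    each row so it remains a probability distribution."""
    P = np.asarray(P, dtype=float).copy()
    for s in range(P.shape[0]):
        P[s, :s] *= factor          # transitions to poorer states
        P[s] /= P[s].sum()          # keep the row a distribution
    return P

boosted = apply_fixed_stimulus([1000.0, 5000.0], threshold=2000.0)
P_soft = soften_downward_transitions([[0.7, 0.3], [0.4, 0.6]], factor=0.5)
```

In the IFP model the analogous change is applied to the process generating the exogenous state $Z$ rather than to an explicit transition matrix.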
\begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_income-interventions1500.png} \caption{Stimulus effect on income} \label{fig:ifp-stimulus income} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty-intervention1500.png} \caption{Stimulus effect on net worth} \label{fig:ifp-stimulus net worth} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_health-interventions1500.png} \caption{Stimulus effect on health} \label{fig:ifp-stimulus health} \end{subfigure}% \caption{Stimulus effects - IFP Model} \label{fig:ifp-stimulus} \end{figure*} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/stimulus_income_precarity.png} \caption{Stimulus effect on income} \label{fig:stimulus income} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/stimulus_networth_precarity.png} \caption{Stimulus effect on net worth} \label{fig:stimulus net worth} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/stimulus_health_precarity.png} \caption{Stimulus effect on health} \label{fig:stimulus health} \end{subfigure}% \caption{Stimulus effects - MDP Model} \label{fig:stimulus} \end{figure*} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_income-InterventionProb.png} \caption{Markov persistence effect on income} \label{fig:ifp-markov income} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty-InterventionProb.png} \caption{Markov persistence effect 
on net worth} \label{fig:ifp-markov net worth} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/ifp_precairty_health-InterventionProb.png} \caption{Markov persistence effect on health} \label{fig:ifp-markov health} \end{subfigure}% \caption{Precarity resistance - IFP Model} \label{fig:ifp-markov} \end{figure*} \begin{figure*}[!htbp] \vspace{-1.0mm} \centering \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/markov_income_precarity.png} \caption{Markov persistence effect on income} \label{fig:markov income} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/markov_networth_precarity.png} \caption{Markov persistence effect on net worth} \label{fig:markov net worth} \end{subfigure}\qquad \begin{subfigure}[t]{0.3\textwidth} \centering \includegraphics[width=\columnwidth]{figures/markov_health_precarity.png} \caption{Markov persistence effect on health} \label{fig:markov health} \end{subfigure}% \caption{Precarity resistance - MDP Model} \label{fig:markov} \end{figure*} \textbf{Analysis.} We see that these interventions have a measurable effect on decreasing household precarity compared to Figure \ref{fig:10 rounds precarity}, as the number of households with higher precarity indices decreases. An intervention on income also has a ripple effect on the other, tied features: we observe a decline in their precarity as well. In addition, we see a measurable effect of the stimulus on net worth and on the count of households at the lowest income precarity values. Note that the changes in the IFP model are more subtle because in that model a household consumes as much as possible, constrained by utility and basic needs. Therefore, although the \$1500 stimulus increases assets, it also increases consumption.
Note that we employed the interventions from the first round onward. We also tested the effects of interventions introduced in later rounds (e.g., round 6 onward). The effects of such interventions are negligible, implying that reacting to the underlying population's precarity only after they are precarious beyond the point of recovery would have little to no effect. \section{Discussion} The main contribution of this work is the introduction of the rich sociological and economic notion of precarity to the community of researchers thinking about automated decision making, along with a simulation framework to experiment with it and an empirical study of how precarity manifests in a simplified macroeconomic system. We believe that the study of precarity is important for several reasons. Firstly, it takes the focus of decision making away from the decision \emph{maker} and their goals of maximizing utility and other socially desirable objectives, and towards the experience of an individual subject to a sequence of decisions. Secondly, this ``averaging over time'' reveals phenomena of inequality that are hidden underneath gross population-wide measures of progress. Lastly, although this work is addressed to the Artificial Intelligence community, the contributions are also important for public policy. \paragraph{Group inequality and precarity:} Thus far, all simulations use parameters that apply to the entire population of households. While individuals in the simulation start at different states, incomes, wealth, and health, the model is identical for every individual. Yet, it is clear that different demographic groups experience bias and discrimination in society.
Historical discrimination has a huge impact on an individual's starting point \cite{baradaran2019color, abbasi2019fairness, gupta2019equalizing}, and ongoing discrimination has a major impact on one's income even for the same job (via gender \cite{blau2017gender} or ability \cite{whittaker2019disability}), income shocks \cite{zwerling1992race}, health shocks \cite{hicken2014racial}, and wealth shocks, as well as the likelihood of benefiting from public policy. In a sense, one's demographic group changes how the economic model must be run. One might apply this model to members of a particular intersectional group who benefit from or are harmed by discrimination in the same manner. Future work could also use multiple models, one each for individuals in different intersectional groups, to show how model differences can lead to disparate results by group in precarity and financial position in society, explaining, for example, why the US wealth gap by race has remained largely unchanged over many decades \cite{baradaran2019color}. \paragraph{Limitations:} Our framework, like any simulation framework, is limited by what we have chosen to retain and omit. Our simulation assumes societal homogeneity -- all individuals are subject to the same forces and constraints, outside of economic differences. This, of course, fails to reflect other forms of inequity in society like the ones described above, and is a direction for further exploration. The system could, however, take any heterogeneity among individuals into account in its decisions. Our choice of income, net worth and health as state variables is well-grounded in the literature on macroeconomic models. However, our modeling of how income and wealth factors influence and are influenced by health relies on a simple formula equating these factors. Our two choices of simulation frameworks reflect two extreme perspectives in economic modeling: the infinite-horizon optimizer and the myopic local decision maker.
The truth usually lies somewhere in between, and we expect that a more sophisticated modeling of bounded rationality might allow us to capture a more nuanced and realistic set of behaviors. Finally, while our model behavior and predictions broadly align with observed patterns of inequality that we see right now, a more reliable validation would be to see if we can match trends in income and wealth distributions more precisely. It would also be instructive to develop strong tests of significance for the shifts that we observe in response to interventions. \section*{Acknowledgements} \label{sec:aknowledgement} We thank Vivek Gupta for his valuable insights at various stages of the project. We also thank anonymous reviewers for their helpful suggestions and comments. \cleardoublepage \bibliographystyle{unsrt}
\section{Introduction} A rigid body~\cite{favro,miguelx,delong} undergoing Brownian motion can translate and rotate in space. Most interestingly, when the particle is screwlike~\cite{brenner1,brenner2}, L-shaped~\cite{Kummel}, biaxial~\cite{Wittkowski}, ellipsoidal~\cite{Han0}, etc., translation and rotation can couple~\cite{brenner1,brenner2,Sun,Ayan,Ayan2,Ayan3,Han0}, leading to a rich class of trajectory patterns, e.g., helical motion~\cite{Wittkowski} and circular motion~\cite{Kummel}. In recent years, apart from exploring these novel dynamic behaviors arising from rigid-body Brownian motion, general models have been built, such as Brownian Dynamics~\cite{ermak,miguelx}, Stokesian Dynamics~\cite{brady,swan}, Fluctuating Hydrodynamics~\cite{Sharma} and the Langevin dynamics of arbitrarily shaped particles~\cite{Sun,delong}. Usually in these models the effects of non-stochastic factors such as hydrodynamic interactions and particle geometry enter the displacement equations as a resistance tensor~\cite{Sun} or, equivalently, a mobility tensor~\cite{delong}, while stochasticity is contained in the force and torque terms. Subsequently, the trajectory of a tracking point (TP) on the body, e.g., the center of mass (CoM) or the center of friction (CoF, also known as the center of hydrodynamic stress~\cite{Ayan}), is generated. Hence the particle is still represented by a zero-volume TP rather than a finite-volume body. However, to a large extent, the analysis of a single trajectory of a single TP can indeed provide rich information regarding the particle's physical properties and its interactions with the environment. Methods like single-trajectory analysis~\cite{tejedor,holcman} and power spectral analysis~\cite{Schnellb,Sposini} can be utilized to extract useful information about the particle, e.g., the diffusion coefficient~\cite{Xavier1} and mean squared displacement~\cite{Xavier2}.
Most recently, an exciting new model based on information theory has been proposed to infer the external force field from a stochastic trajectory~\cite{Frishman}. Indeed, the toolkit one can use to decipher a trajectory is expanding fast. Nevertheless, when particle size matters, the trajectory traced by a single TP does not reveal the orientation of the body along its path, and is thus not enough to characterize the particle's state. Even worse, a trajectory recorded by tracking an inappropriately chosen TP could contain errors. It is therefore natural to consider trajectories of multiple TPs on a rigid body, because this helps us understand either the difference or the commonality among different TPs. Since the object is Brownian, it is necessary to consider the mean trajectory. Note that to obtain the ``mean trajectory'' of a designated TP, one must consider an ensemble of identical bodies and average over the ensemble. We expect to find simple but characteristic regimes of interaction among the mean trajectories of different tracking points. To achieve this, we adopt an extremely simplified representation: a polygon (in 2D) or polyhedron (in 3D). As shown in Fig. \ref{fig:pgschematics}, each vertex of the polygon/polyhedron is a mass point with mass $m_i$ and friction coefficient $\xi_i$ ($1\leq i\leq n$, $n$ being the number of vertices). Vertex $i$ experiences a thermal fluctuation force $\delta\mathbf{F}_i$, a friction force $\mathbf{f}_i$ exerted by the fluid and an external force $\mathbf{F}_i$. Edges are assumed rigid, massless and noninteractive with the environment. Hydrodynamic interaction among vertices (which is mediated by the fluid) is also neglected. Consequently, this leads to a simple Langevin equation system which describes translation and rotation in space.
This setup resembles the dot multisubunit representation~\cite{Torre4} and the bead representation~\cite{delong,Sun}, although in many previous studies the hydrodynamic interactions among beads are preserved to make the cases more realistic. However, our simplified model turns out not to be too simplified, and is able to generate rich dynamic behaviors. Besides, as shown in the Appendix, the mean displacement curve generated from this model agrees well with experimental and theoretical results for the boomerang colloidal particle~\cite{Ayan}. \begin{figure} \includegraphics[width=0.45\textwidth]{polygon_schematics} \caption{\label{fig:pgschematics} (Color Online) Schematics of the polygon/polyhedron representation. Each labeled vertex traces a trajectory in space as the body moves through space.} \end{figure} The paper is organized as follows. In Section \ref{model}, the construction of the Langevin dynamics model of the Brownian polygon/polyhedron system is presented, along with details of the computation. In Section \ref{Con}, the convergence regime for motion in 2D space is identified and modeled. In Section \ref{Align}, the alignment regime for motion in 2D space is identified and modeled. In Section \ref{twist}, the twist regime for motion in 2D space is identified and modeled. In Section \ref{transition}, we discuss the transition between regimes and the modification of Brownian behavior in the transition. In Section \ref{2to3}, we extend the 2D investigations to 3D. Finally, concluding remarks are presented in Section \ref{Conclusion}. \section{\label{model}Model Construction and Computation} Denote the position vector of the \textit{i}-th vertex as $\mathbf{r}_i$. It is customary to define various ``centers''~\cite{Ayan} of the body. In our context, first, the geometric center (GC), whose position vector is $\mathbf{r}_c=\frac{1}{n}\sum_{i=1}^{n}\mathbf{r}_i$. Second, the CoM, whose position vector is $\mathbf{r}_m=\frac{1}{\sum_{i=1}^{n}m_i}\sum_{i=1}^{n}m_i\mathbf{r}_i$.
Third, CoF, whose position vector is $\mathbf{r}_f=\frac{1}{\sum_{i=1}^{n}\xi_i}\sum_{i=1}^{n}\xi_i\mathbf{r}_i$. Depending on the distribution of mass and friction, these three centers can separate or coincide. Then, denote the vectors joining from these centers to vertex \textit{i} as $\mathbf{R}^c_i (=\mathbf{r}_i-\mathbf{r}_c)$, $\mathbf{R}^m_i (=\mathbf{r}_i-\mathbf{r}_m)$ and $\mathbf{R}^f_i (=\mathbf{r}_i-\mathbf{r}_f)$, respectively. These vectors have the properties \begin{equation} \sum_{i=1}^{n}\mathbf{R}^c_i=\mathbf{0},\quad\sum_{i=1}^{n}m_i\mathbf{R}^m_i=\mathbf{0},\quad\sum_{i=1}^{n}\xi_i\mathbf{R}^f_i=\mathbf{0}, \label{eq:prop1} \end{equation} which can be easily shown to be true. They attach to the body and can translate and rotate in the lab frame. Motion of the polygon/polyhedron is decomposed into translation of CoM and rotation relative to CoM. Denote vertex \textit{i}'s velocity in the lab frame as $\mathbf{v}_i$. Obviously, $\mathbf{v}_i=\mathbf{v}_m+\boldsymbol{\omega}\times\mathbf{R}^m_i$, where $\mathbf{v}_m$ is the velocity of CoM, $\boldsymbol{\omega}$ the angular velocity. Newtonian mechanics of $\{\boldsymbol{\omega},\mathbf{v}_m\}$~\cite{Sun} in the polygon/polyhedron picture writes, \begin{equation} \begin{aligned} \big(\sum_{i=1}^{n}m_i|\mathbf{R}^m_i|^2\big)\frac{d\boldsymbol{\omega}}{dt}&=\sum_{i=1}^{n}\mathbf{R}^m_i\times(\mathbf{f}_i+\delta\mathbf{F}_i+\mathbf{F}_i),\\ \big(\sum_{i=1}^n m_i\big)\frac{d\mathbf{v}_m}{dt}&=\sum_{i=1}^{n}(\mathbf{f}_i+\delta\mathbf{F}_i+\mathbf{F}_i),\\ \end{aligned} \label{eq:torgue_momentum} \end{equation} where $\sum_{i=1}^{n}m_i|\mathbf{R}^m_i|^2$ is the moment of inertia. $|\mathbf{R}^m_i|$ is the length of the vector and stays constant. 
$\delta\mathbf{F}_i$ obeys \begin{equation} \langle\delta\mathbf{F}_i(t)\delta\mathbf{F}_j(t')\rangle=2\xi_ikT\mathbf{B}\delta_{ij}\delta(t-t'), \label{eq:fluc-dissi} \end{equation} where $\mathbf{B}$ is the identity matrix, $k$ is the Boltzmann constant, $T$ is the temperature, $\delta_{ij}$ is the Kronecker delta, $\delta(\cdot)$ is the Dirac delta function and $\langle\cdot\rangle$ is the ensemble average. In general, Eq. (\ref{eq:fluc-dissi}) could be written as $\langle\delta\mathbf{F}_i(t)\delta\mathbf{F}_j(t')\rangle=2kT\boldsymbol{\Xi}\delta_{ij}\delta(t-t')$~\cite{Sun}, where $\boldsymbol{\Xi}$ is a resistance tensor. In our simple representation, $\boldsymbol{\Xi}$ reduces to $\xi_i\mathbf{B}$. The factor $\delta_{ij}$ encodes the assumption that thermal fluctuations at different vertices are uncorrelated, which is reasonable. The friction force is $\mathbf{f}_i=-\xi_i\mathbf{v}_i=-\xi_i(\mathbf{v}_m+\boldsymbol{\omega}\times\mathbf{R}^m_i)$. Substituting $\mathbf{f}_i$ into Eq. (\ref{eq:torgue_momentum}), after simple algebraic manipulations one arrives at the Langevin equations, \begin{equation} \begin{aligned} \big(\sum_{i=1}^{n}m_i|\mathbf{R}^m_i|^2\big)\frac{d\boldsymbol{\omega}}{dt}=&-\big(\sum_{i=1}^{n}\xi_i|\mathbf{R}^m_i|^2\big)\boldsymbol{\omega}-\sum_{i=1}^{n}\mathbf{R}^m_i\times\xi_i\mathbf{v}_m +\sum_{i=1}^{n}\mathbf{R}^m_i\times\delta\mathbf{F}_i+\sum_{i=1}^{n}\mathbf{R}^m_i\times\mathbf{F}_i,\\ \big(\sum_{i=1}^{n}m_i\big)\frac{d\mathbf{v}_m}{dt}=&-\big(\sum_{i=1}^{n}\xi_i\big)\mathbf{v}_m-\sum_{i=1}^{n}\xi_i\boldsymbol{\omega}\times\mathbf{R}^m_i+\sum_{i=1}^{n}\delta\mathbf{F}_i+\sum_{i=1}^{n}\mathbf{F}_i.\\ \end{aligned} \label{eq:gen-eqn} \end{equation} An additional equation for $\mathbf{R}^m_i$ comes from the rigidity of the body.
One could write $\mathbf{R}^m_i-\mathbf{R}^m_i(0)=\Delta\mathbf{r}_i-\Delta\mathbf{r}_m=\int_{0}^{t}\mathbf{v}_idt'-\int_{0}^{t}\mathbf{v}_mdt'=\int_{0}^{t}\boldsymbol{\omega}\times\mathbf{R}^m_idt'$, where $\Delta\mathbf{r}$ is the displacement in the lab frame. Equivalently, \begin{equation} \frac{d\mathbf{R}^m_i}{dt}=\boldsymbol{\omega}\times\mathbf{R}^m_i. \label{eq:prop2} \end{equation} Eqs.~(\ref{eq:gen-eqn}-\ref{eq:prop2}) describe the Langevin dynamics for a polygon/polyhedron. Several comments on Eqs.~(\ref{eq:gen-eqn}-\ref{eq:prop2}) are in order: a): Different polygon/polyhedron geometries result in different sets of vectors $\{\mathbf{R}^m_i\}_{1\leq i\leq n}$. First, these vectors enter the moment of inertia/friction and impact the relaxation rate of the angular velocity. Second, they participate in the torques acting on the body, as well as in the translation-rotation coupling term. Third, geometry impacts the relative positions of GC, CoM and CoF, which is important for the mean-trajectory patterns. In the numerical investigations we will study different geometries. b): Stochasticity is introduced into the system through $\delta\mathbf{F}_i$. $\delta\mathbf{F}_i=\sqrt{2\xi_ikT}\mathbf{W}(t)$, where $\mathbf{W}(t)$ is a vector and each of its components is a Gaussian white noise. Stochasticity participates in driving the evolution of the velocity and angular velocity. It competes with the deterministic force to shape the evolution of a Brownian trajectory~\cite{Frishman}. c): Eq. (\ref{eq:gen-eqn}) explicitly contains translation-rotation coupling~\cite{Sun,Ayan,Ayan2,Ayan3,Han0} terms. Under such coupling, how the body translates influences how it rotates, and vice versa. However, if $\xi_i/m_i=\xi_j/m_j|_{1\leqslant i\neq j\leqslant n}$, clearly $\sum_{i=1}^{n}\xi_i\mathbf{R}^m_i=const.\cdot\sum_{i=1}^{n}m_i\mathbf{R}^m_i=\mathbf{0}$ (Eq. (\ref{eq:prop1})), hence the coupling terms vanish. Under this condition, translation and rotation decouple.
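The vanishing of the coupling term under a uniform friction-to-mass ratio is easy to verify numerically. The sketch below uses made-up vertex data (an arbitrary triangle with non-uniform masses) purely for illustration:

```python
import numpy as np

# Arbitrary (made-up) triangle with non-uniform masses but a uniform
# friction-to-mass ratio xi_i / m_i, so the body is non-TRC.
r = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])  # vertex positions
m = np.array([0.5, 2.0, 1.0])                        # masses m_i
xi = 0.1 * m                                         # xi_i / m_i = 0.1 for all i

r_m = (m[:, None] * r).sum(axis=0) / m.sum()         # centre of mass
R = r - r_m                                          # body vectors R_i^m
coupling = (xi[:, None] * R).sum(axis=0)             # sum_i xi_i R_i^m

# Since xi_i = const * m_i and sum_i m_i R_i^m = 0 by Eq. (prop1),
# the coupling term is numerically zero: translation and rotation decouple.
print(coupling)
```

Replacing `xi` with friction coefficients that are not proportional to the masses makes `coupling` nonzero, turning the body into a TRC one.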
This forms a criterion for categorizing different polygons/polyhedra as translation-rotation coupled (TRC) or non-TRC, as shown in Tab. \ref{tab:isoaniso_trc}. Since $\{\xi_i\}_{1\leqslant i\leqslant n}$ measures the interaction strength between the vertices and the carrying medium, we can also categorize polygons/polyhedra based on how the $\xi_i$'s are distributed. We define a body to be isotropic if $\xi_i=\xi_j|_{1\leqslant i\neq j\leqslant n}$ is true, and anisotropic if it is not. These two criteria overlap and give a finer description of the body. Noteworthily, it is readily verifiable that for Eq. (\ref{eq:gen-eqn}) all non-TRC bodies should behave similarly, regardless of isotropy and anisotropy, which only make a difference in the relaxation rate. Therefore, for simplicity it suffices to investigate isotropic non-TRC, isotropic TRC and anisotropic TRC under different geometries. \begin{table} \caption{Categorization of a polygon/polyhedron. $A=\{\xi_i/m_i=\xi_j/m_j|_{1\leqslant i\neq j\leqslant n}\mathrm{\,is\,true}\}$, $B=\{\xi_i=\xi_j|_{1\leqslant i\neq j\leqslant n}\mathrm{\,is\,true}\}$.} \label{tab:isoaniso_trc} \begin{center} \begin{tabular}{ccc} \hline \hline & $A$ & $^{\neg}A$\\ \hline $B$ & Isotropic non-TRC & Isotropic TRC\\ $^{\neg}B$ & Anisotropic non-TRC & Anisotropic TRC\\ \hline \hline \end{tabular} \end{center} \end{table} d): Given a geometry, $\boldsymbol{\omega}(t)$, $\mathbf{v}_m(t)$, and $\mathbf{R}^m_i(t)$ are solved numerically. Initially, we set $\boldsymbol{\omega}(0)=\mathbf{0}$ and $\mathbf{v}_m(0)=\mathbf{0}$. $\mathbf{R}^m_i(0)$ depends on the initial placement of the body in the coordinates. Then these vectors are evolved according to Eqs.~(\ref{eq:gen-eqn}-\ref{eq:prop2}). Time $t$ is discretized as $t=N\Delta t$, where $N$ indexes the time step and $\Delta t$ is the time step size. The simulation is carried out in a unitless fashion, with $kT=1$ and time step $\Delta t=0.1$.
At each time step, we sample $\mathbf{W}(N)$ in the $x$, $y$ (for 2D) and $x$, $y$, $z$ (for 3D) directions independently from a Gaussian probability density function of zero mean and standard deviation of $\Delta t$, such that we get $\delta\mathbf{F}_i(N)$. A simplified representation of the numerical iterations is as follows, \begin{equation} \begin{aligned} &\boldsymbol{\omega}(N)=\boldsymbol{\omega}(N-1)+\Delta t\cdot f\big(\boldsymbol{\omega}(N-1),\mathbf{R}^m_i(N-1),\mathbf{v}_m(N-1),\delta\mathbf{F}_i(N-1),\mathbf{F}_i(N-1)\big);\\ &\mathbf{v}_m(N)=\mathbf{v}_m(N-1)+\Delta t\cdot g\big(\boldsymbol{\omega}(N-1),\mathbf{R}^m_i(N-1),\mathbf{v}_m(N-1),\delta\mathbf{F}_i(N-1),\mathbf{F}_i(N-1)\big);\\ &\mathbf{R}^m_i(N)=\mathbf{R}^m_i(N-1)+\Delta t\cdot h\big(\boldsymbol{\omega}(N-1),\mathbf{R}^m_i(N-1)\big); \end{aligned} \label{eq:numerical_scheme} \end{equation} where $f()$, $g()$, $h()$ are algebraic operations given by Eqs.~(\ref{eq:gen-eqn}-\ref{eq:prop2}), $N=1,2,3,4\cdots$. This is an explicit Euler scheme and it is easy to execute. Such a scheme works well if $\Delta t$ is small, e.g., in our case $\Delta t=0.1$. One may also try methods such as Runge-Kutta, which are computationally more expensive but more tolerant of large time steps. Given $\boldsymbol{\omega}(0)$, $\mathbf{v}_m(0)$ and $\mathbf{R}^m_i(0)$, Eq. (\ref{eq:numerical_scheme}) evolves the time series of these quantities. After obtaining these quantities, one could integrate to get the displacement and hence the trajectory. To obtain the mean trajectory, one must run the scheme multiple times to get the ensemble average. In our computations, each mean trajectory is averaged over 2400 realizations. \section{\label{Con}Convergence of mean trajectories} The simplest situation is a freely roaming polygon in 2D space where no external force is present ($\{\mathbf{F}_i\}_{1\leq i\leq n}=\{\mathbf{0}\}$). The polygon is driven only by $\{\delta\mathbf{F}_i\}_{1\leq i\leq n}$.
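As a concrete illustration, the 2D, force-free version of the scheme in Eq.~(\ref{eq:numerical_scheme}) can be sketched as follows. This is a hedged sketch, not our production code: the noise increment is drawn with the Euler--Maruyama convention $\delta\mathbf{F}_i=\sqrt{2\xi_ikT/\Delta t}\,\mathcal{N}(0,1)$ (our assumption for the sampling step), and the example geometry is an equilateral triangle with friction concentrated on one vertex, an illustrative anisotropic TRC parameter choice.

```python
import numpy as np

def cross2(a, b):
    """z-component of the 2D cross product a x b for (n, 2) arrays."""
    return a[..., 0] * b[..., 1] - a[..., 1] * b[..., 0]

def simulate_polygon(r0, m, xi, kT=1.0, dt=0.1, n_steps=500, seed=0):
    """Explicit Euler sketch of Eqs. (gen-eqn)-(prop2) in 2D with no
    external force; omega is the scalar (z-axis) angular velocity.
    Returns the CoM trajectory of one stochastic realization."""
    rng = np.random.default_rng(seed)
    r0, m, xi = (np.asarray(a, dtype=float) for a in (r0, m, xi))
    r_m = (m[:, None] * r0).sum(0) / m.sum()   # centre of mass
    R = r0 - r_m                               # body vectors R_i^m
    v = np.zeros(2)                            # CoM velocity
    omega = 0.0                                # angular velocity
    M = m.sum()
    I = (m * (R**2).sum(1)).sum()              # moment of inertia (constant)
    traj = np.empty((n_steps + 1, 2))
    traj[0] = r_m
    for step in range(1, n_steps + 1):
        dF = np.sqrt(2.0 * xi[:, None] * kT / dt) * rng.standard_normal(R.shape)
        perp = np.stack([-R[:, 1], R[:, 0]], axis=1)   # z_hat x R_i
        # angular and linear accelerations, Eq. (gen-eqn), evaluated at N-1
        domega = (-(xi * (R**2).sum(1)).sum() * omega
                  - cross2(R, xi[:, None] * v).sum()
                  + cross2(R, dF).sum()) / I
        dv = (-xi.sum() * v - omega * (xi[:, None] * perp).sum(0) + dF.sum(0)) / M
        r_m = r_m + dt * v                      # all updates use step N-1 values
        R = R + dt * omega * perp               # rigidity constraint, Eq. (prop2)
        omega += dt * domega
        v += dt * dv
        traj[step] = r_m
    return traj

# Equilateral triangle with CoM at the origin; friction concentrated on
# vertex 1 makes the body anisotropic TRC (illustrative parameter choice).
s3 = np.sqrt(3.0)
r0 = np.array([[0.5, -s3 / 6], [-0.5, -s3 / 6], [0.0, s3 / 3]])
traj = simulate_polygon(r0, m=[1.0, 1.0, 1.0], xi=[0.28, 0.01, 0.01])
```

Averaging `traj` over many seeds approximates the mean trajectory discussed below; for 3D, `omega` becomes a 3-vector and the cross products are the usual ones.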
Eqs. (\ref{eq:gen-eqn}-\ref{eq:prop2}) are solved under some representative geometries. First we consider an equilateral triangle of anisotropic TRC type, as shown in Fig. \ref{fig:mdCompare} (a). \begin{figure} \includegraphics[width=0.4\textwidth]{mdCompare} \caption{\label{fig:mdCompare}(Color Online) Mean trajectories of vertices and CoM in the absence of external force. Black dashed lines are the initial placement of the polygon. (a) Equilateral triangle, anisotropic TRC. (b) Equilateral triangle, isotropic TRC. (c) Arrow-shaped polygon and (d) equilateral hexagon, isotropic non-TRC.} \end{figure} The mean trajectories of the three vertices and the CoM up to $t=500$ in the $\langle x\rangle$-$\langle y\rangle$ space are generated. In this case, $m_1=m_2=m_3=1$, $\xi_1=0.28$, $\xi_2=\xi_3=0.01$. CoM coincides with GC at $(0,0)$ (black cross in the figure). The CoF by definition is at $(\frac{0.9}{2},-\frac{0.9\sqrt{3}}{6})$, which is very close to vertex 1's initial position $(\frac{1}{2},-\frac{\sqrt{3}}{6})$. Results show that the mean trajectories of the three vertices (red, green, blue circles for vertices 1, 2, 3) and the CoM (black solid line) converge to the CoF unanimously. In Fig. \ref{fig:mdCompare} (b), isotropic TRC is considered: $m_1=m_3=0.5$, $m_2=2$, and $\xi_1=\xi_2=\xi_3=0.1$. Again, CoM is initially placed at $(0,0)$. CoF coincides with GC at $(0,-\frac{\sqrt{3}}{6})$. The mean trajectories converge to CoF as well. Fig. \ref{fig:mdCompare} (c) and (d) compare two isotropic non-TRC cases with rather different shapes: (c) an arrow-shaped polygon and (d) an equilateral hexagon. In (c), vertices 1, 2, 3, 4's initial positions are $(\frac{1}{4},0)$, $(\frac{3}{4},\frac{\sqrt{3}}{6})$, $(-\frac{1}{4},0)$, $(\frac{3}{4},-\frac{\sqrt{3}}{6})$, respectively. $m_1=m_2=m_3=m_4=0.75$, $\xi_1=\xi_2=\xi_3=\xi_4=0.075$. CoF, GC and CoM coincide at $(\frac{3}{8},0)$. However, because of the concave geometry, these centers fall outside the body~\cite{Ayan}.
Again, all the trajectories converge to the CoF. In (d), the initial positions of vertices 1, 2, 3, 4, 5, 6 are $(1,0)$, $(\frac{3}{4},\frac{\sqrt{3}}{4})$, $(\frac{1}{4},\frac{\sqrt{3}}{4})$, $(0,0)$, $(\frac{1}{4},-\frac{\sqrt{3}}{4})$, $(\frac{3}{4},-\frac{\sqrt{3}}{4})$. $m_1=m_2=m_3=m_4=m_5=m_6=0.5$, $\xi_1=\xi_2=\xi_3=\xi_4=\xi_5=\xi_6=0.05$. The CoF, GC and CoM coincide at $(\frac{1}{2},0)$. In this case, the mean trajectories converge to the CoF as well. This convergence behavior in the absence of external force is also verified through a semi-analytical solution to Eqs. (\ref{eq:gen-eqn}-\ref{eq:prop2}), as shown in the Appendix. Surprisingly, the above results suggest that the convergence behavior is invariant with respect to changes in polygon geometry. However, changes in geometry may lead to observational effects. A concave body's CoF can be outside the body (as in Fig. \ref{fig:mdCompare} (c)), whereas the CoF of a convex body is inside. Therefore the observed Brownian motion of a convex body, such as a sphere, looks unbiased, while that of a concave body, such as a boomerang particle, looks biased~\cite{Ayan}. Given a geometry, we further show that the details of convergence depend on the size of the system. Based on the case of Fig. \ref{fig:mdCompare} (a), we explored how the size of the system influences the spatial and temporal scales of the convergence. A comparison of the MD of the CoM for $l=1$, $l=2$, $l=4$ and $l=8$ is made, where $l$ is the edge length of the triangle. \begin{figure} \includegraphics[width=0.7\textwidth]{saturationComp} \caption{\label{fig:saturationComp}(Color Online) (a): MD ($\langle x\rangle$ and $\langle y\rangle$) of the CoM versus time based on Fig. \ref{fig:mdCompare}(a)'s results under different triangle edge lengths $l$. (b): Normalized MD ($\langle x\rangle/l$ and $\langle y\rangle/l$) of the CoM versus time based on Fig. \ref{fig:mdCompare}(a)'s results under different triangle edge lengths $l$.} \end{figure} Fig.
\ref{fig:saturationComp} (a) shows $\langle x\rangle$ and $\langle y\rangle$ of the CoM versus time. Clearly, the larger $l$, the higher the plateau. This is reasonable because the larger the triangle, the wider the separation between the CoM and the CoF. Fig. \ref{fig:saturationComp} (b) shows $\langle x\rangle/l$ and $\langle y\rangle/l$ versus time. The normalized MDs converge to the same plateau for different $l$ values, which confirms that the plateau is proportional to and bounded by the triangle size. It also shows that the larger the triangle, the longer it takes to reach the plateau, although given enough time the triangle will eventually equilibrate with the environment. From then on, the body behaves on average like a point. Notably, there is a zero-size limit: for triangles of very small size, the spatial and temporal scales of the convergence become negligible, and the MD of any TP on them would be zero, as predicted by the classical theory of Brownian motion. Because of the early-stage MD increase and late-stage MD plateau, the mean squared displacement (MSD) will typically first increase quickly and then converge back to classical Brownian behavior ($\langle\Delta\mathbf{r}^2\rangle\propto t$), exhibiting a crossover~\cite{Ayan}. We refer the reader to works dealing with the MSD of different TPs, such as Refs.~\cite{delong,Ayan}. The primary reason why the MSD is not chosen here as the signature for discriminating regimes is that directional and orientational information, which is important for our study, is lost in the MSD. Now consider an external force $\{\mathbf{F}_i\}_{1\leq i\leq n}$. Two forms of force are considered here: a constant force and a harmonic force. In the constant force scenario, each vertex feels the same force no matter where the body is. In the harmonic force case, the forces felt by different vertices differ because the vertices have distinct distances from the valley of the harmonic potential. In Fig. \ref{fig:convergence_const} (a), the triangle in Fig.
\ref{fig:mdCompare} (b) is subjected to a constant force $\mathbf{F}=(0.001,0.001)$ ($\mathbf{F}_1=\mathbf{F}_2=\mathbf{F}_3=\mathbf{F}$). The mean trajectories (red for $m_1$, green for $m_2$, blue for $m_3$) of the three vertices and that of the CoM (black solid line) up to $t=500$ are shown in the figure. These trajectories converge to a single one, which is the translation of the CoF along $\mathbf{F}$'s direction. \begin{figure} \includegraphics[width=0.48\textwidth]{figure_convergence_const} \includegraphics[width=0.48\textwidth]{figure_convergence_harmonic} \caption{\label{fig:convergence_const}(Color Online) Convergence of mean trajectories of vertices on (a) an equilateral triangle (isotropic TRC) and (b) an equilateral hexagon (isotropic non-TRC) under a constant force, and on (c) an equilateral triangle (isotropic TRC) and (d) an equilateral hexagon (isotropic non-TRC) under a harmonic force.} \end{figure} As a comparison, in Fig. \ref{fig:convergence_const} (b), the hexagon in Fig. \ref{fig:mdCompare} (d) is driven by a constant force. The figure shows the mean trajectories of the six vertices of the hexagon up to $t=1300$. The convergence to a single trajectory is also observed. In Fig. \ref{fig:convergence_const} (c) and (d), the constant force in Fig. \ref{fig:convergence_const} (a) and (b) is replaced by a harmonic force. Depending on the spring strength, the mean trajectories can overshoot. However, they ultimately converge to a point in the valley. The results in this section show that for an isotropic body, whether or not an external force is present, the mean trajectories converge. They converge to the CoF or to the translation of the CoF along the external force's direction. For an anisotropic body, the convergence holds if the body is free. \section{\label{Align}Alignment of the Mean Trajectories} In the alignment regime, the mean trajectories juxtapose each other in parallel. If an anisotropic polygon is subjected to an external force, the system can fall into the alignment regime.
For example, in Fig. \ref{fig:alignmentcomp} (a), the triangle in Fig. \ref{fig:mdCompare} (a) is subjected to a constant force $\mathbf{F}=(0.001,0.001)$ ($\mathbf{F}_1=\mathbf{F}_2=\mathbf{F}_3=\mathbf{F}$). In this case $\xi_1$ is significantly larger than $\xi_2$ and $\xi_3$, and vertex 1 experiences the highest resistance force in the triangle's motion. This is because the friction force is $\mathbf{f}_i=-\xi_i\mathbf{v}_i=-\xi_i(\mathbf{v}_m+\boldsymbol{\omega}\times\mathbf{R}^m_i)$. Here $\xi_1$ is 27 times larger than $\xi_2$ and $\xi_3$, while the magnitudes of the $\mathbf{v}_i$ differ from one another by at most $|\boldsymbol{\omega}|\cdot|\mathbf{R}^m_i|$. $|\mathbf{R}^m_i|$ is bounded by the triangle size, which is $\sim1$. And according to Fig. \ref{fig:twistregime}, the magnitude $|\boldsymbol{\omega}|$ is smaller than 1 even for a very strong force. Therefore the effect of the nonuniform $\{\xi_i\}_{1\leq i\leq n}$ is overwhelming and $|\mathbf{f}_1|\gg|\mathbf{f}_2|\simeq|\mathbf{f}_3|$. Consequently, vertices 2 and 3 are pushed to the front with vertex 1 lagging behind. The polygon reorients to accommodate the force applied in the $(1,1)$ direction. \begin{figure} \includegraphics[width=0.65\textwidth]{figure_alignment} \caption{\label{fig:alignmentcomp}(Color Online) (a) Alignment of mean trajectories of vertices on an equilateral triangle (anisotropic TRC) under a constant force. (b) The final convergence of mean trajectories of vertices on an equilateral triangle (anisotropic TRC) under a harmonic force.} \end{figure} After this accommodation, the trajectories continue and keep their distance from each other. The situation becomes more interesting if we replace the constant force with a harmonic force, as shown in Fig. \ref{fig:alignmentcomp} (b). Similar to (a), the triangle tends to align with the force, but before it manages to do so, the body shifts to the other side of the potential well. Then it must reorient to the force, which now points in the opposite direction.
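Returning to the constant-force case of Fig.~\ref{fig:alignmentcomp} (a), the dominance of vertex 1's drag can be illustrated numerically with $\mathbf{f}_i=-\xi_i(\mathbf{v}_m+\boldsymbol{\omega}\times\mathbf{R}^m_i)$; the values of $\mathbf{v}_m$ and $\boldsymbol{\omega}$ below are representative magnitudes assumed for illustration, not taken from the simulation:

```python
import numpy as np

def rot90(omega, r):
    # 2D cross product omega * z_hat x r
    return omega * np.array([-r[1], r[0]])

xi = np.array([0.28, 0.01, 0.01])              # anisotropic friction of Fig. (a)
R = np.array([[0.5, -np.sqrt(3)/6],
              [0.0,  np.sqrt(3)/3],
              [-0.5, -np.sqrt(3)/6]])          # body vectors from the CoM
v_m, omega = np.array([0.01, 0.01]), 0.005     # assumed representative values

f = np.array([-x * (v_m + rot90(omega, Ri)) for x, Ri in zip(xi, R)])
mag = np.linalg.norm(f, axis=1)
print(mag[0] / mag[1], mag[0] / mag[2])        # vertex 1's drag dominates, ~ xi_1/xi_2
```

Because the vertex speeds differ by at most $|\boldsymbol{\omega}|\cdot|\mathbf{R}^m_i|$, the drag ratio stays close to the friction ratio, so $|\mathbf{f}_1|\gg|\mathbf{f}_2|\simeq|\mathbf{f}_3|$.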
Depending on the magnitude of the spring constant, the triangle can touch the center line several times, or just once. Ultimately the triangle resides on the center line, where the force is zero. Under zero force, the mean trajectories converge. This is consistent with the results in Section \ref{Con}. Consequently, the alignment regime does not appear in this force case. Furthermore, if the spring constant is too small to trap the triangle, the triangle behaves like a free body and falls into the convergence regime as well. In general, the alignment regime applies when the external force does not frequently change its direction and persists long enough that the polygon has enough time to reorient itself towards the force. \section{\label{twist}Twist of the Mean Trajectories} In Fig. \ref{fig:alignmentcomp} (a), before the triangle manages to align with the force, there is a period during which the force is correcting the triangle's orientation. We find that as the magnitude of the external force rises, this regime becomes more salient, and unlike the cleanly aligned trajectories, the mean trajectories can become twisted and intertwined into a plait structure. We identify this as the twist regime. \begin{figure*} \includegraphics[width=0.98\textwidth]{twist_regime1} \caption{\label{fig:twistregime}(Color Online) The twist regime developed before alignment is achieved under different magnitudes of the constant external force. (a), (b), (c), (d) show the mean trajectories traced by vertices 1, 2, 3 and the CoM (red circle, green asterisk, blue cross and black solid line) based on the simulation in Fig. \ref{fig:alignmentcomp} (a) under forces of $(0.001,0.001)$, $(0.01,0.01)$, $(0.1,0.1)$, $(1,1)$, respectively. The small panel at the top left corner shows the whole trajectory up to $t=500$. The major panel shows a snapshot of the twist part of the whole trajectory. As the force magnitude increases, the mean trajectories traced by different vertices become increasingly intertwined.
(e), (f), (g), (h) display the ensemble average of the triangle's angular velocity corresponding to (a), (b), (c), (d).} \end{figure*} Fig. \ref{fig:alignmentcomp} (a) is selected as the base case. As shown in Figs. \ref{fig:twistregime} (a)-(d), the magnitude of the constant force rises from 0.001, 0.01, 0.1 to 1. As the force strengthens, the twist of the mean trajectories becomes increasingly significant. Figs. \ref{fig:twistregime} (e)-(h) show the ensemble average of the angular velocity corresponding to (a)-(d). $\langle\omega\rangle$ reflects the rotation caused by the deterministic force. In (a) and (e), the force is small and one witnesses a very mild undulation of the mean angular velocity, which ultimately attenuates to zero, meaning the triangle has reached stable alignment. The triangle first rotates clockwise (negative $\langle\omega\rangle$), then counterclockwise (positive $\langle\omega\rangle$). When the force rises to 0.01, in (b) and (f) the frequency of switching between clockwise and counterclockwise ramps up. The amplitude of the mean angular velocity also gets higher. Proceeding to 0.1 ((c) and (g)) and 1 ((d) and (h)), both the frequency and the amplitude build up. This is because a large force overcorrects the orientation, and this overcorrection gets corrected again and again until the triangle reaches the stable orientation. Under the switching between clockwise and counterclockwise rotation, the triangle wiggles around the force's direction and the mean trajectories become twisted and intertwined. Ultimately the triangle aligns with the force. Note that twist does not necessarily end in alignment. For example, in the harmonic force case in Fig. \ref{fig:alignmentcomp} (b), the triangle ends up in the convergence regime. Therefore, which regime is sampled also depends on the form of the external force.
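The overcorrection mechanism described above can be caricatured by a toy model with a single orientation degree of freedom (all constants below are illustrative stand-ins, not the paper's values): an aligning torque $-F\sin\theta$ pulls the orientation $\theta$ toward the force while a drag $-c\,\omega$ damps it. Counting the sign switches of $\omega$ reproduces the trend of Figs.~\ref{fig:twistregime} (e)-(h): a stronger force switches rotation direction more often before settling.

```python
import numpy as np

def omega_sign_switches(F, c=0.05, theta0=1.0, dt=0.01, T=200.0):
    """Toy overcorrection model: theta'' = -c*theta' - F*sin(theta).

    Returns how many times the angular velocity changes sign before t = T.
    All parameters are illustrative, not values from the paper.
    """
    theta, omega = theta0, 0.0
    switches, prev = 0, 0.0
    for _ in range(int(T / dt)):
        domega = -c * omega - F * np.sin(theta)
        theta += dt * omega                 # explicit Euler, old-state RHS
        omega += dt * domega
        if prev * omega < 0.0:              # clockwise <-> counterclockwise flip
            switches += 1
        prev = omega
    return switches

counts = [omega_sign_switches(F) for F in (0.01, 0.1, 1.0)]
print(counts)   # switching frequency grows with the force magnitude
```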
Under realistic conditions, the potential field can be irregular, such that the force frequently changes its direction and magnitude, which may keep the system in the twist regime indefinitely. One interesting aspect that can be investigated with this section's apparatus is how the Brownian behavior changes as one increases the magnitude of the external force. One obtains the Brownian angular velocity by subtracting $\langle\omega\rangle$ from the instantaneous angular velocity $\omega$, \begin{equation} \label{eq:bangular} \Delta\omega=\omega-\langle\omega\rangle. \end{equation} $\Delta\omega$ represents the contribution from the Brownian source. Typical results for $\Delta\omega$ corresponding to Fig. \ref{fig:twistregime} (e) - (h) are shown in Fig. \ref{fig:frocemagnitude} (a) - (d). \begin{figure} \includegraphics[width=0.35\textwidth]{brownian_intensify} \caption{\label{fig:frocemagnitude} (Color Online) Instantaneous angular velocity contributed by the Brownian source under different magnitudes of external force for the anisotropic triangle.} \end{figure} The figure shows that as the magnitude of the external force rises, the Brownian fluctuation is excited. After alignment has been achieved (mostly before $t=200$, as shown in Fig. \ref{fig:twistregime}), the Brownian force still induces small rotations of the triangle even though $\langle\omega\rangle=0$. Again, the excitation is caused by the correction mechanism. Every now and then the aligned triangle proposes a random rotation; the force rejects and corrects it. And if the force is large, the correction is quick. It looks as if the external force is fueling the Brownian rotation. Numerically, this stems from the translation-rotation coupling in the original equations. Conversely, if we go from a large force to a small force to zero force, the Brownian fluctuation relaxes to low frequency. Therefore, the convergence regime typically features relaxed Brownian fluctuation.
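The decomposition of Eq.~(\ref{eq:bangular}) amounts to subtracting the ensemble-mean angular velocity from each realization. With synthetic stand-in data (the drift and noise below are illustrative, not simulation output), the bookkeeping looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 500.0, 1001)

# Synthetic ensemble: a deterministic drift (playing the role of <omega>)
# plus white noise, for 2400 realizations as in the paper's averaging.
drift = -0.02 * np.exp(-t / 50.0) * np.sin(0.1 * t)
ensemble = drift + 0.05 * rng.standard_normal((2400, t.size))

mean_omega = ensemble.mean(axis=0)       # estimate of <omega>
delta_omega = ensemble - mean_omega      # Eq. (bangular): Brownian contribution

# The Brownian part averages to zero across the ensemble by construction.
print(np.abs(delta_omega.mean(axis=0)).max())
```

Each row of `delta_omega` is then a single-realization $\Delta\omega(t)$ of the kind plotted in Fig.~\ref{fig:frocemagnitude}.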
\section{\label{transition}Transition among Convergence, Alignment and Twist} As partly discussed in previous sections, a polygon can sample and transition from one regime to another. In this section we consider a force that decays exponentially in space, where all three regimes can be experienced in one path. Based on Fig. \ref{fig:alignmentcomp} (a), the constant force is replaced by an exponentially decaying force with respect to distance in the $(1,1)$ direction. The mean trajectories under a small decay rate are shown in Fig. \ref{fig:tactransition} (a). \begin{figure} \includegraphics[width=0.57\textwidth]{tac_transition} \includegraphics[width=0.42\textwidth]{tc_transition} \caption{\label{fig:tactransition} (Color Online) Mean trajectories traced by vertices of the triangle in Fig. \ref{fig:alignmentcomp} (a) under an exponentially decaying force in space up to $t=500$. (a) Small decay rate. (b) High decay rate.} \end{figure} The triangle experiences sequentially the twist, alignment and convergence regimes until the force decays to zero. If, instead, the decay rate is high, one may only have the twist-to-convergence transition, as shown in Fig. \ref{fig:tactransition} (b). In general, the overall magnitude of the force determines the sampling between the convergent and nonconvergent regimes (alignment and twist), while the details of the force determine whether alignment or twist is chosen. For example, for a free anisotropic body (zero force), convergence rules. Once a nontrivial force is established, all three regimes are possible depending on the details and idiosyncrasies of the field. For instance, as shown above, an anisotropic body ends up in alignment under a constant force, whereas it ends up in convergence in harmonic and exponentially decaying potentials. Since the Brownian rotation is correlated with the magnitude of the external force (Fig.
\ref{fig:frocemagnitude}), if the regime change involves a change in force magnitude, one should expect a modification of the Brownian behavior. We arbitrarily select a single triangle from the ensemble and obtain its Brownian angular velocity $\Delta\omega$ up to $t=800$ in the setting of Fig. \ref{fig:tactransition} with a slowly decaying force ($\mathbf{F}=(1,1)\cdot e^{-\frac{0.001}{\sqrt{2}}(x+y+1)}$). The results are shown in Fig. \ref{fig:tactranomega}. \begin{figure} \includegraphics[width=0.5\textwidth]{tac_transition_omega} \caption{\label{fig:tactranomega} (Color Online) The frequency attenuation of the instantaneous angular velocity contributed by the Brownian source as the system undergoes a twist-to-alignment-to-convergence transition under a slowly decaying exponential force.} \end{figure} It can be seen that the frequency attenuates as the regime changes from twist to alignment to convergence. When the transition between twist and alignment does not involve a change in the force's magnitude (as in the constant force case), there is no frequency change during the transition. \section{\label{2to3}From Polygon to Polyhedron} It is well known that 3D Brownian motion differs from its 2D version~\cite{Han0,Han1,MUKHIJA}. As discussed in Refs.~\cite{Han0,Han1,MUKHIJA}, 2D and quasi-2D confinement significantly increase the friction anisotropy of the particle and impact the anisotropy in diffusion compared to 3D. In other words, from 2D to 3D, one may expect the influence of friction anisotropy to decrease. Therefore it is interesting and meaningful to investigate how the patterns we found above for a 2D polygon change when one instead has a polyhedron. As displayed in Fig.
\ref{fig:convergence_3d} (a), \begin{figure*} \includegraphics[width=0.98\textwidth]{figure_convergence_3d} \caption{\label{fig:convergence_3d}(Color Online) The convergence of mean trajectories of vertices on a tetrahedron (a) of isotropic non-TRC in the absence of external force, (b) of anisotropic TRC in the absence of external force, (c) of isotropic non-TRC in the presence of a constant force, (d) of anisotropic TRC in the presence of a constant force. Red, green, blue, magenta circles for trajectories of vertices 1, 2, 3, 4. Black solid line for trajectory of CoM.} \end{figure*} we start by considering a simple isotropic equilateral tetrahedron without external force. The initial coordinates of the four vertices 1, 2, 3, 4 are $(\frac{1}{2},-\frac{\sqrt{3}}{6},-\frac{\sqrt{6}}{12})$, $(0,\frac{\sqrt{3}}{3},-\frac{\sqrt{6}}{12})$, $(-\frac{1}{2},-\frac{\sqrt{3}}{6},-\frac{\sqrt{6}}{12})$, $(0,0,\frac{\sqrt{6}}{4})$, respectively. $m_1=m_2=m_3=m_4=1$. In (a) the friction is distributed evenly among the vertices as $\xi_1=\xi_2=\xi_3=\xi_4=0.075$. Then by definition the CoM and the CoF coincide at $(0,0,0)$. The figure shows the mean trajectories of the four vertices up to $t=160$. They converge to the CoF in the end. In Fig. \ref{fig:convergence_3d} (b), the friction is redistributed as $\xi_1=\xi_2=\xi_3=0.04$ and $\xi_4=0.18$. Now we have an anisotropic tetrahedron with vertex 4 experiencing much more resistance from the medium. Consequently, the new location of the CoF is about $(0,0,0.2858)$. The figure shows the mean trajectories of the four vertices and the CoM (black solid line) up to $t=160$. In the end they all converge to the CoF. The results in Fig. \ref{fig:convergence_3d} (a) and (b) lead to the same conclusion as in the 2D system: in the absence of external force, the mean trajectories converge regardless of isotropy or anisotropy. We carry on to subject the isotropic tetrahedron in (a) to a constant external force $\mathbf{F}=(0.001,0.001,0.001)$, shown in Fig.
\ref{fig:convergence_3d} (c). This figure shows the mean trajectories up to $t=400$. In the end they converge to one single trajectory. This again leads to the same conclusion as in the 2D system: the mean trajectories converge for an isotropic body under external force. As a comparison, Fig. \ref{fig:convergence_3d} (d) shows the results of subjecting the anisotropic tetrahedron in (b) to the constant force. They end up in convergence as well. This differs from the 2D system, where an anisotropic body under external force can experience the alignment and twist regimes. In 3D, the alignment and twist regimes disappear. This can be understood as follows: the body tries to align itself with the force, but rotation about the ``alignment axis'' again leads to convergence. The wiggle about the ``alignment axis'' is also evened out by the extra rotations, hence there is no twist regime. Adding more rotational degrees of freedom has made the particle behave more isotropically~\cite{MUKHIJA}. We summarize the results thus far in Tab. \ref{tab:regimeclass}. \begin{table} \caption{Regime classifications for interactions between mean trajectories of multiple tracking points on a Brownian rigid body. (A = Alignment, C = Convergence, T = Twist)} \label{tab:regimeclass} \begin{center} \begin{tabular}{ccccc} \hline \hline & \multicolumn{2}{c}{2D} & \multicolumn{2}{c}{3D}\\ & Free & Forced & Free & Forced\\ \hline Isotropic & C & C & C & C\\ Anisotropic & C & C/A/T & C & C\\ \hline \hline \end{tabular} \end{center} \end{table} As the table shows, the convergence regime has an overwhelming dominance in most situations. Only for a forced anisotropic body in 2D might one have the alignment and twist regimes, which are made possible by translation-rotation coupling. Translation-rotation coupling is also responsible for the modification of Brownian behavior during regime transitions.
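The CoF location $(0,0,0.2858)$ quoted for the anisotropic tetrahedron can be checked with the same friction-weighted-centroid formula assumed earlier, $\mathbf{r}_{\mathrm{CoF}}=\sum_i\xi_i\mathbf{r}_i/\sum_i\xi_i$:

```python
import numpy as np

s3, s6 = np.sqrt(3.0), np.sqrt(6.0)
# Initial vertex coordinates of the equilateral tetrahedron (Figs. (a)/(b)).
verts = np.array([[ 0.5, -s3/6, -s6/12],
                  [ 0.0,  s3/3, -s6/12],
                  [-0.5, -s3/6, -s6/12],
                  [ 0.0,  0.0,   s6/4]])

xi = np.array([0.04, 0.04, 0.04, 0.18])   # anisotropic case of Fig. (b)
cof = (xi[:, None] * verts).sum(axis=0) / xi.sum()
print(cof)   # -> [0, 0, ~0.2858]: vertex 4's large drag pulls the CoF upward
```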
However, such coupling yields to the overwhelming effect of the multiple rotational degrees of freedom in 3D space. This is consistent with the results for an ellipsoid, where the effect of translation-rotation coupling is much stronger under 2D and quasi-2D confinement~\cite{Han0} compared to 3D. \section{\label{Conclusion}Conclusions} We have identified three regimes of interaction, namely convergence, alignment and twist, between the mean trajectories of different tracking points on a Brownian rigid body based on a polygon/polyhedron representation. Depending on the properties of the rigid body and the external force, in 2D the body can sample from and transition between these three regimes, while in 3D only convergence is sampled. When a 2D body is transitioning between regimes, its Brownian behavior can be modified. Translation-rotation coupling plays a fundamental role in making the nonconvergent regimes (alignment and twist) possible. Otherwise, the convergence regime dominates. Our results show that in most situations, different tracking points on a Brownian rigid body are statistically the same, because of the convergence of the mean trajectories. Only for systems in the alignment and twist regimes are different tracking points statistically different, such that an inappropriately chosen tracking point could lead to an error in displacement. However, such error is shown to be bounded by the size of the particle. Therefore, if the particle size is relatively small compared to the spatial scale one is interested in, the choice of tracking point will not be a concern. Nevertheless, in scenarios such as rigid particle sedimentation near a surface, where the spatial scale is reduced to be commensurate with the particle size, there can be a special choice of tracking point that best estimates the long-time transport coefficient~\cite{delong}.
The result that only the convergence regime survives from 2D to 3D suggests that the Brownian behaviors of a rigid body in bulk medium and under confinement are quite different. Confinement in space limits the number of rotational degrees of freedom and allows more anisotropic behaviors, while increasing the number of rotational degrees of freedom makes the particle behave much more isotropically~\cite{Han0,Han1,MUKHIJA}. It is somewhat surprising that our extremely simplified model has captured such a dramatic change across dimensions. Indeed, there remain more details to uncover in future works about the interactions between anisotropy, translation-rotation coupling and spatial confinement in Brownian dynamics. \section*{APPENDIX: The Semi-Analytical Solution of Mean Displacement of CoM} This appendix presents the semi-analytical solution for the mean displacement (MD) of the CoM in the absence of external forces. When $\{\mathbf{F}_i\}_{1\leq i\leq n}=\{\mathbf{0}\}$, from Eq. (\ref{eq:gen-eqn}), one obtains \begin{equation} \frac{d\mathbf{v}_m}{dt}=-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}\mathbf{v}_m-\boldsymbol{\omega}\times\frac{1}{\sum_{i=1}^{n}m_i}\sum_{i=1}^{n}\xi_i\mathbf{R}^m_i+\frac{1}{\sum_{i=1}^{n}m_i}\sum_{i=1}^{n}\delta\mathbf{F}_i. \tag{A.1} \end{equation} Integrating Eq. (A.1) and using $\Delta\mathbf{r}_m=\int_0^t\mathbf{v}_mdt'$ (with $\mathbf{v}_m(0)=\mathbf{0}$ applied), we arrive at an equation for the displacement $\Delta\mathbf{r}_m$, \begin{equation} \frac{d}{dt}\Delta\mathbf{r}_m=-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}\Delta\mathbf{r}_m-\frac{1}{\sum_{i=1}^{n}m_i}\int_0^t\boldsymbol{\omega}\times(\sum_{i=1}^{n}\xi_i\mathbf{R}^m_i)dt' +\frac{1}{\sum_{i=1}^{n}m_i}\sum_{i=1}^{n}\int_0^t\delta\mathbf{F}_idt'. \tag{A.2} \end{equation} Taking the ensemble average of Eq.
(A.2), one gets the Langevin equation for the MD $\langle\Delta\mathbf{r}_m\rangle$, \begin{equation} \frac{d}{dt}\langle\Delta\mathbf{r}_m\rangle=-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}\langle\Delta\mathbf{r}_m\rangle-\frac{1}{\sum_{i=1}^{n}m_i}\Big\langle\int_0^t\boldsymbol{\omega}\times(\sum_{i=1}^{n}\xi_i\mathbf{R}^m_i)dt'\Big\rangle. \tag{A.3} \end{equation} By Eq. (\ref{eq:prop2}), the integral in Eq. (A.3) can be carried out, \begin{equation} \frac{d}{dt}\langle\Delta\mathbf{r}_m\rangle=-(\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i})\langle\Delta\mathbf{r}_m\rangle-\frac{1}{\sum_{i=1}^{n}m_i}\sum_{i=1}^{n}\xi_i\langle\mathbf{R}^m_i\rangle +\frac{1}{\sum_{i=1}^{n}m_i}\sum_{i=1}^{n}\xi_i\mathbf{R}^m_i(0). \tag{A.4} \end{equation} Applying the initial condition $\Delta\mathbf{r}_m(0)=\mathbf{0}$, the solution to Eq. (A.4) is \begin{equation} \langle\Delta\mathbf{r}_m\rangle=-\frac{1}{\sum_{i=1}^{n}m_i}\int_0^t\Big(\sum_{i=1}^{n}\xi_i(\langle\mathbf{R}^m_i(t')\rangle-\mathbf{R}^m_i(0))\Big) e^{-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}(t-t')}dt'. \tag{A.5} \end{equation} The $\mathbf{R}^m_i(0)$ term can be integrated explicitly, and Eq. (A.5) simplifies to \begin{equation} \langle\Delta\mathbf{r}_m\rangle=\frac{\sum_{i=1}^{n}\xi_i\mathbf{R}^m_i(0)}{\sum_{i=1}^n\xi_i}(1-e^{-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}t}) -\frac{1}{\sum_{i=1}^{n}m_i}\int_0^t\Big(\sum_{i=1}^{n}\xi_i\langle\mathbf{R}^m_i(t')\rangle\Big) e^{-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}(t-t')}dt'. \tag{A.6} \end{equation} Eq. (A.6) can be expressed in a more concise way. Let us introduce the vector $\mathbf{S}_F^m$, i.e., the vector from the CoM to the CoF. The $\sum_{i=1}^{n}\xi_i\langle\mathbf{R}^m_i(t')\rangle$ term can be rewritten as \begin{equation} \sum_{i=1}^{n}\xi_i\langle\mathbf{R}^m_i(t')\rangle=\langle\sum_{i=1}^{n}\xi_i(\mathbf{S}_F^m(t')+\mathbf{R}^f_i(t'))\rangle=(\sum_{i=1}^{n}\xi_i)\langle\mathbf{S}_F^m(t')\rangle. \notag \end{equation} This is true because of Eq.
(\ref{eq:prop1}). Therefore Eq. (A.6) reduces to \begin{equation} \langle\Delta\mathbf{r}_m\rangle=\mathbf{S}_F^m(0)(1-e^{-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}t})-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}\int_0^t\langle\mathbf{S}_F^m(t')\rangle e^{-\frac{\sum_{i=1}^{n}\xi_i}{\sum_{i=1}^{n}m_i}(t-t')}dt'. \tag{A.7} \end{equation} \begin{figure} \includegraphics[width=0.4\textwidth]{appendix} \caption{\label{fig:aa}(Color Online) (a) The behavior of $\langle\mathbf{S}_F^m\rangle$ versus time. (b) The behavior of $\langle\Delta\mathbf{r}_m\rangle$ versus time.} \end{figure} This result shows that $\langle\Delta\mathbf{r}_m\rangle$ is biased toward the CoF until it saturates. This is only a semi-analytical result because we do not have an analytical solution for $\langle\mathbf{S}_F^m\rangle$, which is determined by the angular velocity $\omega$. The translation-rotation coupling in Eq. (\ref{eq:gen-eqn}) makes it difficult to obtain an analytical solution for $\omega$. However, based on the case in Fig. \ref{fig:mdCompare} (a), we numerically compute the behavior of $\langle\mathbf{S}_F^m\rangle$ versus time. The result is shown in Fig. \ref{fig:aa} (a). With the numerical solution for $\langle\mathbf{S}_F^m\rangle$, we can substitute into Eq. (A.7) and obtain the MD of the CoM. The result is shown in Fig. \ref{fig:aa} (b). It indicates that the contribution from the integral part of Eq. (A.7) is negligible and the MD is almost entirely determined by the first part of Eq. (A.7), a saturated exponential growth, which agrees with the experimental and theoretical results of a boomerang colloidal particle study~\cite{Ayan}.
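Neglecting the integral term of Eq.~(A.7), which Fig.~\ref{fig:aa} (b) indicates is small, the MD reduces to the saturated exponential $\mathbf{S}_F^m(0)\big(1-e^{-\frac{\sum_i\xi_i}{\sum_i m_i}t}\big)$. A short numerical sketch for the triangle of Fig.~\ref{fig:mdCompare} (a), again assuming the CoF is the friction-weighted centroid of the vertices:

```python
import numpy as np

m = np.array([1.0, 1.0, 1.0])
xi = np.array([0.28, 0.01, 0.01])
verts = np.array([[0.5, -np.sqrt(3)/6],
                  [0.0,  np.sqrt(3)/3],
                  [-0.5, -np.sqrt(3)/6]])

# S_F^m(0): vector from the CoM (at the origin) to the friction-weighted centroid.
S0 = (xi[:, None] * verts).sum(axis=0) / xi.sum()
a = xi.sum() / m.sum()                               # decay rate in Eq. (A.7)

t = np.linspace(0.0, 500.0, 2001)
md = S0[None, :] * (1.0 - np.exp(-a * t))[:, None]   # first term of Eq. (A.7)
print(md[-1])   # plateau ~ S0 = (0.45, -0.2598): the MD saturates at the CoF
```

The plateau equals the initial CoM-to-CoF separation, consistent with the size scaling observed in Fig.~\ref{fig:saturationComp}.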
\section{The Zak transform of a compact subgroup} \label{sec:Zak} In Sections \ref{sec:Zak} -- \ref{sec:tranFrm}, $G$ is a second countable locally compact group (not necessarily abelian), and $K\subset G$ is a compact subgroup. Our main result is the existence of an operator-valued Zak transform on $L^2(G)$ that treats left translation by $K$ in a manner similar to the Fourier transform on $L^2(K)$. This operator will form the basis for our classification of $K$-invariant subspaces of $L^2(G)$ in Section \ref{sec:ranTran}, and for our analysis of frames formed by $K$-translates in Section \ref{sec:tranFrm}. The reader may consult \cite{F,HR2} for background on compact groups and their representations. We record a few of the basics here. Throughout the paper, we normalize Haar measure on $K$ so that $|K| = 1$. The left and right translates of $f\colon K \to \mathbb{C}$ by $\xi \in K$ are denoted $L_\xi f$ and $R_\xi f$, respectively. That is, \[ (L_\xi f)(\eta) = f(\xi^{-1} \eta), \quad (R_\xi f)(\eta) = f(\eta \xi) \qquad (\eta \in K). \] We give $L^2(K)$ the usual convolution and involution, namely \[ (f * g)(\xi) = \int_K f(\eta) g(\eta^{-1} \xi)\, d\eta \qquad (f,g\in L^2(K);\ \xi \in K) \] and \[ (f^*)(\xi) = \overline{f(\xi^{-1})} \qquad (f \in L^2(K),\ \xi \in K). \] These operations make $L^2(K)$ a Banach $*$-algebra. The dual object of $K$ is $\hat{K}$; it has one representative of each equivalence class of irreducible unitary representations of $K$. Each $\pi \in \hat{K}$ acts on a finite dimensional space, which we denote $\H_\pi$. Its dimension is $d_\pi = \dim \H_\pi$. The \emph{Fourier transform} of $f\in L^2(K)$ evaluated at $\pi \in \hat{K}$ is the operator \[ \hat{f}(\pi) = \int_K f(\xi) \pi(\xi^{-1})\, d\xi \in B(\H_\pi), \] where the integral is to be interpreted in the weak sense. 
For our purposes, the utility of the Fourier transform lies in the formulae \begin{equation} \label{eq:FourTrans} (L_\xi f)\char`\^(\pi) = \hat{f}(\pi) \pi(\xi^{-1}), \quad (R_\xi f)\char`\^(\pi) = \pi(\xi) \hat{f}(\pi) \qquad (f \in L^2(K),\ \xi \in K,\ \pi \in \hat{K}) \end{equation} and \begin{equation}\label{eq:FourConv} (f^*)\char`\^(\pi) = \hat{f}(\pi)^*, \quad (f * g)\char`\^(\pi) = \hat{g}(\pi) \hat{f}(\pi) \qquad (f,g \in L^2(K);\ \pi \in \hat{K}). \end{equation} If $B(\H_\pi)$ is treated as a Hilbert space with inner product $\langle A, B \rangle = d_\pi \langle A,B \rangle_{\mathcal{HS}} = d_\pi \tr(B^*A)$, the Fourier transform may be viewed as a unitary \[ \mathcal{F} \colon L^2(K) \to \bigoplus_{\pi \in \hat{K}} B(\H_\pi), \qquad \mathcal{F} f = (\hat{f}(\pi))_{\pi \in \hat{K}}. \] This is called \emph{Plancherel's Theorem}. When an orthonormal basis $e_1^\pi,\dotsc,e_{d_\pi}^\pi \in \H_\pi$ is chosen for each $\pi \in \hat{K}$, we define the \emph{matrix elements} $\pi_{i,j} \in C(K)$ by \[ \pi_{i,j}(\xi) = \langle \pi(\xi) e_j^\pi, e_i^\pi \rangle \qquad (\pi \in \hat{K};\ \xi \in K;\ i,j = 1,\dotsc,d_\pi). \] In other words, the matrix for $\pi(\xi)$ with respect to the chosen basis is $(\pi_{i,j}(\xi))_{i,j=1}^{d_\pi}$. For $f \in L^2(K)$, the $(i,j)$-entry of the matrix for $\hat{f}(\pi)$ over this basis is \[ \hat{f}(\pi)_{i,j} = \int_K f(\xi) \overline{\pi_{i,j}(\xi)}\, d\xi \qquad (f \in L^2(K);\ \pi \in \hat{K};\ i,j=1,\dotsc,d_\pi). \] The \emph{contragredient} to $\pi\in \hat{K}$ is the representation $\overline{\pi}$ on $\H_\pi$ with matrix elements \[ \overline{\pi}_{i,j}(\xi) = \overline{\pi_{i,j}(\xi)} \qquad (\xi \in K;\, i,j=1,\dotsc,d_\pi). \] The contragredient of an irreducible representation is also irreducible. The \emph{Peter-Weyl Theorem} asserts that \[ \{ \sqrt{d_\pi} \pi_{i,j} : \pi \in \hat{K},\, i,j = 1,\dotsc,d_\pi\} \] is an orthonormal basis for $L^2(K)$. 
In particular, \begin{equation}\label{eq:Planch} \Norm{f}_{L^2(K)}^2 = \sum_{\pi \in \hat{K}} \sum_{i,j=1}^{d_\pi} d_\pi | \hat{f}(\pi)_{i,j}|^2 = \sum_{\pi \in \hat{K}} d_\pi \Norm{ \hat{f}(\pi) }_{\mathcal{HS}}^2 \qquad (f \in L^2(K)). \end{equation} \medskip Let $K\backslash G$ be the quotient space of \emph{right} cosets of $K$ in $G$. A \emph{cross section} of $K\backslash G$ in $G$ is a map $\tau \colon K\backslash G \to G$ that selects a representative of each coset. In other words, $\tau(Kx) \in Kx$ for every $Kx \in K\backslash G$. By a classic result of Feldman and Greenleaf \cite{FG}, there is a Borel cross section $\tau \colon K\backslash G \to G$ which maps compact subsets of $K\backslash G$ to sets with compact closure in $G$. Fix such a cross section, and let $T \colon K \times K \backslash G \to G$ be the bijection \begin{equation} \label{eq:measIsom} T(\xi, Kx) = \xi\cdot \tau(Kx) \qquad (\xi \in K,\ Kx \in K\backslash G). \end{equation} By \cite[Theorem 3.6]{I}, $K\backslash G$ admits a unique regular Borel measure with respect to which $T$ is a measure space isomorphism. We shall always have this measure in mind when we treat $K\backslash G$ as a measure space. Given a function $f\colon G \to \mathbb{C}$ and a coset $Kx \in K \backslash G$, we will denote $f_{Kx} \colon K \to \mathbb{C}$ for the function given by \begin{equation} \label{eq:cosetFunct} f_{Kx}(\xi) = f(\xi\cdot \tau(Kx)) \qquad (\xi \in K). \end{equation} Intuitively, we are treating the coset $Kx$ like a copy of $K$ itself, with the chosen representative $\tau(Kx)$ taking the role of the identity element. In this sense, $f_{Kx}$ is just the restriction of $f$ to $Kx$. Obviously, \begin{equation} \label{eq:tranRes} (L_\xi f)_{Kx} = L_\xi(f_{Kx}) \qquad (\xi \in K,\ Kx \in K\backslash G). 
\end{equation} \smallskip \begin{theorem} \label{thm:Zak} There is a unitary \[ Z \colon L^2(G) \to \bigoplus_{\pi \in \hat{K}} B(\H_\pi, L^2(K\backslash G ; \H_\pi) ) \] given by \begin{equation} \label{eq:ZakFour} [(Zf)(\pi) u ](Kx) = (f_{Kx})\char`\^ (\pi) u \qquad (f \in L^2(G),\ \pi \in \hat{K},\ u \in \H_\pi,\ Kx \in K\backslash G). \end{equation} Here $B(\H_\pi, L^2(K\backslash G ; \H_\pi) )$ is treated as a Hilbert space with inner product $\langle A, B \rangle = d_\pi \tr(B^*A)$, and the direct sum is that of Hilbert spaces. For $f \in L^2(G)$, $\xi \in K$, and $\pi \in \hat{K}$, the unitary $Z$ satisfies \begin{equation}\label{eq:ZakTrans} [Z (L_\xi f)](\pi) = (Zf)(\pi)\, \pi(\xi^{-1}). \end{equation} \end{theorem} We call $Z$ the \emph{Zak transform} for the pair $(G,K)$. \begin{proof} The measure space isomorphism $T\colon K \times K\backslash G \to G$ induces a unitary $U \colon L^2(G) \to L^2(K\times K\backslash G)$, namely \[ (Uf)(\xi,Kx) = f(\xi \cdot \tau(Kx)) = f_{Kx}(\xi) \qquad (f \in L^2(G),\ \xi \in K,\ Kx \in K\backslash G). \] Follow this with the canonical unitary $V \colon L^2(K\times K\backslash G) \to L^2(K) \otimes L^2(K\backslash G)$, and then apply \[ \mathcal{F}_K \otimes \text{id} \colon L^2(K) \otimes L^2(K\backslash G) \to [ \bigoplus_{\pi \in \hat{K}} B(\H_\pi) ] \otimes L^2(K\backslash G). \] Finally, make the natural identifications \[ [ \bigoplus_{\pi \in \hat{K}} B(\H_\pi) ] \otimes L^2(K\backslash G) \cong \bigoplus_{\pi \in \hat{K}} [ B(\H_\pi) \otimes L^2(K\backslash G) ] \cong \bigoplus_{\pi \in \hat{K}} B(\H_\pi, \H_\pi \otimes L^2(K\backslash G) ) \] \[ \cong \bigoplus_{\pi \in \hat{K}} B(\H_\pi, L^2(K\backslash G; \H_\pi)). \] The resulting composition is $Z$. The translation identity \eqref{eq:ZakTrans} follows directly from \eqref{eq:tranRes}, \eqref{eq:ZakFour}, and the corresponding identity for the Fourier transform \eqref{eq:FourTrans}. 
\end{proof} \begin{rem} \label{rem:ZakFour} In the extreme case where $K$ is all of $G$, the quotient $K\backslash G$ consists of a single point, and we can interpret $L^2(K\backslash G; \H_\pi)$ as simply being $\H_\pi$. Then the Zak transform reduces to the usual Fourier transform on $L^2(K)$, as long as the cross section $\tau$ chooses the identity element as the representative of the (single) coset of $K$ in $G$. \end{rem} \sloppy In general, the choice of cross-section $\tau$ is noncanonical, and the operator $Z$ depends on this choice. Nonetheless, Zak transforms associated with different cross-sections are easily related. Suppose that $\tau' \colon K\backslash G \to G$ is another cross-section with the required properties. For each $Kx \in K\backslash G$, there is an element $\eta_{Kx} \in K$ such that $\tau'(Kx) = \eta_{Kx} \tau(Kx)$. Denoting $Z_\tau$ and $Z_{\tau'}$ for the versions of the Zak transform obtained using $\tau$ and $\tau'$, respectively, we obtain the following formula from \eqref{eq:ZakFour} and \eqref{eq:FourTrans}: \[ [(Z_{\tau'} f)(\pi) u](Kx) = \pi(\eta_{Kx}) [(Z_\tau f)(\pi) u](Kx) \qquad (f \in L^2(G),\ \pi \in \hat{K},\ u \in \H_\pi,\ Kx \in K\backslash G). \] In other words, $Z_{\tau'}$ can be obtained from $Z_\tau$ by applying post composition with $\pi(\eta_{Kx})$ at each point $Kx \in K\backslash G$ and in every coordinate $\pi \in \hat{K}$. As with the usual Fourier transform on $L^2(K)$, there is another, basis-dependent version of the Zak transform that sometimes makes computation more convenient. When an orthonormal basis $e_1^{\pi}, \dotsc, e_{d_\pi}^\pi$ is chosen for $\H_\pi$, the space $B(\H_\pi, L^2(K\backslash G; \H_\pi) )$ can be identified with $M_{d_\pi}(L^2(K\backslash G))$ by mapping the operator $A$ to the matrix whose $(i,j)$-entry is the function \[ Kx \mapsto \langle (A e_j^\pi)(Kx), e_i^\pi \rangle \qquad (Kx \in K\backslash G). 
\] Under this identification, the inner product on $M_{d_\pi}(L^2(K\backslash G))$ corresponding to the one in the definition of the Zak transform is given by \[ \langle M, N \rangle = d_\pi \sum_{i,j=1}^{d_\pi} \langle M_{i,j}, N_{i,j} \rangle \qquad (M,N \in M_{d_\pi}(L^2(K\backslash G)) ). \] When this identification is made for each $\pi \in \hat{K}$, the Zak transform becomes a unitary \[ \tilde{Z} \colon L^2(G) \to \bigoplus_{\pi \in \hat{K}} M_{d_\pi}(L^2(K\backslash G)). \] The translation formula \eqref{eq:ZakTrans} then becomes \begin{equation} \label{eq:ZakTransBas} [\tilde Z (L_\xi f)](\pi) = (\tilde{Z} f)(\pi) \cdot (\pi_{i,j}(\xi^{-1}) )_{i,j=1}^{d_\pi} \qquad (f \in L^2(G),\ \xi \in K,\ \pi \in \hat{K}), \end{equation} where the vector- and scalar-valued matrices multiply using the usual formula for matrix multiplication. For $f \in L^2(G)$ and $\pi \in \hat{K}$, the $(i,j)$-entry of $(\tilde Z f)(\pi)$ is the function in $L^2(K\backslash G)$ given by \begin{equation} \label{eq:ZakBas} Kx \mapsto \int_K f(\xi \tau(Kx)) \pi_{i,j}(\xi^{-1})\, d\xi \qquad (Kx \in K\backslash G). \end{equation} For example, when $K$ is a compact \emph{abelian} group, each irreducible representation $\pi \in \hat{K}$ has dimension 1. Thus $M_{d_\pi}( L^2(K\backslash G) )$ can be identified with $L^2(K\backslash G)$, and if we reinterpret the direct sum, we may view the Zak transform as a unitary \[ \tilde{\tilde{Z}} \colon L^2(G) \to \ell^2(\hat{K}; L^2(K\backslash G) ) \] given by \[ [(\tilde{\tilde{Z}} f)(\alpha)](Kx) = \int_K f(\xi \tau(Kx)) \overline{\alpha(\xi)}\, d\xi \qquad (f \in L^2(G),\ \alpha \in \hat{K},\ Kx \in K \backslash G). \] This agrees with the notion of Zak transform for an abelian subgroup described by the author in \cite{I}. If $G$ and $K$ are both abelian, this definition is equivalent to the original notion of Zak transform as described by Weil in \cite[p. 164--165]{W}. 
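For a concrete illustration of this formula, take $G = \mathbb{T} \times \mathbb{R}$ and $K = \mathbb{T} \times \{0\}$. Identifying $\hat{K}$ with $\mathbb{Z}$ via the characters $\alpha_n(\xi) = \xi^n$, and $K\backslash G$ with $\mathbb{R}$ via the cross section $\tau(K(z,t)) = (1,t)$, the Zak transform becomes
\[
[(\tilde{\tilde{Z}} f)(n)](t) = \int_{\mathbb{T}} f(\xi, t)\, \xi^{-n}\, d\xi \qquad (f \in L^2(\mathbb{T}\times\mathbb{R}),\ n \in \mathbb{Z},\ t \in \mathbb{R}),
\]
a unitary $L^2(\mathbb{T}\times\mathbb{R}) \to \ell^2(\mathbb{Z}; L^2(\mathbb{R}))$ that takes Fourier coefficients in the $\mathbb{T}$-variable, fiberwise over $t \in \mathbb{R}$.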
That version of the Zak transform has a very long history in harmonic analysis. We refer the reader to \cite{HSWW} for a brief survey. \medskip \section{Range functions and translation invariance} \label{sec:ranTran} A closed subspace $V \subset L^2(G)$ will be called \emph{$K$-invariant} if $L_\xi f \in V$ whenever $f \in V$ and $\xi \in K$. In this section, we apply the Zak transform to classify the $K$-invariant subspaces of $L^2(G)$ in terms of \emph{range functions}. \begin{defn} Let $X$ be an indexing set, and let $\mathscr{H} = \{\H(x)\}_{x\in X}$ be a family of Hilbert spaces. A \emph{range function} in $\mathscr{H}$ is a mapping \[ J \colon X \to \bigcup_{x\in X} \{\text{closed subspaces of } \H(x)\} \] such that $J(x) \subset \H(x)$ for each $x \in X$. In other words, it is a choice of closed subspace $J(x) \subset \H(x)$ for each $x \in X$. \end{defn} If $J$ is a range function in $\{L^2(K\backslash G; \H_\pi)\}_{\pi \in \hat{K}}$, we define \[ V_J = \{ f \in L^2(G) : \text{for all $\pi \in \hat{K}$, the range of $(Zf)(\pi)$ is contained in $J(\pi)$}\}. \] In terms of the Zak transform, \begin{equation} \label{eq:ZVJ} Z(V_J) = \bigoplus_{\pi \in \hat{K}} B(\H_\pi, J(\pi)), \end{equation} where we consider $B(\H_\pi, J(\pi) )$ to be a closed subspace of $B(\H_\pi, L^2(K\backslash G; \H_\pi) )$. The translation identity \eqref{eq:ZakTrans} for the Zak transform shows that $V_J$ is $K$-invariant. Remarkably, every $K$-invariant subspace of $L^2(G)$ takes this form. \begin{theorem} \label{thm:ranTran} The mapping $J \mapsto V_J$ is a bijection between range functions in $\{L^2(K\backslash G; \H_\pi)\}_{\pi \in \hat{K}}$ and $K$-invariant subspaces of $L^2(G)$. \end{theorem} A basis-dependent version of this theorem runs as follows. Choose orthonormal bases for each of the spaces $\H_\pi$, $\pi \in \hat{K}$, and let \[ \tilde{Z} \colon L^2(G) \to \bigoplus_{\pi \in \hat{K}} M_{d_\pi}( L^2(K\backslash G) ) \] be the resulting basis-dependent Zak transform. 
For each $\pi \in \hat{K}$, we will think of the columns of $M_{d_\pi}( L^2(K\backslash G) )$ as elements of $L^2(K\backslash G)^{\oplus d_\pi}$, the direct sum of $d_\pi$ copies of $L^2(K\backslash G)$. Given a range function $J$ in $\{ L^2(K\backslash G)^{\oplus d_\pi} \}_{\pi \in \hat{K}}$, let \[ \tilde{V}_J = \{ f \in L^2(G) : \text{for all $\pi \in \hat{K}$, the columns of $(\tilde{Z}f)(\pi)$ lie in $J(\pi)$}\}. \] Then $J \mapsto \tilde{V}_J$ is a bijection between range functions in $\{ L^2(K\backslash G)^{\oplus d_\pi}\}_{\pi \in \hat{K}}$ and $K$-invariant subspaces of $L^2(G)$. Range functions have a long history in the theory of translation invariance. Helson \cite{He} and Srinivasan \cite{S} seem to have the first results in this area. Their work was released at approximately the same time, and each cites the other, so it is not clear who deserves credit for this line of research. The idea of applying a Fourier-like transform and classifying invariant subspaces in terms of range functions has since been applied by a host of researchers in a variety of settings \cite{ACHKM,ACP,ACP2,B,B2,BR,CP,CP2,CMO,BDR,He2,KR}. Recently, the author \cite{I} and Hern{\'a}ndez, et al.\ \cite{BHP2} independently applied a version of the Zak transform to classify translation invariance by an abelian subgroup. The present theorem extends these results to the setting of compact groups. We emphasize the novelty of applying this technique with a nonabelian subgroup. Of the results mentioned above, only Currey, et al.\ \cite{CMO} treats a noncommutative case.\footnote{At least one other group of researchers has shown interest in generalizing the shift-invariance results of \cite{B} to the compact nonabelian setting. An attempt at a classification theorem appears in \cite{RK}.} The theory of translation invariance in the nonabelian setting is only in its beginning stages. 
We hope that by first understanding the case of compact groups, where the representation theory is comparatively simple, we can help point a direction for understanding more general locally compact nonabelian groups. \medskip The proof of Theorem \ref{thm:ranTran} relies on a standard decomposition of actions of compact groups. If $\rho\colon K \to U(\H_\rho)$ is a unitary representation of $K$, then the \emph{isotypical component} of $\pi \in \hat{K}$ in $\rho$ is the invariant subspace $\mathcal{M}_\pi \subset \H_\rho$ spanned by all subspaces of $\H_\rho$ on which $\rho$ is unitarily equivalent to $\pi$. Then \begin{equation} \label{eq:dirSumIso} \H_\rho = \bigoplus_{\pi \in \hat{K}} \mathcal{M}_\pi. \end{equation} Moreover, each $\mathcal{M}_\pi$ decomposes as a direct sum of irreducible subspaces on which $\rho$ is equivalent to $\pi$. If $\mult(\pi,\rho)$ is the multiplicity of $\pi$ in $\rho$, it follows that \begin{equation} \label{eq:multDimIso} \dim \mathcal{M}_\pi = d_\pi \cdot \mult(\pi,\rho) \qquad (\pi \in \hat{K}). \end{equation} See \cite[\S 5.1]{F}. Given an invariant subspace $V \subset \H_\rho$, we will write $\rho^V$ for the subrepresentation of $\rho$ on $V$. The following can be deduced easily from \cite[Theorem 27.44]{HR2}. \begin{lemma} \label{lem:isoCom} Let $\rho \colon K \to U(\H_\rho)$ be a unitary representation of $K$, with isotypical components $\mathcal{M}_\pi \subset \H_\rho$ for $\pi\in \hat{K}$. \begin{enumerate}[(i)] \item For each $\pi \in \hat{K}$, let $E_\pi \subset \H_\rho$ be the closed linear span of some invariant subspaces on which $\rho$ is equivalent to $\pi$. If $\H_\rho = \bigoplus_{\pi \in \hat{K}} E_\pi$, then $E_\pi = \mathcal{M}_\pi$ for every $\pi \in \hat{K}$. \item If $V \subset \H_\rho$ is an invariant subspace, then $V \cap \mathcal{M}_\pi$ is the isotypical component of $\pi \in \hat{K}$ in $\rho^V$. 
\end{enumerate} \end{lemma} The next lemma follows from Schur's Lemma and the Double Commutant Theorem for von Neumann algebras. \begin{lemma} \label{lem:repSpn} Let $\pi \in \hat{K}$. Then $B(\H_\pi) = \spn\{ \pi(\xi) : \xi \in K\}$. \end{lemma} \medskip \noindent \emph{Proof of Theorem \ref{thm:ranTran}.} Since $Z$ is unitary, \eqref{eq:ZVJ} shows that the mapping $J \mapsto V_J$ is injective. We need only prove that every $K$-invariant subspace $V\subset L^2(G)$ arises as such a $V_J$. To do this, we will first show that $V$ decomposes as a direct sum of simpler pieces, and then we will leverage the Zak transform's translation property \eqref{eq:ZakTrans} on each piece. Let $\rho$ be the action of $K$ on $L^2(G)$ by left translation. For each $\pi \in \hat{K}$, let \[ M_\pi = \{f \in L^2(G) : (Zf)(\sigma) = 0 \text{ for }\sigma \neq \pi\}. \] We claim that $M_\pi$ is the isotypical component of $\overline{\pi}$ in $\rho$. Fix an orthonormal basis $e_1^\pi,\dotsc,e_{d_\pi}^\pi \in \H_\pi$ for each $\pi \in \hat{K}$. For each $\pi \in \hat{K}$, $i=1,\dotsc,d_\pi$, and nonzero $F \in L^2(K\backslash G; \H_\pi)$, we define $F_{\pi,i} \in L^2(G)$ by \[ (ZF_{\pi,i})(\sigma)e_j^\sigma = \begin{cases} d_\pi^{-1/2} \Norm{F}^{-1} \cdot F, & \text{if }\sigma = \pi \text{ and }i=j \\ 0, & \text{otherwise} \end{cases} \qquad (\sigma \in \hat{K};\ j = 1,\dotsc,d_\sigma)\] Then $\langle F_{\pi,i}, F_{\pi,j} \rangle = \langle Z F_{\pi,i}, Z F_{\pi,j} \rangle = \delta_{i,j}$, and one can check that \[ L_\xi F_{\pi,j} = \sum_{i=1}^{d_\pi} \overline{\pi}_{i,j}(\xi)\cdot F_{\pi,i}. \] Hence $\spn\{F_{\pi,i} : i=1,\dotsc,d_\pi\}$ is a $K$-invariant subspace of $M_\pi$ on which $\rho$ is equivalent to $\overline{\pi}$. Moreover, \[ M_\pi = \overline{\spn}\{ F_{\pi,i} : F\in L^2(K\backslash G; \H_\pi), i = 1,\dotsc, d_\pi\}. \] The claim follows from Lemma \ref{lem:isoCom}(i). Now let $V \subset L^2(G)$ be a $K$-invariant subspace. By Lemma \ref{lem:isoCom}(ii), \[ V = \bigoplus_{\pi \in \hat{K}} V \cap M_\pi.
\] Since $(Zf)(\sigma)=0$ for $f\in V\cap M_\pi$ and $\sigma \neq \pi$, we may view $W_\pi := Z(V \cap M_\pi)$ as a closed subspace of $B(\H_\pi, L^2(K\backslash G; \H_\pi))$. Let \[ J(\pi) = \overline{\spn}\{ Au : A \in W_\pi, u \in \H_\pi \} \subset L^2(K\backslash G; \H_\pi). \] Clearly $W_\pi \subset B(\H_\pi, J(\pi) )$. If we can upgrade this inclusion to equality, we will be able to conclude that \[ Z(V) = \bigoplus_{\pi \in \hat{K}} Z(V\cap M_\pi) = \bigoplus_{\pi \in \hat{K}} B(\H_\pi, J(\pi) ), \] and the proof will be complete. Fix any $A \in B(\H_\pi, J(\pi) )$. We want to show that $A \in W_\pi$. A moment's thought shows that $A$ is the sum of operators in $B(\H_\pi, J(\pi) )$ whose kernels have codimension one. It is enough to show that each of those operators belongs to $W_\pi$. We may therefore assume that there is a unit norm vector $u \in \H_\pi$ such that $Av = 0$ for all $v \perp u$. Let $\epsilon > 0$ be arbitrary. Since $\ran A \subset J(\pi)$, we can find operators $B_1,\dotsc,B_n \in W_\pi$ and nonzero vectors $v_1,\dotsc,v_n \in \H_\pi$ such that \[ \Norm{ Au - \sum_{j=1}^n B_j v_j }^2 < \epsilon. \] We are going to produce an operator $B \in W_\pi$ with $B u = \sum_{j=1}^n B_j v_j$ and $B v = 0$ for $v \perp u$. Here is the key step. Since $V \cap M_\pi$ is invariant under left translation by $K$, the identity \eqref{eq:ZakTrans} shows that $W_\pi = Z(V\cap M_\pi)$ is invariant under right multiplication by $\pi(\xi^{-1})$ for each $\xi \in K$. Therefore $W_\pi$ is invariant under right multiplication by $B(\H_\pi) = \spn\{\pi(\xi^{-1}) : \xi \in K\}$. In particular, we can precompose each $B_j \in W_\pi$ with another operator in $B(\H_\pi)$ to make $B_j' \in W_\pi$ satisfying $B_j' u = B_j v_j$ and $B_j' v = 0$ for all $v \perp u$. Then $B:= B_1' + \dotsb + B_n'$ belongs to $W_\pi$, and \[ \Norm{ A - B }^2 = d_\pi \Norm{ Au - Bu }^2 < d_\pi \epsilon. 
\] Since $W_\pi$ is closed and $\epsilon >0$ was arbitrary, we conclude that $A \in W_\pi$. Therefore, \[ Z(V \cap M_\pi) = W_\pi = B(\H_\pi, J(\pi) ), \] as desired. \hfill \qed \medskip The preceding proof contained a fact that is useful in its own right. \begin{prop} \label{prop:isoComp} Let $J$ be a range function in $\{L^2(K\backslash G; \H_\pi)\}_{\pi \in \hat{K}}$, and let $\rho_J$ be the representation of $K$ on $V_J$ given by left translation. Then the isotypical component of $\pi \in \hat{K}$ in $\rho_J$ is \[ \mathcal{M}_\pi = \{ f \in V_J : (Zf)(\sigma) = 0 \text{ for }\sigma \neq \overline{\pi} \}. \] In particular, \begin{equation} \label{eq:multRan} \mult(\pi, \rho_J) = \dim J(\overline{\pi}) \end{equation} and \begin{equation} \label{eq:dimRan} \dim V_J = \sum_{\pi \in \hat{K}} d_\pi\cdot \dim J(\overline{\pi}). \end{equation} \end{prop} \begin{proof} That $\mathcal{M}_\pi$ is the isotypical component of $\pi$ in $\rho_J$ was proven above. To see \eqref{eq:multRan}, simply observe that \[ \dim \mathcal{M}_\pi = \dim Z \mathcal{M}_\pi = d_\pi \cdot \dim J(\overline{\pi}) \] and apply \eqref{eq:multDimIso}. Then \eqref{eq:dimRan} follows from \eqref{eq:dirSumIso}. \end{proof} \begin{rem} \label{rem:dimRan} \sloppy $K$-invariant spaces are determined up to unitary equivalence by the dimensions of the spaces chosen by their range functions, in the following sense. Let $J_1$ and $J_2$ be two range functions in $\{L^2(K\backslash G; \H_\pi)\}_{\pi \in \hat{K}}$, and let $V_1$ and $V_2$ be the corresponding $K$-invariant subspaces of $L^2(G)$. Then there is a unitary map $U\colon V_1 \to V_2$ with the property that \[ U L_\xi = L_\xi U \qquad (\xi \in K) \] if and only if \[ \dim J_1(\pi) = \dim J_2(\pi) \qquad (\pi \in \hat{K}). \] This is a consequence of \eqref{eq:multRan}, since representations of compact groups are determined up to unitary equivalence by multiplicities of irreducible representations.
Compare with Bownik's results on the dimension function for shift-invariant subspaces of $L^2(\mathbb{R}^n)$ \cite[Theorem 4.10]{B}. \end{rem} \begin{theorem} \label{thm:ranGen} Let $\mathscr{A} \subset L^2(G)$ be an arbitrary family of functions, and let $S(\mathscr{A}) \subset L^2(G)$ be the $K$-invariant subspace generated by $\mathscr{A}$. That is, \[ S(\mathscr{A}) = \overline{\spn}\{ L_\xi f : \xi \in K,\ f \in \mathscr{A}\}. \] Then $S(\mathscr{A}) = V_J$, where \[ J(\pi) = \overline{\spn}\{ \ran (Zf)(\pi) : f \in \mathscr{A}\} \qquad (\pi \in \hat{K}). \] \end{theorem} \begin{proof} If $J$ and $J'$ are two range functions in $\{L^2(K\backslash G; \H_\pi)\}_{\pi \in \hat{K}}$ with the property that $J(\pi) \subset J'(\pi)$ for all $\pi \in \hat{K}$, then it is easy to see that $V_J \subset V_{J'}$. Moreover, $V_{J'}$ contains $S(\mathscr{A})$ if and only if $J'(\pi)$ contains $\ran (Zf)(\pi)$ for all $f \in \mathscr{A}$, for every $\pi \in \hat{K}$. Since $S(\mathscr{A})$ is the smallest $K$-invariant space containing $\mathscr{A}$, the corresponding range function $J$ must be such that $J(\pi)$ is the smallest closed subspace of $L^2(K\backslash G; \H_\pi)$ containing $\ran (Zf)(\pi)$ for all $f \in \mathscr{A}$, for every $\pi \in \hat{K}$. That subspace is precisely \[ \overline{\spn}\{ \ran (Zf)(\pi) : f \in \mathscr{A}\}. \qedhere\] \end{proof} \smallskip \begin{cor} \label{cor:noCyc} $L^2(G)$ contains a function $f$ with $\overline{\spn}\{L_\xi f : \xi \in K\} = L^2(G)$ if and only if $G=K$. \end{cor} \begin{proof} The $K$-invariant space $L^2(G)$ corresponds with the range function $J'$ given by \[ J'(\pi) = L^2(K\backslash G; \H_\pi) \qquad (\pi \in \hat{K}). \] If $K \subsetneq G$, then any $f \in L^2(G)$ has \[ \rank (Zf)(\pi) \leq d_\pi < \dim L^2(K\backslash G ; \H_\pi) \qquad (\pi \in \hat{K}). \] By the previous theorem, the range function $J$ associated with $S(\{f\})$ has $J(\pi) = \ran (Zf)(\pi) \neq J'(\pi)$ for each $\pi \in \hat{K}$. 
Hence, \[ S(\{f\}) = V_J \neq V_{J'} = L^2(G). \] When $G=K$, on the other hand, it is well known that every subrepresentation of the regular representation is cyclic. See, for instance, \cite{GM}. \end{proof} We will now study the correspondence between range functions and $K$-invariant spaces in greater detail. Roughly speaking, we will see that the map $V_J \mapsto J$ allows us to view the lattice of $K$-invariant spaces as a much simpler lattice of linear subspaces. Many of the ideas that follow will appear again in our analysis of invariant subspaces of general representations of compact groups in Section \ref{sec:multGen}. To begin, we introduce the notion of direct sum for range functions. If $J$ and $J'$ are two range functions in the same family $\mathscr{H} = \{\H(x)\}_{x\in X}$, with the property that $J(x) \perp J'(x)$ for every $x \in X$, then we say that $J$ and $J'$ are \emph{orthogonal}, and write $J \perp J'$. Given a family $\{J_\alpha\}_{\alpha \in A}$ of pairwise orthogonal range functions in $\mathscr{H}$, we denote $\oplus_{\alpha \in A} J_\alpha$ for the range function in $\mathscr{H}$ given by \[ [\bigoplus_{\alpha \in A} J_\alpha](x) = \bigoplus_{\alpha \in A} [J_\alpha(x)] \qquad (x \in X). \] Let $J$ and $J'$ be two range functions in $\{ L^2(K\backslash G; \H_\pi) \}_{\pi \in \hat{K}}$. For each $\pi \in \hat{K}$, we view $B(\H_\pi, J(\pi) )$ and $B(\H_\pi, J'(\pi))$ as closed subspaces of $B(\H_\pi, L^2(K\backslash G; \H_\pi))$, with the inner product \[ \langle A,B \rangle = d_\pi \langle A, B \rangle_{\mathcal{HS}}. \] Then $B(\H_\pi, J(\pi) )$ is orthogonal to $B(\H_\pi, J'(\pi))$ if and only if $J(\pi) \perp J'(\pi)$. Since $Z$ is unitary and \[ Z(V_J) = \bigoplus_{\pi \in \hat{K}} B(\H_\pi, J(\pi) ), \] we conclude that \begin{equation} \label{eq:ranPerp} J \perp J' \iff V_J \perp V_{J'}. 
\end{equation} Moreover, if $\{J_\alpha\}_{\alpha \in A}$ is a family of range functions in $\{ L^2(K\backslash G; \H_\pi) \}_{\pi \in \hat{K}}$, then \begin{equation} \label{eq:ranSum} J = \bigoplus_{\alpha \in A} J_\alpha \iff V_J = \bigoplus_{\alpha \in A} V_{J_\alpha}. \end{equation} With these simple observations, we can easily describe all possible decompositions of $V_J$ as a direct sum of irreducible subspaces. \begin{theorem} \label{thm:ranDecomp} Let $J$ be a range function in $\{ L^2(K\backslash G; \H_\pi)\}_{\pi \in \hat{K}}$. For each $\pi \in \hat{K}$, choose an orthonormal basis $\{F_i^\pi\}_{i \in I_\pi}$ for $J(\pi)$.\footnote{If $J(\pi) = \{0\}$, we take $I_\pi$ to be the empty set.} Then \[ V_{\pi,i} := \{ f \in L^2(G) : \ran (Zf)(\pi) \subset \spn\{F_i^\pi\},\text{ and }(Zf)(\sigma) = 0 \text{ for }\sigma \neq \pi\text{ in }\hat{K}\} \] is an irreducible $K$-invariant space for each $\pi \in \hat{K}$ and $i \in I_\pi$, and \begin{equation} \label{eq:ranDecomp1} V_J = \bigoplus_{\pi \in \hat{K}} \bigoplus_{i \in I_\pi} V_{\pi,i}. \end{equation} Moreover, every decomposition of $V_J$ as a direct sum of irreducible $K$-invariant spaces occurs in this way. \end{theorem} In terms of the Zak transform, the direct sum decomposition \eqref{eq:ranDecomp1} simply says that \[ Z(V_J) = \bigoplus_{\pi \in \hat{K}} \bigoplus_{i \in I_\pi} B(\H_\pi,\spn\{F_i^\pi\}). \] We can think of $\spn\{F_i^\pi\}$ as being a copy of $\mathbb{C}$, so that $B(\H_\pi,\spn\{F_i^\pi\})$ is like a copy of $\H_\pi^*$. It will therefore come as no surprise that the corresponding action of $K$ on $B(\H_\pi, \spn\{F_i^\pi\})$ is unitarily equivalent to $\overline{\pi}$. \begin{proof} For each $\pi \in \hat{K}$ and each $i \in I_\pi$, let $J_{\pi,i}$ be the range function given by \[ J_{\pi,i}(\sigma) = \begin{cases} \spn\{F_i^\pi\}, & \text{if }\sigma = \pi \\ \{0\}, & \text{if }\sigma \neq \pi \end{cases} \qquad (\sigma \in \hat{K}).
\] Then $V_{\pi,i} = V_{J_{\pi,i}}$, and the direct sum decomposition \eqref{eq:ranDecomp1} follows immediately from \eqref{eq:ranSum}. If $\rho_{\pi,i}$ is the action of $K$ on $V_{\pi,i}$ by left translation, then $\rho_{\pi,i} \cong \overline{\pi}$ by \eqref{eq:multRan}. In particular, $V_{\pi,i}$ is irreducible. Suppose \begin{equation} \label{eq:ranDecomp2} V_J = \bigoplus_{\alpha \in A} V_\alpha \end{equation} is another decomposition of $V_J$ into irreducible $K$-invariant spaces. Each $V_\alpha$ has the form $V_{J_\alpha}$ for some range function $J_\alpha$, and \eqref{eq:multRan} shows that $J_\alpha(\pi)$ is one dimensional for exactly one $\pi \in \hat{K}$, and trivial for all others. For that unique value of $\pi$, we choose a unit norm vector $G_\alpha \in J_\alpha(\pi)$. Applying \eqref{eq:ranSum} again, we see that $J = \bigoplus_{\alpha \in A} J_\alpha$. In particular, \[ J(\pi) = \bigoplus_{\substack{\alpha \in A, \\ J_\alpha(\pi) \neq \{0\}}} J_\alpha(\pi) = \bigoplus_{\substack{\alpha \in A, \\ J_\alpha(\pi) \neq \{0\}}} \spn\{G_\alpha\} \] for each $\pi \in \hat{K}$. Hence $\{G_\alpha : \alpha \in A,\, J_\alpha(\pi) \neq \{0\} \}$ is an orthonormal basis for $J(\pi)$. Rearranging the decomposition \eqref{eq:ranDecomp2} as \[ V_J = \bigoplus_{\pi \in \hat{K}} \bigoplus_{\substack{\alpha \in A, \\ J_\alpha(\pi) \neq \{0\}}} V_\alpha \] shows it has the same form as \eqref{eq:ranDecomp1}. \end{proof} \medskip \section{Frames of translates} \label{sec:tranFrm} There is a long tradition of combining range function classifications of invariant spaces with conditions for a family of translates to form a reproducing system. Bownik \cite{B} seems to have the first results along these lines. His example was followed in \cite{BHP2,BR,CP,CMO,I,KR}. We now carry that tradition to the setting of compact, nonabelian subgroups. For our purposes, the relevant notion will be a continuous version of frames. 
\begin{defn} Let $\H$ be a separable Hilbert space, and let $(\mathcal{M},\mu)$ be a $\sigma$-finite measure space. Let $\{f_x\}_{x\in \mathcal{M}}$ be an indexed family with the property that $x \mapsto \langle g, f_x\rangle$ is a measurable function on $\mathcal{M}$ for every $g \in \H$. Then $\{f_x\}_{x\in \mathcal{M}}$ is a \emph{Bessel mapping} if there is a constant $B > 0$ such that \[ \int_\mathcal{M} | \langle g, f_x \rangle |^2\, d\mu(x) \leq B \Norm{g}^2 \qquad \text{for every }g \in \H. \] It is a \emph{continuous frame} for $\H$ if there are constants $0 < A \leq B < \infty$ such that \[ A \Norm{g}^2 \leq \int_\mathcal{M} | \langle g, f_x \rangle |^2\, d\mu(x) \leq B \Norm{g}^2 \qquad \text{for every }g \in \H. \] The constants $A$ and $B$ are called \emph{bounds}. If we can take $A=B$, the frame is \emph{tight}. If we can take $A=B=1$, it is a \emph{Parseval frame}. \end{defn} The reader unfamiliar with this notion may consult \cite{AAG,K}, where it was originally developed. Further details are available in \cite{GH} and \cite{RND}. In the case where $\mathcal{M}$ is a discrete set equipped with counting measure, continuous frames reduce to the usual, discrete version. (The reader may even take this as a definition.) We will use the terms ``frame'' and ``continuous frame'' interchangeably. The usual reproducing properties of discrete frames carry over to the continuous versions, with predictable modifications. Let $\{f_x \}_{x\in \mathcal{M}}$ be a Bessel mapping. The associated \emph{analysis operator} $T \colon \H \to L^2(\mathcal{M})$ is defined by \[ (Tg)(x) = \langle g, f_x \rangle \qquad (g \in \H,\ x \in \mathcal{M}); \] its adjoint is the \emph{synthesis operator} $T^*\colon L^2(\mathcal{M}) \to \H$, \[ T^* \phi = \int_\mathcal{M} \phi(x) f_x\, d\mu(x) \qquad (\phi \in L^2(\mathcal{M})), \] where the vector-valued integral is interpreted in the weak sense. 
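For a simple example of a continuous Parseval frame, take $\H = \mathbb{C}^d$ and $\mathcal{M} = \mathbb{T}$ with normalized Haar measure, and let $f_\xi = (\xi, \xi^2, \dotsc, \xi^d)$ for $\xi \in \mathbb{T}$. For $g = (g_1,\dotsc,g_d) \in \mathbb{C}^d$, orthonormality of the characters gives
\[
\int_{\mathbb{T}} | \langle g, f_\xi \rangle |^2\, d\xi = \int_{\mathbb{T}} \Big| \sum_{k=1}^d g_k \xi^{-k} \Big|^2\, d\xi = \sum_{k=1}^d |g_k|^2 = \Norm{g}^2,
\]
so $\{f_\xi\}_{\xi \in \mathbb{T}}$ is a continuous Parseval frame for $\mathbb{C}^d$, even though each vector $f_\xi$ has norm $\sqrt{d}$.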
The \emph{Gramian} is $\mathcal{G} = T T^*$, and the \emph{frame operator} is $S = T^* T$. When our Bessel mapping is a continuous frame, the frame operator is positive and invertible, and $\{S^{-1/2} f_x \}_{x\in \mathcal{M}}$ is a continuous Parseval frame for $\H$, called the \emph{canonical tight frame}. For Parseval frames, the frame operator is the identity map, and the Gramian is an orthogonal projection. Even when the frame is not tight, $\{S^{-1} f_x\}_{x\in \mathcal{M}}$ is another frame for $\H$ which satisfies \[ g = \int_\mathcal{M} \langle g, S^{-1} f_x \rangle f_x\, d\mu(x) \qquad (g \in \H). \] \begin{rem} \label{rem:frmBnds} The results in this paper apply for arbitrary second countable compact groups, which includes finite groups in particular. When $K$ is finite, all of our results about continuous frames indexed by $K$ can be interpreted in terms of discrete frames. We caution that it is necessary to reinterpret the frame bounds in this case, since Haar measure on $K$ is normalized so that $|K|=1$. In the special case where $K$ is finite, a continuous frame over $K$ having bounds $A,B$ is the same as a discrete frame indexed by $K$ having bounds $\card(K)\cdot A, \card(K)\cdot B$. \end{rem} For a countable family $\mathscr{A} \subset L^2(G)$, we will denote \[ E(\mathscr{A}) = \{L_\xi f\}_{\xi \in K, f \in \mathscr{A}} \] for the translates of $\mathscr{A}$. Recall that \[ S(\mathscr{A}) = \overline{\spn}\, E(\mathscr{A}) \] is the $K$-invariant space generated by $\mathscr{A}$, and that $S(\mathscr{A}) = V_J$, with \begin{equation} \label{eq:SARan} J(\pi) = \overline{\spn} \{ \ran (Zf)(\pi) : f \in \mathscr{A} \} \qquad (\pi \in \hat{K}). \end{equation} We would like to know under what circumstances $E(\mathscr{A})$ forms a continuous frame for $S(\mathscr{A})$. Our main result is as follows. 
\begin{theorem} \label{thm:tranFrm} Let $\mathscr{A} \subset L^2(G)$ be a countable family of functions, and let $J$ be the range function in \eqref{eq:SARan}. For any constants $0 < A \leq B < \infty$ and any choice of orthonormal bases $e_1^\pi,\dotsc,e_{d_\pi}^\pi \in \H_\pi$, $\pi \in \hat{K}$, the following are equivalent. \begin{enumerate}[(i)] \item $E(\mathscr{A})$ is a continuous frame for $S(\mathscr{A})$ with bounds $A,B$. That is, \begin{equation} \label{eq:tranFrm0} A \Norm{g}^2 \leq \sum_{f \in \mathscr{A}} \int_K |\langle g, L_\xi f \rangle|^2\, d\xi \leq B \Norm{g}^2 \qquad (g \in S(\mathscr{A})). \end{equation} \item For every $\pi \in \hat{K}$, $\{(Zf)(\pi) e_i^\pi : f \in \mathscr{A}, i =1,\dotsc,d_\pi\}$ is a discrete frame for $J(\pi)$ with bounds $A,B$. \end{enumerate} \end{theorem} This is in the spirit of \cite[Theorem 2.3]{B}. When $K$ is compact and \emph{abelian}, the theorem above reduces to \cite[Theorem 5.4]{I}. If $G$ is also abelian, the same result was given in \cite[Theorem 6.10]{BHP2}. Similar results appear in \cite{BHP2,BR,CP,CMO,I,JaLem,KR,RS}. The proof of Theorem \ref{thm:tranFrm} relies on the following lemma, which will also play a prominent role in Section \ref{sec:brack}. To each pair $f,g \in L^2(G)$, we associate the \emph{matrix element} $V_f g \in C(K)$ given by \[ (V_f g)(\xi) = \langle g, L_\xi f \rangle \qquad (\xi \in K). \] \begin{lemma} \label{lem:ZakBrack} For $f, g \in L^2(G)$ and $\pi \in \hat{K}$, \[ (V_f g)\char`\^(\pi) = (Zf)(\pi)^* (Zg)(\pi). \] \end{lemma} \begin{proof} Fix an orthonormal basis $e_1^\pi,\dotsc,e_{d_\pi}^\pi$ for each $\H_\pi$, $\pi \in \hat{K}$. For $f, g \in L^2(G)$, $\pi \in \hat{K}$, and $i,j=1,\dotsc,d_\pi$, the $(i,j)$-entry of the matrix for $(V_f g)\char`\^(\pi)$ with respect to this basis is \[ (V_f g)\char`\^(\pi)_{i,j} = \int_K \int_G g(x) \overline{(L_\xi f)(x)}\, dx\, \overline{\pi_{i,j}(\xi)}\, d\xi. 
\] Applying the measure space isomorphism $G \to K\backslash G \times K$ from \eqref{eq:measIsom}, we see this is equal to \[ \int_K \int_{K\backslash G} \int_K g_{Kx}(\eta) \overline{ f_{Kx}(\xi^{-1} \eta) }\, d\eta\, d(Kx)\, \overline{\pi_{i,j}(\xi)}\, d\xi = \int_K \int_{K\backslash G} (g_{Kx} * f_{Kx}^*)(\xi)\, d(Kx)\, \overline{\pi_{i,j}(\xi)}\, d\xi, \] where $f_{Kx}$ and $g_{Kx}$ are as defined in \eqref{eq:cosetFunct}. We wish to reverse the order of integration above with Fubini's Theorem. Assuming for the moment that this is possible, we will have \[ (V_f g)\char`\^(\pi)_{i,j} = \int_K \int_{K\backslash G} (g_{Kx} * f_{Kx}^*)(\xi)\, d(Kx)\, \overline{\pi_{i,j}(\xi)}\, d\xi \] \[ = \int_{K\backslash G} \int_K (g_{Kx} * f_{Kx}^*)(\xi) \overline{\pi_{i,j}(\xi)}\, d\xi\, d(Kx) = \int_{K\backslash G} (g_{Kx} * f_{Kx}^*)\char`\^(\pi)_{i,j}\, d(Kx) \] \[ = \int_{K\backslash G} \langle (f_{Kx})\char`\^(\pi)^* (g_{Kx})\char`\^(\pi) e_j^\pi, e_i^\pi \rangle\, d(Kx) = \int_{K\backslash G} \langle (g_{Kx})\char`\^(\pi) e_j^\pi, (f_{Kx})\char`\^(\pi) e_i^\pi \rangle\, d(Kx) \] \[ = \int_{K\backslash G} \langle [(Zg)(\pi)e_j^\pi](Kx), [(Zf)(\pi) e_i^\pi](Kx) \rangle\, d(Kx) = \langle (Zg)(\pi) e_j^\pi, (Zf)(\pi) e_i^\pi \rangle \] \[ = [(Zf)(\pi)^* (Zg)(\pi)]_{i,j}, \] where we have applied the definition of the Zak transform \eqref{eq:ZakFour} in the third to last equality. Once the above holds for all $i$ and $j$, we will be able to conclude that \[ (V_f g)\char`\^(\pi) = (Zf)(\pi)^* (Zg)(\pi), \] as desired. It only remains to justify our use of Fubini's Theorem. To do so, we observe first that \[ |\pi_{i,j}(\xi)| = | \langle \pi(\xi) e_j^\pi, e_i^\pi \rangle | \leq \Norm{\pi(\xi) e_j^\pi} \Norm{e_i^\pi} = 1 \qquad (\xi \in K), \] by Cauchy-Schwarz. 
Hence,
\[ \int_{K\backslash G} \int_K | (g_{Kx} * f_{Kx}^*)(\xi) \pi_{i,j}(\xi)|\, d\xi\, d(Kx) \leq \int_{K\backslash G} \Norm{g_{Kx} * f_{Kx}^*}_{L^1(K)}\, d(Kx) \]
\[ \leq \int_{K\backslash G} \Norm{g_{Kx}}_{L^1(K)} \Norm{f_{Kx}}_{L^1(K)}\, d(Kx) \leq \left( \int_{K\backslash G} \Norm{g_{Kx}}_{L^1(K)}^2\, d(Kx) \right)^{1/2} \left( \int_{K\backslash G} \Norm{f_{Kx}}_{L^1(K)}^2\, d(Kx) \right)^{1/2}. \]
The proof will be finished if we can show that $\int_{K\backslash G} \Norm{f_{Kx}}_{L^1(K)}^2 d(Kx) < \infty$ for all $f \in L^2(G)$. An application of Minkowski's Integral Inequality produces
\[ \left( \int_{K\backslash G} \Norm{ f_{Kx} }_{L^1(K)}^2\, d(Kx) \right)^{1/2} = \left( \int_{K\backslash G} \left| \int_K |f(\eta \tau(Kx))|\, d\eta \right|^2 d(Kx) \right)^{1/2} \]
\[ \leq \int_K \left( \int_{K\backslash G} | f(\eta \tau(Kx)) |^2\, d(Kx) \right)^{1/2} d\eta. \]
Let
\begin{align*} E &= \{ \eta \in K : \int_{K\backslash G} | f(\eta \tau(Kx)) |^2\, d(Kx) < 1\} \\ F &= \{ \eta \in K : \int_{K\backslash G} | f(\eta \tau(Kx)) |^2\, d(Kx) \geq 1\}. \end{align*}
(These are well defined up to sets of measure zero.) Then
\[ \left( \int_{K\backslash G} \Norm{ f_{Kx} }_{L^1(K)}^2\, d(Kx) \right)^{1/2} \leq \int_K \left( \int_{K\backslash G} | f(\eta \tau(Kx)) |^2\, d(Kx) \right)^{1/2} d\eta \]
\[ = \int_E \left( \int_{K\backslash G} | f(\eta \tau(Kx)) |^2\, d(Kx) \right)^{1/2} d\eta + \int_F \left( \int_{K\backslash G} | f(\eta \tau(Kx)) |^2\, d(Kx) \right)^{1/2} d\eta \]
\[ \leq |E| + \int_F \int_{K\backslash G} | f(\eta \tau(Kx)) |^2\, d(Kx) \, d\eta \leq 1 + \int_K \int_{K\backslash G} | f(\eta \tau(Kx)) |^2\, d(Kx) \, d\eta \]
\[ = 1 + \int_G | f(x) |^2\, dx < \infty, \]
where we have once again applied the measure space isomorphism $K \times K\backslash G \to G$. This completes the proof. \end{proof}
With this lemma in hand, Theorem \ref{thm:tranFrm} becomes an easy consequence of Plancherel's Theorem and our classification of $K$-invariant spaces.
\medskip \begin{proof}[Proof of Theorem \ref{thm:tranFrm}] For any $f,g \in L^2(G)$, we use Plancherel's Theorem and Lemma \ref{lem:ZakBrack} to perform the fundamental calculation
\begin{equation}\label{eq:tranFrm1} \int_K |\langle g, L_\xi f \rangle |^2\, d\xi = \sum_{\pi \in \hat{K}} d_\pi \Norm{ (Zf)(\pi)^* (Zg)(\pi)}_{\mathcal{HS}}^2 = \sum_{\pi \in \hat{K}} d_\pi \sum_{j=1}^{d_\pi} \sum_{i=1}^{d_\pi} |\langle (Zg)(\pi) e_j^\pi, (Zf)(\pi) e_i^\pi \rangle |^2. \end{equation}
On the other hand, the fact that $Z$ is unitary implies
\begin{equation} \label{eq:tranFrm2} \Norm{g}^2 = \sum_{\pi \in \hat{K}} d_\pi \Norm{ (Zg)(\pi) }_{\mathcal{HS}}^2 = \sum_{\pi \in \hat{K}} d_\pi \sum_{j=1}^{d_\pi} \Norm{ (Zg)(\pi)e_j^\pi }^2. \end{equation}
Suppose (i) holds. Fix $\pi \in \hat{K}$, and choose any $F \in J(\pi)$. (We write $F$ rather than $G$ to avoid a clash with the group.) Define $g \in L^2(G)$ by the formula
\[ (Zg)(\sigma)e_j^\sigma = \begin{cases} d_\pi^{-1} F, & \text{if }\sigma = \pi \\ 0, & \text{if }\sigma \neq \pi \end{cases} \qquad (\sigma \in \hat{K};\ j = 1,\dotsc,d_\sigma). \]
Then $g \in V_J = S(\mathscr{A})$, by construction. It satisfies
\[ \Norm{g}^2 = \Norm{F}^2, \]
by \eqref{eq:tranFrm2}, and
\[ \sum_{f\in \mathscr{A}} \int_K | \langle g, L_\xi f \rangle |^2\, d\xi = \sum_{f\in \mathscr{A}} \sum_{i=1}^{d_\pi} |\langle F, (Zf)(\pi) e_i^\pi \rangle |^2, \]
by \eqref{eq:tranFrm1}. Substituting these equations into \eqref{eq:tranFrm0} gives
\[ A \Norm{F}^2 \leq \sum_{f\in \mathscr{A}} \sum_{i=1}^{d_\pi} |\langle F, (Zf)(\pi) e_i^\pi \rangle |^2 \leq B \Norm{F}^2. \]
In other words, (ii) holds. Now assume (ii) is satisfied. For every $g \in S(\mathscr{A}) = V_J$ and every $\pi \in \hat{K}$, $(Zg)(\pi)e_j^\pi \in J(\pi)$.
By \eqref{eq:tranFrm2} and the frame inequality, \[ A \Norm{g}^2 = \sum_{\pi \in \hat{K}} d_\pi \sum_{j=1}^{d_\pi} A \Norm{ (Zg)(\pi)e_j^\pi }^2 \leq \sum_{\pi \in \hat{K}} d_\pi \sum_{j=1}^{d_\pi} \sum_{f \in \mathscr{A}} \sum_{i=1}^{d_\pi} |\langle (Zg)(\pi) e_j^\pi, (Zf)(\pi) e_i^\pi \rangle |^2. \] Applying \eqref{eq:tranFrm1} to the last expression above, we see that \[ A \Norm{g}^2 \leq \sum_{f \in \mathscr{A}} \int_K |\langle g, L_\xi f \rangle |^2\, d\xi. \] A similar computation produces \[ \sum_{f \in \mathscr{A}} \int_K |\langle g, L_\xi f \rangle |^2\, d\xi \leq B \Norm{g}^2. \] This proves (i). \end{proof} \medskip \section{Bracket analysis for compact group actions} \label{sec:brack} We turn our attention now to a detailed study of group frames, as described in the introduction. In this section, we introduce a computational system known as a \emph{bracket} for the analysis of representations of compact groups. Our primary motivation is the study of group frames with a single generator. We will see, however, that the bracket carries vital information about the structure of the representation itself, including its isotypical components and the multiplicities of irreducible representations. Several applications for the theory of group frames, including a complete classification of (compact) group frames with a single generator, appear in Section \ref{sec:brackApp}. Throughout, we fix a second countable compact group $K$, as in the previous sections, with Haar measure normalized so that $|K|=1$. We also fix a unitary representation $\rho$ of $K$, acting on a separable Hilbert space $\H_\rho$. Our approach is motivated by the work of Weiss, et al.\ in \cite{HSWW2}. Let $\mathcal{G}$ be a second countable locally compact abelian (LCA) group, with dual group $\hat{\mathcal{G}}$. Normalize Haar measures on $\mathcal{G}$ and $\hat{\mathcal{G}}$ so that the Plancherel theorem holds. 
A representation $\pi\colon \mathcal{G} \to U(\H_\pi)$ is called \emph{dual integrable} if there is a \emph{bracket}
\[ [ \cdot, \cdot] \colon \H_\pi \times \H_\pi \to L^1(\hat{\mathcal{G}}) \]
such that
\[ \langle f, \pi(x) g \rangle = \int_{\hat{\mathcal{G}}} [f, g](\alpha) \overline{\alpha(x)}\, d\alpha \qquad (f, g \in \H_\pi;\ x \in \mathcal{G}). \]
When $\mathcal{G}$ is identified with the dual of $\hat{\mathcal{G}}$ via Pontryagin Duality, this means that $\langle f, \pi(\cdot) g \rangle$ is the Fourier transform of $[f,g]$. The bracket provides an elegant description of frame properties for an orbit $\{\pi(x) f\}_{x\in \mathcal{G}}$.
\begin{prop}[ \cite{HSWW2,I} ] \label{prop:brackFrm} For $f \in \H_\pi$ and constants $A,B$ with $0 < A \leq B < \infty$, the following are equivalent. \begin{enumerate}[(i)] \item The orbit $\{\pi(x) f\}_{x\in \mathcal{G}}$ is a continuous frame for its closed linear span, with bounds $A,B$. \item For a.e.\ $\alpha \in \hat{\mathcal{G}}$, either $[f,f](\alpha) = 0$ or $A \leq [f,f](\alpha) \leq B$. \end{enumerate} \end{prop}
A possible difficulty with this approach is that, generally speaking, one may know a representation is dual integrable without being able to compute the bracket.\footnote{For certain kinds of representations, there are ways to recover the bracket even when $\mathcal{G}$ is not compact. Most of these methods involve variants of the Zak transform. See \cite{HSWW2} and \cite{I}.} Suppose, however, that $\mathcal{G}$ is \emph{compact} abelian. Then we can compute brackets as follows. Let $\pi$ be \emph{any} unitary representation of $\mathcal{G}$ on a separable Hilbert space $\H_\pi$. Then $\pi$ decomposes as a direct sum of cyclic subrepresentations, each of which is unitarily equivalent to a subrepresentation of the regular representation. (See, for instance, \cite{GM}.) By \cite[Corollary 3.4]{HSWW2}, $\pi$ is dual integrable.
Let $[\cdot,\cdot] \colon \H_\pi \times \H_\pi \to L^1(\hat{\mathcal{G}})$ be a bracket for $\pi$. That is,
\[ \langle f, \pi(x) g \rangle = [f,g]\char`\^(x) \qquad (f,g \in \H_\pi;\ x \in \mathcal{G}). \]
Since $\mathcal{G}$ is compact, $[f,g]\char`\^$ lies in $C(\mathcal{G}) \subset L^1(\mathcal{G})$ for every $f,g \in \H_\pi$. Therefore we can apply Fourier inversion to recover the bracket from the matrix elements $\langle f, \pi(\cdot) g \rangle$:
\[ [f,g](\alpha) = \langle f, \pi(\cdot) g \rangle\char`\^(\alpha^{-1}) \qquad (f,g \in \H_\pi;\ \alpha \in \hat{\mathcal{G}}). \]
These results suggest that, for our general compact group $K$ with unitary representation $\rho$, it should be possible to analyze frames appearing as orbits of $\rho$ using the (operator-valued) Fourier transform of the matrix elements
\[ (V_g f)(\xi) := \langle f, \rho(\xi) g \rangle \qquad (f,g \in \H_\rho;\ \xi \in K). \]
This is indeed the case.
\begin{defn} The \emph{bracket} associated with $\rho$ is the map
\[ [ \cdot, \cdot ] \colon \H_\rho \times \H_\rho \to \bigoplus_{\pi \in \hat{K}} B(\H_\pi) \]
given by
\[ [f,g](\pi) = (V_g f)\char`\^(\pi) \qquad (\pi \in \hat{K}). \]
\end{defn}
Here, as elsewhere, we consider $B(\H_\pi)$ to be a Hilbert space with inner product given by
\[ \langle A, B \rangle = d_\pi \langle A, B \rangle_{\mathcal{HS}} = d_\pi \tr(B^* A). \]
Then $\bigoplus_{\pi \in \hat{K}} B(\H_\pi)$ is the Hilbert space direct sum. Following the notation of \cite{HSWW2}, we will denote $\langle f \rangle \subset \H_\rho$ for the cyclic subspace generated by $f \in \H_\rho$. That is,
\[ \langle f \rangle = \overline{\spn} \{ \rho(\xi) f : \xi \in K\} \qquad (f \in \H_\rho). \]
Our main result is the following.
\begin{theorem} \label{thm:brackFrm} For $f \in \H_\rho$ and constants $A,B$ with $0 < A \leq B < \infty$, the following are equivalent.
\begin{enumerate}[(i)] \item The orbit $\{ \rho(\xi) f \}_{\xi\in K}$ is a continuous frame for $\langle f \rangle$ with bounds $A, B$. \item For every $\pi \in \hat{K}$, the nonzero eigenvalues of $[f,f](\pi)$ lie in the interval $[A,B]$. \end{enumerate} \end{theorem} When $\dim \H_\rho < \infty$, it is easy to tell when $\langle f \rangle = \H_\rho$ using the ranks of $[f,f](\pi)$, $\pi \in \hat{K}$; see Proposition \ref{prop:cycBrack} below. Thus, one can tell whether or not $\{\rho(\xi) f \}_{\xi \in K}$ is a frame for $\H_\rho$, and with what bounds, based solely on the eigenvalues of $[f,f](\pi)$, $\pi \in \hat{K}$, and their multiplicities. The condition that $\dim \H_\rho < \infty$ is always satisfied when $\{\rho(\xi) f \}_{\xi \in K}$ is a frame for $\H_\rho$; this is a consequence of Theorem \ref{thm:KFrmDim}, infra. If $Q_\pi$ denotes orthogonal projection of $\H_\pi$ onto $(\ker [f,f](\pi))^\perp$, then condition (ii) of the theorem above can be interpreted to say that $A Q_\pi \leq [f,f](\pi) \leq B Q_\pi$ for each $\pi \in \hat{K}$. (Compare with \cite[Theorem A]{BHP}.) In the special case where $K$ is compact \emph{abelian}, Theorem \ref{thm:brackFrm} reduces to Proposition \ref{prop:brackFrm}. Tight frames generated by actions of \emph{finite} nonabelian groups have been the focus of a flurry of recent activity \cite{CW,CW2,Ha,VW2,VW,VW3}. See \cite[Theorem 6.18]{VW2} and its generalization \cite[Theorem 2.8]{VW3} in particular for another characterization of \emph{tight} frames that occur in this way. A nice summary of the state of the art circa 2013 appears in \cite{Wa}; unfortunately the survey is already out of date, thanks in part to recent work by Waldron himself. This field is advancing rapidly. Brackets have been used to analyze reproducing systems in $L^2(\mathbb{R}^n)$ since at least the work of Jia and Micchelli \cite{JM}. 
Weiss and his collaborators brought these techniques into the group-theoretic domain with \cite{HSWW2}, as described above. In the nonabelian setting, Hern{\'a}ndez, et al.\ have developed notions of bracket maps for the Heisenberg group and for countable discrete groups \cite{BHM,BHP,BHP3}. The bracket defined above is related to the one that appears in \cite{BHP,BHP3}. Suppose that $K$ is finite (that is, both compact and discrete). Let us write $[\cdot,\cdot]_0 \colon \H_\rho \times \H_\rho \to B(L^2(K))$ for the bracket as developed in \cite{BHP}. One can show that, for all $f,g \in \H_\rho$,
\[ [f,g]_0(\phi) = \phi * V_g f \qquad (\phi \in L^2(K)). \]
Conjugating with the Fourier transform turns $[f,g]_0$ into left multiplication by $[f,g]$. One might say the papers \cite{BHP,BHP3} study the convolution operator given by $V_g f$, whereas this paper studies its Fourier transform. Much of our analysis relies on functions of positive type. We remind the reader that $\phi \in C(K)$ is said to be of \emph{positive type} if
\[ \int_K (f * f^*)(\xi)\, \phi(\xi) \, d\xi \geq 0 \qquad \text{for all }f\in L^1(K). \]
Equivalently, there is a unitary representation $\sigma$ of $K$ and a vector $f \in \H_\sigma$ such that
\[ \phi(\xi) = \langle f, \sigma(\xi) f \rangle \qquad (\xi \in K). \]
The representation and the vector are unique in the following sense: If $\sigma'$ is another representation of $K$ with a cyclic vector $f' \in \H_{\sigma'}$ such that $\phi(\xi) = \langle f', \sigma'(\xi) f' \rangle$ for all $\xi \in K$, then there is a unitary $U \colon \H_{\sigma'} \to \H_\sigma$ intertwining $\sigma'$ with $\sigma$ and mapping $f' \mapsto f$. (See, for instance, \cite[\S 3.3]{F}.) When $\sigma$ is the regular representation and $f, g \in L^2(K)$, we have
\begin{equation} \label{eq:transPos} \langle f, L_\xi g \rangle = \int_K f(\eta) g^*(\eta^{-1} \xi)\, d\eta = (f * g^* )(\xi) \qquad (\xi \in K).
\end{equation} For arbitrary $f \in L^2(K)$, this means that $\phi = f * f^*$ is a function of positive type. Up to unitary equivalence, the cyclic representations of $K$ are precisely the subrepresentations of the regular representation (\cite{GM}); thus \emph{every} function of positive type takes this form. In particular, \begin{equation} \label{eq:posSelfAdj} \phi^* = \phi, \end{equation} and \begin{equation}\label{eq:posFour} \hat{\phi}(\pi) = (f * f^*)\char`\^ (\pi) = \hat{f}(\pi)^* \hat{f}(\pi) \geq 0 \qquad (\pi \in \hat{K}). \end{equation} (It is positive semidefinite.) The bracket $[f,f]$ in Theorem \ref{thm:brackFrm} is the Fourier transform of the associated function of positive type \[ V_f f(\xi) = \langle f, \rho(\xi) f \rangle \qquad (\xi \in K). \] Given $V_f f$, it is possible to reconstruct the Hilbert space $\langle f \rangle$, the restriction of $\rho$ to $\langle f \rangle$, and the cyclic vector $f$. In other words, $V_f f$ contains complete information about the cyclic representation generated by $f$. Philosophically speaking, it must also be able to tell us when the orbit of $f$ is a continuous frame for $\langle f \rangle$. Theorem \ref{thm:brackFrm} tells how to extract this information. \medskip We will write $A^\dagger$ for the Moore-Penrose pseudoinverse of a bounded linear operator $A$. When $A$ has closed range, $AA^\dagger$ is orthogonal projection onto the range of $A$, and $A^\dagger A$ is orthogonal projection onto $(\ker A)^\perp$. \begin{lemma} \label{lem:cycIsom} For every $f \in \H_\rho$, there is a unique linear isometry $T_f \colon \langle f \rangle \to L^2(K)$ intertwining $\rho$ with left translation, and sending $f$ to a function of positive type. Explicitly, \begin{equation}\label{eq:cycIsom1} (T_fg)\char`\^(\pi) = ( [f,f](\pi)^{1/2} )^\dagger\cdot [g,f](\pi) \qquad (g \in \langle f \rangle,\ \pi \in \hat{K}). 
\end{equation} \end{lemma} \begin{proof} Since the restriction of $\rho$ to $\langle f \rangle$ is square integrable, the existence of a linear isometry $T_f \colon \langle f \rangle \to L^2(K)$ intertwining $\rho$ with left translation and mapping $f$ to a function of positive type is given by \cite[Theorem 13.8.6]{D}. Then $(T_f f)^* = T_f f$, and \[ (V_f f)(\xi) = \langle T_f f, L_\xi (T_f f) \rangle = [T_f f * (T_f f)^*](\xi) = (T_f f * T_f f)(\xi) \qquad (\xi \in K). \] Since $(T_f f)\char`\^(\pi) \geq 0$ for all $\pi \in \hat{K}$, we conclude that \[ (T_f f)\char`\^(\pi) = [f,f](\pi)^{1/2} \qquad (\pi \in \hat{K}). \] For any $g \in \langle f \rangle$, \eqref{eq:transPos} gives \[ (V_f g)(\xi) = \langle T_f g, L_\xi T_f f \rangle = [(T_f g) * (T_f f)^*](\xi) = [(T_f g) * (T_f f)](\xi) \qquad (\xi \in K), \] or equivalently, \begin{equation} \label{eq:cycIsom2} [g,f](\pi) = (T_f f)\char`\^(\pi)\cdot (T_f g)\char`\^(\pi) \qquad (\pi \in \hat{K}). \end{equation} Since $T_f g \in \langle T_f f \rangle$, Theorem \ref{thm:ranGen} shows that \[ \ran (T_f g)\char`\^(\pi) \subset \ran (T_f f)\char`\^(\pi) = (\ker (T_f f)\char`\^(\pi) )^\perp \qquad (\pi \in \hat{K}). \] (Here we use the Fourier transform in place of the Zak transform; see Remark \ref{rem:ZakFour}.) Applying $[(T_f f)\char`\^(\pi)]^\dagger = ([f,f](\pi)^{1/2})^\dagger$ to both sides of \eqref{eq:cycIsom2} establishes \eqref{eq:cycIsom1}. In particular, $T_f$ is uniquely determined. \end{proof} \smallskip \begin{prop} \label{prop:brackProp1} The bracket has the following properties. \begin{enumerate}[(i)] \item $[\cdot,\cdot]$ is linear in the first variable, and conjugate linear in the second. \item For all $f, g \in \H_\rho$ and $\pi \in \hat{K}$, \[ [f,g](\pi) = [g,f](\pi)^*. \] \item For all $f \in \H_\rho$ and $\pi \in \hat{K}$, $[f,f](\pi) \geq 0$. \item For all $f,g \in \H_\rho$ and $A \in B(\H_\rho)$, \[ [Af, g ] = [f,A^* g]. 
\] \item For all $f,g \in \H_\rho$, $\pi \in \hat{K}$, and $\xi \in K$, \[ [f, \rho(\xi) g](\pi) = \pi(\xi)\cdot [f,g](\pi) \] and \[ [\rho(\xi) f, g](\pi) = [f,g](\pi)\cdot \pi(\xi^{-1}). \] \item For $f,g \in \H_\rho$, $f \perp \langle g \rangle$ if and only if $[f,g]=0$. \end{enumerate} \end{prop} More properties will be given in Propositions \ref{prop:brackProp2} and \ref{prop:brackProp3} below. \begin{proof} Item (i) follows from linearity of the Fourier transform and sesquilinearity of the map $(f,g) \mapsto V_g f$. To see (ii), apply \eqref{eq:FourConv} to the identity $V_f g = (V_g f)^*$. Equation \eqref{eq:posFour} gives (iii), since $V_f f$ is a function of positive type. Apply the simple identity $V_g (Af) = V_{A^* g}f$ to get (iv). For (v), use \eqref{eq:FourTrans} and the identities \[ V_{\rho(\xi) g} f = R_\xi (V_g f), \quad V_g (\rho(\xi) f) = L_\xi (V_g f) \qquad (f,g \in \H_\rho;\ \xi \in K). \] For (vi), first assume that $f \perp \langle g \rangle$. Let $P_g$ denote orthogonal projection of $\H_\rho$ onto $\langle g \rangle$, and apply (iv) to see that \[ [f, g] = [f, P_g g] = [P_g f, g] = 0. \] Now suppose that $f,g \in \H_\rho$ satisfy $[f,g]=0$. By Plancherel's Theorem, $V_g f =0$. That is, $\langle f, \rho(\xi) g \rangle = 0$ for all $\xi \in K$. Hence $f \perp \langle g \rangle$. \end{proof} When $K$ is contained in a larger second countable, locally compact group $G$, the Zak transform provides a bracket for the action of $K$ on $L^2(G)$ by left translation. Indeed, Lemma \ref{lem:ZakBrack} says precisely that \[ [f,g](\pi) = (Zg)(\pi)^* (Zf)(\pi) \qquad (f,g \in L^2(G);\ \pi \in \hat{K}) \] in this case. The theorem below shows that this example is universal; it is always possible to embed $\H_\rho$ as a $K$-invariant subspace of $L^2(G)$, for some larger group $G$ containing $K$, in such a way that $\rho$ becomes left translation by $K$. 
\begin{theorem} \label{thm:isom} There is a second countable, locally compact group $G$ containing $K$ as a closed subgroup, and a linear isometry $T \colon \H_\rho \to L^2(G)$ satisfying
\[ T \rho(\xi) f = L_\xi T f \qquad (f \in \H_\rho,\ \xi \in K). \]
If $Z$ is the Zak transform for the pair $(G,K)$, then the bracket for $\rho$ is given by
\[ [f,g](\pi) = (ZTg)(\pi)^*(ZTf)(\pi) \qquad (f,g \in \H_\rho;\ \pi \in \hat{K}). \]
\end{theorem}
\begin{proof} There is a countable family $\{f_i\}_{i \in I} \subset \H_\rho$ for which
\[ \H_\rho = \bigoplus_{i \in I} \langle f_i \rangle. \]
For each $i \in I$, let $T_{f_i} \colon \langle f_i \rangle \to L^2(K)$ be the isometry from Lemma \ref{lem:cycIsom}. Give $I$ the structure of a discrete abelian group, and let $G = K \times I$. Given $g \in \H_\rho$, find the unique decomposition $g = \sum_{i \in I} g_i$ with $g_i \in \langle f_i \rangle$ for all $i$, and define
\[ (Tg)(\xi, i) = (T_{f_i} g_i)(\xi) \qquad (\xi \in K,\ i \in I). \]
Then $T\colon \H_\rho \to L^2(G)$ is the desired isometry. Since $\langle f, \rho(\xi) g \rangle = \langle Tf, L_\xi Tg \rangle$ for all $f,g \in \H_\rho$ and $\xi \in K$, the formula for the bracket follows from Lemma \ref{lem:ZakBrack}. \end{proof}
\begin{prop} \label{prop:brackProp2} In addition to the properties listed in Proposition \ref{prop:brackProp1}, the bracket satisfies the following. \begin{enumerate}[(i)] \item For all $f,g \in \H_\rho$,
\begin{equation} \label{eq:innBrack} \langle f, g \rangle = \sum_{\pi \in \hat{K}} d_\pi \tr( [f,g](\pi) ). \end{equation}
\item For all $f, g \in \H_\rho$,
\begin{equation} \label{eq:brackProp2.1} \Norm{ [f,g](\pi) }_{\mathcal{HS}}^2 \leq \Norm{ [f,f](\pi) }_{\mathcal{HS}} \Norm{ [g,g](\pi) }_{\mathcal{HS}} \qquad (\pi \in \hat{K}). \end{equation}
\item If $f_n \to f$ in $\H_\rho$, then $[f_n,g] \to [f,g]$ for all $g \in \H_\rho$. In particular,
\[ [f_n,g](\pi) \to [f,g](\pi) \]
for all $g \in \H_\rho$ and $\pi \in \hat{K}$.
\end{enumerate} \end{prop} \begin{proof} By applying Theorem \ref{thm:isom} if necessary, we may assume that $\H_\rho$ is a $K$-invariant subspace of $L^2(G)$ for some second countable locally compact group $G$ containing $K$ as a closed subgroup, that $\rho$ is given by left translation of $K$, and that \[ [f,g](\pi) = (Zg)(\pi)^* (Zf)(\pi) \qquad (f,g \in \H_\rho;\ \pi \in \hat{K}), \] where $Z$ is the Zak transform for the pair $(G,K)$. Now (iii) follows immediately from continuity of the Zak transform. To prove (i), we simply compute \[ \langle f, g \rangle = \langle Zf, Zg \rangle = \sum_{\pi \in \hat{K}} d_\pi \langle (Zf)(\pi), (Zg)(\pi) \rangle_{\mathcal{HS}} = \sum_{\pi \in \hat{K}} d_\pi \tr([f,g](\pi)) \qquad (f,g \in \H_\rho). \] For (ii), we use Cauchy-Schwarz for the Hilbert-Schmidt inner product to estimate \[ \Norm{ [f,g](\pi) }_{\mathcal{HS}}^2 = \tr( (Zf)(\pi)^* (Zg)(\pi) (Zg)(\pi)^* (Zf)(\pi) ) = \tr( (Zf)(\pi) (Zf)(\pi)^* (Zg)(\pi) (Zg)(\pi)^* ) \] \[ = | \langle (Zg)(\pi) (Zg)(\pi)^*, (Zf)(\pi) (Zf)(\pi)^* \rangle_{\mathcal{HS}} | \leq \Norm{ (Zg)(\pi) (Zg)(\pi)^* }_{\mathcal{HS}} \Norm{ (Zf)(\pi) (Zf)(\pi)^* }_{\mathcal{HS}} \] \[ = \Norm{ [g,g](\pi) }_{\mathcal{HS}} \Norm{ [f,f](\pi) }_{\mathcal{HS}}. \qedhere \] \end{proof} Equation \eqref{eq:innBrack} implies that vectors in $\H_\rho$ are uniquely determined by their bracket values. Specifically, if $f,g \in \H_\rho$ have $[f,h] = [g,h]$ for all $h \in \H_\rho$, then \eqref{eq:innBrack} shows that $\langle f,h \rangle = \langle g, h \rangle$, so that $f=g$. 
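The identities in Propositions \ref{prop:brackProp1} and \ref{prop:brackProp2}, as well as the frame bounds of Theorem \ref{thm:brackFrm}, can be sanity-checked numerically in the simplest possible setting: $K = \mathbb{Z}_n$ acting on $\mathbb{C}^n$ by cyclic shifts, where every $d_\pi = 1$ and the bracket is just the sequence of Fourier coefficients of the matrix element $V_g f$ against normalized Haar measure. A minimal numpy sketch (the vectors $f, g$ are arbitrary choices, and the $1/n$ normalization reflects the convention $|K| = 1$):

```python
import numpy as np

n = 8
f = np.array([1, 2 - 1j, 0, 1j, -1, 0, 0.5, 0], dtype=complex)
g = np.array([1j, 0, 1, 0, 0, 2, 0, -1], dtype=complex)

def rho(k, v):
    # Left translation on Z_n: (rho(k) v)(m) = v(m - k).
    return np.roll(v, k)

def bracket(f, g):
    # [f,g](j) = (1/n) sum_k <f, rho(k) g> e^{-2 pi i jk/n}  (Haar measure |Z_n| = 1).
    Vgf = np.array([np.vdot(rho(k, g), f) for k in range(n)])  # <f, rho(k) g>
    return np.fft.fft(Vgf) / n

# <f,g> = sum_j d_j tr([f,g](j)); here every d_j = 1.
print(np.allclose(np.vdot(g, f), bracket(f, g).sum()))            # True

# [f,g] = [g,f]^*, which is plain conjugation when every pi is 1-dimensional.
print(np.allclose(bracket(f, g), bracket(g, f).conj()))           # True

# [f, rho(m) g](j) = chi_j(m) [f,g](j), with character chi_j(m) = e^{2 pi i jm/n}.
m = 3
chi = np.exp(2j * np.pi * np.arange(n) * m / n)
print(np.allclose(bracket(f, rho(m, g)), chi * bracket(f, g)))    # True

# Frame bounds: the discrete frame operator of the orbit {rho(k) f} has
# eigenvalues n * [f,f](j), the rescaling predicted by Remark \ref{rem:frmBnds}.
Sf = sum(np.outer(rho(k, f), rho(k, f).conj()) for k in range(n))
print(np.allclose(sorted(np.linalg.eigvalsh(Sf)), sorted(n * bracket(f, f).real)))  # True
```

In particular, the last check recovers the classical fact that the optimal discrete frame bounds of a cyclic-shift orbit are the extreme nonzero values of $|\hat{f}(j)|^2$.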
Propositions \ref{prop:brackProp1} and \ref{prop:brackProp2} together give the general feeling that the bracket behaves like a kind of operator-valued inner product on $\H_\rho$.\footnote{For representations of discrete groups, this idea was made more precise using the language of Hilbert modules and a slightly different notion of bracket in \cite{BHP3}.} However, the bracket can tell us about much more than the linear and geometric properties of $\H_\rho$. It can tell us about $\rho$ itself. For each $\pi \in \hat{K}$, we will denote $\mathcal{M}_\pi$ for the isotypical component of $\pi$ in $\rho$. In other words, $\mathcal{M}_\pi$ is the closed linear span of all invariant subspaces of $\H_\rho$ on which $\rho$ is equivalent to $\pi$. We will write $P_\pi$ for the orthogonal projection of $\H_\rho$ onto $\mathcal{M}_\pi$. Finally, when $V \subset \H_\rho$ is an invariant subspace, we denote $\rho^V$ for the subrepresentation of $\rho$ on $V$. Then we have the following proposition. \begin{prop} \label{prop:brackProp3} The bracket carries the following information about the isotypical components of $\rho$. \begin{enumerate}[(i)] \item For all $\pi \in \hat{K}$, \[ \mathcal{M}_\pi = \{ f \in \H_\rho : [f,g](\sigma) = 0 \text{ for all }g\in \H_\rho \text{ and }\sigma \neq \overline{\pi}\} \] \[ = \{ f \in \H_\rho : [f,f](\sigma) = 0 \text{ for }\sigma \neq \overline{\pi} \}. \] \item For all $f, g \in \H_\rho$, \[ [f,g](\overline{\pi}) = [ P_\pi f, g ](\overline{\pi}) \qquad (\pi \in \hat{K}). \] \item For all $f \in \H_\rho$ \[ \rank [f,f](\pi) = \mult(\overline{\pi}, \rho^{\langle f \rangle }) \qquad (\pi \in \hat{K}). \] In particular, \[ \dim \langle f \rangle = \sum_{\pi \in \hat{K}} d_\pi \cdot \rank [f,f](\pi). 
\] \end{enumerate} \end{prop} \begin{proof} As in the proof of the last proposition, we may assume that $K$ is a closed subgroup of a second countable locally compact group $G$, that $\H_\rho$ is a $K$-invariant subspace of $L^2(G)$, and that $\rho$ is left translation by $K$. If $Z$ is the Zak transform for the pair $(G,K)$, then the bracket is given by \[ [f,g](\pi) = (Zg)(\pi)^* (Zf)(\pi) \qquad (f,g \in \H_\rho;\ \pi \in \hat{K}). \] For any $f \in \H_\rho$ and $\pi \in \hat{K}$, this implies in particular that $(Zf)(\pi) = 0$ if and only if $[f,f](\pi) = 0$. Moreover, the Cauchy-Schwarz type inequality \eqref{eq:brackProp2.1} shows that $[f,f](\pi)=0$ if and only if $[f,g](\pi) = 0$ for all $g \in \H_\rho$. Now (i) follows from Proposition \ref{prop:isoComp}. For (ii), apply Proposition \ref{prop:isoComp} to see that $(ZP_\pi f)(\overline{\pi}) = (Zf)(\overline{\pi})$. Finally, (iii) follows from \eqref{eq:multRan}, Theorem \ref{thm:ranGen}, and the fact that \[ \rank [f,f](\pi) = \rank((Zf)(\pi)^* (Zf)(\pi)) = \rank( (Zf)(\pi) ) \qquad (\pi \in \hat{K}). \qedhere \] \end{proof} In many cases, statement (iii) above can be used to test whether a particular vector in $\H_\rho$ is cyclic for $\rho$. \begin{prop}\label{prop:cycBrack} Suppose that $\mult(\pi,\rho) < \infty$ for each $\pi \in \hat{K}$. Then $f\in \H_\rho$ is a cyclic vector for $\rho$ if and only if \[ \rank [f,f](\pi) = \mult(\overline{\pi},\rho) \quad \text{for every }\pi \in \hat{K}. \] Moreover, when $\dim \H_\rho < \infty$, $f$ is a cyclic vector if and only if \[ \sum_{\pi \in \hat{K}} d_\pi \cdot \rank [f,f](\pi) = \dim \H_\rho. \] \end{prop} We can now prove our main result. \begin{proof}[Proof of Theorem \ref{thm:brackFrm}] By Lemma \ref{lem:cycIsom}, we may assume that $f$ is a function of positive type in $L^2(K)$, and that $\rho$ is given by left translation. We are going to apply Theorem \ref{thm:tranFrm} with $G=K$ and $\mathscr{A} = \{ f \}$. 
As explained in Remark \ref{rem:ZakFour}, the Zak transform reduces to the Fourier transform in this case. In particular, Theorem \ref{thm:ranGen} gives $\langle f \rangle = S(\mathscr{A}) = V_J$, where \[ J(\pi) = \ran \hat{f}(\pi) \qquad (\pi \in \hat{K}). \] It remains to show that our condition (ii) is equivalent to condition (ii) in Theorem \ref{thm:tranFrm}. For fixed $\pi \in \hat{K}$, we have $\hat{f}(\pi) \geq 0$, since $f$ is a function of positive type. Choose an orthonormal basis $e_1^\pi,\dotsc,e_{d_\pi}^\pi$ for $\H_\pi$ consisting of eigenvectors for $\hat{f}(\pi)$, with corresponding eigenvalues $\lambda_1^\pi \geq \dotsc \geq \lambda_{d_\pi}^\pi \geq 0$. If $r_\pi = \rank \hat{f}(\pi)$, then the nonzero eigenvalues of $[f,f](\pi) = \hat{f}(\pi)^2$ are precisely $(\lambda_1^\pi)^2,\dotsc, (\lambda_{r_\pi}^\pi)^2$. Now $\{\hat{f}(\pi) e_i^\pi\}_{i=1}^{d_\pi} = \{ \lambda_i^\pi e_i^\pi\}_{i=1}^{d_\pi}$ is a discrete frame for $J(\pi) = \spn\{e_1^\pi,\dotsc,e_{r_\pi}^\pi\}$ with bounds $A,B$ if and only if $A \leq (\lambda_1^\pi)^2,\dotsc, (\lambda_{r_\pi}^\pi)^2 \leq B$. \end{proof} \begin{example} \label{ex:irredFrm} When $\rho$ is irreducible, it is well known that any nonzero $f \in \H_\rho$ generates a continuous tight frame with bound $\Norm{f}^2/ (\dim \H_\rho)$. We can recover this fact as follows. First, Proposition \ref{prop:brackProp3}(iii) shows that \[ \rank [f,f](\pi) = \begin{cases} 1, & \text{if }\pi = \overline{\rho} \\ 0, & \text{if }\pi \neq \overline{\rho} \end{cases} \qquad (\pi \in \hat{K}). \] In particular, the operators $[f,f](\pi)$, $\pi \in \hat{K}$, have only one nonzero eigenvalue between them. Call that eigenvalue $\lambda$. By Theorem \ref{thm:brackFrm}, $\{\rho(\xi) f\}_{\xi \in K}$ is a continuous tight frame with bound $\lambda$. Now use Proposition \ref{prop:brackProp2}(i) to compute $\Norm{f}^2 = \lambda\cdot (\dim \H_\rho)$. 
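This well-known fact is easy to confirm numerically for one concrete choice of data. In the sketch below (our choices, for illustration only), the representation is the standard two-dimensional irreducible representation of the dihedral group of order six, realized by a rotation through $2\pi/3$ and a reflection, and $f$ is an arbitrary nonzero vector; the discrete frame operator of the orbit comes out as $3\Norm{f}^2$ times the identity, i.e.\ the continuous bound $\Norm{f}^2/2$ rescaled by $|K| = 6$ as in Remark \ref{rem:frmBnds}.

```python
import numpy as np

# The 2-dimensional irreducible representation of the dihedral group of order 6:
# a rotation of order three and a reflection.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
a = np.array([[c, -s], [s, c]])
b = np.array([[1, 0], [0, -1]])
group = [np.eye(2), a, a @ a, b, a @ b, a @ a @ b]   # e, a, a^2, b, ab, a^2 b

f = np.array([2.0, -1.0])    # any nonzero vector works; here ||f||^2 = 5

# Frame operator of the orbit {sigma(xi) f}: a sum of rank-one operators.
S = sum(np.outer(R @ f, R @ f) for R in group)
print(np.round(S, 10))       # 3 * ||f||^2 * I, a tight frame with discrete bound 15
```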
\end{example}
\begin{example} \label{ex:dihedFrm} Let $D_3 = \langle a, b : a^3 = b^2 = 1, bab^{-1} = a^{-1} \rangle$ be the dihedral group of order six. It has three irreducible representations: the trivial representation $\pi_1$, the one-dimensional representation $\pi_2$ given by $\pi_2(a) = 1$ and $\pi_2(b) = -1$, and the two-dimensional representation $\pi_3$ given by
\[ \pi_3(a) = \begin{pmatrix} \omega & 0 \\ 0 & \omega^{-1} \end{pmatrix} \quad \text{and} \quad \pi_3(b) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \]
where $\omega = e^{2\pi i/3}$. Consider the four-dimensional representation $\rho$ given by
\[ \rho(a) = \frac{1}{4} \begin{pmatrix} 1 & i\sqrt{3} & -3 & i\sqrt{3} \\ i\sqrt{3} & 1 & i\sqrt{3} & -3 \\ -3 & i\sqrt{3} & 1 & i\sqrt{3} \\ i\sqrt{3} & -3 & i\sqrt{3} & 1 \end{pmatrix} \quad \text{and} \quad \rho(b) = \frac{1}{2} \begin{pmatrix} 1 & 1 & 1 & -1 \\ 1 & -1 & -1 & -1 \\ 1 & -1 & 1 & 1 \\ -1 & -1 & 1 & -1 \end{pmatrix}. \]
Let $f = (3,1,-1,1)$. One can compute $[f,f](\pi_1) = 4$, $[f,f](\pi_2) = 4$, and
\[ [f,f](\pi_3) = \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix}. \]
By the dimension count in Proposition \ref{prop:cycBrack}, $\langle f \rangle = \H_\rho = \mathbb{C}^4$. Applying Theorem \ref{thm:brackFrm}, we see that the orbit of $f$ forms a continuous frame for $\mathbb{C}^4$ with optimal bounds $2$ and $4$. When viewed as a discrete frame, the optimal bounds are $12$ and $24$. (See Remark \ref{rem:frmBnds}.) As this example demonstrates, bracket analysis can result in significant dimension reduction for the study of group frames. Suppose, for instance, that we want to know the optimal frame bounds for $\{\rho(\xi) f\}_{\xi \in K}$. A naive approach to this problem would be to compute the Gramian operator for the sequence $\{\rho(\xi) f \}_{\xi \in K}$ and find the range of its nonzero eigenvalues. In this example, that would mean computing the eigenvalues of a $6\times 6$ matrix, which is already tedious by hand and quickly becomes intractable for larger groups.
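For a group this small, the numbers are also easy to confirm by machine. A numpy sketch (our conventions: $\omega = e^{2\pi i/3}$, and Fourier coefficients taken against normalized Haar measure, $\hat{\phi}(\pi) = \frac{1}{6}\sum_{\xi} \phi(\xi)\, \pi(\xi)^*$) recomputing the bracket values and the discrete frame bounds:

```python
import numpy as np

s3, w = np.sqrt(3), np.exp(2j * np.pi / 3)

# rho and pi_3 on the six elements e, a, a^2, b, ab, a^2 b.
ra = np.array([[1, 1j*s3, -3, 1j*s3], [1j*s3, 1, 1j*s3, -3],
               [-3, 1j*s3, 1, 1j*s3], [1j*s3, -3, 1j*s3, 1]]) / 4
rb = np.array([[1, 1, 1, -1], [1, -1, -1, -1],
               [1, -1, 1, 1], [-1, -1, 1, -1]], dtype=complex) / 2
pa = np.diag([w, w.conj()])
pb = np.array([[0, 1], [1, 0]], dtype=complex)

rho = [np.eye(4), ra, ra @ ra, rb, ra @ rb, ra @ ra @ rb]
pi3 = [np.eye(2), pa, pa @ pa, pb, pa @ pb, pa @ pa @ pb]
signs = [1, 1, 1, -1, -1, -1]              # the values of pi_2

f = np.array([3, 1, -1, 1], dtype=complex)
phi = [np.vdot(R @ f, f) for R in rho]     # phi(xi) = <f, rho(xi) f>

# Bracket values [f,f](pi) = (1/6) sum_xi phi(xi) pi(xi)^*.
br1 = sum(phi) / 6
br2 = sum(s * p for s, p in zip(signs, phi)) / 6
br3 = sum(p * P.conj().T for p, P in zip(phi, pi3)) / 6
print(round(br1.real, 6), round(br2.real, 6))   # both equal 4
print(np.round(np.linalg.eigvalsh(br3), 6))     # eigenvalues 0 and 2

# Discrete frame operator of the orbit: its nonzero eigenvalues give bounds 12 and 24.
S = sum(np.outer(R @ f, (R @ f).conj()) for R in rho)
print(np.round(np.linalg.eigvalsh(S), 6))       # 12, 12, 24, 24
```

The bracket route only ever touches matrices of size $d_\pi \times d_\pi$, while the last line diagonalizes the full frame operator of the orbit; the two computations agree after the $|K| = 6$ rescaling of Remark \ref{rem:frmBnds}.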
Using bracket analysis, on the other hand, the largest matrix we had to analyze was $2 \times 2$. \end{example} \medskip \section{Applications of bracket analysis} \label{sec:brackApp} We now explore several applications of the bracket analysis developed in Section \ref{sec:brack}. \subsection{Block diagonalization of the Gramian} As we have just seen, the orbit $\{\rho(\xi) f\}_{\xi \in K}$ of a vector $f \in \H_\rho$ forms a frame only under special circumstances. However, compactness of $K$ implies that it is always a \emph{Bessel} mapping. Indeed, the Cauchy-Schwarz inequality produces \[ \int_K | \langle g, \rho(\xi) f \rangle |^2 \, d\xi \leq \int_K \Norm{g}^2 \cdot \Norm{\rho(\xi) f}^2\, d\xi = \Norm{f}^2\cdot \Norm{g}^2 \qquad (g \in \H). \] In particular, the Gramian $\mathcal{G} \colon L^2(K) \to L^2(K)$ and the frame operator $S \colon \H_\rho \to \H_\rho$ are well-defined for any choice of $f \in \H_\rho$, whether or not $\{\rho(\xi) f\}_{\xi \in K}$ is a frame. A direct computation shows the Gramian is given by \begin{equation} \label{eq:framOpGrm1} \mathcal{G}(\phi) = \phi * V_f f \qquad (\phi \in L^2(K)), \end{equation} and the frame operator satisfies \[ V_h (Sg) = V_f g * V_h f \qquad (g,h \in \langle f \rangle). \] Thus, $S$ is defined uniquely by the relation \begin{equation} \label{eq:framOpGrm2} [Sg,h](\pi) = [f,h](\pi)\cdot [g,f](\pi) \qquad (g,h \in \langle f \rangle;\ \pi \in \hat{K}). \end{equation} The Gramian and the frame operator are intimately connected through the linear isometry $T_f \colon \langle f \rangle \to L^2(K)$ from Lemma \ref{lem:cycIsom}. Indeed, given any $g \in \langle f \rangle$, we compute \[ (T_f Sg)\char`\^(\pi) = ( [f,f](\pi)^{1/2})^\dagger\cdot [Sg,f](\pi) = ( [f,f](\pi)^{1/2})^\dagger\cdot [f,f](\pi)\cdot [g,f](\pi) \] \[ = [f,f](\pi)\cdot ( [f,f](\pi)^{1/2} )^\dagger \cdot [g,f](\pi) = (\mathcal{G}T_f g)\char`\^(\pi) \qquad (\pi \in \hat{K}). 
\] Therefore, \begin{equation} \label{eq:framOpGrm3} T_f S = \mathcal{G} T_f. \end{equation} In fact, when $\{\rho(\xi) f\}_{\xi \in K}$ is a frame for $\langle f \rangle$, $T_f$ is the analysis operator for the canonical tight frame. To see this, first observe that the range of $T_f$ is $\langle T_f f \rangle$, the left ideal generated by $T_f f$. Let $R$ be the operator on $\ran T_f$ given by \[ R(\phi) = \phi * (T_f f) \qquad (\phi \in \ran T_f ). \] For any $g \in \langle f \rangle$, we have \[ \langle g, \rho(\xi) f \rangle = \langle T_f g, L_\xi(T_f f) \rangle = [(T_f g)*(T_f f)^*](\xi) = [(T_f g)* (T_f f)](\xi) = (RT_f g)(\xi) \qquad (\xi \in K). \] In other words, the analysis operator $T\colon \langle f \rangle \to L^2(K)$ for the frame $\{ \rho(\xi) f \}_{\xi \in K}$ is given by \[ T = R T_f. \] Moreover, the computation above shows that $V_f f = (T_f f)*(T_f f)$, so \begin{equation} \label{eq:convFrmOp} R^2 T_f = \mathcal{G} T_f = T_f S. \end{equation} The operator $R$ is positive semidefinite; for any $\phi \in \ran T_f$, we have \[ \langle \phi, R(\phi) \rangle = \langle \phi, \phi * (T_f f) \rangle = \langle \phi^* * \phi, T_f f \rangle = \int_K (\phi^* * \phi)(\xi)\cdot \overline{(T_f f)(\xi)}\, d\xi \geq 0,\] since $\overline{T_f f}$ is also a function of positive type. Since $T_f$ is a linear isometry, it follows from \eqref{eq:convFrmOp} that $T_f S^{1/2} = R T_f = T$. Equivalently, $T_f = T S^{-1/2}$, as desired. \medskip One is often interested in the spectrum $\sigma(\mathcal{G})$ of the Gramian, since the optimal frame bounds are precisely the infimum and supremum of $\sigma(\mathcal{G}) \setminus\{0\}$. For a general positive semidefinite operator, finding the spectrum means diagonalization, which may be extremely difficult. For group frames, however, the realization of $\mathcal{G}$ as a convolution operator in \eqref{eq:framOpGrm1} can take us a long way in this direction, as in the proposition below. 
\begin{prop} \label{prop:blkDiagGrm} Fix any $f \in \H_\rho$, and let $\mathcal{G} \colon L^2(K) \to L^2(K)$ be the Gramian for the Bessel mapping $\{\rho(\xi) f\}_{\xi \in K}$. For each $\pi \in \hat{K}$, choose an orthonormal basis for $B(\H_\pi)$ with respect to the inner product $\langle A, B \rangle = d_\pi \tr(B^* A)$. Let $M_{[f,f](\pi)} \in M_{d_\pi^2}(\mathbb{C})$ be the matrix over this basis for the operator $M_{[f,f](\pi)} \colon B(\H_\pi) \to B(\H_\pi)$ given by \[ M_{[f,f](\pi)}(A) = [f,f](\pi)\cdot A. \] If $\hat{K} = \{ \pi_1, \pi_2,\dotsc\}$, then $\mathcal{G}$ is unitarily equivalent to the block diagonal matrix \[ \tilde{\mathcal{G}} = \begin{pmatrix} M_{[f,f](\pi_1)} & & & 0 \\ & M_{[f,f](\pi_2)} && \\ 0 && \ddots \end{pmatrix}, \] and the Fourier transform $\mathcal{F} \colon L^2(K) \to \bigoplus_{\pi \in \hat{K}} B(\H_\pi)$ is a conjugating unitary. That is, $\tilde{\mathcal{G}} = \mathcal{F} \mathcal{G} \mathcal{F}^{-1}$. \end{prop} \begin{proof} This is obvious from the formulae \eqref{eq:framOpGrm1}, which gives the Gramian as a convolution operator, and \eqref{eq:FourConv}, which says the Fourier transform turns convolution operators into multiplication operators. \end{proof} Proposition \ref{prop:blkDiagGrm} leads to an alternative proof of Theorem \ref{thm:brackFrm}. Briefly: the spectrum of $\mathcal{G}$ is the union of the eigenvalues for $M_{[f,f](\pi)}$ as $\pi$ runs through $\hat{K}$, and the eigenvalues for $M_{[f,f](\pi)}$ are the same as those for $[f,f](\pi)$. Now use the fact that a Bessel mapping is a frame if and only if the nonzero elements of $\sigma(\mathcal{G})$ are bounded away from zero, with the optimal frame bounds equal to the infimum and supremum of $\sigma(\mathcal{G})\setminus\{0\}$, respectively. \subsection{Classification of $K$-frames} Continuous frames of the form $\{\rho(\xi) f\}_{\xi \in K}$ are sometimes called $K$-frames. 
We will say that $\rho$ \emph{admits} a $K$-frame if $\H_\rho$ has a continuous frame of this form. In that case, the orbit of $f$ spans $\H_\rho$, so in particular $\rho$ is cyclic. Greenleaf and Moskowitz \cite[Theorem 1.10]{GM} have reduced the property of being cyclic to a count of multiplicities of irreducible representations. Explicitly, they have shown that $\rho$ is cyclic if and only if $\mult(\pi,\rho) \leq d_\pi$ for each $\pi \in \hat{K}$. The following theorem refines this result for $K$-frames. \begin{theorem} \label{thm:KFrmDim} The following are equivalent. \begin{enumerate}[(i)] \item $\rho$ admits a $K$-frame. \item $\rho$ admits a Parseval $K$-frame. \item $\rho$ is cyclic, and $\dim \H_\rho < \infty$. \item For all $\pi \in \hat{K}$, $\mult(\pi, \rho) \leq d_\pi$. Moreover, $\mult(\pi,\rho) = 0$ for all but finitely many $\pi \in \hat{K}$. \end{enumerate} \end{theorem} The result of Greenleaf and Moskowitz mentioned above says, in part, that every subrepresentation of the regular representation of $K$ on $L^2(K)$ admits a cyclic vector. Theorem \ref{thm:KFrmDim} shows that this result can not be improved using the language of frames. In particular, the regular representation admits a $K$-frame if and only if $K$ is finite. \begin{proof} The equivalence of (iii) and (iv) is obvious from \cite[Theorem 1.10]{GM} and the formula \[ \dim \H_\rho = \sum_{\pi \in \hat{K}} d_\pi\cdot \mult(\pi,\rho). \] It remains to prove that (i), (ii), and (iv) are equivalent. (i) $\implies$ (iv). Let $f \in \H_\rho$ be such that $\{\rho(\xi) f\}_{\xi \in K}$ is a continuous frame for $\H_\rho$, with lower frame bound $A > 0$. Since $\rho$ is cyclic, \cite[Theorem 1.10]{GM} shows that $\mult(\pi,\rho) \leq d_\pi$ for all $\pi \in \hat{K}$. 
By Proposition \ref{prop:brackProp2}(i), Theorem \ref{thm:brackFrm}, and Proposition \ref{prop:brackProp3}(iii), \[ \Norm{f}^2 = \sum_{\pi \in \hat{K}} d_\pi \tr( [f,f](\pi) ) \geq \sum_{\pi \in \hat{K}} d_\pi A \cdot \rank([f,f](\pi)) = A \sum_{\pi \in \hat{K}} d_\pi \mult(\overline{\pi},\rho). \] Consequently, $\mult(\pi,\rho) = 0$ for all except finitely many $\pi \in \hat{K}$. (iv) $\implies$ (ii). We are going to embed $\H_\rho$ as a translation-invariant subspace of $L^2(K)$. Recalling that the Zak transform for the pair $(K,K)$ is the usual Fourier transform on $L^2(K)$ (see Remark \ref{rem:ZakFour}), we can then use the results of Section \ref{sec:ranTran} to analyze $\H_\rho$. For each $\pi \in \hat{K}$, choose a subspace $J(\pi) \subset \H_\pi$ of dimension equal to $\mult(\overline{\pi},\rho)$. Let \[ V_J = \{ f \in L^2(K) : \ran \hat{f}(\pi) \subset J(\pi) \text{ for each }\pi \in \hat{K} \} \] be the translation invariant subspace of $L^2(K)$ corresponding to the range function $J$. Since representations of $K$ are determined up to unitary equivalence by multiplicities of irreducible representations, we may assume by \eqref{eq:multRan} that $\H_\rho = V_J$, and that $\rho$ is given by left translation. For each $\pi \in \hat{K}$, let $P_\pi \in B(\H_\pi)$ be orthogonal projection onto $J(\pi)$. Then \[ \sum_{\pi \in \hat{K}} d_\pi \Norm{P_\pi}_{\mathcal{HS}}^2 = \sum_{\pi \in \hat{K}} d_\pi \dim J(\pi) = \sum_{\pi \in \hat{K}} d_\pi \mult(\overline{\pi},\rho) < \infty, \] so there is a function $f \in L^2(K)$ with $\hat{f}(\pi) = P_\pi$ for all $\pi \in \hat{K}$, by Plancherel's Theorem. Moreover, $\langle f \rangle = V_J = \H_\rho$ by Theorem \ref{thm:ranGen}. Finally, Lemma \ref{lem:ZakBrack} shows that \[ [f,f](\pi) = \hat{f}(\pi)^* \hat{f}(\pi) = P_\pi \qquad (\pi \in \hat{K}), \] so $\{\rho(\xi) f\}_{\xi \in K}$ is a continuous Parseval frame for $\H_\rho$, by Theorem \ref{thm:brackFrm}. (ii) $\implies$ (i). This is trivial. 
\end{proof} Two $K$-frames $\{\rho(\xi)f\}_{\xi \in K}$ and $\{\rho'(\xi) f'\}_{\xi \in K}$ are \emph{unitarily equivalent} if there is a unitary $U\colon \H_\rho \to \H_{\rho'}$ such that $U \rho(\xi)f = \rho'(\xi) f'$ for all $\xi \in K$. Equivalently, $U$ is a unitary equivalence of $\rho$ and $\rho'$ satisfying $U f = f'$. We now classify $K$-frames up to unitary equivalence. In the theorem below, we treat $L^2(K)$ as a Banach $*$-algebra under convolution. Thus, a \emph{projection} in $L^2(K)$ is a function $f$ with the property that $f = f * f = f^*$. Equivalently, it is a function $f$ such that $\hat{f}(\pi)$ is an orthogonal projection for each $\pi \in \hat{K}$. We also write \[ \mathcal{E}(K) = \{f \in L^2(K) : \hat{f}(\pi) = 0 \text{ for all but finitely many }\pi \in \hat{K}\} \] for the space of trigonometric polynomials on $K$. Every projection in $L^2(K)$ belongs to $\mathcal{E}(K)$. \begin{theorem} \label{thm:KFrmClas} Up to unitary equivalence, $K$-frames are indexed by functions of positive type in $\mathcal{E}(K)$. If $f$ is such a function, the associated frame is $\{L_\xi f \}_{\xi \in K}$. The same correspondence sets up a bijection between equivalence classes of Parseval $K$-frames and projections in $L^2(K)$. \end{theorem} In the special case where $K$ is finite, some aspects of this theorem appear implicitly in Vale and Waldron \cite{VW}. See also Han \cite{Ha}. For Parseval $K$-frames, the fact that the generating function $f$ is a projection implies that $V_f f = f * f^* = f$. By \eqref{eq:framOpGrm1}, the Gramian of the associated frame is the convolution operator $g \mapsto g * f$, which is orthogonal projection onto $\langle f \rangle$. In this sense, the theorem above may be compared with a result of Han and Larson \cite[Corollary 2.7]{HL}, which says that the correspondence between a frame and its Gramian induces a bijection between equivalence classes of Parseval frames indexed by a set $I$, and orthogonal projections on $\ell^2(I)$.
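For a finite cyclic group, the classification is easy to test numerically. In the sketch below (the choices $K = \mathbb{Z}_8$ and $\hat{f} = \mathbf{1}_{\{1,2,5\}}$ are arbitrary), the frame operator of the translates of the projection $f$, computed with respect to normalized Haar measure, coincides with the orthogonal projection onto $\langle f \rangle$; that is, the translates form a Parseval frame for their span:

```python
import numpy as np

n, E = 8, [1, 2, 5]  # K = Z_8; f-hat is the indicator of E, hence a projection
k = np.arange(n)
f = sum(np.exp(2j * np.pi * j * k / n) for j in E)  # f = sum of the characters in E
T = np.array([np.roll(f, m) for m in range(n)])     # T[m] = the translate L_m f

# Frame operator with respect to normalized Haar measure and the normalized
# L^2(Z_8) inner product: S = (1/n^2) sum_m (L_m f)(L_m f)^*.
S = T.T @ T.conj() / n**2

# Orthogonal projection onto <f>, the span of the characters indexed by E.
P = sum(np.outer(np.exp(2j*np.pi*j*k/n), np.exp(-2j*np.pi*j*k/n)) for j in E) / n
```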
For continuous frames, a similar result appears in \cite[Proposition 2.1]{GH}. Many orthogonal projections on $L^2(K)$ correspond to continuous Parseval frames over $K$. The projections that correspond to $K$-frames are precisely those given by convolution. \begin{proof} We use the term \emph{cyclic structure} for a pair $(\rho,f)$ consisting of a cyclic representation $\rho$ and a cyclic vector $f\in \H_\rho$. Call two cyclic structures $(\rho,f)$ and $(\rho',f')$ \emph{equivalent} if there is a unitary equivalence between $\rho$ and $\rho'$ that maps $f$ to $f'$. This agrees with the notion of equivalence of $K$-frames. Given $f\in L^2(K)$, we will write $\rho_f$ for the subrepresentation of the regular representation on \[ \langle f \rangle = \{ g \in L^2(K) : \ran \hat{g}(\pi) \subset \ran \hat{f}(\pi) \text{ for all }\pi \in \hat{K}\}. \] Lemma \ref{lem:cycIsom} shows that \[ \{ (\rho_f, f) : f \in L^2(K)\text{ is a function of positive type}\} \] is a complete and irredundant set of cyclic structures, up to equivalence. For a fixed function $f \in L^2(K)$ of positive type, it only remains to show \begin{equation} \label{eq:KFrmClas1} \{L_\xi f \}_{\xi \in K} \text{ is a frame for }\langle f \rangle \iff \hat{f}(\pi) = 0 \text{ for all but finitely many }\pi \in \hat{K} \end{equation} and \begin{equation} \label{eq:KFrmClas2} \{L_\xi f\}_{\xi \in K} \text{ is a Parseval frame for }\langle f \rangle \iff \hat{f}(\pi) \text{ is an orthogonal projection for all }\pi \in \hat{K}. \end{equation} The forward implication of \eqref{eq:KFrmClas1} follows from Theorem \ref{thm:KFrmDim}, since \[ \mult(\overline{\pi}, \rho_f) = \rank \hat{f}(\pi) \qquad (\pi \in \hat{K}), \] by \eqref{eq:multRan}. For the reverse implication, suppose that $\hat{f}(\pi) = 0$ for all but finitely many $\pi \in \hat{K}$.
Then the operators $[f,f](\pi) = \hat{f}(\pi)^2$, $\pi \in \hat{K}$, have only finitely many nonzero eigenvalues between them, so $\{L_\xi f\}_{\xi \in K}$ is a continuous frame, by Theorem \ref{thm:brackFrm}. To prove \eqref{eq:KFrmClas2}, recall that $\hat{f}(\pi) \geq 0$ for all $\pi \in \hat{K}$, so the eigenvalues of $[f,f](\pi) = \hat{f}(\pi)^2$ are precisely the squares of the eigenvalues of $\hat{f}(\pi)$. By Theorem \ref{thm:brackFrm}, $\{L_\xi f\}_{\xi \in K}$ is a continuous Parseval frame for $\langle f \rangle$ if and only if 0 and 1 are the only eigenvalues of $\hat{f}(\pi)$, $\pi \in \hat{K}$. Since the operators $\hat{f}(\pi)$ are self-adjoint, that happens if and only if each $\hat{f}(\pi)$ is an orthogonal projection. \end{proof} \begin{rem} A function $f \in L^2(K)$ is a projection if and only if $\hat{f}(\pi)$ is an orthogonal projection for each $\pi \in \hat{K}$. If we let $J(\pi) = \ran \hat{f}(\pi) \subset \H_\pi$, we see that Parseval $K$-frames can also be classified by range functions in $\{ \H_\pi \}_{\pi \in \hat{K}}$ with the property that $J(\pi) = 0$ for all but finitely many $\pi \in \hat{K}$. \end{rem} Given a projection $f \in L^2(K)$, $\{L_\xi f\}_{\xi \in K}$ is a frame only for its closed linear span in $L^2(K)$, not necessarily for the whole space. This is troublesome in practice, where one usually wants coordinates for a frame in its ``native domain''. The corollary below gives such coordinates for every Parseval $K$-frame. When a matrix space $M_{m,n}(\mathbb{C})$ is treated as a Hilbert space below, its inner product is gotten from the natural identification with $\mathbb{C}^{mn}$. \begin{cor} \label{cor:nonAbelHarm} For each $\pi \in \hat{K}$, choose an integer $r_\pi \in \{0,\dotsc,d_\pi\}$, in such a way that only finitely many $r_\pi \neq 0$. Choose an orthonormal basis for $\H_\pi$, and let $\pi_{i,j} \in C(K)$ be the corresponding matrix elements. 
Given $\xi \in K$, define $M_\xi(\pi) \in M_{r_\pi,d_\pi}(\mathbb{C})$ by \[ M_\xi(\pi) = (\sqrt{d_\pi} \pi_{i,j}(\xi) )_{1\leq i \leq r_\pi, 1 \leq j \leq d_\pi}. \] Then $\{M_\xi \}_{\xi \in K}$ is a continuous Parseval frame for $\bigoplus_{\pi \in \hat{K}} M_{r_\pi,d_\pi}(\mathbb{C})$, and it is a $K$-frame when indexed $\{ M_{\xi^{-1}}\}_{\xi \in K}$. Up to unitary equivalence, every Parseval $K$-frame is produced in this way. \end{cor} \begin{proof} First we will show that $\{M_{\xi^{-1}}\}_{\xi \in K}$ is a Parseval $K$-frame. For each $\pi \in \hat{K}$, let $e_1^\pi,\dotsc,e_{d_\pi}^\pi$ be the orthonormal basis for $\H_\pi$ used in the construction of $\{M_\xi\}_{\xi \in K}$. Let $P_\pi \in B(\H_\pi)$ be orthogonal projection onto $\spn\{e_1^\pi,\dotsc,e_{r_\pi}^\pi\}$. By Plancherel's Theorem, there is a projection $f \in L^2(K)$ with $\hat{f}(\pi) = P_\pi$ for each $\pi \in \hat{K}$. We are going to map \[ \langle f \rangle = \{ g \in L^2(K) : \ran \hat{g}(\pi) \subset \ran P_\pi \text{ for each }\pi \in \hat{K} \} \] unitarily onto $\bigoplus_{\pi \in \hat{K}} M_{r_\pi,d_\pi}(\mathbb{C})$ in a way that sends the Parseval $K$-frame $\{ L_\xi f \}_{\xi \in K}$ to $\{M_{\xi^{-1}}\}_{\xi \in K}$. For each $\pi \in \hat{K}$, assign $B(\H_\pi)$ the inner product $\langle A, B \rangle = d_\pi \langle A, B \rangle_{\mathcal{HS}}$, as in Plancherel's Theorem. There is a unitary $U_\pi \colon B(\H_\pi) \to M_{d_\pi}(\mathbb{C})$ that replaces each operator with $\sqrt{d_\pi}$ times its matrix over the chosen basis. Let \[ U\colon L^2(K) \to \bigoplus_{\pi \in \hat{K}} M_{d_\pi}(\mathbb{C}) \] be the unitary that follows the Fourier transform $\mathcal{F} \colon L^2(K) \to \bigoplus_{\pi \in \hat{K}} B(\H_\pi)$ by an application of $U_\pi$ in every coordinate $\pi \in \hat{K}$. 
Given $\xi \in K$, the translation identity \eqref{eq:FourTrans} shows that \[ (L_\xi f)\char`\^(\pi) = P_\pi \pi(\xi^{-1}) \qquad (\pi \in \hat{K}), \] so the $\pi$-th coordinate of $U(L_\xi f)$ is the $d_\pi \times d_\pi$ matrix with $M_{\xi^{-1}}(\pi)$ in the top $r_\pi$ rows and zeros in the bottom $d_\pi - r_\pi$ rows. Moreover, \[ U \langle f \rangle = \{ (A_\pi)_{\pi \in \hat{K}} \in \bigoplus_{\pi \in \hat{K}} M_{d_\pi}(\mathbb{C}) : \text{for each $\pi \in \hat{K}$, $A_\pi$ has zeros in the bottom $d_\pi - r_\pi$ rows} \}. \] Following $U$ with the natural identification \[ U \langle f \rangle \cong \bigoplus_{\pi \in \hat{K}} M_{r_\pi,d_\pi}(\mathbb{C}) \] gives the desired unitary of $\langle f \rangle$ onto $\bigoplus_{\pi \in \hat{K}} M_{r_\pi,d_\pi}(\mathbb{C})$. To see that \emph{every} Parseval $K$-frame is produced in this way, reverse the procedure above for an arbitrary projection $f \in L^2(K)$. For each $\pi \in \hat{K}$, let $P_\pi = \hat{f}(\pi)$, let $r_\pi = \rank P_\pi$, and choose an orthonormal basis $e_1^\pi,\dotsc,e_{d_\pi}^\pi$ for $\H_\pi$ in such a way that $\ran P_\pi = \spn\{ e_1^\pi, \dotsc, e_{r_\pi}^\pi \}$. The Parseval $K$-frame $\{M_{\xi^{-1}}\}_{\xi \in K}$ produced with these parameters is unitarily equivalent to $\{L_\xi f\}_{\xi \in K}$ through the isometries constructed above. \end{proof} In the special case where $K$ is finite and \emph{abelian}, the frames described in Corollary \ref{cor:nonAbelHarm} are precisely the ``harmonic'' frames made by deleting rows from a discrete Fourier transform (DFT) matrix. (See \cite{VW2} for another proof that harmonic frames come from group actions.) While each finite abelian group can be used to make only finitely many Parseval frames in this way, a nonabelian group can make uncountably many inequivalent Parseval frames, since there are uncountably many projections in $L^2(K)$. (For finite groups, this was observed in \cite{VW}.)
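As a concrete numerical illustration (the group $\mathbb{Z}_5$ and the choice of rows are arbitrary), deleting rows from a DFT matrix leaves columns that form a tight frame, with frame bound equal to the order of the group under counting measure:

```python
import numpy as np

n = 5  # the cyclic group Z_5
F = np.exp(2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)  # DFT matrix
M = F[[0, 1, 3], :]  # keep three of the five rows

# The five columns of M form a tight frame for C^3: since distinct rows of F
# are orthogonal with squared norm n, the frame operator is n times the identity.
S = M @ M.conj().T
```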
Moreover, it is often possible to make \emph{real} frames using nonabelian groups, as in the next example. \begin{example} Let $K=D_3$. Use notation as in Example \ref{ex:dihedFrm}. If we choose each $r_\pi$ to be as large as possible in Corollary \ref{cor:nonAbelHarm}, we obtain the following tight frame: \[ \begin{pmatrix} \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} & \begin{pmatrix} 1 \end{pmatrix} \\[5 pt] \begin{pmatrix} \pmb{1} \end{pmatrix} & \begin{pmatrix} \pmb{1} \end{pmatrix} & \begin{pmatrix} \pmb{1} \end{pmatrix} & \begin{pmatrix} \pmb{-1} \end{pmatrix} & \begin{pmatrix} \pmb{-1} \end{pmatrix} & \begin{pmatrix} \pmb{-1} \end{pmatrix} \\[5 pt] \begin{pmatrix} \pmb{\sqrt{2}} & \pmb{0} \\ 0 & \sqrt{2} \end{pmatrix} & \begin{pmatrix} \pmb{ \omega \sqrt{2} } & \pmb{0} \\ 0 & \omega^2 \sqrt{2} \end{pmatrix} & \begin{pmatrix} \pmb{ \omega^2 \sqrt{2} } & \pmb{0} \\ 0 & \omega \sqrt{2} \end{pmatrix} & \begin{pmatrix} \pmb{0} & \pmb{ \sqrt{2} } \\ \sqrt{2} & 0 \end{pmatrix} & \begin{pmatrix} \pmb{0} & \pmb{ \omega\sqrt{2} } \\ \omega^2 \sqrt{2} & 0 \end{pmatrix} & \begin{pmatrix} \pmb{0} & \pmb{ \omega^2 \sqrt{2} } \\ \omega \sqrt{2} & 0 \end{pmatrix} \end{pmatrix}. 
\] We can get another tight frame by deleting some of the rows: \[ \begin{pmatrix} \begin{pmatrix} \pmb{1} \end{pmatrix} & \begin{pmatrix} \pmb{1} \end{pmatrix} & \begin{pmatrix} \pmb{1} \end{pmatrix} & \begin{pmatrix} \pmb{-1} \end{pmatrix} & \begin{pmatrix} \pmb{-1} \end{pmatrix} & \begin{pmatrix} \pmb{-1} \end{pmatrix} \\[5 pt] \begin{pmatrix} \pmb{\sqrt{2}} & \pmb{0} \end{pmatrix} & \begin{pmatrix} \pmb{ \omega \sqrt{2} } & \pmb{0} \end{pmatrix} & \begin{pmatrix} \pmb{ \omega^2 \sqrt{2} } & \pmb{0} \end{pmatrix} & \begin{pmatrix} \pmb{0} & \pmb{ \sqrt{2} } \end{pmatrix} & \begin{pmatrix} \pmb{0} & \pmb{ \omega\sqrt{2} } \end{pmatrix} & \begin{pmatrix} \pmb{0} & \pmb{ \omega^2 \sqrt{2} } \end{pmatrix} \end{pmatrix}. \] This corresponds to choosing $r_1=0$ and $r_2=r_3 = 1$. Collapsing the interior matrices gives a tight frame for $\mathbb{C}^3$: \[ \begin{pmatrix} 1 & 1 & 1 & -1 & -1 & -1 \\ \sqrt{2} & \omega\sqrt{2} & \omega^2 \sqrt{2} & 0 & 0 & 0 \\ 0 & 0 & 0 & \sqrt{2} & \omega \sqrt{2} & \omega^2 \sqrt{2} \\ \end{pmatrix}. \] The frame bound is $\card(D_3) = 6$; see Remark \ref{rem:frmBnds}. Representing the two-dimensional representation over a different basis gives a completely different frame. If we use \[ \pi_3(a) = \frac{1}{2} \begin{pmatrix} -1 & -\sqrt{3} \\ \sqrt{3} & -1 \end{pmatrix} \quad \text{and} \quad \pi_3(b) = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \] and choose rows exactly as above, we obtain the tight frame \[ \begin{pmatrix} 1 & 1 & 1 & -1 & -1 & -1 \\ \sqrt{2} & -1/\sqrt{2} & -1/\sqrt{2} & \sqrt{2} & -1/\sqrt{2} & -1/\sqrt{2} \\ 0 & -\sqrt{3/2} & \sqrt{3/2} & 0 & \sqrt{3/2} & -\sqrt{3/2} \end{pmatrix}. \] This time we used real representations, so we got a tight frame for $\mathbb{R}^3$. \end{example} \subsection{Disjointness properties} Let $\H$ and $\mathcal{K}$ be separable Hilbert spaces carrying frames $\Phi=\{f_i\}_{i\in I}$ and $\Psi=\{g_i\}_{i\in I}$, respectively. 
We say that $\Phi$ and $\Psi$ are \emph{disjoint} if $\{ (f_i,g_i) \}_{i \in I}$ is a frame for $\H \oplus \mathcal{K}$. Disjoint frames were introduced independently by Balan \cite{Ba} and by Han and Larson \cite{HL}. For a detailed study of disjoint \emph{continuous} frames, see \cite{GH}. The corollary below says that $K$-frames from distinct isotypical components of $\rho$ are always disjoint, and that every $K$-frame can be decomposed into disjoint frames in this way. This will be generalized for group frames with multiple generators in Corollary \ref{cor:multCompFrm}. Recall that $\mathcal{M}_\pi \subset \H_\rho$ denotes the isotypical component for $\pi \in \hat{K}$, and that $P_\pi \in B(\H_\rho)$ is orthogonal projection of $\H_\rho$ onto $\mathcal{M}_\pi$. \begin{cor} \label{cor:compFrm} Fix a vector $f \in \H_\rho$ and constants $A$ and $B$ with $0 < A \leq B < \infty$. The following are equivalent. \begin{enumerate}[(i)] \item $\{\rho(\xi) f\}_{\xi \in K}$ is a continuous frame for $\H_\rho$ with bounds $A, B$. \item For each $\pi \in \hat{K}$, $\{ \rho(\xi) P_\pi f\}_{\xi \in K}$ is a continuous frame for $\mathcal{M}_\pi$ with bounds $A,B$. \end{enumerate} \end{cor} \begin{proof} For each $\pi \in \hat{K}$ and each $g \in \H_\rho$, Proposition \ref{prop:brackProp3} shows that \begin{equation} \label{eq:compFrm1} [P_\pi f, P_\pi g](\sigma) = [P_\pi f, g](\sigma) = \begin{cases} [f,g](\overline{\pi}), & \text{if }\sigma = \overline{\pi} \\ 0, & \text{if }\sigma \neq \overline{\pi}. \end{cases} \end{equation} Taking $g = f$ above, we see that $f$ satisfies condition (ii) of Theorem \ref{thm:brackFrm} if and only if each $P_\pi f$ does the same. It remains to show that $\langle f \rangle = \H_\rho$ if and only if $\langle P_\pi f \rangle = \mathcal{M}_\pi$ for each $\pi \in \hat{K}$. If $\langle f \rangle \neq \H_\rho$, then we can find a nonzero vector $g \in \H_\rho$ with $[f,g] =0$, by Proposition \ref{prop:brackProp1}(vi). 
Find $\pi \in \hat{K}$ for which $P_\pi g \neq 0$. Then \eqref{eq:compFrm1} shows that $[P_\pi f, P_\pi g] = 0$, so that $P_\pi g \perp \langle P_\pi f \rangle$. Thus, $\langle P_\pi f \rangle \neq \mathcal{M}_\pi$. Conversely, if there is some $\pi \in \hat{K}$ for which $\langle P_\pi f \rangle \neq \mathcal{M}_\pi$, then there is a nonzero vector $g \in \mathcal{M}_\pi$ with $0 = [P_\pi f, g] = [f, P_\pi g] = [f,g]$. Hence, $g \perp \langle f \rangle$, and $\langle f \rangle \neq \H_\rho$. \end{proof} Recall that $\rho$ is \emph{multiplicity free} when all of its isotypical components are irreducible. Equivalently, this means that $\mult(\pi,\rho) \in \{0,1\}$ for all $\pi \in \hat{K}$. Corollary \ref{cor:compFrm} leads to an extension of Example \ref{ex:irredFrm} for multiplicity free representations. \begin{cor} \label{cor:multFrFrm} Suppose $\rho$ is multiplicity free. Let $E = \{ \pi \in \hat{K} : \mult(\pi,\rho) \neq 0\}$. For a nonzero vector $f \in \H_\rho$, the following are equivalent. \begin{enumerate}[(i)] \item $\{ \rho(\xi) f\}_{\xi \in K}$ is a tight frame for $\H_\rho$. \item For any $\pi,\sigma \in E$, $\Norm{P_\pi f}^2/d_\pi = \Norm{P_\sigma f}^2/d_\sigma$. \end{enumerate} When this happens, the optimal frame bound is the common value of $\Norm{P_\pi f}^2/d_\pi$ for $\pi \in E$. \end{cor} In the special case where $K$ is \emph{finite}, the equivalence of (i) and (ii) above can be deduced from \cite[Theorem 6.18]{VW2}. We mention just one of a myriad applications for Corollary \ref{cor:compFrm}. An action of a group $G$ on a set $X$ is called \emph{2-transitive} when the following holds: for every two pairs $(x,y), (w,z) \in X \times X$ with $x \neq y$ and $w \neq z$, there is a single group element $g \in G$ with $g\cdot x = w$ and $g\cdot y = z$. \begin{cor} \label{cor:2tranFrm} Let $G$ be a finite group acting on a finite set $X$ with an action that is $2$-transitive. Fix a nonzero vector $f =(f_x)_{x\in X} \in \ell^2(X)$. 
Then $\{ (f_{g\cdot x})_{x\in X} : g \in G\}$ is a tight frame for $\ell^2(X)$ if and only if \begin{equation} \label{eq:2tranFrm} \left| \sum_{x\in X} f_x \right|^2 = \sum_{x\in X} | f_x |^2. \end{equation} \end{cor} \begin{proof} The statement is trivial when $X$ is a singleton, so we may assume that $X$ has more than one point. Let $\rho$ be the unitary representation of $G$ on $\ell^2(X)$ associated with the action of $G$. Namely, for $g \in G$ and $\psi = (\psi_x)_{x\in X} \in \ell^2(X)$, we define $\rho(g)\psi = (\psi_{g^{-1}\cdot x})_{x\in X}$. By \cite[Corollary 29.10]{JL}, $\rho$ is multiplicity free with two isotypical components, \[ \mathcal{M}_1 = \left\{ (\psi_x)_{x\in X} \in \ell^2(X) : \psi_x = \psi_y \text{ for all }x,y \in X \right\} \] and \[ \mathcal{M}_2 = \left\{ (\psi_x)_{x\in X} \in \ell^2(X) : \sum_{x\in X} \psi_x = 0 \right\}. \] Let $P_j$ be orthogonal projection of $\ell^2(X)$ onto $\mathcal{M}_j$, for $j=1,2$. If we denote \[ \overline{f} = \frac{1}{|X|} \sum_{x\in X} f_x, \] then $P_1 f = (\overline{f})_{x\in X}$, and $P_2 f = (f_x - \overline{f})_{x\in X}$. In particular, \[ \Norm{P_1 f}^2 = \frac{1}{|X|} \left| \sum_{x\in X} f_x \right|^2. \] By Corollary \ref{cor:multFrFrm}, the orbit of $f$ under $\rho$ is a tight frame for $\ell^2(X)$ if and only if $\Norm{P_1 f}^2 = \Norm{P_2 f}^2 / (|X| - 1)$, if and only if $|X|\cdot \Norm{ P_1 f}^2 = \Norm{P_2 f}^2 + \Norm{P_1 f}^2 = \Norm{f}^2$, if and only if \[ \left| \sum_{x \in X} f_x \right|^2 = \sum_{x\in X} | f_x |^2. \qedhere \] \end{proof} The proof indicates a simple and universal method for constructing the generating vector $f$. Let $\varphi \in \ell^2(X)$ be the all-ones vector. Fix any nonzero vector $\psi \in \ell^2(X)$ with $\sum_{x\in X} \psi_x = 0$, and scale it so that $\Norm{\psi}^2 = |X|^2 - |X|$. Then $f = \varphi + \psi$ generates a tight frame for $\ell^2(X)$, by Corollary \ref{cor:multFrFrm}. 
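The recipe is easy to run numerically. In the sketch below, $X = \{1,2,3\}$ with the natural action of $S_3$, and $\psi$ is one arbitrary choice of zero-sum vector:

```python
import itertools
import math
import numpy as np

n = 3
phi = np.ones(n)                  # the all-ones vector
psi = np.array([1.0, -1.0, 0.0])  # any nonzero vector with zero coordinate sum
psi *= math.sqrt((n**2 - n) / np.dot(psi, psi))  # scale so ||psi||^2 = n^2 - n
f = phi + psi

# f satisfies the tightness condition |sum f_x|^2 = sum |f_x|^2.
assert np.isclose(f.sum()**2, np.sum(f**2))

# Orbit of f under all coordinate permutations of S_3; the frame operator of a
# tight frame of n! vectors with ||f||^2 = n^2 in C^n is (n! * n^2 / n) I.
S = sum(np.outer(f[list(p)], f[list(p)])
        for p in itertools.permutations(range(n)))
```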
Up to scaling, every vector satisfying \eqref{eq:2tranFrm} is produced in this way. \begin{example} \label{ex:PermFrm} The action of the symmetric group $S_n$ on the set with $n$ elements is $2$-transitive. Thus, Corollary \ref{cor:2tranFrm} and the comment above explain how to make a unit norm tight frame of $n!$ vectors in $\mathbb{C}^n$ just by permuting the entries of a single vector. \end{example} \medskip \section{Group frames with multiple generators} \label{sec:multGen} The last two sections focused on frames generated by a single vector $f\in \H_\rho$. We now consider frames with \emph{multiple} generators. For a countable family $\mathscr{A} = \{f_j\}_{j\in I} \subset \H_\rho$, this means that we will determine precise (and simple) conditions under which the orbit $\{ \rho(\xi) f_j\}_{j\in I, \xi \in K}$ forms a continuous frame for $\H_\rho$. In the course of doing so, we will classify the invariant subspaces of $\H_\rho$ in terms of range functions. Despite significant interest in the problem, very little has been done in the area of group frames with multiple generators. The most fruitful area has been frames generated by translations, mostly with abelian groups \cite{BHP2,B,BR,CP,I,KR} but in at least one case with nonabelian \cite{CMO}. In the setting of discrete nonabelian groups, Hern{\'a}ndez and his collaborators \cite{BHP3} have recently developed an abstract machinery to handle frames with multiple generators for a special class of unitary representations. For finite groups and \emph{tight} frames, Vale and Waldron \cite{VW3} recently broke through the single generator barrier, with a neat condition in terms of norms and orthogonality of the generating vectors. These few papers provide the state of the art. Our main result is a duality theorem unifying the work of Vale and Waldron with classical duality of frames and Riesz sequences, simultaneously extending their results to non-tight frames and actions by compact groups. 
Here we pull ahead of the abelian setting. As far as the author knows, there is nothing of this kind in the literature for LCA groups. Once again, we hope that by illuminating the situation for nonabelian compact groups, we can set a path for further research on representations of general locally compact groups. \medskip Our notation and assumptions are as follows. Let $K$ and $\rho$ be as in the previous sections. Since $K$ is compact, it is always possible to decompose $\H_\rho$ as a direct sum of irreducible invariant subspaces. Our main assumption is that \emph{this has already been done}. For $\pi \in \hat{K}$, we let $m_\pi = \mult(\pi,\rho)$. We write $\pi^{\oplus m_\pi}$ for the direct sum of $m_\pi$ copies of $\pi$, which acts on $\H_\pi^{\oplus m_\pi}$. Without loss of generality, we may assume that \[ \rho = \bigoplus_{\pi \in \hat{K}} \pi^{\oplus m_\pi}, \] and that \[ \H_\rho = \bigoplus_{\pi \in \hat{K}} \H_\pi^{\oplus m_\pi}. \] We warn that some of the multiplicities $m_\pi$ may be infinite, but since $\H_\rho$ is separable, they must all be countable. Fix the following notation. Let $\mathscr{A} = \{f_j\}_{j\in I} \subset \H_\rho$ be a countable family of vectors. We write $f_j = (f_j^\pi)_{\pi \in \hat{K}} \in \H_\rho$, with $f_j^\pi = (f_{i,j}^\pi)_{i=1}^{m_\pi} \in \H_\pi^{\oplus m_\pi}$. We also write \[ E(\mathscr{A}) = \{ \rho(\xi) f_j\}_{j \in I, \xi \in K} \] for the orbit of $\mathscr{A}$ under $\rho$. Formally, $E(\mathscr{A})$ should be interpreted as a set with multiplicities, or more accurately, as a mapping $I \times K \to \H_\rho$. Finally, we let \[ S(\mathscr{A}) = \overline{\spn}\{ \rho(\xi) f_j : j \in I, \xi \in K\} \] be the invariant subspace generated by $\mathscr{A}$. Our notation is meant to suggest that $\mathscr{A}$ is a kind of matrix. For each $\pi \in \hat{K}$, we define \[ \mathscr{A}(\pi) = ( f_{i,j}^\pi )_{1 \leq i \leq m_\pi, j \in I}, \] which is a (possibly infinite) matrix with entries in $\H_\pi$.
The number of rows equals $m_\pi$, and the number of columns equals $\card(\mathscr{A})$. For instance, if $\mathscr{A}$ were finite with $I = \{1,\dotsc, N\}$, we would have \[ \mathscr{A}(\pi) = \begin{pmatrix} | & | & \dots & | \\ f_1^\pi & f_2^\pi & \dotso & f_N^\pi \\ | & | & \dots & | \end{pmatrix}. \] If we now imagine the matrices $\mathscr{A}(\pi)$ stacked vertically, then the $j$-th column of the resulting ``matrix'' precisely describes the direct sum decomposition of $f_j \in \mathscr{A}$. We remind the reader that a \emph{Riesz sequence} in a Hilbert space $\H$ is a sequence of vectors $\{ f_i \}_{i \in J} \subset \H$ for which there are constants $0 < A \leq B < \infty$ such that, whenever $(c_i) \in \ell^2(J)$ has finite support, \[ A \sum_{i \in J} | c_i |^2 \leq \Norm{ \sum_{i \in J} c_i f_i }^2 \leq B \sum_{i \in J} |c_i|^2. \] Once this inequality holds for those $(c_i) \in \ell^2(J)$ with finite support, it automatically holds for arbitrary $(c_i) \in \ell^2(J)$. Our main result, below, says that the frame properties of the orbit of the ``columns'' of $\mathscr{A}$ can be read from the Riesz properties of the \emph{rows}. \begin{theorem} \label{thm:multFrm2} The following are equivalent for constants $A$ and $B$ with $0 < A \leq B < \infty$. \begin{enumerate}[(i)] \item The orbit $E(\mathscr{A}) = \{\rho(\xi) f_j\}_{j \in I, \xi \in K}$ is a continuous frame for $\H_\rho$ with bounds $A,B$. \item For every $\pi \in \hat{K}$, the rows of $\mathscr{A}(\pi)$ belong to $\H_\pi^{\oplus I}$, where they form a Riesz sequence with bounds $d_\pi A,d_\pi B$. \end{enumerate} \end{theorem} This will actually be a corollary of a more general theorem. Theorem \ref{thm:multFrm} (infra) gives conditions for $E(\mathscr{A})$ to form a continuous frame for a general invariant subspace of $\H_\rho$. \begin{example} Here are four special cases of Theorem \ref{thm:multFrm2}. 
\smallskip (1) When $K$ is the trivial group and $\rho$ is the trivial action of $K$ on $\mathbb{C}$, we recover the usual duality theorem for frames and Riesz sequences, which says that the columns of a matrix $M \in M_{m,n}(\mathbb{C})$ form a frame for $\mathbb{C}^m$ if and only if the rows of $M$ form a Riesz sequence in $\mathbb{C}^n$. Moreover, the bounds of the frame and the Riesz sequence are the same. \smallskip (2) When $\mathscr{A}$ has a single vector $f$ and $\rho$ is irreducible, there is only one matrix $\mathscr{A}(\pi)$ to consider, namely $\mathscr{A}(\rho) = (f)$. Obviously its rows form a Riesz sequence with upper and lower bounds both equal to $\Norm{f}^2$, so the orbit $\{ \rho(\xi) f \}_{\xi \in K}$ is a tight frame for $\H_\rho$ with bound $\Norm{f}^2/(\dim \H_\rho)$. This is the conclusion of Example \ref{ex:irredFrm}. \smallskip (3) More generally, when $\rho$ is multiplicity free, we can easily recover Corollary \ref{cor:multFrFrm}. \smallskip (4) Taking $A=B$ in Theorem \ref{thm:multFrm2}, we see that $E(\mathscr{A})$ is a tight frame for $\H_\rho$ with bound $A$ if and only if the rows of each matrix $\mathscr{A}(\pi)$ form an orthogonal sequence of vectors in $\H_\pi^{\oplus I}$, with each vector's norm equal to $\sqrt{d_\pi A}$. That is, \[ \sum_{j \in I} \langle f_{i_1,j}^\pi, f_{i_2,j}^\pi \rangle = \delta_{i_1,i_2}\cdot d_\pi A. \] In the case where $K$ and $\mathscr{A}$ are both \emph{finite}, this is a result of Vale and Waldron \cite[Theorem 2.8]{VW3}. \end{example} \begin{rem} Neither the group $K$ nor the representation $\rho$ plays a prominent role in condition (ii) of Theorem \ref{thm:multFrm2}, except to provide conditions on the direct sum decomposition $\H_\rho = \bigoplus_{\pi \in \hat{K}} \H_\pi^{\oplus m_\pi}$. Suppose, then, that $G$ is \emph{another} compact group acting on $\H_\rho$ with a representation $\eta$ that admits the same decomposition of $\H_\rho$ as a direct sum of irreducible invariant subspaces. 
Then the orbit of $\mathscr{A}$ under the action of $\rho$ is a frame for $\H_\rho$ if and only if the orbit under the action of $\eta$ is, too. Moreover, the frame bounds are the same in both cases. While this may seem surprising at first, it is really an extension of a well-known phenomenon. After all, any nonzero vector $f\in \H_\rho$ generates a tight frame when $\rho$ acts irreducibly, and this mild condition $(f\neq 0)$ has nothing to do with $K$ or the particular irreducible representation $\rho$. As we have seen, this is a special case of Theorem \ref{thm:multFrm2}. \end{rem} \medskip \subsection{Classification of invariant subspaces} From a technical perspective, we can always find an encompassing group $G \supset K$ for which $\H_\rho$ embeds into $L^2(G)$ as a $K$-invariant subspace, with $\rho$ turning into left translation. (See Theorem \ref{thm:isom}.) In this sense, Theorem \ref{thm:tranFrm} on frames generated by translations already gives a complete characterization of group frames with multiple generators. In practice, however, it may be tedious to unravel this characterization through the embedding $\H_\rho \to L^2(G)$. Instead of following that route, we will now try to recreate the program of Sections \ref{sec:Zak}--\ref{sec:tranFrm} from scratch. Namely, we will give a range function characterization of the invariant subspaces of $\H_\rho$, and then we will use that characterization to deduce Theorem \ref{thm:multFrm2}. To begin our program, we need a substitute for the Zak transform. Fix $\pi \in \hat{K}$, and associate each sequence $\Phi = {(\phi_i)_{i=1}^{m_\pi}} \in {\H_\pi^{\oplus m_\pi}}$ with its analysis operator $T_\pi \Phi \colon \H_\pi \to \ell^2_{m_\pi}$, which is given by \[ [ T_\pi \Phi ](\psi) = ( \langle \psi, \phi_i \rangle )_{i=1}^{m_\pi} \qquad (\psi \in \H_\pi). \] Then $T_\pi \colon \H_\pi^{\oplus m_\pi} \to \mathcal{HS}( \H_\pi, \ell^2_{ m_\pi } )$ is a \emph{conjugate}-linear unitary. 
To see this, consider the composition of isomorphisms \[ \H_\pi^{\oplus m_\pi} \cong \H_\pi \otimes \ell^2_{m_\pi} \cong \mathcal{HS}(\H_\pi, \ell^2_{m_\pi} ), \] the last of which is conjugate linear (see \cite[Section 7.3]{F}). Letting $\pi$ run through $\hat{K}$, we obtain a conjugate-linear unitary \[ T \colon \H_\rho \to \bigoplus_{\pi \in \hat{K}} \mathcal{HS}( \H_\pi, \ell^2_{m_\pi} ) \] given by \[ T (g_\pi)_{\pi \in \hat{K}} = ( T_\pi g_\pi )_{\pi \in \hat{K}} \qquad ( (g_\pi)_{\pi \in \hat{K}} \in \bigoplus_{\pi \in \hat{K}} \H_\pi^{\oplus m_\pi} = \H_\rho ). \] If we write $g_\pi = (g_i^\pi)_{i=1}^{m_\pi} \in \H_\pi^{\oplus m_\pi}$, then the simple formula $\langle \phi, \pi(\xi) g_i^\pi \rangle = \langle \pi(\xi^{-1}) \phi, g_i^\pi \rangle$ gives the key identity \begin{equation}\label{eq:TRho} (T \rho(\xi) g)(\pi) = (Tg)(\pi)\cdot \pi(\xi^{-1}) \qquad (g \in \H_\rho,\ \xi \in K,\ \pi \in \hat{K}). \end{equation} This will serve as our substitute for the Zak transform's translation property \eqref{eq:ZakTrans}. \medskip A careful reading of Section \ref{sec:ranTran} shows that we used only two properties of the Zak transform: the translation property \eqref{eq:ZakTrans}, and the fact that $Z$ is unitary. In the current setting, we can therefore leverage the intertwining property \eqref{eq:TRho} to classify invariant subspaces of $\H_\rho$ in terms of range functions. Let $J$ be a range function in $\{ \ell^2_{m_\pi} \}_{\pi \in \hat{K}}$, and let \[ V_J = \{ (g_\pi)_{\pi \in \hat{K}} \in \H_\rho : \text{for each $\pi \in \hat{K}$, } \ran T_\pi g_\pi \subset J(\pi) \}. \] Equivalently, \[ T V_J = \bigoplus_{\pi \in \hat{K}} \mathcal{HS}(\H_\pi, J(\pi) ). \] By \eqref{eq:TRho}, $V_J$ is an invariant subspace of $\H_\rho$. In fact, a trivial modification of the proof of Theorem \ref{thm:ranTran} shows that every invariant subspace of $\H_\rho$ takes this form. Explicitly, we have the following. 
\begin{theorem} The mapping $J \mapsto V_J$ is a bijection between range functions in $\{ \ell^2_{m_\pi} \}_{\pi \in \hat{K}}$ and invariant subspaces of $\H_\rho$. \end{theorem} In further analogy with the range function analysis of Section \ref{sec:ranTran}, it is easy to see that the correspondence $J \mapsto V_J$ preserves direct sum decompositions. This leads to the following analogue of Theorem \ref{thm:ranDecomp}. \begin{theorem} \label{thm:ranDecompGen} Let $J$ be a range function in $\{ \ell^2_{m_\pi}\}_{\pi \in \hat{K}}$. Choose an orthonormal basis $\{e_i^\pi\}_{i\in I_\pi}$ for each $J(\pi)$, $\pi \in \hat{K}$. For each $\pi \in \hat{K}$ and $i \in I_\pi$, let $V_{\pi,i}$ be the space of $(g_\sigma)_{\sigma \in \hat{K}} \in \H_\rho$ such that $g_\sigma = 0$ for $\sigma \neq \pi$, and such that $g_\pi = (g_j^\pi)_{j=1}^{m_\pi}$ satisfies $(\langle \phi,g_j^\pi\rangle )_{j=1}^{m_\pi} = c_\phi e_i^\pi$ for every $\phi \in \H_\pi$, where $c_\phi$ is a scalar. Then $V_{\pi,i}$ is an irreducible invariant subspace of $\H_\rho$, and \[ V_J = \bigoplus_{\pi \in \hat{K}} \bigoplus_{i\in I_\pi} V_{\pi,i}. \] Moreover, every decomposition of $V_J$ as a direct sum of irreducible subspaces occurs in this way. \end{theorem} When $J$ is the range function with $J(\pi) = \ell^2_{m_\pi}$ for every $\pi \in \hat{K}$, the theorem above describes every possible decomposition of $\H_\rho$ as a direct sum of irreducibles. Remember that our operating assumption is that we can find \emph{one} such decomposition. Thus, knowing one decomposition is enough to describe them all (and very simply, at that). \subsection{Duality for frames with multiple generators} Now we can prove our main theorem on group frames with multiple generators. Remember our interpretation of $\mathscr{A}$ as a kind of matrix, with the vectors $f_j \in \mathscr{A}$ appearing as the ``columns''. 
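\medskip To fix ideas, we pause for a toy computation in the simplest reducible setting; it is only an illustrative sketch and is not needed in what follows.

\begin{example} Let $K = \{1,-1\}$ with normalized counting measure, acting on $\H_\rho = \mathbb{C}^2$ by $\rho(\xi) = \operatorname{diag}(1,\xi)$, so that $\rho = \pi_+ \oplus \pi_-$ is the direct sum of the trivial and sign characters, each with multiplicity $1$ and $d_{\pi_\pm} = 1$. For a single generator $f = (a,b)$, we have $\mathscr{A}(\pi_+) = (a)$ and $\mathscr{A}(\pi_-) = (b)$, whose single rows form Riesz sequences with upper and lower bounds $|a|^2$ and $|b|^2$, respectively. On the frame side, for $g = (x,y)$, \[ \int_K | \langle g, \rho(\xi) f \rangle |^2\, d\xi = \tfrac{1}{2} \left( | x\bar{a} + y\bar{b} |^2 + | x\bar{a} - y\bar{b} |^2 \right) = |a|^2 |x|^2 + |b|^2 |y|^2. \] Hence the orbit $\{\rho(\xi) f\}_{\xi \in K}$ is a continuous frame for $\mathbb{C}^2$ precisely when $a$ and $b$ are both nonzero, with optimal bounds $A = \min(|a|^2,|b|^2)$ and $B = \max(|a|^2,|b|^2)$, just as Theorem \ref{thm:multFrm2} predicts. \end{example}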
It turns out that the frame properties of the orbit of the ``columns'' of $\mathscr{A}$ can be read from a Riesz-like property on the \emph{rows}. \begin{theorem} \label{thm:multFrm} Let $J$ be a range function in $\{ \ell^2_{m_\pi} \}_{\pi \in \hat{K}}$, and assume that $\mathscr{A} \subset V_J$. For constants $A$ and $B$ with $0 < A \leq B < \infty$, the following are equivalent. \begin{enumerate}[(i)] \item $E(\mathscr{A})$ is a continuous frame for $V_J$ with bounds $A,B$. That is, \[ A \Norm{g}^2 \leq \sum_{j\in I} \int_K | \langle g, \rho(\xi) f_j \rangle |^2\, d\xi \leq B \Norm{g}^2 \qquad (g \in V_J). \] \item For every $\pi \in \hat{K}$ and every sequence $(c_i)_{i=1}^{m_\pi} \in J(\pi) \subset \ell^2_{m_\pi}$, \[ d_\pi A \sum_{i=1}^{m_\pi} |c_i|^2 \leq \sum_{j\in I} \Norm{ \sum_{i=1}^{m_\pi} c_i f_{i,j}^\pi }^2 \leq d_\pi B \sum_{i=1}^{m_\pi} |c_i|^2. \] \end{enumerate} \end{theorem} \begin{proof} Fix $g,h \in \H_\rho$. We will denote $g = (g_\pi)_{\pi \in \hat{K}}$, with $g_\pi \in \H_\pi^{\oplus m_\pi}$, and $g_\pi = (g_i^\pi)_{i=1}^{m_\pi}$, with $g_i^\pi \in \H_\pi$. We use a similar notation for $h$. For each $\pi \in \hat{K}$, fix an orthonormal basis $e_1^\pi,\dotsc,e_{d_\pi}^\pi$ for $\H_\pi$, and let $\pi_{i,j} \in C(K)$ be the corresponding matrix elements. We are going to decompose $V_h g \in L^2(K)$ in the orthonormal basis $\{ \sqrt{d_\pi} \pi_{i,j} : \pi \in \hat{K}, 1 \leq i,j \leq d_\pi\}$. 
For any $\xi \in K$, we can use \eqref{eq:TRho} and the fact that $T$ is a conjugate-linear unitary to write \[ \langle g, \rho(\xi) h \rangle = \langle T \rho(\xi) h, T g \rangle = \sum_{\pi \in \hat{K}} \langle (T_\pi h_\pi) \pi(\xi^{-1}), T_\pi g_\pi \rangle_{\mathcal{HS}} = \sum_{\pi \in \hat{K}} \sum_{k=1}^{d_\pi} \langle (T_\pi h_\pi) \pi(\xi^{-1}) e_k^\pi, (T_\pi g_\pi) e_k^\pi \rangle \] \[ = \sum_{\pi \in \hat{K}} \sum_{k=1}^{d_\pi} \sum_{i=1}^{m_\pi} \langle \pi(\xi^{-1}) e_k^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle = \sum_{\pi \in \hat{K}} \sum_{k=1}^{d_\pi} \sum_{i=1}^{m_\pi} \sum_{l=1}^{d_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle \pi(\xi^{-1}) e_k^\pi, e_l^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \] \[ = \sum_{\pi \in \hat{K}} \sum_{k=1}^{d_\pi} \sum_{i=1}^{m_\pi} \sum_{l=1}^{d_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \overline{\pi_{k,l}(\xi)}. \] By using the inequalities $| \langle e_l^\pi, h_i^\pi \rangle |^2 \leq \Norm{h_i^\pi}^2$, $| \langle g_i^\pi, e_k^\pi \rangle |^2 \leq \Norm{g_i^\pi}^2$, and $|\pi_{k,l}(\xi)|\leq 1$, one can easily show that \[ \sum_{i=1}^{m_\pi} | \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \overline{\pi_{k,l}(\xi)} | \leq \Norm{h_\pi} \Norm{g_\pi} < \infty \qquad (\pi \in \hat{K};\ k,l=1,\dotsc,d_\pi). \] Thus, we can reorder the sum above to write \[ \langle g, \rho(\xi) h \rangle = \sum_{\pi \in \hat{K}} \sum_{k,l=1}^{d_\pi} \left( \frac{1}{\sqrt{d_\pi}} \sum_{i=1}^{m_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \right) \sqrt{d_\pi}\, \overline{\pi}_{k,l}(\xi). \] We want to apply the Peter-Weyl Theorem to conclude that \begin{equation}\label{eq:multFrm1} \int_K | \langle g, \rho(\xi) h \rangle |^2 d\xi = \sum_{\pi \in \hat{K}} \frac{1}{d_\pi} \sum_{k,l=1}^{d_\pi} \left| \sum_{i=1}^{m_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \right|^2. 
\end{equation} To justify \eqref{eq:multFrm1}, it suffices to prove the sum on the right is finite. To see this is the case, first observe that for $\pi \in \hat{K}$, \[ \sum_{i=1}^{m_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle = \langle (T_\pi h_\pi) e_l^\pi, (T_\pi g_\pi) e_k^\pi \rangle \qquad (k,l=1,\dotsc,d_\pi). \] Denoting $\Norm{\cdot}_{\text{op}}$ for the operator norm, we have \[ \frac{1}{d_\pi} \sum_{k,l=1}^{d_\pi} | \langle (T_\pi h_\pi) e_l^\pi, (T_\pi g_\pi) e_k^\pi \rangle |^2 = \frac{1}{d_\pi} \sum_{l=1}^{d_\pi} \Norm{ (T_\pi g_\pi)^* (T_\pi h_\pi) e_l^\pi }^2 \leq \Norm{ (T_\pi g_\pi)^* (T_\pi h_\pi) }_{\text{op}}^2 \] \[ \leq \Norm{ T_\pi g_\pi }_{\text{op}}^2 \Norm{ T_\pi h_\pi }_{\text{op}}^2 \leq \Norm{ T_\pi g_\pi }_{\mathcal{HS}}^2 \Norm{ T_\pi h_\pi }_{\mathcal{HS}}^2. \] Since $\Norm{g}^2 = \sum_{\pi \in \hat{K}} \Norm{ T_\pi g_\pi }_{\mathcal{HS}}^2$, there is some $M > 0$ such that $\Norm{ T_\pi g_\pi }_{\mathcal{HS}}^2 \leq M$ for all $\pi \in \hat{K}$. Hence, \[ \sum_{\pi \in \hat{K}} \frac{1}{d_\pi} \sum_{k,l=1}^{d_\pi} \left| \sum_{i=1}^{m_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \right|^2 \leq \sum_{\pi \in \hat{K}} \Norm{ T_\pi g_\pi }_{\mathcal{HS}}^2 \Norm{ T_\pi h_\pi }_{\mathcal{HS}}^2 \leq M \Norm{h}^2 < \infty. \] This proves \eqref{eq:multFrm1}. We continue by refining the expression on the right side of \eqref{eq:multFrm1} even further. For $\pi \in \hat{K}$ and $k \in \{1,\dotsc,d_\pi\}$, we claim that \begin{equation} \label{eq:multFrm2} \sum_{l=1}^{d_\pi} \left| \sum_{i=1}^{m_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \right|^2 = \Norm{ \sum_{i=1}^{m_\pi} \langle e_k^\pi, g_i^\pi \rangle h_i^\pi }^2. 
\end{equation} Indeed, we can write \[ \sum_{l=1}^{d_\pi} \left| \sum_{i=1}^{m_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \right|^2 = \sum_{l=1}^{d_\pi} \sum_{i=1}^{m_\pi} \sum_{j=1}^{m_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle g_i^\pi, e_k^\pi \rangle \langle h_j^\pi, e_l^\pi \rangle \langle e_k^\pi, g_j^\pi \rangle \] \[ = \sum_{i=1}^{m_\pi} \sum_{j=1}^{m_\pi} \langle g_i^\pi, e_k^\pi \rangle \langle e_k^\pi, g_j^\pi \rangle \sum_{l=1}^{d_\pi} \langle e_l^\pi, h_i^\pi \rangle \langle h_j^\pi, e_l^\pi \rangle = \sum_{i=1}^{m_\pi} \sum_{j=1}^{m_\pi} \langle g_i^\pi, e_k^\pi \rangle \langle e_k^\pi, g_j^\pi \rangle \langle h_j^\pi, h_i^\pi \rangle \] \[ = \sum_{i=1}^{m_\pi} \sum_{j=1}^{m_\pi} \langle \langle e_k^\pi, g_j^\pi \rangle h_j^\pi, \langle e_k^\pi, g_i^\pi \rangle h_i^\pi \rangle. \] Since $| \langle e_k^\pi, g_i^\pi \rangle |^2 \leq \Norm{g_i^\pi}^2$, one can show that $\sum_{i=1}^{m_\pi} \Norm{ \langle e_k^\pi, g_i^\pi \rangle h_i^\pi } \leq \Norm{ g_\pi} \Norm{ h_\pi } < \infty$. Hence the sum $\sum_{i=1}^{m_\pi} \langle e_k^\pi, g_i^\pi \rangle h_i^\pi$ converges in $\H_\pi$. That means we can move the sums inside the inner product above. This gives \eqref{eq:multFrm2}. Combining \eqref{eq:multFrm1} with \eqref{eq:multFrm2}, and letting $h$ run through $\mathscr{A}$, we obtain the critical identity \begin{equation} \label{eq:multFrm3} \sum_{j\in I} \int_K | \langle g, \rho(\xi) f_j \rangle |^2 d\xi = \sum_{\pi \in \hat{K}} \sum_{k=1}^{d_\pi} \frac{1}{d_\pi} \sum_{j\in I} \Norm{ \sum_{i=1}^{m_\pi} \langle e_k^\pi, g_i^\pi \rangle f_{i,j}^\pi }^2 \qquad (g \in \H_\rho). \end{equation} Meanwhile, \begin{equation} \label{eq:multFrm4} \Norm{g}^2 = \sum_{\pi \in \hat{K}} \sum_{k=1}^{d_\pi} \sum_{i=1}^{m_\pi} | \langle e_k^\pi, g_i^\pi \rangle |^2 \qquad (g \in \H_\rho). \end{equation} The rest of the proof comes easily. 
If $g \in V_J$, then $( \langle e_k^\pi, g_i^\pi \rangle )_{i=1}^{m_\pi} \in \ran T_\pi g_\pi \subset J(\pi)$ for every $\pi \in \hat{K}$ and every $k \in \{1,\dotsc,d_\pi\}$. Thus, (ii) implies (i). Now assume (i) holds. Fix $\pi \in \hat{K}$, and let $(c_i)_{i=1}^{m_\pi} \in J(\pi)$ be arbitrary. Define $g \in \H_\rho$ by \[ g_i^\sigma = \begin{cases} \overline{c_i} e_1^\pi, & \text{ if } \sigma = \pi \\ 0, & \text{ if }\sigma \neq \pi \end{cases} \qquad (\sigma \in \hat{K},\ 1 \leq i \leq m_\sigma ). \] Then \[ \Norm{g}^2 = \sum_{i=1}^{m_\pi} |c_i|^2, \] while \eqref{eq:multFrm3} gives \[ \sum_{j\in I} \int_K | \langle g, \rho(\xi) f_j \rangle |^2 d\xi = \frac{1}{d_\pi} \sum_{j\in I} \Norm{ \sum_{i=1}^{m_\pi} c_i f_{i,j}^\pi }^2. \] Since $\ran (Tg)(\sigma) \subset J(\sigma)$ for each $\sigma \in \hat{K}$, (i) applies to tell us that \[ A \sum_{i=1}^{m_\pi} | c_i |^2 \leq \frac{1}{d_\pi} \sum_{j\in I} \Norm{ \sum_{i=1}^{m_\pi} c_i f_{i,j}^\pi }^2 \leq B \sum_{i=1}^{m_\pi} |c_i|^2. \] This is (ii). \end{proof} \medskip \begin{cor} \label{cor:rowSum} If $E(\mathscr{A})$ is a continuous frame for $S(\mathscr{A})$, then every row of $\mathscr{A}(\pi)$ belongs to $\H_\pi^{\oplus I}$, for every $\pi \in \hat{K}$. \end{cor} \begin{proof} Fix $\pi \in \hat{K}$. Let $i_0 \in \{1,\dotsc, m_\pi\}$ when $m_\pi < \infty$ and $i_0 \in \mathbb{N}$ when $m_\pi = \infty$. Denote $\delta_{i_0} \in \ell^2_{m_\pi}$ for the vector with a $1$ in the $i_0$-th coordinate and $0$ in all others, and let $J$ be the range function given by \[ J(\sigma) = \begin{cases} \spn\{\delta_{i_0}\}, & \text{if }\sigma = \pi \\ \{0\}, & \text{if } \sigma \neq \pi. \end{cases} \] Then $V_J$ is the $i_0$-th summand of $\H_\pi^{\oplus m_\pi} \subset \H_\rho$. Let $P\colon S(\mathscr{A}) \to V_J$ be the restriction to $S(\mathscr{A})$ of the orthogonal projection $\H_\rho \to V_J$. 
Since $V_J$ is an invariant subspace of $\H_\rho$, $P$ commutes with $\rho(\xi)$ for every $\xi \in K$, and the range of $P$ is an invariant subspace of $V_J$. Since $V_J$ is irreducible, one of two things must happen: either the range of $P$ is zero, or it is all of $V_J$. In the former case, we have $f_{i_0,j}^\pi = 0$ for all $j \in I$, so that the $i_0$-th row of $\mathscr{A}(\pi)$ equals $0 \in \H_\pi^{\oplus I}$. In the latter case, $\{P \rho(\xi) f_j \}_{j \in I, \xi \in K} = \{ \rho(\xi) P f_j \}_{j\in I, \xi \in K}$ is a continuous frame for $V_J$. Say the upper bound is $B > 0$. Applying Theorem \ref{thm:multFrm} with $\delta_{i_0}$ in place of $(c_i)_{i=1}^{m_\pi}$, we find that \[ \sum_{j\in I} \Norm{ f_{i_0,j}^\pi }^2 \leq B d_\pi < \infty. \] Thus, $(f_{i_0,j}^\pi)_{j\in I} \in \H_\pi^{\oplus I}$. \end{proof} The work above assumes that $\mathscr{A}$ is countable and that our continuous frames $\{ \rho(\xi) f_j\}_{j \in I, \xi \in K}$ are taken over the measure space $I \times K$, where $I$ is equipped with counting measure. We have imposed this assumption only for the sake of clarity. Our arguments work just as well (with obvious modifications) if we replace $I$ with a $\sigma$-finite measure space $(X,\mu)$, and allow $\mathscr{A} = \{f_x\}_{x\in X}$ to be a possibly uncountable family of vectors. We have to assume, however, that the mapping $x \mapsto f_x$ is weakly measurable from $X$ to $\H_\rho$. We also have to replace the direct sum $\H_\pi^{\oplus I}$ with the direct \emph{integral} $\int_X^\oplus \H_\pi$. (See \cite[\S 7.4]{F} for a definition.) A standard measurability argument, which we omit, proves that the mapping $X \times K \to \H_\rho$ given by $(x,\xi) \mapsto \rho(\xi) f_x$ is weakly measurable. We denote $E(\mathscr{A})$ for this mapping. As in the countable case, we write $f_x = (f_x^\pi)_{\pi \in \hat{K}}$ with $f_x^\pi = (f_{i,x}^\pi)_{i=1}^{m_\pi} \in \H_\pi^{\oplus m_\pi}$ and $f_{i,x}^\pi \in \H_\pi$. 
Strictly speaking, \[ \mathscr{A}(\pi) := (f_{i,x}^\pi)_{1 \leq i \leq m_\pi, x \in X} \] is no longer a matrix, but a sequence of mappings $X \to \H_\pi$, each given by $x \mapsto f_{i,x}^\pi$ for some $i$. For the sake of analogy, we will still call these mappings \emph{rows} of $\mathscr{A}(\pi)$. Then we have the following results. \begin{theorem} \label{thm:multFrmInt} Let $J$ be a range function in $\{ \ell^2_{m_\pi} \}_{\pi \in \hat{K}}$, and assume that $\mathscr{A} \subset V_J$. For constants $A$ and $B$ with $0 < A \leq B < \infty$, the following are equivalent. \begin{enumerate}[(i)] \item $E(\mathscr{A})$ is a continuous frame for $V_J$ with bounds $A,B$. That is, \[ A \Norm{g}^2 \leq \int_X \int_K | \langle g, \rho(\xi) f_x \rangle |^2\, d\xi\, d x \leq B \Norm{g}^2 \qquad (g \in V_J). \] \item For every $\pi \in \hat{K}$ and every sequence $(c_i)_{i=1}^{m_\pi} \in J(\pi) \subset \ell^2_{m_\pi}$, \[ d_\pi A \sum_{i=1}^{m_\pi} |c_i|^2 \leq \int_X \Norm{ \sum_{i=1}^{m_\pi} c_i f_{i,x}^\pi }^2 dx \leq d_\pi B \sum_{i=1}^{m_\pi} |c_i|^2. \] \end{enumerate} \end{theorem} \begin{cor} \label{cor:multFrmInt} The following are equivalent for constants $A$ and $B$ with $0 < A \leq B < \infty$. \begin{enumerate}[(i)] \item $E(\mathscr{A})$ is a continuous frame for $\H_\rho$ with bounds $A,B$. \item For every $\pi \in \hat{K}$, the ``rows'' of $\mathscr{A}(\pi)$ belong to $\int_X^\oplus \H_\pi$, where they form a Riesz sequence with bounds $d_\pi A,d_\pi B$. \end{enumerate} \end{cor} We end with an application. Remember that $\mathcal{M}_\pi \subset \H_\rho$ denotes the isotypical component of $\pi\in \hat{K}$ in $\rho$. In terms of our decomposition of $\H_\rho$, $\mathcal{M}_\pi$ is the summand $\H_\pi^{\oplus m_\pi} \subset \H_\rho$. We write $P_\pi$ for the orthogonal projection of $\H_\rho$ onto $\mathcal{M}_\pi$. The result below generalizes Corollary \ref{cor:compFrm} to frames with multiple generators. 
It is a trivial consequence of Corollary \ref{cor:multFrmInt}. \begin{cor} \label{cor:multCompFrm} Let $\mathscr{A}$ be as described in the paragraph above Theorem \ref{thm:multFrmInt}. The following are equivalent for constants $A$ and $B$ with $0 < A \leq B < \infty$. \begin{enumerate}[(i)] \item $E(\mathscr{A})$ is a continuous frame for $\H_\rho$ with bounds $A,B$. \item For each $\pi \in \hat{K}$, the mapping $X \times K \to \mathcal{M}_\pi$ given by $(x,\xi) \mapsto \rho(\xi) P_\pi f_x$ is a continuous frame for $\mathcal{M}_\pi$ with bounds $A,B$. \end{enumerate} \end{cor} \medskip \section{Acknowledgements} The author thanks the following people for insightful comments and conversations: Marcin Bownik, Eusebio Gardella, Eugenio Hern{\'a}ndez, John Jasper, Peter Luthy, Azita Mayeli, Chris Phillips, Ken Ross, and Shayne Waldron. Extra thanks go to Marcin Bownik and Ken Ross, who both read the manuscript and gave helpful suggestions. This research was supported in part by NSF grant DMS-1265711. \bibliographystyle{abbrv}