\section*{Introduction}
Topological semimetals (TSs) form an outstanding group of materials characterized by perfectly linear dispersion of some bulk electronic states.\cite{Hasan2017,Armitage2017a} Depending on the presence or absence of Lorentz invariance, one distinguishes between type-I and type-II systems.\cite{Yan2017a} In the latter class of TSs, the Dirac cone is strongly tilted with respect to the Fermi level.\cite{Soluyanov2015a} Both type-I and type-II TSs can host either Dirac or Weyl fermions, depending on which symmetries are preserved.\cite{Yang2014a} The nontrivial electronic structures of TSs give rise to unusual electronic transport properties, commonly considered highly promising for various electronic devices of a new kind. A hallmark feature of TSs is the chiral magnetic anomaly (CMA), which manifests itself as a negative magnetoresistance (MR) observed when electric and magnetic fields are collinear.\cite{Nielsen1983} Indeed, negative MR has been found in many TSs, more often in type-I materials \cite{Li2015c,Li2016,Hirschberger2016a,Niemann2016a,Xiong2015} but also in a few type-II systems.\cite{Lv2017} Another smoking-gun signature of TSs is the existence of topological surface states, which take the form of Fermi arcs.\cite{Hasan2017,Armitage2017a} Their presence has been confirmed experimentally in a large number of TSs, among them Cd$_3$As$_2$,\cite{Wang2016m} MoTe$_2$,\cite{Deng2016} WTe$_2$\cite{Li2017b} and TaAs.\cite{Lv2015}
Type-II topological semimetallic states have been revealed in several transition metal dichalcogenides and in MA$_3$ (M = V, Nb, Ta; A = Al, Ga, In) icosagenides.\cite{Chang2017a,Yan2017a,Zhang2017e,Deng2016,Soluyanov2015a,Autes2016,Huang2016b}
Within the former group of compounds, MoTe$_2$ and WTe$_2$ have been classified as type-II Weyl semimetals,\cite{Deng2016,Bruno2016} whereas platinum and palladium dichalcogenides have been established via angle-resolved photoemission spectroscopy (ARPES) experiments to represent the family of type-II Dirac semimetals.\cite{Yan2017a,Zhang2017e,Noh2017} The TS nature of PdTe$_2$ is reflected in its peculiar magnetotransport behavior.\cite{Wang2016i,Fei2017} In turn, no comprehensive study of the electronic transport properties of the Pt-bearing counterpart has been reported in the literature to date. In this work, we investigated the galvanomagnetic properties of PtTe$_2$ with the main aim of discerning features appearing due to the alleged nontrivial topology of its electronic band structure.
\section*{Results and discussion}
\subsection*{Electrical resistivity, magnetic field-induced plateau and magnetoresistance}
Figure~\ref{rho(T)}a shows the results of electrical resistivity, $\rho$, measurements performed on a single-crystalline sample of PtTe$_2$, as a function of temperature, $T$, with electric current, $i$, flowing within the hexagonal $a-b$ plane. The overall behavior of $\rho(T)$ indicates a metallic character of the compound. The resistivity decreases from $24.09\,\mu\Omega\,\rm{cm}$ at $T=300$\,K to $0.17\,\mu\Omega\,\rm{cm}$ at $T=2$\,K, yielding a residual resistivity ratio RRR = $\rho(300\,\rm{K})/\rho(2\,\rm{K})$ equal to 142. The magnitudes of both $\rho(2\,\rm{K})$ and RRR indicate high crystallinity of the specimen measured (RRR is approximately 5 times larger than that reported for PtTe$_2$ in Ref.\cite{Zhang2017e}). As displayed in Fig.~\ref{rho(T)}a (note the red solid line), in the whole temperature range covered, $\rho(T)$ can be very well approximated with the Bloch-Gr{\"u}neisen (BG) law:
\begin{equation}
\rho(T)=\rho_0+A\Bigg(\frac{T}{\varTheta_{\rm D}}\Bigg)^k\int_{0}^{\frac{\varTheta_{\rm D}}{T}} \frac{x^k}{(e^x-1)(1-e^{-x})} dx,
\label{BG_eq}
\end{equation}
where $\rho_0$ is the residual resistivity, accounting for the scattering of conduction electrons on crystal imperfections, while the second term represents electron-phonon scattering ($\varTheta_{\rm D}$ stands for the Debye temperature). The least-squares fitting yielded the parameters $\rho_0=0.17\,\mu\Omega\,\rm{cm}$, $\varTheta_{\rm D}=253$\,K, $A=11.6\,\mu\Omega\,\rm{cm}$ and $k=3.23$. The value of $k$ is smaller than $k=5$, expected for simple metals, yet similar to the $k$ exponents determined for several monopnictides.\cite{Sun2016a,Pavlosiuk2017,Pavlosiuk2017b}
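The BG law above can be evaluated numerically by direct quadrature. The following sketch (assuming SciPy is available; prefactor conventions for $A$ vary between authors, so only the qualitative shape is claimed) uses the fit parameters quoted in the text:

```python
import numpy as np
from scipy.integrate import quad

# Fit parameters quoted in the text (resistivities in micro-ohm cm, temperatures in K)
RHO0, THETA_D, A, K = 0.17, 253.0, 11.6, 3.23

def rho_bg(T):
    """Bloch-Gruneisen resistivity: residual term plus electron-phonon integral."""
    integral, _ = quad(
        lambda x: x**K / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x))),
        0.0, THETA_D / T)
    return RHO0 + A * (T / THETA_D)**K * integral
```

At low temperature the phonon term is negligible and $\rho\to\rho_0$, while $\rho(T)$ grows monotonically with $T$, as in Fig.~\ref{rho(T)}a.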
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{Fig1.eps}
\caption{\textbf{Electrical resistivity of PtTe$_2$.} (a) Temperature dependence of the electrical resistivity measured in zero magnetic field with electric current $i$ confined within the $a-b$ plane of the crystallographic unit cell. Solid red curve represents the result of fitting the BG law to the experimental data. (b) Low-temperature resistivity ($i \perp c$) measured as a function of temperature in transverse magnetic field ($B \parallel c$).
\label{rho(T)}}
\end{figure}
The temperature dependencies of the electrical resistivity of PtTe$_2$ measured in transverse magnetic field ($i \perp c$ axis and $B \parallel c$ axis) are gathered in Fig.~\ref{rho(T)}b. In non-zero $B$, $\rho(T)$ is a non-monotonic function of temperature, showing an upturn below a certain, slightly field-dependent temperature $T^*$, and then forming a plateau at the lowest temperatures. With increasing magnetic field, the magnitude of the resistivity in the turn-on and plateau regions distinctly increases. Similar behavior has been considered a fingerprint of nontrivial topology in the electronic structure of several TSs.\cite{Shekhar2015b,Li2016f,Singha2016a,Hosen2017} Another possible explanation of this type of magnetic-field-governed change in $\rho(T)$ is a metal-insulator transition.\cite{Zhao2015,Li2016f,Du2005} However, as discussed first for WTe$_2$,\cite{Wang2015d} and afterwards, e.g., for rare-earth monopnictides,\cite{Zeng2016,Niu2016,Pavlosiuk2016f,Pavlosiuk2017,Pavlosiuk2017b,Kumar2016,Sun2016a,Ghimire2016,Han2017a,Xu2017f} the magnetic-field-induced upturn in $\rho(T)$ may also appear in trivial semimetals that are close to perfect charge-carrier compensation. We presume that the latter mechanism is also fully appropriate for the electrical transport behavior of PtTe$_2$.
In order to examine the actual nature of the galvanomagnetic behavior observed for PtTe$_2$, transverse magnetoresistance, MR = $[\rho(B)-\rho(B=0)]/\rho(B=0)$, measurements were performed at several constant temperatures in the configuration $i \perp c$ axis and $B \parallel c$ axis. As can be inferred from Fig.~\ref{mr(B)}a, in $B=9$\,T, MR taken at $T=1.8$\,K achieves a giant value of $3060\%$, which is an order of magnitude larger than the MR determined under the same conditions for the related system PdTe$_2$,\cite{Wang2016i} and almost equal to that reported for the type-II Weyl semimetal MoTe$_2$.\cite{Chen2016a} With increasing temperature, MR measured in $B=9$\,T does not change significantly up to about 10\,K (i.e., in the plateau region of $\rho(T)$), and then decreases rapidly. However, even at $T$ = 150\,K, MR remains exceptionally large, exceeding $500\%$ in 9~T. At each of the temperatures studied, MR shows no tendency towards saturation in strong fields. MR behavior similar to that of PtTe$_2$ was established before for several TSs.\cite{Wang2016i,Chen2016a,Shekhar2015b,Singha2016a,Kumar2017c,Wang2016l}
However, unsaturated MR can also be attributed to perfect or almost perfect carrier compensation in a semimetallic material.\cite{Ziman1972}
Remarkably, as demonstrated in Fig.~\ref{mr(B)}b, the MR isotherms of PtTe$_2$ obey Kohler's rule in the entire temperature range studied. This finding rules out the scenario of a metal-insulator transition as a possible mechanism of the magnetic-field-driven changes in the electrical transport of PtTe$_2$. The MR data collapse onto a single curve, which can be approximated by the expression MR $\propto(B/\rho_0)^m$ with the exponent $m=1.69$ (note the red solid line in Fig.~\ref{mr(B)}b). This value is smaller than $m=2$ expected for materials with perfect electron-hole balance, and also smaller than $m=1.92$ reported for WTe$_2$,\cite{Wang2015d} yet it is similar to the exponents found for some monopnictides, which were reported to be trivial semimetals fairly close to charge compensation.\cite{Pavlosiuk2016f,Pavlosiuk2017,Han2017a}
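The Kohler analysis amounts to a straight-line fit in log-log coordinates: isotherms obeying MR $\propto(B/\rho_0)^m$ collapse onto one curve, and the slope of $\log(\rm{MR})$ versus $\log(B/\rho_0)$ returns the exponent $m$. A minimal sketch on synthetic isotherms (the prefactor and the $\rho_0$ values are purely illustrative):

```python
import numpy as np

def kohler_exponent(B, mr, rho0):
    """Slope of log(MR) vs log(B/rho0), i.e. the Kohler exponent m."""
    slope, _ = np.polyfit(np.log(B / rho0), np.log(mr), 1)
    return slope

B = np.linspace(1.0, 9.0, 50)  # field range of the experiment (T)
# Synthetic isotherms obeying Kohler's rule with m = 1.69 as quoted in the text;
# each hypothetical temperature has its own residual resistivity rho0.
isotherms = {rho0: 1e-4 * (B / rho0)**1.69 for rho0 in (0.17, 0.5, 2.0)}
```

In practice one fits the measured MR($B$) curves at each temperature the same way and checks that all slopes agree.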
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{Fig2.eps}
\caption{\textbf{Magnetotransport properties of PtTe$_2$.} (a) Magnetoresistance measured with $i \perp c$ and $B \parallel c$ at several constant temperatures. (b) Kohler's plot of the magnetoresistance data. The red solid line is the result of fitting Kohler's equation to the experimental data.
\label{mr(B)}}
\end{figure}
\subsection*{Quantum oscillations}
In order to characterize the Fermi surface of PtTe$_2$, we investigated quantum oscillations in $\rho(B)$ (Shubnikov--de Haas (SdH) effect) at a few different temperatures. Fig.~\ref{SdH_analysis}a shows the oscillatory part of the electrical resistivity, $\Delta\rho$, obtained by subtracting a second-order polynomial from the experimental data, plotted as a function of reciprocal magnetic field, $1/B$. As can be inferred from this figure, the SdH oscillations remain discernible at temperatures up to at least $15\,$K; however, their amplitudes systematically decrease with increasing temperature. Fast Fourier transform (FFT) analysis, the results of which are presented in Fig.~\ref{SdH_analysis}b, discloses four features at the oscillation frequencies $f_i^{\rm{FFT}}$ ($i$ represents the Fermi surface pocket label). The most prominent peak occurs at $f_{\alpha}^{\rm{FFT}}=108$\,T, and the corresponding Fermi surface pocket will hereinafter be labeled $\alpha$. The next feature occurs at $f_{2\alpha}^{\rm{FFT}}=215$\,T and is the second harmonic of $f_{\alpha}^{\rm{FFT}}$. Then, the peak with its maximum at $f_{\beta}^{\rm{FFT}}=246$\,T can be attributed to another Fermi surface pocket, labeled $\beta$ in what follows. Eventually, the very weak maximum centered at $f_{3\alpha}^{\rm{FFT}}=325$\,T likely arises as the third harmonic of $f_{\alpha}^{\rm{FFT}}$. It is worthwhile noting that the FFT spectrum of PtTe$_2$ is very similar to that reported for PdTe$_2$ in Ref.\cite{Wang2016i} but differs from the FFT data shown for the same compound in Ref.\cite{Fei2017} Using the Onsager relation $f_i^{\rm{FFT}} = (\hbar/2\pi e)S$, where $S$ stands for the extremal cross-sectional area of the Fermi surface, one finds for the two pockets in PtTe$_2$ the values $S_{\alpha}=1.03\times10^{-2}{\rm\AA^{-2}}$ and $S_{\beta}=2.34\times10^{-2}{\rm\AA^{-2}}$.
Assuming circular cross-sections, the corresponding Fermi wave vectors are $k_{\rm{F},\alpha}=5.73\times10^{-2}{\rm\AA^{-1}}$ and $k_{\rm{F},\beta}=8.63\times10^{-2}{\rm\AA^{-1}}$. Then, if we assume that both Fermi surface pockets are spherical, which is a rather crude approximation, the carrier densities in these two pockets are $n_{\alpha}=6.35\times10^{18}$\,cm$^{-3}$ and $n_{\beta}=2.17\times10^{19}$\,cm$^{-3}$.
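The chain from an SdH frequency to pocket size and carrier density uses only the Onsager relation plus the circular/spherical-pocket assumptions stated above: $S = (2\pi e/\hbar) f$, $k_{\rm F} = \sqrt{S/\pi}$ and $n = k_{\rm F}^3/3\pi^2$. A short sketch reproducing the quoted $\alpha$-pocket numbers:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
E = 1.602176634e-19      # elementary charge, C

def fermi_pocket(f_T):
    """From an SdH frequency f (tesla): extremal cross-section S (A^-2),
    Fermi wave vector k_F (A^-1, circular cross-section assumed) and
    carrier density n (cm^-3, spherical pocket assumed)."""
    S = 2.0 * math.pi * E * f_T / HBAR      # m^-2 (Onsager relation)
    k_F = math.sqrt(S / math.pi)            # m^-1
    n = k_F**3 / (3.0 * math.pi**2)         # m^-3
    return S * 1e-20, k_F * 1e-10, n * 1e-6  # convert to A^-2, A^-1, cm^-3
```

For $f_{\alpha}=108$\,T this gives $S\approx1.03\times10^{-2}$\,\AA$^{-2}$, $k_{\rm F}\approx5.73\times10^{-2}$\,\AA$^{-1}$ and $n\approx6.35\times10^{18}$\,cm$^{-3}$, matching the values in the text.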
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{Fig3.eps}
\caption{\textbf{Shubnikov-de Haas effect in PtTe$_2$.} (a) Oscillating part of the electrical resistivity measured at several different temperatures with $i \perp c$ axis and $B \parallel c$ axis, plotted as a function of inverse magnetic field. (b) Fast Fourier transform spectrum obtained from the analysis of the data presented in panel (a). Inset shows the temperature dependencies of the amplitudes of two principal frequencies in the FFT spectra. Solid lines represent the fits of Eq.~\ref{LK_eq} to the experimental data.
\label{SdH_analysis}}
\end{figure}
The electronic structure calculated for PtTe$_2$\cite{Yan2017a,Zhang2017e} comprises three bands crossing the Fermi level: one hole-like band, located at the center of the Brillouin zone, and two electron-like bands. The fact that we observed only two principal frequencies experimentally is probably due to the very small size of the third Fermi surface pocket, or might arise from a somewhat lower position of the Fermi level in the single crystal studied. From a comparison of the values of $k_{\rm{F},i}$ obtained from the FFT analysis with the sizes of the calculated electronic bands,\cite{Yan2017a} one may presume that the $\alpha$ Fermi surface pocket corresponds to one of the electron-like bands, and the $\beta$ Fermi surface pocket represents the hole-like band.
The inset to Fig.~\ref{SdH_analysis}b displays the temperature variations of the FFT amplitudes, $R_i(T)$, corresponding to the $\alpha$ and $\beta$ Fermi surface pockets of PtTe$_2$. The gradual damping of both oscillations with rising temperature can be described by the formula:\cite{Shoenberg1984}
\begin{equation}
R_i(T)\propto(\lambda m^*_iT/B_{\rm{eff}})/\sinh(\lambda m^*_iT/B_{\rm{eff}}),
\label{LK_eq}
\end{equation}
where $m^*_i$ is the effective cyclotron mass of charge carriers, $B_{\rm{eff}}= 4.5$\,T was calculated as $B_{\rm{eff}}=2/(1/B_1+1/B_2)$ ($B_1=3$\,T and $B_2=9$\,T are the borders of the magnetic field window in which the FFT analysis was performed), and $\lambda=14.7\;$T/K was obtained from the relationship $\lambda=2\pi^2k_{\rm B}m_{\rm e}/e\hbar$ ($m_{\rm e}$ stands for the free electron mass, and $k_{\rm B}$ is the Boltzmann constant). The fitting shown in the inset to Fig.~\ref{SdH_analysis}b yielded $m^*_\alpha=\,0.11m_e$ and $m^*_\beta=\,0.21m_e$. It is worth noting that the effective mass determined for the $\alpha$ Fermi surface pocket in PtTe$_2$ is almost equal to that reported for the Pd-bearing counterpart.\cite{Wang2016i}
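The constants entering the damping formula can be checked directly; the sketch below (SI constants, names illustrative) reproduces $\lambda\approx14.7$\,T/K and $B_{\rm eff}=4.5$\,T, and evaluates the thermal damping factor used in the mass fits:

```python
import math

# CODATA constants (SI units)
KB = 1.380649e-23        # Boltzmann constant, J/K
ME = 9.1093837015e-31    # free electron mass, kg
E = 1.602176634e-19      # elementary charge, C
HBAR = 1.054571817e-34   # reduced Planck constant, J s

lam = 2.0 * math.pi**2 * KB * ME / (E * HBAR)   # ~14.7 T/K
B_eff = 2.0 / (1.0 / 3.0 + 1.0 / 9.0)           # harmonic mean of 3 T and 9 T

def damping(T, m_star):
    """Thermal damping factor X/sinh(X) with X = lam*m_star*T/B_eff;
    m_star in units of the free electron mass, T in kelvin."""
    x = lam * m_star * T / B_eff
    return x / math.sinh(x)
```

Fitting this factor to the measured $R_i(T)$, with $m^*_i$ as the only free parameter, is what yields the effective masses quoted in the text.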
In the next step, we attempted to determine the phase shift, $\varphi_i$, of the SdH oscillations, which is directly related to the Berry phase, $\varphi_{\rm{B},i}$, of the carriers involved. A few methods of extracting $\varphi_i$ are known, and the proper interpretation of its values remains debatable.\cite{Ando2013,Wang2016k,Li2018d,Taskin2011b} The most reliable approach is direct fitting of the Lifshitz-Kosevich (LK) function to the experimental data:\cite{Shoenberg1984}
\begin{equation}
\Delta\rho \propto \frac{1}{\sqrt{B}}\sum_{i}\frac{p_i\lambda m^*_iT/B}{\sinh(p_i\lambda m^*_iT/B)}\exp(-p_i\lambda m^*_iT_{\rm{D,i}}/B)\cos\bigg(2\pi\big(p_if_i/B+\varphi_i\big)\bigg),
\label{full_LK_eq}
\end{equation}
where $p_i$ is the harmonic number, $f_i$ is the oscillation frequency, and $T_{\rm{D,i}}$ stands for the Dingle temperature. Fig.~\ref{SdH_LK_fit}a shows the result of fitting the LK function to the oscillating resistivity of PtTe$_2$ observed at $T=1.8$\,K (note the red solid line). As discussed above, at this temperature the FFT spectrum comprises three peaks, and thus the sum in Eq.~\ref{full_LK_eq} consists of three contributions. In order to reduce the total number of free parameters in this equation, the effective masses were fixed at the values obtained from Eq.~\ref{LK_eq} (see above). With this simplification, we obtained the parameters $f_\alpha$ = 108.1~T, $T_{\rm{D},\alpha}=9.1$\,K and $\varphi_\alpha=0.65$ for the $\alpha$ band, and $f_\beta$ = 246~T, $T_{\rm{D},\beta}=5$\,K and $\varphi_\beta=0.54$ for the $\beta$ band. Remarkably, the so-obtained values of $f_i$ are almost identical to those derived from the afore-described FFT analysis, confirming the internal consistency of the approach applied. Using the Dingle temperatures, the quantum relaxation times, $\tau_{q,i}$, were calculated from the relation $\tau_{q,i}=\hbar/(2\pi k_{\rm B}T_{\rm{D},i})$ to be $1.34\times10^{-13}$\,s and $2.43\times10^{-13}$\,s for the $\alpha$ and $\beta$ bands, respectively. Then, the quantum mobilities of the charge carriers, $\mu_{q,i}$, were estimated from the relationship $\mu_{q,i}=e\tau_{q,i}/m^*_i$ to be $2138\,\rm{cm^2V^{-1}s^{-1}}$ and $2038\,\rm{cm^2V^{-1}s^{-1}}$, respectively.
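The Dingle-temperature step is a two-line computation, sketched below with SI constants; it reproduces the quoted $\tau_q$ and $\mu_q$ values to within rounding:

```python
import math

KB = 1.380649e-23        # Boltzmann constant, J/K
E = 1.602176634e-19      # elementary charge, C
HBAR = 1.054571817e-34   # reduced Planck constant, J s
ME = 9.1093837015e-31    # free electron mass, kg

def quantum_tau_mu(T_D, m_star):
    """Quantum relaxation time (s) and quantum mobility (cm^2 V^-1 s^-1)
    from the Dingle temperature T_D (K) and effective mass m_star (in m_e)."""
    tau = HBAR / (2.0 * math.pi * KB * T_D)
    mu = E * tau / (m_star * ME) * 1e4   # m^2/Vs -> cm^2/Vs
    return tau, mu
```

For the $\alpha$ band ($T_{\rm D}=9.1$\,K, $m^*=0.11\,m_e$) this yields $\tau_q\approx1.34\times10^{-13}$\,s and $\mu_q\approx2.1\times10^{3}\,\rm{cm^2V^{-1}s^{-1}}$, as in Table~\ref{L_K_param}.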
The phase shift $\varphi_i$ in Eq.~\ref{full_LK_eq} is generally a sum $\varphi_i=-1/2+\varphi_{B,i}+\delta_i$, where $\delta_i$ represents a dimension-dependent correction to the phase shift.\cite{Li2018d} In the two-dimensional (2D) case, this parameter is zero, while in the three-dimensional (3D) case $\delta_i$ is equal to $\pm1/8$, with its sign depending on the type of charge carriers and the kind of cross-section extremum. Supposing that the SdH oscillations in PtTe$_2$ originate from 3D bands with carriers moving on their maximal orbits, one can set $\delta=-1/8$ for electrons and $\delta=1/8$ for holes. With this assumption, the Berry phases $\varphi_{B,\alpha}=0.55\pi$ and $\varphi_{B,\beta}=1.83\pi$ were obtained for the $\alpha$ and $\beta$ bands, respectively.
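Inverting the phase-shift relation (with $\varphi_{B}$ expressed in units of $2\pi$ inside $\varphi_i$, which is the convention consistent with the values quoted above) gives $\varphi_B/2\pi = \varphi + 1/2 - \delta \pmod 1$. A minimal helper, returning the Berry phase in units of $\pi$:

```python
def berry_phase(phi, delta):
    """Berry phase in units of pi from the LK phase shift phi and the
    dimensional correction delta (0 in 2D; -1/8 for electrons and +1/8
    for holes on maximal 3D orbits)."""
    return 2.0 * ((phi + 0.5 - delta) % 1.0)
```

With the fitted values this returns $0.55\pi$ for the $\alpha$ band ($\varphi=0.65$, $\delta=-1/8$) and $1.83\pi$ for the $\beta$ band ($\varphi=0.54$, $\delta=+1/8$).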
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{Fig4.eps}
\caption{\textbf{Lifshitz-Kosevich analysis of the magnetotransport in PtTe$_2$.} Oscillating part of the electrical resistivity measured at (a) $T=1.8$\,K and (b) $T=10$\,K, plotted as a function of inverse magnetic field. Solid red lines correspond to the results of fitting Eq.~\ref{full_LK_eq} to the experimental data.
\label{SdH_LK_fit}}
\end{figure}
To check the reliability of the LK analysis performed, Eq.~\ref{full_LK_eq} was also used to describe the experimental data measured at $T=10$\,K. At this temperature, just one peak in the FFT spectrum is discernible (see Fig.~\ref{SdH_analysis}b), which corresponds to the $\alpha$ Fermi surface pocket. The result of fitting the LK formula is presented in Fig.~\ref{SdH_LK_fit}b (note the red solid line), and the so-derived parameters are $f_\alpha$ = 108.1~T, $T_{\rm{D},\alpha}=12.6$\,K and $\varphi_{B,\alpha}=0.46\pi$. Notably, the agreement between the $f_\alpha$ values obtained at $T=10$\,K and $T=1.8$\,K is perfect. The value of $T_{\rm{D},\alpha}$ implies $\mu_q=1544\,\rm{cm^2V^{-1}s^{-1}}$. Clearly, with increasing temperature the Dingle temperature increases and, consequently, the quantum mobility of the charge carriers decreases, probably due to an increased scattering rate. In turn, the Berry phase of the $\alpha$ band was found to be almost independent of temperature. All the parameters obtained from the LK approach to the magnetotransport in PtTe$_2$ are gathered in Table~\ref{L_K_param}.
\begin{table}[h]
\centering
\caption{\textbf{Parameters extracted from the LK analysis of the SdH oscillations in PtTe$_2$}. $T$ - temperature; $f_{i}^{\rm{FFT}}$ - oscillation frequency obtained from the FFT analysis; $f_{i}$ - oscillation frequency obtained from Eq.~\ref{full_LK_eq}; $m^*$ - effective mass; $T_{\rm{D}}$ - Dingle temperature; $\tau_q$ - quantum relaxation time; $\mu_q$ - quantum mobility; $\varphi_{\rm{B}}$ - Berry phase.}
\begin{tabular*}{0.85\textwidth}{@{\extracolsep{\fill}}*{9}{c}} \hline\hline
$T$ & band & $f_{i}^{\rm{FFT}}$ & $f_i$ & $m^*$ & $T_{\rm{D}}$ & $\tau_q$ & $\mu_q$ & $\varphi_{\rm{B}}$ \\
(K) & & (T) & (T) & ($m_e$) & (K) & (s) & ($\rm{cm^2V^{-1}s^{-1}}$) & \\\hline
\multirow{2}{*}{1.8} & $\alpha$ & 108 & 108.1 & 0.11 & 9.1 & $1.34\times10^{-13}$ & 2138 & $0.55\pi$ \\
& $\beta$ & 246 & 246 & 0.21 & 5 & $2.43\times10^{-13}$ & 2038 & $1.83\pi$ \\
10 & $\alpha$ & 108 & 108.5 & 0.11 & 12.6 & $9.65\times10^{-14}$ & 1544 & $0.46\pi$ \\
\hline\hline
\end{tabular*}
\label{L_K_param}
\end{table}
Another commonly applied technique for Berry phase derivation uses Landau level (LL) fan diagrams. Though in the case of multi-frequency oscillations this method is hampered by a possible superposition of the quantum oscillation peaks, which hinders precise determination of the Landau level index for a given frequency,\cite{Hu2016} we made an attempt to construct the LL fan diagram for the $\alpha$ Fermi pocket of PtTe$_2$. As is apparent from Fig.~\ref{SdH_analysis}b, the FFT maximum occurring at $f_{\alpha}^{\rm{FFT}}=108$\,T is fairly well separated from the other FFT peaks, and hence this oscillation could be filtered with reasonably high accuracy. For PtTe$_2$ one finds $\rho > \rho_{xy}$ ($\rho_{xy}$ is the Hall resistivity discussed in the next section), and therefore the maxima in the oscillatory resistivity measured at $T=1.8$\,K (see Fig.~\ref{SdH_LK_fit}a) were numbered by integers, $n$, and the minima by half-integers, $n+1/2$. The result of this approach is shown in the main panel of Fig.~\ref{Berry}. A linear fit of the LL indices (note the solid line) gives an intercept of 0.62, which corresponds to the Berry phase $\varphi_{B,\alpha}=0.51\pi$. In turn, the slope of this straight line defines the oscillation frequency $f_\alpha= 108.3$~T.
\begin{figure}[h]
\centering
\includegraphics[width=8.5cm]{Fig5.eps}
\caption{\textbf{Landau level fan diagrams for PtTe$_2$.} The Landau level indices, $n$, determined for the $\alpha$ Fermi surface pocket at $T=1.8$\,K plotted as a function of inverse magnetic field. Inset shows the LL plot of the SdH oscillations measured at $T=10$\,K. Solid lines represent the linear fits to the LL data.
\label{Berry}}
\end{figure}
At $T=10$\,K, the electrical resistivity of PtTe$_2$ oscillates in the transverse magnetic field with only one frequency (cf. Fig.~\ref{SdH_analysis}b), so building the LL fan diagram is straightforward. As can be inferred from the inset to Fig.~\ref{Berry}, the LL indices plot yields an intercept of 0.63, which is almost the same as that obtained at the lower temperature. Also, the slope of the straight line ($f_\alpha= 108.3$~T) is identical to that determined at 1.8\,K, and furthermore very close to the FFT value (see Table~\ref{L_K_param}). Most importantly, all the parameters extracted from the LL indices plots are in excellent agreement with the quantities obtained for the $\alpha$ Fermi pocket from the LK analysis, which corroborates the correctness of both techniques applied to PtTe$_2$.
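The fan-diagram fit itself is an ordinary linear regression of the index $n$ against $1/B$. A sketch on synthetic extrema built from the values quoted above (the index range is hypothetical; in practice $n$ and $1/B$ come from the positions of the filtered resistivity maxima):

```python
import numpy as np

f_true, intercept_true = 108.3, 0.62   # slope and intercept quoted in the text
n = np.arange(14, 31)                  # hypothetical Landau-level index range
inv_B = (n - intercept_true) / f_true  # 1/B positions of the resistivity maxima

# Linear fit n = f*(1/B) + intercept: slope -> oscillation frequency,
# intercept -> phase (and hence the Berry phase).
slope, intercept = np.polyfit(inv_B, n, 1)
```

On exact synthetic data the fit returns the inputs; on measured indices the scatter of the points sets the uncertainty of the extracted frequency and phase.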
\subsection*{Hall effect}
The results of Hall resistivity measurements, performed on single-crystalline PtTe$_2$ with electric current flowing within the basal plane of the hexagonal unit cell and magnetic field applied along the $c$ axis, are shown in Fig.~\ref{Hall}a. At $T=2$\,K, $\rho_{xy}(B)$ behaves in a rather complex manner. In weak magnetic fields, it is negative and exhibits a shallow minimum. Near 2~T, the Hall resistivity changes sign to positive, and then its magnitude increases with increasing $B$. The $\rho_{xy}(B)$ isotherm measured at 25~K shows a fairly similar field variation, yet the positive contribution in strong fields remains too small to cause sign reversal. At higher temperatures, one observes a gradual straightening of the $\rho_{xy}(B)$ isotherms with rising $T$. The overall behavior of the Hall effect in PtTe$_2$ confirms the multi-band character of the electrical transport in this material. It is worth recalling that a very similar Hall response was observed for the closely related compound PdTe$_2$.\cite{Fei2017}
\begin{figure}[h]
\centering
\includegraphics[width=17cm]{Fig6.eps}
\caption{\textbf{Hall effect in PtTe$_2$}. (a) Magnetic field dependencies of the Hall resistivity measured at several different temperatures with $i \perp c$ axis and $B \parallel c$ axis. (b) Hall conductivity as a function of magnetic field at $T=2$\,K. The blue dashed line represents the fit with the two-band model, and the red solid line stands for the result obtained with the three-band model.
\label{Hall}}
\end{figure}
For a quantitative analysis of the experimental data, a two-band Drude model was applied first. For this purpose, $\rho_{xy}(B)$ measured at $T=2$\,K was converted to the Hall conductivity $\sigma_{xy}=\rho_{xy}/(\rho_{xy}^2+\rho^2)$, as displayed in Fig.~\ref{Hall}b. Next, $\sigma_{xy}(B)$ was fitted by the formula:
\begin{equation}
\sigma_{xy}(B)=eB\Bigg(\frac{n_{h} \mu_{h}^2}{1+(\mu_hB)^2}-\frac{n_{e1} \mu_{e1}^2}{1+(\mu_{e1}B)^2}\Bigg),
\label{two-band-sigma-model}
\end{equation}
where $n_{e1}$ and $\mu_{e1}$, and $n_h$ and $\mu_h$, stand for the carrier concentrations and the carrier mobilities of the electron- and hole-like bands, respectively. As can be inferred from Fig.~\ref{Hall}b, the so-obtained approximation of the measured $\sigma_{xy}(B)$ data (note the blue dashed line) is not ideal. An obvious reason for the discrepancy between the experiment and the two-band model could be a contribution from another band, the presence of which was revealed in ab-initio calculations of the electronic structure of PtTe$_2$.\cite{Zhang2017e,Yan2017a}
Therefore, in the next step, the Hall conductivity was analyzed in terms of a three-band model:
\begin{equation}
\sigma_{xy}(B)=eB\Bigg(\frac{n_{h} \mu_{h}^2}{1+(\mu_hB)^2}-\frac{n_{e1} \mu_{e1}^2}{1+(\mu_{e1}B)^2}-\frac{n_{e2} \mu_{e2}^2}{1+(\mu_{e2}B)^2}\Bigg),
\label{three-band-sigma-model}
\end{equation}
where $n_{e2}$ and $\mu_{e2}$ account for the carrier concentration and the carrier mobility, respectively, of the second electron-like band in PtTe$_2$. The result of fitting Eq.~\ref{three-band-sigma-model} to the experimental $\sigma_{xy}(B)$ data is shown as the red solid line in Fig.~\ref{Hall}b. Clearly, the obtained description is much better than that with the two-band model.
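A generic multi-band Hall conductivity can be sketched as follows (electron terms entering $\sigma_{xy}$ with a negative sign, which is what allows the model to reproduce the sign change of the Hall response; parameter packaging is illustrative). With the three-band parameters of Table~\ref{Hall_parameters}, the model is electron-dominated (negative) in weak fields and hole-dominated (positive) in strong fields, mirroring the measured $\rho_{xy}(B)$ at 2\,K:

```python
import numpy as np

E = 1.602176634e-19  # elementary charge, C

def sigma_xy(B, bands):
    """Multi-band Drude Hall conductivity. bands is a list of tuples
    (n, mu, sign) with n in cm^-3, mu in cm^2/Vs, sign +1 for holes
    and -1 for electrons; B in tesla."""
    B = np.asarray(B, dtype=float)
    out = np.zeros_like(B)
    for n, mu, sign in bands:
        n_si, mu_si = n * 1e6, mu * 1e-4   # convert to m^-3 and m^2/Vs
        out += sign * E * B * n_si * mu_si**2 / (1.0 + (mu_si * B)**2)
    return out

# Three-band parameters from the Hall analysis (n_h, n_e1, n_e2 and mobilities)
bands = [(7.91e20, 4740.0, +1), (3.8e20, 19240.0, -1), (3.82e20, 2564.0, -1)]
```

In a fit, the same function would be passed to a least-squares routine with the six band parameters free.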
\begin{table}[h]
\centering
\caption{\textbf{Parameters obtained for PtTe$_2$ from the multi-band model analyses of the Hall effect data.} $n_h$ - hole concentration; $\mu_h$ - hole mobility; $n_{e1}$ and $n_{e2}$ - electron concentrations; $\mu_{e1}$ and $\mu_{e2}$ - electron mobilities.}
\begin{tabular*}{0.85\textwidth}{@{\extracolsep{\fill}}*{7}{c}} \hline\hline
Model & $n_h$ & $\mu_h$& $n_{e1}$ & $\mu_{e1}$ & $n_{e2}$ & $\mu_{e2}$ \\
& (cm$^{-3}$) & ($\rm{cm^2V^{-1}s^{-1}}$) & (cm$^{-3}$) & ($\rm{cm^2V^{-1}s^{-1}}$) & (cm$^{-3}$) & ($\rm{cm^2V^{-1}s^{-1}}$) \\\hline
two-band & $5.99\!\times\!10^{20}$ & 7179 & $5.05\!\times\!10^{20}$ & 17373 & - & -\\
three-band & $7.91\!\times\!10^{20}$ & 4740 & $3.8\!\times\!10^{20}$ & 19240 & $3.82\!\times\!10^{20}$ & 2564 \\
\hline\hline
\end{tabular*}
\label{Hall_parameters}
\end{table}
The fitting parameters derived in the two approaches are listed in Table~\ref{Hall_parameters}. Both models yielded large carrier concentrations, of the order of $10^{20}\,\rm{cm^{-3}}$. It is worth noting that very similar charge densities were found in the Dirac semimetal PtBi$_2$.\cite{Gao2017} On the contrary, for the type-II Weyl semimetal WTe$_2$, carrier concentrations up to two orders of magnitude larger than those in PtTe$_2$ were reported.\cite{Wang2017b} As regards the level of carrier compensation, the two-band model yielded a considerable charge imbalance, given by the ratio $n_h/n_{e1}=1.19$, whereas the three-band model led to a fairly balanced scenario, $n_h/(n_{e1}+n_{e2})=1.04$. Recently, a similar degree of electron-hole compensation was established, e.g., in the semimetallic monobismuthides YBi and LuBi.\cite{Pavlosiuk2017b} The mobilities of the charge carriers in PtTe$_2$ were found to be very high, especially that obtained for one of the electron-like Fermi surface pockets ($\mu_{e1} \sim2\!\times\!10^4\,\rm{cm^2V^{-1}s^{-1}}$). Though the latter value is not as large as the carrier mobilities in Cd$_3$As$_2$ (Ref.\cite{Liang2014}) or NbP (Ref.\cite{Shekhar2015b}), it exceeds the values reported for the type-II Weyl semimetals MoTe$_2$,\cite{Chen2016a} WTe$_2$,\cite{Luo2015a} and WP$_2$.\cite{Wang2017b} It is worth noting that the carrier mobilities derived from the Hall effect data are larger than the quantum mobilities determined in the analyses of the SdH oscillations. A similar finding was reported for other TSs, like Cd$_3$As$_2$,\cite{Liang2014} ZrSiS,\cite{Hu2017b} WP$_2$,\cite{Kumar2017c} and PtBi$_2$.\cite{Gao2017}
The discrepancy likely arises due to the fact that the quantum mobility is affected by all possible scattering processes, whereas the Hall mobility is sensitive to small-angle scattering only.\cite{Shoenberg1984}
\subsection*{Angle-dependent magnetoresistance}
In order to check whether PtTe$_2$ demonstrates CMA, angle-dependent magnetotransport measurements were performed at $T$ = 2~K. In these experiments, the electric current was always flowing within the hexagonal $a-b$ plane, while the angle $\theta$ between the current and the magnetic field direction was varied from $\theta=90^{\circ}$ ($B \perp i$) to $\theta=0^{\circ}$ ($B \parallel i$). As can be inferred from Fig.~\ref{rho_rotator}, the electrical resistivity rapidly decreases upon deviating from the transverse configuration, and eventually, in the longitudinal geometry, $\rho$ measured in $B$ = 9~T is about an order of magnitude smaller than that for $B \perp i$.
Clearly, the longitudinal MR experiments did not provide any evidence for CMA in PtTe$_2$. A possible reason may be a large contribution of non-Dirac states to the measured resistivity. At odds with the Drude theory, which predicts zero MR for $B \parallel i$,\cite{Pippard2009} a sizeable positive longitudinal MR has been observed in several materials.
Among the theories interpreting this phenomenon,\cite{Miller1996,Argyres1956,Stroud1976,Pal2010} the one accounting for Fermi surface anisotropy\cite{Pal2010} seems most appropriate for PtTe$_2$. Within this approach, a positive longitudinal MR of up to $\sim100\%$ can be expected for strongly anisotropic systems. In consequence, CMA would be discernible only if its negative contribution to the longitudinal MR were larger than the positive term due to trivial electronic bands.
\begin{figure}[h]
\centering
\includegraphics[width=8.5cm]{Fig7.eps}
\caption{\textbf{Angle-dependent magnetotransport in PtTe$_2$}. Magnetic field dependencies of the electrical resistivity measured at $T$ = 2~K with $i \perp c$ axis and external magnetic field oriented at several different angles in relation to the electric current direction.
\label{rho_rotator}}
\end{figure}
\section*{Conclusions}
Our comprehensive investigations of the galvanomagnetic properties of the alleged type-II Dirac semimetal PtTe$_2$, performed on high-quality single crystals, have not provided any definitive proof of the presence of Dirac states in this material. The conclusion was hampered by the existence of trivial bands at the Fermi level, which contribute significantly to the electrical transport. In particular, no CMA effect was resolved, and the transverse MR was found to obey Kohler's scaling. From the analysis of the Hall effect and the SdH oscillations, very high mobilities of charge carriers with small effective masses were extracted. However, the derived Berry phases, different from the value of $\pi$ expected for Dirac fermions, indicate that the SdH effect is governed predominantly by trivial electronic states. This finding is in concert with the electronic band structure calculations, which showed that the Dirac point in PtTe$_2$ is located below the Fermi level.\cite{Yan2017a} Further investigations performed on suitably doped or pressurized material might result in the observation of a clear contribution of Dirac states to its transport properties, through appropriate tuning of the chemical potential. Based on the results obtained hitherto, PtTe$_2$ can be classified as a semimetal with a moderate degree of charge-carrier compensation.
\section*{Methods}
Single crystals of PtTe$_2$ were grown by the flux method. High-purity constituents (Pt 5N, Te 6N), taken in the atomic ratio 1:20, were placed in an alumina crucible covered by a molybdenum foil strainer and capped with another, inverted alumina crucible. This set was sealed inside a quartz tube under a partial Ar gas atmosphere. The ampoule was heated up to 1150$^{\circ}$C, held at this temperature for 24 hours, then cooled down to 850$^{\circ}$C at a rate of 50$^{\circ}$C/h, kept at this temperature for 360 hours, and finally slowly cooled down to 550$^{\circ}$C at a rate of 5$^{\circ}$C/h. Subsequently, the tube was quenched in cold water. Upon flux removal by centrifugation, a multitude of single crystals with typical dimensions $3\times2\times0.4$\,mm$^3$ were isolated. They had metallic luster and were found to be stable against air and moisture.
The chemical composition of the obtained single crystals was checked by energy-dispersive X-ray analysis using a FEI scanning electron microscope equipped with an EDAX Genesis XM4 spectrometer. The average elemental ratio Pt : Te = 35.2(5) : 64.8(3) was derived, in accord with the expected stoichiometry. The crystal structure of the single crystals was examined by X-ray diffraction on a KUMA Diffraction KM-4 four-circle diffractometer equipped with a CCD camera, using graphite-monochromatized Cu-K$\alpha 1$ radiation. The trigonal CdI$_2$-type crystal structure (space group $P\overline{3}m1$, No.~164) reported in Ref.\cite{Faruseth1965} was confirmed, with the lattice parameters very close to the literature values.
The crystallinity and orientation of the crystal used in the electrical transport studies were checked by the Laue backscattering technique employing a Proto LAUE-COS system. Due to the layered crystal structure of PtTe$_2$, it was possible to obtain very thin samples by the scotch-tape technique, with their surface corresponding to the $a$--$b$ plane of the hexagonal unit cell of the compound. A rectangular-shaped specimen with dimensions $2.9\times\!1.3\!\times\!0.04$\,mm$^3$ was cut from the cleaved single crystal using a scalpel. Electrical contacts were made from $50\,\mu$m thick silver wires attached to the sample using silver epoxy paste. Electrical transport measurements were carried out within the temperature range $2-300$\,K and in magnetic fields up to 9\,T using a conventional four-point ac technique implemented in a Quantum Design PPMS platform.
\section{Introduction}
In recent years, the human-computer interaction challenge has led to the demand for efficient facial and speech recognition systems \cite{els1,els2, els3, els5, b1}. Facial emotion recognition is the identification of a human emotion based on the facial expression and mimics \cite{DWT}. Facial emotion recognition has a wide range of application prospects in different areas, such as medicine \cite{med}, robotics \cite{3,appl1}, computer vision, surveillance systems \cite{els1}, machine learning \cite{appl4}, artificial intelligence, communication \cite{appl2,mobile}, psychological studies \cite{els5}, smart vehicles \cite{appl1}, security and embedded systems \cite{appl3}.
There are two main approaches to facial expression recognition: geometry-based and appearance-based methods \cite{els2}. The geometry-based methods extract the main feature points and their shapes from the face image and calculate the distances between them. In contrast, appearance-based methods focus on the face texture using different classification and template matching methods \cite{sth2,sth3}. In this paper, we focus on facial emotion recognition based on template matching techniques, which remains a challenging task \cite{new1,new2,int1}. Since the orientation of pixel features is sensitive to changes in illumination, pose, scale and other natural imaging variabilities, the matching errors tend to be high \cite{els5, els4, ref1}. Pixel matching methods are known to be useful when images have missing features, because the imaging matrices become sparse and the feature computation process is not trivial. As facial expressions cause a mismatch of intra-class features due to their orientation variability, it is difficult to map them between the imaging templates.
Facial emotion recognition accuracy depends on the robustness of the feature extraction method to intra-class variations and on the classifier performance under noisy conditions and with various types of occlusions \cite{appl4}. Even though a variety of approaches for the automated recognition of human expressions from face images using template matching methods have been investigated and proposed over the last few years \cite{sth2}, an emotion recognition method with robust feature extraction and effective classification accompanied by low computational complexity is still an open research problem \cite{sth}. Therefore, in this paper, we address the issue of matching templates through pixel normalization followed by the removal of inter-image feature outliers using a Min-Max similarity metric. We apply a Gaussian normalization method with local mean and standard deviation to normalize the pixels and extract relevant face features, followed by a Min-Max classification method to determine the emotion class. The simulation is performed in Matlab for the Japanese Female Facial Expression (JAFFE) database \cite{8}, and the emotion recognition accuracy is calculated using the leave-one-out cross-validation method.
The main contributions of this work are the following:
\begin{itemize}
\item We develop a simplified approach for facial emotion recognition with template matching method using Gaussian normalization, mean and standard deviation based feature extraction and Min-Max classification approach.
\item We present simple and effective facial emotion recognition algorithm having low computational complexity and ability to suppress the outliers and remove intensity offsets.
\item We conduct the experiments and simulations on JAFFE database to demonstrate the efficiency of the proposed approach and highlight its advantages, comparing to the other existing methods.
\end{itemize}
The paper is organized as follows. Section \ref{sec2} presents the overview of the existing methods for facial emotion recognition, their drawbacks and reasons to propose a new method. In Section \ref{sec3}, we show normalization, feature extraction and classification parts of the proposed method, present the algorithm and describe the experiments. Section \ref{sec4} contains the simulation results and comparison of the obtained results with the existing methods. In Section \ref{sec5}, we discuss the benefits and drawbacks of the proposed method, in comparison to the traditional methods. Section \ref{sec6} concludes the paper.
\section{Background and related works}
\label{sec2}
To address the problem of facial emotion recognition, several template matching methods have been proposed in the last decades \cite{els1, 3,4,5,comp1}. In most cases, the process of emotion recognition from human face images is divided into two main stages: feature extraction and classification \cite{els1,3}. The main aim of feature extraction methods is to minimize intra-class variations and maximize inter-class variations. The most important facial elements for human emotion recognition are the eyes, eyebrows, nose, mouth and skin texture; therefore, a vast majority of feature extraction methods focus on these features \cite{els2,p1}. The selection of irrelevant face image features, or an insufficient number of them, leads to low emotion recognition accuracy even when effective classification methods are applied \cite{sth}. The main purpose of the classification stage is to differentiate the elements of different emotion classes to enhance emotion recognition accuracy.
The commonly used feature extraction methods include two-dimensional Linear Discriminant Analysis (2D-LDA) \cite{3,comp1}, two-dimensional Principal Component Analysis (2D-PCA) \cite{PCA}, Discrete Wavelet Transform (DWT) \cite{SVM2,3,DWT}, Gabor-based methods \cite{els6,els7} and wavelet-based techniques \cite{4,Ahmad2}. In the 2D-LDA method, the two-dimensional image matrix is exploited to form scatter matrices between the classes and within the class \cite{3}. The 2D-LDA method can be applied for facial feature extraction alone or together with another feature extraction method, such as DWT \cite{3,comp1}. In the 2D-PCA feature extraction method, the covariance matrix representation of the image is derived directly from the original image \cite{3,PCA}. The size of the derived principal component matrix is smaller than the original image size, which decreases the amount of processed data and, consequently, reduces the required computational memory \cite{PCA2}. However, 2D-LDA and 2D-PCA methods applied in template matching techniques require additional processing of the image, dimensionality reduction techniques, or the application of another feature extraction method to achieve higher recognition accuracy, which increases the processing time.
Another feature extraction method is the DWT. This method is based on low-pass and high-pass filtering and is therefore appropriate for images with different resolution levels \cite{3}. In the emotion recognition task, the DWT is applied for the extraction of useful features from face images and can be replaced with the Orthogonal Wavelet Transform (OWT) and the Biorthogonal Wavelet Transform (BWT), which have the advantage of orthogonality \cite{DWT}. Another method for facial emotion recognition is the Gauss-Laguerre wavelet geometry based technique. This method represents the processed image in polar coordinates with the center at a particular pivot point. The degree of freedom is one of the advantages of the Gauss-Laguerre approach, which allows the extraction of features of the desired frequency from the images \cite{4,Ahmad2}. However, the DWT and Gauss-Laguerre approaches are complex and require time- and memory-consuming calculations.
The classification of the extracted features can be implemented using the Support Vector Machine (SVM) algorithm \cite{3,SVM2}, the K-Nearest Neighbor (KNN) method \cite{4,KNN}, the Random Forest classification method \cite{med,forest1} and the Gaussian process \cite{5}. The SVM principle is based on non-linear mapping and the identification of a hyperplane for the separation of data classes. The SVM classifier is used with different kernel functions, such as linear, quadratic, polynomial and radial basis functions, to optimize the SVM performance \cite{3,SVM2}. The KNN approach is based on the numerical comparison of a testing data sample with the training data samples of each class, followed by the determination of similarity scores. The data class is defined by the K most similar data samples based on the minimum difference between the training and test data \cite{4,KNN}. KNN and SVM classifiers are simple and widely used for emotion recognition; however, these classifiers do not suppress outliers, which leads to lower recognition accuracy.
The Random Forest classification method is based on the decision tree approach with randomized parameters \cite{med,forest1}. To construct the decision tree, the Random Forest algorithm takes a random set of options and selects the most suitable among them \cite{forest2}. The Random Forest classifier is robust and has a high recognition rate for images of large resolution \cite{forest3}. The drawback of the Random Forest classifier is its computational complexity. Another classification method is the Gaussian process approach. The Gaussian process is based on predicted probabilities and can be used for facial emotion recognition without the application of feature selection algorithms. The Gaussian process allows a simplified computational approach; however, it has a lower emotion recognition rate compared to the other methods \cite{5}.
Even though a number of methods for feature extraction and classification have been proposed, there is a lack of template matching methods that achieve high recognition accuracy at minimum computational cost. Therefore, the method we propose has the potential to reduce the computational complexity of the facial emotion recognition operation and increase the recognition accuracy due to the simplicity of the algorithm, effective feature extraction, and the ability to suppress outliers. The proposed algorithm can be implemented on devices with small computational capacity, keeping the facial emotion recognition operation fast and accurate.
\section{Methodology}
\label{sec3}
The main idea of the proposed method is to extract the spatial change of standardized pixels in a face image and detect the emotion class of the face using a Min-Max similarity Nearest Neighbor classifier. The images from the JAFFE database \cite{8} are used for the experiments. This database contains 213 images of 10 female faces comprising 6 basic facial expressions and neutral faces. The original images from the database have a size of $256\times256$ pixels, and in our experiments they are cropped to a size of $101\times114$ pixels, retaining only the relevant information of the face area. A block diagram of the proposed method is shown in Fig. \ref{1}.
\begin{figure}[!h]
\centering{\includegraphics[width=70mm]{pict.jpg}}
\caption{Outline of the proposed emotion recognition system
}
\label{1}
\end{figure}
\subsection{Pre-processing}
Illumination variability introduces inter-class feature mismatch, resulting in inaccuracies in the detection of emotion-discriminating features from face images. Therefore, image normalization is essential to reduce the inter-class feature mismatch, which can be viewed as intensity offsets. Since the intensity offsets are uniform within a local region, we perform Gaussian normalization using the local mean and standard deviation. The input image is represented as $x(i, j)$, and $y(i,j)$ is the normalized output image, where $i$ and $j$ are the row and column indices of the processed image. The normalized output image is calculated by Eq. \ref{Eq1} \cite{james2010inter}, where $\mu$ is the local mean and $\sigma$ is the local standard deviation computed over a window of size $N\times N$.
\begin{align}\label{Eq1}
\begin{split}
y(i,j)=\frac{x(i,j)-\mu(i,j)}{6\sigma(i,j)}
\end{split}
\end{align}
The parameters $\mu$ and $\sigma$ are calculated using Eq. \ref{Eq2} and \ref{Eq3} with $a=(N-1)/2$.
\begin{align}\label{Eq2}
\begin{split}
\mu(i,j)=\frac{1}{N^2}\sum_{k=-a}^{a} \sum_{h=-a}^{a}x(k+i,h+j)
\end{split}
\end{align}
\begin{align}\label{Eq3}
\begin{split}
\sigma(i,j)=\sqrt{\frac{1}{N^2}\sum_{k=-a}^{a} \sum_{h=-a}^{a}{[x(k+i,h+j)-\mu(i,j)]^2}}
\end{split}
\end{align}
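For illustration, the normalization of Eqs. \ref{Eq1}--\ref{Eq3} can be sketched in a few lines of Python. This is a minimal sketch, not the authors' Matlab implementation: the reflect padding at the image borders and the small $\varepsilon$ guarding flat windows are our assumptions, since the paper does not specify boundary handling.

```python
import numpy as np

def local_gaussian_normalize(x, N=11):
    """Eqs. (1)-(3): y = (x - mu) / (6*sigma), with mu and sigma over an N x N window."""
    x = np.asarray(x, dtype=float)
    a = (N - 1) // 2
    xp = np.pad(x, a, mode='reflect')  # border handling is an assumption
    y = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            win = xp[i:i + N, j:j + N]
            sigma = win.std()  # population standard deviation, as in Eq. (3)
            y[i, j] = (x[i, j] - win.mean()) / (6 * sigma + 1e-12)  # eps guards flat windows
    return y
```

By construction, a pixel that equals its local mean maps to zero, and the $6\sigma$ denominator keeps typical outputs within roughly $[-0.5, 0.5]$.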
An example image from the JAFFE database with three different lighting conditions is shown in Fig. \ref{fig2a} (a). Since the JAFFE database does not contain face images with different illumination conditions, the illumination change was emulated by adding and subtracting a value of 10 from the original image. Fig. \ref{fig2a} (b) illustrates the respective images after Gaussian local normalization. Irrespective of the illumination conditions, the three locally normalized images appear nearly identical, with minimal pixel intensity variation.
\begin{figure}[h]
\centering
\begin{subfigure}[]{}
\includegraphics[width=43mm]{crop.JPG}
\end{subfigure}%
\begin{subfigure}[]{}
\includegraphics[width=43mm]{norm.JPG}
\end{subfigure}%
\begin{subfigure}[]{}
\includegraphics[width=43mm]{edge.JPG}
\end{subfigure}%
\caption{(a) Sample image from the JAFFE database with different lighting conditions obtained by adding and subtracting a value of 10 from the original image. (b) Normalized images of the above sample images obtained by performing Gaussian normalization using the local mean and local standard deviation taken over a window of size $N=11$. (c) Feature-detected images obtained from the normalized images by computing the local standard deviation using a window of size $M=11$.}
\label{fig2a}
\end{figure}
\subsection{Feature detection}
The facial regions useful for emotion recognition are the eyes, eyebrows, cheeks and mouth. In this experiment, we perform feature detection by calculating the local standard deviation of the normalized image using a window of size $M\times M$. Eq. \ref{Eq4} is applied for feature detection with $b=(M-1)/2$.
\begin{align}\label{Eq4}
\begin{split}
w(i,j)=\sqrt{\frac{1}{M^2}\sum_{k=-b}^{b} \sum_{h=-b}^{b}{[y(k+i,h+j)-\mu'(i,j)]^2}}
\end{split}
\end{align}
In Eq. \ref{Eq4} the parameter $\mu'$ refers to the mean of the normalized image $y(i,j)$ and can be calculated by Eq. \ref{Eq5}.
\begin{align}\label{Eq5}
\begin{split}
\mu'(i,j)=\frac{1}{M^2}\sum_{k=-b}^{b} \sum_{h=-b}^{b}y(k+i,h+j)
\end{split}
\end{align}
Fig. \ref{fig2a} (c) shows the results of feature detection corresponding to the normalized images.
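Under the same assumptions as for the normalization step, the feature detection of Eqs. \ref{Eq4}--\ref{Eq5} reduces to a windowed standard deviation of the normalized image. A minimal Python sketch (the `local_std` name and the reflect padding are ours):

```python
import numpy as np

def local_std(y, M=11):
    """Eqs. (4)-(5): w(i,j) is the standard deviation of y over an M x M window."""
    y = np.asarray(y, dtype=float)
    b = (M - 1) // 2
    yp = np.pad(y, b, mode='reflect')  # border handling is an assumption
    w = np.zeros_like(y)
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            w[i, j] = yp[i:i + M, j:j + M].std()  # responds strongly at edges and texture
    return w
```

Flat regions of the normalized face produce $w\approx 0$, while the eyes, eyebrows and mouth regions produce large responses, which is what Fig. \ref{fig2a} (c) visualizes.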
\subsection{Emotion Classification}
For the recognition stage, we propose a Min-Max similarity metric in a Nearest Neighbor classifier framework. This method is based on the principle that the ratio of the minimum to the maximum of two pixel values produces a unity output for equal pixels and an output less than unity for unequal pixels. The proposed method is described in Algorithm \ref{alg1}. The parameter $trainlen$ refers to the number of training images, $N$ corresponds to the normalization window size, and $M$ indicates the feature detection window size.
Each cropped image is of $m\times n$ pixel dimension. The parameter \textit{train} is a feature array of size $trainlen\times(m\times n)$, where each row corresponds to a processed training image. After normalization and feature detection, the test image is stored into a vector \textit{test} of size $1\times(m\times n)$. A single test image is compared pixel-wise with the processed training images of all the classes in the feature array using the proposed Min-Max classifier:
\begin{align}\label{Eq6}
\begin{split}
s(i,j)=[\frac{min[train(i,j),test(1,j)]}{max[train(i,j),test(1,j)]}]^\alpha,
\end{split}
\end{align}
where the parameter $\alpha$ controls the power of the exponent used to suppress outlier similarities. Outliers are observations that are inconsistent with the remaining observations and are common in real-time image processing. The presence of outliers may cause misclassification of an emotion, since the sample maximum and sample minimum are maximally sensitive to them. In order to remove the effect of the outliers, $\alpha=3$ is selected, which introduces the maximum difference for inter-class images and the minimum difference for intra-class images.
After Min-Max classification, a column vector $z$ of size $trainlen \times 1$, containing the weights obtained by comparing the test image with each of the \textit{trainlen} training images, is calculated using Eq. \ref{Eq7}.
\begin{align}\label{Eq7}
\begin{split}
z(i)=\sum_{j=1}^{m\times n}s(i,j)
\end{split}
\end{align}
The classification output \textit{out} in Eq. \ref{Eq8} is the maximum value of $z$, corresponding to the training image that shows the best match. The recognized emotion class is the class of the matched training image.
\begin{align}\label{Eq8}
\begin{split}
out=max(z(i))
\end{split}
\end{align}
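The classification of Eqs. \ref{Eq6}--\ref{Eq8} can be sketched as follows. This is a minimal sketch; the small $\varepsilon$ in the denominator, which treats a $0/0$ pixel pair as dissimilar rather than identical, is our assumption.

```python
import numpy as np

def min_max_classify(train, labels, test, alpha=3):
    """Eqs. (6)-(8): pixel-wise (min/max)^alpha similarity, summed per training image."""
    train = np.asarray(train, dtype=float)
    test = np.asarray(test, dtype=float)
    lo = np.minimum(train, test)       # broadcast the test row against every train row
    hi = np.maximum(train, test)
    s = (lo / (hi + 1e-12)) ** alpha   # Eq. (6); alpha = 3 suppresses outlier similarities
    z = s.sum(axis=1)                  # Eq. (7): one weight per training image
    return labels[int(np.argmax(z))]   # Eq. (8): class of the best-matching image
```

Equal pixel pairs contribute a similarity of one, while raising unequal pairs to the power $\alpha$ drives their contribution rapidly toward zero, which is the outlier-suppression mechanism described above.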
\begin{algorithm}[!h]
\caption{Emotion Recognition using Min-Max classifier }
\label{alg1}
\begin{algorithmic}[1]
\REQUIRE {Test image Y,Train images $X_t$,trainlen,window size N and M}
\STATE {Crop the images to a dimension of $m\times n$}
\FOR{$t=1$ to $trainlen$}
\STATE $C(i,j)=\frac{X_t(i,j)-\mu(i,j)}{6\sigma(i,j)}$
\STATE $W(i,j)=\sqrt{\frac{1}{M^2}\sum_{k=-b}^{b} \sum_{h=-b}^{b}{[C(k+i,h+j)-\mu(i,j)]^2}}$
\STATE {Store the value of W to an array $train$ of dimension $trainlen\times m\times n$}
\ENDFOR
\FOR{$t=1$ to $trainlen$}
\STATE $V(i,j)=\frac{Y(i,j)-\mu(i,j)}{6\sigma(i,j)}$
\STATE $test(i,j)=\sqrt{\frac{1}{M^2}\sum_{k=-b}^{b} \sum_{h=-b}^{b}{[V(k+i,h+j)-\mu(i,j)]^2}}$
\STATE $s(t,j)=[\frac{min[train(t,j),test(1,j)]}{max[train(t,j),test(1,j)]}]^3$
\STATE $z(t)=\sum_{j=1}^{m\times n}s(t,j)$
\STATE $ out=max(z(t))$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{Results and comparison }
\label{sec4}
To benchmark the performance of the proposed algorithm, the leave-one-out cross-validation method has been used. In this method, one image of each expression of each person is used for testing and the remaining images are used for training \cite{4}. The cross-validation is repeated 30 times to obtain a statistically stable performance of the recognition system and to ensure that all the images in the JAFFE database are used for testing at least once. The overall emotion recognition accuracy of the system is obtained by averaging the results of the cross-validation trials. Fig. \ref{ff3} shows the accuracy rates obtained for each trial by varying the feature detection window size $M$ from 3 to 21 while keeping the normalization window size at $N=11$. The maximum emotion recognition accuracy for this normalization window size is achieved with a detection window size of 11.
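The validation loop described above can be sketched generically as follows; `classify(train_features, train_labels, test_feature)` is a placeholder for any classifier with this signature, and the function name is ours.

```python
import numpy as np

def leave_one_out_accuracy(features, labels, classify):
    """Hold out each sample once, classify it with the rest, and average the hits."""
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i  # train on all samples except the i-th
        pred = classify(features[mask], labels[mask], features[i])
        hits += int(pred == labels[i])
    return hits / n
```

Each sample serves as the test image exactly once, so averaging the hits over all $n$ folds gives the reported recognition accuracy.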
\begin{figure}[!h]
\centering{\includegraphics[width=80mm]{new_plot2.JPG}}
\caption{The accuracy rates obtained for four trials of leave-one-out cross-validation method for different feature detection window size M ranging from 3 to 21.}
\label{ff3}
\end{figure}
Fig. \ref{f4} shows the recognition accuracy rates obtained for different normalization and feature detection window sizes ranging from 3 to 21 for a single trial.
\begin{figure}[!h]
\centering{\includegraphics[width=90mm]{accur.JPG}}
\caption{Graph shows the accuracy rates obtained for different normalization and feature detection window sizes ranging from M=N=3 to 21 for one trial of leave-one-out cross-validation method.}
\label{f4}
\end{figure}
To evaluate the performance of the proposed Min-Max classifier, it has been compared with other classifiers, such as Nearest Neighbor \cite{6} and Random Forest \cite{7}, applied after the same normalization and feature detection. The obtained accuracy values are shown in Table \ref{tb1}.
\begin{table}[!h]
\centering
\caption{Testing of feature detected images on other classifiers}
\begin{tabular}{l|c}\hline
$Classifier$ & $Accuracy (\%)$ \\\hline
Nearest Neighbor & 92.85 \\
Random Forest & 88.57 \\
\textbf{Proposed Min-Max classifier} & \textbf{98.57}\\\hline
\end{tabular}
\label{tb1}
\end{table}
The proposed method achieves a maximum accuracy of 98.57\% for a window size of $M=N=11$, which outperforms the other existing methods in the literature under the leave-one-out cross-validation method. Table \ref{tb2} shows the performance comparison of the proposed method with other existing systems on the same database using leave-one-out cross-validation. The achieved emotion recognition accuracy of 98.57\% proves the significance of the emotion detection method with the Min-Max similarity classifier.
\begin{table}[!h]
\caption{Comparison of proposed emotion recognition system with other existing system based on leave-one-out cross-validation method}
{\begin{tabular}{|l|p{2.5cm}|c|}\hline
$Existing$ $systems$ & $Method$ $used$ & $Accuracy (\%)$\\\hline
Cheng et al. \cite{5} & Gaussian Process & 93.43 \\\hline
Hamester et al. \cite{cnn} & Convolutional Neural Network & 95.80 \\\hline
Frank et al. \cite{3} & DWT + 2D-LDA + SVM & 95.71 \\\hline
Poursaberi et al. \cite{4} & Gauss-Laguerre wavelet + KNN & 96.71\\\hline
Hegde et al. \cite{els6} & Gabor and geometry based features & 97.14
\\\hline
\textbf{Proposed method} & \textbf{Min-Max classifier} & \textbf{98.57}\\\hline
\end{tabular}}{}
\label{tb2}
\end{table}
In addition, in contrast to the proposed emotion recognition system, the other existing methods require specialized feature extraction and dimensionality reduction techniques before the classification stage. The main advantages of the proposed emotion recognition system are its simplicity and straightforwardness.
\section{Discussion}
\label{sec5}
The main advantages of the proposed facial emotion recognition approach are its high recognition accuracy and low computational complexity. Achieving high recognition accuracy requires effective feature selection. In existing methods, complex feature selection algorithms are applied without normalizing the image. The normalization stage is important and has a considerable effect on the accuracy of the feature selection process. In the proposed algorithm, we apply a simple pre-processing method to normalize the images and eliminate intensity offsets, which improves the accuracy of the feature selection process and increases the emotion recognition accuracy in comparison to the existing methods. The effect of the proposed Min-Max classification method on recognition accuracy is also important. Table \ref{tb1} shows the application of other classification methods to the same pipeline, where the proposed Min-Max classifier demonstrates a clear performance improvement. In comparison to the existing methods, the proposed Min-Max classifier has the ability to suppress outliers, which significantly improves the overall performance of this approach.
The simplicity and straightforwardness of the proposed approach are also important due to the resulting low computational complexity. Most of the existing methods use complicated feature extraction and classification approaches that increase the complexity of the facial recognition process and require devices with large computational capacity to process the images. We address this problem by applying direct feature detection based on the local mean and standard deviation together with the simple Min-Max classification method. In comparison to existing feature detection methods, such as PCA \cite{PCA} and LDA \cite{3}, the proposed method is straightforward and does not require dimensionality reduction. Moreover, the simple Min-Max classification method also reduces the computational time compared to traditional classification approaches, such as SVM \cite{SVM2}, KNN \cite{KNN}, the Gaussian process \cite{5} and Neural Networks \cite{nn}. Therefore, the algorithm can be run on devices with low computational capacity.
\section{Conclusion}
\label{sec6}
In this paper, we have presented an approach to improve the performance of the emotion recognition task using a template matching method. We have demonstrated that pixel normalization and feature extraction based on the local mean and standard deviation, followed by Min-Max similarity classification, can improve the overall classification rates. We achieved an emotion recognition accuracy of 98.57\%, which exceeds the performance of the existing methods on the JAFFE database under the leave-one-out cross-validation method. The capability of the algorithm to suppress feature outliers and remove intensity offsets results in the increase of emotion recognition accuracy. Moreover, the proposed method is simple and direct in comparison to other existing methods, which require the application of dimensionality reduction techniques and complex classification methods for computation and analysis. Low computational complexity is a notable benefit of the proposed algorithm, implying reduced computational time and memory requirements. The method can be extended to other template matching problems, such as face recognition and biometric matching. The drawback of the proposed method, as of any other template matching method, is that templates must be created for each class, which in turn consumes additional memory space to store them.
\bibliographystyle{IEEEtran}
\section{Introduction}
\ \ \ \ Let $\mathbb{G}=\left(V\left(\mathbb{G}\right),\ E\left(\mathbb{G} \right)\right)$ denote a graph with the edge set $E\left(\mathbb{G} \right)$ and the vertex set $V\left(\mathbb{G} \right)$.
The degree of vertex $u$ is represented by $d_{\mathbb{G} }\left(u\right)$. If $u,v\in V\left(\mathbb{G} \right)$ are adjacent, the edge connecting them is labeled by $uv$. Please refer to\cite{BJA} for concepts or notations in graph theory that are not addressed in this paper. A multitude of degree-based topological indices play a very significant role in the fields of mathematics and chemistry.
For further information on topological indices, refer to\cite{PMS}. Gutman\cite{GI} has proposed a new group of topological indices called the Sombor indices, which includes the (ordinary) Sombor index, the reduced Sombor index, and the average Sombor index, inspired by the Euclidean metric.
For a graph $\mathbb{G} $, the formulas of the Sombor index $SO\left(\mathbb{G} \right)$, the reduced Sombor index $SO_{red}\left(\mathbb{G} \right)$ and the average Sombor index $SO_{avr}\left(\mathbb{G} \right)$ are given as follows:
\begin{eqnarray*}
SO\left(\mathbb{G} \right)=\sum_{uv\in E\left(\mathbb{G} \right)}\sqrt{d^2_\mathbb{G} \left(u\right)+d^2_\mathbb{G} \left(v\right)}\hfill \ \ ,
\end{eqnarray*}
\begin{eqnarray*}
SO_{red}\left(\mathbb{G} \right)=\sum_{uv\in E\left(\mathbb{G} \right)}\sqrt{\left(d_\mathbb{G} \left(u\right)-1\right)^2+\left(d_\mathbb{G} \left(v\right)-1\right)^2}\hfill \ \ ,
\end{eqnarray*}
\begin{eqnarray*}
SO_{avr}\left(\mathbb{G} \right)=\sum_{uv\in E\left(\mathbb{G} \right)}\sqrt{\left(d_\mathbb{G} \left(u\right)-\frac{2m}{n}\right)^2+\left(d_\mathbb{G} \left(v\right)-\frac{2m}{n} \right)^2}\hfill \ \ ,
\end{eqnarray*}
where $\frac{2m}{n}$ denotes the average degree of the graph $\mathbb{G} $, and $m$, $n$ denote the number of edges and the number of vertices, respectively.
It is obvious that we can obtain the general equation for the Sombor indices\cite{GI}, defined as
\begin{eqnarray*}
SO_a\left(\mathbb{G} \right)=\sum_{uv\in E\left(\mathbb{G} \right)}\sqrt{\left(d_\mathbb{G} \left(u\right)-a \right)^2+\left(d_\mathbb{G} \left(v\right)-a\right)^2}\hfill \ \ .
\end{eqnarray*}
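For concreteness, the general index $SO_a$ can be evaluated directly from an edge list. The following short Python sketch is not part of the original paper, and the function name is ours:

```python
import math

def sombor_a(edges, a=0.0):
    """General Sombor index SO_a of a simple graph given by its edge list.

    a = 0 gives SO, a = 1 gives SO_red, and a = 2m/n gives SO_avr.
    """
    deg = {}
    for u, v in edges:  # vertex degrees accumulated from the edge list
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum(math.sqrt((deg[u] - a) ** 2 + (deg[v] - a) ** 2) for u, v in edges)
```

For example, for the path on three vertices (degrees $1, 2, 1$) one obtains $SO = 2\sqrt{5}$ and $SO_{red} = 2$, matching a direct evaluation of the formulas above.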
\begin{figure}[htbp]
\centering\includegraphics[width=8cm,height=2.5cm]{2.jpg}
\caption{A $l$-polygonal chain $PC_n$ with $n$ polygons, where $O_n$ is the $n$-th polygon.}
\end{figure}
\begin{figure}[htbp]
\centering\includegraphics[width=13cm,height=11cm]{44.jpg}
\caption{$k$ sorts of permutations in an $l$-polygonal chain.}
\end{figure}
A polygonal chain $PC_n$ with $n$ $l$-cycles (polygons) can be seen as a structure obtained by connecting a new $l$-cycle to a chain of polygons with $n-1$ $l$-cycles, with line segments acting as bridges connecting the endpoints of the segments. When $1\leqslant i \leqslant n$, $O_i$ is known as the $i$-th polygon of $PC_n$; the polygonal chain is unique when $n = 1,\ 2$ (see Fig. $1$); when $n \geqslant 3$, $PC^i_n$ is obtained by joining a vertex of $O_n$ with the vertex $V_{n-1}$ of $O_{n-1}$, where $1\leqslant i\leqslant k$.
Consider an $l$-cycle random polygonal chain $PC_n(n;\ p_1,\ \ldots,\ p_k)$. The transition from $PC_{n-1}$ to $PC_n$ is assumed to be a stationary stochastic process: the $i$-th attachment occurs with probability $p_i$, where $p_i$ is a constant independent of $n$ and $1\leqslant i\leqslant k$ (see Fig. 2). In other words, the stated process is viewed as a zeroth-order Markov process.
Topological indices and graph invariants based on the distances between graph vertices are widely used to characterize molecular graphs, establish relationships between the structure and properties of molecules, predict the biological activity of compounds, and enable their chemical applications. The Sombor index is one of the more novel topological indices available and is highly correlated with many physical and chemical indices of molecular compounds.
The Sombor index, a new category of degree-based topological molecular descriptors, has received a lot of attention owing to its excellent applicability in chemistry. Roberto et al.\cite{CRI} were concerned with the Sombor index of chemical graphs.
In\cite{LH.C}, the Sombor index of chemical trees was given, and the boiling temperatures of benzenoid hydrocarbons were confirmed to be substantially correlated with the (reduced) Sombor index. Existing bounds and extremal results related to the Sombor index and its variants were collected by Liu and You\cite{8}.
In\cite{RJJ}, the authors systematically described the general properties of the Sombor indices.
For additional outstanding results on the Sombor indices, we recommend that the reader refer to\cite{LZZ,KVRI,LHYL,GNS,MIM}. Meanwhile, more scientific studies have focused on the exploration of the mathematical expected values or variances of some degree-based topological indices.
In\cite{WSKX}, a simple mathematical formula for the expected value of the Wiener index was established in a random cyclooctane chain.
In random hydrocarbon chains, Raza et al.\cite{Raza} discovered the expected values of several chemical indices.
The expected values of the basic Randic index for conjugated hydrocarbons were reported in\cite{LJJ}.
Zhang and Li\cite{ZLL} discussed the expected values of the four types of degree-based topological indices in a random polyphenylene chain.
In\cite{HGK}, the expected values of the Kirchhoff indices in the random polyphenyl and spiro chains were obtained.
Many studies have revealed the expected values and variances of degree-based topological indices for a wide range of substances across this time period, and the reader can refer to\cite{CHL,ZWY,ZJP,TGX}.
Motivated by the preceding research, in this study we offer precise analytical formulas for the expected values and variances of the Sombor indices.
The structure of the paper is as follows. In Section $2$, we define a general random polygonal chain and introduce some basic probability theory notions. The distributions of the Sombor indices in a general random chain are established in Section $3$. Based on these distributions, we obtain expressions for their expected values and variances in Section $4$. As an application of Sections $3$ and $4$, in Section $5$, the Sombor indices for the polyomino, pentachain, polyphenyl, and cyclooctane chains, as well as their expected values and variances, are calculated. The asymptotic behavior of the distributions of the Sombor indices for all the polygonal chains given in this paper is illustrated in Section $6$.
\section{Preliminaries}
\ \ \ \ For a random polygonal chain $PC_n$, the $SO_a(PC_n)$ are random variables. First, we introduce some probability theory that will be used in the next section.
\begin{defi}
Given a graph $\mathbb{G} $, the random chain $\mathbb{G} _n= G(n;\ p_1,\ p_2,\ \ldots,\ p_k)$ with $n$ identical polygons is created in the following way:
\begin{flushleft}
$(1)$ $ \mathbb{G} _2$ is made up of two polygons, see Fig 1. \\
$(2)$ For each $n>2$, $\mathbb{G} _n$ is created by attaching a new polygon to $\mathbb{G} _{n-1}$ in one of $k$ ways, resulting in $\mathbb{G} ^1_n, \ \mathbb{G} ^2_n,\ \ldots,\ \mathbb{G} ^k_n$ with probability $p_1,\ p_2,\ \ldots,\ p_k$, respectively, where $\sum_{i= 1}^{k} p_i=1$.
\end{flushleft}
\end{defi}
Some of the fundamental concepts of probability theory are illustrated below.
The multinomial distribution is an extension of the binomial distribution, with $n$ independent replicate experiments, each having $k$ possible outcomes. It is usually denoted by $M (n,\mathbf{p})$, where $n$ represents the number of experiments and the vector $\mathbf{p}$ collects the probabilities of the outcomes, taken from the parameter space
\begin{equation*}
\{\mathbf{p} | \mathbf{p} =(p_1,\ p_2,\ \ldots,\ p_k)^T\in \mathbb{R}^k,\ p_i\geqslant 0\ for\ i=1,\ 2,\ \ldots,\ k, \ and\ \sum_{ i = 1}^{k} p_i=1\} .
\end{equation*}
The sample space of the multinomial distribution is
\begin{equation*}
\mathbb{S} = \{ \mathbf{X} | \mathbf{X} =(x_1,\ x_2,\ \ldots,\ x_k)^T\in \mathbb{Z}^k,\ x_i\geqslant 0\ for\ i=1,\ 2,\ldots ,\ k,\ and \ \sum_{ i = 1}^{k} x_i=n\} .
\end{equation*}
The probability function of the multinomial distribution is
\begin{equation*}
f(x)=\frac{n!}{\prod_{i=1}^k x_i!} \prod_{i=1}^{k}p_i^{x_i},\ x\in \mathbb{S} ,
\end{equation*}
Assume that the random vector $\mathbf{X}$ follows the multinomial distribution, denoted by $\mathbf{X} \thicksim M(n,\mathbf{p})$. The following conclusions about $\mathbf{X}$ can be obtained:
\begin{equation}
\mathbb{E} (\mathbf{X} )=n\mathbf{p} ,\ \mathbb{V}\mathbf{a} \mathbf{r} (\mathbf{X} )=n(\mathbf{d}\mathbf{i}\mathbf{a}\mathbf{g}(\mathbf{p})-\mathbf{p} \mathbf{p} ^T) .
\end{equation}
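The moment formulas in (2.1) can be checked exactly for small cases by brute-force enumeration of all outcome sequences. The following Python sketch is our own illustration (not part of the paper's method); it recovers $\mathbb{E} (\mathbf{X} )=n\mathbf{p}$ and $\mathbb{V}\mathbf{a}\mathbf{r} (\mathbf{X} )=n(\mathbf{diag}(\mathbf{p})-\mathbf{p} \mathbf{p} ^T)$ for the small case $n=3$, $k=3$:

```python
import itertools
import math

def exact_multinomial_moments(n, p):
    """Enumerate all k^n outcome sequences of n trials with k categories
    and return the exact mean vector and covariance matrix of the counts."""
    k = len(p)
    mean = [0.0] * k
    cov = [[0.0] * k for _ in range(k)]
    for seq in itertools.product(range(k), repeat=n):
        prob = math.prod(p[c] for c in seq)        # probability of this sequence
        x = [seq.count(c) for c in range(k)]       # count vector X
        for i in range(k):
            mean[i] += prob * x[i]
            for j in range(k):
                cov[i][j] += prob * x[i] * x[j]
    for i in range(k):                              # subtract E(X_i) E(X_j)
        for j in range(k):
            cov[i][j] -= mean[i] * mean[j]
    return mean, cov

mean, cov = exact_multinomial_moments(3, [0.2, 0.3, 0.5])
# mean should equal n*p; cov should equal n*(diag(p) - p p^T)
```

The enumeration is exponential in $n$, so it is only a correctness check, not a computational tool.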
The Bernoulli distribution $B(p_1)$, the binomial distribution $B(n,\ p_1)$, and the categorical distribution $C(\mathbf{p})$ are obtained by setting $n=1$ and $k=2$;\ $n>1$ and $k=2$;\ and $n=1$ and $k>2$, in that order.
The multinomial distribution's two essential properties are described below.
\begin{prop}
(Addition Rule\cite{PWL}). Let $\mathbf{X} _i\thicksim M(n_i,\mathbf{p})$ for $i=1,\ 2, \ldots,\ k$, where the $\mathbf{X} _i$ are mutually independent. Then
\begin{equation*}
\sum_{i = 1}^{k}\mathbf{X} _i \thicksim M(\sum_{i = 1}^k n_i,\ \mathbf{p} ) .
\end{equation*}
\end{prop}
\begin{prop}
(Marginal Distribution\cite{PWL}). Let $\mathbf{X} =(\mathbf{X} _1,\ \mathbf{X} _2,\ \ldots ,\ \mathbf{X} _k)\thicksim M(n,\ \mathbf{p} )$. Then $X_i\thicksim B(n,\ p_i)$ for each $i=1,\ 2,\ \ldots,\ k$.
\end{prop}
For further information on general probability distributions, please refer to\cite{KPM}. Lemma 2.3 below is commonly applied.
\begin{lem}
Let $\mathbf{X}$ be a random vector, $\mathbf{A}$ a constant matrix and $\mathbf{b}$ a constant vector. Then
\begin{equation*}
\mathbb{E} (\mathbf{A} \mathbf{X} +\mathbf{b} )=\mathbf{A}\mathbb{E} (\mathbf{X} )+\mathbf{b} ,\ \mathbb{V} \mathbf{a} \mathbf{r} (\mathbf{A} \mathbf{X} +\mathbf{b} )=\mathbf{A} \mathbb{V} \mathbf{a} \mathbf{r} (\mathbf{X} )\mathbf{A} ^T.
\end{equation*}
\end{lem}
A general method of calculating the expected values and variances of the Sombor indices in a polygonal chain is given below.
\begin{thm}
Let $PC_n$ be a random polygonal chain of length $n$ and $\mathbf{X} \thicksim B(n-2,\ p_1)$, where $n>2$. Then
\begin{eqnarray*}
SO_a(PC_n)&=&A\mathbf{X} +B(n-2)+C ,\\
\mathbb{E} (SO_a(PC_n))&=&(p_1A+B)(n-2)+C ,\\
\mathbb{V} \mathbf{a} \mathbf{r} (SO_a(PC_n))&=&A^2(n-2)p_1(1-p_1),
\end{eqnarray*}
$where$
\begin{equation*}
(A,B,C)=\Bigl(A_1-A_2,\ A_2,\ SO_a(PC_2)\Bigr).
\end{equation*}
\end{thm}
\begin{proof}
In order to quantify the random variable $SO_a(PC_n)$, we define a family of $3$-dimensional random vectors $\mathbb{Z} _k$ as follows:
\begin{eqnarray*}
\mathbb{Z} _k=
\begin{cases}
(1,0,0), & PC_n=PC^1_n ;\\
(0,1,0), & PC_n=PC^2_n ; \\
(0,0,1), & PC_n=PC^3_n .\\
\end{cases}
\end{eqnarray*}
By the definition of the random polygonal chain, we can check that $\mathbb{Z} _k$ follows the categorical distribution $C(p_1,\ p_2,\ p_3)$ and that $\mathbb{Z} _3,\ \mathbb{Z} _4,\ \ldots ,\ \mathbb{Z} _n$ are independent.
For each $k=3,\ 4,\ \ldots,\ n$, $SO_a(PC_k)$ can be quantified as
\begin{equation}
SO_a(PC_k)=\Bigl(SO_a(PC^1_k),\ SO_a(PC^2_k),\ SO_a(PC^3_k)\Bigr)\mathbb{Z}_k .
\end{equation}
By the definition of Sombor indices, for each $k=3,\ 4,\ \ldots,\ n$, and $i=1,\ 2,\ 3$, we have
\begin{equation*}
SO_a(PC^i_k)-SO_a(PC_{k-1})=SO_a(PC^i_3)-SO_a(PC_2) ,
\end{equation*}
We denote $A_i=SO_a(PC^i_3)-SO_a(PC_2),\ i=1,\ 2,\ 3$. Then
\begin{equation}
SO_a(PC^i_k)=SO_a(PC_{k-1})+A_i,\ i=1,\ 2,\ 3.
\end{equation}
Combining (2.2) with (2.3), $SO_a(PC_k)$ satisfies the following recursive relation
\begin{equation}
SO_a(PC_k)=SO_a(PC_{k-1})+(A_1,\ A_2,\ A_3)\mathbb{Z} _k,
\end{equation}
Iterating $(2.4)$ from $k=3$ to $n$ gives
\begin{equation}
SO_a(PC_n)=(A_1,A_2,A_3)\mathbf{X} +SO_a(PC_2),\ \mathbf{X} =(X_1,\ X_2,\ X_3)^T=\sum_{k = 3}^{n} \mathbb{Z} _k,
\end{equation}
where $\mathbf{X} $ follows the multinomial distribution $M (n-2,\ p_1,\ p_2,\ p_3)$ by Proposition 2.1.
Thus, we obtain
\begin{align*}
A_1&=SO_a(PC^1_3)-SO_a(PC_2) ,\\
A_2&=SO_a(PC^2_3)-SO_a(PC_2)=A_3.
\end{align*}
By (2.5), we have
\begin{eqnarray*}
SO_a(PC_n)
&=&(A_1,A_2,A_2)\mathbf{X} +SO_a(PC_2)\\
&=&(A_1-A_2)(1,0,0)\mathbf{X} +A_2(1,1,1)\mathbf{X} +SO_a(PC_2)\\
&=&(A_1-A_2)X_1+A_2(n-2)+SO_a(PC_2),
\end{eqnarray*}
where $X_1$ follows the binomial distribution $B(n-2,\ p_1)$ by Proposition 2.2.
Put $A=A_1-A_2,\ B=A_2,\ C=SO_a(PC_2)$.
Applying Lemma 2.3, we obtain
\begin{equation*}
\mathbb{E} \bigl(SO_a(PC_n)\bigr)=(p_1A+B)(n-2)+C ,\ \mathbb{V} \mathbf{a} \mathbf{r} \bigl(SO_a(PC_n)\bigr) =A^2(n-2)p_1(1-p_1) .
\end{equation*}
\end{proof}
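The linear structure of Theorem 2.4 can be sanity-checked numerically, independently of any chain-specific constants. The following Python sketch is our own illustration (the values of $A$, $B$, $C$, $n$, $p_1$ are arbitrary); it computes the exact mean and variance of $A X + B(n-2) + C$ for $X \sim B(n-2,\ p_1)$ by summing over the binomial probability mass function, which can then be compared against $(p_1A+B)(n-2)+C$ and $A^2(n-2)p_1(1-p_1)$:

```python
from math import comb

def moments_of_linear_binomial(n, p1, A, B, C):
    """Exact E and Var of A*X + B*(n-2) + C with X ~ B(n-2, p1),
    computed by summing over the binomial pmf."""
    m = n - 2
    pmf = [comb(m, x) * p1**x * (1 - p1)**(m - x) for x in range(m + 1)]
    vals = [A * x + B * m + C for x in range(m + 1)]
    mean = sum(q * v for q, v in zip(pmf, vals))
    var = sum(q * (v - mean) ** 2 for q, v in zip(pmf, vals))
    return mean, var

# arbitrary illustrative parameters
n, p1, A, B, C = 10, 0.3, 2.5, -1.0, 4.0
mean, var = moments_of_linear_binomial(n, p1, A, B, C)
# Theorem 2.4 predicts: mean = (p1*A + B)(n-2) + C, var = A^2 (n-2) p1 (1-p1)
```

This confirms only the binomial linear-transform structure; the chain-specific increments $A_1$, $A_2$ still have to be derived case by case as in the corollaries below.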
\section{The distribution for $SO_a(\mathbb{G} _n)$}
\ \ \ \ We have seen that the $SO_a(PC_n)$ are random variables. The distributions of $SO_a(PC_n)$ for a $(2k+1)$-polygonal chain and for a $2k$-polygonal chain are presented in this section.
\begin{thm}
Let $\mathbb{G} _n$ $(n>2)$ be a random polygonal chain with connection constants $\{ A_i\} ^k_{i=1}$ for the Sombor indices, as in Definition $2.1$.
Then
\begin{equation*}
SO_a(\mathbb{G} _n)=\mathbf{A} ^T \mathbf{X}+SO_a(\mathbb{G} _2) ,
\end{equation*}
$where$ $\mathbf{A} ^T=(A_1,\ A_2,\ \ldots,\ A_k)$ $and$ $\mathbf{X}$ follows $M(n-2,\ \mathbf{p} )$ with $\mathbf{p} =(p_1,\ p_2,\ \ldots,\ p_k)^T$.
\begin{proof}
Combining $(2.2)$, $(2.3)$ and $(2.4)$, equation $(2.5)$ is obtained.
\end{proof}
\end{thm}
\begin{cor}
Let $PC_n$ be a random $(2k+1)$-polygonal chain of length $n$ and $X\sim B(n-2,\ p_1)$, where $k\geqslant 2$ and $n>2$. Then $SO_a(PC_n)=AX+Bn+C$, where
\begin{eqnarray*}
A&=&2\sqrt{2} \left\lvert 2-a\right\rvert+\sqrt{2}\left\lvert 3-a\right\rvert +2v_1, \\
B&=&\sqrt{2}(2k-3)\left\lvert 2-a\right\rvert +\sqrt{2} \left\lvert 3-a\right\rvert +v_1 , \\
C&=&4\sqrt{2} \left\lvert 2-a\right\rvert -\sqrt{2} \left\lvert 3-a\right\rvert +8v_1 .
\end{eqnarray*}
Specifically, we have
\begin{eqnarray*}
SO(PC_n)&=&(6\sqrt{2} +2\sqrt{13} )X+(4\sqrt{2}k-3\sqrt{2}+\sqrt{13} )n+5\sqrt{2}+8\sqrt{13} ,\\
SO_{red}(PC_n)&=&(4\sqrt{2} +2\sqrt{5} )X+(2\sqrt{2}k -\sqrt{2}+\sqrt{5} )n+3\sqrt{2}+8\sqrt{5} ,\\
SO_{avr}(PC_n)&=&A_1X+B_1n+C_1,
\end{eqnarray*}
where
\begin{eqnarray*}
A_1&=&\frac{\sqrt{2}(2k+3) +2\mu _1}{n(2k+1)} ,\\
B_1&=&\frac{6\sqrt{2} nk-7\sqrt{2}n+4\sqrt{2} k-8\sqrt{2} +\mu _1}{n(2k+1)} ,\\
C_1&=&\frac{-4\sqrt{2}nk+9\sqrt{2} n+11\sqrt{2} +8\mu _1 }{n(2k+1)} ,\\
v_1&=&\sqrt{2a^2-10a+13} ,\\
\mu _1&=&\sqrt{4n^2k^2-4n^2k-8nk+5n^2+12n+8} .
\end{eqnarray*}
\end{cor}
\begin{proof}
According to the definition of the Sombor indices, we can obtain the Sombor indices of a random $(2k+1)$-polygonal chain. The specific computations are as follows.
\begin{eqnarray*}
SO_a(PC_2)&=&(4k-2)\sqrt{2} \left\lvert 2-a\right\rvert +\sqrt{2} \left\lvert 3-a\right\rvert +4v_1,\\
SO_a(PC^1_3)&=&(6k-4)\sqrt{2}\left\lvert 2-a\right\rvert +3\sqrt{2} \left\lvert 3-a\right\rvert +6v_1 ,\\
SO_a(PC^2_3)&=&(6k-5)\sqrt{2} \left\lvert 2-a\right\rvert +2\sqrt{2} \left\lvert 3-a\right\rvert+8v_1 ,\\
&\vdots & \\
SO_a(PC^k_3)&=&(6k-5)\sqrt{2} \left\lvert 2-a\right\rvert +2\sqrt{2} \left\lvert 3-a\right\rvert+8v_1,
\end{eqnarray*}
then relevant variables of $PC_n$ are derived by
\begin{eqnarray*}
A_1&=&SO_a(PC^1_3)-SO_a(PC_2)\\
&=&(2k-2)\sqrt{2} \left\lvert 2-a\right\rvert +2\sqrt{2} \left\lvert 3-a\right\rvert +2v_1,\\
A_2&=&SO_a(PC^2_3)-SO_a(PC_2)\\
&=&(2k-3)\sqrt{2} \left\lvert 2-a\right\rvert +\sqrt{2} \left\lvert 3-a\right\rvert +4v_1 ,
\end{eqnarray*}
where $v_1=\sqrt{2a^2-10a+13}$.
Applying (2.3), one obtains
\begin{equation*}
SO_a(PC_n)=(A_1-A_2)X+A_2n-2A_2+SO_a(PC_2),\ X\sim B(n-2,\ p_1).
\end{equation*}
The first part follows immediately. Since $\left\lvert V(PC_n)\right\rvert =n(2k+1)$ and $\left\lvert E(PC_n)\right\rvert =2n(k+1)-1$, the average degree of $PC_n$ is $\frac{4n(k+1)-2}{n(2k+1)}$. The proof is completed by putting $a=0,\ 1,\ \frac{4n(k+1)-2}{n(2k+1)}$ into the equation, respectively.
\end{proof}
\begin{cor}
Let $PC_n$ be a random $2k$-polygonal chain of length $n$ and $X\sim B(n-2,\ p_1)$, where $k\geqslant 2$ and $n>2$. Then $SO_a(PC_n)=AX+Bn+C$, where
\begin{eqnarray*}
A&=&2\sqrt{2} \left\lvert 2-a\right\rvert-2v_1 , \\
B&=&\sqrt{2} ((2k-3)\left\lvert 2-a\right\rvert +\left\lvert 3-a\right\rvert )+4v_1 , \\
C&=&\sqrt{2} \left\lvert 2-a\right\rvert -\sqrt{2} \left\lvert 3-a\right\rvert -4v_1 .
\end{eqnarray*}
In particular, we have
\begin{eqnarray*}
SO(PC_n)&=&(2\sqrt{2} -2\sqrt{13} )X+(4\sqrt{2}k+4\sqrt{13}-3\sqrt{2} )n-\sqrt{2}-4\sqrt{13} ,\\
SO_{red}(PC_n)&=&(\sqrt{2} -2\sqrt{5} )X+(2k-1)\sqrt{2}n+4\sqrt{5}n -\sqrt{2} k-4\sqrt{5} ,\\
SO_{avr}(PC_n)&=&A_2X+B_2n+C_2,
\end{eqnarray*}
where
\begin{eqnarray*}
A_2&=&\frac{2\sqrt{2}n-2\mu _2-2\sqrt{2} }{nk} , \\
B_2&=&\frac{3\sqrt{2}nk-2\sqrt{2}k-4\sqrt{2} n+4\sqrt{2} +4\mu _2 }{nk} ,\\
C_2&=&\frac{-\sqrt{2} nk+2\sqrt{2}n-2\sqrt{2} -4\mu _2 }{nk} ,\\
v_1&=&\sqrt{2a^2-10a+13},\\
\mu _2&=&\sqrt{n^2k^2-2n^2k+2nk+2n^2-4n+2} .
\end{eqnarray*}
\end{cor}
\begin{proof}
According to the definition of Sombor indices, we can obtain the Sombor indices of a random $2k$-polygonal chain.
\begin{eqnarray*}
SO_a(PC_2)&=&\sqrt{2} \Bigl((4k-5)\left\lvert 2-a\right\rvert +\left\lvert 3-a\right\rvert \Bigr) +4v_1 ,\\
SO_a(PC^1_3)&=&\sqrt{2} \Bigl((6k-7)\left\lvert 2-a\right\rvert +3\left\lvert 3-a\right\rvert \Bigr)+6v_1 ,\\
SO_a(PC^2_3)&=&\sqrt{2} \Bigl((6k-8)\left\lvert 2-a\right\rvert +2\left\lvert 3-a\right\rvert \Bigr) +8v_1 ,\\
&\vdots & \\
SO_a(PC^k_3)&=&\sqrt{2} \Bigl((6k-8)\left\lvert 2-a\right\rvert +2\left\lvert 3-a\right\rvert \Bigr) +8v_1 ,
\end{eqnarray*}
then relevant variables of $PC_n$ are given by
\begin{eqnarray*}
A_1&=&SO_a(PC^1_3)-SO_a(PC_2)\\
&=&\sqrt{2} \Bigl((2k-2)\left\lvert 2-a\right\rvert +\left\lvert 3-a\right\rvert \Bigr)+2v_1 ,\\
A_2&=&SO_a(PC^2_3)-SO_a(PC_2)\\
&=&\sqrt{2} \Bigl((2k-3)\left\lvert 2-a\right\rvert +\left\lvert 3-a\right\rvert \Bigr)+4v_1 .
\end{eqnarray*}
Applying (2.3), we obtain
\begin{equation*}
SO_a(PC_n)=(A_1-A_2)X+A_2n-2A_2+SO_a(PC_2),\ X\sim B(n-2,\ p_1).
\end{equation*}
Setting $A=A_1-A_2,\ B=A_2$ and $C=-2A_2+SO_a(PC_2)$, we promptly get the first part. Since $\left\lvert V(PC_n)\right\rvert =2nk$ and $\left\lvert E(PC_n)\right\rvert =2nk+n-1$, the average degree $\frac{2m}{n}$ of $PC_n$ is $\frac{2nk+n-1}{nk} $. The proof is performed directly by setting $a=0,\ 1,\ \frac{2nk+n-1}{nk}$.
\end{proof}
\section{The expected values and variances for $SO_a(\mathbb{G} _n)$}
\ \ \ \ The exact analytical equations of $\mathbb{E} (SO_a(PC_n))$ and $\mathbb{V} \mathbf{a} \mathbf{r} (SO_a(PC_n))$ of a $(2k+1)$-polygonal chain and a $2k$-polygonal chain are discussed in this section, respectively.
\begin{thm}
Let $\mathbb{G} _n$ be a random polygonal chain. The expected value and variance of $SO_a(\mathbb{G} _n)$ are given by
\begin{eqnarray*}
\mathbb{E} \bigl(SO_a(\mathbb{G} _n)\bigr)&=&\left(\sum_{i = 1}^{k} A_i p_i \right)(n-2)+SO_a(\mathbb{G} _2) ,\\
\mathbb{V} \mathbf{a} \mathbf{r} \bigl(SO_a(\mathbb{G} _n)\bigr) &=& \left(\sum_{i = 1}^{k} A^2_i p_i-\Bigl(\sum_{i = 1}^{k}A_i p_i\Bigr)^2 \right)(n-2) .
\end{eqnarray*}
\begin{proof}
By Theorem 3.1, $SO_a(\mathbb{G} _n)=\mathbf{A} ^T\mathbf{X} +SO_a(\mathbb{G} _2)$, where $\mathbf{X} \sim M(n-2,\mathbf{p} )$ and $\mathbf{p} =(p_1,\ p_2,\ \ldots,\ p_k)^T. $
By Lemma 2.3, we obtain
\begin{eqnarray*}
\mathbb{E} \Bigl(SO_a(\mathbb{G} _n)\Bigl)&=&E(\mathbf{A}^T\mathbf{X}+SO_a(\mathbb{G} _2)) \\
&=&\mathbf{A}^T E(\mathbf{X} )+SO_a(\mathbb{G} _2) \\
&=&(\sum_{i = 1}^{k} A_i p_i )(n-2)+SO_a(\mathbb{G} _2),\\
\mathbb{V} \mathbf{a} \mathbf{r} \Bigl(SO_a(\mathbb{G} _n)\Bigl)&=&Var(\mathbf{A}^T\mathbf{X}+SO_a(\mathbb{G} _2))\\
&=&\mathbf{A}^T Var(\mathbf{X})\mathbf{A}\\
&=&\Bigl(\sum_{i = 1}^{k} A^2_i p_i-\Bigl(\sum_{i=1}^{k}A_i p_i\Bigr)^2\Bigr) (n-2).
\end{eqnarray*}
\end{proof}
\end{thm}
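The expected value and variance formulas of Theorem 4.1 can likewise be verified exactly for small cases by enumerating all $(n-2)$-step outcome sequences. The sketch below is our own illustration with arbitrary values for $\mathbf{p}$ and the increments $A_i$; it computes the exact mean and variance of $\mathbf{A}^T\mathbf{X}$ by brute force and can be compared against $(\sum_i A_i p_i)(n-2)$ and $(\sum_i A_i^2 p_i-(\sum_i A_i p_i)^2)(n-2)$:

```python
import itertools
import math

def exact_moments_ATX(n, p, A):
    """Exact E and Var of A^T X with X ~ M(n-2, p), obtained by
    enumerating every (n-2)-step outcome of the chain construction."""
    m, k = n - 2, len(p)
    mean = second = 0.0
    for seq in itertools.product(range(k), repeat=m):
        prob = math.prod(p[c] for c in seq)
        s = sum(A[c] for c in seq)      # value of A^T X for this outcome
        mean += prob * s
        second += prob * s * s
    return mean, second - mean ** 2

# arbitrary illustrative parameters (note A[1] == A[2], as in Theorem 2.4)
n, p, A = 6, [0.5, 0.3, 0.2], [1.0, 4.0, 4.0]
mean, var = exact_moments_ATX(n, p, A)
```

For these parameters, $\sum_i A_i p_i = 2.5$ and $\sum_i A_i^2 p_i = 8.5$, so the theorem predicts a mean of $4 \times 2.5 = 10$ and a variance of $4 \times (8.5 - 6.25) = 9$.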
\begin{cor}
Let $PC_n$ be a random $(2k+1)$-polygonal chain of length $n$, where $k\geqslant 2$ and $n>2$. Then
\begin{equation*}
\mathbb{E} \Bigl(SO_a(PC_n)\Bigl)=Mn+N,\ \mathbb{V} \mathbf{a} \mathbf{r} \Bigl(SO_a(PC_n)\Bigl)=Pn+Q,
\end{equation*}
where
\begin{eqnarray*}
M&=&\sqrt{2}\left\lvert 2-a\right\rvert(p_1+2k-3)+\sqrt{2}(1+p_1)\left\lvert 3-a\right\rvert +(4-2p_1)v_1 , \\
N&=&(-2\sqrt{2}p_1-4\sqrt{2} k+10\sqrt{2}) \left\lvert 2-a\right\rvert -3\sqrt{2}\left\lvert 3-a\right\rvert +4p_1v_1 , \\
A^2&=&3v_1^2+2\left\lvert 2-a\right\rvert \left\lvert 3-a\right\rvert -2\sqrt{2} (\left\lvert 2-a\right\rvert +\left\lvert 3-a\right\rvert ),\\
P&=&A^2p_1(1-p_1) ,\\
Q&=&-2A^2p_1(1-p_1),\\
v_1&=&\sqrt{2a^2-10a+13} .
\end{eqnarray*}
In particular, we have
\begin{eqnarray}
\begin{aligned}
\mathbb{E} \Bigl(SO(PC_n)\Bigr)
&=\Bigl((5\sqrt{2}-2\sqrt{13} )p_1+4\sqrt{2}k-3\sqrt{2}+4\sqrt{13}\Bigr)n \\
&+(4\sqrt{13}-4\sqrt{2})p_1-8\sqrt{2}k+11\sqrt{2} ,
\end{aligned}
\end{eqnarray}
\begin{eqnarray}
\begin{aligned}
\mathbb{E} \Bigl(SO_{red}(PC_n) \Bigr)
&=\Bigl((3\sqrt{2}-2\sqrt{5})p_1+2\sqrt{2}k+4\sqrt{5} -\sqrt{2}\Bigr)n \\
&+(4\sqrt{5} -2\sqrt{2} ) p_1-4\sqrt{2}k+4\sqrt{2} ,
\end{aligned}
\end{eqnarray}
\begin{eqnarray}
\mathbb{E} \Bigl(SO_{avr}(PC_n)\Bigl)=M_1n+N_1,
\end{eqnarray}
where
\begin{eqnarray*}
M_1&=&\frac{\sqrt{2} np_1(2k+1)+\sqrt{2}n(6k-7)+4\sqrt{2}(k-2)+(4-2p_1)\mu _1}{n(2k+1)} ,\\
N_1&=&\frac{-4\sqrt{2}np_1-14\sqrt{2} nk+(4\mu _1-4\sqrt{2} )p_1-8\sqrt{2} +23\sqrt{2} n+26\sqrt{2} }{n(2k+1)} ,\\
v_1&=&\sqrt{2a^2-10a+13} ,\\
\mu _1&=&\sqrt{4n^2k^2-4n^2k-8nk+5n^2+12n+8} .
\end{eqnarray*}
\begin{eqnarray}
\mathbb{V} \mathbf{a} \mathbf{r} \Bigl(SO(PC_n)\Bigl)
&=&(16\sqrt{26}+84 )p_1(1-p_1)(n-2),
\end{eqnarray}
\begin{eqnarray}
\mathbb{V} \mathbf{a} \mathbf{r}\bigl(SO_{red}(PC_n)\bigr)
&=&(8\sqrt{10}+28 )(n-2)p_1(1-p_1),
\end{eqnarray}
\begin{eqnarray}
\mathbb{V} \mathbf{a} \mathbf{r}\bigl(SO_{avr}(PC_n)\bigr) =\widetilde{\sigma } (n-2)p_1(1-p_1),
\end{eqnarray}
where
\begin{eqnarray*}
\widetilde{\sigma }
=\frac{16(\sqrt{2} +1)\sqrt{4n^2k^2-4n^2k-8nk+5n^2+12n+8} }{n^2(2k+1)^2}
-\frac{64(4n^2k^2-4n^2k+2n+1)}{n^2(2k+1)^2} +84.
\end{eqnarray*}
\end{cor}
\begin{cor}
Let $PC_n$ be a random $(2k)$-polygonal chain of length $n$, where $k\geqslant 2$ and $n>2$. Then
\begin{equation*}
\mathbb{E} \Bigl(SO_a(PC_n)\Bigl)=Mn+N,\ \mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(PC_n)\Bigl)=Pn+Q,
\end{equation*}
where
\begin{eqnarray*}
M&=&\sqrt{2}\Bigl((p_1+2k-3)\left\lvert 2-a\right\rvert+\left\lvert 3-a\right\rvert \Bigr) +(4-2p_1)v_1 , \\
N&=&\sqrt{2}\left\lvert 2-a\right\rvert \left(-2p_1-4k+7\right) -3\sqrt{2} \left\lvert 3-a\right\rvert -2\sqrt{2} (4-2p_1)v_1 , \\
P&=&\Bigl(16a^2-72a+84-8\sqrt{2}v_1\left\lvert 2-a\right\rvert\Bigl)p_1(1-p_1) ,\\
Q&=&-2P.
\end{eqnarray*}
In particular, we have
\begin{eqnarray}
\begin{aligned}
\mathbb{E} \Bigl(SO(PC_n)\Bigl)
&=\Bigl(2p_1(\sqrt{2}-\sqrt{13} )+4\sqrt{2}k-4\sqrt{13}-3\sqrt{2}\Bigl)n\\
&+4p_1(\sqrt{26}-\sqrt{2} ) +8\sqrt{2}k+5\sqrt{2}-8\sqrt{26} ,
\end{aligned}
\end{eqnarray}
\begin{eqnarray}
\begin{aligned}
\mathbb{E} \Bigl(SO_{red}(PC_n)\Bigl)
&=\Bigl(\sqrt{2} (p_1+2k-1)+(4-2p_1)\sqrt{5} \Bigl) n\\
&-2\sqrt{2} p_1+4\sqrt{10} p_1-4\sqrt{2} k-8\sqrt{10} +\sqrt{2} ,
\end{aligned}
\end{eqnarray}
\begin{eqnarray}
\begin{aligned}
\mathbb{E} \Bigl(SO_{avr}(PC_n)\Bigl)
&=\frac{-2\sqrt{2}np_1+2\sqrt{2}p_1+8\sqrt{2}n-6\sqrt{2}}{k} \\
&+\frac{4np_1-8n-4}{nk} \mu _2 ,
\end{aligned}
\end{eqnarray}
\begin{equation}
\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO(PC_n)\Bigl) =(84-16\sqrt{26} )p_1(1-p_1)(n-2),
\end{equation}
\begin{equation}
\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_{red}(PC_n)\Bigl) =(28-16\sqrt{10} )(n-2)p_1(1-p_1),
\end{equation}
\begin{eqnarray}
\begin{aligned}
\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_{avr}(PC_n)\Bigl)
&=\frac{-80n^2k^2-8n^2k+8nk+16n^2-32n+16}{n^2k^2} \\
&+\frac{-8\sqrt{2} (n-1)\mu _2 }{n^2k^2} ,
\end{aligned}
\end{eqnarray}
where
\begin{eqnarray*}
v_1&=&\sqrt{2a^2-10a+13} ,\\
\mu _2&=&\sqrt{n^2 k^2-2n^2 k+2nk+2n^2-4n+2} .
\end{eqnarray*}
\end{cor}
\section{Application of Corollaries 4.2 and 4.3}
\ \ \ \ The expected values and variances of topological indices of random chains have received much attention in scientific research, covering random polyomino chains, random pentagonal chains, random polyphenyl chains, and random cyclooctane chains. This section studies the distributions of the Sombor indices of these polygonal chains as well as their expected values and variances. Following the common approach presented in Corollaries 4.2 and 4.3, we have the following analysis.
A random polyomino chain is a random modified domino chain with $n$ squares\cite{WSX}, represented by $GPC_n(n;\ p_1,\ 1-p_1)$. Due to the numerous intriguing combinatorial subjects that arise from them, for example, domination problems\cite{CEJ,HSC} and enumeration problems\cite{VLG,LNH} with perfect matchings, generalized domino graphs have aroused the curiosity of many mathematicians.
In a random polyomino chain, there are two sorts of local arrangements, by the definition of a random $l$-polygon chain (see Fig 3). Based on the calculation of the Sombor indices for a random polygonal chain presented in Section 3, the general formula of the Sombor indices of $GPC_n$ is obtained by
\begin{eqnarray*}
SO_a(GPC_n)
&=&(2\sqrt{2}\left\lvert 2-a\right\rvert -2v_1 )X\\
&+&\Bigl(\sqrt{2} (\left\lvert 2-a\right\rvert+\left\lvert 3-a\right\rvert)+4v_1\Bigr) n\\
&+&\sqrt{2} \left\lvert 2-a\right\rvert -\sqrt{2} \left\lvert 3-a\right\rvert -4v_1 ,
\end{eqnarray*}
where $v_1=\sqrt{2a^2-10a+13}$.
\begin{figure}[htbp]
\centering\includegraphics[width=13cm,height=9cm]{4.jpg}
\caption{Two sorts of local permutations in a random polyomino chain.}
\end{figure}
\begin{figure}[htbp]
\centering\includegraphics[width=13cm,height=10cm]{5.jpg}
\caption{Two sorts of local permutations in a random pentachain.}
\end{figure}
\begin{thm}
The expected value and variance of the Sombor indices for $GPC_n$ are given by
\begin{eqnarray*}
\mathbb{E} \Bigl(SO_a(GPC_n)\Bigr)
&=&\Bigl(\sqrt{2}(1+p_1)\left\lvert 2-a\right\rvert+\sqrt{2}\left\lvert 3-a\right\rvert+(4-2p_1)v_1\Bigr)n\\
&+&\sqrt{2} (-2p_1-1)\left\lvert 2-a\right\rvert -3\sqrt{2} \left\lvert 3-a\right\rvert -2\sqrt{2} (4-2p_1)v_1 ,
\end{eqnarray*}
\begin{eqnarray*}
\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(GPC_n)\Bigr)
=\Bigl(16a^2 -72a+84-8\sqrt{2}v_1\left\lvert 2-a\right\rvert \Bigr)(n-2)p_1(1-p_1),
\end{eqnarray*}
where $v_1=\sqrt{2a^2-10a+13}$.
\end{thm}
A random pentachain $P(n;\ p_1,\ 1 - p_1)$ of length $n$ consists of $n$ pentagons\cite{HOO,RNP}; we label it by $P_n$.
According to the concept of a random $l$-polygon chain, a random pentachain has two sorts of local permutations (see Fig 4). Based on the calculation of the Sombor indices for a random polygonal chain presented in Section 3, the general formula of the Sombor indices of $P_n$ is obtained by
\begin{eqnarray*}
SO_a(P_n)
&=&\Bigl(2\sqrt{2}\left\lvert 2-a\right\rvert+\sqrt{2} \left\lvert 3-a\right\rvert +2v_1\Bigr)X\\
&+&\Bigl(\sqrt{2} \left\lvert 2-a\right\rvert +\sqrt{2} \left\lvert 3-a\right\rvert +v_1\Bigr) n\\
&+&4\sqrt{2} \left\lvert 2-a\right\rvert -\sqrt{2} \left\lvert 3-a\right\rvert +8v_1 ,
\end{eqnarray*}
where $v_1=\sqrt{2a^2-10a+13}$.
\begin{thm}
The expected value and variance of the Sombor indices for $P_n$ are given by
\begin{eqnarray*}
\mathbb{E} \Bigl(SO_a(P_n)\Bigr)
&=&\Bigl(\sqrt{2} ((1+p_1)\left\lvert 2-a\right\rvert+\left\lvert 3-a\right\rvert )+(4-2p_1)v_1\Bigr) n\\
&+&(2\sqrt{2}-2\sqrt{2}p_1 )\left\lvert 2-a\right\rvert -3\sqrt{2} \left\lvert 3-a\right\rvert +4p_1v_1 ,
\end{eqnarray*}
\begin{eqnarray*}
\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(P_n)\Bigr)
=A^2(n-2)p_1(1-p_1),
\end{eqnarray*}
where
\begin{eqnarray*}
A^2&=&3v_1^2+2\left\lvert 2-a\right\rvert\left\lvert 3-a\right\rvert -2\sqrt{2} \Bigl(\left\lvert 2-a\right\rvert +\left\lvert 3-a\right\rvert \Bigr) ,\\
v_1&=&\sqrt{2a^2-10a+13}.
\end{eqnarray*}
\end{thm}
A random polyphenyl chain of length $n$, with $n$ hexagons, is labeled by $PPC_n(n;\ p_1,\ p_2,\ 1-p_1-p_2)$.
Polyphenyl chains are simply graphs of unbranched hydrocarbons with modifications that are widely seen in chemical synthesis, medicinal synthesis, and heat exchangers, and they have long aroused the curiosity of chemists\cite{YWF,HGM}.
There are three types of configurations according to the random polygonal chain concept, as shown in Fig 5. Based on the calculation of the Sombor indices for a random polygonal chain presented in Section 3, the general formula of the Sombor indices of $PPC_n$ is obtained by
\begin{eqnarray*}
SO_a(PPC_n)
&=&\Bigl(2\sqrt{2}\left\lvert 2-a\right\rvert-2v_1\Bigr) X\\
&+&\Bigl(3\sqrt{2} \left\lvert 2-a\right\rvert +\sqrt{2} \left\lvert 3-a\right\rvert +4v_1\Bigr) n \\
&+&\sqrt{2} \left\lvert 2-a\right\rvert -\sqrt{2} \left\lvert 3-a\right\rvert -4v_1 ,
\end{eqnarray*}
where $v_1=\sqrt{2a^2-10a+13}$.
\begin{thm}
The expected value and variance of the Sombor indices for $PPC_n$ are derived by
\begin{eqnarray*}
\mathbb{E} \Bigl(SO_a(PPC_n)\Bigr)
&=&\Bigl(\sqrt{2} ((3+p_1)\left\lvert 2-a\right\rvert+ \left\lvert 3-a\right\rvert )+(4-2p_1)v_1\Bigr) n\\
&-&5\left\lvert 2-a\right\rvert -3\sqrt{2} \left\lvert 3-a\right\rvert -2\sqrt{2} (4-2p_1)v_1 ,
\end{eqnarray*}
\begin{eqnarray*}
\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(PPC_n)\Bigr)
=\Bigl(16a^2-72a+84-8\sqrt{2} v_1\left\lvert 2-a\right\rvert \Bigr)(n-2)p_1(1-p_1).
\end{eqnarray*}
\end{thm}
\begin{figure}[htbp]
\centering\includegraphics[width=13cm,height=6cm]{6.jpg}
\caption{Three sorts of permutations in a random polyphenyl chain.}
\end{figure}
\begin{figure}[htbp]
\centering\includegraphics[width=13cm,height=10cm]{8.jpg}
\caption{Four sorts of permutations in a random cyclooctane chain.}
\end{figure}
The random $8$-polygonal chain $COC_n(n;\ p_1,\ p_2,\ p_3,\ 1-p_1-p_2-p_3)$ is a random cyclooctane chain with $n$ octagons.
Aromatic hydrocarbons and their derivatives have constantly attracted the interest of chemists\cite{WSXK,HJM}. According to the definition of a random $l$-polygon chain, there are four sorts of permutations in random cyclooctane chains (see Fig 6).
According to the calculation of the Sombor indices for a random polygonal chain presented in Section 3, the generalized formula of the Sombor indices of $COC_n$ is obtained by
\begin{eqnarray*}
SO_a(COC_n)
&=&(2\sqrt{2}\left\lvert 2-a\right\rvert-2v_1) X+(5\sqrt{2} \left\lvert 2-a\right\rvert+\sqrt{2} \left\lvert 3-a\right\rvert +4v_1) n\\
&+&\sqrt{2} \left\lvert 2-a\right\rvert -\sqrt{2} \left\lvert 3-a\right\rvert -4v_1 ,
\end{eqnarray*}
where $v_1=\sqrt{2a^2-10a+13}$.
\begin{thm}
The expected value and variance of the Sombor indices for $COC_n$ are obtained by
\begin{eqnarray*}
\mathbb{E} \Bigl(SO_a(COC_n)\Bigr)
&=&\Bigl(\sqrt{2} \bigl((5+p_1)\left\lvert 2-a\right\rvert+(1+p_1)\left\lvert 3-a\right\rvert \bigr)+(4-2p_1)v_1\Bigr) n\\
&-&\sqrt{2}(2p_1+6) \left\lvert 2-a\right\rvert -3\sqrt{2} \left\lvert 3-a\right\rvert -4p_1v_1 ,
\end{eqnarray*}
\begin{eqnarray*}
\mathbb{V} \mathbf{a} \mathbf{r} \Bigl(SO_a(COC_n)\Bigr)
=\Bigl(16a^2-72a+84-8\sqrt{2} v_1\left\lvert 2-a\right\rvert \Bigr)(n-2)p_1(1-p_1),
\end{eqnarray*}
where $v_1=\sqrt{2a^2-10a+13}$.
\end{thm}
\section{Asymptotic behaviors for $SO_a (G_n)$}
\ \ \ \ The distributions of the Sombor indices of random polygonal chains were presented in Section 3, with their expected values and variances in Corollaries $4.2$ and $4.3$. In addition, the preceding section applied these results to four specific polygonal chains. Therefore, in this part, we study the asymptotic behavior of the Sombor indices of those random chains.
Under specific conditions, the normal distribution is a good approximation of the binomial distribution and can be used to calculate the probability of the binomial distribution. Since the probability thus obtained is only an approximation to the true probability value of the binomial distribution, this application of the normal distribution is known as the normal approximation to the binomial distribution.
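As a quick numerical illustration of this approximation (with arbitrary illustrative parameters, not values from the paper), one can compare the exact binomial cumulative distribution function with the normal one, including the usual continuity correction:

```python
from math import comb, erf, sqrt

def binom_cdf(n, p, x):
    """Exact P(X <= x) for X ~ B(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(x + 1))

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

n, p = 200, 0.4                        # arbitrary illustrative parameters
x = 85                                 # evaluate P(X <= 85)
exact = binom_cdf(n, p, x)
approx = normal_cdf((x + 0.5 - n * p) / sqrt(n * p * (1 - p)))  # continuity correction
gap = abs(exact - approx)              # shrinks as n grows
```

The gap between the two values is small for large $n$, consistent with Theorem 6.1.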
\begin{thm}
Let $X_n$ follow the binomial distribution $B(n,\ p)$. As $n \to \infty$, the standardised $X_n$ asymptotically follows the normal distribution. One has
\begin{eqnarray*}
\lim_{n \to \infty} \sup _{a\in \mathbb{R} } \left\lvert \mathbb{P} (\frac{X_n-np}{\sqrt{np(1-p)} }\leqslant a ) -\int_{-\infty}^{a} \frac{1}{\sqrt{2\pi } } e^{-\frac{t^2}{2} } \,dt\right\rvert=0.
\end{eqnarray*}
\end{thm}
Theorem 6.1 leads to the following conclusion.
\begin{prop}
Let $PC_n$, $GPC_n$, $P_n$, $PPC_n$, and $COC_n$ be a random polygonal chain, polyomino chain, pentachain, polyphenyl chain, and cyclooctane chain, respectively.
For each $x\in \mathbb{R},\ n \to \infty$, we have
\begin{eqnarray}
\lim_{n \to \infty} \sup _{x\in \mathbb{R} } \left\lvert \mathbb{P} (\frac{SO_a(PC_n)-\mathbb{E} \Bigl(SO_a(PC_n)\Bigr)}{\sqrt{\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(PC_n)\Bigr)} }\leqslant x ) -\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi } } e^{-\frac{t^2}{2} } \,dt\right\rvert=0 ,
\end{eqnarray}
\begin{eqnarray}
\lim_{n \to \infty} \sup _{x\in \mathbb{R} } \left\lvert \mathbb{P} (\frac{SO_a(GPC_n)-\mathbb{E} \Bigl(SO_a(GPC_n)\Bigr)}{\sqrt{\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(GPC_n)\Bigr)} }\leqslant x ) -\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi } } e^{-\frac{t^2}{2} } \,dt\right\rvert=0 ,
\end{eqnarray}
\begin{eqnarray}
\lim_{n \to \infty} \sup _{x\in \mathbb{R} } \left\lvert \mathbb{P} (\frac{SO_a(P_n)-\mathbb{E} \Bigl(SO_a(P_n)\Bigr)}{\sqrt{\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(P_n)\Bigr)} }\leqslant x ) -\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi } } e^{-\frac{t^2}{2} } \,dt\right\rvert=0 ,
\end{eqnarray}
\begin{eqnarray}
\lim_{n \to \infty} \sup _{x\in \mathbb{R} } \left\lvert \mathbb{P} (\frac{SO_a(PPC_n)-\mathbb{E} \Bigl(SO_a(PPC_n)\Bigr)}{\sqrt{\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(PPC_n)\Bigr)} }\leqslant x ) -\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi } } e^{-\frac{t^2}{2} } \,dt\right\rvert=0 ,
\end{eqnarray}
\begin{eqnarray}
\lim_{n \to \infty} \sup _{x\in \mathbb{R} } \left\lvert \mathbb{P} (\frac{SO_a(COC_n)-\mathbb{E} \Bigl(SO_a(COC_n)\Bigr)}{\sqrt{\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(COC_n)\Bigr)} }\leqslant x ) -\int_{-\infty}^{x} \frac{1}{\sqrt{2\pi } } e^{-\frac{t^2}{2} } \,dt\right\rvert=0 .
\end{eqnarray}
\end{prop}
\begin{proof}
By Corollary 4.2 and Corollary 4.3, we have
\begin{equation*}
\frac{SO_a(PC_n)-\mathbb{E} \Bigl( SO_a(PC_n)\Bigr)} {\sqrt{\mathbb{V} \mathbf{a} \mathbf{r}\Bigl(SO_a(PC_n)\Bigr)}} =\frac{X-\mathbb{E} (X)}{\sqrt{\mathbb{V} \mathbf{a} \mathbf{r}(X)} },\ X\sim B(n-2,\ p_1).
\end{equation*}
By Theorem 6.1 and $(4.6-4.19)$, equation $(6.20)$ can be proved.
Similarly, $(6.21-6.24)$ follow by the same argument.
\end{proof}
According to Proposition 6.2, the distributions of the Sombor indices of the random polygonal chain networks are consistent with asymptotic normal distributions when $n > 30$ and $p_1$ is a constant between $0$ and $1$. The normal distributions can thus be regarded as approximations to the distributions of the Sombor indices of the random chain networks in this paper; please refer to Table 1 for the specific values.
\begin{table}[htbp]
\setlength{\abovecaptionskip}{0.05cm}
\centering \vspace{.3cm}
\caption{Expected values and variances of Normal distribution for Sombor indices.}
\begin{tabular}{cl}
\hline
$Indices$ & $Normal\ \ parameters$ \\[1.2ex]
\hline
$SO(PC_n)_{2k+1}$ & $\mu =\Bigl((5\sqrt{2}-2\sqrt{13} )p_1+4\sqrt{2}k-3\sqrt{2}+4\sqrt{13}\Bigr)n
+(4\sqrt{13}-4\sqrt{2} )p_1-8\sqrt{2}k+11\sqrt{2} $\\[1.3ex]
$ \ \ $ &$\sigma ^2=(16\sqrt{26}+84 )p_1(1-p_1)(n-2)$ \\ [1.3ex]
$SO_{red}(PC_n)_{2k+1}$ & $\mu =\Bigl((3\sqrt{2}-2\sqrt{5})p_1+2\sqrt{2}k+4\sqrt{5} -\sqrt{2} \Bigr)n+(4\sqrt{5}-2\sqrt{2})p_1-4\sqrt{2}k+4\sqrt{2}$ \\[1.3ex]
$ \ \ $ & $\sigma ^2=(8\sqrt{10}+28 )(n-2)p_1(1-p_1)$ \\ [1.3ex]
$SO_{avr}(PC_n)_{2k+1}$ & $\mu =M_1n+N_1 $\\ [1.3ex]
$ \ \ $ & $\sigma ^2=\widetilde{\sigma } (n-2)p_1(1-p_1) $ \\ [1.3ex]
$SO(PC_n)_{2k}$ & $\mu =\Bigl(2p_1(\sqrt{2}-\sqrt{13} )+4\sqrt{2}k-4\sqrt{13}-3\sqrt{2}\Bigr)n+4p_1(\sqrt{26}-\sqrt{2} ) +8\sqrt{2}k+5\sqrt{2}-8\sqrt{26}$ \\ [1.3ex]
$ \ \ $ & $\sigma ^2=(84-16\sqrt{26} )p_1(1-p_1)(n-2)$ \\ [1.3ex]
$SO_{red}(PC_n)_{2k}$ & $\mu=\Bigl(\sqrt{2} (p_1+2k-1)+(4-2p_1)\sqrt{5} \Bigr)n-2\sqrt{2} p_1+4\sqrt{10} p_1-4\sqrt{2} k-8\sqrt{10} +\sqrt{2} $\\ [1.3ex]
$\ \ $ & $\sigma ^2=(28-16\sqrt{10} )(n-2)p_1(1-p_1)$\\ [1.3ex]
$SO_{avr}(PC_n)_{2k}$ & $\mu=\frac{-2\sqrt{2}np_1+2\sqrt{2}p_1+8\sqrt{2}n-6\sqrt{2}}{k}+\frac{4np_1-8n-4}{nk} \mu _2$\\ [2.5ex]
$\ \ $ & $\sigma ^2=\frac{-80n^2k^2-8n^2k+8nk+16n^2-32n+16}{n^2k^2}+\frac{-8\sqrt{2} (n-1)\mu _2 }{n^2k^2}$ \\ [2.5ex]
\hline
\end{tabular}
\end{table}
\section{Conclusion}
\ \ \ \ In this paper, a method for calculating the distributions of the Sombor indices of a random polygonal chain has been established. The expected values and variances of the Sombor indices of a random polygonal chain have been calculated. As an application, we have also obtained the expected values and variances of the Sombor indices for the polyomino chain, pentachain, polyphenyl chain, and cyclooctane chain.
From a probabilistic perspective, since the end connections of random chains obey a binomial distribution, the central limit theorem shows that when the number $n$ of polygons in a chain tends to infinity, the Sombor indices of the chain asymptotically follow a normal distribution. Sombor indices can help to predict the physicochemical properties of many molecules. Comparing the Sombor indices with some existing topological indices, one finds that the Sombor indices sometimes show better predictive power, which also provides new ideas for our future research.
\section*{Declaration of competing interest}
\ \ \ \ We confirm that these results have neither been published elsewhere nor are under consideration elsewhere. The authors have no conflict of interest to disclose.
\section*{Data availability statements}
\ \ \ \ The data that support the findings of this study are available within the article. All relevant data are also available from the corresponding author upon reasonable request.
\section{Introduction}
We study the well posedness and approximation of a generalised linear saddle point problem in reflexive Banach spaces using three bilinear forms: find $(u,w) \in X \times Y$ such that
\begin{equation}
\begin{split}
c(u,\eta) + b(\eta,w) &= \langle f, \eta \rangle \quad \forall \eta \in X, \\
b(u,\xi) - m(w,\xi) &= \langle g, \xi \rangle \quad \forall \xi \in Y.
\end{split}
\end{equation}
Our assumptions on the bilinear forms and spaces will be detailed in Section \ref{AbsSplit}. First we give some context to this general problem within the existing literature. If we set $m=0$, the resulting saddle point problem is well studied, see for example \cite{ErnGue13}, and the assumptions we will make on $b$ and $c$ are sufficient to show well posedness. The $m \neq 0$ case is examined in \cite{CiaHuaZou03, KelLiu96, BofBreFor13}. In those papers well posedness is shown under a different set of assumptions to ours: only one of the inf sup conditions is required for $b$, and $m$ satisfies a weaker coercivity assumption, but $c$ is assumed to be coercive. Indeed their assumptions are weaker than the ones used in this work for $b$ and $m$ but stronger for $c$.
This system is motivated by splitting methods in which we turn a single high
order partial differential equation into a coupled system of lower order equations. For example, consider the PDE
\begin{equation} \label{eq:introExamplePDE}
Au=f,
\end{equation}
where $A$ is a fourth order differential operator. Suppose we may write $A=B_1\circ B_2 +C$, where $B_1,B_2$ and $C$ are second order differential operators. By introducing a new variable, $w=B_2 u$, we may rewrite \eqref{eq:introExamplePDE} as a coupled system of equations
\begin{equation} \label{eq:introExampleSplitting}
\begin{split}
C u + B_1 w & = f, \\
B_2 u - w & = 0.
\end{split}
\end{equation}
The advantage of such a splitting method is that the resulting system of equations is second order; it can thus be solved numerically using simpler finite elements than are required to solve \eqref{eq:introExamplePDE} directly. Of course, the meaning of the resulting split system also depends on boundary conditions where these are needed.
Note that all examples presented in this paper are in fact set on closed surfaces, so no boundary conditions arise.
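At the algebraic level, the equivalence between \eqref{eq:introExamplePDE} and \eqref{eq:introExampleSplitting} can be illustrated with matrices standing in for discretised operators; a minimal sketch (all matrices here are illustrative placeholders, not a discretisation of any particular PDE):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# illustrative stand-ins for discretised operators B1, B2, C with A = B1 B2 + C
B1 = rng.standard_normal((n, n))
B2 = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
A = B1 @ B2 + C
f = rng.standard_normal(n)

u_direct = np.linalg.solve(A, f)   # direct solve of the fourth-order analogue A u = f

# split system:  C u + B1 w = f,   B2 u - w = 0
Z = np.block([[C, B1], [B2, -np.eye(n)]])
sol = np.linalg.solve(Z, np.concatenate([f, np.zeros(n)]))
u_split, w_split = sol[:n], sol[n:]

assert np.allclose(u_split, u_direct)
assert np.allclose(w_split, B2 @ u_direct)   # the auxiliary variable is w = B2 u
```

Eliminating $w$ from the block system recovers $A u = f$, so the two formulations have the same solution whenever $A$ is invertible.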
To be an effective method the system \eqref{eq:introExampleSplitting} must itself be well posed. This question is considered in \cite{CiaHuaZou03}, where sharp conditions are given detailing well posedness of the system. Amongst these conditions is a relationship between the norm of $B_1-B_2$ and other properties of the operators (see \cite[Section 3.1]{CiaHuaZou03}). When designing a splitting method it can be difficult to ensure that this condition holds.
In this paper we will take $B_1=B_2$. This case is studied in \cite{KelLiu96,Yan02}. These papers treat the case where $C$ induces a bilinear form that is coercive or at least positive semi-definite. We will not make this assumption here as it is not compatible with many of the problems we wish to consider. To illustrate this point, consider the operator
\[
A u = \Delta^2 u +\Delta u + u.
\]
Such an $A$ induces a coercive bilinear form on $H^2(\Omega)\cap H_0^1(\Omega)$ where $\Omega$ is a bounded open set in $\mathbb R^2$ with smooth boundary or on $H^2(\Gamma)$ where $\Gamma$ is a closed smooth hypersurface in $\mathbb R^3$. These lead to a problem of the form \eqref{eq:introExamplePDE} being well posed. However to perform a splitting which satisfies the conditions in \cite{KelLiu96,Yan02} we require a $B_1$ which induces a bilinear form satisfying an inf sup condition, equivalently $B_1$ is invertible in an appropriate sense, and a $C$ which induces a positive semi-definite bilinear form. A possible choice is $B_1 = B_2= -\Delta + \lambda$ for some $\lambda >0$ (with homogeneous Dirichlet boundary condition in the case that
$\Omega \subset \mathbb R^2$ with a smooth boundary) but this produces
\[
C = A- B_1 \circ B_1 = (1+2\lambda) \Delta +(1-\lambda^2)
\]
which is not positive semi-definite for any $\lambda >0$. We will thus consider a situation where $C$ does not induce a positive semi-definite bilinear form. Note that this work is not a direct generalisation of the results in \cite{KelLiu96,Yan02}: whilst we consider a weaker condition on $C$, this is accommodated by a stronger condition on the operator which acts on $w$ in the second equation, chosen to be the negative identity map in \eqref{eq:introExampleSplitting}.
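The computation of $C$ above is elementary since $\Delta$ commutes with itself and with scalars; the operator identity can therefore be verified by sampling scalar stand-ins for $\Delta$ and $\lambda$ (a quick check, not a proof about the operators themselves):

```python
# scalar stand-ins: Delta commutes with scalars, so the identity
# C = A - B1∘B1 = (1 + 2*lam)*Delta + (1 - lam**2) can be verified pointwise
for Delta in (-3.0, -0.5, 2.0):
    for lam in (0.25, 1.0, 3.0):
        A = Delta**2 + Delta + 1        # symbol of Delta^2 + Delta + 1
        B1 = -Delta + lam               # symbol of -Delta + lambda
        C = A - B1 * B1
        assert abs(C - ((1 + 2 * lam) * Delta + (1 - lam**2))) < 1e-12
```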
Our abstract setting and assumptions are motivated by applications of this general theory to formulate a splitting method for surface PDE problems arising in models of biomembranes which
are posed over a sphere and a torus, \cite{EllFriHob17}.
The complexity of the fourth order operator we wish to split, which results from the second variation of the Willmore functional, makes it difficult to formulate the splitting problem in such a way that existing theory can be applied. Such a formulation may be possible but it is our belief that the method presented here is straightforward to apply to this and similar problems. Moreover the additional assumptions we make on $b$ and $m$ are quite natural for the applications we consider. See also \cite{EllFreMil89, EllGraHobKorWol16} for other possible applications to fourth order partial differential equations.
\subsection*{Outline of paper}
In Section \ref{AbsSplit} we define an abstract saddle point system consisting of two coupled variational equations in a Banach space setting using three bilinear forms $c$, $b$ and $m$. Well posedness is proved subject to Assumptions \ref{def:genSplitSetupASSUME} and \ref{def:liftedDiscreteSetup}.
An abstract finite element approximation is defined in Section 3. Natural error bounds are proved under approximation assumptions. Section \ref{SurfCalc} details some notation for surface calculus and surface finite elements. Section \ref{BilinearFormb} details results about a useful bilinear form $b(\cdot,\cdot)$ used in the examples of fourth order surface PDEs studied in later sections. Examples of two fourth order PDEs on closed surfaces satisfying the assumptions of Section \ref{AbsSplit} are given in Section \ref{PDEex} and the analysis of the application of the surface finite element method to the saddle point problem is studied in Section \ref{PDEexDiscrete}. Finally, numerical examples which verify the proved convergence rates are given in Section \ref{NumExpts}.
\section{Abstract splitting problem}\label{AbsSplit}
We now introduce the coupled system on which the splitting method is based. Our abstract problem is formulated in a Banach space setting. We will first define the spaces and functionals used and the required assumptions.
\begin{definition} \label{def:genSplitSetup}
Let $X,Y$ be reflexive Banach spaces and $L$ be a Hilbert space with $Y \subset L$ continuously. Let $c$, $b$ and $m$ be bilinear functionals such that
\begin{align*}
&c:X\times X \rightarrow \mathbb{R} \text{, bounded and bilinear}, \\
&b:X\times Y \rightarrow \mathbb{R} \text{, bounded, bilinear}, \\
&m:L\times L \rightarrow \mathbb{R} \text{, bounded, bilinear, symmetric and coercive}.
\end{align*}
Let $f \in X^*$ and $g \in Y^*$.
\end{definition}
Using this general setting we formulate the coupled problem. Note that we allow a non-zero right hand side in each equation; this is a generalisation of the motivating problem \eqref{eq:introExampleSplitting}.
\begin{problem} \label{prob:genSplittingProb}
With the spaces and functionals in Definition \ref{def:genSplitSetup}, find $(u,w) \in X \times Y$ such that
\begin{equation} \label{eq:genSplittingProb}
\begin{split}
c(u,\eta) + b(\eta,w) &= \langle f, \eta \rangle \quad \forall \eta \in X, \\
b(u,\xi) - m(w,\xi) &= \langle g, \xi \rangle \quad \forall \xi \in Y.
\end{split}
\end{equation}
\end{problem}
Throughout we assume the following inf sup and coercivity conditions on
the bilinear forms $b(\cdot,\cdot), c(\cdot,\cdot)$ and $m(\cdot,\cdot)$.
\begin{assumption} \label{def:genSplitSetupASSUME}
~
\begin{itemize}
\item
There exist $\beta,\gamma >0$ such that
\begin{equation} \label{eq:infSupsDefn}
\beta \|\eta\|_X \leq \sup_{\xi \in Y} \frac{b(\eta,\xi)}{\|\xi\|_Y} \;\; \forall \eta \in X \quad \text{and} \quad
\gamma \|\xi\|_Y \leq \sup_{\eta \in X} \frac{b(\eta,\xi)}{\|\eta\|_X} \;\; \forall \xi \in Y.
\end{equation}
\item
There exists $C>0$ such that for all $(u,w) \in X \times Y$
\begin{equation} \label{eq:uwCoercivity}
b(u,\xi) = m(w,\xi) \; \forall \xi \in Y \implies C \|w\|_L^2 \leq c(u,u) + m(w,w).
\end{equation}
\end{itemize}
\end{assumption}
For existence we will make the additional assumption that the spaces $X$ and $Y$ can be approximated by sequences of finite dimensional spaces.
Moreover we assume that such approximating spaces are sufficiently rich to satisfy an appropriate inf sup inequality. This assumption allows us to use a Galerkin approach.
\begin{assumption}\label{def:liftedDiscreteSetup}
We assume there exist sequences of finite dimensional approximating spaces $X_n \subset X$ and $Y_n \subset Y$. That is, for
any $\eta \in X$ there exists a sequence $\eta_n \in X_n$ such that $\|\eta_n - \eta\|_X \rightarrow 0$, similarly for any $\xi \in Y$ there exists a sequence $\xi_n \in Y_n$ such that $\|\xi_n - \xi\|_Y \rightarrow 0$.
Moreover, we assume the discrete inf sup inequalities hold. That is, there exist $\tilde{\beta},\tilde{\gamma}> 0$, independent of $n$, such that
\begin{align}
\tilde{\beta} \|\eta\|_X &\leq \sup_{\xi \in Y_n} \frac{b(\eta,\xi)}{\|\xi\|_Y} \;\; \forall \eta \in X_n, \\
\tilde{\gamma} \|\xi\|_Y &\leq \sup_{\eta \in X_n} \frac{b(\eta,\xi)}{\|\eta\|_X} \;\; \forall \xi \in Y_n.
\end{align}
Finally, assume there exists a map $I_n:Y \rightarrow Y_n$ for each $n$, such that
\begin{equation} \label{eq:YTildeSupConv}
\begin{split}
&b(\eta_n, I_n \xi) = b(\eta_n, \xi) \quad \forall (\eta_n, \xi) \in X_n \times Y \\
&\sup_{\xi \in Y} \frac{\|\xi - I_n \xi\|_L}{\|\xi\|_Y} \rightarrow 0 \quad \text{ as } n \rightarrow \infty.
\end{split}
\end{equation}
\end{assumption}
We now show the well posedness of Problem \ref{prob:genSplittingProb}. First we prove two key lemmas in which we
construct a discrete inverse operator and a discrete coercivity relation that is an analogue of (\ref{eq:uwCoercivity}). We make use of a generalised form of the
Lax-Milgram theorem, the Banach-Ne\v{c}as-Babu\v{s}ka Theorem \cite[Section 2.1.3]{ErnGue13}. For completeness, the theorem is stated below.
\begin{theorem}[Banach-Ne\v{c}as-Babu\v{s}ka] \label{thm:BNB}
Let $W$ be a Banach space and let $V$ be a reflexive Banach space. Let $A \in \mathcal{L}(W \times V ; \mathbb{R})$ and $F \in V^*$. Then there exists a unique $u_F \in W$ such that
\[
A(u_F,v) = F(v) \;\; \forall v \in V
\]
if and only if
\begin{align*}
&\exists \alpha > 0 \;\; \forall w \in W, \quad \sup_{v \in V} \frac{ A(w,v) }{ \|v\|_V } \geq \alpha \|w\|_W, \\
&\forall v \in V, \quad \left( \forall w \in W, \; A(w,v)=0 \right) \implies v = 0.
\end{align*}
Moreover the following a priori estimate holds
\[
\forall F \in V^*, \quad \|u_F\|_W \leq \alpha^{-1}\|F\|_{V^*}.
\]
\end{theorem}
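In a finite-dimensional Euclidean setting the content of the theorem is transparent: for $A(w,v) = v^\top M w$ on $W=V=\mathbb{R}^n$, the inf sup constant $\alpha$ is the smallest singular value of $M$, and the a priori estimate follows directly. A minimal sketch (the matrix here is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))   # A(w, v) = v @ M @ w on R^n with Euclidean norms
F = rng.standard_normal(n)

# sup_v (v @ M @ w)/|v| = |M w| >= sigma_min(M) |w|, so alpha = sigma_min(M)
alpha = np.linalg.svd(M, compute_uv=False).min()

u_F = np.linalg.solve(M, F)       # the unique solution of A(u_F, v) = F(v) for all v

assert np.allclose(M @ u_F, F)
assert np.linalg.norm(u_F) <= np.linalg.norm(F) / alpha + 1e-12   # BNB a priori estimate
```

The second BNB condition here says exactly that $M^\top v = 0$ forces $v=0$, i.e. $M$ is invertible.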
\begin{lemma} \label{lem:liftedDiscreteCoercivity}
Under the setting of Definition \ref{def:genSplitSetup} and Assumptions \ref{def:genSplitSetupASSUME} and \ref{def:liftedDiscreteSetup},
there exists a linear map $G_n:Y^* \rightarrow X_n$ such that for each $\Theta \in Y^*$
\[
b(G_n \Theta, \xi_n) = \langle \Theta , \xi_n \rangle \;\; \forall \xi_n \in Y_n.
\]
These maps satisfy the uniform bound
\[
\|G_n \Theta\|_X \leq \tilde{\beta}^{-1} \|\Theta\|_{Y^*}.
\]
Furthermore, there exists a map $G: Y^* \rightarrow X$ such that for each $\Theta \in Y^*$
\[
b(G \Theta, \xi) = \langle \Theta , \xi \rangle \;\; \forall \xi \in Y .
\]
This map satisfies the bound
\[
\|G \Theta\|_X \leq \beta^{-1} \|\Theta\|_{Y^*}.
\]
\end{lemma}
\begin{proof}
To construct $G_n$, let $\Theta \in Y^*$, then $\Theta \in (Y_n, \|\cdot\|_Y)^*$. Then by Theorem \ref{thm:BNB}, there exists a unique $G_n \Theta \in X_n$ such that
\[
b(G_n \Theta, \xi_n) = \langle \Theta , \xi_n \rangle \;\; \forall \xi_n \in Y_n.
\]
The assumptions required to apply Theorem \ref{thm:BNB} are made in Assumption \ref{def:liftedDiscreteSetup}. That $G_n$ is linear follows immediately from the construction. The two bounds are a consequence of the discrete inf sup inequalities in Assumption \ref{def:liftedDiscreteSetup}. The map $G$ is constructed similarly using the assumptions made in Assumption \ref{def:genSplitSetupASSUME}.
\end{proof}
We can now prove a discrete coercivity relation which is key in proving well posedness for Problem \ref{prob:genSplittingProb}. This is a discrete analogue of \eqref{eq:uwCoercivity}.
\begin{lemma}
Under the assumptions in Lemma \ref{lem:liftedDiscreteCoercivity}, there exist $C,N>0$ such that, for all $n\geq N$,
\begin{equation} \label{eq:liftedDiscreteCoercivity}
C\|v_n\|_L^2 \leq c(G_n m(v_n,\cdot),G_n m(v_n,\cdot)) + m(v_n,v_n) \;\; \forall v_n \in Y_n.
\end{equation}
Here $m(v_n,\cdot) \in Y^*$ denotes the map $y \mapsto m(v_n,y)$.
\end{lemma}
\begin{proof}
Let $v_n \in Y_n$. Then $m(v_n, \cdot) \in Y^*$, since $Y$ is continuously embedded in $L$, and we observe
\[
\|m(v_n,\cdot)\|_{Y^*} = \sup_{y \in Y}\frac{|m(v_n,y)|}{\|y\|_Y} \leq \sup_{y \in Y}\frac{C \|v_n\|_L \|y\|_L }{\|y\|_Y} \leq C \|v_n\|_L.
\]
It follows
\begin{align*}
b((G-G_n)m(v_n,\cdot),\xi) &= b((G-G_n)m(v_n,\cdot),\xi - I_n \xi) \\
&= b(G m(v_n,\cdot), \xi - I_n \xi)
\\
&= m(v_n, \xi - I_n \xi)
\\
& \leq C\|v_n \|_L \|\xi - I_n \xi\|_L.
\end{align*}
Using the inf sup inequalities given in \eqref{eq:infSupsDefn} we deduce
\[
\|(G-G_n)m(v_n,\cdot)\|_X \leq C \|v_n\|_L \sup_{\xi \in Y} \frac{\|\xi - I_n \xi\|_L}{\|\xi\|_Y}.
\]
For any $v_n \in Y_n$ we can thus bound the difference
\[
|c(G_n m(v_n,\cdot),G_n m(v_n,\cdot)) - c(G m(v_n,\cdot),G m(v_n,\cdot))| \leq C\|v_n\|_L^2 \sup_{\xi \in Y} \frac{\|\xi - I_n \xi\|_L}{\|\xi\|_Y}.
\]
Now, choosing $n$ sufficiently large in the bound above, by \eqref{eq:uwCoercivity} and \eqref{eq:YTildeSupConv} it follows for any $v_n \in Y_n$
\begin{align*}
C\|v_n\|_L^2 &\leq c(G_n m(v_n,\cdot),G_n m(v_n,\cdot)) + m(v_n,v_n) + \frac{C}{2}\|v_n\|_L^2,
\end{align*}
from which the result is immediate.
\end{proof}
\begin{theorem} \label{thm:genSplittingWellPosed}
Suppose the setting of Definition \ref{def:genSplitSetup} and Assumptions \ref{def:genSplitSetupASSUME} and \ref{def:liftedDiscreteSetup} hold, then there exists a unique solution to Problem \ref{prob:genSplittingProb}. Moreover, there exists $C>0$, independent of the data, such that
\[
\|u\|_X + \|w\|_Y \leq C(\|f\|_{X^*} + \|g\|_{Y^*}).
\]
\end{theorem}
\begin{proof}
We begin with existence, using a Galerkin argument. Let $(u_n,w_n) \in X_n \times Y_n$ be the unique solution of
\begin{align*}
c(u_n,\eta_n) + b(\eta_n,w_n) &= \langle f, \eta_n \rangle \quad \forall \eta_n \in X_n, \\
b(u_n,\xi_n) - m(w_n,\xi_n) &= \langle g,\xi_n \rangle \quad \forall \xi_n \in Y_n.
\end{align*}
As the problem is linear and finite dimensional, existence and uniqueness of such a solution is equivalent to uniqueness for the
homogeneous problem $f=g=0$. In this case, testing the first equation with $u_n$, the second with $w_n$ and subtracting we obtain
\[
c(u_n,u_n) + m(w_n,w_n) = 0.
\]
For sufficiently large $n$ this implies $w_n=0$ by \eqref{eq:liftedDiscreteCoercivity}, as $u_n = G_n m(w_n,\cdot)$ in the homogeneous
case, thus $u_n=0$ also due to the linearity of $G_n$.
Now we return to the inhomogeneous case and produce a priori bounds on $u_n,w_n$. To create a pair of initial bounds
we use the discrete inf sup inequalities with each of the finite dimensional equations. Firstly,
\begin{align*}
\tilde{\gamma}\|w_n\|_Y \leq \sup_{\eta_n \in X_n} \frac{b(\eta_n,w_n)}{\|\eta_n\|_X} \leq \|f\|_{X^*} + C \|u_n\|_X.
\end{align*}
Similarly with the second equation,
\begin{align*}
\tilde{\beta}\|u_n\|_X \leq \sup_{\xi_n \in Y_n} \frac{b(u_n,\xi_n)}{\|\xi_n\|_Y} \leq \|g\|_{Y^*} + C\|w_n\|_L.
\end{align*}
Combining these two inequalities produces
\begin{equation} \label{eq:liftAPwithL2}
\|u_n\|_X + \|w_n\|_Y \leq C(\|f\|_{X^*} + \|g\|_{Y^*} + \|w_n\|_L).
\end{equation}
To bound the $\|w_n\|_L$ term we use the same approach of subtracting the equations as used to
show uniqueness. In the inhomogeneous case this produces
\[
c(u_n,u_n) + m(w_n,w_n) = \langle f, u_n \rangle - \langle g, w_n \rangle.
\]
Notice now $u_n=G_n m(w_n,\cdot) + G_n g$, and thus (\ref{eq:liftedDiscreteCoercivity}) yields
\begin{align*}
C\|w_n\|_L^2 &\leq c(u_n,u_n) + m(w_n,w_n) -c(u_n,G_n g) -c(G_n g,u_n) +c(G_n g,G_n g) \\
& \leq \|f\|_{X^*}\|u_n\|_X + \|g\|_{Y^*}\|w_n\|_Y + C(\|u_n\|_X+ \|G_ng\|_X)\|G_ng\|_X.
\end{align*}
Recall, by Lemma \ref{lem:liftedDiscreteCoercivity},
\[
\|G_n g\|_X \leq \tilde{\beta}^{-1}\|g\|_{Y^*}.
\]
Combining these two inequalities with \eqref{eq:liftAPwithL2} produces
\[
\|w_n\|_L^2 \leq C(\|f\|_{X^*} + \|g\|_{Y^*})(\|f\|_{X^*} + \|g\|_{Y^*} + \|w_n\|_L).
\]
Hence by Young's inequality we deduce
\[
\|w_n\|_L \leq C(\|f\|_{X^*} + \|g\|_{Y^*}),
\]
then inserting this bound into \eqref{eq:liftAPwithL2} produces
\[
\|u_n\|_X + \|w_n\|_Y \leq C(\|f\|_{X^*} + \|g\|_{Y^*} ).
\]
Thus $u_n$ and $w_n$ are bounded sequences in $X$ and $Y$ respectively, which are both reflexive Banach spaces,
hence there exists a subsequence (which we continue to denote with a subscript $n$) such that
\[
u_n \xrightharpoonup{\;\;X\;\;} u \quad \text{ and } \quad w_n \xrightharpoonup{\;\;Y\;\;} w,
\]
for some weak limits $u \in X$ and $w \in Y$. We will show that this weak limit is a solution to Problem \ref{prob:genSplittingProb}.
For any $\eta \in X$, there exists an approximating sequence $\eta_n \rightarrow \eta$ with each $\eta_n \in X_n$, it follows
\[
c(u,\eta) + b(\eta,w) = \lim_{n \rightarrow \infty} \left( c(u_n,\eta_n) + b(\eta_n,w_n) \right) = \lim_{n \rightarrow \infty } \langle f , \eta_n \rangle = \langle f, \eta \rangle.
\]
We treat the second equation similarly, for any $\xi \in Y$ we may find a sequence $\xi_n \rightarrow \xi$ with each $\xi_n \in Y_n$ and
\[
b(u,\xi) - m(w,\xi) = \lim_{n \rightarrow \infty} \left( b(u_n,\xi_n) - m(w_n, \xi_n) \right) = \lim_{n \rightarrow \infty } \langle g , \xi_n \rangle = \langle g, \xi \rangle.
\]
Thus $(u,w)$ does indeed solve Problem \ref{prob:genSplittingProb}. Moreover, as $u,w$ are the weak limits of bounded sequences in reflexive
Banach spaces they satisfy the same upper bound, that is
\[
\|u\|_X + \|w\|_Y \leq C(\|f\|_{X^*} + \|g\|_{Y^*} ).
\]
We complete the proof by proving uniqueness, as the system is linear it is sufficient to consider the homogeneous case $f=g=0$. In such a case $b(u,\xi) = m(w,\xi) \; \forall \xi \in Y$ and
\[
c(u,u) + m(w,w) = 0.
\]
Then by \eqref{eq:uwCoercivity} we have $w=0$ and hence $u=0$.
\end{proof}
\section{Abstract finite element method}
In this section we formulate and analyse an abstract finite element method to approximate the solution of Problem \ref{prob:genSplittingProb}. In our applications we wish to use a non-conforming finite element method in the sense of using finite element spaces which are not subspaces of the function spaces. For example, we will approximate problems based on a surface $\Gamma$ via problems based on a discrete surface $\Gamma_h$.
\begin{definition} \label{def:genFemSetUp}
Suppose, for $h>0$, $X_h$, $Y_h$ are finite dimensional normed vector spaces and there exist lift operators
\begin{align*}
l_h^X:X_h \rightarrow X \quad \text{ and }\quad l_h^Y:Y_h \rightarrow Y,
\end{align*}
which are linear and injective, such that $X_h^l := l_h^X(X_h)$ and $Y_h^l := l_h^Y(Y_h)$ satisfy Assumption \ref{def:liftedDiscreteSetup}.
For $\eta_h \in X_h$ let $\eta_h^l := l_h^X(\eta_h) \in X_h^l$, similarly for $\xi_h \in Y_h$ let $\xi_h^l := l_h^Y(\xi_h) \in Y_h^l$.
Let $c_h$, $b_h$, $m_h$ denote bilinear functionals such that
\begin{align*}
&c_h:X_h\times X_h \rightarrow \mathbb{R} \text{, bilinear}, \\
&b_h:X_h\times Y_h \rightarrow \mathbb{R} \text{, bilinear,} \\
&m_h:Y_h\times Y_h \rightarrow \mathbb{R} \text{, bilinear and symmetric.}
\end{align*}
We will assume the following approximation properties, there exists $C>0$ and $k \in \mathbb{N}$ such that
\begin{align*}
|c(\eta_h^l,\xi_h^l) - c_h(\eta_h,\xi_h)| &\leq Ch^k \|\eta_h^l\|_X \|\xi_h^l\|_X \quad \forall (\eta_h, \xi_h) \in X_h \times X_h, \\
|b(\eta_h^l,\xi_h^l) - b_h(\eta_h,\xi_h)| &\leq Ch^k \|\eta_h^l\|_X \|\xi_h^l\|_Y \quad \forall (\eta_h, \xi_h) \in X_h \times Y_h, \\
|m(\eta_h^l,\xi_h^l) - m_h(\eta_h,\xi_h)| &\leq Ch^k \|\eta_h^l\|_L \|\xi_h^l\|_L \quad \forall (\eta_h, \xi_h) \in Y_h \times Y_h.
\end{align*}
Finally, let $f_h \in X_h^*$ and $g_h \in Y_h^*$, where $X_h^*$ and $Y_h^*$ are the dual spaces of $X_h$ and $Y_h$ respectively, be such that
\begin{align*}
|\langle f,\eta_h^l \rangle- \langle f_h,\eta_h \rangle| & \leq Ch^k \|f\|_{X^*} \|\eta_h^l\|_X \quad \forall \eta_h \in X_h, \\
|\langle g, \xi_h^l\rangle - \langle g_h, \xi_h \rangle| & \leq Ch^k \|g\|_{Y^*}\|\xi_h^l\|_Y \quad \forall \xi_h \in Y_h.
\end{align*}
\end{definition}
The finite element approximation can now be formulated.
\begin{problem} \label{prob:genSplittingFem}
Under the assumptions of Definition \ref{def:genFemSetUp}, find $(u_h,w_h) \in X_h \times Y_h$ solving the discretised problem
\begin{align*}
c_h(u_h,\eta_h) + b_h(\eta_h,w_h) &= \langle f_h, \eta_h \rangle \quad \forall \eta_h \in X_h, \\
b_h(u_h,\xi_h) - m_h(w_h,\xi_h) &= \langle g_h, \xi_h \rangle \quad \forall \xi_h \in Y_h.
\end{align*}
\end{problem}
We now prove well posedness for the finite element method, Problem \ref{prob:genSplittingFem}, and produce a priori bounds for the solution.
\begin{theorem} \label{thm:genFemInfBound}
For sufficiently small $h$, there exists a unique solution to Problem \ref{prob:genSplittingFem}. Moreover, there exists a constant $C>0$, independent of $h$, such that
\[
\|u-u_h^l\|_X + \|w-w_h^l\|_Y \leq C \left( \inf_{(\eta_h, \xi_h) \in X_h \times Y_h} \left( \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y \right) + h^k(\|f\|_{X^*} + \|g\|_{Y^*} ) \right).
\]
\end{theorem}
\begin{proof}
For existence and uniqueness it is sufficient to prove uniqueness for the homogeneous case $f_h=g_h=0$, as the system is linear and finite dimensional. In the homogeneous case we see
\[
c_h(u_h,u_h) + m_h(w_h,w_h) = 0.
\]
We will denote by $G_h^l:Y^* \rightarrow X_h^l$ the map constructed in Lemma \ref{lem:liftedDiscreteCoercivity} and also define $G_h:Y^* \rightarrow X_h$ by $G_h:= (l_h^X)^{-1}\circ G_h^l$. Notice also,
\begin{align*}
\tilde{\beta}\|u_h^l - G_h^l m(w_h^l,\cdot)\|_X &\leq \sup_{\xi_h \in Y_h} \frac{b(u_h^l-G_h^l m(w_h^l,\cdot), \xi_h^l)}{\|\xi_h^l\|_Y} \\
& \leq \sup_{\xi_h \in Y_h} \frac{ b(u_h^l,\xi_h^l) - b_h(u_h,\xi_h) + m_h(w_h,\xi_h) - m(w_h^l,\xi_h^l) }{\|\xi_h^l\|_Y} \\
& \leq Ch^k \|w_h^l\|_L.
\end{align*}
The final line holds as $\|u_h^l\|_X \leq C \|w_h^l\|_L$ in the homogeneous case, using the second equation of the system. It follows, by \eqref{eq:liftedDiscreteCoercivity},
\begin{align*}
C\|w_h^l\|_L^2 &\leq c(G_h^l m(w_h^l,\cdot), G_h^l m(w_h^l,\cdot) ) + m(w_h^l,w_h^l) \\
& = c(u_h^l, u_h^l ) + m(w_h^l,w_h^l) - c_h(u_h,u_h) - m_h(w_h,w_h) \\
& \quad + c(G_h^l m(w_h^l,\cdot), G_h^l m(w_h^l,\cdot) ) - c(u_h^l,u_h^l) \\
& \leq \tilde{C}h^k \|w_h^l\|_L^2.
\end{align*}
Hence for $h$ sufficiently small $w_h^l = 0$ from which we deduce $u_h^l=0$ and hence $w_h=u_h=0$. Thus there exists a unique solution for sufficiently small $h$.
Now we prove the required error estimate. Let $\eta_h \in X_h$ and $\xi_h \in Y_h$ be arbitrary. Using the second equation and the discrete inf sup inequality it follows
\begin{align*}
\tilde{\beta} &\|u_h^l-\eta_h^l\|_X \leq \sup_{v_h \in Y_h} \frac{1}{\|v_h^l\|_Y} \left[ b(u_h^l-\eta_h^l, v_h^l) \right] \\
& = \sup_{v_h \in Y_h} \frac{1}{\|v_h^l\|_Y} \bigg[ b(u-\eta_h^l, v_h^l) - m(w-w_h^l,v_h^l) -\langle g, v_h^l \rangle + \langle g_h ,v_h \rangle \\
& \hspace{4.25cm} -b_h(u_h,v_h) + m_h(w_h,v_h) + b(u_h^l,v_h^l) -m(w_h^l,v_h^l) \bigg] \\
& \leq C \left[ \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + \|w_h^l-\xi_h^l\|_L +h^k(\|g\|_{Y^*} + \|u_h^l\|_X + \|w_h^l\|_L) \right].
\end{align*}
We can produce a similar bound using the first equation of the system
\begin{align*}
\tilde{\gamma} &\|w_h^l-\xi_h^l\|_Y \leq \sup_{v_h \in X_h} \frac{1}{\|v_h^l\|_X} \left[ b( v_h^l,w_h^l-\xi_h^l) \right] \\
& = \sup_{v_h \in X_h} \frac{1}{\|v_h^l\|_X} \bigg[ b(v_h^l,w-\xi_h^l) +c(u-u_h^l,v_h^l) -\langle f, v_h^l \rangle + \langle f_h, v_h \rangle \\
& \hspace{4.7cm} -b_h(v_h,w_h) - c_h(u_h,v_h) + b(v_h^l,w_h^l) +c(u_h^l,v_h^l) \bigg] \\
& \leq C \left[ \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + \|u_h^l-\eta_h^l\|_X +h^k(\|f\|_{X^*} + \|u_h^l\|_X + \|w_h^l\|_Y) \right].
\end{align*}
Combining these two estimates produces the bound
\begin{equation} \label{eq:genSumBoundWithL2}
\begin{split}
\|u_h^l-\eta_h^l\|_X + \|w_h^l-\xi_h^l\|_Y \leq C \Big[ & \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + \|w_h^l-\xi_h^l\|_L \\
& +h^k(\|f\|_{X^*} + \|g\|_{Y^*} + \|u_h^l\|_X + \|w_h^l\|_Y) \Big].
\end{split}
\end{equation}
To produce the result we must bound the $L$-norm term which appears here. To do so we will add the discrete equations together and use the discrete coercivity relation \eqref{eq:liftedDiscreteCoercivity}. Firstly consider
\begin{align*}
&|c_h(u_h-\eta_h, u_h-\eta_h) + b_h(u_h-\eta_h, w_h-\xi_h)| \\
&= |c(u-\eta_h^l, u_h^l-\eta_h^l) + b(u_h^l-\eta_h^l, w-\xi_h^l) -\langle f, u_h^l-\eta_h^l \rangle + \langle f_h , u_h-\eta_h \rangle \\
& \quad +c(\eta_h^l, u_h^l-\eta_h^l) + b(u_h^l-\eta_h^l, \xi_h^l) - c_h(\eta_h, u_h-\eta_h) - b_h(u_h-\eta_h, \xi_h)| \\
& \leq C \|u_h^l-\eta_h^l\|_X \left[ \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + h^k(\|f\|_{X^*} +\|\eta_h^l\|_X + \|\xi_h^l\|_Y) \right].
\end{align*}
Treating the second equation similarly produces
\begin{align*}
&|b_h(u_h-\eta_h, w_h-\xi_h) - m_h(w_h-\xi_h, w_h-\xi_h)| \\
&= |b(u-\eta_h^l, w_h^l-\xi_h^l) - m(w-\xi_h^l, w_h^l-\xi_h^l) -\langle g , w_h^l-\xi_h^l \rangle + \langle g_h, w_h-\xi_h \rangle \\
& \quad +b(\eta_h^l, w_h^l-\xi_h^l) - m(\xi_h^l,w_h^l - \xi_h^l) - b_h(\eta_h, w_h-\xi_h) + m_h(\xi_h, w_h-\xi_h)| \\
& \leq C \|w_h^l-\xi_h^l\|_Y \left[ \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + h^k(\|g\|_{Y^*}+\|\eta_h^l\|_X + \|\xi_h^l\|_Y) \right].
\end{align*}
Combining these two estimates with \eqref{eq:genSumBoundWithL2} produces
\begin{equation}\label{eq:initialFemSumBound}
|c_h(u_h - \eta_h,u_h-\eta_h) + m_h(w_h-\xi_h,w_h-\xi_h)| \leq C \left(\mathbb{B}^2 + \mathbb{B}\|w_h^l-\xi_h^l\|_L \right),
\end{equation}
where the grouping of terms $\mathbb{B}$ is given by
\begin{equation} \label{eq:bigBracketB}
\begin{split}
\mathbb{B}:=&\|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y \\
& +h^k(\|f\|_{X^*} + \|g\|_{Y^*} + \|u_h^l\|_X + \|\eta_h^l\|_X + \|w_h^l\|_Y + \|\xi_h^l\|_Y ).
\end{split}
\end{equation}
The coercivity relation in \eqref{eq:liftedDiscreteCoercivity} gives
\[
C\|w_h^l-\xi_h^l\|_L^2 \leq c(G_h^lm(w_h^l-\xi_h^l,\cdot),G_h^l m(w_h^l-\xi_h^l,\cdot) ) + m(w_h^l-\xi_h^l,w_h^l-\xi_h^l), \\
\]
it follows
\begin{equation} \label{eq:L2starterBound}
\begin{split}
C\|w_h^l-\xi_h^l\|_L^2\leq& |c(u_h^l-\eta_h^l,u_h^l-\eta_h^l ) + m(w_h^l-\xi_h^l,w_h^l-\xi_h^l) \\
& - \left[ c_h(u_h -\eta_h,u_h-\eta_h ) + m_h(w_h-\xi_h,w_h-\xi_h) \right]| \\
&+ |c_h(u_h - \eta_h,u_h-\eta_h) + m_h(w_h-\xi_h,w_h-\xi_h)| \\
&+ |c(G_h^lm(w_h^l-\xi_h^l,\cdot),G_h^l m(w_h^l-\xi_h^l,\cdot) ) - c(u_h^l-\eta_h^l,u_h^l-\eta_h^l )|.
\end{split} \raisetag{2.5\baselineskip}
\end{equation}
To proceed we bound the three terms appearing here. The first term is simply an approximation property,
\begin{gather}
\begin{split}
& |c(u_h^l-\eta_h^l,u_h^l-\eta_h^l ) + m(w_h^l-\xi_h^l,w_h^l-\xi_h^l) - \left[ c_h(u_h -\eta_h,u_h-\eta_h ) + m_h(w_h-\xi_h,w_h-\xi_h) \right]| \\
& \leq Ch^k \left( \|u_h^l-\eta_h^l\|_X^2 + \|w_h^l-\xi_h^l\|_Y^2 \right) \\
&\leq Ch^k \left( \mathbb{B}^2 + \mathbb{B}\|w_h^l-\xi_h^l\|_L + \|w_h^l-\xi_h^l\|_L^2 \right).
\end{split} \raisetag{1\baselineskip}
\label{eq:starter1stLine}
\end{gather}
The final line is true for sufficiently small $h$ and follows from \eqref{eq:genSumBoundWithL2}. The second term we have already bounded in \eqref{eq:initialFemSumBound}. For the final term notice
\begin{align*}
|c&(G_h^lm(w_h^l-\xi_h^l,\cdot),G_h^l m(w_h^l-\xi_h^l,\cdot) ) - c(u_h^l-\eta_h^l,u_h^l-\eta_h^l )| \\
&\leq C(\|G_h^lm(w_h^l-\xi_h^l,\cdot)\|_X + \|u_h^l-\eta_h^l\|_X)\|G_h^lm(w_h^l-\xi_h^l,\cdot)-(u_h^l-\eta_h^l)\|_X.
\end{align*}
To bound these terms first notice, by Lemma \ref{lem:liftedDiscreteCoercivity},
\[
\|G_h^l m(w_h^l-\xi_h^l,\cdot)\|_X \leq C\|m(w_h^l-\xi_h^l,\cdot)\|_{Y^*} \leq C\|w_h^l-\xi_h^l\|_L.
\]
We can then use the bound on $\|u_h^l - \eta_h^l\|_X$ established in \eqref{eq:genSumBoundWithL2} to produce
\begin{align*}
\|G_h^l m(w_h^l-\xi_h^l,\cdot)\|_X + \|u_h^l-\eta_h^l\|_X \leq C \big[ & \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + \|w_h^l-\xi_h^l\|_L \\
&+h^k(\|f\|_{X^*} + \|g\|_{Y^*} + \|u_h^l\|_X + \|w_h^l\|_Y) \big].
\end{align*}
For the second factor we first introduce $G_h^l(g_h^l)$, where $g_h^l \in (Y_h^l)^*$ is defined by
\[
\langle g_h^l, v_h^l \rangle := \langle g_h,v_h \rangle.
\]
Note that the map $G_h^l$ is well defined on $(Y_h^l)^*$, see the proof of Lemma \ref{lem:liftedDiscreteCoercivity}.
By the triangle inequality
\[
\|G_h^lm(w_h^l-\xi_h^l,\cdot)-(u_h^l-\eta_h^l)\|_X \leq \|G_h^l(m(w_h^l,\cdot)+g_h^l)-u_h^l\|_X + \|\eta_h^l -G_h^l(g_h^l + m(\xi_h^l,\cdot))\|_X.
\]
To bound each of these we use the discrete inf sup inequalities and the definition of $G_h^l$. Firstly,
\begin{align*}
&\tilde{\beta}\|G_h^l(m(w_h^l,\cdot)+g_h^l)-u_h^l\|_X \leq \sup_{v_h \in Y_h} \frac{b(G_h^l(m(w_h^l,\cdot)+g_h^l)-u_h^l,v_h^l)}{\|v_h^l\|_Y} \\
& = \sup_{v_h \in Y_h} \frac{1}{\|v_h^l\|_Y} \left[ -b(u_h^l,v_h^l) +b_h(u_h,v_h) - m_h(w_h,v_h) + m (w_h^l,v_h^l) \right] \\
& \leq Ch^k\left( \|u_h^l\|_X + \|w_h^l\|_Y \right).
\end{align*}
Similarly, for the second term
\begin{align*}
&\tilde{\beta}\|\eta_h^l - G_h^l(m(\xi_h^l,\cdot)+g_h^l)\|_X \leq \sup_{v_h \in Y_h} \frac{b(\eta_h^l - G_h^l(m(\xi_h^l,\cdot)+g_h^l),v_h^l)}{\|v_h^l\|_Y} \\
&=\sup_{v_h \in Y_h} \frac{1}{\|v_h^l\|_Y} \left[ \langle g, v_h^l \rangle - \langle g_h,v_h \rangle + m(w-\xi_h^l,v_h^l ) + b(\eta_h^l-u,v_h^l) \right] \\
& \leq C(h^k \|g\|_{Y^*} + \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y).
\end{align*}
Thus combining these bounds we see
\begin{align}
|c&(G_h^lm(w_h^l-\xi_h^l,\cdot),G_h^l m(w_h^l-\xi_h^l,\cdot) ) - c(u_h^l-\eta_h^l,u_h^l-\eta_h^l )| \leq C \left(\mathbb{B}^2 + \mathbb{B}\|w_h^l-\xi_h^l\|_L \right). \label{eq:starter3rdLine}
\end{align}
Now, inserting \eqref{eq:initialFemSumBound}, \eqref{eq:starter1stLine} and \eqref{eq:starter3rdLine} into \eqref{eq:L2starterBound} and considering sufficiently small $h$, to absorb the final term appearing in \eqref{eq:starter1stLine} into the left hand side, produces
\begin{align*}
&\|w_h^l-\xi_h^l\|_L^2\leq C \left( \mathbb{B}^2 + \mathbb{B} \|w_h^l-\xi_h^l\|_L \right).
\end{align*}
Thus by Young's inequality
\[
\|w_h^l-\xi_h^l\|_L \leq C\mathbb{B}.
\]
Inserting this bound into \eqref{eq:genSumBoundWithL2} gives
\begin{align*}
\|u_h^l-\eta_h^l\|_X + \|w_h^l-\xi_h^l\|_Y \leq C \bigg[ \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + h^k(\|f\|_{X^*} + \|g\|_{Y^*})& \\
+ h^k( \|u_h^l\|_X + \|\eta_h^l\|_X + \|w_h^l\|_Y + \|\xi_h^l\|_Y )& \bigg].
\end{align*}
We can deduce an a priori estimate by setting $\eta_h=\xi_h=0$ as then
\[
\|u_h^l\|_X + \|w_h^l\|_Y \leq C \left[ \|u\|_X + \|w\|_Y +h^k(\|f\|_{X^*} + \|g\|_{Y^*} + \|u_h^l\|_X + \|w_h^l\|_Y ) \right],
\]
hence using the estimate in Theorem \ref{thm:genSplittingWellPosed}, for sufficiently small $h$,
\begin{equation}
\|u_h^l\|_X + \|w_h^l\|_Y \leq C \left[ \|f\|_{X^*} + \|g\|_{Y^*} \right].
\label{a_priori_estimate}
\end{equation}
Using this bound and the triangle inequality gives
\begin{align*}
\|u-u_h^l\|_X &+ \|w-w_h^l\|_Y \leq \|u - \eta_h^l\|_X + \|w-\xi_h^l\|_Y + \|u_h^l - \eta_h^l\|_X + \|w_h^l-\xi_h^l\|_Y \\
& \leq C \left[ \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y +h^k(\|f\|_{X^*} + \|g\|_{Y^*} + \|\eta_h^l\|_X + \|\xi_h^l\|_Y) \right].
\end{align*}
A further application of the triangle inequality and the a priori estimate in Theorem \ref{thm:genSplittingWellPosed} produces
\begin{align*}
\|\eta_h^l\|_X + \|\xi_h^l\|_Y &\leq \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + \|u\|_X + \|w\|_Y \\
& \leq \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y + C(\|f\|_{X^*} + \|g\|_{Y^*}).
\end{align*}
Thus for sufficiently small $h$ we have
\[
\|u-u_h^l\|_X + \|w-w_h^l\|_Y \leq C \left[ \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y +h^k(\|f\|_{X^*} + \|g\|_{Y^*}) \right].
\]
Now we obtain the required result by taking an infimum, as the left hand side is independent of $\xi_h$ and $\eta_h$.
\end{proof}
This bound forms the core of the error analysis in our applications. There, the existence of an interpolation operator allows this infimum bound to be turned into an error bound of the form $Ch^\alpha$, for some $0 \leq \alpha \leq k$. Exactly how large this $\alpha$ can be depends upon the regularity of the solution $(u,w)$. We now state this error bound in the abstract setting.
\begin{corollary} \label{cor:abstractOrderOfConv}
Suppose there exist Banach spaces $\tilde{X} \subset X$, $\tilde{Y} \subset Y$ such that $(u,w) \in \tilde{X} \times \tilde{Y}$ and with each embedding being continuous. Further assume there exists $\tilde{C},\alpha >0$, independent of $h$, such that
\[
\inf_{(\eta_h, \xi_h) \in X_h \times Y_h} \|u-\eta_h^l\|_X + \|w-\xi_h^l\|_Y \leq \tilde{C}h^\alpha\left( \|u\|_{\tilde{X}} + \|w\|_{\tilde{Y}} \right).
\]
Then, for sufficiently small $h$, there exists $C>0$, independent of $h$, such that
\[
\|u-u_h^l\|_X + \|w-w_h^l\|_Y \leq C h^{\min\left\{\alpha,k\right\}} \left( \|u\|_{\tilde{X}} + \|w\|_{\tilde{Y}} + \|f\|_{X^*} + \|g\|_{Y^*} \right).
\]
\end{corollary}
We can also establish higher order error bounds in weaker norms by using a duality argument similar to the Aubin-Nitsche trick. To do so we assume that $c(\cdot,\cdot)$ is symmetric and that the Banach spaces $X$ and $Y$ can be embedded into some larger Hilbert spaces which supply the appropriate weaker norms.
\begin{proposition} \label{prop:abstractDualityBound}
Under the assumptions of Corollary \ref{cor:abstractOrderOfConv}, further suppose $c(\cdot,\cdot)$ is symmetric and there exist Hilbert spaces $H$, $J$ such that $X \subset H$ and $Y \subset J$ with both embeddings being continuous. Let $(\psi,\varphi) \in X \times Y$ denote the unique solution to Problem \ref{prob:genSplittingProb} with right hand side
\[
\eta \mapsto \langle u-u_h^l, \eta \rangle_H \quad \text{ and } \quad \xi \mapsto \langle w-w_h^l, \xi \rangle_J.
\]
Assume that there exist Banach spaces $\hat{X} \subset X$ and $\hat{Y} \subset Y$ such that $(\psi,\varphi) \in \hat{X} \times \hat{Y}$ with both embeddings continuous and $\tilde{C}, \beta >0$ such that
\begin{equation} \label{eq:dualityApproxAssumption}
\inf_{(\eta_h, \xi_h) \in X_h \times Y_h} \|\psi-\eta_h^l\|_X + \|\varphi-\xi_h^l\|_Y \leq \tilde{C}h^\beta\left( \|\psi\|_{\hat{X}} + \|\varphi\|_{\hat{Y}} \right).
\end{equation}
Finally assume the regularity result
\begin{equation} \label{eq:dualityRegAssumption}
\|\psi\|_{\hat{X}} + \|\varphi\|_{\hat{Y}} \leq \hat{C}(\|u-u_h^l\|_H + \|w-w_h^l\|_J).
\end{equation}
Then, for sufficiently small $h$, there exists $C>0$, independent of $h$, such that
\[
\|u-u_h^l\|_H + \|w-w_h^l\|_J \leq C h^{\min\left\{\alpha + \beta,k\right\}} \left( \|u\|_{\tilde{X}} + \|w\|_{\tilde{Y}} + \|f\|_{X^*} + \|g\|_{Y^*} \right).
\]
\end{proposition}
\begin{proof}
Let $(\psi,\varphi)$ be as defined in the statement above. It follows, for any $(\eta_h, \xi_h) \in X_h \times Y_h$,
\begin{align*}
&\langle u-u_h^l, u-u_h^l \rangle_H + \langle w-w_h^l, w-w_h^l \rangle_J \\
&= c(u-u_h^l,\psi -\eta_h^l) + b(u-u_h^l,\varphi - \xi_h^l) + b(\psi-\eta_h^l,w-w_h^l) -m(w-w_h^l,\varphi - \xi_h^l) \\
&\quad + \langle f,\eta_h^l \rangle - \langle f_h,\eta_h \rangle + \langle g,\xi_h^l \rangle - \langle g_h,\xi_h \rangle - c(\eta_h^l,u_h^l) +c_h(u_h, \eta_h) \\
&\quad -b(u_h^l,\xi_h^l) + b_h(u_h,\xi_h) -b(\eta_h^l,w_h^l) + b_h(\eta_h,w_h) + m(w_h^l,\xi_h^l) - m_h(\xi_h,w_h).
\end{align*}
It follows, using the boundedness and approximation properties of the bilinear operators,
\begin{align*}
\langle u&-u_h^l, u-u_h^l \rangle_H + \langle w - w_h^l, w - w_h^l \rangle_J \\
\leq& C \Big[ (\|\psi-\eta_h^l\|_X + \|\varphi-\xi_h^l\|_Y)(\|u-u_h^l\|_X + \|w-w_h^l\|_Y) \\
& \quad + h^k(\|f\|_{X^*} + \|g\|_{Y^*} )(\|\psi-\eta_h^l\|_X + \|\varphi - \xi_h^l\|_Y + \|\psi\|_X + \|\varphi\|_Y ) \Big].
\end{align*}
Taking the infimum with respect to $(\eta_h, \xi_h)$ gives
\begin{align*}
&\| u - u_h^l \|_H^2 + \| w - w_h^l \|_J^2 \\
&\leq C ( \| u - u_h^l \|_H + \| w - w_h^l \|_J)
\left[ h^{\alpha + \beta} ( \| u \|_{\tilde{X}} + \| w \|_{\tilde{Y}}) + h^k ( \| f \|_{X^*} + \| g \|_{Y^*}) \right].
\end{align*}
The result is then deduced, for sufficiently small $h$, using Young's inequality.
\end{proof}
\section{Surface calculus and surface finite elements}\label{SurfCalc}
In this section we establish some notation with respect to surface PDEs and surface finite elements and study a particular bilinear form associated with a positive definite second order elliptic operator.
\subsection{Surface calculus}
We follow the development in \cite{DziEll13}. Let $\Gamma$ be a closed (that is compact and without boundary) $C^k$-hypersurface in $\mathbb R^3$, where
$k$ is as large as needed but at most $4$. There is a bounded domain $U \subset \mathbb{R}^3$ such that $\Gamma$ is the boundary of $U$.
The unit normal $\nu$ to $\Gamma$
that points away from this domain is called the outward unit normal.
We define $P:= \texttt{1}\!\!\texttt{l} - \nu \otimes \nu$ on $\Gamma$
to be, at each point of $\Gamma$, the projection onto the corresponding tangent space.
Here $\texttt{1}\!\!\texttt{l}$ denotes the identity matrix in $\mathbb{R}^{3}$.
For a differentiable function $f$ on $\Gamma$ we define the tangential gradient by
\begin{equation*}
\nabla_{\Gamma} f := P \nabla \overline{f},
\end{equation*}
where $\overline{f}$ is a differentiable extension of $f$ to an open neighbourhood of
$\Gamma \subset \mathbb{R}^{3}$. Here, $\nabla$ denotes the usual gradient in $\mathbb{R}^{3}$.
The above definition only depends on the values of $f$ on $\Gamma$.
In particular, it does not depend on the extension $\overline{f}$;
see Lemma 2.4 in \cite{DziEll13} for more details.
The components of the tangential gradient are denoted by $(\D 1 f, \D 2 f, \D {3} f)^T := \nabla_\Gamma f$.
For a differentiable vector field $v: \Gamma \rightarrow \mathbb{R}^3$ we define the divergence by
$\nabla_{\Gamma} \cdot v:= \D 1 v_1 + \D 2 v_2 + \D 3 v_3$.
For a twice differentiable function the Laplace-Beltrami operator is defined by
$$
\Delta_\Gamma f := \nabla_\Gamma \cdot \nabla_\Gamma f.
$$
The extended Weingarten map $\mathcal{H} := \nabla_{\Gamma} \nu$ is symmetric and has zero eigenvalue
in the normal direction. The eigenvalues $\kappa_i$, $i=1, 2$, belonging to the tangential eigenvectors
are the principal curvatures of $\Gamma$. The mean curvature $H$ is the sum of the principal curvatures,
that is $H := \sum_{i=1}^2 \kappa_i = \mbox{trace}\;( \mathcal{H}) = \nabla_{\Gamma} \cdot \nu$.
Note that our definition differs from the more common one by a factor of $2$.
We will denote the identity function on $\Gamma$ by $id_\Gamma$, that is $id_\Gamma(p) = p$ for all $p \in \Gamma$.
The mean curvature vector $H \nu$ satisfies $H \nu = - \Delta_\Gamma id_\Gamma$, see Section 2.3 in \cite{DecDziEll05}.
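As a simple concrete check of these identities (our own illustrative example, not taken from \cite{DecDziEll05}), consider the unit sphere $\Gamma = S^2$ with outward normal $\nu(p) = p$, so that $\nu = id_\Gamma$:

```latex
\begin{align*}
\mathcal{H} &= \nabla_\Gamma \nu = \nabla_\Gamma\, id_\Gamma = P,
\qquad
H = \nabla_\Gamma \cdot \nu = \mbox{trace}\,(P) = 2, \\
-\Delta_\Gamma\, id_\Gamma &= 2\, id_\Gamma = H\nu,
\end{align*}
```

consistent with the principal curvatures $\kappa_1 = \kappa_2 = 1$; componentwise, each coordinate $x_i$ is a degree-one spherical harmonic satisfying $\Delta_\Gamma x_i = -2x_i$.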
\subsection{Surface finite elements}
\label{sub_section_SFEM}
We will consider surface finite elements, see \cite{DziEll13}.
We assume that the surface $\Gamma$ is approximated by a polyhedral hypersurface
$$
\Gamma_h = \bigcup_{T \in \mathcal{T}_h} T,
$$
where $\mathcal{T}_h$ denotes the set of two-dimensional simplices in $\mathbb{R}^{3}$
which are assumed to form an admissible triangulation.
For $T \in \mathcal{T}_h$ the diameter of $T$ is $h(T)$ and the radius of the largest ball contained in $T$ is $\rho(T)$.
We set $h := \max_{T \in \mathcal{T}_h} h(T)$ and assume that the ratio between $h$ and $\rho(T)$ is uniformly bounded (independently of $h$).
We assume that $\Gamma_h$ is
contained in a strip $\mathcal{N}_\delta$ of width $\delta > 0$ around $\Gamma$ on which the
decomposition
$$
x = p + d(x) \nu(p), \quad p \in \Gamma
$$
is unique for all $x \in \mathcal{N}_\delta$. Here, $d(x)$ denotes the oriented distance function to $\Gamma$,
see Section 2.2 in \cite{DecDziEll05}. This defines a map $x \mapsto p(x)$ from $\mathcal{N}_\delta$ onto
$\Gamma$. Here we assume that the restriction $p_{|\Gamma_h}$
of this map to the polyhedral hypersurface $\Gamma_h$
is a bijection between $\Gamma_h$ and $\Gamma$. In addition, the vertices of the simplices
$T \in \mathcal{T}_h$ are assumed to lie on $\Gamma$. The generation of such triangulations for tori is rather standard, see for example \cite{DziEll13}.
The piecewise affine Lagrange finite element space on $\Gamma_h$ is
$$
\mathcal{S}_h := \left\{ \chi \in C(\Gamma_h) \;|\; \chi_{T} \in P^1(T)\;
\forall T \in \mathcal{T}_h \right\},
$$
where $P^1(T)$ denotes the set of polynomials of degree $1$ or less on $T$.
The Lagrange basis functions $\varphi_i$ of this space are uniquely determined by their values
at the so-called Lagrange nodes $q_j$, that is $\varphi_i(q_j) = \delta_{ij}$.
The associated Lagrange interpolation of a continuous function $f$ on $\Gamma_h$ is defined by
$$
I_h f := \sum_{i} f(q_i) \varphi_i.
$$
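To make the assembly of the surface finite element matrices concrete, the following sketch computes the local P1 mass and stiffness matrices of a single flat triangle of $\Gamma_h$ in $\mathbb{R}^3$; on a flat triangle the tangential gradient of a piecewise affine function is simply its in-plane gradient. The function name and the NumPy realisation are our own illustration and are not part of \cite{DziEll13}.

```python
import numpy as np

def local_matrices(p0, p1, p2):
    """Local P1 mass and stiffness matrices for a flat triangle in R^3.

    On a flat triangle T the tangential gradient of a P1 function is the
    ordinary in-plane gradient, so the local contributions to
    int_T grad u . grad v  and  int_T u v  take the usual form.
    """
    p = [np.asarray(p0, float), np.asarray(p1, float), np.asarray(p2, float)]
    n = np.cross(p[1] - p[0], p[2] - p[0])
    area = 0.5 * np.linalg.norm(n)
    nu = n / np.linalg.norm(n)              # unit normal of the triangle
    # Gradient of the barycentric coordinate lambda_i: the in-plane vector
    # orthogonal to the opposite edge, scaled so that lambda_i(p_i) = 1.
    grads = np.empty((3, 3))
    for i in range(3):
        a, b = p[(i + 1) % 3], p[(i + 2) % 3]
        d = np.cross(nu, b - a)             # in-plane, orthogonal to edge ab
        grads[i] = d / np.dot(d, p[i] - a)
    S = area * grads @ grads.T              # stiffness: int_T grad phi_i . grad phi_j
    M = area / 12.0 * (np.ones((3, 3)) + np.eye(3))  # exact P1 mass matrix
    return M, S
```

Summing these local contributions over all $T \in \mathcal{T}_h$ yields the global mass and stiffness matrices used below.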
We now introduce the lifted discrete spaces. We will use the standard lift operator as constructed in \cite[Section 4.1]{DziEll13}. The lift $f^l$ of a function $f: \Gamma_h \rightarrow \mathbb{R}$ onto $\Gamma$ is defined by
$$f^l(x) := (f \circ p_{|\Gamma_h}^{-1})(x)$$
for all $x \in \Gamma$.
The inverse map is denoted by $f^{-l} := f \circ p$. The lifted finite element space is
$$\mathcal{S}_h^l := \left\{ \chi^l \;|\; \chi \in \mathcal{S}_h \right\}.$$
Finally, the lifted Lagrange interpolation $I_h^l: C(\Gamma) \rightarrow \mathcal{S}_h^l$ is given by
$I_h^l f := (I_h f^{-l})^l$.
In the next section we introduce a bilinear form $b$ on $\Gamma$ for which we prove that the lifted discrete spaces satisfy the conditions in Assumption \ref{def:liftedDiscreteSetup} when we set $X_h^l := Y_h^l := \mathcal{S}_h^l$.
To be more precise here, for a sequence of triangulated surfaces $(\Gamma_{h_n})_{n \in \mathbb{N}}$ with maximal diameter $h_n \searrow 0$ for $n \rightarrow \infty$
we set $X_n := X_{h_n}^l = \mathcal{S}_{h_n}^l$ and $Y_n := Y_{h_n}^l = \mathcal{S}_{h_n}^l$.
\section{A useful bilinear form $b(\cdot,\cdot)$}\label{BilinearFormb}
Throughout this section let
$b:X \times Y \rightarrow \mathbb{R}$ be given by
\[
b(u,v) := \int_\Gamma \nabla_\Gamma u \cdot \nabla_\Gamma v + \lambda uv \;do
\]
for appropriate Banach spaces $X$ and $Y$ and a positive constant $\lambda$.
\subsection{Inf-sup conditions}
\begin{proposition} \label{prop:infSupsforb}
Suppose $1< p \leq 2 \leq q < \infty$ are chosen such that $1/p + 1/q =1$. Let $\lambda > 0$,
$X=W^{1,q}(\Gamma)$ and $Y=W^{1,p}(\Gamma)$. There exist $\beta, \gamma > 0$ such that
\[
\beta \|\eta\|_X \leq \sup_{\xi \in Y} \frac{b(\eta,\xi)}{\|\xi\|_Y} \;\; \forall \eta \in X \quad \text{and} \quad
\gamma \|\xi\|_Y \leq \sup_{\eta \in X} \frac{b(\eta,\xi)}{\|\eta\|_X} \;\; \forall \xi \in Y.
\]
\end{proposition}
\begin{proof}
Consider the map $A:W^{1,p}(\Gamma) \rightarrow W^{1,q}(\Gamma)^*$ given, for each $u \in W^{1,p}(\Gamma)$, by
\[
A(u)[v]:= b(v,u).
\]
Evidently $A$ is well-defined and linear; by H\"{o}lder's inequality it is also continuous. We will now show that it is an isomorphism, beginning by showing that $A$ is surjective. Consider the inverse Laplacian type map $T:L^2(\Gamma) \rightarrow H^2(\Gamma)$, where, for $f\in L^2(\Gamma)$, $Tf$ is defined to be the unique solution in $H^1(\Gamma)$ to
\[
b(Tf,v) = \int_\Gamma fv \quad \forall v \in H^1(\Gamma).
\]
That $T$ is well defined, continuous and a bijection follows by elliptic regularity. It is immediate that $T^{-1}=-\Delta_\Gamma + \lambda Id$.
Now suppose $F \in W^{1,q}(\Gamma)^*$ and set $g:=T^*(F) \in L^2(\Gamma)$; this is well defined as $W^{1,q}(\Gamma)^* \subset H^2(\Gamma)^*$.
For any $\varphi \in C^\infty_0(\Gamma)$ and first order derivative $\D\alpha$ it holds
\begin{align*}
&\int_\Gamma g\D\alpha\varphi = \int_\Gamma g\D\alpha T^{-1} T \varphi \\
&=\int_\Gamma g \left( T^{-1} \D\alpha T \varphi - \nu_\alpha (2\mathcal{H}:\nabla_\Gamma \nabla_\Gamma T\varphi + \nabla_\Gamma H \cdot \nabla_\Gamma T\varphi) -\left[(2\mathcal{H}^2-H\mathcal{H})\nabla_\Gamma T\varphi \right]_\alpha \right).
\end{align*}
The second line is due to a commutation relation for $\D \alpha$ and $\Delta_\Gamma$, which follows from \cite[Lemma 2.6]{DziEll13}. To be more explicit, by summing over
repeated indices we obtain for a twice continuously differentiable function $u$ on $\Gamma$
\begin{align*}
\D \alpha \Delta_\Gamma u &= \D \alpha \D \beta \D \beta u = \D \beta \D \alpha \D \beta u + (\mathcal{H}_{\beta \gamma} \nu_\alpha - \mathcal{H}_{\alpha \gamma} \nu_\beta)
\D \gamma \D \beta u
\\
&= \D \beta ( \D \beta \D \alpha u + (\mathcal{H}_{\beta \gamma} \nu_\alpha - \mathcal{H}_{\alpha\gamma} \nu_\beta ) \D \gamma u)
+ (\mathcal{H}_{\beta \gamma} \nu_\alpha - \mathcal{H}_{\alpha \gamma} \nu_\beta)
\D \gamma \D \beta u
\\
&= \Delta_\Gamma \D \alpha u + (\nu_\alpha \D \beta \mathcal{H}_{\beta \gamma} + \mathcal{H}_{\beta \gamma} \mathcal{H}_{\beta \alpha} - H \mathcal{H}_{\alpha \gamma}) \D \gamma u
+ 2 \nu_\alpha \mathcal{H}_{\beta \gamma} \D \beta \D \gamma u
- \nu_\beta \mathcal{H}_{\alpha \gamma} \D \gamma \D \beta u,
\end{align*}
and
\begin{align*}
& \D \beta \mathcal{H}_{\beta \gamma} = \D \beta \D \gamma \nu_\beta = \D \gamma \D \beta \nu_\beta + (\mathcal{H}_{\gamma \rho} \nu_\beta - \mathcal{H}_{\beta \rho} \nu_\gamma) \D \rho \nu_\beta
= \D \gamma H - \mathcal{H}_{\beta \rho} \mathcal{H}_{\rho \beta} \nu_\gamma,
\\
& - \nu_\beta \mathcal{H}_{\alpha \gamma} \D \gamma \D \beta u = - \nu_\beta \mathcal{H}_{\alpha \gamma} (\mathcal{H}_{\beta \rho} \nu_\gamma - \mathcal{H}_{\gamma \rho} \nu_\beta)
\D \rho u = \mathcal{H}_{\alpha \gamma} \mathcal{H}_{\gamma \rho} \D \rho u.
\end{align*}
It then follows that
\begin{align*}
&\int_\Gamma -g\D\alpha\varphi + H\nu_\alpha g \varphi
\\ &=\langle F, T\left(H\nu_\alpha \varphi + \nu_\alpha (2\mathcal{H}:\nabla_\Gamma \nabla_\Gamma T\varphi + \nabla_\Gamma H \cdot \nabla_\Gamma T\varphi) + \left[(2\mathcal{H}^2-H\mathcal{H})\nabla_\Gamma T\varphi \right]_\alpha \right) \rangle \\
& \quad -\langle F, \D\alpha T\varphi \rangle.
\end{align*}
Notice $T \in \mathcal{L}(L^q(\Gamma),W^{2,q}(\Gamma))$, $\D\alpha \in \mathcal{L}(W^{2,q}(\Gamma),W^{1,q}(\Gamma))$ and
thus we may extend the map $\varphi \mapsto -\langle F, \D\alpha T\varphi \rangle$ to $L^q(\Gamma)$ and that extension lies in $L^q(\Gamma)^*$. The first term may be treated in a similar manner. It follows, identifying $L^q(\Gamma)^*$ with $L^p(\Gamma)$, that there exists $g_\alpha \in L^p(\Gamma)$ such that
\[
\int_\Gamma -g\D\alpha\varphi + H\nu_\alpha g \varphi = \int_\Gamma g_\alpha \varphi \quad \forall \varphi \in C^\infty_0(\Gamma).
\]
Hence $g \in W^{1,p}(\Gamma)$.
Now, for the constructed $g \in W^{1,p}(\Gamma)$ it holds, for any $v \in H^2(\Gamma)$,
\[
\int_\Gamma g(-\Delta_\Gamma v + \lambda v) = \int_\Gamma T^*F \, T^{-1}v = \langle F, v \rangle.
\]
Integrating the left hand side by parts and using density, the above equation implies, for any $v \in W^{1,q}(\Gamma)$,
\[
A(g)[v] = \int_\Gamma \nabla_\Gamma g \cdot \nabla_\Gamma v + \lambda gv = \langle F,v \rangle.
\]
Hence $A(g)=F$ and thus $A$ is surjective. To show $A$ is injective, suppose $A(u)=0$; then in particular,
\[
0=A(u)[Tu] = \int_\Gamma u^2 \implies u = 0.
\]
Thus $A$ is a bijection and by the bounded inverse theorem $A^{-1}$ is also bounded; it follows
\[
\|\xi\|_Y \leq \|A^{-1}\| \|A\xi\|_{X^*} \quad \forall \xi \in Y.
\]
Hence we obtain
\[
\|A^{-1}\|^{-1} \|\xi \|_Y \leq \sup_{\eta \in X } \frac{b(\eta,\xi)}{\|\eta\|_X }.
\]
Additionally, $(A^*)^{-1}=(A^{-1})^*$ is bounded, thus similarly
\[
\|(A^*)^{-1}\|^{-1} \|\eta\|_X \leq \sup_{\xi \in Y } \frac{A^*(\eta)[\xi]}{\|\xi\|_Y }.
\]
Finally notice $A^*(\eta)[\xi]=A(\xi)[\eta]=b(\eta,\xi)$, which establishes the remaining inf-sup inequality.
Here, we have implicitly made use of the canonical isomorphism between $X$ and $X^{**}$.
\end{proof}
\subsection{Ritz projection}
For the approximation and uniform convergence conditions \eqref{eq:YTildeSupConv} related to our bilinear form $b(\cdot,\cdot)$ we will make use of the Ritz projection, which is defined in the lemma below.
\begin{lemma} \label{lem:ritzProjection}
Suppose $\lambda > 0$, let $1<r\leq\infty$, $X:=W^{1,r}(\Gamma)$ and $Y:= W^{1,s}(\Gamma)$
where $1 \leq s < \infty$ is chosen such that $1/r + 1/s =1$.
For each $h>0$, let $X_h^l:=Y_h^l:=\mathcal{S}_h^l$. There exists a bounded linear map $\Pi_h:W^{1,r}(\Gamma) \rightarrow (\mathcal{S}_h^l,\|\cdot \|_{1,r})$ given by
\[
b(\Pi_h \varphi, v_h^l) = b(\varphi, v_h^l) \quad \forall v^l_h \in \mathcal{S}^l_h.
\]
There exists $C(r)>0$, independent of $h$, such that
\[
\| \Pi_h \psi\|_{1,r} \leq C(r) \|\psi\|_{1,r} \quad \forall \psi \in W^{1,r}(\Gamma).
\]
Finally, it holds that
\[
\sup_{\psi \in W^{1,r}(\Gamma)} \frac{\| \psi - \Pi_h \psi \|_{0,2}}{\| \psi \|_{1,r}} \rightarrow 0 \quad \textnormal{as} \quad h \searrow 0.
\]
\end{lemma}
\begin{proof}
One can see that the Ritz projection $\Pi_h$ is well defined, as this is equivalent to the invertibility of $S + \lambda M$, where $S$ and $M$ are the usual stiffness and mass matrices
for lifted finite elements. The linearity of $\Pi_h$ is obvious. It is straightforward to show
that $\| \Pi_h \psi \|_{1,2} \leq C(\lambda) \| \psi \|_{1,2}$ for all $\psi \in W^{1,2}(\Gamma)$. From formula (4.16) in \cite{Pow17}, we learn that
$\| \Pi_h \psi \|_{1,\infty} \leq C \| \psi \|_{1,\infty}$. From the interpolation of Sobolev spaces, see e.g.~Corollary 5.13 in \cite{BenSha88}, we can deduce
that $\| \Pi_h \psi \|_{1,r} \leq C \| \psi \|_{1,r}$ for all $2 \leq r \leq \infty$.
Observe that $b( \eta, \Pi_h \psi) = b(\Pi_h \eta, \Pi_h \psi) = b(\Pi_h \eta, \psi)$. Then, using Proposition \ref{prop:infSupsforb} with $q=r$ and $p=s$ for $1< s \leq 2$, it follows that
$$
\gamma \| \Pi_h \psi \|_{1,s} \leq \sup_{\eta \in W^{1,r}(\Gamma)} \frac{b( \eta, \Pi_h \psi)}{\| \eta \|_{1,r}}
\leq \sup_{\eta \in W^{1,r}(\Gamma)} \frac{b( \Pi_h \eta, \psi)}{\| \eta \|_{1,r}}
\leq C \| \psi \|_{1,s}
$$
so that we indeed have $\| \Pi_h \psi \|_{1,r} \leq C \| \psi \|_{1,r}$ for all $1 < r \leq \infty$.
We next show that for $2 \leq s < \infty$,
$$
\inf_{v_h^l \in \mathcal{S}_h^l} \| \psi - v_h^l\|_{1,s} \leq C h^{2/s} \| \psi \|_{2,2} \qquad \forall \psi \in H^2(\Gamma).
$$
Using the equivalence of the norms on the surfaces $\Gamma$ and $\Gamma_h$, see \cite{Dem09}, we can lift the usual interpolation estimates for the Lagrange interpolation operator $I_h$
onto $\Gamma$. We hence obtain,
\begin{align*}
\| \psi - I_h^l \psi \|_{1,s} &= \left( \sum_{T \in \mathcal{T}_h^l} \| \psi - I_h^l \psi \|_{1,s,T}^s \right)^{1/s}
\\
&\leq C \left( \sum_{T \in \mathcal{T}_h^l } |T|^{1 - s/2} h^s \| \psi \|_{2,2,T}^s \right)^{1/s},
\end{align*}
where we have summed over all curved triangles $T$ of the lifted triangulation $\mathcal{T}_h^l$ of $\Gamma$.
Under the assumptions on the triangulation $\mathcal{T}_h$ made in Section \ref{sub_section_SFEM}, it holds that $Ch^2 \leq |T|$.
Hence, $|T|^{1 - s/2} \leq C h^{2 - s}$ and
\begin{align*}
\| \psi - I_h^l \psi \|_{1,s} \leq C h^{2/s} \left( \sum_{T \in \mathcal{T}_h^l} \| \psi \|_{2,2,T}^s \right)^{1/s}.
\end{align*}
Using the estimate $(a^s + b^s) \leq (a^2 + b^2)^{s/2}$, which holds for all $a,b \geq 0$, we finally conclude that
\begin{equation}
\| \psi - I_h^l \psi \|_{1,s} \leq C h^{2/s} \left( \sum_{T \in \mathcal{T}_h^l} \| \psi \|_{2,2,T}^2 \right)^{1/2} = C h^{2/s} \| \psi \|_{2,2}.
\label{interpolation_estimate}
\end{equation}
Now, for $\psi \in W^{1,r}(\Gamma) \subset L^2(\Gamma)$ with $1 < r \leq \infty$, let $\varphi \in H^2(\Gamma)$ be the solution to
$$
b(\varphi, v) = \int_{\Gamma} (\psi - \Pi_h \psi) v \; do \qquad \forall v \in H^1(\Gamma).
$$
It follows that
$$
\| \psi - \Pi_h \psi \|_{0,2}^2 = b(\varphi, \psi - \Pi_h \psi) = b(\varphi - v_h^l, \psi - \Pi_h \psi),
$$
where $v_h^l \in \mathcal{S}_h^l$ is arbitrary. For $1 < r \leq 2$ and $s := r /(r - 1) \in [2,\infty)$, we obtain
\begin{align}
\| \psi - \Pi_h \psi \|_{0,2}^2 &\leq C \inf_{v_h^l \in \mathcal{S}_h^l} \| \varphi - v_h^l \|_{1,s} \| \psi - \Pi_h \psi \|_{1,r} \leq C h^{2/s} \| \varphi \|_{2,2} \| \psi \|_{1,r}
\nonumber \\
&\leq C h^{2/s} \| \psi - \Pi_h \psi \|_{0,2} \| \psi \|_{1,r}.
\label{interpolation_estimate_2}
\end{align}
On the other hand, for $2 \leq r \leq \infty$, we can conclude that
\begin{align*}
\| \psi - \Pi_h \psi \|_{0,2}^2 &\leq C \inf_{v_h^l \in \mathcal{S}_h^l} \| \varphi - v_h^l \|_{1,2} \| \psi - \Pi_h \psi \|_{1,2}
\leq Ch \| \varphi \|_{2,2} \| \psi \|_{1,2}
\nonumber \\
&\leq Ch \| \psi - \Pi_h \psi \|_{0,2} \| \psi \|_{1,2} \leq Ch \| \psi - \Pi_h \psi \|_{0,2} \| \psi \|_{1,r}.
\end{align*}
Hence, for any $1 < r \leq \infty$,
$$
\sup_{\psi \in W^{1,r}(\Gamma)} \frac{\| \psi - \Pi_h \psi\|_{0,2}}{\| \psi \|_{1,r}} \rightarrow 0 \;\; \textnormal{as} \;\; h \searrow 0.
$$
\end{proof}
\noindent
For the choices $X = W^{1,q}(\Gamma)$ and $Y=W^{1,p}(\Gamma)$ with $1 < p \leq 2 \leq q < \infty$ such that $1/p + 1/q =1$ as well as $L = L^2(\Gamma)$,
the uniform convergence condition \eqref{eq:YTildeSupConv} now follows by
choosing $I_n := \Pi_{h_n}$ and setting $r=p$ in the lemma above.
Furthermore, the conditions $\| \eta_n - \eta \|_X \rightarrow 0$ and $\| \xi_n - \xi \|_Y \rightarrow 0$ in Assumption \ref{def:liftedDiscreteSetup} hold for the following reasons.
First, $\eta$ and $\xi$ can be approximated sufficiently well by smooth functions $\tilde{\eta}$ and $\tilde{\xi}$, respectively.
Then, $\tilde{\eta}$ and $\tilde{\xi}$ are approximated by $I_h^l \tilde{\eta}$ and $I_h^l \tilde{\xi}$.
For $\tilde{\eta}$ this follows from (\ref{interpolation_estimate}) by choosing $s=q \geq 2$.
For $\tilde{\xi}$ the estimate
$\| \tilde{\xi} - I_h^l \tilde{\xi} \|_Y = \| \tilde{\xi} - I_h^l \tilde{\xi} \|_{1,p} \leq C \| \tilde{\xi} - I_h^l \tilde{\xi} \|_{1,2}
\leq C h \| \tilde{\xi} \|_{2,2}$ implies convergence.
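The qualitative behaviour established in Lemma \ref{lem:ritzProjection} can be observed numerically. The sketch below (our own illustration, with assumed names) computes the Ritz projection on the simplest closed "surface", a circle with uniform periodic P1 elements, where $b(u,v)=\int u'v' + \lambda uv$; the assembled system is exactly the matrix equation with $S+\lambda M$ mentioned in the proof, and the $L^2$ error for $\psi(t)=\sin t$ decays like $h^2$.

```python
import numpy as np

def ritz_error(n, lam=1.0):
    """L2 error of the Ritz projection of psi(t) = sin(t) onto periodic P1
    elements on the circle -- a one-dimensional analogue of the surface case.
    Pi_h solves (S + lam*M) x = rhs with rhs_i = b(psi, phi_i), mirroring
    the defining relation b(Pi_h psi, v_h) = b(psi, v_h)."""
    h = 2 * np.pi / n
    i = np.arange(n)
    S = np.zeros((n, n)); M = np.zeros((n, n))
    S[i, i] = 2.0 / h; S[i, (i + 1) % n] = S[i, (i - 1) % n] = -1.0 / h
    M[i, i] = 2.0 * h / 3; M[i, (i + 1) % n] = M[i, (i - 1) % n] = h / 6
    psi, dpsi = np.sin, np.cos
    # assemble rhs_i = int psi' phi_i' + lam psi phi_i (2-point Gauss per element)
    rhs = np.zeros(n)
    gauss = np.array([-1.0, 1.0]) / np.sqrt(3.0)
    for j in range(n):
        for xi in gauss:
            t = j * h + h * (xi + 1) / 2
            wq = h / 2                                 # quadrature weight
            N = [1 - (xi + 1) / 2, (xi + 1) / 2]       # the two hat functions
            dN = [-1.0 / h, 1.0 / h]
            for k, node in enumerate([j, (j + 1) % n]):
                rhs[node] += wq * (dpsi(t) * dN[k] + lam * psi(t) * N[k])
    x = np.linalg.solve(S + lam * M, rhs)
    # L2 error of the projection, by the same quadrature
    err2 = 0.0
    for j in range(n):
        for xi in gauss:
            t = j * h + h * (xi + 1) / 2
            uh = x[j] * (1 - (xi + 1) / 2) + x[(j + 1) % n] * (xi + 1) / 2
            err2 += h / 2 * (psi(t) - uh) ** 2
    return np.sqrt(err2)
```

Halving $h$ roughly quarters the error, in line with the second order $L^2$ bound used in the proof; this is only a sketch of the one-dimensional analogue, not of the surface computation itself.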
\subsection{Discrete inf-sup condition}
To prove the discrete inf-sup conditions we require Fortin's criterion. We use the following form of the criterion, which follows from \cite[Lemma 4.19]{ErnGue13}.
\begin{lemma} \label{lem:fortinsCriterion}
Suppose $V$ and $W$ are Banach spaces and $\tilde{b} \in \mathcal{L}(V \times W ; \mathbb{R})$, and suppose there exists $\beta > 0$ such that
\[
\beta \leq \inf_{\xi \in W \setminus \left\{ 0\right\} } \sup_{\eta \in V \setminus \left\{ 0\right\}} \frac{\tilde{b}(\eta,\xi)}{\|\eta\|_V \|\xi\|_W}.
\]
Let $V_h \subset V$ and $W_h \subset W$ with $W_h$ reflexive.
If there exists $\delta > 0$ such that, for all $\eta \in V$, there exists $\Pi_h(\eta) \in V_h$ such that
\[
\forall \xi_h \in W_h, \quad \tilde{b}(\eta,\xi_h)=\tilde{b}(\Pi_h(\eta),\xi_h) \text{ and } \|\Pi_h(\eta)\|_V \leq \delta \|\eta\|_V,
\]
then
\[
\frac{\beta}{\delta} \leq \inf_{\xi_h \in W_h \setminus \left\{ 0\right\} } \sup_{\eta_h \in V_h \setminus \left\{ 0\right\}} \frac{\tilde{b}(\eta_h,\xi_h)}{\|\eta_h\|_V \|\xi_h\|_W}.
\]
\end{lemma}
We can now prove the discrete inf-sup conditions for $b(\cdot,\cdot)$.
\begin{lemma} \label{lem:discreteInfSupsForb}
Under the assumptions of Lemma \ref{lem:ritzProjection} (for $1< r < \infty$), there exist $\tilde{\beta}, \tilde{\gamma} > 0$, independent of $h$, such that
\[
\tilde{\beta} \|\eta_h^l\|_X \leq \sup_{\xi_h \in Y_h} \frac{b(\eta_h^l,\xi_h^l)}{\|\xi_h^l\|_Y} \;\; \forall \eta_h^l \in X_h^l \quad \text{and} \quad
\tilde{\gamma} \|\xi_h^l\|_Y \leq \sup_{\eta _h \in X_h} \frac{b(\eta_h^l,\xi_h^l)}{\|\eta_h^l\|_X} \;\; \forall \xi_h^l \in Y_h^l.
\]
\end{lemma}
\begin{proof}
We apply Fortin's criterion (Lemma \ref{lem:fortinsCriterion}). Setting $V=W^{1,p}(\Gamma)$, $W=W^{1,q}(\Gamma)$, $V_h=W_h=\mathcal{S}_h^l$ and using the Ritz projection $\Pi_h$ constructed above in Lemma \ref{lem:ritzProjection} proves the first inf-sup inequality.
Similarly, setting $W=W^{1,p}(\Gamma)$ and $V=W^{1,q}(\Gamma)$ proves the second inf-sup inequality.
\end{proof}
\section{Applications to second order splitting of fourth order surface PDEs}\label{PDEex}
\subsection{A standard fourth order problem}
In this section we apply the abstract theory to the splitting of a fairly general fourth order surface PDE. That is, we consider solving a problem of the form
\[
\Delta_\Gamma^2 u - \nabla_\Gamma \cdot (P\mathcal{B}P\nabla_\Gamma u)
+\mathcal{C}u = \mathcal{F},
\]
posed over $\Gamma \subset \mathbb{R}^3$, a closed 2-dimensional hypersurface. This PDE results from minimising the functional
\[
\frac{1}{2} \int_\Gamma (\Delta_\Gamma u)^2 + (\mathcal{B}\nabla_\Gamma u) \cdot \nabla_\Gamma u + \mathcal{C}u^2 - 2 \mathcal{F} u \;do
\]
(for symmetric $\mathcal{B}$) over $H^2(\Gamma)$.
We make the following assumptions on $\mathcal{B}$ and $\mathcal{C}$ to ensure that the equation is well posed.
\begin{assumption}
Let $\mathcal{B}: \Gamma \rightarrow \mathbb{R}^{3\times 3}$ be measurable and symmetric, and suppose there exists $\lambda_M >0$ satisfying
\[
\|\mathcal{B}(x)\| \leq \lambda_M \; \forall x \in \Gamma.
\]
Let $\mathcal{C}: \Gamma \rightarrow \mathbb{R}$ be measurable and suppose there exist $\mathcal{C}_m,\mathcal{C}_M >0$ such that
\[
\mathcal{C}_m < \mathcal{C}(x) < \mathcal{C}_M \;\; \forall x \in \Gamma.
\]
There exists $\Lambda >0$ such that
\[
\frac{\Lambda \lambda_M}{2} < \mathcal{C}_m \quad \text{ and } \quad \frac{\lambda_M}{2\Lambda} < 1.
\]
Finally we suppose $\mathcal{F} \in L^2(\Gamma)$.
\end{assumption}
\begin{remark}
Note that in the above $P\nabla_\Gamma u$ can be replaced by $\nabla_\Gamma u$, since $P$ projects onto the tangent space, and that
$\nabla_\Gamma\cdot(P\mathcal B P\nabla_\Gamma u)=\nabla_\Gamma\cdot(\mathcal B \nabla_\Gamma u) - H \mathcal{B}\nabla_\Gamma u \cdot \nu$. Moreover, we may write $\mathcal B$ rather than $P\mathcal B P$ provided that, for each $x\in \Gamma$, $\mathcal B(x)$ maps the tangent space $\mathcal T_x$ into itself. \end{remark}
The well-posedness of the PDE follows by consideration of the weak formulation of the problem.
\begin{problem} \label{prob:4oSplittingexample}
Find $u \in H^2(\Gamma)$ such that
\[
\int_\Gamma \Delta_\Gamma u \Delta_\Gamma v + \mathcal{B} \nabla_\Gamma u \cdot \nabla_\Gamma v + \mathcal{C} uv \; do = \int_\Gamma \mathcal{F} v \;do \;\; \forall v \in H^2(\Gamma).
\]
\end{problem}
The assumptions we make on $\mathcal{B}$ and $\mathcal{C}$ ensure that the bilinear form is coercive on $H^2(\Gamma) \times H^2(\Gamma)$, and hence the problem is well posed by the Lax-Milgram theorem. Here we have chosen an $L^2$ right hand side; one could make a more general choice, but we restrict to $L^2$ here as we will later show that in this case the numerical method attains the optimal order of convergence.
We will now formulate an appropriate splitting method whose solution coincides with that of the fourth order problem. The coupled PDEs in distributional form are
\begin{align}
&-\Delta_\Gamma w+w -\nabla_\Gamma\cdot((P\mathcal BP-2\texttt{1}\!\!\texttt{l})\nabla_\Gamma u)+(\mathcal C-1)u=\mathcal F\\
& -\Delta_\Gamma u +u-w=0.
\end{align}
This motivates solving Problem \ref{prob:genSplittingProb} with the following definition of the data. Note that $\mathcal G=0$ for the above PDE system.
\begin{definition} \label{def:4oSplittingExample}
With respect to Definition \ref{def:genSplitSetup}, set $L=L^2(\Gamma)$ and $X=Y=H^1(\Gamma)$. Set the bilinear functionals
\begin{align*}
c(u,v) :=& \int_{\Gamma} (\mathcal{B} -2\texttt{1}\!\!\texttt{l} )\nabla_{\Gamma} u \cdot \nabla_{\Gamma} v + \left(\mathcal{C}-1 \right) uv \;do, \\
b(u,v):=& \int_\Gamma \nabla_\Gamma u \cdot \nabla_\Gamma v + uv \;do , \\
m(w,v):=& \int_\Gamma wv \;do.
\end{align*}
Finally, take the data to be
\[
f := m(\mathcal{F},\cdot) \qquad \text{and} \qquad g:=m(\mathcal{G},\cdot),
\]
with $\mathcal{F}, \mathcal{G} \in L^2(\Gamma)$.
\end{definition}
We can now use the abstract theory to show well posedness for this problem.
\begin{proposition} \label{prop:4oWPandBound}
There exists a unique solution to Problem \ref{prob:genSplittingProb} with the spaces and functionals as chosen in Definition \ref{def:4oSplittingExample}. Moreover
for $\mathcal{B} \in W^{1, \infty}(\Gamma)$ we have the regularity result $u,w \in H^2(\Gamma)$ with the estimate
\[
\|u\|_{H^2(\Gamma)} + \|w\|_{H^2(\Gamma)} \leq C \left( \|\mathcal{F}\|_{L^2(\Gamma)} + \|\mathcal{G}\|_{L^2(\Gamma)} \right).
\]
Furthermore, when $\mathcal{G}=0$ the solution $u$ coincides with the solution of Problem \ref{prob:4oSplittingexample}.
\end{proposition}
\begin{proof}
For the well posedness we apply Theorem \ref{thm:genSplittingWellPosed}. The assumptions required in Definition \ref{def:genSplitSetup} are straightforward to check; the inf-sup conditions are established in Proposition \ref{prop:infSupsforb} ($\lambda=1$, $p=q=2$). For the coercivity relation \eqref{eq:uwCoercivity} notice that
\[
b(u,\xi) = m(w,\xi) \;\forall \xi \in Y \implies u \in H^2(\Gamma) \text{ and } w = -\Delta_\Gamma u + u,
\]
hence we deduce
\[
c(u,u) + m(w,w) = \int_\Gamma (\Delta_\Gamma u )^2 + \mathcal{B} \nabla_\Gamma u \cdot \nabla_\Gamma u + \mathcal{C} u^2 \; do \geq C \int_\Gamma ( \Delta_\Gamma u)^2 + u^2 \; do \geq C\|w\|_{0,2}^2.
\]
For the assumptions made in Assumption \ref{def:liftedDiscreteSetup}, we take the lifted discrete spaces described in the previous section and the required discrete inf-sup inequalities follow from Lemma \ref{lem:discreteInfSupsForb}. Finally, \eqref{eq:YTildeSupConv} holds by Lemma \ref{lem:ritzProjection}.
We thus have well posedness by Theorem \ref{thm:genSplittingWellPosed}. The regularity estimate follows by applying elliptic regularity to each of the equations of the system. Finally, when $\mathcal{G}=0$, by elliptic regularity we have
\[
w = -\Delta_\Gamma u + u.
\]
It follows, for any $v \in H^2(\Gamma)$,
\begin{align*}
\int_\Gamma \mathcal{F}v \;do = c(u,v) + b(v,w) = \int_\Gamma \Delta_\Gamma u \Delta_\Gamma v + \mathcal{B} \nabla_\Gamma u \cdot \nabla_\Gamma v + \mathcal{C} uv \; do.
\end{align*}
\end{proof}
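As a sanity check on the splitting in Definition \ref{def:4oSplittingExample}, the following sketch (our own illustration; the function name is an assumption) solves the one-dimensional periodic analogue $u'''' + u = \mathcal{F}$ on the circle, i.e.\ the case $\mathcal{B}=0$, $\mathcal{C}=1$, whose exact solution for $\mathcal{F}=2\sin t$ is $u=\sin t$, $w = -u''+u = 2\sin t$. The discrete system is the $2\times 2$ block matrix form of $c(u_h,v)+b(v,w_h)=m(\mathcal{F},v)$ and $b(u_h,\xi)-m(w_h,\xi)=0$.

```python
import numpy as np

def solve_splitting(n):
    """Second order splitting of u'''' + u = F on the circle with periodic P1
    elements (1D analogue of the fourth order problem with B = 0, C = 1):
        c(u,v) + b(v,w) = (F, v),   b(u,xi) - m(w,xi) = 0,
    where, in matrix form, c = -2*S, b = S + M and m = M."""
    h = 2 * np.pi / n
    i = np.arange(n)
    t = i * h
    S = np.zeros((n, n)); M = np.zeros((n, n))
    S[i, i] = 2.0 / h; S[i, (i + 1) % n] = S[i, (i - 1) % n] = -1.0 / h
    M[i, i] = 2.0 * h / 3; M[i, (i + 1) % n] = M[i, (i - 1) % n] = h / 6
    B = S + M
    A = np.block([[-2.0 * S, B], [B, -M]])
    # right hand side: (F, v) with F = 2 sin(t) interpolated at the nodes
    rhs = np.concatenate([M @ (2.0 * np.sin(t)), np.zeros(n)])
    sol = np.linalg.solve(A, rhs)
    return t, sol[:n], sol[n:]          # nodes, u_h, w_h
```

On refinement the nodal errors of both $u_h$ and $w_h$ decrease at second order; this is a sketch of the flat periodic analogue, not of the surface computation treated in the text.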
\subsection{Clifford torus problems}
We now apply the above theory to produce a splitting method for a pair of fourth order problems,
based on the second variation of the Willmore functional, posed on a Clifford torus $\Gamma = T(R,R\sqrt{2})$.
The problems are derived and motivated in Section 6.1.2 of \cite{EllFriHob17}. In order to state the problems we need the following definitions.
\begin{definition} \label{def:torusSplitSetup}
With respect to Definition \ref{def:genSplitSetup}, set the spaces to be $L=L^2(\Gamma)$, $X=W^{1,q}(\Gamma)$ and $Y=W^{1,p}(\Gamma)$, where $1< p <2 < q < \infty$ such that $1/p + 1/q =1$. Let $\delta,\rho>0$ be sufficiently small.
We set the bilinear functionals to be as follows,
\begin{align*}
&c(u,v):= r_1(u,v) + r_2(u,v),~ b(u,v):= \int_\Gamma \nabla_\Gamma u \cdot \nabla_\Gamma v + uv \;do,~ m(v,w):= \int_\Gamma vw \;do\\
&\mbox{where}\\
&r_1(u,v) := \frac{1}{\rho} \sum_{k=1}^K \int_\Gamma u g_k \;do \int_\Gamma v g_k \;do + \chi_{\text{con}}\frac{1}{\delta} \sum_{k=1}^N u(X_k)v(X_k), ~r_2(u, v) := \int_{\Gamma} \nabla_{\Gamma} u \cdot\mathcal B\nabla_{\Gamma} v + \mathcal C uv \; do, \\
&\mbox{and}\\
&\mathcal B:= \left[\frac{3}{2}H^2 - 2|\mathcal{H}|^2 -2 \right]\texttt{1}\!\!\texttt{l} - 2 H \mathcal{H}, \\& \mathcal C:= - \frac{3}{2} H^2 | \mathcal{H} |^2 + 2 ( \nabla_{\Gamma} \nabla_{\Gamma} H) : \mathcal{H} + |\nabla_{\Gamma} H|^2 + 2 H Tr(\mathcal{H}^3) +\Delta_\Gamma |\mathcal{H}|^2 +|\mathcal{H}|^4 -1.\\
\end{align*}
Here the parameter $\chi_{\text{con}}$ takes one of two values, leading to two problems: $\chi_{\text{con}}=0$ corresponds to point forces and $\chi_{\text{con}}=1$ to point constraints. The functions $g_k$ are smooth and form a basis for the kernel of the second variation of the Willmore functional. Their specific form is given in Section 6.1.2 of \cite{EllFriHob17} but is not required here. Finally set $g=0$ and $f$ such that
\[
\langle f, v \rangle = \sum_{k=1}^N \beta_k v(X_k) \quad \text{ or } \quad \langle f, v \rangle = \frac{1}{\delta}\sum_{k=1}^N \alpha_k v(X_k),
\]
for the point forces problem, $\chi_{\text{con}}=0$, or the point constraints problem, $\chi_{\text{con}}=1$, respectively.
\end{definition}
\begin{remark} The variational problem for the Clifford torus is to minimise over $H^2(\Gamma)$ the functional
$$\frac{1}{2}a(v,v)+\frac{1}{2} c(v,v)
-\langle f,v \rangle$$
where
$$a(v,v):=(-\Delta_\Gamma v+v,-\Delta_\Gamma v+v) .$$
The terms involving $\rho$ and $\delta$ in $r_1(\cdot,\cdot)$ are penalty terms which, respectively, enforce orthogonality to the $\{g_k\}_{k=1}^K$ and point displacement constraints at $\{X_k\}_{k=1}^N$.
\end{remark}
We will now check that all of the assumptions required in Definition \ref{def:genSplitSetup} and Assumption \ref{def:genSplitSetupASSUME} hold for the choices made in
Definition \ref{def:torusSplitSetup}. Most of these are straightforward; the inf sup conditions, however, require Proposition \ref{prop:infSupsforb}.
\begin{lemma}
The assumptions made in Definition \ref{def:genSplitSetup} and Assumption \ref{def:genSplitSetupASSUME} hold for the choices made for the spaces and functionals in Definition \ref{def:torusSplitSetup}.
\end{lemma}
\begin{proof}
The space $L^2(\Gamma)$ is a Hilbert space and $W^{1,r}(\Gamma)$ is a reflexive Banach space for any $1<r<\infty$. The embedding $W^{1,p}(\Gamma) \subset L^2(\Gamma)$ is continuous by the Sobolev embedding theorem.
Having proven the inf sup inequalities in Proposition \ref{prop:infSupsforb}, the remaining conditions on $c,r,b$ and $m$ are straightforward. To obtain the coercivity relation \eqref{eq:uwCoercivity}, note that, by elliptic regularity,
\[
b(u,\xi)=m(w,\xi) \;\forall \xi \in Y \implies w=-\Delta_\Gamma u + u \text{ and } u \in H^2(\Gamma).
\]
It follows
\[
c(u,u) + m(w,w) = \int_\Gamma (\Delta_\Gamma u)^2 + 2 |\nabla_\Gamma u|^2 + u^2 \; do + c(u,u) \geq C \|u\|^2_{2,2} \geq C\|w\|_{0,2}^2.
\]
The $H^2$ coercivity result used here holds for sufficiently small $\delta,\rho$; see Proposition 5.2 and Section 6.1.2 of \cite{EllFriHob17}.
Finally, the choices for $f$ and $g$ lie in the required dual spaces. For $f$ this follows from the continuous embedding $W^{1,q}(\Gamma) \subset C^0(\Gamma)$.
\end{proof}
The splitting method is thus well posed; this follows by applying the abstract theory.
\begin{corollary} \label{cor:torusWPandReg}
There exists a unique solution to Problem \ref{prob:genSplittingProb} with the spaces and functionals as chosen in Definition \ref{def:torusSplitSetup}. Moreover we have the additional regularity $u \in W^{3,p}(\Gamma)$ for all $1<p<2$ and the regularity estimate
\[
\|u\|_{3,p} \leq C(p)\|w\|_{1,p}.
\]
\end{corollary}
\begin{proof}
We have proven that the assumptions made in Definition \ref{def:genSplitSetup} and Assumption \ref{def:genSplitSetupASSUME} hold in this case, thus we may apply Theorem \ref{thm:genSplittingWellPosed} to show well posedness. The regularity result follows by elliptic regularity, applied to the second equation of the system.
\end{proof}
\section{Second order splitting SFEM for fourth order surface PDEs}\label{PDEexDiscrete}
\subsection{Standard fourth order problem}
We now consider the standard fourth order problem and use the abstract theory to produce a convergent finite element method. Using $P^1$ finite elements, we will achieve optimal error bounds for both $u$ and $w$: order $h$ convergence in the $H^1$ norm and order $h^2$ in the $L^2$ norm.
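Before setting up the surface discretisation, the mechanics of the splitting can be illustrated in the simplest periodic setting, the unit circle. Taking the analogue of the data below with $\mathcal{B}=2\texttt{1}\!\!\texttt{l}$ and $\mathcal{C}=1$, so that $c\equiv 0$, the fourth order problem $\Delta_\Gamma^2 u - 2\Delta_\Gamma u + u = \mathcal{F}$ decouples into two successive second order solves. The following is a minimal sketch (not the surface method of this paper; dense matrices are used for brevity) showing the expected $O(h^2)$ nodal accuracy with periodic $P^1$ elements:

```python
import numpy as np

def p1_circle_matrices(n):
    """Periodic P1 stiffness K and consistent mass M on the unit circle."""
    h = 2 * np.pi / n
    K, M = np.zeros((n, n)), np.zeros((n, n))
    for i in range(n):
        j = (i + 1) % n  # element between nodes i and j
        K[i, i] += 1 / h; K[j, j] += 1 / h
        K[i, j] -= 1 / h; K[j, i] -= 1 / h
        M[i, i] += h / 3; M[j, j] += h / 3
        M[i, j] += h / 6; M[j, i] += h / 6
    return K, M

def split_solve(n):
    """Two second order solves for u'''' - 2u'' + u = F, exact u = cos(s)."""
    s = 2 * np.pi * np.arange(n) / n
    K, M = p1_circle_matrices(n)
    A = K + M                      # discrete version of -Laplacian + identity
    F = 4 * np.cos(s)              # chosen so that w = 2 cos(s), u = cos(s)
    w = np.linalg.solve(A, M @ F)  # first equation:  A w = M F
    u = np.linalg.solve(A, M @ w)  # second equation: A u = M w
    return np.max(np.abs(u - np.cos(s)))

e32, e64 = split_solve(32), split_solve(64)
print(e32 / e64)  # ratio close to 4: second order nodal convergence
```

Since $c\equiv 0$ here, the block system decouples and the two Helmholtz-type solves can be performed sequentially; in the general setting below the coupled system must be solved as a whole.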
\begin{definition} \label{def:4oProbFemSetUp}
In the context of Definition \ref{def:genFemSetUp}, set $X_h=Y_h=\mathcal{S}_h$. Take $l_h^X$ and $l_h^Y$ to be the standard lift operator, see Section \ref{sub_section_SFEM}.
Set the bilinear functionals to be
\begin{align*}
c_h(u_h,v_h) :=& \int_{\Gamma_h} ((P\mathcal{B}P)^{-l} -2\texttt{1}\!\!\texttt{l} )\nabla_{\Gamma_h} u_h \cdot \nabla_{\Gamma_h} v_h + \left(\mathcal{C}^{-l}-1 \right) u_hv_h \;do_h, \\
b_h(u_h,v_h):=& \int_{\Gamma_h} \nabla_{\Gamma_h} u_h \cdot \nabla_{\Gamma_h} v_h + u_hv_h \;do_h , \\
m_h(w_h,v_h):=& \int_{\Gamma_h} w_h v_h \;do_h.
\end{align*}
Here, $do_h$ denotes the induced volume measure on $\Gamma_h$.
Finally, set
\[
f_h := m_h(\mathcal{F}^{-l},\cdot) \quad \text{and} \quad g_h := m_h(\mathcal{G}^{-l},\cdot).
\]
\end{definition}
We can now prove convergence for this method.
\begin{corollary} \label{cor:4oProbConvBounds}
With the spaces and functionals chosen in Definition \ref{def:4oSplittingExample} and Definition \ref{def:4oProbFemSetUp},
there exists $h_0 > 0$ such that for all $0<h<h_0$ there exists a unique solution $(u_h,w_h) \in X_h \times Y_h$ to the problem
\begin{align*}
c_h(u_h,\eta_h) + b_h(\eta_h,w_h) &= \langle f_h, \eta_h \rangle \quad \forall \eta_h \in X_h, \\
b_h(u_h,\xi_h) - m_h(w_h,\xi_h) &= \langle g_h, \xi_h \rangle \quad \forall \xi_h \in Y_h.
\end{align*}
Moreover there exists $C>0$, independent of $h$, such that
\[
\|u-u_h^l\|_{i,2} + \|w-w_h^l\|_{i,2} \leq C h^{2-i}(\|\mathcal{F}\|_{0,2} + \|\mathcal{G}\|_{0,2}),
\]
for each $i=0,1$ and for all $0<h<h_0$.
\end{corollary}
\begin{proof}
For the $i=1$ case we apply Corollary \ref{cor:abstractOrderOfConv}; the assumptions on the lift operators and
bilinear functionals made in Definition \ref{def:genFemSetUp} hold by the same arguments as for the Clifford torus application, see the proof of Corollary \ref{cor:oneMinusEpsBounds}.
For the approximation of the data, we follow the proof of Lemma 4.7 in \cite{DziEll13},
\[
|m(\mathcal{F},\eta_h^l) - m_h(\mathcal{F}^{-l},\eta_h)| \leq C h^2|m(\mathcal{F},\eta_h^l)| \leq C h^2\|m(\mathcal{F},\cdot)\|_{X^*}\|\eta_h^l\|_{X},
\]
an identical argument holds for $\mathcal{G}$. Setting the spaces $\tilde{X}=\tilde{Y}=H^2(\Gamma)$ and $\alpha=1$, the
approximation assumption in Corollary \ref{cor:abstractOrderOfConv} holds by the standard interpolation estimates (see e.g. \cite[Lemma 4.3]{DziEll13}). It follows
\[
\|u-u_h^l\|_{1,2} + \|w-w_h^l\|_{1,2} \leq Ch\left(\|u\|_{2,2} + \|w\|_{2,2} + \|m(\mathcal{F},\cdot)\|_{-1,2} + \|m(\mathcal{G},\cdot)\|_{-1,2} \right).
\]
Hence by the regularity estimate in Proposition \ref{prop:4oWPandBound} we have
\[
\|u-u_h^l\|_{1,2} + \|w-w_h^l\|_{1,2} \leq Ch\left(\|\mathcal{F}\|_{0,2} + \|\mathcal{G}\|_{0,2} \right).
\]
For the $i=0$ result we use Proposition \ref{prop:abstractDualityBound}, setting $H=J=L^2(\Gamma)$ and $\hat{X}=\hat{Y}=H^2(\Gamma)$.
The approximation condition \eqref{eq:dualityApproxAssumption} holds for $\beta=1$ by the standard interpolation estimates.
The regularity result \eqref{eq:dualityRegAssumption} holds by elliptic regularity applied to the dual problem. It follows
\[
\|u-u_h^l\|_{0,2} + \|w-w_h^l\|_{0,2} \leq Ch^2\left(\|\mathcal{F}\|_{0,2} + \|\mathcal{G}\|_{0,2} \right).
\]
\end{proof}
\subsection{Clifford torus problems}
We now apply the abstract finite element method to produce a convergent finite element approximation for the Clifford torus problems.
\begin{definition} \label{def:torusFemSetUp}
In the context of Definition \ref{def:genFemSetUp}, set $X_h=Y_h=\mathcal{S}_h$. Take $l_h^X$ and $l_h^Y$ to be the standard lift operator, see Section \ref{sub_section_SFEM}.
Set the bilinear functionals to be
\begin{align*}
&c_h(u_h,v_h) := \frac{1}{\rho} \sum_{k=1}^K \int_{\Gamma_h} u_h g_k \circ p \;do_h \int_{\Gamma_h} v_h g_k \circ p \;do_h + \chi_{\text{con}}\frac{1}{\delta} \sum_{k=1}^N u_h(p^{-1}(X_k))v_h(p^{-1}(X_k)) \\
& + \int_{\Gamma_h} \nabla_{\Gamma_h} u_h \cdot \left( \left[\frac{3}{2}H^2 - 2|\mathcal{H}|^2 -2 \right]\texttt{1}\!\!\texttt{l} - 2 H \mathcal{H} \right) \circ p \nabla_{\Gamma_h} v_h
\\
& + u_hv_h \left( - \frac{3}{2} H^2 | \mathcal{H} |^2 + 2 ( \nabla_{\Gamma} \nabla_{\Gamma} H) : \mathcal{H} + |\nabla_{\Gamma} H|^2 + 2 H Tr(\mathcal{H}^3) +\Delta_\Gamma |\mathcal{H}|^2 +|\mathcal{H}|^4 -1\right) \circ p \; do_h, \\
&b_h(u_h,v_h):= \int_{\Gamma_h} \nabla_{\Gamma_h} u_h \cdot \nabla_{\Gamma_h} v_h + u_hv_h \;do_h , \\
&m_h(w_h,v_h):= \int_{\Gamma_h} w_hv_h \;do_h.
\end{align*}
Finally, set $g_h = 0$ and $f_h$ such that
\[
\langle f_h, v_h \rangle = \sum_{k=1}^N \beta_k v_h(p^{-1}(X_k)) \quad \text{ or } \quad \langle f_h, v_h \rangle = \frac{1}{\delta}\sum_{k=1}^N \alpha_k v_h(p^{-1}(X_k)).
\]
\end{definition}
We shall check that the assumptions made in Definition \ref{def:genFemSetUp} hold in this context and prove the following convergence result.
\begin{corollary} \label{cor:oneMinusEpsBounds}
With the spaces and functionals chosen in Definition \ref{def:torusSplitSetup} and Definition \ref{def:torusFemSetUp}, there exists $h_0 > 0$ such that for all $0<h<h_0$ there exists a unique solution $(u_h,w_h) \in X_h \times Y_h$ to the problem
\begin{align*}
c_h(u_h,\eta_h) + b_h(\eta_h,w_h) &= \langle f_h, \eta_h \rangle \quad \forall \eta_h \in X_h, \\
b_h(u_h,\xi_h) - m_h(w_h,\xi_h) &= 0 \quad \forall \xi_h \in Y_h.
\end{align*}
Moreover, for any $1 < p < 2 < q < \infty$ with $1/p + 1/q = 1$ there exists $C(q) > 0$, independent of $h$, such that
\[
\|u-u_h^l\|_{1,2} + \|w-w_h^l\|_{0,2} \leq C(q) h^{2/q} \|f\|_{X^*},
\]
for all $0<h<h_0$.
\end{corollary}
\begin{proof}
Firstly, for the well posedness of the finite element method we need only check that the assumptions made in Definition \ref{def:genFemSetUp} hold for the choices made in Definition \ref{def:torusFemSetUp}. The space $\mathcal{S}_h$ is a normed vector space and the standard lift operator is linear and injective, see \cite{DziEll13} for details. Each of the functionals defined is bilinear by inspection and $m_h$ is indeed symmetric.
The approximation properties for $b_h$, $m_h$ and the $L^2$ and $H^1$ type terms in $c_h$ can be proven as in Lemma 4.7 of \cite{DziEll13}, in this case $k=2$.
The main idea is to compare the volume measures on $\Gamma$ and $\Gamma_h$ as well as the corresponding surface gradients $\nabla_\Gamma$
and $\nabla_{\Gamma_h}$. For the term with $\nabla_{\Gamma_h} u_h \cdot \mathcal{H} \circ p \nabla_{\Gamma_h} v_h$ in $c_h$, note that $\mathcal{H}= P \mathcal{H} P$.
Notice also that we have treated $c_h$ analogously to the treatment of the surface diffusion term with symmetric mobility tensor in Section 3.1 of \cite{DziEll07}. For the remaining terms in $c_h$, the $1/\rho$ term can be treated in the same manner as the $L^2$ inner product and for the $1/\delta$ term observe
\[
\sum_{k=1}^N u_h(p^{-1}(X_k))v_h(p^{-1}(X_k)) = \sum_{k=1}^N u_h^l(X_k)v_h^l(X_k),
\]
hence this term makes no contribution to the approximation error. A similar observation shows, in this case,
\begin{equation}
\langle f_h, v_h \rangle = \langle f, v_h^l \rangle.
\label{f_h_equals_f}
\end{equation}
Hence $f_h$ satisfies the required approximation property as does $g_h$ because $g_h=g=0$. We thus have satisfied all of the assumptions of Definition \ref{def:genFemSetUp}, hence the discrete problem is well posed by Theorem \ref{thm:genFemInfBound}.
For the convergence result we will argue as in Proposition \ref{prop:abstractDualityBound}; however,
due to the lack of further regularity in this setting, a more careful argument is required.
Let $(\psi, \varphi) \in X \times Y$ denote the solution to Problem \ref{prob:genSplittingProb} with right hand side
$$
\eta \mapsto \langle u - u_h^l, \eta \rangle_{H^1(\Gamma)} \quad \textnormal{and} \quad \xi \mapsto \langle w - w_h^l, \xi \rangle_{L^2(\Gamma)}.
$$
It follows
$$
\| u - u_h^l \|_{1,2}^2 + \| w - w_h^l \|_{0,2}^2
= c(\psi, u - u_h^l) + b(u - u_h^l, \varphi) + b(\psi, w - w_h^l) - m(\varphi, w - w_h^l).
$$
As $g = g_h = 0$ in this case we also have
$$
b(u, \varphi) - m(w, \varphi) = 0,
$$
as well as
$$
b_h(u_h, (\Pi_h \varphi)^{-l}) - m_h(w_h, (\Pi_h \varphi)^{-l}) = 0.
$$
Hence,
\begin{align*}
& | b( u - u_h^l, \varphi ) - m( w - w_h^l, \varphi ) | = | - b(u_h^l, \varphi) + m(w_h^l, \varphi) |
\\
&\leq | b_h(u_h, (\Pi_h \varphi)^{-l} ) - b(u_h^l, \varphi) | + | m(w_h^l, \varphi) - m(w_h^l, \Pi_h \varphi ) | + | m(w_h^l, \Pi_h \varphi) - m_h(w_h, (\Pi_h \varphi)^{-l} ) |
\\
&\leq C h^2 ( \| u_h^l \|_X \| \Pi_h \varphi \|_Y + \| w_h^l \|_L \| \Pi_h \varphi \|_L ) + C \| w_h^l \|_L \| \varphi - \Pi_h \varphi \|_L,
\end{align*}
where we have used the identity $b(\Pi_h \varphi, u_h^l) = b(\varphi, u_h^l)$ and the geometric estimates already discussed above, which produce the $h^2$ terms.
Then, using (\ref{interpolation_estimate_2}) for $r=p$ and (\ref{a_priori_estimate}), we obtain
\begin{align}
& | b( u - u_h^l, \varphi ) - m( w - w_h^l, \varphi ) |
\leq C h^2 ( \| u_h^l \|_X + \| w_h^l \|_Y ) \| \Pi_h \varphi \|_Y + C \| w_h^l \|_Y \| \varphi - \Pi_h \varphi \|_L
\nonumber\\
&\leq C h^2 \| f \|_{X^*} \| \varphi \|_Y + Ch^{2/q} \| f \|_{X^*} \| \varphi \|_{Y}
\label{estimate_ritz_projection_r_q}
\\
&\leq C h^{2/q} \| f \|_{X^*} ( \| u - u_h^l \|_{1,2} + \| w - w_h^l \|_{0,2} ).
\nonumber
\end{align}
To deal with the two remaining terms, observe that for any $\eta_h \in \mathcal{S}_h$,
\begin{align*}
&| c(\eta_h^l, u - u_h^l) + b(\eta_h^l, w - w_h^l) | = | c(u, \eta_h^l) + b(\eta_h^l, w) - (c(u_h^l, \eta_h^l) + b(\eta_h^l, w_h^l))|
\\
&\leq | \langle f, \eta_h^l \rangle - \langle f_h, \eta_h \rangle | + | c_h(u_h, \eta_h) + b_h(\eta_h, w_h) - (c( u_h^l, \eta_h^l) + b(\eta_h^l, w_h^l)) |
\\
&\leq Ch^2 \| u_h^l \|_X \| \eta_h^l \|_X + Ch^2 \| \eta_h^l \|_X \| w_h^l \|_Y \leq Ch^2 \| f \|_{X^*} \| \eta_h^l \|_X,
\end{align*}
where we used (\ref{f_h_equals_f}) and
the last step follows from (\ref{a_priori_estimate}). Choosing $\eta_h^l = I_h^l \psi$, the Lagrange interpolant, and $s = q$ in (\ref{interpolation_estimate}), we obtain
\begin{align}
&|c(\psi, u - u_h^l) + b(\psi, w - w_h^l)|
\leq | c(\psi - I_h^l \psi, u - u_h^l) + b(\psi - I_h^l \psi, w - w_h^l) | + Ch^2 \| f \|_{X^*} \| I_h^l \psi \|_X
\nonumber \\
&\leq C \| \psi - I_h^l \psi \|_X ( \| u - u_h^l \|_X + \| w - w_h^l \|_Y ) + C h^2 \| f \|_{X^*} \| I_h^l \psi \|_X
\nonumber \\
&\leq C h^{2/q} \| \psi \|_{2,2} ( \| u - u_h^l \|_X + \| w - w_h^l \|_Y ) + C h^2 \| f \|_{X^*} \| \psi \|_{2,2}
\label{estimate_lagrange_interpolation_s_q} \\
&\leq C h^{2/q} ( \| \varphi \|_{0,2} + \| w - w_h^l \|_{0,2}) ( \| u - u_h^l \|_X + \| w - w_h^l \|_Y + \| f \|_{X^*} )
\nonumber \\
&\leq C h^{2/q} ( \| u - u_h^l \|_{1,2} + \| w - w_h^l \|_{0,2}) ( \| u - u_h^l \|_X + \| w - w_h^l \|_Y + \| f \|_{X^*} ),
\nonumber
\end{align}
where we used the regularity of $\psi$ coming from the second equation, that is $b(\psi, \xi) = m(\varphi, \xi) + \langle w - w_h^l, \xi \rangle_{L^2(\Gamma)}$.
The a priori estimates from (\ref{a_priori_estimate}) and Theorem \ref{thm:genSplittingWellPosed} finally give
$$
|c(\psi, u - u_h^l) + b(\psi, w - w_h^l)| \leq C h^{2/q} \| f \|_{X^*} ( \| u - u_h^l \|_{1,2} + \| w - w_h^l \|_{0,2} ).
$$
The result then follows by combining the estimates derived above.
\end{proof}
\iffalse
Moreover we have the infimum bound produced there, this leads to the required error bound by making use of the Lagrange interpolant $I_h$ (see \cite{DziEll13}), the bounds on Lagrange interpolation from \cite{Cia02} and the results of Lemma \ref{lem:ritzProjection} regarding the Ritz projection. By Theorem \ref{thm:genFemInfBound}, for any $p<p'<2$,
\begin{align*}
\|u-u_h^l\|_{1,q} + \|w-w_h^l\|_{1,p} &\leq C(p) \left[ \|u-I_h^l u\|_{1,q} + \|w-\Pi_h w\|_{1,p} + h^2\|f\|_{-1,q} \right] \\
& \leq C(p,p') \left[ h \|u\|_{2,q} + h^{2(1/p-1/p')}\|w\|_{1,p'} + h^2 \|f\|_{-1,q} \right].
\end{align*}
Now, for any $0<\varepsilon <2/p-1$, set $p'=2/(1+\varepsilon) \in (p,2)$ then for sufficiently small $h$, using the regularity estimate in Corollary \ref{cor:torusWPandReg} and the a priori bound in Theorem \ref{thm:genSplittingWellPosed} produces
\[
\|u-u_h^l\|_{1,q} + \|w-w_h^l\|_{1,p} \leq C(p,\varepsilon) h^{2/p-1-\varepsilon} \|f\|_{-1,2/(1-\varepsilon)}.
\]
To obtain the highest possible order of convergence we thus take $p$ close to $1$. This produces an almost optimal order of convergence for $u$ in the $H^1$ norm. To produce such a bound and bounds in weaker norms we use Proposition \ref{prop:abstractDualityBound}.
\begin{corollary}
In the setting of Corollary \ref{cor:oneMinusEpsBounds}, for sufficiently small $h$ and any $\varepsilon \in (0,1)$, there exists $C(\varepsilon)>0$ independent of $h$ such that
\begin{align*}
\|u-u_h^l\|_{1,2} + \|w-w_h^l\|_{0,2} &\leq C(\varepsilon)h^{1-\varepsilon} \|f\|_{-1,4/(2-\varepsilon)}, \\
\|u-u_h^l\|_{0,2} + \|w-w_h^l\|_{-1,2} & \leq C(\varepsilon) h^{2-\varepsilon} \|f\|_{-1,8/(4-\varepsilon)}.
\end{align*}
\end{corollary}
\begin{proof}
For the first inequality we apply Proposition \ref{prop:abstractDualityBound} with $\hat{X}=X=W^{1,q(\varepsilon)}(\Gamma)$, $\hat{Y}=Y=W^{1,p(\varepsilon)}(\Gamma)$, $H=H^1(\Gamma)$, $J=L^2(\Gamma)$, $\tilde{X}=W^{2,r(\varepsilon)}(\Gamma)$, $\tilde{Y}=W^{1,p'(\varepsilon)}(\Gamma)$, setting $p=4/(4-\varepsilon)$, $q$ such that $1/p+1/q=1$, $p'=4/(2 + \varepsilon)$ and $q'$ such that $1/p' + 1/q' =1$ as well as $r = 4/(3\varepsilon)$. We also choose $\alpha=1-\varepsilon$ and $\beta = 0$. The approximation assumption for Corollary \ref{cor:abstractOrderOfConv} is then a consequence of the estimates for the Ritz projection in Lemma \ref{lem:ritzProjection}. As we choose $\beta = 0$ the approximation assumption \eqref{eq:dualityApproxAssumption} is trivial and the regularity result \eqref{eq:dualityRegAssumption} follows immediately from the a priori bound in Theorem \ref{thm:genSplittingWellPosed}. Then by Proposition \ref{prop:abstractDualityBound} there exists $C(\varepsilon) > 0$, independent of $h$, such that
\[
\|u-u_h^l\|_{1,2} + \|w-w_h^l\|_{0,2} \leq C(\varepsilon)h^{1-\varepsilon}(\|u\|_{2,r} + \|w\|_{1,p'} + \|f\|_{-1,q}).
\]
Since $W^{3,p'}(\Gamma)$ continuously embeds into $W^{2,r}(\Gamma)$, we conclude from the regularity result in Corollary \ref{cor:torusWPandReg} that
\[
\|u-u_h^l\|_{1,2} + \|w-w_h^l\|_{0,2} \leq C(\varepsilon)h^{1-\varepsilon}(\|w\|_{1,p'} + \|f\|_{-1,q}).
\]
The inequality in the statement then follows by the a priori bound produced in Theorem \ref{thm:genSplittingWellPosed} and the fact that $\| f \|_{-1,q} \leq C \| f \|_{-1,q'}$.
For the second inequality we follow a similar procedure, now set $X=W^{1,q'(\varepsilon)}(\Gamma)$, $Y=W^{1,p(\varepsilon)}(\Gamma)$, $H=L^2(\Gamma)$, $J=H^{-1}(\Gamma)$, $\tilde{X}=\hat{X}=W^{1,q(\varepsilon)}(\Gamma)$, $\tilde{Y}=\hat{Y}=W^{1,p'(\varepsilon)}(\Gamma)$, $\alpha = 1-\varepsilon/2$ and $\beta=1-\varepsilon/2$. Now we set $p=8/(8-\varepsilon)$ and $p'=8/(4 + \varepsilon)$. Note that we use the embedding $\iota:Y \rightarrow J$ given by
\[
\iota(v) = m(v,\cdot).
\]
For the inner product on $H^{-1}(\Gamma)$ we will use
\[
\langle u, v \rangle_J := \langle u, (-\Delta_\Gamma + 1)^{-1}v \rangle_{H^{-1}(\Gamma),H^1(\Gamma)},
\]
the term on the right hand side denotes the duality paring of $u \in H^{-1}(\Gamma)$ acting upon $(-\Delta_\Gamma + 1)^{-1}v \in H^1(\Gamma)$. Note that the norm induced by this inner product is in fact the standard duality norm
\[
\|F\|_{-1,2} = \sup_{v \in H^1(\Gamma)} \frac{|\langle F, v \rangle_{H^{-1}(\Gamma),H^1(\Gamma)}|}{\|v\|_{1,2}}.
\]
The approximation assumption for Corollary \ref{cor:abstractOrderOfConv} and the similar assumption \eqref{eq:dualityApproxAssumption} are a consequence of the estimates for the Ritz projection in Lemma \ref{lem:ritzProjection}. For the regularity result \eqref{eq:dualityRegAssumption} we apply the a priori bound in Theorem \ref{thm:genSplittingWellPosed} and the continuity of the embeddings $W^{1,p'}(\Gamma) \subset J$, $W^{1,q}(\Gamma) \subset H$.
We may then apply Proposition \ref{prop:abstractDualityBound} to obtain
\[
\|u-u_h^l\|_{0,2} + \|w-w_h^l\|_{-1,2} \leq C(\varepsilon) h^{2-\varepsilon} (\|u\|_{1,q} + \|w\|_{1,p'} + \|f\|_{-1,q'}).
\]
The inequality in the statement then follows by the a priori bound produced in Theorem \ref{thm:genSplittingWellPosed}.
\end{proof}
\fi
\section{Numerical examples}\label{NumExpts}
We conclude with numerical examples showing that these theoretical convergence rates are achieved in practice. All of the numerical examples given here have been implemented in the DUNE framework, making particular use of the DUNE-FEM module \cite{DuneFem10}.
\subsection{Higher regularity problem}
We consider the problem outlined in Definition \ref{def:4oSplittingExample}, setting $\Gamma=S(0,1)$, the unit sphere, taking
\[ \mathcal{B}(x)= \left( \begin{array}{ccc}
x_1 & 0 & 0 \\
0 & x_2 & 0 \\
0 & 0 & x_3 \end{array} \right),
\;\mathcal{C}(x)=2+x_1 x_2, \;C_m = 3/2, \; C_M = 5/2, \;\lambda_M = 1, \;\Lambda = 1,
\]
and selecting
\begin{align*}
\mathcal{F}(x) &:= -5 x_3 (x_1^3 + x_2^3 + x_3^3) + 2x_3(x_1+x_2+x_3) -4x_3 + 4x_3^2 -1 +(1+x_1x_2)x_3 + 7x_1x_2, \\
\mathcal{G}(x) &:= 3x_3 - x_1 x_2.
\end{align*}
These choices for $\mathcal{F}$ and $\mathcal{G}$ give the solution $(u,w)=(\nu_3,\nu_1\nu_2)$. The example is chosen as it shows that this method can be used to split a fourth order problem where the second order terms make an indefinite contribution to the bilinear form. Explicitly, the fourth order equation solved by $u$ is
\[
\Delta_\Gamma^2 u - \nabla_\Gamma \cdot \left( P\mathcal{B} P\nabla_\Gamma u \right) + \mathcal{C} u
= \mathcal{F} + \mathcal{G} - \Delta_\Gamma \mathcal{G}.
\]
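The stated exact solution can be checked against the second (splitting) equation, $-\Delta_\Gamma u + u - w = \mathcal{G}$, symbolically; a quick sketch in spherical coordinates (the helper `lap_sphere` is an ad hoc implementation of the Laplace--Beltrami operator on the unit sphere, not part of any library):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x1, x2, x3 = sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)

def lap_sphere(f):
    # Laplace-Beltrami operator on the unit sphere in spherical coordinates
    return (sp.diff(sp.sin(th)*sp.diff(f, th), th)/sp.sin(th)
            + sp.diff(f, ph, 2)/sp.sin(th)**2)

u, w = x3, x1*x2          # the exact solution (nu_3, nu_1 nu_2) in coordinates
G = 3*x3 - x1*x2          # the data G chosen above
residual = sp.simplify(-lap_sphere(u) + u - w - G)
print(residual)  # 0
```

Checking the first equation, which involves $P\mathcal{B}P$ and $\mathcal{C}$, is analogous but lengthier and is omitted here.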
The resulting errors and experimental orders of convergence are shown in Tables \ref{table:4oExampleUErrors} and \ref{table:4oExampleWErrors}.
In each case, for grid size $h$, $E_V(h)$ is the error in the $V$ norm of the finite element approximation. For example, in Table \ref{table:4oExampleUErrors} we have
\[
E_{L^2(\Gamma)}(h) := \|u-u_h^l\|_{0,2}.
\]
The experimental order of convergence ($EOC$) with respect to the $V$-norm, for tests with grid sizes $h_1$ and $h_2$, is given by
\[
EOC = \frac{\log(E_V(h_1)/E_V(h_2))}{\log(h_1/h_2)}.
\]
In each of our examples the $EOC$ is calculated between the current $h$ and the previous refinement, so that the denominator is approximately $\log(1/2)$ each time as the grid size approximately halves with each refinement.
Observe that the method achieves the orders of convergence proven in Corollary \ref{cor:4oProbConvBounds}, order $h$ and $h^2$ convergence in the $H^1$ and $L^2$ norms respectively.
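For concreteness, the $EOC$ formula can be evaluated directly; for instance, using the first two rows of the $L^2$ column of Table \ref{table:4oExampleUErrors}:

```python
import math

def eoc(h1, e1, h2, e2):
    """Experimental order of convergence between two refinement levels."""
    return math.log(e1 / e2) / math.log(h1 / h2)

# first two rows of the L^2 column of the table for u - u_h^l
val = eoc(1.41421, 5.51463e-1, 7.07106e-1, 1.87559e-1)
print(val)  # approximately 1.556, matching the tabulated EOC
```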
\begin{table}[ht]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|c c|c c|}
\hline
$h$ & $E_{L^2(\Gamma)}(h)$ & $EOC$ & $E_{H^1(\Gamma)}(h)$ & $EOC$ \\ \hline
1.41421 & $5.51463\times 10^{-1}$ & - & $1.00111$ & - \\ \hline
7.07106$\times 10^{-1}$ & $1.87559\times 10^{-1}$ & 1.55592 & $6.28156\times 10^{-1}$ & 0.6724 \\ \hline
3.53553$\times 10^{-1}$ & $5.05247\times 10^{-2}$ & 1.89228 & $3.22169\times 10^{-1}$ & 0.963307 \\ \hline
1.76776$\times 10^{-1}$ & $1.29659\times 10^{-2}$ & 1.96227 & $1.59478\times 10^{-1}$ & 1.01446 \\ \hline
8.83883$\times 10^{-2}$ & $3.2712\times 10^{-3}$ & 1.98683 & $7.92569\times 10^{-2}$ & 1.00875 \\ \hline
4.41941$\times 10^{-2}$ & $8.20352\times 10^{-4}$ & 1.9955 & $3.95406\times 10^{-2}$ & 1.0032 \\ \hline
2.20970$\times 10^{-2}$ & $2.05302\times 10^{-4}$ & 1.9985 & $1.97563\times 10^{-2}$ & 1.00102 \\ \hline
1.10485$\times 10^{-2}$ & $5.13428\times 10^{-5}$ & 1.99951 & $9.87605\times 10^{-3}$ & 1.00031 \\ \hline
5.52427$\times 10^{-3}$ & $1.28369\times 10^{-5}$ & 1.99987 & $4.93772\times 10^{-3}$ & 1.00009 \\ \hline
\end{tabular}
\caption{Errors and Experimental orders of convergence for $u - u_h^l$. } \label{table:4oExampleUErrors}
\end{table}
\begin{table}[ht]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|c c|c c|}
\hline
$h$ & $E_{L^2(\Gamma)}(h)$ & $EOC$ & $E_{H^1(\Gamma)}(h)$ & $EOC$ \\ \hline
1.41421 & $8.14491\times 10^{-1}$ & - & 2.03684 & - \\ \hline
7.07106$\times 10^{-1}$ & $5.01333\times 10^{-1}$ & 0.700128 & 1.30646 & 0.640664 \\ \hline
3.53553$\times 10^{-1}$ & $1.739\times 10^{-1}$ & 1.52752 & $6.64415\times 10^{-1}$ & 0.975509 \\ \hline
1.76776$\times 10^{-1}$ & $4.73979\times 10^{-2}$ & 1.87536 & $3.2514\times 10^{-1}$ & 1.03103 \\ \hline
8.83883$\times 10^{-2}$ & $1.21214\times 10^{-2}$ & 1.96727 & $1.61316\times 10^{-1}$ & 1.01117 \\ \hline
4.41941$\times 10^{-2}$ & $3.04823\times 10^{-3}$ & 1.99151 & $8.04912\times 10^{-2}$ & 1.00299 \\ \hline
2.20970$\times 10^{-2}$ & $7.63212\times 10^{-4}$ & 1.99781 & $4.02248\times 10^{-2}$ & 1.00075 \\ \hline
1.10485$\times 10^{-2}$ & $1.90877\times 10^{-4}$ & 1.99944 & $2.01098\times 10^{-2}$ & 1.00018 \\ \hline
5.52427$\times 10^{-3}$ & $4.77244\times 10^{-5}$ & 1.99985 & $1.00546\times 10^{-2}$ & 1.00005 \\ \hline
\end{tabular}
\caption{Errors and Experimental orders of convergence for $w - w_h^l$. } \label{table:4oExampleWErrors}
\end{table}
\subsection{Lower regularity problem}
We will next study a problem similar to the point forces problem on a Clifford torus introduced in Definition \ref{def:torusSplitSetup}. For ease of construction of an exact solution we do not study this problem itself but a similar one on a sphere whose solution exhibits the same regularity, $(u,w) \in W^{3,p}(\Gamma) \times W^{1,p}(\Gamma)$ for any $1<p<2$, as proven in Corollary \ref{cor:torusWPandReg}.
The coupled problem we study, in distributional form, is given by
\begin{align*}
-\Delta_\Gamma w + w + \Delta_\Gamma u + 2u &= \delta_N - \frac{1}{4\pi} - \frac{3}{4\pi}x_3 \\
-\Delta_\Gamma u + u -w &= \frac{3}{8\pi} \left[ (1-x_3)\log(1-x_3) + \frac{1}{2} - \log(2) \right]
\end{align*}
where we take $\Gamma$ to be the unit sphere $\Gamma = S(0,1)$ and $\delta_N$ is a delta function centred at the north pole $N=(0,0,1)$. This can be viewed as a second order splitting of the fourth order PDE
\begin{align*}
\Delta_\Gamma^2 u - \Delta_\Gamma u + 3 u =& \delta_N -\frac{1}{4\pi}-\frac{3}{4\pi}x_3 + g - \Delta_\Gamma g
\end{align*}
with $g := \frac{3}{8\pi} \left[ (1-x_3)\log(1-x_3) + \frac{1}{2} - \log(2) \right]$.
\begin{remark} The construction of this problem follows by consideration of the function
\[
w(x) = -\frac{1}{4\pi}\left[ \log(1-x_3) -\log(2) +1 +\frac{3x_3}{2} \right].
\]
This function has a smooth part and a logarithmic part which is based upon the Green's function for the Laplace--Beltrami operator on a sphere, see \cite{KimOka87}. That is, in a distributional sense, $w$ satisfies
\[
-\Delta_\Gamma w = \delta_N - \frac{1}{4\pi} - \frac{3}{4\pi}x_3.
\]
The logarithmic part of $w$ lies in $W^{1,p}(\Gamma)$ for any $1<p<2$ but is not in $H^1(\Gamma)$. We take $u$ to be
\[
u(x) = \frac{1}{8\pi} \left[ (1-x_3)\log(1-x_3) + \frac{1}{2} - \log(2) \right].
\]
\end{remark}
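Away from the north pole, all of the functions above depend only on $x_3$, so the second (splitting) equation $-\Delta_\Gamma u + u - w = g$ can be verified symbolically using the axisymmetric form $\Delta_\Gamma f = \frac{d}{dz}\big[(1-z^2)f'(z)\big]$ with $z = x_3$. This is only a smoothness check (the delta function in the first equation is of course not captured by it); a sketch:

```python
import sympy as sp

z = sp.symbols('z')  # z = x_3 on the unit sphere

def lap(f):
    # axisymmetric Laplace-Beltrami operator for functions of z = x_3 only
    return sp.diff((1 - z**2) * sp.diff(f, z), z)

u = ((1 - z)*sp.log(1 - z) + sp.Rational(1, 2) - sp.log(2)) / (8*sp.pi)
w = -(sp.log(1 - z) - sp.log(2) + 1 + 3*z/2) / (4*sp.pi)
g = 3*((1 - z)*sp.log(1 - z) + sp.Rational(1, 2) - sp.log(2)) / (8*sp.pi)

residual = sp.simplify(-lap(u) + u - w - g)
print(residual)  # 0
```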
The weak formulation and discretisation of the system is completely analogous to the treatment of the point forces problem described in Definition \ref{def:torusSplitSetup} and Definition \ref{def:torusFemSetUp}. Explicitly, in terms of the general abstract formulation in Problem \ref{prob:genSplittingProb}, we choose
\begin{align*}
& c(u,v):= \int_\Gamma -\nabla_\Gamma u \cdot \nabla_\Gamma v + 2uv \;do,
~b(u,v):= \int_\Gamma \nabla_\Gamma u \cdot \nabla_\Gamma v + uv \;do,
~ m(w,v):= \int_\Gamma wv \;do, \\
& \langle f,v \rangle := v(0,0,1) -\frac{1}{4\pi}\int_\Gamma v \;do - \frac{3}{4\pi}\int_\Gamma x_3 v \;do,
~ \langle g,v \rangle := \int_\Gamma \frac{3}{8\pi} \left[ (1-x_3)\log(1-x_3) + \frac{1}{2} - \log(2) \right] v \;do.
\end{align*}
The finite element method formulation is also completely analogous to the treatment of the point forces problem. Explicitly, in terms of the general abstract formulation in Problem \ref{prob:genSplittingFem}, we choose
\begin{align*}
& c_h(u_h,v_h):= \int_{\Gamma_h} -\nabla_{\Gamma_h} u_h \cdot \nabla_{\Gamma_h} v_h + 2u_hv_h \;do_h, ~ b_h(u_h,v_h):= \int_{\Gamma_h} \nabla_{\Gamma_h} u_h \cdot \nabla_{\Gamma_h} v_h + u_hv_h \;do_h, \\
&m_h(w_h,v_h):= \int_{\Gamma_h} w_hv_h \;do_h,
~ \langle f_h,v_h \rangle := v_h(p^{-1}(0,0,1)) -\frac{1}{4\pi}\int_{\Gamma_h} v_h \;do_h - \frac{3}{4\pi}\int_{\Gamma_h} (x_3)^{-l} v_h \;do_h, \\
& \langle g_h,v_h \rangle := \int_{\Gamma_h} \frac{3}{8\pi} \left[ (1-x_3)\log(1-x_3) + \frac{1}{2} - \log(2) \right]^{-l} v_h \;do_h.
\end{align*}
The finite element method converges at the rates proven in Corollary \ref{cor:oneMinusEpsBounds}; note that only the case $g = g_h = 0$ was addressed there.
The experimental orders of convergence are given in Tables \ref{table:deltaProbUErrors} and \ref{table:deltaProbWErrors}.
In fact, we observe linear convergence in this example, faster than the order $h^{2/q}$, $q>2$, proven there. The proof of this convergence rate is left for future research.
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|c c|c c|}
\hline
$h$ & $E_{L^2(\Gamma)}(h)$ & $EOC$ & $E_{H^1(\Gamma)}(h)$ & $EOC$ \\ \hline
1.41421 & $7.2206\times 10^{-2}$ & - & $9.60127\times 10^{-2}$ & - \\ \hline
7.07106$\times 10^{-1}$ & $2.68314\times 10^{-2}$ & 1.4282 & $4.81314\times 10^{-2}$ & 0.996248 \\ \hline
3.53553$\times 10^{-1}$ & $7.71427\times 10^{-3}$ & 1.79832 & $2.4304\times 10^{-2}$ & 0.985781 \\ \hline
1.76776$\times 10^{-1}$ & $2.04304\times 10^{-3}$ & 1.91681 & $1.24533\times 10^{-2}$ & 0.964672 \\ \hline
8.83883$\times 10^{-2}$ & $5.30802\times 10^{-4}$ & 1.94447 & $6.31331\times 10^{-3}$ & 0.980055 \\ \hline
4.41941$\times 10^{-2}$ & $1.37634\times 10^{-4}$ & 1.94734 & $3.17379\times 10^{-3}$ & 0.992192 \\ \hline
2.20970$\times 10^{-2}$ & $3.57961\times 10^{-5}$ & 1.94296 & $1.58979\times 10^{-3}$ & 0.997373 \\ \hline
1.10485$\times 10^{-2}$ & $9.3513\times 10^{-6}$ & 1.93656 & $7.95344\times 10^{-4}$ & 0.999182 \\ \hline
5.52427$\times 10^{-3}$ & $2.45312\times 10^{-6}$ & 1.93055 & $3.97739\times 10^{-4}$ & 0.999757 \\ \hline
\end{tabular}
\caption{Errors and Experimental orders of convergence for $u - u_h^l$. } \label{table:deltaProbUErrors}
\end{table}
\begin{table}[h]
\centering
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|c c|}
\hline
$h$ & $E_{L^2(\Gamma)}(h)$ & $EOC$ \\ \hline
1.41421 & $1.28739\times 10^{-1}$ & - \\ \hline
7.07106$\times 10^{-1}$ & $4.91831\times 10^{-2}$ & 1.38821 \\ \hline
3.53553$\times 10^{-1}$ & $2.37553\times 10^{-2}$ & 1.04991 \\ \hline
1.76776$\times 10^{-1}$ & $1.25937\times 10^{-2}$ & 0.915547 \\ \hline
8.83883$\times 10^{-2}$ & $6.5736\times 10^{-3}$ & 0.937948 \\ \hline
4.41941$\times 10^{-2}$ & $3.35583\times 10^{-3}$ & 0.970015 \\ \hline
2.20970$\times 10^{-2}$ & $1.69215\times 10^{-3}$ & 0.987811 \\ \hline
1.10485$\times 10^{-2}$ & $8.48703\times 10^{-4}$ & 0.995527 \\ \hline
5.52427$\times 10^{-3}$ & $4.24803\times 10^{-4}$ & 0.998466 \\ \hline
\end{tabular}
\caption{Errors and Experimental orders of convergence for $w - w_h^l$. } \label{table:deltaProbWErrors}
\end{table}
\section{Introduction\label{intro}}
Over the past decade, considerable work has been done in the field of
bioreaction databases, which are a powerful tool for studying
biochemical processes (see, for example, Francke et al.~\cite{fst-rmnbi-05}).
Examples for such databases are the {\em Roche wall chart of
Biochemical Pathways}~\cite{e-bp-05}, the {\em BioCyc}
databases (e.g.~\cite{caddfgkkkklmpppszk-mdmpe-10}),
the {\em Kyoto Encyclopedia of Genes and Genomes} databases ({\em KEGG})~\cite{kgfth-kramn-10}
and the {\em BRaunschweig ENzyme
DAtabase BRENDA}~\cite{sgcsmrssts-beis2-11}.
\begin{figure}[t!]
\centering
\includegraphics{figs/network.pdf}
\caption{Example of an organism-specific metabolic network ({\em
H}. {\em sapiens}), reconstructed from a bioreaction
database~\cite{sszfk-ebdts-}.\label{network}}
\end{figure}
Biochemical reaction databases allow the reliable reconstruction of metabolic
networks, see Figure~\ref{network},
which---in turn---help to improve the annotation of
genome sequences to interpret high-throughput omics data, to
understand biological processes of pathogenesis or industrial
production at a system level, and to rationally control or design
biological systems
(e.g., \cite{dhp-rveif-04,jks-esada-06,rs-olgpm-06,slrz-mpand-07}).
Such databases are subject to perpetual research and update: New
insights lead to extensions and corrections, resulting in more
accurate databases and more detailed views on biological aspects.
Under certain circumstances, database updates can be evaluated by
considering shortest paths in metabolic networks that have been
reconstructed from the
database~\cite{mz-rmngd-03,r-btvsa-06,rsmhz-mn-08}.
In the case of a metabolic network, a shortest path describes the
number of necessary reactions to convert one metabolite into
another---provided that the network does not contain so-called currency
metabolites, which would introduce biologically infeasible shortcuts.
Ma and Zeng~\cite{mz-rmngd-03} developed such a database, and
demonstrated its usefulness by reconstruction of high-resolution
metabolic networks. In this database, Ma and
Zeng defined the reversibility of the reactions according to
literature data and biochemistry know\-ledge and introduced the concept
of reactant pairs by considering currency metabolites.
(The background for our work is a recent
upgrade of this database~\cite{sszfk-ebdts-}.)
Furthermore, shortest paths are of
particular interest in organism-specific metabolic networks, because
they can help to find alternative pathways (e.g.,~\cite{rjphss-mnaia-05}).
The tool {\em gapFiller} uses shortest paths to detect
gaps that occur in biochemical pathways (i.e., biomass that is
produced by an organism but not documented in the network due to a
lack of complete annotation) \cite{l-pc-10}. Additionally,
shortest paths can be used to compute centrality indices~\cite{b-fabc-01}.
From a computer science point of view, shortest paths in a graph are
easy to compute using {\em Breadth-First Search} or {\em
Dijkstra's algorithm}; see, for example, Cormen et al.~\cite{clrs-ia-01}.
However, simple (i.e., graph-theoretic) shortest paths do
not properly reflect biochemical facts: Biochemical pathways in
metabolic networks lead from one node (representing a
metabolite) to another one. The edges passed on this pathway
represent biochemical reactions that convert a substrate to a product.
Given a
reaction that leads from a substrate to two different products and
back---such as from
{\em 2-dehydro-3-deoxy-D-galactonate 6-phosphate} (KEGG compound ID C01286) to
{\em glyceraldehyde 3-P} (C00118) and {\em pyruvate} (C00022)---it is
possible to find a shortest path that leads from one product via the
substrate to the other one. One way to avoid this effect is to
store the reaction types as labels on the edges of the network, and
ensure that a shortest path passes no label twice \cite{r-btvsa-06,rsmhz-mn-08}.
Thus, this work considers shortest paths with pairwise-distinct edge
labels for computing {\em biologically
feasible} shortest paths (in metabolic networks that have been
reconstructed based on a database without currency metabolites and
with reversibility information).
In particular,
we show in Section~\ref{spul-comp}
that this kind of shortest paths is hard to compute.
\subsection*{Related Work}
Shortest paths have been considered in many settings and applications such
as robotics and traffic optimization.
For an overview see the survey articles by Mitchell \cite{m-spn-97,m-gspno-00}.
If the graph is unknown, common solutions are the
$A^*$- or $D^*$-algorithms~\cite{hnr-fbhdm-68,s-oeppp-94}.
Fleischer et al.~\cite{fkklt-colao-08} give a general framework for computing
short paths for settings where the environment as well as the
location of the target is unknown.
For finding biologically feasible shortest paths, the approach by Faust
et al.~\cite{fch-mpura-09} is to take the chemical structure of
reactants into account to differentiate between side and main
compounds of a reaction. Croes et al.~\cite{ccwh-mpirp-05} use
shortest paths in networks where a compound is assigned a weight equal
to its number of incident edges.
Finding pathways based on atomic transfers was presented by Boyer and
Viari~\cite{bv-airmp-03} and Heath et al.~\cite{hbk-fmpua-10}. McShan
et al.~\cite{mrs-ppmph-03} use heuristic search methods for finding
metabolic pathways. Kaleta et al.~\cite{kfs-cwbls-09}
compute elementary flux patterns to detect pathways.
Independently from our work, Chakraborty et al.~\cite{cfmy-harc-09}
show that it is NP-hard to find the minimum number of labels such that any
two vertices are connected by a {\em rainbow path} (i.e., a path with pairwise
distinct edge labels).
There are many tools available for computing shortest paths and
analyzing biological networks in
various contexts. The tool {\em Pajek} \cite{bm-pplna-98} has its
roots in Social Sciences. The origin of {\em Cytoscape}
(e.g.,~\cite{ccbsaklwicsbs-cseim-05})
is Systems Biology. The main focus of
both tools is on network visualization but both allow also some
analysis of networks.
Another program is the {\em Pathway Hunter Tool}
\cite{rasss-mpaws-05}.
All these tools allow for the computation of
shortest paths but none is---so far---able to compute
shortest paths with unique labels.
Software for computing such paths
is available from the authors~\cite{ks-sspul-08}.
\section{Background from Computer Science}\label{backInf}
In this section, we briefly review some basics in computer science
for readers with more background in biology than in computer science.
\subsection{Modeling Metabolic Networks}
Mathematically, a metabolic network is a graph, so let us briefly
review some terms from graph theory. A graph consists of a set of
{\em vertices} or {\em nodes}. Relations between two nodes are modeled by
edges between nodes. If relations are unidirectional (i.e., the
edges are arrows), the graph is called a {\em directed} graph.
In our case, the nodes represent the metabolites, the edges model
reactions converting one metabolite into another. Usually, reactions are
reversible; that is, the corresponding edges are undirected. However,
there are a few cases where the reverse direction is not feasible for
thermodynamic reasons. As directed graphs are easier to handle than
mixed graphs, we use only directed edges, and model reversible reactions
by two oppositely directed edges.
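To make this modeling convention concrete, a reaction list can be turned into a directed, labeled edge list as follows; the tuple encoding is our own illustration, not a format prescribed by the databases.

```python
def build_network(reactions):
    """Build a directed, labeled edge list from a reaction list.
    A reversible reaction is modelled by two oppositely directed edges
    carrying the same reaction label.
    reactions: list of (substrate, product, label, reversible) tuples."""
    edges = []
    for substrate, product, label, reversible in reactions:
        edges.append((substrate, product, label))
        if reversible:
            # model the reverse direction by a second, opposite edge
            edges.append((product, substrate, label))
    return edges
```

For example, a reversible reaction between metabolites A and B contributes two edges, while an irreversible one contributes a single edge.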
A {\em path} in a graph is a sequence of edges, where---except for the
first edge---every edge starts in that node where the preceding edge
ends. The {\em length} of a path is defined as the number of edges on
the path (if the edges have no weights). Consequently, the shortest
path between two given nodes in a graph is the path having the
smallest number of edges.
\begin{table}[b!]
\hrulefill\\[-20pt]
\begin{tabbing}
\mbox\qquad\=\qquad\=\qquad\=\qquad\=\kill
Let $Q$ be a queue of vertices\\
insert start vertex into $Q$\\
{\bf while} $Q$ is not empty {\bf do}\\
\> $v$ := first vertex from $Q$\\
\> remove first vertex from $Q$\\
\> {\bf for all} vertices $v'$ adjacent to $v$ {\bf do}\\
\>\> {\bf if} $v'$ was not visited before {\bf then}\\
\>\>\> {\em v'.father} := $v$\\
\>\>\> mark $v'$ as visited\\
\>\>\> append $v'$ to $Q$\\
\>\>\> report shortest path to $v'$\\
\>\> {\bf end if}\\
\> {\bf end for}\\
{\bf end while}\\[-25pt]
\end{tabbing}
\hrulefill\\
Algorithm 1: Breadth-First Search (BFS)
\end{table}
\subsection{Shortest Paths in Unlabeled Networks}
Given a graph, $G=(V, E)$, it is quite simple to compute the shortest
path between two vertices of the graph or even from a given vertex to
every other one. {\em Breadth-First Search} (BFS), see
Algorithm~1, is known to every first-year student of computer
science. Even if the edges have different lengths, the problem is
still easy to solve using {\em Dijkstra's algorithm}
(e.g., Cormen et al.~\cite{clrs-ia-01}).
Note that BFS creates a tree of all shortest paths by storing for
every vertex, $v$, its predecessor, {\em v.father}, on the shortest
path to the start vertex. Thus, a shortest path from a given vertex to
the start vertex can be found simply by following the {\em father}
pointers.
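For readers who prefer executable code to pseudocode, Algorithm~1 can be sketched in Python as follows; the adjacency-dictionary encoding of the graph is our choice of illustration.

```python
from collections import deque

def bfs_shortest_paths(graph, start):
    """Breadth-First Search: returns, for every reachable vertex, its
    predecessor ('father') on a shortest path from the start vertex.
    graph: dict mapping a vertex to the list of its successor vertices."""
    father = {start: None}          # also serves as the 'visited' marker
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph.get(v, []):
            if w not in father:     # w was not visited before
                father[w] = v
                queue.append(w)
    return father

def reconstruct_path(father, target):
    """Follow the father pointers back to the start vertex."""
    path = []
    while target is not None:
        path.append(target)
        target = father[target]
    return path[::-1]
```

With `graph = {'S': ['A'], 'A': ['B', 'C'], 'B': ['T'], 'C': ['D'], 'D': ['T']}`, the shortest path from S to T has three edges and is recovered by following the father pointers from T.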
\subsection{Computational Complexity}\label{compcomp}
As we will see in the following section, things get much harder
in our case.
To express the hardness of a problem, it is very common in computer
science to estimate the order of growth in the resources (i.e.,
running time and memory requirements) needed by a program as a
function in the input size. In our case, the input size is the number
of vertices and edges in the graph. While the running time of BFS is
linear in the input size (that is, if we double the input
size, the running time also doubles), the running time for shortest
paths with unique labels is most likely exponential;
that is, for a graph with $v$ vertices and $e$ edges, the
running time grows on the order of $2^{v+e}$. More precisely, we can
show that our problem belongs to the class of {\em NP}-{\em complete}
problems~\cite{gj-cigtn-79}. It is a widely held belief that there is
no sub-exponential solution for NP-complete problems. The impact of
this running time is shown in Figure~\ref{figs/exp-fig}.
\begin{figure}[t!]
\centering
\includegraphics[height=8cm]{figs/exp.jpg}
\caption{Magnitude of resource request for an
NP-complete problem compared to BFS.\label{figs/exp-fig}}
\end{figure}
NP-completeness is usually shown
using a common technique in computer science known
as proof by {\em reduction}. That is, we take a well-known hard
problem---in our case 3-SAT---and show that it would be easy
to solve if our problem, the {\em shortest path with unique labels} problem
(SPUL) defined in the following section, were easy to solve.
This is done by describing how to translate an input
to 3-SAT into an input to SPUL such that a solution to SPUL yields a solution to
3-SAT.
Given a set of {\em binary variables}, $x_1, \ldots, x_n$, and
a set of {\em clauses}, $C_1, \ldots, C_m$, consisting of three
literals (i.e., $C_i = L_{i1} \vee L_{i2} \vee L_{i3}$, where $L_{ik}$
denotes a negated ($\overline{x_k}$) or unnegated variable ($x_k$)),
the problem {\em 3-SAT} asks if there is an assignment
of $x_1, \ldots, x_n$ to 0 or 1 such that all clauses are
fulfilled~\cite{gj-cigtn-79}.
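To make the definition concrete, here is a small checker (our own illustration) that tests whether an assignment fulfills all clauses, encoding the literal $x_k$ as $+k$ and $\overline{x_k}$ as $-k$:

```python
def satisfies(assignment, clauses):
    """Check a 3-SAT assignment.
    clauses: list of 3-tuples of nonzero ints, +k for x_k and -k for
    the negated variable; assignment: dict mapping k to 0 or 1.
    Every clause must contain at least one true literal."""
    return all(
        any(assignment[abs(lit)] == (1 if lit > 0 else 0) for lit in clause)
        for clause in clauses
    )
```

For instance, the formula $(x_1 \vee \overline{x_2} \vee x_3) \wedge (\overline{x_1} \vee x_2 \vee x_3)$ is fulfilled by $x_1 = x_2 = 1$, $x_3 = 0$, but not by $x_1 = 1$, $x_2 = x_3 = 0$.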
\begin{figure*}[t!]
\centerline{\input{figs/spul-np}}
\caption{Transforming a 3-SAT instance to an SPUL instance.
\label{figs/spul-np-fig}}
\end{figure*}
\section{Shortest Paths with Unique Labels}
\label{spnet}
Let $G$ be a directed graph (in our
case a metabolic network) that consists of a set of vertices
(metabolites), $V$, and a set of edges (reactions), $E$.
Even if $G$ does not contain currency
metabolites, which would introduce biologically infeasible shortcuts,
a shortest path in $G$ may still be biologically infeasible,
because it may use the same reaction
twice, as explained in Section~\ref{intro}.
Thus, we are interested only in shortest {\em feasible} paths; that
is, shortest paths from $s$ to $t$ with distinct reaction types. Such
a path may be longer than the overall
shortest path, see Figure~\ref{figs/problem-fig}. To store the
reaction types in the graph, we use {\em labels} for the edges.
Altogether,
a mathematical model for our problem is:
{\bf Problem} {\em Shortest Path with Unique Labels} (SPUL): Given a graph
$G=(V, E)$, a mapping $\ell: E\longrightarrow \mathrm{I\kern-.23emI\kern-.29em N}$ that
assigns a label to every edge, and two vertices, $s\in V$ and $t\in V$,
find a shortest path $P=(e_1, e_2, ..., e_k)$ starting
in $s$ and ending in $t$ with pairwise distinct edge labels; that is, for
$1\leq i < j \leq k: \ell(e_i) \neq \ell(e_j)$.
\begin{figure}[t]
\centerline{\input{figs/problem}}
\caption{The shortest path from $S$ to $T$ is
$S\stackrel{1}{\longrightarrow} A
\stackrel{2}{\longrightarrow} B
\stackrel{1}{\longrightarrow} T$, but this path is
infeasible, because it passes the label ``$1$'' twice. The shortest
feasible path is \newline
$S\stackrel{1}{\longrightarrow} A
\stackrel{2}{\longrightarrow} C
\stackrel{3}{\longrightarrow} D
\stackrel{4}{\longrightarrow} T$.
\label{figs/problem-fig}}
\end{figure}
\subsection{Computational Complexity}\label{spul-comp}
In an unlabeled network, we can simply apply BFS.
Things get considerably harder, if we add labels to the edges
and require that no label is passed twice on a path between two
vertices. Note that neither BFS nor Dijkstra's algorithm is
able to find the shortest feasible path shown in Figure~\ref{figs/problem-fig}.
\begin{theorem}
Given a graph $G=(V, E)$ with a mapping
$\ell:E\longrightarrow \mathrm{I\kern-.23emI\kern-.29em N}$ that assigns a label to every edge, it is
NP-complete to determine if there is a path
$P=(e_1, e_2, \ldots, e_k)$ that uses every label at most once; that is,
for $1\leq i< j \leq k: \ell(e_i) \neq \ell(e_j)$.
\end{theorem}
\begin{proof}
We show our theorem by reduction from 3-SAT.
A 3-SAT instance can be transformed into an SPUL instance as follows: For
every clause $C_i$ we use a clause gadget that consists of three
parallel edges labelled with $L_{ik}$ (i.e., with $x_k$ or
$\overline{x_k}$ for an unnegated or negated variable,
respectively). The variable gadget for variable $x_j$ consists of two
parallel paths, one with all negated labels, one with all unnegated
labels. For the whole input, we start with a vertex, $s$, and add all
variable gadgets followed by all clause gadgets. The last vertex is
labeled $t$; see Figure~\ref{figs/spul-np-fig}. To find a path from $s$ to $t$, we
have to pass either the negated or the unnegated branch for every
variable. Thus, after passing the variable gadgets we have either all
negated or all unnegated variables left to pass the clause gadgets
without using a label twice. This is possible if and only if the given
formula is satisfiable.
\end{proof}
\begin{table}[t!]
\hrulefill\\[-20pt]
\begin{tabbing}
\mbox\quad\=\quad\=\quad\=\quad\=\quad\=\kill
Let $Q$ be a queue of edges\\
insert ``dummy edge'' to start vertex into $Q$\\
{\bf while} $Q$ is not empty {\bf do}\\
\> $e$ := first edge from $Q$\\
\> remove first edge from $Q$\\
\> {\bf for all} edges $e'$ adjacent to {\em e.target} {\bf do}\\
\>\> {\bf if} {\em $e'$.label} was not used on the shortest\\
\>\>\>\> path from $s$ to $e$ {\bf then}\\
\>\>\> {\em $e'$.father} := $e$\\
\>\>\> append $e'$ to $Q$\\
\>\>\> {\bf if} {\em $e'$.target} was not visited before {\bf then}\\
\>\>\>\> report shortest path to {\em $e'$.target}\\
\>\>\> {\bf end if}\\
\>\> {\bf end if}\\
\> {\bf end for}\\
{\bf end while}\\[-25pt]
\end{tabbing}
\hrulefill\\
Algorithm 2: Shortest Path with Unique Labels
\end{table}
\subsection{Computing Shortest Path with Unique Labels}\label{computing}
\subsubsection{A Memory-Consuming Solution}\label{alg1}
We use a modified version of BFS to solve our problem, see
Algorithm~2. Similar to BFS, we store all paths found so far. But instead
of storing a shortest-path tree for the vertices as in BFS, we
construct a shortest-feasible-path tree on the edges of the graph,
see Figure~\ref{ds}.
That is, we store every possible feasible path leading to a vertex in
the tree during the search. When the search reaches a vertex, $v$, via
an edge, $e'$, we can determine if the path to $v$ via $e'$ is
feasible (i.e., no label occurs twice). Because the algorithm proceeds
in BFS manner, the first feasible path found to a vertex is also the
shortest feasible path. The drawback is that the algorithm is quite
memory consuming, because it stores all feasible paths to all
vertices.
\begin{figure}[t!]
\mbox{}\hspace{-1em}\includegraphics{figs/DS.pdf}
\caption{An example for a shortest feasible path tree constructed by
Algorithm~2. The network consists of three nodes (S, A, and B),
three edges from S to A, and three edges from A to B. The dashed
lines show the shortest feasible path tree.\label{ds}}
\end{figure}
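As a minimal sketch, Algorithm~2 can be rendered in Python as follows. Instead of father pointers into a shortest-feasible-path tree, each queue entry here carries its label set and path explicitly; this is an equivalent (and equally memory-consuming) formulation, not a literal transcription of the pseudocode.

```python
from collections import deque

def shortest_feasible_paths(edges, start):
    """Sketch of Algorithm 2: BFS over edges instead of vertices.
    Each queue entry carries the set of labels used so far and the path
    itself, standing in for the father-pointer chain of the
    shortest-feasible-path tree.
    edges: iterable of (u, v, label) triples (a directed labeled graph).
    Returns {vertex: shortest feasible path from start, as a vertex list}."""
    out = {}
    for u, v, label in edges:
        out.setdefault(u, []).append((v, label))
    found = {start: [start]}
    # the entry for the start vertex plays the role of the "dummy edge"
    queue = deque([(start, frozenset(), [start])])
    while queue:
        v, used, path = queue.popleft()
        for w, label in out.get(v, []):
            if label not in used:            # label not used on path to v
                queue.append((w, used | {label}, path + [w]))
                if w not in found:           # first hit = shortest feasible
                    found[w] = path + [w]
    return found
```

On the network of Figure~2 this returns the infeasible-free path S, A, C, D, T of length four for target T, while plain BFS would report the infeasible length-three path via B.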
\subsubsection{Balancing Time and Memory Requirements}
To save memory, we tried a different solution by exploiting the fact
that a metabolic network has many parallel edges. Thus, our search
does not have to explore all edges incident to a vertex, but only
those edges that lead to different vertices. Instead of storing one
label per edge on a shortest path, we store a set of labels. This
significantly decreases the number of shortest paths that we have to
store. The drawback is that we have to find a feasible combination of
labels when the search progresses (i.e., when we want to add a new
edge to a path). This can be solved by a simple backtracking; that is,
we successively test combinations of labels until we either find a
feasible combination or no more combination is possible. The idea is
that in most cases this backtracking may not require much time,
because a feasible combination is found quickly. Only if there is no
feasible combination, we have to test all of them. Clearly, this
heavily depends on the structure of the input network.
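The backtracking over label combinations amounts to choosing one label per edge bundle on the path such that all chosen labels are pairwise distinct. A minimal sketch of this step (our own formulation, not code from the paper):

```python
def pick_distinct_labels(label_sets, used=frozenset()):
    """Backtracking: choose one label from each set such that all chosen
    labels are pairwise distinct (a 'feasible combination').
    label_sets: list of sets of labels, one set per bundle of parallel
    edges on the path.  Returns the chosen labels, or None if no
    feasible combination exists."""
    if not label_sets:
        return []
    for label in label_sets[0]:
        if label not in used:
            rest = pick_distinct_labels(label_sets[1:], used | {label})
            if rest is not None:             # feasible choice found
                return [label] + rest
    return None                              # all combinations failed
```

As the text notes, in the common case a feasible combination is found quickly; only when none exists does the backtracking exhaust all combinations.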
\subsubsection{Preprocessing}
Before we start our algorithm, we perform a simple BFS to determine
which vertices can be reached at all (i.e., there is a feasible or
infeasible path). We store these vertices to be able to abort the
search as soon as feasible paths to all of them have been found. In a
second stage of the preprocessing, we perform a simple BFS again, this
time checking if the found path is feasible and reporting
feasible paths.
\begin{table*}
\caption{Examples of running time and found paths starting in vertex 1 (XEON CPU 3.0 GHz, 16 GB memory).\label{runtimes}\newline}
\noindent
\begin{tabular}{l|*{9}{c}}
& & & $\!\!\!\!\!$ Vertices$\!\!\!\!\!$
& \multicolumn{3}{c}{Paths found}
& \multicolumn{3}{c}{Running time in min} \\
Network & Vertices & Edges & reachable
& Alg.\,A$\!\!\!\!\!$ & Alg.\,B & B with
& Alg.\,A$\!\!\!\!\!$ & Alg.\,B & B with
\\
& in $G$ & in $G$ & from start &&&preproc. &&&preproc.\\ \hline
{\em A.~niger} & 2547 & 7818 & 1488 & 1283 & 1369 & 1382 & 1.2 & 23.4 & 23.6 \\
{\em E.~coli} & 1895 & 5525 & 1111 & 911 & 1012 & 1021 & 1.3 & 35.6 & 35.9 \\
{\em H.~sapiens} & 2474 & 7873 & 1614 & 1347 & 1507 & 1526 & 1.2 & 20.2 & 20.5 \\
\end{tabular}
\end{table*}
\begin{table*}
\caption{Comparison on the number of paths found in several organism-specific metabolic networks\label{smaller}: Total number of shortest paths (SP),
number of correct/infeasible shortest paths found by BFS,
number of shortest paths with unique labels (SPUL).
\newline
}
\noindent
\begin{tabular}{l|*{8}{c}}
$\!\!$ & uur$^{\rm -cm}$ $\!\!$ & $\!\!$ uur$^{\rm +cm}$ $\!\!$ & $\!\!$ mpn$^{\rm -cm}$ $\!\!$ & $\!\!$ mpn$^{\rm +cm}$ $\!\!$ & $\!\!$ bbu$^{\rm -cm}$ $\!\!$ & $\!\!$ bbu$^{\rm +cm}$ $\!\!$ & $\!\!$ mge$^{\rm -cm}$ $\!\!$ & $\!\!$ mge$^{\rm +cm}$ \\ \hline
SP & 2372 & 17038 & 2191 & 6652 & 6357 & 39552 & 6793 & 36246\\
Correct SP & 1259 & 9302 & 1288 & 3929 & 2470 & 16864 & 2868 & 18460 \\
Infeasible SP & 1113 & 7736 & 903 & 2723 & 3887 & 22688 & 3925 & 17786\\
SPUL & 1308 & 13306 & 1601 & 4577 & 2513 & 22809 & 3061 & 22281\\[5pt]
\end{tabular}
{\footnotesize
uur = {\em U.\,urealyticum}, mpn = {\em M.\,pneumoniae}, bbu = {\em B.\,burgdorferi}, mge = {\em M.\,genitalium},\newline
+cm/-cm: with/without currency metabolites}
\end{table*}
\subsubsection{Comparison}
Table~\ref{runtimes} shows results for large databases. There was not sufficient
memory to compute all paths. Thus, we compare the number of paths
found until the program was stopped because there was no memory
left. It turned out that the memory-consuming solution (Alg.~A) is
much faster than our second approach, but finds fewer paths. The
preprocessing with BFS further increases the number of found
paths. Smaller organisms are compared in Table~\ref{smaller}:
We compared the number of (graph theoretically)
shortest paths (SP) to the number of shortest paths with unique labels
(SPUL). Furthermore, we listed the
number of correct shortest paths and the
number of biologically infeasible
shortest paths
(i.e., paths such as
S$\longrightarrow$ A$\longrightarrow$ B $\longrightarrow$ T in
Fig.~\ref{figs/problem-fig})
that were found using BFS.
\section{Conclusion}
We showed that the problem of finding paths with pairwise-distinct edge labels
in (directed) graphs is NP-hard. A straightforward modification of BFS for
this problem results in a memory-consuming solution that allows for computing only small
instances. By balancing time and memory requirements, we are able to
deal with larger instances.
\section*{Acknowledgements}
This work was financially supported by the {\em German Research
Foundation} ({\em DFG}), project {\em SFB 578} and by {\em Federal
Ministry of Education and Research} ({\em BMBF}), project {\em
InterGenomics}. Tom Kamphans was supported by 7th Framework
Programme contract 215270 (FRONTS).
\bibliographystyle{abbrv}
{\small
\section{Introduction}
Recently, there has been a lot of progress in understanding the low energy effective theory of coincident M2-branes.
Schwarz raised the possibility of using Chern-Simons theories to describe the superconformal theory on M2-branes \cite{Schwarz:2004yj}. The idea was first realized concretely by Bagger and Lambert \cite{Bagger:2006sk} and Gustavsson \cite{Gustavsson:2007vu} (BLG), where they constructed a Lagrangian for $SU(2) \times SU(2)$ Chern-Simons theory with $\mathcal{N}=8$ supersymmetry. The BLG theory had some peculiarities that appeared puzzling from the M2-brane point of view. First, it was not clear how to extend this theory to describe arbitrary number of M2-branes. Second, a string/M-theoretic
derivation of the BLG theory was lacking.
Soon afterwards, the BLG theory was followed by a variety of superconformal Chern-Simons theories more clearly rooted in string/M-theory.\footnote{In this paper, we will focus
on relatively new ${\cal N}\ge4$ theories only. For a nice summary of more
conventional ${\cal N} \le 3$ Chern-Simons theories,
see {\it e.g.} \cite{Gaiotto:2007qi} and references therein and thereto.} In a type IIB string theory setup, Gaiotto and Witten \cite{Gaiotto:2008sd} gave a general
construction of ${\cal N}=4$ Chern-Simons theories with one type of
hyper-multiplets, where the theories were shown to be classified
by an auxiliary Lie super-algebra.
This construction was extended in \cite{HLLLP1} to include twisted hyper-multiplets, so that all ${\cal N} \ge 4$ theories can,
in principle, fit into the Gaiotto-Witten classification.
It was also pointed out in \cite{HLLLP1} that the Gaiotto-Witten setup can be related via T-duality to M2-branes
probing orbifold geometries.
Aharony, Bergman, Jafferis and Maldacena \cite{ABJM} (ABJM) then performed an in-depth study of an especially simple and instructive case of ${\cal N}=6$ theory
with $U(N)\times U(N)$ gauge group. They gave convincing arguments that
the Chern-Simons theory at level $k$ is dual to M-theory on
AdS$_4 \times S^7/\mathbb{Z}_k$. The ABJM theory thus opened up
a laboratory for testing the long-missing AdS$_4$/CFT$_3$ duality
and led to many exciting developments.
More work on explicit construction and classification of ${\cal N}=4,5,6$ theories can be found in \cite{Benna, Imamura:2008nn, HLLLP2,Bagger:2008se,Tera,Schnabl:2008wj,Imamura:2008dt,hohm}.
The aim of this paper is to compute the superconformal index \cite{Kinney:2005ej, Romelsberger:2005eg}
for some ${\cal N}=4,5$ quiver Chern-Simons theories constructed in
\cite{HLLLP1,HLLLP2} and further studied in \cite{Benna, Imamura:2008nn,Tera}, which are believed to be dual to M-theory on AdS$_4$ times certain orbifolds of $S^7$.
The superconformal index, originally defined for 4-dimensional theories in \cite{Kinney:2005ej, Romelsberger:2005eg}, counts the number of certain chiral operators. This index gets contributions only from short multiplets which cannot combine to form long multiplets as the parameters of the theory are varied. Just as for the Witten index \cite{Witten:1982df}, the superconformal index does not change under continuous changes of parameters. The superconformal indices in 3, 5, 6 dimensions were constructed in \cite{Bhattacharya:2008zy, Dolan}, and the index for the ABJM theory has been computed in \cite{Bhattacharya:2008bja, Dolan}.
As in \cite{Bhattacharya:2008bja}, we compute the superconformal indices for $\mathcal{N}=4, 5$ Chern-Simons theories in the large $N$ and large $k$ limit, where $N$ is the rank of the gauge group and $k$ is the Chern-Simons level. As explained in \cite{ABJM}, in the large $k$ limit, the circle fiber of $S^7$ shrinks. So, in our case, the compact space becomes an orbifold of $\mathbb{C} \mathbb{P}^3$. In the large $N$ limit with
't Hooft coupling $\lambda = N/k$ fixed at a large value, the supergravity description is valid and we can compute the index by counting the Kaluza-Klein spectrum of the compact space. On the field theory side, since the effective coupling $\lambda$ can be taken to zero, we can compute the index in the free theory limit. However, when the orbifold has a non-trivial fixed locus, there are massless twisted sector contributions which survive the large $N$ limit. This is an inherently stringy effect, so we cannot rely solely on the supergravity description. A similar situation in four-dimensional quiver gauge theories has been discussed in \cite{Nakayama:2005mf}. We discuss how to determine the twisted sector contribution to the index on both the field theory and gravity sides. After taking the twisted sector into account, we show that the indices at $\lambda = 0$ and $\lambda = \infty$ agree.
The outline of this paper is as follows. In section 2, we briefly review the definition of the superconformal index, some salient features of the $\mathcal{N}=4, 5$ theories we study in later sections, and the computation of the index in ABJM theory \cite{Bhattacharya:2008bja}. In section 3 and 4, we compute the index for $\mathcal{N}=5$ and $\mathcal{N}=4$ theories in field theory as well as in gravity. Perfect agreement is found in the ${\cal N}=5$ without twisted sector contributions. Agreement is
possible for ${\cal N}=4$ theories if and only if the twisted sector
contributions are taken into account.
In subsection 4.3, we also briefly discuss the ``dual ABJM model'' proposed in \cite{Hanany:2008fj,Franco:2008um}. In section 5, we conclude with some open questions. Some useful formulas are provided in the appendix.
\section{Reviews}
\subsection{Superconformal index in three dimensions}
In many supersymmetric theories, there exist short multiplets which contain a smaller number of states than ordinary long multiplets. Some of these short multiplets can be combined into long multiplets as the parameters of the theory are varied, but there are others that cannot. The superconformal index defined in \cite{Kinney:2005ej,Romelsberger:2005eg} receives contributions only from the latter states, so it is kept constant while the couplings of the theory change.
In three dimensions, the bosonic subgroup of the superconformal group $OSp(\mathcal{N} |4)$ is $SO(3, 2) \times SO(\mathcal{N})$, and its maximally compact subgroup is $SO(2)\times SO(3) \times SO({\cal N})$, where ${\cal N}$ is the number of supersymmetries.
Following \cite{Bhattacharya:2008zy,Bhattacharya:2008bja}, we denote the eigenvalues of the Cartan generators of $SO(2)\times SO(3)\times SO({\cal N})$ by $\epsilon_0$, $j$ and $h_i$ $(i=1,\cdots, [{\cal N}/2])$.
In radial quantization (compactifying the theory on $\mathbb{R} \times S^2$), $\epsilon_0$ and $j$ are interpreted as energy and angular momentum.
There are $4\mathcal{N}$ real supercharges with $\epsilon_0=\pm 1/2$.
In this paper, we will compute the three-dimensional version of the index of \cite{Kinney:2005ej},
\begin{eqnarray} \label{WIndex}
{\cal I}(x,\{y_i\}) = {\rm Tr} \left[(-1)^F x^{\epsilon_0+j} y_1^{h_2} \cdots y_{[\mathcal{N}/2]-1}^{h_{[\mathcal{N}/2]}}\right]\,.
\end{eqnarray}
The definition of the index singles out a particular supercharge $Q$ with charges $\epsilon_0=+1/2$, $j = -1/2$, $h_1 = 1$, $h_i = 0$ $(i\ge2)$. The superconformal algebra implies
that
\begin{eqnarray}
\{Q, Q^\dagger \} = \Delta \equiv \epsilon_0 - j -h_1 \,.
\end{eqnarray}
The short multiplets contributing to the index satisfy $\Delta = 0$ and
are annihilated by both $Q$ and $Q^\dagger$. Note that these states can be
interpreted as elements of $Q$-cohomology class.
Since $\Delta = Q Q^\dagger + Q^\dagger Q$, we can think of $Q$ as
analogous to the $d$ operator in de Rham cohomology and $\Delta$ to the Laplacian operator.
On the field theory side, the index in the free theory limit can be computed using a matrix-integral formula \cite{Aharony:2003sx},
\begin{eqnarray}
{\cal I}(x,\{y_i\}) = \int \prod_a DU_a \exp \left(\sum_R \sum_{n=1}^\infty \frac{1}{n} f_R(x^n , y_1^n,\cdots y_{[\mathcal{N}/2]}^n) \chi_{R} (\{U_a^n\})\right) \,,
\label{ind-ft}
\end{eqnarray}
where $R$ denotes the representations of matter fields (``letters''),
$f_R$ is the index computed over the single letters without restriction
on gauge invariance, and $\chi_R$ is the group character. In the case of $U(N)$ gauge theory with bi-fundamental matter fields,
\begin{eqnarray}
\chi_{ab} (\{U_c\}) = {\rm Tr} U_a {\rm Tr} U_b^{\dagger} \,.
\end{eqnarray}
The index (\ref{ind-ft}) enumerates all possible gauge-invariant multi-trace operators.
The standard rule of Bose statistics (or equivalently the plethystic integral)
relates the ``multi-particle'' index ${\cal I}$ to the ``single-particle'' index $I_{\rm sp}$ through
\begin{eqnarray}
{\cal I}(x,\{y_i\}) = \exp \left( \sum_{k=1}^{\infty} \frac{1}{k} I_{\rm sp} (x^k,\{y^k_i\}) \right) \,.
\end{eqnarray}
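This relation can be evaluated order by order on truncated power series. The following helper is our own illustration (assuming integer coefficients in $I_{\rm sp}$), not code used in the paper; it exponentiates the truncated exponent series via the standard recurrence $n F_n = \sum_{j=1}^{n} j\,E_j F_{n-j}$ for $F = \exp(E)$.

```python
from fractions import Fraction

def multi_particle_index(sp, order):
    """Coefficients of I(x) = exp( sum_{k>=1} I_sp(x^k)/k ) up to x^order,
    where sp[n] is the (integer) coefficient of x^n in the single-particle
    index I_sp, with sp[0] = 0."""
    # exponent series E(x) = sum_k I_sp(x^k)/k, truncated at x^order
    E = [Fraction(0)] * (order + 1)
    for k in range(1, order + 1):
        for n in range(1, min(len(sp) - 1, order // k) + 1):
            E[k * n] += Fraction(sp[n], k)
    # exponentiate the truncated series: n*F_n = sum_j j*E_j*F_{n-j}
    F = [Fraction(0)] * (order + 1)
    F[0] = Fraction(1)
    for n in range(1, order + 1):
        F[n] = sum(j * E[j] * F[n - j] for j in range(1, n + 1)) / n
    return F
```

With a single bosonic letter, $I_{\rm sp} = x$, this reproduces $1/(1-x)$, i.e.\ all coefficients equal to one, as expected for multi-particle states of one boson; a fermionic letter, $I_{\rm sp} = -x$, gives $1 - x$.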
On the gravity side, the single particle index $I_{\rm sp}$ can be computed from the Kaluza-Klein spectrum of eleven-dimensional supergravity compactified on the internal
seven-manifold. For $S^7$, $I_{\rm sp}$ was computed in \cite{Bhattacharya:2008zy} using the known Kaluza-Klein spectrum in supergravity \cite{Gunaydin:1985tc,Biran:1983iy}. For orbifolds of $S^7$, the index counts
states in untwisted and twisted sectors. The untwisted sector
is simply the subset of the spectrum for $S^7$ that is
invariant under the orbifold action.
If the orbifold has fixed points, there may be additional twisted sector states
localized at the fixed points. We will see that ${\cal N}=5,6$ orbifolds
have no twisted sector contributions, while ${\cal N}=4$ orbifolds
do have such contributions.
As usual, the index should not change as we vary the continuous parameters of the theory \cite{Witten:1982df}. In contrast to the case of Yang-Mills theory, the effective coupling of the Chern-Simons theory $\lambda = N/k$ is discrete. But, when we take the large $N$ limit, $\delta \lambda = - \lambda^2 /N $ becomes effectively continuous. So we can take the 't Hooft coupling $\lambda = N/k$ to zero, whereupon the theory becomes free. This means that we also have to take the Chern-Simons level $k$ to be infinite, so $k$ never enters into our calculation.
In the following discussion, we take both large $N$ and large $k$ limit.
\subsection{$\mathcal{N}=4,5,6$ Chern-Simons matter theories}
In this section, we give a short summary of the $\mathcal{N} = 4,5,6$ superconformal Chern-Simons matter theories we will study in later sections. Our notations closely follow those of \cite{HLLLP1,HLLLP2}.
Our discussion will be brief, and we refer the reader to the
original papers \cite{Gaiotto:2008sd, HLLLP1,ABJM,Benna, Imamura:2008nn, HLLLP2,Bagger:2008se,Tera,Schnabl:2008wj,Imamura:2008dt} for details.
The field theory computations in this paper will be done
in the free theory limit,
so for most purposes, it is sufficient to recall the matter content
and gauge symmetry of the theory. We proceed in descending order
of number of supersymmetries.
\paragraph{${\cal N}=6$ ABJM theory}
The gauge group is $U(M)\times U(N)$. The theory makes
sense at quantum level if and only if the rank of the gauge group and the Chern-Simons level $k$ satisfy $|M-N| \le k$ \cite{Aharony:2008gk}. The matter content is summarized by the following table:
\begin{eqnarray}
\begin{array}{c|cccc}
& \; \Phi_\alpha \; & \;\bar{\Phi}^\alpha \; & \;\Psi^{\alpha}\; & \;\bar{\Psi}_{\alpha} \; \\ \hline
U(M)\times U(N) & (M,\bar{N}) & (N,\bar{M}) & (M,\bar{N}) & (N,\bar{M}) \\
SO(6)_R & \mathbf{4} & \mathbf{\bar{4}} & \mathbf{\bar{4}} &
\mathbf{4}
\end{array}
\end{eqnarray}
To compare with the index computation of \cite{Bhattacharya:2008bja}, we choose the convention for the $SO(6)_R$ highest
weights such that the $\mathbf{4}$ representation has $(h_1,h_2,h_3,h_4) = (+\frac{1}{2},+\frac{1}{2},+\frac{1}{2},-\frac{1}{2})$.
As explained in \cite{ABJM},
the moduli space of vacua of this theory is
(a symmetric product of) $\mathbb{C}^4/\mathbb{Z}_k$.
The scalars $\Phi_\alpha$ can be thought of as coordinates on $\mathbb{C}^4$.
The orbifold action $\mathbb{Z}_k$, a residual discrete gauge symmetry
on the Coulomb branch, acts on $\Phi_\alpha$ as $\Phi_\alpha \rightarrow e^{2\pi i /k} \Phi_\alpha$.
\paragraph{${\cal N}=5$ theory}
The gauge group is $O(M)\times Sp(2N)$.\footnote{We use the notation in which $Sp(2N)$ has rank $N$.}
Let us denote the generators of $O(M)$ and $Sp(2N)$ as
$M_{ab}$ and $M_{\dot a\dot b}$, respectively.
The invariant anti-symmetric tensor of $Sp(2N)$ is denoted by
$\omega_{\dot a\dot b}$. The invariant tensor of $Sp(4)=SO(5)_R$ is denoted by $C_{\alpha\beta}$.
We denote the bi-fundamental matter fields
by
\begin{equation}
\Phi_\alpha^{a\dot a}\,, ~~~
\Psi_\alpha^{a\dot a} \,.
\end{equation}
They obey the reality condition of the form
\begin{equation}
\bar\Phi^\alpha_{\dot aa} ~=~
(\Phi_\alpha^{a\dot a})^\dagger ~=~
\delta_{ab}\omega_{\dot a\dot b}C^{\alpha\beta}\Phi_\beta^{b\dot b},
\label{real5}
\end{equation}
and similarly for the fermions.
As explained in \cite{HLLLP2},
the moduli space of vacua of this theory is $\mathbb{C}^4/\mathbb{D}_{k}$
where $\mathbb{D}_{k}$ is the binary dihedral group with $4k$ elements.
The dihedral group is generated by
\begin{eqnarray}
\alpha \; : \; \Phi_\alpha \;\; \rightarrow \;\; e^{\pi i/k} \Phi_\alpha \,,
\;\;\;\;\;
\beta \; : \; \Phi_\alpha \;\; \rightarrow \;\; C_{\alpha\beta} \bar{\Phi}^\beta \,.
\end{eqnarray}
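As a cross-check that $\alpha$ and $\beta$ generate a group of order $4k$, one may realize them on the doublet $(\Phi,\bar\Phi)$ schematically by $2\times 2$ matrices and enumerate the closure (an illustrative sketch; the explicit matrices below are a toy realization we introduce here, not taken from the text):

```python
import numpy as np

def binary_dihedral_order(k):
    """Order of the group generated by a = diag(e^{i pi/k}, e^{-i pi/k})
    and b = [[0, 1], [-1, 0]], a toy 2x2 realization of alpha and beta."""
    a = np.diag([np.exp(1j*np.pi/k), np.exp(-1j*np.pi/k)])
    b = np.array([[0, 1], [-1, 0]], dtype=complex)

    def key(g):                      # hashable fingerprint of a matrix
        return tuple(np.round(g, 8).flatten())

    eye = np.eye(2, dtype=complex)
    elems = {key(eye): eye}
    frontier = [eye]
    while frontier:                  # breadth-first closure under a, b
        new = []
        for g in frontier:
            for h in (a, b):
                gh = g @ h
                if key(gh) not in elems:
                    elems[key(gh)] = gh
                    new.append(gh)
        frontier = new
    return len(elems)

# binary dihedral group D_k has 4k elements (k=2 gives the quaternion group Q8)
for k in (2, 3, 4):
    assert binary_dihedral_order(k) == 4*k
```

One can read off the relations $\alpha^{k}=\beta^{2}=-1$ and $\beta\alpha\beta^{-1}=\alpha^{-1}$ from the same matrices.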
\paragraph{${\cal N}=4$ quiver theories}
We use $(\alpha,\beta ; {\dot{\a}},{\dot{\b}})$ doublet indices
for the $SU(2)_L\times SU(2)_R$ $R$-symmetry group.
We denote the invariant tensors by $\epsilon_{\alpha\beta},
\epsilon_{\dot{\alpha}\dot{\beta}}$ and their inverses by $\epsilon^{\alpha\beta},
\epsilon^{\dot{\alpha}\dot{\beta}}$ such that $\epsilon^{\alpha\gamma}\epsilon_{\gamma\beta}=
\delta^\alpha_{\ \beta}$, $\epsilon^{{\dot{\a}}{\dot{\g}}}\epsilon_{{\dot{\g}}{\dot{\b}}}=
\delta^{\dot{\a}}_{\ {\dot{\b}}}$.
The hyper-multiplets are denoted by $(q_\alpha, \psi_{\dot{\a}})$ and
the twisted hyper-multiplets by $(\tilde{q}_{\dot{\a}},\tilde{\psi}_\alpha)$.
A doublet of $SU(2)_L$ has the $SO(4)$
highest weight $(h_1,h_2)= (\frac{1}{2},\frac{1}{2})$ and
a doublet of $SU(2)_R$ has $(\frac{1}{2},-\frac{1}{2})$.
We will consider two types of ${\cal N}=4$ quiver gauge theories :
the $U(M|N)$-type and the $OSp(M|N)$-type \cite{HLLLP1}
or more briefly $U$-type and $OSp$-type.
The $U$-type quivers can be viewed as an extension of the $\mathcal{N}=6$ theory. The gauge group is a product of $2m$ unitary factors $U(N_i)$. The ranks can be different in general, but for simplicity
we will assume that they are all equal, so the gauge group is $U(N)^{2m}$. There are $m$ hypers $(q^i_\alpha, \psi^i_{\dot{\a}})$ in $(N,\bar{N})$ of $U(N)_{2i-1}\times U(N)_{2i}$ and $m$ twisted hypers $(\tilde{q}^i_{\dot{\a}}, \tilde{\psi}^i_\alpha)$ in $(N,\bar{N})$ of $U(N)_{2i}\times U(N)_{2i+1}$.
The hermitian conjugates ($\bar{q}_\alpha = \epsilon_{\alpha\beta} (q^\dagger)^\beta$, etc.)
belong to the same $R$-symmetry representation,
but form anti-bi-fundamental representations under the gauge groups.
The matter content of the $U(M|N)$-type quiver theory is summarized in Fig. \ref{quiver}(a).
The $OSp$-type quivers can be regarded as
an extension of the $\mathcal{N}=5$ theory. The quiver consists of an alternating product of $m$ factors each of $O(M_i)$ and $Sp(2N_i)$.
For simplicity, we assume that the gauge group is $[O(2N)\times Sp(2N)]^m$. There are $m$ hypers $(q^i_\alpha, \psi^i_{\dot{\a}})$ in bi-fundamental of $O(2N)_{i}\times Sp(2N)_{i}$ and $m$ twisted hypers $(\tilde{q}^i_{\dot{\a}}, \tilde{\psi}^i_\alpha)$ in bi-fundamentals of $Sp(2N)_{i}\times O(2N)_{i+1}$.
The hermitian conjugates obey a reality condition similar to (\ref{real5}) with $C^{\alpha\beta}$ replaced by $\epsilon^{\alpha\beta}$ or $\epsilon^{{\dot{\a}}{\dot{\b}}}$.
The matter content of the $OSp$-type quiver theory is summarized in Fig. \ref{quiver}(b).
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=10.5cm]{quiver.pdf}
\caption{Matter content of (a) $U$-type and (b) $OSp$-type quiver theories. }
\label{quiver}
\end{center}
\end{figure}
The moduli space of vacua of the $U$-type theories was
studied in \cite{Benna} and \cite{Tera}.
When the Chern-Simons level $k$ is unity, the solution to F-term and D-term conditions gives the moduli space of vacua
$(\mathbb{C}^2/\mathbb{Z}_m)^2$, where the generators of $\mathbb{Z}_m$ act on coordinates of $\mathbb{C}^4$ as
\begin{eqnarray}
(z_1,z_2 , z_3,z_4) \sim (\omega_m z_1, \omega_m z_2 , z_3,z_4)
\sim ( z_1, z_2 , \omega_m z_3, \omega_m z_4)
\;\;\; (\omega_m \equiv e^{2\pi i/m}) \,.
\label{orbi1}
\end{eqnarray}
For $k>1$, as in the ${\cal N}=6$ theory, the residual discrete gauge symmetry induces further orbifolding by
\begin{eqnarray}
(z_1,z_2 , z_3,z_4) \sim
\omega_k ( z_1, z_2 ,z_3, z_4) \,.
\label{orbi2}
\end{eqnarray}
Note that all the orbifold actions (\ref{orbi1}, \ref{orbi2}) can
be generated by two (as opposed to three) generators.
In the simple case where $m$ and $k$ are relatively prime, we may choose the independent generators to be
\begin{eqnarray}
\label{orbiaction}
(\omega_m, \omega_m , 1, 1) \;\;\; {\rm and} \;\;\;
(\omega_{mk} , \omega_{mk} , \omega_{mk} ,\omega_{mk}) \, ,
\end{eqnarray}
and say that the moduli space is $\mathbb{C}^4/(\mathbb{Z}_m \times \mathbb{Z}_{mk})$.
Similarly, the moduli space of the $OSp$-type theories can be shown to be $(\mathbb{C}^2/\mathbb{Z}_m)^2 /\mathbb{D}_k \simeq \mathbb{C}^4/(\mathbb{Z}_m \times \mathbb{D}_{mk})$. The orbifold action $\mathbb{Z}_m$ is the same as (\ref{orbi1}) and the action of $\mathbb{D}_k$ is the same
as in the ${\cal N}=5$ theory.
For later purposes, let us review some details of the supersymmetry algebra of the $\mathcal{N}=4$ theories. We denote the supercharges by $Q_{\alpha{\dot{\a}}}$ and write their components as $Q_{\pm\pm}$.
The matter fields can be written as $q_{\pm}$, $\psi_{\pm}$ and so on.
In this notation,
the special supercharge involved in the definition of
the superconformal index is $Q_{++}$.
The supersymmetry transformation rule includes
\begin{eqnarray}
\left[ Q_{++}, q_+^i \right] = 0 \,,
\;\;\;\;\;
\left[ Q_{-+}, q_+^i \right] = \psi_+^i \,,
\;\;\;\;\;
\left\{ Q_{++}, \tilde{\psi}_+^i \right\} =\tilde{q}_+^i q_+^{i+1} \bar{q}_+^{i+1} - \bar{q}_+^i q_+^i \tilde{q}_+^i
\,,
\label{n4susy}
\end{eqnarray}
and similar relations obtained by hermitian conjugation and/or
exchange of hypers with twisted hypers.
\subsection{Indices for ${\cal N}=6$ ABJM theories}
We now review the computation of the index of the ${\cal N}=6$ theory
following \cite{Bhattacharya:2008bja}.
\footnote{
The same index was computed in \cite{Dolan} using a different method of evaluating the integral (\ref{n6-int}).
The same reference also gives general formulas for
the index for superconformal algebra $OSp(2N|4)$.
See also a related work \cite{Dolan2} where the indices of $SO/Sp$ gauge theories (in four dimensions) were computed,
which complements the method we develop in the next section.}
We will be slightly more general and allow the gauge group to be $U(M) \times U(N)$. The compact bosonic subgroup of the superconformal group is $SO(2) \times SO(3) \times SO(6)$,
and the index is given by
\begin{eqnarray}
{\cal I}(x,y_1,y_2) = {\rm Tr} \left((-1)^F x^{\epsilon_0 + j} y_1^{h_2} y_2^{h_3} \right) \,,
\end{eqnarray}
where $h_2$ and $h_3$ are the second and third Cartan charges of $SO(6)$.
The single-letter indices for the bi-fundamental and anti-bi-fundamental matter fields are
\begin{eqnarray}
\label{lett-6a}
f_{12} = \frac{x^{1/2}}{1-x^2}\left(\sqrt{\frac{y_1}{y_2}} + \sqrt{\frac{y_2}{y_1}}\right) - \frac{x^{3/2}}{1-x^2}\left(\sqrt{y_1y_2} + \frac{1}{\sqrt{y_1y_2}}\right) \,,
\\
\label{lett-6b}
f_{21} = \frac{x^{1/2}}{1-x^2}\left(\sqrt{y_1y_2} + \frac{1}{\sqrt{y_1y_2}}\right) - \frac{x^{3/2}}{1-x^2}\left(\sqrt{\frac{y_1}{y_2}} + \sqrt{\frac{y_2}{y_1}}\right) \,.
\end{eqnarray}
On the field theory side, the superconformal index is given by
\begin{eqnarray}
\label{n6-int}
{\cal I} = \int DU_1DU_2 \exp \left(\sum_{a,b} \sum_{n=1}^\infty \frac{1}{n} f_{ab}(x^n , y_1^n, y_2^n) {\rm Tr}(U_a^n) {\rm Tr}(U_b^{\dagger n})\right) \,.
\end{eqnarray}
The only difference from \cite{Bhattacharya:2008bja} is that now $U_2$ is an element of $U(M)$. We now change variables
in the standard way for large $N$ computations,
\begin{eqnarray}
\rho_n = \frac{1}{N} {\rm Tr} U_1^n, \;\;\; \chi_n = \frac{1}{M} {\rm Tr} U_2^n \,.
\label{rhoch}
\end{eqnarray}
Then the measure is given by (a derivation of this measure factor is given in appendix \ref{meas})
\begin{eqnarray}
DU_1 = \prod_{n=1}^{\infty} d\rho_n \exp \left( -N^2 \sum_n \frac{\rho_n \rho_{-n}}{n} \right ) \,,
\\
DU_2 = \prod_{n=1}^{\infty} d\chi_n \exp \left(-M^2 \sum_n \frac{\chi_n \chi_{-n}}{n} \right ) \,.
\end{eqnarray}
Substituting this, and writing $M = N + m$, we get
\begin{eqnarray}
{\cal I} = \int \prod_n d\rho_n \, d\chi_n \exp \left(-\frac{N^2}{2} \sum_n \frac{1}{n} C_n^T M_n C_n \right) \,.
\end{eqnarray}
Here, $C_n^T =(\chi_n ~ \rho_n ~\chi_{-n} ~ \rho_{-n} ) $ and
\begin{eqnarray}
M_n = \left( \begin{array}{cccc}
0 & 0 & (1+\alpha)^2 & -(1+\alpha) f_{21;n} \\
0 & 0 & -(1+\alpha) f_{12;n} & 1 \\
(1+\alpha)^2 & - (1+\alpha) f_{12;n} & 0 & 0 \\
-(1 +\alpha) f_{21;n} &1 & 0 & 0
\end{array} \right) \,,
\label{m144}
\end{eqnarray}
with $\alpha = m/N$ and $f_{ab;n} \equiv f_{ab}(x^n,y_1^n,y_2^n)$.
The overall normalization of the Gaussian integral is fixed by requiring that ${\cal I}=1$ at $x=0$. The final result is
\begin{eqnarray}
{\cal I} &=& \prod_n \frac{(1+\alpha)^2}{\sqrt{{\rm det} M_n}}
\;=\; \prod_n \frac{(1-x^{2n})^2}{(1 - \frac{x^n}{y_1^n})(1 - \frac{x^n}{y_2^n})(1 - x^n y_1^n)(1 - x^n y_2^n)} \,,
\nonumber \\
I_{\rm sp} &=& \frac{x}{y_1-x}+\frac{1}{1-x y_1}+\frac{x}{y_2-x}+\frac{1}{1-x y_2}- \frac{2}{1-x^2} \,.
\end{eqnarray}
Note that the result is independent of $\alpha$ as we take the large $N$ limit.
This agrees with the observations on the gravity dual \cite{Aharony:2008gk}. The only difference comes from the torsion flux, or in the type IIA description, the NS-NS 2-form flux through $\mathbb{C}\mathbb{P}^1 \subset \mathbb{C}\mathbb{P}^3$. Since the torsion does not affect the classical equations of motion, we expect exactly the same set of graviton states as in the $U(N) \times U(N)$ theory. The torsion flux makes a difference in the baryonic spectrum, but baryons do not contribute to the index in this limit. In \cite{Aharony:2008gk}, it was also shown that this theory is superconformal only if $|M-N| \le k$. But in our field theory calculation, we take the limit $\lambda = N/k \to 0$, so we do not expect any sign of inconsistency even for large $|M-N|$.
In view of these observations, we will neglect the differences
in the ranks of the gauge group in the following sections.
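The statement that the closed-form product above is the plethystic exponential of $I_{\rm sp}$ can be verified order by order in $x$; an illustrative \texttt{sympy} sketch at generic rational test values of the fugacities:

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.Rational(1, 2), sp.Rational(1, 3)  # generic rational test fugacities

def I_sp(t, u1, u2):
    # single-particle index of the N=6 theory
    return (t/(u1 - t) + 1/(1 - t*u1) + t/(u2 - t) + 1/(1 - t*u2)
            - 2/(1 - t**2))

order = 6
# multi-particle index from the plethystic exponential of I_sp ...
lhs = sp.exp(sum(I_sp(x**k, y1**k, y2**k)/k for k in range(1, order)))
# ... versus the closed-form product, truncated at n < order
rhs = sp.prod([(1 - x**(2*n))**2 /
               ((1 - (x/y1)**n)*(1 - (x/y2)**n)
                *(1 - (x*y1)**n)*(1 - (x*y2)**n))
               for n in range(1, order)])

s_lhs = sp.series(lhs, x, 0, order).removeO()
s_rhs = sp.series(rhs, x, 0, order).removeO()
assert sp.expand(s_lhs - s_rhs) == 0
```

Factors with $n \ge {\rm order}$ only affect coefficients beyond the truncation, so the finite product suffices for this check.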
\newpage
\section{Indices for ${\cal N}=5$ theories}
In this section, we calculate the index of the $\mathcal{N}=5$ $O(2N) \times Sp(2N)$ theory of \cite{HLLLP2}.
The bosonic subgroup of the superconformal group is
$ SO(3, 2) \times SO(5) $, so we define the index to be
\begin{eqnarray}
{\cal I} = {\rm Tr} \left[ (-1)^F x^{\epsilon_0 + j} y^{h_2} \right] \,,
\end{eqnarray}
where $h_2$ is the second Cartan charge of $SO(5)$.
\subsection{Field theory}
We begin with the formula for the index of a free field theory
with two gauge groups and bi-fundamental matter fields only,
\begin{eqnarray} \label{MIntSpO}
{\cal I} = \int DA DB \exp \left( \sum_{n=1}^{\infty} \frac{1}{n} f(x^n, y^n) {\rm Tr}(A^n) {\rm Tr}(B^{n}) \right)\,.
\end{eqnarray}
For the ${\cal N}=5$ theory, we take $A \in SO(M)$ and $B \in Sp(2N)$.
The function $f$ denotes the index of $\Delta=0$ ``letters''.
The letters contributing to $f$ are summarized in the following table:
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
type & operators & $\epsilon_0$ & $j$ & $SO(5)$ \\
\hline
bosons & $\Phi$ & ${\textstyle \frac{1}{2}}$ & $0$ & $({\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}})$ \\
fermions & $\Psi$ & $1$ & ${\textstyle \frac{1}{2}}$ & $({\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}})$ \\
\hline
derivatives & $\partial$ & $1$ & $1$ & $(0,0)$ \\
\hline
\end{tabular}
\end{center}
\caption{The $\Delta=0$ letters of the ${\cal N}=5$ theory.}
\label{tb:osp-letter}
\end{table}
In contrast to the $\mathcal{N}=6$ case, there is no distinction between
bi-fundamental and anti-bi-fundamental matter fields.
We also lose a Cartan generator and, consequently,
the $\mathbf{4}$ and $\mathbf{\bar{4}}$ representations
of $SO(6)$ become the same representation of $SO(5)$.
It follows that, to obtain the index over the letters of the ${\cal N}=5$ theory,
we can simply set $y_2=1$ in the ${\cal N}=6$ result (\ref{lett-6a}) or (\ref{lett-6b}).
The result is
\begin{eqnarray}
f(x, y) \;=\; \frac{ x^\frac{1}{2}}{1-x^2} \left( \sqrt{y}+ \frac{1}{\sqrt{y}} \right) - \frac{x^{\frac{3}{2}}}{1-x^2} \left( \sqrt{y}+ \frac{1}{\sqrt{y}} \right)
\;=\; \frac{x^\frac{1}{2}}{1+x} \left( \sqrt{y}+ \frac{1}{\sqrt{y}} \right) \,.
\end{eqnarray}
\subsubsection*{Integration measure}
\paragraph{$SO(2N)$ :}
Any element $A \in SO(2N)$ can be block-diagonalized into the form
\begin{eqnarray}
A = \bigoplus_{i=1}^{N}
\begin{pmatrix}
\cos \alpha_i & -\sin \alpha_i \\
\sin \alpha_i & \cos \alpha_i
\end{pmatrix}
\;\;\;\;\; (|\alpha_i| \le \pi) \,.
\end{eqnarray}
In this basis,
the Haar measure on the $SO(2N)$ group manifold is (see appendix A)
\begin{eqnarray}
DA = \prod_{i=1}^N d\alpha_i \prod_{i<j} \sin^2\left(\frac{\alpha_i-\alpha_j}{2}\right)\sin^2\left(\frac{\alpha_i+\alpha_j}{2}\right)\,.
\end{eqnarray}
Throughout this subsection, we will suppress unimportant overall normalization constants.
In the large $N$ limit, we can introduce the eigenvalue distribution function
\begin{eqnarray}
\rho(\theta) = \sum_i \delta (\theta - \alpha_i).
\end{eqnarray}
Since the eigenvalues of $SO(2N)$ come in pairs $e^{\pm i\alpha_i}$,
we can choose $\alpha_i > 0$ without loss of generality and restrict the domain of $\rho(\theta)$ to $[0, \pi]$, so that
\begin{eqnarray}
\int_{0}^{\pi} \rho(\theta) d \theta = N.
\end{eqnarray}
Now, instead of integrating over $\alpha_i$'s we can integrate over the Fourier modes of $\rho$ :
\begin{eqnarray}
\rho(\theta) = \frac{1}{\pi} \left[ \frac{1}{2} \rho_0+ \sum_{n\ge1} \rho_n \cos(n\theta) \right]\,,
\;\;\;\;\;
\rho_n = 2 \int_0^\pi \rho(\theta) \cos(n\theta) d\theta \,.
\end{eqnarray}
The normalization of $\rho_n$ is chosen such that\footnote{The normalization here differs from that in (\ref{rhoch})
by a factor of $N$. This is to emphasize that the finite shift in (\ref{so2nf}) survives the large $N$ limit. }
\begin{eqnarray}
{\rm Tr}(A^n) = 2 \sum_i \cos (n\alpha_i) = 2 \int d\theta \rho(\theta) \cos (n \theta) = \rho_n \,.
\end{eqnarray}
We can rewrite the measure factor as
\begin{eqnarray}
DA &=& \prod_{i} d\alpha_i\exp \left( \sum_{i < j} \left( \log \left| \sin^2 \left(\frac{\alpha_i - \alpha_j}{2} \right) \right| + \log \left| \sin^2 \left(\frac{\alpha_i + \alpha_j}{2} \right) \right| \right) \right)
\nonumber \\
&=& \prod_{i} d\alpha_i\exp \left( -2 \sum_{n=1}^{\infty} \sum_{i < j} \left( \frac{\cos(n (\alpha_i - \alpha_j) )}{n} + \frac{\cos(n(\alpha_i+\alpha_j))}{n}\right) \right)
\\
&=& \prod_{i} d\alpha_i\exp \left( - \sum_{n=1}^{\infty}
\left\{ \sum_{i, j} \left( \frac{\cos(n (\alpha_i - \alpha_j) )}{n} + \frac{\cos(n(\alpha_i+\alpha_j))}{n}\right) - \sum_i \frac{\cos(2n\alpha_i)}{n} \right\}\right) \,.
\nonumber
\end{eqnarray}
In the large $N$ limit,
\begin{eqnarray}
&&\sum_{i, j} \left( \frac{\cos(n (\alpha_i - \alpha_j) )}{n} + \frac{\cos(n(\alpha_i+\alpha_j))}{n}\right)
\nonumber \\
&\rightarrow&
\int d\theta_1 d\theta_2 \rho(\theta_1) \rho(\theta_2) \left( \frac{\cos(n (\theta_1 - \theta_2) )}{n} + \frac{\cos(n(\theta_1 + \theta_2))}{n}\right)
\;=\; \frac{1}{2n} \rho_{n}^2 \,,
\\
&&\sum_i \frac{\cos(2n\alpha_i)}{n}
\; \rightarrow \;
\int d\theta \rho(\theta) \frac{\cos(2n\theta)}{n} = \frac{1}{2n} \rho_{2n} \,.
\end{eqnarray}
To summarize, the integration measure of $SO(2N)$ is
\begin{eqnarray}
DA = \prod_m d\rho_m \exp\left( - \sum_{n\; {\rm odd}} \frac{1}{2n} \rho_n^2
- \sum_{n\;{\rm even}} \frac{1}{2n} (\rho_n -1)^2\right) \,.
\label{so2nf}
\end{eqnarray}
This measure is consistent with the fact that
\begin{eqnarray}
\langle {\rm tr} A^{2k+1} \rangle = 0,
\;\;\;\;
\langle {\rm tr} A^{2k} \rangle = 1,
\end{eqnarray}
which holds exactly (without large $N$ approximation)
for $2k < N$.
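These exact moments can be verified by Monte Carlo over Haar-random $SO(2N)$ matrices. The sketch below is illustrative only (the QR-based Haar sampler is a standard construction, not part of the text) and checks $\langle {\rm tr}\, A^{2}\rangle = \langle {\rm tr}\, A^{4}\rangle = 1$ and $\langle {\rm tr}\, A^{3}\rangle = 0$ for $SO(10)$ within statistical error:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_so(n):
    """Haar-random SO(n) matrix via QR decomposition with sign-fixed R."""
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q = q * np.sign(np.diag(r))      # make the O(n) distribution Haar
    if np.linalg.det(q) < 0:         # restrict from O(n) to SO(n)
        q[:, 0] = -q[:, 0]
    return q

n, samples = 10, 4000                # SO(2N) with N = 5
powers = {2: [], 3: [], 4: []}
for _ in range(samples):
    a = haar_so(n)
    m = a.copy()
    for p in range(2, 5):
        m = m @ a
        powers[p].append(np.trace(m))

# <tr A^{2k}> = 1 and <tr A^{2k+1}> = 0 for 2k < N = 5
assert abs(np.mean(powers[2]) - 1) < 0.3
assert abs(np.mean(powers[3])) < 0.3
assert abs(np.mean(powers[4]) - 1) < 0.3
```

The tolerances are loose compared to the Monte Carlo standard error, so the check is robust against sampling fluctuations.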
\paragraph{$SO(2N+1)$ :}
The discussion proceeds in parallel with that of $SO(2N)$
with two important differences. First, the Haar measure on
the $SO(2N+1)$ group manifold is given by
\begin{eqnarray}
DA = \prod_{i=1}^N d\alpha_i \prod_{i<j} \sin^2\left(\frac{\alpha_i-\alpha_j}{2}\right)\sin^2\left(\frac{\alpha_i+\alpha_j}{2} \right) \prod_{i} \sin^2\left(\frac{\alpha_i}{2}\right)
\,.
\end{eqnarray}
The last factor $\prod_i \sin^2(\alpha_i/2)$ induces a linear term in $\rho_n$ for every (even and odd) $n$. However, this shift is precisely compensated
by another shift in the definition of $\rho_n$,
\begin{eqnarray}
\rho_n \equiv {\rm tr}(A^n) = 2 \sum_i \cos(n \alpha_i) +1 \,.
\end{eqnarray}
As a result, the final form of the measure of $SO(2N+1)$
in the large $N$ limit is identical to that of $SO(2N)$ in (\ref{so2nf}).
\paragraph{$Sp(2N)$ :}
The Haar measure on the $Sp(2N)$ group manifold is given by
\begin{eqnarray}
DB = \prod_{i=1}^N d\beta_i \prod_{i<j} \sin^2\left(\frac{\beta_i-\beta_j}{2}\right)\sin^2\left(\frac{\beta_i+\beta_j}{2} \right) \prod_{i} \sin^2\beta_i
\,.
\end{eqnarray}
Compared to the $SO(2N)$ case, the $\prod_i \sin^2\beta_i$ term
over-compensates the shifts for even $n$, resulting in a net shift
of $+1$, as opposed to $-1$ for $SO(2N)$. In other words, with the definition
\begin{eqnarray}
\chi_n \equiv {\rm tr}(B^n) = 2\sum_i \cos(n \beta_i) \, ,
\end{eqnarray}
the integration measure is given by
\begin{eqnarray}
DB = \prod_m d\chi_m \exp\left( - \sum_{n\; {\rm odd}} \frac{1}{2n} \chi_n^2
- \sum_{n\;{\rm even}} \frac{1}{2n} (\chi_n +1)^2\right) \,.
\label{sp2nf}
\end{eqnarray}
\paragraph{Result}
In terms of the shorthand notation, $f_n \equiv f(x^n,y^n)$,
the index to be computed is
\begin{eqnarray}
{\cal I} = \int DA DB \exp\left( \sum_{n=1}^{\infty} \frac{1}{n} f_n \rho_n \chi_n
\right) \,,
\end{eqnarray}
where the integration measures are given in (\ref{so2nf}) and (\ref{sp2nf}).
Performing the Gaussian integral by
diagonalization ($\rho^{\pm}_n = \rho_n \pm \chi_n$), we obtain
\begin{eqnarray}
{\cal I} = \prod_{n=1}^{\infty} \frac{1}{\sqrt{1-f_n^2}}
\prod_{k=1}^{\infty} \exp\left(-\frac{f_{2k}}{2k(1+f_{2k})} \right) \,.
\end{eqnarray}
The overall normalization is fixed by requiring that ${\cal I}=1$ at $x=0$.
Using the relations,
\begin{eqnarray}
\frac{1}{1-f_n^2} = \frac{(1+x^n)^2}{(1-(xy)^n)(1-(x/y)^n)}\,,
\;\;\;
\frac{f_{2k}}{1+f_{2k}} = \frac{(xy)^k+(x/y)^k}{(1+(xy)^k)(1+(x/y)^k)} \,,
\end{eqnarray}
we can exponentiate the $\prod (1-f_n^2)^{-1/2}$ factor and rewrite the result as
\begin{eqnarray}
{\cal I} &=& \exp \left( \sum_{k=1}^{\infty} \frac{1}{k} I_{\rm sp} (x^k,y^k) \right) \,,
\nonumber \\
I_{\rm sp} &=& \frac{x}{1-x^2} + \frac{1}{2} \left[\frac{xy}{1-xy}+\frac{x/y}{1-x/y}-\frac{xy+x/y}{(1+xy)(1+x/y)} \right]
\nonumber \\
&=& \frac{1}{1-x^2} \left[ (1-x/y)\frac{(xy)^2}{1-(xy)^2}
+(1-xy)\frac{(x/y)^2}{1-(x/y)^2} +x +x^2\right] \,.
\label{ISFT}
\end{eqnarray}
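The equality of the two expressions for $I_{\rm sp}$ in (\ref{ISFT}) is a rational-function identity, which can be checked mechanically (an illustrative \texttt{sympy} sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')

# first form of I_sp in (ISFT)
form1 = (x/(1 - x**2)
         + sp.Rational(1, 2)*(x*y/(1 - x*y) + (x/y)/(1 - x/y)
                              - (x*y + x/y)/((1 + x*y)*(1 + x/y))))
# second form of I_sp in (ISFT)
form2 = ((1 - x/y)*(x*y)**2/(1 - (x*y)**2)
         + (1 - x*y)*(x/y)**2/(1 - (x/y)**2) + x + x**2)/(1 - x**2)

# the difference cancels identically as a rational function of x, y
assert sp.cancel(sp.together(form1 - form2)) == 0
```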
\subsection{Gravity}
The gravity computation can be done by taking the states of $AdS_4 \times S^7$ and keeping the states invariant under the orbifold action.
The spectrum of $AdS_4 \times S^7$ was originally obtained in \cite{Gunaydin:1985tc,Biran:1983iy} and recently discussed in the context of the superconformal index in \cite{Bhattacharya:2008zy}.
The $\mathbb{Z}_k$ orbifolding was studied in \cite{Bhattacharya:2008bja}.
In the basis where the supercharges transform vectorially under the $SO(8)$,
the $\mathbb{Z}_k$ action is a rotation by $4\pi/k$ along the $SO(2)$ part
of the decomposition $SO(6)\times SO(2) \subset SO(8)$.
If we denote the generator of the $SO(2)$ by $J_3$, then
the invariant states should satisfy
\begin{eqnarray}
\label{zk-orbi}
\exp\left(\frac{4\pi i}{k} J_3 \right) | \psi \rangle = | \psi \rangle \,,
\end{eqnarray}
which means $J_3 | \psi \rangle = 0$ for large $k$.
The $\mathbb{D}_k$ group is a subgroup of the $SO(3)$
in the decomposition $SO(5)\times SO(3) \subset SO(8)$.
If we denote the $SO(3)$ generators by $J_{1,2,3}$,
the generators of $\mathbb{D}_k$ are given by
\begin{eqnarray}
\exp\left(\frac{2\pi i}{k} J_3 \right), \;\;\;
\exp\left( \pi i J_2 \right) \,.
\end{eqnarray}
For large $k$, the first generator again requires that
$J_3| \psi \rangle = 0$. In the standard $| \ell,m\rangle$ notation,
all $|\ell \in \mathbb{Z} , m=0 \rangle$ states satisfy this condition.
They are also eigenstates of the second generator
with eigenvalues $(-1)^\ell$. Therefore, the fully invariant states are
$|\ell \in 2 \mathbb{Z} , m=0 \rangle$.
In summary,
to compute the index over single gravitons, we need to
decompose the $SO(8)$ graviton spectrum into irreducible representations (irreps) of $SO(5)\times SO(3)$
and keep only those $SO(5)$ representations tensored with $|\ell\in 2\mathbb{Z},m=0\rangle$
states of $SO(3)$.
In \cite{Bhattacharya:2008bja},
the projection from $SO(8)$ to $SO(6)$ was performed
in two equivalent but slightly different methods.
In the ``indirect'' method, one begins with the index of the unorbifolded
theory computed in \cite{Bhattacharya:2008zy}
and projects out the $J_3$-non-invariant states by a contour integral.
In the ``direct'' method, one takes the graviton spectrum
of the unorbifolded theory, decomposes them under $SO(8)\rightarrow SO(6)\times SO(2)$ and sums over the $J_3$-invariant subspace of
the spectrum.
In the case at hand, the indirect method does not seem to be available.
On the other hand, the direct method can be implemented without much difficulty.
We only have to follow the reduction from Table 1 of Ref.~\cite{Bhattacharya:2008zy}
to Table 3 of Ref.~\cite{Bhattacharya:2008bja}, while taking into account
the extra condition discussed above.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
range of $n$ & $\epsilon_0$ & $j$ & $SO(8)$ \\
\hline
$n\geq1$ &$\frac{n}{2}$ &$0$ &($\frac{n}{2},\frac{n}{2},\frac{n}{2},\frac{-n}{2}$) \\
$n\geq1$ &$\frac{n+1}{2}$&$\frac{1}{2}$ &($\frac{n}{2},\frac{n}{2},\frac{n}{2},\frac{-(n-2)}{2}$) \\
$n\geq2$ &$\frac{n+2}{2}$&$1$ &($\frac{n}{2},\frac{n}{2},\frac{(n-2)}{2},\frac{-(n-2)}{2}$) \\
$n\geq2$ &$\frac{n+3}{2}$&$\frac{3}{2}$&($\frac{n}{2},\frac{(n-2)}{2},\frac{(n-2)}{2},\frac{-(n-2)}{2}$) \\
\hline
\end{tabular}
\end{center}
\caption{The $\Delta=0$ subset of the super-graviton spectrum in AdS$_4\times S^7$.}
\label{tb:gravspec}
\end{table}
We reproduce the $\Delta=0$ subset of Table 1 of Ref.~\cite{Bhattacharya:2008zy} in our Table \ref{tb:gravspec}. Let us first focus on the simplest tower, at the top of the table. The irreps with half-integer $h_i$ are clearly irrelevant.
We need to decompose the $(n,n,n,-n)$ irreps of $SO(8)$
into irreps of $SO(5)\times SO(3)$ and keep only the
$SO(5)$ states tensored with the $|\ell\in 2\mathbb{Z},m=0\rangle$ states.
Using the character formulas in appendix \ref{char-so}, we can show that
\begin{eqnarray}
\label{chpr}
(n,n,n,-n)_{SO(8)} = \bigoplus_{\ell=0}^{n} (n,\ell)_{SO(5)} \otimes (\ell)_{SO(3)}
\,.
\end{eqnarray}
So, after the projection, the index receives contributions from
$(n,2k)$ reps of $SO(5)$ for each integer $n\ge 1$ and $0\le k \le [n/2]$.
We can decompose the other three towers in Table \ref{tb:gravspec} in a similar manner. The result is as follows.
\begin{enumerate}
\item
$(n,n,n,-n+1)$ decomposes into four ``families'',
three of which have $\Delta=0$:
\[
{\rm (a)}\;\;\;
\bigoplus_{\ell=0}^{n-1} (n,\ell+1) \otimes (\ell),
\;\;\;\;\;
{\rm (b)}\;\;\;
\bigoplus_{\ell=1}^{n} (n,\ell) \otimes (\ell),
\;\;\;\;\;
{\rm (c)}\;\;\;
\bigoplus_{\ell=1}^{n} (n,\ell-1) \otimes (\ell) .
\]
\item
$(n,n,n-1,-n+1)$ decomposes into six families,
three of which have $\Delta=0$:
\[
{\rm (a)}\;\;\;
\bigoplus_{\ell=0}^{n-1} (n,\ell+1) \otimes (\ell),
\;\;\;\;\;
{\rm (b)}\;\;\;
\bigoplus_{\ell=1}^{n-1} (n,\ell) \otimes (\ell),
\;\;\;\;\;
{\rm (c)}\;\;\;
\bigoplus_{\ell=1}^{n} (n,\ell-1) \otimes (\ell) .
\]
\item
$(n,n-1,n-1,-n+1)$ decomposes into four families,
one of which has $\Delta=0$:
\[
\bigoplus_{\ell=0}^{n-1} (n,\ell) \otimes (\ell).
\]
\end{enumerate}
\noindent
Now, it is easy to sum over all $SO(5)$ reps with even $\ell$.
Introduce the notation
\begin{eqnarray}
Q_1 &\equiv& \sum_{k=0}^{\infty} \chi_{SO(3)}^{(2k)}(y) \sum_{m=0}^{\infty} x^{m+2k} \; = \; \frac{1}{(1-x)(1-y)}\left[\frac{1}{1-x^2/y^2}-\frac{y}{1-x^2y^2}\right] \,, \\
Q_2 &\equiv& \sum_{k=0}^{\infty} \chi_{SO(3)}^{(2k+1)}(y) \sum_{m=0}^{\infty} x^{m+2k}\; = \; \frac{1}{(1-x)(1-y)}\left[\frac{y^{-1}}{1-x^2/y^2}-\frac{y^2}{1-x^2y^2}\right] \,.
\end{eqnarray}
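The closed forms of $Q_{1,2}$ follow from geometric sums. They can be verified against the truncated double sums, assuming the character convention $\chi^{(\ell)}_{SO(3)}(y)=\sum_{j=-\ell}^{\ell}y^{j}$ (this convention is our assumption, chosen to match the closed forms; an illustrative \texttt{sympy} sketch):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Rational(2, 3)   # generic rational test value of the SO(3) fugacity

def chi(l):
    # SO(3) character chi^{(l)}(y) = sum_{j=-l..l} y^j (assumed convention)
    return sum(y**j for j in range(-l, l + 1))

Q1 = 1/((1 - x)*(1 - y))*(1/(1 - x**2/y**2) - y/(1 - x**2*y**2))
Q2 = 1/((1 - x)*(1 - y))*(1/y/(1 - x**2/y**2) - y**2/(1 - x**2*y**2))

order = 9
Q1_trunc = sum(chi(2*k)*x**(m + 2*k)
               for k in range(order) for m in range(max(order - 2*k, 0)))
Q2_trunc = sum(chi(2*k + 1)*x**(m + 2*k)
               for k in range(order) for m in range(max(order - 2*k, 0)))

assert sp.expand(sp.series(Q1, x, 0, order).removeO() - Q1_trunc) == 0
assert sp.expand(sp.series(Q2, x, 0, order).removeO() - Q2_trunc) == 0
```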
Then, the partial sums
\begin{eqnarray}
S_j = \sum x^{\epsilon_0+j} \chi_{SO(3)}^{(h)}(y) \,,
\end{eqnarray}
can be written as
\begin{eqnarray}
& S_0 = \displaystyle{Q_1-1}\,,&
\nonumber \\
S_{1/2}^{(a)} = x^2 Q_2\,, \;\;\;\;\;
&
S_{1/2}^{(b)} = \displaystyle{x \left(Q_1-\frac{1}{1-x} \right)}\,,
& \;\;\;\;\;
S_{1/2}^{(c)} = x^3 Q_2\,,
\nonumber \\
S_{1}^{(a)} = x^3 Q_2\,, \;\;\;\;\;
&
S_{1}^{(b)} = \displaystyle{ x^3 \left(Q_1-\frac{1}{1-x} \right)}\,,
& \;\;\;\;\;
S_{1}^{(c)} = x^4 Q_2\,,
\nonumber \\
& S_{3/2} = \displaystyle{ x^4 Q_1} \,.&
\end{eqnarray}
To sum up,
the index evaluated over all single gravitons is
\begin{eqnarray}
I_{\rm sp} &=& \frac{1}{1-x^2}\sum_j (-1)^{2j} S_j \nonumber \\
&=& \frac{1}{1-x^2} \left[ (1-x/y)\frac{(xy)^2}{1-(xy)^2}
+(1-xy)\frac{(x/y)^2}{1-(x/y)^2} +x +x^2\right] \,,
\label{ISGR}
\end{eqnarray}
in perfect agreement with the field theory result (\ref{ISFT}).
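Assembling the partial sums with alternating signs indeed reproduces (\ref{ISGR}); a \texttt{sympy} sketch of this bookkeeping (illustrative only, with $Q_{1,2}$ as defined above):

```python
import sympy as sp

x, y = sp.symbols('x y')

Q1 = 1/((1 - x)*(1 - y))*(1/(1 - x**2/y**2) - y/(1 - x**2*y**2))
Q2 = 1/((1 - x)*(1 - y))*(1/y/(1 - x**2/y**2) - y**2/(1 - x**2*y**2))

S0 = Q1 - 1
S_half = x**2*Q2 + x*(Q1 - 1/(1 - x)) + x**3*Q2    # (a) + (b) + (c)
S_one = x**3*Q2 + x**3*(Q1 - 1/(1 - x)) + x**4*Q2  # (a) + (b) + (c)
S_threehalf = x**4*Q1

# I_sp = (1/(1-x^2)) sum_j (-1)^{2j} S_j
I_sp = (S0 - S_half + S_one - S_threehalf)/(1 - x**2)

target = ((1 - x/y)*(x*y)**2/(1 - (x*y)**2)
          + (1 - x*y)*(x/y)**2/(1 - (x/y)**2) + x + x**2)/(1 - x**2)

assert sp.cancel(sp.together(I_sp - target)) == 0
```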
\section{Indices for $\mathcal{N}=4$ theories}
\subsection{$U$-type}
\paragraph{Field Theory}
Since the $R$-symmetry group is $SO(4)$, we define the index to be
\begin{eqnarray}
{\cal I}(x, y) = {\rm Tr} \left[ (-1)^F x^{\epsilon_0 + j} y^{h_2} \right]\,,
\end{eqnarray}
where $h_2$ is the second Cartan charge of $SO(4)$.
We can read off the letters contributing to the single-letter partition function $f$ from the matter content in Fig.~\ref{quiver}(a).
The result is summarized in the following table:
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
type & operators & $\epsilon_0$ & $j$ & $SO(4)$ \\
\hline
bosons in hyper & $q_+$, $\bar{q}_+$ & ${\textstyle \frac{1}{2}} $ & $0$ & $({\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}})$ \\
fermions in hyper & $\psi_+$, $\bar{\psi}_+$ & $1$ & ${\textstyle \frac{1}{2}}$ & $({\textstyle \frac{1}{2}}, -{\textstyle \frac{1}{2}})$ \\
\hline
bosons in twisted hyper & $\tilde{q}_+$, $\bar{\tilde{q}}_+$ & ${\textstyle \frac{1}{2}}$ & $0$ & $({\textstyle \frac{1}{2}}, -{\textstyle \frac{1}{2}})$ \\
fermions in twisted hyper& $\tilde{\psi}_+$,$\bar{\tilde{\psi}}_+$ & $1$ & ${\textstyle \frac{1}{2}}$ & $({\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}})$ \\
\hline
derivatives & $\partial$ & $1$ & $1$ & $(0,0)$ \\
\hline
\end{tabular}
\end{center}
Then, the single letter partition function is given by
\begin{eqnarray}
{\rm hyper} &:& f(x, y) = \frac{x^{1/2} y^{1/2}}{1-x^2} - \frac{x^{3/2} y^{-1/2}}{1-x^2} = \frac{\sqrt{xy}}{1-x^2} \left(1 - \frac{x}{y} \right) \,,
\\
{\rm twisted}\;\; {\rm hyper} &:& \tilde{f}(x, y) = \frac{x^{1/2} y^{-1/2}}{1-x^2} - \frac{x^{3/2} y^{1/2}}{1-x^2} = \frac{\sqrt{x/y}}{1-x^2} \left(1 - xy \right) \,.
\end{eqnarray}
To calculate the index, we have to evaluate the following matrix integral.
\begin{eqnarray}
{\cal I}^U_{m} &=& \int \prod_{i=1}^{2m} DU_i \exp \left( \sum_{n=1}^{\infty} \sum_{i=1}^m \frac{1}{n} F_{n,i} \right) \,,
\nonumber \\
F_{n,i} &=& f_n \left[ {\rm Tr} ( U_{2i-1}^n ) {\rm Tr}( U_{2i}^{-n} ) + {\rm Tr} ( U_{2i-1}^{-n} ) {\rm Tr}( U_{2i}^{n} ) \right]
\nonumber \\
&& + \tilde{f}_n \left[ {\rm Tr} ( U_{2i}^n ) {\rm Tr}( U_{2i+1}^{-n}) + {\rm Tr} ( U_{2i}^{-n} ) {\rm Tr}( U_{2i+1}^{n}) \right] \,.
\end{eqnarray}
Here, we used the shorthand notation $f_n = f(x^n, y^n)$ and $\tilde{f}_n = \tilde{f}(x^n, y^n)$, and $U_{2m+1}$ is identified with $U_1$.
By changing the variables to $\rho_{i, n} = \frac{1}{N} {\rm Tr}(U_{2i-1}^n)$ and $\chi_{i, n} = \frac{1}{N} {\rm Tr}(U_{2i}^n)$ and using
the measure of the matrix integral, we can rewrite the integral as
\begin{eqnarray}
{\cal I}^U_{m} &=& \int \prod_{i=1}^m \prod_{n=1}^{\infty} d\rho_{i, n} d\chi_{i, n}
\exp \left( -N^2 \sum_{i, n} \frac{1}{n} \left[ |\rho_{i, n}|^2 + |\chi_{i, n}|^2 - G_{n,i} \right] \right) \,,
\nonumber \\
G_{n,i} &=& f_n \rho_{i, n} \chi_{i, -n}
+ f_n \rho_{i, -n} \chi_{i, n}
+ \tilde{f}_n \chi_{i, n} \rho_{i+1, -n}
+ \tilde{f}_n \chi_{i, -n} \rho_{i+1, n} \,.
\end{eqnarray}
As in the ${\cal N}=6$ case, it is convenient to write the integral in a matrix form,
\begin{eqnarray}
{\cal I}^U_{m} = \int \prod_{i=1}^m \prod_{n=1}^{\infty} d\rho_{i, n} d\chi_{i, n} \exp \left( -\frac{N^2}{2} \sum_{n=1}^{\infty} \frac{1}{n} \left( C_{m,n}^T M_{m,n} C_{m,n} \right) \right) \,.
\end{eqnarray}
Here, $C_{m,n}^T = (\rho_{1, n} ~ \chi_{1, n} ~ \rho_{1, -n} ~ \chi_{1, -n} ~\cdots~ \rho_{m,-n}~\chi_{m, -n})$, and $M_{m,n}$ is a positive definite $4m \times 4m$ matrix which coincides with (\ref{m144}) when $m=1$.
Performing the integral, we find
\begin{eqnarray}
{\cal I}^U_{m} = \prod_{n=1}^{\infty} \frac{(1-x^{2n})^{2m}}{(1-x^n y^n)^m(1-x^n/y^n)^m(1-x^{n m})^2} \,.
\end{eqnarray}
The corresponding single-particle index is
\begin{eqnarray}
\label{am-ftsp}
I^U_{{\rm sp},{m}} (x,y) = m\left( \frac{1}{1-xy} + \frac{1}{1-x/y} -\frac{2}{1-x^2}\right) + \frac{2 x^{m}}{1-x^{m}} \,.
\end{eqnarray}
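As a quick consistency check, at $m=1$ the quiver is the $\mathcal{N}=6$ ABJM theory, and (\ref{am-ftsp}) indeed coincides with the $\mathcal{N}=6$ single-particle index restricted to $y_1=y$, $y_2=1$ (an illustrative \texttt{sympy} sketch):

```python
import sympy as sp

x, y = sp.symbols('x y')

def I_sp_N6(x, y1, y2):
    # N=6 single-particle index
    return (x/(y1 - x) + 1/(1 - x*y1) + x/(y2 - x) + 1/(1 - x*y2)
            - 2/(1 - x**2))

def I_sp_U(m):
    # field theory result (am-ftsp) for the U-type quiver
    return (m*(1/(1 - x*y) + 1/(1 - x/y) - 2/(1 - x**2))
            + 2*x**m/(1 - x**m))

# m = 1: the N=6 theory with SO(6) fugacities restricted to y1 = y, y2 = 1
assert sp.cancel(sp.together(I_sp_U(1) - I_sp_N6(x, y, 1))) == 0
```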
\paragraph{Gravity}
The gravity computation in the untwisted sector turns out to be almost trivial.
The orbifolding due to the Chern-Simons level (in the $k\rightarrow\infty$ limit) has the same effect as in the passage from $\mathcal{N}=8$ to $\mathcal{N}=6$. Thus, we can begin with the $\mathcal{N}=6$ single-particle index
\begin{eqnarray}
I_{\rm sp}^{\mathcal{N}=6}(x,y_1,y_2) = \frac{x}{y_1-x}+\frac{1}{1-x y_1}+\frac{x}{y_2-x}+\frac{1}{1-x y_2}- \frac{2}{1-x^2} \,.
\end{eqnarray}
The other $\mathbb{Z}_{m}$ orbifolding acts only on $y_2$. So, we can simply take
\begin{eqnarray}
I_{{\rm sp}, m}^U(x,y) \;=\; \frac{1}{m}\sum_{j=1}^m I_{\rm sp}^{\mathcal{N}=6}(x,y,\omega_m^j)
\;=\;
\frac{1}{1-xy} + \frac{1}{1-x/y} -\frac{2}{1-x^2} + \frac{2 x^{m}}{1-x^{m}} \,.
\label{am-gravsp}
\end{eqnarray}
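The $\mathbb{Z}_m$ average in (\ref{am-gravsp}) can be checked by summing over explicit roots of unity at generic numerical fugacities (an illustrative sketch):

```python
import cmath

def I_sp_N6(x, y1, y2):
    # N=6 single-particle index
    return (x/(y1 - x) + 1/(1 - x*y1) + x/(y2 - x) + 1/(1 - x*y2)
            - 2/(1 - x**2))

def averaged(x, y, m):
    # (1/m) sum_j I_sp^{N=6}(x, y, omega_m^j) over the m-th roots of unity
    return sum(I_sp_N6(x, y, cmath.exp(2j*cmath.pi*k/m))
               for k in range(m))/m

def closed(x, y, m):
    # claimed closed form (am-gravsp)
    return 1/(1 - x*y) + 1/(1 - x/y) - 2/(1 - x**2) + 2*x**m/(1 - x**m)

x0, y0 = 0.13, 0.71
for m in (1, 2, 3, 5):
    assert abs(averaged(x0, y0, m) - closed(x0, y0, m)) < 1e-12
```

The average keeps exactly the powers $y_2^n$ with $n \equiv 0 \bmod m$, which is the origin of the $2x^m/(1-x^m)$ term.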
Comparing (\ref{am-gravsp}) and (\ref{am-ftsp}),
we find a mismatch
\begin{eqnarray}
\Delta I^U_{{\rm sp},m} &=& (m-1) \left( \frac{1}{1-xy} + \frac{1}{1-x/y} -\frac{2}{1-x^2}\right)
\nonumber \\
&=&
(m-1)
\frac{1}{1-x^2}\left((1-x/y)\frac{xy}{1-xy} + (1-xy)\frac{x/y}{1-x/y}\right)
\,.
\label{mis-u}
\end{eqnarray}
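The second line of the mismatch is an algebraic rewriting of the first. Dropping the overall factor $(m-1)$, the identity can be verified exactly with rational arithmetic:

```python
from fractions import Fraction as F

def lhs(x, y):
    # first line of (mis-u), with the (m-1) factor stripped
    return 1/(1 - x*y) + 1/(1 - x/y) - 2/(1 - x**2)

def rhs(x, y):
    # second line of (mis-u), with the (m-1) factor stripped
    return (1/(1 - x**2)) * ((1 - x/y)*x*y/(1 - x*y)
                             + (1 - x*y)*(x/y)/(1 - x/y))

for x, y in [(F(1, 3), F(2, 5)), (F(1, 7), F(5, 2))]:
    assert lhs(x, y) == rhs(x, y)   # exact equality of rational functions
```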
We will now argue that the twisted sector contributions can account for
the mismatch and lead to perfect agreement of the index between field theory and gravity.
\paragraph{Twisted sector $-$ field theory}
Suppose we have operators $O(i)$ $(i=1,\cdots,m)$ which form
a regular representation of $\mathbb{Z}_m$. Then, the $(m-1)$ linearly independent
operators $O(i+1)-O(i)$, which are not invariant under $\mathbb{Z}_m$, must belong
to the twisted sector.
The bosonic single-trace operators that contribute to the index are given by
\begin{eqnarray}
O_n^B(i) \; = \;
{\rm Tr} \left(q^i_+ \bar{q}^i_+\right)^{n} \,,
\;\;\;\;\;
\widetilde{O}^B_n(i) \; = \;
{\rm Tr} \left(\tilde{q}^i_+ \bar{\tilde{q}}^i_+ \right)^n \,,
\label{twi-boson}
\end{eqnarray}
where the $q^i$ are hypers in $(N,\bar{N})$ of $U(N)_{2i-1}\times U(N)_{2i}$,
and the $\tilde{q}^i$ are twisted hypers in $(N,\bar{N})$ of $U(N)_{2i} \times U(N)_{2i+1}$. The bars denote hermitian conjugation.
As explained earlier, the subscripts $(\pm)$ denote the doublet indices under the $SU(2)_L\times SU(2)_R$ $R$-symmetry.
The supersymmetry transformation rule (\ref{n4susy}) shows that
these operators are annihilated by $Q_{++}$ and $S_{--} = (Q_{++})^\dagger$, so they contribute to the index.
Their quantum numbers are $(\epsilon_0,j,h_1,h_2)= (n, 0, n, \pm n)$,
so $\Delta = \epsilon_0-j-h_1=0$ as expected.
To obtain the fermionic operators, we can take the super-descendants of the bosonic operators by acting with supercharges commuting with $Q_{++}$. We find
\begin{eqnarray}
O^F_n(i) &=& \left[ Q_{-+} , O^B_n(i) \right] \;=\;
{\rm Tr} \left[ (\psi^i_+ \bar{q}^i_+ + q^i_+ \bar{\psi}^i_+ )(q^i_+\bar{q}_+^i)^{n-1} \right] \,,
\nonumber \\
\widetilde{O}^F_n (i) &=& \left[ Q_{+-} , \widetilde{O}^B_n(i) \right] =
{\rm Tr} \left[ (\tilde{\psi}^i_+ \bar{\tilde{q}}^i_+ + \tilde{q}^i_+ \bar{\tilde{\psi}}^i_+)(\tilde{q}^i_+ \bar{\tilde{q}}^i_+ )^{n-1}\right] \,,
\label{twi-ferm}
\end{eqnarray}
whose quantum numbers are $(\epsilon_0,j,h_1,h_2)=(n+1/2,1/2,n,\pm (n-1))$ such that $\Delta=0$. The superconformal algebra,
\begin{eqnarray}
\{ Q_{++} , Q_{\pm \mp} \} = 0,
\;\;\;
\{ S_{--}, Q_{-+} \} \propto J_{--},
\;\;\;
\{ S_{--}, Q_{+-} \} \propto \tilde{J}_{--},
\end{eqnarray}
where $J$ and $\tilde{J}$ are generators of $SU(2)_L\times SU(2)_R$,
shows that these fermionic operators are also annihilated
by $Q_{++}$ and $S_{--}$ and contribute to the index.
Taking into account the bosonic descendants (derivatives), we obtain the index summed over the $\mathbb{Z}_{m}$ non-invariant bosonic and fermionic operators,
\begin{eqnarray}
I^U_{{\rm sp},m} {\rm (twisted)} = (m-1) \times \frac{1}{1-x^2}\left[(1-x/y)\frac{xy}{1-xy} + (1-xy)\frac{x/y}{1-x/y}\right] \,,
\label{twi-u}
\end{eqnarray}
which agrees precisely with the mismatch (\ref{mis-u}).
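The closed form follows from geometric sums over the two towers: each $O^B_n$ contributes $x^n y^{\pm n}$, each fermionic descendant contributes $-x^{n+1}y^{\pm(n-1)}$, and derivatives dress everything with $1/(1-x^2)$. These sums can be checked numerically (illustrative only):

```python
def twisted_sum(x, y, nmax=400):
    # sum over O_n^B, O_n^F (weights x^n y^n, -x^{n+1} y^{n-1}) and their
    # tilde counterparts (y -> 1/y), dressed with derivatives 1/(1-x^2)
    s = sum((x*y)**n * (1 - x/y) + (x/y)**n * (1 - x*y)
            for n in range(1, nmax + 1))
    return s / (1 - x**2)

def twisted_closed(x, y):
    # closed form appearing in (twi-u), per value of i (factor m-1 stripped)
    return ((1 - x/y)*x*y/(1 - x*y) + (1 - x*y)*(x/y)/(1 - x/y)) / (1 - x**2)

assert abs(twisted_sum(0.3, 1.5) - twisted_closed(0.3, 1.5)) < 1e-12
```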
It is rather remarkable that (\ref{twi-boson}) and (\ref{twi-ferm}) exhaust all twisted sector contributions to the index, as
there are many more operators which apparently satisfy $\Delta=0$.
However, all such operators connecting three or more nodes
of the quiver can be shown to be $(Q_{++})$-exact
by using the supersymmetry transformation rule (\ref{n4susy}).
For instance, repeated use of (\ref{n4susy}) shows that
\begin{eqnarray}
&&O(i) = {\rm Tr}( q^i_+ \bar{q}^i_+ q^{i}_+ \tilde{q}^{i}_+ \bar{\tilde{q}}^i_+ \bar{q}^i_+ ) \,,
\nonumber \\
&&O(i+1) - O(i) = [ Q_{++}, {\rm Tr}( q^{i+1}_+ \bar{q}^{i+1}_+ q^{i+1}_+ \bar{\psi}^{i+1}_+ +
q^{i+1}_+ \bar{q}^{i+1}_+ \bar{\tilde{\psi}}^{i}_+ \tilde{q}^i_+ + \bar{\tilde{\psi}}_+^i \bar{q}_+^i q_+^i \tilde{q}_+^i)
] \,.
\end{eqnarray}
In the ${\cal N}=2$ superfield language adopted in \cite{Benna,Tera},
this is a consequence of the F-term equivalence relations.
\paragraph{Twisted sector $-$ gravity}
The field theory computation above implies that
there should be corresponding twisted sector states on the gravity side.
In fact, the orbifold has two copies of $S^2 \subset \mathbb{C} \mathbb{P}^3 \subset S^7$ as the fixed loci, one at $(z_1, z_2,0,0)$ and the other at $(0,0,z_3,z_4)$, as the circle fiber of the Hopf fibration of $S^3$ is removed by the $\mathbb{Z}_{mk}$ orbifold action in the large $k$ limit.
Therefore, in type IIA description, there should exist chiral primary states in the twisted sector localized at these fixed loci.
Note that the splitting of twisted sector states into
hypers and twisted hypers as in (\ref{twi-boson}) and (\ref{twi-ferm})
matches nicely with two disjoint fixed loci of the orbifold geometry.
To understand the nature of the twisted sector states, let us examine
the geometry near the fixed loci. We begin with writing the metric of
$S^7$ as
\begin{eqnarray} \label{s7metric}
ds^2 &=& d\alpha^2 +\sin^2\alpha d\Omega_1^2 + \cos^2\alpha d\Omega_2^2
\nonumber \\
&=& d\alpha^2 + \textstyle{\frac{1}{4}}\sin^2\alpha \left[ d\theta_1^2+ \sin^2\theta_1 d\phi_1^2 + (d\psi_1 +\cos\theta_1 d\phi_1)^2 \right]
\nonumber \\
&& \;\;\;\;\;\;\;\;
+ \textstyle{\frac{1}{4}}\cos^2\alpha \left[ d\theta_2^2+ \sin^2\theta_2 d\phi_2^2 + (d\psi_2 +\cos\theta_2 d\phi_2)^2 \right] \,.
\end{eqnarray}
The orbifold action can be taken to be
\begin{eqnarray}
(\psi_1, \psi_2) \sim (\psi_1 + 2\pi/m, \psi_2) \sim (\psi_1 + 2\pi/mk, \psi_2+ 2\pi/mk) \,.
\end{eqnarray}
Near $\alpha = 0$, it is convenient to take the ``fundamental domain'' of
the $(\psi_1, \psi_2)$ torus as follows:
\begin{eqnarray}
\psi_1 = \psi + \beta , \;\;\; \psi_2 = \psi , \;\;\;
0 < \psi < 2\pi/mk, \;\;\; 0 < \beta < 2\pi /m .
\end{eqnarray}
Then the metric can be rewritten as
\begin{eqnarray}
ds^2
&=& d\alpha^2 + \textstyle{\frac{1}{4}}\sin^2\alpha \left( d\theta_1^2+ \sin^2\theta_1 d\phi_1^2
\right)
+ \textstyle{\frac{1}{4}}\cos^2\alpha \left( d\theta_2^2+ \sin^2\theta_2 d\phi_2^2 \right)
\nonumber \\
&&+\textstyle{\frac{1}{4}} \cos^2\alpha\sin^2\alpha(d\beta+\cos\theta_1d\phi_1 - \cos\theta_2 d\phi_2)^2
\nonumber \\
&& + \textstyle{\frac{1}{4}}\left[d\psi+ \sin^2\alpha (d\beta+ \cos\theta_1d\phi_1) + \cos^2\alpha \cos\theta_2 d\phi_2\right]^2
\nonumber \\
&\approx& d\alpha^2+ \textstyle{\frac{1}{4}} \alpha^2 \left[ d\theta_1^2+ \sin^2\theta_1 d\phi_1^2 +(d\beta+\cos\theta_1d\phi_1 - \cos\theta_2 d\phi_2)^2 \right]
\nonumber \\
&&
+ \textstyle{\frac{1}{4}} \left( d\theta_2^2+ \sin^2\theta_2 d\phi_2^2 \right)
+ \textstyle{\frac{1}{4}}\left[d\psi+ \cdots \right]^2 \,.
\label{orbi-met}
\end{eqnarray}
The angle $\psi$ is the coordinate of the eleventh (M-theory) circle. So, up to an overall warping,
the type IIA metric near the singularity looks like a non-trivial fibration of the $A_{m-1}$ singularity over $S^2$.
A direct string theoretic analysis of the twisted sector states would be a formidable task. Instead, we could follow the analysis of \cite{Gukov:1998kk}
where blow-up of the orbifold was used to obtain the spectrum
of the twisted sector states. There are $(m-1)$ normalizable harmonic two-forms
$\omega_i$ in the blown-up $A_{m-1}$ singularity.
As in \cite{Gukov:1998kk}, the candidate for chiral primary states
in the twisted sector comes from the harmonic decomposition of the NSNS $B$-field into
$(m-1)$ scalars by
\begin{eqnarray}
B = \sum_{i=1}^{m-1} \phi_i \,\omega_i \,.
\end{eqnarray}
The spherical harmonics on either $S^2$
have integer ``spin''-$n$ under $SU(2)_L$ or $SU(2)_R$
factors of the $SO(4)$ $R$-symmetry.
In particular, the highest weight states have the $SO(4)$
Cartan charges $(h_1,h_2)= (n,\pm n)$.
Therefore, in order to match the quantum numbers
$(\epsilon_0,j,h_1,h_2)=(n,0,n,\pm n)$ of the twisted sector states
found in the field theory,
it would be sufficient to show that these states have $\epsilon_0=n$,
which is equivalent to $({\rm mass})^2 = n(n-3)$.
The Laplacian operator contributes $n(n+1)$ to the mass squared of the scalar fields. As noted in \cite{Gukov:1998kk}, the interaction of the $B$-field with background RR 3-form and 1-form fields is likely to produce
a shift in mass squared. Having $\epsilon_0=n$ amounts to a mass shift
$\delta({\rm mass})^2 = -4 n$.
The computation of the mass shift would take two steps.
First, we need to blow up the metric (\ref{orbi-met}) while maintaining
the Einstein condition. Second, we solve the equation,
\begin{eqnarray}
d*dC_3 = \frac{1}{2} dC_3 \wedge dC_3\,,
\end{eqnarray}
of eleven-dimensional supergravity with the ansatz
\begin{eqnarray}
C_3 = C_3({\rm background}) +\phi_i \, \omega_i \wedge (d\psi + \cdots) \,.
\end{eqnarray}
We leave the detailed computation of the mass shift to future work.
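The elementary bookkeeping above (as opposed to the mass-shift computation itself) can be made explicit: with $\epsilon_0=n$, the standard AdS$_4$ relation $\Delta(\Delta-3)=({\rm mass})^2$ requires $({\rm mass})^2=n(n-3)$, and subtracting the Laplacian eigenvalue $n(n+1)$ gives $\delta({\rm mass})^2=-4n$:

```python
# Bookkeeping for the twisted-sector mass shift: an AdS4 scalar dual to an
# operator of dimension epsilon_0 = n has (mass)^2 = n(n-3).  The S^2
# Laplacian supplies n(n+1), so the RR-field couplings must shift the mass
# squared by -4n.
for n in range(2, 50):
    mass_sq = n*(n + 1) - 4*n            # Laplacian eigenvalue + claimed shift
    assert mass_sq == n*(n - 3)
    # standard AdS4 relation Delta(Delta-3) = (mass)^2, larger root:
    delta = (3 + (9 + 4*mass_sq)**0.5) / 2
    assert abs(delta - n) < 1e-9
```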
\subsection{$OSp$-type}
\paragraph{Field Theory} We have $m$ copies of alternating $O(2N)$ and $Sp(2N)$ gauge groups, and the quiver diagram is given in figure \ref{quiver}(b). Since the matter content is the same as in
the $U$-type theory, except that the fields are in real representations of the gauge group, the single letter partition function is exactly the same as that of the $U$-type theory.
\begin{eqnarray}
{\rm hyper} &:& f(x, y) = \frac{x^{1/2} y^{1/2}}{1-x^2} - \frac{x^{3/2} y^{-1/2}}{1-x^2} = \frac{\sqrt{xy}}{1-x^2} \left(1 - \frac{x}{y} \right) \,,
\\
{\rm twisted}\;\; {\rm hyper} &:& \tilde{f}(x, y) = \frac{x^{1/2} y^{-1/2}}{1-x^2} - \frac{x^{3/2} y^{1/2}}{1-x^2} = \frac{\sqrt{x/y}}{1-x^2} \left(1 - xy \right) \,.
\end{eqnarray}
Plugging in the single letter partition functions to the matrix-integral formula for the index, we obtain,
\begin{eqnarray}
{\cal I}^{OSp}_{m} &=& \int \prod_{i=1}^{m} DA_i DB_i \exp \left( \sum_{n=1}^{\infty} \sum_{i=1}^m \frac{1}{n} F_{n,i} \right) \,,
\nonumber \\
F_{n,i} &=& f_n {\rm Tr} ( A_{i}^n ) {\rm Tr}( B_{i}^{n} ) + \tilde{f}_n {\rm Tr} ( B_{i}^n ) {\rm Tr}( A_{i+1}^{n}) \,,
\end{eqnarray}
where $f_n=f(x^n, y^n)$, $\tilde{f}_n=\tilde{f}(x^n, y^n)$, and $A$ and $B$ are elements of the orthogonal and symplectic gauge groups, respectively.
Since we consider circular quivers, we identify $A_{m+1}\sim A_{1}$ and $B_{m+1}\sim B_1$. We can repeat the same matrix integral as for the ${\cal N}=5$ $OSp$ theory, except that there are now $m$ copies of the gauge groups and matter fields.
The result is
\begin{eqnarray}
{\cal I}^{OSp}_m = \prod_{n=1}^{\infty} \frac{(1-x^{2n})^m}{(1-(xy)^n)^{m/2}(1-(x/y)^n)^{m/2}(1-x^{nm})} \prod_{k=1}^{\infty} \exp{\left(-\frac{f_{2k}}{2k(1+f_{2k})}\right)} .
\end{eqnarray}
The corresponding single-particle index is
\begin{eqnarray}
\label{omk-ftsp}
I_{{\rm sp},m}^{OSp}= \frac{m}{2}\left( \frac{1}{1-xy} + \frac{1}{1-x/y} -\frac{2}{1-x^2}-\frac{xy+x/y}{(1+xy)(1+x/y)}\right) + \frac{x^{m}}{1-x^{m}} \,.
\end{eqnarray}
For $m=1$, it agrees with the ${\cal N}=5$ result as expected.
\paragraph{Gravity} Again, we can work out the decomposition of
representations, etc., but there is a shortcut.
Consider the broken $SO(4)= SU(2)\times SU(2) \subset SO(8)$ and denote
its representations by $(j_1,j_2)$.
Taking the $\mathbb{D}_k$ orbifold action into account, but
leaving the $\mathbb{Z}_m$ orbifolding aside, we can write down
the ``preliminary version'' of the index as follows:
\begin{eqnarray}
\label{omk-grav1}
I_{{\rm sp}, m}^{OSp-{\rm pre}} = \int \frac{d\theta}{2\pi} {\rm tr} \left[ \frac{x^{\epsilon_0+j}}{1-x^2} y_1^{h_2} y_2^{h_3} \left(\frac{1+e^{\pi i \left\{J_2^{(1)}+J_2^{(2)}\right\}} }{2} \right) e^{i \theta \left\{ J_3^{(1)}+ J_3^{(2)}\right\}} \right] \,.
\end{eqnarray}
As before,
the $\mathbb{Z}_{k}$ orbifolding picks out only the states with $m_1+m_2=0$.
The other generator of $\mathbb{D}_k$ acts on these states as
\begin{eqnarray}
\label{dkeff}
e^{\pi i \left[J_2^{(1)}+J_2^{(2)}\right]} |j_1,j_2; +m,-m \rangle = (-1)^{j_1+j_2}
|j_1,j_2; -m,+m \rangle \,.
\end{eqnarray}
The first half of (\ref{omk-grav1})
which does not involve $J_2^{(1)}+J_2^{(2)}$ is nothing but
one half of the $\mathcal{N}=6$ index.
The other half (call it $\Delta I$) picks up contributions
only from the states with $m_1 = m_2 = 0$ because of (\ref{dkeff}).
Since $h_3 = m_1 - m_2$ in (\ref{omk-grav1}), $\Delta I$ is independent of $y_2$.
Before the $\mathbb{Z}_m$ orbifolding, the index must coincide with
the $\mathcal{N}=5$ result.\footnote{
We could reverse the logic, compute $\Delta I$ directly
and give an alternative derivation of the $\mathcal{N}=5$ index,
but we will not pursue it here.}
Therefore,
\begin{eqnarray}
\Delta I(x,y) = I_{\rm sp}^{\mathcal{N}=5}(x,y) - {\textstyle \frac{1}{2}} I_{\rm sp}^{\mathcal{N}=6}(x,y,y_2=1) \,.
\end{eqnarray}
The $\mathbb{Z}_m$ orbifolding acts only on $y_2$, so it has no effect
on $\Delta I$.
\begin{eqnarray}
\label{omk-grav2}
I_{{\rm sp}, m}^{OSp}(x,y) &=& \frac{1}{m} \sum_{j=1}^m I_{{\rm sp}, 1}^{OSp-{\rm pre}}(x,y,\omega_m^j) \;\;\;\;\; (\omega_m \equiv e^{2\pi i/m})
\nonumber \\
&=& \frac{1}{2m} \sum_{j=1}^m I_{\rm sp}^{\mathcal{N}=6}(x,y,\omega_m^j) +\Delta I(x,y)
\\
&=& \frac{1}{2}\left( \frac{1}{1-xy} + \frac{1}{1-x/y} -\frac{2}{1-x^2}-\frac{xy+x/y}{(1+xy)(1+x/y)}\right) + \frac{x^{m}}{1-x^{m}} \,.
\nonumber
\end{eqnarray}
Comparing (\ref{omk-ftsp}) and (\ref{omk-grav2}),
we find a mismatch
\begin{eqnarray}
\Delta I_{{\rm sp}, m}^{OSp}(x,y) &=& \frac{m-1}{2} \left( \frac{1}{1-xy} + \frac{1}{1-x/y} -\frac{2}{1-x^2}-\frac{xy+x/y}{(1+xy)(1+x/y)}\right)
\nonumber \\
&=&
(m-1) \times \frac{1}{1-x^2} \left[ (1-x/y)\frac{(xy)^2}{1-(xy)^2}
+(1-xy)\frac{(x/y)^2}{1-(x/y)^2} \right] \,.
\label{mis-osp}
\end{eqnarray}
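As in the $U$-type case, the second line is an algebraic rewriting of the first. Stripping the overall $(m-1)$ and writing $a=xy$, $b=x/y$, the identity can be verified exactly:

```python
from fractions import Fraction as F

def lhs(x, y):
    # first line of (mis-osp), with the (m-1) factor stripped
    a, b = x*y, x/y
    return F(1, 2)*(1/(1 - a) + 1/(1 - b) - 2/(1 - a*b)
                    - (a + b)/((1 + a)*(1 + b)))

def rhs(x, y):
    # second line of (mis-osp): overall 1/(1-x^2), with (m-1) stripped
    a, b = x*y, x/y
    return ((1 - b)*a**2/(1 - a**2) + (1 - a)*b**2/(1 - b**2)) / (1 - a*b)

for x, y in [(F(1, 3), F(3, 4)), (F(2, 7), F(5, 3))]:
    assert lhs(x, y) == rhs(x, y)   # exact equality of rational functions
```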
\paragraph{Twisted sector}
We can repeat the same analysis as in the previous case with slight modifications. The bosons contributing to the index are given by
\begin{eqnarray}
O_n^B(i) \; = \;
{\rm Tr} \left(q^i_+ \bar{q}^i_+\right)^{2n} \,,
\;\;\;\;\;
\widetilde{O}^B_n(i) \; = \;
{\rm Tr} \left(\tilde{q}^i_+ \bar{\tilde{q}}^i_+ \right)^{2n} \,.
\label{twi-boson2}
\end{eqnarray}
Recall that $q$ and $\bar{q}$ are related by $\bar{q}^{a\dot{a}} = \delta^{ab} \omega^{\dot{a}\dot{b}}q_{b\dot{b}}$. We have $(q_+\bar{q}_+)^{2n}$ rather than $(q_+\bar{q}_+)^{n}$ because the trace of an odd power vanishes due to the anti-symmetric tensor $\omega^{\dot{a}\dot{b}}$ of $Sp(2N)$.
The fermionic operators corresponding to the index
are again obtained by taking super-descendants of the bosonic operators:
\begin{eqnarray}
O^F_n(i) &=& \left[ Q_{-+} , O^B_n(i) \right] \;=\;
{\rm Tr} \left[ (\psi^i_+ \bar{q}^i_+ + q^i_+ \bar{\psi}^i_+ )(q^i_+\bar{q}_+^i)^{2n-1} \right] \,,
\nonumber \\
\widetilde{O}^F_n (i) &=& \left[ Q_{+-} , \widetilde{O}^B_n(i) \right] =
{\rm Tr} \left[ (\tilde{\psi}^i_+ \bar{\tilde{q}}^i_+ + \tilde{q}^i_+ \bar{\tilde{\psi}}^i_+)(\tilde{q}^i_+ \bar{\tilde{q}}^i_+ )^{2n-1}\right] .
\end{eqnarray}
The bosonic and fermionic operators together contribute to the index by
\begin{eqnarray}
(m-1) \times \frac{1}{1-x^2}\left[(1-x/y)\frac{(xy)^2}{1-(xy)^2}+(1-xy)\frac{(x/y)^2}{1-(x/y)^2}
\right] \,,
\end{eqnarray}
which exactly accounts for the mismatch (\ref{mis-osp}) between the supergravity approximation and the field theory computation.
On the gravity side, we again have fixed loci at $(z_1,z_2,0,0)$
and $(0,0,z_3,z_4)$.
The $\mathbb{D}_{k}$ action,
$z_i \sim \omega_k z_i$ and $(z_1, z_2, z_3, z_4) \sim (\bar{z}_2, -\bar{z}_1, \bar{z}_3, -\bar{z}_4)$, turns the fixed loci
into $\mathbb{R}\mathbb{P}^2=S^2/\mathbb{Z}_2$. Among the spherical harmonics
on $S^2$, only those with even $n$ contribute.
This matches the spectrum of twisted sector states in the field theory.
\subsection{On the dual ABJM proposal}
In \cite{Hanany:2008fj,Franco:2008um}, a $U(N) \times U(N)$ Chern-Simons theory with manifest $\mathcal{N}=2$ supersymmetry and the superpotential $W = [\phi_1, \phi_2]AB$ was proposed, where $\phi_1, \phi_2$ are adjoints of the first gauge group and $A, B$ are bi-fundamentals. The moduli space of vacua was claimed to be $\mathbb{C}^2/\mathbb{Z}_k \times \mathbb{C}^2$, which suggests that the theory may exhibit $\mathcal{N}=4$ supersymmetry at the IR fixed point. The existence of non-trivial fixed points for $\mathcal{N}=2$ Chern-Simons-matter theories was shown in \cite{Gaiotto:2007qi}. The moduli space seems to imply that the IR fixed point of this theory is dual to M-theory on AdS$_4 \times S^7/\mathbb{Z}_k$, where the $\mathbb{Z}_k$ acts on either $\psi_1$ or $\psi_2$ in \eqref{s7metric}. For $k=1$, the moduli space is $\mathbb{C}^4$, and the authors of \cite{Hanany:2008fj,Franco:2008um} claim that the theory flows to the same IR fixed point as the ABJM theory at $k=1$ (hence the name ``dual ABJM''). It would be interesting to test whether the fixed point theory actually has $\mathcal{N}=4$ supersymmetry when $k > 1$. For this purpose, we will compute the index on both sides and make a comparison.
All the field theory computations in previous sections were based
on the fact that we can take the free theory limit by continuous marginal deformation. The free theory computation cannot be justified
if the theory undergoes a non-trivial RG flow.
However, the analysis \cite{Gaiotto:2007qi} of the RG fixed point
of ${\cal N}=2$ theories with quartic superpotential
indicates that the coefficient of the superpotential
at the fixed point is of order $1/k$. So, in the large $k$ limit,
the theory is weakly coupled throughout the RG-flow
and the free theory computation may be reliable.
We will proceed to perform the
free theory computation and check the consistency of
the approach a posteriori.
\paragraph{Gravity}
As usual, in the $\mathcal{N}=8$ setup before orbifolding, we take the basis of $SO(8)$ such that the supercharge
$Q$ has highest weight $(1,0,0,0)$ and the scalar field has
highest weight $(1/2,1/2,1/2,-1/2)$.
Let us write the seven-sphere as
\begin{eqnarray}
S^7 = \{ (x_1, \cdots, x_8) | \sum_{i=1}^8 |x_i|^2 = 1 \} .
\end{eqnarray}
Consider the $\mathbb{Z}_k$ orbifolding action by
\begin{eqnarray}
\exp\left( \frac{2\pi}{k} (H_{56}+H_{78})\right) \,,
\end{eqnarray}
where $H_{56}$ and $H_{78}$ denote the rotation generators in the $x_5$-$x_6$ and $x_7$-$x_8$ planes. Clearly, the surviving supercharges are vectors in the $x_{1, 2, 3, 4}$ directions.
Under $SO(8)$, the scalar fields decompose as
\begin{eqnarray} \label{scalardec}
\phi_1 &:& \pm{\textstyle \frac{1}{2}}(+,+,+,-) \,,
\nonumber \\
\phi_2 &:& \pm{\textstyle \frac{1}{2}}(+,+,-,+) \,,
\nonumber \\
A &:& \pm{\textstyle \frac{1}{2}}(+,-,+,+) \,,
\nonumber \\
B &:& \pm{\textstyle \frac{1}{2}}(+,-,-,-) \,.
\end{eqnarray}
Note that $\phi_{1,2}$ are neutral under the orbifold action,
while $A$ and $B$ have charge $\pm 1$.
To incorporate the effect of orbifolding in the index,
we can invoke the contour integral method.
Take $I_{\rm sp}^{\mathcal{N}=8}(x,y_1,y_2,y_3)$, set $y_2=y_3=z$ and collect
the $z$-independent terms by using the contour integral.
The result is (setting $\sqrt{x} = t$ and $\sqrt{y_1} =u$)
\begin{eqnarray}
I_{\rm sp}^{\rm gravity} = -1 + \frac{1}{(1-tu)^2}-\frac{2t^4}{(1-t^4)(1-tu)}-\frac{t^2}{t^2-u^2} \,.
\end{eqnarray}
\paragraph{Field Theory}
From \eqref{scalardec}, we can identify the letters contributing to the index as follows.
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
type & operators & $\epsilon_0$ & $j$ & $(h_1, h_2)$ \\
\hline
bosons & $\phi_{1, b}$ & ${\textstyle \frac{1}{2}} $ & $0$ & $({\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}})$ \\
& $\phi_{2, b}$ & ${\textstyle \frac{1}{2}} $ & $0$ & $({\textstyle \frac{1}{2}}, {\textstyle \frac{1}{2}})$ \\
& $A_b$ & ${\textstyle \frac{1}{2}} $ & $0$ & $({\textstyle \frac{1}{2}},-{\textstyle \frac{1}{2}})$ \\
& $B_b$ & ${\textstyle \frac{1}{2}} $ & $0$ & $({\textstyle \frac{1}{2}},-{\textstyle \frac{1}{2}})$ \\
\hline
fermions & $\phi_{1, f}$ & $1 $ & ${\textstyle \frac{1}{2}}$ & $({\textstyle \frac{1}{2}}, -{\textstyle \frac{1}{2}})$ \\
& $\phi_{2, f}$ & $1 $ & ${\textstyle \frac{1}{2}}$ & $({\textstyle \frac{1}{2}}, -{\textstyle \frac{1}{2}})$ \\
& $A_f$ & $1 $ & ${\textstyle \frac{1}{2}}$ & $({\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}})$ \\
& $B_f$ & $1$ & ${\textstyle \frac{1}{2}}$ & $({\textstyle \frac{1}{2}},{\textstyle \frac{1}{2}})$ \\
\hline
derivatives & $\partial$ & $1$ & $1$ & $(0,0)$ \\
\hline
\end{tabular}
\end{center}
Here, the subscripts $b$ and $f$ denote the bosonic and fermionic components of the corresponding superfield.
The ``indices for letters'' are given by
\begin{eqnarray}
f_{12} = f_{21} = \frac{t/u-t^3u}{1-t^4},
\;\;\;
f_{11} = \frac{2(tu-t^3/u)}{1-t^4},
\;\;\;
f_{22} = 0 \,.
\end{eqnarray}
The matrix integral is straightforward to compute.
\begin{eqnarray}
{\cal I}^{\rm FT} &=& \int DU_1 DU_2 \exp\left( \sum_{n=1}^{\infty} \sum_{a,b} \frac{1}{n} f_{ab}(t^n, u^n) {\rm tr}(U_a^n){\rm tr}(U_b^{-n}) \right)
\nonumber \\
&=& \prod_{n=1}^{\infty} \frac{1}{1-f_{11}^{(n)}-f_{12}^{(n)}f_{21}^{(n)}}
\nonumber \\
&=& \prod_{n=1}^{\infty} \frac{(1-t^{4n})^2}{(1-t^{2n}/u^{2n})(1-2t^nu^n + 2t^{5n}u^n-t^{6n}u^{2n})} \,.
\end{eqnarray}
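The last equality amounts to a factorization of the per-mode denominator, $(1-t^4)^2\,(1-f_{11}-f_{12}f_{21})$. This polynomial identity can be checked exactly:

```python
from fractions import Fraction as F

def numerator(t, u):
    # (1 - t^4)^2 * (1 - f11 - f12*f21), with denominators cleared
    return (1 - t**4)**2 - 2*(t*u - t**3/u)*(1 - t**4) - (t/u - t**3*u)**2

def factored(t, u):
    # claimed factorized form appearing in the last line
    return (1 - t**2/u**2) * (1 - 2*t*u + 2*t**5*u - t**6*u**2)

for t, u in [(F(1, 2), F(1, 3)), (F(2, 5), F(7, 4))]:
    assert numerator(t, u) == factored(t, u)   # exact rational equality
```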
\paragraph{Comparison}
Let us expand the index in powers of $t$. The field theory index is given by
\begin{eqnarray}
{\cal I}^{\rm FT} &=& 1 + 2 u t + (u^{-2} + 6u^2) t^2 + (2u^{-1} + 14 u^3) t^3 +
(2 u^{-4} + 4 + 34 u^4) t^4 \nonumber \\
&+& (4u^{-3} + 8 u + 74 u^5)t^5 +(3 u^{-6} + 10 u^{-2} + 15 u^2 + 166 u^6) t^6 + \cdots,
\end{eqnarray}
and the gravity side is given by
\begin{eqnarray}
{\cal I}^{\rm gravity} &=& \exp\left( \sum_{k=1}^{\infty} \frac{1}{k} I_{\rm sp}^{\rm gravity}(t^k, u^k) \right) \nonumber \\
&=& 1 + 2 u t + (u^{-2} + 6u^2) t^2 + (2u^{-1} + 14 u^3) t^3 +
(2 u^{-4} + 4 + 33 u^4) t^4 \nonumber \\
&&+ (4u^{-3} + 8 u + 70 u^5)t^5 +(3 u^{-6} + 10 u^{-2} + 15 u^2 + 149 u^6) t^6 + \cdots \,.
\end{eqnarray}
The results match to a large extent, but not completely:
\begin{eqnarray}
{\cal I}^{\rm FT} - {\cal I}^{\rm gravity} =
u^4t^4 + 4 u^5t^5+ 17 u^6t^6 + \cdots\,.
\end{eqnarray}
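Note that, through order $t^6$, the difference series simply reflects the mismatch in the top coefficients $u^n t^n$ of the two expansions quoted above:

```python
# Consistency of the quoted difference series with the two expansions:
# at orders t^4, t^5, t^6 the mismatch sits entirely in the highest
# powers u^n t^n.
ft = {4: 34, 5: 74, 6: 166}        # u^n t^n coefficients of I^FT
gravity = {4: 33, 5: 70, 6: 149}   # u^n t^n coefficients of I^gravity
mismatch = {n: ft[n] - gravity[n] for n in ft}
assert mismatch == {4: 1, 5: 4, 6: 17}   # u^4 t^4 + 4 u^5 t^5 + 17 u^6 t^6
```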
Note that ${\cal I}^{\rm FT} > {\cal I}^{\rm gravity}$ holds order by order
and that the orbifold has the fixed locus $S^3 \subset S^7/\mathbb{Z}_k$.
So, it seems possible for the twisted sector states to account for
the mismatch. It is not clear, however, why only a small number
of terms survive in the $k\rightarrow \infty$ limit.
It would be worthwhile to repeat the computation
for large but finite $k$, where perturbation theory is applicable,
in order to fully verify the existence of the $\mathcal{N}=4$ fixed point
of the dual ABJM theory.
\section{Conclusion}
We have checked that the superconformal indices of $\mathcal{N}=4, 5$ Chern-Simons theories match with those of their gravitational duals. This provides a non-trivial check of the proposed dual geometries. In particular, since the orbifolds have fixed loci, by identifying the contributions from the twisted sectors we can verify that the superconformal theory is dual not just to supergravity but to the full superstring theory. We have also computed the index for the dual ABJM model proposed in \cite{Hanany:2008fj}. Our computation suggests that one has to go beyond the supergravity approximation to actually verify the claim.
In principle, the superconformal index depends on both $N$ and $k$, because they determine the form of the possible gauge invariant operators. Our computation was done in the large $k$ and large $N$ limit. In this regime, we can use the weakly coupled type IIA superstring description.
We can regard the 't Hooft coupling $\lambda = N/k$ as effectively continuous, so that the index does not change as we go from weak coupling (free field theory) to very strong coupling (supergravity). But the index is still discrete for finite $N$, so it remains to be checked that the index does not `jump' under such deformations. In addition, to properly understand the behavior of the index when the two gauge groups have different ranks ({\it e.g.} $U(M) \times U(N)$, $M \ne N$), we should look into the case of finite $N$. According to \cite{Aharony:2008gk}, the $\mathcal{N}=6$ $U(M) \times U(N)$ Chern-Simons theory remains superconformal only if $|M-N| \le k$. To check this, we should also investigate finite $k$ effects. On the gravity side, finite $k$ effects are easily computable at large $N$, where we can use the supergravity approximation. But in order to see finite $N$ effects, we should compute $1/N$ corrections. We leave this for future work.
\footnote{
Matching of the index of finite rank gauge groups in four dimensions
was previously reported in \cite{Dolan2}.}
It is possible to compute the superconformal indices for less supersymmetric Chern-Simons theories. But less supersymmetry means the index has fewer expansion parameters. For example, in the case of $\mathcal{N}=2, 3$ theories, we only have one parameter in the index, and for $\mathcal{N}=1$ theories it becomes an ordinary Witten index. So as we go to less supersymmetric cases, the index contains less information. Also, the gravity duals of less supersymmetric theories are more complicated, and the KK spectrum of the compact space is not known. However, it is worth noting that the superconformal index is invariant under any deformation that preserves the special supercharges $Q$ and $S$. In the examples presented here, there is no marginal deformation preserving the same amount of supersymmetry besides the Lagrangian itself, but there are deformations which preserve less supersymmetry. In this case, we should be able to identify the corresponding deformation of the dual geometry on the gravity side. One possible example is the $\mathcal{N}=1$ Chern-Simons theory obtained by deforming the free theory limit of the $\mathcal{N}=6$ ABJM theory \cite{Ooguri:2008dk}. The dual geometry of this theory is conjectured to be the orbifolded squashed seven-sphere. Since the free theory limits of both theories agree, it seems possible to construct an index which includes the global symmetry along with the superconformal algebra.
Another example is the three dimensional version of the beta-deformation
preserving ${\cal N}=2$ supersymmetry.
The gravity dual of the beta-deformation was studied earlier in
\cite{Lunin, Ahn, Gaunt}. It would be interesting to study the deformation
in the corresponding Chern-Simons theories.
\acknowledgments
We are grateful to Seok Kim, Sungjay Lee, Hirosi Ooguri, Chang-Soon Park and Masahito Yamazaki for helpful discussions.
The work of S.L. is supported in part by the KOSEF Grant
R01-2006-000-10965-0 and the Korea Research Foundation Grant
KRF-2007-331-C00073. The work of J.S. is supported in
part by DOE
grant DE-FG02-92ER40701 and Samsung Scholarship.
\newpage
\centerline{\Large\bf Appendix}
\section*{Introduction}
Spin transfer torque\cite{berger1996prb,slonczewski} (STT)---the transfer of angular momentum from a spin current to the local spins---can act as negative spin wave damping and drive the magnetization of a ferromagnet (FM) into a state of sustained large-angle precession.
For STT to occur, one first needs to generate a spin current and then inject this spin current into a ferromagnetic layer with a magnetization direction that is non-collinear with the polarization axis of the spin current. In so-called spin torque nano-oscillators (STNOs) these conditions can be fulfilled
either in giant magnetoresistance trilayers, or in magnetic tunnel junctions,
if the two ferromagnetic layers are in a non-collinear state \cite{Chen2016}. More recently, the spin Hall effect \cite{Hirsh1999,D'yakonov1971, Sinova2015} was instead used to create pure spin currents injected into an adjacent ferromagnetic layer, in devices known as spin Hall nano-oscillators (SHNOs) \cite{Chen2016}. These are both much easier to fabricate \cite{Demidov2014apl}, and exhibit superior synchronization properties and therefore much higher signal coherence \cite{Awad2017}. Even so, SHNOs still suffer from a number of drawbacks and conflicting requirements: \emph{i}) the non-magnetic layer generating the spin current dramatically increases the zero-current spin wave damping of the ferromagnetic layer (up to 3x for NiFe/Pt \cite{Nanprb2015}), \emph{ii}) the surface nature of the spin Hall effect requires ultrathin ferromagnetic layers for reasonable threshold currents, which further increases the spin wave damping, and \emph{iii}) since the current is shared between the driving layer and the magnetoresistive ferromagnetic layer, neither driving nor signal generation benefits from the total current, leading to non-optimal threshold currents, low output power, unnecessary dissipation, and heating. It would therefore be highly advantageous if one could do away with the additional metal layers for operation.
It was recently realized that spin-orbit torque can emerge
from Berry curvature due to a broken inversion symmetry\cite{kurebayashi2014antidamping}, either in the bulk of the magnetic material\cite{fang2011spin,Safranski2017planar} or at its interfaces. In particular, interfaces to insulating oxides\cite{Miron2011,an2016spin,Emori2016prb,demasius2016enhanced,Gao2018prl} can lead to a non-uniform spin distribution over the film thickness\cite{haidar2013prb,Tsukahara2014prb,Azevedo2015prb}. As oxide spin-orbitronics has seen considerable development in recent years\cite{Vaz2018jjap, manchon2018current}, oxide-based spin-orbit torque nano-oscillators with a single metallic layer could be envisioned. If so, the current would be confined to the ferromagnetic metal layer, avoiding the detrimental current sharing between adjacent metal layers and leading to an overall lower power consumption.
Here we propose, and experimentally demonstrate, an intrinsic spin orbit torque (SOT) nano-oscillator based on a single permalloy layer with thickness in the 15--20 nm range, grown on an Al$_2$O$_3$ substrate and capped with SiO$_2$. The magnetodynamics of the unpatterned stacks is characterized using ferromagnetic resonance (FMR) spectroscopy and the linear spin wave modes of the final devices are determined using spin-torque FMR. Spin wave auto-oscillations are observed both electrically, via a microwave voltage from the anisotropic magnetoresistance, and optically, using micro-Brillouin Light Scattering ($\mu$-BLS) microscopy. The current-field symmetry of the observed auto-oscillations agree with the expected symmetry for spin orbit torques,
either intrinsic to the Py layer itself or originating from the Al$_2$O$_3$/Py and Py/SiO$_2$ interfaces.
Fig.~1A shows a scanning electron microscopy image of a $w$ = 30 nm wide nano-constriction etched out of a $t$ = 15 nm thick single permalloy (Ni$_{80}$Fe$_{20}$; Py) layer, magnetron sputtered onto a sapphire substrate.
On top of the nano-constriction a coplanar waveguide is fabricated for electrical measurements (not shown). During operation, an external magnetic field $(H)$, applied at an out-of-plane angle $\theta$ = $75^{\circ}$ and an in-plane angle $\phi$ = $22^{\circ}$, tilts the Py magnetization out of plane.
Fig.~1B shows the thickness dependence of the Gilbert damping ($\alpha$) of the unprocessed Py films as well as the resistance (R) and the anisotropic magnetoresistance (AMR) of the final devices (inset). As the thickness of the Py layer increases, we measure a noticeable decrease of $\alpha$ and R, and a corresponding increase of AMR \cite{Ingvarssonprb2002}. Thicker permalloy films could thus be advantageous compared to thinner ones: the lower $\alpha$ reduces the threshold current for driving spin wave auto-oscillations, and the higher AMR yields larger output power, since the electrical read-out in these devices is based on AMR.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\textwidth]{Fig1.pdf}
\caption{\textbf{Nano-constriction fabrication, characterization and simulation.} (\textbf{A}) Scanning electron microscope image of a permalloy nano-constriction
showing its width ($w$ = 30 nm) and the direction of the applied field. (\textbf{B}) Gilbert damping of blanket permalloy films as a function of film thickness. Inset: AMR (closed symbol) and resistance (open symbol) vs.~thickness of 30 nm wide nano-constrictions. (\textbf{C}) ST-FMR spectrum vs.~field at $\it{f}$ = 10 GHz for a $w$ = 30 nm and $t$ = 15 nm nano-constriction showing its two linear modes;
red and blue lines are Lorentzian fits to the FMR mode and the localized mode, respectively. The spatial distribution of the FMR and the localized modes, imaged by $\mu$-BLS (\textbf{D,E}), and calculated using micromagnetic simulations (\textbf{F,G}).}
\end{center}
\end{figure}
To determine the magnetodynamic properties of the nano-constrictions, we first characterized them in the linear regime using spin torque ferromagnetic resonance spectroscopy (ST-FMR) \cite{Liu2011} in a field applied at $\theta$ = $75^{\circ}$ and $\phi$ = $22^{\circ}$.
A typical ST-FMR spectrum
as a function of field magnitude, and at constant frequency \textit{f} = 10 GHz, reveals
two well-separated resonances that can both be accurately fit with Lorentzians (Fig. 1C).
The frequency vs.~field dependence of the lower field peak agrees perfectly with the FMR dispersion at the specific angles used, from which we can extract an effective magnetization $\mu_{0}M_{eff}$ = 0.72 T.
We can directly image the spatial distribution of the two linear modes using micro-Brillouin light scattering ($\mu$-BLS) microscopy of the thermally excited spin waves: the amplitude of the FMR mode is largest well outside the nano-constriction region (Fig.~1D), whereas the amplitude of the higher-field (lower-energy) mode is localized to the center of the nano-constriction (Fig.~1E). The localized mode is confined to the nano-constriction due to a local reduction of the internal magnetic field in this region. The nature of the observed modes is further corroborated by micromagnetic simulations, in which we calculate the spatial profile of the magnetization within the nano-constriction. The spatial maps for the simulated FMR mode and the localized mode (Fig.~1F, G) closely agree with the $\mu$-BLS maps.
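Two-mode spectra like the one in Fig.~1C are commonly decomposed by a least-squares fit of two Lorentzians. The sketch below illustrates the procedure on synthetic data (the peak positions, widths, and amplitudes are hypothetical, chosen only for illustration, not the measured values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Single symmetric Lorentzian centered at H0 with half-width dH
def lorentzian(H, H0, dH, A):
    return A * dH**2 / ((H - H0)**2 + dH**2)

# Two-mode model: FMR mode + localized mode + constant offset
def two_modes(H, H1, dH1, A1, H2, dH2, A2, offset):
    return lorentzian(H, H1, dH1, A1) + lorentzian(H, H2, dH2, A2) + offset

H = np.linspace(0.2, 1.0, 400)                       # field axis (T)
true_params = (0.45, 0.02, 1.0, 0.60, 0.03, 0.4, 0.05)
rng = np.random.default_rng(1)
V_mix = two_modes(H, *true_params) + 0.01 * rng.normal(size=H.size)

# Fit both resonances simultaneously; p0 is a rough initial guess
popt, pcov = curve_fit(two_modes, H, V_mix,
                       p0=(0.4, 0.03, 0.8, 0.65, 0.02, 0.5, 0.0))
```

The fitted resonance fields and half-widths would then be tracked versus frequency or current, as done for the data in Figs.~1C and 2A.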
We then investigated the impact of a direct current on the linewidths ($\Delta H$) of both the FMR and the localized modes (Fig.~2A). While the linewidths of both modes decrease with current magnitude for an even current-field symmetry
(the orientations of the field and current satisfy $\mathbf{H} \cdot \mathbf{I} >0$) and increase for an odd symmetry ($\mathbf{H} \cdot \mathbf{I} <0$), the localized mode (blue circles) is three times more strongly affected than the FMR mode.
This is again consistent with the localized mode residing in the nano-constriction region where the current density is the largest. Naively extrapolating the $\Delta H$ vs.~current data to zero linewidth then predicts that auto-oscillations on this mode should be possible at a current magnitude of 2 mA.
That auto-oscillations are indeed achievable at such currents is confirmed experimentally by sweeping the field magnitude, as shown in Fig.~2B\&C.
The electrical microwave power spectral density (PSD) is measured at constant currents of $I = +2$ mA (Fig.~2B) and $I = -2$ mA (Fig.~2C) as the field is swept from 0.8 T to $-0.8$ T.
Auto-oscillations are detected under positive applied fields between +1 kOe and +8 kOe with a frequency tunable from 3.5 to 12 GHz. Somewhat unexpectedly, a measurable PSD is also observed under \emph{negative} fields between $-1$ kOe and $-3$ kOe, where the frequency increases from 3.5 to 5.5 GHz. Then, by switching the direction of the current (Fig.~2C), we measure a corresponding continuous branch of auto-oscillations under negative fields between $-1$ kOe and $-8$ kOe and, again, a noticeable PSD appears under a \emph{positive} field between +1 kOe and +3 kOe, this time at an essentially constant frequency of about 4 GHz.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{Fig2.pdf}
\caption{\textbf{Current induced spin wave auto-oscillations.} (\textbf{A}) Current dependent ST-FMR linewidth of the FMR (red circles) and localized (blue circles) modes for positive field (filled symbols) and negative field (open symbols).
(\textbf{B, C}) Electrical measurements of the power spectral densities of spin wave auto-oscillations vs.~applied field measured under a direct current of (\textbf{B}) +2 mA and (\textbf{C}) -2 mA. Label I and II indicate low-field and high-field auto-oscillations, respectively. (\textbf{D, E, F}) $\mu$-BLS measurements of the frequency vs.~applied field showing the intensity of the FMR and the localized mode measured at 0 mA (\textbf{D}), +1.2 mA (\textbf{E}), and -1.2 mA (\textbf{F}). The insets in (\textbf{E, F}) show the change of the BLS counts as a function of current at $\mu_0 H$ = 0.7 T and $\mu_0 H$ = 0.2 T, respectively.}
\end{center}
\end{figure}
We categorize the measured PSD into two regions depending on the strength of the magnetic field: region (I) for low fields $|H| \leq 3$ kOe and region (II) for high fields $|H| > 3$ kOe.
In region (I) the PSD
is independent of the field-current symmetry, \emph{i.e.}~a measurable PSD is always present regardless of the field and current polarities. The microwave signal in this region is rather weak, with a large linewidth.
In region (II) the auto-oscillations are both sharper and more powerful, and are detected only when the orientations of the field and current satisfy $\mathbf{H} \cdot \mathbf{I} >0$. In this region, the spin current creates a spin torque aligned either anti-parallel ($\mathbf{H} \cdot \mathbf{I} >0$) or parallel ($\mathbf{H} \cdot \mathbf{I}<0$) to the damping, and hence either suppresses or enhances the damping, depending on the symmetry.
To better understand the origin of the two different regions and their distinctly different symmetries, we performed $\mu$-BLS measurements vs.~field and current. Fig.~2D shows the frequency vs.~field of the thermally populated linear modes at $I = 0$ mA, where we observe the localized mode and the continuous SW band above the FMR frequency. When we increase the current to $+1.2$ mA (Fig.~2E), a noticeable increase in the intensity of the localized mode can be observed for high positive fields, \emph{i.e.}~in region (II). It is noteworthy that the SW amplitude in this region depends exponentially on current magnitude, as shown in the inset of Fig.~2E, consistent with spin-torque driven auto-oscillations. In contrast, by switching the current to $-1.2$ mA the intensity of the localized mode in region (II) becomes weaker than at zero current, consistent with additional damping from spin-torque, while it increases slightly at low fields (region (I)). The current dependence in region (I) is, however, much weaker and essentially linear, suggesting thermal activation,
but not auto-oscillations. We can hence conclude from the $\mu$-BLS characterization that the observed auto-oscillations originate in a continuous manner from the localized mode inside the nano-constriction and that the auto-oscillations require a certain magnetic state reached only for a field above 0.3 T. These results were reproducible from device to device and auto-oscillations were observed for both $t$ = 15 and 20 nm.
Next, we discuss the possible mechanisms that could drive the auto-oscillations, \emph{i.e.}~mechanisms by which the spin current exerts a torque opposing the Gilbert damping with the observed current-field symmetry.
Since our devices have a strongly non-uniform magnetization distribution in the vicinity of the constriction and should have a relatively strong temperature gradient during operation, we might have anti-damping originating from \emph{(i)} the Zhang-Li torque \cite{ZhangLi2004}, and \emph{(ii)} spin torque from the spin-dependent Seebeck effect \cite{slachter2010thermally}. However, neither of these torques follows the experimentally observed $\mathbf{H} \cdot \mathbf{I} > 0$ symmetry, as their sign would not depend on the direction of the applied current.
The observed symmetry instead strongly suggests that spin-orbit torque (SOT) is the main source of anti-damping.
Recent results obtained in ferromagnetic/oxide heterostructures suggest that
SOT can result from non-equilibrium spin accumulation due to inversion symmetry breaking and enhanced spin-orbit coupling at the interfaces\cite{Hals2010epl, kurebayashi2014antidamping,Kim2017prb}.
In principle, intrinsic anomalous \cite{Das2017prb,Bose2018prappl} and spin Hall effects in Py \cite{Azevedo2015prb,Tsukahara2014prb} and interfacial Rashba effect \cite{Miron2011} might all contribute to the SOT in our system.
In fact, our micromagnetic simulations, assuming a spin Hall/Rashba-like SOT, require a spin Hall angle of about 0.13 to reproduce the auto-oscillation threshold currents observed in the experiments, which is in the range of values reported for Py/oxide stacks\cite{manchon2018current}.
Besides the fundamental interest of generating, controlling, and optimizing SOT in single ferromagnetic layers, \emph{e.g.}~using ultra-low damping metals and different oxide interfaces, our results will have a direct impact on a wide range of
applications. Adding a magnetic tunnel junction (MTJ) in the nano-constriction region requires only an additional ferromagnetic layer, which would allow separate optimization of the SOT drive and of the MTJ-based high-power read-out. Arrays of SOT-driven nano-constrictions can also be used for oscillator-based neuromorphic computing \cite{Torrejon2017nature}. In the sub-threshold regime, the intrinsic damping in magnonic crystals can be greatly reduced, addressing their issues with high transmission losses \cite{KruglyakJPD2010}.
\section*{Acknowledgments:} This work was supported by the European Research Council (ERC) under the European Community's Seventh Framework Programme (FP/2007-2013)/ERC Grant 307144 "MUSTANG." This work was also supported by the Swedish Research Council (VR), the Swedish Foundation for Strategic Research (SSF), and the Knut and Alice Wallenberg Foundation, and the Wenner-Gren Foundation.
\newline
\clearpage
\subsection{From imaginary to real time}\label{se:imaginaryToRealtime}
At thermal equilibrium, the imaginary-time correlator $\langle \hat{N}_A(-i\tau)\hat{N}_A(0)\rangle_T$ can be related to the Fourier transform of the corresponding real-time correlator, defined as
\begin{align}
&\chi(\omega,T)=\int dt e^{i\omega t}\langle \hat{N}_A(t)\hat{N}_A(0)\rangle_{T}.
\end{align}
This is the charge noise of the QD.
As a result Eq.~(\ref{eq:central_general}) becomes~\cite{SM}
\begin{equation}
\begin{split}
\Delta S^{(2)}=&-\log\Big[1-2\langle\hat{N}_A\rangle_{T/2}+2\int\frac{d\omega}{2\pi}\chi\big(\omega,T/2\big) e^{-\frac{\omega}{T}}\Big],\\
\Delta S^{(3)}=&-\frac{1}{2}\log\Big[1-3\langle \hat{N}_A\rangle_{T/3}+3\int \frac{d\omega}{2\pi}\chi\big(\omega,T/3)e^{-\frac{2\omega}{T}}\Big].
\end{split}\label{eq:central_Renyi23}
\end{equation}
This is our main result for Coulomb blockaded systems. It allows one to measure \Renyi moments of the NEE in thermal states at temperature $T$ from charge correlations measured at temperature $T/\ell$. The advantage of our approach is that such charge correlations have been repeatedly measured in mesoscopic systems using charge-sensing techniques (see, e.g.,~\cite{fujisawa2006bidirectional,gustavsson2006counting,gustavsson2006countingL,dial2013charge,wilen2021correlated}). In the supplementary material~\cite{SM} we explicitly relate $\chi(\omega,T)$ to the voltage-dependent noise~\cite{blanter2000shot} in a quantum-point-contact charge detector electrostatically coupled to the QD, as depicted in Fig.~\ref{fig:2ckresult}(a).
Eqs.~\eqref{eq:central_Renyi23} apply generally to Coulomb blockaded systems.
The simplest possible example is a spinless double dot system. The entanglement of $\log 2$ due to coherent hopping of an electron between the two dots can be directly computed from the density matrix, or equivalently from the charge-charge correlation via Eq.~\eqref{eq:central_Renyi23}, as demonstrated in the supplementary material~\cite{SM}. However, the power of the relations \eqref{eq:central_Renyi23} lies in the prospect of applying them to strongly correlated systems, such as magnetic impurities embedded in a continuum, which can lead to a multitude of Kondo effects and to a competition between spin correlations and screening in multiple-impurity systems (for a review see, e.g., \cite{grobis2007kondo}).
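For this double-dot example the claim can be verified numerically. The sketch below (hypothetical hopping amplitude and temperature, restricted to the single-electron sector) computes $\Delta S^{(2)}$ both directly from the dephased density matrix and via the charge-correlator route of Eqs.~\eqref{eq:central_Renyi23}, recovering $\log 2$ at low temperature:

```python
import numpy as np
from scipy.linalg import expm

# Spinless double dot, one electron hopping between the dots
# (hypothetical parameters); basis {|electron on dot A>, |electron on dot B>}.
t_hop, T = 1.0, 0.1
H = np.array([[0.0, -t_hop], [-t_hop, 0.0]])
N_A = np.diag([1.0, 0.0])              # charge of subsystem A (dot A)

rho = expm(-H / T)
rho /= np.trace(rho)
rho_N = np.diag(np.diag(rho))          # dephasing: N_A is diagonal in this basis

# Direct evaluation: Delta S^(2) = -log Tr[rho_N^2] + log Tr[rho^2]
dS2_direct = -np.log(np.trace(rho_N @ rho_N)) + np.log(np.trace(rho @ rho))

# Correlator route: averages taken at temperature T/2, imaginary time tau = 1/T
E, U = np.linalg.eigh(H)
NA = U.conj().T @ N_A @ U              # N_A in the energy eigenbasis
w = np.exp(-2 * E / T)
Z2 = w.sum()
n_avg = (w * np.diag(NA).real).sum() / Z2
corr = sum(w[i] * abs(NA[i, j])**2 * np.exp((E[i] - E[j]) / T)
           for i in range(2) for j in range(2)) / Z2
dS2_corr = -np.log(1 - 2 * n_avg + 2 * corr)
```

Both routes agree to numerical precision and approach $\log 2$ as $T \to 0$.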
Here we exemplify the usefulness of these relations for multi-channel Kondo systems.
\begin{figure*}[]
\includegraphics[width=\textwidth]{2CK_result.pdf}
\caption{(a) The charge-multi-channel-Kondo setup \cite{matveev1995coulomb,furusaki1995theory,iftikhar2015two,iftikhar2018tunable}, shown here for two channels (2CK). In each of the channels ($\alpha=1,2$) an electron can hop from the metallic dot to its respective lead with matrix elements $J_\alpha$ and
thus
change the dot occupancy by unity, $N_d=0 \leftrightarrow 1$, effectively flipping the ``impurity spin''. The gate detuning from charge degeneracy $\Delta E$ serves as an effective magnetic field. The asymmetry $T_-\propto(J_1-J_2)^2$ allows one to study the crossover between the two- and one-channel Kondo models. We added an additional quantum point contact coupled electrostatically to the dot, which allows measurements of the dot charge and its fluctuations. (b) The charge $N_d(T)$ and (c) its fluctuations $\chi(\omega)$, required for the evaluation of the \Renyi moments $\Delta S^{(\ell)}$ of the number entanglement entropy (NEE). $N_d(T)$ is shown for $\Delta E= T_K/2$ (yellow), $0$ (red), $- T_K$ (blue) with $T_-=0$; $\chi(\omega)$ is shown for $T=0.01T_K$ (solid lines) and $T=0.5 T_K$ (dashed line) for $T_-=\Delta E=0$ (2CK), $T_- =T_K$ (resonant level model, RLM) with $\Delta E=0$, and for $\Delta E=T_K/2$ with $T_-=0$. In the plot, $\delta$-function peaks~\cite{SM} are not drawn. (d) The \Renyi moment of the NEE $\Delta S^{(2)}$ obtained from $N_d$ and $\chi(\omega)$ using Eq.~\eqref{eq:central_Renyi23} (solid lines). The NEE $\Delta S^{(1)}$ is also shown for comparison (dashed lines). We compare the 2CK case, the RLM case, and the 2CK case with finite level detuning $\Delta E$. (e) Log-log plot of $\log(2)-\Delta S^{(\ell)}(T)$ for the 2CK and RLM cases, demonstrating distinct
power law behaviour; up (down) triangles refer to $\ell=1$ ($\ell=2$). (f) $\log(2)-\Delta S^{(\ell)}(T)$ for a small channel anisotropy scale $T_- / T_K = 0.1$, displaying a crossover from the 2CK behavior for $T>T_-$ (blue) to 1CK
behavior for $T<T_-$ (red).
}
\label{fig:2ckresult}
\end{figure*}
The multi-channel Kondo (MCK) model describes an impurity spin-1/2 interacting antiferromagnetically with $M$ spinful channels with a continuous density of states. For $M=1$ the impurity is perfectly screened at $T=0$, while for $M>1$ the ground state is a non-Fermi liquid, due to overscreening of the impurity by the $M$ channels~\cite{nozieres1980p}. In both cases, though,
the spin-screening cloud [see, e.g., \onlinecite{barzykin1998screening,affleck2009entanglement}] leads to a zero-temperature entanglement entropy of $\log2$. Recently, finite-temperature entanglement measures, such as the entanglement of formation (EoF) and negativity (${\cal N}$), have been applied to this model~\cite{lee2015macroscopic,kim2021universal}, demonstrating, for both quantities, a low-temperature scaling behavior of
\begin{equation}
{\rm{EoF}}, {\cal N} \simeq \log 2 - a_{M} T^{2\Delta_{M}},
\label{eq:EoF}
\end{equation}
where $a_{M}$ is an $M$-dependent numerical coefficient and $\Delta_M$ is the scaling dimension of the impurity spin operator in the $M$-channel problem. As determined by the conformal field theory solution~\cite{affleck1991critical}, $\Delta_M=2/(2+M)$ for $M \ge 2$, and for the single channel case $\Delta_1=1$. Unfortunately, these entanglement measures, while calculable, are practically impossible to measure (as they require measurements of all elements of the density matrix). Below we demonstrate that the entanglement measure NEE for the charge-Kondo model, and its \Renyi moments, which can be directly measured by applying Eqs.~(\ref{eq:central_Renyi23}), obey the same scaling relation as Eq.~(\ref{eq:EoF}), and thus allow direct measurement of the finite-temperature entanglement and the scaling dimensions of MCK systems.
Unlike the commonly observed one-channel Kondo effect, signatures of a two-channel spin-Kondo behavior in quantum dots have only been reported in \cite{potok2007observation,keller2015universal}. A major step forward has recently been taken in Refs.~\onlinecite{iftikhar2015two,iftikhar2018tunable}, which utilized a mapping between the two charge states of a metallic dot $N_d=0,1$ and spin $S_z=\pm 1/2$ \cite{matveev1995coulomb,furusaki1995theory} to demonstrate scaling and crossover between single and multi-channel $M=1,2,3$ Kondo effects [here the number of channels is controlled by the number of one-dimensional leads attached to the dot, see Fig.~\ref{fig:2ckresult}(a)]. Since, in this charge-Kondo effect, the role of the spin is played by the deviation of the occupation from half filling, charge correlations in the charge-Kondo system map onto spin correlations in the spin-Kondo system, allowing us to extract the finite-temperature NEE for non-Fermi-liquid MCK systems via Eqs.~(\ref{eq:central_Renyi23}). In this case the NEE reduces at $T=0$ to the full entanglement entropy.
To be specific, the charge-MCK Hamiltonian is given by \cite{matveev1995coulomb,furusaki1995theory}
\begin{align}
H=&\sum_{\alpha=1}^M J_{\alpha} \sum_{k,k'}(c_{\alpha\uparrow k}^\dagger c_{\alpha\downarrow k'} S_- +c_{\alpha\downarrow k}^\dagger c_{\alpha\uparrow k'} S_+)\nonumber\\
&+\sum_{\alpha,k,\sigma}k c_{\alpha\sigma k}^\dagger c_{\alpha\sigma k}+\Delta E S_z.
\label{eq:MCK}
\end{align}
The system is depicted in Fig.~\ref{fig:2ckresult}(a) for the case $M=2$. Here $J_{\alpha}$ is the tunneling between the $\alpha$-th lead and the dot, $c_{\alpha\sigma k}$ and $c^\dagger_{\alpha\sigma k}$ denote annihilation and creation operators, respectively, for electrons with momentum $k$ in the leads for $\sigma=\uparrow$ or in the dot for $\sigma=\downarrow$. Here the spin flip operator changes the charge state of the dot, $S_+|0\rangle= |1\rangle$, and $S_z=(|1\rangle \langle 1|-|0\rangle \langle 0|)/2=\hat{N}_d-1/2$, where $|0\rangle$ and $|1\rangle$ are the empty and occupied dot states, and $\hat{N}_d$ is the dot
number operator. $\Delta E$ is a gate voltage detuning away from the charge degeneracy point, which corresponds to a magnetic field in the conventional MCK Kondo model. The electrons are assumed to be spinless, e.g. due to a large magnetic field. The Hamiltonian (\ref{eq:MCK}) is identical to the spin-MCK Hamiltonian, where the role of the local spin is played by the occupation states of the metallic dot.
Consider now the NEE between the QD and the leads. Using Eqs.~(\ref{eq:central_Renyi23}), which express the NEE moments as correlation functions, we find for the second \Renyi moment of the NEE
\begin{equation}
\Delta S^{(2)}=-\log\Big[\frac{1}{2}+2\langle S_z(-i/T)S_z(0)\rangle_{T/2}\Big].
\label{eq:sscorreltions}
\end{equation}
The second term is a two-point correlation function of the impurity spin operator in imaginary time. Using the general conformal field theory
correlation function $\langle \mathcal{O}(-i\tau) \mathcal{O}(0) \rangle_T = \left(\frac{\pi T}{\sin \pi \tau T} \right)^{2 \Delta_{\mathcal{O}}}$ for an operator $\mathcal{O}$ with scaling dimension $\Delta_\mathcal{O}$, we obtain~\cite{SM}
\begin{equation}
\label{szszDeltaM}
\langle S_z(-i/T)S_z(0)\rangle_{T/2} \propto \left( \frac{\pi T/2}{\sin\big(\pi \cdot \tfrac{1}{T}\cdot \tfrac{T}{2}\big)} \right)^{2\Delta_{M}} = \left(\frac{\pi T}{2}\right)^{2\Delta_{M}} \sim T^{2\Delta_{M}}.
\end{equation}
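Inserting this scaling form into Eq.~(\ref{eq:sscorreltions}) and expanding the logarithm at low temperature (writing the proportionality factor as a non-universal constant $c$) makes the connection to Eq.~(\ref{eq:EoF}) explicit:
\begin{equation*}
\Delta S^{(2)}=-\log\Big[\frac{1}{2}+2c\,T^{2\Delta_{M}}\Big]
=\log 2-\log\big[1+4c\,T^{2\Delta_{M}}\big]
\simeq \log 2-4c\,T^{2\Delta_{M}}.
\end{equation*}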
Furthermore, we find that at low temperature the NEE $\Delta S^{(1)}$ is also dominated by the same two-point correlator~\cite{SM}.
Accordingly, the NEE and its \Renyi moments obey exactly the same scaling behavior as the EoF and ${\mathcal{N}}$ in Eq.~(\ref{eq:EoF}). For example, for the 1CK case we obtain a quadratic temperature dependence, $\Delta S^{(\ell)}_{1CK} \simeq \log 2 - a_{1\ell} T^{2}$, whereas in the 2CK case the dependence is linear, $\Delta S^{(\ell)}_{2CK} \simeq \log 2 - a_{2\ell} T$, where the non-universal coefficients $a_{M\ell}$ now depend also on the \Renyi index $\ell$. As pointed out in Refs.~\cite{lee2015macroscopic,kim2021universal}, this linear behavior, corresponding to $\Delta_{2}=1/2$, is an indication of the existence of a Majorana fermion in the ground state.
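Experimentally, the scaling dimension would be read off from the slope of the log-log plot in Fig.~\ref{fig:2ckresult}(e). A minimal sketch of that analysis on synthetic 2CK-like data (the prefactor $a$ and the temperature grid are hypothetical):

```python
import numpy as np

# Synthetic low-temperature data mimicking the 2CK prediction
# Delta S = log 2 - a * T**(2*Delta), with 2*Delta_2 = 1 (hypothetical a).
a, two_delta = 0.3, 1.0
T = np.logspace(-3, -1, 20)            # temperatures in units of T_K
dS = np.log(2) - a * T**two_delta

# Slope of log(log 2 - Delta S) vs log T gives the exponent 2*Delta_M
slope, intercept = np.polyfit(np.log(T), np.log(np.log(2) - dS), 1)
```

For the 1CK (or RLM) case the same fit would return a slope of 2, distinguishing Fermi-liquid from non-Fermi-liquid behavior.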
The temperature dependence obtained in Eq.~(\ref{szszDeltaM}) is based on a general scaling argument. In order to make a more quantitative comparison with experiments for the case of the 1CK and 2CK models, we can use the exact solution of the corresponding low-temperature Hamiltonian. This allows us to obtain results for the temperature behavior along the crossover between 1CK and 2CK behaviors. Also, the same model allows us to provide quantitative results for the simpler resonant level model.
After a renormalization process at low temperatures,
the Hamiltonian for the $M=2$ MCK model can be mapped into a Majorana resonant level model~\cite{emery1992mapping,zarand2000analytical}
\begin{equation}
\begin{split}
H=&\sum_k k\gamma_{x,-k}\gamma_{x,k}+k\eta_{x,-k}\eta_{x,k}\\
&+i\sqrt{\frac{2T_K}{L}}\sum_k \gamma_{x,k} \eta_d +i\sqrt{\frac{2T_-}{L}}\sum_k \eta_{x,k} \gamma_d+i\Delta E\gamma_d\eta_d.
\end{split}
\end{equation}
Here $T_K$ %
is the Kondo energy scale and $T_-\propto (J_1-J_2)^2$~\cite{mitchell2016universality}
is an energy scale associated with channel asymmetry. Here $\gamma_d$ and $\eta_d$ are local Majorana zero modes, and $\gamma_{x,k}$, $\eta_{x,k}$ represent the mode expansions of the Majorana fields. The charge occupation operator of the metallic dot is written in terms of the local Majorana operators as $\hat{N}_d=\frac{1}{2}+ i\gamma_d\eta_d \equiv \hat{N}_A$.
This model describes different regimes. The 2CK state corresponds to $\Delta E=T_-=0$. As long as ${\rm{max}} \{ \Delta E, T_- \} \ll T_K$, this model describes the vicinity of the 2CK fixed point.
As $T_-$ increases, it faithfully describes the crossover~\cite{pustilnik2004quantum,mitchell2012universal} from the 2CK to the 1CK fixed point.
Additionally, for the special value $T_-=T_K$, this model maps into the non-interacting resonant level model (RLM), which can be realized in a single level spinless QD, with on site energy $\Delta E$ and width $\Gamma \equiv T_K$.
This model can be solved exactly \cite{emery1992mapping}. The results for the occupation number $N_d$ and $\chi(\omega)$, which can be measured using the charge detector, were obtained by solving numerically the resulting integral expressions~\cite{SM}
and are shown in Fig.~\ref{fig:2ckresult}(b,c) for selected model parameters.
For $\Delta E=0$ (either 2CK or RLM), $N_d=1/2$ by symmetry, and the NEE \Renyi moments $\Delta S^{(\ell)}$ can be extracted using Eq.~(\ref{eq:central_Renyi23}) solely from the charge noise $\chi(\omega)$. We plot in Fig.~\ref{fig:2ckresult}(d) the resulting $\Delta S^{(2)}$ and, for comparison~\cite{SM}, the NEE ($=\Delta S^{(1)}$), which is an entanglement monotone~\cite{ma2021symmetric} and shows very similar behavior. Specifically, the resulting $\Delta S^{(\ell)}$ for the 2CK state confirms our field-theory scaling results, as seen in the log-log plot in Fig.~\ref{fig:2ckresult}(e). For the RLM the temperature scaling is quadratic.
In Fig.~\ref{fig:2ckresult}(f) we show the crossover from 2CK to 1CK behavior which can be observed for a small channel anisotropy $T_-$. While $\lim_{T \to 0} \Delta S^{(\ell)}$ remains $\log 2$ for any $T_-$ (since by symmetry $N_d=1/2$), the temperature dependence becomes quadratic, $\log 2-\Delta S^{(\ell)}\propto T^2$, for $T < T_-$.
For a finite gate-voltage detuning from the fixed point ($\Delta E\ne 0$) we have $\langle N_d\rangle\ne 1/2$, as seen in Fig.~\ref{fig:2ckresult}(b). As a result, $\lim_{T \to 0} \Delta S^{(\ell)}$ decreases below $\log 2$, as shown in Fig.~\ref{fig:2ckresult}(d). Since $\Delta E$ is a relevant perturbation, we observe a non-monotonic temperature dependence for some range of values of $\Delta E$, as shown in Fig.~\ref{fig:2ckresult}(d) (finite $\Delta E$).
To conclude, we have proposed in this paper an experimental procedure to measure entanglement at finite temperatures. In particular, we have demonstrated that an entanglement measure, the number entanglement entropy (NEE), can be obtained solely from the subsystem's charge distribution and charge correlation functions.
We formulated exact thermodynamic relations between moments of the NEE and subsystem charge correlation functions. A similar observation was made for the quantum Fisher information, which witnesses multipartite entanglement~\cite{hauke2016measuring}. In our approach, the $\ell$-th moment of the NEE of a thermal state at temperature $T$ is related to a charge correlation function of the same Hamiltonian at temperature $ T/\ell$.
The setup we propose for measurement of the NEE in quantum dot systems has already been utilized to measure the charge noise, thus we expect that it can be readily extended to measure finite temperature entanglement measures in various systems, including the frustrated multi-channel Kondo system.
\paragraph*{Acknowledgments.} We acknowledge support from the European Research
Council (ERC) under the European Union's Horizon
2020 research and innovation programme under grant
agreement No. 951541, ARO (W911NF-20-1-0013),
the US-Israel Binational Science Foundation (Grant No.
2016255) and the Israel Science Foundation, grant number 154/19.
\section{Positivity of $\Delta S^{(\ell)}$}
\label{se:appendixskmonotone}
In this section we show that
\begin{equation}
\label{eq:positivity}
-\log(\text{Tr}[\rho_{\hat{N}_A}^\ell]) \ge -\log(\text{Tr}[\rho^\ell]).
\end{equation}
Since the $\ell$-th \Renyi entropy is Schur convex~\cite{bengtsson2006geometry}, this inequality is guaranteed if the eigenvalues of $\rho_{\hat{N}_A}$, denoted $\{p_i\}$
in nonincreasing order,
are majorized by those of $\rho$ denoted $\{\lambda_i\}$, i.e. if
\begin{equation}
\vec{p}\prec \vec{\lambda}\label{eq:majorize0}.
\end{equation}
Thus we just need to prove this majorization condition. By construction $\rho_{\hat{N}_A}$ is block diagonal in $N_A$. This means that we can diagonalize $\rho_{\hat{N}_A}$ by a unitary operator $U_{\hat{N}_A}$ which itself is block diagonal in $N_A$,
\begin{equation}
U_{\hat{N}_A}\rho_{\hat{N}_A} U_{\hat{N}_A}^\dagger={\rm{diag}}(\{p_i \}).
\end{equation}
This implies that the diagonal elements of the density matrix $U_{\hat{N}_A}\rho U_{\hat{N}_A}^\dagger$ are the same as those of $U_{\hat{N}_A}\rho_{\hat{N}_A} U_{\hat{N}_A}^\dagger$, i.e.~they are given by $\{p_i \}$. Applying the Schur-Horn theorem~\cite{bengtsson2006geometry} to the Hermitian matrix $U_{\hat{N}_A}\rho U_{\hat{N}_A}^\dagger$, we conclude that its diagonal elements $\{p_i \}$ are majorized by its eigenvalues $\{\lambda_i \}$, i.e.~Eq.~(\ref{eq:majorize0}) is satisfied, leading to Eq.~(\ref{eq:positivity}). Therefore $\Delta S^{(\ell)}\ge 0$, with equality when $\rho=\rho_{\hat{N}_A}$.
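The argument can be checked numerically on a random state; the sketch below (arbitrary dimension and an illustrative two-sector charge split) verifies both the majorization condition \eqref{eq:majorize0} and the inequality \eqref{eq:positivity} for $\ell=2,3$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

# Random full-rank density matrix rho
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Block-diagonal dephasing: two charge sectors of N_A (illustrative 3+3 split)
P0 = np.diag([1., 1., 1., 0., 0., 0.])
P1 = np.eye(d) - P0
rho_N = P0 @ rho @ P0 + P1 @ rho @ P1

# Renyi inequality: -log Tr[rho_N^ell] >= -log Tr[rho^ell]
gaps = []
for ell in (2, 3):
    lhs = -np.log(np.trace(np.linalg.matrix_power(rho_N, ell)).real)
    rhs = -np.log(np.trace(np.linalg.matrix_power(rho, ell)).real)
    gaps.append(lhs - rhs)

# Majorization: partial sums of rho_N's sorted spectrum never exceed rho's
p = np.sort(np.linalg.eigvalsh(rho_N))[::-1]
lam = np.sort(np.linalg.eigvalsh(rho))[::-1]
major_ok = np.all(np.cumsum(p) <= np.cumsum(lam) + 1e-12)
```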
\section{Derivation of Eq.~(7) in the main text}
\label{se:twochargestatecase}
Consider a subsystem $A$ with only two charge states, $N_A=0,1$.
In this case the projection operators become $\Pi_{N_A=1}=\hat{N}_A$ and $\Pi_{N_A=0}=1-\hat{N}_A$. We focus on the lowest \Renyi moments, $\ell=2,3$. The traces of $\rho_{\hat{N}_A}^2$ and $\rho_{\hat{N}_A}^3$ are given by
\begin{align}
\text{Tr}[\rho_{\hat{N}_A}^2]=&\text{Tr}[\hat{N}_A\rho\hat{N}_A\rho]+\text{Tr}[(1-\hat{N}_A)\rho(1-\hat{N}_A)\rho]\nonumber\\
=&\frac{1}{Z(T)^2}\Big[\sum_{i}e^{-2E_{i}/T}-2\sum_{i}e^{-2E_i/T}\langle i|\hat{N}_A|i\rangle +2\sum_{ij}e^{-E_{i}/T-E_{j}/T}|\langle {i}|\hat{N}_A|{j}\rangle|^2 \Big], \label{eq:rhom2}
\end{align}
and
\begin{align}
\text{Tr}[\rho_{\hat{N}_A}^3]=&\frac{1}{Z(T)^3}\Big[\sum_{i}e^{-3E_{i}/T}-3\sum_{i}e^{-3E_i/T}\langle i|\hat{N}_A|i\rangle +3\sum_{ij}e^{-E_{i}/T-2E_{j}/T}|\langle {i}|\hat{N}_A|{j}\rangle|^2 \Big],\label{eq:rhom3}
\end{align}
respectively. The first terms in Eqs.~\eqref{eq:rhom2} and \eqref{eq:rhom3} equal $Z(T/\ell)$ with $\ell=2,3$, respectively. Likewise, the second terms involve the occupation number at temperature $T/\ell$,
\begin{align}
\sum_{i}\langle {i}|\hat{N}_A|{i}\rangle e^{-\ell E_{i}/T}=Z(T/\ell)\langle \hat{N}_A\rangle_{T/\ell}.\label{eq:num_kth}
\end{align}
The third terms can be rewritten as
\begin{align}
&\sum_{ij}e^{-E_i/T}e^{-(\ell-1)E_j/T}|\langle i|\hat{N}_A|j\rangle|^2 =\sum_{ij}e^{-\ell E_i/T} e^{(\ell-1)(E_i-E_j)/T}|\langle i|\hat{N}_A|j\rangle|^2. \nonumber
\end{align}
This resembles an imaginary time charge correlator. To see this we define the imaginary time Green's function
\begin{align}
\langle \hat{N}_A(-i\tau)\hat{N}_A(0)\rangle_T =\sum_{ij}\frac{e^{-E_{i}/T}}{Z(T)}|\langle {i}|\hat{N}_A|{j}\rangle|^2e^{(E_{i}-E_{j})\tau},\label{eq:nncor_imag}
\end{align}
and obtain
\begin{equation}
\begin{split}
\text{Tr}[\rho_{\hat{N}_A}^2]=\frac{Z\big(\frac{T}{2}\big)}{Z(T)^2}\big[1-2\langle \hat{N}_A\rangle_{T/2}+2\langle \hat{N}_A(-i/T)\hat{N}_A(0)\rangle_{T/2}\big],\\
\text{Tr}[\rho_{\hat{N}_A}^3]=\frac{Z\big(\frac{T}{3}\big)}{Z(T)^3}\big[1-3\langle \hat{N}_A\rangle_{T/3}+3\langle \hat{N}_A(-2i/T)\hat{N}_A(0)\rangle_{T/3}\big].
\end{split}
\end{equation}
Therefore, the second and third \Renyi moments of the NEE can be written in terms of imaginary-time charge correlators as
\begin{equation}
\begin{split}
\Delta S^{(2)}(T)=&-\log\big[1-2\langle \hat{N}_A\rangle_{T/2}+2\langle \hat{N}_A(-i/T)\hat{N}_A(0)\rangle_{T/2}\big]
,\label{eq:main_result}\\
\Delta S^{(3)}(T)=&-\frac{1}{2}\log\big[1-3\langle \hat{N}_A\rangle_{T/3}+3\langle \hat{N}_A(-2i/T)\hat{N}_A(0)\rangle_{T/3}\big].
\end{split}
\end{equation}
At thermal equilibrium the imaginary time correlator can be related with the appropriate real time correlator. To see this, we write the real time correlator using the Lehmann representation,
\begin{align}
&\chi(\omega,T)=\int dt e^{i\omega t}\langle \hat{N}_A(t)\hat{N}_A(0)\rangle_{T}=\frac{1}{Z(T)}\sum_{ij}|\langle {i}|\hat{N}_A|{j}\rangle|^2 e^{- E_{i}/T}2\pi \delta(\omega+E_{i}-E_{j}).\label{eq:nncor_freqsym_Lehmann}
\end{align}
Comparing with the imaginary time Green's function in Eq.~\eqref{eq:nncor_imag}, we obtain the relation
\begin{align}
\langle \hat{N}_A(-i\tau)\hat{N}_A(0)\rangle_T=\int \frac{d\omega}{2\pi} \chi(\omega,T) e^{-\omega\tau}.\label{eq:nncor_RI_rel}
\end{align}
As a result, Eq.~(\ref{eq:main_result}) becomes Eq.~(7) in the main text.
\section{Charge-charge correlator from shot noise via a QPC}
In this section we provide a
simple model in which $\chi(\omega)$ can be related to the voltage-dependent noise in the
charge detector quantum point contact (QPC).
\begin{figure}[]
\includegraphics[width=0.5\columnwidth]{fig_setup_exp}
\caption{A QPC interacts electrostatically with subsystem $A$. The backscattering strength between modes $\psi_L$ and $\psi_R$ in the QPC depends on $N_A$.
}\label{fig:expsetup}
\end{figure}
Consider subsystem $A$ to describe a QD whose charge is monitored using a QPC, see Fig.~\ref{fig:expsetup}.
We assume the QPC consists of one left moving and one right moving noninteracting fermion mode, $\psi_L$ and $\psi_R$, undergoing a weak backscattering (see dashed line in Fig.~\ref{fig:expsetup}) at the QPC. The backscattering amplitude consists of two terms, one which depends on the charge of QD $A$ with coefficient $\lambda$, and one which is a constant $\lambda_0$,
\begin{equation}
\begin{split}
H_{T}=& \lambda \hat{N}_A \psi^\dagger_L(t)\psi_R(t)+\text{h.c.},\\
H_{T_0}=&\lambda_0 \psi^\dagger_L(t)\psi_R(t)+\text{h.c.}.
\end{split}
\end{equation}
We set the voltage bias of the left moving mode to be $V$ and ground the right moving mode.
The current operators of the QPC are $\hat{I}_{T_{(0)}}=i[N_L-N_R,H_{T_{(0)}}]$, where $N_{L(R)}$ is the particle number of the left (right) mode.
We now demonstrate how $\chi(\omega)$ can be obtained. Setting $\lambda_0=0$ with finite $\lambda\ne 0$, we consider the current $I(V,T)$ and the symmetrized noise $S(V,T)=\int dt [\frac{1}{2}\langle \{\hat{I}_T(t),\hat{I}_T(0)\}\rangle -\langle \hat{I}_T\rangle^2 ]$.
After a derivation presented in
the next subsection, we obtain
\begin{align}
\chi(\omega,T)=\int dy \int \frac{dV}{2\pi}&\partial_V^2\Big[\frac{S(V,T)+e I(V,T)}{2\lambda^2 e^2}\Big]\frac{\sinh^2(\pi Ty)}{\pi T^2 y^2}e^{i(V-\omega)y}.\label{eq:numcor_rel}
\end{align}
This relation allows one to extract the charge correlator from the finite-voltage noise and current at the QPC.
\subsection{Derivation of Eq.~(\ref{eq:numcor_rel})}
\label{se:appendixnoisechi}
In this subsection, we derive Eq.~(\ref{eq:numcor_rel})
using perturbation theory in $\lambda$. First we calculate the noise up to the lowest order of $\lambda$,
\begin{align}
S(V,T)=\frac{\lambda^2e^2}{2}\int_{-\infty}^{\infty}dt&\Big[\langle \psi_L^\dagger(t)\psi_R(t)\psi^\dagger_R(0)\psi_L(0)\rangle\langle \hat{N}_A(t)\hat{N}_A(0)\rangle +\langle\psi_R^\dagger(t)\psi_L(t)\psi_L^\dagger(0)\psi_R(0)\rangle \langle \hat{N}_A(t)\hat{N}_A(0)\rangle \nonumber\\
&+\langle \psi_L^\dagger(0)\psi_R(0)\psi^\dagger_R(t)\psi_L(t)\rangle\langle \hat{N}_A(0)\hat{N}_A(t)\rangle +\langle \psi_R^\dagger(0)\psi_L(0)\psi^\dagger_L(t)\psi_R(t)\rangle\langle \hat{N}_A(0)\hat{N}_A(t)\rangle \Big].\label{eq:supp_noise1}
\end{align}
Next, using $\langle \hat{N}_A(t)\hat{N}_A(0)\rangle=\langle \hat{N}_A(0)\hat{N}_A(-t)\rangle$, and the correlators for $\psi_L$ and $\psi_R$,
\begin{align}
\langle \psi_L^\dagger(t) \psi_L(0)\rangle=\frac{e^{iVt}}{\frac{\beta}{\pi}\sin\big(\frac{\pi}{\beta}(\tau_0+it)\big)},\quad \langle \psi_R^\dagger(t)\psi_R(0)\rangle=\frac{1}{\frac{\beta}{\pi}\sin\big(\frac{\pi}{\beta}(\tau_0+it)\big)},
\end{align}
where $\tau_0\equiv a_0/v_F$ with infinitesimal length $a_0$, we obtain
\begin{align}
S(V,T)=&\lambda^2 e^2\int_{-\infty}^{\infty} dt \frac{1}{\Big[\frac{\beta}{\pi}\sin(\frac{\pi}{\beta}(\tau_0+it))\Big]^2}(e^{iVt}+e^{-iVt})\langle \hat{N}_A(t)\hat{N}_A(0)\rangle. \label{eq:noise_supp}
\end{align}
Repeating the same algebra for the current, we obtain
\begin{align}
I(V,T)=\lambda^2 e\int_{-\infty}^0 dt&\Big[\langle \psi_L^\dagger(t) \psi_R(t)\psi^\dagger_R(0)\psi_L(0)\rangle\langle N_A(t)N_A(0)\rangle -\langle \psi_R^\dagger(t)\psi_L(t)\psi_L^\dagger(0)\psi_R(0)\rangle\langle N_A(t)N_A(0)\rangle \nonumber\\
&-\langle \psi_R^\dagger(0)\psi_L(0)\psi_L^\dagger(t)\psi_R(t)\rangle\langle N_A(0)N_A(t)\rangle +\langle \psi_L^\dagger(0)\psi_R(0)\psi_R^\dagger(t)\psi_L(t)\rangle\langle N_A(0)N_A(t)\rangle \Big]\nonumber\\
=&\lambda^2 e\int_{-\infty}^{\infty}dt \frac{1}{\Big[\frac{\beta}{\pi}\sin(\frac{\pi}{\beta}(\tau_0+it))\Big]^2}(e^{iVt}-e^{-iVt})\langle N_A(t)N_A(0)\rangle.\label{eq:cur_supp}
\end{align}
Combining the two observables,
\begin{align*}
\frac{S(V,T)+e I(V,T)}{2\lambda^2 e^2}=\int_{-\infty}^\infty dt \frac{e^{iV t}}{\Big[\frac{\beta}{\pi}\sin(\frac{\pi}{\beta}(\tau_0+it))\Big]^2}\langle N_A(t)N_A(0)\rangle.
\end{align*}
Fourier transforming, we find
\begin{align}
\frac{S(V,T)+eI(V,T)}{2\lambda^2 e^2}&=\int\frac{d\omega}{2\pi}\chi(\omega,T)(\omega-V)\Big[1+\coth(\frac{\omega-V}{2T})\Big].\label{eq:supp_numcor_exp1}
\end{align}
After taking the second derivative with respect to $V$, we have
\begin{equation}
\partial_V^2\Big[\frac{S(V,T)+eI(V,T)}{2\lambda^2 e^2}\Big] =\int\frac{d\omega}{2\pi}\chi(\omega,T)\frac{\frac{\omega-V}{2T}\coth\Big(\frac{\omega-V}{2T}\Big)-1}{T\sinh^2(\frac{\omega-V}{2T})}.\label{eq:numcor_exp}
\end{equation}
To extract $\chi(\omega,T)$, we Fourier transform the quantity,
\begin{align}
&\int \frac{dV}{2\pi} \partial_V^2\Big[\frac{S(V,T)+eI(V,T)}{2\lambda^2 e^2}\Big] e^{iV y}=\frac{\pi T^2 y^2}{\sinh^2(\pi T y)}\int \frac{d\omega}{2\pi} \chi(\omega,T)e^{i\omega y}.
\end{align}
Finally, to obtain Eq.~(\ref{eq:numcor_rel})
we use the integral
\begin{align}
\int_{-\infty}^\infty \frac{dV}{2\pi} \frac{\frac{V}{2T}\coth\Big(\frac{V}{2T}\Big)-1}{T\sinh^2(\frac{V}{2T})} e^{i V y}=\frac{\pi T^2 y^2}{\sinh^2(\pi T y)}.
\end{align}
\begin{figure}[t]
\includegraphics[width=0.6\columnwidth]{DD_result.pdf}
\caption{(a) Double dot setup, see Eq.~\eqref{eq:DDham}. (b) $\langle \hat{N}_A\rangle_{T}$ versus $T$ for $t_0=1$ and $\delta=-2,-1,0,1,2$ from uppermost to lowermost. (c) $\chi(\omega,T)$ for $T=0.1$, $t_0=1$ and $\delta=0$. The two peaks correspond to $\omega=0$ and $2\Delta$ (an additional peak at $\omega=-2\Delta$ is suppressed for $T \ll \Delta$).
(d) $\Delta S^{(2)}$ (solid lines) and $\Delta S^{(3)}$ (dashed lines) versus $T$ for $\delta=0,1,2$ from uppermost to lowermost with $t_0=1$. }\label{fig:ddresult}
\end{figure}
\section{Simplest example: Double dot}
In this section we exemplify Eq.~(7) of the main text for a double quantum dot. We compute the subsystem charge and its correlation, from which the \Renyi moments of the NEE are obtained.
Consider two QDs denoted $A$ and $B$ with Hamiltonian
\begin{align}
H=t_0 (c_A^\dagger c_B+c_B^\dagger c_A)+\delta(c_A^\dagger c_A-c_B^\dagger c_B),\label{eq:DDham}
\end{align}
where $c_{A(B)}^\dagger$ creates a spinless electron in QD $A(B)$, $t_0$ is the hopping amplitude, $2\delta$ is the asymmetry of the QDs, and we assume the double dot contains one electron, see Fig.~\ref{fig:ddresult}(a).
At zero temperature, the NEE moments $\Delta S^{(2)}$ and $\Delta S^{(3)}$ approach $\log(2)$ in the symmetric case $\delta\to 0$, where the wave function becomes the Bell state $(|10\rangle + |01\rangle)/\sqrt{2}$, with $1(0)$ denoting an occupied (empty) QD. They remain finite at any $T>0$ and decay as~\cite{ma2021symmetric} $1/T^2$, as shown in Fig.~\ref{fig:ddresult}(d) for both $\Delta S^{(2)}$ and $\Delta S^{(3)}$.
We now calculate separately the ingredients of Eq.~(7) of the main text. The average and two point correlator of $N_A$ are given by
\begin{equation}
\langle \hat{N}_A\rangle_{T}=a^2 f(-\Delta/T)+b^2 f(\Delta/T),\label{eq:DDnum}
\end{equation}
and
\begin{equation}
\frac{\chi(\omega,T)}{2\pi}=a^2 b^2\Big(\delta(\omega+2\Delta) f(\frac{\Delta}{T})^2+\delta(\omega-2\Delta) f(-\frac{\Delta}{T})^2\Big)
+\delta(\omega)\Big[ a^4 f(-\frac{\Delta}{T})+b^4 f(\frac{\Delta}{T})+ 2a^2b^2 f(\frac{\Delta}{T})f(-\frac{\Delta}{T})\Big] ,\label{eq:DDnoise}
\end{equation}
respectively, where $a=\sin(\alpha/2)$, $b=\cos(\alpha/2)$ with $\alpha=\tan^{-1}(t_0/\delta)$, $\Delta=\sqrt{t_0^2+\delta^2}$, and $f(x)=1/(1+e^x)$. They are plotted in Fig.~\ref{fig:ddresult}(b) and (c). In the symmetric case $\langle N_A \rangle = 1/2$, and the information about the NEE is encoded in the two-point charge correlator $\chi(\omega)$, which in this system features $\delta$-function peaks at the eigenenergy differences $0,\pm 2 \Delta$ with appropriate matrix elements.
Substituting Eqs.~(\ref{eq:DDnum}) and~(\ref{eq:DDnoise}) into Eq.~(7) of the main text, one obtains
\begin{eqnarray}
\Delta S^{(2)}&=&-\log\Big[\frac{1+\cosh(\frac{2\Delta}{T})(a^4+b^4)+2a^2b^2}{2\cosh^2(\frac{\Delta}{T})}\Big], \nonumber \\
\Delta S^{(3)}&=&-\frac{1}{2}\log\Big[\frac{1}{2\cosh^2(3\Delta/2T)}\Big(1+(a^6+b^6)\cosh(\frac{3\Delta}{T})+3(a^4 b^2+a^2b^4)\cosh(\frac{\Delta}{T})\Big)\Big],\label{eq:DD_Renyi3}
\end{eqnarray}
which can be derived directly from the density matrix.
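Indeed, these closed forms can be reproduced by brute force in the four-dimensional Fock space of the two modes; the sketch below (with arbitrary illustrative parameters) builds the many-body Hamiltonian of Eq.~\eqref{eq:DDham}, the measured density matrix, and compares the resulting \Renyi{} moments against Eq.~(\ref{eq:DD_Renyi3}).

```python
import numpy as np

# Illustrative parameters
t0, delta, T = 1.0, 0.7, 0.5
Delta = np.hypot(t0, delta)
alpha = np.arctan2(t0, delta)
a, b = np.sin(alpha / 2), np.cos(alpha / 2)

# Fock space of the two modes, basis |n_A n_B> with A as the first factor
c = np.array([[0.0, 1.0], [0.0, 0.0]])        # single-mode annihilation
sz = np.diag([1.0, -1.0])                     # Jordan-Wigner string
cA = np.kron(c, np.eye(2))
cB = np.kron(sz, c)
H = t0 * (cA.T @ cB + cB.T @ cA) + delta * (cA.T @ cA - cB.T @ cB)

E, V = np.linalg.eigh(H)
rho = (V * np.exp(-E / T)) @ V.T
rho /= np.trace(rho)

NA = cA.T @ cA                                # projector on an occupied dot A
rhoN = NA @ rho @ NA + (np.eye(4) - NA) @ rho @ (np.eye(4) - NA)

tr = lambda m, n: np.trace(np.linalg.matrix_power(m, n))
S2 = np.log(tr(rho, 2)) - np.log(tr(rhoN, 2))
S3 = -0.5 * (np.log(tr(rhoN, 3)) - np.log(tr(rho, 3)))

# Closed-form expressions of Eq. (DD_Renyi3)
S2_formula = -np.log((1 + np.cosh(2 * Delta / T) * (a**4 + b**4)
                      + 2 * a**2 * b**2) / (2 * np.cosh(Delta / T) ** 2))
S3_formula = -0.5 * np.log((1 + (a**6 + b**6) * np.cosh(3 * Delta / T)
                            + 3 * (a**4 * b**2 + a**2 * b**4) * np.cosh(Delta / T))
                           / (2 * np.cosh(3 * Delta / (2 * T)) ** 2))
```

Note that the check is performed in the full Fock space (empty, singly, and doubly occupied states), consistent with the Fermi functions appearing in Eqs.~(\ref{eq:DDnum}) and~(\ref{eq:DDnoise}).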
\subsection{Derivation of Eq.~(\ref{eq:DDnoise})}
First we diagonalize the double dot Hamiltonian as
\begin{align}
H=\Delta c_1^\dagger c_1-\Delta c_0^\dagger c_0,
\end{align}
where $c_A=a c_0+b c_1$, $c_B=bc_0-ac_1$. Then the charge correlator becomes
\begin{align}
\langle c_A^\dagger c_A(t) c_A^\dagger c_A(0)\rangle=&a^4\langle c_0^\dagger c_0(t)c_0^\dagger c_0(0)\rangle+b^4\langle c_1^\dagger c_1(t)c_1^\dagger c_1(0)\rangle\nonumber\\
&+a^2 b^2\big(\langle c_0^\dagger c_0(t) c_1^\dagger c_1(0)\rangle+\langle c_1^\dagger c_1(t) c_0^\dagger c_0(0)\rangle +\langle c_1^\dagger c_0(t) c_0^\dagger c_1(0)\rangle+\langle c_0^\dagger c_1(t) c_1^\dagger c_0(0)\rangle\big)\nonumber\\
=&\frac{a^4}{(1+e^{-\Delta/T})}+\frac{b^4}{1+e^{\Delta/T}}+\frac{2a^2 b^2}{(1+e^{\Delta/T})(1+e^{-\Delta/T})}+\frac{a^2 b^2 e^{2i\Delta t}}{(1+e^{\Delta/T})^2}+\frac{a^2 b^2 e^{-2i\Delta t}}{(1+e^{-\Delta/T})^2}.\label{eq:supp_dd}
\end{align}
Eq.~(\ref{eq:DDnoise}) follows by Fourier transforming Eq.~\eqref{eq:supp_dd}.
\section{Derivation of 2-channel Kondo results}
In this section we detail the derivation of results for the number operator, charge noise, as well as \Renyi entropies for the 2CK model based on the effective non-interacting Majorana fermion theory. We also explain our novel numerical method to obtain the NEE $\Delta S$, which was not obtained in previous calculations that considered exclusively the \Renyi moments of the NEE~\cite{ma2021symmetric}.
\subsection{\Renyi entropies}
First we rewrite the Hamiltonian~\cite{schiller1998toulouse}
\begin{align}
H=\sum_k \frac{k}{2}\gamma_{-k}\gamma_k +\frac{k}{2}\eta_{-k}\eta_k +i\sqrt{\frac{2T_K}{L}}\sum_{k}\gamma_{k}\eta_d +i\sqrt{\frac{2T_-}{L}}\sum_k \eta_k\gamma_d +ih\gamma_d\eta_d.
\end{align}
The Majorana mode operators $\gamma_k$ and $\eta_k$ satisfy $\{\gamma_k,\gamma_q\}=\delta_{q,-k}$, $\{\eta_k,\eta_q\}=\delta_{q,-k}$, and $\{\gamma_k,\eta_q\}=0$, and the local Majorana zero modes satisfy $\{\gamma_d,\gamma_d\}=\{\eta_d,\eta_d\}=1$.
The number operator is
\begin{align}
\hat{N}_d=c_d^\dagger c_d=i\gamma_d\eta_d+1/2.
\end{align}
Since the Hamiltonian is non-interacting, we use Wick's theorem to obtain
\begin{align}
\chi(\omega,T)=\int dt e^{i\omega t}\langle \hat{N}_d(t)\hat{N}_d(0)\rangle=&-2\pi\delta(\omega)\Big[\int \frac{d\omega_1}{2\pi}\langle \gamma_d\eta_d\rangle(\omega_1)\Big]^2+2\pi \delta(\omega) i\int \frac{d\omega_1}{2\pi}\langle \gamma_d\eta_d\rangle(\omega_1)+\frac{\pi}{2}\delta(\omega)\nonumber\\
&+\int\frac{d\omega_1}{2\pi}\langle\gamma_d\gamma_d\rangle(\omega_1)\langle\eta_d\eta_d\rangle(\omega-\omega_1)-\int \frac{d\omega_1}{2\pi}\langle \gamma_d\eta_d\rangle(\omega_1)\langle \eta_d\gamma_d\rangle(\omega-\omega_1).
\end{align}
The Green's functions here are
\begin{equation}
\begin{split}
\langle \gamma_d\gamma_d\rangle(\omega)=\frac{2T_-\omega^2+2 T_- T_K^2+2T_K h^2}{(\omega^2-T_K T_- -h^2)^2+(T_K+T_-)^2\omega^2}\frac{1}{1+e^{-\beta\omega}},\\
\langle \eta_d\eta_d\rangle(\omega)=\frac{2 T_K\omega^2+2 T_K T_-^2+2T_- h^2}{(\omega^2-T_K T_- -h^2)^2+(T_K+T_-)^2\omega^2}\frac{1}{1+e^{-\beta \omega}},\\
\langle \eta_d\gamma_d\rangle(\omega)=\frac{2 h(T_K+T_-)\omega}{(\omega^2-T_K T_--h^2)^2+(T_K+T_-)^2\omega^2}\frac{1}{1+e^{-\beta \omega}},\\
\langle \gamma_d\eta_d\rangle(\omega)=-\frac{2 h(T_K+T_-)\omega}{(\omega^2-T_K T_- -h^2)^2+(T_K+T_-)^2\omega^2}\frac{1}{1+e^{\beta \omega}}.
\end{split}
\end{equation}
Using these results we obtain $\chi(\omega)$ in Fig.~1(c) in the main text. Similarly the results in Fig.~1(d-f) in the main text are obtained using Eq.~(7) in the main text.
\subsection{Gaussian property of blocks in $\rho_{\hat{N}_A}$}
In this subsection we provide a general approach to compute the NEE $\Delta S$ for free fermion models with a subsystem consisting of a single site, using Gaussian properties of the measured density matrix. This method was used for the results displayed in Fig.~1(d,e,f) of the main text.
Consider the density matrix of $N+1$ sites. After measuring one site ($A$ site), the measured density matrix is
\begin{align}
\rho_{\hat{N}_A}=\hat{N}_A\rho \hat{N}_A + (1-\hat{N}_A)\rho(1-\hat{N}_A).
\end{align}
Now we show that $\hat{N}_A\rho \hat{N}_A$ and $(1-\hat{N}_A)\rho(1-\hat{N}_A)$ have a Gaussian form. We write the projectors $\hat{N}_A=c_A^\dagger c_A$ and $1-\hat{N}_A$ as
\begin{align}
\hat{N}_A=\lim_{\alpha\rightarrow\infty} e^{\alpha (c_A^\dagger c_A-1)},\quad 1-\hat{N}_A=\lim_{\alpha\rightarrow\infty}e^{-\alpha c_A^\dagger c_A},
\end{align}
which is of course Gaussian. Since the product of Gaussian matrices is Gaussian, $\hat{N}_A\rho\hat{N}_A$ and $(1-\hat{N}_A)\rho(1-\hat{N}_A)$ are Gaussian.
We define new density matrices as
\begin{align}
\rho^{N_A=1}=\frac{1}{\langle \hat{N}_A\rangle}\hat{N}_A\rho \hat{N}_A,\quad \rho^{N_A=0}=\frac{1}{1-\langle \hat{N}_A\rangle}(1-\hat{N}_A)\rho(1-\hat{N}_A),
\end{align}
which can be written in diagonal Gaussian forms
\begin{align}
\rho^{N_A=1}=\frac{1}{K_{1}}e^{-\sum_q \epsilon_q^{N_A=1} a_q^\dagger a_q},\quad \rho^{N_A=0}=\frac{1}{K_0}e^{-\sum_q\epsilon_q^{N_A=0} b_q^\dagger b_q},
\end{align}
where $a_k$, $b_k$ and $c_i$ are related as
\begin{align}
c_i=\sum_k \phi_i^{N_A=1}(k) a_k,\quad c_i=\sum_k \phi_i^{N_A=0}(k)b_k.
\end{align}
Then the entropy of those density matrices is~\cite{peschel2009reduced}
\begin{align}
S_{N_A}=-\text{Tr}[\rho^{N_A}\log\rho^{N_A}]=\sum_{l}\log(1+e^{-\epsilon_l^{N_A}})+\sum_{l}\frac{\epsilon_l^{N_A}}{e^{\epsilon_l^{N_A}}+1}.
\end{align}
The energy levels are obtained from the correlation matrix,
\begin{align}
C^{N_A=1}_{ij}=\frac{1}{\langle \hat{N}_A\rangle}\text{Tr}[\hat{N}_A\rho \hat{N}_A c_i^\dagger c_j],\quad C^{N_A=0}_{ij}=\frac{1}{1-\langle \hat{N}_A\rangle}\text{Tr}[(1-\hat{N}_A)\rho(1-\hat{N}_A)c_i^\dagger c_j],
\end{align}
where $i,j\ne A$. The off-diagonal correlators involving the site of subsystem $A$ vanish, while the diagonal entries are fixed to $C^{N_A=1}_{AA}=1$ and $C^{N_A=0}_{AA}=0$; in either case site $A$ is in a pure state and does not contribute to the entropy.
After diagonalizing the correlation matrix, the correlator becomes
\begin{align}
C^{N_A}_{ij}=\sum_{k}\phi_i^{N_A}(k)\phi^{N_A}_j(k)\frac{1}{e^{\epsilon^{N_A}_k}+1}\rightarrow \bar{C}^{N_A}_k=\frac{1}{e^{\epsilon_k^{N_A}}+1}.
\end{align}
Inverting this relation, $\epsilon_k^{N_A}$ becomes
\begin{align}
\epsilon_k^{N_A}=\log(\frac{1-\bar{C}^{N_A}_k}{\bar{C}^{N_A}_k}).
\end{align}
Finally we obtain
\begin{align}
\Delta S=\text{Tr}[\rho\log\rho]+\langle N_A\rangle S_{N_A=1}+(1-\langle N_A\rangle)S_{N_A=0}-\langle N_A\rangle\log\langle N_A\rangle-(1-\langle N_A\rangle)\log(1-\langle N_A\rangle).
\end{align}
This allowed us to obtain the results plotted in Fig.~1(d,e,f) in the main text. In these calculations we use the lattice model
\begin{align}
H=\Delta_L\sum_{n=-N_{max}-1/2}^{N_{max}+1/2} n\gamma_{-n}\gamma_n +n\eta_{-n}\eta_n+i\sqrt{\frac{2T_K}{L}}\sum_n \gamma_n \eta_d+i\sqrt{\frac{2T_-}{L}}\sum_n \eta_n \gamma_d+i\Delta E \gamma_d\eta_d,
\end{align}
with $\Delta_L=1$, $N_{max}=800$, $T_K=25\pi$.
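The block-Gaussianity argument above can be tested on a minimal example: a tiny free-fermion chain, where the correlation-matrix route of Eqs.~(\ref{eq:supp_dd})--(\ref{eq:DD_Renyi3})'s spirit can be compared against an exact many-body computation. The three-site chain, hopping, and $\beta$ below are arbitrary illustrative choices, not the lattice model used for Fig.~1.

```python
import numpy as np

# Illustrative model: 3-site free-fermion chain at inverse temperature beta
L, beta = 3, 1.3
h = np.zeros((L, L))
for i in range(L - 1):
    h[i, i + 1] = h[i + 1, i] = -1.0

dim = 2 ** L
occ = lambda s, k: (s >> k) & 1

def c_dag_c(i, j):
    """Fock-space matrix of c_i^dagger c_j with Jordan-Wigner signs."""
    M = np.zeros((dim, dim))
    for s in range(dim):
        if not occ(s, j):
            continue
        sgn = (-1) ** sum(occ(s, k) for k in range(j))
        s1 = s ^ (1 << j)                      # remove the particle at j
        if occ(s1, i):
            continue
        sgn *= (-1) ** sum(occ(s1, k) for k in range(i))
        M[s1 ^ (1 << i), s] = sgn              # add a particle at i
    return M

H = sum(h[i, j] * c_dag_c(i, j) for i in range(L) for j in range(L) if h[i, j])
E, V = np.linalg.eigh(H)
rho = (V * np.exp(-beta * E)) @ V.T
rho /= np.trace(rho)

NA = c_dag_c(0, 0)                             # subsystem A = site 0
rho1 = NA @ rho @ NA
rho0 = (np.eye(dim) - NA) @ rho @ (np.eye(dim) - NA)
p = np.trace(rho1)

def vn_entropy(m):
    lam = np.linalg.eigvalsh(m)
    lam = lam[lam > 1e-14]
    return -np.sum(lam * np.log(lam))

mix = -p * np.log(p) - (1 - p) * np.log(1 - p)
dS_exact = (p * vn_entropy(rho1 / p) + (1 - p) * vn_entropy(rho0 / (1 - p))
            + mix - vn_entropy(rho))

# Gaussian route: correlation matrices restricted to the sites i, j != A
idx = range(1, L)
corr = lambda blk, nrm: np.array([[np.trace(blk @ c_dag_c(i, j)) for j in idx]
                                  for i in idx]) / nrm

def gauss_entropy(C):
    nbar = np.clip(np.linalg.eigvalsh(C), 1e-14, 1 - 1e-14)
    return -np.sum(nbar * np.log(nbar) + (1 - nbar) * np.log(1 - nbar))

dS_gauss = (p * gauss_entropy(corr(rho1, p))
            + (1 - p) * gauss_entropy(corr(rho0, 1 - p))
            + mix - vn_entropy(rho))
```

The two routes agree, illustrating that the measured blocks are indeed Gaussian and that their entropies follow from the restricted correlation matrices alone.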
\section{Scaling of NEE in the multi-channel Kondo effect}
In this section we show for the MCK model that the low temperature scaling of $\Delta S^{(\ell)}$ is independent of $\ell=1,2,3,\ldots$ and is determined by the scaling dimension of the spin operator $S_z$.
For the MCK model the projection operators for the impurity spin are
\begin{align}
\hat{N}_d=\frac{1}{2}+S_z,\quad 1-\hat{N}_d=\frac{1}{2}-S_z\label{eq:N_Srel}.
\end{align}
Consider first $\Delta S^{(2)}$. It can be written as
\begin{equation}
\label{deltaSmCORRELATOR}
\Delta S^{(2)}=\log(\text{Tr}[\rho^2])-\log(\text{Tr}[\rho \hat{N}_d\rho \hat{N}_d]+\text{Tr}[\rho(1-\hat{N}_d)\rho(1-\hat{N}_d)])=-\log\Big[\frac{1}{2}+2\langle S_z(\frac{1}{iT})S_z(0)\rangle_{T/2}\Big].
\end{equation}
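The second equality in Eq.~(\ref{deltaSmCORRELATOR}) is an operator identity valid for any Hamiltonian, which can be checked numerically; below, a random symmetric matrix on a spin-1/2 times bath Hilbert space stands in for the MCK Hamiltonian (purely illustrative).

```python
import numpy as np

# Random Hermitian stand-in for the Hamiltonian on spin-1/2 x 3-level bath
rng = np.random.default_rng(1)
d_bath, T = 3, 0.8
dim = 2 * d_bath
H = rng.normal(size=(dim, dim))
H = (H + H.T) / 2

E, V = np.linalg.eigh(H)
rho = (V * np.exp(-E / T)) @ V.T
rho /= np.trace(rho)

Sz = np.kron(np.diag([0.5, -0.5]), np.eye(d_bath))
N = 0.5 * np.eye(dim) + Sz                     # projector of Eq. (N_Srel)
Nc = np.eye(dim) - N

lhs = np.log(np.trace(rho @ rho)) - np.log(np.trace(rho @ N @ rho @ N)
                                           + np.trace(rho @ Nc @ rho @ Nc))
# Tr[rho Sz rho Sz]/Tr[rho^2] is <S_z(1/iT) S_z(0)> at temperature T/2
rhs = -np.log(0.5 + 2 * np.trace(rho @ Sz @ rho @ Sz) / np.trace(rho @ rho))
```

The agreement follows because the cross terms $\mathrm{Tr}[\rho^2 S_z]$ cancel between the two projector contributions, leaving $\tfrac{1}{2}\mathrm{Tr}[\rho^2]+2\,\mathrm{Tr}[\rho S_z\rho S_z]$.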
We can expand the impurity spin operator as a sum over local fields of increasing scaling dimension,
$S_z=c_1\psi+... $ where $\psi$ is the leading operator which has a known scaling dimension $\Delta_M$ and $...$ corresponds to operators with higher scaling dimensions. This operator determines the leading low temperature behaviour of the spin-spin correlator in Eq.~(\ref{deltaSmCORRELATOR}), which becomes
\begin{align}
\langle S_z(\frac{1}{iT})S_z(0)\rangle\simeq c_1^2\Big[\frac{\pi T/2}{ \sin\big(\pi (T/2)(1/T)\big)} \Big]^{2\Delta_{M}}\propto T^{2\Delta_{M}}
\end{align}
Thus we obtain
\begin{align}
\Delta S^{(2)}\simeq -\log(1/2+a T^{2\Delta_{M}})\simeq \log 2 -2 a T^{2\Delta_{M}}.
\end{align}
It is easy to extend this result to $\Delta S^{(\ell)}$ with $\ell>2$, and show that it satisfies the same temperature scaling.
From Eq.~(5) in the main text, and Eq.~\eqref{eq:N_Srel}, we obtain
\begin{align}
\Delta S^{(\ell)}=\frac{1}{1-\ell}\log\bigg[&\bigg\langle \Big[\frac{1}{2}+S_z\Big(\frac{\ell-1}{iT}\Big)\Big]\Big[\frac{1}{2}+S_z\Big(\frac{\ell-2}{iT}\Big)\Big]\cdots\Big[\frac{1}{2}+S_z\Big(\frac{1}{iT}\Big)\Big]\Big[\frac{1}{2}+S_z(0)\Big]\bigg\rangle_{T/\ell}\nonumber\\
&+\Big\langle\Big[\frac{1}{2}-S_z\Big(\frac{\ell-1}{iT}\Big)\Big]\Big[\frac{1}{2}-S_z\Big(\frac{\ell-2}{iT}\Big)\Big]\cdots \Big[\frac{1}{2}-S_z\Big(\frac{1}{iT}\Big)\Big]\Big[\frac{1}{2}-S_z(0)\Big]\Big\rangle_{T/\ell} \bigg].
\end{align}
After expanding we find
\begin{align}
\Delta S^{(\ell)}=\frac{1}{1-\ell}\log\big[\frac{1}{2^{\ell}}+2\sum_{k=1}^{\ell-1}\sum_{q=0}^{k-1}\langle S_z(i k/T)S_z(i q/T)\rangle_{T/\ell}+\cdots\big],
\end{align}
where $\cdots$ includes higher ($4$-, $6$-, \ldots) point correlators, whose temperature scaling involves powers higher than $T^{2\Delta_M}$. Hence they are negligible in the low temperature limit. The sum of the two-point correlators gives the same temperature scaling as for $\ell=2$.
Next we consider the NEE $\Delta S^{(1)}$. It can be written as
\begin{align}
\Delta S^{(1)}=\text{Tr}[\rho\log\rho]-\text{Tr}[\rho (\frac{1}{2}+S_z) \log(\rho (\frac{1}{2}+S_z))]-\text{Tr}[\rho (\frac{1}{2}-S_z)\log(\rho(\frac{1}{2}-S_z))].
\end{align}
Expanding the logarithm using
\begin{align}
\log A=-\sum_{k=1}^\infty\frac{1}{k}(I-A)^k,
\end{align}
the NEE becomes
\begin{align}
\Delta S^{(1)}=&\frac{1}{Z(T)}\sum_{k=1}^\infty\frac{1}{k}\Big[-\text{Tr}[e^{-H/T}(I-e^{-H/T})^k]+\text{Tr}[e^{-H/T} (\frac{1}{2}+S_z)(I-e^{-H/T} (\frac{1}{2}+S_z))^k]\nonumber\\
&\qquad\qquad+\text{Tr}[e^{-H/T} (\frac{1}{2}-S_z)(I-e^{- H/T}(\frac{1}{2}-S_z))^k]\Big].
\end{align}
Now we expand the last expression. From each factor we can choose either the $1/2$ or the $S_z$. First we consider the term with all the $1/2$'s chosen. It is equivalent to
\begin{align}
\frac{1}{Z(T)}\Big[\text{Tr}[e^{-H/T}\log(e^{- H/T} )]-\frac{1}{2}\text{Tr}[e^{-H/T}\log (e^{- H/T} \frac{1}{2})]-\frac{1}{2} \text{Tr}[e^{-H/T}\log (e^{-H/T}\frac{1}{2})]\Big]=\log(2).
\end{align}
Next we consider terms involving the $S_z$'s. At low temperature, the dominant contribution comes from two-point functions of $S_z$,
\begin{align}
\sum_{k=1}^\infty \frac{1}{k} \sum_{q=1}^{k} \frac{k!}{(k-q)! q!} \frac{1}{2^{q-1}}\frac{Z(T/q)}{Z(T)}\sum_{p=1}^{q} q\langle S_z(\frac{p}{iT})S_z(0)\rangle_{T/q},
\end{align}
with
\begin{align}
\langle S_z(\frac{p}{iT})S_z(0)\rangle_{T/q}=\Big[\frac{\pi T/q}{\sin(\pi p/q )}\Big]^{2\Delta_M}\sim T^{2\Delta_M}.
\end{align}
Therefore the leading low temperature correction of $\Delta S^{(1)}$ coincides with that of $\Delta S^{(\ell)}$ with $\ell=2,3...$.
\section{Introduction}
Extensive structural protein studies are computationally not feasible using full
atom protein representations. The challenge is to reduce complexity while
maintaining detail \cite{Dill_2008,Istrail_2009}. Lattice protein models are
often used to achieve this but in general only the protein backbone or the amino
acid center of mass is represented
\cite{Backofen_Will_Constraints2006,Mann_LatPack_HFSP_08,Mann:Will:Backofen:CPSP-tools:BMCB:2008,Miao_hydroCollapse_JMB_04,Bornberg:97a}.
A huge variety of lattices and energy functions have previously been developed
\cite{Dill_1985,Godzik_backboneFit_93,Reva_1996}, while the lattices 2D-square,
3D-cubic and 3D face centered cubic (FCC) are most prominent.
In order to evaluate the applicability of different lattices and to enable the
transformation of real protein structures into lattice models, a representative
lattice protein structure has to be calculated. In detail, given a full atom
protein structure one has to find the best structure representation within the
lattice model that minimizes the applied distance measure. Ma\v{n}uch and Gaur
have shown the NP-completeness of this problem for backbone-only models in the
3D-cubic lattice when minimizing coordinate root mean square deviation (cRMSD)
and named it the \emph{protein chain lattice fitting (PCLF)
problem}~\cite{Manuch_2008}.
The PCLF problem has been widely studied for backbone-only models.
Suggested approaches utilize quite different methods, ranging from full
enumeration~\cite{Covell_1990}, greedy chain growth
strategies~\cite{Mann_latfit_2010,Miao_hydroCollapse_JMB_04,Park:Levitt:JMB1995},
dynamic programming~\cite{Hinds_1992}, simulated
annealing~\cite{Ponty_backboneFits_NAR_08}, or the optimization of specialized
force fields~\cite{Koehl_1998,Reva_1998}. The most important aspects in
producing lattice protein models with a low root mean squared deviation (RMSD)
are the lattice co-ordination number and the neighborhood vector angles
\cite{Park:Levitt:JMB1995,Pierri_Proteins_08}. Lattices with intermediate
co-ordination numbers, such as the face-centered cubic (FCC) lattice, can produce
high resolution backbone models \cite{Park:Levitt:JMB1995} and have been used in
many protein structure studies (e.g.
\cite{Istrail_2009,Jacob_2007,Ullah:CPSP_LS:09}).
Most of the PCLF methods introduced are heuristics that derive good solutions in
reasonable time. Greedy methods such as chain growth
algorithms~\cite{Mann_latfit_2010,Miao_hydroCollapse_JMB_04,Park:Levitt:JMB1995}
enable low runtimes, but the fitting quality depends on the chain growth direction and
parameterization. Thus, resulting lattice models are biased by the method
applied and have potential for refinement.
The goal of this paper is to provide evidence that greedy methods
can be effectively improved by subsequent refinement steps that increase the
fitting quality. We present a formalization and a simple working prototype.
Moreover we briefly discuss some potential methodologies that we expect could be
effectively employed.
\section{Definitions and Preliminaries}
\label{sec:pre}
\newcommand{\text{neigh}}{\text{neigh}}
\newcommand{$C_{\alpha}$}{$C_{\alpha}$}
\newcommand{\operatorname{dist}}{\operatorname{dist}}
In order to define the Constraint Programming approach we first introduce some
preliminary formalisms.
Given a protein in full atom representation of length~$n$ (e.g. in Protein
Data Bank (PDB) format~\cite{PDB_2000}), we denote the sequence of
3D-coordinates of its $C_{\alpha}$-atoms (its \emph{backbone trace}) by $P=(P_1,\ldots,P_n)$.
A regular \emph{lattice~$L$} is defined by a set of neighboring
vectors~$\vec{v}\in N_L$ of equal length $(\forall_{\vec{v_i},\vec{v_j}\in N_L}
: |\vec{v_i}|=|\vec{v_j}|)$, each with a reverse $(\forall_{\vec{v}\in N_L} :
-\vec{v}\in N_L$, such that $L = \{\vec{x} \;|\; \vec{x} = \sum_{\vec{v}_i\in
N_L} d_i \cdot \vec{v}_i \wedge d_i \in \mathds{Z}^{+}_{0} \}$. $|N_L|$~gives
the coordinate number of the lattice~$L$, e.g. 6~for 3D-cubic or 12~for the FCC
lattice. All neighboring vectors $\vec{v} \in N_L$ of the used lattice $L$ are
scaled to a length of~3.8$\mathring{A}${}, which is the mean distance between
consecutive $C_{\alpha}$-atoms in real protein structures.
A backbone-only \emph{lattice protein structure~$M$} of length~$n$ is defined by
a sequence of lattice nodes $M = (M_1,\ldots,M_n) \in L^n$ representing the
backbone ($C_{\alpha}$) monomers of each amino acid. A valid structure ensures backbone
connectivity $(\forall_{i<n} : M_i-M_{i+1} \in N_L)$ as well as self-avoidance
$(\forall_{i\neq j} : M_i \neq M_j)$, i.e. it represents a self-avoiding walk
(SAW) in the underlying lattice.
The \emph{PCLF problem} is to find a lattice protein model~$M$ of a given
protein's backbone~$P$, such that a distance measure between~$M$ and~$P$
($\operatorname{dist}(M,P)$) is minimized~\cite{Manuch_2008}.
In this contribution, we tackle the \emph{PCLF refinement problem}. Here, a
protein backbone~$P$ as well as a first lattice model~$M$ is given, e.g. derived
by a greedy chain growth
procedure~\cite{Mann_latfit_2010,Miao_hydroCollapse_JMB_04,Park:Levitt:JMB1995}.
The problem is to find a lattice model~$M'$, such that $\operatorname{dist}(M',P) <
\operatorname{dist}(M,P)$, via a relaxation/refinement of the original model~$M$.
In the following, we utilize distance RMSD (dRMSD, Eq.~\ref{eq:dRMSD}) as the
distance measure~$\operatorname{dist}(M,P)$. dRMSD is independent of the relative orientation
of~$M$ and~$P$ since it captures the model's deviation from the pairwise
distances of $C_{\alpha}${}-atoms in the original protein. Minimizing this measure
optimizes the lattice model obtained.
\begin{eqnarray}
\text{dRMSD}(M,P) &=& \sqrt{ \frac{ \sum_{i<j} \; (|M_j-M_i| - |P_j-P_i|)^2 }{
n(n-1)/2 } } \label{eq:dRMSD}
\end{eqnarray}
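Eq.~\eqref{eq:dRMSD} is straightforward to implement; the following sketch (our illustration, not the LatPack code) also checks the stated orientation independence: a rigidly rotated and translated copy of $P$ has dRMSD zero.

```python
import numpy as np

def dRMSD(M, P):
    """Distance RMSD of Eq. (1): RMS deviation of all pairwise distances."""
    dM = np.linalg.norm(M[:, None, :] - M[None, :, :], axis=-1)
    dP = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    iu = np.triu_indices(len(P), k=1)          # i < j pairs, n(n-1)/2 of them
    return np.sqrt(np.mean((dM[iu] - dP[iu]) ** 2))

# A rotated and translated copy of P preserves all pairwise distances
rng = np.random.default_rng(2)
P = rng.normal(size=(10, 3))
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])
M = P @ R.T + np.array([1.0, -2.0, 0.5])

# Moving a single point changes its pairwise distances, so dRMSD > 0
M_perturbed = M.copy()
M_perturbed[0] += np.array([0.5, 0.0, 0.0])
```

This invariance is exactly why no superpositioning of $M$ onto $P$ is needed when dRMSD is used as the objective.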
\section{Refinement of Lattice Models: a Constraint Model in COLA}
In this section we formalize a Constraint Optimization Problem (COP) to solve
the PCLF refinement problem (see Sec.~\ref{sec:pre}), i.e. to refine a lattice
model~$M$ of a protein~$P$. The input is the original protein~$P$ and its lattice model~$M$
to be refined. The output is a lattice model~$M'$ derived from~$M$ via some
relaxation that optimizes our distance measure dRMSD$(M',P)$
(Eq.~\ref{eq:dRMSD}).
We first formalize the problem and show how to implement it in COLA, a
COnstraint solver for
LAttices~\cite{Dal_Palu:Dovier:Fogolari:Const_Logic_Progr:2004}. This is
followed by an altered formulation that utilizes limited discrepancy
search~\cite{Harvey95limiteddiscrepancy}.
\newpage
\subsection{The Constraint Optimization Problem}
The COP can be formalized as follows:
\begin{center}
\begin{tabular}{p{0.25\textwidth} p{0.7\textwidth}}
$X_1 \ldots X_n$ & variables representing $M' = (M'_1,\ldots,M'_n)$ \\
\\
$D(X_i)$ & variable domains $= \{ v \;|\; v \in L \wedge | v -
M_i | \leq f_{\text{scale}} \cdot d_{\max} \}$, \\
& i.e. an $M_i$ surrounding sphere with
radius $f_{\text{scale}}\cdot d_{\max}$ \\
\\
$SAW(X_1 \ldots X_n)$ & self-avoiding walk constraint, e.g. split into
a chain of binary \texttt{contiguous} and a global \texttt{alldifferent}
constraint\\
\\
$O$ & objective function variable, implements dRMSD\\
& $ = \sum_{i<j} (|X_j - X_i| - |P_j - P_i|)^2$
to be minimized\\
\end{tabular}
\end{center}
Note that $d_{\max}$ is given in lattice units and is thus scaled by the factor
$f_{\text{scale}}=3.8$\AA{} to obtain the correct distance. Thus, the
domains for $d_{\max}=0$ only contain the original lattice point~$M_i$ (domain size~1),
while $d_{\max}=1$ results in~$M_i$ as well as all neighboring lattice points
(domain size~$1+12=13$ in FCC). The domain size guided by $d_{\max}$ defines the
allowed relaxation of the original lattice model~$M$ to be refined. For more
details about global constraints for protein structures on lattices, the reader
can refer to~\cite{Backofen_Will_Constraints2006,DalPalu_2010}.
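For illustration, the domain spheres described above can be enumerated directly; this is a sketch of the idea (not the COLA internals), assuming the standard embedding of the FCC lattice as the integer points with even coordinate sum, whose 12 neighbor vectors are the permutations of $(\pm 1,\pm 1,0)$ of length $\sqrt{2}$.

```python
from itertools import product

def fcc_domain(center, d_max):
    """All FCC lattice points within d_max lattice units (Euclidean) of center.

    FCC is modeled as the integer points with even coordinate sum; one
    lattice unit is the neighbor-vector length sqrt(2). The center is
    assumed to be a lattice point itself.
    """
    r2 = 2.0 * d_max ** 2 + 1e-9          # squared radius in integer coords
    span = 2 * d_max + 1                  # generous bounding box
    cx, cy, cz = center
    return [(cx + dx, cy + dy, cz + dz)
            for dx, dy, dz in product(range(-span, span + 1), repeat=3)
            if (dx + dy + dz) % 2 == 0
            and dx * dx + dy * dy + dz * dz <= r2]
```

For $d_{\max}=1$ the enumeration yields exactly the $1+12=13$ points quoted above, and for $d_{\max}=0$ only the original point survives.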
The COLA implementation takes advantage of the availability of 3D lattice point
domains and distance constraints. The implementation changes the original
framework only in the input data handling and objective function definition. A
working copy of COLA and the COP implemented for this paper are available
at \texttt{http://www2.unipr.it/$\sim$dalpalu/COLA/}.
\subsection{Limited Discrepancy Search}
A simple enumeration with $d_{\max}=1$ and a protein of length~50 already shows
that the search space of the COP from the previous section is not manageable. In
this example, each point can be placed in 13~different positions in the FCC
lattice, and even if the contiguous constraint among the amino acids is
enforced, the number of different paths is still beyond the current
computational limits.
We tried a simple branch and bound search on $X_1,\ldots,X_n$, where the dRMSD
bound is estimated by considering the possible placement of unlabeled
variables and the best dRMSD contribution provided by each amino acid. In
detail, each amino acid $s$ not yet labeled is compared to each other amino acid
($s'$). Each pair provides a range of different contributions to dRMSD measure,
depending on the placement of $s$ and the placement of the other amino acids
(when not yet labeled). A closed formula computation (rather than a full
enumeration of all combinations), based on bounding box of domain positions, is
activated, in order to estimate the minimal contribution. Clearly, this
estimate is not particularly tight, since we relax it to
$\mathds{R}^3$, where the null (best) contribution can be found as soon
as the bounds on $|X_s-X_{s'}|$ include the value $|P_s-P_{s'}|$. Unfortunately,
the discrete version requires a more expensive evaluation that boils down to
full pair checks. Therefore, the current bound is very loose and the pruning
effects are modest.
A general impression is that the dRMSD measure presents a pathological
distribution of local minima, depending on the placement of amino acids on the
lattice. In general, due to the discrete nature of the lattice, the modification
of a single amino acid's position can drastically vary its contributions to
the measure.
These considerations suggested that we focus on the identification of solutions
that improve the dRMSD w.r.t.~$M$ rather than searching for the optimal one.
In terms of approximate search, we tried to capture the main characteristics of
the COP and design efficient and effective heuristics.
A simple idea we tested is the \emph{limited discrepancy}
search~\cite{Harvey95limiteddiscrepancy}. This search compares the amino acid
placements in the lattice models~$M$ and~$M'$. Every time a corresponding amino
acid is placed differently in the two conformations, we say that there is a
\emph{discrepancy}. We set a global constraint that limits the number of
deviations to at most~$K$. This allows us to generate conformations that are rather
similar to~$M$, especially if $d_{\max}$ is greater than~1. The rationale behind
this heuristic is that we expect potential conformations $M'$ to improve the
dRMSD only when contained in a close neighborhood of the structure~$M$.
The count of the number of discrepancies $K$ is implemented directly in COLA at
each labeling step.
\subsection{Results}
We summarize here the preliminary results coming from the COLA implementation of
a $K$-discrepancy search in the 3D FCC lattice.
The initial lattice models to be refined were generated using the LatFit tool
from the LatPack package~\cite{Mann_LatPack_HFSP_08,Mann_latfit_2010}. LatFit
implements an efficient greedy dRMSD-optimizing chain growth method and was
parameterized to consider the best 100~structures from each elongation for
further growth\footnote{For details on the LatFit method see
\cite{Mann_latfit_2010} and the freely available web interface at
{\small\texttt{\url{http://cpsp.informatik.uni-freiburg.de}}}}.
\begin{table}[tb]
\centering
\begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}l|c|c|c|}
Protein ID & 8RXN & 1CKA & 2FCW
\\
\hline
length & 52 & 57 & 106
\\
\end{tabular*}
\vspace{1em}
\caption{Used proteins from the Protein Data Bank (PDB)~\cite{PDB_2000}.}
\label{tab:ID}
\vspace{-1em}
\end{table}
We test three proteins (Table~\ref{tab:ID}) and for each of them we input the
conformation~$M$ obtained from the greedy algorithm (LatFit).
Table~\ref{tab:results} reports the best dRMSD of our new model~$M'$ found
depending on $d_{\max}$ and the number $K$ of amino acids placed differently
from the input conformation. Furthermore, time consumption for each
parameterization is given.
Note that if either $K=0$ or $d_{\max}=0$, only the input structure resulting
from the greedy LatFit run can be enumerated.
\begin{figure}[tb]
\centering
\includegraphics[width=0.5\textwidth]{LatFit_8RXN_bb_FCC-loop-refinement-3-4}
\caption{The initial lattice model~$M$ (red) of the
protein chain~$P$ (blue, balls) and the final/refined lattice model~$M'$
(green) resulting from $d_{\max}=2$ and $K=4$ for protein \texttt{8RXN}. Note that only the altered
loop region (residues 2--14) is shown, but the whole structure models~$M$ and
$M'$ were superimposed onto~$P$ independently.}
\label{fig:8RNX-refinement-LDS-3-4}
\end{figure}
\begin{table}[tb]
\centering
\small
\begin{minipage}{0.48\textwidth}
\begin{tabular*}{0.95\textwidth}{@{\extracolsep{\fill}}rr|rrrr|}
\multicolumn{2}{c}{}& \multicolumn{4}{c}{dRMSD}\\
\hline
\multicolumn{2}{|c|}{}& \multicolumn{4}{c|}{$K$}\\
\multicolumn{2}{|c|}{8RXN}& 1 & 2 & 3 & 4 \\
\hline
\multirow{4}{*}{$d_{\max}$}
&$0$ & 1.2469 & 1.2469 & 1.2469 & 1.2469\\
&$1$ & 1.2319 & 1.2172 & 1.1639 & 1.1189\\
&$2$ & 1.2319 & 1.1674 & 1.1596 & 1.0884\\
&$3$ & 1.2319 & 1.1674 & 1.1596 & 1.0884\\
\hline
\multicolumn{2}{|c|}{}& \multicolumn{4}{c|}{$K$}\\
\multicolumn{2}{|c|}{1CKA}& 1 & 2 & 3 & 4 \\
\hline
\multirow{4}{*}{$d_{\max}$}
&$0$ & 1.2370 & 1.2370 & 1.2370 & 1.2370\\
&$1$ & 1.2226 & 1.2226 & 1.2226 & 1.2226\\
&$2$ & 1.2026 & 1.1887 & 1.1887 & 1.1887\\
&$3$ & 1.2026 & 1.1887 & 1.1887 & 1.1887\\
\hline
\multicolumn{2}{|c|}{}& \multicolumn{4}{c|}{$K$}\\
\multicolumn{2}{|c|}{2FCW}& 1 & 2 & 3 & 4 \\
\hline
\multirow{4}{*}{$d_{\max}$}
&$0$ & 1.1353 & 1.1353 & 1.1353 & 1.1353 \\
&$1$ & 1.1353 & 1.1324 & 1.1317 & 1.1309 \\
&$2$ & 1.1321 & 1.1300 & 1.1254 & 1.1200 \\
&$3$ & 1.1321 & 1.1300 & 1.1254 & 1.1200 \\
\cline{3-6}
\end{tabular*}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\begin{tabular*}{0.95\textwidth}{@{\extracolsep{\fill}}rr|rrrr|}
\multicolumn{2}{c}{}& \multicolumn{4}{c}{time in seconds}\\
\hline
\multicolumn{2}{|c|}{}& \multicolumn{4}{c|}{$K$}\\
\multicolumn{2}{|c|}{8RXN}& 1 & 2 & 3 & 4 \\
\hline
\multirow{4}{*}{$d_{\max}$}
&$0$ & 0.048 & 0.081 & 0.040 & 0.039 \\
&$1$ & 0.112 & 0.790 & 2.365 & 20.70 \\
&$2$ & 0.068 & 0.983 & 6.500 & 106.6 \\
&$3$ & 0.106 & 0.499 & 7.399 & 124.0 \\
\hline
\multicolumn{2}{|c|}{}& \multicolumn{4}{c|}{$K$}\\
\multicolumn{2}{|c|}{1CKA}& 1 & 2 & 3 & 4 \\
\hline
\multirow{4}{*}{$d_{\max}$}
&$0$ & 0.031 & 0.030 & 0.027 & 0.037 \\
&$1$ & 0.402 & 0.615 & 3.442 & 39.27 \\
&$2$ & 0.225 & 0.456 & 7.595 & 120.6 \\
&$3$ & 0.421 & 0.616 & 8.573 & 140.2 \\
\hline
\multicolumn{2}{|c|}{}& \multicolumn{4}{c|}{$K$}\\
\multicolumn{2}{|c|}{2FCW}& 1 & 2 & 3 & 4 \\
\hline
\multirow{4}{*}{$d_{\max}$}
&$0$ & 0.043 & 0.050 & 0.058 & 0.078 \\
&$1$ & 0.118 & 1.997 & 49.99 & 1128 \\
&$2$ & 0.294 & 7.192 & 341.8 & 14235 \\
&$3$ & 0.332 & 8.129 & 394.5 & 16140 \\
\cline{3-6}
\end{tabular*}
\end{minipage}
\\
\vspace{1em}
\caption{Influence of $d_{\max}$ and $K$ on the discrepancy search, measured in
dRMSD (left) and runtime in seconds (right).}
\label{tab:results}
\vspace{-1em}
\end{table}
These results, though preliminary, offer interesting insight into the
distribution of suboptimal solutions. It is notable, e.g., that
better solutions are found by allowing a rather large local neighborhood
($d_{\max}$ parameter) for a few amino acids. On the other hand, it seems that few
modifications ($K$) are sufficient to alter the input conformation and obtain a
better one.
In Figure~\ref{fig:8RNX-refinement-LDS-3-4} we exemplify the gain in model
precision for the protein \texttt{8RXN}. Relaxing only $K=4$ monomers
enables the structural change that reduces the dRMSD from 1.2469 to
1.0884, an improvement of about~13\%. Moving fewer monomers would not
enable such a drastic change. This illustrates the potential of a local search
scheme that iteratively applies a series of such structural changes.
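Such an iterative scheme can be sketched as a generic best-improvement descent; the neighbourhood and cost below are toy stand-ins for the $K$/$d_{\max}$ moves and the dRMSD objective:

```python
def local_search(x0, neighbours, cost, max_iters=100):
    """Best-improvement descent: repeatedly move to the cheapest
    neighbour, mirroring an iterated application of small structural
    changes to a lattice model."""
    x, c = x0, cost(x0)
    for _ in range(max_iters):
        candidates = neighbours(x)
        if not candidates:
            break
        best = min(candidates, key=cost)
        if cost(best) >= c:
            break  # local optimum reached
        x, c = best, cost(best)
    return x, c

# Toy 1-D stand-in for dRMSD minimisation over lattice moves.
quadratic = lambda x: (x - 3) ** 2
steps = lambda x: [x - 1, x + 1]
x_opt, c_opt = local_search(10, steps, quadratic)
```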
Investigating the time consumption (Table~\ref{tab:results}), one can see that
the runtime increases drastically with $K$, which governs the size of the search
tree. The domain sizes implied by $d_{\max}$ do not show such a strong influence.
This behavior indicates that a search based on exploring only such local
neighborhoods should provide good suboptimal solutions efficiently. In the
next section we briefly discuss some promising approaches that we plan to
investigate.
\clearpage
\newpage
\subsection{Future work}
In our opinion, a framework that integrates CP and Local Search is particularly
suited to quickly generate suboptimal solutions, potentially very close to the
optimal one. We identify some possible directions that we believe are excellent
candidates for approximately modeling and solving the PCLF problem:
\begin{itemize}
\item {\bf local neighborhood search \cite{Cipriano_GELATO,Dotu:08}}: this
technique allows one to integrate the Gecode and Local Search frameworks. The
framework handles constraint specifications and local moves within the C++
programming language;
\item {\bf $k$-local moves \cite{Ponty_backboneFits_NAR_08}:} the idea here
is to apply structural changes on $k$ consecutive amino acids and repeat the
process in a Monte-Carlo and/or simulated annealing style.
\item {\bf side chain model \cite{Mann:local_move:WCB09}}: our model can be
extended to include side chains and we could exploit a similar set of local
moves.
\item {\bf the framework presented in~\cite{COLAandLS}}: here, COLA is
extended and combined directly with a Local Search approach based on \emph{pull
moves} \cite{Lesh:RECOMB2003}.
\end{itemize}
\section{Conclusion}
In this paper we presented a Constraint Programming based model for the
refinement of lattice fits of protein conformations. A simple branching
strategy was shown to be ineffective, whereas a limited discrepancy search was
shown to be beneficial for the identification of suboptimal solutions. A
prototypical implementation in the COLA framework and some preliminary results
have demonstrated the feasibility of the method. We believe that an extension
of the framework towards Local Search is particularly well suited for the PCLF
problem at hand.
\paragraph{Acknowledgments} This work is partially supported by PRIN08 \emph{Innovative multi-disciplinary approaches
for constraint and preference reasoning} and GNCS-INdAM
\emph{Tecniche innovative per la programmazione con vincoli in applicazioni strategiche}.
\bibliographystyle{plain}
\section{Introduction}
Diffraction of light limits the spatial resolution of classical optical microscopes \cite{abbe1873,rayleigh1879} and hinders their applicability to life sciences at very small scales. Quite recently, a number of superresolving techniques, suitable for overcoming the classical limit, have been proposed. The approaches include, for example, stimulated-emission depletion microscopy \cite{hell1994STED}, superresolving imaging based on fluctuations \cite{dertinger2009fast} or antibunched light emission of fluorescence markers \cite{schwartz2013}, structured illumination microscopy \cite{classen2017superresolution,classen2018analysis}, and quantum imaging \cite{shih_2018_introduction,giovannetti2009sub,xu_experimental_2015}.
Quantum entanglement is known to be a powerful tool for resolution and visibility enhancement in quantum imaging and metrology \cite{shih_2018_introduction,giovannetti2009sub,xu_experimental_2015,boto_quantum_2000,rozema_scalable_2014,giovannetti_quantum-enhanced_2004,agafonov2009high,chan_high-order_2009,chen_arbitrary-order_2010}. It has been shown that, using $n$ entangled photons and measuring the $n$th-order correlations, one can effectively reduce the width of the point-spread function (PSF) $\sqrt{n}$ times \cite{giovannetti2009sub, tsang2009quantum,rozema_scalable_2014,giovannetti_quantum-enhanced_2004} and beat the classical diffraction limit. The increase of the effect with growing $n$ can naively be explained as summing up the ``pieces of information'' carried by each photon when measuring their correlations. Such logic suggests that, given an $n$-photon entangled state, the intuitively optimal measurement strategy is to maximally exploit the quantumness of the illuminating field and to measure the maximal available order of photon correlations (i.e., the $n$th one).
Surprisingly, this is not always the case. First, it is worth mentioning that effective narrowing of the PSF and resolution enhancement can be achieved with classically correlated photons \cite{giovannetti2009sub} or even in the complete absence of correlations between fields emitted by different parts of the imaged object (as in stochastic optical microscopy \cite{dertinger2009fast}). Moreover, the maximal order of correlations is not necessarily the best one \cite{pearce_precision_2015,vlasenko2020}. In this contribution, we show that, for an entangled $n$-photon illuminating state, it is possible to surpass the measurement of all $n$-photon correlations by losing a photon and measuring only the $n-1$ remaining photons. According to our results, measurement of $(n-1)$th-order correlations effectively leads to a $\sqrt{2(n-1)/n}$ times narrower PSF than the commonly considered $n$-photon detection. This is even more striking in view of the notorious fragility of entanglement \cite{RevModPhys.81.865}: if even one of the entangled photons is lost, the correlations tend to become classical.
The insight for understanding that seeming paradox can be gained from a well-established ghost-imaging technique \cite{pittman_optical_1995,strekalov1995observation,agafonov2009high,chan_high-order_2009,chen_arbitrary-order_2010,bai_ghost_2007,erkmen2008unified,moreau_resolution_2018} and from a more complicated approach of quantum imaging with undetected photons \cite{skornia2001nonclassical,thiel2007quantum,oppel_superresolving_2012,bhatti2018generation}. In our case, detecting $n-1$ photons and ignoring the remaining $n$th one effectively comprises two possibilities (see the imaging scheme depicted in Fig.~\ref{fig:scheme}): the $n$th photon can either fly relatively close to the optical axis of the imaging system towards the detector or go far from the optical axis and fail to pass through the aperture of the imaging system. In the first case, the photon can be successfully detected and provide us its piece of information. In the second case, it does not bring us the information itself, but effectively modifies the state of the remaining $n-1$ photons (as in Refs. \cite{skornia2001nonclassical,thiel2007quantum,bhatti2018generation,cabrillo1999creation,brainis2011quantum}). It effectively produces position-dependent phase shift, thus performing wave-function shaping \cite{brainis2011quantum} and leading to an effect similar to structured illumination \cite{classen2017superresolution,classen2018analysis}, PSF shaping \cite{paur2018tempering}, or linear interferometry measurement \cite{lupo2020quantum}, and enhancing the resolution. We show that for $n > 2$ photons, the sensitivity-enhancement effect leads to higher information gain than just detection of the $n$th photon, and measurement of $(n-1)$-photon correlations surpasses $n$-photon detection.
The discussed sensitivity-enhancement effect can be used to increase resolution in practical imaging schemes. One can devise a conditional measurement setup by placing a bucket detector outside the normal pathway of the optical beam (e.g. near the lens, outside of its aperture) and post-selecting the outcomes when one photon gets to the bucket detector and the remaining $n-1$ ones successfully reach the position-sensitive detector used for the coincidence measurements. We show that such a post-selection scheme indeed leads to an additional increase of resolution relative to $(n-1)$-photon detection. Resolution enhancement by post-selecting the more informative field configuration is closely related to the spatial mode demultiplexing technique \cite{tsang2016quantum,tsang2017subdiffraction}. However, in our case the selection of the more informative part of the field is performed by detection of a photon, while all the remaining photons are detected in the usual way, rather than by filtering the beam itself. Also, our technique bears some resemblance to multi-photon ghost imaging \cite{agafonov2009high}.
To draw quantitative conclusions about the resolution enhancement of the proposed technique relative to traditional measurements of $n$- and $(n-1)$-photon coincidences, we employ the Fisher information, which has already proved itself to be a powerful tool for the analysis of quantum imaging problems and for meaningful quantification of resolution \cite{motka2016optical,tsang2016quantum,tsang2017subdiffraction,mikhalychev_efficiently_2019,paur2018tempering,paur2019reading,pearce_precision_2015,vlasenko2020}. Our simulations show that for imaging a set of semi-transparent slits (i.e., for a multi-parameter estimation problem), one indeed obtains a considerable increase in the information per measurement, and a corresponding resolution enhancement. While a genuine demonstration of the discussed effects requires at least 3 entangled photons, which can be generated by setups with complex nonlinear processes (e.g. cascaded spontaneous parametric down-conversion (SPDC) \cite{hubel2010direct}, a combination of SPDC with up-conversion \cite{keller1998theory}, cascaded four-wave mixing \cite{wen2010tripartite}, or third-order SPDC \cite{corona2011experimental,corona2011third,borshchevskaya2015three}), the relatively simple biphoton case is still suitable for observing resolution enhancement for a specific choice of the region where the $n$th photon (here, the second one) is detected.
\section{Results}
\subsection{Imaging with entangled photons}
We consider the following common model of a quantum imaging setup (Fig.~\ref{fig:scheme}). An object is described by a transmission amplitude $A(\vec s)$, where $\vec s$ is the vector of transverse position in the object plane. It is illuminated by linearly polarized light in an $n$-photon entangled quantum state
\begin{multline}
\label{eq:Psi_n}
|\Psi_n\rangle \propto \int d^2 \vec k_1 \cdots d^2 \vec k_n \delta^{(2)}(\vec k_1 + \cdots + \vec k_n) \\ {} \times a^+ (\vec k_1) \cdots a^+ (\vec k_n) |0 \rangle \propto \int d^2 \vec s \left(a^+ (\vec s)\right) ^ n |0 \rangle,
\end{multline}
where $a^+ (\vec k)$ and $a^+ (\vec s)$ are the operators of photon creation in the mode with the wavevector $\vec k$ and at position $\vec s$ respectively. An optical system with the PSF $h(\vec s, \vec r)$ maps the object onto the image plane, where the field correlations are detected.
\begin{figure}[htbp]
\includegraphics[width=0.9\linewidth]{Imaging_scheme.pdf}
\caption{General scheme of an imaging setup. See the text for details. An image of an object placed at the object plane OP is formed by the optical system at the image plane IP.}
\label{fig:scheme}
\end{figure}
Features of the field passing through the analyzed object (and, thus, the object parameters) can be inferred from the measurement of intensity correlation functions accomplished by simple coincidence photo-counting. The detection rate of the $n$-photon coincidence at a point $\vec r$ is determined by the value of the $n$th-order correlation function (see Methods for details):
\begin{equation}
\label{eq:Gn}
G^{(n)}(\vec r) \propto \left| \int d^2 \vec s A^n (\vec s) h^n(\vec s, \vec r) \right|^2.
\end{equation}
The signal, described by Eq.~(\ref{eq:Gn}), includes the $n$th power of the PSF, which is $\sqrt n$ times narrower than the PSF itself. At least for the object of just two transparent point-like pinholes, such narrowing
yields $\sqrt n$ times better visual resolution of the object than for imaging with coherent light (see e.g. Refs. \cite{giovannetti_quantum-enhanced_2004,shih_2018_introduction}).
Alternatively, one may try to ignore one of the photons and measure correlations of the remaining $(n-1)$ ones. The rate of $(n-1)$-photon coincidences is described by the $(n-1)$th-order correlation function:
\begin{multline}
\label{eq:Gn-1}
G^{(n-1)}(\vec r) \propto \int d^2 \vec s \left|A (\vec s)\right|^{2(n-1)} \left|h(\vec s, \vec r)\right|^{2(n-1)}.
\end{multline}
Here, the $2(n-1)$th power of the PSF is present. For $n>2$, the resolution enhancement factor $\sqrt{2(n-1)}$ is larger than the factor $\sqrt{n}$ achievable for $n$-photon detection.
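The effective narrowing factors implied by Eqs.~(\ref{eq:Gn}) and (\ref{eq:Gn-1}) can be verified numerically. The following sketch assumes a one-dimensional Gaussian PSF with unit width, an illustrative stand-in for the actual PSF of the optical system:

```python
from math import exp, sqrt

def rms_width(p, sigma=1.0, step=1e-3, xmax=10.0):
    """RMS width of h(x)**p for a Gaussian PSF h(x) = exp(-x^2/(2 sigma^2)),
    evaluated by midpoint quadrature on a 1-D grid."""
    xs = [i * step for i in range(int(-xmax / step), int(xmax / step) + 1)]
    w = [exp(-x * x / (2 * sigma ** 2)) ** p for x in xs]
    norm = sum(w)
    return sqrt(sum(wi * x * x for wi, x in zip(w, xs)) / norm)

n = 3
factor_n = rms_width(1) / rms_width(n)              # ~ sqrt(n)
factor_nm1 = rms_width(1) / rms_width(2 * (n - 1))  # ~ sqrt(2(n-1))
gain = factor_nm1 / factor_n                        # ~ sqrt(2(n-1)/n)
```

For $n = 3$ the computed gain is $\sqrt{4/3} \approx 1.155$, matching the factor $\sqrt{2(n-1)/n}$ quoted above.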
The result obtained looks quite counter-intuitive: each photon carries some information about the illuminated object, while discarding one of the photons leads to additional information gain. This seeming contradiction is just a consequence of applying classical intuition to quantum dynamics of an entangled system. Due to quantum correlations, an entangled photon can affect the state of the remaining ones and increase their ``informativity'' even when it is lost without being detected \cite{skornia2001nonclassical,thiel2007quantum,oppel_superresolving_2012,bhatti2018generation}.
Here we show that in our imaging scheme such an enhancement by loss is indeed taking place. Moreover, an additional resolution increase can be achieved through conditioning by detecting the photon outside the aperture of the imaging system (see Fig.~\ref{fig:scheme}).
\subsection{Effective state modification}
Let us consider in more detail how the state of the $(n-1)$ photons changes depending on the ``fate'' of the $n$th photon. We follow the approach discussed in Ref. \cite{bhatti2018generation}, which consists in splitting the description of an $n$-photon detection process into 1-photon detection, density-operator modification, and subsequent $(n-1)$-photon detection for the modified density operator. If we detect an $(n-1)$-photon coincidence, the $n$th photon can be: (1) transmitted through the object into the imaging system aperture, (2) transmitted through the object outside the imaging system aperture, or (3) absorbed by the imaged object.
For the first possibility, the $n$th photon can reach the detector and potentially be registered at a certain point $\vec r'$. The effective state of the remaining photons is (see Methods):
\begin{equation}
\label{eq:Psi_n-1(1)}
|\Psi_{n-1}^{(1)}(\vec r')\rangle \propto \int d^2 \vec s A(\vec s) h(\vec s, \vec r') (a^+(\vec s))^{n-1} |0\rangle.
\end{equation}
If we are interested in detection of all the $n$ photons (i.e., postselect the cases when the $n$th photon is successfully detected at the position $\vec r' = \vec r$), the information gain due to the $n$th photon detection results from the factor $h(\vec s, \vec r)$ introduced into the effective state (\ref{eq:Psi_n-1(1)}) of the remaining photons. It forces the photons to pass through the particular part of the object that is mapped onto the vicinity of the detection point $\vec r$, and effectively reduces the image blurring.
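The origin of the conditioning factor in Eq.~(\ref{eq:Psi_n-1(1)}) can be traced with a short heuristic calculation (a sketch only; normalization and the full propagation treatment are deferred to Methods). We assume an effective image-plane annihilation operator $a_{\mathrm{eff}}(\vec r') \propto \int d^2 \vec s\, A(\vec s)\, h(\vec s, \vec r')\, a(\vec s)$, absorbing the object transmission and the propagation into it, and use the commutator $[a(\vec s), (a^+(\vec s'))^n] = n\,\delta^{(2)}(\vec s - \vec s')\,(a^+(\vec s'))^{n-1}$:

```latex
% Heuristic sketch of one-photon detection at \vec r' applied to |\Psi_n>.
\begin{multline}
a_{\mathrm{eff}}(\vec r')\,|\Psi_n\rangle
\propto \int d^2 \vec s\, d^2 \vec s'\,
A(\vec s)\, h(\vec s, \vec r')\, a(\vec s)
\left(a^+ (\vec s')\right)^n |0\rangle \\
= n \int d^2 \vec s\, A(\vec s)\, h(\vec s, \vec r')
\left(a^+ (\vec s)\right)^{n-1} |0\rangle
\propto |\Psi_{n-1}^{(1)}(\vec r')\rangle .
\end{multline}
```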
If, according to the second possibility, the $n$th photon goes outside the aperture of the imaging system and has the transverse momentum component $\vec k$, the effective state of the remaining photons is
\begin{equation}
\label{eq:Psi_n-1(2)}
|\Psi_{n-1}^{(2)}(\vec k)\rangle \propto \int d^2 \vec s A(\vec s) e^{i \vec k \cdot \vec s} (a^+(\vec s))^{n-1} |0\rangle.
\end{equation}
An important feature of Eq. (\ref{eq:Psi_n-1(2)}) is the factor $e^{i \vec k \cdot \vec s}$, which effectively introduces the periodic phase modulation of the field, illuminating the object, and leads to a similar effect as intensity modulation for the structured illumination approach \cite{classen2017superresolution,classen2018analysis}.
To take into account possible absorption of the $n$th photon by the imaged object, one can introduce an additional mode and model the object as a beamsplitter (see e.g. Ref. \cite{lemos2014quantum}). Similarly to the two previously considered cases, the following expression can be derived for the effective $(n-1)$-photon density operator:
\begin{equation}
\label{eq:rho_n-1(3)}
\rho_{n-1}^{(3)} = \int d^2 \vec s [1 - |A(\vec s)|^2] (a^+(\vec s))^{n-1} |0\rangle \langle 0 |(a(\vec s))^{n-1}.
\end{equation}
By averaging over the three discussed possibilities (see Methods), one can obtain the following expression for the effective state of the remaining $(n-1)$ photons:
\begin{equation}
\label{eq:rho_n-1_final}
\rho_{n-1} = \int d^2 \vec s (a^+(\vec s))^{n-1} |0\rangle \langle 0 |(a(\vec s))^{n-1},
\end{equation}
which indicates the well-known effect of turning an $n$-photon entangled state into an $(n-1)$-photon classically correlated state when one of the photons is excluded from consideration.
The detailed derivation of this result, while formally quite straightforward, leads us to the following physical conclusions:
\begin{itemize}
\item The effective state of $n-1$ photons (and the $(n-1)$th order correlation function) is modified, even if the $n$th photon is not detected, and depends on its ``fate'' (the way the photon actually passes).
\item The effective state of $(n-1)$-photons might be changed in a way, which provides the object resolution enhancement.
\item When $n$-photon coincidences are measured, the $n$th photon detection effectively performs postselection, since all outcomes in which that photon is lost are discarded.
\item For $n > 2$, the advantage gained from registering more photon coincidences with the $n$th photon detection does not compensate for the information loss caused by discarding outcomes corresponding to the strongly modified $(n-1)$-photon state, which is more sensitive to the object features (see Eqs. (\ref{eq:Gn}) and (\ref{eq:Gn-1})).
\end{itemize}
Further, we discuss how the advantageous outcomes can be postselected, instead of being discarded, for resolution enhancement.
\subsection{Model example}
To gain better understanding of the processes of resolution enhancement by a photon loss and postselection, let us consider a standard model object illuminated by an $n$-photon entangled state and consisting of two pinholes, which are separated by the distance $2 d$ and positioned at the points $\vec d$ and $- \vec d$ (Fig.~\ref{fig:model_example}a). If the pinholes are small enough, the light passing through the object can be decomposed into just two field modes, corresponding to the spherical waves emerging from the two pinholes and further denoted by the indices ``$+$'' and ``$-$'' for the upper and the lower pinhole respectively. For simplicity's sake, we assume that the PSF is a real-valued function, and that the light state directly after the object has the form of a NOON-state of the discussed modes ``$+$'' and ``$-$'':
\begin{equation}
|\Phi_n \rangle \propto | n \rangle_{+} | 0 \rangle_{-} + | 0 \rangle_{+} | n \rangle _{-}.
\end{equation}
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{Model_example.pdf}
\caption{Model example: imaging two pinholes (a), separated by the distance $2 d$. Constructive (b), absent (c) and destructive interference (d) of the light passing through the pinholes. Dot-dashed lines represent separate contributions from the pinholes; dashed line shows the interference signal; solid line represents the sum of all the contributions.}
\label{fig:model_example}
\end{figure}
The $n$th-order correlation signal contains separate contributions from the single pinholes and a cross-term, caused by constructive interference and leading to additional blurring of the image (Fig.~\ref{fig:model_example}b):
\begin{equation}
\label{eq:Gn_model}
G^{(n)}(\vec r) \propto h^{2n}(\vec d, \vec r) + h^{2n} (-\vec d, \vec r) + 2 h^n (\vec d, \vec r) h^n (-\vec d, \vec r).
\end{equation}
The $(n-1)$th-order correlations include only separate single-pinhole signals and produce a sharper image (Fig.~\ref{fig:model_example}c):
\begin{equation}
\label{eq:Gn-1_model}
G^{(n-1)}(\vec r) \propto h^{2(n-1)}(\vec d, \vec r) + h^{2(n-1)} (-\vec d, \vec r).
\end{equation}
Let us interpret these results in terms of detecting $n-1$ photons conditioned by the $n$th photon detection. According to Eq. (\ref{eq:Psi_n-1(1)}), if the photon is detected at the point $\vec r$ of the image plane, it transforms the state of the remaining photons into
\begin{multline}
|\Phi_{n-1}^{(1)}(\vec r) \rangle \propto h(\vec d, \vec r) | n - 1 \rangle_{+} | 0 \rangle_{-} \\ {} + h(-\vec d, \vec r) | 0 \rangle_{+} | n - 1 \rangle _{-}.
\end{multline}
The state coherence is preserved, while the blurring caused by constructive interference is slightly reduced due to the partial ``which path'' information provided by the $n$th photon detection.
If the $n$th photon is characterized by the transverse momentum $\vec k$, $|\vec k| > k_{max}$, and does not get into the imaging system aperture, the effective modified state of the remaining photons is (see Eq. (\ref{eq:Psi_n-1(2)})):
\begin{multline}
|\Phi_{n-1}^{(2)}(\vec k) \rangle \propto e^{i \vec k \cdot \vec d} | n - 1 \rangle_{+} | 0 \rangle_{-} \\ {} + e^{-i \vec k \cdot \vec d} | 0 \rangle_{+} | n - 1 \rangle _{-}.
\end{multline}
Now the phase shift between the two modes depends on $\vec k$ and can lead to destructive interference, which enhances the contrast of the image. For example, when $\vec k \cdot \vec d = \pi / 2$, one has maximally destructive interference and the detected signal is proportional to $|h^{n-1}(\vec d, \vec r) - h^{n-1} (-\vec d, \vec r)|^2$ with $100\%$ visibility of the gap between the two peaks (Fig.~\ref{fig:model_example}d).
Discarding the information about the $n$th photon (measuring $G^{(n-1)}$) corresponds to averaging over the possibilities to have the photon passing to the detector and missing it, and yields the following mixed state of the remaining photons:
\begin{multline}
\rho_{n-1} \propto |n - 1\rangle_{+} \langle n - 1| \otimes |0\rangle_{-} \langle 0 | \\{} + |0\rangle_{+} \langle 0| \otimes |n - 1\rangle_{-} \langle n - 1 |.
\end{multline}
That is, the cross-terms with constructive and destructive interference cancel each other, and the resulting mixed state still allows for some resolution gain over the pure $n$-photon NOON state.
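The model signals of Eqs.~(\ref{eq:Gn_model}) and (\ref{eq:Gn-1_model}), together with the destructive-interference case, can be compared numerically. The sketch below assumes a Gaussian PSF and an illustrative pinhole separation; the ad hoc figure of merit is the contrast between the brightest image point and the midpoint between the pinholes:

```python
from math import exp

def h(x, x0, sigma=1.0):
    """Gaussian stand-in for the (real-valued) PSF."""
    return exp(-(x - x0) ** 2 / (2 * sigma ** 2))

def signals(n, d=0.7):
    xs = [i * 0.01 for i in range(-400, 401)]
    Gn = [(h(x, d) ** n + h(x, -d) ** n) ** 2 for x in xs]          # coherent sum
    Gnm1 = [h(x, d) ** (2 * (n - 1))
            + h(x, -d) ** (2 * (n - 1)) for x in xs]                # incoherent sum
    Gdes = [(h(x, d) ** (n - 1) - h(x, -d) ** (n - 1)) ** 2
            for x in xs]                                            # destructive case
    return xs, Gn, Gnm1, Gdes

def dip_visibility(xs, S):
    """Contrast between the brightest point and the midpoint x = 0."""
    mid, peak = S[xs.index(0.0)], max(S)
    return (peak - mid) / (peak + mid)

xs, Gn, Gnm1, Gdes = signals(n=3)
v_n, v_nm1, v_des = (dip_visibility(xs, S) for S in (Gn, Gnm1, Gdes))
```

For $n = 3$ the ordering of the computed visibilities is consistent with Fig.~\ref{fig:model_example}b--d: the destructive case reaches unit visibility, and the $(n-1)$-photon signal shows a deeper dip than the $n$-photon one.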
\subsection{Application to quantum imaging}
To illustrate a possible application of these ideas to practical quantum imaging, we consider an object represented by a set of semitransparent slits (Fig.~\ref{fig:simulations}a,c). The resolution of the modeled optical system is limited by diffraction at the lens aperture, which admits only photons with transverse momentum $\vec k$ not exceeding $k_{max}$: $|\vec k| \le k_{max}$.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{Simulations.pdf}
\caption{Model objects (a, c), imaging scheme for $(n-1)$-photon coincidence conditioned by detecting a photon in the region $\Omega$ (e), and simulation results (b, d). Sets of semitransparent slits with transmission amplitudes $0.5 \div 1$ (a) and $0.9 \div 1$ (c) were used as model objects. The signal ($G^{(n)}$ --- dotted lines, $G^{(n-1)}$ --- dashed lines, $G^{(n-1,1)}$ --- solid lines) was simulated for the object from panel (a) and $n = 3$ (b) and 4 (d). The detection region for the $n$th photon is $\Omega = \{ \vec k \colon k_{max} \le |\vec k| \le 2 k_{max}\}$. The axis $x$ is directed across the slits. The coordinate $x$ is normalized by the slit size $d$ indicated in plot (a).}
\label{fig:simulations}
\end{figure}
We compare the following three strategies: (i) measuring $G^{(n)} (\vec r)$ along the direction perpendicular to the slits (the signal is described by Eq.~(\ref{eq:Gn})); (ii) measuring $G^{(n-1)} (\vec r)$ at the same points (Eq.~\ref{eq:Gn-1}); (iii) measuring coincidence signal $G^{(n-1,1)}(\vec r, \Omega)$ of $n-1$ photons detected at the point $\vec r$ of the detection plane and the $n$th photon being anywhere in certain region $\Omega$ outside the lens aperture. For the latter case the signal can be written as
\begin{multline}
G^{(n-1,1)}(\vec r, \Omega) \propto \int d^2 \vec s \, d^2 \vec s' A^n (\vec s) A^n(\vec s') \\{} \times h^{n-1}(\vec s, \vec r) h^{n-1}(\vec s', \vec r) g(\vec s - \vec s'),
\end{multline}
where
\begin{equation}
\label{eq:kernel}
g(\vec s - \vec s') = \int_{\vec k \in \Omega} d^2 \vec k e^{i \vec k \cdot (\vec s - \vec s')}.
\end{equation}
Integration in Eq.~(\ref{eq:kernel}) corresponds to detection of the $n$th photon by a bucket detector, similarly to multiphoton ghost imaging \cite{agafonov2009high,chan_high-order_2009,chen_arbitrary-order_2010}. However, all the $n$ photons (including the $n-1$ ones, which get to the position-resolving detector) do pass through the investigated object. Our scheme can also be considered as a generalization of hybrid near- and far-field imaging, when the entangled photons are analyzed partially in position space and partially in momentum space.
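The kernel of Eq.~(\ref{eq:kernel}) for an annular detection region depends only on $|\vec s - \vec s'|$ and can be evaluated by direct quadrature. The sketch below uses illustrative cutoffs $k_1 = k_{max}$ and $k_2 = 2 k_{max}$ in units where $k_{max} = 1$; at zero separation the kernel must reduce to the area of $\Omega$:

```python
from cmath import exp as cexp
from math import cos, pi

def g_kernel(delta, k1=1.0, k2=2.0, nk=200, nphi=400):
    """Midpoint quadrature, in polar coordinates, of the kernel
    g(s - s') for an annular detection region k1 <= |k| <= k2."""
    dk, dphi = (k2 - k1) / nk, 2 * pi / nphi
    total = 0j
    for i in range(nk):
        k = k1 + (i + 0.5) * dk
        for j in range(nphi):
            phi = (j + 0.5) * dphi
            total += k * cexp(1j * k * delta * cos(phi)) * dk * dphi
    return total

g0 = g_kernel(0.0)     # should equal the annulus area pi * (k2**2 - k1**2)
g_osc = g_kernel(1.5)  # oscillates and decays for separations ~ 1/k_max
```

The kernel is real by symmetry of $\Omega$, and its oscillatory decay is what imprints the effective phase structure discussed above.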
Simulated images are shown in Fig.~\ref{fig:simulations}b,d. One can clearly see that $(n-1)$-photon detection yields better visual resolution than measurement of the $n$th-order coincidences, while $G^{(n-1,1)}$ provides additional contrast enhancement.
Of course, the effective narrowing of the PSF by measuring correlation functions does not necessarily imply a corresponding increase in the precision of inferring the analyzed parameters (positions of the object details, channel characteristics, etc.) \cite{pearce_precision_2015,vlasenko2020}. However, at least for certain imaging tasks (such as, for example, the cornerstone problem of finding the distance between two point sources in far-field imaging), narrowing of the PSF can indeed lead to an increase of the informational content per measurement, and to potentially unlimited resolution with increasing $n$ \cite{vlasenko2020}.
To describe the resolution enhancement in a quantitative and more consistent way, we employ Fisher information. Let the transmission amplitude of the object be decomposed as $A(\vec s) = \sum_\mu \theta_\mu f_\mu (\vec s)$, where the basis functions $f_\mu (\vec s)$ can represent e.g. slit-like pixels for the considered example \cite{mikhalychev_efficiently_2019}. Then the problem of finding $A(\vec s)$ becomes equivalent to reconstruction of the unknown decomposition coefficients $\theta_\mu$. If one has certain signal $S(\vec r)$, sampled at the points $\{\vec r_i\}$, Fisher information matrix (FIM) \cite{fisher1925theory,rao1945information}, normalized by a single detection event, can be introduced as
\begin{equation}
\label{eq:Fisher}
F_{\mu\nu} = \sum_i\left(\frac{1}{S(\vec r_i)} \frac{\partial S(\vec r_i)}{\partial \theta_\mu} \frac{\partial S(\vec r_i)}{\partial \theta_\nu} \right) / \sum_i S(\vec r_i).
\end{equation}
Cram{\'e}r-Rao inequality \cite{cramer1946,rao1945information} bounds the total reconstruction error (the sum of variances of the estimators for all the unknowns $\{\theta_\mu\}$) by the trace of the inverse of FIM:
\begin{equation}
\label{eq:Delta2}
\Delta^2 = \sum_\mu \Delta^2 \theta_\mu \ge \frac{1}{N} \operatorname{Tr} F^{-1},
\end{equation}
where $N$ is the number of registered coincidence events. When the size of the analyzed object features (e.g. the slit size $d$ in Fig.~\ref{fig:simulations}a) tends to zero, the bound in Eq.~(\ref{eq:Delta2}) diverges (the effect is termed ``Rayleigh's curse''). The achievable resolution can, therefore, be determined by the feature size $d$, for which $\operatorname{Tr} F^{-1}$ starts growing rapidly with the decrease of $d$. A more rigorous definition can be given by specifying certain reasonable threshold $N_{max}$ for the maximal required number of registered coincidence events $N$ (e.g. we take $N_{max} = 10^5$ for further examples) and imposing the restriction $\operatorname{Tr} F^{-1} \le N_{max}$ (see Methods).
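The recipe of Eqs.~(\ref{eq:Fisher}) and (\ref{eq:Delta2}) can be sketched for a toy two-parameter model: two slit-like sources with transmission amplitudes $\theta_{1,2}$, a Gaussian stand-in for $|h|^2$, and central finite differences instead of analytic derivatives; all parameter values are illustrative:

```python
from math import exp

def fisher_matrix(signal, theta, points, eps=1e-6):
    """Per-event Fisher information matrix, with the derivatives of
    the signal model taken by central finite differences."""
    S = [signal(theta, x) for x in points]
    norm = sum(S)
    grads = []
    for mu in range(len(theta)):
        tp, tm = list(theta), list(theta)
        tp[mu] += eps
        tm[mu] -= eps
        grads.append([(signal(tp, x) - signal(tm, x)) / (2 * eps)
                      for x in points])
    return [[sum(ga[i] * gb[i] / S[i] for i in range(len(points))) / norm
             for gb in grads] for ga in grads]

def trace_inverse_2x2(F):
    """Cramer-Rao bound per detection event for a two-parameter model."""
    det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
    return (F[0][0] + F[1][1]) / det

# Toy (n-1)-photon signal with n = 3 and a Gaussian stand-in for |h|^2.
psf2 = lambda x, x0: exp(-(x - x0) ** 2)
signal = lambda th, x: (th[0] ** 4 * psf2(x, -0.5) ** 2
                        + th[1] ** 4 * psf2(x, 0.5) ** 2)
points = [i * 0.25 for i in range(-12, 13)]
F = fisher_matrix(signal, [0.9, 1.0], points)
bound = trace_inverse_2x2(F)
```

As the source separation shrinks, the off-diagonal FIM elements grow relative to the diagonal ones and the bound diverges, which is the onset of the ``Rayleigh's curse'' behavior described above.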
The dependence of the predicted reconstruction error on the normalized object scale $d / d_R$ (where $d_R = 3.83 / k_{max}$ is the classical Rayleigh limit for the considered optical system) is shown in Fig.~\ref{fig:Fisher}. The sampling points for the signal are taken with the step $d/2$ along a line perpendicular to the slits. As expected, ignoring the $n$th photon and measuring $(n-1)$-photon coincidences brings larger achievable information per measurement and, correspondingly, lower errors in object-parameter estimation, yielding $(10\div20)\%$ better resolution for $n = 3$ and 4. In agreement with the theoretical predictions, no resolution increase is observed for $n = 2$.
Also, our results confirm that the proposed hybrid scheme is indeed capable of increasing the resolution for $n = 3$ and 4 by conditioned detection of the $n$th photon. The additional information gained relative to the measurement of $(n-1)$-photon correlations is about $(10\div 15)\%$. However, this gain vanishes in the regime of deep superresolution ($d \lesssim 0.2 d_R$): the black solid and red dot-dashed lines intersect with the green dashed one for small $d / d_R$ in Fig.~\ref{fig:Fisher}. The reason for such behavior is that the effective phase shift, introduced by detection of the $n$th photon with the transverse momentum $\vec k$ outside of the aperture of the imaging system, becomes insufficient for resolution enhancement when $|\vec k| d \ll 1$.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{Fisher.pdf}
\caption{Dependence of the trace of inverse FIM on the normalized slit size $d / d_R$ for detection of $n$th-order correlations (blue dotted lines), $(n-1)$th-order correlations (green dashed lines), and $(n-1)$th-order correlations, conditioned by detection of the $n$th photon in the region $\Omega = \{ \vec k \colon k_{max} \le |\vec k| \le 2 k_{max}\}$ (black solid lines) and $\Omega = \{ \vec k \colon k_{max} \le |\vec k| \le 1.5 k_{max}\}$ (red dot-dashed lines) for $n = 4$ (a,b), 3 (c,d), and 2 (e,f). The objects are shown in Fig.~\ref{fig:simulations}a (for plots a, c, and e) and Fig.~\ref{fig:simulations}c (for plots b, d, and f). Horizontal dotted lines indicate the threshold $\operatorname{Tr}F^{-1} \le N_{max} = 10^5$, used for quantification of resolution.}
\label{fig:Fisher}
\end{figure}
For $n=2$, the scheme can also provide certain advantages (Fig. \ref{fig:Fisher}e,f); however, they are less prominent because they do not stem from the fundamental requirement of having better resolution for $G^{(n-1)}$ than for $G^{(n)}$. Still, taking into account the difficulties in the generation of 3-photon entangled states \cite{hubel2010direct,keller1998theory,wen2010tripartite,corona2011experimental,corona2011third,borshchevskaya2015three}, an experiment with biphotons can be proposed for initial tests of the approach.
The plots, shown in Fig. \ref{fig:Fisher}, represent information per single detection event. Therefore, certain concerns about the rates of such events may arise: waiting for a highly informative, but very rare event can be impractical. Fig. \ref{fig:rate} shows the ratio of the overall detection probabilities $p_{n-1,1} / p_n$, where $p_{n-1,1}$ corresponds to $(n-1)$-photon coincidence, conditioned by the detection of a photon outside the aperture, and $p_n$ describes traditional measurement of $n$-photon coincidences. For a signal $S(\vec r_i)$, the overall detection probability is defined as $p = \sum_i S(\vec r_i)$ and represents the denominator of Eq. (\ref{eq:Fisher}). When plotting Fig. \ref{fig:rate}, we do not include the measurement of $G^{(n-1)}$ into the comparison, because the ratio of probabilities for $(n-1)$- and $n$-photon detection events strongly depends on details of a particular experiment, such as the efficiency of the detectors.
\begin{figure}[htbp]
\includegraphics[width=0.6\linewidth]{Rate.pdf}
\caption{Dependence of the overall detection probabilities ratio on the normalized slit size $d / d_R$ (see details in the text). The $n$th photon is detected in the region $\Omega = \{ \vec k \colon k_{max} \le |\vec k| \le 2 k_{max}\}$ (solid black and dashed green lines) or $\Omega = \{ \vec k \colon k_{max} \le |\vec k| \le 1.5 k_{max}\}$ (dot-dashed red and dotted blue lines) for $n = 4$. The objects are shown in Fig.~\ref{fig:simulations}a (for solid black and dot-dashed red lines) and Fig.~\ref{fig:simulations}c (for dashed green and dotted blue lines). Horizontal dotted lines indicate the threshold, used for quantification of resolution.}
\label{fig:rate}
\end{figure}
The rate of $(n-1)$-photon coincidences, conditioned by the detection of a photon outside of the aperture, is indeed $3\div 20$ times smaller than the rate of $n$-photon coincidences. However, for the considered multi-parametric problem, the ``Rayleigh curse'' leads to a very fast decrease of information when the slit size $d$ becomes smaller than the actual resolution limit, and the effect of the rate difference is almost negligible. For example, for $n = 4$, the object shown in Fig. \ref{fig:simulations}a, and the threshold $\operatorname{Tr}F^{-1} \le 10^5$, the minimal slit width $d$ for successful resolution of transmittances equals $0.212 d_R$ for the measurement of $G^{(n)}$, $0.170 d_R$ for the measurement of $G^{(n-1,1)}$ with the $n$th photon detected in the region $\Omega = \{ \vec k \colon k_{max} \le |\vec k| \le 2 k_{max}\}$, and $0.177 d_R$ for the same measurement of $G^{(n-1,1)}$ when the reduced detection rate is taken into account. Notice that all the mentioned values of the resolved feature size $d$ are quite far beyond the classical resolution limit $d_R$.
At first glance, the reported percentage of resolution enhancement may not look very impressive or encouraging. However, one should keep in mind that the increase of the number of entangled photons from $n=2$ to $n=3$, while requiring significant experimental efforts, leads to an effective PSF narrowing of just 22\% for the measurement of $G^{(n)}$. The transition from a 3-photon entangled state to a 4-photon one yields only a 15\% narrower PSF. Moreover, the actual resolution enhancement is typically smaller than the relative change of the PSF width \cite{pearce_precision_2015,vlasenko2020}, especially for high orders of the correlations, where it may saturate completely. The proposed approach provides a similar magnitude of resolution increase at the cost of adding a bucket detector to the imaging scheme, which is much simpler than changing the number of entangled photons.
A similar concern about the soundness of the results may be raised by recalling the simple problem of resolving two point sources, commonly investigated theoretically \cite{tsang2016quantum,paur2018tempering,paur2019reading}. For such a simple model situation, the error of inferring the distance $d$ between the sources scales as $\Delta d \propto d^{-1} N^{-1/2}$ \cite{paur2018tempering}, where $N$ is the number of detected events. Therefore, to resolve a twice smaller separation of the two sources with the same error, one just needs to perform a 4 times longer experiment and collect $4N$ events. The situation becomes completely different when a more practical multiparametric problem is considered \cite{mikhalychev_efficiently_2019}: the achievable resolution becomes practically insensitive to the data acquisition time (as soon as the number of detected events becomes sufficiently large). For example, for the situation described by the solid black line in Fig.~\ref{fig:Fisher}a, a 100-fold increase of the acquisition time improves the resolution by only 14\%.
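To make the two-point-source scaling explicit, a short sketch (plain Python; the function name and interface are ours, purely illustrative) computes the number of events required to hold the error fixed while shrinking the separation:

```python
def events_needed(d_ratio, n_ref):
    # Two-point-source model: the error scales as d^{-1} N^{-1/2}, so
    # keeping the error fixed while shrinking the separation by a factor
    # d_ratio requires n_ref / d_ratio**2 detected events.
    return n_ref / d_ratio ** 2

# halving the separation (d_ratio = 0.5) requires a 4x longer experiment
print(events_needed(0.5, 1000))  # -> 4000.0
```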
\section{Discussions}
We have demonstrated how to enhance the resolution of imaging with an $n$-photon entangled state by losing a photon and measuring the $(n-1)$th-order correlation function instead of the $n$th-order coincidence signal. The resolution gain occurs despite the breaking of entanglement caused by the photon loss. We have explained the effect in terms of the effective modification of the remaining photons' state when one of the entangled photons is lost. Measurement of $(n-1)$-photon coincidences for an $n$-photon entangled state not only discards some information carried by the ignored $n$th photon, but also makes the resulting signal more informative in the considered imaging experiment. The latter effect prevails for $n > 2$ and leads to an increase of the information per measurement and to a decrease of the lower bounds for the object inference errors.
The $(n-1)$-photon detection represents a mixture of different possible outcomes for the discarded $n$th photon, including its successful detection at the image plane (resulting in the $n$-photon coincidence signal). The information per single $(n-1)$-photon detection event is averaged over the discussed possibilities and, for $n > 2$, is larger than the information for a single $n$-photon coincidence event. It means that certain outcomes for the $n$th photon provide more information per event than the average value, achieved for $G^{(n-1)}$. We prove this proposition constructively by devising a hybrid measurement scheme, which increases the resolution relative to the detection of $(n-1)$-photon coincidences. Intentional detection of a photon outside the optical system, used for imaging of the object, introduces an additional phase shift and increases the sensitivity of the measurement performed with the remaining photons. Our simulations show that the effect can be observed even for $n = 2$, thus making its practical implementation much more realistic. We believe that our observation will pave the way for practical exploitation of entangled states by devising a superresolving imaging scheme conditioned on detecting photons not only successfully passing through the imaging system, but also those missing it.
\section{Methods}
\subsection{Expressions for field correlation functions}
For the imaging setup, shown in Fig. \ref{fig:scheme}, the positive-frequency field operators $E^{(+)}(\vec r)$ at the detection plane are connected to the operators $E_0^{(+)}(\vec s)$ of the field illuminating the object as
\begin{equation}
\label{eq:Eplus}
E^{(+)}(\vec r) = \int d^2 \vec s E_0^{(+)}(\vec s) A(\vec s) h(\vec s, \vec r).
\end{equation}
The $n$th-order correlation function for the $n$-photon entangled state (\ref{eq:Psi_n}) is calculated according to the following standard definition:
\begin{equation}
\label{eq:Gn_def}
G^{(n)}(\vec r) = \langle \Psi_n | \left[ E^{(-)} (\vec r) \right]^n \left[ E^{(+)} (\vec r) \right]^n |\Psi_n \rangle,
\end{equation}
where $E^{(-)}(\vec r) = [E^{(+)}(\vec r)]^+ $ is the negative-frequency field operator. By substitution of Eq. (\ref{eq:Eplus}) into Eq. (\ref{eq:Gn_def}), one can obtain the expression (\ref{eq:Gn}) in the Results section.
The $(n-1)$th-order correlation function is calculated according to the expression
\begin{equation}
\label{eq:Gn-1_def}
G^{(n-1)}(\vec r) = \langle \Psi_n | \left[ E^{(-)} (\vec r) \right]^{n-1} \left[ E^{(+)} (\vec r) \right]^{n-1} |\Psi_n \rangle ,
\end{equation}
which yields Eq. (\ref{eq:Gn-1}) after substitution of Eq. (\ref{eq:Eplus}).
\subsection{Effective $(n-1)$-photon state}
The density operator, describing the effective $(n-1)$-photon state averaged over the possible ``fates'' of the $n$th photon, discussed in the main text, is
\begin{equation}
\label{eq:rho_n-1}
\rho_{n-1} = \rho_{n-1}^{(1)} +\rho_{n-1}^{(2)} +\rho_{n-1}^{(3)} ,
\end{equation}
where the operators $\rho_{n-1}^{(k)} $ are indexed according to the introduced possibilities and normalized in such a way that $\operatorname{Tr}\rho_{n-1}^{(k)} $ is the probability of the $k$th ``fate''.
According to the approach, discussed in Ref. \cite{bhatti2018generation}, detection of the $n$th photon at the position $\vec r'$ of the detector effectively modifies the states of the remaining $(n - 1)$ photons in the following way:
\begin{equation}
\label{eq:Psi_n-1(1)_def}
|\Psi_{n-1}^{(1)}(\vec r')\rangle \propto E^{(+)} (\vec r ') |\Psi_n\rangle.
\end{equation}
Substitution of Eqs. (\ref{eq:Psi_n}) and (\ref{eq:Eplus}) yields Eq. (\ref{eq:Psi_n-1(1)}).
If we ignore the information about the position $\vec r'$ of the photon detection, the contribution to the averaged density operator (\ref{eq:rho_n-1}) is
\begin{equation}
\label{eq:rho_n-1(1)}
\rho_{n-1}^{(1)} = \int d^2 \vec r' |\Psi_{n-1}^{(1)}(\vec r')\rangle \langle \Psi_{n-1}^{(1)}(\vec r')|.
\end{equation}
For the possibility, described by Eq. (\ref{eq:Psi_n-1(2)}) and corresponding to the $n$th photon passage outside the aperture of the imaging system, the contribution to the averaged density operator (\ref{eq:rho_n-1}) is
\begin{equation}
\label{eq:rho_n-1(2)}
\rho_{n-1}^{(2)} = \int_{|\vec k| > k_{max}} d^2 \vec k |\Psi_{n-1}^{(2)}(\vec k)\rangle \langle \Psi_{n-1}^{(2)}(\vec k)|,
\end{equation}
where $k_{max}$ is the maximal transverse momentum transferred by the optical system: $k_{max} = k R / s_o$; $k$ is the wavenumber of the light, $R$ is the radius of the aperture, and $s_o$ is the distance between the object and the lens used for imaging.
Calculating integrals in Eqs. (\ref{eq:rho_n-1(1)}), (\ref{eq:rho_n-1(2)}), and (\ref{eq:rho_n-1(3)}) and taking into account the connection between the PSF shape and $k_{max}$ (see e.g. Ref. \cite{shih_2018_introduction}), one can obtain Eq. (\ref{eq:rho_n-1_final}) for the effective $(n-1)$-photon state.
\subsection{Model of point-spread function}
For the simulations, illustrated by Figs. \ref{fig:simulations}, \ref{fig:Fisher}, and \ref{fig:rate}, we assume for simplicity that the magnification of the optical system is equal to 1, neglect the phase factor in the PSF, and use the expression
\begin{multline}
\label{eq:h}
h(\vec s, \vec r) = \int _{|\vec k| \le k_{max}} d^2 \vec k\, e^{i \vec k \cdot (\vec s + \vec r)} \\{} = \pi k_{max}^2 \operatorname{somb} \left(k_{max} \left|\vec s + \vec r\right|\right),
\end{multline}
where $\operatorname{somb}(x) = 2 J_1(x) / x $, $J_1(x)$ is the first-order Bessel function, and $k_{max}$ is the maximal transverse momentum transferred by the optical system.
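A minimal numerical sketch of this PSF model follows (pure-stdlib Python; the helper names are ours, and $J_1$ is evaluated from its integral representation rather than a library call). The prefactor $\pi k_{max}^2$ comes from direct evaluation of the disk integral; the overall normalization constant does not affect the PSF shape in any case.

```python
import math

def bessel_j1(x, n=2000):
    # J1(x) = (1/pi) * integral_0^pi cos(theta - x sin(theta)) d(theta),
    # evaluated with the midpoint rule
    h = math.pi / n
    return sum(math.cos((k + 0.5) * h - x * math.sin((k + 0.5) * h))
               for k in range(n)) * h / math.pi

def somb(x):
    # somb(x) = 2 J1(x) / x, with somb(0) = 1 by continuity
    if abs(x) < 1e-12:
        return 1.0
    return 2.0 * bessel_j1(x) / x

def psf(s, r, k_max=1.0):
    # circular-aperture PSF as in the expression above:
    # unit magnification, phase factor neglected
    d = math.hypot(s[0] + r[0], s[1] + r[1])
    return math.pi * k_max ** 2 * somb(k_max * d)
```

The first zero of $\operatorname{somb}$ sits at $k_{max}\left|\vec s + \vec r\right| \approx 3.83$, consistent with the Rayleigh limit $d_R = 3.83 / k_{max}$ quoted earlier.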
\subsection{Quantification of resolution}
Let us assume that a reasonable number of detected coincidence events $N$ in a quantum imaging experiment is limited by the value $N_{max}$ and the acceptable total reconstruction error (see Eq. (\ref{eq:Delta2})) is $\Delta^2 \le 1$. Therefore, Eq. (\ref{eq:Delta2}) implies the following threshold for the trace of the inverse of FIM:
\begin{equation}
\label{eq:FIM_threshold}
\operatorname{Tr}F^{-1} \le N \Delta^2 \le N_{max}.
\end{equation}
Therefore, one can define the spatial resolution, achievable under the described experimental conditions, as the minimal feature size $d$, for which the condition (\ref{eq:FIM_threshold}) is satisfied.
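This criterion translates into a simple numerical procedure: given (an estimate of) $\operatorname{Tr}F^{-1}$ as a function of the feature size $d$, the achievable resolution is the smallest $d$ meeting the threshold. A sketch follows; the toy model of the ``Rayleigh curse'' below is purely illustrative and is our assumption, not the paper's actual Fisher information matrix.

```python
def min_resolved_feature(trace_inv_fim, n_max=1e5, d_lo=1e-3, d_hi=1.0, tol=1e-4):
    # Smallest feature size d with Tr F^{-1}(d) <= n_max, assuming
    # Tr F^{-1} decreases monotonically as d grows; bisection on [d_lo, d_hi].
    if trace_inv_fim(d_hi) > n_max:
        raise ValueError("threshold not met even at d_hi")
    while d_hi - d_lo > tol:
        mid = 0.5 * (d_lo + d_hi)
        if trace_inv_fim(mid) <= n_max:
            d_hi = mid
        else:
            d_lo = mid
    return d_hi

# toy model of the 'Rayleigh curse': information collapses as d -> 0
toy = lambda d: 1e3 / d ** 4
d_min = min_resolved_feature(toy, n_max=1e5)  # analytic answer: 0.1**0.5 ~ 0.316
```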
\section{Acknowledgements}
All the authors acknowledge financial support from the King Abdullah University of Science and Technology (grant 4264.01), A. M., I. K., and D.M. also acknowledge support from the EU Flagship on Quantum Technologies, project PhoG (820365).
\section{Author contributions}
The theory was conceived by A.M. and D.M. Numerical calculations were performed by A.M. The project was supervised by D.M., D.L.M., and A.M. All the authors participated in the manuscript preparation, discussions and checks of the results.
\section{Competing interests}
The authors declare no competing interests.
\section{Introduction}
Current quantum computers are noisy and characterized by a reduced number of qubits (5-50) with non-uniform quality and highly constrained connectivity. These systems and near-term ones (with 1000 qubits or fewer), denoted as Noisy Intermediate-Scale Quantum (NISQ) devices, may be able to surpass the capabilities of today's most powerful classical digital computers for some problems. However, noise in quantum gates limits the size of quantum circuits that can be executed reliably. To reliably process information, the physical qubits should maintain the quantum state for sufficiently long. The qubits should also support sufficiently precise operations to allow for correct state manipulation within the coherence window. Last but not least, the qubits should support accurate measurement operations.
Quantum compilation is the problem of translating an input quantum circuit into the most efficient equivalent of itself~\cite{Corcoles2020}, taking into account the characteristics of the device that will execute the computation. In general, the quantum compilation problem is NP-Hard~\cite{Botea2018,Soeken2019}.
Compilation strategies are composed of sequential \textit{passes} that perform placement, routing and optimization tasks. Placement is performed once, at the very beginning of the compiling process. It is the task of defining a mapping between the virtual qubits of the input quantum circuit and the physical qubits of the device. Routing is the task of modifying the circuit in order to move through subsequent mappings, by means of a clever swapping strategy, in order to conform to the qubit layout of the device. Optimization is the task of minimizing some property of the circuit in order to reduce the impact of noise. Several routing and optimization passes may be executed within a compilation strategy.
Noise-adaptive compilers do take the noise statistics of the device into account~\cite{Murali2019,Niu2020, Nishio2020, Sivarajah2020}, for some or all passes. The noise statics can be obtained from calibration data, and updated after each device calibration. For example, Qiskit\footnote{\url{https://qiskit.org/}} allows for programmatic retrieval of IBM Q devices' calibration data.
In the literature, compiled circuits are frequently evaluated in terms of depth and gate-count overhead with respect to the input circuits. Calculating these figures of merit does not require executing the compiled circuit. Another common method is to run the compiled circuit many times, with the figure of merit being the success rate, i.e., the fraction of runs that resulted in a correct (classical) answer. The Hellinger fidelity \cite{Pollard2001} is a measure of the distance between two probability distributions. It is frequently used to compare the sampled distribution of the results of a quantum computation to the theoretical distribution (provided that the latter is known).
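For concreteness, a minimal implementation of this figure of merit is sketched below. Distributions are plain dicts mapping outcome bitstrings to probabilities; the squared-Bhattacharyya convention for the fidelity is the one used, e.g., by Qiskit's \texttt{hellinger\_fidelity}, and the sample data are hypothetical.

```python
import math

def hellinger_distance(p, q):
    # H(p, q) = sqrt(1 - BC), with the Bhattacharyya coefficient
    # BC = sum_i sqrt(p_i * q_i) over all outcomes
    keys = set(p) | set(q)
    bc = sum(math.sqrt(p.get(k, 0.0) * q.get(k, 0.0)) for k in keys)
    return math.sqrt(max(0.0, 1.0 - bc))

def hellinger_fidelity(p, q):
    # (1 - H^2)^2 = BC^2: 1 for identical distributions, 0 for disjoint ones
    return (1.0 - hellinger_distance(p, q) ** 2) ** 2

ghz_ideal = {'00': 0.5, '11': 0.5}               # theoretical distribution
sampled = {'00': 0.48, '11': 0.47, '01': 0.05}   # hypothetical measured data
fid = hellinger_fidelity(ghz_ideal, sampled)
```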
Recently, Mills et al.~\cite{Mills2020} have presented a framework for application-motivated benchmarking of full quantum computing stacks. The benchmarks defined there have a \textit{circuit class}, describing the type of circuit to be run on the system, and a \textit{figure of merit}, quantifying how well the system did when running circuits from that class. The idea is that a circuit class should represent a particular application domain. The application-motivated circuit classes proposed by Mills et al. draw inspiration from quantum algorithmic primitives \cite{Blume2019} and from the literature on near-term quantum computing applications (e.g., machine learning and chemistry).
\subsection{Our Contributions}
The first contribution of this paper is a novel noise-adaptive compilation strategy that is computationally efficient. The proposed strategy assumes that the coupling map, i.e., the device-specific directed graph whose vertices correspond to the physical qubits and edges correspond to permitted CNOT gates, uses a heavy-hexagon lattice. In Section~\ref{sec:strategy}, we describe the proposed strategy and we thoroughly analyze its theoretical performance.
The second contribution is the application-motivated benchmarking of the proposed noise-adaptive compilation strategy, compared with some of the most advanced state-of-art approaches. In Section~\ref{sec:benchmarks}, we summarize the features of three notable circuit classes (deep, square and shallow) and we recall the definitions of five significant figures of merit. Then, in Section~\ref{sec:eval}, we show and discuss the evaluation results.
\section{Related Work}
Although the quantum compilation problem has been studied for years, there are still few noise-adaptive compilation strategies.
Qiskit's \textsf{NoiseAdaptiveLayout} placement pass associates a physical qubit to each virtual qubit of the circuit using calibration data, based on the heuristic method proposed by Murali et al.~\cite{Murali2019}. The pass maps virtual qubit pairs in order of decreasing frequency of the CNOT occurrences between them. If a pair exists with both qubits unmapped, the pass picks the best available physical qubit pair, based on CNOT reliability, to map it. If a pair has only one qubit unmapped, the pass maps that qubit to a location that ensures maximum reliability for CNOTs with previously mapped qubits. In the end if there are unmapped qubits, the pass maps them to any available physical qubit.
Nishio et al.~\cite{Nishio2020} proposed a placement pass and a routing pass. The placement pass, which is denoted as \textsf{Greatest Connecting Edge Mapping}, leverages a strategy that is very similar to the one proposed by Murali et al.~\cite{Murali2019}. The routing pass is a beam search algorithm with a heuristic cost function based on the estimated success probability of candidate SWAP gates.
t$|$ket$\rangle$'s \textsf{NoiseAwarePlacement}~\cite{Sivarajah2020} searches for candidate partial placements of virtual qubits onto the physical ones. This is done by casting the problem as finding a subgraph monomorphism between the coupling map and a graph representing virtual-qubit CNOT interactions in the circuit. When multiple candidate placements are found, the pass uses a heuristic approach to choose the one with the maximum expected overall fidelity.
Recently, Niu et al.~\cite{Niu2020} have implemented a hardware-aware routing pass (\textsf{HA}), inspired by the work of Li et al.~\cite{Li2019}, that iteratively selects the best scoring SWAP with respect to calibration data as well as qubits distance. The influence of each of these factors on the cost function can be set by means of tunable weights.
\section{Proposed Compilation Strategy}
\label{sec:strategy}
We introduce a heuristic-based compilation strategy consisting of two compilation passes that account for calibration data such as gate reliability and readout errors. The first one is a placement pass, i.e., a pass that finds an initial mapping of virtual qubits to physical ones. The second pass is a routing strategy that makes all two-qubit gates compliant with hardware connectivity. Both passes exploit device calibration data with the aim of improving the quality of the states produced by the compiled circuit. In the following, we assume a heavy-hexagon lattice for the coupling map of the device, such as the one used by IBM superconducting devices \cite{IBMQ}. In heavy-hexagon lattices, the qubits are located on the nodes and edges of each hexagon. Each qubit has either two or three neighbors, meaning the graph has vertices of degree 2 or 3. As a consequence, only three different frequency assignments are necessary for the superconducting qubits, as opposed to a square lattice, which naturally requires at least five different frequencies for addressability. The heavy-hexagon lattice also greatly reduces crosstalk errors since, in principle, only qubits on the edges of the lattice need to be driven by cross-resonance (CR) drive tones \cite{Zhu2020}.
\subsection{Placement}
In the proposed placement pass, a sequence of qubits connected in a line is detected. On a coupling map with heavy-hexagon connectivity, it is not possible to find a line connecting all qubits. However, it is possible to find lines that traverse a good portion of the graph, with leftover qubits that can later be used for swapping across segments of the line. Fig.~\ref{fig:ibmq_manhattan} shows the coupling map of the 65-qubit \textit{ibmq\_manhattan} device, with the sequence of qubits used for the initial mapping highlighted in light-blue.
\begin{figure}
\centering
\includegraphics[width=8cm]{ibmq_manhattan.pdf}
\caption{Initial mapping on the 65 qubits of the \textit{ibmq\_manhattan} device, highlighted in light-blue.}
\label{fig:ibmq_manhattan}
\end{figure}
To find a line of qubits, one can leverage the features of the graph, such as its regular structure and the fact that every qubit is identified by a number in $\{0,...,n-1\}$, where $n$ is the number of qubits in the device. This is done starting from node $0$ and traversing the graph while keeping track of already visited nodes, backtracking if a dead end is reached before having explored the entire graph.
If the found sequence contains more qubits than needed by the circuit, the pass selects the best subset based on two-qubit gate reliability, using a sliding-window technique. On the other hand, if there are not enough qubits in the sequence, the pass proceeds to insert qubits from the leftover ones, depicted in dark gray in Fig.~\ref{fig:ibmq_manhattan}, until the necessary number of qubits is reached. These qubits are first scored based on calibration data, and then inserted into the chain after one of their neighboring qubits, starting from the one with the highest score.
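A naive version of the line search can be sketched as follows. Note that exhaustive backtracking is exponential in the worst case; the actual pass exploits the regular lattice structure and qubit numbering to stay linear-time, so this is only a conceptual sketch with our own helper names.

```python
def find_line(coupling, start=0):
    # Depth-first search for a long simple path ("line") of physical qubits,
    # backtracking at dead ends; `coupling` maps qubit -> list of neighbours.
    best = []

    def dfs(node, path, visited):
        nonlocal best
        if len(path) > len(best):
            best = path[:]
        for nxt in coupling.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                dfs(nxt, path, visited)
                path.pop()
                visited.remove(nxt)

    dfs(start, [start], {start})
    return best

# tiny toy graph: line 0-1-2-3 plus a leftover spur qubit 4 attached to 1
g = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
print(find_line(g))  # -> [0, 1, 2, 3]
```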
\subsection{Noise Adaptive Swap}
Given an initial mapping of virtual qubits to physical ones, the \textsf{NoiseAdaptiveSwap} pass proceeds to compute a \textit{front layer} of non hardware-compliant CNOT gates.
\begin{defin}
Let $C_{k}$ be a quantum circuit on $k$ qubits and $U_{i}(\mathcal{Q}_{i})$ a quantum gate acting on a set of qubits $\mathcal{Q}_{i}$ with $0<|\mathcal{Q}_{i}|\leq k$. A \textsf{layer} $\mathcal{L}$ is a set of consecutive gates that can be applied concurrently such that:
\begin{enumerate}
\item $\mathcal{Q}_i \cap \mathcal{Q}_j = \emptyset$ for all $U_{i},U_{j} \in \mathcal{L}$, with $i \neq j$
\item $\sum_{U_{i}\in \mathcal{L}} |\mathcal{Q}_{i}| \leq k$
\end{enumerate}
\end{defin}
\begin{defin}
The \textsf{front layer} is a layer $\mathcal{F}$ such that $|\mathcal{Q}_{i}| = 2$ for all $ U_{i} \in \mathcal{F}$.
\end{defin}
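To make the layer notion concrete, gates can be greedily scheduled as-soon-as-possible into layers with pairwise disjoint qubit sets. The sketch below uses our own helper names and a simple list-of-qubit-tuples gate representation; it is an illustration of the definitions, not part of the proposed passes.

```python
def to_layers(gates):
    # ASAP scheduling: each gate (a tuple of the qubits it acts on) goes to
    # the earliest layer after every earlier gate that shares a qubit with it
    last_layer = {}   # qubit -> index of the last layer touching it
    layers = []
    for qubits in gates:
        idx = 1 + max((last_layer.get(q, -1) for q in qubits), default=-1)
        if idx == len(layers):
            layers.append([])
        layers[idx].append(tuple(qubits))
        for q in qubits:
            last_layer[q] = idx
    return layers

# CNOT(0,1) and CNOT(2,3) commute into one layer; CNOT(1,2) must come after
print(to_layers([(0, 1), (2, 3), (1, 2)]))  # -> [[(0, 1), (2, 3)], [(1, 2)]]
```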
For convenience, we reformulate circuits by means of the Directed Acyclic Graph (DAG) circuit formalism, as it enables to effectively represent gates dependencies in quantum circuits.
\begin{defin}
A \textsf{DAG circuit} is a directed acyclic graph where vertices represent gates and directed edges represent qubit dependencies. A directed edge $e_{q}(i,j)$ between vertices $i$ and $j$ represents a dependency between gate $i$ and gate $j$ with respect to qubit $q$, i.e., gate $i$ must be executed before $j$ and both gates act on qubit $q$.
\end{defin}
\begin{defin}
Given a directed graph $G = (\mathcal{V}, \mathcal{E})$, a \textsf{topological order} of $G$ is a linear ordering over vertices in $\mathcal{V}$ such that, for all directed edges $(v, w) \in \mathcal{E}$, $v$ precedes $w$ in the ordering.
\end{defin}
\begin{defin}
Let $v_{0}v_{1}...v_{n}$ be a topological ordering on graph $G=(\mathcal{V},\mathcal{E})$. Then $v_{j}$ is a \textsf{direct topological successor} of $v_{i}$ iff $j>i$ and $\exists e \in \mathcal{E}$ such that $e=(v_{i},v_{j})$.
\end{defin}
To compute the front layer of non hardware-compliant CNOT gates, the pass iterates over all gates in the circuit, in topological order. Gates that do not need routing, such as one-qubit gates or hardware-compliant CNOT gates, are added to the set of \textit{executed} gates $\mathcal{X}$. CNOT gates that need to be properly mapped and do not act on qubits already involved in a CNOT gate of the front layer are added to the latter. Every gate that is a successor of a CNOT gate in the front layer is added to the set of \textit{not executed} gates $\mathcal{\bar{X}}$.
From the front layer, the pass computes a list of possible SWAP operations involving at least one qubit of a CNOT in the front layer. These SWAP operations are scored with a heuristic cost function $h(s)$, where $s$ denotes the considered SWAP gate. The cost function is computed over all gates in the \textit{front layer} $\mathcal{F}$ plus a set of upcoming gates $\mathcal{U} \subset \mathcal{\bar{X}}$, as shown in Eq.~\ref{eq:h}, where $\pi_{s}(g_{c})$ and $\pi_{s}(g_{t})$ are the physical qubits corresponding to the control and target of gate $g$ under mapping $\pi_{s}$.
\begin{equation}
h(s) = \frac{\alpha}{|\mathcal{F}\cup \mathcal{U}|} \sum_{g \in \mathcal{F}\cup \mathcal{U}} R(\pi_{s}(g_{c}),\pi_{s}(g_{t})) + \frac{1-\alpha}{|\mathcal{F}\cup \mathcal{U}|} \sum_{g \in \mathcal{F}\cup \mathcal{U}} 1-D(\pi_{s}(g_{c}),\pi_{s}(g_{t}))\label{eq:h}
\end{equation}
Here, $R$ and $D$ are respectively the swap-path reliability matrix and the distance matrix for every qubit pair in the coupling map. Matrix $R$ stores in entry $(i,j)$ the reliability of the most reliable swap path between qubits $i$ and $j$, where the reliability of a single SWAP along an edge of the coupling map is computed with regard to CNOT gate and readout error rates. Matrix $D$ is the distance matrix and stores at entry $(i,j)$ the shortest distance between qubits $i$ and $j$. The swap-path reliability and the qubit distance are quantities that one would like to maximize and minimize, respectively; the coefficient $\alpha$ sets their relative weight in the heuristic cost function $h(s)$.
\begin{defin}
The \textsf{reliability} of SWAP $s$ between qubits $i$ and $j$ is $r(s) = \mu(i,j)^3$, where $\mu(i,j)$ is the success rate of a $CNOT(i,j)$ between qubits $i$ and $j$, obtained from calibration data.
\end{defin}
In the previous definition, it is assumed that the SWAP gate is composed of 3 CNOT gates.
\begin{defin}
Let $S = s_{0}s_{1}...s_{k}$ be a sequence of SWAP gates (SWAP path) and denote the reliability of SWAP gate $s$ as $r(s)$. Then the reliability of $S$ is given by
$\prod_{i=0}^{k} r(s_{i})$.
\end{defin}
\begin{defin}
Given a quantum device with $n$ qubits, the SWAP paths \textsf{reliability matrix} $R \in \mathbb{R}^{n \times n}$, is the real matrix that stores in entry $(i,j)$ the maximum reliability or, equivalently, success rate at which qubit $i$ can be moved to a neighbor of qubit $j$ through a sequence of SWAP gates.
\end{defin}
\begin{defin}
Given a quantum device with $n$ qubits, the \textsf{distance matrix} $D \in \mathbb{N}^{n \times n}$, is the matrix that stores in entry $(i,j)$ the minimum distance between qubit $i$ and a neighbor of qubit $j$.
\end{defin}
Both matrices can be efficiently precomputed using the Floyd–Warshall algorithm~\cite{Floyd1962}. Matrix $R$ is computed with edge weights corresponding to SWAP reliability, while matrix $D$ has edge weights equal to 1. The SWAP reliability can be easily derived from CNOT calibration data, as each SWAP is usually achieved with three consecutive CNOTs. As matrix $D$ contains entries with incompatible scales with respect to $R$, both matrices in Eq.~\ref{eq:h} are normalized.
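A sketch of this precomputation follows. For simplicity, this version computes full $i \to j$ swap paths rather than paths to a neighbor of $j$ as in the definitions above, and \texttt{cnot\_success} is a hypothetical calibration table, not a real device API.

```python
def swap_matrices(n, cnot_success):
    # cnot_success: dict {(i, j): success rate of CNOT(i, j)}; a SWAP is
    # three CNOTs, so the edge reliability is the success rate cubed.
    R = [[0.0] * n for _ in range(n)]
    D = [[float("inf")] * n for _ in range(n)]
    for i in range(n):
        R[i][i], D[i][i] = 1.0, 0
    for (i, j), mu in cnot_success.items():
        R[i][j] = R[j][i] = max(R[i][j], mu ** 3)
        D[i][j] = D[j][i] = 1
    # Floyd-Warshall: max-product paths for reliability, min-sum for distance
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if R[i][k] * R[k][j] > R[i][j]:
                    R[i][j] = R[i][k] * R[k][j]
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return R, D
```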
The first part of Eq.~\ref{eq:h} sums over the swap reliability of all gates involved in $\mathcal{F}\cup \mathcal{U}$ and divides by $|\mathcal{F}\cup \mathcal{U}|$ to obtain a mean value. The second part follows a similar approach with the distance matrix, except that the sum must be over $1-D$, as the goal is to maximize the reliability while minimizing the normalized distance.
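With the normalized matrices in hand, the cost function reduces to an average over $\mathcal{F} \cup \mathcal{U}$; a literal transcription of Eq.~\ref{eq:h} follows (argument names are ours).

```python
def h_score(gates, mapping, r_norm, d_norm, alpha=0.5):
    # gates: (control, target) virtual-qubit pairs in F union U;
    # mapping: virtual -> physical qubit under the candidate SWAP s;
    # r_norm / d_norm: normalized reliability and distance matrices.
    total_r = total_d = 0.0
    for c, t in gates:
        pc, pt = mapping[c], mapping[t]
        total_r += r_norm[pc][pt]          # reliability term (maximize)
        total_d += 1.0 - d_norm[pc][pt]    # distance term (minimize D)
    m = len(gates)
    return alpha * total_r / m + (1.0 - alpha) * total_d / m
```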
\subsection{Computational Complexity}
\begin{thm}
The execution of the \textsf{Placement} pass takes $O(n)$ time, where $n$ is the number of qubits in the device.
\end{thm}
\begin{proof}
In order to find the initial placement of qubits, the \textsf{Placement} pass needs to iterate over all nodes in the coupling map, taking $O(n)$ time, where $n$ is the number of qubits in the device.
\end{proof}
As previously stated, the $R$ and $D$ matrices are precomputed using the Floyd–Warshall algorithm, whose time complexity is $O(n^{3})$.
\begin{thm}
Assuming a coupling map with heavy-hexagon connectivity, computing the $h(s)$ score of a candidate SWAP gate $s$ takes $O(n^{2})$ time, where $n$ is the number of qubits in the device.
\label{thm:score}
\end{thm}
\begin{proof}
Computing the score of a candidate SWAP gate requires retrieving SWAP paths reliability and distance information for all non-compliant CNOT gates in the front layer, and a few additional ones from subsequent layers (by default 5, which can be treated as a constant factor and therefore ignored). There can be at most $n/2$ CNOT gates in one layer, where $n$ is the number of qubits. Thus, computing the $h(s)$ score takes $O(n)$ time.
Assuming a coupling map with heavy-hexagon connectivity, the number of possible different SWAP gates candidates in a layer is $O(n)$. Consequently, to obtain the most promising SWAP gates for the current layer the score function must be computed $O(n)$ times, taking a total of $O(n^{2})$ time.
\end{proof}
A \textit{beam search} algorithm is used to search for a new possible mapping through a sequence of SWAP gates, with respect to either the initial mapping or the one from a previous iteration of the routing pass. The beam search algorithm is characterized by beam width $w$ and search depth $k$, which gives an $O(w^{k} n^{2})$ time complexity, taking into account the statement of Theorem~\ref{thm:score}.
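The mapping search follows the standard beam-search pattern; a generic sketch is shown below, with toy expand/score functions standing in for the pass's SWAP-candidate generation and $h(s)$ scoring.

```python
def beam_search(initial, expand, score, width, depth):
    # Keep the `width` best-scoring states at each of `depth` levels;
    # `expand` yields successor states, `score` ranks them (higher is better).
    beam = [initial]
    for _ in range(depth):
        candidates = [s for state in beam for s in expand(state)]
        if not candidates:
            break
        beam = sorted(candidates, key=score, reverse=True)[:width]
    return max(beam, key=score)

# toy problem: grow an integer; the best path applies +2 three times
best = beam_search(0, lambda x: [x + 1, x + 2], lambda x: x, width=2, depth=3)
print(best)  # -> 6
```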
\begin{thm}
Assuming a coupling map with heavy-hexagon connectivity, the \textsf{NoiseAdaptiveSwap} pass takes $O(\frac{w^{k}}{k} g n^{2.5})$ time, where $g$ is the number of CNOT gates in the circuit.
\end{thm}
\begin{proof}
Beam search with beam width $w$ and search depth $k$ takes $O(w^{k} n^{2})$ time.
Given a coupling map with heavy-hexagon connectivity, the distance between any qubit pair can be reasonably bounded by $O(\sqrt{n})$. In the worst case, one would need to search $O(\sqrt{n})$ mappings for every CNOT gate in the circuit. This can be relaxed to $O(\sqrt{n}/k)$, as each new mapping search will insert $k$ SWAP gates on average. It follows that the whole compilation process takes $O(g \frac{\sqrt{n}}{k} w^{k} n^{2}) = O(\frac{w^{k}}{k} g n^{2.5})$, where $g$ is the number of CNOT gates in the circuit.
\end{proof}
The $w^k$ factor is due to the fact that the pass does not simply pick the best scoring SWAP at each iteration, but instead searches over the $k$ best SWAPs, as the best scoring one may not necessarily lead to the best final solution. If we further assume $w=k$, the time complexity becomes $O(k^{k-1} g n^{2.5})$.
\subsection{Implementation}
A Python implementation of the proposed noise-adaptive compiler is available on GitHub.\footnote{\url{https://github.com/qis-unipr/noise-adaptive-compiler}} It has been designed as a Qiskit pass \cite{IBMQ}, thus it can be used with any quantum device supported by Qiskit.
\section{Application-Motivated Benchmarks}
\label{sec:benchmarks}
With reference to the recent paper by Mills et al.~\cite{Mills2020}, we consider three circuit classes, namely \textit{deep}, \textit{square} and \textit{shallow}.
Deep circuits are constructed from several layers of Pauli gadgets~\cite{Cowtan2020}, which are quantum circuits that implement an operation corresponding to exponentiating a tensor product of Pauli matrices. For example, in quantum chemistry, deep circuits are used to build UCC trial states used in the variational quantum eigensolver (VQE) \cite{Panagiotis2018}.
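Concretely, a Pauli gadget over $n$ qubits implements a unitary of the form
\[
U(\theta) = \exp\!\left(-i\,\tfrac{\theta}{2}\, P_1 \otimes P_2 \otimes \cdots \otimes P_n\right),
\qquad P_i \in \{I, X, Y, Z\},
\]
which is typically compiled into a ladder of CNOT gates, a single $R_z(\theta)$ rotation, and single-qubit basis-change gates.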
Square circuits are random circuits built from two-qubit gates. They provide a benchmark at all layers of the quantum computing stack. Indeed, they have been suggested as a means to demonstrate quantum computational supremacy \cite{Boixo2018}.
Square circuits avoid favoring any device in particular, because they allow two-qubit gates to act between any pair of qubits in the uncompiled circuit.
The class of shallow circuits is a subclass of Instantaneous Quantum Polytime (IQP) circuits~\cite{Shepherd2009}.
IQP circuits consist of gates diagonal in the Pauli-Z basis, sandwiched between two layers of Hadamard gates acting on all qubits. Shallow circuits are characterized by limited connectivity between the qubits and by a depth that increases slowly with width. Thus, shallow circuits are useful for understanding the performance of a device being utilized for applications whose circuit depth grows less quickly than their qubit requirement.
In Section~\ref{sec:eval}, the compiled circuits are evaluated in terms of a few figures of merit that are described below. Let us denote an $n$ qubit circuit as $C$, the ideal output distribution of $C$ as $p_C$ and the output distribution produced by the compiled implementation of $C$ as $D_C$.
\begin{itemize}
\item \textbf{Hellinger fidelity} - The Hellinger fidelity between $D_C$ and $p_C$ is
\begin{equation}
F_C = \left(\sum_{x \in \{0,1\}^n} \sqrt{D_C(x)p_C(x)}\right)^2.
\end{equation}
We would like that $F_C = 1$.
\item \textbf{Heavy Output Generation (HOG)} - An output $z \in \{0,1\}^n$ is heavy for a quantum circuit $C$ if $p_C(z)$ is greater than the median of the set $\{p_C(x) : x \in \{0,1\}^n\}$. The HOG probability of $D_C$, i.e., the probability that samples drawn from $D_C$ are heavy outputs of $p_C$, is
\begin{equation}
\text{HOG}(D_C,p_C) = \sum_{x \in \{0,1\}^n} D_C(x) \delta_C(x)
\end{equation}
where $\delta_C(x) = 1$ if $x$ is heavy for $C$, and $\delta_C(x) = 0$ otherwise. We would like HOG$(D_C,p_C) > 1/2$, as it helps us distinguish between a good implementation of $C$ and an attempt to mimic it by generating random bitstrings. Of course, this holds only if $p_C$ is sufficiently far from uniform.
\item \textbf{$l_1$-norm distance} - The $l_1$-norm distance between $D_C$ and $p_C$ is
\begin{equation}
l_1(D_C,p_C) = \sum_{x \in \{0,1\}^n} |D_C(x) - p_C(x)|.
\end{equation}
We would like that $l_1(D_C,p_C) = 0$.
\item \textbf{CNOT Count} - In a quantum circuit, the number of CNOT gates is denoted as CNOT count.
\item \textbf{CNOT Depth} - In a quantum circuit, the number of layers containing CNOT gates is denoted as CNOT depth.
\end{itemize}
In Section~\ref{sec:eval}, as suggested in~\cite{Mills2020}, we approximate $D_C$ using samples obtained by running the compiled implementation of $C$ several times. That is, given samples $\mathcal{S} = \{x_1,\ldots,x_m\}$ from $D_C$, let $\mathcal{S}_x$ be the number of times $x$ appears in $\mathcal{S}$ and define $\widetilde{D}_C(x) = \mathcal{S}_x/m$.
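The three distribution-level figures of merit, together with the empirical estimate $\widetilde{D}_C$, admit a direct implementation (a minimal plain-Python sketch with names of our choosing; distributions are dictionaries mapping bitstrings to probabilities, and for brevity the HOG median is taken as the upper median of the $2^n$ values):

```python
from collections import Counter

def empirical_dist(samples):
    """Estimate D_C from m samples: D(x) = (#occurrences of x) / m."""
    m = len(samples)
    return {x: c / m for x, c in Counter(samples).items()}

def hellinger_fidelity(D, p):
    """F = (sum_x sqrt(D(x) p(x)))^2; equals 1 iff D == p."""
    keys = set(D) | set(p)
    return sum((D.get(x, 0.0) * p.get(x, 0.0)) ** 0.5 for x in keys) ** 2

def hog_probability(D, p):
    """Probability that a sample from D is a heavy output of p."""
    med = sorted(p.values())[len(p) // 2]  # upper median, for brevity
    return sum(D.get(x, 0.0) for x in p if p[x] > med)

def l1_distance(D, p):
    """l1(D, p) = sum_x |D(x) - p(x)|; 0 iff identical, at most 2."""
    keys = set(D) | set(p)
    return sum(abs(D.get(x, 0.0) - p.get(x, 0.0)) for x in keys)
```

For instance, a perfect implementation ($D_C = p_C$) yields Hellinger fidelity 1 and $l_1$-norm distance 0, while a uniform $D_C$ over the heavy and non-heavy outputs of a 2-qubit $p_C$ yields a HOG probability of exactly $1/2$.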
\section{Evaluation}
\label{sec:eval}
\begin{figure*}[!ht]
\centering
\begin{tabular}{ cc }
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_deep_fidelity.pdf}
\subcaption{}
\label{fig:deep_fidelity}
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_deep_hog.pdf}
\subcaption{}
\label{fig:deep_hog}
\end{minipage}\\
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_deep_l1.pdf}
\subcaption{}
\label{fig:deep_l1}
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_deep_cx_count.pdf}
\subcaption{}
\label{fig:deep_cx_count}
\end{minipage}\\
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_deep_cx_depth.pdf}
\subcaption{}
\label{fig:deep_cx_depth}
\end{minipage}
&
\begin{minipage}{7cm}
\centering
\includegraphics[width=3cm]{legend.pdf}
\end{minipage}
\end{tabular}
\caption{Comparison of compilation strategies with deep circuits using (\ref{fig:deep_fidelity})~Hellinger fidelity, (\ref{fig:deep_hog})~HOG, (\ref{fig:deep_l1})~$l_1$-norm, (\ref{fig:deep_cx_count})~CNOT count and (\ref{fig:deep_cx_depth})~CNOT depth as metrics. Qiskit's statevector simulator has been used to sample noisy circuits from the noise model obtained with calibration data of the IBM \textit{ibmq\_casablanca} 7-qubit device.}
\label{fig:deep}
\end{figure*}
\begin{figure*}[!ht]
\centering
\begin{tabular}{ cc }
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_square_fidelity.pdf}
\subcaption{}
\label{fig:square_fidelity}
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_square_hog.pdf}
\subcaption{}
\label{fig:square_hog}
\end{minipage}\\
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_square_l1.pdf}
\subcaption{}
\label{fig:square_l1}
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_square_cx_count.pdf}
\subcaption{}
\label{fig:square_cx_count}
\end{minipage}\\
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_square_cx_depth.pdf}
\subcaption{}
\label{fig:square_cx_depth}
\end{minipage}
&
\begin{minipage}{7cm}
\centering
\includegraphics[width=3cm]{legend.pdf}
\end{minipage}
\end{tabular}
\caption{Comparison of compilation strategies with square circuits using (\ref{fig:square_fidelity})~Hellinger fidelity, (\ref{fig:square_hog})~HOG, (\ref{fig:square_l1})~$l_1$-norm, (\ref{fig:square_cx_count})~CNOT count and (\ref{fig:square_cx_depth})~CNOT depth as metrics. Qiskit's statevector simulator has been used to sample noisy circuits from the noise model obtained with calibration data of the IBM \textit{ibmq\_casablanca} 7-qubit device.}
\label{fig:square}
\end{figure*}
\begin{figure*}[!ht]
\centering
\begin{tabular}{ cc }
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_shallow_fidelity.pdf}
\subcaption{}
\label{fig:shallow_fidelity}
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_shallow_hog.pdf}
\subcaption{}
\label{fig:shallow_hog}
\end{minipage}\\
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_shallow_l1.pdf}
\subcaption{}
\label{fig:shallow_l1}
\end{minipage}
&
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_shallow_cx_count.pdf}
\subcaption{}
\label{fig:shallow_cx_count}
\end{minipage}\\
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth]{ibmq_casablanca_shallow_cx_depth.pdf}
\subcaption{}
\label{fig:shallow_cx_depth}
\end{minipage}
&
\begin{minipage}{7cm}
\centering
\includegraphics[width=3cm]{legend.pdf}
\end{minipage}
\end{tabular}
\caption{Comparison of compilation strategies with shallow circuits using (\ref{fig:shallow_fidelity})~Hellinger fidelity, (\ref{fig:shallow_hog})~HOG, (\ref{fig:shallow_l1})~$l_1$-norm, (\ref{fig:shallow_cx_count})~CNOT count and (\ref{fig:shallow_cx_depth})~CNOT depth as metrics. Qiskit's statevector simulator has been used to sample noisy circuits from the noise model obtained with calibration data of the IBM \textit{ibmq\_casablanca} 7-qubit device.}
\label{fig:shallow}
\end{figure*}
We have evaluated the proposed noise-adaptive compilation strategy, comparing it with some of the most advanced state-of-the-art approaches, over the circuit classes presented in Section~\ref{sec:benchmarks}, with 200 circuits for each class and different numbers of qubits, as in~\cite{Mills2020}. For each circuit, the ideal output distribution $p_C$ has been obtained using Qiskit's statevector simulator. The same simulator has been used to sample noisy circuits from the noise model obtained with calibration data of the IBM \textit{ibmq\_casablanca} 7-qubit device.
We have tested the following compilers:
\begin{itemize}
\item \textit{qiskit} is the Qiskit standard compiler with optimization level 3;
\item \textit{qiskit\_noise} is the Qiskit compiler with \textsf{NoiseAdaptiveLayout} and optimization level 3;
\item \textit{pytket} is the t$|$ket$\rangle$ compiler with \textsf{NoiseAwarePlacement} and maximum optimization level;
\item \textit{na} are the proposed noise-adaptive placement and routing passes integrated with Qiskit and optimization level 3;
\item \textit{qiskit\_na} is the proposed routing pass combined with Qiskit's \textsf{NoiseAdaptiveLayout} and optimization level 3;
\item \textit{t} denotes the application of CNOT cascade transformations, as described in~\cite{Ferrari2021}.
\end{itemize}
For each of the above, we have used Qiskit v0.23.6 and t$|$ket$\rangle$ v0.7.2. We remark that the \textit{pytket\_na} compiler was not tested, as the integration of \textit{pytket} and \textit{na} is possible only for the routing (SWAP) passes, not for the placement ones.
The results are reported in Fig.~\ref{fig:deep}, Fig.~\ref{fig:square} and Fig.~\ref{fig:shallow}.
For each compiler configuration, we have used box plots to depict the distributions of the resulting values for the figures of merit described in Section~\ref{sec:benchmarks}. A box plot consists of two parts, a box and a set of whiskers. The box is drawn from the lower quartile to the upper quartile, with a horizontal line in the middle denoting the median. The whiskers extend from the minimum to the maximum of the data set.
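For reference, the five numbers drawn in each box plot can be computed as follows (a sketch using Python's \texttt{statistics.quantiles}; the plotting library may adopt a slightly different quartile convention):

```python
import statistics

def five_number_summary(data):
    """Return (min, Q1, median, Q3, max) as drawn in the box plots:
    box from Q1 to Q3 with a line at the median, whiskers at the extremes."""
    q1, med, q3 = statistics.quantiles(data, n=4)  # quartile cut points
    return min(data), q1, med, q3, max(data)
```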
The reader may observe that, for deep and square circuits, the best results in Hellinger fidelity, HOG, and $l_1$-norm are those achieved with \textit{qiskit\_na}. Interestingly, with this compiler, the CNOT depth is worse than with the other compilers. Indeed, improving the quality of a quantum computation with a noise-adaptive compilation strategy does not necessarily imply reducing the depth of the circuit. Regarding shallow circuits, \textit{qiskit\_na} and \textit{pytket} have the same performance.
Regarding the impact of $\alpha$, one may observe that, with deep circuits, $\alpha=0.9$ produces better Hellinger fidelity. Instead, with square and shallow circuits, $\alpha=0.5$ seems a better choice. The difference is nevertheless minimal. This could be due to the small number of qubits in the circuits and in the considered device.
\section{Conclusions}
\label{sec:conclusion}
In this work, we presented a novel noise-adaptive quantum compilation strategy that is computationally efficient.
The contributed strategy assumes heavy-hexagon topologies for quantum devices, which is particularly crucial for the placement pass. Moreover, the assumption simplifies the derivation of the computational complexity upper bounds. Nevertheless, the proposed routing pass is general enough to be effective independently of the coupling map of the target quantum device.
The presented results seem to indicate that our compilation strategy is particularly effective for circuits characterized by great depth and/or randomness. On the other hand, we are aware that the performed evaluation is not exhaustive and further work is necessary to fully characterize the proposed approach, for example in designing a reasonable method for tuning the $\alpha$ parameter in Eq.~\ref{eq:h}. In particular, we plan to extend the evaluation to circuits and devices with more qubits, and to distributed quantum computing architectures as well \cite{VanMeter2016,Cuomo2020,Ferrari2021bis}.
As a final remark, it is worth noting that it is possible to further mitigate the effect of noise by using an \textit{ensemble of diverse mappings} (EDM) approach, as suggested by Tannu and Qureshi \cite{Tannu2019}. In the near future, we shall integrate this method into our quantum compiling library.
\section*{Acknowledgements}
This research benefited from the HPC (High Performance Computing) facility of the University of Parma, Italy.
\bibliographystyle{unsrt}
\section{Motivation to study single-nucleon transfer using radioactive beams}
\label{sec:motivation}
A single-nucleon transfer reaction is a powerful experimental tool to populate a certain category of interesting states in nuclei in a selective manner. These states have a structure that is given by the original nucleus as a core, with the transferred nucleon in an orbit around it. Nucleon transfer is thus an excellent way to probe the energies of shell model orbitals and to study the changes in the energies of these orbitals as we venture away from the stable nuclei. Despite a large number of detailed issues that complicate this simple picture, it remains the case that nucleon transfer reactions preferentially populate these ``single particle'' states in the final nucleus and also that these states are of especial interest, theoretically. Therefore, transfer reactions promise to be one of the most important sources of nuclear structure information about exotic nuclei, as more beams become available at radioactive beam facilities.
\begin{figure}[b]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-monopole}
\caption{The effective energies of the valence neutron orbitals are modified according to the number of protons present in the $0d_{5/2}$ orbital. The effect is to replace the $N=20$ neutron shell gap by a gap at $N=16$ when the nucleus becomes more exotic. }
\label{fig:1}
\end{figure}
The factors that complicate the interpretation of the experiments arise primarily from the theoretical interpretation of the data. Experimentally, the selectivity of the transfer reactions is usually clear, and the states of interest - those having a large overlap with the simple core-plus-particle picture - are emphatically favoured. Often, these states will be embedded within a background of other nuclear levels. This selectivity on structural grounds is itself useful, and often allows immediate associations to be inferred between experimentally observed states and the predictions from, for example, shell model calculations. The states that are suppressed will have more complex wave functions that mix a number of configurations and are intrinsically more difficult to describe theoretically. In the first instance, it is in many ways best to focus upon the more simple states that are selected by transfer reactions, and to use these to refine the theory. Complications begin to arise when we seek to quantify the degree to which the wave function of a particular state overlaps with the simple core-plus-particle wave function. At that level, many debates occur, regarding the quantitative interpretation of data. With suitably stated assumptions, however, quantitative analyses of experiments can be performed and confronted with theory. Thus, on a qualitative and on a quantitative level, transfer reactions provide an indispensable tool for uncovering the structures of exotic nuclei.
\subsection{Migration of shell gaps and magic numbers, far from stability}
\label{subsec:migration}
Figure \ref{fig:1} shows a simplified representation of the proton and neutron shell model orbital energies and occupancies for some light nuclei. In the nuclear shell model, each nucleon is assumed to occupy an energy level (or orbital) that can be obtained by solving the Schr\"odinger equation for a mean field potential. This potential represents the average binding effect of all of the other nucleons. In the simplest model, the nuclear structure is obtained by filling orbitals from the lowest energies, obeying the Pauli exclusion principle. In a more sophisticated model, the interactions between valence nucleons in different orbitals (or in the same orbital) are taken into account. This allows significant mixing between different simple configurations that all have the same spin and parity and about the same (unmixed) energy. Some degree of mixing will even occur over a wide range of configuration energies. In principle the valence nucleon interaction energies, which can be represented as matrix elements in some suitable basis, can be calculated from the solutions for the mean field and an expression for the nucleon-nucleon interaction (with all of its dependence on spatial variables, spin and orbital angular momentum). In practice, the best shell model calculations in terms of agreement with experimental data are those in which the calculated matrix elements are subsequently varied by fitting them to a selection of experimental data, thus establishing an {\it effective interaction} that is valid in a particular model space that was used for the fitting procedure. Once we accept that valence nucleons will have an interaction potential, and hence some energy associated with the interaction, it naturally becomes possible that the valence interactions can actually change to some degree the effective energies of the orbitals themselves. Slightly more technically, the interaction potentials can be analysed in terms of a multipole expansion. 
It is the monopole term in the expansion that has the effect of changing the effective energies of orbitals. The energy of a single valence nucleon in a particular orbital is determined by the energy of the orbital plus the sum of the monopole components of its interaction with the other active valence nucleons. A closed shell has no net effect, so it is the interactions with partially filled orbitals that need to be considered. Whilst the interactions between like nucleons ({\it p-p} or {\it n-n}) and between different nucleons ({\it p-n}) are all important, the strongest effects occur for a proton-neutron interaction between active valence nucleons. After that, the strongest effects are between orbitals with the same number of radial nodes, and the effect is stronger still if the angular momenta are also the same. This arises from the degree of spatial overlap of the wave functions. For example, the interaction between protons in an open $0d_{5/2}$ orbital and neutrons in an open $0d_{3/2}$ orbital is particularly strong.
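Schematically, and following the monopole formalism of ref. \cite{otsuka1}, the effective single-particle energy of a neutron orbital $j$ may be written as
\[
\hat{\epsilon}_j = \epsilon_j + \sum_{j'} \bar{V}_{jj'}\, n_{j'},
\qquad
\bar{V}_{jj'} = \frac{\sum_{J} (2J+1)\, \langle j j'; J | V | j j'; J \rangle}{\sum_{J} (2J+1)},
\]
where $\epsilon_j$ is the bare single-particle energy, $n_{j'}$ is the number of protons occupying the orbital $j'$, and $\bar{V}_{jj'}$ is the angular-momentum-averaged (monopole) proton-neutron matrix element. It is through the occupancies $n_{j'}$ that the removal of $0d_{5/2}$ protons shifts the effective energy of the neutron $0d_{3/2}$ orbital.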
In Figure \ref{fig:1}, the structure of the $N=14$ isotones is shown, with the additional odd neutron for $N=15$ being shown in an otherwise vacant $0d_{3/2}$ orbital. The $0d_{5/2}$ neutron orbital is filled, at $N=14$. On the right hand side, we see the stable nucleus $^{28}$Si, wherein the 14 protons also fill the proton $0d_{5/2}$ orbital. The $3/2^+$ state in $^{28}$Si is therefore at a relatively low energy, because the $sd$-shell orbitals are relatively closely spaced, all lying below the $N=20$ shell gap. As successive pairs of protons are removed from $0d_{5/2}$, the diagram indicates that the energy of the $0d_{3/2}$ orbital increases. This is actually in accord with detailed calculations and can be understood in terms of the monopole interaction \cite{otsuka1,otsuka2}; a version of this diagram can be found in ref. \cite{otsuka1}. By the time we reach the neutron-rich $^{22}$O, the $0d_{3/2}$ orbital has risen to such an extent that the shell gap is now below that orbital, at $N=16$. The orbital that has moved up in energy has $j=\ell - 1/2$ and the reason for its change is that there are fewer protons in $0d_{5/2}$ (where $j=\ell + 1/2$) with which a valence $d_{3/2}$ neutron can interact. This proton-neutron interaction between $\ell + 1/2$ and $\ell - 1/2$ nucleons is attractive \cite{otsuka1}, and hence the reduction in the number of $0d_{5/2}$ protons causes the rise in energy of the neutron $0d_{3/2}$ orbital. This is, in fact, essentially the explanation for $^{24}$O being the heaviest bound oxygen isotope (with the neutrons just filling the $1s_{1/2}$ orbital). Neutrons in the $0f_{7/2}$ and $1p_{3/2}$ orbitals, with $j=\ell + 1/2$, experience a repulsive interaction with the $0d_{5/2}$ protons and hence they are lowered in energy as these protons are removed. This further confounds the previous $N=20$ gap seen for nuclei near stability, and also tends to displace the $N=28$ gap to a higher number ($N=34$).
In the present work, several of the example nuclei studied using transfer ($^{25,27}$Ne, $^{21}$O) are directly of interest because of this particular migration of orbital energies. What we measure experimentally are the energies of actual states in these nuclei, and not the energies of the shell model orbitals {\it per se}, but there is a strong connection between the energies of the states and the orbitals in the cases that are studied here.
\subsection{Coexistence of single particle structure and other structures}
\label{subsec:coexistence}
Of course, it is not the case that all of the excited states in the final nucleus will have a structure that is simply explained by a neutron orbiting the original core nucleus. Such states are an important but (usually) small subset of the states in the final nucleus, and are selectively populated by transfer reactions. To measure the energy of the $0d_{3/2}$ neutron orbital, say, in $^{21}$O we could imagine an experiment to add a neutron to $^{20}$O and then deduce the energy of the $3/2^+$ excited state, and hence the $0d_{3/2}$ orbital energy relative to the $0d_{5/2}$ orbital of the ground state. This is shown conceptually in Figure \ref{fig:2}(a). In the lowest energy configuration, the two holes in the neutron $0d_{5/2}$ orbital are coupled to spin zero. This association of the energy of the state directly with that of the orbital is overly simplified because the state will not have a pure configuration. In Figure \ref{fig:2}(b), another relatively low energy configuration is shown, which also has spin and parity $3/2^+$. Here, the holes in $0d_{5/2}$ are coupled to spin 2, as they are in the $2^+$ state of $^{20}$O. A neutron in $1s_{1/2}$ can then couple with this to produce two states in $^{21}$O, one of which is $3/2^+$. The residual interactions between valence nucleons will mix these two configurations and the nucleus $^{21}$O will have the single particle amplitude split between the two states. Indeed, in the real nucleus, there will be even more components contributing to the wave functions with various smaller amplitudes.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.55\textwidth]{fig-two-structures}
\caption{Neutron structure for states in $^{21}$O: (a) a low-lying $3/2^+$ state can be made by transferring a neutron into the vacant $0d_{3/2}$ orbital, or (b) by having a neutron in $1s_{1/2}$ coupled to a $2^+$ $^{20}$O core, where the two holes in $0d_{5/2}$ are coupled to spin 2.}
\label{fig:2}
\end{figure}
\subsection{Description of single particle structure using spectroscopic factors}
\label{subsec:SFs}
Due to the mixing of different states with the same spin and parity, the single particle state produced by a nucleon orbiting the core of the target, in an otherwise vacant orbital, will be mixed with other nuclear states of different structures. Usually, these will be of more complex structures, or core excited structures. The contribution that this single particle amplitude makes, to the different states, will result in these states all being populated in a nucleon transfer reaction. The strength of the population of each state in the reaction will depend on the intensity of the single particle component. This intensity is essentially the quantity that is called the {\it spectroscopic factor}. Experimentally, it is measured by taking the cross section that is calculated for a pure single particle state and comparing it to the cross section that is measured. More specifically, this comparison is performed using differential cross sections, which are a function of the scattering angle. If the picture described here is correct, then the experimental cross section should have the same shape as the theory, and simply be multiplied by a number less than one - that is, the spectroscopic factor. To describe the sharing of intensity between states, we say that the single particle strength will be spread across a range of states in the final nucleus. This is represented in Figure \ref{fig:3}, where the single particle strength (represented as the spectroscopic factor) is plotted as a function of excitation energy. The weighted average of the excitation energies, for all states containing strength from a particular $\ell j$ orbital, will give the energy of that orbital. Note that, for experiments with radioactive beams, the limited intensity of the beam is likely to preclude the possibility of identifying and measuring all of the spectroscopic strength, which was traditionally the aim of transfer experiments. 
A different approach will often be dictated by these circumstances, wherein only the strongest states are located experimentally. Then, placing more reliance on theory than was formerly done, an association can be made between the strong states experimentally and the states predicted by the theory to be the strongest. We then need to see whether the experimental data, in terms of the energies and spectroscopic factors for the strongest states, can give us enough clues about how to adapt the theoretical calculations to give an improved set of predictions. If applied consistently across a range of nuclei, using the same theory, this approach can reasonably be expected to yield good results.
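The weighted average indicated in the inset of Figure \ref{fig:3} is simply the spectroscopic-factor-weighted centroid of the observed excitation energies. A minimal sketch (the numbers in the example below are invented for illustration, not data from the figure):

```python
def centroid(levels):
    """Spectroscopic-factor-weighted mean excitation energy for one (l, j):
    E_lj = sum_i S_i E_i / sum_i S_i, over the states carrying that strength.

    levels -- iterable of (excitation energy E_i, spectroscopic factor S_i)
    """
    total = sum(S for _, S in levels)
    return sum(E * S for E, S in levels) / total
```

For example, strength split as $S=0.8$ at 0.0 MeV and $S=0.2$ at 2.0 MeV gives a centroid of 0.4 MeV.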
\begin{figure}[h]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-sp-strengths}
\caption{Spectroscopic factor versus excitation energy for $3/2^-$ (red) and $1/2^-$ (blue) states in $^{41}$Ca. The strength for a given spin is split between different states in the final nucleus. This Figure is due to John Schiffer \protect
\cite{JS_ORNL}. Here, we have added the inset to indicate schematically how the weighted average of the $3/2^-$ excitation energies can be calculated, which gives a measure of the energy of the $1p_{3/2}$ single particle orbital. }
\label{fig:3}
\end{figure}
\subsection{Disclaimer: what this article is, and is not, about}
\label{subsec:disclaimer}
This article is intended to describe briefly the general motivations for studies using (mostly) single nucleon transfer, and to provide in some detail the background, insights and perspectives relevant to designing and performing the experiments. For more details about the nuclear structure motivations in terms of nuclear structure and monopole shift the reader is referred to several excellent reviews \cite{sorlin,sorlin2,otsuka}.
This article most definitely does not seek to summarise or describe the theories that are used to interpret the data from nucleon transfer reactions, although some general features of the theoretical predictions are discussed and a justification is given for the model of choice for the examples of analysis that are described here. Detailed descriptions of the relevant reaction theory can be found in several well-known articles and books, such as those by Glendenning \cite{Glendenning-Cerny, Glendenning} and Satchler \cite{SatchlerIntro, SatchlerBig}. An excellent and up-to-date introduction and overview with particular reference to weakly bound and unbound states is given in this volume by G\'omez Camacho and Moro \cite{GCM}.
With regard to the experimental results, although the main objective of most of these measurements is to obtain the differential cross sections for individual final states, just a small number of illustrative results are shown here. In all of the discussions, references are given to the original work, and it is to those publications that the reader may refer in order to judge the extent and quality of the various differential cross section measurements. It is through the measurement and interpretation of these differential cross sections that angular momentum assignments and determinations of spectroscopic single-particle strength are made for the nuclear states.
\section{Choice of the reaction and the bombarding energy}
\label{sec:2:reactions}
In this section, some features of transfer reactions as traditionally performed using stable targets and a low-mass beam (for example, the (d,p) reaction) are reviewed. Some of the differences in the case of inverse kinematics are introduced.
\subsection{Kinematics and measurements using normal kinematics}
\label{subsec:kinematics}
A good way to measure (d,p) reactions when using a beam of deuterons and a stable target is to use a high resolution magnetic spectrometer to record the protons from the reaction, because this can be done with a high precision and a low background. The proton peaks observed at a particular angle will have different energies for different excited states and hence will be dispersed across the focal plane of the spectrometer. The spacings of the proton energies will be almost the same as the spacings of the energy levels in the final nucleus. An example of the kinematical variation of proton energies with laboratory angle is shown in Figure \ref{fig:4}. The lines that are almost horizontal are calculations of the proton energies from the (d,p) reaction on a $^{208}$Pb target, with the uppermost of these being for protons populating the ground state of $^{209}$Pb. The energies have little variation with laboratory angle because the very heavy recoil $^{209}$Pb nucleus takes away very little kinetic energy. The line that is uppermost at zero degrees, with a much bigger slope, is for the (d,p) reaction on a much lighter target, $^{12}$C. The lines of intermediate slope are for the (d,p) reaction on a target of $^{16}$O. The energy at zero degrees is different for the different targets because of the different reaction Q-values, whereas the slopes reflect the target mass. The carbon and oxygen calculations are shown because these isotopes are typical target contaminants. In a study of $^{208}$Pb(d,p)$^{209}$Pb, the contaminant reactions will give proton energies that overlap the energy region of interest for the $^{209}$Pb states, but these can be identified by comparing data taken at different laboratory angles, since the contaminant peaks will shift in energy relative to the $^{209}$Pb peaks. The example of the proton energies seen in a measurement made at $53.75^\circ$ is shown in Figures \ref{fig:4} and \ref{fig:5}.
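The curves of Figure \ref{fig:4} follow from non-relativistic two-body kinematics. A self-contained sketch (masses approximated by mass numbers, energies in MeV; the Q-values used below are illustrative round numbers rather than evaluated data):

```python
import math

def ejectile_energy(T1, m1, m2, m3, m4, Q, theta_deg):
    """Kinetic energy T3 of the light ejectile at lab angle theta for the
    reaction m1 + m2 -> m3 + m4, beam energy T1 (non-relativistic, MeV).
    Standard textbook solution of the momentum/energy conservation laws."""
    th = math.radians(theta_deg)
    a = math.sqrt(m1 * m3 * T1) * math.cos(th)
    b = (m3 + m4) * (m4 * Q + (m4 - m1) * T1)
    root = (a + math.sqrt(a * a + b)) / (m3 + m4)
    return root * root

# 20 MeV deuterons: (d,p) on a heavy versus a light target
heavy = [ejectile_energy(20.0, 2, 208, 1, 209, 1.7, th) for th in (0, 90)]
light = [ejectile_energy(20.0, 2, 12, 1, 13, 2.7, th) for th in (0, 90)]
```

Between $0^\circ$ and $90^\circ$ the proton energy on $^{208}$Pb changes by only a fraction of an MeV (the heavy recoil takes away almost no kinetic energy), whereas on $^{12}$C it drops by several MeV, reproducing the near-horizontal and steeply sloping lines of the figure.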
\begin{figure}[h]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-normal-kinematics}
\caption{Kinematics plots showing proton energy as a function of laboratory angle for the reaction (d,p) initiated by 20 MeV deuterons. Different curves represent the population of different excited states formed by reactions on $^{208}$Pb, $^{12}$C and $^{16}$O (see text). The angle $53.75^\circ$ is relevant to Figure \protect\ref{fig:5}. }
\label{fig:4}
\end{figure}
The peaks corresponding to different final states in $^{209}$Pb, measured at $53.75^\circ$ for the (d,p) reaction \cite{Kovar}, are shown in Figure \ref{fig:5}. The different intensities reflect both the spectroscopic strengths and the dynamical effects of different angular momentum transfers. It is apparent that different states can easily be resolved and studied. The peaks in the shaded region of Figure \ref{fig:5} correspond to the reactions populating the ground states of $^{17}$O and $^{13}$C from the oxygen and carbon contamination in the target. At increasing laboratory angles, these peaks would be seen to move to the left in the spectrum, relative to the $^{209}$Pb peaks.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-pb209}
\caption{Magnetic spectrometer data for protons from $^{208}$Pb(d,p)$^{209}$Pb at a beam energy of $E_d=20.0$ MeV and a laboratory angle of $53.75^\circ$. Data are from ref. \protect\cite{Kovar}. Excitation energy increases from right to left and the unshaded peaks correspond to states in $^{209}$Pb. The shaded region is where reactions on the $^{12}$C and $^{16}$O in the target produce contaminant peaks. }
\label{fig:5}
\end{figure}
\subsection{Differential cross sections: dependence on beam energy and $\ell$ transfer}
\label{subsec:crosssection}
\begin{figure}[h]
\sidecaption
\includegraphics[width=.6\textwidth]{fig-momentum-diag}
\caption{A consideration of the conservation of linear momentum in transfer implies a relationship of the laboratory scattering angle to the transferred momentum, and therefore to the transferred orbital angular momentum, $\ell$. This implies that the location of the primary maximum in the angular distribution will be approximately proportional to the transferred $\ell$ (see text). }
\label{fig:6}
\end{figure}
The principal piece of information (after excitation energy) that is measured directly, via transfer studies, is the orbital angular momentum that is transferred to the target nucleus. This comes from the shape of the differential cross section. Next, the magnitude of the cross section can tell us the magnitude of the single-particle component of the wave function, or the spectroscopic factor.
The transferred angular momentum will indicate, for single-nucleon transfer, into which orbital the nucleon has been transferred. The transferred angular momentum is measured via the angular distribution of the reaction products. In this type of reaction, the differential cross section will tend to have some diffraction-like oscillatory behaviour, with the angle of the main maximum being related to the magnitude of the transferred angular momentum. We can see how the transferred angular momentum affects the angular distribution by considering a simple momentum diagram. Suppose as in the inset of Figure \ref{fig:6} that the incident projectile has momentum of magnitude $p$ and that the momentum transferred to the target nucleus has magnitude $p_t$. For a small scattering angle, $\theta$, the beam particle will have only a small reduction in the magnitude of its momentum, as seen by construction of the vector diagram for momentum conservation (cf. Figure \ref{fig:6}). From the application of the cosine rule to this triangle, the formula for $\theta ^2$ as shown in the Figure can be derived, where we make use of the expansion to second order for cosine: $2(1- \cos \theta ) \approx 2(1 - [1 - \theta ^2 /2! ]) = \theta^2$. From inspection of the diagram, the reduction $\delta$ in the length of the $p$ vector is small compared to the magnitude of the actual transferred momentum, $p_t$. Hence, we can drop the terms in $(\delta /p)$ in the expression for $\theta ^2$ and we obtain $\theta ^2 \approx (p_t /p)^2$. If the nucleon is transferred at the surface of the target nucleus, which has radius $R$, then the transferred angular momentum $\ell$ is given by $p_t \times R = \sqrt{\ell (\ell +1)} \hbar$. 
This immediately indicates that $\theta \approx {\rm constant} \times \ell$, and in a full quantum mechanical treatment we will not see a single angle but can expect a peak to occur in the differential cross section, at a laboratory angle that is approximately proportional to the transferred angular momentum, $\ell$. This is shown schematically in Figure \ref{fig:6}, which also includes the diffractive effects in a schematic fashion. In fact, for deuterons incident at a kinetic energy of $E$ (MeV) on a target of mass $A$ this simple picture gives $\theta {\rm (degrees)} \approx 217/(\sqrt{E} \times A^{1/3}) \times \sqrt{\ell (\ell+1)}$. For 20 MeV deuterons incident on a target of mass 32, the constant term evaluates to $15^\circ$, which of course can serve only as a guide, but is in reasonable agreement with the trend in the primary maxima observed in the middle panel of Figure \ref{fig:7}(a) which shows proper calculations for (d,p) on $^{32}$Mg at 10 A.MeV. The Figure is actually plotted in terms of $\theta _{{\rm c.m.}}$, but would look very similar when plotted in terms of $\theta _{{\rm lab}}$ for normal kinematics (which refers to the situation where the target is heavier than the projectile). The preceding discussion of the vector diagram is adapted from reference \cite{Cohen}.
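As a numerical illustration of this rule of thumb (the factor 217 and the surface-transfer picture are exactly as quoted above, so the resulting angles serve only as a guide), a short Python sketch:

```python
import math

def theta_peak_deg(E_d_MeV, A_target, ell):
    """Approximate lab angle (degrees) of the primary maximum for a
    (d,p) reaction transferring ell units of orbital angular momentum,
    using the surface-transfer estimate quoted in the text."""
    const = 217.0 / (math.sqrt(E_d_MeV) * A_target**(1.0 / 3.0))
    return const * math.sqrt(ell * (ell + 1))

# 20 MeV deuterons (10 A.MeV) on a target of mass 32
const = 217.0 / (math.sqrt(20.0) * 32**(1.0 / 3.0))
print(f"constant term = {const:.1f} degrees")       # ~15 degrees
for ell in range(4):
    print(f"ell = {ell}: primary maximum near "
          f"{theta_peak_deg(20.0, 32, ell):5.1f} degrees")
```

The primary maxima move out in angle roughly as $\sqrt{\ell(\ell+1)}$, in line with the trend visible in the middle panel of Figure \ref{fig:7}(a).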
\begin{figure}[h]
\sidecaption
\includegraphics[width=1.0\textwidth]{fig-cm-sigma}
\caption{Differential cross sections for single nucleon transfer in (a) $^{32}$Mg(d,p)$^{33}$Mg, and (b) $^{132}$Sn(d,p)$^{133}$Sn. The three panels in each case are for three different bombarding energies, namely 5, 10 and 20 A.MeV. Each panel shows calculations for several different $\ell$-transfers. The ADWA model was used (see text, subsection \protect\ref{subsec:adwa}). These plots are in terms of the centre of mass reaction angle, $\theta _{{\rm c.m.}}$.}
\label{fig:7}
\end{figure}
Calculations are shown in Figure \ref{fig:7} for various $\ell$-transfers at several different bombarding energies, and for two different targets. The first point to note is that the shapes of the distributions for different $\ell$-transfer are distinctive, especially for 10 A.MeV (the middle panels). For the light nucleus $^{32}$Mg, the 5 A.MeV distributions are also characteristic of the transferred $\ell$. For the heavier $^{132}$Sn target, the distributions are less distinctive due to the forward angle parts (small $\theta _{{\rm c.m.}}$) being suppressed. This is due to the Coulomb repulsion between the projectile and the target, which means that the small angle scattering (especially) has a suppressed nuclear component. In addition to the above considerations, there is a general trend towards lower cross sections as the bombarding energy increases, of around half to one order of magnitude per 10 A.MeV. Taken together, this information suggests that 10 A.MeV is an ideal bombarding energy for this type of study, and this can perhaps be relaxed down to 5 A.MeV for lighter nuclei. The remaining question is whether the existing theories are equally valid at all energies, and the ADWA model used here (see section \ref{subsec:adwa}) should have good validity at both 5 and 10 A.MeV, although probably not at energies much lower than this.
The aim of a typical nucleon transfer experiment is to measure the differential cross sections for different states in the final nucleus. From the shape of the cross section plot, the transferred angular momentum can then be deduced. The calculations shown in Figure \ref{fig:7} are for pure single-particle states. That is, it is assumed that the structure of the final state is given perfectly by the picture of the target core with the transferred nucleon in an associated shell model orbital. Hence, another important experimental result will be the scaling factor between the theoretical calculation and the data, which will give the experimental value for the spectroscopic factor.
\begin{figure}[h]
\sidecaption
\includegraphics[width=1.0\textwidth]{fig-lab-sigma}
\caption{As for Figure \protect\ref{fig:7}, but in terms of the laboratory angle for the detected proton. In this reference frame, the extreme right of each panel corresponds to the point at the extreme left of the panels in Figure \protect\ref{fig:7}. }
\label{fig:8}
\end{figure}
If experiments are performed in normal kinematics, with a deuteron beam, then the cross sections will look much like Figure \ref{fig:7} whether we plot them in terms of the centre of mass angles or the laboratory angles. However, in reality the isotopes $^{32}$Mg and $^{132}$Sn used in these examples are radioactive and the experiments need to be performed in inverse kinematics: where the deuteron is the target and the heavier particle is the projectile. In Figure \ref{fig:8}, the same calculations as in Figure \ref{fig:7} are plotted, but using the laboratory angles and assuming an inverse kinematics experiment. The same relative velocities of beam and target, i.e. the same values of the beam energy in MeV per nucleon, are employed. It can be seen that the structure characteristic of $\ell$ is maintained, for the cases where it was previously evident. The transformation takes zero degrees in the centre of mass frame to $180^\circ$ in the laboratory frame. Now, the first peak observed relative to $180^\circ$ is further from $180^\circ$ as the $\ell$-transfer increases. From inspection, an experimental measurement should include at least the region from $90^\circ$ to $180^\circ$ in order to allow an assignment of the transferred $\ell$ according to the observed shape of the distribution. The situation with the heavier target is more problematic, especially at the lowest energy shown here.
The transformation from the centre of mass to the laboratory reference frame, and in particular the transformation of the solid angle, is discussed further in section \ref{subsec:com}.
\subsection{Choice of a theoretical reaction model: the ADWA description}
\label{subsec:adwa}
The perfect theoretical model to interpret experimental data for transfer reactions does not exist. The scattering theory is most often treated in an {\it optical model} approach, where the scattering potential is complex and has attractive and absorptive components. As with optical light scattered from a cloudy crystal ball, the loss of flux (by whatever process) is represented mathematically by the imaginary part of the potential. Most often, but not of necessity, the final state populated in a reaction such as (d,p) is represented as a core (being the original target nucleus) with the transferred nucleon in an eigenstate of the potential that arises due to the core. This implies a perfect single-particle structure for the final state, and the ratio between the experimental and theoretical cross sections is then the spectroscopic factor, as previously discussed. The simplest scattering theory, described in introductory quantum mechanics texts (for example ref. \cite{Schiff}), is in terms of a {\it plane wave Born approximation} (PWBA). An improved model \cite{Schiff} replaces the plane waves by the wave solutions that are distorted by the presence of the scattering potential, giving the {\it distorted wave Born approximation} (DWBA).
Even though transfer has been a widely used and valuable tool in nuclear spectroscopy for well over 50 years, there are still new and important developments occurring in quite fundamental aspects of the theory. One aspect of this concerns the spatial localisation of the transferred nucleon in the projectile and the final nucleus, or what is known as the {\em form factor}. Another important aspect, particularly for the (d,p) reaction, concerns the coupling to continuum states. Because the deuteron is weakly bound, it very easily disintegrates in the field of the target nucleus when used as a projectile. When the (d,p) reaction is applied to weakly bound exotic nuclei, the problem also occurs for the final nucleus. What is more, the coupling is not necessarily one-way: continuum states can couple back to the bound state, which can have important effects on the reaction cross section. One way to take this into account is via {\em coupled reaction channels} (CRC) calculations, in which all of the different contributing reaction pathways are explicitly included in the calculation. In order to include the continuum contribution, the theory usually considers hypothetical energy bins in the continuum and treats them as different states that can couple into the intermediate stages of the reaction. These are {\em coupled discretized continuum channels} (CDCC) calculations. The challenges of such calculations are many, including the computational power required and the choices of parameters for the various coupling strengths.
An ingenious analytical short-cut to include continuum contributions was developed by Johnson and Soper \cite{ADWA}. In the scattering process, the neutron and the proton inside the deuteron have complex histories, particularly when continuum states for the neutron are included - that is, when deuteron or final-nucleus breakup occurs. The Johnson-Soper method relied on the observation that certain integrations over all spatial coordinates are dominated by the contributions wherein the neutron and proton are within a range determined by the neutron-proton interaction. Within a {\em distorted wave} formulation, certain energy differences are ignored, which means that the approximations of the model become less applicable at lower beam energies. However, at 5 to 10 A.MeV they should remain substantially valid. The coupling to the continuum, subject to these approximations, is included exactly and to all orders by means of the simplified integrations. This theoretical method has become known as the {\em adiabatic distorted wave approximation}, or ADWA. A convenient feature is that the calculations are largely identical (but with different input) to those required for the DWBA, and hence the pre-existing DWBA computer codes can be adapted to perform ADWA calculations. The DWBA remains another popular choice for the analysis of transfer reactions. Descriptions can be found, for example, in the articles and books by Glendenning \cite{Glendenning-Cerny, Glendenning} and Satchler \cite{SatchlerIntro, SatchlerBig}. The DWBA uses imaginary potentials to take into account the loss of reaction flux from the elastic channel, which allows for deuteron breakup but not for a proper two-way coupling with the continuum. The extensions via CDCC are computationally intensive and often incomplete in terms of the contributing physics. Therefore, the ADWA has important advantages in the case of (d,p) reactions and is adopted for all such analysis in the present work.
The calculations are performed using a version of the code TWOFNR \cite{TWOFNR}. The ADWA method has recently been refined to take into account the zero-point motion of the neutron and proton inside the bound deuteron \cite{TimofeyukJS}.
\subsection{Comparisons: other transfer reactions and knockout reactions}
\label{subsec:knockout}
In the discussion in this article the emphasis is on single-nucleon transfer, and primarily (d,p) reactions, studied in inverse kinematics with radioactive beams. In terms of physics, the focus is on understanding single particle structure and the evolution of shell orbitals and shell gaps as nuclei become more exotic. There are certainly other types of transfer reaction and other ways to probe single particle structure. Some of those topics are briefly described here. This article aims to identify the experimental challenges and techniques of transfer reaction studies, rather than to provide a review of all such studies in the literature; some more details of other work can be found, for example, in ref. \cite{KateNobel} or in other papers by many of the groups cited in section \ref{sec:4:lightion}.
Nucleon removal reactions include (p,d) and (d,t) which are discussed in sections \ref{subsec:spectrometer} and \ref{subsec:dt} respectively, and (d,$^3$He). An alternative to the first two is ($^3$He,$\alpha$) whilst an alternative to (d,p) is ($\alpha$,$^3$He). The choice of which reaction to use should not be random. The helium-induced reactions will generally show a different selectivity due to the different reaction Q-value, and could be chosen to highlight higher-$\ell$ transfers. In terms of the discussion in section \ref{subsec:crosssection}, a more negative Q-value will reduce the kinetic energy in the exit channel so that the exiting particle takes away less orbital angular momentum than it brings in. This will tend to favour the higher $\ell$-transfers. In practice, the helium-induced reactions are harder to study using radioactive beams. No simple and thin solid helium target exists, so it is necessary to use either a gas target (a windowed cell, or a differentially pumped jet), an implanted helium-in-metal target or a cryogenic target. Each has its own challenges, but all can be built and are likely to find increased application in the future.
Another important type of transfer reaction is when a cluster is transferred. It is always the case that when multiple particles are transferred then the process could be single-step (when the whole cluster is preformed and is transferred) or could have two or even more steps involved. Multiple-step processes are modelled theoretically using {\it coupled reaction channel} (CRC) extensions of the DWBA. In the case of heavy-ion transfer, they can also be modelled semiclassically, as mentioned in section \ref{subsec:hi-selectivity}. Traditionally, anything heavier than helium is called a heavy ion, and two heavy-ion induced transfer reactions of particular importance are ($^6$Li,d) and ($^7$Li,t), which transfer an $\alpha$-particle. Various heavy-ion transfer reactions, including $\alpha$-transfer, are discussed for example in ref. \cite{AnyasWeiss}.
The simplest form of cluster transfer is probably the (t,p) reaction in which the transfer of two neutrons, coupled to spin and relative orbital angular momentum zero, is the dominant mechanism. These can carry various amounts of angular momentum with them as a cluster, into the final nucleus. Experimentally, it is a challenging reaction: historically, the tritium nucleus was the projectile and would pose particular problems due to its radioactivity, and with the advent of radioactive beams the tritium has to be incorporated into a compact target and then be bombarded, which potentially poses even greater problems. Nevertheless, these problems have been solved in a study of shape coexistence in $^{32}$Mg via the (t,p) reaction in inverse kinematics \cite{Wimmer}. The beam was $5 \times 10^4$ pps of $^{30}$Mg at 1.8 A.MeV at ISOLDE, CERN. This experiment used the T-REX array which is described in section \ref{subsec:silicon}. The target was a foil of titanium metal ($500\mu$g/cm$^2$) into which $40\mu$g/cm$^2$ of $^3$H had been absorbed. There was a ratio of $\approx 1.5$ of hydrogen atoms to lattice Ti atoms, giving a radioactivity of the target of 10 GBq. For stable targets, the (t,p) reaction tends to have a large positive Q-value. For the more neutron-rich radioactive isotopes, such as $^{30}$Mg, the Q-value drops to be close to zero but the kinematics remain quite similar to (d,p), which is discussed in section \ref{subsec:inversekinematics}.
With the advent of radioactive beams at the extremes of measured nuclear existence, obtained via intermediate and high energy fragmentation reactions at laboratories such as MSU, GANIL, GSI and RIKEN, a new type of nucleon removal reaction was developed and exploited. This type of reaction is sometimes called a {\it knockout} reaction, but it is completely separate from true knockout reactions such as $(e,e^\prime p)$ and $(p,p^\prime p)$. The nucleus in the beam is incident on a light target nucleus that acts like a black disk and ideally cannot be internally excited without disintegrating - the usual choice is $^9$Be. The experimental requirement is that the projectile survives the reaction, with just the single nucleon removed, and this automatically selects very peripheral collisions. The black disk essentially erases some part of the tail of the wave function of the removed nucleon \cite{Hansen}. This method, which was originally developed to study the ground states of halo nuclei, provides another way in which to study the single particle structure of nuclei. Nucleon removal from the ground state of a projectile simultaneously studies the structure of the projectile state and the structure of the final nucleus. Individual states in the final nucleus can be identified using gamma-ray spectroscopy. The angular momentum transfer and the spectroscopic factor are deduced, respectively, from the width of the longitudinal momentum distribution of the beam fragment and from the magnitude of the cross section. A very successful method of analysing these reactions was developed using high-energy Glauber approximations that were previously used to describe high energy deuteron-induced reactions (the deuteron being the archetypal halo nucleus) and this theory is outlined in ref. \cite{JATLewes}, with a more extensive discussion of results in the review of ref. \cite{JATreview}. 
A currently very topical result from the extensive studies using knockout reactions is the apparent quenching of single-particle spectroscopic factors relative to the predictions of large-basis shell model calculations \cite{Gade}. The quenching appears to be correlated with the binding energy of the removed nucleon, which suggests some connection with higher-order correlations of nucleons, coupling to configurations outside of the shell model basis. Various different explanations have been advanced for this effect, for example those in refs. \cite{TimofeyukSource,Barbieri,barbieri2}. The observations appear to be consistent with previously observed quenching of spectroscopic strength in stable nuclei using $(e,e^\prime p)$. One way to investigate this for radioactive nuclei, and also to check the reaction dependence, is via $(p,p^\prime p)$ knockout reactions such as those performed in Japan \cite{Kobayashi} and at GSI \cite{GSIp2p}. Another is to compare neutron and proton knockout with results from (d,t) and (d,$^3$He) studies, as has been performed for the neutron deficient $^{14}$O nucleus \cite{Freddy}.
\section{Experimental features of transfer reactions in inverse kinematics}
\label{sec:3:inverse}
This section addresses some simple and rather general features of reactions such as (d,p) and (p,d) when studied in inverse kinematics. Instead of the centre of mass frame being almost at rest in the laboratory frame, as in normal kinematics experiments, the centre of mass frame moves with nearly the beam velocity. The kinematical variation of energy with angle therefore bears no resemblance to the situation for normal kinematics shown in Figure \ref{fig:4}. In a (d,p) or (p,d) reaction, the mass of the light (target) particle is substantially changed by the transfer, being halved in (d,p) or doubled in (p,d). This in itself turns out to be a major factor in determining the two-body kinematics of the reaction. In order to illustrate this, it is convenient to use velocity addition diagrams, where we add the velocities of particles as measured in the centre of mass frame to a vector representing the velocity of the centre of mass frame in the laboratory. The resultant vectors give the velocities of the final particles in the laboratory frame, and of course this is using the Galilean transformation and thus is strictly correct only for non-relativistic situations. This is no great problem if we are working at the energies of order 10 A.MeV that were suggested in section \ref{subsec:crosssection}. The discussion in the following section follows that in ref. \cite{Divonne}.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-elastic-vectors}
\caption{Classical velocity addition diagram for elastic scattering in inverse kinematics, showing that the light (target) particles emerge at angles just forward of $90^\circ$ for small centre of mass scattering angles. }
\label{fig:9}
\end{figure}
\subsection{Characteristic kinematics for stripping, pickup and elastic scattering}
\label{subsec:inversekinematics}
The vector diagram describing elastic scattering in inverse kinematics is shown in Figure \ref{fig:9}. The velocity of the centre of mass in the laboratory frame is a large fraction of the beam velocity, since the target is light. From conservation of momentum, the velocities of the two particles after the collision, as measured in the centre of mass frame, must be in inverse proportion to their masses. Thus, the target-like particle has a velocity $\upsilon _{{\rm target}}^{{\rm c.m.}}$ that is much greater than that of the beam-like particle in this frame. This is shown in the Figure by the red dashed vectors. Furthermore, the target particle is initially at rest and hence the length of the target-like vector $\upsilon _{{\rm target}}^{{\rm c.m.}}$ is equal to the length of the centre of mass velocity as measured in the laboratory frame, $\upsilon _{{\rm c.m.}}^{{\rm lab}}$. The scattering angle as measured in the centre of mass frame is given by the angle enclosed between $\upsilon _{{\rm c.m.}}^{{\rm lab}}$ and $\upsilon _{{\rm target}}^{{\rm c.m.}}$, indicated by $\theta$ in Figure \ref{fig:9}. For a scattering angle of zero in the centre of mass frame, the light particle in the final state is stationary. For small scattering angles (where the cross section is highest, for elastic scattering) the light particles emerge just forward of $90^\circ$ and with a velocity (energy) that increases approximately linearly (quadratically) with centre of mass angle. Also, the centre of mass angle is simply twice the difference between the laboratory angle and $90^\circ$ in this classical approximation, since the velocity addition triangle is isosceles. The beam particle continues in the forward direction with little change in either energy or direction.
In the case of backscattering in the centre of mass frame, the light particles travel rapidly in the direction of the incoming beam, and the beam particle also continues in that direction, being just slightly slowed down. An important result here, of experimental significance, is that elastically scattered target particles will be detected just forward of $90^\circ$ and their energies will increase rapidly with angle. In general, a thick detector will be required to stop them so that their energy can be measured precisely.
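These elastic-scattering features can be quantified with non-relativistic two-body kinematics: the recoiling target particle emerges at $\theta_{\rm lab} = 90^\circ - \theta_{\rm c.m.}/2$ with an energy proportional to $\cos^2 \theta_{\rm lab}$. A minimal Python sketch, taking as an illustrative assumption a $^{132}$Sn beam at 10 A.MeV on a deuteron target:

```python
import math

def recoil_energy(E_beam, m_proj, m_targ, theta_lab_deg):
    """Lab kinetic energy (MeV) of the elastically scattered (recoil)
    target particle; non-relativistic, adequate at ~10 A.MeV."""
    th = math.radians(theta_lab_deg)
    return 4 * m_proj * m_targ / (m_proj + m_targ)**2 * E_beam * math.cos(th)**2

def theta_cm_deg(theta_lab_deg):
    """Elastic recoil: c.m. angle is twice the difference from 90 deg."""
    return 2 * (90.0 - theta_lab_deg)

# 132Sn beam at 10 A.MeV (total 1320 MeV) on a deuteron target
E_beam = 132 * 10.0
for th in (85.0, 80.0, 70.0, 60.0):
    print(f"theta_lab = {th:5.1f} deg -> theta_cm = {theta_cm_deg(th):5.1f} deg, "
          f"E_d = {recoil_energy(E_beam, 132, 2, th):6.2f} MeV")
```

The deuteron energy rises from well under 1 MeV near $90^\circ$ to tens of MeV by $60^\circ$, which illustrates why thick detectors are needed over part of the angular range.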
\begin{figure}[h]
\sidecaption
\includegraphics[width=1.0\textwidth]{fig-nonelastic-vectors}
\caption{Velocity addition diagrams (a) for a typical {\em pickup} reaction such as (p,d) or (d,t), and (b) for a typical {\em stripping} reaction such as (d,p). Certain assumptions about the beam energy and the reaction Q-value are described in the text. }
\label{fig:10}
\end{figure}
The vector diagrams describing reactions in which there is {\em pickup} of a nucleon by the light particle, or {\em stripping} of a nucleon from the light particle are shown in Figure \ref{fig:10} (adapted from ref. \cite{Divonne}). The lengths of the vectors in these diagrams are given in terms of the masses involved, and the reaction Q-value, by formulae included in refs. \cite{Divonne, Kinematic}. As shown by those formulae, the diagrams shown here implicitly assume a small reaction Q-value, or at least that the Q-value in units of MeV is small compared to the energy of the beam as expressed in MeV per nucleon. Especially for reactions involving exotic neutron rich projectiles, the Q-values for neutron addition or removal will typically be small, and similarly for a reaction such as (d,$^3$He) on the proton-rich side of the nuclear chart.
In the case of a reaction such as (p,d), corresponding to Figure \ref{fig:10}(a), it is easy to obtain a rough estimate of the length of the light particle vector in the centre of mass, labelled $\upsilon _e$ in the Figure. Firstly, the heavy particle is going to continue with little change in velocity or direction, much as in the case of elastic scattering. Now, the centre of mass vector in elastic scattering was required to be the same length as the centre of mass velocity vector in the laboratory frame, denoted by $\upsilon _{{\rm cm}}$ in Figure \ref{fig:10}. In the case of (p,d), the mass of the light particle is doubled relative to the elastic scattering situation, but the momentum that this particle must carry in the centre of mass frame is about the same as in the elastic case, which follows from the remark about the velocity of the beam particle. Thus, this vector $\upsilon _e$ is about half the length of $\upsilon _{{\rm cm}}$. The precise value depends upon the reaction Q-value of course, but the basic form of the vector diagram is always the same, subject to the assumptions mentioned above. The result is that the light reaction products are forward focussed into a cone of angles of around $40^\circ$ relative to the beam direction. There will be two energy solutions for each angle, within this cone, where the lower energy corresponds to the smaller centre of mass reaction angle and hence (typically) the higher cross section. The energy of the lower solution may be very low indeed.
In the case of a reaction such as (d,p), corresponding to Figure \ref{fig:10}(b), the mass of the light particle is halved in the reaction and hence, in this approximate picture, its velocity vector in the centre of mass frame is doubled in length. The small centre of mass angles (and typically the higher cross section) will correspond to light particles that emerge travelling opposite to the beam direction. They will have energies that may be quite low, and will increase in energy all the way to zero degrees in the laboratory frame, which corresponds to a centre of mass angle of $180^\circ$. For reactions that populate an excited state in the final nucleus, there will be less energy available in the final state than for the ground state, and hence the vectors in the centre of mass are shorter and the laboratory energies of the light particle will be lower than for the ground state, at all laboratory angles.
When planning an experiment in inverse kinematics, it can be useful to construct a velocity addition diagram such as those in Figure \ref{fig:10}. It allows an intuition about the reaction kinematics to be gained, easily. The form of the diagram depends only on the ratio of the length of $\upsilon _e$ to that of $\upsilon _{{\rm cm}}$. This ratio is given \cite{Divonne} by
$$ \frac{\upsilon _e}{\upsilon _{{\rm cm}}} = \left( q f ~\frac{M_R}{M_P} \right)^{1/2} \approx \sqrt{qf} {\rm ~~if~} M_R \approx M_P $$
where the masses of the projectile and recoil are denoted by $M_P$ and $M_R$. The quantity $f$ is related to the change in mass of the target particle, $f=M_T / M_e$ where $M_T$ and $M_e$ are the masses of the target and light ejectile respectively. The quantity $q$ is of order unity but has a Q-value dependence and typically varies between 1 and 1.5. Specifically, $q=1+Q/E_{{\rm cm}}$ where $Q = Q_{{\rm g.s.}} - E_{{\rm x}}$ for an excited state and $E_{{\rm cm}}$ is the kinetic energy in the centre of mass frame. Given that the target is much lighter than the projectile, most of the kinetic energy in the centre of mass frame is carried by the target particle, so $E_{{\rm cm}} \approx M_T (E/A)_{{\rm beam}}$. Then $q \approx 1+ Q/[2 (E/A)_{{\rm beam}}]$ for a deuteron target, and $q$ is closer to unity for small Q-values or as the $E/A$ for the beam is increased. In the limit that $q=1$ then for a pickup reaction such as (p,d) or (d,t) the size of the cone around the beam direction that contains all of the events is given by $\theta _{{\rm max}} = \sin ^{-1} \sqrt{f}$ where $f=1/2$ for (p,d) and $f=2/3$ for (d,t). This gives, as a first approximation, a cone of about $50^\circ$ half-angle in each case. Similarly, it is possible to estimate that in (d,p) the laboratory angle corresponding to $30^\circ$ in the centre of mass frame is about $110^\circ$, so a (d,p) experiment will typically need to measure at least the angular range from $110^\circ$ to near $180^\circ$ in the laboratory.
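The angles quoted above can be recovered directly from the velocity addition construction. A minimal Python sketch, assuming the $q=1$ (small Q-value) limit throughout:

```python
import math

def max_cone_angle_deg(f, q=1.0):
    """Half-angle of the forward cone for a pickup reaction
    (v_e < v_cm): sin(theta_max) = v_e/v_cm = sqrt(q*f)."""
    return math.degrees(math.asin(math.sqrt(q * f)))

def lab_angle_deg(theta_cm_deg, ratio):
    """Lab angle of the light ejectile, emitted at (180 - theta_cm)
    to the beam in the c.m. frame, with ratio = v_e/v_cm."""
    th = math.radians(theta_cm_deg)
    vx = 1.0 - ratio * math.cos(th)     # beam-direction component
    vy = ratio * math.sin(th)           # transverse component
    return math.degrees(math.atan2(vy, vx))

print(f"(p,d): theta_max = {max_cone_angle_deg(1/2):.1f} deg")
print(f"(d,t): theta_max = {max_cone_angle_deg(2/3):.1f} deg")
# (d,p): f = 2, so v_e/v_cm ~ sqrt(2); 30 deg c.m. maps backward in the lab
print(f"(d,p) at 30 deg c.m.: theta_lab = "
      f"{lab_angle_deg(30.0, math.sqrt(2)):.1f} deg")
```

The pickup cone half-angles evaluate to $45^\circ$ and $54.7^\circ$ (hence "about $50^\circ$"), and a $30^\circ$ centre of mass angle in (d,p) maps to a laboratory angle close to $110^\circ$, consistent with the estimates above.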
Another interesting feature of the velocity addition diagram is how it scales with the $E/A$ of the beam \cite{enam}. Whilst the relative lengths of the vectors are determined largely by the masses of the various particles, with some residual dependence on the Q-value and the beam energy, the length scale (as given in ref. \cite{Kinematic}) is $\sqrt{2qE_{{\rm cm}}/(M_R + M_e )}$ which, with the assumption again that $M_P \gg M_T$, is approximately proportional to $\sqrt{(E/A)_{{\rm beam}} / M_P}$. Now, with the assumption that $M_P \approx M_R$ (because the transfer hardly changes the mass), the lengths of the vectors such as $\upsilon _e$ and $\upsilon _{{\rm cm}}$ scale as $\sqrt{M_P}$. Thus, the whole diagram scales as the product of these lengths and the length scale itself, and the $\sqrt{M_P}$ factor cancels. The diagram therefore scales roughly as $\sqrt{(E/A)_{{\rm beam}}}$ and the energies will scale roughly as $(E/A)_{{\rm beam}}$. The approximation is better the closer the Q-value is to zero, but the expression at least gives a guide to the behaviour that can be expected: the detected energy scales with the beam energy. For elastic scattering, the Q-value is zero, so the result is accurate: the rate of increase of the energy of the scattered particle with angle, for angles moving forward of $90^\circ$, scales with the beam energy.
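The scaling with $(E/A)_{{\rm beam}}$ can be illustrated for elastic scattering, where the non-relativistic recoil energy of the light target particle is $E_T = [4 M_P M_T/(M_P+M_T)^2]\cos^2\theta_{{\rm lab}}\,E_{{\rm beam}}$; for $M_P \gg M_T$ this reduces to $4 M_T (E/A)_{{\rm beam}}\cos^2\theta_{{\rm lab}}$, independent of the projectile mass. A non-relativistic sketch (function name illustrative):

```python
import math

def elastic_recoil_energy(E_per_A, M_P, M_T, theta_lab_deg):
    """Non-relativistic lab energy (MeV) of an elastically scattered
    target particle; theta_lab_deg is its recoil angle from the beam axis."""
    E_beam = M_P * E_per_A                       # total beam kinetic energy
    factor = 4.0 * M_P * M_T / (M_P + M_T)**2
    return factor * math.cos(math.radians(theta_lab_deg))**2 * E_beam

# Deuteron recoils at 30 degrees: close to 8*(E/A)*cos^2(theta) for heavy beams
print(elastic_recoil_energy(35.0, 74, 2, 30.0))
print(elastic_recoil_energy(8.16, 74, 2, 30.0))
```

For fixed masses and angle the result is exactly proportional to $(E/A)_{{\rm beam}}$, in line with the scaling argument above.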
The results of proper (relativistically correct) kinematics calculations for two very different beams and energies are shown in Figure \ref{fig:11}. In Figure \ref{fig:11}(a), the results are for a beam of $^{16}$C at 35 A.MeV such as might be produced by a fragmentation beam facility. The central solid line near $90^\circ$ shows the energy of elastically scattered deuterons, rising steeply as the centre of mass angle increases and the laboratory angle slightly decreases. On the right hand side of Figure \ref{fig:11}(a) are the results for the protons from the (d,p) reaction populating the ground state in $^{17}$C (upper curve) and a hypothetical excited state at 4 MeV excitation energy. The pair of curves with the lowest energies at zero degrees are for the (d,t) reaction. The faint dotted line near $90^\circ$ shows the energies of elastically scattered protons, if there were to be any $^1$H in the target along with the $^2$H (a situation commonly encountered experimentally). The curve with the highest energy at zero degrees is for tritons from the (p,t) reaction populating the ground state of $^{14}$C. The remaining curves at the intermediate energies at zero degrees are for the reactions (d,$^3$He) and (p,d) initiated by the different isotopes in the target. Lines connecting all of the curves show the points corresponding to the indicated centre of mass angles. Note that the energies of the particles from (d,p) and (d,t) are less than or equal to 5 MeV over the most interesting range of relatively small centre of mass angles, where the differential cross section will be largest and most structured. Also, the maximum energies reached over the interesting range are all less than about 30 MeV. Figure \ref{fig:11}(b) is for $^{74}$Kr at 8.16 A.MeV. The curves on the right are for (d,p) to the ground state of $^{75}$Kr and a hypothetical state at 5 MeV excitation. At forward angles, the two lower curves are for (d,$^3$He) from this neutron-deficient nucleus. 
The next two curves are for (d,t). In each of these cases, the calculations are for the ground state and a 5 MeV state. The final kinematic curve in Figure \ref{fig:11}(b), intersecting at 15 MeV at $0^\circ$, is for the (p,d) reaction to the ground state of $^{73}$Kr. Once again, the particles of principal interest are generally of about 5 MeV or less, and the energies of interest range up to about 30 MeV. This consistency of the relevant kinematic energy-angle domains has important implications for the design of particle detection systems aimed at studying transfer in inverse kinematics. It indicates that a static array could be optimised to such measurements and would be applicable to a wide range of reaction studies.
\begin{figure}[h]
\sidecaption
\includegraphics[width=1.0\textwidth]{fig-kinematic-energies}
\caption{Two-body relativistic kinematics calculations for two very different beams in terms of mass and energy, including results for elastic scattering and several different single-nucleon transfer reactions: (a) $^{16}$C at 35 A.MeV, (b) $^{74}$Kr at 8.16 A.MeV. }
\label{fig:11}
\end{figure}
\subsection{Laboratory to centre of mass transformation}
\label{subsec:com}
It is common to transform results for the measurement of differential cross sections from the laboratory frame into the centre of mass frame, for comparison with the results of reaction theory calculations. The theory is of course naturally calculated in the centre of mass frame. In the days when the experiments were performed in normal kinematics, the shape of the cross section plot would be similar in both the laboratory and the centre of mass reference frame, because the target was typically much heavier than the incident deuteron. In the case of inverse kinematics, this is no longer the case, as shown by comparing Figures \ref{fig:7} and \ref{fig:8}. It is important to note that it is not simply a transformation from one angle to another that changes the differential cross sections between the two reference frames: the solid angle is also transformed. The ratio of differential factors that describes this transformation is known as the Jacobian. Inspection of Figure \ref{fig:10}(b), which describes the (d,p) reaction, shows that for backward laboratory angles (as illustrated) the laboratory angle (for $\upsilon_e^{{\rm lab}}$ measured relative to $\upsilon_{{\rm cm}}$) varies much more rapidly than the centre of mass angle (enclosed between $\upsilon _{{\rm cm}}$ and $\upsilon _e$). In the diagram there is a factor of about two between the rates of change. This means that a small solid angle in the centre of mass frame is spread out over a rather large solid angle in the laboratory frame. Thus, while $d\sigma /d\Omega _{{\rm c.m.}}$ is largest at small $\theta _{{\rm c.m.}}$ or near $180^\circ$ in the laboratory frame, the effect of the Jacobian is that $d\sigma /d\Omega _{{\rm lab}}$ is reduced relative to less backward angles. That is, while the very backward laboratory angles are still important in (d,p) measurements, for determining the shape of the differential cross section, there are very few counts there.
The transformation from centre of mass to laboratory angles, as just mentioned, has the effect of spreading the counts from (d,p) at small centre of mass angles over a wider solid angle, which reduces the yield of counts observed near $180^\circ$ in the laboratory frame. A completely separate effect to remember is the ``$\sin \theta$'' effect. This will further emphasise the importance of detectors close to $90^\circ$ compared to those near $180^\circ$, assuming that the charged particle detection is cylindrically complete, or approaching this. Then, since the solid angle in a range $d\theta $ at angle $\theta$ is given by $2\pi \sin \theta d\theta$, the cross section that determines the number of counts in an experiment is not $d\sigma / d\Omega$ but
$$\frac{d\sigma }{d\theta} = 2\pi \sin \theta \frac{d\sigma }{d\Omega }~.$$
Thus, if a coincidence experiment is considered, for example measuring gamma-rays in a (d,p$\gamma$) experiment, many of the gamma-rays will come in association with particles detected towards $90^\circ$.
The transformation between the centre of mass and laboratory reference frames, for the differential cross section, is given for example by Schiff in his classic {\em Quantum Mechanics} text \cite{Schiff}:
$$ \left( \frac{d\sigma}{d\Omega}\right) _{{\rm lab}} = \frac{(1+\gamma^2 +2\gamma \cos \theta _{{\rm c.m.}})^{3/2}}{\left| \, 1+\gamma \cos \theta _{{\rm c.m.}} \right| } \times
\left( \frac{d\sigma}{d\Omega}\right) _{{\rm c.m.}} $$
where $\gamma = \upsilon_{{\rm c.m.}}/\upsilon_e$. This complicated transformation, as noted above, changes the shape of the curves significantly. Therefore, a plot in the centre of mass frame of experimental data for the differential cross section will retain very little information about any experimental constraints or impacts of the laboratory angles. For example, the physical gaps between detectors, or the differing thicknesses of target through which the particles must exit: these often have important implications for the data but the information is lost in the transformation to the centre of mass frame. For this reason, some workers choose to plot experimentally measured cross sections in the laboratory frame, for inverse kinematics experiments, following the ethos of presenting the data in a form as close as possible to what is actually measured experimentally.
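Schiff's transformation can be implemented directly; the sketch below (function name illustrative) returns the laboratory differential cross section given the centre of mass value and angle:

```python
import math

def dsigma_domega_lab(dsigma_domega_cm, theta_cm_deg, gamma):
    """Transform a differential cross section from the centre of mass
    frame to the laboratory frame (Schiff); gamma = v_cm / v_e."""
    c = math.cos(math.radians(theta_cm_deg))
    jacobian = (1.0 + gamma**2 + 2.0 * gamma * c)**1.5 / abs(1.0 + gamma * c)
    return jacobian * dsigma_domega_cm

# Sanity check: for gamma = 0 (infinitely heavy recoil) the frames coincide
print(dsigma_domega_lab(10.0, 60.0, 0.0))   # 10.0
```

For $\gamma > 0$ the Jacobian exceeds unity at forward centre of mass angles, quantifying the spreading of counts into the laboratory solid angle discussed above.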
\subsection{Strategies to combat limitations in excitation energy resolution}
\label{subsec:resolution}
In trying to do experiments using radioactive beams, there are two properties of the beams that tend to influence the experimental design more than any others. Firstly, the beams are radioactive. That means that care must be taken regarding the eventual dumping of the beam and also, quite often, regarding the angular scattering of the beam in the target \cite{TaLL3}. Secondly, the beams are generally weak, maybe up to a million times weaker than a typical stable beam that one might have used for an equivalent normal kinematics experiment with a stable target. This means that, in practice, there will be a minimum sensible value for the target thickness in order to perform the experiment in a reasonable time. In turn, this will affect the energies and angles measured for the particles produced in the reaction. As discussed above, the particles of interest are often of rather low energy. The energy that is measured may depend quite significantly on where the reaction takes place - at the front or at the back surface of the target, or somewhere in between. Also, for the lowest energy particles, the direction may be affected by multiple small-angle scattering of the charged particle as it leaves the target material.
In an experiment to identify and study the unknown excited states of an exotic nucleus, the kinematical formulae used to produce a plot such as Figure \ref{fig:11} will be inverted so that the measured energy and angle of a particle are used to calculate the excitation energy of the final state. Any process that modifies the measured energy and angle from the actual reaction values will lead to a limitation on the achieved resolution for excitation energy, even if the best possible computed corrections are applied. All of these factors were included in a detailed analysis of the resolution that could be expected from transfer reactions, under different experimental conditions \cite{WinfieldNIM}. The two basic categories of experiment were as follows:
\begin{enumerate}[I]
\item {\it rely on detecting the beam-like ejectile in a magnetic spectrometer}
\item {\it rely on detecting the light, target-like ejectile in a silicon detector}\\ ~\\with a third option arising which is\\
\item {\it detect decay gamma-rays in addition to the charged particles.}
\end{enumerate}
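The inversion of the kinematic formulae mentioned above - from the measured energy and angle of the light particle to the excitation energy of the final state - can be sketched non-relativistically using the standard Q-value equation (function and variable names are illustrative; masses in u, energies in MeV):

```python
import math

def excitation_energy(E_beam, E_e, theta_lab_deg, M_P, M_e, M_R, Q_gs):
    """Invert two-body kinematics: deduce the excitation energy of the
    heavy recoil from the measured energy E_e and laboratory angle of the
    light ejectile, via the non-relativistic Q-value equation."""
    c = math.cos(math.radians(theta_lab_deg))
    Q = (E_e * (1.0 + M_e / M_R)
         - E_beam * (1.0 - M_P / M_R)
         - (2.0 / M_R) * math.sqrt(M_P * M_e * E_beam * E_e) * c)
    return Q_gs - Q
```

In a real analysis the relativistically correct equivalent would be used, and the measured quantities would first be corrected for energy loss in the target, but the structure of the inversion is the same.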
A magnetic spectrometer or a recoil separator is essential in the first case, in order to separate the reaction products from the direct beam and to measure the ejectile properties with sufficient accuracy. Operated at zero degrees, it will need to be instrumented to allow the accurate measurement of the scattering angles for the very forward-focussed beam particles. The degree of forward focussing, and hence the requirements placed upon the resolution of the angular measurements, become more and more demanding as the mass of the projectile increases. For heavier beams, it becomes impractical for existing detectors. Furthermore, any spread in the beam energy translates directly to a spread in the measured excitation energies, and any nucleon transfer reactions on contaminant material in the hydrogen targets (usually plastic) will contribute to the observed yield.
If the second method is employed, then we know from the discussion of the kinematics that the particles of interest are spread over a significant range of angles. In order to detect particles over this range, and with good resolution in both energy and angle, the most obvious choice is an array of semiconductor detectors, and silicon is by far the most versatile material at present. This method is less sensitive than the first to a spread in the beam energy, but is limited as discussed above by the effects of the target thickness on the measured energies and angles. In practice, it is hard to imagine resolutions of better than 100-200 keV or so, for excitation energy, if the experiment demands targets of 0.5 mg/cm$^2$ or more \cite{WinfieldNIM}. (This assumes $(CD_2)_n$ deuterated polythene targets,
\footnote{For convenience, the $( \ldots )_n$ part of $(CD_2)_n$ will subsequently be omitted, and similarly for the non-deuterated $(CH_2)_n$ targets.}
and with a thickness determined by beam intensities that may be as low as $10^4$ pps). In some experiments, thinner targets could be used and hence better resolution achieved, if the beam intensity were to allow it. In any case, to achieve the best resolution, the detector array for the light particles should achieve good measurement of the particle angles. In the case of a silicon strip detector array, this requires a high degree of segmentation, or in some cases a resistive strip readout is possible.
A variant of the second method, which avoids the need for an extensive and highly segmented silicon array, is to use a magnetic solenoid aligned with the beam axis to collect and focus the charged particles onto a more modest array of silicon detectors located around the axis of the solenoid. This is the {\em HELIOS} concept, named after the first device of this type to be implemented \cite{HELIOSone}. The elegant feature of HELIOS is that it removes the kinematic compression of energies. Considering the kinematics of a typical (d,p) reaction as shown for example in Figure \ref{fig:11}, the lines for a difference of 1 MeV in excitation energy are separated in terms of proton energy by less than 1 MeV at a particular laboratory angle. In HELIOS, when the protons are focussed back to the axis of the solenoid, they are dispersed in distance according to a linear dependence on excitation energy. A detector located at a particular distance along the axis measures particles emitted at different angles for different excitation energies. The net result is to disperse excited states in energy in such a way that any limitation due to the intrinsic resolution of the silicon detector becomes significantly less important. However, if the limitation lies in the target thickness and the ensuing deleterious effects on the energy and angle of the particles leaving the target, then there is little benefit to be obtained from simply using a different method of measurement. Ultimately, if the experiment demands a relatively thick target, the resolution will be as estimated in ref. \cite{WinfieldNIM}. The helical detector concept is discussed again in section \ref{subsec:choosing}.
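A rough, non-relativistic sketch of the solenoidal dispersion (an illustration under simplifying assumptions, not the published HELIOS formalism): an ion returns to the axis after one cyclotron period $T_{{\rm cyc}}=2\pi m/(qB)$, during which its laboratory energy relates to the return distance $z$ as $E_{{\rm lab}}=E_{{\rm c.m.}}-\frac{1}{2}m \upsilon_{{\rm cm}}^2+(m \upsilon_{{\rm cm}}/T_{{\rm cyc}})z$, i.e. linearly in $z$ with a mass-independent slope $qB\upsilon_{{\rm cm}}/2\pi$:

```python
import math

QE  = 1.602176634e-19    # elementary charge, C
MEV = 1.602176634e-13    # joules per MeV

def helios_slope(B_tesla, v_cm_m_per_s, charge=1):
    """dE_lab/dz (MeV/m) for ions returning to the solenoid axis after
    one cyclotron period: slope = q*B*v_cm / (2*pi), independent of mass."""
    return charge * QE * B_tesla * v_cm_m_per_s / (2.0 * math.pi) / MEV

# e.g. protons in a 2 T field with v_cm ~ 0.05c: a few MeV per metre
print(helios_slope(2.0, 0.05 * 2.99792458e8))
```

The linear dependence on $z$ is what allows a silicon array of modest intrinsic resolution to separate states cleanly.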
It may be that the limits to resolution imposed by a reasonable target thickness are not acceptable for a good measurement to be performed. This is likely to happen in the case of heavier nuclei, where levels are more closely spaced than in light nuclei, and it can occur in odd-odd nuclei at any mass. In this situation, which can be expected to be common, the third solution - measuring decay gamma-rays - becomes attractive.
The higher energy resolution that can be achieved with gamma-ray detection then gives a much better resolution for the excitation energies of excited states. This of course works only for bound states in the final nucleus. In addition to the improved energy resolution, another feature of possibly comparable importance is that the gamma-ray decay pathway for a particular final state may help to identify the state more precisely. From the particle transfer measurement, it is only possible to infer the $\ell$-transfer, which leaves an uncertainty as to whether the spin is $(\ell + 1/2)$ or $(\ell - 1/2)$, since the transferred nucleon has spin 1/2. The gamma-ray decay branches may resolve this ambiguity. It should be noted that there is an experimental challenge in detecting the gamma-rays with a high enough efficiency and with the ability to apply a sufficiently good Doppler correction. For the typical beam energies discussed above, the final gamma-ray emitting nuclei will have velocities of the order of $0.10c$ in the laboratory reference frame, always aligned almost exactly along the beam direction ($c$ is the speed of light). In order to correct for the substantial Doppler shift that this implies, the gamma-ray detectors will also need to measure the angle of emission for the gamma-ray, relative to the incident beam. Doppler shift corrections are discussed further in section \ref{subsec:tiara}.
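The Doppler correction itself is straightforward once the emission angle is known. A sketch (function name illustrative), assuming the recoil velocity is aligned exactly with the beam axis:

```python
import math

def doppler_correct(E_lab, theta_deg, beta=0.10):
    """Rest-frame gamma-ray energy from the measured laboratory energy and
    the emission angle relative to the recoil (beam) direction."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return E_lab * gamma * (1.0 - beta * math.cos(math.radians(theta_deg)))

# At 90 degrees the correction is purely the time-dilation factor gamma
print(doppler_correct(1000.0, 90.0))   # ~1005 keV for a 1000 keV lab energy
```

At $\beta = 0.10$ the shift at forward angles approaches 10\%, so an uncertainty in $\theta$ of a few degrees already dominates the corrected line width; this is why the gamma-ray detectors must be segmented.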
\section{Examples of light ion transfer experiments with radioactive beams}
\label{sec:4:lightion}
Having described the various approaches to designing an experiment in the previous section, these possibilities are now illustrated by means of specific examples. Mostly, the examples are early experiments that helped to develop the techniques, and/or (for convenience) experiments performed by the author with collaborators.
\subsection{Using a spectrometer to detect the beam-like fragment}
\label{subsec:spectrometer}
An example of an experiment in which the beam-like particle is measured, and used to extract all of the experimental information, is provided by the early experiment performed by the Orsay and Surrey groups at GANIL \cite{Fortier,WinfieldBe11} and illustrated in Figure \ref{fig:12}. The aim was to study the (p,d) reaction with $^{11}$Be in order to determine the parentage of the $^{11}$Be halo ground state. Because the projectile is relatively light, it is a reasonable approach to measure the beam-like particle (method I of section \ref{subsec:resolution}). A magnetic spectrometer was used, for two reasons. Firstly, the beam was produced by secondary fragmentation and therefore had significant spreads in both energy and angle. In order to resolve final states in $^{10}$Be that were separated by less energy than the spread in the beam, a dispersion matched spectrometer was required. This experiment used the {\em spectrom\`etre \`a perte d'energie du GANIL}, SPEG \cite{SPEG}. Secondly, in order to measure the scattering angle of the $^{10}$Be it was necessary to separate the $^{10}$Be from the beam and track its trajectory to a precision that required a spectrometer. In order to recover the scattering angle, it was also necessary to track the incident beam particles, which required detectors placed before the first (dispersive) dipole element of SPEG. The beam intensity was $3 \times 10^4$ particles per second (pps) and the mean beam energy was 35.3 A.MeV. The measured $^{10}$Be particles, at the focal plane, were dominated by the yield from carbon in the polythene $CH_2$ target. Reactions on just the protons in the target were isolated by recording the deuterons from the reaction in an array of ten large area silicon detectors. This experiment was successful in measuring the parentage of the $^{11}$Be ground state, which has a neutron halo, in terms of the $s$-wave and $d$-wave components (the latter with an excited $^{10}$Be core). 
In addition to the innovative experimental techniques, the experiment also highlighted some important complexities in the theory and made innovations in the theoretical interpretation. Specifically, it was necessary to go beyond the normal simplification of modelling the transferred nucleon in a potential well that is due to the core. It was necessary to use a dynamic picture of $^{11}$Be in terms of a particle-vibration coupling model, in order to calculate the overlap functions in the transfer amplitude directly from the nuclear structure model.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.5\textwidth]{fig-be11-experiment}
\caption{In this (p,d) study using a secondary $^{11}$Be beam \protect\cite{Fortier,WinfieldBe11}, the beam had a large energy spread, so a dispersion matched spectrometer was used. This, together with the limited spatial focussing of the beam required beam tracking detectors at the target and in the beam line. Coincident deuteron detection allowed background from the carbon in the $CH_2$ target to be removed in the analysis, but the $^{10}$Be measurement in the spectrometer gave all of the critical energy and angle information. The active beam stop comprised a plastic scintillator that allowed the intensity of the beam to be monitored.}
\label{fig:12}
\end{figure}
\subsection{Using a silicon array to detect the light (target-like) ejectile}
\label{subsec:silicon}
The first high resolution example of this kind of experiment, aimed at measuring spectroscopic quantities using a radioactive beam, was an experiment employing a previously-prepared source of radioactive $^{56}$Ni in order to measure the reaction (d,p) in inverse kinematics \cite{Rehm}. Useful and astrophysically relevant results were obtained. The experiment used silicon strip detectors arranged in the backward hemisphere with a solid target of $CD_2$ deuterated polythene
and a recoil separator device - in this case, the fragment mass analyser (FMA) at Argonne \cite{FMA}. The beam was produced in the normal way for a tandem accelerator using a source of radioactive nickel material, and had a typical intensity on target of $2.5 \times 10^4$ pps at an energy of 4.46 A.MeV. An additional challenge was the isobaric impurity of $^{56}$Co which was a factor of seven more intense than the $^{56}$Ni and was separated using differential stopping foils within the FMA.
A particular silicon array that was developed specifically for experiments with radioactive beams is MUST \cite{MUST}, which uses large area highly segmented silicon strip detectors with CsI detectors in a telescope configuration. MUST led the way in developing electronics that could cope with the many channels required for highly segmented detectors. Excellent particle identification is achieved. MUST has been used to study a variety of reactions including inelastic scattering of a range of nuclei, and with regard to transfer it was very often targeted at experiments to study the structure of very light and even unbound exotic nuclei, for example $^{7,8}$He \cite{Valerie2,Valerie}. Another major silicon array is HiRA, which was developed initially for experiments using radioactive beams produced by fragmentation at MSU \cite{HiRA}. The MUST array was combined with the SPEG spectrometer in a study of neutron-rich argon isotopes with a pure reaccelerated beam of $2 \times 10^4$ pps of $^{46}$Ar at 10.7 A.MeV from SPIRAL at GANIL, incident on a $CD_2$ target of 0.4 mg/cm$^2$~\cite{olivier-argon}. Good resolution in excitation energy was achieved, in part by exploiting the special optics of the SPEG beamline. The detection of argon ions in SPEG was useful in helping to identify and eliminate background from carbon in the target, and also allowed the identification of bound and unbound states in $^{47}$Ar according to whether $^{47}$Ar or $^{46}$Ar was detected in SPEG, although the spectrometer acceptance was limited and prevented a full coincidence experiment. Another interesting experiment that used a silicon array by itself was the study of (d,p) using a beam of $^{132}$Sn at 4.8 A.MeV from the Oak Ridge radioactive beam facility \cite{Kate}. As seen from the calculated cross sections in Figures \ref{fig:7} and \ref{fig:8}, this was not really the ideal energy for such a heavy beam, but it was the maximum possible. 
The resolution achieved for excitation energy was limited, for this heavy beam, not by the silicon array but by the target thickness of 0.16 mg/cm$^2$. As also suggested by the cross section plots in Figure \ref{fig:8}, the silicon detectors were optimised by mounting them in a range of angles around $90^\circ$ in the laboratory.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-tiara-profile}
\caption{The TIARA array was designed specifically to measure nucleon transfer reactions in inverse kinematics with radioactive beams. It has an octagonal barrel of position-sensitive silicon detectors, with annular silicon arrays at forward and backward angles. In total, approximately 90\% of $4\pi$ is exposed to active silicon. The vacuum vessel is designed so that EXOGAM gamma-ray detectors can be placed very close to the target, achieving a gamma-ray peak efficiency of order 15\% at 1 MeV. A robotic target changing mechanism allows different targets to be placed at the centre of the barrel.}
\label{fig:13}
\end{figure}
The TIARA array \cite{Labiche} was the first purpose-built array to combine silicon charged particle detectors with gamma-ray detectors for transfer work and was first employed with a radioactive $^{24}$Ne beam at the SPIRAL facility at the GANIL laboratory \cite{Ne24}. Initial tests and benchmarking were performed with a stable beam and a reaction that was previously studied in normal kinematics \cite{Labiche}. TIARA was designed, taking into account the experience gained from using a high intensity radioactive beam of nearly $10^9$ pps of $^{19}$Ne in the TaLL experiment at Louvain-la-Neuve \cite{TaLL1,TaLL2,TaLL3}. This led to a design in which radioactive beam particles that are scattered at significant angles by the reaction target will be carried away from the immediate vicinity of the target, and hence away from the field of view of the gamma-ray array \cite{TaLL3}.
TIARA is shown schematically in Figure \ref{fig:13}. It is designed to be operated with four HPGe clover gamma-ray detectors from the EXOGAM array \cite{EXOGAM,TaLL3} mounted at $90^\circ$ and at a distance of only 50-55~mm from the centre of the target. The space available in the forward hemisphere was also severely restricted due to the design requirement of coupling to the VAMOS spectrometer \cite{VAMOS}. The spectrometer allows reaction products to be measured with high precision and to be identified according to $Z$ and $A$. The exceptionally large angular acceptance of VAMOS (up to $10^\circ$) also allows the efficient detection of recoils from the decay of unbound states via neutron emission. Examples of the gamma-ray and spectrometer performance are given in sections \ref{subsec:tiara}, \ref{subsec:zero} and \ref{subsec:unbound}.
Figure \ref{fig:14} shows in detail the geometry of the central barrel in TIARA relative to the segmented HPGe clover detectors of EXOGAM. The front faces of the clovers are mounted 54~mm from the centre of the target in this configuration with two layers of silicon in the barrel. The inner layer of silicon is position sensitive parallel to the beam direction, which is the most important direction in defining the scattering angle of any detected particle. Each of the 8 inner detectors has four resistive strips and is $400~\mu$m thick. The second layer of silicon is 1 mm thick but non-resistive. The 4 strips per detector align behind the strips on the inner barrel. The primary purpose of the second layer of the barrel is to indicate when particles punch through the inner layer. The target is placed at the geometric centre of the barrel. The targets are typically 0.5 mg/cm$^2$ self-supporting foils of $CD_2$ mounted on thin holders with holes of diameter 40~mm, where the large hole diameter is chosen so as to minimize the shadowing of the barrel by the target frame.
\begin{figure}[h]
\centering
\includegraphics[width=.85\textwidth]{fig-tiara-geant}
\caption{The TIARA setup as modelled in {\em g\'eant4} \protect\cite{geant}:~ (a) overview, including MUST2 \protect\cite{MUST2} and the EXOGAM clover HPGe gamma-ray detectors \protect\cite{EXOGAM}. The four leaves of each of the 4 clovers are shown; (b) the central silicon array comprises two concentric octagonal barrels and the clover front faces are 54 mm from the beam axis. The view is looking along the beam direction from just in front of the annular array. Beyond the barrel, the detectors of MUST2 are glimpsed. The circular target is mounted at the centre of the barrel.}
\label{fig:14}
\end{figure}
Subsequent developments of the TIARA approach are represented by T-REX \cite{T-Rex} and SHARC \cite{SHARC}, which are shown in Figure \ref{fig:15}. Another key development, with a barrel design similar to TIARA, is ORRUBA \cite{Pain} (and its non-resistive strip version super-ORRUBA) which was developed at Oak Ridge. The most obvious feature of these arrays, relative to TIARA, is that they are designed to fit inside a more conventional gamma-ray array. To some extent, this is equivalent to accepting a limitation on the beam intensity that can be used - certainly at an intensity of $10^9$ pps as envisaged in the TIARA design, an enormous amount of radioactivity would be deposited inside the gamma-ray array by the elastic scattering of beam particles from a typical $CD_2$ target. However, at beam intensities of up to about $10^8$ pps, the radioactivity deposited inside the array will be tolerable and there will be a real benefit in having the silicon array inside a more extensive array of gamma-ray detectors. The advantages lie in the energy resolution achievable with improved Doppler correction, and in simply having a wider range of gamma-ray angles included in the measurements. A wide range of gamma-ray angles may open up additional physics possibilities in the interpretation of the data. The planned deployment (GODDESS) of ORRUBA inside Gammasphere \cite{Gammasphere} with around 100 gamma-ray detectors is perhaps the pinnacle of this approach. The two arrays T-REX and SHARC, coincidentally, have extremely similar geometries. The choice of rectangular boxes allows the silicon detector designs to be relatively simple and hence economical, and the ends of the array are completed with compact annular detectors of a pre-existing design. T-REX (as in the case of ORRUBA, and the original TIARA) includes resistive strips, which helps to keep the number of electronics channels manageable using conventional electronics. 
However, the price that is paid for using resistive strips is quite high, in terms of performance. Firstly, such detectors typically have higher energy thresholds than non-resistive strips, because they have an electronic noise contribution related to the resistance of the strip \cite{resistive1,resistive2}. Secondly, the position resolution that is achieved is dependent on the energy deposited, being proportional to $1/E$ \cite{owen-awcock}. SHARC is the first dedicated compact transfer array to utilise double-sided (non-resistive) silicon strip detectors completely, resulting in superior energy thresholds and a consistency in position resolution. This choice of detector was made possible by the availability of up to 1000 channels of high resolution electronics using the TIGRESS digital data acquisition system \cite{TIGRESS}.
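The origin of the $1/E$ behaviour can be seen from the charge-division readout of a resistive strip: the position is reconstructed from the ratio of the two end signals, so a fixed electronic noise on each end produces a position error that grows as the total deposited energy falls. A sketch with illustrative names:

```python
def position_from_charge_division(q_near, q_far, strip_length_mm):
    """Hit position along a resistive strip, measured from the 'near' end,
    reconstructed from the charges collected at the two ends."""
    return strip_length_mm * q_far / (q_near + q_far)

# A fixed noise dq on each end gives dx ~ L*dq/E: halving the deposited
# energy doubles the position uncertainty for the same electronic noise.
print(position_from_charge_division(1.0, 1.0, 100.0))   # 50.0 (mid-strip)
```

A double-sided non-resistive strip detector avoids this entirely, since the position is given digitally by which strip fires.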
\begin{figure}[h]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-trex-sharc}
\caption{Two post-TIARA silicon arrays developed for use completely inside a large gamma-ray array: (a) T-REX \protect\cite{T-Rex}, which is operated inside the MINIBALL array of HPGe cluster detectors at ISOLDE \protect\cite{MINIBALL}, and (b) SHARC \protect\cite{SHARC} which is operated inside the TIGRESS array of segmented HPGe clover detectors \protect\cite{TIGRESS}. Both include silicon boxes situated forward and backward from the target.}
\label{fig:15}
\end{figure}
\subsection{Choosing the right experimental approach to match the experimental requirements}
\label{subsec:choosing}
As will be apparent from the examples already discussed, a variety of experimental approaches are chosen by different experimenters for transfer experiments. Largely, these are driven by specific experimental requirements, of which two of the most important are beam intensity limitations and the required resolution in excitation energy. One of the most versatile and complete approaches is the combination of a compact, highly segmented silicon array with efficient gamma-ray detection (as adopted, for example, by TIARA), and hence the results from that approach are presented in some detail in this document. In this section, we briefly review alternative choices made by experimenters.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-obertelli}
\caption{This is a variant on the technique of extracting spectroscopic information from the beam-like particle, rather than the light target-like particle. The aim was to use a thicker target to compensate for a low beam intensity, and the background from target contaminants such as carbon was minimized by using a solid deuterium target. Gamma-ray detection allowed precise excitation energy measurements. See text for definition of other terms. }
\label{fig:16}
\end{figure}
In the case of an experiment at SPIRAL at GANIL, aimed at studying $^{27}$Ne via the (d,p) reaction \cite{Ne27-obertelli}, the experimental limitation at the time was the available beam intensity. The solution adopted (see Figure \ref{fig:16}) was to employ a much thicker target, but this implied that the protons would have too low an energy to exit and be detected. Therefore the experiment was focussed on using the heavier beam-like particle, as in the $^{11}$Be experiment discussed in section \ref{subsec:spectrometer}. The final nucleus had a reasonably complex structure, and hence gamma-ray detection was considered vital and would possibly offer additional information on spin, since the proton differential cross sections could not be observed. The EXOGAM array of segmented Ge gamma-ray detectors was employed \cite{EXOGAM}. The required target thickness, in order to achieve sufficient gamma-ray detection, was then achieved by using a solid cryogenic pure $D_2$ target of 17 mg/cm$^2$. In terms of an equivalent $CD_2$ thickness of deuterons, the energy loss in the cryogenic target is reduced by a factor of three, so this is equivalent in energy terms to 6 mg/cm$^2$ of $CD_2$ but has three times the number of target nuclei. In addition, the absence of carbon in the target removes the problem of background reactions that was mentioned in section \ref{subsec:spectrometer}. A microchannel plate detector (MCP) before the target assisted in particle identification using the VAMOS spectrometer \cite{VAMOS}. Inside VAMOS, the particles were focussed by two quadrupole elements ($Q_1, Q_2$) through a dipole magnet and then detectors in the focal plane region recorded the particles' positions, angles and energies. An example of the particle identification that can be achieved in VAMOS is included in section \ref{subsec:zero}.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.6\textwidth]{fig-maya}
\caption{The MAYA detector \protect\cite{MAYA} is an {\it active target} in the sense that the gas that fills MAYA acts both as the target for the nuclear reactions and also as the fill gas of a time projection chamber. Ionisation paths in the gas are drifted to readout planes, and using the drift time it is possible to reconstruct every individual nuclear reaction in three dimensions (and with particle identification). The diagram shows a reaction on the $^{12}$C in the C$_4$H$_{10}$ gas, but reactions on the hydrogen, or other fill gases, can also be studied.}
\label{fig:17}
\end{figure}
Most experimental methods discussed here are limited in resolution by the energy loss effects in thick targets. However, this problem is largely removed if it is possible to determine the precise point of interaction within the target. By turning the target into an active detector, designs such as MAYA \cite{MAYA} (shown in Figure \ref{fig:17}) achieve this objective and hence can be used with the lowest beam intensities. In fact, for higher beam intensities it is usually necessary in this type of detector to place an electrostatic screen around the path of the beam itself. The classic model for this type of detector is IKAR, which was produced for high energy beams and operates with multiple atmospheres of H$_2$ gas \cite{IKAR}. In MAYA, the reaction can occur at any point through the gas. The ionisation by all the particles in the gas is drifted in an electric field to a readout plane where the position and amount of ionisation are recorded, along with the time of arrival (i.e. the drift time). This allows a full reconstruction in three dimensions of all charged particle trajectories, subject to various limitations in spatial and energy resolution. The measurement of the ionisation along the whole path of the particles in the gas allows the particle types to be identified. To ensure proper drifting of the charge and proper readout, the choice of gas pressure is subject to some restrictions, and hence some particles may easily penetrate beyond the confines of the gas volume. The MAYA detector includes a forward wall of CsI detectors, to deal with these penetrating particles.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.6\textwidth]{fig-helios}
\caption{The HELIOS device \protect\cite{HELIOSone,HELIOStwo} collects particles magnetically at all angles and focusses them to compact detectors along the axis. The angular information is reconstructed from the measured energy and the distance from the target to its point of return to the axis, and is generally more accurate than can be obtained by direct angle measurements. The way in which the spectrometer operates has the effect of reducing the limitations arising from the detector energy resolutions themselves.}
\label{fig:18}
\end{figure}
A novel approach to achieving $4\pi$ detection efficiency is the HELIOS concept that has been developed by the Argonne group and collaborators \cite{HELIOSone}. Particles emerging at almost all angles from the target are focussed in a large-volume solenoidal field and are brought back to a position-sensitive silicon array aligned along the solenoid axis. This is shown schematically in Figure \ref{fig:18}, which is adapted from ref. \cite{HELIOSone}. The targets are typically $CD_2$ foils, but a gas cell target has also been constructed to allow the study of $^{3,4}$He-induced reactions. The ideal design parameters for the solenoid are remarkably similar to those for medical MRI scanners and indeed the original HELIOS is a decommissioned MRI device \cite{HELIOStwo}. The energy limitations arise not only from the field strength and radius, but also the length along the axis. It is shown in ref. \cite{HELIOSone} that the limitations are much more significant for a typical 0.5 m long device (or a 1 m long device with the target at the centre) than they are for a 1.5 m long device (Figure 8 of ref. \cite{HELIOSone}). The detection limits as a function of angle, for a device between the quoted lengths, are well matched to the kinematics of (d,p) in inverse kinematics. As shown in ref. \cite{HELIOSone}, the Q-value (excitation energy) is calculated directly from the measured energy and distance along the axis for each detected particle (eq. (5) of ref. \cite{HELIOSone}). So also is the centre of mass angle (eq. (7) of ref. \cite{HELIOSone}). At an intermediate point in the calculation, the measured time of flight is used to determine the mass-to-charge ratio $A/q$ for the particle (eq. (2) of ref. \cite{HELIOSone}) but once the particle identification is made, the exact value is substituted in further calculations. Thus, apart from the measured energy and position, the calculations rely only upon the precise value, stability and homogeneity of the magnetic field. 
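The way in which the measured energy and axial position combine can be illustrated with a short numerical sketch (non-relativistic, in the spirit of eqs. (2)--(7) of ref. \cite{HELIOSone}; the field strength and velocities below are illustrative values, not those of an actual HELIOS experiment):

```python
import math

M_P = 1.67262192e-27   # proton mass [kg]
Q_E = 1.60217663e-19   # elementary charge [C]
MEV = 1.60217663e-13   # joules per MeV

def cyclotron_period(B, m=M_P, q=Q_E):
    """Orbit period T = 2*pi*m/(q*B) in the solenoid; it is independent
    of the particle energy, which is what makes the (energy, z) to
    centre-of-mass energy relation linear."""
    return 2.0 * math.pi * m / (q * B)

def ecm_from_lab(E_lab, z, B, v_cm, m=M_P):
    """Recover the centre-of-mass energy E_cm [MeV] from the measured lab
    energy E_lab [MeV] and return-to-axis distance z [m], by inverting
    E_lab = E_cm - m*v_cm**2/2 + (m*v_cm/T)*z  (v_cm = c.m. velocity)."""
    T = cyclotron_period(B, m)
    return E_lab + (0.5 * m * v_cm**2 - m * v_cm * z / T) / MEV
```

Because the slope $m\,v_{\rm cm}/T$ is the same for every state, lines of constant excitation energy are straight and parallel in the $(E_{\rm lab},z)$ plane, which is why the detector energy resolution enters in such a favourable way.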
The particle identification (apart from one $A/q$ ambiguity between deuterons and $^4$He$^{++}$) is a significant bonus, although it does have some implications for the time structure of pulsed beams. As mentioned in section \ref{subsec:resolution}, any impact on the excitation energy resolution arising from the detector energy resolution is significantly reduced in the HELIOS method, because particles are compared at the same $z$ (distance along the axis) rather than at the same $\theta _{{\rm lab}}$ (angle of emission in the laboratory frame). This turns out to have the effect of removing the {\em kinematic compression} observed in Figure \ref{fig:11}, wherein (particularly at backward angles in the laboratory) the kinematic lines are closer together in proton energy than in excitation energy.
An example of the use of HELIOS with an online produced radioactive beam is the study of $^{16}$C via the (d,p) reaction in inverse kinematics with a thin $CD_2$ target of 0.11 mg/cm$^2$ and a beam of $10^6$ pps of $^{15}$C \cite{c16-helios}. Interestingly, the $^{15}$C secondary beam was itself produced using the (d,p) reaction in inverse kinematics with a $^{14}$C primary beam. Good resolution was achieved, but one key doublet of states at 3.986/4.088 MeV in $^{16}$C could not be resolved. Each of these states gamma-decays to the 1.766 MeV level, and the 100 keV difference in the energies of these 2.2 MeV gamma-rays would be easily resolvable with a modern Ge gamma-ray array. It is a considerable challenge to combine the HELIOS technique with state-of-the-art gamma-ray detection. One very appealing future direction of development would be to combine the MAYA and HELIOS concepts, so that particles could be completely tracked in three dimensions but with the focussing and collection advantages of the magnetic field.
\subsection{Using (d,p) with gamma-rays, to study bound states}
\label{subsec:tiara}
Typical data for the energies of the measured particles, as a function of their deduced laboratory angle, are shown in Figure \ref{fig:19} for an experiment using a silicon array with a large angular coverage \cite{Wilson}. This experiment was performed with a beam of $3 \times 10^7$ pps of $^{25}$Na at 5 A.MeV, using the SHARC array \cite{SHARC} at TRIUMF. Provided that calibrations have been performed in advance, this type of spectrum can be created online, during data acquisition. Once the kinematic lines are seen, the first hurdle is crossed, and the experiment is seen to be working correctly. Then, the discussion can turn to the specifics of the physics to be measured and the statistics that are required. The most intense lines will typically be those due to elastic scattering. In the Figure, the data show lines that are recognisable as coming from the elastic scattering of both deuterons and protons in the 0.5 mg/cm$^2$ $CD_2$ target. It is typical that any deuterated target will have some fraction of non-deuterated molecules. The intensity falls away, generally, as the energy increases and the centre of mass scattering angle also increases. The angular distribution may show oscillations, but the fall in intensity is the general tendency. In this particular experiment, there is a gap in the data near $90^\circ$ due to a physical gap in the array, related to the target mounting and changing mechanism. A further gap exists in the backward angle region due to the silicon detector support structure. In the region backward of $90^\circ$, the kinematic lines arising from (d,p) reactions are evident. In this angular range, there are no other deuteron-induced reactions (apart from (d,p)) that can contribute to the charged particle yield. From that perspective, no particle identification is needed for the backward angles. 
In fact, because of the low energies, no $\Delta E-E$ identification technique would be appropriate, but time-of-flight or silicon pulse-shape techniques would be feasible. The reason that particle identification could indeed be useful is that not all reactions will be induced on the deuterons in the target. Assuming a $CD_2$ target as in the experiment shown here, the reactions induced on carbon nuclei can produce charged particles at any angle. Typically, the compound nuclear reactions that arise from the carbon will produce both protons and alpha-particles (and possibly other species) by evaporation from the excited compound nucleus. Standard codes exist, to estimate the evaporation channels that will be important for a particular beam and energy combination (e.g. LISE++ \cite{LISE}, which includes the fusion-evaporation code PACE4 \cite{PACE}). These evaporated particles will not have a specific angle-energy relationship because several particles will be evaporated. Also, alpha-particles can deposit much more energy than protons in a given thickness of silicon because of their shorter range. Thus, the kinematic lines from (d,p) and elastic scattering will in general appear on a smooth background arising from evaporated charged particles from compound nuclear reactions. This is evident to some extent in Figure \ref{fig:19}, even though some experimental techniques have been applied so as to reduce the compound nuclear contribution (see below).
\begin{figure}[h]
\sidecaption
\includegraphics[width=0.7\textwidth]{fig-EThTrifoil}
\caption{Raw data for a typical experiment \protect\cite{Wilson} using a silicon array to detect the light particles, to be compared with Figure \protect\ref{fig:11}. Kinematic lines are overlaid over the deuterons (higher energies) and protons from elastic scattering. At larger angles, the loci are clearly seen for protons from (d,p) reactions. The apparent angular dependence of the lower energy threshold is due to corrections that are applied to compensate for energy losses in the target. }
\label{fig:19}
\end{figure}
Figure \ref{fig:20} summarises a range of experimental results from a (d,p$\gamma$) study using a radioactive beam of $2 \times 10^4$ pps of $^{24}$Ne at 10 A.MeV \cite{Ne24}. The energy and angle information as shown in the previous Figure can be combined to calculate the excitation energy in the final nucleus, assuming that the reaction was (d,p) initiated by the beam. Angular regions where other reactions dominate can be removed in the analysis. Figure \ref{fig:20}(a) shows an excitation energy spectrum for $^{25}$Ne calculated from the kinematic formulae, for one particular angle bin. The fit to the various excited state peaks in this spectrum was informed and constrained by the observed gamma-ray energies. The gamma-ray energy spectra observed with specific limitations on the excitation energy are shown in part (b) of the Figure, where parts (iii) and (iv) correspond to the indicated excitation energy limits in $^{25}$Ne. For the events included in Figure \ref{fig:20}b(iii), the results of gating on particular gamma-ray peaks are shown in parts (i) and (ii). The $p-\gamma -\gamma$ triple coincidence statistics in these two spectra are sufficient (just) to deduce that the two observed gamma-ray transitions are in coincidence. (In fact, the experiment in ref. \cite{Ne24} also measured the heavy ($^{25}$Ne) particle after the reaction, so the data in Figure \ref{fig:20}b(i-ii) actually represent quadruple coincidence data). Taking into account the excitation energies at which the nucleus is fed by the (d,p) reaction, and the observed gamma-ray cascade, the level scheme in Figure \ref{fig:20}(c) was inferred. The angular distributions shown in Figure \ref{fig:20}(d) were used to deduce the transferred angular momentum carried by the neutron, according to the best-fit shape. The calculations that are shown were performed using the ADWA method. Different angular momenta were deduced for the various states. 
For example, the ground state has a clear $\ell =0$ distribution. The scaling of the theory to the experimental data gave the measured spectroscopic factors. In the case of the 4.03 MeV state, it was only possible to set a lower limit on the cross section at certain angles. This was related to the energy thresholds of the silicon detectors used for the proton detection. As shown in the kinematics diagrams in Figure \ref{fig:11}, and illustrated in the data of Figure \ref{fig:19}, the observed particles from (d,p) are lower in energy for states with higher excitation energy and hence the higher states are subject to this type of threshold effect. Raising the beam energy will give access to higher excitation energies. The observed lower limits on the cross section for the 4.03 MeV state were nevertheless sufficient to rule out the alternative angular momentum assignments and $\ell =3$ could be assigned. Finally, an inset in Figure \ref{fig:20}(d) shows the differential cross section for deuteron elastic scattering, measured as a function of the centre-of-mass scattering angle. This was derived from the rapidly rising locus of data points observed in the data, similar to that for the elastics shown in Figure \ref{fig:19}. This will be discussed further, in section \ref{subsec:elastic}.
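The reconstruction of excitation energy from the measured proton energy and angle uses the standard two-body Q-value relation. A minimal non-relativistic sketch is given below, with masses in amu for the $^{24}$Ne(d,p)$^{25}$Ne case discussed above; this is an illustration, not the actual analysis code, and relativistic corrections are neglected:

```python
import math

def qvalue_dp(T_beam, m_beam, T_p, theta_lab_deg, m_p=1.0):
    """Non-relativistic Q-value for an inverse-kinematics (d,p) reaction:
    a beam of mass m_beam [amu] and energy T_beam [MeV] strikes a deuteron
    at rest; the proton is detected with energy T_p [MeV] at theta_lab."""
    m_res = m_beam + 1.0                      # residue = beam + neutron [amu]
    cos_t = math.cos(math.radians(theta_lab_deg))
    return (T_p * (1.0 + m_p / m_res)
            - T_beam * (1.0 - m_beam / m_res)
            - 2.0 * math.sqrt(m_beam * m_p * T_beam * T_p) * cos_t / m_res)

def excitation_energy(Q_gs, T_beam, m_beam, T_p, theta_lab_deg):
    """E_x = Q(ground state) - Q(measured)."""
    return Q_gs - qvalue_dp(T_beam, m_beam, T_p, theta_lab_deg)
```

For protons detected backward of $90^\circ$, $\cos\theta_{\rm lab}$ is negative, so higher excitation energies correspond to lower proton energies; this is the origin of the threshold effect noted above.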
\begin{figure}[h]
\sidecaption
\includegraphics[width=1.0\textwidth]{fig-tiara-ne24}
\caption{Results from a (d,p) study of $^{25}$Ne using a beam of $^{24}$Ne at 10 A.MeV \protect\cite{Ne24}: (a) example excitation energy spectrum reconstructed from the measured proton energies and angles and showing gating regions used to extract coincident gamma-ray spectra, (b) gamma-ray energy spectra (iii, iv) from $p-\gamma$ coincidences for highlighted regions of excitation energy in (a), spectra (i,ii) from $p-\gamma-\gamma$ data with the events and $\gamma$-ray gates indicated in (iii), (c) summary of the level and decay scheme deduced from this experiment, (d) differential cross sections for the indicated $\ell$ transfers to states in $^{25}$Ne. Elastic scattering data are inset (see text). }
\label{fig:20}
\end{figure}
The gamma-ray energy spectra of Figure \ref{fig:20} include a correction, applied event-by-event, for a very significant Doppler shift. At the recommended beam energies of 5-10 A.MeV, the projectiles have a velocity of approximately $0.10c$. Indeed, the velocity is high enough that the second-order contribution to the Doppler shift at $90^\circ$ can easily be measured. Hence, the full relativistically correct formula should be used, to apply Doppler corrections to the measured gamma-ray energies so that they accurately reflect the emission energies in the rest frame of the nucleus. The Doppler-corrected energy $E_{{\rm corr}}$ is given by
$$ E_{{\rm corr}} = \gamma ~(1 - \beta \cos \theta_{{\rm lab}} )~ E_{{\rm lab}} $$
where $\gamma = 1/\sqrt{1-\beta^2}$ and $\beta = \upsilon /c$ where $\upsilon$ is the velocity of the emitting nucleus. The angle $\theta_{{\rm lab}}$ is measured for the gamma-ray detector relative to the direction of motion of the nucleus. In practice, and taking into account the accuracy with which the gamma-ray angle can be determined, it is usually sufficient to assume that the emitting nucleus is travelling along the beam direction in these inverse kinematics experiments (although it is also easy to calculate its angle from the measured light-particle angle). It will be relevant later to note that another relativistic effect related to gamma-rays is significant at these beam energies. The angle of emission relative to the beam direction, as measured in the frame of the emitting nucleus, is different from the angle measured in the laboratory frame of reference. This consequence of relativistic aberration means that the gamma-rays emitted by a moving nucleus are concentrated conically towards its direction of motion, which is known as relativistic beaming or as the relativistic headlight effect. For isotropic centre of mass emission at $\beta = 0.1$, the fraction of gamma-rays emitted forward of $90^\circ$ in the laboratory will be about 55\%. The yield of gamma-rays observed at $10^\circ$ in the laboratory will be larger than the yield at $170^\circ$ by a factor of $1.22/0.82 = 1.49$. The relativistic aberration formula is given by
$$ \cos \theta_{{\rm lab}} = \frac{\cos \theta_{{\rm c.m.}} + \beta }{1 + \beta \cos \theta_{{\rm c.m.}}} $$
where $\theta_{{\rm c.m.}}$ is measured in the rest frame of the nucleus and other terms are as defined above.
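Both effects are easy to verify numerically; the short sketch below reproduces the figures quoted above for $\beta = 0.1$:

```python
import math

def doppler_correct(E_lab, beta, theta_lab_deg):
    """Rest-frame gamma-ray energy from the measured lab energy and angle:
    E_corr = gamma * (1 - beta*cos(theta_lab)) * E_lab."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (1.0 - beta * math.cos(math.radians(theta_lab_deg))) * E_lab

def forward_fraction(beta):
    """Fraction of isotropically emitted gamma-rays appearing forward of
    90 degrees in the lab: theta_lab = 90 deg maps to cos(theta_cm) = -beta,
    so the fraction is (1 + beta)/2."""
    return 0.5 * (1.0 + beta)

def yield_boost(beta, theta_lab_deg):
    """Relative lab yield per unit solid angle for isotropic c.m. emission:
    dOmega_cm/dOmega_lab = 1 / (gamma^2 * (1 - beta*cos(theta_lab))^2)."""
    gamma2 = 1.0 / (1.0 - beta**2)
    c = math.cos(math.radians(theta_lab_deg))
    return 1.0 / (gamma2 * (1.0 - beta * c)**2)
```

With $\beta = 0.1$ this gives a forward fraction of 55\% and relative yields of 1.22 and 0.82 at $10^\circ$ and $170^\circ$ respectively, as quoted above.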
\begin{figure}[h]
\sidecaption
\includegraphics[width=0.7\textwidth]{fig-doppler-correct}
\caption{Results of the Doppler shift correction procedure applied to $^{26}$Na gamma-rays produced in the reaction of 5 A.MeV $^{25}$Na with deuterons \protect\cite{Wilson}. The upper spectrum (outlined and partly shaded in light grey) is uncorrected, with the shaded parts indicating the spread of counts contributing to four of the strongest peaks in the lower spectrum. The lower spectrum (darker shading) is corrected for the Doppler shift.
In addition to the Doppler correction, an add-back procedure has been applied to account for Compton scattering (see text). This lowers the continuum background. All of these data are ``trifoil gated'' to remove or minimize events of a compound nuclear origin, as explained in section \protect\ref{subsec:zero}. }
\label{fig:21}
\end{figure}
The relativistic Doppler shift correction was already performed for the gamma-rays in Figure \ref{fig:20} and is shown in more detail for a different experiment, in Figure \ref{fig:21}. In the case of Figure \ref{fig:20}, the gamma-ray angle could be determined only according to which leaf (crystal) of the clover detector recorded the initial interaction. The resolution at 1 MeV was 65 keV FWHM (full width at half maximum) after correction \cite{Ne24}, limited by the high value of $\beta = 0.1$, the close proximity of the detectors to the target (50 mm) and the lack of any further gamma-ray angle information. This is reduced by a third to just under 45 keV (FWHM) at 1 MeV in the TIARA configuration if the clover segmentation information is used \cite{Labiche}. In the experiments \cite{Wilson} with SHARC, using TIGRESS, the distance to the front face of the gamma-ray detectors was nearly three times larger than in TIARA, at 145 mm. The gamma-ray clover detectors were centred at either $90^\circ$ or $135^\circ$ and each leaf of the clover was four-fold segmented electronically. An add-back procedure was applied, to account for Compton scattering between different leaves of the same clover. This involved adding the energies together and then adopting the segment with the highest energy as indicating the angle of the initial interaction (a criterion that is justified by simulations \cite{Labiche}). For a (d,p$\gamma$) gamma-ray at 1806 keV, the observed resolution after Doppler correction was 23 keV (FWHM) or 18 keV (FWHM) for detectors at $90^\circ$ and $135^\circ$ respectively (reflecting the Doppler broadening, as opposed to shift, that contributes at $90^\circ$). Scaling this to the previously quoted energy of 1 MeV gives a resolution of 10-12 keV (FWHM). This resolution is a factor of 10-50 better than the resolution in excitation energy obtained from using the measured energy and angle of the proton. 
Thus, the resolution in excitation energies for states populated in (d,p) reactions can be improved by a similar factor.
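The angular dependence of these resolutions can be estimated by differentiating the Doppler formula: the broadening contribution scales as $\gamma\beta\sin\theta\,E_\gamma\,\Delta\theta$ and is therefore largest at $90^\circ$. The sketch below illustrates this; the effective angular resolution per segment and the intrinsic detector resolution used here are assumed, illustrative values, not measured ones:

```python
import math

def doppler_broadening(E_gamma, beta, theta_lab_deg, dtheta):
    """FWHM contribution [keV] from a finite gamma-ray angular resolution
    dtheta [rad], via dE/dtheta = gamma * beta * sin(theta) * E_gamma."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    sin_t = abs(math.sin(math.radians(theta_lab_deg)))
    return gamma * beta * sin_t * E_gamma * dtheta

def total_fwhm(E_gamma, beta, theta_lab_deg, dtheta, intrinsic):
    """Doppler broadening folded in quadrature with an assumed intrinsic
    detector resolution [keV]."""
    broadening = doppler_broadening(E_gamma, beta, theta_lab_deg, dtheta)
    return math.hypot(broadening, intrinsic)
```

Assuming, for illustration, $\Delta\theta \approx 0.12$ rad and a 10 keV intrinsic resolution, the estimate for a 1806 keV gamma-ray at $\beta = 0.1$ comes out close to the quoted figures and correctly predicts the better resolution at $135^\circ$ than at $90^\circ$.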
\subsection{The use of a zero-degree detector in (d,p) and related experiments}
\label{subsec:zero}
The ability to detect the beam-like particle, as well as the light particle, from transfer reactions in inverse kinematics is a big advantage for several reasons. It was therefore a fundamental design constraint, for TIARA \cite{CAARI,Labiche}, that it should be coupled to the magnetic spectrometer VAMOS. The advantages are partly evident from inspection of Figure \ref{fig:22}. The different particle types observed at angles around zero degrees, following the bombardment of a $CD_2$ target with a $^{26}$Ne beam, are clearly identified. The beam in this case was 2500 pps at 10 A.MeV and the target thickness was 1.20 mg/cm$^2$. Two further features make this zero degree detection even more useful. Firstly, the silicon array will record the coincident particles only for the reactions induced on the hydrogen in the target; the recoil carbon nuclei for this constrained kinematics will essentially all stop in the target. Secondly, the spectrometer gives not only the particle identification but also the angle of emission for the heavy particle, which can be exploited, for example as in section \ref{subsec:unbound}. In the example shown here, the reaction products could be simultaneously collected and identified for (d,p) to bound states of $^{27}$Ne, (d,p) to unbound $^{27}$Ne that decays back to $^{26}$Ne, and (d,t) to bound states of $^{25}$Ne.
\begin{figure}[h]
\sidecaption
\includegraphics[width=0.7\textwidth]{fig-vamos-pid}
\caption{Data from a study of $^{26}$Ne at 10 A.MeV bombarding a $CD_2$ target \protect\cite{ne27,Simon}. Particles were detected in the wide-acceptance spectrometer VAMOS centred at zero degrees and were identified using the parameters measured at the focal plane. This determined the reaction channel and effectively eliminated any contribution from carbon in the target. }
\label{fig:22}
\end{figure}
In experiments currently performed at TRIUMF, there is no access to a spectrometer such as VAMOS, and hence a less elaborate solution was implemented; it is described here. Note that, in the longer term, the purpose-built fragment mass separator EMMA \cite{EMMA} will become available at TRIUMF. In the meantime, a detector developed at LPC Caen and called the {\it trifoil} was adapted \cite{GLWrutherford} from its original purpose, which was to provide a timing signal for secondary beams produced via projectile fragmentation at intermediate energies. The experimental layout for the first experiment \protect\cite{Wilson} using the trifoil in this fashion is shown in Figure \ref{fig:23}. In this implementation, the plastic scintillator in the trifoil will record signals arising from unreacted beam particles or transfer and similar reactions in the target, i.e. where the beam-like particle is not slowed down. If the reaction in the $CD_2$ target was induced by the carbon, then it could be either a transfer reaction (if peripheral) or a compound nuclear reaction. In the former case, no particle would be observable in the silicon array SHARC. In the second case, the evaporated charged particles could be observed, but also the product at zero degrees would be slower moving and would have a higher $Z$ than for a transfer reaction induced by the hydrogen in the target. The compound nuclear products are then stopped by a passive layer of aluminium, whilst still leaving the direct reaction products with sufficient energy to be recorded in the trifoil and then pass through to a remote beam dump. The present trifoil detector is big enough to span the cone of recoil beam-like particles corresponding to protons from (d,p) collected over a wide range of angles. 
Compound nuclear events are completely prevented from producing a valid trifoil signal, by means of the passive stopper, but depending on the beam rate there may be random coincidences with other beam particles arriving in the same bunch of the pulsed beam. (Ideally, the detector would be insensitive to unreacted beam particles, and this was achieved to some extent.)
\begin{figure}[h]
\sidecaption
\includegraphics[width=0.7\textwidth]{fig-sharc-tigress}
\caption{Schematic of the experimental setup for experiments combining the SHARC Si array with the TIGRESS gamma-ray array \cite{Wilson}. A plastic scintillator detector was introduced at zero degrees, 400 mm beyond the target, to help in identifying and eliminating events arising from reactions on the carbon component of the $CD_2$ target. The performance of this {\em trifoil} detector \protect\cite{GLWrutherford} is discussed in the text. }
\label{fig:23}
\end{figure}
The effect of the zero degree trifoil detector in reducing the background in the gamma-ray energy spectra is illustrated in Figure \ref{fig:24}. This spectrum was acquired for a beam of $^{25}$Na at 5 A.MeV incident on a $CD_2$ target with an average intensity of $3 \times 10^7$ pps. The spectrum includes data from the full TIGRESS array, comprising 8 detectors with 4 placed at $90^\circ$ and 4 at $135^\circ$ in this experiment \cite{Wilson}. The spectrum is Doppler corrected as described above, and hence the gamma-rays produced by a source at rest (such as the 511 keV annihilation gamma-ray and those originating from the radioactive decay of scattered and then stopped $^{26}$Na projectiles) have been transformed into multiple peaks depending on their angle of detection relative to the target. Escape suppression has also been applied, using the signals from the scintillator shields for each clover detector. The first thing to note is that the smooth background, arising from unsuppressed Compton scattering events due to higher energy gamma-rays, is massively reduced by applying the trifoil requirement. This is quantified below. Secondly, with regard to the peaks, it can be seen for example that the 1806 keV peak arising from the (d,p) product $^{26}$Na is retained in the trifoil-gated spectrum with high efficiency whereas the 1266 keV peak arising from the compound nuclear product $^{31}$P is mostly eliminated. In fact, the elimination of the $^{31}$P peak reveals an underlying $^{26}$Na peak at 1276 keV.
\begin{figure}[h]
\sidecaption
\includegraphics[width=1.0\textwidth]{fig-gemma-gamma}
\caption{Gamma-ray energy spectrum acquired for a beam of $^{25}$Na at 5 A.MeV incident on a $CD_2$ target, using the full TIGRESS array (see text). The requirement of a trifoil signal eliminates a large fraction of the smooth background, and largely removes the peaks due to scattered radioactivity and compound nuclear reactions. The radioactivity peaks are dispersed by the Doppler correction. }
\label{fig:24}
\end{figure}
In order to quantify the improvement in peak:background ratio that was achieved by using the trifoil, spectra such as those in Figure \ref{fig:25} were produced. The gamma-ray energy spectrum in Figure \ref{fig:25}(a) is for a single clover at a single laboratory angle. The data were analysed in this way, in order to be sure to separate as much as possible the gamma-rays arising from transfer and compound nuclear reactions. The optimal value of the velocity $\beta$ for the Doppler correction is of course different for these two different categories of reaction, so the correction procedure produces relative movement in energy between counts from transfer and compound reactions depending upon the angle of the gamma-ray detection. The proton energy data in Figure \ref{fig:25}(b) are for a thin slice in a spectrum of energy versus angle such as that shown in Figure \ref{fig:19}. Already, in Figure \ref{fig:19}, the trifoil requirement was applied and this reduced a smooth background arising from compound nuclear events. The extent of this background reduction can be measured using Figure \ref{fig:25}(b). In this particular experiment, the average efficiency for successfully tagging a genuine proton or a genuine gamma-ray (i.e. one arising from a transfer or other direct reaction) was about 80\%. The shortfall relative to 100\% was due to the intrinsic efficiency properties of this particular trifoil detector. The average probability for incorrectly tagging a charged particle or gamma-ray of compound nuclear origin was about 15\%, or for a gamma-ray from radioactive decay it was about 10\%. The origin of this unwanted probability lay in the high beam intensity and the chance of recording an unreacted beam particle in the same nanosecond-sized beam bunch as a compound reaction. Taken overall, the peak:background ratio in each of the proton energy spectrum and the gamma-ray energy spectrum was improved by nearly an order of magnitude. 
The two reductions of the background are not independent. For a particular gamma-ray peak, an enhancement in the peak:background ratio of typically a factor of 40 was observed, and there is scope for improvement upon this as noted above.
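The size of these improvements follows from the tagging probabilities quoted above. As a rough approximation, treating the gate as a simple independent scaling of peak and background (the correlations noted above are neglected):

```python
def gate_improvement(eff_signal, eff_background):
    """Improvement in the peak:background ratio from a coincidence gate
    that retains a fraction eff_signal of genuine direct-reaction events
    and eff_background of background events."""
    return eff_signal / eff_background

# Probabilities quoted in the text:
vs_compound      = gate_improvement(0.80, 0.15)  # compound-nuclear background
vs_radioactivity = gate_improvement(0.80, 0.10)  # decay-radioactivity background
```

These factors of roughly 5 and 8 are consistent with the overall improvement of nearly an order of magnitude; the larger factor of 40 seen for individual gamma-ray peaks presumably reflects the correlated gating on both the proton and the gamma-ray.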
\begin{figure}[h]
\sidecaption
\includegraphics[width=1.0\textwidth]{fig-rejection}
\caption{Energy spectra accumulated for a beam of $^{25}$Na at 5 A.MeV incident on a $CD_2$ target, showing the rejection of background using the trifoil detector as discussed in the text: (a) expanded view of the low energy {\it gamma-ray} spectrum, for a single clover crystal at $82^\circ$ to the beam direction, (b) example {\it proton} energy spectrum for measured proton angles between $105^\circ$ and $107^\circ$ in the laboratory frame, i.e. a vertical slice in Figure \protect\ref{fig:19}. }
\label{fig:25}
\end{figure}
\subsection{Simultaneous measurements of elastic scattering distributions}
\label{subsec:elastic}
In the experiments with TIARA \cite{Ne24,ne27,bea} and SHARC \cite{Wilson}, the absolute normalisation was provided by a simultaneous measurement of the elastic scattering cross section. An example of the data obtained for the cross section, plotted as a function of the centre of mass scattering angle, is shown as the inset in Figure \ref{fig:20}. This technique works well, so long as the elastic scattering can be measured sufficiently close to $90^\circ$ in the laboratory that it includes the small values of the centre of mass angle where the elastic cross section can be calculated reliably. The method relies upon being able to evaluate the cross section theoretically using an optical model calculation; at small centre of mass angles the deviation from Rutherford scattering will be small and the calculated cross section will be reliable. Assuming that the measurements can be made, there are significant advantages in using this technique. The three main advantages concern (a) the beam integration, (b) the target thickness and (c) the dead time in the data acquisition system. The beam integration would normally require the direct counting of every incident beam particle, with a detector of a known and consistent efficiency. The target thickness would normally be required to be known precisely. However, the measurement of the yield for elastic scattering allows the product of these two quantities, (a) and (b), to be measured, including any necessary correction for the dead time (c) of the acquisition. In the experiments described, the trigger for the acquisition was for a particle to be detected in the silicon array. The elastic scattering and (d,p) reaction events were then subject to the same dead time constraints. It is still necessary to have a reasonable measurement of the target thickness, so that corrections can be applied for the energy lost by the incident beam and by charged particles as they leave the target.
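The arithmetic of this normalisation can be sketched as follows. This is a minimal illustration, not the analysis code actually used: it assumes pure Rutherford scattering in the centre of mass frame with $e^2 = 1.44$~MeV\,fm (Gaussian units), and all yields, solid angles and live-time fractions below are invented numbers.

```python
import math

E2 = 1.44  # e^2 in MeV.fm (Gaussian units)

def rutherford_cm(z1, z2, e_cm_MeV, theta_cm_deg):
    """Rutherford differential cross section in the centre of mass frame,
    in mb/sr (1 fm^2 = 10 mb)."""
    a = z1 * z2 * E2 / (4.0 * e_cm_MeV)            # fm
    s = math.sin(math.radians(theta_cm_deg) / 2.0)
    return 10.0 * a**2 / s**4

def luminosity_per_mb(elastic_yield, sigma_mb_sr, d_omega_sr, live_fraction):
    """Product N_beam * n_target (events per mb) inferred from the elastic
    yield; beam dose, target thickness and dead time are absorbed together."""
    return elastic_yield / (sigma_mb_sr * d_omega_sr * live_fraction)

def cross_section_mb_sr(reaction_yield, lum_per_mb, d_omega_sr, live_fraction):
    """Normalise a (d,p) yield using the luminosity from elastic scattering."""
    return reaction_yield / (lum_per_mb * d_omega_sr * live_fraction)
```

The point of the construction is that neither the beam dose nor the target thickness ever needs to be known separately: only their product enters, and it cancels identically between the elastic and transfer channels.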
\subsection{Extending (d,p) studies to unbound states}
\label{subsec:unbound}
The extension of (d,p) studies to include transfer to states in the continuum of the final nucleus is relatively straightforward experimentally compared to the theoretical interpretation. In fact, this issue highlights situations in the development of the reaction theory that have remained unresolved, or partially unresolved, from the days when (d,p) reactions in normal kinematics were a major topic of research.
An experimental example that is relatively simple to treat, both experimentally and theoretically, is provided by a study of the lowest $7/2^-$ state in $^{27}$Ne, populated via (d,p) with a $^{26}$Ne beam \cite{ne27}. This state is observed as an unbound resonance at an excitation energy of 1.74 MeV in $^{27}$Ne, compared to the neutron separation energy of 1.43 MeV. For reasons of both the relatively small energy above threshold and the relatively large neutron angular momentum of $\ell =3$, this unbound state is quite narrow. In fact, the experiment implies the natural width to be 3-4 keV (but in the data it is observed with a peak width of 950 keV due primarily to target thickness effects). In the case of a relatively narrow resonance, meaning a resonance with a natural width that is small compared to its energy above threshold, it is possible to carry out a theoretical analysis with relatively small modifications to the theory. One method is to make the approximation that the state is bound, say by 10 keV, in order to calculate the form factor (i.e. overlap integral) for the neutron in the transfer; this can satisfactorily describe the wave function in the region of radii where the transfer takes place. An improved approach is to use a resonance form factor, following the method of Vincent and Fortune \cite{VincentFortune}. In this theory, the magnitude of the differential cross section scales in proportion to the width of the resonance. If a barrier penetrability calculation is used, to estimate the width for a pure single particle state, then the cross section can again be interpreted in terms of a spectroscopic factor. The Vincent and Fortune method has been implemented \cite{Cooper} in the Comfort extension \cite{Comfort} of the widely used DWBA code DWUCK4 \cite{Kunz}. For these narrow, almost bound resonances, the structure of the differential cross section retains its characteristic shape, determined by the transferred angular momentum.
It has long been known \cite{Akram} that the oscillatory features of the differential cross sections, which allow the transferred angular momentum to be inferred from experimental data, are less prominent or even absent when the final state is unbound and broad in energy. The method of Vincent and Fortune also ceases to be applicable, for these broad resonances. Because of the lack of structure, it becomes difficult to interpret the experimental data so as to determine the spins of final states. An experimental example is provided by the study of unbound states in $^{21}$O via the (d,p) reaction with a beam of $^{20}$O ions \cite{bea}. The analysis in ref. \cite{bea} included calculations using the CDCC model mentioned in section \ref{subsec:adwa}, wherein the continuum in $^{21}$O was considered to be divided into discrete energy bins with particular properties.
It may be possible to recover some sensitivity to the transferred angular momentum by observing the sequential decay of the resonance states. The observed angular distribution should reflect the angular momentum of the decay of the resonance, with a dependence on the magnetic substate populations for the resonant state in the transfer reaction. An attempt to exploit this effect was made in the study of d($^{26}$Ne,$^{27}$Ne)p mentioned above \cite{ne27,Simon}. The $^{26}$Ne products were identified in a magnetic spectrometer as shown in Figure \ref{fig:22}. By a process of ray tracing \cite{VAMOS} it was also possible to reconstruct the magnitude and direction ($\theta,\phi$) of the $^{26}$Ne momentum. Combining this with the momentum of the incident beam and the light particle detected in TIARA, it was possible to reconstruct the missing momentum \cite{Simon}. It was assumed that the light particle was a proton, arising from (d,p). The primary aim of this particular analysis was to be able to separate the events arising from (d,p) from those arising from (d,d) or (p,p) in the part of the TIARA array forward of $90^\circ$. In this sense, it was very successful, as shown by the separation of the main elastic peak from the sequential decay peak in Figure \ref{fig:25a}. A threshold of 40 MeV/c effectively discriminates between these two reaction channels. Unfortunately, the resolution in terms of the reconstructed angle (rather than the magnitude) of the unobserved neutron momentum was inadequate to take this further. No useful angular correlation could be extracted, for the sequential $^{27}$Ne$^*$ $\rightarrow $ $^{26}$Ne + $n$ decays.
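The missing-momentum reconstruction described above amounts to a vector subtraction once the beam momentum and the momenta of the two detected products are known. A minimal sketch, with synthetic numbers rather than data from the experiment, is:

```python
import numpy as np

def missing_momentum(p_beam, p_heavy, p_light):
    """Momentum vector (MeV/c) of any undetected particle, from the beam
    momentum and the two detected reaction products."""
    return np.asarray(p_beam) - np.asarray(p_heavy) - np.asarray(p_light)

# Synthetic event: invent a neutron momentum, assign the remainder to the
# detected 26Ne and proton, and confirm the reconstruction returns it.
p_beam  = np.array([0.0, 0.0, 5000.0])    # beam along z (illustrative value)
p_n     = np.array([30.0, -10.0, 120.0])  # 'undetected' neutron
p_light = np.array([-80.0, 40.0, 300.0])  # detected proton
p_heavy = p_beam - p_n - p_light          # detected 26Ne, by construction
p_miss  = missing_momentum(p_beam, p_heavy, p_light)
```

For a genuine elastic event the reconstructed magnitude peaks near zero, whereas a sequential-decay event such as this synthetic one lies well above the 40~MeV/c threshold mentioned above.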
\begin{figure}[h]
\sidecaption
\includegraphics[width=0.7\textwidth]{fig-missing-momentum}
\caption{Reconstructed magnitude for the momentum of any missing particle when $^{26}$Ne and a light charged particle (assumed to be a proton) are detected from the reaction of a $^{26}$Ne beam on a $CD_2$ target \protect\cite{Simon}. The upper histogram is for all data where a $^{26}$Ne was positively identified. The green shaded area giving mainly a peak near zero is for data selected to highlight elastic scattering, which in fact is d($^{26}$Ne,$^{26}$Ne)d. The dark blue shaded area with fewer counts is selected to highlight the reaction d($^{26}$Ne,$^{27}$Ne$^*$ $\rightarrow$ $^{26}$Ne + n )p where the neutron was undetected. }
\label{fig:25a}
\end{figure}
\subsection{Simultaneous measurement of other reactions such as (d,t)}
\label{subsec:dt}
Radioactive beams are so difficult to produce that an experiment should make the best possible use of the beam delivered to the target. The compact silicon arrays such as TIARA were designed to cover the whole range of laboratory angles with particle detectors that would assist in this aim. The detectors in the forward hemisphere can record the particles from reactions such as (d,t) or (d,$^3$He), at the same time as those just forward of $90^\circ$ record elastic scattering and those in (predominantly) the backward hemisphere record the (d,p) reaction products. Indeed, the experiment using TIARA to study $^{21}$O via (d,p) with a beam of $^{20}$O \cite{bea} was also designed to measure the (d,t) reaction to $^{19}$O at the same time. The (d,t) measurements \cite{Alexis,Ramus} employed the telescopes of MUST2 \cite{MUST2} which were mounted at the angles forward of the TIARA barrel (cf. Figure \ref{fig:14}). The gamma-ray coincidence measurements with EXOGAM allowed new spin assignments as well as the spectroscopic factor measurements for $^{19}$O states \cite{Alexis,Ramus}. Any studies with (d,t) are immediately useful for comparing to the sorts of knockout studies described in section \ref{subsec:knockout}. The work of ref. \cite{bea} was able to take the additional step of combining the spectroscopic factors measured for (d,p) and (d,t) from $^{20}$O. In an analysis based on sum rules and the formalism of \cite{Baranger} and \cite{Signoracci}, it was possible to derive experimental numbers for the single particle energies for this nucleus. The values were in good agreement \cite{bea} with the effective single particle energies of the USDA and USDB shell model interactions for the $sd$-shell obtained in ref. \cite{BrownRichter}. The previously discussed experiment using a beam of $^{26}$Ne \cite{ne27,Simon} used the same TIARA + MUST2 experimental setup as the $^{20}$O experiment.
The data for the (d,t) reaction from $^{26}$Ne are still under analysis \cite{JST} but an interesting feature here is that the (p,d) reaction could also be measured at the same time. The experimental separation of the deuteron and triton products of the (p,d) and (d,t) reactions was possible in MUST2 with a suitable combination of time-of-flight identification and kinematical separation.
\subsection{Taking into account gamma-ray angular correlations in (d,p)}
\label{subsec:gamma}
It is well known that gamma-ray angular correlations will be observed for gamma-rays de-exciting states that are populated in nuclear reactions. These correlations have been widely exploited to reveal information about transition multipolarities and mixing, and hence to deduce spin assignments. For a nucleus produced in a reaction, and having some spin $J$, the angular distribution of gamma-rays measured relative to some $z$-axis (such as the beam direction) will depend on the population distribution for the magnetic substates $m_j = -J$ up to $+J$. If $J=0$, the gamma-rays will necessarily be isotropic. However, for other $J$-values the population of substates will be determined by the reaction mechanism and other details of the reaction. Thus, in (d,p) reactions for example, the gamma-ray angular distribution can depend on details such as the angle of detection of the proton. Certain simplifications can be made. For example, if $J=1/2$ then, for an unpolarised incident beam, and for protons detected symmetrically around zero degrees (with respect to the beam) the gamma-rays will necessarily be isotropic. Historically, experiments performed with stable beams and targets were designed to restrict the detection parameters in such a way as to simplify the angular momentum algebra, so as to remove any need to understand the magnetic substate populations, and hence the reaction mechanism, in detail. One of the most widely used classifications for angular correlation experiments is the division into Methods I and II of Litherland and Ferguson \cite{Litherland}. These methods are discussed in some detail in various texts, for example ref. \cite{England}. A simple and relevant example of the application of Method II is a study of the $^{26}$Mg(d,p$\gamma$)$^{27}$Mg reaction, in which the spins of the first three states in $^{27}$Mg were deduced from the measured gamma-ray angular correlations \cite{Eswaran}.
Method I of Litherland and Ferguson involves measuring a $\gamma$-$\gamma$ angular correlation relative to a particular fixed angle for the first gamma-ray. The quantisation axis is defined by the direction of the incident beam. Method II, the more relevant here, is to measure a particle-$\gamma$ angular correlation where the outgoing particle from the reaction is measured at either $0^\circ$ or $180^\circ$. This limits the orbital magnetic quantum numbers of the projectile and ejectile to be $m_\ell =0$ and the consequences of this eliminate the need for any detailed knowledge of the reaction in order to know the substate populations for the final nucleus.
In the present work, we consider a more general situation where we retain one major simplification, namely the cylindrical symmetry of the particle detection, around the beam axis. The discussion is based around the previously discussed experiment using a $^{25}$Na beam to study (d,p) reactions populating states in $^{26}$Na \cite{Wilson}. The SHARC experimental setup (cf. Figure \ref{fig:23}) gives essentially cylindrically symmetrical detection of the protons. The simplification that is produced by this symmetry in the angular description of the angular correlation is dramatic and is described in sections III.E and III.F of the article by Rose and Brink \cite{RoseBrink}. Rose and Brink define an {\it alignment condition} which means that $w(-M_1)=w(M_1)$ for all values of the magnetic substate quantum number $M_1$ of the emitting nucleus with spin $J_1$. Here, $w$ is the weight (i.e. population probability) for a given magnetic substate and is subject to the normalisation
$$ \sum _{M_1} w(M_1) = 1~. $$
As described in their method 2 of section III.F, entitled {\it the alignment is achieved by a particle-particle reaction}, the alignment condition will be achieved if the outgoing particle is detected with cylindrical symmetry (assuming that the beam and target particles are unpolarised). Method II of Litherland and Ferguson is simply a very restricted instance of this stipulation. The results used here to describe angular correlations are taken from Rose and Brink's article \cite{RoseBrink}, and they have also been summarised and discussed in the book by Gill \cite{Gill}.
Suppose we have an experiment where the outgoing particle (for example, the proton in a (d,p) reaction) is detected in a cylindrically symmetric fashion at some particular angle with respect to the beam direction. Let the spin of the excited state be $J_1$. Suppose also, for simplicity, that the gamma-ray transition by which the excited state decays is a pure transition of a particular multipole $L$ (the more general cases of mixed multipolarity transitions with a mixing ratio $\delta$ are discussed in refs. \cite{RoseBrink, Gill}). If a gamma-ray detector with a fixed solid angle were then to be moved sequentially to various angles $\theta$ with respect to the beam direction, then the angular distribution observed for the gamma-rays would be given by equation (3.38) of ref. \cite{RoseBrink},
$$ W_{{\rm exp}} (\theta ) = \sum _K a_K P_K (\cos \theta ) $$
where it can be shown that $K$ runs from 0 to $2L$ and is even, the $P_K$ are the Legendre polynomials and the $a_K$ can be calculated (as described below) provided that we know the magnetic substate populations of the initial state $J_1$ and the spin of the final state $J_2$. Outside of the summation, there will also be an additional factor, usually denoted $A_0$, to normalise $W$ to the data. The definition of $W(\theta )$ is chosen so that isotropic emission corresponds to $W(\theta )=1$. Note that this implies that the constant term in the expansion is always $a_0 = 1$. The number of gamma-rays in total that are emitted at an angle $\theta$ into an angular range $d\theta $ is given by $W(\theta ) \times 2\pi \sin \theta d\theta $. In the case of a transition with pure multipolarity (i.e. with a mixing parameter of $\delta =0$) equation (3.47) of ref. \cite{RoseBrink} states that the theoretical form for the angular distribution is given by
$$ W(\theta ) = \sum _K B_K(J_1) \times R_K(LLJ_1 J_2) \times P_K (\cos \theta) $$
where the $R_K$ are independent of the reaction mechanism and basically contain coefficients to describe the angular momentum coupling. The expression for $R_K$ is given by equation (3.36) of ref. \cite{RoseBrink},
$$ R_K (LL^\prime J_1 J_2) = (-)^{1+J_1 - J_2 + L^\prime - L - K} \times \sqrt{(2J_1 +1) (2L+1) (2L^\prime +1)} \times (LL^\prime 1-1 \mid K0)
\times W(J_1 J_1 L L^\prime ; K J_2 ) $$
where the final two terms are the Clebsch-Gordan coefficient and the Racah W-coefficient describing the indicated angular momentum couplings. These coefficients may be obtained from tables or recursion formulae or from a suitable computer code such as ref. \cite{PDS}. In the present case, for a pure multipolarity, we have $L^\prime = L$. It is the $B_K$ coefficients that contain the information from the reaction mechanism, via the magnetic substate population parameters, $w(M_1)$. The expression for $B_K$ is given by equation (3.62) of ref. \cite{RoseBrink},
$$ B_K (J_1 ) = \sum _{M_1 = 0 {\rm ~or~} 1/2}^{M_1 = J_1 } w(M_1) \times \rho _K (J_1 M_1) $$
where the statistical tensor coefficients $\rho _K$ are given by
$$ \rho _K (J_1 M_1) = (2-\delta _{{M_1},0} ) \times (-)^{J_1 - M_1} \times \sqrt{2J_1 + 1} \times (J_1 J_1 M_1 -M_1 \mid K0) $$
and the final term is again a Clebsch-Gordan coefficient. For most normally-arising cases, the values of $\rho _K$ and $R_K$ are tabulated in the appendix of ref. \cite{RoseBrink}. The above description has followed exclusively the formulation of Rose and Brink \cite{RoseBrink}. Other authors have also presented formulae to describe these angular correlations, but it should be remembered that the different authors often adopt different phase conventions, etc., and hence the tables of symbols appropriate to one description cannot be assumed to be appropriate for a different description: one particular formulation must be used consistently. Also, in ref. \cite{RoseBrink} the formalism is extended to the case where a gamma-ray cascade occurs, and the second (or subsequent) gamma-ray is the one that is observed. In this case, as given by equation (3.46) of ref. \cite{RoseBrink}, the $R_K$ coefficient in the expression for $W(\theta )$ is replaced by a product of coefficients $U_K R_K$ where $U_K$ depends on $J_1$ and $J_2$ for the initial gamma-ray transition and $R_K$ depends on $J_2$ and $J_3$ for the second gamma-ray transition. The extension to a longer gamma-ray cascade is straightforward.
Thus, if the spins of the states are known, it is possible to calculate the $a_K$ coefficients, $a_2$, $a_4, \ldots$, of the Legendre polynomials in the gamma-ray angular distribution $W(\theta )$ provided that the magnetic substate weights $w(M_1)$ are known - at least, for pure multipolarity transitions. These expressions all rely on the particle detection being cylindrically symmetric at some angle (or range of angles) with respect to the beam direction. This ensures that $w(-M_1)=w(M_1)$ for all $M_1$.
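The coefficients $\rho_K$, $R_K$ and $B_K$ defined above can be evaluated directly from Clebsch-Gordan and Racah coefficients. The sketch below, which assumes a pure multipolarity ($L^\prime = L$) and uses the Wigner routines of the sympy library, is an illustration of the formulae rather than any code used in the analyses; the equal-weight example simply verifies that an unaligned state, with all $w(M_1)$ equal, gives $B_K = 0$ for $K > 0$ and hence isotropic emission.

```python
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import clebsch_gordan, racah

def rho_K(K, J1, M1):
    # Statistical tensor coefficient of Rose and Brink
    d0 = 1 if M1 == 0 else 0
    return (2 - d0) * (-1)**int(J1 - M1) * sqrt(2*J1 + 1) \
        * clebsch_gordan(J1, J1, K, M1, -M1, 0)

def R_K(K, L, Lp, J1, J2):
    # Angular momentum coupling coefficient, eq. (3.36) of Rose and Brink
    phase = (-1)**int(1 + J1 - J2 + Lp - L - K)
    return phase * sqrt((2*J1 + 1) * (2*L + 1) * (2*Lp + 1)) \
        * clebsch_gordan(L, Lp, K, 1, -1, 0) * racah(J1, J1, L, Lp, K, J2)

def B_K(K, J1, weights):
    # weights maps M1 >= 0 to w(M1); the (2 - delta) factor in rho_K
    # supplies the -M1 partner, using the alignment condition w(-M1) = w(M1)
    return sum(w * rho_K(K, J1, M1) for M1, w in weights.items())

# Unaligned J1 = 2 state: all five substates equally populated, w = 1/5
equal = {0: Rational(1, 5), 1: Rational(1, 5), 2: Rational(1, 5)}
```

With substate weights taken from an ADWA calculation, the Legendre coefficients of $W(\theta)$ for a pure transition follow as $a_K = B_K(J_1)\,R_K(LLJ_1J_2)$, with $a_0 = 1$ guaranteed by the normalisation of the weights.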
The values of the population parameters $w(M_1)$ depend on the reaction mechanism and, in general, on the angle of the particle detection. An ADWA calculation for a (d,p) reaction can be used to calculate the population parameters $w(M_1)$ and their evolution with the detection angle of the proton. Examples of this are shown in Figure \ref{fig:26}, for the (d,p) study discussed above, using a beam of $^{25}$Na at 5 A.MeV \cite{Wilson}. The different panels correspond to different assumptions about the final orbital for the transferred neutron, and also for the final spin in $^{26}$Na. The different panels are for $\ell$ transfers of $\ell = 0,1,2$ and 3. The different lines are for different values of $M_1$ from 0 to $J_1$. The main point to note is that, in general, the populations change dramatically with the angle of observation. The obvious counterexample is the panel for $s_{1/2}$ transfer. The symmetry imposed by $s$-wave transfer forces all five substates, from $M_1 = -2$ to $+2$, to have equal weights of 0.2 at every observation angle, and the gamma-ray emission will always be isotropic in this case.
\begin{figure}[h]
\sidecaption
\includegraphics[width=1.0\textwidth]{fig-substates}
\caption{Calculations of magnetic substate population parameters as a function of centre of mass angle, performed using the ADWA model with the code TWOFNR \cite{TWOFNR}. The calculations all suppose a final state at 2.2 MeV excitation, formed in the (d,p) reaction with $^{25}$Na to make $^{26}$Na. The orbital into which the neutron is transferred is indicated, along with the assumed final state spin. It can be seen that, in general, the populations vary dramatically. In the experiment, centre of mass angles out to approximately $30^\circ$ were studied. }
\label{fig:26}
\end{figure}
In Figure \ref{fig:27} the gamma-ray angular distributions determined by the substate populations are plotted, for the upper right-hand case in Figure \protect\ref{fig:26}, namely $1p_{3/2}$ transfer populating a hypothetical $4^-$ state at 2.2 MeV excitation energy in $^{26}$Na. The gamma-ray decay is assumed to be a pure dipole decay to the $3^+$ ground state. Since the multipolarity of this decay is $L=1$, the maximum value of $K$ for the $a_K$ coefficients is 2. In the centre of mass frame (rest frame) of the emitting nucleus, the gamma-ray angular distribution with respect to the beam axis is given by a constant term plus a term proportional to $a_2 P_2 (\cos \theta )$, and the value of $a_2$ depends on the detection angle of the proton. It is assumed that, for a given proton angle $\theta$(proton) with respect to the beam direction, the protons are detected with cylindrical symmetry at all azimuthal angles, $\phi$. For the centre of mass gamma-ray angular distributions, the functions are necessarily symmetric around $90^\circ$. The three curves with the higher intercepts at $\theta =0$ are plotted with the horizontal axis representing the gamma-ray angle as measured in the laboratory frame. There is a focussing of the gamma-rays towards zero degrees, due to the relativistic headlight effect as discussed in section \ref{subsec:tiara}.
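The transformation from the rest-frame distribution to the laboratory frame combines the aberration of the photon angle with the corresponding solid-angle Jacobian. A minimal sketch using the standard photon aberration formulae (the $\beta$ value below is purely illustrative) is:

```python
import math

def to_lab(theta_lab_deg, beta, w_cm):
    """Laboratory-frame gamma-ray intensity at theta_lab_deg, given the
    rest-frame distribution w_cm(theta_cm_deg): the photon angle is
    aberrated and the intensity picks up the solid-angle Jacobian
    (the relativistic 'headlight' focusing towards zero degrees)."""
    cl = math.cos(math.radians(theta_lab_deg))
    cos_cm = max(-1.0, min(1.0, (cl - beta) / (1.0 - beta * cl)))
    jac = (1.0 - beta**2) / (1.0 - beta * cl)**2   # d(cos cm)/d(cos lab)
    return w_cm(math.degrees(math.acos(cos_cm))) * jac
```

The Jacobian exceeds unity at forward laboratory angles and falls below unity at backward angles, which is the focusing visible in Figure \ref{fig:27}.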
\begin{figure}[h]
\sidecaption
\includegraphics[width=0.65\textwidth]{fig-gamma-dists}
\caption{Gamma-ray angular distributions for different detection angles $\theta _{{\rm cm}}$(proton) for the proton from (d,p). Calculated for $^{25}$Na incident on deuterons at 5 A.MeV, with $1p_{3/2}$ transfer populating a hypothetical $4^-$ state at 2.2 MeV excitation energy. For the three symmetric curves, the horizontal axis shows the gamma-ray angle in the centre of mass frame of the emitting nucleus. For the other three curves, the horizontal angle is the gamma-ray angle measured in the laboratory, with respect to the beam direction. The proton centre of mass angles are (a) $10^\circ$, (b) $20^\circ$, (c) $30^\circ$.}
\label{fig:27}
\end{figure}
In Figure \ref{fig:28}, the differential cross sections in the laboratory frame are shown, for the population of states in $^{26}$Na via the (d,p) reaction in inverse kinematics. The curves for $\ell = 0, 1, 2$ and 3 show the expected movement of the main peak progressively further away from $180^\circ$ as $\ell$ increases. The parallel curves with the lower cross sections are actually the computed curves, assuming a gamma-ray coincidence requirement. The angular distributions for a gamma-decay to the ground state were computed using TWOFNR and the ADWA model, for each proton laboratory angle. The gamma-ray angular distributions were then integrated over the appropriate range of angles, corresponding to the laboratory angles spanned by the TIGRESS detectors in the experiment \cite{Wilson}. The relativistic aberration effect was also taken into account. The important point here is that the curves, whilst not perfectly parallel, are very little modified in shape from the ungated curves, i.e. those that have no coincidence requirement. This means that the experimental data can simply be corrected for the measured efficiency of the gamma-ray array and then compared with the unmodified ADWA calculations. This simplification was achieved in this experiment by the large angular range spanned by the gamma-ray array, which meant that the various changes in the angular distributions of the gamma-rays had little net effect after integration. The slight distortions that do occur are negligible (in this case) compared to the statistical errors in the data points and to the inevitable discrepancies that typically occur, between the theoretical and experimental shapes of the differential cross sections. The results from this experiment \cite{Wilson} are currently being prepared for publication.
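The effect of integrating the gamma-ray angular distribution over a finite array can be illustrated with a short numerical integral. For simplicity this sketch takes a $W(\theta) = 1 + a_2 P_2(\cos\theta)$ distribution and a uniform detection efficiency over the covered range, both idealisations not taken from the experiment:

```python
import math

def coincidence_factor(a2, theta1_deg, theta2_deg, n=2000):
    """Factor by which a gamma-gated yield differs from the isotropic case
    when W(theta) = 1 + a2 * P2(cos theta) is integrated over a detector
    covering theta1..theta2 (midpoint rule, uniform efficiency assumed)."""
    t1, t2 = math.radians(theta1_deg), math.radians(theta2_deg)
    num = acc = 0.0
    for i in range(n):
        t = t1 + (i + 0.5) * (t2 - t1) / n
        p2 = 0.5 * (3.0 * math.cos(t)**2 - 1.0)
        num += (1.0 + a2 * p2) * math.sin(t)
        acc += math.sin(t)
    return num / acc
```

Over the full sphere the $P_2$ term integrates to zero exactly, and for a wide angular coverage the factor stays close to unity whatever $a_2$ is; this is the simplification exploited above, whereby the gated cross sections retain essentially the ungated shape.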
\begin{figure}[h]
\centering
\includegraphics[width=0.9\textwidth]{fig-gated-cross-sections}
\caption{Differential cross sections in the laboratory frame, calculated for the (d,p) reaction leading to four different states in $^{26}$Na for an experiment at 5 A.MeV, in inverse kinematics. Pairs of almost parallel curves are shown for (a) $1s_{1/2}$ transfer to a $2^+$ state at 0.233 MeV, (b) $1p_{3/2}$ transfer to a hypothetical $4^-$ state at 2.2 MeV, (c) $0d_{3/2}$ transfer to a $2^+$ state at 0.407 MeV, (d) $0f_{7/2}$ transfer to a hypothetical $6^-$ state at 2.2 MeV in $^{26}$Na. In each case, the upper curve is the ADWA calculation and the lower curve is the calculated curve for a gamma-ray coincidence requirement (see text). }
\label{fig:28}
\end{figure}
\clearpage
\subsection{Summary}
\label{subsec:summary}
Section \ref{sec:4:lightion} was headed {\em examples of light ion transfer experiments with radioactive beams} and in this section a range of different experimental approaches have been reviewed. With a relatively light projectile such as $^{11}$Be it was possible to make all of the detailed spectroscopic measurements using the beam-like particle. For the alternative approach using a silicon array for the light (target-like) particle, the TIARA array and subsequent developments such as T-REX and SHARC were described. Gamma-ray detection was shown to be useful, or in many cases essential, in order to resolve different excited states and to identify them on the basis of their gamma-ray decay pathways. Hence, the related issues of Doppler correction and angular correlations were discussed. The use of a detector centred at zero degrees for the beam-like reaction products was shown to be a great advantage. Whilst a large-acceptance spectrometer such as VAMOS gives superior performance including full particle identification, it was shown that even a simple detector such as the {\it trifoil} can substantially assist in the reduction of background. The background arises from compound nuclear reactions induced by the beam on contaminant materials in the target, such as carbon. A common target choice is to use normal $(CH_2)_n$ or deuterated $(CD_2)_n$ polythene self-supporting foils. The option of using a helical orbit (solenoidal) spectrometer instead of a conventional silicon array, for the light particle detection, was described. An example of the use of a cryogenic target of deuterium was included: there, the target was thick and largely absorbed the low energy target-like particles, but it is worth noting that there is research aimed at producing much thinner cryogenic targets that could be used with light particle detection.
Finally, an important different approach was described, wherein the target thickness is essentially removed as a limitation because the target becomes the detector itself. This is sometimes called an {\it active target}. With a time projection chamber (TPC) such as MAYA, the fill-gas of the detector includes within its molecules the target nuclei, and the measurements make it possible to reconstruct the full kinematics of the nuclear reaction in three dimensions. This makes an active target the ideal choice for very low intensity beams, where a thick target is indispensable. With more development to improve the resolution and dynamic range, this type of detector could eventually have the widest applicability of all experimental approaches.
\section{Heavy ion transfer reactions}
\label{sec:5:heavyion}
For the transfer of a nucleon between two heavy ions, there is an important selectivity in favour of certain final states which allows the spins of the final states to be deduced. This is known as $j_>/j_<$ selectivity because it can tell us whether the final orbital for the transferred nucleon has $j = \ell + 1/2$ or $j = \ell - 1/2$. The origin of the effect is two-fold \cite{bond-comment}. Firstly, a heavy ion at the appropriate energies will have a small de Broglie wavelength because of its large mass, and hence its path can be reasonably described as a classical trajectory. Secondly, the transfer must take place in a peripheral encounter between projectile and target because a smaller impact parameter will result in a strongly absorbed compound nuclear process and a larger impact parameter will keep the nuclei from interacting except through the large repulsive Coulomb interaction. Therefore, we can consider classical trajectories for peripheral transfer and take into account quantum mechanical factors in a semiclassical fashion. Of course, a full quantum mechanical treatment using the normal reaction theories is possible. The advantage of the semiclassical model is that it allows the origins of the particular selectivity in heavy ion transfer to be understood more readily.
\subsection{Selectivity according to j$_>$ and j$_<$ in a semi-classical model}
\label{subsec:bond}
The semiclassical model for nucleon transfer between heavy ions has been described by Brink \cite{Brink} and is represented in Figure \ref{fig:29}. At the moment of transfer, the mass $m$ has some linear momentum in the beam direction due to the beam velocity $\upsilon$ and also due to the rotational motion of $m$ around $M_1$. Just after the transfer, it is orbiting $M_2$ which is at rest, and all of the linear momentum is due to the orbital motion. The initial and final linear momenta should be approximately equal by conservation of momentum. Quantum mechanically, they need not be exactly equal because of the uncertainty in momentum introduced by the spatial uncertainty in the precise point of transfer as measured in the beam direction (which can be estimated). A similar condition can be formulated for the angular momentum of the transferred mass $m$. Before the transfer, this has contributions from the relative motion between the two colliding heavy ions and from the internal orbital angular momentum of the transferred nucleon. These are the only parts that change: the former due to the adjustments in mass and possibly charge, and the latter due to the change of orbital. Once again, the initial and final values should be almost equal.
The two kinematical conditions given by Brink \cite{Brink} are:
$$ \Delta k = k_0 - \lambda_1 / R_1 - \lambda_2 / R_2 \approx 0 $$
$$ \Delta L = \lambda_2 - \lambda_1 + \frac{1}{2} k_0 (R_1 - R_2 ) + Q_{{\rm eff}} R / \hbar \upsilon \approx 0 $$
where the orbital angular momentum and projection on the $z$-axis for the transferred particle are given by $(\ell, \lambda)$ with subscripts 1 and 2 for before and after the transfer, respectively. The quantity $Q_{{\rm eff}}$ is equal to the reaction Q-value in the case of neutron transfer, but otherwise has an adjustment due to changes in Coulomb repulsion: $Q_{{\rm eff}} = Q - \Delta ( Z_1 Z_2 e^2 /R)$. The beam direction is $y$ and the $z$ direction is chosen perpendicular to the reaction plane. A further pair of conditions arise from the requirement that the transfer should take place in the reaction plane, where the two nuclei meet, and hence the spherical harmonic functions $Y_{\ell m}$ should not be zero in that plane:
$$ \ell_1 + \lambda_1 = {\rm even} $$
$$ \ell_2 + \lambda_2 = {\rm even}. $$
The two kinematical conditions arising from linear momentum and angular momentum conservation will each, separately, imply a particular {\it well matched} angular momentum value, for a given reaction, bombarding energy and final state energy (Q-value). Alternatively, for a given $\ell$-transfer they will each imply a particular excitation energy at which the matching is optimal. If the values implied by the two equations are equal, then the reaction to produce a state of the given spin and excitation energy will have a large cross section (if such a state exists, with the correct structure in the final nucleus). If the two values are not equal, then the cross section will be reduced by an amount that depends on the degree of mismatch.
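These matching conditions are simple enough to evaluate numerically. The following sketch does so in Python, assuming $k_0=m\upsilon/\hbar$ for the transferred mass moving at the beam velocity, $R=R_1+R_2$, and units of MeV and fm with $\hbar c\approx 197.3$ MeV fm; the function name and the sample values are illustrative, not taken from any particular experiment.

```python
HBARC = 197.327  # hbar*c in MeV*fm

def brink_matching(mc2, beta, l1, R1, l2, R2, Q_eff):
    """Evaluate the two Brink matching conditions Delta-k and Delta-L.

    mc2   : rest energy of the transferred mass m, in MeV
    beta  : beam velocity v/c
    l1,l2 : projections lambda_1, lambda_2 (units of hbar)
    R1,R2 : orbital radii around the donor and acceptor nuclei, in fm
    Q_eff : effective Q-value, in MeV
    Good matching corresponds to both returned values being close to zero.
    """
    k0 = mc2 * beta / HBARC          # k0 = m*v/hbar, in fm^-1
    R = R1 + R2                      # separation of the centres at transfer
    dk = k0 - l1 / R1 - l2 / R2      # linear-momentum matching
    dL = l2 - l1 + 0.5 * k0 * (R1 - R2) + Q_eff * R / (HBARC * beta)
    return dk, dL
```

For $R_1=R_2$ and $\lambda_1=\lambda_2$ the angular-momentum condition reduces to $\Delta L = Q_{\rm eff}R/\hbar\upsilon$, so good matching then requires a small effective Q-value, consistent with the discussion of selectivity that follows.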
\begin{figure}[h]
\sidecaption
\includegraphics[width=.35\textwidth]{fig-brink}
\caption{Sketch of the transfer of a mass $m$ from the projectile $M_1$ to the target $M_2$ in a heavy ion collision, showing the variables used to derive the Brink matching conditions \protect\cite{Brink} (see text). }
\label{fig:29}
\end{figure}
\subsection{Examples of selectivity observed in experiments}
\label{subsec:hi-selectivity}
A detailed inspection of the Brink matching conditions for $\Delta k$ and $\Delta L$, given above, implies that a reaction with a large negative Q-value will favour final states with high spin, or more specifically a large value of $\lambda _2$ in the notation of Figure \ref{fig:29}. This arises because the conservation of linear momentum favours a high value of $\lambda _2 + \lambda_1$ and the conservation of angular momentum implies a large value for $\lambda_2 - \lambda_1$. This selectivity, which occurs for heavy ion transfer with a negative Q-value, is discussed in detail by Bond \cite{Bond} with a derivation in terms of DWBA formalism. As further noted by Bond \cite{bond-comment}, the large negative Q-value implies that the projectile has reduced kinetic energy after the collision and hence is slowed down, which entails a significant transfer of angular momentum. For heavy ions, the angular momentum of relative motion is large, and hence a relatively small reduction corresponds to transfer into a relatively high spin orbital. In Figure \ref{fig:30}, for the ($^{16}$O,$^{15}$O) reaction, which has a large negative Q-value, the neutron is transferred from the $0p_{1/2}$ orbital. For the best matching, there will be a maximum $\ell$-transfer, which implies that the nucleon will change $\lambda$, i.e. the projection of the angular momentum in the direction perpendicular to the reaction plane, as much as possible. For example, from a $0p_{1/2}$ orbital (with orbital angular momentum $\ell =1$) and an initial projection of $m_\ell = -1$ (which implies also that $m_s = +1/2$) the transfer will favour $m_\ell = +\ell$ for a high-$\ell$ orbital in the final nucleus. It is reasonable to assume that there is no interaction in the transfer to change the direction (projection) of the intrinsic spin of the nucleon. Therefore the relative directions of orbital and spin angular momentum for the nucleon become swapped in the transfer process.
The preferred transfer in this case is from $\ell - 1/2$ (denoted as $j_<$) to $\ell + 1/2$ (denoted as $j_>$). In general, if the Q-value is negative, the transfer from an orbital with $j_<$ ($j_>$) will favour the population of orbitals with $j_>$ ($j_<$) in order to achieve the largest change in $\lambda$ for the transferred nucleon. Therefore, in Figure \ref{fig:30}, the reaction ($^{12}$C,$^{11}$C) shows the opposite selectivity to ($^{16}$O,$^{15}$O). In the upper panel we see a favouring of the ($j_>$) $7/2^-$ state corresponding to the $1f_{7/2}$ orbital, and a relative suppression of the ($j_<$) $9/2^-$ state corresponding to the $0h_{9/2}$ orbital. This selectivity is reversed in the lower panel, and we also see that the ($j_>$) $13/2^+$ state (corresponding to the $0i_{13/2}$ orbital) follows the ($j_>$) $7/2^-$ in becoming weaker relative to the favoured ($j_<$) $9/2^-$ state.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.4\textwidth]{fig-bond}
\caption{Illustration of the $j_> /j_<$ selectivity exhibited in heavy ion transfer when the Q-value is large and negative. The data are for the reactions ($^{16}$O,$^{15}$O) and ($^{12}$C,$^{11}$C) on a $^{148}$Sm target with the same beam velocity, defined by a beam energy of 7.5 A.MeV. The selectivity is reversed due to the parent orbitals of the transferred neutron being $0p_{1/2}$ ($j_<$) and $0p_{3/2}$ ($j_>$) respectively. Therefore the upper panel favours $j_>$ states and the lower panel favours $j_<$ states. The two highlighted peaks correspond to populating the $0h_{9/2}$ ($j_<$) and $1f_{7/2}$ ($j_>$) orbitals. The biggest peak (unshaded) corresponds to the $0i_{13/2}$ ($j_>$) orbital. Figure adapted from ref. \cite{bond-comment}}
\label{fig:30}
\end{figure}
The discussion for single nucleon transfer can be simply extended to include cluster transfer \cite{Brink}. A further step is to describe reactions in which nucleons are transferred in both directions, to and from the projectile, or in two independent transfers in the same direction. In the work of ref. \cite{n19o21}, the ideas developed by Brink \cite{Brink} and described by Anyas-Weiss {\it et al.} \cite{AnyasWeiss} are extended to describe the reactions ($^{18}$O,$^{17}$F) and ($^{18}$O,$^{15}$O) where one of the two steps is the transfer of a dineutron cluster. The trajectories of the transferred particles between the two heavy ions are represented in Figure \ref{fig:31} for the favoured (well matched) and unfavoured trajectories. The proton is required in each case to make a transition from $j_<$ to $j_>$ in a stretched trajectory as shown, so as to form the $5/2^+$ ground state in $^{17}$F, which was observed in the experiment \cite{n19o21}. Figure \ref{fig:31}(a) shows that the favoured final states in $^{19}$N will have a total spin where 1/2 from the $0p_{1/2}$ proton is added collinearly with the orbital angular momentum transferred by the dineutron cluster. This type of selectivity was observed in the experiment and was used to interpret the states populated in $^{19}$N and $^{21}$O. In the case of $^{21}$O, there has been independent verification of the interpretation via the previously-mentioned study of the (d,p) reaction with a beam of $^{20}$O using TIARA \cite{bea}.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.45\textwidth]{fig-n19o21}
\caption{The semiclassical model of Brink \protect\cite{Brink,AnyasWeiss} can be extended to two-step transfer reactions, such as this ($^{18}$O,$^{17}$F) reaction on a target of $^{18}$O (Figure adapted from ref. \cite{n19o21}). The reaction is modelled as a dineutron transfer from the $^{18}$O projectile and the pickup of a proton from the $^{18}$O target: (a) the strongly favoured senses for the two transfers, (b) the less favoured transfer directions. }
\label{fig:31}
\end{figure}
\section{Perspectives}
\label{sec:6:perspectives}
It is always dangerous to speculate about the future directions for the development of instrumentation or experimental techniques. The experimental devices described here are all likely to deliver a range of new results in nucleon transfer, as new facilities and more beams at the appropriate energies become available. It is, however, perhaps worth taking note of some of the new developments that might be expected. These developments will in part be enabled by an increased capability to deal with large numbers of electronics channels, due to innovations in electronics design. One development is to take the simple idea of a highly efficient silicon array (i.e. with a large geometrical coverage) mounted inside a highly efficient gamma-ray array (as adopted by TIARA, SHARC, T-REX, ORRUBA, $\ldots$) and improve it. This is the aim of GASPARD \cite{GASPARD}, which is an international initiative based originally on the new SPIRAL2 Phase 2 facility but potentially also deployable at HIE-ISOLDE. A preliminary design is shown in Figure \ref{fig:32}. The particle detection is based on one to three layers of silicon, depending on angle. The segmentation of the silicon is sub-millimetre, but with the detectors still able to supply particle identification information based on the pulse shape. The array is sufficiently compact to fit inside newly developed gamma-ray arrays such as AGATA or PARIS. The geometry is chosen to allow innovative target design, and in particular to have operation with the thin solid hydrogen target CHyMENE, currently under development at Saclay. Another current development is the AT-TPC detector at MSU \cite{ATTPC} which aims to combine the advantages of the active target MAYA and the helical spectrometer HELIOS.
As noted previously, a key advantage of an active target is that it can, in principle, remove the limitations on energy resolution (or, indeed, the limitation of even being able to detect the reaction products) that arise from target thickness. An alternative approach to minimising the target thickness effect is to use an extremely thin target but to compensate by passing the beam through it many times, say $10^6$ times. Under certain circumstances, a beam with an energy of 5-10 A.MeV, as is suitable for transfer, could be maintained and recirculated in a storage ring for this many revolutions. A thin gas jet target would allow transfer reactions to be studied in inverse kinematics. The ring could be periodically refilled and the beam cooled, in a procedure that was synchronised with the time structure of the beam production. This is one of the ideas behind the proposed operation of the TSR storage ring with reaccelerated ISOL beams at ISOLDE \cite{TSR}.
In summary, there are some very powerful experimental devices already available and able to exploit the existing and newly developed radioactive beams. In addition, there are challenging and exciting developments underway, that will create even better experimental possibilities to exploit the beams from the next generation of facilities. Because of their unique selectivity, and because the states that are populated have a simple structure that should be especially amenable to a theoretical description and interpretation, transfer reactions will always be at the forefront of studies using radioactive beams to extend our knowledge of nuclear structure.
\begin{figure}[h]
\sidecaption
\includegraphics[width=.7\textwidth]{fig-gaspard-layout}
\caption{Preliminary design for a new array GASPARD \cite{GASPARD} which would represent a new generation of device for the approach using a compact particle array with coincident gamma-ray detection. Multilayer highly segmented particle detectors with enhanced particle identification properties, plus the ability to use cryogenic targets, are amongst its advantages. }
\label{fig:32}
\end{figure}
\begin{acknowledgement}
Thank you to all of my colleagues who have assisted with putting together, performing and analysing the results from the experiments described in this work: these include N.A. Orr, M. Labiche, R.C. Lemmon, C.N. Timis, B. Fernandez-Dominguez, J.S. Thomas, S.M. Brown, G.L. Wilson, C. Aa. Diget, the VAMOS group at GANIL and the TIGRESS group at TRIUMF, the nuclear theory group at Surrey, plus all of the other colleagues from the TIARA, MUST2, TIGRESS and Charissa collaborations: {\it we together weathered many a storm} \cite{Bob}.
\end{acknowledgement}
\section*{Introduction}
The study of Veronese ideals and Veronese algebras, and variations of them, has a long history. The motivation to study these algebraic objects comes from combinatorics and algebraic geometry. In this paper we aim at a unified treatment, and then focus on two particular classes of Veronese type ideals and algebras.
Let $K$ be a field and $S=K[x_1, \dots, x_n]$ be the polynomial ring in $n$ indeterminates over $K$.
We fix an integer $d$ and a sequence $\mathbf{a}=(a_1,\dots, a_n)$ of integers with $1\leq a_1\leq \dots \leq a_n \leq d$ and $d<\sum_{i=1}^{n}a_i$. The monomial ideal in $S$ generated by all the monomials of the form $x_1^{c_1}\cdots x_n^{c_n}$ with $c_1+\dots +c_n=d$ and with $c_i\leq a_i$ for all $1\leq i\leq n$ is called an {\it ideal of Veronese type}. We denote it by $I_{n,d, (a_1,\dots, a_n)}$. Properties of these ideals have been studied for example in \cite{HHV}, \cite{HRV} and \cite{assocprim}. The $K$-subalgebra of $S$ generated by the monomials $u\in G(I_{n,d,(a_1,\dots,a_n)})$ is called an {\it algebra of Veronese type}. Here, for a monomial ideal $I$, we denote by $G(I)$ the set of minimal monomial generators of $I$. De Negri and Hibi, in \cite{deNegri}, characterized those algebras of Veronese type which are Gorenstein.
While ideals of Veronese type are defined by bounding the exponents of the generators, the next family is defined by spreading the variables. In \cite{ehq}, the concept of a $t$-spread monomial was introduced. A monomial $x_{i_1}x_{i_2}\cdots x_{i_d}$ with $i_1\leq i_2\leq \dots \leq i_d$ is called {\it $t$-spread} if $i_j -i_{j-1}\geq t$ for $2\leq j\leq d$. We fix integers $d$ and $t$. The monomial ideal in $S$ generated by all $t$-spread monomials of degree $d$ is called the {\it t-spread Veronese ideal of degree $d$}. We denote it by $I_{n,d,t}$. For $t=1$ one obtains the squarefree Veronese ideals, which may also be viewed as the edge ideals of hypersimplices. Properties of these ideals were first studied in \cite{Sturmfels}, and then further in \cite{ehq}, \cite{bahareh}, \cite{JZ} and \cite{JKMR}. The $K$-subalgebra of $S$ generated by the monomials $v\in G(I_{n,d,t})$ is called the {\it $t$-spread Veronese algebra}. In \cite{dinu} the Gorenstein property of the $t$-spread Veronese algebra was analyzed.
In this paper we present a unified concept to deal with all the above cases and to study new such classes.
In Section~\ref{bc} this unified point of view is introduced by identifying multisets with monomials. For a given multiset $A=\{i_1\leq i_2\leq \cdots \leq i_d\}$, the corresponding monomial is $x_{i_1}x_{i_2}\cdots x_{i_d}$.
The multiset $A$ is called {\em $t$-spread}, if $i_{k+1}-i_k \geq t$ for all $k$. In that case a subset $B \subset A$ is called a {\em block} of size $r$ if $B=\{i_k\leq i_{k+1}\leq \ldots \leq i_{k+r-1}\}$ with $i_{l+1}-i_l=t$ for all $l$. Any multiset has a unique block decomposition into maximal blocks. With this notion at hand one defines different restricted classes of Veronese ideals. We denote by ${\mathcal A}_{n,d,t}$, the set of all $t$-spread multisets $A \in {\mathcal A}_{n,d}$, where ${\mathcal A}_{n,d}$ denotes multisets in $[n]$ with $d$ elements. For given positive integers $c$ and $k$, we consider the following two classes of multisets:
1. ${\mathcal A}_{c,(n,d,t)}$ is the set of all multisets $A \in {\mathcal A}_{n,d,t}$ such that $|B|\leq c$ for each block $B \subset A$, and we set $I_{c,(n,d,t)}= ({\bold x}_A : A \in {\mathcal A}_{c,(n,d,t)})$.
2. ${\mathcal A}_{(n,d,t),k}$ is the set of all multisets $A \in {\mathcal A}_{n,d,t}$ which have at most $k$ blocks in their unique block decomposition, and we set $I_{(n,d,t),k}= ({\bold x}_A : A \in {\mathcal A}_{(n,d,t),k})$.
The first family of multisets gives us the $c$-bounded $t$-spread Veronese ideals which we study in Section~\ref{ctVero}. It is shown that these ideals have linear quotients (Theorem~\ref{asbefore}). In Corollary~\ref{macgor}, we characterize in terms of $c,n,d$ and $t$ when $I_{c,(n,d,t)}$ is Cohen--Macaulay or Gorenstein.
Section~\ref{ctVerofiber} is devoted to the study of the powers of the ideal $I_{c,(n,d,t)}$ and its fiber cone. The basic tool for this task is provided in Theorem~\ref{sortable}, where it is shown that the minimal monomial set of generators of $I_{c,(n,d,t)}$ is a sortable set. This has several important consequences. First of all it follows in Corollary~\ref{koszul} that the fiber cone of $I_{c,(n,d,t)}$ is Koszul and a Cohen-Macaulay normal domain. Together with the fact that $I_{c,(n,d,t)}$ satisfies the so-called {\it $l$-exchange property} with respect to the sorting order, as shown in Theorem~\ref{thmx}, we conclude in Corollary~\ref{ReesCM} that the Rees algebra $\mathcal{R}(I_{c,(n,d,t)})$ is a normal Cohen-Macaulay domain. This has the consequence that $I_{c,(n,d,t)}$ satisfies the strong persistence property and that all powers of $I_{c,(n,d,t)}$ have linear resolution, see Corollary~\ref{strongpersistence}. These results generalize the corresponding statements for $t$-spread Veronese ideals in \cite{ehq} and follow a similar line of argument, though the $c$-bound condition requires substantial extra effort in the proofs.
In order to compute the analytic spread of $I_{c,(n,d,t)}$, which is the Krull dimension of its fiber cone, we present in Lemma~\ref{better} a result which is of quite general interest and generalizes \cite[Lemma 4.2]{JQ}. It is shown that for a monomial ideal $I$ with linear relations which is generated in a single degree and whose linear relation graph $\Gamma$ has $r$ vertices and $s$ connected components, the analytic spread of $I$ is given by the formula $\ell(I)=r-s+1$. Based on this lemma we succeed in Theorem~\ref{headache} to give a complete answer regarding the analytic spread of $I_{c,(n,d,t)}$. For its proof one introduces a partial order on the monomial generators of $I_{c,(n,d,t)}$, by saying that $u$ covers $v$ if there exist $i<j$ such that $x_j$ divides $u$ and $v=x_i(u/x_j)$. In Theorem~\ref{maxelem} it is shown that there exists a unique maximal element and a unique minimal element with respect to this partial order. The exponents of these elements as well as the integers $c,n,d$ and $t$ determine in a complicated but explicit way the analytic spread of $I_{c,(n,d,t)}$. Several examples demonstrate the result.
In the last section of the paper we study $t$-spread Veronese ideals of bounded block type, namely the ideals of the form $I_{(n,d,t),k}$. This is the $t$-spread monomial ideal in $n$ variables generated in degree $d$ with at most $k$ blocks. In particular, $I_{(n,d,0),k}$ is the ideal in $n$ variables generated in degree $d$ whose generators have support of cardinality at most $k$.
For a suitable integer $m$, the ideal $I_{(n,d,t),k}$ is obtained from $I_{(m,d,0),k}$ by an iterated application of the Kalai shifting operator. In this paper we concentrate on the study of the ideals $I_{(n,d,0),k}$. It can easily be seen that
$I_{(n,d,0),k} = \sum _{i_1 \leq i_2 \leq \ldots \leq i_k} (x_{i_1}, \ldots, x_{i_k})^d$.
These ideals are of height $n$, so that the regularity is determined by their socle degree. The regularity of $I_{(n,d,0),k}$ is given in Theorem~\ref{regblock}. At present we do not know the regularity of $I_{(n,d,t),k}$ when $t>0$, even though it is obtained by shifting from an ideal of the type $I_{(m,d,0),k}$. Indeed, since $I_{(m,d,0),k}$ is not a stable ideal, there is no obvious relationship between its regularity and that of its shifted ideals. In Proposition~\ref{k=2} we show that $I_{(n,d,0),k}^{n-1}$ is a power of the maximal ideal, so that this power of $I_{(n,d,0),k}$ has linear resolution. In general much smaller powers of $I_{(n,d,0),k}$ may have linear resolution. Knowing the smallest power $j$ such that $I_{(n,d,0),k}^j$ has linear resolution gives us the regularity of the fiber cone of $I_{(n,d,0),k}$, see Corollary~\ref{fibercone}. The examples that we could check with Singular \cite{Si} indicate that the fiber cone of $I_{(n,d,0),k}$ has quadratic relations. Among them is the well-known pinched Veronese, which is known to be even Koszul, see \cite{caviglia}.
\section{Basic concepts}\label{bc}
Let ${\mathcal A}_{n,d}$ be the set of all multisets $A \subset [n]$ with $|A|=d$. A multiset
\[
A=\{i_1\leq i_2\leq \ldots \leq i_d\}\subset [n]
\]
is called $t$-spread if $i_{k+1}-i_k \geq t$ for all $k$.
We denote by ${\mathcal A}_{n,d,t}$, the set of all $t$-spread $A \in {\mathcal A}_{n,d}$. Note that ${\mathcal A}_{n,d,0}={\mathcal A}_{n,d}$.
Let $A \in {\mathcal A}_{n,d,t}$. A subset $B \subset A$ is called a {\em block} of size $r$ if $B=\{i_k\leq i_{k+1}\leq \ldots \leq i_{k+r-1}\}$ with $i_{l+1}-i_l=t$ for all $l$. The block $B$ is called {\em maximal} if it is not properly contained in any other block of $A$. Note that every $A \in {\mathcal A}_{n,d,t}$ has a unique {\em block decomposition}. In other words, $A=B_1 \sqcup B_2\sqcup \ldots \sqcup B_k$, where each $B_i$ is a maximal block. The number of blocks in $A$ is called {\em the block type of $A$}. For example, $\{ 1,3,5,8,10,13,16\} \in {\mathcal A}_{16, 7, 2}$ has block decomposition
$ B_1=\{1,3,5\}$, $B_2= \{8,10\}$, $B_3 =\{13\}$ and $B_4=\{16\}.$
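The block decomposition is straightforward to compute. A minimal Python sketch (the function name is ours):

```python
def block_decomposition(A, t):
    """Maximal-block decomposition of a t-spread multiset A, given as a
    sorted list: a block is a maximal run in which consecutive entries
    differ by exactly t (for t = 0, a run of equal entries)."""
    blocks = [[A[0]]]
    for prev, cur in zip(A, A[1:]):
        if cur - prev == t:
            blocks[-1].append(cur)   # extend the current block
        else:
            blocks.append([cur])     # gap larger than t: start a new block
    return blocks
```

For the example above, `block_decomposition([1, 3, 5, 8, 10, 13, 16], 2)` returns `[[1, 3, 5], [8, 10], [13], [16]]`.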
In this paper, we consider the following restricted classes of multisets in ${\mathcal A}_{n,d,t}$. Given positive integers $c$ and $k$, we define:
\begin{enumerate}
\item ${\mathcal A}_{c,(n,d,t)}$ is the set of all multisets $A \in {\mathcal A}_{n,d,t}$ such that $|B|\leq c$ for each block $B \subset A$.
\item ${\mathcal A}_{(n,d,t),k}$ is the set of all multisets $A \in {\mathcal A}_{n,d,t}$ which have at most $k$ blocks in their unique block decomposition.
\end{enumerate}
Let $K$ be a field and $S=K[x_1, \ldots, x_n]$ be the polynomial ring in $n$ indeterminates. For a multiset $A=\{i_1\leq i_2\leq \ldots \leq i_r\}$ on $ [n]$, we define the monomial ${\bold x}_{A}=\prod_{j=1}^rx_{i_j}$.
Corresponding to the restricted multisets introduced in (1) and (2), we define the following monomial ideals
\begin{center}
$I_{c,(n,d,t)}= ({\bold x}_A : A \in {\mathcal A}_{c,(n,d,t)}) \quad \text{and} \quad I_{(n,d,t),k}= ({\bold x}_A : A \in {\mathcal A}_{(n,d,t),k}).$
\end{center}
\begin{Examples}
(i) The ideal $I_{c,(n,d,0)}$ is the ideal of Veronese type $I_{n,d,(c,\ldots, c)}$. In other words, the ideal $I_{c,(n,d,0)}$ is generated by all monomials of degree $d$ whose exponents are bounded by $c$.
(ii) The ideal $I_{1, (n,d,0)}$ is the squarefree Veronese ideal.
(iii) The ideal $I_{(n,d,0),k}$ is the ideal generated by all $u \in \Mon(S)$ of degree $d$ such that $|\supp(u)| \leq k$.
\end{Examples}
Here, as usual, the support of a monomial $u$ is the set $\{i : x_i\,|\,u\}$. We denote this set by $\supp(u)$.
In what follows we also need the concept of the multi-support of a monomial. Let $u=x_{i_1}x_{i_2}\dots x_{i_d}$ with $i_1\leq i_2\leq \dots \leq i_d$. We call the multiset $\{i_1, \dots, i_d\}$ the multi-support of $u$ and denote it by $\msupp(u)$.
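For small parameters the generating sets of these ideals can be enumerated directly, which is convenient for checking the examples above; a Python sketch (all names are ours):

```python
from itertools import combinations_with_replacement

def block_sizes(A, t):
    """Sizes of the maximal blocks of the sorted multiset A."""
    sizes = [1]
    for prev, cur in zip(A, A[1:]):
        if cur - prev == t:
            sizes[-1] += 1
        else:
            sizes.append(1)
    return sizes

def gens(n, d, t, c):
    """Multisets indexing G(I_{c,(n,d,t)}): t-spread multisets of size d
    in [n] with every maximal block of size at most c."""
    return [A for A in combinations_with_replacement(range(1, n + 1), d)
            if all(b - a >= t for a, b in zip(A, A[1:]))
            and max(block_sizes(A, t)) <= c]
```

For instance, `len(gens(4, 2, 0, 1))` is $\binom{4}{2}=6$, the number of generators of the squarefree Veronese ideal of Example (ii).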
\section{$c$-bounded $t$-spread Veronese ideals}\label{ctVero}
In this section we study properties of $c$-bounded $t$-spread Veronese ideals. Recall that $I_{c,(n,d,t)}$ denotes the $t$-spread monomial ideal in $n$ variables generated in degree $d$ whose blocks are $c$-bounded.
\begin{Theorem}\label{asbefore}
The ideal $I_{c,(n,d,t)}$ has linear quotients with respect to the lexicographic order.
\end{Theorem}
\begin{proof} For simplicity, we set $I=I_{c,(n,d,t)}$.
Let $G(I)=\{u_1, \dots, u_m\}$ ordered with respect to the lexicographic order. Let $r\leq m$ and $J=(u_1, \dots, u_{r-1})$. We want to show that $J: u_r$ is generated by variables. In this context, it is enough to prove that for all $1\leq k\leq r-1$ there exists $x_i \in J:u_r$ such that $x_i$ divides $u_k/\gcd(u_k, u_r)$. Let $u_k=x_{i_1}x_{i_2}\dots x_{i_d}$ with $i_1\leq i_2\leq \ldots \leq i_d$ and $u_r= x_{j_1}x_{j_2}\dots x_{j_d}$ with $j_1\leq j_2\leq \ldots \leq j_d$. Since $u_k >_{lex} u_r$ there exists $q$ with $1\leq q\leq d$ such that $i_1=j_1$, $\dots$, $i_{q-1}=j_{q-1}$ and $i_q<j_q$. Let $v=x_{i_q}(u_r/x_{j_q})$.
Then, $v=x_{j_1}x_{j_2}\dots x_{j_{q-1}}x_{i_{q}}x_{j_{q+1}}\dots x_{j_d}=x_{i_1}x_{i_2}\dots x_{i_{q-1}}x_{i_{q}}x_{j_{q+1}}\dots x_{j_d}$. Since $j_{q+1}-i_q> j_{q+1}-j_q\geq t$, we see that $v$ is $t$-spread and any maximal block of $v$ is contained in $\{i_1, \dots , i_{q-1},i_q\}$ or in $\{j_{q+1}, \dots, j_d\}$. Hence, since $u_k$ and $u_r$ are $c$-bounded, it follows that $v$ is also $c$-bounded. Thus, $v\in G(J)$ because $v\in G(I)$ and $v>_{\lex} u_r$. Therefore, $x_{i_q}\in J: u_r$ and $x_{i_q}$ divides $u_k/\gcd(u_k, u_r)$, as required.
\end{proof}
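For small parameters, Theorem~\ref{asbefore} can be checked by machine. For monomial ideals generated in a single degree, $x_i \in (u_1,\ldots,u_{r-1}):u_r$ exactly when $u_k/\gcd(u_k,u_r)=x_i$ for some $k<r$, and the colon ideal is generated by variables precisely when every quotient $u_k/\gcd(u_k,u_r)$ is divisible by such a variable. A Python sketch under these assumptions (all names are ours):

```python
from collections import Counter
from itertools import combinations_with_replacement

def gens_lex(n, d, t, c):
    """G(I_{c,(n,d,t)}) as sorted index tuples; ascending tuple order is
    the descending lexicographic order on the monomials."""
    def sizes(A):
        s = [1]
        for p, q in zip(A, A[1:]):
            if q - p == t:
                s[-1] += 1
            else:
                s.append(1)
        return s
    return sorted(A for A in combinations_with_replacement(range(1, n + 1), d)
                  if all(b - a >= t for a, b in zip(A, A[1:]))
                  and max(sizes(A)) <= c)

def has_linear_quotients(G):
    """Check that ((u_1,...,u_{r-1}) : u_r) is generated by variables."""
    for r in range(1, len(G)):
        ur = Counter(G[r])
        colon = [Counter(G[k]) - ur for k in range(r)]   # u_k / gcd(u_k, u_r)
        variables = {next(iter(w)) for w in colon if sum(w.values()) == 1}
        if not all(any(v in w for v in variables) for w in colon):
            return False
    return True
```

For example, `has_linear_quotients(gens_lex(6, 3, 1, 2))` returns `True`, in agreement with the theorem.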
Let $I=I_{c,(n,d,t)}$ and $u \in G(I)$, and let $B_1\sqcup\cdots \sqcup B_r$ be the block decomposition of $\msupp(u)$. For $j=1,\ldots,r-1$, the {\em $j$th gap interval} of $u$ is the interval
$[ \max(B_j)+t,\min(B_{j+1})-1]$ if $|B_j|<c$, and is the interval $[ \max(B_j)+t+1,\min(B_{j+1})-1]$ if $|B_j|=c$. Furthermore, we call $[1,\min(B_1)-1]$ the $0$th gap interval. Recall that, since $I$ has linear quotients with respect to the lexicographic order $u_1,\dots,u_m$ of its generators, $\set(u_r)$ denotes the set of all $i$ such that $x_i \in (u_1,\dots,u_{r-1}): u_r$.
\medskip
By using this terminology we have
\begin{Lemma}
\label{gap}
Let $u\in G(I)$. Then $i\in \set(u)$ if and only if $i$ belongs to some gap interval of $u$.
\end{Lemma}
\begin{proof}
Let $i $ be an integer that belongs to the $j$th gap interval of $u$, and let $k=\min(B_{j+1})$ and $v=x_i u / x_{k}$. It follows from the definition of the gap intervals that $v\in I$. Since $v >_{\lex} u$, it follows that $i\in \set(u)$.
Let $i\in \set(u)$. Then there exists $v=x_iu/x_{k}\in I$ for some $i<k$. In particular, $i < k \leq \max(u)=\max(B_r)$. We have the following:
\begin{enumerate}
\item $i \notin [\min(B_r), \max(B_{r})]$, otherwise $v$ is not $t$-spread if $t\geq 1$, and $i=k$ if $t=0$, which in both cases is a contradiction.
\item For any $1 \leq j \leq r-1$ with $|B_j|< c$, we have $i \notin[\min(B_j), \max(B_j)+t-1]$. Otherwise, if $t \geq 1$ then $v$ is not $t$-spread, and if $t=0$, then $[\min(B_j), \max(B_j)-1]=\emptyset$ because $\min(B_j)=\max(B_j)$, which in both cases is a contradiction.
\item For any $1 \leq j \leq r-1$ with $|B_j|= c$, we have $i \notin [\min(B_j), \max(B_j)+t]$. Otherwise, $v$ is not $c$-bounded if $i= \max(B_j)+t$. Furthermore, if $t\geq 1$ then $v$ is not $t$-spread for $\min(B_j) \leq i < \max(B_j)+t$, and if $t=0$, then again $v$ is not $c$-bounded, which again in all cases leads to a contradiction.
\end{enumerate}
The only possibility that remains is that $i$ belongs to a $j$th gap interval for some $0\leq j \leq r-1$.
\end{proof}
\begin{Examples}
(i) Let $u=x_1x_2x_3x_6x_{10} \in I_{3, (10, 5, 1)}$. Then $\set(u)=\{5,7,8,9\}$.\\
(ii) Let $u=x_1^3x_3^2x_5 \in I_{3, (5,6,0)}$. Then $\set(u)=\{2,3,4\}$.
\end{Examples}
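Lemma~\ref{gap} turns the computation of $\set(u)$ into a purely combinatorial one. A Python sketch implementing the gap intervals (names are ours):

```python
def gap_set(A, t, c):
    """set(u) for u = x_A in G(I_{c,(n,d,t)}), computed via Lemma (gap)
    as the union of the gap intervals of the sorted multiset A."""
    blocks = [[A[0]]]
    for prev, cur in zip(A, A[1:]):
        if cur - prev == t:
            blocks[-1].append(cur)
        else:
            blocks.append([cur])
    S = set(range(1, blocks[0][0]))            # 0th gap interval [1, min(B_1)-1]
    for Bj, Bnext in zip(blocks, blocks[1:]):
        lo = Bj[-1] + t + (1 if len(Bj) == c else 0)
        S |= set(range(lo, Bnext[0]))          # jth gap interval
    return S
```

Both examples above are reproduced: `gap_set([1, 2, 3, 6, 10], 1, 3)` gives `{5, 7, 8, 9}` and `gap_set([1, 1, 1, 3, 3, 5], 0, 3)` gives `{2, 3, 4}`.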
We define the {\em Kalai stretching operator} $\sigma$ as follows: for $A=\{i_1\leq i_2\leq \dots\leq i_d\}\subset [n]$, $A^{\sigma}$ is the multiset $\{i_1, i_2+1,\dots, i_d+d-1\}\subseteq [n+d-1]$. Inductively, for any $t\geq 1$, we define $A^{\sigma^{t}}=(A^{\sigma^{t-1}})^{\sigma}$. For $u=x_{A}$, we set $u^{\sigma^t}=x_{A^{\sigma^t}}$. We define the inverse map $\tau$ of $\sigma$ as follows: for $A=\{i_1< i_2< \dots< i_d\}\subset [n]$, $A^{\tau}$ is the set $\{i_1, i_2-1,\dots, i_d-(d-1)\}\subseteq [n-(d-1)]$.
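In coordinates, $\sigma$ and $\tau$ are simple index shifts; a Python sketch (names are ours):

```python
def sigma(A):
    """Kalai stretching: {i_1 <= ... <= i_d} -> {i_1, i_2 + 1, ..., i_d + d - 1}."""
    return [i + k for k, i in enumerate(A)]

def tau(A):
    """Inverse of sigma on strictly increasing sets."""
    return [i - k for k, i in enumerate(A)]
```

Each application of $\sigma$ raises the spread by one; for example the $2$-spread set $\{1,3,5,8,10,13,16\}$ is sent to the $3$-spread set $\{1,4,7,11,14,18,22\}$.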
\begin{Proposition}\label{shift} The stretching operator $\sigma$ has the following properties:
\begin{enumerate}[{\em(i)}]
\item $ v\in G(I_{c, (n+d-1, d, t+1)})$ if and only if there exists $u\in G(I_{c,(n,d,t)})$ such that $v= u^{\sigma}$;
\item Let $u\in G(I_{c,(n,d,t)})$, $u=x_{A}$ and $A=\{i_1\leq \dots \leq i_d\}$. Assume $\set(u)=\{a_1, \dots, a_r\}$. Then $\set(u^{\sigma})=\{b_1, \dots, b_r\}$, where $b_i=a_i+\max\{j: i_j \leq a_i\}$ for all $i$ and $\max \emptyset=0$.
\end{enumerate}
\end{Proposition}
\begin{proof}
(i) If $A$ is $t$-spread, it follows that $A^{\sigma}$ is $(t+1)$-spread. Moreover, if $B_1\sqcup \dots \sqcup B_r$ is the block decomposition of $A$, then the block decomposition of $A^{\sigma}$ is $C_1\sqcup\cdots \sqcup C_r$, where if $B_j=\{i_k \leq i_{k+1} \leq \ldots \leq {i_l}\}$ then $C_j= \{i_k+k-1 < i_{k+1}+k < \ldots < i_l+l-1\}$. This shows that $v\in I_{c, (n+d-1, d, t+1)}$ if $u\in I_{c,(n,d,t)}$. Similarly one shows that if $ v\in I_{c, (n+d-1, d, t+1)} $ and $u=v^{\tau}$, then $u\in I_{c, (n, d, t)}$. Since $u^{\sigma}=v$, this completes the proof.
(ii) From Lemma~\ref{gap}, we know that $\set(u)$ and $\set(u^{\sigma})$ are equal to the union of all gap intervals of $A$ and $A^{\sigma}$, respectively. From the construction of $A^{\sigma}$ in part (i), we see that $A$ and $A^{\sigma}$ have the same number of blocks and hence the same number of gap intervals. Note that the $0$th gap interval is the same for $A$ and $A^{\sigma}$ because $\min(B_1)=\min(C_1)$. Therefore, $a_i$ belongs to the $0$th gap interval of $u$ if and only if $a_i=b_i$ belongs to the $0$th gap interval of $u^{\sigma}$.
For $j>0$, by the definition of gap intervals, the element $a_i$ belongs to the $j$th gap interval of $u$ if and only if
\begin{equation}\label{1}
a_i \in \{i_l+p, i_l+p+1, \ldots, i_{l+1}-1\},
\end{equation}
where $i_l =\max(B_j)$ and $p=t$ if $|B_j|<c$ and $p=t+1$ if $|B_j|=c$. Note that $\max\{j: i_j \leq a_i\}=l$. Furthermore, the $j$th gap interval of $u^{\sigma}$ is
\[
\{i_l+ (l-1)+q, i_l+(l-1)+q+1, \ldots, i_{l+1}+l-1\},
\]
where $i_l+l-1= \max(C_j)$ and $q=t+1$ if $|C_j|<c$ and $q=t+2$ if $|C_j|=c$. Since $|B_j|=|C_j|$, we have $q=p+1$. So we may write the $j$th gap interval of $u^{\sigma}$ as
\begin{equation}\label{2}
\{i_l+p+l, i_l+p+1+l, \ldots, i_{l+1}-1+l\}.
\end{equation}
Therefore, for any $j>0$, $a_i$ is in $j$th gap interval of $u$ if and only if $b_i=a_i+l$ is in $j$th gap interval of $u^{\sigma}$.
\end{proof}
\begin{Corollary}\label{burned}
$\beta_{i}(I_{c,(n,d,t)})= \beta_{i}(I_{c,(n-(d-1)t,d,0)})$ for all $i$.
\end{Corollary}
\begin{proof}
We use the following general fact (see \cite[Lemma 1.5]{JT}): Let $I$ be a monomial ideal generated in degree $d$ with linear quotients. Then
\begin{equation}\label{eq}
\beta_{i}(I)=|\{\alpha \subset \set(u): u\in G(I) \text{ and } |\alpha|=i\}|.
\end{equation}
Now, let $G(I_{c,(n-(d-1)t,d,0)})=\{u_1,\dots, u_r\}$, then $G(I_{c,(n,d,t)})=\{u_1^{\sigma^t}, \dots, u_r^{\sigma^t}\}$, by Proposition \ref{shift}(i), and by Proposition \ref{shift}(ii) we have $|\set(u_j)|=|\set(u_j^{\sigma^t})|$ for all $j$. Thus, the desired conclusion follows from \eqref{eq}.
\end{proof}
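Combining \eqref{eq} with Lemma~\ref{gap}, the Betti numbers can be computed as $\beta_i(I)=\sum_{u\in G(I)}\binom{|\set(u)|}{i}$, which allows Corollary~\ref{burned} to be checked directly in small cases. A self-contained Python sketch (names are ours):

```python
from itertools import combinations_with_replacement
from math import comb

def blocks(A, t):
    out = [[A[0]]]
    for prev, cur in zip(A, A[1:]):
        if cur - prev == t:
            out[-1].append(cur)
        else:
            out.append([cur])
    return out

def betti(n, d, t, c):
    """Betti numbers of I_{c,(n,d,t)}, computed as
    beta_i = sum over generators u of binom(|set(u)|, i),
    with set(u) read off from the gap intervals of Lemma (gap)."""
    sizes = []
    for A in combinations_with_replacement(range(1, n + 1), d):
        if any(b - a < t for a, b in zip(A, A[1:])):
            continue
        B = blocks(A, t)
        if max(len(Bj) for Bj in B) > c:
            continue
        s = B[0][0] - 1                       # 0th gap interval
        for Bj, Bnext in zip(B, B[1:]):
            lo = Bj[-1] + t + (1 if len(Bj) == c else 0)
            s += max(0, Bnext[0] - lo)        # jth gap interval
        sizes.append(s)
    top = max(sizes, default=0)
    return [sum(comb(s, i) for s in sizes) for i in range(top + 1)]
```

For instance, `betti(6, 3, 1, 2)` and `betti(4, 3, 0, 2)` agree, as the corollary predicts for $n-(d-1)t=4$.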
\begin{Lemma}\label{height}
Let $I\subset S$ be a graded ideal in $S=K[x_1, \dots, x_n]$ and $J \subset T$ a graded ideal in $T=K[x_1,\dots, x_{n^{\prime}}]$ such that $\beta_{i,j}(I)=\beta_{i,j}(J)$ for all $i$ and $j$. Then
\[
\height(I)=\height(J).
\]
\end{Lemma}
\begin{proof}
Denote by $d= \dim(S/I)$ and by $d^{\prime}=\dim(T/J)$. Since $I$ and $J$ have the same graded Betti numbers, we have that $\Hilb(S/I)=\frac{P(t)}{(1-t)^n}=\frac{Q(t)}{(1-t)^{d}}$, $Q(1)\neq 0$, and $\Hilb(T/J)=\frac{P(t)}{(1-t)^{n^{\prime}}}=\frac{Q^{\prime}(t)}{(1-t)^{d^{\prime}}}$, $Q^{\prime}(1)\neq 0$. Thus,
$P(t)=Q(t)(1-t)^{n-d}=Q^{\prime}(t)(1-t)^{n^{\prime}-d^{\prime}}$. Hence, $n-d= n^{\prime}-d^{\prime}$, which leads to the desired conclusion.
\end{proof}
\begin{Proposition}
\label{height}
{\em(i)} $\height(I_{c,(n,d,t)})= \height(I_{c,(n-(d-1)t,d,0)}).$
{\em (ii)} $\height(I_{c,(n,d,t)})=n-(k+t(d-1))$, where $k=\lfloor \frac{d-1}{c}\rfloor$ and $r=d-kc$.
{\em (iii)} $n-(k+t(d-1))=\max\{\min(u) \:\; u \in G(I_{c,(n,d,t)})\}$.
\end{Proposition}
\begin{proof}
(i) By Corollary \ref{burned} the ideals $I_{c,(n,d,t)}$ and $I_{c,(n-(d-1)t,d,0)}$ have the same Betti numbers and by Theorem~\ref{asbefore} they both have $d$-linear resolution. Therefore, their graded Betti numbers are the same. Thus, by applying Lemma \ref{height}, we conclude that the two ideals have the same height. Part (ii) follows from part (i) and \cite[Proposition 3.1]{assocprim}. For part (iii), it is easy to see that the smallest monomial generator of $I_{c,(n,d,t)}$ with respect to the lexicographic order is the monomial $$x_{n-k((t+1)+(c-1)t)-(r-1)t} \cdots x_{n-(c-1)t-(t+1)}x_{n-(c-1)t}\cdots x_{n-t}x_n.$$ Thus we obtain the desired conclusion.
\end{proof}
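Parts (ii) and (iii) can be checked by direct enumeration for small parameters. A self-contained Python sketch (names are ours):

```python
from itertools import combinations_with_replacement

def max_min_generator(n, d, t, c):
    """max{ min(u) : u in G(I_{c,(n,d,t)}) }, by enumerating all
    t-spread multisets of size d in [n] with block sizes at most c."""
    best = 0
    for A in combinations_with_replacement(range(1, n + 1), d):
        if any(b - a < t for a, b in zip(A, A[1:])):
            continue
        sizes = [1]
        for prev, cur in zip(A, A[1:]):
            if cur - prev == t:
                sizes[-1] += 1
            else:
                sizes.append(1)
        if max(sizes) <= c:
            best = max(best, A[0])
    return best

def height_formula(n, d, t, c):
    """n - (k + t(d-1)) with k = floor((d-1)/c), as in part (ii)."""
    return n - ((d - 1) // c + t * (d - 1))
```

For example, for $n=6$, $d=3$, $t=1$, $c=2$ both functions give $3$, realized by the generator $x_3x_5x_6$.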
\begin{Corollary}
\label{macgor}
{\em (i)} $I_{c,(n,d,t)}$ is Cohen-Macaulay if and only if $n\leq t(d-1)+\lfloor\frac{d-1}{c}\rfloor+1$, or $d \leq c$, or $c=1$.
{\em (ii)} $I_{c,(n,d,t)}$ is Gorenstein if and only if $n\leq t(d-1)+\lfloor\frac{d-1}{c}\rfloor+1$ or $d=1$.
\end{Corollary}
\begin{proof}
(i) By Corollary \ref{burned}, the ideals $I_{c,(n,d,t)}$ and $I=I_{c,(n-(d-1)t,d,0)}$ have the same graded Betti numbers and hence the same projective dimension, and by Proposition \ref{height} they have the same height. It follows that $I_{c,(n,d,t)}$ is Cohen-Macaulay if and only if $I$ is Cohen-Macaulay. The desired conclusion now follows from \cite[Theorem 4.2]{CM}, which says that the Veronese type ideal $I$ is Cohen-Macaulay if and only if one of the following holds: $I$ is a principal ideal, so that $\height(I)\leq 1$, which by Proposition \ref{height} means that $n\leq t(d-1)+\lfloor\frac{d-1}{c}\rfloor+1$; $I$ is a power of the maximal ideal, which is the case if and only if $d\leq c$; or $I$ is squarefree Veronese, which is the case if and only if $c=1$.
(ii) Since the resolution of $I_{c,(n,d,t)}$ is $d$-linear and since for Gorenstein rings the resolution of $S/I_{c,(n,d,t)}$ is self-dual, it follows that $I_{c,(n,d,t)}$ is Gorenstein if and only if $d=1$ or $I_{c,(n,d,t)}$ is principal, i.e., $n\leq t(d-1)+\lfloor\frac{d-1}{c}\rfloor+1$.
\end{proof}
\section{On the powers and the fiber cone of $c$-bounded $t$-spread Veronese ideals}\label{ctVerofiber}
We start this section by defining sorted sets of monomials, a notion due to Sturmfels \cite{Sturmfels}. The relations of toric rings generated by sortable sets are well understood.
Let $S_d$ be the $K$-vector space generated by all the monomials of degree $d$ in $S$ and let $v, w \in S_d$. We write $vw=x_{i_1}x_{i_2}\cdots x_{i_{2d}}$ with $i_1\leq i_2 \leq \dots \leq i_{2d}$. The {\it sorting} of the pair $(v,w)$ is the pair of monomials $(v^{\prime}, w^{\prime})$ with
\begin{center}
$v^{\prime}=x_{i_1}x_{i_3}\dots x_{i_{2d-1}}, w^{\prime}= x_{i_2}x_{i_4}\dots x_{i_{2d}}.$
\end{center}
The map
\begin{center}
$\sort: S_d\times S_d \rightarrow S_d\times S_d, (u,v)\mapsto (u^{\prime}, v^{\prime})$
\end{center}
is called the {\it sorting operator}. A pair $(u,v)$ is {\it sorted} if $\sort(u,v)=(u,v)$. Notice that if $(u,v)$ is sorted, then $u>_{lex}v$ and $\sort(u,v)=\sort(v,u)$. If $u_1=x_{i_1}\dots x_{i_d}, u_2=x_{j_1}\dots x_{j_d}, \dots, u_{r}=x_{l_1}\dots x_{l_d}$, then the $r$-tuple $(u_1, \dots, u_r)$ is sorted if and only if
\begin{eqnarray}
\label{sequence}
i_1\leq j_1\leq \dots\leq l_1\leq i_2\leq j_2\leq \dots \leq l_2\leq \dots \leq i_d\leq j_d\leq \dots \leq l_d,
\end{eqnarray}
which means that $(u_i, u_j)$ is sorted for all $i<j$.
For any $r$-tuple of monomials $(u_1, \dots, u_r)$, $u_i \in S_d$, for all $i$, there exists a unique sorted $r$-tuple $(v_1,\dots, v_r)$ such that $u_1\cdots u_r=v_1\cdots v_r$, see \cite[Theorem 6.12]{EH}. We say that the product $u_1\cdots u_r$ is sorted if $(u_1, \dots, u_r)=(v_1, \dots, v_r)$.
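The sorting operator is easy to state algorithmically: merge the two index sequences and split the result into its odd and even positions. A hypothetical Python sketch (the function name is ours; a monomial is encoded as the weakly increasing tuple of its variable indices):

```python
def sort_pair(u, v):
    """Sorting operator: u and v are degree-d monomials encoded as
    weakly increasing tuples of variable indices.  The merged index
    sequence i_1 <= i_2 <= ... <= i_{2d} is split into its odd and
    even positions."""
    merged = sorted(u + v)
    return tuple(merged[0::2]), tuple(merged[1::2])

# (x1*x4, x2*x3) is unsorted; its sorting is (x1*x3, x2*x4).
assert sort_pair((1, 4), (2, 3)) == ((1, 3), (2, 4))
# A pair (u, v) is sorted precisely when sort_pair(u, v) == (u, v).
assert sort_pair((1, 3), (2, 4)) == ((1, 3), (2, 4))
```

Note that the merged multiset of indices is preserved, i.e., $u^{\prime}v^{\prime}=uv$.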
\begin{Theorem}
\label{sortable}
The set $G(I_{c,(n,d,t)})$ is sortable.
\end{Theorem}
\begin{proof}
Let $u,v\in G(I_{c,(n,d,t)})$. Then $u=\mathbf{x}_{U}$ and $v=\mathbf{x}_{V}$ for some $U,V \in{\mathcal A}_{c,(n,d,t)}$. Let $uv=x_{i_1}x_{i_2}\cdots x_{i_{2d}}$ such that $i_1\leq i_2 \leq \ldots \leq i_{2d}$. Moreover, let $v^\prime=x_{i_1}x_{i_3}\cdots x_{i_{2d-1}}$ and $u^\prime =x_{i_{2}}x_{i_4}\cdots x_{i_{2d}}$. We need to show that $u', v' \in G(I_{c,(n,d,t)})$.
If $t=0$, then note that $\deg_{x_i}(u')$ and $\deg_{x_i}(v')$ are at most $\left\lceil \frac{\deg_{x_i} (u)+ \deg_{x_i} (v)}{2}\right\rceil$. Since $\deg_{x_i} (u), \deg_{x_i} (v) \leq c$, it follows that $\deg_{x_i}(u')$ and $\deg_{x_i}(v')$ are also bounded by $c$. This shows that $u', v' \in G(I_{c,(n,d,0)})$.
Let $t \geq 1$. It follows from \cite[Proposition 3.1]{ehq} that $u'$ and $v'$ are $t$-spread monomials. In particular, $u'$ and $v'$ are squarefree. Now we will show that if $B \subset \{i_2< i_4< \ldots< i_{2d}\}$ is a block then $|B| \leq c$. The case when $B \subset \{i_1< i_3< \ldots< i_{2d-1}\}$ follows by a similar argument.
Let $B=\{i_{2l} < i_{2(l+1)}< \ldots <i_{2(l+k)}\}$ for some $1\leq l < l+k \leq d$. Let $A=\{i_{2l}\leq i_{2l+1}\leq i_{2l+2}\leq i_{2l+3}\leq \ldots \leq i_{2(l+k)-1}\leq i_{2(l+k)}\}$. We have either $i_{2l} \in U$ or $i_{2l} \notin U$. Assume that $i_{2l} \in U$. Then we have two possibilities, namely, either $i_{2l+1} \in U$ or $i_{2l+1} \notin U$. If $i_{2l+1} \in U$, then $i_{2l+1}-i_{2l}=t$ and $i_{2l+1}=i_{2l+2}$, because $i_{2l+2}-i_{2l}=t$ and $i_{2l} \leq i_{2l+1}\leq i_{2l+2}$. Therefore, $i_{2l+2} \in U$.
On the other hand, if $i_{2l+1} \notin U$, then $i_{2l}< i_{2l+1}$ and $i_{2l+1} \in V$. Again, in this case $i_{2l+2} \in U$, because $i_{2l+2} \notin V$: indeed, $i_{2l} < i_{2l+1} \leq i_{2l+2}$, hence $i_{2l+2} -i_{2l+1} < t$, while $V$ is $t$-spread. We see that in both cases, if $i_{2l} \in U$, then $i_{2l+2} \in U$. Continuing in the same way, we see that $B \subset U$ and hence $|B| \leq c$.
Now assume that $i_{2l} \notin U$. Then $i_{2l} \in V$ and by following the same argument as above, we conclude that $B \subset V$ and $|B| \leq c$.
\end{proof}
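For small parameters the sortability of $G(I_{c,(n,d,t)})$ can be confirmed by brute force. The following hypothetical Python sketch encodes a generator as the increasing tuple of its variable indices (our reading of the definitions, valid for $t\geq 1$; the case $t=0$ would require multisets), and checks closure under the sorting operator; the function names are ours.

```python
from itertools import combinations

def generators(n, d, c, t):
    """Index sets of the generators of I_{c,(n,d,t)} for t >= 1:
    d-subsets of [n] whose consecutive gaps are at least t and whose
    blocks (maximal runs with gaps exactly t) have size at most c."""
    gens = []
    for A in combinations(range(1, n + 1), d):
        if any(A[i + 1] - A[i] < t for i in range(d - 1)):
            continue                      # not t-spread
        block, bounded = 1, True
        for i in range(d - 1):
            block = block + 1 if A[i + 1] - A[i] == t else 1
            bounded = bounded and block <= c
        if bounded:
            gens.append(A)
    return gens

def is_sortable(gens):
    """Check closure of a set of squarefree monomials under sorting."""
    gset = set(gens)
    return all(
        tuple(sorted(u + v)[0::2]) in gset and tuple(sorted(u + v)[1::2]) in gset
        for u in gens for v in gens)

assert len(generators(4, 3, 2, 1)) == 2     # only x1x2x4 and x1x3x4
assert is_sortable(generators(12, 4, 3, 3))
assert is_sortable(generators(9, 6, 2, 1))
```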
Let $I$ be an equigenerated monomial ideal. We consider the polynomial ring $T=K[\{t_u \:\; u\in G(I)\}]$ in $|G(I)|$ variables. We denote by $K[I]$ the $K$-algebra which is generated by the set of monomials $G(I)$. The kernel of the $K$-algebra homomorphism
\begin{center}
$\varphi : T \rightarrow K[I]$, $\varphi(t_u)=u,$ for $u\in G(I)$
\end{center}
is a binomial ideal and is called the defining ideal of $K[I]$.
\begin{Corollary}
\label{koszul}
$K[I_{c,(n,d,t)}]$ is Koszul and a Cohen-Macaulay normal domain.
\end{Corollary}
\begin{proof}
We use the fact that the sorting relations form a quadratic Gr{\"o}bner basis of the defining ideal $J$ of $K[I_{c,(n,d,t)}]$ with respect to the sorting order, see
\cite[Theorem 6.15]{EH} and \cite[Theorem 6.16]{EH}. Then, by a result of Fr\"oberg \cite{Froberg}, the algebra is Koszul. Since the initial ideal of $J$ with respect to the sorting order is squarefree, it follows from \cite{Sturmfels}, see also \cite[Corollary 4.26]{binomialideals}, that $K[I_{c,(n,d,t)}]$ is normal, and by \cite{Hochster} this implies that it is Cohen-Macaulay.
\end{proof}
In order to study the powers of the ideals $I_{c,(n,d,t)}$, we have to understand the structure of the Rees algebra of such ideals. To do this, we use the so-called {\it $l$-exchange property}, see \cite{HHV} or \cite[Section 6.4]{EH}.
Let $\mathcal{R}(I)=\bigoplus_{j\geq 0}I^{j}t^j\subset S[t]$ be the Rees algebra of an equigenerated monomial ideal $I$. Since $I$ is equigenerated, the fiber $\mathcal{R}(I)/\mathfrak{m}\mathcal{R}(I)$ of the Rees algebra $\mathcal{R}(I)$ is isomorphic to the toric $K$-algebra $K[I]$.
The Rees ring $\mathcal{R}(I)$ has the presentation
\begin{center}
$\psi : R=S[\{t_u \:\; u\in G(I)\}] \rightarrow \mathcal{R}(I)$,
\end{center}
defined by
\begin{center}
$x_i \mapsto x_i$, for $1\leq i\leq n$ and $t_v \mapsto vt$, for $v\in G(I)$.
\end{center}
Let $<$ be a monomial order on $T$. We call a monomial $t_{u_1}\cdots t_{u_N}\in T$ {\it standard with respect to $<$}, if it does not belong to the initial ideal of the defining ideal $J\subset T$ of $K[I]$.
\begin{Definition}\cite{HHV} \label{lexchange}
A monomial ideal $I\subset S$ satisfies the {\it $l$-exchange property} with respect to the monomial order $<$ on $T$ if the following condition is satisfied: let $t_{u_1}\cdots t_{u_N}$ and $t_{v_1}\cdots t_{v_N}$ be two standard monomials in $T$ of degree $N$ with respect to $<$ such that:
\begin{enumerate}[{(i)}]
\item $\deg_{x_i}u_1\cdots u_N=\deg_{x_i}v_1\cdots v_N$, for $1\leq i \leq q-1$ with $q\leq n-1$.
\item $\deg_{x_q}u_1\cdots u_N<\deg_{x_q}v_1\cdots v_N$.
\end{enumerate}
Then there exist integers $\delta$ and $j$ with $1\leq \delta\leq N$, $q<j\leq n$ and $j\in \supp(u_{\delta})$ such that $x_q u_{\delta}/x_j \in I$.
\end{Definition}
\begin{Theorem}\label{thmx}
$I_{c,(n,d,t)}$ satisfies the $l$-exchange property with respect to the sorting order $<_{\sort}$.
\end{Theorem}
\begin{proof}
We use similar arguments as in \cite[Proposition 2.2]{bahareh}. Let $t_{u_1}\cdots t_{u_N}, t_{v_1}\cdots t_{v_N}\in K[\{t_u \:\; u\in G(I)\}]$ be two standard monomials of degree $N$ satisfying (i) and (ii) of Definition \ref{lexchange}. Since $t_{u_1}\cdots t_{u_N}$ and $t_{v_1}\cdots t_{v_N}$ do not belong to the initial ideal of $J$ with respect to the sorting order, the products $u_1\cdots u_N$ and $v_1\cdots v_N$ are sorted. Condition (i) together with (\ref{sequence}) implies that
\begin{eqnarray}\label{Condition1}
\text{$\deg_{x_i}(u_{\gamma})=\deg_{x_i}(v_{\gamma})$, for all $1\leq \gamma\leq N$ and $1\leq i \leq q-1$,}
\end{eqnarray}
and Condition (ii) implies that there exists $1\leq \delta\leq N$ such that
\begin{eqnarray}\label{Condition2}
\text{$\deg_{x_q}(u_{\delta})<\deg_{x_q}(v_{\delta})$.}
\end{eqnarray}
Let $u_{\delta}=x_{j_1}\cdots x_{j_d}$, $v_{\delta}=x_{l_1}\cdots x_{l_d}$. Then from (\ref{Condition1}) and (\ref{Condition2}), we see that there exists $k$ such that $j_1=l_1, \dots, j_{k-1}=l_{k-1}$ and $j_k>l_k$. Then $l_k=q$. We need to show that $x_qu_{\delta}/x_j \in I_{c,(n,d,t)}$, for some $j\in \supp(u_{\delta})$ with $q<j$.
Take $j=j_k$. Then $x_qu_{\delta}/x_j= x_{j_1}\cdots x_{j_{k-1}}x_q x_{j_{k+1}}\cdots x_{j_d}=x_{l_1}\cdots x_{l_{k-1}}x_{q}x_{j_{k+1}}\cdots x_{j_d}$. To see that $w=x_qu_{\delta}/x_j$ is $t$-spread, we only need to check that $j_{k+1}-q \geq t$, since $q-j_{k-1}=l_k-l_{k-1}\geq t$ because $v_{\delta}$ is $t$-spread. Indeed, $j_{k+1}-q =j_{k+1}-l_k > j_{k+1}-j_k \geq t$. Moreover, $w$ is $c$-bounded because $\{l_1\leq l_2 \leq \ldots \leq q\}\subset \msupp(v_{\delta})$ and $\{j_{k+1} \leq \ldots \leq j_d\} \subset \msupp(u_{\delta})$ are $c$-bounded, and $j_{k+1}- q >t$.
\end{proof}
For the sortable ideal $I_{c,(n,d,t)}$ we consider the sorting order $<_{\sort}$ on $T$ and the lexicographic order $<_{\lex}$ on $S$. Let $<$ be the monomial order on $R$ defined as follows: if $s_1, s_2 \in S$ and $t_1, t_2 \in T$ are monomials, then $s_1t_1>s_2t_2$ if $s_1>_{\lex}s_2$, or $s_1=s_2$ and $t_1>_{\sort}t_2$.
\medskip
Let $P\subset R$ be the defining ideal of $\mathcal{R}(I_{c,(n,d,t)})$. The following result shows that $P$ has a quadratic Gr\"obner basis and the initial ideal of $P$ is squarefree.
\begin{Theorem} \label{GB}
The reduced Gr\"obner basis of the toric ideal $P$ with respect to the order $<$ defined above consists of the binomials $t_ut_v- t_{u^{\prime}}t_{v^{\prime}}$, where $(u, v)$ is unsorted and $(u^{\prime}, v^{\prime})=\sort(u,v)$, together with the binomials of the form $x_i t_u-x_j t_v$, where $i<j$, $x_iu=x_jv$, and $j$ is the largest integer for which $x_i u/x_j \in G(I)$.
\end{Theorem}
\begin{proof}
The result follows from Theorem \ref{thmx} and \cite[Theorem 5.1]{HHV}.
\end{proof}
We have the following consequences:
\begin{Corollary} \label{ReesCM}
The Rees algebra $\mathcal{R}(I_{c,(n,d,t)})$ is a normal Cohen-Macaulay domain.
\end{Corollary}
\begin{Corollary}
\label{strongpersistence}
$I_{c,(n,d,t)}$ satisfies the strong persistence property and all powers of $I_{c,(n,d,t)}$ have linear resolution.
\end{Corollary}
\begin{proof}
The first part follows from Theorem \ref{GB} and \cite[Corollary 1.6]{JQ} and the second part follows from Theorem \ref{GB} and \cite[Theorem 10.1.9]{monomialideals}.
\end{proof}
\section{The analytic spread of $c$-bounded $t$-spread Veronese ideals}
By Corollary~\ref{ReesCM}, the Rees algebra of $I_{c,(n,d,t)}$ is Cohen-Macaulay. Therefore $\lim_{s\to\infty}\depth S/I^s=n-\ell(I_{c,(n,d,t)})$, see \cite[Proposition 3.3]{EisenbudHuneke}. Here for a graded ideal $I\subset S$, we denote by $\ell(I)$ the analytic spread of $I$, which by definition is the Krull dimension of $\mathcal{R}(I)/{\frk m} \mathcal{R}(I)$, where ${\frk m}$ denotes the graded maximal ideal of $S$. The aim of this section is to give an explicit formula for $\ell(I_{c,(n,d,t)})$.
\begin{Definition}
Let $G(I)=\{u_1, \dots, u_m\}$. {\em The linear relation graph} $\Gamma$ of $I$ is the graph with the edge set
\[
E(\Gamma)=\{\{i,j\} \:\; \text{ there exist } u_k, u_l \in G(I) \text{ such that } x_i u_k=x_j u_l\}.
\]
\end{Definition}
\begin{Definition}
{\em The analytic spread} of an ideal $I$, denoted $\ell(I)$, is the Krull dimension of the fiber ring $\mathcal{R}(I)/\mathfrak{m}\mathcal{R}(I)$.
\end{Definition}
To compute the analytic spread of $I_{c, (n,d,t)}$, which is the Krull dimension of $K[I_{c, (n,d,t)}]$, we use the following result, which is a generalization of \cite[Lemma~4.2]{JQ}.
\begin{Lemma}
\label{better}
Let $I$ be a monomial ideal generated in a single degree which has linear relations, and suppose that its linear relation graph $\Gamma$ has $r$ vertices and $s$ connected components. Then
\[
\ell(I)=r-s+1.
\]
\end{Lemma}
\begin{proof}
Let $G(I)=\{u_1, \dots, u_q\}$ with $u_i=\mathbf{x}^{a_i}$ for all $i$, and $\mathcal{M}=\{a_1, \dots, a_q\}$ be the set of exponent vectors of $G(I)$. We denote by $V$ the $\mathbb{Q}$-vector space generated by the vectors $a_1, \dots, a_q$. It is clear that $\ell(I)$ is the dimension of $V$.
Let $W \subset V$ be the $\mathbb{Q}$-vector space spanned by all the vectors $a_k-a_l$ with $a_k, a_l \in \mathcal{M}$ such that $a_k-a_l= \pm \varepsilon _{ij}=\pm (\varepsilon_i -\varepsilon_j)$, for some $i<j$. In \cite[Lemma 4.2]{JQ}, it was shown that $\dim W=r-s$.
We show that $a_i-a_j \in W$ for all $i$ and $j$. We may assume $i\neq j$. Since $I$ is multigraded, there exists an exact sequence of multigraded modules
\[
0\To U\To \Dirsum_{i=1}^q Se_i \To I \To 0
\]
with $e_i \mapsto u_i$ for $i=1,\dots,q$, where the multidegree of $e_i$, denoted $\Deg(e_i)$, is equal to $a_i$.
Since $I$ has linear relations, $U$ is generated by homogeneous relations of the form $r=x_ke_{\alpha}-x_le_{\beta}$. Here $\Deg(r)=\varepsilon_k+a_{\alpha}=\varepsilon_l+a_{\beta}$. Hence $a_{\alpha}-a_{\beta}\in W$ for the generating relations of $I$. Note that
\[
r_{ij}=\frac{u_j}{\gcd(u_i, u_j)}e_i-\frac{u_i}{\gcd(u_i, u_j)}e_j \in U.
\]
Thus, we can write
\[
r_{ij}=\sum_{t=1}^m \widetilde{w}_{t}(x_{k_{t}}e_{\alpha_{t}}-x_{l_{t}}e_{\beta_{t}}),
\]
with $\Deg(\widetilde{w}_{t}(x_{k_{t}}e_{\alpha_{t}}-x_{l_{t}}e_{\beta_{t}}))= \Deg(r_{ij})$ for all $t$.
Thus
\[
r_{ij}=\sum_{t=1}^m (v_{t}e_{\alpha_{t}}-w_{t}e_{\beta_{t}}),
\]
where $v_t= \widetilde{w}_tx_{k_t}$ and $w_t=\widetilde{w}_tx_{l_t}$.
Moreover, all summands are homogeneous of multidegree $\Deg(r_{ij})$.
Since $r_{ij}$ contains only the basis elements $e_i$ and $e_j$, the other basis elements in this sum must cancel each other. Therefore, we can write this sum as
\[
r_{ij}=\sum_{t=1}^{m-1} (v_{t}e_{\alpha_{t}}-w_{t}e_{\alpha_{t+1}}),
\]
where $\alpha_1=i$ and $\alpha_m=j$. Since $w_{t}e_{\alpha_{t+1}}$ has the same multidegree as $v_{t+1}e_{\alpha_{t+1}}$, it follows that $w_t=v_{t+1}$, for all $t=1,\dots, m-1$. Thus
\[
r_{ij}=\sum_{t=1}^{m-1} (v_{t}e_{\alpha_{t}}-v_{t+1}e_{\alpha_{t+1}}).
\]
Let $b_t=\Deg v_t$ for all $t$. Then
\[
b_{t+1}-b_t= \Deg(e_{\alpha_t})-\Deg(e_{\alpha_{t+1}})=a_{\alpha_t}-a_{\alpha_{t+1}} \in W,
\]
since each summand $v_{t}e_{\alpha_{t}}-v_{t+1}e_{\alpha_{t+1}}$ is a multiple of one of the generating linear relations.
Therefore,
\[
a_i-a_j=b_m-b_1= \sum_{t=1}^{m-1} (b_{t+1}-b_t) \in W.
\]
Since the entries of each $a_i$ sum to the common degree of the generators of $I$, which is positive, while the entries of any element of $W$ sum to $0$, we have $a_1\notin W$. Since $a_i-a_1\in W$ for all $i$, it follows that $V=W+\mathbb{Q}a_1$, which leads to
\[
\ell(I)=\dim V= \dim W+1=r-s+1.
\]
\end{proof}
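Lemma~\ref{better} is easy to experiment with. The following hypothetical Python sketch (helper names are ours; it assumes the hypotheses of the lemma hold for the given generators) builds the linear relation graph from exponent vectors and evaluates $r-s+1$.

```python
from itertools import combinations

def linear_relation_graph(gens):
    """Vertices and edges of the linear relation graph: gens is a list
    of exponent vectors; {i,j} is an edge when x_i * u_k = x_j * u_l
    for two generators, i.e. the exponent vectors differ by e_j - e_i."""
    edges = set()
    for u, v in combinations(gens, 2):
        diff = [a - b for a, b in zip(u, v)]
        pos = [i for i, x in enumerate(diff) if x > 0]
        neg = [i for i, x in enumerate(diff) if x < 0]
        if (len(pos), len(neg)) == (1, 1) and diff[pos[0]] == 1 and diff[neg[0]] == -1:
            edges.add(frozenset((pos[0] + 1, neg[0] + 1)))
    vertices = set().union(*edges) if edges else set()
    return vertices, edges

def analytic_spread(gens):
    """r - s + 1, as in the lemma."""
    vertices, edges = linear_relation_graph(gens)
    comp = {v: v for v in vertices}       # union-find on the vertices

    def find(v):
        while comp[v] != v:
            comp[v] = comp[comp[v]]
            v = comp[v]
        return v

    for e in edges:
        a, b = tuple(e)
        comp[find(a)] = find(b)
    s = len({find(v) for v in vertices})
    return len(vertices) - s + 1

# The squarefree Veronese ideal (x1x2, x1x3, x2x3): Gamma is a triangle,
# so r = 3, s = 1 and the analytic spread is 3.
assert analytic_spread([(1, 1, 0), (1, 0, 1), (0, 1, 1)]) == 3
```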
Now we apply Lemma \ref{better} to obtain an explicit formula for the analytic spread of $I_{c,(n,d,t)}$. In order to do this, we first introduce a partial order $\prec$ on $G(I_{c,(n,d,t)})$. Let $u, v\in G(I_{c,(n,d,t)})$. We say that {\it $u$ covers $v$} with respect to $\prec$ if there exists $i<j$ such that $x_j$ divides $u$ and $v=x_i\frac{u}{x_j}$.
\begin{Theorem}\label{maxelem}
For integers $d>0$ and $c>0$, write $d=kc+r$ with integers $k\geq 0$ and $0<r\leq c$. In other words, $k=\lfloor (d-1)/c\rfloor$. Then the following holds:
The partially ordered set $G(I_{c,(n,d,t)})$ has a unique maximal element, namely $x_{a_1}\cdots x_{a_d}$ with
\[
a_{i}=n-k-t(d-1)+(i-1)t+\left\lceil \dfrac{i-r}{c}\right \rceil =a_1 +(i-1)t+\left\lceil \dfrac{i-r}{c}\right \rceil
\]
for $i=1,\ldots,d$. In other words,
\[
a_{d-i+1}=n-(i-1)t-\left \lfloor \frac{i-1}{c}\right \rfloor,
\]
and it has a unique minimal element, namely $x_{{\alpha}_1}\cdots x_{{\alpha}_d}$ with
\[
{\alpha}_i=(i-1)t+ \left\lfloor \frac{i-1}{c}\right\rfloor +1,
\]
for any $i=1,\dots,d$.
\end{Theorem}
\begin{proof}
Set $u_0=x_{a_1}\cdots x_{a_d}$. It is clear that $u_0 \in G(I_{c,(n,d,t)})$. Suppose that $u_0$ is not a maximal element. Then there exists an element $x_{b_1}\cdots x_{b_d}$ that covers $u_0$. Thus, there exist $i<j$ such that $x_j \mid x_{b_1}\cdots x_{b_d}$ and $u_0=x_i(x_{b_1}\cdots x_{b_d})/x_j$, which implies that $x_j u_0/ x_i= x_{b_1}\cdots x_{b_d}\in G(I_{c,(n,d,t)})$. This is not possible, as the formula for $a_{d-i+1}$ shows that each index of $u_0$ is already as large as the $t$-spread and $c$-bounded conditions allow.
In order to show the uniqueness of $u_0$, assume that there exists another maximal element $u=x_{c_1}\cdots x_{c_d}$ with $u\neq u_0$. Since $u\neq u_0$, there exists some $l$ such that $c_l<a_l$, and we take the largest $l$ with this property. Then $u^{\prime}=x_{a_l}u/x_{c_l} \in G(I_{c,(n,d,t)})$ and $u\prec u^{\prime}$, a contradiction.
Similarly one can argue that $x_{{\alpha}_1}\cdots x_{{\alpha}_d}$ is the unique minimal element with respect to $\prec$.
\end{proof}
By $[a,b]$ we denote the set of all integers $j$ with $a\leq j \leq b$. With the notation of Theorem \ref{maxelem}, we have the following result:
\begin{Proposition}
\label{complete}
For $i=1,\dots, d$, let $K_i$ be the complete graph on the vertex set $[\alpha_i, a_i]$, and let $\Gamma$ be the linear relation graph of $I_{c,(n,d,t)}$. Without loss of generality we may assume that the greatest common divisor of the elements of $G(I_{c,(n,d,t)})$ is $1$. Then $\alpha_i<a_i$ for all $i$, and $\Gamma$ and $\Union_{i=1}^d K_i$ have the same number of connected components and the same number of vertices.
\end{Proposition}
\begin{proof} Suppose $\alpha_i=a_i$ for some $i$. Then $x_{a_i}$ is a common factor of the elements of $G(I_{c,(n,d,t)})$, a contradiction.
We show that $\Union_{i=1}^d K_i\subseteq \Gamma$. Indeed,
let $\{k,l\}\in E(\Union_{i=1}^d K_i)$. Then $\{k,l\}\in E(K_i)$ for some $i$. We may assume $k< l$. Thus, ${\alpha}_i \leq k< l\leq a_i$. Let $w_1=x_{{\alpha}_1}\cdots x_{{\alpha}_{i-1}}x_k x_{a_{i+1}}\cdots x_{a_d}$ and $w_2=x_{{\alpha}_1}\cdots x_{{\alpha}_{i-1}}x_l x_{a_{i+1}}\cdots x_{a_d}$. Then it is clear that $w_1, w_2 \in G(I_{c,(n,d,t)})$ and $x_lw_1=x_kw_2$, thus $\{k,l\}\in E(\Gamma)$.
Next we observe that $V(\Gamma)=V(\Union_{i=1}^d K_i)$. We only need to show that $V(\Gamma)\subseteq V(\Union_{i=1}^d K_i)$. In fact, if $i\in V(\Gamma)$, then there exists $u\in G(I_{c,(n,d,t)})$ with $i \in \supp(u)$. Note that if $u=x_{i_1}\cdots x_{i_d}$, then $i_q\in [\alpha_q, a_q]$ for all $q$, by Theorem~\ref{maxelem}. This shows that $i \in V(K_q)$ for some $q$.
Now since $\Union_{i=1}^d K_i\subseteq \Gamma$ and since $V(\Gamma)=V(\Union_{i=1}^d K_i)$, it follows that the number of connected components of $\Gamma$ is less than or equal to the number of connected components of $\Union_{i=1}^d K_i$.
Now let $\{i,j\}\in E(\Gamma)$. We show that the vertices $i$ and $j$ belong to the same connected component of $\Union_{i=1}^d K_i$. This then shows that $\Gamma$ and $\Union_{i=1}^d K_i$ have the same number of connected components. Indeed, we may assume that $i<j$. Because $\{i,j\}\in E(\Gamma)$, there exists a monomial $u\in G(I_{c, (n,d,t)})$ such that $x_j\mid u$ and $v=x_i(u/x_j)\in G(I_{c, (n,d,t)})$. Let $u=x_{i_1}\cdots x_{i_k}\cdots x_{i_d}$. Then $j=i_k$ for some $k$. By Theorem~\ref{maxelem}, $i_q \in K_q=[\alpha_q, a_q]$ for all $q=1,\dots, d$. Let $l$ be the smallest integer such that $i\leq i_l$. If $k=l$, then $i,j\in V(K_l)$, and hence they belong to the same connected component. Now let $k\neq l$.
Then
\[
v=x_{i_1}\cdots x_{i_{l-1}}x_i x_{i_l}\cdots x_{i_{k-1}}x_{i_{k+1}}\cdots x_{i_d}.
\]
Since $v\in G(I_{c,(n,d,t)})$, it follows that $i_q\in K_{q+1}$ for $q=l,\ldots, k-1$. On the other hand, $i_q\in K_q$ for all $q$. It follows that $K_q\sect K_{q+1}\neq \emptyset$ for $q=l,\ldots, k-1$. Therefore,
\[
K_l\union K_{l+1}\union \cdots \union K_k
\]
is connected. Since $i\in K_l$ and $j\in K_k$, it follows that $i$ and $j$ belong to the same connected component.
\end{proof}
\begin{Corollary}
\label{connected}
We have $\ell(I_{c,(n,d,t)})=n$ if and only if $\Gamma$ is connected.
\end{Corollary}
\begin{proof}
By Lemma~\ref{better} and Proposition~\ref{complete} we have $\ell(I_{c,(n,d,t)})=|V(\Gamma)|-s+1$, where $s$ is the number of connected components of $\Gamma$.
Hence if $\ell(I_{c,(n,d,t)})=n$, we must have $|V(\Gamma)|=n$ and $s=1$, because $|V(\Gamma)|\leq n$ and $s\geq 1$, in general. Hence $\Gamma$ is connected.
Conversely, assume that $\Gamma$ is connected. Then $V(\Gamma)=V(\Union_{i=1}^dK_i)=[\alpha_1,a_d]$.
Since $\alpha_1=1$ and $a_d=n$, we see that $|V(\Gamma)|=n$, and hence $\ell(I_{c,(n,d,t)})=n$.
\end{proof}
For integers $d>0$ and $c>0$, let $r=d-\lfloor (d-1)/c\rfloor c$. Then for $i=1,\dots, d-1$ we set
\[
\delta_i= \left\lfloor \frac{i}{c}\right\rfloor - \left\lceil\frac{i-r}{c}\right\rceil.
\]
For the proof of the next results we need
\begin{Lemma} \label{delta_i}
Let $d=kc+r$ with $0<r\leq c$ and $i-r=lc+s_i$ with $0\leq s_i<c$ for some integers $k$ and $l$. Then we have:
\begin{enumerate}[{\em(i)}]
\item if $s_i=0$, then
\[
\delta_i= \left\{
\begin{array}{ll}
0, & \text{ if } r<c,\\
1, & \text{ if } r=c, \\
\end{array}
\right.
\]
\item if $0<s_i<c$ and $r=c$, then $\delta_i=0$, while if $r<c$, then
\[
\delta_i= \left\{
\begin{array}{rl}
-1, & \text{ if } r+s_i<c,\\
0, & \text{ if } r+s_i\geq c. \\
\end{array}
\right.
\]
\end{enumerate}
\end{Lemma}
\begin{proof}
(i) If $s_i=0$, then $c\mid (i-r)$. Then
\[
\delta_i =\left\lfloor\frac{i}{c}\right\rfloor-\left\lfloor\frac{i-r}{c}\right\rfloor.
\]
Hence, if $r=c$, then $i=(l+1)c$, which implies that $\delta_i=l+1-l=1$. If $r<c$, then $i=lc+r$ with $0<r<c$, which implies that $\delta_i=l-l=0$.
(ii) If $0<s_i<c$, then $c\nmid (i-r)$. Then
\[
\delta_i= \left\lfloor \frac{i}{c}\right\rfloor - \left\lfloor\frac{i-r}{c}\right\rfloor-1.
\]
Now, if $r=c$, then $i=(l+1)c+s_i$ with $0< s_i<c$, thus $\delta_i=l+1-l-1=0$. If $r<c$, then $i=lc+r+s_i$ with $0<r<c$ and $0< s_i<c$. Thus $0< r+s_i< 2c$ and we can distinguish the cases: if $0<r+s_i<c$, then $\delta_i=-1$, and if $c\leq r+s_i <2c$, then $\delta_i=0$.
\end{proof}
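The case distinction above can be checked mechanically. The following hypothetical Python sketch (our own encoding of $\delta_i$, with $r=d-\lfloor (d-1)/c\rfloor c$) verifies the table of Lemma~\ref{delta_i} for a range of parameters.

```python
def delta(i, d, c):
    """delta_i = floor(i/c) - ceil((i-r)/c), where r = d - floor((d-1)/c)*c."""
    r = d - ((d - 1) // c) * c
    ceil_term = -((r - i) // c)        # ceil((i - r)/c) via floor division
    return i // c - ceil_term

# Verify the case analysis of the lemma for many parameter choices.
for d in range(2, 25):
    for c in range(1, 8):
        r = d - ((d - 1) // c) * c
        for i in range(1, d):
            s_i = (i - r) % c
            if s_i == 0:
                expected = 1 if r == c else 0
            elif r == c:
                expected = 0
            else:
                expected = -1 if r + s_i < c else 0
            assert delta(i, d, c) == expected
```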
The proof of the previous lemma shows
\begin{Lemma}\label{delta}
Let $\delta_{\min}= \min\{ \delta_i\:\; i=1, \dots, d-1\}$ and $\delta_{\max}= \max\{ \delta_i\:\; i=1, \dots, d-1\}$. Then
\[
\delta_{\min}= \left\{
\begin{array}{rl}
-1, & \text{ if } r<c,\\
0, & \text{ if } r=c, \\
\end{array}
\right.
\]
\[
\delta_{\max}= \left\{
\begin{array}{ll}
0, & \text{ if } r<c,\\
1, & \text{ if } r=c, \\
\end{array}
\right.
\]
\end{Lemma}
\begin{Proposition}\label{dcomp}
Let $k=\left\lfloor (d-1)/c\right\rfloor$. Then the linear relation graph of $I_{c,(n,d,t)}$ has at most $d$ connected components, and it has exactly $d$ connected components if and only if $n-(k+t(d-1))<t+1+\delta_{\min}$. In this case, the connected components of $\Gamma$ are $K_1,\dots, K_d$.
\end{Proposition}
\begin{proof}
By Proposition~\ref{complete}, it follows that $\Gamma$ has at most $d$ connected components and that it has exactly $d$ connected components if and only if $V(K_i)\cap V(K_j)= \emptyset$ for all $i\neq j$. We may assume that $i<j$. Then $V(K_i)\cap V(K_j)= \emptyset$ for $i<j$ if and only if $V(K_i)\cap V(K_{i+1})=\emptyset$ for any $i$, because ${\alpha}_j>a_{i+1}$. Now $V(K_i)\cap V(K_{i+1})=\emptyset$ if and only if $a_i<{\alpha}_{i+1}$. Therefore, together with Proposition~\ref{complete} it follows that $\Gamma$ has exactly $d$ components if and only if $a_i<{\alpha}_{i+1}$ for all $i$, and in this case $\Gamma =\Union_{i=1}^d K_i$ and $K_1,\ldots, K_d$ are the connected components of $\Gamma$.
Now we analyze when $a_i<{\alpha}_{i+1}$ holds for all $i$. Let $r=d-kc$. By Theorem~\ref{maxelem}, the condition $a_i<\alpha_{i+1}$ holds if and only if $$a_1<t+1+ \left\lfloor \frac{i}{c}\right\rfloor -\left\lceil \frac{i-r}{c}\right\rceil=t+1+\delta_i,$$ and this holds for all $i$ if and only if $a_1<t+1+\delta_{\min}.$
\end{proof}
\begin{Theorem}\label{hardwork}
Let $\Gamma$ be the relation graph of $I_{c,(n,d,t)}$, $k= \left\lfloor (d-1)/c\right\rfloor$ and $r=d-kc$. Then the following holds:
\begin{enumerate}[{\em(i)}]
\item If $n-k-t(d-1)<t+1+\delta_{\min}$, then $\Gamma$ has exactly $d$ connected components and
\[
|V(\Gamma)|= nd-d(d-1)t-k(k-1)c-2rk.
\]
\item $\Gamma$ is a connected graph and has $n$ vertices if and only if $n-k-t(d-1)\geq t+1+\delta_{\max}$.
\item If $t+1+\delta_{\max}>a_1 \geq t+1+\delta_{\min}$,
then $\Gamma$ has $d/c$ connected components and $n$ vertices if $r=c$, and $\Gamma$ has $(r+1)k+1$ connected components and
\[
\nu=n-\sum_{j=0}^{k}\sum_{i=0}^{r+1}(\alpha_{jc+i}-a_{jc+i-1}+1)-(\alpha_{kc+r+1}-a_{kc+r}+1)
\]
vertices if $r<c$.
\end{enumerate}
\end{Theorem}
\begin{proof}
(i) By Theorem~\ref{maxelem}, $a_1=n-k-t(d-1)$, and since $a_1<t+1+\delta_{\min}$, Proposition~\ref{dcomp} implies that $\Gamma$ has exactly $d$ connected components.
By Proposition~\ref{complete}, and by using Theorem~\ref{maxelem} we obtain
\begin{eqnarray*}
|V(\Gamma)|&=& \sum_{i=1}^d |[{\alpha}_i, a_i]|=\sum_{i=1}^d(a_i-{\alpha}_i+1)\\
&=& \sum_{i=1}^d a_{d-i+1} -\sum_{i=1}^d(\alpha_i-1)=\sum_{i=1}^{d}(a_{d-i+1}-\alpha_i +1)\\
&=& \sum_{i=1}^d (n-2t(i-1)-2\left\lfloor \frac{i-1}{c}\right\rfloor)\\
&=& nd-2t{d\choose 2}-2\sum_{i=1}^d \left\lfloor \frac{i-1}{c}\right\rfloor\\
&=& nd -2t{d\choose 2}-2({k\choose 2}c+rk)\\
&=& nd-d(d-1)t-k(k-1)c-2rk.
\end{eqnarray*}
(ii) By Proposition~\ref{complete}, it follows that $\Gamma$ is connected if and only if $V(K_i)\cap V(K_{i+1})\neq \emptyset$ for all $i=1,\dots, d-1$ which is the case if and only if ${\alpha}_{i+1}\leq a_i$ for all $i$.
We have that ${\alpha}_{i+1}\leq a_i$ if and only if $a_1\geq t+1+\delta_{\max}$, see Theorem~\ref{maxelem}.
(iii) By Lemma~\ref{delta}, $\delta_{\max} - \delta_{\min}=1$. Therefore, our assumption implies that $a_1=t+1+\delta_{\min}$. It follows that
\[
a_i-\alpha_{i+1}= a_1-t-1-\delta_{i}
\leq 0.
\]
Thus, we see that $V(K_i)\sect V(K_{i+1})\neq \emptyset$ if and only if $a_i=\alpha_{i+1}$. This is the case if and only if $\delta_i=\delta_{\min}$.
Assume $r=c$. By Lemma~\ref{delta}, $\delta_{\min}=0$. We have to consider those $i \in \{1, \dots, d-1\}$ such that $\delta_i=\delta_{\min}=0$. Hence, by using Lemma~\ref{delta_i}, we obtain
\[
\{i\:\; \delta_i = \delta_{\min}=0\}=\{i\:\; c\nmid (i-r)\}=\{i\:\; d-i \not\equiv 0 \mod c\}=\{i\:\; i \not\equiv d \mod c\}.
\]
Since $d \equiv 0\mod c$, it follows that $V(K_i)\sect V(K_{i+1})\neq \emptyset$ if and only if $i\not\equiv 0\mod c$. This shows that in this case we have $ d/c$ components and $|V(\Gamma)|=n$.
Finally assume that $r<c$. By Lemma~\ref{delta}, $\delta_{\min}=-1$. We have to consider those $i \in \{1, \dots, d-1\}$ with $\delta_i=\delta_{\min}=-1$. By Lemma~\ref{delta_i} and the definition of $\delta_i$, we have $\delta_i=-1$ if and only if $i\in [jc+r+1, (j+1)c-1]$ for some $j=0,\ldots, k-1$. From this we deduce that $\Gamma$ has $(r+1)k+1$ connected components and
\[
n-\sum_{j=0}^{k}\sum_{i=0}^{r+1}(\alpha_{jc+i}-a_{jc+i-1}+1)-(\alpha_{kc+r+1}-a_{kc+r}+1)
\]
vertices, where $a_{-1}= 0$, by definition.
\end{proof}
\begin{Theorem}\label{headache}
Let $k= \left\lfloor (d-1)/c\right\rfloor$ and $r=d-kc$. Then the following holds:
\begin{enumerate}[{\em(i)}]
\item If $n-k-t(d-1)<t+1+\delta_{\min}$, then
\[
\ell(I_{c,(n,d,t)})= nd-d(d-1)t-k(k-1)c-2rk-d+1.
\]
\item If $n-k-t(d-1)\geq t+1+\delta_{\max}$, then $\ell(I_{c,(n,d,t)})=n$.
\item If $t+1+\delta_{\max}>a_1 \geq t+1+\delta_{\min}$, then $\ell(I_{c,(n,d,t)})=n-d/c+1$, if $r=c$, and $\ell(I_{c,(n,d,t)})=\nu-(r+1)k$, if $r<c$.
\end{enumerate}
\end{Theorem}
\begin{proof}
The desired result follows from Lemma~\ref{better} and Theorem~\ref{hardwork}.
\end{proof}
We illustrate Theorem~\ref{headache} with suitable examples.
\begin{Examples}
(i) Let $I=I_{3, (12, 4,3)}$. Then $\Gamma$ has $4$ connected components: $K_1$ is the complete graph on $[1,2]$, $K_2$ on $[4,6]$, $K_3$ on $[7,9]$ and $K_4$ on $[11,12]$. Since $\Gamma$ has $10$ vertices, we obtain $\ell(I)=10-4+1=7$.
(ii) Let $I=I_{3,(16,6,2)}$. Then $\Gamma$ is connected and has $16$ vertices. Thus, $\ell(I)=16-1+1=16$.
(iii) Let $I=I_{2,(5,3,1)}$. Then $\Gamma$ is connected and has $5$ vertices. Thus, $\ell(I)=5-1+1=5$.
(iv) Let $I=I_{2,(9,6,1)}$. Then $\Gamma$ has $3$ connected components: $K_1\union K_2$ with vertex set $[1,3]$, $K_3\union K_4$ with vertex set $[4,6]$, and $K_5\union K_6$ with vertex set $[7,9]$. Since $\Gamma$ has $9$ vertices, we obtain $\ell(I)=9-3+1=7$.
\end{Examples}
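These examples can be verified computationally. Assuming Proposition~\ref{complete}, the vertices and components of $\Gamma$ can be read off from the intervals $[\alpha_i,a_i]$ of Theorem~\ref{maxelem}; the following hypothetical Python sketch (helper names ours) does this and reproduces the four values of $\ell(I)$ above.

```python
def intervals(n, d, c, t):
    """The vertex intervals K_i = [alpha_i, a_i] of the complete graphs
    from the theorem on the minimal and maximal generator."""
    alpha = [(i - 1) * t + (i - 1) // c + 1 for i in range(1, d + 1)]
    a = [n - (d - i) * t - (d - i) // c for i in range(1, d + 1)]
    return list(zip(alpha, a))

def spread_data(n, d, c, t):
    """(#vertices, #components, r - s + 1) of the union of the K_i,
    which by the proposition above agrees with Gamma."""
    ks = intervals(n, d, c, t)
    vertices = {v for lo, hi in ks for v in range(lo, hi + 1)}
    comps, cur_hi = 1, ks[0][1]
    for lo, hi in ks[1:]:
        if lo > cur_hi:            # disjoint from everything so far
            comps += 1
        cur_hi = max(cur_hi, hi)
    return len(vertices), comps, len(vertices) - comps + 1

# Examples (i)-(iv) above:
assert spread_data(12, 4, 3, 3) == (10, 4, 7)
assert spread_data(16, 6, 3, 2) == (16, 1, 16)
assert spread_data(5, 3, 2, 1) == (5, 1, 5)
assert spread_data(9, 6, 2, 1) == (9, 3, 7)
```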
Let $S$ be a standard graded $K$-algebra of dimension $n$ and $I\subset S$ a graded ideal. By a theorem of Brodmann \cite{brodmann}, $\depth S/I^s$ is constant for all $s$ large enough. This constant value is called the {\it limit depth} of $I$. Brodmann showed that
\[
\lim_{s\to\infty}\depth S/I^s\leq n-\ell(I).
\]
If the Rees algebra of $I$ is Cohen-Macaulay, then by a result of Huneke \cite{Huneke}, the associated graded ring of $I$ is Cohen-Macaulay and, by using a result of Eisenbud and Huneke \cite[Proposition 3.3]{EisenbudHuneke}, this implies that
\[
\lim_{s\to\infty}\depth S/I^s= n-\ell(I).
\]
We obtain the following consequence:
\begin{Corollary}
$\lim_{s\to\infty}\depth S/I_{c,(n,d,t)}^s=0$ if and only if condition (ii) of Theorem~\ref{hardwork} holds, or $c=d$ and condition (iii) of Theorem~\ref{hardwork} holds.
\end{Corollary}
\begin{proof}
By Corollary~\ref{ReesCM}, the Rees algebra of $I_{c,(n,d,t)}$ is Cohen-Macaulay. Therefore $\lim_{s\to\infty}\depth S/I^s=0$ if and only if $\ell(I_{c,(n,d,t)})=n$. By Corollary~\ref{connected}, this is the case if and only if $\Gamma$ is connected. Thus the desired conclusion follows from Theorem~\ref{hardwork}.
\end{proof}
\section{$t$-spread Veronese ideals of bounded block type}\label{tVeroblock}
In this section we study properties of $t$-spread Veronese ideals of bounded block type. Recall that $I_{(n,d,t),k}$ is the monomial ideal generated by all $t$-spread monomials of degree $d$ in $n$ variables with at most $k$ blocks. In particular, $I_{(n,d,0),k}$ is the ideal generated by all monomials of degree $d$ in $n$ variables whose support has cardinality at most $k$.
We have $k\leq n$, and if $k=n$, then $I_{(n,d,t),k}=I_{n,d,t}$. Moreover, $I_{(n,d,0),1}=(x_1^d,\ldots,x_n^d)$. In the following we assume that $1<k<n$ and $d\geq 2$.
The next result provides a description of the ideals $I_{(n,d,0),k}$ and has the consequence that for arbitrary $t$ the ideal $I_{(n,d,t),k}$ can be obtained by an iterated application of the Kalai shifting operator.
\begin{Lemma}\label{sig}
We have
\begin{enumerate} [{\em(i)}]
\item $I_{(n,d,0),k} = \sum _{i_1 < i_2 < \cdots < i_k} (x_{i_1}, \ldots, x_{i_k})^d$, and
\item $(I_{(n,d,t),k})^{\sigma}=I_{(n+(d-1),d,t+1),k}$.
\end{enumerate}
\end{Lemma}
\begin{proof}
(i) is obvious from the definition.
(ii) Let $u\in I_{(n,d,t),k}$ with $u=x_{A}$, $A=\{i_1\leq \dots \leq i_d\}$. Since $A$ is $t$-spread, it follows that $A^{\sigma}$ is $(t+1)$-spread. Moreover, if $B_1\sqcup \dots \sqcup B_r$ is the block decomposition of $A$, then the block decomposition of $A^{\sigma}$ is $C_1\sqcup\cdots \sqcup C_r$, where for $B_j=\{i_k \leq i_{k+1} \leq \ldots \leq i_l\}$ we have $C_j= \{i_k+k-1 < i_{k+1}+k < \ldots < i_l+l-1\}$. Thus, $u^{\sigma}\in I_{(n+(d-1), d, t+1),k}$. Similarly one shows that if $v\in I_{(n+(d-1), d, t+1),k}$, then $u=v^{\tau}\in I_{(n, d, t),k}$. Since $u^{\sigma}=v$, this completes the proof.
\end{proof}
The following result is a consequence of Lemma~\ref{sig}(ii).
\begin{Corollary}
$I_{(n,d,t),k}=(I_{(n-(d-1)t,d,0),k})^{\sigma^t}$.
\end{Corollary}
\begin{Proposition}
The number of generators of $I_{(n,d,0),k}$ is
\[
\mu(I_{(n,d,0),k})= \sum_{i=1}^{k} {d-1 \choose i-1}{n\choose i}.
\]
\end{Proposition}
\begin{proof}
The number of monomials of degree $d$ in $n$ variables whose support has cardinality exactly $i$ is ${d-1 \choose i-1}{n\choose i}$. Thus the desired formula follows.
\end{proof}
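For small parameters, the count is easy to confirm by brute force; the following Python snippet (ours, for illustration only) enumerates all degree-$d$ monomials with support of cardinality at most $k$ and compares with the closed formula:

```python
from itertools import combinations_with_replacement
from math import comb

def mu_formula(n, d, k):
    """Closed formula: sum of C(d-1, i-1) * C(n, i) for i = 1..k."""
    return sum(comb(d - 1, i - 1) * comb(n, i) for i in range(1, k + 1))

def mu_bruteforce(n, d, k):
    """Count degree-d monomials in n variables with |support| <= k."""
    return sum(1 for A in combinations_with_replacement(range(1, n + 1), d)
               if len(set(A)) <= k)

for n, d, k in [(4, 3, 2), (5, 4, 3), (6, 2, 2)]:
    assert mu_formula(n, d, k) == mu_bruteforce(n, d, k)
```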
\begin{Theorem}
\label{regblock}
\[
\reg (I_{(n,d,0),k})= (n-k)\left\lfloor \frac{d-1}{k} \right\rfloor +d-1.
\]
\end{Theorem}
\begin{proof}
Note that $\dim S/I_{(n,d,0),k}=0$, since $x_i^d\in I_{(n,d,0),k}$ for $i=1,\ldots,n$. Therefore, the largest degree of a monomial in $S\setminus I_{(n,d,0),k}$ gives us the regularity of $I_{(n,d,0),k}$.
Let $I=\{i_1, \dots, i_l\}\subset [n]$, $l\leq k$. Denote by $a_{I}=a_{i_1}+\dots+a_{i_l}$.
Let us consider the polytope
\[
\mathcal{P}= \{(a_1, \dots, a_n)\in \mathbb{R}^{n}: a_i\geq 0, a_{I}\leq d-1 \text{ for all } I\subset [n], |I|\leq k\}.
\]
Note that $x_1^{a_1}\cdots x_n^{a_n}\not \in I_{(n,d,0),k}$ if and only if $(a_1,\ldots,a_n)\in \mathcal{P}$. Thus,
in order to find the maximal degree of the socle elements, we have to find an integer point $a=(a_1, \dots, a_n) \in \mathcal{P}$ for which the function $f(a_1, \dots, a_n)=a_1+\dots+a_n$ takes the maximal value. By symmetry, we can assume that $a_1\geq a_2 \geq \dots \geq a_n$. Then $a\in \mathcal{P}$ if and only if $a_1+a_2+\dots+a_k\leq d-1$ and $a_i \geq 0$ for all $i$. In particular, if $a\in \mathcal{P}$, then $a^{\prime}=(a_1,\dots, a_{k-1}, a_k, \dots, a_k)\in \mathcal{P}$ and $f(a^{\prime})\geq f(a)$. Therefore, in order to find the maximal value for $f$, we may assume that $a=(a_1,\dots, a_{k-1}, a_k,\dots, a_k)$. For such $a$, we have $f(a)=a_1+\dots+a_{k-1}+ (n-k+1)a_k$. Thus, our problem can be reformulated as follows: find the maximal value of the linear function $$g(a_1,\dots, a_k)=a_1+\dots+a_{k-1}+(n-k+1)a_k$$
for the integer points $(a_1,\dots, a_k)$ satisfying
\begin{enumerate}[{(i)}]
\item $a_1\geq a_2\geq \dots \geq a_k>0$,
\item $a_1+a_2+\dots +a_k\leq d-1$.
\end{enumerate}
Now let $a_k=i$ with $1\leq i\leq d-1$. By (i) and (ii), we obtain that $ki\leq d-1$. This implies that $1\leq i\leq \left\lfloor \frac{d-1}{k}\right \rfloor$. Let $h_i= (n-k)i+(d-1)$; then $g(a_1, \dots, a_{k-1}, i)\leq h_i$, so for each $i$, $g$ can take at most the value $h_i$. Note that $h_1<h_2<\dots < h_{\left\lfloor \frac{d-1}{k}\right \rfloor}$. We will show that $g$ attains the value $$h_{\left\lfloor \frac{d-1}{k}\right \rfloor}= (n-k)\left\lfloor \frac{d-1}{k}\right \rfloor +(d-1).$$ Indeed, let $a_k=\left\lfloor \frac{d-1}{k} \right\rfloor$, $a_2=\dots =a_{k-1}=\left\lfloor \frac{d-1}{k}\right \rfloor$ and $a_1=(d-1)-(k-1)\left\lfloor \frac{d-1}{k}\right \rfloor$. Then conditions (i) and (ii) are satisfied, and $g(a_1, \dots, a_{k-1},\left\lfloor \frac{d-1}{k} \right\rfloor)= a_1+\dots +a_{k-1}+(n-k+1)\left\lfloor \frac{d-1}{k} \right\rfloor = h_{\left\lfloor \frac{d-1}{k}\right \rfloor}$.
\end{proof}
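Since $\dim S/I_{(n,d,0),k}=0$, the value computed in the proof is the largest degree of a monomial outside the ideal. For small parameters this can be verified by exhaustive search; the following Python sketch (our illustration; feasible only for small $n$ and $d$) does so:

```python
from itertools import product

def reg_formula(n, d, k):
    """The value (n - k) * floor((d - 1) / k) + d - 1 from the Theorem."""
    return (n - k) * ((d - 1) // k) + d - 1

def max_degree_outside(n, d, k):
    """Largest total degree of an exponent vector whose k largest entries
    sum to at most d - 1, i.e. of a monomial outside I_{(n,d,0),k}."""
    best = 0
    for a in product(range(d), repeat=n):   # each exponent is at most d - 1
        if sum(sorted(a, reverse=True)[:k]) <= d - 1:
            best = max(best, sum(a))
    return best

for n, d, k in [(4, 5, 2), (5, 4, 3), (5, 7, 2)]:
    assert reg_formula(n, d, k) == max_degree_outside(n, d, k)
```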
\begin{Proposition} \label{k=2}
$I^{n-1}_{(n,d,0),2}= \mathbf{m}^{(n-1)d}$.
\end{Proposition}
\begin{proof}
The inclusion $I^{n-1}_{(n,d,0),2}\subseteq \mathbf{m}^{(n-1)d}$ is trivial. Conversely, let $w=x_1^{a_1}\cdots x_n^{a_n} \in \mathbf{m}^{(n-1)d}$. Thus $\sum_{i=1}^n a_i =(n-1)d$. We show that $w$ can be written as a product of $n-1$ monomials of degree $d$ whose support has cardinality at most $2$. We prove this by induction on $n$. We may assume $a_1\geq a_2\geq \dots \geq a_n$. It is clear that $a_n<d$, because otherwise $\sum_{i=1}^n a_i >(n-1)d$.
Now, if $a_1+a_n<d$, then $a_i+a_n<d$ for all $1\leq i\leq n-1$, hence $(n-1)d=\sum_{i=1}^{n}a_i\leq \sum_{i=1}^{n-1}a_i+(n-1)a_n<(n-1)d$, a contradiction. Thus, $a_1+a_n\geq d$, and therefore $a_1-(d-a_n)\geq 0$. We can write
\[
w=(x_1^{d-a_n}x_n^{a_n})(x_1^{a_1-(d-a_n)}x_2^{a_2}\cdots x_{n-1}^{a_{n-1}}).
\]
Set $w^{\prime}=x_1^{a_1-(d-a_n)}x_2^{a_2}\cdots x_{n-1}^{a_{n-1}}$. It is clear that $w^{\prime}$ has degree $(n-2)d$ and $|\supp (w^{\prime})|\leq n-1$. By the induction hypothesis, $w'\in I^{n-2}_{(n-1,d,0),2}$. This implies that $w\in I^{n-1}_{(n,d,0),2}$.
\end{proof}
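The inductive step of the proof is effectively an algorithm: repeatedly pair the largest exponent with the smallest one. A Python transcription (ours; the function name is illustrative) that factors any monomial of degree $(n-1)d$ into $n-1$ generators of $I_{(n,d,0),2}$ reads:

```python
from itertools import combinations_with_replacement

def factor_k2(exps, d):
    """Split an exponent vector of total degree (n-1)*d into n-1 factors of
    degree d, each supported on at most 2 variables (the proof's recursion)."""
    exps = dict(enumerate(exps))                   # variable -> exponent
    factors = []
    while len(exps) > 1:
        hi = max(exps, key=exps.get)               # largest exponent
        lo = min((v for v in exps if v != hi), key=exps.get)  # smallest
        an = exps.pop(lo)
        assert an < d and exps[hi] >= d - an       # a_1 + a_n >= d
        factors.append({hi: d - an, lo: an})       # x_hi^{d-a_n} x_lo^{a_n}
        exps[hi] -= d - an
    assert set(exps.values()) == {0}               # nothing left over
    return factors

# Exhaustive check of m^{(n-1)d} <= I^{n-1} for small n, d.
n, d = 4, 3
for A in combinations_with_replacement(range(n), (n - 1) * d):
    exps = [A.count(v) for v in range(n)]
    fs = factor_k2(exps, d)
    assert len(fs) == n - 1
    assert all(sum(f.values()) == d for f in fs)
    assert all(sum(1 for e in f.values() if e) <= 2 for f in fs)
```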
One can ask in general whether a sufficiently high power of a $0$-spread Veronese ideal of bounded block type is a power of the maximal ideal. This is indeed the case, since we have the following inclusions
\[
(\mathbf{m}^d)^{n-1} \supseteq I^{n-1}_{(n,d,0),k} \supseteq I^{n-1}_{(n,d,0),2}= (\mathbf{m}^d)^{n-1},
\]
where the last equality is given by Proposition \ref{k=2}. Therefore, $I^{n-1}_{(n,d,0),k}=(\mathbf{m}^d)^{n-1}$. Hence the smallest integer $j$ for which $I^{j}_{(n,d,0),k}=(\mathbf{m}^d)^{j}$ is bounded by $n-1$.
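For small parameters, the smallest such exponent $j$ can be found by brute force. The following Python sketch (our illustration; it expands products of generators explicitly, so it is feasible only for very small $n$, $d$, $j$) computes it:

```python
from itertools import combinations_with_replacement as cwr

def gens(n, d, k):
    """Exponent vectors of the generators of I_{(n,d,0),k}."""
    out = set()
    for A in cwr(range(n), d):
        e = tuple(A.count(v) for v in range(n))
        if sum(1 for x in e if x) <= k:
            out.add(e)
    return out

def smallest_power_equal(n, d, k, jmax):
    """Smallest j <= jmax with I_{(n,d,0),k}^j = m^{jd}, or None."""
    G = gens(n, d, k)
    power = {(0,) * n}
    for j in range(1, jmax + 1):
        power = {tuple(p + g for p, g in zip(pe, ge))
                 for pe in power for ge in G}
        all_monomials = {tuple(A.count(v) for v in range(n))
                         for A in cwr(range(n), j * d)}
        if power == all_monomials:
            return j
    return None

# For n = 3, d = 3, k = 2 the bound j <= n - 1 = 2 is attained.
assert smallest_power_equal(3, 3, 2, 3) == 2
```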
Knowing the smallest integer $j$ for which
$I^j_{(n,d,0),k}= \mathbf{m}^{jd}$ is of interest because of the following result.
\begin{Proposition}
\label{fibercone} Let $j$ be the smallest integer such that $I^j_{(n,d,0),k}= \mathbf{m}^{jd}$. Then
\begin{eqnarray*}
\reg(K[I_{(n,d,0), k}])
&=&\max \{j-1, \reg(S^{(d)})\}\\
&=&\max\{j-1, n-\left\lceil \frac{n}{d}\right\rceil \}.
\end{eqnarray*}
\end{Proposition}
\begin{proof}
Let $S^{(d)}$ be the $d$th Veronese subring of $S=K[x_1,\dots, x_n]$. Set $R=K[I_{(n,d,0), k}]$. Then $R\subset S^{(d)}$. Consider the short exact sequence
\[
0 \to R \to S^{(d)} \to S^{(d)}/R \to 0,
\]
and its induced long exact cohomology sequence
\[
\cdots \to H_{{\frk m}}^0(S^{(d)}/R) \to H_{\frk m}^1(R) \to H_{\frk m}^1(S^{(d)}) \to \cdots
\]
We have that $\dim (S^{(d)})=n$ and
\[
H_{\frk m}^i(S^{(d)})=0 \text{ for all } i\neq n.
\]
Thus, we obtain that
\[
H_{\frk m}^i(R)= \left\{
\begin{array}{lll}
H_{\frk m}^0(S^{(d)}/R), & \text{ if } i=1,\\
H_{\frk m}^n(S^{(d)}), & \text{ if } i=n,\\
0, & \text{ if } i\neq 1, n.
\end{array}
\right.
\]
Thus, $\reg (R)=\max\{j-1, \reg(S^{(d)})\}$. Since $S^{(d)}$ is Cohen--Macaulay of dimension $n$, it follows that $\reg(S^{(d)})=a(S^{(d)})+n$, where $a(S^{(d)})$ is the $a$-invariant of $S^{(d)}$. On the other hand,
\[
a(S^{(d)})=-\min\{i \: : \: (\omega_{S^{(d)}})_i \neq 0\},
\]
where $\omega_{S^{(d)}}$ denotes the canonical module of $S^{(d)}$.
Since $(\omega_{S^{(d)}})_i= (\omega_{S})_{id}$ (\cite[Exercise 3.6.21]{bh}) and since $\omega_S= S(-n)$, it follows that
\[
a(S^{(d)})=-\min\{i \: : \: S_{id-n} \neq 0\}= -\left\lceil \frac{n}{d}\right\rceil.
\]
This yields the desired conclusion.
\end{proof}
We expect that $R=K[I_{(n,d,0),k}]$ has quadratic relations. Indeed, if $k=1$, then $R$ is a polynomial ring, and if $k=n$, then $R$ is the Veronese algebra, which is known to have quadratic relations, see for example \cite[Proposition 6.11 and Theorem 6.16]{EH}. So, if $n=3$, only the case $k=2$ is of interest. In this case, if $d=3$, then $R$ is the pinched Veronese, which is known to be even Koszul, see \cite{caviglia}. Another case that we could check with the computer is $n=4$, $d=2$, $k=2$; in this case $R$ is Gorenstein with quadratic relations, and by our formula, $\reg(R)=2$.
\section{Introduction}
The idea of a third family of compact stars with small radii (tertiary stars) was first suggested by Gerlach in 1968 \cite{Gerlach:1968zz} in a generic context, and then by K\"ampfer in 1981 in the context of hybrid stars with a quark core \cite{Kampfer:1981yr}. In the past years, the interest in these stars increased due to studies indicating that neutron star radii might be smaller than previously expected.
Recently, this idea came back in order to explain data from neutron star mergers suggesting again smaller radii \cite{TheLIGOScientific:2017qsa}.
In addition, the observation of twin stars (two stars with the same mass and significantly different radius)
would be a definite confirmation of a strong first-order phase transition in stars, as already pointed out in Refs. \cite{Glendenning:1998ag,Schertler:2000xq,Alvarez-Castillo:2013cxa,Benic:2014jia}.
This is a very timely idea, as NASA's Neutron star Interior Composition Explorer (NICER) was attached to the International Space Station in June 2017 and will soon report accurate data on neutron star radii.
The third family of stars has also been investigated with regard to different properties and contexts such as supernovae \cite{Hempel:2015vlg,Heinimann:2016zbx}, pasta phases \cite{Alvarez-Castillo:2014dva}, particle populations \cite{Banik:2001yw,Chatterjee:2009xi,Pagliara:2014gja}, color superconductivity \cite{Banik:2002kc,Alford:2017qgh}, rotation \cite{Banik:2004ju,Bejger:2016emu}, and tidal deformation in neutron star mergers \cite{Paschalidis:2017qmb,Alvarez-Castillo:2018pve,Gomes:2018eiv}.
The conversion mechanism of hadronic stars into hybrid or quark stars is still an open question, and the possibility of a third family or even two families of stars have been addressed in past works \cite{Glendenning:1998ag,Drago:2015dea,Drago:2015cea,Alvarez-Castillo:2016dyz}.
From the microscopic point of view, a soft equation of state (EoS) in the transition region is necessary. It generates a large energy density gap which creates a sequence of hybrid stars that is disconnected from the hadronic branch (third family) \cite{Benic:2014jia}.
On the other hand, in order to reproduce the observational stellar mass constraints, it is necessary that both hadronic and quark matter equations of state are stiff. For the EoS criteria for hybrid star branches to be connected or disconnected to hadronic ones, see Refs. \cite{Alford:2013aca,Alford:2015gna,Alford:2015dpa}.
Moreover, it is important to note that magnetic fields are also relevant in the calculation of the structure of neutron stars. It has been shown that although they do not significantly affect the matter equation of state, strong magnetic fields can change the macroscopic properties of neutron stars dramatically \cite{Franzon:2015sya,Gomes:2017zkc,Chatterjee:2018prm}.
In particular, they have the effect of decreasing the central density of stars due to Lorentz force, turning hybrid and hyperonic stars into nucleonic stars (composed only of nucleons and leptons) \cite{Franzon:2015sya,Franzon:2016urz}.
In this work, we address for the first time how strong magnetic fields affect the mass-radius relation of neutron stars, both generating and destabilizing twin-star configurations.
We start by describing different models that generate twin configurations without resorting to magnetic fields. Then, we discuss the effect of adding strong magnetic field effects to these twin stars. After that, we discuss the possibility of twin stars that exist only in the case when strong magnetic fields are present. Finally, we present our conclusions and outlook.
\begin{table*}
\caption{\label{Hmodels} Nuclear matter properties at saturation for the hadronic models used in this analysis, assuming a saturation density of $\rho_0=0.15\,\rm{fm}^{-3}$. The columns read: model, nucleon effective mass $m^*_n$, compressibility modulus $K_0$, binding energy per nucleon $B/A$, and symmetry energy $a_{sym}$, at nuclear saturation density $\rho_0$.}
\begin{center}
\begin{tabular}{ccccc}
\hline
Model & $\quad m^*_n/m_n$ & $\quad K_0$ & $\quad B/A$ & $a_{sym}$ \\
& & $\quad (\mathrm{MeV})$ & $\quad (\mathrm{MeV})$ & $\quad (\mathrm{MeV})$ \\
\hline \hline
1. MBF \cite{Gomes:2014aka} & 0.66 & 297 & -15.75 & 32.00 \tabularnewline
2. Chiral \cite{Dexheimer:2014pea} & 0.67 & 318.76 & -15.65 & 32.43 \tabularnewline
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\begin{figure}[!ht]
\includegraphics[width=.9\linewidth]{fig1_mag.eps}
\includegraphics[width=.9\linewidth]{fig12_mr-twins_new.eps}
\includegraphics[width=.9\linewidth]{fig13_mr-twins_new.eps}
\caption{Mass-radius diagram for Sets 1, 2, and 3, all
reproducing twin-star configurations (without the inclusion of magnetic fields). $M_1$ and $M_2$ are the hadronic and hybrid branch maximum masses, and $\Delta R_2$ is the radius interval covered by the third family.}
\label{nonB-twins}
\end{figure}
\section{Formalism}
In this section, we present the formalism used to study the effects of magnetic fields in twin stars. In subsection \ref{non-mag_twins}, we present two equations of state that allow for the existence of a third family of non-magnetic stars. Assuming that a first-order phase transition takes place at high densities, matter is modeled in two different scenarios: one caused by a deconfinement phase transition to quark matter, and another one caused by a phase transition to high-strangeness matter. We use two relativistic mean field models to describe hadronic matter, considering a nuclear saturation density $\rho_0=0.15\,\rm{fm}^{-3}$, parametrized to reproduce the nuclear properties at saturation shown in Table \ref{Hmodels}. All models/constructions we use are able to reproduce standard nuclear and astrophysical constraints \cite{Alford:2013aca,Gomes:2014aka,Franzon:2016urz,Gomes:2018eiv}.
In subsection II B, we investigate the effects of strong magnetic fields on the twin stars from section II A. In section II C, we present a configuration of hybrid stars that generates twin-star sequences only when strongly magnetized. For the latter analysis, only the hadron-quark phase transition scenario is considered, using one of the models from section II A but with a different parametrization.
\begin{table*}
\caption{\label{TwinProp} Properties of non-magnetic twin stars for the sets used in this analysis. The columns are: masses and radii of the maximum mass star at the first (hadronic) and second (hybrid) branches, $M_1$ and $R_1$, and $M_2$ and $R_2$, respectively; and lastly the range of radii $\Delta R_2$ of stable third-family stars. These quantities are displayed in the top panel of Figure 1.}
\begin{center}
\begin{tabular}{cccccc}
\hline
Set & $\quad M_1$ & $\quad R_1$ & $\quad M_2 $ & $\quad R_2$ & $\quad \Delta R_2$ \\
& $\quad(\mathrm{M_{\odot}})$ & $\quad (\mathrm{km})$ & $\quad (\mathrm{M_{\odot}})$ & $\quad (\mathrm{km})$ & $\quad (\mathrm{km})$ \\
\hline \hline
Set 1 &$\quad 1.969$ & $\quad 14.23$ & $\quad 1.978$ &$\quad 12.47 $& $\quad 1.33$ \tabularnewline
Set 2 & $\quad 2.061$ & $\quad 13.61$ & $\quad 2.108$ & $\quad 10.9$ & $\quad 1.97 $ \tabularnewline
Set 3 & $\quad 1.971 $ & $\quad 13.63$ & $\quad 1.680$ & $\quad 9.70$ & $\quad 0.45 $ \tabularnewline
\\
\hline\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Non-magnetic twin stars}\label{non-mag_twins}
In the following, we present different relativistic mean field models that are used to generate equations of state taking into account charge neutrality and chemical equilibrium. Sequences of spherical, non-rotating and non-magnetic twin stars are then calculated by solving the Tolman-Oppenheimer-Volkoff equations.
\subsubsection{Due to deconfinement}
First, we discuss the case in which a non-magnetic
third family is generated by a hadron-quark phase transition. For such a case, we use an EoS parametrization of the many-body forces (MBF) model for the hadronic phase \cite{Gomes:2014aka} and the MIT bag model with vector interaction for the quark phase (see, for example Ref. \cite{Franzon:2016urz}).
The hadronic phase is assumed to consist of only nucleons and leptons. In this relativistic mean-field framework, many-body forces contributions are introduced in the baryon couplings to the scalar fields ($\sigma,\delta$), and are controlled by a parameter $\zeta$. The vector interaction introduced in the MIT bag model is equivalent to the approaches proposed in Refs. \cite{Alford:2004pf,Weissenborn:2011qu,Klahn:2015mfa}, in which it is referred to as the vMIT or vBag model, and allows for a stiff EoS for quark matter, able to describe massive hybrid stars. The two phases are connected by a Maxwell construction, which yields the required sharp phase transition to quark matter.
In this work, we choose the many-body forces parameter to be $\zeta=0.040$, which is the stiffest possible realistic parametrization of the model (see Table \ref{Hmodels}). The values of the vector coupling, bag constant and mass of the strange quark in the MIT bag model that give rise to a third family are $(g_V/m_V)^ 2=1.7\, \mathrm{fm^ 2}$, $B^{1/4}=171\,\mathrm{MeV}$ and $m_s=150\,\mathrm{MeV}$, respectively. Note that increasing the strange quark mass or repulsion does not favor hybrid stars, as a higher transition pressure and a larger energy density gap make it more likely that the stars become unstable \cite{Ranea-Sandoval:2015ldr}. For Set 1, we have used the BPS equation of state \cite{Baym:1971pw} for the crust.
For Set 2, we again describe hadronic matter with the same parametrization of the MBF model, but also allowing for hyperon degrees of freedom to appear, reproducing the respective values for the hyperon potentials \cite{Gomes:2014aka}: $U_\Lambda=-28$ MeV, $U_\Sigma=30$ MeV, and $U_\Xi=-18$ MeV. For quark matter, we use the ``constant-sound-speed (CSS)'' parametrization which assumes that the speed of sound in quark matter is pressure-independent for pressures ranging from the first-order transition pressure up to the maximum central pressure of a neutron star \cite{Zdunik:1987,Alford:2013aca}. For a given hadronic matter EoS, CSS parameters are then the pressure at the transition $p_{\rm trans}$ (or equivalently the transition density $\rho_{\rm trans}$), the
discontinuity in energy density at the transition $\Delta\varepsilon$, and the
speed of sound in the high-density phase $c^2_{\rm QM}$. The CSS parameter values we applied for the MBF model are $\rho_{\rm trans}=3.5\,\rho_0$, $\Delta\varepsilon/\varepsilon_{\rm trans}=0.3$ and $c^2_{\rm QM}=1$, where $\varepsilon_{\rm trans}\equiv\varepsilon_{\rm MBF}(p_{\rm trans})$. The properties of the third-family configuration found for this set are also shown in Table \ref{TwinProp}.
For the crust in Set 2, we have used EoS' from \citet{Baym:1971pw} and
\citet{Negele73ns}.
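The CSS prescription amounts to a simple closed form for the energy density above the transition, $\varepsilon(p)=\varepsilon_{\rm trans}+\Delta\varepsilon+(p-p_{\rm trans})/c^2_{\rm QM}$. A minimal Python sketch (ours; the numbers below are illustrative placeholders, not the fitted parametrization of this work) reads:

```python
def css_energy_density(p, p_trans, eps_trans, delta_eps, cs2_qm):
    """Constant-sound-speed (CSS) energy density above the transition:
    eps(p) = eps_trans + delta_eps + (p - p_trans) / cs2_qm."""
    if p < p_trans:
        raise ValueError("below p_trans the hadronic EoS applies")
    return eps_trans + delta_eps + (p - p_trans) / cs2_qm

# Illustrative values in MeV/fm^3 (placeholders, not the MBF fit):
eps_t = 480.0
# Just above the transition, the energy density jumps by delta_eps.
jump = css_energy_density(100.0, 100.0, eps_t, 0.3 * eps_t, 1.0)
assert abs(jump - 1.3 * eps_t) < 1e-9
```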
The corresponding sequence of twin stars for these models are shown in Figure \ref{nonB-twins} and are labeled Set 1 and Set 2.
Increasing the central density, the hadronic stars become more compact due to the larger gravitational attraction.
When a sharp phase transition to quark matter takes place, stars become unstable until the quark core (described with a stiff quark matter EoS) becomes large enough to overcome the instability, creating the third family of stars.
We define the maximum masses of the hadronic and hybrid branches as $M_1$ and $M_2$, respectively. The radius interval for the third family branch, from the minimum between the mass peaks to the hybrid maximum mass peak ($M_2$), is defined as $\Delta R_2$. We display the main properties of these sets in Table \ref{TwinProp}.
\subsubsection{Due to strangeness}
Here, we discuss the case in which there are stable twin stars (not considering magnetic fields) generated by a strong phase transition that is not necessarily related to deconfinement. The corresponding EoS is modeled using the Quark-Hadron Chiral Parity-Doublet (Q$\chi$P) model \cite{Steinheimer:2011ea,Dexheimer:2012eu}, which contains the positive and negative parity states of the baryon octet as well as quarks within the mean-field approximation. The introduction of an excluded volume for the baryons suppresses hadrons at high density and/or temperature, allowing the quarks to dominate \cite{Steinheimer:2010ib}.
The coupling constants of the model were fitted to reproduce the vacuum masses of the baryons and mesons, and reasonable values for the hyperon potentials ($U_\Lambda=-30.44$ MeV, $U_\Sigma=2.47$ MeV, and $U_\Xi=-26.28$ MeV). In addition to the properties presented in Table \ref{Hmodels}, the vacuum expectation values of the scalar mesons are constrained by reproducing the pion and kaon decay constants $f_\pi$ and $f_\kappa$. The BPS equation of state \cite{Baym:1971pw} is also used to describe the crust of these stars.
The effects of strangeness on the quark sector of this equation of state are studied in Ref.~\cite{Dexheimer:2014pea} by varying the quark coupling to the strange vector meson.
In particular, it is found that a large amount of strange quarks is related to a softer equation of state that possesses a first-order phase transition, observed as a reduction of the chiral condensate. Note that in this model both phases contain hadrons (nucleons and hyperons) and quarks, and there would not be a first-order phase transition (but a crossover instead) if it were not for the choice of the strange quark coupling. In this way, a phase transition separates a phase with lower strangeness from a phase with larger strangeness.
In the context of hadronic matter, the possibility of smooth and strong phase transitions to strange matter has been explored in Ref.~\cite{Gulminelli:2013qma}.
We label this sequence of twin stars Set 3 (see Figure \ref{nonB-twins}).
For example, a particular star mass of $1.68\,\mathrm{M_\odot}$, corresponding to radii of $14.00$ km and $9.60$ km in the two branches, has a strangeness fraction at the center of the star of $f_s = 0.01$ and $f_s = 1.68$, respectively. The strangeness fraction is defined as $f_s = \sum_i \rho_i Q_{S_i} / \rho_B$, where $\rho_i$ is the number density of each particle, $\rho_B$ the baryon number density, and $Q_{S_i}$ is the strangeness of each particle. The properties of the two branches of stars are also shown in Table \ref{TwinProp}.
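The strangeness fraction defined above is straightforward to evaluate for a given composition. The following Python sketch (ours; the densities below are arbitrary illustrative numbers, not model output) implements $f_s=\sum_i\rho_iQ_{S_i}/\rho_B$:

```python
def strangeness_fraction(species, rho_B):
    """f_s = sum_i rho_i * Q_{S_i} / rho_B, for a list of
    (rho_i, Q_{S_i}) pairs and baryon number density rho_B."""
    return sum(rho * q_s for rho, q_s in species) / rho_B

# Arbitrary illustrative composition (densities in fm^-3): two strange
# species, each carrying one unit of strangeness.
species = [(0.05, 1), (0.10, 1)]
assert abs(strangeness_fraction(species, 0.3) - 0.5) < 1e-12
```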
\subsection{Adding magnetic field effects}\label{mag_twins}
In this section, we investigate the effects of magnetic fields on the twin stars presented in section II A.
Magnetic field effects can be introduced simultaneously in the macroscopic structure of neutron stars, by solving the Einstein-Maxwell equations, and in the microscopic formalism of the EoS, through Landau quantization. Nevertheless, as already shown in Refs. \cite{Chatterjee:2014qsa,Gomes:2017zkc}, the latter does not have significant effects on the macroscopic properties of stars for magnetic fields of $\sim 10^{18}\,\mathrm{G}$ or smaller (which is the case here) and is therefore not taken into account in this work.
The macroscopic stellar structure, on the other hand, is significantly modified by magnetic fields with strengths $\sim 10^{18}\,\mathrm{G}$ in the stellar center \cite{Bonazzola:1993zz}. For this reason, spherical solutions of Einstein's equations must be abandoned when studying strongly magnetized stars. For this purpose, we make use of the LORENE C++ class library for numerical relativity, which generates equilibrium configurations from the Einstein-Maxwell field equations assuming a poloidal magnetic field configuration produced self-consistently by a macroscopic current \cite{Bonazzola:1993zz}. The magnetic field strength depends on the stellar radius (with respect to the symmetry axis), the dipole magnetic moment and the EoS. For a fixed dipole magnetic moment, the magnetic field increases slowly in the polar direction towards the center of each star \cite{Dexheimer:2016yqu,Chatterjee:2018prm}.
In this work, for the first time, we fix the magnetic field in the center of each star of the sequence by adjusting the dipole magnetic moment for each stellar central density. This is shown in Figure \ref{magtwins_sets} for Set 1-3 of section II A.
Note that in this work we have chosen to discuss twin stars from the analysis of their (magnetic-axis) equatorial radii. This is because neutron star radius measurements usually refer to equatorial radii. In the case of measurements using x-ray bursts, as for example the ones to be performed by NICER, they refer to the stellar equatorial radius (possibly at non-zero latitudes) defined by rotation \cite{Ozel:2015ykl}. In the case of measurements from gravitational waves, they refer to a radius defined by the projection from the binary orbital motion, since this is the direction in which deformation takes place \cite{Bauswein:2017vtn,Raithel:2018ncd}. In any case, all these ``equators'' are not expected to differ significantly from one another (in contrast to the polar direction), as magnetic and rotation axes are expected to be almost aligned for stars with strong magnetic fields \cite{Lander:2018und}.
For Set 1, shown in the top panel of Figure \ref{magtwins_sets}, the introduction of stellar magnetic fields at first increases the masses and radii of the stars; once the central value reaches $B_c\sim 5\times 10^{17}\,\mathrm{G}$, all twin stars are completely destabilized. The change in mass and radius is larger for hadronic stars than for hybrid ones, as hybrid stars are more compact and, therefore, less deformable by magnetic fields \cite{Gomes:2017zkc}. For the hadronic maximum mass star, $R_{1}=14.23\,\mathrm{km}$ goes to $R_{1}=14.49\,\mathrm{km}$ and $M_{1}=1.969\,\mathrm{M_{\odot}}$ goes to $M_{1}=2.016\,\mathrm{M_{\odot}}$, for central magnetic fields going from zero to $B_c\sim 4\times 10^{17}\,\mathrm{G}$.
The behavior of twin stars for Set 2, shown in the middle panel of Figure \ref{magtwins_sets}, is similar to that of Set 1. The main difference is that now none of the physical magnetic fields studied destabilizes the twin stars. This is due to the fact that the original mass range of the twins is larger than for Set 1. The changes in maximum mass and radius for the maximum mass hadronic star, from the non-magnetic case to a central magnetic field $B_c\sim 1\times 10^{18}\,\mathrm{G}$, are: from $R_{1}=13.61\,\mathrm{km}$ to $R_{1}=14.173\,\mathrm{km}$ and from $M_{1}=2.061\,\mathrm{M_{\odot}}$ to $M_{1}=2.208\,\mathrm{M_{\odot}}$.
The twin stars from Set 3 are shown in the bottom panel of Figure \ref{magtwins_sets}. In this case, again, none of the physical magnetic fields used in this analysis destabilizes the twin stars, as they are extremely compact. For this set, the changes in maximum mass and radius due to magnetic fields of $B_c\sim 1\times 10^{18}\,\mathrm{G}$ for the maximum mass hadronic star are from $R_{1}=13.63\,\mathrm{km}$ (non-magnetic) to $R_{1}=14.65\,\mathrm{km}$ and from $M_{1}=1.971\,\mathrm{M_{\odot}}$ (non-magnetic) to $M_{1}=2.159\,\mathrm{M_{\odot}}$.
It is important to notice that for central magnetic fields higher than $B_c\sim 1\times 10^{18}\,\mathrm{G}$, the hadronic star branch becomes unstable even before the twin stars for Sets 2 and 3. In these cases, the third family still remains, but the intensity of magnetic fields reaches both numeric and physical limit for our description of magnetic neutron stars. Numerically, beyond this (model dependent) threshold, the purely magnetic contribution to the energy density exceeds the matter part, and the code stops converging. From the physical point of view, stars on the hadronic branch have low central densities, and magnetic field effects on the crust equation of state should be taken into account. The latter is beyond the scope of this work.
\begin{figure}[!ht]
\includegraphics[width=.9\linewidth]{fig2_mr-rosana.eps}
\includegraphics[width=.9\linewidth]{fig3_mr-sophia.eps}
\includegraphics[width=.9\linewidth]{fig4_mr-veronica.eps}
\caption{Mass-radius diagram for Sets 1 (top), 2 (middle), and 3 (bottom), for different magnetic field configurations. For Set 1, twin stars disappear for a central magnetic field of around $B_c\sim 5\times 10^{17}\,\mathrm{G}$, but do not disappear for the maximum central magnetic field configurations investigated in the analysis of Sets 2 and 3.}
\label{magtwins_sets}
\end{figure}
\subsection{Magnetic twin stars}\label{mag_gen_twins}
In this section, we investigate the effects of magnetic fields generating twin-star configurations that otherwise would not exist. For this purpose, we once more describe a hadron-quark phase transition with a Maxwell construction for the combination of the MBF and MIT models.
The hadronic phase parameterization is the same as in Section II A 1 ($\zeta=0.040$), but we take the values of the vector coupling and bag constant to be: $(g_V/m_V)^ 2=2.2\,\mathrm{fm^ 2}$ and $B^ {1/4}=160\,\mathrm{MeV}$, in order to reproduce massive and stable hybrid stars \emph{without} the existence of twins (Set 4). The results are presented in Figure \ref{magtwins_rosana_hybrid}.
As the central magnetic field in the hybrid star increases (and so does the magnetic field strength throughout the star), a second mass peak appears, and hence a twin-star configuration. The corresponding threshold is $B_c=3\times 10^ {17}\,\mathrm{G}$ for this configuration. For larger magnetic fields, the masses and radii of the stars (especially hadronic ones) keep increasing until $B_c\sim 7\times 10^ {17}\,\mathrm{G}$, when the twin-star branch disappears again, analogously to the discussion for Set 1 in the last section.
More specifically, the non-magnetic configuration for Set 4 has a maximum mass hybrid star with $M_{2}=1.966\,\mathrm{M_{\odot}}$ and $R_{2}=12.00\,\mathrm{km}$. The critical mass star, beyond which all stars are hybrid, has a mass of $M_{c}=1.908\,\mathrm{M_{\odot}}$ and a radius of $R_{c}=14.05\,\mathrm{km}$.
When magnetic fields are introduced, due to the appearance of a third family of stars at $B_c=3\times 10^ {17}\,\mathrm{G}$, the critical mass star $M_c$ becomes the maximum mass star of the hadronic branch $M_1$, with a radius of $R_{1}=14.19\,\mathrm{km}$ and a mass of $M_{1}=1.939\,\mathrm{M_{\odot}}$ for this central magnetic field. The interval of radii for the third family ranges from $13.73\,\mathrm{km}$ to $12.04\,\mathrm{km}$ ($\Delta R_2=1.69\, \rm{km}$).
Once the third family is established, $\Delta R_2$ decreases as a function of the central magnetic field and ultimately the twin stars disappear. In particular, at $B_c=6\times 10^ {17}\,\mathrm{G}$, when the twin-star configuration is close to the vanishing threshold, the radius and mass of the hadronic maximum mass star are
$R_{1}=14.62\,\mathrm{km}$ and $M_{1}=2.037\,\mathrm{M_{\odot}}$, and the radius interval for the third family is reduced to $\Delta R_2=(13.32-12.19)\, \rm{km}$.
Note that throughout this work, only gravitational masses were displayed in all figures. Nevertheless, baryonic mass plots would show the same qualitative behavior for twin stars.
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{fig5_mr-rosana.eps}
\caption{Mass-Radius diagram for Set 4. This configuration does not present twins for the non-magnetic case, and the appearance of twin stars takes place for magnetic fields as high as $B_c\sim 4\times 10^{17}\,\mathrm{G}$. For even higher magnitudes of central magnetic fields, of around $B_c\sim 7\times 10^{17}\,\mathrm{G}$, the third family becomes unstable. }
\label{magtwins_rosana_hybrid}
\end{figure}
\section{Discussion and Conclusions}
In this work we made use of different hadronic and quark models to study the effects of magnetic fields on twin stars generated both without and exclusively by magnetic field effects. The hadronic models employed were the many-body forces (MBF) and the Quark-Hadron Chiral Parity Doublet (Q$\chi$P) model. The quark models were the vector MIT bag model, the Constant-Sound-Speed (CSS) approximation and, once more, the Quark-Hadron Chiral Parity Doublet (Q$\chi$P) model.
By defining $M_1$ and $M_2$ as the maximum mass stars on the hadronic and hybrid branches, respectively, we have shown that strong magnetic fields do not affect $M_2$ substantially. On the other hand, $M_1$ corresponds to stars that are less compact and, therefore, show a stronger increase in both mass and radius due to the magnetic field. This particular feature makes the $M_1$ peak increase in value, leading to a less pronounced minimum between the two peaks of mass ($M_1$ and $M_2$), which characterizes a third family of stars. Depending on the EoS models used and on the intensity of magnetic fields considered, this behavior can eventually result in the elimination of twin stars, as shown in our results for EoS in Set 1.
In addition, we have also studied the case in which strong magnetic fields generate twin-star configurations that otherwise would not exist. This is again related to the increase of mass and radius for the (less compact) hadronic branch. In this particular case, shown for Set 4, the critical mass (that corresponds to a critical central density) becomes a peak of mass $M_1$, which also creates a nearby minimum and, consequently, a third family.
From our results, we can confidently state that the mass minimum generated between the two mass maxima when twin stars are present depends on the central magnetic fields as well as on the compactness of stars in the hadronic and hybrid branches. A more thorough future study considering many twin stars configurations will provide a more model-independent relation between those quantities.
Strong magnetic fields give rise to an instability region on the mass-radius diagram, directly affecting hybrid star configurations through the appearance and/or vanishing of a third family of stars. Together, our conclusions indicate that twin stars can only exist as stable objects at specific stages of a magnetar's evolution, as either the absence of strong magnetic fields or the presence of very strong ones reduces the number of models/parametrizations that give rise to a mass degeneracy corresponding to stable stars. In the future, we intend to expand our calculations to include different models together with temperature and rotation effects. This will provide a more quantitative understanding of how twin stars can be a part of star evolution and how magnetic-field decay can generate two families of compact stars.
\begin{acknowledgements}
The authors acknowledge the support from NewCompstar, COST Action MP 1304 and
HIC for FAIR.
The authors thank Dr. Mark Alford for fruitful discussions and suggestions.
V. D. acknowledges support of the National Science Foundation under grant PHY-1748621.
S.H. is supported by Chandra Award TM8-19002X, the N3AS (Network in Neutrinos, Nuclear Astrophysics, and Symmetries) Research Hub under grant NSF PHY-1630782, and the Heising-Simons Foundation (2017-228 Grant).
\end{acknowledgements}
\section*{Introduction}
Biological functions in cells and tissues are driven by the molecular-scale organization of biomolecular assemblies, which arrange in precise structures that are essential, for instance, in biomechanics and morphogenesis. A way to assess such organization is to monitor the orientation of fluorescent labels, in conditions where the label is sufficiently rigidly attached to the biomolecule of interest\cite{Beausang2013,ValadesCruz2016,Mehta2017,Ding2020}. Monitoring orientational behavior of fluorescent molecules is still a challenge, however, because both orientational fluctuations and mean orientation need to be quantified. In particular, measurements can be strongly biased by the fact that molecular orientations may fluctuate at a time scale faster than the measurement integration time, which occurs naturally in biological media even in fixed conditions\cite{Backer2015,ValadesCruz2016,Ding2020}. Recent studies have aimed at adding orientation information to super-resolution imaging, which relies on single molecule localization. Orientation and position are however difficult parameters to disentangle, leading to possible localization biases\cite{Enderlein2006,Backlund2012}. A single molecule's point spread function (PSF) is intrinsically altered by its orientational properties\cite{Enderlein2006,Ding2020}. Several methods have capitalized on this property by using Fourier-plane phase modification of the PSF\cite{Backlund2012,Agrawal,Backer2013,Backer2014}, or imaging finely sampled PSFs\cite{Mortensen2010}. However, these approaches apply only to molecules with fixed orientation.
Recent proposals to access the missing information on wobbling rely on adding complexity to the PSF via phase filtering\cite{Zhang2018} or by using the index mismatch sensitivity of the PSF's shape\cite{Ding2020}, although the axial component of the single molecules' 3D position remains inaccessible.
Other approaches use defocused imaging\cite{Aguet2009,Backer2015}, but they require either fixed orientations or predetermined spatial localization of the molecules. Alternatively, it is possible to preserve less-altered PSF images and restrict the measurements to 2D in-plane orientations by working under relatively low numerical aperture conditions and splitting polarization components\cite{ValadesCruz2016}, or using sequential polarization illumination\cite{Corrie1999,Sosa2001,Peterman2001,Backer2016}. So far, none of these techniques have allowed the simultaneous measurement of 3D orientational properties (including both orientational fluctuations and mean orientation) and 3D spatial position of single molecules, in a single-shot image scheme compatible with super-resolution localization. The main challenge is that the axial position of single molecules and their 3D orientational fluctuations (e.g. their wobbling) are intrinsically coupled by the imaging techniques.
Here we propose a simple method to engineer the molecular PSF so that it efficiently encodes information about all these properties with very little coupling. The method is based on Fourier-plane filtering not only in phase but also in polarization by using spatially-varying birefringence. It builds upon a prior technique for single-shot imaging polarimetry\cite{Roshita,Brandon,Brandon2}, where polarization is encoded in the shape of the PSF. This approach has been applied to
multiple scattering measurements\cite{BrandonAerosols} as well as to the polarimetric characterization of multicore fibers\cite{Sid}. In this work, we show that the same operating principle can be used to retrieve significantly more degrees of freedom when applied to imaging fluorescing molecules, where the PSFs encode information not only of the molecules' transverse coordinates $(x,y)$ but also of their axial height $z$, and of the three-dimensional correlations of the emitted light, which translate into the orientation of the molecules, namely the azimuthal angle $\xi$, the polar angle $\theta$, and the state of wobbling or dithering characterized by the average cone solid angle $\Omega$ (Fig.~\ref{setup}a).
Furthermore, we show that there is negligible coupling in the dependence of the PSFs on the relevant parameters being measured, that the technique involves almost no photon losses, and that transverse spatial resolution is high since the PSFs encoding this information are only about twice as large as those of diffraction-limited imaging. We refer to the new method as Coordinate and Height super-resolution Imaging with Dithering and Orientation (CHIDO).
\section*{Results}
\subsection{PSF encoding through a birefringent mask.}
The basis of the proposed technique is the inclusion at the pupil plane of an element referred to as a stressed-engineered optic (SEO), which is a BK7 glass window subjected to forces with trigonal symmetry at its edges\cite{Alexis,Roshita,Brandon}. The spatially-varying birefringence pattern that naturally results in the vicinity of the force equilibrium point has been shown to be essentially optimal for applications in polarimetry, in the sense that it efficiently encodes polarization information in the PSF's shape while causing the smallest possible increase in PSF size\cite{Anthony}. This birefringence pattern is described by the following Jones matrix (in the linear polarization basis) in the Fourier plane of the detection path (Fig.~\ref{setup}b):
\be\label{JonesMatrixSEO}
\mathbb{J}({\bf u})\!=\!\cos\!\frac{cu}2\left(\!\begin{array}{cc}\!1&0\\0&1\end{array}\!\right)+\mathrm{i}\,\sin\!\frac{cu}2\left(\!\begin{array}{cc}\cos\varphi&\!\!-\sin\varphi\\-\sin\varphi&\!\!-\cos\varphi\end{array}\!\right),
\ee
where $(u,\varphi)$ are polar pupil coordinates normalized so that $u=1$ corresponds to the pupil's edge, and $c$ is a coefficient that depends on the stress within the SEO and the radius of the pupil being used. This parameter can be chosen to optimize the system's performance: small $c$ keeps the extension of the PSFs more restricted but reduces the amount of information they carry about orientation and $z$ displacement, while large $c$ has the opposite effect\cite{Roshita,Anthony}.
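As a numerical sanity check on Eq.~(\ref{JonesMatrixSEO}), the SEO mask can be evaluated directly; the sketch below (with illustrative function names of our choosing, assuming the normalized pupil coordinates defined above) confirms that $\mathbb{J}$ is unitary at every pupil point, consistent with the claim that the technique involves almost no photon losses:

```python
import numpy as np

def seo_jones_matrix(u, phi, c=1.2 * np.pi):
    """Jones matrix of Eq. (1) at normalized pupil coordinates (u, phi);
    c is the stress-dependent coefficient (c = 1.2*pi is the value used
    later in the Cramer-Rao analysis)."""
    half = c * u / 2.0
    stress = np.array([[np.cos(phi), -np.sin(phi)],
                       [-np.sin(phi), -np.cos(phi)]])
    return np.cos(half) * np.eye(2) + 1j * np.sin(half) * stress

# Since the stress matrix is real, symmetric, and squares to the identity,
# J is unitary everywhere: the mask redistributes polarization without loss.
J = seo_jones_matrix(0.7, np.pi / 3)
assert np.allclose(J.conj().T @ J, np.eye(2))
```

Note that at the pupil center ($u=0$) the mask reduces to the identity, so the lowest spatial frequencies pass essentially unmodified.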
After passing through the SEO, the two circular polarization components are separated to form two images by inserting a quarter-wave plate (QWP) followed by a Wollaston prism (Fig.~\ref{setup}b). A Fourier-plane image under circularly-polarized illumination shows the effect of the SEO's spatially-varying birefringence as a Fourier mask on the two detection channels (Fig.~\ref{setup}c).
We now show that the combination of the SEO and the separation of the two circular polarization images allows encoding information about a molecule's orientation and axial displacement in the shape of the PSFs. Let us model the fluorescing molecule as a quasi-monochromatic point dipole that can have any orientation (fixed or fluctuating) in three dimensions\cite{Bohmer:03,Aguet2009,Backer2015,HieuThao:20,Chandler1:19,Chandler2:19}. For now we assume that this dipole is at the center of the object focal plane of the objective, $(x,y,z)=(0,0,0)$; the effects of lateral and axial displacements will be discussed later. The dipole is placed in a homogeneous medium, at a distance to the glass coverslip larger than the wavelength. This source can be described by the $3\times3$ second moment (or correlation) matrix $\mathbf{\Gamma}$ with elements $\Gamma_{ij} =\langle E_i^*E_j\rangle$ with $i,j=x,y,z$, $E_i$ being the radiated field components, and the angular brackets denoting an average over the integration time of the detector\cite{Backer2015} (Supplementary Note 1).
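To make the second-moment definition concrete, $\mathbf{\Gamma}$ can be simulated by time-averaging over sampled dipole orientations. The sketch below (a simplified illustrative model with names of our choosing, not the paper's simulation code) does so for a dipole wobbling uniformly within a cone about the $z$ axis:

```python
import numpy as np

rng = np.random.default_rng(2)

def correlation_matrix(dipoles):
    """Time-averaged correlation matrix Gamma_ij = <d_i d_j> for a set of
    real unit dipole orientations sampled over the integration time."""
    return np.einsum('ti,tj->ij', dipoles, dipoles) / len(dipoles)

# Dipole wobbling uniformly within a cone of half-angle 30 degrees about z
# (illustrative numbers only).
half = np.deg2rad(30.0)
n = 20000
cos_t = rng.uniform(np.cos(half), 1.0, n)   # uniform sampling of the cap
phi = rng.uniform(0.0, 2.0 * np.pi, n)
sin_t = np.sqrt(1.0 - cos_t**2)
d = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)

Gamma = correlation_matrix(d)   # real, symmetric, unit trace;
                                # dominant eigenvector close to z
```

For linear (non-helical) dipoles this matrix is real and symmetric, which is why the imaginary-part parameters $S_3$, $S_5$, $S_7$ drop out of the treatment below.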
This type of $3\times3$ correlation matrix has also been used to study nonparaxial polarization\cite{Brosseau,Sampson,Barakat,Tero,2methods}. For the sake of analogy with standard polarimetry (where the correlation matrix is only $2\times2$), we write $\mathbf{\Gamma}$ in terms of the {\it generalized 3D Stokes parameters} $S_n$, which are the coefficients of the expansion of this matrix in terms of the Gell-Mann matrices $\mathbf{g}_n$ (instead of the Pauli matrices used for $2\times2$ correlations, whose coefficients are the standard Stokes parameters)\cite{Brosseau}. The resulting expression is
\be
\!\mathbf{\Gamma}\!=\!\sum_{n=0}^8S_n\mathbf{g}_n=\left(\begin{array}{ccc}\frac{S_0+S_8}{\sqrt{3}}+S_1&S_2-\mathrm{i} S_3&S_4-\mathrm{i} S_5\\S_2+\mathrm{i} S_3&\frac{S_0+S_8}{\sqrt{3}}-S_1&S_6-\mathrm{i} S_7\\ S_4+\mathrm{i} S_5&S_6+\mathrm{i} S_7&\frac{S_0-2S_8}{\sqrt{3}}\end{array}\right).
\label{Gamma}
\ee
Note that we use a nonstandard numbering scheme for the Gell-Mann matrices: the elements $n=1,2,3$ are cycled so that the resulting parameters $S_n$ reduce to the standard Stokes parameters for $n=1,2,3$ when the field's $z$ component vanishes. (In this case, $S_0$ differs from the corresponding Stokes parameter for paraxial light by a factor of $\sqrt{3}/2$.)
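Because Eq.~(\ref{Gamma}) is linear, converting between $\mathbf{\Gamma}$ and the generalized 3D Stokes parameters is straightforward in either direction; a minimal sketch (function names are ours) that writes out the matrix explicitly and inverts it element by element:

```python
import numpy as np

r3 = np.sqrt(3.0)

def gamma_from_stokes(S):
    """Correlation matrix written out explicitly from S[0..8], as in Eq. (2)."""
    S0, S1, S2, S3, S4, S5, S6, S7, S8 = S
    return np.array([
        [(S0 + S8) / r3 + S1, S2 - 1j * S3, S4 - 1j * S5],
        [S2 + 1j * S3, (S0 + S8) / r3 - S1, S6 - 1j * S7],
        [S4 + 1j * S5, S6 + 1j * S7, (S0 - 2.0 * S8) / r3],
    ])

def stokes_from_gamma(G):
    """Invert Eq. (2): project the correlation matrix back onto S_0..S_8."""
    return np.array([
        np.real(np.trace(G)) / r3,                            # S0
        np.real(G[0, 0] - G[1, 1]) / 2.0,                     # S1
        np.real(G[0, 1]), -np.imag(G[0, 1]),                  # S2, S3
        np.real(G[0, 2]), -np.imag(G[0, 2]),                  # S4, S5
        np.real(G[1, 2]), -np.imag(G[1, 2]),                  # S6, S7
        np.real(G[0, 0] + G[1, 1] - 2.0 * G[2, 2]) / (2.0 * r3),  # S8
    ])
```

The round trip `stokes_from_gamma(gamma_from_stokes(S))` recovers the input parameters, which is a useful consistency check on the nonstandard numbering scheme.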
Several measures have been proposed for the degree of polarization of nonparaxial light\cite{2methods}, one of them\cite{Sampson,Barakat,Tero} having a definition in terms of the generalized 3D Stokes parameters that resembles the standard one for paraxial light:
\be
P_{3{\rm D}}=\frac1{S_0}\left(\sum_{n=1}^8S_n^2\right)^{1/2}=\left[\frac{3\,{\rm tr}\mathbf{\Gamma}^2}{2\,({\rm tr}\mathbf{\Gamma})^2} -\frac12\right]^{1/2}.
\ee
In the present context, this degree of polarization is related to the amount of wobbling of the fluorescent dipole source. For a dipole wobbling uniformly within a cone, the cone solid angle $\Omega$ is a monotonic function of this degree of polarization (see Supplementary Note 2):
\be
\Omega=\pi\left(3-\sqrt{1+8P_{3{\rm D}}}\right).\label{Omega}
\ee
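The expressions for $P_{3\rm D}$ and $\Omega$ translate directly into code; the sketch below (our function names; the clipping of tiny negative rounding errors is ours) also includes the apex angle $\delta$ of the wobble cone, related to the solid angle by $\Omega=2\pi(1-\cos(\delta/2))$:

```python
import numpy as np

def degree_of_polarization_3d(G):
    """3D degree of polarization computed directly from the correlation
    matrix; tiny negative rounding errors are clipped before the sqrt."""
    tr = np.real(np.trace(G))
    tr2 = np.real(np.trace(G @ G))
    return np.sqrt(max(1.5 * tr2 / tr**2 - 0.5, 0.0))

def cone_solid_angle(p3d):
    """Solid angle Omega of the uniform wobble cone as a function of P3D."""
    return np.pi * (3.0 - np.sqrt(1.0 + 8.0 * p3d))

def wobble_angle(omega):
    """Apex angle delta of a cone with solid angle
    Omega = 2*pi*(1 - cos(delta/2))."""
    return 2.0 * np.arccos(1.0 - omega / (2.0 * np.pi))

# A fixed dipole (e.g. along x) is fully polarized with a zero wobble cone;
# a fully isotropic emitter has P3D = 0 and Omega = 2*pi.
```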
Let the PSFs at the two detector regions be denoted as $I^{(p)}$, where $p$ labels the polarization component being imaged at the corresponding detector: $p=$ r for right-hand circular (RHC) and $p=$ l for left-hand circular (LHC) (Fig.~\ref{setup}d). These PSFs depend linearly on the generalized Stokes parameters according to (Supplementary Note 1)
\be
I^{(p)}(\boldsymbol{\rho})=\sum_{n=0}^8S_n{\cal I}_n^{(p)}(\boldsymbol{\rho}),
\label{linearrel}
\ee
where ${\cal I}_n^{(p)}$ are contributions to the PSF corresponding to each generalized Stokes parameter. Expressions for these contributions are derived in Supplementary Note 1, and theoretical images for some of them at $z=0$ are shown in the top row of Fig.~\ref{PSFcomps} (where they are labeled as ${\cal I}_{n,0}^{(p)}$ for reasons that will become apparent later). Note that Fig.~\ref{PSFcomps} does not include images for ${\cal I}_3^{(p)}$, ${\cal I}_5^{(p)}$, and ${\cal I}_7^{(p)}$ because they are not of interest to the current problem. (The complete set is shown in Supplementary Figure 1.) This is because, as we can see from Eq.~(\ref{Gamma}), the generalized Stokes parameters $S_3$, $S_5$ and $S_7$ correspond to the imaginary part of $\mathbf{\Gamma}$ and therefore encode information about the helicity of the emitted field, which is assumed not to exist since the emitters are (possibly wobbling) linear dipoles. Nevertheless, if the particles did emit light with some helicity,
these elements could be incorporated into the treatment.
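Since Eq.~(\ref{linearrel}) is linear in the generalized Stokes parameters, estimating them from a measured PSF pair reduces, in the noiseless limit, to linear least squares. The sketch below uses random arrays as stand-ins for the calibrated basis images ${\cal I}_n^{(p)}$, purely to illustrate the inversion step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random stand-ins for the nine basis images I_n^{(p)}(rho) on a small grid,
# with the second axis indexing the two circular-polarization channels.
basis = rng.normal(size=(9, 2, 15, 15))        # (n, channel, y, x)

S_true = rng.normal(size=9)                    # generalized Stokes parameters
measured = np.tensordot(S_true, basis, axes=1) # I^{(p)} = sum_n S_n I_n^{(p)}

# Stack both channels into one design matrix and recover S_n by least squares.
A = basis.reshape(9, -1).T                     # (pixels * channels, 9)
S_est, *_ = np.linalg.lstsq(A, measured.ravel(), rcond=None)
```

In practice the fit is performed against calibrated reference PSFs and noisy data, as described in Supplementary Note 2, but the linear-inversion core is the same.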
An important feature of the SEO's birefringence pattern is that it makes this set of PSF components nearly orthogonal while keeping their extension almost as small as possible (in analogy to the case of paraxial polarization\cite{Anthony}). This approximate orthogonality implies strong decoupling of the information for each parameter. Another important property of the SEO is its approximate achromaticity over a spectral range corresponding to fluorescence spectral widths (typically 100 nm), in contrast to PSF engineering methods based on pure phase masks. The only chromatic dependence of the Jones matrix in Eq.~(\ref{JonesMatrixSEO}) is within the parameter $c$, which is roughly inversely proportional to the wavelength. This variation compensates for the natural scaling of the PSF with wavelength, such that the PSFs resulting from the integration over the fluorescence spectrum being measured are nearly indistinguishable from those resulting from only the peak wavelength.
If a measurement required larger wavelength ranges, appropriate recalibration must be used, as in any PSF-engineering-based technique. The chromatic dependence of CHIDO is discussed in Supplementary Note 1 and quantified in Supplementary Figure 2.
When the emitter is within the plane conjugate to the image, each of its two images is a linear combination of the six PSFs shown in the first row of Fig.~\ref{PSFcomps}, according to Eq.~(\ref{linearrel}). The possible differences between the two images arise from a global sign change of two members of the PSF basis set, ${\cal I}_4^{(p)}$ and ${\cal I}_6^{(p)}$. Figure~\ref{PSFexamples} and Supplementary Movie 1 show simulations of measured PSF pairs corresponding to several dipole orientations.
Also shown for comparison are the corresponding diffraction-limited PSFs resulting from not using the SEO, whose shape is nearly independent of the in-plane angle $\xi$.
In contrast, when the SEO is used, the PSFs acquire a crescent shape for a dipole within the $xy$-plane, and a rotation of the dipole within this plane results in an approximate rotation of both PSFs, in the opposite sense to the dipole and by twice the angle. Note that these PSFs are only about twice as large as the diffraction-limited ones. A dipole in the $z$ direction, on the other hand, corresponds to a PSF with trigonal symmetry (which is also only about twice as large as the corresponding diffraction-limited PSF). Wobbling of the dipole about its nominal direction has the effect of blurring the PSFs in a predictable way. Therefore, the parameters $S_n$ can be estimated by making the superposition in Eq.~(\ref{linearrel}) agree as closely as possible with the measured pair of PSFs (see Supplementary Note 2). From these parameters the matrix $\mathbf{\Gamma}$ can be constructed using Eq.~(\ref{Gamma}), which is real and symmetric because $S_3=S_5=S_7=0$. The central direction of the dipole source is then estimated as that of the eigenvector of $\mathbf{\Gamma}$ with the largest eigenvalue. The remaining eigenvectors and eigenvalues provide information about the wobbling of the molecule (Supplementary Note 2).
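The last step of the procedure just described — assembling $\mathbf{\Gamma}$ from the fitted parameters and diagonalizing it — can be sketched as follows (the example matrix and function name are illustrative):

```python
import numpy as np

def dipole_axis(G):
    """Mean dipole direction from the dominant eigenvector of the real
    symmetric correlation matrix; the remaining eigenvalues carry the
    wobble information (note the head-tail ambiguity of a dipole axis)."""
    vals, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    axis = vecs[:, -1]                      # dominant eigenvector
    theta = np.arccos(np.clip(abs(axis[2]), 0.0, 1.0))   # polar angle
    xi = np.arctan2(axis[1], axis[0])                    # azimuthal angle
    return theta, xi, vals

# Illustrative Gamma for a dipole near the x axis with slight wobble.
G = np.array([[0.80, 0.05, 0.00],
              [0.05, 0.12, 0.00],
              [0.00, 0.00, 0.08]])
theta, xi, vals = dipole_axis(G)            # theta = pi/2: in-plane dipole
```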
Additionally, in the minimization procedure that leads to the retrieval of the parameters $S_n$, the transverse $x,y$ position of each emitter can be estimated to within a fraction of a pixel (Supplementary Note 2). This analysis can be performed simultaneously for multiple emitters within an image, as long as their PSFs do not overlap.
In addition to orientation and transverse localization, the measured images provide accurate information about axial localization, since the PSFs depend on $z$ (significantly more so than those without the SEO). As shown in Fig.~\ref{PSFexamples} and Supplementary Movie 1, a variation in $z$ for a dipole oriented within the $xy$-plane causes a rotation of both measured PSFs, but these rotate in opposite directions. This is in contrast with an in-plane rotation of the dipole (a change in $\xi$), which causes common rotation of the PSFs. Therefore, if only the image corresponding to one polarization component were used, it would be nearly impossible to distinguish height from orientation, but imaging separately both circular components fully decouples $z$ and $\xi$. A rotation of the two PSFs in opposite directions also occurs when a dipole oriented in the $z$ direction changes height. The estimation of the $z$ position can then be performed by optimizing the fit of the measured PSFs to those of the basis PSFs from a bank of images corresponding to different heights.
Here, however, we use for the sake of illustration a simpler and faster retrieval algorithm: for small displacements in $z$, we can use a polynomial expression in $z$ of the form
\be
{\cal I}_n^{(p)}(\boldsymbol{\rho},z)\approx\sum_{m=0}^Mz^m{\cal I}_{n,m}^{(p)}(\boldsymbol{\rho}).
\label{ztothen}
\ee
That is, the measured PSFs can be fitted with an expanded basis composed of the $6(M+1)$ elements ${\cal I}_{n,m}^{(p)}(\boldsymbol{\rho})$. Figure~\ref{PSFcomps} shows the elements for $m=0$ and $m=1$. (The full basis is depicted in Supplementary Figure 1.) The second row, in particular, indicates the rate of change of the PSFs with small amounts of axial displacement around the nominal plane.
Notice the similarity of the PSFs ${\cal I}_{1,1}^{(p)}$ and ${\cal I}_{2,1}^{(p)}$ with ${\cal I}_{2,0}^{(p)}$ and ${\cal I}_{1,0}^{(p)}$, respectively. This similarity explains the fact that both the in-plane rotation and vertical displacement of a horizontal dipole cause rotations. The distinguishability between them arises because ${\cal I}_{1,1}^{(p)}$ and ${\cal I}_{2,1}^{(p)}$ have opposite signs for $p=$ r and l, while ${\cal I}_{2,0}^{(p)}$ and ${\cal I}_{1,0}^{(p)}$ do not, making these two sets of PSFs (where each includes both detector regions, $p=$ r, l) essentially orthogonal in the decomposition, so that in-plane orientation and height are completely decoupled in the retrieval process. Details of this simple approximate approach for the estimation of $z$, its limitations and ways to improve its range of validity are discussed in Supplementary Note 2.
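With the expansion of Eq.~(\ref{ztothen}), the joint estimation of the $S_n$ and a small defocus $z$ becomes a single linear fit followed by a ratio of coefficients. A toy sketch with random stand-in basis images (illustrative only, using $M=1$ and the six real Stokes components):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expanded basis I_{n,m}^{(p)}: n = 0..5 (six real Stokes
# components), m = 0, 1 (constant and linear-in-z terms), two channels.
basis = rng.normal(size=(6, 2, 2, 12, 12))     # (n, m, channel, y, x)

S_true = rng.normal(size=6)
z_true = 0.15                                  # defocus, arbitrary units
coeff_true = S_true[:, None] * z_true ** np.arange(2)   # c_{n,m} = S_n z^m
measured = np.tensordot(coeff_true.ravel(), basis.reshape(12, -1), axes=1)

# Joint linear fit of the 12 coefficients c_{n,m}, then z from the ratio of
# the m = 1 and m = 0 coefficients of the best-constrained component.
A = basis.reshape(12, -1).T
c, *_ = np.linalg.lstsq(A, measured, rcond=None)
c = c.reshape(6, 2)
n_best = np.argmax(np.abs(c[:, 0]))
z_est = c[n_best, 1] / c[n_best, 0]            # recovers z_true here
```

This is the noiseless limit; with noise and higher $M$ the same fit is regularized and restricted to the range of validity of the polynomial expansion.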
\subsection{Cram\'er-Rao analysis.}
In order to estimate the sensitivity of CHIDO, we use Cram\'er-Rao (CR) lower bounds\cite{Ober2004,Vella2020} on the uncertainties of the six parameters being measured ($x,y,z,\xi,\theta,\Omega$). These bounds were deduced from a numerical calculation of the inverse of the Fisher Information matrix, in this case of dimension $6\times6$. Each of the six lower bounds depends on all six parameters, as well as on the photon number, the SEO's stress parameter $c$, and the pixelation of the PSFs. To reduce the size of the parameter space being explored, we fix $c=1.2\pi$ and assume a pixelation level comparable to that of our experimental implementation. For the sake of illustration, the results presented in what follows assume a total of 1000 detected photons over the two detection channels; the obtained levels of error scale as the inverse of the square root of the photon count. Also, for simplicity of calculation, we use $P_{\rm 3D}$ as the measure of wobble (instead of $\Omega$) since the PSFs depend linearly on this parameter. For moderate levels of wobbling, the CR lower bound for $\Omega$ is essentially equal to that for $P_{\rm 3D}$ times $8\pi/3$, as can be verified from Eq.~(\ref{Omega}). Note that the results that follow use PSFs calculated rigorously as functions of $z$ rather than the polynomial approximation mentioned in the previous section, in order to reflect the fundamental capabilities of the technique.
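The Fisher-information calculation behind such bounds is standard for shot-noise-limited (Poisson) imaging, where $F_{jk}=\sum_i(\partial_j\mu_i)(\partial_k\mu_i)/\mu_i$ for expected pixel counts $\mu_i$. A generic numerical sketch on a toy one-dimensional localization model (not the paper's full six-parameter PSF model):

```python
import numpy as np

def cramer_rao_bounds(model, params, n_photons=1000, eps=1e-6):
    """Diagonal Cramer-Rao lower bounds for a Poisson imaging model.
    model(params) must return a normalized intensity pattern; derivatives
    are taken by central finite differences."""
    p = np.asarray(params, dtype=float)
    mu = n_photons * model(p)
    grads = []
    for k in range(p.size):
        dp = np.zeros_like(p)
        dp[k] = eps
        grads.append(n_photons * (model(p + dp) - model(p - dp)) / (2 * eps))
    g = np.array(grads).reshape(p.size, -1)
    fisher = (g[:, None, :] * g[None, :, :] / mu.ravel()).sum(axis=-1)
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

# Toy model: a 1D Gaussian spot of width sigma = 0.05 parameterized by its
# center x0; the bound approaches the familiar sigma / sqrt(N).
xs = np.linspace(-1.0, 1.0, 201)

def spot(p):
    I = np.exp(-(xs - p[0])**2 / (2 * 0.05**2))
    return I / I.sum()

sigma_x = cramer_rao_bounds(spot, [0.0])   # close to 0.05/sqrt(1000)
```

For the full six-parameter CHIDO model the same construction yields a $6\times6$ Fisher matrix, whose inverse supplies both the bounds and the inter-parameter correlations discussed below.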
Figure~\ref{figCR}(a) shows the variations of the six CR lower bounds for a non-wobbling fluorophore within the plane $z=0$ with in-plane orientation ($\theta=\pi/2$) as a function of $\xi$.
Note that the dependence of the lower bounds on the in-plane orientation of the fluorophore is not very significant. The CR analysis also provides measures of the correlations between the different parameters. Several of these correlations are essentially zero (e.g. those coupling either $z$ or $\theta$ to the remaining parameters), but some vary more appreciably with $\xi$ although they still take small values, the maximum being about $0.25$. The inset shows the average over all $\xi$ of the magnitude of these correlations, which is never greater than about $10\%$.
This confirms the high level of statistical independence of the parameter retrieval, which is an important attribute of CHIDO. The variation of the CR bounds as the off-plane orientation changes is illustrated in Fig.~\ref{figCR}(b) (assuming $\xi=0$), showing that this change of orientation makes the uncertainty in $z$ first decrease slightly and then increase by less than a factor of two. The uncertainty in $\xi$ grows very rapidly as $\theta\to0$, but this is an artifact of the spherical angles since $\xi$ is undefined at $\theta=0$; by defining a combined angular lower bound as $\sigma_\alpha=(\sigma_\theta^2+\sin^2\theta\,\sigma_\xi^2)^{1/2}$, we see that the actual angular uncertainty is fairly constant.
Given that the PSFs occupy several pixels, the CR bounds depend very weakly on changes in $x$ and $y$. The dependence on $z$ is more significant, as can be seen in Fig.~\ref{figCR}(c), which shows that 3D spatial localization degrades as the PSFs expand with defocus;
the different behavior in $x$ and $y$ is due to the chosen molecule orientation ($\xi=0$). On the other hand, $z$ has essentially no effect on the accuracy of the direction estimate. Finally, Fig.~\ref{figCR}(d) shows that even moderate amounts of wobble have an adverse effect on the CR bounds, particularly on the estimation of height. Nevertheless, the accuracy values remain at a reasonable level.
The estimates just shown, as well as other simulations we performed, indicate that when a few thousand photons are measured, one can expect an accuracy in transverse position of a few nanometers, and an uncertainty in $z$ about three or four times larger. The corresponding accuracy in the determination of orientation angles is of a few degrees.
These levels of precision are comparable or superior to those of other approaches restricted to the estimation of a subset of the parameters, whether they are based on engineering the PSF \cite{Agrawal,Backer2013,Zhang2018} or on observing the natural change in shape of un-engineered PSFs \cite{Aguet2009,Ding2020}.
The determination of wobble is the most challenging, since (except for specific positions and orientations) significant accuracy might require the detection of tens of thousands of photons. However, CHIDO does allow the estimation of wobble without significant coupling to other parameters. Finally, to evaluate the robustness of the method with respect to image aberrations, the calculations were repeated assuming that the system presents one wave of spherical aberration. While this aberration does change the shapes of the PSFs, the CR bounds remain largely the same.
\subsection{Reference measurements.}
The optical set-up for CHIDO is displayed in Fig.~\ref{setup}b (see Methods). A 488 nm continuous-wave laser is used for wide-field illumination of the sample via a high numerical aperture objective. The fluorescence is imaged onto an emCCD camera after passing through the SEO placed at the imaged back focal plane of the objective. To mimic the polarization distribution at the pupil plane of dipole molecules with different orientations, we first used fluorescent nanobeads immobilized in a mounting medium (see Methods), together with chosen polarizing elements prior to the SEO. Besides serving as models for dipole sources, these measurements help provide a basis of reference PSFs to be used for the retrieval of unknown dipole orientations (Supplementary Note 2). Importantly, nanobeads are also used to fine-tune the alignment of the SEO when used under circular polarization (see Methods). In this configuration, we measured complementary rotationally symmetric PSF shapes in the RHC and LHC channels (Fig.~\ref{zfigs}a), close to what is expected from theory (Fig.~\ref{zfigs}b).
We first investigated the case of emitters oriented in the axial $z$ direction ($\theta=0^\circ$), whose polarization distribution at the pupil plane is radial. To simulate this situation, we inserted a radial polarization converter (Altechna, S-waveplate) before the SEO. The model could be made more accurate by also introducing an amplitude filter that simulates the correct radial dependence (approximately linear rather than constant). However, numerical simulations show that the difference in the resulting PSFs is not significant. Images of four different sets of nanobeads (corresponding to different regions of the same sample) were measured at five defocus distances each, at separations of 200 nm (see Methods). A typical image taken at the central defocus position is shown in Fig.~\ref{zfigs}c. One bead from this set was selected to construct the reference PSFs, following the approach described in Supplementary Note 2, where the dependence in $z$ was approximated by using Eq.~(\ref{ztothen}) with $M=2$. This low value of $M$ makes the algorithm faster, but comes at the price of introducing systematic errors and limiting the range in $z$ over which the retrieval is valid. Using these references, the transverse and axial positions of the nanobeads were detected. Some of the results were discarded due to low confidence in the fit, caused either by low signal levels, overlapping PSFs, or PSFs clipped at the edge of the field of view. The resulting number of nanobeads used for retrieval in set 1 was about 21 on average, while for the remaining sets it was about 35. The average and standard deviations of the retrieved heights for each of the measurements are shown in Fig.~\ref{zfigs}d. For the four sets, the average estimated heights are separated by approximately 200 nm as expected. This result used a correction in which systematic errors were largely removed by replacing $z$ with an appropriate monotonic function of $z$. 
(The results without this correction are shown in Supplementary Figure 4.) Note that from the retrieved 3D positions over the four sets, it was observed that the plane containing the nanobeads was tilted by about a quarter of a degree. More details about the retrieved data from each set are shown in Supplementary Movie 2.
We then simulated emitters with different orientations within the $xy$-plane (i.e. for $\theta=90^\circ$ and varying $\xi$) by replacing the S-waveplate with a linear polarizer prior to the SEO (see Methods). Images were taken for two sets of nanobeads corresponding to two regions of a sample, each at five defocus heights in steps of 200 nm, and for several orientations of the polarizer in steps of $10^\circ$ over a range of $180^\circ$. One of these measurements is shown in Fig.~\ref{linfigs}a. A single bead was selected to generate the reference PSFs used in the retrieval for the others. Once more, a threshold in the level of confidence of the fit was applied to eliminate errors from overlapping/clipped PSFs and low signals, yielding results for about 30 nanobeads in set 1 and 36 in set 2. The insets in Fig.~\ref{linfigs}a show the retrieved $(x,y)$ position of a specific nanobead. The retrieved heights and orientations and their standard deviations for the two sets are shown in Fig.~\ref{linfigs}b, whose data are fully displayed in Supplementary Movie 3 and Supplementary Movie 4. An average defocus shift of about 100 nm was found between the two sets. We can also appreciate from the measurements that there was a relative drift in $z$ between both sets of about 100 nm over the time of data collection (over 30 minutes for each). Finally, it can be seen that the large standard deviations for some heights and directions in Fig.~\ref{linfigs}b are caused mostly by a few outliers not filtered out by the confidence threshold, corresponding to PSFs with low intensity, with overlaps, or clipped by the edge of the field of view. In general, we can see that the use of a quadratic approximation for $z$ gives rise to a magnification of the errors at the edges of the interval.
Since the number of photons detected for each nanobead was on the order of hundreds of thousands, the CR lower bound for the uncertainty in $z$ is significantly below the spread in the measurements (about 25 to 60 nm near $z=0$, sometimes significantly larger away from the nominal plane) for both the radially-polarized and the linearly-polarized cases. This larger error is partly due to the simplified approximate $z$-dependence of the PSF model used. (Future work will be devoted to refining these issues.) More importantly, this error is likely due to experimental factors, such as field-dependent aberrations over the measured field of view, non-uniformity of the substrate's shape, or even variations in the size of the nanobeads (nominally 100 nm). It is therefore not surprising that the obtained standard deviations do not reach the CR bounds.
\subsection{Single molecules and super-resolution imaging.}
We then applied CHIDO to super-resolution orientational imaging, using fluorophores appropriate for Stochastic Optical Reconstruction Microscopy (STORM) \cite{Dempsey2011}. In order to evaluate the capacity of CHIDO to retrieve both 3D orientations and 3D positions of single molecules, we first imaged Alexa Fluor 488 fluorescent molecules (AF488) sparsely attached to in-vitro reconstructed F-actin single filaments via phalloidin (see Methods). These molecules are known to keep an average orientation along the actin filament, with a non-negligible wobbling extent\cite{ValadesCruz2016}. We retrieved their localization and orientation by using a PSF basis set constructed from a combination of theoretical calculations and the reference nanobead measurements (see Supplementary Note 2). Figure~\ref{molecules} shows the results obtained for three molecules positioned along an F-actin filament. For each, three sets of images were taken at defocus separations of 200~nm. Isolated pairs of PSFs around the retrieved positions (after subtraction of the average background) are shown in the insets of Fig.~\ref{molecules}a. The retrieved 3D positions, orientations and wobble angles of these molecules are presented in the figure and in Table~\ref{moleculedata}, with the exception of that for the top position for molecule 1, which fell outside the range of validity of the model generated from the nanobeads reference measurements. The range of in-plane orientations $\xi$ measured for the three molecules is restricted to a $30^\circ$ interval, which is expected from their attachment to a single oriented filament. The off-plane angle $\theta$ and wobble angle $\delta=2{\rm arccos}(1-\Omega/2\pi)$ are also consistent with expectations: polarized measurements performed in 2D have shown fluctuations within an extent $\delta$ of about $90^\circ$, with a tilt angle with respect to the fiber that can reach $20^\circ$\cite{ValadesCruz2016}. 
In the course of the measurements at different $z$ positions, the retrieved transverse positions (with respect to the center of the selected insets) coincide to within a few tens of nanometers and the direction angles to within about ten degrees. The estimated defocus spacings are on the order of 200~nm as expected. Given the long integration times, a total of about 40000 photons were detected for each molecule, so that according to the CR analysis even the estimates of wobble are meaningful. However, the levels of error in the measurements do not yet reach those predicted by the CR bounds, given imperfections in the determination of the reference PSF basis used for parameter retrieval, as commented on in the Discussion section.
Finally, we show that CHIDO can be applied to produce
STORM images of single F-actin filaments densely labelled with AF488, within a buffer appropriate for on-off blinking conditions that allows individual emitters to be localized (see Methods and Supplementary Movie 5). Estimates of 3D localization and orientation were performed on each detected single molecule, producing STORM-like super-resolution images that contain not only nano-scale localization information, but also 3D orientation and wobbling information. Figure~\ref{STORM}a shows all detected molecules in a stack of about 4000 STORM images, on which single filaments are also identified by their low-resolution image. We selected those molecules for which the measured PSFs allow meaningful CHIDO parameter retrievals.
Figure~\ref{STORM}b depicts the retrieved set of parameters for several molecules over three zoomed sections of the image (see Supplementary Note 3 and Supplementary Movies 6 to 8 for a 3D representation of the data and the corresponding raw PSFs). Once more, the measured orientations and wobble are in agreement with expected values. Interestingly, the retrieved heights cover a large range, which we attribute to the non-planar deposition of single filaments on the coverslip surface. Filaments can also lie on top of each other, as shown in the zoomed region on the right side of Figure~\ref{STORM}b, where two molecules were detected near the junction of two filaments; their orientations are nearly perpendicular and their heights notably different, as can be appreciated from their PSFs depicted in Figure~\ref{STORM}c.
Note that the photon levels varied significantly among the molecules depicted in Figure~\ref{STORM}, from about 2500 to over 50000 (due to the sum performed over single molecules appearing in several consecutive images of the STORM stack). According to the CR bounds, the highest precision of the retrieved positions is expected to be of the order of a nanometer, and that of the direction of about a degree. Furthermore, for the brightest fluorophores, the accuracy in the degree of polarization is about 0.15, which translates into an accuracy of the wobble solid angle of just above a steradian. These results show that CHIDO is appropriate for STORM imaging conditions.
\begin{table}
\centering
\caption{\bf Retrieved positions, orientations and wobble angle for the fluorescent molecules in Fig.~\ref{molecules}a,b}
\begingroup
\setlength{\tabcolsep}{3.7pt}
\renewcommand{\arraystretch}{1}
\begin{tabular}{c|c c|c c c|c c c}
&\multicolumn{2}{c}{Molecule 1}&\multicolumn{3}{c}{Molecule 2}&\multicolumn{3}{c}{Molecule 3}
\\
&$z_1$&$z_2$&$z_1$&$z_2$&$z_3$&$z_1$&$z_2$&$z_3$ \\
\hline
$x$ (nm) &$-108$&$-98$&$-72$&$-93$&$-109$&$-132$&$-118$&$-114$
\\
$y$ (nm)&$110$&$137$
&$62$&$76$&$38$&$23$&$19$&$15$
\\
$z$ (nm)&$-2$&$221$&$-218$&$8$&$121$&$-499$&$-268$&$-75$
\\
$\xi$&$66^\circ$&$71^\circ$&$74^\circ$&$77^\circ$&$72^\circ$&$89^\circ$&$83^\circ$&$88^\circ$
\\
$\theta$ &$77^\circ$&$75^\circ$&$80^\circ$&$79^\circ$&$62^\circ$&$79^\circ$&$72^\circ$&$80^\circ$
\\
$\delta$ &$41^\circ$&$81^\circ$&$122^\circ$&$101^\circ$&$86^\circ$&$132^\circ$&$88^\circ$&$98^\circ$
\\
\end{tabular}
\endgroup
\label{moleculedata}
\end{table}
\section*{Discussion}
A new method, CHIDO, was proposed that allows the measurement of the 3D position, averaged 3D orientation and wobbling of isolated fluorophores, readily applicable for super-resolution orientational imaging. The key elements of this method are a specific spatially-varying birefringent mask, the SEO, inserted at the pupil plane, and the subsequent separation of the two circular components to form separate images. The use of both images is shown to be of central importance for decoupling the estimations of in-plane orientation and axial position $z$. Despite the large amount of information encoded in the shape of the PSFs, these have dimensions that are only about twice as large as the corresponding diffraction-limited PSFs, making this approach suitable for measurements with relatively high densities of molecules since their PSFs would not overlap significantly if their separations are about a micron or even less. Importantly, CHIDO is
satisfactorily achromatic over the detected spectral range of fluorescence.
The retrieval of the parameters requires a reliable model for the PSFs. These models were obtained here by using nanobeads in combination with polarizers at the pupil plane to mimic molecules with different orientations. For the sake of illustration, measurements for only one nanobead were used to construct the reference PSFs, both in plane and out of plane. A more trustworthy basis of reference PSFs could be generated using multiple bead measurements per orientation and over a wider range of axial distances.
Also, the construction of the PSF basis was based on reference measurements for orientations within the $xy$-plane and normal to it. However, a complete set of PSFs also requires measurements at intermediate off-plane angles (e.g. $\theta=45^\circ$), which are more difficult to simulate experimentally (one imperfect option being an off-center S-waveplate). For the molecule measurements presented here, this incompleteness was alleviated by using a mixture of theoretical simulations and experimental data. However, another option being explored is to access the phase structure associated with the reference PSFs by using phase retrieval techniques (given the diversity in $z$ of the reference measurements). This would also allow a more precise retrieval of the $z$ coordinate over a wider range, since the $z$ dependence of the reference PSFs could be efficiently computed from diffraction calculations. Future work will be largely directed toward different theoretical and experimental techniques for obtaining reliable PSF references that will let the measurements achieve levels of accuracy that approach the fundamental limits given by the CR lower bounds.
A future direction to be explored is to use CHIDO not only to estimate the amount of wobbling of the molecules but also the asymmetry of this wobbling\cite{Beausang2013,Backer2015}. As discussed in Supplementary Note 1, the $3\times3$ correlation matrix in Eq.~(\ref{Gamma}) encodes information about the correlation of all field components\cite{Backer2015}, which in the context of vector coherence corresponds to the shape of an ellipsoid that characterizes the oscillations of the field\cite{Mark}.
Within the current context, this translates into the capacity to estimate not only a solid angle but, say, two angles of oscillation for the molecule, supplemented by an angle of orientation of this elliptical cross-section of the cone. We expect that with refinements of the system, and more importantly of the PSF basis, it will be possible to recover useful information about these extra parameters for single molecules, which can then be compared with computational models for the molecular motion. Finally, while CHIDO was restricted here to non-overlapping PSFs, new fitting procedures could be developed to adapt the method to samples with higher densities, following recent work in the field\cite{Holden2011, Huang2011, Zhu2012, Mailfert2018, Barsic2015, Mazidi2019}.
With these possibilities, other applications for CHIDO can emerge in addition to imaging the 3D position and orientation of fluorophores. For example, this method could be used to probe the $3\times3$ correlation matrix at several points of a strongly nonparaxial field, such as a focused field or an evanescent wave. This would require the use of one or an array of sub-wavelength scatterers such as gold nanoparticles\cite{Lindfors,Lukas}.
\begin{methods}
\subsection{Optical setup.}
The sample is excited by a laser (Coherent, Obis 488LS-20 for reference beads and single molecule measurements; Coherent, Sapphire 488LP-200 for STORM measurements) in a wide-field or TIRF illumination configuration (Fig. \ref{setup}b), and is held on a piezo nanopositioner (Mad City Labs Inc., Nano-Z200) to perform stacks along the $z$ axis with nanometric precision. Fluorescence light emitted by the sample is then collected by a $\times 60$, NA~1.45 oil-immersion objective (Nikon, CFI Apo TIRF). A dichroic mirror (Semrock, DI02-R488) and a fluorescence filter (Semrock, 525/40) are used to select the emitted fluorescence and send it to the detection path. To adjust the field of view, a diaphragm is placed in a plane conjugate to the image plane.
All the lenses are achromatic doublets: L$_1$ (125 mm) and L$_3$ (500 mm) are in a 4f configuration enabling us to locate the SEO in the back-focal plane;
L$_2$ (400 mm) is used for back-focal plane imaging. To simulate emitters with different in-plane orientations, we placed prior to the SEO a linear polarizer (Thorlabs, LPVISE100-A) mounted on a motorized rotation stage (Newport, PR50CC). To compensate for unwanted polarization distortions introduced by the first dichroic mirror, we used another identical dichroic mirror (Semrock, DI02-R488). This second mirror is aligned along the plane where \textit{s} and \textit{p} polarization components of the incident fluorescence are inverted with respect to the incidence on the first dichroic mirror. Finally, the image is split into LHC and RHC polarization components by using a quarter-wave plate followed by a quartz Wollaston polarizing $2.2^\circ$ beamsplitter (Edmunds, 68-820), and each of these components is projected onto a different region of an emCCD camera
(Andor iXon Ultra 897 for beads and single molecule measurements; iXon Ultra 888 for STORM measurements). The total magnification provided by the lenses is $240$, corresponding to a pixel size of $67$ nm on the emCCD for the bead and single molecule measurements, and $54$ nm for the STORM measurements.
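As a quick consistency check (not part of the original text), the quoted object-plane pixel sizes follow from the total magnification if one assumes the nominal 16~$\mu$m (iXon Ultra 897) and 13~$\mu$m (iXon Ultra 888) physical pixel pitches:

```python
# Object-plane pixel size = physical camera pixel pitch / total magnification.
# The 16 um (iXon Ultra 897) and 13 um (iXon Ultra 888) pixel pitches are
# our assumption; the text quotes only the resulting 67 nm and 54 nm values.
MAGNIFICATION = 240

for camera, pitch_um in [("iXon Ultra 897", 16.0), ("iXon Ultra 888", 13.0)]:
    pixel_nm = pitch_um * 1000.0 / MAGNIFICATION
    print(f"{camera}: {pixel_nm:.0f} nm")
```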
\subsection{SEO alignment.}
For the purpose of aligning the SEO and adjusting the parameter $c$, we used a sample of yellow highlighter's fluorescent ink, embedded in a mounting medium (Sigma, Fluoromount). The fluorescence emitted by this sample is used as a bright and homogeneous illumination for the SEO. Circular polarization was produced by placing a linear polarizer and QWP before the SEO. Also, a lens (L$_2$) was inserted to image the SEO plane, leading to complementary rotationally symmetric intensity patterns whose radial dependence for the two emerging circular components is approximately proportional to $\cos^2(cu/2)$ and $\sin^2(cu/2)$, respectively, for $c=\pi$ (Fig.~\ref{setup}c). The system's alignment and calibration are then fine-tuned by removing L$_2$ and keeping the polarizer and QWP, to image model nano-emitters under circular polarization conditions. For this purpose we used fluorescent nanobeads of 100 nm in size (yellow-green Carboxylate-Modified FluoSpheres), immobilized on the surface of a poly-L-lysine coated coverslip and covered with a mounting medium (Sigma, Fluoromount). Ideally, the resulting images are complementary, nearly rotationally symmetric PSF shapes, one of them donut-like, the other a bright spot, as shown in Figs.~\ref{zfigs}a,b\cite{Roshita}. These shapes are robust under defocus but they are sensitive to polarization distortions, so they can also be useful for calibrating residual undesired birefringence.
Once this stage of the calibration was complete, the polarizer and QWP prior to the SEO were removed.
\subsection{Single molecules imaging.}
To produce in-vitro reconstituted F-actin filaments, G-actin (AKL99, Cytoskeleton, Inc.) was polymerized at 5 $\mu$M in a polymerization buffer (5 mM Tris-HCl at pH 8.0, 50 mM KCl, 1 mM MgCl$_2$, 0.2 mM Na$_2$ATP at pH 7.0, 1 mM DTT) in the presence of 5 $\mu$M phalloidin to stabilize the polymerization. To make the labeling sparse enough to isolate single molecules, we used a ratio of 1:200 phalloidin conjugated to Alexa Fluor 488. The filaments were then diluted to 0.2 $\mu$M, immobilized on the coverslip surface coated with poly-L-lysine and covered with an imaging buffer containing an oxygen scavenging system (5 mM Tris-HCl at pH 8.0, 50 mM KCl, 1 mM MgCl$_2$, 0.2 mM Na$_2$ATP at pH 7.0, 1 mM DTT, 1 mM Trolox, 2 mM PCA, 0.1 $\mu$M PCD). The typical experimental conditions for single-molecule imaging were: TIRF illumination, laser power of a few mW (at the objective plane), camera gain 300 and 1~s integration time.
\subsection{STORM imaging.}
The F-actin filaments used for STORM imaging were obtained, as for the single molecules images, from G-actin (AKL99, Cytoskeleton, Inc.) polymerized at 5 $\mu$M in a polymerization buffer (5 mM Tris-HCl at pH 8.0, 50 mM KCl, 1 mM MgCl$_2$, 0.2 mM Na$_2$ATP at pH 7.0, 1 mM DTT). To fully label the actin monomers, the polymerization was done in the presence of 5 $\mu$M phalloidin conjugated to Alexa Fluor 488. The filaments were then diluted to 0.2 $\mu$M, immobilized on the coverslip surface coated with poly-L-lysine and covered with a STORM buffer (100 mM Tris-HCl at pH 8.0, 10\% glucose, 5 u/ml pyranose oxidase, 400 u/ml catalase, 50 mM $\beta$-MEA, 1 mM ascorbic acid, 1 mM methyl viologen, 2 mM COT).
Before taking images, the system was realigned, the value of $c$ was adjusted to $1.2\pi$ and the SEO was aligned so that one of its stress points pointed in the $y$ direction. The typical experimental conditions were: TIRF illumination, laser power 150 mW, camera gain 300 and 200 ms integration time. For STORM imaging, a stack of 3890 images was used. For each frame, the approximate $x,y$ positions of the fluorophores were detected. Since some fluorophores blinked for longer than the exposure time of one image, a routine was written to sum over all the relevant consecutive frames for each fluorophore in order to improve the SNR. Some fluorophores blinked for up to about ten frames, resulting in photon numbers of up to about 50000. Pairs of arrays of $29\times29$ pixels containing each PSF set were then used to retrieve the parameters.
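The frame-summation routine described above can be sketched as follows. This is an illustrative reconstruction only: the function name, the matching radius and the data layout are our assumptions, not the actual implementation.

```python
import numpy as np

def link_and_sum(detections, stack, radius=2.0, crop=29):
    """Group detections of a blinking fluorophore over consecutive frames and
    sum its PSF crops to improve the SNR.  `detections` is a list of
    (frame, x, y) tuples and `stack` a (frames, H, W) array.  Sketch only:
    the function name, matching radius and data layout are assumptions."""
    detections = sorted(detections)
    used, summed = set(), []
    for i, (f0, x0, y0) in enumerate(detections):
        if i in used:
            continue
        frames = [f0]
        for j in range(i + 1, len(detections)):
            f, x, y = detections[j]
            # Link only strictly consecutive frames within the matching radius.
            if j not in used and f == frames[-1] + 1 \
                    and np.hypot(x - x0, y - y0) <= radius:
                frames.append(f)
                used.add(j)
        h = crop // 2
        # Sum the 29x29 crops of all linked frames around the first detection.
        patch = sum(stack[f, int(y0) - h:int(y0) + h + 1,
                          int(x0) - h:int(x0) + h + 1] for f in frames)
        summed.append(((x0, y0), patch))
    return summed
```

The summed $29\times29$ arrays would then play the role of the PSF pairs fed to the parameter retrieval.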
\end{methods}
\begin{addendum}
\item[Acknowledgements] This research has received funding from: National Science Foundation (NSF) (PHY-1507278); Excellence Initiative of Aix-Marseille Universit\'e (AMU) A*MIDEX, a French ``Investissements d'Avenir'' programme; European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 713750; Regional Council of Provence-Alpes-C\^ote d'Azur; A*MIDEX (n° ANR-11-IDEX-0001-02), funded by the Investissements d'Avenir project of the French Government, managed by the French National Research Agency (ANR); CONACyT PhD fellowship. \\
The authors are grateful to P. R\'efr\'egier who was instrumental in establishing this collaboration. They also thank A. J. Vella and M. Mavrakis, as well as A. M. Taddese, J. Puig and M. Rahman for help in the method development. Additionally, the authors thank the Center for Integrated Research Computing (CIRC) at the University of Rochester for providing computational resources.
\item[Author contributions]
M.A.A. and S.B. conceived and initiated the project, inspired by a polarimetry method of T.G.B., who designed and provided the SEO.
S.B., L.A.C. and V.C. designed and built the optical system. V.C. and L.A.C. prepared the samples and performed experiments. M.A.A wrote the algorithm and analyzed the data. All authors wrote the paper and contributed to the scientific discussion and revision of the article.
\item[Author information] The authors declare that they have no competing financial interests. Correspondence and requests for materials should be addressed to M.A. Alonso (email: [email protected]) and S. Brasselet~(email: [email protected]).
\item[Data availability] All data are available from the corresponding authors upon reasonable request.
\item[Code availability] The CHIDO code is available from the corresponding authors upon reasonable request.
\end{addendum}
\newpage
\bibliographystyle{naturemag}
\section{Introduction}
In this paper we continue the programme of classification of
integrable dispersive $2+1$-dimensional equations started in
\cite{FMN,FN} and present the complete classification of integrable
two-component systems of second order
\begin{equation}
\label{sysgen}
\begin{split}
&u_t =F(u,v,w,Du,Dv,Dw) \\
&v_t = G(u,v,w,Du,Dv,Dw)
\end{split},
\quad w_y = u_x.
\end{equation}
Here $u(x,y,t), v(x,y,t), w(x,y,t)$ are scalar variables, $Du,Dv,Dw$ denote the collection of partial derivatives of $u,v,w$ with respect
to $x,y$ up to the second order, and $F$, $G$ are polynomials in the derivatives with coefficients depending only on $u,v,w$.
The approach proposed in \cite{FMN,FN} consists of two steps:
\begin{itemize}
\item One first classifies integrable {\it dispersionless}
equations which may potentially occur as dispersionless limits of
systems in consideration;
\item One then reconstructs the {\it dispersive} corrections
preserving integrability.
\end{itemize}
One of the most famous examples within class (\ref{sysgen}) was introduced by Zakharov in \cite{Zakharov}
\begin{equation}
\label{dispersive-Zakharov}
u_t = (uv)_x + u_{xx}, \quad
v_t = v v_x + w_x - v_{xx}, \quad
w_y = u_x,
\end{equation}
together with the associated Lax pair
\begin{equation}
\label{dispersive-Zakharov-Lax}
\psi_{xy}+\frac{1}{2} v\psi_y+\frac{1}{4}u\psi=0, \quad \psi_t+ \psi_{xx}+\frac{1}{2}w\psi=0.
\end{equation}
Two more known examples \cite{MikYam1997,ShabYam1997} include
\begin{equation*}
u_t=wu_x+v_x+u_{xx},\quad v_t=(vw)_x-v_{xx},\quad w_y=u_x
\end{equation*}
and
\begin{equation*}
u_t=(uv)_x+wu_x+u_{xx},\quad v_t=(vw)_x+vv_x-v_{xx}, \quad w_y=u_x
\end{equation*}
and their Lax representations are given by
$$
4\psi_{xy}+v\psi-2 u\psi_x=0,\quad \psi_t+\psi_{xx}-w\psi_x=0
$$
and
$$
2\psi_{xy}+v\psi_y-u\psi_x=0,\quad \psi_t+\psi_{xx}-w\psi_x=0.
$$
The classification results obtained in this paper include six more integrable systems of the form (\ref{sysgen}) (Theorem 3), some of which appear to be new.
The dispersionless counterpart of system
(\ref{dispersive-Zakharov}) reads as
\begin{equation}
\label{dZakin}
u_t = (uv)_x, \quad
v_t = v v_x + w_x, \quad
w_y =
u_x
\end{equation}
and the corresponding Lax representation is
\begin{equation}
\label{displess-Zakharov-Lax}
S_xS_y+\frac{1}{2}vS_y+\frac{1}{4}u=0,\quad S_t+S_x^2+\frac{1}{2}w=0.
\end{equation}
These can be obtained from (\ref{dispersive-Zakharov}) and
(\ref{dispersive-Zakharov-Lax}) through a change
\begin{eqnarray}
\label{displim} x\to \epsilon x,\, y\to\epsilon y,\, t\to\epsilon
t,\quad \psi=e^{\frac{S}{\epsilon}}
\end{eqnarray}
and taking the limit $\epsilon\to 0$.
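The limit (\ref{displim}) can be checked symbolically. The sketch below (using sympy) substitutes $\psi=e^{S/\epsilon}$ into the first equation of (\ref{dispersive-Zakharov-Lax}), with the rescaling $x\to\epsilon x$, $y\to\epsilon y$ absorbed into explicit powers of $\epsilon$, and recovers the first relation of (\ref{displess-Zakharov-Lax}) as $\epsilon\to 0$:

```python
import sympy as sp

x, y, t, eps = sp.symbols('x y t epsilon')
u, v = sp.symbols('u v')
S = sp.Function('S')(x, y, t)

psi = sp.exp(S / eps)

# First Lax equation psi_xy + (1/2) v psi_y + (1/4) u psi = 0 after the
# rescaling (each derivative contributes a factor of eps), divided by psi:
E1 = sp.simplify(eps**2 * psi.diff(x, y) / psi
                 + sp.Rational(1, 2) * v * eps * psi.diff(y) / psi
                 + sp.Rational(1, 4) * u)

# At eps -> 0 this reduces to S_x S_y + (1/2) v S_y + (1/4) u,
# i.e. the first relation of the dispersionless Lax pair:
print(E1.subs(eps, 0))
```

The second equation of the pair can be treated in exactly the same way.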
We recall that the dispersionless system (\ref{dZakin}) is
integrable by the method of hydrodynamic reductions (see e.g.
\cite{FerKhu}), a method which can be applied to any first-order
quasilinear system
\begin{equation}
\label{displess-system}
A(u) u_t + B(u) u_x + C(u) u_y = 0,
\end{equation}
where $A, B, C$ are $n\times n$ square matrices and $u$ is an $n$-dimensional vector. Integrability in this sense means that, for any fixed number $N$, the system (\ref{displess-system}) possesses an infinite class of solutions of the form
\begin{equation}
\label{displess-solution}
u=u(R^1, \ldots, R^N), \quad v=v(R^1, \ldots, R^N), \quad w=w(R^1, \ldots, R^N),
\end{equation}
parametrized by $N$ arbitrary functions of a single argument, where the Riemann invariants $R^i$ (also called phases) are assumed to satisfy pairs of commuting diagonal systems
\begin{equation}
\label{Riemann-invariants}
R^i_y = \mu^i(R) R^i_x, \quad R^i_t = \lambda^i(R) R^i_x, \quad i=1,\ldots,N.
\end{equation}
In order that relations (\ref{displess-solution}) and (\ref{Riemann-invariants}) represent a nontrivial solution, it is readily seen that the velocities $\lambda^i, \mu^i$ must satisfy the so-called dispersion relation
\begin{equation}
\label{dispersion-relation}
D(\lambda^i,\mu^i) = \det{\left(\lambda^i A + B + \mu^i C\right)} = 0, \quad i=1,\ldots,N.
\end{equation}
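As an illustration, for the dispersionless Zakharov system (\ref{dZakin}) one convenient choice of the matrices $A,B,C$ (the choice below is ours; any equivalent form works) gives the dispersion relation $u-\mu(\lambda-v)^2=0$, which the one-phase speeds of the reduction $u=R$, $v=V(R)$ quoted further below satisfy identically. This can be verified with sympy:

```python
import sympy as sp

lam, mu, u, v, w, R = sp.symbols('lambda mu u v w R')

# One possible quasilinear form A u_t + B u_x + C u_y = 0 of the system
#   u_t = (u v)_x,  v_t = v v_x + w_x,  w_y = u_x,
# with state vector (u, v, w):
A = sp.Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
B = sp.Matrix([[-v, -u, 0], [0, -v, -1], [1, 0, 0]])
C = sp.Matrix([[0, 0, 0], [0, 0, 0], [0, 0, -1]])

D = sp.expand((lam * A + B + mu * C).det())
# D coincides with the dispersion relation u - mu*(lambda - v)^2:
print(sp.simplify(D - (u - mu * (lam - v)**2)))  # 0

# The one-phase speeds lambda = R V' + V, mu = 1/(R V'^2) of the reduction
# u = R, v = V(R) satisfy it identically:
V = sp.Function('V')(R)
check = D.subs([(u, R), (v, V),
                (lam, R * V.diff(R) + V),
                (mu, 1 / (R * V.diff(R)**2))])
print(sp.simplify(check))  # 0
```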
Throughout this paper, we consider only systems for which
(\ref{dispersion-relation}) defines an irreducible curve. Moreover,
compatibility of systems (\ref{Riemann-invariants}) requires that
the velocities $\lambda^i(R), \mu^i(R)$ obey the conditions
\begin{equation}
\label{comp-diag}
\frac{\lambda^i_{j}}{\lambda^i - \lambda^j} = \frac{\mu^i_{j}}{\mu^i - \mu^j}, \quad i\neq j,
\end{equation}
where $\lambda^i_j = \partial_{R^j}\lambda^i, \mu^i_j = \partial_{R^j}\mu^i$. We adopt the following definition of integrability of dispersionless systems:
\begin{Def}
A quasilinear system (\ref{displess-system}) is said to be integrable if for any number of
phases $N$ it possesses infinitely many $N$-phase solutions
parametrised by $N$ arbitrary functions of one variable.
\end{Def}
Owing to the fact that the flows (\ref{Riemann-invariants}) automatically commute for $N=1$, one-component reductions of system (\ref{dZakin}) are parametrized by the following relations
\begin{equation}
\label{one-component-dZak}
\begin{split}
&u = R, \quad v = V(R), \quad w = W(R), \quad
R_t = (R V' + V) R_x, \quad R_y = \frac{1}{R (V')^2} R_x,
\end{split}
\end{equation}
where $V(R)$ is an arbitrary function and $W(R)$ is fixed up to an integration constant by the relation $W' = R (V')^2$. Moreover, the normalization $u=R$ can always be achieved by a Miura-type transformation. It was shown in \cite{FMN,FN,FM} that deformability
of hydrodynamic reductions can be used as a definition of
integrability of dispersive equations. Indeed, formulae
(\ref{displess-solution}), (\ref{Riemann-invariants}) governing
hydrodynamic reductions can be deformed through the addition of a
dispersive correction of the form
\begin{equation}
\label{eq:dispersive-corrections-u}
\begin{split}
&u=u(R^1,...,R^n), \quad v=V(R^1,...,R^n) + \epsilon (\ldots) + \epsilon^2 (\ldots) + \ldots, \\
&w=W(R^1,...,R^n) + \epsilon (\ldots) + \epsilon^2 (\ldots) + \ldots,\\
&R^i_t = \lambda^i(R) R^i_x + \epsilon (\ldots) + \epsilon^2 (\ldots)+ \ldots , \quad R^i_y = \mu^i(R) R^i_x + \epsilon (\ldots) + \epsilon^2 (\ldots) + \ldots,
\end{split}
\end{equation}
where the terms at $\epsilon^k$ are homogeneous differential polynomials in the $x$-derivatives of $R^i$ of total degree $k+1$ with coefficients being functions of the Riemann invariants. Using this construction the following definition of integrability of $2+1$-dimensional dispersive systems was introduced in \cite{FMN}.
\begin{Def} A $2+1$-dimensional system is said to be integrable if
all hydrodynamic reductions of its dispersionless limit\footnote{which is
supposed to be linearly non-degenerate, see Remark \ref{linear-nondegeneracy}.} can be deformed into
reductions of the corresponding dispersive counterpart.
\end{Def}
To accomplish the classification program, the polynomial systems (\ref{sysgen}) are written in the quasilinear form
\begin{equation}
\label{DS}
\begin{split}
&u_t = \alpha u_x + \beta u_y + \gamma v_x + \delta v_y + \rho w_x + \epsilon( \ldots )\\
&v_t = \phi u_x + \psi u_y + \eta v_x + \tau v_y + \kappa w_x + \epsilon( \ldots )
\end{split}
, \quad
w_y = u_x,
\end{equation}
under which they are often referred to as Davey-Stewartson type systems.
Here the Greek symbols $\alpha, \ldots, \kappa$ denote functions of $u,v$ and of the nonlocal variable $w = D_y^{-1} D_x u$,
and the terms at $\epsilon$ are assumed to be homogeneous differential polynomials of total degree $2$. The dispersionless limit
\begin{equation}
\label{dDS}
\begin{split}
&u_t = \alpha u_x + \beta u_y + \gamma v_x + \delta v_y + \rho w_x \\
&v_t = \phi u_x + \psi u_y + \eta v_x + \tau v_y + \kappa w_x
\end{split}
, \quad
w_y = u_x,
\end{equation}
is thus obtained by simply setting $\epsilon=0$. To classify integrable systems (\ref{DS}) we first have to classify integrable {\it dispersionless} systems (\ref{dDS}) and then classify dispersive deformations of the obtained systems. Classification of integrable systems (\ref{dDS}) can be performed using the conditions of existence of 2- and 3-component hydrodynamic reductions (Section 2). The list of such systems is very extensive and, moreover, most of the examples are not deformable to (\ref{DS}). However, we notice that if a {\it dispersive} system (\ref{DS}) is
integrable then it possesses a Lax representation
$$
L\psi=0,\quad (\partial_t-A)\psi=0,
$$
where $L$ and $A$ are some linear differential operators with
coefficients being functions in $u,v,w$ and their derivatives. The
dispersionless limit (\ref{displim}) brings the Lax representation
to a form
$$
F(S_x,S_y)=0,\quad S_t=G(S_x,S_y),
$$
where $F$ and $G$ are {\it polynomials} in $S_x,S_y$ with
coefficients depending on $u,v,w$. Thus if a dispersionless system
is deformable then it possesses a {\it polynomial} dispersionless
Lax representation, and it is enough to restrict ourselves to such dispersionless systems. We therefore adopt the following classification approach:
\begin{itemize}
\item We first classify integrable dispersionless systems which possess polynomial dispersionless Lax representations (Section 2). In this paper we restrict ourselves to second degree polynomials;
\item We then classify dispersive deformations of such systems (Section 3).
\end{itemize}
We finally emphasize that the same perturbative approach can be used to
obtain the Lax pair of an integrable {\it dispersive} system from
that of its {\it dispersionless} limit, given that this limit is integrable in the sense of hydrodynamic reductions. In Section 3 we present a construction which allows one to deform dispersionless Lax pairs into their dispersive counterparts and give the complete list of integrable systems (\ref{DS}) along with the associated Lax representations.
\section{Classification of dispersionless systems.}
\label{sec:DispLaxPairs}
In this section we describe and apply the two approaches to the classification of integrable dispersionless systems
\begin{equation}
\label{disples-sys} {\cal E:} \left\{ \begin{array}{l}
u_t=\alpha u_x+\beta u_y+\gamma v_x+\delta v_y+\rho w_x\\
v_t=\phi u_x+\psi u_y+\eta v_x+\tau v_y+\kappa w_x\end{array},\quad
w_y=u_x, \right.
\end{equation}
where we assume that coefficients $\alpha,\ldots,\kappa$ are functions of $u,v,w$.
On one hand, the method of hydrodynamic reductions provides a sufficiently restrictive integrability criterion which makes it possible to distinguish integrable systems within the considered class. Similarly to the 3-soliton criterion for solitonic systems, it can be seen that the presence of 3-phase solutions (solutions of the form (\ref{displess-solution}), (\ref{Riemann-invariants}) with $N=3$) is necessary and sufficient to prove integrability of dispersionless systems. However, directly requiring the existence of $3$-phase solutions for system (\ref{disples-sys}) is hardly workable since the dispersion relation of system (\ref{disples-sys}) defines a cubic curve which may not be {\it a priori} a rational curve. The following theorem allows one to overcome this difficulty by providing necessary and sufficient conditions for the existence of 2-phase solutions for any system (\ref{displess-system}).
\begin{The}[Ferapontov, Khusnutdinova \cite{FK}]
The Haantjes tensor of an arbitrary matrix $\Omega = \left(k A + B\right)^{-1} \left(l A + C\right)$, $k,l \in \mathbb{R}$, is zero if and only if the following hold.
\begin{itemize}
\item System (\ref{displess-system}) possesses double waves parametrized by four arbitrary
functions of a single argument.
\item The characteristic speeds $\lambda^i$, $\mu^i$ of two-component reductions are not
restricted by any algebraic relations other than the dispersion relation (\ref{dispersion-relation}).
\end{itemize}
\end{The}
Recall that for a given square matrix $\Omega=(\omega^i_j)$, the Haantjes tensor is defined as \cite{Haantjes}
\begin{equation}
H^i_{jk} = N^i_{pr} \omega^p_j \omega^r_k - N^p_{jr} \omega^i_p \omega^r_k - N^p_{rk} \omega^i_p \omega^r_j + N^p_{jk} \omega^i_r \omega^r_p,
\end{equation}
where the $N^i_{jk}$ are the components of the Nijenhuis tensor
\begin{equation}
N^i_{jk} = \omega^p_j \partial_{u^p} \omega^i_k - \omega^p_k \partial_{u^p} \omega^i_j - \omega^i_p (\partial_{u^j} \omega^p_k - \partial_{u^k} \omega^p_j).
\end{equation}
Thus the vanishing of the Haantjes tensor imposes a large set of first-order differential constraints on the ten unknown functions $\alpha, \ldots, \kappa$. These conditions are very restrictive and allow one to obtain a wide list of systems (\ref{dDS}) which possess 2-component reductions. Imposing the further condition of existence of 3-component reductions, one then obtains the complete list of integrable dispersionless systems (\ref{dDS}). The list is extremely lengthy and most of the systems are not deformable to (\ref{DS}).
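The two tensors above translate directly into code. The following sympy sketch implements the formulas verbatim and checks, as a sanity test, that the Haantjes tensor of a diagonal matrix field vanishes identically (the classical criterion for diagonalizability in Riemann invariants):

```python
import sympy as sp

def nijenhuis(Om, U):
    """Nijenhuis tensor N^i_{jk} of the matrix field Om w.r.t. coordinates U."""
    n = len(U)
    return [[[sp.expand(sum(
        Om[p, j] * sp.diff(Om[i, k], U[p]) - Om[p, k] * sp.diff(Om[i, j], U[p])
        - Om[i, p] * (sp.diff(Om[p, k], U[j]) - sp.diff(Om[p, j], U[k]))
        for p in range(n)))
        for k in range(n)] for j in range(n)] for i in range(n)]

def haantjes(Om, U):
    """Haantjes tensor H^i_{jk} built from the Nijenhuis tensor of Om."""
    n = len(U)
    N = nijenhuis(Om, U)
    return [[[sp.simplify(sum(
        N[i][p][r] * Om[p, j] * Om[r, k] - N[p][j][r] * Om[i, p] * Om[r, k]
        - N[p][r][k] * Om[i, p] * Om[r, j] + N[p][j][k] * Om[i, r] * Om[r, p]
        for p in range(n) for r in range(n)))
        for k in range(n)] for j in range(n)] for i in range(n)]

# Sanity check: a diagonal matrix field (distinct eigenvalues) has H = 0,
# even though its Nijenhuis tensor need not vanish.
u1, u2 = sp.symbols('u1 u2')
Om = sp.Matrix([[u1 + u2, 0], [0, u1 * u2]])
H = haantjes(Om, [u1, u2])
print(all(H[i][j][k] == 0 for i in range(2)
          for j in range(2) for k in range(2)))  # True
```

Applying `haantjes` to the matrices $\Omega=(kA+B)^{-1}(lA+C)$ of system (\ref{disples-sys}) and requiring the result to vanish identically in $k,l$ produces the differential constraints referred to above.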
\begin{Rem}
More precisely, it was possible to solve the case $\rho = 0$ completely, while the case $\rho \neq 0$ remains out of reach for the moment. Nevertheless, it is remarkable that all {\it deformable} integrable systems obtained through our second approach fall into the class $\rho = 0$.
\end{Rem}
\begin{Rem}
\label{linear-nondegeneracy}
In order to apply the perturbative approach it is necessary to exclude linearly degenerate dispersionless systems as their dispersive counterparts do not inherit the hydrodynamic reductions. These systems are characterized by the fact that their $N$-phase solutions are continuous and do not admit the gradient catastrophe. A different perturbative approach would be required. Fortunately, these systems can be easily discarded by verifying an algebraic criterion that was introduced in \cite{F1}.
\end{Rem}
Now we proceed to the classification of dispersionless systems (\ref{disples-sys}) which possess polynomial dispersionless Lax representations. It was noted in the Introduction that the system
\begin{equation}
\label{dzak1}
u_t = (uv)_x, \quad
v_t = v v_x + w_x, \quad
w_y =
u_x,
\end{equation}
possesses a dispersionless Lax representation
\begin{equation}
\label{zakdlp}
S_xS_y+\frac{1}{2}vS_y+\frac{1}{4}u=0,\quad S_t+S_x^2+\frac{1}{2}w=0.
\end{equation}
Introducing the notation $p=S_x,\,q=S_y,\,s=S_t$, the relations (\ref{zakdlp}) can be rewritten as
\begin{equation}
\label{zakdlp1}
F:=pq+\frac{1}{2}vq+\frac{1}{4}u=0,\quad s=G:=-p^2-\frac{1}{2}w
\end{equation}
and the compatibility conditions $S_{xy}=S_{yx},S_{xt}=S_{tx},S_{yt}=S_{ty}$ take the form
\begin{equation}
\label{comp}
p_y=q_x,\quad p_t=s_x,\quad q_t=s_y.
\end{equation}
It is easy to see that relations (\ref{zakdlp1}) and (\ref{comp}) can be written as
\begin{equation}
\label{zakdlp2}
F_t=\{G,F\}\quad\mod F,\mbox{equations (\ref{dzak1})},
\end{equation}
where $$\{G,F\}:=\frac{\partial G}{\partial p}\frac{\partial
F}{\partial x}-\frac{\partial F}{\partial p}\frac{\partial
G}{\partial x}+\frac{\partial G}{\partial q}\frac{\partial
F}{\partial y}-\frac{\partial F}{\partial q}\frac{\partial
G}{\partial y}.$$
and $\frac{\partial F}{\partial x}=u_x\frac{\partial F}{\partial u}+v_x\frac{\partial F}{\partial v},\ldots$, i.e.\ $p,q$ are treated as symbols in (\ref{zakdlp2}).
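The compatibility condition (\ref{zakdlp2}) can be verified directly. The sympy sketch below treats $p,q$ as symbols, eliminates the $t$-derivatives and $w_y$ via (\ref{dzak1}), and confirms that $F_t-\{G,F\}=v_x F$, i.e.\ $F_t=\{G,F\}$ modulo $F$:

```python
import sympy as sp

p, q, u, v, w = sp.symbols('p q u v w')
ux, uy, vx, vy, wx = sp.symbols('u_x u_y v_x v_y w_x')

# Generators of the dispersionless Lax pair for the Zakharov system:
F = p * q + sp.Rational(1, 2) * v * q + sp.Rational(1, 4) * u
G = -p**2 - sp.Rational(1, 2) * w

# t-derivatives of u, v and the derivative w_y are eliminated via the
# equations of motion u_t = (u v)_x, v_t = v v_x + w_x, w_y = u_x:
ut, vt, wy = u * vx + v * ux, v * vx + wx, ux

# x- and y-derivatives act on the coefficients only (p, q are symbols):
Dx = lambda E: E.diff(u) * ux + E.diff(v) * vx + E.diff(w) * wx
Dy = lambda E: E.diff(u) * uy + E.diff(v) * vy + E.diff(w) * wy

Ft = F.diff(u) * ut + F.diff(v) * vt
bracket = (G.diff(p) * Dx(F) - F.diff(p) * Dx(G)
           + G.diff(q) * Dy(F) - F.diff(q) * Dy(G))

# The residual F_t - {G, F} is proportional to F itself:
print(sp.simplify(Ft - bracket - vx * F))  # 0
```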
We now introduce the following definition of a polynomial dispersionless Lax pair (see e.g. \cite{Zakharov, KonMag}).
\begin{Def} A pair $F(p,q), G(p,q)$ of polynomials in $p,q$ with
coefficients depending on $u,v$ and on $u,v,w$, respectively, is
called a generator of a dispersionless Lax pair for a system
(\ref{disples-sys}) if
\begin{enumerate}
\item $F,G$ satisfy
\begin{equation}
\label{Lax} F_t=\{G,F\}\quad \mod F,{\cal E}
\end{equation}
\item $\mbox{rank}\left(\frac{\partial F_{ij}}{\partial u}\,\,\frac{\partial F_{ij}}{\partial
v}\right)_{i,j=1,\ldots,\deg{F}}=2$, where
$F_{ij}=\frac{\partial^{i+j} F}{\partial p^i\partial
q^j}\big|_{p=q=0}$.
\end{enumerate}
\end{Def}
A polynomial Lax pair for a given system is not unique. Apart from the transformations $p\to \mu_1 p+\mu_2,\, q\to \mu_1 q+\mu_3,\, s\to \mu_1 s+\mu_4$, $\mu_1,\ldots,\mu_4=const$, we also have that if $F, G$ are generators of a dispersionless Lax pair for a system, then $f(F), G$ are also generators of a dispersionless Lax pair for the same system for an arbitrary polynomial $f$. Moreover, $G$ is defined modulo $F$. Condition 1 is the compatibility condition, while condition 2 implies that the Lax pair is faithful, i.e.\ the compatibility condition is satisfied due to both equations of the system (\ref{disples-sys}).
We now present the list of dispersionless systems which possess polynomial dispersionless Lax pairs. We restrict the classification of polynomial Lax pairs to those of degree 2 and leave the consideration of systems with Lax pairs of higher degrees for future studies.
\begin{The} If the system (\ref{disples-sys}) possesses a polynomial dispersionless Lax pair with $\deg{F}=2,\,\,\deg{G}\mod F=2$ and if $F$ is an irreducible polynomial then up to the group of invertible transformations $v\to f(u,v)$,
$x\to x,\,y\to y+sx,\, u \to u, w\to w-s u,\, s=const$,
and rescaling of variables, it is one of the following:
\begin{eqnarray}
\label{dZak} &&\left\{\begin{array}{l} u_t=(uv)_x\\ v_t=vv_x+w_x\end{array},\right.
\\ \label{de1} &&\left\{\begin{array}{l} u_t=wu_x+v_x\\ v_t=(vw)_x\end{array},\right.
\\ \label{de2} &&\left\{\begin{array}{l} u_t=wu_x+(uv)_x\\ v_t=(vw)_x+vv_x\end{array},\right.
\\ \label{de3} &&\left\{\begin{array}{l} u_t=wu_x+v_x-\frac{h^2}{4}vv_y\\ v_t=(vw)_x+hvv_x\end{array},\right.
\\ \label{de4} &&\left\{\begin{array}{l} u_t=uu_x-wu_y+v_x+vv_y\\ v_t=2vu_x+v^2u_y+uv_x-wu_y+w_x\end{array},\right.
\\ \label{de5} &&\left\{\begin{array}{l} u_t=(v-hu)u_x+h(w-\frac{1}{2}v^2)u_y+uv_x-huvv_y\\ v_t=-2hvu_x+h^2v^2u_y+(v-hu)v_x+h(w-\frac{1}{2}v^2)v_y+w_x\end{array},\right.
\\ \label{de6} &&\left\{\begin{array}{l} u_t=(hu+v)u_x+h(w-2huv-\frac{1}{2}v^2)u_y+uv_x-hu(hu+v)v_y\\ v_t=-2hvu_x+h^2v^2u_y+(v-hu)v_x+h(w-\frac{1}{2}v^2)v_y+w_x\end{array},\right.
\\ \label{de7} &&\left\{\begin{array}{l} u_t=(uv)_x+wu_x-\frac{h}{2}v^2(1+2hu)u_y-huv(1+hu)v_y\\ v_t=(vw)_x+vv_x-2hv^2u_x+h^2v^3u_y-\frac{1}{2}hv^2(1-2hu)v_y-2huvv_x
\end{array},\right.
\\ \label{de8} &&\left\{\begin{array}{l} u_t=(w-2huv)u_x+h^2uv^2u_y+(1-hu^2)v_x+hv(u^2-1)v_y\\ v_t=-2hv^2u_x+h^2v^3u_y+(w-2huv)v_x+h^2uv^2v_y+vw_x
\end{array},\right.
\end{eqnarray}
where $w_y=u_x$, $h=const$.
\end{The}
By $\deg{G}\mod F$ we understand the minimal degree of $G$ modulo $F$. The parameter $h$ in the above list, if not zero, can be scaled to $1$. Note that if $h\to 0$ then systems (\ref{de5}), (\ref{de6}) reduce to (\ref{dZak}), while systems (\ref{de7}), (\ref{de8}) reduce to (\ref{de2}) and (\ref{de1}), respectively. The corresponding generators for the dispersionless Lax pairs are given in Table \ref{Table-displess-lax-pairs}.
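The $h\to 0$ reductions can be verified term by term; for example, for system (\ref{de5}). The following sketch (in Python/SymPy) represents the derivatives $u_x,u_y,\ldots$ by plain symbols, a bookkeeping assumption made purely for compactness:

```python
import sympy as sp

h, u, v, w = sp.symbols('h u v w')
ux, uy, vx, vy, wx = sp.symbols('u_x u_y v_x v_y w_x')

# right-hand sides of system (de5)
ut_de5 = (v - h*u)*ux + h*(w - v**2/2)*uy + u*vx - h*u*v*vy
vt_de5 = -2*h*v*ux + h**2*v**2*uy + (v - h*u)*vx + h*(w - v**2/2)*vy + wx

# right-hand sides of system (dZak): u_t = (uv)_x, v_t = v v_x + w_x
ut_dZak = u*vx + v*ux
vt_dZak = v*vx + wx

# setting h = 0 in (de5) must reproduce (dZak)
assert sp.expand(ut_de5.subs(h, 0) - ut_dZak) == 0
assert sp.expand(vt_de5.subs(h, 0) - vt_dZak) == 0
```

The checks for (\ref{de6})-(\ref{de8}) are completely analogous.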
\begin{table}[h]
\begin{math}
\begin{array}{|c|l|l|}
\hline
\text{Eq no} & F & G \\\hline
(\ref{dZak}) & pq+\frac{1}{2}vq+\frac{1}{4}u & -p^2-\frac{1}{2}w \\[1ex]
(\ref{de1}) & pq-\frac{1}{2}up+\frac{1}{4}v & -p^2+wp \\[1ex]
(\ref{de2}) & pq-\frac{1}{2}u p+\frac{1}{2}vq & -p^2+w p \\[1ex]
(\ref{de3}) & pq+\frac{1}{4}h v q+\frac{1}{4}v-\frac{1}{2}up-\frac{1}{8}huv & -p^2+wp-\frac{1}{16}h^2v^2 \\[1ex]
(\ref{de4}) & pq+\frac{1}{2}u q+\frac{1}{4}+vq^2 & -p^2+v^2q^2-w q \\[1ex]
(\ref{de5}) & hpq-\frac{h}{2}(v+hu)q-h^2vq^2+\frac{1}{2}p & -p^2+h^2v^2q^2+\frac{1}{2}h(2w+v^2)q \\[1ex]
(\ref{de6}) & pq+\frac{1}{2}vq-hvq^2+\frac{1}{4}u & -p^2+h^2v^2q^2-(h^2uv-hw+\frac{1}{2}hv^2)p-\frac{1}{2}w \\[1ex]
(\ref{de7}) & p q+\frac{p}{2h}(1-hu) - h v q^2+\frac{uv}{4} - \frac{1}{2} v q(1-h u) & - p^2 + h^2 q^2 v^2 - \frac{1}{2}h u v^2 - \frac{1}{2} h q(2 h u-1) v^2 + w p \\[1ex]
(\ref{de8}) & pq - h v q^2 - \frac{1}{2}u p+\frac{v}{4} & -p^2 + h^2 v^2 q^2 + w p - \frac{1}{4} h v^2 \\ \hline
\end{array}
\end{math}
\caption{Generators for the Lax pairs of dispersionless systems (\ref{dZak})-(\ref{de8}).}
\label{Table-displess-lax-pairs}
\end{table}
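As an illustration, the first row of Table \ref{Table-displess-lax-pairs} can be verified directly: for $F=pq+\frac{1}{2}vq+\frac{1}{4}u$ and $G=-p^2-\frac{1}{2}w$, one finds $F_t-\{G,F\}=v_xF$ on solutions of (\ref{dZak}), i.e. $F_t=\{G,F\}\mod F$. A symbolic sketch (in Python/SymPy, with the bracket implemented exactly as defined above):

```python
import sympy as sp

x, y, t, p, q = sp.symbols('x y t p q')
u, v, w = [sp.Function(n)(x, y, t) for n in 'uvw']

# generators for system (dZak), first row of Table 1
F = p*q + v*q/2 + u/4
G = -p**2 - w/2

def bracket(G, F):
    # {G,F} with p,q treated as symbols; d/dx, d/dy act on the coefficients
    return (G.diff(p)*F.diff(x) - F.diff(p)*G.diff(x)
            + G.diff(q)*F.diff(y) - F.diff(q)*G.diff(y))

# impose u_t = (uv)_x, v_t = v v_x + w_x and the nonlocality w_y = u_x
on_shell = {u.diff(t): (u*v).diff(x),
            v.diff(t): v*v.diff(x) + w.diff(x),
            w.diff(y): u.diff(x)}
residual = sp.expand((F.diff(t) - bracket(G, F)).subs(on_shell))
assert sp.simplify(residual - v.diff(x)*F) == 0   # F_t = {G,F} + v_x F
```

The remaining rows of the table can be checked the same way.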
We finally make the following observation which greatly simplifies the computations entering the perturbation scheme in the next section.
\begin{Pro}
The dispersion relation (\ref{dispersion-relation}) of every system in (\ref{dZak})-(\ref{de8}) is a rational cubic curve.
\end{Pro}
Indeed, the dispersion relation (\ref{dispersion-relation}) of systems of the form (\ref{disples-sys}) defines a homogeneous cubic polynomial in the variables $\lambda^i,\mu^i$ and can be viewed as a ternary cubic form by introducing triples $X^i, Y^i,Z^i$ through the relations $\lambda^i = X^i/Z^i, \mu^i = Y^i/Z^i$. It can be shown that for each of the systems (\ref{dZak})-(\ref{de8}) the dispersion relation is a singular cubic form and therefore admits a rational parametrization. This can be verified by noting that the discriminant of the curve vanishes identically. This discriminant is a sixth-order polynomial, and there exists a straightforward algorithmic procedure, originally developed by Hesse \cite{Hesse}, to obtain its explicit expression (see also the appendix in \cite{Nowak}). The singular point can be either a double point or a cusp, and it should be noted that for systems (\ref{de5})-(\ref{de8}) the nature of the singularity can change in the limit $h \to 0$. For these cases, taking the limit in the corresponding Lax pair generators does not lead to a Lax pair of the limiting system.
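The mechanism behind the rational parametrization can be illustrated on a generic singular cubic (not one of the dispersion relations themselves): the pencil of lines through the singular point cuts out one extra point on the curve per line, which parametrizes the curve rationally. A sketch in Python/SymPy:

```python
import sympy as sp

X, Y, Z, r = sp.symbols('X Y Z r')
# a nodal ternary cubic, chosen purely for illustration
C = Y**2*Z - X**2*(X + Z)

# (0:0:1) is a singular point: all first partials vanish there
assert all(C.diff(var).subs({X: 0, Y: 0, Z: 1}) == 0 for var in (X, Y, Z))

# lines Y = r X through the node cut out one extra point each,
# giving the rational parametrization (X:Y:Z) = (r^2-1 : r(r^2-1) : 1)
assert sp.expand(C.subs({X: r**2 - 1, Y: r*(r**2 - 1), Z: 1})) == 0
```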
\section{Dispersive deformations. The classification theorem.}
\label{sec:Deformations}
In this section we study the dispersive deformations of systems (\ref{dZak})-(\ref{de8}) up to order $\epsilon$. We illustrate the deformation procedure by recovering Zakharov's system (\ref{dispersive-Zakharov}) and its Lax pair (\ref{dispersive-Zakharov-Lax}) from their dispersionless limits. To this end, dispersive corrections in the form of infinite series are added to equations (\ref{one-component-dZak}), namely
\begin{equation}
\label{one-component-Zak}
\begin{split}
&u = R, \quad v = V(R) + \epsilon v_1(R) R_x + \epsilon^2 (...) + \ldots, \quad
w = W(R) + \epsilon w_1(R) R_x + \epsilon^2 (...) + \ldots, \\
&R_t = (R V' + V) R_x + \epsilon \left(a_1(R) R_{xx} + a_2(R) R_x^2\right) + \epsilon^2(...) + \ldots, \\
&R_y = \frac{1}{R (V')^2} R_x + \epsilon \left(A_1(R) R_{xx} + A_2(R) R_x^2\right) + \epsilon^2(...) + \ldots
\end{split}
\end{equation}
where $W' = R (V')^2$ and the unknown functions $v_1(R), w_1(R),
a_1(R), A_1(R), \ldots$ are to be determined. Again, it is assumed
that the relation $u=R$ remains undeformed through a Miura-type
transformation. Equations (\ref{one-component-Zak}) are then assumed
to form a formal solution of some second-order dispersive system
\begin{equation}
\label{dispersive-sigma-Zakharov}
\begin{split}
&u_t = (uv)_x + \epsilon \left( \sigma_{1} u_{xx} + \ldots + \sigma_{22} u_y w_x \right),\quad
v_t = v v_x + w_x + \epsilon \left( \sigma_{23} u_{xx} + \ldots + \sigma_{44} u_y w_x \right),
\end{split}
\end{equation}
where the nonlocal relation $w_y = u_x$ is kept undeformed and $\sigma_1,\ldots, \sigma_{44}$ are functions of $u,v,w$ to be determined.
Noting that $V(R)$ is an arbitrary function in (\ref{one-component-Zak}), we now require that equations (\ref{one-component-Zak})
define a solution of (\ref{dispersive-sigma-Zakharov}) for {\it any} function $V(R)$. This requirement together with the compatibility condition $R_{ty} = R_{yt}$ specify {\it uniquely} all unknown functions, up to a rescaling of $\epsilon$, and Zakharov's system (\ref{dispersive-Zakharov}) is obtained. Up to the first order, the formal series solution is given by
\begin{equation*}
\begin{split}
&u = R, \quad v = V(R) - \epsilon (V')^2 R_x + O(\epsilon^2), \quad
w = W(R) + O(\epsilon^2), \\
&R_t = (R V' + V) R_x + O(\epsilon^2), \quad R_y = \frac{1}{R (V')^2} R_x + O(\epsilon^2),
\end{split}
\end{equation*}
while higher order terms can be determined algorithmically.
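In the limit $\epsilon\to 0$ the ansatz (\ref{one-component-Zak}) reduces to the one-component hydrodynamic reduction of the dispersionless system (\ref{dZak}), and this leading-order consistency can be checked symbolically. In the sketch below (Python/SymPy) the derivative $R_x$ and the values $V(R)$, $V'(R)$ are represented by plain symbols, an assumption made only for compactness:

```python
import sympy as sp

# leading-order reduction: u = R, v = V(R), w = W(R),
# with R_t = lam*R_x, R_y = mu*R_x and W'(R) = R (V')^2
R, Rx, V, Vp = sp.symbols('R R_x V Vp')   # V, Vp stand for V(R), V'(R)

lam = R*Vp + V            # characteristic speed in R_t
mu = 1/(R*Vp**2)          # characteristic speed in R_y
Wp = R*Vp**2              # W'(R)

u_t, u_x = lam*Rx, Rx
v_t, v_x = Vp*lam*Rx, Vp*Rx          # chain rule: v_t = V'(R) R_t, etc.
w_x, w_y = Wp*Rx, Wp*mu*Rx           # w_y = W'(R) R_y

# the dispersionless Zakharov system (dZak) and the constraint w_y = u_x
assert sp.simplify(u_t - (V*Rx + R*v_x)) == 0    # u_t = (uv)_x
assert sp.simplify(v_t - V*v_x - w_x) == 0       # v_t = v v_x + w_x
assert sp.simplify(w_y - u_x) == 0               # w_y = u_x
```

The check holds for an arbitrary function $V$, reflecting the arbitrariness of $V(R)$ in the reduction.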
We point out that a sufficient condition for completely defining the dispersive deformation, i.e. such that no function remains arbitrary, is that the dispersionless limit must not be linearly degenerate (see Remark \ref{linear-nondegeneracy}). Moreover, it turns out that all systems in the considered class possess a unique integrable deformation. We do not exclude, however, the possibility that a given dispersionless system may possess two or more inequivalent dispersive counterparts; our algorithm would lead to all such deformations.
Applying the same procedure to systems (\ref{dZak})-(\ref{de8}), we obtain the following exhaustive list of integrable Davey-Stewartson type systems.
\begin{The} Up to the group of point transformations $u\to u,v\to f(u,v)$ and re-scaling of variables, the following systems constitute the complete list of dispersive deformations of systems (\ref{dZak})-(\ref{de8}) up to order $\epsilon$.
\begin{eqnarray}
\label{Zak} &&\left\{ \begin{array}{l}
u_t=(uv)_x+\epsilon u_{xx}\\
v_t=vv_x+w_x-\epsilon v_{xx} \end{array}, \right.
\\ \nonumber \\
\label{e1} &&\left\{ \begin{array}{l}
u_t=wu_x+v_x+\epsilon u_{xx}\\
v_t=(vw)_x-\epsilon v_{xx}\end{array}, \right.
\\ \nonumber \\
\label{e2}&&\left\{ \begin{array}{l}
u_t=(uv)_x+wu_x+\epsilon u_{xx}\\
v_t=(vw)_x+vv_x-\epsilon v_{xx}\end{array}, \right.
\\ \nonumber \\
\label{e3}&&\left\{ \begin{array}{l}
u_t=wu_x-\frac{1}{4}h^2vv_y+v_x+\epsilon (u_{xx}+hv_{xy})\\
v_t=(vw)_x+hvv_x-\epsilon v_{xx}\end{array}, \right.
\\ \nonumber \\ \label{e4}
&&\left\{ \begin{array}{l}
u_t=-wu_y+uu_x+A(v)+\epsilon \left(A^2(u)+u_yA(v)\right)\\
v_t=v^2u_y+2vu_x-wv_y+uv_x+w_x-\epsilon \left(A^2(v)-v_yA(v)\right)\end{array}, \right.
\\ \nonumber \\ \label{e5}
&&\left\{ \begin{array}{l}
u_t=(v-hu)u_x+h(w-\frac{1}{2}v^2)u_y+uv_x-huvv_y+\epsilon\left(B^2(u)-hu_yB(v)\right),\\
v_t=-2hvu_x+h^2v^2u_y+(v-hu)v_x+h(w-\frac{1}{2}v^2)v_y+w_x-\epsilon\left(B^2(v)+hv_yB(v)\right)\end{array} \right.
\\ \nonumber \\
\label{e6}
&&\left\{ \begin{array}{l}
u_t=(hu+v)u_x+(hw-2h^2uv-\frac{1}{2}hv^2)u_y+uv_x-hu(hu+v)v_y\\ \quad\quad\quad+\epsilon\big(B^2(u)-2huB(v_y) -2hu_xv_y-hu_yv_x+3h^2vu_yv_y+2h^2uv_y^2\big),
\\
v_t=-2hvu_x+h^2v^2u_y+(v-hu)v_x+h(w-\frac{1}{2}v^2)v_y+w_x-\epsilon\left(B^2(v)+hv_yB(v)\right)\end{array} \right.
\\ \nonumber \\ \label{e7}
&&\left\{ \begin{array}{l}
u_t=(uv)_x+wu_x-\frac{h}{2}v^2(1+2hu)u_y-huv(1+hu)v_y\\ \quad\quad\quad+\epsilon\big(B^2(u)-2huB(v_y) -2hu_xv_y-hu_yv_x+3h^2vu_yv_y+2h^2uv_y^2\big),
\\
v_t=(vw)_x+vv_x-2hv^2u_x+h^2v^3u_y-\frac{1}{2}hv^2(1-2hu)v_y-2huvv_x-\epsilon\left(B^2(v)+hv_yB(v)\right)\end{array} \right.
\\ \nonumber \\ \label{e8}
&&\left\{ \begin{array}{l}
u_t=(w-2huv)u_x+h^2uv^2u_y+(1-hu^2)v_x+hv(hu^2-1)v_y+\epsilon\left(B^2(u)-hu_yB(v)\right),\\
v_t=-2hv^2u_x+h^2v^3u_y+(w-2huv)v_x+h^2uv^2v_y+vw_x-\epsilon\left(B^2(v)+hv_yB(v)\right)\end{array} \right.
\end{eqnarray}
where $w_y=u_x$ and $A=D_x+vD_y, B=D_x-hvD_y$. The parameter $h$ in the above list, if non-zero, may be rescaled to 1.
\end{The}
Note that several of these systems have already appeared in other classification works. Zakharov's system (\ref{Zak}) was recovered in \cite{ShabYam1997} along with systems (\ref{e1}) and (\ref{e2}). Moreover the systems
\begin{equation}
u_t = 2 w u_x + u_{xx}, \quad v_t = 2 w v_x - v_{xx}, \quad w_y = (u+v)_x,
\end{equation}
and
\begin{equation}
u_t = 2 w u_x + u_{xx}, \quad v_t = 2 w v_x - v_{xx}, \quad w_y = (uv)_x,
\end{equation}
also appeared in \cite{MikYam1997} and were shown to be equivalent to (\ref{e1}) and (\ref{e2}) through a Miura transformation in \cite{ShabYam1997}. To the best of our knowledge, systems (\ref{e3})-(\ref{e8}) have not appeared in other classification problems and seem to be new.
\subsection{Deformation of dispersionless Lax pairs}
Lax pairs for systems (\ref{Zak})-(\ref{e8}) are obtained simultaneously through an analogous procedure.
The dispersionless Lax pairs for systems (\ref{disples-sys}) are given by a pair of equations
\begin{equation}
\label{lc}
F(p,q)=0,\quad s=G(p,q)
\end{equation}
and by relations
\begin{equation}
\label{lc1} p_y=q_x,\quad p_t=s_x,\quad q_t=s_y.
\end{equation}
We can view the dispersionless system (\ref{disples-sys}) together with (\ref{lc}-\ref{lc1}) as a quasilinear system of hydrodynamic type of the form (\ref{displess-system}) with a {\it reducible} dispersion relation (\ref{dispersion-relation}). This system possesses an infinite class of solutions of the form
$$
u=u(R^1,\ldots,R^N),\quad v=v(R^1,\ldots,R^N),\quad w=w(R^1,\ldots,R^N)
$$
$$
p=p(R^1,\ldots,R^N),\quad q=q(R^1,\ldots,R^N),\quad s=s(R^1,\ldots,R^N)
$$
$$
R^i_y=\mu^i(R)R^i_x,\quad R^i_t=\lambda^i(R)R^i_x,\quad i=1,\ldots,N.
$$
Similarly to the construction of deformations for the system (\ref{disples-sys}) and its hydrodynamic reductions, we seek the deformation of the dispersionless Lax pair as
\begin{equation}
\label{pqsdef}
\begin{split}
&p=p(R^1,\ldots,R^N)+\epsilon(\ldots)+\epsilon^2(\ldots)+\cdots,\,\, q=q(R^1,\ldots,R^N)+\epsilon(\ldots)+\epsilon^2(\ldots)+\cdots,\\ &s=s(R^1,\ldots,R^N)+\epsilon(\ldots)+\epsilon^2(\ldots)+\cdots,
\end{split}
\end{equation}
where at order $\epsilon^k$ we have homogeneous differential polynomials in the derivatives of $R^i$ of degree $k+1$. Deformations of equations (\ref{lc}) are of the form
\begin{equation}
\label{fgdef}\begin{array}{l}
F(p,q)+\epsilon(f_1p_x+f_2p_y+f_3q_y+f_4u_x+f_5u_y+f_6v_x+f_7v_y)=0,\\
s=G(p,q)+\epsilon(g_1p_x+g_2p_y+g_3q_y+g_4u_x+g_5u_y+g_6v_x+g_7v_y+g_8w_x),
\end{array}
\end{equation}
where $f_i=f_i(u,v,p,q)$, $g_i=g_i(u,v,w,p,q)$ and relations (\ref{lc1}) are kept undeformed.
Notice that we need to deform equations (\ref{lc}) up to the order $\epsilon$ only.
The dispersive deformations of the system (\ref{disples-sys}) have already been constructed, and thus it is only necessary to find series (\ref{pqsdef}) satisfying (\ref{lc1}) and (\ref{fgdef}).
Analogously to the procedure of constructing the dispersive corrections for the equation, one can algorithmically construct the series (\ref{pqsdef}) and obtain a set of differential equations for the dispersive deformations $f_i,g_i$.
Equations (\ref{fgdef}), (\ref{lc1}) can be viewed as a nonlinear form of a Lax representation for the corresponding dispersive system. To obtain the linear Lax representation we recall that $p=S_x,q=S_y,s=S_t,\psi=e^{\frac{S}{\epsilon}}$, and thus we can apply the transformation
\begin{equation}
\label{lint}
p=\epsilon D_x\log(\psi),\quad q=\epsilon D_y\log(\psi),\quad s=\epsilon D_t\log(\psi),
\end{equation}
to relations (\ref{fgdef}).
Finally, the algorithm of deformations of dispersionless Lax pairs is the following:
\begin{itemize}
\item For a given dispersionless system one obtains its dispersive deformations;
\item One deforms the corresponding dispersionless Lax pairs as in (\ref{pqsdef}), (\ref{fgdef}) using the dispersive deformation of the system and its hydrodynamic reductions to obtain the nonlinear Lax representation;
\item One applies the linearization transformation.
\end{itemize}
In the following Tables \ref{dispersive-nonlinear-lax-pairs} and \ref{dispersive-linear-lax-pairs} we present nonlinear and linear Lax representations for systems (\ref{Zak})-(\ref{e8}). The transformation between the tables is of the form (\ref{lint}).
We note that for a given dispersive system the deformation of the corresponding dispersionless Lax pair may not be unique.
In fact, for systems (\ref{Zak})-(\ref{e8}) there exist 3 different dispersive deformations of their Lax pairs.
In the tables we present only the simplest deformations, as we need just one Lax representation for each of the systems
(\ref{Zak})-(\ref{e8}).
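As an illustration of the transformation (\ref{lint}), substituting $p=\epsilon(\log\psi)_x$, $q=\epsilon(\log\psi)_y$, $s=\epsilon(\log\psi)_t$ into the nonlinear pair of Zakharov's system and clearing denominators reproduces the corresponding linear pair. A short symbolic sketch in Python/SymPy:

```python
import sympy as sp

x, y, t, eps = sp.symbols('x y t epsilon')
psi, u, v, w = [sp.Function(n)(x, y, t) for n in ('psi', 'u', 'v', 'w')]

# linearizing substitution (lint)
p = eps*sp.log(psi).diff(x)
q = eps*sp.log(psi).diff(y)
s = eps*sp.log(psi).diff(t)

# nonlinear pair of Zakharov's system (first row of the nonlinear table)
F = p*q + v*q/2 + u/4 + eps*q.diff(x)
G = s + p**2 + w/2 + eps*p.diff(x)

# multiplying by psi must give the linear pair (first row of the Lax table)
lin1 = sp.expand(4*psi*F - (4*eps**2*psi.diff(x, y) + 2*eps*v*psi.diff(y) + u*psi))
lin2 = sp.expand(2*psi*G - (2*eps*psi.diff(t) + 2*eps**2*psi.diff(x, x) + w*psi))
assert lin1 == 0 and lin2 == 0
```

The same substitution linearizes the remaining rows.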
\begin{table}[h]
\begin{math}
\begin{array}{|c|l|}
\hline \text{Eq no} & \text{Nonlinear pair} \\\hline
(\ref{Zak}) & \begin{array}{l}pq+\frac{1}{2}vq+\frac{1}{4}u+\epsilon
q_x=0,\\ s+p^2+\frac{1}{2}w+\epsilon p_x=0\end{array} \\ \hline
(\ref{e1}) & \begin{array}{l}pq-\frac{1}{2}up+\frac{1}{4}v+\epsilon
q_x=0,\\ s+p^2-wp+\epsilon p_x=0\end{array}
\\[1ex] \hline
(\ref{e2})&\begin{array}{l} pq-\frac{1}{2}up+\frac{1}{2}vq+\epsilon q_x=0,\\s+p^2-wp+\epsilon p_x=0\end{array}\\ \hline
(\ref{e3}) &
\begin{array}{l} pq+\frac{1}{4}h v
q+\frac{1}{4}v-\frac{1}{2}up-\frac{1}{8}huv+\epsilon
(q_x+\frac{1}{4}hv_y)=0,\\ s+p^2-wp+\frac{1}{16}h^2v^2+\epsilon
p_x=0
\end{array} \\ \hline
(\ref{e4})&
\begin{array}{l} pq+\frac{1}{2}u q+\frac{1}{4}+vq^2+\epsilon
(vq_y+q_x)=0,\\ s+p^2-v^2q^2+wq+\epsilon (p_x-v^2q_y)=0 \end{array}
\\ \hline
(\ref{e5}) & \begin{array}{l} hpq-\frac{h}{2}(v+hu)q-h^2vq^2+\frac{1}{2}p+\epsilon h(q_x-hvq_y)=0,\\
s+p^2-h^2v^2q^2-\frac{1}{2}h(2w+v^2)q+\epsilon(p_x-h^2v^2q_y)=0\end{array} \\ \hline
(\ref{e6}) & \begin{array}{l}pq-\frac{1}{2}vq-h v q^2+ \frac{1}{4}u+\epsilon(q_x - h v q_y- \left(\frac{1}{2}+h q\right) v_y)=0,\\
s-p^2+h^2v^2q^2 + h ( h uv - hw + \frac{h}{2} v )q - \frac{1}{2}w+\epsilon(h^2 v^2 (q_y - p_x) + (1+2hq)h^2 v^2 v_x )=0\end{array}
\\ \hline
(\ref{e7}) &
\begin{array}{l}
p q+\frac{p}{2h}(1-hu) - h v q^2+\frac{uv}{4} - \frac{1}{2} v q(1-h u) + \epsilon\left(q_x - h v q_y - h \left(\frac{1}{2h}+q\right)v_y\right) = 0,\\
s+p^2-h^2 q^2 v^2+\frac{1}{2}h u v^2+\frac{1}{2} h q(2 h u-1) v^2 - w p + \epsilon\left(p_x - h^2 v^2 q_y - 2h^2\left(\frac{1}{2h}+q\right) v v_y\right) =0
\end{array}
\\ \hline
(\ref{e8}) & \begin{array}{l} pq-hvq^2-\frac{1}{2}up+\frac{1}{4}v+
\epsilon(q_x-hvq_y)=0,\\ s+p^2-h^2v^2q^2-wp+\frac{1}{4}hv^2+\epsilon(p_x-h^2v^2q_y)=0\end{array} \\ \hline
\end{array}
\end{math}
\caption{Nonlinear pairs for systems (\ref{Zak})-(\ref{e8}).}
\label{dispersive-nonlinear-lax-pairs}
\end{table}
\begin{table}[h]
\begin{math}
\begin{array}{|c|l|}
\hline \text{Eq no} & \text{Lax pair} \\\hline
(\ref{Zak}) &
\begin{array}{l} 4\epsilon^2\psi_{xy}+2\epsilon v\psi_{y}+u\psi=0,\\
2\epsilon\psi_t+2\epsilon^2\psi_{xx}+w\psi=0
\end{array} \\ \hline
(\ref{e1}) & \begin{array}{l}
4\epsilon^2\psi_{xy}+v\psi-2\epsilon u\psi_x=0,\\ \psi_t+\epsilon\psi_{xx}-w\psi_x=0\end{array} \\[1ex] \hline
(\ref{e2})& \begin{array}{l} 2\epsilon\psi_{xy}+v\psi_y-u\psi_x=0,\\
\psi_t+\epsilon\psi_{xx}-w\psi_x=0\end{array}\\ \hline
(\ref{e3}) &
\begin{array}{l}8\epsilon^2\psi_{xy}-4\epsilon u \psi_x+2\epsilon h v \psi_y+(2v-huv+2\epsilon hv_y)\psi=0,\\
16\epsilon\psi_t+16\epsilon^2\psi_{xx}-16\epsilon
w\psi_x+h^2v^2\psi=0\end{array} \\ \hline
(\ref{e4})&
\begin{array}{l}
4\epsilon^2\psi_{xy}+4\epsilon^2v\psi_{yy}+2\epsilon
u\psi_y+\psi=0,\\ \psi_t+w\psi_y-\epsilon
v^2\psi_{yy}+\epsilon\psi_{xx}=0\end{array}
\\ \hline
(\ref{e5}) &
\begin{array}{l}2\epsilon h\psi_{xy}-2\epsilon h^2 v\psi_{yy}+\psi_{x}-h(v+h u)\psi_y=0,\\
2\psi_t+2\epsilon\psi_{xx}-h(v^2+2w)\psi_y-2\epsilon
h^2v^2\psi_{yy}=0\end{array} \\ \hline
(\ref{e6}) &
\begin{array}{l}
4\epsilon^2 \psi_{xy} - 4 h \epsilon^2 v \psi_{yy} - 2 \epsilon(v+2\epsilon h v_y) \psi_y + (u-2\epsilon v_y)\psi =0,\\
2 \epsilon \psi_t - 2 \epsilon^2 \psi_{xx} +2\epsilon^2h^2v^2\psi_{yy}+\epsilon h(4\epsilon v_x+v^2+2huv-2w)\psi_y+(2\epsilon v_x-w)\psi = 0
\end{array} \\ \hline
(\ref{e7}) &
\begin{array}{l}
4 h \epsilon^2 (\psi_{xy} - h v \psi_{yy}) - 2h\epsilon (2h\epsilon v_y + v(1-h u))\psi_y - 2\epsilon(hu-1)\psi_x + h(u v-2 \epsilon v_y) \psi = 0,\\
2\epsilon\psi_t - 2h^2 \epsilon^2 v^2 \psi_{yy} -2 \epsilon w \psi_x + 2 \epsilon^2\psi_{xx}+h \epsilon v ( 2h(uv -2\epsilon v_y)-v) \psi_y+ h v( u v - 2 \epsilon v_y) \psi = 0
\end{array}
\\ \hline
(\ref{e8}) & \begin{array}{l}4\epsilon^2\psi_{xy}-4\epsilon^2hv\psi_{yy}-
2\epsilon u\psi_x+v\psi=0,\\ 4\epsilon\psi_t+4\epsilon^2\psi_{xx}-4\epsilon^2h^2v^2\psi_{yy}-4\epsilon w\psi_x+hv^2\psi=0\end{array} \\ \hline
\end{array}
\end{math}
\caption{Lax pairs of systems (\ref{Zak})-(\ref{e8}).}
\label{dispersive-linear-lax-pairs}
\end{table}
\section{Conclusion}
The main objective of this paper was to present new classification results based on the method of hydrodynamic reductions and their dispersive deformations. The technique was extended to provide an algorithm to compute the Lax pair of a dispersive system from that of its dispersionless limit. The classification of dispersionless systems was based on the requirement that in order to possess an integrable dispersive deformation, a dispersionless system should be generated by a polynomial Lax pair. In this work, we have restricted ourselves to second degree polynomial generators and a complete list of integrable Davey-Stewartson type systems has been obtained in this context. Consideration of higher degree polynomials may also lead to new systems and this task will be undertaken in a future work.
\section*{Acknowledgements}
The authors would like to thank E.V. Ferapontov and A.V. Mikhailov
for illuminating discussions. The research of BH was supported by a
FQRNT postdoctoral fellowship held within the School of Mathematics
at Loughborough University. Hospitality is gratefully acknowledged.
\section{Introduction}
Recent attention has focused on semiconductor spintronics, in which electronic spin polarization is used for information processing. The generation and manipulation of nonequilibrium spin densities by exclusively electrical means in nonmagnetic semiconductors is particularly attractive. Progress toward the development of spintronic devices depends on theoretical and experimental studies of effects due to the spin-orbit interaction (SOI). This spin-dependent coupling gives rise to an internal effective magnetic field that leads to spin precession and reorientation. For semiconductor quantum wells or heterostructures, the bulk and structural inversion asymmetry give rise to Dresselhaus and Rashba SOI terms, respectively. Unfortunately, the very same SOI also causes spin relaxation. The randomization of electron spins is due to the fact that the SOI depends on the in-plane momentum ${\bm k}$. Consequently, the precession frequencies differ for spins with different wave vectors. This so-called inhomogeneous broadening in conjunction with any scattering causes spin dephasing,\cite{Wu_2001} the details of which depend on the character of dominating scattering processes, the band structure, and the crystal orientation.\cite{JAP_073702} In GaAs/Al$_{x}$Ga$_{1-x}$As quantum wells grown along the [001] axis and with balanced Rashba and Dresselhaus SOI strength, a strong anisotropy in the in-plane spin dephasing time has been measured.\cite{PRB_033305,APL_112111,PRB_073309} The spin relaxation along the [110] direction is efficiently suppressed.
Based on this effect, which is robust due to an exact spin rotation symmetry of the spin-orbit Hamiltonian,\cite{PRL_236601} a nonballistic spin-field-effect transistor was proposed.\cite{PRL_146801} From a theoretical point of view, it is predicted that for an idealized model with ${\bm k}$-linear SOI the spin polarization along [110] is conserved for a certain wave vector.\cite{PRL_236601} The experimental confirmation of this prediction \cite{PRL_076604} was possible by exploiting transient spin-grating techniques. This experimental method offers an efficient tool for identifying coupled spin-charge eigenstates in the two-dimensional electron gas (2DEG). Optically induced diffraction patterns are formed in semiconductors, when two pulses with identical energies interfere on the sample and excite electron-hole pairs.\cite{PRL_4793,PRL_136602,PRB_125344,JAP_063714} By varying the relative angle between the two pump beams, the grating period can be tuned for resonant excitation of the eigenmodes. With a third time-delayed pulse that diffracts from the photo-injected spin or charge pattern, the time evolution of the spin polarization is monitored. A free-carrier concentration grating is produced within the sample by two beams with parallel linear polarization. Alternatively, an oscillating spin polarization, which levitates over a homogeneous carrier ensemble, is generated by cross-linearly polarized pump pulses. By detuning the frequencies, a moving (oscillating) charge and/or spin pattern can be produced.
Most interesting both for basic research and the application point of view are weakly damped spin-charge eigenmodes of the semiconductor heterostructure. These excitations drastically change their character when an in-plane electric field acts simultaneously on spin and charge carriers. Similar to the traditional study of space-charge waves in crystals (see, for instance, Ref. \onlinecite{BuchPetrov1}), the field-dependent spin modes can be identified and excited by an experimental set up that provides the appropriate wave vector. Such an approach can profit from methods developed in the field of space-charge waves in crystals.
It is the aim of this paper to systematically derive general spin-charge drift-diffusion equations for a semiconductor quantum well with a general ${\bm k}$-linear SOI. Based on the rigorous analytical solution of these equations, a number of electric-field driven spin resonances are studied.
\section{Solution of drift-diffusion equations}
In this section, we introduce the model, derive and solve general spin-charge coupled drift-diffusion equations for conduction-band electrons in an asymmetric semiconductor quantum well. Coupled spin and charge excitations are treated by an effective-mass Hamiltonian, which includes both SOI and short-range, spin-independent elastic scattering on impurities. We are mainly interested in spin effects exerted by an in-plane electric field ${\bm E}=(E_x,E_y,0)$. The single-particle Hamiltonian
\begin{eqnarray}
H_{0}&=&\sum_{\bm{k},s}a_{\bm{k}s}^{\dag}\left[\varepsilon_{\bm{k}}-\varepsilon_{F}\right]a_{\bm{k}s}+\sum_{\bm{k},s,s^{\prime}}\left({\bm{\Omega}}({\bm{k}})\cdot{\bm{\sigma}}_{ss^{\prime}}\right)a_{\bm{k}s}^{\dag}a_{\bm{k}s^{\prime}}\nonumber\\
&-&e{\bm{E}}\sum_{\bm{k},s}\left.\nabla_{\bm\kappa}a^{\dag}_{\bm{k}-\frac{\bm{\kappa}}{2}s}a_{\bm{k}+\frac{\bm{\kappa}}{2}s}\right|_{\bm{\kappa}=\bm{0}}+u\sum\limits_{\bm{k},\bm{k}^{\prime}}\sum\limits_{s}a_{\bm{k}s}^{\dag}a_{\bm{k}^{\prime}s},
\label{Hamil}
\end{eqnarray}
is expressed by carrier creation ($a_{\bm{k}s }^{\dag}$) and annihilation ($a_{\bm{k}s }$) operators, the quantum numbers of which are the in-plane wave vector ${\bm k}=(k_x,k_y,0)$ and the spin index $s$. In Eq.~(\ref{Hamil}), $\varepsilon_{\bm k}$, $\varepsilon_F$, and ${\bm \sigma}$ denote the energy of free electrons, the Fermi energy, and the vector of Pauli matrices, respectively. The strength $u$ of elastic scattering can alternatively be characterized by the momentum-relaxation time $\tau$. Contributions of the SOI are absorbed into the definition of the vector ${\bm\Omega}({\bm k})$. We restrict the treatment to linear-in ${\bm k}$ Rashba and Dresselhaus SOI terms, which result from the inversion asymmetry of the quantum-well confining potential and the lack of bulk inversion symmetry. For the combined Rashba-Dresselhaus model, the electric-field induced spin polarization depends on the orientation of the in-plane electric field.\cite{IJMPB_4937,PRB_155323} In addition, spin relaxation and spin transport are not only affected by the orientation of spins, but also by the spin-injection direction.\cite{PRB_205328} In order to be in the position to account for all cases of interest, we consider the general class of ${\bm k}$-linear SOI expressed by $\Omega_{i}({\bm k})=\alpha_{ij}k_{j}$, where $\alpha_{ij}$ are spin-orbit coupling constants.\cite{PRB_205340} The most studied example is a semiconductor quantum well grown along the [001] direction. Under the condition that the Cartesian coordinate axes are chosen along the principal crystallographic directions, we have for the combined Rashba-Dresselhaus model $\alpha_{11}=\beta$, $\alpha_{12}=\alpha$, $\alpha_{21}=-\alpha$, and $\alpha_{22}=-\beta$, with $\alpha$ and $\beta$ being the Rashba and Dresselhaus coupling constants, respectively. 
A change of the spin-injection direction is achieved by the transformation ${\bm \Omega}^{\prime}=U{\bm \Omega}(U^{-1}{\bm k})$, with $U$ being a rotation matrix.\cite{PRB_205328} A particular configuration is obtained after a rotation around $\pi/4$, which leads to the SOI couplings: $\alpha_{11}=0$, $\alpha_{12}=\alpha-\beta$, $\alpha_{22}=0$, and $\alpha_{21}=-(\alpha+\beta)$.
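The quoted couplings follow from the transformation above: for the ${\bm k}$-linear case ${\bm\Omega}=\widehat{\alpha}{\bm k}$, the rule ${\bm \Omega}^{\prime}=U{\bm \Omega}(U^{-1}{\bm k})$ acts on the coupling matrix as $\widehat{\alpha}\to U\widehat{\alpha}U^{T}$. The sketch below (Python/SymPy) uses the rotation angle $-\pi/4$; the sign of the angle is a convention depending on the orientation of the rotation:

```python
import sympy as sp

alpha, beta, th = sp.symbols('alpha beta theta')
U = sp.Matrix([[sp.cos(th), -sp.sin(th)],
               [sp.sin(th), sp.cos(th)]])
A = sp.Matrix([[beta, alpha],
               [-alpha, -beta]])     # combined Rashba-Dresselhaus couplings

# Omega'(k) = U Omega(U^{-1} k)  =>  alpha' = U A U^T
Ap = sp.simplify((U*A*U.T).subs(th, -sp.pi/4))
assert Ap == sp.Matrix([[0, alpha - beta],
                        [-(alpha + beta), 0]])
```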
Spin-related phenomena are completely captured by the spin-density matrix $f_{ss'}({\bm k},{\bm \kappa}|t)$, which is calculated from quantum-kinetic equations.\cite{PRL_226602,PRB_155308,PRB_075306,PRB_165313} Based on the Born approximation for elastic scattering and on low-order corrections of the SOI to the collision integral, Laplace-transformed kinetic equations are obtained for the physical components $f={\rm Tr}\widehat{f}$ and ${\bm f}={\rm Tr}{\bm\sigma}\widehat{f}$ of the spin-density matrix.\cite{PRB_165313} In these equations, spin-dependent contributions to the collision integral are needed to guarantee that the spin system correctly approaches the state of thermodynamic equilibrium. A solution of the coupled kinetic equations is searched for in the long-wavelength and low-frequency regime. Systematic studies are possible under the condition of weak SOI, when a physically relevant evolution period exists, in which the carrier energy is already thermalized, although both the charge and spin densities still remain inhomogeneous. We shall focus on this regime, where the following ansatz for the spin-density matrix is justified:\cite{PRB_075340}
\begin{equation}
\overline{f}({\bm k},{\bm\kappa}|t)=-F({\bm\kappa},t)\frac{dn(\varepsilon_{\bm k})/d\varepsilon_{\bm k}}{d n/d\varepsilon_F},\label{chargef}
\end{equation}
\begin{equation}
\overline{{\bm f}}({\bm k},{\bm\kappa}|t)=-{\bm F}({\bm\kappa},t)\frac{dn(\varepsilon_{\bm k})/d\varepsilon_{\bm k}}{d n/d\varepsilon_F}.\label{spinf}
\end{equation}
The bar over the quantities $f$ and ${\bm f}$ indicates an integration with respect to the polar angle of the vector ${\bm k}$. In Eqs.~(\ref{chargef}) and (\ref{spinf}), $n(\varepsilon_{\bm k})$ denotes the Fermi function and $n=\int d\varepsilon \rho(\varepsilon)n(\varepsilon)$ is the carrier density with $\rho(\varepsilon)$ being the density of states of the 2DEG. By applying the outlined scheme, spin-charge coupled drift-diffusion equations are straightforwardly derived for the macroscopic carrier density $F({\bm\kappa},t)$ and magnetization ${\bm M}({\bm\kappa},t)=\mu_B{\bm F}({\bm\kappa},t)$ [with $\mu_B=e\hbar/(2mc)$ being the Bohr magneton]. For the general class of SOI ${\bm\Omega}=\widehat{\alpha}{\bm k}$, we obtain the coupled set of equations
\begin{equation}
\left[\frac{\partial}{\partial t}-i\mu{\bm E}{\bm\kappa}+D{\bm\kappa}^2 \right]F+\frac{i}{\hbar\mu_B}{\bm\Omega}({\bm\kappa}){\bm M}-\frac{2im\tau}{\hbar^3\mu_B}|\widehat{\alpha}|([{\bm\kappa}\times\mu{\bm E}]{\bm M})=0,\label{drift1}
\end{equation}
\begin{equation}
\left[\frac{\partial}{\partial t}-i\mu{\bm E}{\bm\kappa}+D{\bm\kappa}^2+\widehat{\Gamma} \right]{\bm M}-\frac{e}{mc}{\bm M}\times{\bm H}_{eff}-\chi (\widehat{\Gamma}{\bm H}_{eff})\frac{F}{n}-\frac{im\mu^2}{\hbar^2 c}|\widehat{\alpha}|({\bm\kappa}\times{\bm E})F={\bm G},\label{drift2}
\end{equation}
in which the matrix of spin-scattering times $\widehat{\Gamma}$, an effective magnetic field ${\bm H}_{eff}$, and the determinant $|\widehat{\alpha}|$ of the matrix $\widehat{\alpha}$ appear. $\chi=\mu_B^2n^{\prime}$ denotes the Pauli susceptibility. In addition, the spin generation by an external source is treated by the vector ${\bm G}$ on the right-hand side of Eq.~(\ref{drift2}). To keep the representation transparent, let us focus on the combined Rashba-Dresselhaus model $\alpha_{11}=-\alpha_{22}=\beta$, $\alpha_{12}=-\alpha_{21}=\alpha$, for which the coupling parameters are expressed by
\begin{equation}
\alpha=\frac{\hbar^2K}{m}\cos(\psi+\pi/4),\quad \beta=\frac{\hbar^2K}{m}\sin(\psi+\pi/4).
\end{equation}
By choosing $\psi=-\pi/4$ or $\psi=\pi/4$, the pure Rashba or pure Dresselhaus SOI is reproduced, respectively. For the spin scattering matrix, we have
\begin{equation}
\widehat{\Gamma}=\frac{1}{\tau_s}\left(
\begin{array}{ccc}
1 & \cos(2\psi) & 0 \\
\cos(2\psi) & 1 & 0 \\
0 & 0 & 2 \\
\end{array}
\right),
\end{equation}
in which the spin-scattering time $\tau_s$ appears ($1/\tau_s=4DK^2$) with $D$ being the diffusion coefficient of the charge carriers. The electric field enters the drift-diffusion Eqs.~(\ref{drift1}) and (\ref{drift2}) both directly and via an effective magnetic field
\begin{equation}
{\bm H}_{eff}=-\frac{2m^2c}{e\hbar^2}\widehat{\alpha}(\mu {\bm E}+2iD{\bm\kappa}),
\end{equation}
which is related to the SOI. This auxiliary field originates on the one hand from the in-plane electric field ${\bm E}$ and on the other hand from the inhomogeneity of the spin and charge densities. For the mobility $\mu$, the Einstein relation $\mu=eDn^{\prime}/n$ holds, with $n^{\prime}=dn/d\varepsilon_F$. Variants of Eqs.~(\ref{drift1}) and (\ref{drift2}) have been published previously.\cite{PRB_052407,PRB_245210,PRL_236601,PRB_041308,PRB_205326}
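The structure of $\widehat{\Gamma}$ already encodes the long-lived spin direction mentioned in the introduction: for balanced Rashba and Dresselhaus couplings ($\alpha=\beta$, i.e. $\psi=0$) one relaxation rate vanishes, and the corresponding in-plane eigenvector lies along the diagonal, consistent with the suppressed dephasing along [110]. A quick symbolic check (Python/SymPy; identifying $(1,-1,0)$ with the [110] axis depends on the sign conventions of the crystallographic frame):

```python
import sympy as sp

psi, tau_s = sp.symbols('psi tau_s', positive=True)
Gamma = sp.Matrix([[1, sp.cos(2*psi), 0],
                   [sp.cos(2*psi), 1, 0],
                   [0, 0, 2]]) / tau_s

# the three relaxation rates are (1 - cos 2psi)/tau_s, (1 + cos 2psi)/tau_s, 2/tau_s
rates = Gamma.eigenvals()
assert sum(rates.values()) == 3

# balanced couplings alpha = beta correspond to psi = 0; then the in-plane
# spin component along (1,-1,0) is not relaxed at all
G0 = Gamma.subs(psi, 0)
assert G0 * sp.Matrix([1, -1, 0]) == sp.Matrix([0, 0, 0])
```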
An exact solution of the spin-charge coupled drift-diffusion Eq.~(\ref{drift2}) for the field-induced magnetization is easily derived for the case ${\bm E}\uparrow\uparrow{\bm\kappa}$. By applying a Laplace transformation with respect to the time variable $t$, we obtain the analytic solution
\begin{equation}
{\bm M}_{\perp}^{\prime}=\frac{\Sigma+\widehat{\Gamma}}{D_T}\left[{\bm Q}_{\perp}-\frac{e}{mc\sigma}Q_z{\bm H}_{eff} \right]-\frac{1}{\sigma D_T}\left(\frac{e}{mc} \right)^2[{\bm H}_{eff}\times({\bm H}_{eff}\times{\bm Q}_{\perp})],\label{Mag1}
\end{equation}
\begin{equation}
M_z=\frac{Q_z}{\sigma}+\frac{e}{mc\sigma}\frac{1}{D_T}{\bm H}_{eff}(\Sigma +\widehat{\Gamma}){\bm Q}_{\perp}-\left(\frac{e}{mc\sigma} \right)^2\frac{1}{D_T}{\bm H}_{eff}(\Sigma+\widehat{\Gamma}){\bm H}_{eff}Q_z,\label{Mag2}
\end{equation}
where ${\bm M}^{\prime}={\bm M}-\chi{\bm H}_{eff}$ and ${\bm M}_{\perp}^{\prime}={\bm e}_z\times{\bm M}^{\prime}$. The inhomogeneity of the transformed Eq.~(\ref{drift2}) for the spin-density matrix is denoted by ${\bm Q}$ and has the form
\begin{equation}
{\bm Q}={\bm M}(t=0)+(i\mu{\bm E}{\bm\kappa}-D{\bm\kappa}^2)\chi{\bm H}_{eff}/s+{\bm G}/s.
\end{equation}
Other quantities that appear in Eqs.~(\ref{Mag1}) and (\ref{Mag2}) are defined by $\Sigma=s-i\mu{\bm E}{\bm\kappa}+D{\bm\kappa}^2$ and $\sigma=\Sigma+2/\tau_s$, with $s$ being the Laplace variable. The general solution in Eqs.~(\ref{Mag1}) and (\ref{Mag2}) provides the basis for the study of numerous spin-related phenomena including effects of oscillating electric fields. Most important is the identification of spin excitations by treating the denominator $D_T$, which is given by
\begin{equation}
D_T=\frac{1}{\sigma}\biggl\{\Sigma\left[\sigma^2+\left(\frac{e}{mc}{\bm H}_{eff} \right)^2 \right]+g^2\left[\sigma+\frac{(\mu{\bm E})^2}{D} \right] \biggr\},\label{det}
\end{equation}
with $g=4Dm^2|\widehat{\alpha}|/\hbar^4$. The cubic equation $D_T=0$ with respect to the Laplace variable $s\rightarrow -i\omega$ yields three spin-related eigenmodes that have already been studied for zero electric field ${\bm E}={\bm 0}$ in Ref.~\onlinecite{PRB_125307}. Field-dependent eigenstates calculated from the zeros of Eq.~(\ref{det}) are characterized both by the direction of the electric field and by the spin injection/diffusion direction. Most solutions of this equation describe damped resonances in the charge transport and spin polarization. However, as already mentioned, there also exist undamped excitations, which have received particular interest in recent studies. Here, we focus on these long-lived spin states and study their dependence on an in-plane electric field.
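As a cross-check of the mode structure encoded in Eq.~(\ref{det}), the following sketch expands $\sigma D_T$ into a cubic polynomial in the Laplace variable $s$ and finds its three zeros numerically. The magnitudes $h=(e/mc)|{\bm H}_{eff}|$ and $g$, as well as all parameter values, are illustrative assumptions for a fixed geometry, not values taken from the text.

```python
import numpy as np

def spin_modes(D, kappa, mu_E, tau_s, h, g):
    """Zeros of sigma*D_T as a cubic in s, with
    Sigma = s + D*kappa^2 - i*mu_E*kappa and sigma = Sigma + 2/tau_s."""
    Sigma = np.poly1d([1.0, D * kappa**2 - 1j * mu_E * kappa])
    sigma = Sigma + 2.0 / tau_s
    poly = Sigma * (sigma * sigma + h**2) + g**2 * (sigma + mu_E**2 / D)
    return poly.roots  # three (generally complex) eigenmodes

# Limit E = 0, g = 0: one purely relaxational mode s = -D*kappa^2 and two
# precessional modes s = -D*kappa^2 - 2/tau_s +- i*h.
roots = spin_modes(D=24.0, kappa=1e5, mu_E=0.0, tau_s=1e-9, h=1e9, g=0.0)
assert len(roots) == 3
assert any(abs(r + 2.4e11) < 1e6 for r in roots)
```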
\section{Study of field-dependent eigenmodes}
In the second part of the paper, we present selected applications of the general approach developed in the previous section. We focus on the influence of spin injection on charge transport and on long-lived field-induced spin excitations probed by interference gratings.
\subsection{Amplification of an electric field by spin injection}
As a first application of our general approach, the charge-current density is studied on the basis of its definition
\begin{equation}
{\bm j}(t)=-ie\left. \nabla_{\bm \kappa}F({\bm\kappa},t)\right|_{\bm\kappa=0}.
\end{equation}
Taking into account Eq.~(\ref{drift1}) and the solution in Eqs.~(\ref{Mag1}) and (\ref{Mag2}), the components of the conductivity tensor are straightforwardly calculated. Here, we are interested in combining the ac electric field with an external permanent spin-injection source that provides a generation rate $G_z(s)$ for the out-of-plane spin polarization.
Focusing on the linear response regime with respect to the in-plane electric field, an analytical expression for the conductivity tensor $\widehat{\sigma}$ is obtained. In accordance with the applicability of the drift-diffusion approach, the derivation is restricted to the case when the inequality $\hbar n^{\prime}/(\tau n)\ll 1$ is satisfied. For the frequency-dependent ($s\rightarrow -i\omega$) longitudinal and Hall conductivities, we obtain
\begin{equation}
\sigma_{\stackrel{xx}{yy}}(s)/\sigma_0=1\pm \frac{G_z\tau_s}{n}\frac{\hbar n^{\prime}}{4\tau n} \frac{\sin(4\psi)}{(s\tau_s+1)(s\tau_s+2\cos^2\psi)(s\tau_s+2\sin^2\psi)}
\end{equation}
\begin{equation}
\sigma_{\stackrel{xy}{yx}}(s)/\sigma_0=\pm \frac{G_z\tau_s}{n}\frac{\hbar n^{\prime}}{2\tau n}
\frac{\sin(2\psi)}{s\tau_s+2}\left[\frac{\tau}{\tau_s}-\frac{s\tau_s+1}{(s\tau_s+2\cos^2\psi)(s\tau_s+2\sin^2\psi)} \right],
\end{equation}
with $\sigma_0=e\mu n$. Other contributions to the charge transport, which are solely due to the SOI, are much weaker than the retained terms originating from spin injection. This conclusion is illustrated by the curves (a) in Fig.~\ref{Figure1}, which have been numerically calculated for the case $G_z=0$. Weak SOI leads only to a slight deviation of the longitudinal conductivities $\sigma_{xx}$ and $\sigma_{yy}$ from $\sigma_0$. The spin effect completely disappears for the special Rashba-Dresselhaus model with $\alpha=\beta$ ($\psi=0$). The situation changes drastically when there is an appreciable permanent spin injection, which leads to additional contributions to the steady-state charge transport owing to the spin-galvanic effect. An example is shown by the curves (b) in Fig.~\ref{Figure1}. The striking observation is that Re~$\sigma_{yy}$ becomes negative for frequencies below about $10^{10}$~Hz. This remarkable behavior is confirmed by the expression for the static conductivity
\begin{equation}
\sigma_{\stackrel{xx}{yy}}(s=0)/\sigma_0=1\pm \frac{G_z\tau_s}{n}\frac{\hbar n^{\prime}}{4\tau n}\cot(2\psi),
\end{equation}
from which it is obvious that $\sigma_{yy}$ changes its sign for sufficiently strong spin injection. Therefore, we meet the particular situation that a paramagnetic medium, which usually absorbs energy from an ac electric field to produce a
\begin{figure}
\centerline{\epsfig{file=figure1.eps,width=7.0cm}}
\caption{Real part of the diagonal conductivity components $\sigma_{xx}$ (dashed lines) and $\sigma_{yy}$ (solid lines) as a function of frequency $\omega$ for $\beta/\alpha=0.5$, $\alpha=10^{-9}$~eVcm, $\tau=10^{-13}$~s, and $D=24$~cm$^2$/s. The sets of dashed and solid lines (a) and (b) are calculated with $G_{z0}/n=0$ and $0.03$, respectively.}
\label{Figure1}
\end{figure}
spin accumulation, is driven into a regime where the ac field propagating in a given direction is amplified by spin injection. This stimulated emission is similar to the microwave energy gain that was recently predicted to occur in a paramagnetic medium with sufficiently large injection spin currents.\cite{PRL_116601} Based on their findings derived for rotating magnetic fields, the authors proposed a new concept for a spin-injection maser. Our result is in line with their conclusion.
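The sign change of the static conductivity can be made explicit with a short numerical check. The dimensionless factor $\eta=\hbar n^{\prime}/(4\tau n)$ and the injection rates are treated as free illustrative parameters (an assumption, not fitted values):

```python
import numpy as np

def sigma_yy_static(Gz_tau_s_over_n, eta, psi):
    """Static limit of sigma_yy/sigma_0 = 1 - (G_z tau_s/n)*eta*cot(2 psi)."""
    return 1.0 - Gz_tau_s_over_n * eta * np.cos(2 * psi) / np.sin(2 * psi)

eta, psi = 0.05, np.pi / 8       # illustrative values; cot(2*psi) = 1
weak, strong = 0.03, 30.0        # injection rates G_z*tau_s/n
assert sigma_yy_static(weak, eta, psi) > 0    # ordinary dissipative response
assert sigma_yy_static(strong, eta, psi) < 0  # gain regime: field amplified
```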
\subsection{Electric-field mediated spin excitations}
As a second application of our general approach, we treat coupled spin-charge eigenstates that exist in a biased sample. Effects of this kind depend not only on the directions of the electric field and the spin injection, but also on the orientation of the crystallographic axes. Here, we study the influence of an electric field on an optically generated standing spin lattice that is periodic along the $\kappa_{+}=(\kappa_x+\kappa_y)/\sqrt{2}$ direction. For simplicity, it is assumed that the spin generation provides a regular lattice for the out-of-plane spin polarization
\begin{equation}
Q_z(\kappa)=\frac{Q_{z0}}{2}[\delta(\kappa_{+}-\kappa_0)+\delta(\kappa_{+}+\kappa_0)].
\end{equation}
Inserting this source term into Eq.~(\ref{Mag2}), we obtain for the related field-dependent magnetization the solution
\begin{equation}
M_z(\kappa,s)=\frac{\Sigma\sigma +g^2}{\Sigma\left[\sigma^2+\left(\frac{e}{mc}{\bm H}_{eff} \right)^2 \right]+g^2\left[\sigma +(\mu{\bm E})^2/D \right]}Q_z(\kappa),\label{GlMz}
\end{equation}
the character of which is mainly determined by the poles calculated from the zeros of the denominator. Pronounced oscillations arise for the special Rashba-Dresselhaus model with $\alpha=\beta$. In this case ($g=0$), Eq.~(\ref{GlMz}) is easily transformed to spatial and time variables with the result
\begin{eqnarray}
M_z(r_{+},t)=\frac{Q_{z0}}{2}&\biggl\{&e^{-D(\kappa_0+2\sqrt{2}K)^2t}\cos(\kappa_0r_{+}+\mu E_{+}(\kappa_0+2\sqrt{2}K)t)\nonumber\\
&+&e^{-D(\kappa_0-2\sqrt{2}K)^2t}\cos(\kappa_0r_{+}+\mu E_{+}(\kappa_0-2\sqrt{2}K)t)\biggl\},
\end{eqnarray}
where $K=\sqrt{2}m\alpha/\hbar^2$ and $r_{+}=(r_x+r_y)/\sqrt{2}$. In general, this solution describes damped oscillations of the magnetization. However, due to a spin-rotation symmetry, an undamped soft mode appears when the wave-vector component $\kappa_0$ of the imprinted spin lattice matches the quantity $2\sqrt{2}K$, which is a measure of the SOI. This eigenmode leads to long-lived oscillations of the magnetization. Numerical results, calculated from Eq.~(\ref{GlMz}), are shown in Fig.~\ref{Figure2}.
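The saturation at $M_z=Q_{z0}/2$ for a matched grating can be confirmed directly from the closed-form solution above; the parameter values in the following sketch are illustrative assumptions:

```python
import numpy as np

def Mz(t, kappa0, K, D, muE, Qz0=1.0, r=0.0):
    """Closed-form M_z(r_+, t) for alpha = beta (g = 0): two cosine terms
    damped with rates D*(kappa0 +- 2*sqrt(2)*K)^2."""
    qp = kappa0 + 2 * np.sqrt(2) * K
    qm = kappa0 - 2 * np.sqrt(2) * K
    return 0.5 * Qz0 * (np.exp(-D * qp**2 * t) * np.cos(kappa0 * r + muE * qp * t)
                        + np.exp(-D * qm**2 * t) * np.cos(kappa0 * r + muE * qm * t))

D, muE, K = 24.0, 1e7, 1e5       # illustrative values
t_long = 1e-6
# Matched grating (kappa0 = 2*sqrt(2)*K): the second term is undamped, and
# the magnetization saturates at Qz0/2 for r_+ = 0.
assert abs(Mz(t_long, 2 * np.sqrt(2) * K, K, D, muE) - 0.5) < 1e-6
# Detuned grating: both terms decay.
assert abs(Mz(t_long, 2.5 * 2 * np.sqrt(2) * K, K, D, muE)) < 1e-3
```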
\begin{figure}
\centerline{\epsfig{file=figure2.eps,width=7.0cm}}
\caption{Time dependence of the electric-field induced out-of-plane magnetization for $E_{+}=500$~V/cm, $D=24$~cm$^2$/s, and $\tau=10^{-13}$~s. The lines a, b, and c were calculated for ($\kappa_{+}/\sqrt{2}K=2$, $\alpha=\beta=10^{-9}$~eVcm), ($\kappa_{+}/\sqrt{2}K=2.01$, $\alpha=\beta=10^{-9}$~eVcm), and ($\kappa_{+}/\sqrt{2}K=2.02$, $\alpha=1.05\times 10^{-9}$~eVcm, $\beta=0.95\times 10^{-9}$~eVcm), respectively. In addition, we set $r_{+}=0$.}
\label{Figure2}
\end{figure}
Under the ideal conditions $\alpha=\beta$ and $\kappa_0=2\sqrt{2}K$, the induced magnetization rapidly reaches the value $M_z=Q_{z0}/2$ (curve a) and remains constant afterwards. However, any slight detuning of this special set of parameters sparks damped oscillations that can last for many nanoseconds. Examples are shown by the curves b and c in Fig.~\ref{Figure2}. The importance of such spin-coherent waves, especially their potential for future spintronic applications, has recently been emphasized by Pershin.\cite{PRB_155317} The long-lived spin waves examined in this section are generated solely by an in-plane electric field. We regard them as building blocks for future spintronic devices that rely exclusively on electronic means for generating and manipulating spin.
\section{Excitation of spin waves}
Another wide research area that is covered by the spin-charge coupled drift-diffusion Eqs.~(\ref{drift1}) and (\ref{drift2}) [or their solution in Eqs.~(\ref{Mag1}) and (\ref{Mag2})] refers to the response of the spin subsystem to space-charge waves in semiconductor nanostructures. To provide an example of this kind of study, we focus in this section on spin waves that are excited by an optically induced moving charge-density grating. Two laser beams with a slight frequency shift between them produce a moving interference pattern on the surface of the semiconductor sample, which leads to a periodic generation rate for electrons and holes of the form
\begin{equation}
g(x,t)=g_0+g_m\cos(K_g x-\Omega t),\label{gen}
\end{equation}
with a homogeneous part $g_0$ and a modulation $g_m$. $K_g$ and $\Omega$ denote the wave vector and frequency of the grating. The generation rate $g(x,t)$ causes electron [$F(x,t)$] and hole [$P(x,t)$] density fluctuations that have the same spatial and temporal periodicity as the source $g(x,t)$. The dynamics of photogenerated electrons and holes is described by continuity equations, which encompass both carrier generation [$g(x,t)$] and recombination [$r(x,t)$] as well as drift and diffusion (see, for example, Ref. \onlinecite{APL_073711}). If the retroaction of spin on the carrier ensemble is neglected, we obtain the set of equations
\begin{equation}
\frac{\partial F}{\partial t}=\frac{1}{e}\frac{\partial J_n}{\partial x}+g(x,t)-r(x,t),
\end{equation}
\begin{equation}
\frac{\partial P}{\partial t}=-\frac{1}{e}\frac{\partial J_p}{\partial x}+g(x,t)-r(x,t),
\end{equation}
where the current densities for electrons [$J_n(x,t)$] and holes [$J_p(x,t)$] are calculated from drift and diffusion contributions
\begin{equation}
J_n=e\mu_nE_{x}F+eD_n\frac{\partial F}{\partial x},
\end{equation}
\begin{equation}
J_p=e\mu_pE_{x}P-eD_p\frac{\partial P}{\partial x}.
\end{equation}
In these equations, $\mu_n$ and $\mu_p$ ($D_n$ and $D_p$) denote the mobilities (diffusion coefficients) for electrons and holes, respectively. A constant electric field $E_0$ applied along the $x$ direction is complemented by a space-charge field $\delta E(x,t)$, which is calculated from unbalanced electron and hole densities via the Poisson equation
\begin{equation}
\frac{\partial E_{x}}{\partial x}=\frac{4\pi e}{\varepsilon}(P-F),\label{Poiss}
\end{equation}
with $\varepsilon$ being the dielectric constant. The optical grating leads to a weak modulation of the carrier densities around their mean value ($F=F^{0}+\delta F$, $P=P^{0}+\delta P$). Due to the spin-charge coupling manifest in Eqs.~(\ref{drift1}) and (\ref{drift2}), charge-density waves are transferred to the spin degrees of freedom and vice versa. As the hole spin relaxation is rapid, the time evolution of the generated spin pattern can be interpreted in terms of the motion of electrons alone. Consequently, the hole density is not considered in the equations for the magnetization.
In the absence of the optical grating, there is no out-of-plane spin polarization ($F_z^{0}=0$). For the in-plane components, a short calculation leads to the result
\begin{equation}
F_x^{0}=-\frac{1}{2}\hbar K_{11}\mu_nE_0n^{\prime},
\end{equation}
\begin{equation}
F_y^{0}=-\frac{1}{2}\hbar K_{21}\mu_nE_0n^{\prime},
\end{equation}
which expresses the well-known effect of the electric-field mediated spin accumulation.~\cite{Edelstein,Aronov} In these equations, the spin-orbit coupling constants are denoted by $K_{ij}=2m\alpha_{ij}/\hbar^2$. Besides this homogeneous spin polarization, there is a field-induced contribution, which is due to the optical grating. For the respective spin modulation, the harmonic dependence of the carrier generation in Eq.~(\ref{gen}) via $z=K_gx-\Omega t$ remains intact. In view of the periodic boundary condition naturally adopted for the optically induced grating, it is expedient to perform a discrete Fourier transformation with respect to the $z$ variable according to the prescription $F(z)=\sum_{p}\exp(ipz)F(p)$. The resulting equations for the Fourier coefficients of the magnetization are easily solved by perturbation theory with respect to the optically-induced electric field $Y=\delta E/E_0$ and spin $\delta {\bm F}$ contributions. For the field-dependent homogeneous spin components ($p=0$), we obtain the solution
\begin{equation}
\delta F_x(0)=\frac{2K_{12}n}{g\tau_s eE_0 n^{\prime}}\delta F_z(0),\label{Fx0}
\end{equation}
\begin{equation}
\delta F_y(0)=\frac{2K_{22}n}{g\tau_s eE_0 n^{\prime}}\delta F_z(0),\label{Fy0}
\end{equation}
\begin{equation}
\delta F_z(0)=-\frac{(\mu_n E_0)^2/D_n}{2/\tau_s+(\mu_n E_0)^2/D_n}\biggl\{S(1)Y(-1)+S(-1)Y(1) \biggr\},\label{Fz0}
\end{equation}
in which the $p=1$ spin fluctuation occurs via the quantity
\begin{equation}
S(1)=\delta F_z(1)+\frac{\tau_s}{2\tau_E}\frac{K_{11}}{K_g}\delta F_y(1)-\frac{\tau_s}{2\tau_E}\frac{K_{21}}{K_g}\delta F_x(1).\label{S1}
\end{equation}
The time $\tau_{E}$, which is set by the constant electric field, is given by $1/\tau_{E}=\mu_n E_0K_g$. The spin response described by Eqs.~(\ref{Fx0}) to (\ref{S1}) is a consequence of electric-field fluctuations that accompany the optically induced charge modulation. As we neglect the retroaction of the induced spin fluctuation on the charge balance, the determination of $Y(p=1)$ rests exclusively on Eqs.~(\ref{gen}) to (\ref{Poiss}). The calculation has been performed in our previous work.\cite{APL_073711} To keep our presentation self-contained, we quote previously derived results that are needed for the calculation of the spin polarization. The relative electric-field modulation, which has the form
\begin{equation}
Y(1)=-\frac{g_m}{2g_0}\frac{1+i\Lambda_{-}}{\tau\tau_M(\Omega-\Omega_1)(\Omega-\Omega_2)},
\end{equation}
exhibits characteristic resonances at eigenmodes of space-charge waves given by
\begin{equation}
\Omega_{1,2}=-\frac{1}{2}(\mu_{-}E_0K_g+i\Gamma)\pm\sqrt{\left(\frac{1}{2}(\mu_{-}E_0K_g+i\Gamma) \right)^2+(1+\alpha_1)/\tau\tau_M}.\label{Om12}
\end{equation}
The damping of this mode
\begin{equation}
\Gamma=D_{+}K_g^2+\frac{1}{\tau_M}+\frac{1}{\tau}
\end{equation}
includes the Maxwellian relaxation time $\tau_M=\varepsilon/4\pi\sigma_d$ with $\sigma_d=eg_0\tau\mu_{+}$ and $\mu_{\pm}=\mu_n\pm\mu_p$. The parameter $\alpha_1$ in Eq.~(\ref{Om12}) depends on the electric field and is calculated from
\begin{equation}
\alpha_1=d_{+}(\Lambda_{+}-\mu\Lambda_{-})+\kappa d_{+}(1-\mu^2+\Lambda_{+}^2-\Lambda_{-}^2)+\kappa\Lambda_{+},
\end{equation}
where $d_{\pm}=\mu_{\pm}E_0K_g\tau/2$, $\Lambda_{\pm}=D_{\pm}K_g/\mu_{+}E_0$, $\kappa=(\varepsilon/4\pi e)E_0K_g/(2g_0\tau)$, and $\mu=\mu_{-}/\mu_{+}$. The resonant amplification of dc and ac current components due to space-charge waves provides information useful for the determination of the lifetime and the mobilities of photogenerated electrons and holes in semiconductors.\cite{APL_073711}
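The two space-charge-wave eigenfrequencies $\Omega_{1,2}$ of Eq.~(\ref{Om12}) can be evaluated for any parameter set; the numbers below are illustrative assumptions. As a consistency check, the roots must satisfy the Vieta relations of the underlying quadratic $(\Omega-\Omega_1)(\Omega-\Omega_2)$:

```python
import numpy as np

def omega_12(mu_minus, E0, Kg, Gamma, alpha1, tau, tau_M):
    """Eigenfrequencies Omega_{1,2} = -a +- sqrt(a^2 + b) with
    a = (mu_- E0 Kg + i Gamma)/2 and b = (1 + alpha1)/(tau tau_M)."""
    a = 0.5 * (mu_minus * E0 * Kg + 1j * Gamma)
    s = np.sqrt(a**2 + (1.0 + alpha1) / (tau * tau_M))
    return -a + s, -a - s

w1, w2 = omega_12(mu_minus=0.3, E0=1e3, Kg=1e4, Gamma=1e7,
                  alpha1=0.1, tau=1e-6, tau_M=1e-9)
# Vieta: sum = -(mu_- E0 Kg + i Gamma), product = -(1 + alpha1)/(tau tau_M).
assert abs((w1 + w2) + (3e6 + 1e7j)) < 1.0
assert abs(w1 * w2 + 1.1e15) < 1e4
```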
To continue the analysis of the spin response, the set of linear equations for the $p=1$ Fourier coefficients of the spin vector must be solved. The analytic solution has the form
\begin{eqnarray}
S(1)=&&\frac{\hbar\tau_s n^{\prime}}{4\tau_E^2K_g^2}\frac{g\tau_E}{\widetilde{D}_T}\left[1+\frac{1+2i\Lambda}{1+\widetilde{\Sigma}\tau_s/2\tau_E} \right](K_{11}K_{12}+K_{21}K_{22})\nonumber\\
&&\times\biggl\{\widetilde{\Sigma}(1+2i\Lambda)\frac{\delta F(1)}{n}+(\widetilde{\Sigma}-i)Y(1) \biggr\},
\label{SS1}
\end{eqnarray}
where $\widetilde{\Sigma}=\Lambda-i(1+\Omega\tau_E)$ and $\Lambda=D_nK_g/\mu_nE_0$. Again, the denominator $\widetilde{D}_T$ in Eq.~(\ref{SS1}) is used for the identification of electric-field-induced eigenmodes of the spin system. For the specific set-up treated in this section, the general expression given in Eq.~(\ref{det}) reduces to the dimensionless form
\begin{eqnarray}
\widetilde{D}_T=\frac{1}{\widetilde{\Sigma}+2\tau_E/\tau_s}
&&\biggl\{\widetilde{\Sigma}\left[(\widetilde{\Sigma}+2\tau_E/\tau_s)^2+\frac{K_{11}^2+K_{21}^2}{K_g^2}(1+2i\Lambda)^2 \right]
\nonumber\\
&&+(g\tau_E)^2\left[\widetilde{\Sigma}+\frac{2\tau_E}{\tau_s}+\frac{(1+2i\Lambda)^2}{\Lambda} \right]\biggr\}.
\label{detneu}
\end{eqnarray}
The closed solution in Eqs.~(\ref{Fx0}) to (\ref{detneu}) for the homogeneous spin polarization, which is due to a moving optical grating, has a resonant character when eigenmodes of the spin subsystem are excited. The dispersion relations of these modes are obtained from the cubic equation $\widetilde{D}_T=0$ with respect to $\Omega$. Depending on the relative strength of the imaginary part in the equation $\Omega=\Omega(\kappa)$, a more or less pronounced resonance occurs in the induced spin polarization. Due to the spin-rotation symmetry of the model, the special Rashba-Dresselhaus system with $\alpha=\beta$ provides an attractive example. The situation becomes even more interesting when the spin-injection direction is rotated. A rotation by $\pi/4$ leads to the set of SOI parameters $\alpha_{11}=\alpha_{12}=\alpha_{22}=0$ and $\alpha_{21}=-2\alpha$. In this special case ($g=0$), we obtain for the $p=1$ Fourier component of the in-plane spin polarization the result
\begin{equation}
\delta F_y(1)=\frac{i\hbar n^{\prime}}{2\tau_E}\frac{K_{21}}{K_g}\frac{\Lambda\left[1+(K_{21}/K_g)^2\right]-i(1+\Omega\tau_E)}
{\tau_E^2(\Omega+\Omega_{s1})(\Omega+\Omega_{s2})}Y(1),
\end{equation}
in which two field-induced spin eigenmodes appear, the dispersion relations of which are expressed by
\begin{equation}
\Omega_{s1,2}=\mu_n E_0(K_g\mp K_{21})+iD_n(K_g\mp K_{21})^2.
\end{equation}
\begin{figure}
\centerline{\epsfig{file=figure3.eps,width=7.0cm}}
\caption{Induced in-plane spin polarization as a function of $z=K_gx-\Omega t$ for $\alpha=\beta=0.2\times 10^{-9}$~eVcm, $g_0=g_m=10^{19}$~cm$^{-3}$s$^{-1}$, $\mu_n=0.5$~cm$^2$/Vs, $\mu_p=0.2$~cm$^2$/Vs, $T=77$~K, and a recombination time $\tau_r=10^{-6}$~s. The curves $a$, $b$, and $c$ are calculated with $E_{0max}=2.67$~kV/cm, $E_{0max}+100$~V/cm, and $E_{0max}-100$~V/cm, respectively. For the dashed line, we used $E_0=1$~kV/cm.}
\label{Figure3}
\end{figure}
Again, the damping of one mode completely disappears under the condition $K_g=-K_{21}=4m\alpha/\hbar^2$. This peculiarity gives rise to a pronounced resonance in the field dependence of the spin dynamics at $\mu_n E_{0max}=\Omega/(K_g+K_{21})$. Figure~\ref{Figure3} illustrates this effect. We calculate the in-plane spin modulation $\delta F_y(K_gx-\Omega t)=2{\rm Re}\,\delta F_y(1)\exp(iz)$. The smooth dashed line displays the response of the spin polarization to the optically generated moving charge-density pattern for $E_0=1$~kV/cm. The unpretentious signal is considerably enhanced under the resonance condition, when $E_0=E_{0max}=2.67$~kV/cm (curve a in Fig.~\ref{Figure3}). A slight change of the field strength [$E_0=E_{0max}+100$~V/cm (curve b), $E_0=E_{0max}-100$~V/cm (curve c)] drastically changes the phase, amplitude, and frequency of the spin wave. This resonant influence of an electric field on the excited spin waves is a pronounced effect that is expected to show up in experiments. By applying a magnetic field, the resonant in-plane spin polarization can be rotated to generate an out-of-plane magnetization.
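The role of the matching condition $K_g=-K_{21}$ is easily checked numerically from the dispersion relations $\Omega_{s1,2}$; all parameter values below are illustrative:

```python
import numpy as np

def modes(mu_n, E0, Dn, Kg, K21):
    """Spin eigenmodes Omega_{s1,2} = mu_n E0 (Kg -+ K21) + i Dn (Kg -+ K21)^2;
    the imaginary part is the damping."""
    q1, q2 = Kg - K21, Kg + K21
    return (mu_n * E0 * q1 + 1j * Dn * q1**2,
            mu_n * E0 * q2 + 1j * Dn * q2**2)

mu_n, E0, Dn, Kg = 0.5, 2.67e3, 24.0, 1e4
# Matching condition K_g = -K_21: the damping of one mode vanishes.
w1, w2 = modes(mu_n, E0, Dn, Kg, K21=-Kg)
assert w2.imag == 0.0
# Away from the matching condition both modes are damped.
w1, w2 = modes(mu_n, E0, Dn, Kg, K21=-1.1 * Kg)
assert w1.imag > 0 and w2.imag > 0
```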
\section{Summary}
The generation and manipulation of a spin polarization in nonmagnetic semiconductors by an electric field is a subject that has recently received considerable attention. All information needed for the description of these field-induced spin effects is contained in the spin-density matrix, the equation of motion of which is governed by the model Hamiltonian. The SOI is the main ingredient in this approach. Although kinetic equations for the four-component spin-density matrix are straightforwardly derived, at least two subtle features have to be taken into account: (i) for a consistent treatment of scattering, its dependence on the SOI must be considered, and (ii) to reproduce the well-known field-induced spin accumulation, third-order spin corrections have to be retained in the kinetic equations. In order to study macroscopic spin effects, it is expedient to eliminate the superfluous information still contained in the spin-density matrix by deriving spin-charge coupled drift-diffusion equations. Under the condition of weak SOI, we followed this program and derived Eqs.~(\ref{drift1}) and (\ref{drift2}) in an exact manner. These equations, which are valid for the general class of linear SOI, apply to various electric-field-induced spin effects that depend not only on the orientation of the crystallographic axes, but also on the spin-injection direction and the alignment of the in-plane electric field. An exact solution of the basic equations for the magnetization allows the identification of field-dependent spin excitations. Among these spin eigenmodes there are long-lived spin waves that can be excited by a spin and/or charge grating providing the necessary wave vector. The applicability of our general approach was illustrated by a few examples. The treatment of the spin-mediated conductivity of charge carriers reveals the possibility that a component of an ac electric field is amplified by spin injection. 
A similar effect led to the recent proposal of a spin-injection maser device.\cite{PRL_116601} In a second application, it was demonstrated how a regular lattice of an out-of-plane spin polarization excites long-lived field-dependent spin waves. The calculation refers to a [001] semiconductor quantum well with balanced Rashba and Dresselhaus SOI, for which a persistent spin helix has been identified.\cite{PRL_236601} Our final example establishes a relationship to the rich physics of space-charge waves. By considering a typical set-up for the optical generation of a moving charge pattern, the associated dynamics of the spin degrees of freedom was treated. It was shown that the charge modulation can be used to excite intrinsic field-dependent spin waves. This example demonstrates that the powerful methods developed in the field of space-charge waves can be transferred to the study of spin excitations.
\section*{Acknowledgements}
Partial financial support by the Deutsche Forschungsgemeinschaft
is gratefully acknowledged.
\section{Introduction}
This work continues our research on Lax-integrable (i.e., admitting Lax pairs with a non-vanishing
spectral parameter) generalized Heisenberg ferromagnet type equations in 1+1 dimensions related to Camassa-Holm type equations (see, e.g., \cite{1907.10910}-\cite{1910.13281} and the references therein). In the theory of integrable systems (soliton theory), the so-called gauge and geometrical equivalence between two integrable equations plays an important role. A well-known classical example of such equivalences is the (gauge and geometrical) equivalence between the Heisenberg ferromagnet equation (HFE)
\begin{eqnarray}
iA_{t}+\frac{1}{2}[A,A_{xx}]=0 \label{1}
\end{eqnarray}
and the nonlinear Schr\"odinger equation (NLSE)
\begin{eqnarray}
iq_{t}+q_{xx}+2|q|^{2}q=0, \label{2}
\end{eqnarray}
where $q$ is a complex function and
\begin{eqnarray}
A=\left(\begin{array}{cc}A_{3}&A^{-}\\A^{+}&-A_{3}\end{array}\right), \quad A^{2}=I, \quad {\bf A}=(A_{1},A_{2},A_{3}), \quad {\bf A}^{2}=1. \label{3}
\end{eqnarray}
In this paper, we study a generalized Heisenberg ferromagnet type equation (GHFE), namely, the so-called M-CXII equation (M-CXIIE), and its relation to the modified Camassa-Holm equation (mCHE)
\begin{eqnarray}
m_{t}+(m(u^2-u^2_{x}))_{x}+\kappa u_{x}&=&0,\\
m-u+u_{xx}&=&0,
\end{eqnarray}
where $u=u(x,t)$ is a real-valued function and $\kappa$ is a constant. The mCHE (4)-(5) was proposed by Fuchssteiner \cite{fpd1996} and by Olver and Rosenau \cite{orpre1996}. It was obtained from the two-dimensional Euler equations, where the variables $u(x,t)$ and $m(x,t)$ represent, respectively, the velocity of the fluid and its potential density \cite{qjmp2006}. Some properties of the mCHE and other related equations were studied in \cite{gloqcmp2013} - \cite{0805.4310}.
This paper is organized as follows. In Section 2, we give the M-CXIIE, its different forms, its Lax representation (LR), and a reduction. The geometric formulation of this equation in terms of space and plane curves is presented in Section 3. Using this geometric formalism, the geometrical equivalence between the M-CXIIE and the mCHE is established. Section 4 is devoted to the mCHE. The gauge equivalence between the M-CXIIE and the mCHE and the relation between their solutions are established in Section 5. The 1-soliton solution of the M-CXIIE is obtained in Section 6. We discuss and conclude our results in Section 7.
\section{The generalized Heisenberg ferromagnet type equation}
\subsection{Equation}
There exist several integrable and nonintegrable GHFEs (see, e.g., \cite{R13}-\cite{zh1}). In this paper, we consider one such GHFE, namely, the so-called M-CXIIE. This equation has the form
\begin{eqnarray}
[A,A_{xt}]+(\phi[A,A_{x}])_{x}+\frac{\kappa u_{x}}{m}[A,A_{x}]+\frac{4\alpha_{0}}{\beta^{2}}A_{x}=0, \label{6}
\end{eqnarray}
where $\phi=u^{2}-u^{2}_{x}$ and
\begin{eqnarray}
A=\left(\begin{array}{cc}A_{3}&A^{-}\\A^{+}&-A_{3}\end{array}\right), \quad A^{2}=I, \quad {\bf A}=(A_{1},A_{2},A_{3}), \quad {\bf A}^{2}=1. \label{7}
\end{eqnarray}
We can also write the M-CXIIE in the following form
\begin{eqnarray}
A_{xt}+\phi A_{xx}+v_{1}A+v_{2}A_{x}+\frac{\alpha_{0}}{\beta^{2}}[A,A_{x}]=0, \label{8}
\end{eqnarray}
where
\begin{eqnarray}
v_{1}=0.5\{A_{t},A_{x}\}+\phi A_{x}^{2}=-2umI, \quad v_{2}=\phi_{x}+\frac{\kappa u_{x}}{m}. \label{9}
\end{eqnarray}
Finally let us present the vector form of the M-CXIIE. It reads as
\begin{eqnarray}
{\bf A}_{xt}+\phi {\bf A}_{xx}+v_{1}{\bf A}+v_{2}{\bf A}_{x}+\frac{\alpha_{0}}{\beta^{2}}{\bf A}\wedge {\bf A}_{x}=0. \label{10}
\end{eqnarray}
\subsection{Lax representation}
The M-CXIIE (6) is integrable. Its LR is given by
\begin{eqnarray}
\Phi_x&=&U_{1}\Phi,\\
\Phi_t&=&V_{1}\Phi,
\end{eqnarray}
where
\begin{eqnarray}
U_{1}&=&\frac{\alpha_{0}-\alpha}{2}A+\frac{\lambda-\beta}{4\beta}[A,A_{x}], \\
V_{1}&=&(\omega-\omega_{0})A+\{\frac{(\lambda-\beta)u}{2m\beta^{2}\lambda}-\frac{(\lambda-\beta)\phi}{4\beta}\}[A,A_{x}]+\frac{u_{x}(\alpha\beta-\alpha_{0}\lambda)}{\beta^{2}\lambda m}
\end{eqnarray}
or
\begin{eqnarray}
U_{1}&=&\frac{\alpha_{0}-\alpha}{2}A+\frac{\lambda-\beta}{4\beta}[A,A_{x}], \\
V_{1}&=&(\omega-\omega_{0})A+\frac{(\lambda-\beta)}{4\beta}\{\frac{2u}{m\beta\lambda}-\phi\}[A,A_{x}]+\frac{u_{x}}{\beta m}\{\frac{\alpha}{\lambda}-\frac{\alpha_{0}}{\beta}\}A_{x}.
\end{eqnarray}
Here $\beta=const, \quad \alpha_{0}=\alpha|_{\lambda=\beta}, \quad \alpha=\sqrt{1-\frac{1}{2}\kappa \lambda^2}, \quad \omega_{0}=\omega|_{\lambda=\beta}$,
\begin{eqnarray}
\omega=\alpha\lambda^{-2}+0.5\alpha\phi, \quad u=\pm\beta^{-1}(1-\partial_{x}^{2})^{-1}\sqrt{0.5\,{\rm tr}(A_{x})^{2}}
\end{eqnarray}
and
\begin{eqnarray}
\phi=u^{2}-u^{2}_{x}, \quad m=\pm\beta^{-1}\sqrt{0.5\,{\rm tr}(A_{x})^{2}}=u-u_{xx}=(1-\partial_{x}^{2})u.
\end{eqnarray}
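The relation $m=(1-\partial_{x}^{2})u$ can be checked by finite differences for any smooth test profile; the sketch below uses $u=\cos x$ purely for illustration (the grid and tolerance are our own choices):

```python
import numpy as np

# Finite-difference check of m = (1 - d^2/dx^2) u for the smooth test
# profile u = cos(x), for which analytically m = 2*cos(x).
x = np.linspace(0.0, 2 * np.pi, 2001)
dx = x[1] - x[0]
u = np.cos(x)
uxx = np.gradient(np.gradient(u, dx), dx)   # crude second derivative
m = u - uxx
# Compare away from the boundary points, where np.gradient is one-sided.
assert np.allclose(m[10:-10], 2 * np.cos(x[10:-10]), atol=1e-3)
```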
\subsection{Reductions}
The M-CXIIE admits some reductions. For example, in the case $\kappa=0$, the M-CXIIE reduces to the so-called M-CXIE, which has the form \cite{16}
\begin{eqnarray}
[A,A_{xt}]+(\phi[A,A_{x}])_{x}+\frac{4}{\beta^{2}}A_{x}=0. \label{19}
\end{eqnarray}
This integrable GHFE was studied in \cite{16}. Its LR is given by
\begin{eqnarray}
\Phi_x&=&U_{2}\Phi,\\
\Phi_t&=&V_{2}\Phi,
\end{eqnarray}
where $(z=\phi+\lambda^{2}, z_{0}=\phi+\beta^{-2})$ and
\begin{eqnarray}
U_{2}&=&\frac{\lambda-\beta}{4\beta}[A,A_{x}], \\
V_{2}&=&(z-z_{0})A+\{\frac{(\lambda-\beta)u}{2m\beta^{2}\lambda}-\frac{(\lambda-\beta)\phi}{4\beta}\}[A,A_{x}]+\frac{u_{x}(\beta-\lambda)}{\beta^{2} \lambda m}A_{x}.
\end{eqnarray}
\section{Integrable motion of curves induced by the M-CXIIE}
The aim of this section is to present the geometric formulation of the M-CXIIE in terms of curves and to find its geometrical equivalent counterpart.
\subsection{Integrable motion of space curves}
We start from the differential geometry of space curves. In this subsection, we consider the integrable motion of space curves induced by the M-CXIIE. As usual, let us consider a smooth space curve ${\bf \gamma} (x,t): [0,X] \times [0, T] \rightarrow R^{3}$ in $R^{3}$, where $x$ is the arc length of the curve at each time $t$. In the language of differential geometry, such a curve is described by the Frenet-Serret equation (FSE). The FSE and its temporal counterpart read
\begin{eqnarray}
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{x} = C
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right),\quad
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right)_{t} = G
\left ( \begin{array}{ccc}
{\bf e}_{1} \\
{\bf e}_{2} \\
{\bf e}_{3}
\end{array} \right), \label{24}
\end{eqnarray}
where ${\bf e}_{j}$ are the unit tangent vector $(j=1)$, the principal normal vector $(j=2)$, and the binormal vector $(j=3)$, which are given by ${\bf e}_{1}={\bf \gamma}_{x}, \quad {\bf e}_{2}=\frac{{\bf \gamma}_{xx}}{|{\bf \gamma}_{xx}|}, \quad {\bf e}_{3}={\bf e}_{1}\wedge {\bf e}_{2},$
respectively.
Here
\begin{eqnarray}
C&=&
\left ( \begin{array}{ccc}
0 & \kappa_{1} & \kappa_{2} \\
-\kappa_{1} & 0 & \tau \\
-\kappa_{2} & -\tau & 0
\end{array} \right)=-\tau L_{1}+\kappa_{2}L_{2}-\kappa_{1}L_{3} \in so(3),\\
G&=&
\left ( \begin{array}{ccc}
0 & \omega_{3} & \omega_{2} \\
-\omega_{3} & 0 & \omega_{1} \\
-\omega_{2} & -\omega_{1} & 0
\end{array} \right)=-\omega_{1}L_{1}+\omega_{2}L_{2}-\omega_{3}L_{3}\in so(3),\label{3.26}
\end{eqnarray}
where $\tau$, $\kappa_{1}$ and $\kappa_{2}$ are the ``torsion'', ``geodesic curvature'' and ``normal curvature'' of the curve, respectively, and the $\omega_{j}$ are some functions to be determined. Note that the $L_{j}$ are the basis elements of the $so(3)$ algebra and have the forms
\begin{eqnarray}
L_{1}=
\left ( \begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & -1 \\
0 & 1 & 0
\end{array} \right), \quad L_{2}=
\left ( \begin{array}{ccc}
0 & 0 & 1 \\
0 & 0 & 0 \\
-1 & 0 & 0
\end{array} \right), \quad L_{3}=
\left ( \begin{array}{ccc}
0 & -1 & 0\\
1 & 0 & 0 \\
0 & 0 & 0
\end{array} \right).
\end{eqnarray}
They satisfy the following commutation relations
\begin{eqnarray}
[L_{1}, L_{2}]=L_{3}, \quad [L_{2}, L_{3}]=L_{1}, \quad [L_{3}, L_{1}]= L_{2}.
\end{eqnarray}
In the following, we also need the basis elements of the $su(2)$ algebra. They have the forms
\begin{eqnarray}
e_{1}=
\frac{1}{2i}\left ( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right), \quad e_{2}=
\frac{1}{2i}\left ( \begin{array}{cc}
0 & -i \\
i & 0
\end{array} \right), \quad e_{3}=
\frac{1}{2i}\left ( \begin{array}{cc}
1 & 0 \\
0 & -1
\end{array} \right),
\end{eqnarray}
that is, $e_{j}=\frac{1}{2i}\sigma_{j}$, where the Pauli matrices have the form
\begin{eqnarray}
\sigma_{1}=
\left ( \begin{array}{cc}
0 & 1 \\
1 & 0
\end{array} \right), \quad \sigma_{2}=
\left ( \begin{array}{cc}
0 & -i \\
i & 0
\end{array} \right), \quad \sigma_{3}=
\left ( \begin{array}{cc}
1 & 0 \\
0& -1
\end{array} \right).
\end{eqnarray}
These elements satisfy the following commutation relations
\begin{eqnarray}
[e_{1}, e_{2}]=e_{3}, \quad [e_{2}, e_{3}]=e_{1}, \quad [e_{3}, e_{1}]= e_{2}.
\end{eqnarray}
Note that the Pauli matrices obey the following commutation relations
\begin{eqnarray}
[\sigma_{1}, \sigma_{2}]=2i\sigma_{3}, \quad [\sigma_{2}, \sigma_{3}]=2i\sigma_{1}, \quad [\sigma_{3}, \sigma_{1}]= 2i\sigma_{2}
\end{eqnarray}
or
\begin{eqnarray}
[\sigma_{i}, \sigma_{j}]=2i\epsilon_{ijk}\sigma_{k}.
\end{eqnarray}
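As an elementary numerical check (illustrative only, not part of the derivation), all three sets of commutation relations above can be verified directly:

```python
import numpy as np

# Basis of so(3)
L1 = np.array([[0, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=complex)
L2 = np.array([[0, 0, 1], [0, 0, 0], [-1, 0, 0]], dtype=complex)
L3 = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)

# Pauli matrices and the su(2) basis e_j = sigma_j / (2i)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
e1, e2, e3 = s1 / 2j, s2 / 2j, s3 / 2j

def comm(a, b):
    """Matrix commutator [a, b] = ab - ba."""
    return a @ b - b @ a

# [L1,L2]=L3, [e1,e2]=e3, [s1,s2]=2i*s3 (and cyclic permutations)
for a, b, c in [(L1, L2, L3), (L2, L3, L1), (L3, L1, L2)]:
    assert np.allclose(comm(a, b), c)
for a, b, c in [(e1, e2, e3), (e2, e3, e1), (e3, e1, e2)]:
    assert np.allclose(comm(a, b), c)
for a, b, c in [(s1, s2, s3), (s2, s3, s1), (s3, s1, s2)]:
    assert np.allclose(comm(a, b), 2j * c)
print("all commutation relations verified")
```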
The well-known isomorphism between the Lie algebras $su(2)$ and $so(3)$ is realized by the correspondence between their basis elements, $L_{j}\leftrightarrow e_{j}$. Using this isomorphism, we construct the following two matrices
\begin{eqnarray}
U&=&-\tau e_{1}+\kappa_{2}e_{2}-\kappa_{1}e_{3}=-\frac{1}{2i}\left ( \begin{array}{cc}
\kappa_{1} & \tau+i\kappa_{2} \\
\tau-i\kappa_{2} &-\kappa_{1}
\end{array} \right)=\left ( \begin{array}{cc}
u_{11} & u_{12} \\
u_{21} &-u_{11}
\end{array} \right),\\
V&=&-\omega_{1}e_{1}+\omega_{2}e_{2}-\omega_{3}e_{3}=-\frac{1}{2i}\left ( \begin{array}{cc}
\omega_{3} & \omega_{1}+i\omega_{2} \\
\omega_{1}-i\omega_{2} &-\omega_{3}
\end{array} \right)=\left ( \begin{array}{cc}
v_{11} & v_{12} \\
v_{21} &-v_{11}
\end{array} \right).
\end{eqnarray}
Hence we obtain
\begin{eqnarray}
\kappa_{1}&=&-2iu_{11}, \quad \kappa_{2}=-(u_{12}-u_{21}), \quad \tau=-i(u_{12}+u_{21}), \\
\omega_{1}&=&-i(v_{12}+v_{21}), \quad \omega_{2}=-(v_{12}-v_{21}), \quad \omega_{3}=-2iv_{11}.
\end{eqnarray}
The compatibility condition of the equations (\ref{24}) reads as
\begin{eqnarray}
C_t - G_x + [C, G] =U_{t}-V_{x}+[U,V]= 0\label{38}
\end{eqnarray}
or in elements
\begin{eqnarray}
\kappa_{1t}- \omega_{3x} -\kappa_{2}\omega_{1}+ \tau \omega_2&=&0, \label{39} \\
\kappa_{2t}- \omega_{2x} +\kappa_{1}\omega_{1}- \tau \omega_3&=&0, \label{40} \\
\tau_{t} - \omega_{1x} - \kappa_{1}\omega_2+\kappa_{2}\omega_{3}&=&0. \label{41} \end{eqnarray}
We now assume that
\begin{eqnarray}
\kappa_{1}=i\alpha, \quad \kappa_{2}=-\lambda m, \quad \tau=0 \label{42}
\end{eqnarray}
and
\begin{eqnarray}
\omega_{1} & = &-\frac{i\alpha u_{x}}{\lambda},\label{58}\\
\omega_{2}&=& \frac{u}{\lambda}+\lambda m(u^{2}-u_{x}^{2}), \label{59} \\
\omega_{3} & = &-2i\alpha[\lambda^{-2}+0.5(u^{2}-u^{2}_{x})]. \label{60}
\end{eqnarray}
Then it is not difficult to verify that Eqs.(\ref{39})-(\ref{41}) give us the following equations for $m, u$:
\begin{eqnarray}
m_{t}+(m(u^2-u^2_{x}))_{x}+\kappa u_{x}&=&0,\\
m-u+u_{xx}&=&0.
\end{eqnarray}
It is nothing but the mCHE.
So, we have proved that the Lakshmanan (geometrical) equivalent of the M-CXIIE is the mCHE. Finally, we note that, as follows from (\ref{42}), the corresponding space curve has zero torsion but constant geodesic curvature.
\subsection{Integrable motions of plane curves}
Plane curves are more natural for the mCHE, and likewise for its equivalent counterpart, the M-CXIIE, than space curves. For that reason, in this subsection we consider the integrable motion of plane curves induced by the M-CXIIE. In this case, Eqs.(\ref{24}) take the following form
\begin{eqnarray}
\left ( \begin{array}{cc}
{\bf e}_{1} \\
{\bf e}_{2}
\end{array} \right)_{x} = C
\left ( \begin{array}{cc}
{\bf e}_{1} \\
{\bf e}_{2}
\end{array} \right),\quad
\left ( \begin{array}{cc}
{\bf e}_{1} \\
{\bf e}_{2}
\end{array} \right)_{t} = G
\left ( \begin{array}{cc}
{\bf e}_{1} \\
{\bf e}_{2}
\end{array} \right), \label{3.25}
\end{eqnarray}
where
\begin{eqnarray}
C=
\left ( \begin{array}{cc}
0 & \kappa_{1} \\
-\kappa_{1} & 0 \end{array} \right),\quad G=
\left ( \begin{array}{cc}
0 & \omega_{3} \\
-\omega_{3} & 0 \end{array} \right).
\end{eqnarray}
At the same time, Eqs.(\ref{39})-(\ref{41}) become
\begin{eqnarray}
\kappa_{1t}- \omega_{3x} =0. \label{55} \end{eqnarray}
We now assume that
\begin{eqnarray}
\kappa_{1}=r, \quad \omega_{3} = -r(u^{2}-u^{2}_{x}),
\end{eqnarray}
where
\begin{eqnarray}
r=\sqrt{(u-u_{xx})^{2}+0.5k}.
\end{eqnarray}
Finally, Eq.(\ref{55}) gives
\begin{eqnarray}
r_{t}+[r(u^{2}-u^{2}_{x})]_{x}=0. \label{53}
\end{eqnarray}
It is nothing but the mCHE in the conservation law form
\cite{1310.4011}-\cite{1506.08639}.
So, again we have proved that the equivalent counterpart of the M-CXIIE (\ref{6}) is the mCHE (\ref{53}). Note that in this case the curve has zero torsion and zero normal curvature.
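The equivalence used here can be confirmed symbolically. The following sketch (illustrative, using sympy, with $\kappa$ written as \texttt{k}) substitutes $m=u-u_{xx}$ and the mCHE evolution $m_{t}=-(m(u^{2}-u_{x}^{2}))_{x}-\kappa u_{x}$ into $r_{t}+[r(u^{2}-u_{x}^{2})]_{x}$, where $r=\sqrt{(u-u_{xx})^{2}+0.5\kappa}$, and confirms that the result vanishes identically:

```python
import sympy as sp

x, t = sp.symbols("x t", real=True)
k = sp.Symbol("k", positive=True)
u = sp.Function("u")(x, t)

m = u - sp.diff(u, x, 2)        # m = u - u_xx
phi = u**2 - sp.diff(u, x)**2   # phi = u^2 - u_x^2
r = sp.sqrt(m**2 + k / 2)       # r = sqrt((u - u_xx)^2 + 0.5 k)

# mCHE: m_t = -(m*phi)_x - k*u_x; by the chain rule, r_t = (m/r) m_t
m_t = -sp.diff(m * phi, x) - k * sp.diff(u, x)
r_t = (m / r) * m_t

# conservation law: r_t + [r*(u^2 - u_x^2)]_x must vanish identically;
# multiplying by r clears the square root and leaves a polynomial identity
residual = sp.expand(r * (r_t + sp.diff(r * phi, x)))
print(residual)   # -> 0
assert residual == 0
```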
\section{Modified Camassa-Holm equation}
In the previous section, we proved that the geometrical equivalent of the M-CXIIE is the well-known mCHE. In this section, we collect some basic facts about the mCHE. The mCHE has the form
\begin{eqnarray}
m_{t}+(m(u^2-u^2_{x}))_{x}+\kappa u_{x}&=&0,\\
m-u+u_{xx}&=&0.
\end{eqnarray}
The mCHE can be rewritten in the conservation law form as \cite{1310.4011}-\cite{1506.08639}
\begin{eqnarray}
r_{t}+[r(u^2-u^2_{x})]_{x}&=&0,\\
r-\sqrt{(u-u_{xx})^{2}+\sigma^{2}}&=&0,
\end{eqnarray}
where $2\sigma^{2}=\kappa$. As is well known, the mCHE is an integrable nonlinear partial differential equation. Its LR reads as \cite{1911.07263}-\cite{1911.12554}
\begin{eqnarray}
\Psi_x&=&U_{3}\Psi,\\
\Psi_t&=&V_{3}\Psi,
\end{eqnarray}
where
\begin{eqnarray}
U_{3}&=&\frac{1}{2}\left(\begin{array}{cc}-\alpha&\lambda m(x,t)\\-\lambda m(x,t)&\alpha\end{array}\right),\\
V_{3}&=&\left(\begin{array}{cc}\frac{\alpha}{\lambda^2}+\frac{\alpha}{2}(u^2-u^2_{x})&-\frac{u-\alpha u_{x}}{\lambda}-\frac{1}{2}\lambda(u^2-u^2_{x})m\\ \frac{u+\alpha u_x}{\lambda}+\frac{1}{2}\lambda(u^2-u^2_{x})m&-\frac{\alpha}{\lambda^2}-\frac{\alpha}{2}(u^2-u^2_{x})\end{array}\right)
\end{eqnarray}
with
\begin{eqnarray}
\alpha=\sqrt{1-\frac{1}{2}\kappa \lambda^2}, \quad \phi=u^{2}-u_{x}^{2}.
\end{eqnarray}
Note that
\begin{eqnarray}
V_{3}=-\phi U_{3}+\left(\begin{array}{cc}\frac{\alpha}{\lambda^2}&-\frac{u-\alpha u_{x}}{\lambda}\\ \frac{u+\alpha u_x}{\lambda}&-\frac{\alpha}{\lambda^2}\end{array}\right).
\end{eqnarray}
The compatibility condition
\begin{eqnarray}
U_{3t}-V_{3x}+[U_{3},V_{3}]=0
\end{eqnarray}
gives the mCHE (54)-(55). In fact, we have
\begin{eqnarray}
\lambda&:& m_{t}+(m\phi)_{x}+\kappa u_{x}=0,\\
\frac{\alpha}{\lambda}&:& m=u-u_{xx}.
\end{eqnarray}
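This zero-curvature computation can be reproduced symbolically. The sketch below (illustrative, using sympy, with $\kappa$ written as \texttt{k}) forms $U_{3t}-V_{3x}+[U_{3},V_{3}]$, imposes the two equations of the mCHE, and checks that the resulting matrix vanishes:

```python
import sympy as sp

x, t, lam = sp.symbols("x t lambda", real=True)
k = sp.Symbol("k", positive=True)
u = sp.Function("u")(x, t)
m = sp.Function("m")(x, t)   # treated as independent until the mCHE is imposed

ux = sp.diff(u, x)
phi = u**2 - ux**2                      # phi = u^2 - u_x^2
alpha = sp.sqrt(1 - k * lam**2 / 2)

U3 = sp.Matrix([[-alpha, lam * m], [-lam * m, alpha]]) / 2
V3 = sp.Matrix([
    [alpha / lam**2 + alpha * phi / 2,
     -(u - alpha * ux) / lam - lam * phi * m / 2],
    [(u + alpha * ux) / lam + lam * phi * m / 2,
     -alpha / lam**2 - alpha * phi / 2],
])

# zero-curvature condition U_{3t} - V_{3x} + [U_3, V_3]
Z = sp.diff(U3, t) - sp.diff(V3, x) + U3 * V3 - V3 * U3

# impose the mCHE: m_t = -(m*phi)_x - k*u_x and m = u - u_xx
Z = Z.subs(sp.Derivative(m, t), -sp.diff(m * phi, x) - k * ux)
Z = Z.subs(m, u - sp.diff(u, x, 2)).doit()

assert sp.simplify(Z) == sp.zeros(2, 2)
print("zero-curvature condition holds modulo the mCHE")
```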
Note that the mCHE admits at least one reduction. Let $\kappa=0$, then the mCHE takes the form
\begin{eqnarray}
m_{t}+(m(u^2-u^2_{x}))_{x}&=&0,\\
m-u+u_{xx}&=&0.
\end{eqnarray}
Its LR is given by \cite{1911.07263}-\cite{1911.12554}
\begin{eqnarray}
Z_x&=&U_{4}Z,\\
Z_t&=&V_{4}Z,
\end{eqnarray}
where
\begin{eqnarray}
U_{4}&=&\frac{1}{2}\left(\begin{array}{cc}-1&\lambda m(x,t)\\-\lambda m(x,t)&1\end{array}\right),\\
V_{4}&=&\left(\begin{array}{cc}\frac{1}{\lambda^2}+\frac{1}{2}(u^2-u^2_{x})&-\frac{u-u_{x}}{\lambda}-\frac{1}{2}\lambda(u^2-u^2_{x})m\\ \frac{u+u_x}{\lambda}+\frac{1}{2}\lambda(u^2-u^2_{x})m&-\frac{1}{\lambda^2}-\frac{1}{2}(u^2-u^2_{x})\end{array}\right).
\end{eqnarray}
\section{Gauge equivalence between the mCHE and the M-CXIIE}
The M-CXIIE (6) is gauge equivalent to the mCHE (54)-(55). In fact, let us consider
the gauge transformation
\begin{eqnarray}
\Phi=g^{-1}\Psi,
\end{eqnarray}
where $g=\Psi|_{\lambda=\beta}$. Then the relation between the Lax pairs $U_{1}-V_{1}$ and $U_{3}-V_{3}$ is given by
\begin{eqnarray}
U_{1}=g^{-1}U_{3}g-g^{-1}g_{x}, \quad V_{1}=g^{-1}V_{3}g-g^{-1}g_{t}.
\end{eqnarray}
In the case $\kappa=0$, the gauge equivalence between the M-CXIE and the mCHE was established in \cite{16}. The gauge equivalence between the mCHE and the M-CXIIE induces some important relations between solutions of these equations. Here we present some of them.
For example, it can be shown that the solutions $A$ and $m$ are related by the formula
\begin{eqnarray}
tr(A_{x})^{2}=2{\bf A}_{x}^{2}=2(A_{1x}^{2}+A_{2x}^{2}+A_{3x}^{2})=2\beta^{2}m^{2}\end{eqnarray}
or
\begin{eqnarray}
{\bf A}_{x}^{2}=A_{1x}^{2}+A_{2x}^{2}+A_{3x}^{2}=\beta^{2}m^{2}.\end{eqnarray}
Consider the angle parametrization of the spin vector
\begin{eqnarray}
A^{+}=\sin\theta e^{i\varphi}, \quad A_{3}=\cos\theta.
\end{eqnarray}
Then from (25) we obtain
\begin{eqnarray}
{\bf A}^{2}_{x}=\theta_{x}^{2}+\varphi_{x}^{2}\sin^{2}\theta=\beta^{2}m^{2}.\end{eqnarray}
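This identity follows from $|{\bf A}|^{2}=1$; a short symbolic check (illustrative only) is:

```python
import sympy as sp

x = sp.Symbol("x")
theta = sp.Function("theta")(x)
varphi = sp.Function("varphi")(x)

# A^+ = sin(theta) e^{i varphi}, A_3 = cos(theta), i.e.
# A = (sin(theta) cos(varphi), sin(theta) sin(varphi), cos(theta))
A = sp.Matrix([sp.sin(theta) * sp.cos(varphi),
               sp.sin(theta) * sp.sin(varphi),
               sp.cos(theta)])
Ax = A.diff(x)

# |A_x|^2 should equal theta_x^2 + varphi_x^2 * sin(theta)^2
residual = sp.simplify(
    Ax.dot(Ax) - (theta.diff(x)**2 + sp.sin(theta)**2 * varphi.diff(x)**2))
print(residual)  # -> 0
assert residual == 0
```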
We can consider the following two particular cases: $\theta=const$ and $\varphi=const$. In this paper we consider the case $\varphi=const$ and assume that $\beta\in R$. Then Eq.(76) takes the form
\begin{eqnarray}
{\bf A}^{2}_{x}=\theta_{x}^{2}=\beta^{2}m^{2}
\end{eqnarray}
so that
\begin{eqnarray}
\theta_{x}=\pm \beta m.
\end{eqnarray}
We now return to the mCHE (54)-(55). In terms of $\theta$ it takes the form
\begin{eqnarray}
\theta_{xt}+((u^2-u^2_{x})\theta_{x})_{x}\pm \beta\kappa u_{x}&=&0,\\
m-u+u_{xx}&=&0
\end{eqnarray}
or
\begin{eqnarray}
\theta_{t}+(u^2-u^2_{x})\theta_{x}\pm \beta\kappa u&=&c,\\
\theta_{x}\mp \beta(u-u_{xx})&=&0.
\end{eqnarray}
\section{Soliton solutions of the M-CXIIE}
As an integrable equation, the M-CXIIE possesses all the standard ingredients of integrable systems: a LR, conservation laws, a bi-Hamiltonian structure, soliton solutions and so on. In particular, it admits peakon solutions. Here let us present a 1-soliton solution of the M-CXIIE; to construct it, we use the corresponding solution of the mCHE \cite{0811.2552}. We start from the representation
\begin{eqnarray}
A=g^{-1}\sigma_{3}g=
\left ( \begin{array}{cc}
A_{3} & A^{-} \\
A^{+} & -A_{3}
\end{array} \right),\end{eqnarray}
where
\begin{eqnarray}
g=
\left ( \begin{array}{cc}
g_{1} & -\bar{g}_{2} \\
g_{2} & \bar{g}_{1}
\end{array} \right), \quad g^{-1}=\frac{1}{\Delta}
\left ( \begin{array}{cc}
\bar{g}_{1} & \bar{g}_{2} \\
-g_{2} & g_{1}
\end{array} \right), \quad \Delta=|g_{1}|^{2}+|g_{2}|^{2}.\end{eqnarray}
As a result, we obtain the following formulas for the components of the spin matrix:
\begin{eqnarray}
A^{+}&=&-\frac{2g_{1}g_{2}}{|g_{1}|^{2}+|g_{2}|^{2}}, \quad A_{3}=\frac{|g_{1}|^{2}-|g_{2}|^{2}}{|g_{1}|^{2}+|g_{2}|^{2}}.
\end{eqnarray}
To construct the 1-soliton solution of the M-CXIIE, we consider the following seed solution of the mCHE
\begin{eqnarray}
u=0.
\end{eqnarray}
Then the equations (58)-(59) have the solutions
\begin{eqnarray}
g_{1}=a_{1}e^{-\theta}, \quad g_{2}=a_{2}e^{\theta},
\end{eqnarray}
where $a_{j}$ are complex constants and
\begin{eqnarray}
\theta=\frac{\alpha_{0}}{2}x-\frac{\alpha_{0}}{\beta^{2}}t, \quad a_{j}=|a_{j}|e^{i\delta_{j}}, \quad \delta_{j}=\mbox{const}.
\end{eqnarray}
Thus the 1-soliton solution of the M-CXIIE has the form
\begin{eqnarray}
A^{+}&=&-\frac{e^{i(\delta_{1}+\delta_{2})}}{\cosh(\delta-2\theta)}, \quad A_{3}=\tanh(\delta-2\theta),
\end{eqnarray}
where $\delta=\ln|\frac{a_{1}}{a_{2}}|$.
\section{Conclusion}
In this paper, one of the GHFEs, namely the M-CXIIE, has been investigated. The different forms of this equation and its reduction have been given, and its LR has been presented. We have also given the geometric formulation of this equation and shown that it is geometrically and gauge equivalent to the mCHE.
Finally, we note that it would be interesting to investigate the surface geometry of the M-CXIIE and the mCHE \cite{R13}-\cite{1301.0180}. The M-CXIIE seemingly admits not only soliton but also peakon-type solutions. It will be interesting to study the properties of the M-CXIIE and other integrable GHFE related to Camassa-Holm-type equations in more detail
elsewhere.
\section*{Acknowledgements}
This work was supported by the Ministry of Education and Science of Kazakhstan under
grants 0118РК00935 and 0118РК00693.
\section{Approach}%
\label{Se:Approach}
The \texttt{binnacle} toolset, shown in \Cref{Fi:Overview}, can be used
to ingest large amounts of data from GitHub. This capability is of general
use to anyone looking to analyze GitHub data. In this section, we describe the
three core contributions of our work: phased parsing, rule mining,
and rule enforcement. Each of these contributions is backed by
a corresponding tool in the \texttt{binnacle} toolset: (i) \emph{phased parsing} is
enabled by \texttt{binnacle}'s phased parser (shown in the Learn Rules and
Enforce Rules sections of \Cref{Fi:Overview}); (ii) \emph{rule mining} is enabled by
\texttt{binnacle}'s novel frequent-sub-tree-based rule miner (shown in the Learn Rules section of
\Cref{Fi:Overview}); and (iii) \emph{rule enforcement} is provided by \texttt{binnacle}'s
static rule-enforcement engine (shown in the Enforce Rules section of \Cref{Fi:Overview}).
Each of these three tools and contributions was inspired by one of the three
challenges we identified in the realm of learning from and understating
DevOps artifacts (nested languages, prior work that identifies rule mining as unachievable
\citep{SidhuLackOfReuseTravisSaner2019}, and static rule enforcement). Together,
these contributions combine to create the \texttt{binnacle} toolset: the first
structure-aware automatic rule miner and enforcement engine for Dockerfiles (and
DevOps artifacts, in general).
\subsection{Phased Parsing}%
\label{Se:PhasedParsing}
One challenging aspect of DevOps artifacts in general (and Dockerfiles in
particular) is the prevalence of nested languages. Many DevOps artifacts have a
top-level syntax that is simple and declarative (JSON, Yaml, and XML are popular
choices). This top-level syntax, albeit simple, usually allows for some form of
embedded scripting. Most commonly, these embedded scripts are \texttt{bash}. Further
complicating matters is the fact that \texttt{bash} scripts usually reference common
command-line tools, such as \texttt{apt-get} and \texttt{git}. Some popular
command-line tools, like \texttt{python} and \texttt{php}, may even allow for
further nesting of languages. Other tools, like GNU's \texttt{find}, allow for
more \texttt{bash} to be embedded as an argument to the command. This complex nesting of
different languages creates a challenge: how do we represent DevOps artifacts in
a structured way?
Previous approaches to understanding and analyzing DevOps artifacts have either
ignored the problem of nested languages, or only addressed one level of nesting
(the embedded shell within the top-level format)~\cite{SidhuLackOfReuseTravisSaner2019,GallabaUseAndMisuseTSE2018}.
We address the challenge of
structured representations in a new way: we employ \emph{phased parsing} to
progressively enrich the AST created by an initial top-level parse.
\Cref{Fi:PhasedParsing} gives an example of \emph{phased parsing}---note how, in
\cref{Fi:PhasedPhase1}, we have a shallow representation given to us by a simple
top-level parse of the example Dockerfile. After this first phase, almost all of
the interesting information is wrapped up in leaf nodes that are string literals.
We call such nodes \emph{effectively uninterpretable} (EU) because we have
no way of reasoning about their contents. These literal nodes,
which have further interesting structure, are shown in gray. After the second
phase, shown in \cref{Fi:PhasedPhase2}, we have enriched the structured
representation from Phase I by parsing the embedded \texttt{bash}. This second phase of
parsing further refines the AST constructed for the example, but, somewhat
counterintuitively, this refinement also introduces even more literal nodes with
undiscovered structure. Finally, the third phase of parsing enriches the AST by
parsing the options ``languages'' of popular command-line tools (see
\cref{Fi:PhasedPhase3}). By parsing within these command-line languages, we
create a representation of DevOps artifacts that contains \emph{more structured
information} than competing approaches.
To create our phased parser we leverage the following observations:
\begin{enumerate}
\item There are a small number of commonly used command-line
tools. Supporting the top-50 most frequently used tools allows us to cover
over 80\% of command-line-tool invocations in our corpus.
\item Popular command-line tools have documented options. This
documentation is easily accessible via manual pages or some form of
embedded help.
\end{enumerate}
Because of observation (1), we can focus our attention on the most popular
command-line tools, which makes the problem of phased parsing tractable. Instead
of somehow supporting all possible embedded command-line-tool invocations, we
can, instead, provide support for the top-N commands (where N is determined by
the amount of effort we are willing to expend). To make this process uniform and
simple, we created a parser generator that takes, as input, a declarative schema
for the options language of the command-line tool of interest. From this schema,
the parser generator outputs a parser that can be used to enrich the ASTs during
Phase III of parsing. The use of a parser generator was inspired by observation
(2): the information available in manual pages and embedded help, although free-form
English text, closely corresponds to the schema we provide our parser generator.
This correspondence is intentional. To support more command-line tools, one merely needs to
identify appropriate documentation and transliterate it into the schema format
we support. In practice, creating the schema for a typical command-line tool
took us between 15 and 30 minutes.
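Although we do not reproduce \texttt{binnacle}'s actual schema format here, the flavor of the approach can be conveyed with a small sketch. The schema layout, node-type names, and interpreter below are our own illustrative inventions standing in for the generated parser; they are not \texttt{binnacle}'s API:

```python
# Hypothetical declarative schema for a fragment of apt-get's options
# language, plus a tiny interpreter standing in for a generated parser.
APT_GET_SCHEMA = {
    "command": "apt-get",
    "subcommands": {
        "install": {
            "flags": {"-y": "APT-GET-YES",
                      "--no-install-recommends": "APT-GET-NO-RECOMMENDS"},
            "positional": "APT-GET-PACKAGE",  # remaining tokens are packages
        },
        "update": {"flags": {}, "positional": None},
    },
}

def parse_invocation(tokens, schema):
    """Turn a tokenized command-line invocation into a small AST of dicts."""
    assert tokens[0] == schema["command"], "not an invocation of this tool"
    sub = schema["subcommands"][tokens[1]]
    node = {"type": f"{tokens[0].upper()}-{tokens[1].upper()}", "children": []}
    for tok in tokens[2:]:
        if tok in sub["flags"]:
            node["children"].append({"type": sub["flags"][tok], "children": []})
        elif sub["positional"] is not None:
            node["children"].append(
                {"type": sub["positional"], "value": tok, "children": []})
    return node

ast = parse_invocation(["apt-get", "install", "-y", "curl"], APT_GET_SCHEMA)
print(ast["type"])  # APT-GET-INSTALL
```

Such a parser would run during Phase III, replacing the effectively uninterpretable literal holding the command-line invocation with the structured sub-tree it returns.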
Although the parser generator is an integral and interesting piece
of infrastructure, we forego a detailed description of the input schema the
generator requires and the process of transliterating manual pages; instead, we
now present the rule-encoding scheme that \texttt{binnacle} uses both for rule
enforcement and rule mining.
\input{fig/rule-impacts}
\input{fig/example-tars-column}
\input{fig/rule-mining}
\subsection{Tree Association Rules (TARs)}%
\label{Se:TARs}
The second challenge the \texttt{binnacle} toolset seeks to address (rule
encoding) is motivated by the need for both automated rule mining and static
rule enforcement. In both applications, there needs to be a consistent and
powerful encoding of expressive rules with straightforward syntax and clear
semantics. As part of developing this encoding, we curated a set of \emph{Gold
Rules} and wrote a rule-enforcement engine capable of detecting violations of
these rules. We describe this enforcement engine in greater detail in
\Cref{Se:Enforcement}. To create the set of \emph{Gold Rules}, we returned to
the data in our Gold Set of Dockerfiles. These Dockerfiles were obtained from
the \texttt{docker-library/} organization on GitHub. We manually reviewed merged
pull requests to the repositories in this organization. From the merged pull
requests, if we thought that a change was applying a best practice or a fix, we
manually formulated, as English prose, a description of the change. This process
gave us approximately 50 examples of \emph{concrete changes made by Docker
experts}, paired with descriptions of the general pattern being applied.
From these concrete examples, we devised 23 rules. \added[id=J]{A summary of
these rules is given in \Cref{Ta:GoldRules}.} Most examples that we saw
could be framed as association rules of some form. As an example, a rule may
dictate that using \texttt{apt-get install \ldots} requires a preceding
\texttt{apt-get update}. Rules of this form can be phrased in terms of an
antecedent and consequent. The only wrinkle in this simple approach is that both
the antecedent and the consequent are sub-trees of the tree representation of Dockerfiles.
To deal with tree-structured
data, we specify two pieces of information that help restrict \emph{where} the
consequent can occur in the tree, relative to the antecedent:
\begin{enumerate}
\item Its location: the consequent can either (i) \emph{precede} the
antecedent, (ii) \emph{follow} the antecedent, or (iii) \emph{be a child of}
the antecedent in the tree.
\item \smallskip Its scope: the consequent can either be (i) in the \emph{same piece}
of embedded shell as the antecedent (intra-directive), or (ii) it can be allowed to exist in a
\emph{separate piece} of embedded shell (inter-directive). Although we can encode
and enforce inter-directive rules, our miner is only capable of returning intra-directive
rules (as explained in \Cref{Se:RuleMining}). Therefore, all of the rules
we show have an intra-directive scope.
\end{enumerate}
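A TAR can therefore be pictured as a small record pairing two sub-tree patterns with these two localizers. The sketch below is illustrative only; the field names are ours, not \texttt{binnacle}'s exact encoding:

```python
from dataclasses import dataclass
from enum import Enum

class Location(Enum):
    PRECEDES = "precedes"    # consequent must precede the antecedent
    FOLLOWS = "follows"      # consequent must follow the antecedent
    CHILD_OF = "child-of"    # consequent must be a child of the antecedent

class Scope(Enum):
    INTRA_DIRECTIVE = "intra-directive"  # same piece of embedded shell
    INTER_DIRECTIVE = "inter-directive"  # may live in a separate piece

@dataclass
class TAR:
    antecedent: dict   # sub-tree pattern, e.g. {"type": "APT-GET-INSTALL"}
    consequent: dict   # sub-tree pattern, e.g. {"type": "APT-GET-UPDATE"}
    location: Location
    scope: Scope

# "apt-get install ... requires a preceding apt-get update", as a TAR:
rule = TAR(antecedent={"type": "APT-GET-INSTALL"},
           consequent={"type": "APT-GET-UPDATE"},
           location=Location.PRECEDES,
           scope=Scope.INTRA_DIRECTIVE)
```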
From an antecedent, a consequent, and these two pieces of localizing
information, we can form a complete rule against which the
enriched ASTs created by the phased parser can be checked. We call these Tree Association Rules
(TARs). Three example TARs are given in \Cref{Fi:Rules}. We are not the first to
propose Tree Association Rules; \citet{mazuran2009mining} proposed
TARs in the context of extracting knowledge from XML documents. The key
difference is that their TARs require that the consequent be a child of the
antecedent in the tree, while we allow for the consequent to occur outside of
the antecedent, either preceding it or succeeding it. \added[id=J]{Although we
allow for this more general definition of TARs, our miner is only capable of mining
\emph{local} TARs---that is, TARs in the style of~\citet{mazuran2009mining}; however, our static
rule-enforcement engine has no such limitation.}
\added[id=J]{
\textbf{Rule impacts.} For each of the Gold rules, \Cref{Ta:GoldRules} provides
the consequences of a rule violation and a judgement as to whether a given
rule is unique to Dockerfiles or more aligned with general Bash best-practices.
In general, we note that rule violations have varying consequences, including space wastage,
container bloat (and consequent increased attack surface), and instances of outright build failure.
Additionally, two-thirds of the Gold rules are \emph{unique to using Bash in the context of a Dockerfile}.
}
\input{fig/abstraction}
\input{fig/static-enforcement-ex}
\subsection{Abstraction}%
\label{Se:Abstraction}
\texttt{binnacle}'s rule miner and static rule-enforcement engine both employ an
\emph{abstraction} process. The abstraction process is complementary to
\emph{phased parsing}---there may still be information within literal values
even when those literals are not from some well-defined sub-language. During the
abstraction process, for each tree in the input corpus, every literal value
residing in the tree is removed, fed to an abstraction subroutine, and replaced
by either zero, one, or several abstract nodes (these abstract nodes are
produced by the abstraction subroutine). The abstraction subroutine simply
applies a user-defined list of named regular expressions to the input literal
value. For every matched regular expression, the abstraction subroutine returns
an abstract node whose type is set to the name of the matched expression. For
example, one abstraction we use attempts to detect URLs; another detects if the
literal value is a Unix path and, if so, whether it is relative or absolute. The
abstraction process is depicted in \Cref{Fi:Abstraction}. The reason for these
abstractions is to help both \texttt{binnacle}'s rule-learning and
static-rule-enforcement phases by giving these tools the vocabulary necessary to
reason about properties of interest.
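A minimal sketch of the abstraction subroutine follows; the named regular expressions shown are illustrative stand-ins for the user-defined list:

```python
import re

# Illustrative named abstractions (the real list is user-defined).
ABSTRACTIONS = [
    ("URL", re.compile(r"https?://\S+")),
    ("ABS-PATH", re.compile(r"(?<!\S)/[\w./-]+")),
    ("REL-PATH", re.compile(r"(?<!\S)\.{1,2}/[\w./-]+")),
]

def abstract_literal(value):
    """Replace a literal leaf by zero or more abstract nodes, one per
    matched named regular expression (the node's type is the name)."""
    nodes = []
    for name, pattern in ABSTRACTIONS:
        if pattern.search(value):
            nodes.append({"type": name, "children": []})
    return nodes

print(abstract_literal("https://example.com/pkg.tar.gz"))  # one URL node
print(abstract_literal("/usr/local/bin"))                  # one ABS-PATH node
print(abstract_literal("hello"))                           # no abstract nodes
```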
\subsection{Rule Mining}%
\label{Se:RuleMining}
The \texttt{binnacle} toolset approaches rule mining by, first, focusing on a
specific class of rules that are more amenable to automatic recovery: rules that
are \emph{local}. We define a \emph{local} Tree Association Rule (TAR) as one in
which the consequent sub-tree exists within the antecedent sub-tree. This
matches the same definition of TARs introduced by Mazuran
\latinphrase{et~al.}\xspace~\cite{mazuran2009mining}. Based on this definition, we note that local
TARs must be intra-directive (scope) and must be child-of (location). Three
examples of local TARs (each of which our rule miner is able to discover automatically)
are given in \Cref{Fi:Rule3,Fi:RuleMining3}. In
general, the task of finding arbitrary TARs from a corpus of hundreds of
thousands of trees is computationally infeasible. By focusing on local TARs, the
task of automatic mining becomes tractable.
To identify local TARs, \texttt{binnacle} collects, for each node type of
interest, the set of all sub-trees with roots of the given type (e.g., all
sub-trees with APT-GET as the root). On this set of sub-trees, \texttt{binnacle}
employs frequent sub-tree mining~\cite{chi2005frequent} to recover a set of
likely consequents. Specifically, \texttt{binnacle} uses the \textsc{CMTreeMiner}
algorithm~\cite{chi2005mining} to identify frequent \emph{maximal},
\emph{induced}, \emph{ordered} sub-trees. \emph{Induced} indicates that all
``child-of'' relationships in the sub-tree exist in the original tree (as
opposed to the more permissive ``descendent-of'' relationship, which defines an
\emph{embedded} sub-tree). \emph{Ordered} signifies that order of the child
nodes in the sub-tree matters (as opposed to \emph{unordered} sub-trees). A
frequent sub-tree is \emph{maximal} for a given support threshold if there is no
super-tree of the sub-tree with occurrence frequency above the support threshold
(though there may be sub-trees of the given sub-tree that have a higher
occurrence frequency).
For more details on frequent sub-trees, see Chi \latinphrase{et~al.}\xspace~\cite{chi2005frequent}.
\texttt{binnacle} returns rules in which the antecedent is the root node of a
sub-tree (where the type of the root node matches the input node-type) and the
consequent is a sub-tree identified by the frequent sub-tree miner.
An example of the rule-mining process is given in \Cref{Fi:RuleMining}. In the
first stage of rule mining, all sub-trees with the same root node-type
(\texttt{APT-GET-INSTALL}) are grouped together and collected. For each group of
sub-trees with the same root node-type, \texttt{binnacle} employs frequent
sub-tree mining to find likely consequents. In our example, two frequently
occurring sub-trees (in gray and dashed outlines, respectively) are given in
\Cref{Fi:RuleMining2}. Finally, \texttt{binnacle} creates local TARs by using
the root node as the antecedent and each of the frequent sub-trees as a
consequent, as shown in \Cref{Fi:RuleMining3}. One TAR is created for each
identified frequent sub-tree.
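The overall shape of the pipeline can be sketched as follows. Note that the frequent-sub-tree step is deliberately simplified: a naive count of child node types stands in for \textsc{CMTreeMiner}, so this sketch recovers only single-node consequents and is purely illustrative:

```python
from collections import Counter

def subtrees_by_root_type(trees, node_type):
    """Stage 1: collect every sub-tree whose root has the requested type."""
    found = []
    def walk(node):
        if node["type"] == node_type:
            found.append(node)
        for child in node.get("children", []):
            walk(child)
    for tree in trees:
        walk(tree)
    return found

def mine_local_tars(trees, node_type, min_support):
    """Stage 2 (naive stand-in): frequent child types become consequents."""
    roots = subtrees_by_root_type(trees, node_type)
    counts = Counter()
    for root in roots:
        for child_type in {c["type"] for c in root["children"]}:
            counts[child_type] += 1
    return [
        {"antecedent": node_type, "consequent": child_type,
         "location": "child-of", "scope": "intra-directive"}
        for child_type, n in counts.items() if n >= min_support
    ]

corpus = [
    {"type": "APT-GET-INSTALL",
     "children": [{"type": "APT-GET-YES", "children": []},
                  {"type": "APT-GET-PACKAGE", "children": []}]},
    {"type": "APT-GET-INSTALL",
     "children": [{"type": "APT-GET-YES", "children": []}]},
]
tars = mine_local_tars(corpus, "APT-GET-INSTALL", min_support=2)
print(tars)  # one TAR: APT-GET-INSTALL requires child APT-GET-YES
```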
\subsection{Static Rule Enforcement}%
\label{Se:Enforcement}
\input{fig/eu-histograms}
Currently, the state-of-the-art in static Dockerfile support for developers is
the VSCode Docker extension \citep{github:VSCodeDocker} and the Hadolint
Dockerfile-linting tool \citep{github:Hadolint}. The VSCode extension provides
highlighting and basic linting, whereas Hadolint employs a shell parser
(ShellCheck \citep{github:ShellCheck}---the same shell parser we use) to parse
embedded \texttt{bash}, similar to our tool's second phase of parsing. The
capabilities of these tools represent steps in the right direction but,
ultimately, they do not offer enough in the way of deep semantic support.
Hadolint does not support parsing of the arguments of individual commands as
\texttt{binnacle} does in its third phase of parsing. Instead, Hadolint resorts
to fuzzy string matching and regular expressions to detect simple rule
violations.
\texttt{binnacle}'s static rule-enforcement engine takes, as input, a Dockerfile and a set of TARs.
\texttt{binnacle}'s rule engine runs, for each rule, three
stages of processing on the input corpus:
\begin{enumerate}
\item Stage I\@: The Dockerfile is parsed into a tree representation, and the enforcement engine attempts to match
the TAR's antecedent (by aligning it with a sub-tree in the input tree). If
no matches are found, the engine continues processing with the next TAR\@. If
a match is found, then the enforcement engine continues to Stage II\@. This
process is depicted in \Cref{Fi:EnforcementStage1}.
\item \smallskip Stage II\@: Depending on the \emph{scope} and \emph{location} of the
given TAR, the enforcement engine binds a region of the input tree. This region
is where, in Stage III, the enforcement engine will look for a sub-tree with
which the consequent can be aligned. \Cref{Fi:EnforcementStage2} depicts this
process, and highlights the various possible binding regions in the example
input tree.
\item \smallskip Stage III\@: Given a TAR with a matched antecedent and a bound region of
the input tree, the enforcement engine attempts to align the consequent of the
TAR with a sub-tree within the bound region. If the engine is able to find
such an alignment, then the rule has been \emph{satisfied}. If not, the rule
has been \emph{violated}. \Cref{Fi:EnforcementStage3} depicts this process and
both possible outcomes: for the rule in \Cref{Fi:Rule1}, the matched
antecedent is shown with a thick black outline, the bound region is shown in
blue, and the matched consequent is shown with a dashed black outline. In
contrast, for the rule in \Cref{Fi:Rule3}, the matched antecedent is the
same as above, the bound region is shown in green; however, the tree is
missing the consequent, represented by the dashed red sub-tree.
\end{enumerate}
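The three stages can be sketched over a simplified, flattened representation, in which a region is a list of sibling nodes and a pattern matches a node when their types agree (the real engine aligns arbitrary sub-trees; this is an illustrative reduction):

```python
def matches(node, pattern):
    """Toy alignment: a pattern matches a node when their types agree."""
    return node["type"] == pattern["type"]

def check_rule(siblings, rule):
    """Return (satisfied, violated) for one intra-directive TAR."""
    for i, node in enumerate(siblings):
        if not matches(node, rule["antecedent"]):
            continue                               # Stage I: find antecedent
        if rule["location"] == "precedes":
            region = siblings[:i]                  # Stage II: bind a region
        elif rule["location"] == "follows":
            region = siblings[i + 1:]
        else:  # "child-of"
            region = node.get("children", [])
        if any(matches(n, rule["consequent"]) for n in region):
            return True, False                     # Stage III: satisfied
        return False, True                         # Stage III: violated
    return False, False                            # antecedent never matched

rule = {"antecedent": {"type": "APT-GET-INSTALL"},
        "consequent": {"type": "APT-GET-UPDATE"},
        "location": "precedes"}

good = [{"type": "APT-GET-UPDATE"}, {"type": "APT-GET-INSTALL"}]
bad = [{"type": "APT-GET-INSTALL"}]
print(check_rule(good, rule))  # (True, False)
print(check_rule(bad, rule))   # (False, True)
```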
The implementation of \texttt{binnacle}'s enforcement engine utilizes a simple
declarative encoding for the TARs. To reduce the bias in the manually extracted
\emph{Gold Rules} (introduced in \Cref{Se:TARs}), we used
\texttt{binnacle}'s static rule-enforcement engine and the Gold Set of
Dockerfiles (introduced in \Cref{Se:DataAcquisition}) to gather statistics that
we used to filter the \emph{Gold Rules}. For each of the 23 rules (encoded as Tree
Association Rules), we made the following measurements: (i) the \emph{support}
of the rule, which is the number of times the rule's antecedent is
matched, (ii) the \emph{confidence} of the rule, which is the percentage
of occurrences of the rule's consequent that match successfully, given that the
rule's antecedent matched successfully, and (iii) the \emph{violation rate} of
the rule, which is the percentage of occurrences of the antecedent
where the consequent is not matched. Note that our definitions of \emph{support} and \emph{confidence}
are the same as those used in traditional association rule mining~\cite{agrawal1993mining}.
We validated our \emph{Gold Rules} by keeping only those rules
with \emph{support} greater than or equal to 50 and \emph{confidence} greater
than or equal to 75\% on the \emph{Gold Set}. \added[id=J]{These support and confidence measurements
are given in \Cref{Ta:GoldRules}.} By doing this filtering, we increase the selectivity of
our \emph{Gold Rules} set, and reduce the bias of our manual selection process.
Of the original 23 rules in our \emph{Gold Rules}, 16 pass the minimum-support
threshold and, of those 16 rules, 15 pass the minimum-confidence threshold.
Henceforth, we use the term \emph{Gold Rules} to refer to the 15 rules
that passed quantitative filtering. \added[id=J]{These 15 rules are highlighted, in gray, in
\Cref{Ta:GoldRules}.}
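The three measurements and the quantitative filter follow directly from their definitions. The sketch below is illustrative only; the class and function names are hypothetical stand-ins, not \texttt{binnacle}'s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class RuleStats:
    """Per-rule counts gathered by running the enforcement engine over a corpus."""
    antecedent_matches: int  # occurrences of the rule's antecedent
    consequent_matches: int  # of those, how many also matched the consequent

    @property
    def support(self):
        return self.antecedent_matches

    @property
    def confidence(self):
        if self.antecedent_matches == 0:
            return 0.0
        return self.consequent_matches / self.antecedent_matches

    @property
    def violation_rate(self):
        # Fraction of antecedent occurrences where the consequent is missing.
        return 1.0 - self.confidence if self.antecedent_matches else 0.0

def passes_filter(stats, min_support=50, min_confidence=0.75):
    """The quantitative filter applied to the Gold Rules on the Gold Set."""
    return stats.support >= min_support and stats.confidence >= min_confidence
```

Note that, by these definitions, a rule's violation rate is the complement of its confidence, so filtering on high confidence is equivalent to filtering on low violation rate.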
Together, \texttt{binnacle}'s phased parser, rule miner, and static rule-enforcement
engine enable both rule learning and the enforcement of learned rules. \Cref{Fi:Overview} depicts how
these tools interact to provide the aforementioned features. Taken together, the
\texttt{binnacle} toolset fills the need for structure-aware analysis of DevOps artifacts
and provides a foundation for continued research into improving the state-of-the-art
in learning from, understanding, and analyzing DevOps artifacts.
\section{Conclusion}%
\label{Se:Conclusion}
Thus far, we have identified the ecosystem of DevOps tools and
artifacts as one in need of greater support both academically
and industrially. We found that, on average, Dockerfiles on
GitHub are nearly \added[id=J]{\emph{five times worse}}, with respect
to violations of our \emph{Gold Rules}, than Dockerfiles written
by experts. Furthermore, we found that industrial Dockerfiles are no
better. Through automated rule mining and static rule enforcement, we
created tools to help bridge this gap in quality. Without increased
developer assistance, the vast disparity
between the quality of DevOps artifacts authored by experts and non-experts
is likely to continue to grow.
There are a number of pieces of follow-on work that we hope to pursue. We envision the \texttt{binnacle} tool,
the data we have collected, and the analysis we have done on Dockerfiles
as a foundation on which new tools and new analysis can be carried out.
To that end, we plan to continue to evolve the \texttt{binnacle} ecosystem
by expanding to more DevOps artifacts (Travis, CircleCI, etc.). Additionally,
the encoding of rules we utilize has the advantage of implicitly encoding
a repair (or, at least, part of a repair---localizing the insertion point
for the implicit repair may be a challenge). Furthermore, the kinds of
rules that we mine are limited to \emph{local} rules. We believe that more
rules may be within the reach of automated mining. Finally, we
hope to integrate \texttt{binnacle}'s mined rules and analysis engine
into language servers and IDE plugins to provide an avenue for collecting
real feedback that can be used to improve the assistance we provide
to DevOps developers.
\section{Data Acquisition}%
\label{Se:DataAcquisition}
A prerequisite to analyzing and learning from DevOps artifacts is gathering
a large sample of representative data. There are two challenges we must
address with respect to data acquisition: (D1) the challenge of gathering
\emph{enough} data to do interesting analysis, and (D2) the challenge of
gathering \emph{high-quality} data from which we can mine rules. To address the
first challenge, we created the \texttt{binnacle} toolset: a dockerized
distributed system capable of ingesting a large number of DevOps artifacts
from a configurable selection of GitHub repositories. \texttt{binnacle} uses a combination of Docker and Apache Kafka to enable dynamic
scaling of resources when ingesting a large number of artifacts. \Cref{Fi:Overview}
gives an overview of the three primary
tools provided by the \texttt{binnacle} toolset: a tool for data acquisition, which we
discuss in this section, a tool for rule learning (discussed further in \Cref{Se:RuleMining}), and
a tool for static rule enforcement (discussed further in \Cref{Se:Enforcement}).
Although the architecture of \texttt{binnacle} is interesting in
its own right, we refer the reader to the \texttt{binnacle} GitHub
repository for more details.\footnote{\added[id=J]{\url{https://github.com/jjhenkel/binnacle-icse2020}}} For the remainder of this section, we
instead describe the data we collected using \texttt{binnacle}, and
our approach to challenge (D2): the need for \emph{high-quality} data.
Using \texttt{binnacle}, we ingested \emph{every public repository on GitHub
with ten or more stars}. This process yielded
approximately 900,000 GitHub repositories. For each of these 900,000
repositories, we gathered a listing of all the files present in each repository.
This listing of files was generated by looking at the \texttt{HEAD} of the default
branch for each repository. Together, the metadata and file listings for each repository were stored
in a database. \added[id=J]{We ran a script against this database to identify the
files that were likely Dockerfiles using a permissive filename-based filter.
This process identified approximately 240,000 likely Dockerfiles. Of those 240,000
likely Dockerfiles, only 219,000 were successfully downloaded and parsed as Dockerfiles.
Of the 219,000 remaining files, approximately 178,000 were unique based on their SHA1 hash.
It is this set, of approximately 178,000 Dockerfiles, that we will refer
to as our corpus of Dockerfiles. }
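The filtering and deduplication steps above are simple to sketch. The filename pattern below is a hypothetical stand-in, since the exact permissive filter is not spelled out here:

```python
import hashlib
import re

# Hypothetical permissive filter: accepts paths like "Dockerfile",
# "Dockerfile.dev", and "base.dockerfile". The actual pattern may differ.
DOCKERFILE_RE = re.compile(r'(^|/)([^/]*\.)?dockerfile([^/]*)?$', re.IGNORECASE)

def looks_like_dockerfile(path):
    """Permissive, filename-based check for likely Dockerfiles."""
    return bool(DOCKERFILE_RE.search(path))

def dedup_by_sha1(contents):
    """Map SHA1 digest -> first path with that content, discarding duplicates."""
    unique = {}
    for path, data in contents.items():
        unique.setdefault(hashlib.sha1(data).hexdigest(), path)
    return unique
```

A permissive filter deliberately trades precision for recall; files that merely look like Dockerfiles are weeded out later when they fail to parse.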
Although both the number of repositories we ingested and the number of
Dockerfiles we collected were large, we still had not addressed challenge
(D2): high-quality data. To find high-quality data, we looked within our Dockerfile corpus
and extracted every Dockerfile that originally came from the \texttt{docker-library/}
GitHub organization. This organization is run by Docker, and houses a set
of official Dockerfiles written and maintained by Docker experts.
There are approximately 400 such files in our Dockerfile corpus. We will refer
to this smaller subset of Dockerfiles as the \emph{Gold Set}. Because these files
are Dockerfiles created and maintained by Docker's own experts, they presumably
represent a higher standard of quality than those produced by non-experts. This set
provides us with a solution to challenge (D2)---the Gold Set can be used as an
oracle for good Dockerfile hygiene. In addition to the Gold Set, we also
collected approximately 5,000 Dockerfiles from several industrial repositories,
with the hope that these files would also be a source of high-quality data.
\section{Evaluation}%
\label{Se:Evaluation}
In this section, for each of the three core
components of the \texttt{binnacle} toolset's learning and enforcement
tools, we measure and analyze quantitative results relating to the
efficacy of the techniques behind these tools. All experiments
were performed on a 12-core workstation (with 32GB of RAM) running
Windows 10 and a recent version of Docker.
\subsection{Results: Phased Parsing}%
\label{Se:ResultsPhasedParsing}
\input{fig/mined-examples}
To understand the impacts of phased parsing, we need a metric for quantifying
the amount of \emph{useful information} present in our DevOps artifacts
(represented as trees) after each stage of parsing. The metric we use is the
fraction of leaves in our trees that are \emph{effectively uninterpretable
(EU)}. We define a leaf as \emph{effectively uninterpretable (EU)} if it is, after
the current stage of parsing, a string literal that could be further refined by
parsing the string with respect to the grammar of an additional embedded
language. (We will also count nodes explicitly marked as unknown by our parser as being \emph{EU}.)
For example, after the first phase of parsing (the top-level parse), a
Dockerfile will have nodes in its parse tree that represent embedded
bash---these nodes are \emph{EU} at this stage because they have
further structure that can be discovered given a bash parser; however, after the
first stage of parsing, these leaves are simply treated as literal values, and
therefore marked \emph{EU}.
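The \emph{EU}-fraction metric itself is a simple tree walk. The sketch below assumes trees encoded as nested dicts, and the node-type names are illustrative rather than the parser's actual vocabulary:

```python
# Node types treated as effectively uninterpretable (EU): unparsed embedded
# language plus nodes the parser explicitly marked unknown. These type names
# are hypothetical stand-ins.
EU_TYPES = {'BASH-LITERAL', 'UNKNOWN'}

def leaves(tree):
    """Yield every leaf of a dict-encoded tree."""
    if not tree.get('children'):
        yield tree
    else:
        for child in tree['children']:
            yield from leaves(child)

def eu_fraction(tree):
    """Fraction of leaves that are effectively uninterpretable."""
    all_leaves = list(leaves(tree))
    eu = sum(1 for leaf in all_leaves if leaf['type'] in EU_TYPES)
    return eu / len(all_leaves) if all_leaves else 0.0
```

Averaging this quantity over a corpus of trees yields the distributions measured below.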
We took three measurements over the corpus of 178,000 unique Dockerfiles introduced
in \Cref{Se:DataAcquisition}: (M1) the distribution of the fraction of leaves
that are \emph{EU} after the first phase of parsing, (M2) the
distribution of the fraction of leaves that are \emph{EU} after the second phase
of parsing, and (M3) the distribution of the fraction of leaves that are
\emph{EU} after the second phase of parsing and unresolved during the third
phase of parsing.\footnote{For (M3) we make a relative measurement:
the reason for using a different metric is to accommodate the large number of new leaf nodes that the
third phase of parsing introduces. Without this adjustment, one could argue
that our measurements are biased because the absolute fraction of \emph{EU} leaves
would be low due to the sheer number of new leaves introduced by the third parsing phase. To avoid this bias,
we measure the fraction of \emph{previously EU} leaves that remain unresolved, as opposed
to the absolute fraction of \emph{EU} leaves that remain after the third phase of parsing (which is
quite small due to the large number of new leaves introduced in the third
phase).}
Density histograms that depict the three distributions are given
in~\cref{Fi:Distributions}. As shown in \cref{Fi:Distributions}, after the first phase of parsing, the
trees in our corpus have, on average, 19.3\%
\emph{EU} leaves. This number quantifies
the difficulty of reasoning over DevOps artifacts without more sophisticated
parsing. Furthermore, the nodes in the tree most likely to play a role in
rules happen to be the \emph{EU} nodes at this stage. (This aspect is
something that our quantitative metric does not take into account and hence
over-estimates the utility of the representation available after Phase I and Phase II.)
Counterintuitively, the second phase of parsing makes the situation worse: on average,
33.2\% of leaves in second-phase trees are \emph{EU}. Competing tools, like Hadolint,
work over DevOps artifacts with a similar representation. In practice,
competing tools must either stay at what we consider a Phase I representation (just
a top-level parse) or utilize something similar to our Phase II representations.
Such tools are faced with the high fraction of \emph{EU} leaves present in a Phase II
AST\@. Tools using Phase II representations, like Hadolint, are forced to employ regular
expressions or other fuzzy matching techniques as part of their analysis.
Finally, we use our parser generator
and the generated parsers for the top-50 commands to perform a third phase of
parsing. The plot in \cref{Fi:PhasedHistoM3} shows the M3 distribution obtained
after performing the third parsing phase on our corpus of Dockerfiles. At this stage, almost all of the \emph{EU}
nodes are gone---on average, only 3.7\% of leaves that were \emph{EU} at Phase II
remain \emph{EU} in Phase III\@. In fact, over 65\% of trees in Phase II had all \emph{EU}
leaves resolved after the third phase of parsing. These results provide concrete evidence of the
efficacy of our phased-parsing technique, and, in contrast to what is possible with existing tools, the
Phase III structured representations are easily amenable to static analysis and
rule mining.
\subsection{Results: Rule Mining}%
\label{Se:ResultsRuleMining}
We applied \texttt{binnacle}'s rule miner to the Gold Set of Dockerfiles defined
in \Cref{Se:DataAcquisition}. We chose the Gold Set as our corpus for rule learning
because it presumably contains Dockerfiles of high quality. As described in \Cref{Se:RuleMining},
\texttt{binnacle}'s rule miner takes, as input, a corpus of trees and a set of node types.
We chose to mine for patterns using any new node type introduced by the third phase of
parsing. We selected these node types because (i) they represent new information gained
in the third phase of our phased-parsing process, and (ii) all rules in our manually collected \emph{Gold Rules}
set used nodes created in this phase. Rules involving these new nodes (which
come from the most deeply nested languages in our artifacts) were invisible to prior work.
To evaluate \texttt{binnacle}'s rule miner, we used the \emph{Gold Rules} (introduced
in \Cref{Se:TARs}). From the original 23 \emph{Gold Rules} we removed 8 rules that did not
pass a set of quantitative filters---this filtering is described more in \Cref{Se:Enforcement}.
Of the remaining 15 \emph{Gold Rules}, there are 9 rules that are \emph{local} (as defined
in \Cref{Se:RuleMining}). In principle, these 9 rules are all extractable by our rule miner.
Furthermore, it is conceivable that there exist interesting and useful rules, outside of the
\emph{Gold Rules}, that did not appear in the Dockerfile changes that we examined in our manual extraction process.
To assess \texttt{binnacle}'s
rule miner we asked the following three questions:
\begin{itemize}
\item (Q1) How many rules are we able to extract from the data automatically?
\item \smallskip (Q2) How many of these rules match one of the 9 local \emph{Gold Rules}? (Equivalently, what is our \emph{recall} on the set of local \emph{Gold Rules}?)
\item \smallskip (Q3) How many new rules do we find, and, if we find new rules (outside of our local \emph{Gold Rules}), what can we say about them (e.g., are the new rules useful, correct, general, etc.)?
\end{itemize}
For (Q1), we found that \texttt{binnacle}'s automated rule miner returns a total of
26 rules. \texttt{binnacle}'s automated rule miner is selective
enough to produce a small number of output rules---this selectivity has the benefit of
allowing for easy manual review.
To provide a point of comparison, we also
ran a traditional association rule miner over sequences of tokens in our Phase III ASTs
(we generated these sequences via a pre-order traversal). The association rule miner
returned thousands of possible association rules. The number of rules could be reduced,
by setting very high confidence thresholds, but in doing so, interesting rules could be
missed.
For (Q2), we found that two thirds (6 of 9) local \emph{Gold Rules} were recovered by
\texttt{binnacle}'s rule miner. Because \texttt{binnacle}'s
rule miner is based on frequent sub-tree mining, it is only capable of returning rules
that, when checked against the corpus they were mined from, have a minimum confidence
equal to the minimum support supplied to the frequent sub-tree miner.
In addition to measuring recall on the local \emph{Gold Rules}, we also examined the rules encoded in Hadolint
to identify all of its rules that were local. Because Hadolint has a weaker representation
of Dockerfiles, we are not able to translate many of its rules into local TARs. However, there were three
rules that fit the definition of local TARs. Furthermore, \texttt{binnacle}'s automated
miner was able to recover each of those three rules (one rule requires the use of \texttt{apt-get install}'s \texttt{-y} flag, another
requires the use of \texttt{apt-get install}'s \texttt{--no-install-recommends} flag, and the third requires the use of
\texttt{apk add}'s \texttt{--no-cache} flag).
To classify the rules returned by our automated miner, we assigned one of the following
four classifications to each of the 26 rules returned:
\begin{itemize}
\item Syntactic: these are rules that enforce simple properties---for example, a rule
encoding the fact that the \texttt{cp} command takes two paths as arguments (see \Cref{Fi:MinedEx1}).
\item \smallskip Semantic: these are rules that encode more than just syntax.
For example, a rule that says the URL passed to the \texttt{curl} utility must include the \texttt{https://}
prefix (see \Cref{Fi:MinedEx2}).
\item \smallskip Gold: these are rules that match, or supersede, one of the rules in our collection of \emph{Gold Rules} (see \Cref{Fi:MinedEx3}).
\item \smallskip Ungeneralizable: these are rules that are correct on the corpus from which they were mined, but,
upon further inspection, seem unlikely to generalize. For example, a rule that asserts that the \texttt{sed}
utility is always used with the \texttt{--in-place} flag is ungeneralizable (see \Cref{Fi:MinedEx4}).
\end{itemize}
\added[id=J]{To answer (Q3), we assigned one of the above classifications to each of the automatically mined
rules. We found that, out of 26 rules, 12 were syntactic, 4 were semantic, 6 were gold, and 4
were ungeneralizable. \Cref{Fi:MinedExs} depicts a rule that was mined automatically in each of the four
classes. Surprisingly, \texttt{binnacle}'s automated miner discovered 16 new rules
(12 syntactic, 4 semantic) that we missed in our manual extraction. Of the newly discovered rules,
one could argue that only the semantic rules are interesting (and, therefore, one might expect a
human to implicitly filter out syntactic rules during manual mining). We would argue that even these
syntactic rules are of value. The lack of basic validation in tools like VS Code's Docker extension
creates a use case for these kinds of basic structural constraints. Furthermore, the 4 novel semantic rules
include things such as: (i) the use of the} \verb|-L| \added[id=J]{flag with} \verb|curl|, \added[id=J]{following redirects,
which introduces resilience to resources that may have moved, (ii) the use of the} \verb|-p| \added[id=J]{flag with} \verb|mkdir|,
\added[id=J]{which creates nested directories when required, and (iii) the common practice of preferring soft links
over hard links by using} \verb|ln|'s \verb|-s| \added[id=J]{flag. With (Q3), we have demonstrated the feasibility
of automated mining for Dockerfiles---we hope that these efforts inspire further work into mining
from Dockerfiles and DevOps artifacts in general.}
\subsection{Results: Rule Enforcement}%
\label{Se:ResultsRuleEnforcement}
Using the 15 \emph{Gold Rules}, we measured the \emph{average violation rate} of the \emph{Gold Rules} with respect
to the Gold Dockerfiles (\Cref{Se:DataAcquisition}). The \emph{average violation rate} is the arithmetic mean of the violation
rates of each of the 15 \emph{Gold Rules} with respect to the Gold Dockerfiles. This measurement serves as a kind of
baseline---it gives us a sense of how ``good'' Dockerfiles written by experts are with respect to the \emph{Gold Rules}.
The average violation rate we measured was \added[id=J]{6.65\%, which, unsurprisingly, is quite low. We also measured the
average violation rate of the \emph{Gold Rules} with respect to our overall corpus.
We hypothesized that Dockerfiles ``in the wild'' would fare worse, with respect to violations, than those written by
experts. This hypothesis was supported by our findings: the average violation rate was 33.15\%. We had expected
an increase in the violation rate, but were surprised by the magnitude of the increase. These results
highlight the dire state of static DevOps support: Dockerfiles authored by non-experts are nearly \emph{five times} worse when
compared to those authored by experts.} Bridging this gap is one of the overarching goals of the \texttt{binnacle} ecosystem.
We also obtained a set of approximately 5,000 Dockerfiles from the source-code repositories of an industrial source, and assessed their quality by
checking them against our \emph{Gold Rules}.
\added[id=J]{To our surprise, the violation rate was no lower for these industrial Dockerfiles.}
This result provides evidence that the quality of Dockerfiles suffers in industry as well, and that there is a need for tools such as \texttt{binnacle} to aid industrial developers.
\section{Introduction}%
\label{Se:Introduction}
\added[id=J]{With the continued growth and rapid iteration of software, an increasing amount
of attention is being placed on services and infrastructure to enable developers
to test, deploy, and scale their applications quickly. This situation has given rise to the
practice of \emph{DevOps}, a blend of the words \emph{Development} and \emph{Operations}, which
seeks to build a bridge between both practices, including deploying, managing, and supporting
a software system \cite{lwakatare2015dimensions}.
Bass \latinphrase{et~al.}\xspace{} define DevOps as the ``set
of practices intended to reduce the time between committing a change to a system
and the change being placed into normal production, while ensuring high quality''
\cite{DBLP:books/daglib/0036722}. DevOps activities include building, testing, packaging,
releasing, configuring, and monitoring software.
To aid developers in these
processes, tools such as TravisCI \citep{tool:Travis}, CircleCI
\citep{tool:Circle}, Docker \citep{tool:Docker}, and Kubernetes
\citep{tool:Kubernetes}, have become an integral part of the daily workflow of
thousands of developers. Much has been written about DevOps (see, for
example,~\cite{davis2016effective} and~\cite{kim2016devops}) and various
practices of DevOps have been studied
extensively~\cite{widder2019conceptual,hilton2016usage,staahl2016industry,vasilescu2015quality,zhao2017impact,staahl2016industry,portworx}.}
DevOps tools exist in a heterogeneous and rapidly evolving landscape. As software
systems continue to grow in scale and complexity, so do DevOps tools. Part of
this increase in complexity can be seen in the input formats of DevOps tools:
many tools, like Docker \citep{tool:Circle}, Jenkins \citep{tool:Jenkins}, and
Terraform \citep{tool:Terraform}, have custom DSLs to describe their input
formats. We refer to such input files as \emph{DevOps artifacts}.
\added[id=J]{Historically, DevOps artifacts have been somewhat neglected in terms of
industrial and academic research (though they have received interest in recent years~\cite{DBLP:journals/infsof/RahmanMW19}).}
They are not ``traditional'' code, and
therefore out of the reach of various efforts in automatic mining and analysis,
but at the same time, these artifacts are \emph{complex}. Our discussions with
developers tasked with working on these artifacts indicate that they learn just
enough to ``get the job done.'' Phillips~\latinphrase{et~al.}\xspace{} found that there is little
perceived benefit in becoming an expert, because developers working on builds
told them ``if you are good, no one ever knows about
it~\cite{phillips2014understanding}.'' However, there is a strong interest in
tools to assist the development of DevOps artifacts: even with its relatively
shallow syntactic support, the VS Code Docker extension has over 3.7 million
unique installations \citep{vscode:Docker}. Unfortunately, the availability of
such a tool has not translated into the adoption of best practices. We find
that, on average, Dockerfiles on GitHub have nearly \added[id=J]{five times as many} rule
violations as those written by Docker experts. These rule violations, which we
describe in more detail in \Cref{Se:Evaluation}, range from true bugs (such as
simply forgetting the \texttt{-y} flag when using \texttt{apt-get install} which
causes the build to hang) to
violations of community established best practices (such as forgetting to use \texttt{apk add}'s
\texttt{--no-cache} flag).
The goal of our work is as follows:
\begin{equation*}
\begin{array}{|p{.95\columnwidth}|}
\hline
\textrm{We seek to address the need for more effective semantics-aware tooling in the realm of DevOps artifacts, with the ultimate goal of reducing the gap in quality between artifacts written by experts and artifacts found in open-source repositories.}\\
\hline
\end{array}
\end{equation*}
We have observed that best practices for tools like Docker have arisen, but
engineers are often unaware of these practices, and therefore unable to follow
them. Failing to follow these best practices can cause longer build times and
larger Docker images at best, and eventual broken builds at worst. To ameliorate
this problem, we introduce \texttt{binnacle}: the first toolset for
semantics-aware rule mining from, and rule enforcement in, Dockerfiles. We selected
Dockerfiles as the initial type of artifact because it is the most prevalent
DevOps artifact in industry (some 79\% of IT companies use
it~\cite{portworx}), has become the de-facto container technology in OSS~\cite{CitoEmpiricalDockerEcoMSR2017,zhang2018cd},
and it has a characteristic that we observe in many other
types of DevOps artifacts, namely, fragments of shell code are embedded within its
declarative structure.
\added[id=J]{Because many developers are comfortable with the Bash shell in
an interactive context, they may be unaware of the differences and assumptions
of shell code in the context of DevOps tools. For example, many command-line tools use a caching mechanism
for efficiency. Relying on and not removing the cache can lead to wasted space, outdated
packages or data, and in some cases, broken builds. Consequently, one must always invoke \texttt{apt-get update}
before installing packages, and one should also delete the cache after installation. Default options for commands may need
to be overridden often in a Docker setting. For instance, in an interactive setting users
almost always want to install recommended dependencies. However, installing recommended
dependencies (which may change over time in the external
environment of apt package lists) can silently break future Dockerfile builds and, in the
near term, waste space and introduce implicit
dependencies (hence the need for \texttt{apt-get install}'s \texttt{--no-install-recommends} option).
Thus, a developer who may be considered a Bash or Linux expert can still run afoul of Docker Bash pitfalls.}
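A toy checker makes these pitfalls concrete. This is a hand-written illustration of the kinds of checks involved, not \texttt{binnacle}'s rule engine, and it deliberately ignores complications such as combined short flags (e.g., \texttt{-yq}) and separators other than \texttt{\&\&}:

```python
import shlex

def apt_get_warnings(run_command):
    """Flag common apt-get pitfalls in a RUN payload (illustrative only)."""
    warnings = []
    saw_update = False
    for part in run_command.split('&&'):
        argv = shlex.split(part)
        if argv[:2] == ['apt-get', 'update']:
            saw_update = True
        if argv[:2] == ['apt-get', 'install']:
            if not saw_update:
                warnings.append('no `apt-get update` before `apt-get install`')
            if '-y' not in argv:
                warnings.append('missing -y: the build may hang on a prompt')
            if '--no-install-recommends' not in argv:
                warnings.append('missing --no-install-recommends')
    return warnings
```

For example, the single command `apt-get install curl` triggers all three warnings, while the conventional `apt-get update && apt-get install -y --no-install-recommends curl` triggers none.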
To create the \texttt{binnacle} toolset, we had to address three challenges
associated with DevOps artifacts: (C1) the challenge of nested languages
(e.g., arbitrary shell code is embedded in various parts of the artifact), (C2)
the challenge of rule encoding and automated rule mining, and (C3) the challenge
of static rule enforcement. As a prerequisite to our analysis and
experimentation, we also collected approximately 900,000 GitHub repositories,
and from these repositories, captured approximately 219,000 Dockerfiles \added[id=J]{(of which 178,000 are unique)}.
Within this large corpus of Dockerfiles, we identified a subset written by Docker
experts: this \emph{Gold Set} is a collection of high-quality Dockerfiles that
our techniques use as an oracle for Docker best practices.\footnote{\added[id=J]{Data available at: \url{https://github.com/jjhenkel/binnacle-icse2020}}}
To address (C1), we introduced a novel technique for generating structured
representations of DevOps artifacts in the presence of nested languages, which
we call \emph{phased parsing}. By observing that there are a relatively small
number of commonly used command-line tools---and that each of these tools has
easily accessible documentation (via manual/help pages)---we were able to enrich
our DevOps ASTs and reduce the percentage of \emph{effectively uninterpretable}
leaves (defined in \Cref{Se:PhasedParsing}) in the ASTs by over 80\%.
\input{fig/overview}
For the challenge of rule encoding and rule mining (C2), we took a three-pronged
approach:
\begin{enumerate}
\item We introduced Tree Association Rules (TARs), and created a corpus of
\emph{Gold Rules} manually extracted from changes made to Dockerfiles by Docker
experts (\Cref{Se:TARs}).
\item \smallskip We built an automated rule miner based on frequent sub-tree mining (\Cref{Se:RuleMining}).
\item \smallskip We performed a study of the quality of the automatically mined rules
using the \emph{Gold Rules} as our ground-truth benchmark (\Cref{Se:ResultsRuleMining}).
\end{enumerate}
In seminal work by \citet{SidhuLackOfReuseTravisSaner2019}, they attempted to
learn rules to aid developers in creating DevOps artifacts, specifically
\textsc{Travis CI} files. They concluded that their ``vision of a tool that
provides suggestions to build CI specifications based on popular sequences of
phases and commands cannot be realized.'' In our work, we adopt their vision,
and show that it is indeed achievable. There is a simple explanation for why our
results differ from theirs. In our work, we use our phased parser to go two
levels deep in a hierarchy of nested languages, whereas Sidhu et al.\ only
considered one level of nested languages. Moreover, when we mine rules, we mine
them starting with the \emph{deepest} level of language nesting. Thus, our rules
are mined from the results of a layer of parsing that Sidhu et al.\ did not
perform, and they are mined \emph{only} from that layer.
Finally, to address (C3), the challenge of static rule enforcement, we
implemented a static enforcement engine that takes, as input, Tree
Association Rules (TARs). We find that
Dockerfiles on GitHub are nearly \added[id=J]{five times worse} (with respect to rule
violations) when compared to Dockerfiles written by experts, and that
Dockerfiles collected from industry sources are no better. This gap in quality
is precisely what the \texttt{binnacle} toolset seeks to address.
In summary, we make four core contributions:
\begin{enumerate}
\item A dataset of \added[id=J]{178,000 unique} Dockerfiles, processed by our phased parser,
harvested from \emph{every public GitHub repository with 10 or more stars},\footnote{We selected repositories created after
January 1st, 2007 and before June 1st, 2019.}
and a toolset, called \texttt{binnacle}, capable of ingesting and storing
DevOps artifacts.
\item \smallskip A technique for addressing the nested languages in DevOps artifacts
that we call \emph{phased parsing}.
\item \smallskip An automatic rule miner, based on frequent sub-tree mining, that
produces rules encoded as Tree Association Rules (TARs).
\item \smallskip A static rule-enforcement engine that takes, as input, a Dockerfile
and a set of TARs and produces a listing of rule violations.
\end{enumerate}
For the purpose of evaluation, we provide experimental results against
Dockerfiles, but, in general, the techniques we describe in this work are
applicable to any DevOps artifact with nested shell (e.g., \textsc{Travis CI} and
\textsc{Circle CI}). The only additional component that \texttt{binnacle}
requires to operate on a new artifact type is a top-level parser capable of
identifying any instances of embedded shell. Given such a top-level parser, the
rest of the \texttt{binnacle} toolset can be applied to learn rules and detect
violations.
Our aim is to provide help to developers in various activities. As such,
\texttt{binnacle}'s rule engine can be used to aid developers when
writing/modifying DevOps artifacts in an IDE, to inspect pull requests, or to
improve existing artifacts already checked in and in use.
\section{Overview}%
\label{Se:Overview}
The \texttt{binnacle} tool primarily operates in two distinct modes: in Mode I,
\texttt{binnacle} can be used to ingest arbitrary GitHub repositories and
extract a corpus of files from these repositories; in Mode II, \texttt{binnacle}
can be used to learn rules from a corpus of input files---following this, the
learned rules are fed to a static rule-enforcement engine to detect and catalog
violations. \Cref{Fi:Overview} depicts both of these modes of operation.
Mode I is the essential setup and data collection necessary to
drive the more interesting processing performed in Mode II. \Cref{Se:DataAcquisition}
describes Mode I (setup and collect) in greater detail.
The \texttt{binnacle} tool makes a distinction between Mode I and Mode II to allow
for other usages beyond the analysis we have done. Mode I is of broad use
to anyone looking to analyze vast swaths of GitHub data.
Mode II is the driver for rule learning and enforcement. Within Mode II there
are three key components: (i) phased parsing, (ii) rule mining,
and (iii) the rule-enforcement engine. Component (i), phased parsing, is a technique
we have developed to address the challenge of nested languages in DevOps artifacts.
This technique amounts to chaining various, increasingly specific, parsers together
to build an end-to-end parsing infrastructure capable of deriving rich ASTs from
input source files. This technique, although applied to Dockerfiles, is general in its
application to any DevOps artifacts with embedded shell. The only prerequisite is
the availability of a top-level parser. \Cref{Se:PhasedParsing} describes the
phased parsing technique in greater detail, and \Cref{Se:ResultsPhasedParsing}
presents a quantitative evaluation of \texttt{binnacle}'s phased parser.
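The chaining idea can be illustrated with a heavily simplified three-phase pipeline over dict-encoded trees. All parser logic and node-type names below are hypothetical stand-ins for \texttt{binnacle}'s actual components (a real pipeline would use a full Dockerfile grammar and a real shell parser):

```python
def parse_phase1(dockerfile_text):
    """Top-level parse: one node per instruction; payloads stay literal."""
    tree = {'type': 'DOCKER-FILE', 'children': []}
    for line in dockerfile_text.splitlines():
        if not line.strip():
            continue
        keyword, _, rest = line.partition(' ')
        tree['children'].append(
            {'type': 'DOCKER-' + keyword.upper(),
             'children': [{'type': 'LITERAL', 'value': rest, 'children': []}]})
    return tree

def parse_phase2(tree):
    """Split embedded shell in RUN nodes into individual commands."""
    for node in tree['children']:
        if node['type'] == 'DOCKER-RUN':
            literal = node['children'][0]
            node['children'] = [
                {'type': 'BASH-COMMAND', 'value': cmd.strip(), 'children': []}
                for cmd in literal['value'].split('&&')]
    return tree

def parse_phase3(tree, option_parsers):
    """Apply per-command option parsers (derived from man pages) to commands."""
    for node in tree['children']:
        for cmd in node.get('children', []):
            if cmd['type'] != 'BASH-COMMAND':
                continue
            argv = cmd['value'].split()
            parser = option_parsers.get(argv[0] if argv else '')
            if parser:
                cmd['children'] = parser(argv)
                cmd['type'] = 'SC-' + argv[0].upper()
    return tree
```

Commands with no registered option parser simply remain literal (effectively uninterpretable), which is exactly what the EU-leaf metric measures.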
Component (ii), rule mining, takes as input general tree-structured data and a
set of node types of interest. For each input node type, the rule miner finds
the set of sub-trees rooted at that node and applies a frequent sub-tree miner.
The outputs of the frequent sub-tree miner are then used to build simple rules.
\Cref{Se:TARs} presents further details on our rule encoding and \Cref{Se:RuleMining}
provides an in-depth description of our rule-mining technique. \Cref{Se:ResultsRules}
presents a comparative analysis of rule violations on the Gold Set and on a
random subset of Dockerfiles sampled from our Dockerfile corpus (see \Cref{Se:DataAcquisition}
for details on the methods used to collect our Dockerfiles corpus and identify
a Gold Set of high-quality Dockerfiles).
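As a simplified illustration of this pipeline (an assumption-laden sketch: a real frequent sub-tree miner also finds partial and embedded patterns, whereas this version only counts exact sub-tree occurrences, and all node types shown are hypothetical):

```python
from collections import Counter

def subtrees_rooted_at(tree, node_type):
    """Collect every sub-tree whose root node has the given type."""
    found, stack = [], [tree]
    while stack:
        node = stack.pop()
        if node["type"] == node_type:
            found.append(node)
        stack.extend(node.get("children", []))
    return found

def fingerprint(node):
    """Serialize a sub-tree so that identical shapes compare equal."""
    kids = ",".join(fingerprint(c) for c in node.get("children", []))
    return f'{node["type"]}({kids})'

def mine_rules(corpus, node_type, min_support=2):
    """Count sub-trees across the corpus; frequent ones become rule candidates."""
    counts = Counter()
    for tree in corpus:
        for sub in subtrees_rooted_at(tree, node_type):
            counts[fingerprint(sub)] += 1
    return [pattern for pattern, n in counts.items() if n >= min_support]

corpus = [
    {"type": "RUN", "children": [
        {"type": "apt-get-install", "children": [
            {"type": "no-install-recommends", "children": []}]}]},
    {"type": "RUN", "children": [
        {"type": "apt-get-install", "children": [
            {"type": "no-install-recommends", "children": []}]}]},
    {"type": "RUN", "children": [{"type": "pip-install", "children": []}]},
]
rules = mine_rules(corpus, "RUN", min_support=2)
```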
Lastly, component (iii), rule enforcement, takes a corpus of input trees
and a set of input rules, and reports all violations of the input rules. This
simple rule-based enforcement is only possible because of the phased parsing
performed earlier. Without rich representations, a simple rule-based analysis
would not yield sufficiently precise results. Furthermore, the simplicity
of the rule-enforcement engine allows for the rule miner to directly produce
rules (TARs), automatically, that can be fed into the rule-enforcement engine. \Cref{Se:Enforcement}
describes the details of our rule-enforcement engine, and the semantics of
the rules introduced in \Cref{Se:TARs} in greater detail.
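A minimal sketch of such rule-based enforcement (illustrative only; the actual TAR semantics are richer than this): encode a rule as an (antecedent, consequent) pair of node types and flag every antecedent node that lacks the consequent beneath it:

```python
def descendants(node):
    """Yield every node strictly below the given node."""
    stack = list(node.get("children", []))
    while stack:
        n = stack.pop()
        yield n
        stack.extend(n.get("children", []))

def violations(trees, rules):
    """Report (tree index, antecedent, consequent) whenever an antecedent
    node appears without the required consequent anywhere beneath it."""
    out = []
    for i, tree in enumerate(trees):
        for node in [tree, *descendants(tree)]:
            for antecedent, consequent in rules:
                if node["type"] == antecedent and all(
                        d["type"] != consequent for d in descendants(node)):
                    out.append((i, antecedent, consequent))
    return out

trees = [
    {"type": "RUN", "children": [
        {"type": "apt-get-install", "children": [
            {"type": "no-install-recommends", "children": []}]}]},
    {"type": "RUN", "children": [
        {"type": "apt-get-install", "children": []}]},
]
rules = [("apt-get-install", "no-install-recommends")]
found = violations(trees, rules)
```

Because mined rules and the enforcement engine share this tree-pattern vocabulary, mined rules can be fed to the engine without manual translation, which is the point made above.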
\section{Related Work}%
\label{Se:RelatedWork}
Our paper is most closely related to the work of Sidhu \latinphrase{et~al.}\xspace~\cite{SidhuLackOfReuseTravisSaner2019},
who explored reuse in CI specifications in the specific context of
\textsc{Travis CI}, and concluded that there was not enough reuse to develop a ``tool that provides suggestions to build
CI specifications based on popular sequences of phases and commands.''
We differ in the DevOps artifact targeted (Dockerfiles versus \textsc{Travis CI} files),
representation of the configuration file,
and the rule-mining approach.
In a related piece of work, Gallaba and McIntosh~\cite{GallabaUseAndMisuseTSE2018} analyzed the use of \textsc{Travis CI} across nearly 10,000 repositories in GitHub, and identified best practices based on documentation, linting tools, blog posts, and stack-overflow questions.
They used their list of best practices to deduce four anti-patterns, and developed \textsc{Hansel}, a tool to identify anti-patterns in \textsc{Travis CI} config files, and \textsc{Gretel}, a tool to automatically correct them. Similar to our second phase of parsing, they used a bash parser (\textsc{BashLex}) to gain a partial understanding of the shell code in config files.
Zhang \latinphrase{et~al.}\xspace~\cite{zhang2018insight} examined the impact of changes to Dockerfiles on build time and quality issues (via the Docker linting tool Hadolint).
They found that fewer and larger Docker layers results in lower latency and fewer quality issues in general, and that the architecture and trajectory of Docker files (how the size of the file changes over time) impact both latency and quality.
Many of the rules in our Gold Set, and those learned by \texttt{binnacle}, would result in lower latency and smaller images if the rules were followed.
Xu \latinphrase{et~al.}\xspace~\cite{XuDockerfileTFSmellCOMPSAC2019} described a specific kind of problem in Docker image creation that they call the ``Temporary File Smell.'' Temporary files are often created but not deleted in Docker images. They present two approaches for identifying such temporary files.
In this paper, we also observed that removing temporary files is a best-practice employed by Dockerfile experts and both our manual Gold Set
and our learned rules contained rules that address this.
Zhang \latinphrase{et~al.}\xspace~\cite{zhang2018cd} explored the different methods of continuous deployment (CD) that use containerized deployment.
While they found that developers see many benefits when using CD, adopting CD also poses many challenges.
One common way of addressing them is through containerization, typically using Docker.
Their findings also reinforce the need for developer assistance for DevOps: they concluded that ``Bad experiences or frustration with a specific CI tool can turn developers away from CI as a practice.''
\added[id=J]{Our work falls under the broader umbrella of ``infrastructure-as-code''.
This area has received increasing attention recently~\cite{DBLP:journals/infsof/RahmanMW19}.
As examples, Sharma \latinphrase{et~al.}\xspace{} examined quality issues, so-called \emph{smells}, in software-configuration
files~\cite{sharma2016does}, and Jiang \latinphrase{et~al.}\xspace{} examined the
coupling between infrastructure-as-code files and ``traditional'' source-code files.}
There have been a number of studies that mine Docker artifacts as we do.
Xu and Marinov~\cite{XuMiningContainerICSE2018} mined container-image repositories such as DockerHub, and discussed the challenges and opportunities that arise from such mining.
Zerouali \latinphrase{et~al.}\xspace~\cite{ZeroualiOutdatedContainersSANER2019} studied vulnerabilities in Docker images based on the versions of packages installed in them.
Guidotti~\latinphrase{et~al.}\xspace~\cite{GuidottiExplainingDockerSTAF2018} attempted to use Docker-image metadata to determine if certain combinations of image attributes led to increased popularity in terms of stars and pulls.
Cito~\latinphrase{et~al.}\xspace~\cite{CitoEmpiricalDockerEcoMSR2017} conducted an empirical study of the Docker ecosystem on GitHub by mining over 70,000 Docker files, and examining how they evolve, the types of quality issues that arise in them, and problems when building them.
A number of tools related to Dockerfiles have been developed in recent years as well.
Brogi \latinphrase{et~al.}\xspace~\cite{BrogiDockerFinderIC2E2017} found that searching for Docker images is currently a difficult problem and limited to simple keyword matching.
They developed \textsc{DockerFinder}, a tool that allows multi-attribute search, including attributes such as image size, software distribution, or popularity.
Yin \latinphrase{et~al.}\xspace~\cite{YinDockerRepoTaggingAPSEC2018} posited that tag support in Docker repositories would improve reusability of Docker images by mitigating the discovery problem.
They addressed this issue by building STAR, a tool that uses latent Dirichlet allocation to automatically recommend tags.
Docker files may need to be updated when the requirements of the build environment or execution environment changes.
Hassan \latinphrase{et~al.}\xspace~\cite{HassanDockerfileUpdatesASE2018} developed \textsc{Rudsea}, a tool that can recommend updates to Dockerfiles based on analyzing changes in assumptions about the software environment and identifying their impacts.
To tackle the challenge of creating the right execution environment for python code snippets (\latinphrase{e.g.}\xspace, from Gists or StackOverflow) Horton and Parnin~\cite{HortonDockerizeMeICSE2019} developed \textsc{DockerizeMe}, a tool which infers python package dependencies and automatically generates a Dockerfile that will build an execution environment for pieces of python code.
\section{Threats to Validity}%
\label{Se:Threats}
We created tools and techniques that are general in their ability to operate
over DevOps artifacts with embedded shell, but we focused our evaluation on
Dockerfiles. It is possible that our findings do not translate directly to other
classes of DevOps artifacts. We ingested a large amount of data for analysis,
and, as part of that process, we used very permissive filtering. It is possible
that our corpus of Dockerfiles contains files that are not Dockerfiles,
duplicates, or other forms of noise. It is also possible that there are bugs in
the infrastructure used to collect repositories and Dockerfiles. To mitigate
these risks we kept a log of the data we collected, and verified some coarse
statistics through other sources (e.g., we used GitHub's API to download data
and then cross-checked our on-disk data against GitHub's public infrastructure
for web-based search). Through these cross-checks we were able to verify that,
for the over 900,000 repositories we ingested, greater than 99\% completed the
ingestion process successfully. \added[id=J]{Furthermore, of the approximately 240,000 likely Dockerfiles
we identified, 91\% (219,000) made it through downloading, parsing, and validation. Of this
set of files, approximately 81\% (178,000) were unique based on their SHA1 hash.
Of the files rejected during processing (downloading, parsing, and validation), most were either
malformed Dockerfiles or files with names matching our \texttt{.*dockerfile.*} filter
that were not actual Dockerfiles (e.g., \texttt{docker-compose.yml} files).}
We identified a Gold Set of Dockerfiles and used these files as the ideal
standard for the Dockerfiles in our larger corpus. It is possible that
developers do not want to achieve the same level of quality as the files in our
Gold Set. It is also possible that the Gold Set is too small and too specific to
be of real value. It is conceivable, but unlikely, that the Gold Set is not
representative of good practice. Even if that were the case, our finding still
holds that there is a significant difference between certain characteristics of
Dockerfiles written by (presumed) Docker experts and those written by
garden-variety GitHub users. \added[id=J]{We acknowledge that the average violation rate of
our Gold Rules is only a proxy for quality---but, given the data and tools currently
available, it is a reasonable and, crucially, measurable choice of metric.} For rule mining, we manually created a set of
Gold Rules against which we benchmarked our automated mining. Because the results
of automated mining did not agree with three of the manually extracted rules, there
is evidence that the manual process did have some bias. We sought to mitigate this
issue through the use of quantitative filtering; after filtering, we retained
only 65\% of our original Gold Rules.
\section{Introduction}
Structure formation involves intriguing aspects of non-equilibrium statistical physics. Complexity arises from the fact that, in contrast to equilibrium thermodynamics, it is not sufficient to determine only the lowest free-energy state, although the equilibrium scenario is expected as a limiting case.
A typical example for a non-equilibrium system is thin film growth, where a large variety of morphologies can be observed, which cannot be explained solely by equilibrium considerations. Instead, kinetic effects play a strong role in the formation of microscopic and mesoscopic structures. A key observable to characterize the growth behavior is the surface roughness ($\sigma$, standard deviation of the film thickness), which is also of substantial technological importance \cite{Barabasi_1995_book,Pimpinelli_1998_book,TMichely_2004_book,Krug_1997_AdvPhys,Lu_2001_book}.
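Concretely, for a discretized height map the roughness is the root-mean-square deviation of the local film thickness; a minimal illustration (the numbers are arbitrary, not data from this work):

```python
import math

def roughness(heights):
    """sigma = standard deviation of the film thickness over all sites."""
    flat = [h for row in heights for h in row]
    mean = sum(flat) / len(flat)
    return math.sqrt(sum((h - mean) ** 2 for h in flat) / len(flat))

# a flat film with a single protruding island site
sigma = roughness([[2, 2, 2],
                   [2, 4, 2],
                   [2, 2, 2]])
```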
The growth of molecular thin films was studied extensively both theoretically \cite{Pimpinelli_2014_JPhysChemLett,Hlawacek_2008_Science,Hlawacek_2013_JPhysCondensMatter} and experimentally \cite{Hinderhofer_2010_EurophysLett,Yim_2009_ApplPhysLett,Yang_2015_SciRep,Zhang_2010_JPhysChemC,Nahm_2017_JPhysChemC,Hinderhofer_2012_ApplPhysLett}.
A typical feature of crystalline molecular films is their comparably fast roughening, often expressed as a high roughening exponent \cite{Durr_2003_PhysRevLett,Zhang_2010_JPhysChemC,Storzer_2017_JPhysChemC}. The complex roughening behavior is explained predominantly by kinetic effects based on high step edge barriers \cite{Fendrich_2007_PhysRevB,Roscioni_2018_JPhysChemLett,Goose_2010_PhysRevB,Ehrlich_1966_JChemPhys,Schwoebel_1966_JApplPhys}, thickness-dependent strain release \cite{Durr_2003_PhysRevLett} or a restricted diffusion length due to defects or grain boundaries \cite{Winkler_2016_}.
A still more challenging class of complex materials is that of binary molecular systems, which are also important due to their electronic properties. Molecular mixed thin films are studied both with small mixing ratios ($\approx$ 1:100) \cite{Conrad_2008_PhysRevB,Kleemann_2012_OrgElectron,Schwarze_2016_Science} for doping as well as with large mixing ratios ($\approx$ 1:1) for bulk heterojunctions and molecular complex formation \cite{Zhang_2017_AccChemRes,Hinderhofer_2012_ChemPhysChem,Aufderheide_2012_PhysRevLett,Dieterle_2015_JPhysChemC,Dieterle_2017_PhysStatusSolidiRRL,Belova_2017_JAmChemSoc,Reinhardt_2012_JPhysChemC,Broch_2013_JChemPhys,Hinderhofer_2011_JChemPhys}.
Dependent on the effective interactions of the compounds, binary systems exhibit several different mixing behaviors in the bulk, such as solid solution, co-crystallization or phase separation \cite{Kitaigorodsky_1984_book,Hinderhofer_2012_ChemPhysChem}.
The structure and morphology resulting from the growth, including the distribution of the two components A and B, strongly impact the effective electronic and optical properties and thus ultimately device performance.
From the growth perspective, the relationship between mixing behavior in equilibrium and kinetically determined surface roughness is of significant fundamental interest.
Here, we demonstrate that the mixing ratio and bulk phase behavior correlate strongly with kinetically limited growth effects and specifically with the roughness evolution.
We provide a comprehensive study of a broad range of blends with different electronic and steric characteristics.
Supported by kinetic Monte Carlo (KMC) simulations we identify three main effects:
1) A general smoothing effect for mixed films is induced by a lowered step edge barrier compared to the pure films.
2) Mixtures forming a co-crystal exhibit a local roughness maximum at 1:1 mixing ratio, because pure phase systems exhibit an increased step edge barrier compared to random mixtures.
3) Strongly phase separating mixtures exhibit increased roughness due to 3D island growth on a larger lateral scale.
\begin{figure}
\begin{center}
\includegraphics[width=4cm]{./fig_energyparameters.png}
\caption{Energy parameters in the binary lattice gas model.}
\label{fig:energetics}
\end{center}
\end{figure}
\begin{figure*} [ht]
\begin{center}
\includegraphics [width=15.5cm] {./fig-rough.png}
\caption{Roughness $\sigma$ of mixed films (20\,nm thickness) with rod-like compounds (a-f) dependent on mixing ratio: a) PEN:DIP\cite{Aufderheide_2012_PhysRevLett} b) PEN:PIC\cite{Dieterle_2015_JPhysChemC,Dieterle_2017_PhysStatusSolidiRRL} c) PFP:PIC\cite{Dieterle_2015_JPhysChemC} d) DIP:PDIR-CN$_2$\cite{Belova_2017_JAmChemSoc} e) DIP:PFP\cite{Reinhardt_2012_JPhysChemC,Broch_2013_JChemPhys} f) PEN:PFP\cite{Hinderhofer_2011_JChemPhys}. From a)-f) the in-plane co-crystallization is increasing, i.e.\ PEN:DIP are nearly randomly intermixing, whereas PEN:PFP exhibit well ordered co-crystallization and b)-e) are intermediate cases. (g-i) shows the roughness $\sigma$ of phase-separating compounds: g) DIP:C60 h) 6T:C60 i) PEN:C60. The degree of phase separation is increasing from g) to i). Local roughness maxima at ratio 0.5 are marked by vertical lines. All $\sigma$ values were determined by XRR, except those for pure PIC and mixtures of PEN:C60 and 6T:C60, which were determined by AFM. Simulated roughness values are shown as dotted lines. Simulation parameters are listed in supplementary Table 3.}
\label{lab:fig:rough}
\end{center}
\end{figure*}
The experiments are performed with a rich variety of molecular species, differing in shape and interaction anisotropy.
In order to identify the generic behavior, in our simulation approach we model the two--component systems generically with a simple binary lattice gas
(species A and B). The static parameters governing the equilibrium phase behavior are given by
nearest-neighbor interaction energies $\epsilon_{ij}$ $(i,j=\{\text{A,B}\})$ and substrate interaction energies
$\epsilon_\text{sub}^\text{A}$ and $\epsilon_\text{sub}^\text{B}$ for particles in the first layer (see Fig.~\ref{fig:energetics}).
Film growth is modeled with KMC simulations with solid--on--solid restrictions (see Sec.~\ref{sec:sim_model}), and the associated dynamic parameters are: (i) free diffusion constants
$D_\text{A[B]}$ for a particle of species A[B] which is not laterally bound, (ii) a deposition rate $F$ (particles per unit of time
and lattice site) and (iii) species--dependent Ehrlich--Schwoebel barriers $E^\text{ES}_{ij}$.
(An important dimensionless ratio determining the degree of nonequilibrium is given by $\Gamma=D_\text{A}/F$.)
All energetic parameters are given in units of the thermal energy.
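A common bond-counting form for the rates in such solid--on--solid KMC models is an Arrhenius expression in which breaking lateral bonds and crossing a descending step edge add to the activation barrier (the precise rate catalogue of our model is given in Sec.~\ref{sec:sim_model}; the sketch below, including the specific functional form, is illustrative):

```python
import math

def hop_rate(D, n_lateral_bonds, eps, descends_step=False, E_ES=0.0):
    """Illustrative Arrhenius-type rate for a lateral hop.

    D               -- free diffusion constant of the (unbound) species
    n_lateral_bonds -- occupied in-plane neighbours before the hop
    eps             -- nearest-neighbour bond energy (negative = attractive),
                       in units of the thermal energy, as in the text
    descends_step   -- True if the move crosses a descending step edge
    E_ES            -- Ehrlich--Schwoebel barrier (units of kT)
    """
    barrier = -n_lateral_bonds * eps      # breaking attractive bonds costs energy
    if descends_step:
        barrier += E_ES                   # extra step-edge barrier
    return D * math.exp(-barrier)

# a particle with two lateral bonds (eps = -1) descending a step with E_ES = 3
r = hop_rate(D=1e3, n_lateral_bonds=2, eps=-1.0, descends_step=True, E_ES=3.0)
```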
We stress that we use the simple solid--on--solid model mainly as a conceptual tool. The model is clearly not material--specific, nor is it intended to faithfully represent the microscopic molecular moves and their associated rates. We also work with lower interaction energies and higher deposition rates than in the experiment. However, the simplicity of the model allows us to uncover and quantify the three generic effects described above, and to exclude other possible sources of the observed roughness behavior. As detailed molecular simulations of binary growth systems with realistic parameters are currently out of reach, this strategy appears to be most appropriate for elucidating the universal patterns that we see.
In the following we will first (Secs.~\ref{sec:rough_intermixing}--\ref{sec:cocrystals}) discuss the roughness evolution of
binary mixtures of rod--like compounds
which are not phase--separating but may show the formation of co--crystals.
In the second part (Sec.~\ref{sec:phaseseparation}) we will discuss the phase separating rod/sphere--shaped mixed films.
\section{Results}
\subsection{Roughness of intermixing thin films}
\label{sec:rough_intermixing}
In pure thin films, most of the studied compounds, i.e.\ pentacene (PEN), perfluoropentacene (PFP), diindenoperylene (DIP), sexithiophene (6T), and N,N'-bis(2-ethyl-hexyl)-1,7-dicyanoperylene-3,4/9,10-bis(dicarboxyimide) (PDIR-CN$_2$, where $R = C_8H_{17}$, branched) \cite{Belova_2017_JAmChemSoc} exhibit so-called layer-plus-island growth (Stranski-Krastanov, SK) on SiO$_2$ under the conditions employed here. In addition, picene (PIC) and fullerene (C60) typically exhibit island growth without a wetting layer (Volmer-Weber, VW). The growth conditions of the single component films and the studied mixed films are summarized in supplementary Table 2.
Fig.\,\ref{lab:fig:rough}a-f shows the roughness of six different types of molecular mixed films dependent on mixing ratio at a thickness of 20\,nm. All of the studied binary blends mix on the molecular level, but they can be distinguished by their tendency to form a co-crystal \cite{Aufderheide_2012_PhysRevLett,Dieterle_2015_JPhysChemC,Dieterle_2017_PhysStatusSolidiRRL,Reinhardt_2012_JPhysChemC,Broch_2013_JChemPhys,Hinderhofer_2011_JChemPhys}.
The roughness of all pure materials is in general relatively large. A systematic trend in all films is that, upon mixing, the roughness is strongly decreased.
By comparing the roughness dependence of the various mixed systems, we can observe some significant differences. Mixtures with strong co-crystallization, i.e.\ the formation of an equimolar ordered co-crystal, exhibit a local roughness maximum at a 1:1 ratio (Fig.\,\ref{lab:fig:rough}e-f). In contrast, solid solutions do not show this local maximum (Fig.\,\ref{lab:fig:rough}a-b), which will be discussed in detail further below.
We find the strongest smoothing effect in absolute terms for mixtures with PIC (Fig.\,\ref{lab:fig:rough}b-c). Pure PIC exhibits strong island growth (VW) on SiO$_2$ substrates \cite{Hosokai_2012_ChemPhysLett,Kurihara_2013_MolCrystLiqCryst,Hosokai_2015_JPhysChemC}. Upon mixing, the growth mode is apparently changed to SK mode. This effect is observable for PEN:PIC and PFP:PIC blends even for very low PEN or PFP concentrations.
The smoothing in mixtures is observed not only for mixtures with a compound showing strong islanding (SK : VW) but also for mixtures where both compounds exhibit SK growth mode such as PEN, PFP and DIP.
In order to better understand the overall picture, we discuss possible smoothing mechanisms and compare them to experimental results and theoretical simulations.
\subsection{Nucleation density}
\label{sec:nucleationdensity}
As a possible smoothing mechanism we consider first an increased nucleation density in the mixtures, which would yield a lower roughness via a simple geometric argument (supplementary Fig.\,1)~\cite{Kotrla_2001_SurfSci}.
The increase in nucleation density would lead in turn to a reduced in-plane correlation length $\xi$.
However, judging from the in-plane correlation length extracted from AFM data (supplementary Fig.\,1) we cannot identify a clear dependence between in-plane correlation length and roughness. Therefore we rule out this smoothing mechanism as being generally operational.
\begin{figure}[htb]
\centerline{\includegraphics[width=7cm]{variationEab_1_neu.png}}
\caption{Roughness in monolayers (ML) as a function of concentration of the species A after deposition of 15 ML.
Combination of two species with 3D growth, variation of $\epsilon_\text{AB}$.
Other parameters: $\Gamma=10^3$, $E^\text{ES}=3.0$, system size $300^2$, $E^\text{ES}_{ij}=E^\text{ES}$.}
\label{fig:eab_1}
\end{figure}
\subsection{Interaction strength and diffusion rate}
\label{sec:interactions_diffusion}
Two additional possible smoothing mechanisms may be that an increase in diffusion rate or a different interspecies interaction strength in the blend affects the roughness.
On the one hand, a higher diffusion rate in the mixed film compared to the pure film would increase the hopping rate over step edges and lead potentially to smoother films. On the other hand, an increased interspecies interaction in the mixed films may reduce the roughness, because smoother films are energetically favored.
Experimentally, it is difficult to vary these two types of parameters systematically and independently.
This can be done in our generic model. To test the impact of interspecies interaction strength we set
$D=D_\text{A}=D_\text{B}$ (Fig.~\ref{fig:eab_1}). We combine two species which show 3D growth (similar to PEN and DIP) and vary the interspecies energy from mixing conditions ($\epsilon_\text{AB}=-3.5$) to strong demixing ($\epsilon_\text{AB}=0.5$). Only in the demixing case is there a noticeable increase of the roughness
(this is important for the rod--sphere mixtures discussed in Sec.~\ref{sec:phaseseparation}),
otherwise it is insensitive. A simulation example for the combination of a 3D-growing material with an LBL-growing material is shown in supplementary Fig.\,4.
To test the effect of the diffusion rate we choose energetic parameters which correspond to a well--mixing system (Fig.\,\ref{fig:db_1})
and increase the ratio of diffusion constants $D_\text{B}/D_\text{A}$.
The resulting roughness shows an approximately monotonic, though not linear, variation with concentration. Again, the decrease
of roughness upon addition of a second species found in the experiment is not reproduced.
Nevertheless, $D_\text{B} \neq D_\text{A}$ implies that there could be different time scales for the development of 3D growth or the formation of
islands. Thus one can expect fine--tuning effects as exemplified in the parameter sets for the picene mixtures where the
diffusion constant for picene was chosen such that it shows very strong island formation (supplementary Table 3).
We conclude that in our growth regime the roughness is insensitive to both diffusion rate and interaction strength (for mixing systems), and that these parameters cannot explain the drastic roughness decrease we observe experimentally. Therefore, another mechanism must play a role, which is addressed below.
\begin{figure}[bt]
\centerline{\includegraphics[width=7cm]{variationDb_1_neu.png}}
\caption{Roughness in ML as a function of concentration of the species A after deposition of 15 ML.
Combination of one species with 3D growth with another which varies from 3D growth to LBL upon variation of $D_\text{B}$.
The diffusion constant for the first species is set to $D_\text{A}/F=10^3$, corresponding to 3D growth. The diffusion constant for the
second species is varied between $D_\text{B}/F=10^3$ (3D growth) and $10^5$ (LBL growth).
Other parameters: $E^\text{ES}=3.0$, system size $300^2$, $E^\text{ES}_{ij}=E^\text{ES}$.}
\label{fig:db_1}
\end{figure}
\subsection{Step edge barrier}
\label{sec:stepedge}
Finally another possible reason for the reduced roughness might be a modified step edge barrier.
For pure materials the step edge barrier is often significant and can lead to fast roughening \cite{Hlawacek_2013_JPhysCondensMatter,Yim_2007_JPhysChemC,Fendrich_2007_PhysRevB,Goose_2010_PhysRevB,Roscioni_2018_JPhysChemLett}.
The variation of the interspecies step edge barrier ($E^\text{ES}_{ij}$) in the simulations leads to the generic roughness effect seen in the experiment, i.e. the reduction of roughness upon mixing (Fig.~\ref{fig:es_1}). We have seen that for energetic conditions suitable for mixing there is no strong variation of the roughness with the interspecies energy. In this case, the condition $E^\text{ES}_{ij} < E^\text{ES}_{ii}$ is the only possible cause of roughness reduction. We conclude that for a reduced and species dependent step edge barrier our KMC simulations are in excellent agreement with experimental data for the rod-rod mixed systems without co-crystallization (Fig.\,\ref{lab:fig:rough}a-d).
\begin{figure}[tbh]
\centerline{\includegraphics[width=7cm]{variationES_1_neu.png}}
\caption{Roughness in ML as a function of concentration of the species A after deposition of 15 ML.
Combination of two species with 3D growth. The ES barrier $E^\text{ES}_\text{AB}=E^\text{ES}_\text{BA}$ is varied from 0.0 to 3.0, i.e. interlayer hops of one species on top (or from) a particle from the other species are more likely.
Other parameters: $\Gamma=10^3$, $D=D_\text{A}=D_\text{B}$, $E^\text{ES}_{ii}=3.0$, system size $300^2$.}
\label{fig:es_1}
\end{figure}
\begin{figure*} [bth]
\begin{center}
\includegraphics [width=14cm] {./fig-cryst.png}
\caption{Coherent island size $d_{coh}$ with rod-like compounds dependent on mixing ratio: a) PEN:DIP\cite{Aufderheide_2012_PhysRevLett} b) PEN:PIC\cite{Dieterle_2015_JPhysChemC,Dieterle_2017_PhysStatusSolidiRRL} c) PFP:PIC\cite{Dieterle_2015_JPhysChemC} d) DIP:PDIR\cite{Belova_2017_JAmChemSoc} e) DIP:PFP\cite{Reinhardt_2012_JPhysChemC,Broch_2013_JChemPhys} f) PEN:PFP\cite{Hinderhofer_2011_JChemPhys}. From a)-f) the in-plane co-crystallization is increasing, i.e.\ PEN:DIP (a) are nearly randomly intermixing, whereas PEN:PFP (f) exhibits well ordered co-crystallization and b)-e) are intermediate cases.}
\label{lab:fig:rough2}
\end{center}
\end{figure*}
To rationalize the lowered step edge barrier in the blends, we recall that the step edge barrier should be viewed as an effective quantity
that arises through a weighted average over different step conformations \cite{TMichely_2004_book}. In particular, molecular thin films have a distribution of
different step edge barriers, dependent on the crystal orientation and the trajectory of the diffusing molecule over the step edge \cite{Hlawacek_2013_JPhysCondensMatter}.
For a well-defined pure crystalline domain the step edge barrier is relatively high.
When we randomly introduce guest molecules into a molecular crystal (and also at the domain boundaries), the
number of possible step conformations will increase, since guest molecules do not fit exactly into the lattice of the host and
distort it. Therefore, it is natural to assume that with an increasing amount of guest molecules the distribution of step edge barriers
broadens and some barriers are lower compared to the pure crystal.
This effect should be strongest for large mixing ratios ($\approx$ 1:1). Since the diffusion lengths of organic molecules are relatively large \cite{Heringdorf_2001_NatureLondonUK}, the introduction of only a few low potential barriers has a strong impact on the molecular downhill transport. Thus, the film roughening will be reduced.
It should be noted that this scenario is distinct from the well-known effect of surfactant molecules in metal homoepitaxy, which segregate at step edges and systematically modify the barrier for descending atoms~\cite{Esch_1994_PhysRevLett}. Here, we do not expect the step edge barriers in the mixed films to be lower on average. Rather,
the broadening of the distribution of barriers induced by the molecular disorder opens pathways for facile descent that are preferentially used by thermal activation. Since the solid-on-solid model does not account for the orientational degrees of freedom of the molecules, in our simulations this effect is nevertheless represented by an overall reduction of the barrier.
As a measure for the increased defect density we use the coherently scattering island size $d_{coh}$ of the blends (Fig.\,\ref{lab:fig:rough2}) which is derived from the FWHM of in-plane Bragg reflections. We observe that indeed $d_{coh}$ decreases in the blends correlated to the decreasing roughness. For example, for statistically intermixing compounds such as PEN:DIP (Fig.\,\ref{lab:fig:rough}a), we find a minimum of $d_{coh}$ near the 1:1 ratio (Fig.\,\ref{lab:fig:rough2}a), which is consistent with the above explanation of the generation of smaller step edge barriers by guest molecules.
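For orientation, $d_{coh}$ is commonly estimated from the peak width via a Scherrer-type relation, $d_{coh}\approx 2\pi K/\Delta q$, with the FWHM $\Delta q$ in reciprocal-space units; the shape factor $K\approx 0.9$ in the sketch below is a conventional assumption, not a value from our analysis:

```python
import math

def coherent_size(fwhm_q, K=0.9):
    """Scherrer-type estimate of the coherently scattering island size
    from the FWHM (in 1/nm) of an in-plane Bragg reflection."""
    return 2.0 * math.pi * K / fwhm_q

# a broad reflection (large FWHM) implies a small coherent island
d_broad = coherent_size(0.5)    # nm
d_sharp = coherent_size(0.1)    # nm
```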
\subsection{Impact of co--crystal formation}
\label{sec:cocrystals}
The observation of a local maximum in the roughness can again be discussed in terms of the in-plane crystallinity (Fig.\,\ref{lab:fig:rough2}).
For PEN:PFP and DIP:PFP, we find a strong tendency towards the formation of a 1:1 co-crystal with a relatively large $d_{coh}>10$\,nm. Excess molecules of either compound phase separate in pure domains \cite{Hinderhofer_2011_JChemPhys,Breuer_2013_JChemPhys,Reinhardt_2012_JPhysChemC,Broch_2013_JChemPhys}.
Due to this growth behavior, the crystallinity at 1:1 ratio is increased in these mixtures in comparison to statistically mixed compounds. Then, the effect of low potential step edge barriers introduced by guest molecules is weaker. The observation of increased roughness with higher crystallinity supports the assumption that low step edge barriers are the main smoothing mechanism in organic mixed films of two rod-like molecules.
For the simulations strong mixing conditions are characterized by $\epsilon_\text{AB} \ll (\epsilon_\text{AA}+\epsilon_\text{BB})/2$. In that limit the lattice model shows a stable checkerboard phase which is similar to the 1:1 co--crystal formed in the PFP mixtures.
There is one important difference. Experimental PFP mixtures not at equal (1:1) concentrations show phase separation into
a pure component and the 1:1 co--crystal. The lattice model does not show a similar phase separation; rather, the checkerboard
structure is randomly mixed in the system.
Since the co--crystallization is the most important difference of the PFP mixtures with PEN and DIP compared to the other
mixing blends with PEN and/or DIP, we take the PEN:DIP parameters (including unequal ES barriers) but decrease $\epsilon_\text{AB}$ substantially.
The result is shown in Fig.\,\ref{fig:eab_4}. Overall, there is no substantial change to the PEN:DIP curve, but curiously, for the
lowest $\epsilon_\text{AB}$ a small hump forms at $c_\text{A}=0.5$. The effect seems to be genuine and also persists for a choice of less unequal ES barriers; nevertheless, it is too small compared to the experimentally observed effect.
Therefore, the PFP:DIP and PFP:PEN mixtures should rather be considered as weakly phase separating mixtures of the pure compound with the respective 1:1 co-crystal, and the roughness behavior of these mixtures is more similar to those of weakly phase separating mixtures like DIP:C$_{60}$ described below.
\begin{figure}[bt]
\centerline{\includegraphics[width=7cm]{variationEab_4_strongmix_neu.png}}
\caption{Roughness in ML as a function of concentration of the species A after deposition of 15 ML.
Combination of two species with 3D growth, variation of $\epsilon_\text{AB}$ in the strong mixing regime.
Other parameters: $\Gamma=10^3$, $E^\text{ES}_{ii}=3.0$, $E^\text{ES}_{ij}=0.0$ ($i\neq j$), system size $300^2$.}
\label{fig:eab_4}
\end{figure}
\subsection{Sphere-like with rod-like compound: Phase separation}
\label{sec:phaseseparation}
Qualitatively different from the intermixing rod/rod blends discussed above are rod/sphere blends, such as mixtures realized with C$_{60}$ and a rod-shaped compound. Due to geometrical constraints, rod/sphere blends typically phase separate in thermal equilibrium, but kinetic effects leave them partially intermixed in thin films \cite{Banerjee_2013_PhysRevLett}.
Fig.\,\ref{lab:fig:rough}g-i) shows the roughness of mixed films of C60 with three different rod-shaped compounds (DIP, 6T, PEN).
We observe that for a small amount of guest molecules the roughness is decreased, which can be explained consistently by a reduced in-plane crystallinity.
For a phase separating system, the roughness depends on the domain size and therefore the degree of phase separation, which is related to the interspecies interaction energy \cite{Kotrla_1998_PhysRevB}.
From KMC simulations we conclude that the smoothing effect induced by a low step edge barrier (Fig.\,\ref{fig:es_1}) and the roughening effect induced by unfavorable interspecies energies (Fig.\,\ref{fig:eab_1}, supplementary Fig.\,4), i.e.\ phase separation, are competing effects. Both effects can be active to different degrees depending on the mixing ratio.
The three mixtures studied (Fig.\,\ref{lab:fig:rough}g-i) exhibit nano phase separation into pure domains dependent on the growth conditions \cite{Salzmann_2008_JApplPhys,Banerjee_2013_PhysRevLett,Lorch_2015_ApplPhysLett,Lorch_2016_JApplCrystallogr}.
For DIP:C60 and 6T:C60 at mixing ratios deviating from 1:1, we observe smaller roughness values compared to the pure compounds, induced by larger disorder and therefore smaller step edges as described above. For DIP:C60 the driving force for phase separation (dependent on the interspecies energy) is weak, resulting in a small $d_{coh}$ (supplementary Fig.\,2).
In contrast, for PEN:C60, apparently the interspecies energies are strongly unfavorable for mixing, leading to a large $d_{coh}$ and the largest roughness of the three systems studied.
These observations are also consistent with KMC simulations in which the ES barrier is varied (Fig.\,\ref{fig:es_1}). For energetic conditions suitable for demixing, we have seen that the roughness of blended films is higher, and the degree of the roughness increase depends on the propensity for phase separation (i.e.\ the value of $\epsilon_\text{AB} - (\epsilon_\text{AA}+\epsilon_\text{BB})/2$) but also on the single-species growth mode. We therefore have two competing mechanisms influencing the final roughness: for weakly phase-separating systems the ES effect can dominate, whereas for strongly phase-separating systems it can be the other way around. This is also seen in the experiments.
\begin{figure*} [bt]
\begin{center}
\includegraphics [width=17cm] {./AFM.png}
\caption{AFM images of a) PEN, e) C$_{60}$ and three PEN:C$_{60}$ mixtures b-d) with different mixing ratios. Sketches below each AFM image show typical line scans. Colors illustrate the domain compositions: light blue (pure PEN), dark blue (nano phase separated mixture), black (C60).}
\label{lab:fig:AFM}
\end{center}
\end{figure*}
For 1:1 mixing ratios we find a local roughness maximum for all three mixed systems. For DIP:C60 it was shown that at 1:1 mixing ratios the films exhibit two types of domains: a nano phase separated wetting layer and pure domains of DIP \cite{Banerjee_2018_JPhysChemC}. Similarly, in AFM data of PEN:C60 (Fig.\,\ref{lab:fig:AFM}) we also observe a pronounced 3D growth of pure PEN domains near the 1:1 mixing ratio in combination with a mixed wetting layer \cite{Salzmann_2008_JApplPhys}. The strong 3D growth of the pure domains in these films is the main cause of the local roughness increase. Since the lateral separation between these domains is on the order of $\approx 1000$ lattice sites, this effect cannot be completely captured by our KMC simulations.
However, we find that the increase in roughness at the 1:1 ratio scales overall with the driving force for phase separation (weak for DIP:C60, strong for PEN:C60) and presumably depends mainly on the interspecies interaction energies.
We conclude that the roughening and smoothing mechanisms are qualitatively the same for all three mixtures studied, but quantitatively of course dependent on material properties.
\section{Discussion}
We presented an extensive and systematic study of the roughness evolution of different organic mixed thin films. We distinguished the roughening mechanisms in intermixing rod/rod blends and in phase-separating rod/sphere blends. KMC simulations revealed that a species-dependent step edge barrier is the main smoothing mechanism.
As a possible scenario we propose a broadening of the distribution of step edge barriers induced by guest molecules. This idea is supported by the strong correlation between in-plane coherent crystal length and roughness for all studied blends.
For intermixing rod/rod blends we find a roughness minimum close to the 1:1 ratio, induced by a reduced step edge barrier. For rod/sphere blends the roughness depends, in addition to the reduced step edge barrier, on the competing effect of phase separation. Finally, a local roughness maximum at the 1:1 ratio was also found for co-crystallizing blends. These blends behave similarly to weakly phase-separating blends, where phase separation occurs between a 1:1 co-crystal and the pure compound.
Our study shows an intriguing and subtle connection between non-equilibrium structure formation and equilibrium phase behavior
mediated by the kinetics of interlayer transport. Importantly, the near-universal smoothing observed in the mixed films
relies on the thermally activated character of step crossing events, which implies that the transport is effectively dominated by
the lowest available barriers. We expect that similar scenarios may be found in other systems where complex molecular interactions
give rise to a broad distribution of kinetic rates.
\section{Methods}
\subsection{Thin film preparation}
All films studied have a thickness of 20\,nm and were deposited by thermal evaporation onto silicon wafers with a native SiO$_2$ layer (surface roughness $\sigma_{\mathrm{rms}} = 0.3$\,nm) under ultra high vacuum (UHV) conditions (base pressure $<1 \cdot 10^{-7}$\,Pa).\cite{Ritley_2001_RevSciInstrum}
Before deposition, substrates were cleaned ultrasonically with acetone, isopropyl alcohol, and ultra pure water, followed by heating to 700\,K in the UHV growth chamber. All films were deposited at a substrate temperature of $T \sim 300$\,K. The growth rate was monitored by a quartz crystal microbalance. Typical evaporation rates are shown in supplementary Table 1.
\subsection{X-ray Scattering}
X-ray Reflectivity (XRR) and grazing incidence X-ray diffraction (GIXD) were measured either at the X04SA beamline of the Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland or at beamline ID10 of the ESRF in Grenoble, France.
\subsection{Roughness Determination}
Roughness ($\sigma$) values were determined by fitting XRR with Motofit\cite{Nelson_2006_JApplCrystallogr} and from AFM with Gwyddion\cite{Nevcas__}. Both methods yielded very similar results. We estimate the error bars for the $\sigma$ values to be on the order of 10\%.
\subsection{In-plane coherent crystal size}
Lower limits of the in-plane coherent crystal sizes $d_{coh}$ were determined using the Scherrer formula $d_{coh} = 2 \pi \cdot (\mathrm{FWHM})^{-1}$, where $\mathrm{FWHM}$ is the full width at half maximum of the peak in \AA$^{-1}$ determined with a Gaussian fit function.\cite{Smilgies_2009_JApplCrystallogr} The instrumental broadening of the diffractometer was not included in the calculation; therefore only lower limits of $d_{coh}$ are given.
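As a numerical illustration, the Scherrer relation above is a one-liner; the following sketch is ours (function name and example FWHM value are illustrative, not measured data):

```python
import math

def coherent_size_lower_bound(fwhm):
    """Lower limit of the in-plane coherent crystal size from the Scherrer
    relation d = 2*pi / FWHM; FWHM in reciprocal Angstrom, result in Angstrom."""
    return 2.0 * math.pi / fwhm

# Example: a GIXD peak with FWHM = 0.01 1/Angstrom gives a lower bound of
# about 628 Angstrom (~63 nm) for the coherent crystal size.
print(coherent_size_lower_bound(0.01))
```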
\subsection{In-plane correlation length}
In-plane correlation lengths were determined with Gwyddion\cite{Nevcas__} from AFM data by fitting the one-dimensional power spectral density function (PSDF) with a power law. The PSDF used for each sample is an average over 2--4 images with sizes between 3 and 10\,$\mu$m.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{./fig_moves.png}
\caption{Hopping and insertion moves with associated rates in the KMC model.}
\label{fig:moves}
\end{center}
\end{figure}
\subsection{Simulation Model}
\label{sec:sim_model}
For the KMC simulations, we employ a simple film growth model using a binary lattice gas (species A and B) on a cubic lattice
with interaction energy parameters as illustrated in Fig.~\ref{fig:energetics}.
Deposition on top of the film or the bare substrate at random substrate plane coordinates is controlled by a rate $F$
(particles per unit time and lattice site). Diffusion respects the solid--on--solid condition: only the
particles (species $i$) in the top layer are allowed to diffuse to a lateral next--neighbor site with rate
$D_i \;\text{min}(1,\exp( -\Delta E))$ where $D_i$ is a species--dependent free diffusion constant and
$\Delta E$ is the energy difference between final and initial state in units of $k_B T$.
In such a move, particles of species $i$ may also ascend (moving on top of a particle of species $j$) or descend one layer
(moving down from a particle of species $j$), in which case the rate is multiplied by
$\exp(-E^\text{ES}_{ij})$, where $E^\text{ES}_{ij}$ is an Ehrlich--Schwoebel (ES) barrier. Neither overhangs nor desorption are allowed.
The moves are illustrated in Fig.\,\ref{fig:moves}.
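In code, the hopping rule above amounts to a few lines. The following sketch (function name is ours; energies in units of $k_B T$, as in the model) shows how the lateral Metropolis-like factor and the step-crossing ES factor combine:

```python
import math

def hop_rate(D, delta_E, es_barrier=0.0):
    """KMC rate for a move of a top-layer particle of species i.

    D: species-dependent free diffusion constant D_i.
    delta_E: energy difference (final - initial state) in units of k_B T.
    es_barrier: Ehrlich-Schwoebel barrier E^ES_ij, nonzero only when the
    move ascends or descends a step edge formed by a species-j particle.
    """
    rate = D * min(1.0, math.exp(-delta_E))
    return rate * math.exp(-es_barrier)

# An in-plane move with no energy cost proceeds at the free rate D:
print(hop_rate(1e3, 0.0))
# Descending over a step with E^ES = 3 suppresses the rate by exp(-3):
print(hop_rate(1e3, 0.0, 3.0))
```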
In the one--component case (supplementary Fig.\,3), the model is characterized by the four constants $\epsilon=\epsilon_\text{AA}, \epsilon_\text{sub}=\epsilon_\text{sub}^\text{A}, E^\text{ES}=E^\text{ES}_\text{AA},
\Gamma = D_\text{A}/F$. Actual growth experiments of organic thin films are characterized by $|\epsilon|= 10 \dots 15$,
and $\Gamma = 10^9 \dots 10^{11}$, which is difficult to simulate owing to computational costs.
However, at lower energies and smaller ratios
$\Gamma$ the model shows similar growth modes as seen experimentally. These are (a) island growth from the start (ISL) when
$\epsilon_\text{sub}$ is low enough, (b) layer--by--layer growth (LBL) and (c) 3D growth of varying degree.
The model shows two characteristic transitions
which, however, are not sharp:
(i) ISL-LBL which for given $\epsilon,E^\text{ES}$ depends on both $\epsilon_\text{sub}, \Gamma$ and whose order parameter can
be characterized by the coverage difference of layer 1 and 2 after depositing only 1 ML.
(ii) LBL-3D and ISL-3D, respectively, which for given $\epsilon,\epsilon_\text{sub},E^\text{ES}$
depends on $\Gamma$ and for which a suitable order parameter is e.g.\ the integral of the anti--Bragg intensity;
locating this transition requires deposition of tens of MLs.
More details on the one--component growth modes and the associated transitions can be found in Ref.~\cite{empting20}.
\subsection{Simulation Parameter Selection}
In selecting the parameters for the pure systems, we proceed with the assumption that there are scaling relations
for the temporal roughness evolution, i.e. that simulations at lower $|\epsilon|, |\epsilon_\text{sub}|, E^\text{ES}, \Gamma$ correspond also to certain sets of these parameters with higher values.
In the literature, the epitaxial case $\epsilon=\epsilon_\text{sub}$ and $E^\text{ES}=0$ has been investigated recently \cite{Assis_2015_JStatMech} and
a scaling $r \propto L^\beta/\left[\Gamma^{3/2}\left(\exp(-|\epsilon|) + a\right)\right]$ has been found, where $r$ is the roughness normalized to a layer thickness of 1, $L$ is the number of deposited layers, $\beta \approx 0.2$, and $a = 0.025$.
We have investigated the scaling relations for the exemplary case of the PEN:DIP mixtures for energy parameters $\epsilon = {-3} \dots {-5}$ and a wider range of diffusion parameters $\Gamma = D_A = D_B = 10^3\dots10^6$.
We found that the composition dependence of a multilayer film depends rather well on the single variable $\Gamma^{1.5} \exp(-|\epsilon|)$ which was also found in the single-component case by Assis. Thus, e.g., when coming from a more realistic energy scale of $\epsilon=-15$ to $\epsilon=-3$, one may reduce $\Gamma$ by a factor of $2 \times 10^3$ if that scaling holds.
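As a rough consistency check of this scaling argument, one can compute the $\Gamma$ that keeps the variable $\Gamma^{1.5} \exp(-|\epsilon|)$ fixed when $|\epsilon|$ is changed. A sketch (the function name is ours):

```python
import math

def equivalent_gamma(gamma, eps_from, eps_to):
    """Gamma' such that Gamma'**1.5 * exp(-|eps_to|) equals
    Gamma**1.5 * exp(-|eps_from|), i.e. the same scaling variable."""
    return gamma * math.exp(-(abs(eps_from) - abs(eps_to)) * 2.0 / 3.0)

# Going from eps = -15 to eps = -3 reduces Gamma by e**8, i.e. of order 10**3:
print(1e9 / equivalent_gamma(1e9, -15.0, -3.0))
```

For $\epsilon=-15 \to -3$ the reduction factor is $e^{8}\approx 3\times 10^{3}$, i.e.\ of the order of magnitude quoted above, assuming the scaling holds.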
In going to growth of binary systems, the dimension of the parameter space of this simple model is already enlarged to 10
(4 parameters for each of the pure systems, the cross--species energy $\epsilon_\text{AB}$ (which controls mixing and demixing) and
the cross--Ehrlich--Schwoebel barrier $E^\text{ES}_\text{AB}=E^\text{ES}_\text{BA}$).
We concentrated on combining different pairs of one--component growth modes which reflect the experimental material
combinations. In general we found that for the simple choices
$\epsilon_\text{AB} \approx (\epsilon_\text{AA}+\epsilon_\text{BB})/2$, $D_\text{A} \approx D_\text{B}$ and $E^\text{ES}_{ij}=\text{const}$ the roughness properties of the films
linearly interpolate between those of the pure substances. Therefore, any more prominent mixture effects can only be expected
when deviating from these choices.
Simulations were done on a grid size of $M=300$. Our tests for larger grid sizes ($M=800$) show that the obtained roughness values do not depend strongly on the grid size (supplementary Fig.\,5).
\section{Acknowledgment}
Support from the DFG and the BMBF is gratefully acknowledged. We thank M. Kotrla for fruitful discussions and numerous members of the T\"ubingen group
for contributing data.
\section{Supplementary Tables}
\begin{table}[htb]
\caption{Overview of growth rates of the studied mixed films.}
\begin{tabular} {cc}
\hline
Mixture & growth rate (nm/min) \\ \hline
PEN:DIP & 0.2 \\
PEN:PIC & 0.3 \\
PFP:PIC & 0.3 \\
PDIR:DIP & 0.5 \\
DIP:PFP & 0.1 \\
PEN:PFP & 0.2 \\
DIP:C60 & 0.4 \\
6T:C60 & 0.4 \\
PEN:C60 & 0.2 \\
\hline
\end{tabular}
\label{tab0}
\end{table}
\begin{table}[htb]
\caption{Overview of studied mixed film systems categorized by their growth mode (Stranski-Krastanov or Volmer-Weber) in the pure compounds and the equilibrium phase behavior of the mixture.}
\begin{tabular} {lccc}
\hline
& co-crystal & solid solution & phase separation \\ \hline
SK : SK & DIP:PFP & PEN:DIP & \\
& PEN:PFP & & \\
& DIP:PDIR-CN$_2$ & & \\
& & & \\
VW : SK & PIC:PFP & PIC:PEN & C60:PEN \\
& & & C60:DIP \\
& & & C60:6T \\
\hline
\end{tabular}
\label{tab1}
\end{table}
\begin{table}[h]
\begin{tabular*}{\textwidth}{c @{\extracolsep{\fill}} llllllllllllcc}
\hline \hline \\
mixture & Fig. & $\epsilon_\text{AA}$ & $\epsilon_\text{BB}$ & $\epsilon_\text{AB}$ & $\epsilon_\text{sub}^\text{A}$ & $\epsilon_\text{sub}^\text{B}$ & $D_\text{A}$ & $D_\text{B}$ & $E^\text{ES}_\text{AA}$ & $E^\text{ES}_\text{BB}$ & $E^\text{ES}_\text{AB}$ &\multicolumn{2}{c}{single spec.} \\
& & & & & & & & & & & & \multicolumn{2}{c}{growth mode} \\
& & & & & & & & & & & & 1 & 2 \\ \\ \hline
PEN:DIP & 1a & -3.0 & -3.0 & -2.5 & -2.7 & -2.7 & $10^3$ & $10^3$ & 3.0 & 3.0 & 0.0 & 3D & 3D \\
PIC:PEN & 1b & -2.0 & -2.0 & -2.5 & -1.0 & -1.9 & $10^5$ & $10^3$ & 0.69 & 3.0 & 0.0 & isl & 3D\\
PIC:PFP & 1c & -2.0 & -2.0 & -1.5 & -1.0 & -1.9 & $10^5$ & $10^3$ & 0.69 & 3.0 & 0.0 & isl & 3D \\
DIP:PDIR& 1d & -3.0 & -3.0 & -2.5 & -2.7 & -2.7 & $10^3$ & $10^3$ & 3.51 & 1.39 & 0.0 & 3D & 3D (weak) \\
PFP:DIP & 1e & -3.0 & -3.0 & -8.0 & -2.7 & -2.7 & $10^3$ & $10^3$ & 3.0 & 3.0 & 0.0 & 3D & 3D \\
PFP:PEN & 1f & -3.0 & -3.0 & -8.0 & -2.7 & -2.7 & $10^3$ & $10^3$ & 3.0 & 3.0 & 0.0 & 3D & 3D \\
DIP:C60 & 1g & -3.0 & -3.0 & -1.5 & -2.7 & -1.0 & $10^4$ & $10^4$ & 3.0 & 3.0 & 0.0 & 3D & isl \\
6T:C60 & 1h & -3.0 & -3.0 & -0.5 & -2.7 & -1.0 & $10^4$ & $10^4$ & 3.0 & 3.0 & 0.0 & 3D & isl \\
PEN:C60 & 1i & -3.0 & -3.0 & +0.5 & -2.7 & -1.0 & $10^4$ & $10^4$ & 3.0 & 3.0 & 0.0 & 3D & isl\\ \hline \hline
\end{tabular*}
\caption{Simulation parameters used for the results presented in Fig.~2 of the main paper. The growth modes for the single species have been
read off from the roughness behavior after growth of 15 ML. ``3D'' refers to 3D growth, which is characterized by a monotonic increase of roughness
with deposited ML. ``3D (weak)'' is also 3D growth, with a weaker increase of the roughness. ``isl'' refers to island growth, which in the
model is characterized by an initially steeply increasing roughness. It may then smoothly cross over to 3D growth, but it may also decrease again
at higher deposition when islands begin to merge.}
\label{tab:parameters}
\end{table}
\clearpage
\section{Supplementary Figures}
\begin{figure*} [hbt]
\begin{center}
\includegraphics [width=5cm] {./nucleation_density.png}
\includegraphics [width=11cm] {./fig-corr.png}
\caption{\textbf{Left:} Sketch of roughness reduction by increase of nucleation density. a) and c) pure materials, b) the 1:1 mixture has a reduced in-plane correlation length $\xi$. \textbf{Right:} Correlation length of blends of rod-like compounds dependent on mixing ratio, extracted from AFM images. a) PEN:DIP b) PEN:PIC c) PFP:PIC d) DIP:PDIR e) DIP:PFP f) PEN:PFP. For some mixed systems (e.g.\ PFP:PIC) there appears to be a nearly linear correlation between $\xi$ and $\sigma$, whereas for other mixtures (PEN:PIC) the two parameters seem to be uncorrelated. Especially for PEN:DIP, the correlation length in the 4:1 mixture is very large although the roughness is very low. Judging from these observations, we conclude that the island density may play a role in the changed roughness for some material systems but cannot explain the general roughness behavior described in the main text.}
\label{lab:fig:corr}
\end{center}
\end{figure*}
\clearpage
\begin{figure} [ht]
\begin{center}
\includegraphics [width=8cm] {./Fig-rod-ball.png}
\includegraphics [width=8cm] {./Fig-rod-ball2.png}
\caption{a) Roughness $\sigma$ of blends of C$_{60}$ with a rod like compound. b) In-plane coherent island size $d_{coh}$ dependent on the mixing ratio of rod/sphere mixtures determined from GIXD.}
\label{lab:fig:C60DIP}
\end{center}
\end{figure}
\clearpage
\begin{figure}[hb]
\centerline{\includegraphics[width=8cm]{singlespec.png}}
\caption{
Roughness $\sigma$ as a function of deposition (in ML) for four typical pure components (PIC, PDIR, PIC and C60) as described by the model; system size $300^2$.}
\label{fig:singlespec}
\end{figure}
\clearpage
\begin{figure}[ht]
\centerline{\includegraphics[width=8cm]{variationEab_2.png}
\includegraphics[width=8cm]{variationEab_3.png}}
\caption{The effect of interspecies energy variation: in this example, we investigate the demixing case more closely and combine a species showing island growth with a second showing 3D growth, similar to the C60 mixtures in the main paper.
Normalized roughness $\sigma$ as a function of the concentration of the island-forming species B after deposition of 15 ML. Species A shows 3D growth (left) or LBL growth (right); variation of $\epsilon_\text{AB}$. Here, the roughness increase under strong demixing conditions ($\epsilon_\text{AB}=0$ or 0.5) is much more prominent than in the previous case.
It is strongest in the absence of ES barriers (right). An important conclusion is that the experimentally observed decreased roughness in mixed films under (weak) mixing conditions is not seen here. Under demixing conditions the model always predicts an increased roughness, whereas in the experiment both
increased and decreased roughness are seen.
Other parameters: $\Gamma=10^4$, $E^\text{ES}=3.0$ (left) and 0.0 (right), system size $100^2$.}
\label{fig:eab_2}
\end{figure}
\clearpage
\begin{figure}[h]
\centerline{\includegraphics[width=18cm]{800.png}}
\caption{System size: comparison of simulation results for different grid sizes: a) PEN:DIP, b) DIP:C60, c) DIP:PDIR. Solid lines correspond to a grid size of $M=300$; symbols correspond to $M=800$. The roughness depends only weakly on the lateral system size $M$: both grid sizes give essentially identical results.}
\label{fig:M800}
\end{figure}
\end{document}
\section{Introduction}
Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ (Bi-2212) is one of three commonly studied crystalline phases of the high-$T_{c}$ superconductor Bi$_2$Sr$_2$Ca$_{n-1}$Cu$_n$O$_{2n+4+\delta}$ (BSCCO). Similar to other members of the cuprate family, Bi-2212 possesses orthorhombic symmetry and has been described as micaceous due to its layered morphology, with a cleavage plane perpendicular to the crystallographic $c$-axis \cite{Sheahen}. Bi-2212 is structurally complex, exhibiting birefringence \cite{Kobayashi} and incommensurability along the $b$ and $c$ crystallographic axes \cite{Etrillard2001,McNiven2020}.
Many properties of Bi-2212, especially the electronic and structural properties, have been studied in detail \cite{Bellini2003,Etrillard2001}, but the optical and elastic properties have received little attention. Measurements of the dielectric function at visible wavelengths are scant and, with the exception of one study \cite{Bozovic}, report only the real \cite{Quijada} or imaginary \cite{Liu} part of this quantity in this region of the electromagnetic spectrum. Furthermore, there is significant variation in these values. Two independent measurements of refractive index at 532 nm have also been reported \cite{Wang2012,Hwang} but only in one case was the direction of light propagation specified \cite{Hwang}. A similar situation exists with regard to the elastic properties with only two groups reporting values of elastic constants. Room-temperature values for elastic constants were determined from Brillouin scattering measurements of surface mode frequencies with the assumption of hexagonal symmetry \cite{Boekholt}. Ultrasonic studies on crystalline Bi-2212 report values for several elastic constants for the true orthorhombic symmetry, but were limited to temperatures $80$ K $\leq T \leq 260$ K \cite{Wu1,Wu2}. It is therefore clear that additional studies of the optical and elastic properties of Bi-2212 are necessary.
In this paper, room-temperature refractive indices and extinction coefficients for three Bi-2212 crystals at the commonly-used wavelength of 532 nm are extracted from Brillouin spectra by measurement of acoustic mode peak frequency shifts and linewidths. This method for determining the optical constants of opaque materials is uncommon, particularly at a single wavelength. Knowledge of the refractive index and extinction coefficient is especially useful because they can be used along with Kramers-Kronig transformations to determine other optical properties. Elastic constant $C_{44}$ of crystalline Bi-2212 is also determined from Brillouin scattering measurements of bulk acoustic phonon modes.
\section{Experimental Details}
\subsection{Samples}
Table \ref{table:SampleTable} summarizes some physical properties and characteristics of the Bi-2212 crystals used in the present study. These (001)-oriented samples for Brillouin scattering experiments were obtained from parent crystals using mechanical exfoliation. This process was relatively straightforward for TC91 and TC90 as both exhibited the usual platelet geometry expected of Bi-2212 crystals with a cleavage plane perpendicular to $[001]$. In contrast, TC78 was somewhat irregularly-shaped which made exfoliation of an acceptable sample more difficult.
\begin{table}[!htb]
\caption{Critical temperature ($T_{c}$), approximate dimensions, and morphology of Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ crystals used in the present work.}
\begin{ruledtabular}
\begin{tabular}{c c c c}
\multirow{2}{*}{Name} & $T_c$ & Dimensions & \multirow{2}{*}{Morphology} \\
& [K] & [mm$^3$] &\\ [0.5ex] \hline
TC91 & 91 & $10\times2\times0.02$ & Planar\\
TC90 & 90 & $2\times1\times0.05$ & Planar\\
TC78& 78 & $1\times1\times0.5$ & Irregular
\end{tabular}
\end{ruledtabular}
\label{table:SampleTable}
\end{table}
\subsection{Brillouin Light Scattering}
Brillouin scattering is the inelastic scattering of light by thermally excited acoustic phonons. For a backscattering geometry as used in the present work, conservation of energy and momentum for the scattering process yield the following expressions for surface and bulk phonon velocities, respectively \cite{Sandercock1978}:
\begin{equation}
V_R=\frac{f_R\lambda_i}{2 \sin{\theta_i}},
\label{eqn:BrillouinSurface}
\end{equation}
\begin{equation}
V_B=\frac{f_{B}\lambda_i}{2n},
\label{eqn:BrillouinBulk}
\end{equation}
where $f_X$ ($X=R,B$, where $R$ and $B$ correspond to surface and bulk modes, respectively) is peak frequency shift, $\lambda_i$ is the incident light wavelength, and $n$ is the refractive index of the target material.
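Both relations translate directly into code. A minimal sketch (function names are ours; the example values $f_B \approx 13.5$ GHz and $n \approx 1.9$ are representative of the results reported below):

```python
import math

def rayleigh_velocity(f_R, wavelength, theta_i):
    """Surface (Rayleigh) phonon velocity from the backscattering relation
    V_R = f_R * lambda_i / (2 sin(theta_i)); SI units, theta_i in radians."""
    return f_R * wavelength / (2.0 * math.sin(theta_i))

def bulk_velocity(f_B, wavelength, n):
    """Bulk phonon velocity from V_B = f_B * lambda_i / (2 n); SI units."""
    return f_B * wavelength / (2.0 * n)

# A bulk shift of 13.5 GHz at 532 nm with n = 1.9 gives V_B = 1890 m/s:
print(bulk_velocity(13.5e9, 532e-9, 1.9))
```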
Brillouin scattering experiments were carried out in air at room temperature using a backscattering geometry with the set-up shown in ref. \cite{Andrews2007}. A single mode Nd:YVO$_4$ laser emitting at a wavelength of $\lambda_{i} = 532$ nm served as the incident light source. To minimize reflection losses, the polarization of the laser beam was rotated from vertical to horizontal by use of a half-wave plate. It was then passed through attenuating filters to reduce the power to $\sim$10 mW and subsequently focused onto the sample using a $f=5$ cm lens with $f/\#=2.8$. Scattered light was collected and collimated by the same lens and focused by a $f=40$ cm lens onto the entrance pinhole ($d=300$ $\mu$m or $450$ $\mu$m) of a six-pass tandem Fabry-Perot interferometer which frequency-analyzed the scattered light. The free spectral range of the interferometer was set to values from 30 GHz to 150 GHz, with the lower ranges being used when higher resolution was required. It should be noted that the use of such a low incident light power level was necessary to avoid sample damage and thermal effects caused by optical absorption; trial runs at power levels of $\geq20$ mW generated noticeable sample heating and/or damage. As a result, spectrum acquisition times of $\sim20$ hours were required and even in these circumstances only limited data could be obtained for TC90 and TC91 due to the low quality of the spectra collected from these two samples.
\section{Results}
\subsection{Brillouin Spectra}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.5\textwidth]{TC78-Spectra.pdf}
\caption{Room-temperature anti-Stokes Brillouin spectra of TC78 collected at incident angles of $30^\circ$ and $40^\circ$, where $B_1$, $B_2$, $B_3$ are bulk modes. Note: the peak due to the Rayleigh surface mode is not shown.}
\label{fig:BrillouinSpectra}
\end{figure}
Fig. \ref{fig:BrillouinSpectra} shows room-temperature spectra of sample TC78 collected at incident angles of $30^\circ$ and $40^\circ$. The peak associated with the Rayleigh surface mode ($R$, not shown), was identified by the characteristic linear dependence of its frequency shift on $\sin\theta_{i}$ (see Fig. \ref{fig:RayleighFit}) and its narrow linewidth relative to those of bulk mode peaks in the spectra of opaque solids. The remaining peaks ($B_1$, $B_2$, $B_3$) are due to bulk acoustic modes. Where possible, corresponding mode assignments were made for peaks in the spectra of TC91 and TC90.
\begin{figure}[!htb]
\includegraphics[width=0.5\textwidth]{RayleighFits.pdf}
\caption{Rayleigh surface mode peak frequency shift versus sine of incident angle for samples TC91 ($\square$), TC90 ($\blacktriangle$) and TC78 ($\circ$). Horizontal error bars are approximately the symbol width.}
\label{fig:RayleighFit}
\end{figure}
\begin{table*}[t]
\caption{Bulk acoustic mode frequency shifts ($f_{B_{i}}$) and associated linewidths ($\Gamma_{B_{i}}$) obtained from Brillouin spectra of Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ samples used in the present study. $T_c$ - critical temperature; $\theta_i$ - incident angle.}
\begin{ruledtabular}
\begin{tabular}{c c c c c c c c c}
\multirow{2}{*}{Sample} & $T_c$ & \multirow{1}{*}{$\theta_{i}$} & $f_{B_1}$ & $\Gamma_{B_1}$ & \multirow{2}{*}{$\Gamma_{B_1}$/$f_{B_1}$} & $f_{B_3}$ & $\Gamma_{B_3}$ & \multirow{2}{*}{$\Gamma_{B_3}$/$f_{B_3}$}\\
& [K] & [deg] & [GHz] & [GHz] & & [GHz] & [GHz] & \\
\hline
TC91 & 91 & 65 & 14.6$\pm0.5$ & 1.3$\pm$0.5 & 0.09 & - & - & - \\
\cline{1-9}
TC90 & 90 & 30 & 14.6$\pm$0.4 & 1.2$\pm$0.5 & 0.08 & - & - & - \\
\cline{1-9}
\multirow{6}{*}{TC78} & \multirow{6}{*}{78} & 10 & 13.2$\pm$0.3 & - & - & - & - & -\\
& & 30 & 13.5$\pm$0.3 & 1.2$\pm$0.3 & 0.09 & 31.9$\pm$0.3 & 1.1$\pm$0.3 & 0.03\\
& & 40 & 13.0$\pm$0.3 & 1.7$\pm$0.3 & 0.1 & 32.6$\pm$0.3 & 1.0$\pm$0.3 & 0.03\\
& & 50 & 14.1$\pm$0.3 & 1.8$\pm$0.3 & 0.1 & - & - & -\\
& & 60 & 13.4$\pm$0.3 & - & - & 33.3$\pm$0.3 & 1.1$\pm$0.3 & 0.03\\
& & 70 & 14.0$\pm$0.3 & 1.6$\pm$0.3 & 0.1 & - & - & -\\
\end{tabular}
\end{ruledtabular}
\label{tab:Quasi-Transverse}
\end{table*}
Table \ref{tab:Quasi-Transverse} shows the peak frequency shift ($f_{B_i}$) and full-width at half maximum ($\Gamma_{B_i}$) for $B_1$ and $B_3$ at various incident angles, obtained by fitting a Lorentzian function to each Brillouin peak. It is noted that 0.28 GHz, the measured $\Gamma$ value of the central elastic peak in the Brillouin spectrum, was subtracted from each fitted $\Gamma_{B_i}$ to remove the instrumental contribution to the overall linewidth. It was not possible to obtain analogous values associated with $B_3$ for TC90 and TC91 due to difficulty in fitting the peaks in the spectra of these samples; however, $f_{B_1}$ and $\Gamma_{B_1}$ were obtained at incident angles of $30^\circ$ (TC90) and $65^\circ$ (TC91). Peak $B_2$ was omitted from the analysis because its asymmetric shape (see Fig. \ref{fig:BrillouinSpectra}) and abnormally large width ($>2.50$ GHz) suggest that it may result from the superposition of two or more closely-spaced peaks.
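The subtraction of the instrumental width relies on the fact that the convolution of two Lorentzians is again a Lorentzian whose FWHM is the sum of the individual FWHMs, so the instrumental contribution can simply be subtracted. A minimal sketch of this correction (function name is ours):

```python
def corrected_linewidth(fitted_fwhm, instrumental_fwhm=0.28):
    """Intrinsic Brillouin linewidth (GHz): subtract the instrumental FWHM,
    measured on the central elastic peak, from the fitted Lorentzian FWHM.
    Valid because Lorentzian widths add linearly under convolution."""
    return fitted_fwhm - instrumental_fwhm

# A fitted width of 1.48 GHz corresponds to an intrinsic width of 1.2 GHz:
print(corrected_linewidth(1.48))
```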
\subsection{Determination of Refractive Index and Extinction Coefficient}
\subsubsection{Linewidth-Peak Frequency Shift Ratio}
The refractive index ($n$) and extinction coefficient ($\kappa$) are related to the bulk Brillouin peak linewidth, $\Gamma_B$, and frequency shift, $f_{B}$, through the equation \cite{Sandercock1972_SGe}
\begin{equation}
\frac{\Gamma_B}{f_B}=\frac{2\kappa}{n}.
\label{eqn:FWHMsandercock}
\end{equation}
To obtain the refractive index and extinction coefficient of crystalline Bi-2212, an objective function was constructed containing each set of peak frequency shift and linewidth data for both Stokes ($S$) and anti-Stokes ($AS$) Brillouin peaks:
\begin{equation}
\chi^2(\xi) = \sum^N_{j=1} \sum^3_{i=1} w^{{(S)}}_{ij} \left[ \frac{ \Gamma^{(S)}_{ij}}{f^{(S)}_{ij}}-\xi \right]^2 + w^{{(AS)}}_{ij}\left[ \frac{\Gamma^{(AS)}_{ij}}{f^{(AS)}_{ij}}-\xi \right]^2,
\label{eqn:minimization}
\end{equation}
where $i$ and $j$ denote the $i^{th}$ bulk mode of the $j^{th}$ data set, $\xi=2\kappa/n$ and $w_{ij}$ is the function weight of each component. Due to the uncertainty in frequency shifts and widths being the same for a given sample (see Table \ref{tab:Quasi-Transverse}), the corresponding $w_{ij}$ were constant.
By substituting the sets $f_i$ and $\Gamma_i$ from Table \ref{tab:Quasi-Transverse} for each sample into Eq. \ref{eqn:minimization}, along with initial guesses for $n$ and $\kappa$ ranging over $1.4-3.0$ and $0.01-1.00$, respectively (chosen based on previous estimates of these constants in the visible region of the electromagnetic spectrum \cite{Bozovic,Wang2012,Hwang}), Eq. \ref{eqn:minimization} was minimized using the sequential least squares programming (SLSQP) optimization method. Given that Eq. \ref{eqn:minimization} depends on $n$ and $\kappa$ only through the ratio $\xi=2\kappa/n$ and therefore possesses an infinite set of solutions, $n$ and $\kappa$ were constrained to $1.0-6.0$ and $0.01-1.00$, respectively, with the set that minimizes the function defined in Eq. \ref{eqn:minimization} taken as the final values. This process yielded $1.9$ and $0.09$ for TC91, $1.8$ and $0.08$ for TC90, and $1.9$ and $0.07$ (averaged) for TC78 for $n$ and $\kappa$, respectively. Eq. \ref{eqn:minimization} was plotted as a function of $n$ and $\kappa$ to ensure that the function is not flat and has a pronounced minimum at the quoted values.
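Since the minimized function is quadratic in $\xi = 2\kappa/n$, its unconstrained minimizer for fixed weights is simply the weighted mean of the measured $\Gamma/f$ ratios; a bound-constrained SLSQP solution coincides with this whenever the bounds are inactive. A minimal sketch of that step (function name is ours; the example uses the TC78 $B_1$ ratios only, whereas the analysis above averages over all modes and both Stokes and anti-Stokes peaks):

```python
def best_xi(ratios, weights=None):
    """Minimizer of sum_i w_i * (r_i - xi)**2 over xi: the weighted mean
    of the measured linewidth-to-shift ratios r_i = Gamma_i / f_i."""
    if weights is None:
        weights = [1.0] * len(ratios)
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

# TC78 B_1 ratios from the table above give xi = 2*kappa/n of about 0.0975:
print(best_xi([0.09, 0.1, 0.1, 0.1]))
```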
\subsubsection{Equation for Rayleigh Surface Wave Velocity}
A second estimate of the refractive index can be extracted from the Brillouin data via an implicit equation for the Rayleigh surface acoustic wave velocity, $V_{R}$, in the [010] direction on the (001) plane for a crystal with orthorhombic symmetry \cite{Stoneley},
\begin{multline}
\sqrt{\left[1-\frac{V_R^2}{V_T^2}\right]} \left[ 1-\frac{C_{23}^2}{C_{22}C_{33}}-\frac{\rho V_R^2}{C_{22}} \right] \\ = \frac{\rho V_R^2}{C_{22}} \sqrt{ \frac{C_{22}}{C_{33}} \left[ 1-\frac{\rho V_R^2}{C_{22}} \right] },
\label{eqn:Stoneley}
\end{multline}
where the $C_{ij}$ are elastic constants, $\rho = 6510$ kg/m$^3$ is the density, and $V_{T}$ is the velocity of the slow quasi-transverse mode propagating in the [001] direction. In a Brillouin scattering experiment, this velocity is given by $V_T=f_{T}\lambda_i/2n$ as dictated by Eq. \ref{eqn:BrillouinBulk}, which upon substitution into Eq. \ref{eqn:Stoneley} yields
\begin{equation}
n=\frac{f_{B_{1}}\lambda_i}{2V_{R}} \sqrt{ 1 - \alpha \left[\frac{C_{22}}{C_{33}}\right] \left[\frac{{\alpha-1}}{\alpha-\beta}\right]^2},
\label{eqn:index}
\end{equation}
where $\alpha = 1-\rho V_R^2/C_{22}$ and $\beta={C_{23}^2}/{C_{22}C_{33}}$.
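A numerical sketch of Eq. \ref{eqn:index} (not the authors' code) using the density and elastic constants quoted later in the text ($\rho=6510$ kg/m$^3$, $C_{22}=110$ GPa, $C_{33}=75.8$ GPa, $C_{23}\approx C_{13}=56$ GPa), the TC78 Rayleigh velocity $V_R=1410$ m/s, and $f_{B_1}\approx13.2$ GHz (the slow quasi-transverse shift quoted for TC78) reproduces the quoted $n\approx2.4$:

```python
import math

def n_from_rayleigh(f_B1, lam_i, V_R, C22, C33, C23, rho):
    """Refractive index from Eq. (index): Rayleigh velocity plus bulk shift."""
    alpha = 1.0 - rho * V_R**2 / C22
    beta = C23**2 / (C22 * C33)
    radicand = 1.0 - alpha * (C22 / C33) * ((alpha - 1.0) / (alpha - beta))**2
    return f_B1 * lam_i / (2.0 * V_R) * math.sqrt(radicand)

# TC78 inputs quoted in the text, in SI units (elastic constants in Pa):
n = n_from_rayleigh(f_B1=13.2e9, lam_i=532e-9, V_R=1410.0,
                    C22=110e9, C33=75.8e9, C23=56e9, rho=6510.0)
print(round(n, 1))  # -> 2.4
```

Lowering $V_R$ (as surface roughness does) raises the $n$ returned by this equation, consistent with the discussion that follows.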
\begin{table*}[t]
\caption{Room-temperature optical properties of crystalline Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ at a wavelength of 532 nm. $T_c$ - critical temperature; $n$ - refractive index; $\kappa$ - extinction coefficient; $d$ - optical penetration depth; $\epsilon_1$ - real part of dielectric function; $\epsilon_2$ - imaginary part of dielectric function. Entries in the ``Direction'' column specify the direction of incident light propagation inside the material, as measured from the crystallographic $c$-axis.}
\begin{ruledtabular}
\begin{tabular}{c c c c c c c c c}
\multirow{2}{*}{Study} & $T_c$ & Direction & \multirow{2}{*}{$n$} & \multirow{2}{*}{$\kappa$} & \multirow{2}{*}{2$\kappa$/$n$} & $d$ & \multirow{2}{*}{$\epsilon_1$} & \multirow{2}{*}{$\epsilon_2$}\\
& [K] & [deg] & & & & [nm] & & \\
\hline
\multirow{7}{*}{Present Work - $\Gamma_B/f_B$ } & 91 & 28 & 1.9$\pm$0.2 & 0.09$\pm$0.04 & 0.09 & 500$\pm$200 & 4$\pm$1 & 0.34$\pm$0.07\\
\cline{2-9}
& 90 & 16 & 1.8$\pm$0.2 & 0.08$\pm$0.04 & 0.09 & 500$\pm$300 & 3$\pm$1 & 0.3$\pm$0.2 \\
\cline{2-9}
& \multirow{5}{*}{78} & 16 & 2.0$\pm$0.1 & 0.06$\pm$0.02 & 0.06 & 700$\pm$200 & 4$\pm$1 & 0.24$\pm$0.07 \\
& & 20 & 1.8$\pm$0.1 & 0.07$\pm$0.02 & 0.08 & 600$\pm$200 & 3$\pm$1 & 0.25$\pm$0.08 \\
& & 24 & 1.9$\pm$0.1 & 0.10$\pm0.03$ & 0.1 & 400$\pm100$ & 4$\pm$1 & 0.4$\pm$0.1 \\
& & 27 & 1.8$\pm$0.1 & 0.03$\pm$0.01 & 0.03 & 1400$\pm$500 & 3$\pm$1 & 0.11$\pm$0.04 \\
& & 30 & 1.8$\pm$0.1 & 0.10 $\pm$0.03 & 0.1 & 400$\pm$100 & 3$\pm$1 & 0.4$\pm$0.1 \\
\cline{1-9}
\multirow{3}{*}{Present Work - Eq. \ref{eqn:index}} & 91 & 21 & 2.3$\pm0.1$ & 0.1 & 0.09\footnotemark[3] & 420 & 5.3 & 0.46 \\
\cline{2-9}
& 90 & 12 & 2.2$\pm0.1$ & 0.09 & 0.08\footnotemark[3] & 470 & 4.8 & 0.40 \\
\cline{2-9}
& 78 & 4 & 2.4$\pm0.1$ & 0.1 & 0.09\footnotemark[4] & 420 & 5.8 & 0.48 \\
\cline{1-9}
\multirow{1}{*}{Wang {\textit{et al.}} \cite{Wang2012}} & 85 & - & 1.9 & 0.04 & 0.04 & 105\footnotemark[1] & 3.6\footnotemark[2] & 0.15\footnotemark[2]\\
\cline{1-9}
\multirow{3}{*}{Hwang {\textit{et al.}} \cite{Hwang}} & 96 & \multirow{3}{*}{$\sim0$} & 1.9 & 0.23 & 0.24 & 184\footnotemark[1] & 3.6\footnotemark[2] & 0.87\footnotemark[2] \\
& 69 & & 2.0 & 0.19 & 0.19 & 223\footnotemark[1] & 4.0\footnotemark[2] & 0.76\footnotemark[2] \\
& 60 & & 2.1 & 0.26 & 0.25 & 163\footnotemark[1] & 4.3\footnotemark[2] & 1.1\footnotemark[2]\\
\cline{1-9}
\multirow{1}{*}{Bozovic {\textit{et al.}} \cite{Bozovic}} & - & $\sim0$ & 2.1\footnotemark[2] & 0.42\footnotemark[2] & 0.40 & 100 & 3.4 & 1.7\\
\end{tabular}
\end{ruledtabular}
\label{tab:OpticalProperties}
\footnotetext[1]{Estimated using $d=\lambda_{i}/4\pi\kappa$.}
\footnotetext[2]{Estimated using $\epsilon_1=n^2-\kappa^2$, $\epsilon_2=2n\kappa$.}
\footnotetext[3]{Calculated using Eq. \ref{eqn:FWHMsandercock}.}
\footnotetext[4]{Obtained from Table \ref{tab:Quasi-Transverse} using $\theta_i=30^\circ$ due to unknown $\Gamma$ for $\theta_i=10^\circ$.}
\end{table*}
As shown in Fig. \ref{fig:RayleighFit}, the Brillouin data give velocities $V_R=1410$ m/s, 1570 m/s, and 1610 m/s for TC78, TC91, and TC90, respectively. These velocities are for unknown directions of propagation on the (001) plane, but it has been shown that the Rayleigh surface mode velocity on this plane is essentially constant \cite{Boekholt}. The measured velocities are therefore taken as equal to those along the [010] direction for the purposes of computing $n$.
Elastic constants $C_{22}$ and $C_{33}$ have been determined for crystalline Bi-2212 with $C_{22}=110$ GPa at 260 K, the nearest temperature to ambient for which a value of this constant has been reported \cite{Wu2}, and $C_{33}=75.8$ GPa at room temperature with the approximation of hexagonal symmetry \cite{Boekholt}. No value of $C_{23}$ appears in the literature, but because the symmetry of Bi-2212 can be approximated as tetragonal (since $a\approx b$ \cite{Etrillard2001}), $C_{23}\approx C_{13}$, with $C_{13} = 56$ GPa \cite{Boekholt}.
Using the $V_{R}$ and $C_{ij}$ values given above and frequency shifts $f_{B_1}$ from Table \ref{tab:Quasi-Transverse}, the refractive indices of the three samples of the present work as determined from Eq. \ref{eqn:index} are $n=2.4$ for TC78, $n=2.3$ for TC91, and $n=2.2$ for TC90. It should be noted that the frequency shifts $f_{B_{1}}$ were obtained from spectra collected at nonzero angles of incidence and therefore the probed phonons were not travelling along the [001] direction. For angles ranging from $5^{\circ}$ to $30^\circ$ from the [001] direction, however, the shifts were found to be largely independent of direction and therefore the shift value obtained from the spectrum collected at the lowest angle of incidence was taken as being equal to that along the [001] direction. These angles are indicated in Table \ref{tab:OpticalProperties}.
\section{Discussion}
\subsection{Optical Properties}
Table \ref{tab:OpticalProperties} shows the optical constants, $n$ and $\kappa$, at 532 nm obtained using the methods described above, along with derived quantities and previously published values. The refractive index values obtained using the linewidth-shift ratio range from 1.8 to 2.0 and agree to within 10\% with those determined using a combined reflectance-ellipsometry method \cite{Bozovic}, reflectance spectroscopy \cite{Hwang}, and optical interference \cite{Wang2012}. In contrast, Eq. \ref{eqn:index} yields values that are $\sim 20$\% larger than previously published results, with $2.2 \leq n \leq 2.4$. This difference could be due to a reduction in surface mode frequencies resulting from surface damage and/or roughness \cite{Sathish,Mendik,Eguiluz}, which decreases the surface phonon velocity $V_{R}$ and consequently increases the $n$ determined by Eq. \ref{eqn:index}. More specifically, if surface roughness is present, the $n$ obtained from Eq. \ref{eqn:index} should be larger than that determined from a measurement that probes bulk phonons within the volume of the material, like the linewidth-shift ratio. This is precisely what is observed. As an example, consider TC78, the sample with the roughest surface as confirmed by optical microscopy and that for which Eq. \ref{eqn:index} gives the highest $n$ value. If $V_{R}$ for TC90 (1610 m/s) or TC91 (1570 m/s) is used in Eq. \ref{eqn:index} instead of $V_{R}=1410$ m/s for this sample (justified by the pristine (001) surfaces of TC90 and TC91 resulting from high-quality cleaves), one obtains $n=2.0-2.1$, within $\sim10$\% of the values determined using the linewidth-shift ratio.
Extinction coefficients obtained using the linewidth-shift ratio range from $0.03 \leq \kappa \leq 0.1$. Values for $\kappa$ corresponding to the refractive indices determined from Eq. \ref{eqn:index} were also obtained from the numerical values of the ratio $\Gamma/f$ given in Table \ref{tab:Quasi-Transverse}. The extinction coefficients obtained in the present work are similar to that found by optical interference but are, at minimum, $\sim50$\% smaller than those obtained via reflectance analysis \cite{Hwang,Bozovic}. Such high $\kappa$ values result in $2\kappa/n$ ratios that would require, via Eq. \ref{eqn:FWHMsandercock}, linewidth-shift ratios $\Gamma_{B_i}/f_{B_i}$ several times larger than those obtained in the present Brillouin scattering experiments (see Table \ref{tab:Quasi-Transverse}) and also those extracted from spectra shown in previous independent Brillouin studies of Bi-2212 \cite{Boekholt}. As with the elevated $n$ values from Eq. \ref{eqn:index}, surface roughness could account for this discrepancy between the $\kappa$ values obtained by Brillouin scattering and by reflectance, as roughness is known to result in increased optical absorption \cite{Vanderwal,Bennett}. Such roughness would be manifested in the reflectance result because it is primarily a measurement of light reflected from the sample surface. In contrast, surface roughness is unlikely to impact the bulk Brillouin result because the probed bulk phonons propagate in the volume of the material.
Table \ref{tab:OpticalProperties} also shows values for the optical penetration depth and dielectric function at 532 nm derived from the optical constants. The penetration depths, $d=\lambda_{i}/4\pi\kappa$, determined from optical constants obtained using the linewidth-shift ratio and Eq. \ref{eqn:index} are, in general, quite similar, averaging 460 nm and 440 nm, respectively. These values are $\sim2-4$ times larger than those previously reported \cite{Hwang,Bozovic} due to the above-noted differences in $\kappa$. The dielectric function ($\tilde{\epsilon}=\epsilon_1+i\epsilon_2$) at 532 nm was determined using the relationship between dielectric and optical constants: $\epsilon_1 = n^2-\kappa^2$ and $\epsilon_2 = 2n\kappa$. On average, $\epsilon_{1}$ values determined using optical constants found from the linewidth-shift ratio are similar to those obtained in other studies \cite{Bozovic,Hwang,Wang2012} but are $\sim35$\% smaller than those determined using constants found from Eq. \ref{eqn:index}, primarily due to the higher value of $n$ supplied by this equation. Values for $\epsilon_{2}$ calculated from optical constants determined using the linewidth-shift ratio are also $\sim35$\% smaller on average than those determined using constants based on Eq. \ref{eqn:index}, but are several times smaller than $\epsilon_{2}$ values obtained in previous reflectance studies \cite{Hwang,Bozovic}, mainly due to the higher $\kappa$ values obtained in the latter studies. These observations on the relative magnitudes of the optical penetration depth and dielectric function as determined by various means implicitly incorporate the trends in $n$ and $\kappa$ noted above and therefore highlight the effects of surface roughness on these derived quantities.
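As a worked check (a sketch, not the authors' code) of the derived quantities in Table \ref{tab:OpticalProperties}, evaluating $d=\lambda_i/4\pi\kappa$, $\epsilon_1=n^2-\kappa^2$, and $\epsilon_2=2n\kappa$ for the TC91 linewidth-shift-ratio values $n=1.9$, $\kappa=0.09$ reproduces, within rounding and the quoted uncertainties, the tabulated $d=500\pm200$ nm, $\epsilon_1=4\pm1$, and $\epsilon_2=0.34$:

```python
import math

def derived_optics(n, kappa, lam_i=532e-9):
    """Penetration depth and dielectric function from (n, kappa)."""
    d = lam_i / (4.0 * math.pi * kappa)   # optical penetration depth, m
    eps1 = n**2 - kappa**2                # real part of dielectric function
    eps2 = 2.0 * n * kappa                # imaginary part
    return d, eps1, eps2

# TC91, linewidth-shift-ratio optical constants:
d, eps1, eps2 = derived_optics(1.9, 0.09)
print(round(d * 1e9), round(eps1, 1), round(eps2, 2))  # ~470 nm, 3.6, 0.34
```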
\subsection{Elastic Constant $C_{44}$}
An important consequence of the presence of surface roughness is that measurement of a quantity in the surface region will yield a different value than in the bulk. In view of this, elastic constant $C_{44}$ for Bi-2212 was determined using quantities derived solely from measurements of bulk phonons to mitigate the impact of surface effects. For this calculation, results for the refractive index ($n=2.0$) determined from the linewidth-shift ratio and the frequency shift ($f_{T}=13.2$ GHz) for the peak due to the slow quasi-transverse bulk mode propagating nearly along the $c$-axis for TC78 were used due to the superior quality of the Brillouin data obtained from this sample relative to TC90 and TC91. Substitution of these values into the expression $C_{44}=\rho V_{T}^2 = \rho (f_{T} \lambda_{i}/2n)^2$ gives $C_{44}=20\pm2$ GPa. This result is in accord with $C_{44}= \rho V_{T}^2=21.8$ GPa estimated using $V_{T}=1830$ m/s from neutron scattering measurements \cite{Etrillard2004}, but $\sim25$\% larger than the lone published value of 15.8 GPa \cite{Boekholt}. An analogous calculation for TC90 and TC91 yielded estimates for $C_{44}$ that, while crude due to the poor quality spectral data for these samples and because the direction of light propagation is at an appreciable angle to the $c$-axis, are also considerably larger than the previously published value.
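The $C_{44}$ estimate above can be verified with a short numerical sketch using only the bulk-phonon quantities quoted in the text ($\rho=6510$ kg/m$^3$, $f_T=13.2$ GHz, $\lambda_i=532$ nm, $n=2.0$):

```python
# Bulk-phonon-only estimate of C44, as described in the text.
rho = 6510.0        # density, kg/m^3
f_T = 13.2e9        # slow quasi-transverse shift for TC78, Hz
lam_i = 532e-9      # incident wavelength, m
n = 2.0             # refractive index from the linewidth-shift ratio

V_T = f_T * lam_i / (2.0 * n)   # bulk phonon velocity, ~1756 m/s
C44 = rho * V_T**2              # elastic constant, Pa
print(round(C44 / 1e9))         # -> 20 (GPa)
```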
In order to identify the mechanism responsible for the difference between the value of $C_{44}$ obtained in the present work and that previously reported, it is noted that the latter was determined from Brillouin scattering measurements of surface mode frequencies \cite{Boekholt}. As noted above, surface roughness or damage can result in a reduction in frequencies of surface modes measured by Brillouin scattering \cite{Sathish,Mendik,Eguiluz}. Furthermore, it has also been stated that these frequencies might be low because elastic constants at the surface differ from those in the bulk \cite{Sandercock1978}. The results obtained here are consistent with this premise and, in fact, the presence of surface roughness on crystalline Bi-2212 has been noted in another independent Brillouin scattering study \cite{Baumgart}. Moreover, using $n=2.4$ as found from Eq. \ref{eqn:index}, which contains surface mode velocity $V_{R}$ and elastic constants $C_{13}$ and $C_{33}$ determined using surface mode frequencies \cite{Boekholt}, gives $C_{44}=14$ GPa, a value within $\sim10$\% of that quoted in ref. \cite{Boekholt}. It therefore seems likely that surface roughness is responsible for the low value of $C_{44}$ reported in an earlier study relative to that obtained in the present work.
\section{Conclusion}
Room temperature optical constants of crystalline Bi-2212 were determined at a wavelength of 532 nm using two approaches based on data obtained from Brillouin light scattering spectra. Optical constants determined from measurements of bulk phonons propagating within the volume of the crystal are, in general, noticeably different from those extracted from measurements of surface modes. These differences are attributed to a surface roughness-induced lowering of surface acoustic mode frequencies, which also appears to account for why elastic constant $C_{44}$ determined in the present study is $\sim25\%$ higher than that previously reported.
\section{Acknowledgements}
The authors would like to acknowledge Dr. J. P. Clancy at McMaster University, Canada, for supplying the samples used in this work and Dr. J. Hwang at Sungkyunkwan University, South Korea, for providing raw reflectance data for their samples. This work was partially funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) through Discovery Grants to Andrews (\#RGPIN-2015-04306) and LeBlanc (\#RGPIN-2017-04253).
\bibliographystyle{apsrev4-1}
\section{Introduction}
Type Ia supernovae (SNe Ia), which are among the most powerful explosions observed in the universe, are events that can occur only in multiple star systems. They are not only critical to the chemical evolution of galaxies (without them, we would for example be unable to explain the amount of iron observed in the solar neighborhood), but are also increasingly being used as distance indicators, or standard candles, in cosmology. Despite this, their origin remains unknown. It is generally agreed that SNe Ia originate from the thermonuclear disruption of a white dwarf (WD) in a binary star, which attains a critical mass close to the Chandrasekhar limit of $1.4$~M$_{\odot}$ (see e.g. Livio 2001). However, the exact formation process, and even the type of systems in which such is possible, is a matter of debate. The two most popular formation channels are the single degenerate (a WD steadily accreting hydrogen-rich material from a late main sequence (MS) or red giant (RG) companion, see e.g. Nomoto 1982) and double degenerate (a super-Chandrasekhar merger of two WDs due to gravitational wave radiation (GWR) spiral-in, see e.g. Webbink 1984) scenario.
\\
\\
To address the question of which of these scenarios is dominant in nature (possibly both), one can turn to the observational delay time distribution (DTD) of SNe Ia, which is the number of such events per unit time as a function of time elapsed since starburst. Totani et al. (2008) obtain a DTD by observing the SN Ia rate in elliptical galaxies which are passively evolving, and thus equivalent to starburst galaxies for this purpose, at similar (near solar) metallicity but different redshifts. The distribution thus obtained is extended with the SN Ia rate for local elliptical galaxies observed by Mannucci et al. (2005). The result is a DTD decreasing inversely proportionally to time, in units of SNuK (SNe per K-band luminosity), which need to be converted into SNuM (SNe per total initial galaxy mass) in order to be compared to any theoretical model. This conversion factor is obtained from spectral energy distribution templates, but may be subject to uncertainties. Using the observational DTD, it is possible to constrain theoretical models for SN Ia formation in starburst galaxies. For the reason just mentioned, comparisons between theoretical and observational DTDs will mainly focus on the shape of the distributions, and not so much on the absolute values.
\section{Assumptions}
Previous studies have been done on this topic by other groups (see e.g. Ruiter et al. 2009; Hachisu et al. 2008; Han \& Podsiadlowski 2004; Yungelson \& Livio 2000), but the present one specifically focuses on the influence of the mass transfer efficiency during Roche lobe overflow (RLOF) in close binaries. This is done with an updated version of the population number synthesis code by De Donder \& Vanbeveren (2004), which computes detailed binary evolution models, without the use of analytical formalisms. Single degenerate (SD) progenitors are assumed to be as given by Hachisu et al. (2008), who delineate regions in the companion mass--orbital period parameter space, one for the WD+MS channel and one for the WD+RG channel. Systems entering one of these regions will encounter a mass transfer phase towards the WD that is gentle enough not to result in nova-like flashes on the surface of the WD that burn away any accreted hydrogen, but sufficiently fast to let the WD reach the Chandrasekhar limit and produce a SN Ia before the companion ends its life. This scenario includes the mass stripping effect, which allows accretors to blow away some of the mass transferred towards them, letting some systems which would otherwise have merged escape a common envelope (CE) phase and result in a SN Ia. For the double degenerate (DD) scenario, it is assumed that every WD merger exceeding $1.4$~M$_{\odot}$ results in a SN Ia.
\\
\\
Certain parameters need to be scrutinized, first and foremost the fraction $\beta$ of RLOF-material which is accepted by the accretor. If $\beta < 1$, mass will be lost from the system and angular momentum loss must be taken into account. This is done under the assumption that matter leaves the system with the specific angular momentum of the second Lagrangian point, since mass loss is considered to take place through a circumbinary disk. Other groups make different assumptions, which can have serious implications for the eventual evolution outcome. Finally, a formalism must be adopted for the treatment of energy conversion during CE phases. For the standard model, the $\alpha$-formalism by Webbink (1984) will be adopted, while another possibility will be considered later on.
\section{Double degenerate evolution channels}
\begin{figure*}
\centering
\includegraphics[width=10.75cm]{mennekens_fig1.eps}
\caption{Graphical representation (not to scale) of the two channels typically leading to DD SNe Ia in our population code. Left panel: (conservative) RLOF phase followed by CE phase. Right panel: two successive CE phases.}
\end{figure*}
In our code, there are two evolution channels that can lead to a DD SN Ia, which are represented graphically by typical examples in Fig.~1. In the first channel, the explosion follows an evolution which entails one stable RLOF phase (which is assumed to be conservative: $\beta = 1$), followed by a CE phase. The latter is due to the extreme mass ratio at the start of the second mass transfer phase, and the fact that the accreting object is a WD. In this channel, the resulting system is a double WD binary with a mass of the order of $1$~M$_{\odot}$ each, and with an orbital period $P$ of a few hours. Such a system then typically needs a GWR spiral-in lasting several Gyr, resulting in a SN Ia after such a long delay time. Importantly, if in this channel the RLOF phase is assumed to be totally non-conservative ($\beta = 0$), the system will merge already during that first mass transfer phase, and there will thus be no SN Ia.
\\
\\
The second channel consists of an evolution made up of two successive CE phases. The nature of the first mass transfer phase is a result of the system having an initial orbital period typically two orders of magnitude larger than in the other channel. As a consequence, the donor's outer layers are deeply convective by the time mass transfer starts, which makes this process dynamically unstable. Eventually, after the second CE phase, a double WD binary with about the same component masses as in the first channel is obtained, but with an orbital period of only a few hundred seconds. Such systems require GWR during only a few tens of thousands of years in order to merge, with the SN Ia thus having a total delay time of just a few hundred Myr.
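The very different merger times of the two channels follow directly from the steep period dependence of the GWR inspiral timescale. A back-of-the-envelope sketch (not part of the population code) using the standard circular-orbit merger time of Peters (1964), $t = 5c^5a^4/(256\,G^3 m_1 m_2 M)$, for two WDs of $1$~M$_{\odot}$ each, with illustrative periods of 6 hours and 300 seconds:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg
YEAR = 3.156e7     # seconds per year

def merger_time_yr(P_sec, m1=M_sun, m2=M_sun):
    """GWR merger time of a circular binary (Peters 1964), in years."""
    M = m1 + m2
    # Semi-major axis from Kepler's third law:
    a = (G * M * P_sec**2 / (4.0 * math.pi**2))**(1.0 / 3.0)
    t = 5.0 * c**5 * a**4 / (256.0 * G**3 * m1 * m2 * M)
    return t / YEAR

print(f"{merger_time_yr(6 * 3600):.1e} yr")  # P of hours  -> Gyr scale
print(f"{merger_time_yr(300):.1e} yr")       # P of ~300 s -> ~10^4 yr
```

This reproduces the timescales quoted above: periods of a few hours give delay-dominating inspiral times of order a Gyr, while a few-hundred-second period merges within tens of thousands of years.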
\section{Results and discussion}
\begin{figure*}
\centering
\includegraphics[width=10.75cm]{mennekens_fig2.eps}
\caption{DTDs for $\beta=1$ of the DD (solid black) and SD (dotted gray) scenario, as well as for $\beta=0$ of the DD (dashed black) scenario, using the $\alpha$-formalism for CE. Observational data points of Totani et al. (2008) and Mannucci et al. (2005) (black circles).}
\end{figure*}
The results for the DTD, obtained with the population synthesis code, are shown in Fig.~2. It is obvious that the SD DTD for conservative RLOF ($\beta = 1$) is decreasing much too fast and too soon in order to keep matching the morphological shape of the observational data points after a few Gyr. The SD DTD for totally non-conservative RLOF ($\beta = 0$) hardly deviates from the conservative one shown. The SD scenario by itself is thus incompatible with the observations. We also find most SD events to occur through the WD+MS channel, as opposed to the WD+RG channel. The DD DTD for conservative RLOF matches the observational points in shape, but results in a too low absolute number of events to match them. This may be partially caused by uncertainties in the conversion between SNuK and SNuM, but may also have a physical explanation which will be addressed below. Importantly, most DD SNe Ia are created through a quasi-conservative RLOF followed by a CE phase, not through two successive CE phases. This is also visible from Fig.~2, by comparing the DD DTDs for totally conservative and non-conservative RLOF. In the latter case, the DTD drops dramatically after a few hundred Myr, leaving no DD SNe Ia with a sizeable delay time. This means that the first peak in the DD DTD, present in both cases of $\beta$, contains the events created through a double CE phase, and the second one (the absolute majority of events, but only present in the case of conservative RLOF) those created by a RLOF phase followed by a CE phase. Apart from confirming the aforementioned typical timescales for both channels, and the inability of the channel containing a RLOF phase to produce any SNe Ia if this phase is non-conservative, this also means that in reality a quasi-conservative RLOF (more specifically, $\beta \ge 0.9$) is required to obtain a match in morphological shape between model and observation. 
This has negative implications for the use of analytical formalisms for the determination of delay times, since such studies typically assume that the lifetime of the secondary star is unaffected by the mass transfer process, which is obviously not true in the case of conservative RLOF.
\\
\\
The next step is a study of the influence of the description of CE evolution. So far, the $\alpha$-formalism by Webbink (1984) was used, which is based on a balance of energy following from a conservation of angular momentum. An alternative is the $\gamma$-formalism by Nelemans \& Tout (2005), which starts from a conservation of energy to arrive at a balance of angular momentum, and which is said to be better for the treatment of systems which will result in a WD binary. The result obtained with this formalism is shown for both the SD and DD scenario and with $\beta = 1$ in Fig.~3. The SD DTD using the $\gamma$-formalism still deviates strongly from the observations, both in shape and number. While the shape of the DD DTD is in agreement with that of the observational data points, it has dropped in absolute number by another order of magnitude as compared to the $\alpha$-formalism. While it is thus not possible to reject the use of the $\gamma$-formalism based on a shape comparison, it seems unlikely that such a large SN Ia rate discrepancy can be explained.
\\
\\
Finally, some considerations are made on the absolute number of SNe Ia. As mentioned before, all considered theoretical models underestimate the observed absolute rate by a factor of at least three at the 11 Gyr point. This might be partially due to the SNuK-SNuM conversion, but a more plausible solution is stellar rotation. If stars in binaries are typically born with higher rotational velocities than single stars, for which there seem to be indications (see e.g. Habets \& Zwaan 1989), then it seems likely that many binary components will rotate faster than synchronously on the ZAMS. In that case, they will also have heavier MS convective cores than expected (see e.g. Decressin et al. 2009), which will eventually lead to heavier remnant masses. One thus obtains heavier WDs, and hence more systems of merging WDs which attain the required $1.4$~M$_{\odot}$ for a DD SN Ia. Figure~4 shows a theoretical DTD obtained for $\beta = 1$ with a 10\% increase in MS convective core mass, for the SD and DD scenario combined, since there is no reason why both scenarios could not be working together. This DTD agrees well, now both in morphological shape and in absolute number, with the observational DTD by Totani et al. (2008) and with the more recent one by Maoz et al. (2010).
\begin{figure*}
\centering
\includegraphics[width=10.75cm]{mennekens_fig3.eps}
\caption{DTDs for $\beta=1$ of the DD (solid black) and SD (dotted gray) scenario using the $\gamma$-formalism for CE. Observational data points of Totani et al. (2008) and Mannucci et al. (2005) (black circles).}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=10.75cm]{mennekens_fig4.eps}
\caption{DTD for $\beta=1$ of the SD and DD scenario combined using the $\alpha$-formalism for CE, and with a 10\% increase in MS convective core mass (solid black). Observational data points of Totani et al. (2008) and Mannucci et al. (2005) (black circles), as well as Maoz et al. (2010) (gray squares).}
\end{figure*}
\section{Conclusions}
We find (see also Mennekens et al. 2010) that the single degenerate scenario by itself is incompatible with the morphological shape of the observed delay time distribution of type Ia supernovae. Most double degenerate events are created through a quasi-conservative Roche lobe overflow, followed by a common envelope phase. The resulting critical dependence of the delay time distribution on the mass transfer efficiency during Roche lobe overflow and on the physics of common envelope evolution might be a way to find out more about these processes when more detailed observations become available.
\section{Introduction} \label{intro2}
Matrix completion, which recovers a low-rank or approximately low-rank matrix from a small fraction of its observed entries, appears in a variety of areas
such as collaborative filtering (\cite{rennie2005}), computer vision (\cite{weinberger2006}), positioning (\cite{montanari2010}), and
recommender systems (\cite{bennett2007}).
Early work in this field was done by \cite{achlioptas2001}, \cite{azar2001}, \cite{fazel2002}, \cite{srebro2004}, and \cite{rennie2005}.
Later, \cite{candes2009recht} introduced the technique of matrix completion
by minimizing the nuclear norm under convex constraints.
This opened up a significant overlap with compressed sensing (\cite{candes2006}, \cite{donoho2006}) and
led to accelerated research in matrix completion.
They and others (\cite{candes2009recht}, \cite{candes2010tao}, \cite{keshavan2010a}, \cite{gross2011}, \cite{recht2011}) showed that the technique can exactly recover a low-rank matrix in the noiseless case.
Many of the following works showed the approximate recovery of the low-rank matrix with the presence of noise (\cite{candes2010plan}, \cite{negahban2011}, \cite{koltchinskii2011}, \cite{rohde2011}).
Several other papers studied matrix completion in various settings (e.g. \cite{davenport2014}, \cite{negahban2012}) and
proposed different estimation procedures of matrix completion (\cite{srebro2004}, \cite{keshavan2009}, \cite{koltchinskii2011solo}, \cite{cai2013}, \cite{chatterjee2014}) than the ones by \cite{candes2009recht}.
In addition to the theoretical advances, a large number of algorithms have emerged (e.g. \cite{rennie2005}, \cite{cai2010}, \cite{keshavan2009}, \cite{mazumder2010}, \cite{hastie2014}). An overview is well summarized in \cite{mazumder2010} and \cite{hastie2014}.
Many matrix completion algorithms employ a thresholded singular value decomposition (SVD), which soft- or hard-thresholds the singular values.
The statistical literature has responded by investigating its theoretical optimality and strong empirical performance.
However, a key empirical difficulty in employing a thresholded SVD for matrix completion is choosing the right thresholding scheme and threshold level.
Depending on this choice,
the rank of the estimated low-rank matrix and the predicted values for unobserved entries can change widely.
Despite its importance, we lack understanding of how to choose the threshold level and of what bias or error thresholding eliminates.
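The soft- and hard-thresholding operations referred to above can be sketched generically as follows (an illustration of thresholded SVD, not the \AIs algorithm proposed below):

```python
import numpy as np

def soft_threshold_svd(X, lam):
    """Soft-thresholded SVD: shrink every singular value by lam, floor at 0."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def hard_threshold_svd(X, lam):
    """Hard-thresholded SVD: keep singular values above lam, zero the rest."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.where(s > lam, s, 0.0)) @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))
Z = soft_threshold_svd(X, lam=1.0)
# Thresholding can only lower the rank of the input matrix:
print(np.linalg.matrix_rank(Z) <= np.linalg.matrix_rank(X))  # True
```

Soft thresholding both truncates small singular values and shrinks the surviving ones, while hard thresholding only truncates; the choice of $\lambda$ controls the rank of the result.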
We propose a novel iterative matrix completion algorithm, \AI, which
recovers the underlying low-rank matrix from a few noisy entries via differentially and adaptively thresholded SVD.
Specifically, the proposed \AIs algorithm differentially thresholds the singular values and adaptively updates the threshold levels on every iteration.
As was the case with adaptive Lasso (\cite{zou2006}) and adaptive thresholding for sparse covariance matrix estimation (\cite{cai2011adaptive}),
the proposed thresholding scheme gives \AIs stronger empirical performances than the thresholding scheme
that uses a single thresholding parameter for all singular values throughout the iterations (e.g. \SIs (\cite{mazumder2010})).
Although \AIs employs multiple thresholding parameters changing over iterations,
we suggest specified values for the thresholding parameters that are theoretically-justified and data-dependent.
Hence, \AIs is free of the tuning problems associated with the choice of threshold levels.
Its single tuning parameter is the rank of the resulting estimator.
We suggest a way to choose the rank based on singular value gaps (for details, see Section \ref{realdata}).
This novel thresholding scheme makes \AIs an estimation procedure based on non-convex optimization,
for which theoretical guarantees are known to be limited.
To address this difficulty and to help understand the convergence behavior of \AI,
we introduce generalized-\SI, a simpler algorithm than \AI, and derive a sufficient condition under which it converges.
We then prove that \AIs behaves almost the same as generalized-\SI.
Numerical experiments and a real data analysis in Section \ref{sim} demonstrate the superior performance of \AIs over the existing \SI-type algorithms.
The rest of this paper is organized as follows.
Section \ref{setup} describes the model setup.
Section \ref{AI} introduces the proposed algorithm \AI.
Section \ref{generalSI} introduces a generalized-\SI, a simpler algorithm than \AI.
Section \ref{sim} presents numerical experiment results.
Section \ref{discuss} concludes the paper with discussion.
All proofs are collected in Section \ref{proofs2}.
\section{The model setup} \label{setup}
Suppose that we have an $\n\times\d$ matrix of rank $\rank$,
\begin{equation} \label{mod}
\M = \U\Lam\V^T,
\end{equation}
where by SVD,
$\U=(\U_1,\ldots,\U_\rank) \in \real^{\n \times \rank}$,
$\V=(\V_1,\ldots,\V_\rank) \in \real^{\d \times \rank}$,
$\Lam = \diag (\lam_1, \ldots, \lam_\rank) \in \real^{\rank \times \rank}$, and
$\lam_1\ge \ldots\ge \lam_\rank\ge0$.
The entries of $\M$ are corrupted by noise $\eps \in \real^{\n \times \d}$ whose entries are i.i.d. sub-Gaussian random variables with mean zero and variance $\sig^2$.
Hence, we can only observe $\Mf = \M + \eps$.
In many real-world applications, however, not all entries of $\Mf$ are observable.
Define $y \in \real^{\n \times \d}$ such that $y_{ij} = 1$ if the $(i,j)$-th entry of $\Mf$ is observed and $y_{ij} = 0$ otherwise.
The entries of $y$ are assumed to be i.i.d. Bernoulli($p$) and independent of the entries of $\epsilon$.
Then, the partially-observed noisy low-rank matrix $\Mp \in \real^{\n \times \d}$ is written as
\begin{align*}
\Mp_{ij} = \y_{ij} {\Mf}_{ij}
= \left\{\begin{matrix}
{\M}_{ij}+\eps_{ij} & \text{if observed ($\y_{ij}=1$)}\\
0 & \text{otherwise ($\y_{ij}=0$).}
\end{matrix}\right.
\end{align*}
Throughout the paper, we assume that $\rank \ll \d \le \n$ and that the entries of $\M$ are bounded by a positive constant $L$ in absolute value.
In this paper, we develop an iterative algorithm to recover $\M$ from $\Mp$ and investigate its theoretical properties and empirical performances.
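As a concrete illustration of this model setup, the data-generating process can be sketched in a few lines of numpy; the dimensions, rank, noise level, observation probability, and seed below are illustrative choices of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r, sigma, p = 200, 100, 3, 0.5, 0.3  # illustrative sizes

# Rank-r signal M (its SVD gives U, Lambda, V in the notation above).
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))

eps = sigma * rng.standard_normal((n, d))   # i.i.d. noise with variance sigma^2
y = rng.binomial(1, p, size=(n, d))         # i.i.d. Bernoulli(p) observation mask
Mp = y * (M + eps)                          # observed entries; zeros elsewhere
```

The zero-filled matrix `Mp` is exactly the object that the algorithms below take as input.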
\section{\AIs algorithm} \label{AI}
\subsection{Initialization} \label{initial}
We first introduce some notation.
Let a set $\Omega$ contain indices of the observed entries,
$y_{ij}=1 \Leftrightarrow (i,j) \in \Omega.$
Then, for any matrix $A \in \real^{\n\times\d}$, denote by $\P(A)$ the projection of $A$ onto $\Omega$ and by $\Portho(A)$ the projection of $A$ onto the complement of $\Omega$;
$$\[\P(A)\]_{ij} = \left\{\begin{matrix}
A_{ij} & \text{if } (i,j) \in \Omega\\
0 & \text{if } (i,j) \notin \Omega
\end{matrix}\right.
\quad\text{and}\quad
\[\Portho(A)\]_{ij} = \left\{\begin{matrix}
0 & \text{if } (i,j) \in \Omega\\
A_{ij} & \text{if } (i,j) \notin \Omega.
\end{matrix}\right.$$
That is, $\P(A) + \Portho(A) = A$.
We let $\bu_i(A)$ denote the $i$-th left singular vector of $A$,
$\bv_i(A)$ the $i$-th right singular vector of $A$, and $\blam_i(A)$ the $i$-th singular value of $A$
such that $\blam_1(A)\ge \ldots\ge \blam_\d(A)$.
The squared Frobenius norm is defined by $\left \| A \right \| _F ^2 = \tr\(A^T A\)$, the trace of $A^T A$, and
the nuclear norm by $\left \| A \right \| _\ast = \sum_{i=1}^\d \blam_i(A)$, the sum of the singular values of $A$.
For a symmetric matrix $A \in \real^{\n\times\n}$, $\diag(A)$ denotes the matrix with the diagonal elements of $A$ on its diagonal and zeros elsewhere.
Many of the iterative matrix completion algorithms (e.g. \cite{cai2010}, \cite{mazumder2010}, \cite{keshavan2009}, \cite{chatterjee2014}) in the current literature initialize with $\Mp$, where the unobserved entries begin at zero.
This initialization works well for algorithms that are based on convex optimization or that are robust to the initial value.
However, for algorithms that are based on non-convex optimization or that are sensitive to the initial value, filling the unobserved entries with zeros may not be a good choice.
\cite{cho2015} proposed a one-step consistent estimator, $\Mhat$,
that attains the minimax error rate (\cite{koltchinskii2011}), $\rank/\p\d$, and requires only two eigendecompositions.
\AIs employs the entries of this one-step consistent estimator instead of zeros as initial values of the unobserved entries.
Algorithm \ref{alg:initial} describes how to compute the initial $\Mhat$ of \AI.
The following theorem shows that $\Mhat$ achieves the minimax error rate.
\begin{algorithm}[h]
\caption{\;Initialization (\cite{cho2015})}
\begin{algorithmic}
\Require{$\Mp$, $\y$, and $r$}
\State
$\phat \gets \frac{1}{\n\d}\sum_{i=1}^\n \sum_{j=1}^\d \y_{ij}$
\State
${\Sig}_{\phat} \gets \Mp^T\Mp - (1-\phat) \diag(\Mp^T\Mp)$
\State
${\Sig}_{t \phat} \gets \Mp\Mp^T - (1-\phat) \diag(\Mp\Mp^T)$
\State
$\Vhat_i \gets \bv_i({\Sig}_{\phat}), \quad \forall i \in \{1,\ldots,\rank\}$
\State
$\Uhat_i \gets \bu_i({\Sig}_{t \phat}), \quad \forall i \in \{1,\ldots,\rank\}$
\State
$\alphatilde \gets \frac{1}{\d-\rank} \sum_{i=\rank+1}^\d \blam_i({\Sig}_{\phat})$
\State
$\hat\tau_i \gets \blam_i({\Sig}_{\phat}) - \frac{1}{\phat}\sqrt{\blam_i({\Sig}_{\phat}) - \alphatilde}, \quad \forall i \in \{1,\ldots,\rank\}$
\State
$\lamhat_i \gets \blam_i({\Sig}_{\phat}) - \hat\tau_i, \quad\quad\quad\quad\quad\quad\;\;\, \forall i \in \{1,\ldots,\rank\}$
\State
$\shat=(\shat_1,\ldots,\shat_\rank) \gets \argmin_{s \in \{-1,1\}^\rank} \norm{ \P\( \sum_{i=1}^\rank s_i \lamhat_i \Uhat_i \Vhat_i^T - \Mp \) }_F^2$
\State $\Mhat \gets \sum_{i=1}^\rank \shat_i \lamhat_i \Uhat_i \Vhat_i^{T}$
\\
\Return{$\Mhat$}
\end{algorithmic}
\label{alg:initial}
\end{algorithm}
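As a companion to Algorithm \ref{alg:initial}, the following numpy sketch carries out the same steps: the bias-corrected Gram matrices, their eigendecompositions, the de-biased singular values, and the brute-force sign search over $\{-1,1\}^\rank$. Variable names are ours, and the exhaustive sign search is feasible only for small $\rank$.

```python
import numpy as np
from itertools import product

def initialize(Mp, y, r):
    """One-step initializer following Algorithm 1 (a sketch of cho2015)."""
    p_hat = y.mean()

    # Bias-corrected Gram matrices (Sigma_phat and Sigma_t_phat above).
    G_v = Mp.T @ Mp - (1 - p_hat) * np.diag(np.diag(Mp.T @ Mp))
    G_u = Mp @ Mp.T - (1 - p_hat) * np.diag(np.diag(Mp @ Mp.T))

    # eigh returns ascending eigenvalues; reorder to descending.
    w_v, E_v = np.linalg.eigh(G_v)
    w_u, E_u = np.linalg.eigh(G_u)
    order_v = np.argsort(w_v)[::-1]
    lam_G = w_v[order_v]
    V = E_v[:, order_v[:r]]
    U = E_u[:, np.argsort(w_u)[::-1][:r]]

    # alpha-tilde: average of the bottom d - r eigenvalues; de-biased values.
    alpha = lam_G[r:].mean()
    lam_hat = np.sqrt(np.maximum(lam_G[:r] - alpha, 0.0)) / p_hat

    # Brute-force sign search over {-1, 1}^r on the observed entries.
    mask = y.astype(bool)
    best, best_err = None, np.inf
    for s in product([-1.0, 1.0], repeat=r):
        Z = (U * (np.array(s) * lam_hat)) @ V.T
        err = np.sum((Z[mask] - Mp[mask]) ** 2)
        if err < best_err:
            best, best_err = Z, err
    return best
```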
\begin{assumption}\label{assume1}
~
\begin{enumerate}
\item [(1)] $\p\d/\log\n \to \infty$ and $\n,\d \to \infty$ with $\d \le \n \le e^{\d^{\beta}}$, where $\beta<1$ is a constant free of $\n$, $\d$, and $\p$;
\item [(2)] $\lam_{i} = b_{i} \sqrt {\n\d}$ for all $i =1, \ldots,\rank$, where $\{b_{i}\}_{i =1, \ldots,\rank}$ are positive bounded values;
\item [(3)] $b_{i} > b_{i+1}$ for all $i =1, \ldots,\rank,$ where $b_{\rank+1}=0$;
\item [(4)] $\lim_{\n,\d\to\infty} \prob\Big(\min_{\s \in \{-1,1\}^\rank} \;
\big\| \P\big( \sum_{i=1}^\rank s_i \lamhat_i \Uhat_i \Vhat_i^T - \Mp \big) \big\|_F^2$
\\ \hspace*{7.8cm} $< \big\| \P\big( \sum_{i=1}^\rank s_{0i} \lamhat_i \Uhat_i \Vhat_i^T - \Mp \big) \big\|_F^2 \Big) = 0$,
\\ where $\s=(\s_1,\ldots,\s_\rank)$ and $\s_{0i} = \text{sign}(\langle \Vhat_i, \V_i \rangle) \;\text{sign}(\langle \Uhat_i, \U_i \rangle)$ for $i =1,
\ldots,\rank$.
\end{enumerate}
\end{assumption}
\begin{remark}
Under the setting where the rank $\rank$ is fixed as in this paper,
Assumption \ref{assume1}(2) implies that the underlying low-rank matrix $\M$ is dense.
More specifically,
note that the squared Frobenius norm equals both the sum of all squared entries of a matrix and the sum of its squared singular values.
Also, note that $\smallnorm{\M}_F^2 = \sum_{i=1}^{\rank}\blam_i^2(\M) = c \n\d$ for some constant $c>0$ by Assumption \ref{assume1}(2).
Thus, the sum of all squared entries of $\M$ has order $\n\d$,
which means that a non-vanishing proportion of the entries of $\M$ carries signals that do not vanish with the dimensionality (see \cite{fan2013}).
For more discussion, see Remark 2 in \cite{cho2015}.
\end{remark}
\begin{remark}
The singular vectors, $\{\Uhat_i\}_{i=1}^\rank$ and $\{\Vhat_i\}_{i=1}^\rank$, that compose $\Mhat$
are consistent estimators of $\U$ and $\V$ up to signs (for details, see \cite{cho2015}).
Hence, when combining them with $\{\lamhat_i\}_{i=1}^\rank$ to reconstruct $\Mhat$, a sign ambiguity arises.
Assumption \ref{assume1}(4) assures that as $\n$ and $\d$ increase,
the probability of choosing different signs than the true signs, $\{\s_{0i}\}_{i=1}^\rank$, goes to zero.
Given the asymptotic consistency of $\{\Uhat_i\}_{i=1}^\rank$, $\{\Vhat_i\}_{i=1}^\rank$, and $\{\lamhat_i\}_{i=1}^\rank$,
this is not an unreasonable assumption to make.
\end{remark}
\begin{proposition}(Theorem 4.4 in \cite{cho2015}) \label{MhatConsistent}
Under Assumption \ref{assume1} and the model setup in Section \ref{setup},
$\Mhat$ is a consistent estimator of $\M$. In particular,
\begin{equation*}
\frac{1}{\n\d}\smallnorm{ \Mhat - \M }_F^2 = o_p\(\frac{h_\n }{\p\d}\)\,,
\end{equation*}
where $h_\n$ diverges very slowly with the dimensionality, for example, $\log( \log \d)$.
\end{proposition}
\begin{remark} \label{minimaxRate}
Since $h_\n$ in Proposition \ref{MhatConsistent} can be any quantity that diverges slowly with the dimensionality, the convergence rate of $\Mhat$ can be thought of as $1/\p\d$. Under the setting where the rank of $\M$ is fixed, as in this paper, this rate matches the minimax error rate, $\rank/\p\d$, found in \cite{koltchinskii2011}.
\end{remark}
Using $\Mhat$ to initialize \AIs has two major advantages.
First, since $\Mhat$ is already a consistent estimator of $\M$ achieving the minimax error rate,
it allows the subsequent iterates of \AIs to be consistent estimators of $\M$ achieving the minimax error rate as well (see Theorem \ref{thm:mathInduct}).
Second, because \AIs is based on a non-convex optimization problem (see Section \ref{generalSI}), its convergence may depend on initial values.
$\Mhat$ provides \AIs a suitable initializer.
\subsection{Adaptive thresholds} \label{adapt-thresh}
To motivate the novel thresholding scheme of \AI, we first consider the case where a fully-observed noisy low-rank matrix is available.
Specifically, suppose that the probability of observing each entry, $\p$, is $1$ and thus $\Mf = \M+\eps$ is observed.
Under the model setup in Section \ref{setup} we can easily show that
\begin{equation} \label{expect-Mf}
\expect (\Mf^T \Mf) = \M^T \M + \n\sig^2 I_\d
\quad\text{and}\quad
\expect (\Mf \Mf^T) = \M \M^T + \d\sig^2 I_\n,
\end{equation}
where $I_{\d}$ and $I_{\n}$ are identity matrices of size $\d$ and $\n$, respectively.
This shows that the eigenvectors of $\expect (\Mf^T \Mf)$ and $\expect (\Mf \Mf^T)$ are the same as the right and left singular vectors of $\M$.
Also, the top $\rank$ eigenvalues of $\expect (\Mf^T \Mf)$ consist of the squared singular values of $\M$ and a noise, $\n\sig^2$,
the latter of which is the same as the average of the bottom $\d-\rank$ eigenvalues of $\expect (\Mf^T \Mf)$.
In light of this, we want the estimator of $\M$ based on $\Mf$ to keep the first $\rank$ singular vectors of $\Mf$ as they are, but to adjust for the bias occurring in the singular values of $\Mf$.
Thus, the resulting estimator is
\begin{equation} \label{estor-M-fully}
\Mhat^F = \sum_{i=1}^\rank \sqrt{\blam_i^2 (\Mf)-\balp} \; \bu_i (\Mf) \bv_i (\Mf)^T,
\quad\text {where }
\balp = \frac{1}{\d-\rank}\sum_{i=\rank+1}^\d \blam_i^2 (\Mf).
\end{equation}
A simple extension of Proposition \ref{MhatConsistent} shows that
$\Mhat^F$ achieves the best possible minimax error rate of convergence, $1/\d$, since $\p=1$.
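For the fully-observed case, the estimator $\Mhat^F$ in \eqref{estor-M-fully} amounts to a de-biased rank-$\rank$ truncation; the following sketch estimates the noise term $\balp$ from the bottom $\d-\rank$ squared singular values, as in the display above.

```python
import numpy as np

def estimate_fully_observed(Mf, r):
    """De-biased rank-r estimate from a fully observed noisy matrix."""
    U, s, Vt = np.linalg.svd(Mf, full_matrices=False)
    alpha = np.mean(s[r:] ** 2)                 # noise level from bottom singular values
    s_hat = np.sqrt(np.maximum(s[:r] ** 2 - alpha, 0.0))
    return (U[:, :r] * s_hat) @ Vt[:r]
```

Shrinking each of the top $\rank$ singular values toward $\sqrt{\blam_i^2-\balp}$ removes the upward bias that the noise induces in the leading singular values of $\Mf$.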
Now consider the cases where a partially-observed noisy low-rank matrix $\Mp$ is available.
For each iteration $t\ge 1$, we fill out the unobserved entries of $\Mp$ with the corresponding entries of the previous iterate $Z_{t}$,
treat the completed matrix $\Mtilde_{t}=\P(\Mp)+\Portho(Z_{t})$ as if it is a fully-observed matrix $\Mf$,
and find the next iterate $Z_{t+1}$ in the same way that we found $\Mhat^F$ from $\Mf$ in \eqref{estor-M-fully};
\begin{equation} \label{estor-Mhat-t}
Z_{t+1} = \sum_{i=1}^\rank \sqrt{\blam_i^2 (\Mtilde_{t})-\alphatilde_{t}} \; \bu_i (\Mtilde_{t}) \bv_i (\Mtilde_{t})^T,
\quad\text {where }
\alphatilde_{t} = \frac{1}{\d-\rank}\sum_{i=\rank+1}^\d \blam_i^2 (\Mtilde_{t}).
\end{equation}
Note that the difference in \eqref{estor-Mhat-t} from \eqref{estor-M-fully} is in the usage of $\Mtilde_{t}$ instead of $\Mf$.
Hence, the performance of \AIs may depend on how close $\P(Z_t)$ is to $\P(\M)$.
Algorithm \ref{alg:AI} summarizes these computing steps of \AIs continued from Algorithm \ref{alg:initial}.
\begin{algorithm}[ht]
\caption{\;\AI}
\begin{algorithmic}
\Require{$\Mp$, y, $r$, and $\varepsilon > 0$}
\State $Z_1 \gets \Mhat$ \Comment{from Algorithm \ref{alg:initial}}
\Repeat \;\; for $t=1,2,\ldots$
\State
$\Mtilde_{t} \gets \P(\Mp)+\Portho(Z_{t})$
\State
$V_i^{(t)} \gets \bv_i (\Mtilde_t), \hspace*{0.5cm} \forall i \in \{1,\ldots,\rank\}$
\State
$U_i^{(t)} \gets \bu_i (\Mtilde_t), \hspace*{0.5cm} \forall i \in \{1,\ldots,\rank\}$
\State
$\alphatilde_{t} \gets \frac{1}{\d-\rank} \sum_{i=\rank+1}^{\d} \blam_{i}^2 (\Mtilde_{t})$
\State
$\tau_{t,i} \gets \blam_i (\Mtilde_t) - \sqrt{\blam_i^2 (\Mtilde_t) - \alphatilde_{t}}, \hspace*{1.9cm} \forall i \in \{1,\ldots,\rank\}$ \Comment{Adaptive thresholds}
\State
$\lam_i^{(t)} \gets \blam_i (\Mtilde_t) - \tau_{t,i} \(= \sqrt{\blam_i^2 (\Mtilde_t) - \alphatilde_{t}}\), \quad \forall i \in \{1,\ldots,\rank\}$
\State
$Z_{t+1} \gets \sum_{i=1}^\rank \lam_i^{(t)} U_i^{(t)} V_i^{(t)T}$
\State
$t \gets t + 1$
\Until{$\norm{Z_{t+1}-Z_{t}}_F^2/\norm{Z_{t}}_F^2 \le \varepsilon$}
\\
\Return{$Z_{t+1}$}
\end{algorithmic}
\label{alg:AI}
\end{algorithm}
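Algorithm \ref{alg:AI} admits a compact numpy rendering; in the sketch below the initializer $Z_1$ is passed as an argument (the paper uses $\Mhat$ from Algorithm \ref{alg:initial}), and the iteration cap is our own safeguard.

```python
import numpy as np

def adaptive_impute(Mp, y, r, Z1, tol=1e-7, max_iter=200):
    """AdaptiveImpute iterations (a sketch of Algorithm 2)."""
    mask = y.astype(bool)
    Z = Z1.copy()
    for _ in range(max_iter):
        Mt = np.where(mask, Mp, Z)              # fill unobserved entries with Z_t
        U, s, Vt = np.linalg.svd(Mt, full_matrices=False)
        alpha = np.mean(s[r:] ** 2)             # alpha-tilde_t
        lam = np.sqrt(np.maximum(s[:r] ** 2 - alpha, 0.0))  # adaptively thresholded
        Z_new = (U[:, :r] * lam) @ Vt[:r]
        if np.sum((Z_new - Z) ** 2) <= tol * np.sum(Z ** 2):
            return Z_new
        Z = Z_new
    return Z
```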
The following theorem illustrates that the iterates of \AIs retain the statistical performance of the initializer $\Mhat$.
\begin{assumption} \label{assume3}
For all $i =1, \ldots,\rank, \;\text{sign}(\langle \bu_i(\Mtilde_t),\U_i \rangle)=\text{sign}(\langle \bv_i(\Mtilde_t),\V_i \rangle).$
\end{assumption}
\begin{theorem} \label{thm:mathInduct}
Under Assumptions \ref{assume1}-\ref{assume3} and the model setup in Section \ref{setup}, we have for any fixed value of $t$,
$$\frac{1}{\n\d}\norm{Z_t -\M}_F^2 = o_p\(\frac{h_\n}{\p\d}\), \ \mbox{ as $n,d \rightarrow \infty$ with any } h_\n \to \infty$$
where $h_\n$ diverges very slowly with the dimensionality, for example, $\log( \log \d)$.
\end{theorem}
\begin{remark}
Similarly to Remark \ref{minimaxRate}, since $h_\n$ is a quantity diverging very slowly,
the convergence rate of $Z_t$ can be thought of as $1/\p\d$, which matches the minimax error rate, $\rank/\p\d$ (\cite{koltchinskii2011}).
\end{remark}
\subsection{Non-convexity of \AI}
We can view \AIs as an estimation method via non-convex optimization.
For $t\ge1$, define
\begin{equation} \label{taudef}
\tau_{t,i}=
\left\{\begin{array}{ll}
\blam_i (\Mtilde_t) - \sqrt{\blam_i^2 (\Mtilde_t) - \alphatilde_{t}}, & i \le \rank \\
\blam_{\rank+1} (\Mtilde_t), & i >\rank\end{array}\right.,
\end{equation}
where $\alphatilde_{t} = \frac{1}{\d-\rank} \sum_{i=\rank+1}^{\d} \blam_{i}^2 (\Mtilde_{t})$ and $\Mtilde_t = \P\(\Mp\) + \Portho\(Z_t\)$.
Then, in each iteration \AIs provides a solution to the problem
\begin{equation} \label{eq:Qt}
\min_{Z\in\real^{\n\times\d}} \; \frac{1}{2\n\d}\,\smallnorm{\Mtilde_t - Z}_F^2
+ \sum_{i=1}^\d \frac{\tau_{t,i}}{\sqrt{\n\d}} \frac{\blam_i (Z)}{\sqrt{\n\d}}.
\end{equation}
Note that the threshold parameters, $\tau_{t,i}$, have dependence on both the $i$-th singular value and the $t$-th iteration.
The following theorem provides an explicit solution to \eqref{eq:Qt}.
\begin{theorem} \label{solutionZ}
Let $X$ be an $\n\times\d$ matrix and let $\n\ge\d$. The optimization problem
\begin{eqnarray} \label{ProblemZ}
\min_Z \;\frac{1}{2\n\d}\norm{X-Z}_F^2 + \sum_{i=1}^\d \frac{\tau_i}{\sqrt{\n\d}} \frac{\blam_i(Z)}{\sqrt{\n\d}}
\end{eqnarray}
has a solution which is given by
\begin{eqnarray} \label{Zhat}
\hat Z = \Phi (\Delta-\btau)_+\Psi ^T,
\end{eqnarray}
where $\Phi\Delta\Psi^T$ is the SVD of $X$, $\btau = \diag(\tau_1,\ldots,\tau_\d)\in\real^{\d\times\d}$, $(\Delta-\btau)_+ = \diag\big( (\blam_1(X)-\tau_1)_+,\ldots, (\blam_\d(X)-\tau_\d)_+\big)\in\real^{\d\times\d}$, and $c_+ = \max(c,0)$ for any $c\in\real$.
\end{theorem}
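Computationally, \eqref{Zhat} is a soft-thresholding of the singular values with a separate threshold for each; a minimal numpy sketch:

```python
import numpy as np

def differential_soft_threshold(X, tau):
    """Return Phi (Delta - tau)_+ Psi^T, the solution hat-Z of the theorem above."""
    Phi, delta, PsiT = np.linalg.svd(X, full_matrices=False)
    shrunk = np.maximum(delta - np.asarray(tau), 0.0)  # (lam_i(X) - tau_i)_+
    return (Phi * shrunk) @ PsiT
```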
\begin{remark} \label{defineTau}
To see how Theorem \ref{solutionZ} provides a solution to \eqref{eq:Qt},
let $X = \Mtilde_t$ and $\tau_i=\tau_{t,i}$ as specified in \eqref{taudef}.
Then, \eqref{eq:Qt} and \eqref{ProblemZ} become the same and $\hat Z$ in \eqref{Zhat} gives the explicit form of the $(t+1)$-th iterate, $Z_{t+1}$, in Algorithm \ref{alg:AI}.
\end{remark}
If all of the thresholding parameters in \eqref{eq:Qt} are equal,
that is, $\tau=\tau_{t,i}$ for all $1 \le i \le \d$ and $t\ge 1$,
the optimization problem \eqref{eq:Qt} becomes equivalent to that of \SIs (\cite{mazumder2010}) and
Theorem \ref{solutionZ} provides an iterative solution to it.
While \SIs requires finding the right value of a single thresholding parameter $\tau$ by cross validation (CV),
which is time-consuming and often lacks a straightforward validation criterion,
\AIs suggests specific values for the thresholding levels as in \eqref{taudef}.
The novel thresholding scheme of \AIs together with the rank constraint
results in superior empirical performances over the existing \SI-type algorithms (see Section \ref{sim}).
The thresholding scheme of \AIs can be viewed as solving a non-convex optimization problem,
since at every iteration it differentially and adaptively thresholds the singular values.
As \cite{hastie2014} noted for related matrix completion methods based on non-convex optimization,
it is hard to provide a direct convergence guarantee for \AI.
So, in the following section we introduce generalized-\SI, an algorithm simpler than \AIs and yet still non-convex,
and investigate its asymptotic convergence.
This hints at the convergence behavior of \AIs in the asymptotic sense.
\section{Generalized \SI} \label{generalSI}
Generalized-\SIs is an algorithm which iteratively solves the problem,
\begin{equation} \label{eq:Q}
\min_{Z\in\real^{\n\times\d}} \; Q_{\tau}(Z|Z_{t}^g) := \frac{1}{2\n\d}\norm{\P\(\Mp\) + \Portho\(Z_t^g\) - Z}_F^2
+ \sum_{i=1}^\d \frac{\tau_{i}}{\sqrt{\n\d}} \frac{\blam_i (Z)}{\sqrt{\n\d}},
\end{equation}
to ultimately solve the optimization problem,
\begin{equation} \label{eq:f}
\min_{Z\in\real^{\n\times\d}} \; f_{\tau}(Z) := \frac{1}{2\n\d}\norm{\P(\Mp) - \P(Z)}_F^2
+ \sum_{i=1}^\d \frac{\tau_{i}}{\sqrt{\n\d}} \frac{\blam_i (Z)}{\sqrt{\n\d}}.
\end{equation}
Note that generalized-\SIs differentially penalizes the singular values, but the thresholding parameters do not change over iterations.
The iterative solutions of generalized-\SIs are denoted by $Z^{g}_{t+1}:= \argmin_{Z\in\real^{\n\times\d}} Q_{\tau}(Z|Z_{t}^g)$ for $t\ge 1$ and
Theorem \ref{solutionZ} provides a closed form of $Z^{g}_{t+1}$.
If $\tau_i = \tau$ for all $1\le i\le \d$, generalized-\SIs is equivalent to \SIs and both \eqref{eq:Q} and \eqref{eq:f} become convex problems.
However, by differentially penalizing the singular values, generalized-\SIs ends up solving a non-convex optimization problem.
Theorem \ref{thm:Zsolvesf} below shows that despite the non-convexity of generalized-\SI,
the iterates of generalized-\SI, $\{Z^g_t\}_{t\ge 1}$, converge to a solution of problem \eqref{eq:f} under certain conditions.
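Generalized-\SIs itself iterates a fill-then-threshold step, applying Theorem \ref{solutionZ} with a fixed threshold vector $\tau$; in the following sketch the stopping rule and iteration cap are our own choices.

```python
import numpy as np

def generalized_soft_impute(Mp, y, tau, Z1, tol=1e-7, max_iter=500):
    """Iterate Z_{t+1} = argmin_Z Q_tau(Z | Z_t), thresholds fixed over iterations."""
    mask = y.astype(bool)
    Z = Z1.copy()
    for _ in range(max_iter):
        Mt = np.where(mask, Mp, Z)                    # P(Mp) + P_perp(Z_t)
        U, s, Vt = np.linalg.svd(Mt, full_matrices=False)
        Z_new = (U * np.maximum(s - np.asarray(tau), 0.0)) @ Vt
        if np.sum((Z_new - Z) ** 2) <= tol * max(np.sum(Z ** 2), 1.0):
            return Z_new
        Z = Z_new
    return Z
```

With a constant $\tau$ this reduces to \SI.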
\begin{assumption} \label{assume2}
Let $\Mtilde_{t}^g = \P(\Mp) + \Portho(Z_t^g)$ and $D_t^g := \Mtilde_{t}^g - Z^g_{t+1}$. Then,
$$\frac{1}{nd}\norm{D_t^g - D_{t+1}^g}_F^2 +\frac{2}{nd}\langle D_t^g - D_{t+1}^g,Z^g_{t+1} - Z^g_{t+2} \rangle \ge 0 \quad\text{for all }\; t\ge1.$$
\end{assumption}
\begin{theorem} \label{thm:Zsolvesf}
Let $Z_\infty$ be a limit point of the sequence $Z_t^g$.
Under Assumption \ref{assume2}, if the minimizer $Z^s$ of \eqref{eq:f} satisfies
\begin{eqnarray}\label{assume4}
Z^s \in \bigg \{ Z \in \mathbb{R}^{n \times d} : \sum_{i=1}^d \tau_i \lam_i (Z) \geq \sum_{i=1}^d \tau_i \lam_i (Z_\infty) + \langle (Z-Z_\infty), D_\infty \rangle \bigg \},
\end{eqnarray}
we have $f_{\tau}(Z_{\infty}) = f_{\tau}(Z ^s)$ and $\lim_{t\rightarrow \infty} f_{\tau}(Z^g_{t}) = f_{\tau}(Z^s)$.
\end{theorem}
\begin{remark} \label{connectTOsoftimpute}
If $\tau_i = \tau$ for all $i$, as in the case of \SI, Assumption \ref{assume2} and \eqref{assume4} are always satisfied because
$\frac{1}{\tau}D_t^g$ belongs to the subgradient of $\norm{Z_{t+1}^g}_\ast$.
\end{remark}
\begin{remark}
If $Z^s$ is unique, then
generalized-\SIs finds the global minimum point of \eqref{eq:f} by Theorem \ref{thm:Zsolvesf}.
\end{remark}
Generalized-\SIs resembles \AIs in the sense that both employ different thresholding parameters for the $\blam_i (Z)$'s.
However, \AIs updates these tuning parameters at every iteration while generalized-\SIs does not.
The following lemmas show that despite this difference,
the convergent behavior of \AIs is asymptotically close to that of generalized-\SI.
\begin{lemma} \label{lem:undateTuning}
Under Assumptions \ref{assume1}-\ref{assume3} and the model setup in Section \ref{setup}, we have
\begin{eqnarray*}
\left| \frac{\tau_{t,i}}{\sqrt{\n\d}} - \frac{\tau_{t+1,i}}{\sqrt{\n\d}} \right|
= o_p\(\sqrt{\frac{h_\n}{\p\d}}\) \quad\text{for }\; i=1,\ldots,\d,
\end{eqnarray*}
where $\tau_{t,i}$ is defined in \eqref{taudef}.
\end{lemma}
\begin{lemma} \label{corol:assume2}
Let $D_t := \Mtilde_{t} - Z_{t+1}$, where $\Mtilde_{t}$ and $Z_{t}$ are as defined in Algorithm \ref{alg:AI}.
Then, under Assumptions \ref{assume1}-\ref{assume3} and the model setup in Section \ref{setup}, we have
$$\frac{1}{nd}\norm{D_t - D_{t+1}}_F^2 +\frac{2}{nd}\langle D_t - D_{t+1},Z_{t+1} - Z_{t+2} \rangle + o_p\(\frac{h_\n}{\p\d}\) \ge 0.$$
\end{lemma}
Lemma \ref{lem:undateTuning} shows that for large $\n$ and $\d$,
the thresholding parameters of \AIs are stable between iterations, so that \AIs behaves similarly to generalized-\SI.
Lemma \ref{corol:assume2} shows how Assumption \ref{assume2} is adapted to \AI,
and it suggests that \AIs may satisfy Assumption \ref{assume2} asymptotically.
Although this still does not guarantee the convergence of \AI, the numerical results below support this possibility.
\section{Numerical results} \label{sim}
In this section, we conducted simulations and a real-data analysis to compare \AIs for estimating $\M$ with the four different versions of \SI:
\begin{enumerate}
\item \AI: the proposed algorithm, as summarized in Algorithm \ref{alg:AI};
\item \SI: the original \SIs algorithm (\cite{mazumder2010});
\item \emph{softImpute-Rank}: \SIs with rank restriction (\cite{hastie2014});
\item \emph{softImpute-ALS}: \emph{Maximum-Margin Matrix Factorization} (\cite{hastie2014});
\item \emph{softImpute-ALS-Rank}: \emph{rank-restricted Maximum-Margin Matrix Factorization} in Algorithm 3.1 (\cite{hastie2014}).
\end{enumerate}
\emph{SoftImpute} algorithms were implemented with the \texttt{R} package, \texttt{softImpute} (\cite{hastie2015}).
The R code for \AIs is available at \url{https://github.com/chojuhee/hello-world/blob/master/adaptiveImpute_Rfunction}.
In this R code, we made two adjustments from Algorithms \ref{alg:initial} and \ref{alg:AI} for technical reasons.
First, in almost all real-world applications that need matrix completion, the entries of $\M$ are bounded below and above by constants $L_1$ and $L_2$ such that
$$ L_1 \le {\M}_{ij} \le L_2, $$
and values outside this range are not meaningful.
So, after each iteration of \AI, $t\ge1$, we replace
the values of $Z_t$ that are smaller than $L_1$ with $L_1$ and
the values of $Z_t$ that are greater than $L_2$ with $L_2$.
Second, the cardinality of the set $\{-1,1\}^\rank$ that we search over to find $\shat$ in Algorithm \ref{alg:initial} grows exponentially in $\rank$.
Hence, finding $\shat$ quickly becomes a computational bottleneck of \AIs and is even infeasible for large $\rank$.
We suggest two possible solutions to this problem.
One solution is to find $\shat$ by computing
$\shat_{i} = \text{sign}(\langle \Vhat_i, \bv_i(\Mp) \rangle) \; \text{sign}(\langle \Uhat_i, \bu_i(\Mp) \rangle) \text{ for } i=1,\ldots,\rank$.
Note that if we use $\V_i$ and $\U_i$ instead of $\bv_i(\Mp)$ and $\bu_i(\Mp)$, this gives us the true sign $s_0$ under Assumption \ref{assume1}.
The other solution is to use a linear regression.
Let a vector of the observed entries of $\Mp$ be the dependent variable and
let a vector of the corresponding entries of $\lamhat_i \Uhat_i \Vhat_i^T$ be the $i$-th column of the design matrix for $i=1,\ldots,\rank$.
Then, we set $\shat$ to be the coefficients of the regression line whose intercept is forced to be 0.
The difference between the results of these two methods is negligible.
In the following experiment, we report only the results of the former solution for simplicity,
while the R code provided at \url{https://github.com/chojuhee/hello-world/blob/master/adaptiveImpute_Rfunction} implements both solutions.
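The regression-based sign recovery described above can be sketched as follows; here `lam_hat`, `U`, and `V` stand for $\{\lamhat_i\}$, $\{\Uhat_i\}$, and $\{\Vhat_i\}$ from Algorithm \ref{alg:initial}, and $\shat$ is taken as the signs of the no-intercept least-squares coefficients.

```python
import numpy as np

def signs_by_regression(Mp, y, lam_hat, U, V):
    """Recover the sign vector s-hat by regressing the observed entries of Mp
    on the observed entries of each rank-one component lam_i * U_i V_i^T."""
    mask = y.astype(bool)
    X = np.column_stack([
        (lam_hat[i] * np.outer(U[:, i], V[:, i]))[mask] for i in range(U.shape[1])
    ])
    coef, *_ = np.linalg.lstsq(X, Mp[mask], rcond=None)  # intercept forced to 0
    return np.sign(coef)
```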
\subsection{Simulation study}
To create $\M=A B^{T} \in \real^{\n\times\d}$,
we sampled $A \in \mathbb{R}^{\n \times r}$ and $B \in \mathbb{R}^{\d \times r}$ with i.i.d. uniform$[-5,5]$ entries,
and a noise matrix $\epsilon \in \mathbb{R}^{n \times d}$ with i.i.d. $\mathcal{N}(0,\sigma^2)$ entries.
Then, each entry of $\M + \epsilon$ was observed independently with probability $\p$.
Across simulations, $\n =1700$, $\d = 1000$, $r \in \{5,10,20,50\}$, $\sigma$ varies from 0.1 to 50, and $\p$ varies from 0.1 to 0.9.
For each simulation setting, the data was sampled 100 times and the errors were averaged.
To evaluate the performance of the algorithms, we measured three types of errors: test, training, and total.
The test error, $\mbox{\texttt{Test}}(\Mhat) = \smallnorm{\Portho(\Mhat-\M)}_F^2/\smallnorm{\Portho(\M)}_F^2$,
is the distance between the estimate $\Mhat$ and the parameter $\M$ measured on the unobserved entries;
the training error, $\mbox{\texttt{Training}}(\Mhat) = \smallnorm{\P(\Mhat-\M)}_F^2/\smallnorm{\P(\M)}_F^2$,
the distance measured on the observed entries; and
the total error, $\mbox{\texttt{Total}}(\Mhat) = \smallnorm{\Mhat-\M}_F^2/\smallnorm{\M}_F^2$,
the distance measured on all entries.
For ease of comparison, Figures \ref{relEffvsp} and \ref{relEffvsSigma2} plot the relative efficiencies with respect to \SI-Rank.
For example, the relative test efficiency of \AIs with respect to \SI-Rank is defined as
$\mbox{\texttt{Test}}(\hat M_{rank}) / \mbox{\texttt{Test}}(\hat M_{adapt})$,
where $\hat M_{adapt}$ is an estimate of \AIs and $\hat M_{rank}$ is an estimate of \emph{softImpute-Rank}.
The relative total and training efficiencies with respect to \SI-Rank are defined similarly.
We used the best tuning parameter for the algorithms in comparison.
Specifically, for algorithms with rank restriction (including \AI), we provided the true rank (i.e. 5, 10, 20, or 50).
For \SI-type algorithms, an oracle tuning parameter was chosen to minimize the total error.
\begin{figure}[p]
\centering
\includegraphics[width=5.3in]{figure1.pdf}
\caption{The relative efficiency plotted against the probability of observing each entry, $\p$, when $\sig=1$. Training errors are measured over the observed entries, test errors over the unobserved entries, and total errors over all entries.}
\label{relEffvsp}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=5.3in]{figure2.pdf}
\caption{Change of the absolute errors when the probability of observing each entry, $\p$ increases and $\sig=1$.}
\label{absEffvsp}
\end{figure}
Figure \ref{relEffvsp} shows the change of the relative efficiencies as the probability of observing each entry, $\p$, increases with $\sig=1$.
Three columns of plots in Figure \ref{relEffvsp} correspond to three different types of errors and four rows of plots to four different values of the rank.
In all cases, \AIs outperforms the competitors and works especially better when $\p$ is small.
Among \SI-type algorithms, the algorithms with rank constraint (i.e. \SI-Rank and \SI-ALS-Rank) perform better than the ones without (i.e. \SIs and \SI-ALS).
Figure \ref{absEffvsp} shows the change of the absolute errors that are used to compute relative efficiencies in Figure \ref{relEffvsp} as the probability of observing each entry, $\p$, increases.
Figure \ref{relEffvsSigma2} shows the change of the log relative efficiencies as the standard deviation (SD) of each entry of $\eps$, $\sigma$, increases with $\p=0.1$.
When the noise level is below 15, \AIs outperforms the competitors, but when it exceeds 15, \SI-type algorithms start to outperform \AI.
Hence, \SI-type algorithms are more robust to large noise than \AI.
This may be because, when large noise dominates the signal, the conditions for convergence presented in Section \ref{generalSI} are not satisfied.
In real-life applications, however, noise levels that dominate the signal are uncommon.
Figure \ref{absEffvsSigma2} shows the change of the absolute errors that are used to compute relative efficiencies in Figure \ref{relEffvsSigma2}.
\begin{figure}[p]
\centering
\includegraphics[width=5.3in]{figure3.pdf}
\caption{The log relative efficiency plotted against the SD of each entry of $\eps$ when $\p=0.1$. Training errors are measured over the observed entries, test errors over the unobserved entries, and total errors over all entries.}
\label{relEffvsSigma2}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=5.3in]{figure4.pdf}
\caption{Change of the absolute errors when the SD of each entry of $\eps$ increases and $\p=0.1$.}
\label{absEffvsSigma2}
\end{figure}
Figure \ref{convergence} shows convergence of the iterates of \AIs to the underlying low-rank matrix over iterations;
that is, the change of log $\mbox{\texttt{Total}}(Z_t), \mbox{\texttt{Training}}(Z_t)$, and $\mbox{\texttt{Test}}(Z_t)$ errors as $t$ increases.
Across all plots, $\n =1700$, $\d = 1000$, $\p=0.1$, and the errors were averaged over 100 replicates.
In all cases, we observe that \AIs converges well.
In particular, the smaller the noise level and/or the rank, the faster \AIs converges.
\begin{figure}[p]
\centering
\includegraphics[width=5.4in]{figure5.pdf}
\caption{Convergence of the iterates of \AIs to the underlying low-rank matrix.
In all plots, $\n =1700$, $\d = 1000$, $\p=0.1$, and all points were averaged over 100 replicates. }
\label{convergence}
\end{figure}
\subsection{A real data example} \label{realdata}
We applied \AIs and the competing methods to a real data set, MovieLens 100k (\cite{movielens100k}).
We used the 5 training and 5 test data sets from 5-fold CV that are publicly available (\cite{movielens100k}).
For the rank used in \AIs and \SI-type algorithms with rank constraint, we chose 3 based on a scree plot (Figure \ref{screePlot}).
Lemma 2 in \cite{cho2015} provides justification of using the scree plot and the singular value gap to choose the rank.
\begin{figure}[!ht]
\centering
\includegraphics[width=6in]{figure7.pdf}
\caption{Log of the top 50 singular values of the MovieLens 100k data matrix (\cite{movielens100k}). }
\label{screePlot}
\end{figure}
For the thresholding parameters for \SI-type algorithms, we chose the optimal values which result in the smallest test errors.
The test errors were measured by normalized mean absolute error (NMAE) (\cite{herlocker2004}),
$$\frac{1}{(M_{max}-M_{min}) | \Omega_{test} |} \sum_{(i,j) \in \Omega_{test}} |\hat{M}_{ij} - M_{ij}|,$$
where the set $\Omega_{test}$ contains indices of the entries in test data, $|\Omega_{test}|$ is the cardinality of $\Omega_{test}$,
$M_{max} = \max\{ M_{ij} : M_{ij} \neq 0 \}$ is the largest nonzero entry of $M$, and
$M_{min} = \min\{ M_{ij} : M_{ij} \neq 0 \}$ is the smallest nonzero entry of $M$.
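The NMAE can be computed in one line; in the sketch below $M_{max}$ and $M_{min}$ are passed in as arguments (for MovieLens ratings they would be 5 and 1).

```python
import numpy as np

def nmae(M_hat, M_test, mask_test, m_min, m_max):
    """Normalized mean absolute error over the test entries."""
    idx = mask_test.astype(bool)
    return np.abs(M_hat[idx] - M_test[idx]).mean() / (m_max - m_min)
```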
Figure \ref{movieLens} summarizes the resulting NMAEs.
Five points in the x-axis correspond to the 5-fold CV test data, the y-axis represents the values of NMAE, and
the five different lines on the plane correspond to the 5 different algorithms in comparison.
We observe that \AIs outperforms all of the other algorithms;
specifically, the test errors of \AIs are 6\%--16\% smaller than those of the \SI-type algorithms.
Among the \SI-type algorithms, the ones with rank constraint (i.e. \SI-Rank and \SI-ALS-Rank) perform better than the ones without (i.e. \SIs and \SI-ALS),
which is consistent with the simulation results.
\begin{figure}[!ht]
\centering
\includegraphics[width=5.4in]{figure6.pdf}
\caption{The NMAEs of \AIs and its competitors measured in 5-fold CV test data from MovieLens 100k (\cite{movielens100k}). }
\label{movieLens}
\end{figure}
\section{Discussion} \label{discuss}
Choosing the right thresholding parameter for matrix completion algorithms using thresholded SVD often poses empirical challenges.
This paper proposed a novel thresholded SVD algorithm for matrix completion, \AI, which employs
a theoretically-justified and data-dependent set of thresholding parameters.
We established its theoretical guarantees on statistical performance and showed its strong performances in both simulated and real data.
It provides an understanding of the effects of thresholding and of the right threshold level.
Nevertheless, an open problem remains.
Although we proposed a reasonable remedy in the paper, the choice of the rank of the underlying low-rank matrix is of great practical interest.
Estimating the rank, and thereby completely automating the entire procedure of \AI, is a potential direction for future research.
\section{Proofs} \label{proofs2}
Denote by $C$ and $C_1$ generic constants whose values are free of $n$ and $p$ and may change from appearance to appearance.
Also, denote by $\smallnorm{v}_2$ the $\ell_2$-norm for any vector $v \in \real^\d$ and
by $\smallnorm{A}_2$ the spectral norm, the largest singular value of $A$, for any matrix $A \in \real^{\n\times\d}$.
\subsection{Proof of Theorem \ref{thm:mathInduct}}
\begin{proof}[Proof of Theorem \ref{thm:mathInduct}]
We have
\begin{eqnarray*}
\Mtilde_t
&=& \P(\Mp) + \Portho(Z_t)
\cr
&=& \y\cdot(\M + \eps) + (1_{\n}1_{\d}^T-\y)\cdot Z_t
\cr
&=& \M + \y\cdot\eps + (1_{\n}1_{\d}^T-\y)\cdot\eta_t,
\end{eqnarray*}
where $1_{\n}$ and $1_{\d}$ are vectors of length $\n$ and $\d$, respectively, filled with ones and
$\eta_t = Z_t -\M$, and, $A \cdot B= (A_{ij} B_{ij}) _{1\leq i \leq n, 1 \leq j \leq d}$ for any $A$ and $B \in \mathbb{R}^{n \times d}$.
Assume that
\begin{eqnarray} \label{init-assume}
\frac{1}{\sqrt{\n\d}}\norm{\eta_t}_F = o_p\(\sqrt{\frac{h_\n}{\p\,\d}}\).
\end{eqnarray}
Then, simple algebraic manipulations show for large $\n$
\begin{eqnarray} \label{eta_t+1}
\frac{1}{\sqrt{\n\d}}\norm{\eta_{t+1}}_F
&=&\frac{1}{\sqrt{\n\d}}\norm{Z_{t+1}-\M}_F
\cr
&=&\frac{1}{\sqrt{\n\d}}\norm{\sum_{i=1}^\rank \sqrt{\blam_i^2(\Mtilde_t) - \alphatilde_{t}} \, \bu_i(\Mtilde_t) \bv_i(\Mtilde_t)^T
- \sum_{i=1}^\rank \lam_i\U_i\V_i^T}_F
\cr
&\leq& \frac{C}{\sqrt{\n\d}}\sum_{i=1}^\rank \Bigg\{ \left| \sqrt{\blam_i^2(\Mtilde_t) - \alphatilde_{t}} - \lam_i \right| \norm{\U_i\V_i^T}_F
\cr
&&\quad\quad\quad\quad
+ \lam_i \norm{\(\bu_i(\Mtilde_t)-\U_i\O_i\) \V_i^T }_F
+ \lam_i \norm{ \U_i \(\bv_i(\Mtilde_t)-\V_i\Q_i\)^T }_F \Bigg\}
\cr
&\le&C \sum_{i=1}^\rank \Bigg\{ \frac{1}{\sqrt{\n\d}}\left| \sqrt{\blam_i^2(\Mtilde_t) - \alphatilde_{t}} - \lam_i \right|
\cr
&&\quad\quad\quad\quad
+ \frac{\lam_i}{\sqrt{\n\d}} \norm{\bu_i(\Mtilde_t)-\U_i\O_i}_F
+ \frac{\lam_i}{\sqrt{\n\d}} \norm{\bv_i(\Mtilde_t)-\V_i\Q_i}_F \Bigg\},
\end{eqnarray}
where $\O_i$ and $\Q_i$ are in $\{-1,1\}$ and minimize $\norm{\bu_i(\Mtilde_t)-\U_i\O_i }_F$ and $\norm{ \bv_i(\Mtilde_t)-\V_i\Q_i }_F$, respectively.
To find the order of \eqref{eta_t+1}, first consider the term $\norm{\bv_i(\Mtilde_t)-\V_i\O_i }_F$.
By Davis-Kahan Theorem (Theorem 3.1 in \cite{li1998two}) and Proposition 2.2 in \cite{vu2013},
\begin{eqnarray} \label{utilde-u}
\norm{\bv_i(\Mtilde_t)-\V_i\O_i }_F \le \frac{\frac{1}{\n\d}\norm{\(\Mtilde_t^T\Mtilde_t - \[ \M^T\M +\n \p \sig^2 I \] \)\V_i}_F }{\left| \frac{1}{\n\d}\( \lam_i^2+\n \p \sig^2 - \blam_{i+1}^2(\Mtilde_t) \) \right| }.
\end{eqnarray}
Consider the numerator of \eqref{utilde-u}.
We have
\begin{eqnarray} \label{vtilde-v}
&&\frac{1}{\n\d}\norm{\(\Mtilde_t^T\Mtilde_t - \[ \M^T\M + \n \p \sig^2 I\]\)\V_i}_F
\cr
&&\le \frac{1}{\n\d}\bigg\{
\norm{\(\y\cdot\eps\)^T\(\y\cdot\eps\)\V_i- \n \p \sig^2\V_i}_F + \norm{\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]^T\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]\V_i}_F
\cr
&&\quad\quad\quad\quad
+ \norm{\M^T\(\y\cdot\eps\)\V_i}_F + \norm{\(\y\cdot\eps\)^T\M\V_i}_F
\cr
&&\quad\quad\quad\quad
+ \norm{\M^T\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]\V_i}_F + \norm{\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]^T\M\V_i}_F
\cr
&&\quad\quad\quad\quad
+ \norm{\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]^T\(\y\cdot\eps\)\V_i}_F + \norm{\(\y\cdot\eps\)^T\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]\V_i}_F \bigg\}
\cr
&&= \frac{1}{\n\d} \Big\{ O_p\(\p\sqrt{\n\d}\) + o_p\(\frac{\n h_\n}{\p}\) + O_p\(\sqrt{\p\d\n^2}\) + O_p\(\sqrt{\p\d^2\n}\)
\cr
&&\quad\quad\quad\quad
+ o_p\(\sqrt{\frac{h_\n \d\n^2}{\p}}\) + o_p\(\sqrt{\frac{h_\n \d\n^2}{\p}}\) + o_p\(\sqrt{h_\n \n^2}\) + o_p\(\sqrt{h_\n \d\n^2}\) \Big\}
\cr
&&=o_p\(\sqrt{\frac{h_\n}{\p\d}}\),
\end{eqnarray}
where the first equality holds due to \eqref{mod}, Assumption \ref{assume1}(2), \eqref{init-assume}, and \eqref{epsyTepsy} and \eqref{yepsu-yepsv} below.
We have
\begin{eqnarray} \label{epsyTepsy}
&&\expect \norm{\(\y\cdot\eps\)^T\(\y\cdot\eps\)\V_i- \n \p \sig^2\V_i}_F^2
\cr
&&= \expect\left\{\sum_{h=1}^\d \[ \sum_{k=1}^\n \sum_{j=1}^\d \( \y_{kh}\y_{kj}\eps_{kh}\eps_{kj}\V_{ij} -\p\sig^2\V_{ih}\I_{(j=h)} \) \]^2\right\}
\cr
&&= \sum_{h=1}^\d \sum_{k=1}^\n \sum_{j=1}^\d \expect\( \y_{kh}\y_{kj}\eps_{kh}\eps_{kj}\V_{ij} -\p\sig^2\V_{ih}\I_{(j=h)} \)^2
\cr
&&= \sum_{k=1}^\n \sum_{j\ne h}^\d \V_{ij}^2 \,\expect\( \y_{kh}^2\y_{kj}^2\eps_{kh}^2\eps_{kj}^2\)
+ \sum_{k=1}^\n \sum_{j=h}^\d \V_{ij}^2 \,\expect\( \y_{kj}^2\eps_{kj}^2 -\p\sig^2 \)^2
\cr
&&= O\(\p^2\n\d\),
\end{eqnarray}
where $\V_{ij}$ is the $j$-th element of $\V_i$.
Similarly, we have
\begin{eqnarray} \label{yepsu-yepsv}
\expect\smallnorm{\(\y\cdot\eps\)\V_i}_F^2=O\(\p\n\),
\expect\smallnorm{\U_i^T\(\y\cdot\eps\)}_F^2=O\(\p\d\),
\text{ and }\,
\expect\smallnorm{\y\cdot\eps}_F^2=O\(\p\n\d\).
\end{eqnarray}
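The order claimed in \eqref{epsyTepsy} can be checked numerically. The sketch below assumes Gaussian noise, for which the expectation works out to $\n\p^2\sig^4(\d-1) + \n(3\p-\p^2)\sig^4 = O(\p^2\n\d)$; the dimensions and parameter values are illustrative.

```python
import numpy as np

# Monte Carlo check of E||(y.eps)^T (y.eps) V_i - n p sig^2 V_i||_F^2
# for Gaussian noise, against the closed form
# n p^2 sig^4 (d - 1) + n (3p - p^2) sig^4.
rng = np.random.default_rng(2)
n, d, p, sig = 50, 20, 0.3, 1.0
v = np.ones(d) / np.sqrt(d)        # any unit vector plays the role of V_i

reps = 3000
vals = np.empty(reps)
for r in range(reps):
    y = rng.random((n, d)) < p                            # Bernoulli(p) mask
    ye = np.where(y, rng.normal(0.0, sig, (n, d)), 0.0)   # y . eps
    vals[r] = np.sum((ye.T @ (ye @ v) - n * p * sig**2 * v) ** 2)

analytic = n * p**2 * sig**4 * (d - 1) + n * (3 * p - p**2) * sig**4
print(vals.mean() / analytic)      # close to 1
```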
Consider the denominator of \eqref{utilde-u}.
By Weyl's theorem (Theorem 4.3 in \cite{li1998}), we have
\begin{eqnarray} \label{deltaExist}
&&\max_{1\le i\le\d}\frac{1}{\n\d}|\lam_i^2 +\n \p \sig^2 - \blam_{i}^2(\Mtilde_t)|
\cr
&&\le \frac{1}{\n\d}\norm{\Mtilde_t^T\Mtilde_t - \[ \M^T\M + \n \p \sig ^2 I \]}_2
\cr
&&\le \frac{1}{\n\d}\bigg\{ \norm{\(\y\cdot\eps\)^T\(\y\cdot\eps\)- \n \p \sig ^2 I }_2 + \norm{\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]^T\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]}_2
\cr
&&\quad\quad\quad\quad
+ 2\norm{ \M^T\(\y\cdot\eps\)}_2 + 2 \norm{\M^T\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]}_2 + 2\norm{\(\y\cdot\eps\)^T\[(1_{\n}1_{\d}^T-\y)\cdot\eta_t\]}_2 \bigg\}
\cr
&&= \frac{1}{\n\d}\bigg\{ O_p\(\p\sqrt{\n\d^2}\) + o_p\(\frac{\n h_\n}{\p}\) + O_p\(\sqrt{\p\,\d\n^2}\) + o_p\(\sqrt{\frac{\d\n^2h_\n}{\p}}\) + o_p\(\sqrt{\d\n^2h_\n}\) \bigg\}
\cr
&&=o_p(1),
\end{eqnarray}
where the last two equalities hold similarly to \eqref{vtilde-v}.
Thus, by \eqref{deltaExist} and \eqref{vtilde-v},
\begin{eqnarray} \label{v-v}
\smallnorm{\bv_i(\Mtilde_t)-\V_i\O_i }_F = o_p\(\sqrt{\frac{h_\n}{\p\d}}\).
\end{eqnarray}
Secondly, similar to the proof of \eqref{v-v},
we can show $\smallnorm{\bu_i(\Mtilde_t)-\U_i\O_i }_F = o_p\(\sqrt{h_\n/\p\d}\)$.
Lastly, consider the term $\frac{1}{\sqrt{\n\d}}\left| \sqrt{\blam_i^2(\Mtilde_t) - \alphatilde_{t}} - \lam_i \right|$.
By Taylor's expansion, there is $\lam_\ast^2$ between $\blam_i^2(\Mtilde_t) - \alphatilde_{t}$ and $\lam_i^2$ such that
\begin{eqnarray} \label{lam-lam}
&&\frac{1}{\sqrt{\n\d}}\left| \sqrt{\blam_i^2(\Mtilde_t) - \alphatilde_{t}} - \lam_i \right|
\cr
&&= \frac{1}{\sqrt{\n\d}}\left| \frac{1}{2\lam_\ast}\( \blam_i^2(\Mtilde_t) - \alphatilde_{t} - \lam_i^2 \) \right|
\cr
&&\le \frac{1}{2\lam_\ast\sqrt{\n\d}}\left| \blam_i^2(\Mtilde_t) - (\lam_i^2+\n\p\sig^2) \right|
+ \frac{1}{2\lam_\ast\sqrt{\n\d}}\left| \alphatilde_{t} - \n\p\sig^2 \right|.
\end{eqnarray}
We need to find the convergence rates of $\frac{1}{\n\d}\left| \blam_i^2(\Mtilde_t) - (\lam_i^2+\n\p\sig^2) \right|$ and $\frac{1}{\n\d}\left| \alphatilde_{t} - \n\p\sig^2 \right|$.
Let $\V_c = \(\V_{\rank+1},\ldots,\V_\d\) \in \real^{\d\times(\d-\rank)}$ be a matrix
such that $\V_c^T\V_c = I_{\d-\rank}$ and $\V^T\V_c = 0_{\rank\times(\d-\rank)}$
and let $\widetilde\V_{t} = \(\bv_{1}(\Mtilde_t), \ldots, \bv_\rank(\Mtilde_t)\) \in \real^{\d\times\rank}$ and
$\widetilde\V_{tc} = \(\bv_{\rank+1}(\Mtilde_t), \ldots, \bv_\d(\Mtilde_t)\) \in \real^{\d\times(\d-\rank)}$
so that $\widetilde\V_{t}^T\widetilde\V_{tc}=0_{\rank\times(\d-\rank)}$.
Also, let $\O = \diag(\O_1, \ldots, \O_\rank)$ and $\O_c = \diag(\O_{\rank+1}, \ldots, \O_\d)$, where
$$\O_i := \argmin_{o \in \{-1,1\}} \norm{\V_i\,o-\bv_i(\Mtilde_t)}_2^2 \quad\text{for }\; i=1,\ldots,\d.$$
Then, we have
\begin{eqnarray} \label{lam2-lam2}
&&\frac{1}{\n\d}\left| \blam_i^2(\Mtilde_t) - (\lam_i^2+\n\p\sig^2) \right|
\cr
&&= \frac{1}{\n\d}\bigg| \bv_i(\Mtilde_t)^T\Mtilde_t^T\Mtilde_t \bv_i(\Mtilde_t)
- \V_i^T\( {\M}^T{\M} + \n \p \sig^2 I \) \V_i \bigg|
\cr
&&\le \frac{1}{\n\d}\bigg| \V_i^T \[\Mtilde_t^T\Mtilde_t - \( {\M}^T{\M}+ \n \p \sig^2 I \) \] \V_i \bigg|
+ \frac{1}{\n\d}\bigg| \bv_i(\Mtilde_t)^T\Mtilde_t^T\Mtilde_t \bv_i(\Mtilde_t) - \V_i^T\Mtilde_t^T\Mtilde_t\V_i \bigg|
\cr
&&\le o_p\( \sqrt{\frac{h_\n}{\p\d}} \) + \frac{1}{\n\d}\bigg| \bv_i(\Mtilde_t)^T\Mtilde_t^T\Mtilde_t \bv_i(\Mtilde_t) - \V_i^T\Mtilde_t^T\Mtilde_t\V_i \bigg|
\cr
&&= o_p\(\sqrt{\frac{h_\n}{\p\d}}\),
\end{eqnarray}
where the second inequality can be derived in a way similar to the proof of \eqref{vtilde-v},
and the last equality is due to \eqref{eq 11 : Thm asym} below.
Simple algebraic manipulations show
\begin{eqnarray} \label{eq 11 : Thm asym}
&&\frac{1}{\n\d}\bigg| \bv_i(\Mtilde_t)^T\Mtilde_t^T\Mtilde_t \bv_i(\Mtilde_t) - \V_i^T\Mtilde_t^T\Mtilde_t\V_i \bigg| \cr
&&= \frac{1}{\n\d}\bigg| \[\V_i\O_i-\bv_i(\Mtilde_t)\]^T\Mtilde_t^T\Mtilde_t \[\V_i\O_i-\bv_i(\Mtilde_t)\] + 2\blam_i^2(\Mtilde_t)\[\V_i\O_i-\bv_i(\Mtilde_t)\]^T\bv_i(\Mtilde_t) \bigg|
\cr
&&\le \frac{2\blam_1^2(\Mtilde_t)}{\n\d}\norm{\V_i\O_i-\bv_i(\Mtilde_t)}_2^2
\cr
&&= o_p\(\frac{ h_\n}{\p\d }\),
\end{eqnarray}
where the last equality is due to \eqref{deltaExist} and \eqref{v-v}.
Also,
\begin{eqnarray} \label{rhohat-npsig2}
&&\frac{1}{\n\d}\left| \alphatilde_{t} - \n\p\sig^2 \right|
\cr
&&= \frac{1}{\n\d}\bigg| \frac{1}{\d-\rank}\sum_{j=\rank+1}^\d \bv_j(\Mtilde_t)^T\Mtilde_t^T\Mtilde_t\bv_j(\Mtilde_t) - \n\p\sig^2 \bigg|
\cr
&&\le \frac{1}{\n\d}\bigg| \frac{1}{\d-\rank}\sum_{j=\rank+1}^\d \V_j^T\[ \Mtilde_t^T\Mtilde_t - \( \M^T\M + \n \p \sig ^2 I\)\] \V_j \bigg|
\cr
&&\quad\quad\quad\quad
+ \frac{1}{\n\d}\bigg| \frac{1}{\d-\rank}\sum_{j=\rank+1}^\d \[ \bv_j(\Mtilde_t)^T \Mtilde_t^T\Mtilde_t \bv_j(\Mtilde_t) - \V_j^T \Mtilde_t^T\Mtilde_t \V_j \] \bigg| \cr
&&= o_p\(\sqrt{\frac{h_\n}{\p\d} }\)+ \frac{1}{\n\d}\bigg| \frac{1}{\d-\rank}\sum_{j=\rank+1}^\d \[ \bv_j(\Mtilde_t)^T \Mtilde_t^T\Mtilde_t \bv_j(\Mtilde_t) - \V_j^T \Mtilde_t^T\Mtilde_t \V_j \] \bigg| \cr
&&= o_p\(\sqrt{ \frac{h_\n}{\p\d} }\),
\end{eqnarray}
where the second equality can be derived in a way similar to the proof of \eqref{vtilde-v}, and the last equality is due to \eqref{eq 12 : Thm asym} below.
Similar to the proof of \eqref{eq 11 : Thm asym}, we have
\begin{eqnarray}\label{eq 12 : Thm asym}
&& \frac{1}{\n\d(\d-\rank)}\sum_{j=\rank+1}^\d \bigg| \bv_j(\Mtilde_t)^T \Mtilde_t^T\Mtilde_t \bv_j(\Mtilde_t) - \V_j^T \Mtilde_t^T\Mtilde_t \V_j \bigg|
\cr
&&\le \frac{1}{\n\d(\d-\rank)}\sum_{j=\rank+1}^\d 2\blam_1^2(\Mtilde_t)\norm{\V_j\O_j-\bv_j(\Mtilde_t)}_2^2
\cr
&&\le \frac{2\blam_1^2(\Mtilde_t)}{\n\d(\d-\rank)}\norm{\V_c\O_c-\widetilde\V_{tc}}_F^2
\cr
&&\le \frac{4\blam_1^2(\Mtilde_t)}{\n\d(\d-\rank)}\norm{\V_c\V_c^T - \widetilde\V_{tc}\widetilde\V_{tc}^T}_F^2
\cr
&&= \frac{4\blam_1^2(\Mtilde_t)}{\n\d(\d-\rank)}\norm{\V\V^T - \widetilde\V_{t}\widetilde\V_{t}^T}_F^2
\cr
&&\le \frac{4\blam_1^2(\Mtilde_t)}{\n\d(\d-\rank)}\norm{\V\O-\widetilde\V_{t}}_F^2
\cr
&&= \frac{4\blam_1^2(\Mtilde_t)}{\n\d(\d-\rank)}\sum_{i=1}^\rank\norm{\V_i\O_i-\bv_i(\Mtilde_t)}_2^2
\cr
&&= o_p\(\frac{h_\n}{\p\d^2}\),
\end{eqnarray}
where the fourth and sixth lines are due to Proposition 2.2 in \cite{vu2013}, and the last line holds from \eqref{v-v}.
The three results above \eqref{lam-lam}, \eqref{lam2-lam2}, and \eqref{rhohat-npsig2} give $\frac{1}{\sqrt{\n\d}}\left| \sqrt{\blam_i^2(\Mtilde_t) - \alphatilde_{t}} - \lam_i \right|=o_p\(\sqrt{\frac{h_\n}{\p\d}}\)$.
Therefore, combining the results above, we have that $\frac{1}{\sqrt{\n\d}}\norm{\eta_{t+1}}_F$ in \eqref{eta_t+1} is $o_p\(\sqrt{\frac{h_\n}{\p\d}}\)$.
Since $\frac{1}{\sqrt{\n\d}}\norm{\eta_{1}}_F=o_p\(\sqrt{\frac{h_\n}{\p\d}}\)$ by Proposition \ref{MhatConsistent},
we have $\frac{1}{\sqrt{\n\d}}\norm{\eta_{t}}_F=o_p\(\sqrt{\frac{h_\n}{\p\d}}\)$ for any fixed $t$ by mathematical induction.
\end{proof}
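The iteration analyzed in this proof, forming $\Mtilde_t = \P(\Mp) + \Portho(Z_t)$ and then shrinking the top $\rank$ singular values by the data-dependent level $\alphatilde_t$, can be sketched numerically as follows. The variable names and the toy data are illustrative, not the paper's implementation.

```python
import numpy as np

def iterate_impute(M_obs, mask, rank, n_iter=50):
    """Sketch of the iteration in the induction argument: fill the
    unobserved entries with the current fit Z_t, take an SVD, and shrink
    the top singular values by alpha_t, the average of the trailing
    squared singular values."""
    Z = np.zeros_like(M_obs)
    for _ in range(n_iter):
        M_tilde = np.where(mask, M_obs, Z)        # P(M^Omega) + P_perp(Z_t)
        U, s, Vt = np.linalg.svd(M_tilde, full_matrices=False)
        alpha = np.mean(s[rank:] ** 2)            # data-dependent shrinkage level
        s_shrunk = np.sqrt(np.maximum(s[:rank] ** 2 - alpha, 0.0))
        Z = (U[:, :rank] * s_shrunk) @ Vt[:rank]  # Z_{t+1}
    return Z

# Toy run: rank-1 signal with half of the entries observed under noise.
rng = np.random.default_rng(0)
M = np.outer(rng.normal(size=40), rng.normal(size=30))
mask = rng.random(M.shape) < 0.5
M_obs = np.where(mask, M + 0.1 * rng.normal(size=M.shape), 0.0)
Z = iterate_impute(M_obs, mask, rank=1)
rel_err = np.linalg.norm(Z - M) / np.linalg.norm(M)
print(rel_err)  # small relative error
```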
\subsection{Proof of Theorem \ref{solutionZ}}
\begin{proof}[Proof of Theorem \ref{solutionZ}]
We have
\begin{eqnarray} \label{targetOptim}
&&\min_Z \;\frac{1}{2\n\d}\norm{X-Z}_F^2 + \sum_{i=1}^\d \frac{\tau_i}{\sqrt{\n\d}} \frac{\blam_i(Z)}{\sqrt{\n\d}}
\cr
&&= \min_Z \;
\frac{1}{2\n\d} \Bigg\{ \norm{X}_F^2 -2\sum_{i=1}^\d \tilde\lam_i\cdot\tilde u_i^TX\tilde v_i + \sum_{i=1}^\d\tilde\lam_i^2\Bigg\}
+ \frac{1}{\n\d} \sum_{i=1}^\d \tau_i\tilde\lam_i,
\end{eqnarray}
where $\tilde\lam_i=\blam_i(Z)$, $\tilde u_i=\bu_i(Z)$, and $\tilde v_i=\bv_i(Z)$.
Minimizing \eqref{targetOptim} is equivalent to minimizing
\begin{eqnarray*}
-2\sum_{i=1}^\d \tilde\lam_i\cdot\tilde u_i^TX\tilde v_i + \sum_{i=1}^\d\tilde\lam_i^2
+ \sum_{i=1}^\d 2\tau_i\tilde\lam_i,
\end{eqnarray*}
with respect to $\tilde\lam_i,\tilde u_i,$ and $\tilde v_i, \; i=1,\ldots,\d$,
under the conditions that $(\tilde u_1,\ldots,\tilde u_\d)^T(\tilde u_1,\ldots,\tilde u_\d)=I_\d$,
$(\tilde v_1,\ldots,\tilde v_\d)^T(\tilde v_1,\ldots,\tilde v_\d)=I_\d$, and $\tilde\lam_1 \ge \tilde\lam_2 \geq \ldots \ge \tilde\lam_d \ge 0$.
Thus, we have
\begin{eqnarray} \label{targetOptim-final}
&&\min_{\tilde\lam_i\ge0,\tilde u_i,\tilde v_i, \; i=1,\ldots,\d} \;
-2\sum_{i=1}^\d \tilde\lam_i\cdot\tilde u_i^TX\tilde v_i + \sum_{i=1}^\d\tilde\lam_i^2
+ \sum_{i=1}^\d 2\tau_i\tilde\lam_i
\cr
&&= \min_{\tilde\lam_i\ge0, \; i=1,\ldots,\d} \;
-2\sum_{i=1}^\d \tilde\lam_i\cdot\blam_i(X) + \sum_{i=1}^\d\tilde\lam_i^2
+ \sum_{i=1}^\d 2\tau_i\tilde\lam_i
\cr
&&= \min_{\tilde\lam_i\ge0, \; i=1,\ldots,\d} \;
\sum_{i=1}^\d \left\{ \tilde\lam_i^2 -2 \tilde\lam_i \[\blam_i(X) - \tau_i\] \right\},
\end{eqnarray}
where the first equality is due to the facts that $\tilde\lam_1\ge\ldots\ge\tilde\lam_\d\ge0$, and for every $i$, the problem
$$\max_{\norm{u_i}_2^2\le1, \norm{v_i}_2^2\le1} u_i ^TXv_i
\quad\text{such that}\quad
u_i \perp\{\tilde u_1^\ast,\ldots,\tilde u_{i-1}^\ast\}, v_i \perp\{\tilde v_1^\ast,\ldots,\tilde v_{i-1}^\ast\}$$
is solved by $\tilde u_i^\ast, \tilde v_i^\ast$, the left and right singular vectors of $X$ corresponding to the $i$-th largest singular value of $X$.
Note that $\tilde u_i = \tilde u_i ^\ast$.
Since \eqref{targetOptim-final} is a quadratic function of $\tilde\lam_i$, the solution to the problem \eqref{targetOptim-final} is then $\tilde\lam_i = \(\blam_i(X) - \tau_i\)_+$.
\end{proof}
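A quick numerical check of the closed form above, shown with a common threshold value for simplicity (the theorem allows index-specific $\tau_i$): soft-thresholding the singular values of $X$ should attain the smallest objective value. The matrix size and threshold are illustrative.

```python
import numpy as np

def soft_threshold_svd(X, tau):
    """Closed-form minimizer: replace each singular value s_i of X
    by (s_i - tau_i)_+ and rebuild the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def objective(Z, X, tau):
    n, d = X.shape
    s = np.linalg.svd(Z, compute_uv=False)
    return np.linalg.norm(X - Z) ** 2 / (2 * n * d) + np.dot(tau, s) / (n * d)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))
tau = np.full(4, 0.3)                  # common threshold for all indices
Z_star = soft_threshold_svd(X, tau)
f_star = objective(Z_star, X, tau)

# Random perturbations of the closed form should never do better.
worse = min(objective(Z_star + 0.05 * rng.normal(size=X.shape), X, tau)
            for _ in range(200))
print(f_star <= worse)  # True
```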
\subsection{Proof of Theorem \ref{thm:Zsolvesf}}
To ease the notation, we drop the superscript `g' in $Z_t^g$, $\Mtilde_t^g$, and $D_t^g$ in this section.
\begin{lemma}\label{z-zgoto0}
Let $Z_{t+1}:=\argmin_{Z\in\real^{\n\times\d}} Q_\tau(Z|Z_t)$ in \eqref{eq:Q}. Then, under Assumption \ref{assume2}, we have
$$\norm{Z_{t+1}-Z_{t}}_F^2 \to 0 \quad\text{as } t\to\infty.$$
\end{lemma}
\begin{proof}[Proof of Lemma \ref{z-zgoto0}]
By the construction of $D_t$,
$$(\Mtilde_{t-1}-\Mtilde_t) - (Z_t - Z_{t+1}) - (D_{t-1} - D_{t}) =0.$$
Thus, we have
\begin{eqnarray} \label{xZ-Z}
\langle\Mtilde_{t-1}-\Mtilde_t,Z_t - Z_{t+1}\rangle - \langle Z_t - Z_{t+1},Z_t - Z_{t+1}\rangle - \langle D_{t-1} - D_{t},Z_t - Z_{t+1}\rangle =0
\end{eqnarray}
and
\begin{eqnarray}\label{xM-M}
\langle\Mtilde_{t-1}-\Mtilde_t,\Mtilde_{t-1}-\Mtilde_t\rangle - \langle Z_t - Z_{t+1},\Mtilde_{t-1}-\Mtilde_t\rangle - \langle D_{t-1} - D_{t},\Mtilde_{t-1}-\Mtilde_t\rangle =0.
\end{eqnarray}
Adding \eqref{xM-M} and \eqref{xZ-Z} gives
\begin{eqnarray}\label{xZ-Z:xM-M}
&&0 =\smallnorm{\Mtilde_{t-1}-\Mtilde_t}_F^2 - \smallnorm{Z_t-Z_{t+1}}_F^2 - \langle D_{t-1} - D_{t}, Z_t + \Mtilde_{t-1} - (Z_{t+1}+\Mtilde_t)\rangle
\cr
&&=\smallnorm{\Mtilde_{t-1}-\Mtilde_t}_F^2 - \smallnorm{Z_t-Z_{t+1}}_F^2 - \norm{D_{t-1} - D_{t}}_F^2 - 2\langle D_{t-1} - D_{t},Z_t - Z_{t+1} \rangle.
\end{eqnarray}
Under Assumption \ref{assume2}, \eqref{xZ-Z:xM-M} gives
\begin{eqnarray*}
\norm{Z_t-Z_{t+1}}_F^2 \le \norm{\Mtilde_{t-1} - \Mtilde_t}_F^2,
\end{eqnarray*}
and thus
\begin{eqnarray} \label{lem:z-z<z-z}
\norm{Z_{t+1}-Z_{t}}_F^2
&\le& \norm{\Mtilde_{t-1} - \Mtilde_t}_F^2
\cr
&\le& \norm{\Portho\( Z_{t-1} - Z_t \)}_F^2
\cr
&\le& \norm{Z_{t}-Z_{t-1}}_F^2
\end{eqnarray}
for all $t\ge1$.
This proves that the sequence $\{\norm{Z_{t+1}-Z_{t}}_F^2\}$ converges (since it is non-increasing and bounded below).
The convergence of $\{\norm{Z_{t+1}-Z_{t}}_F^2\}$ gives
$$\norm{Z_{t+1}-Z_{t}}_F^2 - \norm{Z_{t}-Z_{t-1}}_F^2 \to 0 \text{ as } t\to\infty.$$
Then, by \eqref{lem:z-z<z-z},
\begin{eqnarray*}
0
&\ge& \norm{\Portho\( Z_{t}-Z_{t-1} \)}_F^2 - \norm{Z_{t}-Z_{t-1}}_F^2
\cr
&\ge& \norm{Z_{t+1}-Z_{t}}_F^2 - \norm{Z_{t}-Z_{t-1}}_F^2
\cr
&\to& 0 \quad\text{as } t \to \infty,
\end{eqnarray*}
which implies
\begin{eqnarray} \label{Pitself}
\norm{\Portho\( Z_{t}-Z_{t-1} \)}_F^2 - \norm{Z_{t}-Z_{t-1}}_F^2 \to 0 \Rightarrow \norm{\P\( Z_{t}-Z_{t-1} \)}_F^2 \to 0.
\end{eqnarray}
Furthermore, similarly to the proof of Lemma 2 in \cite{mazumder2010}, we can show
\begin{eqnarray} \label{fQf}
f_{\tau}(Z_{t})
\ge Q_{\tau}(Z_{t+1}|Z_t)
\ge Q_{\tau}(Z_{t+1}|Z_{t+1})
= f_{\tau}(Z_{t+1}) \ge0
\end{eqnarray}
for every fixed $\tau_1,\ldots,\tau_\d >0$ and $t\ge 1$.
Thus, we have
$$ Q_{\tau}(Z_{t+1}|Z_t) - Q_{\tau}(Z_{t+1}|Z_{t+1}) \to 0 \quad\text{as } t\to\infty,$$
which implies
$$\norm{\Portho\( Z_{t}-Z_{t+1} \)}_F^2 \to 0 \quad\text{as } t\to\infty.$$
The above along with \eqref{Pitself} gives
$$\norm{Z_{t+1}-Z_{t}}_F^2 \to 0 \quad\text{as } t\to\infty.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:Zsolvesf}]
By the construction of $D_{t}$, we have
$$0 = \(\Mtilde_{t} - Z_{t+1}\)-D_{t} \quad \text{for all } t\ge1.$$
Since $Z_\infty$ is a limit point of the sequence $Z_t$, there exists a subsequence $\{n_t\} \subset \{1,2,\ldots\}$ such that $Z_{n_t}\to Z_\infty$ as $t\to\infty$.
By Lemma \ref{z-zgoto0}, this subsequence $Z_{n_t}$ satisfies
$$Z_{n_t} - Z_{n_t +1}\to0$$
which implies
$$\Portho(Z_{n_t}) - Z_{n_t+1}\to \Portho(Z_\infty) - Z_\infty=-\P(Z_\infty).$$
Hence,
\begin{eqnarray}\label{eq 01 : Thm conv}
D_{n_t} = \(\P(\Mp)+\Portho(Z_{n_t})\) - Z_{n_t+1} \to \P(\Mp)-\P(Z_\infty) = D_\infty.
\end{eqnarray}
Due to \eqref{assume4} and \eqref{eq 01 : Thm conv}, we have
\begin{eqnarray*}
f_\tau ( Z^s) &\geq& f_\tau (Z_\infty) - \frac{1}{\n\d} \langle Z^s - Z_\infty, \P(\Mp)-\P(Z_\infty)- D _\infty \rangle \cr
&=&f_\tau (Z_\infty).
\end{eqnarray*}
Since $f_\tau ( Z^s) \le f_\tau (Z_\infty)$ by definition of $Z^s$, we have $f_\tau ( Z^s) = f_\tau (Z_\infty)$.
Lastly, by \eqref{fQf}, we have $\lim_{t\rightarrow \infty} f_{\tau}(Z_{t}) = f_{\tau}(Z^s)$.
\end{proof}
\subsection{Proofs of Lemmas \ref{lem:undateTuning}-\ref{corol:assume2}}
\begin{proof}[Proof of Lemma \ref{lem:undateTuning}]
For $i=1,\ldots,\rank$, we have
\begin{eqnarray*}
&&\left| \frac{\tau_{t,i}}{\sqrt{\n\d}} - \frac{\tau_{t+1,i}}{\sqrt{\n\d}} \right|
\cr
&&= \frac{1}{\sqrt{\n\d}}\left|\blam_i (\Mtilde_t) - \sqrt{\blam_i^2 (\Mtilde_t) - \alphatilde_{t}}
- \blam_i (\Mtilde_{t+1}) + \sqrt{\blam_i^2 (\Mtilde_{t+1}) - \alphatilde_{t+1}}\right|
\cr
&&\le \frac{1}{\sqrt{\n\d}}\left|\blam_i (\Mtilde_t) - \(\sqrt{\lam_i^2 + \n\p\sig^2}\) \right|
+ \frac{1}{\sqrt{\n\d}}\left|\sqrt{\blam_i^2 (\Mtilde_t) - \alphatilde_{t}} - \lam_i \right|
\cr
&&\quad
+ \frac{1}{\sqrt{\n\d}}\left|\blam_i (\Mtilde_{t+1}) - \(\sqrt{\lam_i^2 + \n\p\sig^2}\) \right|
+ \frac{1}{\sqrt{\n\d}}\left|\sqrt{\blam_i^2 (\Mtilde_{t+1}) - \alphatilde_{t+1}} - \lam_i \right|
\cr
&&= (I)+(II)+(III)+(IV).
\end{eqnarray*}
Then, by \eqref{lam2-lam2} and \eqref{rhohat-npsig2}, we have
\begin{eqnarray*}
(I) &=& \frac{1}{\sqrt{\n\d}}\left|\blam_i (\Mtilde_t) - \(\sqrt{\lam_i^2 + \n\p\sig^2}\) \right|
\cr
&=&\frac{1}{2\lam_\ast\sqrt{\n\d}}\left|\blam_i^2 (\Mtilde_t) - \(\lam_i^2 + \n\p\sig^2\) \right|
\cr
&\le&\frac{1}{2\lam_\ast\sqrt{\n\d}}\left| \blam_i^2(\Mtilde_t) - \alphatilde_{t} - \lam_i^2 \right|
+ \frac{1}{2\lam_\ast\sqrt{\n\d}}\left| \alphatilde_{t} - \n\p\sig^2 \right|
\cr
&=&o_p\(\sqrt{\frac{h_\n}{\p\d}}\),
\end{eqnarray*}
where the second equality holds for some $\lam_\ast$ between $\blam_i (\Mtilde_t)$ and $\sqrt{\lam_i^2 + \n\p\sig^2}$
by Taylor's expansion.
We can similarly show that $(III)=o_p\(\sqrt{h_\n/\p\d}\)$.
Both of $(II)$ and $(IV)$ are also $o_p\(\sqrt{h_\n/\p\d}\)$ by \eqref{lam-lam} and \eqref{lam2-lam2}.
\end{proof}
\begin{proof}[Proof of Lemma \ref{corol:assume2}]
From Theorem \ref{thm:mathInduct} and the construction of $D_t$ in Assumption \ref{assume2}, we have
\begin{eqnarray*}
&& \left|\frac{1}{nd}\langle D_t - D_{t+1},Z_{t+1} - Z_{t+2} \rangle \right|
\cr
&&\le \frac{1}{nd} \norm{D_t - D_{t+1}}_F \norm{Z_{t+1} - Z_{t+2}}_F
\cr
&&\le \frac{1}{nd} \norm{\Mtilde_{t} - Z_{t+1} - \(\Mtilde_{t+1} - Z_{t+2}\) }_F \norm{Z_{t+1} - Z_{t+2}}_F
\cr
&&\le \frac{1}{nd} \Big\{ \norm{\Mtilde_{t} - \Mtilde_{t+1}}_F + \norm{ Z_{t+1} - Z_{t+2} }_F \Big\} \norm{Z_{t+1} - Z_{t+2}}_F
\cr
&&= \frac{1}{nd} \Big\{ \norm{\Portho\(Z_t - Z_{t+1}\) }_F + \norm{ Z_{t+1} - Z_{t+2} }_F \Big\} \norm{Z_{t+1} - Z_{t+2}}_F
\cr
&&\le \frac{1}{nd} \Big\{ \norm{Z_t - Z_{t+1}}_F + \norm{ Z_{t+1} - Z_{t+2} }_F \Big\} \norm{Z_{t+1} - Z_{t+2}}_F
\cr
&&\le \frac{1}{nd} \Big\{ \norm{Z_{t} - \M}_F + 2\norm{Z_{t+1} - \M}_F + \norm{Z_{t+2} - \M}_F \Big\}
\cr
&& \quad \times \Big\{ \norm{Z_{t+1} - \M}_F + \norm{Z_{t+2} - \M}_F \Big\}
\cr
&&=o_p\(\frac{h_\n}{\p\d}\).
\end{eqnarray*}
\end{proof}
\bibliographystyle{Chicago}
\section{Introduction}\label{section:intro}
The COVID-19 outbreak caused by the Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), has led to developments in Bayesian infectious disease modeling, allowing modelers to assess the impact of mitigation strategies on transmission and quantify the burden of the pandemic \citep{flaxman, monod}.
The highly transmissible nature of COVID-19 has proven to be a challenge for health systems worldwide; large-scale seroprevalence studies aiming to estimate the actual number of infections have found severe under-ascertainment \citep{ward_react2}, which has been varying in time and across countries.
The level of under-ascertainment depends on the national testing and tracing policies, the testing capacities and the impact of false positives under different regimes. Especially at the early stages of the pandemic, those tested were typically more likely to have been hospitalised or were at higher risk of infection or death, leading to only a proportion of infections being detected and reported \citep{li}. Methods that rely on reported counts of infections are expected to yield biases in the inferred rates of transmission.
This work addresses the challenges of under-ascertainment of COVID-19 infections and the presence of heterogeneity in type, relevance, and granularity of the data.
Following the approach by \cite{flaxman} and its extension to multiple age groups by \cite{monod}, we propose a Bayesian evidence synthesis approach to model the age-stratified dynamics of COVID-19, with the aim to: (i) infer the true number of infections using age-stratified daily COVID-19 attributable mortality counts, which entail a higher degree of reliability compared to the daily number of reported infections; (ii) infer the age-stratified virus transmission rates from one group to another, with a particular focus on the transmission rate between and onto vulnerable groups which could support public health authorities in devising non-pharmaceutical interventions (NPIs) targeting these groups; (iii) reconstruct the age-stratified drivers of transmission during particular time periods and capture age-stratified trends in infections.
\cite{chatzilena} assumed a homogeneously mixing population in which all individuals were equally susceptible and equally infectious should they become infected. The authors also assumed that the dynamic transmission rate between susceptible and infected individuals followed a stochastic process, see also \cite{dureau}.
We extend this framework by developing an age-stratified transmission model with heterogeneous contact rates between age groups, which offers the advantages of relaxing the assumption of a homogeneous population, incorporates age structure and accounts for the presence of social structures.
In particular, we target the transmission rate matrix process, the dimension of which increases quadratically with the number of age groups, and offer a dimension-reduction formulation projecting to latent diffusion processes. This formulation possesses several desirable characteristics in capturing the multiple group disease transmission mechanism:
(i) it is based on a natural decomposition into the underlying biological and social components; (ii) it allows for further evidence synthesis utilising information from contact surveys; (iii) it is driven by a potentially non-scalar diffusion process to adequately capture the temporal evolution of the age-stratified transmission rates, as well as extrinsic environmental factors such as NPIs and climatic changes \citep{cazelles, chatzilena, ghosh}.
Our analysis integrates multiple data sources which are publicly available across countries: age-stratified daily COVID-19 attributable mortality counts, contact surveys, age-stratified daily laboratory-confirmed COVID-19 infection counts and the age distribution of the population.
The contact survey data are used to delineate potential identifiability issues \citep{britton} at the unobserved infection rate level when those rates are decomposed in their social and biological component.
In the following, the uncertainty in contact structure is expressed via suitable prior distributions.
The outline of the paper is as follows.
Section \ref{section:methods} provides an overview of the multiple data sources and a presentation of the components of the Bayesian hierarchical model.
In Section \ref{section:application} we present the results of the empirical analysis for Greece and discuss the effect of model expansion on the inferred age-stratified transmission rates in order to resolve prior-data conflicts at a latent level. Results for Austria are presented in the Supplementary material. The proposed model is validated using estimations from a large-scale seroprevalence survey in England.
We conclude the paper in Section \ref{section:discussion} with final remarks.
\section{Materials and Methods}\label{section:methods}
\subsection{Data sources}\label{section:data}
We modeled the dynamics of the SARS-CoV-2 epidemic in Greece, Austria and England.
For each of these countries, we collected three types of data related to the SARS-CoV-2 epidemic: (i) the age distribution of reported deaths; (ii) the age distribution of all reported infections; (iii) the age distribution of the general population for each country, $\mathbb{N}$.
The study period for Austria ranges from 2020-08-31 to 2021-03-28 (30 weeks). The study period for Greece ranges from 2020-09-01 to 2021-03-29 (30 weeks).
For model validation, we analyse the period 2020-03-02 to 2020-09-27 (30 weeks) in England.
We adopted the country-specific synthetic contact matrices of \cite{prem1}, which have been constructed upon adjusted contact patterns, based on national demographic and socioeconomic characteristics.
In the absence of a synthetic contact matrix for England, we assumed that the respective synthetic contact matrix is represented by the synthetic contact matrix of \cite{prem1} for the United Kingdom.
The elements of these matrices represent the daily average numbers of contacts between age groups.
The severity of the disease is incorporated in the model via the age-stratified infection fatality rate (IFR).
The age-stratified IFR in our model is informed by the REal-time Assessment of Community Transmission-2 (REACT-2) national seroprevalence study of over 100,000 adults in England \citep{ward_react2}.
Detailed references for each data source are presented in Section S1 of the Supplementary material.
\subsection{Modeling framework}\label{section:mod_framework}
\subsubsection{Evidence synthesis}\label{section:evidence_synthesis}
The aforementioned data streams and expert knowledge were integrated into a coherent modeling framework via a Bayesian evidence synthesis approach \citep{deangelis}. The proposed framework offers the advantage of jointly estimating disease transmission and hidden characteristics of the disease, such as the latent number of infections.
Following the works of \cite{flaxman}, \cite{monod} and \cite{chatzilena}, the modeling process is separated into a latent epidemic process and an observation process
in an effort to reduce sensitivity to observation noise and to allow for more flexibility in modeling different forms of data.
Figure \ref{fig:functional_relationships} shows the functional relationships between data sources, modeled outputs and parameters in this study.
The components of the Bayesian hierarchical model are laid out in the remainder of Section \ref{section:mod_framework}.
\subsubsection{Diffusion-driven multi-type transmission process}\label{section:trans_model}
The transmission of COVID-19 is modeled through an age-stratified deterministic Susceptible-Exposed-Infectious-Removed (SEIR) compartmental model \citep{anderson} with Erlang-distributed latent and infectious periods.
In particular, we introduce Erlang-distributed stage durations in the Exposed and Infected compartments and relax the mathematically convenient but unrealistic assumption of exponential stage durations \citep{watson, lloyd} by assuming that each of the Exposed and Infected compartments are defined by two stages, with the same rate of loss of latency ($\tau$) and infectiousness ($\gamma$) in both stages.
Removed individuals are assumed to be immune to reinfection for at least the duration of the study period.
The population is stratified into $\alpha \in \{1,\ldots,A\}$ age groups and the total size of the age group is denoted by $\mathbb{N}_{\alpha} = S_t^{\alpha} + E_{1,t}^{\alpha} + E_{2,t}^{\alpha} + I_{1,t}^{\alpha} + I_{2,t}^{\alpha} + R_t^{\alpha}$, where $S_t^{\alpha}$ represents the number of susceptible, $E_t^{\alpha} = \sum_{j=1}^{2}E_{j,t}^{\alpha}$ represents the number of exposed but not yet infectious, $I_t^{\alpha} = \sum_{j=1}^{2}I_{j,t}^{\alpha}$ is the number of infected and $R_t^{\alpha}$ is the number of removed individuals at time $t$ at age group $\alpha$.
The number of individuals in each compartment is scaled by the total population $\mathbb{N} = \sum_{\alpha = 1}^{A}\mathbb{N}_{\alpha} $, so that the sum of all compartments equals to one \citep{riou2}.
The latent epidemic process is expressed by the following non-linear system of ordinary differential equations (ODEs) \citep{kermack}
\begin{equation}
\begin{cases}\label{eq:ODE_system}
\frac{dS^{\alpha}_{t}}{dt} & = -\lambda_{\alpha}(t) S^{\alpha}_{t}
\\
\frac{dE^{\alpha}_{1,t}}{dt} & = \lambda_{\alpha}(t) S^{\alpha}_{t} - \tau E^{\alpha}_{1,t},
\\
\frac{dE^{\alpha}_{2,t}}{dt} & = \tau \left( E^{\alpha}_{1,t} - E^{\alpha}_{2,t}\right),
\\
\frac{dI^{\alpha}_{1,t}}{dt} & = \tau E^{\alpha}_{2,t} - \gamma I^{\alpha}_{1,t},
\\
\frac{dI^{\alpha}_{2,t}}{dt} & = \gamma \left( I^{\alpha}_{1,t} - I^{\alpha}_{2,t}\right),
\\
\frac{dR^{\alpha}_{t}}{dt} & = \gamma I^{\alpha}_{2,t} + \rho_{\alpha}\nu_{t-u},
\end{cases}
\end{equation}
where the mean latent and infectious periods are $d_E = \frac{2}{\tau}$, $d_I = \frac{2}{\gamma}$, respectively. The number of new infections in age group $\alpha$ at day $t$ is
\begin{equation}\label{eq:new_cases}
\Delta^{\text{infec}}_{t, \alpha} = \int_{t-1}^{t} \tau E^{\alpha}_{2,s} ds.
\end{equation}
The time-dependent force of infection $\lambda_{\alpha}(t)$ for age group $\alpha \in \{1,\ldots,A\}$ is expressed as
\begin{equation*}
\lambda_{\alpha}(t) = \sum_{\alpha'=1}^{A}\left[ m_{\alpha,\alpha'}(t) \frac{\left(I^{\alpha'}_{1,t} + I^{\alpha'}_{2,t}\right)}{\mathbb{N}_{\alpha'}}\right],
\end{equation*}
which is a function of the proportion of infectious individuals in each age group $\alpha' \in \{1,\ldots,A\}$, via the compartments $I^{\alpha'}_{1,t}$, $I^{\alpha'}_{2,t}$ divided by the total size of the age group $\mathbb{N}_{\alpha'}$, and of the time-varying person-to-person transmission rate between groups $\alpha$ and $\alpha'$, $m_{\alpha,\alpha'}(t)$.
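For illustration, the latent epidemic process in \eqref{eq:ODE_system} together with the force of infection above can be integrated numerically. The sketch below uses hypothetical placeholder values for the contact matrix, rates and population shares, holds the transmissibility constant, and omits the $\rho_{\alpha}\nu_{t-u}$ term:

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 3                                     # number of age groups (assumed)
tau, gamma = 2 / 4.0, 2 / 5.0             # assumed d_E = 4 and d_I = 5 days (tau = 2/d_E, gamma = 2/d_I)
C = np.array([[8.0, 3.0, 1.0],            # assumed contact matrix C[a, a']
              [3.0, 6.0, 2.0],
              [1.0, 2.0, 4.0]])
beta = 0.03                               # assumed constant transmissibility (time-varying in the paper)
N_age = np.array([0.45, 0.35, 0.20])      # population shares, summing to one

def rhs(t, y):
    S, E1, E2, I1, I2, R = y.reshape(6, A)
    lam = (beta * C) @ ((I1 + I2) / N_age)    # force of infection lambda_alpha(t)
    dS = -lam * S
    dE1 = lam * S - tau * E1
    dE2 = tau * (E1 - E2)
    dI1 = tau * E2 - gamma * I1
    dI2 = gamma * (I1 - I2)
    dR = gamma * I2                           # vaccination term omitted in this sketch
    return np.concatenate([dS, dE1, dE2, dI1, dI2, dR])

# Seed a small number of exposed individuals in each age group
y0 = np.concatenate([N_age - 1e-4, np.full(A, 1e-4), np.zeros(4 * A)])
sol = solve_ivp(rhs, (0.0, 100.0), y0, rtol=1e-8, atol=1e-10)
```

Since the right-hand sides sum to zero (absent the vaccination term), the scaled compartments remain a partition of the population throughout the integration.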
We parameterize the transmission rate between different age groups $(\alpha,\alpha') \in \{1,\ldots,A\}^2$ by
\begin{equation}\label{eq:transmission_matrix_process}
m_{\alpha,\alpha'}(t) =
\beta^{\alpha\alpha'}_{t}
\cdot
C_{\alpha,\alpha'},
\end{equation}
breaking down the transmission rate matrix into its biological and social components \citep{baguelin, knock}: the social component is represented by the average number of contacts between individuals of age group $\alpha$ and age group $\alpha'$ via the contact matrix element $C_{\alpha,\alpha'}$;
$\beta^{\alpha\alpha'}_{t}$ is the time-varying transmissibility of the virus, the probability that a contact between an infectious person in age group $\alpha$ and a susceptible person in age group $\alpha'$ leads to transmission at time $t$.
The formulation below may be viewed as a stochastic extension to the deterministic multi-type SEIR model, using diffusion processes for the coefficients $\beta^{\alpha\alpha'}_{t}$ in \eqref{eq:transmission_matrix_process}, driven by independent Brownian motions
\begin{equation}
\begin{cases}\label{eq:Euler_BM}
\beta^{\alpha\alpha'}_{t} & = \exp(x^{\alpha\alpha'}_{t}),
\\
x^{\alpha\alpha'}_{t} \mid x^{\alpha\alpha'}_{t - 1}, \sigma^{\alpha\alpha'}_x & \sim \operatorname{N}(x^{\alpha\alpha'}_{t - 1}, (\sigma^{\alpha\alpha'}_x)^2),
\\
dx^{\alpha\alpha'}_{t} & = \sigma^{\alpha\alpha'}_{x}dW^{\alpha\alpha'}_{t},
\\
dW^{\alpha\alpha'}_{t} & \sim \operatorname{N}(0,dt),
\end{cases}
\end{equation}
with volatilities $\sigma^{\alpha\alpha'}_x$, corresponding to the case of little information on the shape of $\beta^{\alpha\alpha'}_{t}$.
The volatility $\sigma^{\alpha\alpha'}_x$ plays the role of the regularizing factor: higher values of $\sigma^{\alpha\alpha'}_x$ lead to greater changes in $\beta^{\alpha\alpha'}_{t}$.
The exponential transformation avoids negative values which have no biological meaning \citep{dureau, cazelles}.
A major advantage of considering a diffusion process for modeling $\beta^{\alpha\alpha'}_{t}$ is its ability to capture and quantify the randomness of the underlying transmission dynamics, which is particularly useful when the dynamics are not completely understood.
The diffusion process accounts for fluctuations in transmission that are influenced by non-modeled phenomena, such as new variants, mask-wearing propensity, etc.
The diffusion process also allows for capturing the effect of unknown extrinsic factors on the age-stratified force of infection,
for monitoring of the temporal evolution of the age-specific transmission rate without the implicit inclusion of external variables and for tackling non-stationarity in the data.
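For intuition, a forward simulation of the log-transmissibility random walk in \eqref{eq:Euler_BM} takes only a few lines; the volatility and initial value below are illustrative placeholders, not estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 30                    # number of weekly steps (as in the application)
sigma_x = 0.2             # assumed volatility sigma_x
x = np.empty(K + 1)
x[0] = np.log(0.05)       # assumed initial log-transmissibility
for k in range(1, K + 1):
    # Euler-Maruyama step: x_k | x_{k-1} ~ N(x_{k-1}, sigma_x^2)
    x[k] = x[k - 1] + sigma_x * rng.standard_normal()
beta = np.exp(x)          # exponential map guarantees beta_t > 0
```

Larger values of `sigma_x` produce rougher sample paths, mirroring the regularizing role of the volatility described above.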
We propose a diffusion-driven multi-type latent transmission model which assigns independent Brownian motions to $\log(\beta^{11}_{t}), \log(\beta^{22}_{t}), \ldots, \log(\beta^{AA}_{t})$ with respective age-stratified volatility parameters $\sigma_{x,\alpha}, \alpha \in \{1,\ldots,A\}$, for reasons of parsimony and interpretability.
The contact matrix is scaled by the age-stratified transmissibility in order to obtain the transmission rate matrix process
\begin{equation}\label{eq:transm_rate_indep_bm}
m^{\text{MBM}}_{\alpha,\alpha'}(t) = \beta^{\alpha\alpha'}_{t}
\cdot
C_{\alpha,\alpha'}
\equiv
\beta^{\alpha\alpha}_{t}
\cdot
C_{\alpha,\alpha'},
\end{equation}
under the assumption $\beta^{\alpha\alpha'}_t \equiv \beta^{\alpha\alpha}_t, \alpha \neq \alpha'$.
Henceforth, we refer to the model in \eqref{eq:transm_rate_indep_bm} as MBM (Multi Brownian Motion).
The MBM model can also be viewed as a factor analysis, offering dimension reduction from all the elements of the transmission rate matrix process to the Brownian motions $\beta_t^{\alpha\alpha}$ for $\alpha \in \{1,\ldots,A\}$.
An appealing feature of this factor-analysis representation is that it separates out the contact matrix, which reflects the social component of the model and can be informed by survey data.
We provide details on the formulation of the contact matrix in the Supplementary material.
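A sketch of the MBM construction in \eqref{eq:transm_rate_indep_bm}: each row of the contact matrix is scaled by its own age-specific diffusion path. All numerical values below are hypothetical placeholders:

```python
import numpy as np

A, K = 3, 30
rng = np.random.default_rng(2)
sigma = np.array([0.15, 0.20, 0.25])                 # assumed age-stratified volatilities
increments = sigma * rng.standard_normal((K + 1, A))
increments[0] = 0.0
x = np.log(0.05) + np.cumsum(increments, axis=0)     # independent Brownian motions per age group
beta_diag = np.exp(x)                                # beta^{aa}_k for each age group
C = np.array([[8.0, 3.0, 1.0],                       # assumed contact matrix
              [3.0, 6.0, 2.0],
              [1.0, 2.0, 4.0]])

# m^{MBM}_{a,a'}(k) = beta^{aa}_k * C_{a,a'} under the assumption beta^{aa'} = beta^{aa}
m = beta_diag[:, :, None] * C[None, :, :]            # shape (K+1, A, A)
```

Setting all entries of `sigma` equal and sharing one path across rows recovers the nested SBM special case.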
Further reduction in the dimension of the transmission rate matrix process can be achieved by the formulation
\begin{equation}\label{eq:transm_rate_commom_bm}
m^{\text{SBM}}_{\alpha,\alpha'}(t) =
\beta^{\alpha\alpha'}_{t}
\cdot
C_{\alpha,\alpha'}
\equiv
\beta_{t}
\cdot
C_{\alpha,\alpha'},
\end{equation}
under the assumption $\beta^{\alpha\alpha'}_t \equiv \beta_{t}$, where $\beta_{t}$ is the time-varying transmissibility of the virus, the probability that a contact between an infectious person and a susceptible person leads to transmission at time $t$.
Henceforth, we refer to the model in \eqref{eq:transm_rate_commom_bm} as SBM (Single Brownian Motion); it is a model nested within MBM.
In related studies, \cite{birrell} estimated the transmission rate between different age groups by considering time-dependent (weekly-varying) contact matrices, without allowing the age-stratified transmissibility to vary unconstrained in time. \cite{knock} considered a time-independent contact matrix; the transmission rate between age groups was parameterized as in \eqref{eq:transm_rate_commom_bm}, but the virus transmissibility was assumed to be piecewise linear, with multiple change points corresponding to major announcements and changes in COVID-19 related policy.
\subsubsection{Observation process}\label{section:obs_model}
Denote the number of observed deaths on day $t = 1, \ldots, T$ in age group $\alpha \in \{1,\ldots,A\}$ by $y_{t,\alpha}$.
A given infection may lead to observation events (i.e., deaths) in the future.
A link between $y_{t,\alpha}$ and the expected number of new age-stratified infections is established via the function
\begin{equation*}
d_{t,\alpha} = \mathbb{E}[y_{t,\alpha}] = \widehat{\text{IFR}}_{\alpha} \times \sum_{s = 1}^{t-1}h_{t-s} \Delta^{\text{infec}}_{s, \alpha}
\end{equation*}
on the expected age-stratified mortality counts, $d_{t,\alpha}$, the estimated age-stratified infection fatality rate, $\widehat{\text{IFR}}_{\alpha}$, and the infection-to-death distribution $h$, where $h_s$ gives the probability that death occurs on the $s^{th}$ day after infection. The distribution $h$ is assumed to be gamma with mean 24.2 days and coefficient of variation 0.39 \citep{flaxman, monod}, that is
\begin{equation}\label{eq:itd}
h \sim \operatorname{Gamma}(6.29, 0.26).
\end{equation}
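The daily probabilities $h_s$ can be obtained by discretizing the Gamma(6.29, 0.26) density over one-day intervals. A minimal sketch, in which the 60-day truncation horizon, the IFR value and the infection series are assumptions for illustration only:

```python
import numpy as np
from scipy.stats import gamma

shape, rate = 6.29, 0.26
itd = gamma(a=shape, scale=1.0 / rate)    # infection-to-death distribution h

S = 60                                    # truncation horizon in days (assumed)
h = np.diff(itd.cdf(np.arange(S + 1)))    # h[s-1] = P(death on day s after infection)

# Expected deaths: d_{t,alpha} = IFR_alpha * sum_{s<t} h_{t-s} * Delta_infec_{s,alpha}
IFR = 0.01                                # hypothetical IFR for one age group
delta_infec = np.ones(100)                # hypothetical daily new infections
d = IFR * np.convolve(delta_infec, h)[: len(delta_infec)]
```

Note that the shape/rate pair (6.29, 0.26) reproduces the stated mean $6.29/0.26 \approx 24.2$ days and coefficient of variation $1/\sqrt{6.29} \approx 0.39$.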
We allow for over-dispersion in the observation processes to account for noise in the underlying data streams, for example due to day-of-week effects on data collection \citep{stoner, knock}, and link $d_{t,\alpha}$ to $y_{t,\alpha}$ through an over-dispersed count model \citep{riou1, birrell, seaman}
\begin{equation}\label{eq:negbin}
y_{t,\alpha}\mid \theta \sim \operatorname{NegBin}\left(d_{t,\alpha}, \xi_{t,\alpha}\right),
\end{equation}
where $\xi_{t,\alpha} = \frac{d_{t,\alpha}}{\phi}$, such that $\mathbb{V}[y_{t,\alpha}] = d_{t,\alpha}(1+\phi)$.
The log-likelihood of the observed deaths is given by
\begin{equation*}\label{eq:loglike}
\ell^{Deaths}(y\mid \theta) = \sum_{t=1}^{T}\sum_{\alpha=1}^{A}\text{logNegBin}\left(y_{t,\alpha}\mid d_{t,\alpha}, \xi_{t,\alpha}\right),
\end{equation*}
where $y \in \mathbb{R}^{T \times A}_{0,+}$ are the surveillance data on deaths for all time-points and age groups and the parameter vector corresponds to either $\theta^{SBM}$ or $\theta^{MBM}$, defined in Section \ref{section:estimation}.
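The mean-dispersion parameterization in \eqref{eq:negbin} maps onto the standard negative binomial, which makes the variance identity $\mathbb{V}[y_{t,\alpha}] = d_{t,\alpha}(1+\phi)$ easy to verify; the numerical values below are arbitrary:

```python
from scipy.stats import nbinom

d, phi = 20.0, 0.5        # arbitrary mean and over-dispersion values
xi = d / phi              # dispersion parameter xi_{t,alpha} = d_{t,alpha} / phi
# scipy's nbinom(n, p) has mean n(1-p)/p; choosing n = xi, p = xi/(xi + d)
# gives mean d and variance d*(xi + d)/xi = d*(1 + phi)
y = nbinom(n=xi, p=xi / (xi + d))
```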
The observation process in this work does not additionally model the age-stratified infection counts, for reasons discussed in Section \ref{section:intro}.
\subsection{Parameter estimation}\label{section:estimation}
Bayesian estimation of the parameters of the SBM and MBM models was performed using Markov chain Monte Carlo (MCMC).
Computational details are provided in the Supplementary material.
In the following we discuss the challenging aspects of parameter estimation and present inference approximations used in the implementation in order to improve mixing and computational efficiency.
Estimation of the new daily infection counts in \eqref{eq:new_cases} requires the solution of the non-linear system of ODEs in \eqref{eq:ODE_system} coupled with the stochastic differential equation in \eqref{eq:Euler_BM}, which together are cast as an intractable hypo-elliptic diffusion.
We address the intractability of the hypo-elliptic diffusion by adopting the data augmentation framework of \cite{dureau} as a means to infer the latent sample path $x$ of the diffusion which is an infinite-dimensional object \citep{kalogeropoulos} and is indirectly observed through the time evolution of the disease states.
The task of inference for the transmission rate(s) via diffusions is particularly challenging; modeling their time evolution at a more granular time-scale (i.e., at the time-points of the observations) would naturally increase the computational cost.
To reduce the dimensionality of the stochastic process, we split the study period into $k = 1, \ldots, K$ weeks and denote by $k_t$ the week that time point $t$ falls into.
We assumed that the transmissibility of the virus, $\beta_{k_t}$, remains constant between subsequent weeks $[k_t, k_t + 1)$ and employed time-discretization via an Euler-Maruyama approximation of the latent sample path $x$.
Switching the notation from $t$ to $k_t$, this implies $x_{k_t} \mid x_{k_t - 1}, \sigma_x \sim \operatorname{N}(x_{k_t - 1}, \sigma^2_x)$ for the SBM model, which may be viewed as a prior distribution on $x_{k_t}$ and decreases the dimensionality of the stochastic process to $K + 1 < T + 1$.
Another challenging aspect is the estimation of the volatility $\sigma_x$, a top-level parameter in the Bayesian hierarchical model; owing to the multiple levels of hierarchy in the evidence synthesis model (Figure \ref{fig:functional_relationships}) and the lack of information about $\sigma_x$, this difficulty is reflected in lower effective sample sizes in all of our analyses.
In turn, this makes it harder to break down the transmission rate in \eqref{eq:transmission_matrix_process} into its biological and social components.
Finally, for this multivariate ODE time series model, the solution to \eqref{eq:ODE_system} is approximated numerically via the trapezoidal rule.
Higher-order numerical ODE solvers can be considered, at the cost of increased computational effort.
The Bayesian hierarchical model is implemented via a dynamic Hamiltonian Monte Carlo algorithm \citep{betancourt} which utilizes gradient information to efficiently explore high-dimensional parameter spaces or complex posterior geometries and to obtain a sample from the posterior distribution of the model parameters given the observed data.
The size of the parameter space increases with longer time horizons due to the involvement of the diffusion process in \eqref{eq:Euler_BM} at the weekly level, making such a gradient-based MCMC method particularly suitable for the implementation of the Bayesian hierarchical model proposed in this work; at the same time, this poses a challenge for ODE models, for which computing the gradient is computationally significantly more demanding and subtle than the plain likelihood evaluation \citep{vehtari}.
The increasing computational cost as a consequence of increasing volumes of data for ODE models is a point raised by \cite{birrell} and \cite{ghosh} and highlights the need for further developments in the area.
\section{Application}\label{section:application}
\subsection{The COVID-19 pandemic}\label{section:analysis_common_bm}
The proposed methodology is illustrated on data from the COVID-19 pandemic in Greece between August 2020 and March 2021, where $K = 30$ weeks.
The population was divided into three age groups, $\{0-39, 40-64, 65+\}$.
During the study period, a national lockdown was in place from November 2020 until the end of January 2021.
Some of the NPIs were relaxed at New Year's Day (Table S3 of the Supplementary material).
In this section we focus on the SBM model, which accounts for the inherent variability of contacts in the population as individuals respond to NPIs taken to reduce transmission, expressing the uncertainty in contact structure via prior distributions on the elements of the random contact matrix $C$, which is constrained not to vary in time.
However, the assumption made in \eqref{eq:transm_rate_commom_bm} restricts the age-stratified effective contact rates to be expressed by an overall effective contact rate for the population, which is dominated by the effective contact rate of the age group that drives the transmission of the virus, at the time-points of the observations (Figure \ref{fig:common_bm}, panel A).
This unrealistic assumption does not properly account for the change in the behaviour of individuals in each age group over time as a result of the implementation of NPIs during the study period.
As previously discussed in Section \ref{section:estimation}, the lack of sufficient information about the volatility $\sigma_x$ makes it harder to break down the age-stratified transmission rate in \eqref{eq:transm_rate_commom_bm} into its biological and social components, creating an identifiability issue.
Therefore, any differences in the virus transmission procedure across strata are attributed to the random contact matrix elements $C_{\alpha,\alpha'}$, justifying the notable differences between the prior and the posterior distribution of the contact matrix (Figure \ref{fig:common_bm}, panel B).
Consequently, the age-stratified transmission rate trajectories (Figure \ref{fig:common_bm}, panel C) only differ in terms of magnitude at the time-points of the observations.
The analysis of Austrian age-stratified mortality counts led to the same conclusion regarding the ability of the SBM model to capture age-stratified trends in SARS-CoV-2 transmission (Figure S5 of the Supplementary material).
These findings suggest that the SBM model is not flexible enough to accommodate age-stratified trends in SARS-CoV-2 transmission, motivating the need for a more flexible model which can better estimate the volatility of the diffusion process.
\subsection{Model expansion}
The assessment of the fidelity of the SBM model to the data revealed an issue which may be seen as prior-data conflict at a latent level.
We resolve this issue by expanding the SBM model to the MBM model in the spirit of \cite{gustafson} and inspecting the effect on the contact survey data.
The age-stratified mortality counts that are available from the national surveillance systems consistently indicate an increasing number of COVID-19 attributable deaths with age; in the case of Greece the respective time series for the younger age group, i.e. $\{0-39\}$, is sparsely informed, while the older age groups $\{40-64, 65+\}$ are well informed; a similar observation can be made for the Austrian dataset (Figures S1-S4 of the Supplementary material).
In the context of diffusion-driven transmission models it would be more sensible to group younger ages together, so as to avoid sparsely informed time series which are more difficult to fit, and instead allow for finer groupings at older ages, e.g. by 5- or 10-year intervals, depending on the information that becomes available from the national surveillance systems.
In such a case, where the number of age groups $A$ is small, the mortality data do not provide much information on the top-level volatilities $\sigma_{x, \alpha}, \alpha \in \{1,\ldots,A\}$ of the MBM Bayesian hierarchical model, reflecting the difficulty in the estimation of the volatility $\sigma_x$ involved in the SBM model.
Similarly to the SBM model, posterior inference of the age-stratified volatilities $\sigma_{x, \alpha}, \alpha \in \{1,\ldots,A\}$ has resulted in lower effective sample sizes compared to the rest of the MBM model parameters.
The assumption of independent diffusions for the biological components of the age-stratified transmission rate in \eqref{eq:transm_rate_indep_bm} allows a better reconstruction of the age-stratified drivers of transmission (Figure \ref{fig:indep_bm}, panels A and C).
The trajectory of the transmission rate of the age group $\{60+\}$ is similar to the respective trajectory shown in Figure \ref{fig:common_bm}, panel C, stressing the role of this group in the evolution of the epidemic.
The rate of transmission reaches its highest point in October 2020 for all age groups; the largest magnitude is demonstrated for the age group $\{60+\}$ (Figure \ref{fig:indep_bm}, panel C), providing evidence to support public health authorities in taking appropriate measures to decrease transmission within and between groups.
The order of magnitude of the age-stratified transmission rate changes between the SBM and MBM models because of the change in the order of magnitude in the posterior number of contacts (Figures \ref{fig:common_bm} and \ref{fig:indep_bm}, panel B).
Overall, the expansion of the age-stratified transmission model offers better interpretability and more flexibility than the SBM model in inferring age-stratified dynamic transmission rates and enables the reconstruction of the age-stratified drivers of transmission.
We discuss a quantitative comparison of the two models using information criteria in Section S2 of the Supplementary material.
The MBM model is validated over the period 2020-03-02 to 2020-09-27 ($K = 30$ weeks) using the estimated age-stratified numbers of cumulative infections in England from the REACT-2 seroprevalence survey \citep{ward_react2}.
In the analysis the population was divided into three age groups, $\{0-39, 40-59, 60+\}$.
REACT-2 quantified the total number of infected individuals in England and the level of under-ascertainment of infections in the population up to mid-July 2020; it was selected for model validation based on its careful design, large representative sample size and timing.
The age-stratified estimated counts of cumulative infections from REACT-2 were adjusted by the age distribution of the population
based on the three age groups $\{0-39, 40-59, 60+\}$.
Figure \ref{fig:model_validation} shows the agreement of the age-stratified model estimates with the estimates from REACT-2 and demonstrates the ability of the MBM model to capture the level of under-ascertainment of infections over time and by age group.
\section{Discussion}\label{section:discussion}
In this paper we have presented a novel approach to modeling the age-stratified dynamics of COVID-19 based on daily mortality counts.
We proposed a flexible Bayesian evidence synthesis framework that avoids the need to make strict model assumptions about the age-specific transmission process and enables a data-driven modeling approach for inferring the mechanism governing COVID-19 transmission, based on diffusion processes that are a-priori independent. The modularity of the Bayesian approach allows for assessing fidelity to the data at a latent level and resolving the corresponding prior-data conflict via expanding the model to incorporate distinct diffusions for each age group.
There are several advantages associated with our approach.
Our model is primarily driven by the reported daily COVID-19 attributable mortality counts; a key strength of our Bayesian evidence synthesis framework is that multiple data sources which are publicly available across countries are integrated to provide a robust overall picture of the epidemic nationwide: daily COVID-19 attributable mortality counts, daily laboratory-confirmed COVID-19 infection counts and the age distribution of the population.
This increases the applicability of the model to multiple countries, since these data sources are made available by the national surveillance systems.
A main contribution of this work is the data-driven estimation of the transmissibility of the virus via mutually independent diffusion processes, which remove the need for additional information regarding the timings of specific interventions and for hypotheses about their impact on transmission.
Instead, more general assumptions need to be made, such as on the smoothness of the latent sample path of the age-stratified diffusion that we wish to infer.
Our approach allows policymakers to assess the effect of NPIs on each age group.
The model simultaneously estimates multiple metrics of outbreak progression: by modeling mortality counts instead of inferring only from confirmed infections, the Bayesian hierarchical model is more robust to changes in COVID-19 testing policies
and the ability of national healthcare systems to detect COVID-19 infections, i.e. through contact tracing, and provides valuable information about the age-stratified transmission rates, latent number of infections, reporting ratio and the effective reproduction number (Sections S4-S7 of the Supplementary material), where uncertainty is accounted for naturally via MCMC.
Our modeling approach is subject to limitations.
Age-stratified counts of vaccinations are not accounted for in the latent transmission epidemic model in the form of covariates, but this does not pose an important limitation for the considered time period, with low counts of administered vaccinations until the end of March 2021.
In its current form the proposed hierarchical model is suited for the pre-vaccine era.
In addition, the model is limited by variations in the reporting procedure of deaths and mortality definitions across time and countries (Section S1 of the Supplementary material); the dates of daily reported deaths may deviate from the actual dates of deaths. The use of diffusion processes can be viewed as an attempt to model the noise within the observational model and mitigate such issues.
Similar to the Bayesian models of \cite{monod} and \cite{wistuba}, we make parametric assumptions regarding the distribution of the time between infection and reported death in \eqref{eq:itd} (i.e. it is assumed to be the same across age groups and constant during the study period).
While factors like hospital capacity and vaccine efficacy are expected to affect the infection-to-death distribution, such parametric assumptions might not be generally transferable to countries with different reporting characteristics and healthcare systems.
Some further limitations are shared with the Bayesian model proposed in \cite{knock}.
The latent transmission model assumes a closed environment, where no new infections are imported from outside of the population.
Additionally, our model does not explicitly account for hospital-acquired infections which may have contributed to overall transmission and to persistence of infection in periods of high transmission.
Age-specific counts of hospitalizations and intensive care unit cases (where publicly available) could help relax the parametric assumptions regarding the infection-to-death distribution and assist in the estimation of the course of the COVID-19 pandemic.
The age-stratified IFR is a crucial part of the observation model that enables estimation of the expected number of new infections from the observed mortality counts.
An assumption that has been made is that the age-stratified IFRs remain constant during the study period.
However, these are expected to vary in time and across populations due to a number of factors, such as SARS-CoV-2 variants, vaccine efficacy, the age distribution of the population, the age distribution of infections, underlying health conditions, access to healthcare resources, and other factors \citep{brazeau}.
Thus, dynamic age-stratified IFRs would be more appropriate when modeling the latent number of infections based on mortality data.
In future work we shall pursue two directions to expand our model. The first will aim to incorporate dynamic age-stratified IFR estimates and integrate further healthcare surveillance data, like age-stratified counts of vaccinations, hospitalizations and intensive care unit cases, where these data are available.
The second, motivated by multi-task Gaussian Processes which have recently been implemented for a different class of Bayesian infectious disease models \citep{kypraios}, could amend our model via exchangeable diffusion processes, reflecting a-priori beliefs that there is a shared structure between the dynamic transmission rates for individuals of different age groups.
\section*{Software}\label{section:software}
The \textbf{Bernadette} package for R \citep{rsoft} implements the methodology in this paper. The package, R code and documentation to reproduce the analysis are available at \url{https://github.com/bernadette-eu/indepgbm}.
\section*{Acknowledgments}
The authors would like to thank Anastasia Chatzilena, Theodore Kypraios and David Rossell for their helpful and constructive comments. {\it Conflict of Interest}: None declared.
\section*{Funding}
Lampros Bouranis' research is supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101027218.
\bibliographystyle{apalike}
\section{Fringe fields and magnon dispersion}
We solve Maxwell's equations within the magnetostatic limit together with the Landau-Lifshitz-Gilbert {Eqs.~(2)--(4)}. We treat $\textbf{M}$ and $\textbf{H}$ as the sum of a constant dc field and an ac perturbation with frequency $\omega$, i.e., $\textbf{M}=M_S\hat{z}+\bar{\textbf{m}}e^{-i\omega t}$ and $\textbf{H} =\textbf{H}_{0}+\bar{\textbf{h}}e^{-i\omega t}$ with $|\bar{\textbf{m}}|\ll{M}_S$ and $|\bar{\textbf{h}}|\ll|\textbf{H}_{0}|$. We solve the problem for the magnetostatic potential $\phi (\textbf{r})$, defined as $\bar{\textbf{h}}=\boldsymbol{\nabla}\phi(\textbf{r})$. Neglecting the $\bar{\textbf{m}}\times\bar{\textbf{h}}$ terms, {Eqs.~(2)--(4)} yield Walker's equation~\cite{walker1957,sparks1970}
\begin{align}
\partial_{x}\left[1+\kappa\left(\textbf{r}\right)\right]\partial_{x}\phi+\partial_{y}\left[1+\kappa\left(\textbf{r}\right)\right]\partial_{y}\phi+\partial_{z}^{2}\phi=0,\label{finaleqout}
\end{align}
with
\begin{align}
\kappa\left(\textbf{r}\right)=\begin{cases}
\frac{\Omega_H}{\Omega_{H}^{2}-\Omega^{2}}\qquad & r,\theta,z\in\mathcal{D}\\
0 & r,\theta,z \notin \mathcal{D}
\end{cases}
\end{align}
where $\Omega_{H}=({H_{0}}/{M_S})$, $H_0=B_{dc}/\mu_0-M_S$, and $\Omega=({\omega}/{\gamma M_{S}})$. The solution to the above equation is given by $\phi\left(\boldsymbol{r}\right)=m_0 \thinspace e^{im\theta}J_m\left(k_or\right)Z\left(z\right)$~\cite{sparks1970-ssc}, where $J_m\left(k_or\right)$ is the Bessel function of the first kind of integer order $m$, and $Z(z)$ is proportional to either $\cos(k_i z)$ or $\sin(k_i z)$ inside the disk and decays exponentially outside $(e^{\pm k_o z})$. Here $m_0$ is a coefficient to be determined through the normalization of the modes. Applying the proper boundary conditions arising from Maxwell's equations yields the following transcendental equations~\cite{sparks1970,sparks1970-ssc},
\begin{align}
i\frac{k_{i}}{\sqrt{1+\kappa}} & =k_{o}, \label{transc1} \\
\tan\left(k_{i}\frac{d}{2}+\tau\frac{\pi}{2}\right) & =\frac{k_{o}}{k_{i}},\label{transc}
\end{align}
which impose a constraint between the values of the thickness and radial wave vectors $k_i$ and $k_o$, respectively, and the frequency $\omega$. Here $\tau$ assumes the value 0 (1) for even (odd) $Z(z)$ functions. For $\tau=0$ (and assuming $k_o>0$), the solution for the $Z(z)$ function is even and is
\begin{equation}
Z\left(z\right)=\begin{cases}
e^{-k_{o}z} & z>d/2\\
\frac{e^{-k_{o}\frac{d}{2}}}{\cos\left(k_{i}\frac{d}{2}\right)}\cos\left(k_{i}z\right) & -d/2<z<d/2\\
e^{k_{o}z} & z<-d/2
\end{cases}
\end{equation}
while for $\tau=1$, the solution is odd and is
\begin{equation}
Z\left(z\right)=\begin{cases}
-e^{-k_{o}z} & z>d/2\\
-\frac{e^{-k_{o}\frac{d}{2}}}{\sin\left(k_{i}\frac{d}{2}\right)}\sin\left(k_{i}z\right) & -d/2<z<d/2\\
e^{k_{o}z} & z<-d/2
\end{cases}
\end{equation}
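Given a quantized radial wave vector $k_o$, the even-mode ($\tau=0$) branch of Eqs.~(\ref{transc1})--(\ref{transc}) can be solved numerically for $k_i$. A minimal sketch with assumed disk dimensions and wave vector (not values from this work):

```python
import numpy as np
from scipy.optimize import brentq

d = 200e-9                 # assumed disk thickness (m)
k_o = 1.0e7                # assumed radial wave vector (1/m)

# Even modes: tan(k_i d/2) = k_o / k_i; the lowest root lies in (0, pi/d),
# since the left side rises from 0 to +inf there while k_o/k_i decreases.
f = lambda k_i: np.tan(k_i * d / 2) - k_o / k_i
k_i = brentq(f, 1e-3, (np.pi / d) * (1 - 1e-9))

# Eq. (transc1), i*k_i/sqrt(1+kappa) = k_o, then fixes 1 + kappa = -(k_i/k_o)^2
kappa = -(k_i / k_o) ** 2 - 1
```

Higher even thickness modes follow from the subsequent branches of the tangent, shifting the bracket by multiples of $2\pi/d$.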
Now we determine the magnon dispersion analytically by assuming the condition of a fully pinned magnetization at the edge of the disk, $\bar{\textbf{m}}(r=R,\theta,z)=0$~\cite{sparks1970-ssc}. In cylindrical coordinates, the rf magnetization $\bar{\textbf{m}}=(\bar{m}_r,\bar{m}_\theta,\bar{m}_z)$ can be written as a function of $\phi(\textbf{r})$ as
\begin{align}
\bar{m}_{r} & =\kappa\partial_{r}\phi-\frac{i\nu}{r}\partial_{\theta}\phi\label{mrfinal-1}\\
\bar{m}_{\theta} & =i\nu\partial_{r}\phi+\frac{\kappa}{r}\partial_{\theta}\phi\label{mthetafinal-1}
\end{align}
where ${\nu}=[{\Omega}/({{\Omega}_{H}^{2}-\Omega^{2}})]$. Hence the magnon magnetization profile within the disk is ${\textbf{m}} \left(\textbf{r},t\right)= \textrm{Re} \left\{\bar{\textbf{m}} e^{-i\omega t}\right\}$, where
\begin{align}
m_{r} & =m_0 \cos (m\theta-\omega t) Z\left(z\right)\left[\kappa\partial_{r}J_{m}\left(k_{o}r\right)+\frac{m\nu}{r}J_{m}\left(k_{o}r\right)\right],\label{mag1}\\
m_{\theta} & =-m_0 \sin (m\theta -\omega t)Z\left(z\right)\left[\nu\partial_{r}J_{m}\left(k_{o}r\right)+\frac{m\kappa}{r}J_{m}\left(k_{o}r\right)\right],\label{mag2} \\
m_{z} & =0.
\end{align}
Further, with $\nu\approx\kappa$ and the pinning of the magnetization at the edge of the disk \hbox{$\textbf{m}(r=R,\theta,z)=0$}, we obtain quantized values for the in-plane wave vector $k_{o,m,n}={\beta_{m-1}^{n}}/{R}$~\cite{sparks1970-ssc}, where $\beta_{m-1}^{n}$ is the $n$-th zero of $J_{m-1}$. Through Eq.~(\ref{transc1}), this yields the analytical expression for the magnon dispersion
\begin{equation}
\omega_{\lambda}=\gamma\left\{ {H}_{0}^{2}+\frac{{H}_{0} M_{S}}{1+{k_{i,\lambda}^{2}}/{k_{o,\lambda}^{2}}}\right\}^{1/2},
\label{magnon-f}
\end{equation}
where the magnon index $\lambda=(m,n,l)$ stands for angular ($m$), radial ($n$) and thickness ($l$) modes and $k_{o,\lambda}=k_{o,m,n}$ and $k_{i,\lambda}=k_{i,m,n,l}$. The exchange effective magnetic field $\textbf{H}_{ex}=\lambda_{ex}\nabla^2\textbf{M}$ is incorporated in the equation above by replacing $H_0\rightarrow \tilde{H}_{0}$ with $\tilde{H}_0=H_0+M_S\lambda_{ex}(k_{i,\lambda}^2+k_{o,\lambda}^2)$, where $\lambda_{ex}={2A_{ex}}/{\mu_0 M_S^2}$ is the squared exchange length and $A_{ex}$ the exchange stiffness. This perturbative approach is justified as $H_0\gg M_S\lambda_{ex}(k_{i,\lambda}^2+k_{o,\lambda}^2)$. The FMR frequency can be recovered from Eq.~(5) or Eq.~(\ref{magnon-f}) by taking the limit of infinite wavelength ($k_{o,\lambda}\rightarrow 0$) and is $\omega_{FMR}=\gamma H_0$.
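In reduced units $\Omega=\omega/(\gamma M_S)$ and $\Omega_H=H_0/M_S$, Eq.~(\ref{magnon-f}) reads $\Omega_\lambda = [\Omega_H^2 + \Omega_H/(1+k_{i,\lambda}^2/k_{o,\lambda}^2)]^{1/2}$ and is straightforward to evaluate once the quantized wave vectors are known. A sketch with assumed parameters; the Bessel zeros give $k_{o,m,n}$, while $k_i$ below is a placeholder that would come from the transcendental equations:

```python
import numpy as np
from scipy.special import jn_zeros

Omega_H = 0.5                        # assumed reduced dc field H0/Ms
R = 0.5e-3                           # assumed disk radius (m)

m, n = 1, 1
beta_mn = jn_zeros(m - 1, n)[-1]     # n-th zero of J_{m-1}
k_o = beta_mn / R                    # quantized radial wave vector k_{o,m,n}
k_i = 8.6e6                          # placeholder thickness wave vector (1/m)

# Reduced magnon frequency, Eq. (magnon-f) without the exchange correction
Omega = np.sqrt(Omega_H**2 + Omega_H / (1 + (k_i / k_o) ** 2))
Omega_FMR = Omega_H                  # infinite-wavelength limit k_o -> 0
```

Because $k_i \gg k_o$ for a thin disk, $\Omega$ sits only slightly above the FMR value $\Omega_H$, consistent with the limit quoted above.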
We now determine the magnetization amplitude $m_0$ for a single magnon based on the conservation of angular momentum along the $z$ direction. In the absence of magnons, the angular momentum along $z$ is given by $\int d^{3}r \thinspace \textbf{M}\cdot \hat{z}$ with $\textbf{M}=(0,0,M_S)$. After the creation of a magnon with $\textbf{m}=(m_x,m_y,0)$, the $z$ component of the magnetization decreases, as $|\textbf{M}|$ must be conserved, i.e., $\textbf{M}=\left(m_{r},m_{\theta},\sqrt{M_{S}^{2}-m_{r}^{2}-m_{\theta}^{2}}\right)$. The magnon excitation carries an angular momentum $\hbar \gamma$, where $\hbar$ is the reduced Planck constant, and
this conservation yields to leading order
\begin{equation}
{}{\hbar \gamma } \approx\int d^{3}r \frac{m_{r}^{2}+m_{\theta}^{2}}{2M_{S}},
\end{equation}
which produces
\begin{equation}
m_{0,\lambda}=\sqrt{\frac{2\gamma\hbar M_{S}}{2\pi k_{o,\lambda}^{2}\kappa^{2}I_{\lambda}^rI_{\lambda}^z}}
\label{coeff}
\end{equation}
with $I_{\lambda}^r=\int_{0}^{R}rdr\,J_{m-1}^{2}\left(k_{o,\lambda}r\right)$ and $I_{\lambda}^z=\int_{-\frac{d}{2}}^{\frac{d}{2}}dz\thinspace Z_{\lambda}^{2}\left(z\right)$. Finally, the magnetostatic potential is $ \phi_{\lambda}(\textbf{r})=m_{0,\lambda}e^{im\theta}J_m\left(k_{o,\lambda} r\right)Z_{\lambda}(z)$, and the fringe fields are ${\textbf{h}_{\lambda}\left(\textbf{r},t\right)}= \textrm{Re}\left\{\bar{\textbf{h}}_{\lambda}(\textbf{r}) e^{-i\omega_{\lambda}t }\right\} = \textrm{Re}\left\{ \nabla \phi_{\lambda} (\textbf{r}) e^{-i\omega_{\lambda} t} \right\}$, thus
\begin{align}
\frac{{\textbf{h}_{\lambda}\left(\textbf{r},t\right)}}{m_{0,\lambda} } & =\hat{r} \cos\left(m\theta-\omega_{\lambda}t\right)\left[k_{o,\lambda}J_{m-1}\left(k_{o,\lambda}r\right)-\frac{m}{r}J_{m}\left(k_{o,\lambda} r\right)\right] Z_{\lambda}(z)\nonumber \\
-&\frac{m\hat{\theta}}{r}\sin\left(m\theta-\omega_{\lambda}t\right)J_{m}\left(k_{o,\lambda} r\right)Z_{\lambda}(z)\nonumber \\
+&\hat{z}\cos\left(m\theta-\omega_{\lambda}t\right)J_{m}\left(k_{o,\lambda} r\right)\partial_z Z_{\lambda}(z).
\end{align}
The in-plane fringe field components can be written as a linear combination of the right and left circularly polarized (complex) fields, $\bar{h}_{-,\lambda}\left(\textbf{r}\right)$ and $\bar{h}_{+,\lambda}\left(\textbf{r}\right)$, respectively, leading to
\begin{equation}
{\textbf{h}_{\lambda}\left(\textbf{r},t\right)}
=\frac{1}{2\sqrt{2}}\left[\hat{e}_{-}\bar{h}_{+,\lambda}\left(\textbf{r}\right)+\hat{e}_{+}\bar{h}_{-,\lambda}\left(\textbf{r}\right)\right]e^{-i\omega_\lambda t}+\frac{1}{2\sqrt{2}}\left[\hat{e}_{+}\bar{h}_{+,\lambda}^{\star}\left(\textbf{r}\right)+\hat{e}_{-}\bar{h}_{-,\lambda}^{\star}\left(\textbf{r}\right)\right]e^{i\omega_\lambda t}+\hat{z}h_{z,\lambda}\left(\textbf{r},t\right) \label{fring},
\end{equation}
with $\hat{e}_{\pm}=\frac{(\hat{r}\pm i\hat{\theta})}{\sqrt{2}}$ and
\begin{equation}
\bar{h}_{+,\lambda}(\textbf{r})=m_{0,\lambda}\left[k_{o,\lambda}J_{m-1}\left(k_{o,\lambda}r\right)-2\frac{m}{r}J_{m}\left(k_{o,\lambda}r\right)\right]e^{im\theta}Z_{\lambda}\left(z\right),
\label{fringeleft}
\end{equation}
\begin{equation}
\bar{h}_{-,\lambda}(\textbf{r})=m_{0,\lambda}\left[k_{o,\lambda}J_{m-1}\left(k_{o,\lambda}r\right)\right]e^{im\theta}Z_{\lambda}\left(z\right).
\end{equation}
\section{NV-center spin magnon coupling}
The interaction Hamiltonian between the NV-center spin and the magnon fringe fields is given by
\begin{equation}
\frac{{\cal H}_{I}}{\hbar}=\gamma_{\textrm{NV}}\mu_0 {\textbf{h}}_{\lambda}(\textbf{r},t)\cdot{\textbf{S}},
\label{Hint}
\end{equation}
where $\gamma_{\textrm{NV}}$ is the NV-center gyromagnetic ratio and $\textbf{S}=(S_x,S_y,S_z)$ are spin-1 matrices. We rewrite ${\textbf{S}}=\frac{1}{\sqrt{2}}\left({S}_{+}\hat{e}_{-}+{S}_{-}\hat{e}_{+}\right)+{S}_{z}\hat{e}_{z}$,
with ${S}_{\pm}={S}_{x}\pm i{S}_{y}$ the ladder (raising and lowering) operators of the $z$ component of the total angular momentum of the NV center, obeying ${S}_{\pm}\left|S,m_s\right\rangle =\sqrt{\left(S\mp m_s\right)\left(S\pm m_s+1\right)}\left|S,m_s\pm1\right\rangle $. In order to write down a Jaynes--Cummings Hamiltonian for our system, we need to rewrite the Hamiltonian Eq.~(\ref{Hint}) using the creation and annihilation operators of the magnon and NV$^-$-center levels. First, we recall that the $z$ component of the magnetization can be written as $\hat{M}_{z}\approx M_{S}-(1/2M_S)m_{-}m_{+}$
with $m_{\pm}=m_{x}\pm im_{y}$. The connection with second quantization is made by associating the $\hat{m}_{+}$ $\left(\hat{m}_{-}\right)$ operator with the increase (decrease) of the magnetization along $z$. Therefore, their relations with the magnon creation $(a_{\lambda}^\dagger)$ and annihilation $(a_\lambda)$ operators are $\hat{m}_{-,\lambda}\left(\textbf{r},t\right) \rightarrow m_{-,\lambda}\left(\textbf{r},t\right)a_{\lambda}^{\dagger}$ and $\hat{m}_{+,\lambda}\left(\textbf{r},t\right) \rightarrow m_{+,\lambda}\left(\textbf{r},t\right)a_{\lambda}$~\cite{kittel1963quantum,PhysRev.122.791}, where $\lambda=\left(m,n,l\right)$ indexes the magnon mode and $a_{\lambda}^\dagger \left|0\right\rangle = \left|mnl\right\rangle$, with $ \left|0\right\rangle $ the state with no excited magnons. In terms of the magnon operators, the creation of magnon mode $\lambda$ decreases the $z$ component of the integrated magnetization by $\hbar \gamma$, i.e., $\int d^{3}r \thinspace \hat{M}_{z}=M_{S}\pi d R^2-\hbar \gamma\thinspace a_{\lambda}^{\dagger}a_\lambda$~\cite{PhysRev.122.791}. Moreover, we can also write the fringe field [Eq.~(\ref{fring})] in second quantization using the magnon operators,
\begin{equation}
\textbf{h}_{\lambda}\left(\textbf{r},t\right)=a_{\lambda}\left(\frac{\hat{e}_{-}\bar{h}_{+,\lambda}\left(\textbf{r}\right)+\hat{e}_{+}\bar{h}_{-,\lambda}\left(\textbf{r}\right)}{2\sqrt{2}}\right)e^{-i\omega_{\lambda}t}+a_{\lambda}^{\dagger}\left(\frac{\hat{e}_{+}\bar{h}_{+,\lambda}^{\star}\left(\textbf{r}\right)+\hat{e}_{-}\bar{h}_{-,\lambda}^{\star}\left(\textbf{r}\right)}{2\sqrt{2}}\right)e^{i\omega_{\lambda}t}+h_z(\textbf{r},t)\hat{z}. \end{equation}
Finally, to leading order, Eq.~(\ref{Hint}) reads
\begin{align}
\frac{{\cal H}_{I}}{ \hbar} & \approx \frac{\gamma_{\textrm{NV}}\mu_{0}}{4}\left[\bar{h}_{+,\lambda}^{\star}\left(\textbf{r}\right)a_{\lambda}^{\dagger}\hat{S}_{+}e^{i\omega_{\lambda}t}+\bar{h}_{-,\lambda}\left(\textbf{r}\right)a_{\lambda}\hat{S}_{+}e^{-i\omega_{\lambda}t}\right.\\
& +\left.\bar{h}_{+,\lambda}\left(\textbf{r}\right)a_{\lambda}\hat{S}_{-}e^{-i\omega_{\lambda}t}+\bar{h}_{-,\lambda}^{\star}\left(\textbf{r}\right)a_{\lambda}^{\dagger}\hat{S}_{-}e^{i\omega_{\lambda}t}\right].
\label{Hint2}
\end{align}
Now we apply the unitary transformation ${\cal U}\left(t\right)=e^{-\frac{i}{\hbar}{{\cal H}_{\textrm{NV}}}t}$ with $\frac{{\cal H}_{\textrm{NV}}}{\hbar}=2\pi D(S_{z}^{2}-2/3)+ \gamma_{\textrm{NV}} B_{dc} S_{z}$ to the Hamiltonian above. After applying the rotating wave approximation (RWA) and assuming $\omega_{\pm}>0$, we obtain
\begin{align}
\left({\cal U}^\dagger\frac{{\cal H}_{I}}{ \hbar}{\cal U} \right)_{\textrm{RWA}} & \approx \gamma_{\textrm{NV}}\mu_0\frac{\sqrt{2}}{4}\left[\bar{h}_{-,\lambda}\left(\textbf{r}\right)a_{\lambda}\sigma_{+}^{U}e^{-i\left(\omega_{\lambda}-\omega_{+}\right)t}+\bar{h}_{-,\lambda}^{\star}\left(\textbf{r}\right)a_{\lambda}^{\dagger}\sigma_{-}^{U}e^{i\left(\omega_{\lambda}-\omega_{+}\right)t}\right.\\
& +\left.\bar{h}_{+,\lambda}^{\star}\left(\textbf{r}\right)a_{\lambda}^{\dagger}\sigma_{+}^{D}e^{i\left(\omega_{\lambda}-\omega_{-}\right)t}+\bar{h}_{+,\lambda}\left(\textbf{r}\right)a_{\lambda}\sigma_{-}^{D}e^{-i\left(\omega_{\lambda}-\omega_{-}\right)t}\right],
\label{Hint4}
\end{align}
with $\omega_{\pm} = 2\pi D \pm {\gamma_{\rm{NV}} B_{dc}}$, $\sigma_{+}^{D}\left|-1\right\rangle =\left|0\right\rangle $, $\sigma_{-}^{D}\left|0\right\rangle =\left|-1\right\rangle $, $\sigma_{+}^{U}\left|0\right\rangle =\left|1\right\rangle $ and $\sigma_{-}^{U}\left|1\right\rangle =\left|0\right\rangle $.
From here it is clear that the right circularly polarized component of the fringe field, $\bar{h}_{-,\lambda}(\textbf{r})$, only drives transitions within the $\left\{\left|0\right\rangle,\left|+1\right\rangle \right\}$ subspace, while the left circularly polarized component, $\bar{h}_{+,\lambda}(\textbf{r})$, only drives transitions within the $\left\{\left|0\right\rangle,\left|-1\right\rangle \right\}$ subspace, for magnetic fields at which the $\left|-1\right\rangle$ state has higher energy than the $\left|0\right\rangle$ state.
For the configuration considered in the main text, resonances occur only between magnons and the NV-center $\left|0\right\rangle \longleftrightarrow \left|-1\right\rangle $ transition (see Fig.~3 in the main manuscript). Therefore, the Hamiltonian within the resonant condition $\omega_- \approx \omega_\lambda$ is
\begin{equation}
\frac{{\cal H}_{I}^{\textrm{RWA}}}{\hbar}=\gamma_{\textrm{{NV}}}\mu_{0}\frac{\sqrt{2}}{4}\left[\bar{h}_{+,\lambda}^{\star}\left(\textbf{r}\right)a_{\lambda}^{\dagger}\sigma_{+}^{D}+\bar{h}_{+,\lambda}\left(\textbf{r}\right)a_{\lambda}\sigma_{-}^{D}\right].
\end{equation}
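The resonant condition $\omega_{-}\approx\omega_{\lambda}$ singles out a specific dc field. As an illustration, the sketch below locates the crossing of $\omega_{-}(B)$ with a magnon branch by bisection; the zero-field splitting and gyromagnetic ratio are the standard NV values used above, while the saturation magnetization and the mode ratio $(k_i/k_o)^2$ are hypothetical placeholders, not values from the main text.

```python
import math

D_NV = 2.87e9   # NV zero-field splitting (Hz)
GAMMA = 2.8e6   # gyromagnetic ratio (Hz/G), taken equal for NV and magnet

def f_minus(B):
    # |0> <-> |-1> transition frequency versus dc field B (in G)
    return D_NV - GAMMA * B

def f_magnon(B, Ms, ratio2):
    # Magnon frequency from Eq. (magnon-f) with (k_i/k_o)^2 = ratio2
    return GAMMA * math.sqrt(B ** 2 + B * Ms / (1.0 + ratio2))

def resonance_field(Ms, ratio2, lo=1.0, hi=1000.0):
    # Bisection on f_magnon - f_minus; returns None if no crossing in [lo, hi]
    g = lambda B: f_magnon(B, Ms, ratio2) - f_minus(B)
    if g(lo) * g(hi) > 0:
        return None
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Since $\omega_{-}(B)$ decreases and $\omega_{\lambda}(B)$ increases with field, the crossing, when it exists in the scanned window, is unique.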
We finally write the above Hamiltonian for an NV center placed at $\textbf{r}=\textbf{r}_\textrm{NV}$
\begin{equation}
\frac{{\cal H}_{I}^{\textrm{RWA}}}{\hbar}\approx \textsl{g}_{\lambda}^{\star}\left(\textbf{r}_\textrm{NV}\right)a_{\lambda}^{\dagger} \sigma_{+}^D+\textsl{g}_{\lambda}^{}\left(\textbf{r}_\textrm{NV}\right)a_{\lambda}\sigma_{-}^D,
\label{heff}
\end{equation}
with spin-magnon coupling
\begin{equation}
\textsl{g}_\lambda(\textbf{r}_\textrm{NV})=\frac{\sqrt{2}}{4} \gamma_{\textrm{NV}}\mu_0\thinspace \bar{h}^{}_{+,\lambda}(\textbf{r}_\textrm{NV}). \label{gcoupsm}
\end{equation}
\section{Cooperativity}
The cooperativity is defined by Eq.~(1) in the main manuscript and reads
\begin{equation}
{\mathcal C}_\lambda=\frac{4{\textsl{g}}_{\lambda}^2}{n{\mathsf{\varkappa}}/{T_{2}^{*}}}.
\label{coop}
\end{equation}
Using $\mathsf{\varkappa}=2\pi f_{\lambda}/Q$ with $Q=1/\left(2\alpha\right)$ and Eq.~(\ref{gcoupsm}), we can rewrite Eq.~(\ref{coop}) as
\begin{align}
\mathcal{C}_\lambda=\frac{1}{2}\left(\gamma_{\textrm{NV}} \mu_{0}\right)^{2}\frac{T_{2}^{*}}{n\alpha \pi f_{\lambda}}\left|\frac{\bar{h}_{+,\lambda}\left(\textbf{r}_{\textrm{NV}}\right)}{2}\right|^{2},
\end{align}
and finally
\begin{align}
\mathcal{C}_\lambda =\frac{\gamma}{2\pi}\frac{T_{2}^{*}}{n}\left(\hbar \gamma_{\textrm{NV}} \mu_0\right)^{2}\times\Gamma_{\lambda}\left(R,d,\textbf{r}_{\textrm{NV}}\right)\times\frac{1}{\hbar\omega_{\lambda}}\times\frac{M_{S}}{\alpha},
\end{align}
with
\begin{align}
\Gamma_{\lambda}\left(R,d,\textbf{r}_{\textrm{NV}}\right) =\frac{1/2}{k_{o,\lambda}^{2}\kappa^{2}I_{\lambda}^{r}I_{\lambda}^{z}}\left[k_{o,\lambda} J_{m-1}\left(k_{o,\lambda}r_{\textrm{NV}}\right)-2\frac{m}{r_{\textrm{NV}}}J_{m}\left(k_{o,\lambda}r_\textrm{NV}\right)\right]^{2}e^{2k_{o,\lambda}z_{\textrm{NV}}}
\end{align}
and $\textbf{r}_\textrm{NV} = (r_\textrm{NV},\theta_\textrm{NV},z_\textrm{NV})$.
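The position dependence of ${\mathcal C}_\lambda$ contained in $\Gamma_{\lambda}$ can be explored numerically. The sketch below is only an illustration: overall prefactors such as $\kappa$, $I_{\lambda}^{r}$ and $I_{\lambda}^{z}$ are dropped and $k_{o,\lambda}$ is taken in arbitrary units, keeping just the radial Bessel profile and the exponential decay $e^{2k_{o,\lambda}z_{\textrm{NV}}}$ with NV depth ($z_{\textrm{NV}}<0$ below the disk).

```python
import math

def bessel_j(m, x, terms=40):
    # Power-series evaluation of J_m(x), adequate for moderate x
    return sum((-1) ** k / (math.factorial(k) * math.factorial(k + m))
               * (x / 2) ** (2 * k + m) for k in range(terms))

def fringe_profile(m, k_o, r):
    # Radial factor of |h_{+,lambda}|^2 entering Gamma_lambda, up to constants
    return (k_o * bessel_j(m - 1, k_o * r) - 2 * m / r * bessel_j(m, k_o * r)) ** 2

def relative_cooperativity(m, k_o, r_nv, z_nv):
    # Position dependence of C_lambda: Bessel profile times exp(2 k_o z_nv);
    # z_nv < 0 below the disk, so the coupling decays with NV depth
    return fringe_profile(m, k_o, r_nv) * math.exp(2.0 * k_o * z_nv)
```

Each additional depth of $1/k_{o,\lambda}$ costs a factor $e^{2}$ in cooperativity, which is why shallow NV centers are favoured in the YIG estimates below.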
\section{\label{z-function} Results for YIG }
\begin{figure}[b]
\centering
\includegraphics[width=1\linewidth]{YIG.pdf}
\caption{Frequencies of the NV-center levels $\left|\pm 1\right\rangle$ (blue and red solid lines) and of the YIG FMR and YIG magnons (black solid lines) as a function of the external dc magnetic field $B$, parallel to the [111] diamond crystallographic direction, for (a) $M_S=1750$~G, (b) $M_S=2000$~G and (c) $M_S=2200$~G. The inset shows a zoom-in of the crossing region between the $\left|-1\right\rangle$ level and both the magnonic ($m=1,2,\dots,14$) and FMR frequencies.}
\label{fig1}
\end{figure}
Here we plot the NV-center level transition frequencies [Eq.~(7)] and the magnon frequencies [Eq.~(5)] as a function of the external dc magnetic field. Interestingly, we notice the absence of a resonance between the magnon modes described in the manuscript and the NV levels for typical low-temperature YIG values of $M_S$~\cite{d2013inverse,haidar2015thickness,jermain2017increased}. The resonance between the NV transition $\left|0\right\rangle \longleftrightarrow \left|-1\right\rangle$ and the magnon mode $m=7$ only occurs for $M_S < 1750$~G [see Fig.~\ref{fig1}]. We emphasize that such a reduction of $M_S$ would imply a significant increase in the damping value due to the addition of doping.
Most importantly, we note that the spin-magnon interaction Hamiltonian for YIG assumes a different form compared to the Hamiltonian calculated previously [Eq.~(\ref{heff})]. The underlying reason is that the resonance between the $\left|0\right\rangle \longleftrightarrow \left|-1\right\rangle$ transition and the magnon frequency only occurs for a negative $\left|-1\right\rangle$ transition frequency ($\omega_-<0$). Taking this into account and reapplying the RWA, we obtain
\begin{align}
\left({\cal U}^\dagger\frac{{\cal H}_{I}}{ \hbar}{\cal U} \right)_{\textrm{RWA}}^{\omega_{-,\lambda}<0} & \approx \gamma_{\textrm{NV}}\mu_0\frac{\sqrt{2}}{4}\left[ \bar{h}_{-,\lambda}\left(\textbf{r}\right)a_{\lambda}\sigma_{+}^{U}e^{-i\left(\omega_{\lambda}-\omega_{+}\right)t}+\bar{h}_{-,\lambda}^{\star}\left(\textbf{r}\right)a_{\lambda}^{\dagger}\sigma_{-}^{U}e^{i\left(\omega_{\lambda}-\omega_{+}\right)t} \right. \nonumber \\
& +\left. \bar{h}_{-,\lambda}\left(\textbf{r}\right)a_{\lambda}\sigma_{+}^{D}e^{-i\left(\omega_{\lambda}-\left|\omega_{-}\right|\right)t}+\bar{h}_{-,\lambda}^{\star}\left(\textbf{r}\right)a_{\lambda}^{\dagger}\sigma_{-}^{D}e^{i\left(\omega_{\lambda}-\left|\omega_{-}\right|\right)t} \right].
\label{Hint4-YIG}
\end{align}
From this equation we clearly see that, in contrast to V[TCNE]$_x$, the right circularly polarized component of the fringe fields [$\bar{h}_{-,\lambda}(\textbf{r})$] drives both the $\left\{\left|0\right\rangle,\left|-1\right\rangle \right\}$ and $\left\{\left|0\right\rangle,\left|1\right\rangle \right\}$ transitions, a direct consequence of $\omega_{-}<0$. Within the resonance condition we obtain
\begin{align}
\frac{{\cal H}_{I}^{\textrm{RWA}}}{\hbar} = \gamma_{\textrm{NV}}\mu_0\frac{\sqrt{2}}{4}\left[ \bar{h}_{-,\lambda}\left(\textbf{r}\right)a_{\lambda}\sigma_{+}^{D}+\bar{h}_{-,\lambda}^{\star}\left(\textbf{r}\right)a_{\lambda}^{\dagger}\sigma_{-}^{D}\right].
\label{Hres-YIG}
\end{align}
Finally, for a NV center located at $\textbf{r}=\textbf{r}_{\textrm{NV}}$ we obtain
\begin{align}
\frac{{\cal H}_{I}^{\textrm{RWA}}}{\hbar} = \textsl{g}_\lambda(\textbf{r}_\textrm{NV}) a_{\lambda}\sigma_{+}^{D}+\textsl{g}_{\lambda}^{\star}(\textbf{r}_\textrm{NV})a_{\lambda}^{\dagger}\sigma_{-}^{D}
\label{heff-YIG}
\end{align}
with spin-magnon coupling of
\begin{equation}
\textsl{g}_\lambda(\textbf{r}_\textrm{NV})=\frac{\sqrt{2}}{4} \gamma_{\textrm{NV}}\mu_0\thinspace \bar{h}^{}_{-,\lambda}(\textbf{r}_\textrm{NV}). \label{gcoupsm-yig}
\end{equation}
The cooperativity is again calculated through ${\mathcal C}_\lambda=\frac{4{\textsl{g}}_{\lambda}^2}{n{\mathsf{\varkappa}}/{T_{2}^{*}}}$ and reads
\begin{align}
\mathcal{C}_{\lambda}^{\textrm{YIG}} =\frac{\gamma}{2\pi}\frac{T_{2}^{*}}{n}\left(\hbar \gamma_{\textrm{NV}} \mu_0\right)^{2}\times\Gamma_{\lambda}^{\textrm{YIG}}\left(R,d,\textbf{r}_{\textrm{NV}}\right)\times\frac{1}{\hbar\omega_{\lambda}}\times\frac{M_{S}}{\alpha},
\end{align}
with
\begin{align}
\Gamma_{\lambda}^{\textrm{YIG}}\left(R,d,\textbf{r}_{\textrm{NV}}\right) =\frac{1/2}{k_{o,\lambda}^{2}\kappa^{2}I_{\lambda}^{r}I_{\lambda}^{z}}\left[k_{o,\lambda} J_{m-1}\left(k_{o,\lambda}r_{\textrm{NV}}\right)\right]^{2}e^{2k_{o,\lambda}z_{\textrm{NV}}}
\end{align}
and $\textbf{r}_\textrm{NV} = (r_\textrm{NV},\theta_\textrm{NV},z_\textrm{NV})$.
\begin{figure}[tb]
\centering
\includegraphics[width=1\linewidth]{YIG-cp.pdf}
\caption{In-plane and cross-sectional spatial plots of the cooperativity for the $\lambda=(7,1,1)$ YIG magnon mode assuming (a) $\alpha_{\textrm{YIG,film}}\approx 1.5\times 10^{-3}$ and (b) $\alpha_{\textrm{YIG,bulk}}\approx 5\times 10^{-5}$. We also plot the cooperativity along the teal lines. The white border delimits the strong-coupling region where ${\mathcal C}_\lambda \geq 1$. The green lines indicate the disk dimensions $d=100$~nm and $R=0.5$~$\mu$m.}
\label{fig2}
\end{figure}
Here we plot the cooperativity for the YIG material, assuming $M_S = 1750$~G and the magnon mode $m=7$ that is resonant with the level transition $\left|0\right\rangle \longleftrightarrow \left|-1\right\rangle$ [see black dot in Fig.~\ref{fig1}(a)]. We use two different values for the low-temperature YIG damping. In Fig.~\ref{fig2}(a) we plot the cooperativity for the current thin-film low-temperature damping $\alpha_{\textrm{YIG,film}}\approx 1.5\times 10^{-3}$~\cite{haidar2015thickness}, for which the strong regime can only be achieved for shallow NV centers placed no more than $50$~nm below the YIG bottom surface.
On the other hand, in Fig.~\ref{fig2}(b) we plot the cooperativity for YIG using the bulk low-temperature damping value $\alpha_{\textrm{YIG,bulk}}\approx 5\times 10^{-5}$~\cite{Tabuchi2014,doi:10.1063/1.5115266}, obtaining ${\mathcal C}_{\textrm{YIG}}\approx 90$ and exceeding the V[TCNE]$_x$ cooperativity by a factor of ${\approx}6$.
\section*{INTRODUCTION}
Quantum correlations play a central role in our understanding of fundamental quantum physics and represent a key resource for quantum technologies~\cite{Schroedinger,Wiseman,Modi,Silva}. Progress in quantum information science has followed an increasingly thorough understanding of how such correlations manifest themselves, and how they can be successfully generated, manipulated and characterised~\cite{Baumgratz}. In quantum optical systems, these correlations appear in either properties of the fields, such as quadrature entanglement, or at the level of individual quanta, {\it i.e.}, photons. Access and control over such correlations are key to applications in quantum information science; however, precisely how the complementary mode and particle correlations act as a resource is still a subject of debate~\cite{OIreview}. The capability to study correlations in a regime traversing mode and particle aspects is thus necessary for clarifying the origin of quantum enhancement.
When a coherent superposition of many photons occupies a single mode, a wave-like description of the quantum state in terms of continuous variables (\textit{i.e.}, the values of the quadratures) of the electromagnetic field is the standard approach~\cite{Braunstein}. The canonical technique for measuring such light fields is strong-field homodyne detection, which directly probes the quadratures of a field and can provide a full reconstruction of its quantum state~\cite{Lvovsky}. On the other hand, particle-like properties can be directly accessed with a range of photodetectors, a notable example being photon-number-resolving detectors (PNRDs)~\cite{Divochiy,Achilles1,Jaha,Calkins}, and suitable techniques for the reconstruction of the photon statistics~\cite{Zambra,Allevi}. However, such photon counters are intrinsically insensitive to phase, and thus cannot access any coherence between modes. Weak-field homodyne (WFH) has been proposed as a measurement technique bridging wave-like and particle-like descriptions~\cite{Wally,Banaszek,Kuzmich,Resch1,Puentes,Laiho,Zhang}. As in standard homodyne, a local oscillator (LO) provides phase sensitivity, while the photon statistics are accessed by number-resolving detectors. The main difference is that WFH makes use of a classical reference whose mean photon number is of a similar order as that of the probed signal; this allows for the combination of the homodyne technique with PNRDs based on photon counting modules.
In this paper, we employ WFH detection to investigate coherence between different photon-number basis states (Fock layers) across two-mode entangled states. Our detection scheme accesses this manifestation of optical coherence directly, without the need for resource-intensive full state tomography. We demonstrate the oscillations of an array of multi-photon coincidence counts when a split single-photon state (SSPS) and a two-mode squeezed state (TMSS) are interfered with a weak local oscillator. Our experiment can be regarded as a first step towards the quantitative study of the nonlocal properties of multi-mode quantum states with multi-outcome measurements by non-Gaussian detectors. To this end, we theoretically study a violation of Bell inequalities that can be achieved for low numbers of detected photons under ideal conditions.
\section*{RESULTS}
{\bf Experiment}. The weak-field homodyne setup is depicted in Fig.~\ref{fig:setup}a. A signal is mixed on a beam splitter with a local oscillator, described by the coherent state $\ket{\alpha e^{i\phi}}$, which establishes a phase reference. One or both output modes from the beam splitter are then detected using photon-number-resolving detectors. The method is termed ``weak-field'' since the intensity of the LO is comparable to that of the signal. This differs from strong-field homodyne in which the coherent state is many orders of magnitude more intense than the signal, and the outputs are detected by linear photodiodes.
To understand how WFH can detect coherence across Fock layers, consider the detection of $m$ photons. These can originate from any Fock term $\ket{k}$ in the signal together with $m{-}k$ photons from the LO; if the complex coefficients of the terms $\{\ket{k}\}$ have well-defined relative phases, then these result in a modulation of the detection probability $p(m)$ as the phase of the local oscillator is varied. For a two-mode state, the relevant phases are the ones between joint detection events coming from terms in the signal of the form $\ket{k_1}\ket{k_2}$. Consequently, coherence across such events can be observed in the modulation of joint detection probabilities.
\begin{figure}[t]
\includegraphics[viewport = -25 220 760 430, clip, width=\columnwidth]{Fig1.pdf}
\caption{{\bf Layout of the weak-field homodyne (WFH) detector.} (a) The general scheme relies on the interference between a signal and a local oscillator (LO) of similar intensity for different settings of the phase $\phi$. Either one or both outputs are measured with photon-number-resolving detectors (PNRDs). (b) Our experimental implementation adopts a collinear configuration in which the signal $\ket{\psi}$ and the LO have orthogonal polarisations. For this reason, the beam splitter (BS) in (a) is replaced by a half-wave plate (HWP) and a polarising beam splitter (PBS) which realise interference in polarisation. The phase setting of the LO is controlled by means of a geometric phase rotator (GPR), which consists of a quarter-wave plate (QWP), a half-wave plate and a second QWP. The rotation of the HWP (with both QWP fixed at $45^\circ$) applies a phase shift to the coherent state. A time-multiplexed detector (TMD) records clicks on the transmitted output mode.}
\label{fig:setup}
\end{figure}
We use the layout shown in Fig.~\ref{fig:setup}b to study the coherence between Fock layers across the two modes of a split single photon and a squeezed vacuum state. To maximise the passive phase stability of our setup, we adopt a compact design in which the LO and the signal are collinear and occupy two orthogonal polarisations of a single spatial mode. Our PNRDs are time-multiplexed detectors (TMDs) that split an incoming beam into two spatial and four temporal modes, thereby resolving up to eight photons probabilistically using two avalanche photodiodes~\cite{Achilles1,Lundeen}. Specifically, the TMDs allow us to decompose the intensity of the interference patterns resulting from each output mode into its constituent photon components. In this way we can probe pair-wise correlations between individual Fock layers as we build up a joint detection statistics matrix, every row and column representing the number of clicks in each detected mode. The click statistics gives us access to higher-order Fock states ($k \geq 1$), although this detection scheme is not fully equivalent to a number-resolving detector~\cite{Sperling}.
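The conversion from incident photon number to TMD clicks can be modelled with the standard uniform-splitting formula. The sketch below assumes a perfectly balanced, lossless eight-bin TMD, an idealisation of the detector described above (it is precisely because of such under-reporting that the TMD is not fully equivalent to a number-resolving detector).

```python
import math

def p_clicks(c, n, bins=8):
    # Probability that n photons, each landing independently in one of `bins`
    # equally likely bins, occupy exactly c distinct bins (i.e., produce c clicks);
    # inclusion-exclusion over the bins left empty
    return math.comb(bins, c) * sum(
        (-1) ** j * math.comb(c, j) * ((c - j) / bins) ** n for j in range(c + 1)
    )
```

For example, two photons give a single click with probability $1/8$, one way in which the TMD under-reports the photon number.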
\begin{figure}[h!]
\includegraphics[viewport = -55 210 790 450, clip, width=\columnwidth]{Fig2.pdf}
\caption{{\bf Probing correlations across optical modes with WFH.} (a) A single photon is produced by a heralded source based on parametric downconversion (PDC). We then split the single-photon state in order to generate $\ket{\Psi}_\textrm{SSPS}$ across two separate spatial modes (A and B), both probed by the WFH detectors described in Fig.~\ref{fig:setup}b. (b) The same downconversion source produces the squeezed vacuum state, whose two modes are also probed by weak-field homodyne detectors.}
\label{fig:experiment}
\end{figure}
A split single-photon state may be written as
\begin{equation}
\label{Eq:SSPS}
\ket{\Psi}_\textrm{SSPS}=\frac{1}{\sqrt{2}}\bigl(\ket{0}_\textrm{A}\ket{1}_\textrm{B}+\ket{1}_\textrm{A}\ket{0}_\textrm{B}\bigr),
\end{equation}
which describes a single photon in a coherent superposition of modes A and B (see Fig.~\ref{fig:experiment}a and Supplementary Fig.~1 for further details on the experimental layout). The results of our investigation for the SSPS are reported in Fig.~\ref{fig:SSPS}.
The red dots show the experimental probabilities $P(m,m')$ of the joint detection of $m$ clicks in mode A and $m'$ clicks in mode B. Each $P(m,m')$ term is a function of the difference between the phase settings of the weak local oscillators, $\phi^{(-)}{=}\phi_\textrm{A}{-}\phi_\textrm{B}$, since a single photon has no phase {\it per se}~\cite{Lvovsky1,Morin}. Our count rates are such that we only consider events where $m,m'\,{\leq2}$; detection outcomes greater than this level are negligible. The blue lines are predictions from a model which accounts for the following imperfections in our experiment: non-unit efficiency of the detectors; modulation of the reflectivities of the beam splitters preceding the time-multiplexed detectors when varying the LO phases (due to the geometric phase rotators depicted in Fig.~\ref{fig:setup}b -- see Methods and Supplementary Fig.~2). We also include imperfections in the production of the single-photon states~\cite{Bartley1}, and we thus write
\begin{equation}
\label{in:state:SSPS}
\rho_0 = w_0\ket{0}\bra{0}+w_1\ket{1}\bra{1}+(1-w_0-w_1)\ket{2}\bra{2},
\end{equation}
where $w_0$ and $w_1$ are coefficients taken between $0$ and $1$ which weight the zero- and one-photon contributions to the input state. We experimentally determine these parameters from the photon statistics of the initial state (see Supplementary Note~1 and Supplementary Note~2).
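The link between the input-state weights and the detected statistics can be illustrated with the standard binomial loss model; this is only the loss part of the analysis, since the full treatment in Supplementary Notes~1 and~2 also includes the TMD response. The weights and efficiency below are the experimentally quoted values, used here purely as example inputs.

```python
import math

def loss_statistics(weights, eta):
    # Photon-number distribution after each photon survives independently
    # with probability eta (binomial loss)
    out = {}
    for n, w in weights.items():
        for k in range(n + 1):
            out[k] = out.get(k, 0.0) + w * math.comb(n, k) * eta ** k * (1 - eta) ** (n - k)
    return out

# rho_0 of the equation above, with the experimentally determined weights
rho0 = {0: 0.161, 1: 0.669, 2: 0.170}
detected = loss_statistics(rho0, eta=0.072)
```

At $\eta\approx 7\%$ the two-photon component is suppressed to $w_2\eta^2\approx 9\times10^{-4}$, so multi-click events are dominated by LO photons.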
\begin{figure}[!!!!!h]
\includegraphics[viewport = -100 0 490 290, clip, width=\columnwidth]{Fig3.pdf}
\caption{{\bf Joint counting statistics for the SSPS.} Correlations between the responses of two WFH detectors as a function of the phase difference $\Phi^{(-)} = \phi^{(-)}/4$ (where $\Phi^{(-)}$ is determined by the settings of the half-wave plates in the phase rotators). We recall that $P(m,m')$ is associated with the joint detection probability of $m$ photons on mode A and $m'$ photons on mode B. The red dots are measured probabilities, with uncertainties determined by a Monte Carlo routine under the assumption of a Poissonian distribution for the raw counts. The blue theoretical curves are obtained from a model which includes the main imperfections in the setup. The dashed curves correspond to an input state with the experimentally determined weights $w_0{=}0.161$ and $w_1{=}0.669$. The uncertainties $\sigma_0{=}0.011$ on $w_0$ and $\sigma_1{=}0.029$ on $w_1$ are estimated by a Monte Carlo routine; the dotted curves show the results given by the theoretical model with $(w_0 - \sigma_0, w_1 - \sigma_1)$ and $(w_0 + \sigma_0, w_1 + \sigma_1)$, respectively. The quantum efficiencies of the detectors, $\eta_{\rm A} = 0.072$ and $\eta_{\rm B} = 0.064$, are experimentally estimated with the Klyshko method~\cite{Klyshko,Achilles2}. The intensities of the two local oscillators are $|\alpha_{\rm A}|{=}0.510$ and $|\alpha_{\rm B}|{=}0.585$.}
\label{fig:SSPS}
\end{figure}
The experimental curves show oscillations in the coincidence counts: these are evident in the $P(1,1)$, $P(2,1)$ and $P(1,2)$ terms, as also predicted by the theory. What is most striking about the observed oscillatory behaviour is the fact that it is displayed by terms $P(m,m')$ for which $m + m'\!>\!1$, yet it is determined by the coherence between the terms $\ket{0}_\textrm{A}\ket{1}_\textrm{B}$ and $\ket{1}_\textrm{A}\ket{0}_\textrm{B}$ (see Eq.\eqref{Eq:SSPS}) which do not contain more than one photon each. Ideally, any additional photon detected must therefore come from the local oscillators, hence the SSPS is responsible for the coherent oscillations observed in the considered joint detection probabilities. In practice, we observe good qualitative agreement between the experimental data and our theoretical description. Indeed, we are able to account for all the main features of a 3-by-3 array of multi-photon counts with one model that has no free parameters: detection efficiencies and the weights in Eq.\eqref{in:state:SSPS} are experimentally determined. We attribute the residual discrepancies to two main factors: imperfect mode matching between the LOs and the signal modes, and variations of the laser power during the few-hour long acquisition.
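The origin of the oscillation can be made explicit with a two-path toy model: a $(1,1)$ coincidence arises either from the signal photon firing detector A while LO$_\textrm{B}$ supplies the photon in B, or vice versa, and the two amplitudes interfere with relative phase $\phi^{(-)}$. The sketch below keeps only this leading order (no losses, higher Fock terms or normalisation), so it is a caricature of the full model of Supplementary Note~1.

```python
import cmath
import math

def p11_toy(phi_a, phi_b, alpha_a=0.5, alpha_b=0.5):
    # Unnormalised joint (1,1) click probability for the split single photon.
    # Path 1: signal photon in A, LO_B provides B's photon -> alpha_b e^{i phi_b}
    # Path 2: signal photon in B, LO_A provides A's photon -> alpha_a e^{i phi_a}
    amp = (alpha_b * cmath.exp(1j * phi_b) + alpha_a * cmath.exp(1j * phi_a)) / math.sqrt(2)
    return abs(amp) ** 2
```

For equal LO amplitudes this gives $P(1,1)\propto 1+\cos\phi^{(-)}$, i.e., full-visibility fringes in the phase difference, consistent with the modulation of the $P(1,1)$ term.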
An analogous experiment is conducted on a two-mode squeezed state (see Fig.~\ref{fig:experiment}b and Supplementary Fig.~2 for further details on the experimental layout). This state is archetypal of a quantum resource with well-known correlations between its Fock layers. The expression for a TMSS reads
\begin{equation}\label{Eq:TMSS}
\ket{\Psi}_\textrm{TMSS}=\sqrt{1-|\lambda|^2}\sum_{n=0}^{+\infty}\lambda^n\ket{n}_\textrm{A}\ket{n}_\textrm{B},
\end{equation}
where the real squeezing parameter $|\lambda|$ governs the photon distribution across the photon-number basis states. Here we pump our source with moderate parametric gain in order to generate significant higher-order Fock layers in the two-mode state~\cite{Mosley}. The joint probabilities $\{P(m,m')\}$ for a two-mode squeezed state were shown to depend on the phase sum $\phi^{(+)}{=}\phi_\textrm{A}{+}\phi_\textrm{B}$~\cite{Blandino}. The TMSS has a phase dependence arising from that of the pump; in a conventional strong homodyne setup this would define which quadratures are squeezed. In our case, the same effect is manifested in the phase dependence of the click patterns; hence, appropriate phase locking was necessary (see Methods).
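The photon statistics implied by Eq.~(\ref{Eq:TMSS}) are geometric in the Fock layers, and only the correlated terms $\ket{n}_\textrm{A}\ket{n}_\textrm{B}$ carry weight. A minimal sketch for the ideal, lossless state follows; the experimentally estimated $|\lambda|$ is used below only as an example value.

```python
def tmss_layer_weights(lam, nmax=60):
    # P(n, n) = (1 - |lambda|^2) |lambda|^{2n}; all off-diagonal joint photon
    # numbers of the ideal TMSS have zero probability
    return [(1.0 - lam ** 2) * lam ** (2 * n) for n in range(nmax)]

def mean_photons_per_mode(lam):
    # Per-mode mean photon number, equal to sinh^2(r) with lam = tanh(r)
    return lam ** 2 / (1.0 - lam ** 2)
```

For $|\lambda|=0.295$ this gives roughly $0.095$ photons per mode, with the $n=2$ Fock layer already suppressed below $1\%$.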
\begin{figure}[!!!!!h]
\includegraphics[viewport= -100 0 490 290, clip, width=\columnwidth]{Fig4.pdf}
\caption{{\bf Joint counting statistics for the TMSS.} Correlations between the responses of two WFH detectors as a function of the phase sum $\Phi^{(+)} = \phi^{(+)}/4$ (where $\Phi^{(+)}$ is determined by the settings of the half-wave plates in the phase rotators). We recall that $P(m,m')$ is associated with the joint detection probability of $m$ photons on mode A and $m'$ photons on mode B. The red dots are measured probabilities, with uncertainties determined by a Monte Carlo routine under the assumption of a Poissonian distribution for the raw counts. The blue theoretical curves are obtained from a model which includes the main imperfections in the setup. The dashed curves correspond to an input state with the experimentally determined squeezing parameter $|\lambda|{=}0.295$ and weight $p{=}0.04$ for the additional noise term. The uncertainty $\sigma{=}0.016$ on $|\lambda|$ is estimated by a Monte Carlo routine; the dotted curves show the results given by the theoretical model with $|\lambda| \pm \sigma$. The experimental quantum efficiencies of the detectors are $\eta_{\rm A} = 0.132$ and $\eta_{\rm B} = 0.155$ (estimated with the Klyshko method). The intensities of the two local oscillators are $|\alpha_{\rm A}|{=}0.365$ and $|\alpha_{\rm B}|{=}0.347$.}
\label{fig:TMSS}
\end{figure}
The results obtained for the two-mode squeezed state are illustrated in Fig.~\ref{fig:TMSS}. Once again, we observe good qualitative agreement between our data and the predictions from a theoretical model that includes the same imperfections as for the single-photon case. Here the input state can be modelled as
\begin{equation}\label{Eq:rhoTMSS}
\rho = (1 - p)\ket{\Psi}_\textrm{TMSS}\bra{\Psi} + p\ket{0,1}\bra{0,1}\,,
\end{equation}
where the extra term is a first-order approximation of noise in a squeezed thermal state. This asymmetry across the modes is justified by the experimentally recorded $g^{(2)}$ for the parametric downconversion source: we find $g^{(2)}_{\textrm{A}} = 1.98 \pm 0.04$ and $g^{(2)}_{\textrm{B}} = 1.92 \pm 0.04$ for the two marginals, to be compared with $g^{(2)} = 2$ for an ideal thermal state. The lower value of $g^{(2)}_{\textrm{B}}$ motivates the addition of the noise term on output mode B. The quoted values of $g^{(2)}$ also suggest that additional (and undesired) Schmidt modes might be responsible for the presence of photons in detected modes that are correlated with undetected ones~\cite{Mosley2}. As for the case of the SSPS, imperfect mode matching between the LOs and the signal modes and variations of the laser power during the data acquisition are recognised as the main causes of the residual departures of the experimental curves from the expected behaviour in Fig.~\ref{fig:TMSS}. \\
As a general remark applying to both classes of studied states, we note that our numerical models depend strongly on the detection efficiencies (see Supplementary Fig.~4 and Supplementary Fig.~5). In this sense, the more pronounced discrepancies in Fig.~\ref{fig:SSPS} and Fig.~\ref{fig:TMSS}, such as that in the $P(0,2)$ term, may be due to noise affecting our estimation of these parameters. More details on the theoretical model for both the SSPS and the TMSS can be found in Supplementary Note~1 and Supplementary Note~2.
\bigskip
{\bf Generalised Bell inequalities for WFH detection.} Correlations such as those revealed in our experiment need to be understood in relation to canonical criteria for non-classicality, for instance the violation of a Bell inequality, to assess their role as possible quantum resources~\cite{Brunner}. The ability to access and discriminate higher photon numbers leads us to refer to generalised, higher-dimensional Bell inequalities~\cite{Collins}, which we study theoretically in the context of our experiment with a TMSS.
To this end, let us consider the scenario depicted in Fig.~\ref{fig:theory}. Each mode of a TMSS is analysed by means of a WFH detector; here all four outputs are monitored by perfect PNRDs. We take into account Fock layers which lead to the detection of $M$ photons on each side: the detected photons are split among the two output ports of the PBSs (\textit{i.e.}, the photons are either transmitted or reflected) according to the convention indicated in Fig.~\ref{fig:theory}a. Specifically, we consider measurements with $D{=}M{+}1$ possible results, distinguished according to the number of photons $\Gamma$ detected on the transmitted arm of each PBS. We are thus interested in the probabilities associated with joint detection events comprising outcomes (on each output arm, on both sides) differing by $\epsilon$ (the outcomes being taken modulo $D$). $\{\alpha,\alpha'\}$ and $\{\beta,\beta'\}$ denote the LO settings on side $A$ and side $B$, respectively. The relevant probabilities are combined into the expression
\begin{widetext}
\begin{equation}\label{CGLMP}
\begin{split}
I_M{=}&\sum_{\epsilon{=}0}^{\bigl[\frac{M+1}{2}\bigr]-1}\Biggl(1-\frac{2\epsilon}{M}\Biggr) \Bigl\{\Bigl[P(\alpha,\beta,\epsilon) + P(\alpha',\beta,-\epsilon-1) + P(\alpha',\beta',\epsilon) + P(\alpha,\beta',-\epsilon)\Bigr] \\
& - \Bigl[P(\alpha,\beta,-\epsilon-1) + P(\alpha',\beta,\epsilon) + P(\alpha',\beta',-\epsilon-1) + P(\alpha,\beta',\epsilon+1)\Bigr]\Bigr\},
\end{split}
\end{equation}
\end{widetext}
where the local realistic bound is $|I_M|\leq 2$~\cite{Collins}. We note that the case $M{=}1$ corresponds to the standard CHSH inequality which was experimentally tested in~\cite{Kuzmich}. Additional details on how to compute $I_M$ for the specific layout that we consider are provided in Supplementary Note~3, Supplementary Fig.~6 and Supplementary Fig.~7.
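For readers who wish to experiment with Eq.~\eqref{CGLMP} outside the WFH context, the following sketch (Python; an idealised qudit scenario with the Fourier-type measurement settings of Collins et al.~\cite{Collins}, not our WFH-on-TMSS layout) evaluates the same combination of probabilities for a maximally entangled pair of $d$-level systems; for $d{=}2$ it reproduces the Tsirelson bound $2\sqrt{2}$ of the CHSH inequality, and for $d{=}3$ the known value $2.8729$.

```python
import numpy as np

def cglmp(d):
    """CGLMP combination I_d for a maximally entangled pair of d-level
    systems, using the Fourier-type settings of Collins et al.
    (an idealised qudit scenario, not the WFH-on-TMSS layout)."""
    js = np.arange(d)
    psi = np.eye(d).flatten() / np.sqrt(d)         # (1/sqrt d) sum_j |j>|j>

    def avec(k, a):                                 # Alice's outcome-k vector
        return np.exp(2j * np.pi * js * (k + a) / d) / np.sqrt(d)

    def bvec(l, b):                                 # Bob's outcome-l vector
        return np.exp(2j * np.pi * js * (-l + b) / d) / np.sqrt(d)

    def P(a, b, eps):                               # Prob(A = B + eps mod d)
        return sum(abs(np.kron(avec((l + eps) % d, a), bvec(l, b)).conj() @ psi) ** 2
                   for l in range(d))

    a1, a2, b1, b2 = 0.0, 0.5, 0.25, -0.25          # optimal settings of Collins et al.
    M = d - 1
    total = 0.0
    for e in range(d // 2):
        w = 1 - 2 * e / M
        total += w * (P(a1, b1, e) + P(a2, b1, -e - 1) + P(a2, b2, e) + P(a1, b2, -e)
                      - P(a1, b1, -e - 1) - P(a2, b1, e) - P(a2, b2, -e - 1) - P(a1, b2, e + 1))
    return total
```

Here $P(a,b,\epsilon)$ denotes the probability that the two outcomes differ by $\epsilon$ modulo $d$, matching the convention used in the text above.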
We run a numerical search for values of $M$ ranging from $2$ to $8$ in order to find the set of parameters $\{\lambda, \alpha, \alpha',\beta,\beta'\}$ which determines the highest violation of these generalised Bell inequalities. Our results are shown in Fig.~\ref{fig:theory}: no violation of Eq.\eqref{CGLMP} is found beyond $M{=}5$. This behaviour comes from the fact that the maximal violation of Eq.\eqref{CGLMP} (\textit{i.e.}, the attainment of the maximum value allowed by quantum mechanics) relies on a particular structure for the entangled state~\cite{Acin}. Notably, a two-mode squeezed state is specified solely by the squeezing parameter $\lambda$: this leaves too few degrees of freedom to bring the input state into the required form. This restriction becomes more severe as the dimensionality of the system increases, to the point that no violation of local realism can be inferred despite the use of an entangled resource such as a TMSS. Hence the decrease in the violation stems from the Gaussian character of the two-mode squeezed state, which fixes the functional shape of the oscillation curves simultaneously for all Fock layers.
\begin{figure}[h!]
\includegraphics[viewport= -80 180 820 520, clip, width=\columnwidth]{Fig5.pdf}
\caption{{\bf Generalised Bell inequalities with WFH.} (a) In the ideal case, the two modes of a TMSS are interfered with two weak local oscillators; PNRDs monitor all four outputs. We fix the number of detected photons to $M$ on each side, and label the various detection events recorded by the PNRDs according to how the $M$ particles distribute themselves on the two output modes: $\Gamma, M\!-\!\Gamma, \Gamma\!+\!\epsilon$ and $M\!-\!\Gamma\!-\!\epsilon$. (b) The plot shows the values taken by the quantity $I_M$ (see Eq.(\ref{CGLMP})) when the number of detected photons $M$ on each side varies between $2$ and $8$. The bound for $I_M$ predicted by local realism is $2$ for all values of $M$, hence we see that no violation of a generalised Bell inequality is possible beyond $M = 5$.}
\label{fig:theory}
\end{figure}
\section*{DISCUSSION}
We have shown how WFH can be used to deconstruct phase-dependent measurements on two-mode entangled states into their constituent Fock layers. The ability to operate devices at the interface of wave and particle regimes opens up new possibilities for quantum information processing~\cite{Kiesel}. Thus, our work provides new insight into such resources within the broader investigations on hybrid continuous/discrete-variable coding~\cite{Alexei,Jonas1,Jonas2,Bellini,Laurat}.
As a complement, we have also studied the theory of the violation of generalised Bell inequalities using an ideal WFH setup. These tests shed light on how the transition from non-Gaussian to Gaussian measurements occurs: in the context of WFH detection, such a transition corresponds to an increase in the LO strength. In fact, this study may also be interpreted in the more general framework of Gaussian vs non-Gaussian measurements, where it is well known that the outcomes of a Gaussian measurement on a Gaussian quantum state can be explained by a local realistic model. For this reason, strong-field homodyne detection on a TMSS cannot be used to violate a Bell inequality~\cite{Grangier,Gilchrist}. On the other hand, WFH is an example of a non-Gaussian measurement, as attested by the non-Gaussian Wigner functions of its POVM elements. Consequently, one might expect that the outcomes of WFH detection on an entangled Gaussian state cannot be described by a local realistic model~\cite{Kuzmich,Bjoerk,Hessmo}; however, our analysis shows that this expectation breaks down for moderately high photon numbers.
On the experimental side, further developments towards the observation of the violation of higher-dimensional Bell inequalities demand detectors with higher quantum efficiency~\cite{Eisaman}. These are necessary not only for achieving significant counting statistics, but also for preventing one Fock layer from being contaminated by higher-order contributions. Encouraging results have been obtained in this direction with cryogenic detectors.
\section*{METHODS}
{\bf Source of quantum states.} A pulsed Ti:Sapphire laser (repetition rate $256$kHz, central wavelength $\lambda_{\rm TiSa} = 830$nm and bandwidth $\Delta\lambda_{\rm TiSa} \simeq 30$nm) is doubled in a nonlinear crystal ($\beta$-barium borate, BBO). The second-harmonic beam ($\lambda_{\rm UV} = 415$nm) pumps a type-II parametric downconversion process in a nonlinear crystal (potassium dihydrogen phosphate, KDP) in order to produce a two-mode vacuum-squeezed state. The source is designed to generate spectrally uncorrelated modes, based on group velocity mismatch inside the birefringent medium~\cite{Mosley}. Daughter photons have orthogonal polarisations and different spectral widths ($\Delta\lambda_{\textrm{V}} \simeq 12$nm, $\Delta\lambda_{\textrm{H}} \simeq 6$nm). The same source generates the split single-photon state and the two-mode squeezed state, depending on the ultra-violet pump beam brightness.
\bigskip
{\bf Weak-field homodyne detection.} The detection system that we adopt in our experiment is realised with a collinear geometry in order to ensure passive phase stability. Each mode in Eqs.~\eqref{Eq:SSPS} and \eqref{Eq:TMSS} is superposed with a local oscillator of orthogonal polarisation at a polarising beam splitter (PBS). This delivers a common spatial mode at the output but orthogonal polarisations for the signal to be probed and the weak coherent beam. Interference is then realised by a half-wave plate and an additional PBS. The relative phase between the horizontal and vertical polarisations can be adjusted by means of a geometric phase rotator (GPR -- see Fig.~\ref{fig:setup}), which is composed of a quarter-wave plate (QWP), a half-wave plate (HWP) and a second QWP. The axis of the first QWP is set to $45^\circ$ in order to transform the input linear polarisations into circular ones. The rotation of the HWP by an angle $\phi/4$ results in a phase shift equal to $\phi$ between left- and right-circular polarisations. The initial linear polarisations are then recovered by setting the second QWP to $45^\circ$. The successful calibration of the full device relies on the characterisation of each element (including the PBS), particularly of the QWPs. When the GPR is correctly calibrated, power fluctuations of around $3\%$ are recorded, while imperfections in the calibration of one of its constituents can lead to fluctuations in power above $10\%$ (see Supplementary Fig.~2). Finally, the output state is analysed with a time-multiplexed detector: this consists of two fibre-based cascaded Mach-Zehnder interferometers that split the incoming state over two spatial modes and four distinct time bins. Time-resolved clicks from avalanche photodiodes monitoring the two transmitted modes are thus registered.
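The action of the geometric phase rotator can be checked with a few lines of Jones calculus. The sketch below is illustrative (wave-plate retardance sign conventions vary between references): it builds QWP($45^\circ$)$\cdot$HWP($\theta$)$\cdot$QWP($45^\circ$) and verifies that the combination leaves the horizontal and vertical polarisations uncoupled while imprinting a relative phase that advances by $4\theta$, consistent with the $\phi/4$ setting of the HWP described above.

```python
import numpy as np

def rot(t):
    """Rotation of the polarisation plane by angle t."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def waveplate(retardance, theta):
    """Jones matrix of a waveplate with the given retardance, fast axis at theta
    (one common sign convention; the relative-phase scaling is what matters)."""
    return rot(theta) @ np.diag([1.0, np.exp(1j * retardance)]) @ rot(-theta)

def gpr(theta):
    """Geometric phase rotator: QWP(45 deg) - HWP(theta) - QWP(45 deg)."""
    qwp = waveplate(np.pi / 2, np.pi / 4)
    hwp = waveplate(np.pi, theta)
    return qwp @ hwp @ qwp

def relative_phase(theta):
    M = gpr(theta)
    # the combination is diagonal: H and V polarisations stay uncoupled
    assert abs(M[0, 1]) < 1e-12 and abs(M[1, 0]) < 1e-12
    return np.angle(M[1, 1] / M[0, 0])
```

Rotating the HWP from $0$ to $\theta$ shifts the relative phase by $4\theta$ (modulo $2\pi$), up to a fixed offset that is absorbed in the calibration.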
\bigskip
{\bf Active phase stabilisation.} To actively lock the phase set by the GPRs, we use an ancillary laser beam (from a continuous-wave HeNe laser, $\lambda' = 633$nm) which back-propagates through the interferometer. The classical interference pattern that we obtain when the ancillary beam reproduces correctly the signal and LO optical paths constitutes the signal recorded by a photodiode connected to a PID device (SRS SIM$960$ Analog PID Controller).
\section*{ACKNOWLEDGEMENTS}
The authors are grateful to J. Spring and A. Eckstein for helping with the detection setup, and thank L. Zhang, B. Smith, M.S. Kim and M. Paternostro for fruitful discussions. This work was supported by the EPSRC (EP/K034480/1,
EP/H03031X/1, EP/H000178/1), the EC project SIQS and the Royal Society. X.-M. J. acknowledges support from the National Natural Science Foundation of China (Grant No. 11374211) and an EC Marie Curie fellowship (PIIF-GA-2011-300820). A. D. is supported by the EPSRC (EP/K04057X/1).
\section*{AUTHOR CONTRIBUTIONS}
G.D. performed the experiments and, with assistance from A.D., M.B. and M.D.V., analysed the data and investigated the application of WFH to generalised Bell inequalities. G.D. and T.J.B., with assistance from M.B., X.-M.J. and M.D.V., designed the experimental layout. T.J.B., M.B. and I.A.W. conceived the experiment. G.D., T.J.B., A.D. and M.B. wrote the paper with contributions from all authors. All authors agree to the contents of the article.
\newpage
\section*{SUPPLEMENTARY INFORMATION}
\begin{figure}[h!]
\includegraphics[viewport= -80 -10 780 500, clip, width=\columnwidth]{Setup1SI.png}
\flushleft{{\bf Supplementary Figure 1: Experimental layout for the study of optical coherence via weak-field homodyne across the modes of a split single-photon state (SSPS).} A blue beam ($\lambda_{\rm UV} = 415$nm) interacts with a Type-II potassium dihydrogen phosphate birefringent crystal to produce frequency-degenerate downconverted photons with orthogonal polarisations at $830$nm. The ultraviolet beam is obtained via second-harmonic generation on a Type-I $\beta$-barium borate crystal pumped by the radiation coming from a pulsed femtosecond Ti:Sapphire laser system (Coherent Mira Seed -- RegA, central wavelength $\lambda_{\rm TiSa} = 830$nm, repetition rate $256$kHz). The vertically polarised downconversion mode is detected by an avalanche photodiode and thus heralds the presence of a signal on the twin mode, which is mixed with a local oscillator on a polarising beam splitter (PBS). The signal and coherent state, now collinear, are then interfered in polarisation on two PBSs preceded by geometric phase rotators which allow us to vary the phase of the local oscillators independently on Alice's and Bob's side. The two transmitted output modes are monitored by two time-multiplexed detectors, while the reflected ones are discarded. A field-programmable gate array processes the recorded coincidence and single counts. BBO: $\beta$-barium borate crystal -- DM: dichroic mirror -- WP: wave plate (yellow, quarter-wave plate; red, half-wave plate) -- PBS: polarising beam splitter -- KDP: potassium dihydrogen phosphate crystal -- APD: avalanche photodiode -- GPR: geometric phase rotator -- TMD: time-multiplexed detector.}
\end{figure}
\newpage
\begin{figure}[h!]
\includegraphics[viewport= -15 60 730 650, clip, width=\columnwidth]{SIFig2.pdf}
\flushleft{{\bf Supplementary Figure 2: Effect of an imperfect geometric phase rotator (GPR) on the marginal distributions for detection events up to two photons.} (a) Theoretical marginal distributions for the split single-photon state. (b) Experimental marginal distributions for the SSPS when the GPR on Bob's side is moved. (c) Typical calibration curve for a geometric phase rotator. Theoretically, no term in the marginal counting statistics should oscillate when varying the phase of the local oscillator (see plot (a)). However, the experimental data (see graph (b)) reveal the presence of oscillations in the recorded counts on the mode where the geometric phase rotator is active. Analogous results are found for the TMSS. These plots suggest that a non-ideal GPR can cause spurious oscillations in the marginals, that is, the distributions of detection events when we trace over one mode of the full two-mode output state. The calibration curve (see plot (c)) is obtained by interfering the vacuum with a local oscillator and recording counts on one detector as the GPR is moved. We note that the visibility of the oscillations in the LO counts is about $10\%$ for the $1$-click term. If we compare this value with the visibilities of the oscillations in plot (b), we see that it falls between the $1$-click term ($V \simeq 6\%$) and the $2$-click term ($V > 10\%$, although in this case the absolute values are of the order of $10^{-4}$ counts $\cdot$ s$^{-1}$) associated with the experimental marginal distributions. These observations support the addition of a modulation of the reflectivity in our theoretical models for the split single photon and the two-mode squeezed state.}
\end{figure}
\newpage
\begin{figure}[h!]
\includegraphics[viewport= -100 -20 780 600, clip, width=\columnwidth]{Setup2SI.png}
\flushleft{{\bf Supplementary Figure 3: Experimental layout for the study of optical coherence via weak-field homodyne across the modes of a two-mode squeezed state (TMSS).} The setup for the generation of two-mode squeezed vacuum states exhibits a few modifications with respect to that illustrated in Supplementary Fig.~1. In this case we interfere both downconversion modes with a local oscillator (LO); indistinguishability in polarisation is achieved thanks to a half-wave plate and a polarising beam splitter on each output mode. Once again, the geometric phase rotators allow us to vary the phases of the local oscillators. This layout requires active phase stabilisation to ensure that the phase difference between each initial LO phase and the phase of the two-mode squeezed state is kept fixed. Our active phase locking relies on an ancillary laser beam from a continuous-wave HeNe laser ($\lambda' = 633$nm) which back-propagates through the interferometer. The signal produced by the classical interference pattern -- which is obtained when the ancillary beam reproduces correctly both optical paths in the interferometer -- is recorded by a photodiode connected to a PID device (SRS SIM$960$ Analog PID Controller) whose electronic signal is then fed to a piezo delay stage in the LO path (indicated by the gray dashed box in the diagram). BBO: $\beta$-barium borate crystal -- PBS: polarising beam splitter -- HWP: half-wave plate -- KDP: potassium dihydrogen phosphate crystal -- GPR: geometric phase rotator -- HeNe: helium-neon diode laser -- PD: photodiode -- PID: Proportional-Integral-Derivative device.}
\end{figure}
\newpage
\begin{figure}[h!]
\includegraphics[viewport= -150 -10 550 320, clip, width=\columnwidth]{SSPSthSI.pdf}
\flushleft{{\bf Supplementary Figure 4: Theoretical joint counting statistics for different detection efficiencies in the case of a split single-photon state.} To illustrate the effect of imperfect detection on the observed nonclassical oscillations, we plot the array of multi-photon coincidence counts while varying the values of the detection efficiencies. Green curves correspond to $\eta = 0.05$, cyan to $\eta = 0.1$ and blue to $\eta = 0.2$; we assume the same efficiencies on both time-multiplexed detectors and use the experimentally determined values of the coefficients $w_0$ and $w_1$. We see how the parameter $\eta$ changes both the level of detected signal and, most importantly, the visibility of the oscillations.}
\end{figure}
\begin{figure}[h!]
\includegraphics[viewport= -150 -10 540 300, clip, width=\columnwidth]{TMSSthSI.pdf}
\flushleft{{\bf Supplementary Figure 5: Theoretical joint counting statistics for different detection efficiencies in the case of a two-mode squeezed state.} We illustrate the effect of imperfect detection on the observed oscillations by varying the values of the detection efficiencies. Green curves correspond to $\eta = 0.05$, cyan to $\eta = 0.1$ and blue to $\eta = 0.2$; we assume the same efficiencies on both TMDs and use the experimentally determined values of the squeezing parameter $|\lambda|$ and weight $p$. As in the case of the SSPS, the parameter $\eta$ determines dramatic changes in the visibility of the nonclassical oscillations.}
\end{figure}
\newpage
\begin{figure}[h!]
\includegraphics[viewport= -100 -20 680 480, clip, width=\columnwidth]{SIFig6.png}
\flushleft{{\bf Supplementary Figure 6: Study of Gaussian entanglement with weak-field homodyne (WFH) detection.} This is the ideal layout for the investigation of the nonlocal character of Gaussian entanglement via WFH. A two-mode squeezed state, an archetypal Gaussian resource, is interfered with a weak coherent state on two PBSs; photon-number-resolving detectors D$_1$, E$_1$, D$_2$, E$_2$ monitor all four outputs (\textit{i.e.}, we do not discard the reflected output modes here). If we fix the number $M$ of photons detected by each WFH detector, we can label the recorded detection events according to how $M$ particles distribute themselves on the transmitted and reflected output modes: namely, these are the quantities \, $\Gamma, M\!-\!\Gamma, \Gamma\!+\!\epsilon$ and $M\!-\!\Gamma\!-\!\epsilon$ \, shown in the diagram.}
\end{figure}
\newpage
\begin{figure}[!h]
\begin{center}
\includegraphics[viewport= -70 0 800 700, clip, width=\columnwidth]{SIFig7new.pdf}
\flushleft{{\bf Supplementary Figure 7: Probabilities of four-photon detection events as functions of the phase of the local oscillator.} The graphs show the behaviour of the expectation values of the projection operators on the considered four-photon states, \textit{i.e.}, the set $\{\expect{P_{kk^{\prime}}}\}$ for $k,k^{\prime} \leq 2$ (see Supplementary Note~3): here we take into account the multiplicative factor $\sqrt{1-|\lambda|^2}e^{-|\alpha|^2/2}e^{-|\beta|^2/2}$ neglected in the expression for $\ket{\Psi}_{\textrm{out}}$ (see Eq.\eqref{output:state}). The phase on the horizontal axis is that of either one of the local oscillators, $\phi_a$ or $\phi_b$; we set $\tilde{\theta} = 0$ for the two-mode squeezed state and $\phi_b = \phi_a + \pi/4$, relabelling $\phi_a = \phi$. Physically plausible values -- under the assumption of ideal detectors, \textit{i.e.}, $\eta = 1$ for all PNRDs -- are assigned to the other variables so that the considered functions depend solely on one phase parameter: we have $|\lambda| = 0.3$ and $|\alpha| = |\beta| = 0.131$. We also assume the PBSs to be balanced, therefore $t_i = r_j = 1/\sqrt{2}$ for $i, j = 1, 2$.}
\end{center}
\end{figure}
\newpage
\section*{Supplementary Note 1: Detection probabilities for the weak-field homodyne}
We describe the expected response of a weak-field homodyne (WFH) detector to a generic quantum state. For this purpose, we study the interference of a Fock state $\ket{n}$ with a coherent state $\ket{\alpha}$ on a beam splitter (BS) with transmittivity $t$ and reflectivity $r=\sqrt{1-t^2}$. We follow the asymmetric convention for the beam splitter input-output relations, so that the modes are transformed as $\hat{a}^{\dag}_{\textrm{in}} \rightarrow t\hat{a}^{\dag}_{\textrm{out}} + r\hat{b}^{\dag}_{\textrm{out}}$, and $\hat{b}^{\dag}_{\textrm{in}} \rightarrow t\hat{b}^{\dag}_{\textrm{out}} - r\hat{a}^{\dag}_{\textrm{out}}$. From this, it follows that the operators which generate coherent and Fock states transform as $\hat{D}_{\textrm{in}}(\alpha) \rightarrow \hat{D}_{\textrm{out}2}(t\alpha)\hat{D}_{\textrm{out}1}(-r\alpha)$ and $(\hat{a}^{\dag}_{\textrm{in}})^n \rightarrow \sum_{k=0}^{n}\binom{n}{k}t^{n-k} r^{k}(\hat{a}^{\dag}_{\textrm{out}1})^{n-k}(\hat{a}^{\dag}_{\textrm{out}2})^k$, respectively. Therefore, given the initial state
\begin{equation}
\ket{\Psi}_{\textrm{in}} = \ket{\alpha}\ket{n} = \hat{D}_{\textrm{in}}(\alpha)(\hat{a}^{\dag}_{\textrm{in}})^{n}\ket{0}\ket{0},
\end{equation}
this undergoes the above-described beam splitter transformations and becomes
\begin{equation}\label{App:tot:state}
\ket{\Psi}_{\textrm{out}} = \sum_{k=0}^{n}\binom{n}{k} t^{n-k}r^{k} \hat{D}_{\textrm{out}2}(t\alpha)\hat{D}_{\textrm{out}1}(-r\alpha)(\hat{a}^{\dag}_{\textrm{out}1})^{n-k}(\hat{a}^{\dag}_{\textrm{out}2})^{k}\ket{0}\ket{0}.
\end{equation}
We introduce the short-hand notation $\ket{\alpha;n} := \hat{D}(\alpha)(\hat{a}^{\dag})^{n}\ket{0}$ for the (unnormalised) displaced Fock state; we note that the squared norm of $\ket{\alpha;n}$ is $n!$. For a given displacement $\alpha$, these states form an orthogonal set, $\braket{\alpha;n}{\alpha;m} = n!\,\delta_{nm}$, as the displacement preserves the scalar product. This notation allows us to express the total output state in Eq.(\ref{App:tot:state}) as
\begin{equation}
\ket{\Psi}_{\textrm{out}} = \sum_{k=0}^{n} \binom{n}{k} t^{n-k}r^{k} \ket{t\alpha;n-k}\ket{{-}r\alpha;k}.
\end{equation}
As a consequence, an input state whose expression in the Fock basis is $\sum_{n=0}^{\infty}c_n \ket{n}$ transforms as
\begin{equation}
\ket{\Psi}_{\textrm{out}} = \sum_{k=0}^{+\infty}\sum_{n=k}^{+\infty} c_{n} \binom{n}{k} t^{n-k}r^{k} \ket{t\alpha;n-k}\ket{-r\alpha;k},
\end{equation}
where we have inverted the order of the summations over $n$ and $k$. Our experimental detection scheme does not record the outcomes on mode $\hat{b}_{\textrm{out}}$, which corresponds to tracing over this mode. This leads to the reduced density operator
\begin{equation}
\rho_{\textrm{out}} = \sum_{k=0}^{+\infty} r^{2k}k! \, \ket{\Phi}_{k}\bra{\Phi}\,, \qquad \ket{\Phi}_{k} = \sum_{n=k}^{+\infty} c_n \binom{n}{k} t^{n-k}\ket{t\alpha;n-k}\,.
\end{equation}
The photon statistics are then given by the probability distribution
\begin{equation}\label{App:prob:distr}
\varphi(m) = \sum_{k=0}^{+\infty} r^{2k}k! \Biggl|\sum_{n=k}^{+\infty} c_{n} \binom{n}{k} t^{n-k} \braket{m}{t\alpha;n-k}\Biggr|^2\,.
\end{equation}
In order to express $\varphi(m)$ more explicitly, we resort to the relation
\begin{equation}\label{App:utility}
\hat{D}(\alpha)(\hat{a}^{\dag})^{n}\ket{0} = (\hat{a}^{\dag}{-}\alpha^{\ast})^{n}\hat{D}(\alpha)\ket{0}\,,
\end{equation}
which is verified by virtue of the unitarity of the displacement operator $\hat{D}$, along with the relation $\hat{D}^{\dag}(\alpha)\hat{a}^{\dag}\hat{D}(\alpha) = \hat{a}^{\dag} + \alpha^{\ast}$. \\
Consequently, the inner product $\braket{m}{\alpha;n}$ which appears in Eq.\eqref{App:prob:distr} can be written as
\begin{equation}
\begin{split}
\braket{m}{\alpha;n} &= \bra{m}\sum_{\kappa=0}^{n} \binom{n}{\kappa} (\hat{a}^{\dag})^{\kappa}(-\alpha^{\ast})^{n-\kappa} \ket{\alpha} = \\ &= \sum_{\kappa=0}^{\min(m,n)} \binom{n}{\kappa} \frac{\sqrt{m!}}{(m-\kappa)!}(-1)^{n-\kappa}|\alpha|^{m+n-2\kappa}e^{i(m-n)\phi}e^{-|\alpha|^2/2},
\end{split}
\end{equation}
where $\alpha = |\alpha|e^{i\phi}$. \\
Finally, we have to take into account the response of the time-multiplexed detector (TMD). This is characterised by its non-unit detection efficiency $\eta$, which we model as a beam splitter with reflectivity $\eta$, and draws its photon-number resolution from temporal binning. These two aspects -- loss and binned photon counting -- are accounted for by two matrices: $L(\eta)$ models photon loss, while $C$ describes the binning operation~\cite{Achilles1,Zhang,Worsley}. If we denote by ${\bf f} = \{\varphi(0),\varphi(1),\varphi(2),...\}$ the vector of photon statistics and by ${\bf p} = \{p(0),p(1),p(2),...\}$ the vector of counting statistics, the following relation holds:
\begin{equation}
{\bf p} = C\cdot L(\eta) \cdot {\bf f}.
\end{equation}
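The two matrices can be written down explicitly: $L(\eta)_{m,n} = \binom{n}{m}\eta^m(1-\eta)^{n-m}$ for binomial loss, and an inclusion-exclusion expression for the probability that $n$ photons spread over the bins fire exactly $k$ of them. The sketch below uses a uniformly splitting model with $N$ bins, which is an illustrative simplification of the actual TMD network; the truncation and the thermal input are likewise toy assumptions.

```python
import numpy as np
from math import comb

N_BINS = 8     # bins of an idealised, uniformly splitting TMD (illustrative)
N_MAX = 12     # photon-number truncation (illustrative)

def loss_matrix(eta, n_max=N_MAX):
    """L[m, n]: probability that m of n incident photons survive loss eta."""
    L = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        for m in range(n + 1):
            L[m, n] = comb(n, m) * eta ** m * (1 - eta) ** (n - m)
    return L

def binning_matrix(n_bins=N_BINS, n_max=N_MAX):
    """C[k, n]: probability that n photons, spread uniformly over n_bins,
    produce clicks in exactly k distinct bins (inclusion-exclusion)."""
    C = np.zeros((n_bins + 1, n_max + 1))
    for n in range(n_max + 1):
        for k in range(n_bins + 1):
            C[k, n] = comb(n_bins, k) * sum(
                (-1) ** j * comb(k, j) * ((k - j) / n_bins) ** n
                for j in range(k + 1))
    return C

# forward model: counting statistics from (toy) thermal photon statistics
mu = 0.3                                        # mean photon number (toy value)
f = (1 / (1 + mu)) * (mu / (1 + mu)) ** np.arange(N_MAX + 1)
p = binning_matrix() @ loss_matrix(0.15) @ f    # p = C . L(eta) . f
```

Both matrices are column-stochastic, so the forward model conserves total probability.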
This single-mode expression can be easily generalised to our two-mode layout. The joint photon statistics for two WFH detectors with settings $\{r,t,\alpha\}$ and $\{r',t',\alpha'\}$, observing the state $\sum_{n,n'=0}^{+\infty}c_{n,n'} \ket{n}\ket{n'}$, are given by
\begin{equation}
\Phi(m,m^{\prime}) = \sum_{k=0}^{+\infty}\sum_{k^{\prime}=0}^{+\infty} k!k^{\prime}!\,r^{2k}(r^{\prime})^{2k^{\prime}} \Biggl|\sum_{n=k}^{+\infty}\sum_{n^{\prime}=k^{\prime}}^{+\infty} c_{nn^{\prime}} \binom{n}{k}\binom{n^{\prime}}{k^{\prime}} t^{n-k}(t^{\prime})^{n^{\prime}-k^{\prime}} \braket{m}{t\alpha;n-k}\braket{m^{\prime}}{t^{\prime}\alpha^{\prime};n^{\prime}-k^{\prime}}\Biggr|^2\,.
\end{equation}
Loss on each mode -- described by the efficiencies $\eta_1$ and $\eta_2$, respectively -- and temporal binning can be taken into account by casting the probabilities $\Phi(m,m')$ into a matrix $\underline{\underline F}$, which is related to the matrix of the joint detection probabilities $\underline{\underline{P}}$ through~\cite{Worsley}
\begin{equation}
\label{theory:P}
\underline{\underline{P}} = C_1 \cdot L(\eta_1) \cdot\underline{\underline{F}} \cdot L(\eta_2)^{{\rm T}} \cdot C_{2}^{{\rm T}}\,.
\end{equation}
\section*{Supplementary Note 2: Theoretical model of the experiment}
We adopt a collinear geometry for our WFH detectors in order to simplify the experimental scheme and ensure better passive phase stability (see Supplementary Fig.~1 and Supplementary Fig.~3). The relative phase between signal and local oscillator is varied by means of birefringent elements arranged in a geometric phase rotator (GPR). However, imperfections in the individual wave plates and calibration inaccuracies can introduce an undesired modulation of the reflectivity associated with the polarising beam splitter in each WFH detector. We thus perform a calibration of both GPRs in the presence of the local oscillator only. A typical plot of the dependence of counts on the setting of the GPR is shown in Supplementary Fig.~2. We account for this spurious effect by expressing the reflectivity as $r(\Theta)=r_0\sqrt{1+v \cos(4\Theta+\theta_0)}$, where $r_0 = 0.5$ is the ideal reflectivity, $\Theta$ is the setting of the half-wave plate in the GPR, $\theta_0$ is an offset angle, and the modulation depth $v$ is expected to be of the order of $0.1$ for small deviations from the perfect case. This relation for $r(\Theta)$ is included in Eq.\eqref{theory:P} for the expected joint detection probabilities.
\medskip
\noindent{\it Modelling a split single-photon state (SSPS).} We want to compare our experimentally generated split single photon to a theoretical signal. It is reasonable to assume that the SSPS exhibits some degree of mixedness~\cite{Bartley1}, therefore we take into account imperfect single-photon generation by including contributions from the vacuum and higher-order terms from the downconversion process (given the brightness of our source) in the expression of our state:
\begin{equation}\label{SI:rhoSSPS}
\rho_0=w_0\ket{0}\bra{0}+w_1\ket{1}\bra{1}+(1-w_0-w_1)\ket{2}\bra{2}.
\end{equation}
This state produces an entangled resource when it is split on a symmetric beam splitter. The corresponding counting statistics read
\begin{equation}\label{SI:SSPSCntsMatrix}
\underline{\underline{P}}=w_0\underline{\underline{P_0}}+w_1\underline{\underline{P_1}}+(1-w_0-w_1)\underline{\underline{P_2}},
\end{equation}
where the matrix $\underline{\underline{P_k}}$ describes the statistics associated with the entangled state obtained from Fock layer $\ket{k}$.
The coefficients $w_0$ and $w_1$, along with the detection efficiencies $\eta_{\rm A}$ and $\eta_{\rm B}$, are the key parameters which appear in our model. The efficiencies are determined from the counting statistics of the experimental input state when no local oscillator impinges on the PBS, following the Klyshko calibration method. In order to compute $w_0$ and $w_1$, we suitably invert Eq.(\ref{theory:P}) and obtain the photon statistics matrix of the source, which we label $\underline{\underline{F}}^{\textrm{PDC}}$; we thus estimate $w_0 = \underline{\underline{F}}^{\textrm{PDC}}(0,0)$ and $w_1 = \underline{\underline{F}}^{\textrm{PDC}}(1,0) + \underline{\underline{F}}^{\textrm{PDC}}(0,1)$.
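The inversion step can be illustrated in a single-mode toy model: truncating the photon number at the bin number makes $C\cdot L(\eta)$ a square, triangular (hence invertible) matrix, so the photon statistics follow from a linear solve. The numbers below are hypothetical, not our data.

```python
import numpy as np
from math import comb

N = 4                         # bins; truncate the photon number at N (toy model)
ETA = 0.15                    # detection efficiency (toy value)

# L[m, n]: binomial loss; C[k, n]: k distinct bins fire out of N
L = np.array([[comb(n, m) * ETA ** m * (1 - ETA) ** (n - m) if m <= n else 0.0
               for n in range(N + 1)] for m in range(N + 1)])
C = np.array([[comb(N, k) * sum((-1) ** j * comb(k, j) * ((k - j) / N) ** n
                                for j in range(k + 1))
               for n in range(N + 1)] for k in range(N + 1)])

f_true = np.array([0.45, 0.35, 0.15, 0.04, 0.01])   # toy photon statistics
p = C @ L @ f_true                                   # forward model: counts
f_rec = np.linalg.solve(C @ L, p)                    # invert the detector response
```

In practice (unbounded photon number, statistical noise on the counts) a pseudo-inverse or maximum-likelihood reconstruction replaces the exact solve.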
In Supplementary Fig.~4 we show the theoretical curves for the joint counting statistics matrix $\underline{\underline{P}}$ up to two detection events on each output mode. These plots inform us about the main features of the physical phenomenon which we probe with our WFH detectors: coherent oscillations across Fock layers are present in all terms but those where at least one side detects no photons. Moreover, the oscillations are shown to be in phase with one another. As a further investigation of the role of loss in our scheme (which we have already taken into account by using the experimental detection efficiencies in our model), Supplementary Fig.~4 includes three distinct detection scenarios given by different values of $\eta_{\rm A}$ and $\eta_{\rm B}$; poor detection efficiency appears mainly to degrade the visibility of the studied nonclassical oscillations.
\medskip
\noindent{\it Modelling a two-mode squeezed state (TMSS).}
The expression for an ideal two-mode squeezed vacuum state reads
\begin{equation}\label{SI:TMSS}
\ket{\Psi}_\textrm{TMSS}=\sqrt{1-|\lambda|^2}\sum_{n=0}^{+\infty}\lambda^n\ket{n}_\textrm{A}\ket{n}_\textrm{B},
\end{equation}
where $|\lambda|$ is the real squeezing parameter determined by the brightness of the parametric downconversion source. However, a more realistic expression for the experimentally produced state is
\begin{equation}\label{SI:rhoTMSS}
\rho = (1-p)\ket{\Psi}_\textrm{TMSS}\bra{\Psi}+p\ket{0,1}\bra{0,1},
\end{equation}
where the term $\ket{0,1}\bra{0,1}$ is a first-order approximation of the noise in a squeezed thermal state. Indeed, the presence of such a contribution is suggested by the experimental $g^{(2)}$ of the source, whose values highlight two features of our squeezer: an asymmetry across the two modes, and the presence of unwanted frequency modes in the generated two-mode squeezed state. Both effects are captured by the additional term in Eq.(\ref{SI:rhoTMSS}).
In this case, the relevant quantities for the theoretical model are the squeezing parameter and the weight $p$. The former can be computed from the photon statistics of the downconversion source; manual inspection of the experimental and theoretical plots suggests the value $p = 0.04$ for the coefficient in the expression of $\rho$.
The theoretical curves for the joint counting statistics matrix $\underline{\underline{P}}$ (up to two detection events) for the TMSS are presented in Supplementary Fig.~5. Similarly to the case of a split single photon, the model predicts coherent, in-phase oscillations across the Fock layers for all terms but those where at least one side detects vacuum. Once again, the effect of loss is a reduction of the visibility of such oscillations.
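For reference, the photon-number statistics implied by the noisy model of Eq.(\ref{SI:rhoTMSS}) can be tabulated directly. The sketch below is our own illustrative code, taking $\lambda$ real and an arbitrary truncation; it reproduces the geometric, perfectly correlated TMSS distribution plus the $\ket{0,1}$ noise term:

```python
import numpy as np

def tmss_joint_statistics(lam, p, nmax):
    """Truncated joint photon-number distribution of the noisy TMSS model:
    (1-p) * ideal TMSS (geometric, perfectly correlated) + p * |0,1><0,1|."""
    F = np.zeros((nmax + 1, nmax + 1))
    for n in range(nmax + 1):
        F[n, n] = (1 - p) * (1 - lam**2) * lam ** (2 * n)   # ideal diagonal
    F[0, 1] += p                                            # noise term
    return F

# Illustrative parameters: p = 0.04 as quoted above, lam chosen arbitrarily.
F = tmss_joint_statistics(lam=0.3, p=0.04, nmax=5)
```

The truncated matrix sums to unity up to the weight of the discarded high-photon tail.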
\bigskip
As already pointed out in the main text, we remark that our theoretical models for the SSPS and the TMSS share some common limitations. Notably, imperfect mode matching affects the interference between the probed state and the classical reference, and is therefore likely to act adversely on the joint detection statistics as well. Nevertheless, our descriptions have the advantage of relying on very few parameters, most of which are estimated directly from the collected data. Each model agrees with the data for all considered terms in the joint counting statistics simultaneously, which means that a single description and set of parameters captures the behaviour of all multi-photon components in a given output state.
\section*{Supplementary Note 3: Nonlocality of a two-mode squeezed state detected with weak-field homodyne}
In this section we provide details on the application of the Collins-Gisin-Linden-Massar-Popescu (CGLMP) inequalities~\cite{Collins} to WFH. Following the notation introduced in Supplementary Fig.~6, we illustrate the case in which the number of detected photons on each side is $M = 2$, {\it i.e.}, we restrict our study to local subsystems with dimension $D = M+1 = 3$. The generalisation to higher photon numbers proceeds in a similar fashion. Let us consider the two-mode quantum state $\ket{\Psi}_{\rm TMSS}$, distributed across modes $\hat{a}^\dag_{\rm s},\hat{a}^\dag_{\rm i}$, and the two local oscillators $\ket{\alpha}$ and $\ket{\beta}$ on modes $\hat{b}^\dag_1$ and $\hat{b}^\dag_2$ respectively. As we focus on the detection of low photon numbers ($M = 2$), we write the input state as
\begin{equation}
\begin{split}
\ket{\Psi}_{\rm in} = \Biggl(\frac{\alpha^2}{2}\frac{\beta^2}{2}(\hat{b}^{\dag}_{1})^{2}(\hat{b}^{\dag}_{2})^{2} + \frac{\lambda^2}{2}(\hat{a}^{\dag}_{\rm s})^{2}(\hat{a}^{\dag}_{\rm i})^{2} + \lambda\alpha\beta\hat{a}^{\dag}_{\rm s}\hat{a}^{\dag}_{\rm i}\hat{b}^{\dag}_{1}\hat{b}^{\dag}_{2} \Biggr)\ket{0}_{\rm s}\ket{0}_{\rm i}\ket{0}_{1}\ket{0}_{2},
\end{split}
\end{equation}
where $\alpha = |\alpha|e^{\imath \phi_a}$, $\beta = |\beta|e^{\imath \phi_b}$ and $\lambda = |\lambda|e^{\imath \tilde{\theta}}$. \\
The BS of each WFH detector operates the following transformation from the input modes into output modes $\{\hat{d}_{1}^{\dag},\hat{e}_{1}^{\dag},\hat{d}_{2}^{\dag},\hat{e}_{2}^{\dag}\}$:
\begin{equation}\label{I:O:BS}
\left\{\begin{array}{l} \hat{a}_{\rm s}^{\dag} = t_{1} \hat{d}_{1}^{\dag} + r_{1}^{\ast} \hat{e}_{1}^{\dag}\,, \\ \hat{b}_{1}^{\dag} = r_{1} \hat{d}_{1}^{\dag} - t_{1} \hat{e}_{1}^{\dag}\,, \\ \\ \hat{a}_{\rm i}^{\dag} = t_{2} \hat{d}_{2}^{\dag} + r_{2}^{\ast} \hat{e}_{2}^{\dag}\,, \\ \hat{b}_{2}^{\dag} = r_{2} \hat{d}_{2}^{\dag} - t_{2} \hat{e}_{2}^{\dag}\;,
\end{array} \right.
\end{equation}
where we assume that $t_{j} \in \mathbb{R}$ and $r_{j} = -\imath|r_{j}|$ for $j = 1, 2$. \\
These relations lead to the expression for the output state $\ket{\Psi}_{\rm out}$ in the Fock basis (up to a normalisation factor):
\begin{equation}\label{output:state}
\begin{split}
\ket{\Psi}_{\rm out} &= \Bigl(\frac{\alpha^2 \beta^2}{4}r_{1}^{2}r_{2}^{2} + \frac{\lambda^2}{2}t_{1}^{2}t_{2}^{2} + \lambda\alpha\beta t_{1}t_{2}r_{1}r_{2}\Bigr)\ket{2200} + \Bigl(\frac{\alpha^2 \beta^2}{4}t_{1}^{2}t_{2}^{2} + \frac{\lambda^2}{2}(r_{1}^{\ast})^{2}(r_{2}^{\ast})^{2} + \lambda\alpha\beta t_{1}t_{2}r_{1}^{\ast}r_{2}^{\ast}\Bigr)\ket{0022} \; + \\ &\quad + \Bigl(\frac{\alpha^2 \beta^2}{4}r_{1}^{2}t_{2}^{2} + \frac{\lambda^2}{2}t_{1}^{2}(r_{2}^{\ast})^{2} - \lambda\alpha\beta t_{1}t_{2}r_{1}r_{2}^{\ast}\Bigr)\ket{2002} + \Bigl(\frac{\alpha^2 \beta^2}{4}t_{1}^{2}r_{2}^{2} + \frac{\lambda^2}{2}(r_{1}^{\ast})^{2}t_{2}^{2} - \lambda\alpha\beta t_{1}t_{2}r_{1}^{\ast}r_{2}\Bigr)\ket{0220} \; +
\\ &\quad + \bigl[\alpha^2 \beta^2 t_{1}t_{2}r_{1}r_{2} + 2\lambda^2 t_{1}t_{2}r_{1}^{\ast}r_{2}^{\ast} - \lambda\alpha\beta(t_{2}^{2}|r_1|^2 - |r_1|^2|r_2|^2 + t_{1}^{2}|r_2|^2 - t_{1}^{2}t_{2}^{2})\bigr]\ket{1111} \; + \\ &\quad + \Bigl[-\frac{\alpha^2 \beta^2}{2}t_{2}r_{1}^{2}r_{2} + \lambda^{2}t_{1}^{2}t_{2}r_{2}^{\ast} - \lambda\alpha\beta (t_{1}t_{2}^{2}r_{1} - t_{1}r_{1}|r_2|^2)\Bigr]\ket{2101} \; + \\ &\quad + \Bigl[-\frac{\alpha^2 \beta^2}{2}t_{1}r_{1}r_{2}^{2} + \lambda^{2}t_{1}t_{2}^{2}r_{1}^{\ast} - \lambda\alpha\beta(t_{1}^{2}t_{2}r_{2} - t_{2}|r_1|^{2}r_{2})\Bigr]\ket{1210} \; + \\ &\quad + \Bigl[-\frac{\alpha^2 \beta^2}{2}t_{1}^{2}t_{2}r_{2} + \lambda^{2}t_{2}(r_{1}^{\ast})^{2}r_{2}^{\ast} - \lambda\alpha\beta(t_{1}r_{1}^{\ast}|r_2|^2 - t_{1}t_{2}^{2}r_{1}^{\ast})\Bigr]\ket{0121} \; + \\ &\quad + \Bigl[-\frac{\alpha^2 \beta^2}{2}t_{1}t_{2}^{2}r_{1} + \lambda^{2}t_{1}r_{1}^{\ast}(r_{2}^{\ast})^{2} + \lambda\alpha\beta(t_{1}^{2}t_{2}r_{2}^{\ast} - t_{2}|r_1|^{2}r_{2}^{\ast})\Bigr]\ket{1012}.
\end{split}
\end{equation}
Here we use the convention $\ket{0000} = \ket{0}_{1{\rm t}}\ket{0}_{2{\rm t}}\ket{0}_{1{\rm r}}\ket{0}_{2{\rm r}}$, and similarly for the other number states (`t' and `r' denoting transmitted and reflected output modes, respectively).
In the work of Collins and collaborators, the local-realistic bound is calculated on a linear combination of detection probabilities associated with local measurement settings -- in our case these are the complex parameters $\alpha$ and $\beta$. As introduced in~\cite{Collins}, we adopt the shorthand notation $P(A_i = B_j + \epsilon)$ to indicate the probability of an event for which the detection outcomes $A_i$ and $B_j$ differ by $\epsilon = -1, 0, 1$ (the outcome sets $\{-1,0,1\}$ and $\{0,1,2\}$ being congruent modulo $3$). Therefore, the CGLMP inequality adapted to our layout reads
\begin{equation*}
|I_3| \leq 2\:,
\end{equation*}
where
\begin{equation}\label{I3}
\begin{split}
I_3 = &[P(A_1 = B_1) + P(B_1 = A_2 + 1) + P(A_2 = B_2) + P(B_2 = A_1)] - \\ &- [P(A_1 = B_1 - 1) + P(B_1 = A_2) + P(A_2 = B_2 - 1) + P(B_2 = A_1 - 1)].
\end{split}
\end{equation}
According to the notation introduced above, each term in Eq.\eqref{I3} is computed as follows:
\begin{equation}\label{probs:outcomes}
\begin{split}
&P(A_1 = B_1) = \expect{\hat{P}_{00}}(\alpha_{1},\beta_{1}) + \expect{\hat{P}_{11}}(\alpha_{1},\beta_{1}) + \expect{\hat{P}_{22}}(\alpha_{1},\beta_{1}), \\
&P(B_1 = A_2 + 1) = \expect{\hat{P}_{10}}(\alpha_{2},\beta_{1}) + \expect{\hat{P}_{21}}(\alpha_{2},\beta_{1}) + \expect{\hat{P}_{02}}(\alpha_{2},\beta_{1}), \\
&P(A_2 = B_2) = \expect{\hat{P}_{00}}(\alpha_{2},\beta_{2}) + \expect{\hat{P}_{11}}(\alpha_{2},\beta_{2}) + \expect{\hat{P}_{22}}(\alpha_{2},\beta_{2}), \\
&P(B_2 = A_1) = \expect{\hat{P}_{00}}(\alpha_{1},\beta_{2}) + \expect{\hat{P}_{11}}(\alpha_{1},\beta_{2}) + \expect{\hat{P}_{22}}(\alpha_{1},\beta_{2}), \\
&P(A_1 = B_1 - 1) = \expect{\hat{P}_{10}}(\alpha_{1},\beta_{1}) + \expect{\hat{P}_{21}}(\alpha_{1},\beta_{1}) + \expect{\hat{P}_{02}}(\alpha_{1},\beta_{1}), \\
&P(B_1 = A_2) = \expect{\hat{P}_{00}}(\alpha_{2},\beta_{1}) + \expect{\hat{P}_{11}}(\alpha_{2},\beta_{1}) + \expect{\hat{P}_{22}}(\alpha_{2},\beta_{1}), \\
&P(A_2 = B_2 - 1) = \expect{\hat{P}_{10}}(\alpha_{2},\beta_{2}) + \expect{\hat{P}_{21}}(\alpha_{2},\beta_{2}) + \expect{\hat{P}_{02}}(\alpha_{2},\beta_{2}), \\
&P(B_2 = A_1 - 1) = \expect{\hat{P}_{01}}(\alpha_{1},\beta_{2}) + \expect{\hat{P}_{12}}(\alpha_{1},\beta_{2}) + \expect{\hat{P}_{20}}(\alpha_{1},\beta_{2}),
\end{split}
\end{equation}
where we conveniently relabelled the four-photon output states. Namely,
\begin{equation*}
\begin{split}
&\ket{2200} \longleftrightarrow \ket{00} \rightarrow \hat{P}_{00}\:, \quad \ket{1111} \longleftrightarrow \ket{11} \rightarrow \hat{P}_{11}\:, \\
&\ket{0022} \longleftrightarrow \ket{22} \rightarrow \hat{P}_{22}\:, \quad \ket{1210} \longleftrightarrow \ket{10} \rightarrow \hat{P}_{10}\:, \\
&\ket{0121} \longleftrightarrow \ket{21} \rightarrow \hat{P}_{21}\:, \quad \ket{2002} \longleftrightarrow \ket{02} \rightarrow \hat{P}_{02}\:, \\
&\ket{2101} \longleftrightarrow \ket{01} \rightarrow \hat{P}_{01}\:, \quad \ket{1012} \longleftrightarrow \ket{12} \rightarrow \hat{P}_{12}\:, \\
&\ket{0220} \longleftrightarrow \ket{20} \rightarrow \hat{P}_{20}\:.
\end{split}
\end{equation*}
This operation amounts to retaining the numbers of detected photons on the two reflected output modes as the two significant digits of the labels. The expectation values $\expect{\cdot}$ of the projection operators $\{\hat{P}_{kk^{\prime}}\}$, $k, k^{\prime} = 0, 1, 2$, appearing in Eq.(\ref{probs:outcomes}) are calculated on the output state $\ket{\Psi}_{\rm out}$, {\it i.e.}, $\expect{\hat{P}_{kk^{\prime}}} = {}_{\rm out}\!\bra{\Psi}\hat{P}_{kk^{\prime}}\ket{\Psi}_{\rm out}$ for all $k, k^{\prime}$ considered. The quantities $\{\alpha_{l},\beta_{l^{\prime}}\}$, $l, l^{\prime} = 1, 2$, refer to the physical measurement settings, which in our case are identified by the complex amplitude of each local oscillator.
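As a compact illustration of how the eight probabilities enter the Bell parameter, the following sketch (the function and label names are ours, not part of the experiment) assembles $I_3$ from Eq.\eqref{I3}:

```python
def cglmp_i3(P):
    """Assemble the CGLMP parameter I3 from the eight joint probabilities;
    P maps labels such as 'A1=B1' to probability values."""
    plus = P['A1=B1'] + P['B1=A2+1'] + P['A2=B2'] + P['B2=A1']
    minus = P['A1=B1-1'] + P['B1=A2'] + P['A2=B2-1'] + P['B2=A1-1']
    return plus - minus

# Local realism bounds |I3| <= 2; the maximal quantum value for D = 3
# is about 2.873 (Collins et al.).
```

A flat (uncorrelated) assignment of the eight probabilities gives $I_3 = 0$, as expected for settings that carry no correlations.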
In Supplementary Fig.~7 we show the oscillations of the terms $\{\expect{\hat{P}_{kk^{\prime}}}\}$, $k, k^{\prime} = 0, 1, 2$, which appear in Eq.\eqref{I3} through the detection probabilities $P(A_i = B_j + \epsilon)$, as the phase of one local oscillator is varied. Once again, we note that this study assumes an ideal setup, from the generation of pure two-mode squeezed states to perfect WFH detectors. The most remarkable feature in Supplementary Fig.~7 is the presence of two different oscillation periods for $\{\expect{\hat{P}_{kk^{\prime}}}\}$, the period being determined by the specific Fock layer observed.
The presence of different oscillation periods associated to distinct photon numbers has been observed when a single-mode squeezed state is interfered with a coherent state of comparable energy~\cite{Afek}. However, the presence of such super-resolved fringes does not imply a quantum enhancement in phase estimation. In fact, while super-resolution can be obtained by purely classical means~\cite{Resch,Kothe}, super-sensitivity can only be achieved with quantum resources. An analysis in this direction needs to account for all resources and all post-selection probabilities. The usefulness of the scheme discussed in this work for quantum metrology is currently under investigation.
Finally, we comment on the role of detection efficiency in the framework of the generalised Bell inequalities we have just presented. As shown in Supplementary Fig.~4 and Supplementary Fig.~5, lower detection efficiencies decrease the visibility of the oscillations and artificially enhance the low-photon components. The effect is clearly more detrimental than the induction of a detection loophole, as in standard Bell tests performed with entangled photons. \\
Further, this effect is more pronounced for higher Fock layers. In fact, the first-order contribution to noise in Fock layer $M$ comes from the higher layer $M+1$. Each term in Eq.(\ref{I3}) may then be written as
\begin{equation}
P\propto \eta^M P_M + (M{+}1) \eta^M(1-\eta) \nu_{M{+}1}
\end{equation}
where $P_M$ is the correlation in the ideal case ${\eta}{=}1$, and $\nu_{M+1}$ is a noise term originating from Fock layer $M+1$, due to the loss of a single photon which one could otherwise reject by post-selection. We thus see that the relative weight of the noise term grows as $M$ increases. Therefore, the achievable violation not only decreases with $M$, but its resilience to loss also becomes weaker.
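The growth of the noise weight can be checked with one line of arithmetic: taking $P_M$ and $\nu_{M+1}$ of comparable magnitude, the ratio of the two coefficients in the expression above is $(M+1)(1-\eta)$, linear in the Fock layer (the function name and the value $\eta=0.9$ below are illustrative):

```python
def noise_to_signal(M, eta):
    """Ratio of the leaked (M+1)-layer coefficient to the ideal one,
    taking P_M and nu_{M+1} of comparable magnitude."""
    return (M + 1) * (1 - eta)

ratios = [noise_to_signal(M, eta=0.9) for M in (1, 2, 3)]
# grows linearly with the Fock layer M
```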
\section{Introduction\label{Intro}}
Non-Hermitian Hamiltonians have recently attracted a lot of attention in the physics community across a wide range of fields, owing to their experimental feasibility~\cite{Doppler2016,El2018,Lu2014topological,Chen2017,Xu2016,Feng2014,Hodaei2014,Hodaei2017,Peng2014loss,
Liertzer2012,Feng2013,Ozawa2019,Konotop2016,Ruter2010,Regensburger2012,Diehl2011}
and theoretical richness~\cite{Longhi2010,Carmichael1993,Lee2014,Malzard2015,Makris2008,Klaiman2008,
Lee2019,Ashida2018,Bender2007,Bender2005}.
Quantum systems driven by non-Hermitian Hamiltonians display various fascinating
physical phenomena compared to those governed by Hermitian Hamiltonians.
In cold-atom experiments, non-Hermitian Hamiltonians appear due to spontaneous decay~\cite{Dalibard1992,Plenio1998,Dum1992,Lee2014,Molmer1993}.
Furthermore, non-Hermitian Hamiltonians have been utilized to treat a variety of physical problems in which the interaction between the system and its environment must be taken into account, such as free-electron lasers~\cite{Dattoli1988}, topological lasers~\cite{Kartashov2019,Bandres2018,Harari2018}, electric circuits~\cite{Liu2020,Helbig2020,Hofmann2020}, transverse mode propagation in optical resonators~\cite{SIEGMAN1979}, multiphoton ionization~\cite{Baker1983},
many resonance phenomena~\cite{Moiseyev2011non}, nitrogen-vacancy-center in diamond~\cite{wu2019,Zhang2021},
with applications on high performance sensors~\cite{Wiersig2014,Lau2018,Hodaei2017,Chen2017}, and unidirectional transport devices~\cite{Lin2011,Feng2013}.
Theoretically, non-Hermitian Hamiltonians trigger many novel physical
phenomena, such as non-Hermitian skin effect~\cite{Yao2018a,Yao2018b}, real eigenvalues with parity-time (PT) symmetry~\cite{Bender1998},
new topological properties corresponding to exceptional points (EPs)~\cite{Zhang2020,He2020,Shen2018,Bergholtz2021,Kawabata2019}, disorder induced self-energy in the effective Hamiltonian~\cite{Shen2018b,Papaj2019,Zhang2019,Tang2020,Jiang2019}, dynamical and topological properties~\cite{Bergholtz2021,Yuto2020,El2018,Konotop2016,Ygor2021}.
Finding the dynamical signatures of these non-equilibrium topological phases has become a fascinating area for further experimental and theoretical research.
In recent works, several dynamical probes of the topological invariants of non-Hermitian phases in one and two dimensions have been introduced,
such as the non-Hermitian extension of dynamical winding numbers~\cite{Zhou2018,Longwen2021,Zhu2020,Zhou2018b,Zhou2019b,Zhou2021b}
and mean chiral displacements~\cite{Zhou2019,Pan2020}. Further, the dynamical quantum phase transitions (DQPTs)~\cite{Heyl2013,Jurcevic2017,Andraschko2014,Budich2016,Heyl2018,Abdi2019,Sedlmayr2018,Sedlmayr2018b,
Sedlmayr2020,Jafari2019,Bhattacharjee2018,Bhattacharya2017,Heyl2017,Guo2019,Tian2019,Wang2019,Uhrich2020,
Mishra2020,Jafari2019dynamical,Sadrzadeh2021,Yu2021,BozhenZhou2021,Porta2020,Sehrawat2021,Modak2021,Peotta2021,Juan}
following a quench across the EPs of a non-Hermitian lattice model are studied in Refs.~\cite{Longwen2021,Zhou2018}.
It has been shown that DQPTs appear for a quench from a trivial to a non-Hermitian topological phase~\cite{Longwen2021}.
This discovery indicates an underlying relationship between non-Hermitian topological phases and DQPTs.
To the best of our knowledge, Floquet dynamical phase transitions in systems with gain and loss, and therefore
subject to non-unitary evolution, have not been addressed in prior publications, and their study can provide a number of new insights into the subject.
This paper is devoted to the study of Floquet dynamical phase transitions~\cite{Qianqian2021,Zamani2020,Jafari2021,Yang2019} in the periodically time-driven XY and extended XY spin models in the presence of dissipation.
The non-Hermitian (imaginary) terms represent dissipation--the physical gain and loss--arising when the chain interacts with the environment.
Our main purpose is to study the effects of the non-Hermitian terms on the times at which dissipative Floquet dynamical phase transitions (DFDPTs) occur and on the range of driving frequency over which they appear.
First, we probe the phase diagram of the time-independent
effective Floquet non-Hermitian Hamiltonians by analyzing the energy gap of the systems analytically. We show that the phase diagram of
the system is divided into three regions: one with a purely real gap, bounded by exceptional points, one with a purely imaginary gap, and one with a complex gap.
We find that the region with a real energy gap, where the DFDPTs occur,
is topologically nontrivial in the time-independent effective Floquet non-Hermitian XY Hamiltonian,
while the real-gap region of the time-independent effective Floquet non-Hermitian extended XY Hamiltonian is topologically trivial.
In other words, different from the results obtained for the quenched case~\cite{Longwen2021}, the existence of a non-Hermitian topologically nontrivial phase
is not a necessary condition for the appearance of DFDPTs.
We also show that the driving-frequency range over which DFDPTs occur narrows as the dissipation coupling increases and shrinks to a single point at a critical value of the dissipation. We find that adding dissipation (the imaginary term) to the Hermitian Hamiltonians affects those bounds of the driving-frequency range
that correspond to the critical (gap-closing) points of the time-independent effective Floquet Hermitian Hamiltonians.
\section{Dynamical phase transition\label{DQPT}}
The notion of a DQPT is borrowed from the analogy between the partition function of an equilibrium system
$Z(\beta)={\rm Tr} [ e^{-\beta {\cal H}}]$ and the boundary quantum partition function $Z(z)=\langle\psi_{0}|e^{-z {\cal H}}|\psi_{0}\rangle$
with $|\psi_{0}\rangle$ a boundary state and $z \in \mathds{C}$.
When $z=it$, the boundary quantum partition function corresponds to a Loschmidt amplitude (LA),
${\cal L}(t)=\langle\psi_{0}|e^{-{\it i} {\cal H}t}|\psi_{0}\rangle=\prod_{k} {\cal L}_{k}(t)$,
expressing the overlap between the initial state $|\psi_{0}\rangle$
and the time-evolved one $|\psi_{0}(t)\rangle$~\cite{Heyl2013,Jurcevic2017,Andraschko2014,Budich2016,Heyl2018,Abdi2019,Yang2019,Sedlmayr2018,Sedlmayr2018b,
Sedlmayr2020,Jafari2019,Bhattacharjee2018,Bhattacharya2017,Heyl2017,Guo2019,Tian2019,Wang2019,Uhrich2020,
Zamani2020,Mishra2020,Jafari2019dynamical,Sadrzadeh2021,Jafari2021,Yu2021,BozhenZhou2021,Porta2020,Qianqian2021,Sehrawat2021,Modak2021}.
It has been argued that, like the thermal free energy, a dynamical free energy might well be defined as~\cite{Heyl2013}
\begin{eqnarray}
\label{eq1}
g(t)=-\frac{1}{2\pi}\int dk \ln|{\cal L}_{k}(t)|^{2}.
\end{eqnarray}
Here the real time $t$ plays the role of the control parameter.
DQPTs are signaled by non-analytical behavior of dynamical free energy $g(t)$ as a function of time,
evincing in characteristic cusps in $g(t)$ or one of its time-derivatives~\cite{Heyl2013,Jurcevic2017,Andraschko2014,Budich2016,Heyl2018,Abdi2019,Yang2019,Sedlmayr2018,Sedlmayr2018b,
Sedlmayr2020,Jafari2019,Bhattacharjee2018,Bhattacharya2017,Heyl2017,Guo2019,Tian2019,Wang2019,Uhrich2020,
Zamani2020,Mishra2020,Jafari2019dynamical,Sadrzadeh2021,Jafari2021,Yu2021,BozhenZhou2021,Porta2020,Qianqian2021,Sehrawat2021,Modak2021}.
These cusps are accompanied by zeros of the Loschmidt amplitude ${\cal L}(t)$, known in statistical physics as Fisher zeros of the
partition function~\cite{BozhenZhou2021,Heyl2018}.
Furthermore, analogous to order parameters at equilibrium quantum phase transitions, a dynamical topological order parameter (DTOP)
has been proposed to capture DQPTs~\cite{Budich2016}.
The DTOP is quantized, and its unit-magnitude jumps at the DQPT times reveal the topological character of DQPTs~\cite{Budich2016,Bhattacharjee2018}.
It is extracted from the ``gauge-invariant'' Pancharatnam geometric
phase associated with the Loschmidt amplitude~\cite{Budich2016}.
The dynamical topological order parameter is defined as~\cite{Budich2016}
\begin{eqnarray}
\label{eq2}
\nu_D(t)=\frac{1}{2\pi}\int_0^\pi\frac{\partial\phi^G(k,t)}{\partial k}\mathrm{d}k,
\end{eqnarray}
where the geometric phase $\phi^G(k,t)$ is obtained from the total phase $\phi(k,t)$ by subtracting the dynamical
phase $\phi^{D}(k,t)$:
$$
\phi^G(k,t)=\phi(k,t)-\phi^{D}(k,t).
$$
The total phase $\phi(k,t)$ is the phase factor of the LA in its polar representation,
i.e., ${\cal L}_{k}(t)=|{\cal L}_{k}(t)|e^{i\phi(k,t)}$, which gives $\phi(k,t)=-i\ln\left[{\cal L}_{k}(t)/|{\cal L}_{k}(t)|\right]$, and
\begin{eqnarray}
\begin{aligned}
\label{eq3}
\phi^{D}(k,t)
= &
-\int_{0}^{t}dt'\frac{\langle\psi_{-}(k,t')|
%
{\cal H}_{k}(t')
|\psi_{-}(k,t')\rangle}
{\langle\psi_{-}(k,t')|\psi_{-}(k,t')\rangle} \\
&+
\frac{i}{2}\ln\left[\frac{\langle\psi_{-}(k,t)|\psi_{-}(k,t)\rangle}{\langle\psi_{-}(k,0)|\psi_{-}(k,0)\rangle}\right].
\end{aligned}
\end{eqnarray}
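Numerically, the integral in Eq.~(\ref{eq2}) reduces to the unwrapped change of $\phi^G$ across $k\in[0,\pi]$. A minimal sketch (our own code, applied here to a synthetic phase profile rather than to a specific model) reads:

```python
import numpy as np

def dtop(phi_g):
    """Dynamical topological order parameter: winding of the geometric
    phase phi^G(k, t) sampled on a uniform grid over k in [0, pi],
    computed as the unwrapped endpoint difference divided by 2*pi."""
    unwrapped = np.unwrap(phi_g)
    return (unwrapped[-1] - unwrapped[0]) / (2 * np.pi)

# A synthetic profile winding once across [0, pi] gives nu_D = 1:
k = np.linspace(0.0, np.pi, 401)
nu = dtop(2.0 * k)
```

The grid must be fine enough that neighbouring phase samples differ by less than $\pi$, otherwise the unwrapping misses windings.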
In the following, to examine aspects of dissipation in quantum Floquet systems, we search for dissipative Floquet DPTs in the proposed non-Hermitian periodically time-driven Hamiltonians.
\section{Dissipative periodically time driven XY Model and Exact Solution\label{XYmodel}}
In this section we study the phase diagram, topological properties, and
FDPTs of the dissipative periodically time-driven XY model.
We show that the region in which DFDPTs occur is bounded by exceptional points,
is topologically nontrivial, and is where the time-independent effective Floquet non-Hermitian Hamiltonian
has real eigenvalues.
\subsection{Exact solution\label{ESXY}}
The Hamiltonian of the $N$-site dissipative periodically time-driven XY spin model is given by
\begin{equation}
\begin{aligned}
\label{eq4}
{\cal H}(t)
\!
=
\!\!
\sum_{n}
&
\Big[
[J
\!
-
\!
\gamma\cos(\omega t)]S_{n}^{x}S_{n+1}^{x}
+
[J
\!
+
\!
\gamma\cos(\omega t)]S_{n}^{y}S_{n+1}^{y}
\\
&-\gamma\sin(\omega t)(S_{n}^{x}S_{n+1}^{y}+S_{n}^{y}S_{n+1}^{x})+h S_{n}^{z}
\\
&-
{\it i}(\Gamma_{u}S_{n}^{+}S_{n}^{-}+\Gamma_{d}S_{n}^{-}S_{n}^{+})\Big],
\end{aligned}
\end{equation}
where $S_{n}^{\alpha=\{x,y,z\} }=\sigma^{\alpha}/2$,
and $\sigma^{\alpha}$ are Pauli matrices.
Furthermore,
$S_{n}^{\pm}=\sigma^{\pm}/2=(\sigma^{x}\pm {\it i}\sigma^{y})/2$ are the spin raising and lowering operators
which correspond to the gain $\Gamma_{u}<0$ ($\Gamma_{d}<0$) or loss $\Gamma_{u}>0$ ($\Gamma_{d}>0$) of
spin up state $|\uparrow\rangle$ (spin down state $|\downarrow\rangle$) during the interacting processes with the environment
with the rate of $\Gamma_{u}$ ($\Gamma_{d}$), and $\omega$ is the driving frequency.
The system can be reduced to the Floquet Hermitian XY model when $\Gamma_{u}=\Gamma_{d}=0$~\cite{Yang2019}.
The term ``dissipative'' refers to the system's tunneling into its own continuum, a situation common in quantum optics and nuclear physics when the Feshbach projection method is applied to intrinsic states.
The Hamiltonian, Eq.~(\ref{eq4}), can be mapped to the free spinless fermion model with complex
chemical potential~\cite{Zeng2016} by Jordan-Wigner transformation~\cite{LIEB1961,Barouch1971,Jafari2011,Jafari2012} (see Appendix \ref{AA})
\begin{eqnarray}
\begin{aligned}
\label{eq5}
{\cal H}(t)= \sum_{n=1}^{N}
\Big[
&
\Big(\frac{J}{2} c_{n}^{\dagger} c_{n+1}
-\frac{\gamma}{2} e^{-{\it i} \omega t} c_{n}^{\dagger} c^{\dagger}_{n+1}+
{\rm H.C.}
\Big)
\quad
\\
&+
(h-{\it i}\Gamma_{-}) c_{n}^{\dagger} c_{n}-{\it i}\Gamma_{+}\Big],
\end{aligned}
\end{eqnarray}
where
$\Gamma_{\pm}=\Gamma_{u} \pm \Gamma_{d}$,
and
$c_{n}^{\dagger}$ ($c_{n}$) are
the spinless fermion creation (annihilation) operators, respectively.
Thanks to the Fourier transform, the Hamiltonian ${\cal H}(t)$ in Eq.~(\ref{eq5})
can be written as the sum of $N/2$ non-interacting terms
$${\cal H}(t) = \sum_{k>0} {\cal H}_{k}(t)$$
where ${\cal H}_{k}(t)=C^{\dagger}\mathbb{H}_{k}(t)C-{\it i}\Gamma_{+}\mathbb{1}$
with
$C^{\dagger}=(c_{k}^{\dagger},~c_{-k})$, and
\begin{eqnarray}
\begin{aligned}
\label{eq6}
\mathbb{H}_{k}(t)=
\left(
\begin{array}{cc}
h_{z}(k) & {\it i}h_{xy}(k)e^{-{\it i} \omega t} \\
-{\it i}h_{xy}(k)e^{{\it i} \omega t} & -h_{z}(k) \\
\end{array}
\right).
\end{aligned}
\end{eqnarray}
The parameters $h_{xy}(k)$ and $h_{z}(k) $ are given as $h_{xy}(k)=\gamma\sin(k)$, and $h_{z}(k)=J\cos(k)+h-{\it i}\Gamma_{-}$.
Using the time-dependent Schr\"{o}dinger equation
${\it i}\frac{d}{dt}|\psi_{k}^{\pm}(t)\rangle=\mathbb{H}_{k}(t)|\psi_{k}^{\pm}(t)\rangle$ in
the rotating frame given by the non-unitary transformation
$U(t)=U_{R}(t)U_{D}(t)$, with $U_{R}(t)=\exp[{\it i}\omega(\mathbb{1}-\sigma^{z})t/2]$, and
$U_D(t)= e^{-\Gamma_{+}t}\mathbb{1}$, the time-dependent Hamiltonian is transformed to the time-independent
effective Floquet non-Hermitian form (see Appendix \ref{AA})
\begin{eqnarray}
\label{eq7}
H_{F}(k)=-h_{xy}(k)\sigma_{y}+\Big(h_{z}(k)-\frac{\omega}{2}\Big)\sigma_{z}+\frac{\omega}{2}\mathbb{1}.
\end{eqnarray}
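A quick numerical check of $H_F(k)$ (our own sketch; the parameter values $J=h=\gamma=1$, $\omega=2$, $\Gamma_-=0.3$ are illustrative choices consistent with the figures below) confirms that its eigenvalues are $\frac{\omega}{2}\pm\sqrt{h_{xy}^{2}(k)+[h_{z}(k)-\frac{\omega}{2}]^{2}}$:

```python
import numpy as np

def floquet_spectrum(k, J=1.0, h=1.0, gamma=1.0, Gm=0.3, omega=2.0):
    """Eigenvalues of the effective Floquet non-Hermitian Hamiltonian H_F(k);
    Gm stands for Gamma_-."""
    sy = np.array([[0.0, -1j], [1j, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    hxy = gamma * np.sin(k)
    hz = J * np.cos(k) + h - 1j * Gm
    HF = -hxy * sy + (hz - omega / 2) * sz + (omega / 2) * np.eye(2)
    return np.linalg.eigvals(HF)

# Closed form: omega/2 +/- sqrt(hxy^2 + (hz - omega/2)^2)
```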
Then the time-evolved state $|\psi_{k}(t)\rangle$ of the quasi-spin Hamiltonian $H_{k}(t)$ is given by
{\small
\begin{eqnarray}
|\psi_{k}(t)\rangle&=&U(t)e^{-iH_{F}(k)t}|\varphi_{k}\rangle,
\label{eq8}
\end{eqnarray}
}
where $|\varphi_{k}\rangle$ is the initial state of the system at $t=0$.
Due to the decoupling of different momentum sectors, the initial and time-evolved ground states of
the original Hamiltonian exhibit a factorization property that is expressed by
\begin{equation}
\begin{aligned}
&
|\psi(t)\rangle
=
\prod_{k}|\psi_{k}(t)\rangle=\prod_{k} U(t)e^{-iH_{F}(k)t}|\varphi_{k}\rangle,
\\
&
|\psi(t=0)\rangle
=
\prod_{k}|\varphi_{k}\rangle.
\label{eq9}
\end{aligned}
\end{equation}
We assume that at $t=0$ the system is prepared in the state $|\psi(0)\rangle$ with $|\varphi_{k}\rangle=|\downarrow\rangle$ for every $k$, i.e.,
$c_1(t=0)=0$ and $c_2(t=0)=1$, where $c_1$ and $c_2$ are the probability amplitudes of
$|\varphi_{k}\rangle$ on the up ($|\uparrow\rangle$) and down ($|\downarrow\rangle$) states, respectively.
Then, according to Eq.~(\ref{eq9}), the unnormalized time-evolved state $|\psi(k,t)\rangle$ of the Hamiltonian ${\cal H}_{k}(t)$ is given by:
\begin{eqnarray}
\label{eq10}
\begin{aligned}
|\psi(t)\rangle
=&
\prod_{k}|\psi(k,t)\rangle,
\end{aligned}
\end{eqnarray}
with
\begin{eqnarray}
\begin{aligned}
&
|\psi(k,t)\rangle
=
\Big[e^{-\Gamma_{+} t}\Big(\frac{h_{xy}(k)}{\Lambda}\sin(\Lambda t)\Big)|\uparrow\rangle
\\
&\quad
+
e^{-\Gamma_{+} t} e^{{\it i}\omega t}
\Big(
\!\!
\cos(\Lambda t)+
{\it i}\frac{2h_{z}(k)-\omega}{2\Lambda}
\sin(\Lambda t)\Big)|\downarrow\rangle\Big],
\;\;\quad
\end{aligned}
\end{eqnarray}
and
$\Lambda=\sqrt{h_{xy}^{2}(k)+[h_{z}(k)-\frac{\omega}{2}]^{2}}$.
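The Loschmidt amplitude that drives the DFDPT analysis follows from the overlap of $|\psi(k,t)\rangle$ with the initial state $|\downarrow\rangle$, i.e. its down-component amplitude. The sketch below is our own code, using the parameter values of Fig.~\ref{fig2} and dropping the global decay $e^{-\Gamma_{+}t}$, which cancels once the evolved state is normalized; it evaluates $|{\cal L}_k(t)|^2$ and approximates the dynamical free energy of Eq.~(\ref{eq1}) on a uniform $k$-grid over $(0,\pi)$:

```python
import numpy as np

def loschmidt_k(k, t, J=1.0, h=1.0, gamma=1.0, Gm=0.3, omega=2.0):
    """Mode-resolved Loschmidt amplitude <down|psi(k, t)> for the initial
    state |down>; the global decay e^{-Gamma_+ t} is dropped, since it
    cancels once the evolved state is normalized."""
    hxy = gamma * np.sin(k)
    hz = J * np.cos(k) + h - 1j * Gm               # Gm stands for Gamma_-
    lam = np.sqrt(hxy**2 + (hz - omega / 2) ** 2 + 0j)
    return np.exp(1j * omega * t) * (
        np.cos(lam * t) + 1j * (2 * hz - omega) / (2 * lam) * np.sin(lam * t))

def dynamical_free_energy(t, nk=400, **pars):
    """g(t) = -(1/2 pi) * integral over k in (0, pi) of ln|L_k(t)|^2,
    approximated on a uniform grid."""
    ks = np.linspace(1e-4, np.pi - 1e-4, nk)
    lk2 = np.abs(loschmidt_k(ks, t, **pars)) ** 2
    return -np.mean(np.log(lk2)) * (ks[-1] - ks[0]) / (2 * np.pi)
```

At $t=0$ the overlap is unity for every mode, so $g(0)=0$; cusps of $g(t)$ at later times then signal the DFDPTs.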
The time-independent effective Floquet non-Hermitian Hamiltonian in Eq.~(\ref{eq7}) possesses the sublattice symmetry
$U_{s}H(k)U^{-1}_{s}=-H(k)$ with $U_{s}=\sigma_{x}$,
the generalized particle-hole symmetry
$U_{p}H^{\top}(k)U^{-1}_{p}=-H(-k)$,
and the time-reversal symmetry
$U_{T}H^{\top}(k)U^{-1}_{T}=H(-k)$, with $U_{p}=\sigma_{x}$ and $U_{T}=\mathbb{1}$.
Here $H^{\top}(k)$ denotes the transpose of $H(k)$. Consequently, the symmetry class of the non-Hermitian
time independent Hamiltonian in Eq.~(\ref{eq7}) belongs to BDI in the periodic table of non-Hermitian
topological phases~\cite{Kawabata2019}.
Moreover, $H_{F}(k)$ encompasses the inversion symmetry $U_{I}H(k)U^{-1}_{I}=H(-k)$ with $U_{I}=\sigma_{z}$, which
manifests the correspondence between the bulk topological invariant and the number of Majorana edge modes under
the open boundary condition~\cite{Kawabata2019,Zeng2016}.
The complex energy spectrum of $H_{F}$ is given as
\begin{equation}
\nonumber
\varepsilon^{\pm}_{k}=\frac{\omega}{2}\pm\sqrt{h_{xy}^{2}(k)+[h_{z}(k)-\frac{\omega}{2}]^{2}},
\end{equation}
which becomes gapless if
\begin{eqnarray}
\begin{aligned}
\label{eq11}
&2\Gamma_{-}[J\cos(k)+h-\frac{\omega}{2}]=0,\\
&[J\cos(k)+h-\frac{\omega}{2}]^{2}+
[\gamma\sin(k)]^{2}-
\Gamma_{-}^{2}=0.
\end{aligned}
\end{eqnarray}
Solving the above equations yields
\begin{eqnarray}
\begin{aligned}
\label{eq12}
&k^{\ast}=\arccos(\frac{\omega-2h}{2J}),\\
\label{eq13}
&\frac{\Gamma_{-}^2}{\gamma^2}+\frac{(\omega-2h)^2}{4J^2}=1.
\end{aligned}
\end{eqnarray}
Eq.~(\ref{eq12}) implies the limitation $|\omega-2h|<2J$, and Eq.~(\ref{eq13}) describes an elliptical
exceptional ring. Therefore, the phase diagram separates into three regions, as shown in Fig.~\ref{fig1}.
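The three regions follow from elementary inequalities in the parameters, which the following classifier (our own sketch, defaulting to $J=h=\gamma=1$) makes explicit:

```python
def region(omega, Gm, J=1.0, h=1.0, gamma=1.0):
    """Classify a point (omega, Gamma_-) of the phase diagram of the
    effective Floquet non-Hermitian XY Hamiltonian (regions I, II, III)."""
    if abs(omega - 2 * h) >= 2 * J:
        return "III"                    # complex gap: no real k* exists
    disc = gamma**2 * (1 - (omega - 2 * h) ** 2 / (4 * J**2)) - Gm**2
    return "I" if disc > 0 else "II"    # real gap inside the ring, else imaginary
```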
\begin{figure}[t!]
\centerline{\includegraphics[width=\columnwidth]{Fig1.pdf}}
\centering
\caption{(Color online) The phase diagram of the time-independent
effective Floquet non-Hermitian XY Hamiltonian. The
red line denotes the exceptional ring, corresponding to the elliptic
equation, Eq.~(\ref{eq13}). In region~(I) the system is in the
ferromagnetic phase with a purely real energy gap. In region~(II) the energy
gap of the non-Hermitian Hamiltonian is purely imaginary. In region~(III)
it is in the paramagnetic phase with a complex non-Hermitian gap.}
\label{fig1}
\end{figure}
In region~(I), inside the exceptional ring, the energy gap $\Delta=\varepsilon_{k}^{+}-\varepsilon_{k}^{-}$
is purely real, i.e., ${\rm Im}[\Delta]=0$ and ${\rm Re}[\Delta]>0$
(${\rm Im}[\mathbb{C}]$ and ${\rm Re}[\mathbb{C}]$ denote the imaginary and real parts of a complex
number $\mathbb{C}$, respectively).
In this region $k^{\ast}=\arccos[(\omega-2h)/(2J)]$
and
$\Delta=\sqrt{\gamma^{2}[1-(\omega-2h)^2/(4J^{2})]-\Gamma_{-}^{2}}$.
In region~(II) the gap is purely imaginary, i.e., ${\rm Im}[\Delta]\neq0$ and ${\rm Re}[\Delta]=0$.
In this region we still have $k^{\ast}=\arccos[(\omega-2h)/(2J)]$, but
the non-Hermitian strength $\Gamma_{-}$ is large enough to dominate, so that
$\Delta={\it i}\sqrt{\Gamma_{-}^{2}-\gamma^{2}[1-(\omega-2h)^2/(4J^{2})]}$.
Region~(III) ($|\omega-2h|>2J$) is characterised by a complex gap; in other words, in region~(III)
both the real and imaginary parts of the gap are non-zero.
\subsection{Complex geometrical non-adiabatic phase\label{GPXY}}
In this section we study the geometric phase of the model to show how it
can detect the three regions of the time-independent effective Floquet non-Hermitian
XY Hamiltonian identified in the previous section.
For the non-adiabatic evolution we use the Lewis-Riesenfeld invariant theory~\cite{Lewis1969}, generalized
to non-Hermitian systems~\cite{Gao1992,GARRISON1988}.
\begin{figure*}[t!]
\centerline{\includegraphics[width=\linewidth]{Fig2.pdf}}
\centering
\caption{(Color online) The density plot of Loschmidt echo $|{\cal L}_{k}(t)|^{2}$
of periodically time driven XY model as a function of time $t$ and $k$, for (a) $\Gamma_{-}=0$,
(b) $\Gamma_{-}=0.3$, (c) $\Gamma_{-}=0.7$, (d) $\Gamma_{-}=1.1$.
The dynamical free energy of the model versus time $t$ for (e) $\Gamma_{-}=0$,
(f) $\Gamma_{-}=0.3$, (g) $\Gamma_{-}=0.7$, (h) $\Gamma_{-}=1.1$.
The density plot of geometric phase as a function of time and $k$
for (i) $\Gamma_{-}=0$, (j) $\Gamma_{-}=0.3$, (k) $\Gamma_{-}=0.7$,
(l) $\Gamma_{-}=1.1$.
In all plots we set $J=h=\gamma=1$ and $\omega=2$.
}
\label{fig2}
\end{figure*}
According to the Lewis-Riesenfeld theory, the non-Hermitian invariant $I(t)$ associated
with the Hamiltonian $\mathbb{H}_{k}(t)$, Eq.~(\ref{eq6}), can be expressed as a linear
combination of Pauli matrices, i.e.,
\begin{eqnarray}
\label{eq14}
I(t)=r_{1}(t)S^{+}+r_{2}(t)S^{-}+r_{3}(t)S^{z},
\end{eqnarray}
where $r_{m=\{1,2,3\}}(t)$ are three time-dependent complex parameters and $I(t)$ satisfies the Liouville-von Neumann equation
\begin{eqnarray}
\label{eq15}
\frac{d}{dt}I(t)=\frac{\partial}{\partial t}I(t)-{\it i}\Big[I(t),\mathbb{H}_{k}(t)\Big].
\end{eqnarray}
Substituting the expressions for $I(t)$ and $\mathbb{H}_{k}(t)$ into Eq.~(\ref{eq15}) leads to a
system of coupled differential equations. Solving these equations subject to the cyclicity condition $I(t+T)=I(t)$,
with $T=2\pi/\omega$, yields
\begin{eqnarray}
\label{eq16}
I(t)=
\left(
\begin{array}{cc}
\cos(\alpha) & \sin(\alpha)e^{-{\it i}\omega t} \\
\sin(\alpha)e^{{\it i}\omega t} & -\cos(\alpha) \\
\end{array}
\right),
\end{eqnarray}
where $\cos(\alpha)=\frac{2h_{z}(k)-\omega}{\sqrt{4h_{xy}^{2}(k)+[2h_{z}(k)-\omega]^{2}}}$.
The complex geometrical non-adiabatic phase for a cyclic evolution $T=2\pi/\omega$ is defined by~\cite{Gao1992,GARRISON1988}
\begin{eqnarray}
\label{eq17}
\beta(t)={\it i}\int_{0}^{T}\langle\Phi_{-}|\frac{\partial}{\partial t}|\Psi_{-}\rangle dt,
\end{eqnarray}
where $|\Psi_{-}\rangle$ and $|\Phi_{-}\rangle$ are the instantaneous eigenstates of $I(t)$ and $I(t)^{\dag}$, respectively (see Appendix \ref{AB}).
The complex geometrical non-adiabatic phase for the periodically time driven dissipative Floquet XY model
is obtained as
\begin{eqnarray}
\begin{aligned}
\nonumber
\label{eq18}
\beta=\pi
[1-\cos(\alpha)]
=
\pi
\Big[
1-
\frac{2h_{z}(k)-\omega}{\sqrt{4h_{xy}^{2}(k)+[2h_{z}(k)-\omega]^{2}}}
\Big],
\end{aligned}
\\
\end{eqnarray}
which is a generalization of the complex solid angle in complex parameter space~\cite{GARRISON1988}.
The real part of the complex geometrical non-adiabatic phase is given by
\begin{eqnarray}
\nonumber
{\rm Re}[\beta]=
\left\{
\begin{array}{ll}
\pi, & \hbox{\small\text{Region(I)}} \\
\pi
\Big[
1+\frac{\Gamma_{-}}{\sqrt{\Gamma_{-}^{2}-\gamma^{2}[1-(\omega-2h)^{2}/(4J^{2})]}}
\Big], & \hbox{\small\text{Region(II)}} \\
\pi
\Big[1-f(k)
\Big]
, & \hbox{\small\text{Region(III)}}
\end{array}
\right.
\end{eqnarray}
where
$$
f(k)=
\frac{
2h+2J\cos(k)-{\omega}
}{
2{\rm Re}[\Delta]
}
-
\frac{
\Gamma_{-}
}{
{\rm Im}[\Delta]
}
.
$$
As seen, the real part of the complex geometrical non-adiabatic phase is singular at the
phase boundaries. In addition, the real part of the complex geometrical non-adiabatic phase
in region~(I), which is bounded by the exceptional points, is independent of the Hamiltonian parameters.
In the next section we will study the topological properties of the effective Hamiltonian in Eq.~(\ref{eq7})
using the winding numbers of the non-Hermitian Hamiltonians~\cite{Zhu2020}.
\subsection{Topological Invariant\label{TIXY}}
The regions inside and outside the exceptional ring are distinguished by the winding number of the non-Hermitian Hamiltonian~\cite{Zhu2020},
$N_w=\frac{1}{2\pi}\int_{-\pi}^{\pi}\partial_{k}\phi(k)dk$.
Here
$\phi(k)=\arctan\left[(\omega- 2h_{z}(k))/(2h_{y}(k))\right]$
is the winding angle~\cite{Zhu2020}. The winding numbers of the non-Hermitian topological and trivial phases are found to be $N_w=1$
for
$(\Gamma_{-}^2/\gamma^2)+[(\omega-2h)^2/(4J^2)]<1$
(inside the exceptional ring) and $N_w=0$ for outside the exceptional ring, respectively.
As can be seen, the topological phase is extended by dissipation, a feature
unique to non-Hermitian systems.
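This winding-number criterion can be checked numerically. In the sketch below (written under our reading $h_{y}(k)=\gamma\sin k$, with the $-{\it i}\Gamma_{-}$ in $h_{z}$ absorbed into the first component), $N_w$ is accumulated from the planar curve $(2\gamma\sin k-2\Gamma_{-},\,\omega-2h-2J\cos k)$, which encircles the origin exactly when $(\Gamma_{-}^2/\gamma^2)+[(\omega-2h)^2/(4J^2)]<1$:

```python
import math

def winding_number(J, h, gamma, Gm, omega, npts=2000):
    """N_w accumulated numerically along k in (-pi, pi] (a sketch; we take
    h_y(k) = gamma sin k and absorb -i Gamma_- from h_z into the x axis)."""
    def angle(k):
        x = 2*gamma*math.sin(k) - 2*Gm
        y = omega - 2*h - 2*J*math.cos(k)
        return math.atan2(y, x)
    total, prev = 0.0, angle(-math.pi)
    for i in range(1, npts + 1):
        cur = angle(-math.pi + 2*math.pi*i/npts)
        d = cur - prev
        d -= 2*math.pi*round(d / (2*math.pi))   # unwrap jumps across branch cut
        total, prev = total + d, cur
    return round(total / (2*math.pi))
```

With $J=h=\gamma=1$ and $\omega=2$ this returns $N_w=1$ for $\Gamma_{-}=0.3$ (inside the exceptional ring) and $N_w=0$ for $\Gamma_{-}=1.1$ (outside), matching the condition above.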
\subsection{Pure state dynamical topological quantum phase transition\label{PDPTXY}}
As obtained in Eq.~(\ref{eq10}), if at $t=0$ the system is prepared in $|\psi_{-}(0)\rangle=|\downarrow\rangle$,
the unnormalized time-evolved initial state of the dissipative Floquet XY Hamiltonian is expressed as:
\begin{eqnarray}
\begin{aligned}
\nonumber
\label{eq19}
&
|\psi_{-}(k,t)\rangle
=
\Big[e^{-\Gamma_{+} t}\Big(\frac{h_{xy}(k)}{\Lambda}\sin(\Lambda t)\Big)|\uparrow\rangle
\\
&
\quad
+
e^{-\Gamma_{+} t} e^{{\it i}\omega t}\Big(\cos(\Lambda t)
+
{\it i}
\frac{2h_{z}(k)-\omega}{2\Lambda}\sin(\Lambda t)\Big)|\downarrow\rangle\Big].
\end{aligned}
\\
\end{eqnarray}
It is then straightforward to determine the Loschmidt amplitude (LA):
\begin{equation}
\begin{aligned}
\label{eq20}
{\cal L}(k,t)=e^{-\Gamma_{+} t}e^{{\it i}\omega t}
\Big[
\frac{\cos(\Lambda t)+
{\it i}
\frac{2h_{z}(k)-\omega}{2\Lambda}\sin(\Lambda t)}
{\sqrt{\langle\psi_{-}(k,t)|\psi_{-}(k,t)\rangle}}
\Big].
\end{aligned}
\end{equation}
DFDPTs occur at the time instants at which at least one factor of the LA becomes zero, i.e., ${\cal L}_{k^{\ast}}(t^{\ast})=0$,
which yields
\begin{eqnarray}
\label{eq21}
t^{\ast}=\frac{-{\it i}}{2\Lambda}\ln\Big[
\frac{2h_{z}-\omega -2\Lambda}
{2h_{z}-\omega+2\Lambda}
\Big].
\end{eqnarray}
By a rather lengthy calculation, one can show that real solutions for $t^{\ast}$ exist only when
\begin{eqnarray}
\label{eq22}
2
\Big(
h-J\sqrt{1-\frac{\Gamma_{-}^2}{\gamma^2}}
\Big)
<\omega<
2
\Big(
h+J\sqrt{1-\frac{\Gamma_{-}^2}{\gamma^2}}
\Big),
\end{eqnarray}
at the quasi-momentum $k^{\ast}=\arccos[(\omega-2h)/(2J)]$, which yields
\begin{eqnarray}
\label{eq23}
t^{\ast}=\frac{1}{2\Lambda}(2n+1)\pi+\frac{1}{\Lambda}\arctan(\frac{\Gamma_{-}}{\Lambda}).
\end{eqnarray}
DFDPTs arise in the range of driving frequency over which the eigenvalues of the time-independent effective Floquet non-Hermitian XY Hamiltonian are purely real and the system is also topological, since Eq.~(\ref{eq22}) is nothing but Eq.~(\ref{eq13}).
On the other hand, when
$\Gamma_{-}^2/\gamma^2+(\omega-2h)^2/(4J^2)>1$, there is no critical momentum and $t^{\ast}$ is always complex,
resulting in no DFDPTs at any real time $t$.
We should note that the term $(2n+1)\pi/(2\Lambda)$ in Eq.~(\ref{eq23})
is the FDPT time scale in the absence of dissipation, while the term $[\arctan(\Gamma_{-}/\Lambda)]/\Lambda$
originates from the dissipation. As is clear, both the lower and upper bounds of the range of driving frequency
over which DFDPTs occur are functions of the dissipation.
Thus, the DFDPT driving-frequency range shrinks to a single point
$\omega=2h$ at $\Gamma_{-}=\pm\gamma$.
When the gain and loss of the spin-up and spin-down states are equal, $\Gamma_{u}=\Gamma_{d}\neq0$, the DFDPT times reduce to the non-dissipative FDPT times even in the presence of dissipation, as shown by Eq.~(\ref{eq23}).
In such a case, the system is in the resonance regime, where the population
cycles completely between the spin-up and spin-down states.
It is worthwhile to mention that the anisotropy $\gamma$ does not affect the non-dissipative FDPT driving-frequency
range ($\Gamma_{u}=\Gamma_{d}=0$)~\cite{Zamani2020,Jafari2021,Yang2019}, while the DFDPT driving-frequency
range is controlled by $\gamma$.
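As a consistency check, the window of Eq.~(\ref{eq22}) and the critical times of Eq.~(\ref{eq23}) can be evaluated numerically. The sketch below (a helper of our own, assuming, consistent with the region-(I) gap formula, $\Lambda(k^{\ast})=\sqrt{\gamma^{2}\sin^{2}k^{\ast}-\Gamma_{-}^{2}}$) returns the first few $t^{\ast}$, or an empty list outside the window:

```python
import math

def dfdpt_times(J, h, gamma, Gm, omega, n_max=3):
    """First n_max critical times t* of Eq. (23), or [] when Eq. (22) fails.
    Assumes Lambda(k*) = sqrt(gamma^2 sin^2 k* - Gamma_-^2) (our reading)."""
    half_width = 2*J*math.sqrt(max(0.0, 1 - Gm**2/gamma**2))
    if not (2*h - half_width < omega < 2*h + half_width):
        return []                            # t* complex: no DFDPTs
    k_star = math.acos((omega - 2*h) / (2*J))
    lam = math.sqrt(gamma**2*math.sin(k_star)**2 - Gm**2)
    return [(2*n + 1)*math.pi/(2*lam) + math.atan(Gm/lam)/lam
            for n in range(n_max)]
```

For $J=h=\gamma=1$ and $\omega=2$ the critical times shift to later values as $\Gamma_{-}$ grows, and no real $t^{\ast}$ survives once $\Gamma_{-}>\gamma$, in line with the discussion above.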
Numerical results for the density plot of the Loschmidt echo $|{\cal L}(k,t)|^{2}$, the dynamical free
energy $g(t)$, and the density plot of the geometric phase are depicted in Fig.~\ref{fig2} for Hamiltonian
parameters inside and outside the exceptional ring.
When the time-independent effective non-Hermitian XY Hamiltonian $H_F$ is in the non-Hermitian topological phase, it is apparent that
there exist critical points $k^{\ast}$ and $t^{\ast}$ where ${\cal L}_{k^{\ast}}(t^{\ast})$ becomes zero [Figs.~\ref{fig2}(a)-\ref{fig2}(c)].
Outside the exceptional ring, however, there is no such critical point [Fig.~\ref{fig2}(d)].
Moreover, in Figs.~\ref{fig2}(e)-\ref{fig2}(g) the DFDPTs appear as cusps in $g(t)$ for driving frequencies at which the system enters
the non-Hermitian topological phase, whereas the dynamical free energy shows completely analytic, smooth behavior for Hamiltonian parameters outside
the exceptional ring.
The density plots of the geometric phase are shown in Figs.~\ref{fig2}(i)-\ref{fig2}(l) for different values of the Hamiltonian parameters
inside and outside the exceptional ring.
As seen, the plots display singular changes at the critical times $t^{\ast}$ and
at the critical momentum $k^{\ast}$ when the system is in region~(I), while they show smooth behavior when the DFDPTs are absent.
This behavior reflects the topological aspect of the DFDPTs, where the phase of the time-independent effective Floquet non-Hermitian
XY Hamiltonian is topological.
\section{Dissipative periodically time driven extended XY Model\label{EXYmodel}}
In this section we study the phase diagram, topological properties, and
FDPTs of the dissipative periodically time-driven extended XY (EXY) model.
We show that the region where
\begin{figure}[t!]
\centerline{\includegraphics[width=\columnwidth]{Fig3.pdf}}
\centering
\caption{(Color online) Phase diagram of the time-independent
effective Floquet non-Hermitian extended XY Hamiltonian. The
red line denotes the exceptional ring, which corresponds to
Eq.~(\ref{eq29}). In region~(I), the energy gap of the time-independent
effective Floquet non-Hermitian extended XY Hamiltonian is
purely real. In region~(II) the energy gap of the effective non-Hermitian
Hamiltonian is purely imaginary. In region~(III) the energy gap
is complex.}
\label{fig3}
\end{figure}
DFDPTs occur is bounded by the exceptional points, where the time-independent effective Floquet non-Hermitian EXY Hamiltonian
has real eigenvalues but the system is topologically trivial.
\subsection{Exact solution\label{ESEXY}}
The Hamiltonian of the one-dimensional harmonically driven extended XY spin chain in a staggered magnetic field is given by~\cite{Zamani2020}
\begin{eqnarray}
\begin{aligned}
\label{eq24}
{\cal H}(t) = \sum_{n=1}^{N}
&
\Big[J_{1}\cos(\omega t) \Big(S_n^x S_{n+1}^x + S_n^y S_{n+1}^y \Big)\\
&- (-1)^{n} J_{1} \sin(\omega t) \Big(S_n^x S_{n+1}^y - S_n^y S_{n+1}^x \Big)\\
\nonumber
&-(-1)^{n} J_{2} \Big(S_n^x S_{n+1}^z S_{n+2}^x + S_n^y S_{n+1}^z S_{n+2}^y \Big)\\
&-{\it i}(\Gamma_{u}S_{n}^{+}S_{n}^{-}+\Gamma_{d}S_{n}^{-}S_{n}^{+})+(-1)^{n} h_{s} S_n^z \Big].
\end{aligned}
\\
\end{eqnarray}
The first and second terms in Eq.~(\ref{eq24}) describe the time-dependent nearest-neighbour XY and staggered
Dzyaloshinskii-Moriya interactions~\cite{Jafari2011b}, and the third term is a staggered cluster (three-spin)
interaction~\cite{Titvinidze}.
This Hamiltonian
can be exactly diagonalized by Jordan-Wigner transformation~\cite{LIEB1961,Barouch1971,Jafari2011,Jafari2012}
which transforms spins into spinless fermions, where $c^{\dagger}_{n}$ ($c_{n}$) is the fermion creation (annihilation)
operator~\cite{Zamani2020}. The crucial step is to define two independent fermions at site $n$, $c_{n-1/2}^{A}=c_{2n-1}$ and $c_{n}^{B}=c_{2n}$,
which can be regarded as splitting the chain into a lattice with a diatomic unit cell. The Fourier-transformed Hamiltonian can be expressed
as a sum of independent terms ${\cal H}(t)=\sum_{k}{\cal H}_{k}(t)$ with ${\cal H}_{k}(t)=\Psi^{\dagger}\mathbb{H}_{k}(t)\Psi-{\it i}\Gamma_{+}\mathbb{1}$ and $\Psi^{\dagger}_k=(c_{k}^{\dagger B},~c_{k}^{\dagger A})$,
where the Bloch single particle Hamiltonian $\mathbb{H}_{k}(t)$ is given as $\mathbb{H}_{k}(t)=[h_{xy}(\cos(\omega t)\sigma^{x}+\sin(\omega t)\sigma^{y})+h_{z}\sigma^{z}]$,
with $h_{xy}(k)=J_{1}\cos(k/2)$ and $h_{z}(k)=J_{2}\cos(k)/2+h_{s}-{\it i}\Gamma_{-}$.
Using the time-dependent Schr\"{o}dinger equation ${\it i}\frac{d}{dt}|\psi_{k}^{\pm}(t)\rangle=\mathbb{H}_{k}(t)|\psi_{k}^{\pm}(t)\rangle$ in
the rotating frame given by the periodic non-unitary transformation $U(t)=U_{R}(t)U_{D}(t)$, with
$U_{R}(t)=\exp[{\it i}\omega(\mathbb{1}-\sigma^{z})t/2]$, and $U_D(t)= e^{-\Gamma_{+}t}\mathbb{1}$
the time-dependent Hamiltonian is transformed into the time-independent effective Floquet non-Hermitian form
\begin{equation}
H_F(k)=h_{xy}(k)\sigma^{x}+(h_{z}(k)-\frac{\omega}{2})\sigma^{z}+\frac{\omega}{2}\mathbb{1}
\label{eq25}.
\end{equation}
Following the calculation in section~\ref{ESXY}, if
at $t=0$ the system is prepared in
$|\psi(0)\rangle=|\varphi_{k}\rangle=|\downarrow\rangle$, then according to Eqs.~(\ref{eq8})~and~(\ref{eq9})
the unnormalized time-evolved state $|\psi(k,t)\rangle$ of the Hamiltonian ${\cal H}_{k}(t)$ is given by:
{
\begin{eqnarray}
\begin{aligned}
\nonumber
\label{eq26}
&
|\psi(t)\rangle=
\prod_{k}|\psi(k,t)\rangle,\\
&
|\psi(k,t)\rangle=
\Big[e^{-\Gamma_{+} t}\Big(-{\it i}\frac{h_{xy}(k)}{\Lambda}\sin(\Lambda t)\Big)|\uparrow\rangle
\\
&
\quad
+
e^{-\Gamma_{+} t} e^{{\it i}\omega t}\Big(\cos(\Lambda t)+
{\it i}
\frac{2h_{z}(k)-\omega}{2\Lambda}\sin(\Lambda t)\Big)|\downarrow\rangle\Big],
\end{aligned}
\\
\end{eqnarray}
}
with $\Lambda=\sqrt{h_{xy}^{2}(k)+[h_{z}(k)-\frac{\omega}{2}]^{2}}$.
The complex energy spectrum of $H_{F}$ is given as
\begin{equation}
\nonumber
\epsilon^{\pm}_{k}=\frac{\omega}{2}\pm\sqrt{h_{xy}^{2}(k)+[h_{z}(k)-\frac{\omega}{2}]^{2}},
\end{equation}
\begin{figure*}[ht!]
\centerline{\includegraphics[width=\linewidth]{Fig4.pdf}}
\centering
\caption{(Color online) The density plot of Loschmidt echo $|{\cal L}_{k}(t)|^{2}$
of periodically time driven extended XY model as a function of time $t$ and $k$,
for (a) $\Gamma_{-}=0$, (b) $\Gamma_{-}=0.3$, (c) $\Gamma_{-}=0.5$, (d) $\Gamma_{-}=1.1$.
The dynamical free energy of the model versus time $t$ for (e) $\Gamma_{-}=0$,
(f) $\Gamma_{-}=0.3$, (g) $\Gamma_{-}=0.5$, (h) $\Gamma_{-}=1.1$.
The density plot of geometric phase as a function of time and $k$
for (i) $\Gamma_{-}=0$, (j) $\Gamma_{-}=0.3$, (k) $\Gamma_{-}=0.5$,
(l) $\Gamma_{-}=1.1$. In all plots we take $J_{1}=1, J_{2}=2\pi, h_{s}=3\pi,
\omega=6\pi$.}
\label{fig4}
\end{figure*}
and becomes gapless if
\begin{eqnarray}
\begin{aligned}
\label{eq27}
&
\Gamma_{-}
[
J_{2}\cos(k)+2h_{s}-\omega
]=0,\\
&
\frac{1}{4}
[
J_{2}\cos(k)+2h_{s}-\omega
]^{2}+
[J_{1}
\cos(\frac{k}{2})
]^{2}-{\Gamma_{-}}^{2}=0.
\quad\quad
\end{aligned}
\end{eqnarray}
By solving these equations, we can get
\begin{eqnarray}
\begin{aligned}
\label{eq28}
&k^{\ast}=\arccos(\frac{\omega-2h_{s}}{J_{2}}),
\\
\label{eq29}
&\frac{2\Gamma_{-}^2}{J_{1}^2}-\frac{\omega-2h_{s}}{J_{2}}=1.
\end{aligned}
\end{eqnarray}
The first equation of Eq.~(\ref{eq28}) implies the restriction $|\omega-2h_{s}|<J_{2}$, and the second one
defines the
exceptional points. Therefore, the system can be separated into three regions, as shown in Fig.~\ref{fig3}.
In region~(I), inside the exceptional closed curve, the energy gap $\Delta=|\epsilon_{k}^{+}-\epsilon_{k}^{-}|$
is purely real, i.e., Im$[\Delta]=0$
and Re$[\Delta]>0$. In this region $k^{\ast}=\arccos[(\omega-2h_{s})/J_{2}]$
and $\Delta=\sqrt{J_{1}^{2}(J_{2}+\omega-2h_{s})/(2J_{2})-\Gamma_{-}^{2}}$.
In region~(II) the gap is purely imaginary, i.e., Im$[\Delta]\neq0$ and Re$[\Delta]=0$.
In this region we still have $k^{\ast}=\arccos[(\omega-2h_{s})/J_{2}]$, but
the non-Hermitian strength $\Gamma_{-}$ is large enough to dominate, so that
$\Delta={\it i}\sqrt{\Gamma_{-}^{2}-J_{1}^{2}(J_{2}+\omega-2h_{s})/(2J_{2})}$.
Region~(III) ($|\omega-2h_{s}|>J_{2}$) is characterised by a complex gap; in other words, in region~(III)
both the real and imaginary parts of the gap are non-zero.
According to the discussion in section \ref{GPXY}, the complex geometrical non-adiabatic phase for
the periodically time driven Floquet EXY model is also given by Eq.~(\ref{eq18}),
in which $h_{xy}(k)=J_{1}\cos(k/2)$ and $h_{z}(k)=J_{2}\cos(k)/2+h_{s}-{\it i}\Gamma_{-}$.
Then the real part of the complex geometrical non-adiabatic phase is given by
\begin{eqnarray}
\nonumber
{\rm Re}
[\beta]=
\left\{
\begin{array}{ll}
\pi, & \hbox{\small\text{Region(I)}} \\
\pi
[1+
\frac{
\Gamma_{-}}{\sqrt{\Gamma_{-}^{2}-J_{1}^{2}
(
J_{2}+\omega-2h_{s}
)
/(2J_{2})
}}],
& \hbox{\small\text{Region(II)}}\\
\pi[1-f(k)], & \hbox{\small\text{Region(III)}}
\end{array}
\right.
\end{eqnarray}
where
$$f(k)=
\Big(
[2h_{s}+J_{2}\cos(k)-\omega]
/
(2{\rm Re}
[\Delta]
)
\Big)
-(\Gamma_{-}/
{\rm Im}
[\Delta]
)
.
$$
As seen, the real part of the complex geometrical non-adiabatic phase is singular at the
phase boundaries. It is necessary to mention that all regions in Fig.~\ref{fig3} are topologically trivial and the winding number
is zero.
\subsection{Pure state dynamical topological quantum phase transition\label{PDPTEXY}}
The Loschmidt amplitude for the EXY model is calculated as
\begin{equation}
\begin{aligned}
\label{eq30}
{\cal L}(k,t)=e^{-\Gamma_{+} t}e^{{\it i}\omega t}
\Big[
\frac{\cos(\Lambda t)+{\it i}
\frac{2h_{z}(k)-\omega}{2\Lambda}\sin(\Lambda t)}
{\sqrt{\langle\psi_{-}(k,t)|\psi_{-}(k,t)\rangle}}
\Big].
\end{aligned}
\end{equation}
DFDPTs occur at the time instants at which at least one factor of the LA becomes zero, i.e., ${\cal L}_{k^{\ast}}(t^{\ast})=0$,
which yields
\begin{eqnarray}
\label{eq31}
t^{\ast}=\frac{-{\it i}}{2\Lambda}\ln
\Big[\frac{2h_{z}-\omega-2\Lambda}{2h_{z}-\omega+2\Lambda}\Big].
\end{eqnarray}
It is straightforward to show that real solutions for $t^{\ast}$ exist only when
\begin{eqnarray}
\label{eq32}
2h_{s}+J_{2}(\frac{2\Gamma_{-}^2}{J_{1}^2}-1)<\omega<2h_{s}+J_{2},
\end{eqnarray}
at the quasi-momentum $k^{\ast}=\arccos[(\omega-2h_{s})/J_{2}]$, which yields
\begin{eqnarray}
\label{eq33}
t^{\ast}=\frac{1}{2\Lambda}(2n+1)\pi+\frac{1}{\Lambda}\arctan(\frac{\Gamma_{-}}{\Lambda}).
\end{eqnarray}
According to Eq.~(\ref{eq32}) or Eq.~(\ref{eq29}), DFDPTs exist in the range of driving frequency over which the eigenvalues of the time-independent effective Floquet non-Hermitian Hamiltonian are purely real but the system is not topological.
There is no critical momentum when $(2\Gamma_{-}^2/J_{1}^2)-[(\omega-2h_{s})/J_{2}]>1$, and $t^{\ast}$ is always complex, resulting in no DFDPTs at any real time $t$.
The lower bound of the driving-frequency range over which DFDPTs occur clearly depends on the dissipation, while the upper bound is independent of the dissipation coupling.
Therefore, the range of driving frequency over which DFDPTs
occur shrinks to a single point $\omega=J_{2}+2h_{s}$ at $\Gamma_{-}=\pm J_{1}$.
It is worth noting that, in the absence of dissipation, the FDPTs do not depend on the exchange coupling $J_1$;
in the presence of dissipation, however, the DFDPT driving-frequency range depends on $J_{1}$.
We present the density plot of the Loschmidt echo $|{\cal L}(k,t)|^{2}$, the dynamical free energy $g(t)$,
and the density plot of the geometric phase in Fig.~\ref{fig4} for different values of the dissipation.
Figs.~\ref{fig4}(a)-\ref{fig4}(c) show
that when the eigenvalues of the time-independent effective Floquet non-Hermitian EXY Hamiltonian $H_{F}$
are purely real, region~(I), there exist critical points $k^{\ast}$ and $t^{\ast}$ where ${\cal L}_{k^{\ast}}(t^{\ast})$ becomes zero.
In contrast, there is no such critical point outside region~(I) [Fig.~\ref{fig4}(d)].
Moreover, in Figs.~\ref{fig4}(e)-\ref{fig4}(h) the DFDPTs are observed as cusps in $g(t)$ for driving frequencies at which the system enters
region~(I), while $g(t)$ shows completely analytic, smooth behavior when the Hamiltonian parameters lie outside
region~(I).
The density plots of the geometric phase $\Phi^{G}_{k}$ are also plotted in Figs.~\ref{fig4}(i)-\ref{fig4}(l) for different values of the Hamiltonian parameters
inside and outside region~(I). As seen, the plots display singular changes at the critical times $t^{\ast}$ and
at the critical momentum $k^{\ast}$ when the system is in region~(I), while they show smooth behavior when the DFDPTs are absent.
This behavior represents the topological aspect of the DFDPTs, even though the phase of the time-independent effective Floquet non-Hermitian
EXY Hamiltonian is not topological.
It is remarkable that, in the absence of dissipation, the lower bound of the driving-frequency range
in Eq.~(\ref{eq32}) and both the lower and upper bounds of the driving-frequency range in Eq.~(\ref{eq22}) are
the critical (gap-closing) points of the time-independent effective Floquet Hermitian Hamiltonians
in Eqs.~(\ref{eq25})~and~(\ref{eq7}). However, the upper bound of the driving-frequency range in Eq.~(\ref{eq32})
is not a critical point of the time-independent effective Floquet Hermitian Hamiltonian in Eq.~(\ref{eq25}).
As a result, we may conclude that only those bounds which, in the absence of dissipation, coincide with gap-closing (critical) points of the
time-independent effective Floquet Hermitian Hamiltonian are affected by dissipation.
\section{Conclusion}
We have investigated dissipative Floquet dynamical phase transitions in the periodically time-driven
XY and extended XY models in the presence of imaginary terms, which represent physical gain and loss during
the interaction with the environment. We have shown that the time-independent effective Floquet non-Hermitian Hamiltonians
reveal three regions: a purely real gap (bounded by exceptional points), a purely imaginary gap, and a complex gap.
We have found that the complex geometrical non-adiabatic phase can distinguish each region of the system.
We have shown that the Floquet dynamical phase transitions still appear in the presence
of dissipation in the region where the time-independent effective Floquet non-Hermitian Hamiltonians exhibit real eigenvalues.
While the real-gap region of the time-independent effective Floquet non-Hermitian XY Hamiltonian is topologically
nontrivial, its counterpart in the time-independent effective Floquet non-Hermitian extended XY Hamiltonian is topologically
trivial. In other words, different from the results obtained for the quenched case, the existence of a topologically
nontrivial non-Hermitian phase is not a necessary condition for the appearance of dissipative Floquet dynamical phase transitions.
We have also shown that the range of driving frequency over which the dissipative Floquet dynamical phase transitions occur
narrows as the dissipation coupling increases and shrinks to a single point at the critical value of dissipation. Furthermore,
the topological character of the dissipative Floquet dynamical phase transitions in the real-gap region is revealed by
the quantization of, and jumps in, the dynamical geometric phase.
\section*{Acknowledgments}
A.~A. acknowledges the support
of the Max Planck-POSTECH-Hsinchu Center for Complex Phase
Materials, and financial support from the National Research
Foundation (NRF) funded by the Ministry of Science of Korea (Grant
No. 2016K1A4A01922028).
\section{INTRODUCTION}
Today's autonomous vehicles have the ability to safely drive in most urban environments during regular daytime conditions. However, most vehicles encounter the same problem: the sensor data received is significantly affected by weather, lighting, or other natural effects of the scene. Specifically, this affects the vehicle's ability to accurately localize to a 3-DoF pose in the environment, which hinders its ability to safely navigate to a given destination.
LiDAR tends to be expensive (cost and compute-wise) and still experiences issues of fewer detected points and greater noise in heavy weather conditions such as rain, snow, etc. On the other hand, cameras are cheap (cost and compute-wise) and collect data which can be more easily leveraged to be weather-invariant. Being less compute intensive also allows for implementation on smaller platforms such as delivery robots.
We describe a lightweight method (see Figure \ref{fig:highLevelArch0} and \ref{fig:highLevelArch}) of leveraging stereo pair camera data to localize to an accurate 3-DoF position in any environment in varying weather conditions, light changes, and other natural effects such as snow and boulders. Instead of regular RGB input to our deep learning models, we construct a semantic birds-eye view (S-BEV) map as seen in Figure \ref{fig:overview}.
\begin{figure}
\centering
\includegraphics[scale=0.2]{images/sbev_samples.png}
\caption{RGB images, segmentation images, depth maps and S-BEV representations from 3 topo-nodes in the Ford AV dataset. These S-BEVs are used for 3-DoF localization of the vehicle to a map.}
\vspace*{-0.75cm}
\label{fig:overview}
\end{figure}
We are motivated by two properties of our semantic birds-eye view map. First, the semantic nature of our birds-eye view map is theoretically unaffected by lighting and weather conditions, since semantic classes such as car, road, board, etc. stay the same across lighting and seasons. Of course, this depends on the quality of the segmentation model, but we show that most off-the-shelf segmentation models suffice as the segmenter for this pipeline. Second, a birds-eye view map characteristically doesn't change much in shape or characteristics even when parts of a scene change, such as mounds of snow or puddles of rain. While these may affect the RGB image, viewing the scene top-down reflects little to no change when situations like these are faced.
Our localizer begins with an autoencoder we use to encode information about a BEV map and its relative position to a topological node. During test time, we detach the decoder part of this autoencoder and perform a nearest neighbor classification of the computed latent embeddings between the candidate S-BEV and the set of previously mapped and collected S-BEVs. We then concatenate this classification with the latent embedding of the candidate image and feed it into a 3-DoF regressor which is built on fully connected layers and outputs a 3-dimensional vector representing the lateral translation, the longitudinal translation, and the angular difference between the closest topological node and the candidate BEV map. We then perform pose concatenation between the relative pose and the global pose of the closest node to find the global pose of the candidate image in the scene. The system architecture is shown in Figure \ref{fig:aeArch}.
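To make the coarse-to-fine step concrete, the following minimal sketch (hypothetical helper names of our own; the actual encoder and 3-DoF regressor are the neural networks described above) shows the two non-learned operations: the nearest-neighbor lookup over stored latent embeddings, and the SE(2) concatenation of the regressed relative pose (lateral, longitudinal, angular) with the matched node's global pose:

```python
import math

def nearest_node(latent, node_latents):
    """Coarse step: index of the topo-node whose stored embedding is closest
    (Euclidean) to the candidate S-BEV's latent embedding."""
    d2 = [sum((a - b)**2 for a, b in zip(latent, z)) for z in node_latents]
    return min(range(len(d2)), key=d2.__getitem__)

def concat_pose(node_pose, rel_pose):
    """Fine step: compose the regressed relative (dx, dy, dtheta) with the
    node's global (x, y, theta) in SE(2) to get the global 3-DoF pose."""
    x, y, th = node_pose
    dx, dy, dth = rel_pose
    return (x + dx*math.cos(th) - dy*math.sin(th),
            y + dx*math.sin(th) + dy*math.cos(th),
            (th + dth + math.pi) % (2*math.pi) - math.pi)  # wrap to (-pi, pi]
```

For example, a node facing $+90^{\circ}$ with a regressed 1 m longitudinal offset yields a global pose one meter along the node's heading.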
Our system is trained with an equal number of S-BEV samples per topological node. In most cases, scenes are unbalanced in terms of the number of samples within nodes due to various factors such as travelling speed, traffic, etc. We undersample to have an equal number of samples per node.
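This per-node balancing amounts to a simple undersampling pass; a minimal sketch (the helper name is ours):

```python
import random

def balance_per_node(samples_by_node, seed=0):
    """Undersample so every topo-node contributes the same number of S-BEVs."""
    rng = random.Random(seed)                      # fixed seed for repeatability
    n_min = min(len(s) for s in samples_by_node.values())
    return {node: rng.sample(s, n_min)
            for node, s in samples_by_node.items()}
```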
We suggest that this method of localization is a useful solution for robots and cars which operate in pre-mapped environments that can change drastically in lighting and weather at various times of day or across seasons of the year. Using an S-BEV map allows for the sole use of 2D convolutional networks rather than 3D, which contributes to an increase in speed. Localizing to a topo-node and training the neural net to predict a relative pose also constrains the maximum error to within a topo-node. Our solution aims to have the right balance of robustness, accuracy and speed for deployment on relatively small and light-weight embedded systems.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.98\linewidth]{images/SBEV_overview.png}
\end{center}
\vspace*{-0.35cm}
\caption{S-BEV Localization System}
\label{fig:highLevelArch0}
\vspace*{-0.45cm}
\end{figure}
\section{RELATED WORK}
\subsection{Traditional Feature-based Localization}
Many existing large-scale visual localization approaches use local features such as SIFT \cite{SIFT} to create 2D-3D matches between 2D points in the image and 3D points in the 3D SfM model \cite{irschara2009pcls, toft2018semantic, svarm2017city}. These feature correspondences are then used to retrieve the camera pose. While these tend to attain accurate poses, they don't seem to generalize to long-term season-by-season localization. They are also compute intensive due to their exhaustive matching, and as the model grows in size, matching becomes less accurate due to perceptual aliasing, where multiple places look the same. Situations like day and night are also a particular struggle for this type of model because SIFT simply doesn't consistently match features in images taken at the same location in different weather or lighting conditions. While some approaches try to mitigate perceptual aliasing through a separate step for image retrieval \cite{irschara2009pcls} or through geometric outlier filtering \cite{zeisl2015voting, sattler2017outliers, svarm2017city}, they fail when not enough visual or structural overlap exists between varying viewpoints.
Our proposed approach uses a novel S-BEV map which can be easily augmented to introduce diverse viewpoints in angle, longitudinal, or lateral change. It also incorporates a coarse to fine approach unlike some approaches that use an intermediary image retrieval \cite{irschara2009pcls, sarlin2019coarse} which allows us to learn features of specific parts of the map in an auxiliary way.
\subsection{Motion Estimation while Learning Depth}
There has been a lot of recent work on estimating depth \cite{eigen2014depth} from a single image. Some of the most recent successes have been monodepth2 \cite{monodepth2} and packnet-sfm \cite{Guizilini2019PackNetSfM3P}, which use separate techniques, treating depth estimation as an auxiliary task and using 3D packing, respectively, to generate accurate depth maps. Specifically, monodepth2 trains a pose estimation network between frames to eventually generate accurate depth maps.
DeMon \cite{DeMoN} and UndeepVO \cite{undeepVo} are also two deep learning based approaches that are able to do the pair task of motion estimation through an $[R | t]$ matrix and absolute scale depth estimation from a stream of images.
Our approach uses stereo for depth estimation and to unproject the semantic 2D image into 3D space for bird's eye view representation, but could use mono-depth techniques for this.
\subsection{Robust Coarse-to-Fine Localization}
Due to the compute-intensiveness of directly predicting pose from an SfM model, many approaches perform coarse-to-fine localization, first narrowing down to candidate locations from a global search and only then matching local image-based features. While most approaches in the past have used local features such as SIFT \cite{SIFT}, newer state-of-the-art approaches like HF-Net \cite{sarlin2019coarse} use more robust local features such as the deep-learning based SuperPoint \cite{superpoint}, which consistently outperforms traditional local features. Specifically, HF-Net is able to leverage these learned descriptors to perform robust localization across a large set of changes in appearance.
Another coarse-to-fine approach uses a topo-metric map \cite{roussel2020localization}, where one first uses a traditional SLAM approach to create a map along with paired sensor data including monocular or stereo images. This map is divided into discrete topological nodes, each defined by a specific pose and created at constant translational and angular separations. Models first localize to a specific node in the map and then find their positions relative to that node using traditional techniques like Perspective-n-Point (PnP).
While both coarse-to-fine approaches provide reasonable pose estimates, they lack the ability to perform well in drastically varying weather and lighting conditions such as rain, snow, sunset, fog, etc.
Framing our problem in a coarse-to-fine setting, however, is advantageous to us, as we use the S-BEV representation, which has the potential to account for varying lighting and weather conditions and is then used to create embeddings that serve both the coarse and fine localization.
\subsection{Semantic Localization}
Semantic localization describes any approach that uses semantic labels (car, vegetation, building, sign, etc.) to aid localization within a scene. These approaches tend to be mostly weather-invariant, since semantic labels don't change across seasons.
Schönberger et. al \cite{schonberger2018semantic} proposes an approach based on a full 3D semantic reconstruction of the world, where a generative model is used to create a full scene from a partial understanding of it. Since the representation is 3D, it accounts for viewpoint changes where prior 2D approaches fail. Traditional descriptors such as SIFT \cite{SIFT} or deep-learning based descriptors such as SuperPoint \cite{superpoint} aren't used, given that the inputs aren't traditional RGB images. The latent representation created by this generative model is then used for localization.
Stenborg et al. \cite{stenborg2018semantic} propose a slightly different approach which is able to utilize SIFT-based features. It exploits the fact that SIFT works on images spanning a short range of time, provided the lighting or weather isn't drastically different. Using these features, points can be consistently projected from a 3D map into a 2D semantic image and then evaluated based on how many points of a certain semantic class lie within the corresponding semantic label. A particle filter is then weighted based on these evaluations to localize.
Garg et al. \cite{Garg_2020} used \textit{Delta Descriptors}, a change-based representation that is robust to appearance variation during a revisit. Furthermore, they achieve state-of-the-art performance by combining these representations with sequence-based matching. Other image signatures, such as the horizon line \cite{ho2014localization}, have been used to relocalize a vehicle on freeways by first detecting a 1-D horizon line signature and then using a particle filter which in turn uses lane marking patterns and their frequency to build a motion model for robust localization.
While these approaches account for changes in weather and lighting, they do not account for significant structural changes that can occur across seasons, such as mounds of snow or changes in vegetation. Using a bird's-eye view map is favorable in this sense.
\section{METHOD}
\label{methodMarker}
Our method of localization relies on encoding a database of various locations from a map into a deep learning model. This is achieved by first training an autoencoder to predict generic scene characteristics from a single S-BEV view of the scene. Once this has been done, we generate latent embeddings using just the encoder and feed them both to a nearest neighbor classifier and to a separate fully connected network for relative pose prediction. A high-level architecture is detailed in Figure \ref{fig:highLevelArch}.
\subsection{Topo-metric Map Construction}
\label{eqMarker}
We first associate global camera poses with each of our stereo camera pairs. For indoor environments this can be done with an off-the-shelf method such as ORB-SLAM2 to extract the trajectory. For larger-scale environments, we can use IMU-GNSS EKF-based approaches to extract camera poses. Following this, we mark topological nodes via the global camera poses. Nodes are marked on a map-by-map basis, where we select translational and angular thresholds as described in \cite{roussel2020localization}.
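As an illustration, the node-marking rule above can be sketched as follows. The pose spacing and the thresholds in the example are illustrative values, not the dataset-specific ones chosen per map:

```python
import numpy as np

def mark_nodes(poses, d_thresh=20.0, a_thresh=np.deg2rad(30)):
    """Mark topological nodes along a trajectory of (x, y, theta) poses.

    A new node is created whenever the vehicle has moved farther than
    d_thresh metres or turned more than a_thresh radians since the last
    node, following the topo-metric map construction of Roussel et al.
    """
    nodes = [0]  # the first pose is always a node
    for i, (x, y, th) in enumerate(poses[1:], start=1):
        nx, ny, nth = poses[nodes[-1]]
        dist = np.hypot(x - nx, y - ny)
        dang = abs((th - nth + np.pi) % (2 * np.pi) - np.pi)  # wrapped angle
        if dist >= d_thresh or dang >= a_thresh:
            nodes.append(i)
    return nodes

# Straight drive sampled every 5 m with a 20 m threshold: nodes 0, 4, 8.
poses = [(5.0 * i, 0.0, 0.0) for i in range(12)]
print(mark_nodes(poses))  # -> [0, 4, 8]
```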
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.98\linewidth]{images/high_level_arch.png}
\end{center}
\vspace*{-0.35cm}
\caption{S-BEV Localization Architecture}
\label{fig:highLevelArch}
\vspace*{-0.45cm}
\end{figure}
\subsection{Semantic Bird's Eye View Map Construction}
To create a Semantic Bird's Eye View map (S-BEV), we require semantic segmentation maps as well as disparity maps for our stereo pair. We use StereoSGBM with a Weighted Least Squares (WLS) filter for post-filtering to create a smooth disparity map from a pair of rectified stereo images, which we convert to real-world distances via the camera intrinsics.
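The disparity-to-metric conversion follows the standard stereo relation $Z = f_x b / d$; a minimal sketch, with illustrative focal length and baseline rather than our cameras' calibration:

```python
import numpy as np

def disparity_to_depth(disparity, fx=721.5, baseline=0.54):
    """Convert a disparity map (pixels) to metric depth via
    Z = fx * baseline / d. fx and baseline here are illustrative."""
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > 0          # zero disparity -> no stereo match
    depth[valid] = fx * baseline / disparity[valid]
    return depth

disp = np.array([[38.96, 0.0], [9.74, 77.92]])
print(disparity_to_depth(disp))    # ~10 m, unmatched, ~40 m, ~5 m
```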
We run an off-the-shelf semantic segmentation network on the left image of our stereo pair. We use ResNet50dilated + PPM-deepsup as implemented with 150 classes and trained on the ADE20K dataset by MIT CSAIL \cite{zhou2017scene, zhou2018semantic}, consisting of both indoor and outdoor classes. However, we narrow this down to 30 classes, discarding those that only appear indoors. While the segmentation network could be fine-tuned to fewer classes, we show that any off-the-shelf network provides good results. We also choose to ignore any classes associated with dynamic objects like ``cars'' or regions that cause inconsistent depth like ``sky''.
Using the segmentation output along with the depth map, we project our image into a 3D point cloud based on the camera intrinsics. We then rotate this point cloud to view it from above (Figure \ref{fig:overview}). We project multiple stereo pairs into the same map, compensating for motion via camera extrinsics to encourage greater stability and less change across S-BEV map frames. Specifically, we plot the current plus the last four stereo pairs into one motion-compensated S-BEV map. Our S-BEV is a $352\times352$ image which is then used throughout the rest of our pipeline.
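The rasterization into the top-down grid can be sketched as below. The metric extents of the grid are assumed values for illustration, not the ones used in our pipeline:

```python
import numpy as np

def sbev_from_depth(depth, classes, fx, cx, size=352,
                    x_range=20.0, z_range=40.0):
    """Rasterize per-pixel (depth, class) into a size x size top-down grid.
    The lateral extent [-x_range, x_range] m and forward extent [0, z_range] m
    are illustrative choices, not the paper's."""
    v, u = np.indices(depth.shape)
    Z = depth                              # forward distance
    X = (u - cx) * Z / fx                  # lateral offset (height is discarded)
    col = ((X + x_range) / (2 * x_range) * (size - 1)).astype(int)
    row = ((z_range - Z) / z_range * (size - 1)).astype(int)
    bev = np.zeros((size, size), dtype=np.uint8)
    ok = (col >= 0) & (col < size) & (row >= 0) & (row < size) & (Z > 0)
    bev[row[ok], col[ok]] = classes[ok]
    return bev

# A single pixel at the principal point, 20 m away, lands mid-grid.
bev = sbev_from_depth(np.full((1, 1), 20.0),
                      np.array([[7]], dtype=np.uint8), 100.0, 0.0)
print(bev[175, 175])  # -> 7
```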
As seen in Figure \ref{fig:sbevMultiWeather}, our S-BEV map stays relatively constant across the six varying weather conditions (in the vKITTI dataset) shown. The main discrepancy is in foggy weather, where trees toward the back of the scene are misclassified. However, because the class IDs output by our segmentation network are arranged such that similar classes are numerically close to each other, this misclassification won't affect the rest of our pipeline. Moreover, the structure of the scene stays constant throughout all the images, which is one of the main reasons for choosing the S-BEV for our pipeline. The remaining discrepancy can also be mitigated through fine-tuned segmentation.
This highlights the fact that regardless of the weather and lighting condition, our input image will be processed in the exact same way through the rest of our pipeline due to the conversion into S-BEV.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.75\linewidth]{images/sbev_multiple_weather.png}
\end{center}
\vspace*{-0.35cm}
\caption{Sample S-BEV constructions on six different weather conditions on Scene 20 of the vKITTI dataset. Despite the change in weather and lighting conditions, the structure in the S-BEV signature remains similar, illustrating its potential for localization across weather and lighting conditions.}
\label{fig:sbevMultiWeather}
\vspace*{-0.45cm}
\end{figure}
\subsection{Topo-metric Node Localization}
Using the S-BEV representation and an autoencoder paired with a nearest neighbors classifier, we perform a coarse localization to topological nodes.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.98\linewidth]{images/ae_arch.png}
\end{center}
\vspace*{-0.35cm}
\caption{Autoencoder architecture. Encoder portion in gray box is used to generate latent embeddings of S-BEV map.}
\label{fig:aeArch}
\vspace*{-0.45cm}
\end{figure*}
We first train an autoencoder using the S-BEVs we've generated for all the training images. The architecture (see Figure \ref{fig:aeArch}) used consists of convolution filters of varying kernel sizes paired with skip connections between a few layers. However, rather than reconstructing the input BEV image, our autoencoder reconstructs a BEV image representing the pixel-wise average of all S-BEVs for a specific node. The pixel-wise average is meant to represent generic characteristics of an entire node scene while ignoring things such as dynamic obstacles which don't play a role in understanding our location within a scene. This encourages our network to learn its translational and rotational fit in relation to the entire node region rather than trying to reconstruct holes possibly generated by long-term artifacts such as snow mounds or dynamic objects such as cars and trucks. The network is trained with MSE loss.
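The reconstruction target described above (not the network training itself) reduces to a per-node pixel-wise mean, which can be sketched as:

```python
import numpy as np

def node_average_targets(sbevs, node_ids):
    """Autoencoder target for each S-BEV: the pixel-wise average of all
    S-BEVs that belong to the same topological node."""
    sbevs = np.asarray(sbevs, dtype=float)
    node_ids = np.asarray(node_ids)
    targets = np.empty_like(sbevs)
    for n in np.unique(node_ids):
        targets[node_ids == n] = sbevs[node_ids == n].mean(axis=0)
    return targets

# Two 1x1 "maps" of node 0 average together; node 1 keeps its single map.
s = np.array([[[2.0]], [[4.0]], [[8.0]]])
print(node_average_targets(s, [0, 0, 1]).ravel())  # -> [3. 3. 8.]
```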
We then detach the decoder part of our autoencoder and pass a candidate S-BEV map through the encoder to generate a latent embedding. We then take our latent embedding and perform a nearest neighbors classification to other sample latent embeddings from topological nodes across the entire map. The node of the numerically closest embedding based on Euclidean distance is our node classification. Our ground truth data is generated by finding the node with the minimal topo-metric distance to our sample.
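A minimal sketch of this coarse classification step, with toy 2-D embeddings standing in for the encoder's latent vectors:

```python
import numpy as np

def classify_node(query_emb, db_embs, db_nodes):
    """Coarse localization: return the node label of the database embedding
    closest to the query embedding in Euclidean distance."""
    d = np.linalg.norm(db_embs - query_emb, axis=1)
    return int(db_nodes[np.argmin(d)])

# Toy 2-D embeddings; real embeddings come from the trained encoder.
db = np.array([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
nodes = np.array([0, 0, 1])
print(classify_node(np.array([4.2, 4.8]), db, nodes))  # -> 1
```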
\subsection{Fine Localization using 3-DoF Regressor}
\label{relPoseEq}
We perform fine localization of the relative 3-DoF pose $(x, y, \theta)$ to the topological node using a 3-layer fully connected network, with the input being a concatenation of the one-hot encoded node classification and the flattened latent embedding. We also add multiple layers of dropout to prevent overfitting.
The ground truth data for this regressor is the pose relative to the topological node. The known variables are the global pose of the closest node and the global camera pose. The relative $x$, $y$, and $\theta$ are calculated as $Pose_{rel} = Pose_{cn}^{-1} \times Pose_{cf}$,
where $Pose_{rel}$ is the pose of the current frame relative to the closest topological node, $Pose_{cn}$ is the global pose of the closest node, and $Pose_{cf}$ is the global pose of the current frame. Note that several coordinate frames exist within the scene, including global $x$ and $y$ as well as $x$ and $y$ relative to the ground truth pose at a specific timestamp; neither of these is the same as the pose relative to the closest topological node, which lies an arbitrary distance in front of or behind the candidate pose.
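With poses written as homogeneous SE(2) matrices, the relative-pose relation and its inverse (used for the final localization output) can be sketched as:

```python
import numpy as np

def se2(x, y, th):
    """Homogeneous SE(2) matrix for a planar pose (x, y, theta)."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

pose_cn = se2(10.0, 2.0, np.pi / 2)   # closest node, global frame
pose_cf = se2(10.0, 7.0, np.pi / 2)   # current frame, global frame

# Ground truth for the regressor: Pose_rel = Pose_cn^{-1} x Pose_cf
pose_rel = np.linalg.inv(pose_cn) @ pose_cf
# Inverting the relation recovers the global pose: Pose_cf = Pose_cn x Pose_rel
assert np.allclose(pose_cn @ pose_rel, pose_cf)
print(pose_rel[:2, 2])  # ~[5, 0]: the frame lies 5 m ahead of the node heading
```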
\subsection{Final Localization Output}
After we obtain the relative 3-DoF pose as well as the topological node classification, we can combine them by inverting the operation described in "\nameref{relPoseEq}", with the same variable names and labels: $Pose_{cf} = Pose_{cn} \times Pose_{rel}$.
\subsection{Localization Filter}
Once we have the final localization output from the 3-DoF regressor in a global frame, we use a Kalman Filter (KF) framework for fusing that with the GPS/IMU output. The state vector is represented by \begin{math} \mu = [x, y, \theta] \end{math}. The prediction step involves predicting the motion of the vehicle between two iterations of the filter. We obtain velocities from the Applanix POS LV\cite{applanix} and apply a constant velocity motion model to estimate the pose of the vehicle (also called predicted state).
The update step is where the state of the filter is updated or corrected using measurements obtained from the fine 3-DoF regressor. These measurements correct the vehicle pose in \(x\), \(y\) and \(\theta\). Mathematically, this fusion is performed via the iterative localization updates
\begin{alignat}{2} \label{EKF}
\text{Predict:} & \quad \bar{\mu}_k = F_{k-1} \mu_{k-1} \\
& \quad \bar{\Sigma}_k = F_{k-1} \Sigma_{k-1} F_{k-1}^T + Q_{k-1} \nonumber \\
\nonumber \text{Update:} & \quad K_k = \bar{\Sigma}_k H^T_k ( H_k \bar{\Sigma}_k H^T_k + R_k)^{-1} \\
\nonumber & \quad \mu_k = \bar{\mu}_k + K_k(z_k - h_k( \bar{\mu}_k )) \\
\nonumber & \Sigma_k = (I - K_k H_k) \bar{\Sigma}_k (I - K_k H_k)^T + K_k R_k K_k^T
\end{alignat}
where $F_k$ represents the motion model of the vehicle and $Q_k$ is the corresponding uncertainty, $z_k$ is the output of the fine 3-DoF regressor and $R_k$ is the corresponding uncertainty estimated as a fit to the covariance of the 3-DoF regressor algorithm. We use GPS to initialize the filter in the linearized global frame which results in high uncertainty for the first few measurements.
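A toy numerical sketch of one predict/update cycle of the filter equations above, with an identity motion model, a linear measurement model, and illustrative noise covariances (not the fitted values used in our system):

```python
import numpy as np

def kf_step(mu, Sigma, F, Q, z, H, R):
    """One predict/update cycle of the localization filter, using the
    Joseph-form covariance update as in the equations above."""
    mu_bar = F @ mu                          # predict
    Sig_bar = F @ Sigma @ F.T + Q
    K = Sig_bar @ H.T @ np.linalg.inv(H @ Sig_bar @ H.T + R)
    mu_new = mu_bar + K @ (z - H @ mu_bar)   # update
    I = np.eye(len(mu))
    Sig_new = (I - K @ H) @ Sig_bar @ (I - K @ H).T + K @ R @ K.T
    return mu_new, Sig_new

# Toy run: 3-DoF regressor measurement of [x, y, theta].
mu, Sigma = np.zeros(3), np.eye(3) * 100.0   # high initial uncertainty
F, Q, H, R = np.eye(3), np.eye(3) * 0.1, np.eye(3), np.eye(3) * 4.0
mu, Sigma = kf_step(mu, Sigma, F, Q, np.array([2.0, 1.0, 0.1]), H, R)
print(mu)               # pulled strongly toward the measurement
print(np.trace(Sigma))  # uncertainty shrinks after the update
```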
\section{EXPERIMENTS}
We use two datasets for our experiments: vKITTI2 \cite{gaidon2016virtual, cabon2020vkitti2} and the Ford AV dataset \cite{agarwal2020ford}. vKITTI2 is a synthetic dataset that contains virtual clones of 5 sequences from the original KITTI dataset. We use scene 20, which, at 1.2 km in length, is the longest sequence, to test the performance of our S-BEV signature across different weather and lighting conditions. The Ford AV dataset is a real-world dataset collected from 3 vehicles that drive average route lengths of 66 km through freeway and suburban Michigan over a period of 2 years. We use log 1 from vehicles V2 and V3, collected on 2017-08-04. This is primarily a freeway scene, with occasional overpasses, under cloudy conditions. We use the provided front-left and front-right camera images for stereo depth generation, along with the provided global pose information to create the topological nodes. Topological node thresholds are 20 m and $30^{\circ}$ for vKITTI2, and 40 m and $30^{\circ}$ for the Ford AV dataset.
We first evaluate the car's ability to localize itself within the same run of the same path taken during data collection (rows 1-3 of Table \ref{tab:stat_weather}). Images are first categorized as belonging to a topological node (based on real-world distance to the closest topo-node). Subsequently, for each topo-node, images are split into train and test using a random 80-20 split. Thus the training images get slightly different viewpoints relative to the test images, even though they are collected from the same run. The training images are then used to generate the depth maps, the segmentation images, and finally the S-BEV images. The S-BEV images are used to train the auto-encoder, the nearest neighbour classifier and the 3-DoF models.
Subsequently, we consider the car's (V3's) ability to localize itself within a map made by another vehicle (V2) while travelling along the same route (row 4 of Table \ref{tab:stat_weather}). Training images are now from a completely different perspective than the test images, as the cars travel along different lanes (Figure \ref{fig:SBEVdiff}).
We evaluate localization accuracy with 3 metrics. The first is node accuracy, which is determined by whether the coarse localizer (nearest neighbour classifier) localizes to the correct closest node. We then present $(x, y, \theta)$ accuracies, which describe the mean absolute error (MAE) of our fine 3-DoF localizer with respect to its closest node. Finally, we append an EKF to the pipeline that combines the S-BEV localization output with a GPS/IMU, after which we have filtered $(x, y, \theta)$ accuracies. The coordinate system of our ego vehicle is aligned such that X is in the direction of travel, Y is to the left of this, and Z is pointing towards the sky.
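These metrics can be computed directly; a small sketch with made-up predictions (not our experimental values):

```python
import numpy as np

def localization_metrics(pred_nodes, gt_nodes, pred_pose, gt_pose):
    """Node accuracy of the coarse localizer plus per-axis MAE of the
    node-relative (x, y, theta) from the fine localizer."""
    acc = float(np.mean(np.asarray(pred_nodes) == np.asarray(gt_nodes)))
    mae = np.mean(np.abs(np.asarray(pred_pose) - np.asarray(gt_pose)), axis=0)
    return acc, mae

acc, (ex, ey, eth) = localization_metrics(
    [3, 3, 4, 5], [3, 3, 4, 4],                 # one wrong node out of four
    [[1.0, 0.5, 0.1], [2.0, 0.0, 0.0]],
    [[1.5, 0.5, 0.1], [1.0, 0.5, 0.1]])
print(acc, ex, ey, eth)  # -> 0.75 0.75 0.25 0.05
```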
We have performed an ablation study (Table \ref{tab:ablation}) by evaluating the efficacy of some of the techniques we used in training the Auto-encoder. $BASE$ represents the full approach described in the Method section. $AVG$ describes an ablation when the AE is trained to reconstruct the exact S-BEV input and not the smoothed/averaged version of the S-BEV over all the images in a topo-node. Finally, $AUG$ removes the angular or off-by-one data augmentations described earlier.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.98\linewidth]{images/sbev_no_match.png}
\end{center}
\vspace*{-0.35cm}
\caption{S-BEV from the same topological node from V2 and V3. These vehicles are travelling in different lanes along the same route. This leads to considerably different S-BEVs along different lanes at the same location on the freeway, leading to a decrease in accuracy of our current method.}
\label{fig:SBEVdiff}
\vspace*{-0.5cm}
\end{figure}
Table \ref{tab:stat_weather} demonstrates the results of using the full $BASE$ model on vehicles V2 \& V3 (Log 1) of the Ford AV dataset. Along the columns, we have the performance of the nearest neighbour based coarse localizer ($acc_{\textbf{node}}$) and then the performance of the fine-grained 3-DoF localizer to the node.
The nearest neighbour localization accuracy results show the percentage of test data that report the correct closest topological node. The 3-DoF localization errors are the $x$, $y$ and $\theta$ errors relative to the node. In this table, we assume perfect nodal localization accuracy for the 3-DoF localizer, but the following EKF results (Figure \ref{fig:LocError}) show the accuracy of the full pipeline, including nodal accuracy. Along the rows, we have results from the vKITTI and Ford AV datasets. V2 and V3 represent in-route localization errors for the same vehicle, tested on left-out parts of the dataset. V3$\xrightarrow[]{}$V2 shows results for a model trained on V3 and tested on V2. Figure \ref{fig:SBEVdiff} shows the difficulties in matching a V2 to a V3 trajectory. The vehicles travel in different lanes, resulting in different views of the grass banks on either side of the highway. At the same point in the route, V2 travels in the right-most lane, which results in a big grass patch to the right in the S-BEV signature, while V3 travels in the left-most lane, resulting in the grass patch shifting to the left. As a result, our simple nearest neighbour localizer does not work at all for this cross-vehicle case, but the 3-DoF localizer (assuming perfect topo-node accuracy) still reports reasonable results.
\begin{figure}[h]
\centering
\captionsetup{justification=centering}
\subfloat{\includegraphics[width=0.95\linewidth, trim=0 0 0 0, clip]{images/v3Plots.png}}
\caption{Localization error comparisons for V2}
\label{fig:LocError}
\vspace*{-0.5cm}
\end{figure}
Next, we augment the pipeline with the EKF described earlier, and report before- and after-filtering accuracy. These pre-EKF numbers include the error from the topo-node localization (and are hence different from Table \ref{tab:stat_weather}). Figure \ref{fig:LocError} shows the error distribution in the longitudinal (X) and lateral (Y) directions for V2. The 3-DoF regressor accuracy (\begin{math} MAE_{lat} = 2.68m, MAE_{long} = 1.7m \end{math}) improves when fused with GPS/IMU using a Kalman Filter (\begin{math} MAE_{lat} = 2.17m, MAE_{long} = 1.43m \end{math}). We see similar behaviour for V3, where the 3-DoF regressor accuracy (\begin{math} MAE_{lat} = 3.4m, MAE_{long} = 3.55m \end{math}) is improved by filtering (\begin{math} MAE_{lat} = 2.4m, MAE_{long} = 2.55m \end{math}).
We also perform a cross-weather localization study using the virtual vKITTI dataset. Table \ref{tab:weather_variation} illustrates the effect of different weather conditions along the same route on the coarse and fine localization accuracies. In the virtual dataset at least, changing visibility and lighting does not affect the localization accuracy, which is constant across weather changes.
The current version of the system uses a relatively simple nearest neighbour classifier for assigning an S-BEV embedding to a topo-node. This is not effective for perspective changes caused by travelling in different lanes (Figure \ref{fig:SBEVdiff}). We hope to improve the coarse classifier by training it with a contrastive loss, as in a Siamese network, which will force the embeddings of S-BEVs from the same topo-node to be close to each other and the embeddings from different topo-nodes to be far from each other. We could also train a network to generate a cleaner S-BEV from the forward perspective camera images \cite{ng2020bev}. We expect these approaches to be more robust to lane changes and real-world lighting \& weather changes and leave this as future work.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.98\linewidth]{images/trajectory.png}
\end{center}
\vspace*{-0.35cm}
\caption{Coarse and fine localization results for the 22 km trajectory in Log 1, V2 of the Ford AV dataset.}
\label{fig:globalPath}
\vspace*{-0.7cm}
\end{figure}
\begin{table}[!htb]
\begin{center}
\caption{\textbf{Ablation Tests for Node Accuracy}}
\begin{tabular}{llll}
\toprule
\textbf{Dataset} & BASE & AVG & AUG \\
\hline
vKITTI 2 & 99.5 & 96.6 & 96.5 \\
Ford AV V2 & 98.7 & 90.1 & 88.4 \\
\hline
\label{tab:ablation}
\end{tabular}
\end{center}
\vspace*{-1.05cm}
\end{table}
\begin{table}[!htb]
\begin{center}
\caption{\textbf{Static Weather Results}}
\begin{tabular}{lllll}
\toprule
\textbf{Dataset}&$acc_{\textbf{node}}$&$err_{\textbf{x}}$ (m) &$err_{\textbf{y}}$ (m) &$err_{\boldsymbol{\theta}}$ (deg) \\
\hline
vKITTI 2 & 99.5 & 0.27 & 0.04 & 0.2 \\
Ford AV V2 & 98.7 & 1.50 & 0.795 & 1.2 \\
Ford AV V3 & 94.7 & 3.79 & 1.953 & 1.4 \\
Ford AV V3$\xrightarrow[]{}$V2 & - & 4.82 & 2.46 & 1.59 \\
\hline
\label{tab:stat_weather}
\end{tabular}
\end{center}
\vspace*{-1.05cm}
\end{table}
\begin{table}[!htb]
\begin{center}
\caption{\textbf{Weather Variation Results}}
\begin{tabular}{llllll}
\toprule
\textbf{Dataset}&\textbf{Weather}&$acc_{\textbf{node}}$& \shortstack{$err_{\textbf{x}}$ \\ (m)} & \shortstack{$err_{\textbf{y}}$ \\ (m)} & \shortstack{$err_{\boldsymbol{\theta}}$ \\ (deg)} \\
\hline
vKITTI 2 & Regular & 99.5 & 0.27 & 0.04 & 0.2 \\
vKITTI 2 & Sunny & 99.4 & 0.22 & 0.06 & 0.2 \\
vKITTI 2 & Foggy & 99.1 & 0.23 & 0.05 & 0.4 \\
vKITTI 2 & Overcast & 99.3 & 0.30 & 0.07 & 0.5 \\
vKITTI 2 & Rainy & 99.5 & 0.22 & 0.04 & 0.3 \\
vKITTI 2 & Sunset & 99.0 & 0.27 & 0.05 & 0.2 \\
\hline
\label{tab:weather_variation}
\end{tabular}
\end{center}
\vspace*{-1.05cm}
\end{table}
\section{CONCLUSIONS}
We describe a Semantic Bird's Eye View (S-BEV) representation generated from a forward-facing stereo camera that allows a vehicle to be localized along a pre-mapped route. We conduct experiments on the vKITTI2 virtual dataset, achieving MAE of 0.27m \& 0.04m in the longitudinal and lateral directions, and show that the S-BEV representation is robust to weather and lighting changes. We also demonstrate MAE of \textbf{2.17m} on a 22 km long highway route in the Ford AV dataset. Through these results, we can see that the S-BEV representation has good potential for vision-based localization in dynamically changing environments.
\FloatBarrier
\addtolength{\textheight}{-4cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
Among many existing \ac{MIMO} schemes, \ac{SM} has attracted a lot
of research interest in recent years. In \ac{SM}, two data streams
are transmitted \textemdash{} one in the conventional \ac{IQ} domain
(by employing e.g. PSK or QAM modulation), and the other in the so-called
spatial domain by selecting and activating one from all available
transmit antenna \cite{Mesleh2008,DiRenzo2014}. A straightforward
extension of SM is to allow activation of more than one transmit antennas
per time slot and possibly also to transmit more than one \ac{IQ}
stream simultaneously \cite{Younis2010}. The extended scheme is called
\ac{GSM}.
Recently, a scheme which is operationally dual to SM was developed,
called \ac{RSM} \cite{Yang2011,Perovic2015a}. The main difference
between SM and RSM comes from the signal transmission in the spatial
domain, where RSM transmits data by selecting one out of all available
\emph{receive} antennas. Accordingly, this antenna is used for the
reception of the transmitted \ac{IQ} stream. Similarly, the concept
of \ac{RSM} may be extended by selecting more than one receive antenna
per time slot for the reception of multiple \ac{IQ} streams
\cite{Zhang2013,Zhang2015}. This scheme is called \ac{GRSM}.
Another interesting extension of \ac{RSM} is that of \ac{DLT} \cite{Masouros2016}.
In contrast to conventional \ac{RSM}/\ac{GRSM} which utilizes a
subset of the receive antennas, \ac{DLT} uses all available receive
antennas for the reception of the transmitted \ac{IQ} streams. Consequently,
\ac{DLT} requires a new approach to transmit information in the spatial
domain and thus DLT applies two power levels to distinguish the
\textquotedblleft selected\textquotedblright{} from the ``non-selected''
receive antennas in the spatial domain \cite{Masouros2016}. In this
way, the spatial symbols are encoded onto the signal power levels
at the receive antennas.
Although the basic theory for \ac{SM} and \ac{RSM} was initially
developed for single-user communication systems, an increasing number
of research works consider their application in multiuser scenarios.
In \cite{Narasimhan2015}, the authors considered a multiuser uplink
transmission scheme with SM implemented at each user. To enable SM
in multiuser downlink communications, a closed-form precoding solution
was derived in \cite{Narayanan2014}. In \cite{Humadi2014}, an implementation
of \ac{RSM}/\ac{GRSM} in massive \ac{MIMO} systems was investigated.
A more detailed analysis of \ac{GRSM} for multiuser downlink communications
was presented in \cite{Stavridis2016}.
The papers listed above consider multiuser communication schemes that
are based on the \ac{SM} or the \ac{RSM} operation principle. To
the best of the authors' knowledge, the only multiuser scheme that
simultaneously supports the \ac{SM} and \ac{RSM} operation principle
is presented in \cite{Pizzio2016}, and is called \ac{MU-TR-GSM}.
In each time slot, the base station in \ac{MU-TR-GSM} selects a subset
of the transmit antennas to be active. From those antennas, the base
station transmits IQ streams to the users. Also, this antenna activation
enables \ac{MU-TR-GSM} to send data in the transmit spatial domain.
Each user receives the transmitted IQ streams by a subset of receive
antennas, whose selection enables \ac{MU-TR-GSM} to send data in
the receive spatial domain. Therefore, \ac{MU-TR-GSM} manages to
combine the principles of operation of \ac{SM} and \ac{RSM}. Despite
the advantage of combining the SM and the \ac{RSM} operation principles,
\ac{MU-TR-GSM} requires a high computational complexity (see Section
\prettyref{subsec:Com-Compl}) which presents a significant barrier
to its practical implementation.
Against this background, the contributions of this paper are listed
as follows:
\begin{enumerate}
\item We propose a novel multiuser communication scheme, called \ac{DL-TR-GSM},
that simultaneously supports the \ac{SM} and \ac{RSM} operation
principles. However, in contrast to \ac{MU-TR-GSM}, which
applies the conventional \ac{RSM} transmission, \ac{DL-TR-GSM} is
based on \ac{DLT}.
\item We show, through a detailed computational complexity analysis and
through simulations, that \ac{DL-TR-GSM} enables a considerable computational
complexity reduction, at the cost of a minor degradation in the \ac{BER}
performance.
\item We also provide a hardware complexity analysis which demonstrates
that \ac{DL-TR-GSM} requires a lower number of RF chains at the receiver
compared to \ac{MU-TR-GSM}. As a result, \ac{DL-TR-GSM} provides
a large reduction of the receive power consumption at each user.
\item We introduce a low-complexity detector for \ac{DL-TR-GSM}, referred
to as the separate detector. Simulation results show that this detector
provides a very similar \ac{BER} to that provided by the optimal
\ac{ML} detector.
\end{enumerate}
\section{System Model}
\subsection{\ac{DL-TR-GSM}}
The block diagram for the considered \ac{DL-TR-GSM} scheme is shown
in \prettyref{fig:Diagram-Multi}. It depicts a downlink communication
scenario between a base station equipped with $N_{t}$ transmit antennas
and $K$ users equipped with $N_{r}$ receive antennas per user. Accordingly,
the channel matrix of the \ac{DL-TR-GSM} scheme can be expressed
as
\[
\mathbf{H}=\left[\mathbf{H}^{(1)^{\mathrm{T}}}\;\mathbf{H}^{(2)^{\mathrm{T}}}\;\cdots\;\mathbf{H}^{(K)^{\mathrm{T}}}\right]^{\mathrm{T}},
\]
where $\mathbf{\mathbf{H}}^{(k)}\in\mathbb{C}^{N_{r}\times N_{t}}$
is the channel matrix between the base station and the $k$-th user.
In each time slot, the base station activates a subset of $N_{\mathrm{tact}}$
($KN_{r}\le N_{\mathrm{tact}}<N_{t}$) transmit antennas, which form
1 out of $N_{\mathrm{tcomb}}=2^{\left\lfloor \log_{2}(N_{t}/N_{\mathrm{tact}})\right\rfloor }$
transmit antenna combinations. For ease of comparison of our later
results with those in \cite{Pizzio2016}, we assume that each transmit
antenna can belong to only one transmit antenna combination; thus,
the data rate in the transmit spatial domain is $\left\lfloor \log_{2}\frac{N_{t}}{N_{\mathrm{tact}}}\right\rfloor $
instead of $\left\lfloor \log_{2}{N_{t} \choose N_{\mathrm{tact}}}\right\rfloor $.
For the \emph{s}-th combination of activated transmit antennas ($s=1,\dots,N_{\mathrm{tcomb}}$),
the resulting channel matrix $\mathbf{H}_{s}$ consists of the $N_{\mathrm{tact}}$
columns of $\mathbf{H}$ that correspond to the active transmit antennas.
Implementing \ac{SVD} on any constituent matrix $\mathbf{H}_{s}^{(k)}$
of $\mathbf{H}_{s}$, we obtain
\begin{equation}
\mathbf{H}_{s}^{(k)}=\mathbf{U}_{s}^{(k)}[\boldsymbol{\Lambda}_{s}^{(k)}\;\mathbf{0}]\left[\begin{array}{c}
\mathbf{V}_{1,s}^{(k)^{\mathrm{H}}}\\
\mathbf{V}_{2,s}^{(k)^{\mathrm{H}}}
\end{array}\right]=\mathbf{U}_{s}^{(k)}\boldsymbol{\Lambda}_{s}^{(k)}\mathbf{V}_{1,s}^{(k)^{\mathrm{H}}},\label{eq:SVD}
\end{equation}
where $\mathbf{U}_{s}^{(k)}\in\mathbb{C}^{N_{r}\times N_{r}}$ is
a unitary matrix, $\boldsymbol{\Lambda}_{s}^{(k)}\in\mathbb{C}^{N_{r}\times N_{r}}$
is a diagonal matrix of singular values and $\mathbf{V}_{1,s}^{(k)}\in\mathbb{C}^{N_{\mathrm{tact}}\times N_{r}}$.
Now, the overall receive signal vector of all $K$ users, $\overline{\mathbf{y}}=[\overline{\mathbf{y}}^{(1)^{\mathrm{T}}}\;\overline{\mathbf{y}}^{(2)^{\mathrm{T}}}\;\cdots\;\overline{\mathbf{y}}^{(K)^{\mathrm{T}}}]^{\mathrm{T}}\in\mathbb{C}^{KN_{r}\times1}$, can be written as \cite{Liu2009a}
\begin{equation}
\overline{\mathbf{y}}=\mathbf{H}_{s}\overline{\mathbf{x}}+\overline{\mathbf{n}}=\overline{\mathbf{U}}_{s}\overline{\mathbf{\Lambda}}_{s}\overline{\mathbf{V}}_{1,s}^{\mathrm{H}}\overline{\mathbf{x}}+\overline{\mathbf{n}},
\end{equation}
where, according to \prettyref{eq:SVD}, we introduced the definitions
\[
\begin{array}{c}
\overline{\mathbf{U}}_{s}=\mathrm{diag}(\mathbf{U}_{s}^{(1)}\;\mathbf{U}_{s}^{(2)}\;\cdots\;\mathbf{U}_{s}^{(K)})\\
\overline{\mathbf{\Lambda}}_{s}=\mathrm{diag}(\mathbf{\mathbf{\Lambda}}_{s}^{(1)}\;\mathbf{\mathbf{\Lambda}}_{s}^{(2)}\;\cdots\;\mathbf{\mathbf{\Lambda}}_{s}^{(K)})\\
\overline{\mathbf{V}}_{1,s}=[\mathbf{V}_{1,s}^{(1)}\;\mathbf{V}_{1,s}^{(2)}\;\cdots\;\mathbf{V}_{1,s}^{(K)}].
\end{array}
\]
Moreover, $\overline{\mathbf{x}}\in\mathbb{C}^{N_{\mathrm{tact}}\times1}$
is the transmit signal vector of the base station and $\overline{\mathbf{n}}$
is the noise vector with $KN_{r}$ \ac{iid} elements that are distributed
according to $\mathcal{CN}(0,N_{0})$, where $N_{0}$ denotes the
(one-sided) power spectral density of the additive white Gaussian
noise (AWGN).
To enable downlink signal transmission without inter-channel and inter-user
interference, a precoder is required at the transmitter. Hence, the
transmit \ac{IQ} symbol vector $\tilde{\mathbf{x}}$, which contains
$KN_{r}$ \ac{IQ} symbols of the \emph{M}-PSK modulation alphabet,
is precoded before its transmission, yielding the vector
\begin{figure}[t]
\begin{centering}
\includegraphics{images/RM+RSM}
\par\end{centering}
\caption{Block diagram for the proposed scheme. \label{fig:Diagram-Multi}}
\end{figure}
\[
\overline{\mathbf{x}}=\mathbf{B}\mathbf{\tilde{x}}.
\]
The precoding matrix is defined as
\begin{equation}
\mathbf{B}=\overline{\mathbf{V}}_{1,s}(\overline{\mathbf{V}}_{1,s}^{\mathrm{H}}\overline{\mathbf{V}}_{1,s})^{-1}\overline{\boldsymbol{\beta}}_{s}\overline{\mathbf{P}}
\end{equation}
where the diagonal matrix $\overline{\boldsymbol{\beta}}_{s}=\mathrm{diag}(\boldsymbol{\beta}_{s}^{(1)}\;\boldsymbol{\beta}_{s}^{(2)}\;\cdots\;\boldsymbol{\beta}_{s}^{(K)})=\mathrm{diag}(\beta_{s,1}^{(1)}\;\cdots\;\beta_{s,N_{r}}^{(1)}\;\cdots\;\beta_{s,1}^{(K)}\;\cdots\;\beta_{s,N_{r}}^{(K)})$
serves to ensure a constant average transmit power. Assuming that
all diagonal elements in $\overline{\boldsymbol{\beta}}_{\mathrm{s}}$
are equal as in \cite{Liu2009a}, we obtain $\overline{\boldsymbol{\beta}}_{\mathrm{s}}=\beta_{s}\mathbf{I}_{KN_{r}}$,
where
\begin{equation}
\beta_{s}=\sqrt{\frac{KN_{r}}{\mathrm{Tr}\left[(\overline{\mathbf{V}}_{1,s}^{\mathrm{H}}\overline{\mathbf{V}}_{1,s})^{-1}\right]}}.\label{eq:beta}
\end{equation}
In the remainder of the paper, we will assume that this is the case,
and we will refer to $\beta_{s}$ as the \emph{scaling coefficient}.
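As a numerical sanity check of the precoder construction (with the power-level matrix set to identity and illustrative dimensions), the effective channel $\mathbf{H}_{s}\mathbf{B}$ should be block-diagonal, i.e. free of inter-channel and inter-user interference, and $\mathrm{Tr}(\mathbf{B}^{\mathrm{H}}\mathbf{B})=KN_{r}$ by the choice of $\beta_{s}$:

```python
import numpy as np

rng = np.random.default_rng(0)
K, Nr, Ntact = 2, 2, 6   # users, receive antennas per user, active tx antennas

# Per-user channels and their SVDs: H^(k) = U^(k) Lambda^(k) V1^(k)^H
H_blocks, V1_blocks = [], []
for _ in range(K):
    Hk = rng.standard_normal((Nr, Ntact)) + 1j * rng.standard_normal((Nr, Ntact))
    U, s, Vh = np.linalg.svd(Hk, full_matrices=False)
    H_blocks.append(Hk)
    V1_blocks.append(Vh.conj().T)          # Ntact x Nr
H = np.vstack(H_blocks)
V1bar = np.hstack(V1_blocks)               # Ntact x K*Nr

# Scaling coefficient and precoder (power-level matrix taken as identity)
Ginv = np.linalg.inv(V1bar.conj().T @ V1bar)
beta = np.sqrt(K * Nr / np.trace(Ginv).real)
B = V1bar @ Ginv * beta

# The effective channel is block-diagonal: each user sees only
# beta * U^(k) Lambda^(k) on its own Nr x Nr block.
Heff = H @ B
off = Heff.copy()
off[:Nr, :Nr] = 0
off[Nr:, Nr:] = 0
print(np.allclose(off, 0))                                # -> True
print(np.isclose(np.trace(B.conj().T @ B).real, K * Nr))  # -> True
```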
To transmit data in the receive spatial domain, \ac{DL-TR-GSM} utilizes
the power level matrix $\overline{\mathbf{P}}=\mathrm{diag}(\mathbf{P}^{(1)}\;\mathbf{P}^{(2)}\;\cdots\;\mathbf{P}^{(K)})$.
Each constituent matrix $\mathbf{P}^{(k)}$ ($k\in\{1,\dots,K\}$)
is a diagonal matrix whose $r$-th diagonal element takes the value
$\sqrt{P_{1}}$ if the $r$-th receive antenna of the \emph{k}-th
user is ``non-selected'' or $\sqrt{P_{2}}$ if the $r$-th receive
antenna of the \emph{k}-th user is ``selected''. Indices of the
``selected'' and the ``non-selected'' receive antennas of one
user determine $N_{r}$ bits transmitted to that user in the receive
spatial domain. More precisely, the indices of the ``non-selected''
receive antennas specify the positions of zeros and the indices of
the ``selected'' receive antennas specify the positions of ones.
Hereinafter, we assume $P_{1}<P_{2}$ and the set of all possible
$\mathbf{P}^{(k)}$ is denoted by $\mathcal{P}$
(hence $|\mathcal{P}|=2^{N_{r}}$).
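The bit-to-power mapping described above can be sketched as follows, with hypothetical power levels satisfying the unit average-power constraint $(P_{1}+P_{2})/2=1$:

```python
import numpy as np
from itertools import product

def power_level_matrix(bits, P1, P2):
    """Map the N_r receive-spatial bits of one user to P^(k): a 0 bit marks a
    'non-selected' antenna (sqrt(P1)), a 1 bit a 'selected' one (sqrt(P2))."""
    return np.diag([np.sqrt(P2) if b else np.sqrt(P1) for b in bits])

P1, P2 = 0.5, 1.5                     # hypothetical levels with (P1+P2)/2 = 1
Pk = power_level_matrix([1, 0], P1, P2)

# Enumerating all bit patterns reproduces the set P with |P| = 2^{N_r} members.
P_set = [power_level_matrix(b, P1, P2) for b in product([0, 1], repeat=2)]
print(Pk.diagonal(), len(P_set))
```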
From the previous expressions, the receive signal vector of the $k$-th
user can be written as follows:
\begin{align}
\mathbf{y}^{(k)} & =\mathbf{U}_{s}^{(k)}\mathbf{\mathbf{\Lambda}}_{s}^{(k)}\boldsymbol{\beta}_{s}^{(k)}\mathbf{P}_{i}^{(k)}\mathbf{\tilde{x}}_{m}^{(k)}+\mathbf{n}^{(k)}\nonumber \\
& =\mathbf{G}_{s}^{(k)}\mathbf{P}_{i}^{(k)}\mathbf{\tilde{x}}_{m}^{(k)}+\mathbf{n}^{(k)},
\end{align}
where $m\in\{1,\dots,M^{N_{r}}\}$ and $i\in\{1,\dots,2^{N_{r}}\}$
are the index of the transmitted \ac{IQ} symbol vector and the index
of the used power level matrix for the $k$-th user, respectively.
To cover the data transmission in all three domains we refer to the
column vector $\mathbf{G}_{s}^{(k)}\mathbf{P}_{i}^{(k)}\mathbf{\tilde{x}}_{m}^{(k)}$
as the \emph{supersymbol}, where $\mathbf{G}_{s}^{(k)}=\mathbf{U}_{s}^{(k)}\mathbf{\mathbf{\Lambda}}_{s}^{(k)}\boldsymbol{\beta}_{s}^{(k)}$.
One should note that each supersymbol is uniquely determined by a
particular $(s,i,m)$ index combination.
Now, the optimal \ac{ML} detector of the \emph{k}-th user is given
as
\begin{align}
\{\hat{s},\hat{i},\hat{m}\} & =\mathop{\mathop{\text{arg min}}}_{\substack{s\in\{1,...,N_{\mathrm{tcomb}}\}\\
m\in\{1,...,M^{N_{r}}\}\\
i\in\{1,...,2^{N_{r}}\}
}
}\left\Vert \mathbf{y}^{(k)}-\mathbf{G}_{s}^{(k)}\mathbf{P}_{i}^{(k)}\mathbf{\tilde{x}}_{m}^{(k)}\right\Vert ^{2},\label{eq:ML_det}
\end{align}
where $\hat{s}$ is the index of the detected transmit antenna combination,
$\hat{i}$ is the index of the detected power level matrix and $\hat{m}$
is the index of the detected \ac{IQ} modulation symbol vector.
Finally, we may note that the data rate per user of \ac{DL-TR-GSM}
is
\begin{equation}
\eta=\left\lfloor \log_{2}\frac{N_{t}}{N_{\mathrm{tact}}}\right\rfloor +N_{r}(1+\log_{2}M).
\end{equation}
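The per-user rate formula can be evaluated directly; the sketch below uses the parameter values adopted later in the simulation section ($N_{t}=32$, $N_{\mathrm{tact}}=4$, $N_{r}=2$, QPSK), assuming $M$ is a power of two:

```python
from math import comb, floor, log2

def rate_dl_tr_gsm(N_t, N_tact, N_r, M):
    """Per-user rate: floor(log2 C(N_t, N_tact)) transmit-spatial bits plus
    N_r*(1 + log2 M) receive-spatial and IQ bits (M a power of two)."""
    return floor(log2(comb(N_t, N_tact))) + round(N_r * (1 + log2(M)))

print(rate_dl_tr_gsm(32, 4, 2, 4))  # -> 21 bits per channel use
```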
\subsection{\ac{MU-TR-GSM}}
As mentioned previously, the main difference between \ac{MU-TR-GSM}
and \ac{DL-TR-GSM} is the data transmission in the receive spatial
domain. In contrast to \ac{DL-TR-GSM} which uses the \ac{DLT},
\ac{MU-TR-GSM} follows the conventional RSM operation principle.
It selects a subset of $N_{\mathrm{ract}}$ $(0<N_{\mathrm{ract}}<N_{r})$
receive antennas at each user, so that each active receive antenna
in the subset receives one IQ stream. Since there are $N_{\mathrm{rcomb}}=2^{\left\lfloor \log_{2}{N_{r} \choose N_{\mathrm{ract}}}\right\rfloor }$
receive antenna combinations, the data rate in the receive spatial
domain is $\left\lfloor \log_{2}{N_{r} \choose N_{\mathrm{ract}}}\right\rfloor $
bits per user. Another consequence of this change is the construction
of the effective channel matrix $\mathbf{H}_{s}$. Here, $\mathbf{H}_{s}$
is obtained by selecting $N_{\mathrm{tact}}$ columns and $KN_{r}$
rows of $\mathbf{H}$ that correspond to the active transmit and receive
antennas, respectively. Further signal preprocessing at the transmitter
is the same as for \ac{DL-TR-GSM}. The only difference is that the precoding
matrix expression does not contain the power level matrix $\overline{\mathbf{P}}$
and that the ratio numerator $KN_{r}$ in \prettyref{eq:beta} should
be replaced by $KN_{\mathrm{ract}}$. At the reception, we execute
the \ac{ML} detection as explained in \cite{Pizzio2016}.
\section{Determining the Optimum Power Levels}
In this section, we derive the \ac{BEP} expression for \ac{DL-TR-GSM}
and based on this we derive the optimal power levels $P_{1}$ and
$P_{2}$. As all the users are assumed to have the same propagation
conditions, the following analytical development is user-independent.
Therefore, the following expressions are valid for an arbitrary user
and we omit the user index in the superscript.
The upper bound for the \ac{BEP} is given by \cite{Liu2018a}
\begin{align}
P_{e} & \le\frac{1}{\eta2^{\eta}}\sum_{(s,i,m)}\sum_{(s_{1},i_{1},m_{1})}D\left((s,i,m)(s_{1},i_{1},m_{1})\right)\nonumber \\
 & \qquad\qquad\times\mathbb{E}_{\mathbf{H}}\left\{ \mathrm{PEP}\left((s,i,m)(s_{1},i_{1},m_{1})\right)\right\}
\end{align}
where the indices $(s,i,m)$ and $(s_{1},i_{1},m_{1})$ determine,
respectively, the transmitted and the detected supersymbol. $\mathbb{E}_{\mathbf{H}}\{\mathrm{PEP}((s,i,m)(s_{1},i_{1},m_{1}))\}$
denotes the average \ac{PEP} between the aforementioned
supersymbols and $D\left((s,i,m)(s_{1},i_{1},m_{1})\right)$ is the
Hamming distance between the binary representations of these supersymbols.
For a given $\mathbf{H}$, if the \ac{ML} detection in \prettyref{eq:ML_det}
is used, the \ac{PEP} can be expressed as
\begin{align*}
& \mathrm{PEP}\left((s,i,m)(s_{1},i_{1},m_{1})\right)\\
= & \mathrm{Pr}\left[\left\Vert \mathbf{y}-\mathbf{G}_{s}\mathbf{P}_{i}\mathbf{\tilde{x}}_{m}\right\Vert ^{2}>\left\Vert \mathbf{y}-\mathbf{G}_{s_{1}}\mathbf{P}_{i_{1}}\mathbf{\tilde{x}}_{m_{1}}\right\Vert ^{2}\right]\\
= & \mathrm{Pr}\left[\mathfrak{R}\left\{ \mathbf{n}\mathrm{^{H}}\left(\mathbf{G}_{s_{1}}\mathbf{P}_{i_{1}}\mathbf{\tilde{x}}_{m_{1}}-\mathbf{G}_{s}\mathbf{P}_{i}\mathbf{\tilde{x}}_{m}\right)\right\} \vphantom{>\frac{1}{2}\left\Vert \mathbf{G}_{s_{1}}\mathbf{P}_{i_{1}}\mathbf{\tilde{x}}_{m_{1}}-\mathbf{G}_{s}\mathbf{P}_{i}\mathbf{\tilde{x}}_{m}\right\Vert ^{2}}\right.\\
& \phantom{\Pr[}\left.\vphantom{\mathfrak{R}\left\{ \mathbf{n}\mathrm{^{H}}\left(\mathbf{G}_{\mathrm{s}_{1}}\mathbf{P}_{i_{1}}\mathbf{\tilde{x}}_{m_{1}}-\mathbf{G}_{s}\mathbf{P}_{i}\mathbf{\tilde{x}}_{m}\right)\right\} }>\frac{1}{2}\left\Vert \mathbf{G}_{s_{1}}\mathbf{P}_{i_{1}}\mathbf{\tilde{x}}_{m_{1}}-\mathbf{G}_{s}\mathbf{P}_{i}\mathbf{\tilde{x}}_{m}\right\Vert ^{2}\right].
\end{align*}
Since the left-hand side in the previous equation is distributed according
to $\mathcal{N}(0,\left\Vert \mathbf{G}_{s_{1}}\mathbf{P}_{i_{1}}\mathbf{\tilde{x}}_{m_{1}}-\mathbf{G}_{s}\mathbf{P}_{i}\mathbf{\tilde{x}}_{m}\right\Vert ^{2}\cdot N_{0}/2)$,
we get
\begin{multline}
\mathrm{PEP}\left((s,i,m)(s_{1},i_{1},m_{1})\right)=Q\left(\Phi/\sqrt{2N_{0}}\right)=\\
Q\left(\sqrt{\left\Vert \mathbf{G}_{s_{1}}\mathbf{P}_{i_{1}}\mathbf{\tilde{x}}_{m_{1}}-\mathbf{G}_{s}\mathbf{P}_{i}\mathbf{\tilde{x}}_{m}\right\Vert ^{2}}/\sqrt{2N_{0}}\right),\label{eq:PEP}
\end{multline}
where $\Phi$ is the Euclidean distance between the considered supersymbols.
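The PEP expression in \prettyref{eq:PEP} reduces to a single Q-function evaluation; a minimal sketch (with hypothetical toy supersymbols, since constructing real ones would require the full channel model) is:

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    # Gaussian tail function: Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

def pep(sup_tx, sup_det, N0):
    """PEP of deciding for sup_det when sup_tx was sent: Q(Phi / sqrt(2*N0)),
    with Phi the Euclidean distance between the two supersymbols."""
    Phi = np.linalg.norm(sup_tx - sup_det)
    return Q(Phi / sqrt(2 * N0))

# toy supersymbols (hypothetical 2-element complex vectors)
a = np.array([1 + 0j, 1j])
b = np.array([1 + 0j, -1j])
print(pep(a, b, N0=1.0))
```

Note that identical supersymbols give $\mathrm{PEP}=Q(0)=1/2$, and the PEP decreases monotonically with the distance $\Phi$.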
\subsection{Power Ratio $\alpha$}
The ratio of the power levels used for communicating data in the receive
spatial domain is
\begin{equation}
\alpha=\frac{P_{2}}{P_{1}}\label{eq:cond_1}
\end{equation}
and it satisfies $\alpha>1$. Since the chosen power levels need to
keep the average transmit power unchanged, we have $(P_{1}+P_{2})/2=1.$
Thus we obtain $P_{1}=2/(1+\alpha)$ and $P_{2}=2\alpha/(1+\alpha).$
Since $P_{1}$ and $P_{2}$ are determined entirely by $\alpha$,
the goal is to find the optimal $\alpha$ that ensures the best error
rate performance.
Note that while \prettyref{eq:PEP} captures the PEPs associated with
all possible error events for DL-TR-GSM, for mathematical tractability
reasons we will consider in the following analysis only the individual
PEPs of the IQ domain and of the receive spatial domain. In these
two cases, the Euclidean distance $\Phi$ of \ac{DL-TR-GSM} in \prettyref{eq:PEP}
is mathematically equivalent to the Euclidean distance of a transmit
system that consists of $N_{r}$ parallel orthogonal subchannels.
As the channel gains of these subchannels are proportional to the
singular values in $\mathbf{\mathbf{\Lambda}}_{s}$, the minimum Euclidean
distances, i.e. the maximum \acp{PEP}, will always occur in the subchannel
with the channel gain proportional to the smallest singular value
$\lambda_{s,N_{r}}$.
In the \ac{IQ} domain, due to the use of \emph{M}-PSK modulation,
the maximum \ac{PEP} can be expressed as follows:
\begin{equation}
\mathrm{PEP_{IQ,\max}}=Q\left(\beta_{s}\lambda_{s,N_{r}}\sqrt{\frac{P_{1}}{2N_{0}}}\left|b_{m}-b_{m_{1}}\right|_{\min}\right).\label{eq:PEP_IQ_int}
\end{equation}
For two \emph{M}-PSK symbols $b_{m}$ and $b_{m_{1}}$, we have $\left|b_{m}-b_{m_{1}}\right|_{\min}=2\sin(\pi/M)$
and the previous expression can be re-written as
\begin{equation}
\mathrm{PEP_{IQ,\max}}=Q\left(\beta_{s}\lambda_{s,N_{r}}\sqrt{\frac{2P_{1}}{N_{0}}}\sin\frac{\pi}{M}\right).\label{eq:PEP_IQ}
\end{equation}
From \prettyref{eq:PEP}, the maximum \ac{PEP} of the receive spatial
domain is given as
\begin{equation}
\mathrm{PEP_{RSP,\max}}=Q\left(\beta_{s}\lambda_{s,N_{r}}\frac{\sqrt{P_{2}}-\sqrt{P_{1}}}{\sqrt{2N_{0}}}\right).\label{eq:PEP_SP}
\end{equation}
We define the optimal value of $\alpha$ as the one that minimizes
the larger of the two maximum PEPs in \prettyref{eq:PEP_IQ} and \prettyref{eq:PEP_SP},
which is achieved when their $Q$-function arguments are equal:
\[
\beta_{s}\lambda_{s,N_{r}}\sqrt{\frac{2P_{1}}{N_{0}}}\sin\frac{\pi}{M}=\beta_{s}\lambda_{s,N_{r}}\frac{\sqrt{P_{2}}-\sqrt{P_{1}}}{\sqrt{2N_{0}}}.
\]
After some simple algebraic manipulations, the optimal power ratio
$\alpha$ is given as
\begin{equation}
\alpha_{\mathrm{opt}}=\left(1+2\sin\frac{\pi}{M}\right)^{2}.\label{eq:alpha_opt}
\end{equation}
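Evaluating \prettyref{eq:alpha_opt} together with the unit average-power constraint gives the concrete power levels for the common PSK orders (these are the same $\alpha$ values quoted in the simulation section):

```python
from math import sin, pi

def alpha_opt(M):
    # Eq. (alpha_opt): optimal power ratio for M-PSK
    return (1 + 2 * sin(pi / M)) ** 2

def power_levels(alpha):
    # P1, P2 under the unit average-power constraint (P1 + P2)/2 = 1
    return 2 / (1 + alpha), 2 * alpha / (1 + alpha)

for M in (4, 8, 16):
    a = alpha_opt(M)
    P1, P2 = power_levels(a)
    print(M, round(a, 2), round(P1, 3), round(P2, 3))
```

For $M=4$, $8$ and $16$ this yields $\alpha_{\mathrm{opt}}\approx 5.83$, $3.12$ and $1.93$, respectively.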
\section{System Complexity Analysis}
\subsection{Computational Complexity\label{subsec:Com-Compl}}
In this subsection, we derive the computational complexity of \ac{DL-TR-GSM}
and \ac{MU-TR-GSM}. The computational complexity refers to
the number of mathematical operations required for the calculation
of all \acp{SVD} and scaling coefficients $\beta_{s}$ that are needed
in order to perform signal transmission and detection.
In \ac{DL-TR-GSM}, \ac{SVD} is performed for $\mathbf{H}_{s}^{(k)}$
matrices of dimension $N_{r}\times N_{\mathrm{tact}}$, and $4N_{r}^{2}N_{\mathrm{tact}}+22N_{\mathrm{tact}}$
operations are needed for each \ac{SVD} \cite{Golub2012a}. As $N_{\mathrm{tcomb}}$
SVDs are required for each user, the total number of all \acp{SVD}
in \ac{DL-TR-GSM} equals $KN_{\mathrm{tcomb}}$. The complexity
of computing $\beta_{s}$ is primarily determined by the complexity
of the denominator in \prettyref{eq:beta}. Since the matrix $\overline{\mathbf{V}}_{1,s}^{\mathrm{H}}\overline{\mathbf{V}}_{1,s}$
is a $KN_{r}\times KN_{r}$ Hermitian matrix, only the elements on
the main diagonal and below (or above), i.e. $((KN_{r})^{2}+KN_{r})/2$
matrix elements, need to be computed. Since the computation of a single
element requires $2N_{\mathrm{tact}}-1$ operations, the computational
complexity of $\overline{\mathbf{V}}_{1,s}^{\mathrm{H}}\overline{\mathbf{V}}_{1,s}$
is $((KN_{r})^{2}+KN_{r})(N_{\mathrm{tact}}-1/2)$. Inversion of the
aforementioned matrix requires $(KN_{r})^{3}+(KN_{r})^{2}+KN_{r}$
operations \cite{Perovic2015} and the computation of the matrix trace
requires $KN_{r}-1$ operations. In addition, we have 1 square root,
1 multiplication and 1 division operation. The number of different
$\beta_{s}$ values in \ac{DL-TR-GSM} is $N_{\mathrm{tcomb}}$. In
summary, the total computational complexity of \ac{DL-TR-GSM} is
given by
\begin{align*}
C_{\mathrm{DL}} & =KN_{\mathrm{tcomb}}(4N_{r}^{2}N_{\mathrm{tact}}+22N_{\mathrm{tact}})+N_{\mathrm{tcomb}}\left[(KN_{r})^{3}\vphantom{+(KN_{r})^{2}\left(N_{\mathrm{tact}}+\frac{1}{2}\right)+(KN_{r})\left(N_{\mathrm{tact}}+\frac{3}{2}\right)+3)}\right.\\
& \left.\vphantom{N_{\mathrm{tcomb}}((KN_{r})^{3}}+(KN_{r})^{2}\left(N_{\mathrm{tact}}+\frac{1}{2}\right)+(KN_{r})\left(N_{\mathrm{tact}}+\frac{3}{2}\right)+2\right].
\end{align*}
The computational complexity derivation given above for \ac{DL-TR-GSM}
is applicable with some minor modifications to \ac{MU-TR-GSM}.
One difference comes from the fact that the \ac{MU-TR-GSM} scheme
activates $N_{\mathrm{ract}}$ out of $N_{r}$ available receive antennas.
The other difference originates from the number of \acp{SVD} and
$\beta_{s}$ values which are given by $KN_{\mathrm{tcomb}}N_{\mathrm{rcomb}}$
and $N_{\mathrm{tcomb}}N_{\mathrm{rcomb}}^{K}$, respectively, for \ac{MU-TR-GSM}.
To summarize, the computational complexity of \ac{MU-TR-GSM} is given
by
\begin{align*}
C_{\mathrm{MU}} & =KN_{\mathrm{tcomb}}N_{\mathrm{rcomb}}\left(4N_{\mathrm{ract}}^{2}N_{\mathrm{tact}}+22N_{\mathrm{tact}}\right)\\
& +N_{\mathrm{tcomb}}N_{\mathrm{rcomb}}^{K}\left[(KN_{\mathrm{ract}})^{3}+(KN_{\mathrm{ract}})^{2}\vphantom{+(KN_{r})^{2}\left(N_{\mathrm{tact}}+\frac{1}{2}\right)+(KN_{r})\left(N_{\mathrm{tact}}+\frac{3}{2}\right)+3)}\right.\\
& \left.\vphantom{N_{\mathrm{tcomb}}((KN_{r})^{3}}\times\left(N_{\mathrm{tact}}+\frac{1}{2}\right)+(KN_{\mathrm{ract}})\left(N_{\mathrm{tact}}+\frac{3}{2}\right)+2\right].
\end{align*}
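The two complexity expressions are straightforward to evaluate numerically. The sketch below uses the simulation parameters, assuming $N_{\mathrm{tcomb}}=2^{15}$ (i.e., $2^{\lfloor \log_{2}\binom{32}{4}\rfloor}$) and $N_{\mathrm{rcomb}}=4$ for the MU scheme:

```python
def C_DL(K, N_r, N_tact, N_tcomb):
    KNr = K * N_r
    return (K * N_tcomb * (4 * N_r**2 * N_tact + 22 * N_tact)
            + N_tcomb * (KNr**3 + KNr**2 * (N_tact + 0.5)
                         + KNr * (N_tact + 1.5) + 2))

def C_MU(K, N_ract, N_tact, N_tcomb, N_rcomb):
    KNr = K * N_ract
    return (K * N_tcomb * N_rcomb * (4 * N_ract**2 * N_tact + 22 * N_tact)
            + N_tcomb * N_rcomb**K * (KNr**3 + KNr**2 * (N_tact + 0.5)
                                      + KNr * (N_tact + 1.5) + 2))

# hypothetical setup: K = 2 users, N_tcomb = 2^15 transmit combinations,
# DL-TR-GSM with N_r = 2 vs. MU-TR-GSM with N_ract = 2 and N_rcomb = 4
c_dl = C_DL(2, 2, 4, 2**15)
c_mu = C_MU(2, 2, 4, 2**15, 4)
print(int(c_dl), int(c_mu), round(c_mu / c_dl, 1))
```

Already for $K=2$ users, the MU scheme is roughly an order of magnitude more expensive, and the gap widens with $K$ because of the $N_{\mathrm{rcomb}}^{K}$ factor.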
\subsection{Hardware Complexity}
The fact that the transmitted \ac{IQ} streams are received by all
the receive antennas, and not by some receive antenna subset, enables
\ac{DL-TR-GSM} to use a smaller number of receive antennas compared
to \ac{MU-TR-GSM}. As the number of receive antennas corresponds
to the number of RF chains at the receiver in RSM-based systems, the
hardware complexity advantage of \ac{DL-TR-GSM} becomes apparent.
Consequently, we can illustrate it in terms of the receive power consumption.
In the following, the receive power consumption is expressed relative
to the low noise amplifier power $P_{\mathrm{LNA}}$, the RF chain
power $P_{\mathrm{RFC}}$, the analog-to-digital converter power $P_{\mathrm{ADC}}$
and the baseband power $P_{\mathrm{BB}}$. The receive power consumption
for both schemes may be computed as
\[
P_{\mathrm{TOT}}=N_{r}(P_{\mathrm{LNA}}+P_{\mathrm{RFC}}+P_{\mathrm{ADC}})+P_{\mathrm{BB}}.
\]
The component powers are expressed relative to the reference power
$P_{\mathrm{ref}}$ as $P_{\mathrm{LNA}}=P_{\mathrm{ref}}$, $P_{\mathrm{RFC}}=2P_{\mathrm{ref}}$
and $P_{\mathrm{ADC}}=P_{\mathrm{BB}}=10P_{\mathrm{ref}}$ \cite{Mendez-Rial2016}.
For the reception of 2 IQ streams per user, under the same data rate,
\ac{DL-TR-GSM} requires $N_{r}=2$ receive antennas, and \ac{MU-TR-GSM}
requires $N_{r}=4$ receive antennas from which $N_{\mathrm{ract}}=2$
receive antennas are always active. If $P_{\mathrm{ref}}=20\,\mathrm{mW}$
\cite{Mendez-Rial2016}, the receive power consumptions for \ac{DL-TR-GSM}
and \ac{MU-TR-GSM} are 720\,mW and 1240\,mW, respectively. Hence,
\ac{DL-TR-GSM} requires 520\,mW less power per user, corresponding
to a 41.9\,\% reduction compared to \ac{MU-TR-GSM}. These power
gains may provide remarkable power savings, especially for communication
systems supporting a large number of users.
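The power figures above follow directly from the component model; a short sketch reproducing them (values in watts, $P_{\mathrm{ref}}=20$\,mW) is:

```python
def receive_power(N_r, P_ref=20e-3):
    """P_TOT = N_r*(P_LNA + P_RFC + P_ADC) + P_BB with P_LNA = P_ref,
    P_RFC = 2*P_ref and P_ADC = P_BB = 10*P_ref (values in watts)."""
    return N_r * (P_ref + 2 * P_ref + 10 * P_ref) + 10 * P_ref

P_dl = receive_power(2)   # DL-TR-GSM: N_r = 2
P_mu = receive_power(4)   # MU-TR-GSM: N_r = 4
print(round(P_dl, 3), round(P_mu, 3), round(100 * (P_mu - P_dl) / P_mu, 1))
```

This recovers 0.72\,W versus 1.24\,W, i.e., the 41.9\,\% reduction stated above.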
\section{Separate Detector}
Due to limited hardware and software resources in user terminals,
a direct implementation of the \ac{ML} detector in \prettyref{eq:ML_det}
may be unsuitable for practical use. Motivated by this fact, we propose
a low-complexity separate detector, which executes a two-step detection
process.
In the first step, the separate detector determines the most likely
transmit \ac{IQ} symbol vector and power level matrix for each transmit
antenna combination. Assuming that the \mbox{$s_{1}$-th} transmit
antenna combination is activated, we perform the following signal
processing at the receiver utilizing $\mathbf{U}_{s_{1}}^{(k)^{\mathrm{H}}}$ as
\[
\tilde{\mathbf{y}}_{s_{1}}^{(k)}=\mathbf{U}_{s_{1}}^{(k)^{\mathrm{H}}}\mathbf{y}^{(k)}=\mathbf{U}_{s_{1}}^{(k)^{\mathrm{H}}}\mathbf{G}_{s}^{(k)}\mathbf{P}_{i}^{(k)}\mathbf{\tilde{x}}_{m}^{(k)}+\mathbf{U}_{s_{1}}^{(k)^{\mathrm{H}}}\mathbf{n}^{(k)}.
\]
For $s=s_{1}$ (i.e. $\mathbf{U}_{s_{1}}^{(k)^{\mathrm{H}}}\mathbf{U}_{s}^{(k)}=\mathbf{I}_{N_{r}}$),
this system of equations corresponds to a set of $N_{r}$ parallel
subchannels. Hence, we can detect the power level and the transmission
symbol independently for each receive antenna $r\,(r=1,\dots,N_{r})$
as
\begin{multline}
\{\hat{m}_{s_{1}}(r),\hat{i}_{s_{1}}(r)\}=\\
\mathop{\mathop{\text{arg min}}}_{\substack{m_{s_{1}}(r)\in\{1,...,M\}\\
i_{s_{1}}(r)\in\{1,2\}
}
}\left|\tilde{y}_{s_{1}}^{(k)}(r)-\lambda_{s_{1},r}^{(k)}\beta_{s_{1}}\sqrt{P_{i_{s_{1}}(r)}^{(k)}}\widetilde{x}_{m_{s_{1}}(r)}^{(k)}\right|^{2}.\label{eq:sep_det-1-1}
\end{multline}
Now for each transmit antenna combination $s_{1}$ we have the candidate
power level matrix $\mathbf{P}_{i_{s_{1}}}^{(k)}=\mathrm{diag}(\sqrt{P_{i_{s_{1}}(1)}^{(k)}}\;\cdots\;\sqrt{P_{i_{s_{1}}(N_{r})}^{(k)}})$
and the candidate transmitted \ac{IQ} symbol vector $\mathbf{\tilde{x}}_{m_{s_{1}}}^{(k)}=\left[\widetilde{x}_{m_{s_{1}}(1)}^{(k)}\;\cdots\;\widetilde{x}_{m_{s_{1}}(N_{r})}^{(k)}\right]^{\mathrm{T}}$.
In the second step, we determine the active transmit antenna combination
$\hat{s}$ according to the following expression:
\begin{equation}
\hat{s}=\mathop{\mathop{\text{arg min}}}_{\substack{s\in\{1,...,N_{\mathrm{tcomb}}\}}
}\left\Vert \mathbf{y}^{(k)}-\mathbf{U}_{s}^{(k)}\mathbf{\mathbf{\Lambda}}_{s}^{(k)}\beta_{s}\mathbf{P}_{i_{s}}^{(k)}\mathbf{\tilde{x}}_{m_{s}}^{(k)}\right\Vert ^{2},\label{eq:sep_det-2-1}
\end{equation}
where $\mathbf{P}_{i_{s}}^{(k)}$ and $\mathbf{\tilde{x}}_{m_{s}}^{(k)}$
are obtained from the previous expression. Finally, we obtain the
transmitted bit sequence by combining the results from \prettyref{eq:sep_det-1-1}
and \prettyref{eq:sep_det-2-1}.
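The two-step procedure can be sketched compactly for a toy setup. All quantities below (two antenna combinations, random unitary $\mathbf{U}_{s}$ factors, the singular values, $\beta_{s}=1$) are hypothetical stand-ins chosen only to exercise the detector on a noise-free observation:

```python
import numpy as np

def separate_detect(y, U, Lam, beta, psk, P_levels):
    """Two-step separate detection for one user (toy sketch).
    U[s] (unitary), Lam[s] (singular values) and beta[s] are the factors of
    G_s = U_s Lam_s beta_s; psk is the M-PSK alphabet, P_levels = (P1, P2)."""
    N_r = y.shape[0]
    best, best_metric = None, np.inf
    for s in range(len(U)):
        y_t = U[s].conj().T @ y                      # step 1: rotate by U_s^H
        i_hat, m_hat = np.zeros(N_r, int), np.zeros(N_r, int)
        for r in range(N_r):                         # per-subchannel search
            g = Lam[s][r] * beta[s]
            _, i_hat[r], m_hat[r] = min(
                (abs(y_t[r] - g * np.sqrt(P) * x) ** 2, i, m)
                for i, P in enumerate(P_levels) for m, x in enumerate(psk))
        x_hat = np.sqrt(np.take(P_levels, i_hat)) * np.take(psk, m_hat)
        metric = np.linalg.norm(y - U[s] @ (Lam[s] * beta[s] * x_hat)) ** 2
        if metric < best_metric:                     # step 2: pick best s
            best, best_metric = (s, tuple(i_hat), tuple(m_hat)), metric
    return best

# noise-free sanity check with two hypothetical antenna combinations
rng = np.random.default_rng(1)
qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
U = [np.linalg.qr(rng.standard_normal((2, 2))
                  + 1j * rng.standard_normal((2, 2)))[0] for _ in range(2)]
Lam = [np.array([1.5, 0.7]), np.array([1.2, 0.9])]
beta = [1.0, 1.0]
P_levels = (0.293, 1.707)
x_true = np.sqrt(np.array([P_levels[1], P_levels[0]])) * qpsk[[2, 3]]
y = U[1] @ (Lam[1] * beta[1] * x_true)
res = separate_detect(y, U, Lam, beta, qpsk, P_levels)
print(res)
```

In the noise-free case the detector recovers the transmitted combination, power indices and symbol indices exactly, while searching only $N_{\mathrm{tcomb}}\cdot N_{r}\cdot 2M$ scalar hypotheses instead of the $N_{\mathrm{tcomb}}\cdot 2^{N_{r}}\cdot M^{N_{r}}$ joint hypotheses of \prettyref{eq:ML_det}.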
\section{Simulation Results}
In this section, we present the \ac{BER} simulation results of \ac{DL-TR-GSM}
with the \ac{ML} detector and the separate detector. As a benchmark,
we use \ac{MU-TR-GSM} for performance comparison. Then we show the
influence of the power ratio $\alpha$ on the \ac{BER} of \ac{DL-TR-GSM}.
Finally, we provide a comparison of the computational complexity of
\ac{DL-TR-GSM} and \ac{MU-TR-GSM}.
\begin{figure}[t]
\centering{}\includegraphics{figures/plotMultiWithOutScalsaved}\caption{BER comparison without scaling coefficient $\beta_{s}$ (i.e., setting
$\beta_{s}=1$). \label{fig:BER-without}}
\end{figure}
We consider a downlink communication system with a single base station
and $K=2$ users. The base station has $N_{t}=32$ available transmit
antennas and always activates $N_{\mathrm{tact}}=4$ transmit antennas.
Each user receives 2 \ac{IQ} streams of QPSK symbols. To maintain
the same data rate, each user in \ac{DL-TR-GSM} is equipped with
$N_{r}=2$ receive antennas, while in MU-TR-GSM each user is equipped
with $N_{r}=4$ receive antennas from which $N_{\mathrm{ract}}=2$
receive antennas are always used for the IQ stream reception.
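The two configurations indeed carry the same per-user rate; the quick check below assumes the standard GSM/RSM rate formula for the MU scheme (transmit-spatial bits plus $\lfloor\log_{2}\binom{N_{r}}{N_{\mathrm{ract}}}\rfloor$ receive-spatial bits plus $N_{\mathrm{ract}}\log_{2}M$ IQ bits):

```python
from math import comb, floor, log2

N_t, N_tact, M = 32, 4, 4
# DL-TR-GSM: N_r = 2 antennas, all receiving IQ streams
eta_dl = floor(log2(comb(N_t, N_tact))) + round(2 * (1 + log2(M)))
# MU-TR-GSM: N_r = 4, N_ract = 2 (standard GSM/RSM rate formula assumed)
eta_mu = (floor(log2(comb(N_t, N_tact)))
          + floor(log2(comb(4, 2))) + round(2 * log2(M)))
print(eta_dl, eta_mu)
```

Both evaluate to 21 bits per channel use per user, so the BER comparison is made at equal rate.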
\begin{figure}[t]
\centering{}\includegraphics{figures/plotMultiWithScalsaved}\caption{BER comparison with scaling coefficient $\beta_{s}$. \label{fig:BER-with}}
\end{figure}
In \prettyref{fig:BER-without}, we show the \ac{BER} of \ac{DL-TR-GSM}
and \ac{MU-TR-GSM} without utilizing the scaling coefficient $\beta_{s}$
(i.e., setting $\beta_{s}=1$). Due to omitting the scaling coefficient,
the precoder cannot maintain the same average signal power at the
input and the output. Hence this performance comparison cannot be
considered as generally fair, but we present it in order to show that
the BER performance matches the results in \cite[Fig. 2]{Pizzio2016}.
In this case we define the \ac{SNR} as the ratio of the signal power
at the precoder \emph{input} and the noise power. This differs from
the standard way of defining the \ac{SNR} as the ratio of the signal
power at the transmit/receive antennas and the noise power. In general,
\ac{DL-TR-GSM} achieves worse \ac{BER} than \ac{MU-TR-GSM} and
this effect becomes more pronounced at higher \ac{SNR}. Accordingly,
\ac{MU-TR-GSM} outperforms \ac{DL-TR-GSM} by up to 3\,dB
at high \ac{SNR}. On the other hand, both schemes have the same \ac{BER}
in the low-\ac{SNR} regime. As for the separate detector of \ac{DL-TR-GSM},
we see that it exhibits no performance loss with respect to the optimal
ML detector.
As already mentioned, when the scaling coefficient $\beta_{s}$ is omitted,
the precoders of the considered schemes are not able to maintain the
same average signal power, and the \ac{BER} comparison in \prettyref{fig:BER-without}
cannot be classified as fair. Motivated by this, we present in \prettyref{fig:BER-with}
the \ac{BER} of \ac{DL-TR-GSM} and \ac{MU-TR-GSM} when the scaling
coefficient $\beta_{s}$ is utilized. In this case, \ac{DL-TR-GSM}
shows a negligibly worse \ac{BER} performance than \ac{MU-TR-GSM}.
Actually, the only visible difference is in the high-\ac{SNR} regime.
On the other hand, \ac{DL-TR-GSM} can even achieve slightly better
results at low \ac{SNR}. Again, the \ac{BER} of \ac{DL-TR-GSM}
remains extremely similar for the \ac{ML} detector and the separate
detector.
\begin{figure}[t]
\centering{}\includegraphics{figures/plot_ber_alpha}\caption{BER versus power ratio $\alpha$ for $\mathrm{SNR}=30\,\mathrm{dB}$.
Dashed lines present the optimum $\alpha$ obtained from \prettyref{eq:alpha_opt}.
\label{fig:BER-alpha}}
\end{figure}
To evaluate the correctness of the derived expression \prettyref{eq:alpha_opt},
we show the \ac{BER} of \ac{DL-TR-GSM} as a function of the power
ratio $\alpha$ in \prettyref{fig:BER-alpha}. The setup and parameters
are the same as in the previous figures, and the only difference is
that the IQ modulation order is allowed to vary. For the used IQ modulation
orders of 4, 8 and 16, the optimal values of $\alpha$ in \prettyref{eq:alpha_opt}
are 5.83, 3.12 and 1.93, respectively. These values are presented
with dashed lines in \prettyref{fig:BER-alpha}. In all cases the
optimal $\alpha$ in \prettyref{eq:alpha_opt} provides a BER which
is very close to the minimum achievable as shown by the simulations
in \prettyref{fig:BER-alpha}. Also, it can be observed that the
\ac{BER} is very robust to the change of $\alpha$ for low IQ modulation
orders (e.g., $M=4$).
\begin{figure}[t]
\centering{}\includegraphics{figures/DL_variable_K_new}\caption{BER for different numbers of users. \label{fig:BER-var-K}}
\end{figure}
\begin{figure}[t]
\centering{}\includegraphics{figures/plot_com_compl}\caption{Computational complexity versus number of users $K$. \label{fig:Complexity}}
\end{figure}
In \prettyref{fig:BER-var-K}, we show the \ac{BER} of \ac{DL-TR-GSM}
for different numbers of users. Here we consider a number of users
$K$ equal to 2, 4 and 8, and we assume for each value of $K$ that
the number of active transmit antennas is $N_{\mathrm{tact}}=2K$
(assuming two receive antennas per user). Also, to maintain the same
data rate in the transmit spatial domain as $K$ increases, we assume
that the base station is equipped with $8K$ transmit antennas. It
can be seen that the \ac{BER} increases slightly with increasing
$K$, but this trend exhibits saturation at moderate values of $K$.
Therefore, in a system with many users, a minor variation in the number
of users has a negligible impact on the system's \ac{BER} performance.
Also, a good match is again observed between the \ac{BER} of \ac{DL-TR-GSM}
with the \ac{ML} detector and the \ac{BER} of \ac{DL-TR-GSM} with
the separate detector.
In \prettyref{fig:Complexity}, we present a computational complexity
comparison of \ac{DL-TR-GSM} and \ac{MU-TR-GSM} for a varying number
of users. We observe that \ac{DL-TR-GSM} is capable of achieving
a significant complexity reduction compared to \ac{MU-TR-GSM}.
The main reason for this is the ability of \ac{DL-TR-GSM} to reduce
the number of \ac{SVD} computations and more notably the number of
$\beta_{s}$ values (note that the number of different scaling coefficient
values increases exponentially with $K$ for \ac{MU-TR-GSM}). Hence,
\ac{DL-TR-GSM} has the potential to provide a very significant computational
complexity reduction in real communication systems with a large number
of users.
\balance
\section{Conclusion}
In this paper, we proposed a new multiuser MIMO scheme, referred to
as \ac{DL-TR-GSM}, based on the concept of \ac{DLT}. In contrast
to \ac{MU-TR-GSM}, which activates a subset of the
receive antennas, \ac{DL-TR-GSM} uses different power levels to transfer
information in the receive spatial domain. This operational change
provides a considerable complexity reduction in multiuser downlink
communications. For the same reason, the hardware complexity of the
user terminals decreases, causing a potentially large power saving.
To further improve the performance of \ac{DL-TR-GSM}, we proposed
a separate detector which reduces the detection complexity, while
maintaining a near-optimal BER.
\bibliographystyle{IEEEtran}
\phantomsection\addcontentsline{toc}{section}{\refname}
\section{Introduction}
For a smooth projective variety $X$ defined over a number field, one can ask whether the set of rational points is dense.
It is expected that the set of rational points reflects the positivity of the canonical bundle $\omega_{X}$ (cf. the Bombieri-Lang conjecture) and it is tempting to study the intermediate case $\omega_{X}=\mathcal{O}_{X}$.
In this case, $X$ belongs to the class of special varieties introduced by Campana \cite{C} which is conjecturally the same as that of varieties on which rational points are potentially dense, that is, Zariski dense
after passing to some finite field extension.
An interesting subcase is given by abelian varieties, for which potential density is well-known.
It is challenging to consider the subcase given by Calabi-Yau varieties in a {\it strict sense},
i.e., smooth projective varieties $X$ with $\omega_{X}=\mathcal{O}_{X}$ and $H^{0}(X,\Omega_{X}^{i})=0$ for all $0<i<\dim X$, which are simply connected over $\mathbb{C}$.
For elliptic K3 surfaces and K3 surfaces with infinite automorphism groups, potential density holds due to Bogomolov-Tschinkel \cite{BT3}.
Moreover, there are several works on K3 surfaces over the rational numbers with Zariski dense sets of rational points; for instance, quartic K3 surfaces have long been studied \cite{E, LMvL, SD1, SD2}.
Very little is known in higher dimensions.
It is stated by Tschinkel \cite[after Problem 3.5]{T} that it would be worthwhile to find non-trivial examples of Calabi-Yau threefolds over number fields with
Zariski
dense sets of rational points.
It is only recent that the first such examples were actually obtained:
Bogomolov-Halle-Pazuki-Tanimoto \cite{BHPT} studied Calabi-Yau threefolds with abelian surface fibrations
and showed potential density for threefolds (including
simply connected ones) constructed by Gross-Popescu \cite{GP}.
However, it is not immediately clear whether their method can be used to determine the minimal field extensions over which rational points become Zariski dense.
In this short note, we construct higher-dimensional Calabi-Yau varieties in a strict sense defined over a given number field with Zariski dense sets of rational points.
We give two elementary constructions in arbitrary dimensions (Section \ref{C1}, \ref{C2}) as well as another construction in dimension three which involves certain Calabi-Yau threefolds containing an Enriques surface (Section \ref{C3}).
The constructions also show that potential density holds for
(sufficiently)
general members of the families.
The third construction is a by-product of the author's attempt to analyze in detail the recent construction due to Ottem-Suzuki \cite{OS} of a pencil of Enriques surfaces with non-algebraic integral Hodge classes.
For the third construction,
our example
contains no abelian surface and is a unique minimal model in its birational equivalence class,
thus a theorem of Bogomolov-Halle-Pazuki-Tanimoto \cite[Theorem 1.2]{BHPT} cannot be applied.
For all the constructions, elliptic fibration structures are crucial.
We work over a number field unless otherwise specified.
\begin{ack*}
The author wishes to thank Lawrence Ein,
Yohsuke Matsuzawa,
John Christian Ottem,
Ramin Takloo-Bighash, and Burt Totaro for interesting discussions.
\end{ack*}
\section{Construction I}\label{C1}
The first idea is to construct elliptic Calabi-Yau varieties in a strict sense (i.e., $H^{0}(X,\Omega^{i}_{X})=0$ for all $0<i<\dim X$ and simply connected over $\mathbb{C}$) whose base spaces are rational and which admit infinitely many sections.
For that purpose, we introduce variants of Schoen's Calabi-Yau fiber products of rational elliptic surfaces \cite{Sch}.
We let $S\subset \mathbb{P}^{1}\times \mathbb{P}^{2}$ be a smooth hypersurface of bi-degree $(1,3)$.
Then the first projection defines an elliptic fibration $f\colon S\rightarrow \mathbb{P}^{1}$.
Moreover, via the second projection, $S$ is the blow-up of $\mathbb{P}^{2}$ along the nine points given by the intersection of two cubic curves, hence rational.
For $n\geq 3$, we let $Y\subset \mathbb{P}^{1}\times \mathbb{P}^{n-1}$ be a smooth hypersurface of bi-degree $(1,n)$.
The first projection restricts to a fibration into Calabi-Yau hypersurfaces $g\colon Y\rightarrow \mathbb{P}^{1}$ and $Y$ is again rational via the second projection.
We assume that over any point in $\mathbb{P}^{1}$ either $f$ or $g$ is smooth, which is satisfied if $Y$ is general with respect to $S$.
We define $X=S\times_{\mathbb{P}^{1}}Y$ and let $\pi\colon X\rightarrow \mathbb{P}^{1}$ be the natural projection.
\begin{lem}\label{lCYI}
The fiber product $X$ is a Calabi-Yau $n$-fold in a strict sense.
\end{lem}
\begin{rem}
If $S_{\mathbb{C}}$ is very general, it is classical that the elliptic fibration $f\colon S_{\mathbb{C}}\rightarrow \mathbb{P}^{1}_{\mathbb{C}}$ admits infinitely many sections,
which implies that the same holds for the natural projection $X_{\mathbb{C}}\rightarrow Y_{\mathbb{C}}$.
This provides examples of Calabi-Yau varieties in a strict sense defined over $\mathbb{C}$ containing infinitely many rational divisors in all dimensions $\geq 3$.
We will construct below such an example over a given number field.
\end{rem}
\begin{proof}[Proof of Lemma \ref{lCYI}]
It is immediate to see that $X$ is smooth.
The fiber product $X$ is a complete intersection in $\mathbb{P}^{1}\times \mathbb{P}^{2}\times \mathbb{P}^{n-1}$ of hypersurfaces of tri-degree $(1,3,0)$ and that of tri-degree $(1,0,n)$.
We have $\omega_{X}=\mathcal{O}_{X}$ by the adjunction formula and an easy computation shows that $H^{i}(X,\mathcal{O}_{X})=0$ for all $0<i<n$.
For simple connectedness over $\mathbb{C}$,
Schoen proved this result for $n=3$ (see \cite[Lemma 1.1]{Sch0} for the strategy).
In fact, his method works for $n\geq 3$ and the argument goes as follows.
Let $U\subset \mathbb{P}^{1}_{\mathbb{C}}$ be the open subset over which $\pi\colon X_{\mathbb{C}}\rightarrow \mathbb{P}^{1}_{\mathbb{C}}$ is smooth and let $V=\pi^{-1}(U)$.
The natural map $\pi|_{V}\colon V\rightarrow U$ is topologically locally trivial and we let $F$ be a fiber.
We note that $\pi\colon X_{\mathbb{C}}\rightarrow \mathbb{P}^{1}_{\mathbb{C}}$ admits a section since $f$ and $g$ do, so does $\pi|_{V}\colon V\rightarrow U$.
Then we have a commutative diagram
\[
\xymatrix{
\pi_{1}(F)\ar[r] &\pi_{1}(V)\ar@{->>}[r]\ar@{->>}[d] &\pi_{1}(U)\ar@{->>}[d]\ar@/_1pc/[l]\\
& \pi_{1}(X_{\mathbb{C}})\ar@{->>}[r]& \pi_{1}(\mathbb{P}^{1}_{\mathbb{C}}) \ar@/_1pc/[l],
}
\]
where the upper row is exact by the homotopy long exact sequence.
Chasing the diagram and using the fact that $\pi_{1}(\mathbb{P}^{1}_{\mathbb{C}})$ is trivial, we are reduced to showing that $\pi_{1}(F)$ has the trivial image in $\pi_{1}(X_{\mathbb{C}})$.
Writing $F=F_{1}\times F_{2}$, where $F_{1}$ is a fiber of $f$ and $F_{2}$ is a fiber of $g$,
we see that the product formula for fundamental groups implies
\[\pi_{1}(F)=\pi_{1}(F_{1})\times \pi_{1}(F_{2}).
\]
Now it is enough to verify that the image of $\pi_{1}(F_{1})$ and $\pi_{1}(F_{2})$ in $\pi_{1}(X_{\mathbb{C}})$ are trivial.
This is immediate since $\pi_{1}(F_{1})\rightarrow \pi_{1}(X_{\mathbb{C}})$ (resp. $\pi_{1}(F_{2})\rightarrow \pi_{1}(X_{\mathbb{C}})$) factors through the fundamental group of a section of $X_{\mathbb{C}}\rightarrow S_{\mathbb{C}}$ (resp. $X_{\mathbb{C}}\rightarrow Y_{\mathbb{C}}$) and
since $S_{\mathbb{C}}$ (resp. $Y_{\mathbb{C}}$) is simply connected (it is rational).
The proof is complete.
\end{proof}
We give a construction of $X$ defined over $\mathbb{Q}$ such that $X(\mathbb{Q})$ is Zariski dense.
We start from constructing $S$ defined over $\mathbb{Q}$ such that the elliptic fibration $f\colon S\rightarrow \mathbb{P}^{1}$ admits infinitely many sections over $\mathbb{Q}$, or equivalently,
the generic fiber $E/\mathbb{Q}(t)$ admits a $\mathbb{Q}(t)$-rational point and the Mordell--Weil group $E(\mathbb{Q}(t))$ has positive rank.
The construction is as follows.
Let $C\subset \mathbb{P}^{2}$ be an elliptic curve defined over $\mathbb{Q}$ with a Zariski dense set of $\mathbb{Q}$-rational points.
Let $O\in C(\mathbb{Q})$ (resp. $P\in C(\mathbb{Q})$) be the origin (resp. a non-torsion point).
Let $D\subset \mathbb{P}^{2}$ be another elliptic curve defined over $\mathbb{Q}$ which passes through both $O$ and $P$ and which intersects $C$ transversally.
Let $S\subset \mathbb{P}^{1}\times \mathbb{P}^{2}$ be the hypersurface of bi-degree $(1,3)$ defined over $\mathbb{Q}$ corresponding to the pencil of elliptic curves generated by $C$ and $D$.
A zero-section of $f\colon S\rightarrow \mathbb{P}^{1}$ is given by the $(-1)$-curve over $O$
and the Mordell--Weil group $E(\mathbb{Q}(t))$ has positive rank since the image of the specialization homomorphism $E(\mathbb{Q}(t))\rightarrow C(\mathbb{Q})$ has positive rank.
We conclude that $S$ has the desired property.
Let $Y$ be smooth, defined over $\mathbb{Q}$, and general so that over any point in $\mathbb{P}^{1}$ either $f$ or $g$ is smooth.
Then $X=S\times_{\mathbb{P}^{1}}Y$ is a Calabi-Yau $n$-fold in a strict sense defined over $\mathbb{Q}$.
Moreover the elliptic fibration $X=S\times_{\mathbb{P}^{1}}Y\rightarrow Y$ admits infinitely many sections over $\mathbb{Q}$ by construction.
We note that the set $Y(\mathbb{Q})$ is Zariski dense since $Y$ is rational over $\mathbb{Q}$.
The following theorem is now immediate:
\begin{thm}\label{t1}
The set $X(\mathbb{Q})$ is Zariski dense.
\end{thm}
\section{Construction II}\label{C2}
We let $X\subset (\mathbb{P}^{1})^{n+1}$ be a smooth hypersurface of multi-degree $(2^{n+1})$.
The following is an immediate consequence of the Lefschetz hyperplane section theorem:
\begin{lem}
If $n\geq 2$, then $X$ is a Calabi-Yau $n$-fold in a strict sense.
\end{lem}
We give a construction of $X$ defined over $\mathbb{Q}$ such that $X(\mathbb{Q})$ is Zariski dense.
We recall that a multi-section of an elliptic fibration is {\it saliently ramified} if it is ramified at a point which lies in a smooth elliptic fiber.
The following theorem is due to Bogomolov-Tschinkel:
\begin{thm}[\cite{BT1,BT2}]\label{BT}
Let $\phi\colon \mathcal{E} \rightarrow B$ be an elliptic fibration over a number field $K$.
If there exists a saliently ramified multi-section $\mathcal{M}$ such that $\mathcal{M}(K)$ is Zariski dense,
then $\mathcal{E}(K)$ is Zariski dense.
\end{thm}
\begin{proof}
We sketch the proof for the convenience of the reader.
It is obvious that the subset $\phi(\mathcal{M}(K))\subset B$ is Zariski dense.
Let $\phi_{\mathcal{J}}\colon \mathcal{J}\rightarrow B$ be the corresponding Jacobian fibration.
We consider a rational map
\[
\tau\colon \mathcal{E}\dashrightarrow \mathcal{J}, \, p\mapsto dp - \mathcal{M}_{\phi(p)},
\]
where $d=\deg(\mathcal{M}/B)$.
Then $\tau(\mathcal{M})$ cannot be contained in the $m$-torsion part $\mathcal{J}[m]$ for any positive integer $m$.
Now Merel's theorem (or Mazur's theorem when $K=\mathbb{Q}$) implies
that there exists a non-empty Zariski open subset $U\subset B$ such that, for any $b\in \phi(\mathcal{M}(K))\cap U$, the image under $\tau$ of a rational point $p_{b}\in \mathcal{M}(K)$ lying over $b$ is non-torsion on the fiber $\mathcal{J}_{b}$.
Finally, the fiberwise action of the Jacobian fibration on $\mathcal{E}$ translates rational points on $\mathcal{M}$, which concludes the proof.
\end{proof}
We start from constructing a smooth hypersurface $X_{1}\subset \mathbb{P}^{1}\times \mathbb{P}^{1}$ of bi-degree $(2,2)$ defined over $\mathbb{Q}$ such that $X_{1}(\mathbb{Q})$ is Zariski dense.
We set
\[
\mathbb{P}^{1}\times \mathbb{P}^{1}=\Proj \mathbb{Q}[S_{1},T_{1}]\times \Proj \mathbb{Q}[S_{2},T_{2}].
\]
For instance, we can take $X_{1}$ to be the hypersurface defined by the equation
\[
S_{1}^2S_{2}T_{2}+S_{1}T_{1}(2S_{2}^2+2S_{2}T_{2}+3T_{2}^{2})+T_{1}^2T_{2}(S_{2}+T_{2})=0.
\]
Then $X_{1}$ is an elliptic curve with a non-torsion point defined over $\mathbb{Q}$, hence $X_{1}(\mathbb{Q})$ is Zariski dense.
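As a quick numerical sanity check (independent of the proof), one can verify that the displayed $(2,2)$-equation has visible rational points and that many fibers of the first projection carry a $\mathbb{Q}$-rational point: for fixed $(S_{2}:T_{2})$ the equation is a binary quadric in $(S_{1}:T_{1})$, solvable over $\mathbb{Q}$ exactly when its discriminant is a square.

```python
from math import isqrt

# Defining polynomial of the (2,2)-hypersurface X_1 in P^1 x P^1.
def F(s1, t1, s2, t2):
    return (s1**2 * s2 * t2
            + s1 * t1 * (2 * s2**2 + 2 * s2 * t2 + 3 * t2**2)
            + t1**2 * t2 * (s2 + t2))

# Some visibly rational points on X_1.
points = [(1, 0, 1, 0), (0, 1, 1, 0), (1, 0, 0, 1), (1, -3, 0, 1)]
assert all(F(*p) == 0 for p in points)

# For fixed (s2 : t2), X_1 is cut out by A*s1^2 + B*s1*t1 + C*t1^2 = 0;
# a Q-rational point exists in the fiber iff B^2 - 4AC is a square.
def fiber_has_rational_point(s2, t2):
    A = s2 * t2
    B = 2 * s2**2 + 2 * s2 * t2 + 3 * t2**2
    C = t2 * (s2 + t2)
    D = B * B - 4 * A * C
    return D >= 0 and isqrt(D) ** 2 == D

good = [(s2, t2) for s2 in range(-20, 21) for t2 in range(1, 21)
        if fiber_has_rational_point(s2, t2)]
print(len(good), "fibers with a rational point found in the sample")
```

This does not reproduce the density statement itself (which rests on the non-torsion point), but it confirms the equation is as intended.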
For $n>1$, we set
\[
(\mathbb{P}^{1})^{n+1}=\Proj \mathbb{Q}[S_{1},T_{1}]\times \cdots \times \Proj \mathbb{Q}[S_{n+1},T_{n+1}]
\]
and inductively define $X_{n} \subset (\mathbb{P}^{1})^{n+1}$ to be a general hypersurface of multi-degree $(2^{n+1})$ defined over $\mathbb{Q}$ containing $X_{n-1}$ as the fiber of the projection $pr_{n+1}\colon X_{n}\rightarrow \mathbb{P}^{1}$ over $T_{n+1}=0$.
\begin{lem}\label{l0}
The hypersurface $X_{n}$ is smooth and $X_{n-1}$ is a saliently ramified multi-section of the elliptic fibration $pr_{1,\cdots,n-1}\colon X_{n}\rightarrow (\mathbb{P}^{1})^{n-1}$.
\end{lem}
\begin{proof}
For smoothness, it is enough to show that $X_{n}$ is smooth around $X_{n-1}$.
This is obvious because $X_{n-1}$ is a fiber of the flat proper morphism $pr_{n+1}\colon X_{n}\rightarrow \mathbb{P}^{1}$ and $X_{n-1}$ is smooth by induction.
We are reduced to showing the second assertion.
Let $B\subset (\mathbb{P}^{1})^{n-1}$ be the branch locus, that is, the set of critical values of $pr_{1,\cdots,n-1}\colon X_{n-1}\rightarrow (\mathbb{P}^{1})^{n-1}$.
We note that this morphism is generically finite, but not finite when $n>3$.
We only need to prove that the fiber of $pr_{1,\cdots, n-1}\colon X_{n}\rightarrow (\mathbb{P}^{1})^{n-1}$ over a general point in $B$ is smooth.
Let $\Sigma\subset X_{n}$ be
the set of critical points
of $pr_{1,\cdots, n-1}\colon X_{n}\rightarrow (\mathbb{P}^{1})^{n-1}$.
By generality, it is sufficient to show that $\dim \Sigma\cap X_{n-1}=n-3$.
This can be checked by a direct computation as follows.
The equation of $X_{n}\subset (\mathbb{P}^{1})^{n+1}$ can be written as
\[
S_{n+1}^{2}F+S_{n+1}T_{n+1}G+T_{n+1}^{2}H=0,
\]
where $F=0$ defines $X_{n-1}\subset (\mathbb{P}^{1})^{n}$.
If we write $F$ as
\[F=S_{n}^{2}F_{1}+S_{n}T_{n}F_{2}+T_{n}^{2}F_{3},
\]
then the set $\Sigma\cap X_{n-1}$ is defined by
\[
T_{n+1}=2S_{n}F_{1}+T_{n}F_{2}=S_{n}F_{2}+2T_{n}F_{3}=G=0.
\]
The first three equations define the ramification locus, that is, the set of critical points of $pr_{1,\cdots, n-1}\colon X_{n-1}\rightarrow (\mathbb{P}^{1})^{n-1}$, which is of dimension $n-2$,
thus the four equations together define a closed subset of dimension $n-3$ by generality, as we wanted.
The proof is complete.
\end{proof}
Now Theorem \ref{BT} implies:
\begin{thm}\label{t2}
The set $X_{n}(\mathbb{Q})$ is Zariski dense for any $n\geq 1$.
\end{thm}
\begin{rem}
For a smooth
hypersurface $X$ in $(\mathbb{P}^{1})^{n+1}$ of multi-degree $(2^{n+1})$, the birational automorphism group $\Bir(X_{\mathbb{C}})$ is infinite by Cantat-Oguiso \cite{CO}.
It
would be possible
to give a proof of the
density of rational points by using the action of $\Bir(X_{\mathbb{C}})$.
\end{rem}
\section{Calabi-Yau threefolds containing an Enriques surface}\label{CY3}
In this section, we work over the complex numbers.
We construct a Calabi-Yau threefold containing an Enriques surface and prove basic properties of the threefold, which will be used in Section \ref{C3}.
Let $\mathbb{P}=\mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3}\oplus \mathcal{O}(1))$.
On the projective bundle $\mathbb{P}$, we consider a map of vector bundles
\[
u\colon \mathcal{O}^{\oplus 3}\rightarrow \mathcal{O}(2H_{1})\oplus \mathcal{O}(2H),
\]
where $H_{1}$ (resp. $H$) is the pull-back of the hyperplane section class on $\mathbb{P}^{2}$ (resp. the tautological class on $\mathbb{P}$).
Let $X$ be the rank one degeneracy locus of $u$.
\begin{lem}\label{l1}
If $u$ is general, $X$ is a Calabi-Yau threefold.
We have the topological Euler characteristic $\chi_{\Top}(X)=c_{3}(T_{X})=-84$ and the Hodge numbers $h^{1,1}(X)=2$ and $h^{1,2}(X)=44$.
\end{lem}
\begin{proof}
Since the vector bundle $\mathcal{O}(2H_{1})\oplus \mathcal{O}(2H)$ is globally generated, $X$ is a smooth threefold by the Bertini theorem for degeneracy loci.
Another projective model of $X$ is defined as the zero set of a naturally defined section of $\mathcal{O}(1)^{\oplus 3}$ on the projective bundle $\widetilde{\mathbb{P}}=\mathbb{P}_{\mathbb{P}}(\mathcal{O}(2H_{1})\oplus \mathcal{O}(2H))$.
Let $\widetilde{H}$ be the tautological class on $\widetilde{\mathbb{P}}$.
The adjunction formula gives $\omega_{X}=\mathcal{O}_{X}(\widetilde{H}-2H)$.
On the other hand, $\widetilde{H}-2H$ is the class of the intersection $X\cap \mathbb{P}_{\mathbb{P}}(\mathcal{O}(2H_{1}))$, which is empty, thus we have $\mathcal{O}_{X}(\widetilde{H}-2H)=\mathcal{O}_{X}$.
It follows that $\omega_{X}=\mathcal{O}_{X}$.
The rest of the statement is a consequence of
a direct computation using the conormal exact sequence and the Koszul resolution of the ideal sheaf of $X$ in $\widetilde{\mathbb{P}}$.
The proof is complete.
\end{proof}
We assume that $u$ is general in what follows.
\begin{lem}\label{l2}
The threefold $X$ admits an elliptic fibration $\phi \colon X\rightarrow \mathbb{P}^{2}$.
Moreover, $X$ contains an Enriques surface $S$ and the linear system $|2S|$ defines a K3 fibration $\psi\colon X\rightarrow \mathbb{P}^{1}$.
\end{lem}
\begin{rem}
There are several examples of Calabi-Yau threefolds containing an Enriques surface.
For instance, see Borisov-Nuer \cite{BN}.
\end{rem}
\begin{proof}[Proof of Lemma \ref{l2}]
A natural projection $\mathbb{P}\rightarrow \mathbb{P}^{2}$ restricts to a surjection $\phi\colon X\rightarrow \mathbb{P}^{2}$ with the geometric generic fiber a complete intersection of two quadrics in $\mathbb{P}^{3}$, which is an elliptic curve.
The morphism $\phi$ has equidimensional fibers, hence it is flat.
Moreover, the map $u$ restricts to a map of vector bundles
\[
v\colon \mathcal{O}^{\oplus 3}\rightarrow \mathcal{O}(2,0)\oplus \mathcal{O}(0,2)
\]
on $\mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3})=\mathbb{P}^{2}\times \mathbb{P}^{2}$.
The intersection $X\cap \mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3})$ is the rank one degeneracy locus of $v$, which is an Enriques surface $S$
by generality
(see \cite[Lemma 2.1]{OS}).
Then the linear system $|2S|$ defines a K3 fibration $\psi\colon X\rightarrow \mathbb{P}^{1}$ by \cite[Proposition 8.1]{BN}.
The proof is complete.
\end{proof}
\begin{cor}
The threefold $X$ contains no abelian surface.
\end{cor}
\begin{proof}
If $X$ contains an abelian surface $A$, then it is easy to see that the linear system $|A|$ defines an abelian surface fibration $\eta \colon X\rightarrow \mathbb{P}^{1}$.
Then pulling back $\NS(\mathbb{P}^{1}\times \mathbb{P}^{1})_{\mathbb{R}}$ by $(\psi, \eta)\colon X\rightarrow \mathbb{P}^{1}\times \mathbb{P}^{1}$ defines a two-dimensional linear subspace of $\NS(X)_{\mathbb{R}}$ which does not contain an ample divisor.
Therefore we would have $\rho(X)\geq 3$.
This is impossible since we have $\rho(X)=h^{1,1}(X)=2$ by Lemma \ref{l1}.
\end{proof}
\begin{lem}\label{l3}
$\Nef(X)=\overline{\Eff(X)}=\mathbb{R}_{\geq 0}[H_{1}]\oplus \mathbb{R}_{\geq 0}[S]$.
\end{lem}
\begin{proof}
Since the linear systems $|H_{1}|$ and $|2S|$ define the fibrations $\phi\colon X\rightarrow \mathbb{P}^{2}$ and $\psi\colon X\rightarrow \mathbb{P}^{1}$ respectively,
the divisors $H_{1}$ and $S$ are semi-ample but not big, so their classes give extremal rays in $\overline{\Eff(X)}$.
This finishes the proof.
\end{proof}
\begin{cor}
The threefold $X$ is a unique minimal model in its birational equivalence class.
Moreover, $\Bir(X)=\Aut(X)$ and these groups are finite.
\end{cor}
\begin{proof}
Let $f\colon X\dashrightarrow X'$ be a birational map with $X'$ a minimal model.
Then $f$ can be decomposed into a sequence of flops by a result of Kawamata \cite[Theorem 1]{Ka2}.
Note that any flopping contraction of a Calabi-Yau variety is given by a codimension one face of the nef cone (see \cite[Theorem 5.7]{Ka1}).
Since the codimension one faces $\mathbb{R}_{\geq 0}[H_{1}]$ and $\mathbb{R}_{\geq 0}[S]$ of $\Nef(X)$ give fibrations, it follows that $f$ is in fact an isomorphism.
For the last statement, we note that Oguiso proved that the automorphism group of any odd-dimensional Calabi-Yau variety in a wider sense with $\rho=2$ is finite.
The proof is complete.
\end{proof}
\begin{prop}\label{p}
The threefold $X$ is simply connected.
\end{prop}
\begin{proof}
Applying \cite[Lemma 5.2.2]{K} to the K3 fibration $\psi\colon X\rightarrow \mathbb{P}^{1}$, whose smooth fibers are simply connected, we are reduced to showing that $2S$ is the only multiple fiber of $\psi$.
The K3 fibration $\psi\colon X\rightarrow \mathbb{P}^{1}$ and the morphism $X\rightarrow \mathbb{P}^{5}$ given by the linear system $|H|$ induce a morphism $X\rightarrow \mathbb{P}^{1}\times \mathbb{P}^{5}$,
which is the blow-up of a non-normal complete intersection $X_{0}$ in $\mathbb{P}^{1}\times \mathbb{P}^{5}$ of three hypersurfaces of bi-degree $(1,2)$ along the non-normal locus with the exceptional divisor $S$.
Now we only need to prove that the first projection $pr_{1}\colon X_{0}\rightarrow \mathbb{P}^{1}$ admits no multiple fibers.
This follows from the Lefschetz hyperplane section theorem.
The proof is complete.
\end{proof}
\section{Construction III}\label{C3}
We recall a system of affine equations for an Enriques surface introduced by Colliot-Th\'el\`ene--Skorobogatov--Swinnerton-Dyer \cite{CTSSD}.
\begin{prop}[\cite{CTSSD}, Proposition 4.1, Example 4.1.2; \cite{L}, Proposition 1.1]\label{CTSSD}
Let $k$ be a field of characteristic zero.
Let
$c, d, f\in k[t]$
be polynomials of degree $2$ such that $c\cdot d\cdot (c-d)\cdot f$ is separable.
Let $S^{0}$ be the affine variety in
$\mathbb{A}^{4}=\Spec k[t, u_{1}, u_{2}, u_{3}]$
defined by
\[
u_{1}^{2}-c(t) = f(t)u_{2}^{2}, \,u_{1}^{2}-d(t) = f(t)u_{3}^{2}.
\]
Then a minimal smooth projective model $S$ of $S^{0}$ is an Enriques surface.
The projection
$S^{0} \rightarrow \mathbb{A}^{1}=\Spec k[t]$
induces an elliptic fibration $S\rightarrow \mathbb{P}^{1}$ with reduced discriminant $c\cdot d \cdot (c-d)\cdot f$
which admits double fibers over $f=0$ whose reductions are smooth elliptic curves.
\end{prop}
Now we apply Proposition \ref{CTSSD} to
$k=\mathbb{Q}$ and
\[c=-3t^{2}-2t+8,\, d=\frac{t^{2}-15t+16}{2},\, f=\frac{t^{2}+1}{2}.\]
Let $S$ be the corresponding Enriques surface.
We prove:
\begin{lem}\label{l6}
The surface $S$ has a Zariski dense set of $\mathbb{Q}$-rational points.
\end{lem}
\begin{proof}
We follow the strategy of the proof of \cite[Proposition 5]{Sko}.
It will be easier to work on the K3 double cover $\widetilde{S}$.
By setting
\[w^{2}=\frac{t^{2}+1}{2},\, v_{2}=wu_{2},\, v_{3}=wu_{3},\]
we obtain the defining equations of its affine model $\widetilde{S}^{0}$ in
$\mathbb{A}^{5}=\Spec \mathbb{Q}[t,u_{1},v_{2},v_{3},w]$:
\[
u_{1}^{2}-(-3t^{2}-2t+8) = v_{2}^{2}, \, u_{1}^{2}-\frac{t^{2}-15t+16}{2} = v_{3}^{2},\, \frac{t^{2}+1}{2}=w^{2}.
\]
The projection
$\widetilde{S}^{0}\rightarrow C^{0}=\Spec\mathbb{Q}[t,w]/(\frac{t^{2}+1}{2}-w^{2})$
defines an elliptic fibration $\widetilde{S}\rightarrow C$ with reduced discriminant $c\cdot d\cdot (c-d)$.
We consider the curve $E^{0}\subset \widetilde{S}^{0}$ cut out by
\[
u_{1}=t-3,\,v_{2}=2t-1,
\]
which is isomorphic to the affine curve in
$\mathbb{A}^{3}=\Spec \mathbb{Q}[t,v_{3}, w]$
defined by
\[
\frac{1}{2}(t+1)(t+2)=v_{3}^{2},\, \frac{t^{2}+1}{2}=w^{2}.
\]
Then $E^{0}$ gives an elliptic curve $E$.
We prove that $E(\mathbb{Q})$ is dense.
Let $O\in E$ be the point given by
\[
t=-1,\, v_{3}=0, \, w=-1
\]
and $P\in E$ be the point given by
\[
t=7,\, v_{3}=-6,\, w=5.
\]
We only need to prove that $P-O\in \Pic^{0}(E)$ is of infinite order.
It is a simple matter to check that the map
\[
(t,v_{3},w)\mapsto \left(\frac{2(3t-4w-1)}{t-2w-1},\frac{8v_{3}}{t-2w-1}\right)
\]
gives a transformation into the Weierstrass model
\[
y^{2}=x^{3}-52x+144
\]
and sends $O$ (resp. $P$) to the point at infinity (resp. $(x,y)=(0,12)$).
Thus it is enough to verify that $(x,y)=(0,12)$ defines a point of infinite order.
This follows from a theorem of Lutz and Nagell \cite[VIII. Corollary 7.2]{S}
since the $y$-value $12$ is non-zero and $12^{2}$ does not divide $4\cdot (-52)^{3}+27\cdot 144^{2}$.
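These arithmetic claims are routine to verify by hand; the following script (an independent check, not part of the argument) confirms that $O$ and $P$ lie on $E^{0}$, that the displayed change of variables carries $P$ to $(0,12)$ on $y^{2}=x^{3}-52x+144$ (the affine formula is undefined at $O$, whose image is the point at infinity), and the Nagell--Lutz divisibility.

```python
from fractions import Fraction as Q

# Affine equations of E^0: (1/2)(t+1)(t+2) = v3^2 and (t^2+1)/2 = w^2.
def on_E0(t, v3, w):
    t, v3, w = Q(t), Q(v3), Q(w)
    return Q(1, 2) * (t + 1) * (t + 2) == v3**2 and (t**2 + 1) / 2 == w**2

O = (-1, 0, -1)
P = (7, -6, 5)
assert on_E0(*O) and on_E0(*P)

# The transformation into the Weierstrass model y^2 = x^3 - 52x + 144.
def to_weierstrass(t, v3, w):
    den = Q(t) - 2 * Q(w) - 1
    return (2 * (3 * Q(t) - 4 * Q(w) - 1) / den, 8 * Q(v3) / den)

x, y = to_weierstrass(*P)
assert (x, y) == (0, 12)
assert y**2 == x**3 - 52 * x + 144
# At O the denominator t - 2w - 1 vanishes: O goes to the point at infinity.
assert Q(O[0]) - 2 * Q(O[2]) - 1 == 0

# Nagell-Lutz: a torsion point has y = 0 or y^2 dividing 4a^3 + 27b^2.
disc_part = 4 * (-52)**3 + 27 * 144**2
assert disc_part == -2560 and disc_part % 12**2 != 0
print("all checks pass; (0, 12) has infinite order")
```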
Moreover, $E$ is a saliently ramified multi-section of the elliptic fibration $\widetilde{S}\rightarrow C$.
Indeed, it is easy to verify that $E\rightarrow C$ is branched over
\[t=-1,\, w=\pm 1,
\]
while $t=-1$ is not a root of the reduced discriminant
\[
c\cdot d\cdot (c-d)= (-3t^{2}-2t+8)\left(\frac{t^{2}-15t+16}{2}\right)\left(\frac{-7t^{2}+11t}{2}\right).
\]
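The salient ramification is likewise a one-line arithmetic check (again independent of the proof): $t=-1$ lies under a branch point of $E\rightarrow C$, while none of the three discriminant factors vanishes there.

```python
from fractions import Fraction as Q

def c(t): return -3 * t**2 - 2 * t + 8
def d(t): return Q(t**2 - 15 * t + 16, 2)

# The factor c - d agrees with the displayed formula (-7t^2 + 11t)/2.
assert all(c(s) - d(s) == Q(-7 * s**2 + 11 * s, 2) for s in range(-5, 6))

t = -1
values = (c(t), d(t), c(t) - d(t))
# c(-1) = 7, d(-1) = 16, (c-d)(-1) = -9: the fiber over t = -1 is smooth.
assert values == (7, 16, -9)
assert all(v != 0 for v in values)
```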
Now Theorem \ref{BT} shows that $\widetilde{S}(\mathbb{Q})$ is dense.
This in turn implies that $S(\mathbb{Q})$ is dense.
The proof is complete.
\end{proof}
We define a compactification of $S^{0}$ in $\mathbb{P}^{2}\times \mathbb{P}^{2}$ as follows.
We set
\[
\mathbb{P}^{2}\times \mathbb{P}^{2}=\Proj\mathbb{Q}[X_{0},X_{1},X_{2}]\times \Proj\mathbb{Q}[Y_{0},Y_{1},Y_{2}].
\]
On $\mathbb{P}^{2}\times \mathbb{P}^{2}$, we consider the map of vector bundles
\[
v\colon \mathcal{O}^{\oplus 3}\rightarrow \mathcal{O}(2,0)\oplus \mathcal{O}(0,2)
\]
given by the $2\times 3$ matrix
\[
\left(
\begin{array}{ccc}
X_{0}^{2}& X_{1}^{2}& X_{2}^{2}\\
2Y_{0}^{2}+6Y_{1}^{2}+4Y_{1}Y_{2}-16Y_{2}^{2} & 2Y_{0}^{2}-Y_{1}^{2}+15Y_{1}Y_{2}-16Y_{2}^{2} & Y_{1}^{2}+Y_{2}^{2}
\end{array}
\right).
\]
Let $S'$ be the rank one degeneracy locus of $v$.
It is straightforward to see that $S'$ is indeed a compactification of $S^{0}$.
The surface $S'$ is a local complete intersection, so in particular, Gorenstein.
The surface $S'$ has isolated singular points and blowing up along the points gives a crepant resolution $S\rightarrow S'$.
Finally, we give the construction of the Calabi-Yau threefold.
On the projective bundle $\mathbb{P}=\mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3}\oplus \mathcal{O}(1))$, we let
\[
u\colon \mathcal{O}^{\oplus 3}\rightarrow \mathcal{O}(2H_{1})\oplus \mathcal{O}(2H)
\]
be
general among all maps defined over $\mathbb{Q}$ which restrict to $v$ on $\mathbb{P}_{\mathbb{P}^{2}}(\mathcal{O}^{\oplus 3})=\mathbb{P}^{2}\times \mathbb{P}^{2}$.
We define $X$ to be the rank one degeneracy locus of $u$.
\begin{thm}\label{D}
The threefold $X$ is a Calabi-Yau threefold in a strict sense defined over $\mathbb{Q}$ with a Zariski dense set of $\mathbb{Q}$-rational points.
Moreover, $X$ satisfies the following geometric properties:
\begin{enumerate}
\item $X_{\mathbb{C}}$ admits K3 and elliptic fibrations;
\item $X_{\mathbb{C}}$ contains no abelian surface;
\item $X_{\mathbb{C}}$ is a unique minimal model in its birational equivalence class;
\item $\Bir(X_{\mathbb{C}})=\Aut(X_{\mathbb{C}})$ and these groups are finite.
\end{enumerate}
\end{thm}
\begin{proof}
One can check that $X$ is smooth and the proofs in Section \ref{CY3} still go through.
It remains to show that $X(\mathbb{Q})$ is Zariski dense.
By a similar argument to that in Lemma \ref{l0}, $S'$ is a saliently ramified multi-section of the elliptic fibration $\phi \colon X\rightarrow \mathbb{P}^{2}$.
Since $S'(\mathbb{Q})$ is dense by Lemma \ref{l6}, the result follows from Theorem \ref{BT}.
The proof is complete.
\end{proof}
One can be more explicit.
We fix a non-zero element $Z\in H^{0}(\mathbb{P},\mathcal{O}_{\mathbb{P}}(H-H_{1}))$.
Let $u$ be given by the matrix
\[
\left(
\begin{array}{ccc}
P_{1}& Q_{1}& R_{1}\\
P_{2} & Q_{2}& R_{2}
\end{array}
\right),
\]
where
\[
P_{1}=X_{0}^{2},\, Q_{1}=X_{1}^{2},\, R_{1}=X_{2}^{2}
\]
and
\begin{align*}
P_{2}&=(X_{1}^{2}+X_{2}^{2})Z^{2}+(X_{1}Y_{1}+X_{2}Y_{2})Z+2Y_{0}^{2}+6Y_{1}^{2}+4Y_{1}Y_{2}-16Y_{2}^{2},\\
Q_{2}&=(X_{0}^{2}+X_{2}^{2})Z^{2}+(X_{0}Y_{0}+X_{2}Y_{2})Z+2Y_{0}^{2}-Y_{1}^{2}+15Y_{1}Y_{2}-16Y_{2}^{2},\\
R_{2}&=(X_{0}^{2}+X_{1}^{2})Z^{2}+(X_{0}Y_{0}+X_{1}Y_{1})Z+Y_{1}^{2}+Y_{2}^{2}.
\end{align*}
Then {\it Macaulay 2} shows that the corresponding $X$ is smooth and that the discriminant locus $\Delta\subset \mathbb{P}^{2}$ of $\phi\colon X\rightarrow \mathbb{P}^{2}$ and the branch locus $B\subset \mathbb{P}^{2}$ of $\phi|_{S}\colon S\rightarrow \mathbb{P}^{2}$ meet properly.
Consequently, the set $X(\mathbb{Q})$ is Zariski dense.
\section{Introduction}
The aim of this paper is to define higher categorical invariants (gerbes) of codimension two algebraic cycles and to provide a categorical interpretation of the intersection of divisors on a smooth proper algebraic variety. This generalization of the classical relation between divisors and line bundles furnishes a new perspective on the classical Bloch-Quillen formula relating Chow groups and algebraic K-theory.
Our work is motivated by the following three basic questions.
\begin{enumerate}
\item Let $A$ and $B$ be abelian sheaves on a manifold (or algebraic variety) $X$. Given $\alpha \in H^1(X,A)$ and $\beta \in H^1(X,B)$, one has their cup-product $\alpha \cup \beta \in H^2(X, A\otimes B)$. We recall that $H^1$ and $H^2$ classify equivalence classes of torsors and gerbes\footnote{For us, the term "gerbe" signifies a stack in groupoids which is locally non-empty and locally connected (\S \ref{sec:gerbes}). It is slightly different from the ancient gerbes of \textit{Acids, alkalies and salts: their manufacture and applications, Volume 2} (1865) by Thomas Richardson and Henry Watts, pp. 567-569:
\begin{quote}
\S 4. Gerbes
This firework is made in various ways, generally throwing up a luminous and sparkling jet of fire, somewhat resembling a
water-spout: hence its name. Gerbes consist of a straight, cylindrical case, sometime made with wrought iron (if the gerbe is
of large dimensions). ... Mr. Darby has invented an entirely novel and beautiful gerbe, called the Italian gerbe..."
\end{quote}}:
\begin{align*}
H^1(X, A) \quad &\longleftrightarrow \quad \text{Isomorphism classes of $A$-torsors} \\
H^2(X, A) \quad &\longleftrightarrow \quad \text{Isomorphism classes of $A$-gerbes};
\end{align*}
we may pick torsors $P$ and $Q$ representing $\alpha$ and $\beta$ and ask
\begin{question}\label{Q1}
Given $P$ and $Q$, is there a natural construction of a gerbe $G_{P,Q}$ which manifests the cohomology class $\alpha \cup\beta =[P] \cup [Q]$?
\end{question}
The above question admits the following algebraic-geometric analogue.
\item Let $X$ be a smooth proper variety over a field $F$. Let $Z^i(X)$ be the abelian group of algebraic cycles of codimension $i$ on $X$ and let $CH^i(X)$ be the Chow group of algebraic cycles of codimension $i$ modulo rational equivalence. The isomorphism
\begin{equation*}
CH^1(X) \overset{\sim}{\too} H^1(X, \mathcal{O}^*)
\end{equation*}
connects (Weil) divisors and invertible sheaves (or $\mathbb{G}_m$-torsors). While divisors form a group, $\mathbb{G}_m$-torsors on $X$ form a Picard category $\tors_X(\mathbb{G}_m)$ with the monoidal structure provided by the Baer sum of torsors. Any divisor $D$ determines a $\mathbb{G}_m$-torsor $\mathcal{O}_D$; the torsor
$\mathcal{O}_{D +D'}$ is isomorphic to the Baer sum of $\mathcal{O}_D$ and $\mathcal{O}_{D'}$. In other words, one has an additive map \cite[II, Proposition 6.13]{Hartshorne}
\begin{equation}\label{zee-one}
Z^1(X) \to \tors_X(\mathbb{G}_m) \qquad D \mapsto \mathcal{O}_D.
\end{equation}
\begin{question}\label{Q2}
What is a natural generalization of \eqref{zee-one} to higher codimension cycles?
\end{question}
Since $\tors_X(\mathbb{G}_m)$ is a Picard category, one could expect the putative additive maps on $Z^i(X)$ to land in Picard categories or their generalizations.
\begin{question}\label{Q3} Is there a categorification of the intersection pairing
\begin{equation}\label{int-pairing}
CH^1(X) \times CH^1(X) \to CH^2(X)?
\end{equation}
\end{question}
More generally, one can ask for a categorical interpretation of the entire Chow ring of $X$.
\end{enumerate}
\subsection*{Main results}
Our first result is an affirmative answer to Question \ref{Q1}; the key observation is that a certain Heisenberg group \emph{animates} the cup-product.
\begin{theorem}
\label{primo}
Let $A, B$ be abelian sheaves on a topological space or scheme $X$.
\begin{enumerate}
\item
There is a canonical functorial Heisenberg\footnote{The usual Heisenberg group, a central extension of $A \times B$ by $\mathbb C^*,$ arises from a biadditive map $A \times B \to \mathbb C^*$.} sheaf $H_{A,B}$ on $X$ which sits in an exact sequence
\begin{equation*}
0 \to A\otimes B \to H_{A,B} \to A \times B \to 0;
\end{equation*}
the sheaf $H_{A,B}$ (of non-abelian groups) is a central extension of $A \times B$ by $A\otimes B$.
\item The associated boundary map
\begin{equation*}
\partial: H^1(X, A)\times H^1(X,B) = H^1(X, A \times B) \to H^2(X, A\otimes B)
\end{equation*}
sends the class $(\gamma, \delta)$ to the cup-product $\gamma \cup \delta$.
\item Given torsors $P$ and $Q$ for $A$ and $B$, view $P\times Q$ as a $A\times B$-torsor on $X$. Let $\mathcal{G}_{P,Q}$ be the gerbe of local liftings (see \S \ref{lifting}) of $P\times Q$ to a $H_{A,B}$-torsor; its band is $A\otimes B$ and its class in $H^2(X, A\otimes B)$ is $[P]\cup[Q]$.
\item The gerbe $\mathcal{G}_{P,Q}$ is covariant functorial in $A$ and $B$ and contravariant functorial in $X$.
\item The gerbe $\mathcal{G}_{P,Q}$ is trivial (equivalent to the stack of $A\otimes B$-torsors) if either $P$ or $Q$ is trivial.
\end{enumerate}
\end{theorem}
We prove this theorem over a general site $\C$. We also provide a natural interpretation of the (class of the) Heisenberg sheaf in terms of maps of
Eilenberg-Mac~Lane objects in \S \ref{sec:cup}; it is astonishing that the explicit cocycle \eqref{eq:3} for the Heisenberg group (when $X =$ a point) turns out to coincide with the map on the level of Eilenberg-Mac~Lane objects over a general site $\C$; cf. \ref{prop:cup}.
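To make the construction concrete in the simplest case ($X$ a point, $A=B=\mathbb{Z}/n$, so $A\otimes B\cong\mathbb{Z}/n$), the following sketch assumes the standard bilinear $2$-cocycle $((a,b),(a',b'))\mapsto a\otimes b'$ — our reading of the cocycle \eqref{eq:3}, not a quotation of it — and checks that the resulting extension is central and that the commutator of the two natural lifts recovers the tensor (cup) product, which is the sense in which the Heisenberg group animates it.

```python
from itertools import product

n = 5  # A = B = A tensor B = Z/n; elements of H are triples (c, a, b)

def mul(g, h):
    # Central-extension multiplication with 2-cocycle ((a,b),(a',b')) -> a*b'.
    (c1, a1, b1), (c2, a2, b2) = g, h
    return ((c1 + c2 + a1 * b2) % n, (a1 + a2) % n, (b1 + b2) % n)

def inv(g):
    c, a, b = g
    return ((-c + a * b) % n, (-a) % n, (-b) % n)

e = (0, 0, 0)
G = list(product(range(n), repeat=3))

# Group axioms: associativity (on a sample) and two-sided inverses.
assert all(mul(mul(g, h), k) == mul(g, mul(h, k))
           for g in G[:10] for h in G[:10] for k in G[:10])
assert all(mul(g, inv(g)) == e for g in G)

# The subgroup A tensor B = {(c, 0, 0)} is central.
assert all(mul((c, 0, 0), g) == mul(g, (c, 0, 0)) for c in range(n) for g in G)

# The commutator of the lifts of (a, 0) and (0, b) is (a tensor b, 0, 0).
for a in range(n):
    for b in range(n):
        g, h = (0, a, 0), (0, 0, b)
        comm = mul(mul(g, h), mul(inv(g), inv(h)))
        assert comm == ((a * b) % n, 0, 0)
print("Heisenberg checks pass for n =", n)
```

The last assertion is the finite avatar of Theorem \ref{primo}(2): the failure of commutativity in $H_{A,B}$ is measured exactly by the pairing $A\times B\to A\otimes B$.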
Here is another rephrasing of Theorem \ref{primo}: For abelian sheaves $A$ and $B$ on a site $\C$, there is a natural bimonoidal functor
\begin{equation}\label{bimon} \tors _{\C}(A)\times \tors _{\C}(B) \too \ger _{\C} (A\otimes B) \qquad (P,Q) \mapsto \mathcal{G}_{P,Q}\end{equation}
where $\tors_{\C}(A)$, $\tors_{\C}(B)$ are the Picard categories of $A$ and $B$-torsors on $\C$ and $\ger_{\C} (A\otimes B)$ is the Picard $2$-category of $A\otimes B$-gerbes on $\C$. Thus, Theorem \ref{primo} constitutes a categorification of the cup-product map
\begin{equation}
\cup: H^1(A) \times H^1(B) \to H^2(A\otimes B).
\end{equation}
Let us turn to Questions \ref{Q2} and \ref{Q3}. Suppose that $D$ and $D'$ are divisors on $X$ which intersect in the codimension-two cycle $D.D'$. Applying Theorem \ref{primo} to $\mathcal{O}_D$ and $\mathcal{O}_{D'}$ with $A = B =\mathbb{G}_m$, one has a $\mathbb{G}_m \otimes \mathbb{G}_m$-gerbe $\mathcal{G}_{D, D'}$ on $X$.
We now invoke the isomorphisms (the second is the fundamental Bloch-Quillen isomorphism)
\begin{equation*}
\mathbb{G}_m \overset{\sim}{\too} \mathcal{K}_1, \qquad CH^i(X) \underset{\eqref{BQ}}{\overset{\sim}{\too}} H^i(X, \mathcal{K}_i)
\end{equation*}
where $\mathcal{K}_i$ is the Zariski sheaf associated with the presheaf $U \mapsto K_i(U)$.
Pushforward of $\mathcal{G}_{D, D'}$ along $\mathcal{K}_1 \times \mathcal{K}_1 \to \mathcal{K}_2$ gives a $\mathcal{K}_2$-gerbe still denoted $\mathcal{G}_{D, D'}$; we call this the Heisenberg gerbe attached to the codimension-two cycle $D.D'$. This raises the possibility of relating $\mathcal{K}_2$-gerbes and codimension-two cycles on $X$, implicit in \eqref{BQ}.
\begin{theorem}\label{secondo} (i) Any codimension-two cycle $\alpha \in Z^2(X)$ determines a $\mathcal{K}_2$-gerbe $\mathcal{C}_{\alpha}$ on $X$.
(ii) The class of $\mathcal{C}_{\alpha}$ in $H^2(X, \mathcal{K}_2)$ corresponds to $\alpha\in CH^2(X)$ under the Bloch-Quillen map \eqref{BQ}.
(iii) The gerbe $\mathcal{C}_{\alpha + \alpha'}$ is equivalent to the Baer sum of $\mathcal{C}_{\alpha}$ and $\mathcal{C}_{\alpha'}$.
(iv) The gerbes $\mathcal{C}_{\alpha}$ and $\mathcal{C}_{\alpha'}$ are equivalent as $\mathcal{K}_2$-gerbes if and only if $\alpha =\alpha'$ in $CH^2(X)$.
\end{theorem}
The Gersten gerbe $\mathcal{C}_{\alpha}$ of $\alpha$ admits a geometric description, closely analogous to that of the $\mathbb{G}_m$-torsor $\mathcal{O}_D$ of a divisor $D$; see Remark \ref{empty}. The Gersten sequence \eqref{gersten} is key to the construction of $\mathcal{C}_{\alpha}$. One has an additive map
\begin{equation}\label{zee-two} Z^2(X) \to \ger_X(\mathcal{K}_2) \qquad \alpha \mapsto \mathcal{C}_{\alpha}.\end{equation}
When $\alpha = D.D'$ is the intersection of two divisors, there are two $\mathcal{K}_2$-gerbes attached to it: the Heisenberg gerbe $\mathcal{G}_{D,D'}$ and the Gersten gerbe $\mathcal{C}_{\alpha}$; these are abstractly equivalent as their classes in $H^2(X, \mathcal{K}_2)$ correspond to $\alpha$. More is possible.
\begin{theorem}\label{terzo}
If $\alpha \in Z^2(X)$ is the intersection $D.D'$ of divisors $D, D' \in Z^1(X)$, then there is a natural equivalence $\Theta: \mathcal{C}_{\alpha} \to \mathcal{G}_{D, D'}$ between the Gersten and Heisenberg $\mathcal{K}_2$-gerbes attached to $\alpha = D.D'$.
\end{theorem}
Thus, Theorems \ref{primo}, \ref{secondo}, \ref{terzo} together provide the following commutative diagram thereby answering Question \ref{Q3}:
\begin{equation*}
\begin{tikzcd}
Z^1(X) \times Z^1(X) \arrow[r, dotted, "{\textrm{no~map}}"] \arrow{d}{\eqref{zee-one}} & Z^2(X) \arrow{d}{\eqref{zee-two}}\\
\tors_X(\mathbb{G}_m) \times \tors_X(\mathbb{G}_m) \arrow{r}{\eqref{bimon}} \arrow{d} & \ger_X(\mathcal{K}_2) \arrow{d}\\
CH^1(X) \times CH^1(X) \arrow{r}{\eqref{int-pairing}} & CH^2(X).
\end{tikzcd}
\end{equation*}
We begin with a review of the basic notions and tools (lifting gerbe, four-term
complexes) in \S \ref{sec:prel} and then present the construction and properties
of the Heisenberg group in \S \ref{sec:heis} before proving Theorem \ref{primo}.
After a quick discussion of various examples in \S \ref{sec:Examples}, we turn
to codimension-two algebraic cycles in \S \ref{sec:codim2} and construct the
Gersten gerbe $\mathcal{C}_{\alpha}$ and prove Theorems \ref{secondo}, \ref{terzo} using
the tools in \S \ref{sec:prel}.
\subsection*{Dictionary for codimension two cycles}
The above results indicate the viability of
viewing $\mathcal{K}_2$-gerbes as natural invariants of codimension-two cycles on
$X$. Additional evidence is given by the following points:\footnote{Let $\eta \colon \textrm{Spec}~F_X\to X$ be the generic point of $X$ and write $K_i^{\eta}$ for the sheaf $\eta_* K_i(F_X)$; one has the map $\mathcal{K}_i \to K_i^{\eta}$.}
\begin{itemize}
\item $\mathcal{K}_2$-gerbes are present (albeit implicitly) in the Bloch-Quillen formula \eqref{BQ} for $i=2$.
\item The Picard category $\mathfrak P =\tors_X(\mathbb{G}_m)$ of $\mathbb{G}_m$-torsors on $X$ satisfies
\begin{equation*}
\pi_1(\mathfrak P) = H^0(X, \mathcal{O}^*) = CH^1(X, 1), \qquad \pi_0(\mathfrak P) = H^1(X, \mathcal{O}^*)= CH^1(X).
\end{equation*}
Similarly, the Picard $2$-category $\mathfrak C = \ger_X (\mathcal{K}_2)$ of $\mathcal{K}_2$-gerbes is closely related to Bloch's higher Chow complex \cite{Bloch2} in codimension two:
\begin{equation*}
\pi_2(\mathfrak C) = H^0(X, \mathcal{K}_2)= CH^2(X,2) ,\quad \pi_1(\mathfrak C) = H^1(X, \mathcal{K}_2) = CH^2(X, 1), \quad \pi_0(\mathfrak C) = H^2(X, \mathcal{K}_2) \overset{\eqref{BQ}}{=} CH^2(X).
\end{equation*}
\item The additive map arising from Theorem \ref{secondo}
\begin{equation*}
Z^2(X) \to \ger_X(\mathcal{K}_2), \qquad \alpha \mapsto \mathcal{C}_{\alpha}
\end{equation*}
gives the Bloch-Quillen isomorphism \eqref{BQ} on the level of $\pi_0$. It provides an answer to Question \ref{Q2} for codimension two cycles.
\item The Gersten gerbe $\mathcal{C}_{\alpha}$ admits a simple algebro-geometric description (Remark \ref{trivial-eta}):
Any $\alpha$ determines a $K_2^{\eta}/{\mathcal{K}_2}$-torsor; then $\mathcal{C}_{\alpha}$ is the gerbe of liftings of this torsor to a $K_2^{\eta}$-torsor on $X$.
\item The gerbe $\mathcal{C}_{\alpha}$ is canonically trivial outside of the support of $\alpha$ (Remark \ref{trivial-eta}).
\item Pushing the Gersten gerbe $\mathcal{C}_{\alpha}$ along the map $\mathcal{K}_2 \to \Omega^2$ produces an $\Omega^2$-gerbe which manifests the (de Rham) cycle class of $\alpha$ in $H^2(X, \Omega^2)$.
\end{itemize}
The map \eqref{zee-one} is a part of the marvellous dictionary \cite[II,
\S6]{Hartshorne} arising from the divisor sequence~\eqref{gersten1}:
\begin{equation*}
\text{Divisors}\quad \longleftrightarrow \quad \text{Cartier~divisors}\quad
\longleftrightarrow \quad \text{$\mathcal{K}_1$-torsors} \quad \longleftrightarrow \quad\text{Line~bundles}\quad \longleftrightarrow \quad \text{Invertible~sheaves}.
\end{equation*}
More generally, from the Gersten sequence~\eqref{gersten} we obtain the following:
\begin{gather*}
Z^1(X) \overset{\cong}{\too} H^0(X, K_1^{\eta}/\mathcal{K}_1) \twoheadrightarrow H^1(X,\mathcal{K}_1) \cong
CH^1(X) \\
Z^2(X) \twoheadrightarrow H^1(X, K_2^{\eta}/\mathcal{K}_2) \overset{\cong}{\too} H^2(X,\mathcal{K}_2) \cong CH^2(X) .
\end{gather*}
Inspired by this and by \cite[Definition 3.2]{Bloch}, we call $K_2^{\eta}/\mathcal{K}_2$-torsors \emph{codimension-two Cartier cycles} on $X$. Thus the analog for codimension two cycles of the above dictionary reads
\begin{equation*}
\text{Codimension two cycles}\quad \longleftrightarrow \quad \text{Cartier~cycles}\quad \longleftrightarrow \quad\text{$\mathcal{K}_2$-gerbes}.
\end{equation*}
Since the Gersten sequence \eqref{gersten} exists for all $\mathcal{K}_i$, it is possible to generalize Theorem \ref{secondo} to higher codimensions thereby answering Question \ref{Q2}; however, this involves \textit{higher gerbes}. Any cycle of codimension $i>2$ determines a higher gerbe \cite{Breen94} with band $\mathcal{K}_i$ (see \S \ref{end} for an example); this provides a new perspective on the Bloch-Quillen formula \eqref{BQ}. The higher dimensional analogues of \eqref{bimon}, \eqref{int-pairing}, and Theorem \ref{secondo} will be pursued elsewhere.
Other than the classical Hartshorne-Serre correspondence between certain
codimension-two cycles and certain rank two vector bundles, we are not aware of
any generalizations of this dictionary to higher codimension. In particular, our
idea of attaching \emph{a higher-categorical invariant to a higher codimension
cycle} seems new in the literature. We expect that Picard $n$-categories play
a role in the functorial Riemann-Roch program of Deligne \cite{Deligne0}.
Our results are related to and inspired by the beautiful work of S.~Bloch \cite{Bloch}, L.~Breen \cite{Breen99}, J.-L.~Brylinski \cite{Brylinski}, A.~N.~Parshin \cite{Parshin}, B.~Poonen - E.~Rains \cite{PoonenRains}, and D.~Ramakrishnan \cite{Ramakrishnan} (see \S \ref{sec:Examples}). Brylinski's hope\footnote{``In principle such ideas will lead to a geometric description of all regulator maps, once the categorical aspects have been cleared up. Hopefully this would lead to a better understanding of algebraic K-theory itself.''} \cite[Introduction]{Brylinski} for a higher-categorical geometrical interpretation of the regulator maps from algebraic K-theory to Deligne cohomology was a major catalyst. In a forthcoming paper, we will investigate the relations between the Gersten gerbe and Deligne cohomology.
\subsection*{Acknowledgements} The second author's research was supported by the 2015-2016 ``Research and Scholarship Award'' from the
Graduate School, University of Maryland. We would like to thank J.~Rosenberg, J.~Schafer, and H.~Tamvakis for comments and suggestions.
\subsection*{Notations and conventions} Let $\C$ be a site. We write $\shC$ for
the topos of sheaves over $\C$, $\shAbC$ for the abelian group objects of $\shC$ (namely the abelian sheaves on $\C$), and $\shGrpC$ for the sheaves of groups on
$\C$. Our notation for cohomology is as follows. For an abelian object $A$ of a
topos $\T$, $H^i(A)$ denotes the cohomology of the terminal object $e\in\T$ with
coefficients in $A$, namely the $i^\mathrm{th}$ derived functor of
$\Hom_\T(e,A)$. This is the same as $\Ext^i_{\T_\mathrm{ab}}(\mathbb{Z},A)$. More
generally, $H^i(X,A)$ denotes the cohomology of $A$ in the topos $\T/X$. We use
$\HH$ for hypercohomology.
\section{Preliminaries}\label{sec:prel}
\subsection{Abelian Gerbes \cite{Giraud,Deligne,Breen94}}
\label{sec:gerbes}
A gerbe $\mathcal{G}$ over a site $\C$ is \textit{a stack in groupoids which is locally non-empty and locally connected}.
$\mathcal{G}$ is locally nonempty if for every object $U$ of $\C$ there is a cover, say a local epimorphism, $V\to U$ such that the category $\mathcal{G}(V)$ is nonempty; it is locally connected if, given objects $x,y\in \mathcal{G}(U)$, the sheaf $\Hom(x,y)$ has sections locally on $U$. For each object $x$ over $U$ we can introduce the automorphism sheaf $\Aut_\mathcal{G}(x)$, and by local connectedness all these automorphism sheaves are (non-canonically) isomorphic.
In the sequel we will only work with \emph{abelian} gerbes, where there is a coherent identification between the automorphism sheaves $\Aut_\mathcal{G}(x)$, for any choice of an object $x$ of $\mathcal{G}$, and a fixed sheaf of groups $G$. In this case $G$ is necessarily abelian\footnote{The automorphisms in $\Aut(G)$ completely decouple, hence play no role.}, and the class of $\mathcal{G}$ determines an element in $H^2(G)$, \cite[\S 2]{Breen94} (and also \cite{jardine}), where $H^i(G)=\Ext^i_{\shAbC}(\mathbb{Z},G)$ denotes the standard cohomology with coefficients in the abelian sheaf $G$ in the topos $\shC$ of sheaves over $\C$.
Let us briefly recall how the class of $\mathcal{G}$ is obtained using a \v{C}ech type argument. Assume for simplicity that the site $\C$ has pullbacks. Let $\mathcal{U}=\{U_i\}$ be a cover of an object $X$ of $\C$. Let $x_i$ be a choice of an object of $\mathcal{G}(U_i)$. For simplicity, let us assume that we can find morphisms $\alpha_{ij}\colon x_j\rvert_{U_{ij}}\to x_i\rvert_{U_{ij}}$. The class of $\mathcal{G}$ will be represented by the 2-cocycle $\{c_{ijk}\}$ of $\mathcal{U}$ with values in $G$ obtained in the standard way as the deviation of $\{\alpha_{ij}\}$ from satisfying the cocycle condition:
\begin{equation*}
\alpha_{ij}\circ \alpha_{jk} = c_{ijk}\circ \alpha_{ik}.
\end{equation*}
In the above identity---which defines it---$c_{ijk}\in \Aut(x_i\rvert_{U_{ijk}})\cong G\rvert_{U_{ijk}}$. It is obvious that $\{c_{ijk}\}$ is a cocycle.
Returning to stacks for a moment, a stack $\mathcal{G}$ determines an object $\pi_0(\mathcal{G})$, defined as the sheaf associated to the presheaf of connected components of $\mathcal{G}$, where the latter is the presheaf that to each object $U$ of $\C$ assigns the set of isomorphism classes of objects of $\mathcal{G}(U)$. By definition, if $\mathcal{G}$ is a gerbe, then $\pi_0(\mathcal{G})=*$. In general, writing just $\pi_0$ in place of $\pi_0(\mathcal{G})$, by base changing to $\pi_0$, namely considering the site $\C/\pi_0$, every stack $\mathcal{G}$ is (tautologically) a gerbe over $\pi_0$ \cite{L-MB}.
\begin{example}\hfill
\begin{enumerate}
\item The trivial gerbe with band $G$ is the stack $\tors(G)$ of $G$-torsors. Moreover, for any gerbe $\mathcal{G}$, the choice of an object $x$ in $\mathcal{G}(U)$ determines an equivalence of gerbes $\mathcal{G}\rvert_U\cong \tors (G\rvert_U)$, over $\C/U$, where $G=\Aut_\mathcal{G}(x)$. There is an equivalence $\tors(G)\cong \B_G$, the topos of (left) $G$-objects of $\shC$ (\cite{Giraud}).
\item Any line bundle $L$ over an algebraic variety $X$ over $\mathbb{Q}$ determines a gerbe $\mathcal{G}_n$ with band $\mmu_n$ (the sheaf of $n$\textsuperscript{th} roots of unity) for any $n >1$ as follows: Over any open set $U$, consider the category of pairs $(\mathcal{L}, \alpha)$ where $\mathcal{L}$ is a line bundle on $U$ and $\alpha: \mathcal{L} ^{\otimes n} \xrightarrow{\sim} L$ is an isomorphism of line bundles over $U$. These assemble to the gerbe $\mathcal{G}_n$ of $n$\textsuperscript{th} roots of $L$. This is an example of a lifting gerbe, \S \ref{lifting}.
\end{enumerate}
\end{example}
\begin{remark*} One also has the following interpretation, which shows that, in a fairly precise sense, a gerbe is the categorical analog of a torsor. Let $\mathcal{G}$ be a gerbe
over $\C$, let $\{U_i\}$ be a cover of $U\in \mathrm{Ob}(\C)$, and let $\{x_i\}$ be a
collection of objects $x_i\in \mathcal{G}(U_i)$. The $G$-torsors
$E_{ij}=\Hom(x_j,x_i)$ are part of a ``torsor cocycle'' $\gamma_{ijk}\colon
E_{ij} \otimes E_{jk}\to E_{ik}$, locally given by $c_{ijk}$, above, and
subject to the obvious identity. Let $\tors(G)$ be the stack of $G$-torsors
over $X$. Since $G$ is assumed abelian, $\tors(G)$ has a group-like
composition law given by the standard Baer sum. The fact that $\mathcal{G}$ itself is locally equivalent to $\tors(G)$, plus the datum of the torsor cocycle $\{E_{ij}\}$, show that $\mathcal{G}$ is equivalent to a $\tors(G)$-torsor.
\end{remark*}
The primary examples of abelian gerbes occurring in this paper are the gerbe of
local lifts associated to a central extension and four-term complexes, described
in the next two sections.
\subsection{The gerbe of lifts associated with a central extension}
\label{lifting}
(See \cite{Giraud,Breen94,Brylinskib}.)
A central extension
\begin{equation}
\label{eq:7}
0 \too A \overset{\imath}{\too} E\overset{p}{\too} G \too 0
\end{equation}
of sheaves of groups determines a homotopy-exact sequence
\begin{equation*}
\tors(A) \too \tors(E)\too \tors(G),
\end{equation*}
which is an extension of topoi with characteristic class $c \in H^2(\B_G,A)$. (Recall that $A$ is abelian and that $\tors(G)$ is equivalent to $\B_G$.)
If $\X$ is any topos over $\tors(G)\cong \B_G$, the gerbe of lifts is the gerbe with band $A$
\begin{equation*}
\mathcal{E} = \HOM_{\B_G}(\X,\B_E),
\end{equation*}
where $\HOM$ denotes the cartesian morphisms. The class $c(\mathcal{E})\in H^2(\X,A)$ is the pullback of $c$ along the map $\X\to \B_G$. By the universal property of $\B_G$, the morphism $\X\to \B_G$ corresponds to a $G$-torsor $P$ of $\X$, hence the $A$-gerbe $\mathcal{E}$ is the gerbe whose objects are (locally) pairs of the form $(Q,\lambda)$, where $Q$ is an $E$-torsor and $\lambda\colon Q\to P$ an equivariant map. It is easy to see that an automorphism of an object $(Q,\lambda)$ can be identified with an element of $A$, so that $A$ is indeed the band of $\mathcal{E}$.
Let us take $\X=\shC$, and let $P$ be a $G$-torsor. With the same assumptions as at the end of \S~\ref{sec:gerbes}, let $X$ be an object of $\C$ with a cover $\lbrace U_i\rbrace$. In this case, the class of $\mathcal{E}$ is computed by choosing $E\rvert_{U_i}$-torsors $Q_i$ and equivariant maps $\lambda_i\colon Q_i\to P\rvert_{U_i}$. Up to refining the cover, let $\alpha_{ij}\colon Q_j\to Q_i$ be an $E$-torsor isomorphism such that $\lambda_i\circ \alpha_{ij}=\lambda_j$. With these choices the class of $\mathcal{E}$ is given by the cocycle $\alpha_{ij}\circ \alpha_{jk} \circ \alpha_{ik}^{-1}$.
\begin{remark}
\label{rem:2}
The above argument gives the well known boundary map \cite[Proposition 4.3.4]{Giraud}
\begin{equation*}
\partial^1\colon H^1(G) \too H^2(A)
\end{equation*}
(where we have omitted $\X$ from the notation). Dropping down one degree we get [ibid., Proposition 3.3.1]
\begin{equation*}
\partial^0\colon H^0(G) \too H^1(A).
\end{equation*}
In fact these are just the boundary maps determined by the above short exact sequence when all objects are abelian. The latter can be specialized even further: if $g\colon *\to G$, then by pullback the fiber $E_g$ is an $A$-torsor \cite{GrothendieckSGA7}.
\end{remark}
\subsection{Four-term complexes}
\label{four-term}
Let $\shAbC$ be the category of abelian sheaves over the site $\C$. Below we shall be interested in four-term exact sequences of the form:
\begin{equation}
\label{eq:6}
0 \too A \overset{\imath}{\too} L_1\overset{\partial}{\too}
L_0 \overset{p}{\too} B\too 0.
\end{equation}
Let $\ch_+(\shAbC)$ be the category of positively graded homological complexes of abelian sheaves. The above sequence can be thought of as a (non-exact) sequence
\begin{equation*}
0 \too A[1] \too [L_1\too L_0] \too B \too 0
\end{equation*}
of morphisms of $\ch_+(\shAbC)$. This sequence is short-exact in the sense of Picard categories, namely as a short exact sequence of Picard stacks
\begin{equation*}
0 \too \tors(A) \too \mathcal{L} \overset{p}{\too} B\too 0,
\end{equation*}
where $\mathcal{L}$ is the strictly commutative Picard stack associated to the complex $L_1\to L_0$ and the abelian object $B$ is considered as a discrete stack in the obvious way. We have isomorphisms $A\cong \pi_1(\mathcal{L})$ and $B \cong \pi_0(\mathcal{L})$, where the former is the automorphism sheaf of the object $0\in \mathcal{L}$ and the latter the sheaf of connected components (see \cite{Breen94,Breen99,DeligneDG}). It is also well known that the projection $p\colon \mathcal{L}\to B$ makes $\mathcal{L}$ a \emph{gerbe} over $B$. In this case the band of $\mathcal{L}$ over $B$ is $A_B$, thereby determining a class in $H^2(B,A)$.\footnote{This is \emph{part} of the invariant classifying the four-term sequence, see the remarks in \cite[\S 6]{Breen99}.}
Rather than considering $\mathcal{L}$ itself as a gerbe over $B$, we shall be interested in its fibers above generalized points $\beta\colon *\to B$. Let us put $\mathcal{A}=\tors(A)$.
By a categorification of the arguments in \cite{GrothendieckSGA7},
the fiber $\mathcal{L}_\beta$ above $\beta$ is an $\mathcal{A}$-torsor, hence an $A$-gerbe, by the observation at the end of \S~\ref{sec:gerbes} (see also the equivalence described in \cite{Breen90}). $\mathcal{L}_\beta$ is canonically equivalent to $\mathcal{A}$ whenever $\beta=0$. Writing
\begin{equation*}
\Hom_{\shC}(*,B) \cong \Hom_{\shAbC}(\mathbb{Z},B) = H^0(B),
\end{equation*}
we have the homomorphism
\begin{equation}\label{doppio}
\partial^2: H^0(B) \too H^2(A),
\end{equation}
which sends $\beta$ to the class of $\mathcal{L}_\beta$ in $H^2(A)$. The sum of $\beta$ and $\beta'$ is sent to the class of the Baer sum $\mathcal{L}_\beta+\mathcal{L}_{\beta'}$, so the characteristic class is additive. In the following lemma we show that this map coincides with the one described in \cite[Théorème 3.4.2]{Giraud}.
\begin{lemma}
\label{comparisons}
\hfill
\begin{enumerate}
\item The map $\partial^2$ in \eqref{doppio} is the canonical cohomological map (iterated boundary map) \cite[Théorème 3.4.2]{Giraud}
\begin{equation*}
d^2: H^0(B) \too H^1(C) \too H^2(A)
\end{equation*}
($C$ is defined below) arising from the four-term complex \eqref{eq:6}.
\item The image of $\beta$ under $d^2$ is the class of the gerbe $\mathcal{L}_{\beta}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We keep the same notation as above. Let us split~\eqref{eq:6} as
\begin{equation*}
\begin{tikzcd}
&&& 0 \arrow[d] & \\
0 \arrow[r] & A \arrow[r, "\imath"] & L_1 \arrow[r, "\pi"] \arrow[dr, "\partial"'] & C \arrow[r] \arrow[d, "\jmath"] & 0 \\
& & & L_0 \arrow[d, "p"] & \\
& & & B \arrow[d] & \\
& & & 0 &
\end{tikzcd}
\end{equation*}
with $C= \Im \partial$. By Grothendieck's theory of extensions~\cite{GrothendieckSGA7}, with $\beta \colon *\to B$, the fiber $(L_0)_\beta$ is a $C$-torsor (see the end of Remark~\ref{rem:2}). According to section~\ref{lifting}, we have a morphism $\tors (L_1)\to \tors (C)$, and the object $(L_0)_\beta$ of $\tors(C)$ gives rise to the gerbe of lifts $\mathcal{E}_\beta\equiv \mathcal{E}_{L_0,\beta}$, which is an $A$-gerbe. Now, consider the map assigning to $\beta\in H^0(B)$ the class of $\mathcal{E}_\beta\in H^2(A)$. By construction, this map factors through $H^1(C)$ by sending $\beta$ to the class of the torsor $(L_0)_\beta$. We then lift that to the class of the gerbe of lifts in $H^2(A)$. All stages are compatible with the abelian group structures. This is the homomorphism described in \cite[Théorème 3.4.2]{Giraud}.
It is straightforward that this is just the classical lift of $\beta$ through the four-term sequence~\eqref{eq:6}. Indeed, this is again easily seen in terms of a \v Cech cover $\{U_i\}$ of $*$. Lifts $x_i$ of $\beta\rvert_{U_i}$ are sections of the $C$-torsor $(L_0)_\beta$, therefore determining a standard $C$-valued 1-cocycle $\{c_{ij}\}$. From section~\ref{lifting} we then obtain an $A$-valued 2-cocycle $\{a_{ijk}\}$ arising from the choice of local $L_1$-torsors $X_i$ such that $X_i\to (L_0)_\beta \rvert_{U_i}$ is ($L_1\to C$)-equivariant. Note that in the case at hand, $\pi\colon L_1\to C$ being an epimorphism, the lifting of the torsor $(L_0)_\beta$ is done by choosing local trivializations, i.e.\xspace the $x_i$ above, and then choosing $X_i=L_1\rvert_{U_i}$.
The same argument shows that the class of $\mathcal{L}_\beta$, introduced earlier, is the same as that of $\mathcal{E}_\beta$. This follows from the following well known facts: objects of $\mathcal{L}_\beta$ are locally lifts of $\beta$ to $L_0$; morphisms between them are given by elements of $L_1$ acting through $\partial$. As a result, automorphisms are sections of $A$ and clearly the class so obtained coincides with that of $\mathcal{E}_\beta$. Therefore $\mathcal{E}_\beta$ and $\mathcal{L}_\beta$ are equivalent and the homomorphism of \cite[Théorème 3.4.2]{Giraud} is equal to~\eqref{doppio}, as required.
\end{proof}
From the proof of the above lemma, we obtain the following two descriptions of the $A$-gerbe $\mathcal{L}_{\beta}$.
\begin{corollary}\label{diversi} (i) For any four-term complex \eqref{eq:6} and any generalized point $\beta$ of $B$, the fiber $\mathcal{L}_{\beta}$ is a gerbe. Explicitly, it is the stack associated with the prestack which attaches to $U$ the groupoid $\mathcal{L}_{\beta}(U)$ whose objects are elements $g \in L_0(U)$ with $p(g) = \beta$ and morphisms between $g$ and $g'$ given by elements $h$ of $L_1(U)$ satisfying $\partial(h) = g-g'$.
(ii) The $A$-gerbe $\mathcal{L}_{\beta}$ is the lifting gerbe of the $C$-torsor $(L_0)_{\beta}$ to an $L_1$-torsor. \qed
\end{corollary}
We will use both descriptions in \S \ref{sec:codim2}, especially in the comparison of the Gersten and the Heisenberg gerbes of a codimension two cycle, in the case where the cycle is an intersection of divisors.
A slightly different point of view is the following. Recast the sequence~\eqref{eq:6} as a quasi-isomorphism
\begin{equation*}
A[2] \overset{\cong}{\too} \bigl[ L_1\too L_0\too B \bigr]
\end{equation*}
of three-term complexes of $\ch_+(\shAbC)$, where now $A$ has been shifted two places to the left. Also, relabel the right hand side as $L'_2\to L'_1\to L'_0$ (where again we employ homological degrees) for convenience. By \cite{Tatar}, the above morphism of complexes of $\ch_+(\shAbC)$, placed in degrees $[-2,0]$, gives an equivalence between the corresponding associated strictly commutative Picard 2-stacks
\begin{equation*}
\mathfrak{A} \overset{\cong}{\too} \mathfrak{L}
\end{equation*}
over $\C$. Here $\mathfrak{L} = [L'_2\to L'_1\to L'_0]\sptilde$ and $\mathfrak{A} = [A\to 0\to 0]\sptilde \cong \tors (\mathcal{A})\cong \ger (A)$. This time we have $\pi_0(\mathfrak{L}) = \pi_1(\mathfrak{L}) = 0$, and $\pi_2(\mathfrak{L})\cong A$, as it follows directly from the quasi-isomorphism above. Thus $\mathfrak{L}$ is 2-connected, namely any two objects are locally (i.e.\xspace after base change) connected by an arrow; similarly, any two arrows with the same source and target are---again, locally---connected by a 2-arrow.
Locally, any object of $\mathfrak{L}$ is a section $\beta\in B=L'_0$. By the preceding argument, the Picard stack $\mathcal{L}_\beta = \Aut_\mathfrak{L}(\beta)$ is an $A$-gerbe, and the assignment $\beta \mapsto \mathcal{L}_\beta$ realizes (a quasi-inverse of) the equivalence between $\mathfrak{A}$ and $\mathfrak{L}$. It is easy to see that $\mathcal{L}_\beta$ is the same as the fiber over $\beta$ introduced before.
In particular, for the Gersten resolution~\eqref{gersten}, \eqref{eq:9}, for $\mathcal{K}_2$,
we get the equivalence of Picard 2-stacks
\begin{equation}
\label{eq:8}
\ger (\mathcal{K}_2) \cong \bigl[G_2^X \bigr]\sptilde.
\end{equation}
\section{The Heisenberg group}\label{sec:heis}
The purpose of this section is to describe a functor $H\colon \Ab\times \Ab\to \Grp$, where $\Ab$ is the category of abelian groups and $\Grp$ that of groups.
If $\C$ is a site, the method immediately generalizes to the categories of abelian groups and of groups in $\shC$, the topos of sheaves on $\C$. For any pair $A, B$ of abelian sheaves on $\C$, there is a canonical Heisenberg sheaf $H_{A,B}$ (of non-commutative groups on $\C$), a central extension of $A \times B$ by $A\otimes B$.
The definition of $H$ is based on a generalization of the Heisenberg group construction due to Brylinski \cite[\S 5]{Brylinski}. Pullback along the diagonal map $A \to A\times A$ gives the extension constructed by Poonen and Rains \cite{PoonenRains}.
\subsection{The Heisenberg group}
\label{sec:elem}
Let $A$ and $B$ be abelian groups. Consider the (central) extension
\begin{equation}
\label{eq:1}
0 \to A\otimes B \to H_{A,B} \to A\times B \to 0
\end{equation}
where the group $H_{A,B}$ is defined by the group law:
\begin{equation}
\label{eq:2}
(a,b,t)\, (a',b',t') = (aa',bb',t + t' + a\otimes b').
\end{equation}
Here $a,a'$ are elements of $A$, $b,b'$ of $B$, and $t,t'$ of $A\otimes B$. The nonabelian group $H_{A,B}$ is evidently a functor of the pair $(A,B)$, namely a pair of homomorphisms $(f\colon A\to A', g\colon B\to B')$ induces a homomorphism $H_{f,g}\colon H_{A,B}\to H_{A',B'}$. The special case $A=B=\mmu_n$ occurs in Brylinski's treatment of the regulator map to \'etale cohomology \cite{Brylinski}.
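As a purely illustrative sketch (ours, not part of the text), the group law~\eqref{eq:2} can be checked mechanically in the toy case $A=B=\mathbb{Z}/n$, identifying $A\otimes B$ with $\mathbb{Z}/n$ via $a\otimes b\mapsto ab$; written additively, the law reads $(a,b,t)(a',b',t')=(a+a',b+b',t+t'+ab')$. The modulus $n$ and all names below are our choices.

```python
# Illustrative toy model (our choice, not from the paper): the Heisenberg
# group H_{A,B} for A = B = Z/n, identifying A (x) B with Z/n via a(x)b = a*b.
import itertools

n = 7

def mul(x, y):
    """Group law: (a,b,t)(a',b',t') = (a+a', b+b', t+t'+a*b') mod n."""
    a, b, t = x
    a2, b2, t2 = y
    return ((a + a2) % n, (b + b2) % n, (t + t2 + a * b2) % n)

def inv(x):
    """Inverse (a,b,t)^{-1} = (-a, -b, -t + a*b)."""
    a, b, t = x
    return (-a % n, -b % n, (-t + a * b) % n)

triples = list(itertools.product(range(3), repeat=3))

# mul is associative and inv is a two-sided inverse on a sample of elements
for x in triples:
    assert mul(x, inv(x)) == (0, 0, 0) and mul(inv(x), x) == (0, 0, 0)
for x, y, z in itertools.product(triples[:5], repeat=3):
    assert mul(mul(x, y), z) == mul(x, mul(y, z))

# the copy {(0,0,t)} of A (x) B is central, yet H_{A,B} is noncommutative
assert mul((0, 0, 3), (1, 2, 0)) == mul((1, 2, 0), (0, 0, 3))
assert mul((1, 0, 0), (0, 1, 0)) != mul((0, 1, 0), (1, 0, 0))
```

The checks exhibit~\eqref{eq:1} concretely: the $t$-coordinate is central, and the failure of commutativity is measured entirely by the tensor term $a\otimes b'$.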
The map
\begin{equation}
\label{eq:3}
f \colon (A\times B)\times (A\times B) \too A\otimes B,
\qquad
f(a,b,a',b') = a\otimes b',
\end{equation}
is a cocycle representing the class of the extension~\eqref{eq:1} in $H^2(A\times B, A\otimes B)$ (group cohomology). Its alternation
\begin{equation*}
\varphi_f\colon \wedge_\mathbb{Z}^2 (A\times B) \too A\otimes B,
\qquad
\varphi_f((a,b),(a',b')) = a\otimes b' - a'\otimes b,
\end{equation*}
coincides with the standard commutator map and represents the image of the class of $f$ under the third map in the universal coefficient sequence
\begin{equation*}
0 \too \Ext^1(A\times B, A\otimes B) \too
H^2(A\times B, A\otimes B) \too
\Hom(\wedge_\mathbb{Z}^2 (A\times B), A\otimes B).
\end{equation*}
As for the commutator map, it is equal to $[s,s]\colon \wedge_\mathbb{Z}^2 (A\times B)\to A\otimes B$, where $s\colon A\times B\to H_{A,B}$ is a set-theoretic lift; the map is in fact independent of the choice of $s$. (For details see, e.g.\xspace the introduction to \cite{Breen99}.)
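In the toy case $A=B=\mathbb{Z}/n$ with $A\otimes B$ identified with $\mathbb{Z}/n$ via $a\otimes b\mapsto ab$ (an illustration of ours, not from the text), one can confirm numerically that the commutator in $H_{A,B}$ computes the alternation $\varphi_f$ and is independent of the choice of lift:

```python
# Illustrative check (our toy model): in H_{A,B} with A = B = Z/n and
# a (x) b identified with a*b mod n, the commutator x y x^{-1} y^{-1}
# equals (0, 0, phi_f) with phi_f((a,b),(a',b')) = a*b' - a'*b.
import itertools

n = 11

def mul(x, y):
    a, b, t = x
    a2, b2, t2 = y
    return ((a + a2) % n, (b + b2) % n, (t + t2 + a * b2) % n)

def inv(x):
    a, b, t = x
    return (-a % n, -b % n, (-t + a * b) % n)

def commutator(x, y):
    # x y x^{-1} y^{-1}
    return mul(mul(x, y), mul(inv(x), inv(y)))

for a, b, a2, b2, t, t2 in itertools.product(range(3), repeat=6):
    x, y = (a, b, t), (a2, b2, t2)
    # independent of the central t-entries, i.e. of the set-theoretic lift s
    assert commutator(x, y) == (0, 0, (a * b2 - a2 * b) % n)
```

The loop ranges over all $t$-entries precisely to witness the independence of the choice of lift $s$ asserted above.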
\begin{remark}
The properties of the class of the extension $H_{A,B}$, in particular that it is a cup-product of the fundamental classes of $A$ and $B$, as we can already evince from~\eqref{eq:3}, are best expressed in terms of Eilenberg-Mac~Lane spaces. We will do this below working in the topos of sheaves over a site.
\end{remark}
\subsection{Extension to sheaves}
\label{sec:sh}
The construction of the Heisenberg group carries over to the sheaf context. Let $\C$ be a site, and $\shC$ the topos of sheaves over $\C$. Denote by $\shAbC$ the abelian group objects of $\shC$, namely the abelian sheaves on $\C$, and by $\shGrpC$ the sheaves of groups on $\C$.
For all pairs of objects $A,B$ of $\shAbC$, it is clear that the above construction of $H_{A,B}$ carries over to a functor
\begin{equation*}
H\colon \shAbC\times \shAbC \too \shGrpC.
\end{equation*}
In particular, since $H_{A,B}$ is already a sheaf of sets (isomorphic to $A \times B \times (A\otimes B)$), the only question is whether the group law varies nicely, but this is clear from its functoriality. Note further that by definition of $H_{A,B}$ the resulting epimorphism $H_{A,B}\to A\times B$ has a global section $s\colon A\times B\to H_{A,B}$ as objects of $\shC$, namely $s=(\id_A,\id_B,0)$, which we can use to repeat the calculations of \S~\ref{sec:elem}.
In more detail, from \S~\ref{lifting}, the class of the central extension~\eqref{eq:1} is to be found in
$H^2(\B_{A\times B},A\otimes B)$ ($A\otimes B$ is a trivial $A\times B$-module). This replaces the group cohomology of \S~\ref{sec:elem} with its appropriate topos equivalent. By pulling back to the ambient topos, say $\X=\shC$, this is the class of the gerbe of lifts from $\B_{A\times B}$ to $\B_{H_{A,B}}$. We are now ready to give a computational proof of Theorem~\ref{primo}.
\begin{proof}[Proof of Theorem~\ref{primo}]
Let us go back to the cocycle calculations at the end of \S~\ref{lifting}, where $X$ is an object of $\C$ equipped with a cover $\mathcal{U}=\lbrace U_i\rbrace$. An $A\times B$-torsor $(P,Q)$ over $X$ is represented by a \v{C}ech cocycle $(a_{ij},b_{ij})$ relative to $\mathcal{U}$. The cocycle is determined by the choice of isomorphisms $(P,Q)\rvert_{U_i}\cong (A\times B)\rvert_{U_i}$. Now, define $R_i = H_{A,B}\rvert_{U_i}$ with the trivial $H_{A,B}$-torsor structure, and let $\lambda_i\colon R_i\to (P,Q)\rvert_{U_i}$ equal the epimorphism in~\eqref{eq:1}. Carrying out the calculation described at the end of \S~\ref{lifting} with these data gives $\alpha_{ij}\circ \alpha_{jk}\circ \alpha_{ik}^{-1}=a_{ij}\otimes b_{jk}$, which is the cup-product in \v Cech cohomology of the classes corresponding to the $A$-torsor $P$ and the $B$-torsor $Q$. In other words, the gerbe of lifts corresponding to the central extension determined by the Heisenberg group incarnates the cup product map
\begin{equation*}
H^1(X,A) \times H^1(X,B) \overset{\cup}{\too} H^2(X,A\otimes B).
\end{equation*}
For the choice $\alpha_{ij} = (a_{ij}, b_{ij}, 0)$, one has the following explicit calculation in the Heisenberg group
\begin{align*}
\alpha_{ij}\circ \alpha_{jk}\circ \alpha_{ik}^{-1}
& = (a_{ij}, b_{ij}, 0)(a_{jk}, b_{jk}, 0)(a_{ik}, b_{ik}, 0)^{-1}\\
& = (a_{ik}, b_{ik}, a_{ij}\otimes b_{jk}) (a^{-1}_{ik}, b^{-1}_{ik}, a_{ik}\otimes b_{ik})\\
& = (1,1, a_{ij} \otimes b_{jk} + a_{ik}\otimes b_{ik} - a_{ik} \otimes b_{ik}) \\
& = (1,1, a_{ij} \otimes b_{jk}).
\end{align*}
We used that the inverse of $(a, b, t)$ in the Heisenberg group is $(a^{-1}, b^{-1}, -t + a\otimes b)$:
\begin{equation*}
(a, b, t) (a^{-1}, b^{-1}, -t + a\otimes b)
= (1, 1, a\otimes b^{-1} + t -t + a\otimes b) = (1,1,0).
\end{equation*}
It is well known \cite[Chapter 1, \S 1.3, Equation (1-18), p.~29]{Brylinskib} that
the \v Cech cup-product of $a =\{a_{ij}\}$ and $b = \{b_{ij}\}$ is given by the two-cocycle
\begin{equation*}
\{a\cup b\}_{ijk} = \{a_{ij}\otimes b_{jk}\}.
\end{equation*}
This proves the first three points of the statement, whereas the fourth is built-in from the very construction. The fifth follows from the fact that the class of the gerbe of lifts is bilinear: this is evident from the expression computed above.
\end{proof}
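The cocycle identity at the heart of this proof can also be replayed numerically. In the sketch below (our illustration only), we take $A=B=\mathbb{Z}/n$ with $a\otimes b\mapsto ab$, build \v{C}ech 1-cocycles $\{a_{ij}\}$, $\{b_{ij}\}$ on an abstract three-element cover from arbitrary trivializing data, and confirm that $\alpha_{ij}\circ\alpha_{jk}\circ\alpha_{ik}^{-1}=(1,1,a_{ij}\otimes b_{jk})$:

```python
# Illustrative replay (our toy model) of the cocycle computation in the proof:
# A = B = Z/n, a (x) b = a*b mod n, over an abstract 3-element cover.
n = 13

def mul(x, y):
    a, b, t = x
    a2, b2, t2 = y
    return ((a + a2) % n, (b + b2) % n, (t + t2 + a * b2) % n)

def inv(x):
    a, b, t = x
    return (-a % n, -b % n, (-t + a * b) % n)

# 1-cocycles a_ij, b_ij (here coboundaries u_i - u_j, the easiest way to
# satisfy a_ij + a_jk = a_ik); u, v are arbitrary choices of ours
u = [0, 3, 7]
v = [0, 5, 2]
a = {(i, j): (u[i] - u[j]) % n for i in range(3) for j in range(3)}
b = {(i, j): (v[i] - v[j]) % n for i in range(3) for j in range(3)}

def alpha(i, j):
    # the lift alpha_ij = (a_ij, b_ij, 0) chosen in the proof
    return (a[i, j], b[i, j], 0)

for i in range(3):
    for j in range(3):
        for k in range(3):
            lhs = mul(mul(alpha(i, j), alpha(j, k)), inv(alpha(i, k)))
            # the deviation is the Cech cup product {a u b}_ijk = a_ij (x) b_jk
            assert lhs == (0, 0, (a[i, j] * b[j, k]) % n)
```

Only the identity is being tested here; the particular cover and trivializing data carry no meaning beyond making the indices concrete.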
As hinted above, the cup product has a more intrinsic explanation in terms of maps between Eilenberg-Mac~Lane objects in the topos. Passing to Eilenberg-Mac~Lane objects in particular ``explains'' why the class of the Heisenberg extension realizes the cup-product pairing. First, we state
\begin{theorem}
\label{thm:cup}
The class of the extension~\eqref{eq:1} in $\shC$ corresponds to (the homotopy class of) the cup product map
\begin{equation*}
K(A\times B,1) \cong K(A,1)\times K(B,1) \too K(A\otimes B,2)
\end{equation*}
of the fundamental classes (i.e.\xspace the identity maps) of $K(A,1)$ and $K(B,1)$; its expression is given by~\eqref{eq:3}.
\end{theorem}
\begin{proof}
Observe that the epimorphism $H_{A,B}\to A\times B$ has global set-theoretic sections. The statement follows from Propositions~\ref{simpltop} and~\ref{prop:cup} below.
\end{proof}
The two main points, which we now proceed to illustrate, are that Eilenberg-Mac~Lane objects represent cohomology (and hypercohomology, once we take into account simplicial objects) in a topos, and that the cohomology of a group object in a topos (such as $A\times B$ in $\shC$) with trivial coefficients can be traded for the hypercohomology of a simplicial model of it. In this way we compute the class of the extension as a map, and this map is identified with the cup product. We assemble the necessary results to flesh out the proof of Theorem~\ref{thm:cup} in the next two sections.
\subsection{Simplicial computations}
\label{sec:simplicial}
The class of the central extension~\eqref{eq:7} can be computed simplicially. (For the following recollections, see \cite[VI.5, VI.6, VI.8]{IllusieII} and \cite[\S 2]{Breen}.)
Let $\T$ be a topos, $G$ a group-object of $\T$ (for us it will be $\T=\shC$) and $BG=K(G,1)$ the standard classifying simplicial object with $B_nG=G^n$~\cite{DeligneH3}. Let $A$ be a trivial $G$-module. We will need the following well known fact.\footnote{Unfortunately we could not find a specific entry point in the literature to reference, therefore we assemble here the necessary prerequisites. See also \cite[\S\S 2,3]{chinburg} for a detailed treatment in the representable case.}
\begin{proposition}
\label{simpltop}
\begin{math}H^i(\B_G,A) \cong \HH^i(BG,A).
\end{math}
\end{proposition}
\begin{proof}
The object on the right is the hypercohomology of $BG$ regarded as a simplicial object of $\T$. Let $X$ be a simplicial object in a topos $\T$. One defines
\begin{equation*}
\HH^i(X,A) = \EExt^i(\mathbb{Z}[X]\sptilde, A)
\end{equation*}
where $M\sptilde$, for any simplicial abelian object $M$ of $\T$, denotes the corresponding chain complex with $M_n\sptilde=M_n$ and differential the alternating sum of the face maps, and $\mathbb{Z}[X_n]$ denotes the abelian object of $\T$ generated by $X_n$. Of interest to us is the spectral sequence \cite[Example (2.10) and below]{Breen}:
\begin{equation*}
E_1^{p,q} = H^q(X_p,A) \Longrightarrow \HH^\bullet(X, A).
\end{equation*}
Let $X$ be any simplicial object of $\T$. The levelwise topoi $\T/X_n$, $n=0,1,\dots$, form a simplicial topos $\X=\T/X$, or equivalently a topos fibered over $\Delta^\op$, where $\Delta$ is the simplicial category. The topos $\B\X$ of $\X$-objects essentially consists of descent-like data, that is, objects $L$ of $\X_0$ equipped with an arrow $a\colon d_1^*L\to d_0^*L$ satisfying the cocycle condition $d_0^*a\, d_2^*a=d_1^*a$ and $s_0^*a = \id$ (the latter is automatic if $a$ is an isomorphism).
By~\cite[VI.8.1.3]{IllusieII}, in the case where $X=BG$, $\B\X$ is nothing but $\B_G$, the topos of $G$-objects of $\T$. One also forms the topos $\Tot(\X)$, whose objects are collections $F_n\in \X_n$ such that for each $\alpha\colon [m]\to [n]$ in $\Delta^\op$ there is a morphism $F_\alpha\colon \alpha^*F_m\to F_n$, where $\alpha^*$ is the inverse image corresponding to the morphism $\alpha\colon \X_n\to \X_m$. There is a functor $\mathit{ner}\colon \B\X\to \Tot(\X)$ sending $(L,a)$ to the object of $\Tot(\X)$ which at level $n$ equals $(d_0\cdots d_0)^*L$ ($a$ enters through the resulting face maps); see \textit{loc.\,cit.}\xspace for the actual expressions. The functor $\mathit{ner}$ is the inverse image functor for a morphism $\Tot(\X)\to \B\X$, and, since $\X$ satisfies the conditions of being a ``good pseudo-category'' (\cite[VI 8.2]{IllusieII}), we have an isomorphism
\begin{equation*}
R\Gamma(\B\X,L) \overset{\cong}{\too} R\Gamma (\Tot(\X),\mathit{ner}(L))
\end{equation*}
and, in turn, a spectral sequence
\begin{equation*}
E_1^{p,q} = H^q(X_p,\mathit{ner}_p(L)) \Longrightarrow H^\bullet(\B\X, L),
\end{equation*}
\cite[VI, Corollaire 8.4.2.2]{IllusieII}. On the left hand side we recognize the spectral sequence for the cohomology of a simplicial object in a topos \cite[\S 2.10]{Breen}.
Applying the foregoing to $X=BG$, and $L$ a left $G$-object of $\T$, we obtain \cite[VI.8.4.4.5]{IllusieII}
\begin{equation*}
E_1^{p,q} = H^q(G^p,L) \Longrightarrow H^\bullet(\B_G, L).
\end{equation*}
(We set $Y=e$, the terminal object of $\T$, in the formulas from \textit{loc.\,cit.}\xspace)
Thus if $L=A$, the trivial $G$-module arising from a central extension of $G$ by $A$, by comparing the spectral sequences we can trade $H^2(\B_G, A)$ for the hypercohomology $\HH^2(K(G,1),A)$.
\end{proof}
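The classification of central extensions by $H^2$ can be checked by hand in the most classical case. The sketch below (plain Python; ordinary group cohomology of a finite cyclic group, a toy model and not the topos-theoretic statement above) verifies the 2-cocycle identity for the ``carry'' cocycle and checks that the extension it builds out of $\mathbb{Z}/m \times \mathbb{Z}/n$ is cyclic of order $nm$:

```python
# Toy illustration (ordinary group cohomology, not the topos version):
# the "carry" 2-cocycle c(a,b) = (a+b) // n on G = Z/n with values in
# A = Z/m represents the extension 0 -> Z/m -> Z/(nm) -> Z/n -> 0.
n, m = 4, 6

def c(a, b):
    return ((a % n + b % n) // n) % m

# 2-cocycle identity: c(b,e) + c(a, b+e) = c(a,b) + c(a+b, e)  (mod m)
for a in range(n):
    for b in range(n):
        for e in range(n):
            lhs = (c(b, e) + c(a, (b + e) % n)) % m
            rhs = (c(a, b) + c((a + b) % n, e)) % m
            assert lhs == rhs

# Build the extension E = A x G with the twisted addition and check
# that (0,1) has order n*m, so E is cyclic of order nm.
def add(p, q):
    (x, a), (y, b) = p, q
    return ((x + y + c(a, b)) % m, (a + b) % n)

e0, g = (0, 0), (0, 1)
p, order = g, 1
while p != e0:
    p = add(p, g)
    order += 1
assert order == n * m
```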
\subsection{The cup product}
\label{sec:cup}
The class of the extension~\eqref{eq:1} corresponds to the homotopy class of a map $K(A\times B,1)\to K(A\otimes B,2)$. We interpret it in terms of cup products of Eilenberg-Mac~Lane objects.
Recall that for an object $M$ of $\shAbC$ we have $K(M,i)=K(M[i])$, where $M[i]$ denotes $M$ placed in homological degree $i$, and $K \colon \ch_+(\shAbC)\to s\shAbC$ is the Dold-Kan functor from nonnegative chain complexes of $\shAbC$ to simplicial abelian sheaves. Explicitly:
\begin{equation*}
K(M,i)_n =
\begin{cases}
0 & 0\leq n < i, \\
\bigoplus_{s \colon [n] \twoheadrightarrow [i]} M & n\geq i.
\end{cases}
\end{equation*}
In particular, $K(M,i)_i=M$. The functor $K$ is a quasi-inverse to the normalized complex functor $N\colon s\shAbC\to \ch_+(\shAbC)$.
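The direct sum above is indexed by the order-preserving surjections $[n]\twoheadrightarrow[i]$, of which there are $\binom{n}{i}$; in particular $K(M,1)_n\cong M^{\oplus n}$ and $K(M,i)_n=0$ for $n<i$. A brute-force count confirming this (plain combinatorics, independent of the sheaf-theoretic setting):

```python
from itertools import product
from math import comb

def monotone_surjections(n, i):
    """Count order-preserving surjections [n] ->> [i], i.e. monotone
    maps {0..n} -> {0..i} hitting every value."""
    count = 0
    for f in product(range(i + 1), repeat=n + 1):
        if all(f[k] <= f[k + 1] for k in range(n)) and len(set(f)) == i + 1:
            count += 1
    return count

# |{[n] ->> [i]}| = C(n, i), so K(M,i)_n = M^(C(n,i)); in particular
# K(M,i)_n = 0 for n < i and K(M,i)_i = M.
for n in range(6):
    for i in range(6):
        assert monotone_surjections(n, i) == comb(n, i)
```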
If $X$ is a simplicial object of $\shC$, we have
\begin{equation}
\label{eq:5}
\HH^i(X,M) \cong [X,K(M,i)],
\end{equation}
where the right-hand side denotes the hom-set in the homotopy category \cite{Illusie,Breen}. In particular, there is a fundamental class $\imath_M^n\in \HH^n(K(M,n),M)$, corresponding to the identity map.
Returning to the objects $A$ and $B$ of $\shAbC$, also recall the morphism \cite[Chapter II, Equation (2.22), p.~64]{Breen}
\begin{equation}
\label{eq:4}
\delta_{i,j}\colon
K(A,i)\times K(B,j) \too K(A\otimes B,i+j).
\end{equation}
It is the composition of two maps. The first is:
\begin{equation*}
K(A,i)\times K(B,j)\too d(K(A,i)\boxtimes K(B,j)) = K(A,i)\otimes K(B,j),
\end{equation*}
where $\boxtimes$ denotes the external tensor product of simplicial objects of $\shAbC$ and $d$ the diagonal; the second is the map in $s\shAbC$ corresponding to the Alexander-Whitney map under the Dold-Kan correspondence. We have:
\begin{proposition}
\label{prop:cup}
The class of the extension~\eqref{eq:1} is equal to $\imath_A^1\otimes \imath_B^1 = \delta_{1,1}(\imath_A^1\times \imath_B^1)$.
\end{proposition}
\begin{proof}
Observe that any simplicial morphism $f\colon X\to K(M,i)$ is determined by $f_i$, the rest, for $n>i$, being determined by the simplicial identities. Therefore we need to compute:
\begin{equation*}
K(A\times B,1)_2 \cong K(A,1)_2\times K(B,1)_2 \too K(A\otimes B,2)_2,
\end{equation*}
namely
\begin{equation*}
(A\times B)\times (A\times B) \too (A\times A)\times (B\times B) \too A\otimes B.
\end{equation*}
From the expression of the Alexander-Whitney map in, e.g., \cite{Illusie}, the image of the second map in $\ch_+(\shAbC)$ is the sum of $d_0^vd_0^v$, $d_1^hd_1^h$, and $d_2^hd_0^v$. Only the third one is nonzero, giving $((a,b),(a',b'))\mapsto a\otimes b'$, which equals the map $f$ in the construction of the extension~\eqref{eq:1}. Using~\eqref{eq:5} we obtain the conclusion.
\end{proof}
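For ordinary finite abelian groups the cocycle computed in this proof can be checked mechanically. The following sketch is a finite model with $A = B = \mathbb{Z}/n$, so that $A\otimes B \cong \mathbb{Z}/n$ via multiplication; it verifies that $f((a,b),(a',b')) = a\otimes b'$ satisfies the 2-cocycle identity, and that commutators in the resulting Heisenberg group are central and equal $a\otimes b' - a'\otimes b$:

```python
# Finite model: A = B = Z/n, with A (x) B ~= Z/n via a (x) b = a*b mod n.
n = 5

def tensor(a, b):
    return (a * b) % n

def f(x, y):
    # the 2-cocycle ((a,b),(a',b')) -> a (x) b' from the proof
    (a, b), (a2, b2) = x, y
    return tensor(a, b2)

def plus(x, y):  # addition in A x B
    return ((x[0] + y[0]) % n, (x[1] + y[1]) % n)

pairs = [(a, b) for a in range(n) for b in range(n)]

# cocycle identity: f(y,z) + f(x, y+z) = f(x,y) + f(x+y, z)  in Z/n
for x in pairs:
    for y in pairs:
        for z in pairs:
            assert (f(y, z) + f(x, plus(y, z))) % n == \
                   (f(x, y) + f(plus(x, y), z)) % n

# Heisenberg law: (c,(a,b)) * (c',(a',b')) = (c+c'+f(..), (a+a',b+b'))
def mul(u, v):
    return ((u[0] + v[0] + f(u[1], v[1])) % n, plus(u[1], v[1]))

def inv(u):
    c, x = u
    xm = ((-x[0]) % n, (-x[1]) % n)
    return ((-c - f(x, xm)) % n, xm)

# commutators of lifts of (a,b), (a',b') are central, = a(x)b' - a'(x)b
for x in pairs:
    for y in pairs:
        u, v = (0, x), (0, y)
        comm = mul(mul(u, v), inv(mul(v, u)))
        assert comm[1] == (0, 0)
        assert comm[0] == (tensor(x[0], y[1]) - tensor(y[0], x[1])) % n
```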
The morphism~\eqref{eq:4} represents the standard cup product in cohomology. By Proposition~\ref{prop:cup}, for an object $X$ of $s\shC$, the cup product
\begin{equation*}
\HH^1(X,A)\times \HH^1(X,B) \too \HH^2(X,A\otimes B)
\end{equation*}
factors through $X\to K(A,1)\times K(B,1)$ and the extension~\eqref{eq:1}.
\begin{remark*}
Proposition~\ref{prop:cup} and the above map provide a more conceptual proof of Theorem~\ref{primo}.
\end{remark*}
\section{Examples and connections to prior results}
\label{sec:Examples}
In this section, we collect some examples and briefly indicate the connections with earlier results \cite{Bloch, Brylinski, Parshin, PoonenRains, Ramakrishnan}.
\subsection{Self-cup products of Poonen-Rains} In \cite{PoonenRains}, Poonen and Rains construct, for any abelian group $A$, a central extension of the form
\begin{equation*}
0 \to A\otimes A\to U\!A\to A \to 0,
\end{equation*}
providing a functor $U\colon \Ab\to \Grp$. The group law in $U\!A$ is obtained from~\eqref{eq:2} by setting $a=a'$ and $b=b'$. Hence the above extension can be obtained from~\eqref{eq:1} by pulling back along the diagonal homomorphism $\Delta_A\colon A\to A\times A$. Similarly, both the cocycle and its alternation for the extension constructed in \textit{loc.\,cit.}\xspace\ are obtained from ours by pullback along $\Delta_A$, for $A\in \Ab$. Similar remarks apply to an abelian sheaf $A$ on any site $\C$. They use $U\!A$ to describe the self-cup product $\alpha \cup \alpha$ of any element $\alpha \in H^1(A)$.
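In the finite model $A=\mathbb{Z}/n$ (with $A\otimes A\cong\mathbb{Z}/n$ via multiplication) the pullback statement is an identity of explicit cocycles: the $U\!A$ cocycle is the restriction of the Heisenberg cocycle along $\Delta_A$. A small sketch, hedged by the choice of model (over $\mathbb{Z}/n$ the cocycle is symmetric, so this model of $U\!A$ is abelian):

```python
# A = Z/n, A (x) A ~= Z/n.  The Heisenberg cocycle on (A x A)^2 pulls
# back along the diagonal A -> A x A to the U A cocycle
# (a, a') -> a (x) a' of Poonen-Rains (up to their normalization).
n = 7

def heis(x, y):           # ((a,b),(a',b')) -> a (x) b'
    return (x[0] * y[1]) % n

def ua(a, a2):            # (a, a') -> a (x) a'
    return (a * a2) % n

for a in range(n):
    for a2 in range(n):
        assert ua(a, a2) == heis((a, a), (a2, a2))

# U A built from this cocycle is a central extension of A by A (x) A;
# over Z/n the cocycle is symmetric, so this model of U A is abelian.
def mul(u, v):
    return ((u[0] + v[0] + ua(u[1], v[1])) % n, (u[1] + v[1]) % n)

for a in range(n):
    for b in range(n):
        u, v = (0, a), (0, b)
        assert mul(u, v) == mul(v, u)
```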
\subsection{Brylinski's work on regulators and \'etale analogues} In \cite{Brylinski}, Brylinski has proved Theorem \ref{primo} in the case $A =B =\mmu_n$, the \'etale sheaf $\mmu_n$ of $n$\textsuperscript{th} roots of unity on a scheme $X$ over $\Spec\mathbb{Z}[\frac{1}{n}]$, using the Heisenberg group $H_{\mmu_n, \mmu_n}$ (in our notation). He used it to provide a geometric interpretation of the regulator map
\begin{equation*}
c_{1,2}: H^1(X, \mathcal{K}_2) \too H^3(X, \mmu_n^{\otimes 2}), \qquad (\text{$n$ odd}),
\end{equation*}
a special case of C.~Soul\'e's regulator. If $X$ is a smooth projective variety over $\mathbb C$ (viewed as a complex analytic space) and $f,g$ are invertible functions on $X$, P.~Deligne (and Bloch) \cite{Deligne0} constructed a holomorphic line bundle $(f,g)$ on $X$ and Bloch showed that this gives a regulator map from $K_2(X)$ to the group of isomorphism classes of holomorphic line bundles with connection, later interpreted by D.~Ramakrishnan \cite{Ramakrishnan} in terms of the three-dimensional Heisenberg group.
Write $[f]_n, [g]_n \in H^1(X, \mmu_n)$ for the images of $f,g$ under the boundary map $H^0(X, \mathcal{O}_{X^{an}}^*) \to H^1(X, \mmu_n)$ of the analytic Kummer sequence
\begin{equation*}
1\too \mmu_n \too \mathcal{O}_{X^{an}}^* \xrightarrow{u\mapsto u^n} \mathcal{O}_{X^{an}}^* \too 1.
\end{equation*}
The gerbe $G_{[f]_n,[g]_n}$ from Theorem \ref{primo} is compatible with the Bloch-Deligne line bundle $(f,g)$, in a sense made precise in \cite[Proposition 5.1 and after]{Brylinski}.
\subsection{Finite flat group schemes} Let $X$ be any variety over a perfect field $F$ of characteristic $p>0$. For any commutative finite flat group scheme $N$ killed by $p^n$, consider the cup product pairing
\begin{equation*}
H^1(X, N) \times H^1(X, N^D) \to H^2(X, \mmu_{p^n})
\end{equation*}
of flat cohomology groups, where $N^D$ is the Cartier dual of $N$. Theorem \ref{primo} provides a $\mmu_{p^n}$-gerbe on $X$ given an $N$-torsor and an $N^D$-torsor. When $N$ is the kernel of $p^n$ on an abelian scheme $A$ so that $N^D$ is the kernel of $p^n$ on the dual abelian scheme $A^D$ of $A$, the cup-product pairing is related to the N\'eron-Tate pairing \cite[p.~19]{Milne67}.
\subsection{The gerbe associated with a pair of divisors}\label{pairs} Let $X$ be a smooth variety over a field $F$. Let $D$ and $D'$ be divisors on $X$. Consider the non-abelian sheaf $H$ on $X$
obtained by pushing the Heisenberg group $H_{\mathcal{K}_1, \mathcal{K}_1}$ along the multiplication map $m: \mathcal{K}_1\otimes \mathcal{K}_1 \to \mathcal{K}_2$.
So $H$ is a central extension of $\mathcal{K}_1 \times \mathcal{K}_1$ by $\mathcal{K}_2$, which we write
\begin{equation*}
0 \too \mathcal{K}_2 \too H \overset{\pi}{\too} \mathcal{K}_1 \times \mathcal{K}_1 \too 0.
\end{equation*}
Let $L = L_{D, D'}$ denote the $\mathcal{K}_1 \times \mathcal{K}_1$-torsor defined by the pair $D, D'$. Applying Theorem \ref{primo} gives a $\mathcal{K}_2$-gerbe on $X$ as follows. Since $H$ is a central extension (so $\mathcal{K}_1 \times \mathcal{K}_1$ acts trivially on $\mathcal{K}_2$), the categories of local liftings of $L$ to an $H$-torsor provide (\S \ref{lifting}, \cite[IV, 4.2.2]{Giraud}) a canonical $\mathcal{K}_2$-gerbe $\mathcal{G}_{D, D'}$.
\begin{definition}
The Heisenberg gerbe $\mathcal{G}_{D, D'}$ with band $\mathcal{K}_2$ is the following: For each open set $U$, the category $\mathcal{G}_{D, D'}(U)$ has as objects pairs $(P, \rho)$ where $P$ is an $H$-torsor on $U$ and
\begin{equation*}
\rho\colon P \times_{\pi} (\mathcal{K}_1 \times \mathcal{K}_1) \overset{\sim}{\too} L
\end{equation*}
is an isomorphism of $\mathcal{K}_1 \times \mathcal{K}_1$-torsors; a morphism from $(P, \rho)$ to
$(P', \rho')$ is a map $f: P \to P'$ of $H$-torsors satisfying $\rho = \rho'\circ f$. It is clear that the set of morphisms from $(P, \rho)$ to $(P', \rho')$ is a $\mathcal{K}_2$-torsor.
\end{definition}
\begin{example} Assume $X$ is a smooth proper curve and put $Y = X \times X$.
(i) Assume $F=\mathbb F_q$ is a finite field. Let $D$ be the graph on $Y$ of the Frobenius morphism $\pi: X \to X$ and $D'$ be the diagonal, the image of $X$ under the map $\Delta: X \to X \times X$. Theorem \ref{primo} attaches a $\mathcal{K}_2$-gerbe on $Y$ to the zero-cycle $D.D'$, the intersection of the divisors $D$ and $D'$. Since the zero cycle $D.D'$ is the pushforward $\Delta_{*} \beta$ of $\beta= \displaystyle\sum_{x\in X(\mathbb F_q)} x$ on $X$, we obtain that the set of rational points on $X$ determines a $\mathcal{K}_2$-gerbe on $X \times X$.
(ii) Note that the diagonal $\Delta_Y$ (a codimension-two cycle on $Y \times Y$) can be written as an intersection of divisors $V$ and $V'$ on $Y\times Y = X \times X\times X \times X$, where $V$ (resp.\ $V'$) is the set of points of the latter of the form $\{(a,b,a,c)\}$ (resp.\ $\{(a,b,d,b)\}$). Theorem \ref{primo} says that $\Delta_Y$ determines a $\mathcal{K}_2$-gerbe on $Y\times Y$.
\end{example}
\subsection{Adjunction formula} Let $X$ be a smooth proper variety and $D$ be a smooth divisor of $X$. The classical adjunction formula states:
\textit{The restriction of the line bundle $L_D^{-1}$ to $D$ is the conormal bundle $N_D$ (a line bundle on $D$).}
Given a pair of smooth divisors $D, D'$ with $E = D \cap D'$ smooth of pure codimension two, write $\iota: E \hookrightarrow X$ for the inclusion. There is a map $\pi: \iota^*\mathcal{K}_2 \to \mathcal{K}_2^E$, where $\mathcal{K}_2^E$ indicates the usual K-theory sheaf $\mathcal{K}_2$ on $E$.
An analogue of the adjunction formula for $E$ would be a description of the $\mathcal{K}_2^E$-gerbe $\pi_* \iota^*\mathcal{G}_{D,D'}$ obtained from the $\mathcal{K}_2$-gerbe $\mathcal{G}_{D, D'}$ on $X$.
\begin{proposition}
Let $D$ and $D'$ be smooth divisors of $X$ with $E = D \cap D'$ smooth of pure codimension two.
Consider the line bundles $V = (N_D)|_E$ and $V'= (N_{D'})|_E$ on $E$. Then, $\pi_* \iota^*\mathcal{G}_{D, D'}$ is equivalent to the $\mathcal{K}^E_2$-gerbe $\mathcal{G}_{V, V'}$.
\end{proposition}
\begin{proof}
Since the restriction map $H^*(X, \mathcal{K}_i) \to H^*(E, \mathcal{K}^E_i)$ respects cup-product, this follows from the classical adjunction formula for $D$ and $D'$.
\end{proof}
\subsection{Parshin's adelic groups} Let $S$ be a smooth proper surface over a field $F$.
For any choice of a curve $C$ in $S$ and a point $P$ on $C$, Parshin
\cite[(18)]{Parshin} has introduced a discrete Heisenberg group
\begin{equation*}
0 \to \mathbb{Z} \to \tilde{\Gamma}_{P,C} \to \Gamma_{P,C} \to 0,
\end{equation*}
where $\Gamma_{P,C}$ is isomorphic (non-canonically) to $\mathbb{Z} \oplus \mathbb{Z}$; he has shown \cite[end of
\S 3]{Parshin} how a suitable product of these groups leads to an adelic
description of $CH^2(S)$ and the intersection pairing \eqref{int-pairing}. His
constructions are closely related to an adelic resolution of the sheaf
$H_{\mathcal{K}_1, \mathcal{K}_1}$ on $S$.
\section{Algebraic cycles of codimension two}\label{sec:codim2}
Throughout this section, $X$ is a smooth proper variety over a field $F$. Let $\eta \colon \Spec F_X\to X$ be the generic point of $X$ and write $K_i^{\eta}$ for the sheaf $\eta_* K_i(F_X)$.
In this section, we construct the Gersten gerbe $\mathcal{C}_{\alpha}$ for any codimension two cycle $\alpha$ on $X$, provide various equivalent
descriptions of $\mathcal{C}_{\alpha}$ and use them to prove Theorems \ref{k2gerbe}, \ref{comparison}. As a consequence, we obtain Theorems \ref{secondo} and \ref{terzo} of the introduction.
\subsection{Bloch-Quillen formula} \label{sec:bloch-quillen} Recall the (flasque) Gersten resolution\footnote{This resolution exists for any smooth variety over $F$.} \cite[\S7]{Quillen} \cite[p. 276]{handbook} \cite{Gerstenicm} of the Zariski sheaf $\mathcal{K}_i$ associated with the presheaf $U \mapsto K_i(U)$:
\begin{equation}\label{gersten}
0 \too \mathcal{K}_i \too \bigoplus_{x \in X^{(0)}} j_* K_i(x) \too \bigoplus_{x \in X^{(1)}} j_* K_{i-1}(x) \too \cdots \too \bigoplus_{x \in X^{(i-1)}} j_* K_1(x) \xrightarrow{\delta_{i-1}} \bigoplus_{x \in X^{(i)}} j_* K_0(x);
\end{equation}
here, any point $x\in X^{(m)}$ corresponds to a subvariety of codimension $m$
and the map $j$ is the canonical inclusion $x \hookrightarrow X$. So $\mathcal{K}_i$ is
quasi-isomorphic to the complex
\begin{equation}
\label{eq:9}
G_i^X = \bigl[ K_i^{\eta} \too \bigoplus_{x \in X^{(1)}}
j_* K_{i-1}(x) \too \cdots \too \bigoplus_{x \in X^{(i-1)}} j_* K_1(x)
\xrightarrow{\delta_{i-1}} \bigoplus_{x \in X^{(i)}} j_* K_0(x) \bigr] .
\end{equation}
By (\ref{gersten}), there is a functorial isomorphism \cite[\S7, Theorem 5.19]{Quillen} \cite[Corollary 72, p.~276]{handbook}
\begin{equation}\label{BQ}
\bigoplus_i CH^i(X) \xrightarrow{\sim} \bigoplus_i H^i(X, \mathcal{K}_i); \qquad \text{\makebox[0pt][l]{(Bloch-Quillen formula)}}
\end{equation}
this is an isomorphism of graded rings: D.~Grayson has proved that the intersection product on $CH(X) = \oplus_i CH^i(X)$ corresponds to the cup-product in cohomology \cite[Theorem 77, p.~278]{handbook}. Thus algebraic cycles of codimension $n$ give $n$-cocycles of the sheaf $\mathcal{K}_n$ on $X$, and two such cocycles are cohomologous exactly when the algebraic cycles are rationally equivalent.
The final two maps in \eqref{gersten} arise essentially from the valuation and the tame symbol map \cite[pp.~351-2]{Bloch}. Let $R$ be a discrete valuation ring with fraction field $L$; let ${\rm ord}\colon L^{\times} \to \mathbb{Z}$ be the valuation and let $l$ be the residue field. The boundary maps from the localization sequence for $\Spec R$ are known explicitly: the map $L^{\times} = K_1(L) \to K_0(l) = \mathbb{Z}$ is ${\rm ord}$ and the map $K_2(L) \to K_1(l) = l^{\times}$ is the tame symbol. This applies to any normal subvariety $V$ (corresponding to a $y \in X^{(i)}$) and a divisor $x$ of $V$ (corresponding to an $x \in X^{(i+1)}$).
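Both boundary maps can be computed in the simplest example, $R=\mathbb{Z}_{(p)}\subset\mathbb{Q}$ with residue field $\mathbb{F}_p$: ${\rm ord}$ is the $p$-adic valuation and the tame symbol is $(f,g)\mapsto (-1)^{{\rm ord}(f)\,{\rm ord}(g)} f^{{\rm ord}(g)} g^{-{\rm ord}(f)} \bmod p$. A sketch using these standard formulas (the choice $p=5$ is arbitrary; a check of the Steinberg relation $(f,1-f)\mapsto 1$ is included):

```python
from fractions import Fraction

p = 5

def ord_p(f):
    """p-adic valuation of a nonzero rational: K_1(Q) -> K_0(F_p) = Z."""
    f = Fraction(f)
    v, num, den = 0, f.numerator, f.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def tame(f, g):
    """Tame symbol K_2(Q) -> K_1(F_p) = F_p^*:
    (f,g) -> (-1)^(v(f)v(g)) f^v(g) / g^v(f)  mod p."""
    f, g = Fraction(f), Fraction(g)
    vf, vg = ord_p(f), ord_p(g)
    sign = -1 if (vf * vg) % 2 else 1
    u = sign * f ** vg / g ** vf          # a p-adic unit
    num, den = u.numerator % p, u.denominator % p
    return (num * pow(den, -1, p)) % p

assert ord_p(Fraction(75, 2)) == 2
assert tame(p, 2) == pow(2, -1, p)        # v(p)=1, v(2)=0: symbol is 1/2 mod p
# Steinberg relation: the tame symbol of (f, 1-f) is trivial
for f in [Fraction(2), Fraction(1, 3), Fraction(-4), Fraction(5, 2)]:
    assert tame(f, 1 - f) == 1
```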
\subsection{Divisors}\label{divisors} We recall certain well known results about divisors and line bundles for comparison with the results below for the $\mathcal{K}_2$-gerbes attached to codimension two cycles.
If $A$ is a sheaf of abelian groups on $X$, then ${\rm Ext}_X^1(\mathbb Z, A) = H^1(X, A)$ classifies $A$-torsors on $X$. Given an extension $E$
\begin{equation*}
0 \too A \too E \overset{\pi}{\too} \mathbb{Z} \too 0
\end{equation*}
of abelian sheaves on $X$, the corresponding $A$-torsor is simply $\pi^{-1}(1)$ (a sheaf of sets). When $X$ is a point, $\pi^{-1}(1)$ is a coset of $\pi^{-1}(0) = A$, i.e., an $A$-torsor. The classical correspondence \cite{Hartshorne} between Weil divisors (codimension-one algebraic cycles) $D$ on $X$, Cartier divisors, line bundles $\mathcal{L}_D$, and torsors $\mathcal{O}_D$ over $\mathcal{O}_X^* =\mathbb{G}_m =\mathcal{K}_1$ comes from the Gersten sequence (\ref{gersten}) for $\mathcal{K}_1$ (see also \cite[2.2]{Gerstenicm}):
\begin{equation}\label{gersten1}
0 \too \mathcal{O}_X^* \too F_X^{\times} \overset{d}{\too} \bigoplus_{x \in X^{(1)}} j_*\mathbb{Z} \too 0,
\end{equation}
where $F_X$ is the constant sheaf of rational functions on $X$ and the sum is over all irreducible effective divisors on $X$, using that $K_0(L) \cong \mathbb{Z}$ and $K_1(L) = L^{\times}$ for any field $L$. As a Weil divisor $D = \sum_{x\in X^{(1)}} n_x x$ is a formal combination with integer coefficients of subvarieties of codimension one of $X$, it determines a map of sheaves
\begin{equation*}
\psi\colon \mathbb{Z} \too \bigoplus_{x \in X^{(1)}} j_*\mathbb{Z};
\end{equation*}
$\psi(1)$ is the section with components $n_x$. The $\mathcal{O}_X^*$-torsor $\mathcal{O}_D$ attached to $D$ is given as the subset
\begin{equation}\label{divisore}
{d}^{-1}(\psi(1)) \subset F_X^{\times}.
\end{equation}
A \v Cech description of $\mathcal{O}_D$ relative to a Zariski open cover $\{U_i\}$ of $X$ is as follows. Pick a rational function $f_i$ on $U_i$ with ${\rm ord}_x f_i = n_x$ for all $x \in U^{(1)}_i$ (so $x$ is an irreducible subvariety of codimension one of $U_i$); we view $f_i \in F_X^{\times}$. On $U_i \cap U_j$, one has $f_i = g_{ij}f_j$ for a unique $g_{ij} \in \mathcal{O}_X^*(U_i \cap U_j)$; the collection $\{g_{ij}\}$ is a \v Cech one-cocycle with values in $\mathcal{O}_X^*$ representing $\mathcal{O}_D$.
For any $D$, $\mathcal{L}_D$ is trivial on the complement of the support of $D$.
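On $U=\mathbb{A}^1$ the map $d$ of \eqref{gersten1} sends a rational function to its family of orders along the closed points of $U$, computable as root multiplicities. A sketch (polynomials over $\mathbb{Z}$ as ascending coefficient lists; the helper names are ours):

```python
# Divisor of a rational function on the affine line: ord_a(f) as root
# multiplicity, computed by synthetic division.

def eval_poly(c, a):
    # c = ascending coefficients c[0] + c[1] x + ...
    return sum(ci * a ** i for i, ci in enumerate(c))

def mult(c, a):
    """Multiplicity of the root a in the polynomial c."""
    m = 0
    while len(c) > 1 and eval_poly(c, a) == 0:
        b = c[::-1]                 # descending coefficients
        q = [b[0]]
        for coef in b[1:-1]:        # synthetic division by (x - a)
            q.append(coef + a * q[-1])
        c = q[::-1]                 # quotient, ascending again
        m += 1
    return m

def ord_at(num, den, a):
    """ord_a(num/den): the a-component of the map d applied to num/den."""
    return mult(num, a) - mult(den, a)

# f = x^2 (x - 1) / (x - 2): d(f) has components 2, 1, -1 at 0, 1, 2.
num = [0, 0, -1, 1]                 # x^3 - x^2
den = [-2, 1]                       # x - 2
assert [ord_at(num, den, a) for a in (0, 1, 2, 3)] == [2, 1, -1, 0]
```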
\begin{remark}\label{picard}
For each open $U$ of $X$, one has the Picard category $\tors_U(\mathcal{O}^*)$ of $\mathcal{O}^*$-torsors on $U$. These combine to the Picard stack $\tors(\mathcal{O}^*)$ of $\mathcal{O}^*$-torsors
on $X$. The Gersten sequence incarnates this Picard stack \cite[1.10]{DeligneB}. \qed
\end{remark}
\subsection{The Gersten gerbe of a codimension two cycle}
We next show that every cycle $\alpha$ of codimension two on $X$ determines a gerbe $\mathcal{C}_{\alpha}$ with band $\mathcal{K}_2$. The Gersten complex \eqref{gersten} enables us to give a geometric description of $\mathcal{C}_{\alpha}$; see Remark \ref{trivial-eta} below.
The cycle $\alpha$ provides a natural map
\begin{equation}\label{gersten4}
\begin{tikzcd}
&&&& {\mathbb{Z}} \arrow[d, red, "\phi" blue] \\
0 \arrow[r] &\mathcal{K}_2 \arrow[r, "\mu"] & K_2^{\eta} \arrow[r, "\nu"] & {\bigoplus\limits_{x \in X^{(1)}} j_* K_1(x)} \arrow[r, "{\delta}"] &\bigoplus\limits_{x \in X^{(2)}} j_* K_0(x) \arrow[r] & 0\\
\end{tikzcd}
\end{equation}
and, by pulling back along $\phi$, an exact sequence (where $T$ denotes the fiber product of $\bigoplus_{x \in X^{(1)}} j_* K_1(x)$ and $\mathbb{Z}$ over $\bigoplus_{x \in X^{(2)}} j_* K_0(x)$)
\begin{equation}\label{TEE}
0 \too \mathcal{K}_2 \too K_2^{\eta} \overset{\nu}{\too} T \overset{\delta}{\too} \mathbb{Z} \too 0.
\end{equation}
This two-extension of $\mathbb{Z}$ by $\mathcal{K}_2$ gives a class in $\Ext^2(\mathbb{Z},\mathcal{K}_2) = H^2(X, \mathcal{K}_2)$.
Write $\alpha = \sum_x n_x [x]$ as a sum over $x\in X^{(2)}$ (irreducible codimension-two subvarieties); then the $x$-component of $\phi(1)$ corresponds to $n_x$ under the canonical isomorphism $K_0(x) \cong \mathbb{Z}$. The maps $\delta$ and $\nu$ are essentially given by the valuation (or ${\rm ord}$) and tame symbol maps; see \S \ref{sec:bloch-quillen}.
\begin{definition}
The gerbe $\mathcal{C}_{\alpha}$ (associated with the cycle $\alpha$) is obtained by applying the results of \S \ref{four-term} to \eqref{gersten4}, \eqref{TEE}; thus it is an example of the gerbe $\mathcal{L}_{\beta}$ of \S \ref{four-term}, where $\beta=\phi$ and $\mathcal{L}$ is the Picard stack associated to the complex $[K_2^{\eta}\to \bigoplus\limits_{x \in X^{(1)}} j_* K_1(x)]$.
\end{definition}
\begin{remark} Corollary \ref{diversi} provides two descriptions of $\mathcal{L}_{\beta}$. It should be emphasized that both descriptions are useful. One of them, which we make explicit below, is crucial for the comparison with the Heisenberg gerbe (Theorem \ref{comparison}); the other succinct description is given in Remark \ref{trivial-eta}.
\begin{enumerate}
\item For any open set $U$ of $X$, the category $\mathcal{C}_{\alpha}(U)$ has as objects the sections $u \in \Gamma(U, \bigoplus\limits_{x \in X^{(1)}} j_* K_1(x))$ with $\delta u = \phi(1)$; morphisms from $u$ to $u'$ are elements $a \in K_2^{\eta}(U)$ with $\nu(a) = u'-u$.
\item Any Hom-set ${\rm Hom}_{\mathcal{C}_{\alpha}}(u, u')$ is a $K_2(U)$-torsor.
\item The category $\mathcal{C}_{\alpha}(U)$ can be described geometrically in terms of the ${\rm ord}$ and tame symbol maps. For instance, let $X$ be a surface. Write the zero-cycle $\alpha$ as a finite sum $ \sum_{i\in I} n_i x_i$ of points $x_i$ of $X$. We assume $n_i \neq 0$ and write $V$ for the complement of the support of $\alpha$. Any non-zero rational function $f$ on a curve $C$ defines an object of $\mathcal{C}_{\alpha}(U)$ if $f$ is invertible on $C \cap U\cap V$ and satisfies ${\rm ord}_{x_i} f = n_i$ for each $x_i \in U$ (assuming, for simplicity, that $x_i$ is a smooth point of $C$). A general object of $\mathcal{C}_{\alpha}(U)$ is a finite collection $u = \{C_j, f_j\}$ of curves $C_j$ and non-zero rational functions $f_j$ on $C_j$ such that $f_j$ is invertible on $C_j \cap U \cap V$ and $\sum {\rm ord}_{x_i} f_j = n_i$ (an index $j$ occurs in the sum if $x_i \in C_j$) for each $x_i \in U$. A morphism from $u$ to $u'$ is an element $a \in K_2^{\eta}$ whose tame symbol is $u' - u$. \qed
\end{enumerate}
\end{remark}
\begin{theorem}\label{k2gerbe} (i) $\mathcal{C}_{\alpha}$ is a gerbe on $X$ with band $\mathcal{K}_2$.
(ii) Under \eqref{BQ}, the class of $\mathcal{C}_{\alpha} \in H^2(X, \mathcal{K}_2)$ corresponds to $\alpha \in CH^2(X)$.
(iii) $\mathcal{C}_{\alpha}$ is equivalent to $\mathcal{C}_{\alpha'}$ (as gerbes) if and only if the cycles $\alpha$ and $\alpha'$ are rationally equivalent.\end{theorem}
\begin{proof} (i) The Gersten sequence \eqref{gersten4} is an example of a four-term complex, discussed in \S \ref{four-term}. As the stack $\mathcal{C}_{\alpha}$ is a special case of the gerbe $\mathcal{L}_{\beta}$ constructed in \S \ref{four-term}, (i) is obvious.
In more detail: We first observe that \eqref{TEE} provides a quasi-isomorphism between $\mathcal{K}_2$ (sheaf) and the complex (concentrated in degree zero and one)
\begin{equation}\label{eta}
\eta: \mathcal{K}_2 \to [K_2^{\eta} \xrightarrow{\nu} {\rm Ker}(\delta)].
\end{equation}
Now, suppose $U$ is disjoint from the support of $\alpha$. On such an open set $U$, the map $\phi$ is zero. This means that the objects $u$ of the category $\mathcal{C}_{\alpha}(U)$ are elements of ${\rm Ker}(\delta)$. The gerbe $\mathcal{C}_{\alpha}$, when restricted to $U$, is equivalent to the Picard stack of $\mathcal{K}_2$-torsors \cite[Expose XVIII, 1.4.15]{SGA4}: in the complex $[K_2^{\eta} \xrightarrow{\nu} {\rm Ker}(\delta)]$, one has ${\rm Coker} (\nu) =0$ and ${\rm Ker}(\nu) =\mathcal{K}_2|_U$. Since for any abelian sheaf $G$, the category $\tors(G)$ is the trivial $G$-gerbe, $\mathcal{C}_{\alpha}$ is the trivial gerbe with band $\mathcal{K}_2$ on the complement of the support of $\alpha$.
Now, consider an arbitrary open set $V$ of $X$. By the exactness of \eqref{TEE}, there is an open covering $\{U_i\}$ of $V$ and sections $u_i \in T(U_i)$ with $\delta(u_i) = \phi(1)$. Fix $i$ and let $U$ be an open set contained in $U_i$. Then the category $\mathcal{C}_{\alpha}(U)$ is non-empty. Consider the category $D$ with objects the elements $d\in {\rm Ker}(\delta) \subset T(U)$ and with morphisms from $d$ to $d'$ the elements $a \in K_2^{\eta}$ with $\nu(a) = d'-d$. The category $D$ is clearly equivalent to the category of $K_2(U)$-torsors. The functor which sends $d$ to $d + u_i$ is easily seen to be an equivalence of categories between $D$ and $\mathcal{C}_{\alpha}(U)$. Thus $\mathcal{C}_{\alpha}$ is a gerbe with band $\mathcal{K}_2$.
(ii) The Bloch-Quillen formula \eqref{BQ} arises from the canonical map
\begin{equation*}
d^2: Z^2(X) \to H^2(X, \mathcal{K}_2)
\end{equation*}
of Lemma \ref{comparisons} attached to the four-term complex \eqref{gersten4}. As $\mathcal{C}_{\alpha}$ is a gerbe of the form $\mathcal{L}_{\beta}$, (ii) follows from Lemma \ref{comparisons}.
(iii) This is a simple consequence of the Bloch-Quillen formula (\ref{BQ}).
\end{proof}
\begin{remark}\label{trivial-eta} (i) Split the sequence \eqref{gersten4} into
\begin{equation*}
0 \too \mathcal{K}_2 \too K_2^{\eta} \too K_2^{\eta}/{\mathcal{K}_2} \too 0
\end{equation*}
and
\begin{equation*}
0 \too K_2^{\eta}/{\mathcal{K}_2} \too \bigoplus_{x \in X^{(1)}} j_* K_1(x) \too \bigoplus_{x \in X^{(2)}} j_* K_0(x) \too 0.
\end{equation*}
Since the Gersten resolution is by flasque sheaves, one has $H^1(X, K_2^{\eta}/{\mathcal{K}_2}) \xrightarrow{\sim} H^2(X, \mathcal{K}_2)$. As Cartier divisors are elements of $H^0(X, K_1^{\eta}/{\mathcal{K}_1})$, we view elements of $H^1(X, K_2^{\eta}/{\mathcal{K}_2})$ as \emph{Cartier cycles of codimension two}. The map $Z^2(X) \to H^1(X, K_2^{\eta}/{\mathcal{K}_2})$ attaches to any cycle its Cartier cycle. Corollary \ref{diversi} provides the following succinct description of $\mathcal{C}_{\alpha}$:
\emph{it is the gerbe of liftings (to a $K_2^{\eta}$-torsor) of the $(K_2^{\eta}/{\mathcal{K}_2})$-torsor determined by $\alpha$.}
(ii) The proof of Theorem \ref{k2gerbe} provides a canonical trivialization\footnote{This uses \eqref{eta}.} $\eta_{\alpha}$ of the gerbe $\mathcal{C}_{\alpha}$ on the complement of the support of $\alpha$.
(iii) The pushforward of $\mathcal{C}_{\alpha}$ along $\mathcal{K}_2 \to \Omega^2$ produces an $\Omega^2$-gerbe which manifests the cycle class of $\alpha$ in de Rham cohomology $H^2(X, \Omega^2)$. If $\alpha$ is homologically equivalent to zero, then this latter gerbe is trivial, i.e., it is the Picard stack of $\Omega^2$-torsors. \qed\end{remark}
\begin{remark}\label{empty} It may be instructive to compare the $\mathbb{G}_m$-torsor $\mathcal{O}_D$ attached to a divisor $D$ of $X$ and the $\mathcal{K}_2$-gerbe $\mathcal{C}_{\alpha}$ attached to a codimension-two cycle. Let $U$ be any open set of $X$. This goes, roughly speaking, as follows.
\begin{itemize}
\item $\mathcal{O}_D$: The set of divisors on $U$ rationally equivalent to zero is exactly the image of $d$ over $U$ in \eqref{gersten1}. So, the set $\mathcal{O}_D(U)$ is non-empty if $D =0$ in $CH^1(U)$. The sections of $\mathcal{O}_D$ over $U$ are given by rational functions $f$ on $U$ whose divisor is $D|_U$. In other words, the sections are rational equivalences between the divisor $D$ and the empty divisor. The set $\mathcal{O}_D(U)$ is a torsor over $\mathbb{G}_m(U)$.
\item $\mathcal{C}_{\alpha}$: We observe that the image of $\delta$ in \eqref{gersten4} consists of codimension-two cycles rationally equivalent to zero. So $\mathcal{C}_{\alpha}(U)$ is non-empty if $\alpha = 0$ in $CH^2(U)$. Each rational equivalence between $\alpha$ and the empty codimension-two cycle gives an object of $\mathcal{C}_{\alpha}(U)$. The sheaf of morphisms between two objects is a $\mathcal{K}_2$-torsor. \qed
\end{itemize}
\end{remark}
The Bloch-Quillen formula \eqref{BQ} states that equivalence classes of $\mathcal{K}_2$-gerbes are in bijection with codimension-two cycles (modulo rational equivalence) on $X$. We have seen that a codimension-two cycle determines a $\mathcal{K}_2$-gerbe (an actual gerbe, not just one up to equivalence). It is natural to ask whether the converse holds (see Proposition \ref{gerbe2cyclea} in this regard):
\begin{question}\label{gerbe2cycle}
Does a $\mathcal{K}_2$-gerbe on $X$ determine an actual codimension-two cycle?
\end{question}
Consider the $\mathcal{K}_2$-gerbe $\mathcal{G}_{D, D'}$ attached to a pair of divisors $D, D'$ on $X$. If $\mathcal{G}_{D,D'}$ determines an actual codimension-two cycle, then any pair $D, D'$ of divisors determines a canonical codimension-two cycle on $X$. This implies that there is a canonical intersection of Weil divisors and this last statement is known to be false. So the answer to Question \ref{gerbe2cycle} is negative in general.
\subsection{Gerbes and cohomology with support}\label{support}
Let $F$ be an abelian sheaf on a site $\C$. Recall that (see e.g.\xspace \cite[\S5.1]{Milne})
$H^1(F)$ is the set of isomorphism classes of auto-equivalences of the trivial gerbe $\tors(F)$ with band $F$; more generally, given gerbes $\mathcal{G}$ and $\mathcal{G}'$ with band $F$, then the set $\Hom_\C(\mathcal{G}, \mathcal{G}')$ (assumed non-empty) of maps of gerbes is a torsor for $H^1(F)$.
Recall also that, for any sheaf $F$ on a scheme $V$, the cohomology $H^*_Z(V, F)$ with support in a closed subscheme $Z$ of $V$ fits into an exact sequence \cite[\S 5]{Bloch}
\begin{equation}
\cdots \too H^i_Z(V,F) \too H^i(V,F) \too H^i(V - Z,F) \too H^{i+1}_Z(V, F) \too \cdots ;
\end{equation}
the exactness of
\begin{equation*}
H^1(V, F) \too H^1(V- Z, F) \too H^2_Z(V,F) \too H^2(V, F) \too H^2(V -Z, F)
\end{equation*}
leads to an interpretation of the group $H^2_Z(V, F)$: it classifies isomorphism classes of pairs $(\mathcal{G}, \phi)$ consisting of a gerbe $\mathcal{G}$ with band $F$ on $V$ and a trivialization $\phi$ of $\mathcal{G}$ on $V-Z$, i.e., $\phi$ is an equivalence of $\mathcal{G}|_{V-Z}$ with $\tors(F|_{V-Z})$.
\subsection{Geometric interpretation of some results of Bloch} Bloch \cite{Bloch} has proved that:
\begin{enumerate}
\item \cite[Proposition 5.3]{Bloch} Any codimension-two cycle $\alpha$ on $X$ has a canonical cycle class $[\alpha] \in H^2_Z(X, \mathcal{K}_2)$; here $Z$ is the support of $\alpha$.
\item \cite[Theorem 5.11]{Bloch} If $D$ is a smooth divisor of $X$, then $\mathrm{Pic}(D) = H^1(D, \mathcal{K}_1)$ is a
direct summand of $H^2_D(X, \mathcal{K}_2)$.
\end{enumerate}
For (1), we note that, by Remark \ref{trivial-eta}, the gerbe $\mathcal{C}_{\alpha}$ has a trivialization $\eta_{\alpha}$ on $X - Z$. By the above interpretation of $H^2$ with support, the pair $(\mathcal{C}_{\alpha}, \eta_{\alpha})$ defines an element of $H^2_Z(X, \mathcal{K}_2)$; this is the canonical class $[\alpha]$.
For (2), recall that Bloch constructed maps $a\colon \mathrm{Pic}(D) \to H^2_D(X, \mathcal{K}_2)$ and $b\colon H^2_D(X, \mathcal{K}_2) \to \mathrm{Pic}(D)$ with $b\circ a$ the identity on $\mathrm{Pic}(D)$. We can interpret the map $a$ as follows. Note that any divisor $E$ of $D$ is a codimension-two cycle $\alpha$ on $X$. The $\mathcal{K}_2$-gerbe $\mathcal{C}_{\alpha}$ on $X$ has a canonical trivialization $\eta_{\alpha}$ on $X -E$ (and so also on the smaller $X-D$). The association $E \mapsto (\mathcal{C}_{\alpha}, \eta_{\alpha})$ gives the homomorphism $a\colon \mathrm{Pic}(D) \to H^2_D(X, \mathcal{K}_2)$.
These results of Bloch provide a partial answer to Question \ref{gerbe2cycle} summarized in the following
\begin{proposition}\label{gerbe2cyclea} Let $\mathcal{G}$ be a $\mathcal{K}_2$-gerbe on $X$ and let $\beta \in CH^2(X)$ correspond to $\mathcal{G}$ in the Bloch-Quillen formula \eqref{BQ}. Let $\phi$ be a trivialization of $\mathcal{G}$ on the complement $X -D$ of a smooth divisor $D$ of $X$. Then, $\beta$ can be represented by a divisor of $D$ (unique up to rational equivalence on $D$).
\end{proposition}
Note that the data of $\phi$ is crucial: the map $\mathrm{Pic}(D) \to CH^2(X)$
is not injective in general \cite[(iii), p.~269]{Bloch2}.
\begin{proposition}
Let $i\colon D\to X$ and $j\colon U=X-D\to X$ be the inclusion maps. We have the following short exact sequence of Picard 2-stacks
\begin{equation*}
\tors (i_*\mathcal{K}_1^D) \too \ger (\mathcal{K}_2^X) \too \ger (j_*\mathcal{K}_2^U).
\end{equation*}
\end{proposition}
\begin{proof}
Analyzing the Gersten sequence \eqref{gersten}, \eqref{eq:9} for $\mathcal{K}_2$ on
$X$ and $U$, we get the short exact sequence:
\begin{equation*}
0\too i_*G_1^D \too G_2^X \too j_*G_2^U \too 0.
\end{equation*}
This gives a short exact sequence of Picard 2-stacks; now apply~\eqref{eq:8}. Note that $\tors (i_*\mathcal{K}_1^D)$ is considered as a Picard 2-stack with no nontrivial 2-morphisms.
\end{proof}
The global long exact cohomology sequence arising from the exact sequence in the
proposition gives part of the localization sequence for higher Chow groups
\begin{equation*}
\cdots \too CH^1(D,1) \too CH^2(X, 1) \too CH^2(X-D,1) \too \mathrm{Pic}(D) \too CH^2(X) \too CH^2(X -D) \too 0.
\end{equation*}
This uses the facts that $CH^1(D,0) =\mathrm{Pic}(D)$, that $CH^1(D,1) = H^0(D, \mathcal{O}^*)$, and that $CH^1(D,j)$ is zero for $j >1$ \cite[(viii), p.~269]{Bloch2}.
\subsection{The two gerbes associated with an intersection of divisors} For a codimension-two cycle of $X$ presented as the intersection of divisors, the Gersten gerbe of Theorem \ref{k2gerbe} and the Heisenberg gerbe of \S \ref{pairs} (constructed using Theorem \ref{primo}) are equivalent: their classes in $H^2(X, \mathcal{K}_2)$ both correspond to the class of the codimension-two cycle in $CH^2(X)$ via \eqref{BQ}. We now construct an actual equivalence between them.
\begin{theorem}\label{comparison} Suppose that the codimension-two cycle $\alpha$ is the intersection $D.D'$ of divisors $D$ and $D'$ on $X$. There is a natural equivalence\footnote{By \S \ref{support}, the set of such equivalences is a torsor over $H^1(X, \mathcal{K}_2) =CH^2(X,1)$ \cite[\S 2.1]{Stach}.}
\begin{equation*}
\Theta: \mathcal{C}_{\alpha} \to \mathcal{G}_{D, D'}
\end{equation*}
of $\mathcal{K}_2$-gerbes on $X$.
\end{theorem}
\begin{proof}
By Theorem \ref{primo} and Theorem \ref{k2gerbe}, the classes of the gerbes $\mathcal{G}_{D,D'}$ and
$\mathcal{C}_{\alpha}$ in $H^2(X, \mathcal{K}_2)$ both correspond to the class of $\alpha$ in
$CH^2(X)$. This shows that they are equivalent.
Let us exhibit an actual equivalence. We will construct a functor $\Theta_U:
\mathcal{C}_{\alpha}(U) \to \mathcal{G}_{D, D'}(U)$, compatible with restriction maps $V\subset
U\subset X$.
Consider an object $r \in \mathcal{C}_{\alpha}(U)$. We want to attach to $r$ an $H$-torsor $\Theta_U(r)$ on $U$ in a functorial manner. Each $\Theta_U(r)$ is an $H$-torsor
which lifts the $\mathcal{K}_1 \times \mathcal{K}_1$-torsor $\mathcal{O}_D \times \mathcal{O}_{D'}$ on $U$.
We will describe $\Theta_U(r)$ by means of \v Cech cocycles. Fix an open covering $\{U_i\}$ of $U$ and write $\mathcal{C}^n(A)$ for \v Cech $n$-cochains with values in the sheaf $A$ with respect to this covering.\medskip
\paragraph{\textit{Step 1.}} Let $a= \{a_{i,j}\}$ and $b = \{b_{i,j}\}$ with $a,b \in \mathcal{C}^1(\mathcal{O}^*)$ be \v Cech 1-cocycles for $\mathcal{O}_D$ and $\mathcal{O}_{D'}$. Pick $h = \{h_{i,j}\} \in \mathcal{C}^1(H)$ of the form
\begin{equation*}
h_{i,j} = (a_{i,j}, b_{i,j}, c_{i,j}) \in H(U_i\cap U_j).
\end{equation*}
We need $c_{i,j} \in K_2(U_i \cap U_j)$ such that $h$ is a \v Cech 1-cocycle (for $\Theta_U(r)$, the putative $H$-torsor).
Since $a,b$ are \v Cech cocycles, the \v Cech boundary $\partial h$ is of the form
\begin{equation*}
\partial h = \{(1,1, y_{i,j,k})\}
\end{equation*}
with $y =\{y_{i,j,k}\} \in \mathcal{C}^2(\mathcal{K}_2)$ a \v Cech 2-cocycle. This cocycle $y$
represents the gerbe $\mathcal{G}_{D,D'}$ on $U$. \vspace*{1em}
\paragraph{\textit{Step 2.}} Recall that $\mathcal{C}_{\alpha}$ is the associated stack of the prestack $U \mapsto \mathcal{C}_{\alpha}(U)$, where the category $\mathcal{C}_{\alpha}(U)$ has objects $u \in \oplus_{x \in X^1} j_* K_1(x)$ with $\delta(u) = \phi(1)$, and morphisms from $u$ to $v$ are elements $a \in K_2^{\eta}$ with $\nu(a) = v-u$.
Since the category $\mathcal{C}_{\alpha}(U)$ is non-empty, the class of the gerbe $\mathcal{C}_{\alpha}$ (restricted to $U$) in $H^2(U, \mathcal{K}_2)$ is zero. Since $\mathcal{C}_{\alpha}$ and $\mathcal{G}_{D, D'}$ are equivalent, the class of $\mathcal{G}_{D, D'}$ in $H^2(U, \mathcal{K}_2)$ is also zero.\vspace*{1em}
\paragraph{\textit{Step 3.}} Consider the case that $r$ is given by a pair $(C, g)$ where $C$ is a divisor on $X$ and $g$ is a rational function on $C$. The condition $\delta(r) = \phi(1)$ says $\alpha \cap U$ is the intersection of $U$ with the zero locus of $g$. Assume $g \in \mathcal{O}_C(C \cap U)$. Given any lifting $\tilde{g} \in \mathcal{O}_X(U)$ with divisor $C'$ on $U$, we can write $\alpha \cap U$ as the intersection of the divisor $C \cap U$ and the (principal) divisor $C'$ in $U$. By the results in \S \ref{pairs}, there is a $\mathcal{K}_2$-gerbe $\mathcal{G}_{C\cap U, C'}$ on $U$. As $C'$ is principal, its class in $H^1(U, \mathcal{K}_1)$ is zero; so the class of $\mathcal{G}_{C\cap U, C'}$ in $H^2(U, \mathcal{K}_2)$ is zero.\vspace*{1em}
\paragraph{\textit{Step 4.}} Let $z =\{z_{i,j,k}\} \in \mathcal{C}^2(\mathcal{K}_2)$ be a \v Cech 2-cocycle for $\mathcal{G}_{C\cap U, C'}$; since its class in $H^2(U, \mathcal{K}_2)$ is zero, $z = \partial w$ is the boundary of a \v Cech cochain $w = \{w_{i,j}\} \in \mathcal{C}^1(\mathcal{K}_2)$. Note that
$y - z = \partial v$ for a 1-cochain $v$, since $\mathcal{G}_{C\cap U, C'}$ and $\mathcal{G}_{D, D'}$ are equivalent as gerbes on $U$: both are trivial on $U$!
The \v Cech cochain $h' =\{h'_{i,j}\} \in \mathcal{C}^1(H)$ with
\begin{equation*}
h'_{i,j} = (a_{i,j}, b_{i,j}, c_{i,j})(1,1, - w_{i,j})(1,1,- v_{i,j})
\end{equation*}
is a \v Cech cocycle and represents the required $H$-torsor $\Theta_U(r)$ on $U$.\vspace*{1em}
\paragraph{\textit{Step 5.}} The same argument with simple modifications works for a
general object of $\mathcal{C}_{\alpha}$.
It is easy to check that $\Theta_U$ is a functor, compatible with restriction maps $V\subset
U\subset X$, and that the induced morphism of gerbes is an equivalence.
\end{proof}
\subsection{Higher gerbes attached to smooth Parshin chains}\label{end}
By Gersten's conjecture, the localization sequence \cite[\S7 Proposition 3.2]{Quillen} breaks up into short exact sequences
\begin{equation*}
0 \too K_i(V) \too K_i(V-Y) \too K_{i-1}(Y) \too 0, \qquad (i> 0)
\end{equation*}
for any smooth variety $V$ over $F$ and a closed smooth subvariety $Y$ of $V$.
Let $D$ be a smooth closed subvariety of codimension one of $X$, with inclusion $j: D \to X$; write $\iota: X -D \to X$ for the inclusion of the open complement of $D$. Any divisor $\alpha$ of $D$ is a codimension-two cycle on $X$; one has a map ${\rm Pic}(D) \to CH^2(X)$ \cite[(iii), p.~269]{Bloch2}.
This gives the exact sequence (for $i>0$)
\begin{equation*}
0 \too \mathcal{K}_i \too \mathcal{F}_i \too j_* \mathcal{K}^D_{i-1} \too 0
\end{equation*}
of sheaves on $X$ where $\mathcal{F}_i = \iota_*\mathcal{K}^U_i$ is the sheaf associated with the presheaf $U \mapsto K_i(U-D)$. We write $\mathcal{K}_i^D$ and $\mathcal{K}_i^U$ for the usual K-theory sheaves on $D$ and $U$ since the notation $\mathcal{K}_i$ is already reserved for the sheaf on $X$.
The boundary map
\begin{equation*}
H^1(D, \mathcal{K}^D_1) = H^1(X, j_*\mathcal{K}^D_1) \too H^2(X, \mathcal{K}_2)
\end{equation*}
is the map $CH^1(D) \to CH^2(X)$. For any divisor $\alpha$ of $D$, the $\mathcal{K}^D_1$-torsor $\mathcal{O}_{\alpha}$ determines a unique $j_*\mathcal{K}^D_1$-torsor $L_{\alpha}$ on $X$. The $\mathcal{K}_2$-gerbe $\mathcal{C}_{\alpha}$ (viewing $\alpha$ as a codimension two cycle on $X$) is the lifting gerbe of the $j_*\mathcal{K}^D_1$-torsor $L_{\alpha}$ (obstructions to lifting to a $\mathcal{F}_2$-torsor).
This generalizes to higher codimensions (and is pursued in forthcoming work):
\begin{itemize}
\item (codimension three) If $\beta$ is a codimension-two cycle of $D$, then the gerbe $\mathcal{C}_{\beta}$ on $D$ determines a unique gerbe $L_{\beta}$ on $X$ (with band $j_*\mathcal{K}^D_2$). The obstruction to lifting $L_{\beta}$ to an $\mathcal{F}_3$-gerbe is a $2$-gerbe $\mathcal{G}_{\beta}$ with band $\mathcal{K}_3$ on $X$. This gives an example of a higher gerbe invariant of a codimension-three cycle on $X$. Gerbes with band $K_3(F_X)/{\mathcal{K}_3}$ provide the codimension-three analog of Cartier divisors $H^0(X, K_1(F_X)/{\mathcal{K}_1})$.
\item (Parshin chains) Recall that a chain of subvarieties
\begin{equation*}
X_0 \hookrightarrow X_1 \hookrightarrow X_2 \hookrightarrow X_3 \hookrightarrow \cdots \hookrightarrow X_n = X
\end{equation*}
where each $X_i$ is a divisor of $X_{i+1}$ gives rise to a Parshin chain on $X$. We will call a Parshin chain smooth if all the subvarieties $X_i$ are smooth. Iterating the previous construction provides a higher gerbe on $X_n=X$ with band $\mathcal{K}_j$ attached to $X_{n-j}$ (a codimension $j$ cycle of $X_n$).
\end{itemize}
\section{Introduction}
Clinical pathways are tools used to guide evidence-based healthcare and to promote organised and efficient patient care \cite{kinsman2010clinical,de2006defining,rotter2010clinical}. They follow disease-specific guidelines to coordinate a set of services executed by different stakeholders, and aim to optimise outcomes in settings such as acute care and home care \cite{deneckere2012care}. A clinical pathway provides a set of treatments/actions that help a patient move from the current state to a target state in which a disease is cured or controlled.
Today, clinical pathways are typically defined based on best practices by multidisciplinary teams within one care organization, for a specific disease, and for a typical patient profile \cite{chevalley2002osteoporosis,takegami2003impact}. The pathways are then shared on paper or integrated into operational clinical IT systems, and prescribe the actions to be performed by caregivers and patients. Such pathways cannot span multiple organizations and are often considered too generic and inflexible to adapt to the characteristics or situation of an individual patient.
The care plan made by a traditional clinical pathway management tool is mostly built from the current state and a disease to be tackled. It lacks a global vision of the possible future state, which is the combined consequence of several pathways. In a situation like comorbidity, several clinical pathways coexist to cope with different diseases. A planned action in one pathway changes the state of a patient in the future, and such a change is not explicit to another pathway, which is produced based on the current state only. In some situations, such a change of patient state might trigger a risk or alarm for another pathway. While some of these conflicts can be detected by declaring certain actions as conflicting, e.g. contraindications, others are not obvious at the planning stage and only manifest themselves when the action is carried out, with visible consequences. One example of such a conflict: a patient is scheduled for an X-ray without contrast fluid on one day for a check-up of disease A, and is later scheduled for an extra X-ray with contrast medium one day earlier for a check-up of disease B. In that case, the X-ray for disease A can no longer be taken because of the contamination by the contrast medium, and will need to be postponed until the contrast has washed out of the body. If a clinical decision support tool were able to foresee the expected future state, such a conflict could be detected before it actually takes place.
This paper introduces weighted state transition logic and an implementation of that logic based on semantic web technology. It allows us to explicitly describe the expected state transitions of a planned clinical event in N3 logic \cite{berners2008n3logic}. Executed by the semantic reasoning engine EYE \cite{eye}, it is possible to generate clinical pathways towards a target state. In addition, it can predict potential future conflicts when multiple pathways coexist, without needing to actually execute the planned actions. Moreover, by assigning weights to each step, overall weights can be generated for each possible pathway from the current state to a target state,
thereby supporting informed path selection.
The presented logic was developed and implemented in the GPS4IC (GPS for Integrated Care) project \cite{GPS4IC} to generate adaptive personalized clinical pathways. By introducing services from home care providers, the system extends its scope from the hospital to the ambient assisted home environment \cite{sun2009promises} to provide integrated care \cite{campbell1998integrated}.
The remainder of this paper discusses the concept of weighted state transition logic, followed by an implementation of that logic using semantic web technology. We explain the architecture of our implementation as applied in the GPS4IC project, and provide a link to an example scenario on Github. Lessons learned regarding the application of weighted state transition logic are given at the end of the paper.
\section{Related work}
The goal of developing tools that support adaptive clinical pathways, cope with comorbidity, and provide personalized care has attracted several research initiatives. These inspired the creation of the weighted state transition logic presented in this paper.
Colaert et al. \cite{colaert2007,chen2004towards} introduced the term adaptive clinical pathway, which goes beyond the classical clinical pathway: an intelligent, federated workflow system using semantic technology, crossing episodic and local hospital boundaries to arrive at a lifelong, regional healthcare system.
Sun et al. \cite{sun2015semantic} built a virtual semantic layer on top of Electronic Health Records (EHRs) to integrate healthcare data and to support various kinds of clinical research. Zhang et al. \cite{zhang2016integrating} proposed a unified representation of healthcare domain knowledge and patient data based on HL7 RIM and ontologies, and developed a semantic healthcare knowledge base. Both works laid the foundation to represent and process healthcare data in
a common way that allows the application of semantic rules by a reasoning engine.
Alexandrou et al. \cite{alexandrou2010holistic} implemented the SEMPATH software platform, which supports the provision of highly personalized health care treatment by utilizing and managing clinical pathways. SEMPATH performs rule-based exception detection with the Semantic Web Rule Language (SWRL) \cite{swrl}. It performs dynamic clinical pathway adaptation during the execution of each pathway to personalize the treatment scheme. Wang et al. \cite{wang2013creating} semantically processed and aggregated EHRs with ontologies relevant for clinical pathways. They applied reasoning with SWRL rules to adjust standardised clinical pathways to meet different patients' practical needs. Both works can cope with comorbidity and provide personalized care with semantic rules. The limitation is that the event-driven adaptation is only triggered when the patient state changes; it is not possible to predict in advance, and thereby avoid, a potential conflict that may arise in the future.
Using machine learning (ML) technology to construct predictive models from EHRs has attracted much research interest in recent years. Rajkomar et al. \cite{rajkomar2018scalable} built scalable and accurate deep learning models over EHRs to predict a set of clinical events. The limitation of applying ML technology in adaptive clinical pathway management is that ML mostly focuses on predicting an upcoming clinical event, and lacks the ability to make use of the predicted event for adapting the pathway. In addition, issues such as the explainability and transparency of machine learning models still need to be addressed \cite{cutillo2020machine}.
Verborgh et al. proposed a method (RESTdesc) \cite{verborgh2017pragmatic, verborgh2014serendipitous} to automatically find a path to reach a specific goal by executing a set of steps sequentially. By stating the current states as facts, the steps as rules, and the target state as a query, a semantic reasoning engine is able to find a satisfactory path leading from the current state to the target state. There are still limitations that prevent the application of RESTdesc in the clinical domain, which will be discussed in detail in the next section. The weighted state transition logic presented in this paper is inspired by RESTdesc, and makes several significant improvements to fit the requirements of clinical pathway generation and adaptation.
\section{Modeling state transition in clinical domain}
\subsection{The requirements}
A clinical pathway is one of the main tools to manage the delivery of quality care. It usually follows standardized guidelines and consists of a sequence of treatments/actions that helps a patient move from the current state to a target state in which a disease is cured or controlled. Although the starting state is explicit, being the current state, the intermediate states are implicit, because each treatment/action changes the state of the patient who receives it. Even if a treatment is only meant to preserve the current state, it leads to a new state in which that treatment has been received. While a physician is designing a clinical pathway, the consequence of each action is implicitly applied in the mind of the physician. However, when dealing with comorbidity, where multiple pathways address different diseases, it is difficult and time-consuming for a physician to take into account the consequences of each action planned by other physicians. Although some clinical decision support tools are able to detect drug contraindications, it is extremely difficult to detect conflicting events that are scheduled to be carried out in the future. Enabling clinical decision support tools to make an analysis not only based on the current facts, but also taking into account the influences of the existing pathways (i.e. planned actions), becomes a requirement as well as a challenge.
In order to allow clinical decision support tools to take into account the influences of the existing pathways, it is a prerequisite to make such influences explicit. While the sequences in which actions are executed are usually explicitly stated, the consequences of these executions are often implicit. The consequence of an action represents the expected future state, and this expected future state is propagated when more than one action is planned sequentially. By taking into account the consequences of planned actions, the weighted state transition logic presented in this paper is able to predict the future state, which allows us to perform tasks such as path generation and path validation. During path generation and path validation, the model continuously updates its present state with the predicted future state. The process has the Markov property \cite{markov_property}: the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.
RESTdesc, as introduced in the related work, is also able to explicitly describe the consequence of an action and to generate a path towards a target state. Yet it has two major limitations. Firstly, the state transition logic of RESTdesc only allows asserting new states and lacks support for retracting old states. Applications in the clinical domain require a state transition logic that can retract statements that are no longer valid, e.g. the temperature or a lab test result of a patient. Secondly, the RESTdesc solution generates only one path leading to the target state; it does not display alternative paths. In a clinical decision support system, it is a preferred feature to provide alternative paths for path selection, ideally with the overall weights (e.g. regarding cost, treatment time, etc.) of each path explicitly stated.
The weighted state transition logic presented in this paper meets these challenges. We first introduce some existing logics to model state change, followed by our weighted state transition logic.
\subsection{Existing logic to model state change}
There are different ways to model state change. In this section, we analyze the limitations of some existing logics and propose our weighted state transition logic to model state change.
\subsubsection{Classical and intuitionistic logic}
Classical and intuitionistic logic \cite{van1986intuitionistic} generates new states as follows:\\
\fbox{%
\parbox{0.9\textwidth}{%
If A and A $\Rightarrow$ B, then B, but A still holds
}%
}
\newline
In the intuitionistic implication, when the premise of an inference is fulfilled, the conclusion is derived, and the premise still holds as stable truth. This is correct in mathematics, but can be problematic in real-life applications such as clinical care. For example:
\begin{itemize}
\item Let A be the fact that patient X has temperature 40 \textdegree{}C
\item Let B be the fact that patient X has temperature 37 \textdegree{}C
\item Let $A \Rightarrow B$ be the action of taking a Paracetamol pill, that is, the temperature drops from 40 \textdegree{}C (A) to 37 \textdegree{}C (B).
\end{itemize}
Then, given A and the intuitionistic implication $A \Rightarrow B$ (i.e. patient X has temperature 40 \textdegree{}C and takes a Paracetamol pill), the consequence is both fact A and fact B. The patient has a temperature of both 40 \textdegree{}C and 37 \textdegree{}C, while in reality only the latter is valid and the former should no longer hold.
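This monotonic behaviour can be made concrete with a small sketch (Python is used purely for illustration; the fact encoding below is our own and not part of the presented logic):

```python
# Monotonic (classical/intuitionistic) rule application: the conclusion
# is asserted, but the premise is never retracted.
def apply_monotonic(state, premise, conclusion):
    if premise <= state:           # all premise facts are present
        return state | conclusion  # union: nothing is ever removed
    return state

state = {("patientX", "temperature", 40)}
# A => B: taking Paracetamol, 40 degrees C drops to 37 degrees C
state = apply_monotonic(
    state,
    premise={("patientX", "temperature", 40)},     # A
    conclusion={("patientX", "temperature", 37)},  # B
)
# Both temperatures now coexist -- exactly the problem described above:
assert ("patientX", "temperature", 40) in state
assert ("patientX", "temperature", 37) in state
```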
\subsubsection{Linear logic}
Linear logic solves this problem by eliminating the previous state. In linear logic \cite{girard1987linear, girard1995linear}, the state change is expressed as below: the fact on the left side of the transition is consumed and does not hold any more:
\newline
\fbox{%
\parbox{0.9\textwidth}{%
If A and A $\multimap B$, then B, and A does not hold any more
}%
}
\newline
Take the aforementioned example:
\begin{itemize}
\item Let A be the fact that patient X has temperature 40 \textdegree{}C
\item Let B be the fact that patient X has temperature 37 \textdegree{}C
\item Let $A \multimap B$ be the action of taking a Paracetamol pill, that is, the temperature drops from 40 \textdegree{}C (A) to 37 \textdegree{}C (B).
\end{itemize}
Then, given A and $A \multimap B$ (i.e. patient X has temperature 40 \textdegree{}C and takes a Paracetamol pill), following linear logic the consequence is only fact B. The patient has temperature 37 \textdegree{}C, and the fact that the patient had temperature 40 \textdegree{}C is dropped.
However, if we modify the condition as follows:
\begin{itemize}
\item Let A be the fact that patient X has temperature 40 \textdegree{}C, and patient X has no contraindication with Paracetamol.
\item Let B be the fact that patient X has temperature 37 \textdegree{}C
\item Let $A \multimap B$ be the action of taking a Paracetamol pill, that is, the temperature drops from 40 \textdegree{}C (A) to 37 \textdegree{}C (B).
\end{itemize}
Then, given A and $A \multimap B$ (i.e. patient X has temperature 40 \textdegree{}C, has no contraindication with Paracetamol, and takes a Paracetamol pill), following linear logic the consequence is fact B: the patient has temperature 37 \textdegree{}C. The fact that the patient had temperature 40 \textdegree{}C is dropped. However, the fact that the patient has no contraindication with Paracetamol is dropped as well. This is against the original purpose, and losing this fact would prevent Paracetamol from being applied again in the future.
To cope with such situations, linear logic also allows expressing stable truth, like the intuitionistic implication. Since $(!A) \multimap B$ is equivalent to $A \Rightarrow B$, it is possible to introduce stable truth in linear logic with the expression below:\\
\fbox{%
\parbox{0.9\textwidth}{%
If A and $(!A) \multimap B$, then B, and A still holds
}%
}
\newline
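Continuing the illustration: a linear implication consumes its premise, while facts guarded by the `!' modality must hold but are not consumed. A minimal Python sketch (again with our own illustrative fact encoding):

```python
def apply_linear(state, consumed, stable, produced):
    """Linear implication: consumed (x) (!stable) -o produced.
    'consumed' facts are retracted; 'stable' facts must hold but persist."""
    if consumed <= state and stable <= state:
        return (state - consumed) | produced
    return state

state = {
    ("patientX", "temperature", 40),
    ("patientX", "noContraindication", "Paracetamol"),
}
state = apply_linear(
    state,
    consumed={("patientX", "temperature", 40)},                  # A, retracted
    stable={("patientX", "noContraindication", "Paracetamol")},  # !fact, kept
    produced={("patientX", "temperature", 37)},                  # B, asserted
)
assert ("patientX", "temperature", 40) not in state
assert ("patientX", "temperature", 37) in state
assert ("patientX", "noContraindication", "Paracetamol") in state
```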
\subsection{Weighted state transition logic}
The state transition logic presented in this paper is largely inspired by linear logic. It uses the linear implication to express state change, and also relies on the intuitionistic implication to indicate stable truth.
\begin{figure*}
\centering\includegraphics[width=0.75\linewidth]{state-transition.png}
\caption{Weighted state transition logic}
\label{fig:state-transition}
\end{figure*}
Figure \ref{fig:state-transition} shows the concept of weighted state transition logic. 'From' represents the current state, which is to be retracted, and 'To' represents the target state, which is to be asserted. The 'Condition' section represents the prerequisite to be fulfilled to carry out the state transition; it is not retracted during the transition. Below is a simplified representation of the proposed weighted state transition in terms of linear logic:\\
\fbox{%
\parbox{0.9\textwidth}{%
$\text{From} \otimes (!\text{Condition}) \multimap \text{To}$
}%
}
\newline
It is important to point out that in our weighted state transition logic, every fact occurs only once, while linear logic allows multiple occurrences of one fact. Besides this modification, the weighted state transition logic presented in this paper is extended with the following features, which allow the generation of personalized adaptive clinical workflows:
\subsubsection{Duration of state change}
Temporal constraint management of clinical events is a crucial task in clinical pathway management \cite{combi2014representing}. It is important to know the start time as well as the duration of each step listed in a clinical pathway. Weighted state transition logic introduces the concept of duration to indicate the time required to complete the state transition, denoted by $\Delta T$ in Figure \ref{fig:state-transition}. The use of a duration makes it possible to explicitly indicate the start and end time of each state change once the start time of the whole path is set.
\subsubsection{Transition state of state change}
The weighted state transition logic is created to define and manage the change of states: the 'From' state is retracted, and the 'To' state is asserted. With the introduction of duration, a state change is no longer considered an instantaneous event, but a transition with a duration. We retract the 'From' state at the start of the state change and add the 'To' state at the end. Nevertheless, it is hard to describe the state during the transition period. The concept of the transition state is introduced to give an explicit description of the state during the state transition period. At the start of the state change, the 'From' state is retracted and the transition state is asserted; at the end, the transition state is retracted and the 'To' state is asserted.
\subsubsection{Weights of state change}
Besides the time ($\Delta T$) required to complete a state transition, there are also other parameters to weight a state transition, as well as an overall clinical pathway consisting of a set of state transitions. We use duration, cost, comfort and belief as weights to evaluate a state transition in the healthcare domain:
\begin{itemize}
\item Duration - a positive number which indicates how long the execution of a step takes.
\item Cost - a positive number indicating how much the step costs in Euros.
\item Comfort - a number between 0 and 1 indicating how comfortable the step is for the patient, 1 is very comfortable, 0 is uncomfortable.
\item Belief - a number between 0 and 1 indicating the probability that the step actually leads to the expected result.
\end{itemize}
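One natural way to aggregate these weights over a whole path is to add up durations and costs while multiplying beliefs and comforts; the precise semantics used by the reasoner are defined in \cite{doerthe2021}, so the sketch below is only illustrative. The first step mirrors the weights of the neoadjuvant chemoradiotherapy example; the second step and its numbers are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    duration: float   # days
    cost: float       # euros
    belief: float     # probability in [0, 1]
    comfort: float    # ratio in [0, 1]

def path_weights(steps):
    """Aggregate step weights over a path: durations and costs add up,
    beliefs and comforts combine multiplicatively (illustrative choice)."""
    duration = sum(s.duration for s in steps)
    cost = sum(s.cost for s in steps)
    belief = comfort = 1.0
    for s in steps:
        belief *= s.belief
        comfort *= s.comfort
    return duration, cost, belief, comfort

path = [
    Step("Neoadjuvant_chemoradiotherapy", 50, 14147, 0.9, 0.4),
    Step("Surgery_colon_cancer", 14, 20000, 0.95, 0.2),  # hypothetical step
]
duration, cost, belief, comfort = path_weights(path)
# duration == 64, cost == 34147, belief ~= 0.855, comfort ~= 0.08
```

Such aggregated weights are what allow competing paths towards the same target state to be compared and ranked.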
More details on weighted state transitions are given in our other paper \cite{doerthe2021}. This paper focuses on the application of weighted state transition logic in the healthcare domain to predict the future state, as well as on applications built on top of the predicted future state.
\section{Implementation of weighted state transition logic - a semantic web based approach}
The weighted state transition logic presented in the previous section is implemented as a semantic representation with backward rules in the N3 language. We created an ontology named gps-schema \footnote{http://josd.github.io/eye/reasoning/gps/gps-schema} to enable a semantic representation of state change. The current state of a patient is represented with RDF graphs, and the target to reach is represented as an N3 query. Background knowledge is also introduced as RDF graphs or N3 rules. We use the semantic reasoning engine EYE to execute a set of tasks such as path generation and path validation. To meet the special needs of those tasks, we created several plugins \footnote{https://github.com/hongsun502/wstLogic/tree/master/engine} and rule sets for that reasoner.
\subsection{Semantic description of state change}
This section uses neoadjuvant chemoradiotherapy in the domain of colon cancer as an example to introduce the semantic description of state change. The introduction of background knowledge and the target description also use examples from colon cancer treatment. Detailed examples of treating colon cancer can be found in our Github project \cite{wstLogic}.
Listing 1 shows the semantic description of neoadjuvant chemoradiotherapy in the domain of colon cancer. In essence, this description states that by taking the action of neoadjuvant chemoradiotherapy, the tumor is expected to shrink to 70\% of its original size.
\begin{lstlisting}[frame=bt,numbers=left,basicstyle=\footnotesize,caption=Sample state change representation of colon cancer therapy]
PREFIX math: <http://www.w3.org/2000/10/swap/math#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX gps: <http://josd.github.io/eye/reasoning/gps/gps-schema#>
PREFIX action: <http://josd.github.io/eye/reasoning/gps/action#>
PREFIX sct: <http://snomed.info/id/>
PREFIX therapy: <http://josd.github.io/eye/reasoning/gps/therapy#>
PREFIX care: <http://josd.github.io/eye/reasoning/gps/care#>

{care:Colon_cancer #Map
gps:description (
{?patient care:tumor_size ?size.
?patient care:metastasis_risk ?risk.} #From
 {?patient gps:therapy therapy:Neoadjuvant_chemoradiotherapy.} #Transition
{?patient care:tumor_size ?new_size.
?patient care:metastasis_risk ?new_risk.} #To
action:Neoadjuvant_chemoradiotherapy #Action
"P50D"^^xsd:dayTimeDuration #Duration
14147 #Cost
0.9 #Belief
0.4 #Comfort
)} <=
{?patient care:diagnosis sct:363406005. #Colon cancer
?patient care:tnm_t ?t_value . #Tumor-Node-Metastasis
?t_value math:greaterThan 2 .
?patient care:tumor_size ?size.
(?size 0.7) math:product ?new_size.
?patient care:metastasis_risk ?risk .
(?risk 0.5) math:product ?new_risk. }. #Condition graph
\end{lstlisting}
Line 9 indicates the specialized domain of the action, in this case care:Colon\_cancer. We use 'Map' to indicate the domain information of each specific medical domain, mimicking a map that provides different paths. The concept 'Map' separates different domain knowledge, so that domain experts can focus on creating rules in their own area of expertise. The graph stated in the From section (lines 11-12) indicates the state before the action is applied; it will be retracted once the action is started. In this case, the current tumor size and metastasis risk of the patient will be retracted. Line 13 contains the transition state: it indicates that during the state transition, the patient is receiving neoadjuvant chemoradiotherapy. The graph stated in the transition section will be asserted when the action is started and retracted when the action of neoadjuvant chemoradiotherapy is finished. Lines 14-15 indicate the target state. When the state transition is finished, new values of tumor size (?new\_size) and metastasis risk (?new\_risk) will be asserted. The new size and new risk reflect the expectation of the treatment. In reality, they might differ; the confidence of reaching the target is indicated by the parameter belief.
Line 16 indicates that the action to be taken in the state transition is neoadjuvant chemoradiotherapy. Lines 17-20 indicate the weights of the state transition. Duration indicates that the action will take 50 days. Cost indicates that the action costs 14147 Euros. There is a believed 90\% chance that the target can be reached, and the comfort level of this action is 40\%. Both Belief and Comfort are initially subjective values based on the input of physicians. Nevertheless, they can be based on existing studies, and can be updated by evaluating the actual outcome of the state transition.
Lines 22-28 form the Condition section. Lines 22-24 indicate the premise for carrying out this action: the patient is diagnosed with colon cancer (line 22), and the tumor reaches more than two layers of the colon (lines 23-24). The new size of the tumor is calculated in lines 25-26; it will be 70\% of the original size. The new risk of metastasis is calculated in lines 27-28; it will be 50\% of the original risk.
\subsection{Semantic description of background knowledge}
Background knowledge, such as stating a drug conflict or asserting a statement of fever when the body temperature is above 38 \textdegree{}C, is expressed as N3 backward rules. Listing 2 shows a sample semantic description of a conflict between the drug Pramipexol and colon cancer surgery.
\begin{lstlisting}[frame=bt,numbers=left,basicstyle=\footnotesize,caption=Sample semantic description of a conflict]
PREFIX gps: <http://josd.github.io/eye/reasoning/gps/gps-schema#>
PREFIX med: <http://josd.github.io/eye/reasoning/gps/medication#>
PREFIX surgery: <http://josd.github.io/eye/reasoning/gps/surgery#>
{ ?patient gps:alert
{med:Pramipexol gps:conflict surgery:surgery_colon_cancer.}.
} <=
{ ?patient gps:medication med:Pramipexol.
?patient gps:surgery surgery:surgery_colon_cancer. }.
\end{lstlisting}
\subsection{Path generation}
Once the state transitions and background knowledge of a relevant domain are set up, a set of potential paths from the current state of a patient towards a target state can be generated automatically. Figure \ref{fig:path-generation} shows the process of path generation. The EYE reasoning engine takes the current state of the patient, together with the background knowledge and state transition descriptions of the relevant domain, as inputs of a reasoning process for path generation. The expectations and constraints of a target path are expressed as the query of the process. The path generation plugin\footnote{https://github.com/hongsun502/wstLogic/tree/master/engine/gps-plugin.n3} finds the set of possible paths that start from the current state and end in the target states, subject to the constraints stated in the query. The path search is carried out via forward chaining.
\begin{figure*}
\centering\includegraphics[width=0.99\linewidth]{path-generation.png}
\caption{Path generation}
\label{fig:path-generation}
\end{figure*}
\subsubsection{Semantic description of a target}
Listing 3 shows an example of the target description of a colon cancer therapy. Lines 7-10 describe the target: the tumor size of the patient should be 0, and the metastasis risk should be lower than 10\%. Line 11 defines the actions of a path (?PATH), as well as the overall duration, cost, belief and comfort to be calculated for each path. Line 12 defines the constraints of a path: the maximum duration (150 days), the maximum cost (50000 Euros), the minimum overall belief (0.1), and the minimum overall comfort (0.1). Line 14 passes the generated paths to the output; each path consists of its action set, duration, cost, belief and comfort, as well as the metastasis risk at the end of the path.
\begin{lstlisting}[frame=bt,numbers=left,basicstyle=\footnotesize,caption=Sample target description of a colon cancer therapy]
PREFIX math: <http://www.w3.org/2000/10/swap/math#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX gps: <http://josd.github.io/eye/reasoning/gps/gps-schema#>
PREFIX care: <http://josd.github.io/eye/reasoning/gps/care#>
{?SCOPE gps:findpath (
{ ?patient a care:Patient.
?patient care:tumor_size 0 .
?patient care:metastasis_risk ?risk .
?risk math:lessThan 0.1 . }
?PATH ?DURATION ?COST ?BELIEF ?COMFORT
("P150D"^^xsd:dayTimeDuration 50000.0 0.1 0.1)).
}=> {
?patient gps:path (?PATH ?DURATION ?COST ?BELIEF ?COMFORT (?risk)).}
\end{lstlisting}
\subsubsection{Sample paths}
Figure \ref{fig:paths} shows two sample paths generated by the path generation process for the target stated in Listing 3. The column 'PATH' indicates the actions to be taken in the clinical pathway, and 'METASTASIS\_RISK' indicates the predicted risk of metastasis. The remaining columns are the overall weights of a path: the overall duration and cost are calculated by summing up the duration and cost of each action in the path, while the overall belief and comfort are calculated by multiplying the belief and comfort of each action. Path 0 first takes neoadjuvant chemoradiotherapy, followed by surgery of colon cancer. Path 1 first takes surgery of colon cancer, followed by adjuvant chemoradiotherapy. It can be observed that the first path has a shorter duration, lower cost and better comfort.
\begin{figure*}
\centering\includegraphics[width=0.99\linewidth]{paths.png}
\caption{Sample paths}
\label{fig:paths}
\end{figure*}
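The aggregation of path weights described above (durations and costs add up, beliefs and comforts multiply) can be sketched in a few lines. The action tuples below are hypothetical stand-ins for the (duration, cost, belief, comfort) weights of each state transition; the neoadjuvant values follow the description of Listing 1, while the surgery values are assumed for illustration only.

```python
# Aggregate the weights of a path: durations and costs are summed,
# beliefs and comforts are multiplied. Each action is a tuple
# (duration_days, cost_eur, belief, comfort) -- hypothetical values.

def aggregate_path(actions):
    duration = sum(a[0] for a in actions)
    cost = sum(a[1] for a in actions)
    belief = 1.0
    comfort = 1.0
    for _, _, b, c in actions:
        belief *= b
        comfort *= c
    return duration, cost, belief, comfort

# Example: neoadjuvant chemoradiotherapy followed by surgery.
neoadjuvant = (50, 14147.0, 0.9, 0.4)   # weights from Listing 1
surgery = (14, 20000.0, 0.85, 0.3)      # assumed for illustration
print(aggregate_path([neoadjuvant, surgery]))
```

Summing durations and costs while compounding beliefs and comforts multiplicatively mirrors how the overall weights of path 0 and path 1 in Figure \ref{fig:paths} are obtained.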
\subsection{Path validation}
\begin{figure}
\centering\includegraphics[width=0.99\linewidth]{path-validation.png}
\caption{Path validation}
\label{fig:path-validation}
\end{figure}
Path validation checks whether executing a planned path would still lead to the defined goal after an update of the patient state. Figure \ref{fig:path-validation} shows a path generated at T0, based on the patient state known at T0. After the first planned action is carried out, the patient state is updated at T1. Path validation is executed at T1 to check whether the target state can still be reached.
Path validation takes the up-to-date patient state and performs the state transitions of the planned path sequentially until the end of the path. By applying the state transitions, it predicts the future state and checks whether the goal can be fulfilled. In the example given in Figure \ref{fig:path-validation}, the path validation process takes the updated state (at T1), applies the state transition of action a2 to generate a predicted state, and checks whether the predicted state fulfils the target state. In case a target state is predicted to be no longer reachable, the responsible physician is notified before action a2 actually takes place.
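The validation loop above can be sketched as follows. This is not the N3 implementation of the engine but a minimal illustration under assumed representations: states as dictionaries of patient attributes, and each state transition as a function mapping a state to the predicted next state. The transition and target definitions follow the descriptions of Listings 1 and 3; the surgery transition is an assumed simplification.

```python
# A minimal sketch of path validation: apply the remaining planned
# transitions sequentially to the current state, then check whether
# the predicted final state fulfils the target.

def validate_path(state, remaining_transitions, target_reached):
    predicted = dict(state)                 # do not mutate the input
    for transition in remaining_transitions:
        predicted = transition(predicted)
    return target_reached(predicted)

# Neoadjuvant chemoradiotherapy (as in Listing 1): tumor shrinks to
# 70% of its size, metastasis risk is halved.
def neoadjuvant(s):
    return {**s, "tumor_size": s["tumor_size"] * 0.7,
            "metastasis_risk": s["metastasis_risk"] * 0.5}

# Surgery (assumed for illustration): removes the tumor entirely.
def surgery(s):
    return {**s, "tumor_size": 0}

# Target from Listing 3: tumor size 0 and metastasis risk below 10%.
def target(s):
    return s["tumor_size"] == 0 and s["metastasis_risk"] < 0.1

state_t1 = {"tumor_size": 3.5, "metastasis_risk": 0.15}
print(validate_path(state_t1, [neoadjuvant, surgery], target))  # -> True
```

If the updated risk at T1 were higher (say 0.25), the predicted risk after neoadjuvant therapy would remain above 10\% and the function would return False, triggering the notification of the responsible physician.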
\subsection{Conflict detection}
\begin{figure}
\centering\includegraphics[width=0.99\linewidth]{conflict-detection.png}
\caption{Conflict detection}
\label{fig:conflict-detection}
\end{figure}
Conflict detection checks whether a new path introduces any conflict with existing paths. Figure \ref{fig:conflict-detection} shows a new path generated at T1. The original sequence of the existing path is (a1,a2,a3); with the extension of the new path, the sequence of the aggregated path becomes (a1,a2,b1,a3,b2). Similar to path validation, conflict detection performs the state transitions of the aggregated path sequentially to predict the future states and search for conflicts. It checks two types of conflicts. Firstly, it checks whether there are any explicit conflicts between different operations, e.g. the conflict stated in Listing 2.
Secondly, it checks implicit conflicts. Both the existing path and the new path are predicted to reach their targets when executed independently, but there is no guarantee that the goals can still be reached given interference from the other path. Implicit conflict detection checks whether both the existing target and the new target can still be reached by applying the aggregated path sequentially.
Both path validation and conflict detection aim to predict potential failures of a clinical pathway, based on future states built from state transitions. This allows early intervention in the clinical pathway before irreversible actions take place.
\section{GPS for integrated care with weighted state transition logic}
The methods introduced in the previous section were implemented in the GPS4IC (GPS4IntegratedCare) project \cite{GPS4IC}. The GPS4IC project aims to develop a platform which allows the automatic generation of dynamic and personalized clinical pathways. A smart workflow engine was developed in this project which is able to dynamically generate personalized clinical pathways based on weighted state transition logic. The engine is also able to aggregate different clinical pathways, detect conflicts, and validate ongoing pathways.
\subsection{Architecture}
\begin{figure*}
\centering\includegraphics[width=0.95\linewidth]{black-box.png}
\caption{Solution architecture of GPS4IC project}
\label{fig:black-box}
\end{figure*}
Figure \ref{fig:black-box} shows the solution architecture of the GPS4IC project. The GPS4IC platform consists of the GPS4IC agent and the GPS4IC data hub.
The GPS4IC data hub provides relevant data to the GPS4IC agent. The data hub retrieves live data from different sources to reflect the latest state (of a patient). We use the semantic data virtualization approach \cite{sun2015semantic} to build the data hub for the following benefits:
\begin{itemize}
\item The data is still kept in their original repositories, which is crucial in clinical applications where security and privacy are important.
\item The data is semantically processed, so that it can be used together with the state change descriptions by the semantic reasoning engine.
\end{itemize}
Besides providing data to reflect the state, the data hub also provides relevant knowledge including:
\begin{itemize}
\item State transition descriptions, which are the fundamental elements that constitute a clinical pathway.
\item Clinical knowledge, e.g. statements of contraindications or hierarchical relationships between clinical concepts, which is used to infer further knowledge or detect conflicting situations.
\end{itemize}
The GPS4IC agent communicates with the service requester to specify goals, and confirms the path to take. It retrieves data from the GPS4IC data hub to run the path generation, and it communicates the requested action to the care provider. The GPS4IC agent is able to accomplish the use cases listed in the following section. Once a care provider finishes a task, the relevant update is made in corresponding local systems, and is reflected to the GPS4IC agent through the data hub. For example, if a temperature measurement is made by a nurse, the value of the temperature will be entered into the EHR system, and when the GPS4IC agent requests the latest temperature, the data hub will provide this newly entered value.
\subsection{Scenario - Pathway management of Parkinson and Colon Cancer }
The scenario of managing a personalized clinical pathway for a patient with comorbidity of colon cancer and Parkinson's disease has been implemented in the GPS4IC project. It is published as an open source project on Github \cite{wstLogic}. The corresponding data and domain knowledge, represented in the N3 language, are enclosed in the Github project, along with the source code of the path generation, path validation and conflict detection engines. The scenarios of path generation and validation for each disease, and of conflict detection between the two paths, are also provided with visualizations.
\subsection{Lessons learned}
During the GPS4IC project, several workshops were held in different hospitals to demonstrate the use cases to physicians and nurses for feedback. In order to cope with comorbidity, we allow the users of our system to choose different care plans per disease, and to validate whether there are conflicts between different pathways when a new pathway is added or the patient state is updated. Initially, we planned to automatically compose joint care plans for all diseases of a patient by stating a combined target covering multiple diseases. However, the user studies of the project showed that physicians deciding over a care plan prefer to take responsibility only for the diseases falling under their expertise. It was therefore decided that an independent care plan will be generated for each disease. This decision made it necessary to check different care plans for compatibility after a new clinical pathway is planned. If two care plans are compatible, they can be executed in parallel. If they are not, the persons responsible for the care plans can contact each other to come to a common agreement. Compared to the current practice of frequently holding joint meetings to discuss the treatment of comorbidity, our implementation can ease the communication between doctors and make them aware of each other's plans.
During the demonstrations in hospitals, it turned out that nurses are more focused on operational details, e.g. making an appointment for a planned surgery, while physicians would like to avoid those operational details and focus on abstracted workflows. They found the inference based on background knowledge useful, and they also consider such a tool beneficial for bringing the whole picture of ongoing clinical pathways to general practitioners.
The demonstrations also showed different levels of acceptance of such a tool in different disease domains. Neurologists found that the consequence of a treatment can seldom be explicitly stated as a state change as described in this paper. Radiologists share the concern about the difficulty of defining state change descriptions. However, they tend to be more open to manually creating state change descriptions and letting the descriptions evolve following the feedback received on the system. We therefore consider that the application of this tool should be limited to domains where explicit state change descriptions are possible.
\section{Conclusion}
This paper introduced weighted state transition logic and its application in predicting future states for adaptive clinical pathway management. Traditional clinical pathways either lack the flexibility to adapt, or forgo a predefined pathway and rely on the outcome of each individual step. Along with the growing elderly population, the number of patients with comorbidity will also increase dramatically. Traditional clinical pathways were devised to provide generalized guidelines for a specific disease; they are not designed to provide personalized care that copes with comorbidity. Applying the state change concept of weighted state transition logic to the clinical domain allows us to model the consequences of each step in a clinical pathway, which in turn allows us to predict the future state when defining a care plan. The predicted future state is not based on a single clinical pathway, but takes into account the consequences of the planned steps on all existing pathways. Building the system on top of a semantic web language (N3) also allows easy integration of existing knowledge and enables the generation of personalized clinical pathways.
The proposed approach is implemented in the GPS4IC project. The platform built in the project is able to generate personalized clinical pathways, detect conflicts that are predicted to happen in the future, and validate ongoing clinical pathways. Future work will focus on improving the system following the lessons learned from the GPS4IC project, on investigating better ways of creating state change descriptions, and on fine-tuning the state change descriptions following the feedback received once the system is in operation.
\section{Acknowledgement}
This work was supported by funding from the GPS4IC project \cite{GPS4IC}. The authors would like to thank Dirk Coleart for initiating the GPS4IC project, and Els Lion, Elric Verbruggen, Joachim Van Herwegen, Ruben Verborgh and Estefania Serral for their contributions during the project.
\section*{References}
\bibliographystyle{elsarticle-num}
\section{Introduction}\label{sec:Intro}
Food expenditure forms an integral part of the total family (or household)
expenditure and is often categorized into food at home (FAH), food away from
home (FAFH) and food delivered at home (FDAH). This categorization is
relevant from a health perspective and other reasons. First, the division
permits us to analyze the nutrition quality of food amongst families. This is
important because there are health implications of consuming more FAFH, as it
is considered to be less nutritious than FAH \citep{Mancino-etal-2009} and
more energy dense \citep{Binkley-2008}. Some authors have also linked more
FAFH to overweight and obesity \citep{Cai-etal-2008}. Second, the division
allows us to answer interesting policy-oriented questions. For example, what
is the effect of a female headed family on FAH expenditure or does having a
home mortgage reduce FAH and/or FAFH expenditures? Third, food assistance
programs are often designed to minimize the health risks arising from
deficient nutrition particularly amongst unemployed and lower-income groups.
This categorization can help assess the efficacy of food assistance program
on FAH expenditure of the vulnerable groups, particularly during times of
economic crisis.
\begin{sloppypar}
As a result, the study of expenditure on FAH and FAFH have attracted
considerable attention in the literature. Few previous studies using
cross-section data include \citet{Lee-Brown-1986}, \citet{Nayga-1996},
\citet{Aguiar-Hurst-2005} and \citet{Liu-et-al-2013}. \citet{Lee-Brown-1986}
employ a switching regression model on the 1977-78 Nationwide Food
Consumption Survey data to examine expenditures on FAH and FAFH amongst the
US households. \citet{Nayga-1996} utilizes the 1992 US consumer expenditure
survey (CES) data to estimate the effect of wife's education and employment
on three subcategories of food expenditure -- for prepared food, food
prepared at home, and food away from home. The modeling scheme utilized is a
generalized version of Heckman's sample selection model \citep{Heckman-1979}.
\citet{Aguiar-Hurst-2005} employs an instrumental variable linear regression
to investigate, amongst other things, the effect of anticipated (i.e.,
retirement) and unanticipated (i.e., unemployment) shock to income on TF, FAH
and FAFH expenditures. The data is taken from the Continuing Survey of Food
Intake of Individuals (CSFII, collected by the US Department of Agriculture)
and corresponds to interviews conducted between $1989-1991$ and $1994-1996$,
but the households are different in the two interviews.
\citet{Liu-et-al-2015} use a trivariate sample selection procedure to study
patterns in FAFH expenditure amongst the Chinese households. In studies such
as \citet{Liu-et-al-2013}, the use of the sample selection framework is
motivated to account for the occurrence of zero expenditures, particularly on
FAFH or in its subdivision (e.g., full-service restaurants, fast-food
restaurants and others). To get a more complete picture, readers may look
into Table~1 of \citet{Davis-2014} for a brief summary of 17 articles (out of
20) on studies related to food expenditure using cross-section data.
\end{sloppypar}
The relationship of food expenditure (at home and away from home) to other
covariates have been the focus of analysis in several cross-section studies.
They include the relation of food expenditure (of various types) to consumer
preferences \citep{Stewart-etal-2005}, family composition
\citep{Liu-et-al-2013}, race \citep{Lanfranco-etal-2002}, homeownership and
mortgages \citep{Nayga-1996, Mian-etal-2013}, wife's labor force
participation \citep{Redman-1980, Kinsey-1983, Darian-Klein-1989, Yen-1993,
Nayga-1996}, children's welfare \citep{Handa-1996}, and obesity
\citep{Drichoutis-etal-2012}. Some authors have also examined the effects of
tax on food expenditure. For example, \citet{Zheng-etal-2019} examines the
impact of tax on expenditure in grocery food (i.e., FAH) and restaurant food
(i.e., FAFH) using a weekly data observed between April 2012$-$January 2013,
collected by United States Department of Agriculture (USDA). They find that
tax on grocery (restaurant food) reduces expenditure on grocery (restaurant
food) and increases expenditure on restaurant food (grocery).
The above paragraphs clearly indicate that there are ample cross-section
studies on food expenditure, but panel or longitudinal studies are rather
lacking with few exceptions. \citet{Cai-etal-2008} presents a state-level
analysis of different types of food expenditure on overweight rates, obesity
rates and combined rates (the sum of overweight and obesity rates) using data
from the Behavioral Risk Factor Surveillance System. The primary finding is
that FAH (FAFH) expenditure is negatively (positively) associated to obesity
and combined rates, and both FAH and FAFH expenditures do not significantly
affect overweight rates. The only panel study mentioned in \citet{Davis-2014}
is the article by \citet{Gelber-Mitchell-2012}, where they use PSID and time
diary data between $1975-2004$, and find that for a decrease in income tax
(i.e., incentive to join the labor force increases) single women are much
more likely to increase FAFH expenditure to substitute for housework compared
to single men. At the same time, the effect on FAH expenditure is
statistically insignificant. \citet{Kohara-Kamiya-2016} use a panel data on
Japanese households for the period 2004$-$2006 and find that mothers' labor
supply decision has a negative effect on food produced at home. Moreover, the
negative effect is common for all economic classes and more pronounced for
the low economic class. Besides, there are abundant studies that examine the
impact on food expenditure from participating in Supplemental Nutrition
Assistance Program (SNAP), formerly known as Food Stamp Program (FSP)
\footnote{The American Recovery and Reinvestment Act (ARRA) of 2009 renamed
the FSP to SNAP and increased benefits by an average of \$80 per household.
However, a common variable to capture SNAP participation pre- and post-ARRA
is not available in PSID.}. Few articles from this literature\footnote{Within
the SNAP literature, the central debate is whether households respond
similarly to an increase in cash income and in-kind transfer (food coupons).
While some researchers, such as \citet{Hoynes-Schanzenbach-2009}, have found
that the response is similar; others such as \citet{Beatty-Tuttle-2014} have
found that households increase in food expenditure is more when given an
in-kind transfer (food stamps) as compared to cash income.} includes
\citet{Hoynes-Schanzenbach-2009}, \citet{Wilde-etal-2009},
\citet{Beatty-Tuttle-2014} and \citet{Burney-2018}. However, these studies
focus on the conditional mean of the response variable and thus cannot
explain the relationship at the quantiles.
The current study takes a broader perspective and looks at expenditures on
total food (TF), food at home (FAH), and food away from home (FAFH), and
explains its variation based on various demographic, socioeconomic and
geographic factors including mortgage and recession. The data is taken from
Panel Study of Income Dynamics (PSID) and is composed of 2174 family units
observed over the period $2001-2015$. Since ours is a panel data, we exploit
a longitudinal or panel regression framework that can accommodate both common
(fixed-effects) and individual-specific (random-effects) parameters (hence
also known as mixed effects model in Statistics)\footnote{The terms
fixed-effects and random-effects have been used to mean different things in
the literature and there is no agreed-upon definition. In this paper,
fixed-effects refers to regression coefficients that do not differ across $i$
(or individuals) and random-effects mean regression coefficients that differ
across $i$ \citep[see][Ch.~10]{Greenberg-2012}. Andrew Gelman lists five
different definitions of fixed-effects and random-effects at
{\url{https://statmodeling.stat.columbia.edu/2005/01/25/why_i_dont_use/}}.
But again, there are other popular definitions such as in Classical
econometrics where fixed-effects means that the unobserved
individual-specific heterogeneity are correlated with the regressors, while
random-effects imply zero correlation (or more strongly statistical
independence) between individual-specific heterogeneity and the regressors
\citep[see][]{Cameron-Trivedi-2005,Wooldridge-2010,Hsiao-2014,Greene-2017}.}.
However, mean longitudinal regression is not capable of capturing the
heterogeneity in covariate effects across the conditional distribution of the
response variable. To overcome this limitation, we study the heterogeneous
effect of the covariates on food expenditure (TF, FAH and FAFH) using a
quantile model for longitudinal data that accommodates both common effects
and individual-specific effects, also known as quantile mixed models.
This paper contributes to the literature in at least three different ways.
First, quantile longitudinal regression provides a comprehensive
understanding of food expenditure pattern of family units to variation in
covariates by providing estimates at different quantiles. The method is
robust compared to standard longitudinal models where the focus is on the
mean, because amongst other things it is unaffected by the presence of
outliers in the data. Second, this study adds to the understanding of the
differences in food expenditure before, during and after the Great Recession.
This enables us to capture patterns linking recession and food expenditure by
categories which we explore in this study. To our knowledge, this is the
first attempt to examine the effects of the Great Recession on food
expenditure at home and away from home within a quantile panel data
framework. Third, longitudinal data allows us to model the behavior of family
units over time, which provides an advantage to control for unobserved
heterogeneity leading to more robust estimates. As shown in this paper, it is
important to control for this repeated behavior because models which treat
unobserved heterogeneity as part of the error term often result in inconsistent
estimates and may lead to incorrect policy inference.
The remaining paper is organized as follows. Section~\ref{sec:Methodology}
lays out the basic framework of the mean regression and quantile regression
models for longitudinal data that we employ in our analysis.
Section~\ref{sec:Data} presents a descriptive summary of the data and
discusses the trends in variables over the time period of our study.
Section~\ref{sec:Results} presents the results from the aforementioned
regression models and shows the consequences of not modeling
individual-specific heterogeneity. Finally, Section~\ref{sec:Conclusion}
presents concluding remarks.
\section{Methodology}\label{sec:Methodology}
This section presents the mean regression for longitudinal data model and
outlines the Bayesian approach for its estimation \citep{Chib-1999,
Greenberg-2012}. Thereafter, we present the Bayesian quantile regression for
longitudinal data model and its estimation algorithm, which is inspired from
\citet{Luo-2012} and \citet{Rahman-Vossmeyer-2019}.
\subsection{Mean Regression for Longitudinal Data}\label{subsec:MRLD}
The longitudinal data model can be expressed in terms of the following
equation,
\begin{align}
y_{it} = x'_{it}\beta + s'_{it}\alpha_{i} + \epsilon_{it}, \hspace*{1cm} \forall
\hspace*{0.2cm} i = 1,\ldots,n, \hspace*{0.5cm} t = 1,\ldots,T,
\label{eq:mrld1}
\end{align}
where $y_{it}$ denotes the value of the response $y$ for the $i$-th
individual at the $t$-th time period, $x'_{it}$ is a $1 \times k$ vector of
explanatory variables, $\beta$ is $k \times 1$ vector of common
(fixed-effects) parameters, $s'_{it}$ is a $1 \times l$ vector of covariates
(often a subset of $x_{it}$) with individual-specific effects, $\alpha_{i}$
is an $l \times 1$ vector of individual-specific (random-effects) parameters
included to capture the marginal dependence between observations on the same
individual, and $\epsilon_{it}$ is the error term, assumed to be independently
and identically distributed (\emph{iid}) as a normal distribution, i.e.,
$\epsilon_{it} \overset{iid}{\sim} N(0, h^{-1})$ for all values of
$i=1,\ldots,n$; $t=1,\ldots,T$, where $h^{-1}$ is the variance. The
distributional assumption on the error implies that $y_{it}$, conditional on
$\alpha_{i}$ are independently distributed as a normal distribution i.e.,
$y_{it}|\alpha_{i} \sim N(x'_{it}\beta + s'_{it} \alpha_{i},h^{-1})$ for all
$i=1,\ldots,n$; $t=1,\cdots,T$.
In this paper, the response variable $y$ will either be TF, FAH or FAFH
expenditures. The vector $x_{it}$ will consist of a common intercept and a
host of covariates related to demographic, socioeconomic and geographic
factors. Lastly, the vector of covariates with individual-specific effects
$s'_{it}$ will consist of an intercept and inverse-hyperbolic sine
transformation of income.
To proceed with the Bayesian estimation of the longitudinal model, we first
stack the model for each individual $i$. This is convenient for multiple
reasons including reducing the computational burden. We define $y_{i} =
(y_{i1},\ldots,y_{iT})'$, $X_{i} = (x'_{i1}, x'_{i2},\ldots,x'_{iT})'$,
$S_{i} = (s'_{i1}, s'_{i2}, \ldots,s'_{iT})'$, $\epsilon_{i} =
(\epsilon_{i1}, \ldots,\epsilon_{iT})'$. The resulting stacked model can be
written as,
\begin{equation}
\begin{split}
y_{i} & = X_{i} \beta + S_{i}\alpha_{i} + \epsilon_{i}, \hspace*{1cm}
\textrm{for} \; i=1,\cdots,n,
\\
& \alpha_{i}|\Sigma \sim N_{l}(0, \Sigma),
\\
\beta & \sim N_{k}(\beta_{0}, B_{0}), \qquad
\Sigma^{-1} \sim Wish(\nu_{0}, D_{0}), \qquad h \sim Ga(c_{0}/2, d_{0}/2),
\end{split}
\label{eq:mrld2}
\end{equation}
where we assume that $\alpha_{i}|\Sigma$ are mutually independent and
identically distributed as $N_{l}(0, \Sigma)$, and the last line represents
the prior distributions, with $N$, $Wish$ and $Ga$ denoting the normal,
Wishart and gamma distributions, respectively. The model given by
equation~\eqref{eq:mrld2} implies that the conditional density
$y_{i}|\alpha_{i} \sim N( X_{i} \beta + S_{i}\alpha_{i}, h^{-1}I_{T})$ for
$i=1,\ldots,n$. The complete data density is then given by,
\begin{equation*}
f(y,\alpha|\beta,h,\Sigma) = \prod_{i=1}^{n}
f(y_{i},\alpha_{i}|\beta,h,\Sigma) = \prod_{i=1}^{n}
f(y_{i}|\beta,\alpha_{i},h) \pi(\alpha_{i}|\Sigma),
\end{equation*}
which is equivalent to the complete data likelihood when viewed as a function
of the parameters.
\begin{table*}[b!]
\begin{algorithm}
\label{algo1}
\rule{\textwidth}{0.5pt}\small{
\begin{enumerate}[itemsep=-2ex]
\item Sample $(\beta, \alpha)$ in one block as follows:
\begin{enumerate}[leftmargin=3ex]
\item Let $\Psi_{i} = S_{i}\Sigma S_{i}'+ h^{-1}I_{T}$.
Sample $\beta$ marginally of $\alpha$ from
$\beta|y,h,\Sigma $ $\sim$ $N\big(\widetilde{\beta}, \widetilde{B}\big)$, where,
\begin{equation*}
\widetilde{B}^{-1} = \bigg(\sum_{i=1}^{n}
X'_{i} \Psi_{i}^{-1} X_{i}
+ B_{0}^{-1} \bigg), \quad
\mathrm{and} \quad
\widetilde{\beta} = \widetilde{B}\left(\sum\limits_{i=1}^{n}X_{i}'\Psi_{i}^{-1}
y_{i}+ B_{0}^{-1}\beta_{0} \right).
\end{equation*}
\item Sample $\alpha_{i}|y,\beta,h,\Sigma$ $\sim$ $N\big(\widetilde{a},
\widetilde{A}\big)$
for $i=1,\cdots,n$, where,
\begin{equation*}
\widetilde{A}^{-1} = \left(hS_{i}'S_{i} + \Sigma^{-1}\right),
\quad \mathrm{and} \quad
\widetilde{a} = \widetilde{A} \Big(h S_{i}'\big(y_{i}-X_{i}\beta\big) \Big).
\end{equation*}
\end{enumerate}
\item Sample $\Sigma^{-1}|\alpha \sim Wish\big(\nu_{1},D_{1}\big)$, where
$\nu_{1} = (\nu_{0} + n)$, and
$D_{1}^{-1} = \Big( \displaystyle D_{0}^{-1}+
\sum\limits_{i=1}^{n}\alpha_{i}\alpha_{i}' \Big)$.\\
\item Sample $h|y,\beta,\alpha$ $\sim$
$Ga\big(c_{1}/2,d_{1}/2 \big)$ where,
\begin{equation*}
c_{1} = \left(c_{0} + nT\right),
\quad \mathrm{and} \quad
d_{1} = d_{0} + \sum_{i=1}^{n}(y_{i}-X_{i}
\beta-S_{i}{\alpha}_{i})'(y_{i}-X_{i}\beta-S_{i}{\alpha}_{i}).
\end{equation*}
\end{enumerate}}
\rule{\textwidth}{0.5pt}
\end{algorithm}
\end{table*}
By Bayes' theorem, the complete data posterior density can be written as
product of the complete data likelihood times the prior distributions as
follows,
\allowdisplaybreaks{
\begin{equation}
\begin{split}
& \pi(\beta, \alpha, \Sigma^{-1},h|y) \propto \Big\{\prod_{i=1}^{n}
f(y_{i}|\beta,\alpha_{i},h) \pi(\alpha_{i}|\Sigma) \Big\}
\pi(\beta) \pi(\Sigma^{-1}) \pi(h)\\
& \quad \propto h^{nT/2}\exp\bigg[-\frac{h}{2}
\sum\limits_{i=1}^{n}(y_{i}-X_{i}\beta-S_{i}
{\alpha}_{i})'(y_{i}-X_{i}\beta-S_{i}{\alpha}_{i})\bigg]\\
& \qquad \times |\Sigma|^{-\frac{n}{2}}
\exp\bigg[-\frac{1}{2}\sum\limits_{i=1}^{n}\alpha_{i}'\Sigma^{-1}\alpha_{i}\bigg]
\exp\Big[-\frac{1}{2}(\beta - \beta_{0})'B_{0}^{-1}(\beta - \beta_{0})\Big] \\
& \qquad \times |\Sigma^{-1}|^\frac{(\nu_{0}-l-1)}{2}
\exp\bigg[-\frac{1}{2}tr(D_{0}^{-1}\Sigma^{-1})\bigg] \times
h^{\frac{c_{0}}{2}-1} \exp\bigg[-\frac{d_{0}h}{2}\bigg].
\end{split}
\label{eq:mrldPost}
\end{equation}
}
The conditional posterior distributions are derived from the complete data
posterior (Equation~\ref{eq:mrldPost}) and the model is estimated using Gibbs
sampling, a well-known Markov chain Monte Carlo method
\citep{Geman-Geman-1984,Casella-George-1992}. The MCMC algorithm for
estimating the model is presented in Algorithm~\ref{algo1}. The parameters
$(\beta,\alpha)$ are sampled jointly to avoid correlation between the
parameters, because the covariates in $s_{it}$ are often a subset of $x_{it}$
\citep[][Chap. 10]{Greenberg-2012}. Specifically, we first sample $\beta$
(marginally of $\alpha$, but conditional on other model parameters) from an
updated normal distribution and then sampled $\alpha$ (conditional on $\beta$
and other model parameters) from its updated normal distribution. The
precision matrix $\Sigma^{-1}$ is sampled from an updated Wishart
distribution and finally, the precision parameter $h$ is sampled from an
updated gamma distribution.
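The full conditionals above are straightforward to code. The sketch below is
an illustration, not the code used in the paper: it simulates a small panel
and performs one Gibbs sweep of the $\alpha_{i}$, $\Sigma^{-1}$ and $h$
updates of Algorithm~\ref{algo1}; the dimensions, simulated data and
hyperparameter values are assumptions made for the demonstration.

```python
# Illustrative sketch (not the authors' code) of one Gibbs sweep for the
# longitudinal mean regression.  Dimensions, simulated data, and the
# hyperparameter values are assumptions made for the demonstration.
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(0)
n, T, k, l = 50, 8, 3, 2                   # units, periods, dims of x_it, s_it

beta = np.array([1.0, -0.5, 0.3])          # treated as the current draw of beta
X = rng.normal(size=(n, T, k))
S = X[:, :, :l]                            # s_it a subset of x_it, as in the text
alpha_true = rng.normal(scale=0.5, size=(n, l))
y = (np.einsum('itk,k->it', X, beta)
     + np.einsum('itl,il->it', S, alpha_true)
     + rng.normal(scale=0.3, size=(n, T)))

h, Sigma_inv = 1.0, np.eye(l)              # current values of the precisions
nu0, D0 = 5, np.eye(l)                     # Wishart hyperparameters (assumed)
c0, d0 = 10.0, 9.0                         # gamma hyperparameters (assumed)

# Step 1(b): alpha_i | y, beta, h, Sigma ~ N(a_tilde, A_tilde)
alpha = np.empty((n, l))
for i in range(n):
    A = np.linalg.inv(h * S[i].T @ S[i] + Sigma_inv)
    a = A @ (h * S[i].T @ (y[i] - X[i] @ beta))
    alpha[i] = rng.multivariate_normal(a, A)

# Step 2: Sigma^{-1} | alpha ~ Wishart(nu0 + n, D1)
D1 = np.linalg.inv(np.linalg.inv(D0) + alpha.T @ alpha)
Sigma_inv_new = wishart.rvs(df=nu0 + n, scale=D1, random_state=rng)

# Step 3: h | y, beta, alpha ~ Gamma((c0 + nT)/2, d1/2); numpy's scale = 2/d1
resid = y - np.einsum('itk,k->it', X, beta) - np.einsum('itl,il->it', S, alpha)
d1 = d0 + np.sum(resid ** 2)
h_new = rng.gamma((c0 + n * T) / 2.0, 2.0 / d1)
```

In a full sampler these draws would be iterated together with the $\beta$
update, storing the retained draws after burn-in.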
\subsection{Quantile Regression for Longitudinal Data}\label{subsec:QRLD}
The quantile regression for longitudinal data can be expressed in terms of
the following equation,
\begin{align}
y_{it} = x'_{it}\beta + s'_{it}\alpha_{i} + \epsilon_{it}, \hspace*{1cm} \forall
\hspace*{0.2cm} i = 1,\ldots,n, \hspace*{0.5cm} t = 1,\ldots,T,
\label{eq:qrld1}
\end{align}
where all notation is the same as in Section~\ref{subsec:MRLD}, except that
the errors are assumed to be \emph{i.i.d.} draws from an asymmetric Laplace
(AL) distribution, i.e., $\epsilon_{it} \overset{iid}{\sim} AL(0,h^{-1},p)$,
where $h^{-1}$ is the scale parameter and $p$ denotes the quantile of
interest. This implies that the $y_{it}$, conditional on $\alpha_{i}$, are
independently distributed as AL, i.e., $y_{it}|\alpha_{i} \sim AL(x'_{it}
\beta + s'_{it}\alpha_{i}, h^{-1},p)$ for $i=1,\ldots,n$, $t=1,\ldots,T$.
Note that the error distribution is assumed to be AL to form a working
likelihood because the quantile loss function appears in the exponent of an
AL distribution \citep[see][]{Yu-Moyeed-2001,Rahman-2016}. The resulting
conditional quantile function for response $y_{it}$ is,
\begin{align*}
Q_{y_{it}}(p|x_{it},\alpha_{i}) = x'_{it} \beta + s'_{it}\alpha_{i},
\end{align*}
where $Q_{y_{it}} \equiv F_{y_{it}}^{-1}(\cdot)$ is the inverse of the
cumulative distribution function of the outcome variable conditional on the
individual specific parameters and the covariates.
We could work directly with the AL distribution; however, it is not
convenient for Gibbs sampling. So, as proposed in
\citet{Kozumi-Kobayashi-2011}, we make use of the normal-exponential mixture
representation of the AL distribution,
\begin{equation}
\epsilon_{it} = h^{-1} \theta w_{it} + h^{-1} \tau \sqrt{w_{it}} \, u_{it},
\hspace{0.75in}
\forall \; i=1,\ldots,n; \; t = 1, \ldots, T,
\label{eq:normal-exp}
\end{equation}
where $u_{it} \sim N(0,1)$ is mutually independent of $w_{it}
\sim\mathcal{E}(1)$, $\theta = \frac{1-2p}{p(1-p)}$, $\tau =
\sqrt{\frac{2}{p(1-p)}}$, and the symbol $\mathcal{E}$ denotes an exponential
distribution. The resulting quantile regression for longitudinal data model
can be expressed as,
\begin{align}
y_{it} = x'_{it}\beta + s'_{it}\alpha_{i} + \theta \nu_{it} +
\tau \sqrt{h^{-1} \nu_{it}} \, u_{it}, \hspace*{1cm} \forall
\hspace*{0.2cm} i = 1,\ldots,n, \hspace*{0.5cm} t = 1,\ldots,T,
\label{eq:qrld2}
\end{align}
where we have used the transformation $\nu_{it} = w_{it}/h$, since the
presence of the scale parameter in the conditional mean is not conducive to
Gibbs sampling \citep{Kozumi-Kobayashi-2011,Rahman-Karnawat-2019}. See also
\citet{Bresson-etal-2020} and \citet{Ojha-Rahman-2020}, where the scale is
fixed at 1 to identify the parameters of quantile regression with binary
outcomes.
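As a quick illustration of why this mixture is useful, draws constructed from
Equation~\eqref{eq:normal-exp} reproduce the defining property of the
$AL(0,h^{-1},p)$ error, namely that its $p$-th quantile is zero. The sketch
below (with illustrative values of $p$ and $h$) verifies this by simulation.

```python
# Sketch verifying the defining quantile property of the mixture above:
# draws h^{-1}(theta*w + tau*sqrt(w)*u), with w ~ E(1) and u ~ N(0,1)
# independent, have their p-th quantile at zero.  p and h are illustrative.
import numpy as np

rng = np.random.default_rng(1)
p, h = 0.2, 1.5
theta = (1 - 2 * p) / (p * (1 - p))        # theta as defined in the text
tau = np.sqrt(2.0 / (p * (1 - p)))         # tau as defined in the text

w = rng.exponential(1.0, size=200_000)     # w ~ E(1)
u = rng.normal(size=200_000)               # u ~ N(0, 1), independent of w
eps = (theta * w + tau * np.sqrt(w) * u) / h

print(abs(np.quantile(eps, p)) < 0.02)     # p-th quantile is (nearly) zero
```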
To proceed with the Bayesian estimation, we again stack the model across $i$
for reasons mentioned earlier. Define $y_{i} = (y_{i1}, \ldots ,y_{iT})'$,
$X_{i} = (x'_{i1}, x'_{i2},\ldots,x'_{iT})'$, $S_{i} = (s'_{i1},
s'_{i2},\ldots,s'_{iT})'$, $D_{\tau\sqrt{\frac{\nu_{i}}{h}}} =
diag(\tau\sqrt{\frac{\nu_{i1}}{h}}, \ldots ,\tau\sqrt{\frac{\nu_{iT}}{h}})$,
$u_{i} = (u_{i1}, \ldots ,u_{iT})'$, and lastly $\nu_{i} = (\nu_{i1}, \ldots
,\nu_{iT})'$. The resulting stacked quantile regression for longitudinal data
can be written as,
\begin{equation}
\begin{split}
y_{i} & = X_{i} \beta + S_{i}\alpha_{i} + \theta \nu_{i} +
D_{\tau \sqrt{\frac{\nu_{i}}{h}}} \; u_{i}, \hspace*{1cm} \textrm{for} \; i=1,\ldots,n,
\\
\alpha_{i}|\Sigma & \sim N_{l}(0, \Sigma), \hspace{0.6in}
\nu_{it} \sim \mathcal{E}(1/h),
\hspace{0.7in} u_{it} \sim N(0,1),
\\
\beta & \sim N_{k}(\beta_{0}, B_{0}), \qquad \Sigma^{-1} \sim Wish(\nu_{0},
D_{0}), \qquad h \sim Ga(c_{0}/2, d_{0}/2),
\end{split}
\label{eq:qrld3}
\end{equation}
where we assume $\alpha_{i}|\Sigma$ are mutually independent and identically
distributed as $N_{l}(0, \Sigma)$, and the last line represents the prior
distributions of the model parameters. The quantile model given by
Equation~\eqref{eq:qrld3} implies that, conditionally on $\alpha_{i}$ and
$\nu_{i}$, $y_{i} \sim N(X_{i}\beta + S_{i} \alpha_{i} + \theta \nu_{i},
D^{2}_{\tau \sqrt{ \frac{\nu_{i}}{h}}})$ for $i=1,\ldots,n$. The complete
data density is then given by $ f(y,\alpha|\beta,\nu,h,\Sigma) = \displaystyle
\prod_{i=1}^{n} f(y_{i},\alpha_{i}|\beta,\nu_{i},h,\Sigma) = \displaystyle
\prod_{i=1}^{n} f(y_{i}|\beta,\alpha_{i},\nu_{i},h) \pi(\alpha_{i}|\Sigma)$.
\begin{table*}[b!]
\begin{algorithm}
\label{algo2} \rule{\textwidth}{0.5pt} \small{
\begin{enumerate}[itemsep=-2ex]
\item Sample $(\beta, \alpha)$ in one block as follows:
\begin{enumerate}[leftmargin=3ex]
\item Let $\Omega_{i} =
\left(S_{i}\Sigma S'_{i} + D_{\tau \sqrt{\frac{\nu_{i}}{h}}}^{2}\right)$.
Sample $\beta$ marginally of $\alpha$ from
$\beta|y,\nu,\Sigma,h$ $\sim$ $N\big(\widetilde{\beta},
\widetilde{B}\big)$, where,
\begin{equation*}
\widetilde{B}^{-1} = \bigg(\sum_{i=1}^{n}
X'_{i} \Omega_{i}^{-1} X_{i}
+ B_{0}^{-1} \bigg), \quad
\mathrm{and} \quad
\widetilde{\beta} =
\widetilde{B}\left(\sum\limits_{i=1}^{n}X_{i}'\Omega_{i}^{-1}
(y_{i}-\theta \nu_{i})+ B_{0}^{-1}\beta_{0} \right).
\end{equation*}
\item Sample $\alpha_{i}|y,\beta,\nu,h,\Sigma$ $\sim$ $N\big(\widetilde{a},
\widetilde{A}\big)$
for $i=1,\ldots,n$, where,
\begin{equation*}
\widetilde{A}^{-1} = \left(S'_{i} \, D^{-2}_{\tau \sqrt{\frac{\nu_{i}}{h}}}
\, S_{i} + \Sigma^{-1}\right),
\quad \mathrm{and} \quad
\widetilde{a} = \widetilde{A} \left(S'_{i} D^{-2}_{\tau \sqrt{\frac{\nu_{i}}{h}}} \,
\big(y_{i} - X_{i} \beta - \theta \nu_{i} \big) \right).
\end{equation*}
\end{enumerate}
\item Sample $\nu_{it}|y_{it},\beta,\alpha_{i},h$ $\sim$ $GIG \,
\big(0.5, \widetilde{\lambda}_{it}, \widetilde{\eta}\big)$ for
$i=1,\ldots,n$ and $t=1,\ldots,T$, where,
\begin{equation*}
\widetilde{\lambda}_{it} = h\bigg( \frac{ y_{it} - x'_{it}\beta - s'_{it}
\alpha_{i}}{\tau} \bigg)^{2} \quad \mathrm{and} \quad \widetilde{\eta}
= h \bigg(\frac{\theta^{2}}{\tau^{2}} + 2 \bigg).
\end{equation*}
\item Sample $\Sigma^{-1}|\alpha \sim Wish\big(\nu_{1},D_{1}\big)$, where
$\nu_{1} = (\nu_{0} + n)$, and
$D_{1}^{-1} = \Big( \displaystyle D_{0}^{-1}+\sum\limits_{i=1}^{n}
\alpha_{i}\alpha_{i}' \Big)$.\\
\item Sample $h|y,\beta,\alpha, \nu \sim Ga\Big(c_{1}/2,d_{1}/2\Big)$ where,
\begin{equation*}
c_{1} = \left(c_{0}+3nT\right),
\quad \mathrm{and} \quad
d_{1} = d_{0}+2\sum\limits_{i=1}^{n}\sum
\limits_{t=1}^{T}\nu_{it}+\sum\limits_{i=1}^{n}\sum\limits_{t=1}^{T}
\frac{\big(y_{it} - x_{it}'\beta-s_{it}'\alpha_{i}-\theta
\nu_{it}\big)^{2}}{\tau^{2} \nu_{it}}.
\end{equation*}
\end{enumerate}}
\rule{\textwidth}{0.5pt}
\end{algorithm}
\end{table*}
Once again, we employ Bayes' theorem to obtain the complete data posterior as
the product of the complete data likelihood and the prior distributions as
follows:
\begin{equation}
\begin{split}
& \pi(\beta,\alpha,\nu,\Sigma^{-1},h|y) \propto
\bigg\{\prod\limits_{i=1}^{n} f(y_{i}|\beta,\alpha_{i},\nu_{i},h)
\pi(\alpha_{i}|\Sigma)\pi(\nu_{i})\bigg\} \pi(\beta) \pi(\Sigma^{-1}) \pi(h)\\
& \propto \prod_{i=1}^{n} \bigg\{ |D^{2}_{\tau\sqrt{\frac{\nu_{i}}{h}}}|^{-\frac{1}{2}}
\exp\bigg[-\frac{1}{2} (y_{i}-X_{i}\beta-S_{i}{\alpha}_{i}-\theta \nu_{i})'
D^{-2}_{\tau\sqrt{\frac{\nu_{i}}{h}}}
(y_{i}-X_{i}\beta-S_{i}{\alpha}_{i}-\theta \nu_{i})\bigg] \bigg\} \\
& \quad \times |\Sigma^{-1}|^{\frac{n}{2}}\exp\bigg[-\frac{1}{2}
\sum\limits_{i=1}^{n}\alpha_{i}'\Sigma^{-1}\alpha_{i}\bigg]
\times h^{nT} \exp\bigg[-h\sum\limits_{i=1}^{n}
\sum\limits_{t=1}^{T} \nu_{it} \bigg]
\times h^{\frac{c_{0}}{2}-1}
\exp\Big(-\frac{d_{0}h}{2}\Big)\\
& \quad \times \exp\bigg[-\frac{1}{2}(\beta - \beta_{0})'B_{0}^{-1}
(\beta - \beta_{0})\bigg] \times |\Sigma^{-1}|^\frac{(\nu_{0}-l-1)}{2}
\exp\Big[-\frac{1}{2}tr(D_{0}^{-1}\Sigma^{-1})\Big].
\label{eq:qrldPost}
\end{split}
\end{equation}
\begin{sloppypar}
The conditional posteriors can be derived from the joint posterior
distribution (Equation~\ref{eq:qrldPost}) and the model can be estimated
using Gibbs sampling as presented in Algorithm~\ref{algo2}. Specifically, we
sample $\beta$ and $\alpha$ in a single block to avoid the problem of poor
mixing due to correlation between the parameters, for the reasons mentioned
earlier \citep[see also][]{Rahman-Vossmeyer-2019, Bresson-etal-2020}. The
common effects parameters $\beta$, marginally of $\alpha$, are sampled from
an updated normal distribution and the individual-specific parameters
$\alpha_{i}$ are sampled from their respective updated normal distributions.
The mixture variable $\nu$ is sampled component-wise from an updated
generalized inverse Gaussian (GIG) distribution \citep{Devroye-2014}. The
precision matrix $\Sigma^{-1}$ is sampled from an updated Wishart
distribution and the parameter $h$ is sampled from an updated gamma
distribution.
\end{sloppypar}
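For concreteness, the GIG draw for $\nu_{it}$ in Step 2 of
Algorithm~\ref{algo2} can be obtained from standard software. The sketch
below assumes the $GIG(p,a,b)$ convention with density proportional to
$x^{p-1}\exp\{-(a/x+bx)/2\}$ and maps it onto SciPy's one-parameter
\texttt{geninvgauss} by a change of scale; the numerical values of $h$, $p$
and the residual are illustrative.

```python
# Sketch of the GIG draw for nu_it, assuming the GIG(p, a, b) density is
# proportional to x^(p-1) exp(-(a/x + b*x)/2).  SciPy's geninvgauss(p, b_s)
# has density prop. to x^(p-1) exp(-b_s (x + 1/x)/2), so a GIG(p, a, b)
# variate is sqrt(a/b) times a geninvgauss(p, sqrt(a*b)) variate.
import numpy as np
from scipy.stats import geninvgauss

def rgig(p, a, b, size, rng):
    """Draw from GIG(p, a, b) via SciPy's parameterization."""
    return geninvgauss.rvs(p, np.sqrt(a * b), scale=np.sqrt(a / b),
                           size=size, random_state=rng)

rng = np.random.default_rng(2)
quant, h, resid = 0.2, 1.0, 0.7            # illustrative values
theta = (1 - 2 * quant) / (quant * (1 - quant))
tau2 = 2.0 / (quant * (1 - quant))
lam = h * resid ** 2 / tau2                # lambda_it of the algorithm
eta = h * (theta ** 2 / tau2 + 2.0)        # eta of the algorithm
nu = rgig(0.5, lam, eta, size=10_000, rng=rng)

# For index 1/2 the GIG mean is sqrt(lam/eta) + 1/eta, a handy sanity check.
print(abs(nu.mean() - (np.sqrt(lam / eta) + 1 / eta)) < 0.05)
```

The index $1/2$ makes this draw particularly cheap, since the
$GIG(1/2,\cdot,\cdot)$ family is the reciprocal inverse Gaussian family.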
\section{Data}\label{sec:Data}
The current study utilizes data from the Panel Study of Income Dynamics
(PSID), which began in 1968 and is the longest running longitudinal household
survey in the world. We constructed a balanced panel of 2174 family units
with data for each alternate year, i.e., 2001, 2003, 2005, 2007, 2009, 2011,
2013 and 2015; alternate years are used because, beginning in 1997, the PSID
has collected data only every other year. Our constructed data set contains
information on different types of
food expenditures, considered as dependent variables, and a host of
socioeconomic, demographic and geographic variables which are used as
covariates or independent variables in our study. Table
\ref{Table:DataSummary} presents a descriptive summary of the variables
considered in our analysis.
The primary variable of interest is the food expenditure of a family unit,
which the PSID categorizes into three types: food at home (FAH), food away
from home (FAFH) and food delivered at home (FDAH). The sum of these three
expenditures yields the total food (TF) expenditure of the family unit. The
variable FAH represents the annualized food-at-home expenditure of a family
unit and in our sample lies between \$0 and \$36,400. Only a few observations
have a zero value for FAH. Similarly, the variable FAFH represents annualized
food expenditure away from home and in the sample lies in the range \$0 to
\$44,200. The share of zero values for FAFH is small, at 5.7\% of the total number of
observations. All observations with zero TF expenditure were removed from the
sample. Our study considers expenditure on TF, FAH and FAFH as the dependent
variable in different regressions. The expenditure on FDAH is dropped due to
the large number of zero values, which makes censoring important and a
sample selection framework more appropriate.
\begin{table}[t]
\footnotesize \def\sym#1{\ifmmode^{#1}\else\(^{#1}\)\fi}
\setlength{\extrarowheight}{1.2pt}
\begin{center}
\begin{tabular}{l*{8}{c}}
\toprule
\multicolumn{1}{c}{Variables$\backslash$Years} &\multicolumn{1}{c}{2001}
&\multicolumn{1}{c}{2003} &\multicolumn{1}{c}{2005}
&\multicolumn{1}{c}{2007} &\multicolumn{1}{c}{2009}
&\multicolumn{1}{c}{2011} &\multicolumn{1}{c}{2013} &\multicolumn{1}{c}{2015}\\
\hline \\
TF/1000
& 6.70 & 6.89 & 7.40 & 7.93 & 7.91 & 8.22 & 8.56 & 8.90 \\
& (3.62) & (3.73) & (4.18) & (4.64) & (4.52) & (4.75) & (5.18) & (5.50) \\
FAH/1000
& 4.60 & 4.70 & 5.00 & 5.42 & 5.57 & 5.79 & 6.02 & 6.17 \\
& (2.64) & (2.64) & (2.82) & (3.17) & (3.26) & (3.44) & (3.71) & (3.79) \\
FAFH/1000
& 1.99 & 2.04 & 2.28 & 2.40 & 2.24 & 2.33 & 2.44 & 2.63 \\
& (1.96) & (2.02) & (2.48) & (2.65) & (2.32) & (2.43) & (2.65) & (2.85) \\
Head Age
& 44.96 & 46.97 & 48.93 & 50.96 & 52.95 & 54.96 & 56.95 & 58.98 \\
& (12.36) & (12.37) & (12.37) & (12.37) & (12.36) & (12.37) & (12.36) & (12.33) \\
Head Edu
& 13.38 & 13.43 & 13.43 & 13.43 & 13.66 & 13.66 & 13.68 & 13.69 \\
& (2.68) & (2.64) & (2.64) & (2.64) & (2.62) & (2.62) & (2.63) & (2.63) \\
Spouse Edu
& 9.50 & 9.63 & 9.78 & 9.90 & 10.16 & 10.09 & 9.99 & 9.93 \\
& (6.47) & (6.44) & (6.40) & (6.36) & (6.51) & (6.53) & (6.61) & (6.66) \\
Family Size
& 2.94 & 2.91 & 2.87 & 2.82 & 2.76 & 2.66 & 2.58 & 2.48 \\
& (1.47) & (1.45) & (1.42) & (1.43) & (1.43) & (1.39) & (1.37) & (1.32) \\
Family Income/10000
& 7.72 & 7.97 & 8.78 & 9.22 & 9.59 & 9.34 & 9.83 & 9.94 \\
& (8.46) & (12.32) & (16.05) & (9.48) & (9.74) & (10.31) & (13.23) & (10.03) \\
Head Emp
& 0.85 & 0.85 & 0.84 & 0.81 & 0.73 & 0.69 & 0.66 & 0.62 \\
Head Female
& 0.18 & 0.18 & 0.18 & 0.18 & 0.18 & 0.18 & 0.18 & 0.18 \\
Married
& 0.68 & 0.69 & 0.70 & 0.71 & 0.71 & 0.71 & 0.70 & 0.70 \\
Single
& 0.15 & 0.13 & 0.11 & 0.11 & 0.10 & 0.10 & 0.10 & 0.09 \\
Homeowner
& 0.75 & 0.78 & 0.80 & 0.81 & 0.81 & 0.81 & 0.81 & 0.81 \\
Mortgage
& 0.59 & 0.60 & 0.61 & 0.61 & 0.61 & 0.57 & 0.55 & 0.53 \\
White
& 0.68 & 0.68 & 0.71 & 0.71 & 0.71 & 0.71 & 0.71 & 0.71 \\
Non-White
& 0.32 & 0.32 & 0.29 & 0.29 & 0.29 & 0.29 & 0.29 & 0.29 \\
Recession
& 1.00 & 0.00 & 0.00 & 1.00 & 1.00 & 0.00 & 0.00 & 0.00 \\
Northeast
& 0.16 & 0.16 & 0.16 & 0.16 & 0.16 & 0.16 & 0.16 & 0.15 \\
West
& 0.20 & 0.19 & 0.19 & 0.20 & 0.19 & 0.19 & 0.19 & 0.19 \\
South
& 0.38 & 0.38 & 0.38 & 0.38 & 0.39 & 0.39 & 0.39 & 0.40 \\
\bottomrule
\end{tabular}
\caption{Data Summary - The table presents the mean and standard deviation
(in parenthesis) of the continuous variables and proportion
of the categorical variables for each considered year. \label{Table:DataSummary}}
\end{center}
\end{table}
An interesting characteristic of the distribution of food expenditures is
that they are positively skewed. Figure~\ref{fig:boxplot} presents a box plot
of the different types of food expenditure utilized in the study. Each box
plot represents the distribution of food expenditure for a particular year.
In each box plot, the solid line within the box shows the median value, while
the bottom and top of the box represent the 25th and 75th percentiles,
respectively. The vertical lines are whiskers and they show either the
maximum/minimum values or 1.5 times the interquartile range of the data,
whichever is smaller. Points more than 1.5 times the interquartile range
below (above) the first (third) quartile are defined as outliers and plotted
individually. As seen from Figure~\ref{fig:boxplot}, each box plot (across
the different types of food expenditure) shows a large number of outliers
towards the higher values, making the distribution positively skewed.
Consequently, the mean food expenditure (which is pushed upward by the
presence of high values) and covariate effects at the conditional mean are
inadequate for a complete picture. In the literature, studies have used a
logarithmic transformation of food expenditure to alleviate this problem of
heteroscedasticity \citep{Liu-et-al-2013}. However, a logarithmic
transformation cannot eliminate the non-normality or the heteroscedasticity
problem. Besides, food and nutritional assistance programs (such as the
Supplemental Nutrition Assistance Program, or SNAP) are typically interested
in the lower tail (i.e., families/households with low food expenditure) to
ensure food security.
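The whisker and outlier rule described above amounts to flagging points
outside the fences $[Q_{1}-1.5\times IQR,\; Q_{3}+1.5\times IQR]$. A minimal
sketch on illustrative (non-PSID) values:

```python
# Sketch of the 1.5 x IQR whisker/outlier rule used in the box plots,
# applied to illustrative values in thousands of dollars (not the PSID data).
import numpy as np

x = np.array([1.2, 2.0, 3.5, 4.1, 5.0, 6.3, 7.7, 25.0])
q1, q3 = np.percentile(x, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = x[(x < lower) | (x > upper)]
print(outliers)                            # only the extreme 25.0 is flagged
```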
\begin{figure}[!t]
\centerline{
\mbox{\includegraphics[width=6.75in, height=6.5in]{fig-boxplot}} }
\vspace{-2pc}
\caption{Box plot for different types of food expenditure.}
\label{fig:boxplot}
\end{figure}
The covariates or independent variables utilized in this study (see
Table~\ref{Table:DataSummary}) include the age of the head (\emph{Head Age})
and the education of the head (\emph{Head Edu}) and of the spouse
(\emph{Spouse Edu}), the latter two measured as the number of years of
schooling and taking values between 0 and 17 (17 represents
postgraduate-level work and above). \emph{Family Size} represents the number
of members in a family unit. The variable \emph{Family Income} indicates the
actual value of income, including transfer income, in the previous year
(negative values representing losses). We use the inverse-hyperbolic sine
(IHS) transformation on the income variable because it adjusts for skewness
and retains zero and negative values
\citep{Friedline-2015,Rahman-Vossmeyer-2019}. The indicator for the
employment status of the head (\emph{Head Emp}) equals 1 if the head is
employed and 0 otherwise (omitted). The omitted category includes respondents
who are temporarily laid off, looking for work, retired,
permanently/temporarily disabled, keeping house, students and others.
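The IHS transformation referred to above is $\sinh^{-1}(x)=\log(x+\sqrt{x^{2}+1})$,
which, unlike the logarithm, is defined at zero and for negative incomes
(losses). A one-line check on illustrative values:

```python
# The IHS transformation is arcsinh: log(x + sqrt(x^2 + 1)).  Unlike log, it
# is defined at zero and for negative incomes; the values are illustrative.
import numpy as np

income = np.array([-2.5, 0.0, 7.72])       # in $10,000s; illustrative only
ihs = np.log(income + np.sqrt(income ** 2 + 1))
print(np.allclose(ihs, np.arcsinh(income)))   # identical to numpy's arcsinh
```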
The indicator for gender of the head (\emph{Head Female}) is coded as 1 if
`female' and 0 if `male', while the marital status (of the head) is
categorized into \emph{Married}, \emph{Single} and \emph{Separated}
(omitted). The omitted category (\emph{Separated}) consists of respondents
who are widowed, divorced/annulled or separated. Other variables included in
our study are indicators for homeownership (\emph{Homeowner}) and mortgages
(\emph{Mortgage}). The variable \emph{Homeowner} takes the value 1 if the
respondent is a homeowner and 0 otherwise. Similarly, the \emph{Mortgage}
variable equals 1 if the respondent has a mortgage on the property and 0
otherwise. Race is categorized into \emph{White} and \emph{Non-White}, the
latter composed of Blacks, American Indians, Aleuts, Eskimos, Asians, Pacific
Islanders and Latinos. Besides, we have indicators for recession years and
for the region in which the family resides. The recession dummy takes the
value 1 for the years 2001, 2007 and 2009 because each of these years
contained some recession period. Following the US Census Bureau, the region
variable is classified into \emph{Northeast}, \emph{West}, \emph{South} and
\emph{Midwest} (omitted). Including regional indicators helps us examine
differences, if any, in the expenditure behavior of families across regions.
We now look at the movement in average values of the variables for the
sampled period. The average FAH expenditure for a typical family unit is
around \$4590 in 2001, while the FAFH expenditure is around \$1990 for the
same year. The average expenditure on TF is approximately \$6700 in 2001 and
increases to \$8900 in 2015. Not surprisingly, the average expenditures on
FAFH and TF were lower in 2009 than their respective values in 2007. This
shows the adverse effect of the economic crisis on average food expenditure.
The adverse effect seems to persist longer for FAFH expenditure, as its
average value in 2011 is still lower than in 2007.
The average age of the head is around 45 years with a family size of
approximately 3 members in 2001. In the sample, the family units are
predominantly headed by males (about 82\%) with an average of 13.38 years of
schooling in 2001. The average years of schooling of the spouse is lower than
that of the head and stands at 9.5 years in 2001, but increases to
approximately 10 years in 2015. The sample clearly shows the effect of the
Great Recession (December 2007 - June 2009) on the variables \emph{Family
Income} and \emph{Head Emp}. The mean annual family income is approximately
\$77,000 in 2001 and increases to \$99,300 in 2015. However, there is a drop
in average family income in 2011 compared to 2009. The effect of the economic
crisis is much more pronounced on the employment status of the head. In the
sample, about 85\% of heads are employed in 2001; this share starts
decreasing in 2005 and stands at 73\% in 2009. The lowest employment share in
the sample, however, is 62\% in 2015.
A large proportion of the sampled respondents are married (0.68 in 2001), and
this proportion remains in the range $0.68$--$0.71$ throughout the period of
our study, while the proportion of singles decreases from 0.15 to 0.09
between 2001 and 2015. Approximately 75\% of the families own a house in
2001; this proportion reaches 81\% in 2007 and remains around that level in
subsequent years. The proportion of respondents with a mortgage on their
property decreases from 0.59 to 0.53 between 2001 and 2015. Nonetheless, the
mortgage proportion was higher than 0.59 between 2003 and 2009, which is
another hallmark of the Great Recession. On the racial aspect, the majority
of the sampled families (about 68\%) are White, while the remaining 32\%
consists of Blacks and other races, giving a diverse sample for the study.
Our sample is also geographically heterogeneous: most of the sampled
respondents live in the South (38\%), followed by the Midwest (26\%), West
(20\%) and Northeast (16\%). These percentages are stable over the sample
period, suggesting little geographic mobility across regions.
\section{Results}\label{sec:Results}
This section discusses the results for the three types of food expenditure
using the models presented in Section~\ref{sec:Methodology}. In particular,
the results from the longitudinal mean regression are presented in
Table~\ref{Table:MeanRegression} and the results from the longitudinal
quantile regression are exhibited in Table~\ref{Table:QuantReg}. The
posterior estimates are based on 12,000 MCMC iterations after a burn-in of
3,000 iterations. Trace plots of the MCMC draws, not presented for the sake
of brevity, mimic white noise and confirm that the chains have converged.
Moderately diffuse priors are utilized for the parameters in both models:
$\beta \sim N_{k}(0,100 \ast I)$, $\alpha_{i} \sim N_{l}(0,I)$,
$\Sigma^{-1} \sim Wish(5,10 \ast I_{l})$ and $h \sim Ga(10/2, 9/2)$. Note
that the definition of $h$ differs between the mean and quantile regression
models. Besides, there are two components of $\alpha_{i}$: an
individual-specific intercept and an individual-specific coefficient for the
inverse-hyperbolic sine transformation of income. With respect to the
individual-specific effects, the results in Table~\ref{Table:MeanRegression}
and Table~\ref{Table:QuantReg} show that the standard deviations of
$\alpha_{i}$ (i.e., $\sqrt{\sigma_{11}}$ and $\sqrt{\sigma_{22}}$) differ
between the mean and quantile regression models. As such, a modeling approach
with identical variances should be avoided. We now discuss the results for
the common parameters in all the econometric models.
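The posterior summaries reported in the tables are simple functionals of the
retained MCMC draws. The sketch below computes the posterior mean, standard
deviation and a 95\% credible interval from simulated draws standing in for
the 12,000 retained iterations; the location and scale used are illustrative,
not estimates from the paper.

```python
# Sketch of the posterior summaries reported in the tables: mean, standard
# deviation, and a 95% credible interval computed from MCMC draws.  The
# simulated draws below merely stand in for the retained Gibbs output.
import numpy as np

rng = np.random.default_rng(3)
draws = rng.normal(loc=0.66, scale=0.10, size=12_000)   # illustrative draws
post_mean, post_std = draws.mean(), draws.std()
lo, hi = np.quantile(draws, [0.025, 0.975])
contains_zero = bool(lo <= 0.0 <= hi)      # 'no effect, statistically speaking'
print(round(post_mean, 2), contains_zero)
```

A credible interval containing zero is the criterion used later in the text
to declare a covariate statistically ineffective.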
\begin{table}[!b]
\centering \footnotesize \setlength{\tabcolsep}{4pt}
\setlength{\extrarowheight}{2pt}
\setlength\arrayrulewidth{1pt}
\begin{tabular}{l rrr rrr rrr}
\toprule
&& \multicolumn{2}{c}{\textsc{tf}}
&& \multicolumn{2}{c}{\textsc{fah}}
&& \multicolumn{2}{c}{\textsc{fafh}}\\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}\\
\midrule
Intercept && $-12.53$& $0.84$ && $-10.28$ & $ 0.63$ && $-2.12$ & $0.43$ \\
log (HeadAge) && $ 3.65$ & $0.19$ && $ 2.94$ & $ 0.14$ && $ 0.66$ & $0.10$ \\
Head Edu && $ 0.08$ & $0.02$ && $ 0.04$ & $ 0.01$ && $ 0.05$ & $0.01$ \\
Spouse Edu && $ 0.06$ & $0.01$ && $ 0.05$ & $ 0.01$ && $ 0.01$ & $0.01$ \\
Family Size && $ 0.76$ & $0.03$ && $ 0.73$ & $ 0.02$ && $ 0.03$ & $0.02$ \\
IHS Income && $ 2.79$ & $0.14$ && $ 1.37$ & $ 0.10$ && $ 1.35$ & $0.07$ \\
Head Emp(HE) && $ 0.15$ & $0.08$ && $ 0.04$ & $ 0.06$ && $ 0.12$ & $0.05$ \\
Head Female (HF) && $-1.51$ & $0.20$ && $-0.79$ & $ 0.15$ && $-0.71$ & $0.11$ \\
HE$\times$HF && $ 0.62$ & $0.17$ && $ 0.37$ & $ 0.13$ && $ 0.26$ & $0.09$ \\
Married && $-0.09$ & $0.19$ && $-0.01$ & $ 0.14$ && $-0.02$ & $0.10$ \\
Single && $ 0.96$ & $0.17$ && $ 0.44$ & $ 0.12$ && $ 0.43$ & $0.09$ \\
Mort$\times$Home && $ 0.10$ & $0.07$ && $ 0.10$ & $ 0.06$ && $ 0.02$ & $0.04$ \\
Non-White && $-0.59$ & $0.12$ && $-0.35$ & $ 0.09$ && $-0.26$ & $0.06$ \\
Recession && $-0.26$ & $0.05$ && $-0.19$ & $ 0.04$ && $-0.08$ & $0.03$ \\
Northeast && $ 0.63$ & $0.17$ && $ 0.47$ & $ 0.12$ && $ 0.14$ & $0.09$ \\
West && $ 0.64$ & $0.15$ && $ 0.53$ & $ 0.11$ && $ 0.12$ & $0.08$ \\
South && $ 0.54$ & $0.13$ && $ 0.35$ & $ 0.10$ && $ 0.21$ & $0.07$ \\
$h$
&& $ 0.13$ & $0.01$ && $ 0.22$ & $ 0.01$ && $ 0.42$ & $0.01$ \\
$\sigma_{11}^{\frac{1}{2}}$
&& $ 2.31$ & $0.10$ && $ 1.57$ & $ 0.08$ && $ 0.99$ & $0.06$ \\
$\sigma_{22}^{\frac{1}{2}}$
&& $ 3.07$ & $0.13$ && $ 1.87$ & $ 0.10$ && $ 1.66$ & $0.07$ \\
$\rho_{1,2}$
&& $-0.42$ & $0.05$ && $-0.32$ & $ 0.07$ && $-0.28$ & $0.06$ \\
\bottomrule
\multicolumn{10}{l}{\emph{Note}: $h = \sigma^{-2}$ in mean regression.}
\end{tabular}
\caption{Posterior mean (\textsc{mean}) and
standard deviation (\textsc{std}) of the parameters from longitudinal mean regression.}
\label{Table:MeanRegression}
\end{table}
The results from the longitudinal mean regression, presented in
Table~\ref{Table:MeanRegression}, show that the (logarithm of) \emph{Head
Age} positively affects expenditures on TF, FAH and FAFH. Comparing the
coefficients across categories, we observe that the coefficient for logarithm
of \emph{Head Age} in the FAH equation is much higher (more than 4 times)
than its corresponding value in the FAFH equation. The result agrees with the
intuition that people prefer eating at home as they get older because FAH is
considered to be much healthier. Another argument put forward by
\citet{Liu-et-al-2013} is that social activity reduces with age leading to
lower rise in FAFH expenditure. Other studies that have found a positive
coefficient for \emph{Head Age} include \citet{Redman-1980},
\citet{Nayga-1996}, \citet{Stewart-Yen-2004}, and \citet{Zheng-etal-2019}.
Moving to the results from quantile regression shown in
Table~\ref{Table:QuantReg}, we observe that there is considerable variation
in the coefficients for the logarithm of \emph{Head Age}. For example, in the
FAH (FAFH) equation, the ratio of the 80th to the 20th quantile coefficient
on the logarithm of \emph{Head Age} is 1.86 (3.58). These differences show
considerable heterogeneity in the effect of \emph{Head Age} on the different
types of food expenditure.
The two education variables \emph{Head Edu} and \emph{Spouse Edu} positively
affect TF and FAH expenditures. \citet{Zheng-etal-2019} also find a positive
effect of the head's education on FAH expenditure. For FAFH expenditure, only
\emph{Head Edu} has a positive effect; \emph{Spouse Edu} has no effect
(statistically speaking) because its credible interval contains zero. This
implies that more highly educated spouses (mostly females in our sample)
better understand the importance of a healthy diet and consequently spend
more on FAH, but not on FAFH. Our findings are similar to those reported by
\citet{Redman-1980} and \citet{Kohara-Kamiya-2016}.
The results from quantile regression show
considerable heterogeneity in the covariate effects, but a comparison of the
coefficients for \emph{Head Edu} and \emph{Spouse Edu} seems more
interesting. For FAH expenditure, the coefficient for \emph{Spouse Edu} is
higher than that of \emph{Head Edu} across quantiles (at the median, the
coefficient of \emph{Head Edu} is 0.049 and that of \emph{Spouse Edu} is
0.051). This implies that spouses (mostly female in our sample) have a
larger positive impact on FAH expenditure across its distribution. In
contrast, for FAFH expenditure the coefficients for \emph{Head Edu} are
higher across quantiles than those for \emph{Spouse Edu}. This implies that
an increase in the head's education leads to a larger increase in the
consumption of outside food. A possible explanation for this result is the
higher involvement of males in social activities \citep{Liu-et-al-2013}.
The variable \emph{Family Size} positively affects TF and FAH expenditures,
but not the FAFH expenditure. The positive effect on FAH is understandable as
larger families tend to eat more at home and less outside, and is consistent
with the results reported by \citet{Zheng-etal-2019}. However, the
statistically zero effect on FAFH expenditure contrasts with the findings
reported in the literature: some articles find a positive effect of family
size \citep{Stewart-Yen-2004,Liu-et-al-2013,Zheng-etal-2019}, while others
report a negative effect on FAFH expenditure
\citep{Redman-1980,Byrne-etal-1996}. The quantile regression results once
again show heterogeneity in covariate effects. For TF expenditure, the
coefficient of \emph{Family Size} is larger at higher quantiles, with the
ratio of the 80th to the 20th quantile coefficient at 1.43. For FAH
expenditure, the coefficients for \emph{Family Size} are similar in size and
sign to those from the TF expenditure equation. Interestingly, \emph{Family
Size} has no
impact on FAFH expenditure for lower and middle quantiles.
\begin{landscape}
\begin{table}[!t]
\centering \footnotesize \setlength{\tabcolsep}{3pt}
\setlength{\extrarowheight}{2pt}
\setlength\arrayrulewidth{1pt}
\begin{tabular}{l rrr rrr rrr rrr rrr rrr rrr rrr rrr}
\toprule
&&\multicolumn{8}{c}{\textsc{tf}} && \multicolumn{8}{c}{\textsc{fah}}
&& \multicolumn{8}{c}{\textsc{fafh}}\\
\cmidrule{3-10} \cmidrule{12-19} \cmidrule{21-28}
&& \multicolumn{2}{c}{\textsc{20$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{50$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{80$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{20$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{50$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{80$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{20$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{50$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{80$^{th}$ }} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10} \cmidrule{12-13} \cmidrule{15-16}
\cmidrule{18-19} \cmidrule{21-22} \cmidrule{24-25} \cmidrule{27-28}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}\\
\midrule
Intercept && $-9.22$ & $0.61$ && $-11.72$ & $ 0.70$ && $-14.94$ & $0.86$
&& $-7.83$ & $0.44$ && $-10.24$ & $ 0.52$ && $-13.15$ & $0.66$
&& $-0.69$ & $0.22$ && $-1.12$ & $ 0.30$ && $ -1.48$ & $0.43$\\
log(Head Age) && $ 2.56$ & $0.14$ && $ 3.41$ & $ 0.16$ && $ 4.68$ & $0.20$
&& $ 2.13$ & $0.10$ && $ 2.88$ & $ 0.12$ && $ 3.97$ & $0.15$
&& $ 0.19$ & $0.05$ && $ 0.40$ & $ 0.07$ && $ 0.68$ & $0.10$\\
Head Edu && $ 0.06$ & $0.01$ && $ 0.08$ & $ 0.02$ && $ 0.06$ & $0.02$
&& $ 0.03$ & $0.01$ && $ 0.05$ & $ 0.01$ && $ 0.03$ & $0.02$
&& $ 0.02$ & $0.01$ && $ 0.03$ & $ 0.01$ && $ 0.04$ & $0.01$\\
Spouse Edu && $ 0.05$ & $0.01$ && $ 0.06$ & $ 0.01$ && $ 0.06$ & $0.01$
&& $ 0.04$ & $0.01$ && $ 0.05$ & $ 0.01$ && $ 0.06$ & $0.01$
&& $ 0.01$ & $0.01$ && $ 0.01$ & $ 0.01$ && $ 0.01$ & $0.01$\\
Family Size && $ 0.58$ & $0.02$ && $ 0.70$ & $ 0.02$ && $ 0.83$ & $0.03$
&& $ 0.55$ & $0.02$ && $ 0.67$ & $ 0.02$ && $ 0.80$ & $0.02$
&& $-0.01$ & $0.01$ && $ 0.01$ & $ 0.01$ && $ 0.04$ & $0.01$\\
IHS Income && $ 2.33$ & $0.10$ && $ 2.71$ & $ 0.12$ && $ 3.22$ & $0.16$
&& $ 1.06$ & $0.07$ && $ 1.24$ & $ 0.08$ && $ 1.50$ & $0.11$
&& $ 1.00$ & $0.04$ && $ 1.33$ & $ 0.06$ && $ 1.71$ & $0.08$\\
Head Emp(HE) && $ 0.21$ & $0.06$ && $ 0.16$ & $ 0.07$ && $ 0.08$ & $0.08$
&& $ 0.05$ & $0.04$ && $ 0.06$ & $ 0.05$ && $ 0.03$ & $0.06$
&& $ 0.06$ & $0.02$ && $ 0.08$ & $ 0.03$ && $ 0.08$ & $0.04$\\
Head Female (HF) && $-0.99$ & $0.15$ && $-1.36$ & $ 0.17$ && $-1.80$ & $0.23$
&& $-0.50$ & $0.11$ && $-0.74$ & $ 0.13$ && $-1.04$ & $0.18$
&& $-0.29$ & $0.05$ && $-0.53$ & $ 0.07$ && $-0.83$ & $0.12$\\
HE$\times$HF && $ 0.35$ & $0.11$ && $ 0.45$ & $ 0.13$ && $ 0.47$ & $0.15$
&& $ 0.24$ & $0.08$ && $ 0.26$ & $ 0.09$ && $ 0.22$ & $0.12$
&& $ 0.15$ & $0.04$ && $ 0.18$ & $ 0.06$ && $ 0.14$ & $0.08$\\
Married && $ 0.12$ & $0.14$ && $ 0.02$ & $ 0.16$ && $-0.11$ & $0.20$
&& $ 0.11$ & $0.10$ && $ 0.01$ & $ 0.12$ && $-0.06$ & $0.15$
&& $ 0.04$ & $0.05$ && $-0.02$ & $ 0.07$ && $ 0.01$ & $0.10$\\
Single && $ 0.73$ & $0.12$ && $ 0.96$ & $ 0.15$ && $ 1.17$ & $0.20$
&& $ 0.33$ & $0.09$ && $ 0.49$ & $ 0.11$ && $ 0.56$ & $0.14$
&& $ 0.20$ & $0.04$ && $ 0.29$ & $ 0.06$ && $ 0.47$ & $0.10$\\
Mort$\times$Home && $ 0.02$ & $0.05$ && $ 0.06$ & $ 0.06$ && $ 0.09$ & $0.07$
&& $ 0.06$ & $0.04$ && $ 0.07$ & $ 0.04$ && $ 0.06$ & $0.05$
&& $ 0.01$ & $0.02$ && $ 0.01$ & $ 0.03$ && $ 0.01$ & $0.03$\\
Non-White && $-0.61$ & $0.09$ && $-0.50$ & $ 0.11$ && $-0.45$ & $0.14$
&& $-0.38$ & $0.06$ && $-0.30$ & $ 0.08$ && $-0.20$ & $0.11$
&& $-0.19$ & $0.03$ && $-0.21$ & $ 0.05$ && $-0.24$ & $0.07$\\
Recession && $-0.12$ & $0.03$ && $-0.15$ & $ 0.03$ && $-0.15$ & $0.04$
&& $-0.06$ & $0.02$ && $-0.08$ & $ 0.03$ && $-0.07$ & $0.03$
&& $-0.04$ & $0.01$ && $-0.04$ & $ 0.02$ && $-0.03$ & $0.02$\\
Northeast && $ 0.61$ & $0.12$ && $ 0.69$ & $ 0.15$ && $ 0.66$ & $0.20$
&& $ 0.41$ & $0.09$ && $ 0.44$ & $ 0.11$ && $ 0.52$ & $0.15$
&& $ 0.04$ & $0.05$ && $ 0.12$ & $ 0.07$ && $ 0.25$ & $0.10$\\
West && $ 0.38$ & $0.12$ && $ 0.56$ & $ 0.14$ && $ 0.66$ & $0.18$
&& $ 0.29$ & $0.08$ && $ 0.43$ & $ 0.10$ && $ 0.62$ & $0.13$
&& $ 0.03$ & $0.04$ && $ 0.09$ & $ 0.06$ && $ 0.15$ & $0.09$ \\
South && $ 0.35$ & $0.10$ && $ 0.48$ & $ 0.12$ && $ 0.68$ & $0.15$
&& $ 0.20$ & $0.07$ && $ 0.26$ & $ 0.09$ && $ 0.41$ & $0.11$
&& $ 0.11$ & $0.04$ && $ 0.17$ & $ 0.05$ && $ 0.26$ & $0.08$\\
$h$
&& $ 1.76$ & $0.02$ && $ 1.10$ & $ 0.01$ && $ 1.49$ & $0.01$
&& $ 2.31$ & $0.02$ && $ 1.44$ & $ 0.01$ && $ 1.93$ & $0.02$
&& $ 3.64$ & $0.03$ && $ 2.18$ & $ 0.02$ && $ 2.84$ & $0.02$\\
$\sigma_{11}^{\frac{1}{2}}$
&& $ 1.72$ & $0.08$ && $ 2.09$ & $ 0.09$ && $ 3.47$ & $0.11$
&& $ 1.20$ & $0.05$ && $ 1.42$ & $ 0.06$ && $ 2.53$ & $0.08$
&& $ 0.38$ & $0.04$ && $ 0.72$ & $ 0.05$ && $ 1.63$ & $0.06$\\
$\sigma_{22}^{\frac{1}{2}}$
&& $ 2.45$ & $0.11$ && $ 2.81$ & $ 0.12$ && $ 4.52$ & $0.15$
&& $ 1.48$ & $0.08$ && $ 1.55$ & $ 0.10$ && $ 2.87$ & $0.12$
&& $ 0.95$ & $0.04$ && $ 1.40$ & $ 0.06$ && $ 2.44$ & $0.08$\\
$\rho_{1,2}$
&& $-0.45$ & $0.05$ && $-0.36$ & $ 0.05$ && $-0.56$ & $0.03$
&& $-0.31$ & $0.06$ && $-0.14$ & $ 0.08$ && $-0.52$ & $0.04$
&& $ 0.14$ & $0.14$ && $-0.06$ & $ 0.09$ && $-0.43$ & $0.04$\\
\bottomrule
\multicolumn{27}{l}{\emph{Note}: $h = \sigma^{-1}$ in quantile regression.}
\end{tabular}
\caption{Posterior mean (\textsc{mean}) and
standard deviation (\textsc{std}) of the parameters from longitudinal quantile
regression.}
\label{Table:QuantReg}
\end{table}
\end{landscape}
Total family income is perhaps the most decisive variable that steers food
expenditure. We use the IHS transformation of family income for reasons
mentioned earlier \citep[see also][]{Friedline-2015,Rahman-Vossmeyer-2019}.
As seen in Table~\ref{Table:MeanRegression}, the transformed income variable
positively affects expenditures on TF, FAH and FAFH. The intuition is clear:
an increase in income translates into an increase in food expenditures of all types.
This result finds support in several other works such as \citet{Redman-1980},
\citet{Lee-Brown-1986}, \citet{Nayga-1996}, \citet{ZiolGuest-etal-2006}, and
\citet{Liu-et-al-2013}. Results from quantile regression show considerable
heterogeneity in covariate effects, with higher quantiles showing a larger
impact of income on food expenditure. The ratios of 80th-to-20th quantile
coefficients for \emph{IHS Income} in the TF, FAH and FAFH equations are
1.38, 1.41 and 1.71, respectively.
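For readers unfamiliar with the IHS transformation, the sketch below illustrates it in Python (our own illustration; the function name is ours, not from the paper's code). The transform behaves like a log for large incomes but, unlike a log, is defined at zero and for negative values:

```python
import math

def ihs(x):
    """Inverse hyperbolic sine transform: log(x + sqrt(x^2 + 1)).

    Behaves like log(2x) for large x, but is defined at zero and for
    negative values, which makes it convenient for income data.
    """
    return math.log(x + math.sqrt(x * x + 1.0))

# ihs is mathematically identical to arcsinh
assert abs(ihs(50000.0) - math.asinh(50000.0)) < 1e-9
# defined at zero, unlike the log transform
assert ihs(0.0) == 0.0
# illustrative 80th-to-20th ratio for IHS Income in the TF equation,
# using the rounded coefficients from the table
assert round(3.22 / 2.33, 2) == 1.38
```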
The next three variables in Table~\ref{Table:MeanRegression} are indicator
variable for head's employment (\emph{Head Emp}), indicator variable for
female head (\emph{Head Female}), and interaction of the two indicators.
Head's employment has a positive effect on TF and FAFH expenditures, but
no statistically significant effect on FAH expenditure. These findings are similar to
those in \citet{Aguiar-Hurst-2005}, \citet{Huang-etal-2015}, and
\citet{Antelo-etal-2017}. The indicator for \emph{Head Female} is negative
for all categories, which suggests that female-headed families tend to spend
less on overall food and on each category of food. This can be attributed to
two factors: females are better at managing family expenditure, and an
empowered woman better understands the importance of nutritious food and thus
reduces FAFH expenditure. The interaction term (\emph{Head Emp} $\times$
\emph{Head Female}) is positive in all three regressions, which implies that
an employed female head spends more on overall and on each category of food
purchases. Results from quantile regression, presented in
Table~\ref{Table:QuantReg}, once again reveal heterogeneity in the covariate
effect of the three indicator variables. Head's employment positively
affects TF expenditure at lower and middle quantiles, but not at upper
quantiles. There is no effect on FAH expenditure and a positive effect on
FAFH expenditure across quantiles. \emph{Head Female} has a negative effect
on overall and each category of food expenditure, and the negative effect
increases at upper quantiles. The interaction term shows a positive effect on
TF expenditure across quantiles, but a positive effect on FAH and FAFH
expenditures only at lower and middle quantiles. Hence, at higher levels of
FAH and FAFH expenditures the employment of female head does not play an
important role.
The impact of marital status on food expenditure is examined through the two
indicator variables, \emph{Married} and \emph{Single}. The base or omitted
category is \emph{Separated}, explained in Section~\ref{sec:Data}. As seen
from Table~\ref{Table:MeanRegression}, the coefficient for \emph{Married} is
not statistically different from zero. So, being married has no statistically
significant effect on food expenditure relative to the omitted category,
\emph{Separated}. However, being single has a positive effect on overall food
expenditure and across categories. Our findings are consistent with results
reported by \citet{Stewart-Yen-2004} and \citet{Liu-et-al-2013}, but
contradictory to those by \citet{Byrne-etal-1996} and
\citet{Zheng-etal-2019}. The results from quantile regression reinforce the
findings from the mean regression. Across quantiles, being married has no
effect on food expenditure as compared to the omitted category. On the other
hand, being single has a positive effect on food expenditure, and the effect
increases with quantiles. The ratios of 80th-to-20th quantile coefficients
for TF, FAH and FAFH expenditures are 1.60, 1.70, and 2.35, respectively.
Homeowners having mortgages are resource constrained and have a lower cash
flow for a given income. This may negatively affect food expenditure,
particularly, FAFH expenditure. To explore this hypothesis, we include an
indicator variable for homeowners having mortgages into our regression
equations. Results from Table~\ref{Table:MeanRegression} and
Table~\ref{Table:QuantReg} show that having a mortgage has no statistically
significant effect on food expenditure. Our results are opposite to those of
\citet{Nayga-1996}, who finds that homeowners with mortgages do
spend more on food prepared at home and FAFH, but not on prepared foods
(e.g., frozen meals and prepared salads). Similarly, \citet{Liu-et-al-2013}
find that homeowners who are married (with and without children) have higher
probability of different types of FAFH expenditures (e.g., full-service
dining, fast-food and other facilities), but for single-person homeowners
this is true only for full-service dining.
Variations in food expenditure have often been linked to racial disparity. To
investigate this conjecture, we include an indicator variable for
\emph{Non-White}, keeping \emph{White} as the base or omitted category.
Results from the mean regression, presented in Table~\ref{Table:MeanRegression},
show that \emph{Non-White} families tend to have lower expenditure on overall
food, as well as lower FAH and FAFH expenditures. Our findings are consistent
with \citet{Nayga-1996}, who finds that white households are likely to spend
more on FAH and FAFH. Similarly, \citet{Lee-Brown-1986} report that
\emph{Non-White} families are less likely to eat away from home. Our results
are also in agreement with
findings from other previous works such as \citet{Redman-1980},
\citet{Stewart-Yen-2004}, and \citet{Liu-et-al-2013}. Another reason for the
negative coefficient, as noted by \citet{Byrne-etal-1996}, is the
non-availability of ethnic foods at local restaurants. The results from
quantile regression, shown in Table~\ref{Table:QuantReg}, largely agree with
the findings from the mean regression. \emph{Non-White} families have lower TF
expenditure compared to \emph{White} families. Moreover, the impact is larger
at lower quantiles and decreases as we move to upper quantiles. For FAH
expenditure, \emph{Non-White} families have lower expenditures only at the
lower and middle quantiles, but not at upper quantiles. In contrast, FAFH
expenditure for \emph{Non-White} families is lower across quantiles, and the
negative impact increases with quantiles.
Most expenditures, including consumption, typically decline during times of
recession. To explore the negative effect on food expenditure, if any, we
include an indicator for recession years (2001, 2007 and 2009) into our
regression. Results from mean regression show that the coefficient for
\emph{Recession} is negative for all types of food expenditure, which implies
that expected food expenditure (overall and category wise) declined during
the recession years. As reported in Table~\ref{Table:MeanRegression}, average
TF, FAH and FAFH expenditures declined by \$257, \$190 and \$75,
respectively. Our findings are supported by \citet{Griffith-etal-2013}, who
report a decline in expenditure on food items for British households
during and after the Great Recession. Similarly, \citet{Antelo-etal-2017} also
find that food expenditure for Spanish households declined during the crisis
period (i.e., 2008$-$2014). Moving to quantile regression, we find
that the quantile results reinforce the findings from the mean regression. Both
TF and FAH expenditures declined across quantiles during the recession years,
and the effect is more or less uniform across the considered quantiles. For
FAFH expenditure, we observe a decline only at lower and middle quantiles,
but not at the upper quantile. So, families whose expenditure on FAFH is high
are not affected by recession years.
Lastly, we include regional indicators to examine geographical differences in
food expenditure. These differences may be due to varying levels of
urbanization, climatic conditions and diverse food culture. We include
indicators for \emph{Northeast}, \emph{West} and \emph{South} into our
regression equations. \emph{Midwest} is used as the base or omitted category.
Our regional classification follows the definition of the US Census Bureau.
Results from the mean regressions (see Table~\ref{Table:MeanRegression}) reveal
that an average family living in the \emph{South} (relative to the
\emph{Midwest}) has higher TF, FAH and FAFH expenditures. However, for families
living in the \emph{Northeast} and \emph{West}, average expenditures are
higher on TF and FAH but not on FAFH. Other studies, such as \citet{Lee-Brown-1986},
\citet{Nayga-1996}, \citet{Byrne-etal-1996} and \citet{Liu-et-al-2013} also
find disparity in regional food expenditures. Moving to results from quantile
regression (see Table~\ref{Table:QuantReg}), we see that for TF and FAH
expenditures, all the quantile coefficients for \emph{Northeast}, \emph{West}
and \emph{South} are positive and increase with quantiles. This suggests that
families living in the three regions have higher quantile expenditures
(compared to those living in \emph{Midwest}) and the differential impact
increases at higher quantiles. For FAFH expenditure, only \emph{South} and
\emph{Northeast} (at the upper quantiles only) have a positive effect on FAFH
expenditure.
In summary, the results from quantile regression reveal considerable
heterogeneity in covariate effects which cannot be uncovered from mean
regression. The additional information from quantile regression may be useful
for policy making in government or business, such as targeting sections of
the population for welfare schemes or running campaigns to promote business.
\subsection{Heterogeneity Bias}
Unobserved heterogeneity is a large component of food expenditure, and we
control for it in our (mean and quantile) regression models with
individual-specific parameters in the intercept and income. To demonstrate
the heterogeneity bias and poorer model fit that can occur, we estimate the
quantile models without including the individual-specific effects (i.e.,
without including the conditional dependence between observations across time
for the same family unit). This model can be estimated as a special case of
Algorithm~\ref{algo2}, by eliminating Step~1(a) and Step~(3), and removing
$(\alpha_{i}, \Sigma^{-1})$ from the conditional posteriors of the remaining
parameters.
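Setting aside the Bayesian machinery of the paper's algorithm, the pooled model underlying this comparison is ordinary quantile regression. As a minimal, self-contained reminder of the objective involved (our own illustration, not the paper's estimator), the sketch below shows that the quantile check loss is minimized at the sample quantile:

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

# For a location-only model, the minimizer of the summed check loss
# is the sample tau-quantile; we verify this on a coarse grid.
rng = np.random.default_rng(0)
y = rng.normal(size=1001)
tau = 0.8
grid = np.linspace(y.min(), y.max(), 2001)
losses = [check_loss(y - g, tau).sum() for g in grid]
best = grid[int(np.argmin(losses))]
assert abs(best - np.quantile(y, tau)) < 0.02
```

In the longitudinal models of the paper, this check loss is embedded in an asymmetric-Laplace working likelihood and augmented with individual-specific effects; dropping those effects yields the pooled special case discussed above.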
The results from the longitudinal quantile models without the
individual-specific effects are presented in Table~\ref{Table:QuantRegNoRE}
and they differ widely from those in Table~\ref{Table:QuantReg}, which
presents the results from longitudinal quantile regression with
individual-specific effects. For example, the coefficients for \emph{Head
Age}, \emph{IHS Income}, \emph{Head Female}, and \emph{Recession} are
noticeably different in the two models across quantiles and types of food
expenditure. Again, there are variables whose coefficients either become
statistically equivalent to or different from zero when the
individual-specific parameters are excluded. In the former category, we have
the coefficient for \emph{Spouse Edu} at middle and upper quantile for total
food expenditure. In the latter category, we have the coefficient for
homeowners with mortgages (\emph{Mort $\times$ Home}) at lower quantiles for
expenditures on total food and food at home.
\begin{landscape}
\begin{table}
\centering \footnotesize \setlength{\tabcolsep}{3pt}
\setlength{\extrarowheight}{2pt}
\setlength\arrayrulewidth{1pt}
\begin{tabular}{l rrr rrr rrr rrr rrr rrr rrr rrr rrr}
\toprule
&&\multicolumn{8}{c}{\textsc{tf}} && \multicolumn{8}{c}{\textsc{fah}}
&& \multicolumn{8}{c}{\textsc{fafh}}\\
\cmidrule{3-10} \cmidrule{12-19} \cmidrule{21-28}
&& \multicolumn{2}{c}{\textsc{20$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{50$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{80$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{20$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{50$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{80$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{20$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{50$^{th}$ }}
&& \multicolumn{2}{c}{\textsc{80$^{th}$ }} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10} \cmidrule{12-13} \cmidrule{15-16}
\cmidrule{18-19} \cmidrule{21-22} \cmidrule{24-25} \cmidrule{27-28}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}
& & \textsc{mean} & \textsc{std}\\
\midrule
Intercept && $-0.80$ & $0.39$ && $ -1.26$ & $ 0.52$ && $ -1.70$ & $0.66$
&& $-1.77$ & $0.29$ && $ -2.75$ & $ 0.36$ && $ -3.46$ & $0.45$
&& $ 0.86$ & $0.16$ && $ 1.02$ & $ 0.25$ && $ 0.35$ & $0.39$\\
log(Head Age) && $ 0.46$ & $0.09$ && $ 0.70$ & $ 0.12$ && $ 1.12$ & $0.15$
&& $ 0.58$ & $0.06$ && $ 0.93$ & $ 0.08$ && $ 1.24$ & $0.10$
&& $-0.23$ & $0.04$ && $-0.16$ & $ 0.05$ && $ 0.18$ & $0.08$\\
Head Edu && $ 0.01$ & $0.01$ && $ 0.03$ & $ 0.01$ && $ 0.04$ & $0.01$
&& $-0.01$ & $0.01$ && $ 0.02$ & $ 0.01$ && $ 0.04$ & $0.01$
&& $ 0.01$ & $0.00$ && $ 0.02$ & $ 0.01$ && $ 0.02$ & $0.01$\\
Spouse Edu && $ 0.05$ & $0.01$ && $ 0.01$ & $ 0.01$ && $-0.01$ & $0.01$
&& $ 0.05$ & $0.01$ && $ 0.04$ & $ 0.01$ && $ 0.01$ & $0.01$
&& $-0.01$ & $0.01$ && $-0.01$ & $ 0.00$ && $-0.02$ & $0.01$\\
Family Size && $ 0.52$ & $0.02$ && $ 0.80$ & $ 0.02$ && $ 1.14$ & $0.03$
&& $ 0.54$ & $0.01$ && $ 0.78$ & $ 0.02$ && $ 1.12$ & $0.02$
&& $-0.02$ & $0.01$ && $-0.01$ & $ 0.01$ && $ 0.06$ & $0.01$\\
IHS Income && $ 2.47$ & $0.06$ && $ 3.71$ & $ 0.08$ && $ 5.22$ & $0.09$
&& $ 1.01$ & $0.04$ && $ 1.43$ & $ 0.06$ && $ 2.09$ & $0.07$
&& $ 0.99$ & $0.03$ && $ 1.81$ & $ 0.04$ && $ 3.11$ & $0.05$\\
Head Emp(HE) && $ 0.23$ & $0.06$ && $ 0.11$ & $ 0.07$ && $-0.12$ & $0.09$
&& $ 0.07$ & $0.04$ && $ 0.15$ & $ 0.05$ && $ 0.05$ & $0.07$
&& $ 0.09$ & $0.02$ && $ 0.06$ & $ 0.03$ && $-0.01$ & $0.05$\\
Head Female (HF) && $-0.55$ & $0.08$ && $-0.81$ & $ 0.11$ && $-1.41$ & $0.15$
&& $-0.18$ & $0.06$ && $-0.25$ & $ 0.08$ && $-0.43$ & $0.11$
&& $-0.06$ & $0.03$ && $-0.40$ & $ 0.05$ && $-0.86$ & $0.08$\\
HE$\times$HF && $ 0.42$ & $0.09$ && $ 0.51$ & $ 0.12$ && $ 0.68$ & $0.15$
&& $ 0.28$ & $0.07$ && $ 0.21$ & $ 0.09$ && $ 0.19$ & $0.11$
&& $ 0.01$ & $0.03$ && $ 0.18$ & $ 0.05$ && $ 0.27$ & $0.08$\\
Married && $-0.01$ & $0.12$ && $ 0.23$ & $ 0.14$ && $ 0.01$ & $0.16$
&& $-0.04$ & $0.08$ && $ 0.09$ & $ 0.10$ && $ 0.45$ & $0.11$
&& $ 0.03$ & $0.04$ && $-0.01$ & $ 0.07$ && $-0.18$ & $0.09$\\
Single && $ 0.18$ & $0.06$ && $ 0.36$ & $ 0.09$ && $ 0.57$ & $0.12$
&& $ 0.06$ & $0.05$ && $ 0.12$ & $ 0.06$ && $ 0.15$ & $0.08$
&& $ 0.10$ & $0.03$ && $ 0.16$ & $ 0.04$ && $ 0.38$ & $0.06$\\
Mort$\times$Home && $ 0.11$ & $0.04$ && $ 0.02$ & $ 0.05$ && $ 0.03$ & $0.07$
&& $ 0.14$ & $0.03$ && $ 0.03$ & $ 0.04$ && $ 0.01$ & $0.05$
&& $ 0.01$ & $0.02$ && $-0.06$ & $ 0.02$ && $ 0.02$ & $0.04$\\
Non-White && $-0.67$ & $0.04$ && $-0.80$ & $ 0.06$ && $-0.87$ & $0.08$
&& $-0.48$ & $0.03$ && $-0.51$ & $ 0.04$ && $-0.50$ & $0.05$
&& $-0.11$ & $0.02$ && $-0.22$ & $ 0.03$ && $-0.41$ & $0.04$\\
Recession && $-0.23$ & $0.04$ && $-0.36$ & $ 0.05$ && $-0.51$ & $0.06$
&& $-0.17$ & $0.03$ && $-0.20$ & $ 0.03$ && $-0.51$ & $0.04$
&& $-0.05$ & $0.02$ && $-0.07$ & $ 0.02$ && $-0.08$ & $0.03$\\
Northeast && $ 0.51$ & $0.06$ && $ 0.88$ & $ 0.08$ && $ 1.41$ & $0.10$
&& $ 0.52$ & $0.04$ && $ 0.61$ & $ 0.05$ && $ 0.97$ & $0.07$
&& $-0.01$ & $0.03$ && $ 0.12$ & $ 0.04$ && $ 0.32$ & $0.05$\\
West && $ 0.36$ & $0.05$ && $ 0.46$ & $ 0.07$ && $ 0.79$ & $0.09$
&& $ 0.40$ & $0.04$ && $ 0.49$ & $ 0.05$ && $ 0.70$ & $0.06$
&& $-0.01$ & $0.02$ && $ 0.03$ & $ 0.03$ && $ 0.06$ & $0.05$ \\
South && $ 0.38$ & $0.05$ && $ 0.49$ & $ 0.06$ && $ 0.80$ & $0.08$
&& $ 0.23$ & $0.03$ && $ 0.25$ & $ 0.04$ && $ 0.49$ & $0.05$
&& $ 0.10$ & $0.02$ && $ 0.23$ & $ 0.03$ && $ 0.30$ & $0.04$\\
$h$ && $ 1.26$ & $0.01$ && $ 0.76$ & $ 0.01$ && $ 0.90$ & $0.01$
&& $ 1.71$ & $0.01$ && $ 1.04$ & $ 0.01$ && $ 1.24$ & $0.01$
&& $ 2.70$ & $0.02$ && $ 1.48$ & $ 0.01$ && $ 1.62$ & $0.01$\\
\bottomrule
\multicolumn{27}{l}{\emph{Note}: $h = \sigma^{-1}$ in quantile regression.}
\end{tabular}
\caption{Posterior mean (\textsc{mean}) and
standard deviation (\textsc{std}) of the parameters from longitudinal quantile
regression without random-effects.}
\label{Table:QuantRegNoRE}
\end{table}
\end{landscape}
\begin{sloppypar}
To highlight the importance of modeling the individual-specific effects (or
random-effects), we compare model fitting at the considered quantiles using
the conditional log-likelihood, conditional Akaike Information Criterion
(cAIC) and conditional Bayesian Information Criterion (cBIC). The calculation
of cAIC and cBIC are proposed and explained in \citet{Greven-Kneib-2010} and
\citet{Delattre-etal-2014}, respectively. These model comparison measures are
presented in Table~\ref{Table:ModelComp}. The table clearly shows that across
quantiles, the value of the conditional log-likelihood is higher and those of
cAIC and cBIC are lower for each longitudinal quantile regression when
individual-specific effects are included. Consequently, there is strong
evidence for modeling unobserved heterogeneity, and ignoring it can lead to
poor model fit.
\end{sloppypar}
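As a rough guide to how such criteria trade off fit against complexity, the sketch below computes generic AIC- and BIC-style penalties. Note that the conditional versions used in the paper replace the raw parameter count with effective degrees of freedom (Greven \& Kneib 2010; Delattre et al.\ 2014), so the function signatures and inputs here are illustrative assumptions, not the paper's exact formulas:

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: -2 log-likelihood + 2 * (effective) dof."""
    return -2.0 * log_lik + 2.0 * k

def bic(log_lik, k, n):
    """Bayesian information criterion: -2 log-likelihood + log(n) * dof."""
    return -2.0 * log_lik + math.log(n) * k

# Lower values indicate a better fit after penalizing complexity.
assert aic(-100.0, 5) == 210.0
# For n > e^2, BIC penalizes each degree of freedom harder than AIC.
assert bic(-100.0, 5, 1000) > aic(-100.0, 5)
```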
\begin{table}[!t]
\centering \footnotesize \setlength{\tabcolsep}{5pt} \setlength{\extrarowheight}{1.5pt}
\setlength\arrayrulewidth{1pt}
\begin{tabular}{c rrr rrr rrr r }
\toprule
& & \multicolumn{2}{c}{\textsc{20th quantile}} & & \multicolumn{2}{c}{\textsc{50th quantile}}
& & \multicolumn{2}{c}{\textsc{80th quantile}} \\
\cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10}
& & with \textsc{RE} & w/o \textsc{RE}
& & with \textsc{RE} & w/o \textsc{RE}
& & with \textsc{RE} & w/o \textsc{RE} & \\
\midrule
& & \multicolumn{8}{c}{TF Expenditure} & \\
\midrule
$\log$-L & & $ -38282$ & $-45188$ & & $ -38701$ & $-46268$
& & $ -40996$ & $-51031$ & \\
cAIC & & $ 76601$ & $ 90411$ & & $ 77439$ & $ 92573$
& & $ 82029$ & $102099$ & \\
cBIC & & $ 76802$ & $ 90551$ & & $ 77640$ & $ 92713$
& & $ 82230$ & $102239$ & \\
\midrule
& & \multicolumn{8}{c}{FAH Expenditure} & \\
\midrule
$\log$-L & & $ -33577$ & $-39932$ & & $ -34054$ & $-40773$
& & $ -36617$ & $-45538$ & \\
cAIC & & $ 67189$ & $ 79901$ & & $ 68143$ & $ 81582$
& & $ 73271$ & $ 91111$ & \\
cBIC & & $ 67390$ & $ 80041$ & & $ 68344$ & $ 81722$
& & $ 73472$ & $ 91251$ & \\
\midrule
& & \multicolumn{8}{c}{FAFH Expenditure} & \\
\midrule
$\log$-L & & $ -25753$ & $-31996$ & & $ -26797$ & $-34630$
& & $ -29883$ & $-40896$ & \\
cAIC & & $ 51542$ & $ 64028$ & & $ 53630$ & $ 69295$
& & $ 59802$ & $ 81828$ & \\
cBIC & & $ 51743$ & $ 64168$ & & $ 53832$ & $ 69435$
& & $ 60003$ & $ 81968$ & \\
\bottomrule
\end{tabular}
\caption{Model comparison between the longitudinal quantile
regression with random-effects (with RE) and without random-effects (w/o RE).
The log-likelihood ($\log$-L), conditional Akaike Information Criterion (cAIC) and
conditional Bayesian Information Criterion (cBIC) are evaluated at the posterior mean
of the parameters.}\label{Table:ModelComp}
\end{table}
\section{Conclusion}\label{sec:Conclusion}
This article studies the relationship between different types of food
expenditures (total food, food at home, and food away from home) and a host
of economic, geographic, and demographic factors using data from the Panel
Study of Income Dynamics for the period 2001$-$2015. Food expenditures are
typically right skewed and thus covariate effects are likely to be
heterogeneous across the conditional distribution of the response variable.
Besides, unobserved heterogeneity is a large component of food expenditure.
To explore these considerations, we study food expenditure within a
longitudinal quantile framework that models dependence between the
observations across time for the same family units. Results point to several
important aspects including the presence of heterogeneity in the covariate
effects. For example, we find that there are notable differences in the food
expenditure behavior (of all types) between male- and female-headed
households, expenditures on food away from home by employed female heads are
heterogeneous across quantiles, and food expenditures (of all types) decrease
during times of economic crisis and vary with quantiles. Besides, the paper
provides strong empirical evidence that not considering unobserved
heterogeneity can lead to heterogeneity bias and poor model fitting.
While our paper emphasizes the modeling of heterogeneity in food expenditure,
the findings reported also provide greater insight into total food expenditure
and expenditures on food at home and food away from home, which may be of
special interest to policy makers in the health and business sectors. For
example, we find that spouse education has a positive effect across the
distribution of food at home expenditure. Therefore, policy makers may
provide higher incentives to female education in order to achieve better
health outcomes in the country. Similarly, we find that being single or
being an employed female head has a positive effect on the distribution of food
away from home expenditure. Consequently, restaurant and fast food chains may run
campaigns targeting these specific groups to increase their sales. The above
discussion and other findings reported in the paper may be utilized to better
formulate policies and business decisions.
\clearpage \pagebreak
\pdfbookmark[1]{References}{unnumbered}
\section{Introduction}
\hyphenation{kruijssen}
We recently identified a galaxy
with little or no dark matter ({van Dokkum} {et~al.} 2018, hereafter \natpap).
\blob\ has a stellar mass of $M_{\rm stars}\approx 2
\times 10^8$\,\msun\ and a 90\,\% confidence upper limit on its dark
matter halo mass of $M_{\rm halo}<1.5\times 10^8$\,\msun, placing it
a factor of $\gtrsim 400$ off of the canonical stellar mass -- halo
mass relation (Moster et al.\ 2013; {Behroozi} {et~al.} 2013).
\blob\ is a featureless, spheroidal
``ultra diffuse'' galaxy (UDG; {van Dokkum} {et~al.} 2015),
with an effective radius of $R_e = 2.2$\,kpc and a central
surface brightness $\mu(V_{606},0) = 24.4$\,mag\,arcsec$^{-2}$.
It has a radial velocity of $1803$\,\kms. Its
SBF-determined distance is $19.0\pm 1.7$\,Mpc (\natpap), consistent with
that of the NGC\,1052 group at $D\approx 20$\,Mpc ({Blakeslee} {et~al.} 2010).
The kinematics of \blob\ were measured from the radial velocities of 10
compact objects
that are associated with the galaxy.
These objects drew our attention to the galaxy in
the first place: it is a large, low surface brightness blob in our
Dragonfly Telephoto Array imaging ({Abraham} \& {van Dokkum} 2014; {Merritt} {et~al.} 2016) but a
collection of point-like sources in the Sloan Digital Sky Survey.
Finding globular clusters (GCs) in a UDG is
in itself not unusual
({Beasley} {et~al.} 2016; {Peng} \& {Lim} 2016; {van Dokkum} {et~al.} 2016, 2017; {Amorisco}, {Monachesi}, \& {White} 2018).
In fact, Coma UDGs have on average $\sim 7$\
times more GCs than other galaxies of the same
luminosity ({van Dokkum} {et~al.} 2017), with large galaxy-to-galaxy scatter
({Amorisco} {et~al.} 2018).
However, what {\em is} unusual, or at least unexpected, is the
remarkable luminosity of the clusters.
The luminosity function of the GC
populations of Coma UDGs is consistent with that seen in other galaxies,
peaking at an absolute magnitude
$M_V \sim -7.5$ ({Peng} \& {Lim} 2016; {van Dokkum} {et~al.} 2017; {Amorisco} {et~al.} 2018). The
ten clusters that
were analyzed in \natpap\ are all significantly brighter than this,
raising the question whether the GC luminosity function is
systematically offset from that in other galaxies.
Here we focus on the properties of the compact objects in
\blob, using imaging from the
{\em Hubble Space Telescope} ({\em HST}) and spectroscopy
obtained with the W.~M.~Keck Observatory.
We show that the GC system of \blob\ is unprecedented,
both in terms of the average properties of the clusters and in its offset
from the canonical scaling relation between GC system mass and
total galaxy mass.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.9\linewidth]{spectra.pdf}
\end{center}
\vspace{-0.2cm}
\caption{
Keck/LRIS spectra (left and right) and {\em HST} images (center)
of the 11 clusters
associated with \blob. The color images, generated
from the $V_{606}$ and $I_{814}$ data, span $1\arcsec \times
1\arcsec$. Some of the clusters are visibly flattened.
The background image was generated by masking all objects in
the $I_{814}$ {\em HST} frame that do
not match the color and size criteria we use for selecting GCs,
and then applying a slight smoothing to emphasize the compact
objects.
The spectra focus on the
wavelength region around the redshifted $\lambda{}4861$\,H$\beta$
and $\lambda{}5172$\,Mg lines. The red line is a S/N-weighted average
of the 11 spectra.
}
\label{spectra.fig}
\end{figure*}
\section{Identification}
\subsection{Spectroscopically-Identified Clusters}
We obtained spectra of compact objects in the \blob\ region with
the Keck telescopes, using
the Deep Imaging Multi-Object Spectrograph
on Keck II, the red arm of the Low-Resolution Imaging
Spectrometer (LRIS; {Oke} {et~al.} 1995), and the blue arm of
LRIS. The sample selection, reduction, and analysis of the
high resolution DEIMOS and red LRIS data are described in detail
in \natpap. The
blue-side LRIS data
were obtained with the 300/5000 grism
and $1\farcs0$ slits,
providing a spectral resolution ranging from $\sigma_{\rm instr}
\sim{}350$\,\kms\ at $\lambda=3800$\,\AA\ to $\sigma_{\rm
instr}\sim{}150$\,\kms\
at $\lambda=6600$\,\AA. The reduction followed the same procedures as
the red side data, and is described in \natpap.
The spectral resolution
is too low for accurate radial velocity measurements, but the wide
wavelength coverage provides
constraints on the stellar populations (\S\,\ref{sps.sec}).
Small sections of the
spectra of the 11 confirmed GCs are shown in Fig.\ \ref{spectra.fig}.
Note that we analyze one more object in this paper than in \natpap;
this is because
the S/N ratio of the red spectrum of GC-93 is too low for
an accurate velocity measurement.\footnote{Oddly, the red-side spectrum of
GC-93 appears to be featureless in the Ca\,triplet region.}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.95\linewidth]{selglobs.pdf}
\end{center}
\vspace{-0.2cm}
\caption{
Photometric selection of globular clusters. The top panel shows the
color-magnitude relation of all
objects in the {\em HST} images of \blob. The 11 spectroscopically-confirmed objects
are marked with yellow and black circles. Dashed lines delineate the
$\pm{}2\sigma$ range of the colors of the confirmed clusters:
$0.28<V_{606}-I_{814}<0.43$. The bottom panel shows the size-magnitude
relation for all objects that satisfy this color criterion. Objects with
${\rm FWHM}<4.7$\,pixels are candidate GCs.
The image on the right is a wider view of
that shown in Fig.\ \ref{spectra.fig}. All objects
are masked, except those that match the color and size criteria.
}
\label{selglobs.fig}
\end{figure*}
\subsection{Photometrically-Identified Clusters}
To measure the luminosity function we also have to consider
GCs that are fainter
than the spectroscopic limits, as well as any that might not
have been included in the masks. We select all candidate GCs using
the $V_{606}$ and $I_{814}$ {\em HST} images
(described in \natpap).
Photometric catalogs were created using SExtractor ({Bertin} \& {Arnouts} 1996) in
dual image mode. The photometry was corrected for Galactic
extinction ({Schlafly} \& {Finkbeiner} 2011),
and the $V_{606}-I_{814}$ colors were corrected for
the wavelength dependence of the PSF. Total magnitudes were determined
from the ``AUTO'' fluxes, with an object-by-object
correction to an infinite aperture
as determined from
the encircled energy curves of {Bohlin} (2016).
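The aperture correction described above amounts to scaling the AUTO flux by the fraction of the light enclosed in the aperture. A schematic version (our illustration, with a made-up enclosed-energy fraction rather than the actual Bohlin 2016 curves):

```python
import math

def total_mag(m_auto, enclosed_fraction):
    """Correct an aperture magnitude to infinite aperture.

    If the aperture captures a fraction f of the total flux, the total
    flux is F_auto / f, so m_total = m_auto + 2.5 log10(f), which is
    brighter (smaller) than m_auto for f < 1.
    """
    return m_auto + 2.5 * math.log10(enclosed_fraction)

# An aperture enclosing 95% of the flux makes the corrected magnitude
# about 0.056 mag brighter than the AUTO magnitude.
m = total_mag(22.0, 0.95)
assert m < 22.0
assert abs((22.0 - m) - 0.0557) < 0.001
```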
The top panel in Fig.\ \ref{selglobs.fig} shows all objects with
$I_{814}<26.5$ in the plane of $V_{606}-I_{814}$ color vs.\ $I_{814}$
magnitude. The 11 spectroscopically-identified clusters have a
remarkably small range in color: we find
$\langle{}V_{606}-I_{814}\rangle=0.36$ with an observed rms scatter of
$\sigma_{V-I}=0.039$. This is not a result of selection;
we obtained spectra of nearly all compact objects in the vicinity
of \blob\ irrespective of their color. The bottom panel of
Fig.\ \ref{selglobs.fig} shows the relation between the SExtractor
FWHM and $I_{814}$ magnitude for all objects that have colors
in the range $\langle{}V_{606}-I_{814}\rangle\pm2\sigma_{V-I}$.
We note that the results are not sensitive to the precise limits that
are used here.
As expected, the spectroscopically-identified
GCs are small. The dashed line corresponds to
${\rm FWHM}<\langle{\rm FWHM}\rangle+2.5\sigma_{\rm FWHM} = 4.7$\,pixels.
We find that the spectroscopic completeness is 100\,\% for $I_{814}<23$
objects that satisfy the color and size criteria. We find 16 candidate
GCs with $23<I_{814}<25.5$, but as we show below most
are probably compact background galaxies.
The grey scale panel of Fig.\ \ref{selglobs.fig}
shows the $I_{814}$ data after masking
all objects that do {\em not} satisfy these
criteria. The masked image was smoothed with a Gaussian of
${\rm FWHM}=0\farcs 9$.
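The color and size cuts described above amount to a simple catalog filter. A schematic version follows (the function and argument names are our own, not the actual SExtractor output columns); the numerical limits are the ones quoted in the text:

```python
# Selection cuts from the text: color within the +/- 2-sigma range of the
# confirmed clusters, and a SExtractor FWHM below 4.7 pixels.
COLOR_MIN, COLOR_MAX = 0.28, 0.43   # V606 - I814 range
FWHM_MAX = 4.7                      # pixels

def is_gc_candidate(v606_minus_i814, fwhm_pix):
    """Apply the photometric GC candidate cuts quoted in the text."""
    return (COLOR_MIN < v606_minus_i814 < COLOR_MAX) and (fwhm_pix < FWHM_MAX)

# The mean color of the confirmed clusters (0.36) passes the cuts;
# a redder object or an extended source does not.
assert is_gc_candidate(0.36, 3.0)
assert not is_gc_candidate(0.60, 3.0)
assert not is_gc_candidate(0.36, 10.0)
```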
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.7\linewidth]{lf.pdf}
\end{center}
\vspace{-0.2cm}
\caption{
Luminosity function of the compact objects in \blob.
{\em Top left:} Observed luminosity function,
in apparent $I_{814}$ magnitude. The blue line shows the magnitude
distribution of objects in blank field
3D-HST/CANDELS imaging that have the same colors and sizes as
the GCs. {\em Top right:} Observed luminosity function, after
correcting each bin for the expected number of unrelated objects.
{\em Bottom:} Luminosity function in absolute magnitude, for $D=20$\,Mpc.
The luminosity functions of GCs in the Milky
Way and in Coma UDGs are shown in red and blue, respectively.
}
\label{lf.fig}
\end{figure*}
\section{Luminosity Function and Specific Frequency}
\label{lf.sec}
At bright magnitudes it is straightforward to measure the luminosity
function of the GCs because the spectroscopic completeness is 100\,\%,
but at $I_{814}>23$ a correction needs to be made for interlopers.
This is evident from the
distribution of objects in the bottom panel of Fig.\
\ref{selglobs.fig}: at $I_{814}<23$ the GCs are well-separated from
other objects, but at faint magnitudes there is a continuous distribution
of sources with ${\rm FWHM}\sim 2-15$\,pixels.
This magnitude-dependent correction for unrelated objects was determined from
ACS imaging obtained in the blank field CANDELS survey ({Koekemoer} {et~al.} 2011).
We obtained CANDELS $V_{606}$ and $I_{814}$ images of the AEGIS field from
the 3D-HST data release ({Skelton} {et~al.} 2014), and analyzed these in the exact
same way as the \blob\ data.
The results are shown in the top panels of Fig.\ \ref{lf.fig}.
The expected contamination increases
steadily with magnitude at $I_{814}>23$.
The top right panel shows the
observed magnitude distribution after subtracting the expected contamination,
with the uncertainties reflecting the Poisson errors in the observed
counts in each bin. There is a pronounced peak at $I_{814}=22.0$
with a $1\sigma$ width of 0.4\,mag, consisting of the 11 confirmed
clusters.
The bottom panel of Fig.\ \ref{lf.fig} shows the luminosity function.
For consistency with other work we focus on $M_{V,606}$, determined
from the total $I_{814}$ magnitudes through $M_{V,606}=I_{814}
+(V_{606}-I_{814})-31.50$.
The mean absolute magnitude of the confirmed clusters is $M_{V,606}=-9.1$,
and the brightest cluster (GC-73) has $M_{V,606}=-10.1$.
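The constant 31.50 in this conversion is simply the distance modulus for $D=20$\,Mpc; a minimal sketch checking the arithmetic (the magnitude values are the ones quoted above):

```python
import math

def distance_modulus(d_mpc):
    """Distance modulus m - M = 5 log10(d / 10 pc)."""
    return 5.0 * math.log10(d_mpc * 1e6 / 10.0)

def abs_mag_v606(i814, v_minus_i, d_mpc=20.0):
    # M_V,606 = I_814 + (V_606 - I_814) - DM, i.e. apparent V_606 minus the DM
    return i814 + v_minus_i - distance_modulus(d_mpc)

# The distance modulus of 20 Mpc is ~31.50, as used in the text
print(round(distance_modulus(20.0), 2))    # 31.51
# Peak of the observed LF: I_814 = 22.0 with <V-I> = 0.36
print(round(abs_mag_v606(22.0, 0.36), 1))  # -9.1
```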
The red curve shows the (scaled) luminosity function of Milky Way GCs,
obtained from the 2010 edition of the {Harris} (1996)
catalog\footnote{http://physwww.mcmaster.ca/\~{}harris/mwgc.dat}
with $M_{V,606}=M_V-0.05$.
The peak magnitude of $M_V\sim{}-7.5$
for the Milky Way is similar to that seen in other galaxies
(e.g., {Rejkuba} 2012).
The blue curve is the average luminosity function of GCs in
the two UDGs Dragonfly~44 and DFX1, taken from {van Dokkum} {et~al.} (2017).
The luminosity function of \blob\ is shifted to higher
luminosities than those of other galaxies, including other
UDGs. The difference is a factor of $\sim 4$. Phrased differently,
the GC luminosity function of \blob\ is not far removed from the
bright end of the luminosity function of the Milky Way:
\blob\ has
11 clusters brighter than $M_{V,606}=-8.6$, whereas the Milky
Way has 20
(and only 15 with ${\rm [Fe/H]}<-1$).
However,
there is only marginal evidence for the presence of ``classical''
GCs with $M_{V,606}\sim -7.5$
in \blob: after correcting for
interlopers, the total number of GCs with $-8.5<M_{V,606}<-6.5$
is $N_{\rm peak}=4.2^{+3.4}_{-2.1}$ (compared to $N_{\rm peak}=84$
in the Milky Way).
Taking the total number of globular clusters as $\approx 15$,
we derive a specific frequency
$S_N\equiv{}N_{\rm GC}\times 10^{0.4 (M_V^{\rm g}+15)}\approx{}11$,
where $M_V^{\rm g}=-15.4$ is the total magnitude of the
galaxy (see \natpap).
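The specific frequency arithmetic can be verified directly; a sketch using the numbers quoted above:

```python
def specific_frequency(n_gc, m_v_gal):
    """S_N = N_GC * 10**(0.4 * (M_V^gal + 15)), as defined in the text."""
    return n_gc * 10.0 ** (0.4 * (m_v_gal + 15.0))

# N_GC ~ 15 and M_V^g = -15.4 for the galaxy:
print(round(specific_frequency(15, -15.4), 1))  # 10.4, i.e. S_N ~ 11 as quoted
```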
The 11 spectroscopically-confirmed clusters
constitute 4\,\% of the total luminosity
of \blob\ (with 1\,\% contributed by GC-73 and 3\,\% by the other
clusters).
\section{Structural Parameters}
We use the {\em HST} imaging
to compare the morphologies of the \blob\ GCs to
those of Milky Way GCs. We fit {King} (1962) models
to the
individual {\tt .flc} files
using the GALFIT software ({Peng} {et~al.} 2002) with
synthetic PSFs.
This provides eight independent measurements (four in $V_{606}$ and
four in $I_{814}$). Cosmic rays
and
neighboring objects were masked in the fits.
The results are listed in Table 1.
Circularized
half-light radii $r_h$ were determined from the measured core and tidal
radii (multiplied by $\sqrt{b/a}$).
The listed values are the
biweight averages (see {Beers}, {Flynn}, \& {Gebhardt} 1990)
of the eight individual measurements, and for each entry
the listed error is the biweight
scatter in the eight individual measurements.
We verified that very similar values are obtained if a {Sersic} (1968)
profile is fitted to the objects instead of a King profile.
As a test of
our ability to measure the sizes of these small objects
we also included
four stars of similar brightness to the GCs in the fits.
All four stars
have $r_h<0\farcs{}018$, whereas the GCs have sizes in the
range $0\farcs{}043\leq r_h\leq 0\farcs{}089$.
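The biweight location and scatter used for these averages follow Beers et al.\ (1990); a minimal sketch (the eight $r_h$ values are hypothetical, chosen only to illustrate how an outlying measurement is down-weighted):

```python
import statistics

def biweight_location(x, c=6.0):
    """Biweight estimate of the central location (Beers et al. 1990)."""
    m = statistics.median(x)
    mad = statistics.median([abs(xi - m) for xi in x])
    if mad == 0:
        return m
    u = [(xi - m) / (c * mad) for xi in x]
    num = sum((xi - m) * (1 - ui**2) ** 2 for xi, ui in zip(x, u) if abs(ui) < 1)
    den = sum((1 - ui**2) ** 2 for ui in u if abs(ui) < 1)
    return m + num / den

def biweight_scale(x, c=9.0):
    """Biweight estimate of the scatter (Beers et al. 1990)."""
    n = len(x)
    m = statistics.median(x)
    mad = statistics.median([abs(xi - m) for xi in x])
    if mad == 0:
        return 0.0
    u = [(xi - m) / (c * mad) for xi in x]
    num = sum((xi - m) ** 2 * (1 - ui**2) ** 4 for xi, ui in zip(x, u) if abs(ui) < 1)
    den = sum((1 - ui**2) * (1 - 5 * ui**2) for ui in u if abs(ui) < 1)
    return (n ** 0.5) * num ** 0.5 / abs(den)

# Eight hypothetical r_h measurements (arcsec), one of them an outlier:
r_h = [0.062, 0.060, 0.065, 0.061, 0.063, 0.059, 0.064, 0.095]
print(round(biweight_location(r_h), 3))  # 0.062: the outlier is down-weighted
print(biweight_scale(r_h) < 0.01)        # scatter stays small despite the outlier
```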
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.95\linewidth]{structure.pdf}
\end{center}
\vspace{-0.2cm}
\caption{
Morphological parameters of the GCs. The top panel shows the
circularized half-light
radii versus the absolute magnitude, for \blob\ (black points with
error bars) and the Milky Way (red). Errors in $M_{V,606}$
and $r_h$
include a 10\,\% uncertainty in the distance (see \natpap).
The bottom panel shows
the ellipticity. Means are indicated with dashed lines.
}
\label{structure.fig}
\end{figure}
The sizes and ellipticities are compared to those of Milky Way GCs
in Fig.\ \ref{structure.fig}, again making use of the 2010
version of the {Harris} (1996) compilation. The (biweight)
mean size of the 11 objects is $\langle{}r_h\rangle=6.2\pm 0.5$\,pc, a factor
of 2.2 larger than the mean size of Milky Way GCs
in the same luminosity range. The mean ellipticity is
$\langle\epsilon\rangle=0.18\pm 0.02$, a factor of 2.6 larger than
Milky Way GCs.
\noindent
\begin{deluxetable}{cccccc}
\tablecaption{Properties Of Globular Clusters\label{prop.tab}}
\tabletypesize{\footnotesize}
\tablehead{\colhead{Id} & \colhead{RA} & \colhead{DEC}
& \colhead{$M_{V,606}$} &
\colhead{$r_h$\tablenotemark{a}}
& \colhead{$\epsilon$}}
\startdata
39 & 2$^{\rm h}41^{\rm m}45.07^{\rm s}$ & $-8\arcdeg25\arcmin24\farcs9$
& $-9.3$ & $7.5\pm0.7$ & $0.16\pm0.03$\\
59 & 2$^{\rm h}41^{\rm m}48.08^{\rm s}$ & $-8\arcdeg24\arcmin57\farcs5$
& $-8.9$ & $6.5\pm1.0$ & $0.31\pm0.03$\\
71 & 2$^{\rm h}41^{\rm m}45.13^{\rm s}$ & $-8\arcdeg24\arcmin23\farcs0$
& $-9.0$ & $6.7\pm0.8$ & $0.08\pm0.05$\\
73 & 2$^{\rm h}41^{\rm m}48.22^{\rm s}$ & $-8\arcdeg24\arcmin18\farcs1$
& $-10.1$ & $6.4\pm0.7$ & $0.19\pm0.06$\\
77 & 2$^{\rm h}41^{\rm m}46.54^{\rm s}$ & $-8\arcdeg24\arcmin14\farcs0$
& $-9.6$ & $9.4\pm0.6$ & $0.31\pm0.02$\\
85 & 2$^{\rm h}41^{\rm m}47.75^{\rm s}$ & $-8\arcdeg24\arcmin05\farcs9$
& $-9.2$ & $5.2\pm0.8$ & $0.19\pm0.09$\\
91 & 2$^{\rm h}41^{\rm m}42.17^{\rm s}$ & $-8\arcdeg23\arcmin54\farcs0$
& $-9.2$ & $8.4\pm0.7$ & $0.13\pm0.04$\\
93 & 2$^{\rm h}41^{\rm m}46.72^{\rm s}$ & $-8\arcdeg23\arcmin51\farcs3$
& $-8.6$ & $4.1\pm1.0$ & $0.22\pm0.06$\\
92 & 2$^{\rm h}41^{\rm m}46.90^{\rm s}$ & $-8\arcdeg23\arcmin51\farcs1$
& $-9.4$ & $4.3\pm1.0$ & $0.21\pm0.06$\\
98 & 2$^{\rm h}41^{\rm m}47.34^{\rm s}$ & $-8\arcdeg23\arcmin35\farcs2$
& $-8.7$ & $5.4\pm1.7$ & $0.20\pm0.04$\\
101 & 2$^{\rm h}41^{\rm m}45.21^{\rm s}$ & $-8\arcdeg23\arcmin28\farcs3$
& $-8.6$ & $4.8\pm1.1$ & $0.16\pm0.04$
\enddata
\tablenotetext{a}{Circularized half-light radius of King profile,
in parsecs.
}
\end{deluxetable}
\section{Stellar Populations}
\label{sps.sec}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=0.8\linewidth]{spec.pdf}
\end{center}
\vspace{-0.2cm}
\caption{
Combined Keck/LRIS spectrum of the 11 GCs, weighted by the S/N ratio.
Errors are shown in grey. The best-fitting stellar population
synthesis model is shown in red. This model
has an age of $9.3^{+1.3}_{-1.2}$\,Gyr,
${\rm [Fe/H]}=-1.35\pm 0.12$, and ${\rm [Mg/Fe]}=0.16\pm
0.17$. The age is a lower limit, as it does not take the
possible presence of blue horizontal branch stars into account.
}\vspace{0.2cm}
\label{spec.fig}
\end{figure*}
We modeled
the LRIS-blue spectra with the most recent version
of the \texttt{alf} code ({Conroy} \& {van Dokkum} 2012; {Conroy} {et~al.} 2018).
To improve the constraints on the stellar population parameters
we stacked the 11 GC spectra, weighting by
the S/N ratio.
The stacked spectrum is shown in Fig.\ \ref{spec.fig}.
The S/N ratio ranges from
$\approx 12$\,pix$^{-1}$ at $\lambda=3800$\,\AA\
to $\approx 55$\,pix$^{-1}$ at $\lambda=5400$\,\AA\ (with
$1.5$\,\AA\,pix$^{-1}$).
The best fitting model, shown in red, has
${\rm [Fe/H]}=-1.35\pm 0.12$, ${\rm [Mg/Fe]}=0.16\pm
0.17$, and ${\rm age}=9.3^{+1.3}_{-1.2}$\,Gyr.
The mass-to-light ratio is $M/L_V=1.8\pm 0.2$.
The errors were determined using an MCMC fitting technique,
as described in {Conroy} \& {van Dokkum} (2012).
We conclude that the objects are old and metal poor. This likely
applies to the entire system: the scatter in the $V_{606}-I_{814}$
colors of the GCs is very small, and their average color is
consistent with that of
the diffuse galaxy light: $\langle{}V_{606}-I_{814}\rangle_{\rm gc}=0.36
\pm{}0.02$ and
$(V_{606}-I_{814})_{\rm gal}=0.37\pm 0.05$.
The $\alpha-$enhancement appears to be low, but typical values for
globular clusters ($0.3-0.5$) are only $1-2\sigma$ removed from the
best fit. Importantly, the
age (and also the $M/L$ ratio)
should be regarded as lower limits, due to the possible effects
of blue horizontal branch (BHB) stars. As discussed in, e.g.,
{Schiavon} (2007) and {Conroy} {et~al.} (2018) the presence of BHB stars reduces
the ages that are derived from integrated-light spectra.
The average spectrum of the 11 \blob\ GCs is
similar to the integrated-light spectra of Galactic GCs with
${\rm [Fe/H]}\sim -1.4$ and ages of $\sim 12$\,Gyr
(see {Mar{\'{\i}}n-Franch} {et~al.} 2009).
\section{Discussion}
We analyzed the population of globular clusters
associated with the UDG \blob.
Superficially the galaxy resembles many other UDGs.
For example,
the morphology of the diffuse light and the fraction of the light
that is in GCs are similar
to the well-studied UDG Dragonfly~17 in the Coma cluster
({van Dokkum} {et~al.} 2015; {Peng} \& {Lim} 2016; {Beasley} \& {Trujillo} 2016). The stellar populations
are also similar; the $V_{606}-I_{814}$ colors
are identical within the errors
to those of Dragonfly~44
({van Dokkum} {et~al.} 2017), and {Gu} {et~al.} (2017)
report ages and metallicities for three Coma UDGs that are consistent
with what we find here.
A generic explanation for such diffuse, globular cluster-rich
systems may be that they are ``failed''
galaxies, in which star formation terminated
shortly after the
metal-poor GCs appeared
and before a metal-rich component began to form. This naturally
explains their specific frequencies and uniform stellar populations,
and is qualitatively consistent with the observation that $S_N$
in dwarf galaxies is much higher when only metal-poor
stars are considered (e.g., {Larsen} {et~al.} 2014).
\blob\ is also very {\em different} from other UDGs (and indeed
all other known galaxies), in two distinct ways
that may be related to one another. First, the luminosity function
of the GCs has a narrow peak at $M_{V,606}\approx -9.1$
(Fig.~\ref{lf.fig}). This
is remarkable as the canonical value of $M_V\approx -7.5$
was thought to be universal, with only $\sim 0.2$\,mag
variation between galaxies
(see {Rejkuba} 2012).
The origin of
this unusual luminosity function is unknown; it
could be related to enhanced hierarchical
merging of lower mass clusters
(S.\ Trujillo-Gomez et al., in prep.).
The sizes and ellipticities of the GCs are different too, but this
may not be very fundamental. Since $\rho\propto{}Mr_h^{-3}$ the
GCs are a factor of $\sim{}2$ less dense than is typical. However,
their virial velocities are a factor of $\sqrt{2}$ higher, which
means that their kinetic energy densities
$e_{\rm kin}\sim P\propto\rho{}v^2$ are similar.
Therefore, the same gas pressures were needed to form these
clusters as those that led to the formation of typical Galactic
GCs (see {Elmegreen} \& {Efremov} 1997).
The higher ellipticities may simply reflect the initial angular
momentum of the GCs; as $t_{\rm r} \propto \sqrt{M}r_h^{1.5}$ the
relaxation times are a factor of $\sim 5$ longer than in typical
Milky Way GCs. We note that the effects of the external
gravitational potential on the structure of the GCs
are likely weak, due to the lack of dark
matter in \blob\ and the high masses of the clusters
(see, e.g., Goodwin 1997; Miholics et al.\ 2016).
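These scalings can be checked numerically; a sketch assuming the measured mean ratios (luminosity, and hence mass at fixed $M/L$, higher by a factor $\sim 4$; $r_h$ larger by a factor 2.2):

```python
# Assumed ratios relative to typical Milky Way GCs (from the text):
mass_ratio = 4.0   # clusters ~4x more luminous, same M/L assumed
size_ratio = 2.2   # mean r_h larger by a factor 2.2

density_ratio = mass_ratio / size_ratio**3          # rho ~ M r_h^-3
v_ratio = (mass_ratio / size_ratio) ** 0.5          # virial v^2 ~ M / r_h
e_kin_ratio = density_ratio * v_ratio**2            # e_kin ~ rho v^2
t_relax_ratio = mass_ratio**0.5 * size_ratio**1.5   # t_r ~ sqrt(M) r_h^1.5

print(round(density_ratio, 2))   # 0.38: a factor ~2-3 less dense
print(round(v_ratio, 2))         # 1.35: roughly sqrt(2) higher
print(round(e_kin_ratio, 2))     # 0.68: comparable kinetic energy density
print(round(t_relax_ratio, 1))   # 6.5: relaxation times ~5x longer
```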
The second difference is that the galaxy has no (or very little) dark
matter (see \natpap). This stands in stark contrast to cluster UDGs
(see {Beasley} {et~al.} 2016; {van Dokkum} {et~al.} 2016; {Mowla} {et~al.} 2017), and is inconsistent with
the idea that the old, metal-poor globular cluster systems
of galaxies are always closely connected to their
dark matter halos. Specifically,
previous studies
found that the ratio between the total mass in GCs and the
total (dark + baryonic) mass of galaxies is
remarkably constant, with $M_{\rm gc}^{\rm
tot}\approx{}3\times{}10^{-5}\,M_{\rm gal}^{\rm tot}$
(Blakeslee et al.\ 1997; {Harris} et al.\ 2015; {Forbes} {et~al.} 2016).
Taking $M/L_V\approx 2$ (\S\,5) we find
$\approx{}9\times{}10^6$\,\msun\ for the total mass of the
globular clusters in \blob, and in \natpap\ we derived a
90\,\% upper limit of
$<3.4\times 10^8$\,\msun\ for its total galaxy mass.
Therefore, the mass in the GC system is $\gtrsim 3$\,\% of the
mass of the galaxy, a factor of $\sim 1000$ higher than the
Harris et al.\ value.
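A sketch of this mass-fraction arithmetic, using the numbers quoted above:

```python
m_gc = 9e6           # total mass of the GC system (Msun), from M/L_V ~ 2
m_gal_max = 3.4e8    # 90% upper limit on the total galaxy mass (Msun)
harris_ratio = 3e-5  # M_gc^tot / M_gal^tot found for normal galaxies

fraction = m_gc / m_gal_max
print(f"{fraction:.3f}")               # 0.026, i.e. >~3% of the galaxy mass
print(round(fraction / harris_ratio))  # 882: a factor ~1000 above the standard value
```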
The existence of \blob\ suggests that the approximately linear
correlation
between GC system mass and total galaxy mass is not the result of
a fundamental relation between the
formation of metal-poor globular clusters and the
properties of dark matter halos
(as had been suggested by, e.g., {Spitler} \& {Forbes} 2009; {Trenti}, {Padoan}, \& {Jimenez} 2015; {Boylan-Kolchin} 2017).
Instead, the correlation may be
a by-product of other relations, with globular cluster formation
ultimately a baryon-driven process
(see, e.g., {Kruijssen} 2015; {Mandelker} {et~al.} 2017).
Taking these ideas one step further, perhaps a key aspect of forming a UDG
-- or at least UDGs with many GCs -- is, paradoxically, the presence of
very dense gas at high redshift. After a short period of very intense
star formation the gas was blown out, possibly by supernova
(or black hole) feedback from the forming clumps themselves
(e.g., {Calura} {et~al.} 2015). If the gas contained most of the mass
in the central regions of the forming galaxy this event may have led
to the extreme puffing up of the inner few kpc
(see also {Di Cintio} {et~al.} 2017; {Chan} {et~al.} 2017). The gas never returned,
either because the galaxy ended up in a cluster (Dragonfly 17) or
because it had very low mass (\blob). In this context having a massive
dark matter halo is not a central aspect of UDGs, but one of several
ways to reach sufficiently high gas densities for efficient
globular cluster formation at early times.
Of course, all this is speculation; also, this description of
events does not address the origin of
$\sim 10^{8-9}$\,\msun\ of extremely dense gas without a dark matter
halo. In this context,
an important unanswered question is whether
\blob\ is a ``pathological'' galaxy
that is the result of a rare set of circumstances
or representative of a class of similar objects. There are several
galaxies in our Cycle 23 {\em HST} program that superficially
resemble it, although none has quite as many luminous star
clusters.
\blob-like objects may have been more common in the past,
as large galaxies without dark matter lead a tenuous existence; in
clusters and massive groups they are easily destroyed, donating their
star clusters to the intracluster population of GCs and
ultra compact dwarfs (UCDs).
We note that progenitors of galaxies like \blob\
could readily be identified in JWST observations
if their luminous GCs did indeed form within $\sim 10^8$\,yr
of each other in a dense region.
Finally, we briefly discuss whether the compact
objects in \blob\ should be considered globular clusters at all.
In terms of
their average luminosity and size they are
intermediate between GCs and UCDs
(see, e.g., {Brodie} {et~al.} 2011), and
this question
hinges on whether we focus on the
population or on individual objects: the population characteristics
are unprecedented, but
for each individual object in \blob\ a match can be found
among the thousands of GCs with measured sizes and luminosities
in other galaxies
(e.g., {Larsen} {et~al.} 2001; {Barmby} {et~al.} 2007).
Intriguingly, in terms
of their sizes, flattening, stellar populations,
and luminosities the 11 compact
star clusters are remarkably similar to $\omega$\,Centauri --
an object whose nature has been the topic of decades of debate
(see, e.g., {Norris} \& {Da Costa} 1995).
\acknowledgements{Support from {\em HST} grant HST-GO-14644
and NSF grants AST-1312376, AST-1515084,
AST-1518294, and AST-1613582
is gratefully acknowledged.
JMDK gratefully acknowledges funding from the German Research Foundation (DFG) in the form of an Emmy Noether Research Group (grant number KR4801/1-1) and from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme via the ERC Starting Grant MUSTANG (grant agreement number 714907).
AJR is a Research Corporation for Science Advancement Cottrell Scholar.
}
\section*{Introduction}
In February 2005, some participants of the meeting Matemairacorana, organized in Recife, Brazil, by Hildeberto Cabral, met and discussed open questions. In December 2010, an international meeting on Hamiltonian systems and Celestial Mechanics, called Sidimec, took place in Aracaju, Brazil, in honor of Hildeberto Cabral on his 70th birthday. All participants were invited to make contributions to a list of open questions.
The problems deal with the dynamics of equations (\ref{nbp}), invariant manifolds, existence of particular solutions, singularities, relative equilibria and central configurations.
\subsection*{A few words on the $n$-body problem} The $n$-body problem is defined by the second order system of ordinary differential equations
\begin{equation}\label{nbp}
\ddot{\mathbf{q}}_i=\sum_{j\ne i}^n m_jr_{ij}^{2a}(\mathbf{q}_j-\mathbf{q}_i)
\end{equation}
where the positions $\mathbf{q}_i$ are in $\mathbf{R}^3$, the body located at $\mathbf{q}_i$ has mass $m_i$, $r_{ij}=\|\mathbf{q}_i-\mathbf{q}_j\|$ and $a=-3/2$. In some questions we consider the same equations with different values of $a$. When nothing about $a$ is specified, or when we write ``the Newtonian $n$-body problem", we mean $a=-3/2$. The quantities
\begin{equation}\label{mom}
\mathrm{I}=\frac{1}{M}\displaystyle\sum_{i<j}m_im_jr_{ij}^2,\qquad \mathrm{U}=-\frac{1}{2(a+1)}\displaystyle\sum_{i<j}m_im_jr_{ij}^{2(a+1)},
\end{equation}
where $M$ is the total mass, $a\neq -1$, are respectively called the moment of inertia and the potential.
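A direct transcription of equation (\ref{nbp}) as a right-hand-side function (a sketch; the function name is ours, and no particular integrator is implied):

```python
import numpy as np

def accelerations(q, m, a=-1.5):
    """Right-hand side of the n-body equations:
    qddot_i = sum_{j != i} m_j * r_ij**(2a) * (q_j - q_i); a = -3/2 is Newtonian."""
    acc = np.zeros_like(q)
    n = len(m)
    for i in range(n):
        for j in range(n):
            if j != i:
                dq = q[j] - q[i]
                r = np.linalg.norm(dq)
                acc[i] += m[j] * r ** (2 * a) * dq
    return acc

# Two unit masses at unit separation attract each other with unit acceleration:
q = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
m = np.array([1.0, 1.0])
print(accelerations(q, m))  # [[ 1. 0. 0.], [-1. 0. 0.]]
```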
\subsection*{A few words on central configurations} In the $n$-body problem, a configuration is {\it central }if there exists a real number $\lambda$ such that
\begin{equation}\label{cc-eq}
\lambda (\mathbf{q}_i -\mathbf{q}_G) = \sum_{j\neq i} m_j \|\mathbf{q}_i - \mathbf{q}_j \|^{2a}
(\mathbf{q}_i - \mathbf{q}_j).
\end{equation}
By $\mathbf{q}_G$ we mean the position of the center of mass of
the system. Here the exponent $a$ is often taken less than $-1$. If $a=-3/2$, we are considering the Newtonian force
law of Celestial Mechanics.
Due to the invariance of Newton's equation by rotations, one can define a reduced problem.
A fixed point of the reduced problem is called a \textit{relative equilibrium}. One can prove that the configuration of
a relative equilibrium is a planar central configuration. One counts central configurations or relative equilibria up to
rotation and rescaling.
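As an illustration of equation (\ref{cc-eq}), one can check numerically that Lagrange's equilateral triangle with three equal masses $m$ and side $s$ is a central configuration, with $\lambda=3ms^{2a}$ (a sketch; the helper name is ours):

```python
import numpy as np

def cc_residual(q, m, lam, a=-1.5):
    """Residual of the central configuration equation:
    lam * (q_i - q_G) - sum_{j != i} m_j |q_i - q_j|**(2a) (q_i - q_j)."""
    qg = (m[:, None] * q).sum(axis=0) / m.sum()
    res = np.zeros_like(q)
    for i in range(len(m)):
        rhs = sum(m[j] * np.linalg.norm(q[i] - q[j]) ** (2 * a) * (q[i] - q[j])
                  for j in range(len(m)) if j != i)
        res[i] = lam * (q[i] - qg) - rhs
    return res

# Equilateral triangle of side s, three equal unit masses, centroid at the origin:
s = 1.0
q = np.array([[np.cos(t), np.sin(t)]
              for t in (0, 2 * np.pi / 3, 4 * np.pi / 3)]) * s / np.sqrt(3)
m = np.ones(3)
lam = 3.0 * s ** (2 * -1.5)  # lambda = 3 m s^{2a} for this configuration
print(np.abs(cc_residual(q, m, lam)).max())  # ~0 (round-off only)
```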
\section*{The problems}
\paragraph{\bf Problem {1}{} -- Paul Painlev\'e --} Can a non-collision singularity occur in the 4-body problem?
\paragraph{\bf Comments.} This would be a singularity at a time $t_1$ which is not a collision and such that there is no
collision in a neighborhood of $t_1$. The question is implicit in Painlev\'e's {\it Le\c cons de Stockholm}, \S 20 (1895, see Painlev\'e 1897, 1972): ``{\it If, as $t$ tends to $t_1$,
some of the $n$ bodies tend to no limiting position at finite distance, there exist at least four bodies ${\rm
M}_1,\dots,{\rm M}_\nu$ ($\nu\geq 4$) which tend to no limiting position, and such that the minimum $\rho(t)$ of the
mutual distances $r_{ij}$ of these points ${\rm M}_1,\dots,{\rm M}_\nu$ tends to zero with $t-t_1$, without any
of the quantities $r_{ij}$ tending constantly to zero.} The singularity in question can therefore only occur
through crossings of $\nu$ bodies among themselves $(\nu\geq 4)$, crossings ever more frequent as $t$ tends
to $t_1$ and ever more similar to collisions. These {\it pseudo-collisions} were already pointed out by M.\ Poincar\'e as possibly giving rise (for $n>3$) to periodic solutions of a particular nature.''
Gerver (2003) recalls what is known of the subject of non-collision singularities, in particular the famous results by Mather and McGehee, by Xia and by him. He then suggests a possible solution of the 4-body case. A related question is proposed in Xia (2002).
\paragraph{\bf Problem {2}{} -- Aurel Wintner --} Is symmetry on the masses and the configurations a necessary condition for any flat
but non-planar solution of the $n$-body problem, if $n>3$?
\paragraph{\bf Comments.} This problem is proposed in \S 389bis of Wintner's book. A {\it flat} solution is such that the configuration
is always in a plane. Such a solution is called {\it planar} if this plane is fixed. Wintner gives examples of flat solutions which generalize {\it isosceles} solutions, where there is a symmetry. The study of flat solutions is suggested by the following result. If the configuration is always on a line, then the solution is planar. This line is fixed or the configuration is central (Wintner 1941, \S331).
\paragraph{\bf Problem {3}{} --} In the three-body problem with given masses, is there an algebraic invariant subvariety in the phase space besides the known ones?
\paragraph{\bf Comments.} The known ones are obtained by fixing the energy and/or the angular momentum and/or by
restricting to an isosceles problem and/or by restricting to the set of homographic solutions with a given central
configuration and/or by restricting to some lower dimensional 3-body problem. If answered negatively then Bruns'
theorem (see Julliard-Tosel 2000) is reproved, as well as several problems similar to
\textbf{Problem {2}}. In this kind of problem one assumes an algebraic constraint along a solution and tries to prove that it implies a much
stronger algebraic constraint. Taking successive time derivatives of this constraint, one finds a family of constraints that defines an
invariant algebraic manifold. A classification of such manifolds would give another proof of the following results.
\begin{enumerate}
\item if the bodies form an isosceles triangle for all time, then either it is equilateral, or two masses are equal and the problem is isosceles (Wintner 1941, \S389).
\item if a $3$-body solution has a constant moment of inertia, then it is a relative equilibrium (Saari's conjecture, see Moeckel 2008).
\item a solution whose plane containing the three bodies makes a constant angle with the plane orthogonal to
the angular momentum vector is either planar or isosceles (Salehani 2011).
\end{enumerate}
\paragraph{\bf Problem {4}{} -- Richard Montgomery --} In the 3-body problem with zero angular momentum and negative energy, are all bi-infinite syzygy sequences realized by collision-free solutions?
Are all periodic sequences realized by periodic solutions? Are all finite syzygy sequences realized by solutions asymptotic to an equilateral triple collision when time increases and when time decreases?
\paragraph{\bf Comments.} In the zero angular momentum, negative energy three-body problem
all solutions but Lagrange's total collapse solution have ``syzygies'': instants where the
three masses are collinear but not coincident (Montgomery 2007). The non-collision syzygies
come in three flavors, 1, 2, 3, depending on which mass is between the other two at the syzygy.
Thus, any collision-free solution to the problem yields a bi-infinite sequence,
such as ..123231122.. in the symbols 1, 2, 3. Simply write the flavors of syzygy
in their temporal order of occurrence.
If we change the potential from a $1/r$ to a $1/r^2$ potential (Montgomery 2005), syzygy sequences may provide full information on the solutions. If we take the three masses to be equal, the bounded non-collision orbits are in 1:1 correspondence, modulo symmetries, with the bi-infinite sequences where the same syzygy does not appear twice in a row, and where the same pair of syzygies does not appear infinitely many times in a row.
\paragraph{\bf Problem {5}{} -- Alain Chenciner, Andrea Venturelli --} Problem of the infinite spin in the total collision of the $n$-body problem: in the
planar $n$-body problem, may a configuration make an infinite number of revolutions before arriving at a total collapse?
\paragraph{\bf Comments.} Several claims of even stronger results were published, but even in this basic case, we cannot
find a proof in the corresponding papers. Chazy answered negatively if the limiting central configuration is
non-degenerate. Chazy (1918) indeed ``postulates'' the non-degeneracy of any central configuration, and uses the postulate in the proof of this result (op.\ cit.\
p.\ 361, footnote 1). This postulate is wrong (see e.g.\ Albouy and Kaloshin 2012). Probably for this reason, Wintner (1941, p.\ 431) rightly considered that Chazy did not discard the infinite spin in the 4-body problem.
\paragraph{\bf Problem {6}{} -- Jean-Pierre Marco --} Is the topological entropy positive, in the dynamics of the isosceles
collisionless 3-body problem, for values of the angular momentum and of the energy such that the integral manifold is a
smooth compact manifold?
\paragraph{\bf Problem {7}{} -- George David Birkhoff, Michel Herman --} Let ${\cal M}$ be an integral manifold of the 3-body problem.
Do solutions for which $\mathrm{I}\to +\infty$ as $t\to +\infty$ fill up ${\cal M}$ densely?
\paragraph{\bf Comments.} As usual the center of mass is fixed at the origin. An integral
manifold is the intersection of a level of the energy and a level of the angular momentum. Birkhoff (1927) asks
this question in the difficult and natural case where the angular momentum is non-zero, the energy is negative and the
dimension is 3. But even the case with a zero angular momentum and thus a planar solution is
unsolved. Note that if the energy is non-negative the answer is yes. Herman (1998) insisted on this
question and reformulated it.
\paragraph{\bf Problem {8}{} -- Giovanni F.\ Gronchi --} Consider the distance $d$ between two points on two distinct confocal
ellipses in $\mathbf{R}^3$. If both eccentricities are $>0$, is 12 the maximum number of stationary
points of the function $d^2$? If only one of the eccentricities is $0$, can this upper bound be reduced to 10?
\paragraph{\bf Comments.} Kholshevnikov and Vassiliev (1999) conjecture that the answer to the first question is positive. Examples with 12 stationary points, and with 10 points in the circle-ellipse case, were
found (Gronchi 2002). Moreover, the cases with infinitely many points were completely classified (either two
coinciding conics or two concentric coplanar circles) and are excluded here.
\paragraph{\bf Problem {9}{} -- Jean Chazy, Aurel Wintner, Steve Smale --} Is the number of relative equilibria finite, in the
$n$-body problem of Celestial Mechanics, for an arbitrarily given choice of positive real numbers $m_1,\dots,
m_n$ as the masses?
\paragraph{\bf Comments.} See Smale (1998), Hampton and Moeckel (2006), Hampton and Jensen (2011). The last result (Albouy and Kaloshin 2012) on this question is: for 5 bodies in the plane, the number is finite, except perhaps if the masses belong to an explicit codimension 2 subvariety
of the space of positive masses.
\paragraph{\bf Problem {10}{} --} Given four masses $m_1,m_2,m_3, m_4$ and with $a=-3/2$, is there only one convex central
configuration for each cyclic order?
\paragraph{\bf Comments.} MacMillan and Bartky (1932) proved there is at least one. Xia (2004) has a simpler
argument. The uniqueness is known if the two ends of a diagonal carry equal masses (Albouy et al.\ 2008).
Uniqueness is not known when, the bodies being numbered 1, 2, 3, 4 in cyclic order, $m_1=m_2$ and $m_3=m_4$.
The last theorem in MacMillan and Bartky (1932, \S18) gives a part of the answer in this case.
\paragraph{\bf Problem {11}{} --} For any $a$ in some interval containing the exponent $-3/2$, is there a unique central
configuration of four equal masses with a given axis of symmetry and no other symmetry?
\paragraph{\bf Comments.} Albouy and Sim\'o (Albouy 1995) conjecture that the answer is positive.
\paragraph{\bf Problem {12}{} --} For any $a\leq-1$, except for the regular $n$-gon with equal masses, are
there central configurations of $n$ bodies lying on a circle and having their center of mass at the center of the
circle?
\paragraph{\bf Comments.} In the Newtonian case, where $a=-3/2$, and for $n=4$, this problem was answered negatively
by Hampton (2005). However, we are asking for a general method of proving the symmetry of the central
configurations subjected to constraints on the geometry and the masses, namely, the configuration is co-circular and
the center of mass is at the center of the circle, regardless of the value of the exponent $a$. We suggest taking $a=-1$
or $n = 4$ as a good starting point.
\paragraph{\bf Problem {13}{} --} In the Newtonian five-body problem with equal masses, is every central
configuration symmetric?
\paragraph{\bf Comments.} For convex spatial central configurations the answer is yes (Albouy et al.\ 2008). See
Santos (2004) for very related results. According to the computations in Lee and Santoprete (2009),
one can be almost sure of a positive answer.
\paragraph{\bf Problem {14}{} -- Jaume Llibre --} Consider the planar central configurations of $N$ bodies of mass $\epsilon$ and a body of unit mass.
Consider their non-coalescent limits when $\epsilon\to 0$. If $N \ge 9$, should the
infinitesimal bodies form a regular polygon? If $N \le 8$, are the limiting central configurations necessarily
symmetric?
\paragraph{\bf Comments.} Non-coalescent means that the infinitesimal bodies all have distinct limiting positions. Hall (1987)
got the first results on this problem, about $N=2$, $N=3$ and very large $N$, while Salo and Yoder (1988) obtained numerically
a conjecturally complete list of configurations. A positive answer to {\bf Problem {14}} was obtained for all
$N>\exp(73)$ in Casasayas et al.\ (1994), for $N=4$ in Albouy and Fu (2009).
\paragraph{\bf Problem {15}{} -- Rick Moeckel --} In the planar Newtonian $n$-body problem, consider a solution of
relative equilibrium which is linearly stable (i.e.\ a fixed point of the reduced system which is linearly stable). Is
there always a dominant mass, i.e.\ a body with a mass, let us say, at least 10 times bigger than the total mass of the
other bodies?
\paragraph{\bf Comments.} This would imply the instability of any relative equilibrium with equal masses. Even this is not proved, except if $n=3$, 4 or $n\geq 24306$ (Roberts 1999).
\paragraph{\bf Problem {16}{} -- Rick Moeckel --} Under the same hypothesis as in \textbf{Problem {15}}, is the configuration always a non-degenerate
minimum of the function $\mathrm{U}$ restricted to the sphere $\mathrm{I}=1$?
\paragraph{\bf Comments.} This question is suggested by Theorem 1 in
Moeckel (1994) and by \textbf{Problem {15}}. The central configuration should correspond to a critical point with even index (Hu and Sun 2009).
\paragraph{\bf Problem {17}{} --} Does there exist a planar 5-body central configuration which has a degeneracy in the vertical
direction?
\paragraph{\bf Comments.} We mean a degeneracy of the central configuration seen as a critical point, as in \textbf{Problem {16}}. Moeckel and Sim\'o (1995) proved that such a degeneracy does occur in the 946-body problem. In the tensorial approach of central configurations due to Albouy and Chenciner (see Albouy 1997, p.\ 72), one defines the corank of a relative central configuration $\beta$ with multiplier $\lambda$ as being the rank of the tensor
$\alpha=dg(\beta)$ where $g$ is the real function
$$g(\beta)=2\,\mathrm{U}(\beta)+\lambda\mathrm{I}(\beta).$$
One knows that a planar relative central configuration of the five-body problem has corank $\leq2$.
\textbf{Problem {17}} asks if there is a central configuration of the planar five-body problem with corank one.
\bigskip\bigskip
{\bf Acknowledgments.} The authors thank the referees, Alain Chenciner, Gonzalo Contreras, Jacques F\'ejoz, Antonio Carlos Fernandes, Giovanni-Federico Gronchi, Marcel Guardia, Jaume Llibre, Jean-Pierre Marco, Rick Moeckel, Richard Montgomery, Gareth Roberts, Alfonso Sorrentino, Andrea Venturelli and Claudio Vidal for having enriched our work with many questions and comments. We benefited from the lists of open questions on the $n$-body problem available on Richard Montgomery's web page. We thank the regional program Math-AmSud for supporting our work.
\section*{References}
\hangindent=2em
\hangafter=1
Albouy, A.: The symmetric central configurations of four equal masses. Contemp.\ Math.\ {\bf 198}, 131--135 (1995)
\hangindent=2em
\hangafter=1
Albouy, A.: Recherches sur le probl\`eme des $n$ corps. Notes scientifiques et techniques du Bureau des Longitudes S058. Institut de M\'ecanique C\'eleste et Calcul des \'Eph\'em\'erides, Paris (1997)
\hangindent=2em
\hangafter=1
Albouy, A., Fu, Y.: Relative equilibria of four identical satellites. Proc.\ R.\ Soc.\ A {\bf 465}(2109), 2633--2645 (2009)
\hangindent=2em
\hangafter=1
Albouy, A., Kaloshin, V.: Finiteness of central configurations of five bodies in the plane. Ann.\ Math.\ {\bf 176}(1), 535--588 (2012)
\hangindent=2em
\hangafter=1
Albouy, A., Fu, Y., Sun, S.: Symmetry of planar four-body convex central configurations. Proc.\ R.\ Soc.\ A {\bf 464}(2093), 1355--1365 (2008)
\hangindent=2em
\hangafter=1
Birkhoff, G.D.: Dynamical Systems, volume IX, p.\ 290. Am.\ Math.\ Soc.\ Colloquium Pub.\ (1927)
\hangindent=2em
\hangafter=1
Casasayas, J., Llibre, J., Nunes, A.: Central configurations of the planar $1 + N$-body problem. Celest.\ Mech.\ Dyn.\ Astron.\ {\bf 60}, 273--288 (1994)
\hangindent=2em
\hangafter=1
Chazy, J.: Sur certaines trajectoires du probl\`eme des $n$ corps. Bull.\ Astron.\ {\bf 35}, 321--389 (1918)
\hangindent=2em
\hangafter=1
Gerver, J.L.: Noncollision singularities: do four bodies suffice? Exp.\ Math.\ {\bf 12}(2), 187--198 (2003)
\hangindent=2em
\hangafter=1
Gronchi, G.F.: On the stationary points of the squared distance between two ellipses with a common
focus. SIAM J.\ Sci.\ Comput.\ {\bf 24}(1), 61--80 (2002)
\hangindent=2em
\hangafter=1
Hall, G.R.: Central Configurations in the Planar $1 + n$ body problem. Pre\-print, Boston University, Boston
(1987)
\hangindent=2em
\hangafter=1
Hampton, M.: Co-circular central configurations in the four-body problem. In: Equadiff 2003 International
Conference on Differential Equations, pp.\ 993--998. World Sci.\ Publ.\ Co.\ Pte.\ Ltd.\ (2005)
\hangindent=2em
\hangafter=1
Hampton, M., Jensen, A.: Finiteness of spatial central configurations in the five-body problem. Celest.\ Mech.\ Dyn.\ Astron.\ {\bf 109}, 321--332 (2011)
\hangindent=2em
\hangafter=1
Hampton, M., Moeckel, R.: Finiteness of relative equilibria of the four-body problem. Invent.\ Math.\ {\bf 163}, 289--312 (2006)
\hangindent=2em
\hangafter=1
Herman, M.: Some open problems in dynamical systems. In: Proceedings of the International Congress of Mathematicians, Documenta Mathematica J.\ DMV, Extra volume ICM II, pp.\ 797--808 (1998)
\hangindent=2em
\hangafter=1
Hu, X., Sun, S.: Stability of relative equilibria and Morse index of central configurations. Comptes Rendus Mathematique {\bf 347}(21--22), 1309--1312 (2009)
\hangindent=2em
\hangafter=1
Julliard-Tosel, E.: Bruns' theorem: The proof and some generalizations. Celest.\ Mech.\ Dyn.\ Astron.\ {\bf 76}, 241--281 (2000)
\hangindent=2em
\hangafter=1
Kholshevnikov, K.V., Vassiliev, N.N.: On the distance function between two Keplerian elliptic orbits. Celest.\ Mech.\ Dyn.\ Astron.\ {\bf 75}, 75--83 (1999)
\hangindent=2em
\hangafter=1
Lee, T.-L., Santoprete, M.: Central configurations of the five-body problem with equal masses. Celest.\ Mech.\ Dyn.\ Astron.\ {\bf 104}(4), 369--381 (2009)
\hangindent=2em
\hangafter=1
MacMillan, W.D., Bartky, W.: Permanent configurations in the problem of four bodies. Trans.\ Am.\ Math.\ Soc.\ {\bf 34}(4), 838--875 (1932)
\hangindent=2em
\hangafter=1
Moeckel, R.: Linear stability of relative equilibria with a dominant mass. J.\ Dyn.\ Differ.\ Equ.\ {\bf 6}(1), 37--51 (1994)
\hangindent=2em
\hangafter=1
Moeckel, R.: A proof of Saari's conjecture for the three-body problem in $\mathbf{R}^d$. Disc.\ Cont.\ Dyn.\ Sys.\ Ser.\ S {\bf 1}(4), 631--646 (2008)
\hangindent=2em
\hangafter=1
Moeckel, R., Sim\'o, C.: Bifurcation of spatial central configurations from planar ones. SIAM J.\ Math.\ Anal.\ {\bf 26}(4), 978--998 (1995)
\hangindent=2em
\hangafter=1
Montgomery, R.: Fitting hyperbolic pants to a three-body problem. Ergod.\ Theory Dyn.\ Syst.\ {\bf 25}(3), 921--947 (2005)
\hangindent=2em
\hangafter=1
Montgomery, R.: The zero angular momentum, three-body problem: all but one solution has syzygies. Ergod.\ Theory Dyn.\ Syst.\ {\bf 27}(6), 1933--1946 (2007)
\hangindent=2em
\hangafter=1
Painlev\'e, P.: Le\c cons sur la th\'eorie analytique des \'equations diff\'erentielles. Hermann, Paris (1897)
\hangindent=2em
\hangafter=1
Painlev\'e, P.: \OE uvres, Tome 1. \'Editions du CNRS, Paris (1972)
\hangindent=2em
\hangafter=1
Roberts, G.E.: Spectral instability of relative equilibria in the planar $n$-body problem. Nonlinearity {\bf 12},
757--769 (1999)
\hangindent=2em
\hangafter=1
Salehani, M.K.: Global geometry of non-planar 3-body motions. Celest.\ Mech.\ Dyn.\ Astron.\ {\bf 111}, 465--479 (2011)
\hangindent=2em
\hangafter=1
Salo, H., Yoder, C.F.: The dynamics of coorbital satellite systems. Astron.\ Astrophys.\ {\bf 205}, 309--327 (1988)
\hangindent=2em
\hangafter=1
Santos, A.A.: Dziobek's configurations in restricted problems and bifurcation. Celest.\ Mech.\ Dyn.\ Astron.\ {\bf 90}(3--4), 213--238 (2004)
\hangindent=2em
\hangafter=1
Smale, S.: Mathematical problems for the next century. Math.\ Intell.\ {\bf 20}(2), 7--15 (1998)
\hangindent=2em
\hangafter=1
Wintner, A.: The Analytical Foundations of Celestial Mechanics. Princeton University Press, Princeton (1941)
\hangindent=2em
\hangafter=1
Xia, Z.: Some of the problems that Saari didn't solve. Contemp.\ Math.\ {\bf 292}, 267--270 (2002)
\hangindent=2em
\hangafter=1
Xia, Z.: Convex central configurations for the $n$-body problem. J.\ Differ.\ Equ.\ {\bf 200}(2), 185--190 (2004)
\keywords{$n$ body problem \and celestial mechanics \and list of problems}
\subclass{70F10 \and 70F15}
\end{document}
\section{Introduction \label{sec:intro}}
Early-type galaxies (ETGs) are known to follow characteristic scaling relations between several structural parameters including, most prominently, the line-of-sight velocity dispersion $\sigma_*$ of their stars. The first of these relations to be discovered was the Faber--Jackson relation $L\propto\sigma_*^4$ between velocity dispersion and galactic luminosity $L$ \citep{faber1976}. Subsequent work led to the discovery of the ``classic'' fundamental plane relation $L\propto\sigma_*^{\alpha}I^{\beta}$, or equivalently, $R_e\propto\sigma_*^{\gamma}I^{\delta}$ which include the (two-dimensional, projected) effective (half-light) radius $R_e$ and the average surface brightness $I$ within $R_e$ \citep{dressler1987, djorgovski1987}; recent studies \citep{cappellari2013a} find $\gamma\approx1$ and $\delta\approx-0.8$. From the virial relation $M\propto\sigma_*^2R_e$, with $M$ being the galaxy mass, one expects $\gamma=2$ and $\delta=-1$; the tilt of the fundamental plane, i.e. the discrepancy between expected and observed values, can be ascribed to a scaling of mass-to-light ratio with velocity dispersion (\citealt{cappellari2006}; but see also \citealt{cardone2011}). The Faber--Jackson relation can be understood as a projection of the fundamental plane onto the $L$--$\sigma_*$ plane (but see also \citealt{sanders2010}).
Whereas scaling relations that involve $L$ are convenient because the luminosity is an observable, the galactic dynamics is controlled by the galaxy mass $M$ for which $L$ is a proxy. For pressure-supported stellar systems, mass and velocity dispersion are connected via the virial relation
\begin{equation}
\label{eq:virial}
M = k_e\,\frac{\sigma_*^2\,R_e}{G}
\end{equation}
where $G$ is Newton's constant and $k_e$ is a geometry factor of order unity \citep[e.g.,][]{binney2008}. Accordingly, a ``more fundamental plane'' \citep{bolton2007} is given by the ``mass plane'' relation $M\propto\sigma_*^{\kappa}R_e^{\lambda}$; from the virial theorem, one expects $\kappa=2$ and $\lambda=1$ (see also \citealt{cappellari2016} for a recent review).
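As an illustration of how Equation~(\ref{eq:virial}) is used in practice, the following minimal sketch evaluates the virial mass estimator; the velocity dispersion, radius, and virial factor below are illustrative values, not measurements of any sample galaxy.

```python
# Minimal sketch of the virial mass estimator M = k_e * sigma^2 * R_e / G.
# The numbers below are illustrative, not measurements of any real galaxy.

G = 4.301e-6  # gravitational constant in kpc * (km/s)^2 / M_sun

def virial_mass(sigma_kms, r_e_kpc, k_e=5.0):
    """Virial mass in solar masses for a velocity dispersion [km/s],
    an effective radius [kpc], and a geometry factor k_e of order unity."""
    return k_e * sigma_kms**2 * r_e_kpc / G

# A galaxy with sigma* = 200 km/s and R_e = 4 kpc:
M = virial_mass(200.0, 4.0)
print(f"M = {M:.2e} M_sun")  # about 1.9e11 M_sun
```

Note that the estimate scales quadratically with $\sigma_*$ but only linearly with $R_e$, so the velocity dispersion dominates the error budget in practice.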
\begin{figure*}[t!]
\centering
\includegraphics[angle=-90, width=84mm]{fig/Mbulge-virial-massplane-ATLAS3D.eps}
\hspace{4mm}
\includegraphics[angle=-90, width=84mm]{fig/Mbulge-virial-massplane-Saglia2016.eps}
\caption{Galaxy mass $M$ as function of virial term $\sigma_*^2R_e/G$, both in units of solar mass. Please note the somewhat different axis scales. \emph{Left:} for the ATLAS$^{\rm 3D}$\ sample. The grey line corresponds to a linear relation (Equation \ref{eq:virial}) with ensemble-averaged virial factor $\langle k_e \rangle = 5.15$. The black line marks the best-fit generalized virial relation (Equation~\ref{eq:massplane}) with $x = 0.924 \pm 0.016$. \emph{Right:} for the data of \citet{saglia2016}. The grey line indicates a linear relation with $\langle k_e \rangle = 4.01$. The black line marks the best-fit generalized virial relation with $x = 0.923 \pm 0.018$. \label{fig:virialRe}}
\end{figure*}
Testing the validity and accuracy of Equation~(\ref{eq:virial}) is important as virial mass estimators of this type are widely applied to pressure-supported stellar systems. Fundamental plane studies usually derive masses $M$ from photometry and equate them with dynamical masses, presuming an equality of the two. A deviation from the theoretical relation would imply the presence of additional ``hidden'' parameters or dependencies between parameters. Unlike for relations between luminosity and other parameters, a tilt in the mass plane would be connected directly to the dynamics or structure of galaxies. To date, the virial relation (Equation~\ref{eq:virial}) is commonly assumed to be valid exactly (cf., e.g., \citealt{cappellari2013a}). This is, however, not undisputed. Based on an analysis of about 50\,000 SDSS galaxies, \citet{hyde2009} concluded that $M\propto(\sigma_*^2R_e)^{0.83\pm0.01}$. More recent observations of early-type galaxies in three nearby galaxy clusters, accompanied by more sophisticated dynamical modeling, find $\kappa\approx1.7$ \citep{scott2015}. This raises the question to what extent Equation~(\ref{eq:virial}) is appropriate for describing the dynamics of ETGs, and which alternative formulations might be necessary.
\section{Data \label{sec:data}}
This work is primarily based on the ATLAS$^{\rm 3D}$\ database of \citet{cappellari2011, cappellari2013a}. In addition, I use the dataset of \citet{saglia2016} for an independent check. The two samples cannot be combined directly because they employ different conventions for calculating effective stellar velocity dispersions.
\subsection{The ATLAS$^{\rm\bf 3D}$ Sample \label{sec:atlasd}}
The ATLAS$^{\rm 3D}$\ project \citep{cappellari2011, cappellari2013a} provides\footnote{\url{http://www-astro.physics.ox.ac.uk/atlas3d}} data for a volume-limited sample of 260 nearby (located within $\lesssim$42 Mpc) early-type galaxies. For each galaxy, the surface brightness distribution is modeled with a Multi-Gaussian Expansion (MGE) algorithm. The results are fed into a Jeans Anisotropic MGE (JAM) algorithm which computes predictions for the line-of-sight velocity dispersion distribution in the sky plane. These values are compared to observed velocity dispersion distributions obtained from optical integral-field spectroscopy. The best-fit JAM models provide the effective radius $R_e$ and mass $M$. For each galaxy, an effective velocity dispersion $\sigma_*$ is measured from a combined spectrum co-added over an ellipse of area $\pi R_e^2$.
As the JAM results vary in quality, some quality-based selection of data is needed. Following the suggestion of \citet{cappellari2013a}, I select galaxies for which there is at least a ``good'' (quality flag $\geq$2) agreement between predicted and observed velocity dispersion distributions. This results in a final dataset comprising 101 galaxies. The selected galaxies have (JAM) masses $M$ between $\approx$$9\times10^{9}\,M_{\odot}$ and $\approx$$5\times10^{11}\,M_{\odot}$, effective velocity dispersions $\sigma_*$ between $\approx$70~${\rm km\,s}^{-1}$ and $\approx$280~${\rm km\,s}^{-1}$, and effective radii $R_e$ ranging from $\approx$0.5~kpc to $\approx$7~kpc. Formal uncertainties are 10\% (0.041 dex) for effective radii, 5\% (0.021 dex) for effective velocity dispersions, and 12\% (0.049 dex) for galaxy (JAM) masses.
\subsection{The Sample of \citet{saglia2016} \label{sec:saglia}}
The dataset by \citet{saglia2016} provides data for 72 local (located within $\lesssim$150 Mpc) elliptical galaxies and classical bulges. Classical bulges can be regarded as elliptical galaxies that formed new disks around them; they follow the same parameter correlations as ``free'' ellipticals do \citep{kormendy2012}. Accordingly, I will treat both types of objects jointly from now on. The sample of \citet{saglia2016} was selected for studies of black hole -- host galaxy relations and combines (re-calibrated where necessary) literature results with new integral-field spectroscopic observations.
For each galaxy, mass and scale radius are derived from photometry. Each target is decomposed into its elliptical bulge and other components like disks, rings, or bars (if any). Bulge masses $M$ are calculated from their luminosities using mass-to-light ratios derived from dynamical modeling. The three-dimensional spherical half-mass radii $r_h$ are used as scale radii. The effective stellar velocity dispersion $\sigma_*$ is derived from a brightness-weighted sum of the squares of velocity dispersion and rotation speed over radii from 0 to $R_e$. The radii $r_h$ and $R_e$ are related like $R_e=(0.74\pm0.01)\,r_h$ (\citealt{saglia2016} for their sample; see also \citealt{hernquist1990, wolf2010} for general derivations).
Sample galaxies were selected with emphasis on covering a wide range in $\sigma_*$, from $\approx$70~${\rm km\,s}^{-1}$ to $\approx$390~${\rm km\,s}^{-1}$. Half-mass radii $r_h$ range from $\approx$0.1~kpc to $\approx$32~kpc, bulge masses $M$ are located between $\approx$$4\times10^8\,M_{\odot}$ and $\approx$$2\times10^{12}\,M_{\odot}$. Median formal uncertainties are 21\% (0.083 dex) for bulge masses, 5\% (0.021 dex) for effective velocity dispersions, and 25\% (0.096 dex) for half-light radii.
\section{Analysis and Results \label{sec:analysis}}
\begin{figure}[t!]
\centering
\includegraphics[angle=-90, width=84mm]{fig/Mbulge-virial-massplane-ATLAS3D-Re-ell.eps}
\caption{Galaxy mass $M$ as function of virial term $\sigma_*^2a/G$ for the ATLAS$^{\rm 3D}$\ sample, with semi-major axis $a$. The grey line indicates a linear relation (Equation~\ref{eq:virial-a}) with $\langle k_a \rangle = 3.82$. The black line marks the best-fit generalized virial relation (Equation~\ref{eq:massplane-a}) with $x' = 0.976 \pm 0.018$. \label{fig:virial-a}}
\end{figure}
\subsection{The Virial Relation \label{sec:virial}}
\subsubsection{Effective Radius as Scale Radius \label{sec:virialRe}}
Masses, velocity dispersions, and radii are (supposed to be) connected via the virial relation expressed by Equation~(\ref{eq:virial}). Figure~\ref{fig:virialRe} shows mass $M$ as function of virial term $\sigma_*^2R_e/G$ for the two samples. Assuming a linear relation gives the best agreement for an ensemble-averaged $\langle k_e \rangle = 5.15\pm0.09$ and $\langle k_e \rangle = 4.01\pm0.18$ (standard errors of means) for the ATLAS$^{\rm 3D}$\ and \citet{saglia2016} samples, respectively, in full agreement with \citet{cappellari2013a} (for ATLAS$^{\rm 3D}$). Assuming that different conventions for calculating $\sigma_*$ explain the difference entirely, the velocity dispersions of \citet{saglia2016} are systematically higher than the ones of ATLAS$^{\rm 3D}$\ by 13\%.
On closer inspection, however, the data deviate systematically from a naive linear relation. For a quantitative analysis I use the generalized virial relation
\begin{equation}
\label{eq:massplane}
\log\left(\frac{M}{10^{11}M_{\odot}}\right) = x\log\left(\frac{\sigma_*^2R_e/G}{10^{10.5}M_{\odot}}\right) + y
\end{equation}
where $x$ and $y$ are free parameters; mass and virial term are normalized by their approximate medians in order to minimize the covariance of the fit parameters. Logarithms are decadic. Equation~(\ref{eq:massplane}) describes a ``restricted mass plane'' because $\sigma_*$ and $R$ are coupled like $M\propto\sigma_*^{2x}R^x$ instead of a more general relation $M\propto\sigma_*^{\kappa}R^{\lambda}$ with independent $\kappa$ and $\lambda$. By construction, the restricted mass plane probes the evolution of the ratio of observed and dynamically expected masses.
I fit Equation~(\ref{eq:massplane}) to the data via a standard weighted linear least-squares regression. Error bars are rescaled iteratively such that min$(\chi^2/{\rm d.o.f.})=1$. The best-fit slopes are $x = 0.924\pm0.016$ and $x = 0.923\pm0.018$ (with formal $1\sigma$ errors) for the ATLAS$^{\rm 3D}$\ and \citet{saglia2016} data, respectively. Both values are in good agreement with each other and both are significantly -- by $4.8\sigma$ and $4.3\sigma$, respectively -- smaller than unity: the empirical relation is flatter than the theoretical one. The intrinsic scatter (i.e., the difference in squares of rms residual and bivariate rms measurement error) about the best-fit lines is consistent with zero in both cases.
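The fitting procedure above can be sketched as follows: a weighted linear least-squares fit in log space, with error bars rescaled iteratively until the reduced $\chi^2$ equals one. The data below are synthetic (with a built-in slope of 0.92), not the ATLAS$^{\rm 3D}$\ measurements.

```python
# Sketch of the fit of Eq. (3): log10(M/1e11) = x * log10(V/10^10.5) + y,
# where V = sigma^2 * R_e / G, with error bars rescaled iteratively so
# that chi^2 per degree of freedom equals 1. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n = 100
logV = rng.uniform(-1.0, 1.0, n)          # normalized log virial term
logM = 0.92 * logV + 0.05 + rng.normal(0.0, 0.05, n)
sy = np.full(n, 0.05)                     # nominal 1-sigma errors (dex)

def wls(xd, yd, s):
    """Weighted least squares for yd = x*xd + y; returns (x, y, chi2)."""
    w = 1.0 / s**2
    A = np.vstack([xd, np.ones_like(xd)]).T
    cov = np.linalg.inv(A.T @ (A * w[:, None]))
    p = cov @ (A.T @ (w * yd))
    return p[0], p[1], np.sum(w * (yd - A @ p)**2)

for _ in range(20):                       # rescale errors until chi2/dof = 1
    x_fit, y_fit, chi2 = wls(logV, logM, sy)
    red = chi2 / (n - 2)
    if abs(red - 1.0) < 1e-6:
        break
    sy = sy * np.sqrt(red)

print(f"x = {x_fit:.3f}")                 # recovers the input slope of 0.92
```

Since all error bars are rescaled by a common factor, the best-fit parameters are unchanged by the rescaling; only the parameter uncertainties and the $\chi^2$ value are affected.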
\subsubsection{Semi-Major Axis Length as Scale Radius \label{sec:virial-a}}
\begin{figure}
\includegraphics[angle=-90, width=84mm]{fig/mass-ellipticity.eps}
\caption{Evolution of ``roundness'' $1 - \epsilon$ as function of galaxy mass $M$. The grey line marks the best-fit powerlaw relation, with a slope of $0.123 \pm 0.041$. \label{fig:ell-mass}}
\end{figure}
The effective radius is given by $R_e = \sqrt{a b}$, with $a$ and $b$ being the semi-major and semi-minor axis of the \emph{projected} ellipse that encloses half of the galaxy light, respectively. As argued by, e.g., \citet{hopkins2010}, the semi-major axis $a$ is a more robust proxy for the physical scale radius of a galaxy than $R_e$. Replacing $R_e$ with $a$ results in a modified virial relation
\begin{equation}
\label{eq:virial-a}
M = k_a\,\frac{\sigma_*^2\,a}{G}
\end{equation}
which replaces Equation~(\ref{eq:virial}) and an updated ``restricted mass plane'' relation
\begin{equation}
\label{eq:massplane-a}
\log\left(\frac{M}{10^{11}M_{\odot}}\right) = x'\log\left(\frac{\sigma_*^2 a/G}{10^{10.5}M_{\odot}}\right) + y'
\end{equation}
which replaces Equation~(\ref{eq:massplane}). By definition, $R_e$ and $a$ are related like $a = R_e / \sqrt{1 - \epsilon}$, with $\epsilon = 1 - b/a$ being the ellipticity. The discussion in the remainder of Section~\ref{sec:analysis} refers to the ATLAS$^{\rm 3D}$\ dataset only because \citet{saglia2016} do not provide ellipticity or semi-major axis length information for their sample galaxies.
Figure~\ref{fig:virial-a} shows galaxy mass $M$ as function of virial term $\sigma_*^2 a / G$, with $a$ computed from $R_e$ and $\epsilon$. Assuming a linear relationship gives an ensemble-averaged virial factor $\langle k_a \rangle = 3.82 \pm 0.062$, again in good agreement with \citet{cappellari2013a}. Fitting Equation~(\ref{eq:massplane-a}) to the data (in the same way as done in Section~\ref{sec:virialRe}) results in a slope of $x' = 0.976 \pm 0.018$ -- which agrees with unity within errors. The intrinsic scatter about the best-fit line is consistent with zero.
\subsection{Ellipticity as Function of Mass \label{sec:ell-mass}}
Whereas use of $a$ in the virial relation results in agreement between data and expectation (Section~\ref{sec:virial-a}), use of $R_e$ finds an empirical relation that is significantly flatter than theoretically expected (Section~\ref{sec:virialRe}). As $R_e = a\sqrt{1 - \epsilon}$, the difference between the two empirical relations implies that the ellipticity $\epsilon$ is a function of galaxy mass. Figure~\ref{fig:ell-mass} illustrates the scaling of the ``roundness'' $1-\epsilon$ with $M$ for the ATLAS$^{\rm 3D}$\ sample. For a quantitative test, I fit the relation
\begin{equation}
\log\left(\frac{1-\epsilon}{0.5}\right) = \xi \log\left(\frac{M}{10^{11}M_{\odot}}\right) + \zeta
\label{eq:ell-mass}
\end{equation}
to the data; $\xi$ and $\zeta$ are free parameters. The fit returns a slope of $\xi = 0.123 \pm 0.041$; on average, galaxies of higher mass tend to be less elliptical than those of lower mass.
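A fit of this form can be sketched as a straight-line fit in log-log space; the data below are synthetic with a built-in slope of 0.12, not the ATLAS$^{\rm 3D}$\ values.

```python
# Sketch of the roundness--mass fit of Eq. (6):
# log10((1 - eps)/0.5) = xi * log10(M/1e11) + zeta.
# Synthetic data with a built-in slope of 0.12, not the ATLAS3D values.
import numpy as np

rng = np.random.default_rng(2)
logM = rng.uniform(-1.0, 0.7, 101)        # log10(M / 1e11 Msun)
log_roundness = 0.12 * logM + rng.normal(0.0, 0.08, logM.size)

# Ordinary least-squares fit in log-log space:
xi, zeta = np.polyfit(logM, log_roundness, 1)
print(f"xi = {xi:.3f}")                   # slope near the input value 0.12

# Implied scaling: (1 - eps) ~ M^xi, hence sqrt(1 - eps) ~ M^(xi/2),
# which is the factor separating R_e = a*sqrt(1 - eps) from a.
```

The last comment makes explicit why a slope $\xi \approx 0.12$ in the roundness--mass relation is enough to tilt the $R_e$-based virial relation by roughly $\xi/2 \approx 0.06$ in the exponent.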
\section{Discussion \label{sec:discuss}}
The very existence of a tight mass plane relation is somewhat puzzling. Light distributions, mass-to-light ratios, and thus the masses $M$ of ETGs are derived from carefully modeling each system individually. It is not obvious that $M$ should correlate as tightly with the coarse proxies for mass, $\sigma_*$ and either $R_e$ or $a$ (combined in the virial term), as it does, with zero intrinsic scatter (see also the corresponding discussion in \citealt{cappellari2013a}). The global dynamics of ETGs is simpler than one might expect given that they show a wide range of geometries and mass profiles. Likewise, it is noteworthy that a generalized virial relation (Sections \ref{sec:virialRe} and \ref{sec:virial-a}) with slope $x$ or $x'$ describes the dynamics of ETGs completely: as there is zero intrinsic scatter about the best-fit lines, adding another free parameter by letting $M$ scale independently with $\sigma_*$ and either $R_e$ or $a$ would not provide additional information (Occam's razor). This is illustrated in Section 4.3 of \citet{cappellari2013a}: their mass plane analysis, using their full sample of galaxies minus a few outliers, returns $M \propto \sigma_*^{1.928} R_e^{0.964}$, which is identical to $M \propto (\sigma_*^2 R_e)^{0.964}$ -- as expected for a fit with too many free parameters.
The analysis in Section~\ref{sec:virialRe} unambiguously shows that, when using $R_e$ as scale radius, the virial relation is tilted, with $x\approx0.92$ being significantly smaller than unity. This is consistent with the original mass plane analysis by \citet{cappellari2013a} who found that their values for $\kappa$ and $\lambda$ were smaller than the expected values by 2.8$\sigma$ and 2.0$\sigma$, respectively. Indeed, when combining the corresponding false alarm probabilities, the analysis by \citet{cappellari2013a} finds a tilt with a significance of $3.7\sigma$ -- but doing this is permissible only when assuming a priori that $\kappa$ and $\lambda$ are correlated (which is at odds with a mass \emph{plane} analysis). My result also qualitatively agrees with the trend observed by \citet{hyde2009}; however, I find a value for $x$ which is significantly larger than the one found from the SDSS sample. Given that I find the same result from two independently drawn and modeled samples of ETGs, I suspect that \citet{hyde2009} underestimated their systematic uncertainties. This can be compared to the results by \citet{scott2015} who found $x = 0.93\pm0.06$ for their sample, which is likewise smaller than unity, albeit not statistically significantly so. \citet{scott2015} suspected the result $x \neq 1$ to be a feature of JAM modeling. Given however that \citet{saglia2016} use several different types of dynamical modeling to derive ETG masses (cf. their Section~2.2), it seems unlikely that the tilt in the virial relation is a modeling artifact.
As shown in Section~\ref{sec:virial-a}, empirical and theoretical virial relation agree (within errors) when adopting the semi-major axis of the half-light ellipse, $a$, as galactic scale radius. Indeed, an improved agreement in this case was already noted in the original mass plane analysis by \citet{cappellari2013a} (cf. also Section 4.2.1 of \citealt{cappellari2016}), although in their analysis the difference between the two formulations (with $R_e$ and $a$, respectively) was not yet statistically significant when assuming independence of $\kappa$ and $\lambda$ from each other. The difference between using $a$ and using $R_e$ in the virial relation arises from a scaling of ellipticity $\epsilon$ with galaxy mass: the higher $M$, the higher the roundness $1-\epsilon$ (Section~\ref{sec:ell-mass}). As noted by \citet{vanderwel2009} and \citet{weijmans2014}, this trend is due to a lack of highly elliptical ($b/a < 0.6$) galaxies at masses $M \gtrsim 10^{11}M_{\odot}$ (see also Figure~\ref{fig:ell-mass}); \citet{vanderwel2009} interpreted this observation as evidence for major merging being the dominant mechanism for forming massive galaxies. With $(1-\epsilon) \propto M^{0.12}$ and thus $\sqrt{1-\epsilon} \propto M^{0.06}$, the empirical roundness--mass relation is sufficient to explain the difference between $x\approx0.92$ and unity (within errors). It seems that my analysis is the first to explicitly note the impact of the roundness--mass relation on the virial and mass plane relations of early-type galaxies.
Accepting Equation~(\ref{eq:virial-a}) as the correct virial relation means accepting that $a$ is a proper proxy for the scale radius of early-type galaxies (whereas $R_e$ is not). This was already suggested by \citet{hopkins2010} and later supported by \citet{cappellari2013a}. \citet{hopkins2010} argued that $R_e$ is affected by projection whereas $a$ is not: the same axisymmetric and oblate galaxy viewed under different angles will show different $R_e$ but always the same $a$. (This is actually the reason why \citet{cappellari2013a} argued in favor of using $a$; they did not yet note the effect of ellipticity scaling with ETG mass.) Combining this argument with the fact that Equation~(\ref{eq:virial-a}) fits the available data with no intrinsic scatter implies that \emph{early-type galaxies are intrinsically axisymmetric and oblate in general} -- if they were triaxial or prolate, $a$ would not usually coincide with the longest axis in projection and would not be a measure of galaxy size. For the ATLAS$^{\rm 3D}$\ sample, uncertainties on either $a$ or $R_e$ are given as 10\%, limiting deviations from axisymmetry -- more specifically, the deviation of the ratio of the two longest axes of a triaxial ellipsoid from unity -- to about the same amount. This is in good agreement with the results from modeling the intrinsic shapes of early-type galaxies based on their kinematics and light distributions (with the possible exception of a small sub-population of slowly rotating ETGs; \citealt{weijmans2014}).
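The projection argument can be made concrete with a small sketch. For an axisymmetric oblate spheroid with equatorial radius $a$ and intrinsic axis ratio $q$, viewed at inclination $i$, the projected image is an ellipse with semi-major axis $a$ independent of $i$ and axis ratio $q'(i)=\sqrt{q^2\sin^2 i+\cos^2 i}$; the values of $a$ and $q$ below are illustrative, not fitted to any galaxy.

```python
# Sketch of the projection argument: an oblate axisymmetric spheroid with
# equatorial radius a and intrinsic axis ratio q, viewed at inclination i,
# projects to an ellipse with semi-major axis a (independent of i) and
# axis ratio q'(i) = sqrt(q^2 sin^2 i + cos^2 i), so R_e = a*sqrt(q') varies.
import math

a, q = 1.0, 0.6               # illustrative intrinsic shape, not fitted values

for i_deg in (0.0, 30.0, 60.0, 90.0):
    i = math.radians(i_deg)
    q_proj = math.sqrt(q**2 * math.sin(i)**2 + math.cos(i)**2)
    R_e = a * math.sqrt(q_proj)
    print(f"i = {i_deg:5.1f} deg:  a = {a:.3f},  R_e = {R_e:.3f}")
# a stays constant with viewing angle, while R_e shrinks from a (face-on)
# to a*sqrt(q) (edge-on).
```

This is exactly why $a$ is insensitive to projection for oblate systems while $R_e$ is not; for a triaxial or prolate body the projected semi-major axis would no longer coincide with a fixed intrinsic length.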
\section{Conclusions \label{sec:conclude}}
Using public data for the early-type galaxy samples of \citet{cappellari2011, cappellari2013a} and \citet{saglia2016}, I probe the validity and accuracy of the virial relation given by Equation~(\ref{eq:virial}). The key results are:
\begin{enumerate}
\item Assuming a linear relationship between galaxy mass and virial term, I find ensemble-averaged virial factors of $\langle k_e \rangle = 5.15 \pm 0.09$ and $\langle k_e \rangle = 4.01 \pm 0.18$ for the ATLAS$^{\rm 3D}$\ and \citet{saglia2016} samples, respectively, in agreement with \citet{cappellari2013a} (for ATLAS$^{\rm 3D}$). The difference between the two samples arguably arises from the \citet{saglia2016} velocity dispersions being systematically higher than the ATLAS$^{\rm 3D}$\ ones by 13\% due to different conventions.
\item For both galaxy samples, the empirical virial relation is significantly (by more than $4\sigma$) tilted, such that $M \propto (\sigma_*^2 R_e)^{0.92}$. For the ATLAS$^{\rm 3D}$\ data, this is consistent with the mass plane analysis provided by \citet{cappellari2013a}.
\item Replacing the effective radius $R_e$ with the semi-major axis of the projected half-light ellipse $a$ reconciles empirical and theoretical virial relations, with $M \propto \sigma_*^2 a$ (Equations \ref{eq:virial-a} and \ref{eq:massplane-a}). The ensemble-averaged virial factor is $\langle k_a \rangle = 3.82 \pm 0.062$, in good agreement with \citet{cappellari2013a}.
\item All best-fit virial relations show intrinsic scatter consistent with zero. This implies that the mass plane of ETGs is fully determined by the virial relation, i.e., that masses $M$ do not scale independently with $\sigma_*$ and either $R_e$ or $a$ but only with $\sigma_*^2 R_e$ (or $\sigma_*^2 a$).
\item The ``roundness'' $1-\epsilon$, with ellipticity $\epsilon$, mildly scales with galaxy mass such that $(1 - \epsilon) \propto M^{0.12}$. This agrees with the known lack of highly elliptical galaxies for $M \gtrsim 10^{11}M_{\odot}$. As $R_e = a\sqrt{1-\epsilon}$, the scaling of mass and roundness explains the tilt in the virial relation that occurs when using $R_e$ instead of $a$ as galaxy scale radius.
\item Given that (i) $a$ turns out to be the correct proxy for the galactic scale radius and (ii) the best-fit virial relation (Equation~\ref{eq:massplane-a}) fits the data with zero intrinsic scatter, one finds that early-type galaxies are axisymmetric and oblate in general. This agrees with results from modeling their intrinsic shapes based on kinematics and light distributions.
\end{enumerate}
\acknowledgments
I am grateful to Kyu-Hyun Chae (Sejong U) for valuable discussion. This work is based on the ATLAS$^{\rm 3D}$\ database of \citet{cappellari2011, cappellari2013a} and the database of \citet{saglia2016}. I make use of the data analysis software package \textsc{dpuser} developed and maintained by Thomas Ott at MPE Garching (\url{www.mpe.mpg.de/~ott/dpuser/index.html}). I acknowledge financial support from the National Research Foundation of Korea (NRF) via Basic Research Grant NRF-2015-R1D1A1A-01056807. Last but not least, thanks go to an anonymous referee for helpful comments.
\section{Introduction}
Time-varying interface problems are frequently encountered in numerical simulations of multi-phase flows and fluid-structure interactions.
The transient deformation of the material region poses a great challenge to the design of
high-order numerical methods for solving such problems. The problem becomes even more difficult when the evolution of the solution and the deformation of the domain interact strongly and nonlinearly.
In this paper, we study the nonlinear convection-diffusion equation on a time-varying domain
\begin{subequations}\label{cd-model}
\begin{align}
\partial_t{\boldsymbol{u}}+ {\boldsymbol{u}} \cdot \nabla {\boldsymbol{u}}- \nu\Delta{\boldsymbol{u}} = {\boldsymbol{f}}&
\quad \text{in}\;\; \Omega_t, \label{cd-eqn} \\
\partial_{\boldsymbol{n}} {\boldsymbol{u}} =0& \quad \text{on}\;\;\Gamma_t, \label{cd-bc}\\
{\boldsymbol{u}}(0) ={\boldsymbol{u}}_0& \quad \text{in}\;\;\Omega_0, \label{cd-ic}
\end{align}
\end{subequations}
where $\Omega_t\subset \mathbb{R}^2$ is a bounded domain for each $t\ge 0$,
$\Gamma_t = \partial \Omega_t$ the boundary of $\Omega_t$, ${\boldsymbol{f}}({\boldsymbol{x}},t)$ the source term, $\nu$ the diffusion coefficient which is a positive constant, ${\boldsymbol{u}}_0$ the initial value, and $\Omega_0$ the initial shape of the domain. Moreover, $\partial_t{\boldsymbol{u}}$ denotes the partial derivative $\frac{\partial{\boldsymbol{u}}}{\partial t}$ and $\partial_{\boldsymbol{n}}{\boldsymbol{u}}$ denotes the directional derivative $\frac{\partial{\boldsymbol{u}}}{\partial{\boldsymbol{n}}}$ on $\Gamma_t$ with ${\boldsymbol{n}}$ being the unit outer normal on $\Gamma_t$. Equation \eqref{cd-eqn} can be viewed as a simplification of the momentum equation of the incompressible Navier-Stokes equations by removing the pressure term.
The flow velocity ${\boldsymbol{u}}$ drives the deformation of $\Omega_t$, that is,
$\Omega_t = \big\{{\boldsymbol{X}}(t;0,{\boldsymbol{x}}):
\forall\,{\boldsymbol{x}}\in\Omega_0\big\}$,
where ${\boldsymbol{X}}(t;0,{\boldsymbol{x}})$ is the solution to the
ordinary differential equation (ODE)
\begin{equation}\label{eq:X}
\frac{\mathrm{d}}{\mathrm{d} t} {\boldsymbol{X}}(t;0,{\boldsymbol{x}})= {\boldsymbol{u}}\big({\boldsymbol{X}}(t;0,{\boldsymbol{x}}),t\big),\qquad
{\boldsymbol{X}}(0;0,{\boldsymbol{x}}) = {\boldsymbol{x}}.
\end{equation}
Throughout the paper, we assume that the exact solution ${\boldsymbol{u}}({\boldsymbol{x}},t)$ is ${\boldsymbol{C}}^{2}$-smooth in both ${\boldsymbol{x}}$ and $t$. Classical ODE theory then shows that \eqref{eq:X} has a unique solution.
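To make the role of the flow map concrete, the following minimal Python sketch integrates \eqref{eq:X} with a classical fourth-order Runge-Kutta step for an assumed rigid-rotation velocity field, for which the exact flow map is a rotation. The velocity field and all function names are illustrative, not part of the method developed below.

```python
import math

def velocity(x, t):
    # Assumed rigid-body rotation: chosen because the exact flow map
    # X(t;0,x) is a rotation of x by angle t.
    return (-x[1], x[0])

def trace(x0, t_end, n_steps):
    """Integrate dX/dt = u(X,t), X(0)=x0, with classical RK4; one way to
    realize the flow map X(t;0,x) numerically."""
    x, t = x0, 0.0
    tau = t_end / n_steps
    for _ in range(n_steps):
        k1 = velocity(x, t)
        k2 = velocity((x[0] + 0.5*tau*k1[0], x[1] + 0.5*tau*k1[1]), t + 0.5*tau)
        k3 = velocity((x[0] + 0.5*tau*k2[0], x[1] + 0.5*tau*k2[1]), t + 0.5*tau)
        k4 = velocity((x[0] + tau*k3[0], x[1] + tau*k3[1]), t + tau)
        x = (x[0] + tau*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
             x[1] + tau*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)
        t += tau
    return x

# Rotating (1,0) through pi/2 should land (up to O(tau^4)) on (0,1).
x_end = trace((1.0, 0.0), math.pi/2, 200)
err = math.hypot(x_end[0] - 0.0, x_end[1] - 1.0)
```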
The nonlinearity of the problem appears not only in the convection term, but also in the interaction between ${\boldsymbol{u}}$ and $\Omega_t$.
To design high-order numerical methods for \eqref{cd-model}, one
must also devise a high-order surface-tracking algorithm that builds the varying domain from the numerical solutions.
A fully high-order solver requires that the surface-tracking algorithm have at least the same order of accuracy as the numerical method for solving the partial differential equation (PDE) \cite{ma21}.
If the driving velocity of the surface is known, there are extensive works in the literature on tracking and representing the surface approximately. We refer to \cite{ale99,gig06,her08,mor17,osh88,osh00} and the references therein for level set methods, to \cite{har00,hir81,kum21} for volume-of-fluid methods,
to \cite{ahn07,ahn09,dya08} for moment-of-fluid methods, and
to \cite{try01,zha01} for front-tracking methods. In a series of works \cite{zha14,zha16,zha17,zha18}, Zhang developed the cubic MARS (Mapping and Adjusting Regular Semi-analytic sets) algorithm for tracking the loci of a free surface. The algorithm traces control points on the surface and
forms an approximate surface with cubic spline interpolations.
It achieves high-order accuracy by controlling the distances between neighboring control points.
In the previous work \cite{ma21}, we developed a high-order finite element method for solving the linear advection-diffusion equation on Eulerian meshes. In that work, the driving velocity of the domain is given explicitly, so that the surface-tracking procedure can be implemented easily with the cubic MARS algorithm. A thorough error analysis of the finite element solution is conducted by accounting for all errors from interface-tracking, spatial discretization, and temporal integration. For the nonlinear problem \eqref{cd-model}, however, an additional difficulty is that {\it the driving velocity of the boundary is the unknown solution to the PDE}. To maintain the overall high-order accuracy of the numerical method, we propose a high-order implicit-explicit (IMEX) scheme which advances the boundary explicitly while solving \eqref{cd-model} implicitly.
During the past three decades, numerical schemes for solving interface problems on unfitted grids have become very popular in the literature. To mention some of them, we refer
to \cite{bre12,mit05,pes77} for the immersed boundary method, to \cite{lev94,li06} for the immersed interface method, to \cite{guo21,li98,li03,lin09} for the immersed finite element method, and to \cite{bec09,han02,hua17,leh15,liu20,leh13,wu19} for Nitsche extended finite element methods. The main idea is to double the degrees of freedom on interface elements and add penalty terms to enforce interface conditions weakly.
Similar ideas can also be found in fictitious domain methods which allow the boundary to cross mesh elements \cite{bur10,bur12,jar09,mas14}. To the best of our knowledge, there are very few papers in the literature on high-order methods for free-surface problems where the motion of domain is driven by the solution to the equation, particularly, for those problems on severely deforming domains. The purpose of this paper is to develop third- and fourth-order methods for
solving the nonlinear convection-diffusion equation \eqref{cd-model} on time-varying domains.
For time integration, we propose an IMEX scheme by applying the $k^{\rm th}$-order Semi-implicit Backward Difference Formula (SBDF-$k$) \cite{asc95} to a Lagrangian form of the convection-diffusion equation. The spatial arguments of the velocity are implicitly defined by means of flow maps.
For spatial discretization, we adopt high-order fictitious-domain finite element methods using cut elements. The methods are based on a fixed Eulerian mesh that covers the full movement range of the deforming domain.
The novelties of this work are listed as follows.
\begin{itemize}[leftmargin=5mm]
\item [1.] A high-order IMEX scheme is proposed for solving \eqref{cd-model}, where the time integration is taken implicitly along characteristic curves, while the computational domains are formed explicitly with high-order surface-tracking algorithm.
\item [2.] By numerical examples on severely deforming domains, we show that the SBDF-$3$ and SBDF-$4$ schemes have optimal convergence orders.
\end{itemize}
\vspace{2mm}
The rest of the paper is organized as follows.
In section~2, we introduce the semi-discrete SBDF-$k$ schemes, $1\le k\le 4$, for solving problem \eqref{cd-model}. In section~3, we
introduce the fully discrete finite element method for solving \eqref{cd-model} on a fixed Eulerian mesh. A high-order surface-tracking algorithm is presented to build the computational domains explicitly with numerical solutions. In section~4, we present an efficient algorithm for computing backward flow maps.
Three numerical examples are presented to show that the proposed numerical methods
have overall optimal convergence orders for $k=3,4$.
\section{The semi-discrete schemes}
For any fixed ${\boldsymbol{x}}\in\Omega_0$, the chain rule and \eqref{eq:X} indicate that the material derivative of ${\boldsymbol{u}}$ satisfies
\begin{equation}\label{eq:DuDt}
\frac{\mathrm{d}}{\mathrm{d} t}{\boldsymbol{u}}\big({\boldsymbol{X}}(t;0,{\boldsymbol{x}}),t\big) = \partial_t {\boldsymbol{u}}\big({\boldsymbol{X}}(t;0,{\boldsymbol{x}}),t\big)
+ {\boldsymbol{u}}\big({\boldsymbol{X}}(t;0,{\boldsymbol{x}}),t\big) \cdot\nabla {\boldsymbol{u}}\big({\boldsymbol{X}}(t;0,{\boldsymbol{x}}),t\big),
\end{equation}
where $\nabla{\boldsymbol{u}}({\boldsymbol{z}},t)$ denotes the gradient of ${\boldsymbol{u}}$ with respect to ${\boldsymbol{z}}$ and $\partial_t{\boldsymbol{u}}({\boldsymbol{z}},t)$ denotes the partial derivative of ${\boldsymbol{u}}$ with respect to $t$. Similarly, $\Delta{\boldsymbol{u}}({\boldsymbol{z}},t)$ denotes the Laplacian of ${\boldsymbol{u}}$ with respect to ${\boldsymbol{z}}$. For convenience, when no confusion arises, we omit the arguments of ${\boldsymbol{u}}$ and rewrite problem \eqref{cd-model} in the compact form
\begin{subequations}\label{cd1}
\begin{align}
\frac{\mathrm{d}{\boldsymbol{u}}}{\mathrm{d} t}- \nu\Delta{\boldsymbol{u}} = {\boldsymbol{f}}&
\quad \text{in}\;\; \Omega_t, \label{cd1-eqn} \\
\partial_{\boldsymbol{n}} {\boldsymbol{u}} =0& \quad \text{on}\;\;\Gamma_t, \label{cd1-bc}\\
{\boldsymbol{u}}|_{t=0} ={\boldsymbol{u}}_0& \quad \text{in}\;\;\Omega_0. \label{cd1-ic}
\end{align}
\end{subequations}
Through \eqref{eq:DuDt}, problems \eqref{eq:X} and \eqref{cd1} form a coupled system of initial-boundary value problems.
Now we describe the SBDF-$k$ for solving \eqref{eq:X} and \eqref{cd1}.
Let $0=t_0<t_1<\cdots<t_N=T$ be the uniform partition of the interval $[0,T]$ with step size $\tau=T/N$. For convenience, we denote the exact flow map from $\Omega_{t_m}$ to $\Omega_{t_n}$ by ${\boldsymbol{X}}^{m,n}:={\boldsymbol{X}}(t_n;t_m,\cdot)$ for all $0\le m\le n\le N$.
The inverse of ${\boldsymbol{X}}^{m,n}$ is denoted by
\begin{eqnarray*}
{\boldsymbol{X}}^{n,m} := \big({\boldsymbol{X}}^{m,n}\big)^{-1}.
\end{eqnarray*}
For each $k\le n\le N$, we are going to study the approximate solution ${\boldsymbol{X}}^{0,n}_\tau({\boldsymbol{x}})$ of \eqref{eq:X} and the approximate solution ${\boldsymbol{u}}^n$ of \eqref{eq:DuDt} at $t_n$.
First we assume that the discrete approximations ${\boldsymbol{X}}^{0,m}_\tau({\boldsymbol{x}})$ of ${\boldsymbol{X}}(t_m;0,{\boldsymbol{x}})$ have already been obtained for all $0\le m<n$ and all ${\boldsymbol{x}}\in\Omega^0\equiv \Omega_0$. The approximate domain at $t_m$ is defined as the range of ${\boldsymbol{X}}^{0,m}_\tau$, namely,
\begin{eqnarray*}
\Omega^m=\big\{{\boldsymbol{X}}^{0,m}_\tau({\boldsymbol{x}}): \forall\,{\boldsymbol{x}}\in\Omega^0\big\}.
\end{eqnarray*}
Suppose that ${\boldsymbol{X}}^{0,m}_\tau$ provides a good approximation to ${\boldsymbol{X}}^{0,m}$.
The boundedness of ${\boldsymbol{X}}^{0,m}$ implies that ${\boldsymbol{X}}^{0,m}_\tau$ is one-to-one and has a bounded inverse.
For any $0\le j\le m$, the maps ${\boldsymbol{X}}^{j,m}_\tau$: $\Omega^{j}\to \Omega^m$ and
${\boldsymbol{X}}^{j,m}$: $\Omega_{t_j}\to \Omega_{t_m}$ are defined as
\begin{equation}\label{Xmn}
{\boldsymbol{X}}_\tau^{j,m} := {\boldsymbol{X}}_\tau^{0,m}\circ \big({\boldsymbol{X}}_\tau^{0,j}\big)^{-1},\qquad
{\boldsymbol{X}}^{j,m} := {\boldsymbol{X}}^{0,m}\circ \big({\boldsymbol{X}}^{0,j}\big)^{-1}.
\end{equation}
The inverse of ${\boldsymbol{X}}_\tau^{j,m}$ is denoted by
\begin{equation}\label{Xmj}
{\boldsymbol{X}}_\tau^{m,j} := \big({\boldsymbol{X}}_\tau^{j,m}\big)^{-1}.
\end{equation}
Next we suppose that the approximate solutions ${\boldsymbol{u}}^m$ of \eqref{cd1} are obtained for $0\le m<n$ and that each ${\boldsymbol{u}}^m$ is supported on $\overline{\Omega^m}$. In the $n^{\rm th}$ time step, the explicit $k^{\rm th}$-order scheme for solving \eqref{eq:X} has the form
\begin{equation}\label{eq:DnX0}
a_0^k {\boldsymbol{X}}_\tau^{0,n}({\boldsymbol{x}})
= \tau \sum_{i =1}^{k} b_i^k {\boldsymbol{u}}^{n-i}\circ{\boldsymbol{X}}_\tau^{0,n-i}({\boldsymbol{x}})
-\sum_{i=1}^k a_i^k {\boldsymbol{X}}_\tau^{0,n-i}({\boldsymbol{x}}), \qquad \forall\,{\boldsymbol{x}}\in \Omega^0,
\end{equation}
where the coefficients $a_i^k,b_i^k$ are listed in Table~\ref{tab:SBDF}.
Using \eqref{Xmj}, we can rewrite \eqref{eq:DnX0} equivalently as follows
\begin{equation}\label{eq:DnX}
a_0^k {\boldsymbol{X}}_\tau^{n-1,n}({\boldsymbol{x}})
= \tau \sum_{i =1}^{k} b_i^k {\boldsymbol{u}}^{n-i}\circ{\boldsymbol{X}}_\tau^{n-1,n-i}({\boldsymbol{x}})
-\sum_{i=1}^k a_i^k {\boldsymbol{X}}_\tau^{n-1,n-i}({\boldsymbol{x}}), \qquad \forall\,{\boldsymbol{x}}\in \Omega^{n-1}.
\end{equation}
Clearly ${\boldsymbol{X}}^{n-1,n}_\tau$ defines the approximate domain of the $n^{\rm th}$ time step
\begin{equation}\label{Dn}
\Omega^n=\big\{{\boldsymbol{X}}^{n-1,n}_\tau({\boldsymbol{x}}): \forall\,{\boldsymbol{x}}\in\Omega^{n-1}\big\},\qquad
\Gamma^n:=\partial\Omega^n.
\end{equation}
Define ${\boldsymbol{f}}^n:={\boldsymbol{f}}(\cdot,t_n)$. The implicit $k^{\rm th}$-order scheme for solving \eqref{cd1} is given by
\begin{subequations}\label{un-pro}
\begin{align}
a_0^k{\boldsymbol{u}}^n-\tau\nu\Delta{\boldsymbol{u}}^{n}=\tau{\boldsymbol{f}}^n - \sum_{i=1}^ka_i^k{\boldsymbol{u}}^{n-i}
\circ{\boldsymbol{X}}^{n,n-i}_\tau\quad
&\hbox{in}\;\;\Omega^n, \label{un-eqn}\\
\partial_{\boldsymbol{n}}{\boldsymbol{u}}^n =0 \quad
&\hbox{on}\;\;\Gamma^n. \label{un-bc}
\end{align}
\end{subequations}
\begin{table}[htb!]
\center
\caption{Coefficients for SBDF schemes.}\label{tab:SBDF}
\setlength{\tabcolsep}{2mm}
\begin{tabular}{ |c|l|l|l|l|l|}
\hline
\diagbox[dir=SE]{$k$}{$\big(a_i^k,b_i^k\big)$}{$i$} &\qquad $0$ &\quad\;\;$1$ &\qquad $2$ &\qquad $3$ &\qquad $4$ \\ \hline
$1$ & $(1,\times)$ &$(-1,\,1)$ &$(0,\,0)$ &$(0,\,0)$ &$(0,\,0)$ \\ \hline
$2$ &$(3/2,\,\times)$ &$(-2,\,2)$ &$(1/2,\,-1)$ &$(0,\,0)$ &$(0,\,0)$ \\ \hline
$3$ &$(11/6,\,\times)$ &$(-3,\,3)$ &$(3/2,\,-3)$ &$(-1/3,\,1)$ &$(0,\,0)$ \\ \hline
$4$ &$(25/12,\,\times)$ &$(-4,\,4)$ &$(3,\,-6)$ &$(-4/3,\,4)$ &$(1/4,\,-1)$ \\ \hline
\end{tabular}
\end{table}
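The coefficients in Table~\ref{tab:SBDF} can be checked by a short script: the implicit BDF part must differentiate polynomials of degree up to $k$ exactly, and the explicit weights $b_i^k$ must extrapolate polynomials of degree up to $k-1$ exactly. The sketch below is a consistency check only (not the paper's solver), using exact rational arithmetic with $\tau=1$ and $t_{n-i}=-i$.

```python
from fractions import Fraction as F

# SBDF coefficients from the table: A[k][i] = a_i^k, B[k][i-1] = b_i^k.
A = {1: [F(1), F(-1)],
     2: [F(3, 2), F(-2), F(1, 2)],
     3: [F(11, 6), F(-3), F(3, 2), F(-1, 3)],
     4: [F(25, 12), F(-4), F(3), F(-4, 3), F(1, 4)]}
B = {1: [F(1)],
     2: [F(2), F(-1)],
     3: [F(3), F(-3), F(1)],
     4: [F(4), F(-6), F(4), F(-1)]}

def bdf_residual(k, d):
    """sum_i a_i^k p(t_{n-i}) - tau*p'(t_n) for p(t)=t^d, tau=1, t_n=0.
    Zero for all d <= k iff the implicit part is k-th order consistent."""
    val = sum(A[k][i] * F(-i)**d for i in range(k + 1))
    return val - (1 if d == 1 else 0)

def extrap_residual(k, d):
    """sum_i b_i^k p(t_{n-i}) - p(t_n) for p(t)=t^d: zero for d <= k-1
    iff the explicit weights extrapolate degree-(k-1) polynomials exactly."""
    val = sum(B[k][i - 1] * F(-i)**d for i in range(1, k + 1))
    return val - (1 if d == 0 else 0)

ok = (all(bdf_residual(k, d) == 0 for k in range(1, 5) for d in range(k + 1))
      and all(extrap_residual(k, d) == 0 for k in range(1, 5) for d in range(k)))
```

Both residuals vanish on the stated polynomial degrees, confirming the tabulated coefficients.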
\section{High-order finite element methods}
\label{sec:fem}
In practice, we are not able to build $\Omega^n$ with \eqref{eq:DnX} and \eqref{Dn}.
The purpose of this section is to propose a surface-tracking algorithm to build an approximate domain $\Omega^n_h$
and to propose a fictitious-domain finite element method using cut elements (\cite{bur10,bur12}) for computing the discrete solution ${\boldsymbol{u}}^n_h\in\Honev[\Omega^n_h]$ in each time step.
Let $D\subset\mathbb{R}^2$ be an open domain satisfying $\bar\Omega_t\subset D$ for all $0\le t\le T$. Let $\mathcal{T}_h$ be the uniform partition of $\bar D$, which consists of closed squares of side-length $h$.
\vspace{1mm}
The algorithms for computing ${\boldsymbol{u}}^n_h$ and $\Omega^n_h$ will be described successively.
\begin{center}
\fbox{\parbox{0.975\textwidth}
{First we assume that all the quantities below have been obtained for all $0\le m<n$,
\begin{enumerate}[leftmargin=5mm]
\item the computational domains $\Omega^m_h$ and the finite element solutions ${\boldsymbol{u}}^m_h\in\Honev[\Omega^m_h]$,
\item the discrete forward maps ${\boldsymbol{X}}^{m-1,m}_h$: $\bar\Omega^{m-1}_h\to
{\boldsymbol{X}}^{m-1,m}(\bar\Omega^{m-1}_h)$, and
\item the discrete backward maps ${\boldsymbol{X}}^{m,m-1}_h$: $\bar \Omega^{m}_h
\to \bar \Omega^{m-1}_h$.
\end{enumerate}\vspace{1mm}
Our task is to establish the computational domain $\Omega^n_h$, the finite element solution ${\boldsymbol{u}}^n_h$, the forward map ${\boldsymbol{X}}^{n-1,n}_h$, and the backward map ${\boldsymbol{X}}^{n,n-1}_h$.}}
\end{center}
\subsection{Finite element spaces}
Let $\Gamma^m_h=\partial\Omega^m_h$ denote the boundary of $\Omega^m_h$ for convenience.
We introduce an open domain $\tilde\Omega^m_h$ which is larger than $\Omega^m_h$:
\begin{equation}\label{domain-tm}
\tilde\Omega^m_h = \Omega^m_h\cup\left\{{\boldsymbol{x}} + t{\boldsymbol{n}}({\boldsymbol{x}}): 0\le t<h/4,\;{\boldsymbol{x}}\in\Gamma^m_h\right\},
\end{equation}
where ${\boldsymbol{n}}({\boldsymbol{x}})$ is the unit normal of $\Gamma^m_h$ at ${\boldsymbol{x}}$ and points to the exterior of $\Omega^m_h$.
Without loss of generality, we assume that $\tilde\Omega^m_h\subset D$.
Let $\mathcal{T}^m_{h,I}$ denote the set of interior elements of $\Omega^m_h$, namely,
\begin{eqnarray*}
\mathcal{T}^m_{h,I} = \left\{K\in\mathcal{T}_h:\; K\subset \Omega^m_h\right\}.
\end{eqnarray*}
The mesh $\mathcal{T}_h$ induces a cover of $\tilde\Omega^m_h$ and a cover of $\Gamma^m_h$, which are defined as follows
\begin{eqnarray*}
\mathcal{T}^m_h := \big\{K\in\mathcal{T}_h:\;
\mathrm{area}(K\cap\tilde\Omega^m_h) >0\big\}, \qquad
\mathcal{T}^m_{h,B} := \mathcal{T}^m_h\backslash \mathcal{T}^m_{h,I} .
\end{eqnarray*}
The cover $\mathcal{T}^m_h$ generates a fictitious domain which is denoted by
\begin{eqnarray*}
D^m:= \mathrm{interior}
\big(\cup_{K\in\mathcal{T}^m_h}K\big).
\end{eqnarray*}
Clearly we have $\Omega^m_h\subset\tilde\Omega^m_h\subset D^m$.
Let $\mathcal{E}_h$ denote the set of all edges in $\mathcal{T}_h$. The set of boundary-zone edges is denoted by (see Fig.~\ref{fig:Tnh})
\begin{eqnarray*}
\mathcal{E}_{h,B}^{m}= \left\{E\in\mathcal{E}_h: \; E \not\subset\partial D^m
\;\; \hbox{and}\;\; \exists K\in \mathcal{T}^m_{h,B}\;\;
\hbox{s.t.}\;\; E\subset\partial K\right\}.
\end{eqnarray*}
\begin{figure}[htb!]
\centering
\begin{tikzpicture}[scale =2]
\filldraw[red!70!white](0.2*3,0.2*2)--(0.2*3,0.2*3)--(0.2*2,0.2*3)--(0.2*2,0.2*7)--(0.2*3,0.2*7)--(0.2*3,0.2*8)--(0.2*4,0.2*8)--(0.2*7,0.2*8)--(0.2*7,0.2*7)--(0.2*8,0.2*7)--(0.2*8,0.2*3)--(0.2*7,0.2*3)--(0.2*7,0.2*2)--(0.2*3,0.2*2);
\filldraw[yellow!70!white](0.2*3,0.2*6)--(0.2*4,0.2*6)--(0.2*4,0.2*7)--(0.2*5,0.2*7)--(0.2*6,0.2*7)--(0.2*6,0.2*6)--(0.2*7,0.2*6)--(0.2*7,0.2*5)--(0.2*7,0.2*4)--(0.2*6,0.2*4)--(0.2*6,0.2*3)--(0.2*5,0.2*3)--(0.2*4,0.2*3)--(0.2*4,0.2*4)--(0.2*3,0.2*4)--(0.2*3,0.2*5)--(0.2*3,0.2*6);
\filldraw[red!70!white](0.2*1,0.2*4)--(0.2*2,0.2*4)--(0.2*2,0.2*6)--(0.2*1,0.2*6)--(0.2*1,0.2*4);
\filldraw[red!70!white](0.2*2,0.2*7)--(0.2*3,0.2*7)--(0.2*3,0.2*8)--(0.2*2,0.2*8)--(0.2*2,0.2*7);
\filldraw[red!70!white](0.2*2,0.2*2)--(0.2*3,0.2*2)--(0.2*3,0.2*3)--(0.2*2,0.2*3)--(0.2*2,0.2*2);
\filldraw[red!70!white](0.2*7,0.2*2)--(0.2*8,0.2*2)--(0.2*8,0.2*3)--(0.2*7,0.2*3)--(0.2*7,0.2*2);
\filldraw[red!70!white](0.2*8,0.2*4)--(0.2*9,0.2*4)--(0.2*9,0.2*6)--(0.2*8,0.2*6)--(0.2*8,0.2*4);
\filldraw[red!70!white](0.2*7,0.2*7)--(0.2*8,0.2*7)--(0.2*8,0.2*8)--(0.2*7,0.2*8)--(0.2*7,0.2*7);
\draw[black, thick] (1,1) ellipse [x radius=0.56cm, y radius=0.5cm];
\draw[yellow,thick] (1,1) ellipse [x radius=0.62cm, y radius = 0.56cm];
\filldraw[step =0.2cm,gray,thin] (0,0) grid (2cm,2cm);
\draw [blue, ultra thick] (0.2*3,0.2*3)--(0.2*3,0.2*7)--(0.2*7,0.2*7)--(0.2*7,0.2*3)--(0.2*3, 0.2*3);
\draw [blue,ultra thick] (0.2*2,0.2*4)--(0.2*3,0.2*4);
\draw [blue,ultra thick] (0.2*2,0.2*5)--(0.2*3,0.2*5);
\draw [blue,ultra thick] (0.2*2,0.2*6)--(0.2*3,0.2*6);
\draw [blue,ultra thick] (0.2*2,0.2*4)--(0.2*3,0.2*4);
\draw [blue,ultra thick] (0.2*4,0.2*2)--(0.2*4,0.2*4)--(0.2*3, 0.2*4);
\draw [blue,ultra thick] (0.2*4,0.2*8)--(0.2*4,0.2*6)--(0.2*3, 0.2*6);
\draw [blue,ultra thick] (0.2*6,0.2*2)--(0.2*6,0.2*4)--(0.2*8, 0.2*4);
\draw [blue,ultra thick] (0.2*6,0.2*8)--(0.2*6,0.2*6)--(0.2*8, 0.2*6);
\draw [blue,ultra thick] (0.2*5,0.2*2)--(0.2*5,0.2*3);
\draw [blue,ultra thick] (0.2*7,0.2*5)--(0.2*8,0.2*5);
\draw [blue,ultra thick] (0.2*5,0.2*7)--(0.2*5,0.2*8);
\draw [blue,ultra thick] (0.2*1,0.2*5)--(0.2*2,0.2*5);
\draw [blue,ultra thick] (0.2*2,0.2*4)--(0.2*2,0.2*6);
\draw [blue,ultra thick] (0.2*8,0.2*5)--(0.2*9,0.2*5);
\draw [blue,ultra thick] (0.2*8,0.2*4)--(0.2*8,0.2*6);
\draw [blue,ultra thick] (0.2*2,0.2*7)--(0.2*3,0.2*7)--(0.2*3,0.2*8);
\draw [blue,ultra thick] (0.2*7,0.2*8)--(0.2*7,0.2*7)--(0.2*8,0.2*7);
\draw [blue,ultra thick] (0.2*2,0.2*3)--(0.2*3,0.2*3)--(0.2*3,0.2*2);
\draw [blue,ultra thick] (0.2*8,0.2*3)--(0.2*7,0.2*3)--(0.2*7,0.2*2);
\end{tikzpicture}
\caption{$\mathcal{T}^m_h:$ the squares colored in red and yellow;\; $\mathcal{T}^m_{h,B}:$ the squares colored in red;\; $\mathcal{E}^m_{h,B}:$ the edges colored in blue;\; $D^m:$ the open domain colored in red and yellow; $\Gamma^m_h$: the black circle; $\partial\tilde\Omega^m_h$: the yellow circle.}
\label{fig:Tnh}
\end{figure}
Now we define the finite element spaces as follows
\begin{align*}
V(k,\mathcal{T}_h) := \big\{v\in\Hone[D]:
v|_K\in Q_k(K),\;\forall\,K\in \mathcal{T}_h\big\}, \quad
V(k,\mathcal{T}^m_h) := \big\{v|_{D^m}: v\in V(k,\mathcal{T}_h) \big\} ,
\end{align*}
where $Q_k$ is the space of polynomials whose degrees are no more than $k$ for each variable.
The space of piecewise regular functions over the mesh $\mathcal{T}^m_h$ is defined by
\begin{eqnarray*}
H^j(\mathcal{T}^m_h) := \big\{v\in \Ltwo[D^m]:\;
v|_K\in H^j(K),\;\forall\, K\in\mathcal{T}^m_h\big\},\qquad j\ge 1.
\end{eqnarray*}
It is clear that $V(k,\mathcal{T}_h^m)\subset H^j(\mathcal{T}^m_h)$.
We will use the notations ${\boldsymbol{V}}(k,\mathcal{T}_h)=\big(V(k,\mathcal{T}_h)\big)^2$ and
${\boldsymbol{V}}(k,\mathcal{T}^m_h)=\big(V(k,\mathcal{T}^m_h)\big)^2$ in the rest of the paper.
\subsection{Forward flow map ${\boldsymbol{X}}_h^{n-1,n}$}
\label{sec:Xni}
Using the one-step maps ${\boldsymbol{X}}^{m,m-1}_h$ with $0< m<n$, we can define the multi-step backward flow maps ${{\boldsymbol{X}}}_h^{n-1,n-i}$: $\bar\Omega^{n-1}_h\to \bar\Omega^{n-i}_h$ by setting ${\boldsymbol{X}}_h^{n-1,n-1}:=\mathrm{id}$ and
\begin{equation}\label{Xmstep}
{{\boldsymbol{X}}}_h^{n-1,n-i} :={{\boldsymbol{X}}}_h^{n-i+1,n-i}\circ{{\boldsymbol{X}}}_h^{n-i+2,n-i+1}\circ \cdots\circ {{\boldsymbol{X}}}_h^{n-1,n-2},\qquad 2\le i\le k.
\end{equation}
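The composition in \eqref{Xmstep} is purely mechanical and can be sketched in a few lines. The one-step maps below are toy one-dimensional shifts standing in for the discrete maps ${\boldsymbol{X}}_h^{m,m-1}$, so only the ordering (the rightmost map acts first) is being illustrated.

```python
from functools import reduce

def compose(f, g):
    # (f o g)(x): apply g first, then f, matching the convention of the
    # composition formula, where the rightmost one-step map acts first.
    return lambda x: f(g(x))

def multi_step_backward(one_step, n, i):
    """Build X_h^{n-1,n-i} from one-step maps; one_step[m] stands for
    X_h^{m,m-1}. For i=1 the empty composition is the identity."""
    maps = [one_step[m] for m in range(n - 1, n - i, -1)]  # X^{n-1,n-2},...,X^{n-i+1,n-i}
    return reduce(compose, reversed(maps), lambda x: x)

# Toy one-step maps: X^{m,m-1}(x) = x - m (illustrative stand-ins only).
one_step = {m: (lambda m: lambda x: x - m)(m) for m in range(1, 5)}
X_41 = multi_step_backward(one_step, 5, 4)  # = X^{2,1} o X^{3,2} o X^{4,3}
```

Here `X_41(x)` subtracts $4+3+2=9$, i.e. the three one-step shifts are applied from the most recent time level backward, as in \eqref{Xmstep}.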
Similar to \eqref{eq:DnX}, we first define the forward flow map at $t_n$
\begin{equation}\label{hXnh}
{\boldsymbol{X}}^{n-1,n}_h({\boldsymbol{x}})
:=\frac{1}{a_0^k}\sum_{i=1}^k
\big[\tau b_i^k{\boldsymbol{u}}_h^{n-i}\circ{\boldsymbol{X}}^{n-1,n-i}_h({\boldsymbol{x}})-a_i^k{\boldsymbol{X}}^{n-1,n-i}_h({\boldsymbol{x}})\big],
\quad \forall\,{\boldsymbol{x}}\in\bar\Omega^{n-1}_h.
\end{equation}
Since ${\boldsymbol{X}}^{n-1,n}_h$ represents the forward evolution of computational domain from $t_{n-1}$ to $t_n$,
we call it the {\it forward flow map}.
\subsection{The computational domain $\Omega^n_h$}
\label{sec:domain}
Next we present the surface-tracking algorithm which generates the approximate boundary $\Gamma^n_h$, or equivalently, the computational domain $\Omega^n_h$.
In \cite{zha18}, Zhang and Fogelson proposed a surface-tracking algorithm which uses cubic spline interpolation and an explicit expression of the driving velocity.
Here we present a modified algorithm which uses the numerical solution as the driving velocity.
Let $\mathcal{P}^0=\left\{{\boldsymbol{p}}^0_j: 0\le j\le J^0\right\}$ be the set of control points on the initial boundary $\Gamma^0_h:=\Gamma_0$. Suppose that the arc length of $\Gamma^0_h$ between ${\boldsymbol{p}}^0_0$ and ${\boldsymbol{p}}^0_{j}$ equals $L^0_j=j\eta$ for $1\le j\le J^0$, where $\eta:= L^0/J^0$ and $L^0$ is the arc length of $\Gamma^0_h$. For all $0\le m<n$, suppose that we are given the set of control points $\mathcal{P}^m=\{{\boldsymbol{p}}^m_j: 0\le j\le J^m\}\subset\Gamma^m_h$ and the parametric representation $\boldsymbol{\chi}_m$ of $\Gamma^m_h$, which satisfies
\begin{equation*}
\boldsymbol{\chi}_m(L^m_j) = {\boldsymbol{p}}^m_j, \qquad
L^m_j = \sum_{i=0}^{j-1}\SN{{\boldsymbol{p}}^m_{i+1}-{\boldsymbol{p}}^m_{i}},\quad 0\le j\le J^m.
\end{equation*}
\begin{algorithm}\label{alg:mars}
{\sf
Given $n\ge 1$ and a constant $\delta\in (0,0.5]$, the surface-tracking algorithm for constructing $\Gamma^n_h$ consists of three steps.
\begin{enumerate}[leftmargin=6mm]
\item Trace forward each control point in $\mathcal{P}^{n-1}$ to obtain the new set of control points $\mathcal{P}^n= \{{\boldsymbol{p}}^{n}_j: j=0,\cdots,J^n\}$, where ${\boldsymbol{p}}^{n}_j={\boldsymbol{X}}^{n-1,n}_h({\boldsymbol{p}}^{n-1}_j)$ and $J^n = J^{n-1}$.
\item Adjust $\mathcal{P}^n$. For each $0\le j< J^n$, let $M_j$ be the smallest integer no less than $|{\boldsymbol{p}}_{j+1}^n-{\boldsymbol{p}}^n_{j}|/\eta$.
\begin{itemize}[leftmargin=4mm]
\item If $M_j>1$, define $\Delta l_j := (L^{n-1}_{j+1}-L^{n-1}_{j})/M_j$ and update $\mathcal{P}^n$, $J^n$ as follows
\begin{equation}\label{alg-step2}
\mathcal{P}^n \leftarrow \mathcal{P}^n \cup\big\{
{\boldsymbol{p}}^{n}_{j,m}: 1\le m < M_j \big\}, \qquad
J^n \leftarrow J^n + M_j-1 ,
\end{equation}
where ${\boldsymbol{p}}^{n}_{j,m} = {\boldsymbol{X}}_h^{n-1,n}({\boldsymbol{p}}^{n-1}_{j,m})$ and ${\boldsymbol{p}}^{n-1}_{j,m} = \boldsymbol{\chi}_{n-1}(L^{n-1}_{j}+m\Delta l_j)$.
\item Otherwise, remove as many control points from $\mathcal{P}^n$ as possible such that $\delta\eta < \SN{{\boldsymbol{p}}^n_{j+1}-{\boldsymbol{p}}^n_j}\le \eta$ holds for all $j$.
\end{itemize}
\item Based on the point set $\mathcal{P}^n$ and the nodal set
$\mathcal{L}^n= \big\{\sum_{j=0}^{i-1}\big|{\boldsymbol{p}}^n_{j+1}-{\boldsymbol{p}}^n_{j}\big|:0\le i\le J^n\big\}$, we construct the cubic spline function
$\boldsymbol{\chi}_{n}$ and define
\begin{equation*}
\Gamma^{n}_h:=\left\{\boldsymbol{\chi}_{n}(l): l\in [0,L^{n}]\right\}, \qquad
L^n:= \sum_{j=0}^{J^n-1}\big|{\boldsymbol{p}}^n_{j+1}-{\boldsymbol{p}}^n_{j}\big|.
\end{equation*}
\end{enumerate}}
\end{algorithm}
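The bookkeeping of Step~2 of Algorithm~\ref{alg:mars} can be illustrated in a few lines. For simplicity, the sketch below subdivides long chords linearly instead of tracing spline pre-images forward with ${\boldsymbol{X}}^{n-1,n}_h$, so it reflects only the insertion logic (and the resulting gap bound), not the accuracy of the actual algorithm.

```python
import math

def adjust(points, eta):
    """Simplified sketch of the insertion branch of Step 2: whenever two
    consecutive traced control points are farther apart than eta, insert
    M_j - 1 new points so that every gap is at most eta. The paper inserts
    images of spline pre-images under the forward flow map; here chords
    are subdivided linearly as an illustration only."""
    out = []
    for j in range(len(points) - 1):
        p, q = points[j], points[j + 1]
        gap = math.dist(p, q)
        M = max(1, math.ceil(gap / eta))     # smallest integer >= gap/eta
        out.append(p)
        for m in range(1, M):                # the M-1 inserted points
            s = m / M
            out.append((p[0] + s*(q[0]-p[0]), p[1] + s*(q[1]-p[1])))
    out.append(points[-1])
    return out

# A single segment of length 1 with eta = 0.3 gives M = 4, i.e. 3 inserts.
pts = adjust([(0.0, 0.0), (1.0, 0.0)], 0.3)
```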
\begin{remark}
Step~2 of Algorithm~\ref{alg:mars} not only adapts the interface-tracking algorithm to severely deforming domains, but also enhances the stability of the cubic spline interpolation. For low-speed flow and short-time simulations, it is also reasonable to remove Step~2 from Algorithm~\ref{alg:mars} and set $\mathcal{L}^n\equiv\mathcal{L}^0$.
\end{remark}
\subsection{Backward flow map ${\boldsymbol{X}}^{n,n-1}_h$}
\label{sec:bfm}
Note that $\Omega^n_h$ is constructed with cubic spline interpolation over the point set $\mathcal{P}^n$. Generally we have $\Omega^n_h\ne {\boldsymbol{X}}^{n-1,n}(\bar\Omega^{n-1}_h)$. Moreover, the computation of $\big({\boldsymbol{X}}^{n-1,n}_h\big)^{-1}$ is very time-consuming in practical computations. The backward flow map ${\boldsymbol{X}}^{n,n-1}_h$ is an approximation of $({\boldsymbol{X}}^{n-1,n}_h)^{-1}$ and will be defined in two steps.
{\bf Step~1. Define an approximation $\tilde{\boldsymbol{X}}^{n-1,n}_h$ of ${\boldsymbol{X}}^{n-1,n}_h$.} We shall define
$\tilde{\boldsymbol{X}}^{n-1,n}_h$: $\bar\Omega^{n-1}_h\to\mathbb{R}^2$ piecewise on each element $K\in\mathcal{T}^{n-1}_{h}$.
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.25\textwidth]{interiorelement.png}
\caption{Uniform nodal points on an interior element $K\in\mathcal{T}^{n-1}_{h,I}$.}
\label{fig:XnK}
\end{center}
\end{figure}
First we consider each interior element $K\in\mathcal{T}^{n-1}_{h,I}$. Let ${\boldsymbol{A}}^K_{ij}$, $0\le i,j\le k$, be the nodal points taken uniformly on $K$ (see Fig.~\ref{fig:XnK}).
Let $P_k([0,1])$ be the space of polynomials on $[0,1]$ with degrees $\le k$ and let $\{b_0,\cdots,b_k\}$ be the basis of $P_k([0,1])$ satisfying
\begin{eqnarray*}
b_i(j/k)=\delta_{i,j},\qquad 0\le j\le k,
\end{eqnarray*}
where $\delta_{i,j}$ stands for the Kronecker delta function.
An isoparametric transform from the reference element $\hat{K}=[0,1]^2$ to $K$ is defined as
\begin{equation}\label{FK}
F_K(\boldsymbol{\xi}) := \sum_{i,j=0}^k{\boldsymbol{A}}^K_{ij}b_i(\xi_1)b_j(\xi_2),\qquad
\forall\,\boldsymbol{\xi}=(\xi_1,\xi_2)\in\hat{K}.
\end{equation}
We use ${\boldsymbol{X}}^{n-1,n}_h$ to trace forward each ${\boldsymbol{A}}^K_{ij}$ from $t_{n-1}$ to $t_n$ and get the nodal points
\begin{eqnarray*}
{\boldsymbol{X}}^{n-1,n}_h\big({\boldsymbol{A}}^K_{ij}\big),\qquad 0\le i,j\le k.
\end{eqnarray*}
They define another isoparametric transform
\begin{equation}\label{GK}
G_K(\boldsymbol{\xi}) := \sum_{i,j=0}^k{\boldsymbol{X}}^{n-1,n}_h\big({\boldsymbol{A}}^K_{ij}\big) b_i(\xi_1)b_j(\xi_2)
\qquad \forall\,\boldsymbol{\xi}=(\xi_1,\xi_2)\in\hat{K}.
\end{equation}
We get a homeomorphism from $K$ to $K^n:=\big\{G_K(\boldsymbol{\xi}): \boldsymbol{\xi}\in\hat{K}\big\}$
\begin{equation}\label{X-iK}
\tilde{\boldsymbol{X}}^{n-1,n}_K :=G_K\circ F_K^{-1} .
\end{equation}
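A minimal sketch of the isoparametric transform \eqref{FK}: the one-dimensional Lagrange basis $b_i$ is built from the nodes $j/k$, and $F_K$ is the tensor-product combination weighted by the nodal points ${\boldsymbol{A}}^K_{ij}$. For a straight-sided element with uniformly placed nodes the $Q_k$ map reduces to the affine map, which the example checks; all names are illustrative.

```python
from itertools import product

def lagrange_basis(k, i, x):
    """b_i in P_k([0,1]) with b_i(j/k) = delta_ij, as used in the transform."""
    val = 1.0
    for j in range(k + 1):
        if j != i:
            val *= (x - j / k) / (i / k - j / k)
    return val

def F_K(nodes, k, xi):
    """Isoparametric transform: nodes[(i, j)] plays the role of A^K_ij."""
    x = y = 0.0
    for i, j in product(range(k + 1), repeat=2):
        w = lagrange_basis(k, i, xi[0]) * lagrange_basis(k, j, xi[1])
        x += nodes[(i, j)][0] * w
        y += nodes[(i, j)][1] * w
    return (x, y)

# Straight-sided unit square shifted to (0.5, 0.25): the Q_3 map should
# reproduce the affine map xi -> (0.5 + xi_1, 0.25 + xi_2) exactly.
k = 3
nodes = {(i, j): (0.5 + i / k, 0.25 + j / k)
         for i in range(k + 1) for j in range(k + 1)}
pt = F_K(nodes, k, (1/3, 2/3))
```

On curved boundary cells the nodal points are no longer affine images of the reference nodes, and the same formula then produces a genuinely curved (high-order) map, as in \eqref{GK}.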
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.25\textwidth]{BENCpts.png}
\caption{The partition of $K\cap\Omega^{n-1}_h$ into curved polygons $K_0$, $K_1$, and $K_2$, where red dots stand for control points in $\mathcal{P}^{n-1}$. Quasi-uniform nodal points are shown on $K_0$ and $K_2$, respectively.}
\label{fig:XnKm}
\end{center}
\end{figure}
Next we consider each $K\in\mathcal{T}^{n-1}_{h,B}$. Note that $\Gamma^{n-1}_h\cap K$ is represented by the piecewise cubic function $\boldsymbol{\chi}_{n-1}$. Let $M$ be the number of control points in the interior of $K$. We subdivide $K\cap\bar\Omega^{n-1}_h$ into $M+1$ curved polygons
\begin{equation*}
K\cap\bar\Omega^{n-1}_h=K_0\cup K_1\cup\cdots\cup K_M,\qquad
\mathring{K}_l\cap\mathring{K}_m=\emptyset\quad
\hbox{for}\;\; l\ne m.
\end{equation*}
Each $K_m$ is either a curved triangle or a curved quadrilateral with only one curved edge $\partial K_m\cap\Gamma^{n-1}_h$ (see Fig.~\ref{fig:XnKm}). We define isoparametric transforms for the two cases of $K_m$ respectively.
\vspace{1mm}
\begin{itemize}[leftmargin=5mm]
\item If $K_m$ is a curved quadrilateral (see $K_2$ in Fig.~\ref{fig:XnKm}),
we take nodal points
${\boldsymbol{A}}^{K_m}_{ij}, 0\le i,j\le k$, quasi-uniformly on $K_m$. Similar to \eqref{FK}--\eqref{GK}, we define two isoparametric transforms as
\begin{align}\label{FmGm-K}
F_{K_m}(\boldsymbol{\xi}) := \sum_{i,j=0}^k
{\boldsymbol{A}}^{K_m}_{ij}b_i(\xi_1)b_j(\xi_2),\quad
G_{K_m}(\boldsymbol{\xi}) := \sum_{i,j=0}^k{\boldsymbol{X}}^{n-1,n}_h\big({\boldsymbol{A}}^{K_m}_{ij}\big) b_i(\xi_1)b_j(\xi_2),\quad
\forall\,\boldsymbol{\xi}\in\hat{K}.
\end{align}
\item If $K_m$ is a curved triangle (see $K_0$ in Fig.~\ref{fig:XnKm}),
we take $(k+1)(k+2)/2$ nodal points ${\boldsymbol{A}}^{K_m}_{ij},0\le i+j\le k$, quasi-uniformly on $K_m$. Let
$\hat{T}$ be the reference triangle with vertices $(0,0)$, $(1,0)$, and $(0,1)$.
The two isoparametric transforms are defined as
\begin{align}\label{FmGm-T}
F_{K_m}(\boldsymbol{\xi}) := \sum_{i+j=0}^k
{\boldsymbol{A}}^{K_m}_{ij}b_{ij}(\boldsymbol{\xi}),\quad
G_{K_m}(\boldsymbol{\xi}) := \sum_{i+j=0}^k{\boldsymbol{X}}^{n-1,n}_h
\big({\boldsymbol{A}}^{K_m}_{ij}\big) b_{ij}(\boldsymbol{\xi}),\quad
\forall\,\boldsymbol{\xi}\in\hat{T},
\end{align}
where $b_{ij}\in P_k(\hat{T})$ satisfies $b_{ij}(l/k,m/k)=\delta_{i,l}\delta_{j,m}$ for all integers $l,m$ satisfying $0\le l+m\le k$.
\end{itemize}
\vspace{1mm}
In both cases, $F_{K_m}$ and $G_{K_m}$ define a homeomorphism from $K_m$ to $K^n_m :=\big\{G_{K_m}(\boldsymbol{\xi}):\boldsymbol{\xi}\in\hat{K}\;(\hbox{resp. }\hat{T})\big\}$
\begin{equation}\label{Kmn}
\tilde{\boldsymbol{X}}^{n-1,n}_{K_m} :=G_{K_m}\circ F_{K_m}^{-1}.
\end{equation}
Therefore, we obtain a homeomorphism $\tilde{\boldsymbol{X}}^{n-1,n}_K$: $K\cap\bar\Omega^{n-1}_h \to K^n:=\cup_{m=0}^M K^n_m$
\begin{equation}\label{X-bK}
\tilde{\boldsymbol{X}}^{n-1,n}_K\big|_{K_m} = \tilde{\boldsymbol{X}}^{n-1,n}_{K_m} .
\end{equation}
Define $\Omega^n_{{\boldsymbol{X}}}:=\cup_{K\in\mathcal{T}^{n-1}_h}K^n$. Combining \eqref{X-iK} and \eqref{X-bK}, we obtain a homeomorphism $\tilde{\boldsymbol{X}}^{n-1,n}_h$: $\bar\Omega^{n-1}_h\to \bar\Omega^n_{{\boldsymbol{X}}}$ which is defined piecewise as follows
\begin{equation}\label{tXn}
\tilde{\boldsymbol{X}}^{n-1,n}_h = \tilde{\boldsymbol{X}}^{n-1,n}_K \quad \hbox{on}\;\; K\cap\bar\Omega^{n-1}_h.
\end{equation}
{\bf Step~2. Define the backward flow map ${\boldsymbol{X}}^{n,n-1}_h$.} First we let
$\tilde{\boldsymbol{X}}^{n,n-1}_h :=\big(\tilde{\boldsymbol{X}}^{n-1,n}_h\big)^{-1}$, which is a homeomorphism from $\bar\Omega^n_{{\boldsymbol{X}}}$ to $\bar\Omega^{n-1}_h$. Note that $\tilde{\boldsymbol{X}}^{n,n-1}_h$ is undefined on $\Omega^n_h\backslash\Omega^n_{{\boldsymbol{X}}}$.
Next we extend it to $\bar\Omega^n_h\cup\bar\Omega^n_{{\boldsymbol{X}}}$ and denote the extension by the same notation.
For any ${\boldsymbol{x}}\in\partial\Omega^n_{{\boldsymbol{X}}}$, let ${\boldsymbol{n}}({\boldsymbol{x}})$ be the unit outer normal at ${\boldsymbol{x}}\in\partial \Omega^n_{{\boldsymbol{X}}}$ (see Fig.~\ref{fig:ext-Xnh}). The extension is defined by constant along ${\boldsymbol{n}}({\boldsymbol{x}})$, namely,
\begin{equation}\label{nx}
\tilde{\boldsymbol{X}}^{n,n-1}_h\big({\boldsymbol{x}}+t{\boldsymbol{n}}({\boldsymbol{x}})\big) \equiv \tilde{\boldsymbol{X}}^{n,n-1}_h({\boldsymbol{x}})
\qquad \hbox{for all}\;\; t>0 \;\;\hbox{satisfying}\; \;
{\boldsymbol{x}}+t{\boldsymbol{n}}({\boldsymbol{x}})\in \bar{\Omega}^n_h.
\end{equation}
Finally, we define the backward flow map as
\begin{equation}\label{bfm}
{\boldsymbol{X}}^{n,n-1}_h = \tilde{\boldsymbol{X}}^{n,n-1}_h\big|_{\bar\Omega^n_h}.
\end{equation}
Clearly ${\boldsymbol{X}}^{n,n-1}_h$: $\bar{\Omega}^n_h\to \bar{\Omega}^{n-1}_h$ is neither surjective nor injective.
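The constant extension \eqref{nx} can be pictured on a toy geometry: take $\Omega^n_{{\boldsymbol{X}}}$ to be a disk, so that the outward normal ray through an exterior point meets the boundary at its radial projection. The sketch below only illustrates the definition; the actual domains in the method are the curved regions constructed above.

```python
import math

def extend_constant_along_normal(f, radius):
    """Sketch of the constant normal extension: for a map f defined on the
    closed disk of the given radius (a stand-in for Omega^n_X), exterior
    points inherit the value of f at their radial projection onto the
    boundary, i.e. f is constant along outward normal rays."""
    def g(x):
        r = math.hypot(x[0], x[1])
        if r <= radius:
            return f(x)                      # inside: use f itself
        proj = (x[0]*radius/r, x[1]*radius/r)  # foot point on the boundary
        return f(proj)
    return g

# Toy map f(x) = x/2 on the unit disk, extended outward.
g = extend_constant_along_normal(lambda x: (0.5*x[0], 0.5*x[1]), 1.0)
```

As the definition suggests, the extension is constant along each normal ray, so the extended map is not injective outside the disk.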
\begin{figure}[htb!]
\begin{center}
\includegraphics[width=0.25\textwidth]{extXnh.png}
\caption{Constant extension of $\tilde{{\boldsymbol{X}}}_h^{n,n-1}$ to $\bar{\Omega}^n_h\backslash \bar\Omega^{n}_{{\boldsymbol{X}}}$ along the normal direction ${\boldsymbol{n}}={\boldsymbol{n}}({\boldsymbol{x}})$.}
\label{fig:ext-Xnh}
\end{center}
\end{figure}
\begin{remark}
By \eqref{FK}--\eqref{Kmn}, the computation of $\tilde{\boldsymbol{X}}^{n,n-1}_h = \big(\tilde{\boldsymbol{X}}^{n-1,n}_h\big)^{-1}$ requires two operations:
\begin{itemize}[leftmargin=5mm]
\item tracing the nodal points ${\boldsymbol{A}}^K_{ij}$, ${\boldsymbol{A}}^{K_m}_{ij}$ one step forward $($see \eqref{GK} and \eqref{FmGm-K}--\eqref{FmGm-T}$)$,
\item computing the composite isoparametric transforms $F_K\circ G_K^{-1}$ and $F_{K_m}\circ G_{K_m}^{-1}$ $($see \eqref{X-iK} and \eqref{Kmn}$)$.
\end{itemize}
In view of \eqref{bfm}, the computation of ${\boldsymbol{X}}^{n,n-1}_h$ is much cheaper than that of the exact inverse map $\big({\boldsymbol{X}}^{n-1,n}_h\big)^{-1}$. Nevertheless, optimal convergence orders are observed in the numerical experiments which use ${\boldsymbol{X}}^{n,n-1}_h$.
\end{remark}
\subsection{Finite element scheme for computing ${\boldsymbol{u}}^n_h$}
Recall that we have obtained the computational domain $\Omega^n_h$ in subsection~\ref{sec:domain}. Let $\mathcal{T}^n_h$ be the sub-mesh covering $\Omega^n_h$ and let $\mathcal{E}_{h,B}^{n}$ be the set of boundary-zone edges.
For any edge $E\in\mathcal{E}_{h,B}^{n}$, suppose $E=K_1\cap K_2$ with $K_1,K_2\in\mathcal{T}_h^n$. Let ${\boldsymbol{n}}_{K_1}$ be the unit normal of $E$, pointing to the exterior of $K_1$, and let ${\boldsymbol{n}}_{K_2}=-{\boldsymbol{n}}_{K_1}$. The normal jump of a scalar function $v$ across $E$ is defined by
\begin{eqnarray*}
\jump{v}({\boldsymbol{x}}) =\lim_{\varepsilon\to 0+}
\left[v({\boldsymbol{x}}-\varepsilon{\boldsymbol{n}}_{K_1}) {\boldsymbol{n}}_{K_1}
+v({\boldsymbol{x}}-\varepsilon{\boldsymbol{n}}_{K_2}){\boldsymbol{n}}_{K_2}\right]\qquad \forall\,{\boldsymbol{x}}\in E.
\end{eqnarray*}
Clearly $\jump{{\boldsymbol{v}}}$ is a matrix function on $E$ if ${\boldsymbol{v}}$ is a vector function. For two vector functions ${\boldsymbol{v}}$ and ${\boldsymbol{w}}$, the Euclidean inner product of $\jump{{\boldsymbol{v}}}$ and $\jump{{\boldsymbol{w}}}$ is defined by
\begin{eqnarray*}
\jump{{\boldsymbol{v}}}:\jump{{\boldsymbol{w}}} =\jump{v_1}\cdot \jump{w_1} +
\jump{v_2}\cdot \jump{w_2}.
\end{eqnarray*}
Furthermore, we define two bilinear forms on ${\boldsymbol{H}}^{k+1}(\mathcal{T}^n_h) \cap\Honev[D^n]$ as follows
\begin{align}
\mathscr{A}^n_h({\boldsymbol{w}},{\boldsymbol{v}}):=\,&
\int_{\Omega^n_{h}} \nu \nabla {\boldsymbol{w}}\cdot\nabla {\boldsymbol{v}}
+ \mathscr{J}_{h}^{n}({\boldsymbol{w}},{\boldsymbol{v}}) ,\label{A-nh}\\
\mathscr{J}^n_h({\boldsymbol{w}},{\boldsymbol{v}}):=\,& \gamma
\sum_{E\in \mathcal{E}_{h,B}^{n}}
\sum_{l=1}^k \frac{h^{2l-1}}{[(l-1)!]^2}\int_E \nu
\jump{\partial_{{\boldsymbol{n}}}^l {\boldsymbol{w}}} :
\jump{\partial_{{\boldsymbol{n}}}^l {\boldsymbol{v}}}, \label{J-nh}
\end{align}
where $\gamma$ is a positive constant whose value will be specified in the section on numerical experiments.
Here $\mathscr{J}_{h}^{n}$, called ``ghost penalty stabilization'' in the literature (cf. \cite{bur10-1}), is used to enhance the stability of numerical solutions.
Define $\underline{{\boldsymbol{U}}^n_h}=\big[{\boldsymbol{U}}^{n-k,n}_h,\cdots,{\boldsymbol{U}}^{n,n}_h\big]^\top$, where ${\boldsymbol{U}}^{n,n}_h:={\boldsymbol{u}}^n_h$ and ${\boldsymbol{U}}_h^{n-i,n}:={\boldsymbol{u}}_h^{n-i}\circ{{\boldsymbol{X}}}_h^{n,n-i}$ for $1\le i\le k$. Similarly to \eqref{un-eqn}, we define the discrete BDF-$k$ difference operator as
\begin{equation}\label{def:Lambdak}
\frac{1}{\tau}\Lambda^k \underline{{\boldsymbol{U}}_h^n}
:= \frac{1}{\tau}\sum_{i=0}^k a_i^k {\boldsymbol{U}}_h^{n-i,n}.
\end{equation}
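For reference, the coefficients $a_i^k$ are the standard BDF-$k$ coefficients, characterized by the exactness requirement $\sum_{i=0}^k a_i^k\,p(t_{n-i})=\tau\,p'(t_n)$ for all polynomials $p$ of degree at most $k$ on a uniform time grid. A short Python sketch recovering them (assuming this standard characterization, consistent with \eqref{un-eqn}):

```python
import numpy as np

def bdf_coefficients(k):
    """Coefficients a_i^k with (1/tau) * sum_i a_i^k u(t_{n-i}) ~ u'(t_n).

    Exactness is enforced for the monomials x^m, m = 0..k, at the nodes
    t_{n-i} = -i (time measured in units of tau, with t_n = 0).
    """
    nodes = -np.arange(k + 1, dtype=float)
    V = np.vander(nodes, k + 1, increasing=True).T   # V[m, i] = (-i)**m
    rhs = np.zeros(k + 1)
    rhs[1] = 1.0                                     # (x^m)'(0) = 1 iff m == 1
    return np.linalg.solve(V, rhs)

print(bdf_coefficients(2))   # BDF-2 coefficients 3/2, -2, 1/2
print(bdf_coefficients(4))   # BDF-4 coefficients 25/12, -4, 3, -4/3, 1/4
```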
\begin{center}
\fbox{\parbox{0.975\textwidth}
{The finite element approximation to problem \eqref{un-pro} is to seek ${\boldsymbol{u}}^n_h\in {\boldsymbol{V}}(k,\mathcal{T}_h^{n})$ such that
\begin{align} \label{eq:fully-discrete}
\big(\Lambda^k \underline{{\boldsymbol{U}}^n_h}, {\boldsymbol{v}}_h\big)_{\Omega^n_{h}}
+\tau \mathscr{A}^n_h({\boldsymbol{u}}_h^n,{\boldsymbol{v}}_h)
=\tau ({\boldsymbol{f}}^n,{\boldsymbol{v}}_h)_{\Omega^n_{h}} \qquad
\forall\,{\boldsymbol{v}}_h\in {\boldsymbol{V}}(k,\mathcal{T}_h^{n}).
\end{align}}}
\end{center}
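Algebraically, each step of \eqref{eq:fully-discrete} is one linear solve with the symmetric positive definite matrix $a_0^k M+\tau A$, where $M$ and $A$ denote the matrices of $(\cdot,\cdot)_{\Omega^n_h}$ and $\mathscr{A}^n_h$, while the transported histories enter the right-hand side. A schematic Python sketch (the arrays are illustrative stand-ins for the assembled finite element objects):

```python
import numpy as np

def sbdf_step(M, A, a, tau, F, history_rhs):
    """Solve (a[0] M + tau A) u = tau F - sum_{i>=1} a[i] * history_rhs[i-1].

    history_rhs[i-1] stands for the assembled load (U^{n-i,n}, v_h)."""
    rhs = tau * F - sum(ai * b for ai, b in zip(a[1:], history_rhs))
    return np.linalg.solve(a[0] * M + tau * A, rhs)

# Toy check with 2x2 identity matrices and BDF-1 (backward Euler):
# (M + A) u = F + M u^{n-1}.
M = A = np.eye(2)
u = sbdf_step(M, A, a=[1.0, -1.0], tau=1.0,
              F=np.array([1.0, 1.0]), history_rhs=[np.array([1.0, 1.0])])
print(u)  # -> [1. 1.]
```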
\begin{remark}
Suppose that the approximation of $\Gamma^n_h$ to the exact boundary $\Gamma_{t_n}$ is high-order, say
\begin{eqnarray*}
\max_{{\boldsymbol{x}}\in\Gamma_{t_n}}\min_{{\boldsymbol{y}}\in\Gamma^n_h} \SN{{\boldsymbol{x}}-{\boldsymbol{y}}} = O(h^k), \qquad k\ge 2.
\end{eqnarray*}
From \eqref{domain-tm} we know that $\bar\Omega_{t_n}\subset\tilde\Omega^n_h$ if $h$ is small enough. This guarantees that the numerical solution ${\boldsymbol{u}}^n_h$ is well-defined on the exact domain $\bar\Omega_{t_n}$. That is why we choose an enlarged domain $\tilde\Omega^n_h\subset D^n$ to define the finite element space, instead of the tracked domain $\Omega^n_h$.
\end{remark}
To end this section, we prove the well-posedness of the discrete problem.
\begin{theorem}
Suppose that the pre-calculated solutions ${\boldsymbol{u}}^0_h,\cdots,{\boldsymbol{u}}^{k-1}_h$ are given. Then the discrete problem \eqref{eq:fully-discrete} has a unique solution
${\boldsymbol{u}}^n_h\in {\boldsymbol{V}}(k,\mathcal{T}^n_h)$ for each $k\le n\le N$.
\end{theorem}
\begin{proof}
From \eqref{A-nh} and \eqref{J-nh}, it is easy to see that the bilinear form $\mathscr{A}^n_h$ is symmetric and positive semidefinite on ${\boldsymbol{V}}(k,\mathcal{T}^n_h)$. It suffices to show that
$a_0^k\tau^{-1}({\boldsymbol{v}}_h,{\boldsymbol{v}}_h)_{\Omega^n_h}+\mathscr{A}^n_h({\boldsymbol{v}}_h,{\boldsymbol{v}}_h)=0$ implies ${\boldsymbol{v}}_h\equiv 0$.
It clearly yields
\begin{eqnarray*}
{\boldsymbol{v}}_h=0 \quad \hbox{in}\;\;\Omega^n_h,\qquad
\jump{\partial_{{\boldsymbol{n}}}^l {\boldsymbol{v}}_h}=0\quad \hbox{on any}\;E\in\mathcal{E}^n_{h,B}, \quad 0\le l\le k.
\end{eqnarray*}
Since ${\boldsymbol{v}}_h$ is a piecewise polynomial of degree at most $k$, these conditions propagate the vanishing of ${\boldsymbol{v}}_h$ from $\Omega^n_h$ to all elements of $\mathcal{T}^n_h$, and hence ${\boldsymbol{v}}_h\equiv 0$.
\end{proof}
\section{Numerical experiments}
In this section, we use three numerical examples to verify the convergence orders of the proposed finite element method. The second and third examples demonstrate the robustness of the method for severely deforming domains.
\subsection{An efficient algorithm for computing $\big({\boldsymbol{U}}_h^{n,n-i}, {\boldsymbol{v}}_h\big)_{\Omega_h^n}$}
\label{sec:Uni}
Recall from \eqref{eq:fully-discrete} that we need to calculate the following integrals accurately and efficiently:
\begin{eqnarray*}
\big({\boldsymbol{U}}_h^{n,n-i},{\boldsymbol{v}}_h\big)_{\Omega^n_h}
=\big({\boldsymbol{u}}_h^{n-i}\circ{\boldsymbol{X}}^{n,n-i}_h,{\boldsymbol{v}}_h\big)_{\Omega^n_h} \quad\hbox{for}\;\;
1\le i\le k\;\;\hbox{and}\;\;{\boldsymbol{v}}_h\in{\boldsymbol{V}}(k,\mathcal{T}^n_h).
\end{eqnarray*}
For $i\ge 2$, the computation of $\big({\boldsymbol{u}}_h^{n-i}\circ{\boldsymbol{X}}^{n,n-i}_h,{\boldsymbol{v}}_h\big)_{\Omega^n_h}$ involves the multi-step map ${\boldsymbol{X}}^{n,n-i}_h$ and is time-consuming.
To simplify the computation while keeping accuracy, we make the following replacement in calculating the integral:
\begin{eqnarray*}
\big({\boldsymbol{U}}_h^{n,n-i},{\boldsymbol{v}}_h\big)_{\Omega^n_h}
\approx \big(\hat{\boldsymbol{u}}_h^{n,n-i},{\boldsymbol{v}}_h\big)_{\Omega^n_h},
\end{eqnarray*}
where $\hat{\boldsymbol{u}}_h^{n,n-i}\in {\boldsymbol{V}}(k,\mathcal{T}_h^n)$ is defined in Algorithm~\ref{alg:int} via modified $L^2$-projections. The computation of $\hat{\boldsymbol{u}}_h^{n,n-i}$ only involves one-step maps and reduces the computational time significantly. \vspace{1mm}
\begin{algorithm}\label{alg:int}
The functions $\hat{\boldsymbol{u}}_h^{n,n-i}\in {\boldsymbol{V}}(k,\mathcal{T}_h^n)$, $1\le i\le k$, are calculated successively.
Suppose that $\hat{\boldsymbol{u}}_h^{n-1,n-i}\in {\boldsymbol{V}}(k,\mathcal{T}_h^{n-1})$, $2\le i\le k$, have been obtained.
Find $\hat{\boldsymbol{u}}_h^{n,n-i}\in {\boldsymbol{V}}(k,\mathcal{T}_h^n)$ such that
\begin{equation}\label{L2pro-1}
\mathscr{M}^n_h\big(\hat{\boldsymbol{u}}_h^{n,n-i},{\boldsymbol{v}}_h\big) = (\hat{{\boldsymbol{u}}}_h^{n-1,n-i}\circ {{\boldsymbol{X}}}_h^{n,n-1},{\boldsymbol{v}}_h)_{\Omega_h^n}\quad
\forall\,{\boldsymbol{v}}_h\in {\boldsymbol{V}}(k,\mathcal{T}_h^n), \quad 1\le i\le k,
\end{equation}
where $\hat{{\boldsymbol{u}}}_h^{n-1,n-1}={\boldsymbol{u}}^{n-1}_h$ and
\begin{align*}
\mathscr{M}^n_h({\boldsymbol{w}},{\boldsymbol{v}}):=\,& \int_{\Omega^n_{h}} {\boldsymbol{w}}\cdot{\boldsymbol{v}}
+\gamma \sum_{E\in \mathcal{E}_{h,B}^n}
\sum_{l=1}^k \frac{h^{2l+1}}{(l!)^2}\int_E
\jump{\partial_{{\boldsymbol{n}}}^l {\boldsymbol{w}}}:
\jump{\partial_{{\boldsymbol{n}}}^l {\boldsymbol{v}}}.
\end{align*}
\end{algorithm}
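The structure of Algorithm~\ref{alg:int} can be summarized as follows: at every time step, each stored history is updated by one pullback through the one-step map followed by one projection, and the oldest history is discarded. A schematic Python sketch (the callables \texttt{project} and \texttt{X\_back} stand for the stabilized $L^2$-projection and the one-step map ${\boldsymbol{X}}^{n,n-1}_h$; all names are illustrative):

```python
def advance_histories(project, X_back, prev_hats, u_prev):
    """One step of the history transport.

    prev_hats : the k histories produced at the previous step
                (the last one will be discarded)
    u_prev    : u^{n-1}, playing the role of hat u^{n-1,n-1}
    Returns [hat u^{n,n-i} for i = 1..k]; only ONE-step maps are used.
    """
    olds = [u_prev] + prev_hats[:-1]           # hat u^{n-1,n-i}, i = 1..k
    return [project(lambda x, f=f: f(X_back(x))) for f in olds]

# Toy check: identity projection, backward shift by 1 on the real line;
# None marks the discarded oldest history.
hats = advance_histories(lambda f: f, lambda x: x - 1.0,
                         prev_hats=[lambda x: x + 100.0, None],
                         u_prev=lambda x: 2.0 * x)
print(hats[0](5.0), hats[1](5.0))  # -> 8.0 104.0
```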
\subsection{The computation of $\big({\boldsymbol{w}}_h\circ{\boldsymbol{X}}_h^{n,n-1}, {\boldsymbol{v}}_h\big)_{\Omega_h^n}$
for any ${\boldsymbol{w}}_h\in {\boldsymbol{V}}(k,\mathcal{T}_h^{n-1})$ and ${\boldsymbol{v}}_h\in {\boldsymbol{V}}(k,\mathcal{T}_h^{n})$}
Using \eqref{bfm}, we approximate the integral as follows
\begin{eqnarray*}
\big({\boldsymbol{w}}_h\circ {{\boldsymbol{X}}}_h^{n,n-1},{\boldsymbol{v}}_h\big)_{\Omega^{n}_h}
=\big({\boldsymbol{w}}_h\circ \tilde{{\boldsymbol{X}}}_h^{n,n-1},{\boldsymbol{v}}_h\big)_{\Omega^{n}_h}
\approx \big({\boldsymbol{w}}_h\circ \tilde{{\boldsymbol{X}}}_h^{n,n-1},{\boldsymbol{v}}_h\big)_{\Omega^{n}_{\boldsymbol{X}}}
=\big({\boldsymbol{w}}_h\circ \big(\tilde{{\boldsymbol{X}}}_h^{n-1,n}\big)^{-1},{\boldsymbol{v}}_h\big)_{\Omega^{n}_{\boldsymbol{X}}},
\end{eqnarray*}
where $\Omega^{n}_{\boldsymbol{X}} =\tilde{\boldsymbol{X}}^{n-1,n}_h(\Omega^{n-1}_h)$. By \eqref{tXn},
$\tilde{\boldsymbol{X}}^{n-1,n}_h$ is a homeomorphism from $\bar\Omega^{n-1}_h$ to $\bar\Omega^{n}_{\boldsymbol{X}}$
and provides an approximation to the forward flow map ${\boldsymbol{X}}^{n-1,n}_h$. Moreover, $\tilde{\boldsymbol{X}}^{n-1,n}_h$ is defined piecewise on each $K\in\mathcal{T}^{n-1}_h$.
By \eqref{FK}--\eqref{Kmn}, the integral on the right-hand side can be calculated as follows
\begin{align}
\big({\boldsymbol{w}}_h\circ \big(\tilde{{\boldsymbol{X}}}_h^{n-1,n}\big)^{-1},{\boldsymbol{v}}_h\big)_{\Omega^{n}_{\boldsymbol{X}}}
=\,& \sum_{K\in\mathcal{T}^{n-1}_{h,I}}
\int_{\hat{K}}({\boldsymbol{w}}_h\circ F_K)\cdot ({\boldsymbol{v}}_h\circ G_K) \SN{\det(\mathrm{d}_{\boldsymbol{\xi}} G_K)} \notag\\
&+\sum_{K\in\mathcal{T}^{n-1}_{h,B}}
\sum_{m=0}^M \int_{\hat K_m}({\boldsymbol{w}}_h\circ F_{K_m})\cdot
({\boldsymbol{v}}_h\circ G_{K_m})\SN{\det(\mathrm{d}_{\boldsymbol{\xi}} G_{K_m})}, \label{cal-wv}
\end{align}
where $\hat K_m=\hat K$ if $K_m$ is a curved quadrilateral and $\hat K_m =\hat T$ if $K_m$ is a curved triangle (see subsection~\ref{sec:bfm}).
Here $F_K$ and $F_{K_m}$ are the isoparametric transforms from the reference elements to $K$ and $K_m$, respectively, $G_K$ and $G_{K_m}$ are the isoparametric transforms from the reference elements to $K^n$ and $K^n_m$, and $\mathrm{d}_{\boldsymbol{\xi}} G_K$, $\mathrm{d}_{\boldsymbol{\xi}}G_{K_m}$ are the Jacobian matrices of $G_K$ and $G_{K_m}$, respectively. Each integral on the reference element is computed with a Gaussian quadrature rule of order $k+1$.
Since $\hat{{\boldsymbol{u}}}_h^{n-1,n-i}\in {\boldsymbol{V}}(k,\mathcal{T}^{n-1}_h)$, the right-hand side of \eqref{L2pro-1} can be calculated with the formula \eqref{cal-wv}.
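A minimal Python sketch of one term of \eqref{cal-wv} on the reference square, using a tensorized $(k+1)$-point Gauss rule; for brevity the Jacobian $\mathrm{d}_{\boldsymbol{\xi}}G$ is approximated by central differences, whereas in an actual implementation it is available analytically (all names are illustrative):

```python
import numpy as np

def integrate_pullback(w, F, v, G, k, eps=1e-6):
    """Approximate the integral of (w o F) . (v o G) |det dG| on [0,1]^2."""
    q, wt = np.polynomial.legendre.leggauss(k + 1)
    q, wt = 0.5 * (q + 1.0), 0.5 * wt          # map [-1, 1] -> [0, 1]
    total = 0.0
    for xi, wx in zip(q, wt):
        for eta, wy in zip(q, wt):
            p = np.array([xi, eta])
            # central-difference Jacobian of G at the quadrature point
            J = np.column_stack([(G(p + eps * e) - G(p - eps * e)) / (2 * eps)
                                 for e in np.eye(2)])
            total += wx * wy * np.dot(w(F(p)), v(G(p))) * abs(np.linalg.det(J))
    return total

# Toy check: F = identity, G doubles coordinates (|det dG| = 4), w . v = 1,
# so the integral over the unit square equals 4.
val = integrate_pullback(w=lambda x: np.array([1.0, 0.0]), F=lambda p: p,
                         v=lambda x: np.array([1.0, 0.0]), G=lambda p: 2.0 * p,
                         k=3)
print(round(val, 6))  # -> 4.0
```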
\subsection{Numerical examples}
In this subsection, we report three examples for various scenarios of domain motion: rotation, vortex shear, and severe deformation. In order to test the convergence orders of the method,
we take ${\boldsymbol{u}}$ as smooth functions and set the right-hand side of \eqref{cd-model} by
\begin{eqnarray*}
{\boldsymbol{f}}= \partial_t {\boldsymbol{u}} + {\boldsymbol{u}}\cdot \nabla {\boldsymbol{u}} -\nu \Delta {\boldsymbol{u}}.
\end{eqnarray*}
The Neumann condition is set by
\begin{eqnarray*}
{\boldsymbol{g}}_N:=\nabla {\boldsymbol{u}}\cdot {\boldsymbol{n}}\quad \hbox{on}\;\;\Gamma_t.
\end{eqnarray*}
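For instance, for the rigid-rotation velocity of Example~\ref{ex1} below, the source term ${\boldsymbol{f}}$ can be generated symbolically; a short SymPy sketch:

```python
import sympy as sp

x, y, t, nu = sp.symbols('x y t nu')
# Velocity of Example 1: rigid rotation about (1/2, 1/2), divergence-free.
u = sp.Matrix([sp.Rational(1, 2) - y, x - sp.Rational(1, 2)])

grad_u = u.jacobian([x, y])                 # (grad u)_{ij} = d u_i / d x_j
# f = u_t + (u . grad) u - nu * Laplace(u)
f = sp.diff(u, t) + grad_u * u - nu * sp.Matrix(
    [sp.diff(ui, x, 2) + sp.diff(ui, y, 2) for ui in u])

print(sp.simplify(f.T))  # -> Matrix([[1/2 - x, 1/2 - y]])
```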
For each example, we require that the final shape of domain $\Omega_T$ coincides with the initial shape $\Omega_0$. This helps us to compute the surface-tracking error
\begin{equation}\label{eq:err of Omega}
e_\Omega = \sum_{K\in \mathcal{T}_h}\left|\mathrm{area}\big(\Omega_T\cap K\big) - \mathrm{area}\big(\Omega^N_h\cap K\big)\right|,
\end{equation}
which is also called the geometrical error \cite{Aul03,Aul04}. Here $t_N=T$ is the final time of evolution. Since the exact boundary $\Gamma_T$ and the approximate boundary $\Gamma^N_h$ are very close, calculating the exact area of domain difference directly
\begin{eqnarray*}
e_{0,\Omega} =\mathrm{area}(\Omega_T\backslash\Omega_h^N)
+\mathrm{area}(\Omega_h^N\backslash\Omega_T)
\end{eqnarray*}
may cause an ill-conditioned problem. In fact, it is easy to see that $e_\Omega\to e_{0,\Omega}$ as $h\to 0$.
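A schematic Python computation of $e_\Omega$, with each per-cell area in \eqref{eq:err of Omega} approximated on a midpoint grid; the indicator functions and axis-aligned cells are illustrative stand-ins for the exact domain, the tracked domain and the background mesh:

```python
import numpy as np

def geometric_error(chi_exact, chi_num, cells, m=8):
    """e_Omega = sum_K | area(Omega_T ∩ K) - area(Omega_h^N ∩ K) |,
    with each cell area estimated on an m x m midpoint grid."""
    err = 0.0
    for x0, y0, h in cells:                     # K = [x0,x0+h] x [y0,y0+h]
        s = (np.arange(m) + 0.5) * h / m        # midpoint coordinates
        X, Y = np.meshgrid(x0 + s, y0 + s)
        w = (h / m) ** 2                        # area per sample point
        err += abs(w * np.count_nonzero(chi_exact(X, Y))
                   - w * np.count_nonzero(chi_num(X, Y)))
    return err

# Toy check on one unit cell: half-plane x < 1/2 versus x < 1/4.
e = geometric_error(lambda X, Y: X < 0.5, lambda X, Y: X < 0.25,
                    [(0.0, 0.0, 1.0)])
print(e)  # -> 0.25
```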
To measure the error between the exact velocity and the numerical solution,
we use
\begin{eqnarray*}
e_0=\N{{\boldsymbol{u}}(\cdot,T)-{\boldsymbol{u}}_h^{N}}_{{\boldsymbol{L}}^2(\Omega_T)},
\end{eqnarray*}
since $\Omega_T=\Omega_0$ is explicitly given. However, $\Omega_{t_n}$ are unknown in intermediate time steps. We measure the $H^1$-norm errors on the computational domains by
\begin{equation*}
e_1=\bigg(\sum_{n=k}^N \tau \SN{{\boldsymbol{u}}(\cdot,t_n)-{\boldsymbol{u}}_h^n}_{{\boldsymbol{H}}^1(\Omega_h^n)}^2\bigg)^{1/2}.
\end{equation*}
Throughout this section, we set the diffusion coefficient to $\nu=1$. Other constant values of $\nu$ do not influence the convergence orders for $\tau$ small enough. The penalty parameter in $\mathscr{J}^n_h$ is set to $\gamma=0.001$. In Algorithm~\ref{alg:mars}, the parameters for surface-tracking are set to $\eta = 0.5 h$ and $\delta = 0.01$.
\vspace{1mm}
\begin{example}[Rotation of an elliptic disk]\label{ex1}
The exact velocity is set by
\begin{equation*}
{\boldsymbol{u}} = (0.5-y,x-0.5).
\end{equation*}
The initial domain is an elliptic disk centered at $(0.5,0.5)$, with major axis $R_1 = 0.6$ and minor axis $R_2=0.3$. The final time is set to $T=\pi$.
\end{example}
The moving domain $\Omega_t$ rotates counterclockwise by half a revolution with unit angular velocity and returns to its original shape at $t=T$.
Fig.~\ref{fig:domain-ex1} shows the snapshots of $\Omega^n_h$ which are formed with Algorithm~\ref{alg:mars} at $t_n=0$, $T/8$, $T/4$, $T/2$, $3T/4$, and $T$, respectively.
From Tables~\ref{tab:ex1-k3} and \ref{tab:ex1-k4}, we find that optimal convergence orders are obtained for both the SBDF-3 and SBDF-4 schemes, namely,
\begin{eqnarray*}
e_0 \sim \tau^k,\qquad
e_1 \sim \tau^k,\qquad
e_\Omega \sim \tau^k,\qquad k=3,4.
\end{eqnarray*}
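The Order columns reported in the tables are the usual estimated orders of convergence between successive refinements, $\log_2(e_{2\tau}/e_{\tau})$. A quick Python check on the tabulated $e_0$ values of the SBDF-3 run (small deviations from the tabulated orders stem from the rounding of the printed errors):

```python
import math

def observed_orders(errors):
    """Estimated order of convergence log2(e_coarse / e_fine)
    for successive halvings of tau (and h)."""
    return [math.log2(a / b) for a, b in zip(errors, errors[1:])]

e0 = [1.06e-05, 1.36e-06, 1.71e-07, 2.14e-08]   # e_0 column, SBDF-3, Example 1
print([round(p, 2) for p in observed_orders(e0)])  # -> [2.96, 2.99, 3.0]
```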
\begin{figure}[htp!]
\centering
\subfigure[$t=0$]{
\includegraphics[width=4.5cm]{rotationN128step5.png}
}
\subfigure[$t=T/8$]{
\includegraphics[width=4.5cm]{rotationN128step1.png}
}
\subfigure[$t=T/4$]{
\includegraphics[width=4.5cm]{rotationN128step2.png}
} \\
\subfigure[$t=T/2$]{
\includegraphics[width=4.5cm]{rotationN128step3.png}
\label{fig3:T/2}}
\quad
\subfigure[$t=3T/4$]{
\includegraphics[width=4.5cm]{rotationN128step4.png}
}
\subfigure[$t=T$]{
\includegraphics[width=4.5cm]{rotationN128step5.png}
}
\caption{Computational domains $\Omega^n_h$ at different time steps
$(h=1/128,\; k=3)$.} \label{fig:domain-ex1}
\end{figure}
\begin{table}[htp!]
\centering
\caption{Convergence orders for the SBDF-3 scheme (Example~\ref{ex1}).}\label{tab:ex1-k3}
\setlength{\tabcolsep}{3mm}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
$h=\tau/\pi$ & $e_0$ &Order & $e_1$ &Order & $e_\Omega$ &Order\\ \hline
1/16 &1.06e-05 &- &1.10e-04 &- &4.03e-03 &-\\ \hline
1/32 &1.36e-06 &2.93 &1.47e-05 &2.91 &5.73e-04 &2.81 \\ \hline
1/64 &1.71e-07 &2.99 &1.89e-06 &2.96 &7.54e-05 &2.92 \\ \hline
1/128 &2.14e-08 &3.00 &2.39e-07 &2.98 &9.64e-06 &2.97 \\ \hline
\end{tabular}
\end{table}
\begin{table}[htp!]
\centering
\caption{Convergence orders for the SBDF-4 scheme (Example~\ref{ex1}).}\label{tab:ex1-k4}
\setlength{\tabcolsep}{3mm}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
$h=\tau/\pi$ & $e_0$ &Order & $e_1$ &Order & $e_\Omega$ &Order\\ \hline
1/16 &2.13e-06 &- &2.17e-05 &- &4.14e-04 &-\\ \hline
1/32 &1.34e-07 &3.98 &1.44e-06 &3.91 &2.83e-05 &3.87 \\ \hline
1/64 &8.42e-09 &4.00 &9.26e-08 &3.96 &1.86e-06 &3.93 \\ \hline
1/128 &5.56e-10 &3.92 &5.88e-09 &3.98 &1.20e-07 &3.96 \\ \hline
\end{tabular}
\end{table}
\begin{example}[Vortex shear of a circular disk]\label{ex2}
The exact velocity is given by
\begin{equation*}
{\boldsymbol{u}} = \cos (\pi t/3) \big(\sin^2 (\pi x) \sin (2\pi y),\, -\sin^2 (\pi y) \sin (2\pi x)\big).
\end{equation*}
The initial domain is a disk of radius $R=0.15$ centered at $(0.5,0.75)$. The final time is $T=3$.
\end{example}
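One can verify symbolically that this velocity field is divergence-free, so the induced flow is area-preserving while the domain deforms; a quick SymPy check:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
u1 = sp.cos(sp.pi * t / 3) * sp.sin(sp.pi * x)**2 * sp.sin(2 * sp.pi * y)
u2 = -sp.cos(sp.pi * t / 3) * sp.sin(sp.pi * y)**2 * sp.sin(2 * sp.pi * x)

div_u = sp.simplify(sp.diff(u1, x) + sp.diff(u2, y))
print(div_u)  # -> 0
```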
The domain is stretched into a snake-like region at $t=T/2$ and returns to its initial shape at $t=T$.
Fig.~\ref{fig:domain-ex2} shows the snapshots of the computational domain $\Omega^n_h$ during the time evolution. Tables~\ref{tab:ex2-k3} and \ref{tab:ex2-k4} show optimal convergence orders for the SBDF-3 and SBDF-4 schemes, respectively. From Fig.~\ref{fig:domain-ex2}~(d), we find that although the domain experiences severe deformations, high-order convergence is still obtained. This demonstrates the robustness of the finite element method.
\begin{figure}[htp!]
\centering
\subfigure[$t=0$]{
\includegraphics[width=4.5cm]{vortexstep5.png}
}
\subfigure[$t=T/8$]{
\includegraphics[width=4.5cm]{vortexstep1.png}
}
\subfigure[$t=T/4$]{
\includegraphics[width=4.5cm]{vortexstep2.png}
} \\
\subfigure[$t=T/2$]{
\includegraphics[width=4.5cm]{vortexstep3.png}
\label{fig1:T/2}}
\subfigure[$t=3T/4$]{
\includegraphics[width=4.5cm]{vortexstep4.png}
}
\subfigure[$t=T$]{
\includegraphics[width=4.5cm]{vortexstep5.png}
}
\caption{Computational domains $\Omega^n_h$ at different time steps
$(h=1/128,\;k=3)$.} \label{fig:domain-ex2}
\end{figure}
\begin{table}[htp!]
\centering
\caption{Convergence orders for the SBDF-3 scheme (Example~\ref{ex2}).}\label{tab:ex2-k3}
\setlength{\tabcolsep}{3mm}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
$h=\tau$ & $e_0$ &Order & $e_1$ &Order & $e_\Omega$ &Order\\ \hline
1/16 &1.17e-03 &- &1.71e-03 &- &6.70e-03 &-\\ \hline
1/32 &1.07e-04 &3.45 &2.00e-04 &3.09 &1.03e-03 &2.71 \\ \hline
1/64 &9.59e-06 &3.48 &2.51e-05 &3.00 &1.45e-04 &2.82 \\ \hline
1/128 &9.52e-07 &3.33 &3.16e-06 &2.99 &1.89e-05 &2.94 \\ \hline
\end{tabular}
\end{table}
\begin{table}[htp!]
\centering
\caption{Convergence orders for the SBDF-4 scheme (Example~\ref{ex2}).}\label{tab:ex2-k4}
\setlength{\tabcolsep}{3mm}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
$h=\tau$ & $e_0$ &Order & $e_1$ &Order & $e_\Omega$ &Order\\ \hline
1/16 &1.60e-03 &- &7.40e-04 &- &1.14e-02 &-\\ \hline
1/32 &1.18e-04 &3.76 &5.80e-05 &3.67 &7.20e-04 &3.98 \\ \hline
1/64 &7.62e-06 &3.95 &3.95e-06 &3.87 &4.24e-05 &4.08 \\ \hline
1/128 &4.78e-07 &3.99 &2.53e-07 &3.97 &2.51e-06 &4.08 \\ \hline
\end{tabular}
\end{table}
\begin{example}[Deformation of a circular disk]\label{ex3}
The exact solution is given by
\begin{equation*}
{\boldsymbol{u}}= \cos(\pi t/3)
\big(\sin(2\pi x)\sin(2 \pi y),
\cos( 2\pi x)\cos(2 \pi y)\big).
\end{equation*}
The initial domain $\Omega_0$ is a disk of radius $R=0.15$, as in Example~\ref{ex2}, but centered at $(0.5,0.5)$. The final time is set to $T=3$.
\end{example}
In this example, the deformation of the domain is even more severe than that in Example~\ref{ex2}.
At $t=T/2$, the middle part of the domain is stretched into a filament (see Fig.~\ref{fig:hT-ex3}).
At this time, the domain attains its largest deformation compared with the initial shape.
In the second half period, the domain returns to its initial shape. Moreover, we adopt a fine mesh with $h=1/256$ to capture the largest deformation of the domain.
Nevertheless, Tables~\ref{tab:ex3-k3} and \ref{tab:ex3-k4} show that quasi-optimal convergence is observed for the SBDF-3 and SBDF-4 schemes, respectively. This indicates that
\begin{eqnarray*}
e_0 \sim \tau^k,\qquad
e_1 \sim \tau^k,\qquad
e_\Omega \sim \tau^k,\qquad k=3,4,
\end{eqnarray*}
hold asymptotically as $\tau=h\to 0$.
Finally, Fig.~\ref{fig:domain-ex3} shows the computational domains formed with the surface-tracking Algorithm~\ref{alg:mars} during the time evolution. The severe deformations demonstrate the robust performance of the finite element method.
\begin{table}[htp!]
\centering
\caption{Convergence orders for the SBDF-3 scheme (Example~\ref{ex3}).}\label{tab:ex3-k3}
\setlength{\tabcolsep}{3mm}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
$h=\tau$ & $e_0$ &Order & $e_1$ &Order & $e_\Omega$ &Order\\ \hline
1/32 &9.41e-05 &- &5.13e-04 &- &7.56e-03 &-\\ \hline
1/64 &3.88e-06 &4.60 &6.13e-05 &3.06 &1.01e-03 &2.90 \\ \hline
1/128 &3.49e-07 &3.47 &7.85e-06 &2.96 &1.28e-04 &2.98 \\ \hline
1/256 &5.25e-08 &2.73 &9.95e-07 &2.98 &1.60e-05 &3.00 \\ \hline
\end{tabular}
\end{table}
\begin{table}[htp!]
\centering
\caption{Convergence orders for the SBDF-4 scheme (Example~\ref{ex3}).}\label{tab:ex3-k4}
\setlength{\tabcolsep}{3mm}
\begin{tabular}{ |c|c|c|c|c|c|c|}
\hline
$h=\tau$ & $e_0$ &Order & $e_1$ &Order & $e_\Omega$ &Order\\ \hline
1/32 &9.77e-05 &- &2.65e-04 &- &2.57e-04 &-\\ \hline
1/64 &9.03e-06 &3.44 &1.27e-05 &4.38 &6.83e-05 &1.91 \\ \hline
1/128 &6.04e-07 &3.90 &7.77e-07 &4.03 &6.39e-06 &3.42 \\ \hline
1/256 &3.84e-08 &3.98 &5.12e-08 &3.92 &4.82e-07 &3.73 \\ \hline
\end{tabular}
\end{table}
\begin{figure}[htp!]
\centering
\includegraphics[width=12cm]{deformationN12832.png}
\caption{The computational domain $\Omega^n_h$ at $t_n=T/2$
$(h=1/128,\;k=4)$.} \label{fig:hT-ex3}
\end{figure}
\begin{figure}[htp!]
\centering
\subfigure[$t=0$]{
\includegraphics[width=3cm]{deformationN128step0.png}}
\subfigure[$t=T/8$]{
\includegraphics[width=5cm]{deformationN128step1.png}}
\subfigure[$t=T/4$]{
\includegraphics[width=5cm]{deformationN128step2.png}} \\
\subfigure[$t=T/2$]{
\includegraphics[width=5cm]{deformationN128step3.png}}
\subfigure[$t=3T/4$]{
\includegraphics[width=5cm]{deformationN128step4.png}}
\subfigure[$t=T$]{
\includegraphics[width=3cm]{deformationN128step5.png}}
\caption{Computational domains $\Omega^n_h$ at different time steps
$(h=1/128,\;k=4)$.} \label{fig:domain-ex3}
\end{figure}
\section*{Acknowledgments}
The authors are grateful to Professor Qinghai Zhang of Zhejiang University, China. The implementation of Algorithm~\ref{alg:mars} in this paper is based on Professor Zhang's interface-tracking code.
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:intro}
\begin{figure*} [!t]
\centering
\includegraphics[scale=1.38]{fig1}
\caption{Experimental setup.}
\label{fig:setup}
\end{figure*}
\begin{figure*} [!htb]
\centering
\includegraphics[scale=1.38]{fig2}
\caption{Schematic (side view) of the beam specimen, highlighting some of its characteristic dimensions. This figure is to be used as reference for the numbering of the patches. The solid black squares represent input and output accelerometers.}
\label{fig:beam}
\end{figure*}
Ever since Robert Forward's paper on the electronic damping of vibrations in optical structures~\cite{Forward_APPLOPT_1979} (where he stated ``We hope that the examples presented in this paper will [...] lead to the realization that one does not need to find a purely mechanical solution to a mechanical vibration problem''), there has been extensive progress in the field of vibration control via shunted piezoelectric elements. \emph{Shunting} refers to the act of connecting the electrodes of a piezoelectric element to a passive electrical circuit, which represents an electrical extension of the structure. A seminal work on the topic is that of Hagood and von Flotow~\cite{Hagood_JSV_1991}, who presented a detailed theoretical formulation on shunting based on linear piezoelectric theory, and experimental results on the attenuation of a single mode of vibration via resistive and resistive-inductive (RL) circuits. While resistive circuits behave as broadband dampers, RL circuits allow one to obtain pronounced and localized vibration attenuation; when an RL circuit is connected in series to a piezo patch (which can be modeled as a capacitor in series with a voltage supply), we obtain a resonant RLC circuit, which behaves as an electrical dynamic damper that can be tuned by modifying the circuit parameters. In recent years, extensive efforts have been devoted to extending the effects of RL circuits to multiple modes of vibration~\cite{Hollkamp_JIMSS_1994, Viana_JBSMSE_2006, Thomas_SMS_2012} (multimodal vibration attenuation) and to the investigation of alternative passive circuits, such as the negative capacitor (NC)~\cite{Behrens_SMS_2003, Park_JVC_2005, Beck_JIMSS_2011}.
Moreover, with the discovery of phononic crystals~\cite{Kushwaha_PRL_1993}---periodic structures with unconventional wave manipulation characteristics such as phononic bandgaps---researchers have begun to incorporate shunted piezoelectric patches in the architectures of these materials to obtain tunable, complex wave patterns. This has motivated an increasingly larger number of studies on structures involving \emph{periodic arrays} of shunted piezos. Among these, we recall the pioneering contribution of Thorp et al.~\cite{Thorp_SMS_2001}, who paired the concepts of phononic crystals and shunting for the first time. It is also interesting how the authors hinted at the concept of \emph{relaxation} of the periodicity to widen the bandgaps, invoking \emph{Anderson localization} mechanisms, anticipating a discussion that has become central in recent phononic crystals literature~\cite{Celli_APL2_2015}. Noteworthy is also the work of Airoldi and Ruzzene~\cite{Airoldi_NJoP_2011}, who were the first to use a periodic structure with independent RL shunts to control waves, rather than to attenuate resonance peaks; their work---the first catering to problems encountered in metamaterial engineering applications---provided compelling results in terms of locally-resonant bandgap generation. In parallel, researchers have also proposed more convoluted active and passive shunting strategies to enhance vibration attenuation. Examples are electrical networks placed in parallel to the mechanical network (where each patch is shunted to the neighboring patch instead of connected to ground)~\cite{dellIsola_SMS_2004, Lossouarn_SMS_2015, Bergamini_JAP_2015}, enhanced resonant shunts~\cite{Wang_SMS2_2011} and amplifier-resonator feedback circuits~\cite{Wang_SMS_2016}. 
While most of the studies involving periodic arrays of shunted piezos are centered around vibration attenuation and bandgap generation~\cite{Thorp_SMS_2001, Airoldi_NJoP_2011, dellIsola_SMS_2004, Spadoni_JIMSS_2009, Casadei_SMS_2010, Airoldi_JIMSS_2011, Wang_SMS_2011, Wang_SMS2_2011, Bergamini_ADVMAT_2014, Chen_JVA_2014, Lossouarn_SMS_2015, Bergamini_JAP_2015, Zhou_SMS_2015, Wang_SMS_2016, Zhu_APL_2016}, others have proposed shunted piezos as a mean to achieve tunable wave focusing in metamaterial architectures~\cite{Celli_APL_2015, Wen_JIMSS_2016, YI_SMS_2016}, and tunable waveguiding~\cite{Oh_APL_2011, Casadei_JAP_2012}.
So far, most of the experimental studies involving multiple patches bonded to the same substrate have been characterized by a uniform tuning of the circuit characteristics. Apart from the previously-discussed work by Thorp et al.~\cite{Thorp_SMS_2001}, exceptions are the recent works of Wang and Chen~\cite{Wang_SMS_2016} and Bergamini et al.~\cite{Bergamini_JAP_2015}, who discussed the use of a superlattice method and diatomic unit cells, respectively, as strategies to extend the bandgaps to multiple frequencies. In both cases, different inductance values are connected to neighboring patches, while preserving some sort of periodicity.
While modeling structures with periodically-placed piezoelectric elements might present some convenient aspects, such as the possibility of using reduced models (i.e., a unit cell analysis) and the availability of additional frequency regions of Bragg attenuation, one might argue that, when dealing with resonant shunts, periodicity is not strictly needed for bandgap generation and wave attenuation. In fact, when locally-resonant mechanisms are involved, the resonators' placement throughout the medium does not affect the global formation of bandgaps~\cite{Rupin_PRL_2014, Celli_APL2_2015}. On the contrary, in some situations, it might be convenient to leverage some ``organized disorder'' to enhance the wave attenuation performance. The attempt to extend the usually narrow frequency regions of resonance-based attenuation has received growing interest in the metamaterials community over recent years. An example is the work of Kr\"odel et al.~\cite{Kroedel_EML_2015} on metamaterial-enabled seismic isolation, where a wide locally-resonant bandgap is created by employing resonators tuned at adjacent frequencies and displaying slightly overlapping spectra. The concept of trapping a wide spectrum of adjacent frequencies has been pioneered by work in optics, where the application to the control of visible light has inspired the name ``rainbow trap''~\cite{Tsakmakidis_NATURE_2007, Gan_PNAS_2011}, and has recently been extended to the realm of acoustic waves~\cite{Zhu_SCIREP_2013, Ni_SCIREP_2014, Zhou_APL_2016}.
In our work, we embrace these ideas of disorder and relaxation of the periodicity to demonstrate the wave attenuation capabilities of an array of randomly-positioned piezoelectric elements shunted with non-uniform RL circuits. We demonstrate that a device capable of attenuating vibrations over a broad frequency range can be obtained by tuning different RL circuits to resonate at different frequencies, thus realizing a tunable rainbow trap for elastic wave manipulation. The behavior of our test system is probed with a transient chirp signal and monitored by measuring how its spectro-spatial characteristics are modified by the sequential activation of the shunting circuits. Moreover, we propose an additional angle to the problem, whereby we look at the effects of wave manipulation through the prism of wave packet distortion.
This paper is organized as follows. In Sec.~\ref{sec:exp} we discuss in detail our experimental setup, including the specimen, the circuitry and the acquisition system. In Sec.~\ref{sec:res1} we discuss the individual tuning of the circuits. In Sec.~\ref{sec:res2} we report on the broadband wave attenuation enabled by the rainbow trap strategy. Finally, the conclusions of our work are drawn in Sec.~\ref{sec:con}.
\section{Experimental setup}
\label{sec:exp}
In this section, we describe in detail the experimental framework used throughout this work. A comprehensive picture of the setup is shown in Fig.~\ref{fig:setup}. We would like to point out that most of the circuitry solutions used here are inspired by the informative works of Viana and Steffen~\cite{Viana_JBSMSE_2006}, and of Thomas, Ducarne and De\"u~\cite{Thomas_SMS_2012, Ducarne_THESIS_2009}.
\subsection{Specimen geometry and patch configuration}
\label{sec:spec}
Our specimen is an Aluminum beam with eighteen PZT patches (STEMiNC, part SMPL20W15T14R111) bonded according to a random spatial pattern, as shown in the schematic of Fig.~\ref{fig:beam}. Note that the beam is instrumented with a dense population of patches. While this may be redundant for most wave control applications, it grants the possibility to test a variety of shunting sequences and to potentially investigate the effects of location, clustering and order. In our experiments, a maximum of seven patches will be activated simultaneously. Note that each patch has a slightly different value of capacitance; the average capacitance measured with a multimeter is $C_{p,\,ave}= 2.137\,\mathrm{nF}$.
The main geometric dimensions of the specimen are the following: the length of the beam is $L=51.6\,\mathrm{cm}$, the thickness of the beam is $t=0.26\,\mathrm{cm}$, the width of the beam is $b=1.5\,\mathrm{cm}$, the length of each patch is $L_p=2\,\mathrm{cm}$, the thickness of a patch is $t_p=0.14\,\mathrm{cm}$ and the width of the patch is $b_p=1.5\,\mathrm{cm}$. Note that, for the sake of brevity, we did not report the distance between each pair of neighboring patches; it will be demonstrated that this parameter has essentially no influence on the effects discussed in this article. The patches are bonded to the beam using a 2-part epoxy glue (3M Scotch-Weld 1838 B/A); details on how to properly bond the patches are reported in the Supplementary Data (SD) section. The sketch also shows the position of the actuator and of the input and output accelerometers.
The measured frequency response function of the beam (obtained using the signals recorded at the input and output accelerometers in response to a multitone excitation spanning the $5$--$10\,\mathrm{kHz}$ range) is shown in Fig.~\ref{fig:beamtf}.
\begin{figure} [!htb]
\centering
\includegraphics[scale=1.38]{fig3}
\caption{Frequency response function of the beam specimen (with short-circuited patches), obtained using the signals recorded at the input and output accelerometers in response to a multi-tone signal in the $5$--$10\,\mathrm{kHz}$ range. The boxed region corresponds to the frequency interval of interest for our analysis.}
\label{fig:beamtf}
\end{figure}
The region highlighted in red represents the frequency interval of interest, i.e., the region where we want to test our wave attenuation strategy; note how this region, ranging from $7\,\mathrm{kHz}$ to $8.8\,\mathrm{kHz}$, deliberately spans multiple peaks of the beam response. The reason for selecting this frequency region will be explained in Sec.~\ref{sec:circ}.
\subsection{Circuitry}
\label{sec:circ}
Throughout this work, we resort to resistor-inductor (RL) shunting circuits. As shown by Hagood and von Flotow~\cite{Hagood_JSV_1991} and by Viana and Steffen~\cite{Viana_JBSMSE_2006}, a piezoelectric patch (which acts as a capacitor with capacitance $C_p$ in series with a voltage supply) placed in series with an RL circuit realizes a resonant RLC circuit and therefore behaves as a dynamic damper. In analogy with purely mechanical resonant systems, the electric dynamic damper resonates at a frequency that can be determined from the circuit parameters as:
\begin{equation}
f_{res}=\frac{1}{2\,\pi}\sqrt{\frac{1}{L\,C_p}}\,\,\,\,\,.
\label{eq:res}
\end{equation}
Note that the resistance $R$ does not influence the natural frequency of the resonant circuit, but controls the level of damping introduced in the structure. Given the very small capacitance values of the piezoelectric patches (of the order of $1\,\mathrm{nF}$), and given the fact that we want to operate at relatively low frequencies (of the order of $1000\,\mathrm{Hz}$), the inductances required to achieve resonant conditions in the desired range can vary between $0.01\,\mathrm{H}$ and $10\,\mathrm{H}$. Such values of inductance are hard to achieve with conventional coil-based inductors, which would require devices with characteristic sizes of the order of several centimeters. To avoid working with inductors of impractical dimensions, it is customary to resort to \emph{synthetic inductors}, i.e., circuits which artificially mimic the electrical behavior of conventional inductors. An example is the \emph{Antoniou circuit}~\cite{Antoniou_IEEE_1969}, which involves two operational amplifiers, a capacitor and four resistors, arranged as sketched in Fig.~\ref{fig:circ}a.
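To make these orders of magnitude concrete, Eq.~\ref{eq:res} can be inverted to obtain the inductance required for a target resonant frequency, $L = 1/\big((2\pi f_{res})^2 C_p\big)$. A minimal sketch (using the average patch capacitance quoted in Sec.~\ref{sec:spec}; the target frequencies are illustrative):

```python
import math

def required_inductance(f_res, C_p):
    """Inductance (H) needed for the piezo+RL loop to resonate at f_res (Hz),
    obtained by inverting f_res = 1/(2*pi*sqrt(L*C_p))."""
    return 1.0 / ((2.0 * math.pi * f_res) ** 2 * C_p)

C_p = 2.137e-9  # F, average patch capacitance measured with a multimeter
for f in (1e3, 8e3):  # illustrative target frequencies (Hz)
    print(f"{f/1e3:.0f} kHz -> {required_inductance(f, C_p):.3f} H")
```

At kilohertz frequencies and nanofarad capacitances, the required inductances indeed span fractions of a henry to roughly ten henries, confirming the need for synthetic inductors.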
\begin{figure} [!htb]
\centering
\includegraphics[scale=1.38]{fig4}
\caption{The Antoniou circuit. (a) Schematic. (b) Realization on a breadboard, showing a series resistor $R_s$ and a tunable potentiometer in lieu of $R_1$.}
\label{fig:circ}
\end{figure}
The equivalent inductance can be calculated as:
\begin{equation}
L_{eq}=\frac{R_1\,R_3\,R_4\,C}{R_2}\,\,\,\,\,.
\label{eq:ant}
\end{equation}
Another major advantage of using this circuit is the fact that the value of the equivalent inductance can be modified by simply tuning one of the components. This feature is especially important in the context of our work, as tunability of the response is a key objective. In our case, as visible in the realization of the circuit on a breadboard shown in Fig.~\ref{fig:circ}b, the tunable component is the potentiometer corresponding to $R_1$, which can assume values in the $100$--$1000\,\mathrm{\Omega}$ range. The other circuit components are: $R_2=680\,\mathrm{\Omega}$, $R_3=1000\,\mathrm{\Omega}$, $R_4=1500\,\mathrm{\Omega}$, $C=0.22\,\mathrm{\mu F}$ (Mylar type), while, for the operational amplifiers, we select the OPA445 model (TI OPA445AP). Note that the amplifiers are powered by two DC power supplies (BK Precision 1667) arranged in series so as to share the same ground and produce a voltage difference of $\pm30\,\mathrm{V}$. Note that the ground terminal coming from the power supplies is used as reference ground for the circuits as well as for the patches. More details on grounding are reported in the SD section. Given these components, the circuit can assume inductance values ranging from $0.0485\,\mathrm{H}$ to $0.485\,\mathrm{H}$ (from Eq.~\ref{eq:ant}). Since the average capacitance of the piezo patches is $C_{p,\,ave}= 2.137\,\mathrm{nF}$, the piezo+RL circuits can be tuned to resonate between $4950\,\mathrm{Hz}$ and $15600\,\mathrm{Hz}$ (from Eq.~\ref{eq:res}).
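As a sanity check, the quoted inductance and frequency ranges can be reproduced by evaluating Eqs.~\ref{eq:ant} and~\ref{eq:res} at the two extreme potentiometer settings; a minimal sketch with the component values listed above:

```python
import math

# Component values from the text (SI units)
R2, R3, R4 = 680.0, 1000.0, 1500.0   # ohm
C = 0.22e-6                           # F (Mylar capacitor)
C_p = 2.137e-9                        # F, average patch capacitance

def antoniou_inductance(R1):
    """Equivalent inductance of the Antoniou circuit, Eq. (ant)."""
    return R1 * R3 * R4 * C / R2

def resonance(L, C_p):
    """Resonant frequency of the piezo+RL loop, Eq. (res)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C_p))

L_min = antoniou_inductance(100.0)    # potentiometer at 100 ohm
L_max = antoniou_inductance(1000.0)   # potentiometer at 1000 ohm
print(f"L range: {L_min:.4f} -- {L_max:.4f} H")
print(f"f range: {resonance(L_max, C_p):.0f} -- {resonance(L_min, C_p):.0f} Hz")
```

The computed values match the ranges stated in the text ($0.0485$--$0.485\,\mathrm{H}$ and roughly $4950$--$15600\,\mathrm{Hz}$).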
It is important to point out that the choice of circuit components is not accidental. In fact, in order to harness the full attenuation capability of the shunting strategy, it is fundamental to reduce as much as possible the \emph{parasitic} resistance of the circuit (i.e., the resistance that is inherent to the circuit components). The parasitic resistance can be measured experimentally as a function of the circuit components. In general, lower equivalent inductances lead to smaller values of the parasitic resistance; in light of this, working at high frequencies can be advantageous. For this reason, we decided to work in the $7$--$8.8\,\mathrm{kHz}$ frequency range. In any case, once the components are chosen and the circuits are assembled, we suggest testing their electrical response. Details on how to electrically test RL circuits are also discussed in the SD section.
\subsection{Shunting and ``screaming''}
\label{sec:scream}
While the beam comprises eighteen randomly positioned patches, we only shunt seven of them (the results in this article refer to the activation of patches 1, 5, 8, 10, 12, 14 and 18, according to the numbering in Fig.~\ref{fig:beam}). Working with a subset of patches supports our claim that the patch positioning has no influence on the results. It is worth pointing out that increasing the number of patches working at the same time is not a simple task, since shunting multiple patches can lead to unstable behavior of the system. This is observed especially when the series resistors $R_s$ (as previously mentioned, the circuit connected to each patch also includes a series resistor $R_s$) are zero or small. In an ideal scenario, it would be preferable not to use any series resistor, to enhance the attenuation behavior of each resonant circuit at its resonant frequency. However, without $R_s$, spurious vibrations are transmitted to the beam specimen and the patches can be heard ``screaming'', i.e., vibrating in the audible frequency range. Interestingly, this behavior is particularly strong when multiple shunts are tuned to resonate at the same frequency. This aspect is discussed in more detail in the SD section. In the following, to avoid instability phenomena at the frequencies of interest, we select $R_s=100\,\mathrm{\Omega}$.
\subsection{Signal generation and response acquisition}
To generate and acquire the signals, we resort to a data acquisition system (NI cDAQ-9174), equipped with an output module (NI 9263) and an input module (NI 9215), and connected to a laptop running the LabVIEW software. The input signal is generated in LabVIEW, sent through the output module to a power amplifier (Br\"uel \& Kj\ae r Type 2718) and then fed to a shaker (Br\"uel \& Kj\ae r Type 4810), which transmits it to the beam specimen through a stinger. To span the frequency range of interest (highlighted in Fig.~\ref{fig:beamtf}), we prescribe the signal shown in Fig.~\ref{fig:sig}a, i.e., a sine-modulated chirp with frequency content linearly increasing from $7\,\mathrm{kHz}$ to $8.8\,\mathrm{kHz}$ in $2\,\mathrm{ms}$. The frequency content of the signal is shown in Fig.~\ref{fig:sig}b.
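For reference, the excitation can be reproduced numerically. The sketch below assumes a half-sine amplitude envelope, which is an illustrative choice on our part (the exact modulation used in the experiments is the one shown in Fig.~\ref{fig:sig}a):

```python
import math

def modulated_chirp(t, f0=7000.0, f1=8800.0, T=2e-3):
    """Linear chirp sweeping from f0 to f1 (Hz) over duration T (s),
    shaped by a half-sine window (the window choice is an assumption)."""
    if not 0.0 <= t <= T:
        return 0.0
    k = (f1 - f0) / T                               # sweep rate (Hz/s)
    phase = 2.0 * math.pi * (f0 * t + 0.5 * k * t * t)
    window = math.sin(math.pi * t / T)              # vanishes at both ends
    return window * math.sin(phase)

fs = 1e6                                            # sampling rate (Hz)
signal = [modulated_chirp(n / fs) for n in range(int(2e-3 * fs) + 1)]
```

The instantaneous frequency of this signal increases linearly in time, so each portion of the packet carries a predictable slice of the $7$--$8.8\,\mathrm{kHz}$ band; this property is used later when discussing packet distortion.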
\begin{figure} [!htb]
\centering
\includegraphics[scale=1.38]{fig5}
\caption{Input signal: linear chirp with frequency content ranging from $7\,\mathrm{kHz}$ to $8.8\,\mathrm{kHz}$. (a) Time history. (b) Frequency spectrum.}
\label{fig:sig}
\end{figure}
\begin{figure*} [!htb]
\centering
\includegraphics[scale=1.38]{fig6}
\caption{Experimental results on single-frequency distillation; these spectra are obtained via Discrete Fourier Transform (DFT) of the transient signal acquired at the output accelerometer. (a) Short circuit. (b) RED shunt only. (c) ORANGE shunt only. (d) YELLOW shunt only. (e) GREEN shunt only. (f) BLUE shunt only. (g) INDIGO shunt only. (h) VIOLET shunt only. In (b-h), the vertical lines indicate the estimated tuning frequency of each shunt, while the thin gray profiles in the background correspond to the short circuit response, used as reference.}
\label{fig:fs}
\end{figure*}
To evaluate the system response, we analyze the data recorded by the output accelerometer, i.e., the one farther away from the excitation source. The signal recorded by this accelerometer (PCB Model 352A73) is conditioned with a signal conditioner (PCB Model 482A22), sent to the input module and displayed in LabVIEW. The postprocessing of the data is performed in MATLAB. To reduce the noise, the signal corresponding to each separate experiment is acquired seven times and averaged.
\begin{figure*} [!htb]
\centering
\includegraphics[scale=1.38]{fig7}
\caption{Behavior of the rainbow trap in the frequency domain. The trap is activated following the sequence R-I-O-B-V-G-Y. (a-g) Results after each of the seven activation steps. The thin gray profiles in the background represent the frequency content of the signal recorded when the patches are all short circuited. (h) Final spectrum of the signal, obtained when all seven shunts are activated.}
\label{fig:f}
\end{figure*}
\section{Circuit tuning and single-frequency distillation}
\label{sec:res1}
Let us recall that our objective is to realize a tunable bandgap filter capable of operating over broadband spectra. This is the elasto-acoustic analog of a concept that, in optics and electromagnetism, was originally termed ``rainbow trap''. The idea consists of shunting a discrete number of patches and tuning each one individually to a different adjacent frequency, such that the corrections are felt over slightly overlapping frequency bands. Here, we demonstrate the concept using seven of the eighteen available patches (1, 5, 8, 10, 12, 14 and 18). The individual tuning of the patches is performed by monitoring real-time changes in the frequency content of the signal recorded at the output accelerometer. Fig.~\ref{fig:fs}a represents the frequency content of the signal in the reference scenario, in which all patches are short circuited.
We can see that the spectrum of the applied chirp has morphed into the two-peak structure of Fig.~\ref{fig:fs}a, to reflect the presence of two resonances in the frequency response of the beam in the $7$--$8.8\,\mathrm{kHz}$ range (as shown in Fig.~\ref{fig:beamtf}). The thick black curves in Figs.~\ref{fig:fs}b-h represent the frequency content of the signals obtained by activating one patch at a time, superimposed on top of the response of the short-circuited beam (thin gray line). We can see that each shunt has a very localized behavior, being able to \emph{distill} (subtract) the energy associated with a very narrow frequency band. In analogy with the spectrum of visible light rainbows, and in order to maintain a convenient and visually friendly taxonomy of the frequency sequence, we name each of the seven shunts after one of the seven colors of a discrete (Newtonian) rainbow: the RED (R) shunt resonates at $f_{\mathrm{R}}\approx 7260\,\mathrm{Hz}$ (Fig.~\ref{fig:fs}b), the ORANGE (O) at $f_{\mathrm{O}}\approx 7390\,\mathrm{Hz}$ (Fig.~\ref{fig:fs}c), the YELLOW (Y) at $f_{\mathrm{Y}}\approx 7520\,\mathrm{Hz}$ (Fig.~\ref{fig:fs}d), the GREEN (G) at $f_{\mathrm{G}}\approx 7600\,\mathrm{Hz}$ (Fig.~\ref{fig:fs}e), the BLUE (B) at $f_{\mathrm{B}}\approx 7700\,\mathrm{Hz}$ (Fig.~\ref{fig:fs}f), the INDIGO (I) at $f_{\mathrm{I}}\approx 7850\,\mathrm{Hz}$ (Fig.~\ref{fig:fs}g) and the VIOLET (V) at $f_{\mathrm{V}}\approx 8000\,\mathrm{Hz}$ (Fig.~\ref{fig:fs}h). These resonant frequencies are estimated by inspecting Figs.~\ref{fig:fs}b-h and are highlighted by colored vertical lines. Also note that the R shunt corresponds to patch 14, the O to 8, the Y to 18, the G to 12, the B to 1, the I to 10 and the V to 5. By comparing the results produced by individual shunts, we can see that some of them cause stronger attenuation than others.
In our opinion, this is not influenced by the position of the patch and it might be predominantly due to bonding imperfections (which could reduce the effective contact area of some of the patches).
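Combining Eqs.~\ref{eq:res} and~\ref{eq:ant} gives the nominal potentiometer setting that targets a given tuning frequency, $R_1 = R_2/\big(R_3 R_4 C\, C_p (2\pi f)^2\big)$. A minimal sketch (using the average capacitance; since each patch has a slightly different capacitance, these values are only nominal) checks that all seven estimated tuning frequencies fall inside the available $100$--$1000\,\mathrm{\Omega}$ range:

```python
import math

R2, R3, R4, C = 680.0, 1000.0, 1500.0, 0.22e-6   # ohm, ohm, ohm, F
C_p = 2.137e-9                                    # F, average patch capacitance

def potentiometer_setting(f):
    """Nominal R1 (ohm) placing the piezo+RL resonance at f (Hz)."""
    L = 1.0 / ((2.0 * math.pi * f) ** 2 * C_p)    # invert Eq. (res)
    return L * R2 / (R3 * R4 * C)                 # invert Eq. (ant)

tuning = {"R": 7260, "O": 7390, "Y": 7520, "G": 7600,
          "B": 7700, "I": 7850, "V": 8000}
for shunt, f in tuning.items():
    R1 = potentiometer_setting(f)
    assert 100.0 <= R1 <= 1000.0                  # inside the potentiometer range
    print(f"{shunt}: f = {f} Hz -> R1 = {R1:.0f} ohm")
```

In practice, the fine tuning is still performed experimentally, as described above, because of the patch-to-patch capacitance spread and the parasitic resistance of the circuits.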
\section{The rainbow trap}
\label{sec:res2}
In this section, we illustrate the result of the simultaneous activation of multiple patches and we provide a demonstration of our tunable electromechanical rainbow trap, analyzing its effects both on the frequency content of the signal and on the shape of the wavepacket.
\subsection{Frequency response - Bandgap effect}
\label{sec:f}
To activate the broadband trap, we short circuit all the patches, turn on the DC power supplies and then connect all the shunts one after the other, according to a pre-determined \emph{activation sequence}. Note that any activation sequence is bound to produce the same net result, due to the linearity of our system; to prove this point, in the SD section, we report the results obtained with a different sequence, which indeed yields the same overall signal attenuation. For completeness, we report the data recorded at each activation step. The frequency content of the signal recorded at the output accelerometer, for a R-I-O-B-V-G-Y activation sequence, is reported in Fig.~\ref{fig:f}. In Figs.~\ref{fig:f}a-g, the colored vertical lines indicate the frequencies at which the shunts activated at each step of the sequence are tuned (the lines correspond to the tuning frequencies estimated in Fig.~\ref{fig:fs}). As we go through the shunting sequence, the energy associated with the wave packet is gradually distilled: each shunt absorbs the energy corresponding to frequencies close to its resonance, until we reach the final attenuation stage shown in Fig.~\ref{fig:f}h. It is interesting to notice how the activation of some patches causes the frequency content of the signal to be stretched, compressed and deformed, apparently redistributing some of the energy from the corrected region to a neighboring frequency interval. For example, this is visible in Fig.~\ref{fig:f}a, where the RED circuit causes frequencies around $7.7\,\mathrm{kHz}$ to be amplified with respect to the reference case. Fig.~\ref{fig:f}h allows us to fully appreciate the broadband wave attenuation capability of our rainbow trap; it also confirms that neither the periodicity of the patch placement nor the homogeneity of the shunt characteristics is required in a transient wave propagation problem involving RL shunts.
This is due to the fact that the traveling packet, which has local support, propagates along the beam and is therefore forced to interact sequentially with all the patches, thus feeling the effect of each correction. Moreover, due to its short-wavelength characteristics, the wave has the opportunity to impart significant deformation to each patch. In contrast, in vibration attenuation problems (steady-state conditions), the possibility for the patches to undergo significant levels of strain at the frequencies corresponding to the peaks that we intend to suppress is directly related to the modal characteristics of the structure. In such a scenario, the arrangement of patches would need to be carefully designed.
\begin{figure*} [!htb]
\centering
\includegraphics[scale=1.38]{fig8}
\caption{Effect of the rainbow trap on the shape of the wavepacket. (a) Short circuit scenario---the backward (BWD) part is due to boundary reflection. (b) Signal comparison between fully-activated rainbow trap (black line) and short circuit case (gray line), highlighting broadband attenuation and distortion. (c) Detail of the effect of the RED shunt (black line), compared to the short circuit case (gray line), highlighting the coexistence of zones of attenuation and zones of amplification.}
\label{fig:t}
\end{figure*}
\subsection{Time domain considerations - Packet distortion}
\label{sec:t}
Most of the existing literature on RL shunts focuses on spectral effects and frequency manipulation. In this work, we attempt to provide an additional angle to the problem, by monitoring how resonant shunts can affect the morphological characteristics of a traveling signal. Since our working signal is a chirp, with a frequency content that evolves across the packet, it is interesting to explore how the effects of specific frequency distillations are recorded in different (predictable) zones of the signal (e.g. front or tail) and how they manifest as localized modifications of the packet envelope. In Fig.~\ref{fig:t}a, we report the wavepacket recorded when all the patches are short circuited. In the $0$--$3\,\mathrm{ms}$ interval, we can identify a first packet, corresponding to the signal traveling away from the source, here termed forward (FWD) packet. The oscillations starting approximately at $3\,\mathrm{ms}$, on the other hand, correspond to the signal reflected by the clamped end of the beam and traveling back towards the source, here termed backward (BWD) packet. We can see that, even without shunting, the FWD packet already features some distortion with respect to the prescribed signal (shown in Fig.~\ref{fig:sig}a), due to the inherent dispersive characteristics of the beam and its bimodal response in the frequency region of interest. In Fig.~\ref{fig:t}b, we compare the signal recorded when the rainbow trap is completely activated (black line) against the short circuited case (gray line). This representation confirms how the shunting strategy causes a generalized de-energization of the signal and indicates that the rainbow trap effectively acts as an \emph{acousto-elastic signal jammer}.
It is especially interesting to analyze the effects of each individual step of the activation sequence on the wavepacket distortion. For the sake of brevity, we only report the analysis of the first step of the sequence, corresponding to the application of the RED shunt (Fig.~\ref{fig:t}c); the black line corresponds to the wavepacket upon shunting and the gray line reflects the short circuit case, which is taken as reference to quantify the influence of the frequency distillation. In the interest of clarity, we limit the inspection to the FWD portion of the signal, although similar considerations can be made for the reflected wave. In the time interval of interest, we can recognize two distinct zones with modified features. Between $0.8\,\mathrm{ms}$ and $1.8\,\mathrm{ms}$ the wave is attenuated, as highlighted by the amplitude reduction of the wave crests. This agrees with the fact that the frequency content of this portion of the packet lives in the neighborhood of the resonance frequency of the RED shunt. In contrast, an increase in crest amplitude indicates that the wave is amplified in the $1.8$--$3\,\mathrm{ms}$ interval. This observation is consistent with the global picture provided by the spectrum of Fig.~\ref{fig:f}a: while the signal is dynamically damped in the neighborhood of the natural frequency of the resonant shunt, it is locally amplified at frequencies above resonance, which are here located towards the tail of the packet. A complete description of the effects of the other steps in the activation sequence is reported in the SD section. In summary, due to the activation of each shunt, we observe a localization of the amplitude correction effects in the signal which is consistent with the position of the frequencies in the chirp time history.
This could open avenues towards the design of structures featuring RL shunts that could be programmed to impart desired morphological characteristics and engineered distortion patterns to the envelope of broadband signals.
\section{Conclusions}
\label{sec:con}
To summarize, in this work we have demonstrated the wave attenuation capabilities of a random array of piezo elements bonded to a waveguide (beam) and shunted with RL circuits. By letting the shunts resonate at adjacent frequencies, we obtained broadband attenuation of a traveling wavepacket. We have revisited the concept of ``organized disorder'' in the context of shunted piezoelectrics, thus adding a layer to the outstanding versatility of shunted piezo elements and their applicability in the design of tunable metamaterials.
\ack
We acknowledge the support of the National Science Foundation (grant CMMI-1266089). Davide Cardella also acknowledges the support of the Outgoing office of Politecnico di Torino (through the Master Thesis Abroad Scholarship). We are indebted to Lauren E. Linderman and Paul Bergson (University of Minnesota) for their support with the experimental equipment. We are grateful to Iacopo Gentilini and Douglas Isenberg (Embry-Riddle Aeronautical University, Prescott) for their helpful insight and assistance with circuitry during the initial stages of our work. Finally, we thank Benjamin S. Beck (Pennsylvania State University) for an enlightening discussion on shunted piezoelectrics.
\clearpage
\section*{References}
\bibliographystyle{iopart-num}
\balance
\providecommand{\newblock}{}
\section{Introduction}
Classical Teichm\"uller space can be viewed as
the moduli space of marked hyperbolic structures
of finite volume on a surface. In the case of a
punctured surface, many geometrically meaningful
\emph{ideal cell decompositions} for its
Teichm\"uller space are known. For instance,
quadratic differentials are used for the
construction attributed to Harer, Mumford and
Thurston~\cite{Harer1986}; hyperbolic geometry
and geodesic laminations are used by Bowditch
and Epstein~\cite{BE1988}; and
Penner~\cite{Penner1987} uses Euclidean cell
decompositions associated to the points in
\emph{decorated} Teichm\"uller space. The
decoration arises from associating a positive
real number to each cusp of the surface. All of
these decompositions are natural in the sense
that they are invariant under the action of the
mapping class group (and hence descend to a
cell decomposition of the moduli space of
unmarked structures) and that they do not
involve any arbitrary choices.
A hyperbolic structure is an example of a
\emph{strictly convex projective structure},
and two hyperbolic structures are equivalent as
hyperbolic structures if and only if they are
equivalent as projective structures. Let
$S_{g,n}$ denote the surface of genus $g$ with
$n$ punctures. We will always assume that $2g+n>2,$
so that the surface has negative Euler
characteristic. Whereas the classical Teichm\"uller
space $\mathcal{T}(S_{g,n})$ is homeomorphic with
$\bkR^{6g-6+2n},$ Marquis~\cite{Marq1} has shown that
the analogous moduli space $\mathcal{T}_+(S_{g,n})$
of marked strictly convex projective structures of finite
volume on $S_{g,n}$ is homeomorphic with
$\bkR^{16g-16+6n}.$
Recently, Cooper and Long~\cite{CL} generalised the
key construction of Epstein and Penner~\cite{MR918457},
which was used in Penner's decomposition of decorated
Teichm\"uller space~\cite{Penner1987}. Cooper and
Long~\cite{CL} state that their construction can be
used to define a decomposition of the decorated moduli
space $\widetilde{\mathcal{T}}_+(S_{g,n}),$ but that it is
not known whether all components of this decomposition
are cells.
As in the classical setting, there is a principal
$\bkR^n_+$ foliated fibration
$\widetilde{\mathcal{T}}_+(S_{g,n}) \to {\mathcal{T}}_+(S_{g,n}),$
and different points in a fibre above a point of
${\mathcal{T}}_+(S_{g,n})$ may lie in different components
of the decomposition of $\widetilde{\mathcal{T}}_+(S_{g,n}).$
However, if there is only one cusp, then all points in a
fibre lie in the same component, and one obtains a
decomposition of ${\mathcal{T}}_+(S_{g,1}).$
The proofs of cellularity in
\cite{MR918457, Penner1987} make essential
use of the hyperbolic metric and in particular the
Minkowski model for hyperbolic space. One obstacle
in finding analogous proofs that work in the setting of projective
geometry lies in the fact that the model geometry varies.
Whereas every hyperbolic surface is a quotient of the
interior of the unit disc, one can only
guarantee that a strictly convex
projective surface is the
quotient of some open strictly
convex domain in projective space. But as one varies the
projective structure, the domain may change to a
projectively inequivalent domain. Moreover, the geometry
arises from the Hilbert metric on the domain, which in
general is a non-Riemannian Finsler metric.
The main contribution of this paper is to give the first evidence towards a positive answer to
the question of whether Penner's result generalises to $\widetilde{\mathcal{T}}_+(S_{g,n})$. We also introduce the concept of \emph{trigonal matrices} in \S\ref{sec:projectivity}, which allow computation of holonomy without the introduction of cube roots, and the concept of \emph{cloverleaf position} in \S\ref{sec:clovers}, which normalises domains to vary compactly.
We show that for the once-punctured torus $S_{1,1}$, the
decomposition of $\mathcal{T}_+(S_{1,1})$ is indeed an ideal
cell decomposition, which is invariant under the action of the mapping class group. Moreover, there is a natural bijection between the cells and the ideal cell decompositions of $S_{1,1}.$ This is stated formally as Theorem~\ref{thm:main} in \S\ref{sec:Convex hull constructions}. The analogous statement for the decorated moduli space $\widetilde{\mathcal{T}}_+(S_{0,3})$ of the thrice-punctured sphere $S_{0,3}$ is also shown (see Theorem~\ref{thm:main2} in \S\ref{sec:S03}).
In addition to giving evidence towards a generalisation of Penner's result, our methods show
that on the one hand, the parametrisation due to Fock and
Goncharov~\cite{FG2006, FG2007} makes the computation of the
decomposition of moduli space feasible, and that it may also
provide the right theoretical framework for a general proof.
We also show that our computational tools allow a systematic
study of deformations and degenerations of strictly convex
projective structures in \S\ref{sec:clovers} and discuss further directions in \S\ref{sec:conclusion}.
\section{Ideal cell decompositions of surfaces}
An \emph{ideal cell decomposition} of $S_{g,n}$ consists of
a union $\Delta$ of pairwise disjoint arcs connecting (not
necessarily distinct) punctures with the properties that no
two arcs are homotopic (keeping their endpoints at the
punctures) and that each component of $S_{g,n} \setminus \Delta$
is an open disc. The arcs are called \emph{ideal edges}. We
regard two ideal cell decompositions as the same if they
are isotopic (keeping the endpoints of all arcs at the
punctures). The set of (isotopy classes of) ideal cell
decompositions of $S_{g,n}$ has the structure of a
partially ordered set, with the partial order given by
inclusion. Given ideal cell decompositions $\Delta_1$
and $\Delta_2$ we always understand statements such as
``$\Delta_1=\Delta_2,$'' ``$\Delta_1\subseteq \Delta_2$''
or ``$\Delta_1\cap\Delta_2\neq \emptyset$'' up to isotopy.
For instance, in the case of $S_{1,1},$ an ideal cell
decomposition either has two ideal edges and its
complement is an \emph{ideal quadrilateral} or it has
three ideal edges and its complement consists of two
\emph{ideal triangles}. We call the latter an \emph{ideal triangulation} and the former an \emph{ideal quadrilation} of $S_{1,1}.$
An ideal quadrilateral can be
divided into two triangles in two different ways,
depending on which diagonal is used to subdivide it. The
space of all ideal cell decompositions of $S_{1,1}$ is
naturally identified with the infinite trivalent tree.
Vertices of the tree correspond to ideal triangulations
and there is an edge between two such triangulations
$\Delta_0$ and $\Delta_1$ if and only if $\Delta_0$ is
obtained from $\Delta_1$ by deleting an ideal edge $e$
(hence creating an ideal quadrilateral) and then
inserting the other diagonal of the quadrilateral. This is called an \emph{edge flip} or \emph{elementary move}. The
ideal cell decomposition $\Delta_1 \setminus \{e\}$ is
associated with the edge in the tree with endpoints $\Delta_0$ and $\Delta_1.$
Floyd and Hatcher~\cite{FH1982} identify this tree with the dual tree to the
\emph{modular tessellation} or \emph{Farey tessellation} of the hyperbolic plane. An excellent illustration of this (including the information about edge flips) can be found in Lackenby~\cite{Lackenby}. The tiles in the modular tessellation are
ideal triangles with the properties
\begin{enumerate}
\item each vertex is a rational number or $\infty = \frac{1}{0}$,
\item if $\frac{p}{q}$ and $\frac{r}{s}$ are two vertices of the same ideal triangle, then $ps-rq=\pm 1$,
\item the set of vertices of each ideal triangle is of the form $\{\frac{p}{q}, \frac{r}{s}, \frac{p+r}{q+s}\}.$
\end{enumerate}
The full tessellation can thereby be generated from the ideal triangle with vertices $\frac{0}{1}, \frac{1}{0}, \frac{1}{1}$ and the ideal triangle with vertices $\frac{1}{0}, \frac{-1}{1}, \frac{0}{1}$.
Moreover, the element of the mapping class group taking one ideal triangulation to another can be determined from this information.
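Properties (1)--(3) make the tessellation easy to generate mechanically: an edge flip replaces one vertex of a tile by the mediant of the two vertices of the retained edge (this produces the adjacent tile on the mediant side), and the unimodularity condition $ps-rq=\pm 1$ is preserved. A minimal sketch, with fractions stored as integer pairs $(p,q)$ and $\infty = (1,0)$:

```python
def unimodular(a, b):
    """Property (2): p/q and r/s span an edge iff p*s - r*q = +-1."""
    (p, q), (r, s) = a, b
    return abs(p * s - r * q) == 1

def mediant(a, b):
    """Property (3): the mediant (p+r)/(q+s) of two edge endpoints."""
    return (a[0] + b[0], a[1] + b[1])

def subdivide(tri, i, j):
    """Tile adjacent to the edge {tri[i], tri[j]} on the mediant side,
    i.e., the result of an edge flip across that edge."""
    a, b = tri[i], tri[j]
    return (a, b, mediant(a, b))

# The two generating tiles named in the text
t0 = ((0, 1), (1, 0), (1, 1))      # vertices 0/1, infinity, 1/1
t1 = ((1, 0), (-1, 1), (0, 1))     # vertices infinity, -1/1, 0/1

# One flip across the edge {0/1, 1/1} of t0 yields the tile containing 1/2
t2 = subdivide(t0, 0, 2)

# Every edge of every tile satisfies the unimodularity condition
for tri in (t0, t1, t2):
    assert all(unimodular(tri[i], tri[j])
               for i in range(3) for j in range(i + 1, 3))
```

Iterating `subdivide` over the edges of the tiles already produced generates the full tessellation, mirroring the walk along the dual trivalent tree described above.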
\section{Convex hull constructions}
\label{sec:Convex hull constructions}
We summarise some key definitions and results that can
be found in \cite{Marq1, CLT1}.
A \emph{strictly convex projective surface} is
$S = \Omega / \Gamma,$ where $\Omega$ is an open
strictly convex domain in the real projective plane
with the property that the closure of $\Omega$ is contained
in an affine patch, and $\Gamma$ is a torsion-free discrete
group of projective transformations leaving $\Omega$
invariant. Since there is an analytic isomorphism
$\PGL(3, \bkR)\cong \SL(3,\bkR),$ we may assume $\Gamma < \SL(3,\bkR).$
The Hilbert metric on $\Omega$ can be used to define
a notion of volume on $S,$ and we are interested in the
case where $S$ is non-compact but of finite volume. Then
the ends of $S$ are cusps, and the holonomy of each cusp
is conjugate to the \emph{standard parabolic}
\[
\begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1\\ 0 & 0 & 1\end{pmatrix},
\]
and its unique fixed point on $\partial \Omega$ is
called a \emph{parabolic fixed point}.
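Parabolicity of the standard form can be checked directly: writing $N = A - I$ for the matrix $A$ above, one has $N^2 \neq 0$ but $N^3 = 0$, so $A$ is unipotent, its only eigenvalue is $1$, and its eigendirection (hence its fixed point in the projective plane) is unique. A quick computational check with plain $3\times 3$ arithmetic:

```python
def matmul(X, Y):
    """Product of two 3x3 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]                     # the standard parabolic
I = [[1 if i == j else 0 for j in range(3)] for i in range(3)]
N = [[A[i][j] - I[i][j] for j in range(3)] for i in range(3)]  # nilpotent part

N2 = matmul(N, N)                   # nonzero: N has a single Jordan block
N3 = matmul(N2, N)                  # zero: A is unipotent
```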
Cooper and Long~\cite{CL} associate ideal cell decompositions
to cusped strictly convex projective surfaces of finite volume
as follows. Suppose $S = \Omega / \Gamma$ is homeomorphic with
$S_{g,n}.$ The $(\SL(3, \bkR), \bkR P^2)$--structure of $S$ lifts to
a $(\SL(3, \bkR), \mathbb{S}^2)$--structure. We denote a lift
of $\Omega$ to $\mathbb{S}^2\subset \bkR^3$ by $\Omega^+.$
A \emph{light-cone representative} of $p \in \partial \Omega$
is a lift
$v_p \in \mathcal L = \mathcal L^+ = \bkR^+ \cdot \partial \Omega^+.$
Each cusp $c$ of $S$ corresponds to an orbit of parabolic fixed
points on $\partial \Omega.$ Choose an orbit representative
$p_c \in \partial \Omega,$ and hence a light-cone representative
$v_{c} = v_{p_c} \in \mathcal L.$ The set
$B = \{ \Gamma \cdot v_c \mid c \text{ is a cusp of } S\}$
is discrete. Let $C$ be the convex hull of $B.$ Then the
projection of the faces of $\partial C$ onto $\Omega$ is
a $\Gamma$--invariant ideal cell decomposition of $\Omega,$
and hence descends to an ideal cell decomposition of
$\Omega/\Gamma,$ called an \emph{Epstein-Penner decomposition}
by Cooper and Long.
Varying the light-cone representatives
$v_c$ gives a $(n-1)$--parameter family of
$\Gamma$--invariant ideal cell decompositions of $\Omega$.
Note that if each
face of $C$ is a triangle, then a small perturbation
of the lengths of the $p_c$ will not change the
combinatorics of $C.$ Also, in the case of one cusp, varying the length
of $p_c$ merely dilates $C$ and hence does not change
the combinatorics of the convex hull. In particular, the decomposition of the surface
$\Omega/\Gamma$ is canonical if $n = 1.$ However, if there is more than one
cusp, then varying the length of just one $p_c$
will eventually result in different decompositions, since it
changes the relative heights of the vertices of $C.$
To summarise, given $p\in \widetilde{\mathcal{T}}_+(S_{g,n}),$ the convex
hull construction by Cooper and Long~\cite{CL} associates
to $p$ a canonical ideal cell decomposition $\Delta_p.$
Analogous to Penner~\cite{Penner1987},
define for any ideal cell decomposition
$\Delta\subset S_{g,n}$ the sets
\begin{align*}
\mathring{\mathcal{C}}(\Delta) &= \{ p \in \widetilde{\mathcal{T}}_+(S_{g,n}) \mid \Delta_p = \Delta\},\\
{\mathcal{C}}(\Delta) &= \{ p \in \widetilde{\mathcal{T}}_+(S_{g,n}) \mid \Delta_p \subseteq \Delta\}.
\end{align*}
As in the classical case, we have
${\mathcal{C}}(\Delta_1) \cap {\mathcal{C}}(\Delta_2) \neq \emptyset$
if and only if $\Delta_1\cap \Delta_2$ is an ideal cell
decomposition of $S_{g,n},$ and in this case
${\mathcal{C}}(\Delta_1) \cap {\mathcal{C}}(\Delta_2) = {\mathcal{C}}(\Delta_1\cap \Delta_2)$.
Moreover, if there is just one puncture, we may replace $\widetilde{\mathcal{T}}_+(S_{g,1})$ with ${\mathcal{T}}_+(S_{g,1})$ in the above definitions.
We can now state the main theorem of this paper:
\begin{theorem}\label{thm:main}
The set
\[
\{ \mathring{\mathcal{C}}(\Delta) \mid \Delta \text{ is an ideal cell decomposition of } S_{1,1}\}
\]
is an ideal cell decomposition of $\mathcal{T}_+(S_{1,1})$
that is invariant under the action of the mapping class group. Moreover, $\mathring{\mathcal{C}}$ is a natural bijection between the cells and the ideal cell decompositions of $S_{1,1}.$
\end{theorem}
The proof will be given in \S\ref{proof of main}, and the analogous statement for the thrice-punctured sphere is proved in \S\ref{sec:S03}. First, some general results are developed in \S\ref{sec:projectivity}, the computation of holonomy is discussed in \S\ref{sec:monodromy}, and the coordinates for
$\mathcal{T}_+(S_{1,1})$ are derived in \S\ref{sec:periphery}. Possible applications and further directions are discussed in \S\ref{sec:clovers} and \S\ref{sec:conclusion}.
\section{Projectivity}
\label{sec:projectivity}
Fock and Goncharov discovered in \cite{FG2006} that
the moduli space of mutually inscribed and
circumscribed triangles (from the perspective
of some affine patch in $\bkR P^2$) is naturally isomorphic
to the positive real line. We now develop an explicit formulation of this isomorphism, introducing the new concept of a \emph{(standard) trigonal matrix}.
An element
of $\mathcal{P}^+_3$ is the projectivity
class of a configuration of
three points in $\bkR P^2$ in general position
and three lines through those points such
that, from the perspective of some affine
patch, the triangle formed by the points
lies inside the trilateral formed by the
lines, as in the left of Figure \ref{fig:ayenay}.
\begin{figure}
\begin{center}
{\includegraphics{ayenay.pdf}}
\end{center}
\caption{Representatives of an element and a non-element of $\mathcal{P}^+_3$.}\label{fig:ayenay}
\end{figure}
In terms more amenable to
calculation, such a triangle and trilateral are the projectivisations,
respectively, of a triple $(V_0,V_1,V_2)$ of vectors in $\bkR^3$
and a triple $(v_0,v_1,v_2)$ of covectors such that
$v_i.V_j \geq 0$, with equality precisely when $i = j$.
This is viewed as a pair $(V.\Delta,\Delta.v)$ of
left and right cosets of the subgroup $\Delta$
of diagonal matrices in $\GL(3, \bkR)$
admitting a representative $(V,v)$ such that
$v.V$ is a positive counter-diagonal
matrix, i.e. a matrix of the form
\[
\begin{pmatrix}
0 & + & +\\
+ & 0 & +\\
+ & + & 0
\end{pmatrix},
\]
where the $+$ entries are positive numbers, possibly different.
Let
\begin{equation}\label{eqn:sigma}
\sigma = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0 \end{pmatrix}.
\end{equation}
If $M$ is positive counter-diagonal, then $\sigma.M.\sigma^{-1}$
is again positive counter-diagonal. The fixpoints of this
$\mathbb{Z}/3\mathbb{Z}$ action are called \emph{trigonal}.
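To make the action concrete: since $\sigma$ is a permutation matrix, conjugation by $\sigma$ shifts both indices of a matrix by one modulo $3$, so a matrix is trigonal precisely when its entries are constant on each of the two off-diagonal index cycles. The following Python sketch (ours, separate from the Maxima and Sage listings in the appendix; the helper names are hypothetical) verifies this on examples:

```python
SIGMA = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]  # the matrix sigma of (eqn:sigma)

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def conj_by_sigma(M):
    """Compute sigma.M.sigma^{-1}; sigma^{-1} = sigma^T for a permutation
    matrix, and the effect is (sigma.M.sigma^{-1})_{ij} = M_{i+1, j+1},
    indices taken mod 3."""
    sigma_inv = [[SIGMA[j][i] for j in range(3)] for i in range(3)]
    return matmul(matmul(SIGMA, M), sigma_inv)

def is_trigonal(M):
    """Fixed point of the Z/3Z action: entries constant on the index
    cycles (01, 12, 20) and (02, 10, 21), with zero diagonal preserved."""
    return conj_by_sigma(M) == M
```

In particular, a positive counter-diagonal matrix is determined up to this action by one value on each cycle, which is why the space of trigonal matrices is two-dimensional.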
We will show that every element of $\mathcal{P}^+_3$
admits as representative a pair $(V.\Delta,\Delta.v)$ with $v.V$ being some
trigonal matrix. However, the space of trigonal matrices
is two-dimensional, and our intent is to show that $\mathcal{P}^+_3$
is one-dimensional. So we should like to produce,
for any such pair, a canonical trigonal matrix. One possible
choice of trigonal matrix (with one free parameter) is the \emph{standard trigonal} matrix
\begin{equation}\label{eq:cantri}
C_3^f =
\begin{pmatrix}
0 & f & 1\\
1 & 0 & f\\
f & 1 & 0
\end{pmatrix}.
\end{equation}
\begin{proposition}\label{prp:dblcos}
Every double coset of the form $\Delta P \Delta$ in $\GL(3,\bkR)$
with $P$ a positive counter-diagonal matrix admits
a unique standard trigonal representative.
\end{proposition}
\begin{proof}
Let $P$ be a positive counter-diagonal matrix in $\GL(3,\bkR)$.
We need to solve
\begin{equation}\label{eqn:pcdm}
C.P.D =
\begin{pmatrix}
0 & f & 1\\
1 & 0 & f\\
f & 1 & 0
\end{pmatrix}
\end{equation}
for $C,D$, and $f$. We should
end up with a one-parameter family
of solutions for $C$ and $D$, since we may scale
$C$ by a nonzero scalar and $D$ by its reciprocal, but $f$
should be uniquely determined.
A direct calculation (see \S\ref{app:listings}, Listing~\ref{list:maxima0}) shows that
(\ref{eqn:pcdm}) admits solutions in $C,D,f$
precisely when
\[
f = \left ( \frac{P_{01}\cdot P_{12} \cdot P_{20}}
{P_{02}\cdot P_{10} \cdot P_{21}} \right ) ^ {1/3},
\]
where the $P_{i j}$ are the entries of $P.$
This concludes the proof of the proposition.
\end{proof}
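To illustrate the computation behind the proposition, one can solve the six scalar equations in (\ref{eqn:pcdm}) for diagonal $C = \mathrm{diag}(c_0,c_1,c_2)$ and $D = \mathrm{diag}(d_0,d_1,d_2)$ by back-substitution after normalising $d_0 = 1$. The following Python sketch (ours, not one of the appendix listings; the function names are hypothetical) does this numerically:

```python
def standard_trigonal_factor(P):
    """The unique f of the proposition:
    f = (P01 * P12 * P20 / (P02 * P10 * P21)) ** (1/3)."""
    return ((P[0][1] * P[1][2] * P[2][0])
            / (P[0][2] * P[1][0] * P[2][1])) ** (1.0 / 3.0)

def trigonal_scaling(P):
    """Solve C.P.D = standard trigonal for diagonal C, D, normalising
    d0 = 1; each line below matches one entry of the target matrix."""
    f = standard_trigonal_factor(P)
    d0 = 1.0
    c1 = 1.0 / (P[1][0] * d0)   # entry (1,0) = 1
    c2 = f / (P[2][0] * d0)     # entry (2,0) = f
    d1 = 1.0 / (c2 * P[2][1])   # entry (2,1) = 1
    d2 = f / (c1 * P[1][2])     # entry (1,2) = f
    c0 = 1.0 / (P[0][2] * d2)   # entry (0,2) = 1
    # the remaining entry (0,1) then equals f exactly when f^3 has the
    # value above, which is the content of the proposition
    return [c0, c1, c2], [d0, d1, d2], f
```

Rescaling $c_i \mapsto s\,c_i$, $d_j \mapsto d_j/s$ gives the one-parameter family of solutions, while $f$ is unaffected.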
Next, assuming that $v.V$ is standard trigonal, we wish to pick
$m \in \GL(3, \bkR)$ such that $v.m^{-1}$ and $m.V$ are
as nice as possible. To achieve this, we break duality between $v$ and $V$ here
and just let $m=V^{-1}$.
Even if $v.V$ is not assumed standard trigonal, we have:
\begin{theorem}\label{thm:proj}
Suppose $V,v \in \GL(3,\bkR)$ such that $v.V$ is
positive counter-diagonal. Let $m = (V.D)^{-1}$, where
$D$ is diagonal with $\lambda_0,\lambda_1,\lambda_2$
along its diagonal, and where
\begin{equation}\label{eqn:lambdas}
\lambda_0^3 = \frac{(v.V)_{1\,2} \cdot (v.V)_{2\,1}}
{(v.V)_{1\,0} \cdot (v.V)_{2\,0}},\qquad
\lambda_1^3 = \frac{(v.V)_{2\,0} \cdot (v.V)_{0\,2}}
{(v.V)_{2\,1} \cdot (v.V)_{0\,1}},\qquad
\lambda_2^3 = \frac{(v.V)_{0\,1} \cdot (v.V)_{1\,0}}
{(v.V)_{0\,2} \cdot (v.V)_{1\,2}}.
\end{equation}
Then $m.V.\Delta = \Delta$ and $\Delta.v.m^{-1} = \Delta.C_3^f$,
where $f$ is as in the proof of Proposition \ref{prp:dblcos} with
$P = v.V$.
\end{theorem}
\begin{proof}
By assumption, $v.V$ is positive counter-diagonal.
By Proposition \ref{prp:dblcos}, there exist
$C$ and $D$ such that $C.(v.V).D = (C.v).(V.D)$ is standard trigonal.
Then $m = (V.D)^{-1}$ is an element
of $\GL(3,\bkR)$ such that the image of
$(V,v)$ under $m$ and $(I_3,C_3^f)$ project
to the same configuration. This proves
the second portion of the theorem, reducing
our proof obligations to verifying that
$D$ as defined by (\ref{eqn:lambdas})
admits the existence of $C \in \Delta$ such
that $(C.v).(V.D)$ is standard trigonal.
This can be verified by a direct calculation (see \S\ref{app:listings}, Listings~\ref{list:maxima1} and \ref{list:out1}).
\end{proof}
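The theorem can also be checked numerically: $P.D$ lies in $\Delta.C_3^f$ exactly when the three ratios $P_{01}\lambda_1/(P_{02}\lambda_2)$, $P_{12}\lambda_2/(P_{10}\lambda_0)$ and $P_{20}\lambda_0/(P_{21}\lambda_1)$ of row entries all equal $f$, and cubing each ratio and substituting (\ref{eqn:lambdas}) recovers $f^3 = P_{01}P_{12}P_{20}/(P_{02}P_{10}P_{21})$. A Python sketch (ours, not an appendix listing; helper names are hypothetical):

```python
def lambdas(P):
    """The diagonal entries of D from equation (lambdas), as real cube roots."""
    l0 = ((P[1][2] * P[2][1]) / (P[1][0] * P[2][0])) ** (1.0 / 3.0)
    l1 = ((P[2][0] * P[0][2]) / (P[2][1] * P[0][1])) ** (1.0 / 3.0)
    l2 = ((P[0][1] * P[1][0]) / (P[0][2] * P[1][2])) ** (1.0 / 3.0)
    return l0, l1, l2

def row_ratios(P):
    """The three ratios comparing the rows of P.D with the rows of the
    standard trigonal matrix; all three must equal f."""
    l0, l1, l2 = lambdas(P)
    return (P[0][1] * l1 / (P[0][2] * l2),
            P[1][2] * l2 / (P[1][0] * l0),
            P[2][0] * l0 / (P[2][1] * l1))
```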
\section{Developing map and holonomy}
\label{sec:monodromy}
Let $S = S_{1,1}$ be a once-punctured torus with
a fixed ideal triangulation $\Delta_\ast$ and a fixed orientation.
We slightly modify the framework of \cite{FG2006}
to parameterise the marked strictly convex
projective structures with finite volume on $S.$ We also note that
Fock and Goncharov treat the more general space of framed
structures with geodesic boundary, of which the finite-volume
structures form a proper subset of positive codimension.
Lift $\Delta_\ast$ to the universal cover
$\phi: \widetilde{S} \to S,$ denote the lifted triangulation by
$\widetilde{\Delta}_\ast$, and identify the group of deck transformations with $\pi_1(S).$
The developing map $\dev : \widetilde{S} \to \Omega$
for such a structure sends $\widetilde{\Delta}_\ast$
to an ideal triangulation $\tilde{\tau}$ of $\Omega$,
which we may assume has straight edges (see \cite{WoTi}).
The edges
of this ideal triangulation have well-defined
endpoints on $\partial \Omega$, and the finite volume condition
implies $\Omega$ is round, meaning
that every point on its boundary admits a unique
tangent line (see \cite{CLT1}). Any triangle of $\widetilde{\Delta}_\ast$
therefore inherits an associated configuration of
three points in $\bkR P^2$ and three lines through
these points such that, by strict convexity of $\Omega$,
the triangle formed by these points lies inside the
trilateral formed by the lines as in the left of Figure \ref{fig:ayenay}.
Associated to the geometric structure is a \emph{holonomy} $\hol: \pi_1(S)\to \SL(3,\bkR)$
which makes $\dev$ equivariant under the action of $\pi_1(S)$.
That is, denoting $\Gamma = \hol(\pi_1(S))$, the map $\dev$
takes $\pi_1(S)$--equivalent triangles of $\widetilde{\Delta}_\ast$ to
$\Gamma$--equivalent triangles of $\tilde{\tau}$ in $\Omega$.
Finally, any two such developing maps for the same structure differ by a projectivity. In conclusion, then, for any triangle of $\Delta_\ast$, we get an associated element of $\mathcal{P}_3^+$, and
thereby, via Proposition~\ref{prp:dblcos} and Theorem~\ref{thm:proj},
a well-defined positive real number $f$.
Likewise, to any oriented edge $\tilde{e}$ of $\widetilde{\Delta}_\ast$,
associate the pair $(\tilde{t},\tilde{t}')$ of triangles
in $\widetilde{\Delta}_\ast,$ where $\tilde{e}$ is adjacent
to both, and where the orientation of $\tilde{t}$ induces the orientation of $\tilde{e}$.
We may suppose that in an affine patch, the images
of $\tilde{t},\tilde{t}'$, and the
flags attached to their vertices, look as shown in Figure~\ref{fig:nearedge}.
\begin{figure}[h]
\begin{center}
\includegraphics{nearedge_new.pdf}
\end{center}
\caption{Near the image of an oriented edge $\tilde{e}$: the labels match the orientation}\label{fig:nearedge}
\end{figure}
The flags depicted admit (nonzero) vector-covector representatives
$(V_i,v_i)$ for $i \in \{+,-,l,r\}$,
such that $v_i . V_i = 0$.
Let $v_{r+}$ be a covector representing the line through $V_+$ and $V_r$,
and likewise let $v_{l+}$ be a covector representing the
line through $V_+$ and $V_l$. Let
$v$ be the matrix whose rows are $v_-,v_{r+},v_{l+}$
and $V$ the matrix whose columns are
$V_-,V_r,V_l$. Letting $[x],[X]$ denote
the projections to $\bkR P^2$ of covectors $x$
and vectors $X$, we can associate
to this oriented edge $\tilde{e}$ the
triple of flags $(([v_-],[V_-]),([v_{r+}],[V_r]),([v_{l+}],[V_l]))$,
whose projectivity class is some element of $\mathcal{P}_3^+$,
to which we can associate a single, positive real number $f$ as in the section above.
(The \emph{triple ratio} of the triangle defined in \cite{FG2006} equals $f^3$.
However, our edge parameters' cubes are the \emph{reciprocals}
of Fock and Goncharov's edge parameters.)
We may then fix the developing map such that it has the following three properties:
\begin{enumerate}
\item the standard
basis for $\bkR^3$ projects to
the vertices of $\dev(\tilde{t})$;
\item $(1,0,0)^t$ and $(0,1,0)^t$
project to $V_-$ and $V_+$, respectively; and
\item
the kernels of the covectors $(0,t_{012},1)$,
$(1,0,t_{012})$, and $(t_{012},1,0)$
project to lines tangent to the boundary
of the convex domain $\Omega$ which is
the image of the developing map.
\end{enumerate}
This choice of developing map is unique up to isotopy,
given our choices of $\phi,\tilde{t},\tilde{e}$.
Hence for each point in $\mathcal{T}_+(S)$ this fixes a unique developing map, and each such developing map determines a unique point in $\mathcal{T}_+(S)$ provided that the holonomy around the cusp is parabolic.
\begin{figure}[h]
\begin{center}
\includegraphics{ourparams_final.pdf}
\end{center}
\caption[Our standard developing map]{Our standard developing map with labels in our modified framework, where
$V_0 = \begin{pmatrix} 1 & 0 & 0 \end{pmatrix}^t$,
$V_1 = \begin{pmatrix} 0 & 1 & 0 \end{pmatrix}^t$,
$V_2 = \begin{pmatrix} 0 & 0 & 1 \end{pmatrix}^t$, and
$v_0 = \begin{pmatrix} 0 & t_{012} & 1 \end{pmatrix}$,
$v_1 = \begin{pmatrix} 1 & 0 & t_{012} \end{pmatrix}$,
$v_2 = \begin{pmatrix} t_{012} & 1 & 0\end{pmatrix}.$
}
\label{fig:ourparams}
\end{figure}
\section{Periphery}
\label{sec:periphery}
From the developing map, we may now calculate the holonomy. To this end, it suffices to determine its values on generators of the fundamental group.
A marking of $\pi_1(S)$ is chosen as follows.
Adjacent to $\tilde{t}$ are three other
triangles which project to the same
triangle $\phi(\tilde{t}')$ of the ideal triangulation
of $S$. In the cyclic
order induced by the orientation
of $\tilde{t}$, let these triangles be $c,m,y$, with
$y$ being the triangle adjacent to $\tilde{e}$.
Then we may choose for generators the deck
transformations $r,g,b$, where $r$ takes
$m$ to $y$, $g$ takes $y$ to $c$, and $b$ takes
$c$ to $m$.
The images of these deck transformations
are now simple to calculate; we just need
to calculate representatives $(V,v)$ as above
for $c,m,$ and $y$, and then use Theorem~\ref{thm:proj}
to get the holonomy. We can just
focus on $y$, and the other two will follow
by symmetry.
We already know two flags of $y$: namely,
the first two standard basis vectors and
the first two associated covectors.
To solve for the other vertex of $y$,
we use the edge parameters $e_{01}$ and $e_{10}$.
To solve for the associated line through
this vertex, we use the other face parameter and the definition of these parameters as triple ratios.
Solving for some element of the vertex of $y$ (regarding the vertex as
a one-dimensional subspace of $\bkR^3$) gives the vertex $U_2$ (see \S\ref{app:listings}, Listing~\ref{list:maxima6}, for the computation):
\[
U_2 = \left \langle \begin{pmatrix} (e_{1 0}^3 + 1) \cdot t_{0 1 2}^2\\
(e_{0 1}^3 + 1) \cdot e_{1 0}^3\\
-e_{1 0}^3 \cdot t_{0 1 2}
\end{pmatrix} \right \rangle.
\]
By symmetry (or by independent calculations), we conclude
\[
U_0 = \left \langle \begin{pmatrix} -e_{2 1}^3 \cdot t_{0 1 2}\\
(e_{2 1}^3 + 1) \cdot t_{0 1 2}^2\\
(e_{1 2}^3 + 1) \cdot e_{2 1}^3
\end{pmatrix} \right \rangle,
\qquad \qquad
U_1 = \left \langle \begin{pmatrix} (e_{2 0}^3 + 1) \cdot e_{0 2}^3\\
-e_{0 2}^3 \cdot t_{0 1 2}\\
(e_{0 2}^3 + 1) \cdot t_{0 1 2}^2
\end{pmatrix} \right \rangle.
\]
We have now found the other vertices of our configurations $c,m,y$,
so we have all the triangle parts. Care has to be taken to keep the vertices
in a consistent order, so that the holonomy comes out without any extraneous rotation.
(Indeed, the holonomy for $S_{0,3}$ differs only in this respect.)
We next need to compute covector representatives of the
lines through the $U$-vertices. We solve for these
using the other face parameter $t_{210}$ (see \S\ref{app:listings}, Listings~\ref{list:maxima9} and \ref{list:maxima10}).
This gives:
\[
u_2 = \left \langle \begin{pmatrix} e_{0 1}^3 \cdot e_{1 0}^3\ &
t_{0 1 2}^2 \cdot t_{2 1 0}^3\ &
t_{0 1 2} \cdot \left ( e_{0 1}^3 \cdot t_{2 1 0}^3 +
t_{2 1 0}^3 + e_{0 1}^3 \cdot e_{1 0}^3 + e_{0 1}^3
\right ) \end{pmatrix} \right \rangle
\]
and again, by symmetry or independent computation, we conclude
\[
u_0 = \left \langle \begin{pmatrix}
t_{0 1 2} \cdot \left ( e_{1 2}^3 \cdot t_{2 1 0}^3 +
t_{2 1 0}^3 + e_{1 2}^3 \cdot e_{2 1}^3 + e_{1 2}^3
\right )\ &
e_{1 2}^3 \cdot e_{2 1}^3\ &
t_{0 1 2}^2 \cdot t_{2 1 0}^3 \end{pmatrix} \right \rangle
\]
and
\[
u_1 = \left \langle \begin{pmatrix}
t_{0 1 2}^2 \cdot t_{2 1 0}^3\ &
t_{0 1 2} \cdot \left ( e_{2 0}^3 \cdot t_{2 1 0}^3 +
t_{2 1 0}^3 + e_{2 0}^3 \cdot e_{0 2}^3 + e_{2 0}^3
\right )\ &
e_{2 0}^3 \cdot e_{0 2}^3 \end{pmatrix} \right \rangle.
\]
Recall that $r$ takes $m$ to $y$, $g$ takes $y$ to $c$, and $b$ takes $c$ to $m$.
So with the above results, we can define the trilaterals, completing our
construction of the configurations we need for
the monodromy calculation (see \S\ref{app:listings}, Listing~\ref{list:maxima11}).
The formul\ae\ for $r$, $g$, and $b$ are complicated
and not particularly illuminating,
so we leave them in the internals of the computer.
From the above, the fundamental group of $S = S_{1,1}$ has
the presentation $\langle r,g,b\ |\ b\cdot g \cdot r\rangle$ and a fixed marking.
The element $r\cdot g \cdot b$ is a peripheral
element, representing a simple closed loop around the cusp. As stated in
\S\ref{sec:Convex hull constructions}, finite volume requires this
peripheral element to have parabolic holonomy.
This means that its characteristic
polynomial is of the form $k\cdot(\lambda - 1)^3$. We
may calculate the conditions this imposes on the
Fock-Goncharov parameters.
A direct computation (see \S\ref{app:listings}, Listing~\ref{list:maxima12}) shows that
the characteristic polynomial of $r\cdot g\cdot b$ is proportional to
$(\lambda - T^3)\cdot(E^3 \cdot \lambda - T^3)\cdot(T^6 \cdot \lambda - E^3),$
where $T$ is the product of the face parameters and $E$ is the product of the
edge parameters. We therefore have:
\begin{lemma}\label{lem:parhol}
A strictly convex projective structure
on $S_{1,1}$ has parabolic peripheral holonomy
(and finite volume) if and only if
the product of the face parameters and
the product of the edge parameters both equal 1.
\end{lemma}
To sum up, we now have an identification of $\mathcal{T}_+(S_{1,1})$ with
\[
\{ (t_{012}, t_{210}, e_{01}, e_{10}, e_{02}, e_{20}, e_{12}, e_{21}) \in \bkR^8_+ \mid t_{012}t_{210}=1, \ e_{01} e_{10} e_{02} e_{20} e_{12} e_{21}=1 \}.
\]
\section{Cells}
\label{sec:canonicity}
An algorithm to compute the ideal cell decompositions of Cooper and Long~\cite{CL}
was recently described by Tillmann and Wong~\cite{WoTi}, based on an algorithm for hyperbolic surfaces by Weeks~\cite{MR1241189}.
This algorithm takes as starting point a fixed ideal
triangulation of $S_{g,n}$ and then computes the
canonical ideal cell decomposition associated to a point
in moduli space using an edge flipping algorithm
followed by possibly deleting redundant edges. This
allows one to keep track of the marking, the isotopy class
of every intermediate ideal triangulation, and the
isotopy class of the final ideal cell decomposition.
We make use of a portion of this work as follows.
Let $p=(t_{012}, t_{210}, e_{01}, e_{10}, e_{02}, e_{20}, e_{12}, e_{21}) \in \mathcal{T}_+(S_{1,1})$
and choose a light-cone representative of the cusp; we pick not
a standard basis vector, but the slightly different vector
\[
S_0 = \begin{pmatrix} e_{2 0} \cdot e_{0 2} \cdot e_{2 1} \\ 0 \\ 0 \end{pmatrix}.
\]
This represents the image of the terminal endpoint of
$\tilde{e}$ as well as the standard basis vector does.
The orbit of this vector under the holonomy of
the fundamental group is some collection of vectors,
including the elements
\[
S_1 = b^{-1}.S_0 = \begin{pmatrix} 0 \\ e_{0 1} \cdot e_{1 0} \cdot e_{0 2} \\ 0 \end{pmatrix},
\qquad \qquad
S_2 = g.S_0 = \begin{pmatrix} 0 \\ 0 \\ e_{1 2} \cdot e_{2 1} \cdot e_{1 0} \end{pmatrix}
\]
which represent the other vertices. Note that $S_1$ represents
the initial endpoint of $\tilde{e}$. Let $\omega$ be the covector such
that $\omega.S_0 = \omega.S_1 = \omega.S_2 = 1$. Under
our assumption of parabolic holonomy, we may write
\[
\omega = \begin{pmatrix} e_{0 1} \cdot e_{1 0} \cdot e_{1 2} &
e_{1 2} \cdot e_{2 1} \cdot e_{2 0} &
e_{2 0} \cdot e_{0 2} \cdot e_{0 1} \end{pmatrix}.
\]
That is, the plane $P$ through $S_0,S_1,S_2$ is given as $P = \{v : \omega.v = 1 \}$.
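Each of the three pairings $\omega.S_i$ multiplies out to the product of all six edge parameters, which equals $1$ by Lemma~\ref{lem:parhol}, so $\omega$ is indeed given by the displayed formula. A quick numerical check in Python (ours, not an appendix listing; the helper name is hypothetical):

```python
def cusp_plane_pairings(e01, e10, e02, e20, e12, e21):
    """Return omega.S_i for i = 0, 1, 2, with the vectors S_i and the
    covector omega written out as in the text; each pairing multiplies
    out to the product of all six edge parameters."""
    S0 = (e20 * e02 * e21, 0.0, 0.0)
    S1 = (0.0, e01 * e10 * e02, 0.0)
    S2 = (0.0, 0.0, e12 * e21 * e10)
    omega = (e01 * e10 * e12, e12 * e21 * e20, e20 * e02 * e01)
    return [sum(w * v for w, v in zip(omega, S)) for S in (S0, S1, S2)]
```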
Then $p \in {\mathcal{C}}(\Delta_\ast)$ if and only if for every $\gamma\in \Gamma$ we have
$\omega.(\gamma.S_0) \geq 1$, i.e.\thinspace when no element
of the orbit of $S_0$ lies strictly on the same side of $P$ as the origin. It was shown in
\cite{WoTi} that it suffices to check this locally, thus turning it into a finite problem.
For our purposes, this means that
$p \in {\mathcal{C}}(\Delta_\ast)$ is equivalent to the condition that
$\omega.v \geq 1$ for all $v \in \{r.S_0, g.S_1, b.S_2\}$. By construction, $r.S_0 = g^{-1}.S_1$.
Therefore call the quantity $\omega.(r.S_0)$ \emph{yellow bending};
we denote yellow bending by $YB$.\footnote{We wish to emphasise that we make the natural choice of using a two letter acronym here.}
We call the condition $YB \geq 1$ \emph{yellow consistency}.
In the case of yellow consistency, the convex hull is non-concave along the associated edge.
We call the condition $YB = 1$ \emph{yellow flatness}.
We make similar definitions for cyan and magenta, with
$CB$ and $MB$ being their associated bendings.
Note that if one deletes $\phi(\tilde{e})$
from $\Delta_\ast$ to get an ideal quadrilation
$\Delta'$ of $S$, then for all points $p$ in moduli
space, $YB(p) = 1$ is equivalent to
$p \in \mathcal{C}(\Delta')$. Likewise for the other
edges.
Consistency occurs when all bendings
are greater than or equal to one.
Now, the bendings are all rational
functions, so consistency is a semi-algebraic
condition. To show that the set of
canonical structures is a cell, we will
show the following.
\begin{lemma}\label{lem:flatdisc}
The semi-algebraic set determined by the cyan
flatness condition is a smooth, properly embedded cell of codimension 1
in $\mathcal{T}_+(S_{1,1})$.
\end{lemma}
\begin{lemma}\label{lem:flatdisjoint}
The cyan, yellow and magenta flatness conditions are pairwise disjoint.
\end{lemma}
Assuming the lemmata, we can now prove the main theorem:
\begin{proof}[Proof of Theorem \ref{thm:main}]\label{proof of main}
Using our modification of Fock and Goncharov's coordinates
as described above, $\mathcal{T}_+(S_{1,1})$ is identified
with a properly embedded 6--disc in the positive
orthant of $\bkR^8$. Since the action of the mapping
class group of $S_{1,1}$ is transitive on the set of
all ideal triangulations of $S_{1,1},$ as well as on the
set of all ideal quadrilations of $S_{1,1},$ it suffices
to show that
\begin{enumerate}
\item one of the sets $\mathring{\mathcal{C}}(\Delta),$
where $\Delta$ is an arbitrary but fixed ideal
triangulation, is an ideal cell; and
\item one of the sets $\mathring{\mathcal{C}}(\Delta'),$
where $\Delta'$ is an arbitrary but fixed ideal
quadrilation, is an ideal cell.
\end{enumerate}
The latter is the contents of Lemma~\ref{lem:flatdisc}
for $\Delta'$ the ideal quadrilation obtained from $\Delta_\ast$ by
deleting the cyan edge. Hence we turn to the former.
Let $\Delta_0,$ $\Delta_1$ and $\Delta_2$ denote the
three ideal quadrilations obtained by deleting one of
the three ideal edges from $\Delta_\ast.$ Then the
frontier of $\mathring{\mathcal{C}}(\Delta)$ is
contained in
$\mathring{\mathcal{C}}(\Delta_0)\cup \mathring{\mathcal{C}}(\Delta_1)\cup \mathring{\mathcal{C}}(\Delta_2).$
For each ideal quadrilation $\Delta_i$, it follows from
Lemma~\ref{lem:flatdisc} that
$\mathring{\mathcal{C}}(\Delta_i)$
is a smooth properly embedded 5-disc. Whence each
$\mathring{\mathcal{C}}(\Delta_i)$
cuts $\mathcal{T}_+(S_{1,1})$ into two 6--discs. Now since
any two $\mathring{\mathcal{C}}(\Delta_i)$ and
$\mathring{\mathcal{C}}(\Delta_j)$ are disjoint by
Lemma~\ref{lem:flatdisjoint} it follows that
$\mathcal{T}_+(S_{1,1}) \setminus \bigcup_i \mathring{\mathcal{C}}(\Delta_i)$
consists of four open 6--discs, and that the one 6--disc with all three 5--discs in its boundary is
$\mathring{\mathcal{C}}(\Delta).$
The statement that $\mathring{\mathcal{C}}$ is a bijection now follows from the well-known fact that any two ideal triangulations of $S$ are related by a finite sequence of edge flips.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:flatdisc}]
We can
solve $CB = 1$ for $e_{2 0}.$ (See \S\ref{app:listings}, Listing~\ref{list:maxima16}.)
Let
\begin{align}\label{eqn:topc}
\top_c &= e_{0 1} \cdot e_{1 0}^2 \cdot e_{1 2}^2 \cdot e_{2 1} \cdot t_{0 1 2}
- e_{1 2}^3 - 1,\\
\label{eqn:botc}
\bot_c &= e_{2 1}^3 \cdot t_{0 1 2} + t_{0 1 2}
- e_{0 1}^2 \cdot e_{1 0} \cdot e_{1 2} \cdot e_{2 1}^2.
\end{align}
Then
\[
(CB = 1)\ \equiv\ (e_{1 0}\cdot e_{1 2}^2 \cdot t_{0 1 2} \cdot \bot_c \cdot e_{2 0}
= e_{2 1} \cdot \top_c).
\]
So when we project the subset $\mathcal{C}$ cut
out by cyan flatness onto the $t_{0 1 2}, e_{0 1}, e_{1 0}, e_{1 2}, e_{2 1}$
plane, the image decomposes into three pieces:
\[
(\bot_c > 0 \land \top_c > 0) \lor
(\bot_c = 0 \land \top_c = 0) \lor
(\bot_c < 0 \land \top_c < 0),
\]
with the fiber over every point of the
first and last pieces a single point
(since we get the graph of
$e_{2 0} = e_{2 1} \cdot \top_c / (e_{1 0} \cdot e_{1 2}^2 \cdot t_{0 1 2} \cdot \bot_c)$
over these regions), and the fiber over a point in the
middle piece a whole $\bkR_+$.
We will show that the first and last
pieces are 5-discs, and that the middle
piece is a 3-disc. Then $CB = 1$ will
be the union of two 5-discs and a
4-disc (the product of the middle
3-disc with $\bkR_+$) in their boundaries; this union is
again a 5-disc.
Now, if $\sim$ is one of $<,>,=$, then
\[
(\top_c \sim 0) \equiv t_{0 1 2}
\sim \frac{e_{1 2}^3 + 1}
{e_{0 1} \cdot e_{1 0}^2 \cdot e_{1 2}^2 \cdot e_{2 1}},
\qquad \qquad
(\bot_c \sim 0) \equiv t_{0 1 2}
\sim \frac{e_{0 1}^2 \cdot e_{1 0} \cdot e_{1 2} \cdot e_{2 1}^2}
{e_{2 1}^3 + 1}.
\]
Let $p = (e_{1 2}^3 + 1)/(e_{0 1} \cdot e_{1 0}^2 \cdot e_{1 2}^2 \cdot e_{2 1})$
and $q = e_{0 1}^2 \cdot e_{1 0} \cdot e_{1 2} \cdot e_{2 1}^2/(e_{2 1}^3 + 1)$.
Let $\uparrow$ and $\downarrow$ denote
the maximum and minimum operators respectively.
The first piece is equivalent to $t_{0 1 2} > p \uparrow q$, which is
the region above the graph of a function (viz. $p \uparrow q$); this region
is a 5-disc.
Now, $p$ and $q$ are both positive functions, assuming
their arguments are positive. So the last piece is
the region between the graph of a positive function (viz. $p \downarrow q$) and the
$e_{0 1},e_{1 0},e_{1 2},e_{2 1}$-plane; this is again just a 5-disc.
For the middle piece, one sees that
$p = q$ is equivalent to
\[
e_{0 1}^3 = \frac{(e_{1 2}^3 + 1) \cdot (e_{2 1}^3 + 1)}
{e_{1 0}^3 \cdot e_{1 2}^3 \cdot e_{2 1}^3},
\]
which is the graph of a positive function
(the cube root of the right-hand side) over $\bkR^3_+,$ which is a 3-disc.
\end{proof}
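The description of the middle piece is easy to test numerically: substituting the displayed cube root for $e_{01}$ makes $p = q$, and since $p$ is strictly decreasing and $q$ strictly increasing in $e_{01}$, any other value separates them. A Python sketch (ours, not an appendix listing; the function names are hypothetical):

```python
def p_val(e01, e10, e12, e21):
    """p = (e12^3 + 1) / (e01 * e10^2 * e12^2 * e21), from the proof."""
    return (e12 ** 3 + 1.0) / (e01 * e10 ** 2 * e12 ** 2 * e21)

def q_val(e01, e10, e12, e21):
    """q = e01^2 * e10 * e12 * e21^2 / (e21^3 + 1), from the proof."""
    return (e01 ** 2 * e10 * e12 * e21 ** 2) / (e21 ** 3 + 1.0)

def e01_on_middle_piece(e10, e12, e21):
    """The graph over R^3_+ describing the middle piece p = q."""
    return (((e12 ** 3 + 1.0) * (e21 ** 3 + 1.0))
            / (e10 ** 3 * e12 ** 3 * e21 ** 3)) ** (1.0 / 3.0)
```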
\begin{proof}[Proof of Lemma \ref{lem:flatdisjoint}]
To show disjointness, it will suffice to show that
the projections of $\mathcal{C}(Q_c)$ and $\mathcal{C}(Q_y)$
to the $t_{0 1 2},e_{0 1},e_{1 0},e_{1 2},e_{2 1}$-plane
are disjoint, where $Q_e$ is the ideal quadrilation
obtained from $\Delta_\ast$ by forgetting $e$.
To that end, we repeat the computation from the proof of the previous lemma for
yellow flatness (see \S\ref{app:listings}, Listing~\ref{list:maxima17}).
Based on this, define
\begin{align*}
\top_y &= e_{0 1}\cdot e_{1 0}^3 \cdot e_{1 2}^2 \cdot e_{2 1} \cdot t_{0 1 2}
+e_{0 1}\cdot e_{1 2}^2 \cdot e_{2 1} \cdot t_{0 1 2} - e_{1 0},\\
\bot_y &= e_{0 1}\cdot t_{0 1 2} - e_{0 1}^3 \cdot e_{1 0} \cdot e_{1 2}\cdot e_{2 1}^2
-e_{1 0}\cdot e_{1 2} \cdot e_{2 1}^2.
\end{align*}
The projection of the cyan flat-set
to this plane decomposes into
the pieces shown earlier, and
the projection of the yellow flat-set
decomposes likewise.
Algorithms for cylindrical algebraic
decomposition can return a list
containing a point from every cell
of this decomposition. We may run
such an algorithm in {\tt Sage}\rm~\cite{sage} on the
intersection of the cyan and yellow
flat-set projections (see \S\ref{app:listings}, Listings~\ref{list:SAGE} and \ref{list:SAGEout}).
The computation used \texttt{qepcad}, and the output is
a list containing a point from every cell in the intersection
of the projections of the cyan and yellow flat-sets
to the $t_{0 1 2},e_{0 1},e_{1 0},e_{1 2},e_{2 1}$-plane.
This list is empty; therefore, the intersection is empty.
\end{proof}
\section{The thrice-punctured sphere}
\label{sec:S03}
An ideal triangulation of the thrice-punctured sphere also consists of three properly embedded arcs, and hence divides the sphere into two ideal triangles. So there are six edge invariants and two triangle invariants. However, as there are three cusps, there are three holonomy conditions to ensure that the peripheral elements are parabolic. Using the same set-up as in Figure~\ref{fig:ourparams}, but taking into account that each of the three indicated deck transformations now fixes the respective vertex of the triangle, a direct computation yields an identification of $\mathcal{T}_+(S_{0,3})$ with the set
\[
\{ (t_{012}, t_{210}, e_{01}, e_{10}, e_{02}, e_{20}, e_{12}, e_{21}) \in \bkR^8_+ \mid t_{012}t_{210}=1, \ e_{01}= \frac{1}{e_{10}}= e_{12}=\frac{1}{e_{21}} = e_{20} = \frac{1}{e_{02}} \},
\]
showing that $\mathcal{T}_+(S_{0,3})$ is 2--dimensional as proven by Marquis~\cite{Marq1}. We will simply write $(t_{012}, e_{01}) \in \mathcal{T}_+(S_{0,3}).$ The result of the convex hull construction of Cooper and Long~\cite{CL} now depends, for each point in $\mathcal{T}_+(S_{0,3}),$ on the lengths of the light-cone representatives for the three cusps, up to scaling all of them by the same factor. Whence there is an ideal cell decomposition of $S_{0,3}$ associated to each point in the \emph{decorated} moduli space $\widetilde{\mathcal{T}}_+(S_{0,3}),$ and the latter can be identified with the positive orthant in $\bkR^5.$
\begin{figure}[h]
\centering
\includegraphics[width=5.5cm]{arcpod.pdf}
\caption{Flip graph of the thrice-punctured sphere}
\label{fig:arcpod}
\end{figure}
Using Alexander's trick, it is an elementary exercise
to determine that there are exactly four ideal
triangulations and three ideal quadrilations of
$S_{0,3}$. The \emph{flip graph} is the tripod shown
in Figure~\ref{fig:arcpod}, where the quadrilations are
obtained as intersections of the two pictures at the
ends of each dotted arc. Let $\Delta_\ast$ be the ideal
triangulation with the property that there is an arc
between any two punctures. The punctures are labelled
by $0,$ $1$ and $2,$ and $\Delta_i$ is the triangulation
obtained from $\Delta_\ast$ by performing an edge flip on
the edge not meeting $i \in \{0, 1, 2\}.$ Moreover, the
mapping class group of $S_{0,3}$ is naturally isomorphic
with the group of all permutations of the three cusps.
\begin{theorem}\label{thm:main2}
The set
\[
\{ \mathring{\mathcal{C}}(\Delta) \mid \Delta \text{ is an ideal cell decomposition of } S_{0,3}\}
\]
is an ideal cell decomposition of $\widetilde{\mathcal{T}}_+(S_{0,3})$
that is invariant under the action of the mapping class group. Moreover, $\mathring{\mathcal{C}}$ is a natural bijection between the cells and the ideal cell decompositions of $S_{0,3}.$
\end{theorem}
\begin{proof}
Using the above set-up, let $(e_{01}, t_{012}) \in \mathcal{T}_+(S_{0,3}),$ and
choose light-cone representatives $V_0 = \begin{pmatrix} \frac{1}{\omega_0} \\ 0 \\ 0 \end{pmatrix}$,
$V_1 = \begin{pmatrix} 0 \\ \frac{1}{\omega_1} \\ 0 \end{pmatrix}$,
$V_2 = \begin{pmatrix} 0 \\ 0 \\ \frac{1}{\omega_2} \end{pmatrix}.$
The corresponding point in $\widetilde{\mathcal{T}}_+(S_{0,3})$ is identified with
$(t_{012}, e_{01}, \omega_0, \omega_1, \omega_2) \in \bkR^5_+.$
The three convexity conditions associated to the edges of $\Delta_\ast$ are equivalent to
\begin{align*}
\omega_1 - \omega_2\; t_{012} + \omega_0\; t^2_{012} &\ge 0,\\
\omega_2 - \omega_0\; t_{012} + \omega_1\; t^2_{012} &\ge 0,\\
\omega_0 - \omega_1\; t_{012} + \omega_2\; t^2_{012} &\ge 0.
\end{align*}
These conditions define $\mathcal{C} (\Delta_\ast).$ It is interesting to note
that they are independent of $e_{01},$ whence the decomposition is the
product of a decomposition of $\bkR^4$ with $\bkR.$
Using these conditions, it is straightforward to check that
${\mathcal{C}}(\Delta_\ast)$ is an ideal cell. Moreover,
\[
\mathring{\mathcal{C}}(\Delta_0) = \{ (t_{012}, e_{01}, \omega_0, \omega_1, \omega_2) \in \bkR^5_+ \mid
\omega_2 - \omega_0\; t_{012} + \omega_1\; t^2_{012}<0\},
\]
and likewise for $\Delta_1$ and $\Delta_2$ by cyclically permuting the appropriate subscripts. This divides $\bkR^5_+$ into four open 5--balls along three properly embedded open 4--balls, and the dual skeleton to this decomposition is naturally identified with the flip graph of $S_{0,3}.$ The invariance by the action of the mapping class group follows since it acts as the group of permutations on $\{0, 1, 2\}.$
\end{proof}
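As a numerical sanity check on the partition in the proof (not part of the argument), the following Python sketch, with our own function names and sampling scheme, verifies that at most one of the three convexity conditions can fail at any point of $\bkR^5_+$, so that every point lies in exactly one of the four open cells or on one of the three walls; which failed inequality corresponds to which $\Delta_i$ follows the labelling in the proof.

```python
import random

def edge_terms(t, w0, w1, w2):
    # the three convexity expressions from the proof (independent of e_01)
    return (w1 - w2 * t + w0 * t**2,
            w2 - w0 * t + w1 * t**2,
            w0 - w1 * t + w2 * t**2)

def cell_index(t, w0, w1, w2):
    """Index of the unique failed convexity condition, or None if the
    point lies in the closed cell C(Delta_*)."""
    negative = [i for i, x in enumerate(edge_terms(t, w0, w1, w2)) if x < 0]
    assert len(negative) <= 1, "two conditions can never fail simultaneously"
    return negative[0] if negative else None

random.seed(1)
for _ in range(10000):
    point = [random.lognormvariate(0.0, 2.0) for _ in range(4)]
    cell_index(*point)          # raises if the partition claim failed
```

That two conditions cannot fail at once follows from positivity of the coordinates: e.g.\ the second inequality failing forces $\omega_2 < \omega_0 t_{012}$, which makes the first expression strictly positive.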
\section{Cloverleaf patches}
\label{sec:clovers}
Of course, we would now like to explore how the geometry of a marked strictly convex projective structure relates to its relative position within a cellular subset of moduli space, and how the geometry varies as one moves around in moduli space or towards the ``boundary," i.e.\thinspace as at least one coordinate becomes very small or very large.
Structures near the boundary might be difficult to
draw properly, given an arbitrary affine patch.
The image $\Omega$ of the developing map might go off to
infinity or become thin, yielding uninformative pictures.
We must take care, then, with our choice of affine patch.
One such choice is \emph{Benz\'ecri position},
discussed in \cite{Benz, CLT2}. Our investigations here
led us to discover another such choice,
which we call a \emph{cloverleaf patch}.
\label{def:cloverleaf}
Let $S$ be an oriented marked convex projective
surface with geodesic boundary. Let $\Delta_\ast$
be an ideal triangulation of $S$. Let $\tilde{t}$
and $\tilde{e}$ be an adjacent triangle and
edge in the lift $\widetilde{\Delta_\ast}$
of $\Delta_\ast$ through a universal cover $\phi$.
Let $\dev$ be the developing map satisfying our choices above, viz.
\begin{enumerate}
\item the standard
basis for $\bkR^3$ projects to
the image under $\dev$
of the vertices of $\tilde{t}$;
\item $(1,0,0)^t$ and $(0,1,0)^t$
project to $V_-$ and $V_+$, respectively; and
\item
the kernels of the covectors $(0,t_{012},1)$,
$(1,0,t_{012})$, and $(t_{012},1,0)$
project to lines tangent to the boundary
of the convex domain $\Omega$ that is
the image of $\dev$,
where $t_{012}$ is the parameter associated to $\tilde{t}$.
\end{enumerate}
Finally, let
\[
\omega = \begin{pmatrix} e_{0 1} \cdot e_{1 0} \cdot e_{1 2} &
e_{1 2} \cdot e_{2 1} \cdot e_{2 0} &
e_{2 0} \cdot e_{0 2} \cdot e_{0 1} \end{pmatrix}
\]
and let $P$ be the affine patch given by $\omega.v = 1$.
Then $P$ is the \emph{cloverleaf patch} of the structure
on $S$ with respect to $\tilde{t}$ and $\tilde{e}$.
\begin{theorem}\label{thm:cloverleaf}
Let $\Omega'$ be the image of $\Omega$ under
the affine isomorphism $\alpha$ between a cloverleaf
patch and $\bkR^2 = \mathbb{C}$ that
sends the vertices of $\tilde{t}$ to
the cube roots of unity and the vertices
of $\tilde{e}$ to the primitive cube
roots of unity.
Then $\Omega'$ contains the triangle
whose vertices are the cube roots of
unity, and is contained in the union
of unit discs centered at the cube
roots of $-1.$
\end{theorem}
\begin{figure}
\begin{center}
\scalebox{0.5}{\includegraphics{cloverleaf.pdf}}
\end{center}
\caption{Three vertices and supporting hyperplanes in a cloverleaf patch.}\label{fig:cloverleaf}
\end{figure}
\begin{proof}
The domain $\Omega'$ by definition contains the
image of $\tilde{t}$, and the image
of $\tilde{t}$ under $\alpha$ is,
by definition, the triangle whose
vertices are the cube roots of unity. This
concludes the proof of the first claim.
Let
\[
\rho =
\begin{pmatrix}
\frac{e_{20} \cdot e_{02}}
{e_{12} \cdot e_{10}} & 0 & 0\\
0 & \frac{e_{01} \cdot e_{10}}
{e_{20} \cdot e_{21}} & 0\\
0 & 0 & \frac{e_{12} \cdot e_{21}}
{e_{01} \cdot e_{02}}
\end{pmatrix},
\]
and let $\sigma$ be as in (\ref{eqn:sigma}).
Then $\sigma . \rho$ is an element of
$\SL(3, \bkR)$ which permutes the
vertices of the image of $\tilde{t}$.
But it also permutes the covectors
$v_0 = \begin{pmatrix} 0 & t_{012} & 1 \end{pmatrix}$,
$v_1 = \begin{pmatrix} 1 & 0 & t_{012} \end{pmatrix}$, and
$v_2 = \begin{pmatrix} t_{012} & 1 & 0\end{pmatrix}$.
We've chosen our developing map so that
the kernels of these covectors project to
tangent lines to $\partial \Omega$ at the
vertices of the image of $\tilde{t}$.
So $\Omega$ lies within
the trilateral
$\tau = \{ V \mid v_i.V > 0 \text{ for all } i \in \{0,1,2\}\}$.
This trilateral is symmetric under $\sigma . \rho$,
so its image is symmetric under $s = \alpha \circ \sigma . \rho \circ \alpha^{-1}$.
But $s$ is an order 3 affine automorphism of $\bkR^2$
permuting the image of the vertices of $\tilde{t}$, the
cube roots of unity. So $s$ is just $2\pi/3$ rotation
about the origin. Therefore the image of
$\tau$ is a trilateral with an order 3 rotational
symmetry. Hence it is an equilateral trilateral
circumscribed about the equilateral triangle
formed by the cube roots of unity.
Using elementary Euclidean geometry, it is
easy to see that this trilateral must
lie in the region described, the
union of the unit discs centered
at cube roots of $-1.$ (See Figure~\ref{fig:cloverleaf}.)
Since $\Omega'$ lies inside this trilateral,
it lies inside the region as well.
\end{proof}
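The containment statement can also be probed numerically. In the sketch below (the parametrization by the supporting-line angle $\theta$ is our own), the order-3-symmetric trilateral whose sides pass through the cube roots of unity is built for a range of supporting angles, and sampled points of the trilateral are checked to lie in the union of the closed unit discs centred at the cube roots of $-1$:

```python
import cmath, math, random

W = cmath.exp(2j * math.pi / 3)        # primitive cube root of unity
CENTRES = [cmath.exp(1j * math.pi * (2 * k + 1) / 3) for k in range(3)]  # cube roots of -1

def corner(theta):
    """Intersection of the side through 1 (direction angle theta) with its
    rotation by 2*pi/3; the other corners are W*corner and W**2*corner."""
    a, u = 1 + 0j, cmath.exp(1j * theta)   # side through the vertex 1
    b, v = W, W * u                        # rotated side through W
    c = b - a
    s = (v.conjugate() * c).imag / (v.conjugate() * u).imag
    return a + s * u

def in_union(z, tol=1e-9):
    return min(abs(z - c) for c in CENTRES) <= 1 + tol

random.seed(0)
for theta in [math.radians(d) for d in range(35, 146, 5)]:
    corners = [corner(theta) * W**k for k in range(3)]
    assert all(in_union(z) for z in corners)   # corners lie on the disc boundaries
    for _ in range(200):                       # random points of the trilateral
        l = sorted(random.random() for _ in range(2))
        w0, w1, w2 = l[0], l[1] - l[0], 1 - l[1]
        assert in_union(w0 * corners[0] + w1 * corners[1] + w2 * corners[2])
```

By the inscribed angle theorem, each corner in fact lies exactly on the boundary of one of the discs, so the bound of the theorem is sharp.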
\begin{figure}[h]
\centering
\includegraphics[width=3.4cm]{sage0.pdf}
\includegraphics[width=3.4cm]{sage1.pdf}
\includegraphics[width=3.4cm]{sage2.pdf}
\includegraphics[width=3.4cm]{sage3.pdf}
\includegraphics[width=3.4cm]{sage4.pdf}
\includegraphics[width=3.4cm]{sage5.pdf}
\includegraphics[width=3.4cm]{sage6.pdf}
\includegraphics[width=3.4cm]{sage7.pdf}
\caption{A degenerating sequence of projective structures in cloverleaf position all lying in the same cell of moduli space and with the first and last pictures close to the boundary at infinity. The parameters are
$(1/\sqrt[3]{2}, \sqrt[3]{2}, 1, \sqrt[3]{4}, 1, 1, 2^\mu, 2^{-\mu-2/3})$
for $\mu \in \{-2.5 + i/2 : 0 \leq i < 8\}$, and the domains appear to converge to polygons.
}
\label{fig:cloversequence}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=3.4cm]{sage0s03.pdf}
\includegraphics[width=3.4cm]{sage1s03.pdf}
\includegraphics[width=3.4cm]{sage2s03.pdf}
\includegraphics[width=3.4cm]{sage3s03.pdf}
\includegraphics[width=3.4cm]{sage4s03.pdf}
\includegraphics[width=3.4cm]{sage5s03.pdf}
\includegraphics[width=3.4cm]{sage6s03.pdf}
\includegraphics[width=3.4cm]{sage7s03.pdf}
\caption{A degenerating sequence of projective structures on $S_{0,3}$ in clover position all lying in the same cell of moduli space and with the first and last pictures close to the boundary at infinity.}
\label{fig:cloversequenceS03}
\end{figure}
This set-up, and the explicit determination of the cell decomposition for $S_{1,1}$, allow a systematic study of deformations and degenerations of strictly convex projective structures on the once-punctured torus, which will be conducted in the future. See Figure~\ref{fig:cloversequence} for an example. Moreover, there is scope to do explicit computations for other surfaces of low complexity---the main issue is that the rational functions appearing in the coordinates of orbits of the vertices of the fundamental domain become more complicated. Such examples are of particular interest because there are different possible approaches to compactifying the moduli space (just as there are different compactifications of Teichm\"uller space), whose relative merits can be explored with our tools. Moreover, there should be geometric invariants and properties associated with the relative position of a point in moduli space within the cell containing it. Again, it is hoped that our tools can be used to determine, and quantify, such invariants.
\section{Conclusion}
\label{sec:conclusion}
This paper gives evidence that results known about classical Teichm\"uller space may have analogues in projective geometry. In this paper, we have focussed on cusped strictly convex projective surfaces and highlighted possible applications in the previous section. More generally, the parameter space due to Fock and Goncharov also parameterises strictly convex projective structures with geodesic boundary. Cell decompositions for the analogous space of hyperbolic structures are known using a variety of approaches; see, for instance, the work of Ushijima~\cite{Ushijima1999}, Penner~\cite{Penner2004}, Mondello~\cite{Mondello2009}, and Guo and Luo~\cite{GL2011}. Extending our methods to these more general surfaces would provide a unified framework, in which cusps can open to boundary components, and boundary components shrink to cusps.
An even more tantalising problem arises when going to higher dimensions. The constructions due to Epstein and Penner~\cite{MR918457} and Cooper and Long~\cite{CL} work in arbitrary dimensions. Moreover, there is a canonical cell decomposition of hyperbolic manifolds with boundary due to Kojima~\cite{koj1990, koj}. Whereas the moduli space of complete hyperbolic structures on a finite-volume hyperbolic manifold is a single point if the dimension is at least three,
the moduli space of strictly convex projective structures on such a manifold may be larger as shown by Cooper, Long and Thistlethwaite~\cite{MR2264468, MR2372851, MR2529905}. At the time of writing, it is not clear why some hyperbolic 3--manifolds deform whilst others do not, and the study of the decomposition of the decorated moduli space may hold the key to the answer, as well as shed light on other connections between the geometry and topology of a manifold.
\subparagraph*{Acknowledgements}
This research was partially supported by Australian Research Council grant DP140100158.
\section{Introduction}
\label{sec: Intro}
Waveguide quantum electrodynamics (QED) is a modern field of research focused on the study of light-matter interactions in one dimension. The confinement of electromagnetic radiation to a single spatial dimension allows one to achieve a significant enhancement of the coupling between atoms and fields, as well as to attain a better matching between the spatial modes of the emitting and absorbing atoms \cite{Lalumiere_PRA_2013, Arcari_2014}. Apart from the purely academic interest in the study of strong light-matter coupling, a great deal of motivation in this field of research comes from the technological sector, namely from quantum computing \cite{Angelakis_2007, Kockum_PRL_2018, Paulisch_2016, Vermersch_2017, Xu_2018, Zheng_2013}. One of the most prominent examples is the so-called quantum network: a system of quantum processors interconnected by quantum radiation channels propagating quantum information and entanglement between them \cite{Kimble_2008}.
Although photons are excellent carriers of quantum information, capable of high-fidelity entanglement and information transfer \cite{Kimble_2015, Mitsch_2014, Sipahigil_2017}, recent advancement in quantum electronics offers a large variety of alternatives. Most commonly, nowadays, waveguide QED setups are realized in experiments with superconducting quantum circuits, where superconducting transmission lines act as quantum radiation channels, whereas the Josephson-junction-based superconducting quantum bits play the role of quantum emitters \cite{Astafiev_Sci_2010, Hoi_2015, Loo_2013}. Other examples include superconducting qubits coupled to propagating phonons (surface acoustic waves) employing the piezoelectric effect \cite{Chu_2017, Gustafsson_2014, Manenti_2017,Andersson_Nat_Phys_2019}, and surface plasmons coupled to quantum dots \cite{Akimov_2007}.
Theoretically, the atom-field interactions in waveguide QED can often be accurately treated in the so-called Markovian limit $\gamma\tau\ll1$, where $\gamma$ and $\tau$ stand for the characteristic decay rate and time delay in the system \cite{Guimond_QST_2017}. Over the last few decades, a significant number of theoretical approaches for tackling waveguide QED problems within this limit have been proposed. In particular, Markovian waveguide QED systems are commonly examined with the help of master equation-based approaches \cite{Lalumiere_PRA_2013, Lehmberg_1970_1, Lehmberg_1970_2, You_2018, Mirza_2016, Li_2009, Lin_2019}. Indeed, within the framework of the associated input-output formalism \cite{Lalumiere_PRA_2013, Fan_2010, Xu_2015}, Lindblad-type equation approaches allow one to study transmission and reflection characteristics, as well as the photonic correlations, for arbitrary initial photon states, including coherent, thermal, and Fock ones. Moreover, at a rather modest expense, a substantial variety of theoretical tools is available for the derivation of master equations: equation of motion-based methods \cite{Lalumiere_PRA_2013, Lehmberg_1970_1, Lehmberg_1970_2}, path integral techniques \cite{Shi_2015}, and the SLH formalism \cite{Combes_2017}, to mention just a few. Recently, the master equation-based approach was generalized to capture the effects of temporal modulation of the system parameters \cite{PPG_2017,Liao_2020}, such as the light-matter coupling strength and the transition frequencies of quantum emitters.
\par
Another frequently used method of studying waveguide QED systems within the Markov approximation is the coordinate-space Bethe ansatz \cite{Shen_2015, Yudson_2008, Yudson_1998, Zheng_2010, Shen_2005, Tsoi_2008, Cheng_2017, Liao_2016}. Within this approach, one is able to determine the exact stationary eigenstates of the system's Hamiltonian in a subspace of a given excitation number, which, in turn, enables one to calculate stationary observables as well as photonic correlation functions exactly. Moreover, the Bethe ansatz technique was shown to be applicable to the studies of real-time dynamics of few-photon states \cite{Yudson_1998, Chen_2011, Dinc_2019}, systems where photon-photon bound states occur \cite{Facchi_2016, Zheng_2010}, and systems with delayed coherent feedback \cite{Calajo_2019}. Despite its scalability to the cases of multiple emitters and emitters with complicated level structures, this approach is known to be strongly limited by the number of excitations in the system due to the rapid increase of complexity of the resulting Bethe wavefunctions \cite{Fang_2014, Yudson_1998, Shen_2015, Fang_2017}.
\par
Another group of systematic methods for studying waveguide QED systems in the limit of negligible delay times comprises the field theory-based approaches. For example, in \cite{Shi_2009, Shi_2011, Shi_2013}, by representing the atomic operators in terms of the slave fermions, the authors were able to employ the path integral representation of the field correlation functions, from which the $S$-matrix may be established by means of the celebrated Lehmann-Symanzik-Zimmermann reduction formula. Another field theoretical method to be mentioned is the so-called diagrammatic resummation theory developed in \cite{Pletyukhov_New_J_Phys_2012, Pletyukhov_2015, Pedersen_PRA_2017, Tian_Feng_2017}. Within the framework of resummation theory, one sums the perturbative series of Feynman diagrams for the matrix elements of the transition operator to infinite orders, which, in turn, allows one to determine the exact $S$-matrix, with the help of which all of the stationary observables may be calculated \cite{Pletyukhov_2015}.
\par
Although the physics behind waveguide QED systems is easily accessible in the Markovian limit using a large variety of theoretical tools, there exist a number of problems forcing one to go beyond this approximation \cite{Guimond_QST_2017, Rivas_2014, Breuer_2016, Sinha_prl_2020, Sinha_pra_2020}. As is well known, in the single excitation subspace, all of the dynamical and stationary information about the system may be easily obtained (either analytically or numerically) by means of the Bethe ansatz for a system of arbitrary complexity \cite{Guo_2020, Guo_Phys_Rev_2017, Dinc-2020, Dorner_2002}. Primarily, this assertion has to do with the fact that it is relatively easy to conceive a closed system of (delay) differential equations governing the evolution of the system. Even though it is possible to proceed in the same manner in higher excitation subspaces \cite{Redchenko_2014}, the calculations become much more cumbersome and lack systematicity. In the course of the last decade, a number of theoretical approaches allowing one to overcome these difficulties were put forward. On the numerical side, for example, significant progress in the dynamical simulation of non-Markovian 1D quantum systems was achieved within the framework of the matrix product state ansatz \cite{Grimsm_2015, Pichler_2016, Fin_2020, Wang_2017}. Despite the complexity associated with non-Markovian waveguide QED systems, a few analytical methods were also recently suggested. In particular, it is a common practice to generalize the Lindblad-type equation approaches to the non-Markovian realm \cite{Shi_2015, Chen_2018, Wu_2010, Tan_2011}. Although generalized master equations can only provide exact results in the case of linear scatterers, they allow for a systematic approximate treatment of systems with nonlinearities such as qubits.
\par
Another common approach to waveguide QED problems with delayed coherent quantum feedback is based on the diagrammatic resummation theory. In recent years resummation ideas were successfully applied to solve the two-photon scattering and dynamics in the systems with two distant qubits \cite{Laakso_PRL_2014}, a single qubit in front of a mirror \cite{Guimond_QST_2017}, a giant acoustic atom \cite{Guo_Phys_Rev_2017}, and a qubit coupled to a resonator array \cite{Koc_2016}.
\par
In this paper, we present a systematic generalization of the diagrammatic approach to scattering of multiphoton states in waveguide QED. Our approach is based on the exact resummation of the perturbation theory for the transition operator, which turns out to be possible due to the conservation of the excitation number guaranteed by the rotating wave approximation. The advantage of our approach is its insensitivity to the form of the light-matter coupling constants, thus allowing one to potentially examine any kind of waveguide QED system, including systems with delayed coherent quantum feedback. We start by expounding the method on the direct example of $1$-qubit waveguide QED. This framework lays down a basis for further calculations, in particular the two- and three-photon scattering theory for a general number of qubits. We then apply the developed theory to a weakly coherent pulse scattering on a giant acoustic atom, an intrinsically non-Markovian system. In particular, we consider the scattering of a coherent state with a small enough coherence parameter $|\alpha|\ll1$, chosen such that the terms of order $|\alpha|^{4}$ can be ignored. With the help of the general methods developed in Section \ref{sec: Scat_Theor}, we compute an exact output state of radiation and study the particle correlations in it with the help of the theory of optical coherence. In particular, we compute the first, second, and third-order coherence functions to the lowest order in $|\alpha|$ and discuss the impact of the non-Markovianity of the scatterer on the observable quantities.
\section{Scattering Theory}
\label{sec: Scat_Theor}
In this section, we first set up the notations used throughout the paper. Next, we introduce the general formalism in the framework of the single-qubit waveguide QED. In particular, we extend the findings of the preceding papers \cite{Guimond_QST_2017, Pletyukhov_New_J_Phys_2012, Pedersen_PRA_2017, Guo_Phys_Rev_2017} to the realm of non-Markovian models by allowing for arbitrary momentum dependence of waveguide modes and radiation-qubit couplings. This development, in turn, allows one to study multi-photon scattering problems in systems with non-linear dispersion of the modes supported by the radiation channels, as well as in systems with artificial feedback loops, such as a qubit placed in front of a mirror or a giant acoustic atom.
\par
Further on, we generalize the scattering formalism to systems with a higher number of qubits, where quantum feedback loops are naturally present due to the finite time required for a photon to propagate between a given pair of distant emitters. Specifically, we discuss two- and three-photon scattering problems on an arbitrary number of emitters, extending the approach of \cite{Laakso_PRL_2014}.
\par
Alongside this, we discuss several practical issues, such as the separation of elastic contributions to the scattering matrices and the generalized cluster decomposition.
\subsection{Hamiltonian, generalized summation convention, and the $S$-matrix}
\label{sec: Conventions}
Let us consider a collection of $N_{q}$ qubits coupled to a waveguide with $N_{c}$ radiation channels. The Hamiltonian of such a system assumes the following form $\mathcal{H}=\mathcal{H}_{0}+\mathcal{V}$, where
\begin{align}
\label{eq: Bare_Hamiltonian}
\mathcal{H}_{0}=&\sum_{n=1}^{N_{q}}\Omega_{n}\sigma_{+}^{(n)}\sigma_{-}^{(n)}+\sum_{\mu=1}^{N_{c}}\int_{B_{\mu}}dk\omega_{\mu}(k)a^{\dagger}_{\mu}(k)a_{\mu}(k),
\end{align}
is the free Hamiltonian, and
\begin{align}
\label{eq: inter}
\mathcal{V}=&\sum_{\mu=1}^{N_{c}}\sum_{n=1}^{N_{q}}\int_{B_{\mu}}dk[g_{\mu, n}(k)a^{\dagger}_{\mu}(k)\sigma_{-}^{(n)} + \text{h.c.} ]
\end{align}
is the dipole light-matter interaction in the rotating wave approximation (RWA). In the above expression, $\Omega_{n}$ is the transition frequency of the $n^{\text{th}}$ qubit, the dispersion relation $\omega_{\mu}(k)$ and the bandwidth $B_{\mu}$ characterize the radiation channel $\mu$, while $a^{\dagger}_{\mu}(k)$ and $a_{\mu}(k)$ stand for the creation and annihilation field operators of a photon with momentum $k$ and obey the standard bosonic commutation relations:
\begin{align}
\label{eq: commutator_boson_zero}
[a_{\mu}(k), a^{\dagger}_{\mu'}(k')]=&\delta_{\mu, \mu'}\delta(k-k'),\\
\label{eq: commutator_boson_non-zero}
[a_{\mu}^{\dagger}(k), a^{\dagger}_{\mu'}(k')]=&[a_{\mu}(k), a_{\mu'}(k')]=0.
\end{align}
The operators acting on the Hilbert space of the $n^{\text{th}}$ qubit, $\{\sigma_{3}^{(n)}, \sigma_{+}^{(n)}, \sigma_{-}^{(n)}\}$, are defined according to $\sigma^{(n)}_{l}=1_{\mathbb{C}^{2}}^{\otimes(n-1)}\otimes\sigma_{l}\otimes1_{\mathbb{C}^{2}}^{\otimes(N_{q}-n)}$, with the $\sigma$-matrices being chosen according to the standard convention
\begin{align}
\label{eq: Paulis}
\sigma_{3}=\begin{pmatrix}1 &0 \\ 0& -1 \end{pmatrix}, \quad \sigma_{+}=\sigma_{-}^{\dagger}=\begin{pmatrix} 0 & 1\\ 0& 0 \end{pmatrix}.
\end{align}
\par
The RWA is justified as long as the characteristic operational frequency $\omega_{0}$ is such that the condition $|g_{\mu}^{2}(\omega_{0})/\omega_{0}|\ll1$ is satisfied \cite{Mandel_Wolf}.
One of the main benefits of this approximation is the conservation of the total number of excitations in the system, i.e., the operator $\mathcal{N}=\sum_{n=1}^{N_q}\sigma_{+}^{(n)}\sigma_{-}^{(n)}+\sum_{\mu=1}^{N_c}\int_{B_{\mu}}dka_{\mu}^{\dagger}(k)a_{\mu}(k)$ commutes with the full Hamiltonian, $[\mathcal{N}, \mathcal{H}]=0$. This property allows us to simultaneously diagonalize $\mathcal{H}$ and $\mathcal{N}$. Moreover, the eigenspaces of the Hamiltonian labeled by the eigenvalues of $\mathcal{N}$ are mutually orthogonal, hence there exists a direct sum decomposition of the total Hilbert space of the system
\begin{equation}
\label{eq: Direct_sum_hilbert}
\mathscr{H}=\bigoplus_{N=0}^{\infty}\mathscr{H}_{N},
\end{equation}
where $\mathscr{H}_{N}$ is the $D(N, N_{q})=\sum_{l=0}^{\min(N_{q}, N)}\frac{N_{q}!}{l!(N_{q}-l)!}$ dimensional subspace of all possible states with $N$ excitations. Note that here $D(N, N_{q})$ does not stand for the dimension of $\mathscr{H}_{N}$ in the strict mathematical sense. Instead, it is a number of ways to distribute $N$ excitations in a system of $N_{q}$ qubits (i.e., each single-photon infinite-dimensional vector space adds unity to $D(N, N_{q})$). Analogous direct sum decompositions hold for the Hamiltonian, unitary evolution operator, and the $S$-matrix (to be introduced later). Moreover, such a decomposition considerably simplifies the problem since the calculations may be performed in all of the subspaces separately.
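For illustration, the count $D(N, N_{q})$ is easily tabulated; a minimal Python sketch (the function name is ours):

```python
from math import comb

def num_outcomes(n_exc, n_qubits):
    """D(N, N_q): choose which l <= min(N, N_q) qubits carry an excitation;
    the remaining N - l excitations are photons in the continuum."""
    return sum(comb(n_qubits, l) for l in range(min(n_exc, n_qubits) + 1))

# e.g. two excitations shared between a single qubit and the field:
# |2 photons> x |g>  or  |1 photon> x |e>
assert num_outcomes(2, 1) == 2
```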
\par
To simplify our further analysis, it is useful to introduce compact notations. Thus we define a multi-index $s=(\mu, k)$ and the generalized summation convention: if two multi-indices are repeated, summation over the channel index $\mu$ and integration with respect to the momentum $k$ (over the relevant bandwidth $B_{\mu}$) is implied. We also introduce the following notation for the bare interaction vertex
\begin{equation}
\label{eq: Bare_vertex_sc}
v_{s}=\sum_{n=1}^{N_{q}}g_{\mu, n}(k)\sigma^{(n)}_{-}.
\end{equation}
Using these conventions we rewrite the Hamiltonian as
\begin{align}
\label{eq: Bare_Hamiltonian_compact}
\mathcal{H}=&\mathcal{H}_{0}+\mathcal{V}, \quad \mathcal{H}_{0}=\omega_{s}a^{\dagger}_{s}a_{s}+\sum_{n=1}^{N_{q}}\Omega_{n}\sigma_{+}^{(n)}\sigma_{-}^{(n)}, \\
\label{eq: Potential_compact}
\mathcal{V}=&a^{\dagger}_{s}v_{s}+v_{s}^{\dagger}a_{s}.
\end{align}
\par
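The excitation-number conservation discussed above is easy to verify on a finite-dimensional surrogate of the Hamiltonian (\ref{eq: Bare_Hamiltonian_compact})--(\ref{eq: Potential_compact}): a single qubit coupled to one discretized mode with a Fock-space cutoff (a Jaynes--Cummings-type toy model; the parameter values below are arbitrary and the variable names are ours). The RWA coupling commutes with the excitation-number operator, whereas a counter-rotating term would not:

```python
import numpy as np

n_ph = 5                                            # photon Fock-space cutoff
a = np.diag(np.sqrt(np.arange(1, n_ph)), 1)         # truncated annihilation operator
sm = np.array([[0., 0.], [1., 0.]])                 # sigma_- in the basis (|e>, |g>)
sp = sm.T                                           # sigma_+
I2, Iph = np.eye(2), np.eye(n_ph)

Omega, omega, g = 1.0, 0.9, 0.1                     # arbitrary toy values
num = np.kron(sp @ sm, Iph) + np.kron(I2, a.T @ a)  # excitation-number operator N
H_rwa = (Omega * np.kron(sp @ sm, Iph) + omega * np.kron(I2, a.T @ a)
         + g * (np.kron(sm, a.T) + np.kron(sp, a)))  # a^dag sigma_- + h.c.
H_cr = g * (np.kron(sp, a.T) + np.kron(sm, a))       # counter-rotating term

assert np.allclose(num @ H_rwa, H_rwa @ num)         # [N, H] = 0 under the RWA
assert not np.allclose(num @ H_cr, H_cr @ num)       # broken without the RWA
```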
The scattering matrix, or the $S$-matrix, is the main object of interest in the present section. It can be generally defined via the so-called transition operator, or the $T$-matrix, in the following way:
\begin{align}
\label{eq: SMatrix_def}
\mathcal{S}=&1_{\mathscr{H}}-2\pi{i}\delta(\epsilon_{i}-\epsilon_{f})\mathcal{T}(\epsilon_{i}), \\
\label{eq: TMatrix_def}
\mathcal{T}(\epsilon)=&\mathcal{V}+\mathcal{V}\mathcal{G}(\epsilon)\mathcal{V},
\end{align}
where the $T$-matrix is put on-shell, i.e. $\epsilon=\epsilon_i$, and the energies $\epsilon_{i}, \epsilon_{f}$, corresponding to initial and final states of the system, obey the energy conservation in the scattering processes, which is mathematically ensured by the delta-function. In (\ref{eq: TMatrix_def}) we have denoted by $\mathcal{G}(\epsilon)$ the retarded Green's operator defined according to
\begin{align}
\label{eq: Green_operator}
\mathcal{G}(\epsilon)=&\frac{1}{\epsilon-\mathcal{H}+i\eta}=\sum_{n=0}^{\infty}(\mathcal{G}_{0}(\epsilon)\mathcal{V})^n \mathcal{G}_{0}(\epsilon),\\
\label{eq: Bare_Green_operator}
\mathcal{G}_{0}(\epsilon)=&\frac{1}{\epsilon-\mathcal{H}_{0}+i\eta}, \quad \eta\rightarrow0^{+}.
\end{align}
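The geometric (Dyson) expansion of the resolvent in the last line can be sanity-checked on a finite-dimensional surrogate, where both sides are plain matrices; the toy Hamiltonian and parameter values below are our own, chosen so that the series converges:

```python
import numpy as np

rng = np.random.default_rng(42)
dim, eps, eta = 4, 0.3, 0.5
H0 = np.diag(rng.uniform(-1.0, 1.0, dim))            # diagonal "free" Hamiltonian
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
V = 0.01 * (A + A.conj().T)                          # weak hermitian perturbation

z = (eps + 1j * eta) * np.eye(dim)
G0 = np.linalg.inv(z - H0)
G_exact = np.linalg.inv(z - H0 - V)

G_series, term = np.zeros_like(G0), G0.copy()
for _ in range(60):                                   # partial sums of sum_n (G0 V)^n G0
    G_series += term
    term = G0 @ V @ term

assert np.allclose(G_series, G_exact, atol=1e-10)
```

Convergence is guaranteed here because $\|\mathcal{G}_{0}\mathcal{V}\|<1$ for the chosen (weak) perturbation and finite $\eta$.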
\subsection{General properties of the scattering matrix in waveguide QED}
\label{sec: Properties}
Let us consider the scattering problem for the following initial state $\ket{N_{p}}\otimes\ket{g}$, where $\ket{N_{p}}$ is a $N_{p}$-photon state, and $\ket{g}=\ket{0}^{\otimes{N_{q}}}$ is the ground state of the scatterer. Due to the conservation of the excitation-number operator, all of the possible $D(N_{p}, N_{q})$ scattering outcomes must contain $N_{p}$ excitations. Since for any system of qubits the ground state $\ket{g}$ spans the only non-decaying (subradiant) subspace, in the long-time limit (a priori assumed in scattering theory) all of the emitters will definitely decay into the continuum, leaving us with the only possibility for the system to end up in the state $\ket{N_{p}'}\otimes\ket{g}$ (here $\ket{N_{p}'}$ is again a $N_{p}$-photon state, with potentially redistributed momenta) [\onlinecite{pisotski}]. If one wishes to extend the scattering theory to systems with metastable ground states, such as, e.g., a $\Lambda$-type three-level system, one has to calculate additional matrix elements of the transition operator \cite{Trivedi_2018,Xu_Phys_Rev_2017}.
\par
Since the only matrix elements we are interested in are diagonal in both the photon and qubit spaces, due to the RWA the only terms contributing to the perturbation expansion of the $T$-matrix are those containing an even number of interactions $\mathcal{V}$, thus reducing the number of diagrams by half. Another important feature to be mentioned is the nilpotency of the photon-qubit interaction vertex operator, $v_{s_{1}}^{\dagger}...v_{s_{N_{q}+1}}^{\dagger}=v_{s_{1}}...v_{s_{N_{q}+1}}=0$, which, along with the property $v_{s}\ket{g}=\bra{g}v_{s}^{\dagger}=0$ and the fact that the number of $v$'s and $v^{\dagger}$'s has to be equal in each graph contributing to the expansion, significantly reduces the number of non-zero diagrams at each order in perturbation theory. In fact, as we are going to see in this section, all of the diagrams contributing to the series for any fixed $N_{q}$ may be constructed out of a finite number of ``clusters'', in turn allowing, in principle, for the exact resummation of the perturbation series. One can also organize the calculation differently, namely by fixing $N_{p}$ and allowing $N_{q}$ to vary instead. The calculation within this approach is facilitated, in its turn, by the fact that $a_{s_{1}}...a_{s_{N_{p}+1}}\ket{N_{p}}=\bra{N_{p}}a_{s_{1}}^{\dagger}...a_{s_{N_{p}+1}}^{\dagger}=0$ and the fact that the number of $a$'s and $a^{\dagger}$'s in each term of the perturbation series has to be equal. The two approaches are clearly dual due to the structure of the interaction potential in the RWA. The latter approach is beneficial when studying few-photon scattering on a large number of qubits. This assertion has to do with the fact that once the $N_{p}$-photon scattering problem on $N_{q}=N_{p}$ qubits is solved, the scattering of $N_{p}$ particles on $N_{q}>N_{p}$ qubits follows the same lines.
Indeed, since $v_{s}$ ($v_{s}^{\dagger}$) by itself contains the sum of all of the single-qubit lowering (raising) operators, and the highest number of qubits that the $N_{p}$-photon pulse can excite equals $N_{p}$, the normal-ordered $N_{p}$-photon $S$-matrix cannot contain projectors on subspaces with a higher excitation number than $N_{p}$. Using this fact, in the following we derive the generic two- and three-photon scattering matrices by considering two- and three-particle scattering on two and three atoms, respectively.
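The nilpotency used above is immediate to confirm numerically for a small system; the following sketch (our own construction, with arbitrary couplings) checks $v^{2}\neq0$ but $v^{3}=0$ for $N_{q}=2$, together with $v\ket{g}=0$:

```python
import numpy as np

sm = np.array([[0., 0.], [1., 0.]])                  # sigma_- in the basis (|e>, |g>)
I2 = np.eye(2)
g1, g2 = 0.7, 1.3                                    # arbitrary coupling constants
v = g1 * np.kron(sm, I2) + g2 * np.kron(I2, sm)      # collective lowering operator

assert np.any(v @ v != 0)                            # v^2 != 0 for two qubits
assert np.allclose(np.linalg.matrix_power(v, 3), 0)  # v^{N_q + 1} = 0
g = np.array([0., 0., 0., 1.])                       # ground state |gg>
assert np.allclose(v @ g, 0)                         # v |g> = 0
```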
\subsection{Scattering theory in $N_{q}=1$ waveguide QED}
\label{sec:Q=1}
In this subsection, we would like to make a detailed exposition of our general method by considering the simplest imaginable scenario of a single qubit coupled to a waveguide. As it was anticipated in Subsection \ref{sec: Properties}, to solve the $N_{p}$-photon scattering problem, we shall determine the following matrix elements of the transition operator
\begin{align}
\label{eq: Cll1_1}
&\Braket{N_{p}', g|\mathcal{T}^{(1)}(\epsilon)|N_{p}, g}=\Braket{N_{p}', g|\mathcal{V}\mathcal{G}(\epsilon)\mathcal{V}|N_{p}, g}\\
\label{eq: Cll1_3}
&=\Braket{N_{p}', g|a^{\dagger}_{s_{1}'}v_{s_{1}'}\mathcal{G}(\epsilon)v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g}\\
\label{eq: Cll1_4}
&=\sum_{n=0}^{\infty}\Braket{N_{p}', g|a^{\dagger}_{s_{1}'}v_{s_{1}'}(\mathcal{G}_{0}(\epsilon)\mathcal{V})^{n}\mathcal{G}_{0}(\epsilon)v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g},
\end{align}
where the superscript $(1)$ refers to the $N_{q}=1$ waveguide QED and we have used the fact that $\Braket{N_{p}', g|\mathcal{V}|N_{p}, g}=0$. Note that in what follows, whenever an argument of an object is omitted, we understand that its argument is $\epsilon$, e.g.\ $\mathcal{G}^{(1)}$ stands for $\mathcal{G}^{(1)}(\epsilon)$, etc.; however, when the argument of a given operator or vertex function is of importance, it will be stated explicitly. As mentioned above, due to the RWA, only even terms contribute to the above geometric series, so that
\begin{align}
\nonumber
&\Braket{N_{p}', g|\mathcal{T}^{(1)}|N_{p}, g}\\
\label{eq: Cll1_5}
&=\sum_{n=0}^{\infty}\Braket{N_{p}', g|a^{\dagger}_{s_{1}'}v_{s_{1}'}(\mathcal{G}_{0}\mathcal{V}\mathcal{G}_{0}\mathcal{V})^{n}\mathcal{G}_{0}v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g}.
\end{align}
Now, let us consider the $\mathcal{V}\mathcal{G}_{0}\mathcal{V}$ term in the brackets above
\begin{align}
\nonumber
\mathcal{V}\mathcal{G}_{0}\mathcal{V}=&a^{\dagger}_{s'}v_{s'}\mathcal{G}_{0}a^{\dagger}_{s}v_{s}+a^{\dagger}_{s'}v_{s'}\mathcal{G}_{0}v_{s}^{\dagger}a_{s}\\
\label{eq: Cll1_6}
+&v_{s'}^{\dagger}a_{s'}\mathcal{G}_{0}v_{s}^{\dagger}a_{s}+v_{s'}^{\dagger}a_{s'}\mathcal{G}_{0}a^{\dagger}_{s}v_{s}.
\end{align}
Clearly, the terms with two $v$s and two $v^{\dagger}$s vanish by the single-qubit nilpotency condition $v^{2}=(v^{\dagger})^{2}=0$. Among the ``diagonal'' terms, the only non-zero contribution comes from the $v^{\dagger}v$ term, since the $vv^{\dagger}$ term annihilates the state $v^{\dagger}\ket{g}$ and gives zero whenever it is multiplied by a $v^{\dagger}v$ term. Hence, we arrive at
\begin{align}
\nonumber
&\Braket{N_{p}', g|\mathcal{T}^{(1)}|N_{p}, g}\\
\label{eq: Cll1_7}
&=\sum_{n=0}^{\infty}\bra{N_{p}', g}a^{\dagger}_{s_{1}'}v_{s_{1}'}(\mathcal{G}_{0}v_{s'}^{\dagger}a_{s'}\mathcal{G}_{0}a^{\dagger}_{s}v_{s})^{n}\mathcal{G}_{0}v^{\dagger}_{s_{1}}a_{s_{1}}\ket{N_{p}, g}.
\end{align}
Now, by using the commutation relations (\ref{eq: commutator_boson_zero}) and (\ref{eq: commutator_boson_non-zero}), one may easily establish the following intertwining property of bosonic operators \cite{Pletyukhov_New_J_Phys_2012}
\begin{align}
\label{eq: CII1_8}
a_{s}f(\mathcal{H}_{0})&=f(\mathcal{H}_{0}-\omega_{s})a_{s},\\
f(\mathcal{H}_{0})a_{s}^{\dagger}&=a_{s}^{\dagger}f(\mathcal{H}_{0}-\omega_{s}),
\end{align}
where $f$ is some function admitting a Maclaurin expansion $f(z)=\sum_{n=0}^{\infty}f_{n}z^{n}$. So, we see that
\begin{align}
\nonumber
v_{s'}^{\dagger}a_{s'}\mathcal{G}_{0}(\epsilon)a^{\dagger}_{s}v_{s}&=v_{s}^{\dagger}\mathcal{G}_{0}(\epsilon-\omega_{s})v_{s}\\
\label{eq: CII1_9}
&+a^{\dagger}_{s'}v_{s}^{\dagger}\mathcal{G}_{0}(\epsilon-\omega_{s'}-\omega_{s})v_{s'}a_{s},
\end{align}
where in the last term of (\ref{eq: CII1_9}) the contraction of the indices $s$ and $s'$ is not assumed. By defining the self-energy operator $\Sigma^{(1)}$ and the effective potential energy operator $\mathcal{R}^{(1)}$ as
\begin{align}
\label{eq: CII1_10}
\Sigma^{(1)}(\epsilon)=&v_{s}^{\dagger}\mathcal{G}_{0}(\epsilon-\omega_{s})v_{s}, \quad \mathcal{R}^{(1)}(\epsilon)=a^{\dagger}_{s'}\mathcal{R}_{s', s}^{(1)}(\epsilon)a_{s}, \\
\label{eq: CII1_11}
\mathcal{R}_{s', s}^{(1)}(\epsilon)=&v_{s}^{\dagger}\mathcal{G}_{0}(\epsilon-\omega_{s'}-\omega_{s})v_{s'},
\end{align}
and resumming the perturbative series in (\ref{eq: Cll1_7}), we conclude that
\begin{align}
\nonumber
&\Braket{N_{p}', g|\mathcal{T}^{(1)}|N_{p}, g}\\
\label{eq: CII1_12}
&=\Bra{N_{p}', g}a^{\dagger}_{s_{1}'}v_{s_{1}'}\frac{1}{1-{\mathcal{G}}^{(1)}\mathcal{R}^{(1)}}{\mathcal{G}}^{(1)}v^{\dagger}_{s_{1}}a_{s_{1}}\ket{N_{p}, g},
\end{align}
where we have introduced the Green's operator $\mathcal{G}^{(1)}$ via $(\mathcal{G}^{(1)}(\epsilon))^{-1}=\mathcal{G}_{0}^{-1}(\epsilon)-\Sigma^{(1)}(\epsilon)$. By defining the generating operator:
\begin{align}
\label{eq: CII1_13}
\mathcal{W}^{(1)}=\mathcal{R}^{(1)}+\mathcal{R}^{(1)}\mathcal{G}^{(1)}\mathcal{W}^{(1)},
\end{align}
we arrive at the following result:
\begin{align}
\nonumber
&\Braket{N_{p}', g|\mathcal{T}^{(1)}|N_{p}, g}\\
\nonumber
&=\Braket{N_{p}', g|a^{\dagger}_{s_{1}'}v_{s_{1}'}\mathcal{G}^{(1)}v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g}\\
\label{eq: CII1_14}
&+\Braket{N_{p}', g|a^{\dagger}_{s_{1}'}v_{s_{1}'}\mathcal{G}^{(1)}\mathcal{W}^{(1)}\mathcal{G}^{(1)}v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g}. \end{align}
By analyzing the structure of $\mathcal{R}^{(1)}$, we conclude that $\mathcal{W}^{(1)}$ admits the following series representation:
\begin{align}
\label{eq: CII1_15}
\mathcal{W}^{(1)}=\sum_{n=1}^{\infty}a^{\dagger}_{s_{1}'}...a^{\dagger}_{s_{n}'}\mathcal{W}^{(1, n)}_{s_{1}'...s_{n}', s_{1}...s_{n}}a_{s_{n}}...a_{s_{1}},
\end{align}
where the $\mathcal{W}^{(1, n)}$'s are operator-valued functions that depend only on $\epsilon-\mathcal{H}_{0}$ and the $2n$ multi-indices $\{s_{1}', ..., s_{n}', s_{1}, ..., s_{n}\}$.
\par
By inserting (\ref{eq: CII1_15}) in (\ref{eq: CII1_13}) and taking the projections onto the particle subspaces we arrive at the following hierarchy of integral equations:
\begin{widetext}
\begin{align}
\label{eq:W11_Integral}
W_{s_{1}', s_{1}}^{(1, 1)}(\epsilon)=&R_{s_{1}', s_{1}}^{(1)}(\epsilon)+R_{s_{1}', s}^{(1)}(\epsilon)G^{(1)}(\epsilon-\omega_{s})W_{s, s_{1}}^{(1, 1)}(\epsilon),\\
\nonumber
\vdots\\
\nonumber
W^{(1, n)}_{s_{1}'...s_{n}', s_{1}...s_{n}}(\epsilon)=&R_{s_{1}', s_{1}}^{(1)}\Bigg(\epsilon-\sum_{l=2}^{n}\omega_{s_{l}'}\Bigg)G^{(1)}\Bigg(\epsilon-\omega_{s_{1}}-\sum_{l=2}^{n}\omega_{s_{l}'}\Bigg)W^{(1, n-1)}_{s_{2}'...s_{n}', s_{2}...s_{n}}(\epsilon-\omega_{s_{1}})\\
\label{eq: CII1_18}
+&R_{s_{1}', s}^{(1)}\Bigg(\epsilon-\sum_{l=2}^{n}\omega_{s_{l}'}\Bigg)G^{(1)}\Bigg(\epsilon-\omega_{s}-\sum_{l=2}^{n}\omega_{s_{l}'}\Bigg)\Big[W^{(1, n)}_{ss_{2}'...s_{n}', s_{1}...s_{n}}(\epsilon)+...+W^{(1, n)}_{s_{2}'...s_{n}'s, s_{1}...s_{n}}(\epsilon)\Big]\\
\nonumber
\vdots
\end{align}
\end{widetext}
Here, we have used regular letters $W, G, R$ instead of calligraphic ones to indicate that these objects are not operators but rather their projections onto the photonic vacuum (note that the $W^{(1, n)}$'s still act as operators on the qubit space). The collection of integral equations above may be conveniently represented diagrammatically, see Figure \ref{fig: CII1_1}. The diagrammatic rules may be formulated as follows. To each dotted line associate the bare propagator $G_{0}(\epsilon)$. To each wavy line with an incoming (outgoing) arrow associate the bare absorption vertex $v^{\dagger}$ (emission vertex $v$). Whenever two wavy lines are contracted together, one has to integrate over all momenta and sum over all channels. When a given wavy line passes over a propagator, the argument of that propagator has to be shifted by the frequency carried by the line (a direct consequence of the intertwining property (\ref{eq: CII1_8})).
\begin{figure}[b]
\includegraphics[width=\columnwidth]{1qubit.pdf}
\caption{Hierarchy of integral equations governing effective multi-photon vertex functions in $N_{q}=1$ waveguide QED. Here the $\Sigma^{(1)}$ bubble corresponds to the self-energy in the subspace of a single excitation, used to define the dressed Green's function (represented by a double line) in terms of the bare Green's function (depicted by the dashed line). The $W^{(1, n)}$ bubbles (bubbles with $2n$ amputated legs) represent effective $n$-photon vertex functions describing the effective interaction between $n$ photons induced by the non-linearity. The bare absorption $v^{\dagger}$ and emission $v$ vertices are represented by black dots with incoming and outgoing photon lines, respectively. The diagrammatic rules are as stated in Sec. \ref{sec:Q=1}. }
\label{fig: CII1_1}
\end{figure}
For example, the self-energy diagram in Figure \ref{fig: CII1_1} is translated as
\begin{align}
\Sigma^{(1)}(\epsilon)=v_{s}^{\dagger}G_{0}(\epsilon-\omega_{s})v_{s}.
\end{align}
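As a worked illustration of how this expression is evaluated, one may anticipate the Markovian assumptions introduced below (linear dispersion $\omega_{\mu}(k)=\omega_{0\mu}+v_{\mu}k$, broadband limit $B_{\mu}=\mathbb{R}$, local couplings $g_{\mu}(k)=\sqrt{\gamma_{\mu}/\pi}$ carried by $v_{s}$): projecting onto the subspace with a single qubit excitation and no photons, the sum over $s$ becomes a channel sum and a momentum integral,
\begin{align}
\nonumber
\Sigma^{(1)}(\epsilon)=&\sum_{\mu=1}^{N_{c}}\frac{\gamma_{\mu}}{\pi}\int_{\mathbb{R}}\frac{dk}{\epsilon-\omega_{0\mu}-v_{\mu}k+i0^{+}}\ket{1}\bra{1}\\
\nonumber
=&-i\sum_{\mu=1}^{N_{c}}\frac{\gamma_{\mu}}{v_{\mu}}\ket{1}\bra{1},
\end{align}
where $\ket{1}\bra{1}$ projects onto the qubit's excited state and the principal-value part of the integral vanishes by symmetry, leaving only the pole contribution.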
Now having identified the equations defining the components of $\mathcal{W}^{(1)}$, we may express the transition operator as follows
\begin{equation}
\label{eq: CII1_19}
\mathcal{T}^{(1)}(\epsilon)=\sum_{n=1}^{\infty}T^{(1, n)}_{s_{1}'...s_{n}', s_{1}...s_{n}}(\epsilon)a^{\dagger}_{s_{1}'}...a_{s_{n}'}^{\dagger}a_{s_{n}}...a_{s_{1}},
\end{equation}
where we have defined the following functions
\begin{align}
\label{eq:T11}
T^{(1, 1)}_{s_{1}', s_{1}}(\epsilon)=\Braket{g|v_{s_{1}'}G^{(1)}(\epsilon)v^{\dagger}_{s_{1}}|g},
\end{align}
\begin{widetext}
\begin{align}
\label{eq: CII1_21}
T^{(1, n>1)}_{s_{1}'...s_{n}', s_{1}...s_{n}}(\epsilon)=\Braket{g|v_{s_{1}'}G^{(1)}\Bigg(\epsilon-\sum_{l=2}^{n}\omega_{s_{l}'}\Bigg)W^{(1, n-1)}_{s_{2}'...s_{n}', s_{2}...s_{n}}(\epsilon)G^{(1)}\Bigg(\epsilon-\sum_{l=2}^{n}\omega_{s_{l}}\Bigg)v^{\dagger}_{s_{1}}|g}.
\end{align}
\end{widetext}
Note that we work with the non-symmetrized forms of the operators and perform the symmetrization only in actual calculations. Also note that the dependence of $T_{s_{1}'...s_{n}', s_{1}...s_{n}}^{(1, n)}$ on $\omega_{s}a^{\dagger}_{s}a_{s}$ may be omitted on account of normal ordering: when the $S$-matrix is finally contracted with the initial state, all of the $T$-matrix components are eventually projected onto the photonic vacuum and the energies are put on-shell. Using the above representation of the transition operator we immediately deduce that the scattering operator takes the following form:
\begin{widetext}
\begin{equation}
\label{eq: CII1_22}
\mathcal{S}=1_{\mathscr{H}}-2\pi{i}\sum_{n=1}^{\infty}T^{(n)}_{s_{1}'...s_{n}', s_{1}...s_{n}}\Bigg(\sum_{l=1}^{n}\omega_{s_{l}}\Bigg)\delta\Bigg(\sum_{l=1}^{n}\omega_{s_{l}'}-\sum_{l=1}^{n}\omega_{s_{l}}\Bigg)a^{\dagger}_{s_{1}'}...a_{s_{n}'}^{\dagger}a_{s_{n}}...a_{s_{1}}.
\end{equation}
\end{widetext}
Practically, when one contracts the $S$-matrix with an initial $N_{p}$-photon state, only the first $N_{p}$ terms of the above series contribute. It is, however, beneficial to represent the $S$-matrix as a direct sum of $N$-body operators which act solely in the $N$-particle subspaces
\begin{equation}
\label{eq: CII1_23}
\mathcal{S}=1_{\mathscr{H}}\oplus\bigoplus_{n=1}^{\infty}\mathcal{S}_{n},
\end{equation}
where $\mathcal{S}_{n}$ may be written as
\begin{equation}
\label{eq:Sn}
\mathcal{S}_{n}=S^{(n)}_{s_{1}'...s_{n}', s_{1}...s_{n}}a^{\dagger}_{s_{1}'}...a_{s_{n}'}^{\dagger}a_{s_{n}}...a_{s_{1}},
\end{equation}
\begin{widetext}
\begin{align}
\label{eq:Sn_Matrix}
S^{(n)}_{s_{1}'...s_{n}', s_{1}...s_{n}}=\frac{1}{n!}\prod_{l=1}^{n}\delta_{s_{l}', s_{l}}-2\pi{i}\sum_{m=1}^{n}\frac{1}{(n-m)!}T^{(m)}_{s_{1}'...s_{m}', s_{1}...s_{m}}\Bigg(\sum_{l=1}^{m}\omega_{s_{l}}\Bigg)\delta\Bigg(\sum_{l=1}^{n}\omega_{s_{l}'}-\sum_{l=1}^{n}\omega_{s_{l}}\Bigg)\prod_{r=m+1}^{n}\delta_{s_{r}', s_{r}}.
\end{align}
\end{widetext}
Note that the combinatorial prefactors $\frac{1}{(n-m)!}, \ m\in\{0, ..., n\}$, and the additional $\delta$-functions $\delta_{s_{r}', s_{r}}$ are chosen to account for the $(n-m)!$ redundant Wick contractions.
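For concreteness, the $n=2$ instance of (\ref{eq:Sn_Matrix}) reads
\begin{align}
\nonumber
S^{(2)}_{s_{1}'s_{2}', s_{1}s_{2}}=&\frac{1}{2}\delta_{s_{1}', s_{1}}\delta_{s_{2}', s_{2}}-2\pi{i}\Big[T^{(1)}_{s_{1}', s_{1}}(\omega_{s_{1}})\delta_{s_{2}', s_{2}}\\
\nonumber
+&T^{(2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega_{s_{1}}+\omega_{s_{2}})\Big]\delta\Bigg(\sum_{l=1}^{2}\omega_{s_{l}'}-\sum_{l=1}^{2}\omega_{s_{l}}\Bigg).
\end{align}
In the first bracketed term the Kronecker delta $\delta_{s_{2}', s_{2}}$ reduces the energy-conservation factor to $\delta(\omega_{s_{1}'}-\omega_{s_{1}})$, so this term describes a process in which the second photon passes through as a spectator, while the last term carries the genuine two-photon transition matrix.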
\subsection{Generalized cluster decomposition}
\label{sec:GCD}
In this section, we would like to illustrate how the generalized cluster decomposition in waveguide QED, extensively studied in [\onlinecite{Xu_Phys_Rev_2017}-\onlinecite{Burillo_New_J_Phys_2018}], naturally follows from the results of the previous section.
\par
The cluster decomposition is a way of separating the elastic and inelastic contributions to the $S$-matrix. The elastic contribution physically corresponds to the scattering channel in which all photons scatter coherently, i.e. individually conserve their energy in the scattering process. The inelastic contribution, on the other hand, corresponds to incoherent scattering, whereby the photons redistribute their initial energy among one another via an effective photon-photon interaction mediated by their coupling to nonlinear scatterers (e.g., qubits). In contrast, in the case of a linear scatterer (e.g., a cavity mode), no effective photon-photon interaction is generated, and the $n$-photon $S$-matrix simply factorizes into a product of $n$ single-particle $S$-matrices.
\par
The first step towards the cluster decomposition of the multi-photon $S$-matrices is the realization that the multi-photon vertex functions $W^{(1, n)}$ contain disconnected components. These, in turn, arise from the projections of $R^{(1)}_{s', s}$ onto the ground state of the scatterer, resulting in (quasi)elastic contributions $\propto\delta(\epsilon-...)$ to the $W^{(1, n)}$'s. The separation of these contributions is not only crucial for the cluster decomposition but also of evident importance for both the numerical and analytical treatment of the integral equations (\ref{eq:W11_Integral}), (\ref{eq: CII1_18}). It is easiest to define the connected parts of the multi-photon vertex functions recursively. First, we observe that
\begin{align}
\label{eq: CII2_1}
R_{s', s}^{(1)}(\epsilon)=&v_{s}^{\dagger}G_{0}(\epsilon-\omega_{s'}-\omega_{s})v_{s'}\\
\nonumber
=&-i\pi\delta(\epsilon-\omega_{s'}-\omega_{s})v_{s}^{\dagger}v_{s'}\\
\nonumber
+&v_{s}^{\dagger}P\frac{1}{\epsilon-\omega_{s'}-\omega_{s}}v_{s'},
\end{align}
where $P$ stands for the Cauchy principal value. This allows us to define the connected part of the one-particle vertex function as
\begin{align}
\label{eq: CII2_2}
W_{s_{1}', s_{1}}^{(1, 1, C)}(\epsilon)=&W_{s_{1}', s_{1}}^{(1, 1)}(\epsilon)+i\pi\delta(\epsilon-\omega_{s_{1}'}-\omega_{s_{1}})v_{s_{1}}^{\dagger}v_{s_{1}'}.
\end{align}
By analyzing the structure of the general equation in the hierarchy (\ref{eq: CII1_18}) and bearing in mind the equation satisfied by $W_{s_{1}', s_{1}}^{(1, 1)}(\epsilon)$, one easily deduces that the connected part of the $n$-photon effective vertex function may be defined in the following manner
\begin{widetext}
\begin{align}
\label{eq: CII2_3}
W^{(1, n, C)}_{s_{1}'...s_{n}', s_{1}...s_{n}}(\epsilon)=&W^{(1, n)}_{s_{1}'...s_{n}', s_{1}...s_{n}}(\epsilon)-W_{s_{1}', s_{1}}^{(1, 1)}\Bigg(\epsilon-\sum_{l=2}^{n}\omega_{s_{l}'}\Bigg)G^{(1)}\Bigg(\epsilon-\omega_{s_{1}}-\sum_{l=2}^{n}\omega_{s_{l}'}\Bigg)W^{(1, n-1)}_{s_{2}'...s_{n}', s_{2}...s_{n}}(\epsilon-\omega_{s_{1}}).
\end{align}
\end{widetext}
Now, with the help of the definition (\ref{eq: CII2_2}) one may immediately deduce the following decomposition of the two-body transition operator
\begin{align}
\nonumber
&T^{(1, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega_{s_{1}}+\omega_{s_{2}})\\
\nonumber
&=-i\pi\delta(\omega_{s_{1}}-\omega_{s_{2}'})T^{(1, 1)}_{s_{1}', s_{1}}(\omega_{s_{1}})T^{(1, 1)}_{s_{2}', s_{2}}(\omega_{s_{2}})\\
\label{eq: CII2_4}
&+T^{(1, 2, C)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega_{s_{1}}+\omega_{s_{2}}),
\end{align}
where $T^{(1, 2, C)}$ is defined in precisely the same way as $T^{(1, 2)}$ but with $W^{(1, 1, C)}$ replacing $W^{(1, 1)}$. By plugging the representation (\ref{eq: CII2_4}) into the definition of the two-photon $S$-matrix (\ref{eq:Sn_Matrix}) we immediately arrive at the following cluster decomposition principle:
\begin{align}
\nonumber
\mathcal{S}_{2}=&\Bigg[\frac{1}{2}S^{(1)}_{s_{1}', s_{1}}S^{(1)}_{s_{2}', s_{2}}-2\pi{i}T^{(1, 2, C)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega_{s_{1}}+\omega_{s_{2}})\\
\label{eq: CII2_5}
\times&\delta(\omega_{s_{1}'}+\omega_{s_{2}'}-\omega_{s_{1}}-\omega_{s_{2}})\Bigg]a_{s_{1}'}^{\dagger}a_{s_{2}'}^{\dagger}a_{s_{2}}a_{s_{1}}.
\end{align}
The physical meaning of the above expression is rather clear. The first term, being a product of one-particle $S$-matrices, describes the coherent scattering of two particles, i.e. a scattering process in which there are no effective interactions between the photons. The second term, on the other hand, is completely connected and describes the incoherent scattering of photons, in which the particles redistribute their initial energy via the effective interaction. A slightly more involved, but otherwise completely analogous, calculation may be done in the three-photon sector. As a result, one arrives at the following cluster decomposition of the three-body $S$-matrix:
\begin{align}
\nonumber
\mathcal{S}_{3}=&\Bigg[\frac{1}{6}S^{(1)}_{s_{1}', s_{1}}S^{(1)}_{s_{2}', s_{2}}S^{(1)}_{s_{3}', s_{3}}-2\pi{i}T^{(1, 2, C)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega_{s_{1}}+\omega_{s_{2}})\\
\nonumber
\times&\delta(\omega_{s_{1}'}+\omega_{s_{2}'}-\omega_{s_{1}}-\omega_{s_{2}})S^{(1)}_{s_{3}', s_{3}}\\
\nonumber
-&2\pi{i}T^{(1, 3, C)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\omega_{s_{1}}+\omega_{s_{2}}+\omega_{s_{3}})\\
\nonumber
\times&\delta(\omega_{s_{1}'}+\omega_{s_{2}'}+\omega_{s_{3}'}-\omega_{s_{1}}-\omega_{s_{2}}-\omega_{s_{3}})\Bigg]\\
\label{eq: STE41}
\times&{a}_{s_{1}'}^{\dagger}a_{s_{2}'}^{\dagger}a_{s_{3}'}^{\dagger}a_{s_{3}}a_{s_{2}}a_{s_{1}},
\end{align}
where the connected part of the three-photon $T$-matrix may be found in Appendix \ref{3bodyScat}.
\subsection{Closed form solution in the Markovian limit}
\begin{figure}[b]
\includegraphics[width=\columnwidth]{NONCROSSING.pdf}
\caption{Exact diagrammatic representation of the $n$-particle $T$-matrix in the Markovian limit. The above diagrams also correspond to the non-crossing approximation discussed in Section \ref{sec: AS}. As opposed to the exact $n$-photon $T$-matrix, which is parameterized by the $(n-1)$-particle effective vertex function resumming an infinite number of emission and absorption processes in both the direct and exchange channels, this approximation replaces the vertex by a sequence of $n-2$ and $n-1$ ($n\geq 2$) alternating excitations and deexcitations of the emitter. }
\label{fig: CII3_1}
\end{figure}
Let us now consider the single-qubit scattering problem in the Markovian limit. The validity of the Markovian approximation requires a number of assumptions: a linear dispersion relation in all of the channels, $\omega_{\mu}(k)=\omega_{0\mu}+v_{\mu}k$, the broadband limit $B_{\mu}=\mathbb{R}, \ \forall\mu\in\{1, ..., N_{c}\}$, and local (i.e. frequency-independent) couplings $g_{\mu}(k)=\sqrt{\gamma_{\mu}/\pi}$. Under these assumptions, the self-energy diagram reads $\Sigma^{(1)}(\epsilon)=-i\sum_{\mu=1}^{N_{c}}\gamma_{\mu}/v_{\mu}\ket{1}\bra{1}$, which is independent of $\epsilon$ (here we have introduced the projector onto the qubit's excited state, $\ket{1}\bra{1}=\frac{1+\sigma_{3}}{2}$). By causality, all of the $W$ functions are analytic in the momentum (energy) variables in the lower (upper) half of the complex plane. By exploiting this analyticity, along with the momentum independence of both the coupling constants and the self-energy, we can close all of the integration contours in the lower half of the complex momentum plane, rendering all of the integrals zero and thus promoting the integral equations to algebraic ones. As a result, we recover the solution obtained in [\onlinecite{Pletyukhov_New_J_Phys_2012}]. Diagrammatically, the above solution is equivalent to the so-called non-crossing approximation (which becomes exact in the Markovian limit); it is represented in Figure \ref{fig: CII3_1}.
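As a quick numerical sanity check of the Markovian solution, the following sketch (a minimal illustration rather than part of the formalism; it assumes a single chiral channel with $v_{\mu}=1$, so that the total decay rate is $\Gamma=\gamma$, and absorbs all normalization factors into the on-shell amplitude) builds the dressed propagator $G^{(1)}(\omega)=(\omega-\Omega+i\Gamma)^{-1}$ and verifies that the resulting single-photon scattering amplitude $s(\omega)=1-2i\Gamma\,G^{(1)}(\omega)=(\omega-\Omega-i\Gamma)/(\omega-\Omega+i\Gamma)$ is unimodular, as required by unitarity in the purely elastic single-photon sector:

```python
import numpy as np

OMEGA = 1.0   # qubit transition frequency (arbitrary units)
GAMMA = 0.1   # decay rate Gamma = gamma / v with v = 1 (assumed convention)

def dressed_green(omega):
    """Dressed propagator G^(1)(omega) in the single-excitation subspace,
    with the frequency-independent Markovian self-energy Sigma^(1) = -i*GAMMA."""
    return 1.0 / (omega - OMEGA + 1j * GAMMA)

def s_amplitude(omega):
    """On-shell single-photon scattering amplitude for a chiral channel."""
    return 1.0 - 2j * GAMMA * dressed_green(omega)

omegas = np.linspace(OMEGA - 5 * GAMMA, OMEGA + 5 * GAMMA, 101)
s = s_amplitude(omegas)

# Unitarity: |s(omega)| = 1 at every frequency (scattering is purely elastic).
assert np.allclose(np.abs(s), 1.0)

# On resonance the photon acquires a pi phase shift: s(OMEGA) = -1.
assert np.isclose(s_amplitude(OMEGA), -1.0)
```

The single Lorentzian pole of $G^{(1)}$ is the only structure surviving the Markovian limit, which is why the entire $S$-matrix collapses to a pure frequency-dependent phase in this sector.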
\subsection{Approximation strategies}
\label{sec: AS}
\begin{figure}[b]
\includegraphics[width=\columnwidth]{WC.pdf}
\caption{Diagrammatic representation of the $n$-particle transition matrix within the weak correlation approximation. Here the two-particle bubbles correspond to the single-photon effective vertex function $W^{(1, 1)}$. In contrast to the quasi-Markovian approximation, this approximation resums exactly all of the single-particle processes, while ignoring all of the connected diagrams corresponding to multiple-particle scattering.}
\label{fig: CII4_1}
\end{figure}
Although it is possible to write down an entire hierarchy of exact equations for the multi-photon vertex functions, which in turn parametrize the exact $S$-matrix, their analytical solution is in general only possible in the Markovian limit. To make progress in solving the scattering problem, one has to resort to some kind of approximation scheme. In this section we propose a couple of physically motivated resummation approaches, allowing one to avoid (or partially avoid) the solution of the exact integral equations.
\par
The most basic approximation, which is accurate in the regime $\gamma\tau\lesssim1$, where $\tau$ is a typical time delay in the system due to the photons' propagation between two nearby scatterers and $\gamma$ is a typical decay rate of the scatterers, is the so-called quasi-Markovian approximation. While it has a long tradition in the quantum master equation approach (see, e.g., \cite{Lehmberg_1970_1,Lehmberg_1970_2}), it has recently been realized \cite{Pletyukhov_New_J_Phys_2012,Pedersen_PRA_2017} that in the diagrammatic approach it is mathematically equivalent to keeping only the non-crossing diagrams shown in Fig. \ref{fig: CII3_1} and thereby fully neglecting the vertex corrections. To give a physical explanation of this equivalence, we view the photons propagating in the waveguide as an effective reservoir. It is intuitively clear that a fast decay of time correlations between these photons is an essential feature of the Markovian regime (we add the prefix quasi in order to indicate that the field values at the positions of different scatterers can still differ from each other by a phase factor). On the other hand, these time correlations result from correlated virtual processes of emission and absorption of different photons at different scatterers, which are mathematically encoded in the vertex corrections. Thus, neglecting the vertex corrections is equivalent to making the quasi-Markovian approximation. It is also worth noting that for the single-photon scattering matrix this approximation coincides with the exact expression, since in the absence of other photons the correlations in question are not generated.
Below we give a step-by-step prescription for implementing the quasi-Markovian approximation in our diagrammatic approach. The exact $n$-photon transition matrices are represented by an $(n-1)$-particle effective vertex function between two dressed Green's functions, flanked by emission/absorption vertices (see equation (\ref{eq: CII1_21})). In this approximation, the exact multi-photon vertex functions are simply replaced by a sequence of $n-1$ bare and $n-2$ dressed Green's functions interspersed with $2n-2$ emission and absorption vertices. In particular, in the $n=2$ case we get $W^{(1, 1)}_{s_{1}', s_{1}}(\epsilon)\rightarrow v_{s_{1}}^{\dagger}G_{0}(\epsilon-\omega_{s_{1}'}-\omega_{s_{1}})v_{s_{1}'}$. In the $n=3$ case we obtain $W^{(1, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)\rightarrow v_{s_{1}}^{\dagger}G_{0}(\epsilon-\omega_{s_{1}}-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}'})v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}-\omega_{s_{2}'})v_{s_{2}'}$, and so on. In this approximation, all of the non-trivial momentum dependence of the $S$-matrix comes entirely from the frequency dependence of the self-energy diagram. Note that the single-photon $T$-matrix is completely determined by the dressed propagator $G^{(1)}(\epsilon)$ (see equation (\ref{eq:T11})). This observation explicitly confirms the statement that the quasi-Markovian approximation becomes exact for a single propagating photon.
\par
The second approximation scheme, which is more accurate than the quasi-Markovian one for $\gamma \tau \gtrsim 1$, makes a partial account of the vertex corrections. In particular, this approximation is based on the resummation of the full single-photon vertex functions $W^{(1, 1)}_{s', s}(\epsilon)$. Diagrammatically, this approximation amounts to replacing the dotted lines in Figure \ref{fig: CII3_1} by the full single-particle bubbles; the resulting $n$-photon $T$-matrix is shown in Figure \ref{fig: CII4_1}. This approximation corresponds to a resummation of all of the direct interaction diagrams (coming entirely from the single-particle sector), while completely ignoring the exchange processes (encompassed by the connected parts of the many-particle vertices). To clarify matters, let us consider the integral equation governing the two-particle effective vertex shown in Figure \ref{fig: CII1_1}. Using the diagrammatic rules we deduce
\begin{align}
\nonumber
&W^{(1, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
&=R^{(1)}_{s_{1}', s_{1}}(\epsilon-\omega_{s_{2}'})G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}'})W^{(1, 1)}_{s_{2}', s_{2}}(\epsilon-\omega_{s_{1}})\\
\nonumber
&+R^{(1)}_{s_{1}', s}(\epsilon-\omega_{s_{2}'})G^{(1)}(\epsilon-\omega_{s}-\omega_{s_{2}'})W^{(1, 2)}_{ss_{2}', s_{1}s_{2}}(\epsilon)\\
&+R^{(1)}_{s_{1}', s}(\epsilon-\omega_{s_{2}'})G^{(1)}(\epsilon-\omega_{s}-\omega_{s_{2}'})W^{(1, 2)}_{s_{2}'s, s_{1}s_{2}}(\epsilon).
\label{eq:nn}
\end{align}
Note that the last line above corresponds to the exchange interaction between the particles (see also Figure \ref{fig: CII1_1}). If we ignore the last term completely and take into account the equation satisfied by $W^{(1, 1)}$, it is easy to show that
\begin{align}
\nonumber
&W^{(1, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)\\
&=W^{(1, 1)}_{s_{1}', s_{1}}(\epsilon-\omega_{s_{2}'})G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}'})W^{(1, 1)}_{s_{2}', s_{2}}(\epsilon-\omega_{s_{1}})
\end{align}
is an exact solution of (\ref{eq:nn}).
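Indeed, inserting this product ansatz into the right-hand side of the truncated equation and using the integral equation (\ref{eq:W11_Integral}) at the shifted argument $\epsilon-\omega_{s_{2}'}$, one finds
\begin{align}
\nonumber
&\Big[R^{(1)}_{s_{1}', s_{1}}(\epsilon-\omega_{s_{2}'})+R^{(1)}_{s_{1}', s}(\epsilon-\omega_{s_{2}'})G^{(1)}(\epsilon-\omega_{s}-\omega_{s_{2}'})\\
\nonumber
&\times W^{(1, 1)}_{s, s_{1}}(\epsilon-\omega_{s_{2}'})\Big]G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}'})W^{(1, 1)}_{s_{2}', s_{2}}(\epsilon-\omega_{s_{1}})\\
\nonumber
&=W^{(1, 1)}_{s_{1}', s_{1}}(\epsilon-\omega_{s_{2}'})G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}'})W^{(1, 1)}_{s_{2}', s_{2}}(\epsilon-\omega_{s_{1}}),
\end{align}
i.e. the bracket resums into $W^{(1, 1)}$ and the ansatz closes on itself.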
We expect this approximation to deteriorate as the number of photons in the system increases, since it ignores a larger and larger fraction of the diagrams as the number of particles involved in the scattering process grows. This approximation is beneficial because the integral equation for $W^{(1, 1)}$ may frequently be solved analytically by means of the method developed in the supplementary material of [\onlinecite{Laakso_PRL_2014}]. Due to the non-trivial momentum dependence of $W^{(1, 1)}$, we expect this approximation to perform better than the quasi-Markovian one. In what follows we refer to this approximation as the weak correlation approximation. Note that just as the quasi-Markovian approximation is exact in the single-particle sector, the weak correlation approximation is exact in the two-particle sector; thus, in order to test its validity, one has to consider at least a three-photon scattering problem.
\subsection{Systems with more than one qubit}
In this section, we would like to present certain generalizations of the theory developed in Section \ref{sec:Q=1}. Specifically, we would like to demonstrate how the formalism presented above may be extended to systems with a larger number of qubits. In particular, we focus our attention on systems with two and three emitters. By deriving the equations governing the two- and three-photon scattering matrices in two- and three-qubit waveguide QED systems, respectively, we present the most general two- and three-particle equations, which hold independently of the number of qubits (see the discussion in Section \ref{sec: Properties}).
\subsubsection{$N_{q}=2$ waveguide QED}
\begin{figure}[h]
\includegraphics[width=0.8\columnwidth]{replacement.pdf}
\caption{Diagrammatic representation of the replacement required to obtain $R^{(2)}_{s', s}$ from $R^{(1)}_{s', s}$. Note that the extra contribution to the effective potential energy vertex $R^{(2)}$ in two-qubit systems is not single-particle connected, i.e. it is possible to cut the intermediate propagator such that the diagram falls into two distinct pieces. This very fact makes the splitting (\ref{eq: DII1_8})-(\ref{eq: DII1_11}) into reducible and irreducible contributions possible.}
\label{fig: replacement}
\end{figure}
First, we would like to focus on the case of two emitters coupled to a waveguide, since the generalization of the single-qubit results is most apparent in this case. The starting points of the analysis are equations (\ref{eq: Cll1_5}) and (\ref{eq: Cll1_6}). As opposed to the single-emitter case, now both diagonal terms, $v_{s'}^{\dagger}a_{s'}\mathcal{G}_{0}a^{\dagger}_{s}v_{s}$ and $a^{\dagger}_{s'}v_{s'}\mathcal{G}_{0}v_{s}^{\dagger}a_{s}$, clearly contribute to the geometric series for the transition operator. The non-diagonal ones still give zero, since they both annihilate the state $v^{\dagger}\ket{g}$. By resumming the series and introducing the following set of objects
\begin{align}
\label{eq: DII1_1}
\mathcal{R}^{(2)}=&a_{s_{1}'}^{\dagger}\mathcal{R}^{(2)}_{s_{1}', s_{1}}a_{s_{1}},\\
\label{eq: DII1_2}
\mathcal{R}^{(2)}_{s_{1}', s_{1}}=&\mathcal{R}^{(1)}_{s_{1}', s_{1}}+v_{s_{1}'}\mathcal{G}_{0}v_{s_{1}}^{\dagger},\\
\label{eq: DII1_3}
\mathcal{W}^{(2)}=&\mathcal{R}^{(2)}+\mathcal{R}^{(2)}\mathcal{G}^{(1)}\mathcal{W}^{(2)},
\end{align}
we render the matrix elements of the transition operator in the following form
\begin{align}
\nonumber
&\Braket{N_{p}', g|\mathcal{T}^{(2)}|N_{p}, g}\\
\nonumber
&=\Braket{N_{p}', g|a^{\dagger}_{s_{1}'}v_{s_{1}'}\mathcal{G}^{(1)}v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g}\\
\label{eq: DII1_4}
&+\Braket{N_{p}', g|a^{\dagger}_{s_{1}'}v_{s_{1}'}\mathcal{G}^{(1)}\mathcal{W}^{(2)}\mathcal{G}^{(1)}v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g}.
\end{align}
Since the lowest term in the expansion of $\mathcal{W}^{(2)}$ is again a single-photon operator
\begin{align}
\label{eq: DII1_5}
\mathcal{W}^{(2)}=\sum_{n=1}^{\infty}a^{\dagger}_{s_{1}'}...a^{\dagger}_{s_{n}'}\mathcal{W}^{(2, n)}_{s_{1}'...s_{n}', s_{1}...s_{n}}a_{s_{n}}...a_{s_{1}},
\end{align}
we see that the hierarchy of equations satisfied by the two-photon vertex functions is precisely the same as (\ref{eq:W11_Integral})-(\ref{eq: CII1_18}) with $R^{(1)}$ being replaced by $R^{(2)}$ (see Figure \ref{fig: replacement}).
\par
In order to understand the difference between the single-qubit and the two-qubit theories, let us have a closer look at the equation defining $\mathcal{W}^{(2)}$
\begin{align}
\nonumber
\mathcal{W}^{(2)}=&\mathcal{R}^{(1)}+a_{s_{1}'}^{\dagger}v_{s_{1}'}\mathcal{G}_{0}v_{s_{1}}^{\dagger}a_{s_{1}}+\mathcal{R}^{(1)}\mathcal{G}^{(1)}\mathcal{W}^{(2)}\\
\label{eq: DII1_6}
+&a_{s_{1}'}^{\dagger}v_{s_{1}'}\mathcal{G}_{0}v_{s_{1}}^{\dagger}a_{s_{1}}\mathcal{G}^{(1)}\mathcal{W}^{(2)}.
\end{align}
Now let us perform the following separation: $\mathcal{W}^{(2)}=\mathcal{W}^{(2, i)}+\mathcal{W}^{(2, r)}$, where the superscripts $i$ and $r$ stand for the irreducible and reducible contributions, respectively, and $\mathcal{W}^{(2, i)}$ is chosen to satisfy equation (\ref{eq: CII1_13}). This leads to the following equation satisfied by the reducible part
\begin{align}
\nonumber
\mathcal{W}^{(2, r)}=&a_{s_{1}'}^{\dagger}v_{s_{1}'}\mathcal{G}_{0}v_{s_{1}}^{\dagger}a_{s_{1}}+\mathcal{R}^{(1)}\mathcal{G}^{(1)}\mathcal{W}^{(2, r)}\\
\nonumber
+&a_{s_{1}'}^{\dagger}v_{s_{1}'}\mathcal{G}_{0}v_{s_{1}}^{\dagger}a_{s_{1}}\mathcal{G}^{(1)}\mathcal{W}^{(2, i)}\\
\label{eq: DII1_7}
+&a_{s_{1}'}^{\dagger}v_{s_{1}'}\mathcal{G}_{0}v_{s_{1}}^{\dagger}a_{s_{1}}\mathcal{G}^{(1)}\mathcal{W}^{(2, r)}.
\end{align}
The solution of equation (\ref{eq: DII1_7}) may then be conveniently parametrized as $\mathcal{W}^{(2, r)}(\epsilon)=\mathcal{V}^{(2)}(\epsilon)\mathcal{G}^{(2)}(\epsilon)\overline{\mathcal{V}}^{(2)}(\epsilon)$, where
\begin{align}
\label{eq: DII1_8}
\mathcal{G}^{(2)}=&\mathcal{G}_{0}+\mathcal{G}_{0}\Sigma^{(2)}\mathcal{G}^{(2)}, \\
\label{eq: DII1_9}
\Sigma^{(2)}=&a_{s}v_{s}^{\dagger}\mathcal{G}^{(1)}v_{s'}a^{\dagger}_{s'}+a_{s}v_{s}^{\dagger}\mathcal{G}^{(1)}\mathcal{W}^{(2, i)}\mathcal{G}^{(1)}v_{s'}a^{\dagger}_{s'},\\
\label{eq: DII1_10}
\overline{\mathcal{V}}^{(2)}=&v_{s}^{\dagger}a_{s}+v_{s}^{\dagger}a_{s}\mathcal{G}^{(1)}\mathcal{W}^{(2, i)}=:\sum_{n=1}^{\infty}\overline{V}^{(2, n)}_{s_{1}...s_{n}}a_{s_{1}}...a_{s_{n}}, \\
\label{eq: DII1_11}
\mathcal{V}^{(2)}=&a_{s}^{\dagger}v_{s}+\mathcal{W}^{(2, i)}\mathcal{G}^{(1)}v_{s}a_{s}^{\dagger}=:\sum_{n=1}^{\infty}{V}^{(2, n)}_{s_{1}...s_{n}}a_{s_{1}}^{\dagger}...a_{s_{n}}^{\dagger}.
\end{align}
The meaning of the above-defined objects is as follows. $\mathcal{G}^{(2)}(\epsilon)$ may be thought of as a Green's operator in the two-excitation subspace since, in practice, it is always projected there. $\Sigma^{(2)}(\epsilon)$, in turn, may be thought of as a self-energy operator in the two-excitation subspace. $\mathcal{V}^{(2)}(\epsilon)$ and $\overline{\mathcal{V}}^{(2)}(\epsilon)$ may be understood as the renormalized absorption and emission vertex operators.
\par
As anticipated above, the equations governing two-photon scattering on a pair of qubits hold for two-photon scattering in a general $N_{q}$ system. Bearing this in mind, in Figure \ref{fig: DII1_2} we present the system of exact integral equations governing the general two-photon scattering problem in waveguide QED (i.e. the equations defining $W^{(2, 1)}$). These equations were first obtained in [\onlinecite{Laakso_PRL_2014}] in connection with the two-photon scattering problem on two distant qubits.
\subsubsection{$N_{q}=3$ waveguide QED}
Let us finally turn our attention to three-qubit waveguide QED systems. Due to the enormous increase in computational complexity, we restrict ourselves to the first non-trivial subspace within such a setup, the three-excitation subspace. As before, the starting point of the analysis is equation (\ref{eq: Cll1_5}). No doubt, both of the ``diagonal'' terms $\sim v^{\dagger}v$ and $\sim vv^{\dagger}$ give a non-zero contribution to the transition operator, as was the case in the previous section; however, one can clearly see that now the ``off-diagonal'' terms ($vv, \ v^{\dagger}v^{\dagger}$) also have to be taken into account. By defining the following operators
\begin{align}
\label{eq: DII2_1}
\mathcal{D}_{+}=a^{\dagger}_{s'}v_{s'}\mathcal{G}_{0}a^{\dagger}_{s}v_{s}, \quad \mathcal{D}_{-}=v_{s'}^{\dagger}a_{s'}\mathcal{G}_{0}v_{s}^{\dagger}a_{s},
\end{align}
\begin{figure}[b!]
\includegraphics[width=\columnwidth]{2photon.pdf}
\caption{Equations corresponding to the general two-photon scattering problem. The equation in the first line describes the splitting of the effective single-particle vertex function $W^{(2, 1)}$ of the generic two-photon scattering problem into its reducible part $W^{(2, 1, r)}$ and irreducible part $W^{(2, 1, i)}$. The irreducible part is generated from the fundamental process of single-qubit waveguide QED, $R^{(1)}$ (as shown in the second line), and thus satisfies the same integral equation as $W^{(1, 1)}$. The reducible part, stemming from the non-single-particle connectedness of the additional contribution to $R^{(2)}$ (see Figure \ref{fig: replacement}), is parametrized by the dressed Green's function $G^{(2, 0)}$ in the two-excitation subspace (shown as the curly line) as well as by the renormalized emission and absorption vertices (indicated by rectangles). As one can see, the entire system of equations boils down to the solution of the equation defining the irreducible part of the vertex function, since its knowledge is sufficient to completely determine both the renormalized vertices $V^{(2, 1)}, \ \overline{V}^{(2, 1)}$ and the self-energy $\Sigma^{(2, 0)}$ in the two-excitation subspace.}
\label{fig: DII1_2}
\end{figure}
bearing in mind the rotating wave approximation (RWA), and resumming the series (\ref{eq: Cll1_5}) to incorporate the diagonal terms, in the realm of $N_{q}=3$ waveguide QED we obtain
\begin{align}
\nonumber
&\Braket{N_{p}', g|\mathcal{T}^{(3)}|N_{p}, g}\\
\label{eq: DII2_2}
&=\Braket{N_{p}', g|a^{\dagger}_{s_{1}'}v_{s_{1}'}\frac{1}{(\mathcal{G}^{(1)})^{-1}-\mathcal{R}^{(3)}}v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g},\\
\label{eq: DII2_4}
&\mathcal{R}^{(3)}=\mathcal{R}^{(2)}+\mathcal{D},\\
\nonumber
&\mathcal{D}=\mathcal{D}_{+}\frac{1}{(\mathcal{G}^{(1)})^{-1}-\mathcal{R}^{(2)}}\mathcal{D}_{-}\\
\label{eq: DII2_5}
&=\mathcal{D}_{+}\mathcal{G}^{(1)}\mathcal{D}_{-}+\mathcal{D}_{+}\mathcal{G}^{(1)}\mathcal{W}^{(2)}\mathcal{G}^{(1)}\mathcal{D}_{-}.
\end{align}
As usual, we define the $\mathcal{W}^{(3)}$ operator according to
\begin{align}
\label{eq: 3body_vertex_generator}
\mathcal{W}^{(3)}=\mathcal{D}+\mathcal{D}(\mathcal{G}^{(1)}+\mathcal{G}^{(1)}\mathcal{W}^{(2)}\mathcal{G}^{(1)})\mathcal{W}^{(3)}.
\end{align}
This brings the transition operator into the following form:
\begin{widetext}
\begin{align}
\label{eq: Tphoton_Transition_operator}
&\Braket{N_{p}', g|\mathcal{T}^{(3)}|N_{p}, g}=\Braket{N_{p}', g|\mathcal{T}^{(2)}+a^{\dagger}_{s_{1}'}v_{s_{1}'}\mathcal{G}^{(1)}(1+\mathcal{W}^{(2)}\mathcal{G}^{(1)})\mathcal{W}^{(3)}(\mathcal{G}^{(1)}\mathcal{W}^{(2)}+1)\mathcal{G}^{(1)}v^{\dagger}_{s_{1}}a_{s_{1}}|N_{p}, g}.
\end{align}
\end{widetext}
Upon projection onto the three-photon subspace, we obtain the following elegant set of equations governing the generic (see Section \ref{sec: Properties}) three-particle $T$-matrix (see Appendix \ref{ap: dressed} for the detailed derivation)
\begin{widetext}
\begin{align}
\nonumber
T^{(3, 3)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\epsilon)=&\Braket{g|v_{s_{1}'}{G}^{(1)}(\epsilon-\omega_{s_{2}'}-\omega_{s_{3}'})[W^{(2, 2)}_{s_{2}'s_{3}', s_{2}s_{3}}(\epsilon)+V^{(3, 2)}_{s_{2}'s_{3}'}(\epsilon)G^{(3, 0)}(\epsilon)\overline{V}^{(3, 2)}_{s_{2}s_{3}}(\epsilon)]{G}^{(1)}(\epsilon-\omega_{s_{2}}-\omega_{s_{3}})v^{\dagger}_{s_{1}}|g}\\
=:&\Braket{g|v_{s_{1}'}{G}^{(1)}(\epsilon-\omega_{s_{2}'}-\omega_{s_{3}'})[W^{(2, 2)}_{s_{2}'s_{3}', s_{2}s_{3}}(\epsilon)+W^{(3, 2)}_{s_{2}'s_{3}', s_{2}s_{3}}(\epsilon)]{G}^{(1)}(\epsilon-\omega_{s_{2}}-\omega_{s_{3}})v^{\dagger}_{s_{1}}|g},\\
G^{(3, 0)}(\epsilon)=&G^{(1)}(\epsilon)+G^{(1)}(\epsilon)\Sigma^{(3, 0)}(\epsilon)G^{(3, 0)}(\epsilon),\\
\label{eq:3photon_self1}
\Sigma^{(3, 0)}(\epsilon)=&(v_{s}^{\dagger}G_{0}(\epsilon-\omega_{s})v_{s'}^{\dagger}+v_{s'}^{\dagger}G_{0}(\epsilon-\omega_{s'})v_{s}^{\dagger})G^{(1)}(\epsilon-\omega_{s}-\omega_{s'})V^{(3, 2)}_{ss'}(\epsilon)\\
\label{eq:3photon_self2}
\equiv&\overline{V}^{(3, 2)}_{ss'}(\epsilon)G^{(1)}(\epsilon-\omega_{s}-\omega_{s'})(v_{s'}G_{0}(\epsilon-\omega_{s})v_{s}+v_{s}G_{0}(\epsilon-\omega_{s'})v_{s'}),
\end{align}
\begin{align}
\label{eq: twophotonabsorb}
\overline{V}^{(3, 2)}_{s_{1}s_{2}}(\epsilon)=&v^{\dagger}_{s_{1}}G^{(2, 0)}(\epsilon)\overline{V}^{(2, 1)}_{s_{2}}(\epsilon)+\overline{V}^{(3, 2)}_{ss_{1}}(\epsilon)G^{(1)}(\epsilon-\omega_{s})W^{(2, 1)}_{s, s_{2}}(\epsilon),\\
\label{eq: twophotonemis}
V^{(3, 2)}_{s_{1}'s_{2}'}(\epsilon)=&{V}^{(2, 1)}_{s_{1}'}(\epsilon)G^{(2, 0)}(\epsilon)v^{\dagger}_{s_{2}'}+W^{(2, 1)}_{s_{1}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s}){V}^{(3, 2)}_{s_{2}'s}(\epsilon).
\end{align}
\end{widetext}
Here $G^{(2, 0)}(\epsilon)$ is the projection of (\ref{eq: DII1_8}) onto the photon vacuum state. As usual, the above equation may be compactly represented diagrammatically as shown in Figure \ref{fig: DII2_3}.
\begin{figure}[b!]
\includegraphics[width=\columnwidth]{3photon.pdf}
\caption{Equations corresponding to the general three-photon scattering problem. The first line defines the effective two-body vertex function $W^{(3, 2)}$ in terms of the dressed Green's function in the three-excitation subspace, shown as a double dashed line, as well as the effective two-photon absorption/emission vertices depicted by square boxes with a pair of incoming/outgoing amputated photon legs (note that these vertices correspond to the absorption/emission of photon pairs by the system as a whole). As shown in the last five lines, the effective two-photon emission/absorption vertices as well as the self-energy bubble defining the Green's function in the three-excitation subspace satisfy a closed hierarchy of self-consistent equations, the validity of which is directly proven in Appendix \ref{ap: dressed}. Curly lines as well as the rectangular boxes with single amputated photon lines are precisely defined in Figure \ref{fig: DII1_2}. }
\label{fig: DII2_3}
\end{figure}
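In practice, fixed-point equations of the type (\ref{eq: 3body_vertex_generator}), $\mathcal{W}^{(3)}=\mathcal{D}+\mathcal{D}\mathcal{K}\mathcal{W}^{(3)}$ with the kernel $\mathcal{K}=\mathcal{G}^{(1)}+\mathcal{G}^{(1)}\mathcal{W}^{(2)}\mathcal{G}^{(1)}$, are handled by discretizing the frequency integrals, which turns them into finite linear systems. The sketch below uses random complex matrices as hypothetical stand-ins for the discretized $\mathcal{D}$ and $\mathcal{K}$ (they are not the actual waveguide-QED kernels) and checks that a direct linear solve agrees with the resummed multiple-scattering series:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # number of frequency grid points (illustrative)

def random_contraction(scale):
    """Random complex matrix rescaled to spectral norm `scale` (< 1)."""
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return scale * m / np.linalg.norm(m, 2)

# Hypothetical stand-ins for the discretized kernel D and propagator block K
D = random_contraction(0.5)
K = random_contraction(0.5)

# Direct solution of W = D + D K W, i.e. W = (1 - D K)^{-1} D
W_direct = np.linalg.solve(np.eye(n) - D @ K, D)

# Resummation of the geometric (multiple-scattering) series D + DKD + ...
W_series, term = D.copy(), D.copy()
for _ in range(60):
    term = D @ K @ term
    W_series += term

assert np.allclose(W_direct, W_series)              # series converges, ||DK|| <= 1/4
assert np.allclose(W_direct, D + D @ K @ W_direct)  # fixed-point equation holds
```

The direct solve is preferable in practice; the series is shown only to make the multiple-scattering content of the equation explicit.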
\section{Application: a giant acoustic atom}
\label{sec:GIANT_ATOM}
In this section, we consider a practical application of the above-developed theory. In order to showcase our method in full, we focus on an intrinsically non-Markovian scattering setup: a giant acoustic atom, extensively studied theoretically and experimentally in [\onlinecite{Andersson_Nat_Phys_2019}, \onlinecite{Guo_Phys_Rev_2017}, \onlinecite{Kockum_PRA_2014, Vadiraj_2020, Kannan_2020, Guo_2020, Cilluffo_2020}] (see also [\onlinecite{Kockum_2021}] for a detailed review).
\subsection{Model and conventions}
A giant acoustic atom may be defined as a two-level system coupled, at two distant points $x=\pm R/2$, to an acoustic waveguide in which the radiation is carried by surface acoustic waves (SAWs). The Hamiltonian of such a system may be written as $\mathcal{H}=\mathcal{H}_{0}+\mathcal{V}$, with
\begin{align}
\mathcal{H}_{0}&=\Omega\sigma_{+}\sigma_{-}-iv\sum_{\mu=1, 2}\int{dx}c_{\mu}b^{\dagger}_{\mu}(x)\partial_{x}b_{\mu}(x),\\
\mathcal{V}&=\sum_{\mu=1, 2}(\sqrt{\Gamma_{1}}b^{\dagger}_{\mu}(-R/2)+\sqrt{\Gamma_{2}}b^{\dagger}_{\mu}(R/2))\sigma_{-}+\text{h.c.}
\end{align}
Here $b_{\mu}(x), b_{\mu}^{\dagger}(x)$ are the position-space field operators of the phonons, the index $\mu$ distinguishes between the right-moving ($\mu=1$) and left-moving ($\mu=2$) fields, and $c_{\mu}=(-1)^{\mu-1}$. Note that we have made use of the common assumption that the mode dispersion is linear in the vicinity of the relevant energy scale $\sim\Omega$ and that the bandwidth is infinite. Defining the Fourier transformation as
\begin{align}
b_{\mu}(x)=\frac{1}{\sqrt{2\pi}}\int{dk}e^{ic_{\mu}kx}a_{\mu}(k),
\end{align}
we bring the Hamiltonian into the form (\ref{eq: Bare_Hamiltonian}), (\ref{eq: inter}):
\begin{align}
\mathcal{H}_{0}&=\Omega\sigma_{+}\sigma_{-}+v\sum_{\mu=1, 2}\int{dk}ka^{\dagger}_{\mu}(k)a_{\mu}(k),\\
\mathcal{V}&=\sum_{\mu=1, 2}\int{dk}{g}_{\mu}(k)a_{\mu}(k)\sigma_{-}+\text{h.c.},\\
{g}_{\mu}(k)&=\sqrt{\frac{\Gamma_{1}}{2\pi}}e^{-ic_{\mu}kR/2}+\sqrt{\frac{\Gamma_{2}}{2\pi}}e^{ic_{\mu}kR/2}.
\end{align}
In the following subsections we are going to study the scattering of a coherent pulse in the form of a wavepacket centered around the frequency $vk_{0}$ (see Section \ref{sec:pf}); to this end, it is convenient to perform the following time-dependent gauge transformation:
\begin{align}
\mathcal{U}(t)=\exp\Bigg(-ivk_{0}\Bigg[\int{dk}a^{\dagger}_{\mu}(k)a_{\mu}(k)+\sigma_{+}\sigma_{-}\Bigg]t\Bigg).
\end{align}
The Hamiltonian transforms as $\mathcal{H}\rightarrow\tilde{\mathcal{H}}=\mathcal{U}^{\dagger}(t)\mathcal{H}\mathcal{U}(t)-i\mathcal{U}^{\dagger}(t)\frac{d\mathcal{U}(t)}{dt}=:\tilde{\mathcal{H}}_{0}+\tilde{\mathcal{V}}$, resulting in the final form of the Hamiltonian we are going to work with:
\begin{align}
\tilde{\mathcal{H}}_{0}&=-\Delta\sigma_{+}\sigma_{-}+v\sum_{\mu=1, 2}\int{dk}k\tilde{a}^{\dagger}_{\mu}(k)\tilde{a}_{\mu}(k),\\
\tilde{\mathcal{V}}&=\sum_{\mu=1, 2}\int{dk}\tilde{g}_{\mu}(k)\tilde{a}_{\mu}(k)\sigma_{-}+\text{h.c.},\\
\tilde{g}_{\mu}(k)&=\sqrt{\frac{\Gamma_{1}}{2\pi}}e^{-ic_{\mu}(k+k_{0})R/2}+\sqrt{\frac{\Gamma_{2}}{2\pi}}e^{ic_{\mu}(k+k_{0})R/2},\\
\tilde{a}_{\mu}(k)&=a_{\mu}(k+k_{0}), \quad \Delta=\omega_{0}-\Omega,\quad \omega_{0}=vk_{0}.
\end{align}
In the following we adopt units such that $v=1$; for simplicity we also assume $\Gamma_{1}=\Gamma_{2}=\gamma/2$. Furthermore, we drop the tildes from the operators for notational convenience.
\subsection{Problem formulation}
\label{sec:pf}
In order to formulate the scattering problem, we define the following wave-packet operators
\begin{align}
A_{\mu}^{\dagger}=\int {dk}\varphi_{L}(k)a_{\mu}^{\dagger}(k),
\end{align}
where $\varphi_{L}(k)=\sqrt{\frac{2}{\pi L}}\frac{\sin(kL/2)}{k}$ is the Fourier transform of a rectangular pulse of length $L$, with the property $\varphi_{L}(k)\sim\sqrt{\frac{2\pi}{L}}\delta(k), \ L\rightarrow\infty$ (obviously, any other nascent $\delta$-function may be chosen for $\varphi_{L}(k)$). Note that since we are working in the frame of reference rotating with frequency $k_{0}$, the peaking of $\varphi_{L}(k)$ at $k=0$ corresponds to a pulse centered at $k=k_{0}$ in the original frame. Throughout this section we focus on the zero-detuning setup $\Delta=0$, i.e. $k_{0}$ is chosen to be equal to $\Omega$. The effect of a non-zero detuning of the atom from the radiation is studied in Appendix \ref{ap:detun}. The definition of these wave-packet operators is crucial, since the eigenstates of the bare Hamiltonian $\mathcal{H}_{0}$ are not normalizable. Hence, in order to avoid the appearance of ill-defined quantities such as $\delta(0), (\delta(0))^{2}, ...$ coming from the elastic clusters of the $S$-matrix in the calculation of observables, one has to work with normalizable states and take the plane-wave limit $L\rightarrow\infty$ at the very end. Having defined the Fock creation operators $A^{\dagger}_{\mu}$, we can define the coherent state as a displaced vacuum
\begin{align}
\label{eq:Coherent_State}
\ket{\alpha}_{\mu}=e^{-|\alpha|^{2}/2}e^{\alpha A^{\dagger}_{\mu}}\ket{\Omega}=e^{-|\alpha|^{2}/2}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\ket{\Phi^{(n)}_{\mu}},
\end{align}
where $\alpha\in\mathbb{C}$ is the so-called coherence parameter, defined such that $|\alpha|^{2}$ is the average phonon number (power), and the normalized $n$-particle Fock states $\ket{\Phi^{(n)}_{\mu}}$ are defined according to
\begin{equation}
\label{eq:Fock_State}
\ket{\Phi^{(n)}_{\mu}}=\frac{1}{\sqrt{n!}}\Big(A^{\dagger}_{\mu}\Big)^{n}\ket{\Omega}.
\end{equation}
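As a quick numerical sanity check (a sketch, not part of the derivation), one can verify the two defining properties of the rectangular-pulse profile $\varphi_{L}(k)$ introduced above: unit norm, and total weight $\int dk\,\varphi_{L}(k)=\sqrt{2\pi/L}$, consistent with the nascent-$\delta$ behaviour $\varphi_{L}(k)\sim\sqrt{2\pi/L}\,\delta(k)$:

```python
import numpy as np

L = 10.0  # pulse length (illustrative value)
k = np.linspace(-400.0, 400.0, 400001)
dk = k[1] - k[0]

# phi_L(k) = sqrt(2/(pi L)) sin(k L/2)/k, written via sinc to handle k = 0
phi = np.sqrt(2.0 / (np.pi * L)) * (L / 2.0) * np.sinc(k * L / (2.0 * np.pi))

norm = np.sum(phi**2) * dk   # should be 1 (unit-norm wavepacket)
weight = np.sum(phi) * dk    # should be sqrt(2 pi / L) (nascent-delta weight)

assert abs(norm - 1.0) < 1e-2
assert abs(weight - np.sqrt(2.0 * np.pi / L)) < 1e-2
```

The tolerances account for the truncated $1/k^{2}$ tails of the momentum grid.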
In what follows, we assume $|\alpha|\ll1$, so that terms of order $|\alpha|^{4}$ are negligible:
\begin{align}
\nonumber
\ket{\alpha}_{\mu}\approx&{e}^{-|\alpha|^{2}/2}\Bigg(\ket{\Omega}+\frac{\alpha}{\sqrt{1!}}\ket{\Phi^{(1)}_{\mu}}+\frac{\alpha^{2}}{\sqrt{2!}}\ket{\Phi^{(2)}_{\mu}}\\
\label{eq:Weak_Coherent_State}
+&\frac{\alpha^{3}}{\sqrt{3!}}\ket{\Phi^{(3)}_{\mu}}+\mathcal{O}(|\alpha|^{4})\Bigg)\equiv\ket{\alpha}_{\mu}^{(3)}.
\end{align}
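The accuracy of this truncation is easy to check numerically (a minimal sketch): the squared norm of $\ket{\alpha}_{\mu}^{(3)}$ is $e^{-|\alpha|^{2}}\sum_{n=0}^{3}|\alpha|^{2n}/n!$, so its deficit from unity is set by the first dropped term and is of order $|\alpha|^{8}$:

```python
import math

alpha = 0.1  # weak-drive coherence parameter, |alpha| << 1

# Squared norm of the truncated coherent state |alpha>^(3):
# e^{-|alpha|^2} * sum_{n=0}^{3} |alpha|^{2n} / n!
norm2 = math.exp(-abs(alpha) ** 2) * sum(
    abs(alpha) ** (2 * n) / math.factorial(n) for n in range(4)
)

# The deficit from unity is the first dropped term, |alpha|^8/4! + ...,
# i.e. of order |alpha|^8, matching the truncation of the amplitudes at O(|alpha|^4)
deficit = 1.0 - norm2
assert 0.0 < deficit < abs(alpha) ** 8
```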
The factor $e^{-|\alpha|^{2}/2}$ has to be retained until the various overlaps of states are calculated, so as to ensure the normalization of both the initial and final states as well as power conservation (here the normalization is again understood in the power-perturbative sense, that is, up to order $\mathcal{O}(|\alpha|^{8})$). With this at hand, we formulate the scattering problem as follows. We assume that the initial state of the system is given by $\ket{\psi_{i}}=\ket{\alpha}_{1}^{(3)}\otimes\ket{0}$, that is, a weakly coherent pulse $\ket{\alpha}_{1}^{(3)}$ propagating in channel $\mu=1$ is incident on a two-level system which is initially in its ground state $\ket{0}$. According to the theory developed above, the final state of the system has the following form
\begin{align}
\nonumber
\ket{\psi_{f}}=&{e}^{-|\alpha|^{2}/2}\Bigg(\ket{\Omega}+\frac{\alpha}{\sqrt{1!}}{\mathcal{S}}_{1}\ket{\Phi^{(1)}_{1}}\\
\label{eq:Weak_Coherent_FState}
+&\frac{\alpha^{2}}{\sqrt{2!}}{\mathcal{S}}_{2}\ket{\Phi^{(2)}_{1}}+\frac{\alpha^{3}}{\sqrt{3!}}{\mathcal{S}}_{3}\ket{\Phi^{(3)}_{1}}\Bigg)\otimes\ket{0}.
\end{align}
\par
Inelastic contributions to the $n$-phonon $S$-matrices (discussed in Section \ref{sec:GCD}) introduce a non-trivial momentum redistribution of the incident phonons (note that, owing to the linear dispersion relation and our choice of units, energy and momentum may be used interchangeably), leading to non-trivial phonon correlations in the final state. As is well known, phonon correlations may be conveniently examined with the help of the so-called coherence functions introduced by Glauber \cite{Glauber_PR_1963} in 1963. Defining the Fourier transform of the field operators according to
\begin{align}
\label{eq:Fourier_Operators}
a_{\mu}(\tau)=\int{dk}\frac{e^{ik\tau}}{\sqrt{2\pi}}a_{\mu}(k),
\end{align}
one constructs the first-order coherence function as
\begin{widetext}
\begin{align}
\nonumber
C_{\mu}^{(1)}(\tau)=&\braket{\psi_{f}|a^{\dagger}_{\mu}(\tau)a_{\mu}(0)|\psi_{f}}\\
\label{eq:1st_order_coherence}
=&\Bigg(1-|\alpha|^{2}+\frac{|\alpha|^{4}}{2}\Bigg)|\alpha|^{2}C^{(1, 1)}_{\mu}(\tau)+(1-|\alpha|^{2})\frac{|\alpha|^{4}}{2}C^{(1, 2)}_{\mu}(\tau)+\frac{|\alpha|^{6}}{6}C^{(1, 3)}_{\mu}(\tau)+\mathcal{O}(|\alpha|^{8}),
\end{align}
\end{widetext}
where we have introduced the $n$-particle Fock state first-order correlation functions as
\begin{equation}
\label{eq:1st_order_coh_Fock}
C^{(1, n)}_{\mu}(\tau)=\Braket{\Phi^{(n)}_{1}|(\mathcal{S}_{n})^{\dagger}a^{\dagger}_{\mu}(\tau)a_{\mu}(0)\mathcal{S}_{n}|\Phi_{1}^{(n)}}.
\end{equation}
With the help of the first-order coherence function, one may define the spectral power density as the Fourier transform of (\ref{eq:1st_order_coherence}):
\begin{equation}
\label{eq:Spectral_Density_Def}
S_{\mu}(k)=\int d\tau\frac{e^{-ik\tau}}{2\pi}C^{(1)}_{\mu}(\tau).
\end{equation}
$S_{\mu}(k)$ is understood as the momentum-space distribution of power in the scattered state of the radiation: to a given mode $k$ (supported by the $\mu^{\text{th}}$ channel) it associates the power $S_{\mu}(k)$. The power conservation condition
\begin{equation}
\label{eq:Power_Conservation}
\sum_{\mu=1, 2}\int dkS_{\mu}(k)\rightarrow\Phi, \quad \Phi=\frac{|\alpha|^{2}}{L},
\end{equation}
is automatically satisfied due to the unitarity of the $S$-matrix. In a linear system, where the $n$-body $S$-matrix factorizes into a product of single-particle $S$-matrices, the first-order coherence function is constant, leading to a purely elastic power density $S_{\mu}(k)\propto\delta(k)$. Since a qubit is an intrinsically non-linear system, we are going to see that the spectral power density admits the decomposition $S(k)=S^{\text{el}}(k)+S^{\text{inel}}(k)$, where $S^{\text{el}}(k)\propto\delta(k)$ is the elastic contribution to the spectral density, and $S^{\text{inel}}(k)$, in turn, is the inelastic part of the spectral power density with a non-trivial momentum dependence.
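To illustrate this decomposition in a minimal toy model (not the actual giant-atom result): if the inelastic part of the first-order coherence decayed as a plain exponential, $C^{(1)}_{\text{inel}}(\tau)=e^{-\gamma|\tau|/2}$, the transform (\ref{eq:Spectral_Density_Def}) would produce a Lorentzian of half-width $\gamma/2$. The sketch below checks the numerical Fourier integral against this closed form:

```python
import numpy as np

gamma = 1.0
a = gamma / 2.0  # decay rate of the toy coherence function

# Hypothetical toy inelastic coherence: C1(tau) = e^{-gamma |tau|/2}
tau = np.linspace(-200.0, 200.0, 400001)
dtau = tau[1] - tau[0]
C1 = np.exp(-a * np.abs(tau))

for k in [0.0, 0.3, 1.0]:
    # S(k) = int dtau e^{-ik tau} C1(tau) / (2 pi), evaluated on the grid
    S_num = np.sum(np.exp(-1j * k * tau) * C1).real * dtau / (2.0 * np.pi)
    # Closed form: Lorentzian of half-width gamma/2
    S_lor = a / (np.pi * (k**2 + a**2))
    assert abs(S_num - S_lor) < 1e-6
```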
\par
Further, we define the normalized second and third-order coherence functions:
\begin{widetext}
\begin{align}
\label{eq:2d_coh_fun_def}
C^{(2)}_{\mu, \mu'}(\tau)=&\frac{\braket{\psi_{f}|a^{\dagger}_{\mu}(0)a^{\dagger}_{\mu'}(\tau)a_{\mu'}(\tau)a_{\mu}(0)|\psi_{f}}}{C^{(1)}_{\mu}(0)C^{(1)}_{\mu'}(0)}=\Bigg(1-\frac{|\alpha|^{2}}{2}\Bigg)\frac{1}{2}C^{(2, 2)}_{\mu, \mu'}(\tau)+\frac{|\alpha|^{2}}{6}C^{(2, 3)}_{\mu, \mu'}(\tau)+\mathcal{O}(|\alpha|^{4}),\\
\label{eq:3d_coh_fun_def}
C^{(3)}_{\mu, \mu', \mu''}(\tau, \ \tau')=&\frac{\braket{\psi_{f}|a^{\dagger}_{\mu}(0)a^{\dagger}_{\mu'}(\tau)a^{\dagger}_{\mu''}(\tau')a_{\mu''}(\tau')a_{\mu'}(\tau)a_{\mu}(0)|\psi_{f}}}{C^{(1)}_{\mu}(0)C^{(1)}_{\mu'}(0)C^{(1)}_{\mu''}(0)}=\frac{1}{6}C^{(3, 3)}_{\mu, \mu', \mu''}(\tau, \ \tau')+\mathcal{O}(|\alpha|^{2}),\\
C^{(m, n)}_{\mu_{1}, ..., \mu_{m}}(\tau_{1}, .., \tau_{m-1})=&\frac{\braket{\Phi^{(n)}_{1}|{\mathcal{S}}_{n}^{\dagger}a^{\dagger}_{\mu_{1}}(0)...a^{\dagger}_{\mu_{m}}(\tau_{m-1})a_{\mu_{m}}(\tau_{m-1})...a_{\mu_{1}}(0){\mathcal{S}}_{n}|\Phi^{(n)}_{1}}}{\prod_{l=1}^{m}C^{(1)}_{\mu_{l}}(0)}.
\end{align}
\end{widetext}
For pairs of phonons, the normalized second-order coherence function may be interpreted as the arrival probability of the second particle as a function of the delay $\tau$ following the detection of the first one, normalized by the individual phonon probabilities. Likewise, for particle triples, the normalized third-order coherence function gives the arrival probability of the second and third particles as a function of the delays $\tau, \tau'$ following the detection of the first one, normalized by the individual phonon probabilities. A perfectly coherent source is characterized by a uniform arrival probability, yielding correlation functions of all orders equal to unity. Whenever the correlation functions exceed unity, the particle statistics is said to be super-Poissonian and the particles are said to be bunched, whereas when the correlation functions fall below unity, the statistics is said to be sub-Poissonian and the particles are correspondingly said to be anti-bunched.
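These notions are conveniently illustrated by the equal-time statistic $C^{(2)}(0)=\braket{n(n-1)}/\braket{n}^{2}$ evaluated for a few standard number distributions (a self-contained sketch, independent of the scattering problem at hand): a coherent (Poissonian) source gives exactly $1$, a thermal source gives $2$ (bunching), and an $n$-particle Fock state gives $1-1/n$ (anti-bunching):

```python
import math

def g2_zero(p):
    """C^(2)(0) = <n(n-1)> / <n>^2 for a number distribution p[n]."""
    mean = sum(n * pn for n, pn in enumerate(p))
    fact2 = sum(n * (n - 1) * pn for n, pn in enumerate(p))
    return fact2 / mean**2

nu, nmax = 0.5, 80  # mean occupation and truncation of the distributions

# Poissonian (coherent), thermal (Bose-Einstein), and two-particle Fock statistics
poisson = [math.exp(-nu) * nu**n / math.factorial(n) for n in range(nmax)]
thermal = [(1.0 / (1.0 + nu)) * (nu / (1.0 + nu)) ** n for n in range(nmax)]
fock2 = [0.0, 0.0, 1.0]

assert abs(g2_zero(poisson) - 1.0) < 1e-9   # coherent: C2(0) = 1
assert abs(g2_zero(thermal) - 2.0) < 1e-6   # thermal: C2(0) = 2 (bunched)
assert abs(g2_zero(fock2) - 0.5) < 1e-12    # Fock n=2: C2(0) = 1 - 1/n (anti-bunched)
```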
\par
In the following subsections, we are going to use the ideas developed in Section \ref{sec:Q=1} in order to compute the above-discussed observables to the lowest order in $|\alpha|$ exactly.
\subsection{Spectral power density}
\label{SPd}
Let us begin with the analysis of the power spectrum of the SAW scattered by the giant atom. We start by considering the Fock state first-order coherence functions defined via (\ref{eq:1st_order_coh_Fock}). In order to establish $C^{(1, 1)}_{\mu}(\tau)$ one first has to find the matrix elements of the single-particle $S$-matrix, which, in turn, demands the knowledge of the dressed propagator in the single-excitation subspace $G^{(1)}(\epsilon)$. The self-energy diagram may be evaluated straightforwardly to yield:
\begin{equation}
\label{eq:Self_Energy}
\Sigma^{(1)}(\epsilon)=-i\gamma(1+e^{i(\epsilon+k_{0})R})\sigma_{+}\sigma_{-},
\end{equation}
in accordance with reference [\onlinecite{Guo_Phys_Rev_2017}]. With this at hand, the single-phonon scattering operator may be easily established with the help of equations (\ref{eq:T11}), (\ref{eq:Sn}) and (\ref{eq:Sn_Matrix}):
\begin{align}
\label{eq:1Photon_S_operator}
\mathcal{S}_{1}=&\sum_{\mu', \mu}\int{dkdk'}S^{(1)}_{\mu' k', \mu k}a^{\dagger}_{\mu'}(k')a_{\mu}(k),\\
\label{eq:comp}
S^{(1)}_{\mu' k', \mu k}=&\delta(k-k')S^{(1)}_{\mu', \mu}(k)\sigma_{-}\sigma_{+},\\
\label{eq:1Photon_S_on-shell}
S^{(1)}_{\mu', \mu}(k)=&(\delta_{\mu, \mu'}-2\pi{i}g_{\mu}^{*}(k)g_{\mu'}(k)\tilde{G}^{(1)}(k)).
\end{align}
Here the tilde symbol denotes the projection onto the excited state in the qubit space, and ${G}^{(1)}(\epsilon)=({G}_{0}^{-1}(\epsilon)-{\Sigma}^{(1)}(\epsilon))^{-1}$ in accordance with the definition in Section \ref{sec:Q=1}. Performing a straightforward calculation, we obtain
\begin{align}
\nonumber
C^{(1)}_{\mu}(\tau)=&|\alpha|^{2}C^{(1, 1)}_{\mu}(\tau)+\mathcal{O}(|\alpha|^{4})\\
\label{eq:1Photon_C1}
=&\Phi|S^{(1)}_{\mu0, 10}|^{2}+\mathcal{O}(\Phi^{2})\\
\label{eq:1Photon_S1}
\implies& S_{\mu}(k)=\Phi|S^{(1)}_{\mu0, 10}|^{2}\delta(k)+\mathcal{O}(\Phi^{2}).
\end{align}
As we can see, there is no inelastic contribution to the spectral density in the single-phonon sector. In general, the phenomenon of resonance fluorescence [\onlinecite{Mollow_1969}], which leads to an inelastic power spectrum, is underpinned by the ability of the two-level system to emit into modes other than the incident one. This naturally allows the particles incident on the atom to exchange energy with one another (whilst conserving the total energy) through higher-order emission and absorption processes, producing an inelastic power spectrum. In the single-particle case, however, the phonon must conserve its energy individually (as mathematically prescribed by the $\delta$-function in equation (\ref{eq:comp})), leading to a purely elastic spectrum.
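The elasticity of the single-particle sector can be checked numerically: the on-shell single-phonon $S$-matrix built from (\ref{eq:Self_Energy}) and (\ref{eq:1Photon_S_on-shell}) must be unitary at every $k$. The sketch below assumes $\Delta=0$, $\Gamma_{1}=\Gamma_{2}=\gamma/2$, units $v=1$, and the dressed propagator $\tilde{G}^{(1)}(k)=(k-\tilde{\Sigma}^{(1)}(k))^{-1}$; the parameter values are illustrative:

```python
import numpy as np

gamma, R = 1.0, 5.0
k0 = np.pi / (4.0 * R)  # so that k0 R = pi/4; Delta = 0, units v = 1

def g(k):
    # rotating-frame coupling for Gamma_1 = Gamma_2 = gamma/2; the two
    # exponentials combine into a cosine, identical for both channels mu
    return np.sqrt(gamma / np.pi) * np.cos((k + k0) * R / 2.0)

def G1(k):
    # excited-state projection of the dressed propagator,
    # (G^(1))^{-1} = G_0^{-1} - Sigma^(1), with Sigma^(1) from eq. (Self_Energy)
    sigma = -1j * gamma * (1.0 + np.exp(1j * (k + k0) * R))
    return 1.0 / (k - sigma)

def S1(k):
    # 2x2 on-shell single-phonon S-matrix: delta_{mu' mu} - 2 pi i g* g G^(1)
    gk = np.array([g(k), g(k)])
    return np.eye(2) - 2j * np.pi * np.outer(gk, gk) * G1(k)

for k in np.linspace(-3.0, 3.0, 61):
    S = S1(k)
    assert np.allclose(S.conj().T @ S, np.eye(2), atol=1e-10)  # unitary at each k
```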
\par
In order to obtain the leading-order inelastic contribution to the spectral power density, we thus have to consider the contribution of the multi-particle states in the expansion (\ref{eq:Weak_Coherent_FState}), so as to allow for inter-particle interactions. In this case, the information about the inelastic scattering is entirely contained in the connected component of the two-phonon $T$-matrix (see Section \ref{sec:GCD}), which captures the nonlinear acoustic effects via the effective single-particle vertex function. Contracting the cluster-decomposed $S$-matrix (\ref{eq: CII2_5}) with the two-particle Fock state, we arrive at the following result
\begin{widetext}
\begin{align}
\label{eq: final_two_particle_state}
\sum_{\mu, \mu'}\Bigg(\frac{1}{\sqrt{2}}\int{dkdk'}\varphi(k)\varphi(k')S^{(1)}_{\mu, 1}(k)S^{(1)}_{\mu', 1}(k'){a}_{\mu}^{\dagger}(k)a_{\mu'}^{\dagger}(k')-\frac{8\pi^{2}i}{L\sqrt{2}}\int{dk}T^{(2, C)}_{\mu k, \mu'-k, 10, 10}(0){a}_{\mu}^{\dagger}(k)a_{\mu'}^{\dagger}(-k)\Bigg)\ket{\Omega}\otimes\ket{0}.
\end{align}
\end{widetext}
Again we would like to emphasize that at this point the plane-wave limit ($L\rightarrow\infty$) may only be taken in the term containing the connected component of the $S$-matrix since in this limit the first part of the state (\ref{eq: final_two_particle_state}) is clearly non-normalizable. With the help of the final two-particle state derived above one may easily establish
\begin{align}
\nonumber
C^{(1, 2)}_{\mu}(\tau)=&16\pi^{2}\sum_{\mu'}\text{Im}\{M_{\mu', \mu}(0)(S_{\mu'; 1}^{(1)}(0)S_{\mu; 1}^{(1)}(0))^{*}\}\\
\label{eq: first_order_coherence_2photon}
+&32\pi^{3}\sum_{\mu'}\int {dk}e^{ik\tau}|M_{\mu', \mu}(k)|^{2},
\end{align}
where $M_{\mu', \mu}(k)$ is the symmetric part of $T^{(2, C)}_{\mu k, \mu'-k, 10, 10}(0)$, i.e. $M_{\mu', \mu}(k)=M_{\mu, \mu'}(-k)=(T^{(2, C)}_{\mu' k, \mu-k, 10, 10}(0)+T^{(2, C)}_{\mu -k, \mu'k, 10, 10}(0))/2$. Performing the Fourier transform of (\ref{eq: first_order_coherence_2photon}) we arrive at the following expressions for the elastic $S^{\text{el}}(k)$ and inelastic $S^{\text{inel}}(k)$ spectral power densities valid to order $\Phi^{3}$:
\begin{align}
\nonumber
S_{\mu}^{\text{el}}(k)=&\delta(k)\Bigg(\Phi|S^{(1)}_{\mu; 1}(0)|^{2}+16\pi^{2}\Phi^{2}\\
\label{eq: Elastic_SPD}
\times&\sum_{\mu'}\text{Im}\{M_{\mu', \mu}(0)(S_{\mu'; 1}^{(1)}(0)S_{\mu; 1}^{(1)}(0))^{*}\}\Bigg),\\
\label{eq: Inelastic_SPD}
S_{\mu}^{\text{inel}}(k)=&32\pi^{3}\Phi^{2}\sum_{\mu'}|M_{\mu', \mu}(k)|^{2}.
\end{align}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{S.pdf}
\caption{Inelastic spectral power density (scaled by $\Phi^{2}$) of SAW scattered by a giant acoustic atom as a function of frequency $\omega=k$ for a variety of inter-leg separations $\gamma R=1, 3, 5$. Top panel: exact solution; Bottom panel: quasi-Markovian approximation. Other model parameters: $k_{0}R=\pi/4 \mod2\pi$, and $\Delta=0$. Here the limit $L\rightarrow\infty$ was taken.}
\label{fig: Spectral_Density}
\end{figure}
The inelastic spectral densities are shown in Figure \ref{fig: Spectral_Density} for different inter-leg separations ($\gamma R = 1, 3, 5$). As one can see, the inelastic power spectrum develops a pair of sharp peaks near the origin of momentum space, followed by infinitely many smaller side peaks. In order to interpret the nature of these peaks, we adopt the physical picture discussed in [\onlinecite{Chang_NJP_2012, Guimond_PRA_2016, Guimond_QST_2017}] for an "atom in front of a mirror" system: the two connection points of the giant atom form a leaky cavity. In this picture, the peaks are located at the renormalized excitation frequencies of the effective cavity, broadened by the renormalized decay rates. The resulting effect is the formation of sharp, bound-state-like peaks in the intensity of the scattered phonons, corresponding to cavity resonances. As the delay $\gamma R$ increases, the peaks sharpen and move closer to $\omega=0$, corresponding to a decrease of the effective cavity linewidth $\simeq1/(\gamma R)$: the effective cavity resonances approach the resonance of the two-level system, making the atomic connection points better and better mirrors and hence increasing the quality factor of the effective cavity (the quality factor is a common measure of both the rate of excitation damping and the rate of energy loss of an oscillator or a cavity).
\par
Additionally, spectral densities based on the approximate solution of the scattering problem in the quasi-Markovian approximation are shown in Figure \ref{fig: Spectral_Density}. As mentioned in Section \ref{sec: AS}, the quasi-Markovian results show good agreement with the exact solution in the $\gamma{R}\ll1$ parameter regime. As the delay time increases to $\gamma{R}\geq1$, the discrepancy between the approximate and exact solutions becomes dramatic. One can clearly see the tendency of the quasi-Markovian theory to enhance the scattering into the incoming frequency states, $\omega=0$. Indeed, in contrast to the exact solution, which takes into account infinitely many excursions of the phonons to the qubit, the quasi-Markovian approximation is based on the assumption of a single scattering event after which each phonon immediately leaves the system, thus leading to a more elastic result.
\subsubsection{Second-order coherence}
\label{2d_order_coh}
\begin{figure*}[t]
\includegraphics[width=2\columnwidth]{g2.pdf}
\caption{Independent components of the normalized second-order coherence function $C^{(2)}_{11}, \ C^{(2)}_{12}, \ C^{(2)}_{22}$ for various inter-leg separations $\gamma R=1, 3, 5$. As before, the top panel corresponds to the exact solution and the bottom one to its quasi-Markovian approximation; $k_{0}R=\pi/4 \mod2\pi$ and $\Delta=0$. Here the limit $L\rightarrow\infty$ was taken, and by definition $C^{(2)}$ is dimensionless.}
\label{fig: 2d_Order_Coh}
\end{figure*}
Let us now consider the second-order coherence function to the lowest order in $\Phi$, namely $\mathcal{O}(\Phi^{0})$. Performing the relevant Wick contractions, we arrive at the following simple expression for the normalized second-order coherence
\begin{align}
\label{eq: second_order_coherence}
C_{\mu', \mu}^{(2)}(\tau)
=\Bigg{|}1-\frac{4\pi{i}}{S^{(1)}_{\mu', 1}(0)S^{(1)}_{\mu, 1}(0)}\int{dk}e^{ik\tau}M_{\mu', \mu}(k)\Bigg{|}^{2}.
\end{align}
The independent components of the second-order coherence function are presented in Figure \ref{fig: 2d_Order_Coh}.
\par
As mentioned above, the higher-order coherence functions provide information about the statistics of the radiation scattered by a giant acoustic atom. Since $C^{(2)}_{1, 1}(0)>1$, one can clearly see that the statistics of back-scattered phonons is super-Poissonian, i.e. the particles tend to bunch together. Indeed, this result agrees with the physical expectation that at zero detuning $\Delta=0$ the power extinction $1-|S_{11}(0)|^{2}$ is enhanced even in the presence of pure dephasing\cite{Astafiev_Sci_2010, Kimble_PRA_1976}. This assertion also explains the anti-bunching of forward-scattered phonons, $C^{(2)}_{2, 2}(0)$. Another clear feature of the second-order coherence function shown in Figure \ref{fig: 2d_Order_Coh} is the presence of long-range quantum correlations of phonons, i.e. the components of $C^{(2)}$ do not decay to unity even for delays significantly exceeding the inter-leg separation, $\tau\gg R$. Note that this correlation effect becomes more and more pronounced as $R$ increases, with the correlation functions exhibiting slightly damped oscillations around unity.
\par
Another interesting feature of the second-order coherence is the presence of non-differentiable peaks at integer multiples of the inter-leg separation, $\tau_{n}=nR, \ n\in\mathbb{N}$. Physically, this property may again be understood with the help of the simple picture of a leaky cavity formed by the scatterer. Indeed, once a phonon is trapped between the legs of the atom, it may bounce off the cavity's walls back and forth multiple times, leading to the formation of the non-analytic structures present in the second-order coherence. The fact that the non-differentiable peaks are more pronounced at smaller values of $n$ is a direct manifestation of the finite quality factor of the effective cavity. This property is of practical interest, since the second-order coherence function is an experimentally measurable quantity, which makes these sharp peaks a potentially observable effect.
\par
Alongside the exact solution, the results based on the quasi-Markovian approximation are presented in Figure \ref{fig: 2d_Order_Coh} (lower panel). The discrepancy between the two is apparent. Another feature typical of this approximation is its tendency to overestimate the amplitudes of the oscillations, as was first pointed out in the supplemental material of reference [\onlinecite{Laakso_PRL_2014}].
\subsubsection{Third-order coherence}
\label{3d_order_coh}
\begin{figure*}[t]
\includegraphics[width=1.99\columnwidth]{g3.pdf}
\caption{Independent components $(\mu, \mu', \mu'')=(1, 1, 1),\ (1, 1, 2),\ (1, 2, 2),\ (2, 2, 2)$ of the third-order coherence function of phonons scattered by a giant acoustic atom. Top, central, and bottom panels correspond to the exact solution $C^{(3)}_{\mu, \mu', \mu''}(\tau, \tau')$, the weak correlation approximation $\tilde{C}^{(3)}_{\mu, \mu', \mu''}(\tau, \tau')$, and their difference $\Delta{C^{(3)}_{\mu, \mu', \mu''}(\tau, \tau')}=C^{(3)}_{\mu, \mu', \mu''}(\tau, \tau')-\tilde{C}^{(3)}_{\mu, \mu', \mu''}(\tau, \tau')$, respectively. Here $\gamma R=5$, $k_{0}R=\pi/4 \mod2\pi$, and $\Delta=0$; the limit $L\rightarrow\infty$ was taken, and by definition $C^{(3)}$ is dimensionless.}
\label{fig: 3d_Order_Coh_res}
\end{figure*}
Let us finally consider the third-order coherence function. The full expression for $C^{(3)}_{\mu, \mu', \mu''}(\tau, \tau')$ in terms of the symmetrized components of the two- and three-particle transition matrices may be found in Appendix \ref{ap: 3ocf_exp}. Although in general $C^{(3)}_{\mu, \mu', \mu''}$ has $8$ independent components in the case of two radiation channels, due to our particular choice of the couplings $\Gamma_{1}=\Gamma_{2}=\gamma/2$ only four independent components remain:
\begin{align}
&C^{(3)}_{1, 1, 1}(\tau, \tau'), \quad C^{(3)}_{2, 2, 2}(\tau, \tau'),\\
&C^{(3)}_{1, 1, 2}(\tau, \tau')=C^{(3)}_{1, 2, 1}(\tau, \tau')=C^{(3)}_{2, 1, 1}(\tau, \tau'),\\
&C^{(3)}_{1, 2, 2}(\tau, \tau')=C^{(3)}_{2, 2, 1}(\tau, \tau')=C^{(3)}_{2, 1, 2}(\tau, \tau').
\end{align}
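The reduction from $2^{3}=8$ components to the four listed above amounts to grouping index tuples that are permutations of one another, consistent with the equalities written out above. A minimal counting sketch (plain Python, purely illustrative):

```python
from itertools import product

# Group the 2^3 = 8 channel-index tuples of C^(3) into classes whose members
# are permutations of one another; with Gamma_1 = Gamma_2 = gamma/2 each class
# corresponds to one independent component.
classes = {}
for comp in product((1, 2), repeat=3):
    classes.setdefault(tuple(sorted(comp)), []).append(comp)

representatives = sorted(classes)                       # one per class
sizes = [len(classes[r]) for r in representatives]      # multiplicities
```

The four representatives $(1, 1, 1)$, $(1, 1, 2)$, $(1, 2, 2)$, $(2, 2, 2)$ carry multiplicities $1$, $3$, $3$, $1$, matching the component list above.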
\par
Independent components of $C^{(3)}$ for a system with $\gamma R=5$, $k_{0}R=\pi/4 \mod2\pi$, and $\Delta=0$ are shown in the top panel of Figure \ref{fig: 3d_Order_Coh_res}. First, we note that the individual components of the third-order coherence at $\tau, \tau'=0$ significantly exceed unity, signifying the bunching of phonons. This effect can be attributed to the fact that a single two-level system can only emit and absorb a single quantum of radiation at a time, which, in turn, significantly increases the probability of simultaneous detection of a pair of phonons in the waveguide. Moreover, this effect is more pronounced in those components of the correlation function describing correlations with phonons in the channel of the incident radiation, $\mu=1$. This feature again stems from the enhancement of the power extinction by the atom at zero detuning.
\par
Another interesting feature of $C^{(3)}$ is the presence of clear peak and dip structures located on the lines $\tau'-\tau=nR, \ n\in\mathbb{Z}$. Physically, $\delta\tau=\tau'-\tau$ corresponds to the average delay time between the second and third phonon detection events upon detection of the first one at zero time. The quantization of $\delta\tau$ in units of the deterministic time delay is a characteristic feature of the system under consideration and may potentially be observed in future experiments as an enhancement or suppression of the conditional probability of arrival of the third particle. In fact, the peak and dip structures discussed above are non-differentiable, as was the case with the second-order coherence function, and again this phenomenon may be understood with the help of the simple picture of an effective cavity discussed in Sections \ref{2d_order_coh} and \ref{SPd}.
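The kink structure at integer multiples of the delay can be illustrated with a toy echo model: an effective cavity with round-trip delay $R$ re-emitting attenuated copies of a pulse. In the sketch below (plain Python; the decay rate, reflection factor, and pulse shape are hypothetical stand-ins, not the actual giant-atom solution) the superposition of echoes is continuous but non-differentiable exactly at multiples of $R$:

```python
import numpy as np

# Toy echo train: attenuated copies of a two-sided exponential pulse re-emitted
# with round-trip delay R (units where the group velocity is 1). All parameters
# are illustrative stand-ins, not the actual solution.
R, gamma, r = 5.0, 1.0, 0.5
t = np.linspace(0.0, 20.0, 4001)                 # step dt = 0.005
signal = sum((r ** n) * np.exp(-gamma * np.abs(t - n * R)) for n in range(4))

# The n-th echo contributes a kink (slope jump of -2*gamma*r**n) at t = n*R.
dt = t[1] - t[0]
i = 1000                                         # index of t = 1*R = 5.0
slope_left = (signal[i] - signal[i - 1]) / dt
slope_right = (signal[i + 1] - signal[i]) / dt
```

The finite-difference slopes straddling $t=R$ differ by approximately $-2\gamma r$, a non-differentiable point of the same kind as the structures of $C^{(3)}$ discussed above.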
\par
Besides the exact solution ${C}^{(3)}_{\mu, \mu', \mu''}(\tau, \tau')$, the solution based on the weak-correlation (WC) approximation $\tilde{C}^{(3)}_{\mu, \mu', \mu''}(\tau, \tau')$ as well as the difference between the exact solution and the WC one
\begin{align}
\label{eq: difference}
\Delta{C}^{(3)}_{\mu, \mu', \mu''}={C}^{(3)}_{\mu, \mu', \mu''}-\tilde{C}^{(3)}_{\mu, \mu', \mu''},
\end{align}
are shown in Figure \ref{fig: 3d_Order_Coh_res} in the central and bottom panels, respectively. As one can anticipate, the WC approximation tends to notably overestimate the amplitude of the third-order coherence function. This effect is especially apparent in the vicinity of $\tau, \tau'=0$, where the WC approximation significantly overestimates the phononic bunching. Away from the temporal origin, though, the approximation becomes adequate and only slightly deviates from the exact result, as one may infer from the plots in the bottom panel of Figure \ref{fig: 3d_Order_Coh_res}. The discrepancy between the exact solution and its WC approximation is not hard to understand: the weak-correlation approximation neglects an infinite diagrammatic channel, consisting of exchange-interaction diagrams between the phonons, which are of course important when one studies the statistical properties of particles.
\section{Conclusions and outlook}
In this paper, the diagrammatic theory of scattering and dynamics of multi-photon states in waveguide QED was developed. In particular, it was shown that the $N_{p}$-photon scattering matrices in single-qubit waveguide QED may be conveniently parametrized in terms of effective $(N_{p}-1)$-photon vertex functions, and the equations satisfied by these vertex functions were established. Next, certain practical issues related to the direct sum representation of the $S$-matrix, the separation of elastic contributions to effective vertices, as well as the generalized cluster decomposition were discussed. Further, a generalization to waveguide QED systems with more than a single qubit was given. Specifically, in the case of two-qubit systems, it was established that the equations governing multi-photon vertex functions remain the same as in the case of a single qubit, up to the inclusion of higher-order vertex corrections. Moreover, we have shown that once the integral equations governing the $N_{q}$-photon scattering matrix in $N_{q}$-qubit waveguide QED are established, these equations hold for any system of qubits; we used this to establish the generic equations governing the $2$- and $3$-photon scattering operators by considering $2$- and $3$-photon scattering on two and three qubits, respectively.
\par
Next, the diagrammatic theory of scattering was applied to the problem of scattering of a weakly coherent pulse on a giant acoustic atom. Namely, by expanding a coherent state perturbatively in the coherence parameter up to third order in $|\alpha|$ and studying its scattering on the atom, we were able to establish the first-, second-, and third-order coherence functions of the scattered radiation. Moreover, a set of approximation schemes was proposed alongside the exact method, and the two were compared where appropriate. Further, the statistical properties of the scattered surface acoustic waves were studied, and the effect of the non-Markovian nature of the setup on the statistics was discussed.
\par
In future work, we are going to present a generalization of the resummation approach enabling one to study real-time dynamics in waveguide QED systems. It would further be of interest to extend the present theory to scattering in waveguide QED systems containing emitters with more complicated selection rules, such as three- and four-level systems. Another potentially interesting generalization is the study of particular $2$- and $3$-particle scattering problems in systems containing multiple distant qubits. Finally, it would be worthwhile to assess the effects of the counter-rotating terms, which were ignored throughout this work; doing so, however, will require techniques other than those discussed here.
\section*{Acknowledgments}
We thank H. Schoeller for helpful instructions on diagrammatic methods in field theory applications and, in particular, for pointing out to us the distributional Poincar\'e--Bertrand identity. KP is grateful to A. Samson for enlightening discussions on the numerical methods used throughout the paper. MP acknowledges a long-standing exchange of ideas on the subject of study with V. Gritsev and V. Yudson. This work was supported by the Deutsche Forschungsgemeinschaft (DFG) via the contract RTG 1995.
\begin{appendix}
\section{Cluster decomposition of three-photon $S$-matrix}
\label{3bodyScat}
The starting point of our analysis in this appendix is the definition of the three-body component of the $S$-matrix entering its direct sum representation (throughout this appendix, for simplicity, it is assumed that $\omega_{\mu}(k)=\omega(k), \quad B_{\mu}=B, \quad \forall\mu\in\{1, ..., N_{c}\}$), namely
\begin{widetext}
\begin{align}
\nonumber
\mathcal{S}_{3}=&\Bigg[\frac{1}{3!}\delta_{s_{1}', s_{1}}\delta_{s_{2}', s_{2}}\delta_{s_{3}', s_{3}}-2\pi{i}T^{(1, 1)}_{s_{1}', s_{1}}(\omega(k_{1}))\delta(\omega(k_{1}')-\omega(k_{1}))\frac{1}{2!}\delta_{s_{2}', s_{2}}\delta_{s_{3}', s_{3}}\\
\nonumber
&-2\pi{i}T^{(1, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega(k_{1})+\omega(k_{2}))\delta(\omega(k_{1}')+\omega(k_{2}')-\omega(k_{1})-\omega(k_{2}))\frac{1}{1!}\delta_{s_{3}', s_{3}}\\
\nonumber
&- 2\pi{i}T^{(1, 3)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3}))\delta(\omega(k_{1}')+\omega(k_{2}')+\omega(k_{3}')-\omega(k_{1})-\omega(k_{2})-\omega(k_{3}))\Bigg]\\
\times&{a_{s_{1}'}^{\dagger}}a_{s_{2}'}^{\dagger}a_{s_{3}'}^{\dagger}a_{s_{3}}a_{s_{2}}a_{s_{1}}.
\end{align}
\end{widetext}
As before we write
\begin{align}
\nonumber
&T^{(1, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega(k_{1})+\omega(k_{2}))=\\
\nonumber
&-\pi{i}\delta(\omega(k_{1}')-\omega(k_{1}))T^{(1, 1)}_{s_{1}', s_{1}}(\omega(k_{1}))T^{(1, 1)}_{s_{2}', s_{2}}(\omega(k_{2}))\\
&+T^{(1, 2, C)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega(k_{1})+\omega(k_{2})).
\end{align}
Here, as before, the equality symbol is understood in the sense of permutation equivalence and the on-shell condition. Now let us turn to the three-body transition operator. First of all, one has
\begin{widetext}
\begin{align}
& T^{(1, 3)}_{\mu_{1}'k_{1}', \mu_{2}'k_{2}', \mu_{3}'k_{3}'; \mu_{1}k_{1}, \mu_{2}k_{2}, \mu_{3}k_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3}))
=g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})g_{\mu_{3}}^{*}(k_{3})
g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{3}'}(k_{3}')
\nonumber \\
& \qquad \qquad \times \tilde{G}^{(1)}(\omega(k_{1}))\tilde{G}^{(1)}(\omega(k_{1}'))F^{(1, 2)}(k_{2}', k_{3}', k_{2}, k_{3}, \omega(k_{1})+\omega(k_{2})+\omega(k_{3})),
\end{align}
where $F^{(1, 2)}(k_{2}', k_{3}', k_{2}, k_{3}, \omega(k_{1})+\omega(k_{2})+\omega(k_{3}))$, which is not an entirely connected object, is defined via
\begin{align}
F^{(1, 2)}(k_{1}', k_{2}', k_{1}, k_{2}, \epsilon)=\frac{1}{g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})}\Braket{g|W^{(1, 2)}_{\mu_{1}'k_{1}'\mu_{2}'k_{2}', \mu_{1}k_{1}\mu_{2}k_{2}}(\epsilon)|g}.
\end{align}
Separation of elastic contribution (\ref{eq: CII2_3}) translates into the following decomposition
\begin{align}
& F^{(1, 2)}(k_{2}', k_{3}', k_{2}, k_{3}, \omega(k_{1})+\omega(k_{2})+\omega(k_{3})) \nonumber \\
=&F^{(1, 1)}(k_{2}', k_{2}, \omega(k_{1}')+\omega(k_{2}')) \tilde{G}^{(1)}(\omega(k_{1})+\omega(k_{3})-\omega(k_{3}')) F^{(1, 1)}(k_{3}', k_{3}, \omega(k_{1})+\omega(k_{3}))\nonumber \\
&+ \overline{F}^{(2, 2)}(k_{2}', k_{3}', k_{2}, k_{3}, \omega(k_{1})+\omega(k_{2})+\omega(k_{3})).
\end{align}
\end{widetext}
Bearing in mind that $\overline{F}^{(2, 2)}(k_{2}', k_{3}', k_{2}, k_{3}, \omega(k_{1})+\omega(k_{2})+\omega(k_{3}))$ is an analytic function, we decompose the three-photon $T$-matrix as follows
\begin{align}
& T^{(1, 3)}_{\mu_{1}'k_{1}', \mu_{2}'k_{2}', \mu_{3}'k_{3}'; \mu_{1}k_{1}, \mu_{2}k_{2}, \mu_{3}k_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3}))
\nonumber \\
=& \overline{T}^{(1, 3)}_{\mu_{1}'k_{1}', \mu_{2}'k_{2}', \mu_{3}'k_{3}'; \mu_{1}k_{1}, \mu_{2}k_{2}, \mu_{3}k_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3}))\nonumber \\
+& \hat{T}^{(1, 3)}_{\mu_{1}'k_{1}', \mu_{2}'k_{2}', \mu_{3}'k_{3}'; \mu_{1}k_{1}, \mu_{2}k_{2}, \mu_{3}k_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3})),
\end{align}
where we have defined the following objects
\begin{align}
& \overline{T}^{(1, 3)}_{\mu_{1}'k_{1}', \mu_{2}'k_{2}', \mu_{3}'k_{3}'; \mu_{1}k_{1}, \mu_{2}k_{2}, \mu_{3}k_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3})) \nonumber \\
=& g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})g_{\mu_{3}}^{*}(k_{3}){g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{3}'}(k_{3}')} \nonumber \\
& \times\tilde{G}(\omega(k_{1}))\tilde{G}(\omega(k_{1}')) \nonumber \\
& \times \overline{F}^{(1, 2)}(k_{2}', k_{3}', k_{2}, k_{3}, \omega(k_{1})+\omega(k_{2})+\omega(k_{3}))
\end{align}
and
\begin{align}
& \hat{T}^{(1, 3)}_{\mu_{1}'k_{1}', \mu_{2}'k_{2}', \mu_{3}'k_{3}'; \mu_{1}k_{1}, \mu_{2}k_{2}, \mu_{3}k_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3})) \nonumber \\
=& g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})g_{\mu_{3}}^{*}(k_{3}){g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{3}'}(k_{3}')} \nonumber \\
& \times\tilde{G}(\omega(k_{1}))\tilde{G}(\omega(k_{1}'))F^{(1, 1)}(k_{2}', k_{2}, \omega(k_{1}')+\omega(k_{2}')) \nonumber \\
& \times \tilde{G}^{(1)}(\omega(k_{1})+\omega(k_{3})-\omega(k_{3}')) \nonumber \\
& \times F^{(1, 1)}(k_{3}', k_{3}, \omega(k_{1})+\omega(k_{3})).
\label{eq:T13G0}
\end{align}
In the last equation we have
\begin{align}
& F^{(1, 1)}(k_{2}', k_{2}, \omega(k_{1}')+\omega(k_{2}')) = \hat{G}_{0}(\omega(k_{1}')-\omega(k_{2})) \nonumber \\ & \qquad +\overline{F}^{(1, 1)}(k_{2}', k_{2}, \omega(k_{1}')+\omega(k_{2}')), \\
& F^{(1, 1)}(k_{3}', k_{3}, \omega(k_{1})+\omega(k_{3})) = \hat{G}_{0}(\omega(k_{1})-\omega(k_{3}')) \nonumber \\
& \qquad + \overline{F}^{(1, 1)}(k_{3}', k_{3}, \omega(k_{1})+\omega(k_{3})) .
\end{align}
The terms in \eqref{eq:T13G0} containing $\hat{G}_{0}$ deserve special attention since they can yield delta functions enforcing additional conservation of frequencies.
In particular, the terms with $\hat{G}_{0} \overline{F}^{(1, 1)}$ and $\overline{F}^{(1, 1)} \hat{G}_{0}$ together give
\begin{align}
& -2\pi{i}\delta(\omega(k_{3}')-\omega(k_{3}))T^{(1, 1)}_{s_{3}', s_{3}}(\omega(k_{3}))\nonumber \\
& \times \Bigg[T^{(1, 2, C)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega(k_{1})+\omega(k_{2})) \nonumber \\
& -g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2}) g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}') \tilde{G}^{(1)}(\omega(k_{1}))\tilde{G}^{(1)}(\omega(k_{1}')) \nonumber \\
&\qquad \times P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{2}')}\Bigg)\Bigg]
\label{eq:term1df} \\
&+g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})g_{\mu_{3}}^{*}(k_{3}){g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{3}'}(k_{3}')}\nonumber \\
& \quad \times \tilde{G}(\omega(k_{1}))\tilde{G}(\omega(k_{1}')) \tilde{G}^{(1)}(\omega(k_{1})+\omega(k_{3})-\omega(k_{3}')) \nonumber \\
& \times \Bigg[ P \Bigg(\frac{1}{\omega(k_{1}')-\omega(k_{2})}\Bigg) \overline{F}^{(1, 1)}(k_{3}', k_{3}, \omega(k_{1})+\omega(k_{3})) \nonumber \\
& \quad + \overline{F}^{(1, 1)}(k_{2}', k_{2}, \omega(k_{1}')+\omega(k_{2}')) P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{3}')}\Bigg)\Bigg] .
\label{eq:term2df}
\end{align}
The term \eqref{eq:term1df} contributes to the $3=2+1$ cluster of the three-photon $S$-matrix. In turn, the term \eqref{eq:term2df} is completely connected and non-singular. This property becomes explicitly visible if we re-express it as
\begin{align}
& \frac{g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})g_{\mu_{3}}^{*}(k_{3}) g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{3}'}(k_{3}')}{\omega(k_{1}')-\omega(k_{1})} \nonumber \\
\times & \Bigg[\tilde{G}^{(1)}(\omega(k_{1}'))\tilde{G}^{(1)}(\omega(k_{3})) \tilde{G}^{(1)}(\omega(k_{3}')+\omega(k_{1}') -\omega(k_{1}))\nonumber \\
& \quad \times \overline{F}^{(1, 1)}(k_{2}', k_{2}, \omega(k_{1}')+\omega(k_{2}')+\omega(k_{3}')-\omega(k_{1})) \nonumber \\
&-\tilde{G}^{(1)}(\omega(k_{1})) \tilde{G}^{(1)}(\omega(k_{1})+\omega(k_{3})-\omega(k_{1}')) \tilde{G}^{(1)}(\omega(k_{3}')) \nonumber \\
& \quad \times \overline{F}^{(1, 1)}(k_{2}', k_{2}, \omega(k_{2}')+\omega(k_{3}'))\Bigg].
\end{align}
This representation makes it obvious that in the limit $k'_1 \to k_1$ this function is finite.
Now, let us consider the contribution to \eqref{eq:T13G0} containing $ \hat{G}_{0} \hat{G}_{0}$. It amounts to
\begin{align}
& g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})g_{\mu_{3}}^{*}(k_{3}) g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{3}'}(k_{3}') \nonumber \\
& \times \tilde{G}(\omega(k_{1}))\tilde{G}(\omega(k_{3}')) \tilde{G}^{(1)}(\omega(k_{2})+\omega(k_{1})-\omega(k_{1}')) \nonumber \\
& \times\hat{G}_{0}(\omega(k_{3}')-\omega(k_{3}))\hat{G}_{0}(\omega(k_{1})-\omega(k_{1}'))
\nonumber \\
=& (-i\pi)^{2}T^{(1, 1)}_{s_{1}', s_{1}}(\omega(k_{1}))T^{(1, 1)}_{s_{2}', s_{2}}(\omega(k_{2}))T^{(1, 1)}_{s_{3}', s_{3}}(\omega(k_{3}))\nonumber \\
& \times \delta(\omega(k_{1}')-\omega(k_{1}))\delta(\omega(k_{2}')-\omega(k_{2})) \label{term_dg1} \\
-& 2\pi{i} g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2}) g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}') \tilde{G}^{(1)}(\omega(k_{1})) \nonumber \\
& \times \tilde{G}^{(1)}(\omega(k_{1}')) T^{(1, 1)}_{s_{3}', s_{3}}(\omega(k_{3}))\delta(\omega(k_{3}')-\omega(k_{3})) \label{term_dg2} \\
+&g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})g_{\mu_{3}}^{*}(k_{3}){g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{3}'}(k_{3}')} \nonumber \\
& \times \tilde{G}(\omega(k_{3}')) \tilde{G}(\omega(k_{1})) \tilde{G}^{(1)}(\omega(k_{2})+\omega(k_{1})-\omega(k_{1}')) \nonumber \\
& \times {P}\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg) P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg). \label{term_dg3}
\end{align}
The term \eqref{term_dg1} contributes to the $3=1+1+1$ cluster. The term \eqref{term_dg2} contributes to the $3=2+1$ cluster.
The term \eqref{term_dg3} requires special consideration. Rewriting
\begin{align}
& \tilde{G}^{(1)}(\omega(k_{3}'))\tilde{G}^{(1)}(\omega(k_{2})+\omega(k_{1})-\omega(k_{1}'))\tilde{G}^{(1)}(\omega(k_{1}))\nonumber \\
&\times P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg) P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg) \nonumber \\
=&\tilde{G}^{(1)}(\omega(k_{3}'))\tilde{G}^{(1)}(\omega(k_{1}))P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg)\nonumber \\
& \times \frac{\tilde{G}^{(1)}(\omega(k_{2})+\omega(k_{1})-\omega(k_{1}'))-\tilde{G}^{(1)}(\omega(k_{2}))}{\omega(k_{1})-\omega(k_{1}')} \nonumber \\
+&\tilde{G}^{(1)}(\omega(k_{3}'))\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{1})) \nonumber \\
& \times P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg)P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg) \nonumber \\
=&\tilde{G}^{(1)}(\omega(k_{3}'))\tilde{G}^{(1)}(\omega(k_{1}))P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg) \nonumber \\
& \times \frac{\tilde{G}^{(1)}(\omega(k_{2})+\omega(k_{1})-\omega(k_{1}'))-\tilde{G}^{(1)}(\omega(k_{2}))}{\omega(k_{1})-\omega(k_{1}')} \label{term_dl1} \\
+&\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{1}))\nonumber \\
& \times \frac{\tilde{G}^{(1)}(\omega(k_{3}'))-\tilde{G}^{(1)}(\omega(k_{3}))}{\omega(k_{3}')-\omega(k_{3})}P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg) \label{term_dl2} \\
+&\tilde{G}^{(1)}(\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{1}))\nonumber \\
& \times P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg)P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg), \label{term_dl3}
\end{align}
we observe that
the terms \eqref{term_dl1} and \eqref{term_dl2} together give a non-singular contribution. In fact,
\begin{align}
& \tilde{G}^{(1)}(\omega(k_{3}'))\tilde{G}^{(1)}(\omega(k_{1}))
P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg) \nonumber \\
& \times \frac{\tilde{G}^{(1)}(\omega(k_{2})+\omega(k_{1})-\omega(k_{1}'))-\tilde{G}^{(1)}(\omega(k_{2}))}{\omega(k_{1})-\omega(k_{1}')} \nonumber \\
+&\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{1}))\frac{\tilde{G}^{(1)}(\omega(k_{3}'))-\tilde{G}^{(1)}(\omega(k_{3}))}{\omega(k_{3}')-\omega(k_{3})} \nonumber \\
& \times P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg) \nonumber \\
&=\tilde{G}^{(1)}(\omega(k_{3}'))\tilde{G}^{(1)}(\omega(k_{1}))\tilde{G}^{(1)}(\omega(k_{2})) \nonumber \\
& \times \tilde{G}^{(1)}(\omega(k_{2}')+\omega(k_{3}')-\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{2}')) \nonumber \\
& \times\frac{(\tilde{G}^{(1)})^{-1}(\omega(k_{2}'))-(\tilde{G}^{(1)})^{-1}(\omega(k_{2}')+\omega(k_{3}')-\omega(k_{3}))}{\omega(k_{3}')-\omega(k_{3})} \nonumber \\
&\times\frac{(\tilde{G}^{(1)})^{-1}(\omega(k_{2}))-(\tilde{G}^{(1)})^{-1}(\omega(k_{2})+\omega(k_{1})-\omega(k_{1}'))}{\omega(k_{1})-\omega(k_{1}')}
\nonumber \\
&+\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{3}'))\frac{1}{\omega(k_{1}')-\omega(k_{1})} \nonumber \\
& \times\Bigg(\tilde{G}^{(1)}(\omega(k_{1}')) \nonumber \\
& \times \frac{(\tilde{G}^{(1)})^{-1}(\omega(k_{3}))-(\tilde{G}^{(1)})^{-1}(\omega(k_{1}')+\omega(k_{3}')-\omega(k_{1}))}{\omega(k_{1}')+\omega(k_{3}')-\omega(k_{1})-\omega(k_{3})} \nonumber \\
&-\tilde{G}^{(1)}(\omega(k_{1}))\frac{(\tilde{G}^{(1)})^{-1}(\omega(k_{3}))-(\tilde{G}^{(1)})^{-1}(\omega(k_{3}'))}{\omega(k_{3}')-\omega(k_{3})}\Bigg).
\end{align}
In contrast, the term \eqref{term_dl3} is singular and contributes to the $3=1+1+1$ cluster of the scattering matrix. To show this, we first fully symmetrize
\begin{align}
& \tilde{G}^{(1)}(\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{1})) \nonumber \\
& \times P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg)P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg) \nonumber \\
\to &\frac{1}{3}\tilde{G}^{(1)}(\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{1})) \nonumber \\
& \times \Bigg[P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg)P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg) \nonumber \\
& \quad +P\Bigg(\frac{1}{\omega(k_{2}')-\omega(k_{2})}\Bigg)P\Bigg(\frac{1}{\omega(k_{3})-\omega(k_{3}')}\Bigg) \nonumber \\
& \quad +P\Bigg(\frac{1}{\omega(k_{1}')-\omega(k_{1})}\Bigg)P\Bigg(\frac{1}{\omega(k_{2})-\omega(k_{2}')}\Bigg)\Bigg].
\label{symmP}
\end{align}
Owing to the Poincar\'e--Bertrand distributional identity
\begin{align}
P\Bigg(\frac{1}{x}\Bigg)P\Bigg(\frac{1}{y}\Bigg)&=P\Bigg(\frac{1}{y-x}\Bigg) \Bigg[P\Bigg(\frac{1}{x}\Bigg)-P\Bigg(\frac{1}{y}\Bigg)\Bigg] \nonumber \\
&+\pi^{2}\delta(x)\delta(y),
\end{align}
we establish the identity
\begin{align}
& P\Bigg(\frac{1}{\omega(k_{3}')-\omega(k_{3})}\Bigg)P\Bigg(\frac{1}{\omega(k_{1})-\omega(k_{1}')}\Bigg)\nonumber \\
+& P\Bigg(\frac{1}{\omega(k_{2}')-\omega(k_{2})}\Bigg)P\Bigg(\frac{1}{\omega(k_{3})-\omega(k_{3}')}\Bigg) \nonumber \\
+& P\Bigg(\frac{1}{\omega(k_{1}')-\omega(k_{1})}\Bigg)P\Bigg(\frac{1}{\omega(k_{2})-\omega(k_{2}')}\Bigg)\nonumber \\
=& \pi^{2}\delta(\omega(k_{3}')-\omega(k_{3}))\delta(\omega(k_{1}')-\omega(k_{1})),
\end{align}
leading us to the result
\begin{align}
& \frac{\pi^2}{3} \tilde{G}^{(1)}(\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{1})) \nonumber \\ & \times \delta(\omega(k_{3}')-\omega(k_{3}))\delta(\omega(k_{1}')-\omega(k_{1}))
\end{align}
in \eqref{symmP}.
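For completeness, let us verify the last identity (the shorthand $a$, $b$, $c$ below is introduced only for this check). Denoting $a=\omega(k_{1}')-\omega(k_{1})$, $b=\omega(k_{2}')-\omega(k_{2})$, and $c=\omega(k_{3}')-\omega(k_{3})$, with $a+b+c=0$ on shell, the substitution $y\rightarrow-y$ in the Poincar\'e--Bertrand identity yields the equivalent additive form
\begin{align}
\nonumber
P\Bigg(\frac{1}{a}\Bigg)P\Bigg(\frac{1}{b}\Bigg)=P\Bigg(\frac{1}{a+b}\Bigg)\Bigg[P\Bigg(\frac{1}{a}\Bigg)+P\Bigg(\frac{1}{b}\Bigg)\Bigg]-\pi^{2}\delta(a)\delta(b).
\end{align}
Since $c=-(a+b)$, the second and third cyclic products give $P(1/b)P(1/c)+P(1/c)P(1/a)=-P(1/(a+b))[P(1/a)+P(1/b)]$, so that
\begin{align}
\nonumber
-\Bigg[P\Bigg(\frac{1}{a}\Bigg)P\Bigg(\frac{1}{b}\Bigg)+P\Bigg(\frac{1}{b}\Bigg)P\Bigg(\frac{1}{c}\Bigg)+P\Bigg(\frac{1}{c}\Bigg)P\Bigg(\frac{1}{a}\Bigg)\Bigg]=\pi^{2}\delta(a)\delta(b)=\pi^{2}\delta(c)\delta(a),
\end{align}
where the last equality holds on the support $a=b=c=0$. This is precisely the sum of the three principal-value products written above.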
Combining all of the above results, we arrive at the following decomposition of the three-photon $T$-matrix
\begin{widetext}
\begin{align}
& T^{(1, 3)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3})) \nonumber \\
=& \frac{(-2\pi{i})^{2}}{6}T^{(1)}_{s_{1}', s_{1}}(\omega(k_{1}))T^{(1)}_{s_{2}', s_{2}}(\omega(k_{2}))T^{(1)}_{s_{3}', s_{3}}(\omega(k_{3})) \delta(\omega(k_{1}')-\omega(k_{1})) \delta(\omega(k_{2}')-\omega(k_{2}))\nonumber \\
- & \, 2\pi{i}T^{(1, 2, C)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega(k_{1})+\omega(k_{2}))T^{(1)}_{s_{3}', s_{3}}(\omega(k_{3}))\delta(\omega(k_{3}')-\omega(k_{3})) + T^{(1, 3, C)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3})),
\end{align}
with the three-body connected part
\begin{align}
& T^{(1, 3, C)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3}))=g_{\mu_{1}}^{*}(k_{1})g_{\mu_{2}}^{*}(k_{2})g_{\mu_{3}}^{*}(k_{3}){g_{\mu_{1}'}(k_{1}')g_{\mu_{2}'}(k_{2}')g_{\mu_{3}'}(k_{3}')} \nonumber \\
& \times\Bigg\{\tilde{G}^{(1)}(\omega(k_{3}'))\tilde{G}^{(1)}(\omega(k_{1}))\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{2}')+\omega(k_{3}')-\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{2}')) \nonumber \\
& \qquad \times \frac{(\tilde{G}^{(1)})^{-1}(\omega(k_{2}'))-(\tilde{G}^{(1)})^{-1}(\omega(k_{2}')+\omega(k_{3}')-\omega(k_{3}))}{\omega(k_{3}')-\omega(k_{3})}\frac{(\tilde{G}^{(1)})^{-1}(\omega(k_{2}))-(\tilde{G}^{(1)})^{-1}(\omega(k_{2})+\omega(k_{1})-\omega(k_{1}'))}{\omega(k_{1})-\omega(k_{1}')}\nonumber \\
& \qquad + \frac{1}{\omega(k_{1}')-\omega(k_{1})} \Bigg[\tilde{G}^{(1)}(\omega(k_{2}))\tilde{G}^{(1)}(\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{3}')) \nonumber \\
& \qquad \qquad \times \Bigg(\tilde{G}^{(1)}(\omega(k_{1}')) \frac{(\tilde{G}^{(1)})^{-1}(\omega(k_{3}))-(\tilde{G}^{(1)})^{-1}(\omega(k_{1}')+\omega(k_{3}')-\omega(k_{1}))}{\omega(k_{1}')+\omega(k_{3}')-\omega(k_{1})-\omega(k_{3})} \nonumber \\ & \qquad \qquad \qquad -\tilde{G}^{(1)}(\omega(k_{1}))\frac{(\tilde{G}^{(1)})^{-1}(\omega(k_{3}))-(\tilde{G}^{(1)})^{-1}(\omega(k_{3}'))}{\omega(k_{3}')-\omega(k_{3})}\Bigg) \nonumber \\
& \qquad \qquad +\tilde{G}^{(1)}(\omega(k_{1}'))\tilde{G}^{(1)}(\omega(k_{3}))\tilde{G}^{(1)}(\omega(k_{3}')+\omega(k_{1}')-\omega(k_{1})) \overline{F}^{(1, 1)}(k_{2}', k_{2}, \omega(k_{1}')+\omega(k_{2}')+\omega(k_{3}')-\omega(k_{1})) \nonumber \\
& \qquad \qquad -\tilde{G}^{(1)}(\omega(k_{1}))\tilde{G}^{(1)}(\omega(k_{3}'))\tilde{G}^{(1)}(\omega(k_{1})+\omega(k_{3})-\omega(k_{1}')) \overline{F}^{(1, 1)}(k_{2}', k_{2}, \omega(k_{2}')+\omega(k_{3}'))\Bigg] \nonumber \\
& \qquad +\tilde{G}(\omega(k_{1}))\tilde{G}(\omega(k_{1}'))\overline{F}^{(1, 1)}(k_{2}', k_{2}, \omega(k_{1}')+\omega(k_{2}'))\tilde{G}^{(1)}(\omega(k_{1})+\omega(k_{3})-\omega(k_{3}')) \overline{F}^{(1, 1)}(k_{3}', k_{3}, \omega(k_{1})+\omega(k_{3}))\Bigg\} \nonumber \\
&+\overline{T}^{(1, 3)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3})).
\end{align}
Plugging everything into the definition of the three-photon $S$-matrix we finally obtain
\begin{align}
\mathcal{S}_{3}& =\Bigg[\frac{1}{3!}S^{(1)}_{s_{1}', s_{1}}S^{(1)}_{s_{2}', s_{2}}S^{(1)}_{s_{3}', s_{3}}-2\pi{i}T^{(1, 2, C)}_{s_{1}'s_{2}', s_{1}s_{2}}(\omega(k_{1})+\omega(k_{2}))S^{(1)}_{s_{3}', s_{3}}\delta(\omega(k_{1}')+\omega(k_{2}')-\omega(k_{1})-\omega(k_{2}))
\nonumber \\
\label{eq: S3-cluster}
& -2\pi{i}T^{(1, 3, C)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\omega(k_{1})+\omega(k_{2})+\omega(k_{3}))\delta(\omega(k_{1}')+\omega(k_{2}')+\omega(k_{3}')-\omega(k_{1})-\omega(k_{2})-\omega(k_{3}))\Bigg] a_{s_{1}'}^{\dagger} a_{s_{2}'}^{\dagger}a_{s_{3}'}^{\dagger}a_{s_{3}}a_{s_{2}}a_{s_{1}}.
\end{align}
\end{widetext}
\section{Third-order coherence function}
\label{ap: 3ocf_exp}
In this appendix we present the formula for the third-order coherence function of phonons in the giant atom model. Using the formula (\ref{eq: S3-cluster}) with $\omega(k)=k$ along with the notations introduced in Section \ref{sec:GIANT_ATOM}, we obtain the following result upon contraction with the three-phonon Fock state
\begin{widetext}
\begin{align}
\nonumber
\mathcal{S}_{3}\ket{\Phi^{(3)}_{1}}&=\frac{1}{\sqrt{6}}\sum_{\{\mu'_{i}\}}\Bigg(\int_{k_{1}k_{2}k_{3}}\varphi(k_{1})\varphi(k_{2})\varphi(k_{3})S^{(1)}_{\mu_{1}', 1}(k_{1})S^{(1)}_{\mu_{2}', 1}(k_{2})S^{(1)}_{\mu_{3}', 1}(k_{3})a_{\mu_{1}'}^{\dagger}(k_{1})a_{\mu_{2}'}^{\dagger}(k_{2})a_{\mu_{3}'}^{\dagger}(k_{3})\ket{\Omega}\\
\nonumber
&-12\pi{i}\int_{k_{1}'k_{2}'k_{1}k_{2}k_{3}}\varphi(k_{1})\varphi(k_{2})\varphi(k_{3})T^{(2, C)}_{\mu_{1}'k_{1}', \mu_{2}'k_{2}', \mu_{1}k_{1}, \mu_{2}k_{2}}(k_{1}+k_{2})\delta(k_{1}'+k_{2}'-k_{1}-k_{2})S^{(1)}_{\mu_{3}', 1}(k_{3})\\
\nonumber
&\times{a}_{\mu_{1}'}^{\dagger}(k_{1}')a_{\mu_{2}'}^{\dagger}(k_{2}')a_{\mu_{3}'}^{\dagger}(k_{3})\ket{\Omega}\\
\label{eq: SPE57}
&-12\pi{i}\Bigg(\frac{2\pi}{L}\Bigg)^{3/2}\int_{k_{1}'k_{2}'k_{3}'}Q(k_{1}', k_{2}', k_{3}')\delta(k_{1}'+k_{2}'+k_{3}')a_{\mu_{1}'}^{\dagger}(k_{1}')a_{\mu_{2}'}^{\dagger}(k_{2}')a_{\mu_{3}'}^{\dagger}(k_{3}')\ket{\Omega}\Bigg),
\end{align}
where we have introduced the following symmetrized version of the connected three-phonon transition operator
\begin{align}
\nonumber
Q(k_{1}', k_{2}', k_{3}')&=\frac{1}{6}\Big[T^{(3, C)}_{\mu_{1}'k_{1}', \mu_{2}'k_{2}', \mu_{3}'k_{3}', 10, 10, 10}(0)+T^{(3, C)}_{\mu_{1}'k_{1}', \mu_{3}'k_{3}', \mu_{2}'k_{2}', 10, 10, 10}(0)+T^{(3, C)}_{\mu_{3}'k_{3}', \mu_{2}'k_{2}', \mu_{1}'k_{1}', 10, 10, 10}(0)\\
\label{eq: SPE58}
&+T^{(3, C)}_{\mu_{3}'k_{3}', \mu_{1}'k_{1}', \mu_{2}'k_{2}', 10, 10, 10}(0)+T^{(3, C)}_{\mu_{2}'k_{2}', \mu_{1}'k_{1}', \mu_{3}'k_{3}', 10, 10, 10}(0)+T^{(3, C)}_{\mu_{2}'k_{2}', \mu_{3}'k_{3}', \mu_{1}'k_{1}', 10, 10, 10}(0)\Big].
\end{align}
\end{widetext}
Here we have suppressed the dependence of $Q(k_{1}', k_{2}', k_{3}')$ on $\{\mu'\}$ since in the giant atom model the coupling constants are independent of the channel index. Now we consider
\begin{widetext}
\begin{align}
\nonumber
a_{\mu''}(\tau_{3})a_{\mu'}(\tau_{2})a_{\mu}(\tau_{1})\mathcal{S}_{3}\ket{\Phi^{(3)}_{1}}&=\frac{\sqrt{6}}{L^{3/2}}S^{(1)}_{\mu'', 1}(0)S^{(1)}_{\mu', 1}(0)S^{(1)}_{\mu, 1}(0)\Bigg(1-4\pi{i}\Bigg[\frac{I^{(1)}_{\mu'', \mu'}(\tau_{3}-\tau_{2})}{S^{(1)}_{\mu'', 1}(0)S^{(1)}_{\mu', 1}(0)}\\
\label{eq: SPE62}
&+\frac{I^{(1)}_{\mu, \mu'}(\tau_{2}-\tau_{1})}{S^{(1)}_{\mu, 1}(0)S^{(1)}_{\mu', 1}(0)}+\frac{I^{(1)}_{\mu'', \mu}(\tau_{3}-\tau_{1})}{S^{(1)}_{\mu'', 1}(0)S^{(1)}_{\mu, 1}(0)}\Bigg]-12\pi{i}\frac{I^{(2)}(\tau_{3}-\tau_{1}, \tau_{2}-\tau_{1})}{S^{(1)}_{\mu'', 1}(0)S^{(1)}_{\mu', 1}(0)S^{(1)}_{\mu, 1}(0)}\Bigg),
\end{align}
\end{widetext}
where the following functions were defined
\begin{align}
\label{eq: SPE63}
I^{(1)}_{\mu', \mu}(t_{1})=&\int_{k}e^{ikt_{1}}M_{\mu', \mu}(k),\\
I^{(2)}(t_{2}, t_{1})=&\int_{k, q}e^{iqt_{2}}e^{ikt_{1}}Q(-k-q, k, q),
\end{align}
where $M_{\mu, \mu'}(k)=(T^{(2, C)}_{\mu' k, \mu-k, 10, 10}(0)+T^{(2, C)}_{\mu -k, \mu'k, 10, 10}(0))/2$, as before. By introducing the variables $t'=\tau_{3}-\tau_{1}$ and $t=\tau_{2}-\tau_{1}$, we can immediately write down the normalized third-order coherence function to lowest order in $\varphi$ as
\begin{widetext}
\begin{align}
C^{(3)}_{\mu'', \mu', \mu}(t', t)=\Bigg{|}1-4\pi{i}\Bigg[\frac{I^{(1)}_{\mu'', \mu'}(t'-t)}{S^{(1)}_{\mu'', 1}(0)S^{(1)}_{\mu', 1}(0)}+\frac{I^{(1)}_{\mu, \mu'}(t)}{S^{(1)}_{\mu, 1}(0)S^{(1)}_{\mu', 1}(0)}+\frac{I^{(1)}_{\mu'', \mu}(t')}{S^{(1)}_{\mu'', 1}(0)S^{(1)}_{\mu, 1}(0)}\Bigg]-12\pi{i}\frac{I^{(2)}(t', t)}{S^{(1)}_{\mu'', 1}(0)S^{(1)}_{\mu', 1}(0)S^{(1)}_{\mu, 1}(0)}\Bigg{|}^{2}.
\end{align}
\end{widetext}
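In practice, the Fourier integrals $I^{(1)}$ and $I^{(2)}$ entering this expression are evaluated numerically. A minimal quadrature check (plain Python) for integrals of the type $I^{(1)}(t)=\int_{k}e^{ikt}M(k)$, with a normalized Lorentzian as a hypothetical stand-in for the actual matrix element $M_{\mu', \mu}(k)$, recovers the known analytic transform:

```python
import numpy as np

# Sanity check of the quadrature for I1(t) = \int dk e^{ikt} M(k).
# Stand-in kernel: a normalized Lorentzian of half-width gamma/2, whose
# Fourier transform is exp(-gamma*|t|/2). This is NOT the actual matrix element.
gamma, t = 1.0, 3.0
k = np.linspace(-200.0, 200.0, 400001)
M = (gamma / (2.0 * np.pi)) / (k ** 2 + (gamma / 2.0) ** 2)
I1 = np.sum(np.exp(1j * k * t) * M) * (k[1] - k[0])   # simple Riemann sum
exact = np.exp(-gamma * abs(t) / 2.0)                 # analytic transform
```

The imaginary part vanishes on the symmetric grid, and the truncation error is controlled by the $1/k^{2}$ tail of the kernel.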
\section{Diagrammatic representation of the generic three-body transition operator}
\label{ap: dressed}
In this appendix we start our analysis with equation (\ref{eq: 3body_vertex_generator}). Taking matrix elements of (\ref{eq: 3body_vertex_generator}) in the three-particle subspace, we arrive at the following integral equation
\begin{widetext}
\begin{align}
\nonumber
W^{(3, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)=&{D}^{(2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
+&[{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{2}'\bar{s}_{1}'}(\epsilon)+{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{1}'\bar{s}_{2}'}(\epsilon)]{G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
+&{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{1}'\bar{s}_{2}}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}})W^{(2, 1)}_{\bar{s}_{2}, \bar{s}_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
+&D^{(2)}_{s_{1}'s_{2}', \bar{s}_{1}\bar{s}_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}}-\omega_{\bar{s}_{1}'})W^{(2, 1)}_{\bar{s}_{1}, \bar{s}_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
+&{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{1}\bar{s}_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}}-\omega_{\bar{s}_{2}'})W^{(2, 1)}_{\bar{s}_{1}, \bar{s}_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
+&{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{2}'\bar{s}_{2}}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{2}'}-\omega_{\bar{s}_{2}})W^{(2, 1)}_{\bar{s}_{2}, \bar{s}_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
+&{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{1}\bar{s}_{2}}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}}-\omega_{\bar{s}_{2}})W^{(2, 2)}_{\bar{s}_{2}\bar{s}_{1}, \bar{s}_{2}'\bar{s}_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
+&{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{1}\bar{s}_{2}}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}}-\omega_{\bar{s}_{2}})W^{(2, 2)}_{\bar{s}_{1}\bar{s}_{2}, \bar{s}_{2}'\bar{s}_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon)\\
\nonumber
+&{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{1}\bar{s}_{2}}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}}-\omega_{\bar{s}_{2}})W^{(2, 2)}_{\bar{s}_{1}\bar{s}_{2}, \bar{s}_{1}'\bar{s}_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon)\\
+&{D}^{(2)}_{s_{1}'s_{2}', \bar{s}_{1}\bar{s}_{2}}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}}-\omega_{\bar{s}_{2}})W^{(2, 2)}_{\bar{s}_{2}'\bar{s}_{1}, \bar{s}_{1}'\bar{s}_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})W^{(3, 2)}_{\bar{s}_{1}'\bar{s}_{2}', s_{1}s_{2}}(\epsilon),
\end{align}
\end{widetext}
where the projection of $\mathcal{D}$ onto the $2$-particle subspace is given by
\begin{align}
\nonumber
&D^{(2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)\\
&=v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}G^{(1)}(\epsilon)v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{1}}^{\dagger}.
\end{align}
Bearing this in mind, we make the following ansatz
\begin{align}
\nonumber
&W^{(3, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)\\
&=v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}G^{(3, 0)}(\epsilon)v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{1}}^{\dagger}.
\end{align}
This, in turn, leads to the following Dyson equation
\begin{align}
G^{(3, 0)}(\epsilon)=G^{(1)}(\epsilon)+G^{(1)}(\epsilon)\Sigma^{(3, 0)}(\epsilon)G^{(3, 0)}(\epsilon),
\end{align}
where the self-energy in the three-excitation subspace is given by
\begin{widetext}
\begin{align}
\nonumber
\Sigma^{(3, 0)}(\epsilon)=&[v_{s_{1}'}^{\dagger}G_{0}(\epsilon-\omega_{s_{1}'})v_{s_{2}'}^{\dagger}+v_{s_{2}'}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{1}'}^{\dagger}]{G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}\\
\nonumber
+&v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{1}'}^{\dagger}{G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}})W^{(2, 1)}_{s_{2}, s_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}\\
\nonumber
+&v_{s_{1}'}^{\dagger}G_{0}(\epsilon-\omega_{s_{1}'})v_{s_{1}}^{\dagger}{G}^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{1}'})W^{(2, 1)}_{s_{1}, s_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}\\
\nonumber
+&v_{s_{2}'}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{1}}^{\dagger}{G}^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}'})W^{(2, 1)}_{s_{1}, s_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}\\
\nonumber
+&v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{2}'}^{\dagger}{G}^{(1)}(\epsilon-\omega_{s_{2}'}-\omega_{s_{2}})W^{(2, 1)}_{s_{2}, s_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}\\
\nonumber
+&v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{1}}^{\dagger}{G}^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}})W^{(2, 2)}_{s_{2}s_{1}, s_{2}'s_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}\\
\nonumber
+&v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{1}}^{\dagger}{G}^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}})W^{(2, 2)}_{s_{1}s_{2}, s_{2}'s_{1}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}\\
\nonumber
+&v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{1}}^{\dagger}{G}^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}})W^{(2, 2)}_{s_{1}s_{2}, s_{1}'s_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}\\
+&v_{s_{2}}^{\dagger}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{1}}^{\dagger}{G}^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}})W^{(2, 2)}_{s_{2}s_{1}, s_{1}'s_{2}'}(\epsilon){G}^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})v_{s_{1}'}G_{0}(\epsilon-\omega_{s_{2}'})v_{s_{2}'}.
\end{align}
\end{widetext}
Let us now consider the expression (\ref{eq: Tphoton_Transition_operator}) for the transition operator. Concentrating on the three-photon subspace we obtain
\begin{widetext}
\begin{align}
T^{(3, 3)}_{s_{1}'s_{2}'s_{3}', s_{1}s_{2}s_{3}}(\epsilon)=&\Braket{g|v_{s_{1}'}{G}^{(1)}(\epsilon-\omega_{s_{2}'}-\omega_{s_{3}'})[W^{(2, 2)}_{s_{2}'s_{3}', s_{2}s_{3}}(\epsilon)+V^{(3, 2)}_{s_{2}'s_{3}'}(\epsilon)G^{(3, 0)}(\epsilon)\overline{V}^{(3, 2)}_{s_{2}s_{3}}(\epsilon)]{G}^{(1)}(\epsilon-\omega_{s_{2}}-\omega_{s_{3}})v^{\dagger}_{s_{1}}|g},\\
\nonumber
V^{(3, 2)}_{s_{1}'s_{2}'}=&v_{s_{1}'}G_{0}(\epsilon)v_{s_{2}'}+W^{(2, 1)}_{s_{1}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s})v_{s}G_{0}(\epsilon)v_{s_{2}'}+W^{(2, 1)}_{s_{1}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s})v_{s_{2}'}G_{0}(\epsilon-\omega_{s})v_{s}\\
\nonumber
+&W^{(2, 2)}_{s_{1}'s_{2}',
\bar{s}_{1}'\bar{s}_{2}'}(\epsilon)G^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})v_{\bar{s}_{1}'}G_{0}(\epsilon-\omega_{\bar{s}_{2}'})v_{\bar{s}_{2}'}\\
+&W^{(2, 2)}_{s_{1}'s_{2}', \bar{s}_{2}'\bar{s}_{1}'}(\epsilon)G^{(1)}(\epsilon-\omega_{\bar{s}_{1}'}-\omega_{\bar{s}_{2}'})v_{\bar{s}_{1}'}G_{0}(\epsilon-\omega_{\bar{s}_{2}'})v_{\bar{s}_{2}'},\\
\nonumber
\overline{V}^{(3, 2)}_{s_{1}s_{2}}=&v_{s_{1}}G_{0}(\epsilon)v_{s_{2}}+v_{s_{1}}^{\dagger}G_{0}(\epsilon)v_{s'}^{\dagger}G^{(1)}(\epsilon-\omega_{s'})W^{(2, 1)}_{s', s_{2}}(\epsilon)+v_{s'}^{\dagger}G_{0}(\epsilon-\omega_{s'})v_{s_{1}}^{\dagger}G^{(1)}(\epsilon-\omega_{s'})W^{(2, 1)}_{s', s_{2}}(\epsilon)\\
\nonumber
+&v^{\dagger}_{\bar{s}_{2}}G_{0}(\epsilon-\omega_{\bar{s}_{2}})v^{\dagger}_{\bar{s}_{1}}G^{(1)}(\epsilon-\omega_{\bar{s}_{1}}-\omega_{\bar{s}_{2}})W^{(2, 2)}_{\bar{s}_{1}\bar{s}_{2}, s_{1}s_{2}}(\epsilon)\\
+&v^{\dagger}_{\bar{s}_{2}}G_{0}(\epsilon-\omega_{\bar{s}_{2}})v^{\dagger}_{\bar{s}_{1}}G^{(1)}(\epsilon-\omega_{\bar{s}_{1}}-\omega_{\bar{s}_{2}})W^{(2, 2)}_{\bar{s}_{2}\bar{s}_{1}, s_{1}s_{2}}(\epsilon).
\end{align}
\end{widetext}
Now, our goal is to rewrite the renormalized two-particle emission $V_{s_{1}'s_{2}'}$ and absorption $\overline{V}_{s_{1}s_{2}}$ vertices, as well as the self-energy in the three-excitation subspace $\Sigma^{(3, 0)}$, in terms of full Green's and vertex functions, as presented in the main text.
\par
First, we note the following identity
\begin{widetext}
\begin{align}
\nonumber
&v_{s_{1}'}G_{0}(\epsilon)v_{s_{2}'}+W^{(2, 1)}_{s_{1}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s})v_{s}G_{0}(\epsilon)v_{s_{2}'}=v_{s'_1} G_0(\epsilon)v_{s'_2} + W_{s'_1, s}^{(2, 1, i)}(\epsilon) G^{(1)}(\epsilon-\omega_{s}) v_{s} G_0(\epsilon) v_{s'_2}\\
\nonumber
&+V_{s'_1}^{(2, 1)}(\epsilon) G^{(2, 0)}(\epsilon) \overline{V}_s^{(2, 1)}(\epsilon) G^{(1)}(\epsilon-\omega_{s}) v_{s} G_0(\epsilon) v_{s'_2} = v_{s'_1} G_0(\epsilon) v_{s'_2} + W_{s'_1, s}^{(2, 1, i)} G^{(1)}(\epsilon-\omega_{s}) v_{s} G_0(\epsilon) v_{s'_2}\\
&+V_{s'_1}^{(2, 1)}(\epsilon) G^{(2, 0)}(\epsilon) \Sigma^{(2, 0)} G_0(\epsilon)
\nonumber
v_{s'_2}=v_{s'_1} G_0(\epsilon) v_{s'_2} + W_{s'_1, s}^{(2, 1, i)}(\epsilon) G^{(1)}(\epsilon-\omega_{s}) v_{s} G_0(\epsilon) v_{s'_2} \\
\label{eq:ident_1}
&+V_{s'_1}^{(2, 1)}(\epsilon) [ G^{(2, 0)}(\epsilon) -G_0(\epsilon) ] v_{s'_2}= V_{s'_1}^{(2, 1)}(\epsilon) G^{(2, 0)}(\epsilon) v_{s'_2}.
\end{align}
\end{widetext}
Analogously
\begin{align}
\nonumber
&v_{s_1}^{\dagger} G_0(\epsilon) v_{s_2}^{\dagger} + v_{s_1}^{\dagger} G_0(\epsilon) v_{s'}^{\dagger} G^{(1)}(\epsilon-\omega_{s'}) W_{s',s_2}^{(2,1)}(\epsilon)\\
\label{eq:ident_2}
&=v_{s_1}^{\dagger} G^{(2,0)}(\epsilon) \overline{V}_{s_2}^{(2, 1)}(\epsilon).
\end{align}
Analysing the structure of equations satisfied by $W^{(2, 1)}$ and $W^{(2, 2)}$ one easily concludes that
\begin{align}
\nonumber
\label{eq: 2body_right}
W^{(2, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)&=W^{(2, 1)}_{s_{1}', s_{1}}(\epsilon)G^{(1)}(\epsilon)W^{(2, 1)}_{s_{2}', s_{2}}(\epsilon)\\
&+W^{(2, 1)}_{s_{1}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s})W^{(2, 2)}_{s_{2}'s, s_{1}s_{2}}(\epsilon),
\end{align}
\begin{align}
\nonumber
\label{eq: 2body_left}
W^{(2, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)&= W^{(2, 1)}_{s_{1}', s_{1}}(\epsilon)G^{(1)}(\epsilon)W^{(2, 1)}_{s_{2}', s_{2}}(\epsilon)\\
&+W^{(2, 2)}_{s_{1}'s_{2}', ss_{1}}(\epsilon)G^{(1)}(\epsilon-\omega_{s})W^{(2, 1)}_{s,s_{2}}(\epsilon).
\end{align}
Multiplying (\ref{eq: 2body_right}) and (\ref{eq: 2body_left}) by $G^{(1)}(\epsilon)v_{s_{2}}G_{0}(\epsilon)v_{s_{1}}$ from the right and by $v_{s_{2}'}^{\dagger}G_{0}(\epsilon)v_{s_{1}'}^{\dagger}G^{(1)}(\epsilon)$ from the left respectively and contracting the relevant indices, we obtain
\begin{widetext}
\begin{align}
\nonumber
& W_{s'_1 s'_2, s_1 s_2}^{(2,2)}(\epsilon) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_2} G_0(\epsilon-\omega_{s_{2}}) v_{s_1}= W_{s'_1 , s_1}^{(2,1)}(\epsilon) G^{(1)}(\epsilon-\omega_{s_{1}}) [V_{ s'_2}^{(2, 1)} (\epsilon)G^{(2, 0)}(\epsilon-\omega_{s_{1}}) v_{s_1} \\
&- v_{ s'_2} G_0(\epsilon-\omega_{s_{1}}) v_{s_1}]+ W_{s'_1 , s}^{(2,1)}(\epsilon) G^{(1)}(\epsilon-\omega_{s}) W_{s'_2 s, s_1 s_2}^{(2,2)}(\epsilon) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_2} G_0(\epsilon-\omega_{s_{1}}) v_{s_1},\\
\nonumber
& v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s'_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) W_{s'_1 s'_2, s_1 s_2}^{(2, 2)}(\epsilon)= [ v_{s'_2}^{\dagger} G^{(2, 0)}(\epsilon-\omega_{s_{2}'}) \overline{V}_{s_1}^{(2, 1)}(\epsilon) - v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s_1}^{\dagger} ] \\
& \times G^{(1)}(\epsilon-\omega_{s_{2}'}) W_{s'_2, s_2}^{(2,1)} + v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s'_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) W_{s'_1 s'_2, s s_1 }^{(2,2)} G^{(1)}(\epsilon-\omega_{s}) W_{s , s_2}^{(2,1)}(\epsilon) .
\end{align}
\end{widetext}
Now, defining the following objects
\begin{align}
\nonumber
&K^{(d)}_{s'_1 s'_2}(\epsilon)\\
\nonumber
&= W_{s'_1 s'_2, s_1 s_2}^{(2,2)}(\epsilon) G^{(1)}(\epsilon-\omega_{s_{2}}-\omega_{s_{1}}) v_{s_2} G_0(\epsilon-\omega_{s_{1}}) v_{s_1} \\
&+ W_{s'_1 , s_1}^{(2,1)} (\epsilon) G^{(1)}(\epsilon-\omega_{s_{1}}) v_{ s'_2} G_0(\epsilon-\omega_{s_{1}}) v_{s_1},
\end{align}
\begin{align}
\nonumber
&\overline{K}^{(d)}_{s_1 s_2}(\epsilon)\\
\nonumber
&= v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s'_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) W_{s'_1 s'_2, s_1 s_2}^{(2,2)}(\epsilon) \\
&+ v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{2}'}) W_{s'_2, s_2}^{(2,1)}(\epsilon),
\end{align}
we establish the following equations
\begin{widetext}
\begin{align}
\nonumber
K^{(d)}_{s'_1 s'_2}(\epsilon) &= W_{s'_1 , s_1}^{(2,1)}(\epsilon)G^{(1)}(\epsilon-\omega_{s_{1}})V_{ s'_2}^{(2, 1)}(\epsilon-\omega_{s_{1}}) G^{(2, 0)}(\epsilon-\omega_{s_{1}}) v_{s_1}\\
&+ W_{s'_1 , s}^{(2,1)}(\epsilon) G^{(1)}(\epsilon-\omega_{s}) [K^{(d)}_{s'_2 s}(\epsilon) - W_{s'_2 , s_1}^{(2,1)}(\epsilon-\omega_{s}) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s}) v_{ s} G_0(\epsilon-\omega_{s_{1}}) v_{s_1}],
\end{align}
\begin{align}
\nonumber
\overline{K}^{(d)}_{s_1 s_2}(\epsilon) &= v_{s'_2}^{\dagger} G^{(2, 0)}(\epsilon-\omega_{s_{2}'}) \overline{V}_{s_1}^{(2, 1)}(\epsilon-\omega_{s_{2}'}) G^{(1)}(\epsilon-\omega_{s_{2}'}) W_{s'_2, s_2}^{(2,1)} (\epsilon)\\
&+ [\overline{K}^{(d)}_{s s_1}(\epsilon) - v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{2}'}-\omega_{s}) W_{s'_2, s_1}^{(2,1)}(\epsilon-\omega_{s}) ] G^{(1)}(\epsilon-\omega_{s}) W_{s , s_2}^{(2,1)}(\epsilon) .
\end{align}
\end{widetext}
Analogously multiplying (\ref{eq: 2body_right}) and (\ref{eq: 2body_left}) by $G^{(1)}(\epsilon)v_{s_{1}}G_{0}(\epsilon)v_{s_{2}}$ from the right and by $v_{s_{1}'}^{\dagger}G_{0}(\epsilon)v_{s_{2}'}^{\dagger}G^{(1)}(\epsilon)$ from the left respectively, contracting the $s_{1, 2}$ indices, and defining
\begin{align}
\nonumber
&K^{(e)}_{s_{1}'s_{2}'}(\epsilon)\\
&=W^{(2, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon)G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}})v_{s_{1}}G_{0}(\epsilon-\omega_{s_{2}})v_{s_{2}},
\end{align}
\begin{align}
\nonumber
&\overline{K}^{(e)}_{s_{1}s_{2}}(\epsilon)\\
&=v_{s_{1}'}^{\dagger}G_{0}(\epsilon-\omega_{s_{1}'})v_{s_{2}'}^{\dagger}G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})W^{(2, 2)}_{s_{1}'s_{2}', s_{1}s_{2}}(\epsilon),
\end{align}
we deduce
\begin{widetext}
\begin{align}
\nonumber
K^{(e)}_{s'_1 s'_2}(\epsilon)&= W_{s'_1 , s_1}^{(2,1)} (\epsilon)G^{(1)}(\epsilon-\omega_{s_{1}}) W_{s'_2, s_2}^{(2,1)}(\epsilon-\omega_{s_{1}}) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_1} G_0(\epsilon-\omega_{s_{2}}) v_{s_2}\\
&+ W_{s'_1 , s}^{(2,1)}(\epsilon) G^{(1)}(\epsilon-\omega_{s}) K^{(e)}_{s'_2 s}(\epsilon) ,\\
\nonumber
\overline{K}^{(e)}_{s_1 s_2}(\epsilon) &= v_{s'_1}^{\dagger} G_0(\epsilon-\omega_{s_{1}'}) v_{s'_2}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) W_{s'_1 , s_1}^{(2,1)}(\epsilon-\omega_{s_{2}'}) G^{(1)}(\epsilon-\omega_{s_{2}'}) W_{s'_2, s_2}^{(2,1)}(\epsilon)\\
&+ \overline{K}^{(e)}_{s s_1}(\epsilon) G^{(1)}(\epsilon-\omega_{s}) W_{s , s_2}^{(2,1)}(\epsilon).
\end{align}
\end{widetext}
Further we define
\begin{align}
&K_{s_{1}'s_{2}'}(\epsilon)=K_{s_{1}'s_{2}'}^{(d)}(\epsilon)+K_{s_{1}'s_{2}'}^{(e)}(\epsilon),\\
&\overline{K}_{s_{1}s_{2}}(\epsilon)=\overline{K}_{s_{1}s_{2}}^{(d)}(\epsilon)+\overline{K}_{s_{1}s_{2}}^{(e)}(\epsilon).
\end{align}
With these definitions it is easy to show that the two-particle emission/absorption vertices are given by
\begin{align}
\label{eq: C25}
&V^{(3, 2)}_{s_{1}'s_{2}'}(\epsilon)=V^{(2, 1)}_{s_{1}'}(\epsilon)G^{(2, 0)}(\epsilon)V^{(2, 1)}_{s_{2}'}(\epsilon)+\tilde{K}_{s_{1}'s_{2}'}(\epsilon),\\
\label{eq: C26}
&\overline{V}^{(3, 2)}_{s_{1}s_{2}}(\epsilon)=\overline{V}^{(2, 1)}_{s_{1}}(\epsilon)G^{(2, 0)}(\epsilon)\overline{V}^{(2, 1)}_{s_{2}}(\epsilon)+\tilde{\overline{K}}_{s_{1}s_{2}}(\epsilon),
\end{align}
where
\begin{align}
\nonumber
&\tilde{K}_{s_{1}'s_{2}'}(\epsilon)={K}_{s_{1}'s_{2}'}(\epsilon)\\
&-V_{s_{1}'}^{(2, 1)}(\epsilon)G^{(2, 0)}(\epsilon)W^{(2, 1, i)}_{s_{2}', s_{1}}(\epsilon)G^{(1)}(\epsilon-\omega_{s_{1}})v_{s_{1}},\\
\nonumber
&\tilde{\overline{K}}_{s_{1}s_{2}}(\epsilon)={\overline{K}}_{s_{1}s_{2}}(\epsilon)\\
&-v_{s_{2}'}^{\dagger}G^{(1)}(\epsilon-\omega_{s_{2}'})W^{(2, 1, i)}_{s_{2}', s_{1}}(\epsilon)G^{(2, 0)}(\epsilon)\overline{V}_{s_{2}}^{(2, 1)}(\epsilon),
\end{align}
obey the following integral equations
\begin{widetext}
\begin{align}
\nonumber
\tilde{K}_{s_{1}'s_{2}'}(\epsilon)=&W^{(2, 1)}_{s_{1}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s})V^{(2, 1)}_{s_{2}'}(\epsilon-\omega_{s})G^{(2, 0)}(\epsilon-\omega_{s})V^{(2, 1)}_{s}(\epsilon)-V_{s_{1}'}^{(2, 1)}(\epsilon)G^{(2, 0)}(\epsilon)W^{(2, 1, i)}_{s_{2}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s})v_{s}\\
\label{eq: C29}
+&W^{(2, 1)}_{s_{1}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s})\tilde{K}_{s_{2}'s}(\epsilon),\\
\nonumber
\tilde{\overline{K}}_{s_{1}s_{2}}(\epsilon)=&\overline{V}^{(2, 1)}_{s}(\epsilon)G^{(2, 0)}(\epsilon-\omega_{s})V^{(2, 1)}_{s_{1}}(\epsilon-\omega_{s})G^{(1)}(\epsilon-\omega_{s})W^{(2, 1)}_{s, s_{2}}(\epsilon)-v_{s}^{\dagger}G^{(1)}(\epsilon-\omega_{s})W^{(2, 1, i)}_{s, s_{1}}(\epsilon)G^{(2, 0)}(\epsilon)\overline{V}^{(2, 1)}_{s_{2}}(\epsilon)\\
\label{eq: C30}
+&\tilde{\overline{K}}_{ss_{1}}(\epsilon)G^{(1)}(\epsilon-\omega_{s})W^{(2, 1)}_{s, s_{2}}(\epsilon).
\end{align}
\end{widetext}
Using equations (\ref{eq: C25}), (\ref{eq: C26}) together with (\ref{eq: C29}), (\ref{eq: C30}), we finally arrive at the following equations
\begin{align}
\nonumber
V^{(3, 2)}_{s_{1}'s_{2}'}(\epsilon)&=V_{s_{1}'}^{(2, 1)}(\epsilon)G^{(2, 0)}(\epsilon)v_{s_{2}'}\\
&+W^{(2, 1)}_{s_{1}', s}(\epsilon)G^{(1)}(\epsilon-\omega_{s})V^{(3, 2)}_{s_{2}'s}(\epsilon),
\end{align}
\begin{align}
\nonumber
\overline{V}^{(3, 2)}_{s_{1}s_{2}}(\epsilon)&=v_{s_{1}}^{\dagger}G^{(2, 0)}(\epsilon)\overline{V}^{(2, 1)}_{s_{2}}(\epsilon)\\
&+\overline{V}^{(3, 2)}_{ss_{1}}(\epsilon)G^{(1)}(\epsilon-\omega_{s})W^{(2, 1)}_{s, s_{2}}(\epsilon),
\end{align}
which are precisely the equations (\ref{eq: twophotonabsorb}) and (\ref{eq: twophotonemis}) stated in the main text.
\par
Now, we turn our attention to the self-energy bubble. Let us show that equations (\ref{eq:3photon_self1}) and (\ref{eq:3photon_self2}) hold via direct substitution. One has
\begin{widetext}
\begin{align}
\nonumber
\Sigma^{(3, 0)} (\epsilon) &= ( v_{s'_1}^{\dagger} G_0(\epsilon-\omega_{s_{1}'}) v_{s'_2}^{\dagger} + v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s'_1}^{\dagger} ) G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) V_{s'_1 s'_2}^{(3, 2)}(\epsilon) \\
\nonumber
&= v_{s'_2}^{\dagger} [ G^{(2, 0)}(\epsilon-\omega_{s_{2}'}) - G_0(\epsilon-\omega_{s_{2}'})] v_{s'_2} - v_{s'_1}^{\dagger} G_0(\epsilon-\omega_{s_{1}'}) v_s^{\dagger} G^{(1)} (\epsilon-\omega_{s_{1}'}-\omega_{s})v_{s'_1} G_0(\epsilon-\omega_{s}) v_{s} \\
\nonumber
&+ v_{s'_1}^{\dagger} G_0(\epsilon-\omega_{s_{1}'}) v_{s'_2}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) V_{s'_1} ^{(2, 1)}(\epsilon-\omega_{s_{2}'})G^{(2, 0)}(\epsilon-\omega_{s_{2}'}) v_{s'_2}\\
\nonumber
&+ v_{s'_1}^{\dagger} G^{(2, 0)}(\epsilon-\omega_{s_{1}'}) \overline{V}_s^{(2, 1)}(\epsilon-\omega_{s_{1}'})G^{(1)}(\epsilon-\omega_{s}-\omega_{s_{1}'})v_{s'_1} G_0(\epsilon-\omega_{s}) v_{s}\\
\nonumber
&+ v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s'_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) W_{s'_2 , s}^{(2,1)}(\epsilon-\omega_{s_{1}'}) G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s}) v_{s'_1} G_0(\epsilon-\omega_{s}) v_{s} \\
\nonumber
&+ v_{s'_1}^{\dagger} G_0(\epsilon-\omega_{s_{1}'}) v_{s'_2}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) W_{s'_1 s'_2 , s' s}^{(2,2)}(\epsilon) G^{(1)} (\epsilon-\omega_{s'}-\omega_{s})v_{s'} G_0(\epsilon-\omega_{s}) v_{s} \\
\nonumber
&+v_{s'_1}^{\dagger} G_0(\epsilon-\omega_{s_{1}'}) v_{s'_2}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'}) W_{s'_1 s'_2 , s s'}^{(2,2)} (\epsilon)G^{(1)}(\epsilon-\omega_{s'}-\omega_{s}) v_{s'} G_0(\epsilon-\omega_{s}) v_{s} \\
\nonumber
&+v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s'_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})W_{s'_1 s'_2 , s' s}^{(2,2)}(\epsilon)G^{(1)}(\epsilon-\omega_{s'}-\omega_{s}) v_{s'} G_0(\epsilon-\omega_{s}) v_{s}\\
&+v_{s'_2}^{\dagger} G_0(\epsilon-\omega_{s_{2}'}) v_{s'_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}'}-\omega_{s_{2}'})W_{s'_1 s'_2 , s s'}^{(2,2)}(\epsilon)G^{(1)}(\epsilon-\omega_{s'}-\omega_{s}) v_{s'} G_0(\epsilon-\omega_{s}) v_{s},\\
\nonumber
\Sigma^{(3, 0)} (\epsilon) &=\overline{V}_{s_1 s_2}^{(3, 2)} G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) (v_{s_1} G_0(\epsilon-\omega_{s_{2}}) v_{s_2} + v_{s_2} G_0 (\epsilon-\omega_{s_{1}})v_{s_1} ) \\
\nonumber
&= v_{s_2}^{\dagger} [G^{(2, 0)}(\epsilon-\omega_{s_{2}}) - G_0(\epsilon-\omega_{s_{2}}) ] v_{s_2} - v_{s'}^{\dagger} G_0(\epsilon-\omega_{s'}) v_{s_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s'}) v_{s'} G_0 (\epsilon-\omega_{s_{1}})v_{s_1} \\
\nonumber
&+ v_{s'}^{\dagger} G_0(\epsilon-\omega_{s'}) v_{s_1}^{\dagger} G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s'})V_{s'}^{(2, 1)}(\epsilon-\omega_{s_{1}}) G^{(2, 0)}(\epsilon-\omega_{s_{1}}) v_{s_1} \\
\nonumber
&+ v_{s_2}^{\dagger} G^{(2, 0)}(\epsilon-\omega_{s_{2}}) \overline{V}_{s_1}^{(2, 1)}(\epsilon-\omega_{s_{2}}) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_2} G_0(\epsilon-\omega_{s_{1}}) v_{s_1} \\
\nonumber
&+ v_{s'}^{\dagger} G_0(\epsilon-\omega_{s'}) v_{s_1}^{\dagger} G^{(1)} (\epsilon-\omega_{s'}-\omega_{s_{1}})W_{s',s_2}^{(2,1)}(\epsilon-\omega_{s_{1}}) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_1} G_0(\epsilon-\omega_{s_{2}}) v_{s_2} \\
\nonumber
&+ v_{s'}^{\dagger} G_0(\epsilon-\omega_{s'}) v_{s}^{\dagger}G^{(1)}(\epsilon-\omega_{s}-\omega_{s'}) W_{s s' , s_1 s_2}^{(2,2)}(\epsilon) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_1} G_0(\epsilon-\omega_{s_{2}}) v_{s_2} \\
\nonumber
&+ v_{s'}^{\dagger} G_0(\epsilon-\omega_{s'}) v_{s}^{\dagger}G^{(1)}(\epsilon-\omega_{s}-\omega_{s'}) W_{s' s , s_1 s_2}^{(2,2)}(\epsilon) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_1} G_0(\epsilon-\omega_{s_{2}}) v_{s_2} \\
\nonumber
&+ v_{s'}^{\dagger} G_0(\epsilon-\omega_{s'}) v_{s}^{\dagger}G^{(1)}(\epsilon-\omega_{s}-\omega_{s'}) W_{s s' , s_1 s_2}^{(2,2)} (\epsilon) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_2} G_0(\epsilon-\omega_{s_{1}}) v_{s_1} \\
&+ v_{s'}^{\dagger} G_0(\epsilon-\omega_{s'}) v_{s}^{\dagger}G^{(1)}(\epsilon-\omega_{s}-\omega_{s'}) W_{s' s , s_1 s_2}^{(2,2)} (\epsilon) G^{(1)}(\epsilon-\omega_{s_{1}}-\omega_{s_{2}}) v_{s_2} G_0(\epsilon-\omega_{s_{1}})v_{s_1}.
\end{align}
\end{widetext}
This, together with identities (\ref{eq:ident_1}) and (\ref{eq:ident_2}) justifies the proposed representation of the self-energy.
\section{Effect of the non-zero detuning}
\label{ap:detun}
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{sdn.pdf}
\caption{Spectral power density (scaled by $1/\Phi^{2}$) of the giant atom model as a function of $\Delta$ and $k$ for various values of $k_{0}R$ and $\gamma R=5$. Here the dashed black lines indicate the pole positions of the dressed Green's function in the single-excitation subspace.}
\label{fig: S_nz}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.9\columnwidth]{gd2.pdf}
\caption{Second order coherence function for the system with $k_{0}R=3\pi/11, \ \gamma R=5, \ \Delta=0.1, \ 0.5, \ 1.0$ (as before, the second order coherence function is dimensionless).}
\label{fig: g2d}
\end{figure}
In this appendix we analyse the effect of a non-zero detuning of the atom from the radiation on the observable quantities. In particular, we focus on the spectral power density and the second order coherence function. As discussed in Section \ref{SPd}, the sharp bound-state-like peaks in the line-shape of the spectral density may be understood within a simple physical picture of an effective cavity. When one increases (or decreases) the detuning from zero, one effectively changes the modes supported by the cavity and thus expects the positions of the peaks to shift. The precise location of the resonances in the spectral density as a function of $\Delta$ and $k$ for various values of $k_{0}R$ is shown in Figure \ref{fig: S_nz}. As we can see, for non-zero dephasing $k_{0}R\neq0$ the spectrum is not a symmetric function of $\Delta$. For negative dephasing we find that emission into the zero modes is enhanced for a negatively detuned atom $\Delta<0$, whereas the picture is opposite for positive $k_{0}R$. In general, we can clearly resolve a pair of sharp peaks which eventually merge together at certain values of the parameters (e.g. $k_{0}R=\Delta=0$). It is interesting to note that the location of these peaks is almost entirely determined by the poles of the dressed propagator in the single-excitation subspace, $(G^{(1)}(k))^{-1}=k+\Delta+i\gamma(1+e^{i(k+k_{0})R})=0$. This equation is solved by
\begin{align}
k=\pm i\frac{R\gamma-iR\Delta-W_{n}(-\gamma Re^{i(k_{0}-\Delta-i\gamma)R})}{R},
\label{pole}
\end{align}
where $W_{n}(z)$ is the $n^{\text{th}}$ branch of the Lambert $W$-function, also known as the product logarithm. The real part of (\ref{pole}) with $n=0, \pm1$ is plotted as black dashed lines in Figure \ref{fig: S_nz}. The vertical lines in Figure \ref{fig: S_nz} represent the discontinuous jumps of the Lambert function across the branch cut. As one may notice, the formula (\ref{pole}) is indeed in perfect agreement with numerical results.
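As a quick numerical sanity check of (\ref{pole}), the sketch below verifies that the lower-sign, $n=0$ solution is indeed a zero of $(G^{(1)}(k))^{-1}$. The parameter values are illustrative, and the principal branch $W_{0}$ is computed by a hand-rolled Newton iteration rather than a library call:

```python
import cmath

def lambertw0(z, tol=1e-12, max_iter=200):
    """Principal branch W_0 of the Lambert W function via Newton iteration
    (a stand-in for a library routine such as scipy.special.lambertw)."""
    w = cmath.log(z) if abs(z) > 1 else z  # standard starting guess
    for _ in range(max_iter):
        ew = cmath.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

# Illustrative parameters: gamma*R = 5, k0*R = 3*pi/11, Delta = 0.1, R = 1
R, gamma, k0, Delta = 1.0, 5.0, 3 * cmath.pi / 11, 0.1

# Argument of the Lambert W function appearing in the pole formula
z = -gamma * R * cmath.exp(1j * (k0 - Delta - 1j * gamma) * R)
W = lambertw0(z)

# Lower-sign, n = 0 pole
k = -1j * (R * gamma - 1j * R * Delta - W) / R

# It must be a zero of the inverse dressed propagator (G^(1)(k))^(-1)
residual = k + Delta + 1j * gamma * (1 + cmath.exp(1j * (k + k0) * R))
```

The residual vanishes to numerical precision; the branches $n=\pm1$ can be obtained analogously by starting the iteration from $\log z + 2\pi i n$.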
\par
Let us now consider the second order coherence function at non-zero detuning. The second order coherence for the system with $k_{0}R=3\pi/11, \ \gamma R=5, \ \Delta=0.1, \ 0.5, \ 1.0$ is shown in Figure \ref{fig: g2d}. We note that the presence of detuning does not affect the general trend of strong photon bunching in the first channel and the corresponding anti-bunching in the second one, as discussed in Section \ref{SPd}. Nor does the detuning affect the presence of non-differentiable peaks occurring at integer multiples of the delay time $R$.
\end{appendix}
\section{Functional Embedding Versus Functional Regularization}
In this work we propose a functional embedding framework, in which the embedding of a user/item is obtained by some function such as a neural network. We note that another approach is to penalize the distance between the user/item embedding and the function output (instead of equating them directly as in functional embedding), which we refer to as functional regularization; it is used in \cite{wang2015collaborative}. More specifically, functional regularization leads to the following form of loss function:
$$
\mathcal{L}(\mathbf{h}_u, \mathbf{h}_v) + \lambda \|\mathbf{h}_u - \mathbf{f}(\mathbf{x}_u)\|^2
$$
Here we point out its main issue, which does not appear in functional embedding. In order to equate the two embedding vectors, we need to increase $\lambda$. However, setting a large $\lambda$ slows down the training progress under coordinate descent. The gradient w.r.t. $\mathbf{h}_u$ is $\nabla_{\mathbf{h}_u} \mathcal{L}(\mathbf{h}_u, \mathbf{h}_v) + 2\lambda (\mathbf{h}_u - \mathbf{f}(\mathbf{x}_u))$, so when $\lambda$ is large, $\mathbf{h}^{t+1}_u \rightarrow \mathbf{f}^{t}(\mathbf{x}_u)$, which means $\mathbf{h}_u$ cannot be effectively updated with interaction information.
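To see this concretely, consider a toy quadratic interaction loss $\mathcal{L}(\mathbf{h}_u, \mathbf{h}_v) = (\mathbf{h}_u^{\top}\mathbf{h}_v - 1)^2$ (our illustrative stand-in, not the loss used in the paper), for which the coordinate step over $\mathbf{h}_u$ can be solved in closed form. As $\lambda$ grows, the minimizer is pinned to $\mathbf{f}(\mathbf{x}_u)$ and the interaction term stops moving it:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
h_v = rng.normal(size=d)    # item embedding, held fixed in this coordinate step
f_xu = rng.normal(size=d)   # content-network output f(x_u)

def solve_h_u(lam):
    # Exact minimizer over h_u of (h_u . h_v - 1)^2 + lam * ||h_u - f(x_u)||^2:
    # setting the gradient to zero gives (h_v h_v^T + lam I) h_u = h_v + lam f(x_u).
    A = np.outer(h_v, h_v) + lam * np.eye(d)
    return np.linalg.solve(A, h_v + lam * f_xu)

# Distance of the coordinate-step minimizer from f(x_u) for growing lambda
dists = [np.linalg.norm(solve_h_u(lam) - f_xu) for lam in (1.0, 1e2, 1e4)]
```

For this toy loss the distance shrinks like $1/\lambda$, i.e. the penalty equates the two vectors only in the regime where the interaction information is effectively discarded.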
\section{Vector Dot Product Versus Matrix multiplication}
Here we provide some empirical evidence for the computation time difference of replacing vector dot products with matrix multiplication. A batch of vector dot products can be computed as an element-wise matrix multiplication followed by summing over each row, so we compare two operations between two square matrices of size $n$: (1) element-wise matrix multiplication, and (2) matrix multiplication. A straightforward implementation of the former has algorithmic complexity $O(n^2)$, while the latter has $O(n^3)$. However, modern computation devices such as GPUs are better optimized for the latter, so when the matrix size is relatively small, their computation times can be quite similar. This is demonstrated in Figure \ref{fig:matmul_vs_mul}. For our choice of batch size and embedding dimension, $n \ll 1000$, so the computation times are comparable. Furthermore, $t_i\ll t_g$, so even a several-fold increase would be negligible.
\begin{figure}[H]
\centering
\includegraphics[width=0.5\textwidth]{figures/matmul_vs_mul}
\caption{The computation time ratio between matrix multiplication and element-wise matrix multiplication for different square matrix sizes.}
\label{fig:matmul_vs_mul}
\end{figure}
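The correspondence between the two operations can be spelled out in a few lines of NumPy (sizes are illustrative): the batched dot products are the row sums of the element-wise product, and they coincide with the diagonal of the full matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 256, 32
U = rng.normal(size=(n, d))   # a batch of user embeddings
V = rng.normal(size=(n, d))   # a batch of item embeddings

# Batched dot products: element-wise multiply + row sum, O(n*d) work
scores_mul = (U * V).sum(axis=1)

# The full matrix product computes all n^2 pairwise scores, O(n^2*d) work;
# the batched dot products sit on its diagonal
full = U @ V.T
scores_matmul = np.diag(full)
```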
\section{Proofs}
Here we give the proofs for both the lemma and the proposition introduced in the main paper. For brevity, throughout we assume by default that the loss function $\mathcal{L}$ is the pointwise loss of Eq. (1) in the main paper. Proofs are only given for the pointwise loss, but they can be derived similarly for the pairwise loss. We start by introducing some definitions.
\begin{definition0}
A function $f$ is $L$-$smooth$ if there is a constant $L$ such that
$$ \|\nabla f(x) - \nabla f(y)\| \le L \|x - y\|
$$
\end{definition0}
Such an assumption is very common in the analysis of first-order methods. In the following proofs, we assume any loss function $\mathcal{L}$ is $L$-$smooth$.
\begin{property0}
(Quadratic Upper Bound) An $L$-$smooth$ function $f$ has the following property
$$
f(y) \le f(x) + \nabla f(x)^T (y - x) + \frac{L}{2} \|y - x\|^2
$$
\end{property0}
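As a minimal numerical illustration of the property (using $f(x) = x^2$, which is $L$-smooth with $L = 2$, a case in which the bound in fact holds with equality):

```python
import random

# f(x) = x^2 is L-smooth with L = 2, and for a quadratic the upper bound
# f(y) <= f(x) + f'(x)(y - x) + (L/2)(y - x)^2 holds with equality.
f = lambda x: x * x
df = lambda x: 2 * x
L = 2.0

random.seed(0)
gaps = []
for _ in range(100):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    bound = f(x) + df(x) * (y - x) + (L / 2) * (y - x) ** 2
    gaps.append(bound - f(y))
```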
\begin{definition0}
We say a function $f$ has a $\sigma$-$bounded$ gradient if $\|\nabla f_i(\theta)\|^2 \le \sigma$ for all $i\in [n]$ and any $\theta \in \mathbb{R}^d$.
\end{definition0}
For each training iteration, we first sample a mini-batch of links (denoted by $B$) consisting of both positive links ($B^+$) and negative links ($B^-$), according to the sampling algorithm (one of Algorithms 2, 3, 4, 5), and then the stochastic gradient is computed and applied to the parameters as follows:
\begin{equation}
\label{eq:update}
\mathbf{\theta}^{t + 1} = \mathbf{\theta}^t - \frac{\eta_t}{m} \sum_{(u, v) \in B^+_t} c^+_{uv}\nabla \mathcal{L}^+(\theta|u, v) - \frac{\eta_t}{n} \sum_{(u, v) \in B^-_t} c^-_{uv}\nabla \mathcal{L}^-(\theta|u, v)
\end{equation}
Here we use $\mathcal{L}^+(\theta|u, v)$ to denote the loss function $\mathcal{L}^+(\theta)$ evaluated on a single pair $(u, v)$, so that $\nabla \mathcal{L}^+(\theta|u, v)$ is the corresponding per-link gradient. $m$ and $n$ are the numbers of positive and negative links in the batch $B$, respectively.
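Written as code, one step of the update (\ref{eq:update}) is just a weighted average of per-link gradients over the two parts of the batch. The sketch below is schematic: the per-link gradient functions and the batch format (lists of $(u, v, c_{uv})$ triples) are our own placeholders.

```python
import numpy as np

def sgd_step(theta, pos_batch, neg_batch, grad_pos, grad_neg, eta):
    """One step of the mini-batch update: pos_batch / neg_batch are lists of
    (u, v, c_uv) triples, grad_pos / grad_neg return per-link gradients."""
    g = sum(c * grad_pos(theta, u, v) for u, v, c in pos_batch) / len(pos_batch)
    g = g + sum(c * grad_neg(theta, u, v) for u, v, c in neg_batch) / len(neg_batch)
    return theta - eta * g

# Toy check: if every per-link gradient equals theta and all weights are 1,
# the step reduces to theta - 2 * eta * theta.
theta = np.ones(3)
g_fn = lambda th, u, v: th
new_theta = sgd_step(theta, [(0, 1, 1.0)], [(0, 2, 1.0)], g_fn, g_fn, eta=0.1)
```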
\begin{lemma0}
\label{th:convergence_lemma} (unbiased stochastic gradient)
Under sampling Algorithms 2, 3, 4, and 5, we have $\mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] = \nabla\mathcal{L}(\theta^t)$. In other words, the stochastic mini-batch gradient equals the true gradient in expectation.
\end{lemma0}
\begin{proof}
Below we prove this lemma for each of the sampling algorithms. For completeness, we also show the proof for IID Sampling. The main idea is to show that the expectation of the stochastic gradient computed on a randomly formed mini-batch equals the true gradient of the objective in Eq. \ref{eq:point}.
\paragraph{IID Sampling}
The positive links in the batch $B$ are i.i.d. samples from $P_d(u,v)$ (i.e. drawn uniformly at random from all positive links), and the negative links in $B$ are i.i.d. samples from $P_d(u) P_n(v)$, thus we have
\begin{equation}
\begin{split}
& \mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] \\
=& \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(u, v)\sim P_d(u, v)}[c^+_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{(u, v)\sim P_d(u) P_n(v')}[c^-_{uv'} \nabla\mathcal{L}^-(\theta|u, v')]\\
=& \mathbb{E}_{u\sim P_d(u)}\bigg[\mathbb{E}_{v\sim P_d(v|u)}[ c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \mathbb{E}_{v'\sim P_n(v')}[c^{-}_{uv'} \nabla\mathcal{L}^-(\theta|u, v')]\bigg]\\
=&\nabla \mathcal{L}(\theta^t)
\end{split}
\end{equation}
The first equality is due to the definition of the sampling procedure, the second equality is due to the definition of expectation, and the final equality is due to the definition of the pointwise loss function in Eq. \ref{eq:point}.
\paragraph{Negative Sampling}
In Negative Sampling, we have batch $B$ consists of i.i.d. samples of $m$ positive links, and conditioning on each positive link, $k$ negative links are sampled by replacing items in the same i.i.d. manner. Positive links are sampled from $P_d(u, v)$, and negative items are sampled from $P_n(v')$, thus we have
\begin{equation}
\begin{split}
& \mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] \\
=& \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(u, v)\sim P_d(u, v)} \frac{1}{k}\sum_{j=1}^{k} \mathbb{E}_{v'\sim P_n(v')}[c^+_{uv}\nabla\mathcal{L}^+(\theta|u, v) + c^-_{uv'} \nabla\mathcal{L}^-(\theta|u, v')]\\
=& \mathbb{E}_{u\sim P_d(u)}\bigg[
\mathbb{E}_{v\sim P_d(v|u)}[c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \mathbb{E}_{v'\sim P_n(v')}[c^{-}_{uv'} \nabla\mathcal{L}^-(\theta|u, v')] \bigg]\\
=& \nabla \mathcal{L}(\theta^t)
\end{split}
\end{equation}
The first equality is due to the definition of sampling procedure, and the second equality is due to the properties of joint probability distribution and expectation.
\paragraph{Stratified Sampling (by Items)}
In Stratified Sampling (by Items), a batch $B$ consists of links samples drawn in two steps: (1) draw an item $v\sim P_d(v)$, and (2) draw positive users $u\sim P_d(u|v)$ and negative users $u'\sim P_d(u)$ respectively. Additionally, negative terms are also re-weighted, thus we have
\begin{equation}
\begin{split}
& \mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] \\
=& \mathbb{E}_{v\sim P_d(v)}\bigg[
\frac{1}{m} \sum_{i=1}^{m}\mathbb{E}_{u\sim P_d(u|v)}[c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{u\sim P_d(u)}[c^{-}_{uv} \frac{P_n(v)}{P_d(v)} \nabla\mathcal{L}^-(\theta|u, v)] \bigg]\\
=& \mathbb{E}_{(u, v)\sim P_d(u, v)} [c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \mathbb{E}_{(u, v)\sim P_d(u) P_d(v)}[c^{-}_{uv} \frac{P_n(v)}{P_d(v)} \nabla\mathcal{L}^-(\theta|u, v)] \\
=& \mathbb{E}_{(u, v)\sim P_d(u, v)} [c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \mathbb{E}_{(u, v)\sim P_d(u) P_n(v)}[c^{-}_{uv} \nabla\mathcal{L}^-(\theta|u, v)] \\
=& \mathbb{E}_{u\sim P_d(u)}\bigg[
\mathbb{E}_{v\sim P_d(v|u)}[c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \mathbb{E}_{v'\sim P_n(v')}[c^{-}_{uv'} \nabla\mathcal{L}^-(\theta|u, v')] \bigg]\\
=& \nabla \mathcal{L}(\theta^t)
\end{split}
\end{equation}
The first equality follows from the definition of the sampling procedure; the second, third, and fourth follow from the properties of joint probability distributions and expectations.
\paragraph{Negative Sharing}
In Negative Sharing, we only draw positive links uniformly at random (i.e. $(u,v)\sim P_d(u, v)$), and construct negative links by sharing items within the batch. The batch $B$ used for computing the gradient therefore consists of $m$ positive links and $m(m-1)$ negative links.
Although we do not draw negative links directly, we can still derive their probability from the distribution used to draw the positive links: each constructed negative link pairs a user and an item taken from two independently drawn positive links, so $(u, v)\sim P_d(u) P_d(v)$. Additionally, the negative terms are re-weighted, so we have
\begin{equation}
\begin{split}
& \mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] \\
=& \frac{1}{m}\sum_{i=1}^{m}\mathbb{E}_{(u, v)\sim P_d(u, v)} [c^+_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \frac{1}{m(m-1)}\sum_{j=1}^{m(m-1)}\mathbb{E}_{(u, v)\sim P_d(u) P_d(v)} [c^-_{uv} \frac{P_n(v)}{P_d(v)}\nabla\mathcal{L}^-(\theta|u, v)]\\
=& \mathbb{E}_{u\sim P_d(u)}\bigg[
\mathbb{E}_{v\sim P_d(v|u)}[c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \mathbb{E}_{v'\sim P_n(v')}[c^{-}_{uv'} \nabla\mathcal{L}^-(\theta|u, v')] \bigg]\\
=& \nabla \mathcal{L}(\theta^t)
\end{split}
\end{equation}
The first equality follows from the definition of the sampling procedure, and the second from the properties of joint probability distributions and expectations.
\paragraph{Stratified Sampling with Negative Sharing}
Under this setting, we follow a two-step sampling procedure: (1) draw an item $v\sim P_d(v)$, and (2) draw positive users $u \sim P_d(u|v)$. Negative links are constructed from independently drawn items in the same batch. So the batch $B$ consists of $m$ positive links and $n$ negative links.
We can use the same reasoning as in Negative Sharing to calculate the probability of the constructed negative links, i.e. $(u, v) \sim P_d(u) P_d(v)$. Again, the negative terms are re-weighted, thus we have
\begin{equation}
\begin{split}
& \mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] \\
=& \frac{1}{m} \sum_{i=1}^{m} \mathbb{E}_{v\sim P_d(v), u\sim P_d(u|v)} [c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] +\frac{1}{n}\sum_{j=1}^{n}\mathbb{E}_{(u, v)\sim P_d(u) P_d(v)} [c^{-}_{uv} \frac{P_n(v)}{P_d(v)} \nabla\mathcal{L}^-(\theta|u, v)] \\
=& \mathbb{E}_{(u, v)\sim P_d(u, v)} [c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \mathbb{E}_{(u, v')\sim P_d(u) P_n(v')}[c^{-}_{uv'} \nabla\mathcal{L}^-(\theta|u, v')] \\
=& \mathbb{E}_{u\sim P_d(u)}\bigg[
\mathbb{E}_{v\sim P_d(v|u)}[c^{+}_{uv}\nabla\mathcal{L}^+(\theta|u, v)] + \mathbb{E}_{v'\sim P_n(v')}[c^{-}_{uv'} \nabla\mathcal{L}^-(\theta|u, v')] \bigg]\\
=& \nabla \mathcal{L}(\theta^t)
\end{split}
\end{equation}
The first equality follows from the definition of the sampling procedure; the second, third, and fourth follow from the properties of joint probability distributions and expectations.
\end{proof}
\begin{proposition0}\label{th:convergence}
Suppose $\mathcal{L}$ has $\sigma$-bounded gradient; let $\eta_t = \eta = c/ \sqrt{T}$ where $c = \sqrt{\frac{2(\mathcal{L}(\theta^0) - \mathcal{L}(\theta^*))}{L \sigma^2}}$, and $\theta^*$ is the minimizer of $\mathcal{L}$. Then, the following holds for the sampling strategies given in Algorithms 2, 3, 4, and 5:
$$
\min_{0\le t\le T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta^t)\|^2] \le \sqrt{\frac{2(\mathcal{L}(\theta^0) - \mathcal{L}(\theta^*))}{T}} \sigma
$$
\end{proposition0}
\begin{proof}
With the property of the $L$-smooth function $\mathcal{L}$, we have
\begin{equation}
\label{eq:apply_smooth}
\mathbb{E}[\mathcal{L}(\mathbf{\theta}^{t + 1})] \le \mathbb{E}[\mathcal{L}(\mathbf{\theta}^{t}) + \langle\nabla\mathcal{L}(\mathbf{\theta}^t), \mathbf{\theta}^{t + 1} - \mathbf{\theta}^{t}\rangle + \frac{L}{2}\|\mathbf{\theta}^{t+1} - \mathbf{\theta}^t\|^2]
\end{equation}
By applying the stochastic update equation $\theta^{t+1} = \theta^t - \eta_t \nabla\mathcal{L}_B(\theta^t)$ and Lemma \ref{th:convergence_lemma}, i.e. $\mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] = \nabla\mathcal{L}(\theta^t)$, we have
\begin{equation}
\label{eq:apply_lemma}
\begin{split}
&\mathbb{E}[\langle\nabla\mathcal{L}(\mathbf{\theta}^t), \mathbf{\theta}^{t + 1} - \mathbf{\theta}^{t}\rangle + \frac{L}{2}\|\mathbf{\theta}^{t+1} - \mathbf{\theta}^t\|^2] \\
=& -\eta_t \mathbb{E}[\|\nabla\mathcal{L}(\mathbf{\theta}^t)\|^2] + \frac{L\eta_t^2}{2}\mathbb{E}[\|\nabla\mathcal{L}_B(\mathbf{\theta}^t)\|^2]
\end{split}
\end{equation}
\iffalse
With lemma \ref{th:convergence_lemma}, i.e. $\mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] = \nabla\mathcal{L}(\theta^t)$, we have
\begin{equation}
\label{eq:apply_lemma}
\mathbb{E}[\langle\nabla\mathcal{L}(\mathbf{\theta}^t), \mathbf{\theta}^{t + 1} - \mathbf{\theta}^{t}\rangle \le \eta_t \mathbb{E}[\|\nabla\mathcal{L}(\mathbf{\theta}^t)\|^2]
\end{equation}
Assuming we have same amount of positive links and negative links in each batch, otherwise we duplicate the less ones to make them equal, and adjust $c^+$ or $c^-$ accordingly for correctness. Applying Cauchy-Schwarz inequality, we have
\begin{equation}
\|\theta^{t+1} - \theta^{t}\|^2 = \| \frac{1}{m} \nabla\mathcal{L}_B(\theta) \|^2
\end{equation}
\fi
Combining the results in Eq. \ref{eq:apply_smooth} and \ref{eq:apply_lemma} with the assumption that $\mathcal{L}$ has $\sigma$-bounded gradient, we have
\begin{equation*}
\mathbb{E}[\mathcal{L}(\mathbf{\theta}^{t + 1})] \le \mathbb{E}[\mathcal{L}(\mathbf{\theta}^{t})] - \eta_t \mathbb{E}[\|\nabla\mathcal{L}(\mathbf{\theta}^t)\|^2] + \frac{L\eta_t^2}{2b}\sigma^2
\end{equation*}
Rearranging the above equation we obtain
\begin{equation}
\label{eq:recursive_bound}
\mathbb{E}[\|\nabla\mathcal{L}(\mathbf{\theta}^t)\|^2] \le \frac{1}{\eta_t}\mathbb{E}[\mathcal{L}(\mathbf{\theta}^{t}) - \mathcal{L}(\mathbf{\theta}^{t + 1})] + \frac{L\eta_t}{2b}\sigma^2
\end{equation}
By summing Eq. \ref{eq:recursive_bound} from $t=0$ to $T - 1$, dividing by $T$, and setting $\eta= c/ \sqrt{T}$, we have
\begin{equation}
\begin{split}
\min_{t} \mathbb{E}[\|\nabla \mathcal{L}(\mathbf{\theta}^t)\|^2] \le & \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}[\|\nabla\mathcal{L}(\mathbf{\theta}^t)\|^2] \\
\le & \frac{1}{c \sqrt{T}}(\mathcal{L}(\mathbf{\theta}^0) - \mathcal{L}(\mathbf{\theta}^*)) + \frac{Lc}{2\sqrt{T}} \sigma^2
\end{split}
\end{equation}
By setting
$$
c = \sqrt{\frac{2(\mathcal{L}(\mathbf{\theta}^0) - \mathcal{L}(\mathbf{\theta}^*))}{L\sigma^2}}
$$
we obtain the desired result.
\end{proof}
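As an illustrative numerical sanity check (not part of the paper's results), the bound can be exercised on a toy $1$-smooth objective $\mathcal{L}(\theta)=\theta^2/2$ with exact gradients, computing $c$ and $\eta$ exactly as in the proposition; the toy objective and all constants below are our own assumptions.

```python
import math

# Toy check of the local convergence bound on L(theta) = 0.5 * theta^2,
# which is 1-smooth; gradients along the trajectory are bounded by sigma = 1.
# The objective and constants are illustrative assumptions, not the paper's.

def run_sgd(theta0=1.0, T=100, L=1.0, sigma=1.0, loss_star=0.0):
    loss0 = 0.5 * theta0 ** 2
    c = math.sqrt(2 * (loss0 - loss_star) / (L * sigma ** 2))
    eta = c / math.sqrt(T)                 # constant step size c / sqrt(T)
    theta = theta0
    min_sq_grad = float("inf")
    for _ in range(T):
        grad = theta                       # exact gradient of 0.5 * theta^2
        min_sq_grad = min(min_sq_grad, grad ** 2)
        theta -= eta * grad                # plain SGD update
    bound = math.sqrt(2 * (loss0 - loss_star) / T) * sigma
    return min_sq_grad, bound

min_sq_grad, bound = run_sgd()
assert min_sq_grad <= bound                # the proposition's bound holds here
```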
\section{Discussions}
While this work discusses them in the context of content-based collaborative filtering, the study of sampling strategies for ``graph-based'' loss functions has further implications. The IID sampling strategy is simple and popular for SGD-based training, since the loss function terms usually do not share common computations, so however a mini-batch is formed, it bears roughly the same computational cost. This assumption no longer holds for models defined over graph structures, with applications in social and knowledge graph mining \cite{bordes2013translating}, image caption ranking \cite{lin2016leveraging}, and so on. For those scenarios, we believe better sampling strategies can result in much faster training than IID sampling.
We would also like to point out the limitations of our work. The first is the setting of implicit feedback: when the problem is posed under explicit feedback, Negative Sharing can be less effective, since the constructed negative samples may not overlap with the explicit negative ones. The second is the assumption of cheap interaction functions: when neural networks are used as interaction functions, their cost is no longer negligible, so negative samples for Negative Sharing may need to be constructed more carefully.
\section{Conclusions and Future Work}
In this work, we propose a hybrid recommendation framework, combining conventional collaborative filtering with (deep) neural networks. The framework generalizes several existing state-of-the-art recommendation models, and embodies potentially more powerful ones. To overcome the high computational cost brought by combining ``cheap'' CF with ``expensive'' NN, we first establish the connection between the loss functions and the user-item interaction bipartite graph, and then point out that the computational cost can vary with different sampling strategies. Based on this insight, we propose three novel sampling strategies that significantly improve the training efficiency of the proposed framework, as well as the recommendation performance.
In the future, there are some promising directions. Firstly, based on the efficient sampling techniques of this paper, we can more efficiently study different neural networks and auxiliary information for building hybrid recommendation models. Secondly, we can study the effects of negative sampling distributions and their impact on the design of more efficient sampling strategies. Last but not least, it would be interesting to apply our sampling strategies in distributed training environments with multiple GPUs and machines.
\section{Mini-Batch Sampling Strategies For Efficient Model Training}
In this section, we propose and discuss different sampling strategies that can improve the efficiency of the model training.
\subsection{Computational Cost in a Graph View}
Before discussing the different sampling strategies, we motivate our readers by first making a connection between the loss functions and the bipartite graph of user-item interactions. In the loss functions laid out before, each loss function term in Eq. \ref{eq:point}, namely $\mathcal{L}(u, v)$, involves a pair of a user and an item, which corresponds to a link in their interaction graph; the two types of loss terms thus correspond to two types of links, i.e., positive links/terms and negative links/terms. A similar analysis holds for the pairwise loss in Eq. \ref{eq:pair}, with the slight difference that each single loss term corresponds to a pair of links with opposite signs on the graph. We can also establish a correspondence between user/item functions and nodes in the graph, i.e., $\mathbf{f}(u)$ to user node $u$ and $\mathbf{g}(v)$ to item node $v$. The connection is illustrated in Figure \ref{fig:connection}. Since the loss functions are defined over the links, we name them ``\textit{graph-based}'' loss functions to emphasize the connection.
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\textwidth]{figures/connection}
\caption{The bipartite interaction graph for pointwise loss functions, where loss functions are defined over links. The pairwise loss functions are defined over pairs of links.}
\label{fig:connection}
\end{figure}
The key observation for graph-based loss functions is that the loss functions are defined over links, but the major computational burden is located at the nodes (due to the use of the costly $\mathbf{g}(\cdot)$ function). Since each node is associated with multiple links, which correspond to multiple loss function terms, the computational costs of loss functions over links are coupled (as they may share the same nodes) when using mini-batch based SGD. Hence, different sampling strategies yield different computational costs. For example, when we put links connected to the same node together in a mini-batch, the computational cost can be lowered, as there are fewer $\mathbf{g}(\cdot)$ computations\footnote{This holds for both forward and backward computation. For the latter, the gradients from different links can be aggregated before back-propagating to $\mathbf{g}(\cdot)$.}. This is in great contrast to conventional optimization problems, where each loss function term does not couple with others in terms of computational cost.
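This coupling can be made concrete by counting costly item-function evaluations in a toy mini-batch (the links below are illustrative placeholders, not data from the paper):

```python
# Count costly g(.) evaluations in a toy mini-batch of (user, item) links.
# Grouping links that share an item lets one g(item) computation serve all
# of that item's loss terms; the links are illustrative placeholders.

batch = [(0, 7), (1, 7), (2, 7), (3, 9), (4, 9), (5, 3)]

naive_g_calls = len(batch)                   # one g(.) per link, no sharing
shared_g_calls = len({v for _, v in batch})  # one g(.) per distinct item

assert shared_g_calls < naive_g_calls        # 3 distinct items vs 6 links
```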
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\textwidth]{figures/sampling_1}
\caption{Negative}
\label{fig:sampling_a}
\end{subfigure}\hspace{1.5em}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\textwidth]{figures/sampling_2}
\caption{Stratified (by Items)}
\label{fig:sampling_b}
\end{subfigure}\hspace{1.5em}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\textwidth]{figures/sampling_3}
\vspace{1.5em}
\caption{Negative Sharing}
\label{fig:sampling_c}
\end{subfigure}\hspace{1.5em}
\begin{subfigure}[b]{0.18\textwidth}
\includegraphics[width=\textwidth]{figures/sampling_4}
\vspace{1.5em}
\caption{Stratified with N.S.}
\label{fig:sampling_d}
\end{subfigure}
\caption{Illustration of four different sampling strategies. \ref{fig:sampling_b}-\ref{fig:sampling_d} are the proposed sampling strategies. Red lines denote positive links/interactions, and black lines denote negative links/interactions.}
\label{fig:sampling}
\end{figure*}
\subsection{Existing Mini-Batch Sampling Strategies}
In the standard SGD sampler, (positive) data samples are drawn uniformly at random for gradient computation. To accommodate negative samples, we draw them from some predefined probability distribution, i.e. $(u', v')\sim P_n(u', v')$. We call this approach ``\textit{IID Sampling}'', since each positive link is independently and identically distributed, and the same holds for negative links (under a different distribution).
Many existing algorithms with graph-based loss functions \cite{mikolov2013distributed,tang2015line,bansal2016ask} adopt the ``\textit{Negative Sampling}'' strategy, in which $k$ negative samples are drawn whenever a positive example is drawn. The negative samples are sampled based on the positive ones by replacing the items in the positive samples. This is illustrated in Algorithm \ref{alg:neg_sampling} and Figure \ref{fig:sampling_a}.
\begin{algorithm}[h]
\caption{Negative Sampling \cite{mikolov2013efficient,tang2015line,bansal2016ask}}
\label{alg:neg_sampling}
\begin{algorithmic}
\STATE \algorithmicrequire~number of positive links in a mini-batch: $b$, number of negative links per positive one: $k$
\STATE draw $b$ positive links uniformly at random
\FOR{each of $b$ positive links}
\STATE draw $k$ negative links by replacing true item $v$ with $v' \propto P_n(v')$
\ENDFOR
\end{algorithmic}
\end{algorithm}
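The procedure above can be sketched as follows (a minimal illustration; `positive_links`, `items`, and the unigram `item_weights` are placeholder structures, not the paper's implementation):

```python
import random

# A minimal sketch of Negative Sampling: for each positive link (u, v),
# draw k negatives by replacing v with v' ~ P_n. Data structures are
# illustrative placeholders.

def negative_sampling_batch(positive_links, items, item_weights, b, k):
    batch_pos = random.sample(positive_links, b)        # b positive links
    batch_neg = []
    for (u, _) in batch_pos:
        negs = random.choices(items, weights=item_weights, k=k)
        batch_neg.extend((u, v_neg) for v_neg in negs)  # k negatives each
    return batch_pos, batch_neg

links = [(u, u % 5) for u in range(20)]
pos, neg = negative_sampling_batch(links, items=list(range(5)),
                                   item_weights=[1] * 5, b=4, k=10)
assert len(pos) == 4 and len(neg) == 4 * 10
```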
The IID Sampling strategy does not take the property of graph-based loss functions into account, since samples are completely independent of each other. Hence, the computational cost within a mini-batch cannot be amortized across samples, leading to very extensive computation with (deep) neural networks. Negative Sampling does not really help, since the item function cost $t_g$ is the dominant one. More specifically, for a mini-batch with $b(1+k)$ links sampled by IID Sampling or Negative Sampling, we have to carry out the item computation $\mathbf{g}(\cdot)$ $b(1 + k)$ times, since the items in a mini-batch are unlikely to overlap when the item set is sufficiently large.
\subsection{The Proposed Sampling Strategies}
\subsubsection{Stratified Sampling (by Items)}
Motivated by the connection between the loss functions and the bipartite interaction graph as shown in Figure \ref{fig:connection}, we propose to sample links that share nodes, in particular those with high computational cost (i.e. $t_g$ for item function $\mathbf{g}(\cdot)$ in our case). By doing so, the computational cost within a mini-batch can be amortized, since fewer costly functions are computed (in both forward and backward propagations).
In order to achieve this, we (conceptually) partition the links, which correspond to loss function terms, into \textit{strata}. A \textit{stratum} is a set of links on the bipartite graph sharing the same source or destination node. Instead of drawing links directly for training, we first draw a stratum and then draw both positive and negative links. Since we want each stratum to share the same item, we can directly draw an item and then sample its links. The details are given in Algorithm \ref{alg:stratified_sampling} and illustrated in Figure \ref{fig:sampling_b}.
\begin{algorithm}[t!]
\caption{Stratified Sampling (by Items)}
\label{alg:stratified_sampling}
\begin{algorithmic}
\STATE \algorithmicrequire~number of positive links in a mini-batch: $b$, number of positive links per stratum: $s$, number of negative links per positive one: $k$
\REPEAT
\STATE draw an item $v \propto P_d(v)$
\STATE draw $s$ positive users $\{u\}$ of $v$ uniformly at random
\STATE draw $k\times s$ negative users $ \{u'\} \propto P_d(u')$
\UNTIL{a mini-batch of $b$ positive links are sampled}
\end{algorithmic}
\end{algorithm}
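A minimal sketch of this procedure follows (all data structures are illustrative placeholders; users are drawn with replacement for simplicity, which the algorithm does not require):

```python
import random

# A minimal sketch of Stratified Sampling (by Items): repeatedly draw an
# item v ~ P_d(v), then s positive users of v and k*s negative users.
# Placeholder data; drawing with replacement is a simplification.

def stratified_batch(users_of_item, users, b, s, k):
    items = list(users_of_item)
    weights = [len(users_of_item[v]) for v in items]   # P_d(v) ~ item degree
    batch_pos, batch_neg = [], []
    while len(batch_pos) < b:
        v = random.choices(items, weights=weights, k=1)[0]
        for u in random.choices(users_of_item[v], k=s):   # s positive users
            batch_pos.append((u, v))
        for u in random.choices(users, k=k * s):          # k*s negative users
            batch_neg.append((u, v))
    return batch_pos, batch_neg

users_of_item = {v: list(range(10)) for v in range(6)}
pos, neg = stratified_batch(users_of_item, list(range(10)), b=8, s=4, k=10)
assert len(pos) == 8 and len(neg) == 8 * 10
assert len({v for _, v in pos}) <= 2   # only b/s distinct items -> fewer g(.)
```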
Compared to Negative Sampling in Algorithm \ref{alg:neg_sampling}, there are several differences: (1) Stratified Sampling can be based on either item or user, but in the negative sampling only negative items are drawn; and (2) each node in stratified sampling can be associated with more than 1 positive link (i.e., $s>1$, which can help improve the speedup as shown below), while in negative sampling each node is only associated with one positive link.
Now we consider its speedup for a mini-batch including $b$ positive links/interactions and $bk$ negative ones, which contains $b(1+k)$ users and $b/s$ items. Stratified Sampling (by Items) only requires $b/s$ computations of the $\mathbf{g}(\cdot)$ function, while Negative Sampling requires $b(1 + k)$ computations. Assuming $t_g \gg t_f, t_i$, i.e. the computational cost is dominated by the item function $\mathbf{g}(\cdot)$, Stratified Sampling (by Items) can provide an $s(1 + k)$-fold speedup within a mini-batch. With $s=4, k=10$ as used in some of our experiments, this yields up to a $44\times$ speedup. However, it is worth pointing out that item-based Stratified Sampling cannot be applied to pairwise loss functions, which compare preferences over items for a given user.
\subsubsection{Negative Sharing}
The idea of Negative Sharing is inspired by a different aspect of the connection between the loss functions and the bipartite interaction graph. Since $t_i \ll t_g$, i.e. the computational cost of the interaction function (a dot product) is negligible compared to that of the item function, once a mini-batch of users and items is sampled, increasing the number of interactions among them barely increases the computational cost. This can be achieved by creating a complete bipartite graph for a mini-batch, adding negative links between all non-interacting user-item pairs. Using this strategy, we need to draw NO negative links at all!
More specifically, consider IID Sampling: when $b$ positive links are sampled, $b$ users and $b$ items are involved (assuming the user and item sets are much larger than $b$). Note that there are $b(b-1)$ non-interacting pairs in the mini-batch, which are not considered by IID Sampling or Negative Sampling; instead, those strategies draw additional negative samples. Since the main computational cost of training lies in the node computation, and the node set is fixed given the batch of $b$ positive links, we can share the nodes for negative links without adding much computational burden. Based on this idea, Algorithm \ref{alg:neg_shared} summarizes an extremely simple sampling procedure, illustrated in Figure \ref{fig:sampling_c}.
\begin{algorithm}[t!]
\caption{Negative Sharing}
\label{alg:neg_shared}
\begin{algorithmic}
\STATE \algorithmicrequire~number of positive links in a mini-batch: $b$
\STATE draw $b$ positive user-item pairs $\{(u, v)\}$ uniformly at random
\STATE construct negative pairs by connecting non-linked users and items in the batch
\end{algorithmic}
\end{algorithm}
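Scoring such a batch can be sketched as follows (embeddings `F` and `G` are random stand-ins for $\mathbf{f}(u)$ and $\mathbf{g}(v)$; placing the positives on the diagonal is an assumption made for illustration):

```python
import numpy as np

# A minimal sketch of Negative Sharing: score every batch user against every
# batch item with one matrix product, then treat non-linked pairs as
# negatives. F, G are random placeholders for f(u), g(v).

rng = np.random.default_rng(0)
b, d = 4, 8
F = rng.normal(size=(b, d))        # f(u) for the b batch users
G = rng.normal(size=(b, d))        # g(v) for the b batch items
scores = F @ G.T                   # all b*b user-item interactions at once

pos_mask = np.eye(b, dtype=bool)   # link i pairs user i with item i
neg_scores = scores[~pos_mask]     # the b*(b-1) shared negative scores
assert neg_scores.shape == (b * (b - 1),)
```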
Since Negative Sharing avoids sampling $k$ negative links, a mini-batch only contains $b$ items, while one from Negative Sampling contains $b(1+k)$. So it can provide a $(1 + k)$-fold speedup over Negative Sampling (assuming $t_g \gg t_f, t_i$ and the total interaction cost remains insignificant). Since the batch size $b$ is usually larger than $k$ (e.g., $b=512, k=20$ in our experiments), many more negative links (e.g. $512 \times 511$) are also considered, which helps both convergence speed and final performance, as shown in our experiments. However, as the number of negative samples increases, performance and convergence do not improve linearly; a diminishing return is expected.
\subsubsection{Stratified Sampling with Negative Sharing}
Both strategies above reduce the computational cost through smarter sampling of the mini-batch. However, each has a weakness: Stratified Sampling cannot deal with pairwise loss and still depends on the number of negative examples $k$, while Negative Sharing introduces a large number of negative samples, which may be unnecessary due to the diminishing return.
Fortunately, the two sampling strategies are proposed from different perspectives, and combining them preserves their advantages while avoiding their weaknesses. This leads to Stratified Sampling with Negative Sharing, which can be applied to both pointwise and pairwise loss functions and allows a flexible ratio between positive and negative samples (i.e. more positive links for a given number of negative links compared to Negative Sharing). Basically, we sample positive links according to Stratified Sampling, and then construct negative links by treating non-interactions in the batch as negative. The details are given in Algorithm \ref{alg:stratified_sampling_with_neg_shared} and illustrated in Figure \ref{fig:sampling_d}.
\begin{algorithm}[t!]
\caption{Stratified Sampling with Negative Sharing}
\label{alg:stratified_sampling_with_neg_shared}
\begin{algorithmic}
\STATE \algorithmicrequire~number of positive links in a mini-batch: $b$, number of positive links per stratum: $s$
\REPEAT
\STATE draw an item $v \propto P_d(v)$
\STATE draw $s$ positive users of item $v$ uniformly at random
\UNTIL{a mini-batch of $b/s$ items are sampled}
\STATE construct negative pairs by connecting non-linked users and items in the batch
\end{algorithmic}
\end{algorithm}
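The shape of the resulting score matrix can be sketched as follows (embeddings are random placeholders; only the counting of positives and shared negatives is the point here):

```python
import numpy as np

# A minimal sketch of Stratified Sampling with Negative Sharing: b positive
# links over b/s distinct items give a b x (b/s) score matrix, and every
# pair a user does not interact with serves as a negative.

rng = np.random.default_rng(0)
b, s, d = 8, 4, 16
n_items = b // s
F = rng.normal(size=(b, d))                # b batch users
G = rng.normal(size=(n_items, d))          # only b/s costly g(.) computations
scores = F @ G.T                           # b x (b/s) interactions

item_of_link = np.repeat(np.arange(n_items), s)   # s positive users per item
pos_mask = np.zeros((b, n_items), dtype=bool)
pos_mask[np.arange(b), item_of_link] = True
assert int((~pos_mask).sum()) == b * (n_items - 1)   # shared negatives
```

With $b=512$ and $s=4$, this mask yields $b/s - 1 = 127$ shared negatives per positive link, matching the count quoted above.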
Computationally, Stratified Sampling with Negative Sharing involves only $b/s$ item nodes in a mini-batch, so it provides the same $s(1 + k)$-fold speedup over Negative Sampling as Stratified Sampling (by Items) does, while utilizing many more negative links than Negative Sampling. For example, in our experiments with $b=512, s=4$, we have $127$ negative links per positive one, much more than $k=10$ in Negative Sampling, while requiring only a quarter of the $\mathbf{g}(\cdot)$ computations of Negative Sharing.
\begin{table*}[!h]
\centering
\caption{Computational cost analysis for a batch of $b$ positive links. We use vec to denote vector multiplication, and mat to denote matrix multiplication. Since $t_g \gg t_f, t_i$ in practice, the theoretical speedup per iteration can be approximated by comparing the number of $t_g$ computation, which is colored red below. The number of iterations to reach a referenced loss is related to the number of negative links in each mini-batch.}
\label{tab:cost_model}
\begin{tabular}{cccccccc}
\Xhline{2.5\arrayrulewidth}
\multicolumn{1}{c}{Sampling} & \multicolumn{1}{c}{\# pos. links} & \multicolumn{1}{c}{\# neg. links} & \multicolumn{1}{c}{\# $t_f$} & \multicolumn{1}{c}{\textcolor{red}{\# $t_g$}} & \multicolumn{1}{c}{\# $t_i$} & \multicolumn{1}{c}{pointwise} & \multicolumn{1}{c}{pairwise}\\ \hline
IID \cite{bottou2010large} & $b $ & $bk$ & $b(1+k)$ & \textcolor{red}{$b(1+k)$} & $b(1+k)$ vec & \checkmark & $\times$ \\
Negative \cite{mikolov2013efficient,tang2015line,bansal2016ask} & $b $ & $bk$ & $b$ & \textcolor{red}{$b(1+k)$} & $b(1+k)$ vec & \checkmark & \checkmark\\ \hline
Stratified (by Items) & $b$ & $bk$ & $b(1+k)$ & \textcolor{red}{$\frac{b}{s}$} & $b(1+k)$ vec & \checkmark & $\times$ \\
Negative Sharing & $b$ & $b(b-1)$ & $b $ & \textcolor{red}{$b$} & $b\times b$ mat & \checkmark & \checkmark \\
Stratified with N.S. & $b $ & $ \frac{b(b-1)}{s}$ & $b$ & \textcolor{red}{$\frac{b}{s}$} & $b\times\frac{b}{s}$ mat &\checkmark & \checkmark \\% \hline
\Xhline{2.5\arrayrulewidth}
\end{tabular}
\end{table*}
\subsubsection{Implementation Details}
When the negative/noise distribution $P_n$ is not unigram\footnote{Unigram means proportional to item frequency, such as node degree in the user-item interaction graph.}, we need to adjust the loss function to keep the stochastic gradient unbiased. For pointwise loss, each negative term is adjusted by multiplying a weight of $\frac{P_n(v')}{P_d(v')}$; for pairwise loss, each term based on a triplet $(u, v, v')$, where $v'$ is the sampled negative item, is adjusted by multiplying the same weight $\frac{P_n(v')}{P_d(v')}$.
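This re-weighting argument can be checked with a small Monte Carlo experiment (the distributions and loss values below are arbitrary illustrations, not from the paper):

```python
import random

# Monte Carlo check (illustrative) of the re-weighting trick: items drawn
# from the data distribution P_d, weighted by P_n(v)/P_d(v), give an
# unbiased estimate of an expectation taken under the noise distribution P_n.

random.seed(0)
items = [0, 1, 2]
P_d = [0.5, 0.3, 0.2]
P_n = [0.2, 0.3, 0.5]
loss = {0: 1.0, 1: 2.0, 2: 3.0}     # stand-in for a negative loss term

exact = sum(p * loss[v] for v, p in zip(items, P_n))   # E_{P_n}[loss]

n = 200_000
draws = random.choices(items, weights=P_d, k=n)        # sample from P_d
estimate = sum(P_n[v] / P_d[v] * loss[v] for v in draws) / n

assert abs(estimate - exact) < 0.05   # matches up to Monte Carlo noise
```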
Instead of sampling, we prefer to use shuffling whenever possible, which produces unbiased samples with zero variance. This is a useful trick for achieving better performance when the number of drawn samples is not large enough for each loss term. For IID and Negative Sampling, this can easily be done for positive links by simply shuffling them. For Stratified Sampling (w./wo. Negative Sharing), instead of shuffling the positive links directly, we shuffle the randomly formed strata (where each stratum contains roughly a single item)\footnote{This can be done by first shuffling the users associated with each item, then concatenating all links according to items in random order; random strata are then formed by segmenting the list.}. All other necessary sampling operations draw from discrete distributions, which can be done in $O(1)$ with the Alias method.
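One possible implementation of the Alias method is sketched below (Vose's variant; $O(n)$ table construction, then $O(1)$ per draw; this is a generic sketch, not the paper's code):

```python
import random

# A compact sketch of the Alias method (Vose's variant): O(n) table setup,
# then O(1) sampling from a discrete distribution `probs`.

def build_alias(probs):
    n = len(probs)
    prob, alias = [0.0] * n, [0] * n
    scaled = [p * n for p in probs]
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l       # column s overflows into l
        scaled[l] -= 1.0 - scaled[s]
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                    # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias):
    i = random.randrange(len(prob))            # pick a column uniformly
    return i if random.random() < prob[i] else alias[i]

random.seed(0)
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0] * 4
for _ in range(100_000):
    counts[alias_draw(prob, alias)] += 1
assert abs(counts[3] / 100_000 - 0.4) < 0.02   # empirical freq matches P(3)
```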
In Negative Sharing (w./wo. Stratified Sampling), we can compute the user-item interactions with a more efficient operator, replacing the vector dot products between each pair $(\mathbf{f}, \mathbf{g})$ with a matrix multiplication between $(\mathbf{F}, \mathbf{G})$, where $\mathbf{F} = [\mathbf{f}_{u_1}, \cdots, \mathbf{f}_{u_n}]$, $\mathbf{G} = [\mathbf{g}_{v_1}, \cdots, \mathbf{g}_{v_m}]$. Since matrix multiplication sits at a higher BLAS level than vector multiplication \cite{ji2016parallelizing}, increasing the number of interactions at medium matrix sizes (e.g. 1000$\times$1000) hardly affects the computational cost in practice.
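The equivalence between the batched matrix product and the per-pair dot products can be verified directly (a NumPy illustration; the shapes are arbitrary placeholders):

```python
import numpy as np

# Check (illustrative) that a single matrix product reproduces all pairwise
# dot products between user embeddings F and item embeddings G, which is
# how Negative Sharing scores every in-batch pair at once.

rng = np.random.default_rng(1)
F = rng.normal(size=(6, 32))   # 6 users in the batch
G = rng.normal(size=(4, 32))   # 4 items in the batch

scores = F @ G.T               # one BLAS level-3 call, shape (6, 4)
pairwise = np.array([[f @ g for g in G] for f in F])   # per-pair dots

assert np.allclose(scores, pairwise)
```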
\subsection{Computational Cost and Convergence Analysis}
Here we summarize the computational cost of the different sampling strategies discussed above, and also analyze their convergence. Two aspects that lead to speedup are analyzed: (1) the computational cost of a mini-batch, i.e. per iteration, and (2) the number of iterations required to reach some referenced loss.
\subsubsection{Computational Cost}
To fairly compare different sampling strategies, we fix the number of positive links in each mini-batch, which corresponds to the positive terms in the loss function. Table \ref{tab:cost_model} shows the computational cost of each sampling strategy for a given mini-batch. Since $t_g \gg t_f,t_i$ in practice, we approximate the theoretical speedup per iteration by comparing the number of $t_g$ computations. The proposed strategies provide a $(1+k)$-fold (Negative Sharing) or $s(1+k)$-fold (Stratified Sampling, w./wo. Negative Sharing) speedup per iteration compared to IID Sampling or Negative Sampling. As for the number of iterations needed to reach a referenced loss, it is related to the number of negative samples utilized, which is analyzed below.
\subsubsection{Convergence Analysis}
We want to make sure that SGD training under the proposed sampling strategies converges correctly. A necessary condition is that the stochastic gradient estimator be unbiased, which leads us to the following lemma.
\begin{lemma0}
\label{th:convergence_lemma} (unbiased stochastic gradient)
Under sampling Algorithms \ref{alg:neg_sampling}, \ref{alg:stratified_sampling}, \ref{alg:neg_shared}, and \ref{alg:stratified_sampling_with_neg_shared}, we have $\mathbb{E}_B[\nabla \mathcal{L}_{B}(\theta^t)] = \nabla\mathcal{L}(\theta^t)$. In other words, the stochastic mini-batch gradient equals the true gradient in expectation.
\end{lemma0}
This holds for both pointwise and pairwise losses. It is guaranteed because we draw samples stochastically and re-weight certain samples accordingly. The detailed proof can be found in the supplementary material.
Given this lemma, we can further analyze the convergence behavior under the proposed sampling strategies. Due to the highly non-linear and non-convex functions composed by (deep) neural networks, the convergence rate is usually difficult to analyze. Instead, we show that SGD with the proposed sampling strategies satisfies a local convergence bound (similar to \cite{ghadimi2013stochastic,reddi2016stochastic}).
\begin{proposition0}\label{th:convergence} (local convergence)
Suppose $\mathcal{L}$ has $\sigma$-bounded gradient; let $\eta_t = \eta = c/ \sqrt{T}$ where $c = \sqrt{\frac{2(\mathcal{L}(\theta^0) - \mathcal{L}(\theta^*))}{L \sigma^2}}$, and $\theta^*$ is the minimizer of $\mathcal{L}$. Then, the following holds for the proposed sampling strategies given in Algorithms \ref{alg:neg_sampling}, \ref{alg:stratified_sampling}, \ref{alg:neg_shared}, and \ref{alg:stratified_sampling_with_neg_shared}:
$$
\min_{0\le t\le T-1} \mathbb{E}[\|\nabla \mathcal{L}(\theta^t)\|^2] \le \sqrt{\frac{2(\mathcal{L}(\theta^0) - \mathcal{L}(\theta^*))}{T}} \sigma
$$
\end{proposition0}
The detailed proof is also given in the supplementary material.
Furthermore, utilizing more negative links in each mini-batch can lower the expected stochastic gradient variance. As shown in \cite{zhao2014accelerating,zhao2015stochastic}, the reduction of variance can lead to faster convergence. This suggests that Negative Sharing (w./wo. Stratified Sampling) has better convergence than the Stratified Sampling (by Items).
\section{Experiments}
\subsection{Data Sets}
Two real-world text recommendation data sets are used for the experiments. The first, CiteULike, collected from CiteULike.org, is provided in~\cite{wang2011collaborative}; it contains users bookmarking papers, where each paper is associated with a title and an abstract. The second is a random subset of the Yahoo! News data set\footnote{https://webscope.sandbox.yahoo.com/catalog.php?datatype=r\&did=75}, which contains users clicking on news presented at Yahoo!. The CiteULike data contain 5,551 users, 16,980 items, and a total of 204,986 positive interactions. The Yahoo! News data contain 10,000 users, 58,579 items, and 515,503 interactions.
Following \cite{chen2017text}, we select a portion (20\%) of items to form the pool of test items. All user interactions with those test items are held out during training; only the remaining user-item interactions are used as training data. This simulates the scenario of recommending newly-emerged text articles.
\nop{
\begin{table}[t!]
\small
\centering
\caption{\label{tab:data_stat1} Data statistics for user, items and their interactions.}
\begin{tabular}{lrrr}
\toprule
{} & \# of user & \# of item & \# of interaction \\
\midrule
Citeulike & 5,551 & 16,980 & 204,986 \\
News & 10,000 & 58,579 & 515,503 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[t!]
\small
\centering
\caption{\label{tab:data_stat2} Data statistics for text content.}
\begin{tabular}{lrrrrr}
\toprule
{} & voc. size & max & min & mean & median \\
\midrule
Citeulike & 23,011 & 300 & 22 & 194 & 186 \\
News & 41,537 & 200 & 2 & 89 & 90 \\
\bottomrule
\end{tabular}
\end{table}
The detailed data set statistics are shown in Table~\ref{tab:data_stat1} and~\ref{tab:data_stat2}.
}
\subsection{Experimental Settings}
The main purpose of experiments is to compare the efficiency and effectiveness of our proposed sampling strategies against existing ones. So we mainly compare Stratified Sampling, Negative Sharing, and Stratified Sampling with Negative Sharing, against IID sampling and Negative Sampling. It is worth noting that several existing state-of-the-art models \cite{van2013deep,bansal2016ask,chen2017text} are special cases of our framework (e.g. using MSE-loss/Log-loss with CNN or RNN), so they are compared to other loss functions under our framework.
\paragraph{Evaluation Metrics} For recommendation performance, we follow \cite{wang2015collaborative,bansal2016ask} and use recall@M. As pointed out in \cite{wang2015collaborative}, precision is not a suitable performance measure, since a non-interaction may mean either that (1) the user is not interested in the item, or (2) the user did not notice its existence. More specifically, for each user, we rank the candidate test items based on the predicted scores and compute recall@M on the resulting list. Finally, recall@M is averaged over all users.
\iffalse
More specifically, the recall@M for each user is defined as:
$$
\text{recall@M} = \frac{\text{number of items users interact among the top M}}{\text{total number of items that the user interacts}}
$$
The final recall@M is averaged over all users.
\fi
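The metric can be sketched as follows (a hypothetical helper, assuming a per-user ranked list of candidate test items and the set of held-out items each user interacted with):

```python
def recall_at_m(ranked_items, relevant_items, m=50):
    """recall@M for one user: the fraction of the user's held-out
    items that appear among the top-M ranked candidates."""
    hits = len(set(ranked_items[:m]) & set(relevant_items))
    return hits / len(relevant_items) if relevant_items else 0.0

def mean_recall_at_m(rankings, ground_truth, m=50):
    """Average recall@M over all users with at least one held-out item."""
    users = [u for u, rel in ground_truth.items() if rel]
    return sum(recall_at_m(rankings[u], ground_truth[u], m)
               for u in users) / len(users)
```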
As for the computational cost, we measure it in three dimensions: the training time per iteration (or equivalently per epoch, since the batch size is fixed for all methods), the number of iterations needed to reach a referenced loss, and the total amount of computation time needed to reach that loss. In our experiments, we use the smallest loss obtained by IID Sampling within a maximum of 30 epochs as the referenced loss. Note that all time measurements reported here are wall time.
\paragraph{Parameter Settings} The key parameters are tuned with a validation set, while others are simply set to reasonable values. We adopt Adam \cite{kingma2014adam} as the stochastic optimizer. We use the same batch size $b=512$ for all sampling strategies and set the number of positive links per sampled stratum to $s=4$. The learning rate is set to 0.001 for MSE-loss and 0.01 for the others; $\gamma$ is set to 0.1 for Hinge-loss and 10 for the others; $\lambda$ is set to 8 for MSE-loss and 128 for the others. We set the number of negative examples to $k=10$ for convolutional neural networks and $k=5$ for RNN/LSTM due to the GPU memory limit. All experiments are run with Titan X GPUs. We use a unigram noise/negative distribution.
For CNN, we adopt a structure similar to that in \cite{kim2014convolutional} and use 50 filters with a filter size of 3. Regularization is added using both weight decay on user embeddings and dropout on item embeddings. For RNN, we use LSTM \cite{hochreiter1997long} with 50 hidden units. For both models, the dimensions of the user and word embeddings are set to 50. Early stopping is utilized, and the experiments are run for a maximum of 30 epochs.
\subsection{Speedup Under Different Sampling Strategies}
\begin{table}[t]
\small
\centering
\caption{Comparisons of speedup for different sampling strategies against IID Sampling: per iteration, \# of iteration, and total speedup.}
\label{tab:speedup}
\begin{tabular}{|p{1.3em}c|ccc|ccc|}\hline
& & \multicolumn{3}{c|}{CiteULike} & \multicolumn{3}{c|}{News} \\ [5pt]
Model & Sampling & \multicolumn{1}{c}{Per it.} & \multicolumn{1}{c}{\# of it.} & \multicolumn{1}{c|}{Total} & \multicolumn{1}{c}{Per it.} & \multicolumn{1}{c}{\# of it.} & \multicolumn{1}{c|}{Total} \\[5pt] \hline
\multirow{6}{*}{CNN} & Negative & 1.02 & 1.00 & 1.02 & 1.03 & 1.03 & 1.06\\[5pt]
& Stratified& 8.83 & 0.97 & 8.56 & 6.40 & 0.97 & 6.20\\[5pt]
& N.S. & 8.42 & \textbf{2.31} & 19.50 & 6.54 & \textbf{2.21} & 14.45\\[5pt]
& Strat. w. N.S. & \textbf{15.53} & 1.87 & \textbf{29.12} & \textbf{11.49} & 2.17 & \textbf{24.98}\\[5pt] \hline
\multirow{6}{*}{LSTM} & Negative & 0.99 & 0.96 & 0.95& 1.0 & 1.25 & 1.25 \\[5pt]
& Stratified & 3.1 & 0.77 & 2.38 & 3.12 & 1.03 & 3.22\\[5pt]
& N.S. & 2.87 & \textbf{2.45} & 7.03 & 2.78 & \textbf{4.14} & \textbf{11.5}\\[5pt]
& Strat. w. N.S. & \textbf{3.4} & 2.22 & \textbf{7.57}& \textbf{3.13} & 3.32 & 10.41 \\[5pt] \hline
\end{tabular}
\end{table}
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/convergence-legend-citeulike_title_and_abstract-cnn_embedding-cost-epoch}
\caption{Citeulike (epoch)}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/convergence-legend-citeulike_title_and_abstract-cnn_embedding-cost-time}
\caption{Citeulike (wall time)}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/convergence-legend-news_title_and_abstract-cnn_embedding-cost-epoch}
\caption{News (epoch)}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/convergence-legend-news_title_and_abstract-cnn_embedding-cost-time}
\caption{News (wall time)}
\end{subfigure}
\caption{Training loss curves (all methods have the same number of $b$ positive samples in a mini-batch)}
\label{fig:convergence_curve_loss}
\end{figure*}
\begin{figure*}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/convergence-legend-citeulike_title_and_abstract-cnn_embedding-test_recall-epoch}
\caption{Citeulike (epoch)}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/convergence-legend-citeulike_title_and_abstract-cnn_embedding-test_recall-time}
\caption{Citeulike (wall time)}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/convergence-legend-news_title_and_abstract-cnn_embedding-test_recall-epoch}
\caption{News (epoch)}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{figures/convergence-legend-news_title_and_abstract-cnn_embedding-test_recall-time}
\caption{News (wall time)}
\end{subfigure}
\caption{Test performance/recall curves (all methods have the same number of $b$ positive samples in a mini-batch).}
\label{fig:convergence_curve_recall}
\end{figure*}
Table \ref{tab:speedup} breaks down the speedup into (1) the speedup for training on a given mini-batch, (2) the speedup in the number of iterations needed to reach the referenced cost, and (3) the total speedup, which is the product of the first two. Different strategies are compared against IID Sampling. Negative Sampling has a computational cost similar to IID Sampling, which matches our analysis. All three proposed sampling strategies significantly reduce the computational cost within a mini-batch. Moreover, Negative Sharing and Stratified Sampling with Negative Sharing further improve the convergence w.r.t. the number of iterations, which demonstrates the benefit of using a larger number of negative examples.
Figures \ref{fig:convergence_curve_loss} and \ref{fig:convergence_curve_recall} show the convergence curves of the loss and the test performance for different sampling strategies (with CNN + SG-loss). In both figures, we measure progress every epoch, which is equivalent to a fixed number of iterations since all methods have the same batch size $b$. We observe two main types of convergence behavior. First, in terms of the number of iterations, Negative Sharing (with or without Stratified Sampling) converges fastest, which is attributed to the larger number of negative samples used. Second, in terms of wall time, Negative Sharing (with or without Stratified Sampling) and Stratified Sampling (by Items) are all significantly faster than the baseline sampling strategies, i.e., IID Sampling and Negative Sampling. It is also interesting to see that overfitting occurs earlier as convergence speeds up, which does no harm since early stopping can be used.
For Stratified Sampling (with or without Negative Sharing), the number of positive links per stratum $s$ also plays a role in the speedup, as analyzed before. As shown in Figure \ref{fig:strata_size}, both the convergence time and the recommendation performance can be improved with a reasonable $s$, such as 4 or 8 in our case.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{figures/chopsize-convergence-legend-citeulike_title_and_abstract-group_sample-cost}
\caption{Loss (Stratified)}
\label{}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{figures/chopsize-convergence-legend-citeulike_title_and_abstract-group_neg_shared-cost}
\caption{Loss (Stratified with N.S.)}
\label{}
\end{subfigure}
\\
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{figures/chopsize-convergence-legend-citeulike_title_and_abstract-group_sample-test_recall}
\caption{Recall (Stratified)}
\label{}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{figures/chopsize-convergence-legend-citeulike_title_and_abstract-group_neg_shared-test_recall}
\caption{Recall (Stratified with N.S.)}
\label{}
\end{subfigure}
\caption{The number of positive links per stratum $s$ VS loss and performance.}
\label{fig:strata_size}
\end{figure}
\subsection{Recommendation Performance Under Different Sampling Strategies}
The above experiments show that the proposed sampling strategies are significantly faster than the baselines. We now further assess the recommendation performance obtained by adopting the proposed strategies.
Table \ref{tab:perf_comp} compares the proposed sampling strategies with CNN/RNN models and four loss functions (both pointwise and pairwise). We can see that IID Sampling, Negative Sampling and Stratified Sampling (by Items) achieve similar recommendation performance, which is expected since they all utilize the same number of negative links. Negative Sharing and Stratified Sampling with Negative Sharing utilize many more negative samples, so their performance is significantly better. We also observe that the current recommendation models based on MSE-loss \cite{van2013deep,bansal2016ask} can be improved by other losses such as SG-loss and the pairwise loss functions \cite{chen2017text}.
\begin{table*}[t]
\small
\centering
\caption{Recall@50 for different sampling strategies under different models and losses.}
\label{tab:perf_comp}
\begin{tabular}{|cc|cccc|cccc|}\hline
& & \multicolumn{4}{c|}{CiteULike} & \multicolumn{4}{c|}{News} \\ [5pt]
Model & Sampling & \multicolumn{1}{l}{SG-loss} & \multicolumn{1}{l}{MSE-loss} & \multicolumn{1}{l}{Hinge-loss} & \multicolumn{1}{l|}{Log-loss}& \multicolumn{1}{l}{SG-loss} & \multicolumn{1}{l}{MSE-loss} & \multicolumn{1}{l}{Hinge-loss} & \multicolumn{1}{l|}{Log-loss} \\[5pt] \hline
\multirow{7}{*}{CNN} & IID & 0.4746 & 0.4437 & - & - & 0.1091 & 0.0929 & - & - \\[5pt]
& Negative & 0.4725 & 0.4408 & 0.4729 & 0.4796 & 0.1083 & 0.0956 & 0.1013 & 0.1009\\[5pt]
& Stratified & 0.4761 & 0.4394 & - & - & 0.1090 & 0.0913 & - & - \\[5pt]
& Negative Sharing & 0.4866 & 0.4423 & \textbf{0.4794} & 0.4769& 0.1131 & 0.0968 & 0.0909 & 0.0932\\[5pt]
& Stratified with N.S. & \textbf{0.4890} & \textbf{0.4535} & 0.4790 & \textbf{0.4884}& \textbf{0.1196} & \textbf{0.1043} & \textbf{0.1059} & \textbf{0.1100} \\[5pt] \hline
\multirow{7}{*}{LSTM}
& IID & 0.4479 & 0.4718 & - & - & 0.0971 & 0.0998 & - & - \\[5pt]
& Negative & 0.4371 & 0.4668 & 0.4321 & 0.4540 & 0.0977 & 0.0977 & 0.0718 & 0.0711\\[5pt]
& Stratified & 0.4344 & 0.4685 & - & - & 0.0966 & 0.0996 & - & - \\[5pt]
& Negative Sharing & 0.4629 & 0.4839 & 0.4605 & 0.4674 & \textbf{0.1121} & 0.0982 & 0.0806 & 0.0862 \\[5pt]
& Stratified with N.S. & \textbf{0.4742} & \textbf{0.4877} & \textbf{0.4703} & \textbf{0.4730} & 0.1051 & \textbf{0.1098} & \textbf{0.1017} & \textbf{0.1002}\\[5pt] \hline
\end{tabular}
\end{table*}
To further investigate the superior performance brought by Negative Sharing, we study the effect of the number of negative examples $k$ on convergence and performance. Figure \ref{fig:negnums} shows the test performance for various $k$. We observe a clear diminishing return in the performance improvement; however, the performance still increases even with 20 negative examples, which explains why our proposed methods with negative sharing can result in better performance.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{figures/negnums-citeulike_title_and_abstract}
\caption{CiteULike}
\label{}
\end{subfigure}
\begin{subfigure}[b]{0.23\textwidth}
\includegraphics[width=\textwidth]{figures/negnums-news_title_and_abstract}
\caption{News}
\label{}
\end{subfigure}
\caption{The number of negatives VS performances.}
\label{fig:negnums}
\end{figure}
\section{Introduction}
Collaborative Filtering (CF) has been one of the most effective methods in recommender systems, and methods like matrix factorization \cite{koren2008factorization,koren2009matrix,salakhutdinov2011probabilistic} are widely adopted. However, one of its limitations is dealing with the ``cold-start'' problem, where there are few or no observed interactions for new users or items, such as in news recommendation. To overcome this problem, hybrid methods have been proposed that incorporate side information \cite{rendle2010factorization,chen2012svdfeature,singh2008relational} or item content information \cite{wang2011collaborative,gopalan2014content} into the recommendation algorithm. Although these methods can deal with side information to some extent, they are not effective at extracting features from complicated data such as images, audio and text. In contrast, deep neural networks have been shown to be very powerful at automatically extracting complicated features from such data \cite{krizhevsky2012imagenet,kim2014convolutional}. Hence, it is natural to combine deep learning with traditional collaborative filtering for recommendation tasks, as seen in recent studies \cite{wang2015collaborative,bansal2016ask,zheng2016neural,chen2017text}.
In this work, we generalize several state-of-the-art neural network-based recommendation algorithms \cite{van2013deep,bansal2016ask,chen2017text}, and propose a more general framework that combines both collaborative filtering and deep neural networks in a unified fashion. The framework inherits the best of two worlds: (1) the power of collaborative filtering at capturing user preference via their interaction with items, and (2) that of deep neural networks at automatically extracting high-level features from content data.
However, this also comes with a price. Traditional CF methods, such as sparse matrix factorization \cite{salakhutdinov2011probabilistic,koren2008factorization}, are usually fast to train, while deep neural networks are in general much more computationally expensive \cite{krizhevsky2012imagenet}. Combining these two models in a new recommendation framework can easily increase the computational cost by hundreds of times, thus requiring a new design of the training algorithm to make it more efficient.
We tackle the computational challenges by first establishing a connection between the loss functions and the user-item interaction bipartite graph. We realize that the key issue in combining CF and deep neural networks is the following: the loss function terms are defined over the links, so mini-batch sampling for stochastic gradient training is over links, while the main computational burden is located at the nodes (e.g., the Convolutional Neural Network computation for the image of an item). For this type of loss function, different mini-batch sampling strategies lead to different computational costs, depending on how many node computations are required in a mini-batch. Existing stochastic sampling techniques, such as IID sampling, are inefficient, as they do not take into account the node computations that can potentially be shared across links/data points.
Inspired by this connection, we propose three novel sampling strategies for the general framework that take the coupled computational costs across user-item interactions into consideration. The first strategy is Stratified Sampling, which tries to amortize the costly node computation by partitioning the links into different groups based on nodes (called strata), and samples links based on these groups. The second strategy is Negative Sharing, which is based on the observation that interaction/link computation is fast: once a mini-batch of user-item tuples is sampled, we reuse the nodes for more links by creating additional negative links between nodes in the same batch. Both strategies have their pros and cons; to keep their advantages while avoiding their weaknesses, we form a third strategy by combining the two. Theoretical analysis of the computational cost and convergence is also provided.
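To illustrate Negative Sharing concretely: with dot-product interactions, a mini-batch of $b$ positive (user, item) pairs yields a $b \times b$ score matrix in one matrix multiplication, whose diagonal holds the positive scores and whose off-diagonal entries serve as shared negatives at no extra node computation. A NumPy sketch (the softmax surrogate loss here is only for illustration, not necessarily a loss used in the paper):

```python
import numpy as np

def negative_sharing_scores(F, G):
    """F: (b, d) user embeddings and G: (b, d) item embeddings of the
    b sampled positive pairs. Returns the (b, b) score matrix: entry
    (i, j) scores user i against item j, so the diagonal holds the
    positives and off-diagonal entries act as shared negatives."""
    return F @ G.T

def illustrative_softmax_loss(S):
    # Treat each row as a classification over the b in-batch items;
    # the positive item sits on the diagonal.
    S = S - S.max(axis=1, keepdims=True)
    log_p = S - np.log(np.exp(S).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))
```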
Our contributions can be summarized as follows.
\begin{itemize}
\item We propose a general hybrid recommendation framework (Neural Network-based Collaborative Filtering) combining CF and content-based methods with deep neural networks, which generalize several state-of-the-art approaches.
\item We establish a connection between the loss functions and the user-item interaction graph, based on which we propose sampling strategies that can significantly improve the training efficiency (up to 30$\times$ faster in our experiments) as well as the recommendation performance of the proposed framework.
\item We provide both theoretical analysis and empirical experiments to demonstrate the superiority of the proposed methods.
\end{itemize}
\section*{Acknowledgements}
The authors would like to thank anonymous reviewers for helpful suggestions. The authors would also like to thank NVIDIA for the donation of one Titan X GPU. This work is partially supported by NSF CAREER \#1741634.
\bibliographystyle{ACM-Reference-Format}
\section{A General Framework for Neural Network-based Collaborative Filtering}
In this section, we propose a general framework for neural network-based Collaborative Filtering that incorporates both interaction and content information.
\subsection{Text Recommendation Problem}
In this work, we use the text recommendation task \cite{wang2011collaborative,wang2015collaborative,bansal2016ask,chen2017text} as an illustrative application for the proposed framework. However, the proposed framework can be applied to more scenarios such as music and video recommendations.
We use $\mathbf{x}_u$ and $\mathbf{x}_v$ to denote the features of user $u$ and item $v$, respectively. In the text recommendation setting, we set $\mathbf{x}_u$ to a one-hot vector indicating $u$'s user id (i.e., a binary vector with a one only at the $u$-th position)\footnote{Other user profile features can be included, if available.}, and $\mathbf{x}_v$ to the text sequence, i.e., $\mathbf{x}_v = (w_1, w_2, \cdots, w_t)$. A response matrix $\mathbf{\tilde{R}}$ denotes the historical interactions between users and articles, where $\tilde{r}_{uv}$ indicates the interaction between a user $u$ and an article $v$, such as ``click-or-not'' or ``like-or-not''. Furthermore, we consider $\mathbf{\tilde{R}}$ as implicit feedback in this work, which means only positive interactions are provided, and non-interactions are implicitly treated as negative feedback.
Given user/item features $\{\mathbf{x}_u \}, \{\mathbf{x}_v\}$ and their historical interaction $\mathbf{\tilde{R}}$, the goal is to learn a model which can rank new articles for an existing user $u$ based on this user's interests and an article's text content.
\subsection{Functional Embedding}
In most existing matrix factorization techniques \cite{koren2008factorization,koren2009matrix,salakhutdinov2011probabilistic}, each user/item ID is associated with a latent vector $\mathbf{u}$ or $\mathbf{v}$ (i.e., an embedding), which can be considered a simple linear transformation of the one-hot vector represented by their ID, i.e., $\mathbf{u}_u = \mathbf{f}(\mathbf{x}_u) = \mathbf{W}^T \mathbf{x}_u$ (where $\mathbf{W}$ is the embedding/weight matrix). Although simple, this direct association of a user/item ID with its representation makes the model less flexible and unable to incorporate features such as text and images.
In order to effectively incorporate user and item features such as content information, it has been proposed to replace embedding vectors $\mathbf{u}$ or $\mathbf{v}$ with functions such as decision trees \cite{zhou2011functional} and some specific neural networks \cite{bansal2016ask,chen2017text}. Generalizing the existing work, we propose to replace the original embedding vectors $\mathbf{u}$ and $\mathbf{v}$ with general differentiable functions $\mathbf{f}(\cdot) \in \mathbb{R}^d$ and $\mathbf{g}(\cdot) \in \mathbb{R}^d$ that take user/item features $\mathbf{x}_u, \mathbf{x}_v$ as their inputs. Since the user/item embeddings are the output vectors of functions, we call this approach \textit{Functional Embedding}. After embeddings are computed, a score function $r(u,v)$ can be defined based on these embeddings for a user/item pair $(u,v)$, such as vector dot product $r(u, v) = \mathbf{f}(\mathbf{x}_u)^T \mathbf{g}(\mathbf{x}_v) $ (used in this work), or a general neural network.
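A minimal sketch of Functional Embedding with the dot-product score used in this work ($\mathbf{f}$ is the linear ID embedding; a mean-of-word-embeddings function stands in for the CNN/RNN item encoders of the paper, and all sizes are illustrative assumptions):

```python
import numpy as np

d, num_users, vocab_size = 50, 1000, 20000
rng = np.random.default_rng(0)
W_user = rng.normal(scale=0.01, size=(num_users, d))   # f: ID embedding table
W_word = rng.normal(scale=0.01, size=(vocab_size, d))  # word embeddings for g

def f(user_id):
    # Linear map of the one-hot user vector: W^T x_u.
    return W_user[user_id]

def g(word_ids):
    # Stand-in content encoder: average of word embeddings
    # (the paper uses a CNN or RNN here).
    return W_word[word_ids].mean(axis=0)

def score(user_id, word_ids):
    # r(u, v) = f(x_u)^T g(x_v)
    return f(user_id) @ g(word_ids)
```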
\begin{figure}[t!]
\centering
\includegraphics[width=0.34\textwidth]{figures/framework}
\caption{The functional embedding framework.}
\label{fig:framework}
\end{figure}
The model framework is shown in Figure \ref{fig:framework}. Our framework is very general, as it does not explicitly specify the feature extraction functions, as long as they are differentiable. In practice, these functions can be specified with neural networks such as CNNs or RNNs for extracting high-level information from images, audio, or text sequences. When there are no associated features, the framework degenerates to conventional matrix factorization, where user/item IDs are used as their features.
For simplicity, we will denote the output of $\mathbf{f}(\mathbf{x}_u)$ and $\mathbf{g}(\mathbf{x}_v)$ by $\mathbf{f}_u$ and $\mathbf{g}_v$, which are the embedding vectors for user $u$ and item $v$.
\subsection{Loss Functions for Implicit Feedback}
In many real-world applications, users only provide positive signals according to their preferences, while negative signals are usually implicit. This is usually referred as ``implicit feedback'' \cite{pan2008one,hu2008collaborative,rendle2009bpr}. In this work, we consider two types of loss functions that can handle recommendation tasks with implicit feedback, namely, pointwise loss functions and pairwise loss functions.
\begin{table}[!t]
\centering
\small{
\caption{Examples of loss functions for recommendation.}
\label{tab:loss_inst}
\begin{tabular}{c}
\hline
{Pointwise loss}\\
\hline
SG-loss \cite{mikolov2013distributed}: -$\sum_{(u, v) \in \mathcal{D}} \bigg(\log\sigma(\mathbf{f}_u^T \mathbf{g}_v) + \lambda \mathbb{E}_{v'\sim P_n} \log\sigma(-\mathbf{f}_u^T \mathbf{g}_{v'})\bigg)$ \\
MSE-loss \cite{van2013deep}: $ \sum_{(u, v) \in \mathcal{D}} \bigg((\tilde{r}^+_{uv} - \mathbf{f}_u^T \mathbf{g}_v)^2 + \lambda \mathbb{E}_{v' \sim P_n} (\tilde{r}^-_{uv'} - \mathbf{f}_u^T \mathbf{g}_{v'})^2\bigg)$ \\ \hline
{Pairwise loss}\\
\hline
Log-loss \cite{rendle2009bpr}: -$\sum_{(u, v) \in \mathcal{D}} \mathbb{E}_{v'\sim P_n} \log \sigma\bigg(\gamma(\mathbf{f}_u^T \mathbf{g}_v - \mathbf{f}_u^T \mathbf{g}_{v'})\bigg)$\\
Hinge-loss \cite{weimer2008improving}:
$\sum_{(u, v) \in \mathcal{D}} \mathbb{E}_{{v'}\sim P_n} \max\bigg(\mathbf{f}_u^T \mathbf{g}_{v'} - \mathbf{f}_u^T \mathbf{g}_v + \gamma, 0\bigg)$ \\
\hline
\end{tabular}
}
\end{table}
Pointwise loss functions have been applied to such problems in many existing works. In \cite{van2013deep,wang2015collaborative,bansal2016ask}, the mean square error (MSE) loss is applied, where the ``negative terms'' are weighted less. The skip-gram (SG) loss has been successfully utilized to learn robust word embeddings \cite{mikolov2013distributed}.
These two loss functions are summarized in Table \ref{tab:loss_inst}. Note that we use a weighted expectation term over all negative samples, which can be approximated with a small number of samples. We can abstract the pointwise loss functions into the following form:
\begin{equation}
\label{eq:point}
\begin{split}
\mathcal{L}_{\text{pointwise}} = \mathbb{E}_{u\sim P_d(u)}\bigg[
&\mathbb{E}_{v\sim P_d(v|u)} c^+_{uv} \mathcal{L^+}(u, v|\theta) \\
& +\mathbb{E}_{v'\sim P_n(v')} c^-_{uv'} \mathcal{L^-}(u, v'|\theta)
\bigg]
\end{split}
\end{equation}
where $P_d$ is the (empirical) data distribution, $P_n$ is a user-defined negative data distribution, $c$ denotes user-defined weights for the different user-item pairs, $\theta$ denotes the set of all parameters, $\mathcal{L}^+(u, v|\theta)$ denotes the loss on a single positive pair $(u,v)$, and $\mathcal{L}^-(u, v'|\theta)$ denotes the loss on a single negative pair. Generally speaking, given a user $u$, a pointwise loss function encourages her scores with positive items $\{v\}$ and discourages her scores with negative items $\{v'\}$.
For the ranking problems commonly seen in the implicit feedback setting, it has been argued that pairwise losses are advantageous \cite{rendle2009bpr,weimer2008improving}, as they encourage ranking positive items above negative items for a given user. Different from their pointwise counterparts, pairwise loss functions are defined on a triplet $(u, v, v')$, where $v$ is a positive item and $v'$ a negative item for the user $u$. Table \ref{tab:loss_inst} also gives two instances of such loss functions used in existing papers \cite{rendle2009bpr,weimer2008improving} (with $\gamma$ being the pre-defined ``margin'' parameter). We can abstract the pairwise loss functions into the following form:
\begin{equation}
\label{eq:pair}
\begin{split}
\mathcal{L}_{\text{pairwise}} = \mathbb{E}_{u\sim P_d(u)} \mathbb{E}_{v\sim P_d(v|u)}\mathbb{E}_{v'\sim P_n(v')} c_{uvv'} \mathcal{L}(u, v, v'|\theta)
\end{split}
\end{equation}
where the notations are defined similarly to Eq. \ref{eq:point}, and $\mathcal{L}(u, v, v'|\theta)$ denotes the loss function on the triplet $(u,v,v')$.
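For concreteness, the four losses of Table \ref{tab:loss_inst} can be written per positive score $s^+ = \mathbf{f}_u^T \mathbf{g}_v$ and sampled negative scores $s^-$ (a NumPy sketch; the expectation over $P_n$ is approximated by averaging over the sampled negatives):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sg_loss(s_pos, s_neg, lam=1.0):
    # Skip-gram loss for one positive pair and its sampled negatives.
    return -(np.log(sigmoid(s_pos)) + lam * np.mean(np.log(sigmoid(-s_neg))))

def mse_loss(s_pos, s_neg, r_pos=1.0, r_neg=0.0, lam=1.0):
    # Weighted MSE with implicit negative targets r_neg.
    return (r_pos - s_pos) ** 2 + lam * np.mean((r_neg - s_neg) ** 2)

def log_loss(s_pos, s_neg, gamma=10.0):
    # Pairwise BPR-style log loss.
    return -np.mean(np.log(sigmoid(gamma * (s_pos - s_neg))))

def hinge_loss(s_pos, s_neg, gamma=0.1):
    # Pairwise hinge loss with margin gamma.
    return np.mean(np.maximum(s_neg - s_pos + gamma, 0.0))
```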
\subsection{Stochastic Gradient Descent Training and Computational Challenges}
To train the model, we use stochastic gradient descent based algorithms \cite{bottou2010large,kingma2014adam}, which are widely used for training matrix factorization and neural networks. The main flow of the training algorithm is summarized in Algorithm \ref{alg:original}.
\begin{algorithm}[!t]
\caption{Standard model training procedure}
\label{alg:original}
\begin{algorithmic}
\WHILE{not converged}
\STATE // mini-batch sampling
\STATE draw a mini-batch of user-item tuples $(u, v)$\footnotemark
\STATE // forward pass
\STATE compute $\mathbf{f}(\mathbf{x}_u)$, $\mathbf{g}(\mathbf{x}_v)$ and their interaction $\mathbf{f}_u^T \mathbf{g}_v$
\STATE compute the loss function $\mathcal{L}$
\STATE // backward pass
\STATE compute gradients and apply SGD updates
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\footnotetext{Draw a mini-batch of user-item triplets $(u, v, v')$ if a pairwise loss function is adopted.}
By adopting functional embedding with (deep) neural networks, we increase the power of the model, but this also comes at a cost. Figure \ref{fig:operational} shows the training time (on the CiteULike data) with different item functions $\mathbf{g}(\cdot)$: a linear embedding taking the item id as the feature (equivalent to conventional MF), a CNN-based content embedding, and an RNN/LSTM-based content embedding. We see an orders-of-magnitude increase in training time for the latter two embedding functions, which may create barriers to adopting models under this framework.
Breaking down the computation of the framework, there are three major parts of the computational cost. The first part is the user-based computation (denoted by $t_f$ time units per user), which includes the forward computation of the user function $\mathbf{f}(\mathbf{x}_u)$ and the backward computation of the function output w.r.t. its parameters. The second part is the item-based computation (denoted by $t_g$ time units per item), which similarly includes the forward and backward computation of the item function $\mathbf{g}(\mathbf{x}_v)$. The third part is the computation of the interaction function (denoted by $t_i$ time units per interaction). The total computational cost for a mini-batch is then $t_f \times \text{\# of users} + t_g \times \text{\# of items} + t_i \times \text{\# of interactions}$, plus some minor operations which we assume negligible. In the text recommendation application, user IDs are used as user features (which can be seen as a linear layer on top of the one-hot inputs), (deep) neural networks are used for the text sequences, and the vector dot product is used as the interaction function; thus the dominant computational cost is $t_g$ (orders of magnitude larger than $t_f$ and $t_i$). In other words, we assume $t_g \gg t_f, t_i$ in this work.
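This cost model can be made concrete with a toy calculator: for a fixed set of links in a mini-batch, the dominant term scales with the number of distinct items, which is exactly what Stratified Sampling and Negative Sharing reduce (the $t_f$, $t_g$, $t_i$ values below are arbitrary illustrative units reflecting $t_g \gg t_f, t_i$):

```python
def minibatch_cost(links, t_f=1.0, t_g=100.0, t_i=0.01):
    """links: (user, item) pairs in one mini-batch. Total cost is
    t_f * #distinct users + t_g * #distinct items + t_i * #links,
    per the breakdown in the text."""
    users = {u for u, _ in links}
    items = {v for _, v in links}
    return t_f * len(users) + t_g * len(items) + t_i * len(links)

# 8 links over 8 distinct items (IID-like) vs. the same number of
# links grouped on only 2 items (stratified-like, s = 4):
iid_batch = [(u, u) for u in range(8)]
stratified_batch = [(u, u % 2) for u in range(8)]
```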
\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{figures/operational}
\caption{Model training time per epoch with different types of item functions (in log-scale).}
\label{fig:operational}
\end{figure}
\section{Related Work}
Collaborative filtering \cite{koren2009matrix} has been one of the most effective methods in recommender systems, and methods like matrix factorization \cite{koren2008factorization,salakhutdinov2011probabilistic} are widely adopted. While many papers focus on the explicit feedback setting such as rating prediction, implicit feedback is found in many real-world scenarios and studied by many papers as well \cite{pan2008one,hu2008collaborative,rendle2009bpr}. Although collaborative filtering techniques are powerful, they suffer from the so-called ``cold-start'' problem since side/content information is not well leveraged. To address the issue and improve performance, hybrid methods are proposed to incorporate side information \cite{singh2008relational,rendle2010factorization,zhou2011functional,chen2012svdfeature,chen2017task}, as well as content information \cite{wang2011collaborative,gopalan2014content,wang2015collaborative,chen2017text}.
Deep Neural Networks (DNNs) have shown extraordinary abilities to extract high-level features from raw data such as video, audio, and text \cite{collobert2011natural,kim2014convolutional,zhang2015character}. Compared to traditional feature detectors, such as SIFT and n-grams, DNNs and other embedding methods \cite{tang2015line,chen2016entity,chen2017task} can automatically extract better features that produce higher performance in various tasks. To leverage the extraordinary feature extraction and content understanding abilities of DNNs for recommender systems, recent efforts have combined collaborative filtering and neural networks \cite{van2013deep,wang2015collaborative,bansal2016ask,chen2017text}. \cite{wang2015collaborative} adopts an autoencoder for extracting item-side text information for article recommendation, \cite{bansal2016ask} adopts an RNN/GRU to better understand the text content, and \cite{chen2017text} proposes to use a CNN and pairwise loss functions, also incorporating unsupervised text embedding. The general functional embedding framework in this work subsumes existing models \cite{van2013deep,bansal2016ask,chen2017text}.
Stochastic Gradient Descent \cite{bottou2010large} and its variants \cite{kingma2014adam} have been widely adopted for training machine learning models, including neural networks. Samples are drawn uniformly at random (IID) so that the stochastic gradient vector equals the true gradient in expectation. In settings where negative examples are overwhelming, such as word embedding (e.g., Word2Vec \cite{mikolov2013distributed}) and network embedding (e.g., LINE \cite{tang2015line}), negative sampling is utilized. Recent efforts improve SGD convergence by (1) reducing the variance of the stochastic gradient estimator, or (2) distributing the training over multiple workers. Several sampling techniques, such as stratified sampling \cite{zhao2014accelerating} and importance sampling \cite{zhao2015stochastic}, have been proposed to achieve variance reduction. Different from this line of work, we improve the sampling strategies in SGD by reducing the computational cost of a mini-batch while preserving, or even increasing, the number of data points in the mini-batch. Sampling techniques are also studied in \cite{gemulla2011large,zhuang2013fast} to distribute the computation of matrix factorization; their objectives in sampling strategy design are to reduce parameter overlap and cache misses. The idea of sharing negative examples has also been exploited to speed up word embedding training in \cite{ji2016parallelizing}.
|
train/arxiv
|
BkiUdaY5qX_BnPKL8XcU
| 5 | 1 |
\section{INTRODUCTION}
Bohmian Quantum Mechanics (BQM) is an alternative interpretation of Quantum Mechanics (QM) where the quantum particles follow certain deterministic trajectories guided by the usual wavefunction $\Psi$ (the solution of Schr\"{o}dinger's equation) according to the so called Bohmian equations (BE):
\begin{align}
m_i\frac{d\mathbf{r}_i}{dt}=\hbar\,\Im\left(\frac{\nabla_i\Psi}{\Psi}\right)
\end{align}
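As a minimal numerical illustration (not part of the original analysis; it assumes NumPy and an illustrative 1-d plane wave $\Psi=e^{ikx}$, for which the guidance equation gives the uniform velocity $\hbar k/m$), the Bohmian velocity field can be evaluated on a grid by finite differences:

```python
import numpy as np

def bohm_velocity(psi, x, hbar=1.0, m=1.0):
    """Bohmian velocity v = (hbar/m) * Im(dPsi/dx / Psi) on a 1-d grid."""
    dpsi = np.gradient(psi, x)          # second-order finite differences
    return (hbar / m) * np.imag(dpsi / psi)

# Sanity check: for psi = exp(i k x) the velocity should equal hbar*k/m.
k = 1.7
x = np.linspace(-5.0, 5.0, 2001)
psi = np.exp(1j * k * x)
v = bohm_velocity(psi, x)
```

In practice one would differentiate the analytic $\Psi$ of the model instead of a grid, but the finite-difference form works for any numerically tabulated wavefunction.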
BQM is a highly nonlocal theory where quantum entanglement has a strong impact on the evolution of quantum trajectories \cite{Bohm, BohmII, durr2009bohmian}. Moreover, since the BE are nonlinear one expects to find both ordered and chaotic trajectories. There are many works which study in detail and from many perspectives both ordered and chaotic Bohmian trajectories (\cite{iacomelli1996regular, frisk1997properties, falsaperla2003motion, wisniacki2005motion,efthymiopoulos2006chaos, wisniacki2007vortex, borondo2009dynamical}).
In a series of previous works we focused on the production of chaos in Bohmian trajectories and came up with a generic theoretical mechanism for the emergence of chaos in arbitrary 2d and 3d systems \cite{efth2009, tzemos2018origin}. This is the so-called nodal point-X-point complex (NPXPC) mechanism: whenever a Bohmian particle comes close to the neighborhood of a moving nodal point of the wavefunction (a point where $\Psi=0$), it gets scattered by the accompanying X-point, a hyperbolic point of the Bohmian flow in the frame of reference moving with the velocity of the nodal point. The cumulative action of such close encounters between the particle and the NPXPCs leads to the emergence of chaos (for a review of chaos in BQM see \cite{efth2017, contopoulos2020chaos}).
In previous papers \cite{tzemos2019bohmian, tzemos2020chaos, tzemos2020ergodicity} we considered the problems of chaos, ergodicity and applicability of Born's rule in a two qubit system, composed of coherent states of the quantum harmonic oscillator \cite{asboth2004coherent, garrison2008quantum}. This system has many interesting features that facilitate the study of many different aspects of BQM:
\begin{itemize}
\item Its entanglement can be calculated analytically since it is in close analogy with a two spin based qubit model (for entanglement in Bohmian trajectories see also \cite{zander2018revisiting, elsayed2018entangled}).
\item It has infinitely many NPXPCs lying on a straight lattice which moves and rotates in the configuration space. These lattices exist for every nonzero value of entanglement. The NPXPCs go to infinity at certain times and then they reappear. We found that all partially entangled states produce both chaotic and ordered trajectories, while in the two extreme cases of zero and maximum entanglement we have only ordered and only chaotic trajectories, respectively.
\item Its probability density $P=|\Psi|^2$ is characterized by two well defined blobs which move in the configuration space and collide close to the origin. During these collisions we see the temporary formation of several blobs between the nodal points of $\Psi$. After a collision the two blobs are reformed and move on until the next collision and so forth.
\item Its ordered trajectories lie on a certain region of the configuration space, for given parameters.
\item The chaotic trajectories of our model were found to be ergodic for every given amount of the entanglement. Namely, their final distribution of points is the same regardless of their initial conditions (for ergodicity in Bohmian trajectories see also \cite{aharonov2004time, avanzini2017quantum}).
\item The ergodic nature of the chaotic trajectories relies heavily on the existence of infinitely many NPXPCs. Consequently, although it is a simple quantum mechanical system, it exhibits very rich dynamics from a Bohmian standpoint.
\end{itemize}
The above features make our model useful for the study of the origin of Born's rule, an important open problem in BQM \cite{valentini1991signalI, valentini1991signalII, durr1992quantum, valentini2005dynamical, towler2011time, abraham2014long, durr2019typicality}. Born's rule (BR) states that the probability density of finding a quantum particle in a certain region of space is equal to the absolute square of its wavefunction, namely:
\begin{align}
P=|\Psi|^2
\end{align}
It is well known in BQM that if Born's rule is initially satisfied, namely if the initial distribution of quantum particles $P_0$ is equal to $|\Psi_0|^2$, then it is satisfied for all times. However in BQM we can start with an arbitrary initial distribution with $P_0\neq |\Psi_0|^2$. Since Born's rule has never been contradicted by experiment, we study the mechanism responsible for its dynamical establishment.
Most of our previous work was numerical. We found cases where Born's rule was established and cases where it was not, and we concluded that the amount of entanglement and the nature of the trajectories in the initial distribution are responsible for its dynamical establishment, a conclusion that is true but not sufficient.
In the present work we study in detail with analytical formulae the mathematical background of our previous numerical results. Moreover we provide further simulations in order to separate clearly the cases of the accessibility (or not) of Born's rule by an arbitrary initial distribution. The core result of our analysis is that the ergodicity of chaotic trajectories implies that an arbitrary initial distribution will finally come close to Born's rule distribution if the ratio between its chaotic and ordered trajectories is close to that of the distribution satisfying BR.
In Section 2 we give the model of the two entangled qubits and in Section 3 we consider the time evolution of its probability density $|\Psi|^2$. In Section 4 we consider the nodal points of our model (where $\Psi=0$) and the evolution of the corresponding NPXPCs. We then study distributions of particles for various amounts of entanglement, when Born's rule is initially satisfied (Section 5), and the role of chaotic vs ordered trajectories in deriving (or not) Born's rule in the long run (Section 6). In Section 7 we find for what initial distributions of particles the final pattern is close to Born's rule. Finally, in Section 8 we draw our conclusions. In the Appendix we present an approximate algorithm for the distinction between the ordered and the chaotic trajectories of our model.
\section{THE MODEL}
The case of 2 qubits deals with a most general solution of Schr\"{o}dinger's equation corresponding to a classical system of two harmonic oscillators, namely to a Hamiltonian of the form $
H=\frac{1}{2}(p_x^2/m_x+p_y^2/m_y+m_x\omega_x^2x^2+m_y\omega_y^2y^2)$.
The solutions of the Schr\"{o}dinger equation are of the form
\begin{align}\label{Psigen}
\Psi=c_1Y_R(x,t)Y_L(y,t)+c_2Y_L(x,t)Y_R(y,t),
\end{align}
where
\begin{align}
\nonumber Y(x,t)=\Bigg(\frac{m_x\omega_x}{\pi\hbar}\Bigg)^{\frac{1}{4}}
\exp\Bigg[&-\frac{m_x\omega_x}{2\hbar}\Bigg(x-\sqrt{\frac{2\hbar}{m_x\omega_x}}
a_0\cos(\sigma_x-\omega_x t)\Bigg)^2\\&+i\Bigg(\sqrt{\frac{2m_x\omega_x}{\hbar}}
a_0\sin(\sigma_x-\omega_x t)x+\frac{1}{2}\Big[a_0^2
\sin(2(\omega_x t-\sigma_x))-\omega_x t\Big]\Bigg)\Bigg],
\end{align}
and the corresponding expression for $Y(y,t)$.
The entanglement depends on the values of $c_1$ and $c_2$ ($|c_1|^2+|c_2|^2=1$). In particular if $c_2=0$ we have a product state with no entanglement.
We work with $\hbar=m_x=m_y=1, \omega_x=1,\omega_y=\sqrt{3}$ and $a_0=5/2$. Moreover $\sigma_x=\sigma_y=0$ for $Y_R$, while $\sigma_x=\sigma_y=\pi$ for $Y_L$. The values of $\omega_x, \omega_y$ have an incommensurable ratio, while $a_0$ is sufficiently large in order to secure the qubit character of the solution.
Thus
\begin{align}
Y_R(x,t)\!=\!\Big(\frac{\omega_x}{\pi}\Big)^{\frac{1}{4}}\exp\!\Bigg[\!-\frac{\omega_x}{2}\left(x-\sqrt{\frac{2}{\omega_x}}a_0\cos(\omega_xt)\right)^2+i\! \left( -\sqrt {2\omega_x}a_{0}x\sin \left( \omega_{x
}t \right) +\frac{a_{0}^{2}\sin( 2\omega_{x}t) -\omega
_{x}t} {2}\right) \!
\Bigg]
\end{align}
while in $Y_L(x,t)$ the factor in the square inside the exponent is $[x+\sqrt{\frac{2}{\omega_x}}a_0\cos(\omega_x t)]$. The terms $Y_R(y,t)$ and $Y_L(y,t)$ are similar.
For any nonzero value of the entanglement the initial distribution $|\Psi_0|^2$ consists of two Gaussian blobs, one on the lower right quadrant and one on the upper left quadrant of the configuration space. We consider mainly cases where $c_2<c_1$, in which the first blob is larger (Figs.~\ref{t0}a,b for the cases $c_2=0.5$ and $c_2=0.2$). In the case $c_2=c_1=\sqrt{2}/2$ the two blobs are equal (Fig.~2 of our paper \cite{tzemos2020ergodicity}). If $c_2=0.5$ the maximum height of the secondary blob is about $1/3$ of that of the main blob, and if $c_2=0.2$ it is only $0.04$ of the main blob. In the latter case the secondary blob is barely seen in Fig.~\ref{t0}b. The value of the maximum of $|\Psi_0|^2$ as a function of $c_2$ is given in Fig.~\ref{perc2}. The volume of the main blob (which gives the proportion $p_2$ of the particles in this blob) is also given as a function of $c_2$ in Fig.~\ref{perc2}. The ratio $p_1/p_2$ is very close to the corresponding ratio between the maximum heights of the two blobs. If $c_2=0$ there is only one blob.
\begin{figure}[H]
\centering
\includegraphics[scale=0.17]{fig1a.png}
\includegraphics[scale=0.17]{fig2a.png}
\caption{The initial form of $|\Psi|^2$ in the case (a) $c_2=0.5$ and (b) $c_2=0.2$. }\label{t0}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.37]{fig2.png}
\caption{The maximum height of the main blob of $|\Psi_0|^2$ (red squares) and the percentage of the particles on the main blob $p_2$ (black dots) as functions of the entanglement parameter $c_2$.}\label{perc2}
\end{figure}
\section{TIME EVOLUTION OF THE PROBABILITY DENSITY $|\Psi|^2$}
The values of $|\Psi|^2$ in the product state ($c_2=0$) form a blob around a center given by
\begin{align}\label{xc1yc1}
x_c=\sqrt{\frac{2}{\omega_x}}a_0\cos(\omega_x t),\quad y_c=-\sqrt{\frac{2}{\omega_y}}a_0\cos(\omega_y t).
\end{align}
For these values of $x,y$ we have
\begin{align}
\Psi=Y_R(x_c,t)Y_L(y_c,t)=\frac {(\omega_{x}\omega_y)^{\frac{1}{4}}}{\sqrt {\pi}}{ \exp\left[-\frac{i}{2} \left( {a_{0}}^{2}\left(\sin \left(2 \omega_{x}t \right)+\sin \left( 2\omega_{y
}t \right)\right)+{(\omega_{x}+\omega_y)}t \right) \right]}\label{Psig}
\end{align}
and
\begin{align}\label{Psimax}
|\Psi|^2_{max}=\frac{\sqrt{\omega_x\omega_y}}{\pi}.
\end{align}
As a consequence the blob follows a Lissajous figure given by Eqs.~(\ref{xc1yc1}) with a constant amplitude, given by Eq.~(\ref{Psimax}). For $t=0$ the position of the center of the blob is at $x_c=\sqrt{\frac{2}{\omega_x}}a_0, y_c=-\sqrt{\frac{2}{\omega_y}}a_0$
and for the above values of $\omega_x, \omega_y$ and $a_0$ it is $(x_c=3.54, y_c=-2.69)$. A symmetric solution occurs in the product state with $c_1=0$. In that case there is a blob which is initially around the point $(x_c'=-3.54, y_c'=2.69)$ and forms a Lissajous figure
\begin{align}
x_c'=-\sqrt{\frac{2}{\omega_x}}a_0\cos(\omega_x t),\quad
y_c'=\sqrt{\frac{2}{\omega_y}}a_0\cos(\omega_y t),\label{xcycd}
\end{align}
symmetric with respect to the trajectory (\ref{xc1yc1}).
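The statements above can be checked numerically. The following sketch (an illustrative NumPy transcription of the expressions of Section 2, not code from the paper) evaluates the product-state wavefunction at the moving blob center (\ref{xc1yc1}) and confirms that the peak value of $|\Psi|^2$ stays constant at $\sqrt{\omega_x\omega_y}/\pi$:

```python
import numpy as np

wx, wy, a0 = 1.0, np.sqrt(3.0), 2.5   # parameters used in the text

def Y(x, t, w, sign):
    """Coherent state Y_R (sign=+1) or Y_L (sign=-1) for frequency w."""
    xc = sign * np.sqrt(2.0 / w) * a0 * np.cos(w * t)       # Gaussian center
    phase = (-sign * np.sqrt(2.0 * w) * a0 * x * np.sin(w * t)
             + 0.5 * (a0**2 * np.sin(2.0 * w * t) - w * t))
    return (w / np.pi)**0.25 * np.exp(-0.5 * w * (x - xc)**2 + 1j * phase)

def psi_product(x, y, t):
    """Product state (c1=1, c2=0): Psi = Y_R(x,t) * Y_L(y,t)."""
    return Y(x, t, wx, +1) * Y(y, t, wy, -1)

# |Psi|^2 evaluated at the moving center (x_c, y_c) of the blob
peaks = []
for t in [0.0, 1.3, 4.58, 7.7]:
    xc = np.sqrt(2.0 / wx) * a0 * np.cos(wx * t)
    yc = -np.sqrt(2.0 / wy) * a0 * np.cos(wy * t)
    peaks.append(abs(psi_product(xc, yc, t))**2)
```

At every sampled time the peak equals $\sqrt{\omega_x\omega_y}/\pi$, i.e.\ the blob top moves along the Lissajous curve with constant amplitude, as stated above.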
However, when $c_2\neq 0$ the situation is more complicated. If we set the solution (\ref{xc1yc1}) in Eq.~(\ref{Psigen}), the term $c_1Y_R(x,t)Y_L(y,t)$ is $c_1$ times the function $\Psi$ of Eq.~(\ref{Psig}), where the exponent is purely imaginary. But there is also the term $c_2Y_L(x,t)Y_R(y,t)$, which has a real exponential factor besides the imaginary part. This term is
\begin{align}
\nonumber c_2Y_L(x,t)Y_R(y,t)=c_2\frac{\sqrt{\omega_x\omega_y}}{\pi}\exp\left[-4a_0^2\left(\cos^2(\omega_x t)+\cos^2(\omega_y t)\right)\right]\\ \times
\exp\left[\frac{i}{2}\left({3}a_0^2\left(\sin(2\omega_x t)+\sin(2\omega_y t)\right)-\left(\omega_x+\omega_y \right)t\right)\right]\label{term}
\end{align}
With $\omega_x=1$, $\omega_y=\sqrt{3}$ and $a_0=5/2$ the real exponential factor is
\begin{align}
E=\exp\Big[-25\big(\cos^2 t+\cos^2(\sqrt{3}\,t)\big)\Big],
\end{align}
which is always larger than $\exp(-50)$.
In general this term is very small. Therefore the trajectory of the blob that is initially around $(x_c=3.54, y_c=-2.69)$ is very close to the trajectory of the blob of the product state $c_2=0$. The other blob is initially around the point $(x_c=-3.54, y_c=2.69)$ and forms a trajectory almost symmetric, with respect to the origin, to that of the first blob. (The two trajectories are exactly symmetric if $c_1=c_2$.)
Sometimes the quantity $\cos^2t+\cos^2(\sqrt{3}\,t)$ is very small and then the exponential factor becomes of order 1. This happens if $t$ is close to $k_1\pi/2$ and at the same time close to $k_2\pi/(2\sqrt{3})$ for odd integers $k_1$ and $k_2$. E.g.\ for $k_1=3$ ($t_1=4.71$) and $k_2=5$ ($t_2=4.59$), $E$ reaches a maximum $E=0.31$ between $t_1$ and $t_2$, at $t=4.58$. Similarly for $k_1=5, k_2=7$ we find a maximum $E=0.03$ at $t=8.1$. At the times $t=4.58$ and $t=8.1$ we have collisions of the two blobs because their distances from the origin are very small (see Fig.~\ref{nodal_nodal}). The distance of the top of one blob from the origin is:
\begin{align}
\Delta_o=\sqrt{\frac{2}{\omega_x\omega_y}}a_0\sqrt{\omega_y\cos^2(\omega_xt)+\omega_x\cos^2(\omega_yt)}
\end{align}
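As a quick consistency check (a sketch assuming NumPy, not part of the original text), $\Delta_o$ is simply the Euclidean distance of the blob center (\ref{xc1yc1}) from the origin:

```python
import numpy as np

wx, wy, a0 = 1.0, np.sqrt(3.0), 2.5   # parameters used in the text

def blob_center(t):
    """Center (x_c, y_c) of the lower-right blob, Eq. for the Lissajous curve."""
    return (np.sqrt(2.0 / wx) * a0 * np.cos(wx * t),
            -np.sqrt(2.0 / wy) * a0 * np.cos(wy * t))

def delta_o(t):
    """Closed form for the distance of the blob top from the origin."""
    return (np.sqrt(2.0 / (wx * wy)) * a0
            * np.sqrt(wy * np.cos(wx * t)**2 + wx * np.cos(wy * t)**2))

ts = np.linspace(0.0, 10.0, 1001)
err = max(abs(delta_o(t) - np.hypot(*blob_center(t))) for t in ts)
```

The two expressions agree to machine precision over the whole interval plotted in Fig.~\ref{nodal_nodal}.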
The collisions occur when the two blobs approach each other as they come close to the origin with their tops forming almost symmetric Lissajous curves. In fact the tops of the blobs appear at the values of x and y where
\begin{align}\label{partial}
\frac{\partial |\Psi|^2}{\partial x}=\frac{\partial |\Psi|^2}{\partial y}=0
\end{align}
If we set the values (\ref{xc1yc1}) in $|\Psi|^2$ we find that in
general the Eqs.~(\ref{partial}) are satisfied with very high accuracy, except for the times close to the collisions of the blobs. The collision times are approximately the same for any value of $c_2$, since the time interval where the absolute values of the derivatives of $|\Psi|^2$ are larger than a small value of order $10^{-4}$ is about $\Delta t\simeq 0.5$.
Between collisions the two blobs form slightly deformed Lissajous figures, therefore they stay longer at the four corners of these curves.
\section{NODAL POINTS}
The wavefunction vanishes at the nodal points where $\Psi_R=\Psi_I=0$.
In this model we have an infinity of nodal points given by
the formulae:
\begin{align}
\label{xnod}x_{nod}&=\frac{\sqrt{2}\left(k\pi\cos(\omega_y t)+\sin(\omega_y t)\ln\left|\frac{c_1}{c_2}\right|\right)}{4\sqrt{\omega_x}\,a_0\sin(\omega_{xy}t)}\\
\label{ynod}y_{nod}&=\frac{\sqrt{2}\left(k\pi\cos(\omega_x t)+\sin(\omega_x t)\ln\left|\frac{c_1}{c_2}\right|\right)}{4\sqrt{\omega_y}\,a_0\sin(\omega_{xy}t)}
\end{align}
with $k\in\mathbb{Z}$ ($k$ even for $c_1c_2<0$ and odd for $c_1c_2>0$) and $\omega_{xy}\equiv \omega_x-\omega_y$.
The differences between successive nodes $k-2$ and $k$ are:
\begin{align}
\Delta x={\frac {\sqrt {2}\pi\,\cos \left( \omega_{y}\,t \right) }{2\sqrt {
\omega_{x}}a_{0}\,\sin \left( \omega_{xy}t\right) }},\quad
\Delta y={\frac {\sqrt {2}\pi\,\cos \left( \omega_{x}\,t \right) }{2\sqrt {
\omega_{y}}a_{0}\,\sin \left( \omega_{xy}t\right) }}
\end{align}
Therefore the nodal points lie on a straight line with inclination
\begin{align}
\frac{\Delta y}{\Delta x}=\sqrt{\frac{\omega_x}{\omega_y}}\frac{\cos(\omega_x t)}{\cos(\omega_y t)}
\end{align}
At any time $t$ the distance of the node $k$ from the node $k-2$ is
\begin{align}
\Delta=\frac{\pi}{a_0|\sin(\omega_{xy}t)|}\sqrt{\frac{\omega_x\cos^2(\omega_xt)+\omega_y\cos^2(\omega_yt)}{2\omega_x\omega_y}}
\end{align}
It is of interest to note that this distance between successive nodes is the same for all $c_2\neq 0$ at the same time (Fig.~\ref{nodal_nodal}).
For $t=\frac{\Lambda\pi}{\omega_{xy}}$ with integer $\Lambda$ the nodes are at infinity. The distance $\Delta$ over an interval of $t$ is shown in Fig.~\ref{nodal_nodal}.
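The nodal formulae above can be verified numerically. The sketch below (assuming NumPy; the transcription of $Y_R, Y_L$ follows the expressions of Section 2 and is ours, not the paper's code) checks that $\Psi$ vanishes at the points (\ref{xnod}) for odd $k$ when $c_1c_2>0$, and that the distance between consecutive nodes matches the closed form $\Delta$:

```python
import numpy as np

wx, wy, a0 = 1.0, np.sqrt(3.0), 2.5
wxy = wx - wy
c2 = 0.5
c1 = np.sqrt(1.0 - c2**2)
L = np.log(abs(c1 / c2))

def Y(x, t, w, sign):
    """Coherent state Y_R (sign=+1) or Y_L (sign=-1) for frequency w."""
    xc = sign * np.sqrt(2.0 / w) * a0 * np.cos(w * t)
    phase = (-sign * np.sqrt(2.0 * w) * a0 * x * np.sin(w * t)
             + 0.5 * (a0**2 * np.sin(2.0 * w * t) - w * t))
    return (w / np.pi)**0.25 * np.exp(-0.5 * w * (x - xc)**2 + 1j * phase)

def psi(x, y, t):
    return (c1 * Y(x, t, wx, +1) * Y(y, t, wy, -1)
            + c2 * Y(x, t, wx, -1) * Y(y, t, wy, +1))

def node(k, t):
    """Nodal point for odd k (here c1*c2 > 0)."""
    s = 4.0 * a0 * np.sin(wxy * t)
    xn = np.sqrt(2.0) * (k*np.pi*np.cos(wy*t) + np.sin(wy*t)*L) / (np.sqrt(wx)*s)
    yn = np.sqrt(2.0) * (k*np.pi*np.cos(wx*t) + np.sin(wx*t)*L) / (np.sqrt(wy)*s)
    return xn, yn

t = 2.46
# Psi should vanish at each nodal point, relative to either term's magnitude
residuals = []
for k in (-3, -1, 1, 3):
    xn, yn = node(k, t)
    term = abs(c1 * Y(xn, t, wx, +1) * Y(yn, t, wy, -1))
    residuals.append(abs(psi(xn, yn, t)) / term)

# Distance between consecutive nodes (k=1 and k=3) vs the closed form Delta
(x1, y1), (x3, y3) = node(1, t), node(3, t)
gap = np.hypot(x3 - x1, y3 - y1)
delta = (np.pi / (a0 * abs(np.sin(wxy * t)))
         * np.sqrt((wx*np.cos(wx*t)**2 + wy*np.cos(wy*t)**2) / (2.0*wx*wy)))
```

The relative residuals of $\Psi$ at the nodes are at the level of roundoff, and the inter-node gap reproduces $\Delta$ exactly.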
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{fig3.png}
\caption{The distance $\Delta_{o}$ between the center of the lower left blob and the origin for $t\in[0,10]$ (blue dashed curve) and the distance $\Delta$ between two successive nodal points ($k=-1 $ and $k=1$) (red curve). This distance is the same between any two consecutive nodal points and for any $c_2\neq0$. }\label{nodal_nodal}
\end{figure}
The line of nodes is at a distance:
\begin{align}
d_{no}=\frac{\sqrt {2} {\ln \left( \left| {\frac {c_{1}}{c_{2}}}
\right| \right) } }{{4a_0}\sqrt {{ \left( \cos
\left( \omega_{x}\,t \right) \right) ^{2}\omega_{x}+\omega_{y}\,
\left( \cos \left( \omega_{y}\,t \right) \right) ^{2}}}}
\end{align}
from the origin. This distance depends on $t$ and $c_2$. In the particular case $c_1=c_2=\sqrt{2}/2$ this distance is zero, i.e.\ the line of nodes always passes through the origin, while for $c_1\neq c_2$ the distance is larger than $d_{min}=\frac{\sqrt{2}\ln\Big(\Big|\frac{c_1}{c_2}\Big|\Big)}{4a_0\sqrt{\omega_x+\omega_y}}\simeq0.086\ln\left(\left|\frac{c_1}{c_2}\right|\right)$. The line of nodes rotates clockwise and counterclockwise from time to time, thus covering most areas of the configuration space (see Fig.~1 of \cite{tzemos2020chaos}).
When the blobs of $|\Psi|^2$ are far from the line of nodes the value of $|\Psi|^2$ between the nodes is very small (less than $10^{-11}$).
\begin{figure}[H]
\centering
\includegraphics[scale=0.17]{fig4a.png}
\includegraphics[scale=0.17]{fig4b.png}\\
\includegraphics[scale=0.17]{fig4c.png}
\includegraphics[scale=0.17]{fig4d.png}
\caption{3d plots of $|\Psi|^2$. Left column: $c_2=0.5$ for $t=4.5$ and $t=1.05$. Right column: $c_2=0.2$ for $t=4.5$ and $t=1.05$.}\label{nodal_origin}
\end{figure}
When the blobs approach the line of nodes they split into a number of secondary blobs between the nodes that are close to the origin (Figs.~\ref{nodal_origin}a,b).
This was seen in the case $c_2=\sqrt{2}/2$ (Fig.~2 of \cite{tzemos2020ergodicity}), where the two blobs are equal and there are symmetric peaks on both sides of the origin at the peak of the collision. Here we show in Fig.~\ref{nodal_origin} the collisions of the blobs in the cases $c_2=0.5$ and $c_2=0.2$ where the two blobs are not equal in size.
At the collision (e.g.\ at $t=4.58$) the splittings are quite asymmetric (Figs.~\ref{nodal_origin}a,b). The positions of the nodes are then at their closest distance from each other.
However at some minima of the distances between the nodes we do not have collisions, e.g.\ at the minima $t=1, t=3.2, t=6.3$ etc.\ (Fig.~\ref{nodal_nodal}). In these cases the two blobs do not approach each other very closely and only their outer parts may overlap (Figs.~\ref{nodal_origin}c,d). Then only a few particles of the distribution that corresponds to the blobs are deflected (see Section 5).
The motion of the nodes dictates the motion of the NPXPCs, i.e. the characteristic structures of the Bohmian flow which are responsible for the generation of chaos \cite{efth2009}. Consequently, in order to monitor the scattering events undergone by the particles of a distribution, one needs, besides the nodes, to mark also the positions of the X-points. The X-points are stationary in the frame centered at a moving nodal point and deflect the approaching particles. An example of the lattice of the NPXPCs is shown in Fig.~\ref{xp}, where we see that the X-points are about halfway between the nodes and very close to the line of nodes.
\begin{figure}[H]
\centering
\includegraphics[scale=0.27]{fig5.png}
\caption{The flow (small blue arrows) around the central nodal point $k=-1$ for $c_2=\sqrt{2}/2$ and $t=2.46$. The black dashed line joining the nodal points shows the direction of the nodal lattice at the current time. The X-points (red dots joined by the red dashed line) are very close to the black line.}\label{xp}
\end{figure}
\section{DISTRIBUTIONS OF TRAJECTORIES WITH $P_0=|\Psi_0|^2$}
In our previous papers \cite{tzemos2020chaos,tzemos2020ergodicity} we considered the trajectories in the cases $c_2=\sqrt{2}/2\simeq 0.707$ (maximally entangled state), $c_2=0.5$ (strongly entangled state), $c_2=0.2$ (weakly entangled state) and $c_2=0$ (product state) and checked whether a distribution reaches Born's rule in the long run, by comparing the final pattern of the points of the trajectories of the initial distribution with that of Born's rule. These patterns are formed by collecting all the points of the trajectories inside the cells of a $360\times 360$ grid covering the space $[x,y]\in[-9,9]$ at times $t=n\Delta t$ $(n=0, 1, 2, \dots)$ up to a sufficiently large time $t_f$, with a step $\Delta t=0.05$, and plotting them by use of a spectral color plot.\footnote{We have checked that the patterns do not change if we take smaller values of $\Delta t$.} An example of such a pattern, with $t_f=5000$, is shown in Fig.~\ref{0707_md_satisfaction}. We also found that if $P_0=|\Psi_0|^2$, then the long term distributions of the points of the trajectories form very similar patterns, like that of Fig.~\ref{0707_md_satisfaction}.
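The construction of these patterns can be sketched as follows (assuming NumPy; the grid parameters are those quoted above, while the random sample points are merely a stand-in for actual trajectory data):

```python
import numpy as np

def pattern(points, n=360, lim=9.0):
    """Accumulate trajectory points on an n x n grid over [-lim, lim]^2
    and normalize, mimicking the color-plot construction in the text."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=n, range=[[-lim, lim], [-lim, lim]])
    return H / H.sum()

rng = np.random.default_rng(0)
pts = rng.normal(size=(10000, 2))   # placeholder for collected (x, y) points
P = pattern(pts)
```

Two such normalized matrices can then be compared cell by cell with a matrix norm, as done below.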
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{fig6.png}
\caption{Multiparticle distribution of 2400 particles in the case $c_2=\sqrt{2}/2$ when Born's rule is initially satisfied for times up to $t=5000$. }\label{0707_md_satisfaction}
\end{figure}
As $t$ increases the patterns for any given $c_2$ tend to a final form. The evolution of the distributions over the course of time and the differences between them can be studied using a matrix norm. In the present paper we work with the Frobenius norm $D$.\footnote{The Frobenius norm gives the distance $D$ between two matrices $A$ and $B$ according to the formula:
\begin{align}
D\equiv||A-B||=\sqrt{tr(A-B)^{\dagger}(A-B)}
\end{align} The details of an application of this norm in our particular problem are given in \cite{tzemos2020ergodicity}.}
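A direct transcription of this norm (a sketch assuming NumPy; it is equivalent to `numpy.linalg.norm` with its default Frobenius behavior for matrices) is:

```python
import numpy as np

def frobenius(A, B):
    """D = ||A - B||_F = sqrt(tr((A-B)^dagger (A-B)))."""
    C = A - B
    return np.sqrt(np.trace(C.conj().T @ C).real)

# Illustrative comparison of two random 360x360 "patterns"
rng = np.random.default_rng(1)
A = rng.random((360, 360))
B = rng.random((360, 360))
D = frobenius(A, B)
```

For identical patterns $D=0$, and $D$ decreases as two evolving patterns converge to the same final form.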
In Fig.~\ref{self_distance_ikanopoiisi} we calculate $D$ between the patterns at $t=0, 100, 200,\dots 5000$ for two initial distributions of 2400 particles which satisfy BR. We see that $D$ is always smaller than $0.01$ and tends to zero as $t$ increases. In fact beyond $t=2000$ it is smaller than $D=0.0003$. In all the distributions of particles considered below we find that a final pattern is reached after a time $t=5000$, while in the case of individual trajectories a final pattern is reached after much larger times (of order $10^6$).
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{fig7.png}
\caption{Successive Frobenius norms for $c_2=0.2$ (blue) and $c_2=0.5$ (orange) when the initial distribution satisfies Born's rule. The orange dots cover most of blue dots.}\label{self_distance_ikanopoiisi}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{fig8.png}
\caption{The Frobenius norm $D_F$ between an initial distribution that satisfies Born's rule and the distribution of $c_2=\sqrt{2}/2$ that satisfies initially Born's rule, as function of the entanglement.}\label{0707alles}
\end{figure}
Even though the two blobs $|\Psi|^2$ vary with $c_2$, following the changes of $|\Psi|^2$ discussed in Section 2, the final patterns of the points of the trajectories for various $c_2$ are very similar. In Fig.~\ref{0707alles} we compare the final patterns for various values of $c_2$ with that of the maximum entanglement $c_2=\sqrt{2}/2$ and find a final Frobenius norm $D_F=0.00262$ for $c_2=0.5$, $D_F=0.00825$ for $c_2=0.2$ and $D_F=0.00857$ for $c_2=0$. The values of $D_F$ increase as $c_2$ decreases, and their small values (smaller than $0.01$) account for the similarity between the color plots for various $c_2$ and that of Fig.~\ref{0707_md_satisfaction}.
During the collisions several trajectories are deflected by approaching
one of the NPXPCs and they may go from one blob to the other. Nevertheless
the blobs are formed again after every collision and they continue to satisfy
Born's rule $P=|\Psi|^2$. This was shown in Fig.~7 of our paper \cite{tzemos2020chaos} in the
case of maximum entanglement. The same happens for other values of the entanglement. E.g.\ in Figs.~\ref{distsbl}a-d we give the distributions of the points of the trajectories in the case of small entanglement $c_2=0.2$: initially (Fig.~\ref{distsbl}a), at the first approach (Fig.~\ref{distsbl}b), at the first collision (Fig.~\ref{distsbl}c) and a little after this collision (Fig.~\ref{distsbl}d). If the approach of the two blobs is not very close (Fig.~\ref{distsbl}b) only a few particles move from one blob to the other. If, however, we have a direct collision (Fig.~\ref{distsbl}c), many particles move to a different blob. Nevertheless, after the collisions the same blobs are formed again, although they now contain particles of different colors (Fig.~\ref{distsbl}d). Then the points of the total set of trajectories form essentially the same overall picture as in Fig.~\ref{0707_md_satisfaction}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{fig9a.png}
\includegraphics[scale=0.25]{fig9b.png}
\includegraphics[scale=0.25]{fig9c.png}
\includegraphics[scale=0.25]{fig9d.png}
\caption{Exchange of particles as two blobs approach each other in the case $c_2=0.2$ when Born's rule is initially satisfied with a total number of particles equal to 2400. (a) Initial conditions at $t=0$, (b) approach at $t=1.05$, (c) collision at $t=4.6$, (d) after the collision, at $t=6$, the blobs are formed again.}\label{distsbl}
\end{figure}
\section{CHAOTIC VS ORDERED TRAJECTORIES}
In the case of zero entanglement, we have only one Gaussian blob (in the lower right part of the configuration space) and all the trajectories form Lissajous figures (they are ordered) and each of them gives a different final pattern of points. Thus in this case the BR is satisfied only by an appropriate distribution of such figures. With a slight increase of the entanglement from zero, nodal points appear and generate chaotic trajectories in a large part of the configuration space. The ordered trajectories are then confined near the center of the main blob of $|\Psi|^2$. In fact if $0<c_2<\sqrt{2}/2$ the main blob contains both ordered and chaotic trajectories, while the secondary blob contains only chaotic trajectories. In the limit of maximum entanglement ($c_2=\sqrt{2}/2$) the region of ordered trajectories disappears and the Born rule is always established, because all the trajectories are chaotic and ergodic.
Consequently it is of great interest to understand when BR is accessible in the case of the partially entangled states.
\begin{figure}[H]
\centering
\includegraphics[scale=0.23]{fig10a.png}
\includegraphics[scale=0.23]{fig10b.png}
\caption{Two single chaotic-ergodic trajectories in the case $c_2=0.2$ for $t$ up to $2\times 10^6$. (a) $ x_0=-2.52027, y_0=2.17529$ and (b) $x_0=2, y_0=-2$.}\label{ekatommyria}
\end{figure}
We note again that the patterns of the points of individual chaotic trajectories for every $c_2\neq 0$ are the same, and it does not matter whether a chaotic trajectory starts inside the main blob or not. E.g.\ in Fig.~\ref{ekatommyria} we see the patterns of the points of two chaotic trajectories, one starting in the upper left region and one in the lower right region (inside the main blob of $|\Psi|^2$). These patterns require a long time to be established, but the patterns found after a time $t=2\times 10^6$ are quite similar.
We have found that the Frobenius norms between the patterns of different chaotic trajectories for the same $c_2$ are practically zero (smaller than $10^{-16}$). Therefore these trajectories are ergodic.
However the patterns of the points of the chaotic trajectories of different $c_2$ are different from the patterns that follow Born's rule, due to the existence of ordered trajectories. Their difference increases as the value of $c_2$ decreases, as seen in Fig.~\ref{3chaotic}. We see that the final Frobenius norm is about $D_F=0.01$ or smaller if $0.5<c_2<0.707$ and tends to zero as $c_2$ tends to $\sqrt{2}/{2}$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{fig11.png}
\caption{The final Frobenius norm $D_F$ of the deviations of the patterns of the points of individual chaotic trajectories from the corresponding Born patterns as a function of $c_2$. }\label{3chaotic}
\end{figure}
In particular in the case $c_2=0.2$ we see that the pattern of Fig.~\ref{ekatommyria} forms 4 red spots at $x=\pm 2.4, y=\pm 1.8$, while the red spots in the case of Born's rule (close to the case of Fig.~\ref{0707_md_satisfaction}) are at $x=3.0, y=\pm 2.2$. As $c_2$ becomes smaller than $0.1$ the deviations become even larger (see, e.g., the case $c_2=0.001$ in \cite{tzemos2020ergodicity}).
These differences stem from the fact that for $c_2<\sqrt{2}/2$ there is a number of ordered trajectories in the lower right blob of Born's rule and this proportion increases as $c_2$ decreases. The ordered trajectories are deformed Lissajous curves and it is only their collective pattern, together with the collective pattern of the appropriate proportion of chaotic trajectories, that generates the Born rule after a long time.
The proportion of the chaotic trajectories, $b$, in the lower right blob of the initial Born distribution for various values of $c_2$ is given in Fig.~\ref{produp}. The distinction between ordered and chaotic trajectories was made by an approximate algorithm that is described in the Appendix. The proportion of ordered trajectories is equal to zero for $c_2=\sqrt{2}/2$ and it is equal to $1$ (i.e. $100\%$) if $c_2=0$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{fig12.png}
\caption{The percentage $b$ of the chaotic trajectories on the main blob (lower right) of $|\Psi_0|^2$ as a function of the entanglement in the case of Born's distribution, according to our approximative algorithm described in the Appendix. We observe the two extreme cases $c_2=0$ where all trajectories are ordered and $c_2=\sqrt{2}/2$ where all trajectories are chaotic. }\label{produp}
\end{figure}
If we now take a set of particles consisting of a proportion $p_1$ on the upper left blob and $p_2=1-p_1$ on the lower right blob, the total proportion of chaotic trajectories is
\begin{align}\label{pch}
P_{ch}=p_1+bp_2
\end{align}
while the proportion of ordered trajectories is
\begin{align}\label{por}
P_{or}=(1-b)p_2
\end{align}
(with $P_{ch}+P_{or}=1$).
Thus we find that the ratio between chaotic and ordered trajectories is:
\begin{align}\label{logos}
\frac{P_{ch}}{P_{or}}=\frac{p_1/p_2+b}{1-b}
\end{align}
For every value of $c_2$ the ratio $p_1/p_2$ and the proportion $b$ are fixed, thus the ratio $P_{ch}/P_{or}$ is also fixed. E.g.\ for $c_2=0.2$ we have $p_1/p_2=0.04$ and $b=0.14$, therefore $P_{ch}/P_{or}\simeq 0.21$. Similarly in the case $c_2=0.5$ we find $P_{ch}/P_{or}\simeq 7.9$.
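Eq.~(\ref{logos}) is easy to evaluate for the quoted cases (a sketch in plain Python; the values $p_1/p_2\simeq 1/3$ and $b\simeq 0.85$ for $c_2=0.5$ are approximate readings from the text and figures, not exact inputs):

```python
def chaotic_to_ordered(p1_over_p2, b):
    """Ratio P_ch/P_or = (p1/p2 + b) / (1 - b) of Eq. (logos)."""
    return (p1_over_p2 + b) / (1.0 - b)

r_weak = chaotic_to_ordered(0.04, 0.14)     # c2 = 0.2: quoted values
r_strong = chaotic_to_ordered(1.0/3.0, 0.85)  # c2 = 0.5: approximate values
```

Both evaluations reproduce the ratios $\simeq 0.21$ and $\simeq 7.9$ quoted above.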
From Fig.~\ref{produp} we conclude that the proportion of ordered trajectories is small for relatively large entanglement (i.e. $c_2>0.5$). In these cases the final Frobenius norm $D_F$ is small (Fig.~\ref{3chaotic}). E.g. in the case $c_2=0.5$ the proportion of ordered trajectories is about $1-b=0.15$ and the $D_F$ is about $0.01$. Then the corresponding pattern of the points of the trajectories is quite close to that of BR.
On the other hand for weak entanglement (i.e.\ $c_2<0.3$) the proportion of ordered trajectories is relatively large. E.g.\ for $c_2=0.2$ the proportion of ordered trajectories is about $1-b=0.86$. Then $D_F\simeq0.05$ if the initial distribution lies 100\% around the upper left blob, and the pattern of the points of the trajectories differs significantly from that of BR, as seen in Fig.~\ref{02paraviasi}. In fact the pattern of Fig.~\ref{02paraviasi} is practically identical with the final pattern of the points of the individual trajectories of Fig.~\ref{ekatommyria}. Of course if we take a larger proportion of the initial conditions around the lower right blob the difference from Born's rule becomes smaller, as seen in Fig.~\ref{02_distance_from_borns_rule}, and it becomes zero when we take about 96\% of the particles in the lower right blob (the proportion of Born's rule itself).
If $c_2$ is even smaller (smaller than $c_2=0.1$) the deviations from BR are larger, and when $c_2=0$ they become maximum, unless of course we populate the lower right blob with the great majority (the totality if $c_2=0$) of initial conditions, as required by BR.
\begin{figure}[H]
\centering
\includegraphics[scale=0.255]{fig13.png}
\caption{The distribution of points of a particle distribution that initially violates Born's rule (2304 particles on the upper left blob and 96 on the lower right blob, i.e. inverse proportions from those of Born's rule) for $c_2=0.2$ and $t=5000$.}\label{02paraviasi}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{fig14.png}
\caption{The Frobenius norm $D$ of the patterns of the points of the trajectories for $c_2=0.2$ between the Born distribution of particles at $t=5000$ and five initial distributions which violate Born's rule. The blue curve corresponds to 2304 particles on the upper left blob ($p_1=0.96$) and 96 on the lower right blob ($p_2=0.04$), while the orange and green curves correspond to 1200-1200 particles ($p_1=p_2=0.5$) and 800-1600 particles ($p_1=1/3, p_2=2/3$) in the two blobs. Finally the red and purple curves correspond to 500-1900 particles ($p_1=0.21, p_2=0.79$) and 200-2200 particles ($p_1=0.08, p_2=0.92$) respectively.}\label{02_distance_from_borns_rule}
\end{figure}
However if we take the initial set of ordered trajectories, as we found in applying BR for a given $c_2$, we can take the remaining set of chaotic trajectories anywhere and then we always recover the Born distribution in the long term. We have checked that in a number of cases by taking all the chaotic trajectories around the upper left blob or elsewhere. Three examples are given in Fig.~\ref{analogy}, where we compare an initial distribution satisfying BR (Fig.~\ref{analogy}a) with distributions violating initially BR but with the same ratio between chaotic and ordered trajectories (Figs~\ref{analogy}b,c). In particular in Fig.~\ref{analogy}c we have taken an initial violation of BR, where the main blob has only ordered trajectories and all the chaotic trajectories are taken in another blob around the point $(3.54, -1.69)$, but with the same ratio $P_{ch}/P_{or}$. We observe their close similarity. Consequently it is the ratio between the ordered and chaotic-ergodic trajectories which makes BR accessible (or not).
\begin{figure}[H]
\centering
\includegraphics[scale=0.15]{fig15a.png}
\includegraphics[scale=0.21]{fig15b.png}
\includegraphics[scale=0.21]{fig15c}
\caption{a) A realization of Born's rule in the case $c_2=0.2$ with 1000 particles on the main blob and 40 on the upper left blob for $t=5000$. b) An initial violation of Born's rule with 1000 particles on the main blob and 40 on a blob around the point $(-3.54, -1.69)$. c) An initial violation with 850 ordered trajectories on the main blob and 190 chaotic trajectories on a blob around the point $(3.54,-1)$. We observe the similarity of the three figures.}\label{analogy}
\end{figure}
\section{DISTRIBUTIONS OF TRAJECTORIES WITH $P_0\neq|\Psi_0|^2$}
If we take initial conditions of particles different from those of BR we may (or may not) approach BR after a long time. As we have seen in the previous section, Born's rule is reached for any initial distribution of particles in the case of maximum entanglement ($c_2=\sqrt{2}/2$). For smaller values of $c_2$ Born's rule is reached for any distribution of chaotic trajectories, provided that the proportion of ordered trajectories is the same as that of Born's rule. If, however, the proportion of ordered trajectories is smaller (or larger) than that provided by BR, we never recover Born's rule in the long run. E.g. this happens if we take particles with initial conditions in the upper left and in the lower right blob with a ratio $p_1/p_2$ different from that of BR. Then we also have a deviation of the ratio $P_{ch}/P_{or}$ (according to Eq.~(\ref{logos})), and therefore we cannot reach BR after a long time.
However, if the proportion of ordered trajectories is close to that required by Born's rule, then the deviation of the pattern of the trajectories from that of Born's rule is small.
In order to find quantitatively the deviations from Born's rule we have considered two examples. In the first example we give the final Frobenius norm $D_F$ (the deviation from Born's rule) for various values of $c_2$ when the initial distribution of particles is 100\% in the upper left blob (Fig.~\ref{allup}). We see that when $c_2=0$ this norm is relatively large ($D_F\simeq 0.165$), but when $c_2$ increases this norm decreases considerably, and for $c_2\geq 0.5$ it is smaller than $D_F=0.01$. This means that for relatively large $c_2$ the final pattern is very close to that of BR.
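A minimal sketch of how such a Frobenius distance between two patterns of trajectory points can be computed is given below; the grid size, extent and normalization are our own assumptions, not necessarily the exact choices made in the paper:

```python
import numpy as np

def frobenius_distance(pts_a, pts_b, bins=50, extent=(-6.0, 6.0)):
    """Frobenius norm of the difference of two normalized 2D
    occupation histograms of trajectory points (grid and
    normalization are illustrative assumptions)."""
    rang = [extent, extent]
    Ha, _, _ = np.histogram2d(pts_a[:, 0], pts_a[:, 1], bins=bins, range=rang)
    Hb, _, _ = np.histogram2d(pts_b[:, 0], pts_b[:, 1], bins=bins, range=rang)
    Ha /= Ha.sum()
    Hb /= Hb.sum()
    return np.linalg.norm(Ha - Hb)  # sqrt of the sum of squared differences

rng = np.random.default_rng(0)
cloud = rng.normal(size=(4000, 2))
shifted = cloud + np.array([2.0, 0.0])
print(frobenius_distance(cloud, shifted))   # > 0: the two patterns differ
```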
\begin{figure}[H]
\centering
\includegraphics[scale=0.25]{fig16.png}\\
\caption{Successive Frobenius norms comparing the evolution of initial distributions lying $100\%$ in the upper left blob and the Born distribution at $t=5000$ for $c_2=0, 0.001, 0.01, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.707$.}\label{allup}
\end{figure}
In the second example we calculated the final Frobenius norm $D_F$ of the deviations of the final pattern of the points of the trajectories for various proportions $p_1/p_2$ of initial conditions in the upper left and in the lower right blob in the cases $c_2=0.2$ and $c_2=0.5$ (Fig.~\ref{Nomos}). We see that as $p_1/p_2$ decreases the values of $D_F$ become smaller. When the ratio $p_1/p_2$ tends to the value appropriate for Born's rule the value of $D_F$ tends to zero. However for smaller $p_1/p_2$ the values of $D_F$ become positive again. In the case $c_2=0.5$ we have $D_F\leq 0.005$ for $p_1/p_2<2$, therefore we find again that for large $c_2$ ($c_2\geq 0.5$) the final pattern is very close to BR. On the other hand for $c_2=0.2$ we have $D_F>0.025$ for $p_1/p_2>1$, therefore the final deviation from BR is larger, and only for $p_1/p_2$ smaller than $0.2$ do we have $D_F<0.01$, i.e. we come close to Born's rule. Therefore for $c_2=0.2$ or less BR is not satisfied in general.
\begin{figure}[H]
\centering
\includegraphics[scale=0.4]{fig17.png}
\caption{The final Frobenius norm $D_F$ as a function of the proportion $p_1/p_2$ of the initial particles in the upper left blob and in the lower right blob for $c_2=0.2$ (red dots) and $c_2=0.5$ (blue dots).}\label{Nomos}
\end{figure}
\section{Conclusions}
In the present paper we studied the role of chaotic and ordered trajectories in establishing Born's rule, in a paradigmatic entangled 2-qubit system.
We calculated many trajectories for various values of $c_2$ and found the patterns of their points over the course of time. We established the following:
\begin{enumerate}
\item The form of $|\Psi|^2$ generates two blobs for various values of the entanglement, one to the lower right of the origin (main blob) and the other to the upper left of the origin. The two blobs approach each other from time to time and undergo several collisions, where we have the formation of secondary blobs. The collisions occur at practically the same times for all the values of the entanglement. After the collisions the two blobs are formed again.
\item If the initial distribution $P_0$ satisfies Born's rule $P_0=|\Psi_0|^2$ then it is known that this distribution follows the evolution of $|\Psi|^2$ for all times. During the collisions the two blobs exchange particles and later on the blobs consist of a mixture of particles from the initial blobs.
\item The exchanges of particles occur when particles approach the nodal points, where $\Psi=0$, and the nearby X-points. There is an infinite number of nodal points along a straight line, where the distances between the nearby nodal points are the same for any value of $c_2$, but they change in time. These distances are minimal during the collisions.
\item The differences between successive patterns in time, giving the distribution of the points of the trajectories, decrease as time increases and tend to zero, giving a final norm $D_F$ for every value of the entanglement.
\item The difference between the final Born pattern and the final pattern of the maximum entanglement case is small for any amount of the entanglement (the final Frobenius norm $D_F$ is less than $0.01$) and decreases as the entanglement increases.
\item The main blob of Born's rule consists of chaotic and ordered trajectories. Ordered trajectories appear near the center of the main blob. The proportion of ordered trajectories increases as the entanglement decreases. When the entanglement is maximum all the trajectories are chaotic and when the entanglement is zero all the trajectories are ordered. The initial secondary blob (upper left) consists practically of only chaotic trajectories.
\item For any given value of entanglement the points of the individual chaotic trajectories form the same pattern. The differences between the patterns of various chaotic trajectories are insignificant (the Frobenius norm of their differences is smaller than $10^{-16}$). Consequently the chaotic trajectories are always ergodic.
\item If we take the proportion of ordered trajectories for a given amount of entanglement, according to Born's rule, then for any initial distribution of the chaotic trajectories the final pattern of the points of the trajectories tends to that of BR.
\item However, if the ratio between chaotic and ordered trajectories is different from that of BR, then the final pattern of the points of the trajectories is also different from that of BR and the difference increases as the entanglement decreases. The difference is small for strongly entangled states and it is large for weakly entangled states. The difference is also small for any value of entanglement if the initial proportions $p_1$ and $p_2$ of particles in the upper left blob and in the lower right blob have a ratio $p_1/p_2$ close to the ratio of the Born rule.
\end{enumerate}
\section{Appendix}
Our numerical results show clearly the key role of the ratio between the chaotic and ordered trajectories for the approach of an arbitrary initial distribution to that of Born's rule. Consequently it is of fundamental importance to separate the ordered from the chaotic trajectories of an initial distribution. The standard way of doing this is to calculate the Lyapunov characteristic number (LCN)
\begin{equation}
LCN=\lim_{t\to\infty}\chi,
\end{equation}
where $\chi$ is the `finite time LCN'
\begin{equation}\label{chi}
\chi=\ln(\xi/\xi_0)/t,
\end{equation} ($\xi_0, \xi$ are infinitesimal deviations $\xi(t)=\sqrt{\delta x^2+\delta y^2}$ at
times $t_0=0$ and $t$). If LCN is a positive number then the trajectory is chaotic, and if LCN is zero then the trajectory is ordered. This method was followed in our previous works in this series of studies. However the calculation of LCN is a computationally demanding problem, and it would require a huge amount of work in the present case, where we focus on multiparticle distributions rather than single Bohmian trajectories.
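For reference, a minimal two-trajectory estimator of the finite-time $\chi$ of Eq.~(\ref{chi}) can be sketched as follows; the Euler integrator, the step sizes and the toy test flow are illustrative choices only (the actual computations would use Bohmian trajectories):

```python
import numpy as np

def finite_time_lcn(flow, x0, t_max, dt=1e-3, d0=1e-8):
    """chi(t) = ln(xi/xi_0)/t for a planar flow dx/dt = flow(x),
    integrated with simple Euler steps; the deviation vector is
    renormalized every step to avoid overflow (standard
    two-trajectory estimator)."""
    x = np.array(x0, float)
    y = np.array(x0, float) + [d0, 0.0]
    log_stretch, t = 0.0, 0.0
    while t < t_max:
        x += dt * flow(x)
        y += dt * flow(y)
        t += dt
        xi = np.linalg.norm(y - x)
        log_stretch += np.log(xi / d0)
        y = x + (y - x) * (d0 / xi)   # renormalize the deviation
    return log_stretch / t

saddle = lambda p: np.array([p[0], -p[1]])   # linear saddle: LCN ~ 1
print(finite_time_lcn(saddle, (1.0, 1.0), t_max=5.0))
```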
In this work we avoided the calculation of thousands of LCNs
by exploiting the shape of the ordered trajectories of this model. As we have already seen the ordered trajectories are perfect or distorted Lissajous curves. Moreover in our previous work we showed that every Lissajous curve starts at its lower right corner and consequently its motion points initially to smaller $x$ and larger $y$. Finally the size of the perfect Lissajous curves (in the case $c_2=0$) is easily found and it is equal to
\begin{align}
|\Delta x_{max}|=\frac{2a_0\sqrt{2}}{\sqrt{\omega_x}},\quad |\Delta y_{max}|=\frac{2a_0\sqrt{2}}{\sqrt{\omega_y}}
\end{align}
Consequently, if we calculate the trajectories of a distribution of $N$ particles for a sufficiently long time, we can count how many of them have significantly exceeded the area of the Lissajous curve, extended by a sufficient margin in $x$ and $y$ beyond the values at $t=0$ in order to cover the case of distorted Lissajous curves. These trajectories are characterized as chaotic.
This method is of course just an approximation but in the limit of large $t$ and $N$ it gives reliable results. The results of this method for the lower right blob of the $|\Psi|^2$ with $N=2400$ particles and $t=10^3$ are shown in Fig.~\ref{produp}.
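A schematic version of this bounding-box criterion is sketched below; the margin allowed for distorted Lissajous curves and the synthetic test trajectories are our own illustrative choices:

```python
import numpy as np

def classify_trajectories(trajs, x_extent, y_extent, margin=0.25):
    """Flag a trajectory as chaotic once its peak-to-peak excursion
    exceeds the Lissajous box plus a margin for distorted curves
    (box sizes and margin are assumptions, not the paper's values)."""
    labels = []
    for tr in trajs:                      # tr: (n_steps, 2) array
        dx = np.ptp(tr[:, 0])             # peak-to-peak excursion in x
        dy = np.ptp(tr[:, 1])
        chaotic = dx > (1 + margin) * x_extent or dy > (1 + margin) * y_extent
        labels.append('chaotic' if chaotic else 'ordered')
    return labels

t = np.linspace(0.0, 50.0, 2000)
lissajous = np.column_stack([np.sin(t), np.sin(np.sqrt(2.0) * t)])  # bounded
drifting = np.column_stack([0.2 * t, 0.1 * t])                      # escapes
print(classify_trajectories([lissajous, drifting], 2.0, 2.0))
# ['ordered', 'chaotic']
```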
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction}
\citet{Mill64} pointed out that all gravitational $N$-body systems are
chaotic, in the sense that the trajectories of all particles in two
systems that differ initially by a small shift in the starting
position or velocity of even a single particle will diverge
exponentially over time. Thus, two simulations started from the same
initial conditions will follow identical evolutionary paths only if
the arithmetic operations are performed with the same precision and in
the same order, so that round off error is identical. These
statements are true for every code, irrespective of the algorithm used
for the computations, and no matter how many particles are employed.
In particular, a simulation can never be reproduced exactly when run
with a different code.
Microscopic chaos is unimportant for many applications because the
different evolutionary paths of almost identical simulations lead to
similar macroscopic properties such as mass profiles, overall shape,
{\it etc.}, which therefore constitute firm results. \citet[][hereafter
BT08, p.~344]{BT08} make this argument and cite a test by
\citet{Fren99} which indeed shows that many different codes yield
similar key properties after following the collapse of a dark matter
halo. In fact, results generally converge in tests that vary the
numerical grid, softening, and/or number of particles
\citep[{\it e.g.}][]{Powe03,DMS4}, which they would not do if there were a
large element of stochasticity. \citet{Sell08} also demonstrated
exquisitely reproducible evolution of halo models that were perturbed
by externally imposed bars, in sharp contrast to the results presented
here.
Simulations with active discs of particles, on the other hand, are not
so well behaved. \citet{SD06} reported some minor differences, and
one major one, in a set of experiments using different numerical
parameters but the same file of initial coordinates. We show here
that simulations with discs can, at least for certain models, exhibit
bi-modally divergent macroscopic results, even between cases that
differ only at the round-off error level. The reason for this
qualitative difference for discs is that collective instabilities
and vigorous responses develop from particle noise. Here we identify
a number of distinct causes of stochastic behaviour in discs, and
demonstrate explicitly how the evolution is affected.
We show that the principal sources of divergent behaviour are: (a)
multiple in-plane global modes, (b) swing amplified noise, (c) bending
instabilities, (d) suppression of dynamical friction, and (e) the
truly chaotic nature of $N$-body systems. We also show that the
distribution of evolutionary paths taken in simulations of
different realizations of the same model varies systematically with
the care taken to set up the initial coordinates of halo particles.
We deliberately choose to illustrate just how large the differences
can be for one particular unstable equilibrium model. Stochasticity
is present in all simulations and its effects are always noticeable in
those containing discs, but generally variations in the evolution show
less scatter than in the case studied here. We show that the range of
behaviour is similar in two quite distinct $N$-body codes and
illustrate the sensitivity to differences at the round-off error
level. We also show that increasing the number of particles does not
reduce the spread of measured properties.
Real galaxies are assembled and evolve in a complicated manner, and
certainly do not pass through a well-constructed axisymmetric,
equilibrium phase that is unstable, although such a model is commonly
used as a starting point of simulations. The objectives of
experiments of this type are therefore (1) to determine whether
plausible axisymmetric galaxy models are globally stable and (2) to
develop an understanding of the dynamical evolution of models that
form bars and other non-axisymmetric structures. While we adopt
a model of this type in this paper, its remarkable behaviour has
implications for all simulations of disc-halo models, regardless
of how they were created.
The main part of the paper demonstrates the role of the five
above-named sources of stochasticity in the evolution of disc models.
We also explicitly show the effects of different particle selection
techniques on the robustness of the behaviour. Stochastic divergence
has been reported elsewhere, but not recognized as an intrinsic aspect
of these models; {\it e.g.}, \citet{KVCQ} attributed divergent evolution to
inadequate numerical care, whereas stochasticity could be the cause.
Appendix B reports extensive tests that confirm that the results we
report here do not depend on any numerical parameters.
\section{Selection of particles}
\label{selection}
The selection of initial particle positions and velocities of an
equilibrium model requires careful attention. Random selection of
even many millions of particles will lead to shot noise variations in
both the density and velocity distributions of a model. Here we
summarize the available techniques to select initial coordinates of
particles, with a focus on disc-halo models. These methods generally
yield a set of particles that are not specific to any particular
$N$-body code.
\subsection{Selecting from a {\sc DF}}
\label{determ}
Jeans theorem requires that an equilibrium model should have a
distribution function ({\sc DF}) that is a function of the isolating
integrals (BT08, p.~283). Thus the best way to realize an equilibrium
set of particles for an initial model is to select from a {\sc DF}, when
one is available.
While random selection of particles may be common practice, it
immediately discards a large part of this potential advantage. One
widely used technique \citep[{\it e.g.}][]{HBWK,WK07,ZM08,DBS9} is to accept
or reject candidate particles based on a comparison of a random
variable with the value of the {\sc DF}\ at the phase-space position of
each particle, which introduces shot noise in the density of particles
in integral space. The evolution of the simulation will be that of
the selected {\sc DF}, not the intended one, and different random
realizations lead to significant variations in the measured
frequencies of the instabilities in the linear regime \citep{Sell83}
and substantial differences in the non-linear regime. It is therefore
best to adopt a deterministic procedure for particle selection from a
{\sc DF}.
A scheme to select particles smoothly in this way, first used in
\citet{Sell83} and described more fully in \citet{SA86}, is summarized
in the Appendix of \citet{DS00}. We divide integral, generally
$(E,L)$, space into $n$ areas in such a way that $\int\int F dEdL$
over each small area is exactly $1/n$th of the integral over the total
accessible ranges of $E$ \& $L$. Here $F(E,L)$ is the differential
distribution after integration over the other phase space variables
(BT08, pp.~292, 299). Requiring that one particle lies within each
area ensures that the selected set of particles is as close as
possible to representing the desired particle density in integral
space. We choose the precise position of a selected particle within
each area quasi-randomly in order to ensure that the particles do not
lie on an exact raster in integral space. We describe this scheme as
deterministic selection from the {\sc DF}, a term that ignores this minor
random element.
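A one-dimensional stand-in for this scheme is sketched below: the cumulative probability is divided into $n$ equal slices and one particle is placed quasi-randomly inside each slice. The actual procedure works in two-dimensional $(E,L)$ space; the exponential example, jitter amplitude and seed here are our own choices:

```python
import numpy as np

def smooth_select_1d(inv_cdf, n, jitter=0.5, seed=1):
    """Deterministic selection from a 1D distribution: one particle
    per equal-probability slice of the cumulative distribution, with
    a small quasi-random offset inside each slice (1D analogue of the
    (E,L)-space scheme described in the text)."""
    rng = np.random.default_rng(seed)
    u = (np.arange(n) + 0.5 + jitter * (rng.random(n) - 0.5)) / n
    return inv_cdf(u)

# Example: exponential distribution, inverse CDF = -ln(1 - u).
samples = smooth_select_1d(lambda u: -np.log(1.0 - u), 10000)
print(samples.mean())   # close to 1, with far less scatter than i.i.d. draws
```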
This scheme is readily adapted to select particles of unequal masses
if desired. To select particles having masses proportional to a
weight function $w(E,L)$, one simply weights the {\sc DF}\ by $w^{-1}$,
which automatically adjusts the subdivision of $(E,L)$-space into
areas of equal weighted {\sc DF}, as described in \citet{Sell08}.
The phases of the particles around the orbit defined by these
integrals can be selected at random. We have no evidence that the
choice of radial phase, either for flat discs or for spheres, causes
significant variations in the outcome and we discuss the choice of
azimuthal phases in Section~\ref{qstart} below.
\citet{DS00} describe the similar procedure for 2-integral spheroidal
models.
\subsection{When No Simple {\sc DF}\ Is Available}
\label{jeansapp}
Comparatively few useful mass models have known {\sc DF} s, and the
realization of an equilibrium set of particles for a general model
presents a significant challenge. Some authors \citep[{\it e.g.}][]{SN93}
have simply created a rough $N$-body system, which they then evolve in
the presence of a frozen disc, thereby allowing the halo to relax
towards some nearby equilibrium.
\citet{Hern93} advocates solving the Jeans equations for each
component in the combined potential of all mass components. His
method is widely used \citep[{\it e.g.}][]{VK03,Atha03,EZ04,KVCQ}, but the
resulting equilibrium is approximate.
In general, it is better to derive an approximate {\sc DF}\ for a spherical
or spheroidal system. An isotropic {\sc DF}\ for a spherical system can
usually be obtained by Eddington inversion (BT08, p.~289), although it
is important to verify that the function is positive for all energies
(which it generally is, for reasonable mass models).
Creating an equilibrium {\sc DF}\ for a multi-component system presents a
greater challenge, for which three effective approaches have been
developed. \citet{RSJK}, \citet{KD95} and \citet{DS00} employ the
method of \citet{PT70} to derive the mass distribution for a halo
having some assumed {\sc DF}\ that will be in equilibrium in the presence
of one or more other mass components. Alternatively, one can use
Eddington's inversion formula for the halo only in the potential of
the combined disc and halo \citep{HBWK}. A third possibility, as
here, is to start from a known spherical halo with a known {\sc DF}\ and
compress it by adding a disc and/or a bulge using Young's (1980)
method \citep[see][]{SM05}, and then to select particles from the
compressed {\sc DF}. Even though the last two methods use only the
monopole term for the disc, all three methods yield a spheroidal
system that is close to detailed equilibrium everywhere.
In general, it is more difficult to construct a good equilibrium for a
disc component. The circular speed in the disc mid-plane as a
function of radius is determined by the total mass distribution and,
commonly, one specifies $Q(R)$ \citep{Toom64} to determine the radial
velocity spread at each radius. The Jeans equations in the epicycle
approximation (BT08, p.~326) generally yield a poor equilibrium except
when the radial dispersion is a small fraction of the circular speed,
and the asymmetric drift formula may have no solution near the centres
of hot discs. \citet{Shu69} describes an approximate {\sc DF}\ for a warm
disc with a given radial velocity dispersion that we, and
\citet{KD95}, have found to be quite serviceable. Again in cases
where the radial velocity dispersion stretches the validity of the
epicycle approximation, radial gradients can lead to a disc surface
density after integration over all velocities that differs slightly
from that specified, as shown in Section~\ref{model}.
The vertical structure of an isothermal stellar sheet is given by the
formulae developed by \citet{Spit42} and \citet{Camm50}, and BT08
(p.~321) describe a generalization of the in-plane {\sc DF}\ to include
this feature, which they describe as the Schwarzschild {\sc DF}. The
Spitzer-Camm formulae assume full Newtonian gravity and no radial
density or dispersion gradient. Force softening has an increasingly
detrimental effect on the vertical balance as the ratio of disc
thickness to softening length is reduced; we therefore prefer to
construct a vertical equilibrium from the 1D vertical Jeans equation
in the actual force field of the softened disc potential, which leads
to a better equilibrium.
\subsection{Quiet Starts}
\label{qstart}
The quiet start technique is a valuable addition to the set up process
only when the model has a few vigorous, large-scale instabilities,
such as arise in a cool, massive disc with a rotation curve that rises
approximately linearly from the centre. It is of little help when
linear stability theory predicts the model to be responsive but
(almost) stable \citep[{\it e.g.}][]{Sell89,SE01}. In these latter cases,
collective responses to residual noise grow more vigorously than any
global modes, and the particle arrangement randomizes quickly.
For a quiet start, one reproduces each selected master particle
multiple times in a symmetrical arrangement, with image particles
having the identical radius and velocity components in polar
coordinates. We restrict the meaning of the phrase ``quiet start'' to
this symmetrical arrangement of particles -- {\it i.e.}\ a quiet start can be
used no matter how the coordinates of the master particles are
selected. Conversely, a ``noisy start'' means only that azimuthal
coordinates are selected at random, again independent of how the
master particles are selected. The procedures for discs and
spheroidal components differ slightly.
For discs, we place image particles at the corners of an almost
regular polygon in 2D, centred on the model centre. The polygon is
not exactly regular because we nudge the particles away from exact
$n$-fold symmetry by a random fraction of a small angle, typically
$0.02^\circ$. When the disc has a finite thickness, the polygon must
be duplicated with a second on the opposite side of the mid-plane for
which both the vertical position $z$ and velocity $v_z$ of every
particle in each of the two polygons have opposite signs.
When the force-determination method is based around an expansion in
sectoral harmonics that is truncated at low order, $m_{\rm max}$, and
the number of sides to the polygon $n \geq 2m_{\rm max}+1$, azimuthal
forces in the initial model are much lower than would arise from
particle shot noise -- hence the label ``quiet start''.
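The construction of the polygon of image particles can be sketched as follows; the nudge amplitude of $0.02^\circ$ follows the text, while the random seed and the restriction to 2D are our own illustrative choices:

```python
import numpy as np

def quiet_start_disc(R, phi0, n_fold, nudge_deg=0.02, seed=0):
    """Replicate one master particle at the corners of an almost
    regular n-fold polygon, nudging each image by a random fraction
    of nudge_deg away from exact symmetry (2D positions only)."""
    rng = np.random.default_rng(seed)
    phis = phi0 + 2.0 * np.pi * np.arange(n_fold) / n_fold
    phis += np.deg2rad(nudge_deg) * (rng.random(n_fold) - 0.5)
    return np.column_stack([R * np.cos(phis), R * np.sin(phis)])

ring = quiet_start_disc(1.0, 0.3, n_fold=9)   # 9 >= 2*m_max + 1 for m_max = 4
print(ring.mean(axis=0))                      # ~ (0, 0): low-m harmonics cancel
```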
\begin{figure}
\centerline{\includegraphics[width=.8\hsize]{rchern.ps}}
\caption{The inner rotation curve of our standard model
(solid). The separate contributions of the disc (dashed) and halo
(dotted) are also shown.}
\label{rchern}
\end{figure}
We have not tried quiet starts for other force methods, but they could
still offer a significant advantage provided that the number of
corners adopted for the polygon exceeds the azimuthal order of all the
strong instabilities and non-axisymmetric responses (Section~\ref{swamp}) by
at least a factor two.
\begin{figure*}
\centerline{\includegraphics[width=.57\hsize,angle=270]{plotdf.ps}}
\caption{Details of the approximate {\sc DF}\ for the disc. Panels (a) and
(b) show respectively the variation of $f$ with radial velocity and
azimuthal velocity at five different radii. Panel (c) shows the
radial variations of the rms azimuthal speed ($v_\phi$ solid) and
radial speed ($v_r$ dashed), (d) compares the circular speed (dotted)
with the mean $v_\phi$ (solid) to illustrate the asymmetric
drift. Panels (e) and (f) compare respectively the actual surface
density and $Q$ profiles (solid) with the desired profiles (dashed).
The {\sc DF}\ does not reproduce these curves perfectly, but the departures
are minor.}
\label{plotdf}
\end{figure*}
We adopt a similar procedure for spheroidal components, except that we
create image particles by rotating the initial position and velocity
vectors using the usual rotation matrix for the adopted set of Euler
angles \citep[{\it e.g.}][p.~199]{Arfk85}. The set of Euler angles used
creates an $n$-fold rotationally symmetric set of particles, which is
also reflection symmetric about the mid-plane, and has zero net
momentum with a centre of mass at the model centre; each master
particle is therefore inserted $2n$ times. It is reasonable to adopt
$n \ga 4$.
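A simple special case of this construction, using rotations about the symmetry axis rather than a general set of Euler angles, can be sketched as follows; it reproduces the stated properties ($n$-fold rotational symmetry, mid-plane reflection symmetry, zero net momentum), though the paper's actual Euler-angle set may differ:

```python
import numpy as np

def spheroid_images(pos, vel, n):
    """Insert each master particle 2n times: n rotations about the
    z-axis, each combined with a mid-plane reflection
    (z, v_z -> -z, -v_z). A special case of the Euler-angle
    construction described in the text."""
    out_p, out_v = [], []
    for k in range(n):
        a = 2.0 * np.pi * k / n
        Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
        for flip in (1.0, -1.0):
            F = np.diag([1.0, 1.0, flip])   # identity or mid-plane reflection
            out_p.append(F @ Rz @ pos)
            out_v.append(F @ Rz @ vel)
    return np.array(out_p), np.array(out_v)

P, V = spheroid_images(np.array([1.0, 0.5, 0.3]), np.array([0.1, -0.2, 0.4]), 4)
print(V.sum(axis=0))   # ~ [0 0 0]: zero net momentum by construction
```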
\section{Models}
Here we describe all the various galaxy models we use in this paper.
\subsection{Standard Galaxy Model}
\label{model}
Our standard model is a composite disc-halo system with the rotation
curve shown in Fig.~\ref{rchern}. The two mass components are an
exponential disc and a compressed, strongly truncated, Hernquist halo.
The initial surface density of the disc has the usual exponential form
\begin{equation}
\Sigma(R) = {M_d \over 2\pi R_d^2}e^{-R/R_d},
\end{equation}
where $M_d$ is the nominal disc mass. We truncate the disc at
$R=5R_d$, leaving an active disc mass of $\approx 0.96 M_d$. The disc
particles are set in orbital motion with a radial velocity spread so
as to make Toomre's $Q=1.5$. For most models, we determine the
approximate equilibrium velocities by solving the Jeans equations in
the epicycle approximation as described in Section~\ref{jeansapp}.
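The quoted active disc mass follows from the analytic enclosed mass of an exponential disc, which is easily checked:

```python
import numpy as np

def enclosed_disc_mass(R_over_Rd):
    """Fraction of the total exponential-disc mass inside radius R:
    M(<R)/M_d = 1 - exp(-x) (1 + x), with x = R/R_d."""
    x = float(R_over_Rd)
    return 1.0 - np.exp(-x) * (1.0 + x)

print(enclosed_disc_mass(5.0))   # ~0.96: active mass after truncation at 5 R_d
```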
In some cases we adopt Shu's approximate {\sc DF}\ instead, and select disc
particles deterministically from it. Properties of the {\sc DF}\ and the
radial variations of the low-order velocity moments are shown in
Fig.~\ref{plotdf}. While the radial velocity distributions are
nicely Gaussian, the azimuthal velocity distributions (\ref{plotdf}b)
are markedly skewed. This aspect, and the departures of the surface
density and $Q$ profiles from the desired values all decrease for
models with less dominant discs or with lower values of $Q$.
For fully 3D simulations, the density profile normal to the disc
plane is Gaussian, with a constant scale height of $0.05R_d$ and
appropriate vertical velocities in the numerically determined vertical
force profile.
We construct a halo in equilibrium with the disc in the following manner.
We start from the initial density profile suggested by \citet{Hern90}
\begin{equation}
\rho_0(r) = {M_h r_s \over 2\pi r(r_s + r)^3},
\label{Hernquist}
\end{equation}
which has total mass $M_h$ and scale radius $r_s$, with the isotropic
distribution function ({\sc DF}) also given by Hernquist. We strongly
truncate this halo by eliminating all particles with enough energy to
reach $r > 2r_s$, causing the density to taper gently to zero at this
radius, and an actual halo mass of $\approx 0.25M_h$. Since most of
the discarded mass is at large radii, there is little change to the
central attraction at $r<2r_s$ and the model remains close to
equilibrium.
For our standard model, we choose $r_s = 40R_d$ and set $M_h = 80 M_d$
so that the halo mass is approximately 19 times that of the disc. We
then employ the halo compression algorithm described by \citet{SM05}
to compute a new, mildly anisotropic, {\sc DF}\ for the compressed halo
that results from including the above disc. The rotation curve,
Fig.~\ref{rchern}, shows that the disc dominates the central
attraction over most of the inner part and the total rotation curve is
approximately flat at large radii.
We adopt a system of units such that $G = M_d = a_d = 1$, where $G$ is
Newton's constant, $M_d$ is the mass of the untruncated disc, and
$a_d$ is the length scale for the type of disc adopted. Therefore
distances are in units of $a_d$, masses are in units of $M_d$, one
dynamical time $\tau = (a_d^3/GM_d)^{1/2}$, and velocities are in
units of $\hat v = (G M_d / a_d)^{1/2} \equiv a_d / \tau$. One
possible scaling to physical units is to choose the dynamical time to
be $10\;$Myr and $a_d = 3\;$kpc, which implies $M_d = 5.98 \times
10^{10}\;{\rm M}_\odot$. The velocity unit $\hat v =
293\;$km~s$^{-1}$, and the peak circular speed in Fig.~\ref{rchern}
is approximately $235\;$km~s$^{-1}$.
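This scaling is easy to verify numerically; the values of $G$ and of the kpc~(km~s$^{-1}$)$^{-1}$ time unit used below are standard constants, rounded here:

```python
# Check of the quoted physical scaling, assuming
# G = 4.3009e-6 kpc (km/s)^2 / Msun and 1 kpc/(km/s) = 977.79 Myr.
G = 4.3009e-6            # kpc (km/s)^2 / Msun
MYR_PER_TIME_UNIT = 977.79   # Myr per kpc/(km/s)

a_d = 3.0                        # kpc
tau = 10.0 / MYR_PER_TIME_UNIT   # dynamical time of 10 Myr, in kpc/(km/s)

M_d = a_d**3 / (G * tau**2)      # from tau = (a_d^3 / G M_d)^(1/2)
v_hat = a_d / tau                # velocity unit

print(M_d)     # ~6.0e10 Msun (the text quotes 5.98e10 with its constants)
print(v_hat)   # ~293 km/s
```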
We also present results for two other disc-halo models for which we
choose $r_s = 30R_d$ and $r_s = 50R_d$, {\it i.e.}\ that bracket our standard
case. The more extended halo leads to a more dominant disc, while the
disc is less dominant in the more concentrated halo.
\begin{figure}
\centerline{\includegraphics[width=.8\hsize]{hist.ps}}
\caption{The frequency distribution of halo particle masses, in units
of the disc particle mass.}
\label{hist}
\end{figure}
We select halo particles from the compressed {\sc DF}\ using the smooth
procedure summarized in Section~\ref{determ}, with the weight function for
particle masses being $w(L) = 0.5 + 20L$, where $L = |\;\pmb{\mit L}|$ is the
total specific angular momentum. All disc particles have equal
masses, but the masses of halo particles range from 0.7 to 14.6 times
the mass of the disc particles. Fig.~\ref{hist} shows the frequency
distribution of halo particle masses.
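The multi-mass selection can be sketched as follows. This is a hypothetical
reading of the scheme, not the code actually used: particle masses are taken
proportional to $w(L)$ and normalised so that they sum to the target halo
mass, the sampling density being reduced by the same factor so that the mass
distribution still follows the {\sc DF}; under this reading, low-$L$
(inner-halo) orbits receive many light particles.

```python
import random

def weight(L):
    # Weight function quoted in the text: w(L) = 0.5 + 20 L
    return 0.5 + 20.0 * L

def assign_masses(L_values, M_halo):
    """Hypothetical multi-mass assignment: mass_i proportional to w(L_i),
    normalised so that the particle masses sum to M_halo."""
    w = [weight(L) for L in L_values]
    norm = M_halo / sum(w)
    return [norm * wi for wi in w]

random.seed(1)
L_vals = [random.random() for _ in range(10_000)]   # stand-in L distribution
masses = assign_masses(L_vals, M_halo=80.0)
```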
As a result of this careful procedure, both the disc and halo
components are very close to equilibrium in the combined potential and
the initial ratio of kinetic to the virial of Clausius (measured from the
particles) is $T/|W|=0.498$. At the same time, the phases of the
particles in their carefully selected orbits are chosen at random, so
that the model indeed starts from the usual level of shot noise
resulting from the random locations of the particles.
\subsection{Isochrone Disc}
We also present results using the isochrone disc with no halo. The
potential (BT08, p.~65) has a simple form
\begin{equation}
\Phi(R) = -{GM_d \over a} \left[x + (1 + x^2)^{1/2}\right]^{-1},
\end{equation}
while the surface density is
\begin{equation}
\Sigma(R) = {M_d a \over 2\pi R^3} \left\{ \log\left[ x + (1 +
x^2)^{1/2} \right] - {x \over (1 + x^2)^{1/2}} \right\}.
\end{equation}
Here $a$ is a length scale, and $x=R/a$; note $\Sigma(0)=M_d/(6\pi
a^2)$. \citet{Kaln76} describes a convenient family of {\sc DF}s
characterized by a parameter $m_K$; we refer to each model as the
isochrone/$m_K$ disc. He \citep{Kaln78} also presents some
preliminary results for the normal modes, which were confirmed in
simulations \citep{ES95}. The local stability parameter
\citep{Toom64} for the isochrone/5 disc has a nearly constant value of
$Q \simeq 1.6$, and is $Q \simeq 1.2$ for the isochrone/8 model.
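As a quick consistency check on the central surface density quoted above, a
sketch evaluating $\Sigma$ near the centre (model units $M_d = a = 1$
assumed):

```python
import math

def sigma_isochrone(R, Md=1.0, a=1.0):
    # Surface density of the isochrone disc as given in the text
    x = R / a
    s = math.sqrt(1.0 + x * x)
    return Md * a / (2.0 * math.pi * R ** 3) * (math.log(x + s) - x / s)

sigma0 = 1.0 / (6.0 * math.pi)   # quoted central value Sigma(0) = Md/(6 pi a^2)
print(sigma_isochrone(1e-3), sigma0)
```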
\begin{table}
\caption{Numerical parameters for our standard runs}
\label{params}
\begin{tabular}{@{}lrr}
& Cylindrical grid & Spherical grid \\
\hline
Grid size & $(N_R,N_\phi,N_z)\quad$ \\
& $ = (127,192,125)$ & $n_r = 500$ \\
Angular components & $0\leq m \leq 8$ & $0 \leq l \leq 4$ \\
Outer radius & $6.076R_d$ & $80R_d$ \\
$z$-spacing & $0.01R_d$ \\
Softening rule & cubic spline & none \\
Softening length & $\epsilon = 0.05R_d$ \\
Number of particles & 500\,000 & 2\,500\,000 \\
Equal masses & yes & no (see Fig.~\ref{hist}) \\
Shortest time step & $0.0125(R_d^3/GM)^{1/2}$ & $0.0125(R_d^3/GM)^{1/2}$ \\
Time step zones & 5 & 5 \\
\hline
\end{tabular}
\end{table}
\section{Results}
\label{illust}
We begin by showing just how much variation can occur. We first
present the evolution of our standard disc/halo model whose rotation
curve is shown in Fig.~\ref{rchern}. Note that the disc equilibrium
in these models is set by solving the Jeans equations, while the halo
particles are selected deterministically from a {\sc DF}.
Fig.~\ref{basichyb} shows results from 16 separate runs with Sellwood's
(2003) hybrid grid code using fixed numerical parameters, given in
Table~\ref{params}, but with different random seeds for the initial
coordinates of the disc particles only. We plot the evolution of both
the amplitude and pattern speed of the bar, measured as described in
Appendix A. Even though the initial particles are selected from the
same distributions, with different random seeds for the disc only, the
amplitude evolution differs greatly from run to run and there is
considerable spread in the evolution of the pattern speed.
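Appendix A defines the measurement actually used; purely for illustration,
here is a common $m=2$ proxy for the bar amplitude and phase (a generic
definition, not necessarily the one adopted in the paper), with the pattern
speed estimated by finite-differencing the phase between snapshots:

```python
import cmath
import math

def m2_amplitude_phase(phis, masses=None):
    """Relative m=2 amplitude A2 = |sum_j m_j e^{2 i phi_j}| / sum_j m_j and
    the bar position angle (half the argument, from the two-fold symmetry)."""
    if masses is None:
        masses = [1.0] * len(phis)
    s = sum(m * cmath.exp(2j * phi) for m, phi in zip(masses, phis))
    return abs(s) / sum(masses), cmath.phase(s) / 2.0

def pattern_speed(phase_a, phase_b, dt):
    # Finite-difference pattern speed, unwrapped modulo pi (m=2 symmetry)
    dphi = (phase_b - phase_a + math.pi / 2.0) % math.pi - math.pi / 2.0
    return dphi / dt

# A perfect two-fold symmetric "bar" at position angle 0.3
A2, ang = m2_amplitude_phase([0.3, 0.3 + math.pi] * 50)
print(A2, ang)
```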
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{cusoft.5.ps}
\caption{Evolution of the amplitude (left) and pattern speed (right)
of the bar in 16 runs with different random seeds for the disc
particle coordinates, run using Sellwood's (2003) hybrid code. The
tiny differences in the initial models lead to a remarkably wide range
of properties of the bar at late times.}
\label{basichyb}
\smallskip
\includegraphics[width=.57\hsize,angle=270]{vplot.ps}
\caption{Evolution of 5 runs with different random seeds for the disc
particle coordinates, run using {\sc PKDGRAV}\ with $\epsilon = 0.05R_d$.}
\label{pkdgrav}
\end{figure}
In order to demonstrate immediately that the scatter in
Fig.~\ref{basichyb} is not a numerical artefact of our grid code,
Fig.~\ref{pkdgrav} shows the results of a similar test with 5 runs
using the tree code {\sc PKDGRAV}\ \citep{Stad01} using an opening angle $\theta
= 0.7$. {\sc PKDGRAV}\ is a multi-stepping code, with time steps refined such
that $\delta t = \Delta t/2^n < \eta (\epsilon/a)^{1/2}$, where
$\epsilon$ is the softening and $a$ is the acceleration at a
particle's current position. We use base time step $\Delta t = 0.01$
and $\eta = 0.2$, which gives identical time steps for all particles.
The results show a comparable spread in the evolution of both the
amplitude and pattern speeds. Results from the two codes with
identical initial coordinates for all the particles do not compare in
detail. For this problem, the tree code runs about 37 times more
slowly than Sellwood's (2003) grid code; we therefore use it only for
this cross check.
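The time-step refinement rule quoted above can be sketched as a rung
assignment (parameter values from the text; the function itself is
illustrative, not {\sc PKDGRAV}'s source):

```python
import math

def rung(accel, dt_base=0.01, eta=0.2, eps=0.05):
    """Smallest n >= 0 with dt_base / 2**n < eta * sqrt(eps / accel),
    i.e. the criterion delta t = Delta t / 2^n < eta (eps/a)^(1/2)."""
    dt_max = eta * math.sqrt(eps / accel)
    n = 0
    while dt_base / 2 ** n >= dt_max:
        n += 1
    return n

# For moderate accelerations the base step already satisfies the bound,
# which is why all particles here end up on identical time steps
print(rung(1.0), rung(100.0))
```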
The gross qualitative behaviour of all the models in
Figs.~\ref{basichyb} \& \ref{pkdgrav} is similar at first. The bar forms at
similar times with similar pattern speeds, though the initial peak
amplitude varies by $\sim 25\%$. The evolution thereafter
further diverges, notably with increasingly large differences in the
bar amplitude. Steep declines in the bar amplitude in the interval
$200 \la t \la 400$ are generally associated with buckling events
\citep[{\it e.g.}][]{RSJK}, but the timing of these events varies
considerably. At late times in Fig.~\ref{basichyb}, the bar
amplitude rises steadily in 9/16 simulations, although starting from
different times in each case, while it stays low (over the time
interval shown) in the remaining 7.
It is more encouraging to note that the rate of decrease of the bar
pattern speed does correlate with the bar amplitude; strong bars are
more strongly braked by halo friction, as expected. Furthermore,
continued amplitude growth of bars that are strongly braked has been
reported previously \citep[{\it e.g.}][]{Atha02}.
\begin{figure}
\centerline{\includegraphics[width=.8\hsize]{rcnew.ps}}
\medskip
\centerline{\includegraphics[width=.8\hsize]{rcext.ps}}
\caption{The inner rotation curves of models with (above) a slightly
more dominant halo and (below) a slightly more extended halo. The
line styles mean the same as in Fig.~\ref{rchern}. The behaviour
of these models is shown in Figs.~\ref{newhalo} \& \ref{exthalo}.}
\label{rcother}
\end{figure}
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{newhalo.ps}
\caption{Evolution of a set of models with a more dominant halo than
those shown in Fig.~\ref{basichyb}. The initial rotation curve is
shown in the upper panel of Fig.~\ref{rcother}.}
\label{newhalo}
\vspace{0.3cm}
\includegraphics[width=.57\hsize,angle=270]{exthalo.ps}
\caption{Evolution of a set of models with a less dominant halo than
those shown in Fig.~\ref{basichyb}. The initial rotation curve is
shown in the lower panel of Fig.~\ref{rcother}.}
\label{exthalo}
\vspace{0.3cm}
\includegraphics[width=.57\hsize,angle=270]{threehalos.ps}
\caption{Comparison of the estimated means (solid lines) and
$\pm1\sigma$ scatter (dotted curves) in the three different haloes
shown in Figs.~\ref{basichyb} (red), \ref{newhalo} (blue), \&
\ref{exthalo} (green).}
\label{threehalos}
\end{figure}
\subsection{Divergence at Late Times}
\citet{DBS9} report a similar study of bar-unstable disc-halo models,
which also reveal large amplitude differences in the short term.
However, they stress that the long-term evolution of their simulations
is reproducible, in contrast to our finding.
Fig.~\ref{newhalo} shows that we confirm their conclusion for a
different model with a slightly more dominant halo; the evolution of
both the bar amplitude and pattern speed shows much less scatter than
is seen in Fig.~\ref{basichyb}. All cases show a steady rise in bar
amplitude after the buckling event, although the curves for the
different realizations during this stage of the evolution are offset
in time, as also found by Dubinski {\it et al.}
Fig.~\ref{exthalo} shows results from a third model with a more
dominant disc. The amplitude evolution in this model is again
bi-modal, rising steadily at late times in half the cases, although
not by as much as in our standard case (Fig.~\ref{basichyb}). The
rotation curves of both these models are shown in
Fig.~\ref{rcother}.
The late rise in bar amplitude occurs, if at all, only in models with
live haloes and is associated with frictional braking. It is natural
that frictional braking should be stronger when the halo is more
dominant. In our standard model (Fig.~\ref{basichyb}), and in the
more dominant disc case (Fig.~\ref{exthalo}), the large late-time
differences arise because strong friction kicks in in some cases but
not in all. We argue in Section~\ref{dynfr} that the reason for these
differences is the existence of adverse gradients in the halo {\sc DF},
which can inhibit friction \citep{SD06}. Whatever the cause, it is
clear from these two sets of runs that onset of friction and steady
bar growth at late times depends on comparatively minor differences in
the earlier evolution caused by the different random seeds.
In order to quantify the scatter, we compute the bi-weight estimate
\citep{Beers} of the mean and dispersion of the measurements
throughout all sets of experiments.\footnote{Their algorithm assumes
the data to be unimodal with a few outliers, which is manifestly not
the case in our data at late times.} Since bar growth is shifted
slightly in time in the different runs shown in
Figs.~\ref{basichyb}, \ref{newhalo}, and \ref{exthalo}, we apply a
small time offset to the evolution of both quantities in order to
ensure that the evolution coincides as the relative bar amplitude
grows through 0.1, before computing the mean and scatter from each
set. Fig.~\ref{threehalos} shows the time evolution of the means
and scatter of the bar amplitude and pattern speed for all three
haloes. It is clear that the stochastic spread is greatest for our
standard halo (red lines), less for the less dominant halo (green
lines) and least for the more dominant halo (blue lines).
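For reference, a minimal sketch of the bi-weight location and scale
estimators in the style of \citet{Beers}; the tuning constants $c=6$ and
$c=9$ are the conventional choices, and this is an illustration rather than
the exact code used:

```python
import statistics

def biweight_location(xs, c=6.0):
    # Tukey biweight estimate of the mean, robust to a few outliers
    M = statistics.median(xs)
    mad = statistics.median([abs(x - M) for x in xs])
    if mad == 0.0:
        return M
    num = den = 0.0
    for x in xs:
        u = (x - M) / (c * mad)
        if abs(u) < 1.0:
            w = (1.0 - u * u) ** 2
            num += (x - M) * w
            den += w
    return M + num / den

def biweight_scale(xs, c=9.0):
    # Biweight estimate of the dispersion
    M = statistics.median(xs)
    mad = statistics.median([abs(x - M) for x in xs])
    if mad == 0.0:
        return 0.0
    num = den = 0.0
    for x in xs:
        u = (x - M) / (c * mad)
        if abs(u) < 1.0:
            num += (x - M) ** 2 * (1.0 - u * u) ** 4
            den += (1.0 - u * u) * (1.0 - 5.0 * u * u)
    return (len(xs) ** 0.5) * num ** 0.5 / abs(den)
```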
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{exp.df.det.n.ps}
\caption{Evolution of a set of 16 runs of our standard model that used
a more careful disc set up procedure.}
\label{expdfdetn}
\medskip
\includegraphics[width=.57\hsize,angle=270]{ranhal.ps}
\caption{Evolution of a set of 16 runs that used Hernquist's Jeans
equation procedure to set up an approximate equilibrium for the halo
particles. The bar amplitude grows at late times and the pattern
decreases in all but three of these cases.}
\label{ranhal}
\end{figure}
\subsection{Particle selection}
\label{pselect}
Fig.~\ref{expdfdetn} shows the consequence of selecting disc
particles in a deterministic manner from an approximate {\sc DF}\ as
described in Sections~\ref{jeansapp} \& \ref{model}. This procedure still
has a random element when choosing the precise values of $E$ \& $L_z$
within each sub-area, and the simulations have noisy starts because we
randomly select the radial and azimuthal phases of the particles. The
16 different runs used different random seeds and are to be compared
with those shown in Fig~\ref{basichyb}, for which disc particle
velocities were selected from Gaussians whose widths were estimated
from the Jeans equations in the epicycle approximation. There is no
significant improvement, and in this case 6/16 runs have not slowed
much by $t=800$.
The consequences of selecting {\it halo\/} particle velocities from
Gaussians whose widths are determined from the Jeans equations
\citep{Hern93}, are shown in Fig.~\ref{ranhal}. With this more
approximate halo equilibrium we see that all but 3/16 bars grow and
slow. The non-slowing fraction was 5/16 in a similar set of
experiments (not shown) in which the halo particles were selected from
the {\sc DF}\ by the accept/reject method, instead of deterministically for
Fig.~\ref{basichyb}.
Thus we find a weak trend in these results with the quality of the
different halo set-up procedures. The fraction of bars that do not
experience strong friction rises to almost half when we use the most
careful set-up procedure we have been able to devise for the halo,
whereas use of the density profile to choose radii and Jeans equations
to set halo velocities results in a large majority (13/16) of bars
that experience strong friction (Fig.~\ref{ranhal}). This trend is
also consistent with the weak dependence on halo particle number
reported in Appendix B, where we find that the larger the halo
particle number, the smaller the fraction of bars that slow. We also
find a larger fraction of slowing bars when we use equal mass
particles. These results hint that still larger calculations that are
set up with extreme care may evolve in a consistent manner independent
of the random seed, but we have been unable to demonstrate this.
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{isoc5.df.det.n.ps}
\caption{Evolution of the bar starting from 16 different selections of
particles from the same DF of the isochrone/5 disc.}
\label{isoc5dfdetn}
\end{figure}
\begin{table}
\caption{Numerical parameters for our 2D simulations}
\label{pars2d}
\begin{tabular}{@{}lrr}
& Isochrone & Standard model\\
\hline
Grid $(N_R,N_\phi)$ & (180,256) & (170,256) \\
Sectoral harmonics & $0\leq m \leq 8$ & $0\leq m \leq 8$ \\
Outer radius & $3.995a$ & $6.23R_d$ \\
Softening rule & Plummer & Plummer \\
Softening length $\epsilon$ & $0.05a$ & $0.1R_d$ \\
Number of particles & 500\,000 & various \\
Equal masses & yes & yes \\
Shortest time step & $0.025$& $0.0125$ \\
Time step zones & 1 & 3 \\
\hline
\end{tabular}
\end{table}
\section{Sources of stochasticity}
In this section, we describe and illustrate five sources of
stochasticity, four of which contribute to the large scatter just
described.
\subsection{A Reproducible Result}
We start from a simple unstable disc model for which the outcomes of
simulations do not diverge with different random selections of initial
particles. Fig.~\ref{isoc5dfdetn} shows results from noisy start
simulations in 2D of an isochrone/5 disc, in which $Q \simeq 1.6$;
numerical parameters are given in Table~\ref{pars2d}. The different
curves come from separate simulations with different selections of
particles from the {\em same} {\sc DF}, using the ``deterministic''
procedure described in Section~\ref{selection}. The small scatter in the
bar amplitude at late times can be further reduced by restricting
disturbance forces to the $m=2$ sectoral harmonic only.
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{isoc8.df.det.qmm.ps}
\caption{The time evolution of the bar amplitude and pattern speed in
a quiet start isochrone/8 disc in which $Q \simeq 1.2$. Note the
somewhat larger spread compared with that shown in
Fig.~\ref{isoc5dfdetn}.}
\label{isoc8dfdetq}
\vspace{0.25cm}
\includegraphics[width=.57\hsize,angle=270]{isoc8.df.det.n.ps}
\caption{Evolution of the bar in a noisy start isochrone/8 disc in which
particles are drawn from the same {\sc DF}\ as was used for
Fig.~\ref{isoc8dfdetq}.}
\label{isoc8dfdetn}
\end{figure}
\subsection{Multiple Modes}
\label{multimodes}
Most unstable disc models support a large set of
small-amplitude, unstable modes having a wide range of growth rates
\citep[{\it e.g.}][]{Toom81,Jala07}. These linear modes, even those
with the same angular periodicity, grow independently for as long as
all disturbance amplitudes remain small. If the seed amplitudes of
all modes are low, the first to saturate will be the most rapidly
growing. In most unstable discs, the fastest growing mode is
generally the simplest, or fundamental, mode that is usually dubbed
the bar mode. But if the growth rate of the bar mode does not exceed
that of the next most vigorous mode by a large enough margin, then for
some seed amplitudes both may have comparable amplitude when one
saturates. Two or more modes reaching large amplitude at similar
times but with random phases can produce constructive or destructive
interference in the measured amplitudes as the ``bar'' saturates.
Non-linear effects then cause such differences to persist.
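The interference argument can be illustrated with a toy superposition of two
exponentially growing modes. All numbers below are illustrative, not fits to
the simulations; with a growth-rate ratio of 0.85 the spread in measured
amplitude at saturation, over the possible relative phases, is roughly four
times larger for the large (noisy-start) seeds than for the small
(quiet-start) ones.

```python
import cmath
import math

def measured_amplitude(t, seed1, seed2, s1, s2, dphi):
    """|z1 + z2| for two growing modes with growth rates s1, s2, seed
    amplitudes seed1, seed2 and relative phase dphi; the oscillation
    frequencies are dropped, since only the relative phase matters here."""
    z1 = seed1 * math.exp(s1 * t)
    z2 = seed2 * math.exp(s2 * t) * cmath.exp(1j * dphi)
    return abs(z1 + z2)

def spread_at_saturation(seed, s1=0.1, s2=0.085):
    # Time at which the dominant mode reaches unit amplitude from its seed
    t_sat = math.log(1.0 / seed) / s1
    amps = [measured_amplitude(t_sat, seed, seed, s1, s2, dphi)
            for dphi in (0.0, math.pi)]
    return max(amps) - min(amps)

# Quiet (small-seed) starts leave the second mode far behind at saturation;
# noisy (large-seed) starts do not, so the interference spread is much larger
print(spread_at_saturation(1e-6), spread_at_saturation(1e-2))
```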
We use the slightly cooler $m_K=8$ isochrone disc to demonstrate this
behaviour explicitly and, to avoid additional complications, we
restrict disturbance forces to those arising from the $m=2$ sectoral
harmonic only. Figs.~\ref{isoc8dfdetq} \& \ref{isoc8dfdetn}
illustrate the dependence of the outcome on the initial noise
amplitude. The quiet start simulations in Fig.~\ref{isoc8dfdetq}
are good enough that the growth rates of the two most rapidly growing
$m=2$ modes can be estimated by fitting to data from the
extensive period of evolution before growth ends \citep[{\it e.g.}][]{SA86}.
We find the growth rate of the second mode to be some 85\% of that of
the bar mode and that its amplitude (peak $\delta \Sigma/\Sigma$)
can be within a factor of a few of the dominant mode as the bar
saturates. The consequence is a slight increase in the scatter of the
later bar amplitudes in this case compared with the case for the
hotter disc shown in Fig.~\ref{isoc5dfdetn}.
The mild scatter in Fig.~\ref{isoc8dfdetq} requires a quiet start,
which decreases the seed amplitude of all non-axisymmetric
disturbances that grow for $\sim 100$ time units before the rising
amplitudes even become discernible in the figure. The much larger
seed amplitudes when noisy starts are used do not allow the dominant
mode to outgrow all others before saturation, with the consequences
illustrated in Fig.~\ref{isoc8dfdetn}. The same sets of particles
were used as for the results shown in Fig.~\ref{isoc8dfdetq}, but we
placed the image particles at random azimuths, instead of evenly. The
period of rising amplitude is too short to allow more than very rough
measurements of the growing modes, but it is clear that multiple
unstable modes having comparable growth rates are seeded at large
initial amplitudes by the shot noise. With such high seed amplitudes,
there is not enough time for the most rapidly growing mode to outgrow
the others, which therefore leads to very substantial variations in
the final bar amplitudes. Note that this did not happen in the warmer
disc (Fig.~\ref{isoc5dfdetn}), which also used a noisy start, since
in that case all growth rates are lower, while the growth rate of the
dominant bar mode exceeds that of all others by a larger margin.
Notice also that not only is there greater scatter in both the bar
amplitude and pattern speed in Fig.~\ref{isoc8dfdetn}, but both
quantities scatter to lower values. We find indications that runs
having lower pattern speed have the more dominant second mode. The
fundamental bar mode, when it has time to outgrow the second mode,
peaks at a greater amplitude and then relaxes back to lower value, as
always happens in Fig.~\ref{isoc8dfdetq}. But when the second mode
is competitive, the bar amplitude generally has a lower initial peak,
and may even rise subsequently.
\subsection{Swing-amplified noise}
\label{swamp}
Our standard model is more complicated than the isolated isochrone
disc. In particular, the inner rotation curve (Fig.~\ref{rchern})
rises steeply where the halo density cusp dominates. Recall that a
mode is a standing wave oscillation of the system, which can be
neutral, growing, or decaying. The dominant linear global modes,
known as cavity modes, in bar unstable discs are standing waves
between the centre and corotation that must have a high enough pattern
speed to avoid any inner Lindblad resonances
\citep[][pp.~508-518]{Toom81,BT08}. The consequence of a steeply
rising rotation curve is to make the maximum of the function $\Omega -
\kappa/2$ rise to high values near the centre, requiring any linear
bisymmetric modes to have very high pattern speeds, small corotation
radii, and very low growth rates (because the inner disc is not all
that responsive).
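The frequencies involved can be computed directly from a circular-speed
curve. A sketch with a finite-difference $\kappa$; the flat and solid-body
test cases are textbook limits, not this paper's rotation curve, but they
show how a steep central rise drives $\Omega - \kappa/2$ to high values at
small radii:

```python
import math

def omega_kappa(vc, R, dR=1e-5):
    """Omega = vc/R and kappa from kappa^2 = R d(Omega^2)/dR + 4 Omega^2,
    with the derivative taken by central finite difference."""
    Om = vc(R) / R
    Om2 = lambda r: (vc(r) / r) ** 2
    dOm2 = (Om2(R + dR) - Om2(R - dR)) / (2.0 * dR)
    return Om, math.sqrt(R * dOm2 + 4.0 * Om * Om)

# Flat rotation curve: kappa = sqrt(2) Omega, so Omega - kappa/2 rises as 1/R
Om, ka = omega_kappa(lambda r: 1.0, 1.0)
# Solid-body (harmonic core): kappa = 2 Omega, so Omega - kappa/2 = 0
Om_sb, ka_sb = omega_kappa(lambda r: r, 1.0)
print(Om - ka / 2.0, Om_sb - ka_sb / 2.0)
```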
The outer disc, on the other hand, is highly responsive but has no
cavity-type modes. We see evidence for weak edge-type modes, which
arise from a steep density gradient \citep{Toom81} at the sharply
truncated outer edge, but they are sufficiently far out and of low
enough frequency to be decoupled from the bar forming process in the
inner disc.
Shot noise from the particles is vigorously amplified, but transient
swing-amplified responses should be damped at the inner Lindblad
resonance (ILR) of the disturbance \citep[][p.~510]{Toom81,BT08}, as
long as the amplitude remains tiny. Large amplitude waves are not
damped, however, and trap disc particles near the ILR into a bar-like
feature \citep{Sell89}.
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{50K.ps}
\includegraphics[width=.57\hsize,angle=270]{500K.ps}
\includegraphics[width=.57\hsize,angle=270]{5M.ps}
\includegraphics[width=.57\hsize,angle=270]{50M.ps}
\caption{Evolution of the bar in four sets of 16 runs with different
random seeds for the disc particle coordinates. The number of
particles rises by a factor of 10 from row to row, ranging from 50K in
the top row to 50M in the bottom row.}
\label{p2dNvary}
\end{figure}
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{multiN.ps}
\caption{Summary plot showing the means and $\pm1\sigma$ scatter of
the runs shown in Fig.~\ref{p2dNvary}.}
\label{multiN}
\end{figure}
Bar formation through amplified noise inevitably leads to a range of
bar properties, but it is fortunate that the range turns out to be
surprisingly narrow. To illustrate this, we study bar formation in
our standard model in simplified simulations in which the motions of
disc particles are confined to a plane, and the halo particles are
replaced by a rigid mass component that simply provides the extra
central attraction to yield the same rotation curve as shown in
Fig.~\ref{rchern}. This approach has several advantages: the
calculations are less expensive in computer time, but more importantly
the dynamics is simpler because both bar buckling and halo friction
are eliminated, enabling us to isolate the bar formation process from
these other complicating aspects of the overall evolution.
Fig.~\ref{p2dNvary} shows 4 sets of 16 runs each in which $N$ is
increased by a factor 10 from row to row, from $N=50$K at the top, to
$N=50$M for the bottom row. The results from each run have been
slightly shifted horizontally so that the amplitude passes through 0.1
at the same time (the mean for the 16 runs) as described above. The
bar amplitude has a higher peak than in Figs.~\ref{basichyb} \&
\ref{pkdgrav} in part, at least, because we use a different softening
rule in 2D. The discrepant line in one of the pattern speed panels
shows that the bar cannot always be identified in the early stages,
but eventually it is in all cases.
Fig.~\ref{multiN} shows the evolution of the means and scatter in
the four sets of experiments, and reveals that the main effects of
increasing $N$ are threefold: the formation of the bar is delayed
because of lower seed noise, the mean peak bar amplitude increases, and
the scatter in the amplitude evolution {\it rises\/} with increasing
particle number, at least to $N=5\,$M. The pattern speeds are better
behaved, with scatter decreasing as $N$ rises.
Because these calculations have less freedom, the amplitude variation
is much less than those shown in Fig.~\ref{basichyb}, which have
the same numbers of disc particles as those in the second row of
Fig.~\ref{p2dNvary}. Nevertheless, the spread in the bar amplitudes
after the initial rise remains quite high. The pattern speed does not
decline as much because the rigid halo does not cause dynamical
friction.
Since amplified noise is intrinsically stochastic, the dominant
transient responses in different random realizations of the disc must
differ. The possible frequency range of the dominant pattern is
broad, but not unbounded; the rotation curve and surface density
profile, among other properties, cause the responsiveness of the disc
to vary with radius, and therefore the dominant responses have
corotation radii in the region where the disc is most responsive.
Thus the very first collective responses at low, but fixed, $N$ lead
to initial bars having a range of strengths, {\it i.e.}\ sizes, with the
larger bars developing more slowly because the clock runs more slowly
farther out in the disc. (The time delays have been removed from
Fig.~\ref{p2dNvary}.)
The larger the number of particles, the longer it takes for the bar to
form (Fig.~\ref{multiN}). Initial transient responses occur at
roughly the same rate but, in experiments with larger $N$, the lower
initial amplitudes do not lead to immediate bar formation. Subsequent
amplification events tend to be of greater amplitude, and to occur
farther out in the disc. Thus we see that a lower level of shot noise
favours large amplitude responses farther out in the disc that briefly
lead to longer and stronger bars.
The pleasant surprise is that after the initial transient episodes
produce bars of different sizes and angular speeds, we observe
(Fig.~\ref{multiN}) that subsequent evolution causes the range of
bar strengths to narrow. Also most of the systematic trends with
particle number are erased in the subsequent evolution, and neither
the bar amplitude nor its pattern speed at later times exhibits more
than a mild dependence on $N$. It is fortunate that a degree of
uniformity of the bar properties emerges after such tumultuously
different evolution. But it is far from obvious why it should,
especially since the model could have supported bars of wide range of
sizes ({\it e.g.}\ Fig.~\ref{basichyb}).
The results shown in Fig.~\ref{p2dNvary} are for models with rigid
haloes in which the disc was created using the Jeans equations
(Section~\ref{jeansapp}). Far from becoming better behaved, the scatter in
the amplitude evolution {\it increases\/} as $N$ rises! We conducted
a similar set of tests, also with rigid haloes, for which disc
particles were selected deterministically from an approximate {\sc DF}.
The evolution of these more carefully set up models resulted in
slightly improved behaviour: the bar formed somewhat more slowly,
peaked at a little lower amplitude for the same value of $N$, and the
scatter no longer varied systematically with $N$. However, the final
bar amplitude and pattern speeds were within the ranges shown in
Fig.~\ref{p2dNvary}.
Unlike the results for the isochrone presented in Appendix C, the more
careful selection of particles yielded only a slight reduction in the
spread in evolution. It is likely that this difference in behaviour of
the two discs is due to the difference in bar forming mechanism; the
instability of the isochrone disc is due to strongly unstable linear
global modes, whereas the bars in our standard model form through
non-linear trapping of swing-amplified particle noise that would be
less affected by the quality of the equilibrium.
Thus far we have discussed only bisymmetric instabilities, but other
low-order instabilities may also be competitive. In fact, we find
some evidence for lop-sidedness, which we describe in the next
subsection.
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{rflctz.ps}
\caption{Comparison of the time evolution of two runs that differ only
in the imposition of reflection symmetry about the midplane. The
solid curves are for a model taken from Fig.~\ref{basichyb} in which
vertical forces are unrestricted while the dashed curves show the
evolution of the same initial model when vertical forces from the disc
are constrained to be symmetric about the mid-plane.}
\label{rflctz}
\end{figure}
\subsection{Bending modes}
The bars in most 3D simulations suffer from buckling instabilities
that, when they saturate, thicken the bar in the vertical direction
\citep[{\it e.g.}][]{CS81,RSJK}. In many, but not all, cases the evolution
of this bending mode is quite violent and weakens the bar
significantly, while the central density of the bar rises, as reported
by Raha {\it et al.}\ The radial rearrangement of mass evidently liberates
the energy needed to puff up the bar in the vertical direction.
The time of saturation of the buckling mode depends on a variety of
factors, such as the formation time of the bar, the initial seed
amplitude of the bending mode, the strength of the bar, {\it etc.}\ Several
of these factors will in turn depend on the already stochastic
formation of the bar. It is hardly surprising therefore, that this
event occurs over a wide range of times and with a wide range of
severity (Fig.~\ref{basichyb}), thereby compounding the overall
level of stochasticity.
The buckling mode can be inhibited by artificially imposing reflection
symmetry about the mid-plane, which causes a substantial change to the
evolution. Fig.~\ref{rflctz} compares the evolution for one case;
the dashed curves show that when buckling is inhibited, the bar
continues to grow in amplitude, while slowing, for a long period. On
the other hand, the amplitude drops quite abruptly when the bar
buckles (solid curves) and the subsequent amplitude and pattern speed
hold approximately steady.
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{nomeq1m.ps}
\caption{Evolution of a set of 16 runs that differ from those shown in
Fig.~\ref{basichyb} only in the suppression of lop-sidedness about the
$z$-axis. }
\label{nomeq1m}
\end{figure}
Not all the bars in the runs illustrated in Fig.~\ref{basichyb}
experience a violent buckling event. In some cases the bar amplitude
does not decrease after the initial peak, while in others the
amplitude drop is more gradual.
Fig.~\ref{nomeq1m} shows the effect of suppressing the $m=1$
sectoral harmonic about the $z$-axis for both the disc and halo
particles. This has the effect of preventing the centres of either
component from leaving the $z$-axis. (Suppressing the $l=1$ component
of the halo force calculation would nail the centre of that component
to the origin, which would prevent the halo from responding properly
to a buckling mode.) With lop-sidedness inhibited in this way, all
bars buckle, and all but one do so violently with a large decrease in
amplitude. This difference in buckling behaviour from that shown for
the same initial models in Fig.~\ref{basichyb} indicates that
buckling is strongly influenced by mild lop-sidedness, which has not
been reported elsewhere, as far as we are aware. We could not find
any evidence for lop-sided instabilities in the runs shown in
Fig.~\ref{basichyb}, and the distance between the centroids of the
halo and disc particles was $\la 0.002R_d$. As it seems unlikely that
such small offsets could have such a large effect on the saturation of
the buckling mode, we think it possible that an anti-symmetric mode
competes. Investigation of this possibility here would be too great a
digression.
Despite the violence of most buckling events, most bars in these
restricted simulations continue to slow after the buckling event and
amplitude growth resumes. The four exceptions are bars that remained
strong right after their formation and did not slow much either before
or after the buckling event.
Results reported in Appendix B show that the buckling behaviour is also
somewhat sensitive to particle softening.
\citet{KVCQ} report that the violence of the buckling event also
depends on the initial thickness of the disc. This is as expected,
since \citet{MS94} showed that buckling is a consequence of a
collective instability that arises in systems in which the velocity
distribution becomes too anisotropic, and thickening the disc reduces
the flattening of the velocity ellipsoid. However, in a separate test
with a set of runs with twice the disc thickness (not shown), we still
find a similar degree of scatter in the late evolution.
\begin{figure}
\begin{center}
\includegraphics[width=.57\hsize,angle=270]{contd.ps}
\end{center}
\caption{The results shown in Fig.~\ref{basichyb}, but with the curves
coloured blue when the torque on the halo is low and red otherwise.
The calculations were continued for models that had not slowed by
$t=1000$ and were stopped either at $t=3000$ or soon after friction
kicked in, which happened in all but two cases.}
\label{contd}
\end{figure}
\subsection{Incidence of Dynamical Friction}
\label{dynfr}
Fig.~\ref{contd} shows that the divergent late-time evolution of the
runs shown in Fig.~\ref{basichyb} is due to differences in the
incidence of dynamical friction. The lines are coloured blue when the
torque acting on the halo satisfies $dL_z/dt < 5 \times
10^{-5}\,GM^2/R_d$, and are red otherwise.
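The colouring criterion can be expressed as a one-line test. The sketch below (the function name and array interface are ours, not from the simulation code) flags the epochs of strong friction in units with $G = M = R_d = 1$:

```python
import numpy as np

def friction_active(dLz_dt, G=1.0, M=1.0, R_d=1.0, threshold=5e-5):
    """Flag times at which the bar-halo torque exceeds the cut
    dL_z/dt >= 5e-5 * G M^2 / R_d used to colour the curves red."""
    return np.asarray(dLz_dt) >= threshold * G * M**2 / R_d

# A torque history sampled at a few times (illustrative values):
torques = np.array([1e-6, 2e-5, 8e-5, 3e-4])
mask = friction_active(torques)   # [False, False, True, True]
```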
The absence of bar friction may have a variety of causes: (a) low halo
density, (b) a weak bar, and (c) metastability caused by local adverse
gradients in the density of halo particles as function of angular
momentum \citep{SD06}. The halo density is just about the same in all
cases, but the bar strength varies widely and it is clear that the
weaker bars experience little friction.
\begin{figure*}
\begin{center}
\includegraphics[width=.53\hsize,angle=270]{cmprsn.ps}
\end{center}
\caption{Comparison of the amplitude evolution of the models shown in
Fig.~\ref{basichyb} (solid lines) with the same sets of particles
processed in reverse order (dashed lines). The evolution of these
two sets of identical runs is measurably different in all cases, and
qualitatively different in some, especially cases 10 \& 15. The dotted
lines in the first 5 panels show the evolutions using {\sc PKDGRAV}\ for the
same files of initial particles.
\label{cmprsn}
\end{figure*}
The third possibility is indicated by the evidence in
Fig.~\ref{contd}, since friction eventually resumes, sometimes after
a very long period during which the bar amplitude does not increase;
the metastable state does not last indefinitely. We argue
\citep{SD06} that the metastable state has a finite lifetime because
weak friction at minor resonances gradually slows the bar until the
more important resonances move out of the region of adverse gradients,
allowing strong friction to resume.
Metastability could be caused by buckling, since bars that are
weakened substantially by a buckling event, such as the case picked
out in Fig.~\ref{rflctz}, generally do not experience much friction at
late times, and their amplitudes stay low. The upward
rise in the bar pattern speed at the time of buckling is shown clearly
by the solid curve in Fig.~\ref{rflctz}, which we \citep{SD06} found
to be a likely cause of metastability. It is reasonable that the
concentration of mass to the centre as the bar buckles should cause an
upward fluctuation in the bar pattern speed (because the orbit periods
must vary inversely as the square root of the mean interior density).
However, buckling does not always lead to a cessation of friction; for
example, many of the bars in the 16 runs with a more dense halo
(Fig.~\ref{newhalo}) clearly buckled, but friction continued in all
cases.
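The parenthetical scaling invoked above -- orbit period inversely proportional to the square root of the mean interior density -- can be checked directly for a spherical mass distribution. A minimal sketch (values and units are illustrative only):

```python
import math

def mean_interior_density(M_enc, r):
    """Mean density inside radius r for enclosed mass M_enc."""
    return 3.0 * M_enc / (4.0 * math.pi * r**3)

def orbit_period(M_enc, r, G=1.0):
    """Circular-orbit period T = 2*pi*sqrt(r^3 / (G*M_enc)),
    equivalently T = sqrt(3*pi / (G*rho_bar))."""
    return 2.0 * math.pi * math.sqrt(r**3 / (G * M_enc))

# Concentrating mass to the centre (raising M_enc at fixed r) raises
# rho_bar and shortens the period, i.e. raises the pattern speed:
T1 = orbit_period(1.0, 1.0)
T2 = orbit_period(2.0, 1.0)   # twice the interior mass: T2 = T1/sqrt(2)
```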
\subsection{True chaos}
Here we show that Miller's (1964) instability can lead to macroscopic
differences in discs. Where the initial evolution is largely
determined by swing-amplification of the spectrum of particle noise
laid down by the random coordinates of the particles, models that
differ by tiny amounts quickly diverge, because each spiral event
depends on the details of those that preceded it. In this way, the
micro-chaos of $N$-body systems grows into macroscopic differences
between discs.
Fig.~\ref{cmprsn} compares the amplitude evolution of each case
shown in Fig.~\ref{basichyb} (solid lines) with another run of the
same case with the order of the particles reversed (dashed). Thus the
initial phase-space coordinates of all particles were identical and
were evolved with the same code on identical processors. The two
simulations in each pair differ only in the order in which arithmetic
operations are performed, which changes the initial accelerations at
the round-off error level only; yet the amplitudes at late times
generally differ visibly and, in some cases, {\it e.g.}\ 10 \& 15, the
evolution differs qualitatively.
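The sensitivity to operation order is simply the non-associativity of floating-point addition: reversing the particle order reverses the order in which force contributions are accumulated, perturbing the accelerations at the round-off level. A minimal illustration (not taken from the simulation code):

```python
# Floating-point addition is not associative, so summing the same
# numbers in a different order can give a different rounded result:
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
assert left != right

# The same happens when accumulating many contributions in single
# precision, as in the grid codes used here:
import numpy as np
rng = np.random.default_rng(1)
contrib = rng.standard_normal(10_000).astype(np.float32)
fwd = np.float32(0.0)
for x in contrib:
    fwd = np.float32(fwd + x)
rev = np.float32(0.0)
for x in contrib[::-1]:
    rev = np.float32(rev + x)
# fwd and rev typically agree only to single-precision round-off
```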
So far, every calculation with grid codes that we have reported here
was conducted using single precision arithmetic for most operations.
We have checked that increased precision has no effect on the range
of behaviour shown in Fig.~\ref{basichyb}, and results differ only
slightly, as we now show for one case.
\begin{figure}
\includegraphics[width=.57\hsize,angle=270]{ptest.ps}
\centerline{\includegraphics[width=.9\hsize]{lyapunov.ps}}
\caption{The upper panels compare the evolution of four cases that
started from the identical file of particle coordinates, with all
numerical parameters held fixed, except that solid lines are for
calculations in single precision, dotted lines are for the identical
calculations in double precision. As for Fig.~\ref{cmprsn}, the order
of the particles was reversed in one of each pair. The lower panel
shows the time evolution of the quantity $d$ defined in
eq.~(\ref{diff}) for both pairs of runs.}
\label{lyapunov}
\end{figure}
Fig.~\ref{lyapunov} shows that the system remains chaotic when we
repeat the calculations using double precision arithmetic (dotted
lines). The higher precision calculations begin to diverge visibly at
about the same times as in the single precision cases, and the
subsequent differences are comparable. In order to monitor the
divergence in these cases, we compute the value over time of the
difference
\begin{equation}
d = \left\{ \left[ \Re(A_{2,a} - A_{2,b}) \right]^2 + \left[ \Im(A_{2,a} - A_{2,b}) \right]^2 \right\}^{1/2}
\label{diff}
\end{equation}
between the bar coefficients (eq.~\ref{fcoeff}) in these pairs of
experiments ($a \; \& \; b$) in which the order of the particles was
reversed. The solid (dotted) line in the lower panel of
Fig.~\ref{lyapunov} shows the result for the single (double)
precision pair. By $t \sim 300$ the models differ quite visibly in
the amplitude and phase of the bar, which accounts for the fact that
$d$ saturates at a lasting value once the phases of the two bars
differ.
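Since the bar coefficient $A_2$ is complex, eq.~(\ref{diff}) is just the modulus of the difference, $d = |A_{2,a} - A_{2,b}|$; for two bars of equal amplitude $|A_2|$ whose phases differ by $\Delta\phi$, $d = 2|A_2|\sin(\Delta\phi/2)$, which is why $d$ levels off once the phases decorrelate. A sketch of the computation (variable names are ours):

```python
import numpy as np

def bar_divergence(A2_a, A2_b):
    """d = sqrt(Re(A2_a - A2_b)^2 + Im(A2_a - A2_b)^2) = |A2_a - A2_b|."""
    return np.abs(np.asarray(A2_a) - np.asarray(A2_b))

# Two bars of equal strength 0.3 whose phases differ by 0.2 rad:
A2_a = 0.3 * np.exp(1j * 0.5)
A2_b = 0.3 * np.exp(1j * 0.7)
d = bar_divergence(A2_a, A2_b)   # equals 2 * 0.3 * sin(0.1)
```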
The difference, $d$, in double precision grows quasi-exponentially
over time at first, which is symptomatic of chaos, with a Lyapunov
($e$-folding) time of $\sim 4.75$ dynamical times, {\it i.e.}, less than 25\%
of the orbit period ($\sim 20$ dynamical times) at $R = 2.5R_d$.
Using this estimate of the Lyapunov time, the difference in the double
precision case should equal the initial difference in the single
precision case after $\approx 93$ dynamical times, and the early
evolution of $d$ in the lower precision case is roughly similar to
that in the double precision case with a time offset of this
magnitude. Even though there is a much smaller initial difference
between the two double-precision models, the seed amplitude of the
instabilities is set by the shot noise, which is the same in all 4
runs. Thus the non-axisymmetric structures are almost fully developed
in the double precision models by the time the dotted curve reaches
the level of the start of the solid line; therefore one cannot expect
the curves to overlay perfectly.
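The $\approx 93$ dynamical-time offset follows from the measured $e$-folding time and the ratio of the single- to double-precision seed differences. A back-of-the-envelope check (using the IEEE machine epsilons as a proxy for that ratio, which is our assumption; the quoted offset implies a ratio near $10^{8.5}$):

```python
import math

t_lyap = 4.75            # measured e-folding time (dynamical times)
eps32 = 2.0**-24         # unit round-off, IEEE single precision
eps64 = 2.0**-53         # unit round-off, IEEE double precision

# Time for the double-precision seed difference to grow to the
# single-precision one, assuming pure exponential divergence:
offset = t_lyap * math.log(eps32 / eps64)   # about 95.5 dynamical times
# close to the ~93 quoted in the text; the exact value depends on the
# actual ratio of initial differences, not just the machine epsilons
```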
It is curious that the difference in the double precision case
``catches up'' with that in the single precision case. The shoulder
in $\log_{10}d$ that appears in both precisions at about $t=300$ seems
to be responsible for this convergence, which occurs both at such a
large value of $d$ as to be well past where exponential divergence
could be expected to hold, and at a time when the bar in all four runs
is fully developed.
A perfect collisionless particle system should be exactly time
reversible; that is, if the velocities of all the particles were
reversed at some instant, the system should retrace its evolution.
Fig.~\ref{revers} shows that reversed simulations do retrace their
evolution for a short while, between 60 and 80 dynamical times, after
which the evolution of the reversed model visibly departs from the
corresponding reflection of the forward evolution. This period of
successful reversibility is consistent with our Lyapunov divergence
estimate: 15 Lyapunov times ($=71.25$ dynamical times) corresponds to
a divergence of $\sim 10^{6.5}$, which is sufficient to alter almost
every significant digit in these single precision calculations and
lead to reversed evolution that becomes largely independent of that in
the forward direction. Further analysis of these simulations revealed
that the first signs of irreversibility appeared as differences in the
leading spiral Fourier components, suggesting that vigorous
swing-amplification of particle noise is primarily responsible for the
short Lyapunov time.
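Time reversibility can be demonstrated with any time-symmetric integrator. The single-particle toy below (not the simulation code) uses kick-drift-kick leapfrog on a harmonic oscillator: reversing the velocity and integrating the same number of steps recovers the initial state to round-off, just as the reversed runs in Fig.~\ref{revers} retrace their evolution until the exponential growth of round-off differences ($e^{15} \approx 10^{6.5}$ after 15 $e$-folding times) takes over.

```python
def leapfrog(x, v, accel, dt, nsteps):
    """Kick-drift-kick leapfrog: a time-symmetric integrator."""
    for _ in range(nsteps):
        v += 0.5 * dt * accel(x)
        x += dt * v
        v += 0.5 * dt * accel(x)
    return x, v

accel = lambda x: -x          # harmonic oscillator, a = -x
x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, accel, dt=0.01, nsteps=2000)
# Reverse the velocity and integrate the same number of steps:
x2, v2 = leapfrog(x1, -v1, accel, dt=0.01, nsteps=2000)
# (x2, -v2) matches (x0, v0) to floating-point round-off
```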
\begin{figure}
\includegraphics[width=\hsize,angle=0]{revers.ps}
\caption{The magenta line shows the unsmoothed bar amplitude evolution
of one model run to $t=200$. The other lines show the continued
evolution of the same model with the velocities of all the particles
reversed at $t=50$ (cyan), $t=100$ (blue), $t=150$ (green), and
$t=200$ (red). In all four cases, the evolution immediately after the
reversal faithfully retraces the forward evolution for a period less
than 100 time units. After this time the evolution departs noticeably
from a mirror image of the forward evolution about the moment of
reversal.}
\label{revers}
\end{figure}
We conclude from these tests that the $N$-body system we are trying to
simulate is indeed chaotic. Further, the effects of chaos are not
significantly worsened by round-off error in single precision; we
have also verified that the full divergence of the results in
Fig.~\ref{basichyb} persists in double precision. In fact, the
first author has frequently checked, and always confirmed, that no
advantage results from use of higher precision arithmetic when
computing the evolution of collisionless $N$-body systems. This
conclusion is in sharp contrast with the requirements for collisional
systems \citep[{\it e.g.}][]{Aars08}.
In none of the simulations with grid codes reported in this paper did
we distribute the computation over multiple parallel processors, even
though the code has been well optimized for parallel use. We adopted
this strategy in order to avoid the additional randomness that is
inevitable when results from multiple processors are combined in an
unpredictable order.
The dotted curves in the first five panels of Fig.~\ref{cmprsn} show
the result using the tree code {\sc PKDGRAV}\ for the same initial coordinates
in each case, which are reproduced from Fig.~\ref{pkdgrav}.
Although the ranges and distributions of measured bar properties shown
in Figs.~\ref{basichyb} \& \ref{pkdgrav} are similar, the results do
not compare in detail, as noted above. Results from the two different
codes diverge strongly in all but one case, reinforcing the conclusion
of intrinsic stochasticity. Which of the two possible evolutionary
paths is taken is affected no more, and no less, by code differences
than by the choice of random seed.
\section{Discussion}
\label{discussion}
\subsection{Is there a right result?}
One of the most troubling aspects of the diverging evolution in
Figs.~\ref{basichyb} \& \ref{pkdgrav} is that one cannot decide
which of the two patterns of behaviour is ``correct,'' or indeed
whether there could be a unique evolutionary path with a perfect code
and infinite numbers of particles.
Since these models have high density centres (Fig.~\ref{rchern}),
linear stability analysis would most likely reveal that all global
modes, with the possible exception of edge modes \citep{Toom81}, have
very low growth rates, and therefore the disc ought to be stable and
not form a bar. If this is indeed what linear theory would predict,
then the ``right result'' with a perfect code and infinite numbers of
particles would be a stable model that does not form a bar. This
outcome never occurred in the $> 400$ simulations we report here,
even in cases with one hundred times our standard number of disc
particles (Fig.~\ref{p2dNvary}).
The level of shot noise in a simulation with $\ga 1$~million particles
is clearly $\sim 100$ times higher than would be present in a real
galaxy if the $\sim 10^{10}$ stars were randomly distributed. But the
mass in real galaxy discs is clumpier because of the existence of star
clusters and giant gas clouds, which raises the amplitude of random
potential fluctuations -- although the density fluctuation spectrum
may not be the same as that of shot noise in the simulations.
Nevertheless, it seems most unlikely that a real galaxy closely
resembling the model used in our simulations could avoid being barred.
\subsection{Dynamical Friction}
The greatest source of divergence is the bimodal nature of dynamical
friction, which is avoided for a long time in some cases, but kicks in
immediately in others, causing the bar to slow and increase in
strength by a substantial factor. It is likely that friction is
avoided because the needed gradient in the halo {\sc DF}\ as a function of
angular momentum has been flattened by the earlier evolution of the
model, as reported by \citet{SD06}. The fact that this happens here
more frequently than we found with the model created by \citet{VK03}
may have two causes: their model had both a less dominant disc and an
initial halo with significant departures from equilibrium.
In Section~\ref{pselect}, we reported a weak trend towards a larger
fraction of non-slowing bars as we took greater care over the initial
selection of particles; further, the largest fraction (10/16) occurs
in the test with four times the number of halo particles reported in
Appendix B. This weak trend suggests that the metastable state is
reached more readily as the quality of the simulation is improved.
However, \citet{SD06} found that the metastable state, in which the
bar did not slow, was not indefinite and friction eventually resumed,
as we also find here (Fig.~\ref{contd}). Furthermore, they found
the metastable state to be fragile, and friction would resume soon
after a tiny perturbation, such as the distant passage of a small
satellite galaxy. Thus, even though the metastable state is reached
more frequently in higher quality calculations, it is unlikely it
could be sustained in real galaxies. We conclude therefore that the
strongly braked and growing bar is the most ``realistic'' outcome from
these simulations.
\subsection{Introducing a seed disturbance}
\citet{HBWK} attempted to make the outcome more predictable by seeding
the bar instability by an externally applied transient squeeze. We
argue here that this approach is not the panacea it may seem.
In the case of discs having well-defined global instabilities, noisy
starts already seed the dominant unstable modes at high amplitude
\citep[Section~\ref{multimodes};][]{Sell83}. If a seed disturbance is to
prevail, it must be imposed at such high amplitude as to be
practically non-linear at the outset. Furthermore, the objective must
be to favour the dominant mode over the others, which cannot be
achieved by a simple perturbation. Instead, one must impose both the
detailed radial shape and perturbed velocities of the mode, which are
generally not known. A more generic disturbance, such as a
``squeeze'', will simply raise the amplitude of all the modes and
transients, giving {\it less\/} time for the dominant mode to outgrow
the others. Quiet starts \citep[Section~\ref{qstart};][]{Sell83,SA86},
however, have the effect of reducing the initial amplitudes of all
non-axisymmetric disturbances to such an extent that there is ample
time for the most rapidly growing mode to prevail. Thus the outcome
of a quiet start experiment is tolerably reproducible without the need
to apply an additional seed (Fig.~\ref{isoc8dfdetq}).
The situation is far more difficult in the case, as in the present
study, where the disc has no prevailing global instabilities, since
the evolution of a simulation is dominated by swing-amplified shot
noise. Quiet starts are also all but useless in these circumstances,
since they break up rapidly as the tiny seed noise is swing-amplified,
with outcomes similar to, though slightly delayed relative to, those
from noisy starts. Increasing the particle number does not reduce
variations in the bar amplitude at later times (they actually
increased in Fig.~\ref{p2dNvary}), but does delay bar formation.
Because of this delay, a suitable seed disturbance in a very large $N$
disc might prevail over the amplified shot noise and lead to a more
reproducible outcome. We have not explored this idea here and leave
it for a future study.
\section{Conclusions}
We have shown that simulations over a fixed evolutionary period of
a simple disc-halo galaxy model can vary widely between cases that
differ only in the random seed used to generate the particles, even
though they are drawn from identical distributions.
Fig.~\ref{basichyb} shows that the late-time amplitude of the bar
can differ by a factor of three or more while the stronger bars may
have half the pattern speed of the weaker ones. Fig.~\ref{contd}
shows that the largest differences are only temporary, however. We
have deliberately focused our study on a case that displays this
extreme behaviour. Stochastic variations are inevitable, but the
evolution is generally less divergent, {\it e.g.}\ in haloes of either
higher or lower density (Fig.~\ref{threehalos}).
We have shown that the divergent outcomes do not result from a
numerical artefact, since they are independent of numerical parameters
(Appendix B). Also, similar behaviour occurs with a code of a totally
different type ({\sc PKDGRAV}, see Fig.~\ref{pkdgrav}). Instead, this extreme
stochasticity results from a number of physical causes that we have
identified and illustrated. The most important for our model are:
swing-amplified particle noise, the variations in the incidence and
severity of buckling, and the incidence of dynamical friction. We
have separately shown (Fig.~\ref{isoc8dfdetn}) that other disc
models having a well-defined spectrum of global modes can have a range
of outcomes because of the coexistence of competing instabilities.
The calculations in Fig.~\ref{basichyb} are of models that were set
up with considerable care so as to be as close as possible to
equilibrium. An additional level of unpredictability can result from
less careful set-up procedures, as illustrated in Appendix C.
We have been aware for many years that simulations including disc
components can be reproduced exactly only if the arithmetic operations
are performed in the same order to the same precision, and that
differences at the round-off error level can lead to visibly different
evolution. However, we have been surprised by the strongly divergent
behaviour of the particular model studied here. The pairs of
divergent results in Fig.~\ref{cmprsn} are the stellar dynamical
equivalents of the possible macroscopic atmospheric consequences of
Lorenz's butterfly flapping its wings. Because the system is chaotic,
improved precision arithmetic is of no help in reducing the scatter in
the outcomes.
The divergence in different realizations of our standard case arises
from a temporary delay in the incidence of dynamical friction, which
is determined by minor details of the early evolution. Strong
friction causes the bar to both slow and grow; in some cases this
occurs right after bar formation, but in others the bar rotates
steadily at an almost constant amplitude for a protracted period.
Friction is avoided when the earlier evolution causes an inflexion in
the angular momentum density gradient of the halo. We \citep{SD06}
previously described this as a metastable state because it did not
last indefinitely even when the evolution was unperturbed, and we also
showed that mild perturbations could cause friction to resume. We
find that the fraction of initially non-slowing bars increases as
greater care is taken over the initial set up because the smaller
fluctuations in such models are less likely to nudge the model out of
the metastable state.
We argue in Section~\ref{discussion} that the most realistic outcome
of these experiments is the slowing and growing bar, despite the fact
that we find the delayed-friction result increasingly often as we
improve the quality of the initial set-up and of the simulation.
Since most real galaxies are likely to be subjected to frequent mild
perturbations, the slowing and growing bar remains the more realistic
outcome.
Since the possible evolution of the simulation is not unique, multiple
experiments of essentially the same model are needed in order to
demonstrate that the behaviour is robust. Furthermore, the failure of
an experiment by one group to reproduce the results of a similar
experiment by another may not be the result of errors or artefacts in
either or both codes, but rather a reflection of a fundamental
stochasticity of the system under study.
\citet{KVCQ} report a similar, but less extensive, comparison
between two tree codes and an adaptive mesh method, and conclude that
all the codes produce ``nearly the same'' results in simulations
performed with sufficient numerical care. However, inspection of the
comparatively short evolution shown in their Fig.~8 reveals slowly
diverging outcomes, even between two simulations run with tree codes.
They also report (their Fig.~1) a strongly divergent result when the
time step was varied; the sharp decrease in bar strength in this one
case was clearly a consequence of a more violent buckling event than
in their comparison cases. Such a difference could have easily arisen
from stochastic variations of the kind discussed here, and the
conclusion that the shorter time step is required no longer follows.
We show here (Appendix B), as do \citet{DBS9}, that results are robust
to wide variations in time step. Clearly when stochasticity can lead
to sharply divergent results, parameter tests that throw up surprises
are conclusive only after ensembles of particle realizations have been
simulated. This must also be a requirement for meaningful comparisons
between codes or workers.
Since the principal sources of stochasticity are connected to disc
dynamics, they are unrelated to the halo particle number question
raised by \citet{WK07}. Not only has \citet{Sell08} already shown
that friction can be captured adequately with moderate particle
numbers, but we have found here that the expected bar friction arises
more readily in haloes with fewer, or equal-mass, particles, or in
haloes that are not set up with great care -- which is not the
expected behaviour were particle scattering dominant. Instead, small
departures from equilibrium can upset the delicate metastable state in
which bars can rotate without friction \citep{SD06}.
It should be noted that bars that slow through dynamical friction also
grow in length, as reported earlier by \citet{Atha02}. Nevertheless,
for these models the ratio of corotation radius to bar semi-axis
${\cal R} > 1.4$, as expected for a moderate-density halo \citep{DS00}.
Those bars that avoid friction for a long period, however, have ${\cal
R} < 1.4$, as also found by \citet{VK03}, but this metastable state is
fragile and unlikely to arise in real galaxies \citep{SD06}.
Since all $N$-body simulations are intrinsically chaotic, they can be
reproduced exactly only if the same arithmetic operations are
performed in the same order with the same precision, as noted in the
introduction, and borne out in Fig.~\ref{cmprsn}. These
requirements dictate the use of the same code, compiler, operating
system, and hardware. Further, if the calculation is stopped and then
resumed, it is important to save sufficient information so that the
acceleration used to advance each particle at the next step is
identical, to machine precision, to that it would have been had the
calculation not been interrupted. This can be arranged without too
much difficulty, if the calculation is run on a single processor.
However, simulations that distribute work over parallel processors in
computer clusters would be exactly reproducible only if care is taken
to ensure that the work is distributed and the results are combined in
a fully predictable manner.
Provided the divergence is slight, exact reproducibility is of little
scientific interest, although such a capability is useful to the
practitioner. But when, as described here, the model under test can
have strongly divergent behaviour that arises from differences that
begin at the round off level with the same code on the same machine,
comparison of results between different codes and on different
platforms becomes much less likely to produce agreement, even when the
simulations share the same file of initial coordinates. It is ironic
that the model used here was in fact the one selected as a test case
for code comparison; fortunately, the authors discovered its
unsuitability in time!
\section*{Acknowledgments}
We thank Scott Tremaine, Tom Quinn, and the referee, Martin Weinberg,
for helpful comments on the manuscript and Juntai Shen for
discussions. This work was supported by grants to JAS from the NSF
(AST-0507323) and from NASA (NNG05GC29G) and by a Livesey Grant from
the University of Central Lancashire to VPD. The {\sc PKDGRAV}\ simulations
were performed at the Arctic Region Supercomputing Center (ARSC).